Dataset schema:

| column | dtype | min | max | nullable |
|---|---|---|---|---|
| hexsha | stringlengths | 40 | 40 | no |
| size | int64 | 6 | 14.9M | no |
| ext | stringclasses (1 value) | n/a | n/a | no |
| lang | stringclasses (1 value) | n/a | n/a | no |
| max_stars_repo_path | stringlengths | 6 | 260 | no |
| max_stars_repo_name | stringlengths | 6 | 119 | no |
| max_stars_repo_head_hexsha | stringlengths | 40 | 41 | no |
| max_stars_repo_licenses | sequence | n/a | n/a | no |
| max_stars_count | int64 | 1 | 191k | yes |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 | 24 | yes |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 | 24 | yes |
| max_issues_repo_path | stringlengths | 6 | 260 | no |
| max_issues_repo_name | stringlengths | 6 | 119 | no |
| max_issues_repo_head_hexsha | stringlengths | 40 | 41 | no |
| max_issues_repo_licenses | sequence | n/a | n/a | no |
| max_issues_count | int64 | 1 | 67k | yes |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 | 24 | yes |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 | 24 | yes |
| max_forks_repo_path | stringlengths | 6 | 260 | no |
| max_forks_repo_name | stringlengths | 6 | 119 | no |
| max_forks_repo_head_hexsha | stringlengths | 40 | 41 | no |
| max_forks_repo_licenses | sequence | n/a | n/a | no |
| max_forks_count | int64 | 1 | 105k | yes |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 | 24 | yes |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 | 24 | yes |
| avg_line_length | float64 | 2 | 1.04M | no |
| max_line_length | int64 | 2 | 11.2M | no |
| alphanum_fraction | float64 | 0 | 1 | no |
| cells | sequence | n/a | n/a | no |
| cell_types | sequence | n/a | n/a | no |
| cell_type_groups | sequence | n/a | n/a | no |
ecdf561c72349ba335215c52f9b40b14b4d4370d
100,652
ipynb
Jupyter Notebook
demos/Using a delta learning rule to learn embeddings.ipynb
bmcmenamin/hebbnets
e10cc15590e146fc4f42132b0d454b760c23b7e7
[ "MIT" ]
null
null
null
demos/Using a delta learning rule to learn embeddings.ipynb
bmcmenamin/hebbnets
e10cc15590e146fc4f42132b0d454b760c23b7e7
[ "MIT" ]
null
null
null
demos/Using a delta learning rule to learn embeddings.ipynb
bmcmenamin/hebbnets
e10cc15590e146fc4f42132b0d454b760c23b7e7
[ "MIT" ]
null
null
null
105.505241
62,219
0.778156
[ [ [ "import os\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn.datasets import load_iris\n\nfrom hebbnets.networks import MultilayerDahEmbedding, MultilayerHahNetwork", "_____no_output_____" ], [ "data_X, data_Y = load_iris(return_X_y=True)\n\n# Scale X\ndata_X -= np.mean(data_X, axis=0, keepdims=True)\ndata_X /= np.std(data_X, axis=0, keepdims=True)", "_____no_output_____" ], [ "input_layer_size = data_X.shape[1]\nnodes_per_layer = [2]\n\nhah_network = MultilayerHahNetwork(\n input_layer_size,\n nodes_per_layer,\n has_bias=False,\n act_type='linear',\n)\n\ndah_network = MultilayerDahEmbedding(\n input_layer_size,\n nodes_per_layer,\n has_bias=False,\n act_type='linear',\n)", "_____no_output_____" ], [ "hah_network.train(\n data_X,\n num_epochs=25\n)", "_____no_output_____" ], [ "dah_network.train(\n list(zip(data_X, data_Y)),\n num_epochs=25\n)", "_____no_output_____" ] ], [ [ "## Plotting PCA vs Clustered scatter", "_____no_output_____" ] ], [ [ "%matplotlib nbagg\n\ndef get_coords(model, input_data):\n coords = []\n for data in input_data:\n model.propogate_input(data)\n coords.append(model.layers[-1].activation)\n return list(zip(*coords))\n\ncolor_dict = {0:'r', 1:'b', 2:'g'}\ncoord_colors = [color_dict[i] for i in data_Y]\n\nfig, ax = plt.subplots(nrows=2, ncols=1)\nax[0].scatter(\n *get_coords(hah_network, data_X),\n s=5, alpha=0.7,\n c=coord_colors\n)\nax[0].set_title('PCA via Hebbian/Anti-Hebbian')\n\nax[1].scatter(\n *get_coords(dah_network, data_X),\n s=5, alpha=0.7,\n c=coord_colors\n)\nax[1].set_title('Clustering via Delta/Anti-Hebbian')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
ecdf71a1a7df7e991b95f4105e350e4f2ac56015
46,549
ipynb
Jupyter Notebook
1 Big data essentials/Lectures/Week 6/1 Working with samples/Working-with-samples.ipynb
AAbdul12/Big-Data-Engineering-Coursera-Yandex
71702958f420d379fd18fe7e63abfa50d6fac8e6
[ "MIT" ]
92
2018-06-07T13:50:28.000Z
2022-03-14T19:41:37.000Z
1 Big data essentials/Lectures/Week 6/1 Working with samples/Working-with-samples.ipynb
AAbdul12/Big-Data-Engineering-Coursera-Yandex
71702958f420d379fd18fe7e63abfa50d6fac8e6
[ "MIT" ]
3
2018-10-31T13:29:49.000Z
2020-08-30T15:58:25.000Z
1 Big data essentials/Lectures/Week 6/1 Working with samples/Working-with-samples.ipynb
AAbdul12/Big-Data-Engineering-Coursera-Yandex
71702958f420d379fd18fe7e63abfa50d6fac8e6
[ "MIT" ]
82
2018-04-26T16:47:54.000Z
2022-01-11T14:17:11.000Z
53.566168
10,738
0.729661
[ [ [ "import pandas as pd\nimport numpy as np\n\n%pylab inline", "Populating the interactive namespace from numpy and matplotlib\n" ] ], [ [ "How many taxi trips are reported in the original file?", "_____no_output_____" ] ], [ [ "%%sh\nwc -l yellow_tripdata_2016-12.csv ", " 10449409 yellow_tripdata_2016-12.csv\n" ] ], [ [ "Getting random samples:", "_____no_output_____" ] ], [ [ "%%sh\nhead -n 1 yellow_tripdata_2016-12.csv > sample100.csv\ntail -n +2 yellow_tripdata_2016-12.csv | gshuf -n 100 | sed 's/,,//g' >> sample100.csv", "_____no_output_____" ], [ "sample100 = pd.read_csv('sample100.csv', parse_dates=[1,2])\nsample100.head()", "_____no_output_____" ] ], [ [ "## Estimating the proportion of tippers", "_____no_output_____" ] ], [ [ "is_tipped = sample100.tip_amount>0\nis_tipped.mean()", "_____no_output_____" ] ], [ [ "Standard deviation:", "_____no_output_____" ] ], [ [ "ph = is_tipped.mean()\ns = np.sqrt(ph * (1-ph) / len(is_tipped))\ns", "_____no_output_____" ] ], [ [ "95% confidence interval:", "_____no_output_____" ] ], [ [ "from statsmodels.stats.proportion import proportion_confint\nproportion_confint(sum(is_tipped), len(is_tipped), alpha=0.05)", "_____no_output_____" ] ], [ [ "Pretty wide! How big do we need a sample for the 95% confidence interval to be approximately 2% wide?", "_____no_output_____" ] ], [ [ "from statsmodels.stats.proportion import samplesize_confint_proportion\nint(np.ceil(samplesize_confint_proportion(ph, 0.01)))", "_____no_output_____" ] ], [ [ "Let's take a bigger sample:", "_____no_output_____" ] ], [ [ "%%sh\nhead -n 1 yellow_tripdata_2016-12.csv > sample10000.csv\ntail -n +2 yellow_tripdata_2016-12.csv | gshuf -n 10000 | sed 's/,,//g' >> sample10000.csv", "_____no_output_____" ], [ "sample10000 = pd.read_csv('sample10000.csv', parse_dates=[1,2])\nis_tipped = sample10000.tip_amount>0\nis_tipped.mean()", "_____no_output_____" ], [ "ph = is_tipped.mean()\ns = np.sqrt(ph * (1-ph) / len(is_tipped))\ns", "_____no_output_____" ], [ "proportion_confint(sum(is_tipped), len(is_tipped), alpha=0.05)", "_____no_output_____" ] ], [ [ "It is indeed about 2% wide. 
", "_____no_output_____" ], [ "## Estimating the average trip duration", "_____no_output_____" ] ], [ [ "sample100['duration'] = [x.total_seconds() / 60 for x in sample100.tpep_dropoff_datetime - sample100.tpep_pickup_datetime]", "_____no_output_____" ], [ "sample100['duration'].mean() ", "_____no_output_____" ], [ "s = sample100['duration'].std(ddof=1) / np.sqrt(len(sample100['duration']))\ns", "_____no_output_____" ], [ "from statsmodels.stats.weightstats import _tconfint_generic\n_tconfint_generic(sample100['duration'].mean(), \n s, \n len(sample100['duration']) - 1, \n 0.05, 'two-sided')", "_____no_output_____" ], [ "sample100.hist(column = 'duration', bins=30, normed=True)", "_____no_output_____" ], [ "sample10000['duration'] = [x.total_seconds() / 60 for x in sample10000.tpep_dropoff_datetime - sample10000.tpep_pickup_datetime]", "_____no_output_____" ], [ "sample10000['duration'].mean()", "_____no_output_____" ], [ "s = sample10000['duration'].std(ddof=1) / np.sqrt(len(sample10000['duration']))\ns", "_____no_output_____" ], [ "_tconfint_generic(sample10000['duration'].mean(), \n s, \n len(sample10000['duration']) - 1, \n 0.05, 'two-sided')", "_____no_output_____" ], [ "sample10000.hist(column = 'duration', bins=30, normed=True)", "_____no_output_____" ], [ "tmp = sample10000['duration'] < 120\ntmp.value_counts()", "_____no_output_____" ], [ "sample10000[sample10000['duration'] < 120].hist(column = 'duration', bins=30, normed=True)\nsample10000['duration'][sample10000['duration'] < 120].mean()", "_____no_output_____" ], [ "sample100['duration'].median()", "_____no_output_____" ], [ "sample10000['duration'].median()", "_____no_output_____" ], [ "sample10000['duration'][sample10000['duration'] < 120].median()", "_____no_output_____" ] ], [ [ "Bootstrap:", "_____no_output_____" ] ], [ [ "def get_bootstrap_samples(data, n_samples):\n indices = np.random.randint(0, len(data), (n_samples, len(data)))\n samples = data[indices]\n return samples\n \ndef stat_intervals(stat, alpha):\n boundaries = np.percentile(stat, [100 * alpha / 2., 100 * (1 - alpha / 2.)])\n return boundaries", "_____no_output_____" ], [ "median_duration = list(map(np.median, get_bootstrap_samples(sample100['duration'].values, 1000)))\nstat_intervals(median_duration, 0.05)", "_____no_output_____" ], [ "median_duration = list(map(np.median, get_bootstrap_samples(sample10000['duration'].values, 1000)))\nstat_intervals(median_duration, 0.05)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
ecdf9479cb1225968c5c6ab5601ed2370b21d2ea
83,093
ipynb
Jupyter Notebook
_notebooks/2020-11-06-R-basic-concept-and-code.ipynb
tfedohk/dohk
81d07978a408459289eb127cd863ec2480de7121
[ "Apache-2.0" ]
null
null
null
_notebooks/2020-11-06-R-basic-concept-and-code.ipynb
tfedohk/dohk
81d07978a408459289eb127cd863ec2480de7121
[ "Apache-2.0" ]
3
2021-05-20T21:32:30.000Z
2022-02-26T09:56:11.000Z
_notebooks/2020-11-06-R-basic-concept-and-code.ipynb
tfedohk/dohk
81d07978a408459289eb127cd863ec2480de7121
[ "Apache-2.0" ]
null
null
null
26.031642
281
0.421865
[ [ [ "# R ๊ธฐ์ดˆ\n\n- author: \"Kwon DoHyung\"\n- toc: true \n- comments: true\n- categories: [CSE, R]\n- image: images/2020-11-06-r-1/Untitled5.png\n- permalink: /r-basic-concept-and-code/", "_____no_output_____" ], [ "# R ์†Œ๊ฐœ\n- R์€ ํ†ต๊ณ„๋‚˜ ๋ฐ์ดํ„ฐ ๋ถ„์„์— ๊ต‰์žฅํžˆ ์ข‹์€ ํ”„๋กœ๊ทธ๋ž˜๋ฐ ์–ธ์–ด\n- ๋‹ค๋ฅธ ์œ ์‚ฌํ•œ ์–ธ์–ด๋กœ๋Š” SPSS๋‚˜ Python์ด ์žˆ๋‹ค.\n - SPSS๋Š” ์ฃผ๋กœ ๋Œ€๊ธฐ์—…์—์„œ ์‚ฌ์šฉ\n - Python์€ ๋จธ์‹ ๋Ÿฌ๋‹, ๋”ฅ๋Ÿฌ๋‹ ๋“ฑ์˜ ๋ฐ์ดํ„ฐ ๋ถ„์„, ์ธ๊ณต์ง€๋Šฅ์— ํŠนํ™”๋œ ์–ธ์–ด\n- R ์ฝ”๋“œ๋Š” '์Šคํฌ๋ฆฝํŠธ'๋ผ๊ณ  ๋ถ€๋ฅธ๋‹ค. ์ด๋Š” R, Python ๋“ฑ์˜ ์–ธ์–ด๊ฐ€ ์Šคํฌ๋ฆฝํŠธ ์–ธ์–ด์ด๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค.", "_____no_output_____" ], [ "# SW ๋‹ค์šด๋กœ๋“œ\n- R: [https://cran.r-project.org/bin/windows/base/](https://cran.r-project.org/bin/windows/base/)\n- R studio: [https://rstudio.com/products/rstudio/download/](https://rstudio.com/products/rstudio/download/)\n- Rtools: [https://cran.r-project.org/bin/windows/Rtools/](https://cran.r-project.org/bin/windows/Rtools/)\n - ์œˆ๋„์šฐ ์‚ฌ์šฉ์ž๋งŒ Rtools ์„ค์น˜ ํ•„์š”. R์€ ๋ฆฌ๋ˆ…์Šค์—์„œ ์ฒ˜์Œ ๊ฐœ๋ฐœ๋œ ํ”„๋กœ๊ทธ๋žจ์ด๋ฏ€๋กœ ์œˆ๋„์šฐ๋Š” ์ถ”๊ฐ€ ์„ค์น˜๊ฐ€ ํ•„์š”ํ•˜๋‹ค.\n - recommended๋ฅผ ๋‹ค์šด๋กœ๋“œ ํ•œ๋‹ค.\n\n# ํ”„๋กœ๊ทธ๋žจ ์„ค์น˜ ์‹œ ์ฃผ์˜ ์‚ฌํ•ญ\n\n![](../images/2021-01-25-r-3/Untitled.png)\n์‚ฌ์šฉ์ž๋ช…์ด ๋˜๋„๋ก ํ•œ๊ธ€์ด ์•„๋‹ˆ์–ด์•ผ ํ•œ๋‹ค.\n\n![](../images/2021-01-25-r-3/Untitled2.png)\nํ™˜๊ฒฝ๋ณ€์ˆ˜ ์„ค์ • ์‹œ, PATH ์„ค์ •์— ์ฒดํฌ ํ‘œ์‹œ๋ฅผ ํ•ด์•ผ ํ•œ๋‹ค.\n\n# R studio IDE\n\n![](../images/2020-11-06-r-1/Untitled.png)\n![](../images/2020-11-06-r-1/Untitled1.png)\n- ํ•œ ๊ฐœ์˜ ๋ผ์ธ ์‹คํ–‰ ์‹œ, ํ•ด๋‹น ๋ผ์ธ์„ ๋ธ”๋ก ์ง€์ •ํ•˜์—ฌ Ctrl+Enter\n- ์—ฌ๋Ÿฌ ๋ผ์ธ ์‹คํ–‰ ์‹œ, ๋ธ”๋ก ์ง€์ • ํ›„ Ctrl+Enter\n- R studio๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด R ์Šคํฌ๋ฆฝํŠธ๋ฅผ ๋ณ„๋„๋กœ ์ €์žฅํ•˜์—ฌ, ๋‚˜์ค‘์— ์žฌ์‚ฌ์šฉ์ด ๊ฐ€๋Šฅํ•˜๋‹ค.\n- ์ฝ˜์†”: R ์–ธ์–ด๊ฐ€ ์ง์ ‘ ์ž…๋ ฅ์„ ๋ฐ›๊ณ  ์ถœ๋ ฅ์„ ์ฃผ๋Š” ์˜์—ญ\n- History: ์ฝ˜์†”์— ์ž…๋ ฅํ–ˆ๋˜ ๋ช…๋ น์–ด๋“ค์ด ์ €์žฅ๋จ\n- Packages: ์ธ์Šคํ†จ ๋ฐ ์—…๋ฐ์ดํŠธ ๋ฐ ์‚ญ์ œ\n- Help: ํ•จ์ˆ˜์˜ ์„ค๋ช…์„œ ์ œ๊ณต\n\n## R Studio ์„ค์ •\n\n- Tools โ†’ Global options โ†’ Basic โ†’ Restore .RData into workspace at startup ์ฒดํฌ ํ•ด์ œ\n - ๋ฐ์ดํ„ฐ๋ฅผ ๋ฏธ๋ฆฌ ์ €์žฅํ•ด๋‘์—ˆ๋‹ค๊ฐ€, R Studio๋ฅผ ์—ด ๋•Œ ๋ถˆ๋Ÿฌ์˜ค๊ฒŒ ํ•˜๋Š” ๊ธฐ๋Šฅ. ๋А๋ ค์งˆ ์ˆ˜ ์žˆ๋‹ค.\n- Tools โ†’ Global options โ†’ Basic โ†’ Always svae history ์ฒดํฌ ํ‘œ์‹œ\n - ์–ด๋–ค ๋ช…๋ น์–ด๋“ค์„ ์‚ฌ์šฉํ–ˆ๋Š”์ง€์— ๋Œ€ํ•œ ์ •๋ณด๋ฅผ ์•Œ๋ ค์ค€๋‹ค.\n- Tools โ†’ Global options โ†’ Advanced โ†’ Double-click to select words in Console pane ์ฒดํฌ\n- Code โ†’ Editing โ†’ Soft-wrap R source files ์ฒดํฌ\n - R ์†Œ์Šค๊ฐ€ ๊ฐ€๋กœ๋กœ ๊ธธ๊ฒŒ ๋Š˜์–ด์ ธ ์Šคํฌ๋กค๋ฐ”๊ฐ€ ์ƒ๊ธฐ๋Š” ํ˜„์ƒ์„ ๋ง‰๋Š”๋‹ค.\n- Code โ†’ Display โ†’ Show margin์— ์ฒดํฌ\n - 80๊ธ€์ž๋งˆ๋‹ค ์„ ์ด ๊ทธ์–ด์ ธ์„œ ์ฝ”๋“œ๊ฐ€ ๊ธธ์–ด์ง€์ง€ ์•Š๊ฒŒ ์•Œ๋ ค์ค€๋‹ค.\n- Code โ†’ Saving โ†’ UTF-8์„ ์„ ํƒ\n - ์–ด๋А ์‹œ์Šคํ…œ์—์„œ๋“  ๊ณตํ†ต์œผ๋กœ ์ฝ”๋“œ๋ฅผ ๋ณผ ์ˆ˜ ์žˆ๊ฒŒ ํ•œ๋‹ค. ๋‹ค๋ฅธ ์‚ฌ๋žŒ์˜ ์ฝ”๋“œ๋ฅผ ๊ฐ€์ ธ์˜ฌ ๋•Œ ์ฃผ์„์ด ๊นจ์ง€๋Š” ๊ฒƒ์„ ๋ฐฉ์ง€ํ•ด์ค€๋‹ค. 
๋งŒ์•ฝ ์ฃผ์„์ด ๊นจ์ ธ์žˆ๋‹ค๋ฉด File โ†’ Reopen Encoding ํ•ญ๋ชฉ์„ ํด๋ฆญํ•˜์—ฌ Encoding์„ ๋ณ€๊ฒฝํ•ด์ฃผ๋ฉด ๋œ๋‹ค.\n \n# R project ์ƒ์„ฑ\n\n![](../images/2020-11-06-r-1/Untitled2.png)\n![](../images/2020-11-06-r-1/Untitled3.png)\n![](../images/2020-11-06-r-1/Untitled4.png)\n![](../images/2020-11-06-r-1/Untitled5.png)\n\n```r\ngetwd()\n```\n\n![](../images/2020-11-06-r-1/Untitled6.png)\n\n```r\nsetwd()\n```\n\n# java ์„ค์น˜\n\n๋งฅ์—์„œ๋Š” ์ž๋ฐ” ์„ค์น˜๊ฐ€ ํ•„์š”ํ•˜๋‹ค.\n\n```bash\njava -version\n```\n\nNo Java runtime present, requesting install.๋ผ๋Š” ๋ฉ”์„ธ์ง€๋ฅผ ๋ฐ›์œผ๋ฉด, ์ž๋ฐ”๊ฐ€ ์„ค์น˜ ๋˜์ง€ ์•Š์€ ๊ฒƒ์ด๋‹ค.\n\n![](../images/2020-11-06-r-1/Untitled7.png)\n\n์ถ”๊ฐ€ ์ •๋ณด๋ฅผ ๋ˆŒ๋Ÿฌ์ฃผ๋ฉด ์„ค์น˜ ํŽ˜์ด์ง€๋กœ ์ด๋™ํ•œ๋‹ค. \n\n![](../images/2020-11-06-r-1/Untitled8.png)\n\nJDK Download๋ฅผ ๋ˆŒ๋Ÿฌ์ค€ ๋‹ค์Œ,\n\n![](../images/2020-11-06-r-1/Untitled9.png)\n\nmacOS Installer์— ํ•ด๋‹นํ•˜๋Š” dmgํŒŒ์ผ์„ ๋‹ค์šด๋กœ๋“œ ํ›„ ์„ค์น˜๋ฅผ ์ง„ํ–‰ํ•œ๋‹ค.\n\n# package ์„ค์น˜ ๋ฐ ํ™œ์šฉ\n\nR์—์„œ๋Š” ๋‘ ๊ฐ€์ง€ ๋ฐฉ์‹์˜ ํŒจํ‚ค์ง€ ์„ค์น˜ ๋ฐฉ๋ฒ•์ด ์žˆ๋‹ค. ์˜ค๋ฅธ์ชฝ ์•„๋ž˜์˜ package ์˜์—ญ์„ ํด๋ฆญํ•˜๋ฉด ํ˜„์žฌ ์„ค์น˜๋˜์–ด ์žˆ๋Š” ํŒจํ‚ค์ง€๋“ค์˜ ๋ชฉ๋ก์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋‹ค. install, update ๊ทธ๋ฆฌ๊ณ  ์˜ค๋ฅธ์ชฝ์— ๊ฒ€์ƒ‰ ์˜์—ญ๊ณผ ์ƒˆ๋กœ ๊ณ ์นจ ์˜์—ญ์ด ๋ณด์ด๋ฉฐ, ๊ทธ ์•„๋ž˜๋กœ ํŒจํ‚ค์ง€๋“ค์˜ ๋ชฉ๋ก์ด ์žˆ๋‹ค. ํŒจํ‚ค์ง€ ์ด๋ฆ„ ์˜†์—๋Š” ์ฒดํฌ๊ฐ€ ๋˜์–ด ์žˆ๋Š” ์ƒํƒœ์˜ ํŒจํ‚ค์ง€๋“ค์ด ์žˆ๊ณ  ์—†๋Š” ๊ฒƒ๋“ค๋„ ์žˆ๋Š”๋ฐ, ์ฒดํฌํ•  ์‹œ ํ•ด๋‹น ํŒจํ‚ค์ง€์˜ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉ๊ฐ€๋Šฅํ•˜๊ฒŒ ๋œ๋‹ค. ํŒจํ‚ค์ง€๋ฅผ ์ œ๊ฑฐํ•˜๊ฑฐ์ž ํ•œ๋‹ค๋ฉด ๋งจ ์˜ค๋ฅธ์ชฝ X ๋ฒ„ํŠผ์„ ๋ˆ„๋ฅธ๋‹ค.\n\n![](../images/2021-01-25-r-3/Untitled3.png)\n\nํŒจํ‚ค์ง€๋ฅผ ์„ค์น˜ํ•˜๊ณ ์ž ํ•  ๋•, install ๋ฒ„ํŠผ์„ ๋ˆ„๋ฅธ๋‹ค. install ๋ฒ„ํŠผ์„ ๋ˆ„๋ฅด๋ฉด ์•„๋ž˜์™€ ๊ฐ™์€ ์ฐฝ์ด ๋œฌ๋‹ค.\n\n![](../images/2021-01-25-r-3/Untitled4.png)\n\ninstall from์—์„œ ๋‘ ๊ฐ€์ง€ ๋ฐฉ์‹์„ ์„ ํƒํ•  ์ˆ˜ ์žˆ๋‹ค. ์ธํ„ฐ๋„ท์„ ํ†ตํ•ด ์„ค์น˜๊ฐ€ ๊ฐ€๋Šฅํ•˜๋‹ค๋ฉด, R ๊ฐœ๋ฐœํŒ€์—์„œ ๊ด€๋ฆฌํ•˜๋Š”Repository(CRAN)์„ ์„ ํƒํ•˜๋ฉฐ ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ์— ํ•ด๋‹นํ•œ๋‹ค. ๋งŒ์•ฝ, ์‚ฌ๋‚ด๋ง์„ ์ด์šฉํ•˜๊ฑฐ๋‚˜ ๋ณด์•ˆ์„ ์ด์œ ๋กœ ์ธํ„ฐ๋„ท ์‚ฌ์šฉ์ด ๋ถˆ๊ฐ€๋Šฅํ•œ ํ™˜๊ฒฝ์ด๋ผ๋ฉด ํŒจํ‚ค์ง€ ์„ค์น˜ ํŒŒ์ผ์„ ๋ณ„๋„๋กœ ์ค€๋น„ํ•˜์—ฌ Packages Archive ์˜ต์…˜์„ ์„ ํƒํ•˜์—ฌ ์„ค์น˜๋ฅผ ์ง„ํ–‰ํ•œ๋‹ค.\n\n## ๋ช…๋ น์–ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŒจํ‚ค์ง€์˜ ์„ค์น˜, import ๋ฐ un-import, ์ œ๊ฑฐ\n\n```r\ninstall.packages(\"zoo\") # package์˜ ์„ค์น˜\nlibrary(\"zoo\") # ํ”„๋กœ์ ํŠธ์—์„œ ํ•ด๋‹น ํŒจํ‚ค์ง€์˜ ํ•จ์ˆ˜ ์ด์šฉ ๊ฐ€๋Šฅํ•œ ์ƒํƒœ๋กœ ๋งŒ๋“ค๊ธฐ\ndetach(\"package:zoo\"), unload=TRUE) # ํ”„๋กœ์ ํŠธ์—์„œ ๋”์ด์ƒ ํ•ด๋‹น ํŒจํ‚ค์ง€๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š์Œ\nremove.packages(\"zoo\") # ํŒจํ‚ค์ง€ ์ œ๊ฑฐ\n```\n\n# ์‚ฌ์น™ ์—ฐ์‚ฐ: ๋‚˜๋จธ์ง€( %% )\n\n```r\n10%%3\n```\n1\n\n# ๋…ผ๋ฆฌ ์—ฐ์‚ฐ: OR( | ), AND( & )\n\n```r\nTRUE | TRUE\n```\nTRUE\n```r\nTRUE | FALSE\n```\nTRUE\n```r\nFALSE | FALSE\n```\nFALSE\n\n```r\nTRUE & TRUE\n```\nTRUE\n```r\nTRUE & FALSE \n```\nFALSE\n```r\nFALSE & FALSE \n```\nFALSE\n\n\n```r\nTRUE # 1\n```\nTRUE\n```r\nFALSE # 0\n```\nFALSE\n\n\n```r\nTRUE + FALSE \n```\n1\n```r\nTRUE + TRUE\n```\n2\n\n\n## ๋ฌธ์ œ 1\n\n๋‹ค์Œ์˜ ๋…ผ๋ฆฌ ์—ฐ์‚ฐ์— ๋Œ€ํ•œ ์˜ฌ๋ฐ”๋ฅธ ๊ฒฐ๊ณผ๋ฅผ ๊ณ ๋ฅด์‹œ์˜ค.\n\na. 11 != 10\n\nb. 30 > 50 | 100 > 1000\n\nTRUE, FALSE\n\n\n# ๋ณ€์ˆ˜(value ์ง€์ •)\n- ๋ณ€์ˆ˜๋Š” ์ˆ˜๋ฅผ ๋‹ด๋Š” ๊ทธ๋ฆ‡์ด๋‹ค. 
๋”ฐ๋ผ์„œ, ์›๋ž˜ ๋‹ด๊ธด ๊ฐ’์„ ๋ณ€๊ฒฝํ•  ์ˆ˜ ์žˆ๋‹ค.\n\n`<-` ๋ฅผ ์ด์šฉํ•œ๋‹ค.\n\n```r\na <- 1\na\n```\n1\n\n```r\na = 1\na \n```\n1\n\n```r\nb <- 10\na+b -> c\nc\n```\n11\n\n```r\nA <- 10\nA + 20\n```\n30\n\n```r\nB <- A + 20\nB\n```\n30\n\n```r\nprint(B)\n```\n30\n\n\n- +, -, *, / ๋“ฑ์˜ ์‚ฌ์น™์—ฐ์‚ฐ์ด ๊ฐ€๋Šฅํ•˜๋‹ค.\n\n## ๋ณ€์ˆ˜ ๋ช…๋ช… ๊ทœ์น™\n\n- ๋Œ€์†Œ๋ฌธ์ž ๊ตฌ๋ถ„\n- ์–ธ๋”๋ผ์ธ, dot, ์ˆซ์ž ์‚ฌ์šฉ ๊ฐ€๋Šฅ\n\n```r\ngene.seq = \"ATGC\"\ngene.seq\n\nGene.seq = \"AGGT\"\ngene.seq == Gene.seq # FALSE\n\ngene_seq = \"ATGC\"\ngene.seq == gene_seq # TRUE\n\ngene_seq_1 = \"AGGT\"\ngene_seq_1 == Gene.seq # TRUE\n```\n\n- ๋ณ€์ˆ˜๋ช… ์‹œ์ž‘์„ dot๋กœ ๊ฐ€๋Šฅ\n\n```r\nseq = \"ATGCTGGC\"\n.seq = \"ATGCTGGC\"\nseq == .seq # TRUE\n```\n\n# ๋‚ด์žฅ ํ•จ์ˆ˜\n\n## demo()\nR ํ”„๋กœ๊ทธ๋žจ์— ๊ธฐ๋ณธ์ ์œผ๋กœ ๋‚ด์žฅ๋˜์–ด ์žˆ๋Š” ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰์‹œํ‚ฌ ์ˆ˜ ์žˆ๋‹ค. R๋กœ ์–ด๋–ค ์ผ์„ ํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ๋ณด์—ฌ์ค€๋‹ค.\n```r\ndemo(gersp)\n```\n```r\ndemo(graphcis)\n```\n๋‹ค์–‘ํ•œ ๊ทธ๋ž˜ํ”„๋“ค์„ ๊ทธ๋ฆด ์ˆ˜ ์žˆ์Œ์„ ๋ณผ ์ˆ˜ ์žˆ๋‹ค. ๋…ผ๋ฌธ์šฉ ๊ทธ๋ฆผ๋“ค์€ ์ฃผ๋กœ R๋กœ ์ž‘์„ฑ๋˜๋Š” ์ด์œ ๋‹ค.\n\n\n## c()\nvector ์ƒ์„ฑ. combine์˜ ์•ฝ์ž\n\n```r\nx <- 1, 2, 3\n```\n์—๋Ÿฌ\n\n```r\nc(1, 2, 3)\n```\n1 2 3\n```r\ncx <- c(1, 2, 3)\ncy <- c(2, 3, 4)\ncz <- cx*cy\n```\n2 6 12\n```r\ncx <- c(1, 2, 3)\ny <- 9\ncz <- cx*y\n```\n9 18 27\n```r\ncx <- c(1, 2, 3)\ncy <- c('1', '2', '3')\ncz <- cx*cy\n```\n์—๋Ÿฌ. ๊ฐ™์€ type๋ผ๋ฆฌ๋งŒ ์—ฐ์‚ฐ ๊ฐ€๋Šฅํ•˜๋‹ค.\n\n```r\nc(\"one\", \"two\", \"three\")\n```\n\"one\" \"two\" \"three\" (str ๋˜๋Š” char)\n```r\nc(TRUE, FALSE, TRUE)\n```\nTRUE FALSE TRUE\n\n### ๊ฐ’์˜ ํ˜ธ์ถœ\n```r\nx <- c('1', '2', '3')\nx[1]\n```\n'1'\n\n## length()\nlength๋Š” ํ‰๊ท ์„ ๊ตฌํ•  ๋•Œ ์œ ์šฉํ•˜๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค.\n```r\ncx <- c(1, 2, 3)\ncy <- c(2, 3)\ncz <- cx*cy\n```\n์—๋Ÿฌ. ๋‘ ๋ฒกํ„ฐ์˜ ๊ธธ์ด๊ฐ€ ๋‹ค๋ฅด๋‹ค.\n```r\nlength(cx)\n```\n3\n\n## str()\nstructure์˜ ์•ฝ์ž. ํŠน์ • ๋ฐ์ดํ„ฐ์˜ ํƒ€์ž…๊ณผ ๊ตฌ์กฐ, ๊ทธ ๊ฐ’์„ ๋ณด์—ฌ์ค€๋‹ค. ํฐ ๊ฐ’์ด๋‚˜ ํ…Œ์ด๋ธ”์„ ๊ฐ–๋Š” ๊ฒฝ์šฐ, ์œ ์šฉํ•˜๊ฒŒ ํ™œ์šฉ ๊ฐ€๋Šฅํ•˜๋‹ค.\n```r\nx <- c(1, 2, 3, 5, 6)\nstr(x)\n```\nnum [1:5] 1 2 3 5 6\n```r\nx <- c('1', '2', '3', '5', '6')\nstr(x)\n```\nstr [1:5] 1 2 3 5 6\n\n### boolean\n```r\n1+2=3\n```\n์—๋Ÿฌ. =ํ‘œ์‹œ๋Š” ๋ณ€์ˆ˜์— ๋‹ด๊ฒ ๋‹ค๋Š” ์˜๋ฏธ. ์ˆซ์ž๋Š” value์˜ ์ด๋ฆ„์œผ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†๋‹ค.\n```r\n1+2==3\n```\n1+2๊ฐ€ 3์ธ๊ฐ€. TRUE. TRUE๋‚˜ FALSE๋ฅผ boolean ๋˜๋Š” logical operatior๋ผ๊ณ  ๋ถ€๋ฅธ๋‹ค. ์กฐ๊ฑด๋ฌธ์—์„œ ์ฃผ๋กœ ์‚ฌ์šฉ๋œ๋‹ค.\n```r\ncx <- c(1, 2, 3)\ncy <- c('1', '2', '4')\n```\nTRUE TRUE FALSE. ํƒ€์ž…์— ๋ฌด๊ด€ํ•˜๊ฒŒ, ๊ฐ’์ด ๊ฐ™์€์ง€๋ฅผ ์‚ดํ•€๋‹ค. \n\n## seq(from, to, by)\n\n์—ฐ์†์ ์ธ ์ˆซ์žํ˜• vector ์ƒ์„ฑํ•œ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, 1๋ถ€ํ„ฐ 1,000,000๊นŒ์ง€์˜ ์ˆซ์žํ˜• ๋ฒกํ„ฐ๋ฅผ ์ƒ์„ฑํ•˜๊ณ ์ž ํ•œ๋‹ค๋ฉด? ์ด ๋•Œ ์‚ฌ์šฉํ•˜๋Š” ํ•จ์ˆ˜๊ฐ€ seq()์ด๋‹ค.\n\n`by`์˜ default๊ฐ’์€ 1์ด๋‹ค. 
์–ผ๋งˆ๋งŒํผ์˜ ๊ฐ„๊ฒฉ์œผ๋กœ ์ฆ๊ฐ€์‹œํ‚ฌ ๊ฒƒ์ธ์ง€ ๊ฒฐ์ •ํ•œ๋‹ค.\n\n```r\nseq(1, 10, 2)\n```\n1 3 5 7 9\n```r\nseq(1, 10)\n```\n1 2 3 4 5 6 7 8 9 10\n```r\n1:10\n```\n1 2 3 4 5 6 7 8 9 10\n```r\ny <- c(1:100)\n```\n๋ฒกํ„ฐ๋กœ๋„ ์ƒ์„ฑ๊ฐ€๋Šฅํ•˜๋‹ค.\n\n### ๋ฌธ์ œ\n์œ„ y ๋ณ€์ˆ˜์˜ 5๋ฒˆ์งธ ๊ฐ’๋ถ€ํ„ฐ 10๋ฒˆ์งธ ๊ฐ’๊นŒ์ง€๋ฅผ ์ƒˆ๋กœ์šด ๋ณ€์ˆ˜ n์— ์ €์žฅํ•˜๋Š” ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ง ๋‹ค.\n\n### ์ •๋‹ต\n```r\nn <- y[5:10]\n```\n5 6 7 8 9 10\n- ์ธ๋ฑ์Šค๊ฐ€ 1๋ถ€ํ„ฐ ์‹œ์ž‘ํ•œ๋‹ค๋Š” ๊ฒƒ์— ์ฃผ์˜\n\n### ๋ฌธ์ œ\n์œ„ y ๋ณ€์ˆ˜์—์„œ 90๋ณด๋‹ค ํฐ ๊ฐ’๋“ค์—” TRUE, 90์ดํ•˜์ธ ๊ฐ’๋“ค์—” FALSE๋ฅผ ์ƒˆ๋กœ์šด ๋ณ€์ˆ˜ h์— ๋‹ด๋Š” ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ง ๋‹ค.\n```r\nh <- y>90\nh\n```\n\n## seq(from, to, length.out)\n\n`length.out` ํŒŒ๋ผ๋ฏธํ„ฐ๋Š” ๋ช‡ ๊ฐœ์˜ ์ˆซ์ž๋ฅผ ๋งŒ๋“ค์ง€๋ฅผ ๊ฒฐ์ •ํ•œ๋‹ค.\n\n```r\nvariable <- seq(0, 1, length.out=20)\nvariable\n```\n\n![](../images/2020-11-06-r/Untitled10.png)\n\n## rep(object, times/each)\n\n๋ฐ˜๋ณต์ ์ธ ์ž๋ฃŒํ˜• ๋ฒกํ„ฐ ์ƒ์„ฑ. ์ˆซ์žํ˜• ๋ฐ์ดํ„ฐ, ๋ฌธ์žํ˜• ๋ฐ์ดํ„ฐ ๋ชจ๋‘ ๋ฐ˜๋ณต์‹œํ‚ฌ ์ˆ˜ ์žˆ๋‹ค. `times`๋Š” ๋ฐ˜๋ณตํ•  ํšŸ์ˆ˜, `each`๋Š” ๊ฐ ์›์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ๋ฐ˜๋ณตํ•  ํšŸ์ˆ˜๋ฅผ ๋งํ•œ๋‹ค.\n\n```r\nrep(1, times=3)\n```\n1 1 1\n```r\nrep(1:3, times=3)\n```\n1 2 3 1 2 3 1 2 3\n```r\nrep(1:3, each=3)\n```\n1 1 1 2 2 2 3 3 3\n```r\nrep(c(\"a\",\"b\",\"c\"), times=5)\n```\n\"a\" \"b\" \"c\" \"a\" \"b\" \"c\" \"a\" \"b\" \"c\" \"a\" \"b\" \"c\" \"a\" \"b\" \"c\"\n\n\n## sample(x, size, replace=F)\n\n๋ฌด์ž‘์œ„์ ์œผ๋กœ ๋ฐ์ดํ„ฐ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ํ•จ์ˆ˜๋‹ค. \n\n```r\nx <- 1:12\nx\nsample(x) # ์ฃผ์–ด์ง„ ๋ฒ”์œ„ ๋‚ด์—์„œ ๋ฌด์ž‘์œ„์ ์ธ ์ˆซ์ž ์ƒ์„ฑ\n```\n\n![](../images/2020-11-06-r-1/Untitled11.png)\n\n```r\n# ์ค‘๋ณต์„ ํ—ˆ์šฉํ•˜์—ฌ ๋ฌด์ž‘์œ„ ์ˆซ์ž ์ƒ์„ฑ\nsample(x, replace=T)\n```\n\n![](../images/2020-11-06-r-1/Untitled12.png)\n\n```r\nsample(45, size=6, replace=T)\n```\n\n![](../images/2020-11-06-r-1/Untitled13.png)\n\n## ๋ฌธ์ œ 2\n\n10๋ถ€ํ„ฐ 50๊นŒ์ง€ ์ •์ˆ˜ํ˜• ๋ฒกํ„ฐ๋ฅผ ์ƒ์„ฑํ•˜์‹œ์˜ค.\n\n```r\n10:50\nrep(10:50, 1)\nseq(10, 50, 1)\n```\n\n๋‹ค์Œ ๋ช…๋ น์–ด๋ฅผ ์‹คํ–‰ํ–ˆ์„ ๋•Œ์˜ ์˜ฌ๋ฐ”๋ฅธ ์ถœ๋ ฅ ๊ฒฐ๊ณผ๋Š”?\n\n```r\nrep(rep(c(\"a\", \"b\", \"c\"), time=2), each=2)\n```\n\n\"a\" \"a\" \"b\" \"b\" \"c\" \"c\" \"a\" \"a\" \"b\" \"b\" \"c\" \"c\"\n\n## ์ž์ฃผ ์‚ฌ์šฉ๋˜๋Š” ํ•จ์ˆ˜๋“ค\n\n![](../images/2020-11-06-r-1/function.png)\n\n### sum()\n\n```r\ny <- c(2, 4, 6, 8, 10)\nsum(y)\n```\n30\n```r\ny <- c(1, 22, 3, 5, 4, 565, -7, 868, -9, 979, -97, 342, 23, 1, 1, 35, 46, 6, 7, 8, 2, 4, 75, 12, 54, -6, 7, 23, 123, 53, 12, 3, 4, 5446, 22, 1221, 235, 23, -234, 5, -54, -7, 7, 2, 1, 23, 46, 67, 64, 2, 2, 4, 5, 25, 768, 963, -26)\nsum(y)\n```\n11779\n```r\ny <- c(1, 22, 3, 5, 4, 565, -7, 868, -9, 979, -97, 342, 23, 1, 1, 35, 46, 6, 7, 8, 2, 4, 75, 12, 54, -6, 7, 23, 123, 53, 12, 3, 4, 5446, 22, 1221, 235, 23, -234, 5, -54, -7, 7, 2, 1, 23, 46, 67, 64, 2, 2, 4, 5, 25, 768, 963, -26)\nsum(y>100)\n```\n10\n\n#### ๋ฌธ์ œ\n100๋ณด๋‹ค ํฐ ๊ฐ’๋“ค์˜ ํ•ฉ์„ ๊ตฌํ•˜๋ผ.\n\n#### ์ •๋‹ต\n```r\nsum(x[x>100])\n```\n11510\n```r\nx <- c(1, 3, 5)\ny <- c(2, 4, 6)\nsum(x+y)\n```\n21\n\n### min()\n\n```r\ny <- c(2, 4, 6, 8, 10)\nmin(y)\n```\n2\n```r\ny <- c(1, 22, 3, 5, 4, 565, -7, 868, -9, 979, -97, 342, 23, 1, 1, 35, 46, 6, 7, 8, 2, 4, 75, 12, 54, -6, 7, 23, 123, 53, 12, 3, 4, 5446, 22, 1221, 235, 23, -234, 5, -54, -7, 7, 2, 1, 23, 46, 67, 64, 2, 2, 4, 5, 25, 768, 963, -26)\nmin(y)\n```\n-234\n```r\nx<- c(1, 3, 5)\ny <- c(2, 4, 6)\nmin(x+y)\n```\n3\n\n\n### max()\n\n```r\ny <- c(2, 4, 6, 8, 10)\nmax(y)\n```\n10\n```r\ny <- c(1, 22, 3, 5, 4, 565, -7, 868, -9, 979, -97, 342, 23, 1, 1, 35, 46, 6, 7, 8, 2, 4, 75, 12, 
## Frequently used functions\n\n![](../images/2020-11-06-r-1/function.png)\n\n### sum()\n\n```r\ny <- c(2, 4, 6, 8, 10)\nsum(y)\n```\n30\n```r\ny <- c(1, 22, 3, 5, 4, 565, -7, 868, -9, 979, -97, 342, 23, 1, 1, 35, 46, 6, 7, 8, 2, 4, 75, 12, 54, -6, 7, 23, 123, 53, 12, 3, 4, 5446, 22, 1221, 235, 23, -234, 5, -54, -7, 7, 2, 1, 23, 46, 67, 64, 2, 2, 4, 5, 25, 768, 963, -26)\nsum(y)\n```\n11779\n```r\ny <- c(1, 22, 3, 5, 4, 565, -7, 868, -9, 979, -97, 342, 23, 1, 1, 35, 46, 6, 7, 8, 2, 4, 75, 12, 54, -6, 7, 23, 123, 53, 12, 3, 4, 5446, 22, 1221, 235, 23, -234, 5, -54, -7, 7, 2, 1, 23, 46, 67, 64, 2, 2, 4, 5, 25, 768, 963, -26)\nsum(y>100)\n```\n10\n\n#### Problem\nFind the sum of the values greater than 100.\n\n#### Answer\n```r\nsum(y[y>100])\n```\n11510\n```r\nx <- c(1, 3, 5)\ny <- c(2, 4, 6)\nsum(x+y)\n```\n21\n\n### min()\n\n```r\ny <- c(2, 4, 6, 8, 10)\nmin(y)\n```\n2\n```r\ny <- c(1, 22, 3, 5, 4, 565, -7, 868, -9, 979, -97, 342, 23, 1, 1, 35, 46, 6, 7, 8, 2, 4, 75, 12, 54, -6, 7, 23, 123, 53, 12, 3, 4, 5446, 22, 1221, 235, 23, -234, 5, -54, -7, 7, 2, 1, 23, 46, 67, 64, 2, 2, 4, 5, 25, 768, 963, -26)\nmin(y)\n```\n-234\n```r\nx<- c(1, 3, 5)\ny <- c(2, 4, 6)\nmin(x+y)\n```\n3\n\n\n### max()\n\n```r\ny <- c(2, 4, 6, 8, 10)\nmax(y)\n```\n10\n```r\ny <- c(1, 22, 3, 5, 4, 565, -7, 868, -9, 979, -97, 342, 23, 1, 1, 35, 46, 6, 7, 8, 2, 4, 75, 12, 54, -6, 7, 23, 123, 53, 12, 3, 4, 5446, 22, 1221, 235, 23, -234, 5, -54, -7, 7, 2, 1, 23, 46, 67, 64, 2, 2, 4, 5, 25, 768, 963, -26)\nmax(y)\n```\n5446\n```r\nx <- c(1, 3, 5)\ny <- c(2, 4, 6)\nmax(x+y)\n```\n11\n\n\n### mean()\nReturns the mean.\n```r\ny <- c(2, 4, 6, 8, 10)\nmean(y)\n```\n6\n```r\ny <- c(1, 22, 3, 5, 4, 565, -7, 868, -9, 979, -97, 342, 23, 1, 1, 35, 46, 6, 7, 8, 2, 4, 75, 12, 54, -6, 7, 23, 123, 53, 12, 3, 4, 5446, 22, 1221, 235, 23, -234, 5, -54, -7, 7, 2, 1, 23, 46, 67, 64, 2, 2, 4, 5, 25, 768, 963, -26)\nmean(y)\n```\n206.6491\n\n#### Problem\nWithout using mean(), find the mean of the following values.\n```\n1, 22, 3, 5, 4, 565, -7, 868, -9, 979, -97, 342, 23, 1, 1, 35, 46, 6, 7, 8, 2, 4, 75, 12, 54, -6, 7, 23, 123, 53, 12, 3, 4, 5446, 22, 1221, 235, 23, -234, 5, -54, -7, 7, 2, 1, 23, 46, 67, 64, 2, 2, 4, 5, 25, 768, 963, -26\n```\n\n#### Answer\n```r\ny <- c(1, 22, 3, 5, 4, 565, -7, 868, -9, 979, -97, 342, 23, 1, 1, 35, 46, 6, 7, 8, 2, 4, 75, 12, 54, -6, 7, 23, 123, 53, 12, 3, 4, 5446, 22, 1221, 235, 23, -234, 5, -54, -7, 7, 2, 1, 23, 46, 67, 64, 2, 2, 4, 5, 25, 768, 963, -26)\nsum(y) / length(y)\n```\n206.6491\n```r\nx <- c(1, 3, 5)\ny <- c(2, 4, 6)\nmean(x+y)\n```\n\nThe `+` is evaluated first, so this is the mean over the 3 element-wise sums.\n\n### median()\n\n```r\ny <- c(2, 4, 6, 8, 10)\nmedian(y)\n```\n6\n\n```r\nx <- c(1, 3, 5)\ny <- c(2, 4, 6)\nmedian(x+y)\n```\n7\n\n### which()\nTells you the index positions that satisfy a condition.\n```r\ny <- c(1, 22, 3, 5, 4, 565, -7, 868, -9, 979, -97, 342, 23, 1, 1, 35, 46, 6, 7, 8, 2, 4, 75, 12, 54, -6, 7, 23, 123, 53, 12, 3, 4, 5446, 22, 1221, 235, 23, -234, 5, -54, -7, 7, 2, 1, 23, 46, 67, 64, 2, 2, 4, 5, 25, 768, 963, -26)\nwhich(y>100)\n```\n6 8 10 12 29 34 36 37 55 56\n\n### log()\n\n```r\nlog(4, 2) # log of 4 with base 2\n```\n2\n\n```\ny <- c(4, 2) # natural log (base e) of 4 and of 2\nlog(y)\n```\n1.3862944 0.6931472\n\n\nPassing a vector to `log()` returns the natural logarithm of each element.\n\n### log2()\n\n```r\nlog2(4, 2)\n```\nError in log2(4, 2) : 2 arguments passed to 'log2' which requires 1\n\n```r\nlog2(4)\n```\n2\n\n```r\ny <- c(4)\nlog2(y)\n```\n2\n\n```r\ny <- c(4, 2)\nlog2(y)\n```\n2 1\n\n\n### sqrt()\n\n```r\nsqrt(2)\n```\n1.414214\n\n```r\ny <- c(2, 4, 6)\nsqrt(y)\n```\n1.414214 2.000000 2.449490\n\n\nPassing a vector returns the square root of each element.\n\n### sd(), var()\n\n```r\nvar <- 1:10\nmean(var)\n```\n5.5\n\n```r\nsd(var)\n```\n3.02765\n\n```r\nvar(var) # in R a function name may also be used as a variable name\n```\n9.166667\n\n\n### sort(), order()\n\n```r\nvar <- c(7,1,4,6,7,21,7)\nsort(var) # duplicated values are kept, not removed\n```\n1 4 6 7 7 7 21 \n\n```r\norder(var) # prints the indices of the sorted values\n```\n2 3 4 1 5 7 6\n\n```r\nsort(var, decreasing=TRUE)\n```\n21 7 7 7 6 4 1\n\n```r\norder(var, decreasing=TRUE)\n```\n6 1 5 7 4 3 2\n\n\n![](../images/2021-01-25-r-3/Untitled12.jpg)\n\n![](../images/2021-01-25-r-3/Untitled13.jpg)\n\n### range()\n\n```r\nx <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)\nrange(x)\n```\n1 10\n\n```r\nx <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 100)\nrange(x)\n```\n1 100\n\n\n### length()\n\n```r\nlength(x)\n```\n10
๋ฐ์ดํ„ฐ์˜ ๊ธธ์ด\n```\n[1] 10\n\n```r\nrange(x) # ๋ฐ์ดํ„ฐ์˜ ๋ฒ”์œ„\n```\n[1] -0.4771941 1.3342028\n\n```r\nsort(x) # ๋ฐ์ดํ„ฐ ์ •๋ ฌ\n```\n [1] -0.47719415 -0.06918889 0.08678522\n [4] 0.17208155 0.21801128 0.25329343\n [7] 0.78781353 0.80736879 0.97180240\n[10] 1.33420280\n\n```r\norder(x) # ๋ฐ์ดํ„ฐ ์ •๋ ฌ index\n```\n [1] 5 4 9 10 3 8 1 6 2 7\n\n\n### ๊ธฐ์ˆ ํ†ต๊ณ„\n```r\nsum(x) \n```\n[1] 4.084976\n\n```r\nmin(x)\n```\n[1] -0.4771941\n\n```r\nmax(x)\n```\n[1] 1.334203\n\n```r\nmedian(x)\n```\n[1] 0.2356524\n\n```r\nmean(x)\n```\n[1] 0.4084976\n\n```r\nsd(x)\n```\n[1] 0.5486975\n\n```r\nvar(x)\n```\n[1] 0.3010689\n\n### class()\n\n```r\n## vector ์ž๋ฃŒํ˜•\nclass(c(1, \"one\", TRUE))\n```\n\"character\"\n\n```r\nclass(var1)\n```\n\"numeric\"\n\n```r\nclass(var2)\n```\n\"character\"\n\n```r\nclass(var3)\n```\n\"logical\"\n\n### typeof()\ntypeof() ํ•จ์ˆ˜๋Š” ์›์‹œ ์ž๋ฃŒํ˜•์„ ํ‘œํ˜„ํ•ด์ค€๋‹ค. ์›์‹œ ์ž๋ฃŒํ˜•์ด๋ž€, R์—์„œ ์ทจ๊ธ‰ํ•˜๋Š” ์ผ๋ฐ˜์ ์ธ ์ž๋ฃŒํ˜•์˜ ๋ชจ๋“  ํ˜•ํƒœ๋ฅผ ์ง€์นญํ•œ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒƒ๋“ค์ด ์žˆ๋‹ค.\nNULL: ๋ฐ์ดํ„ฐ๊ฐ€ ์—†๋Š” ๊ฒฝ์šฐ\nLogical: ๋ถˆ๋ฆฌ์–ธ, ์ฐธ ๋˜๋Š” ๊ฑฐ์ง“\nInteger: ์ •์ˆ˜\nDouble: ๋ถ€๋™์†Œ์ˆ˜์  ์‹ค์ˆ˜\nComplex: ๋ณต์†Œ์ˆ˜\nCharacter: ๋ฌธ์ž์—ด\nList: ๋ฆฌ์ŠคํŠธ\nClosure: ํ•จ์ˆ˜\n\n๋ฐ˜๋ฉด, class()๋Š” ๊ฐ์ฒด์ง€ํ–ฅ ๊ด€์ ์—์„œ์˜ ์ž๋ฃŒํ˜•์„ ๋งํ•œ๋‹ค. ๋ชจ๋“  ๊ฐ์ฒด๋Š” ์ถ”์ƒ์ž๋ฃŒํ˜•์ธ 'ํด๋ž˜์Šค'๋ฅผ ๊ฐ–๋Š”๋‹ค. ๊ฑฐ๋ฆฌ, ์‹œ๊ฐ„์˜ ํ•œ ์ง€์  ๋˜๋Š” ๋ฌด๊ฒŒ ๋“ฑ ์–ด๋–ค ํ•˜๋‚˜์˜ ์ˆซ์ž๋กœ ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ๋Š” ๊ฒƒ์˜ ์ข…๋ฅ˜๊ฐ€ ๋งŽ๋‹ค. ์ด ๊ฐ์ฒด๋“ค์€ ๋ชจ๋‘ ์ˆซ์ž๋กœ ์ €์žฅ๋˜๊ธฐ ๋•Œ๋ฌธ์— '์ˆ˜์น˜ํ˜•'์ด๋ผ๋Š” ๋ชจ๋“œ๋ฅผ ๊ฐ€์ง€์ง€๋งŒ, ๊ฐ๊ฐ ํ•ด์„๋ฐฉ๋ฒ•์ด ๋‹ค๋ฅด๋ฏ€๋กœ, ํด๋ž˜์Šค๋Š” ์ƒ์ดํ•  ์ˆ˜ ์žˆ๋‹ค. R ์—์„œ ํด๋ž˜์Šค๋Š” ๋ณ€์ˆ˜๊ฐ€ ๊ฐ€์ง€๋Š” ํ•˜๋‚˜์˜ ์†์„ฑ์ด๋‹ค. ๋”ฐ๋ผ์„œ ์ž๋ฃŒํ˜•๊ณผ ํด๋ž˜์Šค๋Š” ๊ฐ™์€ ๊ฐ’์„ ๊ฐ€์ง€์ง€ ์•Š์„ ์ˆ˜๋„ ์žˆ๋‹ค.\nํŠนํžˆ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์ ์—์„œ ์ฐจ์ด๊ฐ€ ๋‚œ๋‹ค.\n- ๋ถ€๋™์†Œ์ˆ˜์  ์‹ค์ˆ˜์˜ ์ž๋ฃŒํ˜•์€ double ์ด์ง€๋งŒ ํด๋ž˜์Šค๋Š” numeric ์ด๋‹ค.\n- ํ•จ์ˆ˜์˜ ์ž๋ฃŒํ˜•์€ closure ์ด์ง€๋งŒ ํด๋ž˜์Šค๋Š” function ์ด๋‹ค.\n- matrix, data.frame ๋“ฑ์˜ ํด๋ž˜์Šค ๊ฐ์ฒด๋‚˜ ์‚ฌ์šฉ์ž ์ •์˜ ํด๋ž˜์Šค์˜ ์ž๋ฃŒํ˜•์€ list ์ด๋‹ค.\n\n```r\na <- c(1,2,3)\nb <- 'a'\ntypeof(a) # \"double\"\nclass(a) # \"numeric\"\ntypeof(b) # \"character\"\nclass(b) # \"character\"\ntypeof(a) == class(a) # FALSE\ntypeof(a) == typeof(b) # FALSE\nclass(a) == class(b) # FALSE\n```\n\n```r\nlgl_v <- c(T, F, TRUE, FALSE)\nlgl_v # TRUE FALSE TRUE FALSE\ntypeof(lgl_v) # \"logical\"\n```\n\n```r\ntypeof(1) # \"double\"; ์‹ค์ˆ˜ํ˜•์„ ์ˆซ์žํ˜• ์ค‘ ๊ธฐ๋ณธ ์ž๋ฃŒํ˜•์œผ๋กœ ๊ฐ–๋Š”๋‹ค. \ntypeof(1L) # \"integer\"; ์ •์ˆ˜ํ˜•์œผ๋กœ ๊ฐ•์ œ๋กœ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋์— L์„ ๋ถ™์ธ๋‹ค.\n```\n\n```r\nc(10/0, 0/0, NA) # Inf NaN NA\n```\n- Inf: ๋ฌดํ•œ๋Œ€\n- NaN: Not a Number. NaN๋„ ์ž๋ฃŒํ˜•์˜ ์ผ์ข…์ด๋‹ค.\n- NA: Not Available. ํ•„์š”ํ•œ ๊ฐ’์ด ์žˆ์œผ๋‚˜ ๊ฐ’์„ ๋„ฃ์„ ์ˆ˜ ์—†๋Š” ์ƒํƒœ ๋˜๋Š” ์‚ฌ์šฉํ•  ์ˆ˜ ์—†๋Š” ์ƒํƒœ๋ฅผ ๋งํ•œ๋‹ค. ์ฆ‰, ๊ฒฐ์ธก์น˜๋ฅผ ์˜๋ฏธํ•œ๋‹ค.\n\t- NaN๊ณผ NA๋ฅผ ์˜๋ฏธ์ƒ ๊ตฌ๋ถ„ํ•˜๋Š” ๊ฒƒ์€ ์˜๋ฏธ๊ฐ€ ์—†๋‹ค. 
NA๋ฅผ ์ฃผ๋กœ ์“ฐ๋Š” ํŽธ์ด๋‹ค.\n- NULL: ๊ฐ’์ด ์—†์Œ์„ ์˜๋ฏธํ•œ๋‹ค.\n\n```r\n0/0 == NaN # FALSE\nis.nan(0/0) # TRUE\n[is.na](http://is.na/)(0/0) # TRUE\n```\n\n```r\nlength(c(NA, 1, 2, \"3\")) #4\nlength(c(NA, 1, 2, NaN)) #3\nlength(c(NA, 1, 2, NULL)) #3\n```\n\n### tolower()\n\n```r\nx = \"Hello, R ProGramming\"\nlower_x = tolower(x)\nlower_x\n```\n\n![](../images/2020-11-06-r-1/Untitled14.png)\n\n### toupper()\n\n```r\nx = \"Hello, R ProGramming\"\nupper_x = toupper(x)\nupper_x\n```\n\n![](../images/2020-11-06-r-1/Untitled15.png)\n\n### substr(str, start, stop)\n\n```r\nx <- \"Hello World\"\ncutted_x <- substr(x, start=7, stop=11)\ncutted_x\n```\n\n![](../images/2020-11-06-r-1/Untitled16.png)\n\n### strsplit(x, split)\n\n`x`์˜ ์ž๋ฆฌ์—๋Š” ์Šค์นผ๋ผ, ๋ฒกํ„ฐ, ํ–‰๋ ฌ, ๋ฆฌ์ŠคํŠธ, ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„ ๋“ฑ์ด ์˜ฌ ์ˆ˜ ์žˆ๋‹ค. `split`์—๋Š” ๊ตฌ๋ถ„ํ•˜๊ณ ์ž ํ•˜๋Š” ๊ตฌ๋ถ„์ž๋ฅผ ๋„ฃ์–ด์ค€๋‹ค. `strplit()` ํ•จ์ˆ˜๋Š” ์ตœ์ข…์ ์œผ๋กœ ๋ฆฌ์ŠคํŠธ๋กœ ๊ฒฐ๊ณผ๋ฅผ ๋ฐ˜ํ™˜ํ•œ๋‹ค.\n\n```r\nx <- \"Hello World\"\ncut <- strsplit(x, \" \")\ncut\ncut[1]\ncut[[1]][1] # ๋ฆฌ์ŠคํŠธ๋ฅผ ํ˜ธ์ถœํ•  ๋•, ํ•ญ์ƒ ๋Œ€๊ด„ํ˜ธ๋ฅผ ๋‘ ๋ฒˆ ์จ์•ผํ•˜๋Š” ๊ฒƒ ๊ฐ™๋‹ค.\ncut[[1]][2]\n```\n\n![](../images/2020-11-06-r-1/Untitled17.png)\n\n### paste()\n\n๊ตฌ๋ถ„์ž๋ฅผ ๊ธฐ์ค€์œผ๋กœ ํ•˜๋‚˜์˜ ๋ฌธ์žฅ์œผ๋กœ ๊ฒฐํ•ฉํ•œ๋‹ค. ๊ตฌ๋ถ„์ž๊ฐ€ ์ฃผ์–ด์ง€์ง€ ์•Š์„ ๊ฒฝ์šฐ ๊ณต๋ฐฑ์ด ๊ธฐ๋ณธ์œผ๋กœ ์‚ฌ์šฉ๋œ๋‹ค.\n\n```r\nx <- c(\"Hello\", \"World\", \"This\", \"is\", \"R\", \"Programming\")\npaste(x)\n```\n\n![](../images/2020-11-06-r-1/Untitled18.png)\n\n### paste0()\n\n```r\npaste0(\"Hello\", \"R\", \"Programming\")\n```\n\n![](../images/2020-11-06-r-1/Untitled19.png)\n\n### ๋ฌธ์ž์—ด ๊ฒฐํ•ฉ\n\n```r\nx = c(\"Hello\", \"R\", \"Programming\")\ny = paste(x[1], x[2])\ny\n```\n\n![](../images/2020-11-06-r-1/Untitled20.png)\n\n```r\ny = paste(x[1], x[2], sep=\"-\")\ny\n```\n\n![](../images/2020-11-06-r-1/Untitled21.png)\n\n```r\ny = paste(x[1], x[2], sep=\",\")\ny\n```\n\n![](../images/2020-11-06-r-1/Untitled22.png)\n\n```r\ny = paste(x, collapse=\",\")\ny\n```\n\n![](../images/2020-11-06-r-1/Untitled23.png)\n\n### grep()\n\n์ฐพ์œผ๋ ค๋Š” ๊ฐ’์ด ์–ด๋А ์ธ์ž์— ์žˆ๋Š”์ง€๋ฅผ ์•Œ๋ ค์ค€๋‹ค.\n\n```r\nx = c(\"Hello\", \"R\", \"Programming\")\ngrep(\"a\", x) # \"a\"๋Š” ๋ณ€์ˆ˜ x์˜ ์„ธ ๋ฒˆ์งธ ์ธ์ž์ธ \"Programming\"์— ์žˆ๋‹ค.\n```\n\n![](../images/2020-11-06-r-1/Untitled24.png)\n\n```r\nx = c(\"Hello\", \"R\", \"Programming\")\ngrep(\"a|e\", x)\n```\n\n![](../images/2020-11-06-r-1/Untitled25.png)\n\n```r\nx = c(\"Hello\", \"R\", \"Programming\")\ngrep(\"p\", x, ignore.case=T) # ๋Œ€์†Œ๋ฌธ์ž์— ์ƒ๊ด€์—†์ด ์ฐพ์œผ๋ ค๋ฉด ignore.case=T๋กœ ํ•œ๋‹ค.\n```\n\n![](../images/2020-11-06-r-1/Untitled26.png)\n\n### replace()\n\n```r\nx = seq(1, 10, by=1)\ny = replace(x, x>5, 10) # 5๋ณด๋‹ค ํฐ ๊ฐ’๋“ค์€ 10์œผ๋กœ ๊ต์ฒดํ•œ๋‹ค.\ny\n```\n\n![](../images/2020-11-06-r-1/Untitled27.png)\n\n### which()\n\n```r\ny = c(T, T, F, T)\ny\nwhich(y) # TRUE์— ํ•ด๋‹นํ•˜๋Š” ๊ณณ์˜ ์œ„์น˜๋ฅผ ์•Œ๋ ค์ค€๋‹ค.\n```\n\n![](../images/2020-11-06-r-1/Untitled28.png)\n\n### ์ง‘ํ•ฉ ์—ฐ์‚ฐ\n\n```r\nx = seq(2, 8, by=1)\ny = seq(5, 10, by=1)\nx\ny\nunion(x, y)\nintersect(x, y)\nsetdiff(x, y) # x-y. 
์ฆ‰ y์—์„œ ๊ต์ง‘ํ•ฉ์— ํ•ด๋‹นํ•˜๋Š” ์›์†Œ๋“ค์„ ๋บ€ ๊ฒฐ๊ณผ\nsetdiff(y, x) # y-x\n\na = c(\"TP53\", \"APOE\", \"BRCA1\", \"BRCA2\", \"MDK1\", \"CTNNB1\")\nb = c(\"TP53\", \"MDK1\", \"ARID1A\", \"CTNNB1\", \"TLR2\")\nunion(a, b)\nintersect(a, b)\nsetdiff(a, b) # a-b\nsetdiff(b, a) # b-a\n```\n\n![](../images/2020-11-06-r-1/Untitled29.png)\n\n### unique()\n\n์ฃผ์–ด์ง„ ๋ฐ์ดํ„ฐ ๋‚ด์—์„œ ์ค‘๋ณต๋˜์ง€ ์•Š์€ ๋ฐ์ดํ„ฐ๋ฅผ ์ถœ๋ ฅ\n\n```r\nx = c(\"a\", \"c\", \"d\", \"f\", \"c\", \"e\", \"f\")\nx\nunique(x)\n```\n\n![](../images/2020-11-06-r-1/Untitled30.png)\n\n### match()\n\n```r\nx = c(1,2,3,4,5)\ny = c(2,5,7,8,9)\nmatch(x, y) # x๋ฅผ ๊ธฐ์ค€์œผ๋กœ y์™€ ๋™์ผํ•œ ์›์†Œ์˜ ์œ„์น˜๋ฅผ ์•Œ๋ ค์ค€๋‹ค.\nmatch(y, x, nomatch = 0) # ์ผ์น˜ํ•˜์ง€ ์•Š๋Š” ํ•ญ๋ชฉ์— ๋Œ€ํ•ด์„œ๋Š” NA ๋Œ€์‹  0๋กœ ์ฑ„์šด๋‹ค.\n```\n\n![](../images/2020-11-06-r-1/Untitled31.png)\n\n### is.element()\n\n```r\nx = seq(1:5)\ny = c(2,5,7,8,9)\nis.element(x, y) # x๋ฅผ ๊ธฐ์ค€์œผ๋กœ, x์˜ ๊ฐ ์›์†Œ๊ฐ€ y์— ์žˆ๋Š”์ง€๋ฅผ ๊ฐ๊ฐ์— ๋Œ€ํ•ด ๊ฒ€์‚ฌํ•œ ํ›„, T/F ๊ฒฐ๊ณผ๋ฅผ ๋‚ธ๋‹ค.\n```\n\n![](../images/2020-11-06-r-1/Untitled32.png)\n\n### apply()\n\nํ–‰๋ ฌ์ด๋‚˜ ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„์—์„œ ์‚ฌ์šฉ ๊ฐ€๋Šฅ\n\n```r\nx = matrix(1:50, nrow=10, ncol=5)\ncolnames(x) = rep(paste(\"sample\", 1:5, sep=\"\"))\nrownames(x) = rep(paste(\"gene\", 1:10, sep=\"\"))\nx\n\napply(x, 1, mean) # ํ–‰(=1)์— ๋Œ€ํ•œ ํ‰๊ท ๊ฐ’\napply(x, 2, mean) # ์—ด(=2)์— ๋Œ€ํ•œ ํ‰๊ท ๊ฐ’\napply(x, 2, sum) #ํ–‰(=1)์— ๋Œ€ํ•œ ํ•ฉ๊ณ„\n```\n\n![](../images/2020-11-06-r-1/Untitled33.png)\n\n```r\nrowMeans(x)\ncolMeans(x)\ncolSums(x)\n```\n\n![](../images/2020-11-06-r-1/Untitled34.png)\n\n## ๋ฌธ์ œ 3\n\n๋‹ค์Œ ์ค‘ ๋‹ค๋ฅธ ์ถœ๋ ฅ ๊ฒฐ๊ณผ๋ฅผ ๋ณด์ด๋Š” ๊ฒƒ์€?\n\n```r\npaste(c(\"Hello\", \"R\", \"Programming\"), sep=\" \")\npaste(\"Hello\", \"R\", \"Programming\")\npaste(c(\"Hello\", \"R\", \"Programming\"), collapse=\" \")\npaste(\"Hello\", \"R\", \"Programming\", sep=\" \")\n```\n\n๋‹ค์Œ ๋ช…๋ น์–ด๋ฅผ ์‹คํ–‰ํ–ˆ์„ ๋•Œ ์ถœ๋ ฅ๋˜๋Š” ๊ฒฐ๊ณผ๋Š”?\n\n```r\nx = c(10, 12, 7, 11, 6, 8, 9, 3, 4, 1, 2, 5)\nx[5] = 27\ny = sort(x)\nz = order(y)\nz\n```\n\n๋‹ค์Œ ๋ช…๋ น์–ด๋ฅผ ์‹คํ–‰ํ–ˆ์„ ๋•Œ ์ถœ๋ ฅ๋˜๋Š” ๊ฒฐ๊ณผ์™€ ๊ฐ™์€ ๊ฒฐ๊ณผ๋ฅผ ์ถœ๋ ฅํ•˜๊ธฐ ์œ„ํ•œ ๋ช…๋ น์–ด๋Š”?\n\n```r\nx = matrix(1:50, nrow=5, ncol=10)\ncolnames(x) = rep(paste(\"Group\", LETTERS[1:10], sep=\"\"))\nrownames(x) = rep(paste(\"Treat\", 1:5, sep=\"\"))\napply(x, 1, mean)\n\n# ๋‹ต\nrowMeans(x)\n```\n\n# ์ž๋ฃŒํ˜•\n\n## ์ž๋ฃŒํ˜• ๊ฐœ์š”\n### R์˜ ๊ธฐ๋ณธ ๋ฐ์ดํ„ฐ ์œ ํ˜•\n\n![](../images/2021-01-25-r-3/Untitled5.png)\n\n์ „์ฒด์ ์œผ๋กœ๋Š” ๋ฒกํ„ฐ๋ผ๋Š” ์šฉ์–ด๋ฅผ ์‚ฌ์šฉํ•˜๋ฉฐ, ๋ฒกํ„ฐ๋Š” ๋ฆฌ์ŠคํŠธ์™€ ์›์ž ๋ฒกํ„ฐ๋กœ ์ด๋ฃจ์–ด์ ธ ์žˆ๋‹ค.\n\n#### ๋ฒกํ„ฐ\n\n๋ฒกํ„ฐ๋ž€, ํ•œ ๊ฐœ ์ด์ƒ์˜ ๋ฐ์ดํ„ฐ์˜ ๋ฌถ์Œ์„ ๋งํ•œ๋‹ค. ๋ฐ์ดํ„ฐ๊ฐ€ ํ•œ ๊ฐœ์—ฌ๋„ ๋˜๊ณ  ์—ฌ๋Ÿฌ ๊ฐœ์—ฌ๋„ ๋œ๋‹ค. ๋ฌถ์—ฌ ์žˆ์œผ๋ฉด ๋ฒกํ„ฐ์ด๋ฉฐ, ๋Œ€๋ถ€๋ถ„์˜ ๋ฐ์ดํ„ฐ๋ฅผ ๋ฒกํ„ฐ๋ผ๊ณ  ๋ถ€๋ฅธ๋‹ค.\n\n![](../images/2021-01-25-r-3/Untitled6.png)\n\n#### NULL\n\nNULL์€ ๋ฐ์ดํ„ฐ๊ฐ€ ์—†์Œ์„ ๋œปํ•œ๋‹ค. ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๋ฐ์ดํ„ฐ๋ฅผ '์—†๋‹ค'๊ณ  ํ‘œํ˜„ํ•˜๊ธฐ ์œ„ํ•ด ์ฃผ๋กœ ์‚ฌ์šฉ๋œ๋‹ค.\n\n![](../images/2021-01-25-r-3/Untitled7.png)\n\n#### ์›์ž ๋ฒกํ„ฐ\n\n์›์ž ๋ฒกํ„ฐ๋ž€ ๋ชจ๋‘ ๊ฐ™์€ ์ข…๋ฅ˜์˜ ๋ฐ์ดํ„ฐ๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ๋Š” ๋ฐ์ดํ„ฐ์˜ ๋ฌถ์Œ์„ ๋งํ•œ๋‹ค. 
๊ธฐ๋ณธ ๋‹จ์œ„์— ํ•ด๋‹น๋˜๋Š” ๋ฐ์ดํ„ฐ๋ผ๋Š” ์˜๋ฏธ์—์„œ atomic์ด๋ผ๋Š” ๋‹จ์–ด๊ฐ€ ๋ถ™์—ˆ๋‹ค.\n\n![](../images/2021-01-25-r-3/Untitled8.png)\n\n#### ๋ฆฌ์ŠคํŠธ\n\n๋ฆฌ์ŠคํŠธ๋Š” ๋‹ค์–‘ํ•œ ์ข…๋ฅ˜์˜ ๋ฐ์ดํ„ฐ ํƒ€์ž…์„ ํ•œ ๊บผ๋ฒˆ์— ๊ฐ€์งˆ ์ˆ˜ ์žˆ๋Š” ๋ฐ์ดํ„ฐ์˜ ๋ฌถ์Œ์ด๋‹ค.\n\n![](../images/2021-01-25-r-3/Untitled9.png)\n\n#### ๊ธฐ๋ณธ ๋ฐ์ดํ„ฐ ํƒ€์ž…\n\n์›์ž ๋ฒกํ„ฐ ๋‚ด์—๋Š” ๋…ผ๋ฆฌํ˜•, ์ˆซ์žํ˜•, ๊ธ€์žํ˜• ํฌ๊ฒŒ ์„ธ ๊ฐ€์ง€๋กœ, ์ˆซ์žํ˜•์„ ๋” ์„ธ๋ถ„ํ™”ํ•˜์—ฌ ๋„ค ๊ฐ€์ง€๋กœ ๋‚˜๋ˆŒ ์ˆ˜ ์žˆ๋‹ค; ๋…ผ๋ฆฌํ˜•, ์ •์ˆ˜ํ˜•, ์‹ค์ˆ˜ํ˜•, ๊ธ€์žํ˜•\n\n![](../images/2021-01-25-r-3/Untitled10.png)\n\n์ •์ˆ˜ํ˜•๊ณผ ์‹ค์ˆ˜ํ˜•์„ ํ•จ๊ป˜ ์ˆซ์žํ˜•์ด๋ผ๊ณ  ํ•œ๋‹ค. ์‹ค์ˆ˜๋ฅผ ๊ธฐ๋ณธ์œผ๋กœ ํ•œ๋‹ค. \n\n![](../images/2021-01-25-r-3/Untitled11.png)\n\n## ๊ธฐ๋ณธ ์ž๋ฃŒํ˜•\n\nR์˜ ์ž๋ฃŒํ˜•์€ ํฌ๊ฒŒ ์„ธ ๊ฐ€์ง€๋กœ ๊ตฌ๋ถ„ ๊ฐ€๋Šฅํ•˜๋‹ค; ์ˆซ์ž(Numeric), ๋ฌธ์ž(Character), ๋…ผ๋ฆฌํ˜•(Logical). Numeric ์ž๋ฃŒํ˜•์—๋Š” ์ •์ˆ˜์™€ ์‹ค์ˆ˜๊ฐ€ ๋ชจ๋‘ ํฌํ•จ๋˜๋ฉฐ, Character ์ž๋ฃŒํ˜•์„ ์“ธ ๋• ํฐ ๋”ฐ์˜ดํ‘œ๋‚˜ ์ž‘์€ ๋”ฐ์˜ดํ‘œ๋ฅผ ๋ฐ˜๋“œ์‹œ ํ‘œ๊ธฐํ•ด์•ผ ํ•œ๋‹ค. ๋…ผ๋ฆฌํ˜•์€ `TRUE` ๋˜๋Š” `T`, `FALSE` ๋˜๋Š” `F`๋กœ๋งŒ ํ‘œ๊ธฐํ•  ์ˆ˜ ์žˆ๋‹ค.\n\n```r\n# R์—์„œ ์ฃผ์„์€ #์œผ๋กœ ํ‘œ๊ธฐ\n1\n100\n1000\n\"abc\"\n'abc'\nabc # -> ์ •์˜๋˜์ง€ ์•Š์€ ๋ณ€์ˆ˜์ผ ๊ฒฝ์šฐ์—” ์˜ค๋ฅ˜ ๋ฐœ์ƒ\nTRUE\nFALSE\nT\nF\n\n1, 100, 1000 # -> ์˜ค๋ฅ˜. ๋™์ผํ•œ ํƒ€์ž…์˜ ์ž๋ฃŒํ˜•์€ ํ•œ ์ค„์— ํ•˜๋‚˜์”ฉ๋งŒ ๊ฐ€๋Šฅ\n```\n\n## ๊ทธ ์™ธ ์ž๋ฃŒํ˜•\n\n- Vector, Matrix, Data Frame, List, Factor\n\n# ์ž๋ฃŒํ˜•: ๋ฒกํ„ฐ(vector)\n\n๋‹จ์ผ ์ž๋ฃŒํ˜•์„ ๊ฐ–๋Š” ๊ฒƒ์„ ๋ฒกํ„ฐ๋ผ๊ณ  ํ•œ๋‹ค.\n\n\nvector ์ž๋ฃŒํ˜•์„ ์ƒ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•: `c()`ํ•จ์ˆ˜ ์ด์šฉ. c: combine์˜ ์•ฝ์–ด\n\nvector์˜ ์ƒ์„ฑ์€ `()`, ํ˜ธ์ถœ์€ `[]`\n\n## vector ์ƒ์„ฑ\n\n```r\nc(1, 2, 3)\n```\n1 2 3\n\n```r\nc(\"one\", \"two\", \"three\")\n```\n\"one\" \"two\" \"three\"\n\n```r\nc(TRUE, FALSE, TRUE)\n```\nTRUE FALSE TRUE\n\n### key-value\n๋ชจ๋“  ๋ฒกํ„ฐ ๋‚ด์˜ ์š”์†Œ๋Š” ์ด๋ฆ„์„ ๊ฐ€์งˆ ์ˆ˜ ์žˆ๋‹ค. ์ฆ‰, key-value์˜ ์Œ์„ ์ง€์„ ์ˆ˜ ์žˆ๋Š”๋ฐ, key๊ฐ€ ๋ฒกํ„ฐ ๋‚ด์˜ ์š”์†Œ์˜ ์ด๋ฆ„, value๊ฐ€ ์š”์†Œ์˜ ๊ฐ’์ด๋‹ค. ํ˜•์‹์€ ๋‹ค์Œ๊ณผ ๊ฐ™๋‹ค. \n```\nkey = value\n```\nkey๋Š” ์ด๋ฆ„์ด๋ฏ€๋กœ, ๋”ฐ์˜ดํ‘œ๋ฅผ ๋„ฃ์„ ์ˆ˜๋„, ์ƒ๋žตํ•  ์ˆ˜ ์žˆ๋‹ค. ๋‹จ, ์ด๋Š” ์„ ํƒ ์‚ฌํ•ญ์ด๋ฉฐ key๋ฅผ ๋ช…์‹œํ•˜์ง€ ์•Š์•„๋„ ๋ฒกํ„ฐ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๋ฐ์—๋Š” ์ง€์žฅ์ด ์—†๋‹ค.\n```r\nvar.1 = c(\"a\" = \"k\")\nvar.1\n```\n```\na\n\"k\"\n```\n```r\nvar.2 = c(col1 = \"k\", col2 = \"kk\", col3 = \"kkk\")\nvar.2\n```\n```\ncol1 col2 col3\n\"k\" \"kk\" \"kkk\"\n```\n\n## vector๋ฅผ ๋ณ€์ˆ˜์— ์ €์žฅ\n\n```r\nvar1 <- c(1, 2, 3) \nvar2 <- c(\"one\", \"two\", \"three\")\nvar3 <- c(TRUE, FALSE, TRUE)\n```\n\n```r\nvar1\n```\n1 2 3\n\n```r\nvar2\n```\n\"one\" \"two\" \"three\"\n\n```r\nvar3\n```\nTRUE FALSE TRUE\n\n\n## ๊ฐ•์ œ ํ˜•๋ณ€ํ™˜\n๋ฒกํ„ฐ๋Š” ๋‹จ์ผ ์ž๋ฃŒํ˜•์„ ๊ฐ€์ ธ์•ผ ํ•œ๋‹ค. ๋”ฐ๋ผ์„œ ๊ด€๋ก€์ ์œผ๋กœ R์—์„œ ๊ฐ•์ œ๋กœ ํ˜•๋ณ€ํ™˜์„ ์‹œํ‚ค๋Š”๋ฐ, ์ •๋ณด๊ฐ€ ์—†์–ด์ง€์ง€ ์•Š๋Š” ๋ฐฉํ–ฅ์œผ๋กœ ๋ณ€ํ™˜๋œ๋‹ค; ๋…ผ๋ฆฌํ˜• โ†’ ์ˆซ์žํ˜• โ†’ ๊ธ€์žํ˜• ๋ฐฉํ–ฅ์œผ๋กœ ๋ฐ”๋€๋‹ค. \n```r\ntem <- c(1, T, F, TRUE)\ntem # 1 1 0 1\ntypeof(tem) # \"double\"\n```\nlogical ๊ฐ’๋“ค์ด ๊ฐ•์ œ๋กœ double ๊ฐ’(์ˆซ์žํ˜• ์ค‘ ๊ธฐ๋ณธ๊ฐ’์ธ ์‹ค์ˆ˜ํ˜• ๊ฐ’)๋“ค๋กœ ๋ฐ”๋€ ๊ฒƒ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋‹ค(๋…ผ๋ฆฌํ˜• โ†’ ์ˆซ์žํ˜•). \n```r\ntem <- c(\"๊ธ€์ž\", 1, -1)\ntem # \"๊ธ€์ž\" \"1\" \"-1\"\ntypeof(tem) # \"character\"\n```\ndouble ๊ฐ’๋“ค์ด ๊ฐ•์ œ๋กœ character ๊ฐ’(์ˆซ์žํ˜• ์ค‘ ๊ธฐ๋ณธ๊ฐ’์ธ ์‹ค์ˆ˜ํ˜• ๊ฐ’)๋“ค๋กœ ๋ฐ”๋€ ๊ฒƒ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋‹ค(์ˆซ์žํ˜• โ†’ ๊ธ€์žํ˜•). 
## Storing vectors in variables\n\n```r\nvar1 <- c(1, 2, 3) \nvar2 <- c(\"one\", \"two\", \"three\")\nvar3 <- c(TRUE, FALSE, TRUE)\n```\n\n```r\nvar1\n```\n1 2 3\n\n```r\nvar2\n```\n\"one\" \"two\" \"three\"\n\n```r\nvar3\n```\nTRUE FALSE TRUE\n\n\n## Forced coercion\nA vector must hold a single type, so by convention R coerces values, and it does so in the direction that loses no information: logical → numeric → character.\n```r\ntem <- c(1, T, F, TRUE)\ntem # 1 1 0 1\ntypeof(tem) # \"double\"\n```\nThe logical values were forced into double values (the default numeric type), i.e. logical → numeric. \n```r\ntem <- c(\"text\", 1, -1)\ntem # \"text\" \"1\" \"-1\"\ntypeof(tem) # \"character\"\n```\nThe double values were forced into character values, i.e. numeric → character. \n```r\ntem <- c(\"text\", T, FALSE)\ntem # \"text\" \"TRUE\" \"FALSE\"\ntypeof(tem) # \"character\"\n```\nThe logical values were forced directly into character values (logical → character) without first becoming numbers. So whenever several types are mixed, everything ends up as character.\n\n```r\nc(1, \"one\", TRUE)\n```\n\"1\" \"one\" \"TRUE\"\n\n\n## Concatenating vectors\n\n```r\nx <- c(1,2,3)\nx\n```\n1 2 3\n\n```r\ny <- c(4,5,6,7)\ny\n```\n4 5 6 7\n\n```r\nz <- c(x, y)\nz\n1 2 3 4 5 6 7\n```\n\n## Accessing vector values\n\nUse `[]`. Vector elements start at index `1`; index `0` returns an empty vector that only reveals the type (e.g. character(0)).\n\n```r\nvar <- c(\"a\", \"b\", \"c\")\nvar[0] \n```\ncharacter(0)\n\n```r\nvar[1]\n```\n\"a\"\n\n```r\nvar[2]\n```\n\"b\"\n\n```r\nvar[3]\n```\n\"c\"\n\n\n### Conditional access\n\n```r\nvar <- 1:10\nvar\n```\n 1 2 3 4 5 6 7 8 9 10\n\n```r\nvar > 5\n```\nFALSE FALSE FALSE FALSE FALSE TRUE TRUE TRUE TRUE TRUE\n\n```r\nvar[var > 5]\n```\n6 7 8 9 10\n\n```r\nvar[var<3|var>7]\n```\n1 2 8 9 10\n\n```r\nvar[var>3&var<7]\n```\n4 5 6\n\n\nThe `>` operator on its own returns a logical vector; combined with `[]`, it returns the values that satisfy the condition.\n\n## Using only part of the data\n### Using part of the data: subsetting with logicals \n```r\nsubs <- c(\"one\", \"two\", \"three\", \"four\", \"five\")\nsubs[c(T,F,T,F,T)] # \"one\" \"three\" \"five\"\nsubs[c(T,F)] # \"one\" \"three\" \"five\"; with too few values, the recycling rule expands this to T,F,T,F,T\nsubs[c(T,F,T,F,T,T,T)] # \"one\" \"three\" \"five\" NA NA; with too many values, Not Available appears\n```\n\n### Using part of numeric data: indexing\n```r\nsubs[c(1,2,3)] # \"one\" \"two\" \"three\"\nsubs[c(3,2,1)] # \"three\" \"two\" \"one\"\nsubs[c(1,1,1,1,1)] # \"one\" \"one\" \"one\" \"one\" \"one\"\nsubs[c(-1,-2)] # \"three\" \"four\" \"five\"; a negative index means exclude that element, and order does not matter for negatives\nsubs[c(-1,1)] # Error: negative and positive indexing cannot be mixed\nsubs[c(6)] # NA: out-of-range indices give Not Available\n```\n\n### Using part of character data\n\nTo select part of a character vector by name, the data must first be given names as key-value pairs.\n```r\nsubs_name <- c(a = \"one\", b = \"two\", c = \"three\", d = \"four\", e = \"five\")\nsubs_name[c(\"a\",\"c\",\"f\", \"a\")]\n```\n```\na c <NA> a\n\"one\" \"three\" NA \"one\"\n```\n```r\nsubs_name[c(-\"a\")] # Error: names cannot be negated\n```\nDouble brackets strip the attached name (key) and return only the value.\n\n```r\nsubs_name[[\"a\"]] # \"one\"\nsubs_name[[c(\"a\",\"b\")]] # Error\nsubs_name[[1]] # \"one\"\n```\n
๋ฒกํ„ฐ์™€ ์Šค์นผ๋ผ๋ฅผ ์—ฐ์‚ฐํ•˜๋ฉด ์–ด๋–ป๊ฒŒ ๋ ๊นŒ?\n```r\n1:10 # 1 2 3 4 5 6 7 8 9 10\n1:10 + 10 # 11 12 13 14 15 16 17 18 19 20\n```\n๋ฒกํ„ฐ์˜ ๊ฐ ์š”์†Œ์˜ ๊ธธ์ด์— ๋งž์ถฐ ์Šค์นผ๋ผ์˜ ๊ฐœ์ˆ˜๊ฐ€ ๋Š˜์–ด๋‚œ ๊ฒƒ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ด์™€ ๊ฐ™์€ ์›๋ฆฌ๋กœ element-wiseํ•œ ์—ฐ์‚ฐ์ด ์ผ์–ด๋‚œ๋‹ค๊ณ  ๋ณผ ์ˆ˜ ์žˆ๋‹ค. ๊ทธ๋Ÿผ ๋งŒ์•ฝ, ๊ธธ์ด๊ฐ€ ์„œ๋กœ ๋‹ค๋ฅธ ๋ฒกํ„ฐ๋ผ๋ฆฌ ์—ฐ์‚ฐํ•˜๋ฉด ์–ด๋–ป๊ฒŒ ๋ ๊นŒ?\n```r\n1:10 + 1:5 # 2 4 6 8 10 7 9 11 13 15\n```\n์œ„์™€ ๊ฐ™์ด ์—ฐ์‚ฐ๋˜๋Š” ๊ฒƒ์„ **์žฌํ™œ์šฉ ๊ทœ์น™**์ด๋ผ๊ณ  ํ•œ๋‹ค. ์žฌํ™œ์šฉ ๊ทœ์น™์ด ์„ฑ๋ฆฝํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋‘ ๋ฒกํ„ฐ๊ฐ€ ๋ฐฐ์ˆ˜ ๊ด€๊ณ„์ด์–ด์•ผ ํ•œ๋‹ค๋Š” ์กฐ๊ฑด์ด ์žˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋‹ค์Œ์˜ ์ฝ”๋“œ๋Š” ๊ฒฐ๊ณผ๋Š” ๋‚˜์˜ค๋‚˜, ๊ฒฝ๊ณ ๋ฅผ ๋ฐ›๋Š”๋‹ค. \n```r\n1:10 + 1:3 \n```\n```\n2 4 6 5 7 9 8 10 12 11\n๊ฒฝ๊ณ ๋ฉ”์‹œ์ง€(๋“ค): In 1:10 + 1:3 : ๋‘ ๊ฐ์ฒด์˜ ๊ธธ์ด๊ฐ€ ์„œ๋กœ ๋ฐฐ์ˆ˜๊ด€๊ณ„์— ์žˆ์ง€ ์•Š์Šต๋‹ˆ๋‹ค\n```\n\n\n## ๋ฒกํ„ฐ ์—ฐ์‚ฐ\n\n์ˆซ์žํ˜• ๋ฒกํ„ฐ๋ผ๋ฆฌ๋Š” ๋ฒกํ„ฐ ๋‚ด ๊ฐ ์š”์†Œ ๋ณ„๋กœ ์—ฐ์‚ฐ์ด ๊ฐ€๋Šฅํ•˜๋‹ค.\n\n### ๋ฒกํ„ฐ ๋‚ด ์—ฐ์‚ฐ\n\n```r\nvar <- c(1, 3, 5, 7, 9)\nvar\n```\n1 3 5 7 9\n\n```r\nvar + 2\n```\n3 5 7 9 11\n\n```r\nvar - 2\n```\n-1 1 3 5 7\n\n```r\nvar * 2\n```\n2 6 10 14 18\n\n```r\nvar / 2\n```\n0.5 1.5 2.5 3.5 4.5\n\n```r\n(var + 2)*4\n```\n12 20 28 36 44\n\n\n### ๋ฒกํ„ฐ ๊ฐ„ ์—ฐ์‚ฐ\n\n```r\nx <- c(1, 3, 5)\ny <- c(2, 4, 6)\nx\n```\n1 3 5\n\n```r\ny\n```\n2 4 6\n\n```r\nx+y\n```\n3 7 11\n\n```r\nx-y\n```\n-1 -1 -1\n\n```r\nx*y\n```\n2 12 30\n\n```r\nx/y\n```\n0.5000000 0.7500000 0.8333333\n\n\n๋ฒกํ„ฐ์˜ ๊ธธ์ด๊ฐ€ ๋‹ค๋ฅผ ๊ฒฝ์šฐ ์—ฐ์‚ฐ์€ ๋ถˆ๊ฐ€๋Šฅํ•˜๋‹ค.\n\n```r\nx <- c(1, 3, 5)\ny <- c(2, 4, 6, 8, 10)\n\nx+y\n3 7 11 9 13\n๊ฒฝ๊ณ ๋ฉ”์‹œ์ง€(๋“ค): \nIn x + y : ๋‘ ๊ฐ์ฒด์˜ ๊ธธ์ด๊ฐ€ ์„œ๋กœ ๋ฐฐ์ˆ˜๊ด€๊ณ„์— ์žˆ์ง€ ์•Š์Šต๋‹ˆ๋‹ค\n```\n\n\n# ์ž๋ฃŒํ˜•: ๋ฆฌ์ŠคํŠธ(list)\n\n์„œ๋กœ ๋‹ค๋ฅธ ๊ธธ์ด์˜ ๋ฒกํ„ฐ ๋˜๋Š” ์„œ๋กœ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ๊ตฌ์กฐ๋“ค์„ ๋ชจ์•„๋†“์€ ์ง‘ํ•ฉ์ด๋‹ค. ํ–‰๋ ฌ์€ ๊ฐ™์€ ๋ฐ์ดํ„ฐ ํƒ€์ž…๋งŒ ํ—ˆ์šฉํ–ˆ์—ˆ์œผ๋‚˜, ๋ฆฌ์ŠคํŠธ๋Š” ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„๊ณผ ๋™์ผํ•˜๊ฒŒ ์—ฌ๋Ÿฌ ์ž๋ฃŒํ˜•์„ ํ—ˆ์šฉํ•œ๋‹ค. R ์ž๋ฃŒํ˜• ์ค‘์—์„œ ๊ฐ€์žฅ ์œ ์—ฐํ•œ ์ž๋ฃŒํ˜•์ด๋ผ๊ณ  ํ•  ์ˆ˜ ์žˆ๋‹ค. \n\n๋ฆฌ์ŠคํŠธ ์ž๋ฃŒํ˜•์„ ๋งŒ๋“ค ๋• ํ•จ์ˆ˜ `list()`๋ฅผ ์ด์šฉํ•œ๋‹ค. \n\n```r\n# ์„œ๋กœ ๋‹ค๋ฅธ ํƒ€์ž…์˜ ์ž๋ฃŒํ˜•์˜ ๋ฒกํ„ฐ๋“ค์„ ๋งŒ๋“ ๋‹ค.\na<-c(1,2,3)\nb<-c(\"a\",\"b\",\"c\",\"d\",\"e\")\nc<-c(TRUE,FALSE)\n\n# list() ํ•จ์ˆ˜๋ฅผ ์ด์šฉํ•˜์—ฌ ๊ฐ ๋ฒกํ„ฐ๋ฅผ ๋ฆฌ์ŠคํŠธํ™”ํ•˜์—ฌ ๋ณ€์ˆ˜์— ์ €์žฅํ•œ๋‹ค. \n# ๋งŒ๋“ค ๋•Œ ๋ฒกํ„ฐ๊ฐ€ ๋‹ด๊ณ  ์žˆ๋Š” ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•œ ์ •๋ณด๋ฅผ ์ง์ ‘ ๊ธฐ์ˆ ํ•ด ์ค„ ์ˆ˜๋„ ์žˆ๋‹ค.\nlist1 <- list(a, b, c)\nlist1\nlist2 <- list(id=a, name=b, positive=c)\nlist2\n\n# ๋ฆฌ์ŠคํŠธ์—์„œ ๊ฐ’์˜ ์ถ”์ถœ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•œ๋‹ค. \n# ํ˜ธ์ถœ ์‹œ, ์ธ๋ฑ์Šค ๋ฒˆํ˜ธ๋กœ ํ˜ธ์ถœํ•  ์ˆ˜๋„ ์žˆ๊ณ , ๋ฆฌ์ŠคํŠธ๋ฅผ ๋งŒ๋“ค ๋•Œ ๊ธฐ์ˆ ํ–ˆ๋˜ ์ •๋ณด๋ฅผ ์ด์šฉํ•ด ํ˜ธ์ถœํ•  ์ˆ˜๋„ ์žˆ๋‹ค.\nlist2[[1]]\nlist2[[3]]\nlist2[[\"name\"]]\nlist2(๋‹ฌ๋Ÿฌํ‘œ์‹œ)name\n\nclass(list1[1])\nclass(list2[1])\n```\n\n\n\n์ธ๋ฑ์‹ฑ ์—ฐ์‚ฐ๋„ ๊ฐ€๋Šฅํ•˜๋‹ค.\n\n```r\nlist1[c(1,3)] # ๋ฆฌ์ŠคํŠธ์˜ ์ฒซ ๋ฒˆ์งธ์™€ ์„ธ ๋ฒˆ์งธ๋ฅผ ์ถœ๋ ฅํ•œ๋‹ค.\n```\n\n![](../images/2020-11-06-r-1/Untitled85.png)\n\n```[[1]]```๊ณผ ```[[2]]``` ์— ์†์ง€ ๋ง ๊ฒƒ. ๋ฆฌ์ŠคํŠธ์˜ ๋ฒˆํ˜ธ๊ฐ€ ์•„๋‹ˆ๋‹ค. 
\n\n```r\nx = seq(5)\ny = c(\"one\", \"two\")\nz = matrix(1:10, nrow=2, ncol=5)\n\nlist.all = list(x, y, z)\nlist.all\n\nlist.all[[1]]\nlist.all[[1]][1]\n\nlist.all[[2]]\nlist.all[[3]][[1,3]]\n\nnames(list.all) = c(\"A\", \"B\", \"C\") \n# ๋ฆฌ์ŠคํŠธ๋Š” ๊ต‰์žฅํžˆ ์œ ์—ฐํ•˜๊ณ  ๋‹ค์–‘ํ•œ ํƒ€์ž…์˜ ๋ฐ์ดํ„ฐ๋ฅผ ๋‹ฎ์„ ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์ด๋ ‡๊ฒŒ ์ด๋ฆ„์„ ์ง€์ •ํ•ด์ฃผ๋Š” ๊ฒŒ ์ข‹๋‹ค.\nlist.all(๋‹ฌ๋Ÿฌํ‘œ์‹œ)A\n```\n\n![](../images/2020-11-06-r-1/Untitled86.png)\n\n# ์ž๋ฃŒํ˜•: ํŒฉํ„ฐ(Factor)\n\nํŒฉํ„ฐ๋Š” ๋ฒกํ„ฐ์™€ ์œ ์‚ฌํ•˜๋‹ค. ๋‹ค๋งŒ level ๊ฐ’์„ ๊ฐ€์ง€๋ฉฐ, ๋ฒ”์ฃผํ˜• ๋ฐ์ดํ„ฐ์— ์ฃผ๋กœ ์‚ฌ์šฉ๋œ๋‹ค.\n\n- ๋ฒ”์ฃผํ˜• ๋ฐ์ดํ„ฐ: Male/Female, up/down/left/right, A/B/C/D\n\nlevel ๊ฐ’์— ๋ฒ—์–ด๋‚˜๋Š” ๋ฐ์ดํ„ฐ๊ฐ€ ์ž…๋ ฅ๋˜๋ฉด NA ๊ฐ’์œผ๋กœ ์ฒ˜๋ฆฌ๋œ๋‹ค.\n\n๊ฐ’์˜ ์ถ”์ถœ์€ ๋ฒกํ„ฐ์™€ ๋™์ผํ•˜๊ฒŒ `[]`๋ฅผ ์“ด๋‹ค.\n\n๋จผ์ €, ๋ฒกํ„ฐ๋ฅผ ๋งŒ๋“ค์–ด ๋ณด์ž.\n\n```r\ntool <- c(\"C\", \"Python\", \"Java\")\ntool\n```\n\n![](../images/2020-11-06-r-1/Untitled87.png)\n\n์ด ๋ฒกํ„ฐ๋ฅผ ํŒฉํ„ฐ๋กœ ๋ณ€ํ™˜ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” `factor()` ํ•จ์ˆ˜๋ฅผ ์ด์šฉํ•œ๋‹ค.\n\n```r\nfactor_var <- factor(tool)\nfactor_var\nclass(factor_var)\n```\n\n![](../images/2020-11-06-r-1/Untitled88.png)\n\nlevel์˜ ์ˆœ์„œ๋ฅผ ์ง์ ‘ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ง์ ‘ ์ง€์ •ํ•ด ์ค„ ์ˆ˜ ์žˆ๋‹ค.\n\n```r\nfactor_var <- factor(tool, level=(\"R\", \"Python\", \"JAVA\"))\nfactor_var\n```\n\n![](../images/2020-11-06-r-1/Untitled89.png)\n\n์ž๋ฃŒํ˜•์„ ์ˆœ์„œ๋Œ€๋กœ ์ˆซ์ž๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, `as.numeric()` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค.\n\n```r\ntool <- as.numeric(factor_var)\ntool\n```\n\n![](../images/2020-11-06-r-1/Untitled90.png)\n\n# ์ž๋ฃŒํ˜•\n\n# ์ž๋ฃŒํ˜•: ํ–‰๋ ฌ(matrix)\n\nํ–‰๋ ฌ์€ ๋™์ผํ•œ ์ž๋ฃŒํ˜•์˜ ์ง‘ํ•ฉ์ด๋ฉฐ, 2์ฐจ์›์˜ ๋ฐ์ดํ„ฐ ๊ตฌ์กฐ๋ฅผ ๋งํ•œ๋‹ค.\n\n![](../images/2020-11-06-r-1/Untitled35.png)\n\n## ํ–‰๋ ฌ ์ƒ์„ฑ\n\n### matrix()๋กœ ํ–‰๋ ฌ ์ƒ์„ฑ\n\n```r\nmatrix(1:9, nrow=3, ncol=3)\n# ํ–‰๊ณผ ์—ด์˜ ๊ฐœ์ˆ˜๋ฅผ ์ง€์ •ํ•ด์ฃผ์–ด์•ผ ํ•œ๋‹ค.\n```\n\n![](../images/2020-11-06-r-1/Untitled36.png)\n\n์—ด๋ถ€ํ„ฐ ๊ฐ’์ด ์ฑ„์›Œ์ง€๋Š” ๊ฒƒ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋‹ค. ์ฆ‰, `byrow` ํŒŒ๋ผ๋ฏธํ„ฐ์˜ default๊ฐ’์€ `FALSE`๋‹ค. ๋”ฐ๋ผ์„œ ์ด๋ฅผ `TRUE`๋กœ ์„ค์ •ํ•˜๋ฉด ํ–‰๋ถ€ํ„ฐ ๊ฐ’์ด ์ฑ„์›Œ์ง€๋Š” ๊ฒƒ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋‹ค.\n\n![](../images/2020-11-06-r-1/Untitled37.png)\n\n### ๋ฒกํ„ฐ๋กœ ํ–‰๋ ฌ ์ƒ์„ฑ: cbind(), rbind()\n\ncbind๋Š” column bind์˜ ์•ฝ์ž์ด๋ฉฐ, rbind๋Š” row bind์˜ ์•ฝ์ž์ด๋‹ค.\n\n```r\nx <- c(1,2,3)\ny <- c(4,5,6)\nz <- c(7,8,9)\n```\n\n```r\nmat1 <- cbind(x,y,z)\nmat1\n```\n\n![](../images/2020-11-06-r-1/Untitled38.png)\n\n```r\nmat2 <- rbind(x,y,z)\nmat2\n```\n\n![](../images/2020-11-06-r-1/Untitled39.png)\n\n```r\ncbind(mat1, c(10, 11, 12))\n```\n\n![](../images/2020-11-06-r-1/Untitled40.png)\n\n```r\nrbind(mat2, c(10, 11, 12))\n```\n\n![](../images/2020-11-06-r-1/Untitled41.png)\n\n```r\nrbind(mat1, mat2)\n```\n\n![](../images/2020-11-06-r-1/Untitled42.png)\n\n## ํ–‰๋ ฌ์˜ column๊ณผ row์˜ ์ด๋ฆ„ ํ™•์ธ ๋ฐ ๋ณ€๊ฒฝ\n\n### column์˜ ์ด๋ฆ„๊ณผ row์˜ ์ด๋ฆ„ ํ™•์ธ\n\ncolumn์€ ๋ณดํ†ต R์—๋Š” '๋ณ€์ˆ˜'๋กœ ๋ณด๋Š” ๊ฒฝํ–ฅ์ด ์žˆ๋‹ค.\n\n```r\ncolnames(mat1)\nrownames(mat1)\n```\n\n### column์˜ ์ด๋ฆ„๊ณผ row์˜ ์ด๋ฆ„ ๋ณ€๊ฒฝ\n\n```r\ncolnames(mat1)<-c(\"col1\", \"col2\", \"col3\")\nrownames(mat1) <- c(\"row1\", \"row2\", \"row3\")\nmat1\ncolnames(mat1)\nrownames(mat1)\n```\n\n![](../images/2020-11-06-r-1/Untitled43.png)\n\n![](../images/2020-11-06-r-1/Untitled44.png)\n\n## ํ–‰๋ ฌ์—์„œ ๊ฐ’์˜ ์ถ”์ถœ\n\n๋ฒกํ„ฐ์—์„œ์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ `[]`๋ฅผ ์‚ฌ์šฉํ•˜๋˜, `,`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ–‰๊ณผ ์—ด์„ ๊ตฌ๋ถ„ํ•œ๋‹ค. 
# Type: matrix\n\nA matrix is a two-dimensional data structure in which every element has the same type.\n\n![](../images/2020-11-06-r-1/Untitled35.png)\n\n## Creating a matrix\n\n### With matrix()\n\n```r\nmatrix(1:9, nrow=3, ncol=3)\n# the numbers of rows and columns must be given\n```\n\n![](../images/2020-11-06-r-1/Untitled36.png)\n\nYou can see the values fill column by column: the `byrow` parameter defaults to `FALSE`. Set it to `TRUE` and the values fill row by row.\n\n![](../images/2020-11-06-r-1/Untitled37.png)\n\n### From vectors: cbind(), rbind()\n\ncbind is short for column bind, rbind for row bind.\n\n```r\nx <- c(1,2,3)\ny <- c(4,5,6)\nz <- c(7,8,9)\n```\n\n```r\nmat1 <- cbind(x,y,z)\nmat1\n```\n\n![](../images/2020-11-06-r-1/Untitled38.png)\n\n```r\nmat2 <- rbind(x,y,z)\nmat2\n```\n\n![](../images/2020-11-06-r-1/Untitled39.png)\n\n```r\ncbind(mat1, c(10, 11, 12))\n```\n\n![](../images/2020-11-06-r-1/Untitled40.png)\n\n```r\nrbind(mat2, c(10, 11, 12))\n```\n\n![](../images/2020-11-06-r-1/Untitled41.png)\n\n```r\nrbind(mat1, mat2)\n```\n\n![](../images/2020-11-06-r-1/Untitled42.png)\n\n## Checking and changing a matrix's column and row names\n\n### Checking the column and row names\n\nIn R a column is usually viewed as a 'variable'.\n\n```r\ncolnames(mat1)\nrownames(mat1)\n```\n\n### Changing the column and row names\n\n```r\ncolnames(mat1)<-c(\"col1\", \"col2\", \"col3\")\nrownames(mat1) <- c(\"row1\", \"row2\", \"row3\")\nmat1\ncolnames(mat1)\nrownames(mat1)\n```\n\n![](../images/2020-11-06-r-1/Untitled43.png)\n\n![](../images/2020-11-06-r-1/Untitled44.png)\n\n## Extracting values from a matrix\n\nAs with vectors, use `[]`, with a `,` separating rows from columns. \n\n```r\nmat1\n\nmat1[1,3]\nmat1[2,2]\nmat1[1,]\nmat1[,2]\n```\n\n![](../images/2020-11-06-r-1/Untitled45.png)\n\nValues can be called either by index or by writing the row and column names directly.\n\n```r\nmat1[1,3]\nmat1[\"row1\", \"col3\"]\n```\n\n![](../images/2020-11-06-r-1/Untitled46.png)\n\n```r\nmat1[\"row3\",c(\"col1\",\"col3\")]\n```\n\n![](../images/2020-11-06-r-1/Untitled47.png)\n\n## Matrix operations\n\n### Operations on a matrix\n\n```r\nmat1\nmat1+2\nmat1-2\nmat1*2\nmat1/2\n```\n\n![](../images/2020-11-06-r-1/Untitled48.png)\n\n### Operations between matrices\n\n```r\nmat2 <- cbind(c(10,3,9), c(3,4,1), c(2,5,7))\nmat2\nmat1\n\nmat1 + mat2\nmat1 - mat2\nmat1 / mat2\nmat1 * mat2\n```\n\n![](../images/2020-11-06-r-1/Untitled49.png)\n\n### cbind(), rbind()\n\n```r\nvec1 = c(1:5)\nvec2 = c(6:10)\nvec3 = c(11:15)\n\nvec1\nvec2\nvec3\n\nmat1 = cbind(vec1, vec2, vec3)\nmat1\ndim(mat1)\nclass(mat1)\n\nmat2 = rbind(vec1, vec2, vec3)\nmat2\n```\n\n![](../images/2020-11-06-r-1/Untitled50.png)\n\n### rownames(), colnames()\n\n```r\nrownames(mat1) = c(\"A\", \"B\", \"C\", \"D\", \"E\")\ncolnames(mat1) = c(\"col01\", \"col02\", \"col03\")\nmat1\n\ncolnames(mat2) = c(\"A\", \"B\", \"C\", \"D\", \"E\")\nmat2\n```\n\n![](../images/2020-11-06-r-1/Untitled51.png)\n\n### Setting colnames and rownames on a matrix\n\n```r\nx = matrix(1:25, nrow=5, ncol=5)\nx\n\ncolnames(x)=rep(paste(\"sample\", 1:5, sep=\"_\"))\nrownames(x)=rep(paste(\"gene\", 1:5, sep=\"_\"))\nx\n\nx[,4] # print only column 4\nx[2,] # print only row 2\nx[2:3, 1:4]\n```\n
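Note that `*` above is element-wise; true matrix multiplication in base R is the `%*%` operator (a quick sketch with made-up values):\n\n```r\na <- matrix(1:4, nrow = 2)            # 2x2, filled column by column\nb <- matrix(c(1, 0, 0, 1), nrow = 2)  # 2x2 identity matrix\na * b    # element-wise product\na %*% b  # matrix product: multiplying by the identity leaves a unchanged\n```\n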
value์˜ ๊ฐœ์ˆ˜๊ฐ€ ๋™์ผํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์— ์œ ์˜ํ•œ๋‹ค.\n\n```r\nvec1 <- c(1,4,7)\nvec2 <- c(\"one\",\"four\",\"seven\")\nvec3 <- c(TRUE,TRUE,FALSE)\n```\n\n```r\nname <- c('john', 'jaehee', 'juliet', 'james')\nsex <- c('f', 'f', 'f', 'm')\noccup <- c('student', 'doctor', 'cto', 'data scientist')\nage <- c(40, 35, 43, 29)\n```\n\n์ด ๋ฒกํ„ฐ๋“ค์„ `data.frame()` ํ•จ์ˆ˜๋ฅผ ์ด์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„ ์ž๋ฃŒํ˜•์œผ๋กœ ๋งŒ๋“ค ์ˆ˜ ์žˆ๋‹ค.\n\n```r\ndf <- data.frame(vec1, vec2, vec3) # ๋ฒกํ„ฐ๋“ค์„ ํŒŒ๋ผ๋ฏธํ„ฐ๋กœ ๋„˜๊ธฐ๋ฉด ๋์ด๋‹ค.\n```\n\n๊ฒฐ๊ณผ๋ฌผ์„ ๋ณด๋ฉด, ๋งˆ์น˜ `cbind()` ํ•œ ๊ฒƒ๊ณผ ๊ฐ™์ด ๋‚˜์˜จ๋‹ค.\n\n```r\ndf\n```\n\n```r\ndata.frame(name, age, sex, occup)\n```\n```\n name age sex occup\n1 john 40 f student\n2 jaehee 35 f doctor\n3 juliet 43 f cto\n4 james 29 m data scientist\n```\n```r\nmember <- data.frame(name, age, sex, occup)\nmember\n```\n```\n name age sex occup\n1 john 40 f student\n2 jaehee 35 f doctor\n3 juliet 43 f cto\n4 james 29 m data scientist\n```\n```r\nage[1]\nname[3]\n```\n```\n40\n\"juliet\"\n```\n```r\nmember[1] # member[1,]\n```\n```\n name\n1 john\n2 jaehee\n3 juliet\n4 james\n```\n```r\nmember[1,] # ํ–‰, ์—ด\n```\n```\n name\n1 john\n2 jaehee\n3 juliet\n4 james\n```\n```r\nmember(๋‹ฌ๋Ÿฌํ‘œ์‹œ)name\n```\n```\n[1] \"john\" \"jaehee\" \"juliet\" \"james\" \n```\n#### ๋ฌธ์ œ\njaehee์˜ occup์„ ์ถœ๋ ฅํ•˜๋ผ.\n\n#### ์ •๋‹ต\n```r\nmember[2,4]\n```\n\n#### ๋ฌธ์ œ\njohn์˜ ์„ฑ๋ณ„์„ m์„ ๋ฐ”๊พธ๋ผ.\n\n#### ์ •๋‹ต\n```r\nmember[1,3] <- 'm'\n```\n\n## ๋ฌธ์ œ\n\n๋‹ค์Œ์˜ ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„ df๋กœ๋ถ€ํ„ฐ ์ปฌ๋Ÿผ col1, col2, col3์˜ ๊ฐ’์ด 3๊ฐœ์˜ ์ปฌ๋Ÿผ์—์„œ ๋ชจ๋‘ 5 ์ด์ƒ์ธ ํ–‰์„ ์ถ”์ถœํ•˜์‹œ์˜ค.\n\n```r\nname = c(\"A\", \"B\", \"C\", \"D\")\ncol1 = c(10, 1, 10, 5)\ncol2 = c(1, 5, 9, 9)\ncol3 = c(2, 8, 1, 8)\n\ndf = data.frame(name, col1, col2, col3)\ndf\n\nrowname = c(\"row1\", \"row2\", \"row3\", \"row4\")\nrownames(df) = rowname\ndf\n\ndf[col1>=5 & col2>=5 & col3>=5,]\n```\n\n![](../images/2020-11-06-r-1/Untitled52.png)\n\n![](../images/2020-11-06-r-1/Untitled53.png)\n\n## ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„์—์„œ ๊ฐ’ ์ถ”์ถœ ๋ฐ column, row์ด๋ฆ„ ์กฐํšŒ/๋ณ€๊ฒฝ\n\nํ–‰๋ ฌ๊ณผ ๋™์ผํ•˜๋‹ค.\n\n### ์—ด(column) ์ถ”์ถœ\n\n```r\nvec1 <- c(1,4,7)\nvec2 <- c(\"one\",\"four\",\"seven\")\nvec3 <- c(TRUE,TRUE,FALSE)\n\ndf <- data.frame(vec1, vec2, vec3)\n\ndf[,1]\ndf[,2]\ndf[,\"vec3\"]\n```\n\n![](../images/2020-11-06-r-1/Untitled54.png)\n\n#### ํŠน์ • column์— ๋Œ€ํ•œ ๊ฐ’ ์ถ”์ถœ\n\n```r\ndf\ndf(๋‹ฌ๋Ÿฌํ‘œ์‹œ)vec2\ndf(๋‹ฌ๋Ÿฌํ‘œ์‹œ)vec2[2:3]\n```\n\n\n# ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„ ๋ฐ ๋ฐ์ดํ„ฐ ๊ฐ์ฒด ํƒ์ƒ‰ ์‹ค์Šต\n\n## ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„์˜ ์ƒ์„ฑ\n\n### 50๊ฐœ์˜ ๋ฐœํ˜„๋Ÿ‰์— ๋Œ€ํ•œ 2์ฐจ์› ๋ฐ์ดํ„ฐ ์ƒ์„ฑ\n\n์œ ์ „์ž ๋ฐ์ดํ„ฐ๋ฅผ ์ž„์˜๋กœ ์ƒ์„ฑ\n\n```r\ngene=paste0(\"gene\", 1:50)\ngene\n```\n\npaste0() ํ•จ์ˆ˜๋Š” ๋ฌธ์ž์—ด๊ณผ ๋ฌธ์ž์—ด์„ ๊ณต๋ฐฑ์—†์ด ์—ฐ๊ฒฐ์‹œํ‚ค๋Š” ํ•จ์ˆ˜๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, \"gene\"๊ณผ \"1\"์„ ๋ถ™์—ฌ gene1์ด ๋˜๋Š” ๋ฐฉ์‹์ด๋‹ค. paste() ํ•จ์ˆ˜๋„ ์žˆ๋‹ค.\n\n![](../images/2020-11-06-r-1/Untitled56.png)\n\n### 5๊ฐœ ์ƒ˜ํ”Œ ์ด๋ฆ„ ๋ฒกํ„ฐ ์ƒ์„ฑ\n\n```r\nsample=paste0(\"sample\", 1:5)\nsample\n```\n\n![](../images/2020-11-06-r-1/Untitled57.png)\n\n### 250๊ฐœ ์ •๊ทœ๋ถ„ํฌ ๋‚œ์ˆ˜๋กœ ํ–‰๋ ฌ์„ ์ƒ์„ฑํ•œ๋‹ค. 
์—ด์€ ์œ ์ „์ž ์ˆ˜, ํ–‰์€ ์ƒ˜ํ”Œ์˜ ์ˆ˜ ์ด๋‹ค.\n\n```r\nexpression = matrix(rnorm(250), nrow = 50, ncol=5)\nexpression\n```\n\n![](../images/2020-11-06-r-1/Untitled58.png)\n\n### ์—ด ์ด๋ฆ„์„ sample ๋ฒกํ„ฐ๋กœ ์ง€์ •ํ•œ๋‹ค.\n\n```r\ncolnames(expression) = sample\n```\n\n### ํ–‰ ์ด๋ฆ„์„ gene ๋ฒกํ„ฐ๋กœ ์ง€์ •ํ•œ๋‹ค.\n\n```r\nrownames(expression) = gene\n```\n\n### ๊ฒฐ๊ณผ๋ฌผ์„ ์ถœ๋ ฅํ•ด๋ณธ๋‹ค.\n\n```r\nexpression\n```\n\n![](../images/2020-11-06-r-1/Untitled59.png)\n\n### ๋ฒกํ„ฐ์™€ ํ–‰๋ ฌ์„ ๋ถ™์—ฌ ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„์œผ๋กœ ์ƒ์„ฑํ•œ๋‹ค.\n\n```r\nexam.df = data.frame(gene, expression)\nexam.df\n```\n\n![](../images/2020-11-06-r-1/Untitled60.png)\n\n## ๋ฐ์ดํ„ฐ ๊ฐ์ฒด ํƒ์ƒ‰\n\n### dim(), ndim(), nrow(), ncol(), length(): ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„์˜ ์ฐจ์› ๋ฐ ๊ตฌ์กฐ ํŒŒ์•…ํ•˜๊ธฐ\n\n```r\ndim(exam.df)\n```\n\n[1] 50 6\n\ndim() ํ•จ์ˆ˜์˜ ๊ฒฐ๊ณผ๋Š” str() ํ•จ์ˆ˜๋กœ ํŒŒ์•…์ด ์ „๋ถ€ ๊ฐ€๋Šฅํ•œ ์ •๋ณด๋‹ค. ์—ฌ๊ธฐ์„œ str์€ string์ด ์•„๋‹ˆ๋ผ structure๋‹ค.\n\n๋ฐ์ดํ„ฐ ๊ฐ์ฒด์˜ ์ฐจ์›๋งŒ ์•Œ๊ณ  ์‹ถ๊ฑฐ๋‚˜ ์•„๋‹ˆ๋ฉด ๋ฐ์ดํ„ฐ ๊ฐ์ฒด์˜ ์ฐจ์›์„ ๋ฒกํ„ฐ๋กœ ํ•ด์„œ indexingํ•ด์„œ ์“ธ ์ผ์ด ์žˆ์„ ๋•Œ ์ด ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค. \n\n```r\nndim(exam.df)\n```\n[1] 50 6\n\n```r\nnrow(exam.df)\n```\n[1] 50\n\n```r\nncol(exam.df)\n```\n[1] 5\n\n```r\nlength(exam.df)\n```\n[1] 6\n\nlength() ํ•จ์ˆ˜๋Š” column์˜ ๊ฐœ์ˆ˜(๋ณ€์ˆ˜์˜ ๊ฐœ์ˆ˜)๋ฅผ ์ถœ๋ ฅํ•œ๋‹ค. ํ•ด๋‹น column์˜ obs(observataion)์˜ ๊ฐœ์ˆ˜๋ฅผ ๋ณด๊ณ ์ž ํ•  ๋• ๋‹ค์Œ๊ณผ ๊ฐ™์ด (๋‹ฌ๋Ÿฌํ‘œ์‹œ)๋ฅผ ์“ด๋‹ค. ๋ฒกํ„ฐ ์ž๋ฃŒํ˜•์—์„œ๋„ length๋ฅผ ์ด์šฉํ•˜์—ฌ ๊ทธ ๊ธธ์ด๋ฅผ ํŒŒ์•…ํ–ˆ์—ˆ๋‹ค.\n\n```r\nlength(exam.df(๋‹ฌ๋Ÿฌํ‘œ์‹œ)gene)\n```\n[1] 50\n\n### head()์™€ tail()\n\nhead()ํ•จ์ˆ˜ (๋˜๋Š” tail() ํ•จ์ˆ˜)๋Š” 1์ฐจ์› ๋˜๋Š” 2์ฐจ์›์˜ ํ–‰๋ ฌ ๋˜๋Š” ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„์˜ ์ฒซ 6์ค„ (๋˜๋Š” ๋ 6์ค„;๋””ํดํŠธ๊ฐ’)์„ ์ถœ๋ ฅํ•˜๋Š” ํ•จ์ˆ˜๋‹ค. 
๊ด€์ธก์น˜๊ฐ€ ์ˆ˜๋ฐฑ๋งŒ, ์ˆ˜์ฒœ๋งŒ ๊ฑด์ธ ๊ฒฝ์šฐ๋Š” ์ƒ์œ„ ํ˜น์€ ํ•˜์œ„ ๋ช‡ ๊ฐœ๋งŒ ๋ฏธ๋ฆฌ๋ณด๊ธฐ๋ฅผ ํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•ด์ค€๋‹ค.\n\n```r\nhead(exam.df)\n```\n\n![](../images/2020-11-06-r-1/Untitled61.png)\n\n```r\nexam.df[1:6,] # ๋™์ผํ•œ ์ฝ”๋“œ\n```\n\n์ถœ๋ ฅํ•˜๊ธฐ๋ฅผ ์›ํ•˜๋Š” ํ–‰์˜ ๊ฐœ์ˆ˜๋ฅผ ํŒŒ๋ผ๋ฏธํ„ฐ๋กœ ์ง€์ •ํ•  ์ˆ˜ ์žˆ๋‹ค.\n\n```r\nhead(exam.df, n = 10)\n```\n\n![](../images/2020-11-06-r-1/Untitled62.png)\n\n```r\ntail(exam.df, n = 10)\n```\n\n![](../images/2020-11-06-r-1/Untitled63.png)\n\n### ํ–‰๋ ฌ ๋˜๋Š” ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„์˜ ๊ตฌ์กฐ ํŒŒ์•…ํ•˜๊ธฐ\n\nR ๋ฐ์ดํ„ฐ ๊ฐ์ฒด๋ฅผ ์‹ ๊ทœ๋กœ ์ƒ์„ฑํ–ˆ๊ฑฐ๋‚˜, ์™ธ๋ถ€์—์„œ ๋ถˆ๋Ÿฌ์™”๊ฑฐ๋‚˜, ์•„๋‹ˆ๋ฉด R ํŒจํ‚ค์ง€์— ๋‚ด์žฅ๋˜์–ด ์žˆ๋Š” ๋ฐ์ดํ„ฐ ์…‹์„ ํ™œ์šฉํ•œ๋‹ค๊ณ  ํ–ˆ์„ ๋•Œ ๋ฐ์ดํ„ฐ ๊ฐ์ฒด์˜ ํ˜„ํ™ฉ, ํŠน์„ฑ์— ๋Œ€ํ•ด์„œ ํŒŒ์•…ํ•˜๋Š” ๊ฒƒ์ด ํ•„์š”ํ•˜๋‹ค.\n\nstr(๊ฐ์ฒด) : ๋ฐ์ดํ„ฐ ๊ตฌ์กฐ, ๋ณ€์ˆ˜ ๊ฐœ์ˆ˜, ๋ณ€์ˆ˜ ๋ช…, ๊ด€์ฐฐ์น˜ ๊ฐœ์ˆ˜, ๊ด€์ฐฐ์น˜์˜ ๋ฏธ๋ฆฌ๋ณด๊ธฐ ๋“ฑ\n\n```r\nstr(exam.df)\n```\n\n![](../images/2020-11-06-r-1/Untitled64.png)\n\n### class(obj) : ๋ฐ์ดํ„ฐ ๊ฐ์ฒด ๊ตฌ์„ฑ์š”์†Œ์˜ ์†์„ฑ ํ™•์ธ\n\n```r\nclass(exam.df)\n```\n[1] \"data.frame\"\n\n### sapply(obj, func): object์— function์„ ์ ์šฉ\n\n```r\nsapply(exam.df, class)\n```\n\n![](../images/2020-11-06-r-1/Untitled65.png)\n\n### names(): ๋ฐ์ดํ„ฐ ๊ฐ์ฒด ๊ตฌ์„ฑ์š”์†Œ ์ด๋ฆ„\n\n๋ฐ์ดํ„ฐ ๊ฐ์ฒด์˜ ๋ณ€์ˆ˜๋ช…์„ ์•Œ๊ณ  ์‹ถ๊ณ , indexingํ•ด์„œ ์‚ฌ์šฉํ•˜๊ณ  ์‹ถ์œผ๋ฉด names() ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค.\n\n```r\nnames(exam.df)\n```\n[1] \"gene\" \"sample1\" \"sample2\" \"sample3\" \"sample4\" \"sample5\"\n\n### summary(), stat.desc(), describe(): ์—ฐ์†ํ˜• ๋ณ€์ˆ˜์˜ ์š”์•ฝ ํ†ต๊ณ„\n\n#### summary()\n\nbase package์˜ summary()ย ํ•จ์ˆ˜๋Š” ์ค‘์‹ฌํ™” ๊ฒฝํ–ฅ๊ณผ ํผ์ง์ •๋„์— ๋Œ€ํ•ด์„œ quick ํ•˜๊ฒŒ ๋ณผ ์ˆ˜ ์žˆ๋Š” ํ†ต๊ณ„๋Ÿ‰๋“ค์„ ์ œ๊ณตํ•œ๋‹ค.\n\n```r\nsummary(exam.df)\n```\n\n![](../images/2020-11-06-r-1/Untitled66.png)\n\nstat.desc() ํ•จ์ˆ˜๋‚˜ describe() ํ•จ์ˆ˜๋Š” ๋ณ„๋„์˜ ํŒจํ‚ค์ง€๋ฅผ ์„ค์น˜ํ•ด์•ผ ์“ธ ์ˆ˜ ์žˆ๋‹ค. \n\n#### stat.desc()\n\nํŒจํ‚ค์ง€ ์„ค์น˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ง„ํ–‰ํ•œ๋‹ค.\n\n```r\ninstall.packages(\"pastecs\")\n```\n\n์„ค์น˜ํ•œ ํŒจํ‚ค์ง€๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๋Š” ๋ฐฉ๋ฒ•์€ ๋‹ค์Œ๊ณผ ๊ฐ™๋‹ค.\n\n```r\nlibrary(pastecs)\n```\n\n```r\nstat.desc(exam.df)\n```\n\n![](../images/2020-11-06-r-1/Untitled67.png)\n\nstat.desc() ํ•จ์ˆ˜์˜ ์˜ต์…˜ ๋ณ„๋กœ ์ œ๊ณตํ•˜๋Š” ํ†ต๊ณ„๋Ÿ‰์€ ์•„๋ž˜์™€ ๊ฐ™๋‹ค.\n\n- basic = TRUE : ๊ด€์ธก์น˜ ๊ฐœ์ˆ˜, null ๊ฐœ์ˆ˜, NA ๊ฐœ์ˆ˜, ์ตœ์†Œ๊ฐ’, ์ตœ๋Œ€๊ฐ’, ๋ฒ”์œ„, ํ•ฉ\n- desc = TRUE : ์ค‘์•™๊ฐ’, ํ‰๊ท , ๋ถ„์‚ฐ, ํ‘œ์ค€ํŽธ์ฐจ, ๋ณ€์ด๊ณ„์ˆ˜\n- norm = TRUE : ์™œ๋„, ์ฒจ๋„, ์ •๊ทœ์„ฑ ๊ฒ€์ •ํ†ต๊ณ„๋Ÿ‰, ์ •๊ทœ์„ฑ ๊ฒ€์ • P-value\n- p = 0.90 :ย ์‹ ๋ขฐ๊ณ„์ˆ˜ 90% (์œ ์˜์ˆ˜์ค€ 10%) ๊ฐ’ => 90% ์‹ ๋ขฐ๊ตฌ๊ฐ„์€ ํ‰๊ท  +- CI.mean.0.9 ๊ฐ’\n\nstat.desc() ํ•จ์ˆ˜์—์„œ IQR, quantile์€ ์ œ๊ณต๋˜์ง€ ์•Š๋Š”๋‹ค.\n\n#### describe()\n\ndescribe() ํ•จ์ˆ˜์— ๋Œ€ํ•ด์„œ๋„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ž‘์—…ํ•  ์ˆ˜ ์žˆ๋‹ค.\n\n```r\ninstall.packages(\"psych\")\nlibrary(psych)\ndescribe(exam.df)\n```\n\n![](../images/2020-11-06-r-1/Untitled68.png)\n\ndescribe() ํ•จ์ˆ˜๋Š” summary() ๋ณด๋‹ค๋Š” ๋งŽ๊ณ  stat.desc() ๋ณด๋‹ค๋Š” ์ ์€ ๊ธฐ์ˆ ํ†ต๊ณ„๋Ÿ‰(๊ด€์ธก๊ฐ’ ๊ฐœ์ˆ˜(n), ํ‰๊ท (mean), ํ‘œ์ค€ํŽธ์ฐจ(sd), ์ค‘์•™๊ฐ’(median), ์ ˆ์‚ญํ‰๊ท (10% ์ ˆ์‚ญํ‰๊ท ), ์ค‘์œ„๊ฐ’์ ˆ๋Œ€ํŽธ์ฐจ(from ์ค‘์œ„๊ฐ’)ย (MAD, median absolute deviation), ์ตœ์†Œ๊ฐ’(min), ์ตœ๋Œ€๊ฐ’(max), ๋ฒ”์œ„(range),ย ์™œ๋„(skew), ์ฒจ๋„(kurtosis),ย ํ‘œ์ค€์˜ค์ฐจ(SE, standard error))์„ ๋ณด์—ฌ์ค€๋‹ค. 
\n\nNote that stat.desc() does not provide the IQR or quantiles.\n\n#### describe()\n\ndescribe() can be set up the same way.\n\n```r\ninstall.packages(\"psych\")\nlibrary(psych)\ndescribe(exam.df)\n```\n\n![](../images/2020-11-06-r-1/Untitled68.png)\n\ndescribe() reports more statistics than summary() and fewer than stat.desc(): the number of observations (n), mean, standard deviation (sd), median, trimmed mean (10% trimmed), median absolute deviation from the median (MAD), minimum (min), maximum (max), range, skewness (skew), kurtosis, and standard error (SE).\n\n- Median absolute deviation (MAD) = median(|X - median(X)|) * 1.4826\n1.4826 is a scaling factor (also called a normalizing constant); for roughly normal data, multiplying by it makes the MAD comparable to the standard deviation.
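\n\nA quick numerical check of that formula (a sketch; base R's mad() applies the same constant by default):\n\n```r\nset.seed(1)\nv = rnorm(1000)\nmedian(abs(v - median(v))) * 1.4826 # MAD computed by hand\nmad(v) # the built-in function gives the same value\nsd(v) # close to the MAD for near-normal data\n```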
\n\n### Group-wise summary statistics for continuous variables\n\nOptions: tapply(var, factor, summary), aggregate(), summaryBy(), describe.by()\n\n#### tapply(variable, factor, summary)\n\nTo compute statistics on a continuous data set, install the MASS package and load the Cars93 data set.\n\n```r\ninstall.packages(\"MASS\")\nlibrary(MASS)\nstr(Cars93)\nwith(Cars93, tapply(Price, Type, summary))\n```\n\n![](../images/2020-11-06-r-1/Untitled69.png)\n\nWith tapply(), a given variable can be summarized per level of a factor by applying summary() group-wise. The example above is the group summary of Price by Type for the Cars93 data set.
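\n\nThe list above also names aggregate(); a hedged equivalent of the same group summary via its formula interface might look like:\n\n```r\naggregate(Price ~ Type, data = Cars93, FUN = mean) # mean Price per Type\n```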
R์—๋Š” ๋‚ด์žฅ ํ•จ์ˆ˜ ์™ธ์— ์‚ฌ์šฉ์ž ์ •์˜ ํ•จ์ˆ˜๋„ ๊ฐ€๋Šฅํ•˜๋‹ค.\n\n![](../images/2020-11-06-r-2/Untitled92.png)\n\n\n```r\nf_01 = function(x) {\n return(x)\n}\n\nf_01(1)\n```\n\n[1] 1\n\n```r\nf_02 = function(x) {\n output = x+50\n return(output)\n}\n\nf_02(20)\n```\n\n[1] 70\n\n```r\nx = matrix(1:25, nrow=5, ncol=5)\ncolnames(x) = rep(paste(\"sample\", 1:5, sep=\"\"))\nrownames(x) = rep(paste(\"gene\", 1:5, sep=\"\"))\n\nx\n\nx[,2]\nstat_f(x[,2])\n\nx[4,]\nstat_f(x[4,])\n```\n\n![](../images/2020-11-06-r-2/Untitled93.png)\n\n\n\n```r\nres = apply(x, 2, stat_f) # 2: ์—ด \nres\n```\n\n![](../images/2020-11-06-r-2/Untitled94.png)\n\n\n# ๋ฐ˜๋ณต๋ฌธ\n\n```\nfor(var in seq) {\n\texpression\n}\n```\n\n```r\nfor(i in 1:5) {\n print(rep(i,i))\n}\n```\n\n![](../images/2020-11-06-r-2/Untitled95.png)\n\n\n\n```r\nfor(year in 2015:2020) {\n print(paste(\"The year is\", year))\n}\n```\n\n![](../images/2020-11-06-r-2/Untitled96.png)\n\n\n```r\nx = c(1:10)\nfor(i in x){\n y = 2*i+3\n print(y)\n}\n```\n![](../images/2020-11-06-r-2/Untitled97.png)\n\n\n```r\nfor(i in 2:5){\n for(j in 1:5){\n cat(i, \"*\", j, \"=\", i*j, \"\\n\")\n }\n}\n```\n\n![](../images/2020-11-06-r-2/Untitled98.png)\n\n\n\n## ๋ฐ˜๋ณต๋ฌธ ์˜ˆ์ œ ์ฝ”๋“œ\n\n```r\nfor(time in 1:6){\n for(minute in seq(10, 25, length.out=10)){\n cat(\"It is\", time, \"hour\", minute, \"minute\", \"\\n\")\n }\n}\n```\n\n# ์กฐ๊ฑด๋ฌธ\n\n```r\nx = 4\nif(x>0) print(sqrt(x)) #2\n\nx = -0.2\nif(x<0) print(1+x) else print(x) # 0.8\n\nx = 0.5\nif(x<0) print(1+x) else print(x) # 0.5\nifelse(x<0, 1+x, x)\n```\n![](../images/2020-11-06-r-2/Untitled99.png)\n\n\n```r\ngender = c(rep(\"male\", 30), rep(\"female\", 20))\ngender\nifelse(gender==\"male\", 0, 1)\n```\n\n![](../images/2020-11-06-r-2/Untitled100.png)\n\n\n\n# File Read & Write\n\n๋ฐ์ดํ„ฐ ๋ถ„์„ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•  ๋• ์™ธ๋ถ€์—์„œ ๋ฐ์ดํ„ฐ๋ฅผ ๋ถˆ๋Ÿฌ์™€ ์ž‘์—…ํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ๋‹ค. ์ฃผ๋กœ csvํŒŒ์ผ์ด๋‚˜ xlsxํŒŒ์ผ์„ ๋ถˆ๋Ÿฌ์˜ค๊ฒŒ ๋œ๋‹ค.\n\n## saveRDS, readRDS\n\nR์—์„œ๋Š” ์ž‘์—…ํ•˜๋˜ ๋ฐ์ดํ„ฐ๋ฅผ ํŒŒ์ผ ๊ฐ์ฒด ํ˜•ํƒœ๋กœ ์ €์žฅํ•  ์ˆ˜ ์žˆ๊ณ , ์ €์žฅ๋œ ํŒŒ์ผ ๊ฐ์ฒด๋ฅผ ๋ถˆ๋Ÿฌ์˜ฌ ์ˆ˜๋„ ์žˆ๋‹ค. R์— ๊ธฐ๋ณธ์ ์œผ๋กœ ๋‚ด์žฅ๋˜์–ด ์žˆ๋Š” iris ๋ฐ์ดํ„ฐ๋ฅผ ์ด์šฉํ•˜์—ฌ R ๊ฐ์ฒด ํ˜•ํƒœ๋กœ ์ €์žฅํ•˜๊ณ  ๋ถˆ๋Ÿฌ์™€๋ณด์ž.\n```r\niris\nsaveRDS(iris, \"iris.rds\")\niris.data <- readRDS(\"iris.rds\")\n```\n\nR studio ์˜ค๋ฅธ์ชฝ ์•„๋ž˜์— RDS ํŒŒ์ผ์ด ์ƒ๊ธด ๊ฒƒ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋‹ค. \n\n## File Read\n\nํŒŒ์ผ์„ ์ฝ์„ ๋•Œ ์ฃผ๋กœ ์‚ฌ์šฉํ•  ํ•จ์ˆ˜๋Š” `read.delim()`, `read.table`, `read.csv()`์ด๋‹ค. ์ด๋“ค ํ•จ์ˆ˜๋“ค์€ ๊ธฐ๋ณธ์ ์œผ๋กœ ์„ธ ๊ฐœ์˜ ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ๊ฐ–๋Š”๋‹ค. ์ฒซ ๋ฒˆ์งธ ํŒŒ๋ผ๋ฏธํ„ฐ์ธ `file`์€ ํŒŒ์ผ ๊ฒฝ๋กœ๋‹ค. ๋‘ ๋ฒˆ์งธ ํŒŒ๋ผ๋ฏธํ„ฐ์ธ `header`๋Š” ์ฒซ ๋ฒˆ์งธ ํ–‰์„ ํ—ค๋”๋กœ ์ฒ˜๋ฆฌํ•  ๊ฒƒ์ธ์ง€๋ฅผ ๊ฒฐ์ •ํ•˜๋Š” ํŒŒ๋ผ๋ฏธํ„ฐ์ด๋ฉฐ, ๋งˆ์ง€๋ง‰์œผ๋กœ `sep`ํŒŒ๋ผ๋ฏธํ„ฐ๋Š” ๊ตฌ๋ถ„์ž์— ๋Œ€ํ•œ ์ •๋ณด๋ฅผ ์ฃผ๋Š” ๊ฒƒ์œผ๋กœ์„œ \"\\t\"๋Š” tab์„, \" \"์€ ๊ณต๋ฐฑ์„ ์˜๋ฏธํ•œ๋‹ค.\n\n### ์‹ค์Šต ์ฝ”๋“œ 1\nxls์˜ ํŒŒ์ผ์ธ ๊ฒฝ์šฐ, txt ๋˜๋Š” csvํŒŒ์ผ๋กœ ๋ณ€ํ™˜ํ•ด์ฃผ์–ด์•ผ ํ•œ๋‹ค.\n\n```r\n# read.delim(), read.table()\nex_tab=read.delim(file = \"extdata/example_tab.txt\", header = T, sep = \"\\t\") # ์ฒซ ์ค„์€ ๋ณดํ†ต column์˜ name. ์‹ค์ œ data๊ฐ€ ์•„๋‹˜. 
\n\n\n# Functions\n\nA function takes an input, processes it according to a defined procedure, and returns the result. Functions are mostly written when the same processing has to be repeated. Besides the built-in functions, R also allows user-defined functions.\n\n![](../images/2020-11-06-r-2/Untitled92.png)\n\n\n```r\nf_01 = function(x) {\n return(x)\n}\n\nf_01(1)\n```\n\n[1] 1\n\n```r\nf_02 = function(x) {\n output = x+50\n return(output)\n}\n\nf_02(20)\n```\n\n[1] 70
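\n\nThe snippets below call a helper stat_f() that the original post never defines; a minimal sketch consistent with how it is used (simple summary statistics for a numeric vector) could be:\n\n```r\nstat_f = function(v) {\n c(mean = mean(v), sd = sd(v), min = min(v), max = max(v)) # assumed contents\n}\n```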
์ด๋Š” R์ด data ํด๋”๋ฅผ ์ž๋™์œผ๋กœ ์ธ์‹ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๊ฐ€๋Šฅํ•˜๋‹ค.\n\n### ์‹ค์Šต ์ฝ”๋“œ\n\n```r\ndata(ex)\n```\n\n# R plotting(์‹œ๊ฐํ™”)\n\n## R basic graphics\n\nR basic graphics๋Š” R์—์„œ ๊ธฐ๋ณธ์ ์œผ๋กœ ์ œ๊ณต๋˜๋Š” ๊ทธ๋ž˜ํ”ฝ ํŒจํ‚ค์ง€์ด๋‹ค. ์•„๋ž˜ ๊ทธ๋ฆผ๊ณผ ๊ฐ™์€ ์ข…๋ฅ˜์˜ ๊ทธ๋ž˜ํ”„๋“ค์„ ๊ทธ๋ฆด ์ˆ˜ ์žˆ๋‹ค.\n![](../images/2020-11-06-r-2/Untitled109.png)\n\n\n### scatter plot\n\nscatter plot์€ ๋‘ ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ƒ๊ด€๊ด€๊ณ„๋ฅผ ๋‚˜ํƒ€๋‚ผ ๋•Œ ์ฃผ๋กœ ์‚ฌ์šฉ๋œ๋‹ค. ๊ทธ๋ž˜ํ”„๋ฅผ ๊ทธ๋ฆด ๋•Œ ์‚ฌ์šฉ๋˜๋Š” ํ•จ์ˆ˜๋Š” `plot()`์ด๋ฉฐ, ์ƒ๊ด€๊ด€๊ณ„๋ฅผ ๋‚˜ํƒ€๋‚ผ ๋•Œ ์‚ฌ์šฉ๋˜๋Š” ํ•จ์ˆ˜๋Š” `cor()`๋‹ค.\n\n**์‹ค์Šต์ฝ”๋“œ**\n\n```r\n# scatter plot : plot()\nplot(x=2, y=1)\nplot(c(2,5), c(1,10))\n\n# iris\nhead(iris)\n\nplot(x = iris(๋‹ฌ๋Ÿฌํ‘œ์‹œ)Sepal.Length, y = iris(๋‹ฌ๋Ÿฌํ‘œ์‹œ)Sepal.Width)\nplot(x = iris(๋‹ฌ๋Ÿฌํ‘œ์‹œ)Petal.Length, y = iris(๋‹ฌ๋Ÿฌํ‘œ์‹œ)Petal.Width)\n\ncor(x = iris(๋‹ฌ๋Ÿฌํ‘œ์‹œ)Sepal.Length, y = iris(๋‹ฌ๋Ÿฌํ‘œ์‹œ)Sepal.Width)\ncor(x = iris(๋‹ฌ๋Ÿฌํ‘œ์‹œ)Petal.Length, y = iris(๋‹ฌ๋Ÿฌํ‘œ์‹œ)Petal.Width)\n```\n![](../images/2020-11-06-r-2/Untitled110.png)\n\n![](../images/2020-11-06-r-2/Untitled111.png)\n\n![](../images/2020-11-06-r-2/Untitled112.png)\n\n![](../images/2020-11-06-r-2/Untitled113.png)\n\n![](../images/2020-11-06-r-2/Untitled114.png)\n\n\n## line plot\n\nline plot์€ scatter plot์˜ ๊ฐ ์ ์„ ์„ ์œผ๋กœ ์ด์–ด์„œ ํ‘œํ˜„ํ•œ ๊ทธ๋ž˜ํ”„๋ผ๊ณ  ํ•  ์ˆ˜ ์žˆ๋‹ค. ์ฃผ๋กœ ์‹œ๊ณ„์—ด ๋ฐ์ดํ„ฐ์—์„œ ์‹œ๊ฐ„์˜ ํ๋ฆ„์— ๋”ฐ๋ฅธ ๋ณ€ํ™”๋ฅผ ์‹œ๊ฐ์ ์œผ๋กœ ํ‘œํ˜„ํ•ด์ค„ ๋•Œ ์‚ฌ์šฉํ•œ๋‹ค. `plot()` ํ•จ์ˆ˜์— `type` ์ธ์ž๋ฅผ ์ฃผ์–ด ํ‘œํ˜„ ๊ฐ€๋Šฅํ•˜๋‹ค.\n\n#### ์‹ค์Šต ์ฝ”๋“œ 1\n\n```r\n# line plot: plot( type = 'l')\nplot(c(2,5), c(1,10), type='l')\nplot(c(2,5), c(1,10), type='o')\n\nhead(pressure)\nplot(x = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)temperature, y = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)pressure)\nplot(x = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)temperature, y = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)pressure, type='l')\nplot(x = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)temperature, y = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)pressure, type='o')\n```\n\n![](../images/2020-11-06-r-2/Untitled115.png)\n\n![](../images/2020-11-06-r-2/Untitled116.png)\n\n![](../images/2020-11-06-r-2/Untitled117.png)\n\n![](../images/2020-11-06-r-2/Untitled118.png)\n\n![](../images/2020-11-06-r-2/Untitled119.png)\n\n![](../images/2020-11-06-r-2/Untitled120.png)\n\n![](../images/2020-11-06-r-2/Untitled121.png)\n\n์‚ฐ์ ๋„ ํฌ์ธํŠธ์˜ ํฌ๊ธฐ๋ฅผ ์กฐ์ ˆํ•˜๊ณ ์ž ํ•  ๋•, `cex` ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ด์šฉํ•˜๋ฉฐ, ์ƒ‰์„ ๋ฐ”๊พธ๊ณ  ์‹ถ์„ ๋• `col` ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ด์šฉํ•œ๋‹ค.\n\n#### ์‹ค์Šต ์ฝ”๋“œ 2\n\n```r\nplot(x = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)temperature, y = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)pressure, cex=2)\n\nplot(x = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)temperature, y = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)pressure, cex=1)\n\nplot(x = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)temperature, y = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)pressure, cex=0.5)\n\nplot(x = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)temperature, y = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)pressure, cex=1, col='red')\n```\n\n![](../images/2020-11-06-r-2/Untitled122.png)\n\n![](../images/2020-11-06-r-2/Untitled123.png)\n\n![](../images/2020-11-06-r-2/Untitled124.png)\n\n## histogram\n\nhistogram์€ ์—ฐ์†์ ์ธ ์ˆซ์žํ˜• ๋ฐ์ดํ„ฐ๋ฅผ ์ด์šฉํ•ด ๋„์ˆ˜ ๋ถ„ํฌ๋ฅผ ํ‘œํ˜„ํ•˜๋Š” ๊ทธ๋ž˜ํ”„์ด๋‹ค. ๋‚ด๊ฐ€ ๊ฐ€์ง„ ๋ฐ์ดํ„ฐ์˜ ์ „์ฒด์ ์ธ ๋ถ„ํฌ๋‚˜ ๊ฒฝํ–ฅ์„ ํ™•์ธํ•˜๊ณ ์ž ํ•  ๋•Œ ์‚ฌ์šฉํ•œ๋‹ค. 
\n\n# Conditionals\n\n```r\nx = 4\nif(x>0) print(sqrt(x)) #2\n\nx = -0.2\nif(x<0) print(1+x) else print(x) # 0.8\n\nx = 0.5\nif(x<0) print(1+x) else print(x) # 0.5\nifelse(x<0, 1+x, x)\n```\n![](../images/2020-11-06-r-2/Untitled99.png)\n\n\n```r\ngender = c(rep(\"male\", 30), rep(\"female\", 20))\ngender\nifelse(gender==\"male\", 0, 1)\n```\n\n![](../images/2020-11-06-r-2/Untitled100.png)\n\n\n\n# File Read & Write\n\nData analysis usually works with data loaded from outside, most often csv or xlsx files.\n\n## saveRDS, readRDS\n\nIn R, an object being worked on can be saved to a file and a saved object can be loaded back. Using the iris data built into R, save it as an R object and load it again.\n```r\niris\nsaveRDS(iris, \"iris.rds\")\niris.data <- readRDS(\"iris.rds\")\n```\n\nThe RDS file appears in the lower-right pane of RStudio.\n\n## File Read\n\nThe main reading functions are `read.delim()`, `read.table()`, and `read.csv()`. They take three basic parameters. The first, `file`, is the file path. The second, `header`, decides whether to treat the first row as a header. The last, `sep`, gives the separator, where \"\\t\" means tab and \" \" means a space.\n\n### Practice code 1\nAn xls file should first be converted to a txt or csv file.\n\n```r\n# read.delim(), read.table()\nex_tab=read.delim(file = \"extdata/example_tab.txt\", header = T, sep = \"\\t\") # the first line is usually the column names, not actual data\nhead(ex_tab)\n\nex_tab2=read.table(file = \"extdata/example_tab.txt\", header = T, sep = \"\\t\")\nhead(ex_tab2)\n\nhead(read.delim(file = \"extdata/example_tab.txt\"))\n\n# read.csv()\nex_csv=read.csv(file = \"extdata/example.csv\", header = T)\nhead(ex_csv)\n```\n\n![](../images/2020-11-06-r-2/Untitled101.png)\n\n\nTo pick the file interactively:\n```r\ndata <- read.csv(file.choose(), header=TRUE) # T and TRUE are equivalent\n```\n\n```\ntail(data)\nstr(data)\nmax(data$column_name) # not applicable to the Factor type\n```\n\n### Practice code 2\n\nExcel files are read with `read.xlsx()`. Since a workbook may contain several sheets, note that the sheet index must be supplied.\n\n```r\n# read.xlsx()\nlibrary(xlsx)\nex_xlsx=read.xlsx(file = \"extdata/example.xlsx\", sheetIndex = 1)\n```\n\n![](../images/2020-11-06-r-2/Untitled102.png)\n\n\n## File Write\n\nThe main writing functions are `write.table()` and `write.csv()`. There are six parameters: the name of the variable to save (`x`); `file`, the path and name of the file including the extension; `quote`, a logical for wrapping strings in double quotes; `sep`, the separator (\"\\t\", \" \"); `row.names`, a logical for writing row names to the file; and `col.names`, a logical for writing column names to the file.\n\n### Practice code 1\n\n```r\n# write.table()\nhead(ex_csv)\nclass(ex_csv)\n\nwrite.table(x = ex_csv, file = \"output/ex.txt\")\nwrite.table(x = ex_csv, file = \"output/ex.txt\", sep = \"\\t\", quote = F)\nwrite.table(x = ex_csv, file = \"output/ex.txt\", sep = \"\\t\", quote = F, row.names = F)\n\n# write.csv()\nwrite.csv(x = ex_csv, file = \"output/ex.csv\", quote = F, row.names = F)\n```\n![](../images/2020-11-06-r-2/Untitled103.png)\n\n\nThe files appear in the output folder.\n\n![](../images/2020-11-06-r-2/Untitled104.png)\n\n\n\n### Practice code 2\n\nExcel files are written with `write.xlsx()`. The `skip` parameter of read.table(), shown below for comparison, drops the given number of lines before reading.\n\n```r\n# write.xlsx()\nwrite.xlsx(x = ex_csv, file = \"output/ex.xlsx\", row.names = F)\n\nex_tab2=read.table(file = \"extdata/example_tab.txt\", header = T, sep = \"\\t\", skip = 10)\nhead(ex_tab2)\n```\n\n![](../images/2020-11-06-r-2/Untitled105.png)\n\n\n\nLikewise, the xlsx file has been created in the output folder.\n\n![](../images/2020-11-06-r-2/Untitled106.png)\n\n\n\n## Saving and loading R data\n\nFiles with the .rda or .RData extension are R data, and such files can only be opened in R. Their advantages over text files are a shorter loading time and the ability to save and restore not only two-dimensional data but all variables. The main functions are `save()` and `load()`.\n\n### Practice code\n\n```r\n# save()\nsave(ex_csv, file = \"data/ex.rda\")\n\n# load(), data()\nload(file = \"data/ex.rda\")\n```\n![](../images/2020-11-06-r-2/Untitled107.png)\n\n\nAn R data file with the .rda extension has been created in the data folder.\n![](../images/2020-11-06-r-2/Untitled108.png)\n\n\nIf the project contains a data folder, `data()` can load a file conveniently; this works because R recognizes the data folder automatically.\n\n### Practice code\n\n```r\ndata(ex)\n```
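\n\nSince save() accepts any number of objects, several variables can be bundled into one file; a sketch (the object names are illustrative):\n\n```r\nsave(ex_csv, ex_tab, file = \"data/objects.rda\")\nload(file = \"data/objects.rda\") # restores both objects under their original names\n```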
๋˜ํ•œ ์ถ”๊ฐ€์ ์œผ๋กœ x์ถ•, y์ถ•์˜ ๋ฒ”์œ„๋ฅผ ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ๋‹ค.\n\n#### ์‹ค์Šต ์ฝ”๋“œ 2\n\n```r\n# graphic parameter\nbarplot(cyl_gear_freq, beside = T, xlab = \"Gears\", ylab = \"Cylinder\")\n\nbarplot(cyl_gear_freq, beside = T, xlab = \"Gears\", ylab = \"Cylinder\", ylim = c(0, 20))\n```\n\n![](../images/2020-11-06-r-2/Untitled144.png)\n\n![](../images/2020-11-06-r-2/Untitled145.png)\n\n![](../images/2020-11-06-r-2/Untitled146.png)\n\n![](../images/2020-11-06-r-2/Untitled148.png)\n\n๊ทธ๋ž˜ํ”„์— ๋Œ€ํ•œ ๋” ๋‹ค์–‘ํ•œ ์ž‘์—…์ด ๊ฐ€๋Šฅํ•œ๋ฐ, x์ถ•์˜ ๊ฐ’์— ๋Œ€ํ•œ ๊ฐ๋„๋ฅผ ๋ฐ”๊พธ๋Š” `las` ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค.\n\n#### ์‹ค์Šต ์ฝ”๋“œ 3\n\n```r\nbarplot(cyl_gear_freq, beside = T, xlab = \"Gears\", ylab = \"Cylinder\", ylim = c(0, 20), las=2)\n```\n\n![](../images/2020-11-06-r-2/Untitled149.png)\n\nํŒจ๋„์˜ ๋ฐฐ์—ด ๋ฐ ์—ฌ๋ฐฑ์„ ์กฐ์ ˆํ•  ์ˆ˜ ์žˆ๋Š” par() ํ•จ์ˆ˜๊ฐ€ ์žˆ๋‹ค. ์—ฌ๋Ÿฌ ํŒŒ๋ผ๋ฏธํ„ฐ๊ฐ€ ์žˆ์œผ๋ฏ€๋กœ ๋‹ค์–‘ํ•œ ์˜ต์…˜์„ ์ค„ ์ˆ˜ ์žˆ๋‹ค. ์ด ๋…ธํŠธ์—์„œ๋Š” mfrow ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ํ†ตํ•ด ๊ทธ๋ž˜ํ”„์˜ ๊ตฌํš์„ ๋‚˜๋ˆ ์ฃผ๊ฒ ๋‹ค. \n\n#### ์‹ค์Šต ์ฝ”๋“œ 4\n\n```r\npar(mfrow=c(2,2))\n\nplot(x = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)temperature, y = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)pressure, cex=1, col='red')\n\nplot(x = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)temperature, y = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)pressure, cex=1, col='blue')\n\nplot(x = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)temperature, y = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)pressure, cex=1, col='green')\n\nplot(x = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)temperature, y = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)pressure, cex=1, col='pink')\n```\n\n![](../images/2020-11-06-r-2/Untitled150.png)\n\npar() ํ•จ์ˆ˜๋ฅผ ํ•œ ๋ฒˆ ์‚ฌ์šฉํ•˜๊ฒŒ ๋˜๋ฉด, ๋‹ค์Œ ๊ทธ๋ž˜ํ”„์—๋„ ์˜ํ–ฅ์„ ์ฃผ๊ฒŒ ๋˜๋ฏ€๋กœ ์ด๋ฅผ ์ดˆ๊ธฐํ™”ํ•˜๋Š” ์ž‘์—…์ด ํ•„์š”ํ•˜๋‹ค. ์ด ๋•, `graphics.off()` ํ•จ์ˆ˜ ๋˜๋Š” `dev.off()` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค.\n\n#### ์‹ค์Šต ์ฝ”๋“œ 5\n\n```r\ngraphics.off()\ndev.off()\n```\n\n`par()` ํ•จ์ˆ˜์˜ ์˜ต์…˜ ์ค‘ ์—ฌ๋ฐฑ(๋งˆ์ง„)์„ ์กฐ์ ˆํ•  ์ˆ˜ ์žˆ๋Š” `mar` ํŒŒ๋ผ๋ฏธํ„ฐ๊ฐ€ ์žˆ๋‹ค. \n\n#### ์‹ค์Šต ์ฝ”๋“œ 6\n\n```r\npar(mar=c(0,0,0,0))\nplot(x = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)temperature, y = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)pressure, cex=1, col='pink')\n\npar(mar=c(5,0,0,0))\nplot(x = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)temperature, y = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)pressure, cex=1, col='pink')\n\npar(mar=c(5,5,0,0))\nplot(x = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)temperature, y = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)pressure, cex=1, col='pink')\n\npar(mar=c(5,5,5,0))\nplot(x = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)temperature, y = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)pressure, cex=1, col='pink')\n\npar(mar=c(5,5,5,5))\nplot(x = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)temperature, y = pressure(๋‹ฌ๋Ÿฌํ‘œ์‹œ)pressure, cex=1, col='pink')\n```\n\n![](../images/2020-11-06-r-2/Untitled151.png)\n\n![](../images/2020-11-06-r-2/Untitled152.png)\n\n![](../images/2020-11-06-r-2/Untitled153.png)\n\n## ์ƒ‰ ์ง€์ •\n\nR์—์„œ๋Š” 657๊ฐœ์˜ ์ƒ‰์„ ์ง€์ •ํ•ด์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. \n\n```r\ncolors()\n```\n\n![](../images/2020-11-06-r-2/Untitled154.png)\n\n![](../images/2020-11-06-r-2/Untitled155.png)\n\n### color set\n\n๊ทธ๋ž˜ํ”„์— ์ปฌ๋Ÿฌ ์…‹์„ ์ž๋™์œผ๋กœ ์„ค์ •ํ•˜๊ฒŒ ํ•  ์ˆ˜ ์žˆ๋‹ค. 
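\n\nThe whisker positions and outliers can be inspected numerically with boxplot.stats() from base R; a sketch:\n\n```r\nst = boxplot.stats(iris$Sepal.Width)\nst$stats # lower whisker, Q1, median, Q3, upper whisker\nst$out # points flagged as outliers\n```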
\n\nIn R, a box plot can be drawn from a numeric vector, a matrix, or a data frame. The function is `boxplot()`.\n\n#### Practice code 1\n\n```r\n# Boxplot\nboxplot(iris$Sepal.Length)\n\nboxplot(iris$Sepal.Length, iris$Sepal.Width, iris$Petal.Length, iris$Petal.Width)\n\nboxplot(iris[,1:4])\n```\n\n![](../images/2020-11-06-r-2/Untitled131.png)\n\n![](../images/2020-11-06-r-2/Untitled132.png)\n\n![](../images/2020-11-06-r-2/Untitled133.png)\n\n![](../images/2020-11-06-r-2/Untitled134.png)\n\n![](../images/2020-11-06-r-2/Untitled135.png)\n\n#### Practice code 2\n\nBox plots can also be drawn per category, using a `~` formula inside `boxplot()`.\n\n```r\n# Boxplot: ~ Formula\ntable(iris$Species)\n\nboxplot(iris$Petal.Length~iris$Species)\n\nboxplot(Petal.Length~Species, data=iris)\n```\n\n![](../images/2020-11-06-r-2/Untitled136.png)\n\n![](../images/2020-11-06-r-2/Untitled137.png)\n\n## bar plot\n\nA bar plot is used for categorical data; it shows the frequency or percentage of each category as the height of a bar. The function is `barplot()`.\n\n#### Practice code 1\n\n```r\n# Barplot\nhead(BOD)\n?BOD\n\nbarplot(BOD$demand)\nbarplot(BOD$demand, names.arg = BOD$Time)\n\n# \nhead(mtcars)\ndim(mtcars)\n\ncyl_freq=table(mtcars$cyl)\nbarplot(cyl_freq)\n\ncyl_gear_freq=table(mtcars$cyl, mtcars$gear)\n\nbarplot(cyl_gear_freq)\nbarplot(cyl_gear_freq, beside = T)\n```\n\n![](../images/2020-11-06-r-2/Untitled138.png)\n\n![](../images/2020-11-06-r-2/Untitled139.png)\n\n![](../images/2020-11-06-r-2/Untitled140.png)\n\n![](../images/2020-11-06-r-2/Untitled141.png)\n\n![](../images/2020-11-06-r-2/Untitled142.png)\n\n![](../images/2020-11-06-r-2/Untitled143.png)\n\nThe plot alone does not say what the x and y axes mean, so the axes can be labeled; additionally, the ranges of the x and y axes can be set.\n\n#### Practice code 2\n\n```r\n# graphic parameter\nbarplot(cyl_gear_freq, beside = T, xlab = \"Gears\", ylab = \"Cylinder\")\n\nbarplot(cyl_gear_freq, beside = T, xlab = \"Gears\", ylab = \"Cylinder\", ylim = c(0, 20))\n```\n\n![](../images/2020-11-06-r-2/Untitled144.png)\n\n![](../images/2020-11-06-r-2/Untitled145.png)\n\n![](../images/2020-11-06-r-2/Untitled146.png)\n\n![](../images/2020-11-06-r-2/Untitled148.png)\n\nMany more adjustments are possible; the `las` parameter, for instance, changes the orientation of the axis value labels.\n\n#### Practice code 3\n\n```r\nbarplot(cyl_gear_freq, beside = T, xlab = \"Gears\", ylab = \"Cylinder\", ylim = c(0, 20), las=2)\n```\n\n![](../images/2020-11-06-r-2/Untitled149.png)\n\nThe par() function controls panel layout and margins. It has many parameters, so many options can be set; in this note the mfrow parameter is used to split the plotting area.\n\n#### Practice code 4\n\n```r\npar(mfrow=c(2,2))\n\nplot(x = pressure$temperature, y = pressure$pressure, cex=1, col='red')\n\nplot(x = pressure$temperature, y = pressure$pressure, cex=1, col='blue')\n\nplot(x = pressure$temperature, y = pressure$pressure, cex=1, col='green')\n\nplot(x = pressure$temperature, y = pressure$pressure, cex=1, col='pink')\n```\n\n![](../images/2020-11-06-r-2/Untitled150.png)\n\nOnce par() has been used it affects subsequent plots as well, so it needs to be reset with `graphics.off()` or `dev.off()`.\n\n#### Practice code 5\n\n```r\ngraphics.off()\ndev.off()\n```\n\nAmong the options of `par()`, the `mar` parameter adjusts the margins.\n\n#### Practice code 6\n\n```r\npar(mar=c(0,0,0,0))\nplot(x = pressure$temperature, y = pressure$pressure, cex=1, col='pink')\n\npar(mar=c(5,0,0,0))\nplot(x = pressure$temperature, y = pressure$pressure, cex=1, col='pink')\n\npar(mar=c(5,5,0,0))\nplot(x = pressure$temperature, y = pressure$pressure, cex=1, col='pink')\n\npar(mar=c(5,5,5,0))\nplot(x = pressure$temperature, y = pressure$pressure, cex=1, col='pink')\n\npar(mar=c(5,5,5,5))\nplot(x = pressure$temperature, y = pressure$pressure, cex=1, col='pink')\n```\n\n![](../images/2020-11-06-r-2/Untitled151.png)\n\n![](../images/2020-11-06-r-2/Untitled152.png)\n\n![](../images/2020-11-06-r-2/Untitled153.png)\n\n## Specifying colors\n\nR has 657 named colors that can be used.\n\n```r\ncolors()\n```\n\n![](../images/2020-11-06-r-2/Untitled154.png)\n\n![](../images/2020-11-06-r-2/Untitled155.png)
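\n\nThe list of names can be searched with grep(); a sketch:\n\n```r\ngrep(\"blue\", colors(), value = TRUE) # every built-in color name containing \"blue\"\n```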
\n\n### color set\n\nA color set can be applied to a plot automatically. The available palettes can be listed as follows.\n\n```r\nlibrary(\"RColorBrewer\")\ndisplay.brewer.all()\n```\n\n![](../images/2020-11-06-r-2/Untitled157.png)\n\nCode that applies a palette to an actual plot:\n\n![](../images/2020-11-06-r-2/Untitled156.png)\n\n```r\ndisplay.brewer.pal(n = 8, name= \"RdBu\")\nbrewer.pal(n = 8, name=\"RdBu\") # extract the color codes\nbarplot(c(2,5,7), col=brewer.pal(n=3, name=\"RdBu\")) # apply the colors to a plot\n```\n\n![](../images/2020-11-06-r-2/Untitled158.png)\n\n![](../images/2020-11-06-r-2/Untitled159.png)\n\n![](../images/2020-11-06-r-2/Untitled160.png)\n\n## ggplot\n\n### Drawing plots with ggplot\n\nTo draw a plot with ggplot, call `ggplot()` and chain options for the plot type, theme, colors, grouping, and so on with `+`. Note that the input data must be a data frame. The parameters of `ggplot()` look like:\n\n```r\nggplot(data = <data.frame>, aes(x = <x column name>, y = <y column name>)) + geom_xx() + ...\n```\n\nThe axis mappings must use the actual column names in the data.\n\n`ggplot()` by itself does not plot anything; the kind of plot to draw is added with `+`. The layers that can be added include:\n\n- `geom_point()`\n- `geom_line()`\n- `geom_boxplot()`\n- `geom_bar()`\n\n**Practice code**\n\n```r\nlibrary(ggplot2)\n\n# geom_point\nggplot(data = iris, aes(x = Petal.Length, y = Petal.Width)) + geom_point(size=3, color='red')\n\n# geom_line\nggplot(data = pressure, aes(x = temperature, y= pressure)) + geom_line() + geom_point()\n\n# geom_boxplot()\nggplot(data = iris, aes(x=Species, y=Petal.Length, fill=Species))+geom_boxplot()\n\n# geom_bar()\nggplot(data = mtcars, aes(x=factor(cyl))) + geom_bar() # factor: categorical data\n```\n\n![](../images/2020-11-06-r-2/Untitled161.png)\n\n![](../images/2020-11-06-r-2/Untitled162.png)\n\n![](../images/2020-11-06-r-2/Untitled163.png)\n\n![](../images/2020-11-06-r-2/Untitled164.png)\n\n### Plotting a hand-made data frame\n\n#### Practice code\n\n```r\ndf=data.frame(table(mtcars$cyl))\ndf\n\nggplot(data = df, aes(x = Var1, y=Freq)) + geom_bar(stat = 'identity')\n\nggplot(data = df, aes(x = Var1, y=Freq)) + geom_bar(stat = 'identity', fill='blue', width=0.5) + ylab(\"Frequency\") + xlab(\"Cylinders\")\n```\n\n![](../images/2020-11-06-r-2/Untitled165.png)\n\n![](../images/2020-11-06-r-2/Untitled166.png)\n\n```r\ngg <- ggplot(data=iris, aes(x = Petal.Length, y = Petal.Width))+geom_point(aes(color=Species, shape=Species), size=3)\n\ngg\n\ngg + theme_bw()\ngg + theme_classic()\ngg + theme_dark()\n\ngg + theme(text = element_text(size = 15, face = 'bold'))\ngg + theme(axis.title.x = element_text(size = 15, face = 'bold'))\ngg + theme(axis.text.x = element_text(size = 15, face = 'bold'))\ngg + theme(legend.position = 'top')\n```\n\n![](../images/2020-11-06-r-2/Untitled167.png)\n\n![](../images/2020-11-06-r-2/Untitled168.png)\n\n![](../images/2020-11-06-r-2/Untitled169.png)\n\n![](../images/2020-11-06-r-2/Untitled170.png)\n\n![](../images/2020-11-06-r-2/Untitled171.png)\n\n![](../images/2020-11-06-r-2/Untitled172.png)\n\n## Problem\n\nUsing R's built-in iris data, draw the following boxplot.\n\n![](../images/2020-11-06-r-2/Untitled173.png)\n\n```r\nboxplot(Sepal.Length~Species, data=iris, xlab=\"Species\", ylab=\"Sepal Length\")\nboxplot(iris$Sepal.Length~iris$Species, xlab=\"Species\", ylab=\"Sepal Length\")\n```\n\n# Reference\n\n- [https://rfriend.tistory.com/20](https://rfriend.tistory.com/20)\n- [https://rfriend.tistory.com/124](https://rfriend.tistory.com/124)\n- [https://rfriend.tistory.com/125](https://rfriend.tistory.com/125?category=605867)\n- [https://jjoyling.tistory.com/31](https://jjoyling.tistory.com/31)", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown" ] ]
ecdf9a94aea75a2dfe2cc42b2e27cebb448a2df5
23,163
ipynb
Jupyter Notebook
estimating_sales.ipynb
oramirezperera/linear_regression
755780bd0855dd6dc819f5fb48c58f8ad5a6d252
[ "MIT" ]
null
null
null
estimating_sales.ipynb
oramirezperera/linear_regression
755780bd0855dd6dc819f5fb48c58f8ad5a6d252
[ "MIT" ]
null
null
null
estimating_sales.ipynb
oramirezperera/linear_regression
755780bd0855dd6dc819f5fb48c58f8ad5a6d252
[ "MIT" ]
null
null
null
94.930328
16,382
0.818806
[ [ [ "<a href=\"https://colab.research.google.com/github/oramirezperera/linear_regression/blob/main/estimating_sales.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "The sales of a company (in million dollars) for each year are shown in the table below.\nx (year)\t2015,\t2016,\t2017,\t2018,\t2019\n\ny (sales)\t12,\t19,\t29,\t37,\t45\n\na) Find the least square regression line y = a x + b.\nb) Use the least squares regression line as a model to estimate the sales of the company in 2022.", "_____no_output_____" ], [ "First of all we import our dependencies", "_____no_output_____" ], [ "## Dependencies", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "Then we create the function to calculate the linear regression manually", "_____no_output_____" ], [ "## Linear regression", "_____no_output_____" ] ], [ [ "def estimate_b0_b1(x, y):\n n = np.size(x)\n\n m_x, m_y = np.mean(x), np.mean(y) # m_x and m_y stands for x mean and y mean\n\n # calculate sum of XY and XX\n sum_xy = np.sum((x-m_x) * (y-m_y))\n sum_xx = np.sum((x-m_x)**2)\n\n # regression coefficients\n b1 = sum_xy / sum_xx\n b0 = m_y - b1*m_x\n\n return b0, b1", "_____no_output_____" ] ], [ [ "after that we create a graph function to see the linear regression we created", "_____no_output_____" ] ], [ [ "# graph function\ndef plot_regression(x, y, b):\n plt.scatter(x, y, color='b', marker='o', s=30)\n\n y_pred = b[0] + b[1] * x # pred for predictions\n plt.plot(x, y_pred, color='g')\n\n #label\n plt.xlabel('x years after 2005')\n plt.ylabel('y sells in million dollars')\n\n plt.show()", "_____no_output_____" ] ], [ [ "Then we create the main function with our data set and the ", "_____no_output_____" ] ], [ [ "def estimation_sales(year, b):\n \"\"\" this function gets the independent variable x, or the year we want to estimate the sales, and the parameter b of the linear regression\n and return the sales estimation for that year \"\"\"\n\n return b[0] + b[1] * x\n", "_____no_output_____" ], [ "\ndef main():\n # dataset\n # for the years in x we change the years 2015, 2016, etc to year - 2015\n x = np.array([0, 1, 2, 3, 4])\n y = np.array([12, 19, 29, 37, 45])\n\n # estimation b0 and b1\n b = estimate_b0_b1(x, y)\n print(f'The value of b0 is {b[0]}, and b1 is {b[1]}')\n\n year_to_estimate = 2022 # This is the year we want to estimate, change this number to change the year of estimation in the linear regression\n\n year = year_to_estimate - 2015 # Transforming the year we want to estimate to a simpler format\n\n sales = estimation_sales(year, b)\n\n print(f'The sales in the year {year_to_estimate} will be around {sales} million dollars')\n\n plot_regression(x, y, b) # b comes from b0 and b1\n \n \nmain()", "The value of b0 is 11.599999999999998, and b1 is 8.4\nThe sales in the year 2022 will be around 70.4 million dollars\n" ] ], [ [ "As we can see in the graph the value of b0 is around 11.5999 and the value of b1 is 8.4.\n\nWe used this values of b0 and b1 to estimate the sales in the year 2012, having a estimation of 70.4 million dollars of sales for that year.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
ecdf9fe3beba800d24316f631998e598ae8a710d
1,278
ipynb
Jupyter Notebook
notebooks/ieee_cog_2021/slide_2.ipynb
riveSunder/carles_game
8eff68cb7b2d87db3a9b160017632b714d309963
[ "MIT" ]
4
2021-04-30T21:27:47.000Z
2021-09-06T20:01:08.000Z
notebooks/ieee_cog_2021/slide_2.ipynb
riveSunder/carles_game
8eff68cb7b2d87db3a9b160017632b714d309963
[ "MIT" ]
5
2021-04-18T23:59:11.000Z
2021-06-17T02:41:48.000Z
notebooks/ieee_cog_2021/slide_2.ipynb
riveSunder/carles_game
8eff68cb7b2d87db3a9b160017632b714d309963
[ "MIT" ]
1
2021-07-25T02:59:29.000Z
2021-07-25T02:59:29.000Z
23.236364
187
0.545383
[ [ [ "# Cellular Automata Reinforcement Learning Environment (CARLE): Why? \n<br><br>\n## _With no intrinsic reward or natural episodes, CARLE hardly fits into the reinforcement learning paradigm._\n\n<br><br>\n<div align=\"center\">\n<img src=\"../../assets/life_sprayer.gif\" width=80%>\n<br>\n<h2>\"Sprayer\" spaceship found by Adam P. Goucher using software tools in May 2021 (<a href=\"https://conwaylife.com/wiki/Sprayer\">https://conwaylife.com/wiki/Sprayer</a>)</h2>\n</div>\n<br>\n \n\n\n", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown" ] ]
ecdfc3ea245fa46bf0368e554687ab6d615bfeab
1,014,149
ipynb
Jupyter Notebook
Exploratory Data Analysis/EDA-Retail.ipynb
Tech29-sam/tsf-tasks
723c0a7a2c88a9dd6ad12c6e4e176e4b7e54d91c
[ "Apache-2.0" ]
1
2021-09-05T13:42:41.000Z
2021-09-05T13:42:41.000Z
Exploratory Data Analysis/EDA-Retail.ipynb
Tech29-sam/tsf-tasks
723c0a7a2c88a9dd6ad12c6e4e176e4b7e54d91c
[ "Apache-2.0" ]
null
null
null
Exploratory Data Analysis/EDA-Retail.ipynb
Tech29-sam/tsf-tasks
723c0a7a2c88a9dd6ad12c6e4e176e4b7e54d91c
[ "Apache-2.0" ]
null
null
null
597.965212
122,928
0.944908
[ [ [ "## Saumya Pailwan\n### Data Science & Business Analytics Intern @ The Sparks Foundation\n### Task - 3: Exploratory Data Analysis - Retail Store", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline", "_____no_output_____" ], [ "data = pd.read_csv(\"SampleSuperstore.csv\")\ndata.head()", "_____no_output_____" ], [ "data.shape", "_____no_output_____" ], [ "data.isnull().any()", "_____no_output_____" ], [ "data.nunique()", "_____no_output_____" ], [ "data.dtypes", "_____no_output_____" ], [ "# Check for duplicate values\ndata.duplicated().sum()", "_____no_output_____" ], [ "# Dropping duplicate values\ndata.drop_duplicates(inplace=True)", "_____no_output_____" ], [ "data.shape", "_____no_output_____" ], [ "data.corr()", "_____no_output_____" ], [ "sns.heatmap(data.corr(),annot=True)", "_____no_output_____" ] ], [ [ "visualize the probability distribution of multiple samples in a single plot.\n", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(8,5))\nsns.kdeplot(data['Sales'],color='red',label='Sales',shade=True,bw=20)\nsns.kdeplot(data['Profit'],color='Black',label='Profit',shade=True,bw=20)\nplt.xlim([-200,1000])\nplt.legend()", "_____no_output_____" ], [ "# To find out Sales and Profit generated by the Superstore\n\nprint('Sales:' ,data['Sales'].sum())\nprint('Profit:' ,data['Profit'].sum())", "Sales: 2296195.5903\nProfit: 286241.4226\n" ] ], [ [ "#### Profit is more than that of sale but there are some areas where profit could be increased.", "_____no_output_____" ] ], [ [ "plt.style.use('seaborn')\ndata.plot(kind = 'scatter', figsize = (10,5) , x = 'Sales', y='Profit', c = 'Discount' , s = 20 , fontsize = 16 , colormap = 'plasma')\nplt.ylabel('TOTAL PROFITS', fontsize = 16)\nplt.title('DEPENDENCY OF SALES AND PROFIT ON DISCOUNT' , fontsize = 16)\nplt.show()", "_____no_output_____" ] ], [ [ "The above Scatterplot depicts that less the discount more is the Profits Discount is effecting profit to a certain extent and after that point Profits has no relation with Discount", "_____no_output_____" ] ], [ [ "fig,axs=plt.subplots(nrows=2,ncols=3,figsize=(20,10));\n\nsns.countplot(data['Ship Mode'],ax=axs[0][0])\nsns.countplot(data['Segment'],ax=axs[0][1])\nsns.countplot(data['Quantity'],ax=axs[0][2])\nsns.countplot(data['Category'],ax=axs[1][0])\nsns.countplot(data['Region'],ax=axs[1][1])\nsns.countplot(data['Discount'],ax=axs[1][2])\n\naxs[0][0].set_title('Ship Mode')\naxs[0][1].set_title('Segment')\naxs[0][2].set_title('Quantity')\naxs[1][0].set_title('Category')\naxs[1][1].set_title('Region')\naxs[1][2].set_title('Discount')\n\n\nplt.tight_layout()", "_____no_output_____" ], [ "plt.figure(figsize=(20,10))\nsns.countplot(data['Sub-Category'])\nplt.title('Sub-Category')", "_____no_output_____" ], [ "plt.figure(figsize=(20,10))\nsns.countplot(data['State'])\nplt.xticks(rotation=90)\nplt.title('State')", "_____no_output_____" ] ], [ [ "### Distribution of the data using the plot", "_____no_output_____" ] ], [ [ "fig, axs = plt.subplots(ncols=2, nrows = 2, figsize = (20,15))\nsns.distplot(data['Sales'], color = 'red', ax = axs[0][0])\nsns.distplot(data['Profit'], color = 'green', ax = axs[0][1])\nsns.distplot(data['Quantity'], color = 'orange', ax = axs[1][0])\nsns.distplot(data['Discount'], color = 'blue', ax = axs[1][1])\naxs[0][0].set_title('Sales')\naxs[0][1].set_title('Profit')\naxs[1][0].set_title('Quantity')\naxs[1][1].set_title('Discount')\nplt.show()", "_____no_output_____" ] ], [ [ 
"### Deal Analysis", "_____no_output_____" ] ], [ [ "state = data['State'].value_counts()", "_____no_output_____" ], [ "state.plot(kind='bar',figsize=(20,10),color=\"maroon\")\nplt.ylabel(' Number of deals')\nplt.xlabel('States')\n\nplt.title('State Wise Dealings')\nplt.show()", "_____no_output_____" ], [ "city = data['City'].value_counts()\ncity = city.head(50)\ncity.plot(kind='bar',figsize=(20,10),color=\"blue\")\nplt.ylabel(' Number of deals')\nplt.xlabel('States')\n\nplt.title('State Wise Dealings')\nplt.show()", "_____no_output_____" ] ], [ [ "### Customer Analysis", "_____no_output_____" ] ], [ [ "# To check maximum Sales and Profit in each segment\n\ndata.groupby('Segment')['Sales','Profit'].sum().plot.bar()\nplt.title('SALES AND PROFIT IN EACH SEGMENT')\nplt.legend()\nplt.show()", "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:3: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.\n This is separate from the ipykernel package so we can avoid doing imports until\n" ] ], [ [ "##### So the graph presents that Consumer segment is the one which has maximum buying capacity Also they give maximum profit to Superstore whereas Home Office purchases less and add less profit to business\n\n##### Now we will check Ship Mode Segment wise", "_____no_output_____" ] ], [ [ "# To check this we will use countplot \n\nsns.countplot(x='Segment' , hue='Ship Mode' , data=data)\nplt.show()", "_____no_output_____" ] ], [ [ "In each segment most of the transaction has been shipped under Standard Class", "_____no_output_____" ] ], [ [ "data['Ship Mode'].value_counts()", "_____no_output_____" ], [ "shipmode = data.groupby(['Ship Mode'])[['Sales', 'Discount', 'Profit']].mean()", "_____no_output_____" ], [ "shipmode.plot.pie(subplots=True,\n figsize=(18, 20), \n autopct='%1.1f%%', \n labels = shipmode.index)", "_____no_output_____" ] ], [ [ "Profit and Discount is high in First Class\nSales is high for Same day ship", "_____no_output_____" ], [ "#### PRODUCT ANALYSIS", "_____no_output_____" ] ], [ [ "data.groupby('Category')['Sales','Profit'].sum().plot.bar()\nplt.title('PROFIT AND SALES CATEGORY WISE')\nplt.legend(loc = 1)\nplt.show()", "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.\n \"\"\"Entry point for launching an IPython kernel.\n" ], [ "category = data.groupby(['Category'])[['Sales', 'Discount', 'Profit']].mean()\ncategory", "_____no_output_____" ], [ "category.plot.pie(subplots=True, \n figsize=(18, 20), \n autopct='%1.1f%%', \n labels = category.index)", "_____no_output_____" ] ], [ [ "This Bar Plot shows that Technology has given maximum sales subsequently Profit was also maximum. But not following this trend Furniture also had sales at great amount following with least amount of profit.", "_____no_output_____" ], [ "So we have sub categories of Furniture which are Bookcases,Chairs,Furnishings and Tables.\nWith this Bar Plot we can conclude that irrespective of high sales in Tables and Bookcases the store is incurring loss. 
"##### The graph shows that the Consumer segment has the maximum buying capacity and also gives the maximum profit to the Superstore, whereas Home Office purchases less and adds less profit to the business.\n\n##### Now we will check the Ship Mode segment-wise.", "_____no_output_____" ] ], [ [ "# To check this we will use countplot \n\nsns.countplot(x='Segment' , hue='Ship Mode' , data=data)\nplt.show()", "_____no_output_____" ] ], [ [ "In each segment, most of the transactions have been shipped under Standard Class.", "_____no_output_____" ] ], [ [ "data['Ship Mode'].value_counts()", "_____no_output_____" ], [ "shipmode = data.groupby(['Ship Mode'])[['Sales', 'Discount', 'Profit']].mean()", "_____no_output_____" ], [ "shipmode.plot.pie(subplots=True,\n                 figsize=(18, 20), \n                 autopct='%1.1f%%', \n                 labels = shipmode.index)", "_____no_output_____" ] ], [ [ "Profit and discount are highest for First Class.\nSales are highest for Same Day shipping.", "_____no_output_____" ], [ "#### PRODUCT ANALYSIS", "_____no_output_____" ] ], [ [ "data.groupby('Category')[['Sales','Profit']].sum().plot.bar()\nplt.title('PROFIT AND SALES CATEGORY WISE')\nplt.legend(loc = 1)\nplt.show()", "_____no_output_____" ], [ "category = data.groupby(['Category'])[['Sales', 'Discount', 'Profit']].mean()\ncategory", "_____no_output_____" ], [ "category.plot.pie(subplots=True, \n                 figsize=(18, 20), \n                 autopct='%1.1f%%', \n                 labels = category.index)", "_____no_output_____" ] ], [ [ "This bar plot shows that Technology produced the maximum sales and, consequently, the maximum profit. Furniture does not follow this trend: it also had a large amount of sales but the least amount of profit.", "_____no_output_____" ], [ "So we have the sub-categories of Furniture, which are Bookcases, Chairs, Furnishings and Tables.\nWith this bar plot we can conclude that despite high sales in Tables and Bookcases, the store is incurring a loss. 
\nThis loss is affecting the whole Furniture category.\n\nNow we need to check why, despite high sales, we are incurring a loss.", "_____no_output_____" ] ], [ [ "sub_category = data.groupby(['Sub-Category'])[['Sales', 'Discount', 'Profit']].mean()\nsub_category.head()", "_____no_output_____" ], [ "plt.figure(figsize = (15,15))\nplt.pie(sub_category['Sales'], labels = sub_category.index, autopct = '%1.1f%%')\nplt.title('Sub-Category Wise Sales Analysis', fontsize = 20)\nplt.legend()\nplt.xticks(rotation = 90)\nplt.show()", "_____no_output_____" ], [ "plt.figure(figsize = (15,15))\nplt.pie(sub_category['Discount'], labels = sub_category.index, autopct = '%1.1f%%')\nplt.title('Sub-Category Wise Discount Analysis', fontsize = 20)\nplt.legend()\nplt.xticks(rotation = 90)\nplt.show()", "_____no_output_____" ], [ "sub_category.sort_values('Profit')[['Sales','Profit']].plot(kind='bar',\n                                                    figsize= (10,5),\n                                                    label=['Avg Sales Price($)','Profit($)'])", "_____no_output_____" ] ], [ [ "We concluded that, despite the maximum discount being given on Tables and Bookcases, the store is incurring losses on them.\n\nSince sales are at a maximum and a discount is also given, we will now check the correlation between the two.", "_____no_output_____" ] ], [ [ "data[data['Category'] == 'Furniture'].groupby('Sub-Category')[['Sales','Profit']].sum().plot.bar()\nplt.title('SALES AND PROFIT FURNITURE CATEGORY WISE')\nplt.legend(loc = 1)\nplt.show()", "_____no_output_____" ], [ "# To check the probable reason of loss \n\ndata[data['Category'] == 'Furniture'].groupby('Sub-Category')['Discount'].mean().plot.bar()\nplt.title('DISCOUNT GIVEN IN FURNITURE CATEGORY')\nplt.legend(loc = 0)\nplt.show()\n", "_____no_output_____" ] ], [ [ "From the heatmap above we concluded that there is a negative correlation between Profit and Discount,\nwhereas there is a positive correlation between Profit and Sales.", "_____no_output_____" ], [ "#### TOP PRODUCTS", "_____no_output_____" ] ], [ [ "# Now we will check the Top Products Sold\n\ndata.groupby('Sub-Category')['Sales'].sum().sort_values(ascending=False).plot.bar()\nplt.show()", "_____no_output_____" ] ], [ [ "With this we concluded that Phones, Chairs, Storage, Tables and Binders are sold the most, in that order,\nwhereas Fasteners, Labels and Envelopes were sold the least.", "_____no_output_____" ] ], [ [ "#To check the profit earned in all the Sub-Categories\n\ndata.groupby('Sub-Category')['Profit'].sum().sort_values(ascending = False).plot.bar(color = 'brown')\nplt.show()", "_____no_output_____" ] ], [ [ "Here we saw that Copiers, Phones and Accessories are the top profit-giving products for the store,\nwhereas the store is incurring losses due to Tables, Bookcases and Supplies.", "_____no_output_____" ], [ "### REGIONAL ANALYSIS", "_____no_output_____" ] ], [ [ "# To check maximum transactions made region-wise\n\ndata.Region.value_counts().plot.pie(autopct=\"%.1f%%\")\nplt.show()", "_____no_output_____" ], [ "region = data.groupby(['Region'])[['Sales', 'Discount', 'Profit']].mean()\nregion", "_____no_output_____" ], [ "region.plot.pie(subplots=True, \n                 figsize=(18, 20), \n                 autopct='%1.1f%%',\n                 labels = region.index)", "_____no_output_____" ], [ "data.groupby('Region')[['Sales','Profit']].sum().plot.bar()\nplt.title('SALES AND PROFITS IN EACH REGION')\nplt.legend()\nplt.show()", "_____no_output_____" ] ], [ [ "### OBSERVATIONS:\n\n1) MAXIMUM SALES, PROFITS and TRANSACTIONS were made in the WEST REGION\n\n2) MAXIMUM SALES AND PROFIT in the CONSUMER SEGMENT\n\n3) MAXIMUM TRANSACTIONS were shipped in STANDARD CLASS irrespective of SEGMENT\n\n4) LEAST PROFIT is incurred in the FURNITURE CATEGORY despite a good amount of Sales\n\n5) Within FURNITURE, the TABLES and BOOKCASES sub-categories are INCURRING LOSSES, which affects the TOTAL PROFIT of the Furniture category\n\n6) A HIGH DISCOUNT is being offered on TABLES and BOOKCASES, which is probably a reason for the losses\n\n7) POSITIVE CORRELATION: Profit and Sales; NEGATIVE CORRELATION: Profit and Discount", "_____no_output_____" ], [ "### CONCLUSION:\n\nFrom the above observations we conclude that the FURNITURE category is the \"weak area\" we need to work on. \n\nSo we need to REDUCE the DISCOUNT in order to INCREASE the PROFIT, or increase the price of the products.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "raw", "code", "raw", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "raw", "raw" ], [ "code", "code", "code", "code" ], [ "raw" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]
ecdfc57aed27e518913a74c0dcf0c43d48947a99
73,776
ipynb
Jupyter Notebook
3-Natural-Lanugage-Processing-in-TensorFlow/week2-word-embeddings/Course_3_Week_2_Exercise/Course_3_Week_2_Exercise_Question.ipynb
pranshuag9/DeepLearningAI-Tensorflow-Developer
effe85e27620b9f911933819649186219a23e838
[ "Apache-2.0" ]
1
2020-09-21T22:04:58.000Z
2020-09-21T22:04:58.000Z
3-Natural-Lanugage-Processing-in-TensorFlow/week2-word-embeddings/Course_3_Week_2_Exercise/Course_3_Week_2_Exercise_Question.ipynb
pranshuag9/DeepLearningAI-Tensorflow-Developer
effe85e27620b9f911933819649186219a23e838
[ "Apache-2.0" ]
null
null
null
3-Natural-Lanugage-Processing-in-TensorFlow/week2-word-embeddings/Course_3_Week_2_Exercise/Course_3_Week_2_Exercise_Question.ipynb
pranshuag9/DeepLearningAI-Tensorflow-Developer
effe85e27620b9f911933819649186219a23e838
[ "Apache-2.0" ]
1
2020-11-11T23:52:41.000Z
2020-11-11T23:52:41.000Z
91.081481
18,726
0.743358
[ [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "<a href=\"https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/TensorFlow%20In%20Practice/Course%203%20-%20NLP/Course%203%20-%20Week%202%20-%20Exercise%20-%20Question.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "import csv\nimport tensorflow as tf\nimport numpy as np\nfrom tensorflow.keras.preprocessing.text import Tokenizer\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences\n\n!wget --no-check-certificate \\\n https://storage.googleapis.com/laurencemoroney-blog.appspot.com/bbc-text.csv \\\n -O /tmp/bbc-text.csv", "--2020-09-16 17:44:46-- https://storage.googleapis.com/laurencemoroney-blog.appspot.com/bbc-text.csv\nResolving storage.googleapis.com (storage.googleapis.com)... 173.194.214.128, 173.194.216.128, 64.233.170.128, ...\nConnecting to storage.googleapis.com (storage.googleapis.com)|173.194.214.128|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 5057493 (4.8M) [application/octet-stream]\nSaving to: โ€˜/tmp/bbc-text.csvโ€™\n\n\r/tmp/bbc-text.csv 0%[ ] 0 --.-KB/s \r/tmp/bbc-text.csv 100%[===================>] 4.82M --.-KB/s in 0.03s \n\n2020-09-16 17:44:46 (166 MB/s) - โ€˜/tmp/bbc-text.csvโ€™ saved [5057493/5057493]\n\n" ], [ "vocab_size = 1000\nembedding_dim = 16\nmax_length = 120\ntrunc_type = 'post'\npadding_type = 'post'\noov_tok = '<OOVK>'\ntraining_portion = .8", "_____no_output_____" ], [ "sentences = []\nlabels = []\nstopwords = [ \"a\", \"about\", \"above\", \"after\", \"again\", \"against\", \"all\", \"am\", \"an\", \"and\", \"any\", \"are\", \"as\", \"at\", \"be\", \"because\", \"been\", \"before\", \"being\", \"below\", \"between\", \"both\", \"but\", \"by\", \"could\", \"did\", \"do\", \"does\", \"doing\", \"down\", \"during\", \"each\", \"few\", \"for\", \"from\", \"further\", \"had\", \"has\", \"have\", \"having\", \"he\", \"he'd\", \"he'll\", \"he's\", \"her\", \"here\", \"here's\", \"hers\", \"herself\", \"him\", \"himself\", \"his\", \"how\", \"how's\", \"i\", \"i'd\", \"i'll\", \"i'm\", \"i've\", \"if\", \"in\", \"into\", \"is\", \"it\", \"it's\", \"its\", \"itself\", \"let's\", \"me\", \"more\", \"most\", \"my\", \"myself\", \"nor\", \"of\", \"on\", \"once\", \"only\", \"or\", \"other\", \"ought\", \"our\", \"ours\", \"ourselves\", \"out\", \"over\", \"own\", \"same\", \"she\", \"she'd\", \"she'll\", \"she's\", \"should\", \"so\", \"some\", \"such\", \"than\", \"that\", \"that's\", \"the\", \"their\", \"theirs\", \"them\", \"themselves\", \"then\", \"there\", \"there's\", \"these\", \"they\", \"they'd\", \"they'll\", \"they're\", \"they've\", \"this\", \"those\", \"through\", \"to\", \"too\", \"under\", \"until\", \"up\", \"very\", \"was\", \"we\", \"we'd\", \"we'll\", \"we're\", \"we've\", \"were\", \"what\", \"what's\", \"when\", \"when's\", \"where\", \"where's\", \"which\", \"while\", \"who\", 
\"who's\", \"whom\", \"why\", \"why's\", \"with\", \"would\", \"you\", \"you'd\", \"you'll\", \"you're\", \"you've\", \"your\", \"yours\", \"yourself\", \"yourselves\" ]\nprint(len(stopwords))\n# Expected Output\n# 153", "153\n" ], [ "with open(\"/tmp/bbc-text.csv\", 'r') as csvfile:\n fr = csv.reader(csvfile)\n next(fr, None)\n for row in fr:\n labels.append(row[0])\n sentence = row[1]\n for stopword in stopwords:\n stopword = \" \"+stopword+\" \"\n sentence = sentence.replace(stopword,\" \")\n \n sentences.append(sentence)\n \nprint(len(labels))\nprint(len(sentences))\nprint(sentences[0])\n# Expected Output\n# 2225\n# 2225\n# tv future hands viewers home theatre systems plasma high-definition tvs digital video recorders moving living room way people watch tv will radically different five years time. according expert panel gathered annual consumer electronics show las vegas discuss new technologies will impact one favourite pastimes. us leading trend programmes content will delivered viewers via home networks cable satellite telecoms companies broadband service providers front rooms portable devices. one talked-about technologies ces digital personal video recorders (dvr pvr). set-top boxes like us s tivo uk s sky+ system allow people record store play pause forward wind tv programmes want. essentially technology allows much personalised tv. also built-in high-definition tv sets big business japan us slower take off europe lack high-definition programming. not can people forward wind adverts can also forget abiding network channel schedules putting together a-la-carte entertainment. us networks cable satellite companies worried means terms advertising revenues well brand identity viewer loyalty channels. although us leads technology moment also concern raised europe particularly growing uptake services like sky+. happens today will see nine months years time uk adam hume bbc broadcast s futurologist told bbc news website. likes bbc no issues lost advertising revenue yet. pressing issue moment commercial uk broadcasters brand loyalty important everyone. will talking content brands rather network brands said tim hanlon brand communications firm starcom mediavest. reality broadband connections anybody can producer content. added: challenge now hard promote programme much choice. means said stacey jolna senior vice president tv guide tv group way people find content want watch simplified tv viewers. means networks us terms channels take leaf google s book search engine future instead scheduler help people find want watch. kind channel model might work younger ipod generation used taking control gadgets play them. might not suit everyone panel recognised. older generations comfortable familiar schedules channel brands know getting. perhaps not want much choice put hands mr hanlon suggested. end kids just diapers pushing buttons already - everything possible available said mr hanlon. ultimately consumer will tell market want. 50 000 new gadgets technologies showcased ces many enhancing tv-watching experience. high-definition tv sets everywhere many new models lcd (liquid crystal display) tvs launched dvr capability built instead external boxes. one example launched show humax s 26-inch lcd tv 80-hour tivo dvr dvd recorder. one us s biggest satellite tv companies directtv even launched branded dvr show 100-hours recording capability instant replay search function. set can pause rewind tv 90 hours. 
microsoft chief bill gates announced pre-show keynote speech partnership tivo called tivotogo means people can play recorded programmes windows pcs mobile devices. reflect increasing trend freeing multimedia people can watch want want.", "2225\n2225\ntv future hands viewers home theatre systems plasma high-definition tvs digital video recorders moving living room way people watch tv will radically different five years time. according expert panel gathered annual consumer electronics show las vegas discuss new technologies will impact one favourite pastimes. us leading trend programmes content will delivered viewers via home networks cable satellite telecoms companies broadband service providers front rooms portable devices. one talked-about technologies ces digital personal video recorders (dvr pvr). set-top boxes like us s tivo uk s sky+ system allow people record store play pause forward wind tv programmes want. essentially technology allows much personalised tv. also built-in high-definition tv sets big business japan us slower take off europe lack high-definition programming. not can people forward wind adverts can also forget abiding network channel schedules putting together a-la-carte entertainment. us networks cable satellite companies worried means terms advertising revenues well brand identity viewer loyalty channels. although us leads technology moment also concern raised europe particularly growing uptake services like sky+. happens today will see nine months years time uk adam hume bbc broadcast s futurologist told bbc news website. likes bbc no issues lost advertising revenue yet. pressing issue moment commercial uk broadcasters brand loyalty important everyone. will talking content brands rather network brands said tim hanlon brand communications firm starcom mediavest. reality broadband connections anybody can producer content. added: challenge now hard promote programme much choice. means said stacey jolna senior vice president tv guide tv group way people find content want watch simplified tv viewers. means networks us terms channels take leaf google s book search engine future instead scheduler help people find want watch. kind channel model might work younger ipod generation used taking control gadgets play them. might not suit everyone panel recognised. older generations comfortable familiar schedules channel brands know getting. perhaps not want much choice put hands mr hanlon suggested. end kids just diapers pushing buttons already - everything possible available said mr hanlon. ultimately consumer will tell market want. 50 000 new gadgets technologies showcased ces many enhancing tv-watching experience. high-definition tv sets everywhere many new models lcd (liquid crystal display) tvs launched dvr capability built instead external boxes. one example launched show humax s 26-inch lcd tv 80-hour tivo dvr dvd recorder. one us s biggest satellite tv companies directtv even launched branded dvr show 100-hours recording capability instant replay search function. set can pause rewind tv 90 hours. microsoft chief bill gates announced pre-show keynote speech partnership tivo called tivotogo means people can play recorded programmes windows pcs mobile devices. 
reflect increasing trend freeing multimedia people can watch want want.\n" ], [ "train_size = int(len(sentences) * training_portion)\n\ntrain_sentences = sentences[ : train_size]\ntrain_labels = labels[ : train_size]\n\nvalidation_sentences = sentences[ train_size : ]\nvalidation_labels = labels[ train_size : ]\n\nprint(train_size)\nprint(len(train_sentences))\nprint(len(train_labels))\nprint(len(validation_sentences))\nprint(len(validation_labels))\n\n# Expected output (if training_portion=.8)\n# 1780\n# 1780\n# 1780\n# 445\n# 445", "1780\n1780\n1780\n445\n445\n" ], [ "tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=vocab_size, oov_token='<OOV>')\ntokenizer.fit_on_texts(sentences)\nword_index = tokenizer.word_index\n\ntrain_sequences = tokenizer.texts_to_sequences(train_sentences)\ntrain_padded = tf.keras.preprocessing.sequence.pad_sequences(train_sequences, maxlen=max_length, padding=padding_type)\n\nprint(len(train_sequences[0]))\nprint(len(train_padded[0]))\n\nprint(len(train_sequences[1]))\nprint(len(train_padded[1]))\n\nprint(len(train_sequences[10]))\nprint(len(train_padded[10]))\n\n# Expected Ouput\n# 449\n# 120\n# 200\n# 120\n# 192\n# 120", "449\n120\n200\n120\n192\n120\n" ], [ "validation_sequences = tokenizer.texts_to_sequences(validation_sentences)\nvalidation_padded = tf.keras.preprocessing.sequence.pad_sequences(validation_sequences, maxlen=max_length, padding=padding_type)\n\nprint(len(validation_sequences))\nprint(validation_padded.shape)\n\n# Expected output\n# 445\n# (445, 120)", "445\n(445, 120)\n" ], [ "label_tokenizer = Tokenizer()\nlabel_tokenizer.fit_on_texts(labels)\n\ntraining_label_seq = np.array(label_tokenizer.texts_to_sequences(train_labels))\nvalidation_label_seq = np.array(label_tokenizer.texts_to_sequences(validation_labels))\n\nprint(training_label_seq[0])\nprint(training_label_seq[1])\nprint(training_label_seq[2])\nprint(training_label_seq.shape)\n\nprint(validation_label_seq[0])\nprint(validation_label_seq[1])\nprint(validation_label_seq[2])\nprint(validation_label_seq.shape)\n\n# Expected output\n# [4]\n# [2]\n# [1]\n# (1780, 1)\n# [5]\n# [4]\n# [3]\n# (445, 1)", "[4]\n[2]\n[1]\n(1780, 1)\n[5]\n[4]\n[3]\n(445, 1)\n" ], [ "model = tf.keras.Sequential([\n tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),\n tf.keras.layers.GlobalAveragePooling1D(),\n tf.keras.layers.Dense(24, activation='relu'),\n tf.keras.layers.Dense(6, activation='softmax')\n])\nmodel.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy'])\nmodel.summary()\n\n# Expected Output\n# Layer (type) Output Shape Param # \n# =================================================================\n# embedding (Embedding) (None, 120, 16) 16000 \n# _________________________________________________________________\n# global_average_pooling1d (Gl (None, 16) 0 \n# _________________________________________________________________\n# dense (Dense) (None, 24) 408 \n# _________________________________________________________________\n# dense_1 (Dense) (None, 6) 150 \n# =================================================================\n# Total params: 16,558\n# Trainable params: 16,558\n# Non-trainable params: 0", "Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nembedding (Embedding) (None, 120, 16) 16000 \n_________________________________________________________________\nglobal_average_pooling1d 
(Gl (None, 16) 0 \n_________________________________________________________________\ndense (Dense) (None, 24) 408 \n_________________________________________________________________\ndense_1 (Dense) (None, 6) 150 \n=================================================================\nTotal params: 16,558\nTrainable params: 16,558\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "num_epochs = 30\nhistory = model.fit(train_padded, training_label_seq, epochs=num_epochs, validation_data=(validation_padded, validation_label_seq), verbose=2)", "Epoch 1/30\n56/56 - 0s - loss: 1.7631 - accuracy: 0.1944 - val_loss: 1.7301 - val_accuracy: 0.2989\nEpoch 2/30\n56/56 - 0s - loss: 1.6897 - accuracy: 0.4084 - val_loss: 1.6492 - val_accuracy: 0.4989\nEpoch 3/30\n56/56 - 0s - loss: 1.6056 - accuracy: 0.4629 - val_loss: 1.5679 - val_accuracy: 0.4854\nEpoch 4/30\n56/56 - 0s - loss: 1.5189 - accuracy: 0.4860 - val_loss: 1.4726 - val_accuracy: 0.5820\nEpoch 5/30\n56/56 - 0s - loss: 1.4035 - accuracy: 0.6421 - val_loss: 1.3412 - val_accuracy: 0.7258\nEpoch 6/30\n56/56 - 0s - loss: 1.2509 - accuracy: 0.7180 - val_loss: 1.1789 - val_accuracy: 0.7528\nEpoch 7/30\n56/56 - 0s - loss: 1.0773 - accuracy: 0.7539 - val_loss: 1.0169 - val_accuracy: 0.7933\nEpoch 8/30\n56/56 - 0s - loss: 0.9169 - accuracy: 0.8118 - val_loss: 0.8744 - val_accuracy: 0.8090\nEpoch 9/30\n56/56 - 0s - loss: 0.7795 - accuracy: 0.8163 - val_loss: 0.7574 - val_accuracy: 0.8652\nEpoch 10/30\n56/56 - 0s - loss: 0.6636 - accuracy: 0.8938 - val_loss: 0.6597 - val_accuracy: 0.8719\nEpoch 11/30\n56/56 - 0s - loss: 0.5670 - accuracy: 0.9152 - val_loss: 0.5795 - val_accuracy: 0.8854\nEpoch 12/30\n56/56 - 0s - loss: 0.4857 - accuracy: 0.9275 - val_loss: 0.5141 - val_accuracy: 0.8966\nEpoch 13/30\n56/56 - 0s - loss: 0.4177 - accuracy: 0.9371 - val_loss: 0.4605 - val_accuracy: 0.9169\nEpoch 14/30\n56/56 - 0s - loss: 0.3593 - accuracy: 0.9472 - val_loss: 0.4169 - val_accuracy: 0.9191\nEpoch 15/30\n56/56 - 0s - loss: 0.3100 - accuracy: 0.9551 - val_loss: 0.3750 - val_accuracy: 0.9191\nEpoch 16/30\n56/56 - 0s - loss: 0.2659 - accuracy: 0.9607 - val_loss: 0.3431 - val_accuracy: 0.9213\nEpoch 17/30\n56/56 - 0s - loss: 0.2299 - accuracy: 0.9657 - val_loss: 0.3179 - val_accuracy: 0.9191\nEpoch 18/30\n56/56 - 0s - loss: 0.1998 - accuracy: 0.9708 - val_loss: 0.2979 - val_accuracy: 0.9281\nEpoch 19/30\n56/56 - 0s - loss: 0.1759 - accuracy: 0.9736 - val_loss: 0.2803 - val_accuracy: 0.9213\nEpoch 20/30\n56/56 - 0s - loss: 0.1572 - accuracy: 0.9798 - val_loss: 0.2670 - val_accuracy: 0.9236\nEpoch 21/30\n56/56 - 0s - loss: 0.1408 - accuracy: 0.9770 - val_loss: 0.2585 - val_accuracy: 0.9258\nEpoch 22/30\n56/56 - 0s - loss: 0.1258 - accuracy: 0.9831 - val_loss: 0.2484 - val_accuracy: 0.9281\nEpoch 23/30\n56/56 - 0s - loss: 0.1127 - accuracy: 0.9854 - val_loss: 0.2410 - val_accuracy: 0.9281\nEpoch 24/30\n56/56 - 0s - loss: 0.1021 - accuracy: 0.9865 - val_loss: 0.2349 - val_accuracy: 0.9303\nEpoch 25/30\n56/56 - 0s - loss: 0.0927 - accuracy: 0.9871 - val_loss: 0.2288 - val_accuracy: 0.9326\nEpoch 26/30\n56/56 - 0s - loss: 0.0842 - accuracy: 0.9888 - val_loss: 0.2247 - val_accuracy: 0.9326\nEpoch 27/30\n56/56 - 0s - loss: 0.0771 - accuracy: 0.9921 - val_loss: 0.2201 - val_accuracy: 0.9326\nEpoch 28/30\n56/56 - 0s - loss: 0.0706 - accuracy: 0.9933 - val_loss: 0.2162 - val_accuracy: 0.9348\nEpoch 29/30\n56/56 - 0s - loss: 0.0641 - accuracy: 0.9933 - val_loss: 0.2134 - val_accuracy: 0.9326\nEpoch 30/30\n56/56 - 0s - loss: 0.0584 - 
accuracy: 0.9949 - val_loss: 0.2108 - val_accuracy: 0.9326\n" ], [ "import matplotlib.pyplot as plt\n\n\ndef plot_graphs(history, string):\n plt.plot(history.history[string])\n plt.plot(history.history['val_'+string])\n plt.xlabel(\"Epochs\")\n plt.ylabel(string)\n plt.legend([string, 'val_'+string])\n plt.show()\n \nplot_graphs(history, \"accuracy\")\nplot_graphs(history, \"loss\")", "_____no_output_____" ], [ "reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])\n\ndef decode_sentence(text):\n return ' '.join([reverse_word_index.get(i, '?') for i in text])\n", "_____no_output_____" ], [ "e = model.layers[0]\nweights = e.get_weights()[0]\nprint(weights.shape) # shape: (vocab_size, embedding_dim)\n\n# Expected output\n# (1000, 16)", "(1000, 16)\n" ], [ "import io\n\nout_v = io.open('vecs.tsv', 'w', encoding='utf-8')\nout_m = io.open('meta.tsv', 'w', encoding='utf-8')\nfor word_num in range(1, vocab_size):\n word = reverse_word_index[word_num]\n embeddings = weights[word_num]\n out_m.write(word + \"\\n\")\n out_v.write('\\t'.join([str(x) for x in embeddings]) + \"\\n\")\nout_v.close()\nout_m.close()", "_____no_output_____" ], [ "try:\n from google.colab import files\nexcept ImportError:\n pass\nelse:\n files.download('vecs.tsv')\n files.download('meta.tsv')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
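The classification walkthrough in the record above ends by exporting `vecs.tsv` and `meta.tsv` for the embedding projector, but never shows inference on new text. The sketch below illustrates that last step; it assumes the notebook's `tokenizer`, `label_tokenizer`, `model`, `max_length`, and `padding_type` are still in scope, and the reverse label lookup is an illustrative addition, not part of the original notebook.

```python
import numpy as np
import tensorflow as tf

def predict_category(text):
    # Reuse the fitted tokenizer so new text maps to the same vocabulary
    seq = tokenizer.texts_to_sequences([text])
    padded = tf.keras.preprocessing.sequence.pad_sequences(
        seq, maxlen=max_length, padding=padding_type)
    probs = model.predict(padded)[0]      # softmax over the 6 output units
    label_id = int(np.argmax(probs))      # labels were tokenized starting at 1
    id_to_label = {v: k for k, v in label_tokenizer.word_index.items()}
    return id_to_label.get(label_id, "unknown"), float(probs[label_id])

print(predict_category("stock markets rally as tech shares surge"))
```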
ecdfcbbd9f5400b488d7bc39c947fda0a081d9e7
3,572
ipynb
Jupyter Notebook
_data_analysis/ipynb/NewsProcessing_2.ipynb
info5900groupG/dataishumantool
3e17f6287cdd85a4eaf7cf077b2ec05b46bddc60
[ "MIT" ]
null
null
null
_data_analysis/ipynb/NewsProcessing_2.ipynb
info5900groupG/dataishumantool
3e17f6287cdd85a4eaf7cf077b2ec05b46bddc60
[ "MIT" ]
null
null
null
_data_analysis/ipynb/NewsProcessing_2.ipynb
info5900groupG/dataishumantool
3e17f6287cdd85a4eaf7cf077b2ec05b46bddc60
[ "MIT" ]
null
null
null
28.806452
212
0.512318
[ [ [ "import pandas as pd\nfrom sklearn.feature_extraction.text import CountVectorizer\nimport nltk\nimport re\n\npunctuation = re.compile(r'[0-9]')\nfrom nltk.sentiment.vader import SentimentIntensityAnalyzer\nsid = SentimentIntensityAnalyzer()\n\ndef read_data(path):\n old_data = pd.DataFrame.from_csv(path) #take first column as index\n train = old_data.head(n=100)\n \n # combine all the strings of each tuple\n train2 = train[['Label']].copy()\n \n #all the new column\n# new_column = []\n pos = []\n neg = []\n compound = []\n neutral = []\n# example = \"\"\n example_list = []\n for row in train.itertuples():\n for i in range(2,27):\n# example = example + row[i]\n example_list.append(row[i])\n temp1 = \" \"\n example = temp1.join(example_list)\n \n #process example\n# print (example)\n# example2 = example.lower()\n example3 = CountVectorizer().build_tokenizer()(example)\n example4 = [punctuation.sub(\"\", word) for word in example3]\n temp = \" \"\n example5 = temp.join(example4)\n# print(example5)\n result = sid.polarity_scores(example5)\n pos.append(result['pos'])\n neg.append(result['neg'])\n compound.append(result['compound'])\n neutral.append(result['neu'])\n# new_column.append(example)\n# example = \"\"\n example_list = []\n \n# train2['news']=new_column\n train2['pos']=pos\n train2['neg']=neg\n train2['compound']=compound\n train2['neutral']=neutral\n return train2\n\ndata = read_data(\"./Documents/Cornell/Courses/MPS Project/Combined_News_DJIA.csv\")\ndata.to_csv(\"./Documents/Cornell/Courses/MPS Project/data_after_polarity.csv\")\nprint(\"Done!\")\n\n\n\n", "/Users/siqi/anaconda/lib/python2.7/site-packages/nltk/twitter/__init__.py:20: UserWarning: The twython library has not been installed. Some functionality from the twitter package will not be available.\n warnings.warn(\"The twython library has not been installed. \"\n" ] ] ]
[ "code" ]
[ [ "code" ] ]
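The single cell in the record above loops over `itertuples` to join the 25 headline columns of each row, strip digits, and score the combined text with VADER. A vectorized variant of the same idea is sketched below; the file name and the positional column layout (label in the first column, 25 headline columns after it) are assumptions carried over from that cell, not guarantees about the dataset.

```python
import re
import pandas as pd
from nltk.sentiment.vader import SentimentIntensityAnalyzer

sid = SentimentIntensityAnalyzer()
digits = re.compile(r"[0-9]")  # note: the original regex removes digits, not punctuation

def score_row(row):
    # Join the headline cells for one day and score the digit-free text
    text = digits.sub("", " ".join(str(cell) for cell in row))
    return pd.Series(sid.polarity_scores(text))  # keys: neg, neu, pos, compound

df = pd.read_csv("Combined_News_DJIA.csv", index_col=0)
scores = df.iloc[:, 1:26].apply(score_row, axis=1)
pd.concat([df[["Label"]], scores], axis=1).to_csv("data_after_polarity.csv")
```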
ecdfd1788a51a0924feb90f8fc4d341a5369d32b
1,481
ipynb
Jupyter Notebook
ipynb/U^2-NetP_test.ipynb
svwriting/PhantomCaptcher
7b90d92766a7b0380f6f628304091b8b3aaa37f8
[ "MIT" ]
null
null
null
ipynb/U^2-NetP_test.ipynb
svwriting/PhantomCaptcher
7b90d92766a7b0380f6f628304091b8b3aaa37f8
[ "MIT" ]
null
null
null
ipynb/U^2-NetP_test.ipynb
svwriting/PhantomCaptcher
7b90d92766a7b0380f6f628304091b8b3aaa37f8
[ "MIT" ]
null
null
null
1,481
1,481
0.675895
[ [ [ "!pip3 install boxx", "_____no_output_____" ] ], [ [ "================================ I am the divider line ======================================\r\n# U^2-Net", "_____no_output_____" ] ], [ [ "# %cd drive/MyDrive/TibaMe/'Phantom Captcher'\r\n# !git clone https://github.com/NathanUA/U-2-Net.git", "_____no_output_____" ], [ "# %cd /content/drive/MyDrive/TibaMe/'Phantom Captcher'/U-2-Net\r\n!python /content/drive/MyDrive/TibaMe/'Phantom Captcher'/U-2-Net/u2netp_train.py\r\n# %cd /content/", "_____no_output_____" ], [ "# %cd /content/drive/MyDrive/TibaMe/'Phantom Captcher'/U-2-Net\r\n!python /content/drive/MyDrive/TibaMe/'Phantom Captcher'/U-2-Net/u2netp_test.py\r\n# %cd /content/", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
ecdfd56c90d4019b9b74a4f140e5960d0c71c2e3
100,351
ipynb
Jupyter Notebook
tutorials/Certification_Trainings/Healthcare/4.7.Clinical_Deidentification_in_Portuguese.ipynb
iamvarol/spark-nlp-workshop
73a9064bd47d4dc0692f0297748eb43cd094aabd
[ "Apache-2.0" ]
null
null
null
tutorials/Certification_Trainings/Healthcare/4.7.Clinical_Deidentification_in_Portuguese.ipynb
iamvarol/spark-nlp-workshop
73a9064bd47d4dc0692f0297748eb43cd094aabd
[ "Apache-2.0" ]
null
null
null
tutorials/Certification_Trainings/Healthcare/4.7.Clinical_Deidentification_in_Portuguese.ipynb
iamvarol/spark-nlp-workshop
73a9064bd47d4dc0692f0297748eb43cd094aabd
[ "Apache-2.0" ]
null
null
null
61.264347
7,053
0.408945
[ [ [ "![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)", "_____no_output_____" ], [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/4.7.Clinical_Deidentification_in_Portuguese.ipynb)", "_____no_output_____" ], [ "# Clinical Deidentification in Portuguese\n\n**Protected Health Information**:\n\nIndividualโ€™s past, present, or future physical or mental health or condition\nprovision of health care to the individual\npast, present, or future payment for the health care\nProtected health information includes many common identifiers (e.g., name, address, birth date, Social Security Number) when they can be associated with the health information.", "_____no_output_____" ] ], [ [ "import json\nimport os\n\nfrom google.colab import files\n\nif 'spark_jsl.json' not in os.listdir():\n license_keys = files.upload()\n os.rename(list(license_keys.keys())[0], 'spark_jsl.json')\n\nwith open('spark_jsl.json') as f:\n license_keys = json.load(f)\n\n# Defining license key-value pairs as local variables\nlocals().update(license_keys)\nos.environ.update(license_keys)", "_____no_output_____" ], [ "# Installing pyspark and spark-nlp\n! pip install --upgrade -q pyspark==3.1.2\n\n# Installing Spark NLP Healthcare\n! pip install --upgrade -q spark-nlp-jsl==$JSL_VERSION --extra-index-url https://pypi.johnsnowlabs.com/$SECRET", "\u001b[K |โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 212.4 MB 70 kB/s \n\u001b[K |โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 198 kB 42.9 MB/s \n\u001b[?25h Building wheel for pyspark (setup.py) ... \u001b[?25l\u001b[?25hdone\n\u001b[K |โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 188 kB 2.2 MB/s \n\u001b[K |โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 142 kB 5.1 MB/s \n\u001b[?25h" ], [ "import sys\nimport os\nimport json\nimport pandas as pd\nimport string\nimport numpy as np\n\nimport sparknlp\nimport sparknlp_jsl\n\nfrom pyspark.ml import Pipeline, PipelineModel\nfrom pyspark.sql import functions as F\nfrom pyspark.sql import SparkSession\n\nfrom sparknlp.base import *\nfrom sparknlp.annotator import *\nfrom sparknlp.pretrained import ResourceDownloader\nfrom sparknlp.util import *\nfrom sparknlp_jsl.annotator import *\n\nparams = {\"spark.driver.memory\":\"16G\", \n \"spark.kryoserializer.buffer.max\":\"2000M\", \n \"spark.driver.maxResultSize\":\"2000M\"} \n\nspark = sparknlp_jsl.start(SECRET, params=params)\n\nprint (\"Spark NLP Version :\", sparknlp.version())\nprint (\"Spark NLP_JSL Version :\", sparknlp_jsl.version())\n\nspark", "Spark NLP Version : 3.4.2\nSpark NLP_JSL Version : 3.5.0\n" ] ], [ [ "# 1. 
Portuguese NER Deidentification Models\nWe have two different models you can use:\n* `ner_deid_generic`, detects 8 entities\n* `ner_deid_subentity`, detects 19 entities", "_____no_output_____" ], [ "### Creating pipeline", "_____no_output_____" ] ], [ [ "documentAssembler = DocumentAssembler()\\\n .setInputCol(\"text\")\\\n .setOutputCol(\"document\")\n\nsentencerDL = SentenceDetectorDLModel.pretrained(\"sentence_detector_dl\", \"xx\") \\\n .setInputCols([\"document\"])\\\n .setOutputCol(\"sentence\")\n\ntokenizer = Tokenizer()\\\n .setInputCols([\"sentence\"])\\\n .setOutputCol(\"token\")\n\nword_embeddings = WordEmbeddingsModel.pretrained(\"w2v_cc_300d\", \"pt\")\\\n .setInputCols([\"document\",\"token\"])\\\n\t .setOutputCol(\"embeddings\")", "sentence_detector_dl download started this may take some time.\nApproximate size to download 514.9 KB\n[OK!]\nw2v_cc_300d download started this may take some time.\nApproximate size to download 1.1 GB\n[OK!]\n" ] ], [ [ "## 1.1. NER Deid Generic\n\n**`ner_deid_generic`** extracts:\n- Name\n- Profession\n- Age\n- Date\n- Contact (Telephone numbers, Email addresses)\n- Location (Address, City, Postal code, Hospital Name, Organization)\n- ID (Social Security numbers, Medical record numbers)\n- Sex", "_____no_output_____" ] ], [ [ "ner_generic = MedicalNerModel.pretrained(\"ner_deid_generic\", \"pt\", \"clinical/models\")\\\n .setInputCols([\"sentence\",\"token\",\"embeddings\"])\\\n .setOutputCol(\"ner_deid_generic\")\n\nner_converter_generic = NerConverter()\\\n .setInputCols([\"sentence\",\"token\",\"ner_deid_generic\"])\\\n .setOutputCol(\"ner_chunk_generic\")", "ner_deid_generic download started this may take some time.\nApproximate size to download 14.3 MB\n[OK!]\n" ], [ "ner_generic.getClasses()", "_____no_output_____" ] ], [ [ "## 1.2. NER Deid Subentity\n\n**`ner_deid_subentity`** extracts:\n\n`PATIENT`, `HOSPITAL`, `DATE`, `ORGANIZATION`, `CITY`, `ID`, `STREET`, `SEX`, `EMAIL`, `ZIP`, `PROFESSION`, `PHONE`, `COUNTRY`, `DOCTOR`, `AGE`", "_____no_output_____" ] ], [ [ "ner_subentity = MedicalNerModel.pretrained(\"ner_deid_subentity\", \"pt\", \"clinical/models\")\\\n .setInputCols([\"sentence\",\"token\",\"embeddings\"])\\\n .setOutputCol(\"ner_deid_subentity\")\n\nner_converter_subentity = NerConverter()\\\n .setInputCols([\"sentence\", \"token\", \"ner_deid_subentity\"])\\\n .setOutputCol(\"ner_chunk_subentity\")", "ner_deid_subentity download started this may take some time.\nApproximate size to download 14.3 MB\n[OK!]\n" ], [ "ner_subentity.getClasses()", "_____no_output_____" ] ], [ [ "## 1.3. 
Pipeline", "_____no_output_____" ] ], [ [ "nlpPipeline = Pipeline(stages=[\n documentAssembler, \n sentencerDL,\n tokenizer,\n word_embeddings,\n ner_generic,\n ner_converter_generic,\n ner_subentity,\n ner_converter_subentity,\n ])\n\nempty_data = spark.createDataFrame([[\"\"]]).toDF(\"text\")\n\nmodel = nlpPipeline.fit(empty_data)", "_____no_output_____" ], [ "text = \"\"\"Detalhes do paciente.\nNome do paciente: Pedro Gonรงalves\nNHC: 2569870.\nEndereรงo: Rua Das Flores 23.\nCรณdigo Postal: 21754-987.\nDados de cuidados.\nData de nascimento: 10/10/1963.\nIdade: 53 anos \nData de admissรฃo: 17/06/2016.\nDoutora: Maria Santos\"\"\"\n\ntext_df = spark.createDataFrame([[text]]).toDF(\"text\")\n\nresult = model.transform(text_df)", "_____no_output_____" ] ], [ [ "### Results for `ner_generic`", "_____no_output_____" ] ], [ [ "result.select(F.explode(F.arrays_zip('ner_chunk_generic.result', 'ner_chunk_generic.metadata')).alias(\"cols\")) \\\n .select(F.expr(\"cols['0']\").alias(\"chunk\"),\n F.expr(\"cols['1']['entity']\").alias(\"ner_label\")).show(truncate=False)", "+-----------------+---------+\n|chunk |ner_label|\n+-----------------+---------+\n|Pedro Gonรงalves |NAME |\n|2569870 |ID |\n|Rua Das Flores 23|LOCATION |\n|21754-987 |LOCATION |\n|10/10/1963 |DATE |\n|53 |AGE |\n|17/06/2016 |DATE |\n|Maria Santos |NAME |\n+-----------------+---------+\n\n" ] ], [ [ "### Results for `ner_subentity`", "_____no_output_____" ] ], [ [ "result.select(F.explode(F.arrays_zip('ner_chunk_subentity.result', 'ner_chunk_subentity.metadata')).alias(\"cols\")) \\\n .select(F.expr(\"cols['0']\").alias(\"chunk\"),\n F.expr(\"cols['1']['entity']\").alias(\"ner_label\")).show(truncate=False)", "+-----------------+---------+\n|chunk |ner_label|\n+-----------------+---------+\n|Pedro Gonรงalves |PATIENT |\n|2569870 |ID |\n|Rua Das Flores 23|STREET |\n|21754-987 |ZIP |\n|10/10/1963 |DATE |\n|53 |AGE |\n|17/06/2016 |DATE |\n|Maria Santos |DOCTOR |\n+-----------------+---------+\n\n" ] ], [ [ "## DeIdentification", "_____no_output_____" ], [ "### Obfuscation mode", "_____no_output_____" ] ], [ [ "# Downloading faker entity list.\n! 
wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/tutorials/Certification_Trainings/Healthcare/data/obfuscate_pt.txt", "_____no_output_____" ], [ "deid_masked_entity = DeIdentification()\\\n .setInputCols([\"sentence\", \"token\", \"ner_chunk_subentity\"])\\\n .setOutputCol(\"masked_with_entity\")\\\n .setMode(\"mask\")\\\n .setMaskingPolicy(\"entity_labels\")\n\ndeid_masked_char = DeIdentification()\\\n .setInputCols([\"sentence\", \"token\", \"ner_chunk_subentity\"])\\\n .setOutputCol(\"masked_with_chars\")\\\n .setMode(\"mask\")\\\n .setMaskingPolicy(\"same_length_chars\")\n\ndeid_masked_fixed_char = DeIdentification()\\\n .setInputCols([\"sentence\", \"token\", \"ner_chunk_subentity\"])\\\n .setOutputCol(\"masked_fixed_length_chars\")\\\n .setMode(\"mask\")\\\n .setMaskingPolicy(\"fixed_length_chars\")\\\n .setFixedMaskLength(4)\n\ndeid_obfuscated = DeIdentification()\\\n .setInputCols([\"sentence\", \"token\", \"ner_chunk_subentity\"]) \\\n .setOutputCol(\"obfuscated\") \\\n .setMode(\"obfuscate\")\\\n .setObfuscateDate(True)\\\n .setObfuscateRefFile('obfuscate_pt.txt')\\\n .setObfuscateRefSource(\"file\")", "_____no_output_____" ], [ "nlpPipeline = Pipeline(stages=[\n documentAssembler, \n sentencerDL,\n tokenizer,\n word_embeddings,\n ner_subentity,\n ner_converter_subentity,\n deid_masked_entity,\n deid_masked_char,\n deid_masked_fixed_char,\n deid_obfuscated\n ])\n\nempty_data = spark.createDataFrame([[\"\"]]).toDF(\"text\")\n\nmodel = nlpPipeline.fit(empty_data)", "_____no_output_____" ], [ "deid_lp = LightPipeline(model)", "_____no_output_____" ], [ "text = \"\"\"Detalhes do paciente.\nNome do paciente: Antonio Gonรงalves\nNHC: 2569870.\nEndereรงo: Rua Das Flores 23.\nCรณdigo Postal: 21754-987.\nDados de cuidados.\nData de nascimento: 10/10/1963.\nIdade: 23 anos \nData de admissรฃo: 17/06/2016.\nDoutora: Maria Santos\"\"\"", "_____no_output_____" ], [ "result = deid_lp.annotate(text)\n\nprint(\"\\n\".join(result['masked_with_entity']))\nprint(\"\\n\")\nprint(\"\\n\".join(result['masked_with_chars']))\nprint(\"\\n\")\nprint(\"\\n\".join(result['masked_fixed_length_chars']))\nprint(\"\\n\")\nprint(\"\\n\".join(result['obfuscated']))", "Detalhes do paciente.\nNome do paciente: <PATIENT>\nNHC: <ID>.\nEndereรงo: <STREET>.\nCรณdigo Postal: <ZIP>.\nDados de cuidados.\nData de nascimento: <DATE>.\nIdade: <AGE> anos \nData de admissรฃo: <DATE>.\n\nDoutora: <DOCTOR>\n\n\nDetalhes do paciente.\nNome do paciente: [***************]\nNHC: [*****].\nEndereรงo: [***************].\nCรณdigo Postal: [*******].\nDados de cuidados.\nData de nascimento: [********].\nIdade: ** anos \nData de admissรฃo: [********].\n\nDoutora: [**********]\n\n\nDetalhes do paciente.\nNome do paciente: ****\nNHC: ****.\nEndereรงo: ****.\nCรณdigo Postal: ****.\nDados de cuidados.\nData de nascimento: ****.\nIdade: **** anos \nData de admissรฃo: ****.\n\nDoutora: ****\n\n\nDetalhes do paciente.\nNome do paciente: Lourenรงo Morais\nNHC: 569 4653.\nEndereรงo: R. Cuf, 304.\nCรณdigo Postal: 74913712.\nDados de cuidados.\nData de nascimento: 28/11/1963.\nIdade: 53 anos \nData de admissรฃo: 24/06/2016.\n\nDoutora: Moreira\n" ], [ "pd.set_option(\"display.max_colwidth\", 200)\n\ndf = pd.DataFrame(list(zip(result[\"masked_with_entity\"], \n result[\"masked_with_chars\"],\n result[\"masked_fixed_length_chars\"], \n result[\"obfuscated\"])),\n columns= [\"Masked_with_entity\", \"Masked with Chars\", \"Masked with Fixed Chars\", \"Obfuscated\"])\n\ndf", "_____no_output_____" ] ], [ [ "# 2. 
Pretrained Portuguese Deidentification Pipeline\n\n- We developed a clinical deidentification pretrained pipeline that can be used to deidentify PHI information from Portuguese medical texts. The PHI information will be masked and obfuscated in the resulting text. \n- The pipeline can mask and obfuscate:\n - Patient\n - Doctor\n - Hospital\n - Date\n - Organization\n - Sex\n - City\n - Street\n - Country\n - ZIP\n - Username\n - Profession\n - Phone\n - Email\n - Age\n - ID number\n - Medical record number\n - Account number\n - SSN\n - Plate Number\n - IP address\n - URL", "_____no_output_____" ] ], [ [ "from sparknlp.pretrained import PretrainedPipeline\n\ndeid_pipeline = PretrainedPipeline(\"clinical_deidentification\", \"pt\", \"clinical/models\")", "clinical_deidentification download started this may take some time.\nApprox size to download 1.2 GB\n[OK!]\n" ], [ "text = \"\"\"RELAÇÃO HOSPITALAR\nNOME: Pedro Gonçalves\nNHC: MVANSK92F09W408A\nENDEREÇO: Rua Burcardo 7\nCÓDIGO POSTAL: 80139\nDATA DE NASCIMENTO: 03/03/1946\nIDADE: 70 anos\nSEXO: Homens\nE-MAIL: [email protected]\nDATA DE ADMISSÃO: 12/12/2016\nDOUTORA: Eva Andrade\nRELATO CLÍNICO: 70 anos, aposentado, sem alergia a medicamentos conhecida, com a seguinte história: ex-acidente de trabalho com fratura de vértebras e costelas; operado de doença de Dupuytren na mão direita e ponte ílio-femoral esquerda; diabetes tipo II, hipercolesterolemia e hiperuricemia; alcoolismo ativo, fuma 20 cigarros/dia.\nEle foi encaminhado a nós por apresentar hematúria macroscópica pós-evacuação em uma ocasião e microhematúria persistente posteriormente, com evacuação normal.\nO exame físico mostrou bom estado geral, com abdome e genitais normais; o toque retal foi compatível com adenoma de próstata grau I/IV.\nA urinálise mostrou 4 hemácias/campo e 0-5 leucócitos/campo; o resto do sedimento era normal.\nO hemograma é normal; a bioquímica mostrou uma glicemia de 169 mg/dl e triglicerídeos 456 mg/dl; função hepática e renal são normais. PSA de 1,16 ng/ml.\n\nDIRIGIDA A: Dr. 
Eva Andrade - Centro Hospitalar do Medio Ave - Avenida Dos Aliados, 56\nE-MAIL: [email protected]\n\"\"\"\n\nresult = deid_pipeline.annotate(text)\nprint(\"\\n\".join(result['masked_with_chars']))\nprint(\"\\n\")\nprint(\"\\n\".join(result['masked']))\nprint(\"\\n\")\nprint(\"\\n\".join(result['masked_fixed_length_chars']))\nprint(\"\\n\")\nprint(\"\\n\".join(result['obfuscated']))", "RELAร‡รƒO HOSPITALAR\nNOME: [*************]\nNHC: [**************]\nENDEREร‡O: [************]\nCร“DIGO POSTAL: [***]\nDATA DE NASCIMENTO: [********]\nIDADE: ** anos\nSEXO: [****]\nE-MAIL: [***********]\nDATA DE ADMISSรƒO: [********]\nDOUTORA: [*********]\nRELATO CLรNICO: ** anos, aposentado, sem alergia a medicamentos conhecida, com a seguinte histรณria: ex-acidente de trabalho com fratura de vรฉrtebras e costelas; operado de doenรงa de Dupuytren na mรฃo direita e ponte รญlio-femoral esquerda; diabetes tipo II, hipercolesterolemia e hiperuricemia; alcoolismo ativo, fuma 20 cigarros/dia.\nEle foi encaminhado a nรณs por apresentar hematรบria macroscรณpica pรณs-evacuaรงรฃo em uma ocasiรฃo e microhematรบria persistente posteriormente, com evacuaรงรฃo normal.\nO exame fรญsico mostrou bom estado geral, com abdome e genitais normais; o toque retal foi compatรญvel com adenoma de prรณstata grau I/IV.\nA urinรกlise mostrou 4 hemรกcias/campo e 0-5 leucรณcitos/campo; o resto do sedimento era normal.\nO hemograma รฉ normal; a bioquรญmica mostrou uma glicemia de 169 mg/dl e triglicerรญdeos 456 mg/dl; funรงรฃo hepรกtica e renal sรฃo normais.\nPSA de 1,16 ng/ml.\nDIRIGIDA A: Dr. [*********] - [****************************] - [*****************], 56\nE-MAIL: [****************]\n\n\n\nRELAร‡รƒO HOSPITALAR\nNOME: <DOCTOR>\nNHC: <ID>\nENDEREร‡O: <STREET>\nCร“DIGO POSTAL: <ZIP>\nDATA DE NASCIMENTO: <DATE>\nIDADE: <AGE> anos\nSEXO: <SEX>\nE-MAIL: <EMAIL>\nDATA DE ADMISSรƒO: <DATE>\nDOUTORA: <DOCTOR>\nRELATO CLรNICO: <AGE> anos, aposentado, sem alergia a medicamentos conhecida, com a seguinte histรณria: ex-acidente de trabalho com fratura de vรฉrtebras e costelas; operado de doenรงa de Dupuytren na mรฃo direita e ponte รญlio-femoral esquerda; diabetes tipo II, hipercolesterolemia e hiperuricemia; alcoolismo ativo, fuma 20 cigarros/dia.\nEle foi encaminhado a nรณs por apresentar hematรบria macroscรณpica pรณs-evacuaรงรฃo em uma ocasiรฃo e microhematรบria persistente posteriormente, com evacuaรงรฃo normal.\nO exame fรญsico mostrou bom estado geral, com abdome e genitais normais; o toque retal foi compatรญvel com adenoma de prรณstata grau I/IV.\nA urinรกlise mostrou 4 hemรกcias/campo e 0-5 leucรณcitos/campo; o resto do sedimento era normal.\nO hemograma รฉ normal; a bioquรญmica mostrou uma glicemia de 169 mg/dl e triglicerรญdeos 456 mg/dl; funรงรฃo hepรกtica e renal sรฃo normais.\nPSA de 1,16 ng/ml.\nDIRIGIDA A: Dr. 
<DOCTOR> - <HOSPITAL> - <STREET>, 56\nE-MAIL: <EMAIL>\n\n\n\nRELAร‡รƒO HOSPITALAR\nNOME: ****\nNHC: ****\nENDEREร‡O: ****\nCร“DIGO POSTAL: ****\nDATA DE NASCIMENTO: ****\nIDADE: **** anos\nSEXO: ****\nE-MAIL: ****\nDATA DE ADMISSรƒO: ****\nDOUTORA: ****\nRELATO CLรNICO: **** anos, aposentado, sem alergia a medicamentos conhecida, com a seguinte histรณria: ex-acidente de trabalho com fratura de vรฉrtebras e costelas; operado de doenรงa de Dupuytren na mรฃo direita e ponte รญlio-femoral esquerda; diabetes tipo II, hipercolesterolemia e hiperuricemia; alcoolismo ativo, fuma 20 cigarros/dia.\nEle foi encaminhado a nรณs por apresentar hematรบria macroscรณpica pรณs-evacuaรงรฃo em uma ocasiรฃo e microhematรบria persistente posteriormente, com evacuaรงรฃo normal.\nO exame fรญsico mostrou bom estado geral, com abdome e genitais normais; o toque retal foi compatรญvel com adenoma de prรณstata grau I/IV.\nA urinรกlise mostrou 4 hemรกcias/campo e 0-5 leucรณcitos/campo; o resto do sedimento era normal.\nO hemograma รฉ normal; a bioquรญmica mostrou uma glicemia de 169 mg/dl e triglicerรญdeos 456 mg/dl; funรงรฃo hepรกtica e renal sรฃo normais.\nPSA de 1,16 ng/ml.\nDIRIGIDA A: Dr. **** - **** - ****, 56\nE-MAIL: ****\n\n\n\nRELAร‡รƒO HOSPITALAR\nNOME: Isabel Magalhรฃes\nNHC: 124 445 311\nENDEREร‡O: Rua de Santa Marรญa, 100\nCร“DIGO POSTAL: 1000-306\nDATA DE NASCIMENTO: 17/04/1946\nIDADE: 46 anos\nSEXO: Mulher\nE-MAIL: [email protected]\nDATA DE ADMISSรƒO: 26/12/2016\nDOUTORA: Isabel Magalhรฃes\nRELATO CLรNICO: 46 anos, aposentado, sem alergia a medicamentos conhecida, com a seguinte histรณria: ex-acidente de trabalho com fratura de vรฉrtebras e costelas; operado de doenรงa de Dupuytren na mรฃo direita e ponte รญlio-femoral esquerda; diabetes tipo II, hipercolesterolemia e hiperuricemia; alcoolismo ativo, fuma 20 cigarros/dia.\nEle foi encaminhado a nรณs por apresentar hematรบria macroscรณpica pรณs-evacuaรงรฃo em uma ocasiรฃo e microhematรบria persistente posteriormente, com evacuaรงรฃo normal.\nO exame fรญsico mostrou bom estado geral, com abdome e genitais normais; o toque retal foi compatรญvel com adenoma de prรณstata grau I/IV.\nA urinรกlise mostrou 4 hemรกcias/campo e 0-5 leucรณcitos/campo; o resto do sedimento era normal.\nO hemograma รฉ normal; a bioquรญmica mostrou uma glicemia de 169 mg/dl e triglicerรญdeos 456 mg/dl; funรงรฃo hepรกtica e renal sรฃo normais.\nPSA de 1,16 ng/ml.\nDIRIGIDA A: Dr. Isabel Magalhรฃes - Centro Hospitalar Universitario do Algarve - Rua Augusta, 19, 56\nE-MAIL: [email protected]\n\n" ] ], [ [ "The results can also be inspected vertically by creating a Pandas dataframe as such:", "_____no_output_____" ] ], [ [ "pd.set_option(\"display.max_colwidth\", None)\n\ndf = pd.DataFrame(list(zip(result[\"sentence\"], \n result[\"masked\"],\n result[\"masked_with_chars\"], \n result[\"masked_fixed_length_chars\"], \n result[\"obfuscated\"])),\n columns= [\"Sentence\", \"Masked\", \"Masked with Chars\", \"Masked with Fixed Chars\", \"Obfuscated\"])\n\ndf", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
ecdfdb79266d594f23b4c4439364f62e5341f146
737,974
ipynb
Jupyter Notebook
7_special_functions/4_random_numbers/2_metropolis/tutorial.ipynb
langerest/Tutorials_Libra
f0f079b0817e627e94cd916a47ad1e497378c94e
[ "CC0-1.0" ]
8
2020-06-19T10:29:11.000Z
2021-12-22T09:04:04.000Z
7_special_functions/4_random_numbers/2_metropolis/tutorial.ipynb
langerest/Tutorials_Libra
f0f079b0817e627e94cd916a47ad1e497378c94e
[ "CC0-1.0" ]
3
2020-05-06T14:49:06.000Z
2020-05-06T15:30:26.000Z
7_special_functions/4_random_numbers/2_metropolis/tutorial.ipynb
langerest/Tutorials_Libra
f0f079b0817e627e94cd916a47ad1e497378c94e
[ "CC0-1.0" ]
14
2020-04-14T17:28:49.000Z
2022-01-12T22:13:32.000Z
765.533195
136,952
0.94776
[ [ [ "# Sampling random numbers via Metropolis Monte Carlo (MC) algorithm", "_____no_output_____" ], [ "## Table of Content <a name=\"TOC\"></a>\n\n1. [General setups](#setups)\n\n2. [General idea of the Metropolis algorithm](#metropolis) \n\n3. [Particle-in-a-box distributions](#piab-1)\n\n 3.1. [Case 3.1](#case-3.1) \n \n 3.2. [Case 3.2](#case-3.2)\n \n4. [Harmonic oscillator distributions](#ho-1)\n \n 4.1. [Case 4.1](#case-4.1) \n \n 4.2. [Case 4.2](#case-4.2)\n \n 4.3. [Case 4.3](#case-4.3)\n ", "_____no_output_____" ], [ "### A. Learning objectives\n\n- to know how Metropolis algorithm works and to use it to sample coornates and momenta from Boltzmann distribution\n- to use C++-level function of Libra to sample particle-in-a-box and harmonic oscillator distributions\n\n### B. Use cases\n\n- Wigner sampling\n- Canonical ensemble sampling via Monte Carlo Sampling from arbitrary distributions\n- computing probability density of the data\n\n### C. Functions\n\n- `liblibra::libmontecarlo`\n - [`metropolis_gau`](#metropolis_gau-1) | [also here](#metropolis_gau-2)\n\n \n### D. Classes and class members\n\n- `liblibra::libdata`\n - [`DATA`](#DATA-1)\n - [`Calculate_Distribution`](#Calculate_Distribution-1)\n \n- `liblibra::librandom`\n - [`Random`](#Random-1)\n - [`uniform`](#uniform-1) \n - [`normal`](#normal-1)\n", "_____no_output_____" ], [ "## 1. General setups\n<a name=\"setups\"></a> [Back to TOC](#TOC)", "_____no_output_____" ] ], [ [ "import math\nimport sys\nimport cmath\nimport math\nimport os\n\nif sys.platform==\"cygwin\":\n from cyglibra_core import *\nelif sys.platform==\"linux\" or sys.platform==\"linux2\":\n from liblibra_core import *\nimport util.libutil as comn\n\nfrom libra_py import units\nimport matplotlib.pyplot as plt # plots\n#matplotlib.use('Agg')\n#%matplotlib inline \n\nimport numpy as np\n#from matplotlib.mlab import griddata\n\nplt.rc('axes', titlesize=24) # fontsize of the axes title\nplt.rc('axes', labelsize=20) # fontsize of the x and y labels\nplt.rc('legend', fontsize=20) # legend fontsize\nplt.rc('xtick', labelsize=16) # fontsize of the tick labels\nplt.rc('ytick', labelsize=16) # fontsize of the tick labels\n\nplt.rc('figure.subplot', left=0.2)\nplt.rc('figure.subplot', right=0.95)\nplt.rc('figure.subplot', bottom=0.13)\nplt.rc('figure.subplot', top=0.88)\n\ncolors = {}\n\ncolors.update({\"11\": \"#8b1a0e\"}) # red \ncolors.update({\"12\": \"#FF4500\"}) # orangered \ncolors.update({\"13\": \"#B22222\"}) # firebrick \ncolors.update({\"14\": \"#DC143C\"}) # crimson \n\ncolors.update({\"21\": \"#5e9c36\"}) # green\ncolors.update({\"22\": \"#006400\"}) # darkgreen \ncolors.update({\"23\": \"#228B22\"}) # forestgreen\ncolors.update({\"24\": \"#808000\"}) # olive \n\ncolors.update({\"31\": \"#8A2BE2\"}) # blueviolet\ncolors.update({\"32\": \"#00008B\"}) # darkblue \n\ncolors.update({\"41\": \"#2F4F4F\"}) # darkslategray\n\nclrs_index = [\"11\", \"21\", \"31\", \"41\", \"12\", \"22\", \"32\", \"13\",\"23\", \"14\", \"24\"]", "/home/alexey/miniconda2/envs/py37/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: to-Python converter for std::vector<std::vector<int, std::allocator<int> >, std::allocator<std::vector<int, std::allocator<int> > > > already registered; second conversion method ignored.\n return f(*args, **kwds)\n/home/alexey/miniconda2/envs/py37/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: to-Python converter for boost::python::detail::container_element<std::vector<std::vector<int, std::allocator<int> >, 
std::allocator<std::vector<int, std::allocator<int> > > >, unsigned long, boost::python::detail::final_vector_derived_policies<std::vector<std::vector<int, std::allocator<int> >, std::allocator<std::vector<int, std::allocator<int> > > >, false> > already registered; second conversion method ignored.\n return f(*args, **kwds)\n/home/alexey/miniconda2/envs/py37/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: to-Python converter for std::vector<std::vector<float, std::allocator<float> >, std::allocator<std::vector<float, std::allocator<float> > > > already registered; second conversion method ignored.\n return f(*args, **kwds)\n/home/alexey/miniconda2/envs/py37/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: to-Python converter for boost::python::detail::container_element<std::vector<std::vector<float, std::allocator<float> >, std::allocator<std::vector<float, std::allocator<float> > > >, unsigned long, boost::python::detail::final_vector_derived_policies<std::vector<std::vector<float, std::allocator<float> >, std::allocator<std::vector<float, std::allocator<float> > > >, false> > already registered; second conversion method ignored.\n return f(*args, **kwds)\n/home/alexey/miniconda2/envs/py37/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: to-Python converter for std::vector<std::vector<double, std::allocator<double> >, std::allocator<std::vector<double, std::allocator<double> > > > already registered; second conversion method ignored.\n return f(*args, **kwds)\n/home/alexey/miniconda2/envs/py37/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: to-Python converter for boost::python::detail::container_element<std::vector<std::vector<double, std::allocator<double> >, std::allocator<std::vector<double, std::allocator<double> > > >, unsigned long, boost::python::detail::final_vector_derived_policies<std::vector<std::vector<double, std::allocator<double> >, std::allocator<std::vector<double, std::allocator<double> > > >, false> > already registered; second conversion method ignored.\n return f(*args, **kwds)\n/home/alexey/miniconda2/envs/py37/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: to-Python converter for std::vector<std::vector<std::complex<double>, std::allocator<std::complex<double> > >, std::allocator<std::vector<std::complex<double>, std::allocator<std::complex<double> > > > > already registered; second conversion method ignored.\n return f(*args, **kwds)\n/home/alexey/miniconda2/envs/py37/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: to-Python converter for boost::python::detail::container_element<std::vector<std::vector<std::complex<double>, std::allocator<std::complex<double> > >, std::allocator<std::vector<std::complex<double>, std::allocator<std::complex<double> > > > >, unsigned long, boost::python::detail::final_vector_derived_policies<std::vector<std::vector<std::complex<double>, std::allocator<std::complex<double> > >, std::allocator<std::vector<std::complex<double>, std::allocator<std::complex<double> > > > >, false> > already registered; second conversion method ignored.\n return f(*args, **kwds)\n" ] ], [ [ "## 2. General idea of the Metropolis algorithm\n<a name=\"metropolis\"></a>[Back to TOC](#TOC)\n\nIf we are to sample random numbers from a distribution $P(z_n)$, where $z$ is a generalized argument, then the Metropolis algorithm generates a sequence of random numbers (random walk), $z_0 -> z_1 -> ... 
z_n -> z_{n+1} -> ...$\nsuch that:\n\nmoves $z_n -> z_{n+1}$ are accepted with the probability $min\\left(1, \\frac{P(z_{n+1})}{P(z_{n})}\\right)$\n\nIn our case, the target probability density is the Boltzmann distribution, normalized by the system's partition function:\n\n$$P(q, p) = \\frac{1}{Z} exp(-\\frac{H(q, p)}{k_B T})$$ where, $$H(q,p) = \\frac{p^2}{2m} + V(q)$$ is the system's Hamiltonian, $V(q)$ is the potential energy, $Z$ is the partition function. In this example, we consider a 1D particle, so there is only 1 degree of freedom, and $\\frac{p^2}{2m}$ is the kinetic energy. Moreover, we set the particle's mass to 1.0.\n\nAlso, the potential is assumed to be quadratic: $$V(q) = \\frac{\\omega^2 q^2}{2} + \\omega \\sqrt \\frac{E_r}{2} q$$, where $E_r$ is the reorganization energy.\n\nThus, the proposed transitions $(q_n, p_n) -> (q_{n+1}, p_{n+1})$ are accepted with the probability:\n\n$$ min\\left(1, \\frac{P(q_{n+1}, p_{n+1})}{P(q_{n}, p_{n})}\\right) = min\\left(1, exp(-\\frac{H_{n+1} - H_n}{k_B T})\\right)$$\n\nEach sampling makes a sufficient number of steps to avoid any \"memory\" effects, although one could have simply run a single very long simulation and just discarded a certain number of its first steps. This would likely save some extra time and should be considered in more complex situations.\n<a name=\"uniform-1\"></a><a name=\"normal-1\"></a>", "_____no_output_____" ] ], [ [ "def boltz(params, rnd):\n # This function generates x and p taken from the Boltzmann distribution\n\n Er = params[\"Er\"]\n omega = params[\"omega\"]\n kT = params[\"kT\"] \n \n mo2 = 0.5*omega*omega # mass = 1\n M = math.sqrt(mo2*Er)\n\n\n X_ = -0.5*M/mo2 # minimum\n p_ = math.sqrt(kT) # momentum that corresponds to temperatures\n x_ = X_ + 50.0*rnd.normal() # + 0.5*(M/mo2)*rnd.normal() #(M/mo2)*\n\n \n Eold = mo2*x_*x_ + M*x_ + 0.5*p_*p_ # energy\n Enew = 0.0\n\n\n for i in range(1000):\n # Propose a move\n x = x_ + 10.0*rnd.uniform(-1.0, 1.0) \n p = p_ + 1.1*math.sqrt(kT)*rnd.uniform(-1.0, 1.0) \n\n # Compute the energy at the proposed coordinates\n Enew = mo2*x*x + M*x + 0.5*p*p\n \n # Accept or reject the proposed moves stochastically\n # based on the energy difference and temperature\n dE = Enew - Eold\n ksi = rnd.uniform(0.0,1.0)\n prob = 1.0\n argg = dE/kT\n if argg >40:\n prob = 0.0\n elif argg < -40:\n prob = 1.0\n else: \n prob = math.exp(-dE/kT) #min(1.0, math.exp(-dE/kT))\n\n if(ksi<prob): # accept new state with Metropolis scheme\n Eold = Enew\n x_ = x\n p_ = p\n\n return [x_, p_]", "_____no_output_____" ] ], [ [ "Sample the coordinates and momenta from the Boltzmann distribution using the Metropolis MC algorithm implemented above.\n\nNote how the `boltz` function is called 5000 times to sample 5000 random numbers. In each call, there are 1000 steps of the process. We could instead have considered a single run of 1000 + 5000 individual steps, with the initial 1000 warm-up steps being discarded. 
\n<a name=\"Random-1\"></a>", "_____no_output_____" ] ], [ [ "# Define it only once\nrnd = Random()\n\nparams = {}\nparams[\"Er\"] = Er = 2.39e-2\nparams[\"omega\"] = omega = 3.5e-4\nparams[\"kT\"] = kT = 9.5e-4\n\nmo2 = 0.5*omega*omega # mass = 1\nM = math.sqrt(mo2*Er)\nx_ = -0.5*M/mo2 # position of the minimum\np_ = math.sqrt(kT) # momentum that corresponds to temperatures\n\n\nx, p = [], []\nfor i in range(0,5000): \n [a, b] = boltz( params, rnd )\n x.append(a)\n p.append(b)\n", "_____no_output_____" ] ], [ [ "Compute the grids in coordinate and momentum spaces and compute the distributions\n<a name=\"DATA-1\"></a><a name=\"Calculate_Distribution-1\"></a>", "_____no_output_____" ] ], [ [ "ax = []\nspread = 500.0\ndx = 2.0*spread / 100.0\nfor i in range(0,100): \n ax.append(x_ - spread + i*dx)\n\npx = []\nspread = 1.0*math.sqrt(kT)\ndp = 4.0*spread / 100.0\nfor i in range(0,100): \n px.append(-2.0*math.sqrt(kT) + i*dp)\n\n\nX = DATA(x)\nP = DATA(p)\n \nXdens = X.Calculate_Distribution(ax)[0]\nPdens = P.Calculate_Distribution(px)[0]", "_____no_output_____" ] ], [ [ "Compute the analytical distributions and normalize both the analytical and sampled distributions.", "_____no_output_____" ] ], [ [ "sz = len(ax)\n\nprob_x, prob_px = [], []\nfor i in range(0, sz):\n prob = math.exp(-( mo2*ax[i]*ax[i] + M*ax[i] + 0.5*p_*p_ )/kT)\n prob_x.append(prob)\n \n prob = math.exp(-( mo2*x_*x_ + M*x_ + 0.5*px[i]*px[i] )/kT) \n prob_px.append(prob)\n \nZ_analyt = sum(prob_x) * dx\nZ_sampled = sum(Xdens) * dx\n\nZ_analyt_p = sum(prob_px) * dp\nZ_sampled_p = sum(Pdens) * dp\n\n\nfor i in range(0, sz):\n prob_x[i] = prob_x[i] / Z_analyt\n Xdens[i] = Xdens[i] / Z_sampled \n \n prob_px[i] = prob_px[i] / Z_analyt_p\n Pdens[i] = Pdens[i] / Z_sampled_p", "_____no_output_____" ] ], [ [ "Fianlly plot the results", "_____no_output_____" ] ], [ [ "plt.figure(1, figsize=(24,12) )\nplt.subplot(1, 2, 1)\nplt.title('Coordinate Boltzmann distribution')\nplt.xlabel('Coordinate')\nplt.ylabel('Probability density')\nplt.plot(ax, prob_x, label='Analytical', linewidth=4, color = colors[\"11\"]) \nplt.plot(ax, Xdens, label='Sampled', linewidth=4, color = colors[\"21\"]) \nplt.legend()\n\nplt.subplot(1, 2, 2)\nplt.title('Momentum Boltzmann distribution')\nplt.xlabel('Momentum')\nplt.ylabel('Probability density')\nplt.plot(px, prob_px, label='Analytical', linewidth=4, color = colors[\"11\"]) \nplt.plot(px, Pdens, label='Sampled', linewidth=4, color = colors[\"21\"]) \nplt.legend()\n\nplt.show()\nplt.close()", "_____no_output_____" ] ], [ [ "## 3. Particle-in-a-box distributions\n<a name=\"piab-1\"></a>[Back to TOC](#TOC)\n\n<a name=\"metropolis_gau-1\"></a>\nIn this and the next sections, we will demonstrate the usage of the `metropolis_gau` function. This function implements a generalization of the algorithm shown in the `boltz` function of Section 2.\n\nThis function has the following signature:\n\n vector<MATRIX> metropolis_gau(Random& rnd, bp::object target_distribution, MATRIX& dof, bp::object distribution_params, int sample_size, int start_sampling, double gau_var)\n \nHere:\n\n* `rnd` - is the random number generator object\n* `target_distribution` is a Python function that computes the probability distribution function and that would be passed to C++ (called by the C++ code) and should be defined by the user. It should have the following signature:\n\n\n double target_distribution(MATRIX& dof, bp::object params);\n\n\n* `dof` - the starting coordinates of the random walker. 
It is stored in the MATRIX format so that the calculations are possible for arbitrary dimensions.\n* `distribution_params` a dictionary of parameters for the distribution function defined by the user\n* `sample_size` how many multidimensional points to sample from the distribution; this determines the length of the Gaussian random walk\n* `start_sampling` how many first moves to disregard before starting to collect the data\n* `gau_var` is the Gaussian variance - the scaling factor for the stepping in various dimensions. If it is too large, we may not accept many moves, because the chances are we will land in a region with a very different probability density, so the chances of acceptance may be small. Such moves would also make the sampling coarser, so the finer features may be lost. For distributions with higher fluctuations, use a smaller step size. However, the space exploration process may then be slow, so one may need to use a larger number of steps.\n \n\nHere, we start by defining a function returning the PDF the user wants to sample random numbers from. \n\nIn this example, we define the probability density for 1D particle-in-a-box states - one of the standard problems encountered in nearly any quantum mechanics course. \n\nNote that the function is defined with the signature above:\n\n double target_distribution(MATRIX& dof, bp::object params);\n \nInside, we unpack the coordinate vector and the parameters of the function and use them to compute the results.", "_____no_output_____" ] ], [ [ "def piab(q, params):\n \"\"\"\n The probability density function\n\n \"\"\"\n\n L = params[\"L\"]\n n = params[\"n\"]\n\n x = q.get(0)\n\n p = 0.0\n if x>0.0 and x<L:\n p = (2.0/L)*(math.sin(x*n*math.pi/L))**2\n\n return p", "_____no_output_____" ] ], [ [ "We also define an auxiliary function that analyzes the computed points, creates the bins as defined by the user, and counts the number of points in each bin, to compute the sampled probabilities. 
\n\nThis function also does the visualization of the results, for convenience of the following steps.", "_____no_output_____" ] ], [ [ "def make_bin(sample, min_, max_, dx, i, j, my_label):\n \"\"\"\n An auxiliary function to compute the bins\n \"\"\"\n\n # Prepare the grids\n x_points, y_points = [], []\n max_pts = int((max_ - min_)/dx) + 1\n\n for n in range(max_pts):\n x_points.append(min_ + n * dx)\n y_points.append(0.0)\n\n # Compute the frequencies\n sz = len(sample)\n for n in range(sz):\n x = sample[n].get(i,j) \n indx = int((x - min_)/dx)\n\n y_points[indx] = y_points[indx] + 1.0/float(sz)\n \n \n plt.figure(1, figsize=(24,12) )\n plt.title(my_label)\n plt.xlabel('Coordinate')\n plt.ylabel('Probability density')\n plt.plot(x_points, y_points, label='', linewidth=4, color = colors[\"11\"]) \n plt.legend()\n plt.show()\n plt.close()\n", "_____no_output_____" ] ], [ [ "Finally, we can explore several cases, but simply calling the metropolis_gau functions with the corrsponding parameters and then passing the sampled data to the `make_bin` function\n\n### Case 3.1.\n<a name=\"case-3.1\"></a>[Back to TOC](#TOC)", "_____no_output_____" ] ], [ [ "rnd = Random()\nmy_label = \"Particle-in-a-box Distribution\"\n\nq = MATRIX(1,1); q.set(0, 0.5) \nsample = metropolis_gau(rnd, piab, q, {\"L\":1.0, \"n\":1} , 1100, 10, 0.05) \nmake_bin(sample, -1.0, 2.0, 0.01, 0, 0, my_label) ", "No handles with labels found to put in legend.\n" ] ], [ [ "To make it smoother, we decrease the stepping size, but also include more steps to sample.", "_____no_output_____" ] ], [ [ "q.set(0, 0.5) \nsample = metropolis_gau(rnd, piab, q, {\"L\":1.0, \"n\":1} , 250000, 25000, 0.01) \nmake_bin(sample, -1.0, 2.0, 0.01, 0, 0, my_label) ", "No handles with labels found to put in legend.\n" ] ], [ [ "### Case 3.2.\n<a name=\"case-3.2\"></a>[Back to TOC](#TOC)\n\nNow, let's use a really large number of steps, but play with the quantum number of the piab state:\n\n### n = 2", "_____no_output_____" ] ], [ [ "q.set(0, 0.5) \nsample = metropolis_gau(rnd, piab, q, {\"L\":1.0, \"n\":2} , 500000, 25000, 0.05) \nmake_bin(sample, -1.0, 2.0, 0.01, 0, 0, my_label) ", "No handles with labels found to put in legend.\n" ] ], [ [ "### n = 5", "_____no_output_____" ] ], [ [ "q.set(0, 0.5) \nsample = metropolis_gau(rnd, piab, q, {\"L\":1.0, \"n\":5} , 500000, 25000, 0.05) \nmake_bin(sample, -1.0, 2.0, 0.01, 0, 0, my_label) ", "No handles with labels found to put in legend.\n" ] ], [ [ "### n = 10", "_____no_output_____" ] ], [ [ "q.set(0, 0.5) \nsample = metropolis_gau(rnd, piab, q, {\"L\":1.0, \"n\":10} , 2500000, 25000, 0.05) \nmake_bin(sample, -1.0, 2.0, 0.0025, 0, 0, my_label) ", "No handles with labels found to put in legend.\n" ] ], [ [ "Note that as we increase the complexity of the distribution we also have to increase the number of sampled points, as well as the resolution of the grid we are looking at. Still, the sampling is a bit off on the edges. ", "_____no_output_____" ], [ "## 4. 
Harmonic oscillator distributions\n<a name=\"ho-1\"></a>[Back to TOC](#TOC)\n\nNow, let's consider sampling random numbers from the probability densities of a harmonic oscillator states ", "_____no_output_____" ] ], [ [ "def hermite(n, x):\n r,s,t = 0.0, 0.0, 0.0\n p,q = 1.0, 0.0\n\n for m in range(n):\n r,s = p,q\n p = 2.0*x*r - 2.0*m*t\n q = 2.0*(m+1)*r\n t = r\n\n return p\n\n\ndef ket_n(q, n, k, m):\n \"\"\"\n HO state |n>\n \"\"\"\n\n hbar = 1.0 # atomic units\n omega = math.sqrt(k/m) \n alp = m*omega/hbar\n\n N_n = math.pow(alp/math.pi, 0.25) / math.sqrt(math.pow(2.0, n) * FACTORIAL(n))\n ksi = math.sqrt(alp)*q \n H_n = hermite(n, ksi)\n \n res = N_n * H_n * math.exp(-0.5*ksi*ksi)\n\n return res\n \n\ndef HO_sup(q, params):\n \"\"\"\n The probability density function: superposition of HO eigenstates\n\n \"\"\"\n\n k = params[\"k\"]\n m = params[\"m\"]\n states = params[\"states\"]\n coeffs = params[\"coeffs\"]\n\n x = q.get(0)\n\n sz = len(states)\n p = 0.0\n for n in range(sz):\n p = p + coeffs[n] * ket_n(x, states[n], k, m)\n\n p = p * p \n\n return p\n\n\ndef HO_sup_t(q, params, t):\n \"\"\"\n The probability density function: superposition of HO eigenstates\n\n now, time-dependent\n\n |Psi> = \\sum_n { c_n * |n> * exp(-i*t*E_n/hbar) }\n\n \"\"\"\n\n k = params[\"k\"]\n m = params[\"m\"]\n states = params[\"states\"]\n coeffs = params[\"coeffs\"]\n\n hbar = 1.0 # atomic units\n omega = math.sqrt(k/m) \n\n\n x = q.get(0)\n\n sz = len(states)\n p = 0.0+0.0j\n for n in range(sz):\n E_n = hbar*omega*(n+0.5)\n p = p + coeffs[n] * ket_n(x, states[n], k, m) * cmath.exp(-1.0j*t*E_n/hbar)\n\n p = (p.conjugate() * p ).real\n\n return p\n", "_____no_output_____" ] ], [ [ "### Case 4.1. Sampling $|0>$\n<a name=\"case-4.1\"></a>[Back to TOC](#TOC)\n<a name=\"metropolis_gau-2\"></a>", "_____no_output_____" ] ], [ [ "rnd = Random()\n\nq = MATRIX(1,1); q.set(0, 0.5) \nparams = {\"k\":1.0, \"m\":2000.0, \"states\":[0], \"coeffs\":[1.0]} \nsampling = metropolis_gau(rnd, HO_sup, q, params, 500000, 50000, 0.05) \nmake_bin(sampling, -1.5, 2.0, 0.01, 0, 0, \"Harmonic oscillator eigenstate |0>\") ", "No handles with labels found to put in legend.\n" ] ], [ [ "### Case 4.2. Sampling $|10>$\n<a name=\"case-4.2\"></a>[Back to TOC](#TOC)", "_____no_output_____" ] ], [ [ "q = MATRIX(1,1); q.set(0, 0.5) \nparams = {\"k\":1.0, \"m\":2000.0, \"states\":[10], \"coeffs\":[1.0]} \nsampling = metropolis_gau(rnd, HO_sup, q, params, 500000, 50000, 0.05) \nmake_bin(sampling, -1.5, 2.0, 0.01, 0, 0, \"Harmonic oscillator eigenstate |5>\")", "No handles with labels found to put in legend.\n" ] ], [ [ "### Case 4.3. Sampling $|0> - |1>$\n<a name=\"case-4.3\"></a>[Back to TOC](#TOC)", "_____no_output_____" ] ], [ [ "q = MATRIX(1,1); q.set(0, 0.5) \nparams = {\"k\":1.0, \"m\":2000.0, \"states\":[0, 1], \"coeffs\":[1.0, 1.0]} \nsampling = metropolis_gau(rnd, HO_sup, q, params, 500000, 50000, 0.05) \nmake_bin(sampling, -1.5, 2.0, 0.01, 0, 0, \"Superposition |0> - |1>\")", "No handles with labels found to put in legend.\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
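For readers without Libra installed, the following is a self-contained NumPy sketch of the Metropolis walk described in Section 2 of the tutorial above, with the same harmonic-plus-linear potential and parameter values; the step sizes and burn-in length are illustrative choices rather than the tutorial's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)
Er, omega, kT = 2.39e-2, 3.5e-4, 9.5e-4
mo2 = 0.5 * omega * omega          # (1/2) * m * omega^2 with mass = 1
M = np.sqrt(mo2 * Er)

def hamiltonian(q, p):
    return mo2 * q * q + M * q + 0.5 * p * p

def sample_boltzmann(n_samples, n_burn=1000, dq=10.0, dp_scale=1.1):
    q, p = -0.5 * M / mo2, np.sqrt(kT)     # start near the potential minimum
    E = hamiltonian(q, p)
    out = []
    for step in range(n_burn + n_samples):
        q_new = q + dq * rng.uniform(-1.0, 1.0)
        p_new = p + dp_scale * np.sqrt(kT) * rng.uniform(-1.0, 1.0)
        E_new = hamiltonian(q_new, p_new)
        # Metropolis rule: accept downhill moves always, uphill with exp(-dE/kT)
        arg = np.clip((E_new - E) / kT, -40.0, 40.0)
        if rng.uniform() < np.exp(-arg):
            q, p, E = q_new, p_new, E_new
        if step >= n_burn:
            out.append((q, p))
    return np.array(out)

samples = sample_boltzmann(5000)
print("mean:", samples.mean(axis=0), "std:", samples.std(axis=0))
```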
ecdfee77ff196d106892acc0a03a0b9587b5b592
89,460
ipynb
Jupyter Notebook
paramTest/hourglass_lossfilter3.ipynb
YuBeomGon/pytorch_retina
a1713ecbf99e3cf2f8f5edce3329b808b4f9dee8
[ "Apache-2.0" ]
null
null
null
paramTest/hourglass_lossfilter3.ipynb
YuBeomGon/pytorch_retina
a1713ecbf99e3cf2f8f5edce3329b808b4f9dee8
[ "Apache-2.0" ]
null
null
null
paramTest/hourglass_lossfilter3.ipynb
YuBeomGon/pytorch_retina
a1713ecbf99e3cf2f8f5edce3329b808b4f9dee8
[ "Apache-2.0" ]
null
null
null
36.18932
181
0.37334
[ [ [ "import numpy as np \nimport pandas as pd\nfrom tqdm import tqdm\n\nimport sys\nsys.path.append('../')\nfrom retinanet import coco_eval\nfrom retinanet import csv_eval\nfrom retinanet import model\nfrom retinanet import paps_eval\nfrom retinanet import paps_train\n\n# from retinanet import retina\nfrom retinanet.dataloader import *\nfrom retinanet.anchors import Anchors\nfrom retinanet.losses import *\nfrom retinanet.scheduler import *\nfrom retinanet.hourglass import hg1, hg2, hg8\nfrom retinanet.parallel import DataParallelModel, DataParallelCriterion\n\n#Torch\nimport torch\nimport torch.nn as nn\nfrom torch.utils.data import Dataset,DataLoader\nfrom torch.utils.data.sampler import SequentialSampler, RandomSampler\nfrom torch.optim import Adam, lr_scheduler\nimport torch.optim as optim\n\n# python train_paps.py --start_epoch 0 --end_epoch 120 --batch_size 24 \\\n# --saved_dir $OUT_MODEL_DIR --gpu_num 0 --num_workers 12 \\\n# --target_threshold 7 --topk 20 --filter_option 1 | 2>&1 | tee $log\n", "_____no_output_____" ], [ "# device = torch.device('cpu')\n# device = torch.device('cuda')\nGPU_NUM = 3 # enter the desired GPU number\ndevice = torch.device(f'cuda:{GPU_NUM}' if torch.cuda.is_available() else 'cpu')\nmodel = hg2(device, pretrained=True, progress=False, num_classes=2)\nmodel.to(device)", "num_classes 2\n" ], [ "# model", "_____no_output_____" ], [ "\n# train_info = np.load('../data/train.npy', allow_pickle=True, encoding='latin1').item()\n# train_info\n\nbatch_size = 24\ndataset_train = PapsDataset('../data/', set_name='train_2class',\n transform=train_transforms)\n\ntrain_data_loader = DataLoader(\n dataset_train,\n batch_size=batch_size,\n shuffle=True,\n num_workers=12,\n pin_memory=True,\n prefetch_factor=1,\n collate_fn=collate_fn\n)", "loading annotations into memory...\nDone (t=2.44s)\ncreating index...\nindex created!\n" ], [ "criterion = PapsLoss(device, target_threshold=7, topk=5, filter_option=2)\ncriterion = criterion.to(device)\nmodel.training = True\n\n# https://gaussian37.github.io/dl-pytorch-lr_scheduler/\noptimizer = optim.Adam(model.parameters(), lr = 1e-8)\nscheduler = CosineAnnealingWarmUpRestarts(optimizer, T_0=20, T_mult=2, eta_max=0.0008, T_up=5, gamma=0.5)\n# CosineAnnealingWarmRestarts", "_____no_output_____" ], [ "saved_dir = '../trained_models/HourGlass/loss_filter' + str(GPU_NUM) + '/'\ns_epoch= 0\ne_epoch= 120", "_____no_output_____" ], [ "if os.path.isdir(saved_dir) == False :\n print('folder is made')\n os.makedirs(saved_dir)", "_____no_output_____" ], [ "if os.path.isfile(saved_dir + 'epoch_' + str(e_epoch) +'_model.pt') :\n print('pretrained file loading')\n state = torch.load(saved_dir + 'epoch_' + str(e_epoch) +'_model.pt')\n epoch = state['epoch']\n model.load_state_dict(state['state_dict'], strict=False)\n optimizer.load_state_dict(state['optimizer'])\n last_loss = state['loss']\nelse :\n last_loss = 0.6", "_____no_output_____" ], [ "paps_train.train_paps(dataloader=train_data_loader, \n model=model, \n criterion=criterion,\n saved_dir=saved_dir, \n optimizer=optimizer,\n scheduler=scheduler,\n device = device,\n s_epoch= s_epoch,\n e_epoch= e_epoch,\n last_loss = last_loss) ", " 0%| | 0/620 [00:00<?, ?it/s] " ], [ "# state = {\n# 'epoch': 0,\n# 'state_dict': model.state_dict(),\n# 'optimizer': optimizer.state_dict(),\n# 'loss' : 0.6\n# }\n# torch.save(state, saved_dir + 'model.pt')", "_____no_output_____" ], [ "dataset_val = PapsDataset('../data/', set_name='val_2class',\n transform=val_transforms)\n\nval_data_loader = DataLoader(\n
dataset_val,\n batch_size=1,\n shuffle=False,\n num_workers=4,\n collate_fn=collate_fn\n)", "loading annotations into memory...\nDone (t=0.35s)\ncreating index...\nindex created!\n" ], [ "paps_eval.evaluate_paps(dataset=dataset_val, \n dataloader=val_data_loader, \n model=model, \n saved_dir=saved_dir, \n device = device,\n threshold=0.1) ", "100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3739/3739 [06:55<00:00, 8.99it/s]\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
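The resume cell in the notebook above loads checkpoints with the keys `epoch`, `state_dict`, `optimizer`, and `loss`, while the matching save code only appears commented out. Below is a small sketch of the save side in that same format; whether `paps_train.train_paps` already writes such files internally is an assumption, so treat this as illustrative.

```python
import torch

def save_checkpoint(model, optimizer, epoch, loss, saved_dir):
    # Mirror the key layout expected by the resume cell above
    state = {
        'epoch': epoch,
        'state_dict': model.state_dict(),
        'optimizer': optimizer.state_dict(),
        'loss': loss,
    }
    path = saved_dir + 'epoch_' + str(epoch) + '_model.pt'
    torch.save(state, path)
    return path

# Example: save_checkpoint(model, optimizer, e_epoch, last_loss, saved_dir)
```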
ecdffddccc804de46fe41b868d308eeaa544f3d2
8,126
ipynb
Jupyter Notebook
text/2015-07-23_nltk-and-POS.ipynb
csiu/dataviz
a5f5d0af0e6f913aa61f2427ccbf9d0df2129257
[ "MIT" ]
null
null
null
text/2015-07-23_nltk-and-POS.ipynb
csiu/dataviz
a5f5d0af0e6f913aa61f2427ccbf9d0df2129257
[ "MIT" ]
null
null
null
text/2015-07-23_nltk-and-POS.ipynb
csiu/dataviz
a5f5d0af0e6f913aa61f2427ccbf9d0df2129257
[ "MIT" ]
null
null
null
26.383117
414
0.509845
[ [ [ "**Purpose:** To experiment with Python's [Natural Language Toolkit](http://www.nltk.org).\n\n> NLTK is a leading platform for building Python programs to work with human language data", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport nltk\nfrom nltk.corpus import stopwords\nfrom nltk.stem import SnowballStemmer\nfrom collections import Counter", "_____no_output_____" ] ], [ [ "## Input", "_____no_output_____" ] ], [ [ "bloboftext = \"\"\"\nThis little piggy went to market,\nThis little piggy stayed home,\nThis little piggy had roast beef,\nThis little piggy had none,\nAnd this little piggy went wee wee wee all the way home.\n\"\"\"", "_____no_output_____" ] ], [ [ "## Workflow\n- **Tokenization** to break text into units e.g. words, phrases, or symbols\n- **Stop word removal** to get rid of common words \n - e.g. this, a, is", "_____no_output_____" ] ], [ [ "## Tokenization \nbagofwords = nltk.word_tokenize(bloboftext.lower())\nprint len(bagofwords)", "39\n" ], [ "## Stop word removal\nstop = stopwords.words('english')\nbagofwords = [i for i in bagofwords if i not in stop]\nprint len(bagofwords)", "28\n" ] ], [ [ "### About stemmers and lemmatisation \n- **Stemming** to reduce a word to its roots \n - e.g. having => hav\n\n- **Lemmatisation** to determine a word's lemma/canonical form \n - e.g. having => have\n\n\n> [English Stemmers and Lemmatizers](http://text-processing.com/demo/stem/:)\n> \n> For stemming English words with NLTK, you can choose between the **PorterStemmer** or the **LancasterStemmer**. The [Porter Stemming Algorithm](http://tartarus.org/~martin/PorterStemmer/) is the oldest stemming algorithm supported in NLTK, originally published in 1979. The [Lancaster Stemming Algorithm]() is much newer, published in 1990, and can be more aggressive than the Porter stemming algorithm.\n> \n> The **WordNet Lemmatizer** uses the [WordNet Database](http://wordnet.princeton.edu) to lookup lemmas. 
Lemmas differ from stems in that a lemma is a canonical form of the word, while a stem may not be a real word.\n\n- Resources:\n - [PorterStemmer or the SnowballStemmer](http://www.nltk.org/howto/stem.html) (Snowball == Porter2)\n - [Stemming and Lemmatization](http://textminingonline.com/tag/lancaster-stemmer)\n - [What are the major differences and benefits of Porter and Lancaster Stemming algorithms?](http://stackoverflow.com/questions/10554052/what-are-the-major-differences-and-benefits-of-porter-and-lancaster-stemming-alg)", "_____no_output_____" ] ], [ [ "snowball_stemmer = SnowballStemmer(\"english\")\n\n## What words was stemmed?\n_original = set(bagofwords) \n_stemmed = set([snowball_stemmer.stem(i) for i in bagofwords])\n\nprint 'BEFORE:\\t%s' % ', '.join(map(lambda x:'\"%s\"'%x, _original-_stemmed))\nprint ' AFTER:\\t%s' % ', '.join(map(lambda x:'\"%s\"'%x, _stemmed-_original))\n\ndel _original, _stemmed\n\n## Proceed with stemming\nbagofwords = [snowball_stemmer.stem(i) for i in bagofwords]", "BEFORE:\t\"little\", \"piggy\", \"stayed\"\n AFTER:\t\"piggi\", \"littl\", \"stay\"\n" ] ], [ [ "## Count & POS tag of each stemmed/non-stop word\n- meaning of POS tags: [Penn Part of Speech Tags](http://cs.nyu.edu/grishman/jet/guide/PennPOS.html)\n```\nNN\t Noun, singular or mass\nVBD\tVerb, past tense\n```", "_____no_output_____" ] ], [ [ "for token, count in Counter(bagofwords).most_common():\n print '%d\\t%s\\t%s' % (count, nltk.pos_tag([token])[0][1], token)", "5\tNN\tpiggi\n5\tNN\tlittl\n4\t,\t,\n3\tNN\twee\n2\tNN\thome\n2\tVBD\twent\n1\tNN\tnone\n1\tNN\tbeef\n1\tNN\tstay\n1\tNN\tway\n1\tNN\troast\n1\t.\t.\n1\tNN\tmarket\n" ] ], [ [ "## Proportion of POS tags", "_____no_output_____" ] ], [ [ "record = {}\nfor token, count in Counter(bagofwords).most_common():\n postag = nltk.pos_tag([token])[0][1]\n\n if record.has_key(postag):\n record[postag] += count\n else:\n record[postag] = count\n\nrecordpd = pd.DataFrame.from_dict([record]).T\nrecordpd.columns = ['count']\nN = sum(recordpd['count'])\nrecordpd['percent'] = recordpd['count']/N*100\nrecordpd", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ece010c8f8b5e45158ff95b4aef70c2b52c5dcb5
567,088
ipynb
Jupyter Notebook
tensorflow/advanced-tensorflow/dann/dann_mnist.ipynb
gopala-kr/ds-notebooks
bc35430ecdd851f2ceab8f2437eec4d77cb59423
[ "MIT" ]
1
2019-05-10T09:16:23.000Z
2019-05-10T09:16:23.000Z
tensorflow/advanced-tensorflow/dann/dann_mnist.ipynb
gopala-kr/ds-notebooks
bc35430ecdd851f2ceab8f2437eec4d77cb59423
[ "MIT" ]
null
null
null
tensorflow/advanced-tensorflow/dann/dann_mnist.ipynb
gopala-kr/ds-notebooks
bc35430ecdd851f2ceab8f2437eec4d77cb59423
[ "MIT" ]
1
2019-05-10T09:17:28.000Z
2019-05-10T09:17:28.000Z
613.068108
212,548
0.931573
[ [ [ "# DOMAIN ADVERSARIAL NEURAL NETWORK\n## FROM [pumpikano](https://github.com/pumpikano/tf-dann)", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nimport tensorflow.contrib.slim as slim\nfrom tensorflow.python.framework import ops\nfrom tensorflow.examples.tutorials.mnist import input_data\nimport numpy as np\nimport cPickle as pkl\nfrom sklearn.manifold import TSNE\nimport matplotlib.pyplot as plt\nimport urllib\nimport os\nimport tarfile\nimport skimage\nimport skimage.io\nimport skimage.transform\n%matplotlib inline\nprint (\"PACKAGES LOADED\")", "PACKAGES LOADED\n" ] ], [ [ "## DOWNLOAD BSR_bsds500.tgz", "_____no_output_____" ] ], [ [ "filelink = 'http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/BSR/BSR_bsds500.tgz'\nfilename = 'data/BSR_bsds500.tgz'\nif os.path.isfile(filename):\n print (\"[%s] ALREADY EXISTS.\" % (filename))\nelse:\n print (\"DOWNLOADING %s ...\" % (filename))\n urllib.urlretrieve(filelink, filename)\n print (\"DONE\")", "[data/BSR_bsds500.tgz] ALREADY EXISTS.\n" ] ], [ [ "## CREATE MNIST-M", "_____no_output_____" ], [ "### HELPER FUNCTIONS", "_____no_output_____" ] ], [ [ "def compose_image(digit, background):\n \"\"\"Difference-blend a digit and a random patch from a background image.\"\"\"\n w, h, _ = background.shape\n dw, dh, _ = digit.shape\n x = np.random.randint(0, w - dw)\n y = np.random.randint(0, h - dh)\n bg = background[x:x+dw, y:y+dh]\n return np.abs(bg - digit).astype(np.uint8)\ndef mnist_to_img(x):\n \"\"\"Binarize MNIST digit and convert to RGB.\"\"\"\n x = (x > 0).astype(np.float32)\n d = x.reshape([28, 28, 1]) * 255\n return np.concatenate([d, d, d], 2)\ndef create_mnistm(X):\n \"\"\"\n Give an array of MNIST digits, blend random background patches to\n build the MNIST-M dataset as described in\n http://jmlr.org/papers/volume17/15-239/15-239.pdf\n \"\"\"\n X_ = np.zeros([X.shape[0], 28, 28, 3], np.uint8)\n for i in range(X.shape[0]):\n bg_img = rand.choice(background_data)\n d = mnist_to_img(X[i])\n d = compose_image(d, bg_img)\n X_[i] = d\n return X_\nprint (\"FUNCTIONS READY\")", "FUNCTIONS READY\n" ] ], [ [ "### CREATE MNIST-M DATASET (IF NECESSARY)", "_____no_output_____" ] ], [ [ "mnistm_name = 'data/mnistm.pkl'\nif os.path.isfile(mnistm_name):\n print (\"[%s] ALREADY EXISTS. \" % (mnistm_name))\nelse:\n mnist = input_data.read_data_sets('data')\n # OPEN BSDS500\n f = tarfile.open(filename)\n train_files = []\n for name in f.getnames():\n if name.startswith('BSR/BSDS500/data/images/train/'):\n train_files.append(name)\n print (\"WE HAVE [%d] TRAIN FILES\" % (len(train_files)))\n # GET BACKGROUND\n print (\"GET BACKGROUND FOR MNIST-M\")\n background_data = []\n for name in train_files:\n try:\n fp = f.extractfile(name)\n bg_img = skimage.io.imread(fp)\n background_data.append(bg_img)\n except:\n continue\n print (\"WE HAVE [%d] BACKGROUND DATA\" % (len(background_data)))\n rand = np.random.RandomState(42)\n print (\"BUILDING TRAIN SET...\")\n train = create_mnistm(mnist.train.images)\n print (\"BUILDING TEST SET...\")\n test = create_mnistm(mnist.test.images)\n print (\"BUILDING VALIDATION SET...\")\n valid = create_mnistm(mnist.validation.images)\n # SAVE\n print (\"SAVE MNISTM DATA TO %s\" % (mnistm_name))\n with open(mnistm_name, 'w') as f:\n pkl.dump({ 'train': train, 'test': test, 'valid': valid }, f, -1)\n print (\"DONE\")", "[data/mnistm.pkl] ALREADY EXISTS. 
\n" ] ], [ [ "## LOAD MNIST AND MNIST-M ", "_____no_output_____" ] ], [ [ "print (\"LOADING MNIST\")\nmnist = input_data.read_data_sets('data', one_hot=True)\nmnist_train = (mnist.train.images > 0).reshape(55000, 28, 28, 1).astype(np.uint8) * 255\nmnist_train = np.concatenate([mnist_train, mnist_train, mnist_train], 3)\nmnist_test = (mnist.test.images > 0).reshape(10000, 28, 28, 1).astype(np.uint8) * 255\nmnist_test = np.concatenate([mnist_test, mnist_test, mnist_test], 3)\nmnist_train_label = mnist.train.labels\nmnist_test_label = mnist.test.labels\nprint (\"LOADING MNIST-M\")\nmnistm_name = 'data/mnistm.pkl'\nmnistm = pkl.load(open(mnistm_name))\nmnistm_train = mnistm['train']\nmnistm_test = mnistm['test']\nmnistm_valid = mnistm['valid']\nmnistm_train_label = mnist_train_label\nmnistm_test_label = mnist_test_label\nprint (\"GENERATING DOMAIN DATA\")\n\ntotal_train = np.vstack([mnist_train, mnistm_train])\ntotal_test = np.vstack([mnist_test, mnistm_test])\nntrain = mnist_train.shape[0]\nntest = mnist_test.shape[0]\ntotal_train_domain = np.vstack([np.tile([1., 0.], [ntrain, 1]), np.tile([0., 1.], [ntrain, 1])])\ntotal_test_domain = np.vstack([np.tile([1., 0.], [ntest, 1]), np.tile([0., 1.], [ntest, 1])])\nn_total_train = total_train.shape[0]\nn_total_test = total_test.shape[0]\n\n# GET PIXEL MEAN \npixel_mean = np.vstack([mnist_train, mnistm_train]).mean((0, 1, 2))\n\n# PLOT IMAGES\ndef imshow_grid(images, shape=[2, 8]):\n from mpl_toolkits.axes_grid1 import ImageGrid\n fig = plt.figure()\n grid = ImageGrid(fig, 111, nrows_ncols=shape, axes_pad=0.05)\n size = shape[0] * shape[1]\n for i in range(size):\n grid[i].axis('off')\n grid[i].imshow(images[i]) \n plt.show()\nimshow_grid(mnist_train, shape=[5, 10])\nimshow_grid(mnistm_train, shape=[5, 10])", "LOADING MNIST\nExtracting data/train-images-idx3-ubyte.gz\nExtracting data/train-labels-idx1-ubyte.gz\nExtracting data/t10k-images-idx3-ubyte.gz\nExtracting data/t10k-labels-idx1-ubyte.gz\nLOADING MNIST-M\nGENERATING DOMAIN DATA\n" ] ], [ [ "## DEFINE TRAIN / TEST DATA FOR DANN", "_____no_output_____" ] ], [ [ "def print_npshape(x, name):\n print (\"SHAPE OF %s IS %s\" % (name, x.shape,))\n\n# SOURCE AND TARGET DATA \nsource_train_img = mnist_train \nsource_train_label = mnist_train_label \nsource_test_img = mnist_test \nsource_test_label = mnist_test_label \ntarget_test_img = mnistm_test\ntarget_test_label = mnistm_test_label\n# DOMAIN ADVERSARIAL TRAINING\ndomain_train_img = total_train \ndomain_train_label = total_train_domain \n \nimgshape = source_train_img.shape[1:4]\nlabelshape = source_train_label.shape[1]\n\nprint_npshape(source_train_img, \"source_train_img\")\nprint_npshape(source_train_label, \"source_train_label\")\nprint_npshape(source_test_img, \"source_test_img\")\nprint_npshape(source_test_label, \"source_test_label\")\nprint_npshape(target_test_img, \"target_test_img\")\nprint_npshape(target_test_label, \"target_test_label\")\nprint_npshape(domain_train_img, \"domain_train_img\")\nprint_npshape(domain_train_label, \"domain_train_label\")\n\nprint imgshape\nprint labelshape", "SHAPE OF source_train_img IS (55000, 28, 28, 3)\nSHAPE OF source_train_label IS (55000, 10)\nSHAPE OF source_test_img IS (10000, 28, 28, 3)\nSHAPE OF source_test_label IS (10000, 10)\nSHAPE OF target_test_img IS (10000, 28, 28, 3)\nSHAPE OF target_test_label IS (10000, 10)\nSHAPE OF domain_train_img IS (110000, 28, 28, 3)\nSHAPE OF domain_train_label IS (110000, 2)\n(28, 28, 3)\n10\n" ] ], [ [ "## FLIP GRADIENT", "_____no_output_____" ] ], [ [ "class 
FlipGradientBuilder(object):\n def __init__(self):\n self.num_calls = 0\n def __call__(self, x, l=1.0):\n grad_name = \"FlipGradient%d\" % self.num_calls\n @ops.RegisterGradient(grad_name)\n def _flip_gradients(op, grad):\n return [tf.neg(grad) * l]\n g = tf.get_default_graph()\n with g.gradient_override_map({\"Identity\": grad_name}):\n y = tf.identity(x)\n self.num_calls += 1\n return y\nflip_gradient = FlipGradientBuilder()", "_____no_output_____" ] ], [ [ "## BUILD MODEL", "_____no_output_____" ] ], [ [ "x = tf.placeholder(tf.uint8, [None, imgshape[0], imgshape[1], imgshape[2]])\ny = tf.placeholder(tf.float32, [None, labelshape])\nd = tf.placeholder(tf.float32, [None, 2]) # DOMAIN LABEL\nlr = tf.placeholder(tf.float32, [])\ndw = tf.placeholder(tf.float32, [])\n# FEATURE EXTRACTOR\ndef feat_ext_net(x, name='feat_ext', reuse=False):\n with tf.variable_scope(name) as scope:\n if reuse:\n scope.reuse_variables()\n x = (tf.cast(x, tf.float32) - pixel_mean) / 255.\n net = slim.conv2d(x, 32, [5, 5], scope = 'conv1')\n net = slim.max_pool2d(net, [2, 2], scope='pool1')\n net = slim.conv2d(net, 48, [5, 5], scope='conv2')\n net = slim.max_pool2d(net, [2, 2], scope='pool2')\n feat = slim.flatten(net, scope='flat')\n return feat\n# CLASS PREDICTION\ndef class_pred_net(feat, name='class_pred', reuse=False):\n with tf.variable_scope(name) as scope:\n if reuse:\n scope.reuse_variables()\n net = slim.fully_connected(feat, 100, scope='fc1')\n net = slim.fully_connected(net, 100, scope='fc2')\n net = slim.fully_connected(net, 10, activation_fn = None, scope='out')\n return net\n# DOMAIN PREDICTION\ndef domain_pred_net(feat, name='domain_pred', reuse=False):\n with tf.variable_scope(name) as scope:\n if reuse:\n scope.reuse_variables()\n feat = flip_gradient(feat, dw) # GRADIENT REVERSAL\n net = slim.fully_connected(feat, 100, scope='fc1')\n net = slim.fully_connected(net, 2, activation_fn = None, scope='out')\n return net\n# DOMAIN ADVERSARIAL NEURAL NETWORK\nfeat_ext_dann = feat_ext_net(x, name='dann_feat_ext')\nclass_pred_dann = class_pred_net(feat_ext_dann, name='dann_class_pred')\ndomain_pred_dann = domain_pred_net(feat_ext_dann, name='dann_domain_pred')\n# NAIVE CONVOLUTIONAL NEURAL NETWORK\nfeat_ext_cnn = feat_ext_net(x, name='cnn_feat_ext')\nclass_pred_cnn = class_pred_net(feat_ext_cnn, name='cnn_class_pred')\n\nprint (\"MODEL READY\")", "MODEL READY\n" ] ], [ [ "## CHECK VARIABLES & SET WEIGHT DECAY", "_____no_output_____" ] ], [ [ "t_weights = tf.trainable_variables()\n# TOTAL WEIGHTS\n# print (\" TOTAL WEIGHT LIST\")\n# for i in range(len(t_weights)): print (\"[%2d/%2d] [%s]\" % (i, len(t_weights), t_weights[i]))\n\n# FEATURE EXTRACTOR + CLASS PREDICTOR \nprint (\" WEIGHT LIST FOR CLASS PREDICTOR\")\nw_class = []\nfor i in range(len(t_weights)): \n if t_weights[i].name[:9] == 'dann_feat' or t_weights[i].name[:10] == 'dann_class':\n w_class.append(tf.nn.l2_loss(t_weights[i]))\n print (\"[%s] \\t ADDED TO W_CLASS LIST\" % (t_weights[i].name))\nl2loss_dann_class = tf.add_n(w_class)\n\n# FEATURE EXTRACTOR + DOMAIN CLASSIFIER \nprint (\"\\n WEIGHT LIST FOR DOMAIN CLASSIFIER\")\nw_domain = []\nfor i in range(len(t_weights)):\n if t_weights[i].name[:8] == 'dann_feat' or t_weights[i].name[:11] == 'dann_domain':\n w_domain.append(tf.nn.l2_loss(t_weights[i]))\n print (\"[%s] \\t ADDED TO W_DOMAIN LIST\" % (t_weights[i].name))\nl2loss_cnn_domain = tf.add_n(w_domain)", " WEIGHT LIST FOR CLASS PREDICTOR\n[dann_feat_ext/conv1/weights:0] \t ADDED TO W_CLASS LIST\n[dann_feat_ext/conv1/biases:0] \t ADDED TO 
W_CLASS LIST\n[dann_feat_ext/conv2/weights:0] \t ADDED TO W_CLASS LIST\n[dann_feat_ext/conv2/biases:0] \t ADDED TO W_CLASS LIST\n[dann_class_pred/fc1/weights:0] \t ADDED TO W_CLASS LIST\n[dann_class_pred/fc1/biases:0] \t ADDED TO W_CLASS LIST\n[dann_class_pred/fc2/weights:0] \t ADDED TO W_CLASS LIST\n[dann_class_pred/fc2/biases:0] \t ADDED TO W_CLASS LIST\n[dann_class_pred/out/weights:0] \t ADDED TO W_CLASS LIST\n[dann_class_pred/out/biases:0] \t ADDED TO W_CLASS LIST\n\n WEIGHT LIST FOR DOMAIN CLASSIFIER\n[dann_domain_pred/fc1/weights:0] \t ADDED TO W_DOMAIN LIST\n[dann_domain_pred/fc1/biases:0] \t ADDED TO W_DOMAIN LIST\n[dann_domain_pred/out/weights:0] \t ADDED TO W_DOMAIN LIST\n[dann_domain_pred/out/biases:0] \t ADDED TO W_DOMAIN LIST\n" ] ], [ [ "## DEFINE FUNCTIONS", "_____no_output_____" ] ], [ [ "# FUNCTIONS FOR DANN\nclass_loss_dann = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(class_pred_dann, y)) \ndomain_loss_dann = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(domain_pred_dann, d)) \noptm_class_dann = tf.train.MomentumOptimizer(lr, 0.9).minimize(class_loss_dann)\noptm_domain_dann = tf.train.MomentumOptimizer(lr, 0.9).minimize(domain_loss_dann)\naccr_class_dann = tf.reduce_mean(tf.cast(tf.equal(tf.arg_max(class_pred_dann, 1), tf.arg_max(y, 1)), tf.float32))\naccr_domain_dann = tf.reduce_mean(tf.cast(tf.equal(tf.arg_max(domain_pred_dann, 1), tf.arg_max(d, 1)), tf.float32))\n\n# FUNCTIONS FOR CNN\nclass_loss_cnn = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(class_pred_cnn, y)) \noptm_class_cnn = tf.train.MomentumOptimizer(lr, 0.9).minimize(class_loss_cnn)\naccr_class_cnn = tf.reduce_mean(tf.cast(tf.equal(tf.arg_max(class_pred_cnn, 1), tf.arg_max(y, 1)), tf.float32))\nprint (\"FUNCTIONS READY\")", "FUNCTIONS READY\n" ] ], [ [ "## OPEN SESSION", "_____no_output_____" ] ], [ [ "sess = tf.Session()\ntf.set_random_seed(0)\ninit = tf.global_variables_initializer()\nsess.run(init)\nprint (\"SESSION OPENED\")", "SESSION OPENED\n" ] ], [ [ "## OPTIMIZE", "_____no_output_____" ] ], [ [ "# PARAMETERS\nbatch_size = 128\ntraining_epochs = 30\nevery_epoch = 1\nnum_batch = int(ntrain/batch_size)+1\ntotal_iter = training_epochs*num_batch\nfor epoch in range(training_epochs):\n randpermlist = np.random.permutation(ntrain)\n for i in range(num_batch): \n # REVERSAL WEIGHT AND LEARNING RATE SCHEDULE\n curriter = epoch*num_batch + i\n p = float(curriter) / float(total_iter)\n dw_val = 2. / (1. + np.exp(-10. * p)) - 1\n lr_val = 0.01 / (1. 
+ 10 * p)**0.75\n \n # OPTIMIZE DANN: CLASS-CLASSIFIER\n randidx_class = randpermlist[i*batch_size:min((i+1)*batch_size, ntrain-1)]\n batch_x_class = source_train_img[randidx_class]\n batch_y_class = source_train_label[randidx_class, :]\n feeds_class = {x:batch_x_class, y:batch_y_class, lr:lr_val, dw:dw_val}\n _, lossclass_val_dann = sess.run([optm_class_dann, class_loss_dann], feed_dict=feeds_class)\n \n # OPTIMIZE DANN: DOMAIN-CLASSIFER\n randidx_domain = np.random.permutation(n_total_train)[:batch_size]\n batch_x_domain = total_train[randidx_domain]\n batch_d_domain = total_train_domain[randidx_domain, :]\n feeds_domain = {x:batch_x_domain, d:batch_d_domain, lr:lr_val, dw:dw_val}\n _, lossdomain_val_dann = sess.run([optm_domain_dann, domain_loss_dann], feed_dict=feeds_domain)\n \n # OPTIMIZE DANN: CLASS-CLASSIFIER\n _, lossclass_val_cnn = sess.run([optm_class_cnn, class_loss_cnn], feed_dict=feeds_class)\n \n if epoch % every_epoch == 0:\n # CHECK BOTH LOSSES\n print (\"[%d/%d][%d/%d] p: %.3f lossclass_val: %.3e, lossdomain_val: %.3e\" \n % (epoch, training_epochs, curriter, total_iter, p, lossdomain_val_dann, lossclass_val_dann))\n # CHECK ACCUARACIES OF BOTH SOURCE AND TARGET\n feed_source = {x:source_test_img, y:source_test_label}\n feed_target = {x:target_test_img, y:target_test_label}\n accr_source_dann = sess.run(accr_class_dann, feed_dict=feed_source)\n accr_target_dann = sess.run(accr_class_dann, feed_dict=feed_target)\n accr_source_cnn = sess.run(accr_class_cnn, feed_dict=feed_source)\n accr_target_cnn = sess.run(accr_class_cnn, feed_dict=feed_target)\n print (\" DANN: SOURCE ACCURACY: %.3f TARGET ACCURACY: %.3f\" \n % (accr_source_dann, accr_target_dann)) \n print (\" CNN: SOURCE ACCURACY: %.3f TARGET ACCURACY: %.3f\" \n % (accr_source_cnn, accr_target_cnn)) ", "[0/30][429/12900] p: 0.033 lossclass_val: 1.432e-01, lossdomain_val: 9.532e-02\n DANN: SOURCE ACCURACY: 0.973 TARGET ACCURACY: 0.536\n CNN: SOURCE ACCURACY: 0.971 TARGET ACCURACY: 0.512\n[1/30][859/12900] p: 0.067 lossclass_val: 1.282e-01, lossdomain_val: 5.568e-02\n DANN: SOURCE ACCURACY: 0.978 TARGET ACCURACY: 0.557\n CNN: SOURCE ACCURACY: 0.981 TARGET ACCURACY: 0.551\n[2/30][1289/12900] p: 0.100 lossclass_val: 1.236e-01, lossdomain_val: 2.624e-02\n DANN: SOURCE ACCURACY: 0.980 TARGET ACCURACY: 0.548\n CNN: SOURCE ACCURACY: 0.986 TARGET ACCURACY: 0.583\n[3/30][1719/12900] p: 0.133 lossclass_val: 1.589e-01, lossdomain_val: 8.414e-02\n DANN: SOURCE ACCURACY: 0.981 TARGET ACCURACY: 0.570\n CNN: SOURCE ACCURACY: 0.985 TARGET ACCURACY: 0.581\n[4/30][2149/12900] p: 0.167 lossclass_val: 2.284e-01, lossdomain_val: 4.871e-02\n DANN: SOURCE ACCURACY: 0.982 TARGET ACCURACY: 0.611\n CNN: SOURCE ACCURACY: 0.987 TARGET ACCURACY: 0.579\n[5/30][2579/12900] p: 0.200 lossclass_val: 1.483e-01, lossdomain_val: 2.691e-01\n DANN: SOURCE ACCURACY: 0.949 TARGET ACCURACY: 0.501\n CNN: SOURCE ACCURACY: 0.988 TARGET ACCURACY: 0.569\n[6/30][3009/12900] p: 0.233 lossclass_val: 5.335e-01, lossdomain_val: 1.523e-02\n DANN: SOURCE ACCURACY: 0.973 TARGET ACCURACY: 0.538\n CNN: SOURCE ACCURACY: 0.985 TARGET ACCURACY: 0.580\n[7/30][3439/12900] p: 0.267 lossclass_val: 3.956e-01, lossdomain_val: 1.409e-01\n DANN: SOURCE ACCURACY: 0.970 TARGET ACCURACY: 0.546\n CNN: SOURCE ACCURACY: 0.987 TARGET ACCURACY: 0.572\n[8/30][3869/12900] p: 0.300 lossclass_val: 7.526e-01, lossdomain_val: 4.144e-02\n DANN: SOURCE ACCURACY: 0.971 TARGET ACCURACY: 0.562\n CNN: SOURCE ACCURACY: 0.987 TARGET ACCURACY: 0.572\n[9/30][4299/12900] p: 0.333 lossclass_val: 6.796e-01, 
lossdomain_val: 1.618e-01\n DANN: SOURCE ACCURACY: 0.968 TARGET ACCURACY: 0.611\n CNN: SOURCE ACCURACY: 0.988 TARGET ACCURACY: 0.593\n[10/30][4729/12900] p: 0.367 lossclass_val: 6.419e-01, lossdomain_val: 1.779e-01\n DANN: SOURCE ACCURACY: 0.974 TARGET ACCURACY: 0.645\n CNN: SOURCE ACCURACY: 0.988 TARGET ACCURACY: 0.582\n[11/30][5159/12900] p: 0.400 lossclass_val: 4.290e-01, lossdomain_val: 4.524e-02\n DANN: SOURCE ACCURACY: 0.979 TARGET ACCURACY: 0.683\n CNN: SOURCE ACCURACY: 0.989 TARGET ACCURACY: 0.586\n[12/30][5589/12900] p: 0.433 lossclass_val: 5.656e-01, lossdomain_val: 4.452e-02\n DANN: SOURCE ACCURACY: 0.984 TARGET ACCURACY: 0.701\n CNN: SOURCE ACCURACY: 0.988 TARGET ACCURACY: 0.576\n[13/30][6019/12900] p: 0.467 lossclass_val: 5.564e-01, lossdomain_val: 5.877e-02\n DANN: SOURCE ACCURACY: 0.980 TARGET ACCURACY: 0.687\n CNN: SOURCE ACCURACY: 0.988 TARGET ACCURACY: 0.580\n[14/30][6449/12900] p: 0.500 lossclass_val: 4.393e-01, lossdomain_val: 8.811e-02\n DANN: SOURCE ACCURACY: 0.981 TARGET ACCURACY: 0.695\n CNN: SOURCE ACCURACY: 0.989 TARGET ACCURACY: 0.573\n[15/30][6879/12900] p: 0.533 lossclass_val: 6.850e-01, lossdomain_val: 5.342e-02\n DANN: SOURCE ACCURACY: 0.979 TARGET ACCURACY: 0.707\n CNN: SOURCE ACCURACY: 0.989 TARGET ACCURACY: 0.571\n[16/30][7309/12900] p: 0.567 lossclass_val: 5.368e-01, lossdomain_val: 1.093e-01\n DANN: SOURCE ACCURACY: 0.983 TARGET ACCURACY: 0.702\n CNN: SOURCE ACCURACY: 0.989 TARGET ACCURACY: 0.569\n[17/30][7739/12900] p: 0.600 lossclass_val: 5.123e-01, lossdomain_val: 2.576e-02\n DANN: SOURCE ACCURACY: 0.987 TARGET ACCURACY: 0.730\n CNN: SOURCE ACCURACY: 0.989 TARGET ACCURACY: 0.575\n[18/30][8169/12900] p: 0.633 lossclass_val: 4.371e-01, lossdomain_val: 2.007e-02\n DANN: SOURCE ACCURACY: 0.983 TARGET ACCURACY: 0.694\n CNN: SOURCE ACCURACY: 0.989 TARGET ACCURACY: 0.578\n[19/30][8599/12900] p: 0.667 lossclass_val: 5.624e-01, lossdomain_val: 2.166e-02\n DANN: SOURCE ACCURACY: 0.982 TARGET ACCURACY: 0.724\n CNN: SOURCE ACCURACY: 0.989 TARGET ACCURACY: 0.578\n[20/30][9029/12900] p: 0.700 lossclass_val: 5.554e-01, lossdomain_val: 4.575e-03\n DANN: SOURCE ACCURACY: 0.983 TARGET ACCURACY: 0.727\n CNN: SOURCE ACCURACY: 0.989 TARGET ACCURACY: 0.569\n[21/30][9459/12900] p: 0.733 lossclass_val: 4.938e-01, lossdomain_val: 9.608e-02\n DANN: SOURCE ACCURACY: 0.982 TARGET ACCURACY: 0.707\n CNN: SOURCE ACCURACY: 0.989 TARGET ACCURACY: 0.572\n[22/30][9889/12900] p: 0.767 lossclass_val: 5.886e-01, lossdomain_val: 8.391e-02\n DANN: SOURCE ACCURACY: 0.983 TARGET ACCURACY: 0.717\n CNN: SOURCE ACCURACY: 0.988 TARGET ACCURACY: 0.570\n[23/30][10319/12900] p: 0.800 lossclass_val: 5.417e-01, lossdomain_val: 4.446e-02\n DANN: SOURCE ACCURACY: 0.983 TARGET ACCURACY: 0.726\n CNN: SOURCE ACCURACY: 0.988 TARGET ACCURACY: 0.565\n[24/30][10749/12900] p: 0.833 lossclass_val: 5.685e-01, lossdomain_val: 1.941e-02\n DANN: SOURCE ACCURACY: 0.984 TARGET ACCURACY: 0.731\n CNN: SOURCE ACCURACY: 0.988 TARGET ACCURACY: 0.564\n[25/30][11179/12900] p: 0.867 lossclass_val: 5.388e-01, lossdomain_val: 3.776e-02\n DANN: SOURCE ACCURACY: 0.983 TARGET ACCURACY: 0.725\n CNN: SOURCE ACCURACY: 0.989 TARGET ACCURACY: 0.563\n[26/30][11609/12900] p: 0.900 lossclass_val: 6.212e-01, lossdomain_val: 1.021e-01\n DANN: SOURCE ACCURACY: 0.984 TARGET ACCURACY: 0.737\n CNN: SOURCE ACCURACY: 0.989 TARGET ACCURACY: 0.570\n[27/30][12039/12900] p: 0.933 lossclass_val: 5.588e-01, lossdomain_val: 1.804e-02\n DANN: SOURCE ACCURACY: 0.985 TARGET ACCURACY: 0.739\n CNN: SOURCE ACCURACY: 0.989 TARGET ACCURACY: 
0.573\n[28/30][12469/12900] p: 0.967 lossclass_val: 5.523e-01, lossdomain_val: 4.930e-02\n DANN: SOURCE ACCURACY: 0.984 TARGET ACCURACY: 0.740\n CNN: SOURCE ACCURACY: 0.988 TARGET ACCURACY: 0.563\n[29/30][12899/12900] p: 1.000 lossclass_val: 5.453e-01, lossdomain_val: 6.175e-02\n DANN: SOURCE ACCURACY: 0.984 TARGET ACCURACY: 0.744\n CNN: SOURCE ACCURACY: 0.990 TARGET ACCURACY: 0.566\n" ] ], [ [ "## CHECK ACCURACIES", "_____no_output_____" ] ], [ [ "feed_source = {x:source_test_img, y:source_test_label}\nfeed_target = {x:target_test_img, y:target_test_label}\naccr_source_dann = sess.run(accr_class_dann, feed_dict=feed_source)\naccr_target_dann = sess.run(accr_class_dann, feed_dict=feed_target)\naccr_source_cnn = sess.run(accr_class_cnn, feed_dict=feed_source)\naccr_target_cnn = sess.run(accr_class_cnn, feed_dict=feed_target)\nprint (\" DANN: SOURCE ACCURACY: %.3f TARGET ACCURACY: %.3f\" \n % (accr_source_dann, accr_target_dann)) \nprint (\" CNN: SOURCE ACCURACY: %.3f TARGET ACCURACY: %.3f\" \n % (accr_source_cnn, accr_target_cnn)) ", " DANN: SOURCE ACCURACY: 0.984 TARGET ACCURACY: 0.744\n CNN: SOURCE ACCURACY: 0.990 TARGET ACCURACY: 0.566\n" ] ], [ [ "## CHECK WITH T-SNE", "_____no_output_____" ] ], [ [ "# COMBINE SOURCE AND TARGET DATA\nnum_test = 500\ncomb_imgs = np.vstack([source_test_img[:num_test], target_test_img[:num_test]])\ncomb_labels = np.vstack([source_test_label[:num_test], target_test_label[:num_test]])\ncomb_domain = np.vstack([np.tile([1., 0.], [num_test, 1]), np.tile([0., 1.], [num_test, 1])])\n\n# GET FEATURE REPRESENTATIONS\ntest_emb_dann = sess.run(feat_ext_dann, feed_dict={x: comb_imgs})\ntest_emb_cnn = sess.run(feat_ext_cnn, feed_dict={x: comb_imgs})\n\n# RUN T-SNE\ntsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=3000)\ndann_tsne = tsne.fit_transform(test_emb_dann)\ncnn_tsne = tsne.fit_transform(test_emb_cnn)\nprint (\"T-SNE DONE.\")", "T-SNE DONE.\n" ] ], [ [ "### PLOT FUNCTION", "_____no_output_____" ] ], [ [ "def plot_embedding(X, y, d, title=None):\n \"\"\"Plot an embedding X with the class label y colored by the domain d.\"\"\"\n x_min, x_max = np.min(X, 0), np.max(X, 0)\n X = (X - x_min) / (x_max - x_min)\n # PLOT COLORED NUMBERS\n plt.figure(figsize=(10, 10)) \n ax = plt.subplot(111)\n for i in range(X.shape[0]):\n plt.text(X[i, 0], X[i, 1], str(y[i]),\n color = plt.cm.bwr(d[i] / 1.),\n fontdict={'weight': 'bold', 'size': 9})\n plt.xticks([]), plt.yticks([]) # REMOVE TICKS\n xmean, xvar = np.mean(X, 0), np.var(X, 0)\n if title is not None:\n plt.title(title)", "_____no_output_____" ] ], [ [ "## PLOT REPRESENTATIONS WITH T-SNE", "_____no_output_____" ] ], [ [ "# 0: BLUE (SOURSE) 1: RED (TARGET) \nplot_embedding(dann_tsne, comb_labels.argmax(1), comb_domain.argmax(1)\n , 'DOMAIN ADAPTATION (BLUE: SOURCE, RED: TARGET)')\nplot_embedding(cnn_tsne, comb_labels.argmax(1), comb_domain.argmax(1)\n , 'NAIVE CNN (BLUE: SOURCE, RED: TARGET)')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ece019fc62ed0b1ad7cee638fc6f3929c02db3bc
139,517
ipynb
Jupyter Notebook
Untitled.ipynb
TamimEhsan2021/AiBetCore
18d8985fc599ceb602d04a194e42bee48675e04e
[ "MIT" ]
null
null
null
Untitled.ipynb
TamimEhsan2021/AiBetCore
18d8985fc599ceb602d04a194e42bee48675e04e
[ "MIT" ]
null
null
null
Untitled.ipynb
TamimEhsan2021/AiBetCore
18d8985fc599ceb602d04a194e42bee48675e04e
[ "MIT" ]
null
null
null
44.333333
19,587
0.466524
[ [ [ "import pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn import model_selection\nimport numpy as np\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import r2_score\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.svm import SVR\nfrom sklearn.tree import DecisionTreeRegressor", "_____no_output_____" ], [ "import os\nimport glob\nfrom pathlib import Path\n\npath = os.getcwd()+'/Data'\nfurtherpath = path+'/Results_Cleaned'\nfile_list = []\nfor path in Path(furtherpath).rglob('*.csv'):\n file_list.append(path)\n#combine all files in the list\ncombined_csv = pd.concat([pd.read_csv(f) for f in file_list])\n#export to csv\ncombined_csv.to_csv( \"combined_clean.csv\", index=False, encoding='utf-8-sig')", "_____no_output_____" ], [ "combined_csv", "_____no_output_____" ], [ "combined_csv.columns", "_____no_output_____" ], [ "features = ['Goals_For_Home',\n 'Goals_For_Away', 'Position_Home', 'Points_Home', 'Total_Wins_Home',\n 'Total_Draw_Home', 'Total_Lose_Home', 'Total_Goals_For_Home_Team',\n 'Total_Goals_Against_Home_Team', 'Total_Streak_Home', 'Wins_When_Home',\n 'Draw_When_Home', 'Lose_When_Home', 'Goals_For_When_Home',\n 'Goals_Against_When_Home', 'Position_Away', 'Points_Away',\n 'Total_Wins_Away', 'Total_Draw_Away', 'Total_Lose_Away',\n 'Total_Goals_For_Away_Team', 'Total_Goals_Against_Away_Team',\n 'Total_Streak_Away', 'Wins_When_Away', 'Draw_When_Away',\n 'Lose_When_Away', 'Goals_For_When_Away', 'Goals_Against_When_Away',\n 'Streak_When_Home', 'Streak_When_Away',]", "_____no_output_____" ], [ "combined_csv[features]", "_____no_output_____" ], [ "def eval_result(result:str):\n result_list = result.split(\"-\")\n try:\n return int(result_list[0])-int(result_list[1])\n except:\n return \"Unknown\"\n \ndifferenceList = []\nfor index, row in combined_csv.iterrows():\n result = row['Result']\n difference=eval_result(result=result)\n differenceList.append(difference)\n \ncombined_csv['difference'] = differenceList", "_____no_output_____" ], [ "X = np.array(combined_csv[features])\ny = np.array(combined_csv['difference'])", "_____no_output_____" ], [ "X.shape", "_____no_output_____" ], [ "y.shape", "_____no_output_____" ], [ "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=4)", "_____no_output_____" ], [ "model = LinearRegression()\nmodel.fit(X_train, y_train)", "_____no_output_____" ], [ "b = model.intercept_\nm = model.coef_\nprint(m, b)", "[ 1.00000000e+00 -1.00000000e+00 5.06773898e-17 -1.24667754e-14\n 3.86209596e-14 1.31957930e-14 2.10412789e-16 -2.36697697e-16\n 6.04305127e-17 -1.42992227e-16 -9.55123617e-16 -7.66265855e-16\n -5.34970882e-16 7.96177647e-17 -2.38670947e-17 -3.49472719e-16\n -2.00904076e-15 5.70377079e-15 1.91513472e-15 0.00000000e+00\n -5.55111512e-17 -2.22044605e-16 -1.87350135e-16 1.01307851e-15\n 9.57567359e-16 3.74700271e-16 -2.91433544e-16 1.11022302e-16\n -8.32667268e-17 -5.55111512e-17] 4.6074255521944e-15\n" ], [ "#r squred value. the closer to one the better\nmodel.score(X_train, y_train)", "_____no_output_____" ] ], [ [ "At least we know that the model works... This feature set includes Goals for Home and Goals for Away which respectively have a weighting of 1 and -1 for the end result of Goal Difference. 
Now to do the same but remove these features.", "_____no_output_____" ] ], [ [ "features = ['Position_Home', 'Points_Home', 'Total_Wins_Home',\n 'Total_Draw_Home', 'Total_Lose_Home', 'Total_Goals_For_Home_Team',\n 'Total_Goals_Against_Home_Team', 'Total_Streak_Home', 'Wins_When_Home',\n 'Draw_When_Home', 'Lose_When_Home', 'Goals_For_When_Home',\n 'Goals_Against_When_Home', 'Position_Away', 'Points_Away',\n 'Total_Wins_Away', 'Total_Draw_Away', 'Total_Lose_Away',\n 'Total_Goals_For_Away_Team', 'Total_Goals_Against_Away_Team',\n 'Total_Streak_Away', 'Wins_When_Away', 'Draw_When_Away',\n 'Lose_When_Away', 'Goals_For_When_Away', 'Goals_Against_When_Away',\n 'Streak_When_Home', 'Streak_When_Away',]", "_____no_output_____" ], [ "X = np.array(combined_csv[features])\ny = np.array(combined_csv['difference'])", "_____no_output_____" ], [ "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=4)", "_____no_output_____" ], [ "model = LinearRegression()\nmodel.fit(X_train, y_train)", "_____no_output_____" ], [ "b = model.intercept_\nm = model.coef_\nprint(m, b)", "[-0.0169137 -0.00777828 -0.00564078 0.00914408 0.02304197 0.03266904\n -0.02295493 0.03266082 -0.01339504 -0.01252053 -0.02369427 -0.00627958\n 0.00114657 0.01447244 0.0103409 0.00356592 -0.00035685 -0.01213202\n -0.03000784 0.02753704 -0.01808117 -0.00969982 -0.01411911 -0.00126563\n 0.00655419 -0.00856458 0.03724002 -0.05610376] 0.40417084269616227\n" ], [ "model.score(X_train, y_train)", "_____no_output_____" ] ], [ [ "Assessing the features and their weights, as determined by the model.", "_____no_output_____" ] ], [ [ "weights = list(zip(features, m**2))", "_____no_output_____" ], [ "weights_df = df = pd.DataFrame(weights, columns = ['Feature', 'Weight_sq'])", "_____no_output_____" ], [ "weights_df.sort_values(by=['Weight_sq'])", "_____no_output_____" ] ], [ [ "Let us try to normalize the data beforehand and view the results.", "_____no_output_____" ] ], [ [ "from sklearn import preprocessing\nX_normalized = preprocessing.normalize(X, axis=0)", "_____no_output_____" ], [ "X", "_____no_output_____" ], [ "X_normalized[1100]", "_____no_output_____" ], [ "X_train, X_test, y_train, y_test = train_test_split(X_normalized, y, test_size=0.2, random_state=4)\nmodel = LinearRegression()\nmodel.fit(X_train, y_train)\nb = model.intercept_\nm = model.coef_\nprint(m, b)", "[-7.67283045e+01 3.11897299e+09 -2.62161847e+09 -7.03178741e+08\n 7.47783305e+01 3.68549394e+02 -2.50185374e+02 5.53678803e+01\n -3.35896169e+01 -2.24686525e+01 -4.09007008e+01 -4.79342939e+01\n 6.59230961e+00 6.39817218e+01 1.05686151e+10 -8.77877104e+09\n -2.44957007e+09 -4.03431164e+01 -3.23727079e+02 3.03487237e+02\n -3.10045863e+01 -1.66382937e+01 -2.50678226e+01 -3.13777751e+00\n 3.74034757e+01 -6.42861843e+01 7.26349837e+01 -7.57123484e+01] 0.40417632164785233\n" ], [ "model.score(X_train, y_train)", "_____no_output_____" ], [ "X_mod = np.round(model.predict(X_test))\nresult_mod = X_mod - y_test\nlen(result_mod[np.where(result_mod == 0)])/len(X_mod)\n", "_____no_output_____" ], [ "for i, num in enumerate(result_mod):\n if num > 1:\n result_mod[i] = 1\n elif num < -1:\n result_mod[i] = -1\nlen(result_mod[np.where(result_mod == 0)])/len(X_mod)\n ", "_____no_output_____" ], [ "model.predict(X_test)", "_____no_output_____" ], [ "model.score(X_mod, y_test)", "_____no_output_____" ], [ "X_train[12]", "_____no_output_____" ], [ "weights = list(zip(features, m**2))\nweights_df = df = pd.DataFrame(weights, columns = ['Feature', 
'Weight_sq'])\nweights_df.sort_values(by=['Weight_sq'])", "_____no_output_____" ], [ "X[0]", "_____no_output_____" ], [ "combined_csv.to_csv( \"combined_clean.csv\", index=False, encoding='utf-8-sig')", "_____no_output_____" ], [ "combined_csv[combined_csv[\"Round\"]>20][features]", "_____no_output_____" ], [ "def lin_reg(X, y):\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=4)\n model = LinearRegression()\n model.fit(X_train, y_train)\n b = model.intercept_\n m = model.coef_\n return (m, b, model.score(X_train, y_train))\n\nlinear_reg_dict = {}\n\nfor i in range(30):\n X = np.array(combined_csv[combined_csv[\"Round\"]>i][features])\n y = np.array(combined_csv[combined_csv[\"Round\"]>i]['difference'])\n linear_reg_dict[i] = lin_reg(X, y)", "_____no_output_____" ], [ "linear_reg_dict", "_____no_output_____" ], [ "feature_weights_dict = {}\nm_values = []\nmodel_score = []\nfor feat in features:\n feature_weights_dict[feat] = []\n\nfor j, _ in enumerate(list(linear_reg_dict.values())):\n for i, feat in enumerate(list(linear_reg_dict.values())[j][0]):\n feature_weights_dict[features[i]].append(feat)\n m_values.append(list(linear_reg_dict.values())[j][1])\n model_score.append(list(linear_reg_dict.values())[j][2])\n \nprint(feature_weights_dict) \nprint(m_values)\nprint(mode_score)", "{'Position_Home': [-0.016913697189088948, -0.018401088757142647, -0.017300479570629777, -0.01642816745722308, -0.015744780277271945, -0.01819213168693474, -0.017205344234898022, -0.017476058814359065, -0.016275906853612403, -0.015059180928335025, -0.013824832361792312, -0.01435325435857346, -0.014329053322870534, -0.013912771613310664, -0.012068327327720398, -0.01324939075980268, -0.010349918619103582, -0.011450993173541636, -0.009078539515320148, -0.008975976832263437, -0.011286953811771997, -0.010138290371134347, -0.01017303078453845, -0.010551635595969304, -0.006294415630353424, -0.009606380791792034, -0.011072640171353424, -0.00845139321191677, -0.010372742360437734, -0.0105089257754317], 'Points_Home': [-0.007778279161564573, -0.00776621440595552, -0.008043386869601767, -0.006000832933057986, -0.007014920218619479, -0.007127676282797306, -0.007960678020577185, -0.0059285108606316, -0.00526347781185622, -0.006730353299644776, -0.004098102247826595, -0.006446075226600803, -0.0040806328448786695, -0.004016170158542148, -0.006684865603732189, -0.0044108413237500775, -0.0024738651666517918, -0.00264529601236959, -0.002259563531005912, -0.005240545238854009, -0.004373658666242583, -0.002312093885513301, -0.0012587529073281797, 0.00024567915160699297, 0.0018612992867771475, -0.001253315993338052, -0.0011453352181423886, -0.006387097293382701, -0.003571884004120859, -0.004096642750199582], 'Total_Wins_Home': [-0.005640784756299617, -0.005615898216613881, -0.005320800225406624, -0.004454615732212547, -0.003935983426799958, -0.00458962486952806, -0.004495499867396355, -0.0043452151965699525, -0.003461855631639891, -0.0031894891927229624, -0.002962904553325483, -0.002179350398371068, -0.0027824255320810947, -0.0022091560876337612, -0.0026221083944585276, -0.0021313095752333805, -0.002091256519800034, -0.002393964473183262, -0.0020499623446902054, -0.00137099405395145, -0.0026696742893022327, -0.001783702906473427, -0.0015340009992838818, -0.002162684255263352, 0.00011518254338920662, -0.0010538330730269347, 0.0004725159404679155, 0.0012340282788332218, 0.0007353204542536097, -0.0008116183324749041], 'Total_Draw_Home': [0.009144075107332696, 0.009081480243889361, 
0.007919013806621461, 0.007363014263578709, 0.004793030061779057, 0.006641198325785056, 0.005525821581611975, 0.007107134729078895, 0.005122089083063638, 0.0028381142785242567, 0.004790611412150042, 9.197596851175648e-05, 0.0042666437513638805, 0.002611298104358169, 0.0011814595796426567, 0.0019830874019488793, 0.003799904392749004, 0.0045365974071810665, 0.0038903235030644987, -0.0011275630769996025, 0.003635364201664205, 0.0030390148339069375, 0.003343250090523107, 0.006733731917397431, 0.001515751656609627, 0.0019081832257435948, -0.002562883039546286, -0.010089182129882334, -0.005777845366881871, -0.0016617877527748104], 'Total_Lose_Home': [0.02304196919321851, 0.017983797065501277, 0.01941228963787473, 0.01502882955560687, 0.017570390200250904, 0.018998505640320372, 0.01730816976540843, 0.014518775398397847, 0.014200791765474771, 0.00879542416212459, 0.009007167088158272, 0.007861688570644851, 0.012638389043162428, 0.01037969000287268, 0.007873496030062028, 0.010427176143719389, 0.008942696757294614, 0.01566894437631644, 0.007581252971529349, 0.0025364134920022197, 0.0036079906963472027, 0.009627283672608407, 0.009033297984928362, 0.015244940171669677, 0.0064535233545534535, 0.011044144572770675, 0.001824828110884511, -0.011468793208557301, -0.006351809186724478, 0.0010626722844824291], 'Total_Goals_For_Home_Team': [0.03266904498766385, 0.03095658362276335, 0.03209196132275584, 0.030131341653929004, 0.0297835424330698, 0.03175813794982774, 0.03273028248814014, 0.030151425049908784, 0.028518489980707842, 0.029845561725677542, 0.027533184804351008, 0.02898966544228309, 0.02881585423739085, 0.02811180013627428, 0.030618589301141384, 0.029087033097827307, 0.028278330408680327, 0.02819355239809956, 0.02652444343832414, 0.02846561812508206, 0.02833015790645448, 0.028089711009643435, 0.02665147842426102, 0.025817131023143574, 0.024392021260002204, 0.026105097442278784, 0.023432768334991916, 0.024186887324425522, 0.022224103774857614, 0.025343184036238035], 'Total_Goals_Against_Home_Team': [-0.022954933704662145, -0.021763689397203, -0.021298187885265114, -0.02088680190078755, -0.021773168005548113, -0.022559696754428198, -0.022234902762171303, -0.020802695785347734, -0.019792478744793948, -0.021401070490133747, -0.020268083860619054, -0.021272432328271435, -0.022161746159380245, -0.021650458226550522, -0.02196863048783722, -0.021709592021037807, -0.0220087953356028, -0.022941829438742335, -0.020615481635216266, -0.020526209544581068, -0.019603065171200532, -0.019775069728973672, -0.019360196341398937, -0.019887308313331468, -0.017794297559486992, -0.020247646390757896, -0.016083105511787245, -0.017208991841981457, -0.01603001746571609, -0.019582937524509056], 'Total_Streak_Home': [0.03266082453865847, 0.032405075689628436, 0.03263210014593194, 0.02558773272093919, 0.027778148808160284, 0.028095590052269335, 0.02569281748307593, 0.02337635976921925, 0.02026589326337023, 0.017627595653567963, 0.018040935917924432, 0.013231154325464569, 0.016298478399276905, 0.01249919979269764, 0.012845261672029124, 0.013182244325429884, 0.009174158824350243, 0.011261212995262927, 0.00869600697265378, 0.002978261853869817, 0.012400731873999458, 0.010898578228869903, 0.008625419593326163, 0.0059033012368439695, 0.002297022543610877, 0.007804076669173339, 0.007567609820272708, 0.004838829485803042, 0.002969495337466403, 0.008613799079468276], 'Wins_When_Home': [-0.013395040044085926, -0.018165164651674264, -0.014404874073612073, -0.017359431512585524, -0.01695980470134359, -0.0175174764871969, 
-0.0047744764466319195, -0.015453314162204797, -0.021868561969410526, -0.013866883972018386, -0.0231433063784773, -0.013953247454636115, -0.02737213563261883, -0.024808688120040574, -0.010710175756520637, -0.022703476019096832, -0.030656909701788575, -0.02943596096483654, -0.0313410378563632, -0.017182847929698052, -0.021852176667885803, -0.019669176576106916, -0.02007354354282045, -0.0290781321839042, -0.028365361620974448, -0.024009992292104352, -0.017417466110933023, 0.0013003278147333884, -0.00035187616013689907, -0.001096034749009366], 'Draw_When_Home': [-0.012520533582202332, -0.01116397060439214, -0.011185421174795915, -0.010373087571739588, -0.005522913136473184, -0.010769652780760306, -0.005685423603531053, -0.010190140671291634, -0.014975038605800304, -0.006939945347200336, -0.015637734983040847, -0.004888091520625964, -0.019076327277687544, -0.013799665514864641, -0.008798728839364186, -0.01608098234645557, -0.02339577927076931, -0.020014906165706083, -0.02466919632640821, -0.009236651129339064, -0.02128216826057042, -0.015837425422751114, -0.01592146085496617, -0.023587171871699696, -0.018147641316090687, -0.016414224485809262, -0.009210777152682629, 0.008707324412118982, 0.01257791165980316, -0.0007550642224317083], 'Lose_When_Home': [-0.023694269795333445, -0.014448200549847946, -0.016320827718163882, -0.01262975015720002, -0.01232459013868019, -0.013582071611312436, -0.013248562703847631, -0.008682532653661288, -0.012388887538914373, -0.0070980806547531325, -0.010856491704332573, -0.0076782983731467505, -0.02105686036245277, -0.013420867460463814, -0.011993164281071075, -0.014596440753091654, -0.0181007262768599, -0.023418583008913214, -0.0158760933275468, -0.010685716601495662, -0.014771105736830723, -0.015712571242947372, -0.013635457349035962, -0.02321697559152832, -0.01625626389265861, -0.019017085016134482, -0.009216949569017899, 0.008072489184669457, 0.012139703209276634, -0.0022489252502685916], 'Goals_For_When_Home': [-0.006279576023815829, -0.003880117252553831, -0.005630907808926382, -0.004240103524453962, -0.003205376062287361, -0.005203237910273565, -0.00911639640608543, -0.004460516064828023, -0.0022646437357991696, -0.0050685537688069954, -0.0017884816488182684, -0.004210581571435087, -0.004768669539167143, -0.004167854466295647, -0.007424825733758492, -0.005828422055797944, -0.004077582054936111, -0.004163210248706498, -0.0028156907445071352, -0.005035082716375525, -0.005925804763333739, -0.006643686052471548, -0.005966926311624651, -0.004329868668033271, -0.0043696302241963995, -0.005320898785276472, -0.004694117216614149, -0.005659922826663808, -0.003431825510939142, -0.008816570517724078], 'Goals_Against_When_Home': [0.0011465682185395085, -0.0007698920719517858, -0.0008807181965595624, -0.0010681059977701987, 0.0004953858756791341, 0.00023489692998001792, 0.0009847871413966587, -0.0010071190434227271, -0.0025977540966732066, 0.001418982603261925, -0.0008585938510950175, 0.00028540565968407307, 0.0022767673070693876, 0.001676568670750632, 0.0022895382794804035, 0.002322988368326956, 0.003413086406930892, 0.0047275888677200594, 0.0016846860566199053, 0.002649392028105652, 0.0017894117586983384, 0.0034605546981266435, 0.0030005441222377294, 0.0018911581666235806, 0.002081925706767193, 0.0042940224981606835, 0.002539411969444843, 0.0017910218510073263, 0.002270175397894287, 0.007626247139894126], 'Position_Away': [0.014472442786067633, 0.013660250640299045, 0.012874576474485191, 0.012671564963724053, 0.01075076058176061, 0.009955590266788552, 
0.009965153798443031, 0.010148645847955435, 0.010601405286783484, 0.008946109686892313, 0.00828482740054012, 0.007156592266803235, 0.0068346541096073275, 0.00497553445588397, 0.005316090096907744, 0.002540815111860468, 0.0033023188201214276, 0.0010615003797034, -0.0013295273132093491, -0.0008787870251517846, 0.0001262743035029349, 0.0002643350935805565, -0.0033703146145749037, 0.0009915653744231928, -0.0012861711418042606, 1.9192342796778412e-05, 0.00042349374968263226, 0.0006233331083739773, 0.0051126616513739375, 0.00034854640392835656], 'Points_Away': [0.01034090329611756, 0.01088152851772985, 0.010842226626147066, 0.011601623195238473, 0.010078844414243731, 0.010316739178469557, 0.010182564538194815, 0.010630884072717374, 0.009903071945181476, 0.01025598571073514, 0.008797803873085664, 0.009969582548073508, 0.00902392767578735, 0.008379571634167865, 0.009913079488107499, 0.008196495296980785, 0.0070743653804514715, 0.0036096134307672116, 0.005673984152312655, 0.006533093674509808, 0.004301511129936805, 0.00436630322754848, 0.0025439011949480264, 0.0029035508725187022, 0.0021748878029471397, 0.004533402506403608, 0.0071798444117281215, 0.010065514654611177, 0.009100320280974157, 0.006286339449019071], 'Total_Wins_Away': [0.0035659186118066123, 0.002943598557422255, 0.004171869613947337, 0.0034409658953459673, 0.0022887024845152496, 0.0021305461977566844, 0.002262342284247913, 0.0027958571288142077, 0.0016945801314240784, 0.0008230701865216791, 0.000491027430699959, -0.0013512749450319496, -0.00016581536519278295, -0.0002150345839608283, -0.0006171545595214318, -0.0017650768673160516, -0.0005442169723390258, -0.0025654267419642804, -0.0016581413723818054, -0.003166716871862028, -0.0027983454757988958, -0.002528613374782355, -0.0045215641388737056, -0.0013418356705632078, -0.003829213487262096, -0.0024309467329235914, -0.0017241328223449241, -0.003107536547672126, -0.0017311133012369525, -0.0027522711702046864], 'Total_Draw_Away': [-0.0003568525393014596, 0.0020507328454637433, -0.001673382215696448, 0.0012787255092014341, 0.0032127369606979143, 0.003925100585198942, 0.0033955376854520897, 0.0022433126862742812, 0.0048193315509087835, 0.007786775151170095, 0.0073247215809862255, 0.014023407383169182, 0.009521373771365749, 0.00902467538605033, 0.011764543166672069, 0.013491725898928893, 0.008707016297468836, 0.011305893656660224, 0.01064840826945815, 0.016033244290095953, 0.012696547557333586, 0.011952143351895546, 0.01610859361156914, 0.006929057884208413, 0.01366252826473342, 0.011826242705174255, 0.01235224287876298, 0.019388124297628033, 0.01429366018468509, 0.014543152959633194], 'Total_Lose_Away': [-0.01213202120578691, -0.009456329459477527, -0.0068334080538388475, -0.009168838671911393, -0.0024639096209857765, -0.0020796780123206234, -0.003629439407965879, -0.007193248778814833, 0.0007411238538908684, 0.002304839868733438, -0.0011143660481424487, 0.011491193817189072, 0.005234977042071124, 0.0034967653291920636, 0.0065004920973396885, 0.01450412710017472, 0.007854687983425566, 0.015438294389475277, 0.013970804752361558, 0.022768352384175714, 0.017743885072036968, 0.014550959749467981, 0.019686850251240284, 0.010035809478958884, 0.014630891026242253, 0.014512566827478455, 0.01296150685913182, 0.0304044630889872, 0.019257202888831048, 0.027538734451326517], 'Total_Goals_For_Away_Team': [-0.030007838995620022, -0.02868493589958792, -0.030625588611529597, -0.03067069558966199, -0.029241866525249096, -0.02980459629043824, -0.029286697165099525, -0.03137442820909313, 
-0.029221539520328105, -0.028014193750191766, -0.02784108283765484, -0.02763306691970033, -0.02768443899393213, -0.02725320792739219, -0.02872423058221323, -0.026820971250612082, -0.02649144209494647, -0.02305769409630173, -0.02563311078179593, -0.023479834233961352, -0.022195675660598782, -0.0234332867443644, -0.022867728134553542, -0.023232085572481413, -0.022212686337159716, -0.023980120232199243, -0.026964366927086772, -0.025071343284204534, -0.023386322557328008, -0.021642821363289697], 'Total_Goals_Against_Away_Team': [0.027537041561534845, 0.026251613649499354, 0.026744629648574852, 0.027120027688254528, 0.026881142424330446, 0.02605183203607704, 0.026416170728907458, 0.028329212293210046, 0.024265668992952287, 0.025637404693568972, 0.026784019277182245, 0.023874693271079394, 0.0245099877731369, 0.026531614519329293, 0.026022531516307512, 0.022709226494398122, 0.02504417433685226, 0.02150481644541284, 0.022935269639972635, 0.02091283787548114, 0.0214667384653983, 0.020467719543008436, 0.021197893540172402, 0.02180537213993232, 0.0207564826498112, 0.0214009401134306, 0.02359060316274703, 0.021862493076544778, 0.022047862093580957, 0.01813381805317738], 'Total_Streak_Away': [-0.018081173892632786, -0.02355667549227867, -0.019202806941440546, -0.017425870924922697, -0.01582321520586811, -0.014789696112711555, -0.013950814320204247, -0.011339567916034715, -0.013083069293227055, -0.010821169846497792, -0.009950206453160096, -0.006952772804281222, -0.008692602581435402, -0.00863202120997327, -0.0011626776313331752, -0.001594924824507236, -0.002659037930645003, 0.0026031605854596233, -0.0025344372795407387, -0.0012868407623180238, 0.0027154709830860984, 0.0020630343814753336, 0.004385962489693351, -0.0003426891885125204, 0.0029241052616738217, 0.002563992088491251, 0.005155714766306646, 0.008301988901655864, -0.0023909261475743484, -0.005088280641203536], 'Wins_When_Away': [-0.009699817123862153, -0.01451733035665089, -0.016520954848669937, -0.019210890015330287, -0.018009001513896075, -0.019395363095226544, -0.018369634790226127, -0.023619538892780723, -0.01830473768104518, -0.020422371871550504, -0.013062432925945907, -0.019846712540745486, -0.009738388409123023, -0.013955712828893525, -0.020032329746325805, -0.014598178668761303, -0.009210318270322777, -0.0007112714131602192, -0.007124356653875082, -0.008774609359275905, 0.001455326710032619, -0.004498178187008005, 0.0008271348151638885, 0.000939613137335355, 0.006164798810137813, -0.005396453121974454, -0.01411811506093507, -0.029816074913761023, -0.030788041783858064, -0.019035672636462594], 'Draw_When_Away': [-0.014119111231160225, -0.018189731701984638, -0.012901783768552581, -0.01892130048369986, -0.020638981285141467, -0.01757235735450197, -0.019710284555402642, -0.018653722884349123, -0.014968270596000446, -0.022066940542922468, -0.015267971427946773, -0.027150482415370245, -0.012994844180093583, -0.016023494300486996, -0.021471691625931107, -0.015996087875750707, -0.00740357797550347, -0.011078042029211435, -0.00793259773222573, -0.020507551215336154, -0.008002153675994305, -0.013026339826948816, -0.015714944724177227, -0.003909890395830333, -0.011870746361696658, -0.012498534407826674, -0.01780947954901492, -0.03539418650126495, -0.03673504256736922, -0.02758426164816224], 'Lose_When_Away': [-0.0012656347525469777, 9.367036719257324e-05, -0.004480524833617084, -0.0038156822742331296, -0.007574444592901508, -0.005548706510804995, -0.012158592903008974, -0.003184936921263303, -0.005988891712816368, -0.009409434277354942, 
-0.003006130925751845, -0.015331675340724259, -0.000999507152617288, -0.0024315504794063247, -0.0075990265409249585, -0.009220477705782567, 0.0007861657308651385, -0.006720724840647038, -0.005168530138682139, -0.016971070501929243, -0.008314968549877497, -0.012778432551572733, -0.01800312954515299, -0.0036096836423344892, -0.015045104307083362, -0.009912943716434886, -0.016485047462634565, -0.03857817375743173, -0.03235848582566407, -0.038901140265310566], 'Goals_For_When_Away': [0.006554194882837062, 0.006239740458001574, 0.007926374361444562, 0.007846004896010833, 0.006282743874105357, 0.007861713074378909, 0.006414607547775362, 0.009304311717055789, 0.008125173911664759, 0.005194224566273531, 0.006210151364776601, 0.006705860218095841, 0.00616995382941011, 0.005959062035804015, 0.007614711444327347, 0.006492110390036155, 0.0054803192995866775, 0.0026033823257200104, 0.005558744583442824, 0.003328921151070241, 0.00228670551062084, 0.002682877944779902, 0.0025966818701829903, 0.0049186407703404565, 0.0023876046817198414, 0.0047806261242470905, 0.0061302388347287545, 0.0062651706474007645, 0.0035932970549173828, 0.0007082655359426186], 'Goals_Against_When_Away': [-0.00856457544835557, -0.00819961376008383, -0.008212834901065768, -0.00791567059695912, -0.008942508826518841, -0.008333060375547773, -0.006016186283897613, -0.009855973722396053, -0.006805244799485829, -0.007233104518119229, -0.00962819125125658, -0.006461973765119942, -0.006869538815430781, -0.008598810052820934, -0.008108132565083713, -0.005740943704679943, -0.009290077116155766, -0.006314490068173132, -0.007086728186030946, -0.005921426789299373, -0.005778515095633567, -0.004376544910239476, -0.005060769989767792, -0.006667494629677032, -0.004752820157991687, -0.006874464942279897, -0.007622901603497951, -0.005699933916220814, -0.00695123798328729, -0.0011712744935623099], 'Streak_When_Home': [0.03724001602432313, 0.03892982353442895, 0.03917177715406599, 0.03867646209454115, 0.04228377479879645, 0.03718867288495252, 0.03459576719657432, 0.034180378286252804, 0.034000947011170235, 0.03535148567725727, 0.034711842694779, 0.0325288900422121, 0.031708916002484586, 0.03230918552800455, 0.027066187025047806, 0.028258348133852966, 0.026533011326834544, 0.026858853078726957, 0.02551740501282244, 0.029128678272910314, 0.018643698914886184, 0.022627954329829617, 0.020274055827845104, 0.01825759104633568, 0.024210402165081937, 0.018804624751608375, 0.021417050984101356, 0.018783789640660815, 0.028730716678265152, 0.0215433571938397], 'Streak_When_Away': [-0.056103764675746744, -0.05138669337318242, -0.05191051014110083, -0.05344843922553987, -0.04988007910151624, -0.0493520572325493, -0.049598137288052346, -0.047108737255544986, -0.043188216963847996, -0.04225924912368375, -0.04042554790946164, -0.038526114428741806, -0.03403895144533545, -0.033516304594389285, -0.03433670837038065, -0.033197240693049676, -0.02940691304805669, -0.031224766504641874, -0.03170578478918477, -0.030932749999670953, -0.03159321730102361, -0.02681178088669557, -0.028027029727381348, -0.024597629159638482, -0.030805924978265967, -0.02360070081842645, -0.026412593141137024, -0.021750121909224873, -0.01737882075986661, -0.013259868580307147]}\n[0.40417084269616227, 0.4284423891612979, 0.41040657719849327, 0.4205396362678132, 0.3988053845764059, 0.450827312252369, 0.44624544330606974, 0.4397489403454782, 0.4352155005388911, 0.4330775821244758, 0.408395798206994, 0.4369311511521363, 0.4051758902125959, 0.4329711199943496, 0.4135624222416546, 0.42622355026156256, 
0.422761728417594, 0.45291522355894215, 0.5016916257928297, 0.47160989228510175, 0.47222035246846866, 0.40831354426839594, 0.44172108969069307, 0.41680355353969906, 0.39914804545659105, 0.4509863258431381, 0.434923677204329, 0.4800627630099234, 0.33229502502508235, 0.4916097436295826]\n" ], [ "rounds = [x+1 for x in range(30)]", "_____no_output_____" ], [ "rounds", "_____no_output_____" ], [ "feat_df = pd.DataFrame(feature_weights_dict)", "_____no_output_____" ], [ "feat_df", "_____no_output_____" ], [ "feat_df[\"model_score\"] = model_score\nfeat_df[\"m_value\"] = m_values", "_____no_output_____" ], [ "feat_df", "_____no_output_____" ], [ "feat_df[feat_df.columns[0]]", "_____no_output_____" ], [ "%matplotlib inline\nfig, axs = plt.subplots(len(feat_df.columns),figsize=(10, 150))\nfig.suptitle('Varying Scores when squeezing the data towards Season end')\nfor i, ax in enumerate(axs):\n ax.plot(rounds, feat_df[feat_df.columns[i]])\n ax.set_title(feat_df.columns[i])\n \nplt.show()\n\n", "_____no_output_____" ], [ "features_plus = ['Round','Position_Home', 'Points_Home', 'Total_Wins_Home',\n 'Total_Draw_Home', 'Total_Lose_Home', 'Total_Goals_For_Home_Team',\n 'Total_Goals_Against_Home_Team', 'Total_Streak_Home', 'Wins_When_Home',\n 'Draw_When_Home', 'Lose_When_Home', 'Goals_For_When_Home',\n 'Goals_Against_When_Home', 'Position_Away', 'Points_Away',\n 'Total_Wins_Away', 'Total_Draw_Away', 'Total_Lose_Away',\n 'Total_Goals_For_Away_Team', 'Total_Goals_Against_Away_Team',\n 'Total_Streak_Away', 'Wins_When_Away', 'Draw_When_Away',\n 'Lose_When_Away', 'Goals_For_When_Away', 'Goals_Against_When_Away',\n 'Streak_When_Home', 'Streak_When_Away',]\nX = np.array(combined_csv[features_plus])\ny = np.array(combined_csv['difference'])\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=4)\nmodel = LinearRegression()\nmodel.fit(X_train, y_train)\nb = model.intercept_\nm = model.coef_\nprint (m, b, model.score(X_train, y_train))", "_____no_output_____" ], [ "from sklearn.linear_model import SGDRegressor\n\nfeatures = ['Position_Home', 'Points_Home', 'Total_Wins_Home',\n 'Total_Draw_Home', 'Total_Lose_Home', 'Total_Goals_For_Home_Team',\n 'Total_Goals_Against_Home_Team', 'Total_Streak_Home', 'Wins_When_Home',\n 'Draw_When_Home', 'Lose_When_Home', 'Goals_For_When_Home',\n 'Goals_Against_When_Home', 'Position_Away', 'Points_Away',\n 'Total_Wins_Away', 'Total_Draw_Away', 'Total_Lose_Away',\n 'Total_Goals_For_Away_Team', 'Total_Goals_Against_Away_Team',\n 'Total_Streak_Away', 'Wins_When_Away', 'Draw_When_Away',\n 'Lose_When_Away', 'Goals_For_When_Away', 'Goals_Against_When_Away',\n 'Streak_When_Home', 'Streak_When_Away',]\nX = np.array(combined_csv[features])\ny = np.array(combined_csv['difference'])\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=4)\nmodel = SGDRegressor(max_iter=1000, tol=1e-3)\nmodel.fit(X_train, y_train)\nb = model.intercept_\nm = model.coef_\nprint (m, b, model.score(X_train, y_train))", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
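The regression cells in the record above fit `LinearRegression` and `SGDRegressor` directly on raw match statistics; plain SGD is scale-sensitive, so a minimal sketch of the same fit with standardized features follows. This is a hypothetical variant, not the notebook's own code: it assumes the notebook's `combined_csv` frame and `features` list are in scope.

```python
# Sketch: the SGDRegressor cell above, rerun with feature scaling.
# Assumes `combined_csv` and `features` from the notebook are defined.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDRegressor

X = np.array(combined_csv[features])
y = np.array(combined_csv['difference'])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=4)

# Scaling happens inside the pipeline, so the test fold is transformed
# with statistics learned on the training fold only.
model = make_pipeline(StandardScaler(), SGDRegressor(max_iter=1000, tol=1e-3))
model.fit(X_train, y_train)
print(model.score(X_train, y_train), model.score(X_test, y_test))
```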
ece01d2946ea6113e02d4a4e29d0f24d1e012714
815,887
ipynb
Jupyter Notebook
MissingMassCompare/MissingMassCut.ipynb
tylern4/analysisNotebooks
750edddb655428a0f7d034bfab93c39e40e88fe6
[ "MIT" ]
null
null
null
MissingMassCompare/MissingMassCut.ipynb
tylern4/analysisNotebooks
750edddb655428a0f7d034bfab93c39e40e88fe6
[ "MIT" ]
null
null
null
MissingMassCompare/MissingMassCut.ipynb
tylern4/analysisNotebooks
750edddb655428a0f7d034bfab93c39e40e88fe6
[ "MIT" ]
null
null
null
1,508.109057
73,422
0.955988
[ [ [ "%matplotlib inline\nimport lux\n#import pandas as pd\nimport modin.pandas as pd\nfrom pyarrow import csv\nimport time\nfrom lmfit import Model\nfrom lmfit.models import *\nfrom scipy.special import erfc\nfrom scipy.interpolate import interp1d\nimport boost_histogram as bh\nimport matplotlib.pyplot as plt\nfrom scipy import stats\n\n\nfrom nicks_plot_utils import *", "_____no_output_____" ], [ "def read_csv(file_name: str = \"\", data: bool = False):\n    names = [\n        \"electron_sector\",\n        \"w\",\n        \"q2\",\n        \"theta\",\n        \"phi\",\n        \"mm2\",\n        \"cut_fid\",\n        \"helicty\",\n        \"type\"\n    ]\n    dtype = {\n        \"electron_sector\": \"int8\",\n        \"helicty\": \"int8\",\n        \"w\": \"float32\",\n        \"q2\": \"float32\",\n        \"theta\": \"float32\",\n        \"phi\": \"float32\",\n        \"mm2\": \"float32\",\n        \"cut_fid\": \"bool\",\n    }\n\n\n    # Load file into pyTable before changing to pandas\n    pyTable = csv.read_csv(\n        file_name,\n        read_options=csv.ReadOptions(\n            use_threads=True, column_names=names),\n        convert_options=csv.ConvertOptions(column_types=dtype),\n    )\n    df = pyTable.to_pandas(strings_to_categorical=True)\n\n    if data:\n        return df.dropna()\n\n    mc_rec = df[df.type == \"mc_rec\"].dropna()\n    thrown = df[df.type == \"thrown\"].dropna()\n    del df\n\n    return (\n        mc_rec,\n        thrown,\n    )", "_____no_output_____" ], [ "mc_data_file_path = \"/Users/tylern/Data/e1d/data/mc_rec_e1d.csv\"\nrec_data_file_path = \"/Users/tylern/Data/e1d/data/data_e1d.csv\"\n\nrec = read_csv(rec_data_file_path, True)", "_____no_output_____" ], [ "rec.head()", "_____no_output_____" ], [ "NSIGMA = 3\ndata = {}\nfor sec in range(1,7):\n    fig, ax = plt.subplots(figsize=(12,9))\n    mm2 = Hist1D(rec[rec.electron_sector == sec].mm2, bins=500, xrange=[0.6, 1.4])\n    mm2.errorbar(ax, alpha=0.5, density=True, label='Data')\n\n    x,y = mm2.hist_to_xy()\n\n    peak = PseudoVoigtModel(prefix=\"peak_\")\n    # pars = peak.guess(y[(x > 0.8) & (x < 1.0)], x=x[(x > 0.8) & (x < 1.0)])\n    pars = peak.guess(y, x=x)\n    background = PolynomialModel(4,prefix=\"back_\")\n    pars.update(background.make_params())\n    pars['peak_sigma'].set(value=0.028, min=0.0001, max=0.05)\n    pars['back_c0'].set(value=-0.11)\n    pars['back_c1'].set(value=0.18)\n    pars['back_c2'].set(value=-0.039)\n    pars['back_c3'].set(value=-0.000001, max=100, min=-100)\n    pars['back_c4'].set(value=-0.000001, max=100, min=-100)\n    # pars['back_c5'].set(value=-0.000001, max=100, min=-100)\n    # pars['back_c6'].set(value=-0.000001, max=100, min=-100)\n    # pars['back_c7'].set(value=-0.000001, max=100, min=-100)\n    \n    model = peak + background\n    out = model.fit(y[(x > 0.7) & (x < 1.2)], pars, x=x[(x > 0.7) & (x < 1.2)], nan_policy='omit')\n    \n    xs= np.linspace(0.7,1.2,1000)\n    plt.plot(xs, out.eval(params=out.params, x=xs)/np.max(out.eval(params=out.params, x=xs)))\n#     dely = out.eval_uncertainty(x=xs, sigma=1)\n#     dely = dely/np.max(dely)\n#     yss = out.eval(params=out.params, x=xs)\n#     yss = yss/np.max(yss)\n#     plt.fill_between(xs, yss-dely, yss+dely, color='#888888')\n    \n#     peak = PseudoVoigtModel(prefix=\"peak_\")\n#     pars = peak.guess(y[(x > 0.8) & (x < 1.0)], x=x[(x > 0.8) & (x < 1.0)])\n#     background = GaussianModel(prefix=\"back_\")\n#     pars.update(background.make_params())\n#     model = peak + background\n#     out = model.fit(y, pars, x=x)\n    comps = out.eval_components(x=xs)    \n\n    plt.plot(xs, comps['peak_']/np.max(comps['peak_']), alpha=0.4)\n    plt.plot(xs, comps['back_']/np.max(comps['peak_']), alpha=0.4)\n\n    plt.axvline(out.params['peak_center']+NSIGMA*out.params['peak_fwhm'])\n    plt.axvline(out.params['peak_center']-NSIGMA*out.params['peak_fwhm']) \n#     data[sec] = (out.params['peak_center']-NSIGMA*out.params['peak_fwhm'], \n#                  out.params['peak_center']+NSIGMA*out.params['peak_fwhm'])\n    \n\n    plt.show()\n\n    \n", "_____no_output_____" ], [ "# import numpy as np\n# from iminuit import Minuit\n# from probfit import UnbinnedLH, gaussian\n\n# data = np.random.randn(10000)\n\n# @np.vectorize\n# def gaus(x, mean, sigma):\n#     return gaussian(x, mean, sigma)\n\n\n# unbinned_likelihood = UnbinnedLH(gaussian, rec[(rec.mm2 > 0.8) & (rec.mm2 < 1.1)].mm2.to_numpy())\n# minuit = Minuit(unbinned_likelihood, mean=0.1, sigma=1.1)\n# minuit.migrad()\n\n# minuit.minos()\n\n# xs = np.linspace(0.8, 1.1, 100)\n# fig, ax = plt.subplots(figsize=(12,9))\n# plt.plot(xs, gaus(xs, minuit.params['mean'].value, minuit.params['sigma'].value)/np.max(gaus(xs, minuit.params['mean'].value, minuit.params['sigma'].value)))\n# mm2 = Hist1D(rec.mm2, bins=500, xrange=[0.6, 1.4])\n# mm2.errorbar(ax, alpha=0.5, density=True, label='Data')\n\n# plt.show()", "_____no_output_____" ], [ "NSIGMA = 3\ndata = {}\nfor sec in range(1,7):\n    fig, ax = plt.subplots(figsize=(12,9))\n    mm2 = Hist1D(rec[rec.electron_sector == sec].mm2, bins=500, xrange=[0.6, 1.4])\n    mm2.errorbar(ax, alpha=0.5, density=True, label='Data')\n\n    x,y = mm2.hist_to_xy()\n\n    peak = PseudoVoigtModel(prefix=\"peak_\")\n    # pars = peak.guess(y[(x > 0.8) & (x < 1.0)], x=x[(x > 0.8) & (x < 1.0)])\n    pars = peak.guess(y, x=x)\n    background = GaussianModel(prefix=\"back_\")\n    pars.update(background.make_params())\n    \n    model = peak * background\n    out = model.fit(y[(x > 0.7) & (x < 1.2)], pars, x=x[(x > 0.7) & (x < 1.2)], nan_policy='omit')\n    \n    xs= np.linspace(0.6,1.4,1000)\n    plt.plot(xs, out.eval(params=out.params, x=xs)/np.max(out.eval(params=out.params, x=xs)))\n#     dely = out.eval_uncertainty(x=xs, sigma=1)\n#     dely = dely/np.max(dely)\n#     yss = out.eval(params=out.params, x=xs)\n#     yss = yss/np.max(yss)\n#     plt.fill_between(xs, yss-dely, yss+dely, color='#888888')\n    \n#     peak = PseudoVoigtModel(prefix=\"peak_\")\n#     pars = peak.guess(y[(x > 0.8) & (x < 1.0)], x=x[(x > 0.8) & (x < 1.0)])\n#     background = GaussianModel(prefix=\"back_\")\n#     pars.update(background.make_params())\n#     model = peak + background\n#     out = model.fit(y, pars, x=x)\n    #comps = out.eval_components(x=xs)    \n\n    #plt.plot(xs, comps['peak_']/np.max(comps['peak_']), alpha=0.4)\n    #plt.plot(xs, comps['back_']/np.max(comps['peak_']), alpha=0.4)\n\n    #plt.axvline(out.params['peak_center']+NSIGMA*out.params['peak_fwhm'])\n    #plt.axvline(out.params['peak_center']-NSIGMA*out.params['peak_fwhm']) \n#     data[sec] = (out.params['peak_center']-NSIGMA*out.params['peak_fwhm'], \n#                  out.params['peak_center']+NSIGMA*out.params['peak_fwhm'])\n    \n\n    plt.show()\n\n    ", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
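The fitting loops in the record above draw the cut lines at `peak_center ± NSIGMA * peak_fwhm`, treating the fitted FWHM as if it were a standard deviation. For a Gaussian, sigma = FWHM / (2 * sqrt(2 * ln 2)) ≈ FWHM / 2.3548, so a sketch of the conversion is shown below. It is illustrative only: it assumes the lmfit result `out` from the cells above, and the conversion is exact only in the Gaussian limit of the pseudo-Voigt.

```python
# Sketch: convert the fitted FWHM to an equivalent Gaussian sigma
# before applying an N-sigma missing-mass cut.
import numpy as np

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # ~0.4247

center = out.params['peak_center'].value
sigma = out.params['peak_fwhm'].value * FWHM_TO_SIGMA
lo, hi = center - 3 * sigma, center + 3 * sigma
print(f"missing-mass cut: [{lo:.4f}, {hi:.4f}]")
```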
ece0296ee80e564307adc1592ab657114bebb1db
46,557
ipynb
Jupyter Notebook
ML - Data Science/logi_reg/Employ_leaving_logi_regre.ipynb
ManojBoganadham/Technocrats-HacktoberFest
deca605ea98226fd4a9c04115701f38bf7d4b2b0
[ "MIT" ]
11
2021-10-01T15:39:40.000Z
2021-10-12T16:33:03.000Z
ML - Data Science/logi_reg/Employ_leaving_logi_regre.ipynb
ManojBoganadham/Technocrats-HacktoberFest
deca605ea98226fd4a9c04115701f38bf7d4b2b0
[ "MIT" ]
62
2021-10-01T14:40:32.000Z
2021-10-31T14:47:21.000Z
ML - Data Science/logi_reg/Employ_leaving_logi_regre.ipynb
ManojBoganadham/Technocrats-HacktoberFest
deca605ea98226fd4a9c04115701f38bf7d4b2b0
[ "MIT" ]
71
2021-10-01T14:42:03.000Z
2021-10-21T15:51:24.000Z
49.214588
13,652
0.637949
[ [ [ "<h2 align=\"center\">Employee leaving prediction</h2>\n<h3 align=\"center\">Using Logistic Regression</h3>", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom matplotlib import pyplot as plt\n%matplotlib inline", "_____no_output_____" ] ], [ [ "**reading the dataset**", "_____no_output_____" ] ], [ [ "df = pd.read_csv(\"HR_comma_sep.csv\")\ndf.head()", "_____no_output_____" ], [ "left = df[df.left==1]\nleft.shape", "_____no_output_____" ], [ "retained = df[df.left==0]\nretained.shape", "_____no_output_____" ] ], [ [ "**To check the dependency of 'left' manually**", "_____no_output_____" ] ], [ [ "df.groupby('left').mean()", "_____no_output_____" ] ], [ [ "**Graph between 'salary' and 'left'**", "_____no_output_____" ] ], [ [ "pd.crosstab(df.salary,df.left).plot(kind='bar')", "_____no_output_____" ] ], [ [ "**Graph between 'Department' and 'left'**", "_____no_output_____" ] ], [ [ "pd.crosstab(df.Department,df.left).plot(kind='bar')", "_____no_output_____" ] ], [ [ "**Only selected columns will be in our dataframe**", "_____no_output_____" ] ], [ [ "subdf = df[['satisfaction_level','average_montly_hours','promotion_last_5years','salary']]\nsubdf.head()", "_____no_output_____" ] ], [ [ "**Making dummy columns for 'salary'**", "_____no_output_____" ] ], [ [ "salary_dummies = pd.get_dummies(subdf.salary, prefix=\"salary\")", "_____no_output_____" ], [ "df_with_dummies = pd.concat([subdf,salary_dummies],axis='columns')", "_____no_output_____" ], [ "df_with_dummies.head()", "_____no_output_____" ], [ "df_with_dummies.drop('salary',axis='columns',inplace=True)\ndf_with_dummies.head()", "_____no_output_____" ], [ "X = df_with_dummies\n", "_____no_output_____" ], [ "y = df.left", "_____no_output_____" ] ], [ [ "**Splitting training and testing data**", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X,y,train_size=0.3)", "_____no_output_____" ] ], [ [ "**Importing 'LogisticRegression' from sklearn.linear_model**", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LogisticRegression\nmodel = LogisticRegression()", "_____no_output_____" ], [ "len(X_test)", "_____no_output_____" ], [ "model.fit(X_train, y_train)", "C:\\Users\\lakhan\\anaconda3\\lib\\site-packages\\sklearn\\linear_model\\_logistic.py:763: ConvergenceWarning: lbfgs failed to converge (status=1):\nSTOP: TOTAL NO. of ITERATIONS REACHED LIMIT.\n\nIncrease the number of iterations (max_iter) or scale the data as shown in:\n https://scikit-learn.org/stable/modules/preprocessing.html\nPlease also refer to the documentation for alternative solver options:\n https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n n_iter_i = _check_optimize_result(\n" ], [ "y_predicted = model.predict(X_test)", "_____no_output_____" ] ], [ [ "**Accuracy of the model**", "_____no_output_____" ] ], [ [ "model.score(X_test,y_test)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
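The `lbfgs failed to converge` warning in the record above is the usual symptom of unscaled inputs (for example, `average_montly_hours` is two orders of magnitude larger than `satisfaction_level`). A minimal sketch of one common remedy, scaling plus a higher iteration cap, assuming the notebook's `X_train`/`X_test`/`y_train`/`y_test` split:

```python
# Sketch: scale the features and raise max_iter so lbfgs converges.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```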
ece03120706d4b79d19673449e7ccbe77bbb7d4c
6,188
ipynb
Jupyter Notebook
notebooks/10.20-OPTIONAL-astro-libraries.ipynb
Martazzz/jupytertutorial
37039964b0826c3ba31b3cf4d6732a5d53cf950a
[ "BSD-3-Clause" ]
2
2021-02-13T05:52:05.000Z
2022-02-08T09:52:35.000Z
notebooks/10.20-OPTIONAL-astro-libraries.ipynb
SteveWang1992/tutorial
37039964b0826c3ba31b3cf4d6732a5d53cf950a
[ "BSD-3-Clause" ]
null
null
null
notebooks/10.20-OPTIONAL-astro-libraries.ipynb
SteveWang1992/tutorial
37039964b0826c3ba31b3cf4d6732a5d53cf950a
[ "BSD-3-Clause" ]
null
null
null
26
303
0.58969
[ [ [ "# *OPTIONAL* Astronomical widget libraries\n\nThe libraries demonstrated here are not as mature as the ones we've seen so far. Keep an eye on them for future developments!", "_____no_output_____", "## PyWWT - widget interface to the World Wide Telescope \n\n### https://github.com/WorldWideTelescope/pywwt\n\nWorld Wide Telescope (WWT) was developed by Microsoft for displaying images of the sky in a variety of projects and several layers; it is like `leaflet` for the sky. Now maintained by the American Astronomical Society (AAS), it has a widget interface.\n\nA javascript API has been available for WWT for a while. The PyWWT package includes javascript to call that API with front ends for both ipywidgets and qt.\n\n### Installation\n\n`pywwt` is on PyPI and on the `wwt` conda channel.", "_____no_output_____" ] ], [ [ "from pywwt.jupyter import WWTJupyterWidget\n\nwwt = WWTJupyterWidget()\nwwt", "_____no_output_____" ] ], [ [ "Several properties of the display can be changed from Python", "_____no_output_____" ] ], [ [ "wwt.constellation_figures = True\n\nwwt.constellation_boundary_color = 'azure'\nwwt.constellation_figure_color = '#D3BC8D'\nwwt.constellation_selection_color = (1, 0, 1)", "_____no_output_____" ] ], [ [ "In addition to interacting with the display with mouse/keyboard, you can manipulate it programmatically.", "_____no_output_____" ] ], [ [ "from astropy import units as u\nfrom astropy.coordinates import SkyCoord\n\norion_neb = SkyCoord.from_name('Orion Nebula')\nwwt.center_on_coordinates(orion_neb, fov=10 * u.degree, instant=False)", "_____no_output_____" ] ], [ [ "A variety of markers can be added to the display, and one can construct tours of the sky.", "_____no_output_____" ] ], [ [ "wwt.load_tour('http://www.worldwidetelescope.org/docs/wtml/tourone.wtt')", "_____no_output_____" ], [ "wwt.pause_tour()", "_____no_output_____" ] ], [ [ "## ipyaladin - interactive sky atlas backed by simbad/vizier databases\n\n### https://github.com/cds-astro/ipyaladin\n\nThe [Simbad catalog]() and [VizieR database interface]() serve as repositories for most public astronomical data. The Aladin sky atlas, originally developed as a desktop application, then an in-browser javascript app, now has an experimental widget interface.\n\n### Installation\n\nInstallation instructions are at: https://github.com/cds-astro/ipyaladin#installation ", "_____no_output_____" ] ], [ [ "import ipyaladin.aladin_widget as ipyal", "_____no_output_____" ], [ "aladin = ipyal.Aladin(target='Orion Nebula', fov=10, survey='P/allWISE/color')\naladin", "_____no_output_____" ] ], [ [ "### Add markers for items in a data table ", "_____no_output_____" ] ], [ [ "from astroquery.simbad import Simbad\ntable = Simbad.query_region('Orion Nebula', radius=1 * u.degree)", "_____no_output_____" ], [ "import numpy as np\ndisplay_obj = np.random.choice(range(len(table)), size=100)", "_____no_output_____" ], [ "aladin.add_table(table[display_obj])", "_____no_output_____" ] ], [ [ "## Display a local image\n\nOne goal this week is to wrap the widget below (which displays images stored in a format called FITS that is widely used in astronomy) up into an easily installable widget. The widget will be demoed during the tutorial but is not yet installable. Code will be in https://github.com/eteq/astrowidgets", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
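The `add_table()` call in the record above pushes 100 randomly chosen Simbad rows to the Aladin view. A sketch of a distance-based selection instead, assuming the `table` and `orion_neb` objects from the earlier cells (and assuming, as is typical for the classic Simbad interface, that `RA`/`DEC` come back as sexagesimal string columns):

```python
# Sketch: keep only sources within 30 arcmin of the Orion Nebula.
from astropy import units as u
from astropy.coordinates import SkyCoord

coords = SkyCoord(table['RA'], table['DEC'], unit=(u.hourangle, u.deg))
nearby = table[coords.separation(orion_neb) < 30 * u.arcmin]
aladin.add_table(nearby)
```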
ece03460acf43092926c8f52eee316adbd622a87
7,830
ipynb
Jupyter Notebook
src/model_mockups/dense_relu_nn.ipynb
kaylani2/machineLearning
692623abf6fe02bde6c7da6c2f8c0ec526a3e8f8
[ "MIT" ]
7
2019-11-06T14:35:37.000Z
2022-03-06T03:55:06.000Z
src/model_mockups/dense_relu_nn.ipynb
kaylani2/machineLearning
692623abf6fe02bde6c7da6c2f8c0ec526a3e8f8
[ "MIT" ]
10
2020-05-16T02:38:35.000Z
2021-04-11T23:55:35.000Z
src/model_mockups/dense_relu_nn.ipynb
kaylani2/machineLearning
692623abf6fe02bde6c7da6c2f8c0ec526a3e8f8
[ "MIT" ]
2
2020-06-26T21:39:41.000Z
2020-09-15T03:38:32.000Z
30.585938
308
0.594636
[ [ [ "import numpy as np \nimport pandas as pd\n\nimport tensorflow as tf\n\nfrom tensorflow import feature_column\nfrom tensorflow.keras import layers\nfrom sklearn.model_selection import train_test_split\n\nDEVICE = '/GPU:0'", "_____no_output_____" ] ], [ [ "Configuration for the dataset", "_____no_output_____" ] ], [ [ "tf.keras.backend.set_floatx('float64')\n\n#getting the csv\nCICIDS_DIRECTORY = '../../datasets/cicids/MachineLearningCVE/'\nCICIDS_MONDAY_FILENAME = 'Monday-WorkingHours.pcap_ISCX.csv'\nCICIDS_WEDNESDAY_FILENAME = 'Wednesday-workingHours.pcap_ISCX.csv'\nCICIDS_MONDAY = CICIDS_DIRECTORY + CICIDS_MONDAY_FILENAME\nCICIDS_WEDNESDAY = CICIDS_DIRECTORY + CICIDS_WEDNESDAY_FILENAME", "_____no_output_____" ] ], [ [ "Preparing the data frame", "_____no_output_____" ] ], [ [ "dataFrame = pd.read_csv(CICIDS_WEDNESDAY)\n## Remove NaN and inf values\ndataFrame.replace ('Infinity', np.nan, inplace = True) ## Or other text values\ndataFrame.replace (np.inf, np.nan, inplace = True) ## Remove infinity\ndataFrame.replace (np.nan, 0, inplace = True)\n\nnewKeys = []\nfor key in dataFrame.keys():\n    newKeys.append(key.replace(' ', '-'))\n\ndict_rename = { source:destination for source, destination in zip(dataFrame.keys(), newKeys)}\ndataFrame = dataFrame.rename(columns=dict_rename)\n", "_____no_output_____" ], [ "#converting labels\ndataFrame ['-Label'] = dataFrame ['-Label'].replace ('BENIGN', 0)\ndataFrame ['-Label'] = dataFrame ['-Label'].replace ('DoS slowloris', 1)\ndataFrame ['-Label'] = dataFrame ['-Label'].replace ('DoS Slowhttptest', 2)\ndataFrame ['-Label'] = dataFrame ['-Label'].replace ('DoS Hulk', 3)\ndataFrame ['-Label'] = dataFrame ['-Label'].replace ('DoS GoldenEye', 4)\ndataFrame ['-Label'] = dataFrame ['-Label'].replace ('Heartbleed', 5)", "_____no_output_____" ], [ "#splitting dataset\ntrain, test = train_test_split(dataFrame, test_size=0.2)\ntrain, val = train_test_split(train, test_size=0.2)\nprint(len(train), 'train examples')\nprint(len(val), 'validation examples')\nprint(len(test), 'test examples')", "443329 train examples\n110833 validation examples\n138541 test examples\n" ], [ "#make dataFrame into a data set\ndef df_to_dataset(dataFrame, shuffle=True, batch_size=32):\n    dataFrame = dataFrame.copy()\n    labels = dataFrame.pop('-Label')\n    data_set = tf.data.Dataset.from_tensor_slices((dict(dataFrame), labels))\n    if shuffle:\n        data_set = data_set.shuffle(buffer_size=len(dataFrame))\n    data_set = data_set.batch(batch_size)\n    return data_set", "_____no_output_____" ], [ "#transform each part of the dataFrame into the data_set format\nBATCH_SIZE = 32\ntrain_ds = df_to_dataset(train, batch_size=BATCH_SIZE)\nval_ds = df_to_dataset(val, shuffle=False, batch_size=BATCH_SIZE)\ntest_ds = df_to_dataset(test, shuffle=False, batch_size=BATCH_SIZE)", "_____no_output_____" ] ], [ [ "# Feature columns preset for the model\nNote: All columns of this dataset are numeric\n", "_____no_output_____", "1st approach: using all columns.\n", "_____no_output_____" ] ], [ [ "feature_columns = []\nnumeric_headers = []\ncategorical_headers = []\ncount = 0\nfor feature, label in train_ds.take(1):\n    for key in list(feature.keys()):\n        feature_columns.append(feature_column.numeric_column(key))", "_____no_output_____" ], [ "feature_layer = tf.keras.layers.DenseFeatures(feature_columns)\n", "_____no_output_____" ], [ "initializer = tf.initializers.VarianceScaling(scale=2.0)\nhidden_layer_size, num_classes = 128, 6\n\nlayers = [\n    feature_layer,\n    tf.keras.layers.Dense(hidden_layer_size, use_bias=True, activation='relu', kernel_initializer=initializer),\n    tf.keras.layers.Dense(hidden_layer_size, use_bias=True, activation='relu', kernel_initializer=initializer),\n    tf.keras.layers.Dense(num_classes, use_bias=True, activation='softmax', kernel_initializer=initializer),\n]\n\nmodel = tf.keras.Sequential(layers)\nmodel.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=[tf.keras.metrics.sparse_categorical_accuracy])", "_____no_output_____" ], [ "with tf.device(DEVICE):\n    model.fit(train_ds, validation_data=val_ds, epochs=5)", "Epoch 1/5\nWARNING:tensorflow:Layer dense is casting an input tensor from dtype float32 to the layer's dtype of float64, which is new behavior in TensorFlow 2.  The layer has dtype float64 because it's dtype defaults to floatx.\n\nIf you intended to run this layer in float64, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.\n\nTo change all layers to have dtype float32 by default, call `tf.keras.backend.set_floatx('float32')`. To change just this layer, pass dtype='float32' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.\n\n13855/13855 [==============================] - 63s 5ms/step - loss: 49182.3925 - sparse_categorical_accuracy: 0.8962 - val_loss: 277.5948 - val_sparse_categorical_accuracy: 0.7800\nEpoch 2/5\n 5792/13855 [===========>..................] - ETA: 37s - loss: 88.3112 - sparse_categorical_accuracy: 0.7452" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ] ]
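The loop in the record above builds one raw `numeric_column` per field, so the network sees unscaled flow statistics (one likely reason for the huge initial loss in the training output). A sketch of per-feature standardization via `normalizer_fn`, assuming the notebook's `train` DataFrame; the default-argument trick pins each column's mean and std inside its own lambda:

```python
# Sketch: numeric feature columns with per-feature standardization.
import tensorflow as tf

feature_columns = []
for key in train.columns.drop('-Label'):
    mean = float(train[key].mean())
    std = float(train[key].std()) + 1e-8  # guard against constant columns
    feature_columns.append(
        tf.feature_column.numeric_column(
            key, normalizer_fn=lambda x, m=mean, s=std: (x - m) / s))
```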
ece0459f5de69efbc2ac244d6b495332555662a3
21,711
ipynb
Jupyter Notebook
.ipynb_checkpoints/L02-NumPy_Part2-Lesson-checkpoint.ipynb
shresthasrijana099/Data-Analytics-With-Python
09ca484816b44dfe5857d36a0401746871ae7d9c
[ "Apache-2.0" ]
null
null
null
.ipynb_checkpoints/L02-NumPy_Part2-Lesson-checkpoint.ipynb
shresthasrijana099/Data-Analytics-With-Python
09ca484816b44dfe5857d36a0401746871ae7d9c
[ "Apache-2.0" ]
null
null
null
.ipynb_checkpoints/L02-NumPy_Part2-Lesson-checkpoint.ipynb
shresthasrijana099/Data-Analytics-With-Python
09ca484816b44dfe5857d36a0401746871ae7d9c
[ "Apache-2.0" ]
null
null
null
33.660465
419
0.596794
[ [ [ "# Lesson 2: NumPy Part 2\n\nThis notebook is based on the official `NumPy` [documentation](https://docs.scipy.org/doc/numpy/user/quickstart.html). Unless otherwise credited, quoted text comes from this document. The NumPy documentation describes NumPy in the following way:\n\n> NumPy is the fundamental package for scientific computing with Python. It contains among other things:\n> - a powerful N-dimensional array object\n> - sophisticated (broadcasting) functions\n> - tools for integrating C/C++ and Fortran code\n> - useful linear algebra, Fourier transform, and random number capabilities\n>\n> Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases.\n", "_____no_output_____", "## Instructions\nThis tutorial provides step-by-step training divided into numbered sections. The sections often contain embedded executable code for demonstration. This tutorial is accompanied by a practice notebook: [L02-Numpy_Part2-Practice.ipynb](./L02-Numpy_Part2-Practice.ipynb). \n\nThroughout this tutorial sections labeled as \"Tasks\" are interspersed and indicated with the icon: ![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/16/Apps-gnome-info-icon.png). You should follow the instructions provided in these sections by performing them in the practice notebook. When the tutorial is completed you can turn in the final practice notebook. ", "_____no_output_____", "---\n## 1. Getting Started\nFirst, we must import the NumPy library. ", "_____no_output_____" ] ], [ [ "# Import numpy\nimport numpy as np", "_____no_output_____" ] ], [ [ "### Task 1a: Setup\n<span style=\"float:right; margin-left:10px; clear:both;\">![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/96/Apps-gnome-info-icon.png)\n</span>\n\nIn the practice notebook, import the following packages:\n+ `numpy` as `np`", "_____no_output_____", "## 2. Basic Indexing: Subsets and Slicing\n\nWe often want to consider a subset of a given array. You will recognize basic subsetting as it is similar to indexing of Python lists. \n\nThe following code examples demonstrate how to subset a NumPy array:\n\n```python\n# Get items from \"start\" to \"end\" (but the end is not included!)\na[start:end] \n\n# Get all items from \"start\" through the rest of the array\na[start:] \n\n# Get items from the beginning to \"end\" (but the end is not included!)\na[:end] \n```\nSimilarly to Python lists, retrieving elements from the end of a NumPy array uses negative indexing. Execute the example code below to see a demonstration:", "_____no_output_____" ] ], [ [ "# Create a 5 x 2 array of random numbers\ndemo_g = np.random.random((5,2))\nprint(demo_g)\n\n# Get the last item from the last 'row':\ndemo_g[-1, -1]", "[[0.21847921 0.58694935]\n [0.61815126 0.81658475]\n [0.4294856  0.02043049]\n [0.38807909 0.75901409]\n [0.78980718 0.67693608]]\n" ] ], [ [ "### Task 2a: Indexing by Subsetting and Slicing\n<span style=\"float:right; margin-left:10px; clear:both;\">![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/96/Apps-gnome-info-icon.png)\n</span>\n\nIn the practice notebook perform the following:\n\n1. Create (or re-use) 3 arrays, each containing three dimensions.\n2. Slice each of these arrays so that:\n    + One element / number is returned.\n    + One dimension is returned.\n    + A subset of a dimension is returned.\n3. What is the difference between `[x:]` and `[x, ...]`? (hint: try each on high-dimension arrays).\n    \n*Exactly what you choose to return is not important at this point; the goal of this task is to train you so that if you are given an n-dimension NumPy array, you can write an index or slice that returns a subset of desired positions.*", "_____no_output_____", "## 3. \"Fancy\" Indexing\n\nFancy indexing allows you to provide an array of indices or an array of boolean values in order to subset an array.\n", "_____no_output_____", "### 3.1 Using a Boolean Array for Indexing\nRather than using an index range, as shown in the previous section, we can provide an array of boolean values where `True` indicates that we want the value in the position where `True` is found, and `False` indicates we do not want it. Creating these boolean arrays is simple if we use conditional statements. \n\nFor example, review and then execute the following code:", "_____no_output_____" ] ], [ [ "# Create a 5 x 2 array of random numbers\ndemo_g = np.random.random((5,2))\n\n# Find all values in the matrix less than 0.5\ndemo_g < 0.5", "_____no_output_____" ] ], [ [ "Notice the return value is an array of boolean values. True indicates if the value was less than 0.5. False indicates it is greater or equal. We can use this boolean array as an index for the same array to return only those values satisfy the boolean condition. Try executing the following code:", "_____no_output_____" ] ], [ [ "demo_g[demo_g < 0.5]", "_____no_output_____" ] ], [ [ "Or alternatively:", "_____no_output_____" ] ], [ [ "sig_list = demo_g < 0.5\ndemo_g[sig_list]", "_____no_output_____" ] ], [ [ "### Task 3a: Boolean Indexing\n\n<span style=\"float:right; margin-left:10px; clear:both;\">![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/96/Apps-gnome-info-icon.png)\n</span>\n\nIn the practice notebook perform the following:\n\n+ Experiment with the following boolean conditionals to generate boolean arrays for indexing:\n  + Greater than\n  + Less than\n  + Equals\n  + Combine two or more of the above with:\n      + or `|`\n      + and `&`\n\nYou can create arrays or use existing ones", "_____no_output_____", "### 3.2 Using exact indices\n\nAlternatively, if there are specific elements from the array that we want to retrieve we can provide the specific numeric indices. \n\nFor example, review and then execute the following code:", "_____no_output_____" ] ], [ [ "# Generate a list of 500 random numbers\ndemo_f = np.random.random((500))\n\n# Retrieve 5 random numbers from the list\ndemo_f[[0,100,200,300,400]]", "_____no_output_____" ] ], [ [ "## 4. Intermission -- Getting Help\n\nPython has a built-in function, `help()`, that we can call on any object (anything) to find out more about it. As we move deeper into the functions provided by most packages, we often need to know exactly what a given function expects as arguments.\n\nThe output of these `help()` calls can be long. Try executing the following help call for the `np.array` function:", "_____no_output_____" ] ], [ [ "# Call help on anything from a package.\nhelp(np.array)", "_____no_output_____" ] ], [ [ "Additionally, we can get help about an object that we created! Execute the following code to try it out:", "_____no_output_____" ] ], [ [ "# Call help on an object we created.\nx = np.array([1, 2, 3, 4])\nhelp(x)", "_____no_output_____" ] ], [ [ "### Task 4a: Getting Help\n\n<span style=\"float:right; margin-left:10px; clear:both;\">![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/96/Apps-gnome-info-icon.png)\n</span>\n\nIn the practice notebook perform the following:\n\n+ In the code cell below, call `help()` on two of the following functions: `np.transpose()`, `np.reshape()`, `np.resize()`, `np.ravel()`, `np.append()`, `np.delete()`, `np.concatenate()`, `np.vstack()`, `np.hstack()`, `np.column_stack()`, `np.vsplit()`, `np.hsplit()` \n+ Respond to this question: Did you understand the help documentation? Could you use the function just by looking at what the help says about it? ", "_____no_output_____", "## 5. Manipulating Arrays\nThus far, we have learned to create arrays, perform basic math, aggregate values, and index arrays. Finally, we need to learn to manipulate them by transposing, reshaping, splitting, joining, appending, and deleting arrays.", "_____no_output_____", "### 5.1 Transposing\nTransposing an array flips it over its main diagonal, swapping rows and columns, as shown in the following animated image:\n\n<img src=\"https://upload.wikimedia.org/wikipedia/commons/e/e4/Matrix_transpose.gif\">\n\n(image source: https://en.wikipedia.org/wiki/Transpose)\n\nNumPy allows you to transpose a matrix in one of two ways:\n\n+ Using the `transpose()` function\n+ Accessing the `T` attribute.\n\nExecute the following code examples to see an example of an array transpose", "_____no_output_____" ] ], [ [ "# Create a 2 x 3 random matrix\ndemo_f = np.random.random((2,3))\n\nprint(\"The original matrix\")\nprint(demo_f)\n\nprint(\"\\nThe matrix after being transposed\")\nprint(np.transpose(demo_f))\n\nprint(\"\\nThe transposed matrix from the T attribute\")\nprint(demo_f.T)\n", "_____no_output_____" ] ], [ [ "### Task 5a: Transposing an Array\n\n<span style=\"float:right; margin-left:10px; clear:both;\">![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/96/Apps-gnome-info-icon.png)\n</span>\n\nIn the practice notebook perform the following:\n\n+ Create a matrix of any size and transpose it.", "_____no_output_____", "### 5.2 Reshaping and Resizing\nYou can change the dimensions of your array by use of the following two functions:\n + `resize()`\n + `reshape()`\n \nThe `resize()` function allows you to \"stretch\" your array to increase its size.  This can be useful if you need to add more data to an existing array or you need to adjust it prior to performing arithmetic and broadcasting.\n\nThe `reshape()` function allows you to change the dimensions of an existing array. For example, if you have a _3 x 2_ array you can change it to a _6 x 1_ array using the `reshape()` function without losing the data values in the array.\n\nExamine and execute the following code adapted from the DataCamp Tutorial:", "_____no_output_____" ] ], [ [ "# Create an array x of size 4 x 1. Print the shape of `x`\nx = np.array([1,1,1,1])\nprint(x.shape)\n\n# Resize `x` to ((6,4))\nnp.resize(x, (6,4))", "_____no_output_____" ] ], [ [ "Notice how the array was resized from a _4 x 1_ to a _6 x 4_ array.", "_____no_output_____" ] ], [ [ "# Reshape `x` to (2,2)\nx = np.array([1,2,3,4])\nprint(\"\\noriginal:\")\nprint(x)\nprint(\"\\nreshaped:\")\nprint(x.reshape((2,2)))", "_____no_output_____" ] ], [ [ "### Task 5b: Reshaping an Array\n\n<span style=\"float:right; margin-left:10px; clear:both;\">![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/96/Apps-gnome-info-icon.png)\n</span>\n\nIn the practice notebook perform the following:\n\n+ Create a matrix and resize it by adding 2 extra columns\n+ Create a matrix and resize it by adding 1 extra row\n+ Create a matrix of 8 x 2 and resize it to 4 x 4", "_____no_output_____", "### 5.3 Appending Arrays\nSometimes, you may want to append one array to another.  You can append one array to another using the `append()` function.  You can append an array to any dimension.  Remember that NumPy arrays have **axes**.  When you append one array to another you must specify the axes (e.g. row or column for 2D array) that you want to append.  Axes are identified using a numeric index starting from 0, therefore:\n\n+ `0`: the first dimension (the rows of a 2-D array)\n+ `1`: the second dimension (the columns of a 2-D array)\n+ `2`: the third dimension\n+ `3`: the fourth dimension\n+ etc...\n\nFor example, examine and execute this code borrowed from the DataCamp tutorial:", "_____no_output_____" ] ], [ [ "# Append a 1D array to your `my_array`\nmy_array = np.array([1,2,3,4])\nnew_array = np.append(my_array, [7, 8, 9, 10])\n\n# Print `new_array`\nprint(new_array)\n\n# Append an extra column to your `my_2d_array`\nmy_2d_array = np.array([[1,2,3,4], [5,6,7,8]])\nnew_2d_array = np.append(my_2d_array, [[7], [8]], axis=1)\n\n# Print `new_2d_array`\nprint(new_2d_array)", "_____no_output_____" ] ], [ [ "In the code above, for the first example, the array `[7, 8, 9, 10]` is appended or added to the existing 1D `my_array`.  For the second example, the values `7` and `8` are appended along the second axis, adding a new column with one value per row (note the `axis=1` parameter).", "_____no_output_____", "### Task 5c: Appending to an Array\n\n<span style=\"float:right; margin-left:10px; clear:both;\">![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/96/Apps-gnome-info-icon.png)\n</span>\n\nIn the practice notebook perform the following:\n\n + Create a three-dimensional array and append another row to the array\n + Append another column to the array\n + Print the final results", "_____no_output_____", "### 5.4 Inserting and Deleting Elements\nYou can easily add a new element, or elements, to an array using the `insert()` and `delete()` functions. ", "_____no_output_____", "### Task 5d: Inserting and Deleting Elements\n\n<span style=\"float:right; margin-left:10px; clear:both;\">![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/96/Apps-gnome-info-icon.png)\n</span>\n\nIn the practice notebook perform the following:\n\n+ Examine the `help()` documentation for how to use the `insert()` and `delete()` functions.\n+ Create a matrix and practice inserting a row and deleting a column.\n", "_____no_output_____", "### 5.5 Joining Arrays\nThere are a variety of functions for joining arrays:\n\n + `concatenate()`\n + `vstack()`\n + `hstack()`\n + `column_stack()`\n\nEach of these functions is used in the following code borrowed from a [DataCamp](https://www.datacamp.com/) tutorial. Examine and execute the following code cell:", "_____no_output_____" ] ], [ [ "# Concatenate `my_array` and `x`: similar to np.append()\nmy_array = np.array([1,2,3,4])\nx = np.array([1,1,1,1])\nprint(\"concatenate:\")\nprint(np.concatenate((my_array, x)))\n\n# Stack arrays row-wise\nmy_2d_array = np.array([[1,2,3,4], [5,6,7,8]])\nprint(\"\\nvstack:\")\nprint(np.vstack((my_array, my_2d_array)))\n\n# Stack arrays horizontally\nprint(\"\\nhstack:\")\nprint(np.hstack((my_2d_array, my_2d_array)))\n\n# Stack arrays column-wise\nprint(\"\\ncolumn_stack:\")\nprint(np.column_stack((my_2d_array, my_2d_array)))", "_____no_output_____" ] ], [ [ "### Task 5e: Joining Arrays\n\n<span style=\"float:right; margin-left:10px; clear:both;\">![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/96/Apps-gnome-info-icon.png)\n</span>\n\nIn the practice notebook perform the following:\n\n+ Execute the code (as shown above).\n+ Examine the output from each of the function calls in the cell above. If needed, review the help pages for each tool either using the `help()` command or the [Numpy Function Reference](https://docs.scipy.org/doc/numpy/reference/routines.html). \n+ Respond to the following question\n  + Can you identify what is happening with each of them?", "_____no_output_____", "### 5.6 Splitting an Array\nYou may find that you need to split arrays. The following functions allow you to split horizontally or vertically:\n + `vsplit()`\n + `hsplit()`\n \nExamine and execute the following code borrowed from the DataCamp Tutorial:", "_____no_output_____" ] ], [ [ "# Create a 2D array.\nmy_2d_array = np.array([[1,2,3,4], [5,6,7,8]])\nprint(\"original:\")\nprint(my_2d_array)\n\n# Split `my_stacked_array` horizontally at the 2nd index\nprint(\"\\nhsplit:\")\nprint(np.hsplit(my_2d_array, 2))\n\n# Split `my_stacked_array` vertically at the 2nd index\nprint(\"\\nvsplit:\")\nprint(np.vsplit(my_2d_array, 2))", "_____no_output_____" ] ], [ [ "### Task 5f: Splitting Arrays\n\n<span style=\"float:right; margin-left:10px; clear:both;\">![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/96/Apps-gnome-info-icon.png)\n</span>\n\nIn the practice notebook perform the following:\n\n+ Execute the code (as shown above).\n+ Examine the output from each of the function calls in the cell above. If needed, review the help pages for each tool either using the `help()` command or the [Numpy Function Reference](https://docs.scipy.org/doc/numpy/reference/routines.html). \n+ Respond to the following question\n  + Can you identify what is happening with each of them?", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
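Task 2a of the lesson above asks about the difference between `a[x:]` and `a[x, ...]`. A small worked example (any 3-D array will do): slicing with `1:` keeps the sliced axis, while integer indexing with `1, ...` removes it.

```python
import numpy as np

a = np.arange(24).reshape((2, 3, 4))
print(a[1:].shape)      # (1, 3, 4) -- a slice keeps the sliced axis
print(a[1, ...].shape)  # (3, 4)    -- an integer index drops that axis
```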
ece04c968d7d99dd17dfc9e70aaed75e90ecf972
524,299
ipynb
Jupyter Notebook
notebooks/other-disease-cardiomyopathy-train-clinvar-single-gene.ipynb
samesense/mahdi_epi
ec002df1d6b0dbdd4be8675e48971ed604ee9014
[ "MIT" ]
4
2018-06-05T06:13:04.000Z
2020-11-16T03:03:14.000Z
notebooks/other-disease-cardiomyopathy-train-clinvar-single-gene.ipynb
samesense/mahdi_epi
ec002df1d6b0dbdd4be8675e48971ed604ee9014
[ "MIT" ]
25
2018-05-25T11:46:06.000Z
2018-05-29T10:57:07.000Z
notebooks/other-disease-cardiomyopathy-train-clinvar-single-gene.ipynb
samesense/mahdi_epi
ec002df1d6b0dbdd4be8675e48971ed604ee9014
[ "MIT" ]
null
null
null
688.960578
110,080
0.930725
[ [ [ "### does training on clinvar predict disease better than single mpc? - cardiomyopathy\n* rm testing data from clinvar\n* focus on training single gene models: only use genes w/ 5+ p and b", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy\nfrom scipy.stats import entropy\nimport pydot, pydotplus, graphviz\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nimport seaborn as sns\nfrom sklearn import linear_model, metrics, tree, svm\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.externals.six import StringIO\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.ensemble import ExtraTreesClassifier\nfrom IPython.display import HTML\n%matplotlib inline", "_____no_output_____" ], [ "cols = ['mpc']\nkey_cols = ['chrom', 'pos', 'ref', 'alt']\n\nright_benign = \"#32CD32\"\nright_path = \"#2ecc71\"\nwrong_path = \"#e74c3c\"\nwrong_benign = \"#ffcccb\"\nflatui = [right_benign, right_path, wrong_benign, wrong_path]\nho = ['CorrectBenign', 'CorrectPath', 'WrongBenign', 'WrongPath']\n\ndef eval_pred(row, col):\n    if row[col] == row['y']:\n        if row['y'] == 1:\n            return 'CorrectPath'\n        return 'CorrectBenign'\n    if row['y'] == 1:\n        return 'WrongPath'\n    return 'WrongBenign'\n\ndef eval_mpc_raw(row):\n    if row['y'] == 1:\n        if row['mpc']>=float(2):\n            return 'CorrectPath'\n        return 'WrongPath'\n    if row['mpc']>=float(2):\n        return 'WrongBenign'\n    return 'CorrectBenign'", "_____no_output_____" ], [ "# load clinvar\ndat_file = '../data/interim/clinvar/clinvar.limit3.dat'\nclinvar_df_pre = pd.read_csv(dat_file, sep='\\t').rename(columns={'clin_class':'y'})", "_____no_output_____" ], [ "# other disease df: missense\ndat_file = '../data/interim/other/other.eff.dbnsfp.anno.hHack.dat.limit.xls'\ndisease_df_pre = pd.read_csv(dat_file, sep='\\t')\ndisease_df_pre.loc[:, 'y'] = disease_df_pre.apply(lambda row: 1 if row['class']=='P' else 0, axis=1)\ndisease = 'Cardiomyopathy'\ndisease_df = disease_df_pre[disease_df_pre.Disease==disease]\ntest_keys = {':'.join([str(x) for x in v]):True for v in disease_df[key_cols].values}\n#tree_clf = tree.DecisionTreeClassifier(max_depth=1)\n#X, y = train_df[cols], train_df['y']\n#tree_clf.fit(X, y)\ncrit = clinvar_df_pre.apply(lambda row: not ':'.join([str(row[x]) for x in key_cols]) in test_keys, axis=1)\nclinvar_df = clinvar_df_pre[crit]\nprint('clinvar w/o testing data', len(clinvar_df))\n\ndisease_genes = set(disease_df['gene'])\ncrit = clinvar_df.apply(lambda row: row['gene'] in disease_genes, axis=1)\nclinvar_df_limit_genes = clinvar_df[crit]\nprint('clinvar w/o testing data for disease genes', len(clinvar_df_limit_genes))", "clinvar w/o testing data 24140\nclinvar w/o testing data for disease genes 1607\n" ], [ "# use genes where clinvar has at least 5 p and b vars\ndef enough_data(rows):\n    if len(rows) != 2:\n        return False\n    return not max(rows['size'] < 5)\n    \ndd = (clinvar_df_limit_genes.groupby(['gene','y'])\n      .size().reset_index().rename(columns={0:'size'})\n      .groupby('gene').apply(enough_data).reset_index()\n     )\nuse_genes = list(dd[dd[0]]['gene'])\nprint(use_genes)", "['DSC2', 'DSG2', 'DSP', 'LDB3', 'MYH6', 'MYH7', 'PKP2', 'PRKAG2', 'RBM20', 'RYR2', 'SCN5A', 'TMEM43']\n" ], [ "crit = disease_df.apply(lambda row: row['gene'] in use_genes, axis=1)\ndisease_df[crit].groupby('y').size()", "_____no_output_____" ], [ "# train clinvar\n# apply mpc>2\ntree_clf_clinvar = tree.DecisionTreeClassifier(max_depth=1)\nX, y = clinvar_df[cols], clinvar_df['y']\ntree_clf_clinvar.fit(X, y)\n\ntree_clf_clinvar_limit_genes = tree.DecisionTreeClassifier(max_depth=1)\nX, y = clinvar_df_limit_genes[cols], clinvar_df_limit_genes['y']\ntree_clf_clinvar_limit_genes.fit(X, y)", "_____no_output_____" ], [ "# one gene at a time\nacc_df_ls = []\n\ntree_clf_clinvar = tree.DecisionTreeClassifier(max_depth=1)\nX, y = clinvar_df[cols], clinvar_df['y']\ntree_clf_clinvar.fit(X, y)\n    \nfor test_gene in use_genes:\n    sub_train_df = disease_df[disease_df.gene != test_gene]\n    tree_clf_sub = tree.DecisionTreeClassifier(max_depth=1)\n    X, y = sub_train_df[cols], sub_train_df['y']\n    tree_clf_sub.fit(X, y)\n    \n    test_df = disease_df[disease_df.gene == test_gene]\n    X_test = test_df[cols]\n    preds = tree_clf_sub.predict(X_test)\n    test_df['mpc_pred'] = preds\n    test_df.loc[:, 'PredictionStatusMPC'] = test_df.apply(lambda row: eval_pred(row, 'mpc_pred'), axis=1)\n    \n    tree_clf_clinvar_limit_gene = tree.DecisionTreeClassifier(max_depth=1)\n    X, y = (clinvar_df_limit_genes[clinvar_df_limit_genes.gene==test_gene][cols],\n            clinvar_df_limit_genes[clinvar_df_limit_genes.gene==test_gene]['y'])\n    tree_clf_clinvar_limit_gene.fit(X, y)\n    preds = tree_clf_clinvar_limit_gene.predict(X_test)\n    test_df['mpc_pred_clinvar_limit_gene'] = preds\n    test_df.loc[:, 'PredictionStatusMPC_clinvar_limit_gene'] = test_df.apply(lambda row:\n                                                                             eval_pred(row, 'mpc_pred_clinvar_limit_gene'), axis=1)\n    \n    \n    preds = tree_clf_clinvar.predict(X_test)\n    test_df['mpc_pred_clinvar'] = preds\n    test_df.loc[:, 'PredictionStatusMPC_clinvar'] = test_df.apply(lambda row: eval_pred(row, 'mpc_pred_clinvar'), axis=1)\n    \n    preds = tree_clf_clinvar_limit_genes.predict(X_test)\n    test_df['mpc_pred_clinvar_limit_genes'] = preds\n    test_df.loc[:, 'PredictionStatusMPC_clinvar_limit_genes'] = test_df.apply(lambda row: eval_pred(row, 'mpc_pred_clinvar_limit_genes'), axis=1)\n    \n    # apply mpc>=2\n    test_df.loc[:, 'PredictionStatusMPC>2'] = test_df.apply(eval_mpc_raw, axis=1)\n    \n    acc_df_ls.append(test_df)\n\ntest_df = pd.concat(acc_df_ls)\ng_df = (test_df[['gene', 'chrom', 'pos', 'ref', 'alt', 'PredictionStatusMPC']]\n        .groupby(['gene', 'PredictionStatusMPC'])\n        .size().reset_index().rename(columns={0:'size'}))\ndd = g_df.groupby('gene').sum().reset_index()\n\nsns.set(font_scale=1.75)\nss = sns.factorplot(x='gene', hue='PredictionStatusMPC', y='size', data=g_df, hue_order=ho,\n                    kind='bar', palette=sns.color_palette(flatui), size=5, aspect=3)\nss.set_ylabels('%s CV panel missense variants' % (disease,))\nss.set_xlabels('')\nss.set_titles('MPC performance')\n#ss.savefig(\"../docs/plots/%s_cv_mpc_eval.png\" % (disease,))\n\ng_df = (test_df[['gene', 'chrom', 'pos', 'ref', 'alt', 'PredictionStatusMPC']]\n        .groupby(['PredictionStatusMPC'])\n        .size().reset_index().rename(columns={0:'size'}))\ng_df.head()", "/opt/conda/lib/python3.4/site-packages/ipykernel/__main__.py:17: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n/opt/conda/lib/python3.4/site-packages/pandas/core/indexing.py:297: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n  self.obj[key] = _infer_fill_value(value)\n/opt/conda/lib/python3.4/site-packages/pandas/core/indexing.py:477: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n  self.obj[item] = s\n/opt/conda/lib/python3.4/site-packages/ipykernel/__main__.py:25: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n/opt/conda/lib/python3.4/site-packages/ipykernel/__main__.py:31: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n/opt/conda/lib/python3.4/site-packages/ipykernel/__main__.py:35: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n" ], [ "g_df = (test_df[['gene', 'chrom', 'pos', 'ref', 'alt', 'PredictionStatusMPC_clinvar']]\n        .groupby(['gene', 'PredictionStatusMPC_clinvar'])\n        .size().reset_index().rename(columns={0:'size'}))\ndd = g_df.groupby('gene').sum().reset_index()\n\nsns.set(font_scale=1.75)\nss = sns.factorplot(x='gene', hue='PredictionStatusMPC_clinvar', y='size', data=g_df, hue_order=ho,\n                    kind='bar', palette=sns.color_palette(flatui), size=5, aspect=3)\nss.set_ylabels('%s ClinVar-trained missense variants' % (disease,))\nss.set_xlabels('')\nss.set_titles('MPC performance')\n#ss.savefig(\"../docs/plots/%s_cv_mpc_eval.png\" % (disease,))\n\ng_df = (test_df[['gene', 'chrom', 'pos', 'ref', 'alt', 'PredictionStatusMPC_clinvar']]\n        .groupby(['PredictionStatusMPC_clinvar'])\n        .size().reset_index().rename(columns={0:'size'}))\ng_df.head()", "_____no_output_____" ], [ "g_df = (test_df[['gene', 'chrom', 'pos', 'ref', 'alt', 'PredictionStatusMPC_clinvar_limit_genes']]\n        .groupby(['gene', 'PredictionStatusMPC_clinvar_limit_genes'])\n        .size().reset_index().rename(columns={0:'size'}))\ndd = g_df.groupby('gene').sum().reset_index()\n\nsns.set(font_scale=1.75)\nss = sns.factorplot(x='gene', hue='PredictionStatusMPC_clinvar_limit_genes', y='size', data=g_df,\n                    kind='bar', palette=sns.color_palette(flatui), size=5, aspect=3, hue_order=ho)\nss.set_ylabels('%s ClinVar-trained (limit genes) missense variants' % (disease,))\nss.set_xlabels('')\n#ss.set_titles('MPC performance')\n#ss.savefig(\"../docs/plots/%s_cv_mpc_eval.png\" % (disease,))\n\ng_df = (test_df[['gene', 'chrom', 'pos', 'ref', 'alt', 'PredictionStatusMPC_clinvar_limit_genes']]\n        .groupby(['PredictionStatusMPC_clinvar_limit_genes'])\n        .size().reset_index().rename(columns={0:'size'}))\ng_df.head()", "_____no_output_____" ], [ "g_df = (test_df[['gene', 'chrom', 'pos', 'ref', 'alt', 'PredictionStatusMPC>2']]\n        .groupby(['gene', 'PredictionStatusMPC>2'])\n        .size().reset_index().rename(columns={0:'size'}))\ndd = g_df.groupby('gene').sum().reset_index()\n\nsns.set(font_scale=1.75)\nss = sns.factorplot(x='gene', hue='PredictionStatusMPC>2', y='size', data=g_df, hue_order=ho,\n                    kind='bar', palette=sns.color_palette(flatui), size=5, aspect=3)\nss.set_ylabels('%s Not trained missense variants' % (disease,))\nss.set_xlabels('')\nss.set_titles('MPC>2 performance')\n\ng_df = (test_df[['gene', 'chrom', 'pos', 'ref', 'alt', 'PredictionStatusMPC>2']]\n        .groupby(['PredictionStatusMPC>2'])\n        .size().reset_index().rename(columns={0:'size'}))\ng_df.head()", "_____no_output_____" ], [ "g_df = (test_df[['gene', 'chrom', 'pos', 'ref', 'alt', 'PredictionStatusMPC_clinvar_limit_gene']]\n        .groupby(['gene', 'PredictionStatusMPC_clinvar_limit_gene'])\n        .size().reset_index().rename(columns={0:'size'}))\ndd = g_df.groupby('gene').sum().reset_index()\n\nsns.set(font_scale=1.75)\nss = sns.factorplot(x='gene', hue='PredictionStatusMPC_clinvar_limit_gene', y='size', data=g_df,\n                    kind='bar', palette=sns.color_palette(flatui), size=5, aspect=3, hue_order=ho)\nss.set_ylabels('%s ClinVar-trained (limit genes) missense variants' % (disease,))\nss.set_xlabels('')\n#ss.set_titles('MPC performance')\n#ss.savefig(\"../docs/plots/%s_cv_mpc_eval.png\" % (disease,))\n\ng_df = (test_df[['gene', 'chrom', 'pos', 'ref', 'alt', 'PredictionStatusMPC_clinvar_limit_gene']]\n        .groupby(['PredictionStatusMPC_clinvar_limit_gene'])\n        .size().reset_index().rename(columns={0:'size'}))\ng_df.head()", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
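The cardiomyopathy record above compares its five evaluation columns only through stacked bar counts. A sketch that collapses each column into a single accuracy number, assuming the `test_df` built in the per-gene loop:

```python
# Sketch: one accuracy value per PredictionStatus column.
def status_accuracy(df, col):
    correct = df[col].isin(['CorrectPath', 'CorrectBenign']).sum()
    return correct / len(df)

for col in ['PredictionStatusMPC', 'PredictionStatusMPC_clinvar',
            'PredictionStatusMPC_clinvar_limit_genes',
            'PredictionStatusMPC_clinvar_limit_gene',
            'PredictionStatusMPC>2']:
    print(col, round(status_accuracy(test_df, col), 3))
```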
ece04e3dfdb80a02a635b48a7bffd4ed7fb1cdba
3,814
ipynb
Jupyter Notebook
tutorials/tutorial.ipynb
tcmoore3/linear_solver
9c29e2fa0067c1105b0ae91dd16217ab2b9383bb
[ "MIT" ]
null
null
null
tutorials/tutorial.ipynb
tcmoore3/linear_solver
9c29e2fa0067c1105b0ae91dd16217ab2b9383bb
[ "MIT" ]
null
null
null
tutorials/tutorial.ipynb
tcmoore3/linear_solver
9c29e2fa0067c1105b0ae91dd16217ab2b9383bb
[ "MIT" ]
null
null
null
22.702381
208
0.501835
[ [ [ "This module helps solve systems of linear equations. There are several ways of doing this. The first is to just pass the coefficients as a list of lists. Say we want to solve the system of equations:\n$$\n\\begin{array}{c|c|c}\nx - y = 5\\\\\nx + y = -1\n\\end{array}\n$$\nThis is done with a simple call to linear_solver.solve_linear_system(), like so", "_____no_output_____" ] ], [ [ "import linear_solver as ls\n\nxs = ls.solve_linear_system(\n [[1, -1, 5], \n [1, 1, -1]])\nprint(xs)", "[[ 2.]\n [-3.]]\n" ] ], [ [ "Clearly, the solution set $(2, -3)$ satisfies the two equations above.\n\nIf a system of equations that has no unique solutions is given, a warning is printed and ```None``` is returned.", "_____no_output_____" ] ], [ [ "xs = ls.solve_linear_system(\n [[1, 1, 0],\n [2, 2, 0]])\nprint(xs)", "Determinant of coefficients matrix is 0. No unique solution.\nNone\n" ], [ "xs = ls.solve_linear_system(\n [[1, 1, 0],\n [2, 2, 1]])\nprint(xs)", "Determinant of coefficients matrix is 0. No unique solution.\nNone\n" ] ], [ [ "Additionally, the coefficients of the equation can be read from a text file, where expressions are evaluated before they are read. For example, consider the following system of equations:\n\n\n$$\n\\begin{array}{c|c|c|c}\n22m_1 + 22m_2 - m_3 = 0\\\\\n(0.1)(22)m_1 + (0.9)(22)m_2 - 0.6m_3 = 0\\\\\n\\frac{22}{0.68} m_1 + \\frac{22}{0.78} m_2 = (500)(3.785)\n\\end{array}\n$$\n\nWe can put these coefficients into a text file, ```'coefficients.txt'```, which has the contents\n<pre>\n\\# contents of coefficients.txt\n22 22 -1 0\n0.1\\*22 0.9\\*22 -0.6 0\n22/0.68 22/0.78 0 500\\*3.785\n</pre>\nand then pass that file to the solver function.\n", "_____no_output_____" ] ], [ [ "sol = ls.solve_linear_system('coefficients.txt')\nfor i, row in enumerate(sol):\n print('m_{0} = {1:.2f}'.format(i, row[0,0]))", "m_0 = 23.85\nm_1 = 39.74\nm_2 = 1399.00\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
ece0650536aebbfff575b9b8cb3df7f3f0565542
4038
ipynb
Jupyter Notebook
notebooks/bytopic/xarray/04_agg.ipynb
jukent/ncar-python-tutorial
85c899e865c1861777e99764ef697219355e0585
[ "CC-BY-4.0" ]
38
2019-09-10T05:00:52.000Z
2021-12-06T17:39:14.000Z
notebooks/bytopic/xarray/04_agg.ipynb
jukent/ncar-python-tutorial
85c899e865c1861777e99764ef697219355e0585
[ "CC-BY-4.0" ]
60
2019-08-28T22:34:17.000Z
2021-01-25T22:53:21.000Z
notebooks/bytopic/xarray/04_agg.ipynb
NCAR/ncar-pangeo-tutorial
54d536d40cfaf6f8990c58edb438286c19d32a67
[ "CC-BY-4.0" ]
22
2019-08-29T18:11:57.000Z
2021-01-07T02:23:46.000Z
24.621951
753
0.555225
[ [ [ "# Aggregation\n", "_____no_output_____" ], [ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Aggregation\" data-toc-modified-id=\"Aggregation-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Aggregation</a></span><ul class=\"toc-item\"><li><span><a href=\"#Learning-Objectives\" data-toc-modified-id=\"Learning-Objectives-1.1\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Learning Objectives</a></span></li><li><span><a href=\"#Aggregation-Methods\" data-toc-modified-id=\"Aggregation-Methods-1.2\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Aggregation Methods</a></span></li><li><span><a href=\"#Going-Further\" data-toc-modified-id=\"Going-Further-1.3\"><span class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>Going Further</a></span></li></ul></li></ul></div>", "_____no_output_____" ], [ "## Learning Objectives\n\n- Perform aggregation (reduction) along one or multiple dimensions of a DataArray or Dataset\n", "_____no_output_____" ], [ "## Aggregation Methods\n\nXarray supports many of the aggregations methods that numpy has. A partial list includes: all, any, argmax, argmin, max, mean, median, min, prod, sum, std, var.\n\nWhereas the numpy syntax would require scalar axes, xarray can use dimension names:", "_____no_output_____" ] ], [ [ "import xarray as xr", "_____no_output_____" ], [ "ds = xr.open_dataset(\"../../../data/air_temperature.nc\")", "_____no_output_____" ], [ "da = ds['air']\nda", "_____no_output_____" ], [ "da.mean()", "_____no_output_____" ], [ "da.mean(dim=['lat', 'lon'])", "_____no_output_____" ], [ "da.median(dim='time')", "_____no_output_____" ], [ "da.std(dim='time')", "_____no_output_____" ] ], [ [ "## Going Further", "_____no_output_____" ], [ "- [Xarray Docs - Aggregation](https://xarray.pydata.org/en/stable/computation.html#aggregation)", "_____no_output_____" ], [ "<div class=\"alert alert-block alert-success\">\n <p>Previous: <a href=\"03_indexing.ipynb\">Indexing</a></p>\n <p>Next: <a href=\"05_arithmetic.ipynb\">Arithmetic</a></p>\n</div>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ] ]
ece0655bcee3e8e62ad837dc2e7666846e792219
55222
ipynb
Jupyter Notebook
notebooks/hierarchy-model/markov-MI.ipynb
timsainb/LongRangeSequentialOrgPaper
0b059831ba153c3205291bbae66842d53b42a758
[ "BSD-3-Clause" ]
1
2020-09-21T14:08:29.000Z
2020-09-21T14:08:29.000Z
notebooks/hierarchy-model/markov-MI.ipynb
timsainb/LongRangeSequentialOrgPaper
0b059831ba153c3205291bbae66842d53b42a758
[ "BSD-3-Clause" ]
null
null
null
notebooks/hierarchy-model/markov-MI.ipynb
timsainb/LongRangeSequentialOrgPaper
0b059831ba153c3205291bbae66842d53b42a758
[ "BSD-3-Clause" ]
null
null
null
75.131973
20280
0.804878
[ [ [ "import numpy as np\nimport pandas as pd\nfrom tqdm.autonotebook import tqdm", "/mnt/cube/tsainbur/conda_envs/tpy3/lib/python3.6/site-packages/tqdm/autonotebook/__init__.py:14: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)\n \" (e.g. in jupyter console)\", TqdmExperimentalWarning)\n" ], [ "from childes_mi.utils.paths import DATA_DIR, FIGURE_DIR, ensure_dir\nfrom childes_mi.utils.general import flatten,save_fig", "_____no_output_____" ], [ "def gen_seq_markov(alphabet, probs, seq_len):\n \"\"\" like sample_sequence_MM, but uses a numpy matrix, no start and end states, and a set sequence length\n \"\"\"\n sequence = list(\n np.random.choice(alphabet, p=np.sum(probs, axis=0) / np.sum(probs), size=1)\n )\n for i in tqdm(range(seq_len)):\n sequence.append(np.random.choice(alphabet, p=probs[:, sequence[-1]], size=1)[0])\n return sequence", "_____no_output_____" ], [ "probs = np.array([[0.1, 0.9], [0.9, 0.1]])\nalphabet = [0, 1]", "_____no_output_____" ], [ "sequence = gen_seq_markov(alphabet, probs, seq_len = 100000)", "_____no_output_____" ], [ "sequence[:10]", "_____no_output_____" ], [ "from childes_mi.information_theory import mutual_information as mi", "/mnt/cube/tsainbur/conda_envs/tpy3/lib/python3.6/site-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.metrics.cluster.supervised module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.metrics.cluster. Anything that cannot be imported from sklearn.metrics.cluster is now part of the private API.\n warnings.warn(message, FutureWarning)\n/mnt/cube/tsainbur/conda_envs/tpy3/lib/python3.6/site-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.metrics.cluster.expected_mutual_info_fast module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.metrics.cluster. 
Anything that cannot be imported from sklearn.metrics.cluster is now part of the private API.\n warnings.warn(message, FutureWarning)\n" ], [ "distances = np.arange(1,101)", "_____no_output_____" ], [ "(MI, MI_var), (shuff_MI, shuff_MI_var) = mi.sequential_mutual_information(\n [sequence], distances=distances, n_jobs=-1\n)", "_____no_output_____" ], [ "MI_DF = pd.DataFrame(\n [[MI, MI_var, shuff_MI, shuff_MI_var, distances]],\n columns=[\"MI\", \"MI_var\", \"shuff_MI\", \"shuff_MI_var\", \"distances\"],\n)", "_____no_output_____" ], [ "MI-shuff_MI", "_____no_output_____" ], [ "from matplotlib.ticker import StrMethodFormatter, NullFormatter\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "nplot=50", "_____no_output_____" ], [ "row = MI_DF.iloc[0]", "_____no_output_____" ], [ "fig, ax = plt.subplots(figsize=(5,5))\nMI = row.MI-row.shuff_MI\nMI_var = row.MI_var\nnplot=50\nax.scatter(distances[:nplot], MI[:nplot], color='k')\nax.fill_between(distances[:nplot], MI[:nplot]-MI_var[:nplot], MI[:nplot]+MI_var[:nplot], alpha = 0, color= 'k')\n#ax.plot(mean_latent_distances[:nplot], MI[:nplot], alpha = 1, color= 'k', lw=5)\nax.set_yscale('log')\nax.set_xscale('log')\n#ax.set_xlim([1,50])\n\nax.set_xlabel('Sequential distance', fontsize=18)\nax.set_ylabel('Mutual Information (bits)', fontsize=18)\n\n\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\n ax.spines[axis].set_color('k')\nax.grid(False)\nax.tick_params(which='both', direction='in', labelsize=14, pad=10)\nax.tick_params(which='major', length=10, width =3)\nax.tick_params(which='minor', length=5, width =2)\n\nax.set_xticks([1,10,50])\nax.set_xticklabels(['1','10','50'])\nax.set_xlim([0.95,nplot])\nax.set_ylim([1e-4,.5])\n\n\nax.xaxis.set_major_formatter(StrMethodFormatter('{x:.0f}'))\nax.xaxis.set_minor_formatter(NullFormatter())", "_____no_output_____" ], [ "import lmfit", "_____no_output_____" ], [ "def residuals(y_true, y_model, x, logscaled=False):\n if logscaled:\n return np.abs(np.log(y_true) - np.log(y_model)) * (1 / (np.log(1 + x)))\n else:\n return np.abs(y_true - y_model)\n\ndef model_res(p, x, y, fit, model):\n if fit == \"lin\":\n return residuals(y, model(p, x), x)\n else:\n return residuals(y, model(p, x), x, logscaled=True)\n \n# fitting model\ndef fit_model_iter(model, n_iter=10, **kwargs):\n \"\"\" re-fit model n_iter times and choose the best fit\n chooses method based upon best-fit\n \"\"\"\n models = []\n AICs = []\n for iter in np.arange(n_iter):\n results_model = model.minimize(**kwargs)\n models.append(results_model)\n AICs.append(results_model.aic)\n return models[np.argmin(AICs)]\n\ndef get_y(model, results, x):\n return model({i: results.params[i].value for i in results.params}, x)\n\n\ndef exp_decay(p, x):\n return p[\"e_init\"] * np.exp(-x * p[\"e_decay_const\"]) + p[\"intercept\"]\n\n# decay types\ndef powerlaw_decay(p, x):\n return p[\"p_init\"] * x ** (p[\"p_decay_const\"]) + p[\"intercept\"]\n\np_exp = lmfit.Parameters()\np_exp.add_many(\n (\"e_init\", 0.5, True, 1e-10),\n (\"e_decay_const\", 0.1, True, 1e-10),\n (\"intercept\", 1e-5, True, 1e-10),\n)\n\np_power = lmfit.Parameters()\np_power.add_many(\n (\"p_init\", 0.5, True, 1e-10),\n (\"p_decay_const\", -0.5, True, -np.inf, -1e-10),\n (\"intercept\", 1e-5, True, 1e-10),\n)", "_____no_output_____" ], [ "fit='log'\nn_iter=1\nmethod=[\"nelder\", \"leastsq\", \"least-squares\"]", "_____no_output_____" ], [ "d = distances[:nplot]\nsig = MI[:nplot]", "_____no_output_____" ], [ "\nresults_exp_min = lmfit.Minimizer(\n 
model_res, p_exp, fcn_args=(d, sig, fit, exp_decay), nan_policy=\"omit\"\n)", "_____no_output_____" ], [ "results_exp = [\n fit_model_iter(results_exp_min, n_iter=n_iter, **{\"method\": meth})\n for meth in method\n]\nresults_exp = results_exp[np.argmin([i.aic for i in results_exp])]", "/mnt/cube/tsainbur/conda_envs/tpy3/lib/python3.6/site-packages/ipykernel_launcher.py:3: RuntimeWarning: divide by zero encountered in log\n This is separate from the ipykernel package so we can avoid doing imports until\n/mnt/cube/tsainbur/conda_envs/tpy3/lib/python3.6/site-packages/ipykernel_launcher.py:3: RuntimeWarning: invalid value encountered in log\n This is separate from the ipykernel package so we can avoid doing imports until\n" ], [ "results_exp", "_____no_output_____" ], [ "y_exp = get_y(exp_decay, results_exp, d)", "_____no_output_____" ], [ "fig, ax = plt.subplots(figsize=(5,5))\nMI = row.MI-row.shuff_MI\nMI_var = row.MI_var\nnplot=50\nax.scatter(distances[:nplot], MI[:nplot], color='k')\nax.fill_between(distances[:nplot], MI[:nplot]-MI_var[:nplot], MI[:nplot]+MI_var[:nplot], alpha = 0, color= 'k')\nax.plot(distances[:nplot], y_exp, alpha = 0.5, color= 'k', lw=5)\nax.set_yscale('log')\nax.set_xscale('log')\n#ax.set_xlim([1,50])\n\nax.set_xlabel('Sequential distance', fontsize=18)\nax.set_ylabel('Mutual Information (bits)', fontsize=18)\n\n\nfor axis in ['top','bottom','left','right']:\n ax.spines[axis].set_linewidth(3)\n ax.spines[axis].set_color('k')\nax.grid(False)\nax.tick_params(which='both', direction='in', labelsize=14, pad=10)\nax.tick_params(which='major', length=10, width =3)\nax.tick_params(which='minor', length=5, width =2)\n\nax.set_xticks([1,10,50])\nax.set_xticklabels(['1','10','50'])\nax.set_xlim([0.95,50])\nax.set_ylim([1e-4,.5])\n\n\nax.xaxis.set_major_formatter(StrMethodFormatter('{x:.0f}'))\nax.xaxis.set_minor_formatter(NullFormatter())\n\nensure_dir(FIGURE_DIR/'model_fig')\nsave_fig(FIGURE_DIR/ 'model_fig' / 'exp_decay')", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ece07afe6db6fcbd0da952c74209a8472b911e3e
73526
ipynb
Jupyter Notebook
assignment_4/notebooks/data_cleaning_solar_radiation.ipynb
AlexCubo/MLOps_Omdena
3f45dae1bb932d2f994a663b56fd247798f1957b
[ "MIT" ]
null
null
null
assignment_4/notebooks/data_cleaning_solar_radiation.ipynb
AlexCubo/MLOps_Omdena
3f45dae1bb932d2f994a663b56fd247798f1957b
[ "MIT" ]
null
null
null
assignment_4/notebooks/data_cleaning_solar_radiation.ipynb
AlexCubo/MLOps_Omdena
3f45dae1bb932d2f994a663b56fd247798f1957b
[ "MIT" ]
null
null
null
32.941756
207
0.323301
[ [ [ "# Solar irradiation - City of Sassari (Italy) from 2019 to 2021", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nfrom datetime import time", "_____no_output_____" ], [ "df = pd.read_csv(\"../datasets/raw/solar_irradiation_sassari_2019-2021.csv\")", "_____no_output_____" ], [ "df.head(5)", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ], [ "df = df[['PeriodEnd', 'AirTemp', 'Dhi', 'Dni', 'Ghi',\n 'PrecipitableWater', 'RelativeHumidity', 'SurfacePressure', \n 'WindDirection10m', 'WindSpeed10m']]", "_____no_output_____" ], [ "df.head(5)", "_____no_output_____" ], [ "old_names = ['PeriodEnd', 'AirTemp', 'Dhi', 'Dni', 'Ghi',\n 'PrecipitableWater', 'RelativeHumidity', 'SurfacePressure', \n 'WindDirection10m', 'WindSpeed10m']\nnew_names = ['End_period', 'Temp', 'Diffuse_irr', 'Direct_irr', 'Global_irr',\n 'Precipitation', 'Humidity', 'Pressure', \n 'Wind_direction', 'Wind_speed']", "_____no_output_____" ], [ "df.rename(columns=dict(zip(old_names, new_names)), inplace=True)", "_____no_output_____" ], [ "df.head(5)", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 20277 entries, 0 to 20276\nData columns (total 10 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 End_period 20277 non-null object \n 1 Temp 20277 non-null float64\n 2 Diffuse_irr 20277 non-null int64 \n 3 Direct_irr 20277 non-null int64 \n 4 Global_irr 20277 non-null int64 \n 5 Precipitation 20277 non-null float64\n 6 Humidity 20277 non-null float64\n 7 Pressure 20277 non-null float64\n 8 Wind_direction 20277 non-null int64 \n 9 Wind_speed 20277 non-null float64\ndtypes: float64(5), int64(4), object(1)\nmemory usage: 1.5+ MB\n" ], [ "# change data type in the dataframe\ndf['Temp'] = df['Temp'].astype(float)\ndf['Pressure'] = df['Pressure'].astype(int)\ndf['Humidity'] = df['Humidity'].astype(int)\n#convert End_period to a datetime with a minutely frequency ('T'):\n#to_datetime converts to date, to_period selects the chosen format and to_timestamp converts all in timestamp\ndf['End_period'] = pd.to_datetime(df['End_period']).dt.to_period('T').dt.to_timestamp()", "/home/cubo/anaconda3/envs/MLOps_Omdena/lib/python3.9/site-packages/pandas/core/arrays/datetimes.py:1143: UserWarning: Converting to PeriodArray/Index representation will drop timezone information.\n warnings.warn(\n" ] ], [ [ "* Difference between datetime and timestamp: \n are both time of the format YYYY-MM-DD HH:MM:SS, but timestamp includes also the timezone ", "_____no_output_____" ] ], [ [ "df.set_index('End_period', inplace=True)", "_____no_output_____" ], [ "df['year'] = df.index.year\ndf['month'] = df.index.month\ndf['hour'] = df.index.hour", "_____no_output_____" ], [ "# change the midnight from 0 to hour 24, to have a meaningfull value\ndf['hour'] = df['hour'].replace(0,24)", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "#df.info()", "_____no_output_____" ], [ "#create a feature Sunrise with the hour of the first non-null irradiation of the day\ndef set_sunrise_time(df):\n '''Create a column Sunrise and set the hour of the moment when the sun rises. 
All other times are 0s'''\n df['Sunrise'] = df.loc[(df['Global_irr'] > 0) & (df['Global_irr'].shift(1)==0), 'hour']\n df['Sunrise'] = df['Sunrise'].fillna(0)\n df['Sunrise'] = df['Sunrise'].astype(int)\n return df", "_____no_output_____" ], [ "# apply set_sunrise_time\nset_sunrise_time(df)", "_____no_output_____" ], [ "df.Sunrise.unique()", "_____no_output_____" ], [ "#create a feature Sunset with the hour of the last non-null irradiation of the day\ndef set_sunset_time(df):\n '''Create a column Sunset and set the hour of the moment when the sun goes down. All other times are 0s'''\n df['Sunset'] = df.loc[(df['Global_irr'] > 0) & (df['Global_irr'].shift(-1) == 0), 'hour']\n df['Sunset'] = df['Sunset'].fillna(0)\n df['Sunset'] = df['Sunset'].astype(int)\n return df", "_____no_output_____" ], [ "# apply set_sunset_time\nset_sunset_time(df)", "_____no_output_____" ], [ "df.Sunset.unique()", "_____no_output_____" ], [ "# Retain only observations in which the Global_irr > 0\ndf = df.loc[~(df['Global_irr'] == 0), :]", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nDatetimeIndex: 10635 entries, 2019-08-31 05:00:00 to 2021-12-22 16:00:00\nData columns (total 14 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Temp 10635 non-null float64\n 1 Diffuse_irr 10635 non-null int64 \n 2 Direct_irr 10635 non-null int64 \n 3 Global_irr 10635 non-null int64 \n 4 Precipitation 10635 non-null float64\n 5 Humidity 10635 non-null int64 \n 6 Pressure 10635 non-null int64 \n 7 Wind_direction 10635 non-null int64 \n 8 Wind_speed 10635 non-null float64\n 9 year 10635 non-null int64 \n 10 month 10635 non-null int64 \n 11 hour 10635 non-null int64 \n 12 Sunrise 10635 non-null int64 \n 13 Sunset 10635 non-null int64 \ndtypes: float64(3), int64(11)\nmemory usage: 1.2 MB\n" ], [ "df.sample(10)", "_____no_output_____" ], [ "# Creating the dataFrame that will be used for prediction. \n# We use a time discretization of one day\ndaily_df = df[['month', 'year', 'Temp', 'Precipitation', \n 'Humidity', 'Pressure', 'Wind_direction',\n 'Wind_speed', 'Diffuse_irr', 'Direct_irr', 'Global_irr']]", "_____no_output_____" ], [ "daily_df.head()", "_____no_output_____" ], [ "daily_df = daily_df.groupby(by=daily_df.index.date).mean().round(1)", "_____no_output_____" ], [ "old_cols = ['Temp', 'Precipitation', 'Humidity', 'Pressure', 'Wind_direction', \n 'Wind_speed', 'Diffuse_irr', 'Direct_irr', 'Global_irr']\ndaily_cols = ['daily_temp', 'daily_rain', 'daily_hum', 'daily_press', 'daily_windDir', \n 'daily_windSp', 'daily_DHI', 'daily_DNI','daily_GHI']", "_____no_output_____" ], [ "daily_df.rename(columns=dict(zip(old_cols,daily_cols)), inplace=True)", "_____no_output_____" ], [ "daily_df.head()", "_____no_output_____" ], [ "daily_df['month'] = daily_df['month'].astype(int)\ndaily_df['year'] = daily_df['year'].astype(int)", "_____no_output_____" ], [ "daily_df.shape", "_____no_output_____" ], [ "daily_df.to_csv('../datasets/cleaned/cleaned_daily_irr.csv')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ece082e5d66ed703781c91849da5ccae64547214
754251
ipynb
Jupyter Notebook
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-20.ipynb
pvieito/Radar-STATS
9ff991a4db776259bc749a823ee6f0b0c0d38108
[ "Apache-2.0" ]
9
2020-10-14T16:58:32.000Z
2021-10-05T12:01:56.000Z
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-20.ipynb
pvieito/Radar-STATS
9ff991a4db776259bc749a823ee6f0b0c0d38108
[ "Apache-2.0" ]
3
2020-10-08T04:48:35.000Z
2020-10-10T20:46:58.000Z
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-20.ipynb
Radar-STATS/Radar-STATS
61d8b3529f6bbf4576d799e340feec5b183338a3
[ "Apache-2.0" ]
3
2020-09-27T07:39:26.000Z
2020-10-02T07:48:56.000Z
85.671399
142596
0.73009
[ [ [ "# RadarCOVID-Report", "_____no_output_____" ], [ "## Data Extraction", "_____no_output_____" ] ], [ [ "import datetime\nimport json\nimport logging\nimport os\nimport shutil\nimport tempfile\nimport textwrap\nimport uuid\n\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker\nimport numpy as np\nimport pandas as pd\nimport pycountry\nimport retry\nimport seaborn as sns\n\n%matplotlib inline", "_____no_output_____" ], [ "current_working_directory = os.environ.get(\"PWD\")\nif current_working_directory:\n os.chdir(current_working_directory)\n\nsns.set()\nmatplotlib.rcParams[\"figure.figsize\"] = (15, 6)\n\nextraction_datetime = datetime.datetime.utcnow()\nextraction_date = extraction_datetime.strftime(\"%Y-%m-%d\")\nextraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)\nextraction_previous_date = extraction_previous_datetime.strftime(\"%Y-%m-%d\")\nextraction_date_with_hour = datetime.datetime.utcnow().strftime(\"%Y-%m-%d@%H\")\ncurrent_hour = datetime.datetime.utcnow().hour\nare_today_results_partial = current_hour != 23", "_____no_output_____" ] ], [ [ "### Constants", "_____no_output_____" ] ], [ [ "from Modules.ExposureNotification import exposure_notification_io\n\nspain_region_country_code = \"ES\"\ngermany_region_country_code = \"DE\"\n\ndefault_backend_identifier = spain_region_country_code\n\nbackend_generation_days = 7 * 2\ndaily_summary_days = 7 * 4 * 3\ndaily_plot_days = 7 * 4\ntek_dumps_load_limit = daily_summary_days + 1", "_____no_output_____" ] ], [ [ "### Parameters", "_____no_output_____" ] ], [ [ "environment_backend_identifier = os.environ.get(\"RADARCOVID_REPORT__BACKEND_IDENTIFIER\")\nif environment_backend_identifier:\n report_backend_identifier = environment_backend_identifier\nelse:\n report_backend_identifier = default_backend_identifier\nreport_backend_identifier", "_____no_output_____" ], [ "environment_enable_multi_backend_download = \\\n os.environ.get(\"RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD\")\nif environment_enable_multi_backend_download:\n report_backend_identifiers = None\nelse:\n report_backend_identifiers = [report_backend_identifier]\n\nreport_backend_identifiers", "_____no_output_____" ], [ "environment_invalid_shared_diagnoses_dates = \\\n os.environ.get(\"RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES\")\nif environment_invalid_shared_diagnoses_dates:\n invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(\",\")\nelse:\n invalid_shared_diagnoses_dates = []\n\ninvalid_shared_diagnoses_dates", "_____no_output_____" ] ], [ [ "### COVID-19 Cases", "_____no_output_____" ] ], [ [ "report_backend_client = \\\n exposure_notification_io.get_backend_client_with_identifier(\n backend_identifier=report_backend_identifier)", "_____no_output_____" ], [ "@retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10))\ndef download_cases_dataframe():\n return pd.read_csv(\"https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv\")\n\nconfirmed_df_ = download_cases_dataframe()\nconfirmed_df_.iloc[0]", "_____no_output_____" ], [ "confirmed_df = confirmed_df_.copy()\nconfirmed_df = confirmed_df[[\"date\", \"new_cases\", \"iso_code\"]]\nconfirmed_df.rename(\n columns={\n \"date\": \"sample_date\",\n \"iso_code\": \"country_code\",\n },\n inplace=True)\n\ndef convert_iso_alpha_3_to_alpha_2(x):\n try:\n return pycountry.countries.get(alpha_3=x).alpha_2\n except Exception as e:\n logging.info(f\"Error converting country ISO Alpha 3 code '{x}': {repr(e)}\")\n 
return None\n\nconfirmed_df[\"country_code\"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2)\nconfirmed_df.dropna(inplace=True)\nconfirmed_df[\"sample_date\"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True)\nconfirmed_df[\"sample_date\"] = confirmed_df.sample_date.dt.strftime(\"%Y-%m-%d\")\nconfirmed_df.sort_values(\"sample_date\", inplace=True)\nconfirmed_df.tail()", "_____no_output_____" ], [ "confirmed_days = pd.date_range(\n start=confirmed_df.iloc[0].sample_date,\n end=extraction_datetime)\nconfirmed_days_df = pd.DataFrame(data=confirmed_days, columns=[\"sample_date\"])\nconfirmed_days_df[\"sample_date_string\"] = \\\n confirmed_days_df.sample_date.dt.strftime(\"%Y-%m-%d\")\nconfirmed_days_df.tail()", "_____no_output_____" ], [ "def sort_source_regions_for_display(source_regions: list) -> list:\n if report_backend_identifier in source_regions:\n source_regions = [report_backend_identifier] + \\\n list(sorted(set(source_regions).difference([report_backend_identifier])))\n else:\n source_regions = list(sorted(source_regions))\n return source_regions", "_____no_output_____" ], [ "report_source_regions = report_backend_client.source_regions_for_date(\n date=extraction_datetime.date())\nreport_source_regions = sort_source_regions_for_display(\n source_regions=report_source_regions)\nreport_source_regions", "_____no_output_____" ], [ "def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None):\n source_regions_at_date_df = confirmed_days_df.copy()\n source_regions_at_date_df[\"source_regions_at_date\"] = \\\n source_regions_at_date_df.sample_date.apply(\n lambda x: source_regions_for_date_function(date=x))\n source_regions_at_date_df.sort_values(\"sample_date\", inplace=True)\n source_regions_at_date_df[\"_source_regions_group\"] = source_regions_at_date_df. 
\\\n source_regions_at_date.apply(lambda x: \",\".join(sort_source_regions_for_display(x)))\n source_regions_at_date_df.tail()\n\n #%%\n\n source_regions_for_summary_df_ = \\\n source_regions_at_date_df[[\"sample_date\", \"_source_regions_group\"]].copy()\n source_regions_for_summary_df_.rename(columns={\"_source_regions_group\": \"source_regions\"}, inplace=True)\n source_regions_for_summary_df_.tail()\n\n #%%\n\n confirmed_output_columns = [\"sample_date\", \"new_cases\", \"covid_cases\"]\n confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns)\n\n for source_regions_group, source_regions_group_series in \\\n source_regions_at_date_df.groupby(\"_source_regions_group\"):\n source_regions_set = set(source_regions_group.split(\",\"))\n confirmed_source_regions_set_df = \\\n confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy()\n confirmed_source_regions_group_df = \\\n confirmed_source_regions_set_df.groupby(\"sample_date\").new_cases.sum() \\\n .reset_index().sort_values(\"sample_date\")\n confirmed_source_regions_group_df = \\\n confirmed_source_regions_group_df.merge(\n confirmed_days_df[[\"sample_date_string\"]].rename(\n columns={\"sample_date_string\": \"sample_date\"}),\n how=\"right\")\n confirmed_source_regions_group_df[\"new_cases\"] = \\\n confirmed_source_regions_group_df[\"new_cases\"].clip(lower=0)\n confirmed_source_regions_group_df[\"covid_cases\"] = \\\n confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round()\n confirmed_source_regions_group_df = \\\n confirmed_source_regions_group_df[confirmed_output_columns]\n confirmed_source_regions_group_df = confirmed_source_regions_group_df.replace(0, np.nan)\n confirmed_source_regions_group_df.fillna(method=\"ffill\", inplace=True)\n confirmed_source_regions_group_df = \\\n confirmed_source_regions_group_df[\n confirmed_source_regions_group_df.sample_date.isin(\n source_regions_group_series.sample_date_string)]\n confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df)\n\n result_df = confirmed_output_df.copy()\n result_df.tail()\n\n #%%\n\n result_df.rename(columns={\"sample_date\": \"sample_date_string\"}, inplace=True)\n result_df = confirmed_days_df[[\"sample_date_string\"]].merge(result_df, how=\"left\")\n result_df.sort_values(\"sample_date_string\", inplace=True)\n result_df.fillna(method=\"ffill\", inplace=True)\n result_df.tail()\n\n #%%\n\n result_df[[\"new_cases\", \"covid_cases\"]].plot()\n\n if columns_suffix:\n result_df.rename(\n columns={\n \"new_cases\": \"new_cases_\" + columns_suffix,\n \"covid_cases\": \"covid_cases_\" + columns_suffix},\n inplace=True)\n return result_df, source_regions_for_summary_df_", "_____no_output_____" ], [ "confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe(\n report_backend_client.source_regions_for_date)\nconfirmed_es_df, _ = get_cases_dataframe(\n lambda date: [spain_region_country_code],\n columns_suffix=spain_region_country_code.lower())", "_____no_output_____" ] ], [ [ "### Extract API TEKs", "_____no_output_____" ] ], [ [ "raw_zip_path_prefix = \"Data/TEKs/Raw/\"\nbase_backend_identifiers = [report_backend_identifier]\nmulti_backend_exposure_keys_df = \\\n exposure_notification_io.download_exposure_keys_from_backends(\n backend_identifiers=report_backend_identifiers,\n generation_days=backend_generation_days,\n fail_on_error_backend_identifiers=base_backend_identifiers,\n save_raw_zip_path_prefix=raw_zip_path_prefix)\nmulti_backend_exposure_keys_df[\"region\"] = 
multi_backend_exposure_keys_df[\"backend_identifier\"]\nmulti_backend_exposure_keys_df.rename(\n columns={\n \"generation_datetime\": \"sample_datetime\",\n \"generation_date_string\": \"sample_date_string\",\n },\n inplace=True)\nmulti_backend_exposure_keys_df.head()", "WARNING:root:NoKeysFoundException(\"No exposure keys found on endpoint 'https://radarcovid.covid19.gob.es/dp3t/v2/gaen/exposed/?originCountries=PT' (parameters: {'origin_country': 'PT', 'endpoint_identifier_components': ['PT'], 'backend_identifier': 'PT@ES', 'server_endpoint_url': 'https://radarcovid.covid19.gob.es/dp3t'}).\")\n" ], [ "early_teks_df = multi_backend_exposure_keys_df[\n multi_backend_exposure_keys_df.rolling_period < 144].copy()\nearly_teks_df[\"rolling_period_in_hours\"] = early_teks_df.rolling_period / 6\nearly_teks_df[early_teks_df.sample_date_string != extraction_date] \\\n .rolling_period_in_hours.hist(bins=list(range(24)))", "_____no_output_____" ], [ "early_teks_df[early_teks_df.sample_date_string == extraction_date] \\\n .rolling_period_in_hours.hist(bins=list(range(24)))", "_____no_output_____" ], [ "multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[\n \"sample_date_string\", \"region\", \"key_data\"]]\nmulti_backend_exposure_keys_df.head()", "_____no_output_____" ], [ "active_regions = \\\n multi_backend_exposure_keys_df.groupby(\"region\").key_data.nunique().sort_values().index.unique().tolist()\nactive_regions", "_____no_output_____" ], [ "multi_backend_summary_df = multi_backend_exposure_keys_df.groupby(\n [\"sample_date_string\", \"region\"]).key_data.nunique().reset_index() \\\n .pivot(index=\"sample_date_string\", columns=\"region\") \\\n .sort_index(ascending=False)\nmulti_backend_summary_df.rename(\n columns={\"key_data\": \"shared_teks_by_generation_date\"},\n inplace=True)\nmulti_backend_summary_df.rename_axis(\"sample_date\", inplace=True)\nmulti_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int)\nmulti_backend_summary_df = multi_backend_summary_df.head(backend_generation_days)\nmulti_backend_summary_df.head()", "_____no_output_____" ], [ "def compute_keys_cross_sharing(x):\n teks_x = x.key_data_x.item()\n common_teks = set(teks_x).intersection(x.key_data_y.item())\n common_teks_fraction = len(common_teks) / len(teks_x)\n return pd.Series(dict(\n common_teks=common_teks,\n common_teks_fraction=common_teks_fraction,\n ))\n\nmulti_backend_exposure_keys_by_region_df = \\\n multi_backend_exposure_keys_df.groupby(\"region\").key_data.unique().reset_index()\nmulti_backend_exposure_keys_by_region_df[\"_merge\"] = True\nmulti_backend_exposure_keys_by_region_combination_df = \\\n multi_backend_exposure_keys_by_region_df.merge(\n multi_backend_exposure_keys_by_region_df, on=\"_merge\")\nmulti_backend_exposure_keys_by_region_combination_df.drop(\n columns=[\"_merge\"], inplace=True)\nif multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1:\n multi_backend_exposure_keys_by_region_combination_df = \\\n multi_backend_exposure_keys_by_region_combination_df[\n multi_backend_exposure_keys_by_region_combination_df.region_x !=\n multi_backend_exposure_keys_by_region_combination_df.region_y]\nmulti_backend_exposure_keys_cross_sharing_df = \\\n multi_backend_exposure_keys_by_region_combination_df \\\n .groupby([\"region_x\", \"region_y\"]) \\\n .apply(compute_keys_cross_sharing) \\\n .reset_index()\nmulti_backend_cross_sharing_summary_df = \\\n multi_backend_exposure_keys_cross_sharing_df.pivot_table(\n values=[\"common_teks_fraction\"],\n 
columns=\"region_x\",\n index=\"region_y\",\n aggfunc=lambda x: x.item())\nmulti_backend_cross_sharing_summary_df", "<ipython-input-21-4e21708c19d8>:2: FutureWarning: `item` has been deprecated and will be removed in a future version\n teks_x = x.key_data_x.item()\n<ipython-input-21-4e21708c19d8>:3: FutureWarning: `item` has been deprecated and will be removed in a future version\n common_teks = set(teks_x).intersection(x.key_data_y.item())\n" ], [ "multi_backend_without_active_region_exposure_keys_df = \\\n multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier]\nmulti_backend_without_active_region = \\\n multi_backend_without_active_region_exposure_keys_df.groupby(\"region\").key_data.nunique().sort_values().index.unique().tolist()\nmulti_backend_without_active_region", "_____no_output_____" ], [ "exposure_keys_summary_df = multi_backend_exposure_keys_df[\n multi_backend_exposure_keys_df.region == report_backend_identifier]\nexposure_keys_summary_df.drop(columns=[\"region\"], inplace=True)\nexposure_keys_summary_df = \\\n exposure_keys_summary_df.groupby([\"sample_date_string\"]).key_data.nunique().to_frame()\nexposure_keys_summary_df = \\\n exposure_keys_summary_df.reset_index().set_index(\"sample_date_string\")\nexposure_keys_summary_df.sort_index(ascending=False, inplace=True)\nexposure_keys_summary_df.rename(columns={\"key_data\": \"shared_teks_by_generation_date\"}, inplace=True)\nexposure_keys_summary_df.head()", "/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/core/frame.py:4110: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n return super().drop(\n" ] ], [ [ "### Dump API TEKs", "_____no_output_____" ] ], [ [ "tek_list_df = multi_backend_exposure_keys_df[\n [\"sample_date_string\", \"region\", \"key_data\"]].copy()\ntek_list_df[\"key_data\"] = tek_list_df[\"key_data\"].apply(str)\ntek_list_df.rename(columns={\n \"sample_date_string\": \"sample_date\",\n \"key_data\": \"tek_list\"}, inplace=True)\ntek_list_df = tek_list_df.groupby(\n [\"sample_date\", \"region\"]).tek_list.unique().reset_index()\ntek_list_df[\"extraction_date\"] = extraction_date\ntek_list_df[\"extraction_date_with_hour\"] = extraction_date_with_hour\n\ntek_list_path_prefix = \"Data/TEKs/\"\ntek_list_current_path = tek_list_path_prefix + f\"/Current/RadarCOVID-TEKs.json\"\ntek_list_daily_path = tek_list_path_prefix + f\"Daily/RadarCOVID-TEKs-{extraction_date}.json\"\ntek_list_hourly_path = tek_list_path_prefix + f\"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json\"\n\nfor path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:\n os.makedirs(os.path.dirname(path), exist_ok=True)\n\ntek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier]\ntek_list_base_df.drop(columns=[\"extraction_date\", \"extraction_date_with_hour\"]).to_json(\n tek_list_current_path,\n lines=True, orient=\"records\")\ntek_list_base_df.drop(columns=[\"extraction_date_with_hour\"]).to_json(\n tek_list_daily_path,\n lines=True, orient=\"records\")\ntek_list_base_df.to_json(\n tek_list_hourly_path,\n lines=True, orient=\"records\")\ntek_list_base_df.head()", "_____no_output_____" ] ], [ [ "### Load TEK Dumps", "_____no_output_____" ] ], [ [ "import glob\n\ndef load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:\n extracted_teks_df = 
pd.DataFrame(columns=[\"region\"])\n file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + \"/RadarCOVID-TEKs-*.json\"))))\n if limit:\n file_paths = file_paths[:limit]\n for file_path in file_paths:\n logging.info(f\"Loading TEKs from '{file_path}'...\")\n iteration_extracted_teks_df = pd.read_json(file_path, lines=True)\n extracted_teks_df = extracted_teks_df.append(\n iteration_extracted_teks_df, sort=False)\n extracted_teks_df[\"region\"] = \\\n extracted_teks_df.region.fillna(spain_region_country_code).copy()\n if region:\n extracted_teks_df = \\\n extracted_teks_df[extracted_teks_df.region == region]\n return extracted_teks_df", "_____no_output_____" ], [ "daily_extracted_teks_df = load_extracted_teks(\n mode=\"Daily\",\n region=report_backend_identifier,\n limit=tek_dumps_load_limit)\ndaily_extracted_teks_df.head()", "_____no_output_____" ], [ "exposure_keys_summary_df_ = daily_extracted_teks_df \\\n .sort_values(\"extraction_date\", ascending=False) \\\n .groupby(\"sample_date\").tek_list.first() \\\n .to_frame()\nexposure_keys_summary_df_.index.name = \"sample_date_string\"\nexposure_keys_summary_df_[\"tek_list\"] = \\\n exposure_keys_summary_df_.tek_list.apply(len)\nexposure_keys_summary_df_ = exposure_keys_summary_df_ \\\n .rename(columns={\"tek_list\": \"shared_teks_by_generation_date\"}) \\\n .sort_index(ascending=False)\nexposure_keys_summary_df = exposure_keys_summary_df_\nexposure_keys_summary_df.head()", "_____no_output_____" ] ], [ [ "### Daily New TEKs", "_____no_output_____" ] ], [ [ "tek_list_df = daily_extracted_teks_df.groupby(\"extraction_date\").tek_list.apply(\n lambda x: set(sum(x, []))).reset_index()\ntek_list_df = tek_list_df.set_index(\"extraction_date\").sort_index(ascending=True)\ntek_list_df.head()", "_____no_output_____" ], [ "def compute_teks_by_generation_and_upload_date(date):\n day_new_teks_set_df = tek_list_df.copy().diff()\n try:\n day_new_teks_set = day_new_teks_set_df[\n day_new_teks_set_df.index == date].tek_list.item()\n except ValueError:\n day_new_teks_set = None\n if pd.isna(day_new_teks_set):\n day_new_teks_set = set()\n day_new_teks_df = daily_extracted_teks_df[\n daily_extracted_teks_df.extraction_date == date].copy()\n day_new_teks_df[\"shared_teks\"] = \\\n day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))\n day_new_teks_df[\"shared_teks\"] = \\\n day_new_teks_df.shared_teks.apply(len)\n day_new_teks_df[\"upload_date\"] = date\n day_new_teks_df.rename(columns={\"sample_date\": \"generation_date\"}, inplace=True)\n day_new_teks_df = day_new_teks_df[\n [\"upload_date\", \"generation_date\", \"shared_teks\"]]\n day_new_teks_df[\"generation_to_upload_days\"] = \\\n (pd.to_datetime(day_new_teks_df.upload_date) -\n pd.to_datetime(day_new_teks_df.generation_date)).dt.days\n day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]\n return day_new_teks_df\n\nshared_teks_generation_to_upload_df = pd.DataFrame()\nfor upload_date in daily_extracted_teks_df.extraction_date.unique():\n shared_teks_generation_to_upload_df = \\\n shared_teks_generation_to_upload_df.append(\n compute_teks_by_generation_and_upload_date(date=upload_date))\nshared_teks_generation_to_upload_df \\\n .sort_values([\"upload_date\", \"generation_date\"], ascending=False, inplace=True)\nshared_teks_generation_to_upload_df.tail()", "<ipython-input-29-827222b35590>:4: FutureWarning: `item` has been deprecated and will be removed in a future version\n day_new_teks_set = day_new_teks_set_df[\n" ], [ "today_new_teks_df = \\\n 
shared_teks_generation_to_upload_df[\n shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()\ntoday_new_teks_df.tail()", "_____no_output_____" ], [ "if not today_new_teks_df.empty:\n today_new_teks_df.set_index(\"generation_to_upload_days\") \\\n .sort_index().shared_teks.plot.bar()", "_____no_output_____" ], [ "generation_to_upload_period_pivot_df = \\\n shared_teks_generation_to_upload_df[\n [\"upload_date\", \"generation_to_upload_days\", \"shared_teks\"]] \\\n .pivot(index=\"upload_date\", columns=\"generation_to_upload_days\") \\\n .sort_index(ascending=False).fillna(0).astype(int) \\\n .droplevel(level=0, axis=1)\ngeneration_to_upload_period_pivot_df.head()", "_____no_output_____" ], [ "new_tek_df = tek_list_df.diff().tek_list.apply(\n lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()\nnew_tek_df.rename(columns={\n \"tek_list\": \"shared_teks_by_upload_date\",\n \"extraction_date\": \"sample_date_string\",}, inplace=True)\nnew_tek_df.tail()", "_____no_output_____" ], [ "shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[\n shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \\\n [[\"upload_date\", \"shared_teks\"]].rename(\n columns={\n \"upload_date\": \"sample_date_string\",\n \"shared_teks\": \"shared_teks_uploaded_on_generation_date\",\n })\nshared_teks_uploaded_on_generation_date_df.head()", "_____no_output_____" ], [ "estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \\\n .groupby([\"upload_date\"]).shared_teks.max().reset_index() \\\n .sort_values([\"upload_date\"], ascending=False) \\\n .rename(columns={\n \"upload_date\": \"sample_date_string\",\n \"shared_teks\": \"shared_diagnoses\",\n })\ninvalid_shared_diagnoses_dates_mask = \\\n estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates)\nestimated_shared_diagnoses_df[invalid_shared_diagnoses_dates_mask] = 0\nestimated_shared_diagnoses_df.head()", "_____no_output_____" ] ], [ [ "### Hourly New TEKs", "_____no_output_____" ] ], [ [ "hourly_extracted_teks_df = load_extracted_teks(\n mode=\"Hourly\", region=report_backend_identifier, limit=25)\nhourly_extracted_teks_df.head()", "_____no_output_____" ], [ "hourly_new_tek_count_df = hourly_extracted_teks_df \\\n .groupby(\"extraction_date_with_hour\").tek_list. 
\\\n apply(lambda x: set(sum(x, []))).reset_index().copy()\nhourly_new_tek_count_df = hourly_new_tek_count_df.set_index(\"extraction_date_with_hour\") \\\n .sort_index(ascending=True)\n\nhourly_new_tek_count_df[\"new_tek_list\"] = hourly_new_tek_count_df.tek_list.diff()\nhourly_new_tek_count_df[\"new_tek_count\"] = hourly_new_tek_count_df.new_tek_list.apply(\n lambda x: len(x) if not pd.isna(x) else 0)\nhourly_new_tek_count_df.rename(columns={\n \"new_tek_count\": \"shared_teks_by_upload_date\"}, inplace=True)\nhourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[\n \"extraction_date_with_hour\", \"shared_teks_by_upload_date\"]]\nhourly_new_tek_count_df.head()", "_____no_output_____" ], [ "hourly_summary_df = hourly_new_tek_count_df.copy()\nhourly_summary_df.set_index(\"extraction_date_with_hour\", inplace=True)\nhourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()\nhourly_summary_df[\"datetime_utc\"] = pd.to_datetime(\n hourly_summary_df.extraction_date_with_hour, format=\"%Y-%m-%d@%H\")\nhourly_summary_df.set_index(\"datetime_utc\", inplace=True)\nhourly_summary_df = hourly_summary_df.tail(-1)\nhourly_summary_df.head()", "_____no_output_____" ] ], [ [ "### Official Statistics", "_____no_output_____" ] ], [ [ "import requests\nimport pandas.io.json\n\nofficial_stats_response = requests.get(\"https://radarcovid.covid19.gob.es/kpi/statistics/basics\")\nofficial_stats_response.raise_for_status()\nofficial_stats_df_ = pandas.io.json.json_normalize(official_stats_response.json())", "_____no_output_____" ], [ "official_stats_df = official_stats_df_.copy()\nofficial_stats_df[\"date\"] = pd.to_datetime(official_stats_df[\"date\"], dayfirst=True)\nofficial_stats_df.head()", "_____no_output_____" ], [ "official_stats_column_map = {\n \"date\": \"sample_date\",\n \"applicationsDownloads.totalAcummulated\": \"app_downloads_es_accumulated\",\n \"communicatedContagions.totalAcummulated\": \"shared_diagnoses_es_accumulated\",\n}\naccumulated_suffix = \"_accumulated\"\naccumulated_values_columns = \\\n list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values()))\ninterpolated_values_columns = \\\n list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns))", "_____no_output_____" ], [ "official_stats_df = \\\n official_stats_df[official_stats_column_map.keys()] \\\n .rename(columns=official_stats_column_map)\nofficial_stats_df[\"extraction_date\"] = extraction_date\nofficial_stats_df.head()", "_____no_output_____" ], [ "official_stats_path = \"Data/Statistics/Current/RadarCOVID-Statistics.json\"\nprevious_official_stats_df = pd.read_json(official_stats_path, orient=\"records\", lines=True)\nprevious_official_stats_df[\"sample_date\"] = pd.to_datetime(previous_official_stats_df[\"sample_date\"], dayfirst=True)\nofficial_stats_df = official_stats_df.append(previous_official_stats_df)\nofficial_stats_df.head()", "_____no_output_____" ], [ "official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)]\nofficial_stats_df.sort_values(\"extraction_date\", ascending=False, inplace=True)\nofficial_stats_df.drop_duplicates(subset=[\"sample_date\"], keep=\"first\", inplace=True)\nofficial_stats_df.head()", "_____no_output_____" ], [ "official_stats_stored_df = official_stats_df.copy()\nofficial_stats_stored_df[\"sample_date\"] = official_stats_stored_df.sample_date.dt.strftime(\"%Y-%m-%d\")\nofficial_stats_stored_df.to_json(official_stats_path, orient=\"records\", lines=True)", "_____no_output_____" 
], [ "official_stats_df.drop(columns=[\"extraction_date\"], inplace=True)\nofficial_stats_df = confirmed_days_df.merge(official_stats_df, how=\"left\")\nofficial_stats_df.sort_values(\"sample_date\", ascending=False, inplace=True)\nofficial_stats_df.head()", "_____no_output_____" ], [ "official_stats_df[accumulated_values_columns] = \\\n official_stats_df[accumulated_values_columns] \\\n .astype(float).interpolate(limit_area=\"inside\")\nofficial_stats_df[interpolated_values_columns] = \\\n official_stats_df[accumulated_values_columns].diff(periods=-1)\nofficial_stats_df.drop(columns=\"sample_date\", inplace=True)\nofficial_stats_df.head()", "_____no_output_____" ] ], [ [ "### Data Merge", "_____no_output_____" ] ], [ [ "result_summary_df = exposure_keys_summary_df.merge(\n new_tek_df, on=[\"sample_date_string\"], how=\"outer\")\nresult_summary_df.head()", "_____no_output_____" ], [ "result_summary_df = result_summary_df.merge(\n shared_teks_uploaded_on_generation_date_df, on=[\"sample_date_string\"], how=\"outer\")\nresult_summary_df.head()", "_____no_output_____" ], [ "result_summary_df = result_summary_df.merge(\n estimated_shared_diagnoses_df, on=[\"sample_date_string\"], how=\"outer\")\nresult_summary_df.head()", "_____no_output_____" ], [ "result_summary_df = result_summary_df.merge(\n official_stats_df, on=[\"sample_date_string\"], how=\"outer\")\nresult_summary_df.head()", "_____no_output_____" ], [ "result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge(\n result_summary_df, on=[\"sample_date_string\"], how=\"left\")\nresult_summary_df.head()", "_____no_output_____" ], [ "result_summary_df = confirmed_es_df.tail(daily_summary_days).merge(\n result_summary_df, on=[\"sample_date_string\"], how=\"left\")\nresult_summary_df.head()", "_____no_output_____" ], [ "result_summary_df[\"sample_date\"] = pd.to_datetime(result_summary_df.sample_date_string)\nresult_summary_df = result_summary_df.merge(source_regions_for_summary_df, how=\"left\")\nresult_summary_df.set_index([\"sample_date\", \"source_regions\"], inplace=True)\nresult_summary_df.drop(columns=[\"sample_date_string\"], inplace=True)\nresult_summary_df.sort_index(ascending=False, inplace=True)\nresult_summary_df.head()", "_____no_output_____" ], [ "with pd.option_context(\"mode.use_inf_as_na\", True):\n result_summary_df = result_summary_df.fillna(0).astype(int)\n result_summary_df[\"teks_per_shared_diagnosis\"] = \\\n (result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)\n result_summary_df[\"shared_diagnoses_per_covid_case\"] = \\\n (result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)\n result_summary_df[\"shared_diagnoses_per_covid_case_es\"] = \\\n (result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0)\n\nresult_summary_df.head(daily_plot_days)", "_____no_output_____" ], [ "def compute_aggregated_results_summary(days) -> pd.DataFrame:\n aggregated_result_summary_df = result_summary_df.copy()\n aggregated_result_summary_df[\"covid_cases_for_ratio\"] = \\\n aggregated_result_summary_df.covid_cases.mask(\n aggregated_result_summary_df.shared_diagnoses == 0, 0)\n aggregated_result_summary_df[\"covid_cases_for_ratio_es\"] = \\\n aggregated_result_summary_df.covid_cases_es.mask(\n aggregated_result_summary_df.shared_diagnoses_es == 0, 0)\n aggregated_result_summary_df = aggregated_result_summary_df \\\n .sort_index(ascending=True).fillna(0).rolling(days).agg({\n \"covid_cases\": \"sum\",\n \"covid_cases_es\": \"sum\",\n 
\"covid_cases_for_ratio\": \"sum\",\n \"covid_cases_for_ratio_es\": \"sum\",\n \"shared_teks_by_generation_date\": \"sum\",\n \"shared_teks_by_upload_date\": \"sum\",\n \"shared_diagnoses\": \"sum\",\n \"shared_diagnoses_es\": \"sum\",\n }).sort_index(ascending=False)\n\n with pd.option_context(\"mode.use_inf_as_na\", True):\n aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int)\n aggregated_result_summary_df[\"teks_per_shared_diagnosis\"] = \\\n (aggregated_result_summary_df.shared_teks_by_upload_date /\n aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)\n aggregated_result_summary_df[\"shared_diagnoses_per_covid_case\"] = \\\n (aggregated_result_summary_df.shared_diagnoses /\n aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)\n aggregated_result_summary_df[\"shared_diagnoses_per_covid_case_es\"] = \\\n (aggregated_result_summary_df.shared_diagnoses_es /\n aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0)\n\n return aggregated_result_summary_df", "_____no_output_____" ], [ "aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7)\naggregated_result_with_7_days_window_summary_df.head()", "_____no_output_____" ], [ "last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient=\"records\")[1]\nlast_7_days_summary", "_____no_output_____" ], [ "aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=13)\nlast_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient=\"records\")[1]\nlast_14_days_summary", "_____no_output_____" ] ], [ [ "## Report Results", "_____no_output_____" ] ], [ [ "display_column_name_mapping = {\n \"sample_date\": \"Sample\\u00A0Date\\u00A0(UTC)\",\n \"source_regions\": \"Source Countries\",\n \"datetime_utc\": \"Timestamp (UTC)\",\n \"upload_date\": \"Upload Date (UTC)\",\n \"generation_to_upload_days\": \"Generation to Upload Period in Days\",\n \"region\": \"Backend\",\n \"region_x\": \"Backend\\u00A0(A)\",\n \"region_y\": \"Backend\\u00A0(B)\",\n \"common_teks\": \"Common TEKs Shared Between Backends\",\n \"common_teks_fraction\": \"Fraction of TEKs in Backend (A) Available in Backend (B)\",\n \"covid_cases\": \"COVID-19 Cases (Source Countries)\",\n \"shared_teks_by_generation_date\": \"Shared TEKs by Generation Date (Source Countries)\",\n \"shared_teks_by_upload_date\": \"Shared TEKs by Upload Date (Source Countries)\",\n \"shared_teks_uploaded_on_generation_date\": \"Shared TEKs Uploaded on Generation Date (Source Countries)\",\n \"shared_diagnoses\": \"Shared Diagnoses (Source Countries โ€“ Estimation)\",\n \"teks_per_shared_diagnosis\": \"TEKs Uploaded per Shared Diagnosis (Source Countries)\",\n \"shared_diagnoses_per_covid_case\": \"Usage Ratio (Source Countries)\",\n\n \"covid_cases_es\": \"COVID-19 Cases (Spain)\",\n \"app_downloads_es\": \"App Downloads (Spain โ€“ Official)\",\n \"shared_diagnoses_es\": \"Shared Diagnoses (Spain โ€“ Official)\",\n \"shared_diagnoses_per_covid_case_es\": \"Usage Ratio (Spain)\",\n}", "_____no_output_____" ], [ "summary_columns = [\n \"covid_cases\",\n \"shared_teks_by_generation_date\",\n \"shared_teks_by_upload_date\",\n \"shared_teks_uploaded_on_generation_date\",\n \"shared_diagnoses\",\n \"teks_per_shared_diagnosis\",\n \"shared_diagnoses_per_covid_case\",\n\n \"covid_cases_es\",\n \"app_downloads_es\",\n \"shared_diagnoses_es\",\n \"shared_diagnoses_per_covid_case_es\",\n]\n\nsummary_percentage_columns= [\n 
\"shared_diagnoses_per_covid_case_es\",\n \"shared_diagnoses_per_covid_case\",\n]", "_____no_output_____" ] ], [ [ "### Daily Summary Table", "_____no_output_____" ] ], [ [ "result_summary_df_ = result_summary_df.copy()\nresult_summary_df = result_summary_df[summary_columns]\nresult_summary_with_display_names_df = result_summary_df \\\n .rename_axis(index=display_column_name_mapping) \\\n .rename(columns=display_column_name_mapping)\nresult_summary_with_display_names_df", "_____no_output_____" ] ], [ [ "### Daily Summary Plots", "_____no_output_____" ] ], [ [ "result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \\\n .droplevel(level=[\"source_regions\"]) \\\n .rename_axis(index=display_column_name_mapping) \\\n .rename(columns=display_column_name_mapping)\nsummary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(\n title=f\"Daily Summary\",\n rot=45, subplots=True, figsize=(15, 30), legend=False)\nax_ = summary_ax_list[0]\nax_.get_figure().tight_layout()\nax_.get_figure().subplots_adjust(top=0.95)\n_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime(\"%Y-%m-%d\").tolist()))\n\nfor percentage_column in summary_percentage_columns:\n percentage_column_index = summary_columns.index(percentage_column)\n summary_ax_list[percentage_column_index].yaxis \\\n .set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))", "/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:307: MatplotlibDeprecationWarning: \nThe rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead.\n layout[ax.rowNum, ax.colNum] = ax.get_visible()\n/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:307: MatplotlibDeprecationWarning: \nThe colNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().colspan.start instead.\n layout[ax.rowNum, ax.colNum] = ax.get_visible()\n/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:313: MatplotlibDeprecationWarning: \nThe rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead.\n if not layout[ax.rowNum + 1, ax.colNum]:\n/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:313: MatplotlibDeprecationWarning: \nThe colNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. 
Use ax.get_subplotspec().colspan.start instead.\n if not layout[ax.rowNum + 1, ax.colNum]:\n" ] ], [ [ "### Daily Generation to Upload Period Table", "_____no_output_____" ] ], [ [ "display_generation_to_upload_period_pivot_df = \\\n generation_to_upload_period_pivot_df \\\n .head(backend_generation_days)\ndisplay_generation_to_upload_period_pivot_df \\\n .head(backend_generation_days) \\\n .rename_axis(columns=display_column_name_mapping) \\\n .rename_axis(index=display_column_name_mapping)", "_____no_output_____" ], [ "fig, generation_to_upload_period_pivot_table_ax = plt.subplots(\n figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df)))\ngeneration_to_upload_period_pivot_table_ax.set_title(\n \"Shared TEKs Generation to Upload Period Table\")\nsns.heatmap(\n data=display_generation_to_upload_period_pivot_df\n .rename_axis(columns=display_column_name_mapping)\n .rename_axis(index=display_column_name_mapping),\n fmt=\".0f\",\n annot=True,\n ax=generation_to_upload_period_pivot_table_ax)\ngeneration_to_upload_period_pivot_table_ax.get_figure().tight_layout()", "_____no_output_____" ] ], [ [ "### Hourly Summary Plots ", "_____no_output_____" ] ], [ [ "hourly_summary_ax_list = hourly_summary_df \\\n .rename_axis(index=display_column_name_mapping) \\\n .rename(columns=display_column_name_mapping) \\\n .plot.bar(\n title=f\"Last 24h Summary\",\n rot=45, subplots=True, legend=False)\nax_ = hourly_summary_ax_list[-1]\nax_.get_figure().tight_layout()\nax_.get_figure().subplots_adjust(top=0.9)\n_ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime(\"%Y-%m-%d@%H\").tolist()))", "_____no_output_____" ] ], [ [ "### Publish Results", "_____no_output_____" ] ], [ [ "github_repository = os.environ.get(\"GITHUB_REPOSITORY\")\nif github_repository is None:\n github_repository = \"pvieito/Radar-STATS\"\n\ngithub_project_base_url = \"https://github.com/\" + github_repository\n\ndisplay_formatters = {\n display_column_name_mapping[\"teks_per_shared_diagnosis\"]: lambda x: f\"{x:.2f}\" if x != 0 else \"\",\n display_column_name_mapping[\"shared_diagnoses_per_covid_case\"]: lambda x: f\"{x:.2%}\" if x != 0 else \"\",\n display_column_name_mapping[\"shared_diagnoses_per_covid_case_es\"]: lambda x: f\"{x:.2%}\" if x != 0 else \"\",\n}\ngeneral_columns = \\\n list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values()))\ngeneral_formatter = lambda x: f\"{x}\" if x != 0 else \"\"\ndisplay_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns)))\n\ndaily_summary_table_html = result_summary_with_display_names_df \\\n .head(daily_plot_days) \\\n .rename_axis(index=display_column_name_mapping) \\\n .rename(columns=display_column_name_mapping) \\\n .to_html(formatters=display_formatters)\nmulti_backend_summary_table_html = multi_backend_summary_df \\\n .head(daily_plot_days) \\\n .rename_axis(columns=display_column_name_mapping) \\\n .rename(columns=display_column_name_mapping) \\\n .rename_axis(index=display_column_name_mapping) \\\n .to_html(formatters=display_formatters)\n\ndef format_multi_backend_cross_sharing_fraction(x):\n if pd.isna(x):\n return \"-\"\n elif round(x * 100, 1) == 0:\n return \"\"\n else:\n return f\"{x:.1%}\"\n\nmulti_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \\\n .rename_axis(columns=display_column_name_mapping) \\\n .rename(columns=display_column_name_mapping) \\\n .rename_axis(index=display_column_name_mapping) \\\n .to_html(\n classes=\"table-center\",\n 
formatters=display_formatters,\n float_format=format_multi_backend_cross_sharing_fraction)\nmulti_backend_cross_sharing_summary_table_html = \\\n multi_backend_cross_sharing_summary_table_html \\\n .replace(\"<tr>\",\"<tr style=\\\"text-align: center;\\\">\")\n\nextraction_date_result_summary_df = \\\n result_summary_df[result_summary_df.index.get_level_values(\"sample_date\") == extraction_date]\nextraction_date_result_hourly_summary_df = \\\n hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]\n\ncovid_cases = \\\n extraction_date_result_summary_df.covid_cases.item()\nshared_teks_by_generation_date = \\\n extraction_date_result_summary_df.shared_teks_by_generation_date.item()\nshared_teks_by_upload_date = \\\n extraction_date_result_summary_df.shared_teks_by_upload_date.item()\nshared_diagnoses = \\\n extraction_date_result_summary_df.shared_diagnoses.item()\nteks_per_shared_diagnosis = \\\n extraction_date_result_summary_df.teks_per_shared_diagnosis.item()\nshared_diagnoses_per_covid_case = \\\n extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()\n\nshared_teks_by_upload_date_last_hour = \\\n extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int)\n\ndisplay_source_regions = \", \".join(report_source_regions)\nif len(report_source_regions) == 1:\n display_brief_source_regions = report_source_regions[0]\nelse:\n display_brief_source_regions = f\"{len(report_source_regions)} ๐Ÿ‡ช๐Ÿ‡บ\"", "<ipython-input-67-0a0cb8e530af>:55: FutureWarning: `item` has been deprecated and will be removed in a future version\n extraction_date_result_summary_df.covid_cases.item()\n<ipython-input-67-0a0cb8e530af>:57: FutureWarning: `item` has been deprecated and will be removed in a future version\n extraction_date_result_summary_df.shared_teks_by_generation_date.item()\n<ipython-input-67-0a0cb8e530af>:59: FutureWarning: `item` has been deprecated and will be removed in a future version\n extraction_date_result_summary_df.shared_teks_by_upload_date.item()\n<ipython-input-67-0a0cb8e530af>:61: FutureWarning: `item` has been deprecated and will be removed in a future version\n extraction_date_result_summary_df.shared_diagnoses.item()\n<ipython-input-67-0a0cb8e530af>:63: FutureWarning: `item` has been deprecated and will be removed in a future version\n extraction_date_result_summary_df.teks_per_shared_diagnosis.item()\n<ipython-input-67-0a0cb8e530af>:65: FutureWarning: `item` has been deprecated and will be removed in a future version\n extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()\n" ], [ "def get_temporary_image_path() -> str:\n return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + \".png\")\n\ndef save_temporary_plot_image(ax):\n if isinstance(ax, np.ndarray):\n ax = ax[0]\n media_path = get_temporary_image_path()\n ax.get_figure().savefig(media_path)\n return media_path\n\ndef save_temporary_dataframe_image(df):\n import dataframe_image as dfi\n df = df.copy()\n df_styler = df.style.format(display_formatters)\n media_path = get_temporary_image_path()\n dfi.export(df_styler, media_path)\n return media_path", "_____no_output_____" ], [ "summary_plots_image_path = save_temporary_plot_image(\n ax=summary_ax_list)\nsummary_table_image_path = save_temporary_dataframe_image(\n df=result_summary_with_display_names_df)\nhourly_summary_plots_image_path = save_temporary_plot_image(\n ax=hourly_summary_ax_list)\nmulti_backend_summary_table_image_path = save_temporary_dataframe_image(\n 
df=multi_backend_summary_df)\ngeneration_to_upload_period_pivot_table_image_path = save_temporary_plot_image(\n ax=generation_to_upload_period_pivot_table_ax)", "_____no_output_____" ] ], [ [ "### Save Results", "_____no_output_____" ] ], [ [ "report_resources_path_prefix = \"Data/Resources/Current/RadarCOVID-Report-\"\nresult_summary_df.to_csv(\n report_resources_path_prefix + \"Summary-Table.csv\")\nresult_summary_df.to_html(\n report_resources_path_prefix + \"Summary-Table.html\")\nhourly_summary_df.to_csv(\n report_resources_path_prefix + \"Hourly-Summary-Table.csv\")\nmulti_backend_summary_df.to_csv(\n report_resources_path_prefix + \"Multi-Backend-Summary-Table.csv\")\nmulti_backend_cross_sharing_summary_df.to_csv(\n report_resources_path_prefix + \"Multi-Backend-Cross-Sharing-Summary-Table.csv\")\ngeneration_to_upload_period_pivot_df.to_csv(\n report_resources_path_prefix + \"Generation-Upload-Period-Table.csv\")\n_ = shutil.copyfile(\n summary_plots_image_path,\n report_resources_path_prefix + \"Summary-Plots.png\")\n_ = shutil.copyfile(\n summary_table_image_path,\n report_resources_path_prefix + \"Summary-Table.png\")\n_ = shutil.copyfile(\n hourly_summary_plots_image_path,\n report_resources_path_prefix + \"Hourly-Summary-Plots.png\")\n_ = shutil.copyfile(\n multi_backend_summary_table_image_path,\n report_resources_path_prefix + \"Multi-Backend-Summary-Table.png\")\n_ = shutil.copyfile(\n generation_to_upload_period_pivot_table_image_path,\n report_resources_path_prefix + \"Generation-Upload-Period-Table.png\")", "_____no_output_____" ] ], [ [ "### Publish Results as JSON", "_____no_output_____" ] ], [ [ "def generate_summary_api_results(df: pd.DataFrame) -> list:\n api_df = df.reset_index().copy()\n api_df[\"sample_date_string\"] = \\\n api_df[\"sample_date\"].dt.strftime(\"%Y-%m-%d\")\n api_df[\"source_regions\"] = \\\n api_df[\"source_regions\"].apply(lambda x: x.split(\",\"))\n return api_df.to_dict(orient=\"records\")\n\nsummary_api_results = \\\n generate_summary_api_results(df=result_summary_df)\ntoday_summary_api_results = \\\n generate_summary_api_results(df=extraction_date_result_summary_df)[0]\n\nsummary_results = dict(\n backend_identifier=report_backend_identifier,\n source_regions=report_source_regions,\n extraction_datetime=extraction_datetime,\n extraction_date=extraction_date,\n extraction_date_with_hour=extraction_date_with_hour,\n last_hour=dict(\n shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour,\n shared_diagnoses=0,\n ),\n today=today_summary_api_results,\n last_7_days=last_7_days_summary,\n last_14_days=last_14_days_summary,\n daily_results=summary_api_results)\n\nsummary_results = \\\n json.loads(pd.Series([summary_results]).to_json(orient=\"records\"))[0]\n\nwith open(report_resources_path_prefix + \"Summary-Results.json\", \"w\") as f:\n json.dump(summary_results, f, indent=4)", "_____no_output_____" ] ], [ [ "### Publish on README", "_____no_output_____" ] ], [ [ "with open(\"Data/Templates/README.md\", \"r\") as f:\n readme_contents = f.read()\n\nreadme_contents = readme_contents.format(\n extraction_date_with_hour=extraction_date_with_hour,\n github_project_base_url=github_project_base_url,\n daily_summary_table_html=daily_summary_table_html,\n multi_backend_summary_table_html=multi_backend_summary_table_html,\n multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html,\n display_source_regions=display_source_regions)\n\nwith open(\"README.md\", \"w\") as f:\n f.write(readme_contents)", 
"_____no_output_____" ] ], [ [ "### Publish on Twitter", "_____no_output_____" ] ], [ [ "enable_share_to_twitter = os.environ.get(\"RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER\")\ngithub_event_name = os.environ.get(\"GITHUB_EVENT_NAME\")\n\nif enable_share_to_twitter and github_event_name == \"schedule\" and \\\n (shared_teks_by_upload_date_last_hour or not are_today_results_partial):\n import tweepy\n\n twitter_api_auth_keys = os.environ[\"RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS\"]\n twitter_api_auth_keys = twitter_api_auth_keys.split(\":\")\n auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])\n auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])\n\n api = tweepy.API(auth)\n\n summary_plots_media = api.media_upload(summary_plots_image_path)\n summary_table_media = api.media_upload(summary_table_image_path)\n generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path)\n media_ids = [\n summary_plots_media.media_id,\n summary_table_media.media_id,\n generation_to_upload_period_pivot_table_image_media.media_id,\n ]\n\n if are_today_results_partial:\n today_addendum = \" (Partial)\"\n else:\n today_addendum = \"\"\n\n def format_shared_diagnoses_per_covid_case(value) -> str:\n if value == 0:\n return \"โ€“\"\n return f\"โ‰ค{value:.2%}\"\n\n display_shared_diagnoses_per_covid_case = \\\n format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case)\n display_last_14_days_shared_diagnoses_per_covid_case = \\\n format_shared_diagnoses_per_covid_case(value=last_14_days_summary[\"shared_diagnoses_per_covid_case\"])\n display_last_14_days_shared_diagnoses_per_covid_case_es = \\\n format_shared_diagnoses_per_covid_case(value=last_14_days_summary[\"shared_diagnoses_per_covid_case_es\"])\n\n status = textwrap.dedent(f\"\"\"\n #RadarCOVID โ€“ {extraction_date_with_hour}\n\n Today{today_addendum}:\n - Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour)\n - Shared Diagnoses: โ‰ค{shared_diagnoses:.0f}\n - Usage Ratio: {display_shared_diagnoses_per_covid_case}\n\n Last 14 Days:\n - Usage Ratio (Estimation): {display_last_14_days_shared_diagnoses_per_covid_case}\n - Usage Ratio (Official): {display_last_14_days_shared_diagnoses_per_covid_case_es}\n\n Info: {github_project_base_url}#documentation\n \"\"\")\n status = status.encode(encoding=\"utf-8\")\n api.update_status(status=status, media_ids=media_ids)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
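Every table and image in the report record above is rendered through one dict of per-column formatter callables handed to pandas; a small standalone sketch of that formatting pattern, with invented column names and assuming only pandas:

```python
import pandas as pd

df = pd.DataFrame({
    "shared_diagnoses": [120, 0, 95],
    "usage_ratio": [0.031, 0.0, 0.027],
})

# one callable per column; zeros render as empty cells, mirroring the report's style
formatters = {
    "shared_diagnoses": lambda x: f"{x}" if x != 0 else "",
    "usage_ratio": lambda x: f"{x:.2%}" if x != 0 else "",
}

html_table = df.to_html(formatters=formatters)
```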
ece087409063ce4cec1830675574b275ba3066da
3,809
ipynb
Jupyter Notebook
Final_Exam.ipynb
Markhristian/CPEN-21A-ECE-2-1
a718048ac2ff022f5b3e852088f06d6f3e55fe63
[ "Apache-2.0" ]
null
null
null
Final_Exam.ipynb
Markhristian/CPEN-21A-ECE-2-1
a718048ac2ff022f5b3e852088f06d6f3e55fe63
[ "Apache-2.0" ]
null
null
null
Final_Exam.ipynb
Markhristian/CPEN-21A-ECE-2-1
a718048ac2ff022f5b3e852088f06d6f3e55fe63
[ "Apache-2.0" ]
null
null
null
22.017341
236
0.377527
[ [ [ "<a href=\"https://colab.research.google.com/github/Markhristian/CPEN-21A-ECE-2-1/blob/main/Final_Exam.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Final Exam", "_____no_output_____" ], [ "### Problem Statement 1", "_____no_output_____" ] ], [ [ "num=(\"1\",\"2\",\"3\",\"4\",\"3\",\"2\",\"1\",\"2\",\"3\",\"4\")\nfor x in num:\n  print(x)\nelse:\n  print(1+2+3+4+3+2+1+2+3+4)", "1\n2\n3\n4\n3\n2\n1\n2\n3\n4\n25\n" ] ], [ [ "### Problem Statement 2", "_____no_output_____" ] ], [ [ "i=1\nwhile i<6:\n  print(i)\n  i+=1\nelse:\n  print(1+5)", "1\n2\n3\n4\n5\n6\n" ] ], [ [ "### Problem Statement 3", "_____no_output_____" ] ], [ [ "grade = int(input(\"Enter the grade \"))\nif grade>=90:\n  print(\"A\")\nelif grade>=80:\n  print(\"B\")\nelse:\n  if grade>=70:\n    print(\"C\")\n  elif grade>=60:\n    print(\"D\")\n  else:\n    print(\"F\")\n", "Enter the grade 75\nC\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
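The for/else and while/else blocks in the exam record above always reach their else branch, because nothing ever breaks out of the loops; a quick sketch of when the else clause is skipped:

```python
for n in (1, 2, 3, 4):
    if n == 3:
        break                       # leaving via break skips the else clause
else:
    print("no break occurred")      # not printed in this loop

for n in (1, 2):
    pass
else:
    print("no break occurred")      # printed: the loop finished normally
```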
ece0897261e7c5609187f14406dd4ab174850b99
328,307
ipynb
Jupyter Notebook
Exercises 4/exercises_lecture4.ipynb
hlappal/comp-phys
8d78a459bc5849ddf5c6c21d484503136bccccbd
[ "MIT" ]
null
null
null
Exercises 4/exercises_lecture4.ipynb
hlappal/comp-phys
8d78a459bc5849ddf5c6c21d484503136bccccbd
[ "MIT" ]
null
null
null
Exercises 4/exercises_lecture4.ipynb
hlappal/comp-phys
8d78a459bc5849ddf5c6c21d484503136bccccbd
[ "MIT" ]
null
null
null
181.184879
39,504
0.899027
[ [ [ "Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\\rightarrow$Run All).\n\n**NB. Do not add new or remove/cut cells in the notebook. Additionally, do not change the filename of this notebook.**\n\nMake sure you fill in any place that says `YOUR CODE HERE` or \"YOUR ANSWER HERE\", as well as your student number below:", "_____no_output_____" ] ], [ [ "STUDENT_NUMBER = \"141927\"", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "# Exercises Topic 4", "_____no_output_____" ] ], [ [ "# imports\n\nimport numpy as np\nfrom matplotlib import pyplot as plt", "_____no_output_____" ] ], [ [ "## Exercise 4.1: Detecting periodicity (4 points)\n\nIn the folder that you obtained this notebook there is a file called\n`sunspots.txt`, which contains the observed number of sunspots on the\nSun for each month since January 1749. The file contains two columns of\nnumbers, the first representing the month and the second being the sunspot\nnumber.\n\n(a) Write a program that reads the data in the file and makes a graph of\n    sunspots as a function of time. You should see that the number of\n    sunspots has fluctuated on a regular cycle for as long as observations\n    have been recorded. Make an estimate of the length of the cycle in\n    months.\n    ", "_____no_output_____" ] ], [ [ "def read_sunspot_data(filename):\n    \"\"\"Reads and returns sunspot data\n    \n    Args:\n        filename (str): name of the file\n    \n    Returns:\n        numpy array: the data\"\"\"\n    \n    data = None\n    \n    # YOUR CODE HERE\n    data = np.loadtxt(filename)\n    #raise NotImplementedError()\n    \n    return data\n\ndata = read_sunspot_data('sunspots.txt')\n# plotting\n# YOUR CODE HERE\nplt.plot(data[:,0], data[:,1])\n#raise NotImplementedError()\nplt.xlabel('time (months)')\nplt.ylabel('sunspot (#)')", "_____no_output_____" ] ], [ [ "### Answer (Double click to edit)\nThere seem to be 24 peaks in a time period of 3143 months, or about 262 years.\nThis means that a peak occurs every ~131 months, or roughly once every 11 years.", "_____no_output_____" ] ], [ [ "# validation\nfn = read_sunspot_data('sunspots.txt')\nassert isinstance(fn, np.ndarray), 'bad function'\nassert fn.shape == (3143, 2), 'bad shape'\nassert fn.dtype == float, 'bad type'", "_____no_output_____" ] ], [ [ "(b) Modify your program to calculate the Fourier transform of the sunspot\n    data and then make a graph of the magnitude squared $|c_k|^2$ of the\n    Fourier coefficients as a function of $k$---also called the\n    *power spectrum* of the sunspot signal. You should see that\n    there is a noticeable peak in the power spectrum at a nonzero value\n    of $k$.
The appearance of this peak tells us that there is one frequency\n    in the Fourier series that has a higher amplitude than the others around\n    it---meaning that there is a large sine-wave term with this frequency,\n    which corresponds to the periodic wave you can see in the original data.", "_____no_output_____" ] ], [ [ "def discrete_fourier_transform(y):\n    \"\"\"Calculates the coefficients of the discrete Fourier transform of 1D y\n    \n    Args:\n        y (numpy array): 1D data\n    \n    Returns:\n        numpy array: 1D array of coefficients after the Fourier transform\"\"\"\n    # coefficients\n    c = None\n    \n    # YOUR CODE HERE\n    N = len(y)\n    n = np.arange(N)\n    c = np.zeros(N//2+1,complex)\n    for k in range(N//2+1):\n        c[k] = np.sum(y*np.exp(-2j*np.pi*k*n/N))\n    #raise NotImplementedError()\n    return c", "_____no_output_____" ], [ "# validation\nn = 10\nfn = np.sin(np.arange(n))\ndft = discrete_fourier_transform(fn)\nc_k2 = np.abs(dft)**2\nassert abs(c_k2[0] - 3.82284412) <= 1e-4, 'bad function' \nassert abs(c_k2[1] - 10.28600735) <= 1e-4, 'bad function' ", "_____no_output_____" ], [ "fft_data = discrete_fourier_transform(data[:, 1])\n\n#plotting\n# YOUR CODE HERE\nplt.plot(np.square(abs(fft_data)))\n#raise NotImplementedError()\nplt.xlabel('k')\nplt.ylabel('$c_k^2$')", "_____no_output_____" ], [ "plt.plot(np.square(abs(fft_data[0:50]))) # Zoom closer to the non-zero peak\n#raise NotImplementedError()\nplt.xlabel('k')\nplt.ylabel('$c_k^2$')\n\nk = 24\nN = data.shape[0]\nprint(f\"k = {k}: freq = {N}/{k} = {N/k}\")", "k = 24: freq = 3143/24 = 130.95833333333334\n" ] ], [ [ "(c) Find the approximate value of $k$ to which the peak corresponds.\n    What is the period of the sine wave with this value of $k$? You should\n    find that the period corresponds roughly to the length of the cycle that\n    you estimated in part (a).\n\nThis kind of Fourier analysis is a sensitive method for detecting\nperiodicity in signals. Even in cases where it is not clear to the eye\nthat there is a periodic component to a signal, it may still be possible to\nfind one using a Fourier transform.", "_____no_output_____" ], [ "### Answer (Double click to edit)\n\nFrom the plot, it can be visually estimated that the peak occurs approximately at $k=24$.
When the total number of data points, 3143, is divided by $k$, we get the frequency: $N/k = 3143/24 \\approx 130.96$ months, which is very close to the approximated value of 131 months.", "_____no_output_____" ], [ "## Exercise 4.2: Fourier filtering and smoothing (6 points)\n\nIn the folder that you obtained this notebook you'll find a file called `dow.txt`.\nIt contains the daily closing value for each business day from late 2006\nuntil the end of 2010 of the Dow Jones Industrial Average, which is a\nmeasure of average prices on the US stock market.\n\nWrite a program to do the following:\n\n(a) Read in the data from `dow.txt` and plot them on a graph.", "_____no_output_____" ] ], [ [ "def read_dow_data(filename):\n    \"\"\"Read Dow data from filename\n    \n    Args:\n        filename (str): name of the file\n    \n    Returns:\n        numpy array: dow data\"\"\"\n    data = None\n    \n    # YOUR CODE HERE\n    data = np.loadtxt(filename)\n    #raise NotImplementedError()\n    \n    return data\n\ndata = read_dow_data('dow.txt')\n\n# plotting\n# YOUR CODE HERE\nplt.plot(data)\n#raise NotImplementedError()\nplt.xlabel('days')\nplt.ylabel('closing value')", "_____no_output_____" ], [ "# validation\nfn = read_dow_data('dow.txt')\nassert isinstance(fn, np.ndarray), 'bad function'\nassert fn.shape == (1024,), 'bad function'\nassert fn.dtype == float, 'bad function'", "_____no_output_____" ] ], [ [ "(b) Calculate the coefficients of the discrete Fourier transform of the\n    data using the function `rfft` from `numpy.fft`, which produces\n    an array of $\\frac{1}{2} N+1$ complex numbers.\n    ", "_____no_output_____" ] ], [ [ "def dft_numpy(y):\n    \"\"\"Perform numpy rfft\n    \n    Args:\n        y (array): input data\n    Returns:\n        numpy array: dft of data\n    \"\"\"\n    dft = None\n    # modifications to y should not change the incoming data\n    y = np.copy(y)\n\n    # YOUR CODE HERE\n    dft = np.fft.rfft(y)\n    #raise NotImplementedError()\n    \n    return dft\n\n# compute dft\ndft_data = dft_numpy(data)", "_____no_output_____" ], [ "# validation\nn = 10\nfn = np.sin(np.arange(n))\ndft = dft_numpy(fn)\nc_k2 = np.abs(dft)**2\nassert abs(c_k2[0] - 3.82284412) <= 1e-4, 'bad function' \nassert abs(c_k2[1] - 10.28600735) <= 1e-4, 'bad function' ", "_____no_output_____" ] ], [ [ "(c) Now set all but the first 10\\% of the elements of this array to zero\n    (i.e., set the last 90\\% to zero but keep the values of the first 10\\%).", "_____no_output_____" ] ], [ [ "def data_trim(y, x):\n    \"\"\"Trim data by setting all but the first x% of elements to zero\n    \n    Args:\n        y (numpy array): fft data\n        x (float): percentage of leading elements to keep\n    Returns:\n        numpy array: trimmed data\n    \"\"\"\n    partc_out = None\n    # modifications to y should not change the incoming data\n    y = np.copy(y)\n    \n    # YOUR CODE HERE\n    i = int(len(y)*x/100)\n    partc_out = np.zeros_like(y)\n    partc_out[:i] = y[:i]\n    #raise NotImplementedError()\n    \n    return partc_out\n\ndft_data_trimmed = data_trim(dft_data, 10)", "_____no_output_____" ], [ "# validation\na = np.arange(1, 11)\na_trimmed = data_trim(a, 10)\nassert len(a_trimmed) == len(a), 'bad function'\nfor i in range(1, 10):\n    assert len(np.where(data_trim(a, i*10)==0)[0]) == 10 - i, 'bad function at i = {}'.format(i)", "_____no_output_____" ] ], [ [ "(d) Calculate the inverse Fourier transform of the resulting array, zeros\n    and all, using the function `irfft`, and plot it on the same graph\n    as the original data. You may need to vary the colors of the two curves\n    to make sure they both show up on the graph.
Comment on what you see.\n What is happening when you set the Fourier coefficients to zero?", "_____no_output_____" ] ], [ [ "def idft_numpy(y):\n \"\"\"Perform numpy irfft\n \n Args:\n y (array): input data\n Returns:\n numpy array: dft of data\n \"\"\"\n idft = None\n # modifications on y should not change incomming data too\n y = np.copy(y)\n\n # YOUR CODE HERE\n idft = np.fft.irfft(y)\n #raise NotImplementedError()\n \n return idft\n\n# compute idft\ndata_idft = idft_numpy(dft_data_trimmed)\n\n# plotting\n# YOUR CODE HERE\nplt.plot(data, label='original')\nplt.plot(data_idft, label='irfft')\nplt.legend()\n#raise NotImplementedError()\nplt.xlabel('days')\nplt.ylabel('closing value')", "_____no_output_____" ] ], [ [ "### Answer (Double click to edit)\nThe inverse FFT adapts to the original data quite well. Increasing the percentage x of the data used to perform the inverse FFT, i.e. decreasing the number of zeros, makes the irfft data more accurate. In other words, decreasing the percentage makes the function smoother. 10% data usage already seems to be enough to get very good approximation of the original data.", "_____no_output_____" ] ], [ [ "# validation\nn = 10\nfn = np.sin(np.arange(n))\ndft = dft_numpy(fn)\nfn_idft = idft_numpy(dft)\nassert np.mean(np.abs(fn - fn_idft)) <= 1e-4, 'bad function' ", "_____no_output_____" ] ], [ [ "(e) Modify your program so that it sets all but the first 2\\% of the\n coefficients to zero and run it again.", "_____no_output_____" ] ], [ [ "# get trimmed data\n# YOUR CODE HERE\ndft_data_trimmed_2 = data_trim(dft_data, 2)\ndata_idft_2 = idft_numpy(dft_data_trimmed_2)\n#raise NotImplementedError()\n\n# plotting\n# YOUR CODE HERE\nplt.plot(data, label='original')\nplt.plot(data_idft_2, label='irfft')\nplt.legend()\n#raise NotImplementedError()\nplt.xlabel('days')\nplt.ylabel('closing value')", "_____no_output_____" ] ], [ [ "## Exercise 4.3: Comparison of the DFT and DCT (3 points)\n\nExercise 4.2 looked at data representing the variation of the Dow Jones\nIndustrial Average, colloquially called \"the Dow,\" over time. The\nparticular time period studied in that exercise was special in one sense:\nthe value of the Dow at the end of the period was almost the same as at the\nstart, so the function was, roughly speaking, periodic. In the folder that you obtained this notebook there is another file called `dow2.txt`, which also contains\ndata on the Dow but for a different time period, from 2004 until 2008.\nOver this period the value changed considerably from a starting level\naround 9000 to a final level around 14000.\n\n(a) Write a program in\n which you read the data in the file `dow2.txt` and plot it on a\n graph. Then smooth the data by calculating its Fourier transform,\n setting all but the first 2\\% of the coefficients to zero, and inverting\n the transform again, plotting the result on the same graph as the\n original data. You should see that the data are\n smoothed, but now there will be an additional artifact. At the beginning\n and end of the plot you should see large deviations away from the true\n smoothed function. These occur because the function is required to be\n periodic---its last value must be the same as its first---so it needs to\n deviate substantially from the correct value to make the two ends of the\n function meet. In some situations (including this one) this behavior is\n unsatisfactory. 
If we want to use the Fourier transform for smoothing,\n we would certainly prefer that it not introduce artifacts of this kind.", "_____no_output_____" ] ], [ [ "# use already made functions in 4.2\n# YOUR CODE HERE\ndata = read_dow_data('dow2.txt')\ndft_data = dft_numpy(data)\ndft_data_trimmed = data_trim(dft_data, 2)\ndata_idft = idft_numpy(dft_data_trimmed)\n#raise NotImplementedError()\n\n# plotting\n# YOUR CODE HERE\nplt.plot(data, label='original')\nplt.plot(data_idft, label='irfft')\nplt.legend()\n#raise NotImplementedError()\nplt.xlabel('days')\nplt.ylabel('DOW closing values')", "_____no_output_____" ] ], [ [ "(b) Modify your program to repeat the same analysis using discrete cosine\n transforms. You can use the functions from `dcst.py` (in the folder that you obtained this notebook) to perform the\n transforms if you wish. Again discard all but the first 2\\% of the\n coefficients, invert the transform, and plot the result. You should see\n a significant improvement, with less distortion of the function at the\n ends of the interval. This occurs because the cosine transform does not force the value of the\n function to be the same at both ends.\n\nIt is because of the artifacts introduced by the strict periodicity\nof the DFT that the cosine transform is favored for many technological\napplications, such as audio compression. The artifacts can degrade the\nsound quality of compressed audio and the cosine transform generally gives\nbetter results.\n\nThe cosine transform is not wholly free of artifacts itself however. It's\ntrue it does not force the function to be periodic, but it does force the\ngradient to be zero at the ends of the interval (which the ordinary Fourier\ntransform does not). You may be able to see this in your calculations for\npart (b) above. Look closely at the smoothed function and you should see\nthat its slope is flat at the beginning and end of the interval. The\ndistortion of the function introduced is less than the distortion in\npart (a), but it's there all the same. To reduce this effect, audio\ncompression schemes often use overlapped cosine transforms, in which\ntransforms are performed on overlapping blocks of samples, so that the\nportions at the ends of blocks, where the worst artifacts lie, need not be\nused.", "_____no_output_____" ] ], [ [ "# use already made functions in 4.2 and dcst.py\nfrom dcst import dct, idct\n# YOUR CODE HERE\ndata = read_dow_data('dow2.txt')\ndct_data = dct(data)\ndct_data_trimmed = data_trim(dct_data, 2)\ndata_idct = idct(dct_data_trimmed)\n#raise NotImplementedError()\n\n# plotting\n# YOUR CODE HERE\nplt.plot(data, label='original')\nplt.plot(data_idct, label='idct')\nplt.legend()\n#raise NotImplementedError()\nplt.xlabel('days')\nplt.ylabel('DOW closing values')\n", "_____no_output_____" ] ], [ [ "## Exercise 4.4: Image deconvolution (7 points)\n\nYou've probably seen it on TV, in one of those crime drama shows.\nThey have a blurry photo of a crime scene and they click a few buttons on\nthe computer and magically the photo becomes sharp and clear, so you can\nmake out someone's face, or some lettering on a sign. Surely (like almost\neverything else on such TV shows) this is just science fiction? Actually,\nno. It's not. It's real and in this exercise you'll write a program that\ndoes it.\n\nWhen a photo is blurred each point on the photo gets smeared out\naccording to some \"smearing distribution,\" which is technically called a\n*point spread function*. We can represent this smearing\nmathematically as follows. 
For simplicity let's assume we're working with\na black and white photograph, so that the picture can be represented by a\nsingle function $a(x,y)$ which tells you the brightness at each\npoint $(x,y)$. And let us denote the point spread function by $f(x,y)$.\nThis means that a single bright dot at the origin ends up appearing as\n$f(x,y)$ instead. If $f(x,y)$ is a broad function then the picture is\nbadly blurred. If it is a narrow peak then the picture is relatively\nsharp.\n\nIn general the brightness $b(x,y)$ of the blurred photo at point $(x,y)$ is\ngiven by\n\n$$\\begin{equation}\nb(x,y) = \\int_0^K \\int_0^L a(x',y') f(x-x',y-y') \\>d x'\\>d y',\n\\end{equation}$$\n\nwhere $K\\times L$ is the dimension of the picture. This equation is called\nthe *convolution* of the picture with the point spread\nfunction.\n\nWorking with two-dimensional functions can get complicated, so to get the\nidea of how the math works, let's switch temporarily to a one-dimensional\nequivalent of our problem. Once we work out the details in 1D we'll return\nto the 2D version. The one-dimensional version of the convolution above\nwould be\n\n$$\\begin{equation}\nb(x) = \\int_0^L a(x') f(x-x') \\>d x'.\n\\end{equation}$$\n\nThe function $b(x)$ can be represented by a Fourier series as in Eq. (7.5):\n\n$$\\begin{equation}\nb(x) = \\sum_{k=-\\infty}^\\infty\n \\tilde{b}_k \\exp\\biggl( i {2\\pi k x\\over L} \\biggr),\n\\end{equation}$$\n\nwhere\n\n$$\\begin{equation}\n\\tilde{b}_k = {1\\over L} \\int_0^L b(x)\n \\exp\\biggl( -i {2\\pi k x\\over L} \\biggr) \\>d x\n\\end{equation}$$\n\nare the Fourier coefficients. Substituting for $b(x)$ in this equation\ngives\n\n$$\\begin{align*}\n\\tilde{b}_k &= {1\\over L} \\int_0^L \\int_0^L a(x') f(x-x')\n \\exp\\biggl( -i {2\\pi k x\\over L} \\biggr)\n \\>d x'\\>d x \\\\\n &= {1\\over L} \\int_0^L \\int_0^L a(x') f(x-x')\n \\exp\\biggl( -i {2\\pi k (x-x')\\over L} \\biggr)\n \\exp\\biggl( -i {2\\pi k x'\\over L} \\biggr)\n \\>d x'\\>d x.\n\\end{align*}$$\n\nNow let us change variables to $X=x-x'$, and we get\n\n$$\\begin{equation}\n\\tilde{b}_k = {1\\over L} \\int_0^L a(x')\n \\exp\\biggl( -i {2\\pi k x'\\over L} \\biggr)\n \\int_{-x'}^{L-x'} f(X)\n \\exp\\biggl( -i {2\\pi k X\\over L} \\biggr) \\>d X\n \\>d x'.\n\\end{equation}$$\n\nIf we make $f(x)$ a periodic function in the standard fashion by repeating\nit infinitely many times to the left and right of the interval from 0\nto~$L$, then the second integral above can be written as\n\n$$\\begin{align*}\n\\int_{-x'}^{L-x'} f(X) \\exp\\biggl( -i {2\\pi k X\\over L} \\biggr) \\>d X\n&= \\int_{-x'}^0 f(X) \\exp\\biggl( -i {2\\pi k X\\over L} \\biggr) \\>d X\n \\\\\n&\\hspace{5em}{} + \\int_0^{L-x'} f(X) \\exp\\biggl( -i {2\\pi k X\\over L}\n \\biggr) \\>d X \\\\\n&\\hspace{-12em} {} = \\exp\\biggl( i {2\\pi k L\\over L} \\biggr)\n \\int_{L-x'}^L f(X) \\exp\\biggl( -i {2\\pi k X\\over L} \\biggr) \\>d X\n + \\int_0^{L-x'} f(X) \\exp\\biggl( -i {2\\pi k X\\over L} \\biggr) \\>d X\n \\\\\n&\\hspace{-12em} {} = \\int_0^L f(X)\n \\exp\\biggl( -i {2\\pi k X\\over L} \\biggr) \\>d X,\n\\end{align*}$$\n\nwhich is simply $L$ times the Fourier transform $\\tilde{f}_k$ of $f(x)$.\nSubstituting this result back into our equation for $\\tilde{b}_k$ we then\nget\n\n$$\\begin{align*}\n\\tilde{b}_k = \\int_0^L a(x')\n \\exp\\biggl( -i {2\\pi k x'\\over L} \\biggr)\n \\tilde{f}_k \\>d x'\n = L\\,\\tilde{a}_k\\tilde{f}_k.\n\\end{align*}$$\n\nIn other words, apart from the factor of $L$, the Fourier transform of the\nblurred photo is the 
product of the Fourier transforms of the unblurred\n photo and the point spread function.\n\nNow it is clear how we deblur our picture.  We take the blurred\npicture and Fourier transform it to get $\\tilde{b}_k =\nL\\,\\tilde{a}_k\\tilde{f}_k$.  We also take the point spread function and\nFourier transform it to get $\\tilde{f}_k$.  Then we divide one by the\nother:\n\n$$\\begin{equation}\n{\\tilde{b}_k\\over L\\tilde{f}_k} = \\tilde{a}_k\n\\end{equation}$$\n\nwhich gives us the Fourier transform of the *unblurred* picture.\nThen, finally, we do an inverse Fourier transform on $\\tilde{a}_k$ to get\nback the unblurred picture.  This process of recovering the unblurred\npicture from the blurred one, of reversing the convolution process, is\ncalled *deconvolution*.\n\nReal pictures are two-dimensional, but the mathematics follows through\nexactly the same.  For a picture of dimensions $K\\times L$ we find that the\ntwo-dimensional Fourier transforms are related by\n\n$$\\begin{equation}\n\\tilde{b}_{kl} = KL\\tilde{a}_{kl}\\tilde{f}_{kl}\\,,\n\\end{equation}$$\n\nand again we just divide the blurred Fourier transform by the Fourier\ntransform of the point spread function to get the Fourier transform of the\nunblurred picture.\n\nIn the digital realm of computers, pictures are not pure functions $f(x,y)$\nbut rather grids of samples, and our Fourier transforms are discrete\ntransforms not continuous ones.  But the math works out the same again.\n\nThe main complication with deblurring in practice is that we don't usually\nknow the point spread function.  Typically we have to experiment with\ndifferent ones until we find something that works.  For many cameras it's a\nreasonable approximation to assume the point spread function is Gaussian:\n\n$$\\begin{equation}\nf(x,y) = \\exp\\biggl( -{x^2+y^2\\over2\\sigma^2} \\biggr),\n\\end{equation}$$\n\nwhere $\\sigma$ is the width of the Gaussian.  Even with this assumption,\nhowever, we still don't know the value of $\\sigma$ and we may have to\nexperiment to find a value that works well.  In the following exercise, for\nsimplicity, we'll assume we know the value of $\\sigma$.\n\n(a) In the folder that you obtained this notebook you will find a file called `blur.txt` that\n    contains a grid of values representing brightness on a black-and-white\n    photo---a badly out-of-focus one that has been deliberately blurred using\n    a Gaussian point spread function of width $\\sigma=25$.  Write a program\n    that reads the grid of values into a two-dimensional array of real\n    numbers and then draws the values on the screen of the computer as a\n    density plot.  You should see the photo appear.  If you get something\n    wrong it might be upside-down.  Work with the details of your program\n    until you get it appearing correctly.
(Hint: The picture has the sky,\n    which is bright, at the top and the ground, which is dark, at the\n    bottom.)", "_____no_output_____" ] ], [ [ "def read_2D_data(filename):\n    \"\"\"Reads 2D data\n    \n    Args:\n        filename (str): name of the file\n    \n    Returns:\n        numpy array: the data\"\"\"\n    \n    data = None\n    \n    # YOUR CODE HERE\n    data = np.loadtxt(filename)\n    #raise NotImplementedError()\n    \n    return data\n\ndata = read_2D_data('blur.txt')\n\n# plotting\n# YOUR CODE HERE\nplt.imshow(data, cmap='gray')\nplt.show()\n#raise NotImplementedError()", "_____no_output_____" ], [ "# validation\n\ndata = read_2D_data('blur.txt')\nassert isinstance(data, np.ndarray), 'bad function'\nassert data.dtype == float, 'bad function'\nassert data.shape == (1024, 1024), 'bad function'", "_____no_output_____" ] ], [ [ "(b) Write another program that creates an array, of the same size as the\n    photo, containing a grid of samples drawn from the Gaussian $f(x,y)$\n    above with $\\sigma=25$.  Make a density plot of these values on the\n    screen too, so that you get a visualization of your point spread\n    function.  Remember that the point spread function is periodic (along\n    both axes), which means that the values for negative $x$ and $y$ are\n    repeated at the end of the interval.  Since the Gaussian is centered on\n    the origin, this means there should be bright patches in each of the four\n    corners of your picture, something like this:\n\n<img src=\"psf.png\" width=\"250\" />\n\nNote: This image has a smaller shape than blur.txt, therefore the\nspreads will not be as drastic as in the photo.", "_____no_output_____" ] ], [ [ "def get_point_spread_function(data, sigma):\n    \"\"\"Returns a Gaussian point spread function\n    \n    Args:\n        data (numpy array): the photo array to get the size of the spread function\n        sigma (float): sigma of the Gaussian\n    \n    Returns:\n        numpy array: the point spread function\"\"\"\n    spread = np.zeros_like(data)\n    \n    # YOUR CODE HERE\n    w = 2*sigma**2\n    \n    # Upper left corner\n    x,y = np.meshgrid(np.arange(data.shape[0]), np.arange(data.shape[1]))\n    d = x*x + y*y\n    spread += np.exp(-d/w)\n    \n    # Upper right corner\n    x,y = np.meshgrid(np.arange(data.shape[0]-1,-1,-1), np.arange(data.shape[1]))\n    d = x*x + y*y\n    spread += np.exp(-d/w)\n    \n    # Lower left corner\n    x,y = np.meshgrid(np.arange(data.shape[0]), np.arange(data.shape[1]-1,-1,-1))\n    d = x*x + y*y\n    spread += np.exp(-d/w)\n    \n    # Lower right corner\n    x,y = np.meshgrid(np.arange(data.shape[0]-1,-1,-1), np.arange(data.shape[1]-1,-1,-1))\n    d = x*x + y*y\n    spread += np.exp(-d/w)\n    #raise NotImplementedError()\n    \n    return spread\n\npoint_func = get_point_spread_function(data, 25)\n\n# plotting\n# YOUR CODE HERE\nplt.imshow(point_func, cmap='gray')\nplt.show()\n#raise NotImplementedError()", "_____no_output_____" ], [ "# validation\nfn = read_2D_data('blur.txt')\npt_fn = get_point_spread_function(fn, 25)\nassert pt_fn.shape == fn.shape, 'shape mismatch'\n# checking 90 deg rotational symmetry, since the gaussian is even in all corners\nassert np.mean(np.abs(pt_fn - np.rot90(pt_fn, k=1))) <= 1e-4, 'bad function'", "_____no_output_____" ] ], [ [ "(c) Combine your two programs and add Fourier transforms using the\n    functions `rfft2` and `irfft2` from `numpy.fft`, to make a\n    program that does the following:\n\ni) Calculates the Fourier transforms of both the photo and the point spread function\n\nii) Divides one by the other\n\niii) Performs an inverse transform to get the unblurred photo\n\niv) Displays the unblurred photo on the screen\n\nWhen you are done, you should be able
to make out the scene in\nthe photo, although probably it will still not be perfectly sharp.\n\nHint: One thing you'll need to deal with is what happens when the Fourier\ntransform of the point spread function is zero, or close to zero.  In that\ncase if you divide by it you'll get an error (because you can't divide by\nzero) or just a very large number (because you're dividing by something\nsmall).  A workable compromise is that if a value in the Fourier transform\nof the point spread function is smaller than a certain amount $\\epsilon$\nyou don't divide by it---just leave that coefficient alone.  The value of\n$\\epsilon$ is not very critical but a reasonable value seems to be $10^{-3}$.", "_____no_output_____" ] ], [ [ "def deconvolve(data, point_function):\n    \"\"\"Deconvolves point_function from the data\n    \n    Args:\n        data (numpy array): the 2D data to deconvolve\n        point_function (numpy array): the point function\n    \n    Returns:\n        numpy array: Deconvolved array\"\"\"\n    \n    deconv_data = np.zeros_like(data)\n    epsilon = 1e-3\n    \n    # YOUR CODE HERE\n    # FFT for the blurred data\n    rfft_data = np.fft.rfft2(data)\n    \n    # FFT for the spread function\n    rfft_pf = np.fft.rfft2(point_function)\n    \n    # Set coefficients whose magnitude is below epsilon to 1 in the spread-function FFT\n    # (dividing by 1 leaves those coefficients alone)\n    rfft_pf[np.abs(rfft_pf) < epsilon] = 1\n    \n    # a_kl = b_kl / (f_kl * K * L)\n    conv_data = rfft_data / (rfft_pf * data.shape[0] * data.shape[1])\n    \n    # Inverse FFT for the convoluted data\n    deconv_data = np.fft.irfft2(conv_data)\n    #raise NotImplementedError()\n    \n    return deconv_data\n\ndata = read_2D_data('blur.txt')\n# try different values of sigma to get the best deconvolution without artefacts\npoint_func = get_point_spread_function(data, 19)\ndeconv_data = deconvolve(data, point_func)\n\n# plotting\n# YOUR CODE HERE\nplt.imshow(deconv_data, cmap='gray')\nplt.show()\n#raise NotImplementedError()", "_____no_output_____" ] ], [ [ "(d) Bearing in mind this last point about zeros in the Fourier transform,\n    what is it that limits our ability to deblur a photo?  Why can we not\n    perfectly unblur any photo and make it completely sharp?\n\n\nWe have seen this process in action here for a normal snapshot, but it is\nalso used in many physics applications where one takes photos.  For\ninstance, it is used in astronomy to enhance photos taken by telescopes.\nIt was famously used with images from the Hubble Space Telescope\nafter it was realized that the telescope's main mirror had a serious\nmanufacturing flaw and was returning blurry photos---scientists managed to\npartially correct the blurring using Fourier transform techniques.", "_____no_output_____" ], [ "### Answer (double click to edit)\n\nIf the values in the Fourier-transformed point function become very small, we do not divide by those points, i.e. they are left \"untreated\", which weakens the unblurring. If enough points are left untouched, the image cannot be properly unblurred. Because the Fourier transform produces very small values for at least some of the points, the unblurring can never be perfect. Also, if the point function parameter sigma is increased too much, artifacts are introduced into the image. While increasing the sigma makes the image sharper, at some point it starts to draw grid-like dots on the image, which occurs because the Gaussians become too wide.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
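The exercises in the record above repeatedly apply a single idea: zero all but the lowest-frequency fraction of the rfft coefficients, then invert. A minimal self-contained sketch of that smoothing step, assuming only numpy (the function and variable names are illustrative, not taken from the notebook):

```python
import numpy as np

def fourier_smooth(y, keep_fraction=0.02):
    """Low-pass smooth 1D data by keeping only the first fraction of rfft coefficients."""
    c = np.fft.rfft(np.asarray(y, dtype=float))
    cutoff = int(len(c) * keep_fraction)
    c[cutoff:] = 0.0                    # discard the high-frequency terms
    return np.fft.irfft(c, n=len(y))    # n=len(y) also handles odd-length input

t = np.linspace(0.0, 1.0, 501)
noisy = t + 0.1 * np.random.randn(t.size)
smoothed = fourier_smooth(noisy, keep_fraction=0.05)
```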
ece08bbf4ba1949bb0bc4396fb68014c0a45f531
6,895
ipynb
Jupyter Notebook
examples/minimal/00_from_config_file.ipynb
goodok/composed-trackers
d7d277dbceefd69ab8632da9848bba6d4ceea7df
[ "MIT" ]
2
2020-02-15T22:25:04.000Z
2020-02-26T11:03:25.000Z
examples/minimal/00_from_config_file.ipynb
goodok/composed-trackers
d7d277dbceefd69ab8632da9848bba6d4ceea7df
[ "MIT" ]
1
2020-02-19T20:05:01.000Z
2020-02-20T10:35:20.000Z
examples/minimal/00_from_config_file.ipynb
goodok/composed-trackers
d7d277dbceefd69ab8632da9848bba6d4ceea7df
[ "MIT" ]
null
null
null
21.750789
99
0.498477
[ [ [ "## Imports", "_____no_output_____" ] ], [ [ "%load_ext autoreload\n%autoreload 2\n\nfrom composed_trackers import Config, build_from_cfg, TRACKERS\nfrom pathlib import Path", "_____no_output_____" ] ], [ [ "## Configuration", "_____no_output_____" ] ], [ [ "fn_config = Path('configs/example_00.yaml')\ncfg = Config.fromfile(fn_config)", "_____no_output_____" ], [ "print(cfg.text)", "\ntracker:\n type: ComposedTrackers\n trackers: [NeptuneTracker, SimpleTracker]\n name: 'Experiment 00 name'\n description: 'Example of using configuration file.'\n tags: ['examples', 'config']\n offline: True\n SimpleTracker:\n root_path: './logs'\n exp_id_template: 'EXAM00-{i:03}'\n NeptuneTracker:\n project: 'USER_NAME/PROJECT_NAME'\n\nseed: 777\n\noptimizer:\n type: torch.optim.Adam\n weight_decay: 0.0001\n lr: 0.6\n\n" ], [ "# Comment this line if you finish debugging.\ncfg.tracker.offline = True ", "_____no_output_____" ] ], [ [ "## Create tracker from config", "_____no_output_____" ] ], [ [ "\ntracker = build_from_cfg(cfg.tracker, TRACKERS, \n {'params': cfg.to_flatten_dict()} # flattening config --> params\n )", "Neptune is running in offline mode. No data is being logged to Neptune.\nDisable offline mode to log your experiments.\n" ], [ "tracker.describe()", "ComposedTrackers description:\n name : Experiment 00 name\n description : Example of using configuration file.\n tags : ['examples', 'config']\n offline : True\n\nNeptuneTracker\n exp_id: None\n project: USER_NAME/PROJECT_NAME\n\nSimpleTracker\n exp_id: EXAM00-005\n path: logs/EXAM00-005\n\n" ], [ "# all params of experiment\ntracker.params", "_____no_output_____" ] ], [ [ "## Use trackers", "_____no_output_____" ] ], [ [ "tracker.append_tag('introduction-minimal-example')", "_____no_output_____" ], [ "n = 117\nfor i in range(1, n):\n tracker.log_metric('iteration', i)\n tracker.log_metric('loss', 1/i**0.5)\n tracker.log_text('magic values', 'magic value {}'.format(0.95*i**2))\ntracker.set_property('n_iterations', n)\n\n\ntracker.log_text_as_artifact('Hello', 'summary.txt')", "_____no_output_____" ] ], [ [ "## Stop", "_____no_output_____" ] ], [ [ "tracker.stop()", "NeptuneTracker stopping... Ok.\nBaseTracker stopping... Ok.\n" ], [ "tracker.describe(ids_only=True)", "NeptuneTracker : \u001b[92mNone\u001b[0m\nSimpleTracker : \u001b[92mEXAM00-005\u001b[0m\n\n" ], [ "!ls {tracker.path} -1", "artifacts\r\nmetrics.csv\r\nmetrics.json\r\nparams.yaml\r\nproperties.yaml\r\ntags.yaml\r\ntexts.csv\r\ntexts.json\r\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
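The build_from_cfg call in the record above follows a common type-registry pattern; a generic sketch of how such a helper can work (this illustrates the pattern only and is not the composed_trackers implementation):

```python
REGISTRY = {}

def register(cls):
    """Map a class name to the class so a config can refer to it by string."""
    REGISTRY[cls.__name__] = cls
    return cls

@register
class SimpleTracker:
    def __init__(self, root_path="./logs", **kwargs):
        self.root_path = root_path

def build_from_config(cfg, default_args=None):
    """Instantiate cfg['type'], passing the remaining keys as keyword arguments."""
    cfg = dict(cfg)                    # copy so the caller's config is untouched
    cls = REGISTRY[cfg.pop("type")]
    kwargs = dict(default_args or {})
    kwargs.update(cfg)
    return cls(**kwargs)

tracker = build_from_config({"type": "SimpleTracker", "root_path": "./logs"})
```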
ece08cf0f90945e239e2a504e480ed796dab80f8
166,530
ipynb
Jupyter Notebook
STRIFE_custom_notebook.ipynb
tomhadfield95/STRIFE
e5cea4f93a34fd64606010430ed92f34be57b6c8
[ "BSD-3-Clause" ]
1
2022-01-16T21:19:51.000Z
2022-01-16T21:19:51.000Z
STRIFE_custom_notebook.ipynb
oxpig/STRIFE
e5cea4f93a34fd64606010430ed92f34be57b6c8
[ "BSD-3-Clause" ]
null
null
null
STRIFE_custom_notebook.ipynb
oxpig/STRIFE
e5cea4f93a34fd64606010430ed92f34be57b6c8
[ "BSD-3-Clause" ]
1
2022-03-11T12:21:47.000Z
2022-03-11T12:21:47.000Z
222.04
88,760
0.899676
[ [ [ "# Import STRIFE code and default arguments", "_____no_output_____" ] ], [ [ "from STRIFE import STRIFE #STRIFE module\nfrom parse_args import parse_args #Get all of the default arguments for STRIFE\n\nfrom rdkit import Chem\nfrom rdkit.Chem.Draw import IPythonConsole\n", "_____no_output_____" ], [ "%config Completer.use_jedi = False\nimport warnings\nwarnings.filterwarnings(\"ignore\", category=DeprecationWarning)", "/data/hookbill/hadfield/CSD/Python_API_2021/miniconda/envs/STRIFE_clone/lib/python3.7/site-packages/ipykernel/ipkernel.py:287: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.\n  and should_run_async(code)\n" ], [ "args = parse_args()", "_____no_output_____" ] ], [ [ "# Set arguments to run STRIFE\n\nTo run STRIFE, we need to specify a few key arguments. You can find a full list of the arguments (and a brief comment on what each of them does) either by opening the ```parse_args.py``` file, or by running ```python STRIFE.py --help``` in the command line.\n\nIn this notebook, we're going to show how user-specified pharmacophoric points can be provided to STRIFE. We've already specified the pharmacophoric points using the ```data_prep/do_manual_pharm_specification.sh``` script (and they are saved in the ```example_custom_pharms/STRIFE_1q8t``` directory).\n\n### Arguments we need to specify:\n\n```protein```: The path to a pdb file which contains our protein of interest - the PDB file needs to be prepared so that it can be used by GOLD to dock ligands. More information can be found [here](https://www.ccdc.cam.ac.uk/support-and-resources/ccdcresources/GOLD_User_Guide.pdf), but generally, you need to remove any ligands and waters and ensure that the protein has been protonated.\n\n\nSpecifying the fragment of interest:\n\nThere are two ways to tell STRIFE which fragment to elaborate (and which exit vector you want to make elaborations from).\n\n* Use the ```fragment_SDF``` argument to specify the structure of the fragment you want to elaborate. This must be a bound fragment that fits in the ```protein``` binding site. We also have to specify an ```exit_vector_idx``` - this is the index of the atom that the elaborations will be generated from. We have written a script ```specifyExitVector.py``` (see the README for more info) that you can use to help you identify the index of the atom you want to elaborate from.\n\n* Alternatively, we can specify a ```fragment_SDF``` and ```fragment_smiles```. ```fragment_smiles``` is a SMILES string of the desired fragment, where the exit vector is denoted by a dummy atom (again ```specifyExitVector.py``` can help you obtain this SMILES string).
You can provide either the raw string as an argument, or a file in which the SMILES string is saved.\n\nStoring the output:\n\n* Specify the directory you would like to store the output in as ```output_directory``` - if the directory doesn't already exist then it will be created.\n\n\n", "_____no_output_____" ] ], [ [ "#Required arguments\nargs.protein = 'example_custom_pharms/1q8t_protein.pdb' \nargs.fragment_sdf = 'example_custom_pharms/1q8t_frag.sdf'\nargs.fragment_smiles = 'example_custom_pharms/1q8t_frag_smiles.smi'\nargs.output_directory = 'example_custom_pharms/STRIFE_1q8t' #When using user-specified pharmacophoric points, this directory must already contain\n #a donorHotspot.sdf or acceptorHotspot.sdf file (or both)\n\n#Extra arguments for user-specified pharmacophoric points run\nargs.load_specified_pharms = True\nargs.model_type = 1\n\n#Other arguments\nargs.num_cpu_cores = 7\nargs.write_elaborations_dataset = True", "_____no_output_____" ] ], [ [ "# Running STRIFE", "_____no_output_____" ] ], [ [ "#Create the STRIFE class\nSTRIFE_model = STRIFE(args)", "Running STRIFE Algorithm....\nDoing argument checking...\nArgument checking complete.\nProcessing pharmacophoric information\nPreprocessing fragment\n" ], [ "#Run STRIFE\nSTRIFE_model.run(args)", "WARNING: Logging before flag parsing goes to stderr.\nW0308 12:05:01.421705 140584540915520 lazy_loader.py:50] \nThe TensorFlow contrib module will not be included in TensorFlow 2.0.\nFor more information, please see:\n * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md\n * https://github.com/tensorflow/addons\n * https://github.com/tensorflow/io (for I/O related ops)\nIf you depend on functionality not listed there, please file an issue.\n\n" ] ], [ [ "# View the generated elaborations", "_____no_output_____" ] ], [ [ "ranked_elabs = STRIFE_model.pharmElabsTestMultiLigEff\nranked_elabs.head(10)", "_____no_output_____" ] ], [ [ "### Now let's visualise the most highly ranked elaborations", "_____no_output_____" ] ], [ [ "Chem.Draw.MolsToGridImage([Chem.MolFromSmiles(s) for s in ranked_elabs['smiles'].drop_duplicates().head(16)], molsPerRow = 4)", "_____no_output_____" ] ], [ [ "Note that, in this instance, the majority of the most highly ranked molecules do not exhibit an elaboration containing one HBA and one HBD. 
Whilst each of the quasi-actives yielded a fine-grained pharmacophoric profile containing one HBA and one HBD, the generative model is not forced to generate molecules with such profiles, and molecules with other pharmacophoric profiles attained better predicted ligand efficiency.\n\nThis is not surprising behaviour, as the pharmacophoric points we used were specified arbitrarily to demonstrate how a user can specify their own pharmacophoric points.", "_____no_output_____" ], [ "If we want to, we can view the quasi-actives which determined the pharmacophoric profiles used in the refinement phase:", "_____no_output_____" ] ], [ [ "#We can view basic information about each pharmacophoric point\n\nfor k in STRIFE_model.hMulti.keys():\n    \n    print(f'****Pharmacophoric point: {k}****')\n    for kk in STRIFE_model.hMulti[k].keys():\n        print(f'{kk} : {STRIFE_model.hMulti[k][kk]}')\n    \n    print('\\n')\n    ", "****Pharmacophoric point: 0****\ntype : Acceptor\nposition : [ 2.675 10.168  3.463]\ndistFromExit : 4.55521678957215\nangFromExit : 0.3618600157404793\n\n\n****Pharmacophoric point: 1****\ntype : Donor\nposition : [4.675 8.168 4.463]\ndistFromExit : 2.958039891549808\nangFromExit : 0.16637923386536377\n\n\n" ], [ "STRIFE_model.multiQuasiActives", "_____no_output_____" ], [ "Chem.Draw.MolsToGridImage([Chem.MolFromSmiles(s) for s in STRIFE_model.multiQuasiActives['smiles']])", "_____no_output_____" ] ], [ [ "### Accessing the docked poses\n\nSTRIFE saves the docked poses in the ```output_directory``` under the name ```pharmsElabsTestDocked.sdf```. You can view them in the binding pocket using a molecule viewer such as PyMOL.", "_____no_output_____" ] ], [ [ "docked_mols = Chem.SDMolSupplier(f'{args.output_directory}/pharmsElabsTestMultiDocked.sdf')\ndocked_mols[0] #A 2D depiction of one of the docked mols", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
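The fragment_smiles convention in the record above marks the exit vector with a dummy atom; a minimal sketch of locating that atom with RDKit (the SMILES below is a made-up example, not the 1q8t fragment):

```python
from rdkit import Chem

frag = Chem.MolFromSmiles("c1ccccc1[*]")    # the dummy atom [*] marks the exit vector
for atom in frag.GetAtoms():
    if atom.GetAtomicNum() == 0:            # atomic number 0 identifies a dummy atom
        exit_vector_idx = atom.GetIdx()
        attachment_idxs = [n.GetIdx() for n in atom.GetNeighbors()]
```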
ece08fea49e343ef6ccdc75c91845ef2b7af5865
39,478
ipynb
Jupyter Notebook
work4-sigmod.ipynb
zgx0815/ML
6201766ab49af2de82defb48663760ea056f3b19
[ "Apache-2.0" ]
null
null
null
work4-sigmod.ipynb
zgx0815/ML
6201766ab49af2de82defb48663760ea056f3b19
[ "Apache-2.0" ]
null
null
null
work4-sigmod.ipynb
zgx0815/ML
6201766ab49af2de82defb48663760ea056f3b19
[ "Apache-2.0" ]
null
null
null
131.156146
29,232
0.834287
[ [ [ "import pandas as pd\ndf = pd.read_csv(r'http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data')\n# note: read_csv consumes the first data row as a header here, leaving 49 setosa samples\ndf.tail()\n", "_____no_output_____" ], [ "y = df.iloc[0:99, 4].values\ny\n", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt\ny = np.where(y == 'Iris-setosa', -1, 1)\nx = df.iloc[0:99, [0, 2]].values\nplt.scatter(x[:49, 0], x[:49, 1], color='red', marker='o', label='setosa')\nplt.scatter(x[49:99, 0], x[49:99, 1], color='blue', marker='x', label='versicolor')\nplt.xlabel('sepal length')\nplt.ylabel('petal length')\nplt.legend(loc='upper left')\nplt.show()\n", "_____no_output_____" ], [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import ListedColormap\n\n\nclass AdalineBGD:\n    def __init__(self, eta=0.01, n_iter=50):\n        self.eta = eta\n        self.n_iter = n_iter\n\n    def fit(self, X, y):\n        self.w_ = np.zeros(X.shape[1]+1)\n        self.J_ = []\n\n        for _ in range(self.n_iter):\n            z = np.dot(X, self.w_[1:]) + self.w_[0]\n            self.w_[0] += self.eta * (y - z).sum()\n            self.w_[1:] += self.eta * np.dot(X.T, (y - z))\n            self.J_.append(((y-z)**2).sum()*0.5)\n        return self\n\n    def predict(self, X):\n        return np.where(np.dot(X, self.w_[1:]) + self.w_[0] >= 0.0, 1, -1)\n\n\ndef plot_decision_regions(X, y, classifier, ax):\n\n    # setup marker generator and color map\n    markers = ('o', 'x')\n    colors = ('blue', 'red')\n    cmap = ListedColormap(colors[:len(np.unique(y))])\n\n    # plot the decision region\n    x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1\n    x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1\n    xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, 0.02), np.arange(x2_min, x2_max, 0.02))\n    z = classifier.predict(np.array([xx1.flatten(), xx2.flatten()]).T)\n    z = z.reshape(xx1.shape)\n    ax[1].contourf(xx1, xx2, z, cmap=cmap, alpha=0.3)\n    ax[1].set_xlim(x1_min, x1_max)\n    ax[1].set_ylim(x2_min, x2_max)\n\n    # plot class samples; cmap(idx) is a single RGBA color, so pass it as color=\n    for idx, cl in enumerate(np.unique(y)):\n        ax[1].scatter(x=X[y == cl, 0], y=X[y == cl, 1], color=cmap(idx), marker=markers[idx], label=cl, alpha=1)\n    ax[1].legend(loc='best')\n    ax[1].set_title('Adaline - batch gradient descent')\n    ax[1].set_xlabel('standardized sepal length')\n    ax[1].set_ylabel('standardized petal length')\n\n\ndf = pd.read_csv(r'http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data')\ny = df.iloc[:99, 4].values  # 49 setosa + 50 versicolor (first row consumed as header)\ny = np.where(y == 'Iris-setosa', -1, 1)\nX = df.iloc[:99, [0, 2]].values\nX_std = X.copy()\nX_std[:, 0] = (X[:, 0] - X[:, 0].mean()) / X[:, 0].std()\nX_std[:, 1] = (X[:, 1] - X[:, 1].mean()) / X[:, 1].std()\n\n\nfig1, ax1 = plt.subplots(1, 2, figsize=(8, 4))\ndemo = AdalineBGD(n_iter=20, eta=0.01).fit(X_std, y)\nplot_decision_regions(X_std, y, demo, ax1)\nax1[0].plot(range(1, len(demo.J_)+1), np.log10(demo.J_), marker='o')\nax1[0].set_title('learning rate 0.01')\nax1[0].set_xlabel('number of iterations')\nax1[0].set_ylabel('log(sum-of-squared-errors cost J)')\n\n\nplt.show()\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
ece09287b9e61ac537e4f1536bb2e7a057651e0a
17,233
ipynb
Jupyter Notebook
proto/SVM/SVM-Use-Use-with-EV.ipynb
pjkundert/wikienergy
ac3a13780bccb001c81d6f8ee27d3f5706cfa77e
[ "MIT" ]
29
2015-01-08T19:20:37.000Z
2021-04-20T08:25:56.000Z
proto/SVM/SVM-Use-Use-with-EV.ipynb
pjkundert/wikienergy
ac3a13780bccb001c81d6f8ee27d3f5706cfa77e
[ "MIT" ]
null
null
null
proto/SVM/SVM-Use-Use-with-EV.ipynb
pjkundert/wikienergy
ac3a13780bccb001c81d6f8ee27d3f5706cfa77e
[ "MIT" ]
17
2015-02-01T18:12:04.000Z
2020-06-15T14:13:04.000Z
24.653791
175
0.472234
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
ece0a75c14e048a194aee937f4bb7b1b9c4825fa
11,085
ipynb
Jupyter Notebook
lyrics-similarity-playground.ipynb
linhd-postdata/lyrics-similarity-elasticsearch-research
96f83d7d73e18f59c72ff009de0ab883abcc406d
[ "Apache-2.0" ]
null
null
null
lyrics-similarity-playground.ipynb
linhd-postdata/lyrics-similarity-elasticsearch-research
96f83d7d73e18f59c72ff009de0ab883abcc406d
[ "Apache-2.0" ]
null
null
null
lyrics-similarity-playground.ipynb
linhd-postdata/lyrics-similarity-elasticsearch-research
96f83d7d73e18f59c72ff009de0ab883abcc406d
[ "Apache-2.0" ]
null
null
null
37.449324
205
0.572756
[ [ [ "This is a script to play around with semantic similarity of the songs' lyrics \nRemember to start elasticsearch server before by runing in the terminal following comments: \ncd elasticsearch-7.13.2 \nbin/elasticsearch ", "_____no_output_____" ] ], [ [ "import ipywidgets as widgets\nfrom ipywidgets import interact, interact_manual\nimport pandas as pd\nfrom IPython.display import display, HTML\nfrom itertools import chain\nimport umap\nimport matplotlib.pyplot as plt\n\nfrom src.classes.ElasticsearchManager import ElasticsearchManager\nfrom src.classes.Evaluator import Evaluator\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\n", "_____no_output_____" ], [ "def get_dataset(index_name):\n dataset_name = 'datasets/' + index_name.split('_')[1] + '/lyrics.csv'\n dataset = pd.read_csv(dataset_name, sep='\\t', header=0)\n df = dataset[['song_id', 'title', 'text']]\n labels_name = 'testsets/' + index_name.split('_')[1] + '-testset.csv'\n labels_df = pd.read_csv(labels_name, sep='\\t')\n labels_df = labels_df[['song_id', 'most_similar_song_id']]\n labels_df = labels_df.groupby('song_id')['most_similar_song_id'].apply(list)\n df = df.merge(labels_df, on='song_id', sort=False)\n return df\n\ndef merge_titles(df, index_name):\n dataset_name = 'datasets/' + index_name.split('_')[1] + '/lyrics.csv'\n dataset = pd.read_csv(dataset_name, sep='\\t', header=0)\n dataset = dataset[['song_id', 'title']]\n return df.merge(dataset, on='song_id')\n\ndef merge_labels(df, index_name):\n labels_name = 'testsets/' + index_name.split('_')[1] + '-labels.csv'\n labels_df = pd.read_csv(labels_name, sep='\\t')\n labels_df = labels_df[['song_id', 'most_similar_song_id']]\n df = df.merge(labels_df, on='song_id', sort=False)\n return df\n\ndef get_song_id(df, song_title):\n row = df.loc[df['title'] == song_title]\n return row['song_id'].values[0]\n\ndef get_song_title(df, song_id):\n row = df.loc[df['song_id'].isin(song_id)]\n return row['title'].values[0]\n\ndef get_list_of_titles(df):\n return df['title'].tolist()\n\ndef get_expected_labels(df, song_id):\n row = df.loc[df['song_id'] == song_id]\n return row['most_similar_song_id'].values[0]\n\ndef get_testset(df, song_id):\n row = df.loc[df['song_id'] == song_id]\n return [(song_id, row['most_similar_song_id'].values[0][0])]\n\ndef highlight_hit(df, song_id, expected_label):\n if df.song_id in expected_label:\n return ['background-color: green']*4\n elif df.song_id == song_id:\n return ['background-color: yellow']*4\n else:\n return ['background-color: white']*4", "_____no_output_____" ], [ "# Setting Elasticsearch\nes = ElasticsearchManager()\nlist_of_indexes = list(es.es.indices.get_alias(\"*\").keys())", "_____no_output_____" ] ], [ [ "# Ranking", "_____no_output_____" ], [ "In this section you can check how works ranking based on composition embeddings of the lyrics. 
Choose the index, the metric, the song you want to check the ranking for, and the number of songs to display.", "_____no_output_____" ] ], [ [ "@interact\ndef prepare_the_ranking(index_name=list_of_indexes):\n    df = get_dataset(index_name)\n    list_of_titles = get_list_of_titles(df)\n    l = es.es.count(index=index_name)['count']\n    print(f'There are {l} songs indexed in this index.')\n    list_of_metrics = list(chain.from_iterable(('cos_sim' + '_' + cf, 'l2' + '_' + cf, 'icm' + '_' + cf) for cf in es.get_emb_mapping(index_name)))\n    @interact(n=(1, l, 1) , metrics=list_of_metrics)\n    def show_the_ranking(song_title=list_of_titles, n=3 , metrics='cos_sim_inf_rl'):\n        song_id = get_song_id(df, song_title)\n        testset = get_testset(df, song_id)\n        evaluator = Evaluator(testset, index_name, es, n=n, verbose=True)\n        evaluator.search_similar_songs()\n        for tp in evaluator.test_points:\n            for partial in tp.partials:\n                order_key = str(partial['order'])\n                ranking = tp.similar_partials[order_key][metrics]\n                df_ranking = pd.DataFrame(ranking)\n                expected_labels = get_expected_labels(df, song_id)\n                styler = df_ranking[:n+1].style.apply(highlight_hit, song_id=song_id, expected_label=expected_labels, axis=1)\n                print(f'Ranking for song: \"{song_title}\"')\n                display(styler)\n                expected_titles = get_song_title(df, expected_labels)\n                print(f'Expected hits:')\n                display(HTML(df.loc[df['song_id'].isin(expected_labels)].to_html()))\n    ", "_____no_output_____" ] ], [ [ "# Models visualization", "_____no_output_____" ] ], [ [ "@interact(first_index=list_of_indexes, second_index=list_of_indexes)\ndef plot_lyrics_embeddings(first_index='roberta-m_26-spanish-songs_stanzas', second_index='roberta-alberti_26-spanish-songs_stanzas'):\n    \n    @interact\n    def compare_models(first_composition_functions=es.get_emb_mapping(first_index), second_composition_functions=es.get_emb_mapping(second_index)):\n        \n        query = es.make_match_all_query()\n\n        # FIRST INDEX\n        res = es.search_query(first_index, query)\n        first_df = pd.DataFrame(res)\n        first_df = merge_titles(first_df, first_index) \n        first_df = merge_labels(first_df, first_index)\n        first_embeddings = first_df[first_composition_functions].tolist()\n        first_titles = first_df['title'].tolist()\n        first_dim2vectors = umap.UMAP(n_neighbors=15, n_components=2, min_dist=0.0, metric='cosine', random_state=42).fit_transform(first_embeddings)\n        first_result = pd.DataFrame(first_dim2vectors, columns=['x', 'y'])\n        first_result['human_labels'] = first_df['most_similar_song_id']\n\n\n        fig, ax = plt.subplots(1,2, figsize=(27, 10))\n        plt.xlim(min(first_result.x) - 0.5, max(first_result.x) + 0.5)\n        plt.ylim(min(first_result.y) - 0.5, max(first_result.y) + 0.5)\n\n        ax[0].title.set_text(first_index)\n        ax[0].scatter(first_result.x, first_result.y, c=first_result.human_labels, s=500, alpha=0.8)\n        for i, txt in enumerate(first_titles):\n            ax[0].annotate(txt, (first_result.x[i], first_result.y[i]))\n\n        # SECOND INDEX \n        res = es.search_query(second_index, query)\n        second_df = pd.DataFrame(res)\n        second_df = merge_titles(second_df, second_index) \n        second_df = merge_labels(second_df, second_index)\n        second_embeddings = second_df[second_composition_functions].tolist()\n        second_titles = second_df['title'].tolist()\n        second_dim2vectors = umap.UMAP(n_neighbors=15, n_components=2, min_dist=0.0, metric='cosine',random_state=42).fit_transform(second_embeddings)\n        second_result = pd.DataFrame(second_dim2vectors, columns=['x', 'y'])\n        second_result['human_labels'] = second_df['most_similar_song_id']\n\n        plt.xlim(min(second_result.x) - 0.5, max(second_result.x) + 0.5)\n        plt.ylim(min(second_result.y) - 0.5, max(second_result.y) + 0.5)\n\n        ax[1].title.set_text(second_index)\n        ax[1].scatter(second_result.x, second_result.y, c=second_result.human_labels, s=500, alpha=0.8)\n        for i, txt in enumerate(second_titles):\n            ax[1].annotate(txt, (second_result.x[i], second_result.y[i]))\n        \n        plt.show()\n\n\n        # uncomment if you want to save the png with the figure\n        # plt.savefig( index_name + '_inf_lr'+'.png')\n    ", "_____no_output_____" ] ] ]
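The ranking widget in the notebook above compares songs by metrics such as `cos_sim` and `l2` over composition embeddings stored in Elasticsearch. As a rough, dependency-light illustration of what those two metrics compute, here is a sketch over invented three-dimensional vectors; the song names and embeddings are made up for the example and stand in for the real stored vectors.

```python
import numpy as np

def cos_sim(a, b):
    # cosine similarity: dot product of the two L2-normalised vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def l2_dist(a, b):
    # Euclidean (L2) distance between the two embeddings
    return float(np.linalg.norm(a - b))

query = np.array([0.2, 0.9, 0.1])            # invented query-song embedding
candidates = {
    "song_a": np.array([0.1, 0.8, 0.2]),
    "song_b": np.array([0.9, 0.1, 0.0]),
}
# Higher cosine similarity (and lower L2 distance) means a closer match
ranked = sorted(candidates, key=lambda s: cos_sim(query, candidates[s]), reverse=True)
print(ranked, [round(l2_dist(query, candidates[s]), 3) for s in ranked])
```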
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ece0beecd491bc9f813a818a0460d4f33ad78ce5
101,775
ipynb
Jupyter Notebook
main.ipynb
elaesme/ADM-HW3
b1f7c3f6de25e36bddf47b5323b9e3b785ac8249
[ "MIT" ]
null
null
null
main.ipynb
elaesme/ADM-HW3
b1f7c3f6de25e36bddf47b5323b9e3b785ac8249
[ "MIT" ]
null
null
null
main.ipynb
elaesme/ADM-HW3
b1f7c3f6de25e36bddf47b5323b9e3b785ac8249
[ "MIT" ]
1
2020-01-14T20:12:29.000Z
2020-01-14T20:12:29.000Z
50.160177
20,360
0.575308
[ [ [ "# Data collection:\n\n## Get the list of movies:", "_____no_output_____" ], [ "* Get a list of Wikipedia URLs from movies1.html/ movies2.html/ movies3.html", "_____no_output_____" ] ], [ [ "from bs4 import BeautifulSoup\nimport requests\nimport time\nimport random\nf1 = open(\"HW3 ADM/movies3.html\")\nf3 = open(\"HW3 ADM/movies1.html\")\nf2 = open(\"HW3 ADM/movies2.html\")\nsoup = BeautifulSoup(f1)\nsoup1 = BeautifulSoup(f3)\nsoup2 = BeautifulSoup(f2)\nlistUrl_Movies1=[]\nlistUrl_Movies2=[]\nlistUrl_Movies3=[]\nfor link in soup.select('a'):\n listUrl_Movies3.append(link.text)\nfor link in soup2.select('a'):\n listUrl_Movies2.append(link.text)\nfor link in soup1.select('a'):\n listUrl_Movies1.append(link.text)", "_____no_output_____" ] ], [ [ "* Merge these 3 lists in a single list that contains all 30 000 urls", "_____no_output_____" ] ], [ [ "totalMovies=listUrl_Movies1+listUrl_Movies2+listUrl_Movies3", "_____no_output_____" ] ], [ [ "## Crawl Wikipedia:", "_____no_output_____" ], [ "* Function 'downloadFile' that allows to download html files from URL list\n\nWe downloaded all movies. 10 000 per person. Every person downloaded file from:\n* listUrl_Movies1/ listUrl_Movies2/ listUrl_Movies3", "_____no_output_____" ] ], [ [ "def downloadFile():\n\n for index in range(listUrl_Movies1):\n #set time=20 mintes\n t2=1200\n try:\n #wait 5 seconds bettween every request\n t1 = random.randint(1,5)\n time.sleep(t1)\n url=listUrl_Movies3[index]\n response = requests.get(url)\n name=\"article_\"\n extension=\".html\"\n file=\"{}{}{}\".format(name,index,extension)\n with open(file,'wb') as f: \n f.write(response.content) \n\n except response.status_code as e:\n print(\"exception\")\n #error=492 is error that occurs when we have done a limit of request\n if e==492:\n #wait 20 minutes \n time.sleep(t2)\n downloadFile(index+1)\n elif e==200:\n soup = BeautifulSoup(listUrl_Movies3[1])\n \n with open(file,'w') as f: \n f.write(soup.text)\n downloadFile(index+1)\n else:\n continue\n\n", "_____no_output_____" ], [ "downloadFile()", "_____no_output_____" ] ], [ [ "## Parse downloaded pages:", "_____no_output_____" ], [ "* Create .tsv files for every html file and put them in tsv directory. These files contain data (title,intro,plot and infobox infos) as written in hw track. 
About starring in infobox, we saved every actor in a list (to do bonus section).", "_____no_output_____" ] ], [ [ "import csv\nimport pandas as pd\nimport os.path\n#define column of our dataframe\ndf=pd.DataFrame(columns=['title', 'intro', 'plot','film_name','producer','director','writer','starring','music','release date','runtime','country','language','budget'])\n\n\nfor index in range(len(totalMovies)):\n print(index)\n title=''\n plot=''\n intro=''\n title_name='NA'\n producer='NA'\n director='NA'\n writer='NA'\n starring=['NA']\n music='NA'\n release_date='NA'\n runtime='NA'\n country='NA'\n language='NA'\n budget='NA'\n \n \n #define name of the file that we want to find (in my case: in the same directory)\n name=\"article_\"\n extension=\".html\"\n file=\"{}{}{}\".format(name,index ,extension)\n \n #check if this file exists\n if not os.path.isfile(\"HW3 ADM/\"+file):\n continue\n \n #open file \n response2 = open(\"HW3 ADM/\"+file)\n soup = BeautifulSoup(response2)\n #take title.\n title=soup.title.text.rsplit(' ', 2)[0]\n \n #take all p in intro(firt section)\n #print(soup.find('span', attrs={'class': 'mw-headline'}))\n if soup.find('span', attrs={'class': 'mw-headline'}):\n heading = soup.find('span', attrs={'class': 'mw-headline'})\n paragraphs = heading.find_all_previous('p')\n for p in paragraphs: \n intro = p.text + intro\n \n \n #take all p in 'plot'(second section)\n b=True\n #print(soup.find('span', attrs={'class': 'mw-headline'}))\n if soup.find('span', attrs={'class': 'mw-headline'}): \n \n heading = soup.find('span', attrs={'class': 'mw-headline'})\n \n for item in heading.parent.nextSiblingGenerator():\n \n if item.name=='h2':\n break\n if hasattr(item, \"text\"):\n \n plot+=item.text\n\n else:\n plot=\"NAN\"\n \n else:\n intro=\"NAN\"\n plot=\"NAN\"\n \n \n #Get info about infobox from every page and put them in respective sections in tsv file \n if soup.find('table', attrs={'class': 'infobox vevent'}):\n \n table = soup.find('table', attrs={'class': 'infobox vevent'}) \n \n if table.find('th', attrs={'class': 'summary'}):\n \n x=table.find('th', attrs={'class': 'summary'})\n title_name=x.text.strip()\n \n for cell in table.find_all('th'):\n \n if cell.find_next_sibling('td'):\n a=cell.find_next_sibling('td')\n if cell.text.strip()=='Directed by':\n director=a.text.strip()\n elif cell.text.strip()=='Produced by':\n \n producer=a.text.strip()\n elif cell.text.strip()=='Written by':\n \n writer=a.text.strip()\n elif cell.text.strip()=='Starring':\n listStarring=[]\n for link in a.select('a'):\n \n listStarring.append(link.text)\n starring=listStarring\n #print(starring)\n elif cell.text.strip()=='Music by':\n \n music=a.text.strip()\n elif cell.text.strip()=='Release date':\n release_date=a.text.strip() \n elif cell.text.strip()=='Running time':\n \n runtime=a.text.strip()\n elif cell.text.strip()=='Country':\n \n country=a.text.strip()\n elif cell.text.strip()=='Language':\n \n language=a.text.strip()\n elif cell.text.strip()=='Budget':\n \n budget=a.text.strip()\n else:\n continue\n \n \n #put all infos in movie list\n movie=[title,intro,plot,title_name,producer,director,writer,starring,music,release_date,runtime,country,language,budget]\n #update dataframe with this list\n extension2=\".tsv\"\n file=\"{}{}{}\".format(name,index,extension2)\n \n movieTitle=[\"title\",\"intro\",\"plot\",\"title_name\",\"producer\",\"director\",\"writer\",\"starring\",\"music\",\"release_date\",\"runtime\",\"country\",\"language\",\"budget\"]\n with open(\"HW3 ADM/tsv_new/\"+file, 'w', 
newline='') as f_output:\n        tsv_output = csv.writer(f_output, delimiter='\\t')\n        tsv_output.writerow(movieTitle)\n        tsv_output.writerow(movie)\n    df.loc[index] = movie", "_____no_output_____" ] ], [ [ "# Search Engine:\n\n## Search engine 1: Conjunctive query ", "_____no_output_____" ], [ "* Create tsv files with preprocessed text. We preprocessed all the text in the tsv files (that we have just created) and put the result in other tsv files in the tsv_correct directory. For every section we keep the list of all words that appear in that section. We have also deleted duplicated words.", "_____no_output_____" ] ], [ [ "import string\nimport csv\n\nfrom shutil import move\nfrom nltk.tokenize import RegexpTokenizer\n#create tsv files in the 'tsv_correct' directory where we store the preprocessed version of the tsv files (just created in parser.py)\ntokenizer = RegexpTokenizer(r'\\w+')\nname=\"article_\"\nextension2=\".tsv\"\nexclude = string.punctuation\nfor index in range(len(totalMovies)):\n    print(index)\n\n\n    file=\"{}{}{}\".format(name,index,extension2)\n    with open(\"HW3 ADM/tsv/\"+file,\"r\") as tsvfile, open(\"HW3 ADM/tsv_correct/\"+file,\"w\") as outfile:\n        tsvreader = csv.reader(tsvfile, delimiter=\"\\t\")\n        tsvwriter = csv.writer(outfile, delimiter=\"\\t\")\n        for row in tsvreader:\n            for i in range(len(row)):\n                #tokenize every field, removing punctuation and other symbols\n                row[i] = tokenizer.tokenize(row[i])\n                #remove duplicate case-insensitive elements\n                row[i]= list(set(map(str.lower, row[i])))\n                #row[i] = row[i].translate({ord(c): None for c in string.punctuation})\n                \n            tsvwriter.writerow(row)\n            ", "_____no_output_____" ] ], [ [ "* We created a vocabulary that we stored in the vocabulary.tsv file. Here we have all the words that appear in all tsv files (in the intro and plot sections, as written in the hw track). Every word matches a unique term_id. This term_id is a simple counter.", "_____no_output_____" ] ], [ [ "import ast\nfrom itertools import islice\nimport csv\n#create the vocabulary and save it on vocabulary.tsv\n \n\ndict1 = dict()\nterm_id=0\npresent=False\nwith open('HW3 ADM/tsv/vocabulary.tsv', 'w', newline='') as f_output:\n    tsv_vocabulary = csv.writer(f_output, delimiter='\\t')\n    tsv_vocabulary.writerow(['word','term_id'])\n    name=\"article_\"\n    extension2=\".tsv\"\n    \n    for index in range(len(totalMovies)):\n        \n        print(index)\n        file=\"{}{}{}\".format(name,index,extension2)\n        with open(\"HW3 ADM/tsv_correct/\"+file,\"r\") as tsvfile:\n            data_list = list(csv.reader(tsvfile, delimiter=\"\\t\"))\n            tsvreader = csv.reader(tsvfile, delimiter=\"\\t\")\n            #put in intro a list of all words that we have in the intro of the i-th page\n            intro=data_list[1][2]\n            intro = ast.literal_eval(intro)\n            #put in plot a list of all words that we have in the plot of the i-th page\n            plot=data_list[1][1]\n            plot = ast.literal_eval(plot)\n            \n            #put in text a list that contains all words in plot and intro for every page (no duplicates)\n            text=plot+intro\n            text= list(set(map(str.lower, text)))\n            \n            #put in dict1 every word with its term_id (no duplicates)\n            for i in text:\n                if i in dict1: \n                    continue\n                else:\n                    dict1[i]=term_id\n                    term_id+=1\n            \n    #put the dict1 elements in the vocabulary.tsv file \n    for key, val in dict1.items():\n        tsv_vocabulary.writerow([key, val])\n\n\n", "_____no_output_____" ] ], [ [ "### Create index:", "_____no_output_____" ], [ "* Then we created the index from vocabulary.tsv and from all preprocessed tsv files.
\nFor every word (in plot and intro) in all preprocessed tsv files, we found its term_id (from the vocabulary), and for every term_id we keep the list of documents where the respective word is present. \n* This index was used to search the query words in every document for the 1st and 2nd search engines. We saved it into index.tsv", "_____no_output_____" ] ], [ [ "import ast\nfrom itertools import islice\nimport csv\n\n#create the index and save it on index1.tsv \n\ndict2 = {}\ncount=0\npresent=False\nwith open('HW3 ADM/tsv/vocabulary.tsv', 'r', newline='') as f_output:\n    tsv_vocabulary = list(csv.reader(f_output, delimiter='\\t'))\n    name=\"article_\"\n    extension2=\".tsv\"\n    h=0\n    for row in tsv_vocabulary:\n        \n        dict2[row[1]]=[]\n    for index in range(len(totalMovies)):\n        \n        print(index)\n        file=\"{}{}{}\".format(name,index,extension2)\n        with open(\"HW3 ADM/tsv_correct/\"+file,\"r\") as tsvfile:\n            data_list = list(csv.reader(tsvfile, delimiter=\"\\t\"))\n            tsvreader = csv.reader(tsvfile, delimiter=\"\\t\")\n            intro=data_list[1][1]\n            intro = ast.literal_eval(intro)\n            plot=data_list[1][2]\n            plot = ast.literal_eval(plot)\n            text=plot+intro\n            text= list(set(map(str.lower, text)))\n            \n            #for every word in plot and intro (for every page) we get its term_id from the vocabulary and append its occurrence (document_id) to dict2\n            for i in text:\n                for row in tsv_vocabulary:\n                    if i==row[0]:\n                        doc=\"document_\"\n                        name2=\"{}{}\".format(doc,index)\n                        \n                        dict2[row[1]].append(name2)\n                        break\n                    else:\n                        continue\n        \n        #put dict2 in the index1.tsv file. In every row we have a single term_id with the occurrences of the respective word.\n        with open('HW3 ADM/tsv/index1.tsv', 'w', newline='') as f_output:\n            tsv_index1 = csv.writer(f_output, delimiter='\\t') \n            for key, val in dict2.items():\n                tsv_index1.writerow([key, val]) \n        \n\n\n", "_____no_output_____" ] ], [ [ "* Then we created the getDocuments function, which finds all documents where the input query is present.\nThis function has two input parameters: words, which is our query, and index, which is the number of the index to use for the specific search engine.
\nThis function returns a list of documents that contains input query based on index.\n* We use index1 about search engine 1 and search engine 2; index2 about search engine 3.\n\n* This function is used for all three different search engines", "_____no_output_____" ] ], [ [ "import csv\nimport sys\nimport ast\n#define function that allows us to calculate a list that is an intersection from two list\ndef intersection(lst1, lst2): \n return list(set(lst1) & set(lst2)) \ndef getDocuments(words,index):\n\n\n #we use dict3 to store term_id and its respective documents_id\n dict3={}\n ##we use dict4 to store evry word and its respective documents_id\n dict4={}\n csv.field_size_limit(sys.maxsize)\n #in listWords we have a list that contains all words about inout query\n listWords = words.split()\n listWords=[x.lower() for x in listWords]\n \n #with vocabulary.tsv we start to build a dict3 with term_id for every words in wordsList\n with open('HW3 ADM/tsv/vocabulary.tsv', 'r', newline='') as f_output:\n tsv_vocabulary = list(csv.reader(f_output, delimiter='\\t'))\n for word in listWords:\n word=word.lower()\n present=False\n for row in tsv_vocabulary:\n if word.lower()==row[0]:\n dict3[row[1]]=[]\n present=True\n #case where word is not in vocabulary\n if present==False:\n dict4[word]=[]\n indexFile=\"index\"+str(index)+\".tsv\"\n #we continue to match documnets_id to every term_id in dict3\n with open('HW3 ADM/tsv/'+indexFile, 'r', newline='') as f_output:\n tsv_index = list(csv.reader(f_output, delimiter='\\t'))\n for k in dict3.keys(): \n for row in tsv_index:\n if row[0]==k:\n dict3[k]=row[1]\n continue\n\n\n #finally we build dict4 where evry word matches to respective documents_id\n for k in dict3.keys():\n\n for row in tsv_vocabulary:\n if k==row[1]:\n dict4[row[0]]=dict3[row[1]]\n\n document=ast.literal_eval(dict4[listWords[0]]) \n #return \"no results\" if any query words isn't at least in one document\n for i in dict4.values():\n if not i:\n error='No results'\n return error\n #interection between every list in values dict4. In this way we have documnets_id where all words (in query input) are present\n for value in dict4.values():\n document=intersection(document,ast.literal_eval(value))\n \n return document", "_____no_output_____" ] ], [ [ "* searchEngine1 function allows to do the first search engine. It takes an input query and returns a dataframe that contains the result. 
In this function we call getDocuments and for every document that contains input query we get some info from not preprocessed tsv file in tsv directory.\n\n* This result is a list of movies and for every movies we show only intro, title and URL link.\n\n", "_____no_output_____" ] ], [ [ "import pandas as pd\n\n\n\ndef searchEngine1(words):\n #build the dataframe with info for every documents_id\n document=getDocuments(words,1)\n if document=='No results':\n \n return document\n elif document==[]:\n return \"No results\"\n df=pd.DataFrame(columns=['title', 'intro', 'url'])\n for index in range(len(document)):\n #get id of documnets_is\n numberDocument=document[index][9:]\n #get wikipedia url\n \n url=totalMovies[int(numberDocument)]\n name=\"article_\"\n extension2=\".tsv\"\n index=int(numberDocument)\n file=\"{}{}{}\".format(name,index,extension2)\n #get info about title and intro for evert film that corresponds to every documents_id\n with open('HW3 ADM/tsv/'+file, 'r', newline='') as f_output:\n tsv_index = list(csv.reader(f_output, delimiter='\\t'))\n title=tsv_index[1][3]\n intro=tsv_index[1][1]\n film=[title,intro,url]\n #put all info for every film in a single row of df dataframe\n df.loc[index] = film\n return df", "_____no_output_____" ] ], [ [ "### Execute the query:", "_____no_output_____" ] ], [ [ "query='enormous damage unless something is done immediately'\nsearchEngine1(query)", "_____no_output_____" ] ], [ [ "## Search engine 2: Conjunctive query & Ranking score\n", "_____no_output_____" ], [ "* Create tsv files from not preprocessed files. This is the same process that we've done for tsv file in tsv_correct dirctory. But in this case, we considerated also duplicate words. This it is important to calculate TfIdf about second search engine.\n\n* These tsv file are stored in tsv_correct2 directory", "_____no_output_____" ] ], [ [ "\n\nimport string\nimport csv\nfrom shutil import move\nfrom nltk.tokenize import RegexpTokenizer\n#create tsv file in 'tsv_correct' directory wehere we have preprocessed the tsv file (just created in parser.py)\ntokenizer = RegexpTokenizer(r'\\w+')\nname=\"article_\"\nextension2=\".tsv\"\nexclude = string.punctuation\nfor index in range(len(totalMovies)):\n print(index)\n\n\n file=\"{}{}{}\".format(name,index,extension2)\n with open(\"HW3 ADM/tsv/\"+file,\"r\") as tsvfile, open(\"HW3 ADM/tsv_correct2/\"+file,\"w\") as outfile:\n tsvreader = csv.reader(tsvfile, delimiter=\"\\t\")\n tsvwriter = csv.writer(outfile, delimiter=\"\\t\")\n for row in tsvreader:\n for i in range(len(row)):\n #take every words, deleting ountuaction and other symbols\n row[i] = tokenizer.tokenize(row[i])\n #remove duplicate case-insensitive elements\n row[i]= list(map(str.lower, row[i]))\n #row[i] = row[i].translate({ord(c): None for c in string.punctuation})\n \n tsvwriter.writerow(row)\n ", "_____no_output_____" ] ], [ [ "* From these tsv file that we have just created, we create a dataframe 'df2'.\nThis dataframe contains the TfIdf value for every match between word-document. Its columns are all different words that we have preproccesed tsv file (in intro and plot). 
Its rows are different documents, identified by document_id (the id stands for the movie number; for example, document_222 stands for article_222.html/article_222.tsv)\n\n", "_____no_output_____" ] ], [ [ "import ast\nfrom itertools import islice\nimport csv\nimport sys\nimport pandas as pd\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.feature_extraction.text import CountVectorizer\ncsv.field_size_limit(sys.maxsize)\n#compute the TfIdf values used to build index2.tsv \n\ndict2 = {}\ncount=0\npresent=False\ndocuments=[]\nname='article_'\nextension2='.tsv'\nwith open('HW3 ADM/tsv/vocabulary.tsv', 'r', newline='') as f_output:\n    tsv_vocabulary = list(csv.reader(f_output, delimiter='\\t'))\n    for index in range(len(totalMovies)):\n        print(index)\n        file=\"{}{}{}\".format(name,index,extension2)\n        with open(\"HW3 ADM/tsv_correct2/\"+file,\"r\") as tsvfile:\n            data_list = list(csv.reader(tsvfile, delimiter=\"\\t\"))\n            tsvreader = csv.reader(tsvfile, delimiter=\"\\t\")\n            intro=data_list[1][1]\n            intro = ast.literal_eval(intro)\n            plot=data_list[1][2]\n            plot = ast.literal_eval(plot)\n            text=plot+intro\n            text=list(map(str.lower, text))\n            documents.append(' '.join(text))\n            #print(documents)\n            # text2= list(set(map(str.lower, text)))\n    \n    \n    vectorizer = TfidfVectorizer(token_pattern=r\"(?u)\\b\\w+\\b\")\n    \n    vectors = vectorizer.fit_transform(documents)\n    #print(vectors)\n    feature_names = vectorizer.get_feature_names()\n    dense = vectors.todense()\n    denselist = dense.tolist()\n    df2 = pd.DataFrame(denselist, columns=feature_names)", "_____no_output_____" ], [ "df2", "_____no_output_____" ] ], [ [ "### Inverted index:", "_____no_output_____" ], [ "* We created a new dictionary that has a term_id as key (obtained from the vocabulary for every word) and an array as value. This array contains the matchings between a document and the TfIdf of the respective word in that document. For example:\nkey: \"122\", value: [[document_12, 0.02], [document_18, 0.22]]", "_____no_output_____" ] ], [ [ "import ast\nfrom itertools import islice\nimport csv\nimport sys\ncsv.field_size_limit(sys.maxsize)\n#create the index and save it on index2.tsv \ndict={}\ndict2 = {}\ncount=0\npresent=False\n\n\n#print('step 2')\nname=\"article_\"\nextension2=\".tsv\"\nh=0\nfor row in tsv_vocabulary:\n    h+=1\n    dict[row[0]]=row[1]\n    print(h)\n    dict2[row[1]]=[]\nfor index in range(len(totalMovies)):\n    #print(index)\n    print(\"document number \"+ str(index))\n    file=\"{}{}{}\".format(name,index,extension2)\n    with open(\"HW3 ADM/tsv_correct2/\"+file,\"r\") as tsvfile:\n        data_list = list(csv.reader(tsvfile, delimiter=\"\\t\"))\n        tsvreader = csv.reader(tsvfile, delimiter=\"\\t\")\n        intro=data_list[1][1]\n        intro = ast.literal_eval(intro)\n        plot=data_list[1][2]\n        plot = ast.literal_eval(plot)\n        text=plot+intro\n        text=list(map(str.lower, text))\n        \n        text2= list(set(map(str.lower, text)))\n        #print(text2)\n        #for every word in plot and intro (for every page) we get every word. 
From every word we get its term_id and put it with its occurrences (document_id) in dict2\n        for i in text2:\n\n            #print(\"word \"+str(i))\n           # print(\"word \"+str(i))\n           # print(\"aaa\" + str(df2.iloc[index][i]))\n            res=df2.iloc[index][i]\n            #print(\"res \"+str(res))\n            for term in dict:\n                if i==term:\n                    #print(\"key \"+ str(term))\n                    #print(\"value \"+ str(dict[term]))\n                    doc=\"document_\"\n                    name2=\"{}{}\".format(doc,index)\n                    result=[name2,res]\n                    dict2[dict[term]].append(result)\n                    break\n                else:\n                    continue\n\n            \n", "_____no_output_____" ] ], [ [ "* We stored this dictionary in the index2.tsv file. Then we have the index that search engine 2 needs to get the TfIdf of a specific word in a specific document.", "_____no_output_____" ] ], [ [ "#put dict2 in the index2.tsv file. In every row we have a single term_id with the occurrences of the respective word.\n\n\nwith open('HW3 ADM/tsv/index2.tsv', 'w', newline='') as f_output:\n    tsv_index2 = csv.writer(f_output, delimiter='\\t') \n    for key, val in dict2.items():\n        tsv_index2.writerow([key, val])", "_____no_output_____" ] ], [ [ "* This 'getTfidf_query' function calculates the TfIdf values of the input query. It's needed because we must calculate the cosine similarity between each result document and the input query (based on TfIdf).\nIt takes the input query as parameter and returns a dataframe with the TfIdf of every word in the input query.", "_____no_output_____" ] ], [ [ "from sklearn.feature_extraction.text import TfidfVectorizer\ndef getTfidf_query(query):\n\n\n    vectorizer = TfidfVectorizer(token_pattern=r\"(?u)\\b\\w+\\b\")\n    \n    vectors = vectorizer.fit_transform([query])\n    #print(vectors)\n    feature_names = vectorizer.get_feature_names()\n    dense = vectors.todense()\n    denselist = dense.tolist()\n    df_query = pd.DataFrame(denselist, columns=feature_names)\n    return df_query", "_____no_output_____" ] ], [ [ "* This 'getTfidf_document' function gets the TfIdf of every query word for a specific document.\nThis function looks the value up in index2 (that we have just created with the TfIdf values). Its parameters are the input query and the document for which we want the TfIdf of every query word.\nIt returns a dataframe with these matchings.", "_____no_output_____" ] ], [ [ "def getTfidf_document(words,document_id):\n    dict={}\n    \n    listWords = words.split()\n    listWords=[x.lower() for x in listWords]\n    #print(listWords)\n    df=pd.DataFrame(columns=listWords)\n    tfIdf=[]\n    with open('HW3 ADM/tsv/vocabulary.tsv', 'r', newline='') as f_output:\n        tsv_vocabulary = list(csv.reader(f_output, delimiter='\\t'))\n    with open('HW3 ADM/tsv/index2.tsv', 'r', newline='') as f_output:\n        tsv_index = list(csv.reader(f_output, delimiter='\\t'))\n    \n    for word in listWords:\n        listDoc=[]\n        #print(\"word \" + word)\n        for row in tsv_vocabulary:\n            if word.lower()==row[0]:\n                term=row[1]\n                #print(\"term\" + str(term))\n                break\n        for row in tsv_index:\n            if term==row[0]:\n\n                listDoc=ast.literal_eval(row[1])\n\n                #print(listDoc)\n                break\n        for index in listDoc:\n\n            #print(index)\n            if index[0]==document_id:\n                #print(index[0])\n\n                tfIdf.append(index[1])\n                #print(index[1])\n                break\n\n    df.loc[0]=tfIdf\n    df=df.reindex(sorted(df.columns), axis=1)\n    return df\n", "_____no_output_____" ] ], [ [ "The 'coisine' function calculates the cosine similarity between two lists of TfIdf values. The first list contains the TfIdf values of the input query, calculated by the 'getTfidf_query' function.
The second list contains the TfIdf values of the query words in the document, calculated by the 'getTfidf_document' function.", "_____no_output_____" ] ], [ [ "\nfrom sklearn.metrics.pairwise import cosine_similarity\ndef coisine(list_query,list_document):\n    \n    res=(cosine_similarity([list_query,list_document]))\n    #print(res)\n    return res[0][1]\n", "_____no_output_____" ] ], [ [ "The 'searchEngine2' function calculates the result for the second search engine.\nWe call 'getDocuments(words,1)' to get all documents that contain the input query.\nThen we call 'getTfidf_query(words)' to put the TfIdf values of every query word in the 'df_query' dataframe.\nThen, for every document, we call 'getTfidf_document' and put its result in the 'df_document' dataframe.\nFinally we call the 'coisine' function to get the cosine similarity value for every document, and put the result in a final 'df' dataframe together with other info (we get this info from the tsv files: intro, title, and url). \nThe result is ordered by this similarity value.", "_____no_output_____" ] ], [ [ "import pandas as pd\n\ndef searchEngine2(words):\n    #build the dataframe with info for every document_id\n    document=getDocuments(words,1)\n    if document=='No results':\n        \n        return document\n    elif document==[]:\n        return \"No results\"\n    df_query=getTfidf_query(words)\n    df=pd.DataFrame(columns=['title', 'intro', 'url','similarity'])\n\n    for index in range(len(document)):\n        #get the id of the document\n        df_document=getTfidf_document(words,document[index])\n        \n        \n        #get the Tfidf list from the df_query dataframe\n        list_query=list(df_query.loc[0])\n        #get the Tfidf list from the df_document dataframe\n        list_document=list(df_document.loc[0])\n        \n        similarity=coisine(list_document,list_query)\n        \n        numberDocument=document[index][9:]\n        #get the wikipedia url\n        \n        url=totalMovies[int(numberDocument)]\n        name=\"article_\"\n        extension2=\".tsv\"\n        index=int(numberDocument)\n        file=\"{}{}{}\".format(name,index,extension2)\n        #get info about the title and intro of every film that corresponds to a document_id\n        with open('HW3 ADM/tsv/'+file, 'r', newline='') as f_output:\n            tsv_index = list(csv.reader(f_output, delimiter='\\t'))\n            title=tsv_index[1][3]\n            intro=tsv_index[1][1]\n            film=[title,intro,url,similarity]\n            #put all info for every film in a single row of the df dataframe\n            df.loc[index] = film\n    df=df.sort_values(by=['similarity'],ascending=False)\n    return df", "_____no_output_____" ] ], [ [ "### Execute query:", "_____no_output_____" ] ], [ [ "query='In the United States 2019'\n\nsearchEngine2(query)", "_____no_output_____" ], [ "searchEngine1(query)", "_____no_output_____" ] ], [ [ "# Define a new score, Search Engine 3:\n###### We built the 3rd search engine with the \"zone index\" method. We took the idea for this method from this link: https://moz.com/blog/search-engine-algorithm-basics.\n\nWe considered the following sections for every movie: title, intro, plot and music. This method consists of assigning a fixed score to each section. In our case we have:\n* title: 0.8\n* intro: 0.4\n* plot: 0.3\n* music: 0.6\n\nThen, we search all documents that contain the input query (in title/intro/plot/music).\nFor every document we compute its score in the following way:\nThe score is the sum of the scores of the matching sections.
You can add the score of a section if and only if:\n* title score: if the query contains the whole title;\n* intro/plot score: if the intro/plot contains the whole query;\n* music score: if at least one word of the query is in the music section.\n\nWith this scoring we give more importance to the music and title sections. \nWe have chosen to score music this way because we want to add this score when only the name or surname (or both) of the music composer is in the input query. About the title, we give this score because the title must be entirely contained in the input query.\nAbout intro and plot, we have given these scores because we think that the intro contains more significant words than the plot.\n\n", "_____no_output_____" ], [ "We created the index for search engine 3 and we put it in the index3.tsv file. In this index, we also considered the 'music' section when matching documents for every word. It's similar to index1.tsv.", "_____no_output_____" ] ], [ [ "import ast\nfrom itertools import islice\nimport csv\n\n#create the index and save it on index3.tsv \n\ndict2 = {}\ncount=0\npresent=False\nwith open('HW3 ADM/tsv/vocabulary.tsv', 'r', newline='') as f_output:\n    tsv_vocabulary = list(csv.reader(f_output, delimiter='\\t'))\n    name=\"article_\"\n    extension2=\".tsv\"\n    h=0\n    for row in tsv_vocabulary:\n        \n        dict2[row[1]]=[]\n    for index in range(len(totalMovies)):\n        \n        print(index)\n        file=\"{}{}{}\".format(name,index,extension2)\n        with open(\"HW3 ADM/tsv_correct/\"+file,\"r\") as tsvfile:\n            data_list = list(csv.reader(tsvfile, delimiter=\"\\t\"))\n            tsvreader = csv.reader(tsvfile, delimiter=\"\\t\")\n            intro=data_list[1][1]\n            intro = ast.literal_eval(intro)\n            plot=data_list[1][2]\n            plot = ast.literal_eval(plot)\n            music=data_list[1][8]\n            music = ast.literal_eval(music)\n            text=plot+intro+music\n            text= list(set(map(str.lower, text)))\n            \n            #for every word in plot, intro and music (for every page) we get its term_id from the vocabulary and append its occurrence (document_id) to dict2\n            for i in text:\n                for row in tsv_vocabulary:\n                    if i==row[0]:\n                        doc=\"document_\"\n                        name2=\"{}{}\".format(doc,index)\n                        \n                        dict2[row[1]].append(name2)\n                        break\n                    else:\n                        continue\n        \n        #put dict2 in the index3.tsv file. In every row we have a single term_id with the occurrences of the respective word.\n        with open('HW3 ADM/tsv/index3.tsv', 'w', newline='') as f_output:\n            tsv_index3 = csv.writer(f_output, delimiter='\\t') \n            for key, val in dict2.items():\n                tsv_index3.writerow([key, val]) \n        \n\n\n\n", "_____no_output_____" ] ], [ [ "The following function 'heapFunction' uses a heap data structure (via the heapq library) to maintain the top-k documents. It takes a list and a value k.
This value is the number of ranked results that we want to show.", "_____no_output_____" ] ], [ [ "from operator import itemgetter\nimport heapq\ndef heappush(h, item, key=lambda x: x):\n    heapq.heappush(h, (key(item), item))\n\ndef heappop(h):\n    return heapq.heappop(h)[1]\n\ndef heapify(h, key=lambda x: x):\n    for idx, item in enumerate(h):\n        h[idx] = (key(item), item)\n    heapq.heapify(h)\ndef heapFunction(a,k): \n    df=pd.DataFrame(columns=['title', 'intro', 'plot', 'music','starring', 'score'])\n    result=[]\n    h = []\n    for item in a:\n        heappush(h, item, key=itemgetter(-1))\n        #print(h)\n    while h:\n        result.append(heappop(h))\n        \n        \n    j=0   #print(result)\n    result.reverse()\n    \n    for i in range(k):\n        \n        if len(a)>i:\n            \n            df.loc[j] = result[i]\n            j+=1\n        else:\n            break\n    return df", "_____no_output_____" ] ], [ [ "In 'searchEngine3' we calculate the result for the 3rd search engine.\nInitially we create a 'df_score' dataframe to store the fixed score of every section (that we considered). Then we call 'getDocuments(query,3)' to get all documents in which all query words are present (based on index3.tsv).\n\nFor every document, we check the following aspects:\n* if all query words are in the title section, or all title words are in the input query, add the title score in 'df_score' to 'score' (the final score);\n* if all query words are in the intro section, add the intro score in 'df_score' to 'score';\n* if all query words are in the plot section, add the plot score in 'df_score' to 'score';\n* if at least one word of the input query is in the 'music' section, add the music score in 'df_score' to 'score'.\n\nThe final dataframe 'df' contains all matching movies with the following data (title, intro, plot, music, score). It's ordered by this score through the 'heapFunction' function and it contains only the first k movies. K is a value that the user has chosen. 
If there are fewer results than k, only the available results are shown.", "_____no_output_____" ] ], [ [ "import pandas as pd\ndef searchEngine3(query,k):\n    \n    document=getDocuments(query,3)\n    if document=='No results':\n        \n        return document\n    elif document==[]:\n        return \"No results\"\n    listWords = query.split()\n    listWords=[x.lower() for x in listWords]\n    df=pd.DataFrame(columns=['title', 'intro', 'plot', 'music','starring', 'score'])\n    #Create a dataframe that holds the fixed score for every section.\n    df_score=pd.DataFrame(columns=['title_score', 'intro_score', 'plot_score', 'music_score'])\n    scores=[0.8,0.4,0.3,0.6]\n    df_score.loc[0]=scores\n    resultMovies=[]\n    actors=[]\n    \n    for index in range(len(document)):\n        score=0\n        lista=[]\n        #get the id of the document\n        numberDocument=document[index][9:]\n        #get the wikipedia url\n        url=totalMovies[int(numberDocument)]\n        name=\"article_\"\n        extension2=\".tsv\"\n        index=int(numberDocument)\n        file=\"{}{}{}\".format(name,index,extension2)\n        #get info about the title and intro of every film that corresponds to a document_id\n        with open(\"HW3 ADM/tsv_correct/\"+file,\"r\") as tsvfile:\n            tsv_index = list(csv.reader(tsvfile, delimiter='\\t'))\n            title=ast.literal_eval(tsv_index[1][3])\n            \n            intro=ast.literal_eval(tsv_index[1][1])\n\n            plot=ast.literal_eval(tsv_index[1][2])\n            \n            music=ast.literal_eval(tsv_index[1][8])\n            #actors.append(tsv_index[1][7])\n            \n            if (all(elem in title for elem in listWords)) or (all(elem in listWords for elem in title)):\n                score+=df_score.loc[0]['title_score']\n            \n            if all(elem in intro for elem in listWords)==True:\n                score+=df_score.loc[0]['intro_score']\n            \n            if all(elem in plot for elem in listWords)==True:\n                score+=df_score.loc[0]['plot_score']\n            \n            if any(elem in music for elem in listWords)==True:\n                \n                score+=df_score.loc[0]['music_score']\n            \n\n        with open('HW3 ADM/tsv/'+file, 'r', newline='') as f_output:\n            tsv_file = list(csv.reader(f_output, delimiter='\\t'))\n            title2=tsv_file[1][3]\n            intro2=tsv_file[1][1]\n            plot2=tsv_file[1][2]\n            music2=tsv_file[1][8]\n            listActors=ast.literal_eval(tsv_file[1][7])\n            actors=listActors\n            film=[title2,intro2,plot2,music2,actors,score]\n            resultMovies.append(film)\n            \n    \n    \n    df=heapFunction(resultMovies,k)\n    actorsGraph=[]\n    for index, row in df.iterrows():\n        actorsGraph.append(row['starring'])\n    \n    graph(actorsGraph)\n    return df\n\n", "_____no_output_____" ] ], [ [ "## Bonus Step: Make a nice visualization!", "_____no_output_____" ], [ "With the following function 'graph' we did the bonus section. This function takes as parameter a nested list of actors, one list per film in the searchEngine3 results. With this function we find every pair of actors that appear together in at least two different films among the search results. Then we show these pairs in a graph where every pair is represented by two nodes linked by an edge. If there is no such pair of actors, we don't show the graph.\nWe used the 'networkx' python library, as suggested in the hw track. 
", "_____no_output_____" ] ], [ [ "import networkx as nx \nimport matplotlib.pyplot as plt\nimport matplotlib.pyplot as plt\ndef graph(listMovies):\n G = nx.Graph() \n couple=[]\n count=0\n for i in range(len(listMovies)):\n for j in range(len(listMovies[i])):\n for y in range(1,len(listMovies[i])):\n couple=[listMovies[i][j],listMovies[i][y]]\n count=0\n for k in listMovies:\n if (couple[0] in k) and (couple[1] in k):\n \n count+=1\n if count>=2 and not couple[0] == couple[1]:\n \n G.add_edge(couple[0],couple[1])\n break\n \n \n \n if not nx.is_empty(G):\n pos = nx.spring_layout(G) #<<<<<<<<<< Initialize this only once\n nx.draw(G,pos=pos, with_labels=True, node_size = 100, font_size=10)\n nx.draw_networkx_nodes(G,pos=pos, with_labels=True, node_size = 1500, font_size=10)\n nx.draw_networkx_edges(G, pos, alpha=0.3)#<<<<<<<<< pass the pos variable\n #plt.draw() \n plt.figure(figsize=(8, 8)) # image is 8 x 8 inches\n # To plot the next graph in a new figure\n plt.show()", "_____no_output_____" ] ], [ [ "### Execute query:", "_____no_output_____" ] ], [ [ "query='United States Richard heroes'\nsearchEngine3(query,5)", "_____no_output_____" ], [ "query='United States Strong Mark '\nsearchEngine3(query,11)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
ece0d4fa05f770fd2a4ff90cf588a820a07d56db
116,655
ipynb
Jupyter Notebook
notebooks/excercise/Excercise_Phase_Behaviour_of_Reservoir_Fluids (1).ipynb
EvenSol/testneqsim
2862507de7da13b39b3575de9095a9c5643e4008
[ "Apache-2.0" ]
10
2020-10-06T23:03:36.000Z
2022-03-09T03:28:12.000Z
notebooks/excercise/Excercise_Phase_Behaviour_of_Reservoir_Fluids (1).ipynb
EvenSol/testneqsim
2862507de7da13b39b3575de9095a9c5643e4008
[ "Apache-2.0" ]
1
2020-02-25T09:33:08.000Z
2020-02-25T09:33:08.000Z
notebooks/excercise/Excercise_Phase_Behaviour_of_Reservoir_Fluids (1).ipynb
EvenSol/testneqsim
2862507de7da13b39b3575de9095a9c5643e4008
[ "Apache-2.0" ]
1
2021-02-22T10:31:17.000Z
2021-02-22T10:31:17.000Z
289.466501
33,508
0.900844
[ [ [ "# Excercise: Phase Behaviour of Reservoir Fluids\nNeqSim and HYSYS\n", "_____no_output_____" ] ], [ [ "#@title Download an install neqsim python package\n#@markdown The first step of using neqsim in Colab will be to download the neqsim python package, and install it using pip. The py4j library is neccesary for using the neqsim java library from python.\n%%capture\n!pip install neqsim", "_____no_output_____" ] ], [ [ "## Excercise 1:\nCreate a fluid of 90% methane and 10% ethane. Calculate fluid properties at 45 C and 50 bar and print:\n\n* Density\n* Enthalpy\n* Entropy\n* Cp\n* Cv\n* Speed of sound\n* Joule Thomson coefficient\n* Viscosity\n* Thermal Conductivity", "_____no_output_____" ] ], [ [ "from neqsim.thermo.thermoTools import *\n\n# Creating a fluid in neqsim\nfluid1 = fluid(\"srk\") #create a fluid using the SRK-EoS\nfluid1.addComponent(\"methane\", 90.0)\nfluid1.addComponent(\"ethane\", 10.0)\nfluid1.setMixingRule(\"classic\")\n\n\nfluid1.setTemperature(45.0, \"C\")\nfluid1.setPressure(50.0, \"bara\")\nTPflash(fluid1)\nfluid1.initProperties()\n\nprint(\"pressure \", fluid1.getPressure(\"bara\"), \" [bara] temperature \" , fluid1.getTemperature(\"C\"), \" [C]\")\nprint(\"density \", fluid1.getDensity(\"kg/m3\"),\" [kg/m3]\")\nprint(\"enthalpy \", fluid1.getEnthalpy(\"J/kg\"),\" [J/kg]\")\nprint(\"entropy \", fluid1.getEntropy(\"J/kgK\"),\" [J/kgK]\")\nprint(\"Cp \", fluid1.getCp(\"J/kgK\"),\" [J/kgK]\")\nprint(\"Cv \", fluid1.getCv(\"J/kgK\"),\" [J/kgK]\")\n#print(\"speed of sound \", fluid1.getSoundSpeed(\"m/s\"),\" [m/s]\")\n#print(\"Joule Thomson coefficient \", fluid1.getJouleThomsonCoefficient(\"K/bar\"),\" [K/bar]\")\nprint(\"Viscosity \", fluid1.getViscosity(\"kg/msec\"),\" [kg/msec]\")\nprint(\"Thermal conductivity \", fluid1.getThermalConductivity(\"W/mK\"),\" [W/mK]\")\n", "pressure 50.0 [bara] temperature 45.0 [C]\ndensity 35.7374581103674 [kg/m3]\nenthalpy 45724.00166945166 [J/kg]\nentropy -1498.2312073246055 [J/kgK]\nCp 2535.302215823932 [J/kgK]\nCv 1772.321039256026 [J/kgK]\nViscosity 1.2745427046088586e-05 [kg/msec]\nThermal conductivity 0.04103882783027299 [W/mK]\n" ] ], [ [ "## Excercise 2:\n1. See the density of gases example\n2. Make a new Colab page and plot the density of pure methane, ethane, propane and CO2 at 25C in the pressure range 1-100 bara\n3. Plot the viscosity, thermal conductivity and heat capacity of a 50/50 methane and ethane gas mixture at 25C in the pressure range 1-100 bara\n4. 
Report the gas and liquid phase fractions and gas and liquid composition of a methane (90 mol%) and n-heptane (10 mol%) fluid mixture at 35 C and 60 bara.\n\n\n", "_____no_output_____" ] ], [ [ "from neqsim.thermo.thermoTools import *\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Creating a fluid in neqsim\ncomponentName = \"methane\" #@param [\"methane\", \"ethane\", \"propane\", \"CO2\"]\n\nfluid1 = fluid(\"srk\") #create a fluid using the SRK-EoS\nfluid1.addComponent(componentName, 1.0)\nfluid1.setTemperature(25.0, \"C\")\n\ndef realgasdensity(pressure):\n fluid1.setPressure(pressure, \"bara\")\n TPflash(fluid1)\n fluid1.initProperties();\n return fluid1.getDensity('kg/m3')\n\npressure = np.arange(1.0, 101.0, 5.0)\nrealdensity = [realgasdensity(P) for P in pressure]\n\nplt.plot(pressure, realdensity)\nplt.title(\"density of \"+componentName + \" at 25C\")\nplt.xlabel('Pressure [bara]')\nplt.ylabel('Density [kg/m3]');", "_____no_output_____" ], [ "from neqsim.thermo.thermoTools import *\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n\nfluid2 = fluid(\"srk\") #create a fluid using the SRK-EoS\nfluid2.addComponent(\"methane\", 50.0)\nfluid2.addComponent(\"ethane\", 50.0)\nfluid2.setTemperature(25.0, \"C\")\n\ndef fluidproperty(pressure):\n fluid2.setPressure(pressure, \"bara\")\n TPflash(fluid2)\n fluid2.initProperties();\n return np.array([fluid2.getViscosity(\"kg/msec\"), fluid2.getThermalConductivity(\"W/mK\"), fluid2.getCp(\"J/kgK\")])\n\npressure = np.arange(1.0, 101.0, 5.0)\nproperties = [fluidproperty(P) for P in pressure]\n\nplt.figure()\nplt.rcParams['figure.dpi'] = 100\nplt.subplot(2, 2, 1)\nplt.plot(pressure, np.transpose(properties)[0])\nplt.xlabel('Pressure [bara]')\nplt.ylabel('Viscosity [kg/msec]');\nplt.subplot(2, 2, 2)\nplt.plot(pressure, np.transpose(properties)[2])\nplt.xlabel('Pressure [bara]')\nplt.ylabel('Cp [J/kgK]');\nplt.subplot(2, 2, 3)\nplt.plot(pressure, np.transpose(properties)[1])\nplt.xlabel('Pressure [bara]')\nplt.ylabel('Thermal Conductivity [W/mK]');\nplt.subplots_adjust(top=0.92, bottom=0.08, left=0.10, right=0.95, hspace=0.325,\n wspace=0.535)", "_____no_output_____" ], [ "from neqsim.thermo.thermoTools import *\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n\nfluid3 = fluid(\"srk\") #create a fluid using the SRK-EoS\nfluid3.addComponent(\"methane\", 90.0, \"mol/sec\")\nfluid3.addComponent(\"n-heptane\", 10.0, \"mol/sec\")\nfluid3.setTemperature(35.0, \"C\")\nfluid3.setPressure(60.0, \"bara\")\n\nTPflash(fluid3)\n\nprint(\"pressure \", fluid3.getPressure(\"bara\"), \" [bara] temperature \" , fluid3.getTemperature(\"C\"), \" [C]\")\n#print(\"gas fraction \", fluid3.getPhase(\"gas\").getPhaseFraction(), \" [mol/mol]\")\n#print(\"oil fraction \", fluid3.getPhase(\"oil\").getPhaseFraction(), \" [mol/mol]\")\nprint(\"methane in gas \", fluid3.getPhase(\"gas\").getComponent(\"methane\").getx(), \" [mol/mol]\")\nprint(\"n-heptane in gas \", fluid3.getPhase(\"gas\").getComponent(\"n-heptane\").getx(), \" [mol/mol]\")\nprint(\"methane in oil \", fluid3.getPhase(\"oil\").getComponent(\"methane\").getx(), \" [mol/mol]\")\nprint(\"n-heptane in oil \", fluid3.getPhase(\"oil\").getComponent(\"n-heptane\").getx(), \" [mol/mol]\")", "pressure 60.0 [bara] temperature 35.0 [C]\nmethane in gas 0.9940054709037067 [mol/mol]\nn-heptane in gas 0.005994529096293198 [mol/mol]\nmethane in oil 0.2720399764235021 [mol/mol]\nn-heptane in oil 0.727960023576498 [mol/mol]\n" ] ], [ [ "## Excercise 3:\n1. See the density of gases example\n2. 
Draw the bubble point line of methane\n3. Draw the phase envelope of a gas mixture of 90% methane, 5% ethane, 5% propane", "_____no_output_____" ] ], [ [ "# Creating a fluid in neqsim\n\nfluid4 = fluid(\"srk\") #create a fluid using the SRK-EoS\nfluid4.addComponent(\"methane\", 1.0)\n\ndef bubbleP(pressure):\n    fluid4.setPressure(pressure)\n    bubt(fluid4)\n    return fluid4.getTemperature('C')\n\npressure = np.arange(1.0, 30.0, 1.0)\ntemperature = [bubbleP(P) for P in pressure]\n\nplt.plot(temperature, pressure);\nplt.title(\"vapour pressure line of methane\")\nplt.xlabel('Temperature [C]');\nplt.ylabel('Pressure [bara]');\n", "_____no_output_____" ], [ "fluid3 = fluid(\"srk\") #create a fluid using the SRK-EoS\nfluid3.addComponent(\"methane\", 90.0, \"mol/sec\")\nfluid3.addComponent(\"ethane\", 5.0, \"mol/sec\")\nfluid3.addComponent(\"propane\", 5.0, \"mol/sec\")\n\nenvelope = phaseenvelope(fluid3)\n\nplt.plot(list(envelope.getOperation().get(\"dewT\")),list(envelope.getOperation().get(\"dewP\")), label=\"dew point\")\nplt.plot(list(envelope.getOperation().get(\"bubT\")),list(envelope.getOperation().get(\"bubP\")), label=\"bubble point\")\nplt.title('PT envelope')\nplt.xlabel('Temperature [\\u00B0C]')\nplt.ylabel('Pressure [bar]')\nplt.legend()\nplt.show()", "_____no_output_____" ] ] ]
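The notebook above repeats the same neqsim pattern in every exercise: build a fluid, set T and P, run TPflash, then read properties. One more hedged example of that pattern follows, computing a compressibility factor from the SRK density; the state point is arbitrary, and the molar mass of methane is hard-coded as a known constant rather than read from the library.

```python
from neqsim.thermo.thermoTools import *

# Compressibility factor Z = P*M/(rho*R*T), using the SRK density from neqsim.
# Assumptions: the (50 bara, 25 C) state point is an arbitrary example, and
# M = 0.016043 kg/mol (methane) is hard-coded instead of queried from neqsim.
fluid5 = fluid("srk")
fluid5.addComponent("methane", 1.0)
fluid5.setTemperature(25.0, "C")
fluid5.setPressure(50.0, "bara")
TPflash(fluid5)
fluid5.initProperties()

rho = fluid5.getDensity("kg/m3")
P = 50.0e5          # pressure in Pa
T = 298.15          # temperature in K
R = 8.314           # J/(mol K)
M = 0.016043        # kg/mol, methane
Z = P * M / (rho * R * T)
print("density", rho, "kg/m3 -> Z =", round(Z, 4))   # Z < 1 for methane here
```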
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
ece0e6de97b446f33db4d0b6a6e72220a7007ea2
3,696
ipynb
Jupyter Notebook
examples/notebooks/36_add_labels.ipynb
glennlaughlin/leafmap
7cf4fa4fc3f7d19511f12ff7edee06dc43653cdc
[ "MIT" ]
null
null
null
examples/notebooks/36_add_labels.ipynb
glennlaughlin/leafmap
7cf4fa4fc3f7d19511f12ff7edee06dc43653cdc
[ "MIT" ]
null
null
null
examples/notebooks/36_add_labels.ipynb
glennlaughlin/leafmap
7cf4fa4fc3f7d19511f12ff7edee06dc43653cdc
[ "MIT" ]
null
null
null
19.978378
229
0.529221
[ [ [ "<a href=\"https://githubtocolab.com/giswqs/leafmap/blob/master/examples/notebooks/36_add_labels.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\"/></a>\n\nUncomment the following line to install [leafmap](https://leafmap.org) if needed.", "_____no_output_____" ] ], [ [ "# !pip install leafmap", "_____no_output_____" ], [ "import leafmap", "_____no_output_____" ] ], [ [ "Update the package if needed.", "_____no_output_____" ] ], [ [ "# leafmap.update_package()", "_____no_output_____" ] ], [ [ "Create an interactive map.", "_____no_output_____" ] ], [ [ "data = \"https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/us_states.json\"", "_____no_output_____" ], [ "Map = leafmap.Map(center=[40, -100], zoom=4, add_google_map=False, layers_control=True)", "_____no_output_____" ] ], [ [ "Labeling data.", "_____no_output_____" ] ], [ [ "Map.add_labels(\n data,\n \"id\",\n font_size=\"12pt\",\n font_color=\"blue\",\n font_family=\"arial\",\n font_weight=\"bold\",\n)\nMap", "_____no_output_____" ] ], [ [ "Remove labels", "_____no_output_____" ] ], [ [ "Map.remove_labels()", "_____no_output_____" ] ], [ [ "![](https://i.imgur.com/lELtitr.gif)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
ece0f1c469702bb0cdc9b545d1c8bf53d1202101
26,539
ipynb
Jupyter Notebook
notebooks/07-flow-control.ipynb
alberthu233/EPEtutorials
aaf61fcb79668209ca10c0140de930bc677ffd7d
[ "MIT" ]
3
2020-06-14T18:52:20.000Z
2021-06-18T03:33:33.000Z
notebooks/07-flow-control.ipynb
alberthu233/EPEtutorials
aaf61fcb79668209ca10c0140de930bc677ffd7d
[ "MIT" ]
7
2019-12-01T20:16:32.000Z
2021-06-21T21:39:22.000Z
notebooks/07-flow-control.ipynb
alberthu233/EPEtutorials
aaf61fcb79668209ca10c0140de930bc677ffd7d
[ "MIT" ]
13
2019-11-30T03:39:09.000Z
2021-06-24T06:54:34.000Z
26.121063
1,059
0.526584
[ [ [ "# Flow Control", "_____no_output_____" ], [ "**Materials by: Joshua R. Smith, Milad Fatenejad, Katy Huff, Tommy Gyu, John Blischay and many more**", "_____no_output_____" ], [ "In this lesson we will cover how to write code that will execute only if specified conditions are met and also how to automate repetitive tasks using loops.", "_____no_output_____" ], [ "# Comparisons\n\nPython comes with literal comparison operators. Namely, `< > <= >= == !=`. All comparisons return the literal boolean values: `True` or `False`. These can be used to test values against one another. For example,", "_____no_output_____" ] ], [ [ "2 + 2 == 4", "_____no_output_____" ], [ "'big' < 'small' # Strings are compared alphabetically, where A to Z is least to greatest", "_____no_output_____" ], [ "4 - 2 == 3 + 2", "_____no_output_____" ], [ "'banana' != 24", "_____no_output_____" ] ], [ [ "Comparisons can be chained together with the the `and` & `or` Python keywords. \n- `and`: If both terms are true, returns `True`, and if not, returns `False`.\n- `or`: If either is true, returns `True`, and if not, returns `False`.", "_____no_output_____" ] ], [ [ "1 == 1.0 and 'hello' == 'hello'", "_____no_output_____" ], [ "1 > 10 or False", "_____no_output_____" ], [ "42 < 24 or True and 'wow' != 'mom'", "_____no_output_____" ] ], [ [ "Comparisons may also be negated using the `not` keyword.", "_____no_output_____" ] ], [ [ "not 2 + 2 == 5", "_____no_output_____" ], [ "not 12 > 7", "_____no_output_____" ] ], [ [ "Finally, the `is` opperator says whether two objects are the same because they occupy the same place in memory. This is a test of *equality* (is) rather than *equivalence* (==).", "_____no_output_____" ] ], [ [ "x = [1, 2, 3]\ny = [1, 2, 3]\nx is y", "_____no_output_____" ], [ "x = 'hello'\ny = x\nx is y", "_____no_output_____" ], [ "5 is 5.0", "_____no_output_____" ], [ "5 is not 5.0", "_____no_output_____" ] ], [ [ "# If statements\n\nThat said, these comparisons can be placed inside of an `if` statement. Such statements have the following form:\n\n if <condition>:\n <indented block of code>\n\nThe indented code will only be executed if the condition evaulates to `True`, which is a special boolean value.", "_____no_output_____" ] ], [ [ "x = 5\nif x < 0:\n print(\"x is negative\")", "_____no_output_____" ], [ "x = -5\nif x < 0:\n print(\"x is negative\")", "_____no_output_____" ] ], [ [ "The `if` statement can be combined to great effect with a corresponding `else` clause. \n\n if <condition>:\n <if-block>\n else:\n <else-block>\n \nIf the condition is `True` the if-block is executed, or else the else-block is executed instead.", "_____no_output_____" ] ], [ [ "x = 5\nif x < 0:\n print(\"x is negative\")\nelse:\n print(\"x in non-negative\")", "_____no_output_____" ] ], [ [ "Many cases may be tested by using the `elif` statement. These come between all the `if` and `else` statements:\n\n if <if-condition>:\n <if-block>\n elif <elif-condition>:\n <elif-block>\n else:\n <else-block>\n \nIf the if-condition is true then only the if-block is executed. Or else if the elif-condition is true then only the elif-block is executed. 
Or else the else-block is executed.", "_____no_output_____" ] ], [ [ "x = 5\nif x < 0:\n print(\"x is negative\")\nelif x == 0:\n print(\"x is zero\")\nelse:\n print(\"x is positive\")", "_____no_output_____" ] ], [ [ "While there must be one if statement, and there may be at most one else statement, there may be as many elif statements as are desired.\n\n if <if-condition>:\n <if-block>\n elif <elif-condition1>:\n <elif-block1>\n elif <elif-condition2>:\n <elif-block2>\n elif <elif-condition3>:\n <elif-block3>\n ...\n else:\n <else-block>\n \nOnly the block for the topmost condition that is true is executed.", "_____no_output_____" ] ], [ [ "x = 5\nif x < 0:\n print(\"x is negative\")\nelif x == 0:\n print(\"x is zero\")\nelif x > 0:\n print(\"x is positive\")\nelif x == 2:\n print(\"x is two\")\nelse:\n print(\"x is positive and not 2\")", "_____no_output_____" ] ], [ [ "Be careful because the computer interprets comparisons very literally.", "_____no_output_____" ] ], [ [ "'1' < 2 # note: comparing a string with an int raises a TypeError in Python 3", "_____no_output_____" ], [ "True == 'True'", "_____no_output_____" ], [ "False == 0", "_____no_output_____" ], [ "'Bears' > 'Packers'", "_____no_output_____" ] ], [ [ "### Aside About Indentation\n\nThe indentation is a feature of Python syntax. Some other programming languages use brackets to denote a command block. Python uses indentation. The amount of indentation doesn't matter, so long as everything in the same block is indented the same amount.", "_____no_output_____" ], [ "**Exercise:** Write an if statement that prints whether x is even or odd. Hint: look up the modulus operator.", "_____no_output_____" ] ], [ [ "x = 1\n\nif False:\n print(\"x is even\")\nelse:\n print(\"x is odd\")", "_____no_output_____" ] ], [ [ "# Loops\n\nLoops come in two flavors: `while` and `for`. While loops have the following structure:\n\n while <condition>:\n <indented block of code>\n \nWhile the condition is True, the code in the block will continue to execute. \n**Warning!** This may lead to infinitely executing loops if the condition never becomes false!", "_____no_output_____" ] ], [ [ "fruits = ['apples', 'oranges', 'pears', 'bananas']\ni = 0\nwhile i < len(fruits):\n print(fruits[i])\n i = i + 1", "_____no_output_____" ] ], [ [ "Meanwhile, for-loops have the following structure:\n\n for <loop variable name> in <iterable>:\n <indented block of code>\n \nThe loop will continue to execute as long as there are more iterations left in the iterable. In other words, for each element in the iterable, run the indented code. Upon each iteration, the value of that iteration is assigned to the loop variable.
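\n\nAs a related sketch (nothing here beyond the `fruits` list defined above), the built-in `enumerate` is a common way to get the index and the element at the same time:\n\n for i, fruit in enumerate(fruits):\n print(i, fruit) # 0 apples, 1 oranges, ...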
", "_____no_output_____" ] ], [ [ "for fruit in fruits:\n print(fruit)", "_____no_output_____" ], [ "# range(n) generates numbers [0,n); n is excluded\nfor i in range(len(fruits)):\n print(i, fruits[i])", "_____no_output_____" ], [ "# Use zip to iterate over two lists at once\nfruits = ['apples', 'oranges', 'pears', 'bananas']\nprices = [0.49, 0.99, 1.49, 0.32]\nfor fruit, price in zip(fruits, prices):\n print(fruit, \"cost\", price, \"each\")", "_____no_output_____" ], [ "# Use \"items\" to iterate over a dictionary\n# Note: before Python 3.7, dictionary order was non-deterministic\nprices = {'apples': 0.49, 'oranges': 0.99, 'pears': 1.49, 'bananas': 0.32}\nfor fruit, price in prices.items():\n print(fruit, \"cost\", price, \"each\")", "_____no_output_____" ], [ "# Calculating a sum\nvalues = [1254, 95818, 61813541, 1813, 4]\ntotal = 0\nfor x in values:\n total = total + x\nprint(total)", "_____no_output_____" ] ], [ [ "## Short Exercise\nUsing a loop, calculate the factorial of 42 (the product of all integers up to and including 42).", "_____no_output_____" ] ], [ [ "import math\n\nanswer = 0\n\n# Your code here\n\n# Answer checked here\nassert(answer == math.factorial(42)) # This will give an error if your answer is incorrect\nprint(\"Your solution is correct.\")", "_____no_output_____" ] ], [ [ "## break, continue, and else\n\nA `break` statement exits a loop. It helps\navoid infinite loops by cutting off loops when they're clearly going\nnowhere.", "_____no_output_____" ] ], [ [ "# Prints numbers 1 through 9, then breaks once n == reasonable\nreasonable = 10\nfor n in range(1,2000):\n if n == reasonable :\n break\n print(n)", "_____no_output_____" ] ], [ [ "Something you might want to do instead of breaking is to `continue` to the\nnext iteration of a loop, giving up on the current one.", "_____no_output_____" ] ], [ [ "# Prints numbers from 1 - 19, excluding 10\nreasonable = 10\nfor n in range(1,20):\n if n == reasonable :\n continue\n print(n)", "_____no_output_____" ] ], [ [ "Importantly, Python allows you to use an else statement in a for loop.\n\nThat is:", "_____no_output_____" ] ], [ [ "knights={\"Sir Belvedere\":\"the Wise\", \n \"Sir Lancelot\":\"the Brave\", \n \"Sir Galahad\":\"the Pure\", \n \"Sir Robin\":\"the Brave\", \n \"The Black Knight\":\"John Cleese\"} \n\nfavorites = list(knights.keys()) # convert keys to list\nfavorites.remove(\"Sir Robin\") # this guy is not a favorite\nfor name, title in knights.items() : \n string = name + \", \"\n for fav in favorites :\n if fav == name :\n string += title\n break\n else: # this is executed if loop above does not break\n string += title + \", but not quite so brave as Sir Lancelot.\" \n print(string)", "_____no_output_____" ] ], [ [ "# List comprehensions\nPython has another way to perform iteration called list comprehensions. The general forms are:\n\n [<elementToAdd> for i in iterator]\n [<elementToAdd> for i in iterator if <condition>]\n [<elementToAdd1> if <condition1> else <elementToAdd2> for i in iterator]\n \nYou can think of list comprehensions as a for-loop formatted differently, which takes an existing iterable object and creates a new list. 
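\n\nAs a quick, hedged illustration of the first form (the name `squares` is only for this example):\n\n squares = [n * n for n in range(5)] # [0, 1, 4, 9, 16]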
", "_____no_output_____" ] ], [ [ "# Multiply every number in a list by 2 using a for loop\nnums1 = [5, 1, 3, 10]\nnums2 = []\nfor i in range(len(nums1)):\n nums2.append(nums1[i] * 2)\n \nprint(nums2)", "_____no_output_____" ], [ "# Multiply every number in a list by 2 using a list comprehension\nnums2 = [x * 2 for x in nums1]\n\nprint(nums2)", "_____no_output_____" ], [ "# Multiply every number in a list by 2, but only if the number is greater than 4\nnums1 = [5, 1, 3, 10]\nnums2 = []\nfor i in range(len(nums1)):\n if nums1[i] > 4:\n nums2.append(nums1[i] * 2)\n else:\n nums2.append(nums1[i])\nprint(nums2)", "_____no_output_____" ], [ "# And using a list comprehension\nnums2 = [x * 2 if x > 4 else x for x in nums1]\n\nprint(nums2)", "_____no_output_____" ] ], [ [ "# Reading and Writing files with loops\nLoops make processing files by each line simple.", "_____no_output_____" ] ], [ [ "my_file = open(\"OtherFiles/example.txt\",\"r\")\nfor line in my_file:\n print(line.strip()) # .strip() removes all the whitespace at the beginning and end\nmy_file.close()", "_____no_output_____" ], [ "new_file = open(\"OtherFiles/example2.txt\", \"w+\") # write a new file\ndwight = ['bears', 'beets', 'Battlestar Galactica']\nfor i in dwight:\n new_file.write(i + '\\n')\nnew_file.close()", "_____no_output_____" ] ], [ [ "Using list comphersion, you can further the code to reduce all whitespace from the file:", "_____no_output_____" ] ], [ [ "my_file = open(\"OtherFiles/example.txt\",\"r\")\nlines = [line.strip() for line in my_file]\nprint(lines)\n\nmy_file.close()", "_____no_output_____" ] ], [ [ "# Flow Control Exercise: Convert genotypes\n\nAttempt all parts. And don't forget to talk to your neighbor!", "_____no_output_____" ], [ "## Motivation:\n\nA biologist is interested in the genetic basis of height. She measures the heights of many subjects and sends off their DNA samples to a core for genotyping arrays. These arrays determine the DNA bases at the variable sites of the genome (known as single nucleotide polymorphisms, or SNPs). Since humans are diploid, i.e. have two of each chromosome, each data point will be two DNA bases corresponding to the two chromosomes in each individual. At each SNP, there will be only three possible genotypes, e.g. AA, AG, GG for an A/G SNP. In order to test the correlation between a SNP genotype and height, she wants to perform a regression with an additive genetic model. However, she cannot do this with the data in the current form. She needs to convert the genotypes, e.g. AA, AG, and GG, to the numbers 0, 1, and 2, respectively (in the example the number corresponds the number of G bases the person has at that SNP). Since she has too much data to do this manually, e.g. in Excel, she comes to you for ideas of how to efficiently transform the data.", "_____no_output_____" ], [ "## Intializing File\nComplete the code below to generate the biologist's data. Look at the file OtherFiles/biodata.txt to understand how the data is formatted. 
\n\nYou will have to reopen the data file in Jupyter notebook every time you run this code to see the changes.", "_____no_output_____" ] ], [ [ "import random\n\nsamples_length = 10 # feel free to change this variable\ntypes = ['AA','AG','GG']\n\nopen(\"OtherFiles/biodata.txt\", 'w+').close() # clear all previous data\ndata_file = open(\"OtherFiles/biodata.txt\",\"a+\")\n\nopen(\"OtherFiles/biodata-answers.txt\", 'w+').close() # clear all previous data\nanswers_file = open(\"OtherFiles/biodata-answers.txt\",\"a+\")\n\nfor i in range(samples_length):\n sample_type = random.randint(0,2) # generates 0, 1, or 2 pseudorandomly\n answers_file.write(str(sample_type))\n data_file.write(\"Genotype #%d: %s\\n\" % (i, types[sample_type]))\n\ndata_file.close()\nanswers_file.close()", "_____no_output_____" ] ], [ [ "## Part 1:\n\nOpen the input file generated above and create a new list which has the converted genotype for each subject ('AA' -> 0, 'AG' -> 1, 'GG' -> 2). \n\nHint: you can use the types array defined in the initialization to reduce the number of if statements.", "_____no_output_____" ] ], [ [ "genos = []\ngenos_new = []\n\n# Open the file and get the current genos. Don't forget to close it at the end!\n\nprint(genos)\n\n# Use your knowledge of if/else statements and loop structures below to convert genos -> genos_new:\n\nprint(genos_new)", "_____no_output_____" ] ], [ [ "Run the code below to check your work:", "_____no_output_____" ] ], [ [ "answers_file = open(\"OtherFiles/biodata-answers.txt\",\"r\")\nanswers = [int(i) for i in answers_file.read()]\nfail = False\n\nif len(answers) < len(genos_new):\n print(\"You have too many answers for the dataset.\")\nif len(answers) > len(genos_new):\n print(\"You have too few answers for the dataset.\")\nassert(len(answers) == len(genos_new))\n\nfor case, ans, test in zip(genos, answers, genos_new):\n if ans != test:\n print(\"Genotype: %s Correct: %d Your Conversion: %d\" % (case, ans, test))\n fail = True\n\nif not fail:\n print(\"You have finished Part 1\")\n\nanswers_file.close()", "_____no_output_____" ] ], [ [ "## Part 2:\n\nCount the number of each genotype and fill out genos_counts.", "_____no_output_____" ] ], [ [ "genos_counts = {\"AA\":0,\"AG\":0,\"GG\":0} # key is string type('AA','AG', or 'GG'), value is count\n\n# iterate through genos_new and fill out genos_counts. Use the types array[\"AA\",\"AG\",\"GG\"] for a more clever solution.\n\nprint(genos_counts)\nassert(sum(genos_counts.values()) == len(genos)) # this will throw an exception if your counts are incorrect", "_____no_output_____" ] ], [ [ "## Part 3:\n\nWrite your output to a new file \"OtherFiles/new-biodata.txt\" using the following format:\n\n Condensed: 0100100222012\n AA count: 6\n AG count: 7\n GG count: 17\n Genotype #0: AA 0\n Genotype #1: AG 1\n ...\n Genotype #11: AG 1\n Genotype #12: AG 1\n \nNotice that with two digit genotypes there is one less space between \"Genotype #11:\" and \"AG 1\". You will have to manually examine the output file to determine if your solution is correct. Specifically, pay attention to your formatting.", "_____no_output_____" ] ], [ [ "print(\"Condensed: \" + str(answers))\nprint(\"Counts: \" + str(genos_counts))", "_____no_output_____" ] ], [ [ "# Short Exercises\nThese are some short exercises to get you thinking about how to apply loops and if statements.\n\n## Fibonacci\nGenerate a list of the first n terms of the Fibonacci Sequence where each element is defined as the sum of the previous two in the sequence. 
Written mathematically: \n\n f(0) = 0\n f(1) = 1\n f(n) = f(n - 1) + f(n - 2)\n \nUse this definition to solve the problem. n will always be greater than or equal to 0. What should you return when n is 0, 1, or 2?", "_____no_output_____" ] ], [ [ "n = 10\nfib = [0, 1] # starts with the first two values\n\n# Your code here\n\n\n\n\n# compare output to fibonacci sequence:\n# 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55\nprint(fib) ", "_____no_output_____" ] ], [ [ "## Matrix Rotation\nGiven a two dimensional array of rows of a matrix m, rotate it 90 degrees clockwise.\n\n m represents this matrix:\n 1 2 3\n 4 5 6\n 7 8 9\n Expected output:\n 7 4 1\n 8 5 2\n 9 6 3\n\nChallenge: do this without using any lists or dictionaries (no extra memory).", "_____no_output_____" ] ], [ [ "m = [[1,2,3], [4,5,6], [7,8,9]]\nprint(\"Input:\\n\" + '\\n'.join([' '.join(map(str,i)) for i in m]) + '\\n') # print matrix\n\n# Your code here\n\n\n\n\nprint(\"Result:\\n\" + '\\n'.join([' '.join(map(str,i)) for i in m])) # print matrix", "_____no_output_____" ] ], [ [ "## Diamonds\nGiven the height of a diamond h, print a visual representation of the diamond. h will always be even and h > 1.\n\nExamples:\n\n h = 6\n /\\\n / \\\n / \\\n \\ /\n \\ /\n \\/\n \n h = 2\n /\\\n \\/", "_____no_output_____" ] ], [ [ "# Uncomment a test case to test it\n# h = 2\n# h = 6\n# h = 8\n\n# Type out your answer below, make sure to use 'h' as the height.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ece0fe8197ce2b9f7027d5242492c91aeb9b4e17
69,576
ipynb
Jupyter Notebook
_doc/notebooks/exams/td_note_2016.ipynb
Jerome-maker/ensae_teaching_cs
43ea044361ee60c00c85aea354a7b25c21c0fd07
[ "MIT" ]
73
2015-05-12T13:12:11.000Z
2021-12-21T11:44:29.000Z
_doc/notebooks/exams/td_note_2016.ipynb
Pandinosaurus/ensae_teaching_cs
3bc80f29d93c30de812e34c314bc96e6a4f0d025
[ "MIT" ]
90
2015-06-23T11:11:35.000Z
2021-03-31T22:09:15.000Z
_doc/notebooks/exams/td_note_2016.ipynb
Pandinosaurus/ensae_teaching_cs
3bc80f29d93c30de812e34c314bc96e6a4f0d025
[ "MIT" ]
65
2015-01-13T08:23:55.000Z
2022-02-11T22:42:07.000Z
60.134831
26,874
0.659581
[ [ [ "# 1A.e - TD notรฉ, 11 dรฉcembre 2015\n\nCalcul des intรฉrรชt d'un emprunt pour acheter un appartement, stratรฉgie d'investissement.", "_____no_output_____" ] ], [ [ "%matplotlib inline", "Populating the interactive namespace from numpy and matplotlib\n" ], [ "from jyquickhelper import add_notebook_menu\nadd_notebook_menu()", "_____no_output_____" ] ], [ [ "Aprรจs chaque question, on vรฉrifie sur un petit exemple que cela fonctionne comme attendu.", "_____no_output_____" ], [ "## Exercice 1 : Louer ou acheter un appartement ? \n\nA surface รฉgale, est-il prรฉfรฉrable de louer ou d'acheter son appartement~? Cet exercice propose diffรฉrentes questions afin d'y rรฉpondre.", "_____no_output_____" ], [ "### Q1 \n\nOn suppose qu'on a $X$ euros d'apport, รฉcrire une fonction qui calcule la somme d'argent obtenue aprรจs $n$ annรฉes d'un placement ร  *r%* ? Par exemple, pour $n=2$, la fonction retourne $x + rx + r(1+r)x = (1+r)^2 x$.", "_____no_output_____" ] ], [ [ "def rendement(x, n, r):\n return x*(1+r)**n\n\nrendement(1, 2, 0.02)", "_____no_output_____" ], [ "rendement(1, 3, 0.02)", "_____no_output_____" ] ], [ [ "### Q2\n\nUne banque prรชte de l'argent ร  un taux annuel $p$, c'est-ร -dire au taux mensuel $m=(1+p)^\\frac{1}{12}-1$. Ce taux s'applique chaque mois sur la somme du capital restant ร  rembourser. On emprunte $K$ euros avec une mensualitรฉ fixรฉe ร  $M$ euros, รฉcrire une fonction qui dรฉcompose la mensualitรฉ $M$ en capital remboursรฉ et intรฉrรชt.", "_____no_output_____" ] ], [ [ "def decompose_mensualite(K,M,p):\n i = K * ((1+p)**(1.0/12)-1)\n return M-i, i\n\ndecompose_mensualite(180000, 1000, 0.029)", "_____no_output_____" ] ], [ [ "### Q3 \n\nEcrire une fonction qui calcule toutes les mensualitรฉs.\n\nLors d'un prรชt ร  taux fixe, en France tout du moins, ce que paye l'emprunteur ร  sa banque est un montant fixe par mois : la mensualitรฉ. L'emprunteur paye le mรชme montant chaque mois. Chaque mensualitรฉ se dรฉcompose comme suit :\n\n* les intรชrets correspondant ร  la somme $K$ pour un mois : $i=Km$ oรน $m=(1+p)^{\\frac{1}{12}}-1$\n* la partie servant ร  rembourser le capital : $cap=M-i$\n\nCette partie $cap$ va รชtre รดtรฉe au capital $K$ ร  rembourser de sorte que le mois prochain, la somme prรชtรฉe pour le mois sera moindre. Le rรฉsulat souhaitรฉ pour cette question est une liste qui contient ce mรชme montant $M$ $n$ fois. Ce $n$ correspond ร  la longueur du prรชt. Et pour รชtre plus prรฉcis, le mรชme montant $M$ rรฉpรฉtรฉ $n$ fois exceptรฉ pour la derniรจre mensualitรฉ.", "_____no_output_____" ] ], [ [ "def mensualites(K,M,p):\n res = []\n while K > 0:\n cap, i = decompose_mensualite(K,M,p)\n if cap < 0:\n raise Exception(\"problรจme avec K={0} cap={1} i={2} len={3}\".format(K,cap,i,len(res)))\n K -= cap\n if K < 0:\n res.append(M + K)\n else:\n res.append(M)\n return res\n\nlen(mensualites(180000,1000,0.029))/12", "_____no_output_____" ], [ "mens = mensualites(180000,1000,0.029)\nmens[:12]", "_____no_output_____" ], [ "mens[-6:]", "_____no_output_____" ] ], [ [ "Parfois ce calcul entre dans une boucle infinie : cela signifie que la somme $K$ ou le taux d'intรฉret est trop grand pour la mensualitรฉ $M$. Par consรฉquence, les intรฉrรชts ร  rembourser chaque mois dรฉpasse la mensualitรฉ. Le capital ร  rembourser, plutรดt que de dรฉcroรฎtre augmente. La boucle infinie signifie que l'emprunteur a empruntรฉ une somme au-dessus de ses moyens.", "_____no_output_____" ], [ "### Q4\n\nUn emprunteur souhaite contracter un emprunt pour $Y$ annรฉes. 
", "_____no_output_____" ], [ "### Q4\n\nA borrower wants to take out a loan over $Y$ years. The maximum monthly amount he can devote to it is $M$, and the loan rate is $p$. What is the largest amount that can be borrowed? A value approximated to within 1000 euros is enough.\n\nThe previous function estimates the duration of the loan. The less you borrow, the shorter the duration: you do not need 20 years to borrow 1000€. The idea is simply to try every amount, 1000 by 1000, until the duration of the loan exceeds $Y$ years, that is $12Y$ months.", "_____no_output_____" ] ], [ [ "def somme_maximale(M,p,Y):\n K = 20000\n l = mensualites(K, M, p)\n while len(l) < Y*12:\n K += 1000\n l = mensualites(K, M, p)\n return K\n \nsomme_maximale(1000, 0.029, 20)", "_____no_output_____" ] ], [ [ "**Remark:** One could also look for the maximum amount beyond which the capital to repay increases, because the monthly payment is entirely consumed by the interest. That is the amount for which the function ``decompose_mensualite`` returns a negative or zero value, i.e. the value of $K$ such that $M = K \\left ( (1+p)^{\\frac{1}{12}}-1 \\right)$. In that case, everything happens as if the duration of the loan became infinite. This amount is necessarily greater than the value we are looking for, which corresponds to a finite loan duration.", "_____no_output_____" ], [ "**Remark:** Are we sure the length of the schedule is exactly ``Y*12``? \n \nIndeed, nothing guarantees it. One should check that the number of monthly payments does not jump over whole years, as in the following version where we step 10000 by 10000 and the value found is much less precise:", "_____no_output_____" ] ], [ [ "def somme_maximale_step(M,p,Y,step=10000):\n K = 20000\n l = mensualites(K, M, p)\n while len(l) < Y*12:\n K += step\n l = mensualites(K, M, p)\n if len(l) >= (Y-3)*10:\n print(\"K\", K,\"mois\", len(l), \"années\", len(l)//12)\n return K\n \nsomme_maximale_step(1000, 0.029, 20)", "K 140000 mois 171 années 14\nK 150000 mois 186 années 15\nK 160000 mois 202 années 16\nK 170000 mois 219 années 18\nK 180000 mois 236 années 19\nK 190000 mois 254 années 21\n" ], [ "somme_maximale_step(1000, 0.029, 20, step=1000)", "K 139000 mois 170 années 14\nK 140000 mois 171 années 14\nK 141000 mois 173 années 14\nK 142000 mois 174 années 14\nK 143000 mois 176 années 14\nK 144000 mois 177 années 14\nK 145000 mois 179 années 14\nK 146000 mois 180 années 15\nK 147000 mois 182 années 15\nK 148000 mois 183 années 15\nK 149000 mois 185 années 15\nK 150000 mois 186 années 15\nK 151000 mois 188 années 15\nK 152000 mois 190 années 15\nK 153000 mois 191 années 15\nK 154000 mois 193 années 16\nK 155000 mois 194 années 16\nK 156000 mois 196 années 16\nK 157000 mois 197 années 16\nK 158000 mois 199 années 16\nK 159000 mois 201 années 16\nK 160000 mois 202 années 16\nK 161000 mois 204 années 17\nK 162000 mois 206 années 17\nK 163000 mois 207 années 17\nK 164000 mois 209 années 17\nK 165000 mois 210 années 17\nK 166000 mois 212 années 17\nK 167000 mois 214 années 17\nK 168000 mois 215 années 17\nK 169000 mois 217 années 18\nK 170000 mois 219 années 18\nK 171000 mois 220 années 18\nK 172000 mois 222 années 18\nK 173000 mois 224 années 18\nK 174000 mois 226 années 18\nK 175000 mois 227 années 18\nK 176000 mois 229 années 19\nK 177000 mois 231 années 19\nK 178000 mois 232 années 19\nK 179000 mois 234 années 19\nK 180000 mois 236 années 19\nK 181000 mois 238 années 19\nK 182000 mois 239 années 19\nK 183000 mois 241 années 20\n" ] ], [ [ "Even in this case, we do not land exactly on 20 years.
Il faudrait avoir une valeur pour ``step`` variable pour traiter tous les cas. Par exemple, dรจs que le nombre de mois augmente de plus de 1, on divise $K$ par 2. La prรฉcision serait au mois prรจs.", "_____no_output_____" ] ], [ [ "def somme_maximale_mois_step(M,p,Y,step=10000):\n K = 20000\n l = mensualites(K, M, p)\n l0 = l\n while len(l) < Y*12:\n while True:\n l = mensualites(K + step, M, p)\n if len(l) > len(l0) + 1:\n step /= 2\n else:\n K += step\n l0 = l\n break\n if len(l) >= (Y-1)*12:\n print(\"K\", K,\"mois\", len(l), \"annรฉes\", len(l)//12, \"step\", step)\n return K\n \nsomme_maximale_mois_step(1000, 0.029, 20)", "K 175312.5 mois 228 annรฉes 19 step 312.5\nK 175625.0 mois 228 annรฉes 19 step 312.5\nK 175937.5 mois 229 annรฉes 19 step 312.5\nK 176250.0 mois 229 annรฉes 19 step 312.5\nK 176562.5 mois 230 annรฉes 19 step 312.5\nK 176875.0 mois 231 annรฉes 19 step 312.5\nK 177187.5 mois 231 annรฉes 19 step 312.5\nK 177500.0 mois 232 annรฉes 19 step 312.5\nK 177812.5 mois 232 annรฉes 19 step 312.5\nK 178125.0 mois 233 annรฉes 19 step 312.5\nK 178437.5 mois 233 annรฉes 19 step 312.5\nK 178750.0 mois 234 annรฉes 19 step 312.5\nK 179062.5 mois 234 annรฉes 19 step 312.5\nK 179375.0 mois 235 annรฉes 19 step 312.5\nK 179687.5 mois 235 annรฉes 19 step 312.5\nK 180000.0 mois 236 annรฉes 19 step 312.5\nK 180312.5 mois 237 annรฉes 19 step 312.5\nK 180625.0 mois 237 annรฉes 19 step 312.5\nK 180937.5 mois 238 annรฉes 19 step 312.5\nK 181250.0 mois 238 annรฉes 19 step 312.5\nK 181562.5 mois 239 annรฉes 19 step 312.5\nK 181875.0 mois 239 annรฉes 19 step 312.5\nK 182187.5 mois 240 annรฉes 20 step 312.5\n" ] ], [ [ "### Q5 \n\nA Paris, on loue un appartement pour $L$ euros du m$^2$. Un parisien loue son appartement de $S m^2$ pour $SL$ euros. Ce parisien peut dรฉpenser $A$ euros par mois rรฉpartis en $SL$ le loyer et $A-SL$ les รฉconomies. Ecrire une fonction qui calcule les รฉconomies rรฉalisรฉes au bout de $Y$ annรฉes.\n\nAprรจs les รฉtudes, ce parisien ne dispose d'aucun apport financier. Il commence par louer un appartement. Chaque mois, il envisage de consacrer $A$ euros. Il loue un apprtement suffisamment de petit de telle sorte que le loyer est infรฉrieur ร  $A$. Chaque mois, il รฉconomise la diffรฉrence. Chaque euros รฉconomise est placรฉ ร  taux fixe annuel $r$, donc au taux mensuel $rm = (1+r)^{\\frac{1}{12}}-1$. Cette fonction calcule les รฉconomies rรฉalisรฉes.", "_____no_output_____" ] ], [ [ "def economie(A,S,L,r,Y):\n delta = A - S*L\n rm = ((1+r)**(1.0/12)-1)\n eco = 0\n nbm = Y*12\n while nbm > 0:\n eco = eco * (1+rm) + delta\n nbm -= 1\n return eco\n\neconomie(1000,40,20,0.015,10)", "_____no_output_____" ] ], [ [ "### Q6\n\nEn considรฉrant que ce mรชme parisien ne peut dรฉpenser plus de $A$ euros par mois, qu'il ne possรจde rien au dรฉbut de sa carriรจre professionnelle, on veut savoir ร  partir de combien d'annรฉes il sera en mesure d'acheter son appartement ร  supposer qu'il peut se constituer un apport en capital issu de ces รฉconomies. On suppose que le prix au mรจtre carrรฉ ร  Paris est $C$ et qu'il veut emprunter avec un prรชt d'une durรฉe fixe. \n\n\nSi on rรฉsume, le parisen n'a pas les moyens d'acheter son appartement dรจs la fin de ses รฉtudes. Il commence ร  louer et รฉconomise. La somme $A$ correspond ร  un tiers de son salaire, il sait รฉgalement qu'avec une mensualitรฉ de $A$, il peut emprunter au maximum sur 20 ans `` somme_maximale(A,p,20)``. 
", "_____no_output_____" ] ], [ [ "def economie(A,S,L,r,Y):\n delta = A - S*L\n rm = ((1+r)**(1.0/12)-1)\n eco = 0\n nbm = Y*12\n while nbm > 0:\n eco = eco * (1+rm) + delta\n nbm -= 1\n return eco\n\neconomie(1000,40,20,0.015,10)", "_____no_output_____" ] ], [ [ "### Q6\n\nConsidering that this same Parisian cannot spend more than $A$ euros per month and that he owns nothing at the start of his professional career, we want to know after how many years he will be able to buy his apartment, assuming he can build up a capital contribution out of these savings. We assume the price per square meter in Paris is $C$ and that he wants to borrow with a loan of fixed duration. \n\n\nTo summarize: the Parisian cannot afford to buy his apartment right at the end of his studies. He starts by renting and saving. The amount $A$ corresponds to a third of his salary, and he also knows that with a monthly payment of $A$ he can borrow at most, over 20 years, `` somme_maximale(A,p,20)``. He will therefore be able to buy an apartment of the same surface area once his savings, added to the maximum amount he can borrow, exceed the price of the apartment.\n\nEach year, we check whether the accumulated savings allow this Parisian to buy.", "_____no_output_____" ] ], [ [ "from pyquickhelper.helpgen import NbImage\nNbImage(\"exam2016_values.png\")", "_____no_output_____" ], [ "def bascule(A,S,L,r,Y,C,p):\n Y = 0\n possible = C*S\n while possible > 0:\n Y += 1\n eco = economie(A,S,L,r,Y)\n somme = somme_maximale(A,p,Y)\n possible = C*S - somme - eco\n return Y\n\nbascule(1000,40,20,0.015,20,8000,0.029)", "_____no_output_____" ] ], [ [ "### Q7 exo 1\n\nWrite a function that determines the smallest surface area this Parisian will never be able to afford within 20 years, to the nearest square meter.\n\nThe reasoning is always the same: going from the smallest surfaces to the largest, we look at the surface this Parisian can afford. The function ``bascule`` tells when a Parisian can buy his apartment for the first time. It is enough to start from a small surface and keep adding square meters until the tipping point happens 20 years after he started working.", "_____no_output_____" ] ], [ [ "def surface_max(A,L,r,Y,C,p,delay=20):\n S = 1\n wait = bascule(A,S,L,r,Y,C,p)\n while wait < delay:\n S += 1\n wait = bascule(A,S,L,r,Y,C,p)\n return S\n\nsurface_max(1000,20,0.015,20,8000,0.029)", "_____no_output_____" ] ], [ [ "### Q7 exo 2\n\nDetermine the amount $A$ that must be invested in order to buy 40 $m^2$ at age 30, assuming this Parisian started working at 23 and starts out renting an apartment of that surface area.\n\nThe reasoning is the same as in the previous question. We start with high monthly amounts $A$ and decrease them until the Parisian can no longer afford his apartment.", "_____no_output_____" ] ], [ [ "def A40a30(L,r,Y,C,p):\n A = 10000\n S = 40\n wait = bascule(A,S,L,r,Y,C,p)\n while wait < 7:\n A -= 100\n wait = bascule(A,S,L,r,Y,C,p)\n return A\n\nA40a30(20,0.015,20,8000,0.029)", "_____no_output_____" ] ], [ [ "### Q8 \n\nThis model does not take every real-life parameter into account. Name one. \n\n* Inflation\n* Salary increases\n* The rise of the price per square meter\n* ...", "_____no_output_____" ], [ "### Q9: bisection version of Q4\n\nWe use the function *decompose_mensualite* to compute the maximum amount a person can borrow with a monthly payment *M*, namely the amount for which everything goes to interest, ``M / ((1+p)**(1.0/12)-1)``, in other words the one corresponding to an infinite number of payments.\n\nThe answer we are looking for is an amount, but it is an increasing function of the duration of the loan. Rather than giving more explanations on this point, I suggest reading the problem [Problem C. Egg Drop](https://code.google.com/codejam/contest/32003/dashboard#s=p2). The same solution does not apply here, but the same mindset leads to it. The solution I propose is not necessarily the best one, but it is certainly faster than the first one proposed. \n\nThe line marked with ``###`` is the most important one. The search step is multiplied by a coefficient. Greater than 1: the algorithm cannot converge. Less than 1: the step must stay large enough to converge to the true value. It depends on the derivative $\\frac{df}{dK}$ where $f$ is the duration of the loan, ``len(mensualites(K, M, p))``.", "_____no_output_____" ] ], [ [ "def somme_maximale_dicho(M,p,Y):\n K_max = M / ((1+p)**(1.0/12)-1)\n K = K_max / 2\n step = 0.5\n dk = K * step\n l = mensualites(K, M, p)\n while len(l) != Y*12 and dk > 1e-5:\n if len(l) < Y*12:\n K += dk\n K = min(K_max - 1000, K)\n else:\n K -= dk\n dk *= step ###\n l = mensualites(K, M, p)\n if len(l) != Y*12:\n raise Exception(\"il faut augmenter step\")\n return K\n \nsomme_maximale_dicho(1000, 0.029, 20)", "_____no_output_____" ] ], [ [ "The parameter ``dk`` gives an indication of the precision.
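\n\nBefore timing, a quick hedged consistency check (a sketch using only the two functions above; both results should agree to within the 1000-euro step of the first version):\n\n assert abs(somme_maximale(1000, 0.029, 20) - somme_maximale_dicho(1000, 0.029, 20)) <= 1000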
 We then compare the execution times:", "_____no_output_____" ] ], [ [ "%timeit somme_maximale(1000, 0.029, 20)\n%timeit somme_maximale_dicho(1000, 0.029, 20)", "10 loops, best of 3: 25.7 ms per loop\n100 loops, best of 3: 2.8 ms per loop\n" ] ], [ [ "## Exercise 2: encoding phone numbers", "_____no_output_____" ], [ "### Q1 \n\nMichel often loses his address books, and to keep his relatives from being bothered by unexpected phone calls, he uses a code. If the first letter of the name is a vowel, he swaps digits 3 and 4, otherwise he leaves them as they are. If the second letter is a vowel, he swaps digits 5 and 6, and does nothing for a consonant; the same rule continues letter by letter, as the examples show. Example (accents are removed on purpose):\n\n adele 06 64 34 22 67 --> 06 46 34 22 67\n gerard 06 64 34 22 68 --> 06 64 43 22 86\n \nWrite the function that transforms a number. It is recommended to ignore the spaces.", "_____no_output_____" ] ], [ [ "def transforme_numero(prenom, numero):\n res = numero[:2]\n for i, c in enumerate(prenom):\n if c in \"aeiouy\":\n res += numero[i*2+3] + numero[i*2+2]\n else:\n res += numero[i*2+2:i*2+4]\n if len(res) >= len(numero):\n break\n return res\n\ntransforme_numero(\"adele\", \"0664342267\")", "_____no_output_____" ] ], [ [ "### Q2 \n\nWrite the function that performs the inverse transformation. The inverse is in fact the same function.", "_____no_output_____" ] ], [ [ "transforme_numero(\"adele\", \"0646342267\")", "_____no_output_____" ] ], [ [ "## Exercise 3: encoding phone numbers", "_____no_output_____" ], [ "### Q1\n\nMichel often loses his address books, and to keep his relatives from being bothered by unexpected phone calls, he uses a code. If the first letter of the name is a vowel, he adds digits 3 and 4 and replaces digit 4 with the units digit of the sum, otherwise he leaves them as they are. If the second letter is a vowel, he adds digits 5 and 6 and replaces digit 6 with the units digit of the sum, and does nothing for a consonant. Example (accents are removed on purpose):\n\n adele 06 64 34 22 67 --> 06 60 34 24 67\n gerard 06 64 34 22 68 --> 06 64 37 22 64\n\nWrite the function that transforms a number. It is recommended to ignore the spaces.", "_____no_output_____" ] ], [ [ "def transforme_numero(prenom, numero):\n res = numero[:2]\n for i, c in enumerate(prenom):\n if c in \"aeiouy\":\n res += numero[i*2+2] + str ( (int(numero[i*2+2]) + int(numero[i*2+3])) % 10)\n else:\n res += numero[i*2+2:i*2+4]\n if len(res) >= len(numero):\n break\n return res\n\ntransforme_numero(\"adele\", \"0664342267\")", "_____no_output_____" ] ], [ [ "### Q2\n\nIn your opinion, is it possible to write the function that performs the inverse transformation? Justify.\n\nYes, and here is the inverse. Consider one modified group of digits: the key point is that if the second digit is smaller than the first, then the sum of the two initial digits necessarily exceeded 10.
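\n\nIn other words (a short hedged justification): if the stored digit is $y = (a+b) \\bmod 10$, then the original $b$ is recovered as $(y - a + 10) \\bmod 10$, which is exactly what the function below computes.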
", "_____no_output_____" ] ], [ [ "def transforme_numero_envers(prenom, numero):\n res = numero[:2]\n for i, c in enumerate(prenom):\n if c in \"aeiouy\":\n res += numero[i*2+2] + str ( (int(numero[i*2+3]) - int(numero[i*2+2]) + 10) % 10)\n else:\n res += numero[i*2+2:i*2+4]\n if len(res) >= len(numero):\n break\n return res\n\ntransforme_numero_envers(\"adele\", \"0660342467\")", "_____no_output_____" ] ], [ [ "**Remark:** Many students ran into the following error", "_____no_output_____" ] ], [ [ "# raises an exception\n\"3\" < 4", "_____no_output_____" ] ], [ [ "As the example shows, this happens when you try a numerical comparison between a string and a numeric type. The string has to be converted first.", "_____no_output_____" ] ], [ [ "int(\"3\") < 4", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ece10b62c80ec526bd9aad19285d5be0454cc149
2,464
ipynb
Jupyter Notebook
tutorials/get_country_data.ipynb
francbartoli/flashgeotext
16dea64fba83100d7023222451d0263667830e3d
[ "MIT" ]
47
2020-02-24T01:34:10.000Z
2022-02-14T16:50:40.000Z
tutorials/get_country_data.ipynb
francbartoli/flashgeotext
16dea64fba83100d7023222451d0263667830e3d
[ "MIT" ]
179
2020-02-21T09:14:36.000Z
2022-03-28T04:09:47.000Z
tutorials/get_country_data.ipynb
francbartoli/flashgeotext
16dea64fba83100d7023222451d0263667830e3d
[ "MIT" ]
6
2020-09-21T20:15:45.000Z
2022-03-13T11:26:34.000Z
23.692308
118
0.535714
[ [ [ "# flashgeotext\n\nGet country data (+synonyms)", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom flashtext import KeywordProcessor\nimport io\nfrom fuzzywuzzy import fuzz\nimport requests\nimport pycountry\nimport bs4\nimport requests\nimport json", "_____no_output_____" ], [ "url = \"https://en.wikipedia.org/wiki/List_of_alternative_country_names\"\n\nresponse = requests.get(url)\n\nsoup = bs4.BeautifulSoup(response.text, \"html5lib\")", "_____no_output_____" ], [ "table_bodies = (table_body for table_body in soup.find_all('tbody'))", "_____no_output_____" ], [ "data = {}\nfor table_body in table_bodies:\n rows = table_body.find_all('tr')\n for row in rows:\n cols = row.find_all('td')\n if cols:\n country_name = cols[0].find(\"a\").text\n data[country_name] = [b.text for b in cols[1].find_all(\"b\")] + [country_name]\n data[country_name] = [word for word in data[country_name] if fuzz.ratio(country_name, word) > 20]\n\n# weird geopolitical overtaking of geonames.org\ndata[\"Taiwan\"] = [word for word in data[\"Taiwan\"] if \"China\" not in word]", "_____no_output_____" ], [ "with open(\"countries.json\", \"w\", encoding=\"utf8\") as file:\n json.dump(data, file, ensure_ascii=False)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
ece116710499c54e11b57cf8cc32ab78a23aa846
7,047
ipynb
Jupyter Notebook
Big-Data-Clusters/CU9/public/content/log-analyzers/tsg121-get-all-supervisor-mssql-logs.ipynb
WilliamAntonRohm/tigertoolbox
f6aa9ccbbed06d41fd7161f02a4e65871a3feaa9
[ "MIT" ]
541
2019-05-07T11:41:25.000Z
2022-03-29T17:33:19.000Z
Big-Data-Clusters/CU9/public/content/log-analyzers/tsg121-get-all-supervisor-mssql-logs.ipynb
sqlworldwide/tigertoolbox
2abcb62a09daf0116ab1ab9c9dd9317319b23297
[ "MIT" ]
89
2019-05-09T14:23:52.000Z
2022-01-13T20:21:04.000Z
Big-Data-Clusters/CU9/public/content/log-analyzers/tsg121-get-all-supervisor-mssql-logs.ipynb
sqlworldwide/tigertoolbox
2abcb62a09daf0116ab1ab9c9dd9317319b23297
[ "MIT" ]
338
2019-05-08T05:45:16.000Z
2022-03-28T15:35:03.000Z
7,047
7,047
0.637576
[ [ [ "# TSG121 - Supervisor mssql-server logs\n\nThese supervisor mssql-server logs can contain some more information\nfrom Polybase, not available in errorlog or the polybase logs.\n\n## Steps\n\n### Parameters", "_____no_output_____" ] ], [ [ "import re\n\ntail_lines = 500\n\npod = None # All\ncontainer = \"mssql-server\"\nlog_files = [ \"/var/log/supervisor/log/mssql-server-*.log\" ]\n\nexpressions_to_analyze = [\n re.compile(\".{26}[WARN ]\"),\n re.compile(\".{26}[ERROR]\")\n]\n\nlog_analyzer_rules = []", "_____no_output_____" ] ], [ [ "### Instantiate Kubernetes client", "_____no_output_____" ] ], [ [ "# Instantiate the Python Kubernetes client into 'api' variable\n\nimport os\nfrom IPython.display import Markdown\n\ntry:\n from kubernetes import client, config\n from kubernetes.stream import stream\nexcept ImportError: \n\n # Install the Kubernetes module\n import sys\n !{sys.executable} -m pip install kubernetes \n \n try:\n from kubernetes import client, config\n from kubernetes.stream import stream\n except ImportError:\n display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))\n raise\n\nif \"KUBERNETES_SERVICE_PORT\" in os.environ and \"KUBERNETES_SERVICE_HOST\" in os.environ:\n config.load_incluster_config()\nelse:\n try:\n config.load_kube_config()\n except:\n display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.'))\n raise\n\napi = client.CoreV1Api()\n\nprint('Kubernetes client instantiated')", "_____no_output_____" ] ], [ [ "### Get the namespace for the big data cluster\n\nGet the namespace of the Big Data Cluster from the Kuberenetes API.\n\n**NOTE:**\n\nIf there is more than one Big Data Cluster in the target Kubernetes\ncluster, then either:\n\n- set \\[0\\] to the correct value for the big data cluster.\n- set the environment variable AZDATA_NAMESPACE, before starting Azure\n Data Studio.", "_____no_output_____" ] ], [ [ "# Place Kubernetes namespace name for BDC into 'namespace' variable\n\nif \"AZDATA_NAMESPACE\" in os.environ:\n namespace = os.environ[\"AZDATA_NAMESPACE\"]\nelse:\n try:\n namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name\n except IndexError:\n from IPython.display import Markdown\n display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))\n display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))\n display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))\n raise\n\nprint('The kubernetes namespace for your big data cluster is: ' + namespace)", "_____no_output_____" ] ], [ [ "### Get tail for log", "_____no_output_____" ] ], [ [ "# Display the last 'tail_lines' of files in 'log_files' list\n\npods = api.list_namespaced_pod(namespace)\n\nentries_for_analysis = []\n\nfor p in pods.items:\n if pod is None or p.metadata.name == pod:\n for c in p.spec.containers:\n if container is None or c.name == container:\n for log_file in log_files:\n print (f\"- LOGS: '{log_file}' for CONTAINER: '{c.name}' in POD: '{p.metadata.name}'\")\n try:\n output = stream(api.connect_get_namespaced_pod_exec, p.metadata.name, namespace, command=['/bin/sh', '-c', f'tail -n {tail_lines} {log_file}'], 
", "_____no_output_____" ] ], [ [ "# Analyze log entries and suggest further relevant troubleshooting guides\nfrom IPython.display import Markdown\n\nprint(f\"Applying the following {len(log_analyzer_rules)} rules to {len(entries_for_analysis)} log entries for analysis, looking for HINTs to further troubleshooting.\")\nprint(log_analyzer_rules)\nhints = 0\nif len(log_analyzer_rules) > 0:\n for entry in entries_for_analysis:\n for rule in log_analyzer_rules:\n if entry.find(rule[0]) != -1:\n print (entry)\n\n display(Markdown(f'HINT: Use [{rule[2]}]({rule[3]}) to resolve this issue.'))\n hints = hints + 1\n\nprint(\"\")\nprint(f\"{len(entries_for_analysis)} log entries analyzed (using {len(log_analyzer_rules)} rules). {hints} further troubleshooting hints made inline.\")", "_____no_output_____" ], [ "print(\"Notebook execution is complete.\")", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
ece11c1fc046600e4bf19bd786c147553a9e82b8
5,569
ipynb
Jupyter Notebook
Python Fundamentals/Module_2.2_Required_Code_Python_Fundamentals.ipynb
VladimirZayas/pythonteachingcode
2a9ec18bba0d2e0685ca450ce67ebaaa5e7a9470
[ "MIT" ]
null
null
null
Python Fundamentals/Module_2.2_Required_Code_Python_Fundamentals.ipynb
VladimirZayas/pythonteachingcode
2a9ec18bba0d2e0685ca450ce67ebaaa5e7a9470
[ "MIT" ]
null
null
null
Python Fundamentals/Module_2.2_Required_Code_Python_Fundamentals.ipynb
VladimirZayas/pythonteachingcode
2a9ec18bba0d2e0685ca450ce67ebaaa5e7a9470
[ "MIT" ]
null
null
null
36.398693
264
0.565811
[ [ [ "# Module 2 Required Coding Activity \nIntroduction to Python (Unit 2) Fundamentals \n \nBe sure to complete all tutorials and practice prior to attempting this activity. \n\n| Important Assignment Requirements | \n|:-------------------------------| \n| **NOTE:** This program requires creating a function using **`def`** and **`return`**, using **`print`** output, **`input`**, **`if`**, **`in`** keywords, **`.append()`**, **`.pop()`**, **`.remove()`** list methods. As well as other standard Python | \n\n## Program: list-o-matic \nThis program takes string input and checks if that string is in a list of strings \n- if string is in the list it removes the first instance from list \n- if string is not in the list the input gets appended to the list \n- if the string is empty then the last item is popped from the list \n- if the **list becomes empty** the program ends \n- if the user enters \"quit\" then the program ends \n\nprogram has 2 parts \n- **program flow** which can be modified to ask for a specific type of item. This is the programmers choice. Add a list of fish, trees, books, movies, songs.... your choice. \n- **list-o-matic** Function which takes arguments of a string and a list. The function modifies the list and returns a message as seen below. \n\n![Flowchart of List-o-Matic](images/list_o_matic.png)\n\n**[ ]** initialize a list with several strings at the beginning of the program flow and follow the flow chart and output examples\n\n *example input/output* \n ```\nlook at all the animals ['cat', 'goat', 'cat']\nenter the name of an animal: horse\n1 instance of horse appended to list\n\nlook at all the animals ['cat', 'goat', 'cat', 'horse']\nenter the name of an animal: cat\n1 instance of cat removed from list\n\nlook at all the animals ['goat', 'cat', 'horse']\nenter the name of an animal: cat\n1 instance of cat removed from list\n\nlook at all the animals ['goat', 'horse']\nenter the name of an animal: (<-- entered empty string)\nhorse popped from list\n\nlook at all the animals ['goat']\nenter the name of an animal: (<-- entered empty string)\ngoat popped from list\n\nGoodbye!\n``` \n\n*example 2*\n```\nlook at all the animals ['cat', 'goat', 'cat']\nenter the name of an animal: Quit\nGoodbye!\n``` \n\n", "_____no_output_____" ] ], [ [ "# [] create list-o-matic\n# [] copy and paste in edX assignment page\ndef list_o_matic(input_string, input_list):\n if input_string == \"\":\n return print(input_list.pop(), \"popped from list\")\n else:\n if input_string in input_list:\n input_list.remove(input_string)\n return print(input_string, \"removed from list\")\n\n else:\n input_list.append(input_string)\n return print(\"1 instance of\", input_string, \"appended to list\")\n\n\ncheck_list = ['cat', 'goat', 'cat', 'horse']\n\nwhile check_list:\n check_string = input(\"enter the name of an animal or Quit/quit for quit:\")\n if check_string.lower() == \"quit\":\n break\n else:\n list_o_matic(check_string, check_list)\n\n", "enter the name of an animal or Quit/quit for quit:dog\n1 instance of dog appended to list\nenter the name of an animal or Quit/quit for quit:cat\ncat removed from list\nenter the name of an animal or Quit/quit for quit:cat\ncat removed from list\nenter the name of an animal or Quit/quit for quit:cat\n1 instance of cat appended to list\nenter the name of an animal or Quit/quit for quit:cat\ncat removed from list\nenter the name of an animal or Quit/quit for quit:cat\n1 instance of cat appended to list\nenter the name of an animal or Quit/quit for quit:dog\ndog 
\n\n", "_____no_output_____" ] ], [ [ "# [] create list-o-matic\n# [] copy and paste in edX assignment page\ndef list_o_matic(input_string, input_list):\n # empty string: pop the last item off the list\n if input_string == \"\":\n print(input_list.pop(), \"popped from list\")\n # string already present: remove its first instance\n elif input_string in input_list:\n input_list.remove(input_string)\n print(input_string, \"removed from list\")\n # otherwise: append the new string\n else:\n input_list.append(input_string)\n print(\"1 instance of\", input_string, \"appended to list\")\n\n\ncheck_list = ['cat', 'goat', 'cat', 'horse']\n\nwhile check_list:\n check_string = input(\"enter the name of an animal or Quit/quit for quit:\")\n if check_string.lower() == \"quit\":\n break\n else:\n list_o_matic(check_string, check_list)\n\n", "enter the name of an animal or Quit/quit for quit:dog\n1 instance of dog appended to list\nenter the name of an animal or Quit/quit for quit:cat\ncat removed from list\nenter the name of an animal or Quit/quit for quit:cat\ncat removed from list\nenter the name of an animal or Quit/quit for quit:cat\n1 instance of cat appended to list\nenter the name of an animal or Quit/quit for quit:cat\ncat removed from list\nenter the name of an animal or Quit/quit for quit:cat\n1 instance of cat appended to list\nenter the name of an animal or Quit/quit for quit:dog\ndog removed from list\nenter the name of an animal or Quit/quit for quit:dog\n1 instance of dog appended to list\nenter the name of an animal or Quit/quit for quit:quit\n" ] ], [ [ "Submit this by creating a python file (.py) and submitting it in D2L. Be sure to test that it works.\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ] ]
ece12a8acc42a0858ee103b1ed304e8f6189579c
40,110
ipynb
Jupyter Notebook
intro-to-pytorch/Part 3 - Training Neural Networks (Exercises).ipynb
DishinGoyani/deep-learning-v2-pytorch
6d703fe82661a9c4761e2710bcea59c8c3eee04a
[ "MIT" ]
null
null
null
intro-to-pytorch/Part 3 - Training Neural Networks (Exercises).ipynb
DishinGoyani/deep-learning-v2-pytorch
6d703fe82661a9c4761e2710bcea59c8c3eee04a
[ "MIT" ]
null
null
null
intro-to-pytorch/Part 3 - Training Neural Networks (Exercises).ipynb
DishinGoyani/deep-learning-v2-pytorch
6d703fe82661a9c4761e2710bcea59c8c3eee04a
[ "MIT" ]
null
null
null
50.772152
7,180
0.656769
[ [ [ "# Training Neural Networks\n\nThe network we built in the previous part isn't so smart, it doesn't know anything about our handwritten digits. Neural networks with non-linear activations work like universal function approximators. There is some function that maps your input to the output. For example, images of handwritten digits to class probabilities. The power of neural networks is that we can train them to approximate this function, and basically any function given enough data and compute time.\n\n<img src=\"assets/function_approx.png\" width=500px>\n\nAt first the network is naive, it doesn't know the function mapping the inputs to the outputs. We train the network by showing it examples of real data, then adjusting the network parameters such that it approximates this function.\n\nTo find these parameters, we need to know how poorly the network is predicting the real outputs. For this we calculate a **loss function** (also called the cost), a measure of our prediction error. For example, the mean squared loss is often used in regression and binary classification problems\n\n$$\n\\large \\ell = \\frac{1}{2n}\\sum_i^n{\\left(y_i - \\hat{y}_i\\right)^2}\n$$\n\nwhere $n$ is the number of training examples, $y_i$ are the true labels, and $\\hat{y}_i$ are the predicted labels.\n\nBy minimizing this loss with respect to the network parameters, we can find configurations where the loss is at a minimum and the network is able to predict the correct labels with high accuracy. We find this minimum using a process called **gradient descent**. The gradient is the slope of the loss function and points in the direction of fastest change. To get to the minimum in the least amount of time, we then want to follow the gradient (downwards). You can think of this like descending a mountain by following the steepest slope to the base.\n\n<img src='assets/gradient_descent.png' width=350px>", "_____no_output_____" ], [ "## Backpropagation\n\nFor single layer networks, gradient descent is straightforward to implement. However, it's more complicated for deeper, multilayer neural networks like the one we've built. Complicated enough that it took about 30 years before researchers figured out how to train multilayer networks.\n\nTraining multilayer networks is done through **backpropagation** which is really just an application of the chain rule from calculus. It's easiest to understand if we convert a two layer network into a graph representation.\n\n<img src='assets/backprop_diagram.png' width=550px>\n\nIn the forward pass through the network, our data and operations go from bottom to top here. We pass the input $x$ through a linear transformation $L_1$ with weights $W_1$ and biases $b_1$. The output then goes through the sigmoid operation $S$ and another linear transformation $L_2$. Finally we calculate the loss $\\ell$. We use the loss as a measure of how bad the network's predictions are. The goal then is to adjust the weights and biases to minimize the loss.\n\nTo train the weights with gradient descent, we propagate the gradient of the loss backwards through the network. Each operation has some gradient between the inputs and outputs. As we send the gradients backwards, we multiply the incoming gradient with the gradient for the operation. 
", "_____no_output_____" ], [ "## Losses in PyTorch\n\nLet's start by seeing how we calculate the loss with PyTorch. Through the `nn` module, PyTorch provides losses such as the cross-entropy loss (`nn.CrossEntropyLoss`). You'll usually see the loss assigned to `criterion`. As noted in the last part, with a classification problem such as MNIST, we're using the softmax function to predict class probabilities. With a softmax output, you want to use cross-entropy as the loss. To actually calculate the loss, you first define the criterion then pass in the output of your network and the correct labels.\n\nThere's something really important to note here. Looking at [the documentation for `nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss),\n\n> This criterion combines `nn.LogSoftmax()` and `nn.NLLLoss()` in one single class.\n>\n> The input is expected to contain scores for each class.\n\nThis means we need to pass in the raw output of our network into the loss, not the output of the softmax function. This raw output is usually called the *logits* or *scores*. We use the logits because softmax gives you probabilities which will often be very close to zero or one but floating-point numbers can't accurately represent values near zero or one ([read more here](https://docs.python.org/3/tutorial/floatingpoint.html)). It's usually best to avoid doing calculations with probabilities; typically we use log-probabilities.
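\n\nAs a hedged illustration of the equivalence quoted above (tiny made-up logits, not the MNIST model):\n\n```python\nimport torch\nfrom torch import nn\n\nlogits = torch.tensor([[2.0, 0.5, -1.0]]) # raw scores for 3 classes\nlabels = torch.tensor([0])\n\n# CrossEntropyLoss on raw logits ...\nloss_a = nn.CrossEntropyLoss()(logits, labels)\n# ... should equal NLLLoss applied to the log-softmax output\nloss_b = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), labels)\nprint(loss_a, loss_b)\n```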
", "_____no_output_____" ] ], [ [ "import torch\nfrom torch import nn\nimport torch.nn.functional as F\nfrom torchvision import datasets, transforms\n\n# Define a transform to normalize the data (MNIST images are single-channel)\ntransform = transforms.Compose([transforms.ToTensor(),\n transforms.Normalize((0.5,), (0.5,)),\n ])\n# Download and load the training data\ntrainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)", "_____no_output_____" ] ], [ [ "### Note\nIf you haven't seen `nn.Sequential` yet, please finish the end of the Part 2 notebook.", "_____no_output_____" ] ], [ [ "# Build a feed-forward network\nmodel = nn.Sequential(nn.Linear(784, 128),\n nn.ReLU(),\n nn.Linear(128, 64),\n nn.ReLU(),\n nn.Linear(64, 10))\n\n# Define the loss\ncriterion = nn.CrossEntropyLoss()\n\n# Get our data\nimages, labels = next(iter(trainloader))\n# Flatten images\nimages = images.view(images.shape[0], -1)\n\n# Forward pass, get our logits\nlogits = model(images)\n# Calculate the loss with the logits and the labels\nloss = criterion(logits, labels)\n\nprint(loss)", "tensor(2.3013)\n" ] ], [ [ "In my experience it's more convenient to build the model with a log-softmax output using `nn.LogSoftmax` or `F.log_softmax` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LogSoftmax)). Then you can get the actual probabilities by taking the exponential `torch.exp(output)`. With a log-softmax output, you want to use the negative log likelihood loss, `nn.NLLLoss` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.NLLLoss)).\n\n>**Exercise:** Build a model that returns the log-softmax as the output and calculate the loss using the negative log likelihood loss. Note that for `nn.LogSoftmax` and `F.log_softmax` you'll need to set the `dim` keyword argument appropriately. `dim=0` calculates softmax across the rows, so each column sums to 1, while `dim=1` calculates across the columns so each row sums to 1. Think about what you want the output to be and choose `dim` appropriately.", "_____no_output_____" ] ], [ [ "# TODO: Build a feed-forward network\nmodel = nn.Sequential(nn.Linear(784,128),\n nn.ReLU(),\n nn.Linear(128,64),\n nn.ReLU(),\n nn.Linear(64,10),\n nn.LogSoftmax(dim=1))\n\n# TODO: Define the loss\ncriterion = nn.NLLLoss()\n\n### Run this to check your work\n# Get our data\nimages, labels = next(iter(trainloader))\n# Flatten images\nprint(images.shape[0])\nimages = images.view(images.shape[0], -1)\n\n# Forward pass, get our logits\nlogits = model(images)\n# Calculate the loss with the logits and the labels\nloss = criterion(logits, labels)\n\nprint(loss)", "64\ntensor(2.2886)\n" ], [ "a = torch.tensor([1., 2., 3.], requires_grad=True) # requires_grad needs a floating-point tensor", "_____no_output_____" ], [ "b = a + 1\nb = b * 1 # operations on a tensor with requires_grad are tracked by autograd", "_____no_output_____" ], [ "b.sum().backward() # backward() needs a scalar, so reduce first\nprint(a.grad) # gradient of sum((a+1)*1) w.r.t. a is all ones", "_____no_output_____" ] ], [ [ "## Autograd\n\nNow that we know how to calculate a loss, how do we use it to perform backpropagation? Torch provides a module, `autograd`, for automatically calculating the gradients of tensors. We can use it to calculate the gradients of all our parameters with respect to the loss. Autograd works by keeping track of operations performed on tensors, then going backwards through those operations, calculating gradients along the way. 
To make sure PyTorch keeps track of operations on a tensor and calculates the gradients, you need to set `requires_grad = True` on a tensor. You can do this at creation with the `requires_grad` keyword, or at any time with `x.requires_grad_(True)`.\n\nYou can turn off gradients for a block of code with the `torch.no_grad()` context:\n```python\nx = torch.zeros(1, requires_grad=True)\n>>> with torch.no_grad():\n...     y = x * 2\n>>> y.requires_grad\nFalse\n```\n\nAlso, you can turn on or off gradients altogether with `torch.set_grad_enabled(True|False)`.\n\nThe gradients are computed with respect to some variable `z` with `z.backward()`. This does a backward pass through the operations that created `z`.", "_____no_output_____" ] ], [ [ "x = torch.randn(2,2, requires_grad=True)\nprint(x)", "tensor([[-1.1030, -1.3583],\n        [-1.0687, -1.1809]])\n" ], [ "y = x**2\nprint(y)", "tensor([[ 1.2167,  1.8449],\n        [ 1.1420,  1.3944]])\n" ] ], [ [ "Below we can see the operation that created `y`, a power operation `PowBackward0`.", "_____no_output_____" ] ], [ [ "## grad_fn shows the function that generated this variable\nprint(y.grad_fn)", "<PowBackward0 object at 0x7fd878513b38>\n" ] ], [ [ "The autograd module keeps track of these operations and knows how to calculate the gradient for each one. In this way, it's able to calculate the gradients for a chain of operations, with respect to any one tensor. Let's reduce the tensor `y` to a scalar value, the mean.", "_____no_output_____" ] ], [ [ "z = y.mean()\nprint(z)", "tensor(1.3995)\n" ] ], [ [ "You can check the gradients for `x` and `y` but they are empty currently.", "_____no_output_____" ] ], [ [ "print(x.grad,y.grad)", "None None\n" ] ], [ [ "To calculate the gradients, you need to run the `.backward` method on a Variable, `z` for example. This will calculate the gradient for `z` with respect to `x`.\n\n$$\n\\frac{\\partial z}{\\partial x} = \\frac{\\partial}{\\partial x}\\left[\\frac{1}{n}\\sum_i^n x_i^2\\right] = \\frac{x}{2}\n$$", "_____no_output_____" ] ], [ [ "z.backward()", "_____no_output_____" ], [ "x.grad", "_____no_output_____" ], [ "print(x.grad)\nprint(x/2)", "tensor([[-0.5515, -0.6791],\n        [-0.5343, -0.5904]])\ntensor([[-0.5515, -0.6791],\n        [-0.5343, -0.5904]])\n" ] ], [ [ "These gradient calculations are particularly useful for neural networks. For training we need the gradients of the weights with respect to the cost. With PyTorch, we run data forward through the network to calculate the loss, then go backwards to calculate the gradients with respect to the loss. Once we have the gradients we can make a gradient descent step. ", "_____no_output_____" ], [ "## Loss and Autograd together\n\nWhen we create a network with PyTorch, all of the parameters are initialized with `requires_grad = True`. This means that when we calculate the loss and call `loss.backward()`, the gradients for the parameters are calculated. These gradients are used to update the weights with gradient descent. 
Below you can see an example of calculating the gradients using a backwards pass.", "_____no_output_____" ] ], [ [ "# Build a feed-forward network\nmodel = nn.Sequential(nn.Linear(784, 128),\n nn.ReLU(),\n nn.Linear(128, 64),\n nn.ReLU(),\n nn.Linear(64, 10),\n nn.LogSoftmax(dim=1))\n\ncriterion = nn.NLLLoss()\nimages, labels = next(iter(trainloader))\nimages = images.view(images.shape[0], -1)\n\nlogits = model(images)\nloss = criterion(logits, labels)", "_____no_output_____" ], [ "print('Before backward pass: \\n', model[0].weight.grad)\n\nloss.backward()\n\nprint('After backward pass: \\n', model[0].weight.grad)", "Before backward pass: \n None\nAfter backward pass: \n tensor(1.00000e-02 *\n [[ 0.2231, 0.2231, 0.2231, ..., 0.2231, 0.2231, 0.2231],\n [ 0.1125, 0.1125, 0.1125, ..., 0.1125, 0.1125, 0.1125],\n [ 0.0926, 0.0926, 0.0926, ..., 0.0926, 0.0926, 0.0926],\n ...,\n [ 0.5291, 0.5291, 0.5291, ..., 0.5291, 0.5291, 0.5291],\n [ 0.0711, 0.0711, 0.0711, ..., 0.0711, 0.0711, 0.0711],\n [-0.1035, -0.1035, -0.1035, ..., -0.1035, -0.1035, -0.1035]])\n" ] ], [ [ "## Training the network!\n\nThere's one last piece we need to start training, an optimizer that we'll use to update the weights with the gradients. We get these from PyTorch's [`optim` package](https://pytorch.org/docs/stable/optim.html). For example we can use stochastic gradient descent with `optim.SGD`. You can see how to define an optimizer below.", "_____no_output_____" ] ], [ [ "from torch import optim\n\n# Optimizers require the parameters to optimize and a learning rate\noptimizer = optim.SGD(model.parameters(), lr=0.01)", "_____no_output_____" ] ], [ [ "Now we know how to use all the individual parts so it's time to see how they work together. Let's consider just one learning step before looping through all the data. The general process with PyTorch:\n\n* Make a forward pass through the network \n* Use the network output to calculate the loss\n* Perform a backward pass through the network with `loss.backward()` to calculate the gradients\n* Take a step with the optimizer to update the weights\n\nBelow I'll go through one training step and print out the weights and gradients so you can see how it changes. Note that I have a line of code `optimizer.zero_grad()`. When you do multiple backwards passes with the same parameters, the gradients are accumulated. 
This means that you need to zero the gradients on each training pass or you'll retain gradients from previous training batches.", "_____no_output_____" ] ], [ [ "print('Initial weights - ', model[0].weight)\n\nimages, labels = next(iter(trainloader))\nimages.resize_(64, 784)\n\n# Clear the gradients, do this because gradients are accumulated\noptimizer.zero_grad()\n\n# Forward pass, then backward pass, then update weights\noutput = model.forward(images)\nloss = criterion(output, labels)\nloss.backward()\nprint('Gradient -', model[0].weight.grad)", "Initial weights - Parameter containing:\ntensor([[ 1.6527e-02, 3.0343e-03, 1.4228e-02, ..., 2.4149e-02,\n 3.5275e-02, 1.6788e-02],\n [ 2.8774e-02, 2.2902e-03, 7.4301e-04, ..., 3.3467e-03,\n -1.1749e-03, -8.7809e-03],\n [ 1.7316e-02, -2.4315e-02, -3.0231e-02, ..., 4.2068e-03,\n 3.4960e-02, -2.3314e-02],\n ...,\n [ 1.0681e-02, 2.7114e-02, -1.2297e-02, ..., 2.6286e-05,\n 2.0330e-02, 4.9308e-03],\n [-3.5567e-02, -1.0495e-02, 2.7847e-02, ..., 1.2484e-03,\n -2.5319e-02, 1.3008e-02],\n [ 3.1278e-03, 1.1646e-02, 2.4318e-03, ..., -6.4697e-03,\n 1.2041e-02, -1.8990e-02]])\nGradient - tensor(1.00000e-02 *\n [[ 0.1212, 0.1212, 0.1212, ..., 0.1212, 0.1212, 0.1212],\n [ 0.1417, 0.1417, 0.1417, ..., 0.1417, 0.1417, 0.1417],\n [ 0.0871, 0.0871, 0.0871, ..., 0.0871, 0.0871, 0.0871],\n ...,\n [ 0.3301, 0.3301, 0.3301, ..., 0.3301, 0.3301, 0.3301],\n [-0.0326, -0.0326, -0.0326, ..., -0.0326, -0.0326, -0.0326],\n [ 0.0948, 0.0948, 0.0948, ..., 0.0948, 0.0948, 0.0948]])\n" ], [ "# Take an update step and few the new weights\noptimizer.step()\nprint('Updated weights - ', model[0].weight)", "Updated weights - Parameter containing:\ntensor(1.00000e-02 *\n [[ 1.6515, 0.3022, 1.4216, ..., 2.4136, 3.5262, 1.6776],\n [ 2.8760, 0.2276, 0.0729, ..., 0.3333, -0.1189, -0.8795],\n [ 1.7308, -2.4324, -3.0240, ..., 0.4198, 3.4952, -2.3323],\n ...,\n [ 1.0648, 2.7081, -1.2330, ..., -0.0007, 2.0297, 0.4898],\n [-3.5564, -1.0492, 2.7851, ..., 0.1252, -2.5316, 1.3011],\n [ 0.3118, 1.1637, 0.2422, ..., -0.6479, 1.2031, -1.9000]])\n" ] ], [ [ "### Training for real\n\nNow we'll put this algorithm into a loop so we can go through all the images. Some nomenclature, one pass through the entire dataset is called an *epoch*. So here we're going to loop through `trainloader` to get our training batches. For each batch, we'll doing a training pass where we calculate the loss, do a backwards pass, and update the weights.\n\n>**Exercise:** Implement the training pass for our network. 
If you implemented it correctly, you should see the training loss drop with each epoch.", "_____no_output_____" ] ], [ [ "## Your solution here\n\nmodel = nn.Sequential(nn.Linear(784, 128),\n                      nn.ReLU(),\n                      nn.Linear(128, 64),\n                      nn.ReLU(),\n                      nn.Linear(64, 10),\n                      nn.LogSoftmax(dim=1))\n\ncriterion = nn.NLLLoss()\noptimizer = optim.SGD(model.parameters(), lr=0.003)\n\nepochs = 5\nfor e in range(epochs):\n    running_loss = 0\n    for images, labels in trainloader:\n        # Flatten MNIST images into a 784 long vector\n        images = images.view(images.shape[0], -1)\n    \n        # TODO: Training pass\n        output = model.forward(images)\n        loss = criterion(output,labels)\n        optimizer.zero_grad()\n        loss.backward()\n        optimizer.step()\n        running_loss += loss.item()\n    else:\n        print(f\"Training loss: {running_loss/len(trainloader)}\")", "Training loss: 2.0052468328079436\nTraining loss: 0.9583449224546265\nTraining loss: 0.5563749428242762\nTraining loss: 0.4503236487825542\nTraining loss: 0.40166557022630534\n" ], [ "%matplotlib inline\nimport helper\n\nimages, labels = next(iter(trainloader))\n\nimg = images[0].view(1, 784)\n# Turn off gradients to speed up this part\nwith torch.no_grad():\n    logits = model.forward(img)\n\n# Outputs of the network are logits; take the softmax to get probabilities\nps = F.softmax(logits, dim=1)\nhelper.view_classify(img.view(1, 28, 28), ps)", "_____no_output_____" ], [ "# Another example with a different network\nmodel = nn.Sequential(nn.Linear(784, 64),\n                      nn.ReLU(),\n                      \n                      nn.Linear(64, 10),\n                      nn.LogSoftmax(dim=1))\n\ncriterion = nn.NLLLoss()\noptimizer = optim.SGD(model.parameters(), lr=0.003)\n\nepochs = 5\nfor e in range(epochs):\n    running_loss = 0\n    for images, labels in trainloader:\n        # Flatten MNIST images into a 784 long vector\n        images = images.view(images.shape[0], -1)\n    \n        # TODO: Training pass\n        optimizer.zero_grad()\n        \n        output = model.forward(images)\n        loss = criterion(output, labels)\n        loss.backward()\n        optimizer.step()\n        \n        running_loss += loss.item()\n    else:\n        print(f\"Training loss: {running_loss/len(trainloader)}\")", "Training loss: 1.4049813839545382\nTraining loss: 0.6131599609182079\nTraining loss: 0.46123547877457094\nTraining loss: 0.4033867822908389\nTraining loss: 0.3724721379276278\n" ] ], [ [ "With the network trained, we can check out its predictions.", "_____no_output_____" ], [ "Now our network is brilliant. It can accurately predict the digits in our images. Next up you'll write the code for training a neural network on a more complex dataset.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ] ]
ece13298286dc873fb82faf874fdc06640cd3691
7,842
ipynb
Jupyter Notebook
ipynb/HTM_03/Uebungen/3.1_cc.ipynb
kassbohm/wb-snippets
f1ac5194e9f60a9260d096ba5ed1ce40b844a3fe
[ "MIT" ]
null
null
null
ipynb/HTM_03/Uebungen/3.1_cc.ipynb
kassbohm/wb-snippets
f1ac5194e9f60a9260d096ba5ed1ce40b844a3fe
[ "MIT" ]
null
null
null
ipynb/HTM_03/Uebungen/3.1_cc.ipynb
kassbohm/wb-snippets
f1ac5194e9f60a9260d096ba5ed1ce40b844a3fe
[ "MIT" ]
null
null
null
35.008929
123
0.416475
[ [ [ "# Header starts here.\nfrom sympy.physics.units import *\nfrom sympy import *\n\n# Rounding:\nimport decimal\nfrom decimal import Decimal as DX\nfrom copy import deepcopy\ndef iso_round(obj, pv, rounding=decimal.ROUND_HALF_EVEN):\n import sympy\n \"\"\"\n Rounding acc. to DIN EN ISO 80000-1:2013-08\n place value = Rundestellenwert\n \"\"\"\n assert pv in set([\n # place value # round to:\n 1, # 1\n 0.1, # 1st digit after decimal\n 0.01, # 2nd\n 0.001, # 3rd\n 0.0001, # 4th\n 0.00001, # 5th\n 0.000001, # 6th\n 0.0000001, # 7th\n 0.00000001, # 8th\n 0.000000001, # 9th\n 0.0000000001, # 10th\n ])\n objc = deepcopy(obj)\n try:\n tmp = DX(str(float(objc)))\n objc = tmp.quantize(DX(str(pv)), rounding=rounding)\n except:\n for i in range(len(objc)):\n tmp = DX(str(float(objc[i])))\n objc[i] = tmp.quantize(DX(str(pv)), rounding=rounding)\n return objc\n\n# LateX:\nkwargs = {}\nkwargs[\"mat_str\"] = \"bmatrix\"\nkwargs[\"mat_delim\"] = \"\"\n# kwargs[\"symbol_names\"] = {FB: \"F^{\\mathsf B}\", }\n\n# Units:\n(k, M, G ) = ( 10**3, 10**6, 10**9 )\n(mm, cm, deg) = ( m/1000, m/100, pi/180)\nNewton = kg*m/s**2\nPa = Newton/m**2\nMPa = M*Pa\nGPa = G*Pa\nkN = k*Newton\n\nhalf = S(1)/2\n\n# Header ends here.\n#\n# https://colab.research.google.com/github/kassbohm/wb-snippets/blob/master/ipynb/HTM_03/Uebungen/3.1_cc.ipynb\n\nfrom mpmath import radians, degrees, pi\n\nl1, l2, l3, b = 1*m, 2*m, 2*m, 2*m\np1 = radians(30)\nprec = 0.001\n\np2, p3 = var(\"ฯ†โ‚‚ ฯ†โ‚ƒ\")\ndp2, dp3 = var(\"ฮดฯ†โ‚‚ ฮดฯ†โ‚ƒ\")\n\nc1, s1 = cos(p1), sin(p1)\nc2, s2 = cos(p2), sin(p2)\nc3, s3 = cos(p3), sin(p3)\n\nxdev = ( l1*c1 + l2*c2 - l3*c3 - b ) / m\nydev = ( l1*s1 + l2*s2 - l3*s3 ) / m\n\nJ11, J12 = diff(xdev, p2), diff(xdev, p3)\nJ21, J22 = diff(ydev, p2), diff(ydev, p3)\n\n# Jacobian:\nJ = Matrix([[J11, J12],[J21, J22]])\n\npprint(\"\\n--- Numerical Solution using SymPy's nsolve ---\")\n\n# Initial Values:\npprint(\"\\nInitial (ฯ†โ‚‚ ฯ†โ‚ƒ) / deg:\")\npprint(Matrix([45, 90]))\np2i, p3i = radians(45), radians(90)\n\nsol = nsolve( [xdev, ydev], [p2, p3], [p2i , p3i], dict=True )\nsol = sol[0]\nsol = Matrix([sol[p2], sol[p3]])\npprint(\"\\nnsolve-solution:\")\npprint(\"\\n(ฯ†โ‚‚ ฯ†โ‚ƒ) / deg:\")\ntmp = degrees(sol)\npprint(iso_round(tmp,prec))\n\npprint(\"\\n--- Step-by-Step Newton-Solution ---\")\npprint(\"\\n1: Jacobian:\")\npprint(J)\n\n# n = 1 Iteration:\npprint(\"\\n2: Inital (ฯ†โ‚‚ ฯ†โ‚ƒ) / rad:\")\np = Matrix([p2i, p3i])\npprint(iso_round(p, prec))\n\nF = Matrix([xdev, ydev])\n\nsub_list = [\n (p2, p2i),\n (p3, p3i),\n ]\n\npprint(\"\\n3: Jacobian for (ฯ†โ‚‚ ฯ†โ‚ƒ):\")\nJn = J.subs(sub_list)\n# pprint(Jn)\npprint(iso_round(Jn, prec))\n\npprint(\"\\n4: Deviation Phi for (ฯ†โ‚‚ ฯ†โ‚ƒ) / rad:\")\nFn = F.subs(sub_list)\npprint(iso_round(Fn, prec))\n\n# Newton-Step:\npprint(\"\\n5: Increment (ฮดฯ†โ‚‚ ฮดฯ†โ‚ƒ) / rad:\")\ndp = Matrix([dp2, dp3])\neq = Eq(Jn*dp, -Fn)\nsol = solve(eq,[dp2, dp3], dict=True)\nsol = sol[0]\ndp = Matrix([sol[dp2], sol[dp3]])\npprint(iso_round(dp, prec))\np += dp\n\npprint(\"\\n6: At end of iteration step:\")\npprint(\"\\n(ฯ†โ‚‚ ฯ†โ‚ƒ) / rad:\")\npprint(iso_round(p, prec))\n\npprint(\"\\n(ฯ†โ‚‚ ฯ†โ‚ƒ) / deg:\")\npprint(iso_round(degrees(p), prec))\n\n# --- Numerical Solution using SymPy's nsolve ---\n#\n# Initial (ฯ†โ‚‚ ฯ†โ‚ƒ) / deg:\n# โŽก45โŽค\n# โŽข โŽฅ\n# โŽฃ90โŽฆ\n#\n# nsolve-solution:\n#\n# (ฯ†โ‚‚ ฯ†โ‚ƒ) / deg:\n# โŽก48.157โŽค\n# โŽข โŽฅ\n# โŽฃ84.255โŽฆ\n#\n# --- Step-by-Step Newton-Solution ---\n#\n# 1: Jacobian:\n# โŽก-2โ‹…sin(ฯ†โ‚‚) 2โ‹…sin(ฯ†โ‚ƒ) โŽค\n# โŽข โŽฅ\n# โŽฃ2โ‹…cos(ฯ†โ‚‚) 
-2โ‹…cos(ฯ†โ‚ƒ)โŽฆ\n#\n# 2: Inital (ฯ†โ‚‚ ฯ†โ‚ƒ) / rad:\n# โŽก0.785โŽค\n# โŽข โŽฅ\n# โŽฃ1.571โŽฆ\n#\n# 3: Jacobian for (ฯ†โ‚‚ ฯ†โ‚ƒ):\n# โŽก-1.414 2.0โŽค\n# โŽข โŽฅ\n# โŽฃ1.414 0.0โŽฆ\n#\n# 4: Deviation Phi for (ฯ†โ‚‚ ฯ†โ‚ƒ) / rad:\n# โŽก 0.28 โŽค\n# โŽข โŽฅ\n# โŽฃ-0.086โŽฆ\n#\n# 5: Increment (ฮดฯ†โ‚‚ ฮดฯ†โ‚ƒ) / rad:\n# โŽก0.061 โŽค\n# โŽข โŽฅ\n# โŽฃ-0.097โŽฆ\n#\n# 6: At end of iteration step:\n#\n# (ฯ†โ‚‚ ฯ†โ‚ƒ) / rad:\n# โŽก0.846โŽค\n# โŽข โŽฅ\n# โŽฃ1.474โŽฆ\n#\n# (ฯ†โ‚‚ ฯ†โ‚ƒ) / deg:\n# โŽก48.476โŽค\n# โŽข โŽฅ\n# โŽฃ84.429โŽฆ\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
ece136f149ca5ca6da0e7d2d3f06040ec5372ecc
81,658
ipynb
Jupyter Notebook
docs/source/examples/Using Interact.ipynb
oscar6echo/ipywidgets
a5709728f71e21c81a89e8d123dde2068dd1e74d
[ "BSD-3-Clause" ]
214
2017-12-11T21:32:15.000Z
2022-02-07T23:18:32.000Z
docs/source/examples/Using Interact.ipynb
oscar6echo/ipywidgets
a5709728f71e21c81a89e8d123dde2068dd1e74d
[ "BSD-3-Clause" ]
98
2017-12-21T06:48:23.000Z
2022-03-08T07:20:10.000Z
docs/source/examples/Using Interact.ipynb
oscar6echo/ipywidgets
a5709728f71e21c81a89e8d123dde2068dd1e74d
[ "BSD-3-Clause" ]
21
2018-01-25T09:10:20.000Z
2021-04-11T21:22:38.000Z
30.68696
4,900
0.596108
[ [ [ "# Using Interact", "_____no_output_____" ], [ "The `interact` function (`ipywidgets.interact`) automatically creates user interface (UI) controls for exploring code and data interactively. It is the easiest way to get started using IPython's widgets.", "_____no_output_____" ] ], [ [ "from __future__ import print_function\nfrom ipywidgets import interact, interactive, fixed, interact_manual\nimport ipywidgets as widgets", "_____no_output_____" ] ], [ [ "## Basic `interact`", "_____no_output_____" ], [ "At the most basic level, `interact` autogenerates UI controls for function arguments, and then calls the function with those arguments when you manipulate the controls interactively. To use `interact`, you need to define a function that you want to explore. Here is a function that prints its only argument `x`.", "_____no_output_____" ] ], [ [ "def f(x):\n return x", "_____no_output_____" ] ], [ [ "When you pass this function as the first argument to `interact` along with an integer keyword argument (`x=10`), a slider is generated and bound to the function parameter.", "_____no_output_____" ] ], [ [ "interact(f, x=10);", "_____no_output_____" ] ], [ [ "When you move the slider, the function is called, which prints the current value of `x`.\n\nIf you pass `True` or `False`, `interact` will generate a checkbox:", "_____no_output_____" ] ], [ [ "interact(f, x=True);", "_____no_output_____" ] ], [ [ "If you pass a string, `interact` will generate a text area.", "_____no_output_____" ] ], [ [ "interact(f, x='Hi there!');", "_____no_output_____" ] ], [ [ "`interact` can also be used as a decorator. This allows you to define a function and interact with it in a single shot. As this example shows, `interact` also works with functions that have multiple arguments.", "_____no_output_____" ] ], [ [ "@interact(x=True, y=1.0)\ndef g(x, y):\n return (x, y)", "_____no_output_____" ] ], [ [ "## Fixing arguments using `fixed`", "_____no_output_____" ], [ "There are times when you may want to explore a function using `interact`, but fix one or more of its arguments to specific values. This can be accomplished by wrapping values with the `fixed` function.", "_____no_output_____" ] ], [ [ "def h(p, q):\n return (p, q)", "_____no_output_____" ] ], [ [ "When we call `interact`, we pass `fixed(20)` for q to hold it fixed at a value of `20`.", "_____no_output_____" ] ], [ [ "interact(h, p=5, q=fixed(20));", "_____no_output_____" ] ], [ [ "Notice that a slider is only produced for `p` as the value of `q` is fixed.", "_____no_output_____" ], [ "## Widget abbreviations", "_____no_output_____" ], [ "When you pass an integer-valued keyword argument of `10` (`x=10`) to `interact`, it generates an integer-valued slider control with a range of `[-10,+3*10]`. In this case, `10` is an *abbreviation* for an actual slider widget:\n\n```python\nIntSlider(min=-10,max=30,step=1,value=10)\n```\n\nIn fact, we can get the same result if we pass this `IntSlider` as the keyword argument for `x`:", "_____no_output_____" ] ], [ [ "interact(f, x=widgets.IntSlider(min=-10,max=30,step=1,value=10));", "_____no_output_____" ] ], [ [ "This examples clarifies how `interact` proceses its keyword arguments:\n\n1. If the keyword argument is a `Widget` instance with a `value` attribute, that widget is used. Any widget with a `value` attribute can be used, even custom ones.\n2. 
Otherwise, the value is treated as a *widget abbreviation* that is converted to a widget before it is used.\n\nThe following table gives an overview of different widget abbreviations:\n\n<table class=\"table table-condensed table-bordered\">\n <tr><td><strong>Keyword argument</strong></td><td><strong>Widget</strong></td></tr> \n <tr><td>`True` or `False`</td><td>Checkbox</td></tr> \n <tr><td>`'Hi there'`</td><td>Text</td></tr>\n <tr><td>`value` or `(min,max)` or `(min,max,step)` if integers are passed</td><td>IntSlider</td></tr>\n <tr><td>`value` or `(min,max)` or `(min,max,step)` if floats are passed</td><td>FloatSlider</td></tr>\n <tr><td>`['orange','apple']` or `{'one':1,'two':2}`</td><td>Dropdown</td></tr>\n</table>\nNote that a dropdown is used if a list or a dict is given (signifying discrete choices), and a slider is used if a tuple is given (signifying a range).", "_____no_output_____" ], [ "You have seen how the checkbox and textarea widgets work above. Here, more details about the different abbreviations for sliders and dropdowns are given.\n\nIf a 2-tuple of integers is passed `(min,max)`, an integer-valued slider is produced with those minimum and maximum values (inclusively). In this case, the default step size of `1` is used.", "_____no_output_____" ] ], [ [ "interact(f, x=(0,4));", "_____no_output_____" ] ], [ [ "If a 3-tuple of integers is passed `(min,max,step)`, the step size can also be set.", "_____no_output_____" ] ], [ [ "interact(f, x=(0,8,2));", "_____no_output_____" ] ], [ [ "A float-valued slider is produced if the elements of the tuples are floats. Here the minimum is `0.0`, the maximum is `10.0` and step size is `0.1` (the default).", "_____no_output_____" ] ], [ [ "interact(f, x=(0.0,10.0));", "_____no_output_____" ] ], [ [ "The step size can be changed by passing a third element in the tuple.", "_____no_output_____" ] ], [ [ "interact(f, x=(0.0,10.0,0.01));", "_____no_output_____" ] ], [ [ "For both integer and float-valued sliders, you can pick the initial value of the widget by passing a default keyword argument to the underlying Python function. Here we set the initial value of a float slider to `5.5`.", "_____no_output_____" ] ], [ [ "@interact(x=(0.0,20.0,0.5))\ndef h(x=5.5):\n return x", "_____no_output_____" ] ], [ [ "Dropdown menus are constructed by passing a list of strings. In this case, the strings are both used as the names in the dropdown menu UI and passed to the underlying Python function.", "_____no_output_____" ] ], [ [ "interact(f, x=['apples','oranges']);", "_____no_output_____" ] ], [ [ "If you want a dropdown menu that passes non-string values to the Python function, you can pass a list of (label, value) pairs.", "_____no_output_____" ] ], [ [ "interact(f, x=[('one', 10), ('two', 20)]);", "_____no_output_____" ] ], [ [ "## `interactive`", "_____no_output_____" ], [ "In addition to `interact`, IPython provides another function, `interactive`, that is useful when you want to reuse the widgets that are produced or access the data that is bound to the UI controls.\n\nNote that unlike `interact`, the return value of the function will not be displayed automatically, but you can display a value inside the function with `IPython.display.display`.", "_____no_output_____" ], [ "Here is a function that returns the sum of its two arguments and displays them. 
The display line may be omitted if you don't want to show the result of the function.", "_____no_output_____" ] ], [ [ "from IPython.display import display\ndef f(a, b):\n display(a + b)\n return a+b", "_____no_output_____" ] ], [ [ "Unlike `interact`, `interactive` returns a `Widget` instance rather than immediately displaying the widget.", "_____no_output_____" ] ], [ [ "w = interactive(f, a=10, b=20)", "_____no_output_____" ] ], [ [ "The widget is an `interactive`, a subclass of `VBox`, which is a container for other widgets.", "_____no_output_____" ] ], [ [ "type(w)", "_____no_output_____" ] ], [ [ "The children of the `interactive` are two integer-valued sliders and an output widget, produced by the widget abbreviations above.", "_____no_output_____" ] ], [ [ "w.children", "_____no_output_____" ] ], [ [ "To actually display the widgets, you can use IPython's `display` function.", "_____no_output_____" ] ], [ [ "display(w)", "_____no_output_____" ] ], [ [ "At this point, the UI controls work just like they would if `interact` had been used. You can manipulate them interactively and the function will be called. However, the widget instance returned by `interactive` also gives you access to the current keyword arguments and return value of the underlying Python function. \n\nHere are the current keyword arguments. If you rerun this cell after manipulating the sliders, the values will have changed.", "_____no_output_____" ] ], [ [ "w.kwargs", "_____no_output_____" ] ], [ [ "Here is the current return value of the function.", "_____no_output_____" ] ], [ [ "w.result", "_____no_output_____" ] ], [ [ "## Disabling continuous updates", "_____no_output_____" ], [ "When interacting with long running functions, realtime feedback is a burden instead of being helpful. See the following example:", "_____no_output_____" ] ], [ [ "def slow_function(i):\n print(int(i),list(x for x in range(int(i)) if \n str(x)==str(x)[::-1] and \n str(x**2)==str(x**2)[::-1]))\n return", "_____no_output_____" ], [ "%%time\nslow_function(1e6)", "1000000 [0, 1, 2, 3, 11, 22, 101, 111, 121, 202, 212, 1001, 1111, 2002, 10001, 10101, 10201, 11011, 11111, 11211, 20002, 20102, 100001, 101101, 110011, 111111, 200002]\nCPU times: user 578 ms, sys: 5.41 ms, total: 583 ms\nWall time: 586 ms\n" ] ], [ [ "Notice that the output is updated even while dragging the mouse on the slider. This is not useful for long running functions due to lagging:", "_____no_output_____" ] ], [ [ "from ipywidgets import FloatSlider\ninteract(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5));", "_____no_output_____" ] ], [ [ "There are two ways to mitigate this. You can either only execute on demand, or restrict execution to mouse release events.", "_____no_output_____" ], [ "### `interact_manual`", "_____no_output_____" ], [ "The `interact_manual` function provides a variant of interaction that allows you to restrict execution so it is only done on demand. A button is added to the interact controls that allows you to trigger an execute event.", "_____no_output_____" ] ], [ [ "interact_manual(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5));", "_____no_output_____" ] ], [ [ "### `continuous_update`", "_____no_output_____" ], [ "If you are using slider widgets, you can set the `continuous_update` kwarg to `False`. 
`continuous_update` is a kwarg of slider widgets that restricts executions to mouse release events.", "_____no_output_____" ] ], [ [ "interact(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5, continuous_update=False));", "_____no_output_____" ] ], [ [ "### `interactive_output`\n\n`interactive_output` provides additional flexibility: you can control how the UI elements are laid out.\n\nUnlike `interact`, `interactive`, and `interact_manual`, `interactive_output` does not generate a user interface for the widgets. This is powerful, because it means you can create a widget, put it in a box, and then pass the widget to `interactive_output`, and have control over the widget and its layout.", "_____no_output_____" ] ], [ [ "a = widgets.IntSlider()\nb = widgets.IntSlider()\nc = widgets.IntSlider()\nui = widgets.HBox([a, b, c])\ndef f(a, b, c):\n print((a, b, c))\n\nout = widgets.interactive_output(f, {'a': a, 'b': b, 'c': c})\n\ndisplay(ui, out)", "_____no_output_____" ] ], [ [ "## Arguments that are dependent on each other", "_____no_output_____" ], [ "Arguments that are dependent on each other can be expressed manually using `observe`. See the following example, where one variable is used to describe the bounds of another. For more information, please see the [widget events example notebook](./Widget Events.ipynb).", "_____no_output_____" ] ], [ [ "x_widget = FloatSlider(min=0.0, max=10.0, step=0.05)\ny_widget = FloatSlider(min=0.5, max=10.0, step=0.05, value=5.0)\n\ndef update_x_range(*args):\n x_widget.max = 2.0 * y_widget.value\ny_widget.observe(update_x_range, 'value')\n\ndef printer(x, y):\n print(x, y)\ninteract(printer,x=x_widget, y=y_widget);", "_____no_output_____" ] ], [ [ "## Flickering and jumping output\n\nOn occasion, you may notice interact output flickering and jumping, causing the notebook scroll position to change as the output is updated. The interactive control has a layout, so we can set its height to an appropriate value (currently chosen manually) so that it will not change size as it is updated.\n", "_____no_output_____" ] ], [ [ "%matplotlib inline\nfrom ipywidgets import interactive\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef f(m, b):\n plt.figure(2)\n x = np.linspace(-10, 10, num=1000)\n plt.plot(x, m * x + b)\n plt.ylim(-5, 5)\n plt.show()\n\ninteractive_plot = interactive(f, m=(-2.0, 2.0), b=(-3, 3, 0.5))\noutput = interactive_plot.children[-1]\noutput.layout.height = '350px'\ninteractive_plot", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ece138df08861c8f12adabc3072f235b9db76ade
7,074
ipynb
Jupyter Notebook
03-data-x-data-handling/m350.3-StockMarket-CSV/nb-350.3-stockmarket.ipynb
UCBerkeley-SCET/DataX-Berkeley
f912d22c838b511d3ada4ecfa3548afd80437b74
[ "Apache-2.0" ]
117
2019-09-02T06:08:46.000Z
2022-03-09T18:15:26.000Z
03-data-x-data-handling/m350.3-StockMarket-CSV/nb-350.3-stockmarket.ipynb
UCBerkeley-SCET/DataX-Berkeley
f912d22c838b511d3ada4ecfa3548afd80437b74
[ "Apache-2.0" ]
4
2020-06-24T22:20:31.000Z
2022-02-28T01:37:36.000Z
03-data-x-data-handling/m350.3-StockMarket-CSV/nb-350.3-stockmarket.ipynb
UCBerkeley-SCET/DataX-Berkeley
f912d22c838b511d3ada4ecfa3548afd80437b74
[ "Apache-2.0" ]
78
2020-06-19T09:41:01.000Z
2022-02-05T00:13:29.000Z
30.230769
473
0.541419
[ [ [ "![data-x](https://raw.githubusercontent.com/afo/data-x-plaksha/master/imgsource/dx_logo.png)\n\n---\n# Downloading Stockmarket Data\n\n**Author list:** Ikhlaq Sidhu\n\n**References / Sources:** \n\n\n**License Agreement:** Feel free to do whatever you want with this code\n\n___", "_____no_output_____" ], [ "## Finance example\n\n### Now, lets get some data from Yahoo finance. One format to get a CSV file is like this:\n\nhttp://chart.finance.yahoo.com/table.csv?s=YHOO&a=8&b=21&c=2015&d=8&e=21&f=2016&g=d&ignore=.csv\n\nNotes:\ns = stock symbol\na= from month\nb = from date\nc = from year\nd = to month\ne = to date\nf = to year\n", "_____no_output_____" ] ], [ [ "# Lets download Yahoo historic stock prices, one year back from 8/21/2016", "_____no_output_____" ], [ "df0_yhoo = pd.read_csv('http://chart.finance.yahoo.com/table.csv?s=YHOO&a=8&b=21&c=2015&d=8&e=21&f=2016&g=d&ignore=.csv')", "_____no_output_____" ], [ "# print df0_yhoo.head(10)\nprint df0_yhoo[0:6]", "_____no_output_____" ], [ "# We can do the same for Google, Apple, and Facebook\ndf0_goog = pd.read_csv('http://chart.finance.yahoo.com/table.csv?s=GOOG&a=8&b=21&c=2015&d=8&e=21&f=2016&g=d&ignore=.csv')\ndf0_aapl = pd.read_csv('http://chart.finance.yahoo.com/table.csv?s=AAPL&a=8&b=21&c=2015&d=8&e=21&f=2016&g=d&ignore=.csv')\ndf0_fb = pd.read_csv('http://chart.finance.yahoo.com/table.csv?s=FB&a=8&b=21&c=2015&d=8&e=21&f=2016&g=d&ignore=.csv')", "_____no_output_____" ], [ "# But what if we want to use it the data as a Numpy Array?\n# We can do that with this pandas method: df.as.martix()\n# this will be an n x 2 matrix (axis 0 is long, axis 1 = 2)\n# and then slice to show only 10 rows\n\ndf0_aapl[['Date','Close']].as_matrix()[0:10]", "_____no_output_____" ], [ "# And what is we only want to have a single column as a vector, a nump array with dimension 1 x n\ndf0_aapl['Close'][0:10].as_matrix()", "_____no_output_____" ], [ "# Lets get the all the last closing prices of each of the stocks into NumPy arrays:\n\na = df0_aapl['Close'].as_matrix()\ng = df0_goog['Close'].as_matrix()\nf = df0_fb['Close'].as_matrix()\n\n# Lets get the last 20 closing prices of each of the stocks into NumPy arrays:\na20 = df0_aapl['Close'][0:20].as_matrix()\ng20 = df0_goog['Close'][0:20].as_matrix()\nf20 = df0_fb['Close'][0:20].as_matrix()\n", "_____no_output_____" ], [ "print a20, g20, f20", "[ 113.550003 113.57 113.580002 114.919998 115.57 111.769997\n 107.949997 105.440002 103.129997 105.519997 108.360001 107.699997\n 107.730003 106.730003 106.099998 106. 106.82 106.940002\n 107.57 108.029999] [ 776.219971 771.409973 765.700012 768.880005 771.76001 762.48999\n 759.690002 769.02002 759.659973 775.320007 780.349976 780.080017\n 771.460022 768.780029 767.049988 769.090027 772.150024 769.539978\n 769.409973 769.640015] [ 129.940002 128.639999 128.649994 129.070007 128.350006 127.769997\n 127.209999 128.690002 127.099998 130.270004 131.050003 129.729996\n 126.510002 126.169998 126.120003 125.839996 126.540001 124.959999\n 123.889999 123.480003]\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ece146f9655e961f1e2a575c3770d9120c592bbb
26,591
ipynb
Jupyter Notebook
immersion/guided_projects/guided_project_3_nlp_starter/reusable_embeddings.ipynb
gitgithan/mlops-on-gcp
54b9a772b586fd45fdb0d1b2361a121d6716c82d
[ "Apache-2.0" ]
407
2020-03-29T22:25:45.000Z
2022-03-31T02:23:58.000Z
immersion/guided_projects/guided_project_3_nlp_starter/reusable_embeddings.ipynb
gitgithan/mlops-on-gcp
54b9a772b586fd45fdb0d1b2361a121d6716c82d
[ "Apache-2.0" ]
110
2020-04-08T00:55:40.000Z
2022-02-18T02:30:54.000Z
immersion/guided_projects/guided_project_3_nlp_starter/reusable_embeddings.ipynb
gitgithan/mlops-on-gcp
54b9a772b586fd45fdb0d1b2361a121d6716c82d
[ "Apache-2.0" ]
1,036
2020-03-31T15:16:20.000Z
2022-03-31T13:40:51.000Z
27.356996
553
0.579632
[ [ [ "# Reusable Embeddings\n\n**Learning Objectives**\n1. Learn how to use a pre-trained TF Hub text modules to generate sentence vectors\n1. Learn how to incorporate a pre-trained TF-Hub module into a Keras model\n1. Learn how to deploy and use a text model on CAIP\n\n\n## Introduction\n\n\nIn this notebook, we will implement text models to recognize the probable source (Github, Tech-Crunch, or The New-York Times) of the titles we have in the title dataset.\n\nFirst, we will load and pre-process the texts and labels so that they are suitable to be fed to sequential Keras models with first layer being TF-hub pre-trained modules. Thanks to this first layer, we won't need to tokenize and integerize the text before passing it to our models. The pre-trained layer will take care of that for us, and consume directly raw text. However, we will still have to one-hot-encode each of the 3 classes into a 3 dimensional basis vector.\n\nThen we will build, train and compare simple DNN models starting with different pre-trained TF-Hub layers.", "_____no_output_____" ] ], [ [ "import os\n\nfrom google.cloud import bigquery\nimport pandas as pd", "_____no_output_____" ], [ "%load_ext google.cloud.bigquery", "_____no_output_____" ] ], [ [ "Replace the variable values in the cell below:", "_____no_output_____" ] ], [ [ "PROJECT = # Replace with your PROJECT\nBUCKET = PROJECT \nREGION = \"us-east1\"\n\nos.environ['PROJECT'] = PROJECT\nos.environ['BUCKET'] = BUCKET\nos.environ['REGION'] = REGION", "_____no_output_____" ] ], [ [ "## Create a Dataset from BigQuery \n\nHacker news headlines are available as a BigQuery public dataset. The [dataset](https://bigquery.cloud.google.com/table/bigquery-public-data:hacker_news.stories?tab=details) contains all headlines from the sites inception in October 2006 until October 2015. \n\nHere is a sample of the dataset:", "_____no_output_____" ] ], [ [ "%%bigquery --project $PROJECT\n\nSELECT\n url, title, score\nFROM\n `bigquery-public-data.hacker_news.stories`\nWHERE\n LENGTH(title) > 10\n AND score > 10\n AND LENGTH(url) > 0\nLIMIT 10", "_____no_output_____" ] ], [ [ "Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>", "_____no_output_____" ] ], [ [ "%%bigquery --project $PROJECT\n\nSELECT\n ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,\n COUNT(title) AS num_articles\nFROM\n `bigquery-public-data.hacker_news.stories`\nWHERE\n REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')\n AND LENGTH(title) > 10\nGROUP BY\n source\nORDER BY num_articles DESC\n LIMIT 100", "_____no_output_____" ] ], [ [ "Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. 
This will be our labeled dataset for machine learning.", "_____no_output_____" ] ], [ [ "regex = '.*://(.[^/]+)/'\n\n\nsub_query = \"\"\"\nSELECT\n    title,\n    ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source\n    \nFROM\n  `bigquery-public-data.hacker_news.stories`\nWHERE\n  REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')\n  AND LENGTH(title) > 10\n\"\"\".format(regex)\n\n\nquery = \"\"\"\nSELECT \n  LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,\n  source\nFROM\n  ({sub_query})\nWHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')\n\"\"\".format(sub_query=sub_query)\n\nprint(query)", "_____no_output_____" ] ], [ [ "For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here. \n\n", "_____no_output_____" ] ], [ [ "bq = bigquery.Client(project=PROJECT)\ntitle_dataset = bq.query(query).to_dataframe()\ntitle_dataset.head()", "_____no_output_____" ] ], [ [ "AutoML for text classification requires that\n* the dataset be in CSV form with \n* the first column being the texts to classify or a GCS path to the text \n* the last column being the text labels\n\nThe dataset we pulled from BigQuery satisfies these requirements.", "_____no_output_____" ] ], [ [ "print(\"The full dataset contains {n} titles\".format(n=len(title_dataset)))", "_____no_output_____" ] ], [ [ "Let's make sure we have roughly the same number of examples for each of our three labels:", "_____no_output_____" ] ], [ [ "title_dataset.source.value_counts()", "_____no_output_____" ] ], [ [ "Finally we will save our data, which is currently in-memory, to disk.\n\nWe will create a CSV file containing the full dataset and another containing only 1000 articles for development.\n\n**Note:** It may take a long time to train AutoML on the full dataset, so we recommend using the sample dataset for the purpose of learning the tool. 
\n", "_____no_output_____" ] ], [ [ "DATADIR = './data/'\n\nif not os.path.exists(DATADIR):\n os.makedirs(DATADIR)", "_____no_output_____" ], [ "FULL_DATASET_NAME = 'titles_full.csv'\nFULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)\n\n# Let's shuffle the data before writing it to disk.\ntitle_dataset = title_dataset.sample(n=len(title_dataset))\n\ntitle_dataset.to_csv(\n FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')", "_____no_output_____" ] ], [ [ "Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see [here](https://cloud.google.com/natural-language/automl/docs/beginners-guide) for further details on how to prepare data for AutoML).", "_____no_output_____" ] ], [ [ "sample_title_dataset = title_dataset.sample(n=1000)\nsample_title_dataset.source.value_counts()", "_____no_output_____" ] ], [ [ "Let's write the sample datatset to disk.", "_____no_output_____" ] ], [ [ "SAMPLE_DATASET_NAME = 'titles_sample.csv'\nSAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)\n\nsample_title_dataset.to_csv(\n SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')", "_____no_output_____" ], [ "sample_title_dataset.head()", "_____no_output_____" ], [ "import datetime\nimport os\nimport shutil\n\nimport pandas as pd\nimport tensorflow as tf\nfrom tensorflow.keras.callbacks import TensorBoard, EarlyStopping\nfrom tensorflow_hub import KerasLayer\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.preprocessing.text import Tokenizer\nfrom tensorflow.keras.utils import to_categorical\n\n\nprint(tf.__version__)", "_____no_output_____" ], [ "%matplotlib inline", "_____no_output_____" ] ], [ [ "Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located:", "_____no_output_____" ] ], [ [ "MODEL_DIR = \"./text_models\"\nDATA_DIR = \"./data\"", "_____no_output_____" ] ], [ [ "## Loading the dataset", "_____no_output_____" ], [ "As in the previous labs, our dataset consists of titles of articles along with the label indicating from which source these articles have been taken from (GitHub, Tech-Crunch, or the New-York Times):", "_____no_output_____" ] ], [ [ "ls ./data/", "_____no_output_____" ], [ "DATASET_NAME = \"titles_full.csv\"\nTITLE_SAMPLE_PATH = os.path.join(DATA_DIR, DATASET_NAME)\nCOLUMNS = ['title', 'source']\n\ntitles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)\ntitles_df.head()", "_____no_output_____" ] ], [ [ "Let's look again at the number of examples per label to make sure we have a well-balanced dataset:", "_____no_output_____" ] ], [ [ "titles_df.source.value_counts()", "_____no_output_____" ] ], [ [ "## Preparing the labels", "_____no_output_____" ], [ "In this lab, we will use pre-trained [TF-Hub embeddings modules for english](https://tfhub.dev/s?q=tf2%20embeddings%20text%20english) for the first layer of our models. One immediate\nadvantage of doing so is that the TF-Hub embedding module will take care for us of processing the raw text. 
\nThis also means that our model will be able to consume text directly instead of sequences of integers representing the words.\n\nHowever, as before, we still need to preprocess the labels into one-hot-encoded vectors:", "_____no_output_____" ] ], [ [ "CLASSES = {\n 'github': 0,\n 'nytimes': 1,\n 'techcrunch': 2\n}\nN_CLASSES = len(CLASSES)", "_____no_output_____" ], [ "def encode_labels(sources):\n classes = [CLASSES[source] for source in sources]\n one_hots = to_categorical(classes, num_classes=N_CLASSES)\n return one_hots", "_____no_output_____" ], [ "encode_labels(titles_df.source[:4])", "_____no_output_____" ] ], [ [ "## Preparing the train/test splits", "_____no_output_____" ], [ "Let's split our data into train and test splits:", "_____no_output_____" ] ], [ [ "N_TRAIN = int(len(titles_df) * 0.95)\n\ntitles_train, sources_train = (\n titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])\n\ntitles_valid, sources_valid = (\n titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])", "_____no_output_____" ] ], [ [ "To be on the safe side, we verify that the train and test splits\nhave roughly the same number of examples per class.\n\nSince it is the case, accuracy will be a good metric to use to measure\nthe performance of our models.", "_____no_output_____" ] ], [ [ "sources_train.value_counts()", "_____no_output_____" ], [ "sources_valid.value_counts()", "_____no_output_____" ] ], [ [ "Now let's create the features and labels we will feed our models with:", "_____no_output_____" ] ], [ [ "X_train, Y_train = titles_train.values, encode_labels(sources_train)\nX_valid, Y_valid = titles_valid.values, encode_labels(sources_valid)", "_____no_output_____" ], [ "X_train[:3]", "_____no_output_____" ], [ "Y_train[:3]", "_____no_output_____" ] ], [ [ "## NNLM Model", "_____no_output_____" ], [ "We will first try a word embedding pre-trained using a [Neural Probabilistic Language Model](http://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf). TF-Hub has a 50-dimensional one called \n[nnlm-en-dim50-with-normalization](https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1), which also\nnormalizes the vectors produced. \n\nOnce loaded from its url, the TF-hub module can be used as a normal Keras layer in a sequential or functional model. Since we have enough data to fine-tune the parameters of the pre-trained embedding itself, we will set `trainable=True` in the `KerasLayer` that loads the pre-trained embedding:", "_____no_output_____" ] ], [ [ "# TODO 1\nNNLM = \"https://tfhub.dev/google/nnlm-en-dim50/2\"\n\nnnlm_module = KerasLayer(\n NNLM, output_shape=[50], input_shape=[], dtype=tf.string, trainable=True)", "_____no_output_____" ] ], [ [ "Note that this TF-Hub embedding produces a single 50-dimensional vector when passed a sentence:", "_____no_output_____" ] ], [ [ "# TODO 1\nnnlm_module(tf.constant([\"The dog is happy to see people in the street.\"]))", "_____no_output_____" ] ], [ [ "## Swivel Model", "_____no_output_____" ], [ "Then we will try a word embedding obtained using [Swivel](https://arxiv.org/abs/1602.02215), an algorithm that essentially factorizes word co-occurrence matrices to create the words embeddings. 
\nTF-Hub hosts the pretrained [gnews-swivel-20dim-with-oov](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1) 20-dimensional Swivel module.", "_____no_output_____" ] ], [ [ "# TODO 1\nSWIVEL = \"https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1\"\nswivel_module = KerasLayer(\n SWIVEL, output_shape=[20], input_shape=[], dtype=tf.string, trainable=True)", "_____no_output_____" ] ], [ [ "Similarly as the previous pre-trained embedding, it outputs a single vector when passed a sentence:", "_____no_output_____" ] ], [ [ "# TODO 1\nswivel_module(tf.constant([\"The dog is happy to see people in the street.\"]))", "_____no_output_____" ] ], [ [ "## Building the models", "_____no_output_____" ], [ "Let's write a function that \n\n* takes as input an instance of a `KerasLayer` (i.e. the `swivel_module` or the `nnlm_module` we constructed above) as well as the name of the model (say `swivel` or `nnlm`)\n* returns a compiled Keras sequential model starting with this pre-trained TF-hub layer, adding one or more dense relu layers to it, and ending with a softmax layer giving the probability of each of the classes:", "_____no_output_____" ] ], [ [ "def build_model(hub_module, name):\n model = Sequential([\n hub_module, # TODO 2\n Dense(16, activation='relu'),\n Dense(N_CLASSES, activation='softmax')\n ], name=name)\n\n model.compile(\n optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy']\n )\n return model", "_____no_output_____" ] ], [ [ "Let's also wrap the training code into a `train_and_evaluate` function that \n* takes as input the training and validation data, as well as the compiled model itself, and the `batch_size`\n* trains the compiled model for 100 epochs at most, and does early-stopping when the validation loss is no longer decreasing\n* returns an `history` object, which will help us to plot the learning curves", "_____no_output_____" ] ], [ [ "def train_and_evaluate(train_data, val_data, model, batch_size=5000):\n X_train, Y_train = train_data\n\n tf.random.set_seed(33)\n\n model_dir = os.path.join(MODEL_DIR, model.name)\n if tf.io.gfile.exists(model_dir):\n tf.io.gfile.rmtree(model_dir)\n\n history = model.fit(\n X_train, Y_train,\n epochs=100,\n batch_size=batch_size,\n validation_data=val_data,\n callbacks=[EarlyStopping(), TensorBoard(model_dir)],\n )\n return history", "_____no_output_____" ] ], [ [ "## Training NNLM", "_____no_output_____" ] ], [ [ "data = (X_train, Y_train)\nval_data = (X_valid, Y_valid)", "_____no_output_____" ], [ "nnlm_model = build_model(nnlm_module, 'nnlm')\nnnlm_history = train_and_evaluate(data, val_data, nnlm_model)", "_____no_output_____" ], [ "history = nnlm_history\npd.DataFrame(history.history)[['loss', 'val_loss']].plot()\npd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()", "_____no_output_____" ] ], [ [ "## Training Swivel", "_____no_output_____" ] ], [ [ "swivel_model = build_model(swivel_module, name='swivel')", "_____no_output_____" ], [ "swivel_history = train_and_evaluate(data, val_data, swivel_model)", "_____no_output_____" ], [ "history = swivel_history\npd.DataFrame(history.history)[['loss', 'val_loss']].plot()\npd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()", "_____no_output_____" ] ], [ [ "## Deploying the model", "_____no_output_____" ], [ "The first step is to serialize one of our trained Keras model as a SavedModel:", "_____no_output_____" ] ], [ [ "OUTPUT_DIR = \"./savedmodels\"\nshutil.rmtree(OUTPUT_DIR, ignore_errors=True)\n\nEXPORT_PATH = 
os.path.join(OUTPUT_DIR, 'swivel')\nos.environ['EXPORT_PATH'] = EXPORT_PATH\n\nshutil.rmtree(EXPORT_PATH, ignore_errors=True)\n\ntf.saved_model.save(swivel_model, EXPORT_PATH)", "_____no_output_____" ] ], [ [ "Then we can deploy the model using the gcloud CLI as before:", "_____no_output_____" ] ], [ [ "%%bash\n\n# TODO 5\n\nMODEL_NAME=title_model\nVERSION_NAME=swivel\n\nif [[ $(gcloud ai-platform models list --format='value(name)' | grep ^$MODEL_NAME$) ]]; then\n echo \"$MODEL_NAME already exists\"\nelse\n echo \"Creating $MODEL_NAME\"\n gcloud ai-platform models create --region=$REGION $MODEL_NAME\nfi\n\nif [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep ^$VERSION_NAME$) ]]; then\n echo \"Deleting already existing $MODEL_NAME:$VERSION_NAME ... \"\n echo yes | gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME\n echo \"Please run this cell again if you don't see a Creating message ... \"\n sleep 2\nfi\n\necho \"Creating $MODEL_NAME:$VERSION_NAME\"\ngcloud ai-platform versions create \\\n --model=$MODEL_NAME $VERSION_NAME \\\n --framework=tensorflow \\\n --python-version=3.7 \\\n --runtime-version=2.1 \\\n --origin=$EXPORT_PATH \\\n --staging-bucket=gs://$BUCKET \\\n --machine-type n1-standard-4 \\\n --region=$REGION", "_____no_output_____" ] ], [ [ "Before we try our deployed model, let's inspect its signature to know what to send to the deployed API:", "_____no_output_____" ] ], [ [ "!saved_model_cli show \\\n --tag_set serve \\\n --signature_def serving_default \\\n --dir {EXPORT_PATH}\n!find {EXPORT_PATH}", "_____no_output_____" ] ], [ [ "Let's go ahead and hit our model:", "_____no_output_____" ] ], [ [ "%%writefile input.json\n{\"keras_layer_1_input\": \"hello\"}", "_____no_output_____" ], [ "!gcloud ai-platform predict \\\n --model title_model \\\n --json-instances input.json \\\n --version swivel \\\n --region=$REGION", "_____no_output_____" ] ], [ [ "## Bonus", "_____no_output_____" ], [ "Try to beat the best model by modifying the model architecture, changing the TF-Hub embedding, and tweaking the training parameters.", "_____no_output_____" ], [ "Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ] ]
ece15d30e5a336c667920cb173d345149162e732
2,322
ipynb
Jupyter Notebook
module4/s3_sentence_tokenizer.ipynb
BamBalaam/tac
340f6ac3b747eb9ea5f035f0b01638eb09c99f03
[ "MIT" ]
null
null
null
module4/s3_sentence_tokenizer.ipynb
BamBalaam/tac
340f6ac3b747eb9ea5f035f0b01638eb09c99f03
[ "MIT" ]
null
null
null
module4/s3_sentence_tokenizer.ipynb
BamBalaam/tac
340f6ac3b747eb9ea5f035f0b01638eb09c99f03
[ "MIT" ]
null
null
null
22.543689
85
0.512059
[ [ [ "import os\nimport sys\nimport nltk\nfrom nltk.tokenize import sent_tokenize", "_____no_output_____" ] ], [ [ "# Fichiers d'inputs et d'outputs", "_____no_output_____" ] ], [ [ "infile = \"../data/all.txt\"\noutfile = \"../data/sents.txt\"", "_____no_output_____" ] ], [ [ "# Tokenisation du corpus ligne par ligne", "_____no_output_____" ], [ "**Important** : pour traiter tout le corpus, mettez `LIMIT = None`", "_____no_output_____" ] ], [ [ "LIMIT = 1000000", "_____no_output_____" ], [ "with open(outfile, 'w', encoding=\"utf-8\") as output:\n with open(infile, encoding=\"utf-8\", errors=\"backslashreplace\") as f:\n content = f.readlines()\n content = content[:LIMIT] if LIMIT is not None else content\n n_lines = len(content)\n for i, line in enumerate(content):\n if i % 100000 == 0:\n print(f'processing line {i}/{n_lines}')\n sentences = sent_tokenize(line)\n for sent in sentences:\n output.write(sent + \"\\n\")", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ] ]
ece1633044bcfd1341b2cf204859dae2ee555da8
72,290
ipynb
Jupyter Notebook
kdd_cup_1999.ipynb
dcpatton/Structured-Data
88b55caff96f543461092f52351ecddbebf935d3
[ "MIT" ]
null
null
null
kdd_cup_1999.ipynb
dcpatton/Structured-Data
88b55caff96f543461092f52351ecddbebf935d3
[ "MIT" ]
null
null
null
kdd_cup_1999.ipynb
dcpatton/Structured-Data
88b55caff96f543461092f52351ecddbebf935d3
[ "MIT" ]
null
null
null
39.224091
6,937
0.445179
[ [ [ "<a href=\"https://colab.research.google.com/github/dcpatton/Structured-Data/blob/main/kdd_cup_1999.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Objective\n\nBelow is an exploration of a computer network intrusion detection dataset (https://www.kdd.org/kdd-cup/view/kdd-cup-1999). I will first approach it as a multiple classification problem (identifying 23 different methods) and then approach it as a binary classification (identifying normal network use and abnormal use). There is a large class imbalance in the dataset, but this is improved in the binary case.", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nimport pandas as pd\nimport random\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")\ntf.get_logger().setLevel('ERROR')\n\nseed = 52\nrandom.seed(seed)\ntf.random.set_seed(seed)\n\ntf.__version__", "_____no_output_____" ], [ "from google.colab import files\nuploaded = files.upload()", "_____no_output_____" ], [ "!mkdir -p ~/.kaggle\n!cp kaggle.json ~/.kaggle/\n!chmod 600 /root/.kaggle/kaggle.json", "_____no_output_____" ], [ "# !kaggle datasets list -s KDD\n# !kaggle datasets download -d galaxyh/kdd-cup-1999-data -f corrected.gz\n!kaggle datasets download -d galaxyh/kdd-cup-1999-data", "Downloading kdd-cup-1999-data.zip to /content\n 92% 81.0M/87.8M [00:01<00:00, 47.4MB/s]\n100% 87.8M/87.8M [00:01<00:00, 69.9MB/s]\n" ], [ "!unzip kdd-cup-1999-data.zip", "Archive: kdd-cup-1999-data.zip\n inflating: corrected.gz \n inflating: corrected/corrected \n inflating: kddcup.data.corrected \n inflating: kddcup.data.gz \n inflating: kddcup.data/kddcup.data \n inflating: kddcup.data_10_percent.gz \n inflating: kddcup.data_10_percent/kddcup.data_10_percent \n inflating: kddcup.data_10_percent_corrected \n inflating: kddcup.names \n inflating: kddcup.newtestdata_10_percent_unlabeled.gz \n inflating: kddcup.newtestdata_10_percent_unlabeled/kddcup.newtestdata_10_percent_unlabeled \n inflating: kddcup.testdata.unlabeled.gz \n inflating: kddcup.testdata.unlabeled/kddcup.testdata.unlabeled \n inflating: kddcup.testdata.unlabeled_10_percent.gz \n inflating: kddcup.testdata.unlabeled_10_percent/kddcup.testdata.unlabeled_10_percent \n inflating: training_attack_types \n inflating: typo-correction.txt \n" ], [ "data_df = pd.read_csv('kddcup.data.corrected', header=None)\ndata_df.columns = [\n 'duration',\n 'protocol_type',\n 'service',\n 'flag',\n 'src_bytes',\n 'dst_bytes',\n 'land',\n 'wrong_fragment',\n 'urgent',\n 'hot',\n 'num_failed_logins',\n 'logged_in',\n 'num_compromised',\n 'root_shell',\n 'su_attempted',\n 'num_root',\n 'num_file_creations',\n 'num_shells',\n 'num_access_files',\n 'num_outbound_cmds',\n 'is_host_login',\n 'is_guest_login',\n 'count',\n 'srv_count',\n 'serror_rate',\n 'srv_serror_rate',\n 'rerror_rate',\n 'srv_rerror_rate',\n 'same_srv_rate',\n 'diff_srv_rate',\n 'srv_diff_host_rate',\n 'dst_host_count',\n 'dst_host_srv_count',\n 'dst_host_same_srv_rate',\n 'dst_host_diff_srv_rate',\n 'dst_host_same_src_port_rate',\n 'dst_host_srv_diff_host_rate',\n 'dst_host_serror_rate',\n 'dst_host_srv_serror_rate',\n 'dst_host_rerror_rate',\n 'dst_host_srv_rerror_rate',\n 'outcome'\n]", "_____no_output_____" ], [ "data_df.head()", "_____no_output_____" ], [ "data_df.outcome.value_counts()", "_____no_output_____" ], [ "from sklearn.preprocessing import LabelEncoder\nlabel_encoder = LabelEncoder()\ndata_df[['outcome']] = 
label_encoder.fit_transform(data_df[['outcome']])", "_____no_output_____" ], [ "data_df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 4898431 entries, 0 to 4898430\nData columns (total 42 columns):\n # Column Dtype \n--- ------ ----- \n 0 duration int64 \n 1 protocol_type object \n 2 service object \n 3 flag object \n 4 src_bytes int64 \n 5 dst_bytes int64 \n 6 land int64 \n 7 wrong_fragment int64 \n 8 urgent int64 \n 9 hot int64 \n 10 num_failed_logins int64 \n 11 logged_in int64 \n 12 num_compromised int64 \n 13 root_shell int64 \n 14 su_attempted int64 \n 15 num_root int64 \n 16 num_file_creations int64 \n 17 num_shells int64 \n 18 num_access_files int64 \n 19 num_outbound_cmds int64 \n 20 is_host_login int64 \n 21 is_guest_login int64 \n 22 count int64 \n 23 srv_count int64 \n 24 serror_rate float64\n 25 srv_serror_rate float64\n 26 rerror_rate float64\n 27 srv_rerror_rate float64\n 28 same_srv_rate float64\n 29 diff_srv_rate float64\n 30 srv_diff_host_rate float64\n 31 dst_host_count int64 \n 32 dst_host_srv_count int64 \n 33 dst_host_same_srv_rate float64\n 34 dst_host_diff_srv_rate float64\n 35 dst_host_same_src_port_rate float64\n 36 dst_host_srv_diff_host_rate float64\n 37 dst_host_serror_rate float64\n 38 dst_host_srv_serror_rate float64\n 39 dst_host_rerror_rate float64\n 40 dst_host_srv_rerror_rate float64\n 41 outcome int64 \ndtypes: float64(15), int64(24), object(3)\nmemory usage: 1.5+ GB\n" ], [ "assert 1 == len(data_df.num_outbound_cmds.unique()) # only one unique value, so drop it\ndata_df.drop('num_outbound_cmds', axis='columns', inplace=True)", "_____no_output_____" ], [ "data_df.isna().sum()", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\ntrain_df, test_df = train_test_split(data_df, test_size=0.2, random_state=seed, stratify=data_df['outcome'])\nprint(train_df.shape)\nprint(test_df.shape)", "(3918744, 41)\n(979687, 41)\n" ], [ "train_df.outcome.value_counts()", "_____no_output_____" ], [ "test_df.outcome.value_counts()", "_____no_output_____" ], [ "def df_to_dataset(dataframe, shuffle=True, batch_size=32):\n dataframe = dataframe.copy()\n labels = dataframe.pop('outcome')\n ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))\n if shuffle:\n ds = ds.shuffle(buffer_size=1024)\n ds = ds.batch(batch_size).prefetch(tf.data.experimental.AUTOTUNE)\n return ds", "_____no_output_____" ], [ "for column in data_df.columns:\n print(column + ': ' + str(data_df[column].nunique()))", "duration: 9883\nprotocol_type: 3\nservice: 70\nflag: 11\nsrc_bytes: 7195\ndst_bytes: 21493\nland: 2\nwrong_fragment: 3\nurgent: 6\nhot: 30\nnum_failed_logins: 6\nlogged_in: 2\nnum_compromised: 98\nroot_shell: 2\nsu_attempted: 3\nnum_root: 93\nnum_file_creations: 42\nnum_shells: 3\nnum_access_files: 10\nis_host_login: 2\nis_guest_login: 2\ncount: 512\nsrv_count: 512\nserror_rate: 96\nsrv_serror_rate: 87\nrerror_rate: 89\nsrv_rerror_rate: 76\nsame_srv_rate: 101\ndiff_srv_rate: 95\nsrv_diff_host_rate: 72\ndst_host_count: 256\ndst_host_srv_count: 256\ndst_host_same_srv_rate: 101\ndst_host_diff_srv_rate: 101\ndst_host_same_src_port_rate: 101\ndst_host_srv_diff_host_rate: 76\ndst_host_serror_rate: 101\ndst_host_srv_serror_rate: 100\ndst_host_rerror_rate: 101\ndst_host_srv_rerror_rate: 101\noutcome: 23\n" ], [ "# 1 protocol_type object (3 values)\n# 2 service object (70 values)\n# 3 flag object (11 values)\n# land (2 values)\n# logged_in (2)\n# root_shell (2)\n# su_attempted (2)\n# is_host_login (2)\n# is_guest_login (2)\n\nfrom tensorflow 
import feature_column\n\nfeature_columns = []\n\n# numeric cols\nfor column in ['duration','src_bytes','dst_bytes','wrong_fragment','urgent','hot',\n 'num_failed_logins','num_compromised','num_root','num_file_creations',\n 'num_shells','num_access_files','count','srv_count','serror_rate',\n 'srv_serror_rate','rerror_rate','srv_rerror_rate','same_srv_rate',\n 'diff_srv_rate','srv_diff_host_rate','dst_host_count','dst_host_srv_count',\n 'dst_host_same_srv_rate','dst_host_diff_srv_rate','dst_host_same_src_port_rate',\n 'dst_host_srv_diff_host_rate','dst_host_serror_rate','dst_host_srv_serror_rate',\n 'dst_host_rerror_rate','dst_host_srv_rerror_rate']:\n feature_columns.append(feature_column.numeric_column(column))\n\n# indicator_columns\nindicator_column_names = ['protocol_type', 'service', 'flag', 'land', 'logged_in', \n 'root_shell', 'su_attempted', 'is_host_login', 'is_guest_login']\nfor col_name in indicator_column_names:\n categorical_column = feature_column.categorical_column_with_vocabulary_list(\n col_name, data_df[col_name].unique())\n indicator_column = feature_column.indicator_column(categorical_column)\n feature_columns.append(indicator_column)\n\n# embedding columns\n# diagnosis = feature_column.categorical_column_with_vocabulary_list(\n# 'diagnosis', data_df.diagnosis.unique())\n# diagnosis_embedding = feature_column.embedding_column(diagnosis, dimension=16)\n# feature_columns.append(diagnosis_embedding)", "_____no_output_____" ], [ "feature_layer = tf.keras.layers.DenseFeatures(feature_columns)", "_____no_output_____" ], [ "batch_size = 128\ntrain_ds = df_to_dataset(train_df, batch_size=batch_size)\ntest_ds = df_to_dataset(test_df, shuffle=False, batch_size=batch_size)", "_____no_output_____" ], [ "from tensorflow.keras.layers import Dense\n\ndef create_model():\n tf.keras.backend.clear_session()\n model = tf.keras.Sequential([\n feature_layer,\n Dense(256, activation='relu'),\n Dense(128, activation='relu'),\n Dense(64, activation='relu'),\n Dense(23, activation='softmax')\n ])\n\n model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])\n return model\n\nmodel = create_model()", "_____no_output_____" ], [ "filepath = 'model.h5'\n\nmc = tf.keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', save_best_only=True, \n save_weights_only=True, mode='auto')\n\nes = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=1, mode='auto')\n\nhistory = model.fit(train_ds, epochs=200, validation_data=test_ds, callbacks=[mc, es],\n verbose=2)", "Epoch 1/200\n30616/30616 - 201s - loss: 147.9262 - acc: 0.9959 - val_loss: 44.8872 - val_acc: 0.9983\nEpoch 2/200\n30616/30616 - 208s - loss: 0.1981 - acc: 0.9983 - val_loss: 0.0115 - val_acc: 0.9984\nEpoch 3/200\n30616/30616 - 211s - loss: 0.0182 - acc: 0.9984 - val_loss: 0.0124 - val_acc: 0.9983\nEpoch 4/200\n30616/30616 - 220s - loss: 0.8055 - acc: 0.9986 - val_loss: 0.0091 - val_acc: 0.9987\nEpoch 5/200\n30616/30616 - 212s - loss: 11.8164 - acc: 0.9986 - val_loss: 0.0124 - val_acc: 0.9985\nEpoch 6/200\n30616/30616 - 205s - loss: 0.1578 - acc: 0.9986 - val_loss: 72.1837 - val_acc: 0.9987\nEpoch 7/200\n30616/30616 - 202s - loss: 0.3010 - acc: 0.9986 - val_loss: 0.0089 - val_acc: 0.9987\nEpoch 8/200\n30616/30616 - 205s - loss: 1.5822 - acc: 0.9986 - val_loss: 0.0110 - val_acc: 0.9986\nEpoch 9/200\n30616/30616 - 209s - loss: 0.0116 - acc: 0.9987 - val_loss: 0.0101 - val_acc: 0.9987\nEpoch 10/200\n30616/30616 - 216s - loss: 0.8230 - acc: 0.8770 - val_loss: 241.5107 - val_acc: 0.9699\nEpoch 
11/200\n30616/30616 - 216s - loss: 2.7318 - acc: 0.9552 - val_loss: 0.1415 - val_acc: 0.9673\nEpoch 12/200\n30616/30616 - 211s - loss: 3.9778 - acc: 0.9562 - val_loss: 0.1262 - val_acc: 0.9719\nEpoch 13/200\n30616/30616 - 215s - loss: 5.2859 - acc: 0.9626 - val_loss: 0.2216 - val_acc: 0.9333\nEpoch 14/200\n30616/30616 - 212s - loss: 0.2676 - acc: 0.9620 - val_loss: 7679.9346 - val_acc: 0.9862\nEpoch 15/200\n30616/30616 - 208s - loss: 1.3291 - acc: 0.9634 - val_loss: 35.6090 - val_acc: 0.9849\nEpoch 16/200\n30616/30616 - 205s - loss: 0.5097 - acc: 0.9721 - val_loss: 0.1692 - val_acc: 0.9560\nEpoch 17/200\n30616/30616 - 209s - loss: 0.1667 - acc: 0.9621 - val_loss: 0.1261 - val_acc: 0.9751\nEpoch 00017: early stopping\n" ], [ "model.load_weights('model.h5')\nmodel.evaluate(test_ds)", "7654/7654 [==============================] - 42s 5ms/step - loss: 0.0089 - acc: 0.9987\n" ], [ "import numpy as np\ny_preds = model.predict(test_ds, verbose=1)\nprint(y_preds.shape)\ny_preds = np.argmax(y_preds, axis=1)\nfrom sklearn.metrics import classification_report\ny_true = test_df['outcome']\n# print(classification_report(y_true, y_preds, target_names=label_encoder.classes_))\nprint(classification_report(y_true, y_preds))", "7654/7654 [==============================] - 37s 5ms/step\n(979687, 23)\n precision recall f1-score support\n\n 0 0.00 0.00 0.00 441\n 1 0.00 0.00 0.00 6\n 2 0.00 0.00 0.00 2\n 3 0.00 0.00 0.00 11\n 4 0.00 0.00 0.00 2\n 5 0.93 0.98 0.96 2496\n 6 0.00 0.00 0.00 4\n 7 0.00 0.00 0.00 2\n 8 0.00 0.00 0.00 1\n 9 1.00 1.00 1.00 214403\n 10 0.88 0.51 0.64 463\n 11 0.99 1.00 1.00 194556\n 12 0.00 0.00 0.00 1\n 13 0.00 0.00 0.00 1\n 14 0.00 0.00 0.00 53\n 15 0.99 0.91 0.95 2083\n 16 0.00 0.00 0.00 2\n 17 1.00 0.99 0.99 3178\n 18 1.00 1.00 1.00 561578\n 20 1.00 0.98 0.99 196\n 21 0.00 0.00 0.00 204\n 22 0.00 0.00 0.00 4\n\n accuracy 1.00 979687\n macro avg 0.35 0.33 0.34 979687\nweighted avg 1.00 1.00 1.00 979687\n\n" ] ], [ [ "# class weights", "_____no_output_____" ] ], [ [ "from sklearn.utils import class_weight\nclass_weights = class_weight.compute_class_weight('balanced', np.unique(train_df['outcome']), train_df['outcome'])\nclass_weights", "_____no_output_____" ], [ "class_keys = np.unique(train_df['outcome'])\nclass_keys", "_____no_output_____" ], [ "class_weight_dict = dict(zip(class_keys,class_weights))", "_____no_output_____" ], [ "model = create_model()", "_____no_output_____" ], [ "filepath = 'model.h5'\n\nmc = tf.keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', save_best_only=True, \n save_weights_only=True, mode='auto')\n\nes = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=1, mode='auto')\n\nhistory = model.fit(train_ds, epochs=200, validation_data=test_ds, callbacks=[mc, es], \n class_weight=class_weight_dict, verbose=2)", "Epoch 1/200\n30616/30616 - 215s - loss: 207184.6250 - acc: 0.9150 - val_loss: 3373.8640 - val_acc: 0.9666\nEpoch 2/200\n30616/30616 - 220s - loss: 1229869.2500 - acc: 0.8999 - val_loss: 103643.5781 - val_acc: 0.9235\nEpoch 3/200\n30616/30616 - 218s - loss: 2159102.0000 - acc: 0.9054 - val_loss: 6858.8359 - val_acc: 0.9826\nEpoch 4/200\n30616/30616 - 219s - loss: 4601909.5000 - acc: 0.9219 - val_loss: 483246.2500 - val_acc: 0.8992\nEpoch 5/200\n30616/30616 - 219s - loss: 5335862.5000 - acc: 0.9187 - val_loss: 151306.9688 - val_acc: 0.9426\nEpoch 6/200\n30616/30616 - 221s - loss: 5539400.0000 - acc: 0.9312 - val_loss: 380728.2188 - val_acc: 0.9193\nEpoch 7/200\n30616/30616 - 216s - loss: 7947741.0000 - acc: 0.9150 - 
val_loss: 861471.0625 - val_acc: 0.9228\nEpoch 8/200\n30616/30616 - 215s - loss: 7574225.0000 - acc: 0.9250 - val_loss: 163218.5938 - val_acc: 0.9555\nEpoch 9/200\n30616/30616 - 214s - loss: 15989376.0000 - acc: 0.9284 - val_loss: 860234.1250 - val_acc: 0.9107\nEpoch 10/200\n30616/30616 - 211s - loss: 11091273.0000 - acc: 0.9369 - val_loss: 558916.6875 - val_acc: 0.9315\nEpoch 11/200\n30616/30616 - 212s - loss: 12094885.0000 - acc: 0.9402 - val_loss: 457899.3438 - val_acc: 0.9445\nEpoch 00011: early stopping\n" ], [ "model.load_weights('model.h5')\nmodel.evaluate(test_ds)", "7654/7654 [==============================] - 41s 5ms/step - loss: 3373.8640 - acc: 0.9666\n" ], [ "y_preds = model.predict(test_ds, verbose=1)\ny_preds = np.argmax(y_preds, axis=1)\ny_true = test_df['outcome']\nprint(classification_report(y_true, y_preds))", "7654/7654 [==============================] - 37s 5ms/step\n precision recall f1-score support\n\n 0 0.08 1.00 0.15 441\n 1 0.00 0.00 0.00 6\n 2 0.00 0.00 0.00 2\n 3 0.00 0.00 0.00 11\n 4 0.00 0.00 0.00 2\n 5 0.91 0.94 0.93 2496\n 6 0.00 0.00 0.00 4\n 7 0.00 0.00 0.00 2\n 8 0.00 0.00 0.00 1\n 9 1.00 0.94 0.97 214403\n 10 0.12 0.46 0.18 463\n 11 0.99 0.92 0.95 194556\n 12 0.00 0.00 0.00 1\n 13 0.00 0.00 0.00 1\n 14 0.00 0.00 0.00 53\n 15 0.15 0.98 0.25 2083\n 16 0.00 0.00 0.00 2\n 17 0.24 0.18 0.21 3178\n 18 1.00 1.00 1.00 561578\n 20 0.37 0.94 0.53 196\n 21 0.01 0.72 0.03 204\n 22 0.00 0.00 0.00 4\n\n accuracy 0.97 979687\n macro avg 0.22 0.37 0.24 979687\nweighted avg 0.99 0.97 0.98 979687\n\n" ] ], [ [ "# Binary classification", "_____no_output_____" ] ], [ [ "label_encoder.classes_[11]", "_____no_output_____" ], [ "train_df.loc[(train_df.outcome != 11),'outcome'] = 1\ntrain_df.loc[(train_df.outcome == 11),'outcome'] = 0\ntest_df.loc[(test_df.outcome != 11),'outcome'] = 1\ntest_df.loc[(test_df.outcome == 11),'outcome'] = 0", "_____no_output_____" ], [ "train_df.outcome.value_counts()", "_____no_output_____" ], [ "train_ds = df_to_dataset(train_df, batch_size=batch_size)\ntest_ds = df_to_dataset(test_df, shuffle=False, batch_size=batch_size)", "_____no_output_____" ], [ "tf.keras.backend.clear_session()\nmodel = tf.keras.Sequential([\n feature_layer,\n Dense(256, activation='relu'),\n Dense(128, activation='relu'),\n Dense(64, activation='relu'),\n Dense(1, activation='sigmoid')\n])\n\nmodel.compile(optimizer='adam', loss='binary_crossentropy', metrics=[tf.keras.metrics.AUC()])", "_____no_output_____" ], [ "filepath = 'model.h5'\n\nmc = tf.keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', save_best_only=True, \n save_weights_only=True, mode='auto')\n\nes = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=1, mode='auto')\n\nhistory = model.fit(train_ds, epochs=200, validation_data=test_ds, callbacks=[mc, es], \n verbose=2)", "Epoch 1/200\n30616/30616 - 240s - loss: 70.7713 - auc: 0.9938 - val_loss: 293.6857 - val_auc: 0.9991\nEpoch 2/200\n30616/30616 - 241s - loss: 12.0578 - auc: 0.9948 - val_loss: 33.7645 - val_auc: 0.9992\nEpoch 3/200\n30616/30616 - 238s - loss: 0.8356 - auc: 0.9984 - val_loss: 10.4264 - val_auc: 0.9999\nEpoch 4/200\n30616/30616 - 237s - loss: 0.0082 - auc: 0.9998 - val_loss: 0.0063 - val_auc: 0.9994\nEpoch 5/200\n30616/30616 - 237s - loss: 0.0065 - auc: 0.9994 - val_loss: 0.0053 - val_auc: 0.9995\nEpoch 6/200\n30616/30616 - 239s - loss: 0.0067 - auc: 0.9998 - val_loss: 1.0749 - val_auc: 0.9999\nEpoch 7/200\n30616/30616 - 240s - loss: 0.0138 - auc: 0.9997 - val_loss: 0.0055 - val_auc: 0.9996\nEpoch 
8/200\n30616/30616 - 237s - loss: 0.0089 - auc: 0.9997 - val_loss: 8.4735 - val_auc: 0.9999\nEpoch 9/200\n30616/30616 - 241s - loss: 5.1598 - auc: 0.9998 - val_loss: 3.2222 - val_auc: 0.9999\nEpoch 10/200\n30616/30616 - 248s - loss: 0.3308 - auc: 0.9999 - val_loss: 0.0045 - val_auc: 0.9999\nEpoch 11/200\n30616/30616 - 252s - loss: 0.0783 - auc: 0.9998 - val_loss: 0.0053 - val_auc: 0.9998\nEpoch 12/200\n30616/30616 - 255s - loss: 0.0063 - auc: 0.9998 - val_loss: 0.0057 - val_auc: 0.9998\nEpoch 13/200\n30616/30616 - 249s - loss: 0.0093 - auc: 0.9998 - val_loss: 14.2310 - val_auc: 0.9999\nEpoch 14/200\n30616/30616 - 256s - loss: 0.2358 - auc: 0.9997 - val_loss: 0.0046 - val_auc: 0.9998\nEpoch 15/200\n30616/30616 - 260s - loss: 1.6669 - auc: 0.9997 - val_loss: 22.5807 - val_auc: 0.9999\nEpoch 16/200\n30616/30616 - 260s - loss: 5.0847 - auc: 0.9996 - val_loss: 0.0090 - val_auc: 0.9997\nEpoch 17/200\n30616/30616 - 259s - loss: 0.0061 - auc: 0.9997 - val_loss: 0.0059 - val_auc: 0.9998\nEpoch 18/200\n30616/30616 - 260s - loss: 0.4070 - auc: 0.9997 - val_loss: 0.0051 - val_auc: 0.9998\nEpoch 19/200\n30616/30616 - 258s - loss: 0.0841 - auc: 0.9998 - val_loss: 0.0059 - val_auc: 0.9997\nEpoch 20/200\n30616/30616 - 247s - loss: 0.0069 - auc: 0.9997 - val_loss: 0.0055 - val_auc: 0.9997\nEpoch 00020: early stopping\n" ], [ "model.load_weights('model.h5')\nmodel.evaluate(test_ds)", "7654/7654 [==============================] - 46s 6ms/step - loss: 0.0045 - auc: 0.9999\n" ], [ "y_preds = model.predict(test_ds, verbose=1)\ny_preds\ny_preds[y_preds > 0.5] = 1\ny_preds[y_preds <= 0.5] = 0\ny_true = test_df['outcome']\nprint(classification_report(y_true, y_preds))", "7654/7654 [==============================] - 37s 5ms/step\n precision recall f1-score support\n\n 0 1.00 1.00 1.00 194556\n 1 1.00 1.00 1.00 785131\n\n accuracy 1.00 979687\n macro avg 1.00 1.00 1.00 979687\nweighted avg 1.00 1.00 1.00 979687\n\n" ], [ "from sklearn.metrics import confusion_matrix\nprint(confusion_matrix(y_true, y_preds))", "[[194527 29]\n [ 867 784264]]\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ece166a4a79879bcea17820592e44b7bf58d68c1
301,214
ipynb
Jupyter Notebook
InsuranceDataset_Analysis.ipynb
DivyaKrishnani/Data-Preprocessing
e577aa1adef33ffc9f3c357a1e634ccf5c70f7c4
[ "MIT" ]
6
2019-09-13T11:38:14.000Z
2022-03-29T03:29:36.000Z
InsuranceDataset_Analysis.ipynb
DivyaKrishnani/Data-Preprocessing
e577aa1adef33ffc9f3c357a1e634ccf5c70f7c4
[ "MIT" ]
null
null
null
InsuranceDataset_Analysis.ipynb
DivyaKrishnani/Data-Preprocessing
e577aa1adef33ffc9f3c357a1e634ccf5c70f7c4
[ "MIT" ]
10
2019-09-20T10:46:44.000Z
2022-03-29T03:29:35.000Z
56.747174
9,176
0.613982
[ [ [ "# Central Tendency and Dispersion measures over Insurance Dataset", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\n", "_____no_output_____" ], [ "data = pd.read_csv(\"Datasets/insurance.csv\")\ndata", "_____no_output_____" ], [ "data.shape[0]", "_____no_output_____" ] ], [ [ "### Dividing First 30% and the rest 70% data of the dataset", "_____no_output_____" ] ], [ [ "first = int(0.3*data.shape[0])+1\nfirst", "_____no_output_____" ], [ "sec = data.shape[0]-first\nsec", "_____no_output_____" ], [ "df1 = data.head(n=first)\ndf1", "_____no_output_____" ], [ "df2 = data.tail(n=sec)\ndf2", "_____no_output_____" ] ], [ [ "### Randomly dividing 30% and 70% of the data ", "_____no_output_____" ] ], [ [ "#Randomly diving 30% and 70% of the data\nfrom sklearn.model_selection import train_test_split\n\ndf1_r, df2_r = train_test_split(data, test_size=0.3)", "_____no_output_____" ], [ "df1_r", "_____no_output_____" ], [ "df2_r", "_____no_output_____" ] ], [ [ "### Creating Sub Samples with Replacement", "_____no_output_____" ] ], [ [ "sub_samples_r= data.sample(n=10,replace=True)\nsub_samples_r", "_____no_output_____" ] ], [ [ "### Creating Sub Samples without Replacement", "_____no_output_____" ] ], [ [ "sub_samples_wr= data.sample(n=10,replace=False)\nsub_samples_wr", "_____no_output_____" ] ], [ [ "### Creating 10 Sub Samples of the above Sub Sample", "_____no_output_____" ] ], [ [ "samplelist = []\nfor i in range(10):\n samplelist.append(sub_samples_wr.sample(n=10,replace=True))\n print(samplelist[i])", " age sex bmi children smoker region expenses\n221 53 female 33.3 0 no northeast 10564.88\n176 38 male 27.8 2 no northwest 6455.86\n1318 35 male 39.7 4 no northeast 19496.72\n303 28 female 33.0 2 no southeast 4349.46\n569 48 male 40.6 2 yes northwest 45702.02\n569 48 male 40.6 2 yes northwest 45702.02\n221 53 female 33.3 0 no northeast 10564.88\n569 48 male 40.6 2 yes northwest 45702.02\n346 33 male 35.8 2 no southeast 4890.00\n633 40 male 22.7 2 no northeast 7173.36\n age sex bmi children smoker region expenses\n221 53 female 33.3 0 no northeast 10564.88\n569 48 male 40.6 2 yes northwest 45702.02\n1220 30 female 21.9 1 no northeast 4718.20\n1318 35 male 39.7 4 no northeast 19496.72\n317 54 male 32.8 0 no northeast 10435.07\n1165 35 female 26.1 0 no northeast 5227.99\n303 28 female 33.0 2 no southeast 4349.46\n1220 30 female 21.9 1 no northeast 4718.20\n303 28 female 33.0 2 no southeast 4349.46\n346 33 male 35.8 2 no southeast 4890.00\n age sex bmi children smoker region expenses\n633 40 male 22.7 2 no northeast 7173.36\n633 40 male 22.7 2 no northeast 7173.36\n1220 30 female 21.9 1 no northeast 4718.20\n633 40 male 22.7 2 no northeast 7173.36\n1220 30 female 21.9 1 no northeast 4718.20\n1220 30 female 21.9 1 no northeast 4718.20\n1220 30 female 21.9 1 no northeast 4718.20\n221 53 female 33.3 0 no northeast 10564.88\n176 38 male 27.8 2 no northwest 6455.86\n303 28 female 33.0 2 no southeast 4349.46\n age sex bmi children smoker region expenses\n176 38 male 27.8 2 no northwest 6455.86\n1220 30 female 21.9 1 no northeast 4718.20\n633 40 male 22.7 2 no northeast 7173.36\n346 33 male 35.8 2 no southeast 4890.00\n1220 30 female 21.9 1 no northeast 4718.20\n569 48 male 40.6 2 yes northwest 45702.02\n317 54 male 32.8 0 no northeast 10435.07\n317 54 male 32.8 0 no northeast 10435.07\n346 33 male 35.8 2 no southeast 4890.00\n303 28 female 33.0 2 no southeast 4349.46\n age sex bmi children smoker region expenses\n1318 35 male 39.7 4 no northeast 19496.72\n1220 30 female 21.9 1 
no northeast 4718.20\n569 48 male 40.6 2 yes northwest 45702.02\n303 28 female 33.0 2 no southeast 4349.46\n1220 30 female 21.9 1 no northeast 4718.20\n221 53 female 33.3 0 no northeast 10564.88\n569 48 male 40.6 2 yes northwest 45702.02\n221 53 female 33.3 0 no northeast 10564.88\n1318 35 male 39.7 4 no northeast 19496.72\n303 28 female 33.0 2 no southeast 4349.46\n age sex bmi children smoker region expenses\n176 38 male 27.8 2 no northwest 6455.86\n1165 35 female 26.1 0 no northeast 5227.99\n317 54 male 32.8 0 no northeast 10435.07\n303 28 female 33.0 2 no southeast 4349.46\n1220 30 female 21.9 1 no northeast 4718.20\n1165 35 female 26.1 0 no northeast 5227.99\n221 53 female 33.3 0 no northeast 10564.88\n633 40 male 22.7 2 no northeast 7173.36\n1220 30 female 21.9 1 no northeast 4718.20\n176 38 male 27.8 2 no northwest 6455.86\n age sex bmi children smoker region expenses\n1318 35 male 39.7 4 no northeast 19496.72\n221 53 female 33.3 0 no northeast 10564.88\n346 33 male 35.8 2 no southeast 4890.00\n1318 35 male 39.7 4 no northeast 19496.72\n633 40 male 22.7 2 no northeast 7173.36\n633 40 male 22.7 2 no northeast 7173.36\n633 40 male 22.7 2 no northeast 7173.36\n1165 35 female 26.1 0 no northeast 5227.99\n1220 30 female 21.9 1 no northeast 4718.20\n1318 35 male 39.7 4 no northeast 19496.72\n age sex bmi children smoker region expenses\n1318 35 male 39.7 4 no northeast 19496.72\n176 38 male 27.8 2 no northwest 6455.86\n221 53 female 33.3 0 no northeast 10564.88\n221 53 female 33.3 0 no northeast 10564.88\n1220 30 female 21.9 1 no northeast 4718.20\n221 53 female 33.3 0 no northeast 10564.88\n346 33 male 35.8 2 no southeast 4890.00\n1220 30 female 21.9 1 no northeast 4718.20\n176 38 male 27.8 2 no northwest 6455.86\n303 28 female 33.0 2 no southeast 4349.46\n age sex bmi children smoker region expenses\n1165 35 female 26.1 0 no northeast 5227.99\n1220 30 female 21.9 1 no northeast 4718.20\n633 40 male 22.7 2 no northeast 7173.36\n176 38 male 27.8 2 no northwest 6455.86\n346 33 male 35.8 2 no southeast 4890.00\n633 40 male 22.7 2 no northeast 7173.36\n1220 30 female 21.9 1 no northeast 4718.20\n303 28 female 33.0 2 no southeast 4349.46\n176 38 male 27.8 2 no northwest 6455.86\n633 40 male 22.7 2 no northeast 7173.36\n age sex bmi children smoker region expenses\n221 53 female 33.3 0 no northeast 10564.88\n1318 35 male 39.7 4 no northeast 19496.72\n176 38 male 27.8 2 no northwest 6455.86\n1165 35 female 26.1 0 no northeast 5227.99\n569 48 male 40.6 2 yes northwest 45702.02\n221 53 female 33.3 0 no northeast 10564.88\n569 48 male 40.6 2 yes northwest 45702.02\n317 54 male 32.8 0 no northeast 10435.07\n1165 35 female 26.1 0 no northeast 5227.99\n569 48 male 40.6 2 yes northwest 45702.02\n" ] ], [ [ "### Calculating measures of central tendency\n\nMean and Median for all numerical features and mode for all categorical features", "_____no_output_____" ] ], [ [ "import statistics\nnumeric_features = ['age','bmi','children','expenses']\ncategorical_features = ['sex','smoker','region']", "_____no_output_____" ], [ "mean10 = []\nfor i in range(10):\n mean10.append([samplelist[i][numeric_features[0]].mean(),samplelist[i][numeric_features[1]].mean(),samplelist[i][numeric_features[2]].mean(),samplelist[i][numeric_features[3]].mean()])\n \nprint(mean10)\nprint()\nprint(statistics.mean(data['age']))\nprint(statistics.mean(data['bmi']))\nprint(statistics.mean(data['children']))\nprint(statistics.mean(data['expenses']))", "[[42.4, 34.74, 1.8, 20060.121999999996], [37.4, 31.810000000000002, 1.4, 11445.2], 
[35.9, 24.98, 1.4, 6176.307999999999], [38.8, 30.51, 1.4, 10376.724], [38.8, 33.699999999999996, 1.8, 16966.256], [38.1, 27.339999999999996, 1.0, 6532.687], [37.6, 30.429999999999996, 2.1, 10541.131], [39.1, 30.78, 1.4, 8277.893999999998], [35.2, 26.240000000000002, 1.6, 5833.565], [44.7, 34.09, 1.2, 20507.945]]\n\n39.20702541106129\n30.665470852017936\n1.0949177877429\n13270.422414050821\n" ], [ "median10 = []\nfor i in range(10):\n median10.append([samplelist[i][numeric_features[0]].median(),samplelist[i][numeric_features[1]].median(),samplelist[i][numeric_features[2]].median(),samplelist[i][numeric_features[3]].median()])\n \nprint(median10)", "[[44.0, 34.55, 2.0, 10564.88], [34.0, 33.0, 1.5, 5058.995], [34.0, 22.7, 1.5, 5587.03], [35.5, 32.8, 2.0, 5672.93], [35.0, 33.3, 2.0, 10564.88], [36.5, 26.950000000000003, 1.0, 5841.924999999999], [35.0, 29.7, 2.0, 7173.36], [36.5, 33.15, 1.5, 6455.86], [36.5, 24.4, 2.0, 5841.924999999999], [48.0, 33.3, 1.0, 10564.88]]\n" ], [ "mode10 = []\nfor i in range(10):\n mode10.append([samplelist[i][categorical_features[0]].mode(),samplelist[i][categorical_features[1]].mode(),samplelist[i][categorical_features[2]].mode()])\n \nprint(mode10)", "[[0 male\ndtype: object, 0 no\ndtype: object, 0 northeast\n1 northwest\ndtype: object], [0 female\ndtype: object, 0 no\ndtype: object, 0 northeast\ndtype: object], [0 female\ndtype: object, 0 no\ndtype: object, 0 northeast\ndtype: object], [0 male\ndtype: object, 0 no\ndtype: object, 0 northeast\ndtype: object], [0 female\ndtype: object, 0 no\ndtype: object, 0 northeast\ndtype: object], [0 female\ndtype: object, 0 no\ndtype: object, 0 northeast\ndtype: object], [0 male\ndtype: object, 0 no\ndtype: object, 0 northeast\ndtype: object], [0 female\ndtype: object, 0 no\ndtype: object, 0 northeast\ndtype: object], [0 male\ndtype: object, 0 no\ndtype: object, 0 northeast\ndtype: object], [0 male\ndtype: object, 0 no\ndtype: object, 0 northeast\ndtype: object]]\n" ] ], [ [ "### Calculating Dispersion\n\nVariance, Standard Deviation, Range and IQR(Interquartile range)", "_____no_output_____" ] ], [ [ "statistics.variance(samplelist[i][numeric_features[0]])", "_____no_output_____" ], [ "var10 = []\nfor i in range(10):\n var10.append([statistics.variance(samplelist[i][numeric_features[0]]),statistics.variance(samplelist[i][numeric_features[1]]),statistics.variance(samplelist[i][numeric_features[2]]),statistics.variance(samplelist[i][numeric_features[3]])])\n \nprint(var10)", "[[77.15555555555555, 36.56044444444446, 1.288888888888889, 331251060.4000399], [105.37777777777778, 43.387666666666675, 1.6, 168108941.58459997], [60.98888888888889, 21.67066666666667, 0.4888888888888889, 3795748.9961955547], [98.62222222222222, 43.45211111111111, 0.7111111111111111, 159343560.10769328], [109.95555555555556, 49.666666666666686, 1.9555555555555557, 262738536.11047107], [81.21111111111111, 20.175999999999995, 0.8888888888888888, 5185012.771134444], [40.044444444444444, 62.84900000000002, 2.3222222222222224, 40967245.094098896], [102.76666666666667, 33.80177777777778, 1.6, 22271761.077782225], [21.733333333333334, 24.138222222222222, 0.48888888888888893, 1345259.5408277775], [64.9, 36.5298888888889, 1.9555555555555555, 318761539.7469833]]\n" ], [ "stddev10 = []\nfor i in range(10):\n stddev10.append([statistics.stdev(samplelist[i][numeric_features[0]]),statistics.stdev(samplelist[i][numeric_features[1]]),statistics.stdev(samplelist[i][numeric_features[2]]),statistics.stdev(samplelist[i][numeric_features[3]])])\n 
\nprint(stddev10)\n\nprint()\nprint(statistics.stdev(data['age']))\nprint(statistics.stdev(data['bmi']))\nprint(statistics.stdev(data['children']))\nprint(statistics.stdev(data['expenses']))", "[[8.783823515733655, 6.046523335309677, 1.1352924243950935, 18200.303854607482], [10.265367883216742, 6.5869315061465965, 1.2649110640673518, 12965.683228607737], [7.809538327512637, 4.655176330351694, 0.699205898780101, 1948.2682043793545], [9.930872178324632, 6.59182153210409, 0.8432740427115678, 12623.135906251397], [10.485969461883606, 7.047458170621993, 1.3984117975602022, 16209.211458626576], [9.011720763045819, 4.491770252361533, 0.9428090415820634, 2277.0623116494735], [6.328067986711619, 7.927736120734596, 1.5238839267549948, 6400.5659979488455], [10.13738953906116, 5.813929633025995, 1.2649110640673518, 4719.296671939816], [4.661902329879224, 4.913066478506292, 0.6992058987801011, 1159.8532410731011], [8.056053624449133, 6.0439961026533515, 1.398411797560202, 17853.894245989675]]\n\n14.049960379216156\n6.098382190003363\n1.2054927397819137\n12110.011239706468\n" ], [ "max_value = max(data['age'])\nmax_value", "_____no_output_____" ], [ "min_value = min(data['age'])\nmin_value", "_____no_output_____" ], [ "range_of_age = max_value - min_value\nrange_of_age", "_____no_output_____" ], [ "rangedata = []\nfor i in range(1):\n max_value = max(data[numeric_features[0]])\n min_value = min(data[numeric_features[0]])\n rangei = max_value - min_value\n max_value = max(data[numeric_features[1]])\n min_value = min(data[numeric_features[1]])\n rangei2 = max_value - min_value\n max_value = max(data[numeric_features[2]])\n min_value = min(data[numeric_features[2]])\n rangei3 = max_value - min_value\n max_value = max(data[numeric_features[3]])\n min_value = min(data[numeric_features[3]])\n rangei4 = max_value - min_value\n rangedata.append([rangei,rangei2,rangei3,rangei4])\n\nprint(numeric_features)\nprint(rangedata)", "['age', 'bmi', 'children', 'expenses']\n[[46, 37.1, 5, 62648.56]]\n" ], [ "range10 = []\nprint(numeric_features)\n\nfor i in range(10):\n max_value = max(samplelist[i][numeric_features[0]])\n min_value = min(samplelist[i][numeric_features[0]])\n rangei = max_value - min_value\n max_value = max(samplelist[i][numeric_features[1]])\n min_value = min(samplelist[i][numeric_features[1]])\n rangei2 = max_value - min_value\n max_value = max(samplelist[i][numeric_features[2]])\n min_value = min(samplelist[i][numeric_features[2]])\n rangei3 = max_value - min_value\n max_value = max(samplelist[i][numeric_features[3]])\n min_value = min(samplelist[i][numeric_features[3]])\n rangei4 = max_value - min_value\n range10.append([rangei,rangei2,rangei3,rangei4])\n print(range10[i])\n\n", "['age', 'bmi', 'children', 'expenses']\n[25, 17.900000000000002, 4, 41352.56]\n[26, 18.700000000000003, 4, 41352.56]\n[25, 11.399999999999999, 2, 6215.419999999999]\n[26, 18.700000000000003, 2, 41352.56]\n[25, 18.700000000000003, 4, 41352.56]\n[26, 11.399999999999999, 2, 6215.419999999999]\n[23, 17.800000000000004, 4, 14778.52]\n[25, 17.800000000000004, 4, 15147.260000000002]\n[12, 13.899999999999999, 2, 2823.8999999999996]\n[19, 14.5, 4, 40474.03]\n" ], [ "from scipy.stats import iqr\niqr_age = iqr(data['age'])\niqr_age", "_____no_output_____" ], [ "iqr_age = iqr(samplelist[0]['age'])\niqr_age", "_____no_output_____" ], [ "samplelist[0]", "_____no_output_____" ], [ "iqr10 = []\nprint(numeric_features)\n\nfor i in range(10):\n 
iqr10.append([iqr(samplelist[i][numeric_features[0]]),iqr(samplelist[i][numeric_features[1]]),iqr(samplelist[i][numeric_features[2]]),iqr(samplelist[i][numeric_features[3]])])\n print(iqr10[i])\n", "['age', 'bmi', 'children', 'expenses']\n[12.25, 7.299999999999997, 0.0, 32515.46]\n[14.75, 7.399999999999995, 1.75, 5814.2275]\n[10.0, 4.6250000000000036, 1.0, 2455.16]\n[15.25, 11.124999999999996, 1.0, 4858.4925]\n[18.0, 6.700000000000003, 1.0, 14778.52]\n[8.25, 8.0, 2.0, 2148.3375000000005]\n[5.0, 16.025000000000002, 2.25, 11549.427500000002]\n[18.5, 5.4999999999999964, 1.75, 5803.73]\n[8.75, 5.100000000000001, 0.75, 2232.835]\n[16.0, 11.325, 2.0, 31700.0325]\n" ], [ "iqr([1,2,3,4,5,6,7])\n", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\ndata1 = [1,2,3,4,5,6,7]\nplt.boxplot(data1)\nplt.show()", "_____no_output_____" ], [ "print(numeric_features)\n", "['age', 'bmi', 'children', 'expenses']\n" ], [ "iqr = []\nprint(numeric_features)\n\nfor i in numeric_features:\n iqr1 = data[i].quantile()\n print(iqr1)", "['age', 'bmi', 'children', 'expenses']\n39.0\n30.4\n1.0\n9382.029999999999\n" ] ], [ [ "### Box Plots for all numeric features of the dataset", "_____no_output_____" ] ], [ [ "for i in numeric_features:\n plt.title(\"Boxplot of \"+ i)\n plt.boxplot(data[i])\n plt.show()", "_____no_output_____" ], [ "#70,30 randomly\n#70\nfor i in numeric_features:\n plt.title(\"Boxplot of \"+ i)\n plt.boxplot(df1_r[i])\n plt.show()", "_____no_output_____" ], [ "#30\nfor i in numeric_features:\n plt.title(\"Boxplot of \"+ i)\n plt.boxplot(df2_r[i])\n plt.show()", "_____no_output_____" ], [ "#30 Sequence\nfor i in numeric_features:\n plt.title(\"Boxplot of \"+ i)\n plt.boxplot(df1[i])\n plt.show()", "_____no_output_____" ], [ "#70 Sequence\nfor i in numeric_features:\n plt.title(\"Boxplot of \"+ i)\n plt.boxplot(df2[i])\n plt.show()", "_____no_output_____" ], [ "data['age'].quantile()", "_____no_output_____" ], [ "import pandas as pd\n\nmylist=[1,2,3,4,5,6,7]\nmylist = pd.DataFrame(mylist)\nmylist.quantile()", "_____no_output_____" ], [ "iqr = []\nprint(numeric_features)\n\nfor i in numeric_features:\n iqr1 = data[i].quantile()\n print(iqr1)", "['age', 'bmi', 'children', 'expenses']\n39.0\n30.4\n1.0\n9382.029999999999\n" ], [ "iqr10 = []\nprint(numeric_features)\n\nfor i in range(10):\n #iqr10.append([iqr(samplelist[i][numeric_features[0]]),iqr(samplelist[i][numeric_features[1]]),iqr(samplelist[i][numeric_features[2]]),iqr(samplelist[i][numeric_features[3]])])\n iqr10.append([samplelist[i]['age'].quantile(),samplelist[i]['bmi'].quantile(),samplelist[i]['children'].quantile(),samplelist[i]['expenses'].quantile()])\n print(iqr10[i])\n", "['age', 'bmi', 'children', 'expenses']\n[44.0, 34.55, 2.0, 10564.88]\n[34.0, 33.0, 1.5, 5058.995]\n[34.0, 22.7, 1.5, 5587.03]\n[35.5, 32.8, 2.0, 5672.93]\n[35.0, 33.3, 2.0, 10564.88]\n[36.5, 26.950000000000003, 1.0, 5841.924999999999]\n[35.0, 29.7, 2.0, 7173.36]\n[36.5, 33.15, 1.5, 6455.86]\n[36.5, 24.4, 2.0, 5841.924999999999]\n[48.0, 33.3, 1.0, 10564.88]\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ece17661128cd639c723cb2026a1189936ac1179
240,495
ipynb
Jupyter Notebook
Embeddings_to_process_features/image-search-GeneratedAmongOriginal-datase=celebA-DCGAN.ipynb
previtus/facenet
9597e7955014f05f655c7617c1edbf41eaf1a3e3
[ "MIT" ]
null
null
null
Embeddings_to_process_features/image-search-GeneratedAmongOriginal-datase=celebA-DCGAN.ipynb
previtus/facenet
9597e7955014f05f655c7617c1edbf41eaf1a3e3
[ "MIT" ]
null
null
null
Embeddings_to_process_features/image-search-GeneratedAmongOriginal-datase=celebA-DCGAN.ipynb
previtus/facenet
9597e7955014f05f655c7617c1edbf41eaf1a3e3
[ "MIT" ]
null
null
null
596.761787
173,240
0.945521
[ [ [ "## Reverse image search and retrieval\n\nThis notebook will show you how you can use a convolutional neural network (convnet) to search through a large collection of images. Specifically, it will show you how you can retrieve a set of images which are similar to a query image, returning you its `n` nearest neighbors in terms of image content.\n\n### Installation and dependencies\n\nThe code has a number of dependencies, which can usually be installed with `pip`. You will need:\n\n * [scikit-learn](scikit-learn.org)\n * [keras](https://keras.io)\n * [Pillow](https://python-pillow.org/)\n * [matplotlib](http://matplotlib.org)\n * [cPickle](https://docs.python.org/2/library/pickle.html)\n\n### Prepare dataset\n\nFinally, prepare a folder of images to do the analysis on. If you don't have one, you may download and extract the [Caltech-101 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech101/) containing roughly 9000 images in 101 categories. To download that dataset, run the following commands inside a folder of your choosing (this notebook will assume you do so in the `data` folder of `ml4a-guides`).\n\n wget http://www.vision.caltech.edu/Image_Datasets/Caltech101/101_ObjectCategories.tar.gz\n tar -xvzf 101_ObjectCategories.tar.gz\n \nOr you can run the `download.sh` script in the `data` folder which will automatically download this dataset for you, along with all the other materials for these notebooks.\n\nNow we can begin. Run the following import commands and make sure all the libraries are correctly installed and import without errors.", "_____no_output_____" ] ], [ [ "#!pip install tqdm", "_____no_output_____" ], [ "%matplotlib inline\nimport os\nimport random\n#import cPickle as pickle\nimport pickle\nimport numpy as np\nimport matplotlib.pyplot\nfrom matplotlib.pyplot import imshow\n#import keras\n#from keras.preprocessing import image\n#from keras.applications.imagenet_utils import decode_predictions, preprocess_input\n#from keras.models import Model\nfrom sklearn.decomposition import PCA\nfrom scipy.spatial import distance\nfrom tqdm import tqdm\n\n#from IPython.display import display \n#from PIL import Image\nimport PIL.Image\nfrom IPython.display import clear_output, Image, display, HTML", "_____no_output_____" ], [ "# get_image will return a handle to the image itself, and a numpy array of its pixels to input the network\ndef get_image(path):\n img = image.load_img(path, target_size=model.input_shape[1:3])\n x = image.img_to_array(img)\n x = np.expand_dims(x, axis=0)\n x = preprocess_input(x)\n return img, x", "_____no_output_____" ] ], [ [ "We load an image into memory, convert it into an input vector, and see the model's top 5 predictions for it.", "_____no_output_____" ], [ "Once that is done, we will take our `n`x4096 matrix of features (where `n` is the number of images), and apply [principal component analysis](https://en.wikipedia.org/wiki/Principal_component_analysis) to it, and keep the first 300 principal components, creating an `n`x300 matrix called `pca_features`. \n\nThe purpose of principal component analysis is to reduce the dimensionality of our feature vectors. This reduces the amount of redundancy in our features (from duplicate or highly-correlated features), speeds up computation over them, and reduces the amount of memory they take up. \n\n\n\nLet's do a query. What we'll do is define a function which returns the num_results closest images to a query image, with respect to those images' contents. 
What it does is: for the given query image, it will take its PCA-activations, and compute the euclidean distance between it and every other set of PCA-activations, then return the best ones.\nWe also define a helper function get_concatenated_images which creates a thumbnail of a set of images, so we can display the results.", "_____no_output_____" ] ], [ [ "#path = '/home/kangeunsu/facenet/embeddings_from_genCeleb_2ksample_noflips.p'\n#directory = \"/home/kangeunsu/CelebAHQ_generated_images/celeba_gen_2k_sample/\"\n\npath = '/home/kangeunsu/facenet/embeddings_from_DCGAN_generated_300kImgs.p'\ndirectory = \"/home/kangeunsu/other_gans/DCGAN-tensorflow/samples_300k_generated/\" \n\nto_load = 5000\n\nfull_features = pickle.load(open(path, 'rb'), encoding='latin1')\nfull_features = full_features[0]\nfull_features_n = np.asarray(full_features)\nprint(\"full_features_n.shape\",full_features_n.shape)\n\n#from sklearn.decomposition import PCA\n#pca = PCA(n_components=300)\n#pca.fit(full_features_n)\n#pca_features_mine = pca.transform(full_features_n)\n#pca_features_mine_n = np.asarray(pca_features_mine)\n#print(\"pca_features_mine_n.shape\",pca_features_mine_n.shape)\n\npca_features_mine = full_features\n\nimport fnmatch\n# files like img00009982.png\nfiles = sorted(os.listdir(directory))\nframe_files = fnmatch.filter(files, '*.png')\nfull_paths = [directory+file for file in frame_files]\nimages_mine = full_paths[0:to_load] # or full?\nimages_mine_n = np.asarray(images_mine)\nprint(\"images_mine_n.shape\",images_mine_n.shape)\n\nimages_GEN = images_mine\npca_features_GEN = pca_features_mine[0:to_load]", "full_features_n.shape (300000, 512)\nimages_mine_n.shape (5000,)\n" ], [ "path = '/home/kangeunsu/facenet/embeddings_from_celebA_dataset_features.p'\ndirectory = \"/home/kangeunsu/other_gans/DCGAN-tensorflow/data/celebA/\"\n\n# these are original files from celebA (except that they have been processed by the DCGAN preprocess fcn)\n# /home/kangeunsu/other_gans/DCGAN-tensorflow/samples/initsamples/ <- all\n# just 10k\npath = '/home/kangeunsu/facenet/embeddings_celebA_CroppedByDCGAN_first10k.p'\ndirectory = \"/home/kangeunsu/other_gans/DCGAN-tensorflow/first_10k_editedCelebA/\"\n\n\nto_load = 10000\n\nfull_features = pickle.load(open(path, 'rb'), encoding='latin1')\nfull_features = full_features[0]\nfull_features_n = np.asarray(full_features)\nprint(\"full_features_n.shape\",full_features_n.shape)\n\n#from sklearn.decomposition import PCA\n#pca = PCA(n_components=300)\n#pca.fit(full_features_n)\n#pca_features_mine = pca.transform(full_features_n)\n#pca_features_mine_n = np.asarray(pca_features_mine)\n#print(\"pca_features_mine_n.shape\",pca_features_mine_n.shape)\npca_features_mine = full_features\n\nimport fnmatch\n# files like img00009982.png\nfiles = sorted(os.listdir(directory))\nframe_files = fnmatch.filter(files, '*.png')\nfull_paths = [directory+file for file in frame_files]\nimages_mine = full_paths[0:to_load] # or full?\nimages_mine_n = np.asarray(images_mine)\nprint(\"images_mine_n.shape\",images_mine_n.shape)\n\nimages_ORIG = images_mine\npca_features_ORIG = pca_features_mine[0:to_load]", "full_features_n.shape (10000, 512)\nimages_mine_n.shape (10000,)\n" ] ], [ [ "We are now ready to do our reverse image queries! The matrix `pca_features` contains a compact representation of our images, one row for each image with high-level feature detections (in this notebook the PCA step above is left commented out, so each row is actually the raw 512-dimensional facenet embedding, as the printed shapes confirm, rather than a 300-dimensional PCA projection; the search works the same way either way). 
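As a quick sanity check that the image paths and feature rows stay aligned, a couple of asserts over the variables defined above will do, for instance:\n\n assert len(images_GEN) == len(pca_features_GEN)\n assert len(images_ORIG) == len(pca_features_ORIG)\n\n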
We should expect that two similar images, which have similar content in them, should have similar arrays in `pca_features`.\n\nThus we can define a new function `get_closest_images`, which will compute the euclidean distance between the PCA features of `query_image_idx`-th image in our dataset, and the PCA features of every image in the dataset (including itself, trivially 0). It then returns an array of indices to the `num_results` (default is 5) most similar images to it (not including itself). \n\nWe also define a helper function `get_concatenated_images` which will simply take those resulting images and concatenate them into a single image for easy display.", "_____no_output_____" ] ], [ [ "from PIL import Image as pil_image\ndef load_img(path, grayscale=False, target_size=None,\n interpolation='nearest'):\n if pil_image is None:\n raise ImportError('Could not import PIL.Image. '\n 'The use of `array_to_img` requires PIL.')\n img = pil_image.open(path)\n if grayscale:\n if img.mode != 'L':\n img = img.convert('L')\n else:\n if img.mode != 'RGB':\n img = img.convert('RGB')\n if target_size is not None:\n width_height_tuple = (target_size[1], target_size[0])\n if img.size != width_height_tuple:\n if interpolation not in _PIL_INTERPOLATION_METHODS:\n raise ValueError(\n 'Invalid interpolation method {} specified. Supported '\n 'methods are {}'.format(\n interpolation,\n \", \".join(_PIL_INTERPOLATION_METHODS.keys())))\n resample = _PIL_INTERPOLATION_METHODS[interpolation]\n img = img.resize(width_height_tuple, resample)\n return img", "_____no_output_____" ], [ "def get_closest_images(query_image_idx, num_results, pca_features_Query, pca_features_Dataset):\n distances = [ distance.euclidean(pca_features_Query[query_image_idx], feat) for feat in pca_features_Dataset ]\n idx_closest = sorted(range(len(distances)), key=lambda k: distances[k])[1:num_results+1]\n return idx_closest\n\ndef get_concatenated_images(indexes, thumb_height, images_Query):\n thumbs = []\n for idx in indexes:\n img = load_img(images_Query[idx])\n img = img.resize((int(img.width * thumb_height / img.height), thumb_height))\n thumbs.append(img)\n concat_image = np.concatenate([np.asarray(t) for t in thumbs], axis=1)\n return concat_image\n", "_____no_output_____" ] ], [ [ "Finally we can do a query on a randomly selected image in our dataset.", "_____no_output_____" ] ], [ [ "print(\"Random image from Generated => Closest NN from Original\")\nrandom_from = [images_GEN, pca_features_GEN]\nclosest_from = [images_ORIG, pca_features_ORIG]\n\n#print(\"Random image from Original => Closest NN from Generated\")\n#random_from = [images_ORIG, pca_features_ORIG]\n#closest_from = [images_GEN, pca_features_GEN]\n\n# do a query on a random image\nquery_image_idx = int(len(images_GEN) * random.random())\nidx_closest = get_closest_images(query_image_idx, 5, random_from[1], closest_from[1])\nquery_image = get_concatenated_images([query_image_idx], 300, random_from[0])\nresults_image = get_concatenated_images(idx_closest, 200, closest_from[0])\n\n# display the query image\nmatplotlib.pyplot.figure(figsize = (5,5))\nimshow(query_image)\nmatplotlib.pyplot.title(\"query image (%d)\" % query_image_idx)\n\n# display the resulting images\nmatplotlib.pyplot.figure(figsize = (16,12))\nimshow(results_image)\nmatplotlib.pyplot.title(\"result images\")", "Random image from Generated => Closest NN from Original\n" ] ], [ [ "If we are satisfied with the quality of our image vectors, now would be a good time to save them to disk for later usage. 
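A minimal sketch of the save, using the names actually defined in this notebook (note that the commented-out cell at the end still refers to the template's `images`/`pca_features` variables, which do not exist here, and the output path below is just an example):\n\n pickle.dump([images_ORIG, pca_features_ORIG], open('features_celebA_facenet.p', 'wb'))\n\n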
You will need these vectors to run the [next notebook on making an image t-SNE](image-tsne.ipynb).\n\nWe need to save both the image features matrix (here the raw facenet embeddings, since the PCA step was skipped), as well as the array containing the paths to each image, to make sure we can line up the images to their corresponding vectors. ", "_____no_output_____" ] ], [ [ "# not needed anymore pickle.dump([images, pca_features], open('./results/features_caltech101.p', 'wb'))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ece17e0bfba97f7675e8f904fe7124ce8e47d677
36,373
ipynb
Jupyter Notebook
w2v_train/.ipynb_checkpoints/run_word2vec-checkpoint.ipynb
zixiliuUSC/EE660-course-project
d4bcf396685ab5ab1d999eb02d7d537cc8a5b4c8
[ "MIT" ]
null
null
null
w2v_train/.ipynb_checkpoints/run_word2vec-checkpoint.ipynb
zixiliuUSC/EE660-course-project
d4bcf396685ab5ab1d999eb02d7d537cc8a5b4c8
[ "MIT" ]
null
null
null
w2v_train/.ipynb_checkpoints/run_word2vec-checkpoint.ipynb
zixiliuUSC/EE660-course-project
d4bcf396685ab5ab1d999eb02d7d537cc8a5b4c8
[ "MIT" ]
null
null
null
31.601216
238
0.566107
[ [ [ "%matplotlib inline", "_____no_output_____" ] ], [ [ "\nWord2Vec Model\n==============\n\nIntroduces Gensim's Word2Vec model and demonstrates its use on the Lee Corpus.\n\n\n", "_____no_output_____" ] ], [ [ "import logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)", "_____no_output_____" ] ], [ [ "In case you missed the buzz, word2vec is widely featured as a member of the\n\"new wave\" of machine learning algorithms based on neural networks, commonly\nreferred to as \"deep learning\" (though word2vec itself is rather shallow).\nUsing large amounts of unannotated plain text, word2vec learns relationships\nbetween words automatically. The output is a set of vectors, one vector per word,\nwith remarkable linear relationships that allow us to do things like:\n\n* vec(\"king\") - vec(\"man\") + vec(\"woman\") =~ vec(\"queen\")\n* vec(\"Montreal Canadiens\") - vec(\"Montreal\") + vec(\"Toronto\") =~ vec(\"Toronto Maple Leafs\").\n\nWord2vec is very useful in `automatic text tagging\n<https://github.com/RaRe-Technologies/movie-plots-by-genre>`_\\ , recommender\nsystems and machine translation.\n\nThis tutorial:\n\n#. Introduces ``Word2Vec`` as an improvement over traditional bag-of-words\n#. Shows off a demo of ``Word2Vec`` using a pre-trained model\n#. Demonstrates training a new model from your own data\n#. Demonstrates loading and saving models\n#. Introduces several training parameters and demonstrates their effect\n#. Discusses memory requirements\n#. Visualizes Word2Vec embeddings by applying dimensionality reduction\n\nReview: Bag-of-words\n--------------------\n\n.. Note:: Feel free to skip these review sections if you're already familiar with the models.\n\nYou may be familiar with the `bag-of-words model\n<https://en.wikipedia.org/wiki/Bag-of-words_model>`_ from the\n`core_concepts_vector` section.\nThis model transforms each document to a fixed-length vector of integers.\nFor example, given the sentences:\n\n- ``John likes to watch movies. Mary likes movies too.``\n- ``John also likes to watch football games. Mary hates football.``\n\nThe model outputs the vectors:\n\n- ``[1, 2, 1, 1, 2, 1, 1, 0, 0, 0, 0]``\n- ``[1, 1, 1, 1, 0, 1, 0, 1, 2, 1, 1]``\n\nEach vector has 11 elements, where each element counts the number of times a\nparticular word occurred in the document.\nThe order of elements is arbitrary.\nIn the example above, the order of the elements corresponds to the words:\n``[\"John\", \"likes\", \"to\", \"watch\", \"movies\", \"Mary\", \"too\", \"also\", \"football\", \"games\", \"hates\"]``.\n\nBag-of-words models are surprisingly effective, but have several weaknesses.\n\nFirst, they lose all information about word order: \"John likes Mary\" and\n\"Mary likes John\" correspond to identical vectors. There is a solution: bag\nof `n-grams <https://en.wikipedia.org/wiki/N-gram>`__\nmodels consider word phrases of length n to represent documents as\nfixed-length vectors to capture local word order, but they suffer from data\nsparsity and high dimensionality.\n\nSecond, the model does not attempt to learn the meaning of the underlying\nwords, and as a consequence, the distance between vectors doesn't always\nreflect the difference in meaning. The ``Word2Vec`` model addresses this\nsecond problem.\n\nIntroducing: the ``Word2Vec`` Model\n-----------------------------------\n\n``Word2Vec`` is a more recent model that embeds words in a lower-dimensional\nvector space using a shallow neural network. 
The result is a set of\nword-vectors where vectors close together in vector space have similar\nmeanings based on context, and word-vectors distant to each other have\ndiffering meanings. For example, ``strong`` and ``powerful`` would be close\ntogether and ``strong`` and ``Paris`` would be relatively far.\n\nThere are two versions of this model, and the :py:class:`~gensim.models.word2vec.Word2Vec`\nclass implements them both:\n\n1. Skip-grams (SG)\n2. Continuous-bag-of-words (CBOW)\n\n.. Important::\n Don't let the implementation details below scare you.\n They're advanced material: if it's too much, then move on to the next section.\n\nThe `Word2Vec Skip-gram <http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model>`__\nmodel, for example, takes in pairs (word1, word2) generated by moving a\nwindow across text data, and trains a 1-hidden-layer neural network based on\nthe synthetic task of predicting, given an input word, a probability\ndistribution over the words near it. A virtual `one-hot\n<https://en.wikipedia.org/wiki/One-hot>`__ encoding of words\ngoes through a 'projection layer' to the hidden layer; these projection\nweights are later interpreted as the word embeddings. So if the hidden layer\nhas 300 neurons, this network will give us 300-dimensional word embeddings.\n\nContinuous-bag-of-words Word2vec is very similar to the skip-gram model. It\nis also a 1-hidden-layer neural network. The synthetic training task now uses\nthe average of multiple input context words, rather than a single word as in\nskip-gram, to predict the center word. Again, the projection weights that\nturn one-hot words into averageable vectors, of the same width as the hidden\nlayer, are interpreted as the word embeddings.\n\n\n", "_____no_output_____" ] ], [ [ "Word2Vec Demo\n-------------\n\nTo see what ``Word2Vec`` can do, let's download a pre-trained model and play\naround with it. We will fetch the Word2Vec model trained on part of the\nGoogle News dataset, covering approximately 3 million words and phrases. Such\na model can take hours to train, but since it's already available,\ndownloading and loading it with Gensim takes minutes.\n\n.. Important::\n The model is approximately 2GB, so you'll need a decent network connection\n to proceed. Otherwise, skip ahead to the \"Training Your Own Model\" section\n below.\n\nYou may also check out an `online word2vec demo\n<http://radimrehurek.com/2014/02/word2vec-tutorial/#app>`_ where you can try\nthis vector algebra for yourself. That demo runs ``word2vec`` on the\n**entire** Google News dataset, of **about 100 billion words**.\n\n\n", "_____no_output_____" ] ], [ [ "import gensim.downloader as api\nwv = api.load('word2vec-google-news-300')", "_____no_output_____" ] ], [ [ "A common operation is to retrieve the vocabulary of a model. 
That is trivial:\n\n", "_____no_output_____" ] ], [ [ "for i, word in enumerate(wv.vocab):\n if i == 10:\n break\n print(word)", "_____no_output_____" ] ], [ [ "We can easily obtain vectors for terms the model is familiar with:\n\n\n", "_____no_output_____" ] ], [ [ "vec_king = wv['king']", "_____no_output_____" ] ], [ [ "Unfortunately, the model is unable to infer vectors for unfamiliar words.\nThis is one limitation of Word2Vec: if this limitation matters to you, check\nout the FastText model.\n\n\n", "_____no_output_____" ] ], [ [ "try:\n vec_cameroon = wv['cameroon']\nexcept KeyError:\n print(\"The word 'cameroon' does not appear in this model\")", "_____no_output_____" ] ], [ [ "Moving on, ``Word2Vec`` supports several word similarity tasks out of the\nbox. You can see how the similarity intuitively decreases as the words get\nless and less similar.\n\n\n", "_____no_output_____" ] ], [ [ "pairs = [\n ('car', 'minivan'), # a minivan is a kind of car\n ('car', 'bicycle'), # still a wheeled vehicle\n ('car', 'airplane'), # ok, no wheels, but still a vehicle\n ('car', 'cereal'), # ... and so on\n ('car', 'communism'),\n]\nfor w1, w2 in pairs:\n print('%r\\t%r\\t%.2f' % (w1, w2, wv.similarity(w1, w2)))", "_____no_output_____" ] ], [ [ "Print the 5 most similar words to \"car\" or \"minivan\"\n\n", "_____no_output_____" ] ], [ [ "print(wv.most_similar(positive=['car', 'minivan'], topn=5))", "_____no_output_____" ] ], [ [ "Which of the below does not belong in the sequence?\n\n", "_____no_output_____" ] ], [ [ "print(wv.doesnt_match(['fire', 'water', 'land', 'sea', 'air', 'car']))", "_____no_output_____" ] ], [ [ "Training Your Own Model\n-----------------------\n\nTo start, you'll need some data for training the model. For the following\nexamples, we'll use the `Lee Corpus\n<https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/test/test_data/lee_background.cor>`_\n(which you already have if you've installed gensim).\n\nThis corpus is small enough to fit entirely in memory, but we'll implement a\nmemory-friendly iterator that reads it line-by-line to demonstrate how you\nwould handle a larger corpus.\n\n\n", "_____no_output_____" ] ], [ [ "from gensim.test.utils import datapath\nfrom gensim import utils\n\nclass MyCorpus(object):\n \"\"\"An iterator that yields sentences (lists of str).\"\"\"\n\n def __iter__(self):\n corpus_path = datapath('lee_background.cor')\n for line in open(corpus_path):\n # assume there's one document per line, tokens separated by whitespace\n yield utils.simple_preprocess(line)", "_____no_output_____" ] ], [ [ "If we wanted to do any custom preprocessing, e.g. decode a non-standard\nencoding, lowercase, remove numbers, extract named entities... All of this can\nbe done inside the ``MyCorpus`` iterator and ``word2vec`` doesn't need to\nknow. All that is required is that the input yields one sentence (list of\nutf8 words) after another.\n\nLet's go ahead and train a model on our corpus. 
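A side note on the two architectures introduced earlier: gensim trains the\nCBOW variant by default, and passing ``sg=1`` switches to skip-gram. A\nminimal sketch (``model_sg`` is just an illustrative name)::\n\n model_sg = gensim.models.Word2Vec(sentences=MyCorpus(), sg=1)\n\n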
Don't worry too much about the training parameters for now; we'll revisit\nthem later.\n\n\n", "_____no_output_____" ] ], [ [ "import gensim.models\n\nsentences = MyCorpus()\nmodel = gensim.models.Word2Vec(sentences=sentences)", "_____no_output_____" ] ], [ [ "Once we have our model, we can use it in the same way as in the demo above.\n\nThe main part of the model is ``model.wv``\\ , where \"wv\" stands for \"word vectors\".\n\n\n", "_____no_output_____" ] ], [ [ "vec_king = model.wv['king']", "_____no_output_____" ] ], [ [ "Retrieving the vocabulary works the same way:\n\n", "_____no_output_____" ] ], [ [ "for i, word in enumerate(model.wv.vocab):\n    if i == 10:\n        break\n    print(word)", "_____no_output_____" ] ], [ [ "Storing and loading models\n--------------------------\n\nYou'll notice that training non-trivial models can take time. Once you've\ntrained your model and it works as expected, you can save it to disk. That\nway, you don't have to spend time training it all over again later.\n\nYou can store/load models using the standard gensim methods:\n\n\n", "_____no_output_____" ] ], [ [ "import tempfile\n\nwith tempfile.NamedTemporaryFile(prefix='gensim-model-', delete=False) as tmp:\n    temporary_filepath = tmp.name\n    model.save(temporary_filepath)\n    #\n    # The model is now safely stored in the filepath.\n    # You can copy it to other machines, share it with others, etc.\n    #\n    # To load a saved model:\n    #\n    new_model = gensim.models.Word2Vec.load(temporary_filepath)", "_____no_output_____" ] ], [ [ "which uses pickle internally, optionally ``mmap``\\ 'ing the model's internal\nlarge NumPy matrices into virtual memory directly from disk files, for\ninter-process memory sharing.\n\nIn addition, you can load models created by the original C tool, both using\nits text and binary formats::\n\n    model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)\n    # using gzipped/bz2 input works too, no need to unzip\n    model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.bin.gz', binary=True)\n\n\n", "_____no_output_____" ], [ "Training Parameters\n-------------------\n\n``Word2Vec`` accepts several parameters that affect both training speed and quality.\n\nmin_count\n---------\n\n``min_count`` is for pruning the internal dictionary. Words that appear only\nonce or twice in a billion-word corpus are probably uninteresting typos and\ngarbage. In addition, there's not enough data to do any meaningful training\non those words, so it's best to ignore them:\n\ndefault value of min_count=5\n\n", "_____no_output_____" ] ], [ [ "model = gensim.models.Word2Vec(sentences, min_count=10)", "_____no_output_____" ] ], [ [ "size\n----\n\n``size`` is the number of dimensions (N) of the N-dimensional space that\ngensim Word2Vec maps the words onto.\n\nBigger size values require more training data, but can lead to better (more\naccurate) models. 
Reasonable values are in the tens to hundreds.\n\n\n", "_____no_output_____" ] ], [ [ "# default value of size=100\nmodel = gensim.models.Word2Vec(sentences, size=200)", "_____no_output_____" ] ], [ [ "workers\n-------\n\n``workers`` , the last of the major parameters (full list `here\n<http://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec>`_)\nis for training parallelization, to speed up training:\n\n\n", "_____no_output_____" ] ], [ [ "# default value of workers=3 (tutorial says 1...)\nmodel = gensim.models.Word2Vec(sentences, workers=4)", "_____no_output_____" ] ], [ [ "The ``workers`` parameter only has an effect if you have `Cython\n<http://cython.org/>`_ installed. Without Cython, you'll only be able to use\none core because of the `GIL\n<https://wiki.python.org/moin/GlobalInterpreterLock>`_ (and ``word2vec``\ntraining will be `miserably slow\n<http://rare-technologies.com/word2vec-in-python-part-two-optimizing/>`_\\ ).\n\n\n", "_____no_output_____" ], [ "Memory\n------\n\nAt its core, ``word2vec`` model parameters are stored as matrices (NumPy\narrays). Each array is **#vocabulary** (controlled by the min_count parameter)\ntimes **#size** (size parameter) of floats (single precision aka 4 bytes).\n\nThree such matrices are held in RAM (work is underway to reduce that number\nto two, or even one). So if your input contains 100,000 unique words, and you\nasked for layer ``size=200``\\ , the model will require approx.\n``100,000*200*4*3 bytes = ~229MB``.\n\nThere's a little extra memory needed for storing the vocabulary tree (100,000 words would take a few megabytes), but unless your words are extremely loooong strings, the memory footprint will be dominated by the three matrices above.\n\n\n", "_____no_output_____" ], [ "Evaluating\n----------\n\n``Word2Vec`` training is an unsupervised task, so there's no good way to\nobjectively evaluate the result. Evaluation depends on your end application.\n\nGoogle has released their testing set of about 20,000 syntactic and semantic\ntest examples, following the \"A is to B as C is to D\" task. It is provided in\nthe 'datasets' folder.\n\nFor example, a syntactic analogy of the comparative type is bad:worse;good:?.\nThere are a total of 9 types of syntactic comparisons in the dataset, such as\nplural nouns and nouns of opposite meaning.\n\nThe semantic questions contain five types of semantic analogies, such as\ncapital cities (Paris:France;Tokyo:?) or family members\n(brother:sister;dad:?).\n\n\n", "_____no_output_____" ], [ "Gensim supports the same evaluation set, in exactly the same format:\n\n\n", "_____no_output_____" ] ], [ [ "model.accuracy('./datasets/questions-words.txt')", "_____no_output_____" ] ], [ [ "This ``accuracy`` takes an `optional parameter\n<http://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec.accuracy>`_\n``restrict_vocab`` which limits which test examples are to be considered.\n\n\n", "_____no_output_____" ], [ "In the December 2016 release of Gensim we added a better way to evaluate semantic similarity.\n\nBy default it uses an academic dataset WS-353 but one can create a dataset\nspecific to your business based on it. It contains word pairs together with\nhuman-assigned similarity judgments. It measures the relatedness or\nco-occurrence of two words. For example, 'coast' and 'shore' are very similar\nas they appear in the same context. 
At the same time 'clothes' and 'closet'\nare less similar because they are related but not interchangeable.\n\n\n", "_____no_output_____" ] ], [ [ "model.evaluate_word_pairs(datapath('wordsim353.tsv'))", "_____no_output_____" ] ], [ [ ".. Important::\n    Good performance on Google's or WS-353 test set doesn't mean word2vec will\n    work well in your application, or vice versa. It's always best to evaluate\n    directly on your intended task. For an example of how to use word2vec in a\n    classifier pipeline, see this `tutorial\n    <https://github.com/RaRe-Technologies/movie-plots-by-genre>`_.\n\n\n", "_____no_output_____" ], [ "Online training / Resuming training\n-----------------------------------\n\nAdvanced users can load a model and continue training it with more sentences\nand `new vocabulary words <online_w2v_tutorial.ipynb>`_:\n\n\n", "_____no_output_____" ] ], [ [ "model = gensim.models.Word2Vec.load(temporary_filepath)\nmore_sentences = [\n    ['Advanced', 'users', 'can', 'load', 'a', 'model',\n     'and', 'continue', 'training', 'it', 'with', 'more', 'sentences']\n]\nmodel.build_vocab(more_sentences, update=True)\nmodel.train(more_sentences, total_examples=model.corpus_count, epochs=model.iter)\n\n# cleaning up temporary file\nimport os\nos.remove(temporary_filepath)", "_____no_output_____" ] ], [ [ "You may need to tweak the ``total_words`` parameter to ``train()``,\ndepending on what learning rate decay you want to simulate.\n\nNote that it's not possible to resume training with models generated by the C\ntool, ``KeyedVectors.load_word2vec_format()``. You can still use them for\nquerying/similarity, but information vital for training (the vocab tree) is\nmissing there.\n\n\n", "_____no_output_____" ], [ "Training Loss Computation\n-------------------------\n\nThe parameter ``compute_loss`` can be used to toggle computation of loss\nwhile training the Word2Vec model. The computed loss is stored in the model\nattribute ``running_training_loss`` and can be retrieved using the function\n``get_latest_training_loss`` as follows:\n\n\n", "_____no_output_____" ] ], [ [ "# instantiating and training the Word2Vec model\nmodel_with_loss = gensim.models.Word2Vec(\n    sentences,\n    min_count=1,\n    compute_loss=True,\n    hs=0,\n    sg=1,\n    seed=42\n)\n\n# getting the training loss value\ntraining_loss = model_with_loss.get_latest_training_loss()\nprint(training_loss)", "_____no_output_____" ] ], [ [ "Benchmarks\n----------\n\nLet's run some benchmarks to see the effect of the training loss computation\ncode on training time.\n\nWe'll use the following data for the benchmarks:\n\n#. Lee Background corpus: included in gensim's test data\n#. Text8 corpus. 
To demonstrate the effect of corpus size, we'll look at the\n   first 1MB, 10MB, 50MB of the corpus, as well as the entire thing.\n\n\n", "_____no_output_____" ] ], [ [ "import io\nimport os\n\nimport gensim.models.word2vec\nimport gensim.downloader as api\nimport smart_open\n\n\ndef head(path, size):\n    with smart_open.open(path) as fin:\n        return io.StringIO(fin.read(size))\n\n\ndef generate_input_data():\n    lee_path = datapath('lee_background.cor')\n    ls = gensim.models.word2vec.LineSentence(lee_path)\n    ls.name = '25kB'\n    yield ls\n\n    text8_path = api.load('text8').fn\n    labels = ('1MB', '10MB', '50MB', '100MB')\n    sizes = (1024 ** 2, 10 * 1024 ** 2, 50 * 1024 ** 2, 100 * 1024 ** 2)\n    for l, s in zip(labels, sizes):\n        ls = gensim.models.word2vec.LineSentence(head(text8_path, s))\n        ls.name = l\n        yield ls\n\n\ninput_data = list(generate_input_data())", "_____no_output_____" ] ], [ [ "We now compare the training time taken for different combinations of input\ndata and model training parameters like ``hs`` and ``sg``.\n\nFor each combination, we repeat the test several times to obtain the mean and\nstandard deviation of the test duration.\n\n\n", "_____no_output_____" ] ], [ [ "# Temporarily reduce logging verbosity\nlogging.root.level = logging.ERROR\n\nimport time\nimport numpy as np\nimport pandas as pd\n\ntrain_time_values = []\nseed_val = 42\nsg_values = [0, 1]\nhs_values = [0, 1]\n\nfast = True\nif fast:\n    input_data_subset = input_data[:3]\nelse:\n    input_data_subset = input_data\n\n\nfor data in input_data_subset:\n    for sg_val in sg_values:\n        for hs_val in hs_values:\n            for loss_flag in [True, False]:\n                time_taken_list = []\n                for i in range(3):\n                    start_time = time.time()\n                    w2v_model = gensim.models.Word2Vec(\n                        data,\n                        compute_loss=loss_flag,\n                        sg=sg_val,\n                        hs=hs_val,\n                        seed=seed_val,\n                    )\n                    time_taken_list.append(time.time() - start_time)\n\n                time_taken_list = np.array(time_taken_list)\n                time_mean = np.mean(time_taken_list)\n                time_std = np.std(time_taken_list)\n\n                model_result = {\n                    'train_data': data.name,\n                    'compute_loss': loss_flag,\n                    'sg': sg_val,\n                    'hs': hs_val,\n                    'train_time_mean': time_mean,\n                    'train_time_std': time_std,\n                }\n                print(\"Word2vec model #%i: %s\" % (len(train_time_values), model_result))\n                train_time_values.append(model_result)\n\ntrain_times_table = pd.DataFrame(train_time_values)\ntrain_times_table = train_times_table.sort_values(\n    by=['train_data', 'sg', 'hs', 'compute_loss'],\n    ascending=[False, False, True, False],\n)\nprint(train_times_table)", "_____no_output_____" ] ], [ [ "Adding Word2Vec \"model to dict\" method to production pipeline\n-------------------------------------------------------------\n\nSuppose we still want more performance improvement in production. One good way\nis to cache all the similar words in a dictionary, so that the next time we\nget the same query word, we first search it in the dict. If it's a hit, we\nshow the result directly from the dictionary; otherwise we query the word and\nthen cache it so that it doesn't miss next time. (A small helper wrapping this\nlook-up-else-compute pattern is sketched in a code cell below.)\n\n\n", "_____no_output_____" ] ], [ [ "# re-enable logging\nlogging.root.level = logging.INFO\n\nmost_similars_precalc = {word : model.wv.most_similar(word) for word in model.wv.index2word}\nfor i, (key, value) in enumerate(most_similars_precalc.items()):\n    if i == 3:\n        break\n    print(key, value)", "_____no_output_____" ] ], [ [ "Comparison with and without caching\n-----------------------------------\n\nFor the time being, let's take four words at random\n\n\n", "_____no_output_____" ] ], 
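[ [ "# Added sketch (not part of the original tutorial): the look-up-else-compute\n# caching pattern described above, wrapped in a small helper. It reuses the\n# `model` and `most_similars_precalc` objects built in the earlier cells;\n# everything else here is illustrative.\ndef cached_most_similar(word):\n    # Return cached neighbours when available; otherwise compute and cache them.\n    if word not in most_similars_precalc:\n        most_similars_precalc[word] = model.wv.most_similar(word)\n    return most_similars_precalc[word]\n\n# Example usage: the first call may compute, the second is a dictionary hit.\nprint(cached_most_similar('voted')[:2])\nprint(cached_most_similar('voted')[:2])", "_____no_output_____" ] ], 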
[ [ "import time\nwords = ['voted', 'few', 'their', 'around']", "_____no_output_____" ] ], [ [ "Without caching\n\n\n", "_____no_output_____" ] ], [ [ "start = time.time()\nfor word in words:\n result = model.wv.most_similar(word)\n print(result)\nend = time.time()\nprint(end - start)", "_____no_output_____" ] ], [ [ "Now with caching\n\n\n", "_____no_output_____" ] ], [ [ "start = time.time()\nfor word in words:\n if 'voted' in most_similars_precalc:\n result = most_similars_precalc[word]\n print(result)\n else:\n result = model.wv.most_similar(word)\n most_similars_precalc[word] = result\n print(result)\n\nend = time.time()\nprint(end - start)", "_____no_output_____" ] ], [ [ "Clearly you can see the improvement but this difference will be even larger\nwhen we take more words in the consideration.\n\n\n", "_____no_output_____" ], [ "Visualising the Word Embeddings\n-------------------------------\n\nThe word embeddings made by the model can be visualised by reducing\ndimensionality of the words to 2 dimensions using tSNE.\n\nVisualisations can be used to notice semantic and syntactic trends in the data.\n\nExample:\n\n* Semantic: words like cat, dog, cow, etc. have a tendency to lie close by\n* Syntactic: words like run, running or cut, cutting lie close together.\n\nVector relations like vKing - vMan = vQueen - vWoman can also be noticed.\n\n.. Important::\n The model used for the visualisation is trained on a small corpus. Thus\n some of the relations might not be so clear.\n\n\n", "_____no_output_____" ] ], [ [ "from sklearn.decomposition import IncrementalPCA # inital reduction\nfrom sklearn.manifold import TSNE # final reduction\nimport numpy as np # array handling\n\n\ndef reduce_dimensions(model):\n num_dimensions = 2 # final num dimensions (2D, 3D, etc)\n\n vectors = [] # positions in vector space\n labels = [] # keep track of words to label our data again later\n for word in model.wv.vocab:\n vectors.append(model.wv[word])\n labels.append(word)\n\n # convert both lists into numpy vectors for reduction\n vectors = np.asarray(vectors)\n labels = np.asarray(labels)\n\n # reduce using t-SNE\n vectors = np.asarray(vectors)\n tsne = TSNE(n_components=num_dimensions, random_state=0)\n vectors = tsne.fit_transform(vectors)\n\n x_vals = [v[0] for v in vectors]\n y_vals = [v[1] for v in vectors]\n return x_vals, y_vals, labels\n\n\nx_vals, y_vals, labels = reduce_dimensions(model)\n\ndef plot_with_plotly(x_vals, y_vals, labels, plot_in_notebook=True):\n from plotly.offline import init_notebook_mode, iplot, plot\n import plotly.graph_objs as go\n\n trace = go.Scatter(x=x_vals, y=y_vals, mode='text', text=labels)\n data = [trace]\n\n if plot_in_notebook:\n init_notebook_mode(connected=True)\n iplot(data, filename='word-embedding-plot')\n else:\n plot(data, filename='word-embedding-plot.html')\n\n\ndef plot_with_matplotlib(x_vals, y_vals, labels):\n import matplotlib.pyplot as plt\n import random\n\n random.seed(0)\n\n plt.figure(figsize=(12, 12))\n plt.scatter(x_vals, y_vals)\n\n #\n # Label randomly subsampled 25 data points\n #\n indices = list(range(len(labels)))\n selected_indices = random.sample(indices, 25)\n for i in selected_indices:\n plt.annotate(labels[i], (x_vals[i], y_vals[i]))\n\ntry:\n get_ipython()\nexcept Exception:\n plot_function = plot_with_matplotlib\nelse:\n plot_function = plot_with_plotly\n\nplot_function(x_vals, y_vals, labels)", "_____no_output_____" ] ], [ [ "Conclusion\n----------\n\nIn this tutorial we learned how to train word2vec models on your custom 
data\nand also how to evaluate them. We hope that you too will find this popular\ntool useful in your Machine Learning tasks!\n\nLinks\n-----\n\n- API docs: :py:mod:`gensim.models.word2vec`\n- `Original C toolkit and word2vec papers by Google <https://code.google.com/archive/p/word2vec/>`_.\n\n\n", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
ece18211f00ae80bb7c35d9aa9aa27d79ab57406
6,261
ipynb
Jupyter Notebook
utils/refs_hists2.ipynb
asappresearch/neural-ilm
fd7e09960525391f4084a5753429deabd7ff00aa
[ "MIT" ]
null
null
null
utils/refs_hists2.ipynb
asappresearch/neural-ilm
fd7e09960525391f4084a5753429deabd7ff00aa
[ "MIT" ]
null
null
null
utils/refs_hists2.ipynb
asappresearch/neural-ilm
fd7e09960525391f4084a5753429deabd7ff00aa
[ "MIT" ]
2
2021-02-25T04:42:14.000Z
2021-02-25T04:43:06.000Z
37.267857
118
0.499281
[ [ [ "%matplotlib inline\n\nimport time\nimport json\nimport shutil\n\nimport h5py\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sklearn.metrics.cluster\n\nimport torch\n\ndef create_ami_matrix(gnd, pred):\n \"\"\"\n assumptions:\n - gnd and pred are 2d matrices\n - gnd is [U_gnd][N]\n - pred is [U_pred][N]\n \"\"\"\n U_gnd = gnd.size(0)\n U_pred = pred.size(0)\n\n ami_matrix = torch.zeros(U_pred, U_gnd)\n for i in range(U_pred):\n for j in range(U_gnd):\n AMI = sklearn.metrics.cluster.adjusted_mutual_info_score(\n# AMI = sklearn.metrics.cluster.mutual_info_score(\n labels_true=gnd[j].numpy(),\n labels_pred=pred[i].numpy()\n )\n ami_matrix[i, j] = AMI\n return ami_matrix\n\ndef run(filename):\n shutil.copyfile(f'../{filename}', '/tmp/foo.h5')\n f = h5py.File('/tmp/foo.h5', 'r')\n\n hypotheses_train = torch.from_numpy(f['hypotheses_train'][:])\n hypotheses_gnd_train = torch.from_numpy(f['gnd_hypotheses_train'][:].astype(np.uint8)).long()\n dsrefs_train = torch.from_numpy(f['dsrefs_train'][:].astype(np.uint8)).long()\n resdicts = f['resdicts']\n\n meta = json.loads(f['meta'][0])\n ref = meta['ref']\n print(ref)\n# print('meta', json.dumps(meta, indent=2))\n dsrefs = meta['ds_refs']\n \n U_gnd = hypotheses_gnd_train.size(0)\n U_pred = hypotheses_train.size(0)\n\n N = dsrefs_train.size(0)\n\n render_start_id = 0\n render_end_id_excl = len(resdicts)\n if render_end_id_excl > 32:\n render_start_id = render_end_id_excl - 32\n\n batch_size = 128\n\n num_dsrefs = dsrefs_train.max().item() + 1\n for dsref in range(num_dsrefs):\n# print('dsref', dsref)\n# start_render_id = 0\n num_renders = render_end_id_excl - render_start_id\n amis = torch.zeros((num_renders, U_pred, U_gnd), dtype=torch.float32)\n for render_id in range(render_start_id, render_end_id_excl):\n b_start = render_id * batch_size\n b_end = b_start + batch_size\n _hypotheses_train = hypotheses_train[:, b_start:b_end]\n _hypotheses_gnd_train = hypotheses_gnd_train[:, b_start:b_end]\n _dsrefs_train = dsrefs_train[b_start:b_end]\n\n dsref_idxes = (_dsrefs_train == dsref).view(-1).nonzero().view(-1).long()\n _hypotheses_train = _hypotheses_train[:, dsref_idxes]\n# _hypotheses_train.fill_(0)\n# _hypotheses_train[:, :, 0] = 1\n# _hypotheses_train.uniform_(0, 1)\n _hypotheses_gnd_train = _hypotheses_gnd_train[:, dsref_idxes]\n \n _, _pred = _hypotheses_train.max(dim=-1)\n ami_matrix = create_ami_matrix(\n gnd=_hypotheses_gnd_train,\n pred=_pred\n )\n amis[render_id - render_start_id] = ami_matrix\n\n resdict_start = json.loads(resdicts[render_start_id])\n episode_start = resdict_start['episode']\n resdict_final = json.loads(resdicts[-1])\n episode_final = resdict_final['episode']\n\n# plt.figure(figsize=(10.0 * U_gnd, 0.15 * num_renders))\n gnd_utt_len = 1\n if 'things' in dsrefs[dsref]:\n gnd_utt_len = 2\n elif 'rels' in dsrefs[dsref]:\n gnd_utt_len = 5\n for u_gnd in range(gnd_utt_len):\n plt.figure(figsize=(30, 0.5))\n plt.cla()\n# plt.subplot(1, 5, u_gnd + 1)\n ami = amis[:, :, u_gnd]\n# print('u_gnd', u_gnd, 'ami.size()', ami.size())\n# print('dsref', dsrefs[dsref], 'u_gnd', u_gnd, 'min', ami.min().item(), 'max', ami.max().item())\n plt.imshow(\n ami.transpose(0, 1).numpy(),\n extent=[episode_start, episode_final, 0, U_pred],\n interpolation='none',\n vmin=0,\n vmax=1\n )\n plt.title(\n f'{ref} episodes={episode_final} dsref={dsrefs[dsref]} u_gnd={u_gnd}'\n f' min={ami.min().item():.3f} max={ami.max().item():.3f}')\n plt.show()\n f.close()\n\nfilenames = [\n 
'hists/hypprop_with_predictor_eeb99_hughtok1_20180921_191657.h5',\n]\n\nfilenames = \"\"\"\n../hists/hypprop_eec59_hughtok1_20180929_150429.h5\n../hists/hypprop_eec60_hughtok2_20180929_150511.h5\n../hists/hypprop_eec61_hughtok33_20180929_150540.h5\n\"\"\"\n\nfilenames = filenames.split('\\n')\nfilenames = [f.replace('../', '') for f in filenames if f != '']\n\nfor filename in filenames:\n run(filename)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
ece182233d9994e4dc7ea4406a3237b6ff08a0ee
10,478
ipynb
Jupyter Notebook
tf0/Solution_Data_Visualization.ipynb
nsingh216/edu
95cbd3dc25c9b15160ba4b146b52eeeb2fa54ecb
[ "Apache-2.0" ]
null
null
null
tf0/Solution_Data_Visualization.ipynb
nsingh216/edu
95cbd3dc25c9b15160ba4b146b52eeeb2fa54ecb
[ "Apache-2.0" ]
null
null
null
tf0/Solution_Data_Visualization.ipynb
nsingh216/edu
95cbd3dc25c9b15160ba4b146b52eeeb2fa54ecb
[ "Apache-2.0" ]
null
null
null
27.941333
569
0.484062
[ [ [ "<a href=\"https://colab.research.google.com/github/osipov/edu/blob/master/tf0/Solution_Data_Visualization.ipynb\" target=\"_blank\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\"/></a>", "_____no_output_____" ], [ "# Exercise: Visualization", "_____no_output_____" ], [ "## First, remember to render images inline...", "_____no_output_____" ] ], [ [ "%matplotlib inline", "_____no_output_____" ] ], [ [ "## You'll need __`matplotlib`__, __`torch`__ and `math`\n* don't forget the standard abbreviations", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport tensorflow as tf\nimport math", "_____no_output_____" ] ], [ [ "## Simple Plot\n\n1. Create __`x`__, a vector of data between 0 and 2 with a step size of 0.01\n* Generate corresponding data __`y`__ using the function $ 1 + sin(2\\pi x) $ using `math.pi`\n* Plot these data\n* Set the x label to \"X-label\"\n* Set the y label to \"Y-label\"\n* Set the title to \"Simple Plot\"\n* Save the figure to a file called \"simple.png\"\n", "_____no_output_____" ] ], [ [ "x = tf.range(0.0, 2.0, 0.01)\ny = 1 + tf.sin(2 * math.pi * x)\n\nfig, ax = plt.subplots()\n\nax.plot(x, y)\nax.set(xlabel='X-label', ylabel='Y-label', title='Simple Plot')\nfig.savefig(\"simple.png\")", "_____no_output_____" ] ], [ [ "## Scatter Plot\n* Generate a scatter plot of 100 random __`x`__ values and 100 random __`y`__ values", "_____no_output_____" ] ], [ [ "n = [100]\nx = tf.random.uniform(n)\ny = tf.random.uniform(n)\n\nplt.scatter(x, y);", "_____no_output_____" ] ], [ [ "## Colored Scatter Plot\n* Now generate a scatterplot with 100 random color values that will be mapped to the default color map", "_____no_output_____" ] ], [ [ "n = [100]\nx = tf.random.uniform(n)\ny = tf.random.uniform(n)\ncolors = tf.random.uniform(n)\n\nplt.scatter(x, y, c=colors);", "_____no_output_____" ] ], [ [ "## Color/Size Scatter Plot\n* Repeat the last plot, but change the size of each of the circles to be a random number between 0.0 and 1.0 * 100\n* __Note:__ you probably want to modify the opacity in case the circles stack on top of one another", "_____no_output_____" ] ], [ [ "n = [100]\nx = tf.random.uniform(n)\ny = tf.random.uniform(n)\ncolors = tf.random.uniform(n)\n\narea = 100 * tf.random.uniform(n)\n\nplt.scatter(x, y, c=colors, s=area, alpha=0.5);", "_____no_output_____" ] ], [ [ "## Histogram\n* Create a histogram of 100 random values from a normal distribution", "_____no_output_____" ] ], [ [ "n = [100]\nx = tf.random.normal(n)\nfig, ax = plt.subplots()\n\nax.hist(x)\n\nplt.show()", "_____no_output_____" ] ], [ [ "## Overlayed Histogram\n* Create two datasets with 100 random values each from a normal distribution\n* Display both on the same plot\n* Make sure they are readable so adjust the transparency for each histogram\n* Display a legend", "_____no_output_____" ] ], [ [ "n = [100]\nx1 = tf.random.normal(n)\nx2 = tf.random.normal(n)\n\nfig, ax = plt.subplots()\n\nax.hist(x1, alpha=0.5, label='x1')\nax.hist(x2, alpha=0.5, label='x2')\nax.legend()\n\nplt.show()", "_____no_output_____" ] ], [ [ "## Side-by-side Histograms\n* Create two side-by-side histograms of separate datasets with 100 random values each from a normal distribution\n* Use 30 bins\n* Make it so the plots share their axes", "_____no_output_____" ] ], [ [ "n = [100]\nx1 = tf.random.normal(n)\nx2 = tf.random.normal(n)\n\nfig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True)\n\nax1.hist(x1, bins=30)\nax2.hist(x2, bins=30)\n\nplt.show()", 
"_____no_output_____" ] ], [ [ "## Write the code to produce this figure\n![power plot](https://raw.githubusercontent.com/osipov/edu/master/pyt0/images/power.png)\n* the x axis represents time intervals __`t`__ of 200ms\n* the y axis represents $ t, t^2, t^3 $", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nimport matplotlib.pyplot as plt\n\n# Sampled time at 200ms intervals\nt = tf.range(0., 5., 0.2)\n\n# green dashes, blue squares and red triangles\nplt.plot(t, t, 'g--', t, t**2, 'bs', t, t**3, 'r^')\nplt.show()", "_____no_output_____" ] ], [ [ "## Write the code to produce this bar chart\n![popularity](https://raw.githubusercontent.com/osipov/edu/master/pyt0/images/popularity.png)\n* the popularity values are __`22.2, 17.6, 8.8, 8, 7.7, 6.7`__", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nx = ['Java', 'Python', 'PHP', 'JavaScript', 'C#', 'C++']\npopularity = [22.2, 17.6, 8.8, 8, 7.7, 6.7]\nx_pos = range(6)\nplt.bar(x_pos, popularity, color='blue')\nplt.xlabel(\"Languages\")\nplt.ylabel(\"Popularity\")\nplt.title(\"Popularity of Programming Language\\n\" + \"Worldwide, Oct 2017 compared to a year ago\")\nplt.xticks(x_pos, x)\n# Turn on the grid\nplt.minorticks_on()\nplt.grid(which='major', linestyle='-', linewidth=0.5, color='red')\n# Customize the minor grid\nplt.grid(which='minor', linestyle=':', color='black')\nplt.show()", "_____no_output_____" ] ], [ [ "Copyright 2021 CounterFactual.AI LLC. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
ece186beb847fa33aa1b77648322904a6b9901bb
73,922
ipynb
Jupyter Notebook
src/notebooks/feature_processing/wifi_processing.ipynb
yerseg/profiling_data_learning
9de84589b1b72d38c17e16ad2ddd55dd1859b2f8
[ "MIT" ]
null
null
null
src/notebooks/feature_processing/wifi_processing.ipynb
yerseg/profiling_data_learning
9de84589b1b72d38c17e16ad2ddd55dd1859b2f8
[ "MIT" ]
null
null
null
src/notebooks/feature_processing/wifi_processing.ipynb
yerseg/profiling_data_learning
9de84589b1b72d38c17e16ad2ddd55dd1859b2f8
[ "MIT" ]
null
null
null
48.537098
165
0.487609
[ [ [ "import pandas as pd\nimport numpy as np\nimport re\nfrom datetime import datetime as dt\nfrom scipy.spatial import distance\nimport scipy.stats as stats\n\n%matplotlib inline", "_____no_output_____" ], [ "TIME_SAMPLE_FREQ = '30s'", "_____no_output_____" ], [ "df = pd.read_csv(\"..\\\\..\\\\scripts\\\\_split_all\\\\user_1\\\\base_wifi.data\", sep = ';', index_col = False, header = None, low_memory = False, \\\n names = ['timestamp', 'uuid', 'bssid', 'chwidth', 'freq', 'level'])", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 110277 entries, 0 to 110276\nData columns (total 6 columns):\ntimestamp 110277 non-null object\nuuid 110277 non-null object\nbssid 110277 non-null object\nchwidth 110277 non-null int64\nfreq 110277 non-null int64\nlevel 110277 non-null int64\ndtypes: int64(3), object(3)\nmemory usage: 5.0+ MB\n" ], [ "df['timestamp'] = df['timestamp'].apply(lambda x: dt.strptime(x, '%d.%m.%Y_%H:%M:%S.%f'))\ndf.index = pd.DatetimeIndex(df.timestamp)\ndf = df.sort_index()", "_____no_output_____" ], [ "df = df.drop(['timestamp', 'chwidth'], axis = 1)", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nDatetimeIndex: 110277 entries, 2021-03-08 15:15:05.666000 to 2021-03-14 00:57:08.353000\nData columns (total 4 columns):\nuuid 110277 non-null object\nbssid 110277 non-null object\nfreq 110277 non-null int64\nlevel 110277 non-null int64\ndtypes: int64(2), object(2)\nmemory usage: 4.2+ MB\n" ], [ "bssid_map = { bssid.replace(' ', ''): idx for bssid, idx in zip(df.bssid.unique(), range(len(df.bssid.unique()))) }", "_____no_output_____" ], [ "df.bssid = df.bssid.apply(lambda x: str(x).replace(' ', ''))\ndf.level = df.level.apply(lambda x: str(x).replace(' ', ''))\ndf.freq = df.freq.apply(lambda x: str(x).replace(' ', ''))", "_____no_output_____" ], [ "df['bssid_level'] = df[['bssid', 'level']].agg(','.join, axis=1)\ndf['count'] = 1", "_____no_output_____" ], [ "def agg_string_join(col):\n col = col.apply(lambda x: str(x))\n return col.str.cat(sep = ',').replace(' ', '')", "_____no_output_____" ], [ "def agg_bssid_col(col):\n array_len = len(bssid_map)\n array = np.zeros(array_len, dtype = 'float')\n def fill_array(x):\n tmp = x.split(',')\n bssid = tmp[0]\n level = float(tmp[1])\n array[bssid_map[bssid.replace(' ', '')]] = level\n return\n \n col.apply(lambda x: fill_array(x))\n return np.array2string(array, separator = ',').replace(' ', '')[1:-1]", "_____no_output_____" ], [ "all_func_dicts_quantum = { 'freq': agg_string_join, 'level': agg_string_join, 'bssid_level' : agg_bssid_col, 'count' : 'sum' }", "_____no_output_____" ], [ "df_quantum = df.groupby(['timestamp', 'uuid'], as_index=True).agg(all_func_dicts_quantum)", "_____no_output_____" ], [ "df_quantum", "_____no_output_____" ], [ "df_quantum = df_quantum.reset_index()\ndf_quantum.index = pd.DatetimeIndex(df_quantum.timestamp)", "_____no_output_____" ], [ "df_quantum = df_quantum[df_quantum['count'] != 0]", "_____no_output_____" ], [ "df_conn = pd.read_csv(\"..\\\\..\\\\scripts\\\\_split_all\\\\user_1\\\\conn_wifi.data\", sep = ';', index_col = False, header = None, low_memory = False, \\\n names = ['timestamp', '1', 'bssid', '2', '3', '4', '5', 'level', '6'])\n\ndf_conn['timestamp'] = df_conn['timestamp'].apply(lambda x: dt.strptime(x, '%d.%m.%Y_%H:%M:%S.%f'))\ndf_conn.index = pd.DatetimeIndex(df_conn.timestamp)\ndf_conn = df_conn.sort_index()", "_____no_output_____" ], [ "def get_level_from_row(row):\n bssid = 
df_conn.iloc[df_conn.index.get_loc(row.name, method = 'nearest')]['bssid']\n if str(bssid) == 'nan' or str(bssid) == 'null' or str(bssid) == '':\n return 0\n \n level = df_conn.iloc[df_conn.index.get_loc(row.name, method = 'nearest')]['level']\n time = df_conn.iloc[df_conn.index.get_loc(row.name, method = 'nearest')]['timestamp']\n return level if abs((time - row.name).total_seconds()) <= 10 else 0\n\ndf_quantum['conn_level'] = df_quantum.apply(lambda row: get_level_from_row(row), axis = 1)", "_____no_output_____" ], [ "def string2array(string):\n try:\n array = np.fromstring(string, sep=',')\n return array\n except:\n return np.nan\n\ndef to_ones_array(array):\n try:\n array[array != 0] = 1\n return array\n except:\n return np.nan\n\ndef get_len(obj):\n try:\n length = len(obj)\n return length\n except:\n return np.nan", "_____no_output_____" ], [ "def get_occured_nets_count(row, prev_col, curr_col):\n prev = to_ones_array(string2array(row[prev_col]))\n curr = to_ones_array(string2array(row[curr_col]))\n intersection = np.logical_and(curr, prev)\n diff = np.logical_and(curr, np.logical_not(intersection))\n \n if (np.count_nonzero(np.logical_or(prev, curr)) == 0):\n return 0\n \n return np.count_nonzero(diff) / np.count_nonzero(np.logical_or(prev, curr))\n\ndef get_disappeared_nets_count(row, prev_col, curr_col):\n prev = to_ones_array(string2array(row[prev_col]))\n curr = to_ones_array(string2array(row[curr_col]))\n intersection = np.logical_and(curr, prev)\n diff = np.logical_and(prev, np.logical_not(intersection))\n \n if (np.count_nonzero(np.logical_or(prev, curr)) == 0):\n return 0\n \n return np.count_nonzero(diff) / np.count_nonzero(np.logical_or(prev, curr))\n\ndef get_jaccard_index(row, prev_col, curr_col):\n prev = to_ones_array(string2array(row[prev_col]))\n curr = to_ones_array(string2array(row[curr_col]))\n return distance.jaccard(prev, curr)\n\ndef get_occur_speed(row, prev_col, curr_col):\n prev = to_ones_array(string2array(row[prev_col]))\n curr = to_ones_array(string2array(row[curr_col]))\n return np.linalg.norm(prev - curr) / np.sqrt(get_len(prev))\n \ndef get_level_speed(row, prev_col, curr_col):\n prev = string2array(row[prev_col])\n curr = string2array(row[curr_col])\n return np.linalg.norm(prev - curr) / np.sqrt(get_len(prev))\n\ndef calc_single_cols_in_window(df, col, new_col, window, func):\n def func_wrapper(func, row, prev_col, curr_col):\n delta = row.timestamp - row.prev_timestamp\n if pd.isnull(delta):\n delta = 0\n else:\n delta = abs(delta.total_seconds())\n if delta > 10 * 60:\n return np.nan\n else:\n return func(row, prev_col_name, col)\n \n new_cols = []\n \n for i in range(window):\n prev_col_name = \"_\".join(['prev', col, str(i + 1)])\n new_col_name = \"_\".join([new_col, str(i + 1)])\n \n df['prev_timestamp'] = df.timestamp.shift(i + 1)\n df[prev_col_name] = df[col].shift(i + 1)\n df[new_col_name] = df.apply(lambda row: func_wrapper(func, row, prev_col_name, col), axis = 1)\n df = df.drop(prev_col_name, axis = 1)\n df = df.drop('prev_timestamp', axis = 1)\n new_cols.append(new_col_name)\n \n df[\"_\".join([new_col, 'mean'])] = df[new_cols].mean(axis = 1)\n df[\"_\".join([new_col, 'median'])] = df[new_cols].median(axis = 1)\n df[\"_\".join([new_col, 'var'])] = df[new_cols].var(axis = 1)\n \n return df", "_____no_output_____" ], [ "WINDOW_SIZE = 5\n\noccur_and_level_columns_map = [\n (\"bssid_level\", \"occured_nets_count\", WINDOW_SIZE, get_occured_nets_count),\n (\"bssid_level\", \"disappeared_nets_count\", WINDOW_SIZE, get_disappeared_nets_count),\n 
(\"bssid_level\", \"jaccard_index\", WINDOW_SIZE, get_jaccard_index), \n (\"bssid_level\", \"occur_speed\", WINDOW_SIZE, get_occur_speed),\n (\"bssid_level\", \"level_speed\", WINDOW_SIZE, get_level_speed)\n]\n\nfor (col, new_col, window, func) in occur_and_level_columns_map:\n df_quantum = calc_single_cols_in_window(df_quantum, col, new_col, window, func)", "_____no_output_____" ], [ "def get_conn_level_speed(row, prev_col, curr_col):\n return row[curr_col] - row[prev_col]", "_____no_output_____" ], [ "single_columns_map = [\n (\"conn_level\", \"conn_level_speed\", WINDOW_SIZE, get_conn_level_speed),\n (\"count\", \"count_speed\", WINDOW_SIZE, get_conn_level_speed)\n]\n\nfor (col, new_col, window, func) in single_columns_map:\n df_quantum = calc_single_cols_in_window(df_quantum, col, new_col, window, func)", "_____no_output_____" ], [ "def agg_str(col):\n# all_freq = col.str.cat(sep=',')\n return string2array(col)\n\ndef str_mean(col):\n array = agg_str(col)\n if str(array) == 'nan':\n return 0 \n return np.mean(array)\n\ndef mean(col):\n return np.mean(col)\n\ndef var(col):\n return np.var(col)\n\ndef median(col):\n return np.median(col)\n\ndef skew(col):\n return stats.skew(col)\n\ndef kurt(col):\n return stats.kurtosis(col)", "_____no_output_____" ], [ "df_quantum['freq'] = df_quantum.apply(lambda row: str_mean(row['freq']), axis = 1)\ndf_quantum['level'] = df_quantum.apply(lambda row: str_mean(row['level']), axis = 1)", "_____no_output_____" ], [ "cols_for_drop = []\nnames = [\n \"occured_nets_count\",\n \"disappeared_nets_count\",\n \"jaccard_index\",\n \"occur_speed\",\n \"count_speed\",\n \"conn_level_speed\",\n \"level_speed\",\n \"count_speed\"\n]\n\nfor i in range(1, WINDOW_SIZE + 1):\n for name in names:\n cols_for_drop.append('_'.join([name, str(i)]))\n \ndf_quantum = df_quantum.drop(['bssid_level', 'timestamp', 'uuid'], axis = 1)\ndf_quantum = df_quantum.drop(cols_for_drop, axis = 1)", "_____no_output_____" ], [ "df_quantum.columns", "_____no_output_____" ], [ "common_cols = df_quantum.columns[0:4]\nspeed_acc_cols = df_quantum.columns[4:]\n\ncommon_funcs_list = [mean, var, median, skew, kurt]\nspecial_funcs_list = [mean, pd.DataFrame.mad, skew]\n\ncommon_cols_map = { col : common_funcs_list for col in common_cols }\nspeed_acc_cols_map = { col : special_funcs_list for col in speed_acc_cols }\n\nagg_dict = common_cols_map\nagg_dict.update(speed_acc_cols_map)", "_____no_output_____" ], [ "df_quantum[speed_acc_cols] = df_quantum[speed_acc_cols].apply(pd.to_numeric)", "_____no_output_____" ], [ "df_sampling = df_quantum.groupby(pd.Grouper(freq = TIME_SAMPLE_FREQ)).agg(agg_dict)", "D:\\Program Files\\Anaconda3\\lib\\site-packages\\numpy\\core\\fromnumeric.py:3118: RuntimeWarning: Mean of empty slice.\n out=out, **kwargs)\nD:\\Program Files\\Anaconda3\\lib\\site-packages\\numpy\\core\\_methods.py:85: RuntimeWarning: invalid value encountered in double_scalars\n ret = ret.dtype.type(ret / rcount)\n" ], [ "df_rolling = df_quantum.rolling(TIME_SAMPLE_FREQ, min_periods = 1, center = False).agg(agg_dict)", "_____no_output_____" ], [ "df_sampling.columns = [\"_\".join([str(high_level_name), str(low_level_name)]) \\\n for (high_level_name, low_level_name) in df_sampling.columns.values]\n\ndf_rolling.columns = [\"_\".join([str(high_level_name), str(low_level_name)]) \\\n for (high_level_name, low_level_name) in df_rolling.columns.values]", "_____no_output_____" ], [ "df_sampling = df_sampling.dropna()\ndf_sampling = df_sampling.fillna(0)\n\ndf_rolling = df_rolling.dropna()\ndf_rolling = 
df_rolling.fillna(0)", "_____no_output_____" ], [ "df_sampling.to_csv(\".\\\\_datasets\\\\5s\\\\wifi_sampling_dataset_5.csv\")\ndf_rolling.to_csv(\".\\\\_datasets\\\\5s\\\\wifi_rolling_dataset_5.csv\")", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ece1917374ab0ea460882c0faa7b994d463292ec
3,195
ipynb
Jupyter Notebook
preprocessing/hd_Tokenize.ipynb
young-ha713/HEXinAR_exawave_service
4e670c2c685c6ef3bc1c6b93467847d1e4e6a198
[ "Apache-2.0" ]
null
null
null
preprocessing/hd_Tokenize.ipynb
young-ha713/HEXinAR_exawave_service
4e670c2c685c6ef3bc1c6b93467847d1e4e6a198
[ "Apache-2.0" ]
null
null
null
preprocessing/hd_Tokenize.ipynb
young-ha713/HEXinAR_exawave_service
4e670c2c685c6ef3bc1c6b93467847d1e4e6a198
[ "Apache-2.0" ]
null
null
null
31.633663
278
0.548044
[ [ [ "import re\r\nimport sqlite3\r\nfrom konlpy.tag import Okt\r\nfrom collections import Counter\r\n\r\n# db ์ž๋ฃŒ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ\r\nconn = sqlite3.connect('news.db')\r\nc = conn.cursor()\r\nc.execute('SELECT news_content from news_dummy2')\r\n# c.execute('SELECT news_content from news_dummy2 WHERE pubdate LIKE \"%23:55')\r\nnews_con = []\r\nfor row in c.fetchall():\r\n news_con.append(row)\r\n\r\n# test์šฉ -> ์‚ญ์ œํ•˜์‹œ๊ณ  news_con์„ ๊ฐ ์ผ์ž๋ณ„ ๊ธฐ์‚ฌ๋กœ ๋ถ„๋ฅ˜ํ•ด์„œ ๋„ฃ๊ธฐ\r\nnews_test = news_con[5:9]\r\n\r\nokt = Okt()\r\nstopwords = ['์ฝ”๋กœ๋‚˜', '์˜ฌ๋ฆผํ”ฝ', 'ํ๋ ด', '๋ฐ”์ด๋Ÿฌ์Šค', '๋ฐฑ์‹ ', '์ˆ˜๊ธ‰', '๋ฐฉ์—ญ', '๊ฑฐ๋ฆฌ๋‘๊ธฐ', '์ผ์ •', '์บ ํŽ˜์ธ', '๋Œ€ํ•ด', '์—์„œ', '์ด๊ณ ', '๋ผ๊ณ ', '๋‹ค๊ณ ', '๋ผ๊ธฐ', '๋ผ๋ฉฐ', '๋ฉด์„œ', '๋ผ๋ฉด์„œ', '๋กœ์จ', '๋กœ์„œ', '์œผ๋กœ', '์—์„œ', '์–ด์•ผ', '๋ถ€ํ„ฐ', 'ํ•œ๋‹ค', '์ด๋‹ค', '์˜€๋‹ค', '์˜€์—ˆ๋‹ค', '|', '/','โ€˜', 'โ€™', ',' , 'โ€œ', 'โ€', '.', '>', '<', ')', '('] # ์ œ์™ธ๋‹จ์–ด ๋ฆฌ์ŠคํŠธ\r\nnews_nouns = list() # ๊ธฐ์‚ฌ์— ์‚ฌ์šฉ๋œ ๋ชจ๋“  ๋ช…์‚ฌ๋ฅผ ๋ฆฌ์ŠคํŠธ๋กœ ๋‹ด์Œ\r\nfor news in news_test: # news_con์—์„œ ํ•œ ์ค„(ํ•œ ๊ธฐ์‚ฌ)์”ฉ ๋ฝ‘์•„์„œ ๋ฐ˜๋ณต\r\n temp = ' '.join(news) # ๋„์–ด์“ฐ๊ธฐ๋ฅผ ๋„ฃ์–ด ๋ฌธ์ž์—ด์„ ๋ถ™์—ฌ์คŒ\r\n temp_re = re.sub('[^๊ฐ€-ํžฃ]','', temp) # ๊ฐ€-ํžฃ์„ ์ œ์™ธํ•œ ๋ชจ๋“  ๊ธ€์ž๋ฅผ ์‚ญ์ œ -> ์ด๋ฏธ ํ•œ๊ธ€ ๊ธฐ์‚ฌ๋งŒ ๋‚จ์•„์žˆ๋Š” ๊ฒฝ์šฐ ์‚ญ์ œ ๊ฐ€๋Šฅ (๋ฐ‘์— temp_re -> temp๋กœ ์ˆ˜์ •\r\n encoded = okt.nouns(temp_re) # okt๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ช…์‚ฌ ๋‹จ์œ„๋กœ ๋Š์Œ\r\n exist_word = [item for item in encoded if item not in stopwords] # stopword์— ์žˆ๋Š” ๋‹จ์–ด๋ฅผ ์ œ์™ธํ•œ ๊ฒƒ์„ exist_word์— ๋„ฃ์Œ\r\n for prep_word in exist_word: # exist_word ๋ฆฌ์ŠคํŠธ ์•ˆ์˜ ๋‹จ์–ด๋ฅผ ํ•˜๋‚˜์”ฉ ๋ฝ‘์•„์„œ ๋ฐ˜๋ณต\r\n if len(prep_word) > 1: # ๋ฝ‘์•„๋‚ธ ๋‹จ์–ด ์ค‘ 2๊ธ€์ž ์ด์ƒ์ธ ๊ฒƒ๋งŒ\r\n news_nouns.append(prep_word) # news_nouns ๋ฆฌ์ŠคํŠธ์— ์‚ฝ์ž…\r\n noun_counter = Counter(news_nouns) # news_nouns ๋ฆฌ์ŠคํŠธ์˜ ๋ฌธ์ž์˜ ๋นˆ๋„์ˆ˜๋ฅผ ์…ˆ\r\n\r\nprint(news_test)\r\nprint(news_nouns)\r\nprint(noun_counter)\r\n\r\nconn.close()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
ece198dc75ae1ca3f47608216a007561412c3084
130,862
ipynb
Jupyter Notebook
main.ipynb
Gregory-Howard/kaggle-riiid
77fa6a4df9bf3e1602bc07cf76bf634c358107bc
[ "Apache-2.0" ]
null
null
null
main.ipynb
Gregory-Howard/kaggle-riiid
77fa6a4df9bf3e1602bc07cf76bf634c358107bc
[ "Apache-2.0" ]
null
null
null
main.ipynb
Gregory-Howard/kaggle-riiid
77fa6a4df9bf3e1602bc07cf76bf634c358107bc
[ "Apache-2.0" ]
null
null
null
78.595796
25,644
0.71298
copied from : https://www.kaggle.com/c/riiid-test-answer-prediction/data\n# train.csv\n\n**row_id**: (int64) ID code for the row.\n\n**timestamp**: (int64) the time in milliseconds between this user interaction and the first event completion from that user.\n\n**user_id**: (int32) ID code for the user.\n\n**content_id**: (int16) ID code for the user interaction.\n\n**content_type_id**: (int8) 0 if the event was a question being posed to the user, 1 if the event was the user watching a lecture.\n\n**task_container_id**: (int16) ID code for the batch of questions or lectures. For example, a user might see three questions in a row before seeing the explanations for any of them. Those three would all share a task_container_id.\n\n**user_answer**: (int8) the user's answer to the question, if any. Read -1 as null, for lectures.\n\n**answered_correctly**: (int8) if the user responded correctly. Read -1 as null, for lectures.\n\n**prior_question_elapsed_time**: (float32) The average time in milliseconds it took a user to answer each question in the previous question bundle, ignoring any lectures in between. Is null for a user's first question bundle or lecture. Note that the time is the average time a user took to solve each question in the previous bundle.\n\n**prior_question_had_explanation**: (bool) Whether or not the user saw an explanation and the correct response(s) after answering the previous question bundle, ignoring any lectures in between. The value is shared across a single question bundle, and is null for a user's first question bundle or lecture. Typically the first several questions a user sees were part of an onboarding diagnostic test where they did not get any feedback.\n\n# questions.csv: metadata for the questions posed to users.\n\n**question_id**: foreign key for the train/test content_id column, when the content type is question (0).\n\n**bundle_id**: code for which questions are served together.\n\n**correct_answer**: the answer to the question. Can be compared with the train user_answer column to check if the user was right.\n\n**part**: the relevant section of the TOEIC test.\n\n**tags**: one or more detailed tag codes for the question. The meaning of the tags will not be provided, but these codes are sufficient for clustering the questions together.\n\n# lectures.csv: metadata for the lectures watched by users as they progress in their education.\n\n**lecture_id**: foreign key for the train/test content_id column, when the content type is lecture (1).\n\n**part**: top level category code for the lecture.\n\n**tag**: one tag code for the lecture. 
The meaning of the tags will not be provided, but these codes are sufficient for clustering the lectures together.\n\n**type_of**: brief description of the core purpose of the lecture\n", "_____no_output_____" ] ], [ [ "#jupyter notebook --NotebookApp.max_buffer_size=16106127360\n#https://www.kaggle.com/c/riiid-test-answer-prediction/data\nimport pandas as pd\nimport numpy as np", "_____no_output_____" ], [ "example_sample_submission = \"data/example_sample_submission.csv\"\nexample_test = \"data/example_test.csv\"\nlectures = \"data/lectures.csv\"\nquestions = \"data/questions.csv\"\n\n\npd_example_sample_submission = pd.read_csv(example_sample_submission,low_memory=False)\npd_example_test = pd.read_csv(example_test,low_memory=False)\npd_lectures = pd.read_csv(lectures,low_memory=False)\npd_questions = pd.read_csv(questions,low_memory=False)\n", "_____no_output_____" ], [ "train = \"data/train.csv\"\npd_train = pd.read_csv(train)", "_____no_output_____" ], [ "selected_train = \"data/selected_train.csv\"\npd_selected_train = pd.read_csv(selected_train,low_memory=False)\npd_selected_train_answers = pd_selected_train[pd_selected_train[\"answered_correctly\"]!=-1]", "_____no_output_____" ], [ "pd_train.describe()", "_____no_output_____" ], [ "pd_example_sample_submission.head(5)", "_____no_output_____" ], [ "pd_train_answers = pd_train[pd_train[\"answered_correctly\"]!=-1]", "_____no_output_____" ], [ "%matplotlib inline\nfrom matplotlib import pyplot as plt\nfrom matplotlib.pyplot import figure", "_____no_output_____" ], [ "\n\n# Discovery of the train set.\n\n# not a perfect percentage because there are -1 values\npd_influence_prior = pd_selected_train_answers.groupby('prior_question_had_explanation')['answered_correctly'].agg(answered_correctly='mean')\npd_influence_prior.plot.bar(y=\"answered_correctly\")", "_____no_output_____" ], [ "binned_prior_question_elapsed_time = pd.cut(pd_selected_train_answers[\"prior_question_elapsed_time\"],10).apply(lambda x:x.mid)\npd_selected_train_answers.loc[:,\"prior_question_elapsed_time_bins\"] = binned_prior_question_elapsed_time\npd_influence_prior_question_elapsed_time = pd_selected_train_answers.groupby('prior_question_elapsed_time_bins')['answered_correctly'].agg(answered_correctly='mean')\npd_influence_prior_question_elapsed_time.plot.bar(y=\"answered_correctly\")", "_____no_output_____" ], [ "pd_mean_student = pd_train_answers.groupby('user_id')['answered_correctly'].agg(mean_answered_correctly='mean')\npd_mean_student = pd_mean_student.sort_values(\"mean_answered_correctly\")\n\n\n\nfigure(num=None, figsize=(14, 6), dpi=80, facecolor='w', edgecolor='k')\n\nplt.plot(range(len(pd_mean_student)),pd_mean_student[\"mean_answered_correctly\"],)\nplt.tick_params(\n    axis='x', # changes apply to the x-axis\n    which='both', # both major and minor ticks are affected\n    bottom=False, # ticks along the bottom edge are off\n    top=False, # ticks along the top edge are off\n    labelbottom=False) # labels along the bottom edge are off\nplt.title(\"average good answer by student\")\nplt.show()\nplt.clf()", "_____no_output_____" ], [ "pd_number_of_exemple_by_question = pd_train_answers.groupby('content_id')[\"content_id\"].agg(count_question=\"count\")\npd_train_answers = pd_train_answers.merge(pd_number_of_exemple_by_question,right_on=\"content_id\",left_on=\"content_id\")\nquestion_difficulty = pd_train_answers[pd_train_answers[\"count_question\"]>200].groupby('content_id')['answered_correctly'].agg(mean_answered_correctly='mean')\nquestion_difficulty = 
question_difficulty.sort_values(\"mean_answered_correctly\")\n\nfigure(num=None, figsize=(14, 6), dpi=80, facecolor='w', edgecolor='k')\n\nplt.plot(range(len(question_difficulty)),question_difficulty[\"mean_answered_correctly\"],)\nplt.tick_params(\n    axis='x', # changes apply to the x-axis\n    which='both', # both major and minor ticks are affected\n    bottom=False, # ticks along the bottom edge are off\n    top=False, # ticks along the top edge are off\n    labelbottom=False) # labels along the bottom edge are off\nplt.title(\"average good answer by question\")\nplt.show()\nplt.clf()", "_____no_output_____" ], [ "bottom_five_questions = question_difficulty.head(5)\ntop_five_questions = question_difficulty.tail(5)\nbottom_five_questions = bottom_five_questions.merge(pd_number_of_exemple_by_question,right_on=\"content_id\",left_on=\"content_id\")\ntop_five_questions = top_five_questions.merge(pd_number_of_exemple_by_question,right_on=\"content_id\",left_on=\"content_id\")\nprint(\"bottom_five_questions\\n\",bottom_five_questions)\nprint(\"top_five_questions\\n\",top_five_questions)", "bottom_five_questions\n mean_answered_correctly count_question\ncontent_id \n10062 0.091752 7444\n7639 0.100607 8399\n3125 0.135965 10157\n9220 0.144753 12359\n7487 0.147680 9419\ntop_five_questions\n mean_answered_correctly count_question\ncontent_id \n12444 0.986842 228\n1679 0.989349 6948\n10440 0.990579 5732\n12515 0.991031 223\n10626 0.992656 5719\n" ], [ "pd_enriched_questions = pd_questions.merge(question_difficulty,left_on=\"question_id\",right_on=\"content_id\")\npd_enriched_questions[\"tags\"] = pd_enriched_questions[\"tags\"].apply(lambda x:x.split(' '))", "_____no_output_____" ], [ "pd_enriched_questions.head()", "_____no_output_____" ], [ "from sklearn.preprocessing import MultiLabelBinarizer\nmlb_tags = MultiLabelBinarizer()\npd_transformed = mlb_tags.fit_transform(pd_enriched_questions.tags)\ncol = [\"tags_\"+x for x in mlb_tags.classes_]\npd_enriched_questions = pd_enriched_questions.join(pd.DataFrame(pd_transformed, columns=col))\n\nfrom sklearn.preprocessing import LabelBinarizer\njobs_encoder = LabelBinarizer()\ntransformed = jobs_encoder.fit_transform(pd_enriched_questions['part'])\ncol = [\"part_\"+str(x+1) for x in range(transformed.shape[1])]\nohe_df = pd.DataFrame(transformed,columns=col)\npd_enriched_questions = pd.concat([pd_enriched_questions, ohe_df], axis=1).drop(['part'], axis=1)\n", "_____no_output_____" ], [ "pd_enriched_questions\n", "_____no_output_____" ], [ "pd_enriched_questions.head()", "_____no_output_____" ], [ "print(pd_enriched_questions.columns)", "Index(['question_id', 'bundle_id', 'correct_answer', 'part', 'tags',\n       'mean_answered_correctly', '0', '1', '10', '100',\n       ...\n       '90', '91', '92', '93', '94', '95', '96', '97', '98', '99'],\n      dtype='object', length=194)\n" ], [ "import sklearn\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn import svm\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import train_test_split\nfrom sklearn import metrics\nregressor = RandomForestRegressor(n_estimators=200, random_state=0)\nclf = svm.SVC(kernel='linear', C=1)", "_____no_output_____" ], [ "Xcol = [\"question_id\",\"bundle_id\",\"correct_answer\",\"mean_answered_correctly\",\"tags\"]\nX = pd_enriched_questions.drop(Xcol, axis=1).to_numpy()\nYcol = \"mean_answered_correctly\"\ny = pd_enriched_questions[Ycol].to_numpy()\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)", "_____no_output_____" ], [ 
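"# Added sketch (not in the original notebook): a quick cross-validated check of\n# the regressor before the final fit below. cross_val_score was already imported\n# above; the 5-fold setup and MAE scoring are illustrative assumptions.\ncv_scores = cross_val_score(regressor, X, y, cv=5, scoring='neg_mean_absolute_error')\nprint('CV MAE: %.4f (+/- %.4f)' % (-cv_scores.mean(), cv_scores.std()))", "_____no_output_____" ], [ 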
"regressor.fit(X_train, y_train)\ny_pred = regressor.predict(X_test)\nprint('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_pred))\nprint('Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred))\nprint('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))\n\n", "Mean Absolute Error: 0.12415394651336734\nMean Squared Error: 0.025487431925249646\nRoot Mean Squared Error: 0.15964783720818032\n" ] ], [ [ "Les rรฉsultats ne sont pas parfait mais si l'on rencontre une nouvelle question on pourra grace aux tags lui donner un pourcentage de rรฉussite moyen.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
ece1a0082c2651f2eb02f9d229921d5687189a0d
38,667
ipynb
Jupyter Notebook
pandas_ta_tests.ipynb
poivronjaune/stock_screener
33d19d79ab900746bdb5dae3d40e93cde60aad64
[ "MIT" ]
null
null
null
pandas_ta_tests.ipynb
poivronjaune/stock_screener
33d19d79ab900746bdb5dae3d40e93cde60aad64
[ "MIT" ]
11
2022-01-03T19:36:54.000Z
2022-01-19T02:58:46.000Z
pandas_ta_tests.ipynb
poivronjaune/stock_screener
33d19d79ab900746bdb5dae3d40e93cde60aad64
[ "MIT" ]
1
2022-03-29T11:06:04.000Z
2022-03-29T11:06:04.000Z
62.567961
20,026
0.671787
[ [ [ "import sqlite3\nimport pandas as pd\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport pandas_ta as ta\n\npd.set_option(\"display.max_rows\", None)\ndatabase_name = \"TSX_Quality.sqlite\"", "_____no_output_____" ] ], [ [ "### Utility function to get prices from our local database ", "_____no_output_____" ] ], [ [ "def get_prices_for(symbol):\n conn = sqlite3.connect(database_name)\n \n # Make sur all dates are in ascending order\n sql = f\"SELECT * FROM Prices_Daily WHERE Ticker = '{symbol}' ORDER BY UPPER(Ticker) ASC, Date ASC\"\n prices = pd.read_sql_query(sql, conn, index_col=\"Date\")\n prices.index = pd.to_datetime(prices.index)\n \n # Clean up CSV data to make sure we have only floats and no \"-\" values\n prices.replace(\"-\", np.NaN, inplace=True)\n #prices[\"Volume\"].replace(0, np.NaN, inplace=True)\n prices[\"Open\"] = prices[\"Open\"].astype(float)\n prices[\"High\"] = prices[\"High\"].astype(float)\n prices[\"Low\"] = prices[\"Low\"].astype(float)\n prices[\"Close\"]= prices[\"Close\"].astype(float)\n \n # Required for finTA\n prices.rename(columns={\"Ticker\":\"ticker\", \"Open\":\"open\", \"High\":\"high\", \"Low\":\"low\", \"Close\":\"close\", \"Volume\":\"volume\"}, inplace=True)\n prices.index.rename(\"date\", inplace=True) \n \n return prices", "_____no_output_____" ], [ "# PANDAS_TA works with our data structure that includes a column with the Tcker symbol\nsymbol = \"LI\"\ndf = get_prices_for(symbol)\ndf.tail(10)", "_____no_output_____" ], [ "# RSI Indicator \n# help(ta.rsi)\n# df[\"RSI\"] = ta.rsi(df[\"close\"], length=10)\n# df.ta.rsi(length=10, append = True)\n# df.ta.rsi(length=14, append = True)\n# df.ta.sma(length=100, append = True)\n# df.ta.ema(length=10, append = True)\nst = ta.supertrend(df[\"high\"], df[\"low\"], df[\"close\"], append=True)\ndf[\"ST_VAL\"] = st[\"SUPERT_7_3.0\"]\ndf[\"ST_DIR\"] = st[\"SUPERTd_7_3.0\"]\ndf[\"ST_UPP\"] = st[\"SUPERTl_7_3.0\"]\ndf[\"ST_LOW\"] = st[\"SUPERTs_7_3.0\"]\ndf.tail(10)\n", "_____no_output_____" ], [ "df.index[-5:]", "_____no_output_____" ], [ "df.loc[df.index[-5:], [\"close\"]]", "_____no_output_____" ], [ "df_sub = df.loc[df.index[-100:]]\nplt.plot(df_sub.index, df_sub[\"close\"])\n#plt.plot(df.index, df[\"RSI_10\"])\n# plt.plot(df.index, df[\"SMA_100\"])\n# plt.plot(df.index, df[\"EMA_10\"])\nplt.show()\n", "_____no_output_____" ], [ "# Holy Grail Strategy\n", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
ece1ba49ce2753e9413c345aea12d1b0d913c61c
323,442
ipynb
Jupyter Notebook
02_CNN/CNN.ipynb
starkbao/Machine-Learning-Tutorial
b98a10ad0704ad8932c96179cb7e83ea41cc7315
[ "MIT" ]
3
2021-05-28T15:05:35.000Z
2021-05-31T12:53:01.000Z
02_CNN/CNN.ipynb
starkbao/Machine-Learning-Tutorial
b98a10ad0704ad8932c96179cb7e83ea41cc7315
[ "MIT" ]
null
null
null
02_CNN/CNN.ipynb
starkbao/Machine-Learning-Tutorial
b98a10ad0704ad8932c96179cb7e83ea41cc7315
[ "MIT" ]
null
null
null
58.246353
171
0.547437
[ [ [ "# **Convolutional Neural Network (CNN)**", "_____no_output_____" ], [ "# Download Dataset", "_____no_output_____" ] ], [ [ "# Download the dataset from Google Drive\n!gdown --id '19CzXudqN58R3D-1G8KeFWk8UDQwlb8is' --output food-11.zip\n\n# Unzip it\n!unzip food-11.zip", "\u001b[1;30;43mStreaming output truncated to the last 5000 lines.\u001b[0m\n inflating: food-11/training/4_165.jpg \n inflating: food-11/training/5_376.jpg \n inflating: food-11/training/2_691.jpg \n inflating: food-11/training/0_541.jpg \n inflating: food-11/training/3_482.jpg \n inflating: food-11/training/0_227.jpg \n inflating: food-11/training/5_410.jpg \n inflating: food-11/training/4_603.jpg \n inflating: food-11/training/8_341.jpg \n inflating: food-11/training/5_1154.jpg \n inflating: food-11/training/9_37.jpg \n inflating: food-11/training/9_152.jpg \n inflating: food-11/training/5_438.jpg \n inflating: food-11/training/9_1287.jpg \n inflating: food-11/training/8_369.jpg \n inflating: food-11/training/2_1455.jpg \n inflating: food-11/training/10_247.jpg \n inflating: food-11/training/7_32.jpg \n inflating: food-11/training/10_521.jpg \n inflating: food-11/training/2_1333.jpg \n inflating: food-11/training/2_861.jpg \n inflating: food-11/training/0_569.jpg \n inflating: food-11/training/6_289.jpg \n inflating: food-11/training/2_875.jpg \n inflating: food-11/training/4_159.jpg \n inflating: food-11/training/2_1327.jpg \n inflating: food-11/training/10_535.jpg \n inflating: food-11/training/9_608.jpg \n inflating: food-11/training/5_1168.jpg \n inflating: food-11/training/10_253.jpg \n inflating: food-11/training/2_1441.jpg \n inflating: food-11/training/7_26.jpg \n inflating: food-11/training/9_1293.jpg \n inflating: food-11/training/7_273.jpg \n inflating: food-11/training/3_657.jpg \n inflating: food-11/training/0_794.jpg \n inflating: food-11/training/2_444.jpg \n inflating: food-11/training/9_387.jpg \n inflating: food-11/training/8_194.jpg \n inflating: food-11/training/2_322.jpg \n inflating: food-11/training/3_131.jpg \n inflating: food-11/training/6_4.jpg \n inflating: food-11/training/2_336.jpg \n inflating: food-11/training/3_125.jpg \n inflating: food-11/training/5_809.jpg \n inflating: food-11/training/9_393.jpg \n inflating: food-11/training/8_180.jpg \n inflating: food-11/training/7_267.jpg \n inflating: food-11/training/0_958.jpg \n inflating: food-11/training/3_643.jpg \n inflating: food-11/training/0_780.jpg \n inflating: food-11/training/2_450.jpg \n inflating: food-11/training/8_816.jpg \n inflating: food-11/training/0_970.jpg \n inflating: food-11/training/2_478.jpg \n inflating: food-11/training/3_95.jpg \n inflating: food-11/training/5_821.jpg \n inflating: food-11/training/9_1046.jpg \n inflating: food-11/training/3_81.jpg \n inflating: food-11/training/3_119.jpg \n inflating: food-11/training/5_835.jpg \n inflating: food-11/training/9_1052.jpg \n inflating: food-11/training/4_398.jpg \n inflating: food-11/training/0_964.jpg \n inflating: food-11/training/4_88.jpg \n inflating: food-11/training/8_802.jpg \n inflating: food-11/training/8_631.jpg \n inflating: food-11/training/9_422.jpg \n inflating: food-11/training/4_63.jpg \n inflating: food-11/training/3_694.jpg \n inflating: food-11/training/0_757.jpg \n inflating: food-11/training/2_487.jpg \n inflating: food-11/training/5_160.jpg \n inflating: food-11/training/4_373.jpg \n inflating: food-11/training/4_415.jpg \n inflating: food-11/training/5_606.jpg \n inflating: food-11/training/1_222.jpg \n inflating: 
food-11/training/9_746.jpg \n inflating: food-11/training/8_555.jpg \n inflating: food-11/training/2_1269.jpg \n inflating: food-11/training/0_433.jpg \n inflating: food-11/training/5_19.jpg \n inflating: food-11/training/5_204.jpg \n inflating: food-11/training/5_562.jpg \n inflating: food-11/training/2_10.jpg \n inflating: food-11/training/4_771.jpg \n inflating: food-11/training/1_146.jpg \n inflating: food-11/training/3_296.jpg \n inflating: food-11/training/0_355.jpg \n inflating: food-11/training/5_1026.jpg \n inflating: food-11/training/8_233.jpg \n inflating: food-11/training/10_309.jpg \n inflating: food-11/training/5_1032.jpg \n inflating: food-11/training/8_227.jpg \n inflating: food-11/training/5_576.jpg \n inflating: food-11/training/4_765.jpg \n inflating: food-11/training/1_152.jpg \n inflating: food-11/training/3_282.jpg \n inflating: food-11/training/0_341.jpg \n inflating: food-11/training/0_427.jpg \n inflating: food-11/training/5_210.jpg \n inflating: food-11/training/9_752.jpg \n inflating: food-11/training/8_541.jpg \n creating: food-11/validation/\n inflating: food-11/validation/5_14.jpg \n inflating: food-11/validation/5_209.jpg \n inflating: food-11/validation/0_358.jpg \n inflating: food-11/validation/5_28.jpg \n inflating: food-11/validation/5_235.jpg \n inflating: food-11/validation/8_202.jpg \n inflating: food-11/validation/2_21.jpg \n inflating: food-11/validation/10_6.jpg \n inflating: food-11/validation/2_35.jpg \n inflating: food-11/validation/8_216.jpg \n inflating: food-11/validation/5_221.jpg \n inflating: food-11/validation/3_258.jpg \n inflating: food-11/validation/3_264.jpg \n inflating: food-11/validation/3_270.jpg \n inflating: food-11/validation/10_112.jpg \n inflating: food-11/validation/8_54.jpg \n inflating: food-11/validation/6_79.jpg \n inflating: food-11/validation/4_218.jpg \n inflating: food-11/validation/1_70.jpg \n inflating: food-11/validation/1_64.jpg \n inflating: food-11/validation/10_106.jpg \n inflating: food-11/validation/8_40.jpg \n inflating: food-11/validation/0_166.jpg \n inflating: food-11/validation/8_68.jpg \n inflating: food-11/validation/9_213.jpg \n inflating: food-11/validation/6_45.jpg \n inflating: food-11/validation/4_224.jpg \n inflating: food-11/validation/10_16.jpg \n inflating: food-11/validation/4_230.jpg \n inflating: food-11/validation/1_58.jpg \n inflating: food-11/validation/9_207.jpg \n inflating: food-11/validation/6_51.jpg \n inflating: food-11/validation/0_172.jpg \n inflating: food-11/validation/2_249.jpg \n inflating: food-11/validation/0_199.jpg \n inflating: food-11/validation/8_97.jpg \n inflating: food-11/validation/0_9.jpg \n inflating: food-11/validation/8_83.jpg \n inflating: food-11/validation/6_86.jpg \n inflating: food-11/validation/2_275.jpg \n inflating: food-11/validation/6_137.jpg \n inflating: food-11/validation/6_123.jpg \n inflating: food-11/validation/2_261.jpg \n inflating: food-11/validation/6_92.jpg \n inflating: food-11/validation/8_148.jpg \n inflating: food-11/validation/3_75.jpg \n inflating: food-11/validation/8_0.jpg \n inflating: food-11/validation/2_498.jpg \n inflating: food-11/validation/9_429.jpg \n inflating: food-11/validation/4_68.jpg \n inflating: food-11/validation/3_61.jpg \n inflating: food-11/validation/3_49.jpg \n inflating: food-11/validation/8_174.jpg \n inflating: food-11/validation/9_367.jpg \n inflating: food-11/validation/9_401.jpg \n inflating: food-11/validation/5_143.jpg \n inflating: food-11/validation/4_40.jpg \n inflating: food-11/validation/5_157.jpg 
\n inflating: food-11/validation/4_54.jpg \n inflating: food-11/validation/9_415.jpg \n inflating: food-11/validation/8_160.jpg \n inflating: food-11/validation/9_373.jpg \n inflating: food-11/validation/9_398.jpg \n inflating: food-11/validation/2_329.jpg \n inflating: food-11/validation/3_112.jpg \n inflating: food-11/validation/2_301.jpg \n inflating: food-11/validation/2_467.jpg \n inflating: food-11/validation/4_83.jpg \n inflating: food-11/validation/5_180.jpg \n inflating: food-11/validation/2_473.jpg \n inflating: food-11/validation/4_97.jpg \n inflating: food-11/validation/5_194.jpg \n inflating: food-11/validation/3_106.jpg \n inflating: food-11/validation/2_315.jpg \n inflating: food-11/validation/0_18.jpg \n inflating: food-11/validation/7_11.jpg \n inflating: food-11/validation/9_159.jpg \n inflating: food-11/validation/9_28.jpg \n inflating: food-11/validation/0_238.jpg \n inflating: food-11/validation/5_369.jpg \n inflating: food-11/validation/5_341.jpg \n inflating: food-11/validation/4_152.jpg \n inflating: food-11/validation/0_24.jpg \n inflating: food-11/validation/9_165.jpg \n inflating: food-11/validation/5_427.jpg \n inflating: food-11/validation/0_210.jpg \n inflating: food-11/validation/5_433.jpg \n inflating: food-11/validation/0_204.jpg \n inflating: food-11/validation/7_39.jpg \n inflating: food-11/validation/9_171.jpg \n inflating: food-11/validation/9_14.jpg \n inflating: food-11/validation/0_30.jpg \n inflating: food-11/validation/5_355.jpg \n inflating: food-11/validation/4_146.jpg \n inflating: food-11/validation/5_382.jpg \n inflating: food-11/validation/4_191.jpg \n inflating: food-11/validation/2_103.jpg \n inflating: food-11/validation/3_310.jpg \n inflating: food-11/validation/2_117.jpg \n inflating: food-11/validation/3_304.jpg \n inflating: food-11/validation/5_396.jpg \n inflating: food-11/validation/4_185.jpg \n inflating: food-11/validation/3_305.jpg \n inflating: food-11/validation/2_116.jpg \n inflating: food-11/validation/4_184.jpg \n inflating: food-11/validation/5_397.jpg \n inflating: food-11/validation/4_190.jpg \n inflating: food-11/validation/5_383.jpg \n inflating: food-11/validation/3_311.jpg \n inflating: food-11/validation/2_102.jpg \n inflating: food-11/validation/9_15.jpg \n inflating: food-11/validation/9_170.jpg \n inflating: food-11/validation/7_38.jpg \n inflating: food-11/validation/0_205.jpg \n inflating: food-11/validation/5_432.jpg \n inflating: food-11/validation/4_147.jpg \n inflating: food-11/validation/5_354.jpg \n inflating: food-11/validation/0_31.jpg \n inflating: food-11/validation/0_25.jpg \n inflating: food-11/validation/4_153.jpg \n inflating: food-11/validation/5_340.jpg \n inflating: food-11/validation/0_211.jpg \n inflating: food-11/validation/5_426.jpg \n inflating: food-11/validation/9_164.jpg \n inflating: food-11/validation/0_239.jpg \n inflating: food-11/validation/9_29.jpg \n inflating: food-11/validation/5_368.jpg \n inflating: food-11/validation/0_19.jpg \n inflating: food-11/validation/9_158.jpg \n inflating: food-11/validation/7_10.jpg \n inflating: food-11/validation/5_195.jpg \n inflating: food-11/validation/4_96.jpg \n inflating: food-11/validation/2_472.jpg \n inflating: food-11/validation/2_314.jpg \n inflating: food-11/validation/3_107.jpg \n inflating: food-11/validation/2_300.jpg \n inflating: food-11/validation/3_113.jpg \n inflating: food-11/validation/5_181.jpg \n inflating: food-11/validation/4_82.jpg \n inflating: food-11/validation/2_466.jpg \n inflating: food-11/validation/2_328.jpg \n 
inflating: food-11/validation/9_399.jpg \n inflating: food-11/validation/9_414.jpg \n inflating: food-11/validation/4_55.jpg \n inflating: food-11/validation/5_156.jpg \n inflating: food-11/validation/9_372.jpg \n inflating: food-11/validation/8_161.jpg \n inflating: food-11/validation/9_366.jpg \n inflating: food-11/validation/8_175.jpg \n inflating: food-11/validation/3_48.jpg \n inflating: food-11/validation/4_41.jpg \n inflating: food-11/validation/5_142.jpg \n inflating: food-11/validation/9_400.jpg \n inflating: food-11/validation/4_69.jpg \n inflating: food-11/validation/9_428.jpg \n inflating: food-11/validation/3_60.jpg \n inflating: food-11/validation/8_1.jpg \n inflating: food-11/validation/3_74.jpg \n inflating: food-11/validation/8_149.jpg \n inflating: food-11/validation/2_499.jpg \n inflating: food-11/validation/6_122.jpg \n inflating: food-11/validation/6_93.jpg \n inflating: food-11/validation/2_260.jpg \n inflating: food-11/validation/2_274.jpg \n inflating: food-11/validation/6_87.jpg \n inflating: food-11/validation/6_136.jpg \n inflating: food-11/validation/8_82.jpg \n inflating: food-11/validation/8_96.jpg \n inflating: food-11/validation/0_198.jpg \n inflating: food-11/validation/2_248.jpg \n inflating: food-11/validation/0_8.jpg \n inflating: food-11/validation/1_59.jpg \n inflating: food-11/validation/4_231.jpg \n inflating: food-11/validation/10_17.jpg \n inflating: food-11/validation/0_173.jpg \n inflating: food-11/validation/6_50.jpg \n inflating: food-11/validation/9_206.jpg \n inflating: food-11/validation/6_44.jpg \n inflating: food-11/validation/9_212.jpg \n inflating: food-11/validation/8_69.jpg \n inflating: food-11/validation/0_167.jpg \n inflating: food-11/validation/4_225.jpg \n inflating: food-11/validation/1_65.jpg \n inflating: food-11/validation/10_107.jpg \n inflating: food-11/validation/8_41.jpg \n inflating: food-11/validation/6_78.jpg \n inflating: food-11/validation/10_113.jpg \n inflating: food-11/validation/8_55.jpg \n inflating: food-11/validation/1_71.jpg \n inflating: food-11/validation/4_219.jpg \n inflating: food-11/validation/3_271.jpg \n inflating: food-11/validation/3_265.jpg \n inflating: food-11/validation/3_259.jpg \n inflating: food-11/validation/8_217.jpg \n inflating: food-11/validation/2_34.jpg \n inflating: food-11/validation/5_220.jpg \n inflating: food-11/validation/5_234.jpg \n inflating: food-11/validation/5_29.jpg \n inflating: food-11/validation/10_7.jpg \n inflating: food-11/validation/2_20.jpg \n inflating: food-11/validation/8_203.jpg \n inflating: food-11/validation/5_208.jpg \n inflating: food-11/validation/5_15.jpg \n inflating: food-11/validation/0_359.jpg \n inflating: food-11/validation/8_229.jpg \n inflating: food-11/validation/3_298.jpg \n inflating: food-11/validation/5_17.jpg \n inflating: food-11/validation/5_222.jpg \n inflating: food-11/validation/2_36.jpg \n inflating: food-11/validation/8_215.jpg \n inflating: food-11/validation/8_201.jpg \n inflating: food-11/validation/2_22.jpg \n inflating: food-11/validation/10_5.jpg \n inflating: food-11/validation/5_236.jpg \n inflating: food-11/validation/2_8.jpg \n inflating: food-11/validation/3_273.jpg \n inflating: food-11/validation/3_267.jpg \n inflating: food-11/validation/10_105.jpg \n inflating: food-11/validation/8_43.jpg \n inflating: food-11/validation/9_238.jpg \n inflating: food-11/validation/1_67.jpg \n inflating: food-11/validation/10_29.jpg \n inflating: food-11/validation/1_73.jpg \n inflating: food-11/validation/10_111.jpg \n inflating: 
food-11/validation/8_57.jpg \n inflating: food-11/validation/0_159.jpg \n inflating: food-11/validation/2_289.jpg \n inflating: food-11/validation/10_139.jpg \n inflating: food-11/validation/9_204.jpg \n inflating: food-11/validation/6_52.jpg \n inflating: food-11/validation/0_171.jpg \n inflating: food-11/validation/10_15.jpg \n inflating: food-11/validation/4_233.jpg \n inflating: food-11/validation/4_227.jpg \n inflating: food-11/validation/0_165.jpg \n inflating: food-11/validation/9_210.jpg \n inflating: food-11/validation/6_46.jpg \n inflating: food-11/validation/8_80.jpg \n inflating: food-11/validation/6_108.jpg \n inflating: food-11/validation/8_94.jpg \n inflating: food-11/validation/2_262.jpg \n inflating: food-11/validation/6_91.jpg \n inflating: food-11/validation/1_98.jpg \n inflating: food-11/validation/6_120.jpg \n inflating: food-11/validation/6_134.jpg \n inflating: food-11/validation/6_85.jpg \n inflating: food-11/validation/2_276.jpg \n inflating: food-11/validation/3_62.jpg \n inflating: food-11/validation/5_168.jpg \n inflating: food-11/validation/9_358.jpg \n inflating: food-11/validation/8_3.jpg \n inflating: food-11/validation/3_76.jpg \n inflating: food-11/validation/8_163.jpg \n inflating: food-11/validation/9_370.jpg \n inflating: food-11/validation/5_154.jpg \n inflating: food-11/validation/4_57.jpg \n inflating: food-11/validation/9_416.jpg \n inflating: food-11/validation/9_402.jpg \n inflating: food-11/validation/5_140.jpg \n inflating: food-11/validation/4_43.jpg \n inflating: food-11/validation/8_177.jpg \n inflating: food-11/validation/9_364.jpg \n inflating: food-11/validation/3_139.jpg \n inflating: food-11/validation/2_458.jpg \n inflating: food-11/validation/8_188.jpg \n inflating: food-11/validation/3_105.jpg \n inflating: food-11/validation/2_316.jpg \n inflating: food-11/validation/2_470.jpg \n inflating: food-11/validation/4_94.jpg \n inflating: food-11/validation/5_197.jpg \n inflating: food-11/validation/2_464.jpg \n inflating: food-11/validation/4_80.jpg \n inflating: food-11/validation/5_183.jpg \n inflating: food-11/validation/3_111.jpg \n inflating: food-11/validation/2_302.jpg \n inflating: food-11/validation/3_89.jpg \n inflating: food-11/validation/4_179.jpg \n inflating: food-11/validation/5_418.jpg \n inflating: food-11/validation/7_12.jpg \n inflating: food-11/validation/0_33.jpg \n inflating: food-11/validation/5_356.jpg \n inflating: food-11/validation/4_145.jpg \n inflating: food-11/validation/5_430.jpg \n inflating: food-11/validation/0_207.jpg \n inflating: food-11/validation/9_172.jpg \n inflating: food-11/validation/9_17.jpg \n inflating: food-11/validation/9_166.jpg \n inflating: food-11/validation/5_424.jpg \n inflating: food-11/validation/0_213.jpg \n inflating: food-11/validation/5_342.jpg \n inflating: food-11/validation/4_151.jpg \n inflating: food-11/validation/0_27.jpg \n inflating: food-11/validation/2_128.jpg \n inflating: food-11/validation/9_199.jpg \n inflating: food-11/validation/5_395.jpg \n inflating: food-11/validation/4_186.jpg \n inflating: food-11/validation/2_114.jpg \n inflating: food-11/validation/3_307.jpg \n inflating: food-11/validation/2_100.jpg \n inflating: food-11/validation/3_313.jpg \n inflating: food-11/validation/5_381.jpg \n inflating: food-11/validation/4_192.jpg \n inflating: food-11/validation/3_312.jpg \n inflating: food-11/validation/2_101.jpg \n inflating: food-11/validation/4_193.jpg \n inflating: food-11/validation/5_380.jpg \n inflating: food-11/validation/4_187.jpg \n inflating: 
food-11/validation/5_394.jpg \n inflating: food-11/validation/3_306.jpg \n inflating: food-11/validation/2_115.jpg \n inflating: food-11/validation/9_198.jpg \n inflating: food-11/validation/2_129.jpg \n inflating: food-11/validation/0_212.jpg \n inflating: food-11/validation/5_425.jpg \n inflating: food-11/validation/9_167.jpg \n inflating: food-11/validation/0_26.jpg \n inflating: food-11/validation/4_150.jpg \n inflating: food-11/validation/5_343.jpg \n inflating: food-11/validation/4_144.jpg \n inflating: food-11/validation/5_357.jpg \n inflating: food-11/validation/0_32.jpg \n inflating: food-11/validation/9_16.jpg \n inflating: food-11/validation/9_173.jpg \n inflating: food-11/validation/0_206.jpg \n inflating: food-11/validation/5_431.jpg \n inflating: food-11/validation/7_13.jpg \n inflating: food-11/validation/5_419.jpg \n inflating: food-11/validation/4_178.jpg \n inflating: food-11/validation/5_182.jpg \n inflating: food-11/validation/4_81.jpg \n inflating: food-11/validation/2_465.jpg \n inflating: food-11/validation/3_88.jpg \n inflating: food-11/validation/2_303.jpg \n inflating: food-11/validation/3_110.jpg \n inflating: food-11/validation/2_317.jpg \n inflating: food-11/validation/3_104.jpg \n inflating: food-11/validation/5_196.jpg \n inflating: food-11/validation/4_95.jpg \n inflating: food-11/validation/2_471.jpg \n inflating: food-11/validation/2_459.jpg \n inflating: food-11/validation/8_189.jpg \n inflating: food-11/validation/3_138.jpg \n inflating: food-11/validation/4_42.jpg \n inflating: food-11/validation/5_141.jpg \n inflating: food-11/validation/9_403.jpg \n inflating: food-11/validation/9_365.jpg \n inflating: food-11/validation/8_176.jpg \n inflating: food-11/validation/9_371.jpg \n inflating: food-11/validation/8_162.jpg \n inflating: food-11/validation/9_417.jpg \n inflating: food-11/validation/4_56.jpg \n inflating: food-11/validation/5_155.jpg \n inflating: food-11/validation/3_77.jpg \n inflating: food-11/validation/8_2.jpg \n inflating: food-11/validation/9_359.jpg \n inflating: food-11/validation/3_63.jpg \n inflating: food-11/validation/5_169.jpg \n inflating: food-11/validation/6_135.jpg \n inflating: food-11/validation/2_277.jpg \n inflating: food-11/validation/6_84.jpg \n inflating: food-11/validation/6_90.jpg \n inflating: food-11/validation/2_263.jpg \n inflating: food-11/validation/6_121.jpg \n inflating: food-11/validation/1_99.jpg \n inflating: food-11/validation/6_109.jpg \n inflating: food-11/validation/8_95.jpg \n inflating: food-11/validation/8_81.jpg \n inflating: food-11/validation/4_226.jpg \n inflating: food-11/validation/6_47.jpg \n inflating: food-11/validation/9_211.jpg \n inflating: food-11/validation/0_164.jpg \n inflating: food-11/validation/0_170.jpg \n inflating: food-11/validation/6_53.jpg \n inflating: food-11/validation/9_205.jpg \n inflating: food-11/validation/10_138.jpg \n inflating: food-11/validation/4_232.jpg \n inflating: food-11/validation/10_14.jpg \n inflating: food-11/validation/1_72.jpg \n inflating: food-11/validation/2_288.jpg \n inflating: food-11/validation/0_158.jpg \n inflating: food-11/validation/10_110.jpg \n inflating: food-11/validation/8_56.jpg \n inflating: food-11/validation/9_239.jpg \n inflating: food-11/validation/10_104.jpg \n inflating: food-11/validation/8_42.jpg \n inflating: food-11/validation/10_28.jpg \n inflating: food-11/validation/1_66.jpg \n inflating: food-11/validation/3_266.jpg \n inflating: food-11/validation/3_272.jpg \n inflating: food-11/validation/2_9.jpg \n inflating: 
food-11/validation/10_4.jpg \n inflating: food-11/validation/2_23.jpg \n inflating: food-11/validation/8_200.jpg \n inflating: food-11/validation/5_237.jpg \n inflating: food-11/validation/5_223.jpg \n inflating: food-11/validation/8_214.jpg \n inflating: food-11/validation/2_37.jpg \n inflating: food-11/validation/3_299.jpg \n inflating: food-11/validation/5_16.jpg \n inflating: food-11/validation/8_228.jpg \n inflating: food-11/validation/2_33.jpg \n inflating: food-11/validation/8_210.jpg \n inflating: food-11/validation/5_227.jpg \n inflating: food-11/validation/5_233.jpg \n inflating: food-11/validation/8_204.jpg \n inflating: food-11/validation/10_0.jpg \n inflating: food-11/validation/2_27.jpg \n inflating: food-11/validation/3_289.jpg \n inflating: food-11/validation/5_12.jpg \n inflating: food-11/validation/8_238.jpg \n inflating: food-11/validation/3_276.jpg \n inflating: food-11/validation/3_262.jpg \n inflating: food-11/validation/4_236.jpg \n inflating: food-11/validation/10_10.jpg \n inflating: food-11/validation/6_57.jpg \n inflating: food-11/validation/9_201.jpg \n inflating: food-11/validation/0_174.jpg \n inflating: food-11/validation/0_160.jpg \n inflating: food-11/validation/10_128.jpg \n inflating: food-11/validation/6_43.jpg \n inflating: food-11/validation/9_215.jpg \n inflating: food-11/validation/4_222.jpg \n inflating: food-11/validation/1_62.jpg \n inflating: food-11/validation/0_148.jpg \n inflating: food-11/validation/2_298.jpg \n inflating: food-11/validation/10_100.jpg \n inflating: food-11/validation/8_46.jpg \n inflating: food-11/validation/10_114.jpg \n inflating: food-11/validation/8_52.jpg \n inflating: food-11/validation/9_229.jpg \n inflating: food-11/validation/10_38.jpg \n inflating: food-11/validation/1_76.jpg \n inflating: food-11/validation/6_125.jpg \n inflating: food-11/validation/2_267.jpg \n inflating: food-11/validation/6_94.jpg \n inflating: food-11/validation/6_80.jpg \n inflating: food-11/validation/2_273.jpg \n inflating: food-11/validation/6_131.jpg \n inflating: food-11/validation/1_89.jpg \n inflating: food-11/validation/6_119.jpg \n inflating: food-11/validation/8_85.jpg \n inflating: food-11/validation/8_91.jpg \n inflating: food-11/validation/5_151.jpg \n inflating: food-11/validation/4_52.jpg \n inflating: food-11/validation/9_413.jpg \n inflating: food-11/validation/9_375.jpg \n inflating: food-11/validation/8_166.jpg \n inflating: food-11/validation/9_361.jpg \n inflating: food-11/validation/8_172.jpg \n inflating: food-11/validation/9_407.jpg \n inflating: food-11/validation/5_145.jpg \n inflating: food-11/validation/4_46.jpg \n inflating: food-11/validation/3_67.jpg \n inflating: food-11/validation/9_349.jpg \n inflating: food-11/validation/8_6.jpg \n inflating: food-11/validation/3_73.jpg \n inflating: food-11/validation/5_179.jpg \n inflating: food-11/validation/4_91.jpg \n inflating: food-11/validation/2_475.jpg \n inflating: food-11/validation/5_192.jpg \n inflating: food-11/validation/3_98.jpg \n inflating: food-11/validation/2_313.jpg \n inflating: food-11/validation/3_100.jpg \n inflating: food-11/validation/2_307.jpg \n inflating: food-11/validation/3_114.jpg \n inflating: food-11/validation/4_85.jpg \n inflating: food-11/validation/2_461.jpg \n inflating: food-11/validation/5_186.jpg \n inflating: food-11/validation/2_449.jpg \n inflating: food-11/validation/8_199.jpg \n inflating: food-11/validation/6_9.jpg \n inflating: food-11/validation/3_128.jpg \n inflating: food-11/validation/5_435.jpg \n inflating: 
food-11/validation/0_202.jpg \n inflating: food-11/validation/9_12.jpg \n inflating: food-11/validation/9_177.jpg \n inflating: food-11/validation/0_36.jpg \n inflating: food-11/validation/4_140.jpg \n inflating: food-11/validation/5_353.jpg \n inflating: food-11/validation/4_154.jpg \n inflating: food-11/validation/5_347.jpg \n inflating: food-11/validation/0_22.jpg \n inflating: food-11/validation/9_163.jpg \n inflating: food-11/validation/5_421.jpg \n inflating: food-11/validation/0_216.jpg \n inflating: food-11/validation/5_409.jpg \n inflating: food-11/validation/4_168.jpg \n inflating: food-11/validation/7_17.jpg \n inflating: food-11/validation/3_302.jpg \n inflating: food-11/validation/2_111.jpg \n inflating: food-11/validation/4_183.jpg \n inflating: food-11/validation/5_390.jpg \n inflating: food-11/validation/4_197.jpg \n inflating: food-11/validation/5_384.jpg \n inflating: food-11/validation/3_316.jpg \n inflating: food-11/validation/2_105.jpg \n inflating: food-11/validation/9_188.jpg \n inflating: food-11/validation/2_139.jpg \n inflating: food-11/validation/2_138.jpg \n inflating: food-11/validation/9_189.jpg \n inflating: food-11/validation/5_385.jpg \n inflating: food-11/validation/4_196.jpg \n inflating: food-11/validation/2_104.jpg \n inflating: food-11/validation/3_317.jpg \n inflating: food-11/validation/2_110.jpg \n inflating: food-11/validation/3_303.jpg \n inflating: food-11/validation/5_391.jpg \n inflating: food-11/validation/4_182.jpg \n inflating: food-11/validation/4_169.jpg \n inflating: food-11/validation/7_16.jpg \n inflating: food-11/validation/5_408.jpg \n inflating: food-11/validation/0_23.jpg \n inflating: food-11/validation/5_346.jpg \n inflating: food-11/validation/4_155.jpg \n inflating: food-11/validation/0_217.jpg \n inflating: food-11/validation/5_420.jpg \n inflating: food-11/validation/9_162.jpg \n inflating: food-11/validation/9_176.jpg \n inflating: food-11/validation/9_13.jpg \n inflating: food-11/validation/0_203.jpg \n inflating: food-11/validation/5_434.jpg \n inflating: food-11/validation/5_352.jpg \n inflating: food-11/validation/4_141.jpg \n inflating: food-11/validation/0_37.jpg \n inflating: food-11/validation/3_129.jpg \n inflating: food-11/validation/2_448.jpg \n inflating: food-11/validation/6_8.jpg \n inflating: food-11/validation/8_198.jpg \n inflating: food-11/validation/3_115.jpg \n inflating: food-11/validation/2_306.jpg \n inflating: food-11/validation/5_187.jpg \n inflating: food-11/validation/2_460.jpg \n inflating: food-11/validation/4_84.jpg \n inflating: food-11/validation/5_193.jpg \n inflating: food-11/validation/2_474.jpg \n inflating: food-11/validation/4_90.jpg \n inflating: food-11/validation/3_101.jpg \n inflating: food-11/validation/2_312.jpg \n inflating: food-11/validation/3_99.jpg \n inflating: food-11/validation/3_72.jpg \n inflating: food-11/validation/8_7.jpg \n inflating: food-11/validation/5_178.jpg \n inflating: food-11/validation/9_348.jpg \n inflating: food-11/validation/3_66.jpg \n inflating: food-11/validation/8_173.jpg \n inflating: food-11/validation/9_360.jpg \n inflating: food-11/validation/4_47.jpg \n inflating: food-11/validation/5_144.jpg \n inflating: food-11/validation/9_406.jpg \n inflating: food-11/validation/9_412.jpg \n inflating: food-11/validation/4_53.jpg \n inflating: food-11/validation/5_150.jpg \n inflating: food-11/validation/8_167.jpg \n inflating: food-11/validation/9_374.jpg \n inflating: food-11/validation/8_90.jpg \n inflating: food-11/validation/6_118.jpg \n inflating: 
food-11/validation/8_84.jpg \n inflating: food-11/validation/2_272.jpg \n inflating: food-11/validation/6_81.jpg \n inflating: food-11/validation/1_88.jpg \n inflating: food-11/validation/6_130.jpg \n inflating: food-11/validation/6_124.jpg \n inflating: food-11/validation/6_95.jpg \n inflating: food-11/validation/2_266.jpg \n inflating: food-11/validation/9_228.jpg \n inflating: food-11/validation/10_115.jpg \n inflating: food-11/validation/8_53.jpg \n inflating: food-11/validation/1_77.jpg \n inflating: food-11/validation/10_39.jpg \n inflating: food-11/validation/1_63.jpg \n inflating: food-11/validation/10_101.jpg \n inflating: food-11/validation/8_47.jpg \n inflating: food-11/validation/2_299.jpg \n inflating: food-11/validation/0_149.jpg \n inflating: food-11/validation/9_214.jpg \n inflating: food-11/validation/6_42.jpg \n inflating: food-11/validation/10_129.jpg \n inflating: food-11/validation/0_161.jpg \n inflating: food-11/validation/4_223.jpg \n inflating: food-11/validation/10_11.jpg \n inflating: food-11/validation/4_237.jpg \n inflating: food-11/validation/0_175.jpg \n inflating: food-11/validation/9_200.jpg \n inflating: food-11/validation/6_56.jpg \n inflating: food-11/validation/3_263.jpg \n inflating: food-11/validation/3_277.jpg \n inflating: food-11/validation/5_13.jpg \n inflating: food-11/validation/8_239.jpg \n inflating: food-11/validation/3_288.jpg \n inflating: food-11/validation/5_232.jpg \n inflating: food-11/validation/2_26.jpg \n inflating: food-11/validation/10_1.jpg \n inflating: food-11/validation/8_205.jpg \n inflating: food-11/validation/8_211.jpg \n inflating: food-11/validation/2_32.jpg \n inflating: food-11/validation/5_226.jpg \n inflating: food-11/validation/8_207.jpg \n inflating: food-11/validation/0_361.jpg \n inflating: food-11/validation/10_3.jpg \n inflating: food-11/validation/2_24.jpg \n inflating: food-11/validation/5_230.jpg \n inflating: food-11/validation/5_224.jpg \n inflating: food-11/validation/5_39.jpg \n inflating: food-11/validation/2_30.jpg \n inflating: food-11/validation/8_213.jpg \n inflating: food-11/validation/2_18.jpg \n inflating: food-11/validation/5_11.jpg \n inflating: food-11/validation/5_218.jpg \n inflating: food-11/validation/0_349.jpg \n inflating: food-11/validation/3_261.jpg \n inflating: food-11/validation/3_275.jpg \n inflating: food-11/validation/3_249.jpg \n inflating: food-11/validation/1_49.jpg \n inflating: food-11/validation/4_221.jpg \n inflating: food-11/validation/0_163.jpg \n inflating: food-11/validation/6_40.jpg \n inflating: food-11/validation/9_216.jpg \n inflating: food-11/validation/8_79.jpg \n inflating: food-11/validation/6_54.jpg \n inflating: food-11/validation/9_202.jpg \n inflating: food-11/validation/0_177.jpg \n inflating: food-11/validation/4_235.jpg \n inflating: food-11/validation/10_13.jpg \n inflating: food-11/validation/1_75.jpg \n inflating: food-11/validation/10_117.jpg \n inflating: food-11/validation/8_51.jpg \n inflating: food-11/validation/10_103.jpg \n inflating: food-11/validation/8_45.jpg \n inflating: food-11/validation/6_68.jpg \n inflating: food-11/validation/1_61.jpg \n inflating: food-11/validation/4_209.jpg \n inflating: food-11/validation/6_132.jpg \n inflating: food-11/validation/6_83.jpg \n inflating: food-11/validation/2_270.jpg \n inflating: food-11/validation/2_264.jpg \n inflating: food-11/validation/6_97.jpg \n inflating: food-11/validation/6_126.jpg \n inflating: food-11/validation/8_92.jpg \n inflating: food-11/validation/8_86.jpg \n inflating: 
food-11/validation/2_258.jpg \n inflating: food-11/validation/0_188.jpg \n inflating: food-11/validation/9_404.jpg \n inflating: food-11/validation/5_146.jpg \n inflating: food-11/validation/4_45.jpg \n inflating: food-11/validation/9_362.jpg \n inflating: food-11/validation/8_171.jpg \n inflating: food-11/validation/9_376.jpg \n inflating: food-11/validation/8_165.jpg \n inflating: food-11/validation/3_58.jpg \n inflating: food-11/validation/5_152.jpg \n inflating: food-11/validation/4_51.jpg \n inflating: food-11/validation/9_410.jpg \n inflating: food-11/validation/4_79.jpg \n inflating: food-11/validation/9_438.jpg \n inflating: food-11/validation/3_70.jpg \n inflating: food-11/validation/8_5.jpg \n inflating: food-11/validation/3_64.jpg \n inflating: food-11/validation/8_159.jpg \n inflating: food-11/validation/2_489.jpg \n inflating: food-11/validation/4_86.jpg \n inflating: food-11/validation/2_462.jpg \n inflating: food-11/validation/5_185.jpg \n inflating: food-11/validation/2_304.jpg \n inflating: food-11/validation/3_117.jpg \n inflating: food-11/validation/2_310.jpg \n inflating: food-11/validation/3_103.jpg \n inflating: food-11/validation/4_92.jpg \n inflating: food-11/validation/2_476.jpg \n inflating: food-11/validation/5_191.jpg \n inflating: food-11/validation/2_338.jpg \n inflating: food-11/validation/9_389.jpg \n inflating: food-11/validation/7_28.jpg \n inflating: food-11/validation/9_160.jpg \n inflating: food-11/validation/5_422.jpg \n inflating: food-11/validation/0_215.jpg \n inflating: food-11/validation/4_157.jpg \n inflating: food-11/validation/5_344.jpg \n inflating: food-11/validation/0_21.jpg \n inflating: food-11/validation/0_35.jpg \n inflating: food-11/validation/4_143.jpg \n inflating: food-11/validation/5_350.jpg \n inflating: food-11/validation/5_436.jpg \n inflating: food-11/validation/0_201.jpg \n inflating: food-11/validation/9_11.jpg \n inflating: food-11/validation/9_174.jpg \n inflating: food-11/validation/0_229.jpg \n inflating: food-11/validation/7_14.jpg \n inflating: food-11/validation/9_39.jpg \n inflating: food-11/validation/5_378.jpg \n inflating: food-11/validation/9_148.jpg \n inflating: food-11/validation/3_315.jpg \n inflating: food-11/validation/2_106.jpg \n inflating: food-11/validation/4_194.jpg \n inflating: food-11/validation/5_387.jpg \n inflating: food-11/validation/4_180.jpg \n inflating: food-11/validation/5_393.jpg \n inflating: food-11/validation/3_301.jpg \n inflating: food-11/validation/2_112.jpg \n inflating: food-11/validation/4_8.jpg \n inflating: food-11/validation/4_9.jpg \n inflating: food-11/validation/5_392.jpg \n inflating: food-11/validation/4_181.jpg \n inflating: food-11/validation/2_113.jpg \n inflating: food-11/validation/3_300.jpg \n inflating: food-11/validation/2_107.jpg \n inflating: food-11/validation/3_314.jpg \n inflating: food-11/validation/5_386.jpg \n inflating: food-11/validation/4_195.jpg \n inflating: food-11/validation/9_149.jpg \n inflating: food-11/validation/9_38.jpg \n inflating: food-11/validation/7_15.jpg \n inflating: food-11/validation/0_228.jpg \n inflating: food-11/validation/5_379.jpg \n inflating: food-11/validation/5_351.jpg \n inflating: food-11/validation/4_142.jpg \n inflating: food-11/validation/0_34.jpg \n inflating: food-11/validation/9_175.jpg \n inflating: food-11/validation/9_10.jpg \n inflating: food-11/validation/0_200.jpg \n inflating: food-11/validation/5_437.jpg \n inflating: food-11/validation/0_214.jpg \n inflating: food-11/validation/5_423.jpg \n inflating: 
food-11/validation/9_161.jpg \n inflating: food-11/validation/7_29.jpg \n inflating: food-11/validation/0_20.jpg \n inflating: food-11/validation/5_345.jpg \n inflating: food-11/validation/4_156.jpg \n inflating: food-11/validation/9_388.jpg \n inflating: food-11/validation/2_339.jpg \n inflating: food-11/validation/3_102.jpg \n inflating: food-11/validation/2_311.jpg \n inflating: food-11/validation/5_190.jpg \n inflating: food-11/validation/2_477.jpg \n inflating: food-11/validation/4_93.jpg \n inflating: food-11/validation/5_184.jpg \n inflating: food-11/validation/2_463.jpg \n inflating: food-11/validation/4_87.jpg \n inflating: food-11/validation/3_116.jpg \n inflating: food-11/validation/2_305.jpg \n inflating: food-11/validation/8_158.jpg \n inflating: food-11/validation/3_65.jpg \n inflating: food-11/validation/2_488.jpg \n inflating: food-11/validation/9_439.jpg \n inflating: food-11/validation/4_78.jpg \n inflating: food-11/validation/8_4.jpg \n inflating: food-11/validation/3_71.jpg \n inflating: food-11/validation/3_59.jpg \n inflating: food-11/validation/8_164.jpg \n inflating: food-11/validation/9_377.jpg \n inflating: food-11/validation/9_411.jpg \n inflating: food-11/validation/4_50.jpg \n inflating: food-11/validation/5_153.jpg \n inflating: food-11/validation/4_44.jpg \n inflating: food-11/validation/5_147.jpg \n inflating: food-11/validation/9_405.jpg \n inflating: food-11/validation/8_170.jpg \n inflating: food-11/validation/9_363.jpg \n inflating: food-11/validation/0_189.jpg \n inflating: food-11/validation/2_259.jpg \n inflating: food-11/validation/8_87.jpg \n inflating: food-11/validation/8_93.jpg \n inflating: food-11/validation/6_96.jpg \n inflating: food-11/validation/2_265.jpg \n inflating: food-11/validation/6_127.jpg \n inflating: food-11/validation/6_133.jpg \n inflating: food-11/validation/2_271.jpg \n inflating: food-11/validation/6_82.jpg \n inflating: food-11/validation/6_69.jpg \n inflating: food-11/validation/10_102.jpg \n inflating: food-11/validation/8_44.jpg \n inflating: food-11/validation/4_208.jpg \n inflating: food-11/validation/1_60.jpg \n inflating: food-11/validation/1_74.jpg \n inflating: food-11/validation/10_116.jpg \n inflating: food-11/validation/8_50.jpg \n inflating: food-11/validation/0_176.jpg \n inflating: food-11/validation/9_203.jpg \n inflating: food-11/validation/6_55.jpg \n inflating: food-11/validation/8_78.jpg \n inflating: food-11/validation/10_12.jpg \n inflating: food-11/validation/4_234.jpg \n inflating: food-11/validation/4_220.jpg \n inflating: food-11/validation/1_48.jpg \n inflating: food-11/validation/9_217.jpg \n inflating: food-11/validation/6_41.jpg \n inflating: food-11/validation/0_162.jpg \n inflating: food-11/validation/3_248.jpg \n inflating: food-11/validation/3_274.jpg \n inflating: food-11/validation/3_260.jpg \n inflating: food-11/validation/5_219.jpg \n inflating: food-11/validation/0_348.jpg \n inflating: food-11/validation/2_19.jpg \n inflating: food-11/validation/5_10.jpg \n inflating: food-11/validation/5_38.jpg \n inflating: food-11/validation/5_225.jpg \n inflating: food-11/validation/8_212.jpg \n inflating: food-11/validation/2_31.jpg \n inflating: food-11/validation/2_25.jpg \n inflating: food-11/validation/10_2.jpg \n inflating: food-11/validation/0_360.jpg \n inflating: food-11/validation/8_206.jpg \n inflating: food-11/validation/5_231.jpg \n inflating: food-11/validation/1_128.jpg \n inflating: food-11/validation/5_77.jpg \n inflating: food-11/validation/5_63.jpg \n inflating: 
food-11/validation/8_249.jpg \n inflating: food-11/validation/1_114.jpg \n inflating: food-11/validation/0_307.jpg \n inflating: food-11/validation/2_42.jpg \n inflating: food-11/validation/8_261.jpg \n inflating: food-11/validation/5_256.jpg \n inflating: food-11/validation/5_242.jpg \n inflating: food-11/validation/8_275.jpg \n inflating: food-11/validation/1_100.jpg \n inflating: food-11/validation/0_313.jpg \n inflating: food-11/validation/2_56.jpg \n inflating: food-11/validation/2_81.jpg \n inflating: food-11/validation/3_207.jpg \n inflating: food-11/validation/5_88.jpg \n inflating: food-11/validation/5_295.jpg \n inflating: food-11/validation/5_281.jpg \n inflating: food-11/validation/2_95.jpg \n inflating: food-11/validation/3_213.jpg \n inflating: food-11/validation/1_13.jpg \n inflating: food-11/validation/0_139.jpg \n inflating: food-11/validation/10_171.jpg \n inflating: food-11/validation/8_37.jpg \n inflating: food-11/validation/10_165.jpg \n inflating: food-11/validation/8_23.jpg \n inflating: food-11/validation/9_258.jpg \n inflating: food-11/validation/9_3.jpg \n inflating: food-11/validation/10_49.jpg \n inflating: food-11/validation/10_61.jpg \n inflating: food-11/validation/4_247.jpg \n inflating: food-11/validation/9_270.jpg \n inflating: food-11/validation/6_26.jpg \n inflating: food-11/validation/0_105.jpg \n inflating: food-11/validation/0_111.jpg \n inflating: food-11/validation/10_159.jpg \n inflating: food-11/validation/9_264.jpg \n inflating: food-11/validation/6_32.jpg \n inflating: food-11/validation/10_75.jpg \n inflating: food-11/validation/4_253.jpg \n inflating: food-11/validation/4_284.jpg \n inflating: food-11/validation/2_216.jpg \n inflating: food-11/validation/2_202.jpg \n inflating: food-11/validation/4_290.jpg \n inflating: food-11/validation/6_140.jpg \n inflating: food-11/validation/3_16.jpg \n inflating: food-11/validation/9_338.jpg \n inflating: food-11/validation/2_389.jpg \n inflating: food-11/validation/5_108.jpg \n inflating: food-11/validation/5_120.jpg \n inflating: food-11/validation/4_23.jpg \n inflating: food-11/validation/9_462.jpg \n inflating: food-11/validation/8_117.jpg \n inflating: food-11/validation/9_304.jpg \n inflating: food-11/validation/8_103.jpg \n inflating: food-11/validation/9_310.jpg \n inflating: food-11/validation/9_476.jpg \n inflating: food-11/validation/5_134.jpg \n inflating: food-11/validation/4_37.jpg \n inflating: food-11/validation/2_438.jpg \n inflating: food-11/validation/3_159.jpg \n inflating: food-11/validation/9_489.jpg \n inflating: food-11/validation/2_404.jpg \n inflating: food-11/validation/3_171.jpg \n inflating: food-11/validation/2_362.jpg \n inflating: food-11/validation/3_165.jpg \n inflating: food-11/validation/2_376.jpg \n inflating: food-11/validation/2_410.jpg \n inflating: food-11/validation/7_72.jpg \n inflating: food-11/validation/10_207.jpg \n inflating: food-11/validation/8_329.jpg \n inflating: food-11/validation/4_119.jpg \n inflating: food-11/validation/7_66.jpg \n inflating: food-11/validation/10_213.jpg \n inflating: food-11/validation/5_444.jpg \n inflating: food-11/validation/0_273.jpg \n inflating: food-11/validation/9_106.jpg \n inflating: food-11/validation/9_63.jpg \n inflating: food-11/validation/8_315.jpg \n inflating: food-11/validation/0_47.jpg \n inflating: food-11/validation/5_322.jpg \n inflating: food-11/validation/4_131.jpg \n inflating: food-11/validation/5_336.jpg \n inflating: food-11/validation/4_125.jpg \n inflating: food-11/validation/0_53.jpg \n inflating: 
food-11/validation/9_112.jpg \n inflating: food-11/validation/9_77.jpg \n inflating: food-11/validation/8_301.jpg \n inflating: food-11/validation/0_267.jpg \n inflating: food-11/validation/3_8.jpg \n inflating: food-11/validation/9_88.jpg \n inflating: food-11/validation/2_148.jpg \n inflating: food-11/validation/0_298.jpg \n inflating: food-11/validation/2_160.jpg \n inflating: food-11/validation/0_84.jpg \n inflating: food-11/validation/0_90.jpg \n inflating: food-11/validation/2_174.jpg \n inflating: food-11/validation/0_91.jpg \n inflating: food-11/validation/2_175.jpg \n inflating: food-11/validation/2_161.jpg \n inflating: food-11/validation/0_85.jpg \n inflating: food-11/validation/3_9.jpg \n inflating: food-11/validation/0_299.jpg \n inflating: food-11/validation/2_149.jpg \n inflating: food-11/validation/9_89.jpg \n inflating: food-11/validation/0_52.jpg \n inflating: food-11/validation/4_124.jpg \n inflating: food-11/validation/5_337.jpg \n inflating: food-11/validation/0_266.jpg \n inflating: food-11/validation/8_300.jpg \n inflating: food-11/validation/9_76.jpg \n inflating: food-11/validation/9_113.jpg \n inflating: food-11/validation/8_314.jpg \n inflating: food-11/validation/9_62.jpg \n inflating: food-11/validation/9_107.jpg \n inflating: food-11/validation/0_272.jpg \n inflating: food-11/validation/5_445.jpg \n inflating: food-11/validation/4_130.jpg \n inflating: food-11/validation/5_323.jpg \n inflating: food-11/validation/0_46.jpg \n inflating: food-11/validation/4_118.jpg \n inflating: food-11/validation/10_212.jpg \n inflating: food-11/validation/7_67.jpg \n inflating: food-11/validation/8_328.jpg \n inflating: food-11/validation/10_206.jpg \n inflating: food-11/validation/7_73.jpg \n inflating: food-11/validation/2_377.jpg \n inflating: food-11/validation/3_164.jpg \n inflating: food-11/validation/2_411.jpg \n inflating: food-11/validation/2_405.jpg \n inflating: food-11/validation/2_363.jpg \n inflating: food-11/validation/3_170.jpg \n inflating: food-11/validation/3_158.jpg \n inflating: food-11/validation/9_488.jpg \n inflating: food-11/validation/2_439.jpg \n inflating: food-11/validation/9_311.jpg \n inflating: food-11/validation/8_102.jpg \n inflating: food-11/validation/4_36.jpg \n inflating: food-11/validation/5_135.jpg \n inflating: food-11/validation/9_477.jpg \n inflating: food-11/validation/9_463.jpg \n inflating: food-11/validation/4_22.jpg \n inflating: food-11/validation/5_121.jpg \n inflating: food-11/validation/9_305.jpg \n inflating: food-11/validation/8_116.jpg \n inflating: food-11/validation/2_388.jpg \n inflating: food-11/validation/5_109.jpg \n inflating: food-11/validation/9_339.jpg \n inflating: food-11/validation/3_17.jpg \n inflating: food-11/validation/2_203.jpg \n inflating: food-11/validation/6_141.jpg \n inflating: food-11/validation/4_291.jpg \n inflating: food-11/validation/4_285.jpg \n inflating: food-11/validation/2_217.jpg \n inflating: food-11/validation/6_33.jpg \n inflating: food-11/validation/9_265.jpg \n inflating: food-11/validation/10_158.jpg \n inflating: food-11/validation/0_110.jpg \n inflating: food-11/validation/4_252.jpg \n inflating: food-11/validation/10_74.jpg \n inflating: food-11/validation/4_246.jpg \n inflating: food-11/validation/10_60.jpg \n inflating: food-11/validation/0_104.jpg \n inflating: food-11/validation/6_27.jpg \n inflating: food-11/validation/9_271.jpg \n inflating: food-11/validation/9_2.jpg \n inflating: food-11/validation/9_259.jpg \n inflating: food-11/validation/10_164.jpg \n inflating: 
food-11/validation/8_22.jpg \n inflating: food-11/validation/10_48.jpg \n inflating: food-11/validation/1_12.jpg \n inflating: food-11/validation/10_170.jpg \n inflating: food-11/validation/8_36.jpg \n inflating: food-11/validation/0_138.jpg \n inflating: food-11/validation/5_280.jpg \n inflating: food-11/validation/3_212.jpg \n inflating: food-11/validation/2_94.jpg \n inflating: food-11/validation/3_206.jpg \n inflating: food-11/validation/2_80.jpg \n inflating: food-11/validation/5_294.jpg \n inflating: food-11/validation/5_89.jpg \n inflating: food-11/validation/5_243.jpg \n inflating: food-11/validation/2_57.jpg \n inflating: food-11/validation/0_312.jpg \n inflating: food-11/validation/1_101.jpg \n inflating: food-11/validation/8_274.jpg \n inflating: food-11/validation/8_260.jpg \n inflating: food-11/validation/2_43.jpg \n inflating: food-11/validation/0_306.jpg \n inflating: food-11/validation/1_115.jpg \n inflating: food-11/validation/5_257.jpg \n inflating: food-11/validation/5_62.jpg \n inflating: food-11/validation/8_248.jpg \n inflating: food-11/validation/1_129.jpg \n inflating: food-11/validation/5_76.jpg \n inflating: food-11/validation/2_69.jpg \n inflating: food-11/validation/5_60.jpg \n inflating: food-11/validation/5_74.jpg \n inflating: food-11/validation/5_269.jpg \n inflating: food-11/validation/0_338.jpg \n inflating: food-11/validation/8_276.jpg \n inflating: food-11/validation/1_103.jpg \n inflating: food-11/validation/0_310.jpg \n inflating: food-11/validation/2_55.jpg \n inflating: food-11/validation/5_241.jpg \n inflating: food-11/validation/5_48.jpg \n inflating: food-11/validation/5_255.jpg \n inflating: food-11/validation/1_117.jpg \n inflating: food-11/validation/0_304.jpg \n inflating: food-11/validation/2_41.jpg \n inflating: food-11/validation/8_262.jpg \n inflating: food-11/validation/8_289.jpg \n inflating: food-11/validation/3_238.jpg \n inflating: food-11/validation/2_96.jpg \n inflating: food-11/validation/3_210.jpg \n inflating: food-11/validation/5_282.jpg \n inflating: food-11/validation/5_296.jpg \n inflating: food-11/validation/2_82.jpg \n inflating: food-11/validation/3_204.jpg \n inflating: food-11/validation/10_166.jpg \n inflating: food-11/validation/8_20.jpg \n inflating: food-11/validation/9_0.jpg \n inflating: food-11/validation/10_172.jpg \n inflating: food-11/validation/8_34.jpg \n inflating: food-11/validation/6_19.jpg \n inflating: food-11/validation/1_10.jpg \n inflating: food-11/validation/4_278.jpg \n inflating: food-11/validation/1_38.jpg \n inflating: food-11/validation/10_76.jpg \n inflating: food-11/validation/4_250.jpg \n inflating: food-11/validation/0_112.jpg \n inflating: food-11/validation/9_267.jpg \n inflating: food-11/validation/6_31.jpg \n inflating: food-11/validation/9_273.jpg \n inflating: food-11/validation/6_25.jpg \n inflating: food-11/validation/0_106.jpg \n inflating: food-11/validation/10_62.jpg \n inflating: food-11/validation/4_244.jpg \n inflating: food-11/validation/10_89.jpg \n inflating: food-11/validation/9_298.jpg \n inflating: food-11/validation/2_229.jpg \n inflating: food-11/validation/6_143.jpg \n inflating: food-11/validation/4_293.jpg \n inflating: food-11/validation/10_199.jpg \n inflating: food-11/validation/2_201.jpg \n inflating: food-11/validation/2_215.jpg \n inflating: food-11/validation/4_287.jpg \n inflating: food-11/validation/4_318.jpg \n inflating: food-11/validation/9_449.jpg \n inflating: food-11/validation/3_199.jpg \n inflating: food-11/validation/3_15.jpg \n inflating: 
inflating: food-11/validation/... (unzip output truncated: remaining validation-set images extracted to food-11/validation/)
food-11/validation/1_46.jpg \n inflating: food-11/validation/1_52.jpg \n inflating: food-11/validation/0_178.jpg \n inflating: food-11/validation/10_130.jpg \n inflating: food-11/validation/8_76.jpg \n inflating: food-11/validation/2_280.jpg \n inflating: food-11/validation/0_150.jpg \n inflating: food-11/validation/10_118.jpg \n inflating: food-11/validation/9_225.jpg \n inflating: food-11/validation/6_73.jpg \n inflating: food-11/validation/10_34.jpg \n inflating: food-11/validation/4_212.jpg \n inflating: food-11/validation/10_20.jpg \n inflating: food-11/validation/4_206.jpg \n inflating: food-11/validation/9_231.jpg \n inflating: food-11/validation/6_67.jpg \n inflating: food-11/validation/2_294.jpg \n inflating: food-11/validation/0_144.jpg \n inflating: food-11/validation/1_85.jpg \n inflating: food-11/validation/6_129.jpg \n inflating: food-11/validation/1_91.jpg \n inflating: food-11/validation/6_98.jpg \n inflating: food-11/validation/2_243.jpg \n inflating: food-11/validation/0_193.jpg \n inflating: food-11/validation/6_101.jpg \n inflating: food-11/validation/0_3.jpg \n inflating: food-11/validation/6_115.jpg \n inflating: food-11/validation/2_257.jpg \n inflating: food-11/validation/0_187.jpg \n inflating: food-11/validation/8_89.jpg \n inflating: food-11/validation/3_43.jpg \n inflating: food-11/validation/5_149.jpg \n inflating: food-11/validation/3_57.jpg \n inflating: food-11/validation/9_379.jpg \n inflating: food-11/validation/8_142.jpg \n inflating: food-11/validation/9_351.jpg \n inflating: food-11/validation/9_437.jpg \n inflating: food-11/validation/5_175.jpg \n inflating: food-11/validation/2_492.jpg \n inflating: food-11/validation/4_76.jpg \n inflating: food-11/validation/5_161.jpg \n inflating: food-11/validation/2_486.jpg \n inflating: food-11/validation/4_62.jpg \n inflating: food-11/validation/9_423.jpg \n inflating: food-11/validation/8_156.jpg \n inflating: food-11/validation/9_345.jpg \n inflating: food-11/validation/3_118.jpg \n inflating: food-11/validation/3_80.jpg \n inflating: food-11/validation/4_89.jpg \n inflating: food-11/validation/2_479.jpg \n inflating: food-11/validation/3_94.jpg \n inflating: food-11/validation/8_181.jpg \n inflating: food-11/validation/9_392.jpg \n inflating: food-11/validation/3_124.jpg \n inflating: food-11/validation/2_337.jpg \n inflating: food-11/validation/2_451.jpg \n inflating: food-11/validation/2_445.jpg \n inflating: food-11/validation/6_5.jpg \n inflating: food-11/validation/3_130.jpg \n inflating: food-11/validation/2_323.jpg \n inflating: food-11/validation/8_195.jpg \n inflating: food-11/validation/9_386.jpg \n inflating: food-11/validation/4_158.jpg \n inflating: food-11/validation/7_27.jpg \n inflating: food-11/validation/7_33.jpg \n inflating: food-11/validation/5_439.jpg \n inflating: food-11/validation/5_377.jpg \n inflating: food-11/validation/4_164.jpg \n inflating: food-11/validation/0_12.jpg \n inflating: food-11/validation/9_153.jpg \n inflating: food-11/validation/9_36.jpg \n inflating: food-11/validation/8_340.jpg \n inflating: food-11/validation/5_411.jpg \n inflating: food-11/validation/0_226.jpg \n inflating: food-11/validation/5_405.jpg \n inflating: food-11/validation/0_232.jpg \n inflating: food-11/validation/9_147.jpg \n inflating: food-11/validation/9_22.jpg \n inflating: food-11/validation/5_363.jpg \n inflating: food-11/validation/4_170.jpg \n inflating: food-11/validation/5_388.jpg \n inflating: food-11/validation/2_109.jpg \n inflating: food-11/validation/2_135.jpg \n inflating: 
food-11/validation/3_326.jpg \n inflating: food-11/validation/9_190.jpg \n inflating: food-11/validation/9_184.jpg \n inflating: food-11/validation/2_121.jpg \n inflating: food-11/validation/4_7.jpg \n inflating: food-11/validation/4_6.jpg \n inflating: food-11/validation/2_120.jpg \n inflating: food-11/validation/9_185.jpg \n inflating: food-11/validation/9_191.jpg \n inflating: food-11/validation/2_134.jpg \n inflating: food-11/validation/5_389.jpg \n inflating: food-11/validation/2_108.jpg \n inflating: food-11/validation/9_23.jpg \n inflating: food-11/validation/9_146.jpg \n inflating: food-11/validation/0_233.jpg \n inflating: food-11/validation/5_404.jpg \n inflating: food-11/validation/4_171.jpg \n inflating: food-11/validation/5_362.jpg \n inflating: food-11/validation/0_13.jpg \n inflating: food-11/validation/4_165.jpg \n inflating: food-11/validation/5_376.jpg \n inflating: food-11/validation/0_227.jpg \n inflating: food-11/validation/5_410.jpg \n inflating: food-11/validation/8_341.jpg \n inflating: food-11/validation/9_37.jpg \n inflating: food-11/validation/9_152.jpg \n inflating: food-11/validation/5_438.jpg \n inflating: food-11/validation/7_32.jpg \n inflating: food-11/validation/4_159.jpg \n inflating: food-11/validation/7_26.jpg \n inflating: food-11/validation/2_444.jpg \n inflating: food-11/validation/9_387.jpg \n inflating: food-11/validation/8_194.jpg \n inflating: food-11/validation/2_322.jpg \n inflating: food-11/validation/3_131.jpg \n inflating: food-11/validation/6_4.jpg \n inflating: food-11/validation/2_336.jpg \n inflating: food-11/validation/3_125.jpg \n inflating: food-11/validation/9_393.jpg \n inflating: food-11/validation/8_180.jpg \n inflating: food-11/validation/2_450.jpg \n inflating: food-11/validation/2_478.jpg \n inflating: food-11/validation/3_95.jpg \n inflating: food-11/validation/3_81.jpg \n inflating: food-11/validation/3_119.jpg \n inflating: food-11/validation/4_88.jpg \n inflating: food-11/validation/9_422.jpg \n inflating: food-11/validation/4_63.jpg \n inflating: food-11/validation/2_487.jpg \n inflating: food-11/validation/5_160.jpg \n inflating: food-11/validation/9_344.jpg \n inflating: food-11/validation/8_157.jpg \n inflating: food-11/validation/9_350.jpg \n inflating: food-11/validation/8_143.jpg \n inflating: food-11/validation/4_77.jpg \n inflating: food-11/validation/2_493.jpg \n inflating: food-11/validation/5_174.jpg \n inflating: food-11/validation/9_436.jpg \n inflating: food-11/validation/9_378.jpg \n inflating: food-11/validation/3_56.jpg \n inflating: food-11/validation/3_42.jpg \n inflating: food-11/validation/5_148.jpg \n inflating: food-11/validation/6_114.jpg \n inflating: food-11/validation/8_88.jpg \n inflating: food-11/validation/0_186.jpg \n inflating: food-11/validation/2_256.jpg \n inflating: food-11/validation/0_192.jpg \n inflating: food-11/validation/2_242.jpg \n inflating: food-11/validation/0_2.jpg \n inflating: food-11/validation/6_100.jpg \n inflating: food-11/validation/1_90.jpg \n inflating: food-11/validation/6_128.jpg \n inflating: food-11/validation/6_99.jpg \n inflating: food-11/validation/1_84.jpg \n inflating: food-11/validation/4_207.jpg \n inflating: food-11/validation/10_21.jpg \n inflating: food-11/validation/0_145.jpg \n inflating: food-11/validation/2_295.jpg \n inflating: food-11/validation/6_66.jpg \n inflating: food-11/validation/9_230.jpg \n inflating: food-11/validation/6_72.jpg \n inflating: food-11/validation/9_224.jpg \n inflating: food-11/validation/10_119.jpg \n inflating: 
food-11/validation/0_151.jpg \n inflating: food-11/validation/2_281.jpg \n inflating: food-11/validation/4_213.jpg \n inflating: food-11/validation/10_35.jpg \n inflating: food-11/validation/1_53.jpg \n inflating: food-11/validation/10_131.jpg \n inflating: food-11/validation/8_77.jpg \n inflating: food-11/validation/0_179.jpg \n inflating: food-11/validation/9_218.jpg \n inflating: food-11/validation/10_125.jpg \n inflating: food-11/validation/8_63.jpg \n inflating: food-11/validation/1_47.jpg \n inflating: food-11/validation/3_247.jpg \n inflating: food-11/validation/2_0.jpg \n inflating: food-11/validation/3_253.jpg \n inflating: food-11/validation/8_221.jpg \n inflating: food-11/validation/0_347.jpg \n inflating: food-11/validation/3_284.jpg \n inflating: food-11/validation/5_216.jpg \n inflating: food-11/validation/5_202.jpg \n inflating: food-11/validation/2_16.jpg \n inflating: food-11/validation/0_353.jpg \n inflating: food-11/validation/3_290.jpg \n inflating: food-11/validation/1_140.jpg \n inflating: food-11/validation/8_235.jpg \n inflating: food-11/validation/5_37.jpg \n inflating: food-11/validation/5_23.jpg \n inflating: food-11/validation/8_209.jpg \n inflating: food-11/validation/8_231.jpg \n inflating: food-11/validation/0_357.jpg \n inflating: food-11/validation/3_294.jpg \n inflating: food-11/validation/2_12.jpg \n inflating: food-11/validation/5_206.jpg \n inflating: food-11/validation/5_212.jpg \n inflating: food-11/validation/0_343.jpg \n inflating: food-11/validation/3_280.jpg \n inflating: food-11/validation/8_225.jpg \n inflating: food-11/validation/10_9.jpg \n inflating: food-11/validation/5_27.jpg \n inflating: food-11/validation/5_33.jpg \n inflating: food-11/validation/8_219.jpg \n inflating: food-11/validation/3_257.jpg \n inflating: food-11/validation/2_4.jpg \n inflating: food-11/validation/3_243.jpg \n inflating: food-11/validation/4_217.jpg \n inflating: food-11/validation/10_31.jpg \n inflating: food-11/validation/2_285.jpg \n inflating: food-11/validation/0_155.jpg \n inflating: food-11/validation/6_76.jpg \n inflating: food-11/validation/9_220.jpg \n inflating: food-11/validation/10_109.jpg \n inflating: food-11/validation/6_62.jpg \n inflating: food-11/validation/9_234.jpg \n inflating: food-11/validation/2_291.jpg \n inflating: food-11/validation/0_141.jpg \n inflating: food-11/validation/4_203.jpg \n inflating: food-11/validation/10_25.jpg \n inflating: food-11/validation/1_43.jpg \n inflating: food-11/validation/10_121.jpg \n inflating: food-11/validation/8_67.jpg \n inflating: food-11/validation/0_169.jpg \n inflating: food-11/validation/10_135.jpg \n inflating: food-11/validation/8_73.jpg \n inflating: food-11/validation/9_208.jpg \n inflating: food-11/validation/1_57.jpg \n inflating: food-11/validation/10_19.jpg \n inflating: food-11/validation/6_104.jpg \n inflating: food-11/validation/0_6.jpg \n inflating: food-11/validation/8_98.jpg \n inflating: food-11/validation/2_246.jpg \n inflating: food-11/validation/0_196.jpg \n inflating: food-11/validation/2_252.jpg \n inflating: food-11/validation/0_182.jpg \n inflating: food-11/validation/6_110.jpg \n inflating: food-11/validation/1_80.jpg \n inflating: food-11/validation/6_138.jpg \n inflating: food-11/validation/6_89.jpg \n inflating: food-11/validation/1_94.jpg \n inflating: food-11/validation/9_432.jpg \n inflating: food-11/validation/5_170.jpg \n inflating: food-11/validation/4_73.jpg \n inflating: food-11/validation/2_497.jpg \n inflating: food-11/validation/9_354.jpg \n inflating: 
food-11/validation/8_147.jpg \n inflating: food-11/validation/9_340.jpg \n inflating: food-11/validation/8_153.jpg \n inflating: food-11/validation/5_164.jpg \n inflating: food-11/validation/4_67.jpg \n inflating: food-11/validation/2_483.jpg \n inflating: food-11/validation/9_426.jpg \n inflating: food-11/validation/9_368.jpg \n inflating: food-11/validation/3_46.jpg \n inflating: food-11/validation/3_52.jpg \n inflating: food-11/validation/5_158.jpg \n inflating: food-11/validation/2_454.jpg \n inflating: food-11/validation/9_397.jpg \n inflating: food-11/validation/8_184.jpg \n inflating: food-11/validation/2_332.jpg \n inflating: food-11/validation/3_121.jpg \n inflating: food-11/validation/6_0.jpg \n inflating: food-11/validation/2_326.jpg \n inflating: food-11/validation/3_135.jpg \n inflating: food-11/validation/9_383.jpg \n inflating: food-11/validation/8_190.jpg \n inflating: food-11/validation/2_440.jpg \n inflating: food-11/validation/2_468.jpg \n inflating: food-11/validation/3_85.jpg \n inflating: food-11/validation/3_91.jpg \n inflating: food-11/validation/3_109.jpg \n inflating: food-11/validation/4_98.jpg \n inflating: food-11/validation/9_33.jpg \n inflating: food-11/validation/8_345.jpg \n inflating: food-11/validation/9_156.jpg \n inflating: food-11/validation/5_414.jpg \n inflating: food-11/validation/0_223.jpg \n inflating: food-11/validation/4_161.jpg \n inflating: food-11/validation/5_372.jpg \n inflating: food-11/validation/0_17.jpg \n inflating: food-11/validation/4_175.jpg \n inflating: food-11/validation/5_366.jpg \n inflating: food-11/validation/5_400.jpg \n inflating: food-11/validation/0_237.jpg \n inflating: food-11/validation/9_27.jpg \n inflating: food-11/validation/9_142.jpg \n inflating: food-11/validation/5_428.jpg \n inflating: food-11/validation/7_22.jpg \n inflating: food-11/validation/4_149.jpg \n inflating: food-11/validation/7_36.jpg \n inflating: food-11/validation/3_323.jpg \n inflating: food-11/validation/2_130.jpg \n inflating: food-11/validation/9_195.jpg \n inflating: food-11/validation/9_181.jpg \n inflating: food-11/validation/2_124.jpg \n inflating: food-11/validation/4_2.jpg \n inflating: food-11/validation/5_399.jpg \n inflating: food-11/validation/2_118.jpg \n inflating: food-11/validation/5_398.jpg \n inflating: food-11/validation/2_119.jpg \n inflating: food-11/validation/4_3.jpg \n inflating: food-11/validation/2_125.jpg \n inflating: food-11/validation/9_180.jpg \n inflating: food-11/validation/9_194.jpg \n inflating: food-11/validation/2_131.jpg \n inflating: food-11/validation/3_322.jpg \n inflating: food-11/validation/4_148.jpg \n inflating: food-11/validation/7_37.jpg \n inflating: food-11/validation/7_23.jpg \n inflating: food-11/validation/5_429.jpg \n inflating: food-11/validation/5_367.jpg \n inflating: food-11/validation/4_174.jpg \n inflating: food-11/validation/9_143.jpg \n inflating: food-11/validation/9_26.jpg \n inflating: food-11/validation/0_236.jpg \n inflating: food-11/validation/5_401.jpg \n inflating: food-11/validation/0_222.jpg \n inflating: food-11/validation/5_415.jpg \n inflating: food-11/validation/9_157.jpg \n inflating: food-11/validation/8_344.jpg \n inflating: food-11/validation/9_32.jpg \n inflating: food-11/validation/0_16.jpg \n inflating: food-11/validation/5_373.jpg \n inflating: food-11/validation/4_160.jpg \n inflating: food-11/validation/3_108.jpg \n inflating: food-11/validation/3_90.jpg \n inflating: food-11/validation/4_99.jpg \n inflating: food-11/validation/2_469.jpg \n inflating: 
food-11/validation/3_84.jpg \n inflating: food-11/validation/8_191.jpg \n inflating: food-11/validation/9_382.jpg \n inflating: food-11/validation/3_134.jpg \n inflating: food-11/validation/2_327.jpg \n inflating: food-11/validation/6_1.jpg \n inflating: food-11/validation/2_441.jpg \n inflating: food-11/validation/2_455.jpg \n inflating: food-11/validation/3_120.jpg \n inflating: food-11/validation/2_333.jpg \n inflating: food-11/validation/8_185.jpg \n inflating: food-11/validation/9_396.jpg \n inflating: food-11/validation/3_53.jpg \n inflating: food-11/validation/5_159.jpg \n inflating: food-11/validation/3_47.jpg \n inflating: food-11/validation/9_369.jpg \n inflating: food-11/validation/8_152.jpg \n inflating: food-11/validation/9_341.jpg \n inflating: food-11/validation/9_427.jpg \n inflating: food-11/validation/2_482.jpg \n inflating: food-11/validation/4_66.jpg \n inflating: food-11/validation/5_165.jpg \n inflating: food-11/validation/2_496.jpg \n inflating: food-11/validation/4_72.jpg \n inflating: food-11/validation/5_171.jpg \n inflating: food-11/validation/9_433.jpg \n inflating: food-11/validation/8_146.jpg \n inflating: food-11/validation/9_355.jpg \n inflating: food-11/validation/1_95.jpg \n inflating: food-11/validation/6_139.jpg \n inflating: food-11/validation/1_81.jpg \n inflating: food-11/validation/6_88.jpg \n inflating: food-11/validation/0_183.jpg \n inflating: food-11/validation/2_253.jpg \n inflating: food-11/validation/6_111.jpg \n inflating: food-11/validation/0_7.jpg \n inflating: food-11/validation/6_105.jpg \n inflating: food-11/validation/0_197.jpg \n inflating: food-11/validation/2_247.jpg \n inflating: food-11/validation/8_99.jpg \n inflating: food-11/validation/9_209.jpg \n inflating: food-11/validation/10_134.jpg \n inflating: food-11/validation/8_72.jpg \n inflating: food-11/validation/10_18.jpg \n inflating: food-11/validation/1_56.jpg \n inflating: food-11/validation/1_42.jpg \n inflating: food-11/validation/0_168.jpg \n inflating: food-11/validation/10_120.jpg \n inflating: food-11/validation/8_66.jpg \n inflating: food-11/validation/0_140.jpg \n inflating: food-11/validation/2_290.jpg \n inflating: food-11/validation/9_235.jpg \n inflating: food-11/validation/6_63.jpg \n inflating: food-11/validation/10_108.jpg \n inflating: food-11/validation/10_24.jpg \n inflating: food-11/validation/4_202.jpg \n inflating: food-11/validation/10_30.jpg \n inflating: food-11/validation/4_216.jpg \n inflating: food-11/validation/9_221.jpg \n inflating: food-11/validation/6_77.jpg \n inflating: food-11/validation/0_154.jpg \n inflating: food-11/validation/2_284.jpg \n inflating: food-11/validation/3_242.jpg \n inflating: food-11/validation/3_256.jpg \n inflating: food-11/validation/2_5.jpg \n inflating: food-11/validation/5_32.jpg \n inflating: food-11/validation/8_218.jpg \n inflating: food-11/validation/10_8.jpg \n inflating: food-11/validation/5_26.jpg \n inflating: food-11/validation/5_213.jpg \n inflating: food-11/validation/8_224.jpg \n inflating: food-11/validation/3_281.jpg \n inflating: food-11/validation/0_342.jpg \n inflating: food-11/validation/2_13.jpg \n inflating: food-11/validation/3_295.jpg \n inflating: food-11/validation/0_356.jpg \n inflating: food-11/validation/8_230.jpg \n inflating: food-11/validation/5_207.jpg \n inflating: food-11/validation/0_340.jpg \n inflating: food-11/validation/3_283.jpg \n inflating: food-11/validation/8_226.jpg \n inflating: food-11/validation/5_211.jpg \n inflating: food-11/validation/5_205.jpg \n inflating: 
food-11/validation/5_18.jpg \n inflating: food-11/validation/8_232.jpg \n inflating: food-11/validation/0_354.jpg \n inflating: food-11/validation/3_297.jpg \n inflating: food-11/validation/2_11.jpg \n inflating: food-11/validation/2_39.jpg \n inflating: food-11/validation/5_30.jpg \n inflating: food-11/validation/5_239.jpg \n inflating: food-11/validation/5_24.jpg \n inflating: food-11/validation/3_240.jpg \n inflating: food-11/validation/2_7.jpg \n inflating: food-11/validation/3_254.jpg \n inflating: food-11/validation/3_268.jpg \n inflating: food-11/validation/4_200.jpg \n inflating: food-11/validation/10_26.jpg \n inflating: food-11/validation/1_68.jpg \n inflating: food-11/validation/6_61.jpg \n inflating: food-11/validation/9_237.jpg \n inflating: food-11/validation/0_142.jpg \n inflating: food-11/validation/2_292.jpg \n inflating: food-11/validation/0_156.jpg \n inflating: food-11/validation/2_286.jpg \n inflating: food-11/validation/8_58.jpg \n inflating: food-11/validation/6_75.jpg \n inflating: food-11/validation/9_223.jpg \n inflating: food-11/validation/4_214.jpg \n inflating: food-11/validation/10_32.jpg \n inflating: food-11/validation/1_54.jpg \n inflating: food-11/validation/10_136.jpg \n inflating: food-11/validation/8_70.jpg \n inflating: food-11/validation/10_122.jpg \n inflating: food-11/validation/8_64.jpg \n inflating: food-11/validation/6_49.jpg \n inflating: food-11/validation/4_228.jpg \n inflating: food-11/validation/1_40.jpg \n inflating: food-11/validation/6_113.jpg \n inflating: food-11/validation/0_181.jpg \n inflating: food-11/validation/2_251.jpg \n inflating: food-11/validation/0_195.jpg \n inflating: food-11/validation/2_245.jpg \n inflating: food-11/validation/6_107.jpg \n inflating: food-11/validation/0_5.jpg \n inflating: food-11/validation/1_97.jpg \n inflating: food-11/validation/2_279.jpg \n inflating: food-11/validation/1_83.jpg \n inflating: food-11/validation/5_167.jpg \n inflating: food-11/validation/4_64.jpg \n inflating: food-11/validation/2_480.jpg \n inflating: food-11/validation/9_425.jpg \n inflating: food-11/validation/9_343.jpg \n inflating: food-11/validation/8_150.jpg \n inflating: food-11/validation/3_79.jpg \n inflating: food-11/validation/9_357.jpg \n inflating: food-11/validation/8_144.jpg \n inflating: food-11/validation/9_431.jpg \n inflating: food-11/validation/5_173.jpg \n inflating: food-11/validation/4_70.jpg \n inflating: food-11/validation/2_494.jpg \n inflating: food-11/validation/9_419.jpg \n inflating: food-11/validation/4_58.jpg \n inflating: food-11/validation/3_51.jpg \n inflating: food-11/validation/8_178.jpg \n inflating: food-11/validation/3_45.jpg \n inflating: food-11/validation/2_443.jpg \n inflating: food-11/validation/6_3.jpg \n inflating: food-11/validation/2_325.jpg \n inflating: food-11/validation/3_136.jpg \n inflating: food-11/validation/9_380.jpg \n inflating: food-11/validation/8_193.jpg \n inflating: food-11/validation/9_394.jpg \n inflating: food-11/validation/8_187.jpg \n inflating: food-11/validation/2_331.jpg \n inflating: food-11/validation/3_122.jpg \n inflating: food-11/validation/2_457.jpg \n inflating: food-11/validation/5_198.jpg \n inflating: food-11/validation/3_92.jpg \n inflating: food-11/validation/2_319.jpg \n inflating: food-11/validation/3_86.jpg \n inflating: food-11/validation/5_403.jpg \n inflating: food-11/validation/0_234.jpg \n inflating: food-11/validation/9_24.jpg \n inflating: food-11/validation/9_141.jpg \n inflating: food-11/validation/4_176.jpg \n inflating: 
food-11/validation/5_365.jpg \n inflating: food-11/validation/4_162.jpg \n inflating: food-11/validation/5_371.jpg \n inflating: food-11/validation/0_14.jpg \n inflating: food-11/validation/8_346.jpg \n inflating: food-11/validation/9_30.jpg \n inflating: food-11/validation/9_155.jpg \n inflating: food-11/validation/5_417.jpg \n inflating: food-11/validation/0_220.jpg \n inflating: food-11/validation/7_35.jpg \n inflating: food-11/validation/9_18.jpg \n inflating: food-11/validation/0_208.jpg \n inflating: food-11/validation/5_359.jpg \n inflating: food-11/validation/0_28.jpg \n inflating: food-11/validation/7_21.jpg \n inflating: food-11/validation/9_169.jpg \n inflating: food-11/validation/9_182.jpg \n inflating: food-11/validation/2_127.jpg \n inflating: food-11/validation/4_1.jpg \n inflating: food-11/validation/3_320.jpg \n inflating: food-11/validation/2_133.jpg \n inflating: food-11/validation/9_196.jpg \n inflating: food-11/validation/3_308.jpg \n inflating: food-11/validation/4_189.jpg \n inflating: food-11/validation/3_309.jpg \n inflating: food-11/validation/4_188.jpg \n inflating: food-11/validation/9_197.jpg \n inflating: food-11/validation/2_132.jpg \n inflating: food-11/validation/3_321.jpg \n inflating: food-11/validation/4_0.jpg \n inflating: food-11/validation/2_126.jpg \n inflating: food-11/validation/9_183.jpg \n inflating: food-11/validation/0_29.jpg \n inflating: food-11/validation/9_168.jpg \n inflating: food-11/validation/7_20.jpg \n inflating: food-11/validation/0_209.jpg \n inflating: food-11/validation/9_19.jpg \n inflating: food-11/validation/7_34.jpg \n inflating: food-11/validation/5_358.jpg \n inflating: food-11/validation/0_15.jpg \n inflating: food-11/validation/5_370.jpg \n inflating: food-11/validation/4_163.jpg \n inflating: food-11/validation/0_221.jpg \n inflating: food-11/validation/5_416.jpg \n inflating: food-11/validation/9_154.jpg \n inflating: food-11/validation/9_31.jpg \n inflating: food-11/validation/9_140.jpg \n inflating: food-11/validation/9_25.jpg \n inflating: food-11/validation/0_235.jpg \n inflating: food-11/validation/5_402.jpg \n inflating: food-11/validation/5_364.jpg \n inflating: food-11/validation/4_177.jpg \n inflating: food-11/validation/3_87.jpg \n inflating: food-11/validation/5_199.jpg \n inflating: food-11/validation/2_318.jpg \n inflating: food-11/validation/3_93.jpg \n inflating: food-11/validation/3_123.jpg \n inflating: food-11/validation/2_330.jpg \n inflating: food-11/validation/8_186.jpg \n inflating: food-11/validation/9_395.jpg \n inflating: food-11/validation/2_456.jpg \n inflating: food-11/validation/2_442.jpg \n inflating: food-11/validation/8_192.jpg \n inflating: food-11/validation/9_381.jpg \n inflating: food-11/validation/3_137.jpg \n inflating: food-11/validation/2_324.jpg \n inflating: food-11/validation/6_2.jpg \n inflating: food-11/validation/3_44.jpg \n inflating: food-11/validation/8_179.jpg \n inflating: food-11/validation/4_59.jpg \n inflating: food-11/validation/9_418.jpg \n inflating: food-11/validation/3_50.jpg \n inflating: food-11/validation/8_145.jpg \n inflating: food-11/validation/9_356.jpg \n inflating: food-11/validation/3_78.jpg \n inflating: food-11/validation/2_495.jpg \n inflating: food-11/validation/4_71.jpg \n inflating: food-11/validation/5_172.jpg \n inflating: food-11/validation/9_430.jpg \n inflating: food-11/validation/9_424.jpg \n inflating: food-11/validation/2_481.jpg \n inflating: food-11/validation/4_65.jpg \n inflating: food-11/validation/5_166.jpg \n inflating: 
food-11/validation/8_151.jpg \n inflating: food-11/validation/9_342.jpg \n inflating: food-11/validation/2_278.jpg \n inflating: food-11/validation/1_82.jpg \n inflating: food-11/validation/1_96.jpg \n inflating: food-11/validation/2_244.jpg \n inflating: food-11/validation/0_194.jpg \n inflating: food-11/validation/0_4.jpg \n inflating: food-11/validation/6_106.jpg \n inflating: food-11/validation/6_112.jpg \n inflating: food-11/validation/2_250.jpg \n inflating: food-11/validation/0_180.jpg \n inflating: food-11/validation/6_48.jpg \n inflating: food-11/validation/10_123.jpg \n inflating: food-11/validation/8_65.jpg \n inflating: food-11/validation/1_41.jpg \n inflating: food-11/validation/4_229.jpg \n inflating: food-11/validation/1_55.jpg \n inflating: food-11/validation/10_137.jpg \n inflating: food-11/validation/8_71.jpg \n inflating: food-11/validation/9_222.jpg \n inflating: food-11/validation/6_74.jpg \n inflating: food-11/validation/8_59.jpg \n inflating: food-11/validation/2_287.jpg \n inflating: food-11/validation/0_157.jpg \n inflating: food-11/validation/10_33.jpg \n inflating: food-11/validation/4_215.jpg \n inflating: food-11/validation/1_69.jpg \n inflating: food-11/validation/10_27.jpg \n inflating: food-11/validation/4_201.jpg \n inflating: food-11/validation/2_293.jpg \n inflating: food-11/validation/0_143.jpg \n inflating: food-11/validation/9_236.jpg \n inflating: food-11/validation/6_60.jpg \n inflating: food-11/validation/3_269.jpg \n inflating: food-11/validation/2_6.jpg \n inflating: food-11/validation/3_255.jpg \n inflating: food-11/validation/3_241.jpg \n inflating: food-11/validation/5_25.jpg \n inflating: food-11/validation/5_238.jpg \n inflating: food-11/validation/2_38.jpg \n inflating: food-11/validation/5_31.jpg \n inflating: food-11/validation/5_19.jpg \n inflating: food-11/validation/5_204.jpg \n inflating: food-11/validation/2_10.jpg \n inflating: food-11/validation/3_296.jpg \n inflating: food-11/validation/0_355.jpg \n inflating: food-11/validation/8_233.jpg \n inflating: food-11/validation/8_227.jpg \n inflating: food-11/validation/3_282.jpg \n inflating: food-11/validation/0_341.jpg \n inflating: food-11/validation/5_210.jpg \n" ] ], [ [ "# Import libraries", "_____no_output_____" ] ], [ [ "import os\nimport numpy as np\nimport cv2\nimport torch\nimport torch.nn as nn\nimport torchvision.transforms as transforms\nimport pandas as pd\nfrom torch.utils.data import DataLoader, Dataset\nimport time", "_____no_output_____" ] ], [ [ "# Read image\n- Use OpenCV (cv2) to read the image and store it in numpy array", "_____no_output_____" ] ], [ [ "def readfile(path, label):\n \"\"\"Read the image as numpy array.\n\n Args:\n path (String): the path of the image dataset\n label (Boolean): \"True\" if y is needed; otherwise, \"False\"\n Returns:\n x (numpy array): the array of the image dataset\n y (numpy array): the array of the label\n \"\"\"\n image_dir = sorted(os.listdir(path))\n x = np.zeros((len(image_dir), 128, 128, 3), dtype=np.uint8)\n y = np.zeros((len(image_dir)), dtype=np.uint8)\n for i, file in enumerate(image_dir):\n img = cv2.imread(os.path.join(path, file))\n x[i, :, :] = cv2.resize(img,(128, 128))\n if label:\n y[i] = int(file.split(\"_\")[0])\n if label:\n return x, y\n else:\n return x", "_____no_output_____" ], [ "# read the training set, validation set, and testing set with `readfile` function\nworkspace_dir = './food-11'\nprint(\"Reading data\")\ntrain_x, train_y = readfile(os.path.join(workspace_dir, \"training\"), True)\nprint(\"Size of 
training data = {}\".format(len(train_x)))\nval_x, val_y = readfile(os.path.join(workspace_dir, \"validation\"), True)\nprint(\"Size of validation data = {}\".format(len(val_x)))\ntest_x = readfile(os.path.join(workspace_dir, \"testing\"), False)\nprint(\"Size of Testing data = {}\".format(len(test_x)))", "Reading data\nSize of training data = 9866\nSize of validation data = 3430\nSize of Testing data = 3347\n" ] ],
[ [ "# Dataset\n- In PyTorch, it is convenient to use `Dataset` and `DataLoader` from `torch.utils.data` to \"wrap\" the data for training and testing.\n\n- Two methods are needed in the `Dataset`: `__len__` and `__getitem__`\n  - `__len__`: returns the size of the dataset\n  - `__getitem__`: defines how to return the data when \"\\[ \\]\" indexing is used.\n  - In practice, we won't call these methods directly. Instead, they are invoked implicitly when we `enumerate` the `dataloader`, so an error is raised if they are not defined.", "_____no_output_____" ] ],
[ [ "# define the transformation for the training set\n# data augmentation is needed\ntrain_transform = transforms.Compose([\n    transforms.ToPILImage(),\n    transforms.RandomHorizontalFlip(), # randomly flip the image horizontally\n    transforms.RandomRotation(15), # randomly rotate the image by up to 15 degrees\n    transforms.ToTensor(), # transform the image to a tensor and normalize it to [0, 1]\n])\n\n# define the transformation for the testing set\n# data augmentation is NOT needed\ntest_transform = transforms.Compose([\n    transforms.ToPILImage(),\n    transforms.ToTensor(),\n])\nclass ImgDataset(Dataset):\n    def __init__(self, x, y=None, transform=None):\n        self.x = x\n        # the label is required to be a LongTensor\n        self.y = y\n        if y is not None:\n            self.y = torch.LongTensor(y)\n        self.transform = transform\n    def __len__(self):\n        return len(self.x)\n    def __getitem__(self, index):\n        X = self.x[index]\n        if self.transform is not None:\n            X = self.transform(X)\n        if self.y is not None:\n            Y = self.y[index]\n            return X, Y\n        else:\n            return X", "_____no_output_____" ], [ "batch_size = 128\ntrain_set = ImgDataset(train_x, train_y, train_transform)\nval_set = ImgDataset(val_x, val_y, test_transform)\ntrain_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)\nval_loader = DataLoader(val_set, batch_size=batch_size, shuffle=False)", "_____no_output_____" ] ],
[ [ "# Model", "_____no_output_____" ] ],
[ [ "class Classifier(nn.Module):\n    def __init__(self):\n        super(Classifier, self).__init__()\n        # torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)\n        # torch.nn.MaxPool2d(kernel_size, stride, padding)\n        # input size [3, 128, 128]\n        self.cnn = nn.Sequential(\n            nn.Conv2d(3, 64, 3, 1, 1),  # [64, 128, 128]\n            nn.BatchNorm2d(64),\n            nn.ReLU(),\n            nn.MaxPool2d(2, 2, 0),      # [64, 64, 64]\n\n            nn.Conv2d(64, 128, 3, 1, 1), # [128, 64, 64]\n            nn.BatchNorm2d(128),\n            nn.ReLU(),\n            nn.MaxPool2d(2, 2, 0),      # [128, 32, 32]\n\n            nn.Conv2d(128, 256, 3, 1, 1), # [256, 32, 32]\n            nn.BatchNorm2d(256),\n            nn.ReLU(),\n            nn.MaxPool2d(2, 2, 0),      # [256, 16, 16]\n\n            nn.Conv2d(256, 512, 3, 1, 1), # [512, 16, 16]\n            nn.BatchNorm2d(512),\n            nn.ReLU(),\n            nn.MaxPool2d(2, 2, 0),      # [512, 8, 8]\n\n            nn.Conv2d(512, 512, 3, 1, 1), # [512, 8, 8]\n            nn.BatchNorm2d(512),\n            nn.ReLU(),\n            nn.MaxPool2d(2, 2, 0),      # [512, 4, 4]\n        )\n        self.fc = nn.Sequential(\n            nn.Linear(512*4*4, 1024),\n            nn.ReLU(),\n            nn.Linear(1024, 512),\n            nn.ReLU(),\n            nn.Linear(512, 11)\n        )\n\n    def forward(self, x):\n        out = self.cnn(x)\n        out = out.view(out.size()[0], -1)\n        return self.fc(out)", "_____no_output_____" ] ],
[ [ "# Training\n- 
Train the model on the training set and use the validation set to find good hyperparameters.", "_____no_output_____" ] ],
[ [ "model = Classifier().cuda()\nloss = nn.CrossEntropyLoss() # CrossEntropyLoss is used since this is a classification task\noptimizer = torch.optim.Adam(model.parameters(), lr=0.001)\nnum_epoch = 30\n\nfor epoch in range(num_epoch):\n    epoch_start_time = time.time()\n    train_acc = 0.0\n    train_loss = 0.0\n    val_acc = 0.0\n    val_loss = 0.0\n\n    model.train() # make sure the model is in train mode (enables Dropout, BatchNorm updates, etc.)\n    for i, data in enumerate(train_loader):\n        optimizer.zero_grad() # zero the gradients to avoid accumulation across batches\n        train_pred = model(data[0].cuda()) # forward pass to get the predicted class scores\n        batch_loss = loss(train_pred, data[1].cuda()) # compute the loss (prediction and label must be on the same device)\n        batch_loss.backward() # backpropagation to compute the gradients\n        optimizer.step() # update the parameters with the optimizer\n\n        train_acc += np.sum(np.argmax(train_pred.cpu().data.numpy(), axis=1) == data[1].numpy())\n        train_loss += batch_loss.item()\n\n    model.eval()\n    with torch.no_grad():\n        for i, data in enumerate(val_loader):\n            val_pred = model(data[0].cuda())\n            batch_loss = loss(val_pred, data[1].cuda())\n\n            val_acc += np.sum(np.argmax(val_pred.cpu().data.numpy(), axis=1) == data[1].numpy())\n            val_loss += batch_loss.item()\n\n    # print the result for this epoch\n    print('[%03d/%03d] %2.2f sec(s) Train Acc: %3.6f Loss: %3.6f | Val Acc: %3.6f loss: %3.6f' % \\\n        (epoch + 1, num_epoch, time.time()-epoch_start_time, \\\n        train_acc/train_set.__len__(), train_loss/train_set.__len__(), val_acc/val_set.__len__(), val_loss/val_set.__len__()))", "[001/030] 30.53 sec(s) Train Acc: 0.238394 Loss: 0.017837 | Val Acc: 0.274344 loss: 0.016516\n[002/030] 30.43 sec(s) Train Acc: 0.344618 Loss: 0.014741 | Val Acc: 0.236443 loss: 0.018572\n[003/030] 30.51 sec(s) Train Acc: 0.383235 Loss: 0.013762 | Val Acc: 0.376385 loss: 0.014908\n[004/030] 30.49 sec(s) Train Acc: 0.438881 Loss: 0.012584 | Val Acc: 0.412828 loss: 0.013307\n[005/030] 30.61 sec(s) Train Acc: 0.479019 Loss: 0.011782 | Val Acc: 0.425073 loss: 0.013873\n[006/030] 30.73 sec(s) Train Acc: 0.511149 Loss: 0.011073 | Val Acc: 0.375510 loss: 0.013860\n[007/030] 30.73 sec(s) Train Acc: 0.535070 Loss: 0.010570 | Val Acc: 0.449563 loss: 0.012494\n[008/030] 30.80 sec(s) Train Acc: 0.563450 Loss: 0.009898 | Val Acc: 0.445190 loss: 0.014218\n[009/030] 30.85 sec(s) Train Acc: 0.596696 Loss: 0.009260 | Val Acc: 0.515160 loss: 0.011270\n[010/030] 30.93 sec(s) Train Acc: 0.624367 Loss: 0.008555 | Val Acc: 0.481924 loss: 0.012702\n[011/030] 31.04 sec(s) Train Acc: 0.636631 Loss: 0.008316 | Val Acc: 0.557726 loss: 0.010475\n[012/030] 30.92 sec(s) Train Acc: 0.656902 Loss: 0.007816 | Val Acc: 0.544315 loss: 0.011204\n[013/030] 30.88 sec(s) Train Acc: 0.675147 Loss: 0.007362 | Val Acc: 0.526239 loss: 0.011732\n[014/030] 30.98 sec(s) Train Acc: 0.689945 Loss: 0.007133 | Val Acc: 0.515160 loss: 0.012606\n[015/030] 30.92 sec(s) Train Acc: 0.691162 Loss: 0.006956 | Val Acc: 0.617493 loss: 0.008946\n[016/030] 31.01 sec(s) Train Acc: 0.715183 Loss: 0.006552 | Val Acc: 0.617493 loss: 0.009307\n[017/030] 30.92 sec(s) Train Acc: 0.727752 Loss: 0.006177 | Val Acc: 0.538484 loss: 0.012785\n[018/030] 30.98 sec(s) Train Acc: 0.724711 Loss: 0.006238 | Val Acc: 0.599417 loss: 0.010158\n[019/030] 30.89 sec(s) Train Acc: 0.761403 Loss: 0.005398 | Val Acc: 0.593586 loss: 0.011222\n[020/030] 30.97 sec(s) Train Acc: 0.768701 Loss: 0.005145 | Val Acc: 0.654519 loss: 0.008498\n[021/030] 30.93 sec(s) Train Acc: 0.784715 Loss: 0.004809 | Val Acc: 0.612828 loss: 0.010290\n[022/030] 30.93 sec(s) Train Acc: 0.780864 Loss: 0.004900 | Val Acc: 0.641108 loss: 0.009655\n[023/030] 31.01 sec(s) Train Acc: 0.808129 Loss: 0.004278 | Val Acc: 0.659184 loss: 0.008730\n[024/030] 31.00 sec(s) Train Acc: 0.811879 Loss: 0.004286 | Val Acc: 0.616327 loss: 0.011102\n[025/030] 30.89 sec(s) Train Acc: 0.822420 Loss: 0.004066 | Val Acc: 0.509621 loss: 0.015526\n[026/030] 30.99 sec(s) Train Acc: 0.824650 Loss: 0.003977 | Val Acc: 0.594169 loss: 0.012257\n[027/030] 30.96 sec(s) Train Acc: 0.833266 Loss: 0.003817 | Val Acc: 0.668513 loss: 0.009223\n[028/030] 30.87 sec(s) Train Acc: 0.846442 Loss: 0.003413 | Val Acc: 0.658601 loss: 0.010521\n[029/030] 30.99 sec(s) Train Acc: 0.865498 Loss: 0.003038 | Val Acc: 0.619534 loss: 0.012690\n[030/030] 30.98 sec(s) Train Acc: 0.871782 Loss: 0.002866 | Val Acc: 0.628863 loss: 0.012328\n" ] ],
[ [ "- After settling on a good hyperparameter configuration, we retrain the model on the combined training and validation sets. (The more data we have, the better the performance.)", "_____no_output_____" ] ],
[ [ "train_val_x = np.concatenate((train_x, val_x), axis=0)\ntrain_val_y = np.concatenate((train_y, val_y), axis=0)\ntrain_val_set = ImgDataset(train_val_x, train_val_y, train_transform)\ntrain_val_loader = DataLoader(train_val_set, batch_size=batch_size, shuffle=True)", "_____no_output_____" ], [ "model_best = Classifier().cuda()\nloss = nn.CrossEntropyLoss() # CrossEntropyLoss is used since this is a classification task\noptimizer = torch.optim.Adam(model_best.parameters(), lr=0.001)\n\nfor epoch in range(num_epoch):\n    epoch_start_time = time.time()\n    train_acc = 0.0\n    train_loss = 0.0\n\n    model_best.train()\n    for i, data in enumerate(train_val_loader):\n        optimizer.zero_grad()\n        train_pred = model_best(data[0].cuda())\n        batch_loss = loss(train_pred, data[1].cuda())\n        batch_loss.backward()\n        optimizer.step()\n\n        train_acc += np.sum(np.argmax(train_pred.cpu().data.numpy(), axis=1) == data[1].numpy())\n        train_loss += batch_loss.item()\n\n    # print the result for this epoch\n    print('[%03d/%03d] %2.2f sec(s) Train Acc: %3.6f Loss: %3.6f' % \\\n        (epoch + 1, num_epoch, time.time()-epoch_start_time, \\\n        train_acc/train_val_set.__len__(), train_loss/train_val_set.__len__()))", "[001/030] 36.56 sec(s) Train Acc: 0.245036 Loss: 0.017163\n[002/030] 36.84 sec(s) Train Acc: 0.378460 Loss: 0.013862\n[003/030] 37.21 sec(s) Train Acc: 0.447879 Loss: 0.012331\n[004/030] 37.13 sec(s) Train Acc: 0.508348 Loss: 0.011165\n[005/030] 37.11 sec(s) Train Acc: 0.544299 Loss: 0.010312\n[006/030] 37.18 sec(s) Train Acc: 0.582280 Loss: 0.009482\n[007/030] 37.13 sec(s) Train Acc: 0.615298 Loss: 0.008749\n[008/030] 37.13 sec(s) Train Acc: 0.641321 Loss: 0.008164\n[009/030] 37.16 sec(s) Train Acc: 0.659973 Loss: 0.007770\n[010/030] 37.13 sec(s) Train Acc: 0.681408 Loss: 0.007191\n[011/030] 37.12 sec(s) Train Acc: 0.699308 Loss: 0.006825\n[012/030] 37.17 sec(s) Train Acc: 0.712771 Loss: 0.006487\n[013/030] 37.14 sec(s) Train Acc: 0.731649 Loss: 0.006072\n[014/030] 37.09 sec(s) Train Acc: 0.739847 Loss: 0.005856\n[015/030] 37.24 sec(s) Train Acc: 0.756242 Loss: 0.005536\n[016/030] 37.12 sec(s) Train Acc: 0.769028 Loss: 0.005183\n[017/030] 37.05 sec(s) Train Acc: 0.784070 Loss: 0.004819\n[018/030] 36.98 sec(s) Train Acc: 0.792419 Loss: 0.004676\n[019/030] 36.95 sec(s) Train Acc: 0.810168 Loss: 0.004182\n[020/030] 36.91 sec(s) Train Acc: 0.817313 Loss: 0.004051\n[021/030] 36.79 sec(s) Train Acc: 0.832656 Loss: 0.003687\n[022/030] 36.98 sec(s) Train Acc: 0.844690 Loss: 0.003516\n[023/030] 36.91 sec(s) Train Acc: 0.849278 Loss: 0.003319\n[024/030] 36.95 sec(s) Train Acc: 0.866877 Loss: 0.002998\n[025/030] 36.85 sec(s) Train Acc: 0.867554 Loss: 0.002938\n[026/030] 36.81 sec(s) Train Acc: 0.884025 Loss: 0.002521\n[027/030] 36.78 sec(s) Train Acc: 0.882070 Loss: 0.002582\n[028/030] 36.94 sec(s) Train Acc: 0.903279 Loss: 0.002136\n[029/030] 36.93 sec(s) Train Acc: 0.892298 Loss: 0.002329\n[030/030] 36.91 sec(s) Train Acc: 0.913809 Loss: 0.001934\n" ] ],
[ [ "# Testing\n- Run prediction on the testing set with the model we just trained.", "_____no_output_____" ] ],
[ [ "test_set = ImgDataset(test_x, transform=test_transform)\ntest_loader = DataLoader(test_set, batch_size=batch_size, shuffle=False)", "_____no_output_____" ], [ "model_best.eval()\nprediction = []\nwith torch.no_grad():\n    for i, data in enumerate(test_loader):\n        test_pred = model_best(data.cuda())\n        test_label = np.argmax(test_pred.cpu().data.numpy(), axis=1)\n        for y in test_label:\n            prediction.append(y)", "_____no_output_____" ], [ "# write the results to a csv file\nwith open(\"predict.csv\", 'w') as f:\n    f.write('Id,Category\\\\n')\n    for i, y in enumerate(prediction):\n        f.write('{},{}\\\\n'.format(i, y))", "_____no_output_____" ] ] ]
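A minimal sketch of a per-class accuracy breakdown for the classifier above, assuming `model_best`, `val_loader`, and the 11 food classes from the notebook are in scope; the helper name is illustrative, not part of the original notebook:

import numpy as np
import torch

def per_class_accuracy(model, loader, num_classes=11):
    # tally correct predictions and sample counts separately for each class
    correct = np.zeros(num_classes)
    total = np.zeros(num_classes)
    model.eval()
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.cuda()).argmax(dim=1).cpu().numpy()
            y = y.numpy()
            for c in range(num_classes):
                mask = (y == c)
                total[c] += mask.sum()
                correct[c] += (pred[mask] == c).sum()
    return correct / np.maximum(total, 1)  # avoid division by zero for empty classes

Calling `per_class_accuracy(model_best, val_loader)` would show which food categories drive the validation error, which the aggregate accuracy above hides (note it is optimistic after retraining on the combined set, since the validation images were then part of training).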
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
ece1c518d1749d933a9bbdc19eb2953ffb4fc0a1
41,179
ipynb
Jupyter Notebook
docs/source2/examples/notebooks/generated/statespace_local_linear_trend.ipynb
GreatWei/pythonStates
c4a9b326bfa312e2ae44a70f4dfaaf91f2d47a37
[ "BSD-3-Clause" ]
76
2019-12-28T08:37:10.000Z
2022-03-29T02:19:41.000Z
docs/source2/examples/notebooks/generated/statespace_local_linear_trend.ipynb
GreatWei/pythonStates
c4a9b326bfa312e2ae44a70f4dfaaf91f2d47a37
[ "BSD-3-Clause" ]
null
null
null
docs/source2/examples/notebooks/generated/statespace_local_linear_trend.ipynb
GreatWei/pythonStates
c4a9b326bfa312e2ae44a70f4dfaaf91f2d47a37
[ "BSD-3-Clause" ]
35
2020-02-04T14:46:25.000Z
2022-03-24T03:56:17.000Z
124.033133
25,804
0.819107
[ [ [ "# State space modeling: Local Linear Trends", "_____no_output_____" ], [ "This notebook describes how to extend the statsmodels statespace classes to create and estimate a custom model. Here we develop a local linear trend model.\n\nThe Local Linear Trend model has the form (see Durbin and Koopman 2012, Chapter 3.2 for all notation and details):\n\n$$\n\\begin{align}\ny_t & = \\mu_t + \\varepsilon_t \\qquad & \\varepsilon_t \\sim\n N(0, \\sigma_\\varepsilon^2) \\\\\n\\mu_{t+1} & = \\mu_t + \\nu_t + \\xi_t & \\xi_t \\sim N(0, \\sigma_\\xi^2) \\\\\n\\nu_{t+1} & = \\nu_t + \\zeta_t & \\zeta_t \\sim N(0, \\sigma_\\zeta^2)\n\\end{align}\n$$\n\nIt is easy to see that this can be cast into state space form as:\n\n$$\n\\begin{align}\ny_t & = \\begin{pmatrix} 1 & 0 \\end{pmatrix} \\begin{pmatrix} \\mu_t \\\\ \\nu_t \\end{pmatrix} + \\varepsilon_t \\\\\n\\begin{pmatrix} \\mu_{t+1} \\\\ \\nu_{t+1} \\end{pmatrix} & = \\begin{bmatrix} 1 & 1 \\\\ 0 & 1 \\end{bmatrix} \\begin{pmatrix} \\mu_t \\\\ \\nu_t \\end{pmatrix} + \\begin{pmatrix} \\xi_t \\\\ \\zeta_t \\end{pmatrix}\n\\end{align}\n$$\n\nNotice that much of the state space representation is composed of known values; in fact the only parts in which parameters to be estimated appear are in the variance / covariance matrices:\n\n$$\n\\begin{align}\nH_t & = \\begin{bmatrix} \\sigma_\\varepsilon^2 \\end{bmatrix} \\\\\nQ_t & = \\begin{bmatrix} \\sigma_\\xi^2 & 0 \\\\ 0 & \\sigma_\\zeta^2 \\end{bmatrix}\n\\end{align}\n$$", "_____no_output_____" ] ], [ [ "%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nfrom scipy.stats import norm\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "To take advantage of the existing infrastructure, including Kalman filtering and maximum likelihood estimation, we create a new class which extends from `statsmodels.tsa.statespace.MLEModel`. There are a number of things that must be specified:\n\n1. **k_states**, **k_posdef**: These two parameters must be provided to the base classes in initialization. The inform the statespace model about the size of, respectively, the state vector, above $\\begin{pmatrix} \\mu_t & \\nu_t \\end{pmatrix}'$, and the state error vector, above $\\begin{pmatrix} \\xi_t & \\zeta_t \\end{pmatrix}'$. Note that the dimension of the endogenous vector does not have to be specified, since it can be inferred from the `endog` array.\n2. **update**: The method `update`, with argument `params`, must be specified (it is used when `fit()` is called to calculate the MLE). It takes the parameters and fills them into the appropriate state space matrices. For example, below, the `params` vector contains variance parameters $\\begin{pmatrix} \\sigma_\\varepsilon^2 & \\sigma_\\xi^2 & \\sigma_\\zeta^2\\end{pmatrix}$, and the `update` method must place them in the observation and state covariance matrices. More generally, the parameter vector might be mapped into many different places in all of the statespace matrices.\n3. **statespace matrices**: by default, all state space matrices (`obs_intercept, design, obs_cov, state_intercept, transition, selection, state_cov`) are set to zeros. Values that are fixed (like the ones in the design and transition matrices here) can be set in initialization, whereas values that vary with the parameters should be set in the `update` method. 
Note that it is easy to forget to set the selection matrix, which is often just the identity matrix (as it is here), but not setting it will lead to a very different model (one where there is not a stochastic component to the transition equation).\n4. **start params**: start parameters must be set, even if it is just a vector of zeros, although often good start parameters can be found from the data. Maximum likelihood estimation by gradient methods (as employed here) can be sensitive to the starting parameters, so it is important to select good ones if possible. Here it does not matter too much (although as variances, they should't be set zero).\n5. **initialization**: in addition to defined state space matrices, all state space models must be initialized with the mean and variance for the initial distribution of the state vector. If the distribution is known, `initialize_known(initial_state, initial_state_cov)` can be called, or if the model is stationary (e.g. an ARMA model), `initialize_stationary` can be used. Otherwise, `initialize_approximate_diffuse` is a reasonable generic initialization (exact diffuse initialization is not yet available). Since the local linear trend model is not stationary (it is composed of random walks) and since the distribution is not generally known, we use `initialize_approximate_diffuse` below.\n\nThe above are the minimum necessary for a successful model. There are also a number of things that do not have to be set, but which may be helpful or important for some applications:\n\n1. **transform / untransform**: when `fit` is called, the optimizer in the background will use gradient methods to select the parameters that maximize the likelihood function. By default it uses unbounded optimization, which means that it may select any parameter value. In many cases, that is not the desired behavior; variances, for example, cannot be negative. To get around this, the `transform` method takes the unconstrained vector of parameters provided by the optimizer and returns a constrained vector of parameters used in likelihood evaluation. `untransform` provides the reverse operation.\n2. **param_names**: this internal method can be used to set names for the estimated parameters so that e.g. the summary provides meaningful names. 
If not present, parameters are named `param0`, `param1`, etc.", "_____no_output_____" ] ], [ [ "\"\"\"\nUnivariate Local Linear Trend Model\n\"\"\"\nclass LocalLinearTrend(sm.tsa.statespace.MLEModel):\n def __init__(self, endog):\n # Model order\n k_states = k_posdef = 2\n\n # Initialize the statespace\n super(LocalLinearTrend, self).__init__(\n endog, k_states=k_states, k_posdef=k_posdef,\n initialization='approximate_diffuse',\n loglikelihood_burn=k_states\n )\n\n # Initialize the matrices\n self.ssm['design'] = np.array([1, 0])\n self.ssm['transition'] = np.array([[1, 1],\n [0, 1]])\n self.ssm['selection'] = np.eye(k_states)\n\n # Cache some indices\n self._state_cov_idx = ('state_cov',) + np.diag_indices(k_posdef)\n\n @property\n def param_names(self):\n return ['sigma2.measurement', 'sigma2.level', 'sigma2.trend']\n\n @property\n def start_params(self):\n return [np.std(self.endog)]*3\n\n def transform_params(self, unconstrained):\n return unconstrained**2\n\n def untransform_params(self, constrained):\n return constrained**0.5\n\n def update(self, params, *args, **kwargs):\n params = super(LocalLinearTrend, self).update(params, *args, **kwargs)\n \n # Observation covariance\n self.ssm['obs_cov',0,0] = params[0]\n\n # State covariance\n self.ssm[self._state_cov_idx] = params[1:]", "_____no_output_____" ] ], [ [ "Using this simple model, we can estimate the parameters from a local linear trend model. The following example is from Commandeur and Koopman (2007), section 3.4., modeling motor vehicle fatalities in Finland.", "_____no_output_____" ] ], [ [ "import requests\nfrom io import BytesIO\nfrom zipfile import ZipFile\n \n# Download the dataset\nck = requests.get('http://staff.feweb.vu.nl/koopman/projects/ckbook/OxCodeAll.zip').content\nzipped = ZipFile(BytesIO(ck))\ndf = pd.read_table(\n BytesIO(zipped.read('OxCodeIntroStateSpaceBook/Chapter_2/NorwayFinland.txt')),\n skiprows=1, header=None, sep='\\s+', engine='python',\n names=['date','nf', 'ff']\n)", "_____no_output_____" ] ], [ [ "Since we defined the local linear trend model as extending from `MLEModel`, the `fit()` method is immediately available, just as in other statsmodels maximum likelihood classes. Similarly, the returned results class supports many of the same post-estimation results, like the `summary` method.\n", "_____no_output_____" ] ], [ [ "# Load Dataset\ndf.index = pd.date_range(start='%d-01-01' % df.date[0], end='%d-01-01' % df.iloc[-1, 0], freq='AS')\n\n# Log transform\ndf['lff'] = np.log(df['ff'])\n\n# Setup the model\nmod = LocalLinearTrend(df['lff'])\n\n# Fit it using MLE (recall that we are fitting the three variance parameters)\nres = mod.fit(disp=False)\nprint(res.summary())", " Statespace Model Results \n==============================================================================\nDep. Variable: lff No. 
Observations: 34\nModel: LocalLinearTrend Log Likelihood 27.510\nDate: Tue, 24 Dec 2019 AIC -49.020\nTime: 15:04:44 BIC -44.623\nSample: 01-01-1970 HQIC -47.563\n - 01-01-2003 \nCovariance Type: opg \n======================================================================================\n coef std err z P>|z| [0.025 0.975]\n--------------------------------------------------------------------------------------\nsigma2.measurement 0.0010 0.003 0.346 0.730 -0.005 0.007\nsigma2.level 0.0074 0.005 1.564 0.118 -0.002 0.017\nsigma2.trend 2.419e-11 0.000 1.61e-07 1.000 -0.000 0.000\n===================================================================================\nLjung-Box (Q): nan Jarque-Bera (JB): 0.68\nProb(Q): nan Prob(JB): 0.71\nHeteroskedasticity (H): 0.75 Skew: -0.02\nProb(H) (two-sided): 0.64 Kurtosis: 2.29\n===================================================================================\n\nWarnings:\n[1] Covariance matrix calculated using the outer product of gradients (complex-step).\n" ] ], [ [ "Finally, we can do post-estimation prediction and forecasting. Notice that the end period can be specified as a date.", "_____no_output_____" ] ], [ [ "# Perform prediction and forecasting\npredict = res.get_prediction()\nforecast = res.get_forecast('2014')", "_____no_output_____" ], [ "fig, ax = plt.subplots(figsize=(10,4))\n\n# Plot the results\ndf['lff'].plot(ax=ax, style='k.', label='Observations')\npredict.predicted_mean.plot(ax=ax, label='One-step-ahead Prediction')\npredict_ci = predict.conf_int(alpha=0.05)\npredict_index = np.arange(len(predict_ci))\nax.fill_between(predict_index[2:], predict_ci.iloc[2:, 0], predict_ci.iloc[2:, 1], alpha=0.1)\n\nforecast.predicted_mean.plot(ax=ax, style='r', label='Forecast')\nforecast_ci = forecast.conf_int()\nforecast_index = np.arange(len(predict_ci), len(predict_ci) + len(forecast_ci))\nax.fill_between(forecast_index, forecast_ci.iloc[:, 0], forecast_ci.iloc[:, 1], alpha=0.1)\n\n# Cleanup the image\nax.set_ylim((4, 8));\nlegend = ax.legend(loc='lower left');", "_____no_output_____" ] ], [ [ "### References\n\n Commandeur, Jacques J. F., and Siem Jan Koopman. 2007.\n An Introduction to State Space Time Series Analysis.\n Oxford; New York: Oxford University Press.\n\n Durbin, James, and Siem Jan Koopman. 2012.\n Time Series Analysis by State Space Methods: Second Edition.\n Oxford University Press.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
ece1e45aa287599b1fbdabb016fd85e60d352f86
371,063
ipynb
Jupyter Notebook
all final/RGB and Gray.ipynb
parthjain98in/image-processing
cc967eb290e21633d4103f5db582e07b8de4e87d
[ "MIT" ]
null
null
null
all final/RGB and Gray.ipynb
parthjain98in/image-processing
cc967eb290e21633d4103f5db582e07b8de4e87d
[ "MIT" ]
null
null
null
all final/RGB and Gray.ipynb
parthjain98in/image-processing
cc967eb290e21633d4103f5db582e07b8de4e87d
[ "MIT" ]
null
null
null
1,349.32
119,420
0.958964
[ [ [ "import cv2\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "image = cv2.imread(\"mona lisa.jpg\", cv2.IMREAD_COLOR)\nimage[:, :, 0], image[:, :, 2] = np.array(image[:, :, 2]), np.array(image[:, :, 0])", "_____no_output_____" ], [ "plt.imshow(image)", "_____no_output_____" ], [ "shape = image.shape\nprint(shape)", "(1024, 687, 3)\n" ] ], [ [ "Red", "_____no_output_____" ] ], [ [ "red_image = np.array(image)\nred_image[:, :, 1:] = 0\nplt.imshow(red_image)", "_____no_output_____" ] ], [ [ "Green", "_____no_output_____" ] ], [ [ "green_image = np.array(image)\ngreen_image[:,:,0] = 0\ngreen_image[:,:,2] = 0\nplt.imshow(green_image)", "_____no_output_____" ] ], [ [ "Blue", "_____no_output_____" ] ], [ [ "blue_image = np.array(image)\nblue_image[:,:,0:2] = 0\nplt.imshow(blue_image)", "_____no_output_____" ] ], [ [ "Gray", "_____no_output_____" ] ], [ [ "gray_image = np.zeros(shape[:2])\nfor i in range(shape[0]):\n for j in range(shape[1]):\n r = image[i][j][0]\n g = image[i][j][1]\n b = image[i][j][2]\n gray_image[i][j] = 0.3 * r + 0.59 * g + 0.11 * b", "_____no_output_____" ], [ "plt.imshow(gray_image, cmap=\"gray\")", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
ece20171ae82c2a181308af53dfbbb5e3fdec64b
39,488
ipynb
Jupyter Notebook
2-Inferencing/AzureServiceClassifier_Inferencing.ipynb
hyssh/bert-stack-overflow
bed76cbd8ae43b65f17d326e76519221665a9458
[ "MIT" ]
6
2021-05-19T05:59:40.000Z
2021-11-14T08:01:15.000Z
2-Inferencing/AzureServiceClassifier_Inferencing.ipynb
hyssh/bert-stack-overflow
bed76cbd8ae43b65f17d326e76519221665a9458
[ "MIT" ]
null
null
null
2-Inferencing/AzureServiceClassifier_Inferencing.ipynb
hyssh/bert-stack-overflow
bed76cbd8ae43b65f17d326e76519221665a9458
[ "MIT" ]
null
null
null
36.562963
803
0.606843
[ [ [ "Copyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License.", "_____no_output_____" ], [ "# Inferencing with TensorFlow 2.0 on Azure Machine Learning Service", "_____no_output_____" ], [ "## Overview of Workshop\n\nThis notebook is Part 2 (Inferencing and Deploying a Model) of a four part workshop that demonstrates an end-to-end workflow for implementing a BERT model using Tensorflow 2.0 on Azure Machine Learning Service. The different components of the workshop are as follows:\n\n- Part 1: [Preparing Data and Model Training](https://github.com/microsoft/bert-stack-overflow/blob/master/1-Training/AzureServiceClassifier_Training.ipynb)\n- Part 2: [Inferencing and Deploying a Model](https://github.com/microsoft/bert-stack-overflow/blob/master/2-Inferencing/AzureServiceClassifier_Inferencing.ipynb)\n- Part 3: [Setting Up a Pipeline Using MLOps](https://github.com/microsoft/bert-stack-overflow/tree/master/3-ML-Ops)\n- Part 4: [Explaining Your Model Interpretability](https://github.com/microsoft/bert-stack-overflow/blob/master/4-Interpretibility/IBMEmployeeAttritionClassifier_Interpretability.ipynb)\n\nThis workshop shows how to convert a TF 2.0 BERT model and deploy the model as Webservice in step-by-step fashion:\n\n * Initilize your workspace\n * Download a previous saved model (saved on Azure Machine Learning)\n * Test the downloaded model\n * Display scoring script\n * Defining an Azure Environment\n * Deploy Model as Webservice (Local, ACI and AKS)\n * Test Deployment (Azure ML Service Call, Raw HTTP Request)\n * Clean up Webservice", "_____no_output_____" ], [ "## Prerequisites\n* Understand the [architecture and terms](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n* If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup) to:\n * Install the AML SDK\n * Create a workspace and its configuration file (config.json)\n* For local scoring test, you will also need to have Tensorflow and Keras installed in the current Jupyter kernel.\n* Please run through Part 1: [Working With Data and Training](1_AzureServiceClassifier_Training.ipynb) Notebook first to register your model\n* Make sure you enable [Docker for non-root users](https://docs.docker.com/install/linux/linux-postinstall/) (This is needed to run Local Deployment). Run the following commands in your Terminal and go to the your [Jupyter dashboard](/tree) and click `Quit` on the top right corner. After the shutdown, the Notebook will be automatically refereshed with the new permissions.\n```bash\n sudo usermod -a -G docker $USER\n newgrp docker\n```", "_____no_output_____" ], [ "## Azure Service Classification Problem \nOne of the key tasks to ensuring long term success of any Azure service is actively responding to related posts in online forums such as Stackoverflow. In order to keep track of these posts, Microsoft relies on the associated tags to direct questions to the appropriate support team. While Stackoverflow has different tags for each Azure service (azure-web-app-service, azure-virtual-machine-service, etc), people often use the generic **azure** tag. This makes it hard for specific teams to track down issues related to their product and as a result, many questions get left unanswered. 
\n\n**In order to solve this problem, we will be building a model to classify posts on Stackoverflow with the appropriate Azure service tag.**\n\nWe will be using a BERT (Bidirectional Encoder Representations from Transformers) model which was published by researchers at Google AI Language. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of natural language processing (NLP) tasks without substantial architecture modifications.\n\nFor more information about BERT, please read this [paper](https://arxiv.org/pdf/1810.04805.pdf)", "_____no_output_____" ], [ "## Checking Azure Machine Learning Python SDK Version\n\nIf you are running this on a Notebook VM, the Azure Machine Learning Python SDK is installed by default. If you are running this locally, you can follow these [instructions](https://docs.microsoft.com/en-us/python/api/overview/azure/ml/install?view=azure-ml-py) to install it using pip.\n\nThis tutorial requires version 1.27.0 or higher. We can import the Python SDK to ensure it has been properly installed:", "_____no_output_____" ] ], [ [ "# Check core SDK version number\nimport azureml.core\n\nprint(\"SDK version:\", azureml.core.VERSION)", "_____no_output_____" ] ], [ [ "## Connect To Workspace\n\nInitialize a [Workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the prerequisites step. Workspace.from_config() creates a workspace object from the details stored in config.json.", "_____no_output_____" ] ], [ [ "from azureml.core import Workspace\n\nws = Workspace.from_config()\nprint('Workspace name: ' + ws.name, \n 'Azure region: ' + ws.location, \n 'Subscription id: ' + ws.subscription_id, \n 'Resource group: ' + ws.resource_group, sep = '\\n')", "_____no_output_____" ] ], [ [ "#### If the datastore has already been registered, then you (and other users in your workspace) can directly run this cell.", "_____no_output_____" ] ], [ [ "from azureml.core import Datastore\n\ndatastore = Datastore.get(ws, 'mtcseattle')\ndatastore", "_____no_output_____" ] ], [ [ "### Download Model from Datastore\nGet the trained model from an Azure Blob container. The model is saved into two files, ``config.json`` and ``model.h5``.", "_____no_output_____" ] ], [ [ "from azureml.core.model import Model\n\ndatastore.download('./',prefix=\"model\")", "_____no_output_____" ] ], [ [ "### Registering the Model with the Workspace\nRegister the model to use in your workspace. 
", "_____no_output_____" ] ], [ [ "model = Model.register(model_path = \"./model\",\n model_name = \"azure-service-classifier\", # this is the name the model is registered as\n tags = {'pretrained': \"BERT\"},\n workspace = ws)\nmodel_dir = './model'", "_____no_output_____" ] ], [ [ "### Downloading and Using Registered Models\n> If you already completed Part 1: [Working With Data and Training](1_AzureServiceClassifier_Training.ipynb) Notebook.You can dowload your registered BERT Model and use that instead of the model saved on the blob storage.", "_____no_output_____" ], [ "```python\nmodel = ws.models['azure-service-classifier']\nmodel_dir = model.download(target_dir='.', exist_ok=True, exists_ok=None)\n```", "_____no_output_____" ], [ "## Deploy models on Azure ML\n\nNow we are ready to deploy the model as a web service running on your [local](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#local) machine, in Azure Container Instance [ACI](https://azure.microsoft.com/en-us/services/container-instances/) or Azure Kubernetes Service [AKS](https://azure.microsoft.com/en-us/services/kubernetes-service/). Azure Machine Learning accomplishes this by constructing a Docker image with the scoring logic and model baked in. \n> **Note:** For this Notebook, we'll use the original model format for deployment, but the ONNX model can be deployed in the same way by using ONNX Runtime in the scoring script.\n\n![](./images/aml-deploy.png)\n\n\n### Deploying a web service\nOnce you've tested the model and are satisfied with the results, deploy the model as a web service. For this Notebook, we'll use the original model format for deployment, but note that the ONNX model can be deployed in the same way by using ONNX Runtime in the scoring script.\n\nTo build the correct environment, provide the following:\n* A scoring script to show how to use the model\n* An environment file to show what packages need to be installed\n* A configuration file to build the web service\n* The model you trained before\n\nRead more about deployment [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where)", "_____no_output_____" ], [ "### Create score.py", "_____no_output_____" ], [ "First, we will create a scoring script that will be invoked by the web service call. We have prepared a [score.py script](code/scoring/score.py) in advance that scores your BERT model.\n\n* Note that the scoring script must have two required functions, ``init()`` and ``run(input_data)``.\n * In ``init()`` function, you typically load the model into a global object. This function is executed only once when the Docker container is started.\n * In ``run(input_data)`` function, the model is used to predict a value based on the input data. 
The input and output of ``run`` typically use JSON as the serialization and de-serialization format, but you are not limited to that.", "_____no_output_____" ] ], [ [ "%pycat score.py", "_____no_output_____" ] ], [ [ "### Create Environment", "_____no_output_____" ], [ "You can create and/or use a Conda environment using the [Conda Dependencies object](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.conda_dependencies.condadependencies?view=azure-ml-py) when deploying a Webservice.", "_____no_output_____" ] ], [ [ "from azureml.core import Environment\nfrom azureml.core.conda_dependencies import CondaDependencies \n\nmyenv = CondaDependencies.create(conda_packages=['pip','numpy','pandas'],\n pip_packages=['numpy','pandas','inference-schema[numpy-support]','azureml-defaults','tensorflow==2.0.0','transformers==2.0.0','h5py<3.0.0'])\n\nwith open(\"myenv.yml\",\"w\") as f:\n f.write(myenv.serialize_to_string())", "_____no_output_____" ] ], [ [ "Review the content of the `myenv.yml` file.", "_____no_output_____" ] ], [ [ "%pycat myenv.yml", "_____no_output_____" ] ], [ [ "## Create Inference Configuration\n\nWe need to define the [Inference Configuration](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py) for the web service. There is support for a source directory: you can upload an entire folder from your local machine as dependencies for the Webservice.\nNote: in that case, your entry_script and conda_file paths are relative paths to the source_directory path.\n\nSample code for using a source directory:\n\n```python\ninference_config = InferenceConfig(source_directory=\"C:/abc\",\n runtime= \"python\", \n entry_script=\"x/y/score.py\",\n conda_file=\"env/myenv.yml\")\n```\n\n - source_directory = holds the source path as a string; this entire folder gets added to the image, so it is easy to access any files within this folder or subfolder\n - runtime = which runtime to use for the image. Currently supported runtimes are 'spark-py' and 'python'\n - entry_script = contains logic specific to initializing your model and running predictions\n - conda_file = manages conda and python package dependencies.\n \n \n > **Note:** Deployment uses the inference configuration and deployment configuration to deploy the models. The deployment process is similar regardless of the compute target. Deploying to AKS is slightly different because you must provide a reference to the AKS cluster.", "_____no_output_____" ] ], [ [ "from azureml.core.model import InferenceConfig\n\ninference_config = InferenceConfig(source_directory=\"./\",\n runtime= \"python\", \n entry_script=\"score.py\",\n conda_file=\"myenv.yml\"\n )", "_____no_output_____" ] ], [ [ "## Deploy as a Local Service\n\nEstimated time to complete: **about 3-7 minutes**\n\nConfigure the image and deploy it locally. The following code goes through these steps:\n\n* Build an image on the local machine (or VM, if you are using a VM) using:\n * The scoring file (`score.py`)\n * The environment file (`myenv.yml`)\n * The model file \n* Define the [Local Deployment Configuration](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice.localwebservice?view=azure-ml-py#deploy-configuration-port-none-)\n* Send the image to the local Docker instance. 
\n* Start up a container using the image.\n* Get the web service HTTP endpoint.\n* This has a very quick turnaround time and is great for testing the service before it is deployed to production.\n\n> **Note:** Make sure you enable [Docker for non-root users](https://docs.docker.com/install/linux/linux-postinstall/) (This is needed to run Local Deployment). Run the following commands in your Terminal and go to your [Jupyter dashboard](/tree) and click `Quit` on the top right corner. After the shutdown, the Notebook will be automatically refreshed with the new permissions.\n```bash\n sudo usermod -a -G docker $USER\n newgrp docker\n```", "_____no_output_____" ], [ "#### Deploy Local Service\n\nThis may take 7-10 minutes", "_____no_output_____" ] ], [ [ "from azureml.core.model import InferenceConfig, Model\nfrom azureml.core.webservice import LocalWebservice\n\n# Create a local deployment for the web service endpoint\ndeployment_config = LocalWebservice.deploy_configuration()\n# Deploy the service\nlocal_service = Model.deploy(\n ws, \"mymodel\", [model], inference_config, deployment_config)\n# Wait for the deployment to complete\nlocal_service.wait_for_deployment(True)\n# Display the port that the web service is available on\nprint(local_service.port)", "_____no_output_____" ] ], [ [ "This is the scoring web service endpoint:", "_____no_output_____" ] ], [ [ "print(local_service.scoring_uri)", "_____no_output_____" ] ], [ [ "### Test Local Service", "_____no_output_____" ], [ "Let's test the deployed model. Pick a random sample about an issue, and send it to the web service. Note here we are using the run API in the SDK to invoke the service. You can also make raw HTTP calls using any HTTP tool such as curl.\n\nAfter the invocation, we print the returned predictions.", "_____no_output_____" ] ], [ [ "%%time\nimport json\nraw_data = json.dumps({\n 'text': 'My VM is not working'\n})\n\nprediction = local_service.run(input_data=raw_data)", "_____no_output_____" ] ], [ [ "#### Using HTTP call\nWe will make a Jupyter widget so we can construct a raw HTTP request and send it to the service through the widget.", "_____no_output_____" ] ], [ [ "import ipywidgets as widgets\nfrom ipywidgets import Layout, Button, Box, FloatText, Textarea, Dropdown, Label, IntSlider, VBox\n\nfrom IPython.display import display\n\n\nimport requests\n\ntext = widgets.Text(\n value='',\n placeholder='Type a query',\n description='Question:',\n disabled=False\n)\n\nbutton = widgets.Button(description=\"Get Tag!\")\noutput = widgets.Output()\n\nitems = [text, button] \n\nbox_layout = Layout(display='flex',\n flex_flow='row',\n align_items='stretch',\n width='70%')\n\nbox_auto = Box(children=items, layout=box_layout)\n\n\ndef on_button_clicked(b):\n with output:\n input_data = '{\\\"text\\\": \\\"'+ text.value +'\\\"}'\n headers = {'Content-Type':'application/json'}\n resp = requests.post(local_service.scoring_uri, input_data, headers=headers)\n \n print(\"=\"*10)\n print(\"Question:\", text.value)\n print(\"POST to url\", local_service.scoring_uri)\n print(\"Prediction:\", resp.text)\n print(\"=\"*10)\n\nbutton.on_click(on_button_clicked)\n\n#Display the GUI\nVBox([box_auto, output])", "_____no_output_____" ] ], [ [ "### View service Logs (Debug, when something goes wrong)\n>**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** Run this cell\n\nYou should see the phrase **\"hello from the reloaded script\"** in the logs, 
because we added it to the script when we did a service reload.", "_____no_output_____" ] ], [ [ "import pprint\npp = pprint.PrettyPrinter(indent=4)\npp.pprint(local_service.get_logs())", "_____no_output_____" ] ], [ [ "#### Test Web Service with HTTP call\n\nSending a raw HTTP request to the service without using a widget.", "_____no_output_____" ] ], [ [ "query = 'My VM is not working'\ninput_data = '{\\\"text\\\": \\\"'+ query +'\\\"}'\nheaders = {'Content-Type':'application/json'}\nresp = requests.post(local_service.scoring_uri, input_data, headers=headers)\n\nprint(\"=\"*10)\nprint(\"Question:\", query)\nprint(\"POST to url\", local_service.scoring_uri)\nprint(\"Prediction:\", resp.text)\nprint(\"=\"*10)", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "# Appendix\n\nThe following sections are not part of the workshop.", "_____no_output_____" ], [ "### Reloading Webservice\nYou can update your score.py file and then call reload() to quickly restart the service. This will only reload your execution script and dependency files; it will not rebuild the underlying Docker image. As a result, reload() is fast.", "_____no_output_____" ] ], [ [ "%%writefile score.py\nimport os\nimport json\nimport tensorflow as tf\nfrom transformers import TFBertPreTrainedModel, TFBertMainLayer, BertTokenizer\nfrom transformers.modeling_tf_utils import get_initializer\nimport logging\nlogging.getLogger(\"transformers.tokenization_utils\").setLevel(logging.ERROR)\n\n\nclass TFBertForMultiClassification(TFBertPreTrainedModel):\n\n def __init__(self, config, *inputs, **kwargs):\n super(TFBertForMultiClassification, self) \\\n .__init__(config, *inputs, **kwargs)\n self.num_labels = config.num_labels\n\n self.bert = TFBertMainLayer(config, name='bert')\n self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)\n self.classifier = tf.keras.layers.Dense(\n config.num_labels,\n kernel_initializer=get_initializer(config.initializer_range),\n name='classifier',\n activation='softmax')\n\n def call(self, inputs, **kwargs):\n outputs = self.bert(inputs, **kwargs)\n\n pooled_output = outputs[1]\n\n pooled_output = self.dropout(\n pooled_output,\n training=kwargs.get('training', False))\n logits = self.classifier(pooled_output)\n\n # add hidden states and attention if they are here\n outputs = (logits,) + outputs[2:]\n\n return outputs # logits, (hidden_states), (attentions)\n\n\nmax_seq_length = 128\nlabels = ['azure-web-app-service', 'azure-storage',\n 'azure-devops', 'azure-virtual-machine', 'azure-functions']\n\n\ndef init():\n global tokenizer, model\n # os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'azure-service-classifier')\n tokenizer = BertTokenizer.from_pretrained('bert-base-cased')\n model_dir = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model')\n model = TFBertForMultiClassification \\\n .from_pretrained(model_dir, num_labels=len(labels))\n print(\"hello from the reloaded script\")\n\ndef run(raw_data):\n\n # Encode inputs using tokenizer\n inputs = tokenizer.encode_plus(\n json.loads(raw_data)['text'],\n add_special_tokens=True,\n max_length=max_seq_length\n )\n input_ids, token_type_ids = inputs[\"input_ids\"], inputs[\"token_type_ids\"]\n\n # The mask has 1 for real tokens and 0 for padding tokens.\n # Only real tokens are attended to.\n attention_mask = [1] * len(input_ids)\n\n # Zero-pad up to the sequence length.\n padding_length = max_seq_length - len(input_ids)\n input_ids = input_ids + ([0] * padding_length)\n attention_mask = attention_mask + ([0] * padding_length)\n 
token_type_ids = token_type_ids + ([0] * padding_length)\n\n # Make prediction\n predictions = model.predict({\n 'input_ids': tf.convert_to_tensor([input_ids], dtype=tf.int32),\n 'attention_mask': tf.convert_to_tensor(\n [attention_mask],\n dtype=tf.int32),\n 'token_type_ids': tf.convert_to_tensor(\n [token_type_ids], \n dtype=tf.int32)\n })\n\n result = {\n 'prediction': str(labels[predictions[0].argmax().item()]),\n 'probability': str(predictions[0].max()),\n 'message': 'NLP on Azure'\n }\n\n print(result)\n return result\n\n\ninit()\nrun(json.dumps({\n 'text': 'My VM is not working'\n}))\n", "_____no_output_____" ], [ "local_service.reload()", "_____no_output_____" ] ], [ [ "### Updating Webservice\nIf you do need to rebuild the image -- to add a new Conda or pip package, for instance -- you will have to call update() instead (see below).\n\n```python\nlocal_service.update(models=[loaded_model], \n image_config=None, \n deployment_config=None, \n wait=False, inference_config=None)\n```", "_____no_output_____" ], [ "## Deploy in ACI\nEstimated time to complete: **about 3-7 minutes**\n\nConfigure the image and deploy. The following code goes through these steps:\n\n* Build an image using:\n * The scoring file (`score.py`)\n * The environment file (`myenv.yml`)\n * The model file\n* Define the [ACI Deployment Configuration](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice.aciwebservice?view=azure-ml-py#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none-)\n* Send the image to the ACI container.\n* Start up a container in ACI using the image.\n* Get the web service HTTP endpoint.", "_____no_output_____" ] ], [ [ "%%time\nfrom azureml.exceptions import WebserviceException\nfrom azureml.core.webservice import AciWebservice, Webservice\n\n## Create a deployment configuration file and specify the number of CPUs and gigabytes of RAM needed for your ACI container. \n## If you feel you need more later, you would have to recreate the image and redeploy the service.\naciconfig = AciWebservice.deploy_configuration(cpu_cores=2, \n memory_gb=4, \n tags={\"model\": \"BERT\", \"method\" : \"tensorflow\"}, \n description='Predict StackoverFlow tags with BERT')\n\naci_service_name = 'asc-aciservice'\n\ntry:\n # If you want to get an existing service, below is the command.\n # Since the ACI name needs to be unique in the subscription, we delete the existing ACI service if any;\n # we use aci_service_name to create the Azure ACI service.\n aci_service = Webservice(ws, name=aci_service_name)\n if aci_service:\n aci_service.delete()\nexcept WebserviceException as e:\n print()\n\naci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n\naci_service.wait_for_deployment(True)\nprint(aci_service.state)", "_____no_output_____" ] ], [ [ "This is the scoring web service endpoint:", "_____no_output_____" ] ], [ [ "print(aci_service.scoring_uri)", "_____no_output_____" ] ], [ [ "### Test the deployed model", "_____no_output_____" ], [ "Let's test the deployed model. Pick a random sample about an Azure issue, and send it to the web service. Note here we are using the run API in the SDK to invoke the service. 
You can also make raw HTTP calls using any HTTP tool such as curl.\n\nAfter the invocation, we print the returned predictions.", "_____no_output_____" ] ], [ [ "%%time\nimport json\nraw_data = json.dumps({\n 'text': 'My VM is not working'\n})\n\nprediction = aci_service.run(input_data=raw_data)\nprint(prediction)", "_____no_output_____" ] ], [ [ "### View service Logs (Debug, when something goes wrong)\n>**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** Run this cell", "_____no_output_____" ] ], [ [ "import pprint\npp = pprint.PrettyPrinter(indent=4)\npp.pprint(aci_service.get_logs())", "_____no_output_____" ] ], [ [ "## Deploy in AKS (Single Node)", "_____no_output_____" ], [ "Estimated time to complete: **about 15-25 minutes**, 10-15 minutes for AKS provisioning and 5-10 minutes to deploy the service\n\nConfigure the image and deploy. The following code goes through these steps:\n\n* Provision a Production AKS Cluster\n* Build an image using:\n * The scoring file (`score.py`)\n * The environment file (`myenv.yml`)\n * The model file\n* Define the [AKS Provisioning Configuration](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.compute.akscompute?view=azure-ml-py#provisioning-configuration-agent-count-none--vm-size-none--ssl-cname-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--location-none--vnet-resourcegroup-name-none--vnet-name-none--subnet-name-none--service-cidr-none--dns-service-ip-none--docker-bridge-cidr-none--cluster-purpose-none-)\n* Provision an AKS Cluster\n* Define the [AKS Deployment Configuration](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice.akswebservice?view=azure-ml-py#deploy-configuration-autoscale-enabled-none--autoscale-min-replicas-none--autoscale-max-replicas-none--autoscale-refresh-seconds-none--autoscale-target-utilization-none--collect-model-data-none--auth-enabled-none--cpu-cores-none--memory-gb-none--enable-app-insights-none--scoring-timeout-ms-none--replica-max-concurrent-requests-none--max-request-wait-time-none--num-replicas-none--primary-key-none--secondary-key-none--tags-none--properties-none--description-none--gpu-cores-none--period-seconds-none--initial-delay-seconds-none--timeout-seconds-none--success-threshold-none--failure-threshold-none--namespace-none--token-auth-enabled-none-)\n* Send the image to the AKS cluster.\n* Start up a container in AKS using the image.\n* Get the web service HTTP endpoint.", "_____no_output_____" ], [ "#### Provisioning Cluster", "_____no_output_____" ] ], [ [ "from azureml.core.compute import AksCompute, ComputeTarget\n\n# Use the default configuration (you can also provide parameters to customize this).\n# For example, to create a dev/test cluster, use:\n# prov_config = AksCompute.provisioning_configuration(cluster_purpose = AksCompute.ClusterPurpose.DEV_TEST)\nprov_config = AksCompute.provisioning_configuration()\n\naks_name = 'mtcs-amldev-aks'\n# Create the cluster\naks_target = ComputeTarget.create(workspace = ws,\n name = aks_name,\n provisioning_configuration = prov_config)\n\n# Wait for the create process to complete\naks_target.wait_for_completion(show_output = True)", "_____no_output_____" ] ], [ [ "#### Deploying the model", "_____no_output_____" ] ], [ [ "from azureml.core.webservice import AksWebservice, Webservice\nfrom azureml.core.model import Model\n\naks_target = AksCompute(ws,\"mtcs-amldev-aks\")\n\n## Create a deployment configuration file and specify the number of 
CPUs and gigabytes of RAM needed for your cluster. \n## If you feel you need more later, you would have to recreate the image and redeploy the service.\ndeployment_config = AksWebservice.deploy_configuration(cpu_cores = 2, memory_gb = 4)\n\naks_service = Model.deploy(ws, \"azureserviceclassifier-bert\", [model], inference_config, deployment_config, aks_target)\naks_service.wait_for_deployment(show_output = True)\nprint(aks_service.state)", "_____no_output_____" ] ], [ [ "### Test the deployed model", "_____no_output_____" ], [ "#### Using the Azure SDK service call", "_____no_output_____" ], [ "We can use the Azure SDK to make a service call with a simple function.", "_____no_output_____" ] ], [ [ "%%time\nimport json\nraw_data = json.dumps({\n 'text': 'My VM is not working'\n})\n\nprediction = aks_service.run(input_data=raw_data)\nprint(prediction)", "{'prediction': 'azure-virtual-machine', 'probability': '0.98652285', 'message': 'NLP on Azure'}\nCPU times: user 24.4 ms, sys: 1.69 ms, total: 26.1 ms\nWall time: 8.84 s\n" ] ], [ [ "This is the scoring web service endpoint:", "_____no_output_____" ] ], [ [ "print(aks_service.scoring_uri)", "_____no_output_____" ] ], [ [ "### View service Logs (Debug, when something goes wrong)\n>**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** Run this cell", "_____no_output_____" ] ], [ [ "import pprint\npp = pprint.PrettyPrinter(indent=4)\npp.pprint(aks_service.get_logs())", "_____no_output_____" ] ], [ [ "## Summary of workspace\nLet's look at the workspace after the web service was deployed. You should see:\n\n* a registered model, with its name and ID\n* the AKS and ACI webservices, each with a scoring URI", "_____no_output_____" ] ], [ [ "models = ws.models\nfor name, model in models.items():\n print(\"Model: {}, ID: {}\".format(name, model.id))\n \nwebservices = ws.webservices\nfor name, webservice in webservices.items():\n print(\"Webservice: {}, scoring URI: {}\".format(name, webservice.scoring_uri))", "Model: azure-service-classifier, ID: azure-service-classifier:2\nWebservice: azureserviceclassifier-bert, scoring URI: http://52.156.100.189:80/api/v1/service/azureserviceclassifier-bert/score\n" ] ], [ [ "## Delete web services to clean up\nYou can delete the local, ACI and AKS deployments with a simple delete API call.", "_____no_output_____" ] ], [ [ "local_service.delete()\naci_service.delete()\naks_service.delete()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ece206738a89b549f5137f7b7eb0195e30669704
2,299
ipynb
Jupyter Notebook
lt226_invert_binary_tree.ipynb
devkosal/code_challenges
0591c4839555376a231682db7cc12c8a70515b09
[ "MIT" ]
null
null
null
lt226_invert_binary_tree.ipynb
devkosal/code_challenges
0591c4839555376a231682db7cc12c8a70515b09
[ "MIT" ]
null
null
null
lt226_invert_binary_tree.ipynb
devkosal/code_challenges
0591c4839555376a231682db7cc12c8a70515b09
[ "MIT" ]
null
null
null
21.895238
74
0.474554
[ [ [ "https://leetcode.com/problems/invert-binary-tree/submissions/", "_____no_output_____" ] ], [ [ "# Definition for a binary tree node.\nclass TreeNode:\n def __init__(self, x):\n self.val = x\n self.left = None\n self.right = None\n\n# my solution 1\nclass Solution:\n def invertTree(self, root: TreeNode) -> TreeNode:\n def flip(node):\n if node: \n if node.right or node.left:\n node.right, node.left = node.left, node.right\n flip(node.right)\n flip(node.left)\n flip(root)\n return root", "_____no_output_____" ], [ "# my solution 2\nclass Solution:\n def invertTree(self, root: TreeNode) -> TreeNode:\n if not root: return root\n root.right, root.left = root.left, root.right\n invertTree(root.right)\n invertTree(root.left)\n return root\n \n ", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ] ]
ece223189e2dc686edc07e33d59a01e4258d4d1a
19,006
ipynb
Jupyter Notebook
Resources/Data-Science/Machine-Learning/Logistic-Regression/Testing the Model - Exercise..ipynb
nate4bs/DataRepo
916f75ff2996967c62684bc75be23dd23aa59a67
[ "MIT" ]
null
null
null
Resources/Data-Science/Machine-Learning/Logistic-Regression/Testing the Model - Exercise..ipynb
nate4bs/DataRepo
916f75ff2996967c62684bc75be23dd23aa59a67
[ "MIT" ]
1
2020-06-11T23:14:24.000Z
2020-06-11T23:14:24.000Z
Resources/Data-Science/Machine-Learning/Logistic-Regression/Testing the Model - Exercise..ipynb
nate4bs/DataRepo
916f75ff2996967c62684bc75be23dd23aa59a67
[ "MIT" ]
null
null
null
27.346763
253
0.387194
[ [ [ "# Testing the model", "_____no_output_____" ], [ "Using your solution so far, test the model on new data.\n\nThe new data is located in the โ€˜Bank_data_testing.csvโ€™.\n\nGood luck!", "_____no_output_____" ], [ "## Import the relevant libraries", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set()", "_____no_output_____" ] ], [ [ "## Load the data", "_____no_output_____" ], [ "Load the โ€˜Bank_data.csvโ€™ dataset.", "_____no_output_____" ] ], [ [ "df = pd.read_csv('Bank-data.csv')\ndf.head()", "_____no_output_____" ], [ "df.set_index('Unnamed: 0',inplace=True)\ndf['y'] = df['y'].map({'yes':1,'no':0})\ndf.head()", "_____no_output_____" ] ], [ [ "### Declare the dependent and independent variables", "_____no_output_____" ], [ "Use 'duration' as the independet variable.", "_____no_output_____" ] ], [ [ "y =df['y']\nx1=df['duration']", "_____no_output_____" ] ], [ [ "### Simple Logistic Regression", "_____no_output_____" ], [ "Run the regression and graph the scatter plot.", "_____no_output_____" ] ], [ [ "x=sm.add_constant(x1)\nreg = sm.Logit(y,x)\nresults = reg.fit()", "Optimization terminated successfully.\n Current function value: 0.546118\n Iterations 7\n" ], [ "results.summary()", "_____no_output_____" ] ], [ [ "## Expand the model", "_____no_output_____" ], [ "We can be omitting many causal factors in our simple logistic model, so we instead switch to a multivariate logistic regression model. Add the โ€˜interest_rateโ€™, โ€˜marchโ€™, โ€˜creditโ€™ and โ€˜previousโ€™ estimators to our model and run the regression again. ", "_____no_output_____" ], [ "### Declare the independent variable(s)", "_____no_output_____" ] ], [ [ "estimators=['interest_rate','credit','march','previous','duration']\n\nX1 = df[estimators]\ny = df['y']", "_____no_output_____" ], [ "X = sm.add_constant(X1)\nreg = sm.Logit(y,X)\nresults = reg.fit()", "Optimization terminated successfully.\n Current function value: 0.336664\n Iterations 7\n" ] ], [ [ "### Confusion Matrix", "_____no_output_____" ], [ "Find the confusion matrix of the model and estimate its accuracy. ", "_____no_output_____" ], [ "<i> For convenience we have already provided you with a function that finds the confusion matrix and the model accuracy.</i>", "_____no_output_____" ] ], [ [ "def confusion_matrix(data,actual_values,model):\n \n # Confusion matrix \n \n # Parameters\n # ----------\n # data: data frame or array\n # data is a data frame formatted in the same way as your input data (without the actual values)\n # e.g. const, var1, var2, etc. Order is very important!\n # actual_values: data frame or array\n # These are the actual values from the test_data\n # In the case of a logistic regression, it should be a single column with 0s and 1s\n \n # model: a LogitResults object\n # this is the variable where you have the fitted model \n # e.g. 
results_log in this course\n # ----------\n \n # Predict the values using the Logit model\n pred_values = model.predict(data)\n # Specify the bins \n bins=np.array([0,0.5,1])\n # Create a histogram, where values between 0 and 0.5 will be considered 0\n # and values between 0.5 and 1 will be considered 1\n cm = np.histogram2d(actual_values, pred_values, bins=bins)[0]\n # Calculate the accuracy\n accuracy = (cm[0,0]+cm[1,1])/cm.sum()\n # Return the confusion matrix and the accuracy\n return cm, accuracy", "_____no_output_____" ], [ "confusion_matrix(X,y,results)", "_____no_output_____" ] ], [ [ "## Test the model", "_____no_output_____" ], [ "Load the test data from the 'Bank-data-testing.csv' file provided. (Remember to convert the outcome variable 'y' into Boolean). ", "_____no_output_____" ], [ "### Load new data ", "_____no_output_____" ] ], [ [ "df2 = pd.read_csv('Bank-data-testing.csv')", "_____no_output_____" ], [ "df2.set_index('Unnamed: 0',inplace=True)\ndf2['y'] = df2['y'].map({'yes':1,'no':0})", "_____no_output_____" ] ], [ [ "### Declare the dependent and the independent variables", "_____no_output_____" ] ], [ [ "y= df2['y']", "_____no_output_____" ], [ "x1= df2[estimators]\nx=sm.add_constant(x1)", "_____no_output_____" ] ], [ [ "Determine the test confusion matrix and the test accuracy and compare them with the train confusion matrix and the train accuracy.", "_____no_output_____" ] ], [ [ "confusion_matrix(x,y,results)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
ece227e199375be30f9b84ee31af5bef2701e7d9
5,760
ipynb
Jupyter Notebook
inference_DM.ipynb
moonfoam/fewshot-font-generation
b4079423894dff42eaa4d9f57095c5ec15fb5d7c
[ "MIT" ]
49
2021-11-05T05:26:38.000Z
2022-03-31T15:24:56.000Z
inference_DM.ipynb
moonfoam/fewshot-font-generation
b4079423894dff42eaa4d9f57095c5ec15fb5d7c
[ "MIT" ]
4
2021-12-15T05:41:32.000Z
2022-03-15T06:53:40.000Z
inference_DM.ipynb
moonfoam/fewshot-font-generation
b4079423894dff42eaa4d9f57095c5ec15fb5d7c
[ "MIT" ]
10
2021-11-11T15:59:03.000Z
2022-03-21T08:30:50.000Z
28.8
173
0.526563
[ [ [ "# Generating images with DM-Font model from a reference style\nIn this example we'll generate images with trained DM-Font model from a reference style.\nIf you want to generate multiple styles, please check using `inference.py` instead of using this example file (because it is much simpler to load the referece styles).", "_____no_output_____" ], [ "### 1. Loading packages\n* First, load the packages used in this code.\n* All of the packages are avilable in `pip`.", "_____no_output_____" ] ], [ [ "import json\nfrom pathlib import Path\nfrom PIL import Image\n\nimport torch\nfrom sconf import Config\nfrom torchvision import transforms\n\ntransform = transforms.Compose([\n transforms.Resize((128, 128)),\n transforms.ToTensor(),\n transforms.Normalize([0.5], [0.5])\n])", "_____no_output_____" ] ], [ [ "* These modules are defined in this repository.", "_____no_output_____" ] ], [ [ "from base.dataset import read_font, render\nfrom base.utils import save_tensor_to_image, load_reference\nfrom DM.models import Generator\nfrom inference import infer_DM", "_____no_output_____" ] ], [ [ "### 2. Build model\n* Build and load the trained model.\n* `weight_path` : \n * The location of the trained model weight.\n* `decomposition` : \n * The location of the pre-defined decomposition rule file.\n* `n_heads` :\n * The number of heads. 3 for the Korean script.\n* `n_comps` :\n * The number of total components. 68 for the Korean script.", "_____no_output_____" ] ], [ [ "###############################################################\nweight_path = \"weights/kor/dmfont.pth\" # path to weight to infer\ndecomposition = \"data/kor/decomposition_DM.json\"\nn_heads = 3\nn_comps = 68\n###############################################################\n\n# building and loading the model (not recommended to modify)\ncfg = Config(\"cfgs/DM/default.yaml\")\ndecomposition = json.load(open(decomposition))\n\ngen = Generator(n_heads=n_heads, n_comps=n_comps).cuda().eval()\nweight = torch.load(weight_path)\ngen.load_state_dict(weight[\"generator_ema\"])", "_____no_output_____" ] ], [ [ "### 3. Load reference images.\n* `ref_path`: \n * The path of reference font or images.\n * If you are using a ttf file, set this to the location of the ttf file.\n * If you want to use rendered images, set this to the path to the directory which contains the reference images.\n* `ref_chars`:\n * The characters of reference images.\n * If this is `None`, all the available images will be loaded.\n* `extension`:\n * If you are using ttf files, set this to \"ttf\".\n * If you are using image files, set this to their extension(png, jpg, etc..).", "_____no_output_____" ] ], [ [ "###############################################################\nref_path = \"data_example\"\nextension = \"png\"\nref_chars = None\n## Comment upper lines and uncomment lower lines to test with ttf files.\n# extension = \"ttf\"\n# ref_chars = \"๊ฐ’๊ฐ™๊ณฌ๊ณถ๊นŽ๋„‹๋Šช๋‹ซ๋‹ญ๋‹ป๋ฉ๋—Œ๋žต๋ชƒ๋ฐŸ๋ณ˜๋บ๋ฝˆ์†ฉ์์•‰์•Š์–˜์–พ์—Œ์˜ณ์Š์ฃก์ฎœ์ถฐ์ธ„ํ€ญํ‹”ํ•€ํ•ฅํ›Ÿ\"\n###############################################################\n\nref_dict, load_img = load_reference(ref_path, extension, ref_chars)", "_____no_output_____" ] ], [ [ "### 4. 
Generate and save the images.\n* `gen_chars`: The characters to generate.\n* `save_dir`: Path to save the generated images.\n* `batch_size`: The number of images inferred at once.", "_____no_output_____" ] ], [ [ "###############################################################\ngen_chars = \"์ข‹์€ํ•˜๋ฃจ๋˜์„ธ์š”\" # characters to generate\nsave_dir = \"./result/dm\"\nbatch_size = 16\n###############################################################\n\ninfer_DM(gen, save_dir, gen_chars, ref_dict, load_img, decomposition, batch_size)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ece22bb56c88eb88935ff12c3020b9163c4ba904
7,798
ipynb
Jupyter Notebook
finch/tensorflow2/text_matching/snli/data/make_data.ipynb
hengchao0248/tensorflow-nlp
844f7b092a92aa0a1fd8f6c24364243d60b8af80
[ "MIT" ]
248
2019-07-18T05:59:03.000Z
2022-03-29T21:57:24.000Z
finch/tensorflow2/text_matching/snli/data/make_data.ipynb
NLP4Science/tensorflow-nlp
4944ace8e861d89282cbae3123016c71c0869a6c
[ "MIT" ]
5
2019-07-26T09:29:20.000Z
2020-11-01T08:40:13.000Z
finch/tensorflow2/text_matching/snli/data/make_data.ipynb
NLP4Science/tensorflow-nlp
4944ace8e861d89282cbae3123016c71c0869a6c
[ "MIT" ]
68
2019-07-25T06:59:58.000Z
2022-03-22T06:44:30.000Z
27.75089
92
0.409207
[ [ [ "\"\"\"\nWe use following lines because we are running on Google Colab\nIf you are running notebook on a local computer, you don't need this cell\n\"\"\"\nfrom google.colab import drive\ndrive.mount('/content/gdrive')\nimport os\nos.chdir('/content/gdrive/My Drive/finch/tensorflow1/text_matching/snli/main')", "_____no_output_____" ], [ "import numpy as np\nimport re\n\nfrom collections import Counter\nfrom pathlib import Path", "_____no_output_____" ] ], [ [ "Make Data", "_____no_output_____" ] ], [ [ "def normalize(x):\n x = x.lower()\n x = x.replace('.', '')\n x = x.replace(',', '')\n x = x.replace(';', '')\n x = x.replace('!', '')\n x = x.replace('#', '')\n x = x.replace('(', '')\n x = x.replace(')', '')\n x = x.replace(':', '')\n x = x.replace('%', '')\n x = x.replace('&', '')\n x = x.replace('$', '')\n x = x.replace('?', '')\n x = x.replace('\"', '')\n x = x.replace('/', ' ')\n x = x.replace('-', ' ')\n x = x.replace(\"n't\", \" n't \")\n x = x.replace(\"'\", \" ' \")\n x = re.sub(r'\\d+', ' <num> ', x)\n x = re.sub(r'\\s+', ' ', x)\n return x", "_____no_output_____" ], [ "def write_text(in_path, out_path):\n with open(in_path) as f_in, open(out_path, 'w') as f_out:\n f_in.readline()\n for line in f_in:\n line = line.rstrip()\n sp = line.split('\\t')\n label, sent1, sent2 = sp[0], sp[5], sp[6]\n\n sent1 = normalize(sent1)\n sent2 = normalize(sent2)\n\n f_out.write(label+'\\t'+sent1+'\\t'+sent2+'\\n')", "_____no_output_____" ], [ "write_text('../data/snli_1.0/snli_1.0_train.txt', '../data/train.txt')\nwrite_text('../data/snli_1.0/snli_1.0_test.txt', '../data/test.txt')", "_____no_output_____" ] ], [ [ "Make Vocabulary", "_____no_output_____" ] ], [ [ "counter = Counter()\nwith open('../data/train.txt') as f:\n for line in f:\n line = line.rstrip()\n label, sent1, sent2 = line.split('\\t')\n counter.update(sent1.split())\n counter.update(sent2.split())\n\nwords = [w for w, freq in counter.most_common() if freq >= 3]\n\nPath('../vocab').mkdir(exist_ok=True)\n\nwith open('../vocab/word.txt', 'w') as f:\n f.write('<pad>'+'\\n')\n for w in words:\n f.write(w+'\\n')", "_____no_output_____" ] ], [ [ "Make Pretrained Embedding", "_____no_output_____" ] ], [ [ "def norm_weight(nin, nout, scale=0.01):\n W = scale * np.random.randn(nin, nout)\n return W.astype(np.float32)", "_____no_output_____" ], [ "word2idx = {}\nwith open('../vocab/word.txt') as f:\n for i, line in enumerate(f):\n line = line.rstrip()\n word2idx[line] = i", "_____no_output_____" ], [ "embedding = norm_weight(len(word2idx)+1, 300)\n\nwith open('../data/glove.840B.300d.txt') as f:\n count = 0\n for i, line in enumerate(f):\n if i % 100000 == 0:\n print('- At line {}'.format(i))\n line = line.rstrip()\n sp = line.split(' ')\n word, vec = sp[0], sp[1:]\n if word in word2idx:\n count += 1\n embedding[word2idx[word]] = np.asarray(vec, dtype=np.float32)\n \nprint(\"[%d / %d] words have found pre-trained values\"%(count, len(word2idx)))\nnp.save('../vocab/word.npy', embedding)\nprint('Saved ../vocab/word.npy')", "- At line 0\n- At line 100000\n- At line 200000\n- At line 300000\n- At line 400000\n- At line 500000\n- At line 600000\n- At line 700000\n- At line 800000\n- At line 900000\n- At line 1000000\n- At line 1100000\n- At line 1200000\n- At line 1300000\n- At line 1400000\n- At line 1500000\n- At line 1600000\n- At line 1700000\n- At line 1800000\n- At line 1900000\n- At line 2000000\n- At line 2100000\n[20333 / 20883] words have found pre-trained values\nSaved ../vocab/word.npy\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
ece238ab779cc0803cd65bf17d89ecbb91827558
122,898
ipynb
Jupyter Notebook
Lesson_2/StudentAdmissions.ipynb
Vector202/PyTorch-Scholarship-Challenge-1
ccc944b54cd455a0d62475f1c73d365492cbb3f5
[ "MIT" ]
21
2018-11-24T10:47:31.000Z
2020-01-24T17:58:59.000Z
Lesson_2/StudentAdmissions.ipynb
Vector202/PyTorch-Scholarship-Challenge-1
ccc944b54cd455a0d62475f1c73d365492cbb3f5
[ "MIT" ]
null
null
null
Lesson_2/StudentAdmissions.ipynb
Vector202/PyTorch-Scholarship-Challenge-1
ccc944b54cd455a0d62475f1c73d365492cbb3f5
[ "MIT" ]
11
2018-12-08T23:10:07.000Z
2020-12-23T06:08:57.000Z
127.619938
28,116
0.831551
[ [ [ "# Predicting Student Admissions with Neural Networks\nIn this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:\n- GRE Scores (Test)\n- GPA Scores (Grades)\n- Class rank (1-4)\n\nThe dataset originally came from here: http://www.ats.ucla.edu/\n\n## Loading the data\nTo load the data and format it nicely, we will use two very useful packages called Pandas and Numpy. You can read on the documentation here:\n- https://pandas.pydata.org/pandas-docs/stable/\n- https://docs.scipy.org/", "_____no_output_____" ] ], [ [ "# Importing pandas and numpy\nimport pandas as pd\nimport numpy as np\n\n# Reading the csv file into a pandas DataFrame\ndata = pd.read_csv('student_data.csv')\n\n# Printing out the first 10 rows of our data\ndata[:10]", "_____no_output_____" ] ], [ [ "## Plotting the data\n\nFirst let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ingore the rank.", "_____no_output_____" ] ], [ [ "# Importing matplotlib\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Function to help us plot\ndef plot_points(data):\n X = np.array(data[[\"gre\",\"gpa\"]])\n y = np.array(data[\"admit\"])\n admitted = X[np.argwhere(y==1)]\n rejected = X[np.argwhere(y==0)]\n plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')\n plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')\n plt.xlabel('Test (GRE)')\n plt.ylabel('Grades (GPA)')\n \n# Plotting the points\nplot_points(data)\nplt.show()", "_____no_output_____" ] ], [ [ "Roughly, it looks like the students with high scores in the grades and test passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would. Maybe it would help to take the rank into account? Let's make 4 plots, each one for each rank.", "_____no_output_____" ] ], [ [ "# Separating the ranks\ndata_rank1 = data[data[\"rank\"]==1]\ndata_rank2 = data[data[\"rank\"]==2]\ndata_rank3 = data[data[\"rank\"]==3]\ndata_rank4 = data[data[\"rank\"]==4]\n\n# Plotting the graphs\nplot_points(data_rank1)\nplt.title(\"Rank 1\")\nplt.show()\nplot_points(data_rank2)\nplt.title(\"Rank 2\")\nplt.show()\nplot_points(data_rank3)\nplt.title(\"Rank 3\")\nplt.show()\nplot_points(data_rank4)\nplt.title(\"Rank 4\")\nplt.show()", "_____no_output_____" ] ], [ [ "This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it.\n\n## One-hot encoding the rank\nUse the `get_dummies` function in numpy in order to one-hot encode the data.", "_____no_output_____" ] ], [ [ "# Make dummy variables for rank\none_hot_data = pd.concat([data, pd.get_dummies(data['rank'], prefix='rank')], axis=1)\n\n# Drop the previous rank column\none_hot_data = one_hot_data.drop('rank', axis=1)\n\n# Print the first 10 rows of our data\none_hot_data[:10]", "_____no_output_____" ] ], [ [ "## Scaling the data\nThe next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means our data is skewed, and that makes it hard for a neural network to handle. 
Let's fit our two features into a range of 0-1, by dividing the grades by 4.0, and the test score by 800.", "_____no_output_____" ] ], [ [ "# Copying our data\nprocessed_data = one_hot_data[:]\n\n# Scaling the columns\nprocessed_data['gre'] = processed_data['gre']/800\nprocessed_data['gpa'] = processed_data['gpa']/4.0\nprocessed_data[:10]", "_____no_output_____" ] ], [ [ "## Splitting the data into Training and Testing", "_____no_output_____" ], [ "In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data.", "_____no_output_____" ] ], [ [ "sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)\ntrain_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)\n\nprint(\"Number of training samples is\", len(train_data))\nprint(\"Number of testing samples is\", len(test_data))\nprint(train_data[:10])\nprint(test_data[:10])", "Number of training samples is 360\nNumber of testing samples is 40\n admit gre gpa rank_1 rank_2 rank_3 rank_4\n280 0 0.825 0.9850 0 1 0 0\n112 0 0.450 0.7500 0 0 1 0\n189 0 0.625 0.8375 0 1 0 0\n44 0 0.875 0.7350 0 1 0 0\n263 1 0.775 0.9875 0 0 1 0\n178 0 0.775 0.8325 0 0 1 0\n142 0 0.775 0.9850 0 0 0 1\n17 0 0.450 0.6400 0 0 1 0\n7 0 0.500 0.7700 0 1 0 0\n31 0 0.950 0.8375 0 0 1 0\n admit gre gpa rank_1 rank_2 rank_3 rank_4\n1 1 0.825 0.9175 0 0 1 0\n2 1 1.000 1.0000 1 0 0 0\n4 0 0.650 0.7325 0 0 0 1\n11 0 0.550 0.8050 1 0 0 0\n13 0 0.875 0.7700 0 1 0 0\n28 1 0.975 0.8050 0 1 0 0\n49 0 0.500 0.8375 0 0 1 0\n51 0 0.550 0.7825 0 0 0 1\n57 0 0.475 0.7350 0 0 1 0\n58 0 0.500 0.9125 0 1 0 0\n" ] ], [ [ "## Splitting the data into features and targets (labels)\nNow, as a final step before the training, we'll split the data into features (X) and targets (y).", "_____no_output_____" ] ], [ [ "features = train_data.drop('admit', axis=1)\ntargets = train_data['admit']\nfeatures_test = test_data.drop('admit', axis=1)\ntargets_test = test_data['admit']\n\nprint(features[:10])\nprint(targets[:10])", " gre gpa rank_1 rank_2 rank_3 rank_4\n280 0.825 0.9850 0 1 0 0\n112 0.450 0.7500 0 0 1 0\n189 0.625 0.8375 0 1 0 0\n44 0.875 0.7350 0 1 0 0\n263 0.775 0.9875 0 0 1 0\n178 0.775 0.8325 0 0 1 0\n142 0.775 0.9850 0 0 0 1\n17 0.450 0.6400 0 0 1 0\n7 0.500 0.7700 0 1 0 0\n31 0.950 0.8375 0 0 1 0\n280 0\n112 0\n189 0\n44 0\n263 1\n178 0\n142 0\n17 0\n7 0\n31 0\nName: admit, dtype: int64\n" ] ], [ [ "## Training the 2-layer Neural Network\nThe following function trains the 2-layer neural network. First, we'll write some helper functions.", "_____no_output_____" ] ], [ [ "# Activation (sigmoid) function\ndef sigmoid(x):\n return 1 / (1 + np.exp(-x))\ndef sigmoid_prime(x):\n return sigmoid(x) * (1-sigmoid(x))\ndef error_formula(y, output):\n return - y*np.log(output) - (1 - y) * np.log(1-output)", "_____no_output_____" ] ], [ [ "# TODO: Backpropagate the error\nNow it's your turn to shine. Write the error term. 
Remember that this is given by the equation $$ (y-\\hat{y}) \\sigma'(x) $$ (the positive sign matches the `+=` weight update used in the training function below)", "_____no_output_____" ] ], [ [ "#Write the error term formula\ndef error_term_formula(y, output):\n    return (y-output) * output * (1 - output)", "_____no_output_____" ], [ "# Neural Network hyperparameters\nepochs = 1000\nlearnrate = 0.5\n\n# Training function\ndef train_nn(features, targets, epochs, learnrate):\n    \n    # Use the same seed to make debugging easier\n    np.random.seed(42)\n\n    n_records, n_features = features.shape\n    last_loss = None\n\n    # Initialize weights\n    weights = np.random.normal(scale=1 / n_features**.5, size=n_features)\n\n    for e in range(epochs):\n        del_w = np.zeros(weights.shape)\n        for x, y in zip(features.values, targets):\n            # Loop through all records, x is the input, y is the target\n\n            # Activation of the output unit\n            # Notice we multiply the inputs and the weights here \n            # rather than storing h as a separate variable \n            output = sigmoid(np.dot(x, weights))\n\n            # The error (cross-entropy loss); computed for reference only,\n            # the weight update below uses the error term instead\n            error = error_formula(y, output)\n\n            # The error term\n            # Notice we calculate f'(h) here instead of defining a separate\n            # sigmoid_prime function. This just makes it faster because we\n            # can re-use the result of the sigmoid function stored in\n            # the output variable\n            error_term = error_term_formula(y, output)\n\n            # The gradient descent step, the error times the gradient times the inputs\n            del_w += error_term * x\n\n        # Update the weights here. The learning rate times the \n        # change in weights, divided by the number of records to average\n        weights += learnrate * del_w / n_records\n\n        # Printing out the mean square error on the training set\n        if e % (epochs / 10) == 0:\n            out = sigmoid(np.dot(features, weights))\n            loss = np.mean((out - targets) ** 2)\n            print(\"Epoch:\", e)\n            if last_loss and last_loss < loss:\n                print(\"Train loss: \", loss, \" WARNING - Loss Increasing\")\n            else:\n                print(\"Train loss: \", loss)\n            last_loss = loss\n            print(\"=========\")\n    print(\"Finished training!\")\n    return weights\n    \nweights = train_nn(features, targets, epochs, learnrate)", "Epoch: 0\nTrain loss: 0.27289090923487286\n=========\nEpoch: 100\nTrain loss: 0.2091464065794889\n=========\nEpoch: 200\nTrain loss: 0.2068100128367552\n=========\nEpoch: 300\nTrain loss: 0.2057868276183008\n=========\nEpoch: 400\nTrain loss: 0.2052815974209616\n=========\nEpoch: 500\nTrain loss: 0.20498741894721748\n=========\nEpoch: 600\nTrain loss: 0.20478455559815276\n=========\nEpoch: 700\nTrain loss: 0.20462416089862026\n=========\nEpoch: 800\nTrain loss: 0.20448519205238247\n=========\nEpoch: 900\nTrain loss: 0.20435807129923608\n=========\nFinished training!\n" ] ], [ [ "## Calculating the Accuracy on the Test Data", "_____no_output_____" ] ], [ [ "# Calculate accuracy on test data\ntest_out = sigmoid(np.dot(features_test, weights))\npredictions = test_out > 0.5\naccuracy = np.mean(predictions == targets_test)\nprint(\"Prediction accuracy: {:.3f}\".format(accuracy))", "Prediction accuracy: 0.675\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
ece241fb4b6ffd3b375b8a4e850c5f6f10b7ecd5
8,051
ipynb
Jupyter Notebook
example_notebooks/Advanced.ipynb
qinjunjerry/jupyter-vvp
0052bbca773cf4783dd83a660e17536a0a02bc4b
[ "Apache-2.0" ]
null
null
null
example_notebooks/Advanced.ipynb
qinjunjerry/jupyter-vvp
0052bbca773cf4783dd83a660e17536a0a02bc4b
[ "Apache-2.0" ]
null
null
null
example_notebooks/Advanced.ipynb
qinjunjerry/jupyter-vvp
0052bbca773cf4783dd83a660e17536a0a02bc4b
[ "Apache-2.0" ]
null
null
null
22.488827
167
0.498696
[ [ [ "# Advanced features\n\nThis notebook will demonstrate a few more advanced features of the Jupyter Ververica Platform integration.\n\nFirst we again load the module and connect to an Ververica Platform instance. Make sure you installed `jupyter-vvp` on the kernel you are using.", "_____no_output_____" ] ], [ [ "%reload_ext jupytervvp\n%connect_vvp -n default localhost -p 8080 -s mySession --force", "Registering jupytervvp for vvp.\n" ] ], [ [ "Next we create a table, this time we are using the Kafka connector, which has a few options that need to be set.\n\nRefer to the [Ververica Platform documentation](https://docs.ververica.com/sql-eap/sql_development/table_view.html#supported-connectors) for further information.", "_____no_output_____" ] ], [ [ "%%flink_sql\nCREATE TABLE `kafka_table` (id int)\nCOMMENT 'SomeComment'\nWITH (\n 'connector' = 'kafka',\n 'topic' = 'topic',\n 'properties.bootstrap.servers' = 'localhost:9092',\n 'properties.group.id' = 'orderGroup',\n 'format' = 'csv'\n)", "_____no_output_____" ] ], [ [ "Of course, tables can also be altered. Here we use the `ALTER TABLE` statement to change the topic of the `kafka_table`.", "_____no_output_____" ] ], [ [ "%%flink_sql\nALTER TABLE `kafka_table` SET (\n 'connector' = 'kafka',\n 'topic' = 'other_topic',\n 'properties.bootstrap.servers' = 'localhost:9092',\n 'properties.group.id' = 'orderGroup',\n 'format' = 'csv'\n)", "_____no_output_____" ] ], [ [ "Variables can also be used:", "_____no_output_____" ] ], [ [ "topic_name = \"myTopic\"\nparallelism = 2\ndeployment_name = \"testName\"", "_____no_output_____" ], [ "%%flink_sql\nCREATE TABLE `var_table` (id int)\nCOMMENT 'SomeComment'\nWITH (\n 'connector' = 'kafka',\n 'topic' = '{topic_name}',\n 'properties.bootstrap.servers' = 'localhost:9092',\n 'properties.group.id' = 'orderGroup',\n 'format' = 'csv'\n)", "_____no_output_____" ] ], [ [ "Deployment parameters (for e.g., `INSERT INTO` statements) can be set as a Python dictionary object.\nNotice that variables here are not interpolated and usual Python syntax can be used:", "_____no_output_____" ] ], [ [ "my_deployment_parameters = {\n \"metadata.name\": \"{}\".format(deployment_name),\n \"spec.template.spec.parallelism\": parallelism,\n \"spec.upradeStrategy.kind\": \"STATEFUL\",\n \"spec.restoreStrategy.kind\": \"LATEST_STATE\",\n \"spec.template.spec.flinkConfiguration.state.backend\": \"filesystem\",\n \"spec.template.spec.flinkConfiguration.taskmanager.memory.managed.fraction\": \"0.1\",\n \"spec.template.spec.flinkConfiguration.high-availability\": \"vvp-kubernetes\"\n}", "_____no_output_____" ] ], [ [ "The following succeeds only when a default deployment target has been set up using the platform,\nor when a target has been set up and is specified via a settings object.", "_____no_output_____" ] ], [ [ "%%flink_sql -p my_deployment_parameters -d\nINSERT INTO var_table SELECT * FROM kafka_table", "_____no_output_____" ] ], [ [ "That concludes this example and we will clean up the tables we created.", "_____no_output_____" ] ], [ [ "%%flink_sql\nDROP TABLE `kafka_table`", "_____no_output_____" ], [ "%%flink_sql\nDROP TABLE `var_table`", "_____no_output_____" ], [ "%%flink_sql\nSHOW TABLES", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
ece24f619f575749ae87a4b2d05e9eff424409a0
15,169
ipynb
Jupyter Notebook
nn/simple-cnn-tf.ipynb
sashlinreddy/ml-notebooks
d88433fb2af0c5974652febbc470ae44b3b676d7
[ "MIT" ]
null
null
null
nn/simple-cnn-tf.ipynb
sashlinreddy/ml-notebooks
d88433fb2af0c5974652febbc470ae44b3b676d7
[ "MIT" ]
null
null
null
nn/simple-cnn-tf.ipynb
sashlinreddy/ml-notebooks
d88433fb2af0c5974652febbc470ae44b3b676d7
[ "MIT" ]
null
null
null
26.612281
129
0.524359
[ [ [ "## Simple CNN with tensorflow", "_____no_output_____" ] ], [ [ "import tensorflow as tf\ntf.random.set_seed(42)\nimport numpy as np\nnp.random.seed(42)\nfrom tqdm import tqdm_notebook as tqdm\n\n%load_ext autoreload\n%autoreload\n%config Completer.use_jedi=False", "The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n" ], [ "tf.executing_eagerly()", "_____no_output_____" ] ], [ [ "## Read in data\n\nUsing MNIST", "_____no_output_____" ] ], [ [ "(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()\nX_train = X_train[..., np.newaxis]\nX_test = X_test[..., np.newaxis]", "_____no_output_____" ], [ "fac = 255 * 0.99 + 0.01\n\nX_train = X_train / fac\nX_test = X_test / fac\nX_train = X_train.astype(np.float32) \nX_test = X_test.astype(np.float32)", "_____no_output_____" ], [ "X_train.shape, y_train.shape", "_____no_output_____" ], [ "X_test.shape, y_test.shape", "_____no_output_____" ], [ "x = np.ones((60_000, 784))", "_____no_output_____" ] ], [ [ "Convert numpy tensors to tensorflow tensors and create batches", "_____no_output_____" ] ], [ [ "train_dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train))\ntest_dataset = tf.data.Dataset.from_tensor_slices((X_test, y_test))\n\nBATCH_SIZE = 128\nSHUFFLE_BUFFER_SIZE = 100\n\ntrain_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)\ntest_dataset = test_dataset.batch(BATCH_SIZE)", "_____no_output_____" ] ], [ [ "Customizable training loop", "_____no_output_____" ] ], [ [ "# Define cnn\nmodel = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, (3, 3), kernel_initializer=\"he_uniform\", activation=\"relu\", input_shape=(28, 28, 1)),\n tf.keras.layers.MaxPooling2D((2, 2)),\n# tf.keras.layers.Conv2D(64, (3, 3), activation=\"relu\", kernel_initializer=\"he_uniform\"),\n# tf.keras.layers.MaxPooling2D((2, 2)),\n# tf.keras.layers.Conv2D(64, (3, 3), activation=\"relu\", kernel_initializer=\"he_uniform\"),\n# tf.keras.layers.MaxPooling2D((2, 2)),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation='relu', kernel_initializer=\"he_uniform\"),\n tf.keras.layers.Dropout(0.5),\n tf.keras.layers.Dense(10, activation=\"softmax\")\n])\n\n# Define optimizer\noptimizer = tf.keras.optimizers.Adam(learning_rate=0.001)\n\n# Define loss function\nloss_func = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n\nepoch_loss_avg = tf.keras.metrics.Mean()\nepoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()\ntest_epoch_loss_avg = tf.keras.metrics.Mean()\ntest_epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()", "_____no_output_____" ], [ "model.summary()", "Model: \"sequential_4\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_10 (Conv2D) (None, 26, 26, 32) 320 \n_________________________________________________________________\nmax_pooling2d_10 (MaxPooling (None, 13, 13, 32) 0 \n_________________________________________________________________\nflatten_4 (Flatten) (None, 5408) 0 \n_________________________________________________________________\ndense_8 (Dense) (None, 128) 692352 \n_________________________________________________________________\ndropout_2 (Dropout) (None, 128) 0 \n_________________________________________________________________\ndense_9 (Dense) (None, 10) 1290 \n=================================================================\nTotal params: 693,962\nTrainable params: 
693,962\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "@tf.function\ndef train_loop(x, y):\n\n    # Calculate gradients\n    with tf.GradientTape() as t:\n        # training=training is needed only if there are layers with different\n        # behavior during training versus inference (e.g. Dropout).\n        predictions = model(x, training=True)\n        loss = loss_func(y, predictions)\n\n    grads = t.gradient(loss, model.trainable_variables)\n\n    # Optimize the model\n    optimizer.apply_gradients(zip(grads, model.trainable_variables))\n    \n    # Track progress\n    epoch_loss_avg(loss)\n\n    # Compare predicted label to actual\n    epoch_accuracy.update_state(y, predictions)\n    \n#     return loss, predictions", "_____no_output_____" ], [ "%%time\ntrain_loss_results = []\ntrain_accuracy_results = []\nepochs = 10\nn_batches = len(list(train_dataset))\n\nfor epoch in tqdm(np.arange(epochs)):\n    \n    for x, y in tqdm(train_dataset, total=n_batches, leave=False):\n        train_loop(x, y)\n\n    # End epoch\n    train_loss_results.append(epoch_loss_avg.result())\n    train_accuracy_results.append(epoch_accuracy.result())\n    \n    # Test\n    for (x_valid, y_valid) in test_dataset:\n        preds_test = model(x_valid, training=False)\n        test_loss = loss_func(y_valid, preds_test)\n        test_epoch_loss_avg(test_loss)\n        test_epoch_accuracy.update_state(y_valid, preds_test)\n    \n    print(f\"Epoch {epoch:03d}: train_loss: {epoch_loss_avg.result():.3f}, \"\n          f\"test_loss: {test_epoch_loss_avg.result():.3f}, \"\n          f\"accuracy: {epoch_accuracy.result():.3f}, \"\n          f\"test accuracy: {test_epoch_accuracy.result():.3f}\"\n         )\n    \n    # Clear the current state of the metrics\n    epoch_loss_avg.reset_states()\n    epoch_accuracy.reset_states()\n    test_epoch_loss_avg.reset_states()\n    test_epoch_accuracy.reset_states()\n    # valid_loss.reset_states(), valid_acc.reset_states()\n    \n    ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
ece25bb24e64a8647302fe4cca48f46416426ace
10,258
ipynb
Jupyter Notebook
evaluate_network_example.ipynb
VU-BEAM-Lab/DNNBeamforming
e8ee8c1e57188a795816b119279ac2e60e5c5236
[ "Apache-2.0" ]
1
2021-04-12T19:52:43.000Z
2021-04-12T19:52:43.000Z
evaluate_network_example.ipynb
VU-BEAM-Lab/DNNBeamforming
e8ee8c1e57188a795816b119279ac2e60e5c5236
[ "Apache-2.0" ]
null
null
null
evaluate_network_example.ipynb
VU-BEAM-Lab/DNNBeamforming
e8ee8c1e57188a795816b119279ac2e60e5c5236
[ "Apache-2.0" ]
null
null
null
34.307692
99
0.556444
[ [ [ "# Copyright 2020 Jaime Tierney, Adam Luchies, and Brett Byram\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the license at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and \n# limitations under the License.\n\n# INSTALL NECESSARY PACKAGES PRIOR TO RUNNING THIS NOTEBOOK\n# pytorch\n# jupyter \n# numpy\n# scipy\n# matplotlib\n# pandas\n# h5py\n\n# IMPORT PYTHON PACKAGES\nimport torch\nimport os\nimport numpy as np\nfrom torch import nn\nimport time\nimport argparse\nimport sys\nimport h5py\nfrom scipy.io import loadmat\nfrom scipy.io import savemat\nfrom scipy.signal import hilbert\nimport matplotlib.pyplot as plt\n\n# IMPORT FUNCTIONS FROM PROVIDED SOURCE CODE\nsys.path.insert(0,'src')\nfrom utils import read_model_params\nfrom model import FullyConnectedNet", "_____no_output_____" ], [ "# SPECIFY PATH TO MODEL (THIS IS ALSO OUTPUT PATH)\nmodel_path = 'models/model_1/k_8/'\n\n# LOAD IN MODEL PARAMS\nmodel_params = read_model_params(model_path+'model_params.txt')\n\n# PROVIDE TEST DATA FILE INFO\ntest_data_path = 'test_data/'\ntest_data_name = 'chandat_phantom_5mm_70mm'", "_____no_output_____" ], [ "# SPECIFY CUDA AVAILABILITY\nprint('torch.cuda.is_available(): ' + str(torch.cuda.is_available()))\nif model_params['cuda'] and torch.cuda.is_available():\n print('Using ' + str(torch.cuda.get_device_name(0)))\nelse:\n print('Not using CUDA')\n model_params['cuda']=False\ndevice = torch.device(\"cuda:0\" if model_params['cuda'] else \"cpu\")", "_____no_output_____" ], [ "# LOAD IN THE TEST DATA AND REFORMAT FOR NETWORK PROCESSING\n\n# load in delayed RF channel data\nf = h5py.File(os.path.join(test_data_path,test_data_name+'.mat'),'r')\nrf_data = np.asarray(f['chandat'])\nf.close()\n\n# get dimension info\n[N_beams,N_elements,N_depths] = rf_data.shape\n\n# get analytic data\nanalytic_data = hilbert(rf_data,axis=2)\ndel rf_data\n\n# switch depth and channel axes\nanalytic_data = np.moveaxis(analytic_data,1,2)\n\n# concatenate real and imaginary components into data variable\ndata_real = np.real(analytic_data)\ndata_imag = np.imag(analytic_data)\ndata = np.concatenate([data_real,data_imag],axis=2)\ndel analytic_data\n\n# get conventional DAS B-mode data\nenv = np.sqrt(np.power(np.sum(data_real,axis=2),2)+\n np.power(np.sum(data_imag,axis=2),2))\nbmode = 20*np.log10(env)\ndel data_real, data_imag\n\n# reshape data to flatten depth and beam axes\ndata = np.reshape(data,[N_beams*N_depths,2*N_elements])\n\n# normalize data by L1 norm\ndata_norm = np.linalg.norm(data,ord=np.inf,axis=1)\ndata = data / data_norm[:,np.newaxis]\n\n# load data into pytorch and onto gpu\ndata = torch.from_numpy(data).float()\ndata = data.to(device)", "_____no_output_____" ], [ "# PASS TEST DATA THROUGH NETWORK\n\n# start timer\nt0 = time.time()\n\n# load the model\nmodel = FullyConnectedNet(input_dim=model_params['input_dim'],\n output_dim=model_params['output_dim'],\n layer_width=model_params['layer_width'],\n dropout=model_params['dropout'],\n dropout_input=model_params['dropout_input'],\n num_hidden=model_params['num_hidden'],\n starting_weights=None,\n 
batch_norm_enable=model_params['batch_norm_enable'])\nprint('Loading weights from: ' + str(os.path.join(model_params['save_dir'], 'model.dat')))\nmodel.load_state_dict(torch.load(os.path.join(model_params['save_dir'], \n 'model.dat'), map_location='cpu'))\nmodel.eval()\nmodel = model.to(device)\n\n# process test data with the model\nwith torch.set_grad_enabled(False):\n data_dnn = model(data).to('cpu').data.numpy()\n \n# stop timer\nprint('Processing time: {:.2f}'.format(time.time()-t0))\n\n# clear the model and input data\ndel model, data", "_____no_output_____" ], [ "# REFORMAT PROCESSED TEST DATA \n\n# scale back\ndata_dnn = data_dnn * data_norm[:,np.newaxis]\n\n# unflatten depth and beam axes\ndata_dnn = np.reshape(data_dnn,[N_beams,N_depths,2*N_elements])\n\n# split up real and imaginary\ndata_dnn_real = data_dnn[:,:,0:N_elements]\ndata_dnn_imag = data_dnn[:,:,N_elements:2*N_elements]\n\n# get DNN beamformer B-mode data\nenv_dnn = np.sqrt(np.power(np.sum(data_dnn_real,axis=2),2)+\n np.power(np.sum(data_dnn_imag,axis=2),2))\nbmode_dnn = 20*np.log10(env_dnn)", "_____no_output_____" ], [ "# MAKE IMAGES AND COMPUTE IMAGE QUALITY METRICS\n\n# load in params file\nf = h5py.File(os.path.join(test_data_path,test_data_name+'_params.mat'),'r')\nbeam_position_x = np.asarray(f['beam_position_x'])\nt = np.asarray(f['t'])\nfs = np.asarray(f['fs'])\nc = np.asarray(f['c'])\nmask_in = np.asarray(f['mask_in'])\nmask_out = np.asarray(f['mask_out'])\nf.close()\ndepths = t/fs*c/2\n\n# make DAS image\nbmode_scaled = bmode - np.max(bmode)\nfig,axs = plt.subplots(nrows=1,ncols=2,sharey=True)\ndas_img=axs[0].imshow(np.moveaxis(bmode_scaled,0,1),cmap='gray',\n aspect='equal',vmin=-60,vmax=0,\n extent=[beam_position_x[0][0]*1000,\n beam_position_x[-1][0]*1000,\n depths[0][-1]*1000,\n depths[0][0]*1000])\naxs[0].set_title('DAS')\naxs[0].set_ylabel('Depth (mm)')\naxs[0].set_xlabel('Lateral Pos. (mm)')\nfig.colorbar(das_img,ax=axs[0])\n\n# make DNN image\nbmode_dnn_scaled = bmode_dnn - np.max(bmode_dnn)\ndnn_img=axs[1].imshow(np.moveaxis(bmode_dnn_scaled,0,1),cmap='gray',\n aspect='equal',vmin=-60,vmax=0,\n extent=[beam_position_x[0][0]*1000,\n beam_position_x[-1][0]*1000,\n depths[0][-1]*1000,\n depths[0][0]*1000])\naxs[1].set_title('DNN')\naxs[1].set_xlabel('Lateral Pos. (mm)')\n\n# add colorbar and save figure\nfig.colorbar(dnn_img,ax=axs[1])\nfig.savefig(os.path.join(model_path,test_data_name+'_result.png'))\n\n# find indicies corresponding to inside and outside of lesion\nidx_in = np.where(mask_in==1)\nidx_out = np.where(mask_out==1)\n\n# compute mean and variance for DAS\nmean_in = np.mean(env[idx_in])\nmean_out = np.mean(env[idx_out])\nvar_in = np.var(env[idx_in])\nvar_out = np.var(env[idx_out])\n\n# compute mean and variance for DNN\nmean_in_dnn = np.mean(env_dnn[idx_in])\nmean_out_dnn = np.mean(env_dnn[idx_out])\nvar_in_dnn = np.var(env_dnn[idx_in])\nvar_out_dnn = np.var(env_dnn[idx_out])\n\n# compute image quality metrics\nCNR = 20*np.log10(np.abs(mean_in-mean_out)/np.sqrt(var_in+var_out))\nCNR_DNN = 20*np.log10(np.abs(mean_in_dnn-mean_out_dnn)/\n np.sqrt(var_in_dnn+var_out_dnn))\nCR = -20*np.log10(np.abs(mean_in/mean_out))\nCR_DNN = -20*np.log10(np.abs(mean_in_dnn/mean_out_dnn))\nprint('CNR DAS: {:.2f}'.format(CNR))\nprint('CNR DNN: {:.2f}'.format(CNR_DNN))\nprint('CR DAS: {:.2f}'.format(CR))\nprint('CR DNN: {:.2f}'.format(CR_DNN))", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
ece25c47a970059c237608dfdd663d7ce2ce1684
136,911
ipynb
Jupyter Notebook
docs/examples/notebooks/octavemagic_extension.ipynb
dlsun/ipython
38df12df8589c6da3b9398c7c0a03cea25fdce66
[ "BSD-3-Clause-Clear" ]
1
2018-09-24T13:45:40.000Z
2018-09-24T13:45:40.000Z
docs/examples/notebooks/octavemagic_extension.ipynb
dlsun/ipython
38df12df8589c6da3b9398c7c0a03cea25fdce66
[ "BSD-3-Clause-Clear" ]
null
null
null
docs/examples/notebooks/octavemagic_extension.ipynb
dlsun/ipython
38df12df8589c6da3b9398c7c0a03cea25fdce66
[ "BSD-3-Clause-Clear" ]
null
null
null
369.032345
85,738
0.909408
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
ece260e636b89e943dfae2d5808c81d7c847af26
56,780
ipynb
Jupyter Notebook
Course 1 - Neural Networks and Deep Learning/NoteBooks/Building_your_Deep_Neural_Network_Step_by_Step_v8a.ipynb
HarshitRuwali/Coursera-Deep-Learning-Specialization
8038f2f2d746ad455e0e3c45c736c5d9b7348d8a
[ "MIT" ]
null
null
null
Course 1 - Neural Networks and Deep Learning/NoteBooks/Building_your_Deep_Neural_Network_Step_by_Step_v8a.ipynb
HarshitRuwali/Coursera-Deep-Learning-Specialization
8038f2f2d746ad455e0e3c45c736c5d9b7348d8a
[ "MIT" ]
null
null
null
Course 1 - Neural Networks and Deep Learning/NoteBooks/Building_your_Deep_Neural_Network_Step_by_Step_v8a.ipynb
HarshitRuwali/Coursera-Deep-Learning-Specialization
8038f2f2d746ad455e0e3c45c736c5d9b7348d8a
[ "MIT" ]
null
null
null
37.878586
562
0.516855
[ [ [ "# Building your Deep Neural Network: Step by Step\n\nWelcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!\n\n- In this notebook, you will implement all the functions required to build a deep neural network.\n- In the next assignment, you will use these functions to build a deep neural network for image classification.\n\n**After this assignment you will be able to:**\n- Use non-linear units like ReLU to improve your model\n- Build a deeper neural network (with more than 1 hidden layer)\n- Implement an easy-to-use neural network class\n\n**Notation**:\n- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer. \n - Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.\n- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example. \n - Example: $x^{(i)}$ is the $i^{th}$ training example.\n- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.\n - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).\n\nLet's get started!", "_____no_output_____" ], [ "### <font color='darkblue'> Updates to Assignment <font>\n\n#### If you were working on a previous version\n* The current notebook filename is version \"4a\". \n* You can find your work in the file directory as version \"4\".\n* To see the file directory, click on the Coursera logo at the top left of the notebook.\n\n#### List of Updates\n* compute_cost unit test now includes tests for Y = 0 as well as Y = 1. This catches a possible bug before students get graded.\n* linear_backward unit test now has a more complete unit test that catches a possible bug before students get graded.\n", "_____no_output_____" ], [ "## 1 - Packages\n\nLet's first import all the packages that you will need during this assignment. \n- [numpy](www.numpy.org) is the main package for scientific computing with Python.\n- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.\n- dnn_utils provides some necessary functions for this notebook.\n- testCases provides some test cases to assess the correctness of your functions\n- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed. ", "_____no_output_____" ] ], [ [ "import numpy as np\nimport h5py\nimport matplotlib.pyplot as plt\nfrom testCases_v4a import *\nfrom dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n%load_ext autoreload\n%autoreload 2\n\nnp.random.seed(1)", "_____no_output_____" ] ], [ [ "## 2 - Outline of the Assignment\n\nTo build your neural network, you will be implementing several \"helper functions\". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. 
Here is an outline of this assignment; you will:\n\n- Initialize the parameters for a two-layer network and for an $L$-layer neural network.\n- Implement the forward propagation module (shown in purple in the figure below).\n    - Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).\n    - We give you the ACTIVATION function (relu/sigmoid).\n    - Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.\n    - Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.\n- Compute the loss.\n- Implement the backward propagation module (denoted in red in the figure below).\n    - Complete the LINEAR part of a layer's backward propagation step.\n    - We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward) \n    - Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.\n    - Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function\n- Finally update the parameters.\n\n<img src=\"images/final outline.png\" style=\"width:800px;height:500px;\">\n<caption><center> **Figure 1**</center></caption><br>\n\n\n**Note** that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps. ", "_____no_output_____" ], [ "## 3 - Initialization\n\nYou will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.\n\n### 3.1 - 2-layer Neural Network\n\n**Exercise**: Create and initialize the parameters of the 2-layer neural network.\n\n**Instructions**:\n- The model's structure is: *LINEAR -> RELU -> LINEAR -> SIGMOID*. \n- Use random initialization for the weight matrices. Use `np.random.randn(shape)*0.01` with the correct shape.\n- Use zero initialization for the biases. 
Use `np.zeros(shape)`.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: initialize_parameters\n\ndef initialize_parameters(n_x, n_h, n_y):\n    \"\"\"\n    Argument:\n    n_x -- size of the input layer\n    n_h -- size of the hidden layer\n    n_y -- size of the output layer\n    \n    Returns:\n    parameters -- python dictionary containing your parameters:\n                    W1 -- weight matrix of shape (n_h, n_x)\n                    b1 -- bias vector of shape (n_h, 1)\n                    W2 -- weight matrix of shape (n_y, n_h)\n                    b2 -- bias vector of shape (n_y, 1)\n    \"\"\"\n    \n    np.random.seed(1)\n    \n    ### START CODE HERE ### (≈ 4 lines of code)\n    W1 = np.random.randn(n_h, n_x) * 0.01\n    b1 = np.zeros(shape = (n_h, 1))\n    W2 = np.random.randn(n_y, n_h) * 0.01\n    b2 = np.zeros(shape = (n_y, 1))\n    ### END CODE HERE ###\n    \n    assert(W1.shape == (n_h, n_x))\n    assert(b1.shape == (n_h, 1))\n    assert(W2.shape == (n_y, n_h))\n    assert(b2.shape == (n_y, 1))\n    \n    parameters = {\"W1\": W1,\n                  \"b1\": b1,\n                  \"W2\": W2,\n                  \"b2\": b2}\n    \n    return parameters ", "_____no_output_____" ], [ "parameters = initialize_parameters(3,2,1)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))", "W1 = [[ 0.01624345 -0.00611756 -0.00528172]\n [-0.01072969  0.00865408 -0.02301539]]\nb1 = [[ 0.]\n [ 0.]]\nW2 = [[ 0.01744812 -0.00761207]]\nb2 = [[ 0.]]\n" ] ], [ [ "**Expected output**:\n    \n<table style=\"width:80%\">\n  <tr>\n    <td> **W1** </td>\n    <td> [[ 0.01624345 -0.00611756 -0.00528172]\n [-0.01072969  0.00865408 -0.02301539]] </td> \n  </tr>\n\n  <tr>\n    <td> **b1**</td>\n    <td>[[ 0.]\n [ 0.]]</td> \n  </tr>\n  \n  <tr>\n    <td>**W2**</td>\n    <td> [[ 0.01744812 -0.00761207]]</td>\n  </tr>\n  \n  <tr>\n    <td> **b2** </td>\n    <td> [[ 0.]] </td> \n  </tr>\n  \n</table>", "_____no_output_____" ], [ "### 3.2 - L-layer Neural Network\n\nThe initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the `initialize_parameters_deep`, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:\n\n<table style=\"width:100%\">\n\n\n    <tr>\n        <td>  </td> \n        <td> **Shape of W** </td> \n        <td> **Shape of b** </td> \n        <td> **Activation** </td>\n        <td> **Shape of Activation** </td> \n    <tr>\n    \n    <tr>\n        <td> **Layer 1** </td> \n        <td> $(n^{[1]},12288)$ </td> \n        <td> $(n^{[1]},1)$ </td> \n        <td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td> \n        \n        <td> $(n^{[1]},209)$ </td> \n    <tr>\n    \n    <tr>\n        <td> **Layer 2** </td> \n        <td> $(n^{[2]}, n^{[1]})$ </td> \n        <td> $(n^{[2]},1)$ </td> \n        <td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td> \n        <td> $(n^{[2]}, 209)$ </td> \n    <tr>\n    \n    <tr>\n        <td> $\\vdots$ </td> \n        <td> $\\vdots$ </td> \n        <td> $\\vdots$ </td> \n        <td> $\\vdots$</td> \n        <td> $\\vdots$ </td> \n    <tr>\n    \n    <tr>\n        <td> **Layer L-1** </td> \n        <td> $(n^{[L-1]}, n^{[L-2]})$ </td> \n        <td> $(n^{[L-1]}, 1)$ </td> \n        <td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td> \n        <td> $(n^{[L-1]}, 209)$ </td> \n    <tr>\n    \n    \n    <tr>\n        <td> **Layer L** </td> \n        <td> $(n^{[L]}, n^{[L-1]})$ </td> \n        <td> $(n^{[L]}, 1)$ </td>\n        <td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>\n        <td> $(n^{[L]}, 209)$ </td> \n    <tr>\n\n</table>\n\nRemember that when we compute $W X + b$ in python, it carries out broadcasting. 
For example, if: \n\n$$ W = \\begin{bmatrix}\n    j  & k  & l\\\\\n    m  & n & o \\\\\n    p  & q & r \n\\end{bmatrix}\\;\\;\\; X = \\begin{bmatrix}\n    a  & b  & c\\\\\n    d  & e & f \\\\\n    g  & h & i \n\\end{bmatrix} \\;\\;\\; b =\\begin{bmatrix}\n    s \\\\\n    t \\\\\n    u\n\\end{bmatrix}\\tag{2}$$\n\nThen $WX + b$ will be:\n\n$$ WX + b = \\begin{bmatrix}\n    (ja + kd + lg) + s  & (jb + ke + lh) + s  & (jc + kf + li)+ s\\\\\n    (ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\\\\n    (pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u\n\\end{bmatrix}\\tag{3}  $$", "_____no_output_____" ], [ "**Exercise**: Implement initialization for an L-layer Neural Network. \n\n**Instructions**:\n- The model's structure is *[LINEAR -> RELU] $ \\times$ (L-1) -> LINEAR -> SIGMOID*. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.\n- Use random initialization for the weight matrices. Use `np.random.randn(shape) * 0.01`.\n- Use zeros initialization for the biases. Use `np.zeros(shape)`.\n- We will store $n^{[l]}$, the number of units in different layers, in a variable `layer_dims`. For example, the `layer_dims` for the \"Planar Data classification model\" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means `W1`'s shape was (4,2), `b1` was (4,1), `W2` was (1,4) and `b2` was (1,1). Now you will generalize this to $L$ layers! \n- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).\n```python\n    if L == 1:\n        parameters[\"W\" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01\n        parameters[\"b\" + str(L)] = np.zeros((layer_dims[1], 1))\n```", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: initialize_parameters_deep\n\ndef initialize_parameters_deep(layer_dims):\n    \"\"\"\n    Arguments:\n    layer_dims -- python array (list) containing the dimensions of each layer in our network\n    \n    Returns:\n    parameters -- python dictionary containing your parameters \"W1\", \"b1\", ..., \"WL\", \"bL\":\n                    Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])\n                    bl -- bias vector of shape (layer_dims[l], 1)\n    \"\"\"\n    \n    np.random.seed(3)\n    parameters = {}\n    L = len(layer_dims)            # number of layers in the network\n\n    for l in range(1, L):\n        ### START CODE HERE ### (≈ 2 lines of code)\n        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01\n        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))\n        ### END CODE HERE ###\n        \n        assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))\n        assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))\n\n        \n    return parameters", "_____no_output_____" ], [ "parameters = initialize_parameters_deep([5,4,3])\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))", "W1 = [[ 0.01788628  0.0043651   0.00096497 -0.01863493 -0.00277388]\n [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]\n [-0.01313865  0.00884622  0.00881318  0.01709573  0.00050034]\n [-0.00404677 -0.0054536  -0.01546477  0.00982367 -0.01101068]]\nb1 = [[ 0.]\n [ 0.]\n [ 0.]\n [ 0.]]\nW2 = [[-0.01185047 -0.0020565   0.01486148  0.00236716]\n [-0.01023785 -0.00712993  0.00625245 -0.00160513]\n [-0.00768836 -0.00230031  0.00745056  0.01976111]]\nb2 = [[ 0.]\n [ 0.]\n [ 0.]]\n" ] ], [ [ "**Expected 
output**:\n    \n<table style=\"width:80%\">\n  <tr>\n    <td> **W1** </td>\n    <td>[[ 0.01788628  0.0043651   0.00096497 -0.01863493 -0.00277388]\n [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]\n [-0.01313865  0.00884622  0.00881318  0.01709573  0.00050034]\n [-0.00404677 -0.0054536  -0.01546477  0.00982367 -0.01101068]]</td> \n  </tr>\n  \n  <tr>\n    <td>**b1** </td>\n    <td>[[ 0.]\n [ 0.]\n [ 0.]\n [ 0.]]</td> \n  </tr>\n  \n  <tr>\n    <td>**W2** </td>\n    <td>[[-0.01185047 -0.0020565   0.01486148  0.00236716]\n [-0.01023785 -0.00712993  0.00625245 -0.00160513]\n [-0.00768836 -0.00230031  0.00745056  0.01976111]]</td> \n  </tr>\n  \n  <tr>\n    <td>**b2** </td>\n    <td>[[ 0.]\n [ 0.]\n [ 0.]]</td> \n  </tr>\n  \n</table>", "_____no_output_____" ], [ "## 4 - Forward propagation module\n\n### 4.1 - Linear Forward \nNow that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:\n\n- LINEAR\n- LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid. \n- [LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID (whole model)\n\nThe linear forward module (vectorized over all the examples) computes the following equations:\n\n$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\\tag{4}$$\n\nwhere $A^{[0]} = X$. \n\n**Exercise**: Build the linear part of forward propagation.\n\n**Reminder**:\nThe mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find `np.dot()` useful. If your dimensions don't match, printing `W.shape` may help.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: linear_forward\n\ndef linear_forward(A, W, b):\n    \"\"\"\n    Implement the linear part of a layer's forward propagation.\n\n    Arguments:\n    A -- activations from previous layer (or input data): (size of previous layer, number of examples)\n    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)\n    b -- bias vector, numpy array of shape (size of the current layer, 1)\n\n    Returns:\n    Z -- the input of the activation function, also called pre-activation parameter \n    cache -- a python tuple containing \"A\", \"W\" and \"b\" ; stored for computing the backward pass efficiently\n    \"\"\"\n    \n    ### START CODE HERE ### (≈ 1 line of code)\n    Z = np.dot(W, A) + b\n    ### END CODE HERE ###\n    \n    assert(Z.shape == (W.shape[0], A.shape[1]))\n    cache = (A, W, b)\n    \n    return Z, cache", "_____no_output_____" ], [ "A, W, b = linear_forward_test_case()\n\nZ, linear_cache = linear_forward(A, W, b)\nprint(\"Z = \" + str(Z))", "Z = [[ 3.26295337 -1.23429987]]\n" ] ], [ [ "**Expected output**:\n\n<table style=\"width:35%\">\n  \n  <tr>\n    <td> **Z** </td>\n    <td> [[ 3.26295337 -1.23429987]] </td> \n  </tr>\n  \n</table>", "_____no_output_____" ], [ "### 4.2 - Linear-Activation Forward\n\nIn this notebook, you will use two activation functions:\n\n- **Sigmoid**: $\\sigma(Z) = \\sigma(W A + b) = \\frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the `sigmoid` function. This function returns **two** items: the activation value \"`a`\" and a \"`cache`\" that contains \"`Z`\" (it's what we will feed in to the corresponding backward function). To use it you could just call: \n``` python\nA, activation_cache = sigmoid(Z)\n```\n\n- **ReLU**: The mathematical formula for ReLU is $A = RELU(Z) = max(0, Z)$. We have provided you with the `relu` function. 
This function returns **two** items: the activation value \"`A`\" and a \"`cache`\" that contains \"`Z`\" (it's what we will feed in to the corresponding backward function). To use it you could just call:\n``` python\nA, activation_cache = relu(Z)\n```", "_____no_output_____" ], [ "For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.\n\n**Exercise**: Implement the forward propagation of the *LINEAR->ACTIVATION* layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation \"g\" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: linear_activation_forward\n\ndef linear_activation_forward(A_prev, W, b, activation):\n    \"\"\"\n    Implement the forward propagation for the LINEAR->ACTIVATION layer\n\n    Arguments:\n    A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)\n    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)\n    b -- bias vector, numpy array of shape (size of the current layer, 1)\n    activation -- the activation to be used in this layer, stored as a text string: \"sigmoid\" or \"relu\"\n\n    Returns:\n    A -- the output of the activation function, also called the post-activation value \n    cache -- a python tuple containing \"linear_cache\" and \"activation_cache\";\n             stored for computing the backward pass efficiently\n    \"\"\"\n    \n    if activation == \"sigmoid\":\n        # Inputs: \"A_prev, W, b\". Outputs: \"A, activation_cache\".\n        ### START CODE HERE ### (≈ 2 lines of code)\n        Z, linear_cache = linear_forward(A_prev, W, b)\n        A, activation_cache = sigmoid(Z)\n        ### END CODE HERE ###\n    \n    elif activation == \"relu\":\n        # Inputs: \"A_prev, W, b\". Outputs: \"A, activation_cache\".\n        ### START CODE HERE ### (≈ 2 lines of code)\n        Z, linear_cache = linear_forward(A_prev, W, b)\n        A, activation_cache = relu(Z)\n        ### END CODE HERE ###\n    \n    assert (A.shape == (W.shape[0], A_prev.shape[1]))\n    cache = (linear_cache, activation_cache)\n\n    return A, cache", "_____no_output_____" ], [ "A_prev, W, b = linear_activation_forward_test_case()\n\nA, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = \"sigmoid\")\nprint(\"With sigmoid: A = \" + str(A))\n\nA, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = \"relu\")\nprint(\"With ReLU: A = \" + str(A))", "With sigmoid: A = [[ 0.96890023  0.11013289]]\nWith ReLU: A = [[ 3.43896131  0.        ]]\n" ] ], [ [ "**Expected output**:\n    \n<table style=\"width:35%\">\n  <tr>\n    <td> **With sigmoid: A ** </td>\n    <td > [[ 0.96890023  0.11013289]]</td> \n  </tr>\n  <tr>\n    <td> **With ReLU: A ** </td>\n    <td > [[ 3.43896131  0.        ]]</td> \n  </tr>\n</table>\n", "_____no_output_____" ], [ "**Note**: In deep learning, the \"[LINEAR->ACTIVATION]\" computation is counted as a single layer in the neural network, not two layers. 
", "_____no_output_____" ], [ "### d) L-Layer Model \n\nFor even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (`linear_activation_forward` with RELU) $L-1$ times, then follows that with one `linear_activation_forward` with SIGMOID.\n\n<img src=\"images/model_architecture_kiank.png\" style=\"width:600px;height:300px;\">\n<caption><center> **Figure 2** : *[LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID* model</center></caption><br>\n\n**Exercise**: Implement the forward propagation of the above model.\n\n**Instruction**: In the code below, the variable `AL` will denote $A^{[L]} = \\sigma(Z^{[L]}) = \\sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called `Yhat`, i.e., this is $\\hat{Y}$.) \n\n**Tips**:\n- Use the functions you had previously written \n- Use a for loop to replicate [LINEAR->RELU] (L-1) times\n- Don't forget to keep track of the caches in the \"caches\" list. To add a new value `c` to a `list`, you can use `list.append(c)`.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: L_model_forward\n\ndef L_model_forward(X, parameters):\n \"\"\"\n Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation\n \n Arguments:\n X -- data, numpy array of shape (input size, number of examples)\n parameters -- output of initialize_parameters_deep()\n \n Returns:\n AL -- last post-activation value\n caches -- list of caches containing:\n every cache of linear_activation_forward() (there are L-1 of them, indexed from 0 to L-1)\n \"\"\"\n\n caches = []\n A = X\n L = len(parameters) // 2 # number of layers in the neural network\n \n # Implement [LINEAR -> RELU]*(L-1). Add \"cache\" to the \"caches\" list.\n for l in range(1, L):\n A_prev = A \n ### START CODE HERE ### (โ‰ˆ 2 lines of code)\n A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], activation = 'relu')\n caches.append(cache)\n ### END CODE HERE ###\n \n # Implement LINEAR -> SIGMOID. Add \"cache\" to the \"caches\" list.\n ### START CODE HERE ### (โ‰ˆ 2 lines of code)\n AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], activation='sigmoid')\n caches.append(cache)\n ### END CODE HERE ###\n \n assert(AL.shape == (1,X.shape[1]))\n \n return AL, caches", "_____no_output_____" ], [ "X, parameters = L_model_forward_test_case_2hidden()\nAL, caches = L_model_forward(X, parameters)\nprint(\"AL = \" + str(AL))\nprint(\"Length of caches list = \" + str(len(caches)))", "AL = [[ 0.03921668 0.70498921 0.19734387 0.04728177]]\nLength of caches list = 3\n" ] ], [ [ "<table style=\"width:50%\">\n <tr>\n <td> **AL** </td>\n <td > [[ 0.03921668 0.70498921 0.19734387 0.04728177]]</td> \n </tr>\n <tr>\n <td> **Length of caches list ** </td>\n <td > 3 </td> \n </tr>\n</table>", "_____no_output_____" ], [ "Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in \"caches\". Using $A^{[L]}$, you can compute the cost of your predictions.", "_____no_output_____" ], [ "## 5 - Cost function\n\nNow you will implement forward and backward propagation. 
You need to compute the cost, because you want to check if your model is actually learning.\n\n**Exercise**: Compute the cross-entropy cost $J$, using the following formula: $$-\\frac{1}{m} \\sum\\limits_{i = 1}^{m} (y^{(i)}\\log\\left(a^{[L] (i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[L](i)}\\right)) \\tag{7}$$\n", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: compute_cost\n\ndef compute_cost(AL, Y):\n    \"\"\"\n    Implement the cost function defined by equation (7).\n\n    Arguments:\n    AL -- probability vector corresponding to your label predictions, shape (1, number of examples)\n    Y -- true \"label\" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)\n\n    Returns:\n    cost -- cross-entropy cost\n    \"\"\"\n    \n    m = Y.shape[1]\n\n    # Compute loss from aL and y.\n    ### START CODE HERE ### (≈ 1 line of code)\n    cost = -1/m * (np.sum(np.multiply(Y, np.log(AL)) + np.multiply(1-Y, np.log(1-AL))))\n    ### END CODE HERE ###\n    \n    cost = np.squeeze(cost)      # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).\n    assert(cost.shape == ())\n    \n    return cost", "_____no_output_____" ], [ "Y, AL = compute_cost_test_case()\n\nprint(\"cost = \" + str(compute_cost(AL, Y)))", "cost = 0.279776563579\n" ] ], [ [ "**Expected Output**:\n\n<table>\n\n    <tr>\n    <td>**cost** </td>\n    <td> 0.2797765635793422</td> \n    </tr>\n</table>", "_____no_output_____" ], [ "## 6 - Backward propagation module\n\nJust like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters. \n\n**Reminder**: \n<img src=\"images/backprop_kiank.png\" style=\"width:650px;height:250px;\">\n<caption><center> **Figure 3** : Forward and Backward propagation for *LINEAR->RELU->LINEAR->SIGMOID* <br> *The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.* </center></caption>\n\n<!-- \nFor those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:\n\n$$\\frac{d \\mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \\frac{d\\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\\frac{{da^{[2]}}}{{dz^{[2]}}}\\frac{{dz^{[2]}}}{{da^{[1]}}}\\frac{{da^{[1]}}}{{dz^{[1]}}} \\tag{8} $$\n\nIn order to calculate the gradient $dW^{[1]} = \\frac{\\partial L}{\\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \\times \\frac{\\partial z^{[1]} }{\\partial W^{[1]}}$. 
During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.\n\nEquivalently, in order to calculate the gradient $db^{[1]} = \\frac{\\partial L}{\\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \\times \\frac{\\partial z^{[1]} }{\\partial b^{[1]}}$.\n\nThis is why we talk about **backpropagation**.\n!-->\n\nNow, similar to forward propagation, you are going to build the backward propagation in three steps:\n- LINEAR backward\n- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation\n- [LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)", "_____no_output_____" ], [ "### 6.1 - Linear backward\n\nFor layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).\n\nSuppose you have already calculated the derivative $dZ^{[l]} = \\frac{\\partial \\mathcal{L} }{\\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.\n\n<img src=\"images/linearback_kiank.png\" style=\"width:250px;height:300px;\">\n<caption><center> **Figure 4** </center></caption>\n\nThe three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need:\n$$ dW^{[l]} = \\frac{\\partial \\mathcal{J} }{\\partial W^{[l]}} = \\frac{1}{m} dZ^{[l]} A^{[l-1] T} \\tag{8}$$\n$$ db^{[l]} = \\frac{\\partial \\mathcal{J} }{\\partial b^{[l]}} = \\frac{1}{m} \\sum_{i = 1}^{m} dZ^{[l](i)}\\tag{9}$$\n$$ dA^{[l-1]} = \\frac{\\partial \\mathcal{L} }{\\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \\tag{10}$$\n", "_____no_output_____" ], [ "**Exercise**: Use the 3 formulas above to implement linear_backward().", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: linear_backward\n\ndef linear_backward(dZ, cache):\n    \"\"\"\n    Implement the linear portion of backward propagation for a single layer (layer l)\n\n    Arguments:\n    dZ -- Gradient of the cost with respect to the linear output (of current layer l)\n    cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer\n\n    Returns:\n    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev\n    dW -- Gradient of the cost with respect to W (current layer l), same shape as W\n    db -- Gradient of the cost with respect to b (current layer l), same shape as b\n    \"\"\"\n    A_prev, W, b = cache\n    m = A_prev.shape[1]\n\n    ### START CODE HERE ### (≈ 3 lines of code)\n    dW = np.dot(dZ, cache[0].T) / m\n    db = np.sum(dZ, axis=1, keepdims=True) / m\n    dA_prev = np.dot(cache[1].T, dZ)\n    ### END CODE HERE ###\n    \n    assert (dA_prev.shape == A_prev.shape)\n    assert (dW.shape == W.shape)\n    assert (db.shape == b.shape)\n    \n    return dA_prev, dW, db", "_____no_output_____" ], [ "# Set up some test inputs\ndZ, linear_cache = linear_backward_test_case()\n\ndA_prev, dW, db = linear_backward(dZ, linear_cache)\nprint (\"dA_prev = \"+ str(dA_prev))\nprint (\"dW = \" + str(dW))\nprint (\"db = \" + str(db))", "dA_prev = [[-1.15171336  0.06718465 -0.3204696   2.09812712]\n [ 0.60345879 -3.72508701  5.81700741 -3.84326836]\n [-0.4319552  -1.30987417  1.72354705  0.05070578]\n [-0.38981415  0.60811244 -1.25938424  1.47191593]\n [-2.52214926  2.67882552 -0.67947465  1.48119548]]\ndW = [[ 0.07313866 -0.0976715  -0.87585828  0.73763362  0.00785716]\n [ 0.85508818  0.37530413 -0.59912655  0.71278189 -0.58931808]\n [ 0.97913304 -0.24376494 -0.08839671  0.55151192 
-0.10290907]]\ndb = [[-0.14713786]\n [-0.11313155]\n [-0.13209101]]\n" ] ], [ [ "**Expected Output**:\n    \n```\ndA_prev = \n [[-1.15171336  0.06718465 -0.3204696   2.09812712]\n [ 0.60345879 -3.72508701  5.81700741 -3.84326836]\n [-0.4319552  -1.30987417  1.72354705  0.05070578]\n [-0.38981415  0.60811244 -1.25938424  1.47191593]\n [-2.52214926  2.67882552 -0.67947465  1.48119548]]\ndW = \n [[ 0.07313866 -0.0976715  -0.87585828  0.73763362  0.00785716]\n [ 0.85508818  0.37530413 -0.59912655  0.71278189 -0.58931808]\n [ 0.97913304 -0.24376494 -0.08839671  0.55151192 -0.10290907]]\ndb = \n [[-0.14713786]\n [-0.11313155]\n [-0.13209101]]\n```", "_____no_output_____" ], [ "### 6.2 - Linear-Activation backward\n\nNext, you will create a function that merges the two helper functions, **`linear_backward`** and the backward step for the activation, into **`linear_activation_backward`**. \n\nTo help you implement `linear_activation_backward`, we provided two backward functions:\n- **`sigmoid_backward`**: Implements the backward propagation for SIGMOID unit. You can call it as follows:\n\n```python\ndZ = sigmoid_backward(dA, activation_cache)\n```\n\n- **`relu_backward`**: Implements the backward propagation for RELU unit. You can call it as follows:\n\n```python\ndZ = relu_backward(dA, activation_cache)\n```\n\nIf $g(.)$ is the activation function, \n`sigmoid_backward` and `relu_backward` compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \\tag{11}$$.  \n\n**Exercise**: Implement the backpropagation for the *LINEAR->ACTIVATION* layer.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: linear_activation_backward\n\ndef linear_activation_backward(dA, cache, activation):\n    \"\"\"\n    Implement the backward propagation for the LINEAR->ACTIVATION layer.\n    \n    Arguments:\n    dA -- post-activation gradient for current layer l \n    cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently\n    activation -- the activation to be used in this layer, stored as a text string: \"sigmoid\" or \"relu\"\n    \n    Returns:\n    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev\n    dW -- Gradient of the cost with respect to W (current layer l), same shape as W\n    db -- Gradient of the cost with respect to b (current layer l), same shape as b\n    \"\"\"\n    linear_cache, activation_cache = cache\n    \n    if activation == \"relu\":\n        ### START CODE HERE ### (≈ 2 lines of code)\n        dZ = relu_backward(dA, activation_cache)\n        dA_prev, dW, db = linear_backward(dZ, linear_cache)\n        ### END CODE HERE ###\n    \n    elif activation == \"sigmoid\":\n        ### START CODE HERE ### (≈ 2 lines of code)\n        dZ = sigmoid_backward(dA, activation_cache)\n        dA_prev, dW, db = linear_backward(dZ, linear_cache)\n        ### END CODE HERE ###\n    \n    return dA_prev, dW, db", "_____no_output_____" ], [ "dAL, linear_activation_cache = linear_activation_backward_test_case()\n\ndA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = \"sigmoid\")\nprint (\"sigmoid:\")\nprint (\"dA_prev = \"+ str(dA_prev))\nprint (\"dW = \" + str(dW))\nprint (\"db = \" + str(db) + \"\\n\")\n\ndA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = \"relu\")\nprint (\"relu:\")\nprint (\"dA_prev = \"+ str(dA_prev))\nprint (\"dW = \" + str(dW))\nprint (\"db = \" + str(db))", "sigmoid:\ndA_prev = [[ 0.11017994  0.01105339]\n [ 0.09466817  0.00949723]\n [-0.05743092 -0.00576154]]\ndW = [[ 0.10266786  0.09778551 -0.01968084]]\ndb = [[-0.05729622]]\n\nrelu:\ndA_prev = [[ 
0.44090989 -0.        ]\n [ 0.37883606 -0.        ]\n [-0.2298228   0.        ]]\ndW = [[ 0.44513824  0.37371418 -0.10478989]]\ndb = [[-0.20837892]]\n" ] ], [ [ "**Expected output with sigmoid:**\n\n<table style=\"width:100%\">\n  <tr>\n    <td > dA_prev </td> \n    <td >[[ 0.11017994  0.01105339]\n [ 0.09466817  0.00949723]\n [-0.05743092 -0.00576154]] </td> \n\n  </tr> \n  \n    <tr>\n    <td > dW </td> \n    <td > [[ 0.10266786  0.09778551 -0.01968084]] </td> \n  </tr> \n  \n    <tr>\n    <td > db </td> \n    <td > [[-0.05729622]] </td> \n  </tr> \n</table>\n\n", "_____no_output_____" ], [ "**Expected output with relu:**\n\n<table style=\"width:100%\">\n  <tr>\n    <td > dA_prev </td> \n    <td > [[ 0.44090989  0.        ]\n [ 0.37883606  0.        ]\n [-0.2298228   0.        ]] </td> \n\n  </tr> \n  \n    <tr>\n    <td > dW </td> \n    <td > [[ 0.44513824  0.37371418 -0.10478989]] </td> \n  </tr> \n  \n    <tr>\n    <td > db </td> \n    <td > [[-0.20837892]] </td> \n  </tr> \n</table>\n\n", "_____no_output_____" ], [ "### 6.3 - L-Model Backward \n\nNow you will implement the backward function for the whole network. Recall that when you implemented the `L_model_forward` function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the `L_model_backward` function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass. \n\n\n<img src=\"images/mn_backward.png\" style=\"width:450px;height:300px;\">\n<caption><center> **Figure 5** : Backward pass </center></caption>\n\n**Initializing backpropagation**:\nTo backpropagate through this network, we know that the output is \n$A^{[L]} = \\sigma(Z^{[L]})$. Your code thus needs to compute `dAL` $= \\frac{\\partial \\mathcal{L}}{\\partial A^{[L]}}$.\nTo do so, use this formula (derived using calculus which you don't need in-depth knowledge of):\n```python\ndAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL\n```\n\nYou can then use this post-activation gradient `dAL` to keep going backward. As seen in Figure 5, you can now feed in `dAL` into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a `for` loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula: \n\n$$grads[\"dW\" + str(l)] = dW^{[l]}\\tag{15} $$\n\nFor example, for $l=3$ this would store $dW^{[l]}$ in `grads[\"dW3\"]`.\n\n**Exercise**: Implement backpropagation for the *[LINEAR->RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID* model.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: L_model_backward\n\ndef L_model_backward(AL, Y, caches):\n    \"\"\"\n    Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group\n    \n    Arguments:\n    AL -- probability vector, output of the forward propagation (L_model_forward())\n    Y -- true \"label\" vector (containing 0 if non-cat, 1 if cat)\n    caches -- list of caches containing:\n                every cache of linear_activation_forward() with \"relu\" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)\n                the cache of linear_activation_forward() with \"sigmoid\" (it's caches[L-1])\n    \n    Returns:\n    grads -- A dictionary with the gradients\n             grads[\"dA\" + str(l)] = ... 
\n grads[\"dW\" + str(l)] = ...\n grads[\"db\" + str(l)] = ... \n \"\"\"\n grads = {}\n L = len(caches) # the number of layers\n m = AL.shape[1]\n Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL\n \n # Initializing the backpropagation\n ### START CODE HERE ### (1 line of code)\n dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))\n ### END CODE HERE ###\n \n # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: \"dAL, current_cache\". Outputs: \"grads[\"dAL-1\"], grads[\"dWL\"], grads[\"dbL\"]\n ### START CODE HERE ### (approx. 2 lines)\n current_cache = caches[L-1]\n grads[\"dA\" + str(L-1)], grads[\"dW\" + str(L)], grads[\"db\" + str(L)] = linear_activation_backward(dAL, current_cache, 'sigmoid')\n ### END CODE HERE ###\n \n # Loop from l=L-2 to l=0\n for l in reversed(range(L-1)):\n # lth layer: (RELU -> LINEAR) gradients.\n # Inputs: \"grads[\"dA\" + str(l + 1)], current_cache\". Outputs: \"grads[\"dA\" + str(l)] , grads[\"dW\" + str(l + 1)] , grads[\"db\" + str(l + 1)] \n ### START CODE HERE ### (approx. 5 lines)\n current_cache = caches[l]\n dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads[\"dA\" + str(l + 1)], current_cache, 'relu')\n grads[\"dA\" + str(l)] = dA_prev_temp\n grads[\"dW\" + str(l + 1)] = dW_temp\n grads[\"db\" + str(l + 1)] = db_temp\n ### END CODE HERE ###\n\n return grads", "_____no_output_____" ], [ "AL, Y_assess, caches = L_model_backward_test_case()\ngrads = L_model_backward(AL, Y_assess, caches)\nprint_grads(grads)", "dW1 = [[ 0.41010002 0.07807203 0.13798444 0.10502167]\n [ 0. 0. 0. 0. ]\n [ 0.05283652 0.01005865 0.01777766 0.0135308 ]]\ndb1 = [[-0.22007063]\n [ 0. ]\n [-0.02835349]]\ndA1 = [[ 0.12913162 -0.44014127]\n [-0.14175655 0.48317296]\n [ 0.01663708 -0.05670698]]\n" ] ], [ [ "**Expected Output**\n\n<table style=\"width:60%\">\n \n <tr>\n <td > dW1 </td> \n <td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]\n [ 0. 0. 0. 0. ]\n [ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td> \n </tr> \n \n <tr>\n <td > db1 </td> \n <td > [[-0.22007063]\n [ 0. ]\n [-0.02835349]] </td> \n </tr> \n \n <tr>\n <td > dA1 </td> \n <td > [[ 0.12913162 -0.44014127]\n [-0.14175655 0.48317296]\n [ 0.01663708 -0.05670698]] </td> \n\n </tr> \n</table>\n\n", "_____no_output_____" ], [ "### 6.4 - Update Parameters\n\nIn this section you will update the parameters of the model, using gradient descent: \n\n$$ W^{[l]} = W^{[l]} - \\alpha \\text{ } dW^{[l]} \\tag{16}$$\n$$ b^{[l]} = b^{[l]} - \\alpha \\text{ } db^{[l]} \\tag{17}$$\n\nwhere $\\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary. ", "_____no_output_____" ], [ "**Exercise**: Implement `update_parameters()` to update your parameters using gradient descent.\n\n**Instructions**:\nUpdate parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$. \n", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: update_parameters\n\ndef update_parameters(parameters, grads, learning_rate):\n \"\"\"\n Update parameters using gradient descent\n \n Arguments:\n parameters -- python dictionary containing your parameters \n grads -- python dictionary containing your gradients, output of L_model_backward\n \n Returns:\n parameters -- python dictionary containing your updated parameters \n parameters[\"W\" + str(l)] = ... \n parameters[\"b\" + str(l)] = ...\n \"\"\"\n \n L = len(parameters) // 2 # number of layers in the neural network\n\n # Update rule for each parameter. 
Use a for loop.\n ### START CODE HERE ### (โ‰ˆ 3 lines of code)\n for l in range(L):\n parameters[\"W\" + str(l+1)] = parameters[\"W\" + str(l + 1)] - learning_rate * grads[\"dW\" + str(l + 1)]\n parameters[\"b\" + str(l+1)] = parameters[\"b\" + str(l + 1)] - learning_rate * grads[\"db\" + str(l + 1)]\n ### END CODE HERE ###\n return parameters", "_____no_output_____" ], [ "parameters, grads = update_parameters_test_case()\nparameters = update_parameters(parameters, grads, 0.1)\n\nprint (\"W1 = \"+ str(parameters[\"W1\"]))\nprint (\"b1 = \"+ str(parameters[\"b1\"]))\nprint (\"W2 = \"+ str(parameters[\"W2\"]))\nprint (\"b2 = \"+ str(parameters[\"b2\"]))", "W1 = [[-0.59562069 -0.09991781 -2.14584584 1.82662008]\n [-1.76569676 -0.80627147 0.51115557 -1.18258802]\n [-1.0535704 -0.86128581 0.68284052 2.20374577]]\nb1 = [[-0.04659241]\n [-1.28888275]\n [ 0.53405496]]\nW2 = [[-0.55569196 0.0354055 1.32964895]]\nb2 = [[-0.84610769]]\n" ] ], [ [ "**Expected Output**:\n\n<table style=\"width:100%\"> \n <tr>\n <td > W1 </td> \n <td > [[-0.59562069 -0.09991781 -2.14584584 1.82662008]\n [-1.76569676 -0.80627147 0.51115557 -1.18258802]\n [-1.0535704 -0.86128581 0.68284052 2.20374577]] </td> \n </tr> \n \n <tr>\n <td > b1 </td> \n <td > [[-0.04659241]\n [-1.28888275]\n [ 0.53405496]] </td> \n </tr> \n <tr>\n <td > W2 </td> \n <td > [[-0.55569196 0.0354055 1.32964895]]</td> \n </tr> \n \n <tr>\n <td > b2 </td> \n <td > [[-0.84610769]] </td> \n </tr> \n</table>\n", "_____no_output_____" ], [ "\n## 7 - Conclusion\n\nCongrats on implementing all the functions required for building a deep neural network! \n\nWe know it was a long assignment but going forward it will only get better. The next part of the assignment is easier. \n\nIn the next assignment you will put all these together to build two models:\n- A two-layer neural network\n- An L-layer neural network\n\nYou will in fact use these models to classify cat vs non-cat images!", "_____no_output_____" ] ] ]
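The two activation-gradient helpers referenced in this notebook, `relu_backward` and `sigmoid_backward`, are provided with the assignment rather than written here. As a point of reference, below is a minimal sketch of how they might be implemented, assuming the `activation_cache` holds the pre-activation value `Z` saved during the forward pass; this is a sketch consistent with equation (11) above, not the assignment's own provided file.

```python
import numpy as np

def relu_backward(dA, activation_cache):
    """Backward pass for a ReLU unit: dZ = dA * g'(Z)."""
    Z = activation_cache          # assumed to hold Z from the forward pass
    dZ = np.array(dA, copy=True)  # g'(Z) = 1 where Z > 0 ...
    dZ[Z <= 0] = 0                # ... and 0 elsewhere
    return dZ

def sigmoid_backward(dA, activation_cache):
    """Backward pass for a sigmoid unit: dZ = dA * s(Z) * (1 - s(Z))."""
    Z = activation_cache
    s = 1 / (1 + np.exp(-Z))      # recompute the sigmoid activation
    return dA * s * (1 - s)
```

Either helper multiplies the incoming gradient `dA` elementwise by the derivative of the activation, which is exactly what `linear_activation_backward` relies on before it calls `linear_backward`.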
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ] ]
ece268d52c650abf72877a119b18acb564097b46
231,719
ipynb
Jupyter Notebook
Bike_Share_Analysis.ipynb
ZSoumia/Bike_Share_Analysis
a8f3c0444eafb15ae77784cc37154334edd36c3e
[ "MIT" ]
1
2019-08-19T18:49:40.000Z
2019-08-19T18:49:40.000Z
Bike_Share_Analysis.ipynb
ZSoumia/Bike_Share_Analysis
a8f3c0444eafb15ae77784cc37154334edd36c3e
[ "MIT" ]
null
null
null
Bike_Share_Analysis.ipynb
ZSoumia/Bike_Share_Analysis
a8f3c0444eafb15ae77784cc37154334edd36c3e
[ "MIT" ]
null
null
null
176.480579
12,820
0.864034
[ [ [ "# 2016 US Bike Share Activity Snapshot\n> By Soumia Zohra El Mestari \n\n## Table of Contents\n- [Introduction](#intro)\n- [Posing Questions](#pose_questions)\n- [Data Collection and Wrangling](#wrangling)\n - [Condensing the Trip Data](#condensing)\n- [Exploratory Data Analysis](#eda)\n - [Statistics](#statistics)\n - [Visualizations](#visualizations)\n- [Performing Your Own Analysis](#eda_continued)\n- [Conclusions](#conclusions)\n\n<a id='intro'></a>\n## Introduction\n\n\nOver the past decade, bicycle-sharing systems have been growing in number and popularity in cities across the world. Bicycle-sharing systems allow users to rent bicycles for short trips, typically 30 minutes or less. Thanks to the rise in information technologies, it is easy for a user of the system to access a dock within the system to unlock or return bicycles. These technologies also provide a wealth of data that can be used to explore how these bike-sharing systems are used.\n\nIn this project,I performed an exploratory analysis on data provided by [Motivate](https://www.motivateco.com/), a bike-share system provider for many major cities in the United States. I compared the system usage between three large cities: New York City, Chicago, and Washington, DC. You will also see if there are any differences within each system for those users that are registered, regular users and those users that are short-term, casual users.", "_____no_output_____" ], [ "<a id='pose_questions'></a>\n## Posing Questions\nQ1 ) What are the propotions of each user category(female/male/young/adult..) in each region ? (This helps to make smart decisions in investing to provide certain types of bicycles , for example : if in region A most of your users are females then it's a clever choice to consider adding more WSD(women specific design) bikes as in general women's physique have different shapes than the average man physique ) \n<br><br>\nQ2) In each region what are the propotions of each user type (subscriber/ customer...) ? (This can help conduct further studies to learn more about the user's decisions and therefore spot the disturbing effects that influence the user's decisions , to stay just a customer for example )\n<br><br>\nQ3) In each region what are the parts of the day in which a high usage rate is recorded ? (This helps to identify where and when should the company consider adding more bicycles to certain stations in order to tagrget a larger group of users).\n<br><br>\nQ4) How is the rate of the bike's usage is evolving throughout the years in each region ? ( This gives a global overview of the business growth)\n<br><br>\nQ5 ) How many time(duration) each user is spending on the bike per year ? (results can be grouped per gender/region/time of the day ( morning , night ....) in order to spot trends which can influence the business growth , and with further studies we can undertsand the reasons behind this behaviour and take action to enhance the service )\n", "_____no_output_____" ], [ "<a id='wrangling'></a>\n## Data Collection and Wrangling\n\nNow it's time to collect and explore our data. In this project, we will focus on the record of individual trips taken in 2016 from our selected cities: New York City, Chicago, and Washington, DC. 
Each of these cities has a page where we can freely download the trip data.:\n\n- New York City (Citi Bike): [Link](https://www.citibikenyc.com/system-data)\n- Chicago (Divvy): [Link](https://www.divvybikes.com/system-data)\n- Washington, DC (Capital Bikeshare): [Link](https://www.capitalbikeshare.com/system-data)\n\nIf you visit these pages, you will notice that each city has a different way of delivering its data. Chicago updates with new data twice a year, Washington DC is quarterly, and New York City is monthly. While the original data for 2016 is spread among multiple files for each city, the files in the `/data/` folder collect all of the trip data for the year into one file per city. Some data wrangling of inconsistencies in timestamp format within each city has already been performed . In addition, a random 2% sample of the original data is taken to make the exploration more manageable. \n", "_____no_output_____" ] ], [ [ "## import all necessary packages and functions.\nimport csv # read and write csv files\nfrom datetime import datetime # operations to parse dates\nfrom pprint import pprint # use to print data structures like dictionaries in\n # a nicer way than the base print function.\nimport operator ", "_____no_output_____" ], [ "def print_first_point(filename):\n \"\"\"\n This function prints and returns the first data point (second row) from\n a csv file that includes a header row.\n INPUT :\n filename(string): path of the data file \n OUTPUT : \n second row in the file \n \"\"\"\n # print city name for reference\n city = filename.split('-')[0].split('/')[-1]\n print('\\nCity: {}'.format(city))\n \n with open(filename, 'r') as f_in:\n ## TODO: Use the csv library to set up a DictReader object. ##\n ## see https://docs.python.org/3/library/csv.html ##\n trip_reader = csv.DictReader(f_in)\n \n ## TODO: Use a function on the DictReader object to read the ##\n ## first trip from the data file and store it in a variable. ##\n ## see https://docs.python.org/3/library/csv.html#reader-objects ##\n first_trip = next(trip_reader)\n \n ## TODO: Use the pprint library to print the first trip. 
##\n ## see https://docs.python.org/3/library/pprint.html ##\n pprint(first_trip)\n # output city name and first trip for later testing\n return (city, first_trip)\n\n# list of files for each city\ndata_files = ['./data/NYC-CitiBike-2016.csv',\n './data/Chicago-Divvy-2016.csv',\n './data/Washington-CapitalBikeshare-2016.csv',]\n\n# print the first trip from each file, store in dictionary\nexample_trips = {}\nfor data_file in data_files:\n city, first_trip = print_first_point(data_file)\n example_trips[city] = first_trip", "\nCity: NYC\nOrderedDict([('tripduration', '839'),\n ('starttime', '1/1/2016 00:09:55'),\n ('stoptime', '1/1/2016 00:23:54'),\n ('start station id', '532'),\n ('start station name', 'S 5 Pl & S 4 St'),\n ('start station latitude', '40.710451'),\n ('start station longitude', '-73.960876'),\n ('end station id', '401'),\n ('end station name', 'Allen St & Rivington St'),\n ('end station latitude', '40.72019576'),\n ('end station longitude', '-73.98997825'),\n ('bikeid', '17109'),\n ('usertype', 'Customer'),\n ('birth year', ''),\n ('gender', '0')])\n\nCity: Chicago\nOrderedDict([('trip_id', '9080545'),\n ('starttime', '3/31/2016 23:30'),\n ('stoptime', '3/31/2016 23:46'),\n ('bikeid', '2295'),\n ('tripduration', '926'),\n ('from_station_id', '156'),\n ('from_station_name', 'Clark St & Wellington Ave'),\n ('to_station_id', '166'),\n ('to_station_name', 'Ashland Ave & Wrightwood Ave'),\n ('usertype', 'Subscriber'),\n ('gender', 'Male'),\n ('birthyear', '1990')])\n\nCity: Washington\nOrderedDict([('Duration (ms)', '427387'),\n ('Start date', '3/31/2016 22:57'),\n ('End date', '3/31/2016 23:04'),\n ('Start station number', '31602'),\n ('Start station', 'Park Rd & Holmead Pl NW'),\n ('End station number', '31207'),\n ('End station', 'Georgia Ave and Fairmont St NW'),\n ('Bike number', 'W20842'),\n ('Member Type', 'Registered')])\n" ] ], [ [ "If everything has been filled out correctly, you should see below the printout of each city name (which has been parsed from the data file name) that the first trip has been parsed in the form of a dictionary. When you set up a `DictReader` object, the first row of the data file is normally interpreted as column names. Every other row in the data file will use those column names as keys, as a dictionary is generated for each row.\n\nThis will be useful since we can refer to quantities by an easily-understandable label instead of just a numeric index. For example, if we have a trip stored in the variable `row`, then we would rather get the trip duration from `row['duration']` instead of `row[0]`.\n\n<a id='condensing'></a>\n### Condensing the Trip Data\n\nIt should also be observable from the above printout that each city provides different information. Even where the information is the same, the column names and formats are sometimes different. To make things as simple as possible when we get to the actual exploration, we should trim and clean the data. Cleaning the data makes sure that the data formats across the cities are consistent, while trimming focuses only on the parts of the data we are most interested in to make the exploration easier to work with.\n\nYou will generate new data files with five values of interest for each trip: trip duration, starting month, starting hour, day of the week, and user type. Each of these may require additional wrangling depending on the city:\n\n- **Duration**: This has been given to us in seconds (New York, Chicago) or milliseconds (Washington). 
A more natural unit of analysis will be if all the trip durations are given in terms of minutes.\n- **Month**, **Hour**, **Day of Week**: Ridership volume is likely to change based on the season, time of day, and whether it is a weekday or weekend. Use the start time of the trip to obtain these values. The New York City data includes the seconds in their timestamps, while Washington and Chicago do not. The [`datetime`](https://docs.python.org/3/library/datetime.html) package will be very useful here to make the needed conversions.\n- **User Type**: It is possible that users who are subscribed to a bike-share system will have different patterns of use compared to users who only have temporary passes. Washington divides its users into two types: 'Registered' for users with annual, monthly, and other longer-term subscriptions, and 'Casual', for users with 24-hour, 3-day, and other short-term passes. The New York and Chicago data uses 'Subscriber' and 'Customer' for these groups, respectively. For consistency, you will convert the Washington labels to match the other two.\n\n\n**Question 3a**: Complete the helper functions in the code cells below to address each of the cleaning tasks described above.", "_____no_output_____" ] ], [ [ "def duration_in_mins(datum, city):\n \"\"\"\n Takes as input a dictionary containing info about a single trip (datum) and\n its origin city (city) and returns the trip duration in units of minutes.\n \n Remember that Washington is in terms of milliseconds while Chicago and NYC\n are in terms of seconds. \n \n HINT: The csv module reads in all of the data as strings, including numeric\n values. You will need a function to convert the strings into an appropriate\n numeric type when making your transformations.\n see https://docs.python.org/3/library/functions.html\n \"\"\"\n \n # YOUR CODE HERE\n if (city == 'NYC') or (city == 'Chicago'):\n duration = int(datum['tripduration']) / 60 #conversion from seconds to minutes\n else : #if it's not NYC or Chicago then it's Washington \n duration = int(datum['Duration (ms)']) / 60000 #conversion from millisecondes to minutes\n return duration\n\n\n# Some tests to check that your code works. There should be no output if all of\n# the assertions pass. 
The `example_trips` dictionary was obtained from when\n# you printed the first trip from each of the original data files.\ntests = {'NYC': 13.9833,\n 'Chicago': 15.4333,\n 'Washington': 7.1231}\n\nfor city in tests:\n assert abs(duration_in_mins(example_trips[city], city) - tests[city]) < .001\n \n#tests \nprint('Duration in minutes results for Chicago {} , the correct value is {} '.format(duration_in_mins(example_trips['Chicago'], 'Chicago'),tests['Chicago']))\nprint('Duration in minutes results for NYC {} , the correct value is {} '.format(duration_in_mins(example_trips['NYC'], 'NYC'),tests['NYC']))\nprint('Duration in minutes results for Washington {} , the correct value is {} '.format(duration_in_mins(example_trips['Washington'], 'Washington'),tests['Washington']))\n", "Duration in minutes results for Chicago 15.433333333333334 , the correct value is 15.4333 \nDuration in minutes results for NYC 13.983333333333333 , the correct value is 13.9833 \nDuration in minutes results for Washington 7.123116666666666 , the correct value is 7.1231 \n" ], [ "def time_of_trip(datum, city):\n \"\"\"\n Takes as input a dictionary containing info about a single trip (datum) and\n its origin city (city) and returns the month, hour, and day of the week in\n which the trip was made.\n \n Remember that NYC includes seconds, while Washington and Chicago do not.\n \n HINT: You should use the datetime module to parse the original date\n strings into a format that is useful for extracting the desired information.\n see https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior\n \"\"\"\n # YOUR CODE HERE\n date_format,attribute_key = None,None\n #date_format specifies the patterns in which the date is formatted \n \n if city == 'NYC':\n date_format = '%m/%d/%Y %H:%M:%S'\n attribute_key = 'starttime'\n elif city == 'Chicago':\n date_format = '%m/%d/%Y %H:%M'\n attribute_key = 'starttime'\n else :#if not Chicago and not NYC then it's Washington\n date_format = '%m/%d/%Y %H:%M'\n attribute_key = 'Start date'\n \n month = datetime.strptime(datum[attribute_key],date_format).month\n hour = datetime.strptime(datum[attribute_key],date_format).hour\n day_of_week = datetime.strptime(datum[attribute_key],date_format).strftime('%A') \n\n return (month, hour, day_of_week)\n\n\n# Some tests to check that your code works. There should be no output if all of\n# the assertions pass. 
The `example_trips` dictionary was obtained from when\n# you printed the first trip from each of the original data files.\ntests = {'NYC': (1, 0, 'Friday'),\n 'Chicago': (3, 23, 'Thursday'),\n 'Washington': (3, 22, 'Thursday')}\n\nfor city in tests:\n assert time_of_trip(example_trips[city], city) == tests[city]\n \nprint('Time format results for Chicago {} , the correct value is {} '.format(time_of_trip(example_trips['Chicago'], 'Chicago'),tests['Chicago']))\nprint('Time format results results for NYC {} , the correct value is {} '.format(time_of_trip(example_trips['NYC'], 'NYC'),tests['NYC']))\nprint('Time format results results for Washington {} , the correct value is {} '.format(time_of_trip(example_trips['Washington'], 'Washington'),tests['Washington']))\n", "Time format results for Chicago (3, 23, 'Thursday') , the correct value is (3, 23, 'Thursday') \nTime format results results for NYC (1, 0, 'Friday') , the correct value is (1, 0, 'Friday') \nTime format results results for Washington (3, 22, 'Thursday') , the correct value is (3, 22, 'Thursday') \n" ], [ "def type_of_user(datum, city):\n \"\"\"\n Takes as input a dictionary containing info about a single trip (datum) and\n its origin city (city) and returns the type of system user that made the\n trip.\n \n Remember that Washington has different category names compared to Chicago\n and NYC. \n \"\"\"\n \n # YOUR CODE HERE\n if city == 'Washington':\n if(datum['Member Type'] == 'Registered'):\n user_type = 'Subscriber'\n else:\n user_type = 'Customer'\n else :\n user_type = datum['usertype']\n \n \n return user_type\n\n\n# Some tests to check that your code works. There should be no output if all of\n# the assertions pass. The `example_trips` dictionary was obtained from when\n# you printed the first trip from each of the original data files.\ntests = {'NYC': 'Customer',\n 'Chicago': 'Subscriber',\n 'Washington': 'Subscriber'}\n\nfor city in tests:\n assert type_of_user(example_trips[city], city) == tests[city]\n \nprint('User type results for Chicago {} , the correct value is {} '.format( type_of_user(example_trips['Chicago'], 'Chicago'),tests['Chicago']))\nprint('User type result for NYC {} , the correct value is {} '.format( type_of_user(example_trips['NYC'], 'NYC'),tests['NYC']))\nprint('User type result for Washington {} , the correct value is {} '.format( type_of_user(example_trips['Washington'], 'Washington'),tests['Washington']))\n", "User type results for Chicago Subscriber , the correct value is Subscriber \nUser type result for NYC Customer , the correct value is Customer \nUser type result for Washington Subscriber , the correct value is Subscriber \n" ] ], [ [ "**Question 3b**: Now, use the helper functions you wrote above to create a condensed data file for each city consisting only of the data fields indicated above. In the `/examples/` folder, you will see an example datafile from the [Bay Area Bike Share](http://www.bayareabikeshare.com/open-data) before and after conversion. Make sure that your output is formatted to be consistent with the example file.", "_____no_output_____" ] ], [ [ "def condense_data(in_file, out_file, city):\n \"\"\"\n This function takes full data from the specified input file\n and writes the condensed data to a specified output file. 
The city\n argument determines how the input file will be parsed.\n \n HINT: See the cell below to see how the arguments are structured!\n \"\"\"\n \n with open(out_file, 'w') as f_out, open(in_file, 'r') as f_in:\n # set up csv DictWriter object - writer requires column names for the\n # first row as the \"fieldnames\" argument\n out_colnames = ['duration', 'month', 'hour', 'day_of_week', 'user_type'] \n trip_writer = csv.DictWriter(f_out, fieldnames = out_colnames)\n trip_writer.writeheader()\n \n ## TODO: set up csv DictReader object ##\n trip_reader = list(csv.DictReader(f_in))\n\n # collect data from and process each row\n for row in trip_reader:\n # set up a dictionary to hold the values for the cleaned and trimmed\n # data point\n new_point = {}\n ## TODO: use the helper functions to get the cleaned data from ##\n ## the original data dictionaries. ##\n ## Note that the keys for the new_point dictionary should match ##\n ## the column names set in the DictWriter object above. ##\n new_point[out_colnames[0]] = duration_in_mins(row, city)\n new_point[out_colnames [1]], new_point[out_colnames [2]], new_point[out_colnames[3]] = time_of_trip(row, city) \n new_point[out_colnames[4]] = type_of_user(row, city)\n\n\n ## TODO: write the processed information to the output file. ##\n ## see https://docs.python.org/3/library/csv.html#writer-objects ##\n trip_writer.writerow(new_point)\n ", "_____no_output_____" ], [ "# Run this cell to check your work\ncity_info = {'Washington': {'in_file': './data/Washington-CapitalBikeshare-2016.csv',\n 'out_file': './data/Washington-2016-Summary.csv'},\n 'Chicago': {'in_file': './data/Chicago-Divvy-2016.csv',\n 'out_file': './data/Chicago-2016-Summary.csv'},\n 'NYC': {'in_file': './data/NYC-CitiBike-2016.csv',\n 'out_file': './data/NYC-2016-Summary.csv'}}\n\nfor city, filenames in city_info.items():\n condense_data(filenames['in_file'], filenames['out_file'], city)\n print_first_point(filenames['out_file'])", "\nCity: Washington\nOrderedDict([('duration', '7.123116666666666'),\n ('month', '3'),\n ('hour', '22'),\n ('day_of_week', 'Thursday'),\n ('user_type', 'Subscriber')])\n\nCity: Chicago\nOrderedDict([('duration', '15.433333333333334'),\n ('month', '3'),\n ('hour', '23'),\n ('day_of_week', 'Thursday'),\n ('user_type', 'Subscriber')])\n\nCity: NYC\nOrderedDict([('duration', '13.983333333333333'),\n ('month', '1'),\n ('hour', '0'),\n ('day_of_week', 'Friday'),\n ('user_type', 'Customer')])\n" ] ], [ [ "<a id='eda'></a>\n## Exploratory Data Analysis\n\nNow that we have the data collected and wrangled, we're ready to start exploring the data. In this section I wrote some code to compute descriptive statistics from the data.\n<a id='statistics'></a>\n### Statistics\n\nFirst, let's compute some basic counts. The first cell below contains a function that uses the csv module to iterate through a provided data file, returning the number of trips made by subscribers and customers. The second cell runs this function on the example Bay Area data in the `/examples/` folder. Modify the cells to answer the question below.\n\n**Question 4a**: Which city has the highest number of trips? Which city has the highest proportion of trips made by subscribers? 
Which city has the highest proportion of trips made by short-term customers?\n\n**Answer**: Based on the execution of the following two cells: \n<br>\n**Answer 1:** the city that has the highest number of trips is NYC \n<br>\n**Answer 2:** the city that has the highest proportion of trips made by subscribers is NYC\n<br>\n**Answer 3:** the city that has the highest proportion of trips made by short-term customers is Chicago ", "_____no_output_____" ] ], [ [ "def number_of_trips(filename):\n    \"\"\"\n    This function reads in a file with trip data and reports the number of\n    trips made by subscribers, customers, and total overall.\n    \"\"\"\n    with open(filename, 'r') as f_in:\n        # set up csv reader object\n        reader = csv.DictReader(f_in)\n        \n        # initialize count variables\n        n_subscribers = 0\n        n_customers = 0\n        \n        # tally up ride types\n        for row in reader:\n            if row['user_type'] == 'Subscriber':\n                n_subscribers += 1\n            else:\n                n_customers += 1\n        \n        # compute total number of rides\n        n_total = n_subscribers + n_customers\n        \n        # return tallies as a tuple\n        return(n_subscribers, n_customers, n_total)", "_____no_output_____" ], [ "## Modify this and the previous cell to answer Question 4a. Remember to run ##\n## the function on the cleaned data files you created from Question 3. ##\ntrips_count = {}\nsubscribers_proportion = {}\ncustomers_proportion = {}\ndata_files = ('./data/Chicago-2016-Summary.csv',\n              './data/NYC-2016-Summary.csv',\n              './data/Washington-2016-Summary.csv')\nfor data_file in data_files:\n    city = data_file.split('-')[0].split('/')[-1]\n    # read each file once and unpack the tallies\n    n_subscribers, n_customers, n_total = number_of_trips(data_file)\n    subscribers_proportion[city] = n_subscribers / n_total\n    customers_proportion[city] = n_customers / n_total\n    trips_count[city] = n_total\n\nprint('the city that has the highest number of trips is {} with a total of {} trips '.format(max(trips_count.items(),key=operator.itemgetter(1))[0],max(trips_count.items(),key=operator.itemgetter(1))[1]))\nprint('the city that has the highest proportion of trips made by subscribers is {} with a proportion of {} '.format(max(subscribers_proportion.items(),key=operator.itemgetter(1))[0],max(subscribers_proportion.items(),key=operator.itemgetter(1))[1])) \nprint('the city that has the highest proportion of trips made by short-term customers is {} with a proportion up to {} '.format(max(customers_proportion.items(),key=operator.itemgetter(1))[0],max(customers_proportion.items(),key=operator.itemgetter(1))[1]))", "the city that has the highest number of trips is NYC with a total of 276798 trips \nthe city that has the highest proportion of trips made by subscribers is NYC with a proportion of 0.8883590199351151 \nthe city that has the highest proportion of trips made by short-term customers is Chicago with a proportion up to 0.23774798630269925 \n" ] ], [ [ "\nNow, you will write your own code to continue investigating properties of the data.\n\n**Question 4b**: Bike-share systems are designed for riders to take short trips. Most of the time, users are allowed to take trips of 30 minutes or less with no additional charges, with overage charges made for trips of longer than that duration. What is the average trip length for each city? 
What proportion of rides made in each city are longer than 30 minutes?\n\n**Answer**:<br>\n**Washington** : average trip length = 18.93 minutes <br> proportion of rides longer than 30 minutes = 10.83%\n<br>\n**Chicago** : average trip length = 16.56 minutes <br> proportion of rides longer than 30 minutes = 8.33%\n<br>\n**NYC** : average trip length = 15.81 minutes <br> proportion of rides longer than 30 minutes = 7.30%", "_____no_output_____" ] ], [ [ "## Use this and additional cells to answer Question 4b. ##\n## ##\n## HINT: The csv module reads in all of the data as strings, including ##\n## numeric values. You will need a function to convert the strings ##\n## into an appropriate numeric type before you aggregate data. ##\n## TIP: For the Bay Area example, the average trip length is 14 minutes ##\n## and 3.5% of trips are longer than 30 minutes. ##\ndef average_and_charged_propotion(filename):\n    \"\"\"\n    This function takes as input a filename (path) for a given city and returns:\n    - the average trip length for that city\n    - the proportion of rides made in that city that are longer than 30 minutes\n    \"\"\"\n    trips_durations = []\n    charged_trips = 0\n    with open(filename,'r') as f:\n        reader = csv.DictReader(f)\n        for row in reader:\n            trip_duration = float(row['duration'])\n            trips_durations.append(trip_duration)\n            if trip_duration > 30:\n                charged_trips += 1\n    average = sum(trips_durations)/len(trips_durations)\n    charged_rides_propotion = (charged_trips/len(trips_durations)) * 100\n    return average, charged_rides_propotion\n\n#test \nprint('-----------partial test ----------------------------')\nfilename = './examples/BayArea-Y3-Summary.csv'\na,b = average_and_charged_propotion(filename)\nprint('results \\n average trip length for Bay Area : {} minutes while the correct average is 14 minutes '.format(round(a,2)))\n \nprint('proportion of rides made in Bay area that are longer than 30 minutes is : {}% while the correct average is 3.5%'.format(round(b,2))) \n\n#now we answer Question 4b for each city \ndef average_and_charged_propotion_per_city(data_files):\n    result = []\n    for file in data_files:\n        city_info = {}\n        city = file.split('-')[0].split('/')[-1]\n        city_info['city_name'] = city\n        city_info['avg_trip_duration'], city_info['charged_ride_propotion'] = average_and_charged_propotion(file)\n        result.append(city_info)\n    return result\n\ndata_files = ('./data/Chicago-2016-Summary.csv','./data/NYC-2016-Summary.csv','./data/Washington-2016-Summary.csv')\n\nprint('--------------------------------------question 4b results---------------------------------')\nprint(average_and_charged_propotion_per_city(data_files))", "-----------partial test ----------------------------\nresults \n average trip length for Bay Area : 14.04 minutes while the correct average is 14 minutes \nproportion of rides made in Bay area that are longer than 30 minutes is : 3.52% while the correct average is 3.5%\n--------------------------------------question 4b results---------------------------------\n[{'city_name': 'Chicago', 'avg_trip_duration': 16.563629368787335, 'charged_ride_propotion': 8.332062497400562}, {'city_name': 'NYC', 'avg_trip_duration': 15.81259299802294, 'charged_ride_propotion': 7.3024371563378345}, {'city_name': 'Washington', 'avg_trip_duration': 18.93287355913721, 'charged_ride_propotion': 10.83888671109369}]\n" ] ], [ [ "**Question 4c**: Dig deeper into the question of trip duration based on ridership. 
Choose one city. Within that city, which type of user takes longer rides on average: Subscribers or Customers?\n\n**Answer**: \n<br> \n**chosen city** : Washington\n<br>\n**User type that takes longer rides on average:** Customers", "_____no_output_____" ] ], [ [ "## Use this and additional cells to answer Question 4c. If you have ##\n## not done so yet, consider revising some of your previous code to ##\n## make use of functions for reusability. ##\n## ##\n## TIP: For the Bay Area example data, you should find the average ##\n## Subscriber trip duration to be 9.5 minutes and the average Customer ##\n## trip duration to be 54.6 minutes. Do the other cities have this ##\n## level of difference? ##\ndef average_rides_users(filename):\n    \"\"\"\n    This function takes as input the path of the file where the ride information of a certain\n    city is stored, and returns a dict that holds, for each user type, the average trip duration.\n    \"\"\"\n    sub_count, cust_count = 0, 0  # the number of subscribers and customers\n    result = {'Subscriber':0,'Customer':0}\n    with open(filename,'r') as f:\n        reader = csv.DictReader(f)\n        for row in reader:\n            if row['user_type']=='Subscriber':\n                sub_count += 1\n                result['Subscriber'] += float(row['duration'])\n            else:  # if not a subscriber then it's a customer \n                cust_count += 1\n                result['Customer'] += float(row['duration'])\n    result['Subscriber'] = result['Subscriber'] / sub_count\n    result['Customer'] = result['Customer'] / cust_count\n    return result\n#In this section we will test this function on Bay area \nfile = './examples/BayArea-Y3-Summary.csv'\nprint('the correct results for Bay Area : \\n Average of subscriber trip duration is 9.5 minutes \\n and the average of customer trip duration is 54.6 minutes ')\nprint('the function returned for Bay Area :')\nresult = average_rides_users(file)\nprint('Average of subscriber trip duration is {} minutes \\n and the average of customer trip duration is {} minutes'.format(round(result['Subscriber'],1),round(result['Customer'],1))) \nprint('------------------------------------------------------------------')\n# Now we will test for Washington city to answer question 4c\nfile = './data/Washington-2016-Summary.csv'\nresult = average_rides_users(file)\n# compare by average duration, not alphabetically by key: max() over the dict\n# alone would always return 'Subscriber'\nmax_rides_user = max(result, key=result.get)\nprint('For Washington city The type of users that has the longer rides on average is {} with an average of {} minutes'.format(max_rides_user,round(result[max_rides_user],2)))", "the correct results for Bay Area : \n Average of subscriber trip duration is 9.5 minutes \n and the average of customer trip duration is 54.6 minutes \nthe function returned for Bay Area :\nAverage of subscriber trip duration is 9.5 minutes \n and the average of customer trip duration is 54.6 minutes\n------------------------------------------------------------------\nFor Washington city The type of users that has the longer rides on average is Subscriber with an average of 12.53 minutes\n" ] ], [ [ "<a id='visualizations'></a>\n### Visualizations\n\nThe last set of values that you computed should have pulled up an interesting result. While the mean trip time for Subscribers is well under 30 minutes, the mean trip time for Customers is actually _above_ 30 minutes! It will be interesting for us to look at how the trip times are distributed. In order to do this, a new library will be introduced here, `matplotlib`. 
Run the cell below to load the library and to generate an example plot.", "_____no_output_____" ] ], [ [ "# load library\nimport matplotlib.pyplot as plt\n\n# this is a 'magic word' that allows for plots to be displayed\n# inline with the notebook. If you want to know more, see:\n# http://ipython.readthedocs.io/en/stable/interactive/magics.html\n%matplotlib inline \n\n# example histogram, data taken from bay area sample\ndata = [ 7.65, 8.92, 7.42, 5.50, 16.17, 4.20, 8.98, 9.62, 11.48, 14.33,\n 19.02, 21.53, 3.90, 7.97, 2.62, 2.67, 3.08, 14.40, 12.90, 7.83,\n 25.12, 8.30, 4.93, 12.43, 10.60, 6.17, 10.88, 4.78, 15.15, 3.53,\n 9.43, 13.32, 11.72, 9.85, 5.22, 15.10, 3.95, 3.17, 8.78, 1.88,\n 4.55, 12.68, 12.38, 9.78, 7.63, 6.45, 17.38, 11.90, 11.52, 8.63,]\nplt.hist(data)\nplt.title('Distribution of Trip Durations')\nplt.xlabel('Duration (m)')\nplt.show()", "_____no_output_____" ] ], [ [ "In the above cell, we collected fifty trip times in a list, and passed this list as the first argument to the `.hist()` function. This function performs the computations and creates plotting objects for generating a histogram, but the plot is actually not rendered until the `.show()` function is executed. The `.title()` and `.xlabel()` functions provide some labeling for plot context.\n\nYou will now use these functions to create a histogram of the trip times for the city you selected in question 4c. Don't separate the Subscribers and Customers for now: just collect all of the trip times and plot them.", "_____no_output_____" ] ], [ [ "## Use this and additional cells to collect all of the trip times as a list ##\n## and then use pyplot functions to generate a histogram of trip times. ##\n#selected city is Washington \n#trip times <=> trip start time\nfile = './data/Washington-2016-Summary.csv'\ndef plot_trip_times(file_path,plot_title,x_label,y_label):\n times = []\n with open(file_path,'r') as f :\n reader = csv.DictReader(f)\n for row in reader:\n times.append(round(float(row['duration']),2))\n plt.hist(times) \n plt.title(plot_title)\n plt.xlabel(x_label)\n plt.ylabel(y_label)\n plt.show()\n \nplot_trip_times(file,'Histogram of Washington\\'s bike trips durations','Duration in minutes','')\n", "_____no_output_____" ] ], [ [ "If you followed the use of the `.hist()` and `.show()` functions exactly like in the example, you're probably looking at a plot that's completely unexpected. The plot consists of one extremely tall bar on the left, maybe a very short second bar, and a whole lot of empty space in the center and right. Take a look at the duration values on the x-axis. This suggests that there are some highly infrequent outliers in the data. Instead of reprocessing the data, you will use additional parameters with the `.hist()` function to limit the range of data that is plotted. Documentation for the function can be found [[here]](https://matplotlib.org/devdocs/api/_as_gen/matplotlib.pyplot.hist.html#matplotlib.pyplot.hist).\n\n**Question 5**: Use the parameters of the `.hist()` function to plot the distribution of trip times for the Subscribers in your selected city. Do the same thing for only the Customers. Add limits to the plots so that only trips of duration less than 75 minutes are plotted. As a bonus, set the plots up so that bars are in five-minute wide intervals. For each group, where is the peak of each distribution? 
How would you describe the shape of each distribution?\n\n**Answer**: <br>\n**For the subscribers distribution** : <br>\n- The peak approaches 17500 trips, which corresponds to trips that last from 5 to 10 minutes.\n- The shape of this distribution: it is **nearly** right-skewed, with the majority of the values lying between 0 and 15 minutes. \n<br>\n**For the customers distribution** : <br>\n- The peak approaches 2000 trips, which corresponds to trips that last from 15 to 20 minutes; in other words, the mode = [15,20] minutes. \n- The shape of this distribution: it doesn't have a specific shape; meanwhile, we can mention that most values lie between 5 and 35 minutes. \n\n\n", "_____no_output_____" ] ], [ [ "## Use this and additional cells to answer Question 5. ##\n# SELECTED CITY is Washington \ndef trip_times_by_user_type(file_path,user_type):\n    \"\"\"\n    This function returns the bike trip durations for a certain user type in a certain city.\n    INPUT :  file_path(string) : the path of the file that includes information about bike trips in a given city\n             user_type(string) : the type of users \n    OUTPUT : times (list) : a list of the durations of all trips taken by a certain type of users in a certain city \n    \"\"\"\n    times = []\n    with open(file_path,'r') as f :\n        reader = csv.DictReader(f)\n        for row in reader:\n            if row['user_type'] == user_type:\n                times.append(round(float(row['duration']),2))\n    return times\n# file path of Washington's bike trip information \nfile_path = './data/Washington-2016-Summary.csv'\nsub_trip_times = trip_times_by_user_type(file_path,'Subscriber') \ncust_trip_times = trip_times_by_user_type(file_path,'Customer') \n# we plot the results \n# for Washington's subscribers distribution\n\n# BELOW : we want 75/5 = 15 five-minute bins, which takes 16 edges, so we fill\n# bins with an arithmetic sequence with a distance = 5 between consecutive terms \n# this is to satisfy the condition of having a 5 minute bar width\nbins = [5*i for i in range(16)]\nplt.hist(sub_trip_times,bins,range=(0,75)) \nplt.xticks(range(0,80,5))  # this is used to set the tick labels on the X-axis \nplt.title('Distribution of Washington\\'s subscribers trips durations')\nplt.xlabel('Duration (min)')\nplt.show()\n\nprint('---------------------------------------------------------------')\n# here is another way to get a 5 minute bar width <=> just set the number of bins, which is 15\nplt.hist(cust_trip_times,bins=15,range=(0,75))\nplt.xticks(range(0,80,5))  # this is used to set the tick labels on the X-axis \nplt.title('Distribution of Washington\\'s customers trips durations')\nplt.xlabel('Duration (min)')\nplt.show()\n\n", "_____no_output_____" ] ], [ [ "<a id='eda_continued'></a>\n## Performing Your Own Analysis\n\nSo far, you've performed an initial exploration into the data available. You have compared the relative volume of trips made between three U.S. cities and the ratio of trips made by Subscribers and Customers. For one of these cities, you have investigated differences between Subscribers and Customers in terms of how long a typical trip lasts. Now it is your turn to continue the exploration in a direction that you choose. Here are a few suggestions for questions to explore:\n\n- How does ridership differ by month or season? Which month / season has the highest ridership? Does the ratio of Subscriber trips to Customer trips change depending on the month or season?\n- Is the pattern of ridership different on the weekends versus weekdays? 
On what days are Subscribers most likely to use the system? What about Customers? Does the average duration of rides change depending on the day of the week?\n- During what time of day is the system used the most? Is there a difference in usage patterns for Subscribers and Customers?\n\nIf any of the questions you posed in your answer to question 1 align with the bullet points above, this is a good opportunity to investigate one of them. As part of your investigation, you will need to create a visualization. If you want to create something other than a histogram, then you might want to consult the [Pyplot documentation](https://matplotlib.org/devdocs/api/pyplot_summary.html). In particular, if you are plotting values across a categorical variable (e.g. city, user type), a bar chart will be useful. The [documentation page for `.bar()`](https://matplotlib.org/devdocs/api/_as_gen/matplotlib.pyplot.bar.html#matplotlib.pyplot.bar) includes links at the bottom of the page with examples for you to build off of for your own use.\n\n**Question 6**: Continue the investigation by exploring another question that could be answered by the data available. Document the question you want to explore below. Your investigation should involve at least two variables and should compare at least two groups. You should also use at least one visualization as part of your explorations.\n\n**Answer**:\nAll answers are extracted from running the code in the cell below and observing the visualizations. \n <br>\n**Question 1:** How does ridership differ by month or season in each city? <br>\n**Answer 1:** <br>\n__For Chicago__ : the distribution of the total bike rides over the months is slightly symmetric, as it starts with low levels in January and the rates then keep improving until reaching their maximum by July; from July till December the rates decrease again <br>\n__For NYC__ : the distribution doesn't have a specific shape and the rates sway up and down during the months<br>\n__For Washington__ : the distribution doesn't have a specific shape, but it's worth mentioning that the rate reaches significantly high levels in the period from June till October\n<br>\n**Question 2:** Which month / season has the highest ridership in each city?<br>\n**Answer 2:** <br>\n__For Chicago__ : As a global view, the month that has the highest ridership rate is: July<br>\nMeanwhile, if we take a look at user types:<br>\nThe month that has the highest ridership in terms of subscribers is: June <br>\nThe month that has the highest ridership in terms of customers is: July <br>\n__For NYC__ : As a global view, the month that has the highest ridership rate is: September<br>\nMeanwhile, if we take a look at user types:<br>\nThe month that has the highest ridership in terms of subscribers is: October <br>\nThe month that has the highest ridership in terms of customers is: August<br>\n__For Washington__ : As a global view, the month that has the highest ridership rate is: July<br>\nMeanwhile, if we take a look at user types:<br>\nThe month that has the highest ridership in terms of subscribers is: June <br>\nThe month that has the highest ridership in terms of customers is: July<br>\nIn terms of seasons: <br> \nBoth Chicago and Washington recorded the highest levels in bike rides during the Summer months (from June to August), \nwhile NYC recorded its peaks during the late Summer to Autumn (from August to November). \n<br>\n**Question 3**: How does the ratio of Customer trips to Subscriber trips change in each of the 
three cities? \n__For Chicago__ : From January to May the ratio increases, which means that in this period the share of customer trips grows relative to subscriber trips <br>\nthe ratio goes down in June, meaning customer trips lose ground relative to subscriber trips \n<br> \nFrom July to December the ratio follows a descending evolution, reaching its lowest levels by December \n__For NYC__ : The ratio keeps fluctuating up and down multiple times during the year \n<br>\n__For Washington__ : The ratio reaches its highest peak in July; apart from July, the evolution is smooth and doesn't show dramatic falls and rises \n**Conclusion** : the peak season is generally Summer, or the late Summer till Autumn, and the rates fall in Winter; this result is not surprising, because people find it hard to use bikes during Winter due to the harsh weather conditions in this season. ", "_____no_output_____" ] ], [ [ "import calendar as cal\ndef ridership_by_month(file_path):\n    \"\"\"\n    This function takes as input a ridership information file of a certain city\n    and returns :\n    month_stats : Dict{month : number_of_trips}\n    mode_month : String => the month that has the highest ridership \n    mode_month_custs : String => the month that has the highest ridership for Customers\n    mode_month_sub : String => the month that has the highest ridership for Subscribers\n    ratio_cust_sub : Dict => {month : ratio} for each month, the ratio customers/subscribers \n    \"\"\"\n    month_stats = {1:0,2:0,3:0,4:0,5:0,6:0,7:0,8:0,9:0,10:0,11:0,12:0}\n    month_cust_trips = {1:0,2:0,3:0,4:0,5:0,6:0,7:0,8:0,9:0,10:0,11:0,12:0}\n    month_sub_trips = {1:0,2:0,3:0,4:0,5:0,6:0,7:0,8:0,9:0,10:0,11:0,12:0}\n\n    with open(file_path,'r') as f:\n        reader = csv.DictReader(f)\n        for row in reader:\n            month_stats[int(row['month'])] += 1\n            if row['user_type'] == 'Customer':\n                month_cust_trips[int(row['month'])] += 1\n            else:\n                month_sub_trips[int(row['month'])] += 1 \n    \n    ratio_cust_sub = {i: round((month_cust_trips[i]/month_sub_trips[i]),2) for i in month_cust_trips} \n    mode_month = cal.month_name[max(month_stats.items(), key=operator.itemgetter(1))[0]]\n    mode_month_custs = cal.month_name[max(month_cust_trips.items(), key=operator.itemgetter(1))[0]]\n    mode_month_sub = cal.month_name[max(month_sub_trips.items(), key=operator.itemgetter(1))[0]]\n    return {'mode_month':mode_month,'mode_custs':mode_month_custs,'mode_sub':mode_month_sub,'monthly_stats':month_stats,'monthly_ratio_cust_sub':ratio_cust_sub,\n            'monthly_custs':month_cust_trips,'monthly_subs':month_sub_trips,'ratio_cust/sub':ratio_cust_sub}\n\ndef plot_ridership_month(monthly_stats,monthly_sub,monthly_custs,ratio,city_name):\n    \"\"\"\n    This function plots monthly ridership reports.\n    INPUT : monthly_stats(Dict): contains the number of trips per month in a given city.\n            monthly_sub(Dict): contains the number of trips taken by subscribers per month in a given city.\n            monthly_custs(Dict): contains the number of trips taken by customers per month in a given city.\n            ratio(Dict): contains the ratio between the number of trips taken by customers and those taken \n            by subscribers per month in a given city.\n    OUTPUT : None\n    \"\"\"\n    plt.bar(range(len(monthly_stats)), monthly_stats.values(), align='center')\n    plt.title(\"monthly ridership report of {}\".format(city_name))\n    plt.xticks(range(len(monthly_stats)), monthly_stats.keys())\n    plt.show()\n    plt.bar(range(len(monthly_sub)), monthly_sub.values())\n    plt.title(\"monthly subscribers ridership report of {}\".format(city_name))\n    plt.xticks(range(len(monthly_sub)), monthly_sub.keys())\n    plt.show()\n    plt.bar(range(len(monthly_custs)), monthly_custs.values(), align='center')\n    plt.title(\"monthly customers ridership report of {}\".format(city_name))\n    plt.xticks(range(len(monthly_custs)), monthly_custs.keys())\n    plt.show()\n    plt.bar(range(len(ratio)), ratio.values(), align='center')\n    plt.title(\"monthly ratio customers/subscribers ridership report of {}\".format(city_name))\n    plt.xticks(range(len(ratio)), ratio.keys())\n    plt.show()\n    \nfiles = ('./data/Chicago-2016-Summary.csv','./data/NYC-2016-Summary.csv','./data/Washington-2016-Summary.csv') \n# now we will explore the different visualizations for each city in order to compare the results\n#1 collecting results \nresults = {}\nfor file in files :\n    city = file.split('-')[0].split('/')[-1]\n    results[city] = ridership_by_month(file)\n#2 Visualizations\nfor key,value in results.items():\n    print(\"results for {}\".format(key))\n    print(\" the month that has the highest ridership rate is : {}\".format(value['mode_month']))\n    print(\" the month that has the highest ridership in terms of subscribers is : {}\".format(value['mode_sub']))\n    print(\" the month that has the highest ridership in terms of Customers is : {}\".format(value['mode_custs']))\n    print(\"visualizations....\")\n    plot_ridership_month(value['monthly_stats'],value['monthly_subs'],value['monthly_custs'],value['ratio_cust/sub'],key)\n    print(\"---------------------------------------------------------------------------------------------------\\n\")\n\n", "results for Chicago\n the month that has the highest ridership rate is : July\n the month that has the highest ridership in terms of subscribers is : June\n the month that has the highest ridership in terms of Customers is : July\nvisualizations....\n" ] ], [ [ "<a id='conclusions'></a>\n## Conclusions\n\nCongratulations on completing the project! This is only a sampling of the data analysis process: from generating questions and wrangling the data to exploring the data. Normally, at this point in the data analysis process, you might want to draw conclusions about the data by performing a statistical test or fitting the data to a model for making predictions. There are also a lot of potential analyses that could be performed on the data which are not possible with only the data provided. For example, detailed location data has not been investigated. Where are the most commonly used docks? What are the most common routes? As another example, weather has potential to have a large impact on daily ridership. How much is ridership impacted when there is rain or snow? Are subscribers or customers affected more by changes in weather?\n\n**Question 7**: Putting the bike share data aside, think of a topic or field of interest where you would like to be able to apply the techniques of data science. 
What would you like to be able to learn from your chosen subject?\n\n**Answer**: One of the most worthwhile fields in which to apply these techniques is health care and epidemic detection: if we can extract details of patients' symptoms and doctors' notes in each region, and then use this information to extract more knowledge, we will be able to detect epidemics more effectively and gain time in putting good health strategies in place to face this kind of disaster.<br>\nAnother use case is improving sales rates in shops throughout the day/month/season: for example, if we have a supermarket and we notice that we sell more vegetables in the morning, then we can arrange the products in a specific way in the morning in order to make the products that are more likely to be sold in that part of the day easier to reach.", "_____no_output_____" ] ], [ [ "from subprocess import call\ncall(['python', '-m', 'nbconvert', 'Bike_Share_Analysis.ipynb'])", "_____no_output_____" ] ] ]
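The answer to Question 6 above summarizes the monthly breakdown by season even though the notebook's code only aggregates by month. Below is a short sketch of how the months in the condensed summary files could be rolled up into seasons; the December-February / March-May / June-August / September-November boundaries are an assumption, not something fixed by the data.

```python
import csv
from collections import Counter

# Hypothetical meteorological season boundaries (an assumption)
SEASONS = {12: 'Winter', 1: 'Winter', 2: 'Winter',
           3: 'Spring', 4: 'Spring', 5: 'Spring',
           6: 'Summer', 7: 'Summer', 8: 'Summer',
           9: 'Autumn', 10: 'Autumn', 11: 'Autumn'}

def ridership_by_season(file_path):
    """Count trips per season from a condensed summary file with a 'month' column."""
    season_counts = Counter()
    with open(file_path, 'r') as f:
        for row in csv.DictReader(f):
            season_counts[SEASONS[int(row['month'])]] += 1
    return season_counts

# e.g. ridership_by_season('./data/Washington-2016-Summary.csv')
```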
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ece27122b3d84e27f4bf5256450eb46bb0058dac
654,085
ipynb
Jupyter Notebook
Sentiment_Classification.ipynb
ArnaudMallet/sentiment_analysis_grokking_DL
3c478b0debc5cb46e90e3322935b46553df46432
[ "MIT" ]
1
2020-10-22T07:32:10.000Z
2020-10-22T07:32:10.000Z
Sentiment_Classification.ipynb
ArnaudMallet/sentiment_analysis_grokking_DL
3c478b0debc5cb46e90e3322935b46553df46432
[ "MIT" ]
null
null
null
Sentiment_Classification.ipynb
ArnaudMallet/sentiment_analysis_grokking_DL
3c478b0debc5cb46e90e3322935b46553df46432
[ "MIT" ]
null
null
null
75.338056
46,924
0.727329
[ [ [ "# Sentiment Classification & How To \"Frame Problems\" for a Neural Network\n\nby Andrew Trask\n\n- **Twitter**: @iamtrask\n- **Blog**: http://iamtrask.github.io", "_____no_output_____" ], [ "### What You Should Already Know\n\n- neural networks, forward and back-propagation\n- stochastic gradient descent\n- mean squared error\n- and train/test splits\n\n### Where to Get Help if You Need it\n- Re-watch previous Udacity Lectures\n- Leverage the recommended Course Reading Material - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) (Check inside your classroom for a discount code)\n- Shoot me a tweet @iamtrask\n\n\n### Tutorial Outline:\n\n- Intro: The Importance of \"Framing a Problem\" (this lesson)\n\n\n- [Curate a Dataset](#lesson_1)\n- [Developing a \"Predictive Theory\"](#lesson_2)\n- [**PROJECT 1**: Quick Theory Validation](#project_1)\n\n\n- [Transforming Text to Numbers](#lesson_3)\n- [**PROJECT 2**: Creating the Input/Output Data](#project_2)\n\n\n- Putting it all together in a Neural Network (video only - nothing in notebook)\n- [**PROJECT 3**: Building our Neural Network](#project_3)\n\n\n- [Understanding Neural Noise](#lesson_4)\n- [**PROJECT 4**: Making Learning Faster by Reducing Noise](#project_4)\n\n\n- [Analyzing Inefficiencies in our Network](#lesson_5)\n- [**PROJECT 5**: Making our Network Train and Run Faster](#project_5)\n\n\n- [Further Noise Reduction](#lesson_6)\n- [**PROJECT 6**: Reducing Noise by Strategically Reducing the Vocabulary](#project_6)\n\n\n- [Analysis: What's going on in the weights?](#lesson_7)", "_____no_output_____" ], [ "# Lesson: Curate a Dataset<a id='lesson_1'></a>", "_____no_output_____" ] ], [ [ "def pretty_print_review_and_label(i):\n print(labels[i] + \"\\t:\\t\" + reviews[i][:80] + \"...\")\n\ng = open('reviews.txt','r') # What we know!\nreviews = list(map(lambda x:x[:-1],g.readlines()))\ng.close()\n\ng = open('labels.txt','r') # What we WANT to know!\nlabels = list(map(lambda x:x[:-1].upper(),g.readlines()))\ng.close()", "_____no_output_____" ] ], [ [ "**Note:** The data in `reviews.txt` we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like `The`, `the`, and `THE`, all the same way.", "_____no_output_____" ] ], [ [ "len(reviews)", "_____no_output_____" ], [ "reviews[0]", "_____no_output_____" ], [ "labels[0]", "_____no_output_____" ] ], [ [ "# Lesson: Develop a Predictive Theory<a id='lesson_2'></a>", "_____no_output_____" ] ], [ [ "print(\"labels.txt \\t : \\t reviews.txt\\n\")\npretty_print_review_and_label(2137)\npretty_print_review_and_label(12816)\npretty_print_review_and_label(6267)\npretty_print_review_and_label(21934)\npretty_print_review_and_label(5297)\npretty_print_review_and_label(4998)", "labels.txt \t : \t reviews.txt\n\nNEGATIVE\t:\tthis movie is terrible but it has some good effects . ...\nPOSITIVE\t:\tadrian pasdar is excellent is this film . he makes a fascinating woman . ...\nNEGATIVE\t:\tcomment this movie is impossible . is terrible very improbable bad interpretat...\nPOSITIVE\t:\texcellent episode movie ala pulp fiction . days suicides . it doesnt get more...\nNEGATIVE\t:\tif you haven t seen this it s terrible . it is pure trash . 
i saw this about ...\nPOSITIVE\t:\tthis schiffer guy is a real genius the movie is of excellent quality and both e...\n" ] ], [ [ "# Project 1: Quick Theory Validation<a id='project_1'></a>\n\nThere are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.\n\nYou'll find the [Counter](https://docs.python.org/2/library/collections.html#collections.Counter) class to be useful in this exercise, as well as the [numpy](https://docs.scipy.org/doc/numpy/reference/) library.", "_____no_output_____" ] ], [ [ "from collections import Counter\nimport numpy as np", "_____no_output_____" ] ], [ [ "We'll create three `Counter` objects, one for words from postive reviews, one for words from negative reviews, and one for all the words.", "_____no_output_____" ] ], [ [ "# Create three Counter objects to store positive, negative and total counts\npositive_counts = Counter()\nnegative_counts = Counter()\ntotal_counts = Counter()", "_____no_output_____" ] ], [ [ "**TODO:** Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.\n\n**Note:** Throughout these projects, you should use `split(' ')` to divide a piece of text (such as a review) into individual words. If you use `split()` instead, you'll get slightly different results than what the videos and solutions show.", "_____no_output_____" ] ], [ [ "# Loop over all the words in all the reviews and increment the counts in the appropriate counter objects\nfor i in range(len(reviews)):\n if(labels[i] == 'POSITIVE'):\n for word in reviews[i].split(\" \"):\n positive_counts[word] += 1\n total_counts[word] += 1\n else:\n for word in reviews[i].split(\" \"):\n negative_counts[word] += 1\n total_counts[word] += 1", "_____no_output_____" ] ], [ [ "Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used. ", "_____no_output_____" ] ], [ [ "# Examine the counts of the most common words in positive reviews\npositive_counts.most_common()", "_____no_output_____" ], [ "# Examine the counts of the most common words in negative reviews\nnegative_counts.most_common()", "_____no_output_____" ] ], [ [ "As you can see, common words like \"the\" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the **ratios** of word usage between positive and negative reviews.\n\n**TODO:** Check all the words you've seen and calculate the ratio of postive to negative uses and store that ratio in `pos_neg_ratios`. \n>Hint: the positive-to-negative ratio for a given word can be calculated with `positive_counts[word] / float(negative_counts[word]+1)`. 
Notice the `+1` in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.", "_____no_output_____" ] ], [ [ "pos_neg_ratios = Counter()\n\n# Calculate the ratios of positive and negative uses of the most common words\n# Consider words to be \"common\" if they've been used at least 100 times\nfor term,cnt in list(total_counts.most_common()):\n    if(cnt > 100):\n        pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)\n        pos_neg_ratios[term] = pos_neg_ratio", "_____no_output_____" ] ], [ [ "Examine the ratios you've calculated for a few words:", "_____no_output_____" ] ], [ [ "print(\"Pos-to-neg ratio for 'the' = {}\".format(pos_neg_ratios[\"the\"]))\nprint(\"Pos-to-neg ratio for 'amazing' = {}\".format(pos_neg_ratios[\"amazing\"]))\nprint(\"Pos-to-neg ratio for 'terrible' = {}\".format(pos_neg_ratios[\"terrible\"]))", "Pos-to-neg ratio for 'the' = 1.0607993145235326\nPos-to-neg ratio for 'amazing' = 4.022813688212928\nPos-to-neg ratio for 'terrible' = 0.17744252873563218\n" ] ], [ [ "Looking closely at the values you just calculated, we see the following: \n\n* Words that you would expect to see more often in positive reviews – like \"amazing\" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.\n* Words that you would expect to see more often in negative reviews – like \"terrible\" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.\n* Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like \"the\" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The `+1` we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.\n\nOk, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like \"amazing\" has a value above 4, whereas a very negative word like \"terrible\" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:\n\n* Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words, so there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. We should therefore center all the values around neutral, so that a word's distance from neutral indicates how much sentiment (positive or negative) that word conveys.\n* When comparing absolute values it's easier to do that around zero than one. \n\nTo fix these issues, we'll convert all of our ratios to new values using logarithms.\n\n**TODO:** Go through all the ratios you calculated and convert them to logarithms. (i.e. 
use `np.log(ratio)`)\n\nIn the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.", "_____no_output_____" ] ], [ [ "# Convert ratios to logs\nfor word,ratio in pos_neg_ratios.most_common():\n    pos_neg_ratios[word] = np.log(ratio)", "_____no_output_____" ] ], [ [ "**NOTE:** In the video, Andrew uses the following formulas for the previous cell:\n> * For any positive words, convert the ratio using `np.log(ratio)`\n> * For any negative words, convert the ratio using `-np.log(1/(ratio + 0.01))`\n\nThese won't give you the exact same results as the simpler code we show in this notebook, but the values will be similar. In case that second equation looks strange, here's what it's doing: First, it divides one by a very small number, which will produce a larger positive number. Then, it takes the `log` of that, which produces numbers similar to the ones for the positive words. Finally, it negates the values by adding that minus sign up front. The results are extremely positive and extremely negative words having positive-to-negative ratios with similar magnitudes but opposite signs, just like when we use `np.log(ratio)`.", "_____no_output_____" ], [ "Examine the new ratios you've calculated for the same words from before:", "_____no_output_____" ] ], [ [ "print(\"Pos-to-neg ratio for 'the' = {}\".format(pos_neg_ratios[\"the\"]))\nprint(\"Pos-to-neg ratio for 'amazing' = {}\".format(pos_neg_ratios[\"amazing\"]))\nprint(\"Pos-to-neg ratio for 'terrible' = {}\".format(pos_neg_ratios[\"terrible\"]))", "Pos-to-neg ratio for 'the' = 0.05902269426102881\nPos-to-neg ratio for 'amazing' = 1.3919815802404802\nPos-to-neg ratio for 'terrible' = -1.7291085042663878\n" ] ], [ [ "If everything worked, now you should see neutral words with values close to zero. In this case, \"the\" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at \"amazing\"'s ratio - it's above `1`, showing it is clearly a word with positive sentiment. And \"terrible\" has a similar score, but in the opposite direction, so it's below `-1`. It's now clear that both of these words are associated with specific, opposing sentiments.\n\nNow run the following cells to see more ratios. \n\nThe first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see *all* the words in the list.)\n\nThe second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write `reversed(pos_neg_ratios.most_common())`.)\n\nYou should continue to see values similar to the earlier ones we checked – neutral words will be close to `0`, words will get more positive as their ratios approach and go above `1`, and words will get more negative as their ratios approach and go below `-1`. 
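For a quick numeric check of that symmetry, here is a minimal sketch (it relies only on `numpy`, which was imported earlier in this notebook; the example ratios are illustrative values, not ones computed from the dataset):

```python
import numpy as np

# A perfectly neutral word has a raw ratio of 1.0, and log(1.0) == 0.0,
# so after the conversion, sentiment strength is symmetric around zero.
print(np.log(1.0))   # 0.0     (neutral)
print(np.log(4.0))   # ~1.386  (a strongly positive word)
print(np.log(0.25))  # ~-1.386 (an equally strong negative word)
```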
That's why we decided to use the logs instead of the raw ratios.", "_____no_output_____" ] ], [ [ "# words most frequently seen in a review with a \"POSITIVE\" label\npos_neg_ratios.most_common()", "_____no_output_____" ], [ "# words most frequently seen in a review with a \"NEGATIVE\" label\nlist(reversed(pos_neg_ratios.most_common()))[0:30]\n\n# Note: Above is the code Andrew uses in his solution video, \n# so we've included it here to avoid confusion.\n# If you explore the documentation for the Counter class, \n# you will see you could also find the 30 least common\n# words like this: pos_neg_ratios.most_common()[:-31:-1]", "_____no_output_____" ] ], [ [ "# End of Project 1. \n## Watch the next video to continue with Andrew's next lesson.\n\n# Transforming Text into Numbers<a id='lesson_3'></a>", "_____no_output_____" ] ], [ [ "from IPython.display import Image\n\nreview = \"This was a horrible, terrible movie.\"\n\nImage(filename='sentiment_network.png')", "_____no_output_____" ], [ "review = \"The movie was excellent\"\n\nImage(filename='sentiment_network_pos.png')", "_____no_output_____" ] ], [ [ "# Project 2: Creating the Input/Output Data<a id='project_2'></a>\n\n**TODO:** Create a [set](https://docs.python.org/3/tutorial/datastructures.html#sets) named `vocab` that contains every word in the vocabulary.", "_____no_output_____" ] ], [ [ "vocab = set(total_counts.keys())", "_____no_output_____" ] ], [ [ "Run the following cell to check your vocabulary size. If everything worked correctly, it should print **74074**", "_____no_output_____" ] ], [ [ "vocab_size = len(vocab)\nprint(vocab_size)", "74074\n" ] ], [ [ "Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. `layer_0` is the input layer, `layer_1` is a hidden layer, and `layer_2` is the output layer.", "_____no_output_____" ] ], [ [ "from IPython.display import Image\nImage(filename='sentiment_network_2.png')", "_____no_output_____" ] ], [ [ "**TODO:** Create a numpy array called `layer_0` and initialize it to all zeros. You will find the [zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html) function particularly helpful here. Be sure you create `layer_0` as a 2-dimensional matrix with 1 row and `vocab_size` columns. ", "_____no_output_____" ] ], [ [ "layer_0 = np.zeros((1,vocab_size))", "_____no_output_____" ] ], [ [ "Run the following cell. It should display `(1, 74074)`", "_____no_output_____" ] ], [ [ "layer_0.shape", "_____no_output_____" ], [ "from IPython.display import Image\nImage(filename='sentiment_network.png')", "_____no_output_____" ] ], [ [ "`layer_0` contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.", "_____no_output_____" ] ], [ [ "# Create a dictionary of words in the vocabulary mapped to index positions \n# (to be used in layer_0)\nword2index = {}\nfor i,word in enumerate(vocab):\n word2index[word] = i\n \n# display the map of words to indices\nword2index", "_____no_output_____" ] ], [ [ "**TODO:** Complete the implementation of `update_input_layer`. 
It should count \n    how many times each word is used in the given review, and then store\n    those counts at the appropriate indices inside `layer_0`.", "_____no_output_____" ] ], [ [ "def update_input_layer(review):\n    \"\"\" Modify the global layer_0 to represent the vector form of review.\n    The element at a given index of layer_0 should represent\n    how many times the given word occurs in the review.\n    Args:\n        review(string) - the string of the review\n    Returns:\n        None\n    \"\"\"\n    \n    global layer_0\n    \n    # clear out previous state, reset the layer to be all 0s\n    layer_0 *= 0\n    \n    # count how many times each word is used in the given review and store the results in layer_0 \n    for word in review.split(\" \"):\n        layer_0[0][word2index[word]] += 1", "_____no_output_____" ] ], [ [ "Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in `layer_0`. ", "_____no_output_____" ] ], [ [ "update_input_layer(reviews[0])\nlayer_0", "_____no_output_____" ] ], [ [ "**TODO:** Complete the implementation of `get_target_for_label`. It should return `0` or `1`, \n    depending on whether the given label is `NEGATIVE` or `POSITIVE`, respectively.", "_____no_output_____" ] ], [ [ "def get_target_for_label(label):\n    \"\"\"Convert a label to `0` or `1`.\n    Args:\n        label(string) - Either \"POSITIVE\" or \"NEGATIVE\".\n    Returns:\n        `0` or `1`.\n    \"\"\"\n    if(label == 'POSITIVE'):\n        return 1\n    else:\n        return 0", "_____no_output_____" ] ], [ [ "Run the following two cells. They should print out `'POSITIVE'` and `1`, respectively.", "_____no_output_____" ] ], [ [ "labels[0]", "_____no_output_____" ], [ "get_target_for_label(labels[0])", "_____no_output_____" ] ], [ [ "Run the following two cells. They should print out `'NEGATIVE'` and `0`, respectively.", "_____no_output_____" ] ], [ [ "labels[1]", "_____no_output_____" ], [ "get_target_for_label(labels[1])", "_____no_output_____" ] ], [ [ "# End of Project 2 solution. \n## Watch the next video to continue with Andrew's next lesson.", "_____no_output_____", "# Project 3: Building a Neural Network<a id='project_3'></a>", "_____no_output_____", "**TODO:** We've included the framework of a class called `SentimentNetwork`. Implement all of the items marked `TODO` in the code. These include doing the following:\n- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer. \n- Do **not** add a non-linearity in the hidden layer. 
That is, do not use an activation function when calculating the hidden layer outputs.\n- Re-use the code from earlier in this notebook to create the training data (see `TODO`s in the code)\n- Implement the `pre_process_data` function to create the vocabulary for our training data generating functions\n- Ensure `train` trains over the entire corpus", "_____no_output_____" ] ], [ [ "review_vocab = set()\nfor review in reviews:\n for word in review.split(\" \"):\n review_vocab.add(word)\n \n", "_____no_output_____" ], [ "label_vocab = set()\nfor label in labels:\n label_vocab.add(label)", "_____no_output_____" ], [ "labels[0].split(' ')\n", "_____no_output_____" ], [ "len(list(review_vocab))", "_____no_output_____" ], [ "len(list(label_vocab))", "_____no_output_____" ] ], [ [ "### Where to Get Help if You Need it\n- Re-watch previous week's Udacity Lectures\n- Chapters 3-5 - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) - (Check inside your classroom for a discount code)", "_____no_output_____" ] ], [ [ "import time\nimport sys\nimport numpy as np\n\n# Encapsulate our neural network in a class\nclass SentimentNetwork:\n def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):\n \"\"\"Create a SentimenNetwork with the given settings\n Args:\n reviews(list) - List of reviews used for training\n labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews\n hidden_nodes(int) - Number of nodes to create in the hidden layer\n learning_rate(float) - Learning rate to use while training\n \n \"\"\"\n # Assign a seed to our random number generator to ensure we get\n # reproducable results during development \n np.random.seed(1)\n\n # process the reviews and their associated labels so that everything\n # is ready for training\n self.pre_process_data(reviews, labels)\n \n # Build the network to have the number of hidden nodes and the learning rate that\n # were passed into this initializer. 
Make the same number of input nodes as\n # there are vocabulary words and create a single output node.\n self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)\n\n def pre_process_data(self, reviews, labels):\n \n # populate review_vocab with all of the words in the given reviews\n review_vocab = set()\n for review in reviews:\n for word in review.split(\" \"):\n review_vocab.add(word)\n\n # Convert the vocabulary set to a list so we can access words via indices\n self.review_vocab = list(review_vocab)\n \n # populate label_vocab with all of the words in the given labels.\n label_vocab = set()\n for label in labels:\n label_vocab.add(label)\n \n # Convert the label vocabulary set to a list so we can access labels via indices\n self.label_vocab = list(label_vocab)\n \n # Store the sizes of the review and label vocabularies.\n self.review_vocab_size = len(self.review_vocab)\n self.label_vocab_size = len(self.label_vocab)\n \n # Create a dictionary of words in the vocabulary mapped to index positions\n self.word2index = {}\n for i, word in enumerate(self.review_vocab):\n self.word2index[word] = i\n \n # Create a dictionary of labels mapped to index positions\n self.label2index = {}\n for i, label in enumerate(self.label_vocab):\n self.label2index[label] = i\n \n def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Store the learning rate\n self.learning_rate = learning_rate\n\n # Initialize weights\n\n # These are the weights between the input layer and the hidden layer.\n self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))\n \n # These are the weights between the hidden layer and the output layer.\n self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n \n # The input layer, a two-dimensional matrix with shape 1 x input_nodes\n self.layer_0 = np.zeros((1,input_nodes))\n \n def update_input_layer(self,review):\n\n # clear out previous state, reset the layer to be all 0s\n self.layer_0 *= 0\n \n for word in review.split(\" \"):\n # NOTE: This if-check was not in the version of this method created in Project 2,\n # and it appears in Andrew's Project 3 solution without explanation. \n # It simply ensures the word is actually a key in word2index before\n # accessing it, which is important because accessing an invalid key\n # with raise an exception in Python. 
This allows us to ignore unknown\n # words encountered in new reviews.\n if(word in self.word2index.keys()):\n self.layer_0[0][self.word2index[word]] += 1\n \n def get_target_for_label(self,label):\n if(label == 'POSITIVE'):\n return 1\n else:\n return 0\n \n def sigmoid(self,x):\n return 1 / (1 + np.exp(-x))\n \n def sigmoid_output_2_derivative(self,output):\n return output * (1 - output)\n \n def train(self, training_reviews, training_labels):\n \n # make sure out we have a matching number of reviews and labels\n assert(len(training_reviews) == len(training_labels))\n \n # Keep track of correct predictions to display accuracy during training \n correct_so_far = 0\n\n # Remember when we started for printing time statistics\n start = time.time()\n \n # loop through all the given reviews and run a forward and backward pass,\n # updating weights for every item\n for i in range(len(training_reviews)):\n \n # Get the next review and its correct label\n review = training_reviews[i]\n label = training_labels[i]\n \n #### Implement the forward pass here ####\n ### Forward pass ###\n\n # Input Layer\n self.update_input_layer(review)\n\n # Hidden layer\n layer_1 = self.layer_0.dot(self.weights_0_1)\n\n # Output layer\n layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # Output error\n layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.\n layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)\n\n # Backpropagated error\n layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer\n layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error\n\n # Update the weights\n self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step\n self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step\n\n # Keep track of correct predictions.\n if(layer_2 >= 0.5 and label == 'POSITIVE'):\n correct_so_far += 1\n elif(layer_2 < 0.5 and label == 'NEGATIVE'):\n correct_so_far += 1\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the training process. \n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(training_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct_so_far) + \" #Trained:\" + str(i+1) \\\n + \" Training Accuracy:\" + str(correct_so_far * 100 / float(i+1))[:4] + \"%\")\n if(i % 2500 == 0):\n print(\"\")\n \n def test(self, testing_reviews, testing_labels):\n \"\"\"\n Attempts to predict the labels for the given testing_reviews,\n and uses the test_labels to calculate the accuracy of those predictions.\n \"\"\"\n \n # keep track of how many correct predictions we make\n correct = 0\n\n # we'll time how many predictions per second we make\n start = time.time()\n\n # Loop through each of the given reviews and call run to predict\n # its label. \n for i in range(len(testing_reviews)):\n pred = self.run(testing_reviews[i])\n if(pred == testing_labels[i]):\n correct += 1\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the prediction process. 
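            # The status string written below begins with '\r' (a carriage return),
            # so each update overwrites the previous one on a single console line.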
\n\n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(testing_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct) + \" #Tested:\" + str(i+1) \\\n + \" Testing Accuracy:\" + str(correct * 100 / float(i+1))[:4] + \"%\")\n \n def run(self, review):\n \"\"\"\n Returns a POSITIVE or NEGATIVE prediction for the given review.\n \"\"\"\n # Run a forward pass through the network, like in the \"train\" function.\n \n # Input Layer\n self.update_input_layer(review.lower())\n\n # Hidden layer\n layer_1 = self.layer_0.dot(self.weights_0_1)\n\n # Output layer\n layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))\n \n # Return POSITIVE for values above greater-than-or-equal-to 0.5 in the output layer;\n # return NEGATIVE for other values\n if(layer_2[0] >= 0.5):\n return \"POSITIVE\"\n else:\n return \"NEGATIVE\"\n ", "_____no_output_____" ] ], [ [ "Run the following cell to create a `SentimentNetwork` that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of `0.1`.", "_____no_output_____" ] ], [ [ "mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)", "_____no_output_____" ] ], [ [ "Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). \n\n**We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.**", "_____no_output_____" ] ], [ [ "#mlp.test(reviews[-1000:],labels[-1000:])", "_____no_output_____" ] ], [ [ "Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.", "_____no_output_____" ] ], [ [ "#mlp.train(reviews[:-1000],labels[:-1000])", "_____no_output_____" ] ], [ [ "That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, `0.01`, and then train the new network.", "_____no_output_____" ] ], [ [ "#mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)\n#mlp.train(reviews[:-1000],labels[:-1000])", "_____no_output_____" ] ], [ [ "That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, `0.001`, and then train the new network.", "_____no_output_____" ] ], [ [ "#mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)\n#mlp.train(reviews[:-1000],labels[:-1000])", "_____no_output_____" ] ], [ [ "With a learning rate of `0.001`, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.", "_____no_output_____" ], [ "# End of Project 3. 
\n## Watch the next video to continue with Andrew's next lesson.", "_____no_output_____" ], [ "# Understanding Neural Noise<a id='lesson_4'></a>", "_____no_output_____" ] ], [ [ "from IPython.display import Image\nImage(filename='sentiment_network.png')", "_____no_output_____" ], [ "def update_input_layer(review):\n \n global layer_0\n \n # clear out previous state, reset the layer to be all 0s\n layer_0 *= 0\n for word in review.split(\" \"):\n layer_0[0][word2index[word]] += 1\n\nupdate_input_layer(reviews[0])", "_____no_output_____" ], [ "layer_0", "_____no_output_____" ], [ "review_counter = Counter()", "_____no_output_____" ], [ "for word in reviews[0].split(\" \"):\n review_counter[word] += 1", "_____no_output_____" ], [ "review_counter.most_common()", "_____no_output_____" ] ], [ [ "# Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>\n\n**TODO:** Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:\n* Copy the `SentimentNetwork` class you created earlier into the following cell.\n* Modify `update_input_layer` so it does not count how many times each word is used, but rather just stores whether or not a word was used. ", "_____no_output_____" ], [ "The following code is the same as the previous project, with project-specific changes marked with `\"New for Project 4\"`", "_____no_output_____" ] ], [ [ "import time\nimport sys\nimport numpy as np\n\n# Encapsulate our neural network in a class\nclass SentimentNetwork:\n def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):\n \"\"\"Create a SentimenNetwork with the given settings\n Args:\n reviews(list) - List of reviews used for training\n labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews\n hidden_nodes(int) - Number of nodes to create in the hidden layer\n learning_rate(float) - Learning rate to use while training\n \n \"\"\"\n # Assign a seed to our random number generator to ensure we get\n # reproducable results during development \n np.random.seed(1)\n\n # process the reviews and their associated labels so that everything\n # is ready for training\n self.pre_process_data(reviews, labels)\n \n # Build the network to have the number of hidden nodes and the learning rate that\n # were passed into this initializer. 
Make the same number of input nodes as\n # there are vocabulary words and create a single output node.\n self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)\n\n def pre_process_data(self, reviews, labels):\n \n # populate review_vocab with all of the words in the given reviews\n review_vocab = set()\n for review in reviews:\n for word in review.split(\" \"):\n review_vocab.add(word)\n\n # Convert the vocabulary set to a list so we can access words via indices\n self.review_vocab = list(review_vocab)\n \n # populate label_vocab with all of the words in the given labels.\n label_vocab = set()\n for label in labels:\n label_vocab.add(label)\n \n # Convert the label vocabulary set to a list so we can access labels via indices\n self.label_vocab = list(label_vocab)\n \n # Store the sizes of the review and label vocabularies.\n self.review_vocab_size = len(self.review_vocab)\n self.label_vocab_size = len(self.label_vocab)\n \n # Create a dictionary of words in the vocabulary mapped to index positions\n self.word2index = {}\n for i, word in enumerate(self.review_vocab):\n self.word2index[word] = i\n \n # Create a dictionary of labels mapped to index positions\n self.label2index = {}\n for i, label in enumerate(self.label_vocab):\n self.label2index[label] = i\n \n def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Store the learning rate\n self.learning_rate = learning_rate\n\n # Initialize weights\n\n # These are the weights between the input layer and the hidden layer.\n self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))\n \n # These are the weights between the hidden layer and the output layer.\n self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n \n # The input layer, a two-dimensional matrix with shape 1 x input_nodes\n self.layer_0 = np.zeros((1,input_nodes))\n \n \n def update_input_layer(self,review):\n\n # clear out previous state, reset the layer to be all 0s\n self.layer_0 *= 0\n \n for word in review.split(\" \"):\n # NOTE: This if-check was not in the version of this method created in Project 2,\n # and it appears in Andrew's Project 3 solution without explanation. \n # It simply ensures the word is actually a key in word2index before\n # accessing it, which is important because accessing an invalid key\n # with raise an exception in Python. 
This allows us to ignore unknown\n # words encountered in new reviews.\n if(word in self.word2index.keys()):\n ## New for Project 4: changed to set to 1 instead of add 1\n self.layer_0[0][self.word2index[word]] = 1\n \n def get_target_for_label(self,label):\n if(label == 'POSITIVE'):\n return 1\n else:\n return 0\n \n def sigmoid(self,x):\n return 1 / (1 + np.exp(-x))\n \n def sigmoid_output_2_derivative(self,output):\n return output * (1 - output)\n \n def train(self, training_reviews, training_labels):\n \n # make sure out we have a matching number of reviews and labels\n assert(len(training_reviews) == len(training_labels))\n \n # Keep track of correct predictions to display accuracy during training \n correct_so_far = 0\n\n # Remember when we started for printing time statistics\n start = time.time()\n \n # loop through all the given reviews and run a forward and backward pass,\n # updating weights for every item\n for i in range(len(training_reviews)):\n \n # Get the next review and its correct label\n review = training_reviews[i]\n label = training_labels[i]\n \n #### Implement the forward pass here ####\n ### Forward pass ###\n\n # Input Layer\n self.update_input_layer(review)\n\n # Hidden layer\n layer_1 = self.layer_0.dot(self.weights_0_1)\n\n # Output layer\n layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # Output error\n layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.\n layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)\n\n # Backpropagated error\n layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer\n layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error\n\n # Update the weights\n self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step\n self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step\n\n # Keep track of correct predictions.\n if(layer_2 >= 0.5 and label == 'POSITIVE'):\n correct_so_far += 1\n elif(layer_2 < 0.5 and label == 'NEGATIVE'):\n correct_so_far += 1\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the training process. \n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(training_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct_so_far) + \" #Trained:\" + str(i+1) \\\n + \" Training Accuracy:\" + str(correct_so_far * 100 / float(i+1))[:4] + \"%\")\n if(i % 2500 == 0):\n print(\"\")\n \n def test(self, testing_reviews, testing_labels):\n \"\"\"\n Attempts to predict the labels for the given testing_reviews,\n and uses the test_labels to calculate the accuracy of those predictions.\n \"\"\"\n \n # keep track of how many correct predictions we make\n correct = 0\n\n # we'll time how many predictions per second we make\n start = time.time()\n\n # Loop through each of the given reviews and call run to predict\n # its label. 
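        # Note that run() lower-cases each review and silently skips any word
        # that never appeared during training, so unseen vocabulary is ignored.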
\n for i in range(len(testing_reviews)):\n pred = self.run(testing_reviews[i])\n if(pred == testing_labels[i]):\n correct += 1\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the prediction process. \n\n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(testing_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct) + \" #Tested:\" + str(i+1) \\\n + \" Testing Accuracy:\" + str(correct * 100 / float(i+1))[:4] + \"%\")\n \n def run(self, review):\n \"\"\"\n Returns a POSITIVE or NEGATIVE prediction for the given review.\n \"\"\"\n # Run a forward pass through the network, like in the \"train\" function.\n \n # Input Layer\n self.update_input_layer(review.lower())\n\n # Hidden layer\n layer_1 = self.layer_0.dot(self.weights_0_1)\n\n # Output layer\n layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))\n \n # Return POSITIVE for values above greater-than-or-equal-to 0.5 in the output layer;\n # return NEGATIVE for other values\n if(layer_2[0] >= 0.5):\n return \"POSITIVE\"\n else:\n return \"NEGATIVE\"\n ", "_____no_output_____" ] ], [ [ "Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of `0.1`.", "_____no_output_____" ] ], [ [ "#mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)\n#mlp.train(reviews[:-1000],labels[:-1000])", "_____no_output_____" ], [ "mlp.test(reviews[-1000:],labels[-1000:])", "\rProgress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Tested:1 Testing Accuracy:100.%\rProgress:0.1% Speed(reviews/sec):37.13 #Correct:1 #Tested:2 Testing Accuracy:50.0%\rProgress:0.2% Speed(reviews/sec):71.61 #Correct:2 #Tested:3 Testing Accuracy:66.6%\rProgress:0.3% Speed(reviews/sec):103.6 #Correct:2 #Tested:4 Testing Accuracy:50.0%\rProgress:0.4% Speed(reviews/sec):129.3 #Correct:3 #Tested:5 Testing Accuracy:60.0%\rProgress:0.5% Speed(reviews/sec):156.6 #Correct:3 #Tested:6 Testing Accuracy:50.0%\rProgress:0.6% Speed(reviews/sec):176.9 #Correct:4 #Tested:7 Testing Accuracy:57.1%\rProgress:0.7% Speed(reviews/sec):194.9 #Correct:4 #Tested:8 Testing Accuracy:50.0%\rProgress:0.8% Speed(reviews/sec):216.7 #Correct:5 #Tested:9 Testing Accuracy:55.5%\rProgress:0.9% Speed(reviews/sec):225.5 #Correct:5 #Tested:10 Testing Accuracy:50.0%\rProgress:1.0% Speed(reviews/sec):233.1 #Correct:6 #Tested:11 Testing Accuracy:54.5%\rProgress:1.1% Speed(reviews/sec):250.6 #Correct:6 #Tested:12 Testing Accuracy:50.0%\rProgress:1.2% Speed(reviews/sec):267.2 #Correct:7 #Tested:13 Testing Accuracy:53.8%\rProgress:1.3% Speed(reviews/sec):275.8 #Correct:7 #Tested:14 Testing Accuracy:50.0%\rProgress:1.4% Speed(reviews/sec):292.4 #Correct:8 #Tested:15 Testing Accuracy:53.3%\rProgress:1.5% Speed(reviews/sec):306.9 #Correct:8 #Tested:16 Testing Accuracy:50.0%\rProgress:1.6% Speed(reviews/sec):308.5 #Correct:9 #Tested:17 Testing Accuracy:52.9%\rProgress:1.7% Speed(reviews/sec):321.6 #Correct:9 #Tested:18 Testing Accuracy:50.0%\rProgress:1.8% Speed(reviews/sec):333.0 #Correct:10 #Tested:19 Testing Accuracy:52.6%\rProgress:1.9% Speed(reviews/sec):339.0 #Correct:10 #Tested:20 Testing Accuracy:50.0%\rProgress:2.0% Speed(reviews/sec):344.5 #Correct:11 #Tested:21 Testing Accuracy:52.3%\rProgress:2.1% Speed(reviews/sec):349.7 #Correct:11 #Tested:22 Testing Accuracy:50.0%\rProgress:2.2% Speed(reviews/sec):354.6 #Correct:12 #Tested:23 
Testing Accuracy:52.1% ... \rProgress:10.4% Speed(reviews/sec):535.4 #Correct:53 #Tested:105 Testing
Accuracy:50.4%\rProgress:10.5% Speed(reviews/sec):537.8 #Correct:53 #Tested:106 Testing Accuracy:50.0%\rProgress:10.6% Speed(reviews/sec):537.5 #Correct:54 #Tested:107 Testing Accuracy:50.4%\rProgress:10.7% Speed(reviews/sec):539.8 #Correct:54 #Tested:108 Testing Accuracy:50.0%\rProgress:10.8% Speed(reviews/sec):542.1 #Correct:55 #Tested:109 Testing Accuracy:50.4%\rProgress:10.9% Speed(reviews/sec):544.3 #Correct:55 #Tested:110 Testing Accuracy:50.0%\rProgress:11.0% Speed(reviews/sec):546.7 #Correct:56 #Tested:111 Testing Accuracy:50.4%\rProgress:11.1% Speed(reviews/sec):548.9 #Correct:56 #Tested:112 Testing Accuracy:50.0%\rProgress:11.2% Speed(reviews/sec):551.2 #Correct:57 #Tested:113 Testing Accuracy:50.4%\rProgress:11.3% Speed(reviews/sec):550.7 #Correct:57 #Tested:114 Testing Accuracy:50.0%\rProgress:11.4% Speed(reviews/sec):552.9 #Correct:58 #Tested:115 Testing Accuracy:50.4%\rProgress:11.5% Speed(reviews/sec):554.3 #Correct:58 #Tested:116 Testing Accuracy:50.0%" ] ], [ [ "# End of Project 4 solution. \n## Watch the next video to continue with Andrew's next lesson.\n# Analyzing Inefficiencies in our Network<a id='lesson_5'></a>", "_____no_output_____" ] ], [ [ "Image(filename='sentiment_network_sparse.png')", "_____no_output_____" ], [ "layer_0 = np.zeros(10)", "_____no_output_____" ], [ "layer_0", "_____no_output_____" ], [ "layer_0[4] = 1\nlayer_0[9] = 1", "_____no_output_____" ], [ "layer_0", "_____no_output_____" ], [ "weights_0_1 = np.random.randn(10,5)", "_____no_output_____" ], [ "layer_0.dot(weights_0_1)", "_____no_output_____" ], [ "indices = [4,9]", "_____no_output_____" ], [ "layer_1 = np.zeros(5)", "_____no_output_____" ], [ "for index in indices:\n layer_1 += (1 * weights_0_1[index])", "_____no_output_____" ], [ "layer_1", "_____no_output_____" ], [ "Image(filename='sentiment_network_sparse_2.png')", "_____no_output_____" ], [ "layer_1 = np.zeros(5)", "_____no_output_____" ], [ "for index in indices:\n layer_1 += (weights_0_1[index])", "_____no_output_____" ], [ "layer_1", "_____no_output_____" ] ], [ [ "# Project 5: Making our Network More Efficient<a id='project_5'></a>\n**TODO:** Make the `SentimentNetwork` class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:\n* Copy the `SentimentNetwork` class from the previous project into the following cell.\n* Remove the `update_input_layer` function - you will not need it in this version.\n* Modify `init_network`:\n>* You no longer need a separate input layer, so remove any mention of `self.layer_0`\n>* You will be dealing with the old hidden layer more directly, so create `self.layer_1`, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero\n* Modify `train`:\n>* Change the name of the input parameter `training_reviews` to `training_reviews_raw`. This will help with the next step.\n>* At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from `word2index`) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local `list` variable named `training_reviews` that should contain a `list` for each review in `training_reviews_raw`. 
Those lists should contain the indices for words found in the review.\n>* Remove call to `update_input_layer`\n>* Use `self`'s `layer_1` instead of a local `layer_1` object.\n>* In the forward pass, replace the code that updates `layer_1` with new logic that only adds the weights for the indices used in the review.\n>* When updating `weights_0_1`, only update the individual weights that were used in the forward pass.\n* Modify `run`:\n>* Remove call to `update_input_layer` \n>* Use `self`'s `layer_1` instead of a local `layer_1` object.\n>* Much like you did in `train`, you will need to pre-process the `review` so you can work with word indices, then update `layer_1` by adding weights for the indices used in the review.", "_____no_output_____" ], [ "The following code is the same as the previous project, with project-specific changes marked with `\"New for Project 5\"`", "_____no_output_____" ] ], [ [ "import time\nimport sys\nimport numpy as np\n\n# Encapsulate our neural network in a class\nclass SentimentNetwork:\n def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):\n \"\"\"Create a SentimenNetwork with the given settings\n Args:\n reviews(list) - List of reviews used for training\n labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews\n hidden_nodes(int) - Number of nodes to create in the hidden layer\n learning_rate(float) - Learning rate to use while training\n \n \"\"\"\n # Assign a seed to our random number generator to ensure we get\n # reproducable results during development \n np.random.seed(1)\n\n # process the reviews and their associated labels so that everything\n # is ready for training\n self.pre_process_data(reviews, labels)\n \n # Build the network to have the number of hidden nodes and the learning rate that\n # were passed into this initializer. 
Make the same number of input nodes as\n # there are vocabulary words and create a single output node.\n self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)\n\n def pre_process_data(self, reviews, labels):\n \n # populate review_vocab with all of the words in the given reviews\n review_vocab = set()\n for review in reviews:\n for word in review.split(\" \"):\n review_vocab.add(word)\n\n # Convert the vocabulary set to a list so we can access words via indices\n self.review_vocab = list(review_vocab)\n \n # populate label_vocab with all of the words in the given labels.\n label_vocab = set()\n for label in labels:\n label_vocab.add(label)\n \n # Convert the label vocabulary set to a list so we can access labels via indices\n self.label_vocab = list(label_vocab)\n \n # Store the sizes of the review and label vocabularies.\n self.review_vocab_size = len(self.review_vocab)\n self.label_vocab_size = len(self.label_vocab)\n \n # Create a dictionary of words in the vocabulary mapped to index positions\n self.word2index = {}\n for i, word in enumerate(self.review_vocab):\n self.word2index[word] = i\n \n # Create a dictionary of labels mapped to index positions\n self.label2index = {}\n for i, label in enumerate(self.label_vocab):\n self.label2index[label] = i\n\n def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Store the learning rate\n self.learning_rate = learning_rate\n\n # Initialize weights\n\n # These are the weights between the input layer and the hidden layer.\n self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))\n\n # These are the weights between the hidden layer and the output layer.\n self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n \n ## New for Project 5: Removed self.layer_0; added self.layer_1\n # The input layer, a two-dimensional matrix with shape 1 x hidden_nodes\n self.layer_1 = np.zeros((1,hidden_nodes))\n \n ## New for Project 5: Removed update_input_layer function\n \n def get_target_for_label(self,label):\n if(label == 'POSITIVE'):\n return 1\n else:\n return 0\n \n def sigmoid(self,x):\n return 1 / (1 + np.exp(-x))\n \n def sigmoid_output_2_derivative(self,output):\n return output * (1 - output)\n \n ## New for Project 5: changed name of first parameter form 'training_reviews' \n # to 'training_reviews_raw'\n def train(self, training_reviews_raw, training_labels):\n\n ## New for Project 5: pre-process training reviews so we can deal \n # directly with the indices of non-zero inputs\n training_reviews = list()\n for review in training_reviews_raw:\n indices = set()\n for word in review.split(\" \"):\n if(word in self.word2index.keys()):\n indices.add(self.word2index[word])\n training_reviews.append(list(indices))\n\n # make sure out we have a matching number of reviews and labels\n assert(len(training_reviews) == len(training_labels))\n \n # Keep track of correct predictions to display accuracy during training \n correct_so_far = 0\n\n # Remember when we started for printing time statistics\n start = time.time()\n \n # loop through all the given reviews and run a forward and backward pass,\n # updating weights for every item\n for i in range(len(training_reviews)):\n \n # Get the next review and its correct label\n review = training_reviews[i]\n label = training_labels[i]\n \n #### 
Implement the forward pass here ####\n ### Forward pass ###\n\n ## New for Project 5: Removed call to 'update_input_layer' function\n # because 'layer_0' is no longer used\n\n # Hidden layer\n ## New for Project 5: Add in only the weights for non-zero items\n self.layer_1 *= 0\n for index in review:\n self.layer_1 += self.weights_0_1[index]\n\n # Output layer\n ## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'\n layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2)) \n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # Output error\n layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.\n layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)\n\n # Backpropagated error\n layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer\n layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error\n\n # Update the weights\n ## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'\n self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step\n \n ## New for Project 5: Only update the weights that were used in the forward pass\n for index in review:\n self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step\n\n # Keep track of correct predictions.\n if(layer_2 >= 0.5 and label == 'POSITIVE'):\n correct_so_far += 1\n elif(layer_2 < 0.5 and label == 'NEGATIVE'):\n correct_so_far += 1\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the training process. \n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(training_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct_so_far) + \" #Trained:\" + str(i+1) \\\n + \" Training Accuracy:\" + str(correct_so_far * 100 / float(i+1))[:4] + \"%\")\n if(i % 2500 == 0):\n print(\"\")\n \n def test(self, testing_reviews, testing_labels):\n \"\"\"\n Attempts to predict the labels for the given testing_reviews,\n and uses the test_labels to calculate the accuracy of those predictions.\n \"\"\"\n \n # keep track of how many correct predictions we make\n correct = 0\n\n # we'll time how many predictions per second we make\n start = time.time()\n\n # Loop through each of the given reviews and call run to predict\n # its label. \n for i in range(len(testing_reviews)):\n pred = self.run(testing_reviews[i])\n if(pred == testing_labels[i]):\n correct += 1\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the prediction process. 
\n\n            elapsed_time = float(time.time() - start)\n            reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n            \n            sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(testing_reviews)))[:4] \\\n                             + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n                             + \" #Correct:\" + str(correct) + \" #Tested:\" + str(i+1) \\\n                             + \" Testing Accuracy:\" + str(correct * 100 / float(i+1))[:4] + \"%\")\n    \n    def run(self, review):\n        \"\"\"\n        Returns a POSITIVE or NEGATIVE prediction for the given review.\n        \"\"\"\n        # Run a forward pass through the network, like in the \"train\" function.\n        \n        ## New for Project 5: Removed call to update_input_layer function\n        #                     because layer_0 is no longer used\n\n        # Hidden layer\n        ## New for Project 5: Identify the indices used in the review and then add\n        #                     just those weights to layer_1 \n        self.layer_1 *= 0\n        unique_indices = set()\n        for word in review.lower().split(\" \"):\n            if word in self.word2index.keys():\n                unique_indices.add(self.word2index[word])\n        for index in unique_indices:\n            self.layer_1 += self.weights_0_1[index]\n        \n        # Output layer\n        ## New for Project 5: changed to use self.layer_1 instead of local layer_1\n        layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))\n        \n        # Return POSITIVE for values greater than or equal to 0.5 in the output layer;\n        # return NEGATIVE for other values\n        if(layer_2[0] >= 0.5):\n            return \"POSITIVE\"\n        else:\n            return \"NEGATIVE\"\n", "_____no_output_____" ] ], [ [ "Run the following cell to recreate the network and train it once again.", "_____no_output_____" ] ], [ [ "mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)\nmlp.train(reviews[:-1000],labels[:-1000])", "Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%\nProgress:10.4% Speed(reviews/sec):1110. #Correct:1802 #Trained:2501 Training Accuracy:72.0%\nProgress:20.8% Speed(reviews/sec):1190. #Correct:3802 #Trained:5001 Training Accuracy:76.0%\nProgress:31.2% Speed(reviews/sec):1257. #Correct:5899 #Trained:7501 Training Accuracy:78.6%\nProgress:41.6% Speed(reviews/sec):1174. #Correct:8035 #Trained:10001 Training Accuracy:80.3%\nProgress:52.0% Speed(reviews/sec):1077. #Correct:10171 #Trained:12501 Training Accuracy:81.3%\nProgress:62.5% Speed(reviews/sec):1014. #Correct:12311 #Trained:15001 Training Accuracy:82.0%\nProgress:72.9% Speed(reviews/sec):998.2 #Correct:14436 #Trained:17501 Training Accuracy:82.4%\nProgress:83.3% Speed(reviews/sec):943.8 #Correct:16618 #Trained:20001 Training Accuracy:83.0%\nProgress:93.7% Speed(reviews/sec):890.4 #Correct:18791 #Trained:22501 Training Accuracy:83.5%\nProgress:99.9% Speed(reviews/sec):856.8 #Correct:20110 #Trained:24000 Training Accuracy:83.7%" ] ], [ [ "That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.", "_____no_output_____" ] ], [ [ "mlp.test(reviews[-1000:],labels[-1000:])", "Progress:99.9% Speed(reviews/sec):907.5 #Correct:851 #Tested:1000 Testing Accuracy:85.1%" ] ], [ [ "# End of Project 5 solution. 
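\n\n*(Why the Project 5 version is fast: for a bag-of-words input, multiplying a mostly-zero `layer_0` by `weights_0_1` is equivalent to summing the weight rows of the words that actually appear, so all of the multiply-by-zero work can be skipped. A minimal self-contained sketch of that equivalence; the sizes here are made up for illustration and are not the class's actual data:)*\n\n```python\nimport numpy as np\n\nvocab_size, hidden_nodes = 5, 3\nweights_0_1 = np.random.randn(vocab_size, hidden_nodes)\nword_indices = {0, 2, 4}   # unique indices of the words present in one review\n\n# Dense version: build the full input vector, then multiply\nlayer_0 = np.zeros((1, vocab_size))\nfor idx in word_indices:\n    layer_0[0, idx] = 1\ndense_hidden = layer_0.dot(weights_0_1)\n\n# Sparse version: skip the zeros and just sum the relevant weight rows\nsparse_hidden = np.zeros((1, hidden_nodes))\nfor idx in word_indices:\n    sparse_hidden += weights_0_1[idx]\n\nassert np.allclose(dense_hidden, sparse_hidden)\n```\n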
\n## Watch the next video to continue with Andrew's next lesson.\n# Further Noise Reduction<a id='lesson_6'></a>", "_____no_output_____" ] ], [ [ "Image(filename='sentiment_network_sparse_2.png')", "_____no_output_____" ], [ "# words most frequently seen in a review with a \"POSITIVE\" label\npos_neg_ratios.most_common()", "_____no_output_____" ], [ "# words most frequently seen in a review with a \"NEGATIVE\" label\nlist(reversed(pos_neg_ratios.most_common()))[0:30]", "_____no_output_____" ], [ "from bokeh.models import ColumnDataSource, LabelSet\nfrom bokeh.plotting import figure, show, output_file\nfrom bokeh.io import output_notebook\noutput_notebook()", "_____no_output_____" ], [ "hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)\n\np = figure(tools=\"pan,wheel_zoom,reset,save\",\n           toolbar_location=\"above\",\n           title=\"Word Positive/Negative Affinity Distribution\")\np.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color=\"#555555\")\nshow(p)", "_____no_output_____" ], [ "frequency_frequency = Counter()\n\nfor word, cnt in total_counts.most_common():\n    frequency_frequency[cnt] += 1", "_____no_output_____" ], [ "hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)\n\np = figure(tools=\"pan,wheel_zoom,reset,save\",\n           toolbar_location=\"above\",\n           title=\"The frequency distribution of the words in our corpus\")\np.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color=\"#555555\")\nshow(p)", "_____no_output_____" ] ], [ [ "# Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>\n\n**TODO:** Improve `SentimentNetwork`'s performance by reducing more noise in the vocabulary. Specifically, do the following:\n* Copy the `SentimentNetwork` class from the previous project into the following cell.\n* Modify `pre_process_data`:\n>* Add two additional parameters: `min_count` and `polarity_cutoff`\n>* Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)\n>* Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. 
\n>* Change so words are only added to the vocabulary if they occur in the reviews more than `min_count` times.\n>* Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least `polarity_cutoff`\n* Modify `__init__`:\n>* Add the same two parameters (`min_count` and `polarity_cutoff`) and use them when you call `pre_process_data`", "_____no_output_____" ], [ "The following code is the same as the previous project, with project-specific changes marked with `\"New for Project 6\"`", "_____no_output_____" ] ], [ [ "import time\nimport sys\nimport numpy as np\n\n# Encapsulate our neural network in a class\nclass SentimentNetwork:\n    ## New for Project 6: added min_count and polarity_cutoff parameters\n    def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):\n        \"\"\"Create a SentimentNetwork with the given settings\n        Args:\n            reviews(list) - List of reviews used for training\n            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews\n            min_count(int) - Words should only be added to the vocabulary \n                             if they occur more than this many times\n            polarity_cutoff(float) - The absolute value of a word's positive-to-negative\n                                     ratio must be at least this big to be considered.\n            hidden_nodes(int) - Number of nodes to create in the hidden layer\n            learning_rate(float) - Learning rate to use while training\n        \n        \"\"\"\n        # Assign a seed to our random number generator to ensure we get\n        # reproducible results during development \n        np.random.seed(1)\n\n        # process the reviews and their associated labels so that everything\n        # is ready for training\n        ## New for Project 6: added min_count and polarity_cutoff arguments to pre_process_data call\n        self.pre_process_data(reviews, labels, polarity_cutoff, min_count)\n        \n        # Build the network to have the number of hidden nodes and the learning rate that\n        # were passed into this initializer. 
Make the same number of input nodes as\n # there are vocabulary words and create a single output node.\n self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)\n\n ## New for Project 6: added min_count and polarity_cutoff parameters\n def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):\n \n ## ----------------------------------------\n ## New for Project 6: Calculate positive-to-negative ratios for words before\n # building vocabulary\n #\n positive_counts = Counter()\n negative_counts = Counter()\n total_counts = Counter()\n\n for i in range(len(reviews)):\n if(labels[i] == 'POSITIVE'):\n for word in reviews[i].split(\" \"):\n positive_counts[word] += 1\n total_counts[word] += 1\n else:\n for word in reviews[i].split(\" \"):\n negative_counts[word] += 1\n total_counts[word] += 1\n\n pos_neg_ratios = Counter()\n\n for term,cnt in list(total_counts.most_common()):\n if(cnt >= 50):\n pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)\n pos_neg_ratios[term] = pos_neg_ratio\n\n for word,ratio in pos_neg_ratios.most_common():\n if(ratio > 1):\n pos_neg_ratios[word] = np.log(ratio)\n else:\n pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))\n #\n ## end New for Project 6\n ## ----------------------------------------\n\n # populate review_vocab with all of the words in the given reviews\n review_vocab = set()\n for review in reviews:\n for word in review.split(\" \"):\n ## New for Project 6: only add words that occur at least min_count times\n # and for words with pos/neg ratios, only add words\n # that meet the polarity_cutoff\n if(total_counts[word] > min_count):\n if(word in pos_neg_ratios.keys()):\n if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):\n review_vocab.add(word)\n else:\n review_vocab.add(word)\n\n # Convert the vocabulary set to a list so we can access words via indices\n self.review_vocab = list(review_vocab)\n \n # populate label_vocab with all of the words in the given labels.\n label_vocab = set()\n for label in labels:\n label_vocab.add(label)\n \n # Convert the label vocabulary set to a list so we can access labels via indices\n self.label_vocab = list(label_vocab)\n \n # Store the sizes of the review and label vocabularies.\n self.review_vocab_size = len(self.review_vocab)\n self.label_vocab_size = len(self.label_vocab)\n \n # Create a dictionary of words in the vocabulary mapped to index positions\n self.word2index = {}\n for i, word in enumerate(self.review_vocab):\n self.word2index[word] = i\n \n # Create a dictionary of labels mapped to index positions\n self.label2index = {}\n for i, label in enumerate(self.label_vocab):\n self.label2index[label] = i\n\n def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Store the learning rate\n self.learning_rate = learning_rate\n\n # Initialize weights\n\n # These are the weights between the input layer and the hidden layer.\n self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))\n\n # These are the weights between the hidden layer and the output layer.\n self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n \n ## New for Project 5: Removed self.layer_0; added self.layer_1\n # The input layer, a two-dimensional matrix with shape 1 x hidden_nodes\n self.layer_1 = 
np.zeros((1,hidden_nodes))\n    \n    ## New for Project 5: Removed update_input_layer function\n    \n    def get_target_for_label(self,label):\n        if(label == 'POSITIVE'):\n            return 1\n        else:\n            return 0\n        \n    def sigmoid(self,x):\n        return 1 / (1 + np.exp(-x))\n    \n    def sigmoid_output_2_derivative(self,output):\n        return output * (1 - output)\n    \n    ## New for Project 5: changed name of first parameter from 'training_reviews' \n    #                     to 'training_reviews_raw'\n    def train(self, training_reviews_raw, training_labels):\n\n        ## New for Project 5: pre-process training reviews so we can deal \n        #                     directly with the indices of non-zero inputs\n        training_reviews = list()\n        for review in training_reviews_raw:\n            indices = set()\n            for word in review.split(\" \"):\n                if(word in self.word2index.keys()):\n                    indices.add(self.word2index[word])\n            training_reviews.append(list(indices))\n\n        # make sure we have a matching number of reviews and labels\n        assert(len(training_reviews) == len(training_labels))\n        \n        # Keep track of correct predictions to display accuracy during training \n        correct_so_far = 0\n\n        # Remember when we started for printing time statistics\n        start = time.time()\n        \n        # loop through all the given reviews and run a forward and backward pass,\n        # updating weights for every item\n        for i in range(len(training_reviews)):\n            \n            # Get the next review and its correct label\n            review = training_reviews[i]\n            label = training_labels[i]\n            \n            #### Implement the forward pass here ####\n            ### Forward pass ###\n\n            ## New for Project 5: Removed call to 'update_input_layer' function\n            #                     because 'layer_0' is no longer used\n\n            # Hidden layer\n            ## New for Project 5: Add in only the weights for non-zero items\n            self.layer_1 *= 0\n            for index in review:\n                self.layer_1 += self.weights_0_1[index]\n\n            # Output layer\n            ## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'\n            layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2)) \n            \n            #### Implement the backward pass here ####\n            ### Backward pass ###\n\n            # Output error\n            layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.\n            layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)\n\n            # Backpropagated error\n            layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer\n            layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error\n\n            # Update the weights\n            ## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'\n            self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step\n            \n            ## New for Project 5: Only update the weights that were used in the forward pass\n            for index in review:\n                self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step\n\n            # Keep track of correct predictions.\n            if(layer_2 >= 0.5 and label == 'POSITIVE'):\n                correct_so_far += 1\n            elif(layer_2 < 0.5 and label == 'NEGATIVE'):\n                correct_so_far += 1\n            \n            # For debug purposes, print out our prediction accuracy and speed \n            # throughout the training process. 
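\n            # The readout below is purely diagnostic: it estimates reviews/sec from\n            # the elapsed wall-clock time and rewrites one status line via '\\r';\n            # it has no effect on the learned weights.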
\n            elapsed_time = float(time.time() - start)\n            reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n            \n            sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(training_reviews)))[:4] \\\n                             + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n                             + \" #Correct:\" + str(correct_so_far) + \" #Trained:\" + str(i+1) \\\n                             + \" Training Accuracy:\" + str(correct_so_far * 100 / float(i+1))[:4] + \"%\")\n            if(i % 2500 == 0):\n                print(\"\")\n    \n    def test(self, testing_reviews, testing_labels):\n        \"\"\"\n        Attempts to predict the labels for the given testing_reviews,\n        and uses the testing_labels to calculate the accuracy of those predictions.\n        \"\"\"\n        \n        # keep track of how many correct predictions we make\n        correct = 0\n\n        # we'll time how many predictions per second we make\n        start = time.time()\n\n        # Loop through each of the given reviews and call run to predict\n        # its label. \n        for i in range(len(testing_reviews)):\n            pred = self.run(testing_reviews[i])\n            if(pred == testing_labels[i]):\n                correct += 1\n            \n            # For debug purposes, print out our prediction accuracy and speed \n            # throughout the prediction process. \n\n            elapsed_time = float(time.time() - start)\n            reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n            \n            sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(testing_reviews)))[:4] \\\n                             + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n                             + \" #Correct:\" + str(correct) + \" #Tested:\" + str(i+1) \\\n                             + \" Testing Accuracy:\" + str(correct * 100 / float(i+1))[:4] + \"%\")\n    \n    def run(self, review):\n        \"\"\"\n        Returns a POSITIVE or NEGATIVE prediction for the given review.\n        \"\"\"\n        # Run a forward pass through the network, like in the \"train\" function.\n        \n        ## New for Project 5: Removed call to update_input_layer function\n        #                     because layer_0 is no longer used\n\n        # Hidden layer\n        ## New for Project 5: Identify the indices used in the review and then add\n        #                     just those weights to layer_1 \n        self.layer_1 *= 0\n        unique_indices = set()\n        for word in review.lower().split(\" \"):\n            if word in self.word2index.keys():\n                unique_indices.add(self.word2index[word])\n        for index in unique_indices:\n            self.layer_1 += self.weights_0_1[index]\n        \n        # Output layer\n        ## New for Project 5: changed to use self.layer_1 instead of local layer_1\n        layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))\n        \n        # Return POSITIVE for values greater than or equal to 0.5 in the output layer;\n        # return NEGATIVE for other values\n        if(layer_2[0] >= 0.5):\n            return \"POSITIVE\"\n        else:\n            return \"NEGATIVE\"\n", "_____no_output_____" ] ], [ [ "Run the following cell to train your network with a small polarity cutoff.", "_____no_output_____" ] ], [ [ "mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)\nmlp.train(reviews[:-1000],labels[:-1000])", "_____no_output_____" ] ], [ [ "And run the following cell to test its performance.", "_____no_output_____" ] ], [ [ "mlp.test(reviews[-1000:],labels[-1000:])", "_____no_output_____" ] ], [ [ "Run the following cell to train your network with a much larger polarity cutoff.", "_____no_output_____" ] ], [ [ "mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)\nmlp.train(reviews[:-1000],labels[:-1000])", "_____no_output_____" ] ], [ [ "And run the following cell to test its performance.", "_____no_output_____" ] ], [ [ "mlp.test(reviews[-1000:],labels[-1000:])", "_____no_output_____" ] ], [ [ "# End of Project 6 
solution. \n## Watch the next video to continue with Andrew's next lesson.\n# Analysis: What's Going on in the Weights?<a id='lesson_7'></a>", "_____no_output_____" ] ], [ [ "mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)", "_____no_output_____" ], [ "mlp_full.train(reviews[:-1000],labels[:-1000])", "_____no_output_____" ], [ "Image(filename='sentiment_network_sparse.png')", "_____no_output_____" ], [ "def get_most_similar_words(focus = \"horrible\"):\n most_similar = Counter()\n\n for word in mlp_full.word2index.keys():\n most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])\n \n return most_similar.most_common()", "_____no_output_____" ], [ "get_most_similar_words(\"excellent\")", "_____no_output_____" ], [ "get_most_similar_words(\"terrible\")", "_____no_output_____" ], [ "import matplotlib.colors as colors\n\nwords_to_visualize = list()\nfor word, ratio in pos_neg_ratios.most_common(500):\n if(word in mlp_full.word2index.keys()):\n words_to_visualize.append(word)\n \nfor word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:\n if(word in mlp_full.word2index.keys()):\n words_to_visualize.append(word)", "_____no_output_____" ], [ "pos = 0\nneg = 0\n\ncolors_list = list()\nvectors_list = list()\nfor word in words_to_visualize:\n if word in pos_neg_ratios.keys():\n vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])\n if(pos_neg_ratios[word] > 0):\n pos+=1\n colors_list.append(\"#00ff00\")\n else:\n neg+=1\n colors_list.append(\"#000000\")\n ", "_____no_output_____" ], [ "from sklearn.manifold import TSNE\ntsne = TSNE(n_components=2, random_state=0)\nwords_top_ted_tsne = tsne.fit_transform(vectors_list)", "_____no_output_____" ], [ "p = figure(tools=\"pan,wheel_zoom,reset,save\",\n toolbar_location=\"above\",\n title=\"vector T-SNE for most polarized words\")\n\nsource = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],\n x2=words_top_ted_tsne[:,1],\n names=words_to_visualize,\n color=colors_list))\n\np.scatter(x=\"x1\", y=\"x2\", size=8, source=source, fill_color=\"color\")\n\nword_labels = LabelSet(x=\"x1\", y=\"x2\", text=\"names\", y_offset=6,\n text_font_size=\"8pt\", text_color=\"#555555\",\n source=source, text_align='center')\np.add_layout(word_labels)\n\nshow(p)\n\n# green indicates positive words, black indicates negative words", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ece272a8d110c567963d475fc0e0a3ca3d4549f3
9,738
ipynb
Jupyter Notebook
snippets.ipynb
Hassanalihazaraa/pandas
a4510728fa9b90f4efa8a17f26d704d961a6e5b0
[ "MIT" ]
null
null
null
snippets.ipynb
Hassanalihazaraa/pandas
a4510728fa9b90f4efa8a17f26d704d961a6e5b0
[ "MIT" ]
null
null
null
snippets.ipynb
Hassanalihazaraa/pandas
a4510728fa9b90f4efa8a17f26d704d961a6e5b0
[ "MIT" ]
null
null
null
23.46506
95
0.380058
[ [ [ "people = {\n \"first\": [\"Corey\", 'Jane', 'John'], \n \"last\": [\"Schafer\", 'Doe', 'Doe'], \n \"email\": [\"[email protected]\", '[email protected]', '[email protected]']\n}", "_____no_output_____" ], [ "people['email']", "_____no_output_____" ], [ "import pandas as pd", "_____no_output_____" ], [ "df = pd.DataFrame(people)", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "df['email']", "_____no_output_____" ], [ "df[['last', 'email']]", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ], [ "df.iloc[[0, 1], 2]", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "df.loc[[0, 1], ['email', 'last']]", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ece2773c35ba53504cf0dd2a556ba984d6952ba5
432,204
ipynb
Jupyter Notebook
Deepfake-ResNetv2.ipynb
wangyoucao/FDFtNet
b712f4d38c6de46d6cb767095c7b1608d6fa2bec
[ "MIT" ]
null
null
null
Deepfake-ResNetv2.ipynb
wangyoucao/FDFtNet
b712f4d38c6de46d6cb767095c7b1608d6fa2bec
[ "MIT" ]
null
null
null
Deepfake-ResNetv2.ipynb
wangyoucao/FDFtNet
b712f4d38c6de46d6cb767095c7b1608d6fa2bec
[ "MIT" ]
1
2021-05-03T02:04:19.000Z
2021-05-03T02:04:19.000Z
93.957391
217
0.630466
[ [ [ "import numpy as np\nimport pandas as pd\nimport os\nimport tensorflow as tf\nfrom keras import backend as K\n# os.environ[\"CUDA_VISIBLE_DEVICES\"] = '1'\nconfig = tf.ConfigProto()\nconfig.gpu_options.allow_growth = True\nsess = tf.Session(config=config)\nK.set_session(sess)\nfrom keras.regularizers import l2\nfrom keras.layers import Input, Dense, Flatten, GlobalAveragePooling2D, Activation, Conv2D, MaxPooling2D, BatchNormalization, Lambda, Dropout\nfrom keras.layers import SeparableConv2D, Add, Convolution2D, concatenate, Layer, ReLU, DepthwiseConv2D, Reshape, Multiply, InputSpec\nfrom keras.models import Model, load_model, model_from_json\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras.optimizers import Adam, SGD\nfrom keras.callbacks import EarlyStopping, ReduceLROnPlateau\nfrom keras.utils import to_categorical\nfrom sklearn import metrics\nfrom sklearn.metrics import roc_curve, roc_auc_score, confusion_matrix, classification_report\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import brentq\nfrom scipy.interpolate import interp1d\nimport glob\nfrom PIL import Image\nfrom tqdm import tqdm, trange\nimport random\nfrom keras.applications import Xception, ResNet152\nfrom PIL import ImageFile\nImageFile.LOAD_TRUNCATED_IMAGES = True\nimport cv2", "Using TensorFlow backend.\n" ], [ "nb_classes = 2 # number of classes\nimg_width, img_height = 64, 64 # change based on the shape/structure of your images\nbatch_size = 64 # try 4, 8, 16, 32, 64, 128, 256 dependent on CPU/GPU memory capacity (powers of 2 values).\nnb_epoch = 300 # number of iteration the algorithm gets trained.", "_____no_output_____" ], [ "def bgr(img):\n return cv2.cvtColor(img, cv2.COLOR_BGR2RGB)", "_____no_output_____" ], [ "train_dir = '/mnt/a/fakedata/deepfake/train'\nvalidation_dir = '/mnt/a/fakedata/deepfake/val'\ntest50_dir = '/mnt/a/fakedata/deepfake/test'", "_____no_output_____" ], [ "def res_block(x, in_planes, out_planes, bottleneck_ratio=4, strides=1):\n bottleneck_planes = in_planes // bottleneck_ratio\n out = BatchNormalization()(x)\n out = Activation('relu')(out)\n if strides == 2 or in_planes != out_planes:\n x_res = Conv2D(out_planes, kernel_size=1, strides=strides, use_bias=False)(x)\n else:\n x_res = x\n out = Conv2D(bottleneck_planes, kernel_size=1, strides=1, use_bias=False)(out)\n \n out = BatchNormalization()(out)\n out = Activation('relu')(out)\n out = Conv2D(bottleneck_planes, kernel_size=3, padding='same', strides=strides, use_bias=False)(out)\n \n out = BatchNormalization()(out)\n out = Activation('relu')(out)\n out = Conv2D(out_planes, kernel_size=1, strides=1, use_bias=False)(out)\n out = Add()([out, x_res])\n return out", "_____no_output_____" ], [ "img_input = Input(shape=[img_width, img_height, 3])\nx = Conv2D(16, kernel_size=7, strides=2, padding='same', use_bias=False)(img_input)\nx = BatchNormalization()(x)\nx = Activation('relu')(x)\n\nx = res_block(x, 16, 64, strides=1)\nfor i in range(49):\n x = res_block(x, 64, 64, strides=1)\n\nx = res_block(x, 64, 256, strides=2)\nfor i in range(49):\n x = res_block(x, 256, 256)\n\nx = res_block(x, 256, 512, strides=2)\nfor i in range(49):\n x = res_block(x, 512, 512)\n\nx_gap = GlobalAveragePooling2D()(x)\nx_dense = Dense(nb_classes)(x_gap)\nx_sm = Activation('softmax')(x_dense)\n\nmodel = Model(img_input, x_sm)\nmodel.summary()", "Model: \"model_1\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to 
\n==================================================================================================\ninput_1 (InputLayer) (None, 64, 64, 3) 0 \n__________________________________________________________________________________________________\nconv2d_1 (Conv2D) (None, 32, 32, 16) 2352 input_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_1 (BatchNor (None, 32, 32, 16) 64 conv2d_1[0][0] \n__________________________________________________________________________________________________\nactivation_1 (Activation) (None, 32, 32, 16) 0 batch_normalization_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_2 (BatchNor (None, 32, 32, 16) 64 activation_1[0][0] \n__________________________________________________________________________________________________\nactivation_2 (Activation) (None, 32, 32, 16) 0 batch_normalization_2[0][0] \n__________________________________________________________________________________________________\nconv2d_3 (Conv2D) (None, 32, 32, 4) 64 activation_2[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_3 (BatchNor (None, 32, 32, 4) 16 conv2d_3[0][0] \n__________________________________________________________________________________________________\nactivation_3 (Activation) (None, 32, 32, 4) 0 batch_normalization_3[0][0] \n__________________________________________________________________________________________________\nconv2d_4 (Conv2D) (None, 32, 32, 4) 144 activation_3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_4 (BatchNor (None, 32, 32, 4) 16 conv2d_4[0][0] \n__________________________________________________________________________________________________\nactivation_4 (Activation) (None, 32, 32, 4) 0 batch_normalization_4[0][0] \n__________________________________________________________________________________________________\nconv2d_5 (Conv2D) (None, 32, 32, 64) 256 activation_4[0][0] \n__________________________________________________________________________________________________\nconv2d_2 (Conv2D) (None, 32, 32, 64) 1024 activation_1[0][0] \n__________________________________________________________________________________________________\nadd_1 (Add) (None, 32, 32, 64) 0 conv2d_5[0][0] \n conv2d_2[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_5 (BatchNor (None, 32, 32, 64) 256 add_1[0][0] \n__________________________________________________________________________________________________\nactivation_5 (Activation) (None, 32, 32, 64) 0 batch_normalization_5[0][0] \n__________________________________________________________________________________________________\nconv2d_6 (Conv2D) (None, 32, 32, 16) 1024 activation_5[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_6 (BatchNor (None, 32, 32, 16) 64 conv2d_6[0][0] \n__________________________________________________________________________________________________\nactivation_6 (Activation) (None, 32, 32, 16) 0 batch_normalization_6[0][0] \n__________________________________________________________________________________________________\nconv2d_7 (Conv2D) (None, 32, 32, 16) 2304 activation_6[0][0] 
\n__________________________________________________________________________________________________\nbatch_normalization_7 (BatchNor (None, 32, 32, 16) 64 conv2d_7[0][0] \n__________________________________________________________________________________________________\nactivation_7 (Activation) (None, 32, 32, 16) 0 batch_normalization_7[0][0] \n__________________________________________________________________________________________________\nconv2d_8 (Conv2D) (None, 32, 32, 64) 1024 activation_7[0][0] \n__________________________________________________________________________________________________\nadd_2 (Add) (None, 32, 32, 64) 0 conv2d_8[0][0] \n add_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_8 (BatchNor (None, 32, 32, 64) 256 add_2[0][0] \n__________________________________________________________________________________________________\nactivation_8 (Activation) (None, 32, 32, 64) 0 batch_normalization_8[0][0] \n__________________________________________________________________________________________________\nconv2d_9 (Conv2D) (None, 32, 32, 16) 1024 activation_8[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_9 (BatchNor (None, 32, 32, 16) 64 conv2d_9[0][0] \n__________________________________________________________________________________________________\nactivation_9 (Activation) (None, 32, 32, 16) 0 batch_normalization_9[0][0] \n__________________________________________________________________________________________________\nconv2d_10 (Conv2D) (None, 32, 32, 16) 2304 activation_9[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_10 (BatchNo (None, 32, 32, 16) 64 conv2d_10[0][0] \n__________________________________________________________________________________________________\nactivation_10 (Activation) (None, 32, 32, 16) 0 batch_normalization_10[0][0] \n__________________________________________________________________________________________________\nconv2d_11 (Conv2D) (None, 32, 32, 64) 1024 activation_10[0][0] \n__________________________________________________________________________________________________\nadd_3 (Add) (None, 32, 32, 64) 0 conv2d_11[0][0] \n add_2[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_11 (BatchNo (None, 32, 32, 64) 256 add_3[0][0] \n__________________________________________________________________________________________________\nactivation_11 (Activation) (None, 32, 32, 64) 0 batch_normalization_11[0][0] \n__________________________________________________________________________________________________\nconv2d_12 (Conv2D) (None, 32, 32, 16) 1024 activation_11[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_12 (BatchNo (None, 32, 32, 16) 64 conv2d_12[0][0] \n__________________________________________________________________________________________________\nactivation_12 (Activation) (None, 32, 32, 16) 0 batch_normalization_12[0][0] \n__________________________________________________________________________________________________\nconv2d_13 (Conv2D) (None, 32, 32, 16) 2304 activation_12[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_13 (BatchNo (None, 32, 
32, 16) 64 conv2d_13[0][0] \n__________________________________________________________________________________________________\nactivation_13 (Activation) (None, 32, 32, 16) 0 batch_normalization_13[0][0] \n__________________________________________________________________________________________________\nconv2d_14 (Conv2D) (None, 32, 32, 64) 1024 activation_13[0][0] \n__________________________________________________________________________________________________\nadd_4 (Add) (None, 32, 32, 64) 0 conv2d_14[0][0] \n add_3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_14 (BatchNo (None, 32, 32, 64) 256 add_4[0][0] \n__________________________________________________________________________________________________\nactivation_14 (Activation) (None, 32, 32, 64) 0 batch_normalization_14[0][0] \n__________________________________________________________________________________________________\nconv2d_15 (Conv2D) (None, 32, 32, 16) 1024 activation_14[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_15 (BatchNo (None, 32, 32, 16) 64 conv2d_15[0][0] \n__________________________________________________________________________________________________\nactivation_15 (Activation) (None, 32, 32, 16) 0 batch_normalization_15[0][0] \n__________________________________________________________________________________________________\nconv2d_16 (Conv2D) (None, 32, 32, 16) 2304 activation_15[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_16 (BatchNo (None, 32, 32, 16) 64 conv2d_16[0][0] \n__________________________________________________________________________________________________\nactivation_16 (Activation) (None, 32, 32, 16) 0 batch_normalization_16[0][0] \n__________________________________________________________________________________________________\nconv2d_17 (Conv2D) (None, 32, 32, 64) 1024 activation_16[0][0] \n__________________________________________________________________________________________________\nadd_5 (Add) (None, 32, 32, 64) 0 conv2d_17[0][0] \n add_4[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_17 (BatchNo (None, 32, 32, 64) 256 add_5[0][0] \n__________________________________________________________________________________________________\nactivation_17 (Activation) (None, 32, 32, 64) 0 batch_normalization_17[0][0] \n__________________________________________________________________________________________________\nconv2d_18 (Conv2D) (None, 32, 32, 16) 1024 activation_17[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_18 (BatchNo (None, 32, 32, 16) 64 conv2d_18[0][0] \n__________________________________________________________________________________________________\nactivation_18 (Activation) (None, 32, 32, 16) 0 batch_normalization_18[0][0] \n__________________________________________________________________________________________________\nconv2d_19 (Conv2D) (None, 32, 32, 16) 2304 activation_18[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_19 (BatchNo (None, 32, 32, 16) 64 conv2d_19[0][0] 
\n__________________________________________________________________________________________________\nactivation_19 (Activation) (None, 32, 32, 16) 0 batch_normalization_19[0][0] \n__________________________________________________________________________________________________\nconv2d_20 (Conv2D) (None, 32, 32, 64) 1024 activation_19[0][0] \n__________________________________________________________________________________________________\nadd_6 (Add) (None, 32, 32, 64) 0 conv2d_20[0][0] \n add_5[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_20 (BatchNo (None, 32, 32, 64) 256 add_6[0][0] \n__________________________________________________________________________________________________\nactivation_20 (Activation) (None, 32, 32, 64) 0 batch_normalization_20[0][0] \n__________________________________________________________________________________________________\nconv2d_21 (Conv2D) (None, 32, 32, 16) 1024 activation_20[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_21 (BatchNo (None, 32, 32, 16) 64 conv2d_21[0][0] \n__________________________________________________________________________________________________\nactivation_21 (Activation) (None, 32, 32, 16) 0 batch_normalization_21[0][0] \n__________________________________________________________________________________________________\nconv2d_22 (Conv2D) (None, 32, 32, 16) 2304 activation_21[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_22 (BatchNo (None, 32, 32, 16) 64 conv2d_22[0][0] \n__________________________________________________________________________________________________\nactivation_22 (Activation) (None, 32, 32, 16) 0 batch_normalization_22[0][0] \n__________________________________________________________________________________________________\nconv2d_23 (Conv2D) (None, 32, 32, 64) 1024 activation_22[0][0] \n__________________________________________________________________________________________________\nadd_7 (Add) (None, 32, 32, 64) 0 conv2d_23[0][0] \n add_6[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_23 (BatchNo (None, 32, 32, 64) 256 add_7[0][0] \n__________________________________________________________________________________________________\nactivation_23 (Activation) (None, 32, 32, 64) 0 batch_normalization_23[0][0] \n__________________________________________________________________________________________________\nconv2d_24 (Conv2D) (None, 32, 32, 16) 1024 activation_23[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_24 (BatchNo (None, 32, 32, 16) 64 conv2d_24[0][0] \n__________________________________________________________________________________________________\nactivation_24 (Activation) (None, 32, 32, 16) 0 batch_normalization_24[0][0] \n__________________________________________________________________________________________________\nconv2d_25 (Conv2D) (None, 32, 32, 16) 2304 activation_24[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_25 (BatchNo (None, 32, 32, 16) 64 conv2d_25[0][0] \n__________________________________________________________________________________________________\nactivation_25 (Activation) 
(None, 32, 32, 16) 0 batch_normalization_25[0][0] \n__________________________________________________________________________________________________\nconv2d_26 (Conv2D) (None, 32, 32, 64) 1024 activation_25[0][0] \n__________________________________________________________________________________________________\nadd_8 (Add) (None, 32, 32, 64) 0 conv2d_26[0][0] \n add_7[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_26 (BatchNo (None, 32, 32, 64) 256 add_8[0][0] \n__________________________________________________________________________________________________\nactivation_26 (Activation) (None, 32, 32, 64) 0 batch_normalization_26[0][0] \n__________________________________________________________________________________________________\nconv2d_27 (Conv2D) (None, 32, 32, 16) 1024 activation_26[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_27 (BatchNo (None, 32, 32, 16) 64 conv2d_27[0][0] \n__________________________________________________________________________________________________\nactivation_27 (Activation) (None, 32, 32, 16) 0 batch_normalization_27[0][0] \n__________________________________________________________________________________________________\nconv2d_28 (Conv2D) (None, 32, 32, 16) 2304 activation_27[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_28 (BatchNo (None, 32, 32, 16) 64 conv2d_28[0][0] \n__________________________________________________________________________________________________\nactivation_28 (Activation) (None, 32, 32, 16) 0 batch_normalization_28[0][0] \n__________________________________________________________________________________________________\nconv2d_29 (Conv2D) (None, 32, 32, 64) 1024 activation_28[0][0] \n__________________________________________________________________________________________________\nadd_9 (Add) (None, 32, 32, 64) 0 conv2d_29[0][0] \n add_8[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_29 (BatchNo (None, 32, 32, 64) 256 add_9[0][0] \n__________________________________________________________________________________________________\nactivation_29 (Activation) (None, 32, 32, 64) 0 batch_normalization_29[0][0] \n__________________________________________________________________________________________________\nconv2d_30 (Conv2D) (None, 32, 32, 16) 1024 activation_29[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_30 (BatchNo (None, 32, 32, 16) 64 conv2d_30[0][0] \n__________________________________________________________________________________________________\nactivation_30 (Activation) (None, 32, 32, 16) 0 batch_normalization_30[0][0] \n__________________________________________________________________________________________________\nconv2d_31 (Conv2D) (None, 32, 32, 16) 2304 activation_30[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_31 (BatchNo (None, 32, 32, 16) 64 conv2d_31[0][0] \n__________________________________________________________________________________________________\nactivation_31 (Activation) (None, 32, 32, 16) 0 batch_normalization_31[0][0] 
\n__________________________________________________________________________________________________\nconv2d_32 (Conv2D) (None, 32, 32, 64) 1024 activation_31[0][0] \n__________________________________________________________________________________________________\nadd_10 (Add) (None, 32, 32, 64) 0 conv2d_32[0][0] \n add_9[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_32 (BatchNo (None, 32, 32, 64) 256 add_10[0][0] \n__________________________________________________________________________________________________\nactivation_32 (Activation) (None, 32, 32, 64) 0 batch_normalization_32[0][0] \n__________________________________________________________________________________________________\nconv2d_33 (Conv2D) (None, 32, 32, 16) 1024 activation_32[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_33 (BatchNo (None, 32, 32, 16) 64 conv2d_33[0][0] \n__________________________________________________________________________________________________\nactivation_33 (Activation) (None, 32, 32, 16) 0 batch_normalization_33[0][0] \n__________________________________________________________________________________________________\nconv2d_34 (Conv2D) (None, 32, 32, 16) 2304 activation_33[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_34 (BatchNo (None, 32, 32, 16) 64 conv2d_34[0][0] \n__________________________________________________________________________________________________\nactivation_34 (Activation) (None, 32, 32, 16) 0 batch_normalization_34[0][0] \n__________________________________________________________________________________________________\nconv2d_35 (Conv2D) (None, 32, 32, 64) 1024 activation_34[0][0] \n__________________________________________________________________________________________________\nadd_11 (Add) (None, 32, 32, 64) 0 conv2d_35[0][0] \n add_10[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_35 (BatchNo (None, 32, 32, 64) 256 add_11[0][0] \n__________________________________________________________________________________________________\nactivation_35 (Activation) (None, 32, 32, 64) 0 batch_normalization_35[0][0] \n__________________________________________________________________________________________________\nconv2d_36 (Conv2D) (None, 32, 32, 16) 1024 activation_35[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_36 (BatchNo (None, 32, 32, 16) 64 conv2d_36[0][0] \n__________________________________________________________________________________________________\nactivation_36 (Activation) (None, 32, 32, 16) 0 batch_normalization_36[0][0] \n__________________________________________________________________________________________________\nconv2d_37 (Conv2D) (None, 32, 32, 16) 2304 activation_36[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_37 (BatchNo (None, 32, 32, 16) 64 conv2d_37[0][0] \n__________________________________________________________________________________________________\nactivation_37 (Activation) (None, 32, 32, 16) 0 batch_normalization_37[0][0] \n__________________________________________________________________________________________________\nconv2d_38 (Conv2D) (None, 
32, 32, 64) 1024 activation_37[0][0] \n__________________________________________________________________________________________________\nadd_12 (Add) (None, 32, 32, 64) 0 conv2d_38[0][0] \n add_11[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_38 (BatchNo (None, 32, 32, 64) 256 add_12[0][0] \n__________________________________________________________________________________________________\nactivation_38 (Activation) (None, 32, 32, 64) 0 batch_normalization_38[0][0] \n__________________________________________________________________________________________________\nconv2d_39 (Conv2D) (None, 32, 32, 16) 1024 activation_38[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_39 (BatchNo (None, 32, 32, 16) 64 conv2d_39[0][0] \n__________________________________________________________________________________________________\nactivation_39 (Activation) (None, 32, 32, 16) 0 batch_normalization_39[0][0] \n__________________________________________________________________________________________________\nconv2d_40 (Conv2D) (None, 32, 32, 16) 2304 activation_39[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_40 (BatchNo (None, 32, 32, 16) 64 conv2d_40[0][0] \n__________________________________________________________________________________________________\nactivation_40 (Activation) (None, 32, 32, 16) 0 batch_normalization_40[0][0] \n__________________________________________________________________________________________________\nconv2d_41 (Conv2D) (None, 32, 32, 64) 1024 activation_40[0][0] \n__________________________________________________________________________________________________\nadd_13 (Add) (None, 32, 32, 64) 0 conv2d_41[0][0] \n add_12[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_41 (BatchNo (None, 32, 32, 64) 256 add_13[0][0] \n__________________________________________________________________________________________________\nactivation_41 (Activation) (None, 32, 32, 64) 0 batch_normalization_41[0][0] \n__________________________________________________________________________________________________\nconv2d_42 (Conv2D) (None, 32, 32, 16) 1024 activation_41[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_42 (BatchNo (None, 32, 32, 16) 64 conv2d_42[0][0] \n__________________________________________________________________________________________________\nactivation_42 (Activation) (None, 32, 32, 16) 0 batch_normalization_42[0][0] \n__________________________________________________________________________________________________\nconv2d_43 (Conv2D) (None, 32, 32, 16) 2304 activation_42[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_43 (BatchNo (None, 32, 32, 16) 64 conv2d_43[0][0] \n__________________________________________________________________________________________________\nactivation_43 (Activation) (None, 32, 32, 16) 0 batch_normalization_43[0][0] \n__________________________________________________________________________________________________\nconv2d_44 (Conv2D) (None, 32, 32, 64) 1024 activation_43[0][0] 
\n__________________________________________________________________________________________________\nadd_14 (Add) (None, 32, 32, 64) 0 conv2d_44[0][0] \n add_13[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_44 (BatchNo (None, 32, 32, 64) 256 add_14[0][0] \n__________________________________________________________________________________________________\nactivation_44 (Activation) (None, 32, 32, 64) 0 batch_normalization_44[0][0] \n__________________________________________________________________________________________________\nconv2d_45 (Conv2D) (None, 32, 32, 16) 1024 activation_44[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_45 (BatchNo (None, 32, 32, 16) 64 conv2d_45[0][0] \n__________________________________________________________________________________________________\nactivation_45 (Activation) (None, 32, 32, 16) 0 batch_normalization_45[0][0] \n__________________________________________________________________________________________________\nconv2d_46 (Conv2D) (None, 32, 32, 16) 2304 activation_45[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_46 (BatchNo (None, 32, 32, 16) 64 conv2d_46[0][0] \n__________________________________________________________________________________________________\nactivation_46 (Activation) (None, 32, 32, 16) 0 batch_normalization_46[0][0] \n__________________________________________________________________________________________________\nconv2d_47 (Conv2D) (None, 32, 32, 64) 1024 activation_46[0][0] \n__________________________________________________________________________________________________\nadd_15 (Add) (None, 32, 32, 64) 0 conv2d_47[0][0] \n add_14[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_47 (BatchNo (None, 32, 32, 64) 256 add_15[0][0] \n__________________________________________________________________________________________________\nactivation_47 (Activation) (None, 32, 32, 64) 0 batch_normalization_47[0][0] \n__________________________________________________________________________________________________\nconv2d_48 (Conv2D) (None, 32, 32, 16) 1024 activation_47[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_48 (BatchNo (None, 32, 32, 16) 64 conv2d_48[0][0] \n__________________________________________________________________________________________________\nactivation_48 (Activation) (None, 32, 32, 16) 0 batch_normalization_48[0][0] \n__________________________________________________________________________________________________\nconv2d_49 (Conv2D) (None, 32, 32, 16) 2304 activation_48[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_49 (BatchNo (None, 32, 32, 16) 64 conv2d_49[0][0] \n__________________________________________________________________________________________________\nactivation_49 (Activation) (None, 32, 32, 16) 0 batch_normalization_49[0][0] \n__________________________________________________________________________________________________\nconv2d_50 (Conv2D) (None, 32, 32, 64) 1024 activation_49[0][0] \n__________________________________________________________________________________________________\nadd_16 (Add) (None, 32, 
32, 64) 0 conv2d_50[0][0] \n add_15[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_50 (BatchNo (None, 32, 32, 64) 256 add_16[0][0] \n__________________________________________________________________________________________________\nactivation_50 (Activation) (None, 32, 32, 64) 0 batch_normalization_50[0][0] \n__________________________________________________________________________________________________\nconv2d_51 (Conv2D) (None, 32, 32, 16) 1024 activation_50[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_51 (BatchNo (None, 32, 32, 16) 64 conv2d_51[0][0] \n__________________________________________________________________________________________________\nactivation_51 (Activation) (None, 32, 32, 16) 0 batch_normalization_51[0][0] \n__________________________________________________________________________________________________\nconv2d_52 (Conv2D) (None, 32, 32, 16) 2304 activation_51[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_52 (BatchNo (None, 32, 32, 16) 64 conv2d_52[0][0] \n__________________________________________________________________________________________________\nactivation_52 (Activation) (None, 32, 32, 16) 0 batch_normalization_52[0][0] \n__________________________________________________________________________________________________\nconv2d_53 (Conv2D) (None, 32, 32, 64) 1024 activation_52[0][0] \n__________________________________________________________________________________________________\nadd_17 (Add) (None, 32, 32, 64) 0 conv2d_53[0][0] \n add_16[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_53 (BatchNo (None, 32, 32, 64) 256 add_17[0][0] \n__________________________________________________________________________________________________\nactivation_53 (Activation) (None, 32, 32, 64) 0 batch_normalization_53[0][0] \n__________________________________________________________________________________________________\nconv2d_54 (Conv2D) (None, 32, 32, 16) 1024 activation_53[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_54 (BatchNo (None, 32, 32, 16) 64 conv2d_54[0][0] \n__________________________________________________________________________________________________\nactivation_54 (Activation) (None, 32, 32, 16) 0 batch_normalization_54[0][0] \n__________________________________________________________________________________________________\nconv2d_55 (Conv2D) (None, 32, 32, 16) 2304 activation_54[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_55 (BatchNo (None, 32, 32, 16) 64 conv2d_55[0][0] \n__________________________________________________________________________________________________\nactivation_55 (Activation) (None, 32, 32, 16) 0 batch_normalization_55[0][0] \n__________________________________________________________________________________________________\nconv2d_56 (Conv2D) (None, 32, 32, 64) 1024 activation_55[0][0] \n__________________________________________________________________________________________________\nadd_18 (Add) (None, 32, 32, 64) 0 conv2d_56[0][0] \n add_17[0][0] 
\n__________________________________________________________________________________________________\nbatch_normalization_56 (BatchNo (None, 32, 32, 64) 256 add_18[0][0] \n__________________________________________________________________________________________________\nactivation_56 (Activation) (None, 32, 32, 64) 0 batch_normalization_56[0][0] \n__________________________________________________________________________________________________\nconv2d_57 (Conv2D) (None, 32, 32, 16) 1024 activation_56[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_57 (BatchNo (None, 32, 32, 16) 64 conv2d_57[0][0] \n__________________________________________________________________________________________________\nactivation_57 (Activation) (None, 32, 32, 16) 0 batch_normalization_57[0][0] \n__________________________________________________________________________________________________\nconv2d_58 (Conv2D) (None, 32, 32, 16) 2304 activation_57[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_58 (BatchNo (None, 32, 32, 16) 64 conv2d_58[0][0] \n__________________________________________________________________________________________________\nactivation_58 (Activation) (None, 32, 32, 16) 0 batch_normalization_58[0][0] \n__________________________________________________________________________________________________\nconv2d_59 (Conv2D) (None, 32, 32, 64) 1024 activation_58[0][0] \n__________________________________________________________________________________________________\nadd_19 (Add) (None, 32, 32, 64) 0 conv2d_59[0][0] \n add_18[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_59 (BatchNo (None, 32, 32, 64) 256 add_19[0][0] \n__________________________________________________________________________________________________\nactivation_59 (Activation) (None, 32, 32, 64) 0 batch_normalization_59[0][0] \n__________________________________________________________________________________________________\nconv2d_60 (Conv2D) (None, 32, 32, 16) 1024 activation_59[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_60 (BatchNo (None, 32, 32, 16) 64 conv2d_60[0][0] \n__________________________________________________________________________________________________\nactivation_60 (Activation) (None, 32, 32, 16) 0 batch_normalization_60[0][0] \n__________________________________________________________________________________________________\nconv2d_61 (Conv2D) (None, 32, 32, 16) 2304 activation_60[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_61 (BatchNo (None, 32, 32, 16) 64 conv2d_61[0][0] \n__________________________________________________________________________________________________\nactivation_61 (Activation) (None, 32, 32, 16) 0 batch_normalization_61[0][0] \n__________________________________________________________________________________________________\nconv2d_62 (Conv2D) (None, 32, 32, 64) 1024 activation_61[0][0] \n__________________________________________________________________________________________________\nadd_20 (Add) (None, 32, 32, 64) 0 conv2d_62[0][0] \n add_19[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_62 
(BatchNo (None, 32, 32, 64) 256 add_20[0][0] \n__________________________________________________________________________________________________\nactivation_62 (Activation) (None, 32, 32, 64) 0 batch_normalization_62[0][0] \n__________________________________________________________________________________________________\nconv2d_63 (Conv2D) (None, 32, 32, 16) 1024 activation_62[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_63 (BatchNo (None, 32, 32, 16) 64 conv2d_63[0][0] \n__________________________________________________________________________________________________\nactivation_63 (Activation) (None, 32, 32, 16) 0 batch_normalization_63[0][0] \n__________________________________________________________________________________________________\nconv2d_64 (Conv2D) (None, 32, 32, 16) 2304 activation_63[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_64 (BatchNo (None, 32, 32, 16) 64 conv2d_64[0][0] \n__________________________________________________________________________________________________\nactivation_64 (Activation) (None, 32, 32, 16) 0 batch_normalization_64[0][0] \n__________________________________________________________________________________________________\nconv2d_65 (Conv2D) (None, 32, 32, 64) 1024 activation_64[0][0] \n__________________________________________________________________________________________________\nadd_21 (Add) (None, 32, 32, 64) 0 conv2d_65[0][0] \n add_20[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_65 (BatchNo (None, 32, 32, 64) 256 add_21[0][0] \n__________________________________________________________________________________________________\nactivation_65 (Activation) (None, 32, 32, 64) 0 batch_normalization_65[0][0] \n__________________________________________________________________________________________________\nconv2d_66 (Conv2D) (None, 32, 32, 16) 1024 activation_65[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_66 (BatchNo (None, 32, 32, 16) 64 conv2d_66[0][0] \n__________________________________________________________________________________________________\nactivation_66 (Activation) (None, 32, 32, 16) 0 batch_normalization_66[0][0] \n__________________________________________________________________________________________________\nconv2d_67 (Conv2D) (None, 32, 32, 16) 2304 activation_66[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_67 (BatchNo (None, 32, 32, 16) 64 conv2d_67[0][0] \n__________________________________________________________________________________________________\nactivation_67 (Activation) (None, 32, 32, 16) 0 batch_normalization_67[0][0] \n__________________________________________________________________________________________________\nconv2d_68 (Conv2D) (None, 32, 32, 64) 1024 activation_67[0][0] \n__________________________________________________________________________________________________\nadd_22 (Add) (None, 32, 32, 64) 0 conv2d_68[0][0] \n add_21[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_68 (BatchNo (None, 32, 32, 64) 256 add_22[0][0] 
\n__________________________________________________________________________________________________\nactivation_68 (Activation) (None, 32, 32, 64) 0 batch_normalization_68[0][0] \n__________________________________________________________________________________________________\nconv2d_69 (Conv2D) (None, 32, 32, 16) 1024 activation_68[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_69 (BatchNo (None, 32, 32, 16) 64 conv2d_69[0][0] \n__________________________________________________________________________________________________\nactivation_69 (Activation) (None, 32, 32, 16) 0 batch_normalization_69[0][0] \n__________________________________________________________________________________________________\nconv2d_70 (Conv2D) (None, 32, 32, 16) 2304 activation_69[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_70 (BatchNo (None, 32, 32, 16) 64 conv2d_70[0][0] \n__________________________________________________________________________________________________\nactivation_70 (Activation) (None, 32, 32, 16) 0 batch_normalization_70[0][0] \n__________________________________________________________________________________________________\nconv2d_71 (Conv2D) (None, 32, 32, 64) 1024 activation_70[0][0] \n__________________________________________________________________________________________________\nadd_23 (Add) (None, 32, 32, 64) 0 conv2d_71[0][0] \n add_22[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_71 (BatchNo (None, 32, 32, 64) 256 add_23[0][0] \n__________________________________________________________________________________________________\nactivation_71 (Activation) (None, 32, 32, 64) 0 batch_normalization_71[0][0] \n__________________________________________________________________________________________________\nconv2d_72 (Conv2D) (None, 32, 32, 16) 1024 activation_71[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_72 (BatchNo (None, 32, 32, 16) 64 conv2d_72[0][0] \n__________________________________________________________________________________________________\nactivation_72 (Activation) (None, 32, 32, 16) 0 batch_normalization_72[0][0] \n__________________________________________________________________________________________________\nconv2d_73 (Conv2D) (None, 32, 32, 16) 2304 activation_72[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_73 (BatchNo (None, 32, 32, 16) 64 conv2d_73[0][0] \n__________________________________________________________________________________________________\nactivation_73 (Activation) (None, 32, 32, 16) 0 batch_normalization_73[0][0] \n__________________________________________________________________________________________________\nconv2d_74 (Conv2D) (None, 32, 32, 64) 1024 activation_73[0][0] \n__________________________________________________________________________________________________\nadd_24 (Add) (None, 32, 32, 64) 0 conv2d_74[0][0] \n add_23[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_74 (BatchNo (None, 32, 32, 64) 256 add_24[0][0] \n__________________________________________________________________________________________________\nactivation_74 
(Activation) (None, 32, 32, 64) 0 batch_normalization_74[0][0] \n__________________________________________________________________________________________________\nconv2d_75 (Conv2D) (None, 32, 32, 16) 1024 activation_74[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_75 (BatchNo (None, 32, 32, 16) 64 conv2d_75[0][0] \n__________________________________________________________________________________________________\nactivation_75 (Activation) (None, 32, 32, 16) 0 batch_normalization_75[0][0] \n__________________________________________________________________________________________________\nconv2d_76 (Conv2D) (None, 32, 32, 16) 2304 activation_75[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_76 (BatchNo (None, 32, 32, 16) 64 conv2d_76[0][0] \n__________________________________________________________________________________________________\nactivation_76 (Activation) (None, 32, 32, 16) 0 batch_normalization_76[0][0] \n__________________________________________________________________________________________________\nconv2d_77 (Conv2D) (None, 32, 32, 64) 1024 activation_76[0][0] \n__________________________________________________________________________________________________\nadd_25 (Add) (None, 32, 32, 64) 0 conv2d_77[0][0] \n add_24[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_77 (BatchNo (None, 32, 32, 64) 256 add_25[0][0] \n__________________________________________________________________________________________________\nactivation_77 (Activation) (None, 32, 32, 64) 0 batch_normalization_77[0][0] \n__________________________________________________________________________________________________\nconv2d_78 (Conv2D) (None, 32, 32, 16) 1024 activation_77[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_78 (BatchNo (None, 32, 32, 16) 64 conv2d_78[0][0] \n__________________________________________________________________________________________________\nactivation_78 (Activation) (None, 32, 32, 16) 0 batch_normalization_78[0][0] \n__________________________________________________________________________________________________\nconv2d_79 (Conv2D) (None, 32, 32, 16) 2304 activation_78[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_79 (BatchNo (None, 32, 32, 16) 64 conv2d_79[0][0] \n__________________________________________________________________________________________________\nactivation_79 (Activation) (None, 32, 32, 16) 0 batch_normalization_79[0][0] \n__________________________________________________________________________________________________\nconv2d_80 (Conv2D) (None, 32, 32, 64) 1024 activation_79[0][0] \n__________________________________________________________________________________________________\nadd_26 (Add) (None, 32, 32, 64) 0 conv2d_80[0][0] \n add_25[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_80 (BatchNo (None, 32, 32, 64) 256 add_26[0][0] \n__________________________________________________________________________________________________\nactivation_80 (Activation) (None, 32, 32, 64) 0 batch_normalization_80[0][0] 
\n__________________________________________________________________________________________________\nconv2d_81 (Conv2D) (None, 32, 32, 16) 1024 activation_80[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_81 (BatchNo (None, 32, 32, 16) 64 conv2d_81[0][0] \n__________________________________________________________________________________________________\nactivation_81 (Activation) (None, 32, 32, 16) 0 batch_normalization_81[0][0] \n__________________________________________________________________________________________________\nconv2d_82 (Conv2D) (None, 32, 32, 16) 2304 activation_81[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_82 (BatchNo (None, 32, 32, 16) 64 conv2d_82[0][0] \n__________________________________________________________________________________________________\nactivation_82 (Activation) (None, 32, 32, 16) 0 batch_normalization_82[0][0] \n__________________________________________________________________________________________________\nconv2d_83 (Conv2D) (None, 32, 32, 64) 1024 activation_82[0][0] \n__________________________________________________________________________________________________\nadd_27 (Add) (None, 32, 32, 64) 0 conv2d_83[0][0] \n add_26[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_83 (BatchNo (None, 32, 32, 64) 256 add_27[0][0] \n__________________________________________________________________________________________________\nactivation_83 (Activation) (None, 32, 32, 64) 0 batch_normalization_83[0][0] \n__________________________________________________________________________________________________\nconv2d_84 (Conv2D) (None, 32, 32, 16) 1024 activation_83[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_84 (BatchNo (None, 32, 32, 16) 64 conv2d_84[0][0] \n__________________________________________________________________________________________________\nactivation_84 (Activation) (None, 32, 32, 16) 0 batch_normalization_84[0][0] \n__________________________________________________________________________________________________\nconv2d_85 (Conv2D) (None, 32, 32, 16) 2304 activation_84[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_85 (BatchNo (None, 32, 32, 16) 64 conv2d_85[0][0] \n__________________________________________________________________________________________________\nactivation_85 (Activation) (None, 32, 32, 16) 0 batch_normalization_85[0][0] \n__________________________________________________________________________________________________\nconv2d_86 (Conv2D) (None, 32, 32, 64) 1024 activation_85[0][0] \n__________________________________________________________________________________________________\nadd_28 (Add) (None, 32, 32, 64) 0 conv2d_86[0][0] \n add_27[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_86 (BatchNo (None, 32, 32, 64) 256 add_28[0][0] \n__________________________________________________________________________________________________\nactivation_86 (Activation) (None, 32, 32, 64) 0 batch_normalization_86[0][0] \n__________________________________________________________________________________________________\nconv2d_87 (Conv2D) 
(None, 32, 32, 16) 1024 activation_86[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_87 (BatchNo (None, 32, 32, 16) 64 conv2d_87[0][0] \n__________________________________________________________________________________________________\nactivation_87 (Activation) (None, 32, 32, 16) 0 batch_normalization_87[0][0] \n__________________________________________________________________________________________________\nconv2d_88 (Conv2D) (None, 32, 32, 16) 2304 activation_87[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_88 (BatchNo (None, 32, 32, 16) 64 conv2d_88[0][0] \n__________________________________________________________________________________________________\nactivation_88 (Activation) (None, 32, 32, 16) 0 batch_normalization_88[0][0] \n__________________________________________________________________________________________________\nconv2d_89 (Conv2D) (None, 32, 32, 64) 1024 activation_88[0][0] \n__________________________________________________________________________________________________\nadd_29 (Add) (None, 32, 32, 64) 0 conv2d_89[0][0] \n add_28[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_89 (BatchNo (None, 32, 32, 64) 256 add_29[0][0] \n__________________________________________________________________________________________________\nactivation_89 (Activation) (None, 32, 32, 64) 0 batch_normalization_89[0][0] \n__________________________________________________________________________________________________\nconv2d_90 (Conv2D) (None, 32, 32, 16) 1024 activation_89[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_90 (BatchNo (None, 32, 32, 16) 64 conv2d_90[0][0] \n__________________________________________________________________________________________________\nactivation_90 (Activation) (None, 32, 32, 16) 0 batch_normalization_90[0][0] \n__________________________________________________________________________________________________\nconv2d_91 (Conv2D) (None, 32, 32, 16) 2304 activation_90[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_91 (BatchNo (None, 32, 32, 16) 64 conv2d_91[0][0] \n__________________________________________________________________________________________________\nactivation_91 (Activation) (None, 32, 32, 16) 0 batch_normalization_91[0][0] \n__________________________________________________________________________________________________\nconv2d_92 (Conv2D) (None, 32, 32, 64) 1024 activation_91[0][0] \n__________________________________________________________________________________________________\nadd_30 (Add) (None, 32, 32, 64) 0 conv2d_92[0][0] \n add_29[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_92 (BatchNo (None, 32, 32, 64) 256 add_30[0][0] \n__________________________________________________________________________________________________\nactivation_92 (Activation) (None, 32, 32, 64) 0 batch_normalization_92[0][0] \n__________________________________________________________________________________________________\nconv2d_93 (Conv2D) (None, 32, 32, 16) 1024 activation_92[0][0] 
\n__________________________________________________________________________________________________\nbatch_normalization_93 (BatchNo (None, 32, 32, 16) 64 conv2d_93[0][0] \n__________________________________________________________________________________________________\nactivation_93 (Activation) (None, 32, 32, 16) 0 batch_normalization_93[0][0] \n__________________________________________________________________________________________________\nconv2d_94 (Conv2D) (None, 32, 32, 16) 2304 activation_93[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_94 (BatchNo (None, 32, 32, 16) 64 conv2d_94[0][0] \n__________________________________________________________________________________________________\nactivation_94 (Activation) (None, 32, 32, 16) 0 batch_normalization_94[0][0] \n__________________________________________________________________________________________________\nconv2d_95 (Conv2D) (None, 32, 32, 64) 1024 activation_94[0][0] \n__________________________________________________________________________________________________\nadd_31 (Add) (None, 32, 32, 64) 0 conv2d_95[0][0] \n add_30[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_95 (BatchNo (None, 32, 32, 64) 256 add_31[0][0] \n__________________________________________________________________________________________________\nactivation_95 (Activation) (None, 32, 32, 64) 0 batch_normalization_95[0][0] \n__________________________________________________________________________________________________\nconv2d_96 (Conv2D) (None, 32, 32, 16) 1024 activation_95[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_96 (BatchNo (None, 32, 32, 16) 64 conv2d_96[0][0] \n__________________________________________________________________________________________________\nactivation_96 (Activation) (None, 32, 32, 16) 0 batch_normalization_96[0][0] \n__________________________________________________________________________________________________\nconv2d_97 (Conv2D) (None, 32, 32, 16) 2304 activation_96[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_97 (BatchNo (None, 32, 32, 16) 64 conv2d_97[0][0] \n__________________________________________________________________________________________________\nactivation_97 (Activation) (None, 32, 32, 16) 0 batch_normalization_97[0][0] \n__________________________________________________________________________________________________\nconv2d_98 (Conv2D) (None, 32, 32, 64) 1024 activation_97[0][0] \n__________________________________________________________________________________________________\nadd_32 (Add) (None, 32, 32, 64) 0 conv2d_98[0][0] \n add_31[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_98 (BatchNo (None, 32, 32, 64) 256 add_32[0][0] \n__________________________________________________________________________________________________\nactivation_98 (Activation) (None, 32, 32, 64) 0 batch_normalization_98[0][0] \n__________________________________________________________________________________________________\nconv2d_99 (Conv2D) (None, 32, 32, 16) 1024 activation_98[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_99 
(BatchNo (None, 32, 32, 16) 64 conv2d_99[0][0] \n__________________________________________________________________________________________________\nactivation_99 (Activation) (None, 32, 32, 16) 0 batch_normalization_99[0][0] \n__________________________________________________________________________________________________\nconv2d_100 (Conv2D) (None, 32, 32, 16) 2304 activation_99[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_100 (BatchN (None, 32, 32, 16) 64 conv2d_100[0][0] \n__________________________________________________________________________________________________\nactivation_100 (Activation) (None, 32, 32, 16) 0 batch_normalization_100[0][0] \n__________________________________________________________________________________________________\nconv2d_101 (Conv2D) (None, 32, 32, 64) 1024 activation_100[0][0] \n__________________________________________________________________________________________________\nadd_33 (Add) (None, 32, 32, 64) 0 conv2d_101[0][0] \n add_32[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_101 (BatchN (None, 32, 32, 64) 256 add_33[0][0] \n__________________________________________________________________________________________________\nactivation_101 (Activation) (None, 32, 32, 64) 0 batch_normalization_101[0][0] \n__________________________________________________________________________________________________\nconv2d_102 (Conv2D) (None, 32, 32, 16) 1024 activation_101[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_102 (BatchN (None, 32, 32, 16) 64 conv2d_102[0][0] \n__________________________________________________________________________________________________\nactivation_102 (Activation) (None, 32, 32, 16) 0 batch_normalization_102[0][0] \n__________________________________________________________________________________________________\nconv2d_103 (Conv2D) (None, 32, 32, 16) 2304 activation_102[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_103 (BatchN (None, 32, 32, 16) 64 conv2d_103[0][0] \n__________________________________________________________________________________________________\nactivation_103 (Activation) (None, 32, 32, 16) 0 batch_normalization_103[0][0] \n__________________________________________________________________________________________________\nconv2d_104 (Conv2D) (None, 32, 32, 64) 1024 activation_103[0][0] \n__________________________________________________________________________________________________\nadd_34 (Add) (None, 32, 32, 64) 0 conv2d_104[0][0] \n add_33[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_104 (BatchN (None, 32, 32, 64) 256 add_34[0][0] \n__________________________________________________________________________________________________\nactivation_104 (Activation) (None, 32, 32, 64) 0 batch_normalization_104[0][0] \n__________________________________________________________________________________________________\nconv2d_105 (Conv2D) (None, 32, 32, 16) 1024 activation_104[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_105 (BatchN (None, 32, 32, 16) 64 conv2d_105[0][0] 
\n__________________________________________________________________________________________________\nactivation_105 (Activation) (None, 32, 32, 16) 0 batch_normalization_105[0][0] \n__________________________________________________________________________________________________\nconv2d_106 (Conv2D) (None, 32, 32, 16) 2304 activation_105[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_106 (BatchN (None, 32, 32, 16) 64 conv2d_106[0][0] \n__________________________________________________________________________________________________\nactivation_106 (Activation) (None, 32, 32, 16) 0 batch_normalization_106[0][0] \n__________________________________________________________________________________________________\nconv2d_107 (Conv2D) (None, 32, 32, 64) 1024 activation_106[0][0] \n__________________________________________________________________________________________________\nadd_35 (Add) (None, 32, 32, 64) 0 conv2d_107[0][0] \n add_34[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_107 (BatchN (None, 32, 32, 64) 256 add_35[0][0] \n__________________________________________________________________________________________________\nactivation_107 (Activation) (None, 32, 32, 64) 0 batch_normalization_107[0][0] \n__________________________________________________________________________________________________\nconv2d_108 (Conv2D) (None, 32, 32, 16) 1024 activation_107[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_108 (BatchN (None, 32, 32, 16) 64 conv2d_108[0][0] \n__________________________________________________________________________________________________\nactivation_108 (Activation) (None, 32, 32, 16) 0 batch_normalization_108[0][0] \n__________________________________________________________________________________________________\nconv2d_109 (Conv2D) (None, 32, 32, 16) 2304 activation_108[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_109 (BatchN (None, 32, 32, 16) 64 conv2d_109[0][0] \n__________________________________________________________________________________________________\nactivation_109 (Activation) (None, 32, 32, 16) 0 batch_normalization_109[0][0] \n__________________________________________________________________________________________________\nconv2d_110 (Conv2D) (None, 32, 32, 64) 1024 activation_109[0][0] \n__________________________________________________________________________________________________\nadd_36 (Add) (None, 32, 32, 64) 0 conv2d_110[0][0] \n add_35[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_110 (BatchN (None, 32, 32, 64) 256 add_36[0][0] \n__________________________________________________________________________________________________\nactivation_110 (Activation) (None, 32, 32, 64) 0 batch_normalization_110[0][0] \n__________________________________________________________________________________________________\nconv2d_111 (Conv2D) (None, 32, 32, 16) 1024 activation_110[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_111 (BatchN (None, 32, 32, 16) 64 conv2d_111[0][0] 
\n__________________________________________________________________________________________________\nactivation_111 (Activation) (None, 32, 32, 16) 0 batch_normalization_111[0][0] \n__________________________________________________________________________________________________\nconv2d_112 (Conv2D) (None, 32, 32, 16) 2304 activation_111[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_112 (BatchN (None, 32, 32, 16) 64 conv2d_112[0][0] \n__________________________________________________________________________________________________\nactivation_112 (Activation) (None, 32, 32, 16) 0 batch_normalization_112[0][0] \n__________________________________________________________________________________________________\nconv2d_113 (Conv2D) (None, 32, 32, 64) 1024 activation_112[0][0] \n__________________________________________________________________________________________________\nadd_37 (Add) (None, 32, 32, 64) 0 conv2d_113[0][0] \n add_36[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_113 (BatchN (None, 32, 32, 64) 256 add_37[0][0] \n__________________________________________________________________________________________________\nactivation_113 (Activation) (None, 32, 32, 64) 0 batch_normalization_113[0][0] \n__________________________________________________________________________________________________\nconv2d_114 (Conv2D) (None, 32, 32, 16) 1024 activation_113[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_114 (BatchN (None, 32, 32, 16) 64 conv2d_114[0][0] \n__________________________________________________________________________________________________\nactivation_114 (Activation) (None, 32, 32, 16) 0 batch_normalization_114[0][0] \n__________________________________________________________________________________________________\nconv2d_115 (Conv2D) (None, 32, 32, 16) 2304 activation_114[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_115 (BatchN (None, 32, 32, 16) 64 conv2d_115[0][0] \n__________________________________________________________________________________________________\nactivation_115 (Activation) (None, 32, 32, 16) 0 batch_normalization_115[0][0] \n__________________________________________________________________________________________________\nconv2d_116 (Conv2D) (None, 32, 32, 64) 1024 activation_115[0][0] \n__________________________________________________________________________________________________\nadd_38 (Add) (None, 32, 32, 64) 0 conv2d_116[0][0] \n add_37[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_116 (BatchN (None, 32, 32, 64) 256 add_38[0][0] \n__________________________________________________________________________________________________\nactivation_116 (Activation) (None, 32, 32, 64) 0 batch_normalization_116[0][0] \n__________________________________________________________________________________________________\nconv2d_117 (Conv2D) (None, 32, 32, 16) 1024 activation_116[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_117 (BatchN (None, 32, 32, 16) 64 conv2d_117[0][0] 
\n__________________________________________________________________________________________________\nactivation_117 (Activation) (None, 32, 32, 16) 0 batch_normalization_117[0][0] \n__________________________________________________________________________________________________\nconv2d_118 (Conv2D) (None, 32, 32, 16) 2304 activation_117[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_118 (BatchN (None, 32, 32, 16) 64 conv2d_118[0][0] \n__________________________________________________________________________________________________\nactivation_118 (Activation) (None, 32, 32, 16) 0 batch_normalization_118[0][0] \n__________________________________________________________________________________________________\nconv2d_119 (Conv2D) (None, 32, 32, 64) 1024 activation_118[0][0] \n__________________________________________________________________________________________________\nadd_39 (Add) (None, 32, 32, 64) 0 conv2d_119[0][0] \n add_38[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_119 (BatchN (None, 32, 32, 64) 256 add_39[0][0] \n__________________________________________________________________________________________________\nactivation_119 (Activation) (None, 32, 32, 64) 0 batch_normalization_119[0][0] \n__________________________________________________________________________________________________\nconv2d_120 (Conv2D) (None, 32, 32, 16) 1024 activation_119[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_120 (BatchN (None, 32, 32, 16) 64 conv2d_120[0][0] \n__________________________________________________________________________________________________\nactivation_120 (Activation) (None, 32, 32, 16) 0 batch_normalization_120[0][0] \n__________________________________________________________________________________________________\nconv2d_121 (Conv2D) (None, 32, 32, 16) 2304 activation_120[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_121 (BatchN (None, 32, 32, 16) 64 conv2d_121[0][0] \n__________________________________________________________________________________________________\nactivation_121 (Activation) (None, 32, 32, 16) 0 batch_normalization_121[0][0] \n__________________________________________________________________________________________________\nconv2d_122 (Conv2D) (None, 32, 32, 64) 1024 activation_121[0][0] \n__________________________________________________________________________________________________\nadd_40 (Add) (None, 32, 32, 64) 0 conv2d_122[0][0] \n add_39[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_122 (BatchN (None, 32, 32, 64) 256 add_40[0][0] \n__________________________________________________________________________________________________\nactivation_122 (Activation) (None, 32, 32, 64) 0 batch_normalization_122[0][0] \n__________________________________________________________________________________________________\nconv2d_123 (Conv2D) (None, 32, 32, 16) 1024 activation_122[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_123 (BatchN (None, 32, 32, 16) 64 conv2d_123[0][0] 
\n__________________________________________________________________________________________________\nactivation_123 (Activation) (None, 32, 32, 16) 0 batch_normalization_123[0][0] \n__________________________________________________________________________________________________\nconv2d_124 (Conv2D) (None, 32, 32, 16) 2304 activation_123[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_124 (BatchN (None, 32, 32, 16) 64 conv2d_124[0][0] \n__________________________________________________________________________________________________\nactivation_124 (Activation) (None, 32, 32, 16) 0 batch_normalization_124[0][0] \n__________________________________________________________________________________________________\nconv2d_125 (Conv2D) (None, 32, 32, 64) 1024 activation_124[0][0] \n__________________________________________________________________________________________________\nadd_41 (Add) (None, 32, 32, 64) 0 conv2d_125[0][0] \n add_40[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_125 (BatchN (None, 32, 32, 64) 256 add_41[0][0] \n__________________________________________________________________________________________________\nactivation_125 (Activation) (None, 32, 32, 64) 0 batch_normalization_125[0][0] \n__________________________________________________________________________________________________\nconv2d_126 (Conv2D) (None, 32, 32, 16) 1024 activation_125[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_126 (BatchN (None, 32, 32, 16) 64 conv2d_126[0][0] \n__________________________________________________________________________________________________\nactivation_126 (Activation) (None, 32, 32, 16) 0 batch_normalization_126[0][0] \n__________________________________________________________________________________________________\nconv2d_127 (Conv2D) (None, 32, 32, 16) 2304 activation_126[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_127 (BatchN (None, 32, 32, 16) 64 conv2d_127[0][0] \n__________________________________________________________________________________________________\nactivation_127 (Activation) (None, 32, 32, 16) 0 batch_normalization_127[0][0] \n__________________________________________________________________________________________________\nconv2d_128 (Conv2D) (None, 32, 32, 64) 1024 activation_127[0][0] \n__________________________________________________________________________________________________\nadd_42 (Add) (None, 32, 32, 64) 0 conv2d_128[0][0] \n add_41[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_128 (BatchN (None, 32, 32, 64) 256 add_42[0][0] \n__________________________________________________________________________________________________\nactivation_128 (Activation) (None, 32, 32, 64) 0 batch_normalization_128[0][0] \n__________________________________________________________________________________________________\nconv2d_129 (Conv2D) (None, 32, 32, 16) 1024 activation_128[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_129 (BatchN (None, 32, 32, 16) 64 conv2d_129[0][0] 
\n__________________________________________________________________________________________________\nactivation_129 (Activation) (None, 32, 32, 16) 0 batch_normalization_129[0][0] \n__________________________________________________________________________________________________\nconv2d_130 (Conv2D) (None, 32, 32, 16) 2304 activation_129[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_130 (BatchN (None, 32, 32, 16) 64 conv2d_130[0][0] \n__________________________________________________________________________________________________\nactivation_130 (Activation) (None, 32, 32, 16) 0 batch_normalization_130[0][0] \n__________________________________________________________________________________________________\nconv2d_131 (Conv2D) (None, 32, 32, 64) 1024 activation_130[0][0] \n__________________________________________________________________________________________________\nadd_43 (Add) (None, 32, 32, 64) 0 conv2d_131[0][0] \n add_42[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_131 (BatchN (None, 32, 32, 64) 256 add_43[0][0] \n__________________________________________________________________________________________________\nactivation_131 (Activation) (None, 32, 32, 64) 0 batch_normalization_131[0][0] \n__________________________________________________________________________________________________\nconv2d_132 (Conv2D) (None, 32, 32, 16) 1024 activation_131[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_132 (BatchN (None, 32, 32, 16) 64 conv2d_132[0][0] \n__________________________________________________________________________________________________\nactivation_132 (Activation) (None, 32, 32, 16) 0 batch_normalization_132[0][0] \n__________________________________________________________________________________________________\nconv2d_133 (Conv2D) (None, 32, 32, 16) 2304 activation_132[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_133 (BatchN (None, 32, 32, 16) 64 conv2d_133[0][0] \n__________________________________________________________________________________________________\nactivation_133 (Activation) (None, 32, 32, 16) 0 batch_normalization_133[0][0] \n__________________________________________________________________________________________________\nconv2d_134 (Conv2D) (None, 32, 32, 64) 1024 activation_133[0][0] \n__________________________________________________________________________________________________\nadd_44 (Add) (None, 32, 32, 64) 0 conv2d_134[0][0] \n add_43[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_134 (BatchN (None, 32, 32, 64) 256 add_44[0][0] \n__________________________________________________________________________________________________\nactivation_134 (Activation) (None, 32, 32, 64) 0 batch_normalization_134[0][0] \n__________________________________________________________________________________________________\nconv2d_135 (Conv2D) (None, 32, 32, 16) 1024 activation_134[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_135 (BatchN (None, 32, 32, 16) 64 conv2d_135[0][0] 
\n__________________________________________________________________________________________________\nactivation_135 (Activation) (None, 32, 32, 16) 0 batch_normalization_135[0][0] \n__________________________________________________________________________________________________\nconv2d_136 (Conv2D) (None, 32, 32, 16) 2304 activation_135[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_136 (BatchN (None, 32, 32, 16) 64 conv2d_136[0][0] \n__________________________________________________________________________________________________\nactivation_136 (Activation) (None, 32, 32, 16) 0 batch_normalization_136[0][0] \n__________________________________________________________________________________________________\nconv2d_137 (Conv2D) (None, 32, 32, 64) 1024 activation_136[0][0] \n__________________________________________________________________________________________________\nadd_45 (Add) (None, 32, 32, 64) 0 conv2d_137[0][0] \n add_44[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_137 (BatchN (None, 32, 32, 64) 256 add_45[0][0] \n__________________________________________________________________________________________________\nactivation_137 (Activation) (None, 32, 32, 64) 0 batch_normalization_137[0][0] \n__________________________________________________________________________________________________\nconv2d_138 (Conv2D) (None, 32, 32, 16) 1024 activation_137[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_138 (BatchN (None, 32, 32, 16) 64 conv2d_138[0][0] \n__________________________________________________________________________________________________\nactivation_138 (Activation) (None, 32, 32, 16) 0 batch_normalization_138[0][0] \n__________________________________________________________________________________________________\nconv2d_139 (Conv2D) (None, 32, 32, 16) 2304 activation_138[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_139 (BatchN (None, 32, 32, 16) 64 conv2d_139[0][0] \n__________________________________________________________________________________________________\nactivation_139 (Activation) (None, 32, 32, 16) 0 batch_normalization_139[0][0] \n__________________________________________________________________________________________________\nconv2d_140 (Conv2D) (None, 32, 32, 64) 1024 activation_139[0][0] \n__________________________________________________________________________________________________\nadd_46 (Add) (None, 32, 32, 64) 0 conv2d_140[0][0] \n add_45[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_140 (BatchN (None, 32, 32, 64) 256 add_46[0][0] \n__________________________________________________________________________________________________\nactivation_140 (Activation) (None, 32, 32, 64) 0 batch_normalization_140[0][0] \n__________________________________________________________________________________________________\nconv2d_141 (Conv2D) (None, 32, 32, 16) 1024 activation_140[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_141 (BatchN (None, 32, 32, 16) 64 conv2d_141[0][0] 
\n__________________________________________________________________________________________________\nactivation_141 (Activation) (None, 32, 32, 16) 0 batch_normalization_141[0][0] \n__________________________________________________________________________________________________\nconv2d_142 (Conv2D) (None, 32, 32, 16) 2304 activation_141[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_142 (BatchN (None, 32, 32, 16) 64 conv2d_142[0][0] \n__________________________________________________________________________________________________\nactivation_142 (Activation) (None, 32, 32, 16) 0 batch_normalization_142[0][0] \n__________________________________________________________________________________________________\nconv2d_143 (Conv2D) (None, 32, 32, 64) 1024 activation_142[0][0] \n__________________________________________________________________________________________________\nadd_47 (Add) (None, 32, 32, 64) 0 conv2d_143[0][0] \n add_46[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_143 (BatchN (None, 32, 32, 64) 256 add_47[0][0] \n__________________________________________________________________________________________________\nactivation_143 (Activation) (None, 32, 32, 64) 0 batch_normalization_143[0][0] \n__________________________________________________________________________________________________\nconv2d_144 (Conv2D) (None, 32, 32, 16) 1024 activation_143[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_144 (BatchN (None, 32, 32, 16) 64 conv2d_144[0][0] \n__________________________________________________________________________________________________\nactivation_144 (Activation) (None, 32, 32, 16) 0 batch_normalization_144[0][0] \n__________________________________________________________________________________________________\nconv2d_145 (Conv2D) (None, 32, 32, 16) 2304 activation_144[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_145 (BatchN (None, 32, 32, 16) 64 conv2d_145[0][0] \n__________________________________________________________________________________________________\nactivation_145 (Activation) (None, 32, 32, 16) 0 batch_normalization_145[0][0] \n__________________________________________________________________________________________________\nconv2d_146 (Conv2D) (None, 32, 32, 64) 1024 activation_145[0][0] \n__________________________________________________________________________________________________\nadd_48 (Add) (None, 32, 32, 64) 0 conv2d_146[0][0] \n add_47[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_146 (BatchN (None, 32, 32, 64) 256 add_48[0][0] \n__________________________________________________________________________________________________\nactivation_146 (Activation) (None, 32, 32, 64) 0 batch_normalization_146[0][0] \n__________________________________________________________________________________________________\nconv2d_147 (Conv2D) (None, 32, 32, 16) 1024 activation_146[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_147 (BatchN (None, 32, 32, 16) 64 conv2d_147[0][0] 
\n__________________________________________________________________________________________________\nactivation_147 (Activation) (None, 32, 32, 16) 0 batch_normalization_147[0][0] \n__________________________________________________________________________________________________\nconv2d_148 (Conv2D) (None, 32, 32, 16) 2304 activation_147[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_148 (BatchN (None, 32, 32, 16) 64 conv2d_148[0][0] \n__________________________________________________________________________________________________\nactivation_148 (Activation) (None, 32, 32, 16) 0 batch_normalization_148[0][0] \n__________________________________________________________________________________________________\nconv2d_149 (Conv2D) (None, 32, 32, 64) 1024 activation_148[0][0] \n__________________________________________________________________________________________________\nadd_49 (Add) (None, 32, 32, 64) 0 conv2d_149[0][0] \n add_48[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_149 (BatchN (None, 32, 32, 64) 256 add_49[0][0] \n__________________________________________________________________________________________________\nactivation_149 (Activation) (None, 32, 32, 64) 0 batch_normalization_149[0][0] \n__________________________________________________________________________________________________\nconv2d_150 (Conv2D) (None, 32, 32, 16) 1024 activation_149[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_150 (BatchN (None, 32, 32, 16) 64 conv2d_150[0][0] \n__________________________________________________________________________________________________\nactivation_150 (Activation) (None, 32, 32, 16) 0 batch_normalization_150[0][0] \n__________________________________________________________________________________________________\nconv2d_151 (Conv2D) (None, 32, 32, 16) 2304 activation_150[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_151 (BatchN (None, 32, 32, 16) 64 conv2d_151[0][0] \n__________________________________________________________________________________________________\nactivation_151 (Activation) (None, 32, 32, 16) 0 batch_normalization_151[0][0] \n__________________________________________________________________________________________________\nconv2d_152 (Conv2D) (None, 32, 32, 64) 1024 activation_151[0][0] \n__________________________________________________________________________________________________\nadd_50 (Add) (None, 32, 32, 64) 0 conv2d_152[0][0] \n add_49[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_152 (BatchN (None, 32, 32, 64) 256 add_50[0][0] \n__________________________________________________________________________________________________\nactivation_152 (Activation) (None, 32, 32, 64) 0 batch_normalization_152[0][0] \n__________________________________________________________________________________________________\nconv2d_154 (Conv2D) (None, 32, 32, 16) 1024 activation_152[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_153 (BatchN (None, 32, 32, 16) 64 conv2d_154[0][0] 
\n__________________________________________________________________________________________________\nactivation_153 (Activation) (None, 32, 32, 16) 0 batch_normalization_153[0][0] \n__________________________________________________________________________________________________\nconv2d_155 (Conv2D) (None, 16, 16, 16) 2304 activation_153[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_154 (BatchN (None, 16, 16, 16) 64 conv2d_155[0][0] \n__________________________________________________________________________________________________\nactivation_154 (Activation) (None, 16, 16, 16) 0 batch_normalization_154[0][0] \n__________________________________________________________________________________________________\nconv2d_156 (Conv2D) (None, 16, 16, 256) 4096 activation_154[0][0] \n__________________________________________________________________________________________________\nconv2d_153 (Conv2D) (None, 16, 16, 256) 16384 add_50[0][0] \n__________________________________________________________________________________________________\nadd_51 (Add) (None, 16, 16, 256) 0 conv2d_156[0][0] \n conv2d_153[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_155 (BatchN (None, 16, 16, 256) 1024 add_51[0][0] \n__________________________________________________________________________________________________\nactivation_155 (Activation) (None, 16, 16, 256) 0 batch_normalization_155[0][0] \n__________________________________________________________________________________________________\nconv2d_157 (Conv2D) (None, 16, 16, 64) 16384 activation_155[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_156 (BatchN (None, 16, 16, 64) 256 conv2d_157[0][0] \n__________________________________________________________________________________________________\nactivation_156 (Activation) (None, 16, 16, 64) 0 batch_normalization_156[0][0] \n__________________________________________________________________________________________________\nconv2d_158 (Conv2D) (None, 16, 16, 64) 36864 activation_156[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_157 (BatchN (None, 16, 16, 64) 256 conv2d_158[0][0] \n__________________________________________________________________________________________________\nactivation_157 (Activation) (None, 16, 16, 64) 0 batch_normalization_157[0][0] \n__________________________________________________________________________________________________\nconv2d_159 (Conv2D) (None, 16, 16, 256) 16384 activation_157[0][0] \n__________________________________________________________________________________________________\nadd_52 (Add) (None, 16, 16, 256) 0 conv2d_159[0][0] \n add_51[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_158 (BatchN (None, 16, 16, 256) 1024 add_52[0][0] \n__________________________________________________________________________________________________\nactivation_158 (Activation) (None, 16, 16, 256) 0 batch_normalization_158[0][0] \n__________________________________________________________________________________________________\nconv2d_160 (Conv2D) (None, 16, 16, 64) 16384 activation_158[0][0] 
\n__________________________________________________________________________________________________\nbatch_normalization_159 (BatchN (None, 16, 16, 64) 256 conv2d_160[0][0] \n__________________________________________________________________________________________________\nactivation_159 (Activation) (None, 16, 16, 64) 0 batch_normalization_159[0][0] \n__________________________________________________________________________________________________\nconv2d_161 (Conv2D) (None, 16, 16, 64) 36864 activation_159[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_160 (BatchN (None, 16, 16, 64) 256 conv2d_161[0][0] \n__________________________________________________________________________________________________\nactivation_160 (Activation) (None, 16, 16, 64) 0 batch_normalization_160[0][0] \n__________________________________________________________________________________________________\nconv2d_162 (Conv2D) (None, 16, 16, 256) 16384 activation_160[0][0] \n__________________________________________________________________________________________________\nadd_53 (Add) (None, 16, 16, 256) 0 conv2d_162[0][0] \n add_52[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_161 (BatchN (None, 16, 16, 256) 1024 add_53[0][0] \n__________________________________________________________________________________________________\nactivation_161 (Activation) (None, 16, 16, 256) 0 batch_normalization_161[0][0] \n__________________________________________________________________________________________________\nconv2d_163 (Conv2D) (None, 16, 16, 64) 16384 activation_161[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_162 (BatchN (None, 16, 16, 64) 256 conv2d_163[0][0] \n__________________________________________________________________________________________________\nactivation_162 (Activation) (None, 16, 16, 64) 0 batch_normalization_162[0][0] \n__________________________________________________________________________________________________\nconv2d_164 (Conv2D) (None, 16, 16, 64) 36864 activation_162[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_163 (BatchN (None, 16, 16, 64) 256 conv2d_164[0][0] \n__________________________________________________________________________________________________\nactivation_163 (Activation) (None, 16, 16, 64) 0 batch_normalization_163[0][0] \n__________________________________________________________________________________________________\nconv2d_165 (Conv2D) (None, 16, 16, 256) 16384 activation_163[0][0] \n__________________________________________________________________________________________________\nadd_54 (Add) (None, 16, 16, 256) 0 conv2d_165[0][0] \n add_53[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_164 (BatchN (None, 16, 16, 256) 1024 add_54[0][0] \n__________________________________________________________________________________________________\nactivation_164 (Activation) (None, 16, 16, 256) 0 batch_normalization_164[0][0] \n__________________________________________________________________________________________________\nconv2d_166 (Conv2D) (None, 16, 16, 64) 16384 activation_164[0][0] 
\n__________________________________________________________________________________________________\nbatch_normalization_165 (BatchN (None, 16, 16, 64) 256 conv2d_166[0][0] \n__________________________________________________________________________________________________\nactivation_165 (Activation) (None, 16, 16, 64) 0 batch_normalization_165[0][0] \n__________________________________________________________________________________________________\nconv2d_167 (Conv2D) (None, 16, 16, 64) 36864 activation_165[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_166 (BatchN (None, 16, 16, 64) 256 conv2d_167[0][0] \n__________________________________________________________________________________________________\nactivation_166 (Activation) (None, 16, 16, 64) 0 batch_normalization_166[0][0] \n__________________________________________________________________________________________________\nconv2d_168 (Conv2D) (None, 16, 16, 256) 16384 activation_166[0][0] \n__________________________________________________________________________________________________\nadd_55 (Add) (None, 16, 16, 256) 0 conv2d_168[0][0] \n add_54[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_167 (BatchN (None, 16, 16, 256) 1024 add_55[0][0] \n__________________________________________________________________________________________________\nactivation_167 (Activation) (None, 16, 16, 256) 0 batch_normalization_167[0][0] \n__________________________________________________________________________________________________\nconv2d_169 (Conv2D) (None, 16, 16, 64) 16384 activation_167[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_168 (BatchN (None, 16, 16, 64) 256 conv2d_169[0][0] \n__________________________________________________________________________________________________\nactivation_168 (Activation) (None, 16, 16, 64) 0 batch_normalization_168[0][0] \n__________________________________________________________________________________________________\nconv2d_170 (Conv2D) (None, 16, 16, 64) 36864 activation_168[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_169 (BatchN (None, 16, 16, 64) 256 conv2d_170[0][0] \n__________________________________________________________________________________________________\nactivation_169 (Activation) (None, 16, 16, 64) 0 batch_normalization_169[0][0] \n__________________________________________________________________________________________________\nconv2d_171 (Conv2D) (None, 16, 16, 256) 16384 activation_169[0][0] \n__________________________________________________________________________________________________\nadd_56 (Add) (None, 16, 16, 256) 0 conv2d_171[0][0] \n add_55[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_170 (BatchN (None, 16, 16, 256) 1024 add_56[0][0] \n__________________________________________________________________________________________________\nactivation_170 (Activation) (None, 16, 16, 256) 0 batch_normalization_170[0][0] \n__________________________________________________________________________________________________\nconv2d_172 (Conv2D) (None, 16, 16, 64) 16384 activation_170[0][0] 
\n__________________________________________________________________________________________________\nbatch_normalization_171 (BatchN (None, 16, 16, 64) 256 conv2d_172[0][0] \n__________________________________________________________________________________________________\nactivation_171 (Activation) (None, 16, 16, 64) 0 batch_normalization_171[0][0] \n__________________________________________________________________________________________________\nconv2d_173 (Conv2D) (None, 16, 16, 64) 36864 activation_171[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_172 (BatchN (None, 16, 16, 64) 256 conv2d_173[0][0] \n__________________________________________________________________________________________________\nactivation_172 (Activation) (None, 16, 16, 64) 0 batch_normalization_172[0][0] \n__________________________________________________________________________________________________\nconv2d_174 (Conv2D) (None, 16, 16, 256) 16384 activation_172[0][0] \n__________________________________________________________________________________________________\nadd_57 (Add) (None, 16, 16, 256) 0 conv2d_174[0][0] \n add_56[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_173 (BatchN (None, 16, 16, 256) 1024 add_57[0][0] \n__________________________________________________________________________________________________\nactivation_173 (Activation) (None, 16, 16, 256) 0 batch_normalization_173[0][0] \n__________________________________________________________________________________________________\nconv2d_175 (Conv2D) (None, 16, 16, 64) 16384 activation_173[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_174 (BatchN (None, 16, 16, 64) 256 conv2d_175[0][0] \n__________________________________________________________________________________________________\nactivation_174 (Activation) (None, 16, 16, 64) 0 batch_normalization_174[0][0] \n__________________________________________________________________________________________________\nconv2d_176 (Conv2D) (None, 16, 16, 64) 36864 activation_174[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_175 (BatchN (None, 16, 16, 64) 256 conv2d_176[0][0] \n__________________________________________________________________________________________________\nactivation_175 (Activation) (None, 16, 16, 64) 0 batch_normalization_175[0][0] \n__________________________________________________________________________________________________\nconv2d_177 (Conv2D) (None, 16, 16, 256) 16384 activation_175[0][0] \n__________________________________________________________________________________________________\nadd_58 (Add) (None, 16, 16, 256) 0 conv2d_177[0][0] \n add_57[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_176 (BatchN (None, 16, 16, 256) 1024 add_58[0][0] \n__________________________________________________________________________________________________\nactivation_176 (Activation) (None, 16, 16, 256) 0 batch_normalization_176[0][0] \n__________________________________________________________________________________________________\nconv2d_178 (Conv2D) (None, 16, 16, 64) 16384 activation_176[0][0] 
\n__________________________________________________________________________________________________\nbatch_normalization_177 (BatchN (None, 16, 16, 64) 256 conv2d_178[0][0] \n__________________________________________________________________________________________________\nactivation_177 (Activation) (None, 16, 16, 64) 0 batch_normalization_177[0][0] \n__________________________________________________________________________________________________\nconv2d_179 (Conv2D) (None, 16, 16, 64) 36864 activation_177[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_178 (BatchN (None, 16, 16, 64) 256 conv2d_179[0][0] \n__________________________________________________________________________________________________\nactivation_178 (Activation) (None, 16, 16, 64) 0 batch_normalization_178[0][0] \n__________________________________________________________________________________________________\nconv2d_180 (Conv2D) (None, 16, 16, 256) 16384 activation_178[0][0] \n__________________________________________________________________________________________________\nadd_59 (Add) (None, 16, 16, 256) 0 conv2d_180[0][0] \n add_58[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_179 (BatchN (None, 16, 16, 256) 1024 add_59[0][0] \n__________________________________________________________________________________________________\nactivation_179 (Activation) (None, 16, 16, 256) 0 batch_normalization_179[0][0] \n__________________________________________________________________________________________________\nconv2d_181 (Conv2D) (None, 16, 16, 64) 16384 activation_179[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_180 (BatchN (None, 16, 16, 64) 256 conv2d_181[0][0] \n__________________________________________________________________________________________________\nactivation_180 (Activation) (None, 16, 16, 64) 0 batch_normalization_180[0][0] \n__________________________________________________________________________________________________\nconv2d_182 (Conv2D) (None, 16, 16, 64) 36864 activation_180[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_181 (BatchN (None, 16, 16, 64) 256 conv2d_182[0][0] \n__________________________________________________________________________________________________\nactivation_181 (Activation) (None, 16, 16, 64) 0 batch_normalization_181[0][0] \n__________________________________________________________________________________________________\nconv2d_183 (Conv2D) (None, 16, 16, 256) 16384 activation_181[0][0] \n__________________________________________________________________________________________________\nadd_60 (Add) (None, 16, 16, 256) 0 conv2d_183[0][0] \n add_59[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_182 (BatchN (None, 16, 16, 256) 1024 add_60[0][0] \n__________________________________________________________________________________________________\nactivation_182 (Activation) (None, 16, 16, 256) 0 batch_normalization_182[0][0] \n__________________________________________________________________________________________________\nconv2d_184 (Conv2D) (None, 16, 16, 64) 16384 activation_182[0][0] 
\n__________________________________________________________________________________________________\nbatch_normalization_183 (BatchN (None, 16, 16, 64) 256 conv2d_184[0][0] \n__________________________________________________________________________________________________\nactivation_183 (Activation) (None, 16, 16, 64) 0 batch_normalization_183[0][0] \n__________________________________________________________________________________________________\nconv2d_185 (Conv2D) (None, 16, 16, 64) 36864 activation_183[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_184 (BatchN (None, 16, 16, 64) 256 conv2d_185[0][0] \n__________________________________________________________________________________________________\nactivation_184 (Activation) (None, 16, 16, 64) 0 batch_normalization_184[0][0] \n__________________________________________________________________________________________________\nconv2d_186 (Conv2D) (None, 16, 16, 256) 16384 activation_184[0][0] \n__________________________________________________________________________________________________\nadd_61 (Add) (None, 16, 16, 256) 0 conv2d_186[0][0] \n add_60[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_185 (BatchN (None, 16, 16, 256) 1024 add_61[0][0] \n__________________________________________________________________________________________________\nactivation_185 (Activation) (None, 16, 16, 256) 0 batch_normalization_185[0][0] \n__________________________________________________________________________________________________\nconv2d_187 (Conv2D) (None, 16, 16, 64) 16384 activation_185[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_186 (BatchN (None, 16, 16, 64) 256 conv2d_187[0][0] \n__________________________________________________________________________________________________\nactivation_186 (Activation) (None, 16, 16, 64) 0 batch_normalization_186[0][0] \n__________________________________________________________________________________________________\nconv2d_188 (Conv2D) (None, 16, 16, 64) 36864 activation_186[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_187 (BatchN (None, 16, 16, 64) 256 conv2d_188[0][0] \n__________________________________________________________________________________________________\nactivation_187 (Activation) (None, 16, 16, 64) 0 batch_normalization_187[0][0] \n__________________________________________________________________________________________________\nconv2d_189 (Conv2D) (None, 16, 16, 256) 16384 activation_187[0][0] \n__________________________________________________________________________________________________\nadd_62 (Add) (None, 16, 16, 256) 0 conv2d_189[0][0] \n add_61[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_188 (BatchN (None, 16, 16, 256) 1024 add_62[0][0] \n__________________________________________________________________________________________________\nactivation_188 (Activation) (None, 16, 16, 256) 0 batch_normalization_188[0][0] \n__________________________________________________________________________________________________\nconv2d_190 (Conv2D) (None, 16, 16, 64) 16384 activation_188[0][0] 
\n__________________________________________________________________________________________________\nbatch_normalization_189 (BatchN (None, 16, 16, 64) 256 conv2d_190[0][0] \n__________________________________________________________________________________________________\nactivation_189 (Activation) (None, 16, 16, 64) 0 batch_normalization_189[0][0] \n__________________________________________________________________________________________________\nconv2d_191 (Conv2D) (None, 16, 16, 64) 36864 activation_189[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_190 (BatchN (None, 16, 16, 64) 256 conv2d_191[0][0] \n__________________________________________________________________________________________________\nactivation_190 (Activation) (None, 16, 16, 64) 0 batch_normalization_190[0][0] \n__________________________________________________________________________________________________\nconv2d_192 (Conv2D) (None, 16, 16, 256) 16384 activation_190[0][0] \n__________________________________________________________________________________________________\nadd_63 (Add) (None, 16, 16, 256) 0 conv2d_192[0][0] \n add_62[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_191 (BatchN (None, 16, 16, 256) 1024 add_63[0][0] \n__________________________________________________________________________________________________\nactivation_191 (Activation) (None, 16, 16, 256) 0 batch_normalization_191[0][0] \n__________________________________________________________________________________________________\nconv2d_193 (Conv2D) (None, 16, 16, 64) 16384 activation_191[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_192 (BatchN (None, 16, 16, 64) 256 conv2d_193[0][0] \n__________________________________________________________________________________________________\nactivation_192 (Activation) (None, 16, 16, 64) 0 batch_normalization_192[0][0] \n__________________________________________________________________________________________________\nconv2d_194 (Conv2D) (None, 16, 16, 64) 36864 activation_192[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_193 (BatchN (None, 16, 16, 64) 256 conv2d_194[0][0] \n__________________________________________________________________________________________________\nactivation_193 (Activation) (None, 16, 16, 64) 0 batch_normalization_193[0][0] \n__________________________________________________________________________________________________\nconv2d_195 (Conv2D) (None, 16, 16, 256) 16384 activation_193[0][0] \n__________________________________________________________________________________________________\nadd_64 (Add) (None, 16, 16, 256) 0 conv2d_195[0][0] \n add_63[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_194 (BatchN (None, 16, 16, 256) 1024 add_64[0][0] \n__________________________________________________________________________________________________\nactivation_194 (Activation) (None, 16, 16, 256) 0 batch_normalization_194[0][0] \n__________________________________________________________________________________________________\nconv2d_196 (Conv2D) (None, 16, 16, 64) 16384 activation_194[0][0] 
\n__________________________________________________________________________________________________\nbatch_normalization_195 (BatchN (None, 16, 16, 64) 256 conv2d_196[0][0] \n__________________________________________________________________________________________________\nactivation_195 (Activation) (None, 16, 16, 64) 0 batch_normalization_195[0][0] \n__________________________________________________________________________________________________\nconv2d_197 (Conv2D) (None, 16, 16, 64) 36864 activation_195[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_196 (BatchN (None, 16, 16, 64) 256 conv2d_197[0][0] \n__________________________________________________________________________________________________\nactivation_196 (Activation) (None, 16, 16, 64) 0 batch_normalization_196[0][0] \n__________________________________________________________________________________________________\nconv2d_198 (Conv2D) (None, 16, 16, 256) 16384 activation_196[0][0] \n__________________________________________________________________________________________________\nadd_65 (Add) (None, 16, 16, 256) 0 conv2d_198[0][0] \n add_64[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_197 (BatchN (None, 16, 16, 256) 1024 add_65[0][0] \n__________________________________________________________________________________________________\nactivation_197 (Activation) (None, 16, 16, 256) 0 batch_normalization_197[0][0] \n__________________________________________________________________________________________________\nconv2d_199 (Conv2D) (None, 16, 16, 64) 16384 activation_197[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_198 (BatchN (None, 16, 16, 64) 256 conv2d_199[0][0] \n__________________________________________________________________________________________________\nactivation_198 (Activation) (None, 16, 16, 64) 0 batch_normalization_198[0][0] \n__________________________________________________________________________________________________\nconv2d_200 (Conv2D) (None, 16, 16, 64) 36864 activation_198[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_199 (BatchN (None, 16, 16, 64) 256 conv2d_200[0][0] \n__________________________________________________________________________________________________\nactivation_199 (Activation) (None, 16, 16, 64) 0 batch_normalization_199[0][0] \n__________________________________________________________________________________________________\nconv2d_201 (Conv2D) (None, 16, 16, 256) 16384 activation_199[0][0] \n__________________________________________________________________________________________________\nadd_66 (Add) (None, 16, 16, 256) 0 conv2d_201[0][0] \n add_65[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_200 (BatchN (None, 16, 16, 256) 1024 add_66[0][0] \n__________________________________________________________________________________________________\nactivation_200 (Activation) (None, 16, 16, 256) 0 batch_normalization_200[0][0] \n__________________________________________________________________________________________________\nconv2d_202 (Conv2D) (None, 16, 16, 64) 16384 activation_200[0][0] 
\n__________________________________________________________________________________________________\nbatch_normalization_201 (BatchN (None, 16, 16, 64) 256 conv2d_202[0][0] \n__________________________________________________________________________________________________\nactivation_201 (Activation) (None, 16, 16, 64) 0 batch_normalization_201[0][0] \n__________________________________________________________________________________________________\nconv2d_203 (Conv2D) (None, 16, 16, 64) 36864 activation_201[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_202 (BatchN (None, 16, 16, 64) 256 conv2d_203[0][0] \n__________________________________________________________________________________________________\nactivation_202 (Activation) (None, 16, 16, 64) 0 batch_normalization_202[0][0] \n__________________________________________________________________________________________________\nconv2d_204 (Conv2D) (None, 16, 16, 256) 16384 activation_202[0][0] \n__________________________________________________________________________________________________\nadd_67 (Add) (None, 16, 16, 256) 0 conv2d_204[0][0] \n add_66[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_203 (BatchN (None, 16, 16, 256) 1024 add_67[0][0] \n__________________________________________________________________________________________________\nactivation_203 (Activation) (None, 16, 16, 256) 0 batch_normalization_203[0][0] \n__________________________________________________________________________________________________\nconv2d_205 (Conv2D) (None, 16, 16, 64) 16384 activation_203[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_204 (BatchN (None, 16, 16, 64) 256 conv2d_205[0][0] \n__________________________________________________________________________________________________\nactivation_204 (Activation) (None, 16, 16, 64) 0 batch_normalization_204[0][0] \n__________________________________________________________________________________________________\nconv2d_206 (Conv2D) (None, 16, 16, 64) 36864 activation_204[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_205 (BatchN (None, 16, 16, 64) 256 conv2d_206[0][0] \n__________________________________________________________________________________________________\nactivation_205 (Activation) (None, 16, 16, 64) 0 batch_normalization_205[0][0] \n__________________________________________________________________________________________________\nconv2d_207 (Conv2D) (None, 16, 16, 256) 16384 activation_205[0][0] \n__________________________________________________________________________________________________\nadd_68 (Add) (None, 16, 16, 256) 0 conv2d_207[0][0] \n add_67[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_206 (BatchN (None, 16, 16, 256) 1024 add_68[0][0] \n__________________________________________________________________________________________________\nactivation_206 (Activation) (None, 16, 16, 256) 0 batch_normalization_206[0][0] \n__________________________________________________________________________________________________\nconv2d_208 (Conv2D) (None, 16, 16, 64) 16384 activation_206[0][0] \n" ], [ "model.compile(optimizer=Adam(),\n 
loss='categorical_crossentropy',\n              metrics=['accuracy'])\n\nprint(len(model.trainable_weights))", "1358\n" ], [ "train_datagen = ImageDataGenerator(rescale=1./255, preprocessing_function=bgr)\n\ntest_datagen = ImageDataGenerator(rescale=1./255, preprocessing_function=bgr)\n\ntrain_generator = train_datagen.flow_from_directory(train_dir,\n                                                    target_size=(img_height, img_width),\n                                                    batch_size=batch_size,\n                                                    shuffle=True,\n                                                    class_mode='categorical')\n\nvalidation_generator = train_datagen.flow_from_directory(validation_dir,\n                                                         target_size=(img_height, img_width),\n                                                         batch_size=batch_size,\n                                                         shuffle=False,\n                                                         class_mode='categorical')\n\ntest50_generator = test_datagen.flow_from_directory(test50_dir,\n                                                    target_size=(img_height, img_width),\n                                                    batch_size=batch_size,\n                                                    shuffle=False,\n                                                    class_mode='categorical')", "Found 60000 images belonging to 2 classes.\nFound 18000 images belonging to 2 classes.\nFound 20000 images belonging to 2 classes.\n" ], [ "# callback_list = [EarlyStopping(monitor='val_accuracy', patience=10),\n#                  ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3)]\n# history = model.fit_generator(train_generator,\n#                               steps_per_epoch=200,\n#                               epochs=100,\n#                               validation_data=validation_generator,\n#                               validation_steps=len(validation_generator),\n#                               callbacks=callback_list)", "_____no_output_____" ], [ "# model.save('/home/www/fake_detection/model/deepfake_resnet.h5')", "_____no_output_____" ], [ "# model = load_model('/home/www/fake_detection/model/deepfake_resnet.h5', compile=False)", "_____no_output_____" ], [ "# output = model.predict_generator(test50_generator, steps=len(test50_generator), verbose=1)\n# np.set_printoptions(formatter={'float': lambda x: \"{0:0.3f}\".format(x)})\n# print(test50_generator.class_indices)\n# print(output)", "_____no_output_____" ], [ "# output_score50 = []\n# output_class50 = []\n# answer_class50 = []\n# answer_class50_1 = []\n\n# for i in trange(len(test50_generator)):\n#     output50 = model.predict_on_batch(test50_generator[i][0])\n#     output_score50.append(output50)\n#     answer_class50.append(test50_generator[i][1])\n    \n# output_score50 = np.concatenate(output_score50)\n# answer_class50 = np.concatenate(answer_class50)\n\n# output_class50 = np.argmax(output_score50, axis=1)\n# answer_class50_1 = np.argmax(answer_class50, axis=1)\n\n# print(output_class50)\n# print(answer_class50_1)", "_____no_output_____"
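 ], [ "# Added sketch (not from the original run): a self-contained version of the\n# EER computation that the commented-out evaluation cell below performs.\n# Assumes roc_curve, brentq and interp1d are imported as they are used below;\n# y_true / y_score are placeholders for the binary labels and positive-class scores.\ndef equal_error_rate(y_true, y_score):\n    fpr, tpr, thresholds = roc_curve(y_true, y_score, pos_label=1.)\n    # EER is the operating point where false-accept rate equals false-reject\n    # rate, i.e. the root of 1 - x - tpr(x) with tpr interpolated over fpr.\n    eer = brentq(lambda x: 1. - x - interp1d(fpr, tpr)(x), 0., 1.)\n    threshold = interp1d(fpr, thresholds)(eer)\n    return eer, threshold", "_____no_output_____"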
 ], [ "# cm50 = confusion_matrix(answer_class50_1, output_class50)\n# report50 = classification_report(answer_class50_1, output_class50)\n\n# recall50 = cm50[0][0] / (cm50[0][0] + cm50[0][1])\n# fallout50 = cm50[1][0] / (cm50[1][0] + cm50[1][1])\n\n# fpr50, tpr50, thresholds50 = roc_curve(answer_class50_1, output_score50[:, 1], pos_label=1.)\n# eer50 = brentq(lambda x : 1. - x - interp1d(fpr50, tpr50)(x), 0., 1.)\n# thresh50 = interp1d(fpr50, thresholds50)(eer50)\n\n# print(report50)\n# print(cm50)\n# print(\"AUROC: %f\" %(roc_auc_score(answer_class50_1, output_score50[:, 1])))\n# print(thresh50)\n# print('test_acc: ', len(output_class50[np.equal(output_class50, answer_class50_1)]) / len(output_class50))", "_____no_output_____" ], [ "def cutout(img):\n    \"\"\"\n    # Function: BGR conversion + rescale to [0, 1] + random occlusion (cutout) masks\n    #           (the zero-padding / random-crop / mean-norm steps are kept commented out below)\n    # Arguments:\n        img: image\n    # Returns:\n        img\n    \"\"\"\n    img = bgr(img)\n    height = img.shape[0]\n    width = img.shape[1]\n    channels = img.shape[2]\n    MAX_CUTS = 3  # chance to get more cuts\n    MAX_LENGTH_MULTIPLIER = 10  # chance to get larger cuts\n    # 16 for cifar10, 8 for cifar100\n    \n    # Zero-padded (4, 4)\n#     img = np.pad(img, ((4,4),(4,4),(0,0)), mode='constant', constant_values=(0))\n    \n#     # random-crop 64x64\n#     dy, dx = height, width\n#     x = np.random.randint(0, width - dx + 1)\n#     y = np.random.randint(0, height - dy + 1)\n#     img = img[y:(y+dy), x:(x+dx)]\n    \n#     # mean norm\n#     mean = img.mean(keepdims=True)\n#     img -= mean\n\n    img *= 1./255\n    \n    mask = np.ones((height, width, channels), dtype=np.float32)\n    nb_cuts = np.random.randint(0, MAX_CUTS + 1)\n    \n    # cutout\n    for i in range(nb_cuts):\n        y = np.random.randint(height)\n        x = np.random.randint(width)\n        length = 4 * np.random.randint(1, MAX_LENGTH_MULTIPLIER+1)\n        \n        y1 = np.clip(y-length//2, 0, height)\n        y2 = np.clip(y+length//2, 0, height)\n        x1 = np.clip(x-length//2, 0, width)\n        x2 = np.clip(x+length//2, 0, width)\n        \n        mask[y1:y2, x1:x2, :] = 0.\n        \n    img = img * mask\n    return img", "_____no_output_____" ], [ "class ReLU6(Layer):\n    def __init__(self):\n        super().__init__(name=\"ReLU6\")\n        self.relu6 = ReLU(max_value=6, name=\"ReLU6\")\n\n    def call(self, input):\n        return self.relu6(input)\n\n\nclass HardSigmoid(Layer):\n    def __init__(self):\n        super().__init__()\n        self.relu6 = ReLU6()\n\n    def call(self, input):\n        return self.relu6(input + 3.0) / 6.0\n\n\nclass HardSwish(Layer):\n    def __init__(self):\n        super().__init__()\n        self.hard_sigmoid = HardSigmoid()\n\n    def call(self, input):\n        return input * self.hard_sigmoid(input)\n    \nclass Attention(Layer):\n    def __init__(self, ch, **kwargs):\n        super(Attention, self).__init__(**kwargs)\n        self.channels = ch\n        self.filters_f_g = self.channels // 8\n        self.filters_h = self.channels\n\n    def build(self, input_shape):\n        kernel_shape_f_g = (1, 1) + (self.channels, self.filters_f_g)\n        print(kernel_shape_f_g)\n        kernel_shape_h = (1, 1) + (self.channels, self.filters_h)\n\n        # Create a trainable weight variable for this layer:\n        self.gamma = self.add_weight(name='gamma', shape=[1], initializer='zeros', trainable=True)\n        self.kernel_f = self.add_weight(shape=kernel_shape_f_g,\n                                        initializer='glorot_uniform',\n                                        name='kernel_f')\n        self.kernel_g = self.add_weight(shape=kernel_shape_f_g,\n                                        initializer='glorot_uniform',\n                                        name='kernel_g')\n        self.kernel_h = self.add_weight(shape=kernel_shape_h,\n                                        initializer='glorot_uniform',\n                                        name='kernel_h')\n        self.bias_f = self.add_weight(shape=(self.filters_f_g,),\n                                      initializer='zeros',\n                                      name='bias_F')\n        self.bias_g = self.add_weight(shape=(self.filters_f_g,),\n                                      initializer='zeros',\n                                      name='bias_g')\n        self.bias_h = self.add_weight(shape=(self.filters_h,),\n                                      initializer='zeros',\n                                      name='bias_h')\n        super(Attention, self).build(input_shape)\n        # Set input spec.\n        self.input_spec = InputSpec(ndim=4,\n                                    axes={3: input_shape[-1]})\n        self.built = True\n\n\n    def call(self, x):\n        def hw_flatten(x):\n            return K.reshape(x, shape=[K.shape(x)[0], K.shape(x)[1]*K.shape(x)[2], K.shape(x)[-1]])\n\n        f = K.conv2d(x,\n                     kernel=self.kernel_f,\n                     strides=(1, 1), padding='same')  # [bs, h, w, c']\n        f = K.bias_add(f, self.bias_f)\n        g = K.conv2d(x,\n                     kernel=self.kernel_g,\n                     strides=(1, 1), padding='same')  # [bs, h, w, c']\n        g = K.bias_add(g, self.bias_g)\n        h = K.conv2d(x,\n                     kernel=self.kernel_h,\n                     strides=(1, 1), padding='same')  # [bs, h, w, c]\n        h = K.bias_add(h, self.bias_h)\n\n        s = tf.matmul(hw_flatten(g), hw_flatten(f), transpose_b=True)  # [bs, N, N]\n\n        beta = K.softmax(s, axis=-1)  # attention map\n\n        o = K.batch_dot(beta, hw_flatten(h))  # [bs, N, C]\n\n        o = K.reshape(o, shape=K.shape(x))  # [bs, h, w, C]\n        x = self.gamma * o + x\n\n        return x\n\n    def compute_output_shape(self, input_shape):\n        return input_shape", "_____no_output_____"
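 ], [ "# Added sanity check (illustrative, not part of the original run): build a\n# tiny graph to confirm the custom HardSwish and Attention layers preserve\n# the input shape. When Attention first builds, it prints its f/g kernel\n# shape, here (1, 1, 32, 4).\nchk_in = Input(shape=(8, 8, 32))\nchk_out = Attention(32)(HardSwish()(chk_in))\nchk_model = Model(chk_in, chk_out)\nassert chk_model.output_shape == (None, 8, 8, 32)", "_____no_output_____"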
 ], [ "ft_dir = '/mnt/a/fakedata/deepfake/finetune'\ntrain_gen_aug = ImageDataGenerator(shear_range=0,\n                                   zoom_range=0,\n                                   rotation_range=0.2,\n                                   width_shift_range=2.,\n                                   height_shift_range=2.,\n                                   horizontal_flip=True,\n                                   zca_whitening=False,\n                                   fill_mode='nearest',\n                                   preprocessing_function=cutout)\n\ntest_datagen = ImageDataGenerator(rescale=1./255, preprocessing_function=bgr)\n\nft_gen = train_gen_aug.flow_from_directory(ft_dir,\n                                           target_size=(img_height, img_width),\n                                           batch_size=batch_size,\n                                           shuffle=True,\n                                           class_mode='categorical')\n\n\nvalidation_generator = test_datagen.flow_from_directory(validation_dir,\n                                                        target_size=(img_height, img_width),\n                                                        batch_size=batch_size,\n                                                        shuffle=False,\n                                                        class_mode='categorical')\n\ntest50_generator = test_datagen.flow_from_directory(test50_dir,\n                                                    target_size=(img_height, img_width),\n                                                    batch_size=batch_size,\n                                                    shuffle=False,\n                                                    class_mode='categorical')", "Found 2000 images belonging to 2 classes.\nFound 18000 images belonging to 2 classes.\nFound 20000 images belonging to 2 classes.\n"
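 ], [ "# Added helper (illustrative, not part of the original run): draw one\n# augmented batch from ft_gen and confirm cutout kept pixel values in [0, 1]\n# with zeroed patches; channels are BGR because cutout itself calls bgr().\nxb, yb = next(ft_gen)\nprint(xb.shape, xb.min(), xb.max())\n# import matplotlib.pyplot as plt; plt.imshow(xb[0][..., ::-1]); plt.show()", "_____no_output_____"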
 ], [ "model_ft = load_model('/home/www/fake_detection/model/deepfake_resnet.h5', compile=False)\nfor i in range(3):\n    model_ft.layers.pop()\nim_in = Input(shape=(img_width, img_height, 3))\n\nbase_model = Model(img_input, x)\nbase_model.set_weights(model_ft.get_weights())\n# for i in range(len(base_model.layers) - 0):\n#     base_model.layers[i].trainable = False\n    \nx1 = base_model(im_in)  # base_model output: (8, 8, 512)\n########### Mobilenet block bneck 3x3 (512 --> 128) #################\nexpand1 = Conv2D(576, kernel_size=1, strides=1, kernel_regularizer=l2(1e-5), use_bias=False)(x1)\nexpand1 = BatchNormalization()(expand1)\nexpand1 = HardSwish()(expand1)\ndw1 = DepthwiseConv2D(kernel_size=(3,3), strides=(2,2), padding='same', depthwise_regularizer=l2(1e-5), use_bias=False)(expand1)\ndw1 = BatchNormalization()(dw1)\nse_gap1 = GlobalAveragePooling2D()(dw1)\nse_gap1 = Reshape([1, 1, -1])(se_gap1)\nse1 = Conv2D(144, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(se_gap1)\nse1 = Activation('relu')(se1)\nse1 = Conv2D(576, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(se1)\nse1 = HardSigmoid()(se1)\nse1 = Multiply()([expand1, se1])\nproject1 = HardSwish()(se1)\nproject1 = Conv2D(128, kernel_size=(1, 1), padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(project1)\nproject1 = BatchNormalization()(project1)\n\n########### Mobilenet block bneck 5x5 (128 --> 128) #################\nexpand2 = Conv2D(576, kernel_size=1, strides=1, kernel_regularizer=l2(1e-5), use_bias=False)(project1)\nexpand2 = BatchNormalization()(expand2)\nexpand2 = HardSwish()(expand2)\ndw2 = DepthwiseConv2D(kernel_size=(5,5), strides=(1,1), padding='same', depthwise_regularizer=l2(1e-5), use_bias=False)(expand2)\ndw2 = BatchNormalization()(dw2)\nse_gap2 = GlobalAveragePooling2D()(dw2)\nse_gap2 = Reshape([1, 1, -1])(se_gap2)\nse2 = Conv2D(144, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(se_gap2)\nse2 = Activation('relu')(se2)\nse2 = Conv2D(576, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(se2)\nse2 = HardSigmoid()(se2)\nse2 = Multiply()([expand2, se2])\nproject2 = HardSwish()(se2)\nproject2 = Conv2D(128, kernel_size=(1, 1), padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(project2)\nproject2 = BatchNormalization()(project2)\nproject2 = Add()([project1, project2])\n\n########### Mobilenet block bneck 5x5 (128 --> 128) #################\nexpand3 = Conv2D(576, kernel_size=1, strides=1, kernel_regularizer=l2(1e-5), use_bias=False)(project2)\nexpand3 = BatchNormalization()(expand3)\nexpand3 = HardSwish()(expand3)\ndw3 = DepthwiseConv2D(kernel_size=(5,5), strides=(1,1), padding='same', depthwise_regularizer=l2(1e-5), use_bias=False)(expand3)\ndw3 = BatchNormalization()(dw3)\nse_gap3 = GlobalAveragePooling2D()(dw3)\nse_gap3 = Reshape([1, 1, -1])(se_gap3)\nse3 = Conv2D(144, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(se_gap3)\nse3 = Activation('relu')(se3)\nse3 = Conv2D(576, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(se3)\nse3 = HardSigmoid()(se3)\nse3 = Multiply()([expand3, se3])\nproject3 = HardSwish()(se3)\nproject3 = Conv2D(128, kernel_size=(1, 1), padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(project3)\nproject3 = BatchNormalization()(project3)\nproject3 = Add()([project2, project3])\n\n########### Mobilenet block bneck 5x5 (128 --> 128) #################\nexpand4 = Conv2D(576, kernel_size=1, strides=1, kernel_regularizer=l2(1e-5), use_bias=False)(project3)\nexpand4 = BatchNormalization()(expand4)\nexpand4 = HardSwish()(expand4)\ndw4 = DepthwiseConv2D(kernel_size=(5,5), strides=(1,1), padding='same', depthwise_regularizer=l2(1e-5), use_bias=False)(expand4)\ndw4 = BatchNormalization()(dw4)\nse_gap4 = GlobalAveragePooling2D()(dw4)\nse_gap4 = Reshape([1, 1, -1])(se_gap4)\nse4 = Conv2D(144, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(se_gap4)\nse4 = Activation('relu')(se4)\nse4 = Conv2D(576, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(se4)\nse4 = HardSigmoid()(se4)\nse4 = Multiply()([expand4, se4])\nproject4 = HardSwish()(se4)\nproject4 = Conv2D(128, kernel_size=(1, 1), padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(project4)\nproject4 = BatchNormalization()(project4)\nproject4 = Add()([project3, project4])\n\n\n########## Classification ##########\nx2 = Conv2D(576, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(project4)\nx2 = BatchNormalization()(x2)\nx2 = HardSwish()(x2)\nx2 = GlobalAveragePooling2D()(x2)\n\n\n######### Image Attention Model #########\n### Block 1 ###\nx3 = SeparableConv2D(32, kernel_size=(3, 3), strides=(2,2), padding='same', depthwise_regularizer=l2(1e-5), pointwise_regularizer=l2(1e-5), use_bias=False)(im_in)\nx3 = BatchNormalization()(x3)\nx3 = Activation('relu')(x3)\nx3 = Attention(32)(x3)\n\n### Block 2 ###\nx4 = SeparableConv2D(64, kernel_size=(3, 3), strides=(2,2), padding='same', depthwise_regularizer=l2(1e-5), pointwise_regularizer=l2(1e-5), use_bias=False)(x3)\nx4 = BatchNormalization()(x4)\nx4 = Activation('relu')(x4)\nx4 = 
Attention(64)(x4)\n\n### Block 3 ###\nx5 = SeparableConv2D(128, kernel_size=(3, 3), strides=(2,2), padding='same', depthwise_regularizer=l2(1e-5), pointwise_regularizer=l2(1e-5), use_bias=False)(x4)\nx5 = BatchNormalization()(x5)\nx5 = Activation('relu')(x5)\nx5 = Attention(128)(x5)\n\n### final stage ###\nx6 = Conv2D(576, kernel_size=1, strides=1, padding='valid', kernel_regularizer=l2(1e-5), use_bias=False)(x5)\nx6 = BatchNormalization()(x6)\nx6 = Activation('relu')(x6)\nx6 = GlobalAveragePooling2D()(x6)\n\n######## final addition #########\n\nx2 = Add()([x2, x6])\nx2 = Dense(2, kernel_regularizer=l2(1e-5))(x2)\nx2 = Activation('softmax')(x2)\n\nmodel_top = Model(inputs=im_in, outputs=x2)\nmodel_top.summary()", "(1, 1, 32, 4)\n(1, 1, 64, 8)\n(1, 1, 128, 16)\nModel: \"model_3\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_2 (InputLayer) (None, 64, 64, 3) 0 \n__________________________________________________________________________________________________\nmodel_2 (Model) (None, 8, 8, 512) 17764512 input_2[0][0] \n__________________________________________________________________________________________________\nconv2d_455 (Conv2D) (None, 8, 8, 576) 294912 model_2[1][0] \n__________________________________________________________________________________________________\nbatch_normalization_452 (BatchN (None, 8, 8, 576) 2304 conv2d_455[0][0] \n__________________________________________________________________________________________________\nhard_swish_1 (HardSwish) (None, 8, 8, 576) 0 batch_normalization_452[0][0] \n__________________________________________________________________________________________________\ndepthwise_conv2d_1 (DepthwiseCo (None, 4, 4, 576) 5184 hard_swish_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_453 (BatchN (None, 4, 4, 576) 2304 depthwise_conv2d_1[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling2d_2 (Glo (None, 576) 0 batch_normalization_453[0][0] \n__________________________________________________________________________________________________\nreshape_1 (Reshape) (None, 1, 1, 576) 0 global_average_pooling2d_2[0][0] \n__________________________________________________________________________________________________\nconv2d_456 (Conv2D) (None, 1, 1, 144) 82944 reshape_1[0][0] \n__________________________________________________________________________________________________\nactivation_453 (Activation) (None, 1, 1, 144) 0 conv2d_456[0][0] \n__________________________________________________________________________________________________\nconv2d_457 (Conv2D) (None, 1, 1, 576) 82944 activation_453[0][0] \n__________________________________________________________________________________________________\nhard_sigmoid_2 (HardSigmoid) (None, 1, 1, 576) 0 conv2d_457[0][0] \n__________________________________________________________________________________________________\nmultiply_1 (Multiply) (None, 8, 8, 576) 0 hard_swish_1[0][0] \n hard_sigmoid_2[0][0] \n__________________________________________________________________________________________________\nhard_swish_2 (HardSwish) (None, 8, 8, 576) 0 multiply_1[0][0] 
\n__________________________________________________________________________________________________\nconv2d_458 (Conv2D) (None, 8, 8, 128) 73728 hard_swish_2[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_454 (BatchN (None, 8, 8, 128) 512 conv2d_458[0][0] \n__________________________________________________________________________________________________\nconv2d_459 (Conv2D) (None, 8, 8, 576) 73728 batch_normalization_454[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_455 (BatchN (None, 8, 8, 576) 2304 conv2d_459[0][0] \n__________________________________________________________________________________________________\nhard_swish_3 (HardSwish) (None, 8, 8, 576) 0 batch_normalization_455[0][0] \n__________________________________________________________________________________________________\ndepthwise_conv2d_2 (DepthwiseCo (None, 8, 8, 576) 14400 hard_swish_3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_456 (BatchN (None, 8, 8, 576) 2304 depthwise_conv2d_2[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling2d_3 (Glo (None, 576) 0 batch_normalization_456[0][0] \n__________________________________________________________________________________________________\nreshape_2 (Reshape) (None, 1, 1, 576) 0 global_average_pooling2d_3[0][0] \n__________________________________________________________________________________________________\nconv2d_460 (Conv2D) (None, 1, 1, 144) 82944 reshape_2[0][0] \n__________________________________________________________________________________________________\nactivation_454 (Activation) (None, 1, 1, 144) 0 conv2d_460[0][0] \n__________________________________________________________________________________________________\nconv2d_461 (Conv2D) (None, 1, 1, 576) 82944 activation_454[0][0] \n__________________________________________________________________________________________________\nhard_sigmoid_5 (HardSigmoid) (None, 1, 1, 576) 0 conv2d_461[0][0] \n__________________________________________________________________________________________________\nmultiply_2 (Multiply) (None, 8, 8, 576) 0 hard_swish_3[0][0] \n hard_sigmoid_5[0][0] \n__________________________________________________________________________________________________\nhard_swish_4 (HardSwish) (None, 8, 8, 576) 0 multiply_2[0][0] \n__________________________________________________________________________________________________\nconv2d_462 (Conv2D) (None, 8, 8, 128) 73728 hard_swish_4[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_457 (BatchN (None, 8, 8, 128) 512 conv2d_462[0][0] \n__________________________________________________________________________________________________\nadd_151 (Add) (None, 8, 8, 128) 0 batch_normalization_454[0][0] \n batch_normalization_457[0][0] \n__________________________________________________________________________________________________\nconv2d_463 (Conv2D) (None, 8, 8, 576) 73728 add_151[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_458 (BatchN (None, 8, 8, 576) 2304 conv2d_463[0][0] 
\n__________________________________________________________________________________________________\nhard_swish_5 (HardSwish) (None, 8, 8, 576) 0 batch_normalization_458[0][0] \n__________________________________________________________________________________________________\ndepthwise_conv2d_3 (DepthwiseCo (None, 8, 8, 576) 14400 hard_swish_5[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_459 (BatchN (None, 8, 8, 576) 2304 depthwise_conv2d_3[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling2d_4 (Glo (None, 576) 0 batch_normalization_459[0][0] \n__________________________________________________________________________________________________\nreshape_3 (Reshape) (None, 1, 1, 576) 0 global_average_pooling2d_4[0][0] \n__________________________________________________________________________________________________\nconv2d_464 (Conv2D) (None, 1, 1, 144) 82944 reshape_3[0][0] \n__________________________________________________________________________________________________\nactivation_455 (Activation) (None, 1, 1, 144) 0 conv2d_464[0][0] \n__________________________________________________________________________________________________\nconv2d_465 (Conv2D) (None, 1, 1, 576) 82944 activation_455[0][0] \n__________________________________________________________________________________________________\nhard_sigmoid_8 (HardSigmoid) (None, 1, 1, 576) 0 conv2d_465[0][0] \n__________________________________________________________________________________________________\nmultiply_3 (Multiply) (None, 8, 8, 576) 0 hard_swish_5[0][0] \n hard_sigmoid_8[0][0] \n__________________________________________________________________________________________________\nhard_swish_6 (HardSwish) (None, 8, 8, 576) 0 multiply_3[0][0] \n__________________________________________________________________________________________________\nconv2d_466 (Conv2D) (None, 8, 8, 128) 73728 hard_swish_6[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_460 (BatchN (None, 8, 8, 128) 512 conv2d_466[0][0] \n__________________________________________________________________________________________________\nadd_152 (Add) (None, 8, 8, 128) 0 add_151[0][0] \n batch_normalization_460[0][0] \n__________________________________________________________________________________________________\nconv2d_467 (Conv2D) (None, 8, 8, 576) 73728 add_152[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_461 (BatchN (None, 8, 8, 576) 2304 conv2d_467[0][0] \n__________________________________________________________________________________________________\nhard_swish_7 (HardSwish) (None, 8, 8, 576) 0 batch_normalization_461[0][0] \n__________________________________________________________________________________________________\ndepthwise_conv2d_4 (DepthwiseCo (None, 8, 8, 576) 14400 hard_swish_7[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_462 (BatchN (None, 8, 8, 576) 2304 depthwise_conv2d_4[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d_1 (SeparableCo (None, 32, 32, 32) 123 input_2[0][0] 
\n__________________________________________________________________________________________________\nglobal_average_pooling2d_5 (Glo (None, 576) 0 batch_normalization_462[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_465 (BatchN (None, 32, 32, 32) 128 separable_conv2d_1[0][0] \n__________________________________________________________________________________________________\nreshape_4 (Reshape) (None, 1, 1, 576) 0 global_average_pooling2d_5[0][0] \n__________________________________________________________________________________________________\nactivation_457 (Activation) (None, 32, 32, 32) 0 batch_normalization_465[0][0] \n__________________________________________________________________________________________________\nconv2d_468 (Conv2D) (None, 1, 1, 144) 82944 reshape_4[0][0] \n__________________________________________________________________________________________________\nattention_1 (Attention) (None, 32, 32, 32) 1321 activation_457[0][0] \n__________________________________________________________________________________________________\nactivation_456 (Activation) (None, 1, 1, 144) 0 conv2d_468[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d_2 (SeparableCo (None, 16, 16, 64) 2336 attention_1[0][0] \n__________________________________________________________________________________________________\nconv2d_469 (Conv2D) (None, 1, 1, 576) 82944 activation_456[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_466 (BatchN (None, 16, 16, 64) 256 separable_conv2d_2[0][0] \n__________________________________________________________________________________________________\nhard_sigmoid_11 (HardSigmoid) (None, 1, 1, 576) 0 conv2d_469[0][0] \n__________________________________________________________________________________________________\nactivation_458 (Activation) (None, 16, 16, 64) 0 batch_normalization_466[0][0] \n__________________________________________________________________________________________________\nmultiply_4 (Multiply) (None, 8, 8, 576) 0 hard_swish_7[0][0] \n hard_sigmoid_11[0][0] \n__________________________________________________________________________________________________\nattention_2 (Attention) (None, 16, 16, 64) 5201 activation_458[0][0] \n__________________________________________________________________________________________________\nhard_swish_8 (HardSwish) (None, 8, 8, 576) 0 multiply_4[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d_3 (SeparableCo (None, 8, 8, 128) 8768 attention_2[0][0] \n__________________________________________________________________________________________________\nconv2d_470 (Conv2D) (None, 8, 8, 128) 73728 hard_swish_8[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_467 (BatchN (None, 8, 8, 128) 512 separable_conv2d_3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_463 (BatchN (None, 8, 8, 128) 512 conv2d_470[0][0] \n__________________________________________________________________________________________________\nactivation_459 (Activation) (None, 8, 8, 128) 0 batch_normalization_467[0][0] 
\n__________________________________________________________________________________________________\nadd_153 (Add) (None, 8, 8, 128) 0 add_152[0][0] \n batch_normalization_463[0][0] \n__________________________________________________________________________________________________\nattention_3 (Attention) (None, 8, 8, 128) 20641 activation_459[0][0] \n__________________________________________________________________________________________________\nconv2d_471 (Conv2D) (None, 8, 8, 576) 73728 add_153[0][0] \n__________________________________________________________________________________________________\nconv2d_472 (Conv2D) (None, 8, 8, 576) 73728 attention_3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_464 (BatchN (None, 8, 8, 576) 2304 conv2d_471[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_468 (BatchN (None, 8, 8, 576) 2304 conv2d_472[0][0] \n__________________________________________________________________________________________________\nhard_swish_9 (HardSwish) (None, 8, 8, 576) 0 batch_normalization_464[0][0] \n__________________________________________________________________________________________________\nactivation_460 (Activation) (None, 8, 8, 576) 0 batch_normalization_468[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling2d_6 (Glo (None, 576) 0 hard_swish_9[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling2d_7 (Glo (None, 576) 0 activation_460[0][0] \n__________________________________________________________________________________________________\nadd_154 (Add) (None, 576) 0 global_average_pooling2d_6[0][0] \n global_average_pooling2d_7[0][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 2) 1154 add_154[0][0] \n__________________________________________________________________________________________________\nactivation_461 (Activation) (None, 2) 0 dense_2[0][0] \n==================================================================================================\nTotal params: 19,500,440\nTrainable params: 19,364,104\nNon-trainable params: 136,336\n__________________________________________________________________________________________________\n" ], [ "# optimizer = SGD(lr=1e-3, momentum=0.9, nesterov=True)\noptimizer = Adam()\nmodel_top.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['acc'])\ncallback_list = [EarlyStopping(monitor='val_acc', patience=30), \n ReduceLROnPlateau(monitor='loss', factor=np.sqrt(0.5), cooldown=0, patience=5, min_lr=0.5e-5)]\noutput = model_top.fit_generator(ft_gen, steps_per_epoch=200, epochs=300,\n validation_data=validation_generator, validation_steps=len(validation_generator), callbacks=callback_list)", "WARNING:tensorflow:From /home/www/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. 
Please use tf.compat.v1.global_variables instead.\n\nEpoch 1/300\n200/200 [==============================] - 911s 5s/step - loss: 0.7435 - acc: 0.5371 - val_loss: 0.5061 - val_acc: 0.4991\nEpoch 2/300\n200/200 [==============================] - 430s 2s/step - loss: 0.6966 - acc: 0.5987 - val_loss: 0.1962 - val_acc: 0.5000\nEpoch 3/300\n200/200 [==============================] - 428s 2s/step - loss: 0.6746 - acc: 0.6183 - val_loss: 0.5532 - val_acc: 0.6563\nEpoch 4/300\n200/200 [==============================] - 428s 2s/step - loss: 0.6516 - acc: 0.6374 - val_loss: 0.9090 - val_acc: 0.6892\nEpoch 5/300\n200/200 [==============================] - 429s 2s/step - loss: 0.6301 - acc: 0.6631 - val_loss: 0.9729 - val_acc: 0.6841\nEpoch 6/300\n200/200 [==============================] - 436s 2s/step - loss: 0.6224 - acc: 0.6656 - val_loss: 0.9122 - val_acc: 0.7251\nEpoch 7/300\n200/200 [==============================] - 433s 2s/step - loss: 0.6070 - acc: 0.6753 - val_loss: 1.4977 - val_acc: 0.6043\nEpoch 8/300\n200/200 [==============================] - 421s 2s/step - loss: 0.5911 - acc: 0.6905 - val_loss: 1.0674 - val_acc: 0.6934\nEpoch 9/300\n200/200 [==============================] - 422s 2s/step - loss: 0.5729 - acc: 0.7008 - val_loss: 0.7848 - val_acc: 0.7319\nEpoch 10/300\n200/200 [==============================] - 433s 2s/step - loss: 0.5652 - acc: 0.7033 - val_loss: 0.8461 - val_acc: 0.7388\nEpoch 11/300\n200/200 [==============================] - 433s 2s/step - loss: 0.5601 - acc: 0.7055 - val_loss: 1.0242 - val_acc: 0.7182\nEpoch 12/300\n200/200 [==============================] - 432s 2s/step - loss: 0.5550 - acc: 0.7144 - val_loss: 1.8160 - val_acc: 0.6923\nEpoch 13/300\n200/200 [==============================] - 423s 2s/step - loss: 0.5424 - acc: 0.7273 - val_loss: 0.9624 - val_acc: 0.7443\nEpoch 14/300\n200/200 [==============================] - 429s 2s/step - loss: 0.5328 - acc: 0.7295 - val_loss: 0.7828 - val_acc: 0.7576\nEpoch 15/300\n200/200 [==============================] - 433s 2s/step - loss: 0.5285 - acc: 0.7322 - val_loss: 1.4517 - val_acc: 0.6921\nEpoch 16/300\n200/200 [==============================] - 438s 2s/step - loss: 0.5173 - acc: 0.7397 - val_loss: 0.8260 - val_acc: 0.7653\nEpoch 17/300\n200/200 [==============================] - 443s 2s/step - loss: 0.5199 - acc: 0.7371 - val_loss: 0.5713 - val_acc: 0.7814\nEpoch 18/300\n200/200 [==============================] - 442s 2s/step - loss: 0.5142 - acc: 0.7427 - val_loss: 0.8598 - val_acc: 0.7743\nEpoch 19/300\n200/200 [==============================] - 443s 2s/step - loss: 0.5006 - acc: 0.7532 - val_loss: 1.5125 - val_acc: 0.7098\nEpoch 20/300\n200/200 [==============================] - 444s 2s/step - loss: 0.4919 - acc: 0.7573 - val_loss: 1.1405 - val_acc: 0.7390\nEpoch 21/300\n200/200 [==============================] - 439s 2s/step - loss: 0.4799 - acc: 0.7643 - val_loss: 2.1038 - val_acc: 0.6425\nEpoch 22/300\n200/200 [==============================] - 438s 2s/step - loss: 0.4787 - acc: 0.7641 - val_loss: 1.0172 - val_acc: 0.7439\nEpoch 23/300\n200/200 [==============================] - 440s 2s/step - loss: 0.4811 - acc: 0.7623 - val_loss: 1.9231 - val_acc: 0.6682\nEpoch 24/300\n200/200 [==============================] - 440s 2s/step - loss: 0.4642 - acc: 0.7734 - val_loss: 0.4432 - val_acc: 0.7412\nEpoch 25/300\n200/200 [==============================] - 438s 2s/step - loss: 0.4601 - acc: 0.7810 - val_loss: 0.6838 - val_acc: 0.8076\nEpoch 26/300\n200/200 [==============================] - 441s 2s/step - loss: 0.4584 
- acc: 0.7773 - val_loss: 0.8662 - val_acc: 0.7587\nEpoch 27/300\n200/200 [==============================] - 442s 2s/step - loss: 0.4546 - acc: 0.7819 - val_loss: 0.9614 - val_acc: 0.7909\nEpoch 28/300\n200/200 [==============================] - 348s 2s/step - loss: 0.4452 - acc: 0.7872 - val_loss: 0.9591 - val_acc: 0.8072\nEpoch 29/300\n200/200 [==============================] - 431s 2s/step - loss: 0.4465 - acc: 0.7892 - val_loss: 0.7146 - val_acc: 0.8108\nEpoch 30/300\n200/200 [==============================] - 437s 2s/step - loss: 0.4382 - acc: 0.7900 - val_loss: 1.7979 - val_acc: 0.7203\nEpoch 31/300\n200/200 [==============================] - 439s 2s/step - loss: 0.4315 - acc: 0.7930 - val_loss: 0.5380 - val_acc: 0.8239\nEpoch 32/300\n200/200 [==============================] - 440s 2s/step - loss: 0.4314 - acc: 0.7962 - val_loss: 0.9796 - val_acc: 0.7729\nEpoch 33/300\n200/200 [==============================] - 312s 2s/step - loss: 0.4254 - acc: 0.8003 - val_loss: 0.7153 - val_acc: 0.8252\nEpoch 34/300\n200/200 [==============================] - 297s 1s/step - loss: 0.4207 - acc: 0.8016 - val_loss: 0.5743 - val_acc: 0.8158\nEpoch 35/300\n200/200 [==============================] - 297s 1s/step - loss: 0.4238 - acc: 0.8020 - val_loss: 0.3165 - val_acc: 0.7823\nEpoch 36/300\n200/200 [==============================] - 296s 1s/step - loss: 0.4116 - acc: 0.8114 - val_loss: 0.7533 - val_acc: 0.8116\nEpoch 37/300\n200/200 [==============================] - 295s 1s/step - loss: 0.4056 - acc: 0.8054 - val_loss: 1.4711 - val_acc: 0.7821\nEpoch 38/300\n200/200 [==============================] - 297s 1s/step - loss: 0.4160 - acc: 0.8074 - val_loss: 0.3567 - val_acc: 0.8331\nEpoch 39/300\n200/200 [==============================] - 296s 1s/step - loss: 0.3935 - acc: 0.8152 - val_loss: 1.2253 - val_acc: 0.8025\nEpoch 40/300\n200/200 [==============================] - 296s 1s/step - loss: 0.4027 - acc: 0.8126 - val_loss: 0.5905 - val_acc: 0.8443\nEpoch 41/300\n200/200 [==============================] - 298s 1s/step - loss: 0.3881 - acc: 0.8198 - val_loss: 1.4184 - val_acc: 0.7646\nEpoch 42/300\n200/200 [==============================] - 298s 1s/step - loss: 0.3973 - acc: 0.8157 - val_loss: 1.4537 - val_acc: 0.7626\nEpoch 43/300\n200/200 [==============================] - 292s 1s/step - loss: 0.3941 - acc: 0.8174 - val_loss: 1.2902 - val_acc: 0.8183\nEpoch 44/300\n200/200 [==============================] - 219s 1s/step - loss: 0.3839 - acc: 0.8232 - val_loss: 0.6550 - val_acc: 0.8492\nEpoch 45/300\n200/200 [==============================] - 209s 1s/step - loss: 0.3850 - acc: 0.8195 - val_loss: 1.5019 - val_acc: 0.8129\nEpoch 46/300\n200/200 [==============================] - 210s 1s/step - loss: 0.3872 - acc: 0.8215 - val_loss: 0.5263 - val_acc: 0.8156\nEpoch 47/300\n200/200 [==============================] - 210s 1s/step - loss: 0.3767 - acc: 0.8256 - val_loss: 0.4267 - val_acc: 0.8556\nEpoch 48/300\n200/200 [==============================] - 209s 1s/step - loss: 0.3762 - acc: 0.8294 - val_loss: 0.9952 - val_acc: 0.8247\nEpoch 49/300\n200/200 [==============================] - 210s 1s/step - loss: 0.3713 - acc: 0.8285 - val_loss: 0.3819 - val_acc: 0.8157\nEpoch 50/300\n200/200 [==============================] - 209s 1s/step - loss: 0.3690 - acc: 0.8310 - val_loss: 0.8029 - val_acc: 0.8361\nEpoch 51/300\n200/200 [==============================] - 208s 1s/step - loss: 0.3716 - acc: 0.8297 - val_loss: 1.6216 - val_acc: 0.7282\nEpoch 52/300\n200/200 [==============================] - 208s 1s/step - loss: 
0.3578 - acc: 0.8372 - val_loss: 0.5241 - val_acc: 0.8477\nEpoch 53/300\n200/200 [==============================] - 210s 1s/step - loss: 0.3661 - acc: 0.8372 - val_loss: 1.1911 - val_acc: 0.8071\nEpoch 54/300\n200/200 [==============================] - 209s 1s/step - loss: 0.3547 - acc: 0.8364 - val_loss: 0.5890 - val_acc: 0.8311\nEpoch 55/300\n200/200 [==============================] - 209s 1s/step - loss: 0.3561 - acc: 0.8404 - val_loss: 0.6161 - val_acc: 0.8562\nEpoch 56/300\n200/200 [==============================] - 209s 1s/step - loss: 0.3641 - acc: 0.8344 - val_loss: 0.9273 - val_acc: 0.8243\nEpoch 57/300\n200/200 [==============================] - 209s 1s/step - loss: 0.3491 - acc: 0.8429 - val_loss: 0.5289 - val_acc: 0.8567\nEpoch 58/300\n200/200 [==============================] - 209s 1s/step - loss: 0.3515 - acc: 0.8410 - val_loss: 0.9164 - val_acc: 0.8599\nEpoch 59/300\n200/200 [==============================] - 209s 1s/step - loss: 0.3360 - acc: 0.8524 - val_loss: 0.9430 - val_acc: 0.8425\nEpoch 60/300\n" ], [ "output_score50 = []\noutput_class50 = []\nanswer_class50 = []\nanswer_class50_1 =[]\n\nfor i in trange(len(test50_generator)):\n output50 = model_top.predict_on_batch(test50_generator[i][0])\n output_score50.append(output50)\n answer_class50.append(test50_generator[i][1])\n \noutput_score50 = np.concatenate(output_score50)\nanswer_class50 = np.concatenate(answer_class50)\n\noutput_class50 = np.argmax(output_score50, axis=1)\nanswer_class50_1 = np.argmax(answer_class50, axis=1)\n\nprint(output_class50)\nprint(answer_class50_1)", "100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 313/313 [04:06<00:00, 1.27it/s]" ], [ "cm50 = confusion_matrix(answer_class50_1, output_class50)\nreport50 = classification_report(answer_class50_1, output_class50)\n\nrecall50 = cm50[0][0] / (cm50[0][0] + cm50[0][1])\nfallout50 = cm50[1][0] / (cm50[1][0] + cm50[1][1])\n\nfpr50, tpr50, thresholds50 = roc_curve(answer_class50_1, output_score50[:, 1], pos_label=1.)\neer50 = brentq(lambda x : 1. - x - interp1d(fpr50, tpr50)(x), 0., 1.)\nthresh50 = interp1d(fpr50, thresholds50)(eer50)\n\nprint(report50)\nprint(cm50)\nprint(\"AUROC: %f\" %(roc_auc_score(answer_class50_1, output_score50[:, 1])))\nprint(thresh50)\nprint('test_acc: ', len(output_class50[np.equal(output_class50, answer_class50_1)]) / len(output_class50))", " precision recall f1-score support\n\n 0 0.91 0.95 0.93 10000\n 1 0.94 0.91 0.92 10000\n\n accuracy 0.93 20000\n macro avg 0.93 0.93 0.93 20000\nweighted avg 0.93 0.93 0.93 20000\n\n[[9454 546]\n [ 948 9052]]\nAUROC: 0.975839\n0.3461155891418459\ntest_acc: 0.9253\n" ], [ "model_top.save('/home/www/fake_detection/model/deepfake_resnet_ft.h5')", "_____no_output_____" ] ] ]
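The layer summary above contains a squeeze-and-excitation style gate: GlobalAveragePooling2D → Reshape → 1×1 Conv2D (576 → 144) → activation → 1×1 Conv2D (144 → 576) → HardSigmoid → Multiply. The `HardSigmoid`, `HardSwish` and `Attention` layers are custom to this notebook; the sketch below is a generic Keras reconstruction of that gating pattern, not the author's exact implementation.

```python
from keras.layers import Activation, Conv2D, GlobalAveragePooling2D, Multiply, Reshape

def se_gate(inputs, channels, reduction=4):
    # Squeeze: collapse each feature map to one scalar per channel
    x = GlobalAveragePooling2D()(inputs)
    x = Reshape((1, 1, channels))(x)
    # Excitation: bottleneck 1x1 convolutions (576 -> 144 -> 576 in the summary above)
    x = Conv2D(channels // reduction, kernel_size=1, activation='relu')(x)
    x = Conv2D(channels, kernel_size=1)(x)
    # Gate values in [0, 1]; the notebook uses a custom HardSigmoid layer instead
    x = Activation('hard_sigmoid')(x)
    # Rescale the original feature map channel-wise
    return Multiply()([inputs, x])
```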
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ece2788d68dc2ca1f492e1ac3cea371ea14c498e
668,491
ipynb
Jupyter Notebook
Bioinformatica - Correlation.ipynb
LucaCappelletti94/snv_classifier
15539f2ab44a4b200945ca0a61827d1fdf32f618
[ "MIT" ]
1
2018-10-03T09:11:46.000Z
2018-10-03T09:11:46.000Z
Bioinformatica - Correlation.ipynb
LucaCappelletti94/snv_classifier
15539f2ab44a4b200945ca0a61827d1fdf32f618
[ "MIT" ]
null
null
null
Bioinformatica - Correlation.ipynb
LucaCappelletti94/snv_classifier
15539f2ab44a4b200945ca0a61827d1fdf32f618
[ "MIT" ]
null
null
null
4,610.282759
366,764
0.96169
[ [ [ "import pandas as pd\nimport numpy as np\nfrom matplotlib import pyplot as plt\nimport seaborn as sns", "_____no_output_____" ], [ "file_path = 'Mendelian.train.tsv'\ntraining_dataframe=pd.read_csv(file_path, sep='\\t')", "_____no_output_____" ], [ "corr = training_dataframe.corr()", "_____no_output_____" ], [ "plt.rcParams['figure.figsize'] = [20, 20]\nplt.axis('scaled')\nsns.heatmap(corr, \n annot=True,\n xticklabels=corr.columns,\n yticklabels=corr.columns)", "_____no_output_____" ], [ "dropped = training_dataframe.drop(columns=[\"CpGobsExp\", \"CpGperCpG\", \"dbVARCount\", \"mamPhyloP46way\"])", "_____no_output_____" ], [ "dropped_corr = dropped.corr()", "_____no_output_____" ], [ "plt.rcParams['figure.figsize'] = [20, 20]\nplt.axis('scaled')\nsns.heatmap(dropped_corr, \n annot=True,\n xticklabels=dropped_corr.columns,\n yticklabels=dropped_corr.columns)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
ece27c3da996b2b7e5bdde29197213a895202855
32,950
ipynb
Jupyter Notebook
Challenge.ipynb
shumeiberk/Movies_ETL1
780429ea3b1249d4bcc28bfd35284a7af3639ca1
[ "MIT" ]
null
null
null
Challenge.ipynb
shumeiberk/Movies_ETL1
780429ea3b1249d4bcc28bfd35284a7af3639ca1
[ "MIT" ]
null
null
null
Challenge.ipynb
shumeiberk/Movies_ETL1
780429ea3b1249d4bcc28bfd35284a7af3639ca1
[ "MIT" ]
null
null
null
105.271565
1,704
0.648042
[ [ [ "import json\nimport pandas as pd\nimport numpy as np\nimport re\nimport time\nimport psycopg2\nfrom sqlalchemy import create_engine\nfrom config import db_password\n\n\n\n\ndef clean_movie(movie):\n movie = dict(movie) #create a non-destructive copy\n alt_titles = {}\n \n \n # combine alternate titles into one list\n for key in ['Also known as','Arabic','Cantonese','Chinese','French',\n 'Hangul','Hebrew','Hepburn','Japanese','Literally',\n 'Mandarin','McCune-Reischauer','Original title','Polish',\n 'Revised Romanization','Romanized','Russian',\n 'Simplified','Traditional','Yiddish']:\n if key in movie:\n alt_titles[key] = movie[key]\n movie.pop(key)\n if len(alt_titles) > 0:\n movie['alt_titles'] = alt_titles\n\n # merge column names\n def change_column_name(old_name, new_name):\n if old_name in movie:\n movie[new_name] = movie.pop(old_name)\n change_column_name('Adaptation by', 'Writer(s)')\n change_column_name('Country of origin', 'Country')\n change_column_name('Directed by', 'Director')\n change_column_name('Distributed by', 'Distributor')\n change_column_name('Edited by', 'Editor(s)')\n change_column_name('Length', 'Running time')\n change_column_name('Original release', 'Release date')\n change_column_name('Music by', 'Composer(s)')\n change_column_name('Produced by', 'Producer(s)')\n change_column_name('Producer', 'Producer(s)')\n change_column_name('Productioncompanies ', 'Production company(s)')\n change_column_name('Productioncompany ', 'Production company(s)')\n change_column_name('Released', 'Release Date')\n change_column_name('Release Date', 'Release date')\n change_column_name('Screen story by', 'Writer(s)')\n change_column_name('Screenplay by', 'Writer(s)')\n change_column_name('Story by', 'Writer(s)')\n change_column_name('Theme music composer', 'Composer(s)')\n change_column_name('Written by', 'Writer(s)')\n\n return movie\n\n\ndef extract_transform_load(wiki_file, kaggle_file, ratings_file):\n \n with open(wiki_file, mode='r') as file:\n wiki_movies_raw = json.load(file)\n \n kaggle_metadata = pd.read_csv(kaggle_file)\n ratings = pd.read_csv(ratings_file)\n\n wiki_movies = [movie for movie in wiki_movies_raw\n if ('Director' in movie or 'Directed by' in movie)\n and 'imdb_link' in movie]\n clean_movies = [clean_movie(movie) for movie in wiki_movies]\n wiki_movies_df = pd.DataFrame(clean_movies)\n\n #Assume wiki data still contains IMDB id\n try:\n wiki_movies_df['imdb_id'] = wiki_movies_df['imdb_link'].str.extract(r'(tt\\d{7})')\n wiki_movies_df.drop_duplicates(subset='imdb_id', inplace=True)\n except Exception as e:\n print(e)\n\n wiki_columns_to_keep = [column for column in wiki_movies_df.columns if wiki_movies_df[column].isnull().sum() < len(wiki_movies_df) * 0.9]\n wiki_movies_df = wiki_movies_df[wiki_columns_to_keep]\n\n \n box_office = wiki_movies_df['Box office'].dropna() \n box_office = box_office.apply(lambda x: ' '.join(x) if type(x) == list else x)\n\n \n form_one = r'\\$\\d+\\.?\\d*\\s*[mb]illion'\n form_two = r'\\$\\d{1,3}(?:,\\d{3})+'\n \n def parse_dollars(s):\n # if s is not a string, return NaN\n if type(s) != str:\n return np.nan\n\n # if input is of the form $###.# million\n if re.match(r'\\$\\s*\\d+\\.?\\d*\\s*milli?on', s, flags=re.IGNORECASE):\n\n # remove dollar sign and \" million\"\n s = re.sub('\\$|\\s|[a-zA-Z]','', s)\n\n # convert to float and multiply by a million\n value = float(s) * 10**6\n\n # return value\n return value\n\n # if input is of the form $###.# billion\n elif re.match(r'\\$\\s*\\d+\\.?\\d*\\s*billi?on', s, 
flags=re.IGNORECASE):\n\n # remove dollar sign and \" billion\"\n s = re.sub('\\$|\\s|[a-zA-Z]','', s)\n\n # convert to float and multiply by a billion\n value = float(s) * 10**9\n\n # return value\n return value\n\n # if input is of the form $###,###,###\n elif re.match(r'\\$\\s*\\d{1,3}(?:[,\\.]\\d{3})+(?!\\s[mb]illion)', s, flags=re.IGNORECASE):\n\n # remove dollar sign and commas\n s = re.sub('\\$|,','', s)\n\n # convert to float\n value = float(s)\n\n # return value\n return value\n\n # otherwise, return NaN\n else:\n return np.nan\n \n \n wiki_movies_df['box_office'] = box_office.str.extract(f'({form_one}|{form_two})', flags=re.IGNORECASE)[0].apply(parse_dollars)\n wiki_movies_df.drop('Box office', axis=1, inplace=True) \n \n budget = wiki_movies_df['Budget'].dropna()\n budget = budget.map(lambda x: ' '.join(x) if type(x) == list else x)\n budget = budget.str.replace(r'\\$.*[-โ€”โ€“](?![a-z])', '$', regex=True)\n\n wiki_movies_df['budget'] = budget.str.extract(f'({form_one}|{form_two})', flags=re.IGNORECASE)[0].apply(parse_dollars)\n wiki_movies_df.drop('Budget', axis=1, inplace=True)\n \n release_date = wiki_movies_df['Release date'].dropna().apply(lambda x: ' '.join(x) if type(x) == list else x)\n date_form_one = r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\\s[123]\\d,\\s\\d{4}'\n date_form_two = r'\\d{4}.[01]\\d.[123]\\d'\n date_form_three = r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\\s\\d{4}'\n date_form_four = r'\\d{4}'\n wiki_movies_df['release_date'] = pd.to_datetime(release_date.str.extract(f'({date_form_one}|{date_form_two}|{date_form_three}|{date_form_four})')[0], infer_datetime_format=True)\n \n running_time = wiki_movies_df['Running time'].dropna().apply(lambda x: ' '.join(x) if type(x) == list else x)\n running_time_extract = running_time.str.extract(r'(\\d+)\\s*ho?u?r?s?\\s*(\\d*)|(\\d+)\\s*m')\n running_time_extract = running_time_extract.apply(lambda col: pd.to_numeric(col, errors='coerce')).fillna(0)\n wiki_movies_df['running_time'] = running_time_extract.apply(lambda row: row[0]*60 + row[1] if row[2] == 0 else row[2], axis=1)\n \n kaggle_metadata = kaggle_metadata[kaggle_metadata['adult'] == 'False'].drop('adult',axis='columns')\n kaggle_metadata['video'] = kaggle_metadata['video'] == 'True'\n kaggle_metadata['budget'] = kaggle_metadata['budget'].astype(int)\n kaggle_metadata['id'] = pd.to_numeric(kaggle_metadata['id'], errors='raise')\n kaggle_metadata['popularity'] = pd.to_numeric(kaggle_metadata['popularity'], errors='raise')\n kaggle_metadata['release_date'] = pd.to_datetime(kaggle_metadata['release_date'])\n\n\n movies_df = pd.merge(wiki_movies_df, kaggle_metadata, on='imdb_id', suffixes=['_wiki','_kaggle'])\n movies_df.drop(columns=['title_wiki','release_date_wiki','Language','Production company(s)'], inplace=True)\n \n def fill_missing_kaggle_data(df, kaggle_column, wiki_column):\n df[kaggle_column] = df.apply(\n lambda row: row[wiki_column] if row[kaggle_column] == 0 else row[kaggle_column]\n , axis=1)\n df.drop(columns=wiki_column, inplace=True)\n\n fill_missing_kaggle_data(movies_df, 'runtime', 'running_time')\n fill_missing_kaggle_data(movies_df, 'budget_kaggle', 'budget_wiki')\n fill_missing_kaggle_data(movies_df, 'revenue', 'box_office')\n \n movies_df = movies_df.loc[:, ['imdb_id','id','title_kaggle','original_title','tagline','belongs_to_collection','url','imdb_link',\n 
'runtime','budget_kaggle','revenue','release_date_kaggle','popularity','vote_average','vote_count',\n 'genres','original_language','overview','spoken_languages','Country',\n 'production_companies','production_countries','Distributor',\n 'Producer(s)','Director','Starring','Cinematography','Editor(s)','Writer(s)','Composer(s)','Based on'\n ]]\n\n movies_df.rename({'id':'kaggle_id',\n 'title_kaggle':'title',\n 'url':'wikipedia_url',\n 'budget_kaggle':'budget',\n 'release_date_kaggle':'release_date',\n 'Country':'country',\n 'Distributor':'distributor',\n 'Producer(s)':'producers',\n 'Director':'director',\n 'Starring':'starring',\n 'Cinematography':'cinematography',\n 'Editor(s)':'editors',\n 'Writer(s)':'writers',\n 'Composer(s)':'composers',\n 'Based on':'based_on'\n }, axis='columns', inplace=True)\n\n rating_counts = ratings.groupby(['movieId','rating'], as_index=False).count().rename({'userId':'count'}, axis=1).pivot(index='movieId',columns='rating', values='count') \n rating_counts.columns = ['rating_' + str(col) for col in rating_counts.columns]\n movies_with_ratings_df = pd.merge(movies_df, rating_counts, left_on='kaggle_id', right_index=True, how='left')\n movies_with_ratings_df[rating_counts.columns] = movies_with_ratings_df[rating_counts.columns].fillna(0)\n\n db_string = f\"postgres://postgres:{db_password}@localhost:5432/movie_data\"\n engine = create_engine(db_string)\n \n movies_df.to_sql(name='movies', con=engine, if_exists='append')\n \n rows_imported = 0\n # get the start_time from time.time()\n start_time = time.time()\n # stream the ratings file in chunks; use the ratings_file argument rather than the global file_dir\n for data in pd.read_csv(ratings_file, chunksize=1000000):\n print(f'importing rows {rows_imported} to {rows_imported + len(data)}...', end='')\n data.to_sql(name='ratings', con=engine, if_exists='append')\n rows_imported += len(data)\n \n # add elapsed time to final print out\n print(f'Done. {time.time() - start_time} total seconds elapsed')\n\n\n\nfile_dir = \"./Resources\"\nwiki_file = f'{file_dir}/wikipedia.movies.json'\nkaggle_file = f'{file_dir}/movies_metadata.csv'\nratings_file = f'{file_dir}/ratings.csv'\n \nextract_transform_load(wiki_file, kaggle_file, ratings_file)\n", "C:\\Users\\ous1\\AppData\\Local\\Continuum\\anaconda3\\envs\\PythonData\\lib\\site-packages\\IPython\\core\\interactiveshell.py:3254: DtypeWarning: Columns (10) have mixed types.Specify dtype option on import or set low_memory=False.\n  if (await self.run_code(code, result, async_=asy)):\n" ] ] ]
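As a quick sanity check of the money formats handled by `parse_dollars`, here is a minimal standalone sketch with made-up sample strings; the two patterns are restated verbatim from the function above:

```python
import re

form_one = r'\$\d+\.?\d*\s*[mb]illion'  # e.g. "$1.2 million", "$3 billion"
form_two = r'\$\d{1,3}(?:,\d{3})+'      # e.g. "$123,456,789"

for s in ['$1.2 million', '$3 billion', '$123,456,789', 'not a number']:
    match = re.search(f'({form_one}|{form_two})', s, flags=re.IGNORECASE)
    print(s, '->', match.group(1) if match else None)
```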
[ "code" ]
[ [ "code" ] ]
ece291f4a8b114444451161dd6414afeb20cda26
412,152
ipynb
Jupyter Notebook
03a-tools-webscraping/notebook-webscraping_v4.ipynb
afo/dataXkuwait
3640e25fa7bfb906e470d62b1fa2fde372f74bd4
[ "Apache-2.0" ]
2
2018-11-08T13:42:19.000Z
2018-11-08T17:54:56.000Z
03a-tools-webscraping/notebook-webscraping_v4.ipynb
afo/dataXkuwait
3640e25fa7bfb906e470d62b1fa2fde372f74bd4
[ "Apache-2.0" ]
null
null
null
03a-tools-webscraping/notebook-webscraping_v4.ipynb
afo/dataXkuwait
3640e25fa7bfb906e470d62b1fa2fde372f74bd4
[ "Apache-2.0" ]
1
2018-11-08T17:55:01.000Z
2018-11-08T17:55:01.000Z
64.268205
74,573
0.520027
[ [ [ "<img src=\"http://i67.tinypic.com/2jcbwcw.png\" align=\"left\"></img><br><br><br><br>\n\n\n## Notebook: Web Scraping & Web Crawling\n\n**Author List**: Alexander Fred-Ojala\n\n**Original Sources**: https://www.crummy.com/software/BeautifulSoup/bs4/doc/ & https://www.dataquest.io/blog/web-scraping-tutorial-python/\n\n**License**: Feel free to do whatever you want to with this code\n\n**Compatibility:** Python 2.x and 3.x", "_____no_output_____" ], [ "### Other Popular Web Scraping tools & libraries\n\nThis notebook mainly goes over how to get data with the Python packages `requests` and `BeautifulSoup`. However, there are many other Python packages that can be used for scraping.\n\nTwo very popular and widely used are:\n\n* **[Selenium:](http://selenium-python.readthedocs.io/)** Pyton scraper that can act as a human when visiting websites, almost like a macro. Makes sense of modern Javascript based websites built with React, Angular etc.\n* **[Scrapy:](https://scrapy.org/)** For automated scripting and has a lot of built in tools for web crawling and scraping that can facilitate the process (e.g. time based, IP rotation etc). Mainly script based scraping for larger projects.\n\n\n### API: Application Programming Interfaces\n\nMany services offer API's to grab data (Twitter, Wikipedia, Reddit etc.) We have already used an API in the Pandas notebook when we grabbed stock data in CSV format to do analysis. If a good API exists, it is usually the preferred method of obtaining data.", "_____no_output_____" ], [ "# Helpful webscraping Cheat Sheet\n\nIf you want a good documentation of functions in requests and Beautifulsoup (as well as how to save scarped data to an SQLite database), this is a good resource:\n\n- https://blog.hartleybrody.com/web-scraping-cheat-sheet/", "_____no_output_____" ], [ "# Table of Contents\n(Clickable document links)\n___\n\n### [0: Pre-steup](#sec0)\nDocument setup and Python 2 and Python 3 compability\n\n### [1: Simple webscraping intro](#sec1)\n\nSimple example of webscraping on a premade HTML template\n\n### [2: Scrape Data-X Schedule](#sec2)\n\nFind and scrape the current Data-X schedule. \n\n### [3: IMDB top 250 movies w MetaScore](#sec3)\n\nScrape IMDB and compare MetaScore to user reviews.\n\n### [4: Scrape Images and Files](#sec4)\n\nScrape a website of Images, PDF's, CSV data or any other file type.\n\n## [Breakout Problem: Scrape Weather Data](#secBK)\n\nScrape real time weather data in Berkeley.\n\n\n### [Appendix](#sec5)\n\n#### [Scrape Bloomberg sitemap for political news headlines](#sec6)\n\n#### [Webcrawl Twitter, recusrive URL link fetcher + depth](#sec7)\n\n#### [SEO, visualize webite categories as a tree](#sec8)", "_____no_output_____" ], [ "<a id='sec0'></a>\n## Pre-Setup", "_____no_output_____" ] ], [ [ "# stretch Jupyter coding blocks to fit screen\nfrom IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:90% !important; }</style>\")) \n# if 100% it would fit the screen", "_____no_output_____" ], [ "# make it run on py2 and py3\nfrom __future__ import division, print_function", "_____no_output_____" ] ], [ [ "<a id='sec1'></a>\n# Webscraping intro\n\nIn order to scrape content from a website we first need to download the HTML contents of the website. This can be done with the Python library **requests** that makes HTTP requests on the internet (with its `.get` method).\n\nThen when we want to extract certain information from a website we use the scraping tool **BeautifulSoup4** (import bs4). 
In order to parse information with BeautifulSoup we have to create a soup object from the HTML source code of a website.", "_____no_output_____" ] ], [ [ "import requests # The requests library is an \n# HTTP library for getting and posting content etc.\n\nimport bs4 as bs # BeautifulSoup4 is a Python library \n# for pulling data out of HTML and XML code.\n# We can query markup languages for specific content", "_____no_output_____" ] ], [ [ "# Scraping a simple website", "_____no_output_____" ] ], [ [ "source = requests.get(\"https://alex.fo/other/data-x/\") \n# a GET request will download the HTML webpage.", "_____no_output_____" ], [ "print(source) # If <Response [200]> then \n# the website has been downloaded successfully", "<Response [200]>\n" ] ], [ [ "**Different types of responses:**\nGenerally, a status code starting with 2 indicates success, while a status code starting with 4 or 5 indicates an error. Frequent appearance of status codes like 404 (Not Found), 403 (Forbidden) or 408 (Request Timeout) might indicate that you got blocked.", "_____no_output_____" ] ], 
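As a small aside (not in the original notebook), failures can be made louder with explicit status handling; `raise_for_status()` turns 4xx/5xx responses into exceptions:

```python
import requests  # already imported above in the notebook

resp = requests.get("https://alex.fo/other/data-x/")
if resp.status_code == 200:
    print("Downloaded", len(resp.content), "bytes")
else:
    resp.raise_for_status()  # raises requests.exceptions.HTTPError on 4xx/5xx
```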
[ [ "print(source.content) # This is the HTML content of the website,\n# as you can see it's quite hard to decipher", "b'<!DOCTYPE html>\\n<html>\\n<head>\\n\\n<title>Data-X Webscrape Tutorial</title>\\n\\n<style>\\ndiv.container {\\n width: 100%;\\n border: 1px solid gray;\\n}\\n\\n.header {\\n color:green;\\n}\\n\\n#second {\\n font-style: italic;\\n}\\n\\n</style>\\n\\n</head>\\n\\n<body style=\"background-color: pink\">\\n\\n<h1 class=\"header\">Simple Data-X site</h1>\\n\\n\\n<h3 id=\"second\">This site is only live to be scraped.</h3>\\n\\n\\n<div class=\"container\">\\n<p>Some cool text in a container</p>\\n</div>\\n \\n\\n <h4> Random list </h4>\\n<nav class=\"regular_list\">\\n <ul>\\n <li><a href=\"https://en.wikipedia.org/wiki/London\">London</a></li>\\n <li><a href=\"https://en.wikipedia.org/wiki/Tokyo\">Tokyo</a></li>\\n </ul>\\n</nav>\\n\\n\\n\\n\\n <h2>Random London Information within p tags</h2>\\n\\n <p>London is the capital city of England. It is the most populous city in the United Kingdom, with a metropolitan area of over 13 million inhabitants.</p>\\n <p>Standing on the River Thames, London has been a major settlement for two millennia, its history going back to its founding by the Romans, who named it Londinium.</p>\\n\\n<footer>footer content</footer>\\n\\n</div>\\n\\n</body>\\n</html>\\n'\n" ], [ "print(type(source.content)) # type bytes in Python 3", "<class 'bytes'>\n" ], [ "# Convert source.content to a BeautifulSoup object \n# BeautifulSoup can parse (extract specific information from) HTML code\n\nsoup = bs.BeautifulSoup(source.content, features='html.parser') \n# we pass in the source content\n# features specifies what type of code we are parsing, \n# here 'html.parser' specifies that we want BeautifulSoup to parse HTML code", "_____no_output_____" ], [ "print(type(soup))", "<class 'bs4.BeautifulSoup'>\n" ], [ "print(soup) # looks a lot nicer!", "<!DOCTYPE html>\n\n<html>\n<head>\n<title>Data-X Webscrape Tutorial</title>\n<style>\ndiv.container {\n width: 100%;\n border: 1px solid gray;\n}\n\n.header {\n color:green;\n}\n\n#second {\n font-style: italic;\n}\n\n</style>\n</head>\n<body style=\"background-color: pink\">\n<h1 class=\"header\">Simple Data-X site</h1>\n<h3 id=\"second\">This site is only live to be scraped.</h3>\n<div class=\"container\">\n<p>Some cool text in a container</p>\n</div>\n<h4> Random list </h4>\n<nav class=\"regular_list\">\n<ul>\n<li><a href=\"https://en.wikipedia.org/wiki/London\">London</a></li>\n<li><a href=\"https://en.wikipedia.org/wiki/Tokyo\">Tokyo</a></li>\n</ul>\n</nav>\n<h2>Random London Information within p tags</h2>\n<p>London is the capital city of England. It is the most populous city in the United Kingdom, with a metropolitan area of over 13 million inhabitants.</p>\n<p>Standing on the River Thames, London has been a major settlement for two millennia, its history going back to its founding by the Romans, who named it Londinium.</p>\n<footer>footer content</footer>\n</body></html>\n\n\n\n" ] ] ], [ [ "Above we printed the HTML code of the website, decoded as a BeautifulSoup object.\n\n### HTML tags\n`<xxx> </xxx>` are HTML tags; they mark up certain sections, stylings etc. of the website. For more info: \nhttps://www.w3schools.com/tags/ref_byfunc.asp\n\nFull list of HTML tags: https://developer.mozilla.org/en-US/docs/Web/HTML/Element", "_____no_output_____" ], [ "---\n## `class` and `id`:\n\nclass and id are attributes of HTML tags; they are used as hooks to give unique styling to certain elements and an id to sections / parts of the page.\n\n- **id:** is a unique tag for a specific element (this often does not change)\n- **class:** specifies a class of objects. Several elements in the HTML code can have the same class.", "_____no_output_____" ], [ "### Suppose we want to extract content that is shown on the website", "_____no_output_____" ] ], [ [ 
"# Inside the <body> tag of the website is where all the main content is\nprint(soup.body)", "<body style=\"background-color: pink\">\n<h1 class=\"header\">Simple Data-X site</h1>\n<h3 id=\"second\">This site is only live to be scraped.</h3>\n<div class=\"container\">\n<p>Some cool text in a container</p>\n</div>\n<h4> Random list </h4>\n<nav class=\"regular_list\">\n<ul>\n<li><a href=\"https://en.wikipedia.org/wiki/London\">London</a></li>\n<li><a href=\"https://en.wikipedia.org/wiki/Tokyo\">Tokyo</a></li>\n</ul>\n</nav>\n<h2>Random London Information within p tags</h2>\n<p>London is the capital city of England. It is the most populous city in the United Kingdom, with a metropolitan area of over 13 million inhabitants.</p>\n<p>Standing on the River Thames, London has been a major settlement for two millennia, its history going back to its founding by the Romans, who named it Londinium.</p>\n<footer>footer content</footer>\n</body>\n" ], [ "print(soup.title) # Title of the website", "<title>Data-X Webscrape Tutorial</title>\n" ], [ "print(soup.find('title')) # same as .title", "<title>Data-X Webscrape Tutorial</title>\n" ], [ "# If we want to extract specific text\nprint(soup.find('p')) # will only return first <p> tag", "<p>Some cool text in a container</p>\n" ], [ "print(soup.find('p').text) # extracts the string within the <p> tag, strips it of the tag", "Some cool text in a container\n" ], [ "# If we want to extract all <p> tags\nprint(soup.find_all('p')) # returns list of all <p> tags", "[<p>Some cool text in a container</p>, <p>London is the capital city of England. It is the most populous city in the United Kingdom, with a metropolitan area of over 13 million inhabitants.</p>, <p>Standing on the River Thames, London has been a major settlement for two millennia, its history going back to its founding by the Romans, who named it Londinium.</p>]\n" ], [ "# we can also search for classes within all tags, using class_\n# note: the trailing _ distinguishes it from Python's reserved `class` keyword\n\nprint(soup.find(class_='header')) ", "<h1 class=\"header\">Simple Data-X site</h1>\n" ], [ "# We can also find tags with a specific id\n\nprint(soup.find(id='second'))", "<h3 id=\"second\">This site is only live to be scraped.</h3>\n" ], [ "print(soup.find_all(class_='regular_list')) # find_all returns a list, \n# even if there is only one object", "[<nav class=\"regular_list\">\n<ul>\n<li><a href=\"https://en.wikipedia.org/wiki/London\">London</a></li>\n<li><a href=\"https://en.wikipedia.org/wiki/Tokyo\">Tokyo</a></li>\n</ul>\n</nav>]\n" ], [ "for p in soup.find_all('p'): # print all text paragraphs on the webpage\n print(p.text)", "Some cool text in a container\nLondon is the capital city of England. It is the most populous city in the United Kingdom, with a metropolitan area of over 13 million inhabitants.\nStanding on the River Thames, London has been a major settlement for two millennia, its history going back to its founding by the Romans, who named it Londinium.\n" ], [ "# Extract links / urls\n# Links in HTML are usually coded as <a href=\"url\">\n# where the link is url\n\nprint(soup.a)\nprint(type(soup.a))\n", "<a href=\"https://en.wikipedia.org/wiki/London\">London</a>\n<class 'bs4.element.Tag'>\n" ], [ "soup.a.get('href') \n# to get the link from the href attribute", "_____no_output_____" ], [ "links = soup.find_all('a')", "_____no_output_____" ], [ "links", "_____no_output_____" ], [ "# quick aside: named placeholders in str.format\na = 'run'\nb = 'tomorrow'\n'i wanna {c} {d} {c}'.format(d=a,c=b)", "_____no_output_____" ], [ "# if we want to list links and their text info\n\nlinks = soup.find_all('a')\n\nfor l in links:\n print(\"Info about {}: \".format(l.text), l.get('href')) \n ", "Info about London: https://en.wikipedia.org/wiki/London\nInfo about Tokyo: https://en.wikipedia.org/wiki/Tokyo\n" ] ] ], [ [ "# Other useful scraping tips\n\n### robots.txt\n\nAlways check if a website has a `robots.txt` document specifying what parts of the site you're allowed to scrape (however, the website cannot prevent requests from getting its content, but I'd recommend you all to be nice). It may also contain information about the scraping frequency allowed etc.\n\nE.g. \n- http://www.imdb.com/robots.txt\n- http://www.nytimes.com/robots.txt\n\nA programmatic check with Python's standard library is sketched below.\n\n
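A minimal sketch of such a check with Python's standard-library `urllib.robotparser`, using the IMDB robots.txt linked above (the queried path is just an example):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("http://www.imdb.com/robots.txt")
rp.read()  # fetch and parse the robots.txt rules
# True if the rules allow any user-agent ("*") to fetch this example path
print(rp.can_fetch("*", "http://www.imdb.com/chart/top"))
```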
### user-agent\n\nWhen you're sending a request to a webpage (no matter if it comes from your computer, iPhone, or Python's requests package), you also include a user-agent. This lets the webserver know how to render the contents for you. You can also send user-agent information via a request (to specify who you are, for example, or to disguise that you're an automated scraper).\n\nFind your machine's / browser's true user agent here: https://www.whoishostingthis.com/tools/user-agent/", "_____no_output_____" ] ], [ [ "# user-agent example\n\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0',\n 'From': '[email protected]' \n}\n\nresponse = requests.get('http://alex.fo/other/data-x', headers=headers)\nprint(response)\nprint(response.headers) # the response will also have some meta information about the content", "<Response [200]>\n{'Date': 'Thu, 11 Oct 2018 20:23:10 GMT', 'Content-Type': 'text/html; charset=utf-8', 'Transfer-Encoding': 'chunked', 'Connection': 'keep-alive', 'Last-Modified': 'Fri, 28 Sep 2018 16:12:47 GMT', 'Vary': 'Accept-Encoding', 'Access-Control-Allow-Origin': '*', 'Expires': 'Thu, 11 Oct 2018 20:32:58 GMT', 'Cache-Control': 'max-age=600', 'X-GitHub-Request-Id': '88B8:711D:B3934:EB110:5BBFB12E', 'Expect-CT': 'max-age=604800, report-uri=\"https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct\"', 'Server': 'cloudflare', 'CF-RAY': '46840b00293522ac-LAX', 'Content-Encoding': 'gzip'}\n" ] ] ], [ [ "<a id='sec2'></a>\n\n# Data-X website Scraping\n### Now let us scrape the current Syllabus Schedule from the Data-X website\n", "_____no_output_____" ] ], [ [ "source = requests.get('https://data-x.blog/').content \n# get the source content", "_____no_output_____" ], [ "source", "_____no_output_____" ], [ "soup = bs.BeautifulSoup(source,'html.parser')", "_____no_output_____" ], [ "print(soup.prettify()) \n# the .prettify() method makes the HTML code more readable\n\n# as you can see this code is more difficult \n# to read than the simple example above,\n# mostly because this is a real Wordpress website", "<!DOCTYPE html>\n<html class=\"no-js no-svg\" lang=\"en-US\">\n <head>\n <meta charset=\"utf-8\"/>\n <meta content=\"width=device-width, initial-scale=1\" name=\"viewport\"/>\n <link href=\"http://gmpg.org/xfn/11\" rel=\"profile\"/>\n <script>\n (function(html){html.className = html.className.replace(/\\bno-js\\b/,'js')})(document.documentElement);\n </script>\n <title>\n Data-X at Berkeley – For Rapid Impact in Digital Transformation\n </title>\n <link href=\"//s0.wp.com\" rel=\"dns-prefetch\">\n <link href=\"//fonts.googleapis.com\" rel=\"dns-prefetch\"/>\n <link href=\"//s.w.org\" rel=\"dns-prefetch\"/>\n <link crossorigin=\"\" href=\"https://fonts.gstatic.com\" rel=\"preconnect\"/>\n <link href=\"https://data-x.blog/feed/\" rel=\"alternate\" title=\"Data-X at Berkeley » Feed\" type=\"application/rss+xml\"/>\n <link href=\"https://data-x.blog/comments/feed/\" rel=\"alternate\" title=\"Data-X at Berkeley » Comments Feed\" type=\"application/rss+xml\"/>\n <script type=\"text/javascript\">\n window._wpemojiSettings = 
{\"baseUrl\":\"https:\\/\\/s.w.org\\/images\\/core\\/emoji\\/11\\/72x72\\/\",\"ext\":\".png\",\"svgUrl\":\"https:\\/\\/s.w.org\\/images\\/core\\/emoji\\/11\\/svg\\/\",\"svgExt\":\".svg\",\"source\":{\"concatemoji\":\"https:\\/\\/data-x.blog\\/wp-includes\\/js\\/wp-emoji-release.min.js?ver=4.9.8\"}};\n\t\t\t!function(a,b,c){function d(a,b){var c=String.fromCharCode;l.clearRect(0,0,k.width,k.height),l.fillText(c.apply(this,a),0,0);var d=k.toDataURL();l.clearRect(0,0,k.width,k.height),l.fillText(c.apply(this,b),0,0);var e=k.toDataURL();return d===e}function e(a){var b;if(!l||!l.fillText)return!1;switch(l.textBaseline=\"top\",l.font=\"600 32px Arial\",a){case\"flag\":return!(b=d([55356,56826,55356,56819],[55356,56826,8203,55356,56819]))&&(b=d([55356,57332,56128,56423,56128,56418,56128,56421,56128,56430,56128,56423,56128,56447],[55356,57332,8203,56128,56423,8203,56128,56418,8203,56128,56421,8203,56128,56430,8203,56128,56423,8203,56128,56447]),!b);case\"emoji\":return b=d([55358,56760,9792,65039],[55358,56760,8203,9792,65039]),!b}return!1}function f(a){var c=b.createElement(\"script\");c.src=a,c.defer=c.type=\"text/javascript\",b.getElementsByTagName(\"head\")[0].appendChild(c)}var g,h,i,j,k=b.createElement(\"canvas\"),l=k.getContext&&k.getContext(\"2d\");for(j=Array(\"flag\",\"emoji\"),c.supports={everything:!0,everythingExceptFlag:!0},i=0;i<j.length;i++)c.supports[j[i]]=e(j[i]),c.supports.everything=c.supports.everything&&c.supports[j[i]],\"flag\"!==j[i]&&(c.supports.everythingExceptFlag=c.supports.everythingExceptFlag&&c.supports[j[i]]);c.supports.everythingExceptFlag=c.supports.everythingExceptFlag&&!c.supports.flag,c.DOMReady=!1,c.readyCallback=function(){c.DOMReady=!0},c.supports.everything||(h=function(){c.readyCallback()},b.addEventListener?(b.addEventListener(\"DOMContentLoaded\",h,!1),a.addEventListener(\"load\",h,!1)):(a.attachEvent(\"onload\",h),b.attachEvent(\"onreadystatechange\",function(){\"complete\"===b.readyState&&c.readyCallback()})),g=c.source||{},g.concatemoji?f(g.concatemoji):g.wpemoji&&g.twemoji&&(f(g.twemoji),f(g.wpemoji)))}(window,document,window._wpemojiSettings);\n </script>\n <style type=\"text/css\">\n img.wp-smiley,\nimg.emoji {\n\tdisplay: inline !important;\n\tborder: none !important;\n\tbox-shadow: none !important;\n\theight: 1em !important;\n\twidth: 1em !important;\n\tmargin: 0 .07em !important;\n\tvertical-align: -0.1em !important;\n\tbackground: none !important;\n\tpadding: 0 !important;\n}\n </style>\n <link href=\"https://fonts.googleapis.com/css?family=Libre+Franklin%3A300%2C300i%2C400%2C400i%2C600%2C600i%2C800%2C800i&amp;subset=latin%2Clatin-ext\" id=\"twentyseventeen-fonts-css\" media=\"all\" rel=\"stylesheet\" type=\"text/css\"/>\n <link href=\"https://data-x.blog/wp-content/themes/twentyseventeen/style.css?ver=4.9.8\" id=\"twentyseventeen-style-css\" media=\"all\" rel=\"stylesheet\" type=\"text/css\"/>\n <style id=\"twentyseventeen-style-inline-css\" type=\"text/css\">\n .site-content-contain {background-color: #ffffff; background-image: url(\"\"); background-position: left top; background-size: auto; background-repeat: repeat; background-attachment: scroll; }\n </style>\n <!--[if lt IE 9]>\n<link rel='stylesheet' id='twentyseventeen-ie8-css' href='https://data-x.blog/wp-content/themes/twentyseventeen/assets/css/ie8.css?ver=1.0' type='text/css' media='all' />\n<![endif]-->\n <link href=\"https://data-x.blog/wp-content/themes/twentyseventeen/assets/css/style-wpcom.css?ver=20170306\" id=\"twentyseventeen-wpcom-style-css\" media=\"all\" 
rel=\"stylesheet\" type=\"text/css\"/>\n <link href=\"https://c0.wp.com/p/jetpack/6.6.1/_inc/social-logos/social-logos.min.css\" id=\"social-logos-css\" media=\"all\" rel=\"stylesheet\" type=\"text/css\"/>\n <link href=\"https://c0.wp.com/p/jetpack/6.6.1/css/jetpack.css\" id=\"jetpack_css-css\" media=\"all\" rel=\"stylesheet\" type=\"text/css\"/>\n <script src=\"https://c0.wp.com/c/4.9.8/wp-includes/js/jquery/jquery.js\" type=\"text/javascript\">\n </script>\n <script src=\"https://c0.wp.com/c/4.9.8/wp-includes/js/jquery/jquery-migrate.min.js\" type=\"text/javascript\">\n </script>\n <!--[if lt IE 9]>\n<script type='text/javascript' src='https://data-x.blog/wp-content/themes/twentyseventeen/assets/js/html5.js?ver=3.7.3'></script>\n<![endif]-->\n <script src=\"https://c0.wp.com/p/jetpack/6.6.1/_inc/build/postmessage.min.js\" type=\"text/javascript\">\n </script>\n <script src=\"https://c0.wp.com/p/jetpack/6.6.1/_inc/build/jquery.jetpack-resize.min.js\" type=\"text/javascript\">\n </script>\n <link href=\"https://data-x.blog/wp-json/\" rel=\"https://api.w.org/\"/>\n <link href=\"https://data-x.blog/xmlrpc.php?rsd\" rel=\"EditURI\" title=\"RSD\" type=\"application/rsd+xml\"/>\n <link href=\"https://data-x.blog/wp-includes/wlwmanifest.xml\" rel=\"wlwmanifest\" type=\"application/wlwmanifest+xml\"/>\n <link href=\"https://data-x.blog/\" rel=\"canonical\"/>\n <link href=\"https://wp.me/P8boZu-2\" rel=\"shortlink\"/>\n <link href=\"https://data-x.blog/wp-json/oembed/1.0/embed?url=https%3A%2F%2Fdata-x.blog%2F\" rel=\"alternate\" type=\"application/json+oembed\"/>\n <link href=\"https://data-x.blog/wp-json/oembed/1.0/embed?url=https%3A%2F%2Fdata-x.blog%2F&amp;format=xml\" rel=\"alternate\" type=\"text/xml+oembed\"/>\n <link href=\"//widgets.wp.com\" rel=\"dns-prefetch\"/>\n <link href=\"//s0.wp.com\" rel=\"dns-prefetch\"/>\n <link href=\"//0.gravatar.com\" rel=\"dns-prefetch\"/>\n <link href=\"//1.gravatar.com\" rel=\"dns-prefetch\"/>\n <link href=\"//2.gravatar.com\" rel=\"dns-prefetch\"/>\n <link href=\"//jetpack.wordpress.com\" rel=\"dns-prefetch\"/>\n <link href=\"//s1.wp.com\" rel=\"dns-prefetch\"/>\n <link href=\"//s2.wp.com\" rel=\"dns-prefetch\"/>\n <link href=\"//public-api.wordpress.com\" rel=\"dns-prefetch\"/>\n <link href=\"//i0.wp.com\" rel=\"dns-prefetch\"/>\n <link href=\"//i1.wp.com\" rel=\"dns-prefetch\"/>\n <link href=\"//i2.wp.com\" rel=\"dns-prefetch\"/>\n <link href=\"//c0.wp.com\" rel=\"dns-prefetch\"/>\n <style type=\"text/css\">\n img#wpstats{display:none}\n </style>\n <meta content=\"This is just a short excerpt for the aboutย page.\" name=\"description\">\n <!-- Jetpack Open Graph Tags -->\n <meta content=\"website\" property=\"og:type\">\n <meta content=\"Data-X at Berkeley\" property=\"og:title\"/>\n <meta content=\"For Rapid Impact in Digital Transformation\" property=\"og:description\"/>\n <meta content=\"https://data-x.blog/\" property=\"og:url\"/>\n <meta content=\"Data-X at Berkeley\" property=\"og:site_name\"/>\n <meta content=\"https://data-x.blog/wp-content/uploads/2016/12/screen-shot-2017-05-04-at-6-59-17-pm-1024x574.png\" property=\"og:image\"/>\n <meta content=\"1024\" property=\"og:image:width\"/>\n <meta content=\"574\" property=\"og:image:height\"/>\n <meta content=\"en_US\" property=\"og:locale\"/>\n <meta content=\"About Data-X\" name=\"twitter:text:title\"/>\n <meta content=\"summary\" name=\"twitter:card\"/>\n <!-- End Jetpack Open Graph Tags -->\n <style id=\"wp-custom-css\" type=\"text/css\">\n /*Computer screen */\n@media screen and (min-width: 48em) 
{\n\t.twentyseventeen-front-page.has-header-image .custom-header-image {\n\t/*height: 1200px;*/\n\t/*height: 100vh;*/\n\theight: 35vh;\n\t/*max-height: 100%;*/\n\t/*overflow: hidden;*/\n\t}\n}\n\n/* Mobile screen*/\n.has-header-image.twentyseventeen-front-page .custom-header {\n\t/*display: table;*/\n\t/*height: 300px;*/\n\t/*height: 75vh;*/\n\theight: 35vh;\n\t/*width: 100%;*/\n}\n\n/* Computer screen with logged in user and admin bar showing on front end*/\n@media screen and (min-width: 48em) {\n\t.admin-bar.twentyseventeen-front-page.has-header-image .custom-header-image {\n\t/*height: calc(100vh - 32px);*/\n\theight: calc(50vh - 32px);\n\t}\n}\n\n.wrap {\nmax-width: 1100px !important;\n}\n\n.page.page-one-column:not(.twentyseventeen-front-page) #primary {\nmax-width: 100% !important;\n}\n\n@media screen and (min-width: 48em) {\n.wrap {\nmax-width: 1100px !important;\n}\n}\n\n@media screen and (min-width: 30em) {\n\n.page-one-column .panel-content .wrap {\nmax-width: 1100px !important;\n}\n}\n\n@media screen and (max-width: 650px) {\n\n.wrap {\nmax-width: 95% !important;\n}\n}\n </style>\n </meta>\n </meta>\n </link>\n </head>\n <body class=\"home page-template-default page page-id-2 twentyseventeen-front-page has-header-image page-one-column colors-light cannot-edit\">\n <div class=\"site\" id=\"page\">\n <a class=\"skip-link screen-reader-text\" href=\"#content\">\n Skip to content\n </a>\n <header class=\"site-header\" id=\"masthead\" role=\"banner\">\n <div class=\"custom-header\">\n <div class=\"custom-header-media\">\n <div class=\"wp-custom-header\" id=\"wp-custom-header\">\n <img alt=\"Data-X at Berkeley\" height=\"973\" sizes=\"100vw\" src=\"https://data-x.blog/wp-content/uploads/2018/04/cropped-AdobeStock_120712749-Converted.png\" srcset=\"https://i1.wp.com/data-x.blog/wp-content/uploads/2018/04/cropped-AdobeStock_120712749-Converted.png?w=2000&amp;ssl=1 2000w, https://i1.wp.com/data-x.blog/wp-content/uploads/2018/04/cropped-AdobeStock_120712749-Converted.png?resize=300%2C146&amp;ssl=1 300w, https://i1.wp.com/data-x.blog/wp-content/uploads/2018/04/cropped-AdobeStock_120712749-Converted.png?resize=768%2C374&amp;ssl=1 768w, https://i1.wp.com/data-x.blog/wp-content/uploads/2018/04/cropped-AdobeStock_120712749-Converted.png?resize=1024%2C498&amp;ssl=1 1024w, https://i1.wp.com/data-x.blog/wp-content/uploads/2018/04/cropped-AdobeStock_120712749-Converted.png?w=1288&amp;ssl=1 1288w, https://i1.wp.com/data-x.blog/wp-content/uploads/2018/04/cropped-AdobeStock_120712749-Converted.png?w=1932&amp;ssl=1 1932w\" width=\"2000\"/>\n </div>\n </div>\n <div class=\"site-branding\">\n <div class=\"wrap\">\n <div class=\"site-branding-text\">\n <h1 class=\"site-title\">\n <a href=\"https://data-x.blog/\" rel=\"home\">\n Data-X at Berkeley\n </a>\n </h1>\n <p class=\"site-description\">\n For Rapid Impact in Digital Transformation\n </p>\n </div>\n <!-- .site-branding-text -->\n </div>\n <!-- .wrap -->\n </div>\n <!-- .site-branding -->\n </div>\n <!-- .custom-header -->\n <div class=\"navigation-top\">\n <div class=\"wrap\">\n <nav aria-label=\"Top Menu\" class=\"main-navigation\" id=\"site-navigation\" role=\"navigation\">\n <button aria-controls=\"top-menu\" aria-expanded=\"false\" class=\"menu-toggle\">\n <svg aria-hidden=\"true\" class=\"icon icon-bars\" role=\"img\">\n <use href=\"#icon-bars\" xlink:href=\"#icon-bars\">\n </use>\n </svg>\n <svg aria-hidden=\"true\" class=\"icon icon-close\" role=\"img\">\n <use href=\"#icon-close\" xlink:href=\"#icon-close\">\n </use>\n </svg>\n Menu\n 
</button>\n <div class=\"menu-primary-container\">\n <ul class=\"menu\" id=\"top-menu\">\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom current-menu-item current_page_item menu-item-8\" id=\"menu-item-8\">\n <a href=\"/\">\n Home\n </a>\n </li>\n <li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-726\" id=\"menu-item-726\">\n <a href=\"https://data-x.blog/about-data-x/\">\n About Data-X\n </a>\n </li>\n <li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-183\" id=\"menu-item-183\">\n <a href=\"https://data-x.blog/resources/\">\n Resources\n </a>\n </li>\n <li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1055\" id=\"menu-item-1055\">\n <a href=\"https://data-x.blog/dx-online/\">\n Data-X Online\n </a>\n </li>\n <li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-102\" id=\"menu-item-102\">\n <a href=\"https://data-x.blog/syllabus/\">\n Berkeley Syllabus\n </a>\n </li>\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-has-children menu-item-719\" id=\"menu-item-719\">\n <a href=\"http://data-x.blog/projects\">\n Projects\n <svg aria-hidden=\"true\" class=\"icon icon-angle-down\" role=\"img\">\n <use href=\"#icon-angle-down\" xlink:href=\"#icon-angle-down\">\n </use>\n </svg>\n </a>\n <ul class=\"sub-menu\">\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-572\" id=\"menu-item-572\">\n <a href=\"http://data-x.blog/projects\">\n Completed Projects\n </a>\n </li>\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-737\" id=\"menu-item-737\">\n <a href=\"http://data-x.blog/project-ideas\">\n Project Ideas\n </a>\n </li>\n <li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-485\" id=\"menu-item-485\">\n <a href=\"https://data-x.blog/advisors/\">\n Advisors\n </a>\n </li>\n </ul>\n </li>\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-738\" id=\"menu-item-738\">\n <a href=\"http://data-x.blog/posts\">\n Posts\n </a>\n </li>\n <li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-129\" id=\"menu-item-129\">\n <a href=\"https://data-x.blog/project/\">\n Labs\n </a>\n </li>\n <li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-10\" id=\"menu-item-10\">\n <a href=\"https://data-x.blog/contact/\">\n Contact\n </a>\n </li>\n </ul>\n </div>\n <a class=\"menu-scroll-down\" href=\"#content\">\n <svg aria-hidden=\"true\" class=\"icon icon-arrow-right\" role=\"img\">\n <use href=\"#icon-arrow-right\" xlink:href=\"#icon-arrow-right\">\n </use>\n </svg>\n <span class=\"screen-reader-text\">\n Scroll down to content\n </span>\n </a>\n </nav>\n <!-- #site-navigation -->\n </div>\n <!-- .wrap -->\n </div>\n <!-- .navigation-top -->\n </header>\n <!-- #masthead -->\n <div class=\"site-content-contain\">\n <div class=\"site-content\" id=\"content\">\n <div class=\"content-area\" id=\"primary\">\n <main class=\"site-main\" id=\"main\" role=\"main\">\n <article class=\"twentyseventeen-panel post-2 page type-page status-publish hentry\" id=\"post-2\">\n <div class=\"panel-content\">\n <div class=\"wrap\">\n <header class=\"entry-header\">\n <h2 class=\"entry-title\">\n About Data-X\n </h2>\n </header>\n <!-- .entry-header -->\n <div class=\"entry-content\">\n <h2>\n Data-X: Enable Rapid Impact in a Digital World\n </h2>\n <p>\n Ikhlaq Sidhu, IEOR, UC Berkeley (\n <a 
href=\"http://data-x.blog/contact/\">\n contact\n </a>\n )\n <br/>\n Alexander Fred-Ojala, IEOR, UC Berkeley (\n <a href=\"http://data-x.blog/contact/\">\n contact\n </a>\n )\n </p>\n <p class=\"p1\">\n Data-X is a frameworkย designed at UC Berkeley for learning and applying AI, data science, and emerging technologies. Data-X fills a gap between theory and practiceย to empower the rapid development of data and AI projects that actually work. ย Data-X projects create new ventures, new research, and corporate innovations all over the world.\n </p>\n <p>\n <a href=\"http://scet.berkeley.edu/berkeley-x/data-ai-blockchain-projects-work-in-real-life/\" rel=\"noopener\" target=\"_blank\">\n Learn more from this article\n </a>\n </p>\n <h3>\n <strong>\n Undergraduate and Graduate Curriculum at Berkeley\n </strong>\n </h3>\n <p>\n At UC Berkeley, Data-X is offered as a 3 unit class called Applied Data Science with Venture Applications.\n </p>\n <p>\n <img alt=\"Screen-Shot-2017-05-04-at-6.59.17-PM-1024x574\" class=\"alignnone size-full wp-image-453\" data-attachment-id=\"453\" data-comments-opened=\"1\" data-image-description=\"\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screen-Shot-2017-05-04-at-6.59.17-PM-1024ร—574\" data-large-file=\"https://i2.wp.com/data-x.blog/wp-content/uploads/2016/12/screen-shot-2017-05-04-at-6-59-17-pm-1024x574.png?fit=644%2C361&amp;ssl=1\" data-medium-file=\"https://i2.wp.com/data-x.blog/wp-content/uploads/2016/12/screen-shot-2017-05-04-at-6-59-17-pm-1024x574.png?fit=300%2C168&amp;ssl=1\" data-orig-file=\"https://i2.wp.com/data-x.blog/wp-content/uploads/2016/12/screen-shot-2017-05-04-at-6-59-17-pm-1024x574.png?fit=1024%2C574&amp;ssl=1\" data-orig-size=\"1024,574\" data-permalink=\"https://data-x.blog/about/screen-shot-2017-05-04-at-6-59-17-pm-1024x574/\" data-recalc-dims=\"1\" height=\"361\" src=\"https://i2.wp.com/data-x.blog/wp-content/uploads/2016/12/screen-shot-2017-05-04-at-6-59-17-pm-1024x574.png?resize=644%2C361&amp;ssl=1\" width=\"644\"/>\n </p>\n <p>\n <strong>\n Applied Data Science with Venture Applications\n </strong>\n <br/>\n <strong>\n Fall 2018 course information:\n </strong>\n </p>\n <ul>\n <li>\n <strong>\n Undergrad:\n </strong>\n <a href=\"http://classes.berkeley.edu/content/2018-fall-indeng-135-001-lec-001\">\n INDENG\n <span class=\"il\">\n 135\n </span>\n </a>\n </li>\n <li>\n <strong>\n Grad Section:\n </strong>\n <a href=\"https://classes.berkeley.edu/content/2018-fall-indeng-290-002-lec-002\">\n INDENGย 290\n </a>\n </li>\n <li>\n <strong>\n Location:\n </strong>\n <span class=\"ng-binding\">\n TuTh 12:30 pm โ€“ 2:00 pm\n </span>\n |\n <a href=\"http://www.berkeley.edu/map/?evans\">\n Evans 10\n </a>\n </li>\n <li>\n <em>\n <strong>\n Prerequisite\n </strong>\n </em>\n : Students should have a working knowledge of Python, have completed a fundamental probability / statistics course, as well as have a basic understanding Linear Algebra.\n </li>\n </ul>\n <p>\n While the course offers instruction on tools and methods in data science, the Data-X project includes new open-ended problems with industry, social, and venture perspectives on utilizing data for rapid impact.\n <a href=\"https://data-x.blog/projects/\">\n Click here to see examples of projects completed earlier semesters.\n </a>\n </p>\n <h3>\n <strong>\n Masterclasses for Technical 
Leaders and Executives\n </strong>\n </h3>\n <p>\n <strong>\n Past and Upcoming Workshops, Bootcamps and talks related to Data-X:\n </strong>\n </p>\n <ul>\n <li>\n Data-X Masterclass for Technical Leaders, Hightech Summit, Germany, February 2019\n </li>\n <li>\n CIO Connect Data Science Master Class, From Data Strategy to Implementation, Hong Kong Sept 2018\n </li>\n <li>\n Applied Analytics for Competitive Advantage: Data Strategies for Disruptive Impact\u000bManilla, Philippines Sept. 2018\n </li>\n <li>\n Data-X lectures on AI and Blockchain, ICE-IEEE Conference, Stuttgart, Germany, June 19th 2018 (\n <a href=\"http://www.ice-conference.org/\">\n ICE/IEEE website\n </a>\n )\n </li>\n <li>\n Data-X Masterclass for Technical Leaders, HKBU, Hong Kong, May 24th-25th 2018\n </li>\n <li>\n <span style=\"font-size: 1rem;\">\n Data-X Masterclass, Applied Analytics in Business Projects, University of Economics, Prague,ย April 26-27th 2018 (\n <a href=\"http://dataxprague.cz/\">\n Official Website\n </a>\n )\n </span>\n </li>\n <li>\n Data-X Masterclass for Technical Leaders, Hong Kong, January 25th-26th, 2018 (\n <a href=\"https://github.com/afo/dataXhkbu\">\n Github link\n </a>\n ,\n <a href=\"https://event.hkbu.edu.hk/en/event.php?id=430\">\n HKBU event\n </a>\n ,\n <a href=\"http://scet.berkeley.edu/ikhlaq-sidhu-addresses-data-science-hong-kong-baptist-university/\">\n SCET article\n </a>\n )\n </li>\n <li>\n โ€œResearch Trends in AI with Venture Applicationsโ€,ย at GMIC, Hong Kong, October 2017\n </li>\n <li>\n โ€œTurning Data into Dollarsโ€, CIO Connect, Hong Kong, September 11, 2017\n </li>\n <li>\n โ€œData at Scale, AI, and Business Modelsโ€, Jardines CIO Conference, July 2017\n </li>\n <li>\n โ€œHow Data and AI Affect Business, Government, and Societyโ€, July 2017\n </li>\n </ul>\n <h3>\n </h3>\n <h3>\n Data-X Breadth: Trends &amp;\n <span style=\"font-size: 18.72px;\">\n Industry\n </span>\n Updates\n </h3>\n <ul>\n <li>\n Ref B01:\n <a href=\"http://scet.berkeley.edu/data-strategy-working-hard-enough/\">\n Is Your Data Strategy Working Hard Enough\n </a>\n </li>\n <li>\n Ref B02:\n <a href=\"https://data-x.blog/wp-content/uploads/2017/12/fueling-growth-through-data-monetization-mckinsey-company1.pdf\">\n Fueling growth through data monetization | McKinsey,\n </a>\n (\n <a href=\"https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/fueling-growth-through-data-monetization?cid=other-eml-alt-mip-mck-oth-1712\">\n original\n </a>\n )\n </li>\n <li>\n Ref B03:\n <a href=\"http://scet.berkeley.edu/berkeley-x/data-ai-blockchain-projects-work-in-real-life/\" rel=\"noopener\" target=\"_blank\">\n AI, Data, and Blockchain Projects that Work in Real Life\n </a>\n </li>\n <li>\n Ref B04:\n <a href=\"https://data-x.blog/wp-content/uploads/2016/12/why-you_re-not-getting-value-from-your-data-science.pdf\">\n Why youโ€™re not getting value from your data science\n </a>\n </li>\n <li>\n Ref B05:\n <a href=\"https://data-x.blog/wp-content/uploads/2018/04/How-to-Make-Your-Company-Machine-Learning-Ready.pdf\" title=\"How to Make Your Company Machine Learning Ready\">\n How to Make Your Company Machine Learning Ready\n </a>\n </li>\n <li>\n Ref B06:\n <a href=\"https://data-x.blog/wp-content/uploads/2018/05/Artificial-Intelligence-โ€”-The-Revolution-Hasnโ€™t-Happened-Yet.pdf\" title=\"Artificial Intelligence โ€” The Revolution Hasnโ€™t Happened Yet\">\n Artificial Intelligence โ€” The Revolution Hasnโ€™t Happened Yet\n </a>\n </li>\n <li>\n Ref B07:\n <a 
href=\"https://blogs.wsj.com/cio/2018/06/12/companies-eager-to-expand-digital-business-learn-value-of-the-once-obscure-api/\">\n The Importance and Value of APIs (WSJ)\n </a>\n </li>\n </ul>\n </div>\n <!-- .entry-content -->\n </div>\n <!-- .wrap -->\n </div>\n <!-- .panel-content -->\n </article>\n <!-- #post-## -->\n </main>\n <!-- #main -->\n </div>\n <!-- #primary -->\n </div>\n <!-- #content -->\n <footer class=\"site-footer\" id=\"colophon\" role=\"contentinfo\">\n <div class=\"wrap\">\n <nav aria-label=\"Footer Social Links Menu\" class=\"social-navigation\" role=\"navigation\">\n <div class=\"menu-social-links-container\">\n <ul class=\"social-links-menu\" id=\"menu-social-links\">\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-11\" id=\"menu-item-11\">\n <a href=\"https://twitter.com/\">\n <span class=\"screen-reader-text\">\n Twitter\n </span>\n <svg aria-hidden=\"true\" class=\"icon icon-twitter\" role=\"img\">\n <use href=\"#icon-twitter\" xlink:href=\"#icon-twitter\">\n </use>\n </svg>\n </a>\n </li>\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-12\" id=\"menu-item-12\">\n <a href=\"https://www.facebook.com/\">\n <span class=\"screen-reader-text\">\n Facebook\n </span>\n <svg aria-hidden=\"true\" class=\"icon icon-facebook\" role=\"img\">\n <use href=\"#icon-facebook\" xlink:href=\"#icon-facebook\">\n </use>\n </svg>\n </a>\n </li>\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-13\" id=\"menu-item-13\">\n <a href=\"http://plus.google.com\">\n <span class=\"screen-reader-text\">\n Google+\n </span>\n <svg aria-hidden=\"true\" class=\"icon icon-google-plus\" role=\"img\">\n <use href=\"#icon-google-plus\" xlink:href=\"#icon-google-plus\">\n </use>\n </svg>\n </a>\n </li>\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-14\" id=\"menu-item-14\">\n <a href=\"http://github.com\">\n <span class=\"screen-reader-text\">\n GitHub\n </span>\n <svg aria-hidden=\"true\" class=\"icon icon-github\" role=\"img\">\n <use href=\"#icon-github\" xlink:href=\"#icon-github\">\n </use>\n </svg>\n </a>\n </li>\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-15\" id=\"menu-item-15\">\n <a href=\"http://wordpress.com\">\n <span class=\"screen-reader-text\">\n WordPress.com\n </span>\n <svg aria-hidden=\"true\" class=\"icon icon-wordpress\" role=\"img\">\n <use href=\"#icon-wordpress\" xlink:href=\"#icon-wordpress\">\n </use>\n </svg>\n </a>\n </li>\n </ul>\n </div>\n </nav>\n <!-- .social-navigation -->\n <div class=\"site-info\">\n <a href=\"https://wordpress.com/?ref=footer_custom_powered\">\n Powered by WordPress.com\n </a>\n .\n </div>\n <!-- .site-info -->\n </div>\n <!-- .wrap -->\n </footer>\n <!-- #colophon -->\n </div>\n <!-- .site-content-contain -->\n </div>\n <!-- #page -->\n <!-- -->\n <!-- Your Google Analytics Plugin is missing the tracking ID -->\n <div style=\"display:none\">\n </div>\n <!--[if lte IE 8]>\n<link rel='stylesheet' id='jetpack-carousel-ie8fix-css' href='https://c0.wp.com/p/jetpack/6.6.1/modules/carousel/jetpack-carousel-ie8fix.css' type='text/css' media='all' />\n<![endif]-->\n <script src=\"https://c0.wp.com/p/jetpack/6.6.1/_inc/build/photon/photon.min.js\" type=\"text/javascript\">\n </script>\n <script src=\"https://s0.wp.com/wp-content/js/devicepx-jetpack.js?ver=201841\" type=\"text/javascript\">\n </script>\n <script type=\"text/javascript\">\n /* <![CDATA[ */\nvar twentyseventeenScreenReaderText = 
{\"quote\":\"<svg class=\\\"icon icon-quote-right\\\" aria-hidden=\\\"true\\\" role=\\\"img\\\"> <use href=\\\"#icon-quote-right\\\" xlink:href=\\\"#icon-quote-right\\\"><\\/use> <\\/svg>\",\"expand\":\"Expand child menu\",\"collapse\":\"Collapse child menu\",\"icon\":\"<svg class=\\\"icon icon-angle-down\\\" aria-hidden=\\\"true\\\" role=\\\"img\\\"> <use href=\\\"#icon-angle-down\\\" xlink:href=\\\"#icon-angle-down\\\"><\\/use> <span class=\\\"svg-fallback icon-angle-down\\\"><\\/span><\\/svg>\"};\n/* ]]> */\n </script>\n <script src=\"https://data-x.blog/wp-content/themes/twentyseventeen/assets/js/skip-link-focus-fix.js?ver=1.0\" type=\"text/javascript\">\n </script>\n <script src=\"https://data-x.blog/wp-content/themes/twentyseventeen/assets/js/navigation.js?ver=1.0\" type=\"text/javascript\">\n </script>\n <script src=\"https://data-x.blog/wp-content/themes/twentyseventeen/assets/js/global.js?ver=1.0\" type=\"text/javascript\">\n </script>\n <script src=\"https://data-x.blog/wp-content/themes/twentyseventeen/assets/js/jquery.scrollTo.js?ver=2.1.2\" type=\"text/javascript\">\n </script>\n <script src=\"https://c0.wp.com/p/jetpack/6.6.1/_inc/build/likes/queuehandler.min.js\" type=\"text/javascript\">\n </script>\n <script src=\"https://c0.wp.com/c/4.9.8/wp-includes/js/wp-embed.min.js\" type=\"text/javascript\">\n </script>\n <script src=\"https://c0.wp.com/p/jetpack/6.6.1/_inc/build/spin.min.js\" type=\"text/javascript\">\n </script>\n <script src=\"https://c0.wp.com/p/jetpack/6.6.1/_inc/build/jquery.spin.min.js\" type=\"text/javascript\">\n </script>\n <script type=\"text/javascript\">\n /* <![CDATA[ */\nvar jetpackCarouselStrings = {\"widths\":[370,700,1000,1200,1400,2000],\"is_logged_in\":\"\",\"lang\":\"en\",\"ajaxurl\":\"https:\\/\\/data-x.blog\\/wp-admin\\/admin-ajax.php\",\"nonce\":\"6d7c2fb409\",\"display_exif\":\"1\",\"display_geo\":\"1\",\"single_image_gallery\":\"1\",\"single_image_gallery_media_file\":\"\",\"background_color\":\"black\",\"comment\":\"Comment\",\"post_comment\":\"Post Comment\",\"write_comment\":\"Write a Comment...\",\"loading_comments\":\"Loading Comments...\",\"download_original\":\"View full size <span class=\\\"photo-size\\\">{0}<span class=\\\"photo-size-times\\\">\\u00d7<\\/span>{1}<\\/span>\",\"no_comment_text\":\"Please be sure to submit some text with your comment.\",\"no_comment_email\":\"Please provide an email address to comment.\",\"no_comment_author\":\"Please provide your name to comment.\",\"comment_post_error\":\"Sorry, but there was an error posting your comment. 
Please try again later.\",\"comment_approved\":\"Your comment was approved.\",\"comment_unapproved\":\"Your comment is in moderation.\",\"camera\":\"Camera\",\"aperture\":\"Aperture\",\"shutter_speed\":\"Shutter Speed\",\"focal_length\":\"Focal Length\",\"copyright\":\"Copyright\",\"comment_registration\":\"0\",\"require_name_email\":\"1\",\"login_url\":\"https:\\/\\/data-x.blog\\/wp-login.php?redirect_to=https%3A%2F%2Fdata-x.blog%2F\",\"blog_id\":\"1\",\"meta_data\":[\"camera\",\"aperture\",\"shutter_speed\",\"focal_length\",\"copyright\"],\"local_comments_commenting_as\":\"<fieldset><label for=\\\"email\\\">Email (Required)<\\/label> <input type=\\\"text\\\" name=\\\"email\\\" class=\\\"jp-carousel-comment-form-field jp-carousel-comment-form-text-field\\\" id=\\\"jp-carousel-comment-form-email-field\\\" \\/><\\/fieldset><fieldset><label for=\\\"author\\\">Name (Required)<\\/label> <input type=\\\"text\\\" name=\\\"author\\\" class=\\\"jp-carousel-comment-form-field jp-carousel-comment-form-text-field\\\" id=\\\"jp-carousel-comment-form-author-field\\\" \\/><\\/fieldset><fieldset><label for=\\\"url\\\">Website<\\/label> <input type=\\\"text\\\" name=\\\"url\\\" class=\\\"jp-carousel-comment-form-field jp-carousel-comment-form-text-field\\\" id=\\\"jp-carousel-comment-form-url-field\\\" \\/><\\/fieldset>\"};\n/* ]]> */\n </script>\n <script src=\"https://c0.wp.com/p/jetpack/6.6.1/_inc/build/carousel/jetpack-carousel.min.js\" type=\"text/javascript\">\n </script>\n <script async=\"async\" defer=\"defer\" src=\"https://stats.wp.com/e-201841.js\" type=\"text/javascript\">\n </script>\n <script type=\"text/javascript\">\n _stq = window._stq || [];\n\t_stq.push([ 'view', {v:'ext',j:'1:6.6.1',blog:'120928364',post:'2',tz:'0',srv:'data-x.blog'} ]);\n\t_stq.push([ 'clickTrackerInit', '120928364', '2' ]);\n </script>\n <svg style=\"position: absolute; width: 0; height: 0; overflow: hidden;\" version=\"1.1\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\n <defs>\n <symbol id=\"icon-behance\" viewbox=\"0 0 37 32\">\n <path class=\"path1\" d=\"M33 6.054h-9.125v2.214h9.125v-2.214zM28.5 13.661q-1.607 0-2.607 0.938t-1.107 2.545h7.286q-0.321-3.482-3.571-3.482zM28.786 24.107q1.125 0 2.179-0.571t1.357-1.554h3.946q-1.786 5.482-7.625 5.482-3.821 0-6.080-2.357t-2.259-6.196q0-3.714 2.33-6.17t6.009-2.455q2.464 0 4.295 1.214t2.732 3.196 0.902 4.429q0 0.304-0.036 0.839h-11.75q0 1.982 1.027 3.063t2.973 1.080zM4.946 23.214h5.286q3.661 0 3.661-2.982 0-3.214-3.554-3.214h-5.393v6.196zM4.946 13.625h5.018q1.393 0 2.205-0.652t0.813-2.027q0-2.571-3.393-2.571h-4.643v5.25zM0 4.536h10.607q1.554 0 2.768 0.25t2.259 0.848 1.607 1.723 0.563 2.75q0 3.232-3.071 4.696 2.036 0.571 3.071 2.054t1.036 3.643q0 1.339-0.438 2.438t-1.179 1.848-1.759 1.268-2.161 0.75-2.393 0.232h-10.911v-22.5z\">\n </path>\n </symbol>\n <symbol id=\"icon-deviantart\" viewbox=\"0 0 18 32\">\n <path class=\"path1\" d=\"M18.286 5.411l-5.411 10.393 0.429 0.554h4.982v7.411h-9.054l-0.786 0.536-2.536 4.875-0.536 0.536h-5.375v-5.411l5.411-10.411-0.429-0.536h-4.982v-7.411h9.054l0.786-0.536 2.536-4.875 0.536-0.536h5.375v5.411z\">\n </path>\n </symbol>\n <symbol id=\"icon-medium\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M10.661 7.518v20.946q0 0.446-0.223 0.759t-0.652 0.313q-0.304 0-0.589-0.143l-8.304-4.161q-0.375-0.179-0.634-0.598t-0.259-0.83v-20.357q0-0.357 0.179-0.607t0.518-0.25q0.25 0 0.786 0.268l9.125 4.571q0.054 0.054 0.054 0.089zM11.804 9.321l9.536 15.464-9.536-4.75v-10.714zM32 9.643v18.821q0 0.446-0.25 0.723t-0.679 
0.277-0.839-0.232l-7.875-3.929zM31.946 7.5q0 0.054-4.58 7.491t-5.366 8.705l-6.964-11.321 5.786-9.411q0.304-0.5 0.929-0.5 0.25 0 0.464 0.107l9.661 4.821q0.071 0.036 0.071 0.107z\">\n </path>\n </symbol>\n <symbol id=\"icon-slideshare\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M15.589 13.214q0 1.482-1.134 2.545t-2.723 1.063-2.723-1.063-1.134-2.545q0-1.5 1.134-2.554t2.723-1.054 2.723 1.054 1.134 2.554zM24.554 13.214q0 1.482-1.125 2.545t-2.732 1.063q-1.589 0-2.723-1.063t-1.134-2.545q0-1.5 1.134-2.554t2.723-1.054q1.607 0 2.732 1.054t1.125 2.554zM28.571 16.429v-11.911q0-1.554-0.571-2.205t-1.982-0.652h-19.857q-1.482 0-2.009 0.607t-0.527 2.25v12.018q0.768 0.411 1.58 0.714t1.446 0.5 1.446 0.33 1.268 0.196 1.25 0.071 1.045 0.009 1.009-0.036 0.795-0.036q1.214-0.018 1.696 0.482 0.107 0.107 0.179 0.161 0.464 0.446 1.089 0.911 0.125-1.625 2.107-1.554 0.089 0 0.652 0.027t0.768 0.036 0.813 0.018 0.946-0.018 0.973-0.080 1.089-0.152 1.107-0.241 1.196-0.348 1.205-0.482 1.286-0.616zM31.482 16.339q-2.161 2.661-6.643 4.5 1.5 5.089-0.411 8.304-1.179 2.018-3.268 2.643-1.857 0.571-3.25-0.268-1.536-0.911-1.464-2.929l-0.018-5.821v-0.018q-0.143-0.036-0.438-0.107t-0.42-0.089l-0.018 6.036q0.071 2.036-1.482 2.929-1.411 0.839-3.268 0.268-2.089-0.643-3.25-2.679-1.875-3.214-0.393-8.268-4.482-1.839-6.643-4.5-0.446-0.661-0.071-1.125t1.071 0.018q0.054 0.036 0.196 0.125t0.196 0.143v-12.393q0-1.286 0.839-2.196t2.036-0.911h22.446q1.196 0 2.036 0.911t0.839 2.196v12.393l0.375-0.268q0.696-0.482 1.071-0.018t-0.071 1.125z\">\n </path>\n </symbol>\n <symbol id=\"icon-snapchat-ghost\" viewbox=\"0 0 30 32\">\n <path class=\"path1\" d=\"M15.143 2.286q2.393-0.018 4.295 1.223t2.92 3.438q0.482 1.036 0.482 3.196 0 0.839-0.161 3.411 0.25 0.125 0.5 0.125 0.321 0 0.911-0.241t0.911-0.241q0.518 0 1 0.321t0.482 0.821q0 0.571-0.563 0.964t-1.232 0.563-1.232 0.518-0.563 0.848q0 0.268 0.214 0.768 0.661 1.464 1.83 2.679t2.58 1.804q0.5 0.214 1.429 0.411 0.5 0.107 0.5 0.625 0 1.25-3.911 1.839-0.125 0.196-0.196 0.696t-0.25 0.83-0.589 0.33q-0.357 0-1.107-0.116t-1.143-0.116q-0.661 0-1.107 0.089-0.571 0.089-1.125 0.402t-1.036 0.679-1.036 0.723-1.357 0.598-1.768 0.241q-0.929 0-1.723-0.241t-1.339-0.598-1.027-0.723-1.036-0.679-1.107-0.402q-0.464-0.089-1.125-0.089-0.429 0-1.17 0.134t-1.045 0.134q-0.446 0-0.625-0.33t-0.25-0.848-0.196-0.714q-3.911-0.589-3.911-1.839 0-0.518 0.5-0.625 0.929-0.196 1.429-0.411 1.393-0.571 2.58-1.804t1.83-2.679q0.214-0.5 0.214-0.768 0-0.5-0.563-0.848t-1.241-0.527-1.241-0.563-0.563-0.938q0-0.482 0.464-0.813t0.982-0.33q0.268 0 0.857 0.232t0.946 0.232q0.321 0 0.571-0.125-0.161-2.536-0.161-3.393 0-2.179 0.482-3.214 1.143-2.446 3.071-3.536t4.714-1.125z\">\n </path>\n </symbol>\n <symbol id=\"icon-yelp\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M13.804 23.554v2.268q-0.018 5.214-0.107 5.446-0.214 0.571-0.911 0.714-0.964 0.161-3.241-0.679t-2.902-1.589q-0.232-0.268-0.304-0.643-0.018-0.214 0.071-0.464 0.071-0.179 0.607-0.839t3.232-3.857q0.018 0 1.071-1.25 0.268-0.339 0.705-0.438t0.884 0.063q0.429 0.179 0.67 0.518t0.223 0.75zM11.143 19.071q-0.054 0.982-0.929 1.25l-2.143 0.696q-4.911 1.571-5.214 1.571-0.625-0.036-0.964-0.643-0.214-0.446-0.304-1.339-0.143-1.357 0.018-2.973t0.536-2.223 1-0.571q0.232 0 3.607 1.375 1.25 0.518 2.054 0.839l1.5 0.607q0.411 0.161 0.634 0.545t0.205 0.866zM25.893 24.375q-0.125 0.964-1.634 2.875t-2.42 2.268q-0.661 0.25-1.125-0.125-0.25-0.179-3.286-5.125l-0.839-1.375q-0.25-0.375-0.205-0.821t0.348-0.821q0.625-0.768 1.482-0.464 0.018 0.018 2.125 0.714 3.625 1.179 4.321 1.42t0.839 0.366q0.5 0.393 0.393 
1.089zM13.893 13.089q0.089 1.821-0.964 2.179-1.036 0.304-2.036-1.268l-6.75-10.679q-0.143-0.625 0.339-1.107 0.732-0.768 3.705-1.598t4.009-0.563q0.714 0.179 0.875 0.804 0.054 0.321 0.393 5.455t0.429 6.777zM25.714 15.018q0.054 0.696-0.464 1.054-0.268 0.179-5.875 1.536-1.196 0.268-1.625 0.411l0.018-0.036q-0.411 0.107-0.821-0.071t-0.661-0.571q-0.536-0.839 0-1.554 0.018-0.018 1.339-1.821 2.232-3.054 2.679-3.643t0.607-0.696q0.5-0.339 1.161-0.036 0.857 0.411 2.196 2.384t1.446 2.991v0.054z\">\n </path>\n </symbol>\n <symbol id=\"icon-vine\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M26.732 14.768v3.536q-1.804 0.411-3.536 0.411-1.161 2.429-2.955 4.839t-3.241 3.848-2.286 1.902q-1.429 0.804-2.893-0.054-0.5-0.304-1.080-0.777t-1.518-1.491-1.83-2.295-1.92-3.286-1.884-4.357-1.634-5.616-1.259-6.964h5.054q0.464 3.893 1.25 7.116t1.866 5.661 2.17 4.205 2.5 3.482q3.018-3.018 5.125-7.25-2.536-1.286-3.982-3.929t-1.446-5.946q0-3.429 1.857-5.616t5.071-2.188q3.179 0 4.875 1.884t1.696 5.313q0 2.839-1.036 5.107-0.125 0.018-0.348 0.054t-0.821 0.036-1.125-0.107-1.107-0.455-0.902-0.92q0.554-1.839 0.554-3.286 0-1.554-0.518-2.357t-1.411-0.804q-0.946 0-1.518 0.884t-0.571 2.509q0 3.321 1.875 5.241t4.768 1.92q1.107 0 2.161-0.25z\">\n </path>\n </symbol>\n <symbol id=\"icon-vk\" viewbox=\"0 0 35 32\">\n <path class=\"path1\" d=\"M34.232 9.286q0.411 1.143-2.679 5.25-0.429 0.571-1.161 1.518-1.393 1.786-1.607 2.339-0.304 0.732 0.25 1.446 0.304 0.375 1.446 1.464h0.018l0.071 0.071q2.518 2.339 3.411 3.946 0.054 0.089 0.116 0.223t0.125 0.473-0.009 0.607-0.446 0.491-1.054 0.223l-4.571 0.071q-0.429 0.089-1-0.089t-0.929-0.393l-0.357-0.214q-0.536-0.375-1.25-1.143t-1.223-1.384-1.089-1.036-1.009-0.277q-0.054 0.018-0.143 0.063t-0.304 0.259-0.384 0.527-0.304 0.929-0.116 1.384q0 0.268-0.063 0.491t-0.134 0.33l-0.071 0.089q-0.321 0.339-0.946 0.393h-2.054q-1.268 0.071-2.607-0.295t-2.348-0.946-1.839-1.179-1.259-1.027l-0.446-0.429q-0.179-0.179-0.491-0.536t-1.277-1.625-1.893-2.696-2.188-3.768-2.33-4.857q-0.107-0.286-0.107-0.482t0.054-0.286l0.071-0.107q0.268-0.339 1.018-0.339l4.893-0.036q0.214 0.036 0.411 0.116t0.286 0.152l0.089 0.054q0.286 0.196 0.429 0.571 0.357 0.893 0.821 1.848t0.732 1.455l0.286 0.518q0.518 1.071 1 1.857t0.866 1.223 0.741 0.688 0.607 0.25 0.482-0.089q0.036-0.018 0.089-0.089t0.214-0.393 0.241-0.839 0.17-1.446 0-2.232q-0.036-0.714-0.161-1.304t-0.25-0.821l-0.107-0.214q-0.446-0.607-1.518-0.768-0.232-0.036 0.089-0.429 0.304-0.339 0.679-0.536 0.946-0.464 4.268-0.429 1.464 0.018 2.411 0.232 0.357 0.089 0.598 0.241t0.366 0.429 0.188 0.571 0.063 0.813-0.018 0.982-0.045 1.259-0.027 1.473q0 0.196-0.018 0.75t-0.009 0.857 0.063 0.723 0.205 0.696 0.402 0.438q0.143 0.036 0.304 0.071t0.464-0.196 0.679-0.616 0.929-1.196 1.214-1.92q1.071-1.857 1.911-4.018 0.071-0.179 0.179-0.313t0.196-0.188l0.071-0.054 0.089-0.045t0.232-0.054 0.357-0.009l5.143-0.036q0.696-0.089 1.143 0.045t0.554 0.295z\">\n </path>\n </symbol>\n <symbol id=\"icon-search\" viewbox=\"0 0 30 32\">\n <path class=\"path1\" d=\"M20.571 14.857q0-3.304-2.348-5.652t-5.652-2.348-5.652 2.348-2.348 5.652 2.348 5.652 5.652 2.348 5.652-2.348 2.348-5.652zM29.714 29.714q0 0.929-0.679 1.607t-1.607 0.679q-0.964 0-1.607-0.679l-6.125-6.107q-3.196 2.214-7.125 2.214-2.554 0-4.884-0.991t-4.018-2.679-2.679-4.018-0.991-4.884 0.991-4.884 2.679-4.018 4.018-2.679 4.884-0.991 4.884 0.991 4.018 2.679 2.679 4.018 0.991 4.884q0 3.929-2.214 7.125l6.125 6.125q0.661 0.661 0.661 1.607z\">\n </path>\n </symbol>\n <symbol id=\"icon-envelope-o\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M29.714 
26.857v-13.714q-0.571 0.643-1.232 1.179-4.786 3.679-7.607 6.036-0.911 0.768-1.482 1.196t-1.545 0.866-1.83 0.438h-0.036q-0.857 0-1.83-0.438t-1.545-0.866-1.482-1.196q-2.821-2.357-7.607-6.036-0.661-0.536-1.232-1.179v13.714q0 0.232 0.17 0.402t0.402 0.17h26.286q0.232 0 0.402-0.17t0.17-0.402zM29.714 8.089v-0.438t-0.009-0.232-0.054-0.223-0.098-0.161-0.161-0.134-0.25-0.045h-26.286q-0.232 0-0.402 0.17t-0.17 0.402q0 3 2.625 5.071 3.446 2.714 7.161 5.661 0.107 0.089 0.625 0.527t0.821 0.67 0.795 0.563 0.902 0.491 0.768 0.161h0.036q0.357 0 0.768-0.161t0.902-0.491 0.795-0.563 0.821-0.67 0.625-0.527q3.714-2.946 7.161-5.661 0.964-0.768 1.795-2.063t0.83-2.348zM32 7.429v19.429q0 1.179-0.839 2.018t-2.018 0.839h-26.286q-1.179 0-2.018-0.839t-0.839-2.018v-19.429q0-1.179 0.839-2.018t2.018-0.839h26.286q1.179 0 2.018 0.839t0.839 2.018z\">\n </path>\n </symbol>\n <symbol id=\"icon-close\" viewbox=\"0 0 25 32\">\n <path class=\"path1\" d=\"M23.179 23.607q0 0.714-0.5 1.214l-2.429 2.429q-0.5 0.5-1.214 0.5t-1.214-0.5l-5.25-5.25-5.25 5.25q-0.5 0.5-1.214 0.5t-1.214-0.5l-2.429-2.429q-0.5-0.5-0.5-1.214t0.5-1.214l5.25-5.25-5.25-5.25q-0.5-0.5-0.5-1.214t0.5-1.214l2.429-2.429q0.5-0.5 1.214-0.5t1.214 0.5l5.25 5.25 5.25-5.25q0.5-0.5 1.214-0.5t1.214 0.5l2.429 2.429q0.5 0.5 0.5 1.214t-0.5 1.214l-5.25 5.25 5.25 5.25q0.5 0.5 0.5 1.214z\">\n </path>\n </symbol>\n <symbol id=\"icon-angle-down\" viewbox=\"0 0 21 32\">\n <path class=\"path1\" d=\"M19.196 13.143q0 0.232-0.179 0.411l-8.321 8.321q-0.179 0.179-0.411 0.179t-0.411-0.179l-8.321-8.321q-0.179-0.179-0.179-0.411t0.179-0.411l0.893-0.893q0.179-0.179 0.411-0.179t0.411 0.179l7.018 7.018 7.018-7.018q0.179-0.179 0.411-0.179t0.411 0.179l0.893 0.893q0.179 0.179 0.179 0.411z\">\n </path>\n </symbol>\n <symbol id=\"icon-folder-open\" viewbox=\"0 0 34 32\">\n <path class=\"path1\" d=\"M33.554 17q0 0.554-0.554 1.179l-6 7.071q-0.768 0.911-2.152 1.545t-2.563 0.634h-19.429q-0.607 0-1.080-0.232t-0.473-0.768q0-0.554 0.554-1.179l6-7.071q0.768-0.911 2.152-1.545t2.563-0.634h19.429q0.607 0 1.080 0.232t0.473 0.768zM27.429 10.857v2.857h-14.857q-1.679 0-3.518 0.848t-2.929 2.134l-6.107 7.179q0-0.071-0.009-0.223t-0.009-0.223v-17.143q0-1.643 1.179-2.821t2.821-1.179h5.714q1.643 0 2.821 1.179t1.179 2.821v0.571h9.714q1.643 0 2.821 1.179t1.179 2.821z\">\n </path>\n </symbol>\n <symbol id=\"icon-twitter\" viewbox=\"0 0 30 32\">\n <path class=\"path1\" d=\"M28.929 7.286q-1.196 1.75-2.893 2.982 0.018 0.25 0.018 0.75 0 2.321-0.679 4.634t-2.063 4.437-3.295 3.759-4.607 2.607-5.768 0.973q-4.839 0-8.857-2.589 0.625 0.071 1.393 0.071 4.018 0 7.161-2.464-1.875-0.036-3.357-1.152t-2.036-2.848q0.589 0.089 1.089 0.089 0.768 0 1.518-0.196-2-0.411-3.313-1.991t-1.313-3.67v-0.071q1.214 0.679 2.607 0.732-1.179-0.786-1.875-2.054t-0.696-2.75q0-1.571 0.786-2.911 2.161 2.661 5.259 4.259t6.634 1.777q-0.143-0.679-0.143-1.321 0-2.393 1.688-4.080t4.080-1.688q2.5 0 4.214 1.821 1.946-0.375 3.661-1.393-0.661 2.054-2.536 3.179 1.661-0.179 3.321-0.893z\">\n </path>\n </symbol>\n <symbol id=\"icon-facebook\" viewbox=\"0 0 19 32\">\n <path class=\"path1\" d=\"M17.125 0.214v4.714h-2.804q-1.536 0-2.071 0.643t-0.536 1.929v3.375h5.232l-0.696 5.286h-4.536v13.554h-5.464v-13.554h-4.554v-5.286h4.554v-3.893q0-3.321 1.857-5.152t4.946-1.83q2.625 0 4.071 0.214z\">\n </path>\n </symbol>\n <symbol id=\"icon-github\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M13.714 2.286q3.732 0 6.884 1.839t4.991 4.991 1.839 6.884q0 4.482-2.616 8.063t-6.759 4.955q-0.482 0.089-0.714-0.125t-0.232-0.536q0-0.054 0.009-1.366t0.009-2.402q0-1.732-0.929-2.536 1.018-0.107 
1.83-0.321t1.679-0.696 1.446-1.188 0.946-1.875 0.366-2.688q0-2.125-1.411-3.679 0.661-1.625-0.143-3.643-0.5-0.161-1.446 0.196t-1.643 0.786l-0.679 0.429q-1.661-0.464-3.429-0.464t-3.429 0.464q-0.286-0.196-0.759-0.482t-1.491-0.688-1.518-0.241q-0.804 2.018-0.143 3.643-1.411 1.554-1.411 3.679 0 1.518 0.366 2.679t0.938 1.875 1.438 1.196 1.679 0.696 1.83 0.321q-0.696 0.643-0.875 1.839-0.375 0.179-0.804 0.268t-1.018 0.089-1.17-0.384-0.991-1.116q-0.339-0.571-0.866-0.929t-0.884-0.429l-0.357-0.054q-0.375 0-0.518 0.080t-0.089 0.205 0.161 0.25 0.232 0.214l0.125 0.089q0.393 0.179 0.777 0.679t0.563 0.911l0.179 0.411q0.232 0.679 0.786 1.098t1.196 0.536 1.241 0.125 0.991-0.063l0.411-0.071q0 0.679 0.009 1.58t0.009 0.973q0 0.321-0.232 0.536t-0.714 0.125q-4.143-1.375-6.759-4.955t-2.616-8.063q0-3.732 1.839-6.884t4.991-4.991 6.884-1.839zM5.196 21.982q0.054-0.125-0.125-0.214-0.179-0.054-0.232 0.036-0.054 0.125 0.125 0.214 0.161 0.107 0.232-0.036zM5.75 22.589q0.125-0.089-0.036-0.286-0.179-0.161-0.286-0.054-0.125 0.089 0.036 0.286 0.179 0.179 0.286 0.054zM6.286 23.393q0.161-0.125 0-0.339-0.143-0.232-0.304-0.107-0.161 0.089 0 0.321t0.304 0.125zM7.036 24.143q0.143-0.143-0.071-0.339-0.214-0.214-0.357-0.054-0.161 0.143 0.071 0.339 0.214 0.214 0.357 0.054zM8.054 24.589q0.054-0.196-0.232-0.286-0.268-0.071-0.339 0.125t0.232 0.268q0.268 0.107 0.339-0.107zM9.179 24.679q0-0.232-0.304-0.196-0.286 0-0.286 0.196 0 0.232 0.304 0.196 0.286 0 0.286-0.196zM10.214 24.5q-0.036-0.196-0.321-0.161-0.286 0.054-0.25 0.268t0.321 0.143 0.25-0.25z\">\n </path>\n </symbol>\n <symbol id=\"icon-bars\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M27.429 24v2.286q0 0.464-0.339 0.804t-0.804 0.339h-25.143q-0.464 0-0.804-0.339t-0.339-0.804v-2.286q0-0.464 0.339-0.804t0.804-0.339h25.143q0.464 0 0.804 0.339t0.339 0.804zM27.429 14.857v2.286q0 0.464-0.339 0.804t-0.804 0.339h-25.143q-0.464 0-0.804-0.339t-0.339-0.804v-2.286q0-0.464 0.339-0.804t0.804-0.339h25.143q0.464 0 0.804 0.339t0.339 0.804zM27.429 5.714v2.286q0 0.464-0.339 0.804t-0.804 0.339h-25.143q-0.464 0-0.804-0.339t-0.339-0.804v-2.286q0-0.464 0.339-0.804t0.804-0.339h25.143q0.464 0 0.804 0.339t0.339 0.804z\">\n </path>\n </symbol>\n <symbol id=\"icon-google-plus\" viewbox=\"0 0 41 32\">\n <path class=\"path1\" d=\"M25.661 16.304q0 3.714-1.554 6.616t-4.429 4.536-6.589 1.634q-2.661 0-5.089-1.036t-4.179-2.786-2.786-4.179-1.036-5.089 1.036-5.089 2.786-4.179 4.179-2.786 5.089-1.036q5.107 0 8.768 3.429l-3.554 3.411q-2.089-2.018-5.214-2.018-2.196 0-4.063 1.107t-2.955 3.009-1.089 4.152 1.089 4.152 2.955 3.009 4.063 1.107q1.482 0 2.723-0.411t2.045-1.027 1.402-1.402 0.875-1.482 0.384-1.321h-7.429v-4.5h12.357q0.214 1.125 0.214 2.179zM41.143 14.125v3.75h-3.732v3.732h-3.75v-3.732h-3.732v-3.75h3.732v-3.732h3.75v3.732h3.732z\">\n </path>\n </symbol>\n <symbol id=\"icon-linkedin\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M6.232 11.161v17.696h-5.893v-17.696h5.893zM6.607 5.696q0.018 1.304-0.902 2.179t-2.42 0.875h-0.036q-1.464 0-2.357-0.875t-0.893-2.179q0-1.321 0.92-2.188t2.402-0.866 2.375 0.866 0.911 2.188zM27.429 18.714v10.143h-5.875v-9.464q0-1.875-0.723-2.938t-2.259-1.063q-1.125 0-1.884 0.616t-1.134 1.527q-0.196 0.536-0.196 1.446v9.875h-5.875q0.036-7.125 0.036-11.554t-0.018-5.286l-0.018-0.857h5.875v2.571h-0.036q0.357-0.571 0.732-1t1.009-0.929 1.554-0.777 2.045-0.277q3.054 0 4.911 2.027t1.857 5.938z\">\n </path>\n </symbol>\n <symbol id=\"icon-quote-right\" viewbox=\"0 0 30 32\">\n <path class=\"path1\" d=\"M13.714 5.714v12.571q0 1.857-0.723 3.545t-1.955 2.92-2.92 1.955-3.545 0.723h-1.143q-0.464 
0-0.804-0.339t-0.339-0.804v-2.286q0-0.464 0.339-0.804t0.804-0.339h1.143q1.893 0 3.232-1.339t1.339-3.232v-0.571q0-0.714-0.5-1.214t-1.214-0.5h-4q-1.429 0-2.429-1t-1-2.429v-6.857q0-1.429 1-2.429t2.429-1h6.857q1.429 0 2.429 1t1 2.429zM29.714 5.714v12.571q0 1.857-0.723 3.545t-1.955 2.92-2.92 1.955-3.545 0.723h-1.143q-0.464 0-0.804-0.339t-0.339-0.804v-2.286q0-0.464 0.339-0.804t0.804-0.339h1.143q1.893 0 3.232-1.339t1.339-3.232v-0.571q0-0.714-0.5-1.214t-1.214-0.5h-4q-1.429 0-2.429-1t-1-2.429v-6.857q0-1.429 1-2.429t2.429-1h6.857q1.429 0 2.429 1t1 2.429z\">\n </path>\n </symbol>\n <symbol id=\"icon-mail-reply\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M32 20q0 2.964-2.268 8.054-0.054 0.125-0.188 0.429t-0.241 0.536-0.232 0.393q-0.214 0.304-0.5 0.304-0.268 0-0.42-0.179t-0.152-0.446q0-0.161 0.045-0.473t0.045-0.42q0.089-1.214 0.089-2.196 0-1.804-0.313-3.232t-0.866-2.473-1.429-1.804-1.884-1.241-2.375-0.759-2.75-0.384-3.134-0.107h-4v4.571q0 0.464-0.339 0.804t-0.804 0.339-0.804-0.339l-9.143-9.143q-0.339-0.339-0.339-0.804t0.339-0.804l9.143-9.143q0.339-0.339 0.804-0.339t0.804 0.339 0.339 0.804v4.571h4q12.732 0 15.625 7.196 0.946 2.393 0.946 5.946z\">\n </path>\n </symbol>\n <symbol id=\"icon-youtube\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M17.339 22.214v3.768q0 1.196-0.696 1.196-0.411 0-0.804-0.393v-5.375q0.393-0.393 0.804-0.393 0.696 0 0.696 1.196zM23.375 22.232v0.821h-1.607v-0.821q0-1.214 0.804-1.214t0.804 1.214zM6.125 18.339h1.911v-1.679h-5.571v1.679h1.875v10.161h1.786v-10.161zM11.268 28.5h1.589v-8.821h-1.589v6.75q-0.536 0.75-1.018 0.75-0.321 0-0.375-0.375-0.018-0.054-0.018-0.625v-6.5h-1.589v6.982q0 0.875 0.143 1.304 0.214 0.661 1.036 0.661 0.857 0 1.821-1.089v0.964zM18.929 25.857v-3.518q0-1.304-0.161-1.768-0.304-1-1.268-1-0.893 0-1.661 0.964v-3.875h-1.589v11.839h1.589v-0.857q0.804 0.982 1.661 0.982 0.964 0 1.268-0.982 0.161-0.482 0.161-1.786zM24.964 25.679v-0.232h-1.625q0 0.911-0.036 1.089-0.125 0.643-0.714 0.643-0.821 0-0.821-1.232v-1.554h3.196v-1.839q0-1.411-0.482-2.071-0.696-0.911-1.893-0.911-1.214 0-1.911 0.911-0.5 0.661-0.5 2.071v3.089q0 1.411 0.518 2.071 0.696 0.911 1.929 0.911 1.286 0 1.929-0.946 0.321-0.482 0.375-0.964 0.036-0.161 0.036-1.036zM14.107 9.375v-3.75q0-1.232-0.768-1.232t-0.768 1.232v3.75q0 1.25 0.768 1.25t0.768-1.25zM26.946 22.786q0 4.179-0.464 6.25-0.25 1.054-1.036 1.768t-1.821 0.821q-3.286 0.375-9.911 0.375t-9.911-0.375q-1.036-0.107-1.83-0.821t-1.027-1.768q-0.464-2-0.464-6.25 0-4.179 0.464-6.25 0.25-1.054 1.036-1.768t1.839-0.839q3.268-0.357 9.893-0.357t9.911 0.357q1.036 0.125 1.83 0.839t1.027 1.768q0.464 2 0.464 6.25zM9.125 0h1.821l-2.161 7.125v4.839h-1.786v-4.839q-0.25-1.321-1.089-3.786-0.661-1.839-1.161-3.339h1.893l1.268 4.696zM15.732 5.946v3.125q0 1.446-0.5 2.107-0.661 0.911-1.893 0.911-1.196 0-1.875-0.911-0.5-0.679-0.5-2.107v-3.125q0-1.429 0.5-2.089 0.679-0.911 1.875-0.911 1.232 0 1.893 0.911 0.5 0.661 0.5 2.089zM21.714 3.054v8.911h-1.625v-0.982q-0.946 1.107-1.839 1.107-0.821 0-1.054-0.661-0.143-0.429-0.143-1.339v-7.036h1.625v6.554q0 0.589 0.018 0.625 0.054 0.393 0.375 0.393 0.482 0 1.018-0.768v-6.804h1.625z\">\n </path>\n </symbol>\n <symbol id=\"icon-dropbox\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M7.179 12.625l8.821 5.446-6.107 5.089-8.75-5.696zM24.786 22.536v1.929l-8.75 5.232v0.018l-0.018-0.018-0.018 0.018v-0.018l-8.732-5.232v-1.929l2.625 1.714 6.107-5.071v-0.036l0.018 0.018 0.018-0.018v0.036l6.125 5.071zM9.893 2.107l6.107 5.089-8.821 5.429-6.036-4.821zM24.821 12.625l6.036 4.839-8.732 5.696-6.125-5.089zM22.125 2.107l8.732 5.696-6.036 
4.821-8.821-5.429z\">\n </path>\n </symbol>\n <symbol id=\"icon-instagram\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M18.286 16q0-1.893-1.339-3.232t-3.232-1.339-3.232 1.339-1.339 3.232 1.339 3.232 3.232 1.339 3.232-1.339 1.339-3.232zM20.75 16q0 2.929-2.054 4.982t-4.982 2.054-4.982-2.054-2.054-4.982 2.054-4.982 4.982-2.054 4.982 2.054 2.054 4.982zM22.679 8.679q0 0.679-0.482 1.161t-1.161 0.482-1.161-0.482-0.482-1.161 0.482-1.161 1.161-0.482 1.161 0.482 0.482 1.161zM13.714 4.75q-0.125 0-1.366-0.009t-1.884 0-1.723 0.054-1.839 0.179-1.277 0.33q-0.893 0.357-1.571 1.036t-1.036 1.571q-0.196 0.518-0.33 1.277t-0.179 1.839-0.054 1.723 0 1.884 0.009 1.366-0.009 1.366 0 1.884 0.054 1.723 0.179 1.839 0.33 1.277q0.357 0.893 1.036 1.571t1.571 1.036q0.518 0.196 1.277 0.33t1.839 0.179 1.723 0.054 1.884 0 1.366-0.009 1.366 0.009 1.884 0 1.723-0.054 1.839-0.179 1.277-0.33q0.893-0.357 1.571-1.036t1.036-1.571q0.196-0.518 0.33-1.277t0.179-1.839 0.054-1.723 0-1.884-0.009-1.366 0.009-1.366 0-1.884-0.054-1.723-0.179-1.839-0.33-1.277q-0.357-0.893-1.036-1.571t-1.571-1.036q-0.518-0.196-1.277-0.33t-1.839-0.179-1.723-0.054-1.884 0-1.366 0.009zM27.429 16q0 4.089-0.089 5.661-0.179 3.714-2.214 5.75t-5.75 2.214q-1.571 0.089-5.661 0.089t-5.661-0.089q-3.714-0.179-5.75-2.214t-2.214-5.75q-0.089-1.571-0.089-5.661t0.089-5.661q0.179-3.714 2.214-5.75t5.75-2.214q1.571-0.089 5.661-0.089t5.661 0.089q3.714 0.179 5.75 2.214t2.214 5.75q0.089 1.571 0.089 5.661z\">\n </path>\n </symbol>\n <symbol id=\"icon-flickr\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M22.286 2.286q2.125 0 3.634 1.509t1.509 3.634v17.143q0 2.125-1.509 3.634t-3.634 1.509h-17.143q-2.125 0-3.634-1.509t-1.509-3.634v-17.143q0-2.125 1.509-3.634t3.634-1.509h17.143zM12.464 16q0-1.571-1.107-2.679t-2.679-1.107-2.679 1.107-1.107 2.679 1.107 2.679 2.679 1.107 2.679-1.107 1.107-2.679zM22.536 16q0-1.571-1.107-2.679t-2.679-1.107-2.679 1.107-1.107 2.679 1.107 2.679 2.679 1.107 2.679-1.107 1.107-2.679z\">\n </path>\n </symbol>\n <symbol id=\"icon-tumblr\" viewbox=\"0 0 19 32\">\n <path class=\"path1\" d=\"M16.857 23.732l1.429 4.232q-0.411 0.625-1.982 1.179t-3.161 0.571q-1.857 0.036-3.402-0.464t-2.545-1.321-1.696-1.893-0.991-2.143-0.295-2.107v-9.714h-3v-3.839q1.286-0.464 2.304-1.241t1.625-1.607 1.036-1.821 0.607-1.768 0.268-1.58q0.018-0.089 0.080-0.152t0.134-0.063h4.357v7.571h5.946v4.5h-5.964v9.25q0 0.536 0.116 1t0.402 0.938 0.884 0.741 1.455 0.25q1.393-0.036 2.393-0.518z\">\n </path>\n </symbol>\n <symbol id=\"icon-dockerhub\" viewbox=\"0 0 24 28\">\n <path class=\"path1\" d=\"M1.597 10.257h2.911v2.83H1.597v-2.83zm3.573 0h2.91v2.83H5.17v-2.83zm0-3.627h2.91v2.829H5.17V6.63zm3.57 3.627h2.912v2.83H8.74v-2.83zm0-3.627h2.912v2.829H8.74V6.63zm3.573 3.627h2.911v2.83h-2.911v-2.83zm0-3.627h2.911v2.829h-2.911V6.63zm3.572 3.627h2.911v2.83h-2.911v-2.83zM12.313 3h2.911v2.83h-2.911V3zm-6.65 14.173c-.449 0-.812.354-.812.788 0 .435.364.788.812.788.447 0 .811-.353.811-.788 0-.434-.363-.788-.811-.788\">\n </path>\n <path class=\"path2\" d=\"M28.172 11.721c-.978-.549-2.278-.624-3.388-.306-.136-1.146-.91-2.149-1.83-2.869l-.366-.286-.307.345c-.618.692-.8 1.845-.718 2.73.063.651.273 1.312.685 1.834-.313.183-.668.328-.985.434-.646.212-1.347.33-2.028.33H.083l-.042.429c-.137 1.432.065 2.866.674 4.173l.262.519.03.048c1.8 2.973 4.963 4.225 8.41 4.225 6.672 0 12.174-2.896 14.702-9.015 1.689.085 3.417-.4 4.243-1.968l.211-.4-.401-.223zM5.664 19.458c-.85 0-1.542-.671-1.542-1.497 0-.825.691-1.498 1.541-1.498.849 0 1.54.672 1.54 1.497s-.69 1.498-1.539 1.498z\">\n </path>\n </symbol>\n <symbol 
id=\"icon-dribbble\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M18.286 26.786q-0.75-4.304-2.5-8.893h-0.036l-0.036 0.018q-0.286 0.107-0.768 0.295t-1.804 0.875-2.446 1.464-2.339 2.045-1.839 2.643l-0.268-0.196q3.286 2.679 7.464 2.679 2.357 0 4.571-0.929zM14.982 15.946q-0.375-0.875-0.946-1.982-5.554 1.661-12.018 1.661-0.018 0.125-0.018 0.375 0 2.214 0.786 4.223t2.214 3.598q0.893-1.589 2.205-2.973t2.545-2.223 2.33-1.446 1.777-0.857l0.661-0.232q0.071-0.018 0.232-0.063t0.232-0.080zM13.071 12.161q-2.143-3.804-4.357-6.75-2.464 1.161-4.179 3.321t-2.286 4.857q5.393 0 10.821-1.429zM25.286 17.857q-3.75-1.071-7.304-0.518 1.554 4.268 2.286 8.375 1.982-1.339 3.304-3.384t1.714-4.473zM10.911 4.625q-0.018 0-0.036 0.018 0.018-0.018 0.036-0.018zM21.446 7.214q-3.304-2.929-7.732-2.929-1.357 0-2.768 0.339 2.339 3.036 4.393 6.821 1.232-0.464 2.321-1.080t1.723-1.098 1.17-1.018 0.67-0.723zM25.429 15.875q-0.054-4.143-2.661-7.321l-0.018 0.018q-0.161 0.214-0.339 0.438t-0.777 0.795-1.268 1.080-1.786 1.161-2.348 1.152q0.446 0.946 0.786 1.696 0.036 0.107 0.116 0.313t0.134 0.295q0.643-0.089 1.33-0.125t1.313-0.036 1.232 0.027 1.143 0.071 1.009 0.098 0.857 0.116 0.652 0.107 0.446 0.080zM27.429 16q0 3.732-1.839 6.884t-4.991 4.991-6.884 1.839-6.884-1.839-4.991-4.991-1.839-6.884 1.839-6.884 4.991-4.991 6.884-1.839 6.884 1.839 4.991 4.991 1.839 6.884z\">\n </path>\n </symbol>\n <symbol id=\"icon-skype\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M20.946 18.982q0-0.893-0.348-1.634t-0.866-1.223-1.304-0.875-1.473-0.607-1.563-0.411l-1.857-0.429q-0.536-0.125-0.786-0.188t-0.625-0.205-0.536-0.286-0.295-0.375-0.134-0.536q0-1.375 2.571-1.375 0.768 0 1.375 0.214t0.964 0.509 0.679 0.598 0.714 0.518 0.857 0.214q0.839 0 1.348-0.571t0.509-1.375q0-0.982-1-1.777t-2.536-1.205-3.25-0.411q-1.214 0-2.357 0.277t-2.134 0.839-1.589 1.554-0.598 2.295q0 1.089 0.339 1.902t1 1.348 1.429 0.866 1.839 0.58l2.607 0.643q1.607 0.393 2 0.643 0.571 0.357 0.571 1.071 0 0.696-0.714 1.152t-1.875 0.455q-0.911 0-1.634-0.286t-1.161-0.688-0.813-0.804-0.821-0.688-0.964-0.286q-0.893 0-1.348 0.536t-0.455 1.339q0 1.643 2.179 2.813t5.196 1.17q1.304 0 2.5-0.33t2.188-0.955 1.58-1.67 0.589-2.348zM27.429 22.857q0 2.839-2.009 4.848t-4.848 2.009q-2.321 0-4.179-1.429-1.375 0.286-2.679 0.286-2.554 0-4.884-0.991t-4.018-2.679-2.679-4.018-0.991-4.884q0-1.304 0.286-2.679-1.429-1.857-1.429-4.179 0-2.839 2.009-4.848t4.848-2.009q2.321 0 4.179 1.429 1.375-0.286 2.679-0.286 2.554 0 4.884 0.991t4.018 2.679 2.679 4.018 0.991 4.884q0 1.304-0.286 2.679 1.429 1.857 1.429 4.179z\">\n </path>\n </symbol>\n <symbol id=\"icon-foursquare\" viewbox=\"0 0 23 32\">\n <path class=\"path1\" d=\"M17.857 7.75l0.661-3.464q0.089-0.411-0.161-0.714t-0.625-0.304h-12.714q-0.411 0-0.688 0.304t-0.277 0.661v19.661q0 0.125 0.107 0.018l5.196-6.286q0.411-0.464 0.679-0.598t0.857-0.134h4.268q0.393 0 0.661-0.259t0.321-0.527q0.429-2.321 0.661-3.411 0.071-0.375-0.205-0.714t-0.652-0.339h-5.25q-0.518 0-0.857-0.339t-0.339-0.857v-0.75q0-0.518 0.339-0.848t0.857-0.33h6.179q0.321 0 0.625-0.241t0.357-0.527zM21.911 3.786q-0.268 1.304-0.955 4.759t-1.241 6.25-0.625 3.098q-0.107 0.393-0.161 0.58t-0.25 0.58-0.438 0.589-0.688 0.375-1.036 0.179h-4.839q-0.232 0-0.393 0.179-0.143 0.161-7.607 8.821-0.393 0.446-1.045 0.509t-0.866-0.098q-0.982-0.393-0.982-1.75v-25.179q0-0.982 0.679-1.83t2.143-0.848h15.857q1.696 0 2.268 0.946t0.179 2.839zM21.911 3.786l-2.821 14.107q0.071-0.304 0.625-3.098t1.241-6.25 0.955-4.759z\">\n </path>\n </symbol>\n <symbol id=\"icon-wordpress\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M2.268 
16q0-2.911 1.196-5.589l6.554 17.946q-3.5-1.696-5.625-5.018t-2.125-7.339zM25.268 15.304q0 0.339-0.045 0.688t-0.179 0.884-0.205 0.786-0.313 1.054-0.313 1.036l-1.357 4.571-4.964-14.75q0.821-0.054 1.571-0.143 0.339-0.036 0.464-0.33t-0.045-0.554-0.509-0.241l-3.661 0.179q-1.339-0.018-3.607-0.179-0.214-0.018-0.366 0.089t-0.205 0.268-0.027 0.33 0.161 0.295 0.348 0.143l1.429 0.143 2.143 5.857-3 9-5-14.857q0.821-0.054 1.571-0.143 0.339-0.036 0.464-0.33t-0.045-0.554-0.509-0.241l-3.661 0.179q-0.125 0-0.411-0.009t-0.464-0.009q1.875-2.857 4.902-4.527t6.563-1.67q2.625 0 5.009 0.946t4.259 2.661h-0.179q-0.982 0-1.643 0.723t-0.661 1.705q0 0.214 0.036 0.429t0.071 0.384 0.143 0.411 0.161 0.375 0.214 0.402 0.223 0.375 0.259 0.429 0.25 0.411q1.125 1.911 1.125 3.786zM16.232 17.196l4.232 11.554q0.018 0.107 0.089 0.196-2.25 0.786-4.554 0.786-2 0-3.875-0.571zM28.036 9.411q1.696 3.107 1.696 6.589 0 3.732-1.857 6.884t-4.982 4.973l4.196-12.107q1.054-3.018 1.054-4.929 0-0.75-0.107-1.411zM16 0q3.25 0 6.214 1.268t5.107 3.411 3.411 5.107 1.268 6.214-1.268 6.214-3.411 5.107-5.107 3.411-6.214 1.268-6.214-1.268-5.107-3.411-3.411-5.107-1.268-6.214 1.268-6.214 3.411-5.107 5.107-3.411 6.214-1.268zM16 31.268q3.089 0 5.92-1.214t4.875-3.259 3.259-4.875 1.214-5.92-1.214-5.92-3.259-4.875-4.875-3.259-5.92-1.214-5.92 1.214-4.875 3.259-3.259 4.875-1.214 5.92 1.214 5.92 3.259 4.875 4.875 3.259 5.92 1.214z\">\n </path>\n </symbol>\n <symbol id=\"icon-stumbleupon\" viewbox=\"0 0 34 32\">\n <path class=\"path1\" d=\"M18.964 12.714v-2.107q0-0.75-0.536-1.286t-1.286-0.536-1.286 0.536-0.536 1.286v10.929q0 3.125-2.25 5.339t-5.411 2.214q-3.179 0-5.42-2.241t-2.241-5.42v-4.75h5.857v4.679q0 0.768 0.536 1.295t1.286 0.527 1.286-0.527 0.536-1.295v-11.071q0-3.054 2.259-5.214t5.384-2.161q3.143 0 5.393 2.179t2.25 5.25v2.429l-3.482 1.036zM28.429 16.679h5.857v4.75q0 3.179-2.241 5.42t-5.42 2.241q-3.161 0-5.411-2.223t-2.25-5.366v-4.786l2.339 1.089 3.482-1.036v4.821q0 0.75 0.536 1.277t1.286 0.527 1.286-0.527 0.536-1.277v-4.911z\">\n </path>\n </symbol>\n <symbol id=\"icon-digg\" viewbox=\"0 0 37 32\">\n <path class=\"path1\" d=\"M5.857 5.036h3.643v17.554h-9.5v-12.446h5.857v-5.107zM5.857 19.661v-6.589h-2.196v6.589h2.196zM10.964 10.143v12.446h3.661v-12.446h-3.661zM10.964 5.036v3.643h3.661v-3.643h-3.661zM16.089 10.143h9.518v16.821h-9.518v-2.911h5.857v-1.464h-5.857v-12.446zM21.946 19.661v-6.589h-2.196v6.589h2.196zM27.071 10.143h9.5v16.821h-9.5v-2.911h5.839v-1.464h-5.839v-12.446zM32.911 19.661v-6.589h-2.196v6.589h2.196z\">\n </path>\n </symbol>\n <symbol id=\"icon-spotify\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M20.125 21.607q0-0.571-0.536-0.911-3.446-2.054-7.982-2.054-2.375 0-5.125 0.607-0.75 0.161-0.75 0.929 0 0.357 0.241 0.616t0.634 0.259q0.089 0 0.661-0.143 2.357-0.482 4.339-0.482 4.036 0 7.089 1.839 0.339 0.196 0.589 0.196 0.339 0 0.589-0.241t0.25-0.616zM21.839 17.768q0-0.714-0.625-1.089-4.232-2.518-9.786-2.518-2.732 0-5.411 0.75-0.857 0.232-0.857 1.143 0 0.446 0.313 0.759t0.759 0.313q0.125 0 0.661-0.143 2.179-0.589 4.482-0.589 4.982 0 8.714 2.214 0.429 0.232 0.679 0.232 0.446 0 0.759-0.313t0.313-0.759zM23.768 13.339q0-0.839-0.714-1.25-2.25-1.304-5.232-1.973t-6.125-0.67q-3.643 0-6.5 0.839-0.411 0.125-0.688 0.455t-0.277 0.866q0 0.554 0.366 0.929t0.92 0.375q0.196 0 0.714-0.143 2.375-0.661 5.482-0.661 2.839 0 5.527 0.607t4.527 1.696q0.375 0.214 0.714 0.214 0.518 0 0.902-0.366t0.384-0.92zM27.429 16q0 3.732-1.839 6.884t-4.991 4.991-6.884 1.839-6.884-1.839-4.991-4.991-1.839-6.884 1.839-6.884 4.991-4.991 6.884-1.839 6.884 1.839 4.991 4.991 1.839 
6.884z\">\n </path>\n </symbol>\n <symbol id=\"icon-soundcloud\" viewbox=\"0 0 41 32\">\n <path class=\"path1\" d=\"M14 24.5l0.286-4.304-0.286-9.339q-0.018-0.179-0.134-0.304t-0.295-0.125q-0.161 0-0.286 0.125t-0.125 0.304l-0.25 9.339 0.25 4.304q0.018 0.179 0.134 0.295t0.277 0.116q0.393 0 0.429-0.411zM19.286 23.982l0.196-3.768-0.214-10.464q0-0.286-0.232-0.429-0.143-0.089-0.286-0.089t-0.286 0.089q-0.232 0.143-0.232 0.429l-0.018 0.107-0.179 10.339q0 0.018 0.196 4.214v0.018q0 0.179 0.107 0.304 0.161 0.196 0.411 0.196 0.196 0 0.357-0.161 0.161-0.125 0.161-0.357zM0.625 17.911l0.357 2.286-0.357 2.25q-0.036 0.161-0.161 0.161t-0.161-0.161l-0.304-2.25 0.304-2.286q0.036-0.161 0.161-0.161t0.161 0.161zM2.161 16.5l0.464 3.696-0.464 3.625q-0.036 0.161-0.179 0.161-0.161 0-0.161-0.179l-0.411-3.607 0.411-3.696q0-0.161 0.161-0.161 0.143 0 0.179 0.161zM3.804 15.821l0.446 4.375-0.446 4.232q0 0.196-0.196 0.196-0.179 0-0.214-0.196l-0.375-4.232 0.375-4.375q0.036-0.214 0.214-0.214 0.196 0 0.196 0.214zM5.482 15.696l0.411 4.5-0.411 4.357q-0.036 0.232-0.25 0.232-0.232 0-0.232-0.232l-0.375-4.357 0.375-4.5q0-0.232 0.232-0.232 0.214 0 0.25 0.232zM7.161 16.018l0.375 4.179-0.375 4.393q-0.036 0.286-0.286 0.286-0.107 0-0.188-0.080t-0.080-0.205l-0.357-4.393 0.357-4.179q0-0.107 0.080-0.188t0.188-0.080q0.25 0 0.286 0.268zM8.839 13.411l0.375 6.786-0.375 4.393q0 0.125-0.089 0.223t-0.214 0.098q-0.286 0-0.321-0.321l-0.321-4.393 0.321-6.786q0.036-0.321 0.321-0.321 0.125 0 0.214 0.098t0.089 0.223zM10.518 11.875l0.339 8.357-0.339 4.357q0 0.143-0.098 0.241t-0.241 0.098q-0.321 0-0.357-0.339l-0.286-4.357 0.286-8.357q0.036-0.339 0.357-0.339 0.143 0 0.241 0.098t0.098 0.241zM12.268 11.161l0.321 9.036-0.321 4.321q-0.036 0.375-0.393 0.375-0.339 0-0.375-0.375l-0.286-4.321 0.286-9.036q0-0.161 0.116-0.277t0.259-0.116q0.161 0 0.268 0.116t0.125 0.277zM19.268 24.411v0 0zM15.732 11.089l0.268 9.107-0.268 4.268q0 0.179-0.134 0.313t-0.313 0.134-0.304-0.125-0.143-0.321l-0.25-4.268 0.25-9.107q0-0.196 0.134-0.321t0.313-0.125 0.313 0.125 0.134 0.321zM17.5 11.429l0.25 8.786-0.25 4.214q0 0.196-0.143 0.339t-0.339 0.143-0.339-0.143-0.161-0.339l-0.214-4.214 0.214-8.786q0.018-0.214 0.161-0.357t0.339-0.143 0.33 0.143 0.152 0.357zM21.286 20.214l-0.25 4.125q0 0.232-0.161 0.393t-0.393 0.161-0.393-0.161-0.179-0.393l-0.107-2.036-0.107-2.089 0.214-11.357v-0.054q0.036-0.268 0.214-0.429 0.161-0.125 0.357-0.125 0.143 0 0.268 0.089 0.25 0.143 0.286 0.464zM41.143 19.875q0 2.089-1.482 3.563t-3.571 1.473h-14.036q-0.232-0.036-0.393-0.196t-0.161-0.393v-16.054q0-0.411 0.5-0.589 1.518-0.607 3.232-0.607 3.482 0 6.036 2.348t2.857 5.777q0.946-0.393 1.964-0.393 2.089 0 3.571 1.482t1.482 3.589z\">\n </path>\n </symbol>\n <symbol id=\"icon-codepen\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M3.857 20.875l10.768 7.179v-6.411l-5.964-3.982zM2.75 18.304l3.446-2.304-3.446-2.304v4.607zM17.375 28.054l10.768-7.179-4.804-3.214-5.964 3.982v6.411zM16 19.25l4.857-3.25-4.857-3.25-4.857 3.25zM8.661 14.339l5.964-3.982v-6.411l-10.768 7.179zM25.804 16l3.446 2.304v-4.607zM23.339 14.339l4.804-3.214-10.768-7.179v6.411zM32 11.125v9.75q0 0.732-0.607 1.143l-14.625 9.75q-0.375 0.232-0.768 0.232t-0.768-0.232l-14.625-9.75q-0.607-0.411-0.607-1.143v-9.75q0-0.732 0.607-1.143l14.625-9.75q0.375-0.232 0.768-0.232t0.768 0.232l14.625 9.75q0.607 0.411 0.607 1.143z\">\n </path>\n </symbol>\n <symbol id=\"icon-twitch\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M16 7.75v7.75h-2.589v-7.75h2.589zM23.107 7.75v7.75h-2.589v-7.75h2.589zM23.107 
21.321l4.518-4.536v-14.196h-21.321v18.732h5.821v3.875l3.875-3.875h7.107zM30.214 0v18.089l-7.75 7.75h-5.821l-3.875 3.875h-3.875v-3.875h-7.107v-20.679l1.946-5.161h26.482z\">\n </path>\n </symbol>\n <symbol id=\"icon-meanpath\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M23.411 15.036v2.036q0 0.429-0.241 0.679t-0.67 0.25h-3.607q-0.429 0-0.679-0.25t-0.25-0.679v-2.036q0-0.429 0.25-0.679t0.679-0.25h3.607q0.429 0 0.67 0.25t0.241 0.679zM14.661 19.143v-4.464q0-0.946-0.58-1.527t-1.527-0.58h-2.375q-1.214 0-1.714 0.929-0.5-0.929-1.714-0.929h-2.321q-0.946 0-1.527 0.58t-0.58 1.527v4.464q0 0.393 0.375 0.393h0.982q0.393 0 0.393-0.393v-4.107q0-0.429 0.241-0.679t0.688-0.25h1.679q0.429 0 0.679 0.25t0.25 0.679v4.107q0 0.393 0.375 0.393h0.964q0.393 0 0.393-0.393v-4.107q0-0.429 0.25-0.679t0.679-0.25h1.732q0.429 0 0.67 0.25t0.241 0.679v4.107q0 0.393 0.393 0.393h0.982q0.375 0 0.375-0.393zM25.179 17.429v-2.75q0-0.946-0.589-1.527t-1.536-0.58h-4.714q-0.946 0-1.536 0.58t-0.589 1.527v7.321q0 0.375 0.393 0.375h0.982q0.375 0 0.375-0.375v-3.214q0.554 0.75 1.679 0.75h3.411q0.946 0 1.536-0.58t0.589-1.527zM27.429 6.429v19.143q0 1.714-1.214 2.929t-2.929 1.214h-19.143q-1.714 0-2.929-1.214t-1.214-2.929v-19.143q0-1.714 1.214-2.929t2.929-1.214h19.143q1.714 0 2.929 1.214t1.214 2.929z\">\n </path>\n </symbol>\n <symbol id=\"icon-pinterest-p\" viewbox=\"0 0 23 32\">\n <path class=\"path1\" d=\"M0 10.661q0-1.929 0.67-3.634t1.848-2.973 2.714-2.196 3.304-1.393 3.607-0.464q2.821 0 5.25 1.188t3.946 3.455 1.518 5.125q0 1.714-0.339 3.357t-1.071 3.161-1.786 2.67-2.589 1.839-3.375 0.688q-1.214 0-2.411-0.571t-1.714-1.571q-0.179 0.696-0.5 2.009t-0.42 1.696-0.366 1.268-0.464 1.268-0.571 1.116-0.821 1.384-1.107 1.545l-0.25 0.089-0.161-0.179q-0.268-2.804-0.268-3.357 0-1.643 0.384-3.688t1.188-5.134 0.929-3.625q-0.571-1.161-0.571-3.018 0-1.482 0.929-2.786t2.357-1.304q1.089 0 1.696 0.723t0.607 1.83q0 1.179-0.786 3.411t-0.786 3.339q0 1.125 0.804 1.866t1.946 0.741q0.982 0 1.821-0.446t1.402-1.214 1-1.696 0.679-1.973 0.357-1.982 0.116-1.777q0-3.089-1.955-4.813t-5.098-1.723q-3.571 0-5.964 2.313t-2.393 5.866q0 0.786 0.223 1.518t0.482 1.161 0.482 0.813 0.223 0.545q0 0.5-0.268 1.304t-0.661 0.804q-0.036 0-0.304-0.054-0.911-0.268-1.616-1t-1.089-1.688-0.58-1.929-0.196-1.902z\">\n </path>\n </symbol>\n <symbol id=\"icon-periscope\" viewbox=\"0 0 24 28\">\n <path class=\"path1\" d=\"M12.285,1C6.696,1,2.277,5.643,2.277,11.243c0,5.851,7.77,14.578,10.007,14.578c1.959,0,9.729-8.728,9.729-14.578 C22.015,5.643,17.596,1,12.285,1z M12.317,16.551c-3.473,0-6.152-2.611-6.152-5.664c0-1.292,0.39-2.472,1.065-3.438 c0.206,1.084,1.18,1.906,2.352,1.906c1.322,0,2.393-1.043,2.393-2.333c0-0.832-0.447-1.561-1.119-1.975 c0.467-0.105,0.955-0.161,1.46-0.161c3.133,0,5.81,2.611,5.81,5.998C18.126,13.94,15.449,16.551,12.317,16.551z\">\n </path>\n </symbol>\n <symbol id=\"icon-get-pocket\" viewbox=\"0 0 31 32\">\n <path class=\"path1\" d=\"M27.946 2.286q1.161 0 1.964 0.813t0.804 1.973v9.268q0 3.143-1.214 6t-3.259 4.911-4.893 3.259-5.973 1.205q-3.143 0-5.991-1.205t-4.902-3.259-3.268-4.911-1.214-6v-9.268q0-1.143 0.821-1.964t1.964-0.821h25.161zM15.375 21.286q0.839 0 1.464-0.589l7.214-6.929q0.661-0.625 0.661-1.518 0-0.875-0.616-1.491t-1.491-0.616q-0.839 0-1.464 0.589l-5.768 5.536-5.768-5.536q-0.625-0.589-1.446-0.589-0.875 0-1.491 0.616t-0.616 1.491q0 0.911 0.643 1.518l7.232 6.929q0.589 0.589 1.446 0.589z\">\n </path>\n </symbol>\n <symbol id=\"icon-vimeo\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M30.518 9.25q-0.179 4.214-5.929 11.625-5.946 7.696-10.036 7.696-2.536 
0-4.286-4.696-0.786-2.857-2.357-8.607-1.286-4.679-2.804-4.679-0.321 0-2.268 1.357l-1.375-1.75q0.429-0.375 1.929-1.723t2.321-2.063q2.786-2.464 4.304-2.607 1.696-0.161 2.732 0.991t1.446 3.634q0.786 5.125 1.179 6.661 0.982 4.446 2.143 4.446 0.911 0 2.75-2.875 1.804-2.875 1.946-4.393 0.232-2.482-1.946-2.482-1.018 0-2.161 0.464 2.143-7.018 8.196-6.821 4.482 0.143 4.214 5.821z\">\n </path>\n </symbol>\n <symbol id=\"icon-reddit-alien\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M32 15.107q0 1.036-0.527 1.884t-1.42 1.295q0.214 0.821 0.214 1.714 0 2.768-1.902 5.125t-5.188 3.723-7.143 1.366-7.134-1.366-5.179-3.723-1.902-5.125q0-0.839 0.196-1.679-0.911-0.446-1.464-1.313t-0.554-1.902q0-1.464 1.036-2.509t2.518-1.045q1.518 0 2.589 1.125 3.893-2.714 9.196-2.893l2.071-9.304q0.054-0.232 0.268-0.375t0.464-0.089l6.589 1.446q0.321-0.661 0.964-1.063t1.411-0.402q1.107 0 1.893 0.777t0.786 1.884-0.786 1.893-1.893 0.786-1.884-0.777-0.777-1.884l-5.964-1.321-1.857 8.429q5.357 0.161 9.268 2.857 1.036-1.089 2.554-1.089 1.482 0 2.518 1.045t1.036 2.509zM7.464 18.661q0 1.107 0.777 1.893t1.884 0.786 1.893-0.786 0.786-1.893-0.786-1.884-1.893-0.777q-1.089 0-1.875 0.786t-0.786 1.875zM21.929 25q0.196-0.196 0.196-0.464t-0.196-0.464q-0.179-0.179-0.446-0.179t-0.464 0.179q-0.732 0.75-2.161 1.107t-2.857 0.357-2.857-0.357-2.161-1.107q-0.196-0.179-0.464-0.179t-0.446 0.179q-0.196 0.179-0.196 0.455t0.196 0.473q0.768 0.768 2.116 1.214t2.188 0.527 1.625 0.080 1.625-0.080 2.188-0.527 2.116-1.214zM21.875 21.339q1.107 0 1.884-0.786t0.777-1.893q0-1.089-0.786-1.875t-1.875-0.786q-1.107 0-1.893 0.777t-0.786 1.884 0.786 1.893 1.893 0.786z\">\n </path>\n </symbol>\n <symbol id=\"icon-hashtag\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M17.696 18.286l1.143-4.571h-4.536l-1.143 4.571h4.536zM31.411 9.286l-1 4q-0.125 0.429-0.554 0.429h-5.839l-1.143 4.571h5.554q0.268 0 0.446 0.214 0.179 0.25 0.107 0.5l-1 4q-0.089 0.429-0.554 0.429h-5.839l-1.446 5.857q-0.125 0.429-0.554 0.429h-4q-0.286 0-0.464-0.214-0.161-0.214-0.107-0.5l1.393-5.571h-4.536l-1.446 5.857q-0.125 0.429-0.554 0.429h-4.018q-0.268 0-0.446-0.214-0.161-0.214-0.107-0.5l1.393-5.571h-5.554q-0.268 0-0.446-0.214-0.161-0.214-0.107-0.5l1-4q0.125-0.429 0.554-0.429h5.839l1.143-4.571h-5.554q-0.268 0-0.446-0.214-0.179-0.25-0.107-0.5l1-4q0.089-0.429 0.554-0.429h5.839l1.446-5.857q0.125-0.429 0.571-0.429h4q0.268 0 0.446 0.214 0.161 0.214 0.107 0.5l-1.393 5.571h4.536l1.446-5.857q0.125-0.429 0.571-0.429h4q0.268 0 0.446 0.214 0.161 0.214 0.107 0.5l-1.393 5.571h5.554q0.268 0 0.446 0.214 0.161 0.214 0.107 0.5z\">\n </path>\n </symbol>\n <symbol id=\"icon-chain\" viewbox=\"0 0 30 32\">\n <path class=\"path1\" d=\"M26 21.714q0-0.714-0.5-1.214l-3.714-3.714q-0.5-0.5-1.214-0.5-0.75 0-1.286 0.571 0.054 0.054 0.339 0.33t0.384 0.384 0.268 0.339 0.232 0.455 0.063 0.491q0 0.714-0.5 1.214t-1.214 0.5q-0.268 0-0.491-0.063t-0.455-0.232-0.339-0.268-0.384-0.384-0.33-0.339q-0.589 0.554-0.589 1.304 0 0.714 0.5 1.214l3.679 3.696q0.482 0.482 1.214 0.482 0.714 0 1.214-0.464l2.625-2.607q0.5-0.5 0.5-1.196zM13.446 9.125q0-0.714-0.5-1.214l-3.679-3.696q-0.5-0.5-1.214-0.5-0.696 0-1.214 0.482l-2.625 2.607q-0.5 0.5-0.5 1.196 0 0.714 0.5 1.214l3.714 3.714q0.482 0.482 1.214 0.482 0.75 0 1.286-0.554-0.054-0.054-0.339-0.33t-0.384-0.384-0.268-0.339-0.232-0.455-0.063-0.491q0-0.714 0.5-1.214t1.214-0.5q0.268 0 0.491 0.063t0.455 0.232 0.339 0.268 0.384 0.384 0.33 0.339q0.589-0.554 0.589-1.304zM29.429 21.714q0 2.143-1.518 3.625l-2.625 2.607q-1.482 1.482-3.625 1.482-2.161 0-3.643-1.518l-3.679-3.696q-1.482-1.482-1.482-3.625 0-2.196 
 </body>\n</html>\n" ] ], [ [ "#### Print the Title of the website", "_____no_output_____" ] ], [ [ "print(soup.find('title').text)\n# check that we are at the correct website", "Data-X at Berkeley – For Rapid Impact in Digital Transformation\n" ] ], [ [ "#### Extract all paragraphs of text", "_____no_output_____" ] ], [ [ "for p in soup.find_all('p'):\n    print(p.text)", "For Rapid Impact in Digital Transformation\nIkhlaq Sidhu, IEOR, UC Berkeley (contact)\nAlexander Fred-Ojala, IEOR, UC Berkeley (contact)\nData-X is a framework designed at UC Berkeley for learning and applying AI, data science, and emerging technologies. Data-X fills a gap between theory and practice to empower the rapid development of data and AI projects that actually work. Data-X projects create new ventures, new research, and corporate innovations all over the world.\nLearn more from this article\nAt UC Berkeley, Data-X is offered as a 3 unit class called Applied Data Science with Venture Applications.\n\nApplied Data Science with Venture Applications\nFall 2018 course information:\nWhile the course offers instruction on tools and methods in data science, the Data-X project includes new open-ended problems with industry, social, and venture perspectives on utilizing data for rapid impact. Click here to see examples of projects completed earlier semesters.\nPast and Upcoming Workshops, Bootcamps and talks related to Data-X:\n" ] ],
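[ [ "A small added sketch (not part of the original notebook run, so no output is captured): the same paragraphs can be collected into a list with stray whitespace stripped, which is handier than printing when the text will be reused later.", "_____no_output_____" ] ], [ [ "# Added sketch: gather the paragraph texts into a list instead of printing them.\n# get_text(strip=True) trims the leading/trailing whitespace that the raw\n# .text attribute keeps.\nparagraphs = [p.get_text(strip=True) for p in soup.find_all('p')]\nprint(len(paragraphs), 'paragraphs scraped')", "_____no_output_____" ] ],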
[ [ "### Look at the navigation bar", "_____no_output_____" ] ], [ [ "navigation_bar = soup.find('nav')\nprint(navigation_bar)", "<nav aria-label=\"Top Menu\" class=\"main-navigation\" id=\"site-navigation\" role=\"navigation\">\n<button aria-controls=\"top-menu\" aria-expanded=\"false\" class=\"menu-toggle\">\n<svg aria-hidden=\"true\" class=\"icon icon-bars\" role=\"img\"> <use href=\"#icon-bars\" xlink:href=\"#icon-bars\"></use> </svg><svg aria-hidden=\"true\" class=\"icon icon-close\" role=\"img\"> <use href=\"#icon-close\" xlink:href=\"#icon-close\"></use> </svg>Menu\t</button>\n<div class=\"menu-primary-container\"><ul class=\"menu\" id=\"top-menu\"><li class=\"menu-item menu-item-type-custom menu-item-object-custom current-menu-item current_page_item menu-item-8\" id=\"menu-item-8\"><a href=\"/\">Home</a></li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-726\" id=\"menu-item-726\"><a href=\"https://data-x.blog/about-data-x/\">About Data-X</a></li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-183\" id=\"menu-item-183\"><a href=\"https://data-x.blog/resources/\">Resources</a></li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1055\" id=\"menu-item-1055\"><a href=\"https://data-x.blog/dx-online/\">Data-X Online</a></li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-102\" id=\"menu-item-102\"><a href=\"https://data-x.blog/syllabus/\">Berkeley Syllabus</a></li>\n<li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-has-children menu-item-719\" id=\"menu-item-719\"><a href=\"http://data-x.blog/projects\">Projects<svg aria-hidden=\"true\" class=\"icon icon-angle-down\" role=\"img\"> <use href=\"#icon-angle-down\" xlink:href=\"#icon-angle-down\"></use> </svg></a>\n<ul class=\"sub-menu\">\n<li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-572\" id=\"menu-item-572\"><a href=\"http://data-x.blog/projects\">Completed Projects</a></li>\n<li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-737\" id=\"menu-item-737\"><a href=\"http://data-x.blog/project-ideas\">Project Ideas</a></li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-485\" id=\"menu-item-485\"><a href=\"https://data-x.blog/advisors/\">Advisors</a></li>\n</ul>\n</li>\n<li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-738\" id=\"menu-item-738\"><a href=\"http://data-x.blog/posts\">Posts</a></li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-129\" id=\"menu-item-129\"><a href=\"https://data-x.blog/project/\">Labs</a></li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-10\" id=\"menu-item-10\"><a href=\"https://data-x.blog/contact/\">Contact</a></li>\n</ul></div>\n<a class=\"menu-scroll-down\" href=\"#content\"><svg aria-hidden=\"true\" class=\"icon icon-arrow-right\" role=\"img\"> <use href=\"#icon-arrow-right\" xlink:href=\"#icon-arrow-right\"></use> </svg><span class=\"screen-reader-text\">Scroll down to content</span></a>\n</nav>\n" ] ],
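[ [ "Rather than reading the raw `<nav>` markup by eye, a short added sketch (illustrative only; not executed here) pairs each menu label with its target URL.", "_____no_output_____" ] ], [ [ "# Added sketch: list every (label, href) pair in the navigation bar.\n# a.get('href') returns None when an anchor carries no href attribute.\nfor a in navigation_bar.find_all('a'):\n    label = a.get_text(strip=True)\n    print(label, '->', a.get('href'))", "_____no_output_____" ] ],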
href=\"https://data-x.blog/project/\">Labs</a></li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-10\" id=\"menu-item-10\"><a href=\"https://data-x.blog/contact/\">Contact</a></li>\n</ul></div>\n<a class=\"menu-scroll-down\" href=\"#content\"><svg aria-hidden=\"true\" class=\"icon icon-arrow-right\" role=\"img\"> <use href=\"#icon-arrow-right\" xlink:href=\"#icon-arrow-right\"></use> </svg><span class=\"screen-reader-text\">Scroll down to content</span></a>\n</nav>\n" ], [ "# These are the linked subpages in the navigation bar\nnav_bar = navigation_bar.text\nprint(nav_bar)", "\n\n Menu\t\nHome\nAbout Data-X\nResources\nData-X Online\nBerkeley Syllabus\nProjects \n\nCompleted Projects\nProject Ideas\nAdvisors\n\n\nPosts\nLabs\nContact\n\n Scroll down to content\n\n" ] ], [ [ "### Scrape the Syllabus of its content\n(maybe to use in an App)", "_____no_output_____" ] ], [ [ "# Now we want to find the Syllabus, \n# however we are at the root web page, not displaying the Syllabus\n\n# Get all links from navigation bar at the data-x home webpage\nfor url in navigation_bar.find_all('a'): \n link = url.get('href')\n if 'data-x.blog' in link: # check link to a subpage\n print(link) \n if 'syllabus' in link:\n syllabus_url = link", "https://data-x.blog/about-data-x/\nhttps://data-x.blog/resources/\nhttps://data-x.blog/dx-online/\nhttps://data-x.blog/syllabus/\nhttp://data-x.blog/projects\nhttp://data-x.blog/projects\nhttp://data-x.blog/project-ideas\nhttps://data-x.blog/advisors/\nhttp://data-x.blog/posts\nhttps://data-x.blog/project/\nhttps://data-x.blog/contact/\n" ], [ "# syllabus is located at https://data-x.blog/syllabus/\nprint(syllabus_url)", "https://data-x.blog/syllabus/\n" ], [ "# Open new connection to the Syllabus url. 
Replace soup object.\n\nsource = requests.get(syllabus_url).content\nsoup = bs.BeautifulSoup(source, 'html.parser')\n\nprint(soup.body.prettify()) \n# we can see that the Syllabus is built up of <td>, <tr> and <table> tags", "<body class=\"page-template-default page page-id-94 has-header-image page-one-column colors-light cannot-edit\">\n <div class=\"site\" id=\"page\">\n <a class=\"skip-link screen-reader-text\" href=\"#content\">\n Skip to content\n </a>\n <header class=\"site-header\" id=\"masthead\" role=\"banner\">\n <div class=\"custom-header\">\n <div class=\"custom-header-media\">\n <div class=\"wp-custom-header\" id=\"wp-custom-header\">\n <img alt=\"Data-X at Berkeley\" height=\"973\" sizes=\"100vw\" src=\"https://data-x.blog/wp-content/uploads/2018/04/cropped-AdobeStock_120712749-Converted.png\" srcset=\"https://i1.wp.com/data-x.blog/wp-content/uploads/2018/04/cropped-AdobeStock_120712749-Converted.png?w=2000&amp;ssl=1 2000w, https://i1.wp.com/data-x.blog/wp-content/uploads/2018/04/cropped-AdobeStock_120712749-Converted.png?resize=300%2C146&amp;ssl=1 300w, https://i1.wp.com/data-x.blog/wp-content/uploads/2018/04/cropped-AdobeStock_120712749-Converted.png?resize=768%2C374&amp;ssl=1 768w, https://i1.wp.com/data-x.blog/wp-content/uploads/2018/04/cropped-AdobeStock_120712749-Converted.png?resize=1024%2C498&amp;ssl=1 1024w, https://i1.wp.com/data-x.blog/wp-content/uploads/2018/04/cropped-AdobeStock_120712749-Converted.png?w=1480&amp;ssl=1 1480w\" width=\"2000\"/>\n </div>\n </div>\n <div class=\"site-branding\">\n <div class=\"wrap\">\n <div class=\"site-branding-text\">\n <p class=\"site-title\">\n <a href=\"https://data-x.blog/\" rel=\"home\">\n Data-X at Berkeley\n </a>\n </p>\n <p class=\"site-description\">\n For Rapid Impact in Digital Transformation\n </p>\n </div>\n <!-- .site-branding-text -->\n </div>\n <!-- .wrap -->\n </div>\n <!-- .site-branding -->\n </div>\n <!-- .custom-header -->\n <div class=\"navigation-top\">\n <div class=\"wrap\">\n <nav aria-label=\"Top Menu\" class=\"main-navigation\" id=\"site-navigation\" role=\"navigation\">\n <button aria-controls=\"top-menu\" aria-expanded=\"false\" class=\"menu-toggle\">\n <svg aria-hidden=\"true\" class=\"icon icon-bars\" role=\"img\">\n <use href=\"#icon-bars\" xlink:href=\"#icon-bars\">\n </use>\n </svg>\n <svg aria-hidden=\"true\" class=\"icon icon-close\" role=\"img\">\n <use href=\"#icon-close\" xlink:href=\"#icon-close\">\n </use>\n </svg>\n Menu\n </button>\n <div class=\"menu-primary-container\">\n <ul class=\"menu\" id=\"top-menu\">\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-8\" id=\"menu-item-8\">\n <a href=\"/\">\n Home\n </a>\n </li>\n <li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-726\" id=\"menu-item-726\">\n <a href=\"https://data-x.blog/about-data-x/\">\n About Data-X\n </a>\n </li>\n <li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-183\" id=\"menu-item-183\">\n <a href=\"https://data-x.blog/resources/\">\n Resources\n </a>\n </li>\n <li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1055\" id=\"menu-item-1055\">\n <a href=\"https://data-x.blog/dx-online/\">\n Data-X Online\n </a>\n </li>\n <li class=\"menu-item menu-item-type-post_type menu-item-object-page current-menu-item page_item page-item-94 current_page_item menu-item-102\" id=\"menu-item-102\">\n <a href=\"https://data-x.blog/syllabus/\">\n Berkeley Syllabus\n </a>\n </li>\n <li class=\"menu-item menu-item-type-custom 
menu-item-object-custom menu-item-has-children menu-item-719\" id=\"menu-item-719\">\n <a href=\"http://data-x.blog/projects\">\n Projects\n <svg aria-hidden=\"true\" class=\"icon icon-angle-down\" role=\"img\">\n <use href=\"#icon-angle-down\" xlink:href=\"#icon-angle-down\">\n </use>\n </svg>\n </a>\n <ul class=\"sub-menu\">\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-572\" id=\"menu-item-572\">\n <a href=\"http://data-x.blog/projects\">\n Completed Projects\n </a>\n </li>\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-737\" id=\"menu-item-737\">\n <a href=\"http://data-x.blog/project-ideas\">\n Project Ideas\n </a>\n </li>\n <li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-485\" id=\"menu-item-485\">\n <a href=\"https://data-x.blog/advisors/\">\n Advisors\n </a>\n </li>\n </ul>\n </li>\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-738\" id=\"menu-item-738\">\n <a href=\"http://data-x.blog/posts\">\n Posts\n </a>\n </li>\n <li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-129\" id=\"menu-item-129\">\n <a href=\"https://data-x.blog/project/\">\n Labs\n </a>\n </li>\n <li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-10\" id=\"menu-item-10\">\n <a href=\"https://data-x.blog/contact/\">\n Contact\n </a>\n </li>\n </ul>\n </div>\n </nav>\n <!-- #site-navigation -->\n </div>\n <!-- .wrap -->\n </div>\n <!-- .navigation-top -->\n </header>\n <!-- #masthead -->\n <div class=\"site-content-contain\">\n <div class=\"site-content\" id=\"content\">\n <div class=\"wrap\">\n <div class=\"content-area\" id=\"primary\">\n <main class=\"site-main\" id=\"main\" role=\"main\">\n <article class=\"post-94 page type-page status-publish hentry\" id=\"post-94\">\n <header class=\"entry-header\">\n <h1 class=\"entry-title\">\n Berkeley Syllabus\n </h1>\n </header>\n <!-- .entry-header -->\n <div class=\"entry-content\">\n <h2 style=\"text-align: left;\">\n <strong>\n Applied Data Science with Venture Applications\n <br/>\n IEOR 135/ 290\n <br/>\n </strong>\n </h2>\n <p style=\"text-align: left;\">\n <strong>\n Instructors\n </strong>\n : Ikhlaq Sidhu &amp; Alexander Fred-Ojala\n <br/>\n Department of Industrial Engineering &amp; Operations Research\n </p>\n <p style=\"text-align: left;\">\n 3 Units, Lecture and Lab\n </p>\n <p style=\"text-align: left;\">\n <strong>\n Prerequisite\n </strong>\n : Interested students should have working knowledge of Python in advance of the class, and also should have completed a fundamental probability or statistics course.\n </p>\n <p style=\"text-align: left;\">\n <strong>\n Teaching Team:\n </strong>\n </p>\n <ul style=\"text-align: left;\">\n <li>\n Ikhlaq Sidhu, IEOR, [email protected] (Instructor)\n </li>\n <li>\n Alexander Fred-Ojala, [email protected] (Instructor)\n </li>\n <li>\n Sumayah Rahman,ย [email protected] (GSI)\n </li>\n <li>\n Tanya Piplani, [email protected] (GSI)\n </li>\n <li>\n Vanessa Salas,ย [email protected] (Data-X Lab)\n </li>\n </ul>\n <p>\n <strong>\n Extended Team:\n </strong>\n </p>\n <ul style=\"text-align: left;\">\n <li>\n Sana Iqbal, [email protected]\n </li>\n <li>\n Kevin Bozhe Li, [email protected]\n </li>\n </ul>\n <p style=\"text-align: left;\">\n <strong>\n Office Hours:\n <br/>\n </strong>\n <strong>\n Sumayah / Tanya\n </strong>\n (HW + project related): Wednesdays 2-3 PM,ย 438 Koshland Hall\n <br/>\n <strong>\n Alex\n </strong>\n (Project + Tech 
Setup): ย Fridays 1-2 PM, SCET / California Memorial Stadium #122\n <br/>\n <strong>\n Ikhlaq Sidhu\n </strong>\n : by appointmentย via Melissa Glass,\n <strong>\n [email protected]\n </strong>\n </p>\n <h2 style=\"text-align: left;\">\n <u>\n Description\n </u>\n </h2>\n <p style=\"text-align: left;\">\n <strong>\n Course Description:\n </strong>\n </p>\n <p style=\"text-align: left;\">\n This course is designed primarily for upper-level undergraduate engineering and technical students. Graduate students at a mezzanine level can also take a co-located section of the course. The course material offers an understanding at the intersection of foundational math mathematical concepts and current computer science tools, with applications of real world problems. ย Math concepts includeย filtering, prediction, classification, decision-making, entropy as part of information theory, LTI systems, spectral analysis, and frameworks for learning from data. ย Computer science tools for this course include open source tools such as Python with Numpy, Scipy, Pandas, SQL, NLTK, Tensor Flow, and Spark.ย ย The course includes a team based data application project.\n </p>\n <p style=\"text-align: left;\">\n The lectures present alternating and related topics between mathematical frameworks and the same concept within code examples. One goal is that students who understand math concepts can bring them to life with scalable CS tools.ย  And, students who are comfortable with computer software code can create systems by understanding selected, structured mathematical frameworks.ย This course is designed to be more applied than a traditional ML algorithms course as it includes a systems view and covers implementation concepts.\n </p>\n <p style=\"text-align: left;\">\n Applications of this course are broad.ย  They include industry sectors such as finance, health, engineering, transportation, energy, and many others.ย  The lab section of the course meets in parallel with the lecture.ย  In the lab, the first 4 weeks are used to generate a story and low-tech demo for a real-world project that performs actions on data, and the following 8 weeks will be an agile sprint, with a demonstration of working project code by the end of the class.ย The skill set learned in this class can be applied to a broad range of industry sectors such as finance, health, engineering, transportation, energy, and many others.\n </p>\n <p>\n <a href=\"https://github.com/sanaiqbalwani/Data-x-F-2015-Projects/tree/master/projects\" rel=\"noopener\" target=\"_blank\">\n Find our amazing projects from Fall-2017 here.\n </a>\n </p>\n <p style=\"text-align: left;\">\n <strong>\n <u>\n TEXTS AND REQUIRED SUPPLIES\n </u>\n </strong>\n </p>\n <ul style=\"text-align: left;\">\n <li>\n <a href=\"https://data-x.blog/\" rel=\"noopener\" target=\"_blank\">\n General Information\n </a>\n </li>\n <li>\n <a href=\"https://github.com/ikhlaqsidhu/data-x\" rel=\"noopener\" target=\"_blank\">\n Github for Code and Slides\n </a>\n </li>\n <li>\n Anaconda Python Environment on personal computer\n </li>\n <li>\n <strong>\n Text Book\n </strong>\n :\n <a href=\"http://shop.oreilly.com/product/0636920052289.do\" rel=\"noopener\" target=\"_blank\">\n Hands-On Machine Learning with Scikit-Learn and TensorFlow By Aurรฉlien Gรฉron\n </a>\n (optional, but highly recommended)\n </li>\n </ul>\n <p style=\"text-align: left;\">\n <strong>\n <u>\n HOMEWORK, GRADING &amp; ATTENDANCE\n </u>\n </strong>\n </p>\n <p style=\"text-align: left;\">\n Class attendance and participation 
are expected, and sign-ins for sessions are tracked. ย Absences for unavoidable reasons should be preapproved whenever possible via an email to the GSI\n </p>\n <p style=\"text-align: left;\">\n <strong>\n Grading:ย (\n <em>\n Required to be taken on Letter Grade only\n </em>\n )\n <br/>\n </strong>\n The class will be graded according to the categories below. At the end of the class there will be a poster presentation + live demo during reading week where invited judges will provide assessment of each project.\n </p>\n <ul style=\"text-align: left;\">\n <li>\n Homework: 35%\n </li>\n <li>\n Attendance + Quizzes: 25%\n </li>\n <li>\n Low Tech Validated Solution 10%\n </li>\n <li>\n Final Project + Write up + Code Review: 30%\n </li>\n </ul>\n <p style=\"text-align: left;\">\n <strong>\n <u>\n Piazza (Fall 2018)\n </u>\n </strong>\n </p>\n <p style=\"text-align: left;\">\n <a href=\"https://piazza.com/class/jkl2irfsf9p5sl\">\n piazza.com/class/jkl2irfsf9p5sl\n </a>\n </p>\n <p style=\"text-align: left;\">\n <strong>\n <u>\n TENTATIVE SCHEDULE FOR SPRING 2018\n </u>\n </strong>\n </p>\n <ul style=\"text-align: left;\">\n <li>\n On a weekly basis, class sessions may start with a โ€œmeet a mentorโ€ and/or โ€œapplication model case studyโ€ section.*\n </li>\n <li>\n All slides and notebook samples will be updated\n <a href=\"https://github.com/ikhlaqsidhu/data-x\" rel=\"noopener\" target=\"_blank\">\n at this site\n </a>\n .\n </li>\n </ul>\n <p>\n </p>\n <table width=\"472\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 1:\n </strong>\n </td>\n <td width=\"341\">\n Introduction\n <br/>\n <strong>\n Theory\n </strong>\n : Overview of Frameworks for obtaining insights from data (Slides).\n <br/>\n <strong>\n Tools\n </strong>\n : Python Review\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n 1. Introduction to GitHub\n <br/>\n 2. Setting up Anaconda Environment\n <br/>\n 3. 
Coding with Python Review\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n </td>\n <td width=\"341\">\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n Office Hours Session that week for Environment Set Up\n </td>\n </tr>\n </tbody>\n </table>\n <table width=\"472\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 2:\n </strong>\n </td>\n <td width=\"341\">\n <strong>\n Tools:\n </strong>\n Linear Regression, Data as a Signal with Correlation\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n โ€”\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n Text Book Chapter 2 | Page 33 -45\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n Bring three ideas to the class.\n </td>\n </tr>\n </tbody>\n </table>\n <table width=\"472\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 3:\n </strong>\n </td>\n <td width=\"341\">\n <strong>\n Theory\n </strong>\n : Numpy\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n <ol>\n <li>\n Coding with Numpy\n </li>\n <li>\n Coding with Pandas\n </li>\n <li>\n Coding with Matplotlib\n </li>\n </ol>\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n <a href=\"https://www.datacamp.com/community/tutorials/pandas-tutorial-dataframe-python\">\n DataCamp\n </a>\n ,\n <a class=\"\" href=\"https://www.tutorialspoint.com/numpy/numpy_introduction.htm\">\n tutorialpoint\n </a>\n , Text Book Chapter 1 | Page 3 -13\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n Share ideas and finalize projects\n </td>\n </tr>\n </tbody>\n </table>\n <table width=\"472\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 4:\n </strong>\n </td>\n <td width=\"341\">\n <strong>\n Theory:\n </strong>\n <span style=\"font-weight: 400;\">\n Classification and Logistic Regression\n </span>\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n Coding with Pandas, Matplotlib\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n Text Book Chapter 3| Page 81 -95\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n Develop insightful story and brainstorm solutions\n </td>\n </tr>\n </tbody>\n </table>\n <p>\n </p>\n <table width=\"472\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 5:\n </strong>\n </td>\n <td width=\"341\">\n <strong>\n <strong>\n <strong>\n Theory:\n </strong>\n </strong>\n </strong>\n <strong>\n <strong>\n <span style=\"font-weight: 400;\">\n Correlation\n </span>\n </strong>\n </strong>\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n โ€”\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n โ€”\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n Team break out discussions\n </td>\n </tr>\n </tbody>\n </table>\n <p>\n </p>\n <table width=\"472\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 6:\n </strong>\n </td>\n <td width=\"341\">\n <strong>\n Theory:\n </strong>\n <span style=\"font-weight: 400;\">\n Into to Skikit-Learn\n </span>\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n Coding with\n <span style=\"font-weight: 400;\">\n Skikit-Learn\n </span>\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n โ€”\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n 
Project\n </td>\n <td width=\"341\">\n โ€”\n </td>\n </tr>\n </tbody>\n </table>\n <p>\n </p>\n <table width=\"472\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 7:\n </strong>\n </td>\n <td width=\"341\">\n <strong>\n Theory:\n </strong>\n <span style=\"font-weight: 400;\">\n Matplotlib / Python Visualization\n </span>\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n Coding with\n <span style=\"font-weight: 400;\">\n <strong>\n </strong>\n Matplotlib\n </span>\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n โ€”\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n โ€”\n </td>\n </tr>\n </tbody>\n </table>\n <table width=\"472\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 8:\n </strong>\n </td>\n <td width=\"341\">\n <strong>\n Theory:\n </strong>\n Low Tech Demo Presentations\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n โ€”\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n Low Tech Demo Presentations\n </td>\n </tr>\n </tbody>\n </table>\n <p>\n </p>\n <table style=\"height: 159px;\" width=\"643\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 9:\n </strong>\n </td>\n <td width=\"341\">\n <strong>\n Theory:\n </strong>\n Classification &amp; Prediction\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n Reference Titanic Notebook\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n </td>\n </tr>\n </tbody>\n </table>\n <p>\n </p>\n <table width=\"472\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 10:\n </strong>\n </td>\n <td width=\"341\">\n <strong>\n Theory:\n </strong>\n Machine Learning &amp; Cross Validation\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n โ€”\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n </td>\n </tr>\n </tbody>\n </table>\n <p>\n </p>\n <table width=\"472\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 10:\n </strong>\n </td>\n <td width=\"341\">\n <strong>\n Theory:\n </strong>\n Decision Trees, Information Theory, Random Forest\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n Coding with python\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n Text Bookย Chapter 6, Chapter 7\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n </td>\n </tr>\n </tbody>\n </table>\n <table width=\"472\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 11:\n </strong>\n </td>\n <td width=\"341\">\n <strong>\n Tools: Webscrapingย / crawling\n <br/>\n </strong>\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n Text Book Chapter 4| Page 105 โ€“ 110\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n </td>\n </tr>\n </tbody>\n </table>\n <p>\n </p>\n <table width=\"472\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 12:\n </strong>\n </td>\n <td width=\"341\">\n <strong>\n Theory:\n </strong>\n <br/>\n 1. 
Introduction to Natural Language Processing โ€“ NLTK overview and Word2vec\n <br/>\n 2. Sentiment Analysis\n <br/>\n <strong>\n Tools:\n </strong>\n NLTK, Gensim, Tensorflow\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n Coding with NLTK, Gensim, Tensorflow\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n <a href=\"https://blog.algorithmia.com/introduction-natural-language-processing-nlp/\">\n Links\n </a>\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n Agile sprint with reflection\n </td>\n </tr>\n </tbody>\n </table>\n <table width=\"472\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 13:\n </strong>\n </td>\n <td width=\"341\">\n <strong>\n Theory:\n </strong>\n Polynomial Regression, Bias Variance Tradeoff, Regularization\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n Coding with Python\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n โ€”\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n </td>\n </tr>\n </tbody>\n </table>\n <table width=\"472\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 14:\n </strong>\n </td>\n <td width=\"341\">\n <strong>\n Theory:\n </strong>\n Introduction to Neural Networks- ANN, CNN, RNN\n <br/>\n <strong>\n Tools:\n </strong>\n Tensorflow\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n Coding with Tensorflow for image classification\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n Text Bookย Chapter 10| Page 256-272, Chapter 13| Page 357-359\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n Low Tech Demo and Validation Results\n </td>\n </tr>\n </tbody>\n </table>\n <p>\n </p>\n <table width=\"472\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 15:\n </strong>\n </td>\n <td width=\"341\">\n <strong>\n Theory:\n </strong>\n <br/>\n 1. Introduction to database\n <br/>\n 2.ย Introduction to SQL\n <br/>\n 3.ย ย Introduction to Block Chain as a database\n <br/>\n 4. 
Big Data Analysis with Spark\n <br/>\n <strong>\n Tools:\n </strong>\n SQL libraries in python, Solidity\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n Coding with python for SQLย  andย  Spark\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n Text Book\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n Agile sprint with reflection\n </td>\n </tr>\n </tbody>\n </table>\n <table style=\"height: 270px;\" width=\"680\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 16:\n </strong>\n </td>\n <td width=\"341\">\n <span style=\"font-weight: 400;\">\n <strong>\n Theory:\n </strong>\n Spectral Signals, LTI -Fundamentals and Applications\n </span>\n <br/>\n <strong>\n Tools:\n </strong>\n Temporal and Spatial Signal processing\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n Coding with python for signal processing\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n Text Book\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n Agile sprint with reflection\n </td>\n </tr>\n </tbody>\n </table>\n <table style=\"height: 192px;\" width=\"670\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 17:\n </strong>\n </td>\n <td width=\"341\">\n <strong>\n Theory:\n </strong>\n Reinforcement Learning primer\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n TBD\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Reading\n </td>\n <td width=\"341\">\n Text Bookย Chapter 16| Page 443-450\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n Agile sprint with reflection\n </td>\n </tr>\n </tbody>\n </table>\n <table style=\"height: 181px;\" width=\"678\">\n <tbody>\n <tr>\n <td width=\"132\">\n <strong>\n Topic 18:\n </strong>\n </td>\n <td width=\"341\">\n Project Presentations โ€“ Demo Day(s)\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Code\n </td>\n <td width=\"341\">\n Presentation including running code and code samples\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Due\n </td>\n <td width=\"341\">\n Includes preparation time in last week\n </td>\n </tr>\n <tr>\n <td width=\"132\">\n Project\n </td>\n <td width=\"341\">\n Final Presentations\n </td>\n </tr>\n </tbody>\n </table>\n <p>\n </p>\n <ul style=\"text-align: left;\">\n <li>\n To include, ย if possible tool: Connecting Pandas to SQL for Long-term storage. 
ย AWS / SQL / Parallelization.\n </li>\n <li>\n Example application topics may include examples such as recommendation engines, digital mirror, customer journey, bloom filters, fuzzy join applications.\n </li>\n </ul>\n <p style=\"text-align: left;\">\n <strong>\n <u>\n COURSE MODEL ILLUSTRATION:\n </u>\n </strong>\n </p>\n <p>\n <img alt=\"dx-project\" class=\"alignnone size-full wp-image-331 alignleft\" data-attachment-id=\"331\" data-comments-opened=\"1\" data-image-description=\"\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"dx-project\" data-large-file=\"https://i2.wp.com/data-x.blog/wp-content/uploads/2017/01/dx-project-e1498414303751.jpg?fit=573%2C228&amp;ssl=1\" data-medium-file=\"https://i2.wp.com/data-x.blog/wp-content/uploads/2017/01/dx-project-e1498414303751.jpg?fit=300%2C119&amp;ssl=1\" data-orig-file=\"https://i2.wp.com/data-x.blog/wp-content/uploads/2017/01/dx-project-e1498414303751.jpg?fit=573%2C228&amp;ssl=1\" data-orig-size=\"573,228\" data-permalink=\"https://data-x.blog/syllabus/dx-project/\" data-recalc-dims=\"1\" height=\"228\" src=\"https://i2.wp.com/data-x.blog/wp-content/uploads/2017/01/dx-project-e1498414303751.jpg?resize=573%2C228&amp;ssl=1\" width=\"573\"/>\n </p>\n <div class=\"sharedaddy sd-block sd-like jetpack-likes-widget-wrapper jetpack-likes-widget-unloaded\" data-name=\"like-post-frame-120928364-94-5bbfb17f79736\" data-src=\"https://widgets.wp.com/likes/#blog_id=120928364&amp;post_id=94&amp;origin=data-x.blog&amp;obj_id=120928364-94-5bbfb17f79736\" id=\"like-post-wrapper-120928364-94-5bbfb17f79736\">\n <h3 class=\"sd-title\">\n Like this:\n </h3>\n <div class=\"likes-widget-placeholder post-likes-widget-placeholder\" style=\"height: 55px;\">\n <span class=\"button\">\n <span>\n Like\n </span>\n </span>\n <span class=\"loading\">\n Loading...\n </span>\n </div>\n <span class=\"sd-text-color\">\n </span>\n <a class=\"sd-link-color\">\n </a>\n </div>\n </div>\n <!-- .entry-content -->\n </article>\n <!-- #post-## -->\n </main>\n <!-- #main -->\n </div>\n <!-- #primary -->\n </div>\n <!-- .wrap -->\n </div>\n <!-- #content -->\n <footer class=\"site-footer\" id=\"colophon\" role=\"contentinfo\">\n <div class=\"wrap\">\n <nav aria-label=\"Footer Social Links Menu\" class=\"social-navigation\" role=\"navigation\">\n <div class=\"menu-social-links-container\">\n <ul class=\"social-links-menu\" id=\"menu-social-links\">\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-11\" id=\"menu-item-11\">\n <a href=\"https://twitter.com/\">\n <span class=\"screen-reader-text\">\n Twitter\n </span>\n <svg aria-hidden=\"true\" class=\"icon icon-twitter\" role=\"img\">\n <use href=\"#icon-twitter\" xlink:href=\"#icon-twitter\">\n </use>\n </svg>\n </a>\n </li>\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-12\" id=\"menu-item-12\">\n <a href=\"https://www.facebook.com/\">\n <span class=\"screen-reader-text\">\n Facebook\n </span>\n <svg aria-hidden=\"true\" class=\"icon icon-facebook\" role=\"img\">\n <use href=\"#icon-facebook\" xlink:href=\"#icon-facebook\">\n </use>\n </svg>\n </a>\n </li>\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-13\" id=\"menu-item-13\">\n <a href=\"http://plus.google.com\">\n <span class=\"screen-reader-text\">\n Google+\n </span>\n <svg 
aria-hidden=\"true\" class=\"icon icon-google-plus\" role=\"img\">\n <use href=\"#icon-google-plus\" xlink:href=\"#icon-google-plus\">\n </use>\n </svg>\n </a>\n </li>\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-14\" id=\"menu-item-14\">\n <a href=\"http://github.com\">\n <span class=\"screen-reader-text\">\n GitHub\n </span>\n <svg aria-hidden=\"true\" class=\"icon icon-github\" role=\"img\">\n <use href=\"#icon-github\" xlink:href=\"#icon-github\">\n </use>\n </svg>\n </a>\n </li>\n <li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-15\" id=\"menu-item-15\">\n <a href=\"http://wordpress.com\">\n <span class=\"screen-reader-text\">\n WordPress.com\n </span>\n <svg aria-hidden=\"true\" class=\"icon icon-wordpress\" role=\"img\">\n <use href=\"#icon-wordpress\" xlink:href=\"#icon-wordpress\">\n </use>\n </svg>\n </a>\n </li>\n </ul>\n </div>\n </nav>\n <!-- .social-navigation -->\n <div class=\"site-info\">\n <a href=\"https://wordpress.com/?ref=footer_custom_powered\">\n Powered by WordPress.com\n </a>\n .\n </div>\n <!-- .site-info -->\n </div>\n <!-- .wrap -->\n </footer>\n <!-- #colophon -->\n </div>\n <!-- .site-content-contain -->\n </div>\n <!-- #page -->\n <!-- -->\n <!-- Your Google Analytics Plugin is missing the tracking ID -->\n <div style=\"display:none\">\n </div>\n <!--[if lte IE 8]>\n<link rel='stylesheet' id='jetpack-carousel-ie8fix-css' href='https://c0.wp.com/p/jetpack/6.6.1/modules/carousel/jetpack-carousel-ie8fix.css' type='text/css' media='all' />\n<![endif]-->\n <script src=\"https://c0.wp.com/p/jetpack/6.6.1/_inc/build/photon/photon.min.js\" type=\"text/javascript\">\n </script>\n <script src=\"https://s0.wp.com/wp-content/js/devicepx-jetpack.js?ver=201841\" type=\"text/javascript\">\n </script>\n <script type=\"text/javascript\">\n /* <![CDATA[ */\nvar twentyseventeenScreenReaderText = {\"quote\":\"<svg class=\\\"icon icon-quote-right\\\" aria-hidden=\\\"true\\\" role=\\\"img\\\"> <use href=\\\"#icon-quote-right\\\" xlink:href=\\\"#icon-quote-right\\\"><\\/use> <\\/svg>\",\"expand\":\"Expand child menu\",\"collapse\":\"Collapse child menu\",\"icon\":\"<svg class=\\\"icon icon-angle-down\\\" aria-hidden=\\\"true\\\" role=\\\"img\\\"> <use href=\\\"#icon-angle-down\\\" xlink:href=\\\"#icon-angle-down\\\"><\\/use> <span class=\\\"svg-fallback icon-angle-down\\\"><\\/span><\\/svg>\"};\n/* ]]> */\n </script>\n <script src=\"https://data-x.blog/wp-content/themes/twentyseventeen/assets/js/skip-link-focus-fix.js?ver=1.0\" type=\"text/javascript\">\n </script>\n <script src=\"https://data-x.blog/wp-content/themes/twentyseventeen/assets/js/navigation.js?ver=1.0\" type=\"text/javascript\">\n </script>\n <script src=\"https://data-x.blog/wp-content/themes/twentyseventeen/assets/js/global.js?ver=1.0\" type=\"text/javascript\">\n </script>\n <script src=\"https://data-x.blog/wp-content/themes/twentyseventeen/assets/js/jquery.scrollTo.js?ver=2.1.2\" type=\"text/javascript\">\n </script>\n <script src=\"https://c0.wp.com/p/jetpack/6.6.1/_inc/build/likes/queuehandler.min.js\" type=\"text/javascript\">\n </script>\n <script src=\"https://c0.wp.com/c/4.9.8/wp-includes/js/wp-embed.min.js\" type=\"text/javascript\">\n </script>\n <script src=\"https://c0.wp.com/p/jetpack/6.6.1/_inc/build/spin.min.js\" type=\"text/javascript\">\n </script>\n <script src=\"https://c0.wp.com/p/jetpack/6.6.1/_inc/build/jquery.spin.min.js\" type=\"text/javascript\">\n </script>\n <script type=\"text/javascript\">\n /* <![CDATA[ */\nvar 
jetpackCarouselStrings = {\"widths\":[370,700,1000,1200,1400,2000],\"is_logged_in\":\"\",\"lang\":\"en\",\"ajaxurl\":\"https:\\/\\/data-x.blog\\/wp-admin\\/admin-ajax.php\",\"nonce\":\"6d7c2fb409\",\"display_exif\":\"1\",\"display_geo\":\"1\",\"single_image_gallery\":\"1\",\"single_image_gallery_media_file\":\"\",\"background_color\":\"black\",\"comment\":\"Comment\",\"post_comment\":\"Post Comment\",\"write_comment\":\"Write a Comment...\",\"loading_comments\":\"Loading Comments...\",\"download_original\":\"View full size <span class=\\\"photo-size\\\">{0}<span class=\\\"photo-size-times\\\">\\u00d7<\\/span>{1}<\\/span>\",\"no_comment_text\":\"Please be sure to submit some text with your comment.\",\"no_comment_email\":\"Please provide an email address to comment.\",\"no_comment_author\":\"Please provide your name to comment.\",\"comment_post_error\":\"Sorry, but there was an error posting your comment. Please try again later.\",\"comment_approved\":\"Your comment was approved.\",\"comment_unapproved\":\"Your comment is in moderation.\",\"camera\":\"Camera\",\"aperture\":\"Aperture\",\"shutter_speed\":\"Shutter Speed\",\"focal_length\":\"Focal Length\",\"copyright\":\"Copyright\",\"comment_registration\":\"0\",\"require_name_email\":\"1\",\"login_url\":\"https:\\/\\/data-x.blog\\/wp-login.php?redirect_to=https%3A%2F%2Fdata-x.blog%2Fsyllabus%2F\",\"blog_id\":\"1\",\"meta_data\":[\"camera\",\"aperture\",\"shutter_speed\",\"focal_length\",\"copyright\"],\"local_comments_commenting_as\":\"<fieldset><label for=\\\"email\\\">Email (Required)<\\/label> <input type=\\\"text\\\" name=\\\"email\\\" class=\\\"jp-carousel-comment-form-field jp-carousel-comment-form-text-field\\\" id=\\\"jp-carousel-comment-form-email-field\\\" \\/><\\/fieldset><fieldset><label for=\\\"author\\\">Name (Required)<\\/label> <input type=\\\"text\\\" name=\\\"author\\\" class=\\\"jp-carousel-comment-form-field jp-carousel-comment-form-text-field\\\" id=\\\"jp-carousel-comment-form-author-field\\\" \\/><\\/fieldset><fieldset><label for=\\\"url\\\">Website<\\/label> <input type=\\\"text\\\" name=\\\"url\\\" class=\\\"jp-carousel-comment-form-field jp-carousel-comment-form-text-field\\\" id=\\\"jp-carousel-comment-form-url-field\\\" \\/><\\/fieldset>\"};\n/* ]]> */\n </script>\n <script src=\"https://c0.wp.com/p/jetpack/6.6.1/_inc/build/carousel/jetpack-carousel.min.js\" type=\"text/javascript\">\n </script>\n <iframe id=\"likes-master\" name=\"likes-master\" scrolling=\"no\" src=\"https://widgets.wp.com/likes/master.html?ver=201841#ver=201841\" style=\"display:none;\">\n </iframe>\n <div id=\"likes-other-gravatars\">\n <div class=\"likes-text\">\n <span>\n %d\n </span>\n bloggers like this:\n </div>\n <ul class=\"wpl-avatars sd-like-gravatars\">\n </ul>\n </div>\n <script async=\"async\" defer=\"defer\" src=\"https://stats.wp.com/e-201841.js\" type=\"text/javascript\">\n </script>\n <script type=\"text/javascript\">\n _stq = window._stq || [];\n\t_stq.push([ 'view', {v:'ext',j:'1:6.6.1',blog:'120928364',post:'94',tz:'0',srv:'data-x.blog'} ]);\n\t_stq.push([ 'clickTrackerInit', '120928364', '94' ]);\n </script>\n <svg style=\"position: absolute; width: 0; height: 0; overflow: hidden;\" version=\"1.1\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\n <defs>\n <symbol id=\"icon-behance\" viewbox=\"0 0 37 32\">\n <path class=\"path1\" d=\"M33 6.054h-9.125v2.214h9.125v-2.214zM28.5 13.661q-1.607 0-2.607 0.938t-1.107 2.545h7.286q-0.321-3.482-3.571-3.482zM28.786 24.107q1.125 0 
2.179-0.571t1.357-1.554h3.946q-1.786 5.482-7.625 5.482-3.821 0-6.080-2.357t-2.259-6.196q0-3.714 2.33-6.17t6.009-2.455q2.464 0 4.295 1.214t2.732 3.196 0.902 4.429q0 0.304-0.036 0.839h-11.75q0 1.982 1.027 3.063t2.973 1.080zM4.946 23.214h5.286q3.661 0 3.661-2.982 0-3.214-3.554-3.214h-5.393v6.196zM4.946 13.625h5.018q1.393 0 2.205-0.652t0.813-2.027q0-2.571-3.393-2.571h-4.643v5.25zM0 4.536h10.607q1.554 0 2.768 0.25t2.259 0.848 1.607 1.723 0.563 2.75q0 3.232-3.071 4.696 2.036 0.571 3.071 2.054t1.036 3.643q0 1.339-0.438 2.438t-1.179 1.848-1.759 1.268-2.161 0.75-2.393 0.232h-10.911v-22.5z\">\n </path>\n </symbol>\n <symbol id=\"icon-deviantart\" viewbox=\"0 0 18 32\">\n <path class=\"path1\" d=\"M18.286 5.411l-5.411 10.393 0.429 0.554h4.982v7.411h-9.054l-0.786 0.536-2.536 4.875-0.536 0.536h-5.375v-5.411l5.411-10.411-0.429-0.536h-4.982v-7.411h9.054l0.786-0.536 2.536-4.875 0.536-0.536h5.375v5.411z\">\n </path>\n </symbol>\n <symbol id=\"icon-medium\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M10.661 7.518v20.946q0 0.446-0.223 0.759t-0.652 0.313q-0.304 0-0.589-0.143l-8.304-4.161q-0.375-0.179-0.634-0.598t-0.259-0.83v-20.357q0-0.357 0.179-0.607t0.518-0.25q0.25 0 0.786 0.268l9.125 4.571q0.054 0.054 0.054 0.089zM11.804 9.321l9.536 15.464-9.536-4.75v-10.714zM32 9.643v18.821q0 0.446-0.25 0.723t-0.679 0.277-0.839-0.232l-7.875-3.929zM31.946 7.5q0 0.054-4.58 7.491t-5.366 8.705l-6.964-11.321 5.786-9.411q0.304-0.5 0.929-0.5 0.25 0 0.464 0.107l9.661 4.821q0.071 0.036 0.071 0.107z\">\n </path>\n </symbol>\n <symbol id=\"icon-slideshare\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M15.589 13.214q0 1.482-1.134 2.545t-2.723 1.063-2.723-1.063-1.134-2.545q0-1.5 1.134-2.554t2.723-1.054 2.723 1.054 1.134 2.554zM24.554 13.214q0 1.482-1.125 2.545t-2.732 1.063q-1.589 0-2.723-1.063t-1.134-2.545q0-1.5 1.134-2.554t2.723-1.054q1.607 0 2.732 1.054t1.125 2.554zM28.571 16.429v-11.911q0-1.554-0.571-2.205t-1.982-0.652h-19.857q-1.482 0-2.009 0.607t-0.527 2.25v12.018q0.768 0.411 1.58 0.714t1.446 0.5 1.446 0.33 1.268 0.196 1.25 0.071 1.045 0.009 1.009-0.036 0.795-0.036q1.214-0.018 1.696 0.482 0.107 0.107 0.179 0.161 0.464 0.446 1.089 0.911 0.125-1.625 2.107-1.554 0.089 0 0.652 0.027t0.768 0.036 0.813 0.018 0.946-0.018 0.973-0.080 1.089-0.152 1.107-0.241 1.196-0.348 1.205-0.482 1.286-0.616zM31.482 16.339q-2.161 2.661-6.643 4.5 1.5 5.089-0.411 8.304-1.179 2.018-3.268 2.643-1.857 0.571-3.25-0.268-1.536-0.911-1.464-2.929l-0.018-5.821v-0.018q-0.143-0.036-0.438-0.107t-0.42-0.089l-0.018 6.036q0.071 2.036-1.482 2.929-1.411 0.839-3.268 0.268-2.089-0.643-3.25-2.679-1.875-3.214-0.393-8.268-4.482-1.839-6.643-4.5-0.446-0.661-0.071-1.125t1.071 0.018q0.054 0.036 0.196 0.125t0.196 0.143v-12.393q0-1.286 0.839-2.196t2.036-0.911h22.446q1.196 0 2.036 0.911t0.839 2.196v12.393l0.375-0.268q0.696-0.482 1.071-0.018t-0.071 1.125z\">\n </path>\n </symbol>\n <symbol id=\"icon-snapchat-ghost\" viewbox=\"0 0 30 32\">\n <path class=\"path1\" d=\"M15.143 2.286q2.393-0.018 4.295 1.223t2.92 3.438q0.482 1.036 0.482 3.196 0 0.839-0.161 3.411 0.25 0.125 0.5 0.125 0.321 0 0.911-0.241t0.911-0.241q0.518 0 1 0.321t0.482 0.821q0 0.571-0.563 0.964t-1.232 0.563-1.232 0.518-0.563 0.848q0 0.268 0.214 0.768 0.661 1.464 1.83 2.679t2.58 1.804q0.5 0.214 1.429 0.411 0.5 0.107 0.5 0.625 0 1.25-3.911 1.839-0.125 0.196-0.196 0.696t-0.25 0.83-0.589 0.33q-0.357 0-1.107-0.116t-1.143-0.116q-0.661 0-1.107 0.089-0.571 0.089-1.125 0.402t-1.036 0.679-1.036 0.723-1.357 0.598-1.768 0.241q-0.929 
0-1.723-0.241t-1.339-0.598-1.027-0.723-1.036-0.679-1.107-0.402q-0.464-0.089-1.125-0.089-0.429 0-1.17 0.134t-1.045 0.134q-0.446 0-0.625-0.33t-0.25-0.848-0.196-0.714q-3.911-0.589-3.911-1.839 0-0.518 0.5-0.625 0.929-0.196 1.429-0.411 1.393-0.571 2.58-1.804t1.83-2.679q0.214-0.5 0.214-0.768 0-0.5-0.563-0.848t-1.241-0.527-1.241-0.563-0.563-0.938q0-0.482 0.464-0.813t0.982-0.33q0.268 0 0.857 0.232t0.946 0.232q0.321 0 0.571-0.125-0.161-2.536-0.161-3.393 0-2.179 0.482-3.214 1.143-2.446 3.071-3.536t4.714-1.125z\">\n </path>\n </symbol>\n <symbol id=\"icon-yelp\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M13.804 23.554v2.268q-0.018 5.214-0.107 5.446-0.214 0.571-0.911 0.714-0.964 0.161-3.241-0.679t-2.902-1.589q-0.232-0.268-0.304-0.643-0.018-0.214 0.071-0.464 0.071-0.179 0.607-0.839t3.232-3.857q0.018 0 1.071-1.25 0.268-0.339 0.705-0.438t0.884 0.063q0.429 0.179 0.67 0.518t0.223 0.75zM11.143 19.071q-0.054 0.982-0.929 1.25l-2.143 0.696q-4.911 1.571-5.214 1.571-0.625-0.036-0.964-0.643-0.214-0.446-0.304-1.339-0.143-1.357 0.018-2.973t0.536-2.223 1-0.571q0.232 0 3.607 1.375 1.25 0.518 2.054 0.839l1.5 0.607q0.411 0.161 0.634 0.545t0.205 0.866zM25.893 24.375q-0.125 0.964-1.634 2.875t-2.42 2.268q-0.661 0.25-1.125-0.125-0.25-0.179-3.286-5.125l-0.839-1.375q-0.25-0.375-0.205-0.821t0.348-0.821q0.625-0.768 1.482-0.464 0.018 0.018 2.125 0.714 3.625 1.179 4.321 1.42t0.839 0.366q0.5 0.393 0.393 1.089zM13.893 13.089q0.089 1.821-0.964 2.179-1.036 0.304-2.036-1.268l-6.75-10.679q-0.143-0.625 0.339-1.107 0.732-0.768 3.705-1.598t4.009-0.563q0.714 0.179 0.875 0.804 0.054 0.321 0.393 5.455t0.429 6.777zM25.714 15.018q0.054 0.696-0.464 1.054-0.268 0.179-5.875 1.536-1.196 0.268-1.625 0.411l0.018-0.036q-0.411 0.107-0.821-0.071t-0.661-0.571q-0.536-0.839 0-1.554 0.018-0.018 1.339-1.821 2.232-3.054 2.679-3.643t0.607-0.696q0.5-0.339 1.161-0.036 0.857 0.411 2.196 2.384t1.446 2.991v0.054z\">\n </path>\n </symbol>\n <symbol id=\"icon-vine\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M26.732 14.768v3.536q-1.804 0.411-3.536 0.411-1.161 2.429-2.955 4.839t-3.241 3.848-2.286 1.902q-1.429 0.804-2.893-0.054-0.5-0.304-1.080-0.777t-1.518-1.491-1.83-2.295-1.92-3.286-1.884-4.357-1.634-5.616-1.259-6.964h5.054q0.464 3.893 1.25 7.116t1.866 5.661 2.17 4.205 2.5 3.482q3.018-3.018 5.125-7.25-2.536-1.286-3.982-3.929t-1.446-5.946q0-3.429 1.857-5.616t5.071-2.188q3.179 0 4.875 1.884t1.696 5.313q0 2.839-1.036 5.107-0.125 0.018-0.348 0.054t-0.821 0.036-1.125-0.107-1.107-0.455-0.902-0.92q0.554-1.839 0.554-3.286 0-1.554-0.518-2.357t-1.411-0.804q-0.946 0-1.518 0.884t-0.571 2.509q0 3.321 1.875 5.241t4.768 1.92q1.107 0 2.161-0.25z\">\n </path>\n </symbol>\n <symbol id=\"icon-vk\" viewbox=\"0 0 35 32\">\n <path class=\"path1\" d=\"M34.232 9.286q0.411 1.143-2.679 5.25-0.429 0.571-1.161 1.518-1.393 1.786-1.607 2.339-0.304 0.732 0.25 1.446 0.304 0.375 1.446 1.464h0.018l0.071 0.071q2.518 2.339 3.411 3.946 0.054 0.089 0.116 0.223t0.125 0.473-0.009 0.607-0.446 0.491-1.054 0.223l-4.571 0.071q-0.429 0.089-1-0.089t-0.929-0.393l-0.357-0.214q-0.536-0.375-1.25-1.143t-1.223-1.384-1.089-1.036-1.009-0.277q-0.054 0.018-0.143 0.063t-0.304 0.259-0.384 0.527-0.304 0.929-0.116 1.384q0 0.268-0.063 0.491t-0.134 0.33l-0.071 0.089q-0.321 0.339-0.946 0.393h-2.054q-1.268 0.071-2.607-0.295t-2.348-0.946-1.839-1.179-1.259-1.027l-0.446-0.429q-0.179-0.179-0.491-0.536t-1.277-1.625-1.893-2.696-2.188-3.768-2.33-4.857q-0.107-0.286-0.107-0.482t0.054-0.286l0.071-0.107q0.268-0.339 1.018-0.339l4.893-0.036q0.214 0.036 0.411 0.116t0.286 0.152l0.089 0.054q0.286 0.196 0.429 0.571 0.357 0.893 
0.821 1.848t0.732 1.455l0.286 0.518q0.518 1.071 1 1.857t0.866 1.223 0.741 0.688 0.607 0.25 0.482-0.089q0.036-0.018 0.089-0.089t0.214-0.393 0.241-0.839 0.17-1.446 0-2.232q-0.036-0.714-0.161-1.304t-0.25-0.821l-0.107-0.214q-0.446-0.607-1.518-0.768-0.232-0.036 0.089-0.429 0.304-0.339 0.679-0.536 0.946-0.464 4.268-0.429 1.464 0.018 2.411 0.232 0.357 0.089 0.598 0.241t0.366 0.429 0.188 0.571 0.063 0.813-0.018 0.982-0.045 1.259-0.027 1.473q0 0.196-0.018 0.75t-0.009 0.857 0.063 0.723 0.205 0.696 0.402 0.438q0.143 0.036 0.304 0.071t0.464-0.196 0.679-0.616 0.929-1.196 1.214-1.92q1.071-1.857 1.911-4.018 0.071-0.179 0.179-0.313t0.196-0.188l0.071-0.054 0.089-0.045t0.232-0.054 0.357-0.009l5.143-0.036q0.696-0.089 1.143 0.045t0.554 0.295z\">\n </path>\n </symbol>\n <symbol id=\"icon-search\" viewbox=\"0 0 30 32\">\n <path class=\"path1\" d=\"M20.571 14.857q0-3.304-2.348-5.652t-5.652-2.348-5.652 2.348-2.348 5.652 2.348 5.652 5.652 2.348 5.652-2.348 2.348-5.652zM29.714 29.714q0 0.929-0.679 1.607t-1.607 0.679q-0.964 0-1.607-0.679l-6.125-6.107q-3.196 2.214-7.125 2.214-2.554 0-4.884-0.991t-4.018-2.679-2.679-4.018-0.991-4.884 0.991-4.884 2.679-4.018 4.018-2.679 4.884-0.991 4.884 0.991 4.018 2.679 2.679 4.018 0.991 4.884q0 3.929-2.214 7.125l6.125 6.125q0.661 0.661 0.661 1.607z\">\n </path>\n </symbol>\n <symbol id=\"icon-envelope-o\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M29.714 26.857v-13.714q-0.571 0.643-1.232 1.179-4.786 3.679-7.607 6.036-0.911 0.768-1.482 1.196t-1.545 0.866-1.83 0.438h-0.036q-0.857 0-1.83-0.438t-1.545-0.866-1.482-1.196q-2.821-2.357-7.607-6.036-0.661-0.536-1.232-1.179v13.714q0 0.232 0.17 0.402t0.402 0.17h26.286q0.232 0 0.402-0.17t0.17-0.402zM29.714 8.089v-0.438t-0.009-0.232-0.054-0.223-0.098-0.161-0.161-0.134-0.25-0.045h-26.286q-0.232 0-0.402 0.17t-0.17 0.402q0 3 2.625 5.071 3.446 2.714 7.161 5.661 0.107 0.089 0.625 0.527t0.821 0.67 0.795 0.563 0.902 0.491 0.768 0.161h0.036q0.357 0 0.768-0.161t0.902-0.491 0.795-0.563 0.821-0.67 0.625-0.527q3.714-2.946 7.161-5.661 0.964-0.768 1.795-2.063t0.83-2.348zM32 7.429v19.429q0 1.179-0.839 2.018t-2.018 0.839h-26.286q-1.179 0-2.018-0.839t-0.839-2.018v-19.429q0-1.179 0.839-2.018t2.018-0.839h26.286q1.179 0 2.018 0.839t0.839 2.018z\">\n </path>\n </symbol>\n <symbol id=\"icon-close\" viewbox=\"0 0 25 32\">\n <path class=\"path1\" d=\"M23.179 23.607q0 0.714-0.5 1.214l-2.429 2.429q-0.5 0.5-1.214 0.5t-1.214-0.5l-5.25-5.25-5.25 5.25q-0.5 0.5-1.214 0.5t-1.214-0.5l-2.429-2.429q-0.5-0.5-0.5-1.214t0.5-1.214l5.25-5.25-5.25-5.25q-0.5-0.5-0.5-1.214t0.5-1.214l2.429-2.429q0.5-0.5 1.214-0.5t1.214 0.5l5.25 5.25 5.25-5.25q0.5-0.5 1.214-0.5t1.214 0.5l2.429 2.429q0.5 0.5 0.5 1.214t-0.5 1.214l-5.25 5.25 5.25 5.25q0.5 0.5 0.5 1.214z\">\n </path>\n </symbol>\n <symbol id=\"icon-angle-down\" viewbox=\"0 0 21 32\">\n <path class=\"path1\" d=\"M19.196 13.143q0 0.232-0.179 0.411l-8.321 8.321q-0.179 0.179-0.411 0.179t-0.411-0.179l-8.321-8.321q-0.179-0.179-0.179-0.411t0.179-0.411l0.893-0.893q0.179-0.179 0.411-0.179t0.411 0.179l7.018 7.018 7.018-7.018q0.179-0.179 0.411-0.179t0.411 0.179l0.893 0.893q0.179 0.179 0.179 0.411z\">\n </path>\n </symbol>\n <symbol id=\"icon-folder-open\" viewbox=\"0 0 34 32\">\n <path class=\"path1\" d=\"M33.554 17q0 0.554-0.554 1.179l-6 7.071q-0.768 0.911-2.152 1.545t-2.563 0.634h-19.429q-0.607 0-1.080-0.232t-0.473-0.768q0-0.554 0.554-1.179l6-7.071q0.768-0.911 2.152-1.545t2.563-0.634h19.429q0.607 0 1.080 0.232t0.473 0.768zM27.429 10.857v2.857h-14.857q-1.679 0-3.518 0.848t-2.929 2.134l-6.107 7.179q0-0.071-0.009-0.223t-0.009-0.223v-17.143q0-1.643 
1.179-2.821t2.821-1.179h5.714q1.643 0 2.821 1.179t1.179 2.821v0.571h9.714q1.643 0 2.821 1.179t1.179 2.821z\">\n </path>\n </symbol>\n <symbol id=\"icon-twitter\" viewbox=\"0 0 30 32\">\n <path class=\"path1\" d=\"M28.929 7.286q-1.196 1.75-2.893 2.982 0.018 0.25 0.018 0.75 0 2.321-0.679 4.634t-2.063 4.437-3.295 3.759-4.607 2.607-5.768 0.973q-4.839 0-8.857-2.589 0.625 0.071 1.393 0.071 4.018 0 7.161-2.464-1.875-0.036-3.357-1.152t-2.036-2.848q0.589 0.089 1.089 0.089 0.768 0 1.518-0.196-2-0.411-3.313-1.991t-1.313-3.67v-0.071q1.214 0.679 2.607 0.732-1.179-0.786-1.875-2.054t-0.696-2.75q0-1.571 0.786-2.911 2.161 2.661 5.259 4.259t6.634 1.777q-0.143-0.679-0.143-1.321 0-2.393 1.688-4.080t4.080-1.688q2.5 0 4.214 1.821 1.946-0.375 3.661-1.393-0.661 2.054-2.536 3.179 1.661-0.179 3.321-0.893z\">\n </path>\n </symbol>\n <symbol id=\"icon-facebook\" viewbox=\"0 0 19 32\">\n <path class=\"path1\" d=\"M17.125 0.214v4.714h-2.804q-1.536 0-2.071 0.643t-0.536 1.929v3.375h5.232l-0.696 5.286h-4.536v13.554h-5.464v-13.554h-4.554v-5.286h4.554v-3.893q0-3.321 1.857-5.152t4.946-1.83q2.625 0 4.071 0.214z\">\n </path>\n </symbol>\n <symbol id=\"icon-github\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M13.714 2.286q3.732 0 6.884 1.839t4.991 4.991 1.839 6.884q0 4.482-2.616 8.063t-6.759 4.955q-0.482 0.089-0.714-0.125t-0.232-0.536q0-0.054 0.009-1.366t0.009-2.402q0-1.732-0.929-2.536 1.018-0.107 1.83-0.321t1.679-0.696 1.446-1.188 0.946-1.875 0.366-2.688q0-2.125-1.411-3.679 0.661-1.625-0.143-3.643-0.5-0.161-1.446 0.196t-1.643 0.786l-0.679 0.429q-1.661-0.464-3.429-0.464t-3.429 0.464q-0.286-0.196-0.759-0.482t-1.491-0.688-1.518-0.241q-0.804 2.018-0.143 3.643-1.411 1.554-1.411 3.679 0 1.518 0.366 2.679t0.938 1.875 1.438 1.196 1.679 0.696 1.83 0.321q-0.696 0.643-0.875 1.839-0.375 0.179-0.804 0.268t-1.018 0.089-1.17-0.384-0.991-1.116q-0.339-0.571-0.866-0.929t-0.884-0.429l-0.357-0.054q-0.375 0-0.518 0.080t-0.089 0.205 0.161 0.25 0.232 0.214l0.125 0.089q0.393 0.179 0.777 0.679t0.563 0.911l0.179 0.411q0.232 0.679 0.786 1.098t1.196 0.536 1.241 0.125 0.991-0.063l0.411-0.071q0 0.679 0.009 1.58t0.009 0.973q0 0.321-0.232 0.536t-0.714 0.125q-4.143-1.375-6.759-4.955t-2.616-8.063q0-3.732 1.839-6.884t4.991-4.991 6.884-1.839zM5.196 21.982q0.054-0.125-0.125-0.214-0.179-0.054-0.232 0.036-0.054 0.125 0.125 0.214 0.161 0.107 0.232-0.036zM5.75 22.589q0.125-0.089-0.036-0.286-0.179-0.161-0.286-0.054-0.125 0.089 0.036 0.286 0.179 0.179 0.286 0.054zM6.286 23.393q0.161-0.125 0-0.339-0.143-0.232-0.304-0.107-0.161 0.089 0 0.321t0.304 0.125zM7.036 24.143q0.143-0.143-0.071-0.339-0.214-0.214-0.357-0.054-0.161 0.143 0.071 0.339 0.214 0.214 0.357 0.054zM8.054 24.589q0.054-0.196-0.232-0.286-0.268-0.071-0.339 0.125t0.232 0.268q0.268 0.107 0.339-0.107zM9.179 24.679q0-0.232-0.304-0.196-0.286 0-0.286 0.196 0 0.232 0.304 0.196 0.286 0 0.286-0.196zM10.214 24.5q-0.036-0.196-0.321-0.161-0.286 0.054-0.25 0.268t0.321 0.143 0.25-0.25z\">\n </path>\n </symbol>\n <symbol id=\"icon-bars\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M27.429 24v2.286q0 0.464-0.339 0.804t-0.804 0.339h-25.143q-0.464 0-0.804-0.339t-0.339-0.804v-2.286q0-0.464 0.339-0.804t0.804-0.339h25.143q0.464 0 0.804 0.339t0.339 0.804zM27.429 14.857v2.286q0 0.464-0.339 0.804t-0.804 0.339h-25.143q-0.464 0-0.804-0.339t-0.339-0.804v-2.286q0-0.464 0.339-0.804t0.804-0.339h25.143q0.464 0 0.804 0.339t0.339 0.804zM27.429 5.714v2.286q0 0.464-0.339 0.804t-0.804 0.339h-25.143q-0.464 0-0.804-0.339t-0.339-0.804v-2.286q0-0.464 0.339-0.804t0.804-0.339h25.143q0.464 0 0.804 0.339t0.339 0.804z\">\n </path>\n 
</symbol>\n <symbol id=\"icon-google-plus\" viewbox=\"0 0 41 32\">\n <path class=\"path1\" d=\"M25.661 16.304q0 3.714-1.554 6.616t-4.429 4.536-6.589 1.634q-2.661 0-5.089-1.036t-4.179-2.786-2.786-4.179-1.036-5.089 1.036-5.089 2.786-4.179 4.179-2.786 5.089-1.036q5.107 0 8.768 3.429l-3.554 3.411q-2.089-2.018-5.214-2.018-2.196 0-4.063 1.107t-2.955 3.009-1.089 4.152 1.089 4.152 2.955 3.009 4.063 1.107q1.482 0 2.723-0.411t2.045-1.027 1.402-1.402 0.875-1.482 0.384-1.321h-7.429v-4.5h12.357q0.214 1.125 0.214 2.179zM41.143 14.125v3.75h-3.732v3.732h-3.75v-3.732h-3.732v-3.75h3.732v-3.732h3.75v3.732h3.732z\">\n </path>\n </symbol>\n <symbol id=\"icon-linkedin\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M6.232 11.161v17.696h-5.893v-17.696h5.893zM6.607 5.696q0.018 1.304-0.902 2.179t-2.42 0.875h-0.036q-1.464 0-2.357-0.875t-0.893-2.179q0-1.321 0.92-2.188t2.402-0.866 2.375 0.866 0.911 2.188zM27.429 18.714v10.143h-5.875v-9.464q0-1.875-0.723-2.938t-2.259-1.063q-1.125 0-1.884 0.616t-1.134 1.527q-0.196 0.536-0.196 1.446v9.875h-5.875q0.036-7.125 0.036-11.554t-0.018-5.286l-0.018-0.857h5.875v2.571h-0.036q0.357-0.571 0.732-1t1.009-0.929 1.554-0.777 2.045-0.277q3.054 0 4.911 2.027t1.857 5.938z\">\n </path>\n </symbol>\n <symbol id=\"icon-quote-right\" viewbox=\"0 0 30 32\">\n <path class=\"path1\" d=\"M13.714 5.714v12.571q0 1.857-0.723 3.545t-1.955 2.92-2.92 1.955-3.545 0.723h-1.143q-0.464 0-0.804-0.339t-0.339-0.804v-2.286q0-0.464 0.339-0.804t0.804-0.339h1.143q1.893 0 3.232-1.339t1.339-3.232v-0.571q0-0.714-0.5-1.214t-1.214-0.5h-4q-1.429 0-2.429-1t-1-2.429v-6.857q0-1.429 1-2.429t2.429-1h6.857q1.429 0 2.429 1t1 2.429zM29.714 5.714v12.571q0 1.857-0.723 3.545t-1.955 2.92-2.92 1.955-3.545 0.723h-1.143q-0.464 0-0.804-0.339t-0.339-0.804v-2.286q0-0.464 0.339-0.804t0.804-0.339h1.143q1.893 0 3.232-1.339t1.339-3.232v-0.571q0-0.714-0.5-1.214t-1.214-0.5h-4q-1.429 0-2.429-1t-1-2.429v-6.857q0-1.429 1-2.429t2.429-1h6.857q1.429 0 2.429 1t1 2.429z\">\n </path>\n </symbol>\n <symbol id=\"icon-mail-reply\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M32 20q0 2.964-2.268 8.054-0.054 0.125-0.188 0.429t-0.241 0.536-0.232 0.393q-0.214 0.304-0.5 0.304-0.268 0-0.42-0.179t-0.152-0.446q0-0.161 0.045-0.473t0.045-0.42q0.089-1.214 0.089-2.196 0-1.804-0.313-3.232t-0.866-2.473-1.429-1.804-1.884-1.241-2.375-0.759-2.75-0.384-3.134-0.107h-4v4.571q0 0.464-0.339 0.804t-0.804 0.339-0.804-0.339l-9.143-9.143q-0.339-0.339-0.339-0.804t0.339-0.804l9.143-9.143q0.339-0.339 0.804-0.339t0.804 0.339 0.339 0.804v4.571h4q12.732 0 15.625 7.196 0.946 2.393 0.946 5.946z\">\n </path>\n </symbol>\n <symbol id=\"icon-youtube\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M17.339 22.214v3.768q0 1.196-0.696 1.196-0.411 0-0.804-0.393v-5.375q0.393-0.393 0.804-0.393 0.696 0 0.696 1.196zM23.375 22.232v0.821h-1.607v-0.821q0-1.214 0.804-1.214t0.804 1.214zM6.125 18.339h1.911v-1.679h-5.571v1.679h1.875v10.161h1.786v-10.161zM11.268 28.5h1.589v-8.821h-1.589v6.75q-0.536 0.75-1.018 0.75-0.321 0-0.375-0.375-0.018-0.054-0.018-0.625v-6.5h-1.589v6.982q0 0.875 0.143 1.304 0.214 0.661 1.036 0.661 0.857 0 1.821-1.089v0.964zM18.929 25.857v-3.518q0-1.304-0.161-1.768-0.304-1-1.268-1-0.893 0-1.661 0.964v-3.875h-1.589v11.839h1.589v-0.857q0.804 0.982 1.661 0.982 0.964 0 1.268-0.982 0.161-0.482 0.161-1.786zM24.964 25.679v-0.232h-1.625q0 0.911-0.036 1.089-0.125 0.643-0.714 0.643-0.821 0-0.821-1.232v-1.554h3.196v-1.839q0-1.411-0.482-2.071-0.696-0.911-1.893-0.911-1.214 0-1.911 0.911-0.5 0.661-0.5 2.071v3.089q0 1.411 0.518 2.071 0.696 0.911 1.929 0.911 1.286 0 1.929-0.946 
0.321-0.482 0.375-0.964 0.036-0.161 0.036-1.036zM14.107 9.375v-3.75q0-1.232-0.768-1.232t-0.768 1.232v3.75q0 1.25 0.768 1.25t0.768-1.25zM26.946 22.786q0 4.179-0.464 6.25-0.25 1.054-1.036 1.768t-1.821 0.821q-3.286 0.375-9.911 0.375t-9.911-0.375q-1.036-0.107-1.83-0.821t-1.027-1.768q-0.464-2-0.464-6.25 0-4.179 0.464-6.25 0.25-1.054 1.036-1.768t1.839-0.839q3.268-0.357 9.893-0.357t9.911 0.357q1.036 0.125 1.83 0.839t1.027 1.768q0.464 2 0.464 6.25zM9.125 0h1.821l-2.161 7.125v4.839h-1.786v-4.839q-0.25-1.321-1.089-3.786-0.661-1.839-1.161-3.339h1.893l1.268 4.696zM15.732 5.946v3.125q0 1.446-0.5 2.107-0.661 0.911-1.893 0.911-1.196 0-1.875-0.911-0.5-0.679-0.5-2.107v-3.125q0-1.429 0.5-2.089 0.679-0.911 1.875-0.911 1.232 0 1.893 0.911 0.5 0.661 0.5 2.089zM21.714 3.054v8.911h-1.625v-0.982q-0.946 1.107-1.839 1.107-0.821 0-1.054-0.661-0.143-0.429-0.143-1.339v-7.036h1.625v6.554q0 0.589 0.018 0.625 0.054 0.393 0.375 0.393 0.482 0 1.018-0.768v-6.804h1.625z\">\n </path>\n </symbol>\n <symbol id=\"icon-dropbox\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M7.179 12.625l8.821 5.446-6.107 5.089-8.75-5.696zM24.786 22.536v1.929l-8.75 5.232v0.018l-0.018-0.018-0.018 0.018v-0.018l-8.732-5.232v-1.929l2.625 1.714 6.107-5.071v-0.036l0.018 0.018 0.018-0.018v0.036l6.125 5.071zM9.893 2.107l6.107 5.089-8.821 5.429-6.036-4.821zM24.821 12.625l6.036 4.839-8.732 5.696-6.125-5.089zM22.125 2.107l8.732 5.696-6.036 4.821-8.821-5.429z\">\n </path>\n </symbol>\n <symbol id=\"icon-instagram\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M18.286 16q0-1.893-1.339-3.232t-3.232-1.339-3.232 1.339-1.339 3.232 1.339 3.232 3.232 1.339 3.232-1.339 1.339-3.232zM20.75 16q0 2.929-2.054 4.982t-4.982 2.054-4.982-2.054-2.054-4.982 2.054-4.982 4.982-2.054 4.982 2.054 2.054 4.982zM22.679 8.679q0 0.679-0.482 1.161t-1.161 0.482-1.161-0.482-0.482-1.161 0.482-1.161 1.161-0.482 1.161 0.482 0.482 1.161zM13.714 4.75q-0.125 0-1.366-0.009t-1.884 0-1.723 0.054-1.839 0.179-1.277 0.33q-0.893 0.357-1.571 1.036t-1.036 1.571q-0.196 0.518-0.33 1.277t-0.179 1.839-0.054 1.723 0 1.884 0.009 1.366-0.009 1.366 0 1.884 0.054 1.723 0.179 1.839 0.33 1.277q0.357 0.893 1.036 1.571t1.571 1.036q0.518 0.196 1.277 0.33t1.839 0.179 1.723 0.054 1.884 0 1.366-0.009 1.366 0.009 1.884 0 1.723-0.054 1.839-0.179 1.277-0.33q0.893-0.357 1.571-1.036t1.036-1.571q0.196-0.518 0.33-1.277t0.179-1.839 0.054-1.723 0-1.884-0.009-1.366 0.009-1.366 0-1.884-0.054-1.723-0.179-1.839-0.33-1.277q-0.357-0.893-1.036-1.571t-1.571-1.036q-0.518-0.196-1.277-0.33t-1.839-0.179-1.723-0.054-1.884 0-1.366 0.009zM27.429 16q0 4.089-0.089 5.661-0.179 3.714-2.214 5.75t-5.75 2.214q-1.571 0.089-5.661 0.089t-5.661-0.089q-3.714-0.179-5.75-2.214t-2.214-5.75q-0.089-1.571-0.089-5.661t0.089-5.661q0.179-3.714 2.214-5.75t5.75-2.214q1.571-0.089 5.661-0.089t5.661 0.089q3.714 0.179 5.75 2.214t2.214 5.75q0.089 1.571 0.089 5.661z\">\n </path>\n </symbol>\n <symbol id=\"icon-flickr\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M22.286 2.286q2.125 0 3.634 1.509t1.509 3.634v17.143q0 2.125-1.509 3.634t-3.634 1.509h-17.143q-2.125 0-3.634-1.509t-1.509-3.634v-17.143q0-2.125 1.509-3.634t3.634-1.509h17.143zM12.464 16q0-1.571-1.107-2.679t-2.679-1.107-2.679 1.107-1.107 2.679 1.107 2.679 2.679 1.107 2.679-1.107 1.107-2.679zM22.536 16q0-1.571-1.107-2.679t-2.679-1.107-2.679 1.107-1.107 2.679 1.107 2.679 2.679 1.107 2.679-1.107 1.107-2.679z\">\n </path>\n </symbol>\n <symbol id=\"icon-tumblr\" viewbox=\"0 0 19 32\">\n <path class=\"path1\" d=\"M16.857 23.732l1.429 4.232q-0.411 0.625-1.982 1.179t-3.161 0.571q-1.857 
0.036-3.402-0.464t-2.545-1.321-1.696-1.893-0.991-2.143-0.295-2.107v-9.714h-3v-3.839q1.286-0.464 2.304-1.241t1.625-1.607 1.036-1.821 0.607-1.768 0.268-1.58q0.018-0.089 0.080-0.152t0.134-0.063h4.357v7.571h5.946v4.5h-5.964v9.25q0 0.536 0.116 1t0.402 0.938 0.884 0.741 1.455 0.25q1.393-0.036 2.393-0.518z\">\n </path>\n </symbol>\n <symbol id=\"icon-dockerhub\" viewbox=\"0 0 24 28\">\n <path class=\"path1\" d=\"M1.597 10.257h2.911v2.83H1.597v-2.83zm3.573 0h2.91v2.83H5.17v-2.83zm0-3.627h2.91v2.829H5.17V6.63zm3.57 3.627h2.912v2.83H8.74v-2.83zm0-3.627h2.912v2.829H8.74V6.63zm3.573 3.627h2.911v2.83h-2.911v-2.83zm0-3.627h2.911v2.829h-2.911V6.63zm3.572 3.627h2.911v2.83h-2.911v-2.83zM12.313 3h2.911v2.83h-2.911V3zm-6.65 14.173c-.449 0-.812.354-.812.788 0 .435.364.788.812.788.447 0 .811-.353.811-.788 0-.434-.363-.788-.811-.788\">\n </path>\n <path class=\"path2\" d=\"M28.172 11.721c-.978-.549-2.278-.624-3.388-.306-.136-1.146-.91-2.149-1.83-2.869l-.366-.286-.307.345c-.618.692-.8 1.845-.718 2.73.063.651.273 1.312.685 1.834-.313.183-.668.328-.985.434-.646.212-1.347.33-2.028.33H.083l-.042.429c-.137 1.432.065 2.866.674 4.173l.262.519.03.048c1.8 2.973 4.963 4.225 8.41 4.225 6.672 0 12.174-2.896 14.702-9.015 1.689.085 3.417-.4 4.243-1.968l.211-.4-.401-.223zM5.664 19.458c-.85 0-1.542-.671-1.542-1.497 0-.825.691-1.498 1.541-1.498.849 0 1.54.672 1.54 1.497s-.69 1.498-1.539 1.498z\">\n </path>\n </symbol>\n <symbol id=\"icon-dribbble\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M18.286 26.786q-0.75-4.304-2.5-8.893h-0.036l-0.036 0.018q-0.286 0.107-0.768 0.295t-1.804 0.875-2.446 1.464-2.339 2.045-1.839 2.643l-0.268-0.196q3.286 2.679 7.464 2.679 2.357 0 4.571-0.929zM14.982 15.946q-0.375-0.875-0.946-1.982-5.554 1.661-12.018 1.661-0.018 0.125-0.018 0.375 0 2.214 0.786 4.223t2.214 3.598q0.893-1.589 2.205-2.973t2.545-2.223 2.33-1.446 1.777-0.857l0.661-0.232q0.071-0.018 0.232-0.063t0.232-0.080zM13.071 12.161q-2.143-3.804-4.357-6.75-2.464 1.161-4.179 3.321t-2.286 4.857q5.393 0 10.821-1.429zM25.286 17.857q-3.75-1.071-7.304-0.518 1.554 4.268 2.286 8.375 1.982-1.339 3.304-3.384t1.714-4.473zM10.911 4.625q-0.018 0-0.036 0.018 0.018-0.018 0.036-0.018zM21.446 7.214q-3.304-2.929-7.732-2.929-1.357 0-2.768 0.339 2.339 3.036 4.393 6.821 1.232-0.464 2.321-1.080t1.723-1.098 1.17-1.018 0.67-0.723zM25.429 15.875q-0.054-4.143-2.661-7.321l-0.018 0.018q-0.161 0.214-0.339 0.438t-0.777 0.795-1.268 1.080-1.786 1.161-2.348 1.152q0.446 0.946 0.786 1.696 0.036 0.107 0.116 0.313t0.134 0.295q0.643-0.089 1.33-0.125t1.313-0.036 1.232 0.027 1.143 0.071 1.009 0.098 0.857 0.116 0.652 0.107 0.446 0.080zM27.429 16q0 3.732-1.839 6.884t-4.991 4.991-6.884 1.839-6.884-1.839-4.991-4.991-1.839-6.884 1.839-6.884 4.991-4.991 6.884-1.839 6.884 1.839 4.991 4.991 1.839 6.884z\">\n </path>\n </symbol>\n <symbol id=\"icon-skype\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M20.946 18.982q0-0.893-0.348-1.634t-0.866-1.223-1.304-0.875-1.473-0.607-1.563-0.411l-1.857-0.429q-0.536-0.125-0.786-0.188t-0.625-0.205-0.536-0.286-0.295-0.375-0.134-0.536q0-1.375 2.571-1.375 0.768 0 1.375 0.214t0.964 0.509 0.679 0.598 0.714 0.518 0.857 0.214q0.839 0 1.348-0.571t0.509-1.375q0-0.982-1-1.777t-2.536-1.205-3.25-0.411q-1.214 0-2.357 0.277t-2.134 0.839-1.589 1.554-0.598 2.295q0 1.089 0.339 1.902t1 1.348 1.429 0.866 1.839 0.58l2.607 0.643q1.607 0.393 2 0.643 0.571 0.357 0.571 1.071 0 0.696-0.714 1.152t-1.875 0.455q-0.911 0-1.634-0.286t-1.161-0.688-0.813-0.804-0.821-0.688-0.964-0.286q-0.893 0-1.348 0.536t-0.455 1.339q0 1.643 2.179 2.813t5.196 1.17q1.304 0 2.5-0.33t2.188-0.955 
1.58-1.67 0.589-2.348zM27.429 22.857q0 2.839-2.009 4.848t-4.848 2.009q-2.321 0-4.179-1.429-1.375 0.286-2.679 0.286-2.554 0-4.884-0.991t-4.018-2.679-2.679-4.018-0.991-4.884q0-1.304 0.286-2.679-1.429-1.857-1.429-4.179 0-2.839 2.009-4.848t4.848-2.009q2.321 0 4.179 1.429 1.375-0.286 2.679-0.286 2.554 0 4.884 0.991t4.018 2.679 2.679 4.018 0.991 4.884q0 1.304-0.286 2.679 1.429 1.857 1.429 4.179z\">\n </path>\n </symbol>\n <symbol id=\"icon-foursquare\" viewbox=\"0 0 23 32\">\n <path class=\"path1\" d=\"M17.857 7.75l0.661-3.464q0.089-0.411-0.161-0.714t-0.625-0.304h-12.714q-0.411 0-0.688 0.304t-0.277 0.661v19.661q0 0.125 0.107 0.018l5.196-6.286q0.411-0.464 0.679-0.598t0.857-0.134h4.268q0.393 0 0.661-0.259t0.321-0.527q0.429-2.321 0.661-3.411 0.071-0.375-0.205-0.714t-0.652-0.339h-5.25q-0.518 0-0.857-0.339t-0.339-0.857v-0.75q0-0.518 0.339-0.848t0.857-0.33h6.179q0.321 0 0.625-0.241t0.357-0.527zM21.911 3.786q-0.268 1.304-0.955 4.759t-1.241 6.25-0.625 3.098q-0.107 0.393-0.161 0.58t-0.25 0.58-0.438 0.589-0.688 0.375-1.036 0.179h-4.839q-0.232 0-0.393 0.179-0.143 0.161-7.607 8.821-0.393 0.446-1.045 0.509t-0.866-0.098q-0.982-0.393-0.982-1.75v-25.179q0-0.982 0.679-1.83t2.143-0.848h15.857q1.696 0 2.268 0.946t0.179 2.839zM21.911 3.786l-2.821 14.107q0.071-0.304 0.625-3.098t1.241-6.25 0.955-4.759z\">\n </path>\n </symbol>\n <symbol id=\"icon-wordpress\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M2.268 16q0-2.911 1.196-5.589l6.554 17.946q-3.5-1.696-5.625-5.018t-2.125-7.339zM25.268 15.304q0 0.339-0.045 0.688t-0.179 0.884-0.205 0.786-0.313 1.054-0.313 1.036l-1.357 4.571-4.964-14.75q0.821-0.054 1.571-0.143 0.339-0.036 0.464-0.33t-0.045-0.554-0.509-0.241l-3.661 0.179q-1.339-0.018-3.607-0.179-0.214-0.018-0.366 0.089t-0.205 0.268-0.027 0.33 0.161 0.295 0.348 0.143l1.429 0.143 2.143 5.857-3 9-5-14.857q0.821-0.054 1.571-0.143 0.339-0.036 0.464-0.33t-0.045-0.554-0.509-0.241l-3.661 0.179q-0.125 0-0.411-0.009t-0.464-0.009q1.875-2.857 4.902-4.527t6.563-1.67q2.625 0 5.009 0.946t4.259 2.661h-0.179q-0.982 0-1.643 0.723t-0.661 1.705q0 0.214 0.036 0.429t0.071 0.384 0.143 0.411 0.161 0.375 0.214 0.402 0.223 0.375 0.259 0.429 0.25 0.411q1.125 1.911 1.125 3.786zM16.232 17.196l4.232 11.554q0.018 0.107 0.089 0.196-2.25 0.786-4.554 0.786-2 0-3.875-0.571zM28.036 9.411q1.696 3.107 1.696 6.589 0 3.732-1.857 6.884t-4.982 4.973l4.196-12.107q1.054-3.018 1.054-4.929 0-0.75-0.107-1.411zM16 0q3.25 0 6.214 1.268t5.107 3.411 3.411 5.107 1.268 6.214-1.268 6.214-3.411 5.107-5.107 3.411-6.214 1.268-6.214-1.268-5.107-3.411-3.411-5.107-1.268-6.214 1.268-6.214 3.411-5.107 5.107-3.411 6.214-1.268zM16 31.268q3.089 0 5.92-1.214t4.875-3.259 3.259-4.875 1.214-5.92-1.214-5.92-3.259-4.875-4.875-3.259-5.92-1.214-5.92 1.214-4.875 3.259-3.259 4.875-1.214 5.92 1.214 5.92 3.259 4.875 4.875 3.259 5.92 1.214z\">\n </path>\n </symbol>\n <symbol id=\"icon-stumbleupon\" viewbox=\"0 0 34 32\">\n <path class=\"path1\" d=\"M18.964 12.714v-2.107q0-0.75-0.536-1.286t-1.286-0.536-1.286 0.536-0.536 1.286v10.929q0 3.125-2.25 5.339t-5.411 2.214q-3.179 0-5.42-2.241t-2.241-5.42v-4.75h5.857v4.679q0 0.768 0.536 1.295t1.286 0.527 1.286-0.527 0.536-1.295v-11.071q0-3.054 2.259-5.214t5.384-2.161q3.143 0 5.393 2.179t2.25 5.25v2.429l-3.482 1.036zM28.429 16.679h5.857v4.75q0 3.179-2.241 5.42t-5.42 2.241q-3.161 0-5.411-2.223t-2.25-5.366v-4.786l2.339 1.089 3.482-1.036v4.821q0 0.75 0.536 1.277t1.286 0.527 1.286-0.527 0.536-1.277v-4.911z\">\n </path>\n </symbol>\n <symbol id=\"icon-digg\" viewbox=\"0 0 37 32\">\n <path class=\"path1\" d=\"M5.857 
5.036h3.643v17.554h-9.5v-12.446h5.857v-5.107zM5.857 19.661v-6.589h-2.196v6.589h2.196zM10.964 10.143v12.446h3.661v-12.446h-3.661zM10.964 5.036v3.643h3.661v-3.643h-3.661zM16.089 10.143h9.518v16.821h-9.518v-2.911h5.857v-1.464h-5.857v-12.446zM21.946 19.661v-6.589h-2.196v6.589h2.196zM27.071 10.143h9.5v16.821h-9.5v-2.911h5.839v-1.464h-5.839v-12.446zM32.911 19.661v-6.589h-2.196v6.589h2.196z\">\n </path>\n </symbol>\n <symbol id=\"icon-spotify\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M20.125 21.607q0-0.571-0.536-0.911-3.446-2.054-7.982-2.054-2.375 0-5.125 0.607-0.75 0.161-0.75 0.929 0 0.357 0.241 0.616t0.634 0.259q0.089 0 0.661-0.143 2.357-0.482 4.339-0.482 4.036 0 7.089 1.839 0.339 0.196 0.589 0.196 0.339 0 0.589-0.241t0.25-0.616zM21.839 17.768q0-0.714-0.625-1.089-4.232-2.518-9.786-2.518-2.732 0-5.411 0.75-0.857 0.232-0.857 1.143 0 0.446 0.313 0.759t0.759 0.313q0.125 0 0.661-0.143 2.179-0.589 4.482-0.589 4.982 0 8.714 2.214 0.429 0.232 0.679 0.232 0.446 0 0.759-0.313t0.313-0.759zM23.768 13.339q0-0.839-0.714-1.25-2.25-1.304-5.232-1.973t-6.125-0.67q-3.643 0-6.5 0.839-0.411 0.125-0.688 0.455t-0.277 0.866q0 0.554 0.366 0.929t0.92 0.375q0.196 0 0.714-0.143 2.375-0.661 5.482-0.661 2.839 0 5.527 0.607t4.527 1.696q0.375 0.214 0.714 0.214 0.518 0 0.902-0.366t0.384-0.92zM27.429 16q0 3.732-1.839 6.884t-4.991 4.991-6.884 1.839-6.884-1.839-4.991-4.991-1.839-6.884 1.839-6.884 4.991-4.991 6.884-1.839 6.884 1.839 4.991 4.991 1.839 6.884z\">\n </path>\n </symbol>\n <symbol id=\"icon-soundcloud\" viewbox=\"0 0 41 32\">\n <path class=\"path1\" d=\"M14 24.5l0.286-4.304-0.286-9.339q-0.018-0.179-0.134-0.304t-0.295-0.125q-0.161 0-0.286 0.125t-0.125 0.304l-0.25 9.339 0.25 4.304q0.018 0.179 0.134 0.295t0.277 0.116q0.393 0 0.429-0.411zM19.286 23.982l0.196-3.768-0.214-10.464q0-0.286-0.232-0.429-0.143-0.089-0.286-0.089t-0.286 0.089q-0.232 0.143-0.232 0.429l-0.018 0.107-0.179 10.339q0 0.018 0.196 4.214v0.018q0 0.179 0.107 0.304 0.161 0.196 0.411 0.196 0.196 0 0.357-0.161 0.161-0.125 0.161-0.357zM0.625 17.911l0.357 2.286-0.357 2.25q-0.036 0.161-0.161 0.161t-0.161-0.161l-0.304-2.25 0.304-2.286q0.036-0.161 0.161-0.161t0.161 0.161zM2.161 16.5l0.464 3.696-0.464 3.625q-0.036 0.161-0.179 0.161-0.161 0-0.161-0.179l-0.411-3.607 0.411-3.696q0-0.161 0.161-0.161 0.143 0 0.179 0.161zM3.804 15.821l0.446 4.375-0.446 4.232q0 0.196-0.196 0.196-0.179 0-0.214-0.196l-0.375-4.232 0.375-4.375q0.036-0.214 0.214-0.214 0.196 0 0.196 0.214zM5.482 15.696l0.411 4.5-0.411 4.357q-0.036 0.232-0.25 0.232-0.232 0-0.232-0.232l-0.375-4.357 0.375-4.5q0-0.232 0.232-0.232 0.214 0 0.25 0.232zM7.161 16.018l0.375 4.179-0.375 4.393q-0.036 0.286-0.286 0.286-0.107 0-0.188-0.080t-0.080-0.205l-0.357-4.393 0.357-4.179q0-0.107 0.080-0.188t0.188-0.080q0.25 0 0.286 0.268zM8.839 13.411l0.375 6.786-0.375 4.393q0 0.125-0.089 0.223t-0.214 0.098q-0.286 0-0.321-0.321l-0.321-4.393 0.321-6.786q0.036-0.321 0.321-0.321 0.125 0 0.214 0.098t0.089 0.223zM10.518 11.875l0.339 8.357-0.339 4.357q0 0.143-0.098 0.241t-0.241 0.098q-0.321 0-0.357-0.339l-0.286-4.357 0.286-8.357q0.036-0.339 0.357-0.339 0.143 0 0.241 0.098t0.098 0.241zM12.268 11.161l0.321 9.036-0.321 4.321q-0.036 0.375-0.393 0.375-0.339 0-0.375-0.375l-0.286-4.321 0.286-9.036q0-0.161 0.116-0.277t0.259-0.116q0.161 0 0.268 0.116t0.125 0.277zM19.268 24.411v0 0zM15.732 11.089l0.268 9.107-0.268 4.268q0 0.179-0.134 0.313t-0.313 0.134-0.304-0.125-0.143-0.321l-0.25-4.268 0.25-9.107q0-0.196 0.134-0.321t0.313-0.125 0.313 0.125 0.134 0.321zM17.5 11.429l0.25 8.786-0.25 4.214q0 0.196-0.143 0.339t-0.339 
0.143-0.339-0.143-0.161-0.339l-0.214-4.214 0.214-8.786q0.018-0.214 0.161-0.357t0.339-0.143 0.33 0.143 0.152 0.357zM21.286 20.214l-0.25 4.125q0 0.232-0.161 0.393t-0.393 0.161-0.393-0.161-0.179-0.393l-0.107-2.036-0.107-2.089 0.214-11.357v-0.054q0.036-0.268 0.214-0.429 0.161-0.125 0.357-0.125 0.143 0 0.268 0.089 0.25 0.143 0.286 0.464zM41.143 19.875q0 2.089-1.482 3.563t-3.571 1.473h-14.036q-0.232-0.036-0.393-0.196t-0.161-0.393v-16.054q0-0.411 0.5-0.589 1.518-0.607 3.232-0.607 3.482 0 6.036 2.348t2.857 5.777q0.946-0.393 1.964-0.393 2.089 0 3.571 1.482t1.482 3.589z\">\n </path>\n </symbol>\n <symbol id=\"icon-codepen\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M3.857 20.875l10.768 7.179v-6.411l-5.964-3.982zM2.75 18.304l3.446-2.304-3.446-2.304v4.607zM17.375 28.054l10.768-7.179-4.804-3.214-5.964 3.982v6.411zM16 19.25l4.857-3.25-4.857-3.25-4.857 3.25zM8.661 14.339l5.964-3.982v-6.411l-10.768 7.179zM25.804 16l3.446 2.304v-4.607zM23.339 14.339l4.804-3.214-10.768-7.179v6.411zM32 11.125v9.75q0 0.732-0.607 1.143l-14.625 9.75q-0.375 0.232-0.768 0.232t-0.768-0.232l-14.625-9.75q-0.607-0.411-0.607-1.143v-9.75q0-0.732 0.607-1.143l14.625-9.75q0.375-0.232 0.768-0.232t0.768 0.232l14.625 9.75q0.607 0.411 0.607 1.143z\">\n </path>\n </symbol>\n <symbol id=\"icon-twitch\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M16 7.75v7.75h-2.589v-7.75h2.589zM23.107 7.75v7.75h-2.589v-7.75h2.589zM23.107 21.321l4.518-4.536v-14.196h-21.321v18.732h5.821v3.875l3.875-3.875h7.107zM30.214 0v18.089l-7.75 7.75h-5.821l-3.875 3.875h-3.875v-3.875h-7.107v-20.679l1.946-5.161h26.482z\">\n </path>\n </symbol>\n <symbol id=\"icon-meanpath\" viewbox=\"0 0 27 32\">\n <path class=\"path1\" d=\"M23.411 15.036v2.036q0 0.429-0.241 0.679t-0.67 0.25h-3.607q-0.429 0-0.679-0.25t-0.25-0.679v-2.036q0-0.429 0.25-0.679t0.679-0.25h3.607q0.429 0 0.67 0.25t0.241 0.679zM14.661 19.143v-4.464q0-0.946-0.58-1.527t-1.527-0.58h-2.375q-1.214 0-1.714 0.929-0.5-0.929-1.714-0.929h-2.321q-0.946 0-1.527 0.58t-0.58 1.527v4.464q0 0.393 0.375 0.393h0.982q0.393 0 0.393-0.393v-4.107q0-0.429 0.241-0.679t0.688-0.25h1.679q0.429 0 0.679 0.25t0.25 0.679v4.107q0 0.393 0.375 0.393h0.964q0.393 0 0.393-0.393v-4.107q0-0.429 0.25-0.679t0.679-0.25h1.732q0.429 0 0.67 0.25t0.241 0.679v4.107q0 0.393 0.393 0.393h0.982q0.375 0 0.375-0.393zM25.179 17.429v-2.75q0-0.946-0.589-1.527t-1.536-0.58h-4.714q-0.946 0-1.536 0.58t-0.589 1.527v7.321q0 0.375 0.393 0.375h0.982q0.375 0 0.375-0.375v-3.214q0.554 0.75 1.679 0.75h3.411q0.946 0 1.536-0.58t0.589-1.527zM27.429 6.429v19.143q0 1.714-1.214 2.929t-2.929 1.214h-19.143q-1.714 0-2.929-1.214t-1.214-2.929v-19.143q0-1.714 1.214-2.929t2.929-1.214h19.143q1.714 0 2.929 1.214t1.214 2.929z\">\n </path>\n </symbol>\n <symbol id=\"icon-pinterest-p\" viewbox=\"0 0 23 32\">\n <path class=\"path1\" d=\"M0 10.661q0-1.929 0.67-3.634t1.848-2.973 2.714-2.196 3.304-1.393 3.607-0.464q2.821 0 5.25 1.188t3.946 3.455 1.518 5.125q0 1.714-0.339 3.357t-1.071 3.161-1.786 2.67-2.589 1.839-3.375 0.688q-1.214 0-2.411-0.571t-1.714-1.571q-0.179 0.696-0.5 2.009t-0.42 1.696-0.366 1.268-0.464 1.268-0.571 1.116-0.821 1.384-1.107 1.545l-0.25 0.089-0.161-0.179q-0.268-2.804-0.268-3.357 0-1.643 0.384-3.688t1.188-5.134 0.929-3.625q-0.571-1.161-0.571-3.018 0-1.482 0.929-2.786t2.357-1.304q1.089 0 1.696 0.723t0.607 1.83q0 1.179-0.786 3.411t-0.786 3.339q0 1.125 0.804 1.866t1.946 0.741q0.982 0 1.821-0.446t1.402-1.214 1-1.696 0.679-1.973 0.357-1.982 0.116-1.777q0-3.089-1.955-4.813t-5.098-1.723q-3.571 0-5.964 2.313t-2.393 5.866q0 0.786 0.223 1.518t0.482 1.161 0.482 0.813 0.223 0.545q0 
0.5-0.268 1.304t-0.661 0.804q-0.036 0-0.304-0.054-0.911-0.268-1.616-1t-1.089-1.688-0.58-1.929-0.196-1.902z\">\n </path>\n </symbol>\n <symbol id=\"icon-periscope\" viewbox=\"0 0 24 28\">\n <path class=\"path1\" d=\"M12.285,1C6.696,1,2.277,5.643,2.277,11.243c0,5.851,7.77,14.578,10.007,14.578c1.959,0,9.729-8.728,9.729-14.578 C22.015,5.643,17.596,1,12.285,1z M12.317,16.551c-3.473,0-6.152-2.611-6.152-5.664c0-1.292,0.39-2.472,1.065-3.438 c0.206,1.084,1.18,1.906,2.352,1.906c1.322,0,2.393-1.043,2.393-2.333c0-0.832-0.447-1.561-1.119-1.975 c0.467-0.105,0.955-0.161,1.46-0.161c3.133,0,5.81,2.611,5.81,5.998C18.126,13.94,15.449,16.551,12.317,16.551z\">\n </path>\n </symbol>\n <symbol id=\"icon-get-pocket\" viewbox=\"0 0 31 32\">\n <path class=\"path1\" d=\"M27.946 2.286q1.161 0 1.964 0.813t0.804 1.973v9.268q0 3.143-1.214 6t-3.259 4.911-4.893 3.259-5.973 1.205q-3.143 0-5.991-1.205t-4.902-3.259-3.268-4.911-1.214-6v-9.268q0-1.143 0.821-1.964t1.964-0.821h25.161zM15.375 21.286q0.839 0 1.464-0.589l7.214-6.929q0.661-0.625 0.661-1.518 0-0.875-0.616-1.491t-1.491-0.616q-0.839 0-1.464 0.589l-5.768 5.536-5.768-5.536q-0.625-0.589-1.446-0.589-0.875 0-1.491 0.616t-0.616 1.491q0 0.911 0.643 1.518l7.232 6.929q0.589 0.589 1.446 0.589z\">\n </path>\n </symbol>\n <symbol id=\"icon-vimeo\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M30.518 9.25q-0.179 4.214-5.929 11.625-5.946 7.696-10.036 7.696-2.536 0-4.286-4.696-0.786-2.857-2.357-8.607-1.286-4.679-2.804-4.679-0.321 0-2.268 1.357l-1.375-1.75q0.429-0.375 1.929-1.723t2.321-2.063q2.786-2.464 4.304-2.607 1.696-0.161 2.732 0.991t1.446 3.634q0.786 5.125 1.179 6.661 0.982 4.446 2.143 4.446 0.911 0 2.75-2.875 1.804-2.875 1.946-4.393 0.232-2.482-1.946-2.482-1.018 0-2.161 0.464 2.143-7.018 8.196-6.821 4.482 0.143 4.214 5.821z\">\n </path>\n </symbol>\n <symbol id=\"icon-reddit-alien\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M32 15.107q0 1.036-0.527 1.884t-1.42 1.295q0.214 0.821 0.214 1.714 0 2.768-1.902 5.125t-5.188 3.723-7.143 1.366-7.134-1.366-5.179-3.723-1.902-5.125q0-0.839 0.196-1.679-0.911-0.446-1.464-1.313t-0.554-1.902q0-1.464 1.036-2.509t2.518-1.045q1.518 0 2.589 1.125 3.893-2.714 9.196-2.893l2.071-9.304q0.054-0.232 0.268-0.375t0.464-0.089l6.589 1.446q0.321-0.661 0.964-1.063t1.411-0.402q1.107 0 1.893 0.777t0.786 1.884-0.786 1.893-1.893 0.786-1.884-0.777-0.777-1.884l-5.964-1.321-1.857 8.429q5.357 0.161 9.268 2.857 1.036-1.089 2.554-1.089 1.482 0 2.518 1.045t1.036 2.509zM7.464 18.661q0 1.107 0.777 1.893t1.884 0.786 1.893-0.786 0.786-1.893-0.786-1.884-1.893-0.777q-1.089 0-1.875 0.786t-0.786 1.875zM21.929 25q0.196-0.196 0.196-0.464t-0.196-0.464q-0.179-0.179-0.446-0.179t-0.464 0.179q-0.732 0.75-2.161 1.107t-2.857 0.357-2.857-0.357-2.161-1.107q-0.196-0.179-0.464-0.179t-0.446 0.179q-0.196 0.179-0.196 0.455t0.196 0.473q0.768 0.768 2.116 1.214t2.188 0.527 1.625 0.080 1.625-0.080 2.188-0.527 2.116-1.214zM21.875 21.339q1.107 0 1.884-0.786t0.777-1.893q0-1.089-0.786-1.875t-1.875-0.786q-1.107 0-1.893 0.777t-0.786 1.884 0.786 1.893 1.893 0.786z\">\n </path>\n </symbol>\n <symbol id=\"icon-hashtag\" viewbox=\"0 0 32 32\">\n <path class=\"path1\" d=\"M17.696 18.286l1.143-4.571h-4.536l-1.143 4.571h4.536zM31.411 9.286l-1 4q-0.125 0.429-0.554 0.429h-5.839l-1.143 4.571h5.554q0.268 0 0.446 0.214 0.179 0.25 0.107 0.5l-1 4q-0.089 0.429-0.554 0.429h-5.839l-1.446 5.857q-0.125 0.429-0.554 0.429h-4q-0.286 0-0.464-0.214-0.161-0.214-0.107-0.5l1.393-5.571h-4.536l-1.446 5.857q-0.125 0.429-0.554 0.429h-4.018q-0.268 0-0.446-0.214-0.161-0.214-0.107-0.5l1.393-5.571h-5.554q-0.268 
0-0.446-0.214-0.161-0.214-0.107-0.5l1-4q0.125-0.429 0.554-0.429h5.839l1.143-4.571h-5.554q-0.268 0-0.446-0.214-0.179-0.25-0.107-0.5l1-4q0.089-0.429 0.554-0.429h5.839l1.446-5.857q0.125-0.429 0.571-0.429h4q0.268 0 0.446 0.214 0.161 0.214 0.107 0.5l-1.393 5.571h4.536l1.446-5.857q0.125-0.429 0.571-0.429h4q0.268 0 0.446 0.214 0.161 0.214 0.107 0.5l-1.393 5.571h5.554q0.268 0 0.446 0.214 0.161 0.214 0.107 0.5z\">\n </path>\n </symbol>\n <symbol id=\"icon-chain\" viewbox=\"0 0 30 32\">\n <path class=\"path1\" d=\"M26 21.714q0-0.714-0.5-1.214l-3.714-3.714q-0.5-0.5-1.214-0.5-0.75 0-1.286 0.571 0.054 0.054 0.339 0.33t0.384 0.384 0.268 0.339 0.232 0.455 0.063 0.491q0 0.714-0.5 1.214t-1.214 0.5q-0.268 0-0.491-0.063t-0.455-0.232-0.339-0.268-0.384-0.384-0.33-0.339q-0.589 0.554-0.589 1.304 0 0.714 0.5 1.214l3.679 3.696q0.482 0.482 1.214 0.482 0.714 0 1.214-0.464l2.625-2.607q0.5-0.5 0.5-1.196zM13.446 9.125q0-0.714-0.5-1.214l-3.679-3.696q-0.5-0.5-1.214-0.5-0.696 0-1.214 0.482l-2.625 2.607q-0.5 0.5-0.5 1.196 0 0.714 0.5 1.214l3.714 3.714q0.482 0.482 1.214 0.482 0.75 0 1.286-0.554-0.054-0.054-0.339-0.33t-0.384-0.384-0.268-0.339-0.232-0.455-0.063-0.491q0-0.714 0.5-1.214t1.214-0.5q0.268 0 0.491 0.063t0.455 0.232 0.339 0.268 0.384 0.384 0.33 0.339q0.589-0.554 0.589-1.304zM29.429 21.714q0 2.143-1.518 3.625l-2.625 2.607q-1.482 1.482-3.625 1.482-2.161 0-3.643-1.518l-3.679-3.696q-1.482-1.482-1.482-3.625 0-2.196 1.571-3.732l-1.571-1.571q-1.536 1.571-3.714 1.571-2.143 0-3.643-1.5l-3.714-3.714q-1.5-1.5-1.5-3.643t1.518-3.625l2.625-2.607q1.482-1.482 3.625-1.482 2.161 0 3.643 1.518l3.679 3.696q1.482 1.482 1.482 3.625 0 2.196-1.571 3.732l1.571 1.571q1.536-1.571 3.714-1.571 2.143 0 3.643 1.5l3.714 3.714q1.5 1.5 1.5 3.643z\">\n </path>\n </symbol>\n <symbol id=\"icon-thumb-tack\" viewbox=\"0 0 21 32\">\n <path class=\"path1\" d=\"M8.571 15.429v-8q0-0.25-0.161-0.411t-0.411-0.161-0.411 0.161-0.161 0.411v8q0 0.25 0.161 0.411t0.411 0.161 0.411-0.161 0.161-0.411zM20.571 21.714q0 0.464-0.339 0.804t-0.804 0.339h-7.661l-0.911 8.625q-0.036 0.214-0.188 0.366t-0.366 0.152h-0.018q-0.482 0-0.571-0.482l-1.357-8.661h-7.214q-0.464 0-0.804-0.339t-0.339-0.804q0-2.196 1.402-3.955t3.17-1.759v-9.143q-0.929 0-1.607-0.679t-0.679-1.607 0.679-1.607 1.607-0.679h11.429q0.929 0 1.607 0.679t0.679 1.607-0.679 1.607-1.607 0.679v9.143q1.768 0 3.17 1.759t1.402 3.955z\">\n </path>\n </symbol>\n <symbol id=\"icon-arrow-left\" viewbox=\"0 0 43 32\">\n <path class=\"path1\" d=\"M42.311 14.044c-0.178-0.178-0.533-0.356-0.711-0.356h-33.778l10.311-10.489c0.178-0.178 0.356-0.533 0.356-0.711 0-0.356-0.178-0.533-0.356-0.711l-1.6-1.422c-0.356-0.178-0.533-0.356-0.889-0.356s-0.533 0.178-0.711 0.356l-14.578 14.933c-0.178 0.178-0.356 0.533-0.356 0.711s0.178 0.533 0.356 0.711l14.756 14.933c0 0.178 0.356 0.356 0.533 0.356s0.533-0.178 0.711-0.356l1.6-1.6c0.178-0.178 0.356-0.533 0.356-0.711s-0.178-0.533-0.356-0.711l-10.311-10.489h33.778c0.178 0 0.533-0.178 0.711-0.356 0.356-0.178 0.533-0.356 0.533-0.711v-2.133c0-0.356-0.178-0.711-0.356-0.889z\">\n </path>\n </symbol>\n <symbol id=\"icon-arrow-right\" viewbox=\"0 0 43 32\">\n <path class=\"path1\" d=\"M0.356 17.956c0.178 0.178 0.533 0.356 0.711 0.356h33.778l-10.311 10.489c-0.178 0.178-0.356 0.533-0.356 0.711 0 0.356 0.178 0.533 0.356 0.711l1.6 1.6c0.178 0.178 0.533 0.356 0.711 0.356s0.533-0.178 0.711-0.356l14.756-14.933c0.178-0.356 0.356-0.711 0.356-0.889s-0.178-0.533-0.356-0.711l-14.756-14.933c0-0.178-0.356-0.356-0.533-0.356s-0.533 0.178-0.711 0.356l-1.6 1.6c-0.178 0.178-0.356 0.533-0.356 0.711s0.178 0.533 0.356 0.711l10.311 
10.489h-33.778c-0.178 0-0.533 0.178-0.711 0.356-0.356 0.178-0.533 0.356-0.533 0.711v2.311c0 0.178 0.178 0.533 0.356 0.711z\">\n </path>\n </symbol>\n <symbol id=\"icon-play\" viewbox=\"0 0 22 28\">\n <path d=\"M21.625 14.484l-20.75 11.531c-0.484 0.266-0.875 0.031-0.875-0.516v-23c0-0.547 0.391-0.781 0.875-0.516l20.75 11.531c0.484 0.266 0.484 0.703 0 0.969z\">\n </path>\n </symbol>\n <symbol id=\"icon-pause\" viewbox=\"0 0 24 28\">\n <path d=\"M24 3v22c0 0.547-0.453 1-1 1h-8c-0.547 0-1-0.453-1-1v-22c0-0.547 0.453-1 1-1h8c0.547 0 1 0.453 1 1zM10 3v22c0 0.547-0.453 1-1 1h-8c-0.547 0-1-0.453-1-1v-22c0-0.547 0.453-1 1-1h8c0.547 0 1 0.453 1 1z\">\n </path>\n </symbol>\n </defs>\n </svg>\n</body>\n\n" ] ], [ [ "### Find the course schedule table from the syllabus: \nUsually organized data in HTML format on a website is stored in tables under `<table>, <tr>,` and `<td>` tags. Here we want to extract the information in the Data-X syllabus.\n\n**NOTE:** To identify element, class or id name of the object of your interest on a web page, you can go to the link address in your browser, under 'more tools' option click __'developer tools'__. This opens the 'Document object Model' of the webpage. Hover on the element of your interest on the webpage to check its location. This will help you in deciding which parts of 'soup content' you want to parse. More info at: https://developer.chrome.com/devtools", "_____no_output_____" ] ], [ [ "# We can see that course schedule is in <table><table/> elements\n# We can also get the table\nfull_table = soup.find_all('table')", "_____no_output_____" ], [ "full_table", "_____no_output_____" ], [ "# A new row in an HTML table starts with <tr> tag\n# A new column entry is defined by <td> tag\ntable_result = list()\nfor table in full_table:\n for row in table.find_all('tr'):\n row_cells = row.find_all('td') # find all table data\n row_entries = [cell.text for cell in row_cells]\n print(row_entries) \n table_result.append(row_entries)\n # get all the table data into a list", "['Topic 1:', 'Introduction\\nTheory: Overview of Frameworks for obtaining insights from data (Slides).\\nTools: Python Review']\n['Code', '1. Introduction to GitHub\\n2. Setting up Anaconda Environment\\n3. 
Coding with Python Review']\n['', '']\n['\\xa0Project', 'Office Hours Session that week for Environment Set Up']\n['Topic 2:', 'Tools:\\xa0Linear Regression, Data as a Signal with Correlation']\n['Code', '-']\n['Reading', '\\xa0Text Book Chapter 2 | Page 33 -45']\n['\\xa0Project', 'Bring three ideas to the class.']\n['Topic 3:', 'Theory: Numpy']\n['Code', '\\n\\nCoding with Numpy\\nCoding with Pandas\\nCoding with Matplotlib\\n\\n']\n['Reading', 'DataCamp, tutorialpoint, Text Book Chapter 1 | Page 3 -13']\n['\\xa0Project', 'Share ideas and finalize projects']\n['Topic 4:', 'Theory:\\xa0Classification and Logistic Regression']\n['Code', 'Coding with Pandas, Matplotlib']\n['Reading', 'Text Book Chapter 3| Page 81 -95']\n['\\xa0Project', 'Develop insightful story and brainstorm solutions']\n['Topic 5:', 'Theory:\\xa0Correlation']\n['Code', '-']\n['Reading', '-']\n['\\xa0Project', 'Team break out discussions']\n['Topic 6:', 'Theory:\\xa0Into to Skikit-Learn']\n['Code', 'Coding with Skikit-Learn']\n['Reading', '-']\n['\\xa0Project', '-']\n['Topic 7:', 'Theory:\\xa0Matplotlib / Python Visualization\\xa0']\n['Code', 'Coding with\\xa0Matplotlib']\n['Reading', '-']\n['\\xa0Project', '-']\n['Topic 8:', 'Theory:\\xa0Low Tech Demo Presentations']\n['Code', '-']\n['Reading', '']\n['\\xa0Project', 'Low Tech Demo Presentations']\n['Topic 9:', 'Theory:\\xa0Classification & Prediction']\n['Code', 'Reference Titanic Notebook']\n['Reading', '']\n['\\xa0Project', '']\n['Topic 10:', 'Theory:\\xa0Machine Learning & Cross Validation']\n['Code', '-']\n['Reading', '']\n['\\xa0Project', '']\n['Topic 10:', 'Theory:\\xa0Decision Trees, Information Theory, Random Forest']\n['Code', 'Coding with python']\n['Reading', 'Text Book\\xa0Chapter 6, Chapter 7']\n['\\xa0Project', '']\n['Topic 11:', 'Tools: Webscraping\\xa0/ crawling\\n']\n['Code', '']\n['Reading', '\\xa0Text Book Chapter 4| Page 105 - 110']\n['\\xa0Project', '']\n['Topic 12:', 'Theory:\\n1. Introduction to Natural Language Processing - NLTK overview and Word2vec\\n2. Sentiment Analysis\\nTools: NLTK, Gensim, Tensorflow']\n['Code', 'Coding with NLTK, Gensim, Tensorflow']\n['Reading', 'Links']\n['\\xa0Project', 'Agile sprint with reflection']\n['Topic 13:', 'Theory: Polynomial Regression, Bias Variance Tradeoff, Regularization']\n['Code', 'Coding with Python']\n['Reading', '-']\n['\\xa0Project', '']\n['Topic 14:', 'Theory: Introduction to Neural Networks- ANN, CNN, RNN\\nTools:\\xa0 Tensorflow']\n['Code', 'Coding with Tensorflow for image classification']\n['Reading', 'Text Book\\xa0Chapter 10| Page 256-272, Chapter 13| Page 357-359']\n['\\xa0Project', 'Low Tech Demo and Validation Results']\n['Topic 15:', 'Theory:\\n1. Introduction to database\\n2.\\xa0Introduction to SQL\\n3.\\xa0\\xa0Introduction to Block Chain as a database\\n4. 
Big Data Analysis with Spark\\nTools: SQL libraries in python, Solidity']\n['Code', 'Coding with python for SQL\\xa0 and\\xa0 Spark']\n['Reading', 'Text Book']\n['\\xa0Project', 'Agile sprint with reflection']\n['Topic 16:', 'Theory: Spectral Signals, LTI -Fundamentals and Applications\\nTools: Temporal and Spatial Signal processing']\n['Code', 'Coding with python for signal processing']\n['Reading', '\\xa0Text Book']\n['\\xa0Project', '\\xa0Agile sprint with reflection']\n['Topic 17:', 'Theory: Reinforcement Learning primer']\n['Code', 'TBD']\n['Reading', 'Text Book\\xa0Chapter 16| Page 443-450']\n['Project', 'Agile sprint with reflection']\n['Topic 18:', 'Project Presentations - Demo Day(s)']\n['Code', 'Presentation including running code and code samples']\n['Due', '\\xa0Includes preparation time in last week']\n['\\xa0Project', 'Final Presentations']\n" ], [ "# We can also read it into a Pandas DataFrame\nimport pandas as pd\npd.set_option('display.max_colwidth', 10000)\n\ndf = pd.DataFrame(table_result)\ndf", "_____no_output_____" ], [ "# Pandas can also grab tables from a website automatically\n\nimport pandas as pd\n\nimport html5lib\n# requires html5lib:\n#!conda install --yes html5lib\ndfs = pd.read_html('https://data-x.blog/syllabus/')\n# returns a list of all tables at the url\n\n", "_____no_output_____" ], [ "dfs[1]", "_____no_output_____" ], [ "print(type(dfs))  # list of tables\nprint(len(dfs))   # number of tables found on the page\nprint(type(dfs[0]))  # each one is stored as a DataFrame\ndf = pd.concat(dfs, ignore_index=True)\ndf = df.dropna()", "<class 'list'>\n19\n<class 'pandas.core.frame.DataFrame'>\n" ], [ "# Looks so-so, but it is stripped of line-break characters etc.\ndf.head()", "_____no_output_____" ], [ "# Make it nicer\n\n# Assign column names\ndf.columns = ['Part', 'Detailed Description']\n\n# Assign a week number\nweeks = list()\ni = 0\nfor k in range(df.shape[0]):\n    if 'Topic' in df.iloc[k, 0]:\n        i = i + 1\n    weeks.append('Lecture{}'.format(i))\ndf['Week'] = weeks", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "# Set Week and Part as a MultiIndex\ndf = df.set_index(['Week', 'Part'])", "_____no_output_____" ], [ "df.head(12).dropna()", "_____no_output_____" ] ], [ [ "<a id='sec3'></a>", "_____no_output_____" ], [ "# Keep a current list of the IMDB top 250 vs. Metascore\n\nLet's say that we want to build an app that can display the most popular movies at the IMDB website.\n\nWe go to the URL that lists the top 250 movies according to the reviews: http://www.imdb.com/chart/top\n\nWe see that the entries are stored in a table format, so we try pandas.", "_____no_output_____" ] ], [ [ "df_imdb = pd.read_html('http://www.imdb.com/chart/top', attrs={'class': 'chart full-width'})[0]", "_____no_output_____" ], [ "df_imdb.head()", "_____no_output_____" ], [ "df_imdb = df_imdb.drop(df_imdb.columns[[0, 3, 4]], axis=1)", "_____no_output_____" ], [ "df_imdb.tail()", "_____no_output_____" ], [ "# Extract all URLs to find the Metascore\nimdb_html = requests.get('http://www.imdb.com/chart/top').content\nsoup = bs.BeautifulSoup(imdb_html, features='html.parser')", "_____no_output_____" ], [ "links = soup.find('table').find_all('a')\n", "_____no_output_____" ], [ "urls = ['http://www.imdb.com' + l.get('href') for l in links]\nurls[0]", "_____no_output_____" ], [ "urls[-1]", "_____no_output_____" ], [ "import numpy as np\nmeta_scores = np.zeros(250, dtype=int)", "_____no_output_____" ], [ "\nheaders = {\n    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0',\n    'From': '[email 
protected]'  # contact email, obfuscated at the source\n}\n\nfor idx, url in enumerate(urls):\n    print('Getting metascore for movie {}'.format(idx))\n    film = requests.get(url, headers=headers, timeout=10)\n    print(film)\n    soup = bs.BeautifulSoup(film.content, features='html.parser')\n    info = soup.find(class_='metacriticScore score_favorable titleReviewBarSubItem')\n    meta_scores[idx] = int(info.find('span').text)\n    if idx == 5:  # demo: only fetch the first few movie pages\n        break", "Getting metascore for movie 0\n<Response [200]>\nGetting metascore for movie 1\n<Response [200]>\nGetting metascore for movie 2\n<Response [200]>\nGetting metascore for movie 3\n<Response [200]>\nGetting metascore for movie 4\n<Response [200]>\nGetting metascore for movie 5\n<Response [200]>\n" ], [ "df_imdb['meta_scores'] = meta_scores", "_____no_output_____" ], [ "df_imdb.head(20)", "_____no_output_____" ] ], [ [ "<a id='sec4'></a>\n# Scrape images and other files\n\nLet's see how we can automatically find and download files linked on any website.", "_____no_output_____" ] ], [ [ "# As we can see, there are two images on data-x.blog/resources;\n# say that we want to download them.\n# Images are displayed with the <img> tag in HTML\n\n# open connection and create a new soup\n\nraw = requests.get('https://data-x.blog/resources/').content\nsoup = bs.BeautifulSoup(raw, features='html.parser')\n\nprint(soup.find('img'))\n# as we can see below, the image urls\n# are stored in the src attribute inside the img tag", "_____no_output_____" ], [ "# Parse all urls to the images\nimg_urls = list()\nfor img in soup.find_all('img'):\n    img_url = img.get('src')\n    if '.jpeg' in img_url or '.jpg' in img_url:\n        print(img_url)\n        img_urls.append(img_url)\n", "_____no_output_____" ], [ "%ls", "_____no_output_____" ], [ "# To download and save files with Python we can use\n# the shutil library, which is a file-operations library\n'''\nThe shutil module offers a number of high-level operations on files and \ncollections of files. 
In particular, functions are provided which support \nfile copying and removal.\n'''\n\nimport shutil\n\nfor idx, img_url in enumerate(img_urls):\n    # enumerate to create an integer file name for every image\n\n    # make a request to the image URL\n    img_source = requests.get(img_url, stream=True)\n    # we set stream=True to download /\n    # stream the content of the data\n\n    with open('img' + str(idx) + '.jpg', 'wb') as file:\n        # open file connection, create file and write to it\n        shutil.copyfileobj(img_source.raw, file)\n        # save the raw file object\n\n    del img_source  # to remove the file from memory", "_____no_output_____" ], [ "%ls", "_____no_output_____" ] ], [ [ "## Scraping function to download files of any type from a website\n\nBelow is a function that takes in a website and a specific file type, and downloads up to X matching files from the website.", "_____no_output_____" ] ], [ [ "# Extended scraping function for any file format\nimport os  # To interact with the operating system and format file names\nimport shutil  # To copy file objects from python to disk\nimport requests\nimport bs4 as bs\n\ndef py_file_scraper(url, html_tag='img', source_tag='src', file_type='.jpg', max=-1):\n\n    '''\n    Function that scrapes a website for certain file formats.\n    The files will be placed in a folder called \"files\"\n    in the working directory.\n\n    url = the url we want to scrape from\n    html_tag = the file tag (usually img for images or\n        a for file links)\n\n    source_tag = the source tag for the file url\n        (usually src for images or href for files)\n\n    file_type = .png, .jpg, .pdf, .csv, .xls etc.\n\n    max = integer (max number of files to scrape;\n        if -1 it will scrape all files).\n        Note: this parameter shadows the built-in max() inside the function.\n    '''\n\n    # make a directory called 'files'\n    # for the downloads if it does not exist\n    if not os.path.exists('files/'):\n        os.makedirs('files/')\n    print('Loading content from the url...')\n    source = requests.get(url).content\n    print('Creating content soup...')\n    soup = bs.BeautifulSoup(source, 'html.parser')\n\n    i = 0\n    print('Finding tag: %s...' % html_tag)\n    for n, link in enumerate(soup.find_all(html_tag)):\n        file_url = link.get(source_tag)\n        print('\\n', n + 1, '. 
File url', file_url)\n\n        if 'http' in file_url:  # check that it is a valid link\n            print('It is a valid url..')\n\n            if file_type in file_url:  # only check for the specific\n                # file type\n\n                print('%s FILE TYPE FOUND IN THE URL...' % file_type)\n                file_name = os.path.splitext(os.path.basename(file_url))[0] + file_type\n                # extract the file name from the url\n\n                file_source = requests.get(file_url, stream=True)\n                # open a new stream connection\n\n                with open('./files/' + file_name, 'wb') as file:\n                    # open file connection, create file and\n                    # write to it\n                    shutil.copyfileobj(file_source.raw, file)\n                    # save the raw file object\n                    print('DOWNLOADED:', file_name)\n                    i += 1\n\n                del file_source  # delete from memory\n            else:\n                print('%s file type NOT found in url:' % file_type)\n                print('EXCLUDED:', file_url)\n                # urls not downloaded from\n\n        if i == max:\n            print('Max reached')\n            break\n\n    print('Done!')", "_____no_output_____" ] ], [ [ "# Scrape funny cat pictures", "_____no_output_____" ] ], [ [ "py_file_scraper('https://funcatpictures.com/')\n# scrape cats", "_____no_output_____" ], [ "!ls ./files", "_____no_output_____" ] ], [ [ "# Scrape PDFs from the Data-X site", "_____no_output_____" ] ], [ [ "py_file_scraper('https://data-x.blog/resources',\n                html_tag='a', source_tag='href', file_type='.pdf',\n                max=5)", "_____no_output_____" ] ], [ [ "# Scrape real-data CSV files from websites", "_____no_output_____" ] ], [ [ "py_file_scraper('http://www-eio.upc.edu/~pau/cms/rdata/datasets.html',\n                html_tag='a',  # R data sets\n                source_tag='href', file_type='.csv', max=5)", "_____no_output_____" ] ], [ [ "# Extended tip: IP rotation\n\nThe website might get suspicious if a lot of requests are coming from the same IP address. Using a shared proxy, a VPN, or Tor can help you get around that problem.\n\nFor example:\n\n```python\nproxies = {'http' : 'http://10.10.0.0:0000', \n           'https': 'http://120.10.0.0:0000'}\nresponse = requests.get('https://whateverwebsite.com', proxies=proxies, timeout=5)\n\n```\n\nAlso note the `timeout` argument: it specifies that the request should not run indefinitely, so your scraper cannot hang forever on a slow or unresponsive server.\n\nBy using a shared proxy, the website will see the IP address of the proxy server and not yours. A VPN connects you to another network, and the IP address of the VPN provider will be sent to the website.", "_____no_output_____" ], [ "---\n<a id='secBK'></a>\n# Breakout problem\n\n\nIn this Breakout Problem you should extract live weather data in Berkeley from:\n\n[http://forecast.weather.gov/MapClick.php?lat=37.87158815800046&lon=-122.27274583799971](http://forecast.weather.gov/MapClick.php?lat=37.87158815800046&lon=-122.27274583799971)\n\n* Tasks to scrape:\n    * period / day (as Tonight, Friday, FridayNight etc.)\n    * the temperature for the period (as Low, High)\n    * the long weather description (e.g. Partly cloudy, with a low around 49..)\n\nStore the scraped data strings in a Pandas DataFrame\n\n\n\n**Hint:** The weather information is found in a div tag with `id='seven-day-forecast'`\n\n", "_____no_output_____", "\n# Appendix", "_____no_output_____", "<a id='sec6'></a>\n# Scrape Bloomberg sitemap (XML) for current political news", "_____no_output_____" ] ], [ [ "# XML sitemaps list all of a site's urls, 
just between <loc> tags\n# XML is both human- and machine-readable\n# Sitemaps hold the newest links: to find all of a site's links, find its sitemap!\n# News websites keep sitemaps per section (e.g. politics); bots that\n# track news track the sitemaps\n\n# Before scraping a website, look at its robots.txt file\nbs.BeautifulSoup(requests.get('https://www.bloomberg.com/robots.txt').content, 'lxml')", "_____no_output_____" ], [ "source = requests.get('https://www.bloomberg.com/feeds/bpol/sitemap_news.xml').content\nsoup = bs.BeautifulSoup(source, 'xml')  # Note the parser 'xml'", "_____no_output_____" ], [ "print(soup.prettify())", "_____no_output_____" ], [ "# Find political news headlines\nfor news in soup.find_all({'news'}):\n    print(news.title.text)\n    print(news.publication_date.text)\n    # print(news.keywords.text)\n    print('\\n')", "_____no_output_____" ] ], [ [ "<a id='sec7'></a>\n# Web crawl\n\nWeb crawling is almost like webscraping, but instead you crawl a specific website (and often its subsites) and extract meta information. It can be seen as simple, recursive scraping. This can be used for web indexing (in order to build a web search engine).", "_____no_output_____", "## Web crawl a Twitter account\n**Authors:** Kunal Desai & Alexander Fred Ojala", "_____no_output_____" ] ], [ [ "import bs4\nfrom bs4 import BeautifulSoup\nimport requests", "_____no_output_____" ], [ "# Helper function to maintain the urls and the number of times they appear\n\nurl_dict = dict()\n\ndef add_to_dict(url_d, key):\n    if key in url_d:\n        url_d[key] = url_d[key] + 1\n    else:\n        url_d[key] = 1", "_____no_output_____" ], [ "# Recursive function which extracts links from the given url up to a given 'depth'.\n\ndef get_urls(url, depth):\n    if depth == 0:\n        return\n    r = requests.get(url)\n    soup = BeautifulSoup(r.text, 'html.parser')\n    for link in soup.find_all('a'):\n        if link.has_attr('href') and \"https://\" in link['href']:\n            # print(link['href'])\n            add_to_dict(url_dict, link['href'])\n            get_urls(link['href'], depth - 1)", "_____no_output_____" ], [ "# Iterative function which extracts links from the given url up to a given 'depth'.\n\ndef get_urls_iterative(url, depth):\n    urls = [url]\n    for url in urls:\n        r = requests.get(url)\n        soup = BeautifulSoup(r.text, 'html.parser')\n        for link in soup.find_all('a'):\n            if link.has_attr('href') and \"https://\" in link['href']:\n                add_to_dict(url_dict, link['href'])\n                urls.append(link['href'])\n        if len(urls) > depth:\n            break", "_____no_output_____" ], [ "get_urls(\"https://twitter.com/GolfWorld\", 2)\nfor key in url_dict:\n    print(str(key) + \" ---- \" + str(url_dict[key]))", "_____no_output_____" ] ], [ [ "<a id='sec8'></a>\n# SEO: Visualize the sitemap and categories of a website\n\n**Source:** https://www.ayima.com/guides/how-to-visualize-an-xml-sitemap-using-python.html", "_____no_output_____" ] ], [ [ "# Visualize an XML sitemap with categories!\nimport requests\nfrom bs4 import BeautifulSoup\n\n# url = 'https://www.sportchek.ca/sitemap.xml'  # alternative sitemap to try\nurl = 'https://www.bloomberg.com/feeds/bpol/sitemap_index.xml'\npage = requests.get(url)\nprint('Loaded page with: %s' % page)\n\nsitemap_index = BeautifulSoup(page.content, 'html.parser')\nprint('Created %s object' % type(sitemap_index))", "_____no_output_____" ], [ "urls = [element.text for element in sitemap_index.findAll('loc')]\nprint(urls)", "_____no_output_____" ], [ "def extract_links(url):\n    ''' Open an XML sitemap and find content wrapped in loc tags. 
'''\n\n page = requests.get(url)\n soup = BeautifulSoup(page.content, 'html.parser')\n links = [element.text for element in soup.findAll('loc')]\n\n return links\n\nsitemap_urls = []\nfor url in urls:\n links = extract_links(url)\n sitemap_urls += links\n\nprint('Found {:,} URLs in the sitemap'.format(len(sitemap_urls)))", "_____no_output_____" ], [ "with open('sitemap_urls.dat', 'w') as f:\n for url in sitemap_urls:\n f.write(url + '\\n')", "_____no_output_____" ], [ "'''\nCategorize a list of URLs by site path.\nThe file containing the URLs should exist in the working directory and be\nnamed sitemap_urls.dat. It should contain one URL per line.\nCategorization depth can be specified by executing a call like this in the\nterminal (where we set the granularity depth level to 5):\n python categorize_urls.py --depth 5\nThe same result can be achieved by setting the categorization_depth variable\nmanually at the head of this file and running the script with:\n python categorize_urls.py\n'''\nfrom __future__ import print_function\n\n\ncategorization_depth=3\n\n\n\n# Main script functions\n\n\ndef peel_layers(urls, layers=3):\n ''' Builds a dataframe containing all unique page identifiers up\n to a specified depth and counts the number of sub-pages for each.\n Prints results to a CSV file.\n urls : list\n List of page URLs.\n layers : int\n Depth of automated URL search. Large values for this parameter\n may cause long runtimes depending on the number of URLs.\n '''\n\n # Store results in a dataframe\n sitemap_layers = pd.DataFrame()\n\n # Get base levels\n bases = pd.Series([url.split('//')[-1].split('/')[0] for url in urls])\n sitemap_layers[0] = bases\n\n # Get specified number of layers\n for layer in range(1, layers+1):\n\n page_layer = []\n for url, base in zip(urls, bases):\n try:\n page_layer.append(url.split(base)[-1].split('/')[layer])\n except:\n # There is nothing that deep!\n page_layer.append('')\n\n sitemap_layers[layer] = page_layer\n\n # Count and drop duplicate rows + sort\n sitemap_layers = sitemap_layers.groupby(list(range(0, layers+1)))[0].count()\\\n .rename('counts').reset_index()\\\n .sort_values('counts', ascending=False)\\\n .sort_values(list(range(0, layers)), ascending=True)\\\n .reset_index(drop=True)\n\n # Convert column names to string types and export\n sitemap_layers.columns = [str(col) for col in sitemap_layers.columns]\n sitemap_layers.to_csv('sitemap_layers.csv', index=False)\n\n # Return the dataframe\n return sitemap_layers\n\n\n\n\nsitemap_urls = open('sitemap_urls.dat', 'r').read().splitlines()\nprint('Loaded {:,} URLs'.format(len(sitemap_urls)))\n\nprint('Categorizing up to a depth of %d' % categorization_depth)\nsitemap_layers = peel_layers(urls=sitemap_urls,\n layers=categorization_depth)\nprint('Printed {:,} rows of data to sitemap_layers.csv'.format(len(sitemap_layers)))\n", "_____no_output_____" ], [ "'''\nVisualize a list of URLs by site path.\nThis script reads in the sitemap_layers.csv file created by the\ncategorize_urls.py script and builds a graph visualization using Graphviz.\nGraph depth can be specified by executing a call like this in the\nterminal:\n python visualize_urls.py --depth 4 --limit 10 --title \"My Sitemap\" --style \"dark\" --size \"40\"\nThe same result can be achieved by setting the variables manually at the head\nof this file and running the script with:\n python visualize_urls.py\n'''\nfrom __future__ import print_function\n\n\n# Set global variables\n\ngraph_depth = 3 # Number of layers deep to plot categorization\nlimit = 
3   # Maximum number of nodes for a branch\ntitle = ''   # Graph title\nstyle = 'light'   # Graph style, can be \"light\" or \"dark\"\nsize = '8,5'   # Size of rendered PDF graph\n\n\n# Import external library dependencies\n\nimport pandas as pd\nimport graphviz\n\n\n\n# Main script functions\n\ndef make_sitemap_graph(df, layers=3, limit=50, size='8,5'):\n    ''' Make a sitemap graph up to a specified layer depth.\n    sitemap_layers : DataFrame\n        The dataframe created by the peel_layers function\n        containing sitemap information.\n    layers : int\n        Maximum depth to plot.\n    limit : int\n        The maximum number of node-edge connections. Good to set this\n        low for visualizing deep into site maps.\n    '''\n\n\n    # Check to make sure we are not trying to plot too many layers\n    if layers > len(df) - 1:\n        layers = len(df) - 1\n        print('There are only %d layers available to plot, setting layers=%d'\n              % (layers, layers))\n\n\n    # Initialize graph\n    f = graphviz.Digraph('sitemap', filename='sitemap_graph_%d_layer' % layers)\n    f.body.extend(['rankdir=LR', 'size=\"%s\"' % size])\n\n\n    def add_branch(f, names, vals, limit, connect_to=''):\n        ''' Adds a set of nodes and edges to nodes on the previous layer. '''\n\n        # Get the currently existing node names\n        node_names = [item.split('\"')[1] for item in f.body if 'label' in item]\n\n        # Only add a new branch if it will connect to a previously created node\n        if connect_to:\n            if connect_to in node_names:\n                for name, val in list(zip(names, vals))[:limit]:\n                    f.node(name='%s-%s' % (connect_to, name), label=name)\n                    f.edge(connect_to, '%s-%s' % (connect_to, name), label='{:,}'.format(val))\n\n\n    f.attr('node', shape='rectangle')  # Plot nodes as rectangles\n\n    # Add the first layer of nodes\n    for name, counts in df.groupby(['0'])['counts'].sum().reset_index()\\\n                          .sort_values(['counts'], ascending=False).values:\n        f.node(name=name, label='{} ({:,})'.format(name, counts))\n\n    if layers == 0:\n        return f\n\n    f.attr('node', shape='oval')  # Plot nodes as ovals\n    f.graph_attr.update()\n\n    # Loop over each layer adding nodes and edges to prior nodes\n    for i in range(1, layers+1):\n        cols = [str(i_) for i_ in range(i)]\n        nodes = df[cols].drop_duplicates().values\n        for j, k in enumerate(nodes):\n\n            # Compute the mask to select correct data\n            mask = True\n            for j_, ki in enumerate(k):\n                mask &= df[str(j_)] == ki\n\n            # Select the data then count branch size, sort, and truncate\n            data = df[mask].groupby([str(i)])['counts'].sum()\\\n                           .reset_index().sort_values(['counts'], ascending=False)\n\n            # Add to the graph\n            add_branch(f,\n                       names=data[str(i)].values,\n                       vals=data['counts'].values,\n                       limit=limit,\n                       connect_to='-'.join(['%s']*i) % tuple(k))\n\n            print(('Built graph up to node %d / %d in layer %d' % (j, len(nodes), i))\\\n                  .ljust(50), end='\\r')\n\n    return f\n\n\ndef apply_style(f, style, title=''):\n    ''' Apply the style and add a title if desired. 
More styling options are\n documented here: http://www.graphviz.org/doc/info/attrs.html#d:style\n f : graphviz.dot.Digraph\n The graph object as created by graphviz.\n style : str\n Available styles: 'light', 'dark'\n title : str\n Optional title placed at the bottom of the graph.\n '''\n\n dark_style = {\n 'graph': {\n 'label': title,\n 'bgcolor': '#3a3a3a',\n 'fontname': 'Helvetica',\n 'fontsize': '18',\n 'fontcolor': 'white',\n },\n 'nodes': {\n 'style': 'filled',\n 'color': 'white',\n 'fillcolor': 'black',\n 'fontname': 'Helvetica',\n 'fontsize': '14',\n 'fontcolor': 'white',\n },\n 'edges': {\n 'color': 'white',\n 'arrowhead': 'open',\n 'fontname': 'Helvetica',\n 'fontsize': '12',\n 'fontcolor': 'white',\n }\n }\n\n light_style = {\n 'graph': {\n 'label': title,\n 'fontname': 'Helvetica',\n 'fontsize': '18',\n 'fontcolor': 'black',\n },\n 'nodes': {\n 'style': 'filled',\n 'color': 'black',\n 'fillcolor': '#dbdddd',\n 'fontname': 'Helvetica',\n 'fontsize': '14',\n 'fontcolor': 'black',\n },\n 'edges': {\n 'color': 'black',\n 'arrowhead': 'open',\n 'fontname': 'Helvetica',\n 'fontsize': '12',\n 'fontcolor': 'black',\n }\n }\n\n if style == 'light':\n apply_style = light_style\n\n elif style == 'dark':\n apply_style = dark_style\n\n f.graph_attr = apply_style['graph']\n f.node_attr = apply_style['nodes']\n f.edge_attr = apply_style['edges']\n\n return f\n\n\n\n\n# Read in categorized data\nsitemap_layers = pd.read_csv('sitemap_layers.csv', dtype=str)\n# Convert numerical column to integer\nsitemap_layers.counts = sitemap_layers.counts.apply(int)\nprint('Loaded {:,} rows of categorized data from sitemap_layers.csv'\\\n .format(len(sitemap_layers)))\n\nprint('Building %d layer deep sitemap graph' % graph_depth)\nf = make_sitemap_graph(sitemap_layers, layers=graph_depth,\n limit=limit, size=size)\nf = apply_style(f, style=style, title=title)\n\nf.render(cleanup=True)\nprint('Exported graph to sitemap_graph_%d_layer.pdf' % graph_depth)\n\n\n", "_____no_output_____" ] ] ]
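The core move in the notebook above is walking `find_all('table')`, then rows, then cells. Below is a hedged, self-contained distillation of that pattern; the helper name and the inline HTML snippet are illustrative additions (not part of the original notebook), and the snippet lets the example run offline. Swap in `requests.get(url).content` to point it at a live page.

```python
import bs4 as bs

def html_tables_to_records(html):
    """Return one list of row-lists per <table> found in the given HTML."""
    soup = bs.BeautifulSoup(html, 'html.parser')
    tables = []
    for table in soup.find_all('table'):
        rows = []
        for tr in table.find_all('tr'):
            # include <th> so header rows are captured too
            cells = tr.find_all(['th', 'td'])
            rows.append([cell.get_text(strip=True) for cell in cells])
        tables.append(rows)
    return tables

sample = """
<table>
  <tr><th>Week</th><th>Topic</th></tr>
  <tr><td>1</td><td>Python review</td></tr>
  <tr><td>2</td><td>Linear regression</td></tr>
</table>
"""
print(html_tables_to_records(sample))
# [[['Week', 'Topic'], ['1', 'Python review'], ['2', 'Linear regression']]]
```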
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
ece298990505a7e6613afd4e0cc3a649a7a7cf88
5,744
ipynb
Jupyter Notebook
2018-06-18-IFPEN-Tech/notebooks/xframe.ipynb
jtpio/quantstack-talks
092f93ddb9901cb614f428e13a0b1b1e3ffcc0ec
[ "BSD-3-Clause" ]
82
2017-04-14T20:18:55.000Z
2021-12-25T23:38:52.000Z
2018-06-18-IFPEN-Tech/notebooks/xframe.ipynb
jtpio/quantstack-talks
092f93ddb9901cb614f428e13a0b1b1e3ffcc0ec
[ "BSD-3-Clause" ]
3
2017-04-07T18:37:21.000Z
2020-07-11T09:37:53.000Z
2018-06-18-IFPEN-Tech/notebooks/xframe.ipynb
jtpio/quantstack-talks
092f93ddb9901cb614f428e13a0b1b1e3ffcc0ec
[ "BSD-3-Clause" ]
59
2017-04-07T11:16:56.000Z
2022-03-25T14:48:55.000Z
20.736462
89
0.438893
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
ece2c375df9060336121b64d128556f64fd0c056
362,635
ipynb
Jupyter Notebook
covid19/covid.ipynb
stmeem/covid19-map
7e0ab7628b4ec7b649e054083db3415d1a69f743
[ "MIT" ]
null
null
null
covid19/covid.ipynb
stmeem/covid19-map
7e0ab7628b4ec7b649e054083db3415d1a69f743
[ "MIT" ]
null
null
null
covid19/covid.ipynb
stmeem/covid19-map
7e0ab7628b4ec7b649e054083db3415d1a69f743
[ "MIT" ]
null
null
null
2,627.789855
358,646
0.785961
[ [ [ "# COVID-19 World Map", "_____no_output_____" ] ], [ [ "import folium\nimport numpy\nimport pandas\nfrom IPython.core.display import display, HTML\n\ndata = pandas.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/web-data/data/cases_country.csv')\n\nlatitude =list(data[\"Lat\"].replace(numpy.nan,0))\nlongitude =list(data [\"Long_\"].replace(numpy.nan,0))\ncountry = list(data[\"Country_Region\"])\ncases = list(data[\"Confirmed\"])\nrecover = list(data[\"Recovered\"])\ndeath = list(data[\"Deaths\"])\nupdate=list(data[\"Last_Update\"])\nlastupdate=update[0][0:10]\ndisplay(HTML(\"<p> Last Updated: \"+ str(lastupdate)+\"</p>\"))\n", "_____no_output_____" ], [ "\nmap = folium.Map(location = [30,112],zoom_start=2, tiles=\"Stamen Terrain\")\n\nfeature_group = folium.FeatureGroup(name=\"My Map\")\n\ndef color_icon(cases):\n if cases < 5000:\n return 'green'\n \n elif 5000 <= cases < 50000:\n return 'blue'\n \n else:\n return 'red'\n \n\nfor lat, lon, c,d,recovered, t_case in zip(latitude, longitude, country, death, recover, cases):\n feature_group.add_child(folium.Marker(\n location =[lat, lon], popup =str(c)+\" Cases: \"+str(t_case)+\" Recovered: \"+str(recovered)+\" Death: \"+ str(d), icon =folium.Icon(color=color_icon(t_case))))\n \nmap.add_child(feature_group)\n\nmap\n", "_____no_output_____" ], [ "display(HTML(\"<br><div><div style = 'width: 30px; height: 30px; background:#f00;'></div> Total cases above 50,000</div><br>\"+ \"<div><div style = 'width: 30px; height: 30px; background:#33BCFF;'></div>Total cases above 5,000</div><br>\"+ \"<div><div style = 'width: 30px; height: 30px; background:#32CD32;'></div>Total cases below 5,000</div><br>\"+\n\"<br><p style='text-align: center;'>Copyright &copy; Sumaiya Tasmeem </p>\"))\n", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ] ]
ece2cb24ff070cf82f0d262801be32a18e810feb
23,849
ipynb
Jupyter Notebook
notebooks/Statistical Tests.ipynb
mattalhonte/survey_stats
6bffaadf36452d9220b55f6cbac1c83ff63af97d
[ "MIT" ]
null
null
null
notebooks/Statistical Tests.ipynb
mattalhonte/survey_stats
6bffaadf36452d9220b55f6cbac1c83ff63af97d
[ "MIT" ]
null
null
null
notebooks/Statistical Tests.ipynb
mattalhonte/survey_stats
6bffaadf36452d9220b55f6cbac1c83ff63af97d
[ "MIT" ]
null
null
null
27.131968
314
0.372175
[ [ [ "%load_ext autoreload\n%autoreload 2\n\nfrom src.data import survey_stats\n\nimport pathlib\nfrom pathlib import Path\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\npd.set_option('display.float_format', '{:.4f}'.format)", "The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n" ], [ "df = pd.read_csv('../data/raw/BAI_and_browsers.csv')\n\nsurvey_text = pd.read_json(\n \"../data/processed/BAI_and_browser_survey_text.json\"\n).set_index(\"name\")", "_____no_output_____" ] ], [ [ "## Display t-test results and the descriptives", "_____no_output_____" ] ], [ [ "survey_stats.display_test_and_descs(df, survey_text, \"BAI_tot\", \"Browser_used\")", "_____no_output_____" ] ], [ [ "## Display alongside question text", "_____no_output_____" ] ], [ [ "survey_stats.all_anovas_between_thrshs(\n df, survey_text, [\"BAI_tot\"], [\"Browser_used\", \"cake_or_pie\"], 1\n)", "_____no_output_____" ] ], [ [ "Can also just display significant tests - good for reports!", "_____no_output_____" ] ], [ [ "survey_stats.display_test_and_descs(df, survey_text, \"BAI_tot\", \"cake_or_pie\")", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ece2d63295ff71ef284c63f88cc8e64c1420f367
267,387
ipynb
Jupyter Notebook
examples/bloch_angle_planewave.ipynb
xj361685640/ceviche
5da9df12cb2b15cc25ca1a3d4b5eb827eb89e195
[ "MIT" ]
111
2019-11-14T13:55:15.000Z
2022-03-29T12:19:01.000Z
examples/bloch_angle_planewave.ipynb
xj361685640/ceviche
5da9df12cb2b15cc25ca1a3d4b5eb827eb89e195
[ "MIT" ]
13
2019-11-22T05:49:07.000Z
2022-03-20T17:02:59.000Z
examples/bloch_angle_planewave.ipynb
xj361685640/ceviche
5da9df12cb2b15cc25ca1a3d4b5eb827eb89e195
[ "MIT" ]
42
2019-11-13T19:29:06.000Z
2022-03-19T11:58:09.000Z
983.040441
176,288
0.952687
[ [ [ "## Bloch Boundaries and Plane Waves with Angled Incidence\nThis notebook gives a demo of Bloch periodic boundary conditions.\nAs an example, we show how to set up a simulation with a plane wave with angled incidence.\n\n![image.png](attachment:image.png)\n", "_____no_output_____" ] ], [ [ "# import all the necessities\nimport numpy as np\nimport matplotlib.pylab as plt\n\nfrom ceviche import fdfd_hz\nfrom ceviche.constants import C_0", "_____no_output_____" ], [ "# define some constants\nwavelength = 1.5e-6\nomega = 2 * np.pi * C_0 / wavelength\nk0 = 2 * np.pi / wavelength\n\ndL = 1.5e-8\ngrid_shape = Nx, Ny = 400, 200\neps_r = np.ones(grid_shape)\n\nsource_amp = 1e-8\nsource_loc_x = Nx//4\nnpml = [40, 0]", "_____no_output_____" ], [ "# set up the FDFD\nF = fdfd_hz(omega, dL, eps_r, npml)", "_____no_output_____" ], [ "# define the source as just a constant source along y at `x = source_loc_x`\nsource = np.zeros(grid_shape, dtype=complex)\nsource[source_loc_x, :] = source_amp\n\n# add a source directly behind to cancel left traveling wave (for TFSF effect)\nsource[source_loc_x-1, :] = source[source_loc_x, :] * np.exp(-1j * k0 * dL - 1j * np.pi);", "_____no_output_____" ], [ "# solve the FDFD simulation for the fields, plot Hz\nEx, Ey, Hz = F.solve(source)\nHz_max = np.max(np.abs(Hz)) \nplt.imshow(np.real(Hz.T), cmap='RdBu', vmin=-Hz_max, vmax=Hz_max)\nplt.show()", "_____no_output_____" ] ], [ [ "We see a plane wave in x direction with wavelength 100 grid cells = `wavelength / dl` (as expected)\n\n## Incidence Angle\nNow, lets try to add an angle of incidence to this plane wave", "_____no_output_____" ] ], [ [ "angle_deg = 45 # degrees from y = 0\nangle_rad = angle_deg * np.pi / 180\n\n# compute the wave vector\nkx = k0 * np.cos(angle_rad)\nky = k0 * np.sin(angle_rad)\nk_vector = [kx, ky]\n\n# get an array of the y positions across the simulation\nLy = Ny * dL\ny_vec = np.linspace(-Ly / 2, Ly / 2, Ny)", "_____no_output_____" ] ], [ [ "make a new source where `source[y] ~ exp(i * ky * y)` to simulate an angle", "_____no_output_____" ] ], [ [ "source_amp_y = np.exp(1j * ky * y_vec)\n\nsource_angle = np.zeros(grid_shape, dtype=complex)\nsource_angle[source_loc_x, :] = source_amp * source_amp_y\n\n# add another source panel directly behind to cancel the left-traveling wave\nsource_angle[source_loc_x-1, :] = source_angle[source_loc_x, :] * np.exp(-1j * kx * dL - 1j * np.pi);", "_____no_output_____" ] ], [ [ "Lets simulate our original FDTD with this new source and see what happens", "_____no_output_____" ] ], [ [ "Ex, Ey, Hz = F.solve(source_angle)\nHz_max = np.max(np.abs(Hz)) \nplt.imshow(np.real(Hz.T), cmap='RdBu', vmin=-Hz_max, vmax=Hz_max)\nplt.show()", "_____no_output_____" ] ], [ [ "As you can see, this gives bad results because we have periodic boundary conditions in y.\n\nWhen the plane wave extends across the y boundary, it should come out the other side with a phase difference to account.\n\nWe can accomplish this with **Bloch boundary conditions.**\n\nHere we specify that the simulation is periodic but with an extra phase applied for fields jumping extending across boundaries.", "_____no_output_____" ] ], [ [ "# add a bloch phase of ky * Ly across boundary to compensate for plane wave\nF_Bloch = fdfd_hz(omega, dL, eps_r, npml, bloch_phases=[0, ky * Ly])", "_____no_output_____" ], [ "# now fields look as expected\nEx, Ey, Hz = F_Bloch.solve(source_angle)\nHz_max = np.max(np.abs(Hz)) \nplt.imshow(np.real(Hz.T), cmap='RdBu', vmin=-Hz_max, vmax=Hz_max)\nplt.show()", 
"_____no_output_____" ] ], [ [ "Now everything works as expected.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
ece2e8ce11748634c3b07a0e09178adc1eab7d4a
11,054
ipynb
Jupyter Notebook
Data Structures/Trees/01 Create_a_binary_tree.ipynb
gmendozah/Data-Structures-and-Algorithms
07474db45acfe42855cc0f4cc968c0564b2cb91a
[ "MIT" ]
5
2021-10-08T11:21:08.000Z
2022-01-24T22:40:03.000Z
Data Structures/Trees/01 Create_a_binary_tree.ipynb
gmendozah/Data-Structures-and-Algorithms
07474db45acfe42855cc0f4cc968c0564b2cb91a
[ "MIT" ]
null
null
null
Data Structures/Trees/01 Create_a_binary_tree.ipynb
gmendozah/Data-Structures-and-Algorithms
07474db45acfe42855cc0f4cc968c0564b2cb91a
[ "MIT" ]
3
2021-12-13T06:50:58.000Z
2022-02-05T03:38:49.000Z
24.188184
283
0.49602
[ [ [ "# Create a binary tree", "_____no_output_____" ], [ "![tree image](images/tree_01.png \"Tree\")", "_____no_output_____" ], [ "## Task 01: build a node\n\n* on a piece of paper, draw a tree.\n* Define a node, what are the three things you'd expect in a node?\n* Define class called `Node`, and define a constructor that takes no arguments, and sets the three instance variables to `None`.\n* Note: coding from a blank cell (or blank piece of paper) is good practice for interviews!", "_____no_output_____" ] ], [ [ "## Define a node\nclass Node(object):\n def __init__(self):\n self.value = None\n self.left = None\n self.right = None", "_____no_output_____" ], [ "node0 = Node()\nprint(f\"\"\"\nvalue: {node0.value}\nleft: {node0.left}\nright: {node0.right}\n\"\"\")", "\nvalue: None\nleft: None\nright: None\n\n" ] ], [ [ "## Task 02: add a constructor that takes the value as a parameter\n\nCopy what you just made, and modify the constructor so that it takes in an optional value, which it assigns as the node's value. Otherwise, it sets the node's value to `None`.\n", "_____no_output_____" ] ], [ [ "## Your code here\n## Define a node\nclass Node(object):\n def __init__(self, value=None):\n self.value = value\n self.left = None\n self.right = None", "_____no_output_____" ], [ "## Check\n\nnode0 = Node()\nprint(f\"\"\"\nvalue: {node0.value}\nleft: {node0.left}\nright: {node0.right}\n\"\"\")\n\nnode0 = Node(\"apple\")\nprint(f\"\"\"\nvalue: {node0.value}\nleft: {node0.left}\nright: {node0.right}\n\"\"\")", "\nvalue: None\nleft: None\nright: None\n\n\nvalue: apple\nleft: None\nright: None\n\n" ] ], [ [ "## Task 03: add functions to set and get the value of the node\n\nAdd functions `get_value` and `set_value`", "_____no_output_____" ] ], [ [ "# add set_value and get_value functions\nclass Node(object):\n def __init__(self, value=None):\n self.value = value\n self.left = None\n self.right = None\n \n def getValue(self):\n return self.value\n \n def setValue(self, value):\n self.value = value", "_____no_output_____" ] ], [ [ "## Task 04: add functions that assign a left child, or right child\n\nDefine a function `set_left_child` and a function `set_right_child`. Each function takes in a node that it assigns as the left or right child, respectively. 
Note that we can assume that this will replace any existing node if it's already assigned as a left or right child.\n\nAlso, define `get_left_child` and `get_right_child` functions.", "_____no_output_____" ] ], [ [ "## your code here\nclass Node(object):\n def __init__(self, value=None):\n self.value = value\n self.left = None\n self.right = None\n \n def get_value(self):\n return self.value\n \n def set_value(self, value):\n self.value = value\n \n def get_left_child(self):\n return self.left\n \n def get_right_child(self):\n return self.right\n \n def set_left_child(self, node):\n self.left = node\n \n def set_right_child(self, node):\n self.right = node", "_____no_output_____" ], [ "## check\n\nnode0 = Node(\"apple\")\nnode1 = Node(\"banana\")\nnode2 = Node(\"orange\")\nnode0.set_left_child(node1)\nnode0.set_right_child(node2)\n\nprint(f\"\"\"\nnode 0: {node0.value}\nnode 0 left child: {node0.left.value}\nnode 0 right child: {node0.right.value}\n\"\"\")", "\nnode 0: apple\nnode 0 left child: banana\nnode 0 right child: orange\n\n" ] ], [ [ "## Task 05: check if left or right child exists\n\nDefine functions `has_left_child`, `has_right_child`, so that they return true if the node has a left child, or a right child respectively.", "_____no_output_____" ] ], [ [ "## your code here\nclass Node(object):\n def __init__(self, value=None):\n self.value = value\n self.left = None\n self.right = None\n \n def get_value(self):\n return self.value\n \n def set_value(self, value):\n self.value = value\n \n def get_left_child(self):\n return self.left\n \n def get_right_child(self):\n return self.right\n \n def set_left_child(self, node):\n self.left = node\n \n def set_right_child(self, node):\n self.right = node\n \n def has_left_child(self):\n return self.left is not None\n \n def has_right_child(self):\n return self.right is not None\n ", "_____no_output_____" ], [ "## check\n\nnode0 = Node(\"apple\")\nnode1 = Node(\"banana\")\nnode2 = Node(\"orange\")\n\nprint(f\"has left child? {node0.has_left_child()}\")\nprint(f\"has right child? {node0.has_right_child()}\")\n\nprint(\"adding left and right children\")\nnode0.set_left_child(node1)\nnode0.set_right_child(node2)\n\nprint(f\"has left child? {node0.has_left_child()}\")\nprint(f\"has right child? {node0.has_right_child()}\")", "has left child? False\nhas right child? False\nadding left and right children\nhas left child? True\nhas right child? True\n" ] ], [ [ "## Task 06: Create a binary tree\n\nCreate a class called `Tree` that has a \"root\" instance variable of type `Node`.\n\nAlso define a `get_root` method that returns the root node.", "_____no_output_____" ] ], [ [ "# define a Tree class here\nclass Tree(object):\n def __init__(self):\n self.root = None\n \n def get_root(self):\n return self.root", "_____no_output_____" ] ], [ [ "## Task 07: setting root node in constructor\n\nLet's modify the `Tree` constructor so that it takes an input that initializes the root node. Choose between one of two options: \n1) the constructor takes a `Node` object \n2) the constructor takes a value, then creates a new `Node` object using that value. 
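\n\nAs a quick sketch (assuming the two `Tree` variants defined in the next cells), the options would be used like this:\n\n```python\ntree = Tree(Node(\"apple\")) # option 1: the caller builds the Node\ntree = Tree(\"apple\") # option 2: the Tree builds the Node internally\n```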
\n\nWhich do you think is better?", "_____no_output_____" ] ], [ [ "# choose option 1 or 2 (you can try both), and explain why you made this choice\nclass Tree(object):\n def __init__(self, node):\n self.root = node\n \n def get_root(self):\n return self.root", "_____no_output_____" ], [ "class Tree(object):\n def __init__(self, value):\n self.root = Node(value)\n \n def get_root(self):\n return self.root", "_____no_output_____" ] ], [ [ "#### Discussion\nWrite your thoughts here:", "_____no_output_____" ], [ "## Next:\n\nBefore we learn how to insert values into a tree, we'll first want to learn how to traverse a tree. We'll practice tree traversal next!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ] ]
ece2f86df36f3a5a6e5e73b91335116eabbb217e
142,388
ipynb
Jupyter Notebook
CC5/Exos_10Mar2021/ex_classe/Seance5-ex_classe3-Exponential graph_KA.ipynb
ARKEnsae/DM_Analyse_Reseaux
15573d8e5277457f84763f3d7e69786a6c58e154
[ "MIT" ]
1
2022-01-23T15:57:16.000Z
2022-01-23T15:57:16.000Z
CC5/Exos_10Mar2021/ex_classe/Seance5-ex_classe3-Exponential graph_KA.ipynb
ARKEnsae/DM_Analyse_Reseaux
15573d8e5277457f84763f3d7e69786a6c58e154
[ "MIT" ]
null
null
null
CC5/Exos_10Mar2021/ex_classe/Seance5-ex_classe3-Exponential graph_KA.ipynb
ARKEnsae/DM_Analyse_Reseaux
15573d8e5277457f84763f3d7e69786a6c58e154
[ "MIT" ]
null
null
null
405.663818
41,044
0.943366
[ [ [ "import networkx as nx\nimport random\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "#parameters\nNini=5 #nombre de noeuds initiaux qui peuvent ou non รชtre connectรฉs. \nNmax=1000\nm=3 #j'ajoute un noeud ร  chaque itรฉration qui va crรฉer 3 liens avec les noeuds prรฉcรฉdents", "_____no_output_____" ], [ "#initialization : premier graphe complรจtement connectรฉ avec 5 noeuds\nG=nx.complete_graph(Nini)", "_____no_output_____" ], [ "G.number_of_nodes()", "_____no_output_____" ], [ "G.number_of_edges()", "_____no_output_____" ], [ "nx.draw(G)", "_____no_output_____" ], [ "#node attachment loop:\nfor t in range(Nini+1,Nmax):\n listNodes=G.nodes()\n selNodes=random.sample(G.nodes(),m)\n selEdges=[(t,i) for i in selNodes]\n G.add_edges_from(selEdges)\n if(t<8):\n nx.draw(G)\n plt.show()", "_____no_output_____" ], [ "#degree distribution:\nk=[G.degree(n) for n in G.nodes()]", "_____no_output_____" ] ], [ [ "Ci-dessous test Kim avant de passer ร  l'exponentielle.", "_____no_output_____" ] ], [ [ "def binning(degreeList,nbin):\n kmin=min(degreeList)\n kmax=max(degreeList)\n Bins = np.linspace(kmin, kmax,num=nbin)\n BinDensity, binedges = np.histogram(degreeList, bins=Bins, density=True)\n Bins = np.delete(Bins, -1)\n return BinDensity, Bins", "_____no_output_____" ], [ "y,x=binning(np.array(k),20)\nplt.scatter(x,y)\nplt.xlabel('k',size=15)\nplt.ylabel('P(k)',size=15)\nplt.show()", "_____no_output_____" ], [ "def logBinning(degreeList,nbin):\n kmin=min(degreeList)\n kmax=max(degreeList)\n logBins = np.logspace(np.log10(kmin), np.log10(kmax),num=nbin)\n logBinDensity, binedges = np.histogram(degreeList, bins=logBins, density=True)\n logBins = np.delete(logBins, -1)\n return logBinDensity, logBins", "_____no_output_____" ] ], [ [ "Ce n'est pas une loi de puissance, c'est une loi exponentielle. \nLร  je l'ai fait sur une seule expรฉrience mais le mieux est de le faire sur plusieurs expรฉriences. ", "_____no_output_____" ] ], [ [ "y,x=logBinning(np.array(k),20)\nplt.semilogy(x,y,'o',markersize=10)\nplt.xlabel('k',size=15)\nplt.ylabel('P(k)',size=15)\nplt.show()", "_____no_output_____" ] ], [ [ "# several replicas", "_____no_output_____" ] ], [ [ "#parameters\nNini=5\nNmax=1000\nm=3\nNREPL=50", "_____no_output_____" ] ], [ [ "Pour chaque replication, je fais un graphe initial et je fais tourner l'algorithme et j'ajoute le degrรฉ. ", "_____no_output_____" ] ], [ [ "k=[]\nfor r in range(NREPL):\n #initialization\n G=nx.complete_graph(Nini)\n #node attachment loop:\n for t in range(Nini+1,Nmax):\n listNodes=G.nodes()\n selNodes=random.sample(G.nodes(),m)\n selEdges=[(t,i) for i in selNodes]\n G.add_edges_from(selEdges)\n k=k+[G.degree(n) for n in G.nodes()]", "_____no_output_____" ] ], [ [ "On le voit encore mieux, on a une distribution exponentielle, ce n'est pas une loi de puissance (cf. slide 33). ", "_____no_output_____" ] ], [ [ "y,x=logBinning(np.array(k),20)\nplt.semilogy(x,y,'o',markersize=10)\nplt.xlabel('k',size=15)\nplt.ylabel('P(k)',size=15)\nplt.show()", "_____no_output_____" ] ], [ [ "# IT HAS AN EXPONENTIAL BEHAVIOR!!! IT IS NOT A POWER LAW", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
ece2fb58b679a3aff26ed53bc7a7fe45d12ea482
59,522
ipynb
Jupyter Notebook
analysis/Alise/Method_Channing.ipynb
data301-2020-winter2/course-project-group_1011
20081de73800fcfa1be06ee67e53f3a761568fac
[ "MIT" ]
null
null
null
analysis/Alise/Method_Channing.ipynb
data301-2020-winter2/course-project-group_1011
20081de73800fcfa1be06ee67e53f3a761568fac
[ "MIT" ]
1
2021-03-24T01:09:32.000Z
2021-03-26T22:51:34.000Z
analysis/Alise/Method_Channing.ipynb
data301-2020-winter2/course-project-group_1011
20081de73800fcfa1be06ee67e53f3a761568fac
[ "MIT" ]
null
null
null
37.458779
444
0.332533
[ [ [ "import pandas as pd\nimport numpy as np\n\nspotify00s = pd.read_csv('../../data/raw/spotify_datasheet/dataset-of-00s.csv')\n\nspotify00s.head()", "_____no_output_____" ], [ "spotify00s.rename(columns={\"track\":\"Track\", \"artist\":\"Artist\", \"danceability\":\"Danceability\", \"energy\":\"Energy\", \"key\":\"Key\", \"loudness\":\"Loudness\", \"mode\":\"Mode\", \"speechiness\":\"Speechiness\", \"acousticness\":\"Acousticness\", \"instrumentalness\":\"Instrumentalness\", \"liveness\":\"Liveness\", \"valence\":\"Valence\", \"tempo\":\"Tempo\", \"time_signature\":\"Time Signature\", \"target\":\"Target\"})", "_____no_output_____" ], [ "spotify00s.drop(axis =1, columns={'chorus_hit','sections','duration_ms','speechiness','acousticness','instrumentalness','uri','liveness'})", "_____no_output_____" ], [ "spotify00s.describe(include=[object]).T", "_____no_output_____" ], [ "spotify00s.describe(exclude=[object]).T", "_____no_output_____" ], [ "spotify00s.dropna()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
ece2fbd48e342ba5638481781589c785e2e3e3cd
182,431
ipynb
Jupyter Notebook
_build/jupyter_execute/13_price.ipynb
myamullaciencia/Bayesian-statistics
1439716e377884465316cf9512f8e3a9199e79d6
[ "Apache-2.0" ]
null
null
null
_build/jupyter_execute/13_price.ipynb
myamullaciencia/Bayesian-statistics
1439716e377884465316cf9512f8e3a9199e79d6
[ "Apache-2.0" ]
null
null
null
_build/jupyter_execute/13_price.ipynb
myamullaciencia/Bayesian-statistics
1439716e377884465316cf9512f8e3a9199e79d6
[ "Apache-2.0" ]
null
null
null
103.126625
27,072
0.860342
[ [ [ "# Chapter 13", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "## Review\n\n[In a previous notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/09_predict.ipynb) we used the time between goals to update our estimate of the goal-scoring rate of a soccer team.\n\nUnder the assumption that goal-scoring is well-modeled by a Poisson process, the time between goals follows an [exponential distribution](https://en.wikipedia.org/wiki/Exponential_distribution).\n\nIn other words, if the goal-scoring rate is ฮป, the probability of seeing an interval between goals of $t$ is proportional to the PDF of the exponential distribution:\n\n$f(t; ฮป) = ฮป~\\exp(-ฮป t)$\n\nBecause $t$ is a continuous quantity, the value of this expression is not really a probability; technically it is a [probability density](https://en.wikipedia.org/wiki/Probability_density_function). However, it is proportional to the probability of the data, so we can use it as a likelihood in a Bayesian update.\n\nIn this notebook, we'll use the PDF of a normal distribution the same way, in order to estimate the value of prizes on a game show.\nOnce we compute a posterior distribution, we'll use it to optimize a decision-making process.\n\nThis example demonstrates the real power of Bayesian methods, not just computing posterior distributions, but using them to make better decisions.", "_____no_output_____" ], [ "## The Price is Right problem\n\nOn November 1, 2007, contestants named Letia and Nathaniel appeared on *The Price is Right*, an American game show. They competed in a game called \"The Showcase\", where the objective is to guess the price of a collection of prizes. The contestant who comes closest to the actual price, without going over, wins the prizes.\n\nNathaniel went first. His showcase included a dishwasher, a wine cabinet, a laptop computer, and a car. He bid $26,000.\n\nLetiaโ€™s showcase included a pinball machine, a video arcade game, a pool table, and a cruise of the Bahamas. She bid $21,500.\n\nThe actual price of Nathanielโ€™s showcase was $25,347. His bid was too high, so he lost.\n\nThe actual price of Letiaโ€™s showcase was $21,578. \n\nShe was only off by $78, so she won her showcase and, because her bid was off by less than 250, she also won Nathanielโ€™s showcase.", "_____no_output_____" ], [ "For a Bayesian thinker, this scenario suggests several questions:\n\n1. Before seeing the prizes, what prior beliefs should the contestant have about the price of the showcase?\n\n2. After seeing the prizes, how should the contestant update those beliefs?\n\n3. Based on the posterior distribution, what should the contestant bid?\n\nThe third question demonstrates a common use of Bayesian methods: [decision analysis](https://en.wikipedia.org/wiki/Decision_analysis).\n\nThis problem is inspired by [this example](https://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter5_LossFunctions/Ch5_LossFunctions_PyMC3.ipynb) in Cameron Davidson-Pilonโ€™s book, [Probablistic Programming and Bayesian Methods for Hackers](http://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers).", "_____no_output_____" ], [ "## The prior\n\nTo choose a prior distribution of prices, we can take advantage of data from previous episodes. 
Fortunately, [fans of the show keep detailed records](https://web.archive.org/web/20121107204942/http://www.tpirsummaries.8m.com/). \n\nFor this example, I downloaded files containing the price of each showcase from the 2011 and 2012 seasons and the bids offered by the contestants.\n\nThe following cells load the data files.", "_____no_output_____" ] ], [ [ "# Load the data files\nimport os\n\nif not os.path.exists('showcases.2011.csv'):\n !wget https://github.com/AllenDowney/BiteSizeBayes/raw/master/showcases.2011.csv\n\nif not os.path.exists('showcases.2012.csv'):\n !wget http://github.com/AllenDowney/BiteSizeBayes/raw/master/showcases.2012.csv", "_____no_output_____" ] ], [ [ "The following function reads the data and cleans it up a little.", "_____no_output_____" ] ], [ [ "def read_data(filename):\n \"\"\"Read the showcase price data.\n \n filename: string\n \n returns: DataFrame\n \"\"\"\n df = pd.read_csv(filename, index_col=0, skiprows=[1])\n return df.dropna().transpose()", "_____no_output_____" ] ], [ [ "I'll read both files and concatenate them.", "_____no_output_____" ] ], [ [ "df2011 = read_data('showcases.2011.csv')\ndf2011.shape", "_____no_output_____" ], [ "df2012 = read_data('showcases.2012.csv')\ndf2012.shape", "_____no_output_____" ], [ "df = pd.concat([df2011, df2012], ignore_index=True)\ndf.shape", "_____no_output_____" ] ], [ [ "Here's what the dataset looks like:", "_____no_output_____" ] ], [ [ "df.head()", "_____no_output_____" ] ], [ [ "## Kernel density estimation\n\nThis dataset contains the prices for 313 previous showcases, which we can think of as a sample from the population of possible prices.\n\nWe can use this sample to estimate the prior distribution of showcase prices. One way to do that is kernel density estimation (KDE), which uses the sample to estimate a smooth distribution.\n\nSciPy provides `gaussian_kde`, which takes a sample and returns an object that represents the estimated distribution.", "_____no_output_____" ] ], [ [ "from scipy.stats import gaussian_kde\n\nkde = gaussian_kde(df['Showcase 1'])\nkde", "_____no_output_____" ] ], [ [ "We can use `kde` to evaluate the estimated distribution for a sequence of values:", "_____no_output_____" ] ], [ [ "xs = np.linspace(0, 80000, 81)\nps = kde(xs)", "_____no_output_____" ] ], [ [ "And put the results into a normalized Series that represents the prior distribution for Showcase 1.", "_____no_output_____" ] ], [ [ "def make_pmf(xs, ps, **options):\n \"\"\"Make a Series that represents a PMF.\n \n xs: sequence of values\n ps: sequence of probabilities\n options: keyword arguments passed to Series constructor\n \n returns: Pandas Series\n \"\"\"\n pmf = pd.Series(ps, index=xs, **options)\n return pmf", "_____no_output_____" ], [ "prior1 = make_pmf(xs, ps)\nprior1 /= prior1.sum()", "_____no_output_____" ] ], [ [ "Here's what it looks like:", "_____no_output_____" ] ], [ [ "prior1.plot(label='Prior 1')\n\nplt.xlabel('Showcase value ($)')\nplt.ylabel('Probability')\nplt.title('Prior distribution of showcase value')\nplt.legend();", "_____no_output_____" ] ], [ [ "The following function takes a sample, makes a KDE, evaluates it at a given sequence of `xs`, and returns the result as a normalized PMF.", "_____no_output_____" ] ], [ [ "def make_kde(xs, sample):\n \"\"\"Make a PMF based on KDE:\n \n xs: places where we should evaluate the KDE\n sample: sequence of values\n \n returns: Series that represents a normalized PMF\n \"\"\"\n kde = gaussian_kde(sample)\n ps = kde(xs)\n pmf = make_pmf(xs, ps)\n pmf /= 
pmf.sum()\n return pmf", "_____no_output_____" ] ], [ [ "**Exercise:** Use this function to make a Pmf that represents the prior distribution for Showcase 2, and plot it.", "_____no_output_____" ] ], [ [ "# Solution goes here", "_____no_output_____" ], [ "# Solution goes here", "_____no_output_____" ] ], [ [ "## Distribution of error\n\nTo update these priors, we have to answer these questions:\n\n* What data should we consider and how should we quantify it?\n\n* Can we compute a likelihood function; that is, for each hypothetical price, can we compute the conditional likelihood of the data?\n\nTo answer these questions, I will model the contestant as a price-guessing instrument with known error characteristics. In other words, when the contestant sees the prizes, they guess the price of each prize --- ideally without taking into consideration the fact that the prize is part of a showcase --- and add up the prices. Let's call this total `guess`.\n\nUnder this model, the question we have to answer is, \"If the actual price is `price`, what is the likelihood that the contestant's estimate would be `guess`?\"\n\nEquivalently, if we define `error = guess - price`, we can ask, \"What is the likelihood that the contestant's estimate is off by `error`?\"\n\nTo answer this question, I'll use the historical data again. For each showcase in the dataset, let's look at the difference between the contestant's bid and the actual price:", "_____no_output_____" ] ], [ [ "sample_diff1 = df['Bid 1'] - df['Showcase 1']\nsample_diff2 = df['Bid 2'] - df['Showcase 2']", "_____no_output_____" ] ], [ [ "To visualize the distribution of these differences, we can use KDE again.", "_____no_output_____" ] ], [ [ "xs = np.linspace(-40000, 20000, 61)\nkde_diff1 = make_kde(xs, sample_diff1)\nkde_diff2 = make_kde(xs, sample_diff2)", "_____no_output_____" ], [ "kde_diff1.plot(label='Diff 1')\nkde_diff2.plot(label='Diff 2')\n\nplt.xlabel('Difference in value ($)')\nplt.ylabel('Probability')\nplt.title('Difference between bid and actual value')\nplt.legend();", "_____no_output_____" ] ], [ [ "It looks like the bids are too low more often than too high, which makes sense. Remember that under the rules of the game, you lose if you overbid, so contestants probably underbid to some degree deliberately.\n\nHere are the mean and standard deviation of `Diff` for Player 1.", "_____no_output_____" ] ], [ [ "mean_diff1 = sample_diff1.mean()\nstd_diff1 = sample_diff1.std()\n\nmean_diff1, std_diff1", "_____no_output_____" ] ], [ [ "We can use the observed distribution of differences to model the contestant's distribution of errors.\n\nThis step is a little tricky because we don't actually know the contestant's guesses; we only know what they bid.\n\nSo we have to make some assumptions:\n\n* I'll assume that contestants underbid because they are being strategic, and that on average their guesses are accurate. In other words, the mean of their errors is 0.\n\n* But I'll assume that the spread of the differences reflects the actual spread of their errors. So, I'll use the standard deviation of the differences as the standard deviation of their errors.\n\nBased on these assumptions, I'll make a normal distribution with parameters 0 and `std_diff1`.", "_____no_output_____" ] ], [ [ "from scipy.stats import norm\n\nerror_dist1 = norm(0, std_diff1)\nerror_dist1", "_____no_output_____" ] ], [ [ "We'll use this distribution to do the update.
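\n\nAs a quick, optional check of this error model (not in the original notebook), one could overlay the assumed normal PDF on the KDE of the observed differences; the horizontal offset between the curves is the deliberate underbidding discussed above:\n\n```python\nxs_err = np.linspace(-40000, 20000, 61)\nplt.plot(xs_err, error_dist1.pdf(xs_err), label='assumed error model (mean 0)')\nkde_diff1.plot(label='observed differences (KDE)')\nplt.legend();\n```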
", "_____no_output_____", "## Update\n\nSuppose you are Player 1. You see the prizes in your showcase and your estimate of the total price is $23,000.\n\nFor each hypothetical price in the prior distribution, I'll subtract away your guess.\nThe result is your error under each hypothesis.", "_____no_output_____" ] ], [ [ "guess1 = 23000\n\nxs = prior1.index\nerror1 = guess1 - xs", "_____no_output_____" ] ], [ [ "Now suppose you know based on past performance that your estimation error is well modeled by `error_dist1`.\n\nUnder that assumption we can compute the likelihood of your estimate under each hypothesis.", "_____no_output_____" ] ], [ [ "likelihood1 = error_dist1.pdf(error1)", "_____no_output_____" ] ], [ [ "And we can use that likelihood to update the prior.", "_____no_output_____" ] ], [ [ "posterior1 = prior1 * likelihood1\nposterior1 /= posterior1.sum()", "_____no_output_____" ] ], [ [ "Here's what the posterior distribution looks like:", "_____no_output_____" ] ], [ [ "prior1.plot(color='gray', label='Prior 1')\nposterior1.plot(color='C0', label='Posterior 1')\n\nplt.xlabel('Showcase value ($)')\nplt.ylabel('Probability')\nplt.title('Prior and posterior distribution of showcase value')\nplt.legend();", "_____no_output_____" ] ], [ [ "Because your estimate is in the lower end of the range, the posterior distribution has shifted to the left. We can use the posterior mean to see by how much.", "_____no_output_____" ] ], [ [ "def pmf_mean(pmf):\n \"\"\"Compute the mean of a PMF.\n \n pmf: Series representing a PMF\n \n return: float\n \"\"\"\n return np.sum(pmf.index * pmf)", "_____no_output_____" ], [ "pmf_mean(prior1), pmf_mean(posterior1)", "_____no_output_____" ] ], [ [ "Before you saw the prizes, you expected to see a showcase with a value close to $30,000.\n\nAfter making an estimate of $23,000, you updated the prior distribution.\n\nBased on the combination of the prior and your estimate, you now expect the actual price to be about $26,000.", "_____no_output_____", "**Exercise:** Now suppose you are Player 2. When you see your showcase, you estimate that the total price is $38,000.\n\nUse `sample_diff2` to construct a normal distribution that represents the distribution of your estimation errors.\n\nCompute the likelihood of your estimate for each actual price and use it to update `prior2`.\n\nPlot the posterior distribution and compute the posterior mean. Based on your estimate, what do you expect the actual price of the showcase to be?", "_____no_output_____" ] ], [ [ "# Solution goes here", "_____no_output_____" ], [ "# Solution goes here", "_____no_output_____" ], [ "# Solution goes here", "_____no_output_____" ], [ "# Solution goes here", "_____no_output_____" ], [ "# Solution goes here", "_____no_output_____" ], [ "# Solution goes here
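\n\n# One possible solution (added so the Player 2 cells further down can run);\n# it mirrors the Player 1 steps above, with the guess of 38,000 taken from\n# the exercise statement:\nxs_prices = np.linspace(0, 80000, 81)\nprior2 = make_kde(xs_prices, df['Showcase 2'])\n\nguess2 = 38000\nerror_dist2 = norm(0, sample_diff2.std())\nlikelihood2 = error_dist2.pdf(guess2 - prior2.index)\n\nposterior2 = prior2 * likelihood2\nposterior2 /= posterior2.sum()\n\nposterior2.plot(color='C2', label='Posterior 2')\nplt.legend()\npmf_mean(posterior2)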
", "_____no_output_____" ] ], [ [ "## Probability of winning\n\nNow that we have a posterior distribution for each player, let's think about strategy.\n\nFirst, from the point of view of Player 1, let's compute the probability that Player 2 overbids. To keep it simple, I'll use only the performance of past players, ignoring the estimated value of the showcase. \n\nThe following function takes a sequence of past bids and returns the fraction that overbid.", "_____no_output_____" ] ], [ [ "def prob_overbid(sample_diff):\n \"\"\"Returns the probability this player overbids.\n\n sample_diff: sequence of differences\n \"\"\"\n return np.mean(sample_diff > 0)", "_____no_output_____" ] ], [ [ "Here's an estimate for the probability that Player 2 overbids.", "_____no_output_____" ] ], [ [ "prob_overbid(sample_diff2)", "_____no_output_____" ] ], [ [ "Now suppose Player 1 underbids by $5000.\nWhat is the probability that Player 2 underbids by more?\n\nThe following function uses past performance to estimate the probability that a player underbids by more than a given amount, `diff`:", "_____no_output_____" ] ], [ [ "def prob_worse_than(diff, sample_diff):\n \"\"\"Probability the opponent's diff is worse than the given diff.\n\n diff: how much the opponent is off by (always negative)\n sample_diff: sequence of differences for the opponent\n \"\"\"\n return np.mean(sample_diff < diff)", "_____no_output_____" ] ], [ [ "Here's the probability that Player 2 underbids by more than $5000.", "_____no_output_____" ] ], [ [ "prob_worse_than(-5000, sample_diff2)", "_____no_output_____" ] ], [ [ "And here's the probability they are off by more than $10,000.", "_____no_output_____" ] ], [ [ "prob_worse_than(-10000, sample_diff2)", "_____no_output_____" ] ], [ [ "We can combine these functions to compute the probability that Player 1 wins, given the difference between their bid and the actual price:", "_____no_output_____" ] ], [ [ "def compute_prob_win(diff, sample_diff):\n \"\"\"Computes the probability of winning for a given diff.\n\n diff: how much your bid was off by\n sample_diff: sequence of differences for the opponent\n \"\"\"\n # if you overbid you lose\n if diff > 0:\n return 0\n \n # if the opponent overbids, you win\n p1 = prob_overbid(sample_diff)\n \n # or if their bid is worse than yours, you win\n p2 = prob_worse_than(diff, sample_diff)\n return p1 + p2", "_____no_output_____" ] ], [ [ "Here's the probability that you win, given that you underbid by $5000.", "_____no_output_____" ] ], [ [ "compute_prob_win(-5000, sample_diff2)", "_____no_output_____" ] ], [ [ "Now let's look at the probability of winning for a range of possible differences.", "_____no_output_____" ] ], [ [ "xs = np.linspace(-30000, 5000, 121)\nys = [compute_prob_win(x, sample_diff2) for x in xs]\n\nplt.plot(xs, ys)\nplt.xlabel('Difference between guess and actual price ($)')\nplt.ylabel('Probability of winning')\nplt.title('Player 1');", "_____no_output_____" ] ], [ [ "If you underbid by $30,000, the chance of winning is about 30%, which is mostly the chance your opponent overbids.\n\nAs your bid gets closer to the actual price, your chance of winning approaches 1.\n\nAnd, of course, if you overbid, you lose (even if your opponent also overbids).", "_____no_output_____", "**Exercise:** Run the same analysis from the point of view of Player 2. Using the sample of differences from Player 1, compute:\n\n1. The probability that Player 1 overbids.\n\n2. The probability that Player 1 underbids by more than $5000.\n\n3. 
The probability that Player 2 wins, given that they underbid by $5000.\n\nThen plot the probability that Player 2 wins for a range of possible differences between their bid and the actual price.", "_____no_output_____" ] ], [ [ "prob_overbid(sample_diff1)", "_____no_output_____" ], [ "prob_worse_than(-5000, sample_diff1)", "_____no_output_____" ], [ "compute_prob_win(-5000, sample_diff1)", "_____no_output_____" ], [ "xs = np.linspace(-30000, 5000, 121)\nys = [compute_prob_win(x, sample_diff1) for x in xs]\n\nplt.plot(xs, ys)\nplt.xlabel('Difference between guess and actual price ($)')\nplt.ylabel('Probability of winning')\nplt.title('Player 2');", "_____no_output_____" ] ], [ [ "## Decision analysis\n\nIn the previous section we computed the probability of winning, given that we have underbid by a particular amount.\n\nIn reality the contestants don't know how much they have underbid by, because they don't know the actual price.\n\nBut they do have a posterior distribution that represents their beliefs about the actual price, and they can use that to estimate their probability of winning with a given bid.\n\nThe following function takes a possible bid, a posterior distribution of actual prices, and a sample of differences for the opponent.\n\nIt loops through the hypothetical prices in the posterior distribution and for each price:\n\n1. Computes the difference between the bid and the hypothetical price.\n\n2. Computes the probability that the player wins, given that difference.\n\n3. Adds up the weighted sum of the probabilities, where the weights are the probabilities in the posterior distribution. ", "_____no_output_____" ] ], [ [ "def total_prob_win(bid, posterior, sample_diff):\n \"\"\"Computes the total probability of winning with a given bid.\n\n bid: your bid\n posterior: Pmf of showcase value\n sample_diff: sequence of differences for the opponent\n \n returns: probability of winning\n \"\"\"\n total = 0\n for price, prob in posterior.items():\n diff = bid - price\n total += prob * compute_prob_win(diff, sample_diff)\n return total", "_____no_output_____" ] ], [ [ "This loop implements the law of total probability:\n\n$P(win) = \sum_{price} P(price) ~ P(win ~|~ price)$
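\n\nAs a tiny worked example with made-up numbers: if the posterior put probability 0.6 on a price where the win probability is 0.9, and 0.4 on a price where it is 0.2, the total would be $0.6 \times 0.9 + 0.4 \times 0.2 = 0.62$.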
", "_____no_output_____" ] ], [ [ "total_prob_win(25000, posterior1, sample_diff2)", "_____no_output_____" ], [ "bids = posterior1.index\n\nprobs = [total_prob_win(bid, posterior1, sample_diff2) \n for bid in bids]\n\nprob_win_series = pd.Series(probs, index=bids)", "_____no_output_____" ], [ "prob_win_series.plot(color='C1')\n\nplt.xlabel('Bid ($)')\nplt.ylabel('Probability of winning')\nplt.title('Player 1');", "_____no_output_____" ] ], [ [ "And here's the bid that maximizes your chance of winning.", "_____no_output_____" ] ], [ [ "prob_win_series.idxmax()", "_____no_output_____" ] ], [ [ "Recall that your estimate was $23,000.\n\nAfter using your estimate to compute the posterior distribution, the posterior mean is about $26,000.\n\nBut the bid that maximizes your chance of winning is $21,000.", "_____no_output_____", "**Exercise:** Do the same analysis for Player 2.", "_____no_output_____" ] ], [ [ "# Solution goes here", "_____no_output_____" ], [ "# Solution goes here", "_____no_output_____" ], [ "# Solution goes here", "_____no_output_____" ] ], [ [ "## Maximizing expected gain\n\nIn the previous section we computed the bid that maximizes your chance of winning.\nAnd if that's your goal, the bid we computed is optimal.\n\nBut winning isn't everything.\nRemember that if your bid is off by $250 or less, you win both showcases.\nSo it might be a good idea to increase your bid a little: it increases the chance you overbid and lose, but it also increases the chance of winning both showcases.\n\nLet's see how that works out.\nThe following function computes how much you will win, on average, given your bid, the actual price, and a sample of errors for your opponent.", "_____no_output_____" ] ], [ [ "def compute_gain(bid, price, sample_diff):\n \"\"\"Computes expected gain given a bid and actual price.\n\n bid: number\n price: actual price\n sample_diff: sequence of differences for the opponent\n \"\"\"\n diff = bid - price\n prob = compute_prob_win(diff, sample_diff)\n\n # if you are within 250 dollars, you win both showcases\n if -250 <= diff <= 0:\n return 2 * price * prob\n else:\n return price * prob", "_____no_output_____" ] ], [ [ "For example, if the actual price is \$35,000 and you bid \$30,000, you will win about \$23,600 worth of prizes on average.", "_____no_output_____" ] ], [ [ "compute_gain(30000, 35000, sample_diff2)", "_____no_output_____" ] ], [ [ "In reality we don't know the actual price, but we have a posterior distribution that represents what we know about it.\nBy averaging over the prices and probabilities in the posterior distribution, we can compute the \"expected gain\" for a particular bid.", "_____no_output_____" ] ], [ [ "def expected_gain(bid, posterior, sample_diff):\n \"\"\"Computes the expected return of a given bid.\n\n bid: your bid\n posterior: distribution of showcase values\n sample_diff: distribution of differences for the opponent\n \"\"\"\n total = 0\n for price, prob in posterior.items():\n total += prob * compute_gain(bid, price, sample_diff)\n return total", "_____no_output_____" ] ], [ [ "For the posterior we computed earlier, based on an estimate of \$23,000, the expected gain for a bid of \$21,000 is about \$16,900.", "_____no_output_____" ] ], [ [ "expected_gain(21000, posterior1, sample_diff2)", "_____no_output_____" ] ], [ [ "But can we do any better? 
\n\nTo find out, we can loop through a range of bids and find the one that maximizes expected gain.", "_____no_output_____" ] ], [ [ "bids = posterior1.index\n\ngains = [expected_gain(bid, posterior1, sample_diff2) for bid in bids]\n\nexpected_gain_series = pd.Series(gains, index=bids)", "_____no_output_____" ] ], [ [ "Here are the results.", "_____no_output_____" ] ], [ [ "expected_gain_series.plot(color='C1')\n\nplt.xlabel('Bid ($)')\nplt.ylabel('Expected gain ($)')\nplt.title('Player 1');", "_____no_output_____" ] ], [ [ "And here is the optimal bid.", "_____no_output_____" ] ], [ [ "expected_gain_series.idxmax()", "_____no_output_____" ] ], [ [ "With that bid, the expected gain is about $17,400.", "_____no_output_____" ] ], [ [ "expected_gain_series.max()", "_____no_output_____" ] ], [ [ "Recall that the estimated value of the prizes was $23,000.\n\nThe bid that maximizes the chance of winning is $21,000.\n\nAnd the bid that maximizes your expected gain is $22,000.", "_____no_output_____", "**Exercise:** Do the same analysis for Player 2.", "_____no_output_____" ] ], [ [ "bids = posterior2.index\n\ngains = [expected_gain(bid, posterior2, sample_diff1) for bid in bids]\n\nexpected_gain_series = pd.Series(gains, index=bids)", "_____no_output_____" ] ], [ [ "Here are the results.", "_____no_output_____" ] ], [ [ "expected_gain_series.plot(color='C2')\n\nplt.xlabel('Bid ($)')\nplt.ylabel('Expected gain ($)')\nplt.title('Player 2');", "_____no_output_____" ] ], [ [ "And here is the optimal bid.", "_____no_output_____" ] ], [ [ "expected_gain_series.idxmax()", "_____no_output_____" ] ], [ [ "## Review\n\nIn this notebook we used data from past episodes of *The Price is Right* to form prior distributions of showcase values, updated them with the contestants' estimates, and used the posterior distributions to choose bids that maximize the probability of winning and the expected gain.\n\n[In the next notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/13_xxx.ipynb)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
ece316a74ae1e63b5d1f8a6cf23d06fe3244dd6c
556
ipynb
Jupyter Notebook
notebooks/11.sampling_importante_mcmc.ipynb
rsilveira79/fermenting_gradients_2
693fa4c6a119265f797b8a56122857f9f98510da
[ "MIT" ]
null
null
null
notebooks/11.sampling_importante_mcmc.ipynb
rsilveira79/fermenting_gradients_2
693fa4c6a119265f797b8a56122857f9f98510da
[ "MIT" ]
null
null
null
notebooks/11.sampling_importante_mcmc.ipynb
rsilveira79/fermenting_gradients_2
693fa4c6a119265f797b8a56122857f9f98510da
[ "MIT" ]
null
null
null
16.848485
34
0.528777
[]
[]
[]
ece318364f1b225ca586c134d38771c32fcf54e5
117,919
ipynb
Jupyter Notebook
notebooks/ACS_Maryland_County_Skeleton.ipynb
fedderw/maryland-child-allowance
5879472210d15f78449462b5d8e575b66db61aa4
[ "MIT" ]
null
null
null
notebooks/ACS_Maryland_County_Skeleton.ipynb
fedderw/maryland-child-allowance
5879472210d15f78449462b5d8e575b66db61aa4
[ "MIT" ]
1
2022-03-10T03:57:43.000Z
2022-03-10T03:57:43.000Z
notebooks/ACS_Maryland_County_Skeleton.ipynb
fedderw/maryland-child-allowance
5879472210d15f78449462b5d8e575b66db61aa4
[ "MIT" ]
1
2022-02-23T00:38:52.000Z
2022-02-23T00:38:52.000Z
47.50967
509
0.28484
[ [ [ "Runing the following to install microdf in colab:", "_____no_output_____" ] ], [ [ "# Install microdf\n# !pip install git+https://github.com/PSLmodels/microdf.git", "Collecting git+https://github.com/PSLmodels/microdf.git\n Cloning https://github.com/PSLmodels/microdf.git to /tmp/pip-req-build-_zt74wzk\n Running command git clone -q https://github.com/PSLmodels/microdf.git /tmp/pip-req-build-_zt74wzk\nRequirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from microdf==0.3.0) (3.2.2)\nRequirement already satisfied: matplotlib-label-lines in /usr/local/lib/python3.7/dist-packages (from microdf==0.3.0) (0.5.0)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from microdf==0.3.0) (1.21.5)\nRequirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from microdf==0.3.0) (1.3.5)\nRequirement already satisfied: seaborn in /usr/local/lib/python3.7/dist-packages (from microdf==0.3.0) (0.11.2)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->microdf==0.3.0) (3.0.7)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->microdf==0.3.0) (1.3.2)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->microdf==0.3.0) (0.11.0)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->microdf==0.3.0) (2.8.2)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib->microdf==0.3.0) (1.15.0)\nRequirement already satisfied: more-itertools in /usr/local/lib/python3.7/dist-packages (from matplotlib-label-lines->microdf==0.3.0) (8.12.0)\nRequirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/dist-packages (from pandas->microdf==0.3.0) (2018.9)\nRequirement already satisfied: scipy>=1.0 in /usr/local/lib/python3.7/dist-packages (from seaborn->microdf==0.3.0) (1.4.1)\n" ], [ "import pandas as pd\nimport numpy as np\nimport microdf as mdf\nimport plotly.express as px", "_____no_output_____" ], [ "person = pd.read_stata(\n \"https://www2.census.gov/programs-surveys/supplemental-poverty-measure/datasets/spm/spm_2019_pu.dta\",\n columns=[\n \"serialno\",\n \"sporder\",\n \"wt\",\n \"age\",\n \"spm_id\",\n \"spm_povthreshold\",\n \"spm_resources\",\n \"st\",\n \"puma\"\n ],\n)", "_____no_output_____" ], [ "# Cleanup\nperson.columns = person.columns.str.lower()\nperson = person.rename(columns={'serialno': 'serial', 'sporder':'pernum'})", "_____no_output_____" ], [ "person = person.astype({\"serial\":'int', \"pernum\":'int',\n \"wt\":'int', \"age\":'int',\n \"spm_id\":'int', \"spm_povthreshold\":'int',\n \"spm_resources\": \"int\"}) ", "_____no_output_____" ], [ "# Sort to just Maryland\nperson = person[person['st'] == 24]", "_____no_output_____" ], [ "# assign random district\nperson['district'] = np.random.randint(1, 48, person.shape[0])", "_____no_output_____" ], [ "# Assign random county\nperson['county'] = np.random.randint(1, 25, person.shape[0])", "_____no_output_____" ], [ "# Replace NIUs\nperson = person.replace(9999999,0)", "_____no_output_____" ], [ "# Define age groups\nperson['child'] = person.age < 18\nperson['young_child'] = person.age < 5\nperson['baby'] = person.age == 0", "_____no_output_____" ], [ "# Use groupby to calculate total babies, young children, and children in each spm unit\nspmu = 
", "_____no_output_____" ], [ "# build the reform x county grid to evaluate\ncounties = person.county.unique().tolist()\nsummary = mdf.cartesian_product({\n 'reform':['All Children', 'Young Children', 'Babies'],\n 'countyfip': ['Maryland'] + counties})", "_____no_output_____" ], [ "def pov_row(row):\n return pov(row.reform, row.countyfip)", "_____no_output_____" ], [ "summary[['total_pov_change',\n 'child_pov_change',\n 'young_child_pov_change',\n 'baby_pov_change',\n 'population',\n 'child_population',\n 'young_child_population',\n 'baby_population',\n 'total_pov_rate',\n 
'child_pov_rate',\n 'young_child_pov_rate',\n 'baby_pov_rate',\n 'new_total_pov_rate',\n 'new_child_pov_rate',\n 'new_young_child_pov_rate',\n 'new_baby_pov_rate',]] = summary.apply(pov_row, axis=1)", "_____no_output_____" ], [ "pd.set_option(\"display.max_rows\", None, \"display.max_columns\", None)", "_____no_output_____" ], [ "summary.to_csv('skeleton_county_data.csv')", "_____no_output_____" ], [ "summary", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ece324a2a052cd674094f2fa1fc19fe970492e72
44,595
ipynb
Jupyter Notebook
examples/ITK_UnitTestExample6_PointSetTransformation.ipynb
N-Dekker/ITKElastix
667d08b0621125bdc6b09913f6544badec3045bf
[ "Apache-2.0" ]
null
null
null
examples/ITK_UnitTestExample6_PointSetTransformation.ipynb
N-Dekker/ITKElastix
667d08b0621125bdc6b09913f6544badec3045bf
[ "Apache-2.0" ]
null
null
null
examples/ITK_UnitTestExample6_PointSetTransformation.ipynb
N-Dekker/ITKElastix
667d08b0621125bdc6b09913f6544badec3045bf
[ "Apache-2.0" ]
null
null
null
220.767327
38,712
0.920888
[ [ [ "# Elastix\n\nThis notebooks show very basic image registration examples with on-the-fly generated binary images.", "_____no_output_____" ] ], [ [ "from itk import itkElastixRegistrationMethodPython\nfrom itk import itkTransformixFilterPython\nimport itk\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "## Image generators", "_____no_output_____" ] ], [ [ "def image_generator(x1, x2, y1, y2, upsampled=False, bspline=False,\n mask=False, artefact=False):\n if upsampled:\n image = np.zeros([1000, 1000], np.float32)\n elif mask:\n image = np.zeros([100, 100], np.uint8)\n else:\n image = np.zeros([100, 100], np.float32)\n for x in range(x1, x2):\n for y in range(y1, y2):\n if bspline:\n y += x\n if x > 99 or y > 99:\n pass\n else:\n image[x, y] = 1\n else:\n image[x, y] = 1\n if artefact:\n image[:, -10:] = 1\n image = itk.image_view_from_array(image)\n return image", "_____no_output_____" ] ], [ [ "## Point set transformation test\nSee example 10 for more explanation", "_____no_output_____" ] ], [ [ "def point_set_from_txt(file_path):\n image = np.zeros([100, 100], np.float32)\n with open(file_path, \"rt\") as myfile: # Open lorem.txt for reading text data.\n for myline in myfile: # For each line, stored as myline,\n string = myline.partition('OutputIndexMoving =')[2]\n string=string.strip()\n string = string.strip('[]')\n string=string.strip()\n y,x = string.split()\n image[int(x),int(y)] = 1\n return image\n\ndef point_set_from_input(x,y):\n image = np.zeros([100, 100], np.float32)\n for i in range(len(x)):\n image[x[i],y[i]]=1\n return image", "_____no_output_____" ], [ "# Create rigid transformed test images with artefact\nfixed_image = image_generator(25, 76, 25, 76)\nmoving_image = image_generator(1, 52, 10, 61)\n\n# Create fixed point set\nfixed_point_set = open(\"data/fixed_point_set_test.txt\", \"w+\")\nfixed_point_set.write(\"point\\n5\\n\")\nfixed_point_set.write(\"25 25\\n\")\nfixed_point_set.write(\"25 75\\n\")\nfixed_point_set.write(\"75 75\\n\")\nfixed_point_set.write(\"75 25\\n\")\nfixed_point_set.write(\"50 50\")\nfixed_point_set.close()\n\n\n# Import Default Parameter Map\nparameter_object = itk.ParameterObject.New()\ndefault_rigid_parameter_map = parameter_object.GetDefaultParameterMap('rigid')\nparameter_object.AddParameterMap(default_rigid_parameter_map)\n\n# # Call registration function\nresult_image, result_transform_parameters = itk.elastix_registration_method(\n fixed_image, moving_image,\n parameter_object=parameter_object)\n\n# Call transformix\nempty_image = itk.transformix_filter(\n moving_image=moving_image,\n fixed_point_set_file_name='data/fixed_point_set_test.txt',\n transform_parameter_object=result_transform_parameters,\n output_directory='exampleoutput')\n\nresult_point_set = point_set_from_txt('exampleoutput/outputpoints.txt')\nfixed_point_set = point_set_from_input([25,25,75,75,50],[25,75,75,25,50])", "_____no_output_____" ] ], [ [ "### Point set transformation test Visualization", "_____no_output_____" ] ], [ [ "%matplotlib inline\n\n# Plot images\nfig, axs = plt.subplots(2,2, sharey=True, figsize=[30,30])\nplt.figsize=[100,100]\naxs[0,0].imshow(fixed_image)\naxs[0,0].set_title('Fixed', fontsize=30)\naxs[0,1].imshow(fixed_point_set+fixed_image)\naxs[0,1].set_title('Fixed Point Set', fontsize=30)\naxs[1,0].imshow(moving_image)\naxs[1,0].set_title('Moving', fontsize=30)\naxs[1,1].imshow(result_point_set+moving_image)\naxs[1,1].set_title('Result Point Set', fontsize=30)\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
ece32657060f28f4628ff1cbeb9fad7e348629fd
49,470
ipynb
Jupyter Notebook
lessons/landlab/landlab-fault-scarp-for-espin.ipynb
huxiaoni/espin
78791e1c91a9ec7a0f059652e8b7c9df6aba9617
[ "MIT" ]
null
null
null
lessons/landlab/landlab-fault-scarp-for-espin.ipynb
huxiaoni/espin
78791e1c91a9ec7a0f059652e8b7c9df6aba9617
[ "MIT" ]
null
null
null
lessons/landlab/landlab-fault-scarp-for-espin.ipynb
huxiaoni/espin
78791e1c91a9ec7a0f059652e8b7c9df6aba9617
[ "MIT" ]
null
null
null
37.279578
686
0.620174
[ [ [ "<a href=\"http://landlab.github.io\"><img style=\"float: left\" src=\"../../media/landlab_header.png\"></a>", "_____no_output_____" ], [ "# Introduction to Landlab: Creating a simple 2D scarp diffusion model", "_____no_output_____" ], [ "<hr>\n<small>For more Landlab tutorials, click here: <a href=\"https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html\">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>\n<hr>\n", "_____no_output_____" ], [ "NOTE: this tutorial is based on the generic Landlab Fault Scarp Tutorial, but it also adds a set of exercises for use in ESPIn.\n\nThis tutorial illustrates how you can use Landlab to construct a simple two-dimensional numerical model on a regular (raster) grid, using a simple forward-time, centered-space numerical scheme. The example is the erosional degradation of an earthquake fault scarp, and which evolves over time in response to the gradual downhill motion of soil. Here we use a simple \"geomorphic diffusion\" model for landform evolution, in which the downhill flow of soil is assumed to be proportional to the (downhill) gradient of the land surface multiplied by a transport coefficient.\n\nWe start by importing the [numpy](https://numpy.org) and [matplotlib](https://matplotlib.org) libraries:", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ] ], [ [ "## Part 1: 1D version using numpy\n\nThis example uses a finite-volume numerical solution to the 2D diffusion equation. The 2D diffusion equation in this case is derived as follows. Continuity of mass states that:\n\n$\\frac{\\partial z}{\\partial t} = -\\nabla \\cdot \\mathbf{q}_s$,\n\nwhere $z$ is elevation, $t$ is time, the vector $\\mathbf{q}_s$ is the volumetric soil transport rate per unit width, and $\\nabla$ is the divergence operator (here in two dimensions). (Note that we have omitted a porosity factor here; its effect will be subsumed in the transport coefficient). The sediment flux vector depends on the slope gradient:\n\n$\\mathbf{q}_s = -D \\nabla z$,\n\nwhere $D$ is a transport-rate coefficient---sometimes called *hillslope diffusivity*---with dimensions of length squared per time. Combining the two, and assuming $D$ is uniform, we have a classical 2D diffusion equation:\n\n$\\frac{\\partial z}{\\partial t} = -\\nabla^2 z$.\n\nIn this first example, we will create a our 1D domain in $x$ and $z$, and set a value for $D$.\n\nThis means that the equation we solve will be in 1D. \n\n$\\frac{d z}{d t} = \\frac{d q_s}{dx}$,\n\nwhere \n\n$q_s = -D \\frac{d z}{dx}$\n", "_____no_output_____" ] ], [ [ "dx = 1\nx = np.arange(0, 100, dx, dtype=float)\nz = np.zeros(x.shape, dtype=float)\nD = 0.01", "_____no_output_____" ] ], [ [ "Next we must create our fault by uplifting some of the domain. We will increment all elements of `z` in which `x>50`.", "_____no_output_____" ] ], [ [ "z[x>50] += 100", "_____no_output_____" ] ], [ [ "Finally, we will diffuse our fault for 1,000 years.\n\nWe will use a timestep with a [Courantโ€“Friedrichsโ€“Lewy condition](https://en.wikipedia.org/wiki/Courantโ€“Friedrichsโ€“Lewy_condition) of $C_{cfl}=0.2$. This will keep our solution numerically stable. 
\n\n$C_{cfl} = \\frac{\\Delta t D}{\\Delta x^2} = 0.2$", "_____no_output_____" ] ], [ [ "dt = 0.2 * dx * dx / D\ntotal_time = 1e3\nnts = int(total_time/dt)\nz_orig = z.copy()\nfor i in range(nts):\n qs = -D * np.diff(z)/dx\n dzdt = -np.diff(qs)/dx\n z[1:-1] += dzdt*dt\n\nplt.plot(x, z_orig, label=\"Original Profile\")\nplt.plot(x, z, label=\"Diffused Profile\")\nplt.legend()", "_____no_output_____" ] ], [ [ "The prior example is pretty simple. If this was all you needed to do, you wouldn't need Landlab. \n\nBut what if you wanted...\n\n... to use the same diffusion model in 2D instead of 1D.\n\n... to use an irregular grid (in 1 or 2D). \n\n... wanted to combine the diffusion model with a more complex model. \n\n... have a more complex model you want to use over and over again with different boundary conditions.\n\nThese are the sorts of problems that Landlab was designed to solve. \n\nIn the next two sections we will introduce some of the core capabilities of Landlab. \n\nIn Part 2 we will use the RasterModelGrid, fields, and a numerical utility for calculating flux divergence. \n\nIn Part 3 we will use the HexagonalModelGrid. \n\nIn Part 4 we will use the LinearDiffuser component. \n\n## Part 2: 2D version using Landlab's Model Grids\n\nThe Landlab model grids are data structures that represent the model domain (the variable `x` in our prior example). Here we will use `RasterModelGrid` which creates a grid with regularly spaced square grid elements. The RasterModelGrid knows how the elements are connected and how far apart they are.\n\nLets start by creating a RasterModelGrid class. First we need to import it. ", "_____no_output_____" ] ], [ [ "from landlab import RasterModelGrid", "_____no_output_____" ] ], [ [ "\n### (a) Explore the RasterModelGrid\n\nBefore we make a RasterModelGrid for our fault example, lets explore the Landlab model grid. \n\nLandlab considers the grid as a \"dual\" graph. Two sets of points, lines and polygons that represent 2D space. \n\nThe first graph considers points called \"nodes\" that are connected by lines called \"links\". The area that surrounds each node is called a \"cell\".\n\nFirst, the nodes", "_____no_output_____" ] ], [ [ "from landlab.plot.graph import plot_graph\ngrid = RasterModelGrid((4, 5), xy_spacing=(3,4))\nplot_graph(grid, at=\"node\")", "_____no_output_____" ] ], [ [ "You can see that the nodes are points and they are numbered with unique IDs from lower left to upper right. \n\nNext the links", "_____no_output_____" ] ], [ [ "plot_graph(grid, at=\"link\")", "_____no_output_____" ] ], [ [ "which are lines that connect the nodes and each have a unique ID number. \n\nAnd finally, the cells", "_____no_output_____" ] ], [ [ "plot_graph(grid, at=\"cell\")", "_____no_output_____" ] ], [ [ "which are polygons centered around the nodes. \n\nLandlab is a \"dual\" graph because it also keeps track of a second set of points, lines, and polygons (\"corners\", \"faces\", and \"patches\"). We will not focus on them further.", "_____no_output_____" ], [ "### *Exercises for section 2a*\n\n(2a.1) Create an instance of a `RasterModelGrid` with 5 rows and 7 columns, with a spacing between nodes of 10 units. 
Plot the node layout, and identify the ID number of the center-most node.", "_____no_output_____" ] ], [ [ "# (enter your solution to 2a.1 here)\n\nrmg = RasterModelGrid((5, 7), 10.0)\nplot_graph(rmg, at='node')", "_____no_output_____" ] ], [ [ "(2a.2) Find the ID of the cell that contains this node.", "_____no_output_____" ] ], [ [ "# (enter your solution to 2a.2 here)\nplot_graph(rmg, at='cell')", "_____no_output_____" ] ], [ [ "(2a.3) Find the ID of the horizontal link that connects to the last node on the right in the middle column.", "_____no_output_____" ] ], [ [ "# (enter your solution to 2a.3 here)\nplot_graph(rmg, at='link')", "_____no_output_____" ] ], [ [ "### (b) Use the RasterModelGrid for 2D diffusion \n\nLet's continue by making a new grid that is bigger. We will use this for our next fault diffusion example.\n\nThe syntax in the next line says: create a new *RasterModelGrid* object called **mg**, with 25 rows, 40 columns, and a grid spacing of 10 m.", "_____no_output_____" ] ], [ [ "mg = RasterModelGrid((25, 40), 10.0)", "_____no_output_____" ] ], [ [ "Note the use of object-oriented programming here. `RasterModelGrid` is a class; `mg` is a particular instance of that class, and it contains all the data necessary to fully describe the topology and geometry of this particular grid.\n\nNext we'll add a *data field* to the grid, to represent the elevation values at grid nodes. The \"dot\" syntax below indicates that we are calling a function (or *method*) that belongs to the *RasterModelGrid* class, and will act on data contained in **mg**. The arguments indicate that we want the data elements attached to grid nodes (rather than links, for example), and that we want to name this data field `topographic__elevation`. The `add_zeros` method returns the newly created NumPy array.", "_____no_output_____" ] ], [ [ "z = mg.add_zeros('topographic__elevation', at='node')", "_____no_output_____" ] ], [ [ "The above line of code creates space in memory to store 1,000 floating-point values, which will represent the elevation of the land surface at each of our 1,000 grid nodes.", "_____no_output_____" ], [ "Let's plot the positions of all the grid nodes. The nodes' *(x,y)* positions are stored in the arrays `mg.x_of_node` and `mg.y_of_node`, respectively.", "_____no_output_____" ] ], [ [ "plt.plot(mg.x_of_node, mg.y_of_node, '.')", "_____no_output_____" ] ], [ [ "If we bothered to count, we'd see that there are indeed 1,000 grid nodes, and a corresponding number of `z` values:", "_____no_output_____" ] ], [ [ "len(z)", "_____no_output_____" ] ], [ [ "Now for some tectonics. Let's say there's a fault trace that angles roughly east-northeast. We can describe the trace with the equation for a line. One trick here: by using `mg.x_of_node`, in the line of code below, we are calculating a *y* (i.e., north-south) position of the fault trace for each grid node---meaning that this is the *y* coordinate of the trace at the *x* coordinate of a given node.", "_____no_output_____" ] ], [ [ "fault_trace_y = 50.0 + 0.25 * mg.x_of_node", "_____no_output_____" ] ], [ [ "Here comes the earthquake. 
For all the nodes north of the fault (i.e., those with a *y* coordinate greater than the corresponding *y* coordinate of the fault trace), we'll add elevation equal to 10 meters plus a centimeter for every meter east along the grid (just to make it interesting):", "_____no_output_____" ] ], [ [ "z[mg.y_of_node >\n  fault_trace_y] += 10.0 + 0.01 * mg.x_of_node[mg.y_of_node > fault_trace_y]", "_____no_output_____" ] ], [ [ "(A little bit of Python under the hood: the statement `mg.y_of_node > fault_trace_y` creates a 1000-element long boolean array; placing this within the index brackets will select only those array entries that correspond to `True` in the boolean array)\n\nLet's look at our newly created initial topography using Landlab's *imshow_grid* plotting function (which we first need to import).", "_____no_output_____" ] ], [ [ "from landlab.plot.imshow import imshow_grid\nimshow_grid(mg, 'topographic__elevation')", "_____no_output_____" ] ], [ [ "To finish getting set up, we will define two parameters: the transport (\"diffusivity\") coefficient, `D`, and the time-step size, `dt`. (The latter is set using the Courant condition for a forward-time, centered-space finite-difference solution; you can find the explanation in most textbooks on numerical methods).", "_____no_output_____" ] ], [ [ "D = 0.01  # m2/yr transport coefficient\ndt = 0.2 * mg.dx * mg.dx / D\ndt", "_____no_output_____" ] ], [ [ "Boundary conditions: for this example, we'll assume that the east and west sides are closed to flow of sediment, but that the north and south sides are open. (The order of the function arguments is east, north, west, south)", "_____no_output_____" ] ], [ [ "mg.set_closed_boundaries_at_grid_edges(True, False, True, False)", "_____no_output_____" ] ], [ [ "*A note on boundaries:* with a Landlab raster grid, all the perimeter nodes are boundary nodes. In this example, there are 24 + 24 + 39 + 39 = 126 boundary nodes. The previous line of code set those on the east and west edges to be **closed boundaries**, while those on the north and south are **open boundaries** (the default). All the remaining nodes are known as **core** nodes. In this example, there are 1000 - 126 = 874 core nodes:", "_____no_output_____" ] ], [ [ "len(mg.core_nodes)", "_____no_output_____" ] ], [ [ "One more thing before we run the time loop: we'll create an array to contain soil flux. In the function call below, the first argument tells Landlab that we want one value for each grid link, while the second argument provides a name for this data *field*:", "_____no_output_____" ] ], [ [ "qs = mg.add_zeros('sediment_flux', at='link')", "_____no_output_____" ] ], [ [ "And now for some landform evolution. We will loop through 25 iterations, representing 50,000 years. On each pass through the loop, we do the following:\n\n1. Calculate, and store in the array `g`, the gradient between each neighboring pair of nodes. These calculations are done on **links**. The gradient value is a positive number when the gradient is \"uphill\" in the direction of the link, and negative when the gradient is \"downhill\" in the direction of the link. On a raster grid, link directions are always in the direction of increasing $x$ (\"horizontal\" links) or increasing $y$ (\"vertical\" links).\n\n2. Calculate, and store in the array `qs`, the sediment flux between each adjacent pair of nodes by multiplying their gradient by the transport coefficient. 
We will only do this for the **active links** (those not connected to a closed boundary, and not connecting two boundary nodes of any type); others will remain as zero.\n\n3. Calculate the resulting net flux at each node (positive=net outflux, negative=net influx). The negative of this array is the rate of change of elevation at each (core) node, so store it in a node array called `dzdt`.\n\n4. Update the elevations for the new time step.", "_____no_output_____" ] ], [ [ "for i in range(25):\n    g = mg.calc_grad_at_link(z)\n    qs[mg.active_links] = -D * g[mg.active_links]\n    dzdt = -mg.calc_flux_div_at_node(qs)\n    z[mg.core_nodes] += dzdt[mg.core_nodes] * dt", "_____no_output_____" ] ], [ [ "Let's look at how our fault scarp has evolved.", "_____no_output_____" ] ], [ [ "imshow_grid(mg, 'topographic__elevation')", "_____no_output_____" ] ], [ [ "Notice that we have just created and run a 2D model of fault-scarp creation and diffusion with fewer than two dozen lines of code. How long would this have taken to write in C or Fortran?\n\nWhile it was very easy to write in 1D, writing this in 2D would mean we would have needed to keep track of the adjacency of the different parts of the grid. This is the primary problem that the Landlab grids are meant to solve. \n\nThink about how difficult this would be to hand-code if the grid were irregular or hexagonal. In order to conserve mass and implement the differential equation you would need to know how nodes were connected, how long the links were, and how big each cell was.\n\nWe do such an example after the next section. ", "_____no_output_____" ], [ "### *Exercises for section 2b*\n\n(2b.1) Create an instance of a `RasterModelGrid` called `mygrid`, with 16 rows and 25 columns, with a spacing between nodes of 5 meters. Use the `plot` function in the `matplotlib` library to make a plot that shows the position of each node marked with a dot (hint: see the plt.plot() example above).", "_____no_output_____" ] ], [ [ "# (enter your solution to 2b.1 here)\nmygrid = RasterModelGrid((16, 25), xy_spacing=5.0)\nplt.plot(mygrid.x_of_node, mygrid.y_of_node, '.')", "_____no_output_____" ] ], [ [ "(2b.2) Query the grid variables `number_of_nodes` and `number_of_core_nodes` to find out how many nodes are in your grid, and how many of them are core nodes.", "_____no_output_____" ] ], [ [ "# (enter your solution to 2b.2 here)\nprint(mygrid.number_of_nodes)\nprint(mygrid.number_of_core_nodes)", "_____no_output_____" ] ], [ [ "(2b.3) Add a new field to your grid, called `temperature` and attached to nodes. Have the initial values be all zero.", "_____no_output_____" ] ], [ [ "# (enter your solution to 2b.3 here)\ntemp = mygrid.add_zeros('temperature', at='node')", "_____no_output_____" ] ], [ [ "(2b.4) Change the temperature of nodes in the top (north) half of the grid to be 10 degrees C. 
Use the `imshow_grid` function to display a shaded image of the temperature field.", "_____no_output_____" ] ], [ [ "# (enter your solution to 2b.4 here)\ntemp[mygrid.y_of_node >= 40.0] = 10.0\nimshow_grid(mygrid, 'temperature')", "_____no_output_____" ] ], [ [ "(2b.5) Use the grid function `set_closed_boundaries_at_grid_edges` to assign closed boundaries to the right and left sides of the grid.", "_____no_output_____" ] ], [ [ "# (enter your solution to 2b.5 here)\nmygrid.set_closed_boundaries_at_grid_edges(True, False, True, False)\nimshow_grid(mygrid, 'temperature', color_for_closed='c')", "_____no_output_____" ] ], [ [ "(2b.6) Create a new field of zeros called `heat_flux` and attached to links. Using the `number_of_links` grid variable, verify that your new field array has the correct number of items. ", "_____no_output_____" ] ], [ [ "# (enter your solution to 2b.6 here)\nQ = mygrid.add_zeros('heat_flux', at='link')\nprint(mygrid.number_of_links)\nprint(len(Q))", "_____no_output_____" ] ], [ [ "(2b.7) Use the `calc_grad_at_link` grid function to calculate the temperature gradients at all the links in the grid. Given the node spacing and the temperatures you assigned to the top versus bottom grid nodes, what do you expect the maximum temperature gradient to be? Print the values in the gradient array to verify that this is indeed the maximum temperature gradient.", "_____no_output_____" ] ], [ [ "# (enter your solution to 2b.7 here)\nprint('Expected max gradient is 2 C/m')\ntemp_grad = mygrid.calc_grad_at_link(temp)\nprint(temp_grad)", "_____no_output_____" ] ], [ [ "(2b.8) Back to hillslopes: Reset the values in the elevation field of the grid `mg` to zero. Then copy and paste the time loop above (i.e., the block in Section 2b that starts with `for i in range(25):`) below. Modify the last line to add uplift of the hillslope material at a rate `uplift_rate` = 0.0001 m/yr (hint: the amount of uplift in each iteration should be the uplift rate times the time-step duration). Then run the block and plot the resulting topography. Try experimenting with different uplift rates and different values of `D`.", "_____no_output_____" ] ], [ [ "# (enter your solution to 2b.8 here)\nz[:] = 0.0\nuplift_rate = 0.0001\nfor i in range(25):\n    g = mg.calc_grad_at_link(z)\n    qs[mg.active_links] = -D * g[mg.active_links]\n    dzdt = -mg.calc_flux_div_at_node(qs)\n    z[mg.core_nodes] += (dzdt[mg.core_nodes] + uplift_rate) * dt\nimshow_grid(mg, z)", "_____no_output_____" ] ], [ [ "### (c) What's going on under the hood?\n\nThis example uses a finite-volume numerical solution to the 2D diffusion equation. The 2D diffusion equation in this case is derived as follows. Continuity of mass states that:\n\n$\\frac{\\partial z}{\\partial t} = -\\nabla \\cdot \\mathbf{q}_s$,\n\nwhere $z$ is elevation, $t$ is time, the vector $\\mathbf{q}_s$ is the volumetric soil transport rate per unit width, and $\\nabla$ is the divergence operator (here in two dimensions). (Note that we have omitted a porosity factor here; its effect will be subsumed in the transport coefficient). The sediment flux vector depends on the slope gradient:\n\n$\\mathbf{q}_s = -D \\nabla z$,\n\nwhere $D$ is a transport-rate coefficient---sometimes called *hillslope diffusivity*---with dimensions of length squared per time. Combining the two, and assuming $D$ is uniform, we have a classical 2D diffusion equation:\n\n$\\frac{\\partial z}{\\partial t} = D \\nabla^2 z$.\n\nFor the numerical solution, we discretize $z$ at a series of *nodes* on a grid. 
The example in this notebook uses a Landlab *RasterModelGrid*, in which every interior node sits inside a cell of width $\\Delta x$, but we could alternatively have used any grid type that provides nodes, links, and cells.\n\nThe gradient and sediment flux vectors will be calculated at the *links* that connect each pair of adjacent nodes. These links correspond to the mid-points of the cell faces, and the values that we assign to links represent the gradients and fluxes, respectively, along the faces of the cells.\n\nThe flux divergence, $\\nabla \\cdot \\mathbf{q}_s$, will be calculated by summing, for every cell, the total volume inflows and outflows at each cell face, and dividing the resulting sum by the cell area. Note that for a regular, rectilinear grid, as we use in this example, this finite-volume method is equivalent to a finite-difference method.\n\nTo advance the solution in time, we will use a simple explicit, forward-difference method. This solution scheme for a given node $i$ can be written:\n\n$\\frac{z_i^{t+1} - z_i^t}{\\Delta t} = -\\frac{1}{A_i} \\sum\\limits_{j=1}^{N_i} \\delta (l_{ij}) q_s (l_{ij}) \\lambda(l_{ij})$.\n\nHere the superscripts refer to time steps, $\\Delta t$ is time-step size, $q_s(l_{ij})$ is the sediment flux per width associated with the link that crosses the $j$-th face of the cell at node $i$, $\\lambda(l_{ij})$ is the width of the cell face associated with that link ($=\\Delta x$ for a regular uniform grid), and $N_i$ is the number of active links that connect to node $i$. The variable $\\delta(l_{ij})$ contains either +1 or -1: it is +1 if link $l_{ij}$ is oriented away from the node (in which case positive flux would represent material leaving its cell), or -1 if instead the link \"points\" into the cell (in which case positive flux means material is entering).\n\nTo get the fluxes, we first calculate the *gradient*, $G$, at each link, $k$:\n\n$G(k) = \\frac{z(H_k) - z(T_k)}{L_k}$.\n\nHere $H_k$ refers to the *head node* associated with link $k$, $T_k$ is the *tail node* associated with link $k$. Each link has a direction: from the tail node to the head node. The length of link $k$ is $L_k$ (equal to $\\Delta x$ in a regular uniform grid). What the above equation says is that the gradient in $z$ associated with each link is simply the difference in $z$ value between its two endpoint nodes, divided by the distance between them. The gradient is positive when the value at the head node (the \"tip\" of the link) is greater than the value at the tail node, and vice versa.\n\nThe calculation of gradients in $z$ at the links is accomplished with the `calc_grad_at_link` function. The sediment fluxes are then calculated by multiplying the link gradients by $-D$. Once the fluxes at links have been established, the `calc_flux_div_at_node` function performs the summation of fluxes.", "_____no_output_____" ], [ "### *Exercises for section 2c*\n\n(2c.1) Make a 3x3 `RasterModelGrid` called `tinygrid`, with a cell spacing of 2 m. Use the `plot_graph` function to display the nodes and their ID numbers.", "_____no_output_____" ] ], [ [ "# (enter your solution to 2c.1 here)\ntinygrid = RasterModelGrid((3, 3), 2.0)\nplot_graph(tinygrid, at='node')", "_____no_output_____" ] ], [ [ "(2c.2) Give your `tinygrid` a node field called `height` and set the height of the center-most node to 0.5. 
Use `imshow_grid` to display the height field.", "_____no_output_____" ] ], [ [ "# (enter your solution to 2c.2 here)\nht = tinygrid.add_zeros('height', at='node')\nht[4] = 0.5\nimshow_grid(tinygrid, ht)", "_____no_output_____" ] ], [ [ "(2c.3) The grid should have 12 links (extra credit: verify this with `plot_graph`). When you compute gradients, which of these links will have non-zero gradients? What will the absolute value(s) of these gradients be? Which (if any) will have positive gradients and which negative? To codify your answers, make a 12-element numpy array that contains your predicted gradient value for each link.", "_____no_output_____" ] ], [ [ "# (enter your solution to 2c.3 here)\nplot_graph(tinygrid, at='link')\npred_grad = np.array([0, 0, 0, 0.25, 0, 0.25, -0.25, 0, -0.25, 0, 0, 0])\nprint(pred_grad)", "_____no_output_____" ] ], [ [ "(2c.4) Test your prediction by running the `calc_grad_at_link` function on your tiny grid. Print the resulting array and compare it with your predictions.", "_____no_output_____" ] ], [ [ "# (enter your solution to 2c.4 here)\ngrad = tinygrid.calc_grad_at_link(ht)\nprint(grad)", "_____no_output_____" ] ], [ [ "(2c.5) Suppose the flux of soil per unit cell width is defined as -0.01 times the height gradient. What would the flux be at those links that have non-zero gradients? Test your prediction by creating and printing a new array whose values are equal to -0.01 times the link-gradient values.", "_____no_output_____" ] ], [ [ "# (enter your solution to 2c.5 here)\nflux = -0.01 * grad\nprint(flux)", "_____no_output_____" ] ], [ [ "(2c.6) Consider the net soil accumulation or loss rate around the center-most node in your tiny grid (which is the only one that has a cell). The *divergence* of soil flux can be represented numerically as the sum of the total volumetric soil flux across each of the cell's four faces. What is the flux across each face? (Hint: multiply by face width) What do they add up to? Test your prediction by running the grid function `calc_flux_div_at_node` (hint: pass your unit flux array as the argument). What are the units of the divergence values returned by the `calc_flux_div_at_node` function?", "_____no_output_____" ] ], [ [ "# (enter your solution to 2c.6 here)\n# each face carries an outflux of 0.0025 x 2 = 0.005; the total of 0.02 divided by the cell area of 4 gives 0.005\nprint('predicted div is 0.005 m/yr at the center node')\ndqsdx = tinygrid.calc_flux_div_at_node(flux)\nprint(dqsdx)", "_____no_output_____" ] ], [ [ "## Part 3: Hexagonal grid\n\nNext we will use a non-raster Landlab grid.\n\nWe start by making a hexagonal grid of 25 rows and 40 columns with a node spacing of 10, which gives x values between 0 and about 400 and y values between 0 and about 250. We then add zeros to our grid at a field called \"topographic__elevation\" and plot the node locations. \n\nNote that the syntax here is exactly the same as in the RasterModelGrid example (once the grid has been created).", "_____no_output_____" ] ], [ [ "from landlab import HexModelGrid\n\nmg = HexModelGrid((25, 40), 10, node_layout=\"rect\")\nz = mg.add_zeros('topographic__elevation', at='node')\nplt.plot(mg.x_of_node, mg.y_of_node, '.')", "_____no_output_____" ] ], [ [ "Next we create our fault trace and uplift the hanging wall. \n\nWe can plot just like we did with the RasterModelGrid. ", "_____no_output_____" ] ], [ [ "fault_trace_y = 50.0 + 0.25 * mg.x_of_node\nz[mg.y_of_node >\n  fault_trace_y] += 10.0 + 0.01 * mg.x_of_node[mg.y_of_node > fault_trace_y]\nimshow_grid(mg, \"topographic__elevation\")", "_____no_output_____" ] ], [ [ "And we can use the same code as before to create a diffusion model!\n\nLandlab supports multiple grid types. 
You can read more about them [here](https://landlab.readthedocs.io/en/latest/reference/grid/index.html).", "_____no_output_____" ] ], [ [ "qs = mg.add_zeros('sediment_flux', at='link')\nfor i in range(25):\n    g = mg.calc_grad_at_link(z)\n    qs[mg.active_links] = -D * g[mg.active_links]\n    dzdt = -mg.calc_flux_div_at_node(qs)\n    z[mg.core_nodes] += dzdt[mg.core_nodes] * dt\nimshow_grid(mg, 'topographic__elevation')", "_____no_output_____" ] ], [ [ "### *Exercises for section 3*\n\n(3.1-6) Repeat the exercises from section 2c, but this time using a hexagonal tiny grid called `tinyhex`. Your grid should have 7 nodes: one core node and 6 perimeter nodes. (Hints: use `node_layout = 'hex'`, and make a grid with 3 rows and 2 base-row columns.)", "_____no_output_____" ] ], [ [ "# (enter your solution to 3.1 here)\ntinyhex = HexModelGrid((3, 2), 2.0)\nplot_graph(tinyhex, at='node')", "_____no_output_____" ], [ "# (enter your solution to 3.2 here)\nhexht = tinyhex.add_zeros('height', at='node')\nhexht[3] = 0.5\nimshow_grid(tinyhex, hexht)", "_____no_output_____" ], [ "# (enter your solution to 3.3 here)\nplot_graph(tinyhex, at='link')\npred_grad = np.array([0, 0, 0.25, 0.25, 0, 0.25, -0.25, 0, -0.25, -0.25, 0, 0])\nprint(pred_grad)", "_____no_output_____" ], [ "# (enter your solution to 3.4 here)\nhexgrad = tinyhex.calc_grad_at_link(hexht)\nprint(hexgrad)", "_____no_output_____" ], [ "# (enter your solution to 3.5 here)\nhexflux = -0.01 * hexgrad\nprint(hexflux)", "_____no_output_____" ], [ "# (enter your solution to 3.6 here)\nprint(tinyhex.length_of_face)\nprint(tinyhex.area_of_cell)\ntotal_outflux = 6 * 0.0025 * tinyhex.length_of_face[0]\ndivergence = total_outflux / tinyhex.area_of_cell[0]\nprint(total_outflux)\nprint(divergence)", "_____no_output_____" ] ], [ [ "## Part 4: Landlab Components\n\nFinally we will use a Landlab component, called the LinearDiffuser [link to its documentation](https://landlab.readthedocs.io/en/latest/reference/components/diffusion.html).\n\nLandlab was designed to have many utilities, like `calc_grad_at_link` and `calc_flux_div_at_node`, to help you make your own models. Sometimes, however, you may use such a model over and over and over. Then it is nice to be able to put it in its own Python class with a standard interface. \n\nThis is what a Landlab Component is. \n\nThere is a whole [tutorial on components](../component_tutorial/component_tutorial.ipynb) and a [page on the User Guide](https://landlab.readthedocs.io/en/latest/user_guide/components.html). For now we will just show you what the prior example looks like if we use the LinearDiffuser. \n\nFirst we import it, set up the grid, and uplift our fault block. ", "_____no_output_____" ] ], [ [ "from landlab.components import LinearDiffuser\n\nmg = HexModelGrid((25, 40), 10, node_layout=\"rect\")\nz = mg.add_zeros('topographic__elevation', at='node')\nfault_trace_y = 50.0 + 0.25 * mg.x_of_node\nz[mg.y_of_node >\n  fault_trace_y] += 10.0 + 0.01 * mg.x_of_node[mg.y_of_node > fault_trace_y]", "_____no_output_____" ] ], [ [ "Next we instantiate a LinearDiffuser. We have to tell the component what value to use for the diffusivity. ", "_____no_output_____" ] ], [ [ "ld = LinearDiffuser(mg, linear_diffusivity=D)", "_____no_output_____" ] ], [ [ "Finally we run the component forward in time and plot. Like many Landlab components, the LinearDiffuser has a method called \"run_one_step\" that takes one input, the timestep dt. Calling this method runs the LinearDiffuser forward in time by an increment dt. 
", "_____no_output_____" ] ], [ [ "for i in range(25):\n ld.run_one_step(dt)\nimshow_grid(mg, 'topographic__elevation')", "_____no_output_____" ] ], [ [ "### *Exercises for section 4*\n\n(4.1) Repeat the steps above that instantiate and run a `LinearDiffuser` component, but this time give it a `RasterModelGrid`. Use `imshow_grid` to display the topography below.", "_____no_output_____" ] ], [ [ "# (enter your solution to 4.1 here)\nrmg = RasterModelGrid((25, 40), 10)\nz = rmg.add_zeros('topographic__elevation', at='node')\nfault_trace_y = 50.0 + 0.25 * rmg.x_of_node\nz[rmg.y_of_node >\n fault_trace_y] += 10.0 + 0.01 * rmg.x_of_node[rmg.y_of_node > fault_trace_y]\nld = LinearDiffuser(rmg, linear_diffusivity=D)\nfor i in range(25):\n ld.run_one_step(dt)\nimshow_grid(rmg, 'topographic__elevation')", "_____no_output_____" ] ], [ [ "(4.2) Using either a raster or hex grid (your choice) with a `topographic__elevation` field that is initially all zeros, write a modified version of the loop that adds uplift to the core nodes each iteration, at a rate of 0.0001 m/yr. Run the model for enough time to accumulate 10 meters of uplift. Plot the terrain to verify that the land surface height never gets higher than 10 m. ", "_____no_output_____" ] ], [ [ "# (enter your solution to 4.2 here)\nrmg = RasterModelGrid((40, 40), 10) # while we're at it, make it a bit bigger\nz = rmg.add_zeros('topographic__elevation', at='node')\nld = LinearDiffuser(rmg, linear_diffusivity=D)\nfor i in range(50):\n ld.run_one_step(dt)\n z[rmg.core_nodes] += dt * 0.0001\nimshow_grid(rmg, 'topographic__elevation')", "_____no_output_____" ] ], [ [ "(4.3) Now run the same model long enough that it reaches (or gets very close to) a dynamic equilibrium between uplift and erosion. What shape does the hillslope have? ", "_____no_output_____" ] ], [ [ "# (enter your solution to 4.3 here)\nz[:] = 0.0\nuplift_rate = 0.0001\nfor i in range(4000):\n ld.run_one_step(dt)\n z[rmg.core_nodes] += dt * uplift_rate\nimshow_grid(rmg, 'topographic__elevation')\nplt.figure()\nplt.plot(rmg.x_of_node, z, '.')", "_____no_output_____" ] ], [ [ "(BONUS CHALLENGE QUESTION) Derive an analytical solution for the cross-sectional shape of your steady-state hillslope. Plot this solution next to the actual model's cross-section.", "_____no_output_____" ], [ "#### *SOLUTION (derivation)*\n\n##### Derivation of the original governing equation\n\n(Note: you could just start with the governing equation and go from there, but we include this here for completeness).\n\nConsider a topographic profile across a hillslope. The horizontal coordinate along the profile is $x$, measured from the left side of the profile (i.e., the base of the hill on the left side, where $x=0$). The horizontal coordinate perpendicular to the profile is $y$. Assume that at any time, the hillslope is perfectly symmetrical in the $y$ direction, and that there is no flow of soil in this direction.\n\nNow consider a vertical column of soil somewhere along the profile. The left side of the column is at position $x$, and the right side is at position $x+\\Delta x$, with $\\Delta x$ being the width of the column in the $x$ direction. The width of the column in the $y$ direction is $W$. The height of the column, $z$, is also the height of the land surface at that location. 
Height is measured relative to the height of the base of the slope (in other words, $z(0) = 0$).\n\nThe total mass of soil inside the column, and above the slope base, is equal to the volume of soil material times its density times the fraction of space that it fills, which is 1 - porosity. Denoting soil particle density by $\\rho$ and porosity by $\\phi$, the soil mass in a column of height $z$ is\n\n$m = (1-\\phi ) \\rho \\Delta x W z$.\n\nConservation of mass dictates that the rate of change of mass equals the rate of mass inflow minus the rate of mass outflow. Assume that mass enters or leaves only by (1) soil creep, and (2) uplift of the hillslope material relative to the elevation of the hillslope base. The rate of the latter, in terms of length per time, will be denoted $U$. The rate of soil creep at a particular position $x$, in terms of bulk volume (including pores) per time per width, will be denoted $q_s(x)$. With this definition in mind, mass conservation dictates that:\n\n$\\frac{\\partial (1-\\phi ) \\rho \\Delta x W z}{\\partial t} = \\rho (1-\\phi ) \\Delta x W U + \\rho (1-\\phi ) W q_s(x) - \\rho (1-\\phi ) W q_s(x+\\Delta x)$.\n\nAssume that porosity and density are steady and uniform. Then,\n\n$\\frac{\\partial z}{\\partial t} = U + \\frac{q_s(x) - q_s(x+\\Delta x)}{\\Delta x}$.\n\nFactoring out -1 from the right-most term, and taking the limit as $\\Delta x\\rightarrow 0$, we get a differential equation that expresses conservation of mass for this situation:\n\n$\\frac{\\partial z}{\\partial t} = U - \\frac{\\partial q_s}{\\partial x}$.\n\nNext, substitute the soil-creep rate law\n\n$q_s = -D \\frac{\\partial z}{\\partial x}$,\n\nto obtain\n\n$\\frac{\\partial z}{\\partial t} = U + D \\frac{\\partial^2 z}{\\partial x^2}$.\n\n##### Steady state\n\nSteady means $dz/dt = 0$. If we go back to the mass conservation law a few steps ago and apply steady state, we find\n\n$\\frac{dq_s}{dx} = U$.\n\nIf you think of a hillslope that slopes down to the right, you can think of this as indicating that for every step you take to the right, you get another increment of incoming soil via uplift relative to baselevel. (Turns out it works the same way for a slope that angles down to the left, but that's less obvious in the above math)\n\nIntegrate to get:\n\n$q_s = Ux + C_1$, where $C_1$ is a constant of integration.\n\nTo evaluate the integration constant, let's assume the crest of the hill is right in the middle of the profile, at $x=L/2$, with $L$ being the total length of the profile. Net downslope soil flux will be zero at the crest (where the slope is zero), so for this location:\n\n$q_s = 0 = UL/2 + C_1$, \n\nand therefore,\n\n$C_1 = -UL/2$, \n\nand\n\n$q_s = U (x - L/2)$.\n\nNow substitute the creep law for $q_s$ and divide both sides by $-D$:\n\n$\\frac{dz}{dx} = \\frac{U}{D} (L/2 - x)$.\n\nIntegrate:\n\n$z = \\frac{U}{D} (Lx/2 - x^2/2) + C_2$.\n\nTo evaluate $C_2$, recall that $z(0)=0$ (and also $z(L)=0$), so $C_2=0$. Hence, here's our analytical solution, which describes a parabola:\n\n$\\boxed{z = \\frac{U}{2D} (Lx - x^2)}$.", "_____no_output_____" ], [ "# (enter your solution to the bonus challenge question here)\nL = 390.0  # hillslope length, m\nx_analytic = np.arange(0.0, L)\nz_analytic = 0.5 * (uplift_rate / D) * (L * x_analytic - x_analytic * x_analytic)\nplt.plot(rmg.x_of_node, z, '.')\nplt.plot(x_analytic, z_analytic, 'r')", "_____no_output_____" ] ], [ [ "Hey, hang on a minute, that's not a very good fit! What's going on? 
\n\nTurns out our 2D hillslope isn't as tall as the idealized 1D profile because of the boundary conditions: with soil free to flow east and west as well as north and south, the crest ends up lower than it would be if it were perfectly symmetrical in one direction.\n\nSo let's try re-running the numerical model, but this time with the north and south boundaries closed so that the hill shape becomes uniform in the $y$ direction:", "_____no_output_____" ] ], [ [ "rmg = RasterModelGrid((40, 40), 10)\nz = rmg.add_zeros('topographic__elevation', at='node')\nrmg.set_closed_boundaries_at_grid_edges(False, True, False, True) # closed on N and S\nld = LinearDiffuser(rmg, linear_diffusivity=D)\nfor i in range(4000):\n ld.run_one_step(dt)\n z[rmg.core_nodes] += dt * uplift_rate\nimshow_grid(rmg, 'topographic__elevation')", "_____no_output_____" ], [ "plt.plot(rmg.x_of_node, z, '.')\nplt.plot(x_analytic, z_analytic, 'r')", "_____no_output_____" ] ], [ [ "That's more like it!", "_____no_output_____" ], [ "Congratulations on making it to the end of this tutorial!\n\n### Click here for more <a href=\"https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html\">Landlab tutorials</a>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ] ]
ece32adf884c93717f0be9c9493e2bdd8c4bc34f
13,507
ipynb
Jupyter Notebook
beefy-impact/Data Visualisation/Beefy_impact_design.ipynb
futureCodersSE/data-stories
23cbbf971fcb33b1ed2d7231078564b42c8be4c9
[ "CC0-1.0" ]
1
2021-06-19T10:50:13.000Z
2021-06-19T10:50:13.000Z
beefy-impact/Data Visualisation/Beefy_impact_design.ipynb
futureCodersSE/data-stories
23cbbf971fcb33b1ed2d7231078564b42c8be4c9
[ "CC0-1.0" ]
1
2021-06-19T13:47:37.000Z
2021-06-19T13:47:37.000Z
beefy-impact/Data Visualisation/Beefy_impact_design.ipynb
futureCodersSE/data-stories
23cbbf971fcb33b1ed2d7231078564b42c8be4c9
[ "CC0-1.0" ]
null
null
null
59.502203
633
0.591693
[ [ [ "from bokeh.io import output_notebook, show\nfrom bokeh.plotting import figure\nfrom bokeh.models.annotations import BoxAnnotation\noutput_notebook()\n\nimport pandas as pd\n\ndata = pd.read_csv('https://drive.google.com/uc?id=13CqHK6mNZqlZYjgKnhD8po7khP_c_NoZ')\ndata.rename(columns={'Value': 'Kilotonnes'}, inplace=True)\nnitrogen_oxide_data = data[(data['Element'] == 'Emissions (N2O)') & (data['Item'] == 'Manure left on Pasture')]\n\ny = nitrogen_oxide_data['Kilotonnes']\np = figure(title=\"Beefy impact\", x_axis_label='Year', y_axis_label='N2O Emissions (Kilotonnes)')\n\n# yearly N2O emission levels (kilotonnes); these are the boundaries of the coloured bands drawn below\nlevels = [266.3719, 269.9084, 268.7174, 276.5519, 287.6046, 295.7288, 304.6194, 309.7130, 310.4924, 315.2846, 322.5973, 334.3371, 345.1684, 355.4687, 364.1337, 380.6988, 393.7741, 389.4829, 392.8934, 408.2732, 413.3995, 414.5968, 414.9290, 419.2414, 421.8975, 427.4531, 434.5807, 445.5036, 453.6046, 456.9224, 464.5313, 470.7279, 471.4683, 480.3156, 485.3805, 477.8059, 481.1975, 480.7061, 484.5734, 492.7635, 503.9538, 522.5276, 547.4345, 563.6819, 571.0238, 573.6576, 568.0377, 571.8201, 572.6675, 567.4878, 568.3301, 569.4757, 570.3922, 572.9986, 575.5566, 583.9221, 588.0879, 588.4832, 595.2982]\nyears = ['1961', '1962', '1963', '1964', '1965', '1966', '1967', '1968', '1969', '1970', '1971', '1972', '1973', '1974', '1975', '1976', '1977', '1978', '1979', '1980', '1981', '1982', '1983', '1984', '1985', '1986', '1987', '1988', '1989', '1990', '1991', '1992', '1993', '1994', '1995', '1996', '1997', '1998', '1999', '2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015', '2016', '2017', '2018', '2019']\nx = [int(year) for year in years]  # numeric years, so the glyphs share the numeric axis used by the BoxAnnotations\np.circle(x, y, size=8, fill_color='#FF8000')\np.line(x, y, line_color='orange')\n\n# two background bands below the lowest emission level\np.add_layout(BoxAnnotation(top=100, bottom=0, left=1961, right=2019, fill_color='#FFE5CC'))\np.add_layout(BoxAnnotation(top=200, bottom=100, left=1961, right=2019, fill_color='#FFCB95'))\n\n# one band per year between consecutive emission levels; the colour sequence reproduces the original hand-written annotations\ncolors = (['#FFB264'] * 6 + ['#FFA245'] * 10 + ['#FFB264'] + ['#FFCB95'] * 2\n          + ['#FF9326'] * 15 + ['#FFB264'] + ['#FFCB95'] * 3 + ['#FF9326'] * 2\n          + ['#FF8000'] * 8 + ['#FFB264'] * 6 + ['#FF8000'] * 5)\nbottoms = [200] + levels[:-1]\nfor i, top in enumerate(levels):\n    p.add_layout(BoxAnnotation(top=top, bottom=bottoms[i], left=1961 + i, right=2019, fill_color=colors[i]))\n\np.ygrid.grid_line_color = 'white'\np.xgrid.grid_line_color = 'white'\n\np.varea(x, y1=0, y2=y, alpha=0, fill_alpha=0)\n\nshow(p)", "_____no_output_____" ], [ "from bokeh.io import show\nfrom bokeh.plotting import figure\nimport pandas as pd\n\ndata = pd.read_csv('https://drive.google.com/uc?id=13CqHK6mNZqlZYjgKnhD8po7khP_c_NoZ')\ndata.rename(columns={'Value': 'Kilotonnes'}, inplace=True)\nnitrogen_oxide_data = data[(data['Element'] == 'Emissions (N2O)') & (data['Item'] == 'Manure left on Pasture')]\n\ny = nitrogen_oxide_data['Kilotonnes']\nyears = ['1961', '1962', '1963', '1964', '1965', '1966', '1967', '1968', '1969', '1970', '1971', '1972', '1973', '1974', '1975', '1976', '1977', '1978', '1979', '1980', '1981', '1982', '1983', '1984', '1985', '1986', '1987', '1988', '1989', '1990', '1991', '1992', '1993', '1994', '1995', '1996', '1997', '1998', '1999', '2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015', '2016', '2017', '2018', '2019']\n\np = figure(x_range=years, plot_height=600, plot_width=1700, x_axis_label='Year', y_axis_label='N2O Emissions (Kilotonnes)', title=\"South America\")\np.vbar(x=years, top=y, width=0.4, fill_color='orange')\n\np.ygrid.grid_line_color = 'black'\np.xgrid.grid_line_color = 'white'\np.y_range.start = 0\n\nshow(p)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
ece32c71bd2a26a5150968aa8c83306a7c6526b1
52,654
ipynb
Jupyter Notebook
Notebook/chap6_ex1.ipynb
habimis95/hello-world
0c8b876e3cc291b282388e23b4904453d9123b70
[ "BSD-3-Clause" ]
null
null
null
Notebook/chap6_ex1.ipynb
habimis95/hello-world
0c8b876e3cc291b282388e23b4904453d9123b70
[ "BSD-3-Clause" ]
1
2020-07-21T03:03:10.000Z
2020-07-21T03:05:22.000Z
Notebook/chap6_ex1.ipynb
habimis95/hello-world
0c8b876e3cc291b282388e23b4904453d9123b70
[ "BSD-3-Clause" ]
null
null
null
110.154812
15,568
0.780511
[ [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport collections", "_____no_output_____" ], [ "url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv'\nchipo = pd.read_csv(url, sep = '\\t')", "_____no_output_____" ], [ "chipo.shape\nchipo.head()\nchipo.iloc[0]  # chipo[0] raises a KeyError; .iloc selects the first row by position", "_____no_output_____" ], [ "x = chipo.item_name\nx.head()", "_____no_output_____" ], [ "letter_counts = collections.Counter(x)\nprint(letter_counts)", "Counter({'Chicken Bowl': 726, 'Chicken Burrito': 553, 'Chips and Guacamole': 479, 'Steak Burrito': 368, 'Canned Soft Drink': 301, 'Chips': 211, 'Steak Bowl': 211, 'Bottled Water': 162, 'Chicken Soft Tacos': 115, 'Chips and Fresh Tomato Salsa': 110, 'Chicken Salad Bowl': 110, 'Canned Soda': 104, 'Side of Chips': 101, 'Veggie Burrito': 95, 'Barbacoa Burrito': 91, 'Veggie Bowl': 85, 'Carnitas Bowl': 68, 'Barbacoa Bowl': 66, 'Carnitas Burrito': 59, 'Steak Soft Tacos': 55, '6 Pack Soft Drink': 54, 'Chips and Tomatillo Red Chili Salsa': 48, 'Chicken Crispy Tacos': 47, 'Chips and Tomatillo Green Chili Salsa': 43, 'Carnitas Soft Tacos': 40, 'Steak Crispy Tacos': 35, 'Chips and Tomatillo-Green Chili Salsa': 31, 'Steak Salad Bowl': 29, 'Nantucket Nectar': 27, 'Barbacoa Soft Tacos': 25, 'Chips and Roasted Chili Corn Salsa': 22, 'Izze': 20, 'Chips and Tomatillo-Red Chili Salsa': 20, 'Veggie Salad Bowl': 18, 'Chips and Roasted Chili-Corn Salsa': 18, 'Barbacoa Crispy Tacos': 11, 'Barbacoa Salad Bowl': 10, 'Chicken Salad': 9, 'Carnitas Crispy Tacos': 7, 'Veggie Soft Tacos': 7, 'Burrito': 6, 'Carnitas Salad Bowl': 6, 'Veggie Salad': 6, 'Steak Salad': 4, 'Bowl': 2, 'Crispy Tacos': 2, 'Salad': 2, 'Chips and Mild Fresh Tomato Salsa': 1, 'Veggie Crispy Tacos': 1, 'Carnitas Salad': 1})\n" ], [ "df = pd.DataFrame.from_dict(letter_counts, orient='index')\ndf.head()", "_____no_output_____" ], [ "# Sort the frequencies in descending order and take the first 5 items,\n# then draw a bar chart of these 5 items with a title, xlabel, ylabel and xticks\ndf_5 = df.sort_values(by = 0, ascending = False)[0:5]\n\n# create the plot\nplt.bar(df_5.index.values, df_5[0].values)\n\n# Set the title, labels and xticks (rotation='vertical')\nplt.xlabel('Items')\nplt.ylabel('Number of orders')\nplt.title('Most ordered Chipotle\\'s Items')\nplt.xticks(df_5.index.values, df_5.index.values, rotation='vertical')\n\n# display the chart\nplt.show()", "_____no_output_____" ], [ "# Update the item_price column so the unit price is a float. Group the orders by order_id\nchipo.item_price = [float(value[1:-1]) for value in chipo.item_price]\nchipo.head()", "_____no_output_____" ], [ "# Compute the total value of each order and assign it to the variable orders. Print the head of orders\norders = chipo.groupby('order_id').sum()\norders.head()", "_____no_output_____" ], [ "# Draw a scatterplot of orders with x = orders.item_price and y = orders.quantity\nplt.scatter(x = orders.item_price, y = orders.quantity, s = 50, c = 'green')\n\n# Set the title and labels\nplt.xlabel('Order Price')\nplt.ylabel('Items ordered')\nplt.title('Number of items ordered per order price')\nplt.ylim(0)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ece3354516a8af754e3cfc52973bf0fac2c388e9
101,144
ipynb
Jupyter Notebook
system_properties/classes.ipynb
swchao/signalsAndSystemsLecture
7f135d091499e1d3d635bac6ddf22adee15454f8
[ "MIT" ]
3
2019-01-27T12:39:27.000Z
2022-03-15T10:26:12.000Z
system_properties/classes.ipynb
swchao/signalsAndSystemsLecture
7f135d091499e1d3d635bac6ddf22adee15454f8
[ "MIT" ]
null
null
null
system_properties/classes.ipynb
swchao/signalsAndSystemsLecture
7f135d091499e1d3d635bac6ddf22adee15454f8
[ "MIT" ]
2
2020-09-18T06:26:48.000Z
2021-12-10T06:11:45.000Z
201.081511
20,068
0.876938
[ [ [ "# System Properties\n\n*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).*", "_____no_output_____" ], [ "## Classes of Systems\n\nThe spectral and temporal characteristics of a linear time-invariant (LTI) system are given by its transfer function $H(s)$ and impulse response $h(t) = \\mathcal{L}^{-1} \\{ H(s) \\}$, respectively. However, it is useful to introduce some higher-level properties and classes of LTI systems. These are for instance useful for the classification of existing systems or the design of systems with specific properties.", "_____no_output_____" ], [ "### Real-Valued System\n\nA real-valued system is a system whose output signal $y(t) = \\mathcal{H} \\{ x(t) \\}$ is real-valued $y(t) \\in \\mathbb{R}$ for a real-valued input signal $x(t) \\in \\mathbb{R}$. Since the output signal $y(t)$ of an LTI system is given by convolving the input signal $x(t)$ with its impulse response $h(t)$, the impulse response $h(t)$ of a real-valued system has to be real-valued\n\n\\begin{equation}\nh(t) \\in \\mathbb{R}\n\\end{equation}\n\nIf existing, the transfer function $H(s) = \\mathcal{L} \\{ h(t) \\}$ shows complex conjugate symmetry\n\n\\begin{equation}\nH(s) = H^*(s^*)\n\\end{equation}\n\nfor a real-valued system due to the [symmetry of the Laplace transform for real-valued signals](../laplace_transform/properties.ipynb#Symmetry-for-Real-Valued-Signals). It follows that the poles and zeros of a rational transfer function $H(s)$ are either real-valued or complex conjugate pairs.\n\nIf existing, the transfer function $H(j \\omega) = \\mathcal{F} \\{ h(t) \\}$ in the Fourier domain also shows complex conjugate symmetry\n\n\\begin{equation}\nH(j \\omega) = H^*(-j \\omega)\n\\end{equation}\n\nfor a real-valued system due to the [symmetry of the Fourier transform for real-valued signals](../fourier_transform/properties.ipynb#Real-valued-signals). The magnitude spectrum $|H(j \\omega)|$ shows even symmetry and the phase $\\varphi(j \\omega)$ odd symmetry\n\n\\begin{align}\n|H(j \\omega)| &= |H(-j \\omega)| \\\\\n\\varphi(j \\omega) &= - \\varphi(-j \\omega)\n\\end{align}\n\nDue to these symmetry relations, often only the positive part $\\omega \\geq 0$ of the magnitude spectrum and phase is plotted. Note that such a representation is ambiguous without the additional knowledge that the system is real-valued.\n\nReal-valued systems are of paramount importance in practical applications, since physically existing signals are real-valued. The symmetry relations derived above have to be considered when designing real-valued systems in the spectral domain.", "_____no_output_____" ], [ "### Distortionless System\n\nA distortionless system is a system whose output signal $y(t)$ is an attenuated and delayed version of the input signal $x(t)$\n\n\\begin{equation}\ny(t) = H_0 \\cdot x(t - \\tau)\n\\end{equation}\n\nwhere $H_0 \\in \\mathbb{R}$ denotes the attenuation and $\\tau \\in \\mathbb{R}$ the delay. The impulse response of a distortionless system is consequently given as\n\n\\begin{equation}\nh(t) = H_0 \\cdot \\delta(t - \\tau)\n\\end{equation}\n\nThis can be concluded from the [sifting property of the Dirac pulse](../continuous_signals/standard_signals.ipynb#Dirac-Impulse). 
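\n\nAs a quick plausibility check, one can let sympy evaluate the convolution integral $y(t) = x(t) * h(t)$ for this impulse response (a minimal sketch; the symbols `H0`, `tau`, `xi` and the generic function `x` are placeholder names introduced only for this check):\n\n```python\nimport sympy as sym\n\nt, xi, tau, H0 = sym.symbols('t xi tau H0', real=True)\nx = sym.Function('x')\n\n# the sifting property collapses the convolution integral at xi = t - tau\ny = sym.integrate(x(xi) * H0 * sym.DiracDelta(t - tau - xi), (xi, -sym.oo, sym.oo))\ny  # should evaluate to H0*x(t - tau)\n```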
\n\nIts transfer function $H(j \\omega) = \\mathcal{F} \\{ h(t) \\}$ reads\n\n\\begin{equation}\nH(j \\omega) = H_0 \\cdot e^{- j \\omega \\tau}\n\\end{equation}\n\nIt follows that the magnitude response of a distortionless system is constant $| H(j \\omega) | = H_0$, the phase response is linearly dependent on the frequency $\\varphi_\\text{H}(j \\omega) = - \\omega \\tau$, and the [phase and group delay](../systems_spectral_domain/phase_group_delay.ipynb) are constant $t_p = t_g = \\tau$.\n\nThe characteristics of a distortionless system are often the desired properties for sensors, actuators, amplifiers and transmission paths. In this context the distortionless system serves as an idealized model for such elements or is used as a goal for the equalization of non-ideal elements.", "_____no_output_____" ], [ "### Linear-Phase System\n\nSystems whose unwrapped phase $\\varphi_\\text{H}(j \\omega) = \\arg \\{ H(j \\omega) \\}$ scales linearly with frequency \n\n\\begin{equation}\n\\varphi_\\text{H}(j \\omega) = - c \\cdot \\omega + d\n\\end{equation}\n\nwith $c, d \\in \\mathbb{R}$ are termed *linear-phase systems*. There are no constraints with respect to their magnitude response $|H(j \\omega)|$. The transfer function $H(j \\omega)$ is given as\n\n\\begin{equation}\nH(j \\omega) = |H(j \\omega)| \\; e^{j (- c \\omega + d)}\n\\end{equation}\n\nThe group delay of a linear-phase system is constant\n\n\\begin{equation}\nt_g(\\omega) = - \\frac{\\partial (- c \\cdot \\omega + d)}{\\partial \\omega} = c\n\\end{equation}\n\nSystems with a linear phase are often desired in practical applications, since all spectral components of a signal are delayed by the same amount when passing the system. The distortionless system introduced above can be seen as a special linear-phase system with $|H(j \\omega)| = H_0$, $c = \\tau$ and $d=0$.\n\nThe impulse response of a linear-phase system shows a specific symmetry. Let's first assume that $c=0$ and $d=0$. Splitting $|H(j \\omega)| \\in \\mathbb{R}$ into its odd and even part, and considering the [symmetries of the Fourier transform](../fourier_transform/properties.ipynb#Symmetries) yields\n\n\\begin{equation}\nh(t) = h^*(-t)\n\\end{equation}\n\nThis result can be generalized to the case $c \\neq 0$ by applying the [temporal shift theorem](../fourier_transform/theorems.ipynb#Temporal-Shift-Theorem)\n\n\\begin{equation}\nh(t + c) = h^*(-t + c)\n\\end{equation}\n\nThe impulse response of a linear-phase system exhibits complex conjugate symmetry with respect to the time instant $t = c$. In the general case $d \\neq 0$, a constant phase shift is added.", "_____no_output_____" ], [ "**Example**\n\nA real-valued linear-phase system with the following transfer function\n\n\\begin{equation}\nH(j \\omega) = \\Lambda(\\omega) \\cdot e^{-j c \\omega}\n\\end{equation}\n\nis considered. Its impulse response $h(t) = \\mathcal{F}^{-1} \\{ H(j \\omega) \\}$ is derived by applying the [duality principle](../fourier_transform/properties.ipynb#Duality) to the [Fourier transformation of the triangular signal](../fourier_transform/theorems.ipynb#Transformation-of-the-triangular-signal) and considering the [shift theorem](../fourier_transform/theorems.ipynb#Temporal-Shift-Theorem)\n\n\\begin{equation}\nh(t) = \\frac{1}{2 \\pi} \\text{sinc}^2 \\left( \\frac{t - c}{2} \\right)\n\\end{equation}\n\nIt is straightforward to show that the impulse response fulfills the properties stated above. 
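\n\nThis symmetry can be confirmed with a short symbolic check (a sketch using the sin(x)/x form of the sinc function, which agrees with $h(t)$ away from $t=c$; the helper symbols `u` and `c` are introduced only for this check):\n\n```python\nimport sympy as sym\n\nt, u, c = sym.symbols('t u c', real=True)\n# the sin(x)/x form of the sinc impulse response, valid away from t = c\nh = sym.sin((t - c)/2)**2 / ((t - c)/2)**2 / (2*sym.pi)\n\n# since this h is real-valued, h(c + u) - h(c - u) must vanish\nsym.simplify(h.subs(t, c + u) - h.subs(t, c - u))  # should return 0\n```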
\n\nFor illustration, the impulse response is plotted for $c=5$", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport sympy as sym\nsym.init_printing()\n\nt = sym.symbols('t', real=True)\n\nh = 1/(2*sym.pi) * sym.sinc((t-5)/2)**2\nsym.plot(h, (t, -5, 15), xlabel='$t$', ylabel='$h(t)$');", "_____no_output_____" ] ], [ [ "**Exercise**\n\nThe linear-phase system with transfer function\n\n\\begin{equation}\nH(j \\omega) = \\text{rect}(\\omega - \\frac{1}{2}) \\cdot e^{-j c \\omega}\n\\end{equation}\n\nis investigated in this exercise.\n\n* Is the system real-valued?\n* Derive the impulse response $h(t)$ of the system.\n* Do the above derived symmetries hold for the impulse response? ", "_____no_output_____" ], [ "### Minimum-Phase System\n\nFor a given magnitude response $|H(j \\omega)|$, often the realization which has the minimum possible group delay $t_g(\\omega)$ is desired. The group delay quantifies the frequency-dependent delay a system introduces to a signal. For many applications this delay should be as small as possible. A [causal and stable system](causality_stability.ipynb) with rational transfer function $H(s)$ is minimum-phase iff it has no zeros or poles in the right $s$-half-plane. Such a system is termed [*minimum-phase system*](https://en.wikipedia.org/wiki/Minimum_phase). It has the minimum possible group delay for a given magnitude response.\n\nIn order to prove this, the dependence of the magnitude response $|H(j \\omega)|$ and phase $\\varphi_\\text{H}(j \\omega)$ on the locations of the poles and zeros of $H(s)$ is analyzed. It is assumed in the following that the poles of the system are located in the left $s$-half-plane. The magnitude response of the system can then be written as\n\n\\begin{equation}\n|H(j \\omega)| = |K| \\cdot \\frac{\\prod_{\\mu=0}^{Q} | j \\omega - s_{0 \\mu} |}{\\prod_{\\nu=0}^{P} | j \\omega - s_{\\infty \\nu}|}\n= |K| \\cdot \\frac{\\prod_{\\mu=0}^{Q} \\sqrt{\\Re \\{ s_{0 \\mu} \\}^2 + (\\omega - \\Im \\{ s_{0 \\mu} \\} )^2 }}{\\prod_{\\nu=0}^{P} \\sqrt{\\Re \\{ s_{\\infty \\nu} \\}^2 + (\\omega - \\Im \\{ s_{\\infty \\nu} \\} )^2 }}\n\\end{equation}\n\nwhere $s_{0 \\mu}$ and $s_{\\infty \\nu}$ denote the $\\mu$-th zero and $\\nu$-th pole of $H(s)$, and $Q$ and $P$ the total number of zeros and poles, respectively. The magnitude response for a given frequency $\\omega$ is determined by the relative distances of the poles and zeros to the point $s = j \\omega$ on the imaginary axis. Mirroring the zeros at the imaginary axis does not change the magnitude response. This is due to the fact that $| j \\omega - s_{0 \\mu} | = | j \\omega + s^*_{0 \\mu} |$ holds since only the sign of the real part inside the magnitude changes.
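\n\nThe mirroring argument is easy to check symbolically (a sketch; `s0` below is an arbitrary test zero, not a symbol from the derivation):\n\n```python\nimport sympy as sym\n\nw = sym.symbols('omega', real=True)\ns0 = -1 + 2*sym.I  # arbitrary zero in the left s-half-plane\n\n# distance from the point j*omega to the zero and to its mirror image -conjugate(s0)\nd_left = sym.Abs(sym.I*w - s0)\nd_right = sym.Abs(sym.I*w + sym.conjugate(s0))\nsym.simplify(d_left - d_right)  # should return 0\n```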
The group delay for a single zero $s_0 = \\Re \\{ s_0 \\} + j \\Im \\{ s_0 \\}$ can be derived as\n\n\\begin{equation}\nt_g(\\omega) = - \\frac{d}{d \\omega} \\arg (j \\omega - s_0) =\n\\frac{\\Re \\{ s_0 \\}}{(\\omega - \\Im \\{ s_0 \\})^2 + \\Re \\{s_0\\}^2} =\n\\frac{\\Re \\{ s_0 \\}}{\\omega^2 - 2 \\omega \\Im \\{ s_0 \\} + |s_0|^2}\n\\end{equation}\n\nFor a given magnitude response, the denominator is invariant to a mirroring of the zero at the imaginary axis. However, the numerator $\\Re \\{ s_0 \\} < 0$ for a zero in the left $s$-half-plane and $\\Re \\{ s_0 \\} > 0$ for a zero in the right $s$-half-plane. Hence, the group delay for the zero in the left $s$-half-plane is always smaller than for its mirror-imaged zero in the right $s$-half-plane. Minimizing the group delay for each zero minimizes the overall group delay.\n\nA system with zeros located in both the left and right $s$-half-plane is called a *non-minimum-phase system* or *mixed-phase system*. A system where all zeros are located in the right $s$-half-plane is called a *maximum-phase system*.", "_____no_output_____" ], [ "**Example**\n\nA real-valued minimum-phase system (MPS) with the following transfer function is investigated\n\n\\begin{equation}\nH_\\text{MPS}(j \\omega) = \\frac{(j \\omega - s_0)(j \\omega - s_0^*)}{j \\omega - s_\\infty}\n\\end{equation}\n\nThe system has one real-valued pole $s_\\infty$, and a pair of complex conjugate zeros $s_0$ and $s_0^*$. The corresponding non-minimum-phase system (NMPS) with equal magnitude response $|H_\\text{MPS}(j \\omega)| = |H_\\text{NMPS}(j \\omega)|$ is obtained by mirroring the zeros at the imaginary axis\n\n\\begin{equation}\nH_\\text{NMPS}(j \\omega) = \\frac{(j \\omega + s_0^*)(j \\omega + s_0)}{j \\omega - s_\\infty}\n\\end{equation}\n\nThe magnitude response and group delay for both systems are computed and plotted for $s_\\infty = -2$ and $s_0 = - 1 + j$. First the transfer functions are defined and the group delays are computed", "_____no_output_____" ] ], [ [ "w = sym.symbols('omega', real=True)\njw = sym.I * w\ns_0 = - 1 + sym.I\ns_inf = -2\n\nH_MPS = (jw - s_0)*(jw - sym.conjugate(s_0))/(jw - s_inf)\nH_NMPS = (jw + sym.conjugate(s_0))*(jw + s_0)/(jw - s_inf)\n\ntg_MPS = - sym.diff(sym.arg(H_MPS), w)\ntg_NMPS = - sym.diff(sym.arg(H_NMPS), w)", "_____no_output_____" ] ], [ [ "The magnitude response $|H_\\text{MPS}(j \\omega)|$ of the MPS is illustrated by the green line and the magnitude response $|H_\\text{NMPS}(j \\omega)|$ of the NMPS by the red line, respectively. 
Note that both overlap.", "_____no_output_____" ] ], [ [ "p1 = sym.plot(sym.Abs(H_MPS), (w, -5, 5), line_color='g', xlabel='$\\omega$', ylabel='$|H(j \\omega)|$', show=False)\np2 = sym.plot(sym.Abs(H_NMPS), (w, -5, 5), line_color='r', show=False)\np1.extend(p2)\np1.show()", "_____no_output_____" ] ], [ [ "The same color scheme is used for the group delay of both systems", "_____no_output_____" ] ], [ [ "p1 = sym.plot(tg_MPS, (w, -5, 5), line_color='g', xlabel='$\\omega$', ylabel='$t_g(\\omega)$', show=False)\np2 = sym.plot(tg_NMPS, (w, -5, 5), line_color='r', show=False)\np1.extend(p2)\np1.show()", "_____no_output_____" ] ], [ [ "**Exercise**\n\n* Add a second pair of complex conjugate zeros to the above MPS.\n* For your MPS system derive the transfer function $H(j \\omega)$ of the corresponding maximum-phase and one mixed-phase system.\n* Compute the group delays for all three systems.\n* In what range is the group delay of the mixed-phase system located with respect to the minimum- and maximum-phase system?", "_____no_output_____" ], [ "### All-Pass\n\nAn [all-pass](https://en.wikipedia.org/wiki/All-pass_filter) is a system with constant magnitude response $|H(j \\omega)| = H_0$ but frequency-dependent phase $\\varphi_\\text{H}(j \\omega)$. It allows one to modify the phase of a signal without modifying its magnitude. The [distortionless system](#Distortionless-System) is a special case of an all-pass with $\\varphi_\\text{H}(j \\omega) = - \\omega \\tau$. A stable system with rational transfer function $H(s)$ is a causal all-pass iff for each pole in the left $s$-half-plane there exists a zero in the right $s$-half-plane which is mirrored at the imaginary axis $\\Re \\{ s \\} = 0$.\n\nThe magnitude response $|H(j \\omega)|$ of a system $H(s)$ as a function of its poles and zeros has already been derived above as \n\n\\begin{equation}\n|H(j \\omega)| = |K| \\cdot \\frac{\\prod_{\\mu=0}^{Q} | j \\omega - s_{0 \\mu} |}{\\prod_{\\nu=0}^{P} | j \\omega - s_{\\infty \\nu}|}\n\\end{equation}\n\nFor a pole $s_{\\infty \\mu}$ and its mirror-imaged zero $s_{0 \\nu} = - s_{\\infty \\mu}^*$ the following holds\n\n\\begin{equation}\n|j \\omega - s_{\\infty \\mu}| = |j \\omega - s_{0 \\nu}|\n\\end{equation}\n\nsince only the sign of the real part of the mirror-imaged zero changes. Introducing this result into $|H(j \\omega)|$ above for all pole/zero pairs yields that the magnitude response of an all-pass is constant.", "_____no_output_____" ], [ "**Example**\n\nThe properties of a real-valued 2nd-order all-pass with transfer function\n\n\\begin{equation}\nH(j \\omega) = \\frac{(j \\omega - s_0)(j \\omega - s_0^*)}{(j \\omega - s_\\infty)(j \\omega - s_\\infty^*)}\n\\end{equation}\n\nare investigated. The pole is chosen as $s_\\infty = -1 + j$, and the zero due to the required symmetry as $s_0 = - s_\\infty^*$. 
First the transfer function is defined", "_____no_output_____" ] ], [ [ "s_inf = -1 + sym.I\ns_0 = - sym.conjugate(s_inf)\n\nH = (jw - s_0)*(jw - sym.conjugate(s_0))/((jw - s_inf)*(jw - sym.conjugate(s_inf)))\nH", "_____no_output_____" ] ], [ [ "The magnitude response $|H(j \\omega)|$ of the all-pass is plotted", "_____no_output_____" ] ], [ [ "sym.plot(sym.Abs(H), (w, -5, 5), xlabel='$\\omega$', ylabel='$|H(j \\omega)|$', ylim=(0, 1.2));", "_____no_output_____" ] ], [ [ "as well as its phase $\\varphi_\\text{H}(j \\omega)$", "_____no_output_____" ] ], [ [ "sym.plot(sym.arg(H), (w, -5, 5), xlabel='$\\omega$', ylabel=r'$\\varphi(j \\omega)$');", "_____no_output_____" ] ], [ [ "**Copyright**\n\nThe notebooks are provided as [Open Educational Resource](https://de.wikipedia.org/wiki/Open_Educational_Resources). Feel free to use the notebooks for your own educational purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Lecture Notes on Signals and Systems* by Sascha Spors.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
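The all-pass construction in the notebook record above (every left half-plane pole paired with a zero mirrored at the imaginary axis) can be checked numerically. A minimal sketch, assuming only NumPy; the pole location matches the record's example, while the frequency grid and tolerance check are illustrative:

```python
# Numerical check of the all-pass property: for the 2nd-order all-pass with
# pole s_inf = -1 + 1j and mirrored zero s_0 = -conj(s_inf), |H(j*omega)|
# should be constant (equal to 1) for every omega.
import numpy as np

s_inf = -1 + 1j
s_0 = -np.conj(s_inf)           # zero mirrored at the imaginary axis

w = np.linspace(-10, 10, 1001)  # frequency grid (arbitrary range)
jw = 1j * w

H = ((jw - s_0) * (jw - np.conj(s_0))) / ((jw - s_inf) * (jw - np.conj(s_inf)))

# maximum deviation of the magnitude from 1 should be at floating-point level
print(np.max(np.abs(np.abs(H) - 1.0)))
```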
ece336f59774ca80e559862bfbc99a635779a170
4,460
ipynb
Jupyter Notebook
V6/3_movie_lens_parquet/2_DDL_Parquet_Compressed.ipynb
fred1234/BDLC_FS22
9db012026bfa1b8672bb4c62558b3beac4baff98
[ "MIT" ]
1
2022-02-24T12:47:23.000Z
2022-02-24T12:47:23.000Z
V6/3_movie_lens_parquet/2_DDL_Parquet_Compressed.ipynb
fred1234/BDLC_FS22
9db012026bfa1b8672bb4c62558b3beac4baff98
[ "MIT" ]
null
null
null
V6/3_movie_lens_parquet/2_DDL_Parquet_Compressed.ipynb
fred1234/BDLC_FS22
9db012026bfa1b8672bb4c62558b3beac4baff98
[ "MIT" ]
null
null
null
22.079208
78
0.547534
[ [ [ "# Data Definition Language - DDL\nLet us create a database `movielens_parquet_compressed`", "_____no_output_____" ] ], [ [ "%load_ext sql\n%sql hive://hadoop@localhost:10000/", "_____no_output_____" ], [ "%%sql\nCREATE DATABASE IF NOT EXISTS movielens_parquet_compressed", "_____no_output_____" ], [ "%sql SHOW DATABASES", "_____no_output_____" ], [ "%sql USE movielens_parquet_compressed", "_____no_output_____" ] ], [ [ "## Defining the Tables and Loading the Data", "_____no_output_____" ], [ "### Movies", "_____no_output_____" ] ], [ [ "%%sql\nCREATE TABLE IF NOT EXISTS movies (\n movieId INT,\n title STRING,\n year INT,\n genres ARRAY<STRING>\n)\nSTORED AS Parquet\nTBLPROPERTIES(\"parquet.compression\"=\"SNAPPY\")", "_____no_output_____" ], [ "%sql INSERT INTO TABLE movies select * from movielens.movies", "_____no_output_____" ] ], [ [ "### Ratings", "_____no_output_____" ] ], [ [ "%%sql\n\nCREATE TABLE IF NOT EXISTS ratings (\n userid INT, \n movieid INT,\n rating INT, \n `timestamp` BIGINT\n)\nSTORED AS Parquet\nTBLPROPERTIES(\"parquet.compression\"=\"SNAPPY\")", "_____no_output_____" ], [ "%sql INSERT INTO TABLE ratings select * from movielens.ratings", "_____no_output_____" ] ], [ [ "## Creating a Joined Table", "_____no_output_____" ] ], [ [ "%%sql\n\nCREATE TABLE IF NOT EXISTS movie_rating (\n movieid INT,\n title STRING,\n year INT,\n genres ARRAY<STRING>,\n num_rating INT, \n first_quartile_rating FLOAT,\n median_rating FLOAT,\n third_quartile_rating FLOAT,\n avg_rating FLOAT,\n min_rating FLOAT,\n max_rating FLOAT\n)\nSTORED AS Parquet\nTBLPROPERTIES(\"parquet.compression\"=\"SNAPPY\")", "_____no_output_____" ], [ "%sql INSERT INTO TABLE movie_rating select * from movielens.movie_rating", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
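The DDL record above relies on the `parquet.compression = SNAPPY` table property. The same trade-off can be approximated locally with pyarrow; a minimal sketch, assuming `pyarrow` is installed, with a made-up toy table and file names (not from the source):

```python
# Write the same toy table with and without compression and compare file
# sizes, roughly illustrating the SNAPPY table property used in the DDL.
import os
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "movieId": list(range(100_000)),
    "rating": [i % 5 + 1 for i in range(100_000)],
})

pq.write_table(table, "ratings_snappy.parquet", compression="SNAPPY")
pq.write_table(table, "ratings_plain.parquet", compression="NONE")

print(os.path.getsize("ratings_snappy.parquet"),
      os.path.getsize("ratings_plain.parquet"))
```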
ece349727695ae7d1bed9d66679f03cf45a69a10
711,014
ipynb
Jupyter Notebook
S03 - Time Series/BLU04 - Time Series Concepts/BLU04 - Learning Notebook - Part 1 of 3 - Pandas for Timeseries.ipynb
jtiagosg/batch3-students
5eb94bee46625881e9470da2b137aaa0f6cf7912
[ "MIT" ]
12
2019-07-06T09:06:17.000Z
2020-11-13T00:58:42.000Z
S03 - Time Series/BLU04 - Time Series Concepts/BLU04 - Learning Notebook - Part 1 of 3 - Pandas for Timeseries.ipynb
Daniel3424/batch3-students
10c46963e51ce974837096ad06a8c134ed4bcd8a
[ "MIT" ]
29
2019-07-01T14:19:49.000Z
2021-03-24T13:29:50.000Z
S03 - Time Series/BLU04 - Time Series Concepts/BLU04 - Learning Notebook - Part 1 of 3 - Pandas for Timeseries.ipynb
Daniel3424/batch3-students
10c46963e51ce974837096ad06a8c134ed4bcd8a
[ "MIT" ]
36
2019-07-05T15:53:35.000Z
2021-07-04T04:18:02.000Z
308.332177
57,772
0.921616
[ [ [ "# BLU04 - Learning Notebook - Part 1 of 3 - Pandas for Timeseries", "_____no_output_____" ], [ "## 1. A bit of housekeeping before we get started\n\nWelcome to timeseries! \n\nHere we will learn how to explore datasets which depend on time. As you might imagine, many datasets in the real world are timeseries. The stockmarket springs to mind, but also anything to do with sales or marketing, engineering processes (when will this particular turbine break, you may ask?), medical processes (what is the effect of this medication over time), and so many, many more. \n\nNow, timeseries are a less settled field than most things you've learned so far. There are many ways of doing things, and different schools of thought fighting to get _\"followers\"_. Our objective here is to avoid indoctrination, but rather to give you a bit of exposure to the super-basics of how to handle timeseries data, and making some basic but useful predictions. We will in no way try to teach you everything, but rather enough to get your hands dirty. From then on, there is the good old fashioned documentation, and hacking around. \n\nSpeaking of covering only the basics, as you might know Neural Networks are becoming increasingly prevalent in the prediction of timeseries. While this is a very exciting topic, we've decided not to include Neural Networks in the Academy, for three main reasons:\n1. We want to make sure you gain a solid technical base, on which you can then later add NNs, rather than have you \"run before you can walk\" \n2. Teaching NNs is slow, as to be understood correctly there are a lot of tricks and \"best practice\" things to know, which don't necessarily have the most scientific of basis (it's still mostly an empirical field), so they do not fit well into the \"basic intuition followed by practice\" approach of the Academy \n3. There are already EXCELLENT resources to learn NNs, namely [Andrew Ng's course on Coursera](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwjVmcT5wtbaAhXHuRQKHc2yD2wQFggpMAA&url=https%3A%2F%2Fwww.coursera.org%2Fspecializations%2Fdeep-learning&usg=AOvVaw3vIqYhrM-dZQd6HUBci4QA), which are best approached after a solid technical foundation has been laid. \n\nLastly, remember that time-series are notoriously tricky to evaluate. While with \"static\" data we can trust a few metrics and for the most part be done with it (yay _roc auc_ !), in time-series the metrics tend to be more problem specific. The thing to remember is: your problems will be extra sneaky, and you will have to be extra careful. As a corollary to this, when you think you've predicted the stockmarket... you probably haven't. \n\nAnd now, let the fun begin! ", "_____no_output_____" ], [ "## Pandas for Timeseries ", "_____no_output_____" ], [ "In this BLU we will not learn any fancy prediction stuff, but rather how to wrangle timeseries data. ", "_____no_output_____" ], [ "Imports: ", "_____no_output_____" ] ], [ [ "import pandas as pd \nfrom matplotlib import pyplot as plt \nimport numpy as np \nimport utils\nimport matplotlib \n\nnp.random.seed(1000)\n%matplotlib inline ", "_____no_output_____" ] ], [ [ "#### Timestamps ", "_____no_output_____" ], [ "The timestamp is the most basic form of time series indexer that Pandas has. It does exactly what the name describes: marks the exact moment in which the data was collected. 
\n\nWhile kaggle datasets and other online challenges are normally clean \"hourly\" or \"daily\" datasets, TimeStamps are how most data is normally collected in the wild! \n\nAn event happens, and the time of the event is dumped into a database. \n\nOne example of this would be... bitcoin! Now, whatever you may think about bitcoin, ( _whether it is a ponzi scheme or a perfectly legitimate way to destroy the environment while helping organ traffickers and kidnappers launder money_ ), it is an excellent source of high-granularity data. Let's dive in! ", "_____no_output_____" ] ], [ [ "data = pd.read_csv('data/bitcoin.csv')", "_____no_output_____" ] ], [ [ "Let's take a look:", "_____no_output_____" ] ], [ [ "data.head()", "_____no_output_____" ], [ "data.tail()", "_____no_output_____" ] ], [ [ "Interesting. We have this `Timestamp` column, which we can kind of parse by looking at it. ", "_____no_output_____" ] ], [ [ "data.Timestamp.head()", "_____no_output_____" ] ], [ [ "We can kind of understand this. Looks like year, month, and day, then hours, minutes, then seconds ... ", "_____no_output_____" ], [ "Let's inspect a random row: ", "_____no_output_____" ] ], [ [ "print('One of the times in our dataset: %s' % data.Timestamp.iloc[3])\nprint('Type of the Series (data.Time): %s' % data.Timestamp.dtype)\nprint('Type of a particular time: %s' % type(data.Timestamp.iloc[3]))", "One of the times in our dataset: 2017-01-01 00:03:00\nType of the Series (data.Time): object\nType of a particular time: <class 'str'>\n" ] ], [ [ "Ah, so these are just strings. How boring. \n\nHowever, pandas can do something pretty amazing with these strings: ", "_____no_output_____" ] ], [ [ "time_as_a_timestamp = pd.to_datetime(data.Timestamp, infer_datetime_format=True)", "_____no_output_____" ] ], [ [ "What is it now? ", "_____no_output_____" ] ], [ [ "time_as_a_timestamp.head(2)", "_____no_output_____" ], [ "time_as_a_timestamp.min()", "_____no_output_____" ], [ "time_as_a_timestamp.max()", "_____no_output_____" ] ], [ [ "It is a `datetime64[ns]`, which I shall for the sake of simplicity just refer to as a TimeStamp. ", "_____no_output_____" ], [ "What can we do with this? Well, for one thing, extracting days, months, etc. is trivial:", "_____no_output_____" ] ], [ [ "time_as_a_timestamp.dt.day.head(2)", "_____no_output_____" ] ], [ [ "Notice the nomenclature. `Series.dt.<whatever I want>`. \n\nAnd we can want [just about anything we can think of!](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#time-date-components)", "_____no_output_____" ] ], [ [ "# I'll make a little dataset so that we can see some of the results side by side\nnew = pd.DataFrame()\nnew['date'] = time_as_a_timestamp\nnew['day'] = new['date'].dt.day\nnew['month'] = new['date'].dt.month\nnew['year'] = new['date'].dt.year\nnew['hour'] = new['date'].dt.hour\nnew['minute'] = new['date'].dt.minute\nnew['second'] = new['date'].dt.second\nnew['day of the week'] = new['date'].dt.weekday\nnew['day of the week name'] = new['date'].dt.weekday_name\nnew['quarter'] = new['date'].dt.quarter\nnew['is it a leap year?'] = new['date'].dt.is_leap_year\n\nnew.head(2)", "_____no_output_____" ] ], [ [ "Pandas... is amazing. ", "_____no_output_____" ], [ "### Different date formats ", "_____no_output_____" ], [ "Now you may be thinking _\"hang on, was that just because the strings were exactly in the way Pandas likes them?\"_\n\nIt's a fair question, and the answer is No. 
Pandas' [`to_datetime`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html) has an `infer_datetime_format` argument which is amazingly good, and can for the most part figure out what you need from it. \n\nLet's put it to the test: ", "_____no_output_____" ] ], [ [ "# little function to sanity check our dates\ndef sanity_check(dates):\n # go ahead Pandas, guess my date format! \n inferred_dates = pd.to_datetime(dates, infer_datetime_format=True)\n \n # Print out the results \n print('Our first day is 5, and was infered as %0.0f' % inferred_dates.iloc[0].day)\n print('Our first month is 4, and was infered as %0.0f' % inferred_dates.iloc[0].month)\n print('Our first year is 2007, and was infered as %0.0f' % inferred_dates.iloc[0].year)", "_____no_output_____" ] ], [ [ "Let's start with an easy one ", "_____no_output_____" ] ], [ [ "american_dates = pd.Series(['04/05/2007', # <-- April 5th, 2007\n '04/13/2006', \n '12/27/2014'])\n\nsanity_check(american_dates)", "Our first day is 5, and was infered as 5\nOur first month is 4, and was infered as 4\nOur first year is 2007, and was infered as 2007\n" ] ], [ [ "Can we separate them with hyphens? ", "_____no_output_____" ] ], [ [ "hyphen_separated_dates = pd.Series(['04-05-2007', # <-- April 5th, 2007\n '04-13-2006', \n '12-27-2014'])\n\nsanity_check(hyphen_separated_dates)", "Our first day is 5, and was infered as 5\nOur first month is 4, and was infered as 4\nOur first year is 2007, and was infered as 2007\n" ] ], [ [ "Let's write the year in a weird way", "_____no_output_____" ] ], [ [ "short_year = pd.Series(['04-05-07', # <-- April 5th, 2007\n '04-13-06', \n '12-27-14'])\n\nsanity_check(short_year)", "Our first day is 5, and was infered as 5\nOur first month is 4, and was infered as 4\nOur first year is 2007, and was infered as 2007\n" ] ], [ [ "Eh... english? ", "_____no_output_____" ] ], [ [ "dates_in_english = pd.Series(['April 5th, 2007', # <-- April 5th, 2007\n 'April 13th, 2006', \n 'December 27th, 2014'])\n\nsanity_check(dates_in_english)", "Our first day is 5, and was infered as 5\nOur first month is 4, and was infered as 4\nOur first year is 2007, and was infered as 2007\n" ] ], [ [ "Wow! So, european dates should be easy... right? ", "_____no_output_____" ] ], [ [ "european_dates = pd.Series(['05/04/2007', # <-- April 5th, 2007\n '13/04/2006', \n '27/12/2014'])\n\nsanity_check(european_dates)", "Our first day is 5, and was infered as 4\nOur first month is 4, and was infered as 5\nOur first year is 2007, and was infered as 2007\n" ] ], [ [ "Wait... what? It got the day and month mixed up! \n\nIt turns out Pandas can infer lots of things, but Europe isn't its strength. Even though the second and third line clearly indicate that the month is in the middle (the 13th can't be a month), it still gets confused. \n\nAnd here is where line 2 of [The Zen of Python](https://www.python.org/dev/peps/pep-0020/#id3) comes in:\n> Explicit is better than implicit ", "_____no_output_____" ] ], [ [ "inferred_dates = pd.to_datetime(european_dates, \n dayfirst=True) # <--- explicit! 
", "_____no_output_____" ], [ "print('Our first day is 5, and was infered as %0.0f' % inferred_dates.iloc[0].day)\nprint('Our first month is 4, and was infered as %0.0f' % inferred_dates.iloc[0].month)\nprint('Our first year is 2007, and was infered as %0.0f' % inferred_dates.iloc[0].year)", "Our first day is 5, and was infered as 5\nOur first month is 4, and was infered as 4\nOur first year is 2007, and was infered as 2007\n" ] ], [ [ "By being explicit, we can parse arbitrarily crazy dates:", "_____no_output_____" ] ], [ [ "dates_in_quackland = pd.Series(['05_quack_2007$04', # <-- April 5th, 2007, in quack_timesystem\n '13_quack_2006$04', \n '27_quack_2014$12'])\n\ninferred_dates = pd.to_datetime(dates_in_quackland, \n format='%d_quack_%Y$%m') # <--- %d is day, %m is month, %Y is 4 digit year\n\nprint('Our first day is 5, and was infered as %0.0f' % inferred_dates.iloc[0].day)\nprint('Our first month is 4, and was infered as %0.0f' % inferred_dates.iloc[0].month)\nprint('Our first year is 2007, and was infered as %0.0f' % inferred_dates.iloc[0].year)", "Our first day is 5, and was infered as 5\nOur first month is 4, and was infered as 4\nOur first year is 2007, and was infered as 2007\n" ] ], [ [ "Geeks among us will be thinking _\"That's all good and fine, but [real programmers](https://xkcd.com/378/) use time since epoch!\"_\n\nWell fear not, Pandas has got you covered. ", "_____no_output_____" ] ], [ [ "dev_time = inferred_dates.astype('int64') # our inferred dates were datetime objects, remember? \ndev_time", "_____no_output_____" ] ], [ [ "And to convert back? ", "_____no_output_____" ] ], [ [ "pd.to_datetime(dev_time)", "_____no_output_____" ] ], [ [ "## Selecting ", "_____no_output_____" ], [ "Now, back to our data. Let's try to ask some useful questions, such as \n> \" _How has the price of bitcoin varied over time?_ \"", "_____no_output_____" ] ], [ [ "data.head()", "_____no_output_____" ] ], [ [ "Let's start by making the timestamp the index. This is common good practice, for reasons we shall soon see. ", "_____no_output_____" ] ], [ [ "data.Timestamp = pd.to_datetime(data.Timestamp, infer_datetime_format=True)\n\ndata = data.set_index('Timestamp', # <---- Set the index to be our timestamp data \n drop=True) # <---- drop the original column", "_____no_output_____" ], [ "data.head()", "_____no_output_____" ] ], [ [ "When you have a datetime index, you should always, always, always sort it! \n\n( _Note: I **deliberately won't remind you to do this in the exercises**, and if you forget, you will get wrong answers!_ )", "_____no_output_____" ] ], [ [ "data = data.sort_index()", "_____no_output_____" ], [ "print('We have data between %s and %s' % (data.index.min(), data.index.max()))", "We have data between 2017-01-01 00:00:00 and 2018-03-27 00:00:00\n" ] ], [ [ "So we know that somewhere about [Jan 17th, bitcoin crashed pretty hard](https://www.cnbc.com/2018/01/17/bitcoin-tests-important-price-level-after-dramatic-plunge.html). Let's try to select that time. ", "_____no_output_____" ] ], [ [ "data.loc['Jan 17th 2018'].head() # <--- wait, you can do that???", "_____no_output_____" ] ], [ [ "Pretty cool huh? Now that we have a datetime index, we can do some crazy selecting, including just writing dates in that way. ", "_____no_output_____" ] ], [ [ "data.loc['Jan 17th 2018'].Close.plot(figsize=(16, 4)); ", "_____no_output_____" ] ], [ [ "We can also select less specific date ranges. How's January? 
", "_____no_output_____" ] ], [ [ "data.loc['Jan 2018'].Close.plot(figsize=(16, 4)); # <--- Pandas... is... awesome ", "_____no_output_____" ] ], [ [ "Let's see those days between the 15th and the 22nd. Let's select in a different way, for the sake of giggles. ", "_____no_output_____" ] ], [ [ "data.loc['01/15/2018':'01/22/2018'].Close.plot(figsize=(16, 4)); # <--- remember, American dates are less error prone in Pandas ", "_____no_output_____" ] ], [ [ "Interesting. What were things like during that \"drop\"? From our first chart, we saw it was on the 17th, between 3PM and 4PM \n\n_(btw there is [lots of stuff on Pandas for Timezones](https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-zone-handling), but we won't go into it here. Assume it's [GMT](https://i.imgur.com/84XItMo.gif) for argument's sake)_", "_____no_output_____" ] ], [ [ "data.loc['01/17/2018 1:30PM':'01/17/2018 4:30PM'].Close.plot(figsize=(16, 4)); # yep, minutes, seconds, up to nanoseconds actually! ", "_____no_output_____" ] ], [ [ "How did traders react? Let's get the volume ", "_____no_output_____" ] ], [ [ "data.loc['01/17/2018 1:30PM':'01/17/2018 4:30PM']['Volume_(Currency)'].plot(figsize=(16, 4));", "_____no_output_____" ] ], [ [ "And here, we reach the limits of our dear Timestamps. \n\nLet's think about this objectively. The price on Jan 17th, at 3h00m00s makes sense. But the volume \"in that moment\"? It's a bit nonsensical. Some datasets (this one probably included) will treat data as being \"since the last timestamp\", but real world data may not be so forgiving. \n\nCounting using timestamps is like asking _how many people went into McDonald's at an exact moment_. Probably none. It doesn't tell us much. \n\nWe think in terms of people \"per minute\", or \"per hour\". \n\nSo... Let's get our volume per minute! For this, we can use [resample](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwi3jfnKgNnaAhUGvBQKHRCwBd4QFggpMAA&url=https%3A%2F%2Fpandas.pydata.org%2Fpandas-docs%2Fstable%2Fgenerated%2Fpandas.DataFrame.resample.html&usg=AOvVaw1le9agxvLanaQp9zlNYG9Y)\n\nIn our case, we'll resample to every 5 minutes, and take the sum (because we want to sum the volume of those 5 minutes). ", "_____no_output_____" ] ], [ [ "data.loc['01/17/2018 1:30PM':'01/17/2018 4:30PM']['Volume_(Currency)'].resample('5 min').sum().plot(\n    figsize=(16, 4));", "_____no_output_____" ] ], [ [ "Wow! How much money (in dollars) was traded in the largest 10 minute peak?", "_____no_output_____" ] ], [ [ "money = data.loc['01/17/2018 1:30PM':'01/17/2018 4:30PM']['Volume_(Currency)'].resample('10 min').sum().max()\n\nprint('In 10 minutes, %0.1f dollars were traded in Bitcoin' % money)", "In 10 minutes, 17657576.5 dollars were traded in Bitcoin\n" ] ], [ [ "We just took the sum, but if we were looking at prices, would that make sense? Probably not, we would resample, and take the mean: ", "_____no_output_____" ] ], [ [ "data.resample('W').Close.mean().plot(figsize=(16, 4)); # the mean weekly closing prices, since 2015", "_____no_output_____" ] ], [ [ "Time... is... cool. ", "_____no_output_____" ], [ "What if we wanted to know the total amount of money that has been traded in bitcoin? 
\n\nOne way would simply be to sum it, but that doesn't give us any idea of how that total varied over time: ", "_____no_output_____" ] ], [ [ "data['Volume_(Currency)'].sum()", "_____no_output_____" ] ], [ [ "One cooler way to see this over time is to use the cumulative sum:", "_____no_output_____" ] ], [ [ "data['Volume_(Currency)'].cumsum().plot(figsize=(16, 4)); # the total volume traded since the start ", "_____no_output_____" ] ], [ [ "As you've learnt before, there are many cool [methods for groupby](https://pandas.pydata.org/pandas-docs/stable/reference/groupby.html). \n\nLet's say we want to know \"what was the record for total volume per day, over time?\" Naturally the record can only go up, and will have some \"steps\". ", "_____no_output_____" ] ], [ [ "data['Volume_(Currency)'].resample('d').sum().cummax().plot(figsize=(16, 4)); ", "_____no_output_____" ] ], [ [ "A more important question however may be \n> \" _what were the biggest variations in price?_ \"", "_____no_output_____" ], [ "For this, we might find it useful to calculate consecutive differences between periods, using [diff](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.diff.html)", "_____no_output_____" ] ], [ [ "data.Close.diff().head() # this can take a few seconds to run", "_____no_output_____" ] ], [ [ "The first entry is NaN, which makes sense because it's got no previous value to subtract. \n\nWhat do the diffs look like? ", "_____no_output_____" ] ], [ [ "data.Close.diff().plot(figsize=(16, 4));", "_____no_output_____" ] ], [ [ "Not particularly useful. How about on a particular day? ", "_____no_output_____" ] ], [ [ "data.loc['May 22nd 2017'].Close.diff(periods=1).plot(figsize=(16, 4));", "_____no_output_____" ] ], [ [ "#### Rolling window\n\nRolling windows do what their name suggests: aggregate the previous X periods using a certain function, for instance the mean. They are very useful to smooth choppy timeseries and be less reactive to noise. \n\nWe can choose to center the window (look back and forward), but in general we only want to take into account information from the past, so we should use `center=False` (which is the default)", "_____no_output_____" ], [ "Let's say it's December 18th, in the early morning, and we are at our terminal. ", "_____no_output_____" ], [ "##### Midnight and a bit... ", "_____no_output_____" ] ], [ [ "data.loc['Dec 18th 2017 00:08:00':'Dec 18th 2017 00:12:00', 'Weighted_Price'].plot(figsize=(16, 4));", "_____no_output_____" ] ], [ [ "![](https://i.imgflip.com/29iucd.jpg)", "_____no_output_____" ], [ "##### A few minutes pass... ", "_____no_output_____" ] ], [ [ "data.loc['Dec 18th 2017 00:12:00':'Dec 18th 2017 00:15:00', 'Weighted_Price'].plot(figsize=(16, 4));", "_____no_output_____" ] ], [ [ "![](https://i.redditmedia.com/VE5dgdjQ8FKZ47gdxJdQ07q36bsZVyhvAmllvLdtTnI.jpg?w=534&s=ce869cd0d8630cd420af7fa72b3c296d)", "_____no_output_____" ], [ "##### A few more minutes... ", "_____no_output_____" ] ], [ [ "data.loc['Dec 18th 2017 00:15:00':'Dec 18th 2017 00:18:00', 'Weighted_Price'].plot(figsize=(16, 4));", "_____no_output_____" ] ], [ [ "![](https://i.imgflip.com/29iucd.jpg)", "_____no_output_____" ], [ "I think you get the picture. What's going on is that we're being extremely reactive to noise, and missing the underlying process. What is in fact going on is that we are in a free-fall, but it might not be obvious unless we look at the slightly broader picture. 
\n\nIn other words, assuming there is an underlying process, we can assume the recent past should carry some weight. How much weight? A rolling [window](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rolling.html) of weight! ", "_____no_output_____" ], [ "#### The first hour of Dec 18th 2017, as seen by traders", "_____no_output_____" ] ], [ [ "data.loc['Dec 18th 2017 00:00:00':'Dec 18th 2017 01:00:00', 'Weighted_Price'].plot(figsize=(16, 4));", "_____no_output_____" ] ], [ [ "#### The first hour of Dec 18th 2017, as seen by a rolling window of 10 minutes", "_____no_output_____" ] ], [ [ "# this is just the raw data, so we can apply a rolling window on it \nfirst_hour = data.loc['Dec 18th 2017 00:00:00':'Dec 18th 2017 01:00:00', 'Weighted_Price']\n\n# notice the window size as a parameter of rolling, feel free to mess around with that parameter \n# and the center set to False. That's because we don't want to use data from the future! \n# Also notice how we use the mean. We can use many others. Try changing it! \nwindow_size = 10 \nfirst_hour_rolling_window = first_hour.rolling(window=window_size, center=False).mean()", "_____no_output_____" ] ], [ [ "What do these look like? ", "_____no_output_____" ] ], [ [ "# Let's plot these together \nfirst_hour_rolling_window.plot(figsize=(16, 8), color='b', label='rolling_window = %0.0f' % window_size);\nfirst_hour.plot(figsize=(16, 8), label='raw data', alpha=.7, ls='-', color='orange');\nplt.legend();", "_____no_output_____" ] ], [ [ "( _Note: In case you are curious about what would have happened if you had interpreted the yellow line as a potential recovery... It doesn't end well._ )", "_____no_output_____" ], [ "### Let's ask some more questions of this dataset! ", "_____no_output_____" ], [ "Back to the entire dataset. Let's answer the following question:\n> What was the weekly change in price, over time? ", "_____no_output_____" ] ], [ [ "# resample to weekly, get the mean Close price (per week), calculate the differences, and plot them \ndata.resample('W').Close.mean().diff(periods=1).plot(figsize=(16, 4));", "_____no_output_____" ] ], [ [ "We are observing something that makes sense. As the magnitude gets bigger, so does the volatility. It makes more sense for bitcoin to go down \\\\$100 in a week when it was \\\\$5000 than when it was at \\\\$20. \n\nWhat we actually want... is the percent change. ", "_____no_output_____" ] ], [ [ "# resample to weekly, take the mean close price (weekly), and calculate the percentage change \ndata.resample('W').Close.mean().pct_change().plot(figsize=(16, 4));", "_____no_output_____" ] ], [ [ "Interestingly enough, this chart seems to tell us that while bitcoin varies a lot, it is relatively consistent, with weekly variations of +/- 30% being as big as it gets. As the magnitude becomes larger it becomes more newsworthy \" _up one thousand dollars!!!_ \", but the underlying percent change doesn't seem so radically altered. ", "_____no_output_____" ] ], [ [ "data.resample('W').Close.mean().pct_change().plot(kind='bar', figsize=(16, 4));", "_____no_output_____" ] ], [ [ "Wow that is one ugly X axis. Unfortunately this is a [known issue](https://stackoverflow.com/questions/19143857/pandas-bar-plot-xtick-frequency?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa) with Pandas, and when it happens, it requires a bit of copy-pasting around to fix. 
", "_____no_output_____" ] ], [ [ "# our actual chart \nax = data.resample('W').Close.mean().pct_change().plot(kind='bar', figsize=(16, 4), rot=45);\n\n# this fixes the axis. \n# I wouldn't spend too much time in this horrible Matplotlib code, just know it exists. \n\nn = 10\nticks = ax.xaxis.get_ticklocs()\nticklabels = [l.get_text() for l in ax.xaxis.get_ticklabels()] # <---- blearghk!!\nax.xaxis.set_ticks(ticks[::n])\nax.xaxis.set_ticklabels(ticklabels[::n])\nplt.ylabel('Percent Change')\nplt.show()", "_____no_output_____" ] ], [ [ "Fun pattern, huh? Growth seems to bring more growth, and then crashes quite spectacularly, and the cycle re-starts. ", "_____no_output_____" ], [ "Great, we've covered a number of methods for dealing with Timeseries. \n\nNext up, we go and try to examine the full stockmarket. Only to find, to our dismay, that we need multi-indexing. [What is multi-indexing, I hear you ask?](https://www.youtube.com/watch?v=gC24hhNbXN0)\n#### Move on to Part 2 of 3 of BLU04 to find out! ", "_____no_output_____" ], [ "----", "_____no_output_____" ], [ "### **Summary of the methods we have learnt in this unit:**\n* `pd.to_datetime()` - this allows you to create datetime format and gives you access to several methods that pandas has specifically to handle dates \n* We need to have a datetime sorted index!!\n* Selection: `dataframe.loc['Jan 17th 2018']` to select the 17th of January, 2018 \n    * Remember pandas is really helpful here as you can even do something like `dataframe.loc['Jan 2018']` or even ranges `dataframe.loc['01/15/2018':'01/22/2018']`\n* How to resample, e.g: `.resample('5min').sum()`. Don't forget that after resampling we need an aggregation function! \n    * a few new aggregation functions that we've talked about: \n        * `cumsum()`\n        * `cummax()`\n* We have also learnt how to calculate the difference between periods, using `diff()`\n* Rolling windows: `.rolling()` that allows us to specify the rolling window size ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
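The timeseries notebook in the record above turns on a few core idioms: parsing with `pd.to_datetime`, keeping a sorted `DatetimeIndex`, resampling with an aggregation, and smoothing with a trailing rolling mean. A minimal self-contained sketch on synthetic data; the seed, frequencies and window size are illustrative assumptions, not values from the source:

```python
# Recap of the core idioms: sorted DatetimeIndex, resample + aggregation,
# trailing rolling mean, and percent change. Synthetic minute-level "prices".
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("2018-01-17", periods=240, freq="min")
prices = pd.Series(10_000 + rng.normal(0, 20, size=240).cumsum(), index=idx)

prices = prices.sort_index()                # always sort a DatetimeIndex
hourly_mean = prices.resample("1h").mean()  # downsample prices with the mean
smooth = prices.rolling(window=10).mean()   # 10-minute trailing average
pct = hourly_mean.pct_change()              # relative change between hours

print(hourly_mean.head(), smooth.iloc[9:12], pct.head(3), sep="\n")
```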
ece34b35c206c49e900a1a87f7c04edc28597822
569
ipynb
Jupyter Notebook
notebooks/book1/20/fig_20_38.ipynb
karm-patel/pyprobml
af8230a0bc0d01bb0f779582d87e5856d25e6211
[ "MIT" ]
null
null
null
notebooks/book1/20/fig_20_38.ipynb
karm-patel/pyprobml
af8230a0bc0d01bb0f779582d87e5856d25e6211
[ "MIT" ]
1
2022-03-27T04:59:50.000Z
2022-03-27T04:59:50.000Z
notebooks/book1/20/fig_20_38.ipynb
karm-patel/pyprobml
af8230a0bc0d01bb0f779582d87e5856d25e6211
[ "MIT" ]
2
2022-03-26T11:52:36.000Z
2022-03-27T05:17:48.000Z
35.5625
398
0.70826
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
ece35fd77180e8a296db38383ec393a94a97e7c1
26,431
ipynb
Jupyter Notebook
pandas/series_2 (intermediate).ipynb
xames3-old/cheat_sheet
0191cd4cbde2876eb3e217212e1806e986708997
[ "Apache-2.0" ]
1
2022-03-04T10:31:53.000Z
2022-03-04T10:31:53.000Z
pandas/series_2 (intermediate).ipynb
xames3-old/cheat_sheet
0191cd4cbde2876eb3e217212e1806e986708997
[ "Apache-2.0" ]
null
null
null
pandas/series_2 (intermediate).ipynb
xames3-old/cheat_sheet
0191cd4cbde2876eb3e217212e1806e986708997
[ "Apache-2.0" ]
null
null
null
23.641324
287
0.466649
[ [ [ "## 4. Some basic yet important attributes of Pandas Series\nThese are some of the most commonly used attributes while working with a Series object. There are a lot more, but it is not possible to cover all of them. Just in case you get confused between attributes and methods, remember that methods have **()** at the end but attributes don't.\n\nYou can find all Series attributes [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html). Note that a few of them are deprecated.", "_____no_output_____" ] ], [ [ "import pandas as pd\n\nandroid_version_names = ['astro', 'boxer', 'cupcake', 'doughnut', 'éclair', 'froyo',\n 'gingerbread', 'honeycomb', 'icecream sandwich', 'jellybean',\n 'kitkat', 'lollipop', 'marshmallow', 'nougat', 'oreo', 'pie']\n\nandroid_versions = [1.0, 1.1, 1.5, 1.6, 2.0, 2.2, 2.3, 3.0, 4.0, 4.1, 4.4, 5.0, 6.0, 7.0, 8.0, 9.0]\n\nandroid = pd.Series(data=android_version_names, index=android_versions, name='Android Versions') \n# You can assign a series object to a variable.\n\nandroid", "_____no_output_____" ] ], [ [ "### Name attribute\nReturns the name/header of the Series.", "_____no_output_____" ] ], [ [ "android.name", "_____no_output_____" ] ], [ [ "#### Note:\nYou can change the name/header by overwriting it.", "_____no_output_____" ] ], [ [ "android.name = 'Versions name for Android OS'\n\nandroid", "_____no_output_____" ] ], [ [ "### Index attribute\nReturns the list of indexes in the Series. The index is the name of the row.\n#### Note:\nIf the index is an integer, it will return a **RangeIndex** object, and if it is anything besides an integer, it will return the respective **Index** object.", "_____no_output_____" ] ], [ [ "android.index # Index not an integer", "_____no_output_____" ], [ "android = pd.Series(data=android_version_names)\n\nandroid", "_____no_output_____" ], [ "android.index", "_____no_output_____" ] ], [ [ "### Values attribute\nReturns the list of all values in the Series.", "_____no_output_____" ] ], [ [ "android.values", "_____no_output_____" ] ], [ [ "### Dtype attribute\nReturns the datatype of the pandas series.", "_____no_output_____" ] ], [ [ "android.dtype # O = object (string)", "_____no_output_____" ] ], [ [ "### Is_Unique attribute\nReturns a boolean value. If the series has all unique values, i.e. no duplicates, it returns **True**.", "_____no_output_____" ] ], [ [ "android.is_unique", "_____no_output_____" ] ], [ [ "### NDim attribute\nReturns the number of dimensions, in this case just 1 as a series is a 1D object.", "_____no_output_____" ] ], [ [ "android.ndim", "_____no_output_____" ] ], [ [ "### Shape attribute\nReturns a tuple of the number of rows and number of columns. This is like (m, n) from matrices, where **m** is the number of rows and **n** is the number of columns. \n\nAs you can see, it returns the tuple with just the number of rows and leaves the column section blank. Since a series has just a single column, Pandas assumes the column value to be 1 but doesn't render it and leaves it blank.", "_____no_output_____" ] ], [ [ "android.shape", "_____no_output_____" ] ], [ [ "### Size attribute\nReturns the total number of elements in the underlying data. Similar to an (m x n) operation.", "_____no_output_____" ] ], [ [ "android.size # 16 x 1 = 16", "_____no_output_____" ] ], [ [ "### How to access value/s from the Series using attributes\nUnlike other attributes, these take input in square brackets **[]**. 
These attributes treat a pandas series like a traditional python list.", "_____no_output_____" ] ], [ [ "android_version_names = ['astro', 'boxer', 'cupcake', 'doughnut', 'éclair', 'froyo',\n 'gingerbread', 'honeycomb', 'icecream sandwich', 'jellybean',\n 'kitkat', 'lollipop', 'marshmallow', 'nougat', 'oreo', 'pie']\n\nandroid_versions = [1.0, 1.1, 1.5, 1.6, 2.0, 2.2, 2.3, 3.0, 4.0, 4.1, 4.4, 5.0, 6.0, 7.0, 8.0, 9.0]\n\nandroid = pd.Series(data=android_version_names, index=android_versions, name='Android Versions') \n\nandroid", "_____no_output_____" ] ], [ [ "#### Using \"at\" attribute\nThis takes the index name as its input.", "_____no_output_____" ] ], [ [ "android.at[4.4] # Returns value for that particular index", "_____no_output_____" ] ], [ [ "#### Using \"iat\" attribute\nThis takes the index number as its input. ", "_____no_output_____" ] ], [ [ "android.iat[10] # Returns the value at 11th position (Index starts from 0)", "_____no_output_____" ] ], [ [ "#### Using \"loc\" attribute (locate using index name)\nThis takes the index name as its input, similar to the \"at\" attribute.", "_____no_output_____" ] ], [ [ "android.loc[2.2] # Returns value based on index name (Index name in this case is a float value)", "_____no_output_____" ] ], [ [ "#### Using \"iloc\" attribute\nThis takes the index number as its input, similar to the \"iat\" attribute.", "_____no_output_____" ] ], [ [ "android.iloc[5]", "_____no_output_____" ] ], [ [ "### Using \"ix\" attribute (locate using either index name or value)\n#### But this is deprecated now. It still works just fine.\nThis can take either the index name or index value as its input.", "_____no_output_____" ] ], [ [ "android.ix[1] # Returns value using index", "d:\\xa\\cheat_sheet\\venv\\lib\\site-packages\\ipykernel_launcher.py:1: FutureWarning: \n.ix is deprecated. Please use\n.loc for label based indexing or\n.iloc for positional indexing\n\nSee the documentation here:\nhttp://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#ix-indexer-is-deprecated\n \"\"\"Entry point for launching an IPython kernel.\n" ], [ "android.ix[1.6] # Returns value using index name (Index name in this case is a float value)", "d:\\xa\\cheat_sheet\\venv\\lib\\site-packages\\ipykernel_launcher.py:1: FutureWarning: \n.ix is deprecated. Please use\n.loc for label based indexing or\n.iloc for positional indexing\n\nSee the documentation here:\nhttp://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#ix-indexer-is-deprecated\n \"\"\"Entry point for launching an IPython kernel.\n" ] ], [ [ "## 5. Some basic yet important methods of Pandas Series\nThese methods are self-explanatory. 
Let's flip the data (Mathematical operations don't work well with string values)", "_____no_output_____" ] ], [ [ "android_version_names = ['astro', 'boxer', 'cupcake', 'doughnut', 'éclair', 'froyo',\n 'gingerbread', 'honeycomb', 'icecream sandwich', 'jellybean',\n 'kitkat', 'lollipop', 'marshmallow', 'nougat', 'oreo', 'pie']\n\nandroid_versions = [1.0, 1.1, 1.5, 1.6, 2.0, 2.2, 2.3, 3.0, 4.0, 4.1, 4.4, 5.0, 6.0, 7.0, 8.0, 9.0]\n\nandroid = pd.Series(data=android_versions, index=android_version_names, name='Android Versions') \n\nandroid", "_____no_output_____" ], [ "android.sum() # Adds all values", "_____no_output_____" ], [ "android.prod() # Multiplies all values", "_____no_output_____" ], [ "android.mean() # Calculates mean (average)", "_____no_output_____" ], [ "android.median() # Calculates median ", "_____no_output_____" ], [ "android.mode() # Finds the mode (most frequently occurring values).", "_____no_output_____" ] ], [ [ "## 6. Extracting Series from Pandas dataframe\nPandas dataframe is another main pandas datatype besides Series. Dataframes are 2-D objects, meaning multiple rows and multiple columns. If you need to display just a few values from the dataframe/series object, we can use the **.head()** or **.tail()** method.", "_____no_output_____" ] ], [ [ "pokemon = pd.read_csv(filepath_or_buffer='csv/pokemon.csv').head()\n\npokemon", "_____no_output_____" ] ], [ [ "The above object is still not a Series. Note that it has 2 columns. A Series should have only a single column. We can check this using the **type()** method.", "_____no_output_____" ] ], [ [ "type(pokemon)", "_____no_output_____" ] ], [ [ "We can extract a Series (single column) from a Dataframe using the **squeeze** parameter. You should specify the column name (header) that needs to be extracted using the **usecols** parameter.", "_____no_output_____" ] ], [ [ "pokemon = pd.read_csv(filepath_or_buffer='csv/pokemon.csv', usecols=['Pokemon'], squeeze=True)\n\npokemon", "_____no_output_____" ], [ "type(pokemon)", "_____no_output_____" ] ], [ [ "If you need to extract a few rows from the series, you can use the **.head()** or **.tail()** method.", "_____no_output_____" ] ], [ [ "pokemon.head() # Returns 1st 5 elements from the series/dataframe", "_____no_output_____" ], [ "pokemon.tail() # Returns last 5 elements from the series/dataframe", "_____no_output_____" ] ], [ [ "You can specify the number of rows you want pandas to display. By default, the head and tail methods display 5 values.", "_____no_output_____" ] ], [ [ "pokemon.head(3) # Returns 1st 3 elements from the series/dataframe", "_____no_output_____" ], [ "pokemon.tail(3) # Returns last 3 elements from the series/dataframe", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
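The `.ix` indexer demonstrated (and flagged as deprecated) in the record above has since been removed from pandas; its two behaviours map onto `.loc` and `.iloc`. A minimal sketch with a toy series, assuming pandas >= 1.0; the values and labels are illustrative:

```python
# Modern replacements for the deprecated .ix indexer: .loc for label-based
# lookup, .iloc for position-based lookup.
import pandas as pd

s = pd.Series(["cupcake", "doughnut", "froyo"], index=[1.5, 1.6, 2.2])

print(s.loc[1.6])   # label-based lookup (what s.ix[1.6] used to do)
print(s.iloc[1])    # position-based lookup (what s.ix[1] often fell back to)
```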
ece362f614940aaca36b98e55d676cf96acd092c
83,081
ipynb
Jupyter Notebook
examples/sudoku_9x9/paper_results/graphs/Generate Interpretability Results.ipynb
DanCunnington/FFNSL
fb81da074e95210b75869265d10b645ffde88549
[ "MIT" ]
null
null
null
examples/sudoku_9x9/paper_results/graphs/Generate Interpretability Results.ipynb
DanCunnington/FFNSL
fb81da074e95210b75869265d10b645ffde88549
[ "MIT" ]
null
null
null
examples/sudoku_9x9/paper_results/graphs/Generate Interpretability Results.ipynb
DanCunnington/FFNSL
fb81da074e95210b75869265d10b645ffde88549
[ "MIT" ]
null
null
null
226.997268
34,488
0.89243
[ [ [ "import json\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.gridspec as gridspec\nimport matplotlib as mpl", "_____no_output_____" ], [ "non_perturbed_dataset = 'standard'\ndatasets = [\n 'rotated'\n] \nnoise_pcts = [10,20,30,40,50,60,70,80,90,95,96,97,98,99,100]\nnoise_pcts = [10,20,30,40,50,60,70,80,90,100]\n\nFONT_SIZE=14\nTICK_FONT_SIZE=14\nmpl.rcParams['xtick.labelsize'] = TICK_FONT_SIZE\nmpl.rcParams['ytick.labelsize'] = TICK_FONT_SIZE", "_____no_output_____" ], [ "def get_baseline_results(method, dataset, data_size='small'):\n # need to build array of results for noise pcts\n num_preds = []\n num_preds_stds = []\n \n # get standard\n np_0 = json.loads(open('../'+method+'/'+data_size+'/standard.json').read())\n num_preds.append(np_0['noise_pct_0']['interpretability']['num_predicates']['mean'])\n num_preds_stds.append(np_0['noise_pct_0']['interpretability']['num_predicates']['std_err'])\n \n # other noise pcts\n np_res = json.loads(open('../'+method+'/'+data_size+'/'+dataset+'.json').read())\n for n in noise_pcts:\n num_preds.append(np_res['noise_pct_'+str(n)]['interpretability']['num_predicates']['mean'])\n num_preds_stds.append(np_res['noise_pct_'+str(n)]['interpretability']['num_predicates']['std_err'])\n return num_preds, num_preds_stds", "_____no_output_____" ], [ "def get_high_baseline_results(method, dataset, data_size='small'):\n # need to build array of results for noise pcts\n num_preds = []\n num_preds_stds = []\n \n # other noise pcts\n np_res = json.loads(open('../'+method+'/'+data_size+'/'+dataset+'.json').read())\n for n in noise_pcts:\n if n > 70:\n num_preds.append(np_res['noise_pct_'+str(n)]['interpretability']['num_predicates']['mean'])\n num_preds_stds.append(np_res['noise_pct_'+str(n)]['interpretability']['num_predicates']['std_err'])\n return num_preds, num_preds_stds", "_____no_output_____" ], [ "def get_nsl_results(net_type, dataset): \n # need to build array of results for noise pcts\n num_preds = []\n num_preds_stds = []\n \n # get standard\n np_0 = json.loads(open('../nsl/structured_test_data/'+net_type+'/standard_interpretability.json').read())\n num_preds.append(np_0['noise_pct_0']['interpretability']['num_predicates']['mean'])\n num_preds_stds.append(np_0['noise_pct_0']['interpretability']['num_predicates']['std_err'])\n \n # other noise pcts\n np_res = json.loads(open('../nsl/structured_test_data/'+net_type+'/'+dataset+'_interpretability.json').read()) \n for n in noise_pcts:\n num_preds.append(np_res['noise_pct_'+str(n)]['interpretability']['num_predicates']['mean'])\n num_preds_stds.append(np_res['noise_pct_'+str(n)]['interpretability']['num_predicates']['std_err'])\n return num_preds, num_preds_stds", "_____no_output_____" ], [ "def get_high_nsl_results(net_type, dataset): \n # need to build array of results for noise pcts\n num_preds = []\n num_preds_stds = []\n \n # other noise pcts\n np_res = json.loads(open('../nsl/structured_test_data/'+net_type+'/'+dataset+'.json').read()) \n for n in noise_pcts:\n if n > 70:\n num_preds.append(np_res['noise_pct_'+str(n)]['interpretability']['num_predicates']['mean'])\n num_preds_stds.append(np_res['noise_pct_'+str(n)]['interpretability']['num_predicates']['std_err'])\n return num_preds, num_preds_stds", "_____no_output_____" ], [ "def get_pct_symbolic_perturbs(net_type, deck):\n def format_pct(x):\n return math.floor(x*100)\n pcts = []\n # get standard\n std_perturbs = json.loads(open('../mislabelled_example_analysis/'+net_type+'/standard.json').read())\n 
pcts.append(format_pct(std_perturbs['noise_pct_0']['pct_incorrect_examples']))\n pct_symbolic_perturbs = json.loads(open('../mislabelled_example_analysis/'+net_type+'/'+deck+'.json').read())\n for n in noise_pcts:\n pcts.append(format_pct(pct_symbolic_perturbs['noise_pct_'+str(n)]['pct_incorrect_examples']))\n return pcts", "_____no_output_____" ] ], [ [ "# Interpretability", "_____no_output_____" ] ], [ [ "fig2 = plt.figure(constrained_layout=True, figsize=(16,10))\nspec2 = gridspec.GridSpec(ncols=3, nrows=2, figure=fig2)\nf2_ax1 = fig2.add_subplot(spec2[0, 0])\n# f2_ax2 = fig2.add_subplot(spec2[0, 1])\n\naxes = [f2_ax1]\n\nnps_x = [0]+noise_pcts\nfor i in range(1):\n # NSL\n nsl, nsl_err = get_nsl_results('softmax', datasets[i])\n axes[i].plot(nps_x, nsl, label = \"NSL Softmax 320 examples\", color=\"b\", linestyle='-.')\n axes[i].errorbar(nps_x, nsl, yerr=nsl_err, color=\"b\", capsize=5,linestyle='-.')\n \n # EDL-GEN\n nsl, nsl_err = get_nsl_results('edl_gen', datasets[i])\n axes[i].plot(nps_x, nsl, label = \"NSL EDL-GEN 320 examples\", color=\"k\", linestyle='-.')\n axes[i].errorbar(nps_x, nsl, yerr=nsl_err, color=\"k\", capsize=5,linestyle='-.')\n \n # Random Forest Small\n rf, rf_err = get_baseline_results('rf', datasets[i])\n axes[i].plot(nps_x, rf, label = \"Baseline RF 320 examples\", color=\"r\", linestyle=':')\n axes[i].errorbar(nps_x, rf, yerr=rf_err, color=\"r\", capsize=5, linestyle=':')\n \n # Random Forest Large\n rf, rf_err = get_baseline_results('rf', datasets[i], data_size='large')\n axes[i].plot(nps_x, rf, label = \"Baseline RF 32,000 examples\", color=\"darkorange\", linestyle=':')\n axes[i].errorbar(nps_x, rf, yerr=rf_err, color=\"darkorange\", capsize=5, linestyle=':')\n \n rf_acc, rf_err = get_baseline_results('rf_with_knowledge', datasets[i], data_size='small')\n axes[i].plot(nps_x, rf_acc, label = \"Baseline RF (with knowledge) 320 examples\", color=\"tab:brown\", linestyle=':')\n axes[i].errorbar(nps_x, rf_acc, yerr=rf_err, color=\"tab:brown\", capsize=5, linestyle=':')\n \n # CNN-LSTM Small\n fcn, fcn_err = get_baseline_results('cnn_lstm', datasets[i])\n axes[i].plot(nps_x, fcn, label = \"Baseline CNN-LSTM 320 examples\", color=\"g\", linestyle=':')\n axes[i].errorbar(nps_x, fcn, yerr=fcn_err, color=\"g\", capsize=5, linestyle=':')\n \n # CNN-LSTM Large\n fcn, fcn_err = get_baseline_results('cnn_lstm', datasets[i], data_size='large')\n axes[i].plot(nps_x, fcn, label = \"Baseline CNN-LSTM 32,000 examples\", color=\"darkcyan\", linestyle=':')\n axes[i].errorbar(nps_x, fcn, yerr=fcn_err, color=\"darkcyan\", capsize=5, linestyle=':')\n \n # Twin Axes to denote pct symbolic perturbations\n# pct_symbolic_perturbs_softmax = get_pct_symbolic_perturbs('softmax', datasets[i])\n# pct_symbolic_perturbs_edl_gen = get_pct_symbolic_perturbs('edl_gen', datasets[i])\n# ax2 = axes[i].twiny()\n \n axes[i].set_xticks(nps_x)\n #axes[i].set_yticks(np.arange(0.45,1.01,0.05))\n axes[i].set_xlabel('Training data points subject to distributional shift (%)', fontsize=FONT_SIZE)\n axes[i].set_ylabel('Total number of atoms', fontsize=FONT_SIZE)\n axes[i].set_yscale('log')\n# axes[i].set_title(datasets[i])\n axes[i].grid(True)\n \n \n# ax2.set_xticks(nps_x)\n# ax2.set_xticklabels(pct_symbolic_perturbs_softmax)\n# ax2.xaxis.set_ticks_position('bottom') # set the position of the second x-axis to bottom\n# ax2.xaxis.set_label_position('bottom') # set the position of the second x-axis to bottom\n# ax2.spines['bottom'].set_position(('outward', 40))\n# ax2.set_xlabel('Pct examples with incorrect 
labels post feature extraction: Softmax (%)')\n# ax2.set_xlim(axes[i].get_xlim())\n \n# ax3 = axes[i].twiny()\n# ax3.set_xticks(nps_x)\n# ax3.set_xticklabels(pct_symbolic_perturbs_edl_gen)\n# ax3.xaxis.set_ticks_position('bottom') # set the position of the second x-axis to bottom\n# ax3.xaxis.set_label_position('bottom') # set the position of the second x-axis to bottom\n# ax3.spines['bottom'].set_position(('outward', 80))\n# ax3.set_xlabel('Pct examples with incorrect labels post feature extraction: EDL-GEN (%)')\n# ax3.set_xlim(axes[i].get_xlim())\n\n \n# Set legend\n# f2_ax2.legend(*axes[0].get_legend_handles_labels(), loc='center')\n# f2_ax2.get_xaxis().set_visible(False)\n# f2_ax2.get_yaxis().set_visible(False)\n# f2_ax2.set_title('Legend')\nplt.savefig('sudoku_9x9_interpretability_results.pdf', format='pdf', bbox_inches='tight')\nplt.show()", "_____no_output_____" ] ], [ [ "# 80-100%", "_____no_output_____" ] ], [ [ "fig2 = plt.figure(constrained_layout=True, figsize=(16,10))\nspec2 = gridspec.GridSpec(ncols=3, nrows=2, figure=fig2)\nf2_ax1 = fig2.add_subplot(spec2[0, 0])\n# f2_ax2 = fig2.add_subplot(spec2[0, 1])\n\naxes = [f2_ax1]\n\nnps_x = [0]+noise_pcts\nnps_x = [80,90,95,96,97,98,99,100]\nfor i in range(1):\n # NSL\n nsl, nsl_err = get_high_nsl_results('softmax', datasets[i])\n axes[i].plot(nps_x, nsl, label = \"NSL Softmax 320 examples\", color=\"b\", linestyle='-.')\n axes[i].errorbar(nps_x, nsl, yerr=nsl_err, color=\"b\", capsize=5,linestyle='-.')\n \n # EDL-GEN\n nsl, nsl_err = get_high_nsl_results('edl_gen', datasets[i])\n axes[i].plot(nps_x, nsl, label = \"NSL EDL-GEN 320 examples\", color=\"k\", linestyle='-.')\n axes[i].errorbar(nps_x, nsl, yerr=nsl_err, color=\"k\", capsize=5,linestyle='-.')\n \n # Random Forest Small\n rf, rf_err = get_high_baseline_results('rf', datasets[i])\n axes[i].plot(nps_x, rf, label = \"Baseline RF 320 examples\", color=\"r\", linestyle=':')\n axes[i].errorbar(nps_x, rf, yerr=rf_err, color=\"r\", capsize=5, linestyle=':')\n \n # Random Forest Large\n rf, rf_err = get_high_baseline_results('rf', datasets[i], data_size='large')\n axes[i].plot(nps_x, rf, label = \"Baseline RF 32,000 examples\", color=\"darkorange\", linestyle=':')\n axes[i].errorbar(nps_x, rf, yerr=rf_err, color=\"darkorange\", capsize=5, linestyle=':')\n \n # CNN-LSTM Small\n fcn, fcn_err = get_high_baseline_results('cnn_lstm', datasets[i])\n axes[i].plot(nps_x, fcn, label = \"Baseline CNN-LSTM 320 examples\", color=\"g\", linestyle=':')\n axes[i].errorbar(nps_x, fcn, yerr=fcn_err, color=\"g\", capsize=5, linestyle=':')\n \n # CNN-LSTM Large\n fcn, fcn_err = get_high_baseline_results('cnn_lstm', datasets[i], data_size='large')\n axes[i].plot(nps_x, fcn, label = \"Baseline CNN-LSTM 32,000 examples\", color=\"darkcyan\", linestyle=':')\n axes[i].errorbar(nps_x, fcn, yerr=fcn_err, color=\"darkcyan\", capsize=5, linestyle=':')\n \n # Twin Axes to denote pct symbolic perturbations\n pct_symbolic_perturbs_softmax = get_pct_symbolic_perturbs('softmax', datasets[i])\n pct_symbolic_perturbs_edl_gen = get_pct_symbolic_perturbs('edl_gen', datasets[i])\n# ax2 = axes[i].twiny()\n \n axes[i].set_xticks(nps_x)\n #axes[i].set_yticks(np.arange(0.45,1.01,0.05))\n axes[i].set_xlabel('Training examples subject\\nto distributional shift (%)', fontsize=FONT_SIZE)\n axes[i].set_ylabel('Total number of atoms', fontsize=FONT_SIZE)\n axes[i].set_yscale('log')\n# axes[i].set_title(datasets[i])\n axes[i].grid(True)\n \n \nplt.show()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ece3674a9c8ee450660cda62e12aa672ad4f7825
8,970
ipynb
Jupyter Notebook
Research/DEG/DEG_bacteria.ipynb
loven-doo/EAGLE
c4e635b61c746041437073f4432907e0f144632e
[ "BSD-2-Clause" ]
null
null
null
Research/DEG/DEG_bacteria.ipynb
loven-doo/EAGLE
c4e635b61c746041437073f4432907e0f144632e
[ "BSD-2-Clause" ]
null
null
null
Research/DEG/DEG_bacteria.ipynb
loven-doo/EAGLE
c4e635b61c746041437073f4432907e0f144632e
[ "BSD-2-Clause" ]
1
2020-09-07T10:02:24.000Z
2020-09-07T10:02:24.000Z
43.756098
196
0.557971
[ [ [ "# imports\nimport os\nimport re\nimport json\n\nimport pandas as pd\nfrom Bio.Seq import Seq\n\nfrom eagle.lib.seqs import SeqsDict", "_____no_output_____" ], [ "# constants\nWORK_DIR = \"../data/DEG/bacteria\"\n\nGTF_COLS = [\"seqid\", \"source\", \"type\", \"start\", \"end\", \"score\", \"strand\", \"frame\", \"attribute\"]", "_____no_output_____" ], [ "# lib\ndef get_gtfs(chr_dict, deg_annot_path, deg_seqs_fasta, deg_essential):\n annot_df = pd.read_csv(deg_annot_path, sep=\"\\t\")\n seqs_dict = SeqsDict.load_from_file(deg_seqs_fasta, low_memory=True)\n\n gtfs_df = pd.DataFrame(list(annot_df.apply(prepare_deg_annot_line, axis=1, args=(chr_dict, seqs_dict, deg_essential))))\n summary_list = list()\n for deg_org in gtfs_df.groupby(\"org_DEG_ID\"):\n summary_list.append(deg_org[1].iloc[0][[\"org_DEG_ID\", \"chr_id\", \"org_name\"]].to_dict())\n gtf_df = deg_org[1][pd.notna(deg_org[1][\"seqid\"])][GTF_COLS]\n \n if deg_essential:\n gtf_path = os.path.join(WORK_DIR, deg_org[0].replace(\"DNEG\", \"DEG\")+\"_essential.gtf\")\n summary_list[-1].update({\"essential_genes\": gtf_df.shape[0], \"essential_genes_gtf\": os.path.basename(gtf_path)})\n else:\n gtf_path = os.path.join(WORK_DIR, deg_org[0].replace(\"DNEG\", \"DEG\")+\"_nonessential.gtf\")\n summary_list[-1].update({\"nonessential_genes\": gtf_df.shape[0], \"nonessential_genes_gtf\": os.path.basename(gtf_path)})\n if not gtf_df.empty:\n gtf_df.to_csv(gtf_path, sep=\"\\t\", index=False, quotechar=\"'\")\n gtf_path = None\n \n print(summary_list[-1])\n return pd.DataFrame(summary_list)\n\n\ndef prepare_deg_annot_line(row, chr_dict, deg_seqs_dict, deg_essential):\n result_dict = {\n \"org_DEG_ID\": row[\"#DEG_ORG\"],\n \"chr_id\": row[\"#Refseq\"],\n \"org_name\": row[\"#Organism\"],\n # gtf fields\n \"seqid\": None,\n \"source\": \"DEG\",\n \"type\": \"CDS\",\n \"start\": int(),\n \"end\": int(),\n \"score\": \"-\",\n \"strand\": None,\n \"frame\": \".\",\n \"attribute\": json.dumps({\"gene_name\": row[\"#Gene_Name\"]}),\n }\n if row[\"#Refseq\"] not in chr_dict:\n return result_dict \n ori = 1\n match = search_in_chr(deg_seqs_dict[row[\"#DEG_AC\"]], chr_dict[row[\"#Refseq\"]])\n if match is None:\n if deg_essential:\n match = search_in_chr(str(Seq(deg_seqs_dict[row[\"#DEG_AC\"]]).reverse_complement()), chr_dict[row[\"#Refseq\"]])\n else:\n match = search_in_chr(deg_seqs_dict[row[\"#DEG_AC\"]][::-1], chr_dict[row[\"#Refseq\"]])\n ori = -1\n if match is not None:\n result_dict[\"seqid\"] = match[0]\n result_dict[\"start\"] = match[1] + 1\n result_dict[\"end\"] = match[2]\n if ori > 0:\n result_dict[\"strand\"] = \"+\"\n else:\n result_dict[\"strand\"] = \"-\"\n else:\n print(\"WARNING: no match found for '%s'\" % row[\"#DEG_AC\"])\n return result_dict\n\n\ndef search_in_chr(seq, chr_seq_dict):\n for seq_name in chr_seq_dict:\n match = re.search(seq.lower(), chr_seq_dict[seq_name].lower())\n if match is not None:\n return seq_name.split()[0], match.start(), match.end()\n\n \ndef detect_gtf_intersect(row, inter_gtf_df):\n max_inter = 0.0\n query = 'seqid == \"%s\" & end >= %s & start <= %s & strand == \"%s\"' % (row[\"seqid\"], row[\"start\"], row[\"end\"], row[\"strand\"])\n for i, row_ in inter_gtf_df.query(query).iterrows():\n cur_inter = float(min(row[\"end\"], row_[\"end\"]) - max(row[\"start\"], row_[\"start\"]) + 1) / float(max(row[\"end\"], row_[\"end\"]) - min(row[\"start\"], row_[\"start\"]) + 1)\n if cur_inter > max_inter:\n max_inter = cur_inter\n return max_inter", "_____no_output_____" ], [ "# main\nessential_nucl_fasta = 
os.path.join(WORK_DIR, \"deg_essential/degseq-p.dat\")\nessential_annot = os.path.join(WORK_DIR, \"deg_essential/degannotation-p.dat\")\n\nnonessential_nucl_fasta = os.path.join(WORK_DIR, \"deg_nonessential/degseq-np.dat\")\nnonessential_annot = os.path.join(WORK_DIR, \"deg_nonessential/degannotation-np.dat\")\n\nchr_id_path = os.path.join(WORK_DIR, \"chr_id.txt\")\nsummary_path = os.path.join(WORK_DIR, \"summary_table.txt\")\n\nchr_dict = dict()\nchr_id_df = pd.read_csv(chr_id_path, sep=\"\\t\")\nfor i, chr_id in enumerate(chr_id_df[\"chr_id\"]):\n chr_dict[chr_id] = SeqsDict.load_from_file(os.path.join(WORK_DIR, chr_id_df.iloc[i][\"fna_path\"]), low_memory=True)\n\nessential_summary_df = get_gtfs(chr_dict=chr_dict, deg_annot_path=essential_annot, deg_seqs_fasta=essential_nucl_fasta, deg_essential=True)\nprint(\"INFO: got essential summary\")\nnonessential_summary_df = get_gtfs(chr_dict=chr_dict, deg_annot_path=nonessential_annot, deg_seqs_fasta=nonessential_nucl_fasta, deg_essential=False)\nnonessential_summary_df[\"org_DEG_ID\"] = nonessential_summary_df[\"org_DEG_ID\"].apply(lambda org_deg_id: org_deg_id.replace(\"DNEG\", \"DEG\"))\n\nsummary_df = essential_summary_df.merge(nonessential_summary_df[[\"org_DEG_ID\", \"nonessential_genes\", \"nonessential_genes_gtf\"]], on=\"org_DEG_ID\")\nsummary_df_columns = [\"org_DEG_ID\", \"chr_id\", \"org_name\", \"essential_genes\", \"essential_genes_gtf\", \"nonessential_genes\", \"nonessential_genes_gtf\"]\nsummary_df = summary_df[summary_df_columns].merge(chr_id_df, how='left', on=\"chr_id\")\nsummary_df[summary_df[\"essential_genes\"] > 0].to_csv(os.path.join(WORK_DIR, \"summary_table.txt\"), sep=\"\\t\", index=False)\n\n!rm ./.*.dat", "_____no_output_____" ], [ "# nonessential gtf from ncbi feature_table.txt\nfeature_table_path = os.path.join(WORK_DIR, \"NC_000913_feature_table.txt\")\ndeg_ess_gtf_path = os.path.join(WORK_DIR, \"DEG1018_essential.gtf\")\nout_gtf_path = os.path.join(WORK_DIR, \"DEG1018_nonessential_ncbi.gtf\")\n\ndeg_ess_gtf_df = pd.read_csv(deg_ess_gtf_path, sep=\"\\t\")\nft_df = pd.read_csv(feature_table_path, sep=\"\\t\")\nncbi_gtf_df = ft_df[ft_df[\"# feature\"] == \"gene\"][ft_df[\"class\"] == \"protein_coding\"][[\"genomic_accession\", \"start\", \"end\", \"strand\", \"symbol\"]].reset_index(drop=True)\nncbi_gtf_df[\"attribute\"] = ncbi_gtf_df[\"symbol\"].apply(lambda gn: json.dumps({\"gene_name\": gn}))\ndel ncbi_gtf_df[\"symbol\"]\nncbi_gtf_df.rename(index=str, columns={\"genomic_accession\": \"seqid\"}, inplace=True)\nncbi_gtf_df[\"source\"] = \"NCBI\"\nncbi_gtf_df[\"type\"] = \"CDS\"\nncbi_gtf_df[\"score\"] = \"-\"\nncbi_gtf_df[\"frame\"] = \".\"\nncbi_gtf_df = ncbi_gtf_df[GTF_COLS]\ndeg_ess_intersect = ncbi_gtf_df.apply(detect_gtf_intersect, axis=1, args=(deg_ess_gtf_df,))\nncbi_gtf_df[\"is_essential\"] = deg_ess_intersect[deg_ess_intersect < 0.5]\nncbi_gtf_df.dropna()[GTF_COLS].to_csv(out_gtf_path, sep=\"\\t\", index=False)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
ece3677d9128fd147690fcb08a72f199a55cb453
3,344
ipynb
Jupyter Notebook
tasks/time-related-features/Deployment.ipynb
platiagro/tasks
a6103cb101eeed26381cdb170a11d0e1dc53d3ad
[ "MIT", "Apache-2.0", "BSD-3-Clause" ]
2
2021-02-16T12:39:57.000Z
2021-07-21T11:36:39.000Z
tasks/time-related-features/Deployment.ipynb
platiagro/tasks
a6103cb101eeed26381cdb170a11d0e1dc53d3ad
[ "MIT", "Apache-2.0", "BSD-3-Clause" ]
20
2020-10-26T18:05:27.000Z
2021-11-30T19:05:22.000Z
tasks/time-related-features/Deployment.ipynb
platiagro/tasks
a6103cb101eeed26381cdb170a11d0e1dc53d3ad
[ "MIT", "Apache-2.0", "BSD-3-Clause" ]
7
2020-10-13T18:12:22.000Z
2021-08-13T19:16:21.000Z
29.078261
149
0.531998
[ [ [ "# Nova Tarefa - Implantaรงรฃo\n\nPreencha aqui com detalhes sobre a tarefa.<br>\n### **Em caso de dรบvidas, consulte os [tutoriais da PlatIAgro](https://platiagro.github.io/tutorials/).**", "_____no_output_____" ], [ "## Declaraรงรฃo de Classe para Prediรงรตes em Tempo Real\n\nA tarefa de implantaรงรฃo cria um serviรงo REST para prediรงรตes em tempo-real.<br>\nPara isso vocรช deve criar uma classe `Model` que implementa o mรฉtodo `predict`.", "_____no_output_____" ] ], [ [ "%%writefile Model.py\nimport joblib\n\n\nclass Model:\n def __init__(self):\n model = joblib.load(\"/tmp/data/model.joblib\")\n\n def predict(self, X, feature_names, meta=None):\n # adicione seu cรณdigo aqui...\n return", "_____no_output_____" ] ], [ [ "## Teste do serviรงo REST\n\nCrie um arquivo `contract.json` com os seguintes atributos:\n\n- `features` : A lista de features em uma requisiรงรฃo.\n- `targets` : A lista de valores retornados pelo mรฉtodo `predict`.\n\nCada `feature` pode conter as seguintes informaรงรตes:\n\n- `name` : nome da feature\n- `ftype` : tipo da feature : **continuous** ou **categorical**\n- `dtype` : tipo de dado : **FLOAT** ou **INT** : *obrigatรณrio para ftype continuous*\n- `range` : intervalo de valores numรฉricos : *obrigatรณrio para ftype continuous*\n- `values` : lista de valores categรณricos : *obrigatรณrio para ftype categorical*\n\nEm seguida, utilize a funรงรฃo `test_deployment` do [SDK da PlatIAgro](https://platiagro.github.io/sdk/) para simular prediรงรฃo em tempo-real.<br>", "_____no_output_____" ] ], [ [ "%%writefile contract.json\n{\n \"features\": [\n {\n \"name\": \"some_feature_name\",\n \"dtype\": \"FLOAT\",\n \"ftype\": \"continuous\",\n \"range\": [0.0, 100.0]\n },\n {\n \"name\": \"another_feature_name\",\n \"ftype\": \"categorical\",\n \"values\": [\"category1\", \"category2\"]\n }\n ],\n \"targets\": []\n}", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]