e75be6811514f50c4a2dc69f9b4729ffe9cec786 | 282,158 | ipynb | Jupyter Notebook | module2-sampling-confidence-intervals-and-hypothesis-testing/LS_DS_132_Sampling_Confidence_Intervals_and_Hypothesis_Testing.ipynb | Bhavani-Rajan/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments | af540610696e55e8c84e2a7f84b3f58401400924 | [
"MIT"
] | 101.605329 | 49,330 | 0.721167 | [
[
[
"<a href=\"https://colab.research.google.com/github/Bhavani-Rajan/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments/blob/master/module2-sampling-confidence-intervals-and-hypothesis-testing/LS_DS_132_Sampling_Confidence_Intervals_and_Hypothesis_Testing.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Lambda School Data Science Module 132\n## Sampling, Confidence Intervals, and Hypothesis Testing",
"_____no_output_____"
],
[
"## Prepare - examine other available hypothesis tests\n\nIf you had to pick a single hypothesis test for your toolbox, the t-test would probably be the best choice - but the good news is you don't have to pick just one! Here are some of the others to be aware of:",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom scipy.stats import chisquare # One-way chi square test\n\n# Chi square can take any crosstab/table and test the independence of rows/cols\n# The null hypothesis is that the rows/cols are independent -> low chi square\n# The alternative is that there is a dependence -> high chi square\n# Be aware! Chi square does *not* tell you direction/causation\n\nind_obs = np.array([[1, 1], [2, 2]]).T\nprint(ind_obs)\nprint(chisquare(ind_obs, axis=None))\n\ndep_obs = np.array([[16, 18, 16, 14, 12, 12], [32, 24, 16, 28, 20, 24]]).T\nprint(dep_obs)\nprint(chisquare(dep_obs, axis=None))",
"[[1 2]\n [1 2]]\nPower_divergenceResult(statistic=0.6666666666666666, pvalue=0.8810148425137847)\n[[16 32]\n [18 24]\n [16 16]\n [14 28]\n [12 20]\n [12 24]]\nPower_divergenceResult(statistic=23.31034482758621, pvalue=0.015975692534127565)\n"
],
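The one-way `chisquare` above treats the table as a flat list of counts; for an actual crosstab, `scipy.stats.chi2_contingency` runs the row/column independence test directly and also returns the expected counts. A minimal sketch (the table values here are made up for illustration):

```python
import numpy as np
from scipy.stats import chi2_contingency

# A 2x3 crosstab of hypothetical observed counts,
# e.g. party membership (rows) vs. vote on three bills (columns)
observed = np.array([[16, 18, 16],
                     [32, 24, 16]])

# chi2_contingency tests H0: rows and columns are independent
stat, p, dof, expected = chi2_contingency(observed)

print(f"chi2={stat:.3f}, p={p:.3f}, dof={dof}")
print(expected)  # the counts we'd expect under independence
```

Note that `dof` comes out as (rows-1) * (cols-1), and the expected table preserves the observed row/column totals.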
[
"# Distribution tests:\n# We often assume that something is normal, but it can be important to *check*\n\n# For example, later on with predictive modeling, a typical assumption is that\n# residuals (prediction errors) are normal - checking is a good diagnostic\n\nfrom scipy.stats import normaltest\n# Poisson models arrival times and is related to the binomial (coinflip)\nsample = np.random.poisson(5, 1000)\nprint(normaltest(sample)) # Pretty clearly not normal",
"NormaltestResult(statistic=40.82576734015371, pvalue=1.3639462696177559e-09)\n"
],
[
"# Kruskal-Wallis H-test - compare the median rank between 2+ groups\n# Can be applied to ranking decisions/outcomes/recommendations\n# The underlying math comes from chi-square distribution, and is best for n>5\nfrom scipy.stats import kruskal\n\nx1 = [1, 3, 5, 7, 9]\ny1 = [2, 4, 6, 8, 10]\nprint(kruskal(x1, y1)) # x1 is a little better, but not \"significantly\" so\n\nx2 = [1, 1, 1]\ny2 = [2, 2, 2]\nz = [2, 2] # Hey, a third group, and of different size!\nprint(kruskal(x2, y2, z)) # x clearly dominates",
"KruskalResult(statistic=0.2727272727272734, pvalue=0.6015081344405895)\nKruskalResult(statistic=7.0, pvalue=0.0301973834223185)\n"
]
],
[
[
"And there are many more! `scipy.stats` is fairly comprehensive, though even more tests are available if you delve into the extended world of statistics packages. As tests get increasingly obscure and specialized, knowing them by heart matters less - but being able to look them up and figure them out when they *are* relevant is still important.",
"_____no_output_____"
],
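As one example from that wider menu, the Mann-Whitney U test is the two-group nonparametric counterpart to the independent t-test, comparing ranks rather than raw values. A quick sketch with made-up data:

```python
from scipy.stats import mannwhitneyu

# Two small hypothetical groups of scores (no overlap at all)
group_a = [12, 15, 14, 10, 13, 18, 11]
group_b = [22, 25, 19, 24, 28, 21, 20]

# H0: the two groups come from the same distribution
stat, p = mannwhitneyu(group_a, group_b, alternative='two-sided')
print(stat, p)  # a small p suggests the groups differ
```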
[
"## Degrees of Freedom\n",
"_____no_output_____"
]
],
[
[
"mean = 20\nn = 7\nvalues = [5, 9, 10, 20, 15, 12, 69]\n\n# The first 6 values add up to 71\n# The mean has to be 20, so the sum of all 7 values must be 20 * 7 = 140\n# The last value *HAS* to be 140 - 71 = 69 -- it is not free to vary,\n# so only n - 1 = 6 values are free: 6 degrees of freedom",
"_____no_output_____"
]
],
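The hand calculation above can be checked in code: once the mean and the first n-1 values are fixed, the final value is forced by the constraint, which is exactly why only n-1 values are "free". A small sketch:

```python
mean = 20
n = 7
free_values = [5, 9, 10, 20, 15, 12]  # the n-1 freely chosen values

# The last value is determined by the constraint sum(values) == mean * n
last = mean * n - sum(free_values)
print(last)  # -> 69

values = free_values + [last]
assert sum(values) / n == mean  # the constraint holds
```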
[
[
"## T-test Assumptions\n\n<https://statistics.laerd.com/statistical-guides/independent-t-test-statistical-guide.php>\n\n- Independence of means\n\nAre the means of our voting data independent (i.e., do they not affect one another)?\n\nThe best way to increase the likelihood of our means being independent is to randomly sample (which we did not do).\n",
"_____no_output_____"
]
],
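Random sampling is easy to sketch with NumPy: drawing without replacement gives every member of the population an equal chance of selection, which is the usual defense for treating sampled observations as independent. The population, proportion, and seed below are made up for illustration:

```python
import numpy as np

# Hypothetical population of 10,000 voters coded 1 (yes) / 0 (no)
rng = np.random.default_rng(42)
population = rng.binomial(n=1, p=0.6, size=10_000)

# A simple random sample of 100 voters, drawn without replacement,
# so each member of the population has an equal chance of selection
sample = rng.choice(population, size=100, replace=False)
print(sample.mean())  # should land near the population proportion of ~0.6
```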
[
[
"from scipy.stats import ttest_ind\n\n?ttest_ind",
"_____no_output_____"
]
],
[
[
"- \"Homogeneity\" of Variance? \n\nIs the magnitude of the variance between the two groups roughly the same?\n\nI think we're OK on this one for the voting data, although it could be better; one party was larger than the other.\n\nIf we suspect this is a problem, we can use Welch's t-test",
"_____no_output_____"
]
],
[
[
"?ttest_ind",
"_____no_output_____"
]
],
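The `?ttest_ind` help shows an `equal_var` parameter: setting it to `False` runs Welch's t-test, which drops the equal-variance assumption. A sketch on simulated data with deliberately unequal variances (the means, spreads, and seed are made up):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
group_a = rng.normal(loc=50, scale=5, size=40)   # smaller spread
group_b = rng.normal(loc=55, scale=15, size=40)  # much larger spread

# Standard (pooled-variance) t-test assumes homogeneity of variance
t_pooled, p_pooled = ttest_ind(group_a, group_b)

# Welch's t-test does not assume equal variances
t_welch, p_welch = ttest_ind(group_a, group_b, equal_var=False)

print(p_pooled, p_welch)
```

With equal group sizes the two p-values are usually close; the gap widens when unequal variances come with unequal group sizes.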
[
[
"- \"Dependent Variable\" (sample means) are Distributed Normally\n\n<https://stats.stackexchange.com/questions/9573/t-test-for-non-normal-when-n50>\n\nLots of statistical tests depend on normal distributions. We can test for normality using SciPy, as shown above.\n\nThis assumption is often taken for granted, even when it holds only weakly. If you strongly suspect that your data are not normally distributed, you can transform them to look more normal and then run your test. This problem typically goes away for large sample sizes (yay Central Limit Theorem), which is why you often don't hear it brought up - people declare the assumption satisfied either way. \n\n",
"_____no_output_____"
],
[
"## Central Limit Theorem\n\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nsample_means = []\nfor x in range(0,3000):\n coinflips = np.random.binomial(n=1, p=.5, size=250)\n one_sample = coinflips\n sample_means.append(coinflips.mean())\n\nprint(len(sample_means))\nprint(sample_means)",
"3000\n[0.456, 0.396, 0.436, 0.524, 0.524, 0.516, 0.524, 0.556, 0.516, 0.44, 0.464, 0.456, 0.468, 0.44, 0.56, 0.504, 0.432, 0.496, 0.472, 0.564, ..., 0.508, 0.52, 0.508, 0.52, 0.516, 0.516]\n"
],
[
"df = pd.DataFrame({'single_sample': one_sample})\ndf.head()\n#df.size",
"_____no_output_____"
],
[
"df.single_sample.hist();",
"_____no_output_____"
],
[
"ax = plt.hist(sample_means, bins=150)\nplt.title('Distribution of 3000 sample means \\n (of 250 coinflips each)');",
"_____no_output_____"
]
],
[
[
"What does the Central Limit Theorem state? That no matter the initial distribution of the population, the distribution of sample means will approximate a normal distribution as $n \\rightarrow \\infty$.\n\nThis has very important implications for hypothesis testing, and it is precisely why the t-distribution begins to approximate the normal distribution as our sample size increases. ",
"_____no_output_____"
]
],
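The claim holds even for heavily skewed populations, not just coinflips. A sketch using an exponential population (scale, sizes, and seed are arbitrary choices), measuring skewness before and after averaging:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)

# A strongly right-skewed population (exponential has skewness 2)
population = rng.exponential(scale=2.0, size=100_000)

# Distribution of the means of 2000 samples, each of size 100
sample_means = np.array([rng.choice(population, size=100).mean()
                         for _ in range(2000)])

print(skew(population))     # near 2: heavily skewed
print(skew(sample_means))   # much closer to 0: approaching normal
print(sample_means.mean(), population.mean())  # centered at the population mean
```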
[
[
"sample_means_small = []\nsample_means_large = []\nfor x in range(3000):\n    coinflips_small = np.random.binomial(n=1, p=.5, size=20)\n    coinflips_large = np.random.binomial(n=1, p=.5, size=100)\n    sample_means_small.append(coinflips_small.mean())\n    sample_means_large.append(coinflips_large.mean())\n\nprint(len(sample_means_small))\nprint(sample_means_small)",
"3000\n[0.6, 0.45, 0.45, 0.5, 0.3, 0.5, 0.5, 0.35, 0.55, 0.6, 0.45, 0.45, 0.4, 0.65, 0.65, 0.2, 0.55, 0.35, 0.75, 0.55, 0.65, 0.5, 0.8, 0.65, ...
0.4, 0.5, 0.5, 0.45, 0.7, 0.5, 0.55, 0.55, 0.5, 0.25, 0.4, 0.7, 0.55, 0.5, 0.45, 0.6, 0.55, 0.6, 0.45, 0.5, 0.6, 0.25, 0.65, 0.3, 0.55, 0.4, 0.75, 0.45, 0.45, 0.55, 0.45, 0.45, 0.45, 0.55, 0.5, 0.35, 0.55, 0.4, 0.5, 0.3, 0.55, 0.65, 0.45, 0.35, 0.35, 0.6, 0.55, 0.3, 0.35, 0.8, 0.3, 0.4, 0.55, 0.45, 0.4, 0.35, 0.65, 0.65, 0.3, 0.4, 0.35, 0.5, 0.7, 0.35, 0.35, 0.8, 0.45, 0.5, 0.4, 0.6, 0.65, 0.5, 0.4, 0.55, 0.5, 0.4, 0.45, 0.3, 0.4, 0.4, 0.65, 0.6, 0.5, 0.7, 0.6, 0.45, 0.7, 0.45, 0.55, 0.65, 0.6, 0.5, 0.35, 0.3, 0.55, 0.6, 0.7, 0.5, 0.6, 0.4, 0.65, 0.5, 0.4, 0.55, 0.35, 0.65, 0.45, 0.4, 0.45, 0.65, 0.4, 0.5, 0.5, 0.6, 0.6, 0.45, 0.3, 0.5, 0.45, 0.5, 0.55, 0.3, 0.6, 0.25, 0.65, 0.4, 0.25, 0.5, 0.6, 0.45, 0.55, 0.6, 0.55, 0.65, 0.5, 0.45, 0.5, 0.6, 0.5, 0.3, 0.55, 0.6, 0.65, 0.45, 0.3, 0.55, 0.5, 0.5, 0.6, 0.35, 0.5, 0.5, 0.4, 0.4, 0.5, 0.6, 0.7, 0.65, 0.55, 0.55, 0.55, 0.35, 0.4, 0.45, 0.55, 0.65, 0.6, 0.5, 0.55, 0.35, 0.35, 0.55, 0.4, 0.4, 0.5, 0.5, 0.7, 0.55, 0.65, 0.5, 0.5, 0.4, 0.6, 0.5, 0.55, 0.6, 0.55, 0.6, 0.4, 0.4, 0.45, 0.55, 0.35, 0.5, 0.45, 0.35, 0.55, 0.45, 0.55, 0.4, 0.45, 0.4, 0.65, 0.4, 0.55, 0.5, 0.6, 0.55, 0.55, 0.4, 0.55, 0.35, 0.45, 0.4, 0.55, 0.6, 0.3, 0.55, 0.5, 0.35, 0.6, 0.55, 0.5, 0.4, 0.45, 0.45, 0.6, 0.6, 0.75, 0.65, 0.6, 0.45, 0.75, 0.65, 0.55, 0.4, 0.35, 0.4, 0.25, 0.55, 0.6, 0.6, 0.35, 0.45, 0.5, 0.55, 0.6, 0.55, 0.65, 0.4, 0.65, 0.5, 0.6, 0.25, 0.75, 0.55, 0.4, 0.6, 0.5, 0.55, 0.3, 0.4, 0.6, 0.15, 0.55, 0.4, 0.55, 0.5, 0.7, 0.5, 0.55, 0.35, 0.25, 0.4, 0.55, 0.45, 0.45, 0.45, 0.6, 0.45, 0.4, 0.35, 0.4, 0.4, 0.4, 0.55, 0.4, 0.55, 0.55, 0.5, 0.45, 0.55, 0.55, 0.45, 0.6, 0.75, 0.5, 0.4, 0.35, 0.45, 0.65, 0.55, 0.35, 0.5, 0.6, 0.45, 0.35, 0.4, 0.25, 0.7, 0.55, 0.7, 0.4, 0.4, 0.45, 0.5, 0.55, 0.25, 0.4, 0.55, 0.4, 0.25, 0.6, 0.5, 0.55, 0.6, 0.35, 0.4, 0.45, 0.3, 0.7, 0.65, 0.55, 0.4, 0.6, 0.4, 0.5, 0.5, 0.35, 0.45, 0.4, 0.4, 0.6, 0.45, 0.5, 0.4, 0.55, 0.7, 0.6, 0.6, 0.45, 0.65, 0.65, 0.4, 0.7, 0.5, 0.55, 0.4, 0.4, 0.6, 0.55, 0.55, 0.6, 0.45, 
0.6, 0.4, 0.55, 0.4, 0.5, 0.6, 0.6, 0.4, 0.55, 0.7, 0.5, 0.4, 0.35, 0.45, 0.45, 0.55, 0.4, 0.4, 0.75, 0.55, 0.6, 0.6, 0.6, 0.55, 0.4, 0.55, 0.3, 0.7, 0.5, 0.55, 0.4, 0.35, 0.6, 0.25, 0.55, 0.4, 0.5, 0.55, 0.65, 0.5, 0.55, 0.5, 0.45, 0.55, 0.7, 0.6, 0.4, 0.6, 0.65, 0.55, 0.5, 0.3, 0.5, 0.5, 0.55, 0.65, 0.45, 0.35, 0.6, 0.5, 0.4, 0.45, 0.7, 0.5, 0.55, 0.65, 0.7, 0.45, 0.7, 0.3, 0.5, 0.45, 0.35, 0.5, 0.5, 0.45, 0.7, 0.45, 0.4, 0.65, 0.6, 0.6, 0.5, 0.5, 0.4, 0.65, 0.5, 0.55, 0.6, 0.55, 0.4, 0.5, 0.55, 0.65, 0.45, 0.35, 0.25, 0.6, 0.6, 0.6, 0.45, 0.4, 0.6, 0.55, 0.5, 0.55, 0.65, 0.55, 0.4, 0.5, 0.3, 0.45, 0.6, 0.65, 0.6, 0.5, 0.45, 0.45, 0.6, 0.45, 0.6, 0.6, 0.65, 0.7, 0.65, 0.5, 0.55, 0.6, 0.3, 0.45, 0.5, 0.55, 0.7, 0.5, 0.55, 0.35, 0.45, 0.65, 0.6, 0.55, 0.3, 0.55, 0.4, 0.55, 0.4, 0.5, 0.6, 0.45, 0.5, 0.4, 0.55, 0.35, 0.45, 0.5, 0.6, 0.55, 0.45, 0.55, 0.45, 0.55, 0.6, 0.35, 0.45, 0.5, 0.7, 0.45, 0.55, 0.4, 0.65, 0.65, 0.7, 0.65, 0.55, 0.4, 0.65, 0.45, 0.6, 0.6, 0.5, 0.55, 0.45, 0.7, 0.7, 0.35, 0.55, 0.55, 0.5, 0.5, 0.5, 0.6, 0.6, 0.35, 0.55, 0.55, 0.5, 0.6, 0.35, 0.5, 0.6, 0.55, 0.4, 0.55, 0.55, 0.5, 0.35, 0.55, 0.55, 0.5, 0.3, 0.5, 0.45, 0.9, 0.6, 0.65, 0.6, 0.75, 0.6, 0.25, 0.55, 0.45, 0.4, 0.4, 0.35, 0.55, 0.9, 0.35, 0.25, 0.6, 0.4, 0.55, 0.55, 0.55, 0.45, 0.4, 0.4, 0.65, 0.5, 0.65, 0.5, 0.45, 0.6, 0.6, 0.2, 0.5, 0.65, 0.5, 0.65, 0.35, 0.35, 0.55, 0.3, 0.6, 0.3, 0.5, 0.6, 0.45, 0.6, 0.5, 0.45, 0.5, 0.45, 0.55, 0.45, 0.5, 0.5, 0.55, 0.6, 0.35, 0.35, 0.3, 0.45, 0.7, 0.55, 0.6, 0.55, 0.5, 0.55, 0.55, 0.45, 0.5, 0.5, 0.5, 0.4, 0.55, 0.65, 0.5, 0.65, 0.45, 0.4, 0.6, 0.55, 0.25, 0.5, 0.4, 0.5, 0.45, 0.5, 0.6, 0.6, 0.45, 0.6, 0.4, 0.6, 0.45, 0.55, 0.55, 0.5, 0.55, 0.5, 0.6, 0.55, 0.45, 0.5, 0.5, 0.5, 0.45, 0.55, 0.55, 0.5, 0.65, 0.4, 0.6, 0.4, 0.45, 0.5, 0.65, 0.7, 0.45, 0.55, 0.45, 0.5, 0.35, 0.5, 0.45, 0.4, 0.55, 0.65, 0.5, 0.35, 0.7, 0.45, 0.35, 0.55, 0.6, 0.6, 0.3, 0.65, 0.45, 0.5, 0.25, 0.6, 0.4, 0.35, 0.65, 0.5, 0.4, 0.5, 0.4, 0.55, 0.35, 0.55, 0.5, 0.4, 0.5, 0.45, 
0.35, 0.65, 0.35, 0.45, 0.45, 0.6, 0.55, 0.65, 0.5, 0.65, 0.5, 0.75, 0.45, 0.45, 0.5, 0.6, 0.4, 0.5, 0.4, 0.5, 0.65, 0.65, 0.45, 0.7, 0.7, 0.65, 0.45, 0.55, 0.4, 0.45, 0.5, 0.5, 0.55, 0.6, 0.55, 0.65, 0.45, 0.65, 0.65, 0.45, 0.5, 0.4, 0.4, 0.55, 0.45, 0.55, 0.35, 0.4, 0.5, 0.35, 0.5, 0.4, 0.4, 0.45, 0.55, 0.65, 0.6, 0.5, 0.5, 0.35, 0.3, 0.6, 0.3, 0.35, 0.4, 0.35, 0.5, 0.5, 0.6, 0.55, 0.6, 0.45, 0.5, 0.5, 0.65, 0.5, 0.25, 0.35, 0.35, 0.4, 0.45, 0.55, 0.55, 0.55, 0.5, 0.6, 0.45, 0.6, 0.65, 0.6, 0.6, 0.6, 0.6, 0.4, 0.55, 0.3, 0.6, 0.35, 0.6, 0.45, 0.45, 0.6, 0.5, 0.5, 0.55, 0.4, 0.4, 0.5, 0.6, 0.35, 0.65, 0.7, 0.6, 0.45, 0.4, 0.45, 0.6, 0.5, 0.4, 0.35, 0.4, 0.45, 0.65, 0.4, 0.55, 0.55, 0.45, 0.55, 0.35, 0.6, 0.5, 0.5, 0.45, 0.55, 0.5, 0.6, 0.45, 0.35, 0.6, 0.45, 0.55, 0.65, 0.65, 0.65, 0.45, 0.45, 0.6, 0.35, 0.55, 0.8, 0.6, 0.55, 0.4, 0.6, 0.5, 0.4, 0.6, 0.5, 0.7, 0.45, 0.5, 0.5, 0.8, 0.45, 0.45, 0.6, 0.5, 0.5, 0.4, 0.45, 0.35, 0.5, 0.4, 0.45, 0.5, 0.4, 0.6, 0.4, 0.5, 0.35, 0.65, 0.35, 0.25, 0.55, 0.5, 0.55, 0.5, 0.4, 0.45, 0.35, 0.45, 0.35, 0.5, 0.55, 0.6, 0.5, 0.3, 0.6, 0.5, 0.7, 0.6, 0.55, 0.25, 0.5, 0.55, 0.45, 0.5, 0.4, 0.6, 0.5, 0.45, 0.7, 0.6, 0.65, 0.55, 0.5, 0.45, 0.55, 0.5, 0.75, 0.55, 0.65, 0.45, 0.5, 0.7, 0.45, 0.3, 0.55, 0.5, 0.6, 0.5, 0.65, 0.45, 0.45, 0.45, 0.55, 0.55, 0.6, 0.65, 0.5, 0.5, 0.45, 0.3, 0.6, 0.7, 0.55, 0.5, 0.55, 0.6, 0.45, 0.45, 0.4, 0.55, 0.65, 0.4, 0.45, 0.4, 0.55, 0.65, 0.6, 0.7, 0.45, 0.6, 0.4, 0.65, 0.4, 0.6, 0.5, 0.55, 0.45, 0.7, 0.5, 0.6, 0.4, 0.5, 0.45, 0.6, 0.6, 0.55, 0.45, 0.7, 0.5, 0.45, 0.45, 0.6, 0.55, 0.45, 0.6, 0.35, 0.35, 0.65, 0.45, 0.5, 0.4, 0.5, 0.55, 0.65, 0.5, 0.45, 0.4, 0.5, 0.7, 0.4, 0.6, 0.5, 0.65, 0.55, 0.55, 0.35, 0.45, 0.6, 0.55, 0.45, 0.45, 0.45, 0.6, 0.45, 0.55, 0.4, 0.6, 0.6, 0.55, 0.45, 0.6, 0.3, 0.55, 0.45, 0.6, 0.5, 0.5, 0.45, 0.4, 0.6, 0.85, 0.45, 0.5, 0.65, 0.3, 0.4, 0.55, 0.55, 0.65, 0.5, 0.5, 0.6, 0.55, 0.35, 0.4, 0.55, 0.5, 0.65, 0.55, 0.45, 0.6, 0.55, 0.5, 0.4, 0.45, 0.55, 0.7, 0.35, 0.6, 0.5, 0.55, 
0.35, 0.4, 0.5, 0.5, 0.55, 0.6, 0.5, 0.8, 0.4, 0.5, 0.4, 0.5, 0.45, 0.35, 0.35, 0.7, 0.55, 0.55, 0.6, 0.6, 0.5, 0.35, 0.45, 0.35, 0.65, 0.35, 0.5, 0.45, 0.35, 0.55, 0.4, 0.4, 0.45, 0.55, 0.35, 0.6, 0.4, 0.5, 0.5, 0.45, 0.75, 0.4, 0.4, 0.4, 0.65, 0.65, 0.45, 0.5, 0.65, 0.4, 0.3, 0.6, 0.6, 0.5, 0.45, 0.35, 0.65, 0.7, 0.55, 0.4, 0.55, 0.45, 0.55, 0.45, 0.45, 0.6, 0.45, 0.4, 0.4, 0.6, 0.6, 0.35, 0.3, 0.55, 0.4, 0.35, 0.55, 0.3, 0.6, 0.5, 0.5, 0.65, 0.55, 0.5, 0.55, 0.65, 0.5, 0.6, 0.45, 0.45, 0.5, 0.45, 0.7, 0.55, 0.25, 0.4, 0.65, 0.55, 0.5, 0.45, 0.35, 0.5, 0.55, 0.45, 0.4, 0.6, 0.45, 0.45, 0.6, 0.35, 0.35, 0.6, 0.45, 0.4, 0.75, 0.55, 0.75, 0.85, 0.45, 0.6, 0.5, 0.4, 0.7, 0.65, 0.35, 0.4, 0.45, 0.55, 0.5, 0.5, 0.45, 0.75, 0.35, 0.35, 0.5, 0.45, 0.45, 0.55, 0.5, 0.6, 0.5, 0.7, 0.4, 0.35, 0.6, 0.4, 0.45, 0.4, 0.5, 0.5, 0.5, 0.55, 0.55, 0.5, 0.65, 0.65, 0.75, 0.65, 0.55, 0.4, 0.5, 0.65, 0.45, 0.5, 0.6, 0.5, 0.55, 0.45, 0.45, 0.45, 0.55, 0.4, 0.55, 0.4, 0.4, 0.6, 0.5, 0.35, 0.4, 0.5, 0.55, 0.6, 0.55, 0.45, 0.3, 0.45, 0.55, 0.7, 0.55, 0.45, 0.3, 0.3, 0.55, 0.75, 0.6, 0.6, 0.45, 0.55, 0.45, 0.45, 0.6, 0.5, 0.65, 0.4, 0.55, 0.55, 0.45, 0.4, 0.55, 0.45, 0.35, 0.45, 0.7, 0.5, 0.65, 0.65, 0.45, 0.6, 0.55, 0.7, 0.55, 0.35, 0.5, 0.6, 0.55, 0.35, 0.2, 0.5, 0.65, 0.35, 0.55, 0.25, 0.65, 0.3, 0.45, 0.65, 0.5, 0.45, 0.5, 0.6, 0.45, 0.7, 0.75, 0.4, 0.65, 0.5, 0.45, 0.6, 0.5, 0.55, 0.5, 0.5, 0.5, 0.75, 0.3, 0.65, 0.65, 0.45, 0.45, 0.7, 0.65, 0.5, 0.55, 0.55, 0.5, 0.4, 0.6, 0.55, 0.6, 0.6, 0.45, 0.55, 0.3, 0.5, 0.5, 0.75, 0.5, 0.5, 0.5, 0.25, 0.55, 0.5, 0.7, 0.3, 0.55, 0.55, 0.65, 0.75, 0.4, 0.3, 0.35, 0.65, 0.6, 0.5, 0.45, 0.55, 0.3, 0.75, 0.55, 0.45, 0.6, 0.5, 0.4, 0.4, 0.5, 0.35, 0.6, 0.6, 0.4, 0.3, 0.55, 0.6, 0.65, 0.6, 0.6, 0.55, 0.5, 0.35, 0.7, 0.45, 0.5, 0.35, 0.65, 0.6, 0.35, 0.45, 0.35, 0.45, 0.65, 0.5, 0.55, 0.6, 0.6, 0.5, 0.45, 0.35, 0.7, 0.5, 0.45, 0.5, 0.5, 0.5, 0.45, 0.6, 0.5, 0.5, 0.5, 0.55, 0.5, 0.6, 0.35, 0.45, 0.3, 0.6, 0.3, 0.5, 0.5, 0.55, 0.6, 0.6, 0.55, 0.45, 0.4, 
0.6, 0.5, 0.5, 0.45, 0.55, 0.55, 0.55, 0.45, 0.6, 0.5, 0.5, 0.2, 0.55, 0.5, 0.45, 0.4, 0.6, 0.65, 0.55, 0.4, 0.4, 0.55, 0.4, 0.8, 0.6, 0.5, 0.45, 0.5, 0.35, 0.4, 0.45, 0.45, 0.35, 0.55, 0.55, 0.75, 0.4, 0.5, 0.55, 0.65, 0.6, 0.45, 0.5, 0.45, 0.45, 0.35, 0.5, 0.3, 0.6, 0.35, 0.7, 0.45, 0.45, 0.4, 0.35, 0.45, 0.45, 0.65, 0.35, 0.55, 0.5, 0.55, 0.6, 0.55, 0.35, 0.35, 0.35, 0.55, 0.45, 0.5, 0.4, 0.5, 0.5, 0.5, 0.55, 0.4, 0.45, 0.5, 0.5, 0.35, 0.4, 0.4, 0.7, 0.5, 0.7, 0.55, 0.45, 0.4, 0.45, 0.65, 0.4, 0.75, 0.5, 0.45, 0.6, 0.45, 0.4, 0.5, 0.45, 0.25, 0.45, 0.35, 0.65, 0.45, 0.65, 0.4, 0.35, 0.6, 0.45, 0.4, 0.6, 0.35, 0.55, 0.45, 0.5, 0.4, 0.5, 0.4, 0.6, 0.55, 0.5, 0.55, 0.55, 0.7, 0.35, 0.55, 0.5, 0.6, 0.7, 0.6, 0.4, 0.5, 0.45, 0.65, 0.6, 0.5, 0.5, 0.4, 0.5, 0.5, 0.3, 0.6, 0.3, 0.45, 0.55, 0.25, 0.6, 0.55, 0.45, 0.65, 0.6, 0.4, 0.45, 0.3, 0.55, 0.4, 0.45, 0.6, 0.45, 0.55, 0.4, 0.55, 0.45, 0.65, 0.45, 0.7, 0.6, 0.4, 0.3, 0.45, 0.65, 0.55, 0.4, 0.65, 0.5, 0.55, 0.55, 0.5, 0.65, 0.55, 0.5, 0.6, 0.45, 0.55, 0.65, 0.35, 0.55, 0.4, 0.4, 0.35, 0.45, 0.6, 0.55, 0.6, 0.4, 0.4, 0.5, 0.45, 0.45, 0.65, 0.5, 0.55, 0.4, 0.55, 0.7, 0.6, 0.4, 0.4, 0.35, 0.5, 0.45, 0.55, 0.35, 0.45, 0.45, 0.65, 0.55, 0.45, 0.5, 0.65, 0.4, 0.5, 0.75, 0.7, 0.4, 0.55, 0.55, 0.7, 0.35, 0.45, 0.45, 0.4, 0.4, 0.45, 0.6, 0.35, 0.4, 0.55, 0.45, 0.4, 0.35, 0.5, 0.6, 0.5, 0.45, 0.35, 0.5, 0.55, 0.5, 0.4, 0.5, 0.6, 0.3, 0.4, 0.75, 0.45, 0.55, 0.45, 0.45, 0.55, 0.35, 0.75, 0.45, 0.35, 0.55, 0.6, 0.45, 0.45, 0.4, 0.55, 0.4, 0.6, 0.4, 0.4, 0.5, 0.45, 0.65, 0.6, 0.25, 0.35, 0.75, 0.6, 0.55, 0.45, 0.3, 0.4, 0.5, 0.5, 0.6, 0.3, 0.4, 0.5, 0.55, 0.4, 0.6, 0.65, 0.45, 0.45, 0.65, 0.4, 0.6, 0.55, 0.65, 0.55, 0.65, 0.45, 0.35, 0.7, 0.5, 0.55, 0.5, 0.6, 0.45, 0.35, 0.55, 0.35, 0.35, 0.6, 0.3, 0.55, 0.5, 0.4, 0.45, 0.45, 0.6, 0.4, 0.75, 0.6, 0.55, 0.6, 0.3, 0.55, 0.35, 0.4, 0.6, 0.5, 0.45, 0.6, 0.35, 0.35, 0.65, 0.7, 0.6, 0.55, 0.6, 0.65, 0.45, 0.55, 0.55, 0.75, 0.35, 0.4, 0.5, 0.4, 0.5, 0.4, 0.6, 0.25, 0.65, 0.7, 0.5, 0.5, 
0.5, 0.4, 0.4, 0.5, 0.5, 0.45, 0.55, 0.7, 0.4, 0.55, 0.6, 0.55, 0.55, 0.4, 0.5, 0.6, 0.35, 0.5, 0.2, 0.5, 0.65, 0.55, 0.35, 0.55, 0.35, 0.45, 0.6, 0.7, 0.25, 0.4, 0.4, 0.45, 0.7, 0.5, 0.55, 0.4, 0.55, 0.65, 0.65, 0.35, 0.55, 0.4, 0.7, 0.45, 0.45, 0.6, 0.55, 0.55, 0.65, 0.6, 0.45, 0.45, 0.55, 0.4, 0.4, 0.45, 0.5, 0.5, 0.35, 0.35, 0.45, 0.6, 0.55, 0.65, 0.55, 0.6, 0.2, 0.5, 0.45, 0.5, 0.3, 0.45, 0.65, 0.35, 0.55, 0.55, 0.5, 0.35, 0.6, 0.45, 0.5, 0.65, 0.4, 0.65, 0.35, 0.55, 0.4, 0.5, 0.55, 0.3, 0.6, 0.25, 0.4, 0.6, 0.4]\n"
],
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\nfig, ax = plt.subplots()\nfor sample in [sample_means_small, sample_means_large]:\n sns.distplot(sample)",
"_____no_output_____"
]
],
[
[
"## Standard Error of the Mean\n\nWhat does it mean to \"estimate\" the population mean? A single sample mean is only a point estimate; the standard error quantifies how much that estimate varies from sample to sample.",
"_____no_output_____"
]
],
[
[
"# Calculate the sample mean for a single sample\ndf.single_sample.mean()",
"_____no_output_____"
]
],
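The standard error of the mean is the sample standard deviation divided by the square root of the sample size. A minimal stdlib-only sketch of that formula (self-contained, not using the notebook's `df`):

```python
import math

def standard_error(data):
    """Standard error of the mean: sample std (ddof=1) / sqrt(n)."""
    n = len(data)
    mean = sum(data) / n
    # Bessel-corrected sample variance divides by n - 1, not n
    sample_var = sum((x - mean) ** 2 for x in data) / (n - 1)
    return math.sqrt(sample_var) / math.sqrt(n)

print(standard_error([1, 2, 3, 4, 5]))  # ~0.7071
```

With more observations the denominator grows, so the standard error shrinks — the same effect seen in the coin-flip cells below.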
[
[
"## Build and Interpret a Confidence Interval\n\n<img src=\"https://github.com/ryanallredblog/ryanallredblog.github.io/blob/master/img/Confidence_Interval.png?raw=true\" width=400>",
"_____no_output_____"
]
],
[
[
"import numpy as np\ncoinflips = np.random.binomial(n=1, p=.5, size=42)\n\n# ddof modifies the divisor of the sum of squared deviations from the mean.\n# The divisor is N - ddof; the default ddof=0 gives the population std,\n# while ddof=1 gives the Bessel-corrected sample std.\n\nprint(np.std(coinflips, ddof=1))\nprint(coinflips)\nprint(np.std(coinflips))",
"0.5060608273474739\n[0 1 1 0 1 0 0 1 0 1 0 0 0 1 0 0 1 1 0 0 0 0 0 1 1 1 0 1 1 1 1 1 0 1 1 0 0\n 1 1 0 0 1]\n0.5\n"
],
[
"import scipy.stats as stats\n\ndef confidence_interval(data, confidence=0.95):\n    \"\"\"\n    Calculate a confidence interval around a sample mean for given data.\n    Using t-distribution and two-tailed test, default 95% confidence. \n    \n    Arguments:\n        data - iterable (list or numpy array) of sample observations\n        confidence - level of confidence for the interval\n    \n    Returns:\n        tuple of (mean, lower bound, upper bound)\n    \"\"\"\n    data = np.array(data)\n    mean = np.mean(data)\n    n = len(data)\n    \n    # stats.sem computes the standard error of the mean: sample std / sqrt(n)\n    stderr = stats.sem(data)\n    \n    # equivalent numpy formula:\n    # stderr = np.std(data, ddof=1) / np.sqrt(n)\n    \n    # stats.t.ppf gives the t critical value for the given confidence level;\n    # t * stderr gives the margin of error.\n    # sample mean +/- margin of error gives the confidence interval\n    margin_err = stderr * stats.t.ppf((1 + confidence) / 2.0, n - 1)\n    print(margin_err)\n    return (mean, mean - margin_err, mean + margin_err)",
"_____no_output_____"
],
[
"confidence_interval(coinflips)",
"0.15316449851372824\n"
],
[
"# new coin flips and new confidence interval\ncoinflips = np.random.binomial(n=1, p=.5, size=50)\nconfidence_interval(coinflips)",
"0.14342620933732686\n"
],
[
"# with a bigger sample size, the margin of error shrinks\n\ncoinflips = np.random.binomial(n=1, p=.5, size=5000)\nconfidence_interval(coinflips)",
"0.013863621347299408\n"
],
[
"coinflips = np.random.binomial(n=1, p=.5, size=500)\nconfidence_interval(coinflips)",
"0.04397105407442487\n"
],
[
"coinflips = np.random.binomial(n=1, p=.5, size=500)\nconfidence_interval(coinflips)",
"0.04395944121015544\n"
],
[
"coinflips = np.random.binomial(n=1, p=.5, size=500)\nconfidence_interval(coinflips)",
"0.04396401634104949\n"
]
],
[
[
"## Looking at stats.t.ppf\n",
"_____no_output_____"
]
],
[
[
"# stats.t.ppf(probability cutoff, degrees of freedom)\n\n# For a 95% confidence level, 1 - confidence_level = .05,\n# and .05 / 2 = .025 is the tail probability cut off on each side.\n\nconfidence_level = .95\n# degrees of freedom = n - 1\ndof = 42 - 1\n\n# .ppf is the percent point function (inverse CDF); it returns the t critical\n# value that bounds the confidence interval\nstats.t.ppf((1 + confidence_level) / 2, dof)  # upper critical value\n\n# the lower critical value is its negative\nstats.t.ppf((1 - confidence_level) / 2, dof)",
"_____no_output_____"
]
],
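For intuition about what `ppf` does, here is a hypothetical stdlib-only version for the standard normal distribution (the large-`dof` limit of the t-distribution), inverting the CDF by bisection. This is a sketch for illustration only, not a replacement for `stats.t.ppf`:

```python
import math

def normal_cdf(x):
    """CDF of the standard normal, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def normal_ppf(p, lo=-10.0, hi=10.0, tol=1e-10):
    """Invert the standard normal CDF by bisection: find x with CDF(x) = p."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(normal_ppf(0.975))  # ~1.96, the familiar 95% z critical value
```

This also makes the symmetry visible: `ppf((1 - c) / 2)` is the negative of `ppf((1 + c) / 2)`.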
[
[
"## Graphically Represent a Confidence Interval",
"_____no_output_____"
]
],
[
[
"import seaborn as sns\ncoinflips_42 = np.random.binomial(n=1, p=.5, size=42)\nsns.kdeplot(coinflips_42)\nCI = confidence_interval(coinflips_42)\nplt.axvline(x=CI[1], color='red')\nplt.axvline(x=CI[2], color='red')\nplt.axvline(x=CI[0], color='k');",
"0.1575207555477215\n"
]
],
[
[
"## Relationship between Confidence Intervals and T-tests\n\nConfidence Interval == bounds of statistical significance for our t-test\n\nA null-hypothesis mean that falls inside of our confidence interval means we \"FAIL TO REJECT\" our null hypothesis\n\nA null-hypothesis mean that falls outside of our confidence interval means we \"REJECT\" our null hypothesis",
"_____no_output_____"
]
],
[
[
"from scipy.stats import t, ttest_1samp",
"_____no_output_____"
],
[
"import numpy as np\n\ncoinflip_means = []\nfor x in range(0,100):\n coinflips = np.random.binomial(n=1, p=.5, size=30)\n coinflip_means.append(coinflips.mean())\n\nprint(coinflip_means)",
"[0.43333333333333335, 0.5, 0.5, 0.6333333333333333, 0.6666666666666666, 0.5333333333333333, 0.5666666666666667, 0.5666666666666667, 0.5666666666666667, 0.43333333333333335, 0.6333333333333333, 0.7333333333333333, 0.5, 0.4666666666666667, 0.5333333333333333, 0.5, 0.4, 0.4666666666666667, 0.5, 0.4666666666666667, 0.5, 0.5, 0.4666666666666667, 0.6333333333333333, 0.43333333333333335, 0.5333333333333333, 0.3333333333333333, 0.5, 0.5666666666666667, 0.5666666666666667, 0.7, 0.6, 0.5, 0.5, 0.3333333333333333, 0.4666666666666667, 0.5333333333333333, 0.43333333333333335, 0.6333333333333333, 0.4666666666666667, 0.5333333333333333, 0.26666666666666666, 0.43333333333333335, 0.3333333333333333, 0.5, 0.6, 0.4, 0.6, 0.5666666666666667, 0.5666666666666667, 0.43333333333333335, 0.5, 0.43333333333333335, 0.5666666666666667, 0.7, 0.5333333333333333, 0.43333333333333335, 0.36666666666666664, 0.6333333333333333, 0.43333333333333335, 0.4666666666666667, 0.6333333333333333, 0.43333333333333335, 0.5666666666666667, 0.5, 0.43333333333333335, 0.43333333333333335, 0.5333333333333333, 0.6, 0.36666666666666664, 0.5333333333333333, 0.6666666666666666, 0.4666666666666667, 0.4666666666666667, 0.5, 0.4, 0.6, 0.6666666666666666, 0.4666666666666667, 0.5666666666666667, 0.4666666666666667, 0.4666666666666667, 0.43333333333333335, 0.3333333333333333, 0.4, 0.3333333333333333, 0.3333333333333333, 0.5333333333333333, 0.6, 0.5, 0.6, 0.43333333333333335, 0.4666666666666667, 0.3333333333333333, 0.5, 0.43333333333333335, 0.6, 0.4666666666666667, 0.43333333333333335, 0.36666666666666664]\n"
],
[
"# Sample Size\nn = len(coinflip_means)\n# Degrees of Freedom\ndof = n-1\n# The Mean of Means:\nmean = np.mean(coinflip_means)\n# Sample Standard Deviation\nsample_std = np.std(coinflip_means, ddof=1)\n# Standard Error\nstd_err = sample_std/n**.5\n\nCI = t.interval(.95, dof, loc=mean, scale=std_err)\nprint(\"95% Confidence Interval: \", CI)",
"95% Confidence Interval: (0.48087691780652664, 0.5184564155268068)\n"
],
[
"'''You can roll your own CI calculation pretty easily. \nThe only thing that's a little bit challenging \nis understanding the t stat lookup'''\n\n# 95% confidence interval\nt_stat = t.ppf(.975, dof)\nprint(\"t Statistic:\", t_stat)\n\nCI = (mean-(t_stat*std_err), mean+(t_stat*std_err))\nprint(\"Confidence Interval\", CI)",
"t Statistic: 1.9842169515086827\nConfidence Interval (0.48087691780652664, 0.5184564155268068)\n"
]
],
[
[
"A null hypothesis that's just inside of our confidence interval == fail to reject\n\n",
"_____no_output_____"
]
],
[
[
"ttest_1samp(coinflip_means, .5)",
"_____no_output_____"
]
],
[
[
"A null hypothesis that's just outside of our confidence interval == reject\n\n",
"_____no_output_____"
]
],
[
[
"ttest_1samp(coinflip_means, .53)",
"_____no_output_____"
]
],
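The one-sample t statistic that `ttest_1samp` reports is simple to compute by hand: t = (sample mean − hypothesized mean) / standard error. A stdlib-only sketch with a tiny made-up sample (the data below is an assumption for demonstration, not the coin-flip means above):

```python
import math

def t_statistic(data, mu0):
    """One-sample t statistic: (mean - mu0) / standard error of the mean."""
    n = len(data)
    mean = sum(data) / n
    sample_std = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    std_err = sample_std / math.sqrt(n)
    return (mean - mu0) / std_err

sample = [0.4, 0.5, 0.6, 0.5, 0.45, 0.55]
print(t_statistic(sample, 0.45))  # positive: the sample mean is above the null value
```

When `mu0` sits inside the confidence interval, the resulting |t| is below the critical value and we fail to reject; outside the interval, we reject — the same decision `ttest_1samp` makes via its p-value.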
[
[
"## Run a $\\chi^{2}$ Test \"by hand\" (Using Numpy)\n\n\\begin{align}\n\\chi^2 = \\sum \\frac{(observed_{ij}-expected_{ij})^2}{(expected_{ij})}\n\\end{align}\n",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/adult.csv', na_values=\" ?\")\nprint(df.shape)\ndf.head()",
"(32561, 15)\n"
],
[
"df.describe()",
"_____no_output_____"
],
[
"df.describe(exclude='number')",
"_____no_output_____"
],
[
"cut_points = [0, 9, 19, 29, 39, 49, 1000]\nlabel_names = ['0-9', '10-19', '20-29', '30-39', '40-49', '50+']\ndf['hours_per_week_categories'] = pd.cut(df['hours-per-week'], cut_points, labels=label_names)\n\ndf.head()",
"_____no_output_____"
],
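`pd.cut` assigns each value to the bin whose interval contains it; by default the intervals are open on the left and closed on the right. A hypothetical stdlib-only sketch of the same binning logic using `bisect`:

```python
import bisect

cut_points = [0, 9, 19, 29, 39, 49, 1000]
label_names = ['0-9', '10-19', '20-29', '30-39', '40-49', '50+']

def assign_bin(value):
    """Label of the (left-open, right-closed] interval containing value."""
    # bisect_left over the interior edges finds the first edge >= value
    idx = bisect.bisect_left(cut_points[1:], value)
    return label_names[idx]

print(assign_bin(40))  # '40-49'
print(assign_bin(9))   # '0-9' (the right edge is inclusive, like pd.cut)
```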
[
"df['sex'].value_counts()",
"_____no_output_____"
],
[
"df['hours_per_week_categories'].value_counts()",
"_____no_output_____"
],
[
"# sort by the categorical column first (works around a pandas crosstab ordering quirk)\n\ndf = df.sort_values(by='hours_per_week_categories', ascending=True)\n\ndf.head()",
"_____no_output_____"
],
[
"contingency_table = pd.crosstab(df['sex'], df['hours_per_week_categories'], margins=True)\n\ncontingency_table",
"_____no_output_____"
],
[
"femalecount = contingency_table.iloc[0][0:6].values\nfemalecount",
"_____no_output_____"
],
[
"malecount = contingency_table.iloc[1][0:6].values\nmalecount",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\n\n#Plots the bar chart\nfig = plt.figure(figsize=(10, 5))\nsns.set(font_scale=1.8)\ncategories = [\"0-9\",\"10-19\",\"20-29\",\"30-39\",\"40-49\",\"50+\"]\np1 = plt.bar(categories, malecount, 0.55, color='#d62728')\np2 = plt.bar(categories, femalecount, 0.55, bottom=malecount)\nplt.legend((p2[0], p1[0]), ('Female', 'Male'))\nplt.xlabel('Hours per Week Worked')\nplt.ylabel('Count')\nplt.show()",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\n\n# Plots the counts as lines (note: plt.plot takes no width argument)\nfig = plt.figure(figsize=(10, 5))\nsns.set(font_scale=1.8)\ncategories = [\"0-9\",\"10-19\",\"20-29\",\"30-39\",\"40-49\",\"50+\"]\np1 = plt.plot(categories, malecount, color='blue')\np2 = plt.plot(categories, femalecount, color='red')\nplt.legend((p2[0], p1[0]), ('Female', 'Male'))\nplt.xlabel('Hours per Week Worked')\nplt.ylabel('Count')\nplt.show()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\n\n# Plots overlaid (not stacked) bars; the female bars are drawn on top of the male bars\nfig = plt.figure(figsize=(10, 5))\nsns.set(font_scale=1.8)\ncategories = [\"0-9\",\"10-19\",\"20-29\",\"30-39\",\"40-49\",\"50+\"]\np1 = plt.bar(categories, malecount, 0.25)\np2 = plt.bar(categories, femalecount, 0.25, color='red')\nplt.legend((p2[0], p1[0]), ('Female', 'Male'))\nplt.xlabel('Hours per Week Worked')\nplt.ylabel('Count')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Expected Value Calculation\n\\begin{align}\nexpected_{i,j} =\\frac{(row_{i} \\text{total})(column_{j} \\text{total}) }{(\\text{total observations})} \n\\end{align}",
"_____no_output_____"
]
],
[
[
"row_sums = contingency_table.iloc[0:2, 6].values\ncol_sums = contingency_table.iloc[2, 0:6].values\n\nprint(row_sums)\nprint(col_sums)",
"[10771 21790]\n[ 458 1246 2392 3667 18336 6462]\n"
],
[
"total = contingency_table.loc['All','All']\ntotal",
"_____no_output_____"
],
[
"# same as the grand total above: the number of rows in the data set\ndf.shape[0]\n",
"_____no_output_____"
],
[
"expected = []\nfor i in range(len(row_sums)):\n expected_row = []\n for column in col_sums:\n expected_val = column*row_sums[i]/total\n expected_row.append(expected_val)\n expected.append(expected_row)\n \n\nexpected = np.array(expected)\nprint(expected.shape) \nprint(expected)",
"(2, 6)\n[[ 151.50388502 412.16995793 791.26046497 1213.02346365\n 6065.44811277 2137.59411566]\n [ 306.49611498 833.83004207 1600.73953503 2453.97653635\n 12270.55188723 4324.40588434]]\n"
],
[
"observed = pd.crosstab(df['sex'], df['hours_per_week_categories']).values\nprint(observed.shape)\nobserved",
"(2, 6)\n"
]
],
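The expected-count formula is just an outer product of the row and column marginals divided by the grand total. A stdlib-only sketch on a tiny made-up 2x2 table (the totals below are assumptions for demonstration, distinct from the notebook's `row_sums`/`col_sums`):

```python
row_totals = [30, 30]
col_totals = [20, 40]
grand_total = sum(row_totals)  # 60; must equal sum(col_totals)

# expected[i][j] = (row i total) * (column j total) / grand total
expected_toy = [[r * c / grand_total for c in col_totals] for r in row_totals]
print(expected_toy)  # [[10.0, 20.0], [10.0, 20.0]]
```

Notice that each expected row is proportional to the column totals — exactly what independence of the two variables would predict.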
[
[
"## Chi-Squared Statistic with Numpy\n\n\\begin{align}\n\\chi^2 = \\sum \\frac{(observed_{i}-expected_{i})^2}{(expected_{i})}\n\\end{align}\n\nFor the $observed$ values we will just use a version of our contingency table without the margins as a numpy array. In this way, if our observed values array and our expected values array are the same shape, then we can subtract them and divide them directly which makes the calculations a lot cleaner. No for loops!",
"_____no_output_____"
]
],
[
[
"# Array broadcasting will work with numpy arrays but not python lists\nchi_squared = ((observed - expected)**2/(expected)).sum() \nprint(f\"Chi-Squared: {chi_squared}\")",
"Chi-Squared: 2287.190943926107\n"
],
[
"# Degrees of freedom of a chi-squared test of independence:\n# degrees_of_freedom = (num_rows - 1) * (num_columns - 1)\n\ndof = (len(row_sums)-1)*(len(col_sums)-1)\nprint(f\"Degrees of Freedom: {dof}\") ",
"Degrees of Freedom: 5\n"
]
],
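The same chi-squared sum can be written without numpy broadcasting. A stdlib-only sketch on a small made-up 2x2 table, where the statistic works out by hand (each cell contributes (obs − 15)² / 15):

```python
observed_toy = [[10, 20], [20, 10]]
row_totals = [sum(row) for row in observed_toy]        # [30, 30]
col_totals = [sum(col) for col in zip(*observed_toy)]  # [30, 30]
total = sum(row_totals)                                # 60

chi_sq = 0.0
for i, row in enumerate(observed_toy):
    for j, obs in enumerate(row):
        exp = row_totals[i] * col_totals[j] / total    # 15.0 for every cell here
        chi_sq += (obs - exp) ** 2 / exp

dof = (len(row_totals) - 1) * (len(col_totals) - 1)
print(chi_sq, dof)  # 6.666..., 1
```

Broadcasting collapses the double loop into one vectorized expression, which is why the numpy version above is so compact.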
[
[
"## Run a $\\chi^{2}$ Test using Scipy",
"_____no_output_____"
]
],
[
[
"chi_squared, p_value, dof, expected = stats.chi2_contingency(observed)\n\nprint(f\"Chi-Squared: {chi_squared}\")\nprint(f\"P-value: {p_value}\")\nprint(f\"Degrees of Freedom: {dof}\") \nprint(\"Expected: \\n\", np.array(expected))",
"Chi-Squared: 2287.190943926107\nP-value: 0.0\nDegrees of Freedom: 5\nExpected: \n [[ 151.50388502 412.16995793 791.26046497 1213.02346365\n 6065.44811277 2137.59411566]\n [ 306.49611498 833.83004207 1600.73953503 2453.97653635\n 12270.55188723 4324.40588434]]\n"
]
],
[
[
"Null hypothesis: the hours-worked-per-week bins are **independent** of sex. \n\nWith a p-value that is effectively zero (far below any reasonable significance level), we REJECT the null hypothesis that hours worked per week and sex are independent, and conclude that there is an association between hours worked per week and sex.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e75be85e0eaf0e6de1634385ca5e5b17e3ed86e9 | 6,297 | ipynb | Jupyter Notebook | week_5/week_5_unit_6_methodsfunct_notebook.ipynb | ceedee666/opensap_python_intro | d4b94eca1172c2e86d7711ed0d23c37250142b88 | [
"CC0-1.0"
] | 8 | 2021-10-09T14:55:01.000Z | 2022-02-16T15:55:53.000Z | week_5/week_5_unit_6_methodsfunct_notebook.ipynb | ceedee666/opensap_python_intro | d4b94eca1172c2e86d7711ed0d23c37250142b88 | [
"CC0-1.0"
] | 11 | 2021-10-01T12:50:04.000Z | 2022-03-30T10:16:52.000Z | week_5/week_5_unit_6_methodsfunct_notebook.ipynb | ceedee666/opensap_python_intro | d4b94eca1172c2e86d7711ed0d23c37250142b88 | [
"CC0-1.0"
] | 3 | 2021-09-30T07:04:28.000Z | 2021-12-16T09:52:04.000Z | 40.10828 | 154 | 0.65698 | [
[
[
"# Functions vs. Methods\nYou might have noticed in one of the previous units that sometimes the term *function* and sometimes the term *method*\nwas used to refer to some functionality in the Python 🐍 standard library. This was not by mistake but a conscious usage\nof the terms to refer to different concepts. After learning about functions in Python 🐍 earlier, this unit highlights\nthe main difference between functions and methods.\n\n\n## Programming paradigms\nTo cope with the ever-increasing\n[complexity of software systems](https://informationisbeautiful.net/visualizations/million-lines-of-code/)\ndifferent [programming paradigms](https://en.wikipedia.org/wiki/Programming_paradigm)\nhave been developed. Two well-known paradigms are the procedural programming paradigm and the object-oriented\nprogramming paradigm. A detailed discussion of these paradigms is beyond the scope of this\nintroductory lecture. Nevertheless, the main aspects of these paradigms are described in the following.\n\nIn the procedural programming paradigm the programs are structured using procedures. These procedures contain the\nprogram statements. In contrast to that, the object-oriented programming paradigm uses the notion of objects to\nstructure programs. An object encapsulates data and methods to manipulate this data[<sup id=\"fn1-back\">1</sup>](#fn1).\n\nPython 🐍 supports both the procedural and the object-oriented programming paradigm. Procedures are called functions in\nPython. As discussed earlier, a function contains several statements. The following discussion focuses on the\ndifferences when invoking functions and methods.\n\n\n# Invoking functions\nAs shown in the previous units, a function is invoked using its name. As an example consider the `print()` function in\nthe following cell.",
"_____no_output_____"
]
],
[
[
"song = \"Blue Train\"\n\nprint(\"Listening to\", song)",
"_____no_output_____"
]
],
[
[
"The `print()` function is invoked by using its name followed by parentheses. Inside the parentheses *all* the data\nrequired for the execution of the function is provided as parameters. In the example above two parameters are provided:\n\n- The string `\"Listening to\"`\n- The variable `song` containing the value `\"Blue Train\"`.\n\nThe `print()` function uses these parameters to perform its functionality. In case of the `print()` function this is\nprinting the text `Listening to Blue Train`.",
"_____no_output_____"
],
[
"# Invoking methods\nIn contrast to functions, methods cannot be invoked using only the method name. Instead, an object is required to\ninvoke a method. This is shown in the following example.",
"_____no_output_____"
]
],
[
[
"song = \"Ace of Spades\"\nturned_up_song = song.upper()\n\nprint(\"Listening to\", turned_up_song)",
"_____no_output_____"
]
],
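Because strings are immutable, methods like `upper()` never change the object they are called on — they return a new string. A quick sketch:

```python
song = "Ace of Spades"
louder = song.upper()

print(song)            # Ace of Spades -- the original is unchanged
print(louder)          # ACE OF SPADES
print(song is louder)  # False: upper() returned a brand new object
```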
[
[
"In the example, a variable `song` of type `str` is defined. Note that in Python 🐍 there are actually no primitive\ndata types. Instead, everything is an object in the sense of the object-oriented programming paradigm. Using the `song`\nobject, the method `upper()` is invoked. This is done by adding a `.` to the object followed by the method name.\nInvoking the method `upper()` returns a new string with all characters converted to upper case. Consequently, the output\nof the print function is `Listening to ACE OF SPADES`.\n\nAs the method `upper()` is invoked on the object `song`, no parameters are provided. Instead, the method uses the data of\nthe object (in this case the value `Ace of Spades`) to perform its functionality.\n\nOf course, methods can also have parameters. This is shown in the following example.",
"_____no_output_____"
]
],
[
[
"songs = \"Ace of Spades, Blitzkrieg Bop, Blue Train\"\nsong_list = songs.split(\", \")\n\nfor song in song_list:\n    print(\"Listening to\", song)",
"_____no_output_____"
]
],
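`split()` can also be called without any argument, in which case it splits on runs of whitespace; and `join()` is its inverse. A small sketch:

```python
songs = "Ace of Spades, Blitzkrieg Bop, Blue Train"

song_list = songs.split(", ")
print(song_list)             # ['Ace of Spades', 'Blitzkrieg Bop', 'Blue Train']

# join() reverses the split, using the string it is called on as the separator
print(", ".join(song_list))  # back to the original string

# with no argument, split() breaks on any run of whitespace
print("one   two\tthree".split())  # ['one', 'two', 'three']
```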
[
[
"In this example, the variable `songs` contains a comma-separated list of songs. Using the `split()` method, this string is\nsplit into a list of strings. The parameter of the `split()` method is the delimiter used to split the string. In the\nexample above, the delimiter is `\", \"` (a comma followed by a space). As a result, the `split()` method returns a list\nof strings, which is stored in the `song_list` variable.",
"_____no_output_____"
],
[
"# Footnote\n[<sup id=\"fn1\">1</sup>](#fn1-back) Key concepts of the object-oriented programming paradigm, like message passing, encapsulation, and polymorphism,\nwere deliberately omitted. A brief introduction to object-oriented programming in Python is available\n[here](https://docs.python.org/3/tutorial/classes.html). ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e75be8c302499ca9e99073ad15b3cdb6b1a812c3 | 439,937 | ipynb | Jupyter Notebook | development/acceptance-study/NCMC_Analysis_mu750.ipynb | choderalab/saltswap | d30804beb158960a62f94182c694df6dd9130fb8 | [
"MIT"
] | 3 | 2017-06-30T11:40:20.000Z | 2021-05-14T02:20:38.000Z | development/acceptance-study/NCMC_Analysis_mu750.ipynb | choderalab/saltswap | d30804beb158960a62f94182c694df6dd9130fb8 | [
"MIT"
] | 19 | 2017-04-27T14:56:51.000Z | 2019-12-10T14:26:38.000Z | development/acceptance-study/NCMC_Analysis_mu750.ipynb | choderalab/saltswap | d30804beb158960a62f94182c694df6dd9130fb8 | [
"MIT"
] | 2 | 2017-02-01T21:46:18.000Z | 2018-01-15T18:56:56.000Z | 822.31215 | 151,364 | 0.939473 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython.display import Image\nfrom pymbar import timeseries as ts",
"_____no_output_____"
]
],
[
[
"# Analysis of saltswap results\n## Applied chemical potential: $\\Delta\\mu = 750$",
"_____no_output_____"
],
[
"Defining functions to read the simulation data, and generating colours for plotting.",
"_____no_output_____"
]
],
[
[
"def read_data(filename):\n \"\"\"\n Read the number of salt molecules added, acceptance rates, and simulation run times for iterations of saltswap\n \n Parameters\n ----------\n filename: str\n the name of the file that contains the simulation data\n \n Returns\n -------\n data: numpy.ndarray\n array containing number of waters, number of salt pairs, acceptance probability, and run-time per iteration\n \"\"\"\n filelines = open(filename).readlines()\n Nwats = []\n Nsalt = []\n Accprob = []\n time = []\n for i in range(3,len(filelines)-3):\n # It appears some of the files have a varying length. This exception will pick those up.\n try:\n dummy = int(filelines[i][6:10].strip())\n except ValueError:\n break\n Nwats.append(int(filelines[i][6:10].strip()))\n Nsalt.append(int(filelines[i][13:18].strip()))\n Accprob.append(float(filelines[i][19:24].strip()))\n time.append(int(filelines[i][35:40].strip()))\n return np.vstack((np.array(Nwats),np.array(Nsalt),np.array(Accprob),np.array(time)))\n\ndef read_work(filename):\n \"\"\"\n Function to read the work to add or remove salt in a saltswap simulation.\n \n Parameters\n ----------\n filename: str\n the name of the file containing the work for each attempt\n \n Returns\n -------\n work: numpy.ndarray\n array of work values\n \"\"\"\n filelines = open(filename).readlines()\n work = []\n for i in range(2,len(filelines)):\n work += [float(wrk) for wrk in filelines[i].split()]\n return np.array(work) \n\n\n# Nice colours, taken on 26th Nov 2015 from:\n#http://tableaufriction.blogspot.co.uk/2012/11/finally-you-can-use-tableau-data-colors.html\n\n# These are the \"Tableau\" colors as RGB. I've chosen my faves. \n# In order: blue, green, purple, orange. Hopefully a good compromise for colour-blind people.\ntableau4 = [(31, 119, 180),(44, 160, 44),(148,103,189),(255, 127, 14)]\ntableau4_light = [(174,199,232),(152,223,138),(197,176,213),(255,187,120)]\n\n# Scale the RGB values to the [0, 1] range, which is the format matplotlib accepts. \nfor i in range(len(tableau4)): \n r, g, b = tableau4[i] \n tableau4[i] = (r / 255., g / 255., b / 255.)\n r, g, b = tableau4_light[i] \n tableau4_light[i] = (r / 255., g / 255., b / 255.)",
"_____no_output_____"
]
],
[
[
"<a id='summary'></a>\n## NCMC parameter sweep at a glance\nPlotting colored matrices to summarise the main results.",
"_____no_output_____"
]
],
[
[
"# Results from the initial set of parameters:\nnperturbations = [1024,2048,4096]\nnpropogations = [1,2,4]\n\nMeanSalt = np.zeros((len(nperturbations),len(npropogations)))\nAccProb = np.zeros((len(nperturbations),len(npropogations)))\nMeanTime = np.zeros((len(nperturbations),len(npropogations)))\n\nfor i in range(len(nperturbations)):\n for j in range(len(npropogations)):\n filename = 'Titration750/prt{0}_prp{1}/data.txt'.format(nperturbations[i],npropogations[j])\n data = read_data(filename)\n MeanSalt[i,j] = data[1].mean() # The recorded times are for 10 insertion attempts \n AccProb[i,j] = data[2].mean()\n MeanTime[i,j] = data[3].mean()/20\n\nfig,ax = plt.subplots(nrows=3,ncols=1,squeeze=True)\n\n# Acceptance probability\ncax = ax[0].matshow(np.round(AccProb,2),cmap=plt.cm.Purples,interpolation='none')\nc = fig.colorbar(cax,ax=ax[0])\nc.ax.tick_params(labelsize=8) \nax[0].get_xaxis().set_visible(False)\nax[0].set_yticklabels(['']+nperturbations)\nax[0].set_title('Acceptance probability',fontsize=12)\n\n# Mean time per attempt on log scale\nlogtimes = np.log10(MeanTime)\nlt = np.arange(np.min(logtimes),np.max(logtimes),0.21)\nticks = np.round(10**lt,1)\ncax = ax[1].matshow(logtimes,cmap=plt.cm.Greens,interpolation='none')\nc = fig.colorbar(cax,ticks=lt,ax=ax[1])\nc.set_ticklabels(ticks, update_ticks=True)\nc.ax.tick_params(labelsize=8) \nax[1].set_xticklabels(['']+npropogations)\nax[1].get_xaxis().set_visible(False)\n#ax[1].xaxis.set_tick_params(labelbottom='on',labeltop='off')\nax[1].set_yticklabels(['']+nperturbations)\nax[1].set_title('Mean time (s) per attempt',fontsize=12)\nax[1].set_ylabel(\"Number of perturbation kernels\",fontsize=10) \n\n# Mean number of salt pairs to test consistency\ncax = ax[2].matshow(MeanSalt,cmap=plt.cm.Oranges,interpolation='none')\nc = fig.colorbar(cax)\nc.ax.tick_params(labelsize=8) \nax[2].set_xticklabels(['']+npropogations)\nax[2].xaxis.set_tick_params(labelbottom='on',labeltop='off')\nax[2].set_yticklabels(['']+nperturbations)\nax[2].set_title('Mean number of salt pairs',fontsize=12)\nax[2].set_xlabel(\"Number of propagation kernels\",fontsize=10) \nax[2].xaxis.set_label_position('bottom') \n#(h_pad=-1,w_pad=-100)\nplt.tight_layout(h_pad=-1,rect=(-1,0,1,1)) # rect = (left, bottom, right, top)\nplt.savefig(\"ParamSweep1.png\", format='png')\nImage(\"ParamSweep1.png\",width=1200)",
"/Users/rossg/miniconda2/lib/python2.7/site-packages/matplotlib/tight_layout.py:222: UserWarning: tight_layout : falling back to Agg renderer\n warnings.warn(\"tight_layout : falling back to Agg renderer\")\n"
]
],
[
[
"The data does not seem equilibrated.",
"_____no_output_____"
],
[
"## Time series plots\nExamining to what extent the data can be considered to be 'in equilibrium'.",
"_____no_output_____"
]
],
[
[
"params = [(1024,1),(1024,2),(1024,4),(2048,1),(2048,2),(2048,4),(4096,1),(4096,2),(4096,4)]\ncoords = [(0,0),(0,1),(0,2),(1,0),(1,1),(1,2),(2,0),(2,1),(2,2)]\n\nf, axarr = plt.subplots(3, 3)\nxlims =(0,400) # x limits\nylims = (0,125) # y limits\nxstep = 100\n\nfor p,c in zip(params,coords):\n # Reading in data\n filename = 'Titration750/prt{0}_prp{1}/data.txt'.format(p[0],p[1])\n nsalt = read_data(filename)[1]\n #time = np.arange(1,len(nsalt)+1) \n time = range(len(nsalt))\n # Plotting\n axarr[c].plot(nsalt,color=tableau4[1],linewidth=2)\n axarr[c].set_xlim(xlims)\n axarr[c].set_ylim(ylims)\n axarr[c].set_ylim(ylims)\n axarr[c].set_title('# pert. = {0}, # prop. = {1}'.format(p[0],p[1]),fontsize=11)\n axarr[c].set_xticks(np.arange(xlims[0], xlims[1]+xstep, xstep))\n axarr[c].grid()\n #try:\n # stats = ts.detectEquilibration(nsalt) # Start of equil time, stat inefficiency, num uncorrelated sample\n # axarr[c].axvline(x=stats[0],linewidth=2, color='k')\n #except ts.ParameterError:\n # pass\n# Fine-tune figure; hide x ticks for top plots and y ticks for right plots\nplt.setp([a.get_xticklabels() for a in axarr[0, :]], visible=False)\nplt.setp([a.get_xticklabels() for a in axarr[1, :]], visible=False)\nplt.setp([a.get_yticklabels() for a in axarr[:, 1]], visible=False) \nplt.setp([a.get_yticklabels() for a in axarr[:, 2]], visible=False) \nfor a in axarr[2, :]: a.set_xlabel('Step') \nfor a in axarr[:,0]: a.set_ylabel('# salt') \n\n\nplt.tight_layout()\nplt.savefig(\"TimeSeries.png\", format='png')\nImage(\"TimeSeries.png\")",
"_____no_output_____"
]
],
[
[
"None of the simulations reach equilibrium, and it's worrying that the steady state appears to be the formation of a salt crystal.",
"_____no_output_____"
],
[
"## Work distributions\n### # perturbations = 4096, # propagations = 4 \nLooking at # perturbations = 4096, # propagations = 4 as it's the most computationally expensive protocol.",
"_____no_output_____"
]
],
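The acceptance behaviour analysed in this section can be sketched with a Metropolis-style criterion for an NCMC insertion attempt: roughly, a move is accepted with probability min(1, exp(-(w - Δμ))), with the protocol work w and the chemical potential Δμ both in units of kT. This is an illustrative sketch only, not code from the saltswap package; the function name, and the omission of combinatorial prefactors (numbers of waters and ions), are assumptions:

```python
import math

def insertion_acceptance(work_kT, delta_mu_kT):
    """Sketch of a Metropolis acceptance probability for adding a salt pair.

    work_kT: protocol work in units of kT
    delta_mu_kT: applied chemical potential in units of kT
    (illustrative only; prefactors from the real saltswap criterion are omitted)
    """
    return min(1.0, math.exp(-(work_kT - delta_mu_kT)))

# Work below the chemical potential is always accepted
print(insertion_acceptance(295.0, 302.5))  # 1.0
# Work above it is accepted with exponentially decaying probability
print(insertion_acceptance(305.0, 302.5))
```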
[
[
"kT = 2.479\nprint 'Chemical potential in units of kT =', 750/kT",
"Chemical potential in units of kT = 302.541347317\n"
],
[
"work_add = read_work('Titration750/prt4096_prp4/work_add_data.txt')\nwork_rm = read_work('Titration750/prt4096_prp4/work_rm_data.txt')\n\n\nplt.clf()\nplt.plot(-work_add, color=tableau4[0])\nplt.plot(work_rm, color=tableau4[3])\nplt.axhline(750/kT, ls='--', color='k')\n\nplt.title('Work to add/remove salt: # pert. 4096, # prop. 4')\nplt.legend(('-Work to add','Work to remove','$\\Delta \\mu$'),loc=0)\nplt.xlabel('Attempt')\nplt.ylabel('Energy (kT)')\nplt.xlim((0,2000))\n\nplt.savefig('Work_prt4096_prp4.png',format='png')\nImage('Work_prt4096_prp4.png')",
"_____no_output_____"
]
],
[
[
"The work to remove salt increases as more salt is added to the system. This is the opposite of what we want, as this encourages more salt to enter the system.",
"_____no_output_____"
],
[
"### How work decreases with a longer protocol\nGiven the increase in salt pairs over time, I'll only look at the work distributions for the first 500 insertion/deletion attempts.",
"_____no_output_____"
]
],
[
[
"params = [(1024,1),(2048,1),(4096,1)]\n\nN = 500\nwork_add = np.zeros((3,N))\nwork_rm = np.zeros((3,N))\n\nfor i in range(len(params)):\n filename = 'Titration750/prt{0}_prp{1}/work_add_data.txt'.format(params[i][0],params[i][1])\n work_add[i,:] = read_work(filename)[0:N]\n filename = 'Titration750/prt{0}_prp{1}/work_rm_data.txt'.format(params[i][0],params[i][1])\n work_rm[i,:] = read_work(filename)[0:N]",
"_____no_output_____"
]
],
[
[
"### Histogram of work to add salt",
"_____no_output_____"
]
],
[
[
"# Automatically calculate the histogram of all the data, and save the edges and midpoints\ncounts, edges = np.histogram(-work_add,30)\nmidpoints = edges[0:-1] + np.diff(edges)/2.0\n\ncolours = (tableau4_light[0], tableau4_light[1], tableau4_light[3])\nplt.clf()\nlines = []\nfor i in range(3):\n cnts, junk = np.histogram(a = -work_add[i,:], bins = edges)\n plt.step(midpoints, cnts, where='mid',color='k', lw = 3)\n lines.append(plt.step(midpoints, cnts, where='mid', color=colours[i], lw = 2, label = '# pert. = {0}'.format(params[i][0])))\nplt.axvline(750/kT, ls='--', color='k',lw=3) \n\nplt.ylim((0,120))\nplt.xlim((160,305))\nplt.legend(loc=0)\nplt.title('Histogram of work to add salt')\nplt.ylabel('Counts')\nplt.xlabel('Energy (kT)')\nplt.grid()\nplt.savefig(filename=\"Hist_Work_Add.png\")\nImage(filename=\"Hist_Work_Add.png\")",
"_____no_output_____"
],
[
"# Automatically calculate the histogram of all the data, and save the edges and midpoints\ncounts, edges = np.histogram(work_rm,30)\nmidpoints = edges[0:-1] + np.diff(edges)/2.0\n\ncolours = (tableau4_light[0], tableau4_light[1], tableau4_light[3])\nplt.clf()\nlines = []\nfor i in range(3):\n cnts, junk = np.histogram(a = work_rm[i,:], bins = edges)\n plt.step(midpoints, cnts, where='mid',color='k', lw = 3)\n lines.append(plt.step(midpoints, cnts, where='mid', color=colours[i], lw = 2, label = '# pert. = {0}'.format(params[i][0])))\nplt.axvline(750/kT, ls='--', color='k',lw=3) \n\nplt.ylim((0,120))\nplt.xlim(300,440)\nplt.legend(loc=0)\nplt.title('Histogram of work to remove salt')\nplt.ylabel('Counts')\nplt.xlabel('Energy (kT)')\nplt.grid()\nplt.savefig(filename=\"Hist_Work_Rm.png\")\nImage(filename=\"Hist_Work_Rm.png\")",
"_____no_output_____"
]
],
[
[
"The location and spread of the work distributions decrease as the length of the protocol increases.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
e75bfc1dd1546eedeb3359d5a0999b2fb8517c40 | 67,961 | ipynb | Jupyter Notebook | docsrc/Python/VariableTypes.ipynb | wklchris/blog | 6229eeef06e5e542736ac26722c1ac7a03829204 | [
"MIT"
] | null | null | null | docsrc/Python/VariableTypes.ipynb | wklchris/blog | 6229eeef06e5e542736ac26722c1ac7a03829204 | [
"MIT"
] | null | null | null | docsrc/Python/VariableTypes.ipynb | wklchris/blog | 6229eeef06e5e542736ac26722c1ac7a03829204 | [
"MIT"
] | null | null | null | 19.965041 | 335 | 0.43263 | [
[
[
"# Variable Types\n\nThis chapter introduces Python's built-in variable types.",
"_____no_output_____"
],
[
"I consider the following built-in variable types to be frequently used in Python, or at least essential to know:\n\n| Type | Keyword | Description | Example |\n| :---: | :--- | :--- | :--- |\n| [Numbers] |\n| Integer | `int` | Whole numbers | `1`, `-1` |\n| Float | `float` | Floating-point numbers | `1.0` |\n| Complex | `complex` | Complex numbers | `complex(1,2)` |\n| [Sequences] |\n| List | `list` | An ordered, mutable sequence of data; each item may be of any type. | `[1, 2]` |\n| Tuple | `tuple` | An ordered, immutable sequence whose items are fixed at creation. | `(1, 2)` |\n| String | `str` | An immutable sequence of text characters. | `\"string\"` |\n| [Mappings] |\n| Dictionary | `dict` | A collection of key-value pairs with mutually distinct keys. | `{\"a\":1, \"b\":2}` |\n| [Sets] |\n| Set | `set` | An unordered, mutable collection of mutually distinct values. | `{1, 2}` |\n| [Others] |\n| Boolean | `bool` | A logical type representing true or false. | `True` |\n| None | `None` | Represents emptiness. | `None` |\n\nThe above is not a complete list of Python's built-in types:\n\n- Some advanced and complex types, such as the `range` constructor, are not listed here. They will be introduced in later chapters.\n- Some rarely used types, such as the binary `byte` type, are not covered in any chapter of this text.",
"_____no_output_____"
],
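Each entry in the table above can be checked with the built-in `type()` function. The following quick sketch (not part of the original text) prints the type name of one literal of each kind:

```python
# One literal of each built-in type from the table; type() reports the class
values = [1, 1.0, complex(1, 2), [1, 2], (1, 2), "string", {"a": 1}, {1, 2}, True, None]
for v in values:
    print(type(v).__name__, repr(v))
```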
[
"## Boolean and None\n\nBefore introducing the other variable types, let's first look at these two special ones.",
"_____no_output_____"
],
[
"### Boolean\n\nThe Boolean type has two logical values, true (True) and false (False), with the first letter of the word capitalized. Common logical operations:\n\n- Logical AND: true only if both are true `x and y`\n- Logical OR: true if either is true `x or y`\n- Logical NOT: `not x`\n- Logical XOR: true if the two differ `x ^ y`\n\nHere `x` and `y` are both Boolean variables.",
"_____no_output_____"
]
],
[
[
"x = True\ny = False\n\nprint(x and y, x or y, not x, x ^ y)",
"False True False True\n"
]
],
[
[
"### None\n\nNone is Python's null object. It may not be used often, but readers should be aware of it.\n\nThe logical value of the null object is false:",
"_____no_output_____"
]
],
[
[
"x = None\nprint(bool(x))",
"False\n"
]
],
[
[
"## Numeric types: int, float, complex\n\nThere is not much to introduce about the numeric types.\n\n- Arithmetic: `+`, `-`, `*`, `/`\n- Floor division and remainder: `c = a // b`, `d = a % b`; or `c, d = divmod(a,b)`.\n - Floor division here rounds toward negative infinity; for example, `-5//2` gives `-3`.\n - Complex numbers cannot take part in floor division or remainder operations.\n- Exponentiation: `a ** b`, or `pow(a, b)`\n- Magnitude: `abs(a)`. If `a` is complex, this computes the modulus; for an integer or float it is simply the absolute value.\n- In-place operations: `a += 1` increments `a` by 1; likewise `-=`, `*=`, `/=`\n\nPoints worth noting:\n\n- **Any mathematical operation involving division always yields a float**.\n- **Any mathematical operation involving a float also always yields a float**.\n- Python's internals already handle integer overflow, so there is no need to worry about it.\n- Although mathematically undefined, in Python (and many other languages) `0 ** 0` equals 1.\n\nIn particular, the float type contains two special values, \"not a number\" (Not a Number, `nan`) and positive/negative infinity (Infinity, `inf`):",
"_____no_output_____"
]
],
[
[
"x, y, z = 'nan', 'inf', '-inf'\nprint(float(x), float(y), float(z))",
"nan inf -inf\n"
]
],
[
[
"Complex numbers are rarely used, so here is just a quick example:",
"_____no_output_____"
]
],
[
[
"x = complex(1, 5)\ny = complex(2, -1)\nz = x + y\nprint(z, abs(z))",
"(3+4j) 5.0\n"
]
],
[
[
"### Type conversion and rounding\n\nIn Python, forcing a float to an integer truncates the part after the decimal point:",
"_____no_output_____"
]
],
[
[
"a, b, c, d = 1.2, 1.6, -1.2, -1.6\nprint(int(a), int(b), int(c), int(d))",
"1 1 -1 -1\n"
]
],
[
[
"For finer control over rounding, call Python's built-in `math` module:\n\n- floor: round toward negative infinity.\n- ceil: round toward positive infinity.",
"_____no_output_____"
]
],
[
[
"import math # import the math module\n\nprint(math.floor(a), math.ceil(b), math.floor(c), math.ceil(d))",
"1 2 -2 -1\n"
]
],
[
[
"In my own practice, however, rounding tasks are usually delegated to the `numpy` library; interested readers can consult the related NumPy functions:\n\n- [numpy.round](https://numpy.org/doc/stable/reference/generated/numpy.round_.html)\n- [numpy.floor](https://numpy.org/doc/stable/reference/generated/numpy.floor.html)\n- [numpy.ceil](https://numpy.org/doc/stable/reference/generated/numpy.ceil.html)\n- [numpy.trunc](https://numpy.org/doc/stable/reference/generated/numpy.trunc.html)",
"_____no_output_____"
],
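The differences between these NumPy functions can be sketched in a few lines (assuming `numpy` is installed):

```python
import numpy as np

x = np.array([1.5, 2.5, -1.2, -1.6])

print(np.round(x))  # round half to even: [ 2.  2. -1. -2.]
print(np.floor(x))  # toward -inf:        [ 1.  2. -2. -2.]
print(np.ceil(x))   # toward +inf:        [ 2.  3. -1. -1.]
print(np.trunc(x))  # toward zero:        [ 1.  2. -1. -1.]
```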
[
"### Comparing numbers\n\nThe usual numeric comparisons:\n\n- Less than `<` and less than or equal `<=`\n- Greater than `>` and greater than or equal `>=`\n- Equal `==` and not equal `!=`",
"_____no_output_____"
]
],
[
[
"x = 3\ny = 4\nprint(x != y)",
"True\n"
]
],
[
[
"In particular, Python also supports chained comparisons:",
"_____no_output_____"
]
],
[
[
"print(3 < 4 == 4, 3 > 2 < 4, 1 < 3 <= 5)",
"True True True\n"
]
],
[
[
"<div class=\"alert alert-info\">\n\nImportant\n \nNever try to compare two floating-point values for equality with the double equals sign!\n\n</div>",
"_____no_output_____"
],
[
"Floating-point computation has finite precision, so directly comparing floats for equality is unwise:",
"_____no_output_____"
]
],
[
[
"x, y = 0.1, 0.2\nz = 0.3\n\nprint(x+y, z, x+y==z)",
"0.30000000000000004 0.3 False\n"
]
],
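When floats must be compared for approximate equality, the standard library's `math.isclose()` tests within a tolerance instead of demanding exact equality:

```python
import math

x, y, z = 0.1, 0.2, 0.3
print(x + y == z)                              # False
print(math.isclose(x + y, z))                  # True
print(math.isclose(x + y, z, rel_tol=1e-20))   # False: tolerance too tight
```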
[
[
"For high-precision mathematical computation, I recommend working with the scientific computing library NumPy.",
"_____no_output_____"
],
[
"## Lists: list\n\nOf Python's three common sequence types (list, tuple, str), we start with the list; it is probably the sequence most similar to those of other programming languages.\n\n- List indices start from 0.\n- A Python list is similar to arrays in other languages, except that its length is variable and its elements need not all be of the same type.\n\n*Although a list may contain elements of different types, as a matter of programming style I personally do not recommend it.*",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 'a', 4]\ny = [] # an empty list\nprint(x)",
"[1, 2, 'a', 4]\n"
]
],
[
[
"Python lists support all the common sequence operations by default:\n\n- Indexing: a single element `x[index]`, slice-style selection `x[start:end:step]`\n- Number of elements: `len(x)`\n- Appending: a single element `x.append(item)`, another list `x.extend(y)`\n- Insertion: `x.insert(index, item)`\n- Sorting:\n - By value: `x.sort()`, or `sorted(x)` which returns a value\n - Reversal: `x.reverse()`, or `reversed(x)` which returns a value\n- Querying:\n - Membership test `item in x`\n - Number of occurrences `x.count(item)`\n - Index of an element `x.index(item)`\n- Deletion:\n - By index: pop and return an element `x.pop(index)`, or delete an element directly `del(x[index])`\n - By value: remove the item equal to a given value `x.remove(item)`\n - Clearing: `x.clear()`\n- Extremes: maximum `max(x)`, minimum `min(x)`\n\nIn the above, `x` and `y` denote lists, `index` an index number (integer), and `item` a list element.",
"_____no_output_____"
],
[
"### Indexing\n\nThe most basic list operation is retrieving a single element (or several) by its index:",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 4, 5]\nprint(x[0], x[4])",
"1 5\n"
]
],
[
[
"Python supports negative indices; for example, `x[-1]` denotes the last element of the list:",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 4, 5]\nprint(x[-2])",
"4\n"
]
],
[
[
"Python supports a slicing syntax that selects multiple elements by specifying the start, end, and step of the indices.\n\n- `x[start:end]`: selects from `x[start]` through `x[end-1]`\n- `x[start:end:step]`: selects every `step`-th element starting from `x[start]`, up to `x[end-1]` (or the last element before it that can be reached). The step may be negative, but then start >= end is required\n- `x[start:]` or `x[:end]`: selects from `x[start]` to the end, or from the beginning through `x[end-1]`. This simply omits the term on one side of the colon, which can also be filled with the null value None",
"_____no_output_____"
],
[
"<div class=\"alert alert-info\">\n\nImportant\n\nSlice selection ends at element end-1, not element end.\n\nPython is designed this way so that the length of the slice x[start:end] is exactly end minus start, rather than end-start+1.\n\n</div>",
"_____no_output_____"
]
],
[
[
"# select indices 0 through 3; note the end index is excluded\nx = [1, 2, 3, 4, 5]\nprint(x[0:4])",
"[1, 2, 3, 4]\n"
],
[
"# select from 0 to 3 (or 4), taking every 2nd element\nprint(x[0:4:2], x[0:5:2])",
"[1, 3] [1, 3, 5]\n"
],
[
"# a negative step\nprint(x[::-1])",
"[5, 4, 3, 2, 1]\n"
],
[
"# from index 1 to the end, or from the start to the second-to-last item\nprint(x[1:], x[:-1])",
"[2, 3, 4, 5] [1, 2, 3, 4]\n"
],
[
"# a field can also be omitted, which is the same as None\nprint(x[::2], x[None:None:2])",
"[1, 3, 5] [1, 3, 5]\n"
]
],
[
[
"Assignment can be done directly:",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 4, 5]\nx[:2] = [6, 7] # essentially unpacks the right side and assigns to the two element slots\nprint(x)",
"[6, 7, 3, 4, 5]\n"
]
],
[
[
"### Number of elements\n\nPython's `len()` is a standalone function; it is not invoked as `x.len()`. Readers may wish to reflect on the design difference involved.\n\nThis function applies not only to lists but also to the other sequence types.",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 4, 5]\nprint(len(x))",
"5\n"
]
],
[
[
"### Appending\n\nIn Python, use `x.append()` to append an element:",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 4, 5]\nx.append(-2)\n\nprint(x)",
"[1, 2, 3, 4, 5, -2]\n"
]
],
[
[
"To append a list, use `x.extend()`:",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 4, 5]\nx.extend([-2, -1])\n\nprint(x)",
"[1, 2, 3, 4, 5, -2, -1]\n"
]
],
[
[
"Note that Python's built-in instance methods such as `x.append()` return no value, so you cannot assign the result to another variable:",
"_____no_output_____"
]
],
[
[
"y = [1, 2, 3, 4, 5].append(-2)\nprint(y) # an ineffective assignment, since append() returns no value",
"None\n"
]
],
[
[
"To achieve this \"return value\" effect, consider one of the two approaches below:\n\n- Plus sign: Python can join two mutable lists with a plus sign, \"merging\" them together\n- Star: Python supports unpacking a list with a prefixed asterisk",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3]\ny1 = x + [4, 5]\ny2 = [*x, 4, 5]\n\nprint(y1, y2)",
"[1, 2, 3, 4, 5] [1, 2, 3, 4, 5]\n"
]
],
[
[
"### Insertion\n\nUse `x.insert(index, item)` to insert an element at the position of the `index`-th element; the original elements from index onward shift back one place:",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3]\nx.insert(1, -1) # insert at position x[1]\n\nprint(x, x[1])",
"[1, -1, 2, 3] -1\n"
]
],
[
[
"To achieve the \"return value\" effect of the previous subsection, the plus sign or the star again works:",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3]\nitem = -1\n\ny1 = x[:1] + [item] + x[1:]\ny2 = [*x[:1], item, *x[1:]]\n\nprint(y1, y2)",
"[1, -1, 2, 3] [1, -1, 2, 3]\n"
]
],
[
[
"### Sorting\n\n#### Sorting by value\n\nUse `x.sort()` to sort a list, in ascending order by default. Python's sort is a stable sort.",
"_____no_output_____"
]
],
[
[
"x = [1, 4, 3, 2]\nx.sort()\n\nprint(x)",
"[1, 2, 3, 4]\n"
]
],
[
[
"Adding the `reverse=True` option sorts in descending order.",
"_____no_output_____"
]
],
[
[
"x = [1, 4, 3, 2]\nx.sort(reverse=True)\n\nprint(x)",
"[4, 3, 2, 1]\n"
]
],
[
[
"If the elements of the list are not numbers but lists or other sequences, they are ordered by `<` comparison. The following sorts list elements:",
"_____no_output_____"
]
],
[
[
"x = [[1, 2], [4, 3, 2], [3, 4], [3, 2]]\nx.sort()\n\nprint(x)",
"[[1, 2], [3, 2], [3, 4], [4, 3, 2]]\n"
]
],
[
[
"To obtain a sorted copy with a return value, use the `sorted()` function:",
"_____no_output_____"
]
],
[
[
"x = [1, 4, 3, 2]\n\nprint(sorted(x, reverse=True))",
"[4, 3, 2, 1]\n"
]
],
[
[
"#### Reversal\n\nThere are three ways to reverse a list:\n\n- In place: `x.reverse()`\n- With a return value:\n - via the `reversed` generator: `list(reversed(x))`\n - via a slice with step -1: `x[::-1]`",
"_____no_output_____"
]
],
[
[
"x = [1, 4, 3, 2]\nx_copy = [1, 4, 3, 2]\n\nx.reverse()\ny1 = list(reversed(x_copy))\ny2 = x_copy[::-1]\nprint(x, x==y1, x==y2)",
"[2, 3, 4, 1] True True\n"
]
],
[
[
"### Querying\n\nTo query whether an element is in a list, use the `in` keyword:",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 4]\n\nprint(2 in x, 5 in x)",
"True False\n"
]
],
[
[
"Conversely, to determine whether an element is not in a list, negate with `not in`:",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 4]\n\nprint(2 not in x, 5 not in x)",
"False True\n"
]
],
[
[
"To query the position of an element in a list, use `x.index(item)`.\n\n* If the element appears several times in the list, `x.index()` returns only the index of the first occurrence.\n* To return every position where the element appears, use a list comprehension (see the [list comprehensions](#List-comprehensions) section below).",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 2]\nitem = 2\ny1 = x.index(item)\ny2 = [i for i, v in enumerate(x) if v == item]\n\nprint(y1, y2, sep='\\n')",
"1\n[1, 3]\n"
]
],
[
[
"If `item` is not in the list, a ValueError is raised, telling you that the object you queried is not in the list:",
"_____no_output_____"
]
],
[
[
"x.index(5)",
"_____no_output_____"
]
],
[
[
"Using the `x.count()` function avoids this error:",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 2]\n\nprint(x.count(2), x.count(5))",
"2 0\n"
]
],
[
[
"### Deletion\n\n#### Deleting by index\n\nUse `x.pop()` to pop the element at the end of the list, or `x.pop(index)` to pop the element at position index:",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 4]\ny = x.pop()\nprint(y, x)",
"4 [1, 2, 3]\n"
],
[
"x = [1, 2, 3, 4]\ny = x.pop(2)\nprint(y, x)",
"3 [1, 2, 4]\n"
]
],
[
[
"Alternatively, delete the element `x[index]` directly with the `del(x[index])` statement:",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 4]\ndel(x[1])\nprint(x)",
"[1, 3, 4]\n"
]
],
[
[
"#### Deleting by value",
"_____no_output_____"
],
[
"Given a value `item`, `x.remove(item)` deletes the item in the list equal to `item`:\n\n* Like `x.index()`, it removes only the first match.\n* Consider a list comprehension (see the [list comprehensions](#List-comprehensions) section below) to remove all matches.",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 2]\nx_copy = [1, 2, 3, 2]\nitem = 2\n\nx.remove(item)\ny = [k for k in x_copy if k != item]\nprint(x, y, sep='\\n')",
"[1, 3, 2]\n[1, 3]\n"
]
],
[
[
"#### Clearing a list\n\nUse `x.clear()` to empty a list to `[]`:",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 4]\nx.clear()\nprint(x)",
"[]\n"
]
],
[
[
"### Extremes\n\nUse the `max()` or `min()` function to return the extreme values of a list. Like `len()`, these two functions also work on the other sequence types.",
"_____no_output_____"
]
],
[
[
"x = [1, 3, 4, 2]\n\nprint(min(x), max(x))",
"1 4\n"
]
],
[
[
"## Advanced list operations",
"_____no_output_____"
],
[
"### List unpacking\n\nA list can be unpacked \"in place\" into multiple comma-separated elements. Besides unpacking into items of a list as mentioned above (such as `[1, *x, 2]`), the more common use is unpacking into a function's arguments. For example, the `divmod()` function takes two arguments:",
"_____no_output_____"
]
],
[
[
"x = [7, 2]\n\nprint(divmod(*x)) # 7 / 2 = 3 remainder 1 ",
"(3, 1)\n"
]
],
[
[
".. _list-comprehension:\n\n### List comprehensions\n\nA list comprehension filters and transforms the elements of a list with a for loop and an optional if test.",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 4, 5, 6]\ny = [k+10 for k in x[:3]]\nz = [k for k in x if k % 2 == 1]\n\nprint(y, z, sep='\\n')",
"[11, 12, 13]\n[1, 3, 5]\n"
]
],
[
[
"## Tuples\n\nTuples in Python are similar to lists, but their elements are immutable.\n\n- Tuples and lists have different use cases.\n- The elements of a tuple are separated by commas; the surrounding parentheses are optional.",
"_____no_output_____"
]
],
[
[
"x = 1, 2, 3 # same as x = (1, 2, 3)\ny = () # an empty tuple\nz = (1,) # a single-element tuple",
"_____no_output_____"
]
],
[
[
"A tuple can also be unpacked, assigning its inner values to multiple variables:",
"_____no_output_____"
]
],
[
[
"a, b, c = x\nprint(a, b, c)",
"1 2 3\n"
]
],
[
[
"Tuples are indexed in the same way as lists, except that you cannot assign to a tuple's elements:",
"_____no_output_____"
]
],
[
[
"x = 1, 2, 3, 4, 5\nx[::2]",
"_____no_output_____"
]
],
[
[
"The length of a tuple can also be obtained with the `len()` function:",
"_____no_output_____"
]
],
[
[
"x = 1, 2, 3\nprint(len(x))",
"3\n"
]
],
[
[
"## Strings\n\nA string is an immutable sequence. Therefore strings have no \"in-place modification\": no string method changes the value of the original string.\n\nWhen created, a string may be enclosed in single quotes or double quotes; interior quotes can be escaped with a backslash `\\`:",
"_____no_output_____"
]
],
[
[
"x = \"It's a string.\"\ny = 'It\\'s a string.'\nprint(x, y, x == y)",
"It's a string. It's a string. True\n"
]
],
[
[
"Strings can be concatenated and repeated with the plus and times signs; string literals (values directly enclosed in quotes, as opposed to string variables) can be joined with just a space between them.",
"_____no_output_____"
]
],
[
[
"x = \"abc\" * 3 + \"def\"\ny = (\"This is a single-line string \"\n \"but I can write it across lines.\")\nz = x + 'qwe' # a string variable can also be joined to a literal with plus\nprint(x, y, z, sep='\\n')",
"abcabcabcdef\nThis is a single-line string but I can write it across lines.\nabcabcabcdefqwe\n"
]
],
[
[
"Multi-line strings are enclosed in three consecutive single quotes (or double quotes). All line breaks inside a multi-line string are preserved; a backslash `\\` at the end of a line suppresses that line's break:",
"_____no_output_____"
]
],
[
[
"x = \"\"\"\\\nThis is\na multiline\nstring.\n\"\"\"\nprint(x)",
"This is\na multiline\nstring.\n\n"
]
],
[
[
"### Forced type conversion\n\nUse `str()` to force other types to strings (where conversion is possible):",
"_____no_output_____"
]
],
[
[
"x = 123\ny = True\nz = [3, [1, 2], 4]\n\nprint(str(x), str(y), str(z), sep='\\n')",
"123\nTrue\n[3, [1, 2], 4]\n"
]
],
[
[
"A string can also be converted to a list, which essentially splits it into individual characters:",
"_____no_output_____"
]
],
[
[
"x = \"abcdefg\"\nprint(list(x))",
"['a', 'b', 'c', 'd', 'e', 'f', 'g']\n"
]
],
[
[
"### Escapes and r-strings\n\nCommon escape characters include:\n\n| Character | Meaning |\n| --- | :--- |\n| `\\b` | backspace |\n| `\\n` | newline |\n| `\\r` | carriage return (move to the start of the line) |\n| `\\t` | tab |\n| `\\\\` | an escaped backslash |\n\nIf you do not want the backslash to escape, add `r` **before the opening quote** of the string:",
"_____no_output_____"
]
],
[
[
"x = \"String with\\tTAB.\"\ny = r\"String with\\tTAB.\"\nprint(x, y, sep='\\n')",
"String with\tTAB.\nString with\\tTAB.\n"
]
],
[
[
"### String indexing\n\nIndexing a string still yields a string. Python has no character type; a single character is simply a string of length 1.\n\nString indexing still supports the same slicing syntax as lists:",
"_____no_output_____"
]
],
[
[
"x = \"This is a string.\"\nx[:6]",
"_____no_output_____"
],
[
"x[::2]",
"_____no_output_____"
]
],
[
[
"One caveat: when slicing a string, the indices may exceed the valid range without raising an error:",
"_____no_output_____"
]
],
[
[
"x = \"This is a string.\"\nprint(x[5:2333])",
"is a string.\n"
]
],
[
[
"### Splitting strings\n\nThe main string-splitting methods in Python are `split()`, `rsplit()`, and `splitlines()`.",
"_____no_output_____"
],
[
"Use `x.split(sep, maxsplit)` to split the string `x` on the string sep, at most maxsplit times:\n\n- The splitting string may be longer than one character\n- The default `maxsplit=-1` splits the string completely",
"_____no_output_____"
]
],
[
[
"x = \"This is a string.\"\n\nprint(x.split('s')) # 默认完全分割\nprint(x.split('s', 1)) # 最多分割 1 次,即分割成 2 份\nprint(x.split('z')) # 空分割\nprint(x.split()) # 默认以空格分割",
"['Thi', ' i', ' a ', 'tring.']\n['Thi', ' is a string.']\n['This is a string.']\n['This', 'is', 'a', 'string.']\n"
]
],
[
[
"We often use this trick in for loops:",
"_____no_output_____"
]
],
[
[
"fruits = \"apple,pear,orange,banana\"\nfor fruit in fruits.split(','):\n print(fruit)",
"apple\npear\norange\nbanana\n"
]
],
[
[
"Python also provides `rsplit()`, which splits from the right side of the string. Compare the two in the example below:",
"_____no_output_____"
]
],
[
[
"x = \"This is a string.\"\n\nprint(x.split('s', 1))\nprint(x.rsplit('s', 1))",
"['Thi', ' is a string.']\n['This is a ', 'tring.']\n"
]
],
[
[
"Finally there is `splitlines(keepends=False)`, which splits on all the symbols in the line-boundary set (and therefore works better than a manual `split('\\n')`):\n\n- With `keepends=True`, it retains the line-break characters.\n- For the set of line boundaries this function recognises, see the [official documentation: str.splitlines()](https://docs.python.org/zh-cn/3/library/stdtypes.html?highlight=join#str.splitlines).\n- Another difference from `split()` is the handling of a trailing blank line; readers can compare them in the example below.",
"_____no_output_____"
]
],
[
[
"x = \"line 1\\nline 2\\r\\nline 3\\n\"\n\nprint(x.splitlines())\nprint(x.splitlines(keepends=True))\nprint(x.split('\\n')) # 多一行",
"['line 1', 'line 2', 'line 3']\n['line 1\\n', 'line 2\\r\\n', 'line 3\\n']\n['line 1', 'line 2\\r', 'line 3', '']\n"
]
],
[
[
"### Joining strings: join()\n\nOpposite to `x.split(sep)`, the function `sep.join(lst)` joins a list with the given separator into a single string:",
"_____no_output_____"
]
],
[
[
"data = [\"apple\", \"pear\", \"orange\", \"banana\"]\ns = ' ~ '.join(data)\nprint(s)",
"apple ~ pear ~ orange ~ banana\n"
]
],
[
[
"### Replacing substrings: replace()\n\nThe string replacement method `x.replace(old, new, count)` replaces the first count occurrences of the substring old (searching left to right) with new.",
"_____no_output_____"
]
],
[
[
"x = \"This is a string.\"\ny1 = x.replace(\"s\", \"t\")\ny2 = x.replace(\"s\", \"t\", 2)\nprint(y1, y2, sep='\\n')",
"Thit it a ttring.\nThit it a string.\n"
]
],
[
[
"### Checking prefix and suffix matches: startswith() / endswith()\n\nUse the methods `x.startswith(prefix)` and `x.endswith(suffix)` to check whether the ends of a string match prefix or suffix. Perhaps the most common use is checking file extensions:",
"_____no_output_____"
]
],
[
[
"files = [\"doc-A.txt\", \"doc-B.md\", \"Doc-C.txt\"]\n\nfor f in files:\n if f.startswith('doc'):\n print(f\"doc - {f}\")\n if f.endswith('txt'):\n print(f\"txt - {f}\")",
"doc - doc-A.txt\ntxt - doc-A.txt\ndoc - doc-B.md\ntxt - Doc-C.txt\n"
]
],
[
[
"### Stripping leading and trailing characters: strip()\n\nUse `strip()` to remove matching characters from both ends, `lstrip()` to strip the left side only, or `rstrip()` to strip the right side only.",
"_____no_output_____"
]
],
[
[
"files = [\"doc-A.txt\", \"doc-B.md\", \"Doc-C.txt\"]\n\nfor f in files:\n s = (f\"strip: {f.strip('d'):10}\\t\"\n f\"lstrip: {f.lstrip('doc-'):10}\\t\"\n f\"rstrip: {f.rstrip('txt')}\")\n print(s)",
"strip: oc-A.txt \tlstrip: A.txt \trstrip: doc-A.\nstrip: oc-B.m \tlstrip: B.md \trstrip: doc-B.md\nstrip: Doc-C.txt \tlstrip: Doc-C.txt \trstrip: Doc-C.\n"
]
],
[
[
"The default `strip()` removes the whitespace on both sides of a string, which comes in handy when reading files:",
"_____no_output_____"
]
],
[
[
"x = \"line 1 \\n line 2 \\r\\n\\tline 3\\n\"\nlines = x.splitlines()\nlines_strip = [line.strip() for line in lines]\nprint(lines, lines_strip, sep='\\n')",
"['line 1 ', ' line 2 ', '\\tline 3']\n['line 1', 'line 2', 'line 3']\n"
]
],
[
[
"### String formatting: format() and f-strings",
"_____no_output_____"
],
[
"String formatting replaces part of a string (usually marked with `{}`) with external data (such as variable values or other strings).\n\n- The method `x.format()` is supported in every version of Python 3\n- Format strings prefixed with f (f-strings) are supported from Python 3.6 onward\n\nStrings can also be formatted with the percent sign `%`, a syntax carried over from Python 2 (and still usable in Python 3). This text does not cover it further; interested readers can look it up themselves.",
"_____no_output_____"
],
[
"We often need to embed the value of a variable (possibly not of string type) in a string, for example:",
"_____no_output_____"
]
],
[
[
"lang, ver = \"Python\", 3\nx = \"We are learning \" + lang + \" \" + str(ver) + \".\"\nprint(x)",
"We are learning Python 3.\n"
]
],
[
[
"The example above manages this with plus-sign concatenation, but it is decidedly clumsy.\n\n#### The format() method\n\nPython supports formatting strings in the form `x.format()`:",
"_____no_output_____"
]
],
[
[
"lang, ver = \"Python\", 3\nx = \"We are learning {} {}.\".format(lang, ver)\n\n# arguments can also be accessed by name\ny = \"We are learning {mylang} {myver}.\".format(mylang=lang, myver=ver)\n\n# dictionary unpacking also works; see the dictionary chapter\nd = {\"mylang\": lang, \"myver\": ver}\nz = \"We are learning {mylang} {myver}.\".format(**d)\n\nprint(x, x==y, x==z, sep='\\n')",
"We are learning Python 3.\nTrue\nTrue\n"
]
],
[
[
"All of the spellings above produce the same result. As you can see, curly braces serve as placeholders in a format string. By default, the arguments of `x.format()` are all converted to string form.\n\n- To print curly braces inside a format string, double them (such as `{{`).\n- If an argument is passed as a key-value pair (the form `key=value`), write the key name inside the braces. The advantage is better code readability.\n\nBy repeating a key name or an index, the same target can be used repeatedly:",
"_____no_output_____"
]
],
[
[
"# reuse the lang variable\nx = \"We are learning {0} {1}. I love {0}!\".format(lang, ver)\ny = \"We are learning {mylang} {myver}. I love {mylang}!\".format(mylang=lang, myver=ver)\nprint(x, x==y, sep='\\n')",
"We are learning Python 3. I love Python!\nTrue\n"
]
],
[
[
"A more involved example converts an argument to a specific string format, such as a float with a given number of decimal places. This requires a colon `:` to specify the format: the format spec sits to the right of the colon, and the key name (left empty if there is none) to the left:",
"_____no_output_____"
]
],
[
[
"val, digits = 3.1415926535, 5\nx = \"PI = {:.5f}...\".format(val)\ny = \"PI = {val:.5f}...\".format(val=val)\nz = \"PI = {0:.{1}f}...\".format(val, digits)\nprint(x, x==y, x==z, sep='\\n')",
"PI = 3.14159...\nTrue\nTrue\n"
]
],
[
[
"The formats Python supports:\n\n| Spec | Example | Meaning | Input | Output |\n| :--- | :---: | :--- | :--- | :--- |\n| `s` | `{:5s}` | string, minimum width 5 | `\"abc\"` | \"abc  \" |\n| Any number |\n| `e` | `{:.2e}` | scientific notation, 2 decimal places (default 6) | `1234.567` | \"1.23e+03\" |\n| `E` | / | same, but with an uppercase E | `1234.567` | \"1.23E+03\" |\n| `f` | `{:.4f}` | 4 decimal places (default 6) | `1234.567` | \"1234.5670\" |\n| `F` | / | same, but with uppercase NAN and INF | `float('inf')` | \"INF\" |\n| `g` | / | accepts a number and chooses the format itself | | |\n| | `{:.5g}` | 5 significant digits (fixed-point chosen automatically) | `1234.567` | \"1234.6\" |\n| | `{:.3g}` | 3 significant digits (scientific chosen automatically) | `1234.567` | \"1.23e+03\" |\n| | `{:g}` | (default 6 digits) | `float('nan')` | \"nan\" |\n| `G` | `{:.3G}` | same, but with uppercase E, NAN, and INF | `1234.567` | \"1.23E+03\" |\n| `+` | `{:+.2f}` | sign shown for positive and negative; 2 decimal places | `1234.567` | \"+1234.57\" |\n| `␣` | `{: .2f}` | space for positive, minus sign for negative; 2 decimal places | `1234.567` | \" 1234.57\" |\n| `%` | `{:.1%}` | percentage, 1 decimal place | `0.12` | \"12.0%\" |\n| Integers |\n| `d` | `{:d}` | decimal integer | `123` | \"123\" |\n| `b` | `{:b}` | binary integer | `123` | \"1111011\" |\n| `o` | `{:o}` | octal integer | `123` | \"173\" |\n| `x` | `{:x}` | hexadecimal integer | `123` | \"7b\" |\n| `X` | / | same, but with uppercase A-F | `123` | \"7B\" |\n| Alignment |\n| `>` | `{:>4d}` | force right alignment (default for numbers) | `123` | \" 123\" |\n| | `{:0>4d}` | pad with 0 instead of spaces | `123` | \"0123\" |\n| `<` | `{:<4d}` | force left alignment (default for other objects) | `123` | \"123 \" |\n| `^` | `{:^4d}` | force centred alignment | `123` | \"123 \" |\n| Others |\n| `,` | `{:,}` | thousands separated by commas | `1234` | \"1,234\" |\n| `#` | `{:#b}` | keep the input type explicit; e.g. binary starts with \"0b\" | `123` | \"0b1111011\" |",
"_____no_output_____"
],
[
"下面是几个例子:",
"_____no_output_____"
]
],
[
[
"# 在数字两侧各添加3个星号\nx = 1234\nprint(\"{:*^{}d}\".format(x, len(str(x))+6))",
"***1234***\n"
],
[
"# 输入给定十进制数的其他进制形式\nx = 123\nfor base in \"dboxX\":\n print(\"Type {base}: {val:#{base}}\".format(base=base, val=x))",
"Type d: 123\nType b: 0b1111011\nType o: 0o173\nType x: 0x7b\nType X: 0X7B\n"
]
],
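Not from the original notebook — a short, hedged sanity check of several rows in the format-spec table above; all values are standard CPython `str.format` behavior:

```python
# Verify a few entries of the format-spec table.
assert "{:.2e}".format(1234.567) == "1.23e+03"
assert "{:.4f}".format(1234.567) == "1234.5670"
assert "{:.5g}".format(1234.567) == "1234.6"
assert "{:+.2f}".format(1234.567) == "+1234.57"
assert "{:,}".format(1234) == "1,234"
assert "{:#b}".format(123) == "0b1111011"
assert "{:5s}".format("abc") == "abc  "  # strings left-align by default
print("all format examples verified")
```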
[
[
"#### f 字符串\n\n上面的例子均由 `x.format()` 方法实现,但是不足之处在于不能方便地直接利用已有的变量值。比如上文中给浮点数保留指定位数的例子,变量 `val` 与 `digits` 仍然需要显式地作为 `format()` 方法的输入参数:",
"_____no_output_____"
]
],
[
[
"val, digits = 3.1415926535, 5\nx = \"PI = {0:.{1}f}...\".format(val, digits)\ny = \"PI = {myval:.{mydigits}f}...\".format(myval=val, mydigits=digits)\nprint(x, x==y, sep='\\n')",
"PI = 3.14159...\nTrue\n"
]
],
[
[
"当然,采用上文所介绍过的字典展开方式,你可以先声明一个字典,然后在传入时用双星号前缀来展开:",
"_____no_output_____"
]
],
[
[
"val, digits = 3.1415926535, 5\nd = {\"myval\": val, \"mydigits\": digits}\nx = \"PI = {myval:.{mydigits}f}...\".format(**d)\nprint(x)",
"PI = 3.14159...\n"
]
],
[
[
"这样, `x.format()` 方法的输入就不显得太累赘。\n\n但并不是所有数据都适合写入同一个字典。因此,我推荐使用更方便的 f 字符串来格式化字符串,在字符串的左侧引号之前添加字母 f 即可:",
"_____no_output_____"
]
],
[
[
"# Python >= 3.6\nval, digits = 3.1415926535, 5\nx = f\"PI = {val:.{digits}f}...\"\nprint(x)",
"PI = 3.14159...\n"
]
],
[
[
"与 `x.format()` 方法一样,f 字符串支持格式化字串的所有格式。f 字符串也允许在花括号内进行合法的 Python 表达式书写:",
"_____no_output_____"
]
],
[
[
"print(f\"{2**3:0>4d}\")",
"0008\n"
]
],
[
[
"但是,在 Python 3.12 之前,花括号中的表达式不能显式地含有反斜线:",
"_____no_output_____"
]
],
[
[
"print(f\"A multiline string:\\n{'a'+'\\n'+'b'}\")",
"_____no_output_____"
]
],
[
[
"你可以通过将带反斜线的值赋值到变量来规避这一点:",
"_____no_output_____"
]
],
[
[
"x = \"{}\\n{}\".format('a', 'b')\nprint(f\"A multiline string:\\n{x}\")",
"A multiline string:\na\nb\n"
]
],
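Another common workaround, not shown in the original notebook, is to produce the backslash-containing value with `chr(10)` (the newline character), so no literal backslash appears inside the braces:

```python
# chr(10) is '\n'; this keeps backslashes out of the f-string expression.
parts = ["a", "b"]
print(f"A multiline string:\n{chr(10).join(parts)}")
```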
[
[
"### 大小写转换\\*\n\nPython 提供了丰富的大小写转换支持,包括\n\n- 全体强制大小写 `upper()/lower()`\n- 仅首字母大写 `capitalize()`\n- 每个单词首字母大写 `title()`",
"_____no_output_____"
]
],
[
[
"x = \"It's a string. This is another.\"\nn = len(x) + 10\n\nd = {\n \"全大写\": x.upper(),\n \"全小写\": x.lower(),\n \"首字母大写\": x.capitalize(),\n \"单词首字母大写\": x.title()\n}\nfor k, v in d.items():\n print(f\"{k:10}\\t{v}\")",
"全大写 \tIT'S A STRING. THIS IS ANOTHER.\n全小写 \tit's a string. this is another.\n首字母大写 \tIt's a string. this is another.\n单词首字母大写 \tIt'S A String. This Is Another.\n"
]
],
[
[
"注意, `title()` 方法会将一些并非单词头的字母识别为单词头(比如上例中 `It's` 的字母 s)。要避免这一情形,可以配合正则表达式处理,参考 [官方文档:str.title()](https://docs.python.org/zh-cn/3/library/stdtypes.html?highlight=split#str.title) 。",
"_____no_output_____"
],
[
"## 字典:dict\n\nPython 的字典以键值对(key-value pairs)的形式存储数据:\n\n- 每一项数据之间用逗号 `,` 分隔,所有数据外侧用一组花括号 `{}` 包裹\n- 每一项数据都是一组键值对,键与值之间用冒号 `:` 分隔\n- 一个字典内,不能存在相同的键名\n- 在 Python >= 3.7 的版本中,字典的项变更为 **有序的** (指在循环中被迭代,或被强制转换成序列时,键或键值对的顺序是稳定的),其顺序与创建时每一项被加入的顺序相同。",
"_____no_output_____"
]
],
[
[
"x = {} # 空字典\ny = {\"a\": 1, \"b\": 3}\nprint(y)",
"{'a': 1, 'b': 3}\n"
]
],
[
[
"字典的每一项数据的值可以是任意的数据类型。同时,不同于 JSON 文件中的键,Python 字典的键也可以是大多数类型(而不仅仅是字符串)。\n\n*从编程习惯上讲,我并不推荐在字典键中使用字符串以外的其他类型。*\n\n字典可以用 `len()` 来返回其键值对的个数,用 `in` 来查询一个键是否在字典中:",
"_____no_output_____"
]
],
[
[
"x = {}\nprint(len(x))\nprint(\"a\" in x)",
"0\nFalse\n"
]
],
[
[
"### 字典初始化\n\n字典有许多初始化方式。\n\n- 依次添加键值对\n- 利用 `dict()` 构造\n - 从成对序列数据中构造\n - 显式地传入字面键值\n- 利用 `fromkeys()` 方法\n- 字典解析\n\n最朴素的方式是依次添加键值对。先新建一个空字典,然后依次向内添加键值对:",
"_____no_output_____"
]
],
[
[
"d = {}\nd[\"a\"] = 1\nd[\"c\"] = 3\nd[\"b\"] = 2\nprint(d)",
"{'a': 1, 'c': 3, 'b': 2}\n"
]
],
[
[
"在键与值分别存储在两个序列中时,我们可以利用 for 循环:",
"_____no_output_____"
]
],
[
[
"keys = \"a\", \"c\", \"b\"\nvals = 1, 3, 2\n\nd = {}\nfor k, v in zip(keys, vals):\n d[k] = v\nprint(d)",
"{'a': 1, 'c': 3, 'b': 2}\n"
]
],
[
[
"从成对序列数据中构造要求一种整合的数据存储方式。如果键与值是“成对地”存储在一个序列中,可以直接使用 `dict()` 来进行初始化:",
"_____no_output_____"
]
],
[
[
"data = [[\"a\", 1], [\"c\", 3], [\"b\", 2]]\nd = dict(data)\nprint(d)",
"{'a': 1, 'c': 3, 'b': 2}\n"
]
],
[
[
"在字典数据不多时,也可能考虑显式地传入字面键值(键自动视为字符串):",
"_____no_output_____"
]
],
[
[
"d = dict(a=1, c=3, b=2)\nprint(d)",
"{'a': 1, 'c': 3, 'b': 2}\n"
]
],
[
[
"使用 `fromkeys()` 方法能够快速初始化已知键名的字典,将所有键都赋同一个初始值(比如 `None` ):",
"_____no_output_____"
]
],
[
[
"keys = \"a\", \"c\", \"b\"\nd = {}.fromkeys(keys, None)\nprint(d)",
"{'a': None, 'c': None, 'b': None}\n"
]
],
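One pitfall worth knowing (my own added example, not from the original notebook): `fromkeys()` assigns the *same* default object to every key, so a mutable default such as a list is shared by all keys. A dict comprehension creates a fresh value per key:

```python
keys = "a", "c", "b"

d = dict.fromkeys(keys, [])  # every key shares the SAME list object
d["a"].append(1)
print(d)                     # the appended 1 appears under every key

d2 = {k: [] for k in keys}   # a fresh list per key
d2["a"].append(1)
print(d2)                    # only "a" is affected
```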
[
[
"类似于列表解析,字典也支持解析:",
"_____no_output_____"
]
],
[
[
"data = [[\"a\", 1], [\"c\", 3], [\"b\", 2]]\nd1 = {x[0]: x[1] for x in data}\n\nkeys = \"a\", \"c\", \"b\"\nvals = 1, 3, 2\nd2 = {k: v for k, v in zip(keys, vals)}\n\nprint(d1, d1==d2)",
"{'a': 1, 'c': 3, 'b': 2} True\n"
]
],
[
[
"### 字典的视图\n\n由于字典有键、值、项(键值对)这三个概念,Python 也提供了对应的三种视图:\n\n- 键视图: `x.keys()` ,一个依序的、每个键为一个元素的序列\n- 值视图: `x.values()` ,一个与键视图中的键依次对应的值组成的序列\n- 项视图: `x.items()` ,一个依上述顺序的、每个元素是一个键值对元组的序列\n\n用 for 循环来展示一下这三种视图:",
"_____no_output_____"
]
],
[
[
"x = {\"a\": 1, \"c\": 3}\nfor k in x.keys():\n print(f\"key: {k}\")\n\nfor v in x.values():\n print(f\"val: {v}\")\n\nfor i in x.items():\n print(f\"item: {i}\")",
"key: a\nkey: c\nval: 1\nval: 3\nitem: ('a', 1)\nitem: ('c', 3)\n"
]
],
[
[
"在 `x.keys()` 中循环其实与在 `x` 中循环的结果是相同的,都是遍历字典的键:",
"_____no_output_____"
]
],
[
[
"for k in x:\n print(k)",
"a\nc\n"
]
],
[
[
"字典的项视图 `items()` 常常被用在 for 循环中,解包成两个循环变量:",
"_____no_output_____"
]
],
[
[
"x = {\"a\": 1, \"c\": 3}\nfor k, v in x.items():\n print(f\"{k} = {v}\")",
"a = 1\nc = 3\n"
]
],
[
[
"### 按键索引值:get() / setdefault()\n\n要按字典键索引其对应的值,有以下几种方法:\n\n- 如果确认键 key 位于字典 x 中,可以直接使用该键来索引 `x[key]`\n- 如果键 key 可能不在 x 中:\n - 用 `x.get(key, default=None)` 方法,失败时它会返回一个备用值 `default`\n - 用 `x.setdefault(key, default=None)` 方法,失败时它会把键值对 `key: default` 添加到字典",
"_____no_output_____"
]
],
[
[
"x = {\"a\": 1, \"c\": 3}\nprint(x[\"a\"])",
"1\n"
]
],
[
[
"更安全的选择是使用 `x.get(key, default=None)` 方法。它的作用是:如果 key 存在于 x 的键中,那么返回该键对应的值;否则,返回 default 指定的值。",
"_____no_output_____"
]
],
[
[
"x = {\"a\": 1, \"c\": 3}\nprint(x.get(\"b\", \"Not in dict\"))",
"Not in dict\n"
]
],
[
[
"另一个选择是利用 `x.setdefault(key, default=None)` 方法。如果 key 存在于 x 的键中,那么返回该键对应的值;否则,将 default 值与 key 键组成键值对加入到字典 x 中,并返回 default。",
"_____no_output_____"
]
],
[
[
"x = {\"a\": 1, \"c\": 3}\ny1 = x.setdefault(\"b\", 2)\ny2 = x[\"b\"] # 键值对已经被添加\nprint(y1, y2)",
"2 2\n"
]
],
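Two typical usage patterns for these methods (added illustration, not from the original notebook): counting with `get()`, where the default `0` covers unseen keys, and grouping with `setdefault()`, where the default list is inserted on the first miss:

```python
# Counting characters with get().
counts = {}
for ch in "abracadabra":
    counts[ch] = counts.get(ch, 0) + 1
print(counts)  # {'a': 5, 'b': 2, 'r': 2, 'c': 1, 'd': 1}

# Grouping words by first letter with setdefault().
groups = {}
for word in ["apple", "ant", "bee"]:
    groups.setdefault(word[0], []).append(word)
print(groups)  # {'a': ['apple', 'ant'], 'b': ['bee']}
```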
[
[
"### 删除项:pop() / popitem()\n\n最简单的自然是 `del()` 删除函数:",
"_____no_output_____"
]
],
[
[
"x = {\"a\": 1, \"c\": 3, \"b\": 2}\ndel(x[\"a\"])\nprint(x)",
"{'c': 3, 'b': 2}\n"
]
],
[
[
"字典的 `x.pop(key)` 与列表在表现上类似,也是返回一个值的同时将其从容器中删除:",
"_____no_output_____"
]
],
[
[
"x = {\"a\": 1, \"c\": 3, \"b\": 2}\ny = x.pop(\"a\")\nprint(x, y)",
"{'c': 3, 'b': 2} 1\n"
]
],
[
[
"特别指出,方法 `pop()` 也有一个名为 `default` 的参数,会在字典不包含键时返回该 default 值(如果不显式地给出 default 的值,那么会造成 KeyError)。这一点与 `setdefault()` 方法相映成趣。",
"_____no_output_____"
]
],
[
[
"x = {\"a\": 1, \"c\": 3, \"b\": 2}\ny = x.pop(\"a\", 10) # 正常返回 x[\"a\"],并删除键 \"a\"\nz = x.pop(\"a\", 100) # 没有键 \"a\",返回默认值 100\nprint(x, y, z)",
"{'c': 3, 'b': 2} 1 100\n"
]
],
[
[
"最后,介绍 `popitem()` ,这个方法较少被用到。它并不指定键名来抛出一个项,而是按照字典键的顺序,反向地依次抛出字典的项——即后进先出(LIFO),如同对栈容器进行弹栈操作一样。\n\n*对于 Python 3.7 之前的版本,它并不是依照 LIFO 顺序弹出字典项的,而是按照随机选择的顺序。* ",
"_____no_output_____"
]
],
[
[
"x = {\"a\": 1, \"c\": 3, \"b\": 2}\nwhile len(x) > 0:\n y = x.popitem()\n print(y)",
"('b', 2)\n('c', 3)\n('a', 1)\n"
]
],
[
[
"### 更新字典:update()\n\n通过 `x.update(y)` ,字典 x 可以根据字典 y 的键值对来就地更新字典 x 的数据:\n\n- 在字典 x 与 y 中都存在的键,以 y 中的值为准\n- 仅在一个字典中存在的键,得以保留",
"_____no_output_____"
]
],
[
[
"x = {\"a\": 1, \"c\": 3, \"b\": 2}\ny = {\"a\": -10, \"d\": 4}\nx.update(y)\nprint(x)",
"{'a': -10, 'c': 3, 'b': 2, 'd': 4}\n"
]
],
[
[
"利用循环,我们可以实现与 `update()` 方法相同的效果:",
"_____no_output_____"
]
],
[
[
"x = {\"a\": 1, \"c\": 3, \"b\": 2}\ny = {\"a\": -10, \"d\": 4}\n\nfor k in y:\n x[k] = y[k]\nprint(x)",
"{'a': -10, 'c': 3, 'b': 2, 'd': 4}\n"
]
],
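Besides in-place `update()`, there are merge forms that leave both inputs unchanged (my own added note): dict unpacking `{**x, **y}` works on Python >= 3.5, and the merge operator `|` works on Python >= 3.9. In both, values from the right-hand dict win on duplicate keys:

```python
x = {"a": 1, "c": 3, "b": 2}
y = {"a": -10, "d": 4}

z1 = {**x, **y}  # Python >= 3.5; x and y are left unchanged
print(z1)        # {'a': -10, 'c': 3, 'b': 2, 'd': 4}

z2 = x | y       # Python >= 3.9; equivalent merge operator
print(z1 == z2)  # True
```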
[
[
"### 更改键名\\*\n\n字典并没有单独提供更改键名的方法,但这也是一个实用场景。由于 Python >= 3.7 版本引入了字典键的顺序,要在更改键名时保持键的顺序也显得重要。\n\n先说明一种会打乱键顺序的键名更改方法,那就是将指定键用 `pop(oldkey)` 弹出然后赋值给 `d[newkey]` 。下面以将键名改为小写为例:",
"_____no_output_____"
]
],
[
[
"d = dict(a=1, B=2, c=3, D=4)\ndkey = dict(B=\"b\", D=\"d\")\n\nfor oldkey, newkey in dkey.items():\n d[newkey] = d.pop(oldkey)\nprint(d)",
"{'a': 1, 'c': 3, 'b': 2, 'd': 4}\n"
]
],
[
[
"可以看到,字典 d 的键顺序也被打乱了。要保留顺序,只能遍历所有的字典键:",
"_____no_output_____"
]
],
[
[
"d = {\"a\": 1, \"B\": 2, \"c\": 3, \"D\": 4}\ndkey = {\"B\": \"b\", \"D\": \"d\"}\n\noldkeys = tuple(d.keys())\nfor k in oldkeys:\n # 如果有新键名则使用 dkey[k],否则使用旧键名 k\n d[dkey.get(k, k)] = d.pop(k)\n\nprint(d)",
"{'a': 1, 'b': 2, 'c': 3, 'd': 4}\n"
]
],
[
[
"## 集合\n\nPython 中的集合借鉴了数学意义上的集合概念,要求内部的元素不能重复。\n\n- 集合 set 是可变的;Python 也提供了一种不可变集合 frozenset ,其不涉及可变性部分的用法与 set 大致相同。\n\n集合用花括号括起的、逗号分隔的多个项来表示。但是,空集合不能使用 `{}` 来指明(因为这代表空字典),请使用 `set()` 指明。",
"_____no_output_____"
]
],
[
[
"x = set([1, 2, 3, 2])\ny = {1, 2, 3}\nz = set() # 空集合\nprint(x, y, z)",
"{1, 2, 3} {1, 2, 3} set()\n"
]
],
[
[
"集合同样支持长度度量 `len()` ,包含性检查 `in` ,以及循环遍历。",
"_____no_output_____"
]
],
[
[
"x = {1, 2, 3}\nprint(len(x), 4 in x)\n\nfor k in x:\n print(k, end=' ')",
"3 False\n1 2 3 "
]
],
[
[
"### 集合运算与包含关系\n\n集合运算包括交集、并集、差集、对称差集,包含关系包括(真)子集、(真)超集。\n\n| 运算 | 运算符 | 示例 | 说明 | 集合方法 | 示例 |\n| --- | --- | --- | :--- | :--- | :--- |\n| 交 | `&` | `a & b` | 同时位于两个集合中的元素 | `intersection` | `a.intersection(b, c, ...)` |\n| 并 | `\\|` | `a \\| b` | 至少位于其一集合中的元素 | `union` | `a.union(b, c, ...)` |\n| 差 | `-` | `a - b` | 只位于 a 而不位于 b 的元素 | `difference` | `a.difference(b, c, ...)` |\n| 对称差 | `^` | `a ^ b` | 位于且仅位于其一集合中的元素 | `symmetric_difference` | `a.symmetric_difference(b)` |\n| 子集 | `<=` | `a <= b` | a 的所有元素都在 b 中 | `issubset` | `a.issubset(b)` |\n| 真子集 | `<` | `a < b` | a 是 b 的子集且 a、b 不同 | / | / |\n| 超集 | `>=` | `a >= b` | b 的所有元素都在 a 中 | `issuperset` | `a.issuperset(b)` |\n| 真超集 | `>` | `a > b` | b 是 a 的子集且 a、b 不同 | / | / |\n| 互斥 | / | / | a 与 b 没有共同元素 | `isdisjoint` | `a.isdisjoint(b)` |\n\n注意,上述命令中:\n\n- 交、并、差可以输入多个集合(如最后一列所示)。如果使用运算符,则重复使用相同的运算符连接即可。\n- 对于“运算符”这一列命令,变量 a 与 b 都必须是集合(或者不可变集合)类型。对于“集合方法”这一列命令,只有 a 必须是严格的集合(或者不可变集合)类型,b 可以是任意的可迭代对象。\n- 上述集合方法也可以用 `set` 来调用。例如,取差集可以写为 `set.difference(a, b)` 。\n- 上述运算的返回值的结果均以 a 为准。例如,如果 a 是 set 类型而 b 是 frozenset 类型,那么返回值将是 set 类型。",
"_____no_output_____"
]
],
[
[
"a, b = {1, 2, 3}, {2, 4, 5}\nset_funcs = {\n \"交\": set.intersection,\n \"并\": set.union,\n \"差\": set.difference,\n \"对称差\": set.symmetric_difference, \n \"子集\": set.issubset,\n \"超集\": set.issuperset,\n \"互斥\": set.isdisjoint\n}\n\nfor text, func in set_funcs.items():\n print(f\"{text:8}\\t{func(a, b)}\")",
"交 \t{2}\n并 \t{1, 2, 3, 4, 5}\n差 \t{1, 3}\n对称差 \t{1, 3, 4, 5}\n子集 \tFalse\n超集 \tFalse\n互斥 \tFalse\n"
]
],
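The operator and method forms described above can be checked against each other directly (added verification, not from the original notebook); note the last line also exercises the point that the method form accepts any iterable, not just sets:

```python
a, b = {1, 2, 3}, {2, 4, 5}

assert a & b == set.intersection(a, b) == {2}
assert a | b == set.union(a, b) == {1, 2, 3, 4, 5}
assert a - b == set.difference(a, b) == {1, 3}
assert a ^ b == set.symmetric_difference(a, b) == {1, 3, 4, 5}

# The method form also accepts arbitrary iterables:
assert a.difference([2, 3]) == {1}
print("set operator/method equivalences hold")
```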
[
[
"### 更新集合\n\n集合的更新是就地更新,也涉及到上面的集合运算:\n\n- 直接更新 `a.update(b, ...)` ,即将并集赋回,等同于 `a |= b | ...`\n- 只保留交集 `a.intersection_update(b, ...)` ,即将交集赋回,等同于 `a &= b & ...`\n- 只保留差集 `a.difference_update(b, ...)` ,等同于 `a -= b | ...`\n- 只保留对称差集 `a.symmetric_difference_update(b)` ,等同于 `a ^= b`\n\n以上命令中,除了对称差以外的运算都可以输入多个集合。",
"_____no_output_____"
]
],
[
[
"a, b = {1, 2, 3}, {2, 4, 5}\nset_funcs = {\n \"(并)更新\": set.update,\n \"交更新\": set.intersection_update,\n \"差更新\": set.difference_update,\n \"对称差更新\": set.symmetric_difference_update\n}\n\nfor text, func in set_funcs.items():\n a_copy = a.copy() # 拷贝一个副本,避免直接对 a 改动\n func(a_copy, b)\n print(f\"{text:8}\\t{a_copy}\")",
"(并)更新 \t{1, 2, 3, 4, 5}\n交更新 \t{2}\n差更新 \t{1, 3}\n对称差更新 \t{1, 3, 4, 5}\n"
]
],
[
[
"### 增删元素\n\n本节中的命令只对可变集合(set 类型)有效,对不可变集合(frozenset 类型)无效。\n\n- 增加:`add(elem)`\n- 移除:\n - 移除指定:\n - `remove(elem)` :移除一个不存在集合中的元素时会造成 KeyError\n - `discard(elem)` :尝试移除一个元素,如果不存在集合中则静默\n - 随机移除: `pop()` ,随机返回一个集合中的元素,并将其从集合中移除\n - 清空: `clear()`\n\n",
"_____no_output_____"
]
],
[
[
"a = {1, 2, 3}\na.add(4)\nprint(\"After add\\t\", a)\n\na.remove(3)\nprint(\"After remove\\t\", a)\n\na.discard(100)\nprint(\"After discard\\t\", a)",
"After add\t {1, 2, 3, 4}\nAfter remove\t {1, 2, 4}\nAfter discard\t {1, 2, 4}\n"
]
],
[
[
"其中, `pop()` 与 `clear()` 十分易懂,这里就不再用代码说明了。",
"_____no_output_____"
],
[
"## 其他\n\nPython 中提供了一个比较奇怪的变量,即单下划线 `_` —— 虽然从理论上讲,下划线是变量名中可以使用的合法字符,这个变量名本应如同单字符变量名 `x`、`s` 一样自然。\n\n它的作用一般有:\n\n1. 在交互式 Python 运行环境中,它会自动记录最后一次代码执行的结果。\n2. 习惯上,我们将不需要用到的变量值用该变量标记。例如:\n - 不使用的返回值:`x, _ = divmod(7, 2)`\n - 不使用的循环变量值:`[None for _ in range(3)]`\n - 不使用的匿名变量输入值:`lambda _: 1`\n3. 避免编辑器的语法检查。Python 语法检查器会忽略 `_` 变量;而其他变量如果声明而未在后文引用,检查器会发出警告。",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e75c13f86974b2b52f2dff3d096a180f8ca13e37 | 15,187 | ipynb | Jupyter Notebook | Untitled.ipynb | MarcAntoineAlex/casinfo | 775bb852903afccf30975d9a47463a22d31ae1a3 | [
"Apache-2.0"
] | null | null | null | Untitled.ipynb | MarcAntoineAlex/casinfo | 775bb852903afccf30975d9a47463a22d31ae1a3 | [
"Apache-2.0"
] | null | null | null | Untitled.ipynb | MarcAntoineAlex/casinfo | 775bb852903afccf30975d9a47463a22d31ae1a3 | [
"Apache-2.0"
] | null | null | null | 41.722527 | 208 | 0.566208 | [
[
[
"import os\nimport sys\nimport time\nimport glob\nimport numpy as np\nimport torch\nimport utils\nimport logging\nimport argparse\nimport torch.nn as nn\nimport torch.utils\nimport torch.nn.functional as F\nimport torchvision.datasets as dset\nimport torch.backends.cudnn as cudnn\n\nfrom torch.autograd import Variable\nfrom model_search import Network\nfrom architect1 import Architect\nfrom resnet import *",
"_____no_output_____"
],
[
"parser = argparse.ArgumentParser(\"cifar\")\nparser.add_argument('-f')\nparser.add_argument('--data', type=str, default='../data', help='location of the data corpus')\nparser.add_argument('--batch_size', type=int, default=8, help='batch size')\nparser.add_argument('--learning_rate', type=float, default=0.025, help='init learning rate')\nparser.add_argument('--learning_rate_min', type=float, default=0.001, help='min learning rate')\nparser.add_argument('--momentum', type=float, default=0.9, help='momentum')\nparser.add_argument('--weight_decay', type=float, default=3e-4, help='weight decay')\nparser.add_argument('--report_freq', type=float, default=1, help='report frequency')\nparser.add_argument('--gpu', type=int, default=0, help='gpu device id')\nparser.add_argument('--epochs', type=int, default=30, help='num of training epochs')\nparser.add_argument('--init_channels', type=int, default=16, help='num of init channels')\nparser.add_argument('--layers', type=int, default=8, help='total number of layers')\nparser.add_argument('--model_path', type=str, default='saved_models', help='path to save the model')\nparser.add_argument('--cutout', action='store_true', default=False, help='use cutout')\nparser.add_argument('--cutout_length', type=int, default=16, help='cutout length')\nparser.add_argument('--drop_path_prob', type=float, default=0.3, help='drop path probability')\nparser.add_argument('--save', type=str, default='EXP', help='experiment name')\nparser.add_argument('--seed', type=int, default=2, help='random seed')\nparser.add_argument('--grad_clip', type=float, default=5, help='gradient clipping')\nparser.add_argument('--train_portion', type=float, default=0.5, help='portion of training data')\nparser.add_argument('--unrolled', action='store_true', default=True, help='use one-step unrolled validation loss')\nparser.add_argument('--arch_learning_rate', type=float, default=3e-4, help='learning rate for arch encoding')\nparser.add_argument('--arch_weight_decay', 
type=float, default=1e-3, help='weight decay for arch encoding')\nparser.add_argument('--lambda_par', type=float, default=1.0, help='unlabeled ratio')\nargs = parser.parse_args()",
"_____no_output_____"
],
[
"args.save = 'search-{}-{}'.format(args.save, time.strftime(\"%Y%m%d-%H%M%S\"))\nutils.create_exp_dir(args.save, scripts_to_save=glob.glob('*.py'))\n\nlog_format = '%(asctime)s %(message)s'\nlogging.basicConfig(stream=sys.stdout, level=logging.INFO,\n format=log_format, datefmt='%m/%d %I:%M:%S %p')\nfh = logging.FileHandler(os.path.join(args.save, 'log.txt'))\nfh.setFormatter(logging.Formatter(log_format))\nlogging.getLogger().addHandler(fh)\n\n\nCIFAR_CLASSES = 10",
"_____no_output_____"
],
[
"np.random.seed(args.seed)\ntorch.cuda.set_device(args.gpu)\ncudnn.benchmark = True\ntorch.manual_seed(args.seed)\ncudnn.enabled=True\ntorch.cuda.manual_seed(args.seed)\nlogging.info('gpu device = %d' % args.gpu)\nlogging.info(\"args = %s\", args)\n\ncriterion = nn.CrossEntropyLoss()\ncriterion = criterion.cuda()\ncriterion_stud = nn.CrossEntropyLoss()\ncriterion_stud = criterion_stud.cuda()\ncriterion_mid = nn.CrossEntropyLoss()\ncriterion_mid = criterion_stud.cuda()\nmodel = Network(args.init_channels, CIFAR_CLASSES, args.layers, criterion)\nmodel = model.cuda()\nlogging.info(\"param size = %fMB\", utils.count_parameters_in_MB(model))\nstudent = ResNet(criterion_stud)\nstudent = student.cuda()\nmid = ResNet(criterion_mid)\nmid = mid.cuda()\n\noptimizer = torch.optim.SGD(model.parameters(),args.learning_rate,momentum=args.momentum,weight_decay=args.weight_decay)\noptimizer_stud = torch.optim.SGD(student.parameters(),args.learning_rate,momentum=args.momentum,weight_decay=args.weight_decay)\noptimizer_mid = torch.optim.SGD(mid.parameters(),args.learning_rate,momentum=args.momentum,weight_decay=args.weight_decay)\n\ntrain_transform, valid_transform = utils._data_transforms_cifar10(args)\ntrain_transform1, valid_transform1 = utils._data_transforms_cifar100(args)\ntrain_data = dset.CIFAR10(root=args.data, train=True, download=True, transform=train_transform)\nu_data = dset.CIFAR100(root=args.data, train=True, download=True, transform=train_transform1)\n\nnum_train = len(train_data)\nindices = list(range(num_train))\nsplit = int(np.floor(args.train_portion * num_train))\n\ntrain_queue = torch.utils.data.DataLoader(train_data, batch_size=args.batch_size,\n sampler=torch.utils.data.sampler.SubsetRandomSampler(indices[:split]),\n pin_memory=True, num_workers=2)\n\nvalid_queue = torch.utils.data.DataLoader(train_data, batch_size=args.batch_size,\n sampler=torch.utils.data.sampler.SubsetRandomSampler(indices[split:num_train]),\n pin_memory=True, num_workers=2)\n 
\nunlabeled_queue = torch.utils.data.DataLoader(u_data, batch_size=args.batch_size,\n sampler=torch.utils.data.sampler.SubsetRandomSampler(indices[:]),\n pin_memory=True, num_workers=2)\n\n\nscheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, float(args.epochs), eta_min=args.learning_rate_min)\n\narchitect = Architect(model, mid, student, args)",
"_____no_output_____"
],
[
"def cusloss(inp, tar):\n m = nn.Softmax(1)\n lm = nn.LogSoftmax(1)\n lenn = inp.shape[0]\n inp = lm(inp)\n tar = m(tar)\n out = inp*tar\n ll = (out.sum()*(-1))/lenn\n return ll\n\ndef train(train_queue, valid_queue, unlabeled_queue, model, mid, student, architect, criterion, criterion_mid, criterion_stud, optimizer, optimizer_mid, optimizer_stud, lr):\n objs = utils.AvgrageMeter()\n top1 = utils.AvgrageMeter()\n top5 = utils.AvgrageMeter()\n\n print(\"1---------------------------\")\n for step, (input, target) in enumerate(train_queue):\n print(\"2---------------------------\")\n model.train()\n n = input.size(0)\n input = input.cuda()\n target = target.cuda(non_blocking=True)\n\n # get a random minibatch from the search queue with replacement\n try:\n input_search, target_search = next(valid_queue_iter)\n except:\n valid_queue_iter = iter(valid_queue)\n input_search, target_search = next(valid_queue_iter)\n input_search = input_search.cuda()\n target_search = target_search.cuda(non_blocking=True)\n \n # get a random minibatch from the unlabeled queue with replacement\n try:\n input_unlabeled, target_unlabeled = next(unlabeled_queue_iter)\n except:\n unlabeled_queue_iter = iter(unlabeled_queue)\n input_unlabeled, target_unlabeled = next(unlabeled_queue_iter)\n input_unlabeled = input_unlabeled.cuda()\n target_unlabeled = target_unlabeled.cuda(non_blocking=True)\n \n #print(\"start###############################\")\n architect.step_all3(input, target, input_search, target_search, input_unlabeled, lr, optimizer, optimizer_mid, optimizer_stud, unrolled=args.unrolled)\n #print(\"end#################################\")\n \n #print(\"s1---------------------------\")\n #architect.step(input, target, input_search, target_search, input_unlabeled, lr, optimizer, unrolled=args.unrolled)\n #print(\"s2---------------------------\")\n #architect.step1(input, target, input_search, target_search, input_unlabeled, lr, optimizer, optimizer_mid, unrolled=args.unrolled)\n 
#print(\"s3---------------------------\")\n #architect.step2(input, target, input_search, target_search, input_unlabeled, lr, optimizer, optimizer_mid, optimizer_stud, unrolled=args.unrolled)\n #print(\"s4---------------------------\")\n \n ##########################################################################################################\n \n optimizer.zero_grad()\n logits = model(input)\n loss = criterion(logits, target)\n\n loss.backward()\n nn.utils.clip_grad_norm(model.parameters(), args.grad_clip)\n optimizer.step()\n \n ##########################################################################################################\n \n optimizer_mid.zero_grad()\n l1 = model(input_unlabeled)\n logits1 = mid(input_unlabeled)\n loss1 = cusloss(logits1, l1.detach())\n\n #loss1.backward()\n #nn.utils.clip_grad_norm(mid.parameters(), args.grad_clip)\n #optimizer_mid.step()\n \n #optimizer_mid.zero_grad()\n logits2 = mid(input)\n loss2 = criterion_mid(logits2, target)\n \n loss5 = loss1 + loss2\n loss5.backward()\n #nn.utils.clip_grad_norm(mid.parameters(), args.grad_clip)\n optimizer_mid.step()\n \n ##########################################################################################################\n \n optimizer_stud.zero_grad()\n l3 = mid(input_unlabeled)\n logits3 = student(input_unlabeled)\n loss3 = cusloss(logits3, l3.detach())\n \n #loss3.backward()\n #nn.utils.clip_grad_norm(student.parameters(), args.grad_clip)\n #optimizer_stud.step()\n \n #optimizer_stud.zero_grad()\n logits4 = student(input)\n loss4 = criterion_stud(logits4, target)\n\n loss6 = loss3 + loss4\n loss6.backward()\n #nn.utils.clip_grad_norm(student.parameters(), args.grad_clip)\n optimizer_stud.step()\n \n ##########################################################################################################\n\n prec1, prec5 = utils.accuracy(logits, target, topk=(1, 5))\n objs.update(loss.item(), n)\n top1.update(prec1.item(), n)\n top5.update(prec5.item(), n)\n\n if step % 
args.report_freq == 0:\n logging.info('train %03d %e %f %f', step, objs.avg, top1.avg, top5.avg)\n\n\n return top1.avg, objs.avg\n\ndef infer(valid_queue, model, criterion):\n objs = utils.AvgrageMeter()\n top1 = utils.AvgrageMeter()\n top5 = utils.AvgrageMeter()\n model.eval()\n\n for step, (input, target) in enumerate(valid_queue):\n input = Variable(input, volatile=True).cuda()\n target = Variable(target, volatile=True).cuda()\n\n logits = model(input)\n loss = criterion(logits, target)\n\n prec1, prec5 = utils.accuracy(logits, target, topk=(1, 5))\n n = input.size(0)\n objs.update(loss.item(), n)\n top1.update(prec1.item(), n)\n top5.update(prec5.item(), n)\n\n if step % args.report_freq == 0:\n logging.info('valid %03d %e %f %f', step, objs.avg, top1.avg, top5.avg)\n \n\n return top1.avg, objs.avg",
"_____no_output_____"
],
[
"for epoch in range(args.epochs):\n scheduler.step()\n lr = scheduler.get_lr()[0]\n logging.info('epoch %d lr %e', epoch, lr)\n\n genotype = model.genotype()\n logging.info('genotype = %s', genotype)\n\n print(F.softmax(model.alphas_normal, dim=-1))\n print(F.softmax(model.alphas_reduce, dim=-1))\n print(\"---------------------------\")\n\n # training\n train_acc, train_obj = train(train_queue, valid_queue, unlabeled_queue, model, mid, student, architect, criterion, criterion_mid, criterion_stud, optimizer, optimizer_mid, optimizer_stud, lr)\n logging.info('train_acc %f', train_acc)\n\n # validation\n valid_acc, valid_obj = infer(valid_queue, model, criterion)\n logging.info('valid_acc %f', valid_acc)\n\n utils.save(model, os.path.join(args.save, 'weights.pt'))\ngenotype = model.genotype()\nlogging.info('genotype = %s', genotype)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75c2357f44f15eb7dc57b699e2f0af2b3b46684 | 43,429 | ipynb | Jupyter Notebook | Understanding_and_Visualizing_Data_with_Python-master/week3/Multivariate_Distributions.ipynb | rezapci/UofM_Statistics_with_Python_Specialization | edc31cadcbada20d385ae9b0304b8c0cb7ba83e2 | [
"MIT"
] | 2 | 2020-05-11T18:39:31.000Z | 2022-01-26T09:08:02.000Z | Understanding_and_Visualizing_Data_with_Python-master/week3/Multivariate_Distributions.ipynb | rezapci/UofM_Statistics_with_Python_Specialization | edc31cadcbada20d385ae9b0304b8c0cb7ba83e2 | [
"MIT"
] | null | null | null | Understanding_and_Visualizing_Data_with_Python-master/week3/Multivariate_Distributions.ipynb | rezapci/UofM_Statistics_with_Python_Specialization | edc31cadcbada20d385ae9b0304b8c0cb7ba83e2 | [
"MIT"
] | 2 | 2020-05-11T18:39:17.000Z | 2020-05-12T14:59:37.000Z | 331.519084 | 29,984 | 0.934468 | [
[
[
"## Multivariate Distributions in Python\n\nSometimes we can get a lot of information about how two variables (or more) relate if we plot them together. This tutorial aims to show how plotting two variables together can give us information that plotting each one separately may miss.\n\n",
"_____no_output_____"
]
],
[
[
"# import the packages we are going to be using\nimport numpy as np # for getting our distribution\nimport matplotlib.pyplot as plt # for plotting\nimport seaborn as sns; sns.set() # For a different plotting theme\n\n# Don't worry so much about what rho is doing here\n# Just know if we have a rho of 1 then we will get a perfectly\n# upward sloping line, and if we have a rho of -1, we will get \n# a perfectly downward slopping line. A rho of 0 will \n# get us a 'cloud' of points\nr = 1\n\n# Don't worry so much about the following three lines of code for now\n# this is just getting the data for us to plot\nmean = [15, 5]\ncov = [[1, r], [r, 1]]\nx, y = x, y = np.random.multivariate_normal(mean, cov, 400).T\n\n# Adjust the figure size\nplt.figure(figsize=(10,5))\n\n# Plot the histograms of X and Y next to each other\nplt.subplot(1,2,1)\nplt.hist(x = x, bins = 15)\nplt.title(\"X\")\n\nplt.subplot(1,2,2)\nplt.hist(x = y, bins = 15)\nplt.title(\"Y\")\n\nplt.show()",
"_____no_output_____"
],
[
"# Plot the data\nplt.figure(figsize=(10,10))\nplt.subplot(2,2,2)\nplt.scatter(x = x, y = y)\nplt.title(\"Joint Distribution of X and Y\")\n\n# Plot the Marginal X Distribution\nplt.subplot(2,2,4)\nplt.hist(x = x, bins = 15)\nplt.title(\"Marginal Distribution of X\")\n\n\n# Plot the Marginal Y Distribution\nplt.subplot(2,2,1)\nplt.hist(x = y, orientation = \"horizontal\", bins = 15)\nplt.title(\"Marginal Distribution of Y\")\n\n# Show the plots\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
e75c2658bf7ddedd68a1864ebba72a3178f01b07 | 7,144 | ipynb | Jupyter Notebook | all models code/logisticregression.ipynb | mostlypanda/100-Days-Of-ML-Code | 1dca27638c36f1932f68bd64f901e2d0f7564824 | [
"MIT"
] | 2 | 2021-01-04T14:42:04.000Z | 2021-09-27T17:09:37.000Z | all models code/logisticregression.ipynb | mostlypanda/100-Days-Of-ML-Code | 1dca27638c36f1932f68bd64f901e2d0f7564824 | [
"MIT"
] | null | null | null | all models code/logisticregression.ipynb | mostlypanda/100-Days-Of-ML-Code | 1dca27638c36f1932f68bd64f901e2d0f7564824 | [
"MIT"
] | null | null | null | 59.041322 | 1,350 | 0.656075 | [
[
[
"#importing libraries adn loading dataset\nimport numpy as np \nimport pandas as pd \nimport matplotlib.pyplot as plt \n\ndataset=pd.read_csv('Social_Network_Ads.csv')\nx=dataset.iloc[:,[2,3]].values\ny=dataset.iloc[:,4].values\n\n#splitting the dataset\nfrom sklearn.model_selection import train_test_split\nx_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.25,random_state=0)\n\n#feature scaling\nfrom sklearn.preprocessing import StandardScaler\nsc=StandardScaler()\nx_train=sc.fit_transform(x_train)\nx_test=sc.transform(x_test)\n",
"_____no_output_____"
],
[
"#fitting logistic regression\nfrom sklearn.linear_model import LogisticRegression\nclassifier=LogisticRegression()\nclassifier.fit(x_train,y_train)",
"_____no_output_____"
],
[
"y_pred=classifier.predict(x_test)",
"_____no_output_____"
],
[
"#making matrix\nfrom sklearn.metrics import confusion_matrix\ncm=confusion_matrix(x_test,y_pred)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
e75c2ea10d858bcbb69719eef1853bb73f5a80c7 | 24,016 | ipynb | Jupyter Notebook | color_filtering.ipynb | abhichacko/WebCamDrawing | 0f3311b6e66138d1ff45954c4a677c8be7cbaa45 | [
"MIT"
] | null | null | null | color_filtering.ipynb | abhichacko/WebCamDrawing | 0f3311b6e66138d1ff45954c4a677c8be7cbaa45 | [
"MIT"
] | null | null | null | color_filtering.ipynb | abhichacko/WebCamDrawing | 0f3311b6e66138d1ff45954c4a677c8be7cbaa45 | [
"MIT"
] | null | null | null | 138.022989 | 7,996 | 0.904314 | [
[
[
"import cv2\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"rub_im=cv2.imread('images/rubik.png')",
"_____no_output_____"
],
[
"plt.imshow(rub_im)",
"_____no_output_____"
],
[
"hsv_image=cv2.cvtColor(rub_im,cv2.COLOR_BGR2HSV)\nplt.imshow(hsv_image)",
"_____no_output_____"
],
[
"hsv = [240, 100, 5.9]\nthresh = 40",
"_____no_output_____"
],
[
"#hsv = cv2.cvtColor( np.uint8([[bgr]] ), cv2.COLOR_BGR2HSV)[0][0]\nlower_blue = np.array([110,50,50])\nupper_blue = np.array([130,255,255])\n",
"_____no_output_____"
],
[
"maskHSV = cv2.inRange(hsv_image, lower_blue,upper_blue)\nresultHSV = cv2.bitwise_and(hsv_image, hsv_image, mask = maskHSV)",
"_____no_output_____"
],
[
"plt.imshow(resultHSV)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75c3a8d24fa3af9704503ea38bd39343e4a990a | 26,887 | ipynb | Jupyter Notebook | 07 - Work with Compute.ipynb | HarshKothari21/mslearn-dp100 | 5edb988bf8af81018afa87b0c42cbff7682d684c | [
"MIT"
] | 1 | 2021-04-27T17:41:12.000Z | 2021-04-27T17:41:12.000Z | 07 - Work with Compute.ipynb | gosiaborzecka/mslearn-dp100 | f239fd89deb74b8808e79f452dab1b737a3c3070 | [
"MIT"
] | 2 | 2021-02-22T11:34:30.000Z | 2021-02-22T11:34:58.000Z | 07 - Work with Compute.ipynb | gosiaborzecka/mslearn-dp100 | f239fd89deb74b8808e79f452dab1b737a3c3070 | [
"MIT"
] | 6 | 2021-02-09T11:07:16.000Z | 2021-07-08T08:46:58.000Z | 38.630747 | 480 | 0.620783 | [
[
[
"# Work with Compute\n\nWhen you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:\n\n* The Python environment for the script, which must include all Python packages used in the script.\n* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.\n\nIn this notebook, you'll explore *environments* and *compute targets* for experiments.",
"_____no_output_____"
],
[
"## Connect to your workspace\n\nTo get started, connect to your workspace.\n\n> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.",
"_____no_output_____"
]
],
[
[
"import azureml.core\nfrom azureml.core import Workspace\n\n# Load the workspace from the saved config file\nws = Workspace.from_config()\nprint('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))",
"_____no_output_____"
]
],
[
[
"## Prepare data for an experiment\n\nIn this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version)",
"_____no_output_____"
]
],
[
[
"from azureml.core import Dataset\n\ndefault_ds = ws.get_default_datastore()\n\nif 'diabetes dataset' not in ws.datasets:\n default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data\n target_path='diabetes-data/', # Put it in a folder path in the datastore\n overwrite=True, # Replace existing files of the same name\n show_progress=True)\n\n #Create a tabular dataset from the path on the datastore (this may take a short while)\n tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))\n\n # Register the tabular dataset\n try:\n tab_data_set = tab_data_set.register(workspace=ws, \n name='diabetes dataset',\n description='diabetes data',\n tags = {'format':'CSV'},\n create_new_version=True)\n print('Dataset registered.')\n except Exception as ex:\n print(ex)\nelse:\n print('Dataset already registered.')",
"_____no_output_____"
]
],
[
[
"## Create a training script\n\nRun the following two cells to create:\n1. A folder for a new experiment\n2. An training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.",
"_____no_output_____"
]
],
[
[
"import os\n\n# Create a folder for the experiment files\nexperiment_folder = 'diabetes_training_logistic'\nos.makedirs(experiment_folder, exist_ok=True)\nprint(experiment_folder, 'folder created')",
"_____no_output_____"
],
[
"%%writefile $experiment_folder/diabetes_training.py\n# Import libraries\nimport argparse\nfrom azureml.core import Run\nimport pandas as pd\nimport numpy as np\nimport joblib\nimport os\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.metrics import roc_curve\nimport matplotlib.pyplot as plt\n\n# Get script arguments\nparser = argparse.ArgumentParser()\nparser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')\nparser.add_argument(\"--input-data\", type=str, dest='training_dataset_id', help='training dataset')\nargs = parser.parse_args()\n\n# Set regularization hyperparameter\nreg = args.reg_rate\n\n# Get the experiment run context\nrun = Run.get_context()\n\n# load the diabetes data (passed as an input dataset)\nprint(\"Loading Data...\")\ndiabetes = run.input_datasets['training_data'].to_pandas_dataframe()\n\n# Separate features and labels\nX, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values\n\n# Split data into training set and test set\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)\n\n# Train a logistic regression model\nprint('Training a logistic regression model with regularization rate of', reg)\nrun.log('Regularization Rate', np.float(reg))\nmodel = LogisticRegression(C=1/reg, solver=\"liblinear\").fit(X_train, y_train)\n\n# calculate accuracy\ny_hat = model.predict(X_test)\nacc = np.average(y_hat == y_test)\nprint('Accuracy:', acc)\nrun.log('Accuracy', np.float(acc))\n\n# calculate AUC\ny_scores = model.predict_proba(X_test)\nauc = roc_auc_score(y_test,y_scores[:,1])\nprint('AUC: ' + str(auc))\nrun.log('AUC', np.float(auc))\n\n# plot ROC curve\nfpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])\nfig = 
plt.figure(figsize=(6, 4))\n# Plot the diagonal 50% line\nplt.plot([0, 1], [0, 1], 'k--')\n# Plot the FPR and TPR achieved by our model\nplt.plot(fpr, tpr)\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('ROC Curve')\nrun.log_image(name = \"ROC\", plot = fig)\nplt.show()\n\nos.makedirs('outputs', exist_ok=True)\n# note file saved in the outputs folder is automatically uploaded into experiment record\njoblib.dump(value=model, filename='outputs/diabetes_model.pkl')\n\nrun.complete()",
"_____no_output_____"
]
],
[
[
"## Define an environment\n\nWhen you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages; including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.\n\nYou can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.\n\n> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)",
"_____no_output_____"
]
],
[
[
"from azureml.core import Environment\nfrom azureml.core.conda_dependencies import CondaDependencies\n\n# Create a Python environment for the experiment\ndiabetes_env = Environment(\"diabetes-experiment-env\")\ndiabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies\ndiabetes_env.docker.enabled = True # Use a docker container\n\n# Create a set of package dependencies (conda or pip as required)\ndiabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],\n pip_packages=['azureml-sdk','pyarrow'])\n\n# Add the dependencies to the environment\ndiabetes_env.python.conda_dependencies = diabetes_packages\n\nprint(diabetes_env.name, 'defined.')",
"_____no_output_____"
]
],
[
[
"Now you can use the environment to run a script as an experiment.\n\nThe following code assigns the environment you created to a ScriptRunConfig, and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log, you'll see the conda environment being built.",
"_____no_output_____"
]
],
[
[
"from azureml.core import Experiment, ScriptRunConfig, Environment\nfrom azureml.core.conda_dependencies import CondaDependencies\nfrom azureml.widgets import RunDetails\n\n# Get the training dataset\ndiabetes_ds = ws.datasets.get(\"diabetes dataset\")\n\n# Create a script config\nscript_config = ScriptRunConfig(source_directory=experiment_folder,\n script='diabetes_training.py',\n arguments = ['--regularization', 0.1, # Regularizaton rate parameter\n '--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset\n environment=diabetes_env) \n\n# submit the experiment\nexperiment_name = 'mslearn-train-diabetes'\nexperiment = Experiment(workspace=ws, name=experiment_name)\nrun = experiment.submit(config=script_config)\nRunDetails(run).show()\nrun.wait_for_completion()",
"_____no_output_____"
]
],
[
[
"The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.",
"_____no_output_____"
]
],
[
[
"# Get logged metrics\nmetrics = run.get_metrics()\nfor key in metrics.keys():\n print(key, metrics.get(key))\nprint('\\n')\nfor file in run.get_file_names():\n print(file)",
"_____no_output_____"
]
],
[
[
"## Register the environment\n\nHaving gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.",
"_____no_output_____"
]
],
[
[
"# Register the environment\ndiabetes_env.register(workspace=ws)",
"_____no_output_____"
]
],
[
[
"Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).\n\nWith the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:",
"_____no_output_____"
]
],
[
[
"import os\n\n# Create a folder for the experiment files\nexperiment_folder = 'diabetes_training_tree'\nos.makedirs(experiment_folder, exist_ok=True)\nprint(experiment_folder, 'folder created')",
"_____no_output_____"
],
[
"%%writefile $experiment_folder/diabetes_training.py\n# Import libraries\nimport argparse\nfrom azureml.core import Run\nimport pandas as pd\nimport numpy as np\nimport joblib\nimport os\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.metrics import roc_curve\nimport matplotlib.pyplot as plt\n\n# Get script arguments\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--input-data\", type=str, dest='training_dataset_id', help='training dataset')\nargs = parser.parse_args()\n\n# Get the experiment run context\nrun = Run.get_context()\n\n# load the diabetes data (passed as an input dataset)\nprint(\"Loading Data...\")\ndiabetes = run.input_datasets['training_data'].to_pandas_dataframe()\n\n# Separate features and labels\nX, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values\n\n# Split data into training set and test set\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)\n\n# Train a decision tree model\nprint('Training a decision tree model')\nmodel = DecisionTreeClassifier().fit(X_train, y_train)\n\n# calculate accuracy\ny_hat = model.predict(X_test)\nacc = np.average(y_hat == y_test)\nprint('Accuracy:', acc)\nrun.log('Accuracy', np.float(acc))\n\n# calculate AUC\ny_scores = model.predict_proba(X_test)\nauc = roc_auc_score(y_test,y_scores[:,1])\nprint('AUC: ' + str(auc))\nrun.log('AUC', np.float(auc))\n\n# plot ROC curve\nfpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])\nfig = plt.figure(figsize=(6, 4))\n# Plot the diagonal 50% line\nplt.plot([0, 1], [0, 1], 'k--')\n# Plot the FPR and TPR achieved by our model\nplt.plot(fpr, tpr)\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('ROC Curve')\nrun.log_image(name = \"ROC\", plot = 
fig)\nplt.show()\n\nos.makedirs('outputs', exist_ok=True)\n# note file saved in the outputs folder is automatically uploaded into experiment record\njoblib.dump(value=model, filename='outputs/diabetes_model.pkl')\n\nrun.complete()",
"_____no_output_____"
]
],
[
[
"Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).",
"_____no_output_____"
]
],
[
[
"from azureml.core import Experiment, ScriptRunConfig, Environment\nfrom azureml.core.conda_dependencies import CondaDependencies\nfrom azureml.widgets import RunDetails\n\n# get the registered environment\nregistered_env = Environment.get(ws, 'diabetes-experiment-env')\n\n# Get the training dataset\ndiabetes_ds = ws.datasets.get(\"diabetes dataset\")\n\n# Create a script config\nscript_config = ScriptRunConfig(source_directory=experiment_folder,\n script='diabetes_training.py',\n arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset\n environment=registered_env) \n\n# submit the experiment\nexperiment_name = 'mslearn-train-diabetes'\nexperiment = Experiment(workspace=ws, name=experiment_name)\nrun = experiment.submit(config=script_config)\nRunDetails(run).show()\nrun.wait_for_completion()",
"_____no_output_____"
]
],
[
[
"This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.\n\nLet's look at the metrics and outputs from the experiment.",
"_____no_output_____"
]
],
[
[
"# Get logged metrics\nmetrics = run.get_metrics()\nfor key in metrics.keys():\n print(key, metrics.get(key))\nprint('\\n')\nfor file in run.get_file_names():\n print(file)",
"_____no_output_____"
]
],
[
[
"## View registered environments\n\nIn addition to registering your own environments, you can leverage pre-built \"curated\" environments for common experiment types. The following code lists all registered environments:",
"_____no_output_____"
]
],
[
[
"from azureml.core import Environment\n\nenvs = Environment.list(workspace=ws)\nfor env in envs:\n print(\"Name\",env)",
"_____no_output_____"
]
],
[
[
"All curated environments have names that begin ***AzureML-*** (you can't use this prefix for your own environments).\n\nLet's explore the curated environments in more depth and see what packages are included in each of them.",
"_____no_output_____"
]
],
[
[
"for env in envs:\n if env.startswith(\"AzureML\"):\n print(\"Name\",env)\n print(\"packages\", envs[env].python.conda_dependencies.serialize_to_string())",
"_____no_output_____"
]
],
[
[
"## Create a compute cluster\n\nIn many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workpace and use to run experiments; paying for the resources only when using them.\n\nYou can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existance of a compute cluster with a specified name, and if it doesn't exist, creates it.\n\n> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique names between 2 to 16 characters in length. Valid characters are letters, digits, and the - character.",
"_____no_output_____"
]
],
[
[
"from azureml.core.compute import ComputeTarget, AmlCompute\nfrom azureml.core.compute_target import ComputeTargetException\n\ncluster_name = \"your-compute-cluster\"\n\ntry:\n # Check for existing compute target\n training_cluster = ComputeTarget(workspace=ws, name=cluster_name)\n print('Found existing cluster, use it.')\nexcept ComputeTargetException:\n # If it doesn't already exist, create it\n try:\n compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)\n training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)\n training_cluster.wait_for_completion(show_output=True)\n except Exception as ex:\n print(ex)",
"_____no_output_____"
]
],
[
[
"## Run an experiment on remote compute\n\nNow you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created. \n\n> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.",
"_____no_output_____"
]
],
[
[
"# Create a script config\nscript_config = ScriptRunConfig(source_directory=experiment_folder,\n script='diabetes_training.py',\n arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],\n environment=registered_env,\n compute_target=cluster_name) \n\n# submit the experiment\nexperiment_name = 'mslearn-train-diabetes'\nexperiment = Experiment(workspace=ws, name=experiment_name)\nrun = experiment.submit(config=script_config)\nRunDetails(run).show()",
"_____no_output_____"
]
],
[
[
"While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.",
"_____no_output_____"
]
],
[
[
"cluster_state = training_cluster.get_status()\nprint(cluster_state.allocation_state, cluster_state.current_node_count)",
"_____no_output_____"
]
],
[
[
"Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.",
"_____no_output_____"
]
],
[
[
"run.wait_for_completion()",
"_____no_output_____"
]
],
[
[
"After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.",
"_____no_output_____"
]
],
[
[
"# Get logged metrics\nmetrics = run.get_metrics()\nfor key in metrics.keys():\n print(key, metrics.get(key))\nprint('\\n')\nfor file in run.get_file_names():\n print(file)",
"_____no_output_____"
]
],
[
[
"Now you can register the model that was trained by the experiment.",
"_____no_output_____"
]
],
[
[
"from azureml.core import Model\n\n# Register the model\nrun.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',\n tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})\n\n# List registered models\nfor model in Model.list(ws):\n print(model.name, 'version:', model.version)\n for tag_name in model.tags:\n tag = model.tags[tag_name]\n print ('\\t',tag_name, ':', tag)\n for prop_name in model.properties:\n prop = model.properties[prop_name]\n print ('\\t',prop_name, ':', prop)\n print('\\n')",
"_____no_output_____"
]
],
[
[
"> **More Information**:\n>\n> - For more information about environments in Azure Machine Learning, see [Create & use software environments in Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/how-to-use-environments)\n> - For more information about compute targets in Azure Machine Learning, see the [What are compute targets in Azure Machine Learning?](https://docs.microsoft.com/azure/machine-learning/concept-compute-target).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e75c3ee002694f9b24e3aa2118014de5e0b36353 | 26,085 | ipynb | Jupyter Notebook | nbs/index.ipynb | crowdcent/numerblox | e014a30eb22ce64cdc590e32d776367a7132cb39 | [
"Apache-2.0"
] | 30 | 2022-03-17T03:23:20.000Z | 2022-03-30T15:20:19.000Z | nbs/index.ipynb | crowdcent/numerblox | e014a30eb22ce64cdc590e32d776367a7132cb39 | [
"Apache-2.0"
] | 8 | 2022-03-18T10:31:44.000Z | 2022-03-31T15:43:46.000Z | nbs/index.ipynb | crowdcent/numerblox | e014a30eb22ce64cdc590e32d776367a7132cb39 | [
"Apache-2.0"
] | 5 | 2022-03-18T10:24:38.000Z | 2022-03-30T14:40:08.000Z | 55.031646 | 1,293 | 0.541844 | [
[
[
"# hide\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
]
],
[
[
"# NumerBlox\n\n> Solid Numerai pipelines",
"_____no_output_____"
],
[
"`numerblox` offers Numerai specific functionality, so you can worry less about software/data engineering and focus more on building great Numerai models!\n\nMost of the components in this library are designed for solid weekly inference pipelines, but tools like `NumerFrame`, preprocessors and evaluators also greatly simplify the training process.\n\n**Questions and discussion:** [rocketchat.numer.ai/channel/numerblox](https://rocketchat.numer.ai/channel/numerblox)\n\n**Documentation:** [crowdcent.github.io/numerblox](https://crowdcent.github.io/numerblox/)\n\n     ",
"_____no_output_____"
],
[
"## 1. Install",
"_____no_output_____"
],
[
"## 1. Getting Started\n\n**This document has been generated by [NBDev](https://github.com/fastai/nbdev).** Please edit `nbs/index.ipynb` instead of this `README.md`. Read `CONTRIBUTING.MD` for more information on the contribution process and how to change files. Thank you!\n\n### 1.1 Installation\n\nInstall numerblox from PyPi by running:\n\n`pip install numerblox`\n\nAlternatively you can clone this repository and install it in development mode\nrunning the following from the root of the repository:\n\n`pip install -e .`\n\n### 1.2 Running Notebooks\n\nStart by spinning up your favorite Jupyter Notebook environment. Here we'll use:\n\n`jupyter notebook`\n\nTest your installation using one of the education notebooks in `nbs/edu_nbs`.\nA good example is `numerframe_tutorial`. Run it in your Notebook environment to\nquickly test if your installation has succeeded",
"_____no_output_____"
],
[
"### 2.1. Contents",
"_____no_output_____"
],
[
"#### 2.1.1. Core functionality\n\n`numerblox` features the following functionality:\n\n1. Downloading data\n2. A custom data structure extending Pandas DataFrame (`NumerFrame`)\n3. A suite of preprocessors for Numerai Classic and Signals (feature selection, engineering and manipulation)\n4. Model objects for easy inference.\n5. A suite of postprocessors for Numerai Classic and Signals (standardization, ensembling, neutralization and penalization)\n6. Pipelines handling processing and prediction (`ModelPipeline` and `ModelPipelineCollection`)\n7. Evaluation (`NumeraiClassicEvaluator` and `NumeraiSignalsEvaluator`)\n8. Authentication (`Key` and `load_key_from_json`)\n9. Submitting (`NumeraiClassicSubmitter`, `NumeraiSignalsSubmitter` and `NumerBaySubmitter`)\n10. Automated staking (`NumeraiClassicStaker` and `NumeraiSignalsStaker`)",
"_____no_output_____"
],
[
"#### 2.1.2. Educational notebooks\n\nExample notebooks can be found in the `nbs/edu_nbs` directory.\n\n`nbs/edu_nbs` currently contains the following examples:\n- `numerframe_tutorial.ipynb`: A deep dive into what `NumerFrame` has to offer.\n- `pipeline_construction.ipynb`: How to use `numerblox` tools for efficient Numerai inference.\n- `submitting.ipynb`: How to use Submitters for safe and easy Numerai submissions.\n- `google_cloud_storage.ipynb`: How to use Downloaders and Submitters to interact with Google Cloud Storage (GCS).\n- `load_model_from_wandb.ipynb`: For [Weights & Biases](https://wandb.ai/) users. Easily pull a model from W&B for inference.\n- `numerbay_integration.ipynb`: How to use `NumerBlox` to download and upload predictions listed on [NumerBay](https://numerbay.ai).\n\nDevelopment notebooks are also in the `nbs` directory. These notebooks are also used to generate the documentation.\n\n**Questions or idea discussion for educational notebooks:** [rocketchat.numer.ai/channel/numerblox](https://rocketchat.numer.ai/channel/numerblox)\n\n**Full documentation:** [crowdcent.github.io/numerblox](https://crowdcent.github.io/numerblox/)",
"_____no_output_____"
],
[
"### 2.2. Examples\n\nBelow we will illustrate a common use case for inference pipelines. To learn more in-depth about the features of this library, check out notebooks in `nbs/edu_nbs`.",
"_____no_output_____"
],
[
"#### 2.2.1. Numerai Classic",
"_____no_output_____"
],
[
"```python\n# --- 0. Numerblox dependencies ---\nfrom numerblox.download import NumeraiClassicDownloader\nfrom numerblox.numerframe import create_numerframe\nfrom numerblox.postprocessing import FeatureNeutralizer\nfrom numerblox.model import SingleModel\nfrom numerblox.model_pipeline import ModelPipeline\nfrom numerblox.key import load_key_from_json\nfrom numerblox.submission import NumeraiClassicSubmitter\n\n# --- 1. Download version 4 data ---\ndownloader = NumeraiClassicDownloader(\"data\")\ndownloader.download_inference_data(\"current_round\")\n\n# --- 2. Initialize NumerFrame ---\nmetadata = {\"version\": 4,\n \"joblib_model_name\": \"test\",\n \"joblib_model_path\": \"test_assets/joblib_v2_example_model.joblib\",\n \"numerai_model_name\": \"test_model1\",\n \"key_path\": \"test_assets/test_credentials.json\"}\ndataf = create_numerframe(file_path=\"data/current_round/live.parquet\",\n metadata=metadata)\n\n# --- 3. Define and run pipeline ---\nmodels = [SingleModel(dataf.meta.joblib_model_path,\n model_name=dataf.meta.joblib_model_name)]\n# No preprocessing and 0.5 feature neutralization\npostprocessors = [FeatureNeutralizer(pred_name=f\"prediction_{dataf.meta.joblib_model_name}\",\n proportion=0.5)]\npipeline = ModelPipeline(preprocessors=[],\n models=models,\n postprocessors=postprocessors)\ndataf = pipeline(dataf)\n\n# --- 4. Submit ---\n# Load credentials from .json (random credentials in this example)\nkey = load_key_from_json(dataf.meta.key_path)\nsubmitter = NumeraiClassicSubmitter(directory_path=\"sub_current_round\", key=key)\n# full_submission checks contents, saves as csv and submits.\nsubmitter.full_submission(dataf=dataf,\n cols=f\"prediction_{dataf.meta.joblib_model_name}_neutralized_0.5\",\n model_name=dataf.meta.numerai_model_name,\n version=dataf.meta.version)\n\n# --- 5. Clean up environment (optional) ---\ndownloader.remove_base_directory()\nsubmitter.remove_base_directory()\n```",
"_____no_output_____"
]
],
[
[
"# hide_input\nfrom rich.console import Console\nfrom rich.tree import Tree\n\nconsole = Console(record=True, width=100)\n\ntree = Tree(\":computer: Directory structure before starting\", guide_style=\"bold bright_black\")\nmodel_tree = tree.add(\":file_folder: test_assets\")\nmodel_tree.add(\":page_facing_up: joblib_v2_example_model.joblib\")\nmodel_tree.add(\":page_facing_up: test_credentials.json\")\n\nconsole.print(tree)\n\ntree2 = Tree(\":computer: Directory structure after submitting\", guide_style=\"bold bright_black\")\ndata_tree = tree2.add(\":file_folder: data\")\ncurrent_tree = data_tree.add(\":file_folder: current_round\")\ncurrent_tree.add(\":page_facing_up: numerai_tournament_data.parquet\")\nsub_tree = tree2.add(\":file_folder: sub_current_round\")\nsub_tree.add(\":page_facing_up: test_model1.csv\")\nmodel_tree = tree.add(\":file_folder: test_assets\")\nmodel_tree.add(\":page_facing_up: joblib_v2_example_model.joblib\")\nmodel_tree.add(\":page_facing_up: test_credentials.json\")\n\nconsole.print(tree2)",
"_____no_output_____"
]
],
[
[
"#### 2.2.2. Numerai Signals",
"_____no_output_____"
],
[
"```python\n# --- 0. Numerblox dependencies ---\nfrom numerblox.download import KaggleDownloader\nfrom numerblox.numerframe import create_numerframe\nfrom numerblox.preprocessing import KatsuFeatureGenerator\nfrom numerblox.model import SingleModel\nfrom numerblox.model_pipeline import ModelPipeline\nfrom numerblox.key import load_key_from_json\nfrom numerblox.submission import NumeraiSignalsSubmitter\n\n# --- 1. Download Katsu1110 yfinance dataset from Kaggle ---\nkd = KaggleDownloader(\"data\")\nkd.download_inference_data(\"code1110/yfinance-stock-price-data-for-numerai-signals\")\n\n# --- 2. Initialize NumerFrame with metadata ---\nmetadata = {\"numerai_model_name\": \"test_model1\",\n \"key_path\": \"test_assets/test_credentials.json\"}\ndataf = create_numerframe(\"data/full_data.parquet\", metadata=metadata)\n\n# --- 3. Define and run pipeline ---\nmodels = [SingleModel(\"models/signals_model.cbm\", model_name=\"cb\")]\n# Simple and fast feature generator based on Katsu Signals starter notebook\n# https://www.kaggle.com/code1110/numeraisignals-starter-for-beginners\npipeline = ModelPipeline(preprocessors=[KatsuFeatureGenerator(windows=[20, 40, 60])],\n models=models,\n postprocessors=[])\ndataf = pipeline(dataf)\n\n# --- 4. Submit ---\n# Load credentials from .json (random credentials in this example)\nkey = load_key_from_json(dataf.meta.key_path)\nsubmitter = NumeraiSignalsSubmitter(directory_path=\"sub_current_round\", key=key)\n# full_submission checks contents, saves as csv and submits.\n# cols selection must at least contain 1 ticker column and a signal column.\ndataf['signal'] = dataf['prediction_cb']\nsubmitter.full_submission(dataf=dataf,\n cols=['bloomberg_ticker', 'signal'],\n model_name=dataf.meta.numerai_model_name)\n\n# --- 5. Clean up environment (optional) ---\nkd.remove_base_directory()\nsubmitter.remove_base_directory()\n```",
"_____no_output_____"
]
],
[
[
"# hide_input\nfrom rich.console import Console\nfrom rich.tree import Tree\n\nconsole = Console(record=True, width=100)\n\ntree = Tree(\":computer: Directory structure before starting\", guide_style=\"bold bright_black\")\nmodel_tree = tree.add(\":file_folder: test_assets\")\nmodels_tree = tree.add(\":file_folder: models\")\nmodels_tree.add(\":page_facing_up: signals_model.cbm\")\nmodel_tree.add(\":page_facing_up: test_credentials.json\")\n\nconsole.print(tree)\n\ntree2 = Tree(\":computer: Directory structure after submitting\", guide_style=\"bold bright_black\")\ndata_tree = tree2.add(\":file_folder: data\")\ndata_tree.add(\":page_facing_up: full_data.parquet\")\nsub_tree = tree2.add(\":file_folder: sub_current_round\")\nsub_tree.add(\":page_facing_up: submission.csv\")\nmodel_tree = tree2.add(\":file_folder: test_assets\")\nmodel_tree.add(\":page_facing_up: test_credentials.json\")\nmodels_tree = tree2.add(\":file_folder: models\")\nmodels_tree.add(\":page_facing_up: signals_model.cbm\")\n\nconsole.print(tree2)",
"_____no_output_____"
]
],
[
[
"## 3. Contributing\n\nBe sure to read `CONTRIBUTING.md` for detailed instructions on contributing.\n\nIf you have questions or want to discuss new ideas for `numerblox`, check out [rocketchat.numer.ai/channel/numerblox](https://rocketchat.numer.ai/channel/numerblox).\n\n",
"_____no_output_____"
],
[
"## 4. Branch structure\n",
"_____no_output_____"
],
[
"Every new feature should be implemented in a branch that branches from `dev` and has the naming convention `feature/{FEATURE_DESCRIPTION}`. Explicit bugfixes should be named `bugfix/{FIX_DESCRIPTION}`. An example structure is given below.",
"_____no_output_____"
]
],
[
[
"# hide_input\nconsole = Console(record=True, width=100)\n\ntree = Tree(\"Branch structure\", guide_style=\"bold bright_black\")\n\nmain_tree = tree.add(\"📦 master (release)\", guide_style=\"bright_black\")\ndev_tree = main_tree.add(\"👨💻 dev\")\nfeature_tree = dev_tree.add(\":sparkles: feature/ta-signals-features\")\ndev_tree.add(\":sparkles: feature/news-api-downloader\")\ndev_tree.add(\":sparkles: feature/staking-portfolio-management\")\ndev_tree.add(\":sparkles: bugfix/evaluator-metrics-fix\")\n\nconsole.print(tree)",
"_____no_output_____"
]
],
[
[
"\n## 5. Crediting sources\n\nSome of the components in this library may be based on forum posts, notebooks or ideas made public by the Numerai community. We have done our best to ask all parties who posted a specific piece of code for their permission and credit their work in the documentation. If your code is used in this library without credits, please let us know, so we can add a link to your article/code.\n\nIf you are contributing to `numerblox` and are using ideas posted earlier by someone else, make sure to credit them by posting a link to their article/code in documentation.",
"_____no_output_____"
]
],
[
[
"# hide\n# Run this cell to sync all changes with library\nfrom nbdev.export import notebook2script\n\nnotebook2script()",
"Converted 00_misc.ipynb.\nConverted 01_download.ipynb.\nConverted 02_numerframe.ipynb.\nConverted 03_preprocessing.ipynb.\nConverted 04_model.ipynb.\nConverted 05_postprocessing.ipynb.\nConverted 06_modelpipeline.ipynb.\nConverted 07_evaluation.ipynb.\nConverted 08_key.ipynb.\nConverted 09_submission.ipynb.\nConverted 10_staking.ipynb.\nConverted index.ipynb.\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e75c45c1e5ec1d1ec3163c8438b00595603d19c1 | 760,373 | ipynb | Jupyter Notebook | Immuno_fluorescence_analysis/20210319-Aire_IF_mouse_thymus_analysis.ipynb | shiwei23/Chromatin_Analysis_Scripts | 909b9b81de8fcf04dd4c39ac21a84864ce2003ff | [
"MIT"
] | null | null | null | Immuno_fluorescence_analysis/20210319-Aire_IF_mouse_thymus_analysis.ipynb | shiwei23/Chromatin_Analysis_Scripts | 909b9b81de8fcf04dd4c39ac21a84864ce2003ff | [
"MIT"
] | null | null | null | Immuno_fluorescence_analysis/20210319-Aire_IF_mouse_thymus_analysis.ipynb | shiwei23/Chromatin_Analysis_Scripts | 909b9b81de8fcf04dd4c39ac21a84864ce2003ff | [
"MIT"
] | null | null | null | 586.255204 | 506,592 | 0.9362 | [
[
[
"# Analysis of Aire Immuno-fluorescence in thymus\n\nby Pu Zheng\n\n2021.3.19",
"_____no_output_____"
]
],
[
[
"%run \"..\\..\\Startup_py3.py\"\nsys.path.append(r\"..\\..\\..\\..\\Documents\")\n\nimport ImageAnalysis3 as ia\n%matplotlib notebook\n\nfrom ImageAnalysis3 import *\nprint(os.getpid())\n\nimport h5py\nfrom ImageAnalysis3.classes import _allowed_kwds\nimport ast",
"9112\n"
],
[
"data_folder = r'\\\\10.245.74.158\\Chromatin_NAS_5\\Thymus_mouse\\191017_Th_Aire_CK5'\n",
"_____no_output_____"
],
[
"dax_filenames = [os.path.join(data_folder, _fl) \n for _fl in os.listdir(data_folder) \n if _fl.split(os.extsep)[-1]=='dax']",
"_____no_output_____"
],
[
"# load image\nfov_id = 5\n_filename = dax_filenames[fov_id]\n_reader = visual_tools.DaxReader(_filename, verbose=True)\n_im = _reader.loadAll()\n_reader.close()",
"_____no_output_____"
],
[
"im_dict = {}\n_start_frame = 0\n_order = 1\nfor _info in os.path.basename(_filename).split('_')[0].split('-')[1:]:\n _ch = str(_info.split('s')[0])\n _len = int(_info.split('s')[1])\n if _order > 0:\n im_dict[_ch] = _im[_start_frame:_start_frame+_len]\n if _order < 0:\n im_dict[_ch] = _im[_start_frame+_len-1:_start_frame-1:-1]\n \n #updates\n _order *= -1\n _start_frame += _len\n print(im_dict[_ch].shape)",
"(7, 2048, 2048)\n(7, 2048, 2048)\n(1, 2048, 2048)\n(7, 2048, 2048)\n"
],
[
"list(im_dict.keys())",
"_____no_output_____"
],
[
"%matplotlib notebook\nviewer = visual_tools.imshow_mark_3d_v2(list(im_dict.values()))",
"_____no_output_____"
],
[
"viewer.dic_min_max",
"_____no_output_____"
],
[
"reload(ia.figure_tools.image)\nfrom ImageAnalysis3.figure_tools.image import visualize_2d_gaussian, visualize_2d_projection\nfrom ImageAnalysis3.figure_tools.color import black_gradient\nfrom ImageAnalysis3.figure_tools import _dpi, _single_col_width,_double_col_width, _font_size, _ticklabel_size, _ticklabel_width",
"_____no_output_____"
],
[
"%matplotlib inline\n",
"_____no_output_____"
],
[
"ck5_im = im_dict['750'][:,900:1300,1100:1500]\naire_im = im_dict['635'][:,900:1300,1100:1500]\ndapi_im = im_dict['408'][:,900:1300,1100:1500]\n",
"_____no_output_____"
],
[
"figure_folder = os.path.join(data_folder, 'plots')\nprint(figure_folder)\nif not os.path.exists(figure_folder):\n os.makedirs(figure_folder)",
"\\\\10.245.74.158\\Chromatin_NAS_5\\Thymus_mouse\\191017_Th_Aire_CK5\\plots\n"
],
[
"\nax = visualize_2d_projection(dapi_im[1:-1], \n figure_width=_single_col_width, figure_dpi=300,\n cmap=black_gradient([0.,0.,1], transparent=False), \n color_limits=[200,1000], \n add_reference_bar=False,\n reference_bar_color=[1,1,1])\n\nax = visualize_2d_projection(ck5_im[1:-1], \n ax=ax,\n figure_width=_single_col_width, figure_dpi=300,\n projection_type='max',\n cmap=black_gradient([0,1.,0.], transparent=True),\n figure_alpha=1,\n color_limits=[2000,16000], \n reference_bar_length=5000/ia._distance_zxy[-1],\n reference_bar_color=[1,1,1])\n\nax = visualize_2d_projection(aire_im[1:-1], \n ax=ax,\n figure_width=_single_col_width, figure_dpi=300,\n projection_type='max',\n cmap=black_gradient([1,0.,0.], transparent=True),\n figure_alpha=1,\n color_limits=[1000,8000], \n reference_bar_length=5000/ia._distance_zxy[-1],\n reference_bar_color=[1,1,1])\n\n#ax.text(340, 320, \"Aire\", color=[1,0,0], fontsize=7.5)\n#ax.text(340, 345, \"CK5\", color=[0,1,0], fontsize=7.5)\n#ax.text(340, 370, \"DAPI\", color=[0,0,1], fontsize=7.5)\n\nplt.savefig(os.path.join(figure_folder, f'overlayed_IF_fov_{fov_id}.png'), transparent=True)\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75c553038e5669fa84f7bbe5862d9c3cb15ec4a | 223,441 | ipynb | Jupyter Notebook | courses/dl1/lesson5-movielens_max_playground.ipynb | maxwellmckinnon/fastai_maxfork | b67bf7184ac2be1825697709051c5bcba058a40d | [
"Apache-2.0"
] | null | null | null | courses/dl1/lesson5-movielens_max_playground.ipynb | maxwellmckinnon/fastai_maxfork | b67bf7184ac2be1825697709051c5bcba058a40d | [
"Apache-2.0"
] | null | null | null | courses/dl1/lesson5-movielens_max_playground.ipynb | maxwellmckinnon/fastai_maxfork | b67bf7184ac2be1825697709051c5bcba058a40d | [
"Apache-2.0"
] | null | null | null | 108.046905 | 135,796 | 0.834153 | [
[
[
"## Movielens",
"_____no_output_____"
]
],
[
[
"%reload_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nfrom fastai.learner import *\nfrom fastai.column_data import *",
"_____no_output_____"
]
],
[
[
"Data available from http://files.grouplens.org/datasets/movielens/ml-latest-small.zip",
"_____no_output_____"
]
],
[
[
"path='/root/data/ml-latest-small/'",
"_____no_output_____"
]
],
[
[
"We're working with the movielens data, which contains one rating per row, like this:",
"_____no_output_____"
]
],
[
[
"ratings = pd.read_csv(path+'ratings.csv')\nratings.head()",
"_____no_output_____"
]
],
[
[
"Just for display purposes, let's read in the movie names too.",
"_____no_output_____"
]
],
[
[
"movies = pd.read_csv(path+'movies.csv')\nmovies.head()",
"_____no_output_____"
]
],
[
[
"## Create subset for Excel",
"_____no_output_____"
],
[
"We create a crosstab of the most popular movies and most movie-addicted users which we'll copy into Excel for creating a simple example. This isn't necessary for any of the modeling below however.",
"_____no_output_____"
]
],
[
[
"g=ratings.groupby('userId')['rating'].count()\ntopUsers=g.sort_values(ascending=False)[:15]\n\ng=ratings.groupby('movieId')['rating'].count()\ntopMovies=g.sort_values(ascending=False)[:15]\n\ntop_r = ratings.join(topUsers, rsuffix='_r', how='inner', on='userId')\ntop_r = top_r.join(topMovies, rsuffix='_r', how='inner', on='movieId')\n\npd.crosstab(top_r.userId, top_r.movieId, top_r.rating, aggfunc=np.sum)",
"_____no_output_____"
]
],
[
[
"## Collaborative filtering",
"_____no_output_____"
]
],
[
[
"val_idxs = get_cv_idxs(len(ratings))\nwd=2e-4\nn_factors = 50",
"_____no_output_____"
],
[
"cf = CollabFilterDataset.from_csv(path, 'ratings.csv', 'userId', 'movieId', 'rating')\nlearn = cf.get_learner(n_factors, val_idxs, 64, opt_fn=optim.Adam)",
"_____no_output_____"
],
[
"learn.fit(1e-2, 2, wds=wd, cycle_len=1, cycle_mult=2)",
"_____no_output_____"
]
],
[
[
"Let's compare to some benchmarks. Here's [some benchmarks](https://www.librec.net/release/v1.3/example.html) on the same dataset for the popular Librec system for collaborative filtering. They show best results based on [RMSE](http://www.statisticshowto.com/rmse/) of 0.91. We'll need to take the square root of our loss, since we use plain MSE.",
"_____no_output_____"
]
],
[
[
"math.sqrt(0.776)",
"_____no_output_____"
]
],
[
[
"Looking good - we've found a solution better than any of those benchmarks! Let's take a look at how the predictions compare to actuals for this model.",
"_____no_output_____"
]
],
[
[
"preds = learn.predict()",
"_____no_output_____"
],
[
"y=learn.data.val_y\nsns.jointplot(preds, y, kind='hex', stat_func=None);",
"/root/anaconda3/envs/fastai/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6462: UserWarning: The 'normed' kwarg is deprecated, and has been replaced by the 'density' kwarg.\n warnings.warn(\"The 'normed' kwarg is deprecated, and has been \"\n/root/anaconda3/envs/fastai/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6462: UserWarning: The 'normed' kwarg is deprecated, and has been replaced by the 'density' kwarg.\n warnings.warn(\"The 'normed' kwarg is deprecated, and has been \"\n"
]
],
[
[
"## Analyze results",
"_____no_output_____"
],
[
"### Movie bias",
"_____no_output_____"
]
],
[
[
"movie_names = movies.set_index('movieId')['title'].to_dict()\ng=ratings.groupby('movieId')['rating'].count()\ntopMovies=g.sort_values(ascending=False).index.values[:3000]\ntopMovieIdx = np.array([cf.item2idx[o] for o in topMovies])",
"_____no_output_____"
],
[
"m=learn.model; m.cuda()",
"_____no_output_____"
]
],
[
[
"First, we'll look at the movie bias term. Here, our input is the movie id (a single id), and the output is the movie bias (a single float).",
"_____no_output_____"
]
],
[
[
"movie_bias = to_np(m.ib(V(topMovieIdx)))",
"_____no_output_____"
],
[
"movie_bias",
"_____no_output_____"
],
[
"movie_ratings = [(b[0], movie_names[i]) for i,b in zip(topMovies,movie_bias)]",
"_____no_output_____"
]
],
[
[
"Now we can look at the top and bottom rated movies. These ratings are corrected for different levels of reviewer sentiment, as well as different types of movies that different reviewers watch.",
"_____no_output_____"
]
],
[
[
"sorted(movie_ratings, key=lambda o: o[0])[:15]",
"_____no_output_____"
],
[
"sorted(movie_ratings, key=itemgetter(0))[:15]",
"_____no_output_____"
],
[
"sorted(movie_ratings, key=lambda o: o[0], reverse=True)[:15]",
"_____no_output_____"
]
],
[
[
"### Embedding interpretation",
"_____no_output_____"
],
[
"We can now do the same thing for the embeddings.",
"_____no_output_____"
]
],
[
[
"movie_emb = to_np(m.i(V(topMovieIdx)))\nmovie_emb.shape",
"_____no_output_____"
]
],
[
[
"Because it's hard to interpret 50 embeddings, we use [PCA](https://plot.ly/ipython-notebooks/principal-component-analysis/) to simplify them down to just 3 vectors. ",
"_____no_output_____"
]
],
[
[
"from sklearn.decomposition import PCA\npca = PCA(n_components=3)\nmovie_pca = pca.fit(movie_emb.T).components_",
"_____no_output_____"
],
[
"movie_pca.shape",
"_____no_output_____"
],
[
"fac0 = movie_pca[0]\nmovie_comp = [(f, movie_names[i]) for f,i in zip(fac0, topMovies)]",
"_____no_output_____"
]
],
[
[
"Here's the 1st component. It seems to be 'easy watching' vs 'serious'.",
"_____no_output_____"
]
],
[
[
"sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]",
"_____no_output_____"
],
[
"sorted(movie_comp, key=itemgetter(0))[:10]",
"_____no_output_____"
],
[
"fac1 = movie_pca[1]\nmovie_comp = [(f, movie_names[i]) for f,i in zip(fac1, topMovies)]",
"_____no_output_____"
]
],
[
[
"Here's the 2nd component. It seems to be 'CGI' vs 'dialog driven'.",
"_____no_output_____"
]
],
[
[
"sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]",
"_____no_output_____"
],
[
"sorted(movie_comp, key=itemgetter(0))[:10]",
"_____no_output_____"
]
],
[
[
"We can draw a picture to see how various movies appear on the map of these components. This picture shows the first two components.",
"_____no_output_____"
]
],
[
[
"idxs = np.random.choice(len(topMovies), 50, replace=False)\nX = fac0[idxs]\nY = fac1[idxs]\nplt.figure(figsize=(15,15))\nplt.scatter(X, Y)\nfor i, x, y in zip(topMovies[idxs], X, Y):\n plt.text(x,y,movie_names[i], color=np.random.rand(3)*0.7, fontsize=11)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Collab filtering from scratch",
"_____no_output_____"
],
[
"### Dot product example",
"_____no_output_____"
]
],
[
[
"a = T([[1.,2],[3,4]])\nb = T([[2.,2],[10,10]])\na,b",
"_____no_output_____"
],
[
"a*b",
"_____no_output_____"
],
[
"(a*b).sum(1)",
"_____no_output_____"
],
[
"class DotProduct(nn.Module):\n def forward(self, u, m): return (u*m).sum(1)",
"_____no_output_____"
],
[
"model=DotProduct()",
"_____no_output_____"
],
[
"model(a,b)",
"_____no_output_____"
]
],
[
[
"### Dot product model",
"_____no_output_____"
]
],
[
[
"u_uniq = ratings.userId.unique()\nuser2idx = {o:i for i,o in enumerate(u_uniq)}\nratings.userId = ratings.userId.apply(lambda x: user2idx[x])\n\nm_uniq = ratings.movieId.unique()\nmovie2idx = {o:i for i,o in enumerate(m_uniq)}\nratings.movieId = ratings.movieId.apply(lambda x: movie2idx[x])\n\nn_users=int(ratings.userId.nunique())\nn_movies=int(ratings.movieId.nunique())",
"_____no_output_____"
],
[
"class EmbeddingDot(nn.Module):\n def __init__(self, n_users, n_movies):\n super().__init__()\n self.u = nn.Embedding(n_users, n_factors)\n self.m = nn.Embedding(n_movies, n_factors)\n self.u.weight.data.uniform_(0,0.05)\n self.m.weight.data.uniform_(0,0.05)\n \n def forward(self, cats, conts):\n users,movies = cats[:,0],cats[:,1]\n u,m = self.u(users),self.m(movies)\n return (u*m).sum(1)",
"_____no_output_____"
],
[
"x = ratings.drop(['rating', 'timestamp'],axis=1)\ny = ratings['rating'].astype(np.float32)",
"_____no_output_____"
],
[
"data = ColumnarModelData.from_data_frame(path, val_idxs, x, y, ['userId', 'movieId'], 64)",
"_____no_output_____"
],
[
"wd=1e-5\nmodel = EmbeddingDot(n_users, n_movies).cuda()\nopt = optim.SGD(model.parameters(), 1e-1, weight_decay=wd, momentum=0.9)",
"_____no_output_____"
],
[
"fit(model, data, 3, opt, F.mse_loss)",
"_____no_output_____"
],
[
"set_lrs(opt, 0.01)",
"_____no_output_____"
],
[
"fit(model, data, 3, opt, F.mse_loss)",
"_____no_output_____"
]
],
[
[
"### Bias",
"_____no_output_____"
]
],
[
[
"min_rating,max_rating = ratings.rating.min(),ratings.rating.max()\nmin_rating,max_rating",
"_____no_output_____"
],
[
"def get_emb(ni,nf):\n e = nn.Embedding(ni, nf)\n e.weight.data.uniform_(-0.01,0.01)\n return e\n\nclass EmbeddingDotBias(nn.Module):\n def __init__(self, n_users, n_movies):\n super().__init__()\n (self.u, self.m, self.ub, self.mb) = [get_emb(*o) for o in [\n (n_users, n_factors), (n_movies, n_factors), (n_users,1), (n_movies,1)\n ]]\n \n def forward(self, cats, conts):\n users,movies = cats[:,0],cats[:,1]\n um = (self.u(users)* self.m(movies)).sum(1)\n res = um + self.ub(users).squeeze() + self.mb(movies).squeeze()\n res = F.sigmoid(res) * (max_rating-min_rating) + min_rating\n return res",
"_____no_output_____"
],
[
"wd=2e-4\nmodel = EmbeddingDotBias(cf.n_users, cf.n_items).cuda()\nopt = optim.SGD(model.parameters(), 1e-1, weight_decay=wd, momentum=0.9)",
"_____no_output_____"
],
[
"fit(model, data, 3, opt, F.mse_loss)",
"_____no_output_____"
],
[
"set_lrs(opt, 1e-2)",
"_____no_output_____"
],
[
"fit(model, data, 3, opt, F.mse_loss)",
"_____no_output_____"
]
],
[
[
"### Mini net",
"_____no_output_____"
]
],
[
[
"class EmbeddingNet(nn.Module):\n def __init__(self, n_users, n_movies, nh=10, p1=0.05, p2=0.5):\n super().__init__()\n (self.u, self.m) = [get_emb(*o) for o in [\n (n_users, n_factors), (n_movies, n_factors)]]\n self.lin1 = nn.Linear(n_factors*2, nh)\n self.lin2 = nn.Linear(nh, 1)\n self.drop1 = nn.Dropout(p1)\n self.drop2 = nn.Dropout(p2)\n \n def forward(self, cats, conts):\n users,movies = cats[:,0],cats[:,1]\n x = self.drop1(torch.cat([self.u(users),self.m(movies)], dim=1))\n x = self.drop2(F.relu(self.lin1(x)))\n return F.sigmoid(self.lin2(x)) * (max_rating-min_rating+1) + min_rating-0.5",
"_____no_output_____"
],
[
"wd=1e-5\nmodel = EmbeddingNet(n_users, n_movies).cuda()\nopt = optim.Adam(model.parameters(), 1e-3, weight_decay=wd)",
"_____no_output_____"
],
[
"fit(model, data, 3, opt, F.mse_loss)",
"_____no_output_____"
],
[
"set_lrs(opt, 1e-3)",
"_____no_output_____"
],
[
"fit(model, data, 3, opt, F.mse_loss)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e75c582a90e12b6245e9a21722c8da515058f1dc | 499,132 | ipynb | Jupyter Notebook | mnist-pca.ipynb | jiwoncpark/cs269q-quantum-computer-programming | b9a1633b457854548003a680c9dc09d29770515f | [
"MIT"
] | null | null | null | mnist-pca.ipynb | jiwoncpark/cs269q-quantum-computer-programming | b9a1633b457854548003a680c9dc09d29770515f | [
"MIT"
] | null | null | null | mnist-pca.ipynb | jiwoncpark/cs269q-quantum-computer-programming | b9a1633b457854548003a680c9dc09d29770515f | [
"MIT"
] | null | null | null | 1,078.038877 | 136,704 | 0.959169 | [
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\n\nfrom sklearn.datasets import fetch_openml\nfrom sklearn.decomposition import PCA\nfrom matplotlib import cm, colors\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nplt.style.use('fivethirtyeight')\nplt.rcParams[\"axes.grid\"] = False\n\n%matplotlib inline",
"_____no_output_____"
],
[
"X, y = fetch_openml('mnist_784', version=1, return_X_y=True)",
"_____no_output_____"
],
[
"num_components = 8\npca = PCA(n_components=num_components, whiten=True)\nX_r = pca.fit(X).transform(X)",
"_____no_output_____"
],
[
"X_r.shape",
"_____no_output_____"
],
[
"total_explained = np.sum(pca.explained_variance_ratio_)\ntotal_explained",
"_____no_output_____"
],
[
"fig, axarr = plt.subplots(num_components//2, 2, figsize=(12, 2*num_components))\n\nfor comp in range(num_components):\n    row = comp // 2\n    col = comp % 2\n    sns.heatmap(pca.components_[comp, :].reshape(28, 28),\n                xticklabels=5, yticklabels=5, ax=axarr[row, col], cmap='gray_r')\n    axarr[row, col].set_title(\n        \"{0:.2f}% explained variance\".format(pca.explained_variance_ratio_[comp]*100),\n        fontsize=12)\n    axarr[row, col].set_aspect('equal')\n\n#plt.suptitle('{0:d}-component PCA, {1:.2f}% total explained variance'.format(num_components, total_explained*100))\nplt.tight_layout()",
"_____no_output_____"
],
[
"num_classes = 10\n#sim_colors = cm.ScalarMappable(cmap=cm.jet, norm=colors.Normalize(vmin=0, vmax=num_classes - 1))\ny_task = y.astype(int)\nplt.scatter(X_r[:, 0], X_r[:, 1], c=y_task, cmap=cm.get_cmap('jet', num_classes), alpha=0.5)\nplt.xlabel(\"First principal component\")\nplt.ylabel(\"Second principal component\")\nplt.title(\"2D Projection of PCA-reduced data with digit labels\")\nplt.tight_layout()\nplt.colorbar()",
"_____no_output_____"
],
[
"num_classes = 2\n#sim_colors = cm.ScalarMappable(cmap=cm.jet, norm=colors.Normalize(vmin=0, vmax=num_classes - 1))\ny_task = y.astype(int)\n#y_task = (y_task > 4).astype(int)\ny_task = (y_task%2 == 0)\nplt.scatter(X_r[:, 0], X_r[:, 1], c=y_task, cmap=cm.get_cmap('jet', num_classes), alpha=0.5)\nplt.xlabel(\"First principal component\")\nplt.ylabel(\"Second principal component\")\nplt.title(\"2D Projection of PCA-reduced data with 'Is even' labels\")\nplt.tight_layout()\nplt.colorbar(ticks=[0, 1])",
"_____no_output_____"
],
[
"num_classes = 2\n#sim_colors = cm.ScalarMappable(cmap=cm.jet, norm=colors.Normalize(vmin=0, vmax=num_classes - 1))\ny_task = y.astype(int)\nonly_01 = (y_task < 2)\ny_task = y_task[only_01]\nX_task = X_r[only_01, :]\nplt.scatter(X_task[:, 0], X_task[:, 1], c=y_task, cmap=cm.get_cmap('jet', num_classes), alpha=0.5)\nplt.xlabel(\"First principal component\")\nplt.ylabel(\"Second principal component\")\nplt.title(\"2D Projection of PCA-reduced data with '0 or 1' labels\")\nplt.tight_layout()\nplt.colorbar(ticks=[0, 1])",
"_____no_output_____"
],
[
"num_classes = 2\n#sim_colors = cm.ScalarMappable(cmap=cm.jet, norm=colors.Normalize(vmin=0, vmax=num_classes - 1))\ny_task = y.astype(int)\n#y_task = (y_task > 4).astype(int)\nonly_27 = np.isin(y_task, [2, 7])\ny_task = y_task[only_27]\nX_task = X_r[only_27, :]\nplt.scatter(X_task[:, 0], X_task[:, 1], c=y_task, cmap=cm.get_cmap('jet', num_classes), alpha=0.5)\nplt.xlabel(\"First principal component\")\nplt.ylabel(\"Second principal component\")\nplt.title(\"2D Projection of PCA-reduced data with '2 or 7' labels\")\nplt.tight_layout()\nplt.colorbar(ticks=[0, 1])",
"_____no_output_____"
]
],
[
[
"## Qubit encoding",
"_____no_output_____"
]
],
[
[
"def whiten(arr):\n arr_mean = np.mean(arr)\n arr_std = np.std(arr)\n whitened = (arr - arr_mean)/arr_std\n return whitened",
"_____no_output_____"
],
[
"plt.imshow(whiten(X[0, :].reshape(28, 28)), cmap='gray')\n\nplt.colorbar()",
"_____no_output_____"
],
[
"plt.imshow(X_r[0, :].reshape(8, 1), cmap='gray')\nplt.colorbar()\nplt.tight_layout()\nplt.axis('off')",
"_____no_output_____"
],
[
"def rescale_to_angle(arr):\n arr_min = np.min(arr)\n arr_max = np.max(arr)\n rescaled_unit = (arr - arr_min)/(arr_max - arr_min)\n return rescaled_unit*0.5*np.pi",
"_____no_output_____"
],
[
"plt.imshow(rescale_to_angle(X_r[0, :].reshape(8, 1)), cmap='gray')\ncb = plt.colorbar(ticks=np.linspace(0., 0.5*np.pi, 5))\ncb.ax.set_yticklabels(['0', '$\\pi/8$', '$\\pi/4$', '$3\\pi/8$', '$\\pi/2$'])\nplt.tight_layout()\nplt.axis('off')",
"_____no_output_____"
],
[
"rescale_to_angle(X_r[0, :])",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75c5f08c3ffaba687ac4d7bfff9842a3cfde89f | 2,231 | ipynb | Jupyter Notebook | OriginalImgSize.ipynb | Taslim-M/hover_net | 1c23a1e712b4407377c0b143f124840a77db6f72 | [
"MIT"
] | null | null | null | OriginalImgSize.ipynb | Taslim-M/hover_net | 1c23a1e712b4407377c0b143f124840a77db6f72 | [
"MIT"
] | null | null | null | OriginalImgSize.ipynb | Taslim-M/hover_net | 1c23a1e712b4407377c0b143f124840a77db6f72 | [
"MIT"
] | null | null | null | 17.991935 | 136 | 0.493949 | [
[
[
"# check Pillow version number\nimport PIL\nfrom PIL import Image\nfrom numpy import asarray",
"_____no_output_____"
],
[
"print('Pillow Version:', PIL.__version__)",
"Pillow Version: 7.2.0\n"
],
[
"file= r'C:\\Users\\Tasli\\Desktop\\Python_Folder_New\\ECCE635\\hover_net-master\\DATA\\consep\\CoNSeP\\Train\\Images\\train_1.png'",
"_____no_output_____"
],
[
"image = Image.open(file)",
"_____no_output_____"
],
[
"print(image.size)",
"(1000, 1000)\n"
],
[
"# convert image to numpy array\ndata = asarray(image)\n# summarize shape\nprint(data.shape)",
"(1000, 1000, 3)\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75c5f8c1c69c6b0cd82f5a3c7a5f5aef4e63697 | 27,878 | ipynb | Jupyter Notebook | assignments/assignment05/InteractEx03.ipynb | aschaffn/phys202-2015-work | c0e2eca987ed717a8cc345a7cf556b8111aa5457 | [
"MIT"
] | null | null | null | assignments/assignment05/InteractEx03.ipynb | aschaffn/phys202-2015-work | c0e2eca987ed717a8cc345a7cf556b8111aa5457 | [
"MIT"
] | null | null | null | assignments/assignment05/InteractEx03.ipynb | aschaffn/phys202-2015-work | c0e2eca987ed717a8cc345a7cf556b8111aa5457 | [
"MIT"
] | null | null | null | 74.341333 | 9,998 | 0.825669 | [
[
[
"# Interact Exercise 3",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
[
"from IPython.html.widgets import interact, interactive, fixed\nfrom IPython.display import display",
":0: FutureWarning: IPython widgets are experimental and may change in the future.\n"
]
],
[
[
"# Using interact for animation with data",
"_____no_output_____"
],
[
"A [*soliton*](http://en.wikipedia.org/wiki/Soliton) is a constant velocity wave that maintains its shape as it propagates. They arise from non-linear wave equations, such as the [Korteweg–de Vries](http://en.wikipedia.org/wiki/Korteweg%E2%80%93de_Vries_equation) equation, which has the following analytical solution:\n\n$$\n\\phi(x,t) = \\frac{1}{2} c \\mathrm{sech}^2 \\left[ \\frac{\\sqrt{c}}{2} \\left(x - ct - a \\right) \\right]\n$$\n\nThe constant `c` is the velocity and the constant `a` is the initial location of the soliton.\n\nDefine a `soliton(x, t, c, a)` function that computes the value of the soliton wave for the given arguments. Your function should work when the position `x` *or* `t` are NumPy arrays, in which case it should return a NumPy array itself.",
"_____no_output_____"
]
],
[
[
"def soliton(x, t, c, a):\n    \"\"\"Return phi(x, t) for a soliton wave with constants c and a.\"\"\"\n    phiarg = (np.sqrt(c)/2.)*(x - c*t - a)\n    # sech(u) = 1/cosh(u), so phi = (c/2)*sech^2(u)\n    phi = 0.5 * c / np.cosh(phiarg)**2\n    return phi",
"_____no_output_____"
],
[
"assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))",
"_____no_output_____"
]
],
[
[
"To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays:",
"_____no_output_____"
]
],
[
[
"tmin = 0.0\ntmax = 10.0\ntpoints = 100\nt = np.linspace(tmin, tmax, tpoints)\n\nxmin = 0.0\nxmax = 10.0\nxpoints = 200\nx = np.linspace(xmin, xmax, xpoints)\n\nc = 1.0\na = 0.0",
"_____no_output_____"
]
],
[
[
"Compute a 2d NumPy array called `phi`:\n\n* It should have a dtype of `float`.\n* It should have a shape of `(xpoints, tpoints)`.\n* `phi[i,j]` should contain the value $\\phi(x[i],t[j])$.",
"_____no_output_____"
]
],
[
[
"phi = np.zeros([xpoints, tpoints], dtype=float)\nfor i in range(xpoints):\n    for j in range(tpoints):\n        phi[i, j] = soliton(x[i], t[j], c, a)\n\n# NumPy broadcasting can replace the loops entirely:\n# phi = soliton(x[:, np.newaxis], t[np.newaxis, :], c, a)",
"_____no_output_____"
],
[
"assert phi.shape==(xpoints, tpoints)\nassert phi.ndim==2\nassert phi.dtype==np.dtype(float)\nassert phi[0,0]==soliton(x[0],t[0],c,a)",
"_____no_output_____"
]
],
[
[
"Write a `plot_soliton_data(i)` function that plots the soliton wave $\\phi(x, t[i])$. Customize your plot to make it effective and beautiful.",
"_____no_output_____"
]
],
[
[
"def plot_soliton_data(i=0):\n    \"\"\"Plot the soliton data at t[i] versus x.\"\"\"\n    plt.plot(x, phi[:, i])\n    plt.xlim((xmin, xmax))\n    plt.xlabel('x')\n    plt.ylabel('phi(x, t)')\n    plt.title('t = ' + str(t[i]))",
"_____no_output_____"
],
[
"plot_soliton_data(0)",
"_____no_output_____"
],
[
"\"\"\"hi there\"\"\"\nprint(\"\"\"hi \nhow are you\"\"\")\n",
"hi \nhow are you\n"
],
[
"assert True # leave this for grading the plot_soliton_data function",
"_____no_output_____"
]
],
[
[
"Use `interact` to animate the `plot_soliton_data` function versus time.",
"_____no_output_____"
]
],
[
[
"interact(plot_soliton_data, i=(0,99))",
"_____no_output_____"
],
[
"assert True # leave this for grading the interact with plot_soliton_data cell",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e75c6063f26eadd49a759a1331fcc0b7eeeedd64 | 48,405 | ipynb | Jupyter Notebook | in_progress/Tutorial-GiRaFFE_NRPy_Main_Driver_new_way.ipynb | philchang/nrpytutorial | a69d90777b2519192e3c53a129fe42827224faa3 | [
"BSD-2-Clause"
] | null | null | null | in_progress/Tutorial-GiRaFFE_NRPy_Main_Driver_new_way.ipynb | philchang/nrpytutorial | a69d90777b2519192e3c53a129fe42827224faa3 | [
"BSD-2-Clause"
] | null | null | null | in_progress/Tutorial-GiRaFFE_NRPy_Main_Driver_new_way.ipynb | philchang/nrpytutorial | a69d90777b2519192e3c53a129fe42827224faa3 | [
"BSD-2-Clause"
] | null | null | null | 59.759259 | 913 | 0.623923 | [
[
[
"<script async src=\"https://www.googletagmanager.com/gtag/js?id=UA-59152712-8\"></script>\n<script>\n window.dataLayer = window.dataLayer || [];\n function gtag(){dataLayer.push(arguments);}\n gtag('js', new Date());\n\n gtag('config', 'UA-59152712-8');\n</script>\n\n# `GiRaFFE_NRPy`: Main Driver\n\n## Author: Patrick Nelson\n\n<a id='intro'></a>\n\n**Notebook Status:** <font color=Red><b> Validation in progress </b></font>\n\n**Validation Notes:** This code assembles the various parts needed for GRFFE evolution in order.\n\n### NRPy+ Source Code for this module:\n* [GiRaFFE_NRPy/GiRaFFE_NRPy_Main_Driver_new_way.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Main_Driver_new_way.py)\n\n### Other critical files (in alphabetical order): \n* [GiRaFFE_NRPy/GiRaFFE_NRPy_Afield_flux_handwritten.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Afield_flux_handwritten.py) [\\[**tutorial**\\]](Tutorial-GiRaFFE_NRPy-Afield_flux_handwritten.ipynb) Generates the expressions to find the flux term of the induction equation.\n* [GiRaFFE_NRPy/GiRaFFE_NRPy_A2B.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_A2B.py) [\\[**tutorial**\\]](Tutorial-GiRaFFE_NRPy-A2B.ipynb) Generates the driver to compute the magnetic field from the vector potential.\n* [GiRaFFE_NRPy/GiRaFFE_NRPy_BCs.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_BCs.py) [\\[**tutorial**\\]](Tutorial-GiRaFFE_NRPy-BCs.ipynb) Generates the code to apply boundary conditions to the vector potential, scalar potential, and three-velocity.\n* [GiRaFFE_NRPy/GiRaFFE_NRPy_C2P_P2C.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_C2P_P2C.py) [\\[**tutorial**\\]](Tutorial-GiRaFFE_NRPy-C2P_P2C.ipynb) Generates the conservative-to-primitive and primitive-to-conservative solvers.\n* [GiRaFFE_NRPy/GiRaFFE_NRPy_Metric_Face_Values.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Metric_Face_Values.py) [\\[**tutorial**\\]](Tutorial-GiRaFFE_NRPy-Metric_Face_Values.ipynb) Generates code to interpolate metric gridfunctions to cell faces.\n* [GiRaFFE_NRPy/GiRaFFE_NRPy_PPM.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_PPM.py) [\\[**tutorial**\\]](Tutorial-GiRaFFE_NRPy-PPM.ipynb) Generates code to reconstruct primitive variables on cell faces.\n* [GiRaFFE_NRPy/GiRaFFE_NRPy_Source_Terms.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Source_Terms.py) [\\[**tutorial**\\]](Tutorial-GiRaFFE_NRPy-Source_Terms.ipynb) Generates code to compute the $\\tilde{S}_i$ source term.\n* [GiRaFFE_NRPy/Stilde_flux.py](../../edit/in_progress/GiRaFFE_NRPy/Stilde_flux.py) [\\[**tutorial**\\]](Tutorial-GiRaFFE_NRPy-Stilde-flux.ipynb) Generates the expressions to find the flux term of the Poynting flux evolution equation.\n* [../GRFFE/equations.py](../../edit/GRFFE/equations.py) [\\[**tutorial**\\]](../Tutorial-GRFFE_Equations-Cartesian.ipynb) Generates code necessary to compute the source terms.\n* [../GRHD/equations.py](../../edit/GRHD/equations.py) [\\[**tutorial**\\]](../Tutorial-GRHD_Equations-Cartesian.ipynb) Generates code necessary to compute the source terms.\n\n## Introduction: \nHaving written all the various algorithms that will go into evolving the GRFFE equations forward through time, we are ready to write a start-to-finish module to do so. However, to help keep things more organized, we will first create a dedicated module to assemble the various functions we need to run, in order, to perform the evolution. This will reduce the length of the standalone C code, improving that notebook's readability.\n\n<a id='toc'></a>\n\n# Table of Contents\n$$\\label{toc}$$\n\nDuring a given RK substep, we will perform the following steps in this order, based on the order used in the original `GiRaFFE`:\n0. [Step 0](#prelim): Preliminaries\n1. [Step 1](#rhs): Calculate the right-hand sides\n    1. [Step 1.a](#operand): Calculate the portion of the gauge terms for $A_k$, $(\\alpha \\Phi - \\beta^j A_j)$ and $\\Phi$, $(\\alpha\\sqrt{\\gamma}A^j - \\beta^j [\\sqrt{\\gamma} \\Phi])$ *inside* the parentheses to be finite-differenced.\n        1. [**GRFFE/equations.py**](../../edit/GRFFE/equations.py), [**GRHD/equations.py**](../../edit/GRHD/equations.py)\n    1. [Step 1.b](#source): Calculate the source terms of the $\\partial_t A_i$, $\\partial_t \\tilde{S}_i$, and $\\partial_t [\\sqrt{\\gamma} \\Phi]$ right-hand sides\n        1. [**GRFFE/equations.py**](../../edit/GRFFE/equations.py), [**GRHD/equations.py**](../../edit/GRHD/equations.py), [**GiRaFFE_NRPy/GiRaFFE_NRPy_Source_Terms.py**](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Source_Terms.py)\n    1. [Step 1.c](#flux): Calculate the Flux terms\n        1. In each direction: \n            1. Interpolate the metric gridfunctions to cell faces\n                1. [**GiRaFFE_NRPy/GiRaFFE_NRPy_Metric_Face_Values.py**](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Metric_Face_Values.py)\n            1. Reconstruct primitives $\\bar{v}^i$ and $B^i$ on cell faces with the piecewise-parabolic method\n                1. [**GiRaFFE_NRPy/GiRaFFE_NRPy_PPM.py**](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_PPM.py)\n            1. Compute the fluxes of $\\tilde{S}_i$ and $A_i$ and add the appropriate combinations to the evolution equation right-hand sides\n                1. [**GiRaFFE_NRPy/Stilde_flux.py**](../../edit/in_progress/GiRaFFE_NRPy/Stilde_flux.py), [**GiRaFFE_NRPy/GiRaFFE_NRPy_Afield_flux_handwritten.py**](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Afield_flux_handwritten.py)\n1. [Step 2](#poststep): Recover the primitive variables and apply boundary conditions (post-step)\n    1. [Step 2.a](#potential_bc): Apply boundary conditions to $A_i$ and $\\sqrt{\\gamma} \\Phi$\n        1. [**GiRaFFE_NRPy/GiRaFFE_NRPy_BCs.py**](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_BCs.py)\n    1. [Step 2.b](#a2b): Compute $B^i$ from $A_i$\n        1. [**GiRaFFE_NRPy/GiRaFFE_NRPy_A2B.py**](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_A2B.py)\n    1. [Step 2.c](#c2p): Run the Conservative-to-Primitive solver\n        1. This applies fixes to $\\tilde{S}_i$, then computes $\\bar{v}^i$. A current sheet prescription is then applied to $\\bar{v}^i$, and $\\tilde{S}_i$ is recomputed to be consistent.\n        1. [**GiRaFFE_NRPy/GiRaFFE_NRPy_C2P_P2C.py**](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_C2P_P2C.py)\n    1. [Step 2.d](#velocity_bc): Apply outflow boundary conditions to $\\bar{v}^i$\n        1. [**GiRaFFE_NRPy/GiRaFFE_NRPy_BCs.py**](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_BCs.py)\n1. [Step 3](#write_out): Write out the C code function\n1. [Step 4](#code_validation): Self-Validation against `GiRaFFE_NRPy_Main_Driver.py`\n1. [Step 5](#latex_pdf_output): Output this notebook to $\\LaTeX$-formatted PDF file\n",
"_____no_output_____"
],
[
"<a id='prelim'></a>\n\n# Step 0: Preliminaries \\[Back to [top](#toc)\\]\n$$\\label{prelim}$$\n\nWe begin by importing the NRPy+ core functionality. We also import the GRHD module and the GRFFE module.",
"_____no_output_____"
]
],
[
[
"# Step 0: Add NRPy's directory to the path\n# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory\nimport os,sys\nnrpy_dir_path = os.path.join(\"..\")\nif nrpy_dir_path not in sys.path:\n    sys.path.append(nrpy_dir_path)\n\nfrom outputC import outCfunction, lhrh, add_to_Cfunction_dict, outC_function_dict, pickle_NRPy_env # NRPy+: Core C code output module\nimport finite_difference as fin   # NRPy+: Finite difference C code generation module\nimport NRPy_param_funcs as par    # NRPy+: Parameter interface\nimport grid as gri                # NRPy+: Functions having to do with numerical grids\nimport loop as lp                 # NRPy+: Generate C code loops\nimport indexedexp as ixp          # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support\nimport reference_metric as rfm    # NRPy+: Reference metric support\nimport cmdline_helper as cmd      # NRPy+: Multi-platform Python command-line interface\n\nthismodule = \"GiRaFFE_NRPy_Main_Driver\"\n\npar.set_parval_from_str(\"finite_difference::FD_CENTDERIVS_ORDER\",2)\n\nout_dir = os.path.join(\"GiRaFFE_standalone_Ccodes\")\ncmd.mkdir(out_dir)\n\nCoordSystem = \"Cartesian\"\n\npar.set_parval_from_str(\"reference_metric::CoordSystem\",CoordSystem)\nrfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.\n\n# Default Kreiss-Oliger dissipation strength\ndefault_KO_strength = 0.1\ndiss_strength = par.Cparameters(\"REAL\", thismodule, \"diss_strength\", default_KO_strength)\n\noutCparams = \"outCverbose=False,CSE_sorting=none\"",
"_____no_output_____"
]
],
[
[
"<a id='rhs'></a>\n\n# Step 1: Calculate the right-hand sides \\[Back to [top](#toc)\\]\n$$\\label{rhs}$$\n\n<a id='operand'></a>\n\nIn the method of lines using Runge-Kutta methods, each timestep involves several \"RK substeps\" during which we will run the same set of function calls. These can be divided into two groups: one in which the RHSs themselves are calculated, and a second in which boundary conditions are applied and auxiliary variables updated (the post-step). Here, we focus on that first group.\n\n## Step 1.a: Calculate the portion of the gauge terms for $A_k$, $(\\alpha \\Phi - \\beta^j A_j)$ and $\\Phi$, $(\\alpha\\sqrt{\\gamma}A^j - \\beta^j [\\sqrt{\\gamma} \\Phi])$ *inside* the parentheses to be finite-differenced. \\[Back to [top](#toc)\\]\n$$\\label{operand}$$\n\nThe gauge terms of our evolution equations consist of two derivative terms: the Lorentz gauge term of $\\partial_t A_k$, which is $\\partial_k (\\alpha \\Phi - \\beta^j A_j)$ and the non-damping, flux-like term of $\\partial_t [\\psi^6 \\Phi]$, which is $\\partial_j (\\alpha\\sqrt{\\gamma}A^j - \\beta^j [\\sqrt{\\gamma} \\Phi])$. We can save some effort and execution time (at the cost of memory needed) by computing the derivative operands, storing them, and then finite-differencing that stored variable. For more information, see the notebook for the [implementation](Tutorial-GiRaFFE_NRPy-Source_Terms.ipynb) and the [validation](Tutorial-Start_to_Finish_UnitTest-GiRaFFE_NRPy-Source_Terms.ipynb), as well as [Tutorial-GRFFE_Equations-Cartesian](../Tutorial-GRFFE_Equations-Cartesian.ipynb) and [Tutorial-GRHD_Equations-Cartesian](../Tutorial-GRHD_Equations-Cartesian.ipynb) for the terms themselves. ",
"_____no_output_____"
]
],
[
[
"import GRHD.equations as GRHD    # NRPy+: Generate general relativistic hydrodynamics equations\nimport GRFFE.equations as GRFFE  # NRPy+: Generate general relativistic force-free electrodynamics equations\n\ngammaDD = ixp.register_gridfunctions_for_single_rank2(\"AUXEVOL\",\"gammaDD\",\"sym01\",DIM=3)\nbetaU = ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"betaU\",DIM=3)\nalpha = gri.register_gridfunctions(\"AUXEVOL\",\"alpha\")\nAD = ixp.register_gridfunctions_for_single_rank1(\"EVOL\",\"AD\")\nBU = ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"BU\")\nValenciavU = ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"ValenciavU\")\npsi6Phi = gri.register_gridfunctions(\"EVOL\",\"psi6Phi\")\nStildeD = ixp.register_gridfunctions_for_single_rank1(\"EVOL\",\"StildeD\")\n\n# We will pass values of the gridfunction on the cell faces into the function. This requires us\n# to declare them as C parameters in NRPy+. We will denote this with the _face infix/suffix.\nalpha_face = gri.register_gridfunctions(\"AUXEVOL\",\"alpha_face\")\ngamma_faceDD = ixp.register_gridfunctions_for_single_rank2(\"AUXEVOL\",\"gamma_faceDD\",\"sym01\")\nbeta_faceU = ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"beta_faceU\")\n\n# We'll need some more gridfunctions, now, to represent the reconstructions of BU and ValenciavU\n# on the right and left faces\nValenciav_rU = ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"Valenciav_rU\",DIM=3)\nB_rU = ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"B_rU\",DIM=3)\nValenciav_lU = ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"Valenciav_lU\",DIM=3)\nB_lU = ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"B_lU\",DIM=3)\n\nixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"Stilde_flux_HLLED\")\n\nixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"PhievolParenU\",DIM=3)\ngri.register_gridfunctions(\"AUXEVOL\",\"AevolParen\")\n\n# Declare this symbol\nsqrt4pi = par.Cparameters(\"REAL\",thismodule,\"sqrt4pi\",\"sqrt(4.0*M_PI)\")\n\ndef add_to_Cfunction_dict__AD_gauge_term_psi6Phi_flux_term(includes=None):\n    GRHD.compute_sqrtgammaDET(gammaDD)\n    GRFFE.compute_AD_source_term_operand_for_FD(GRHD.sqrtgammaDET,betaU,alpha,psi6Phi,AD)\n    GRFFE.compute_psi6Phi_rhs_flux_term_operand(gammaDD,GRHD.sqrtgammaDET,betaU,alpha,AD,psi6Phi)\n\n    parens_to_print = [\n        lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"AevolParen\"),rhs=GRFFE.AevolParen),\n        lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"PhievolParenU0\"),rhs=GRFFE.PhievolParenU[0]),\n        lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"PhievolParenU1\"),rhs=GRFFE.PhievolParenU[1]),\n        lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"PhievolParenU2\"),rhs=GRFFE.PhievolParenU[2]),\n    ]\n\n    desc = \"Calculate quantities to be finite-differenced for the GRFFE RHSs\"\n    name = \"calculate_AD_gauge_term_psi6Phi_flux_term_for_RHSs\"\n    params = \"const paramstruct *restrict params,const REAL *restrict in_gfs,REAL *restrict auxevol_gfs\"\n    body = fin.FD_outputC(\"returnstring\",parens_to_print,params=outCparams)\n    loopopts = \"AllPoints\"\n    rel_path_to_Cparams = os.path.join(\"../\")\n    add_to_Cfunction_dict(\n        includes=includes,\n        desc=desc,\n        name=name, params=params,\n        body=body, loopopts=loopopts,\n        rel_path_to_Cparams=rel_path_to_Cparams)\n    return pickle_NRPy_env()",
"_____no_output_____"
]
],
[
[
"<a id='source'></a>\n\n## Step 1.b: Calculate the source terms of $\\partial_t A_i$, $\\partial_t \\tilde{S}_i$, and $\\partial_t [\\sqrt{\\gamma} \\Phi]$ right-hand sides \\[Back to [top](#toc)\\]\n$$\\label{source}$$\n\nWith the operands of the gradient of divergence operators stored in memory from the previous step, we can now calculate the terms on the RHS of $A_i$ and $[\\sqrt{\\gamma} \\Phi]$ that involve the derivatives of those terms. We also compute the other term in the RHS of $[\\sqrt{\\gamma} \\Phi]$, which is a straightforward damping term. ",
"_____no_output_____"
]
],
[
[
"def add_to_Cfunction_dict__AD_gauge_term_psi6Phi_fin_diff(includes=None):\n    xi_damping = par.Cparameters(\"REAL\",thismodule,\"xi_damping\",0.1)\n    GRFFE.compute_psi6Phi_rhs_damping_term(alpha,psi6Phi,xi_damping)\n\n    AevolParen_dD = ixp.declarerank1(\"AevolParen_dD\",DIM=3)\n    PhievolParenU_dD = ixp.declarerank2(\"PhievolParenU_dD\",\"nosym\",DIM=3)\n\n    A_rhsD = ixp.zerorank1()\n    psi6Phi_rhs = GRFFE.psi6Phi_damping\n\n    for i in range(3):\n        A_rhsD[i] += -AevolParen_dD[i]\n        psi6Phi_rhs += -PhievolParenU_dD[i][i]\n\n    # Add Kreiss-Oliger dissipation to the GRFFE RHSs:\n    # psi6Phi_dKOD = ixp.declarerank1(\"psi6Phi_dKOD\")\n    # AD_dKOD = ixp.declarerank2(\"AD_dKOD\",\"nosym\")\n    # for i in range(3):\n    #     psi6Phi_rhs += diss_strength*psi6Phi_dKOD[i]*rfm.ReU[i] # ReU[i] = 1/scalefactor_orthog_funcform[i]\n    #     for j in range(3):\n    #         A_rhsD[j] += diss_strength*AD_dKOD[j][i]*rfm.ReU[i] # ReU[i] = 1/scalefactor_orthog_funcform[i]\n\n    RHSs_to_print = [\n        lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"AD0\"),rhs=A_rhsD[0]),\n        lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"AD1\"),rhs=A_rhsD[1]),\n        lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"AD2\"),rhs=A_rhsD[2]),\n        lhrh(lhs=gri.gfaccess(\"rhs_gfs\",\"psi6Phi\"),rhs=psi6Phi_rhs),\n    ]\n\n    desc = \"Calculate AD gauge term and psi6Phi RHSs\"\n    name = \"calculate_AD_gauge_psi6Phi_RHSs\"\n    params = \"const paramstruct *params,const REAL *in_gfs,const REAL *auxevol_gfs,REAL *rhs_gfs\"\n    body = fin.FD_outputC(\"returnstring\",RHSs_to_print,params=outCparams)\n    loopopts = \"InteriorPoints\"\n    add_to_Cfunction_dict(\n        includes=includes,\n        desc=desc,\n        name=name, params=params,\n        body=body, loopopts=loopopts)\n    outC_function_dict[name] = outC_function_dict[name].replace(\"= NGHOSTS\",\"= NGHOSTS_A2B\").replace(\"NGHOSTS+Nxx0\",\"Nxx_plus_2NGHOSTS0-NGHOSTS_A2B\").replace(\"NGHOSTS+Nxx1\",\"Nxx_plus_2NGHOSTS1-NGHOSTS_A2B\").replace(\"NGHOSTS+Nxx2\",\"Nxx_plus_2NGHOSTS2-NGHOSTS_A2B\")\n    # Note the above .replace() calls. These serve to expand the loop range into the ghost zones,\n    # since the second-order finite differencing used here requires fewer ghost zones than some\n    # of the other algorithms we use.",
"_____no_output_____"
]
],
[
[
"We also need to compute the source term of the $\\tilde{S}_i$ evolution equation. This term involves derivatives of the four-metric, so we can save some effort here by taking advantage of the interpolations of the metric gridfunctions to the cell faces, which will allow us to take a finite-difference derivative with the accuracy of a higher order and the computational cost of a lower order. However, it will require some more complicated coding, detailed in [Tutorial-GiRaFFE_NRPy-Source_Terms](Tutorial-GiRaFFE_NRPy-Source_Terms.ipynb); the function that generates these functions can be found there and used as follows:\n\n```python\nimport GiRaFFE_NRPy.GiRaFFE_NRPy_Source_Terms as source\nsource.add_to_Cfunction_dict__functions_for_StildeD_source_term(md.outCparams,md.gammaDD,md.betaU,md.alpha,\n                                                                md.ValenciavU,md.BU,md.sqrt4pi)\n```",
"_____no_output_____"
],
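[
"For reference, the $\\tilde{S}_i$ source term generated by that function takes the standard form used throughout these tutorials,\n$$\n\\partial_t \\tilde{S}_i \\supset \\frac{1}{2} \\alpha \\sqrt{\\gamma}\\, T^{\\mu\\nu}_{\\rm EM} \\partial_i g_{\\mu\\nu},\n$$\nwhere $T^{\\mu\\nu}_{\\rm EM}$ is the electromagnetic stress-energy tensor and $g_{\\mu\\nu}$ is the four-metric; the derivatives $\\partial_i g_{\\mu\\nu}$ are the quantities we build from the face-interpolated metric gridfunctions.",
"_____no_output_____"
],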
[
"<a id='flux'></a>\n\n## Step 1.c: Calculate the Flux terms \\[Back to [top](#toc)\\]\n$$\\label{flux}$$\n\nNow, we will compute the flux terms of $\\partial_t A_i$ and $\\partial_t \\tilde{S}_i$. To do so, we will first need to interpolate the metric gridfunctions to cell faces and to reconstruct the primitives on the cell faces using the code detailed in [Tutorial-GiRaFFE_NRPy-Metric_Face_Values](Tutorial-GiRaFFE_NRPy-Metric_Face_Values.ipynb) and in [Tutorial-GiRaFFE_NRPy-PPM](Tutorial-GiRaFFE_NRPy-PPM.ipynb).\n\nThe functions to write the C codes for these and add them to the appropriate dictionaries are as follows:\n```python\nimport GiRaFFE_NRPy.GiRaFFE_NRPy_Metric_Face_Values as FCVAL\nFCVAL.add_to_Cfunction_dict__GiRaFFE_NRPy_FCVAL()\nimport GiRaFFE_NRPy.GiRaFFE_NRPy_PPM as PPM\nPPM.add_to_Cfunction_dict__GiRaFFE_NRPy_PPM()\n```",
"_____no_output_____"
],
[
"Here, we will write the function to compute the electric field contribution to the induction equation RHS. This is coded with documentation in [Tutorial-GiRaFFE_NRPy-Afield_flux_handwritten](Tutorial-GiRaFFE_NRPy-Afield_flux_handwritten.ipynb). The reconstructed values in the $i^{\\rm th}$ direction will contribute to the $j^{\\rm th}$ and $k^{\\rm th}$ components of the electric field. That is, in Cartesian coordinates, the $x$ component of the electric field will be the average of the values computed on the cell faces in the $\\pm y$- and $\\pm z$-directions, and so forth for the other components. However, all of these can be written as a single function as long as we appropriately pass cyclic permutations of the inputs.\n\nThis can be done with the following code:\n```python\nimport GiRaFFE_NRPy.GiRaFFE_NRPy_Afield_flux_handwritten as Af\nAf.add_to_Cfunction_dict__GiRaFFE_NRPy_Afield_flux(gammaDD, betaU, alpha)\n```",
"_____no_output_____"
],
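[
"To make the cyclic permutations concrete, the short (purely illustrative) Python snippet below mirrors the index arithmetic used when the main driver below calls this function: for a given `flux_dirn` and `count`, the updated component of $A_i$ is `(flux_dirn+1+count)%3`, the two reconstructed input components are `flux_dirn%3` and `(flux_dirn-count+2)%3`, and the overall sign is `2*count-1`. This snippet is not part of the generated C code; it simply tabulates the permutations:\n\n```python\n# Illustrative only: tabulate the input/output index permutations used for the E-field calls.\ndef e_field_permutation(flux_dirn, count):\n    updated_A = (flux_dirn+1+count)%3   # which A_i RHS receives this contribution\n    input_1   = flux_dirn%3             # first reconstructed v/B component passed in\n    input_2   = (flux_dirn-count+2)%3   # second reconstructed v/B component passed in\n    SIGN      = 2*count-1               # -1 for the reversed permutation, +1 for the forward one\n    return updated_A, input_1, input_2, SIGN\n\nfor flux_dirn in range(3):\n    for count in range(2):\n        print(flux_dirn, count, e_field_permutation(flux_dirn, count))\n```",
"_____no_output_____"
],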
[
"We must do something similar here, albeit a bit simpler. For instance, the $x$ component of $\\partial_t \\tilde{S}_i$ will be a finite difference of the flux through the faces in the $\\pm x$ direction; for further detail, see [Tutorial-GiRaFFE_NRPy-Stilde-flux](Tutorial-GiRaFFE_NRPy-Stilde-flux.ipynb). The C code can be generated and added to the appropriate dictionaries as follows:\n```python\nimport GiRaFFE_NRPy.Stilde_flux as Sf\nSf.add_to_Cfunction_dict__Stilde_flux(inputs_provided = True, alpha_face=md.alpha_face, gamma_faceDD=md.gamma_faceDD,\n                                      beta_faceU=md.beta_faceU, Valenciav_rU=md.Valenciav_rU, B_rU=md.B_rU,\n                                      Valenciav_lU=md.Valenciav_lU, B_lU=md.B_lU, sqrt4pi=md.sqrt4pi)\n```",
"_____no_output_____"
],
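[
"For reference, both of these flux computations are built on the standard two-wave (HLL) approximate Riemann solver, which combines the fluxes $F_{\\rm R}$, $F_{\\rm L}$ and conserved quantities $U_{\\rm R}$, $U_{\\rm L}$ computed from the reconstructed right and left states at each cell face as\n$$\nF^{\\rm HLL} = \\frac{c_{\\min} F_{\\rm R} + c_{\\max} F_{\\rm L} - c_{\\min} c_{\\max} \\left( U_{\\rm R} - U_{\\rm L} \\right)}{c_{\\min} + c_{\\max}},\n$$\nwhere $c_{\\max}$ and $c_{\\min}$ are nonnegative bounds on the right- and left-going characteristic speeds at the face.",
"_____no_output_____"
],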
[
"<a id='poststep'></a>\n\n# Step 2: Recover the primitive variables and apply boundary conditions \\[Back to [top](#toc)\\]\n$$\\label{poststep}$$\n\nWith the RHSs computed, we can now recover the primitive variables, which are the Valencia three-velocity $\\bar{v}^i$ and the magnetic field $B^i$. We can also apply boundary conditions to the vector potential and velocity. By doing this at each RK substep, we can help ensure the accuracy of the following substeps. \n\n<a id='potential_bc'></a>\n\n## Step 2.a: Apply boundary conditions to $A_i$ and $\\sqrt{\\gamma} \\Phi$ \\[Back to [top](#toc)\\]\n$$\\label{potential_bc}$$\n\nFirst, we will apply boundary conditions to the vector potential, $A_i$, and the scalar potential $\\sqrt{\\gamma} \\Phi$. The file we generate here contains both functions we need for BCs, as documented in [Tutorial-GiRaFFE_NRPy-BCs](Tutorial-GiRaFFE_NRPy-BCs.ipynb). This is done as follows:\n```python\nimport GiRaFFE_NRPy.GiRaFFE_NRPy_BCs as BC\nBC.add_to_Cfunction_dict__GiRaFFE_NRPy_BCs()\n```",
"_____no_output_____"
]
],
[
[
"subdir = \"boundary_conditions\"\ncmd.mkdir(os.path.join(out_dir,subdir))\nimport GiRaFFE_NRPy.GiRaFFE_NRPy_BCs as BC\nBC.GiRaFFE_NRPy_BCs(os.path.join(out_dir,subdir))",
"_____no_output_____"
]
],
[
[
"<a id='a2b'></a>\n\n## Step 2.b: Compute $B^i$ from $A_i$ \\[Back to [top](#toc)\\]\n$$\\label{a2b}$$\n\nNow, we will calculate the magnetic field as the curl of the vector potential at all points in our domain; this requires care to be taken in the ghost zones, which is detailed in [Tutorial-GiRaFFE_NRPy-A2B](Tutorial-GiRaFFE_NRPy-A2B.ipynb). This can be done with the following function:\n```python\nimport GiRaFFE_NRPy.GiRaFFE_NRPy_A2B as A2B\nA2B.add_to_Cfunction_dict__GiRaFFE_NRPy_A2B(gammaDD,AD,BU)\n```",
"_____no_output_____"
],
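[
"Recall that the magnetic field is related to the vector potential by the curl\n$$\nB^i = \\frac{[ijk]}{\\sqrt{\\gamma}} \\partial_j A_k,\n$$\nwhere $[ijk]$ is the totally antisymmetric Levi-Civita symbol and $\\gamma$ is the determinant of the three-metric; it is the finite-difference derivatives $\\partial_j A_k$ that require the careful ghost zone treatment mentioned above.",
"_____no_output_____"
],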
[
"<a id='c2p'></a>\n\n## Step 2.c: Run the Conservative-to-Primitive solver \\[Back to [top](#toc)\\]\n$$\\label{c2p}$$\n\nWith these functions, we apply fixes to the Poynting flux, and use that to update the three-velocity. Then, we apply our current sheet prescription to the velocity, and recompute the Poynting flux to agree with the now-fixed velocity. More detail can be found in [Tutorial-GiRaFFE_NRPy-C2P_P2C](Tutorial-GiRaFFE_NRPy-C2P_P2C.ipynb).",
"_____no_output_____"
]
],
[
[
"import GiRaFFE_NRPy.GiRaFFE_NRPy_C2P_P2C as C2P_P2C\ndef add_to_Cfunction_dict__cons_to_prims(StildeD,BU,gammaDD,betaU,alpha, includes=None):\n    C2P_P2C.GiRaFFE_NRPy_C2P(StildeD,BU,gammaDD,betaU,alpha)\n\n    values_to_print = [\n        lhrh(lhs=gri.gfaccess(\"in_gfs\",\"StildeD0\"),rhs=C2P_P2C.outStildeD[0]),\n        lhrh(lhs=gri.gfaccess(\"in_gfs\",\"StildeD1\"),rhs=C2P_P2C.outStildeD[1]),\n        lhrh(lhs=gri.gfaccess(\"in_gfs\",\"StildeD2\"),rhs=C2P_P2C.outStildeD[2]),\n        lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"ValenciavU0\"),rhs=C2P_P2C.ValenciavU[0]),\n        lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"ValenciavU1\"),rhs=C2P_P2C.ValenciavU[1]),\n        lhrh(lhs=gri.gfaccess(\"auxevol_gfs\",\"ValenciavU2\"),rhs=C2P_P2C.ValenciavU[2])\n    ]\n\n    desc = \"Apply fixes to \\\\tilde{S}_i and recompute the velocity to match with current sheet prescription.\"\n    name = \"GiRaFFE_NRPy_cons_to_prims\"\n    params = \"const paramstruct *params,REAL *xx[3],REAL *auxevol_gfs,REAL *in_gfs\"\n    body = fin.FD_outputC(\"returnstring\",values_to_print,params=outCparams)\n    loopopts = \"AllPoints,Read_xxs\"\n    add_to_Cfunction_dict(\n        includes=includes,\n        desc=desc,\n        name=name, params=params,\n        body=body, loopopts=loopopts)\n",
"_____no_output_____"
],
[
"# TINYDOUBLE = par.Cparameters(\"REAL\",thismodule,\"TINYDOUBLE\",1e-100)\n\ndef add_to_Cfunction_dict__prims_to_cons(gammaDD,betaU,alpha, ValenciavU,BU, sqrt4pi, includes=None):\n    C2P_P2C.GiRaFFE_NRPy_P2C(gammaDD,betaU,alpha, ValenciavU,BU, sqrt4pi)\n\n    values_to_print = [\n        lhrh(lhs=gri.gfaccess(\"in_gfs\",\"StildeD0\"),rhs=C2P_P2C.StildeD[0]),\n        lhrh(lhs=gri.gfaccess(\"in_gfs\",\"StildeD1\"),rhs=C2P_P2C.StildeD[1]),\n        lhrh(lhs=gri.gfaccess(\"in_gfs\",\"StildeD2\"),rhs=C2P_P2C.StildeD[2]),\n    ]\n\n    desc = \"Recompute StildeD after current sheet fix to Valencia 3-velocity to ensure consistency between conservative & primitive variables.\"\n    name = \"GiRaFFE_NRPy_prims_to_cons\"\n    params = \"const paramstruct *params,REAL *auxevol_gfs,REAL *in_gfs\"\n    body = fin.FD_outputC(\"returnstring\",values_to_print,params=outCparams)\n    loopopts = \"AllPoints\"\n    rel_path_to_Cparams = os.path.join(\"../\")\n    add_to_Cfunction_dict(\n        includes=includes,\n        desc=desc,\n        name=name, params=params,\n        body=body, loopopts=loopopts,\n        rel_path_to_Cparams=rel_path_to_Cparams)\n",
"_____no_output_____"
]
],
[
[
"<a id='velocity_bc'></a>\n\n## Step 2.d: Apply outflow boundary conditions to $\\bar{v}^i$ \\[Back to [top](#toc)\\]\n$$\\label{velocity_bc}$$\n\nNow, we can apply outflow boundary conditions to the Valencia three-velocity. This specific type of boundary condition helps avoid numerical error \"flowing\" into our grid. \n\nThis function has already been generated [above](#potential_bc).",
"_____no_output_____"
],
[
"<a id='write_out'></a>\n\n# Step 3: Write out the C code function \\[Back to [top](#toc)\\]\n$$\\label{write_out}$$\n\nNow, we have generated all the functions we will need for the `GiRaFFE` evolution. So, we will now assemble our evolution driver. This file will first `#include` all of the files we just generated for easy access. Then, we will write a function that calls these functions in the correct order, iterating over the flux directions as necessary. ",
"_____no_output_____"
]
],
[
[
"%%writefile $out_dir/GiRaFFE_NRPy_Main_Driver.h\n// Structure to track ghostzones for PPM:\ntypedef struct __gf_and_gz_struct__ {\n REAL *gf;\n int gz_lo[4],gz_hi[4];\n} gf_and_gz_struct;\n// Some additional constants needed for PPM:\nconst int VX=0,VY=1,VZ=2,BX=3,BY=4,BZ=5;\nconst int NUM_RECONSTRUCT_GFS = 6;\n\n// Include ALL functions needed for evolution\n#include \"RHSs/calculate_AD_gauge_term_psi6Phi_flux_term_for_RHSs.h\"\n#include \"RHSs/calculate_AD_gauge_psi6Phi_RHSs.h\"\n#include \"PPM/reconstruct_set_of_prims_PPM_GRFFE_NRPy.c\"\n#include \"FCVAL/interpolate_metric_gfs_to_cell_faces.h\"\n#include \"RHSs/calculate_StildeD0_source_term.h\"\n#include \"RHSs/calculate_StildeD1_source_term.h\"\n#include \"RHSs/calculate_StildeD2_source_term.h\"\n#include \"../calculate_E_field_flat_all_in_one.h\"\n#include \"RHSs/calculate_Stilde_flux_D0.h\"\n#include \"RHSs/calculate_Stilde_flux_D1.h\"\n#include \"RHSs/calculate_Stilde_flux_D2.h\"\n#include \"RHSs/calculate_Stilde_rhsD.h\"\n#include \"boundary_conditions/GiRaFFE_boundary_conditions.h\"\n#include \"A2B/driver_AtoB.h\"\n#include \"C2P/GiRaFFE_NRPy_cons_to_prims.h\"\n#include \"C2P/GiRaFFE_NRPy_prims_to_cons.h\"\n\nvoid GiRaFFE_NRPy_RHSs(const paramstruct *restrict params,REAL *restrict auxevol_gfs,const REAL *restrict in_gfs,REAL *restrict rhs_gfs) {\n#include \"set_Cparameters.h\"\n // First thing's first: initialize the RHSs to zero!\n#pragma omp parallel for\n for(int ii=0;ii<Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2*NUM_EVOL_GFS;ii++) {\n rhs_gfs[ii] = 0.0;\n }\n // Next calculate the easier source terms that don't require flux directions\n // This will also reset the RHSs for each gf at each new timestep.\n calculate_AD_gauge_term_psi6Phi_flux_term_for_RHSs(params,in_gfs,auxevol_gfs);\n calculate_AD_gauge_psi6Phi_RHSs(params,in_gfs,auxevol_gfs,rhs_gfs);\n\n // Now, we set up a bunch of structs of pointers to properly guide the PPM algorithm.\n // They also count the number of ghostzones 
available.\n gf_and_gz_struct in_prims[NUM_RECONSTRUCT_GFS], out_prims_r[NUM_RECONSTRUCT_GFS], out_prims_l[NUM_RECONSTRUCT_GFS];\n int which_prims_to_reconstruct[NUM_RECONSTRUCT_GFS],num_prims_to_reconstruct;\n const int Nxxp2NG012 = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;\n\n REAL *temporary = auxevol_gfs + Nxxp2NG012*AEVOLPARENGF; //We're not using this anymore\n // This sets pointers to the portion of auxevol_gfs containing the relevant gridfunction.\n int ww=0;\n in_prims[ww].gf = auxevol_gfs + Nxxp2NG012*VALENCIAVU0GF;\n out_prims_r[ww].gf = auxevol_gfs + Nxxp2NG012*VALENCIAV_RU0GF;\n out_prims_l[ww].gf = auxevol_gfs + Nxxp2NG012*VALENCIAV_LU0GF;\n ww++;\n in_prims[ww].gf = auxevol_gfs + Nxxp2NG012*VALENCIAVU1GF;\n out_prims_r[ww].gf = auxevol_gfs + Nxxp2NG012*VALENCIAV_RU1GF;\n out_prims_l[ww].gf = auxevol_gfs + Nxxp2NG012*VALENCIAV_LU1GF;\n ww++;\n in_prims[ww].gf = auxevol_gfs + Nxxp2NG012*VALENCIAVU2GF;\n out_prims_r[ww].gf = auxevol_gfs + Nxxp2NG012*VALENCIAV_RU2GF;\n out_prims_l[ww].gf = auxevol_gfs + Nxxp2NG012*VALENCIAV_LU2GF;\n ww++;\n in_prims[ww].gf = auxevol_gfs + Nxxp2NG012*BU0GF;\n out_prims_r[ww].gf = auxevol_gfs + Nxxp2NG012*B_RU0GF;\n out_prims_l[ww].gf = auxevol_gfs + Nxxp2NG012*B_LU0GF;\n ww++;\n in_prims[ww].gf = auxevol_gfs + Nxxp2NG012*BU1GF;\n out_prims_r[ww].gf = auxevol_gfs + Nxxp2NG012*B_RU1GF;\n out_prims_l[ww].gf = auxevol_gfs + Nxxp2NG012*B_LU1GF;\n ww++;\n in_prims[ww].gf = auxevol_gfs + Nxxp2NG012*BU2GF;\n out_prims_r[ww].gf = auxevol_gfs + Nxxp2NG012*B_RU2GF;\n out_prims_l[ww].gf = auxevol_gfs + Nxxp2NG012*B_LU2GF;\n ww++;\n\n // Prims are defined AT ALL GRIDPOINTS, so we set the # of ghostzones to zero:\n for(int i=0;i<NUM_RECONSTRUCT_GFS;i++) for(int j=1;j<=3;j++) { in_prims[i].gz_lo[j]=0; in_prims[i].gz_hi[j]=0; }\n // Left/right variables are not yet defined, yet we set the # of gz's to zero by default:\n for(int i=0;i<NUM_RECONSTRUCT_GFS;i++) for(int j=1;j<=3;j++) { out_prims_r[i].gz_lo[j]=0; 
out_prims_r[i].gz_hi[j]=0; }\n for(int i=0;i<NUM_RECONSTRUCT_GFS;i++) for(int j=1;j<=3;j++) { out_prims_l[i].gz_lo[j]=0; out_prims_l[i].gz_hi[j]=0; }\n\n ww=0;\n which_prims_to_reconstruct[ww]=VX; ww++;\n which_prims_to_reconstruct[ww]=VY; ww++;\n which_prims_to_reconstruct[ww]=VZ; ww++;\n which_prims_to_reconstruct[ww]=BX; ww++;\n which_prims_to_reconstruct[ww]=BY; ww++;\n which_prims_to_reconstruct[ww]=BZ; ww++;\n num_prims_to_reconstruct=ww;\n\n // In each direction, perform the PPM reconstruction procedure.\n // Then, add the fluxes to the RHS as appropriate.\n for(int flux_dirn=0;flux_dirn<3;flux_dirn++) {\n // In each direction, interpolate the metric gfs (gamma,beta,alpha) to cell faces.\n interpolate_metric_gfs_to_cell_faces(params,auxevol_gfs,flux_dirn+1);\n // Then, reconstruct the primitive variables on the cell faces.\n // This function is housed in the file: \"reconstruct_set_of_prims_PPM_GRFFE_NRPy.c\"\n reconstruct_set_of_prims_PPM_GRFFE_NRPy(params, auxevol_gfs, flux_dirn+1, num_prims_to_reconstruct,\n which_prims_to_reconstruct, in_prims, out_prims_r, out_prims_l, temporary);\n // For example, if flux_dirn==0, then at gamma_faceDD00(i,j,k) represents gamma_{xx}\n // at (i-1/2,j,k), Valenciav_lU0(i,j,k) is the x-component of the velocity at (i-1/2-epsilon,j,k),\n // and Valenciav_rU0(i,j,k) is the x-component of the velocity at (i-1/2+epsilon,j,k).\n\n if(flux_dirn==0) {\n // Next, we calculate the source term for StildeD. 
Again, this also resets the rhs_gfs array at\n // each new timestep.\n calculate_StildeD0_source_term(params,auxevol_gfs,rhs_gfs);\n // Now, compute the electric field on each face of a cell and add it to the RHSs as appropriate\n //calculate_E_field_D0_right(params,auxevol_gfs,rhs_gfs);\n //calculate_E_field_D0_left(params,auxevol_gfs,rhs_gfs);\n // Finally, we calculate the flux of StildeD and add the appropriate finite-differences\n // to the RHSs.\n calculate_Stilde_flux_D0(params,auxevol_gfs,rhs_gfs);\n }\n else if(flux_dirn==1) {\n calculate_StildeD1_source_term(params,auxevol_gfs,rhs_gfs);\n //calculate_E_field_D1_right(params,auxevol_gfs,rhs_gfs);\n //calculate_E_field_D1_left(params,auxevol_gfs,rhs_gfs);\n calculate_Stilde_flux_D1(params,auxevol_gfs,rhs_gfs);\n }\n else {\n calculate_StildeD2_source_term(params,auxevol_gfs,rhs_gfs);\n //calculate_E_field_D2_right(params,auxevol_gfs,rhs_gfs);\n //calculate_E_field_D2_left(params,auxevol_gfs,rhs_gfs);\n calculate_Stilde_flux_D2(params,auxevol_gfs,rhs_gfs);\n }\n calculate_Stilde_rhsD(flux_dirn+1,params,auxevol_gfs,rhs_gfs);\n for(int count=0;count<=1;count++) {\n // This function is written to be general, using notation that matches the forward permutation added to AD2,\n // i.e., [F_HLL^x(B^y)]_z corresponding to flux_dirn=0, count=1.\n // The SIGN parameter is necessary because\n // -E_z(x_i,y_j,z_k) = 0.25 ( [F_HLL^x(B^y)]_z(i+1/2,j,k)+[F_HLL^x(B^y)]_z(i-1/2,j,k)\n // -[F_HLL^y(B^x)]_z(i,j+1/2,k)-[F_HLL^y(B^x)]_z(i,j-1/2,k) )\n // Note the negative signs on the reversed permutation terms!\n\n // By cyclically permuting with flux_dirn, we\n // get contributions to the other components, and by incrementing count, we get the backward permutations:\n // Let's suppose flux_dirn = 0. 
Then we will need to update Ay (count=0) and Az (count=1):\n // flux_dirn=count=0 -> AD0GF+(flux_dirn+1+count)%3 = AD0GF + (0+1+0)%3=AD1GF <- Updating Ay!\n // (flux_dirn)%3 = (0)%3 = 0 Vx\n // (flux_dirn-count+2)%3 = (0-0+2)%3 = 2 Vz . Inputs Vx, Vz -> SIGN = -1 ; 2.0*((REAL)count)-1.0=-1 check!\n // flux_dirn=0,count=1 -> AD0GF+(flux_dirn+1+count)%3 = AD0GF + (0+1+1)%3=AD2GF <- Updating Az!\n // (flux_dirn)%3 = (0)%3 = 0 Vx\n // (flux_dirn-count+2)%3 = (0-1+2)%3 = 1 Vy . Inputs Vx, Vy -> SIGN = +1 ; 2.0*((REAL)count)-1.0=2-1=+1 check!\n // Let's suppose flux_dirn = 1. Then we will need to update Az (count=0) and Ax (count=1):\n // flux_dirn=1,count=0 -> AD0GF+(flux_dirn+1+count)%3 = AD0GF + (1+1+0)%3=AD2GF <- Updating Az!\n // (flux_dirn)%3 = (1)%3 = 1 Vy\n // (flux_dirn-count+2)%3 = (1-0+2)%3 = 0 Vx . Inputs Vy, Vx -> SIGN = -1 ; 2.0*((REAL)count)-1.0=-1 check!\n // flux_dirn=count=1 -> AD0GF+(flux_dirn+1+count)%3 = AD0GF + (1+1+1)%3=AD0GF <- Updating Ax!\n // (flux_dirn)%3 = (1)%3 = 1 Vy\n // (flux_dirn-count+2)%3 = (1-1+2)%3 = 2 Vz . Inputs Vy, Vz -> SIGN = +1 ; 2.0*((REAL)count)-1.0=2-1=+1 check!\n // Let's suppose flux_dirn = 2. Then we will need to update Ax (count=0) and Ay (count=1):\n // flux_dirn=2,count=0 -> AD0GF+(flux_dirn+1+count)%3 = AD0GF + (2+1+0)%3=AD0GF <- Updating Ax!\n // (flux_dirn)%3 = (2)%3 = 2 Vz\n // (flux_dirn-count+2)%3 = (2-0+2)%3 = 1 Vy . Inputs Vz, Vy -> SIGN = -1 ; 2.0*((REAL)count)-1.0=-1 check!\n // flux_dirn=2,count=1 -> AD0GF+(flux_dirn+1+count)%3 = AD0GF + (2+1+1)%3=AD1GF <- Updating Ay!\n // (flux_dirn)%3 = (2)%3 = 2 Vz\n // (flux_dirn-count+2)%3 = (2-1+2)%3 = 0 Vx . 
Inputs Vz, Vx -> SIGN = +1 ; 2.0*((REAL)count)-1.0=2-1=+1 check!\n calculate_E_field_flat_all_in_one(params,\n &auxevol_gfs[IDX4ptS(VALENCIAV_RU0GF+(flux_dirn)%3, 0)],&auxevol_gfs[IDX4ptS(VALENCIAV_RU0GF+(flux_dirn-count+2)%3, 0)],\n &auxevol_gfs[IDX4ptS(VALENCIAV_LU0GF+(flux_dirn)%3, 0)],&auxevol_gfs[IDX4ptS(VALENCIAV_LU0GF+(flux_dirn-count+2)%3, 0)],\n &auxevol_gfs[IDX4ptS(B_RU0GF +(flux_dirn)%3, 0)],&auxevol_gfs[IDX4ptS(B_RU0GF +(flux_dirn-count+2)%3, 0)],\n &auxevol_gfs[IDX4ptS(B_LU0GF +(flux_dirn)%3, 0)],&auxevol_gfs[IDX4ptS(B_LU0GF +(flux_dirn-count+2)%3, 0)],\n &auxevol_gfs[IDX4ptS(B_RU0GF +(flux_dirn-count+2)%3, 0)],\n &auxevol_gfs[IDX4ptS(B_LU0GF +(flux_dirn-count+2)%3, 0)],\n &rhs_gfs[IDX4ptS(AD0GF+(flux_dirn+1+count)%3,0)], 2.0*((REAL)count)-1.0, flux_dirn);\n }\n }\n}\n\nvoid GiRaFFE_NRPy_post_step(const paramstruct *restrict params,REAL *xx[3],REAL *restrict auxevol_gfs,REAL *restrict evol_gfs,const int n) {\n // First, apply BCs to AD and psi6Phi. Then calculate BU from AD\n apply_bcs_potential(params,evol_gfs);\n driver_A_to_B(params,evol_gfs,auxevol_gfs);\n //override_BU_with_old_GiRaFFE(params,auxevol_gfs,n);\n // Apply fixes to StildeD, then recompute the velocity at the new timestep.\n // Apply the current sheet prescription to the velocities\n GiRaFFE_NRPy_cons_to_prims(params,xx,auxevol_gfs,evol_gfs);\n // Then, recompute StildeD to be consistent with the new velocities\n //GiRaFFE_NRPy_prims_to_cons(params,auxevol_gfs,evol_gfs);\n // Finally, apply outflow boundary conditions to the velocities.\n apply_bcs_velocity(params,auxevol_gfs);\n}",
"_____no_output_____"
]
],
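The modular-arithmetic bookkeeping spelled out in the comment block above — which $A_i$ component each `(flux_dirn, count)` pair updates, which velocity components feed in, and the `SIGN` factor — is easy to sanity-check outside the C driver. The following is a standalone Python sketch of just the index formulas; it is not part of the generated code.

```python
# Tabulate the index permutations documented above. For each (flux_dirn, count):
#   updated A component  : (flux_dirn + 1 + count) % 3
#   first velocity index : flux_dirn % 3
#   second velocity index: (flux_dirn - count + 2) % 3
#   SIGN                 : 2.0 * count - 1.0
for flux_dirn in range(3):
    for count in range(2):
        updated_A = (flux_dirn + 1 + count) % 3
        v_first = flux_dirn % 3
        v_second = (flux_dirn - count + 2) % 3
        sign = 2.0 * count - 1.0
        print(f"flux_dirn={flux_dirn} count={count} -> AD{updated_A}, "
              f"inputs V{v_first},V{v_second}, SIGN={sign:+.0f}")
```

Running this reproduces the six cases enumerated in the comment (e.g. `flux_dirn=0, count=1` updates `AD2` from Vx, Vy with `SIGN=+1`).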
[
[
"<a id='code_validation'></a>\n\n# Step 4: Self-Validation against `GiRaFFE_NRPy_Main_Driver.py` \\[Back to [top](#toc)\\]\n$$\\label{code_validation}$$\n\nTo validate the code in this tutorial, we check for agreement between the files\n\n1. that were generated in this tutorial and\n1. those that are generated in the module [`GiRaFFE_NRPy_Main_Driver.py`](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Main_Driver.py)\n",
"_____no_output_____"
]
],
[
[
"gri.glb_gridfcs_list = []\n# Define the directory that we wish to validate against:\nvaldir = os.path.join(\"GiRaFFE_validation_Ccodes\")\ncmd.mkdir(valdir)\n\nimport GiRaFFE_NRPy.GiRaFFE_NRPy_Main_Driver as md\nmd.GiRaFFE_NRPy_Main_Driver_generate_all(valdir)\n\n",
"_____no_output_____"
]
],
[
[
"With both sets of codes generated, we can now compare them against each other.",
"_____no_output_____"
]
],
[
[
"import difflib\nimport sys\n\nprint(\"Printing difference between original C code and this code...\")\n# Open the files to compare\nfiles = [\"GiRaFFE_NRPy_Main_Driver.h\",\n \"RHSs/calculate_AD_gauge_term_psi6Phi_flux_term_for_RHSs.h\",\n \"RHSs/calculate_AD_gauge_psi6Phi_RHSs.h\",\n \"PPM/reconstruct_set_of_prims_PPM_GRFFE_NRPy.c\",\n \"PPM/loop_defines_reconstruction_NRPy.h\",\n \"FCVAL/interpolate_metric_gfs_to_cell_faces.h\",\n \"RHSs/calculate_StildeD0_source_term.h\",\n \"RHSs/calculate_StildeD1_source_term.h\",\n \"RHSs/calculate_StildeD2_source_term.h\",\n \"RHSs/calculate_E_field_flat_all_in_one.h\",\n \"RHSs/calculate_Stilde_flux_D0.h\",\n \"RHSs/calculate_Stilde_flux_D1.h\",\n \"RHSs/calculate_Stilde_flux_D2.h\",\n \"boundary_conditions/GiRaFFE_boundary_conditions.h\",\n \"A2B/driver_AtoB.h\",\n \"C2P/GiRaFFE_NRPy_cons_to_prims.h\",\n \"C2P/GiRaFFE_NRPy_prims_to_cons.h\"]\n\nfor file in files:\n print(\"Checking file \" + file)\n with open(os.path.join(valdir,file)) as file1, open(os.path.join(out_dir,file)) as file2:\n # Read the lines of each file\n file1_lines = file1.readlines()\n file2_lines = file2.readlines()\n num_diffs = 0\n for line in difflib.unified_diff(file1_lines, file2_lines, fromfile=os.path.join(valdir,file), tofile=os.path.join(out_dir,file)):\n sys.stdout.writelines(line)\n num_diffs = num_diffs + 1\n if num_diffs == 0:\n print(\"No difference. TEST PASSED!\")\n else:\n print(\"ERROR: Disagreement found with .py file. See differences above.\")\n sys.exit(1)",
"_____no_output_____"
]
],
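The validation loop above hinges on `difflib.unified_diff` yielding no lines at all when two files agree. A minimal self-contained sketch with made-up file contents (the filenames are placeholders, not real generated headers):

```python
import difflib

# Hypothetical "generated" files as lists of lines.
file1_lines = ["int main() {\n", "  return 0;\n", "}\n"]
file2_lines = ["int main() {\n", "  return 1;\n", "}\n"]

# As in the cell above, count every line unified_diff yields; zero lines
# means perfect agreement (the "TEST PASSED" branch).
num_diffs = sum(1 for _ in difflib.unified_diff(
    file1_lines, file2_lines, fromfile="valdir/file.h", tofile="out_dir/file.h"))
print(num_diffs)  # non-zero here, since the two files differ on one line

# Identical inputs produce no diff lines at all.
assert sum(1 for _ in difflib.unified_diff(file1_lines, file1_lines)) == 0
```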
[
[
"<a id='latex_pdf_output'></a>\n\n# Step 5: Output this notebook to $\\LaTeX$-formatted PDF file \\[Back to [top](#toc)\\]\n$$\\label{latex_pdf_output}$$\n\nThe following code cell converts this Jupyter notebook into a proper, clickable $\\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename\n[Tutorial-GiRaFFE_NRPy_Main_Driver.pdf](Tutorial-GiRaFFE_NRPy_Main_Driver.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)",
"_____no_output_____"
]
],
[
[
"import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface\ncmd.output_Jupyter_notebook_to_LaTeXed_PDF(\"Tutorial-GiRaFFE_NRPy_Main_Driver\",location_of_template_file=os.path.join(\"..\"))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e75c69d15a1e7280134e5bbfd79c8907e5bf83d4 | 857,692 | ipynb | Jupyter Notebook | src/TRAMP/candidate_TFBSs.ipynb | Switham1/PromoterArchitecture | 0a9021b869ac66cdd622be18cd029950314d111e | [
"MIT"
] | null | null | null | src/TRAMP/candidate_TFBSs.ipynb | Switham1/PromoterArchitecture | 0a9021b869ac66cdd622be18cd029950314d111e | [
"MIT"
] | null | null | null | src/TRAMP/candidate_TFBSs.ipynb | Switham1/PromoterArchitecture | 0a9021b869ac66cdd622be18cd029950314d111e | [
"MIT"
] | null | null | null | 604.008451 | 791,102 | 0.929103 | [
[
[
"#consider using svist4get api for adding open chromatin peak tracks: https://bitbucket.org/artegorov/svist4get/src/eeb5151f49c31fa887dbfc168320c63d66a17334/docs/API.md\n#actually use https://github.com/deeptools/pyGenomeTracks as can install with conda\n#then import each track as vector image like here: https://stackoverflow.com/questions/31452451/importing-an-svg-file-into-a-matplotlib-figure using svgutils\n#integrate it with matplotlib like mentioned here: https://github.com/deeptools/pyGenomeTracks/issues/20\n#the bigwig files I used were from potter et al 2018 NaOH treatment https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE116287 called GSE116287_Roots_NaOH_Merged.bw and GSE116287_Shoots_NaOH_Merged.bw\n#had to install \"conda install -c conda-forge tqdm\" too\n\n##NOTE - if using genbank files from Benchling, first add custom fields on the metadata page with the start and end index (eg. start_index 6015558)\n##also ensure the name of the genebank file (and sequence id) include the AGI locus name enclosed in parentheses(eg. 
\"(AT4G24020)\")\n##This is so the chromosome number can be extracted\n##This is to ensure the start and end chromosome positions open chromatin data\n#make sure when exporting the .gb file from Benchling that you do not tick \"Convert Non-Standard Annotation Types\"\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as mpatches\nfrom matplotlib.lines import Line2D\nfrom matplotlib import rcParams\nfrom matplotlib.gridspec import GridSpec\nimport matplotlib.text as mtext\n\n#latex style rendering so can make parts of text bold\nfrom matplotlib import rc\n#import matplotlib as mpl\nfrom Bio import SeqIO\nfrom Bio.SeqFeature import FeatureLocation\nimport numpy as np\n#allows custom colours\nfrom dna_features_viewer import BiopythonTranslator\nimport pandas as pd\nfrom itertools import cycle\nimport re\nimport pandas as pd\n#allow converting from RGB to CMYK\n#import cv2\nfrom PIL import Image\n# from PIL import Image\n# from io import BytesIO\nimport pygenometracks.tracks as pygtk\n\n###note - use conda env gene_feature_plot\n",
"_____no_output_____"
],
[
"#create a class specifying feature colours\n#make feature_list and colour_dict so that each feature name and colour is only added once to legend if more than one share the same name\n\nfeature_list = []\ncolour_dict = {}\nclass MyCustomTranslator(BiopythonTranslator):\n \"\"\"Custom translator iplementing the following theme:\n -Colour promoter in pale green\n -colour exons in dark grey\n -colour introns in light grey\n -colour TFs from colour palette\"\"\"\n\n #import colour blind palette\n #colour palette from mkweb.bcgsc.ca/colorblind/palettes.mhtml\n # df = pd.read_csv(\"colourblind_palette.csv\", header=0)\n # #convert to floats\n # floats=df.divide(255)\n # #make sets of each row to get the red, green blue colours\n # CB_colour_palette = floats.apply(tuple, axis = 1)\n # #make into df\n # df = CB_colour_palette.to_frame()\n # #save file\n # df.to_csv('../../data/TRAMP/colour_list')\n \n # #turn into a list of colour sets\n # list_colours = list(CB_colour_palette)\n # colour_list = list_colours\n #colour_list = ['#88CCEE', '#44AA99', '#117733', '#332288', '#DDCC77', '#999933','#CC6677', '#882255', '#AA4499', '#DDDDDD']\n colour_list = ['#2271B2',\n '#3DB7E9',\n '#F748A5', \n '#d55e00',\n '#e69f00',\n #'#f0e442',\n '#228833',\n '#000000',]\n #make colour iterator\n colour_iterator=cycle(colour_list)\n #change colour cycle\n #mpl.rcParams['axes.prop_cycle'] = mpl.cycler(color=list_colours)\n #count = -1\n #set colour count\n\n\n def compute_feature_color(self, feature):\n \"\"\"set colour of each feature\"\"\" \n\n if feature.type == \"promoter\":\n return \"#F5F5F5\"\n elif feature.type == \"gene_upstream\":\n return \"#DDCC77\"#2f4f4f\"#dark slate grey\n elif feature.type == \"mRNA_upstream\":\n return \"#DDCC77\"#dark slate grey\n elif feature.type == \"exon_upstream\":\n return \"#DDCC77\"#dark slate grey\n elif feature.type == \"exon\":\n return \"#635147\"#umber\n elif feature.type == \"gene\":\n return \"#F5F5F5\"\n #return (169,169,169)\n elif 
feature.type == \"intron\":\n return \"lightgrey\" \n #return (211,211,211)\n elif feature.type == \"5'UTR\":\n return \"c4aead\"#silver pink\n elif feature.type == \"start_codon\":\n return \"black\"\n # elif feature.type == \"TRAMP_probe_tested\":\n # if feature.qualifiers.get(\"label\")[0] == \"NLP7#7\":\n # col = \"#e69f00\"\n # elif feature.qualifiers.get(\"label\")[0] == \"NLP7#9\":\n # col = \"#d55e00\"\n # elif feature.qualifiers.get(\"label\")[0] == \"NLP7#10\":\n # col = \"#2271B2\"\n # elif feature.qualifiers.get(\"label\")[0] == \"ANAC032#4\":\n # col = \"#2271B2\"\n # elif feature.qualifiers.get(\"label\")[0] == \"ANAC032#8\":\n # col = \"#F748A5\"\n # elif feature.qualifiers.get(\"label\")[0] == \"ANAC032#3\":\n # col = \"#000000\"\n\n # else: \n # pass\n \n # return col\n \n elif feature.type == \"TFBS\":\n if feature.qualifiers.get(\"label\")[0] in colour_dict.keys():\n col = colour_dict[feature.qualifiers.get(\"label\")[0]]\n else:\n col = next(self.colour_iterator)\n colour_dict[feature.qualifiers.get(\"label\")[0]] = col\n \n return col\n else:\n return \"white\"\n\n def compute_feature_box_linewidth(self, feature):\n \"\"\"change shape of features\"\"\"\n if feature.type ==\"TRAMP_probe_tested\":\n return 1\n else:\n return 0\n\n def compute_feature_linewidth(self, feature):\n \"\"\"change linewidth of feature's arrow/rectangle\"\"\"\n #remove border from certain features\n if feature.type == \"gene_upstream\":\n return 0\n elif feature.type == \"mRNA_upstream\":\n return 0\n elif feature.type == \"exon_upstream\":\n return 0\n elif feature.type == \"misc_RNA_upstream\":\n return 0 \n elif feature.type == \"exon\":\n return 0\n elif feature.type == \"intron\":\n return 0\n elif feature.type == \"5'UTR\":\n return 0\n elif feature.type == \"TFBS\":\n return 0\n elif feature.type ==\"TRAMP_probe_tested\":\n return 2\n return 1\n \n # def compute_feature_linecolor(self, feature):\n # if feature.type == \"TRAMP_probe_tested\":\n # if 
feature.qualifiers.get(\"label\")[0] == \"NLP7#7\":\n # col = \"#e69f00\"\n # elif feature.qualifiers.get(\"label\")[0] == \"NLP7#9\":\n # col = \"#d55e00\"\n # elif feature.qualifiers.get(\"label\")[0] == \"NLP7#10\":\n # col = \"#2271B2\"\n # elif feature.qualifiers.get(\"label\")[0] == \"ANAC032#4\":\n # col = \"#2271B2\"\n # elif feature.qualifiers.get(\"label\")[0] == \"ANAC032#8\":\n # col = \"#F748A5\"\n # elif feature.qualifiers.get(\"label\")[0] == \"ANAC032#3\":\n # col = \"#000000\"\n\n # else: \n # pass\n \n # return col\n \n \n\n def compute_feature_box_color(self, feature):\n \"\"\"change colour of feature box border\"\"\"\n if feature.type == \"TRAMP_probe_tested\":\n if feature.qualifiers.get(\"label\")[0] == \"NLP7#7\": \n #col = colour_dict[\"NLP6/7\"]\n col = \"#228833\"\n elif feature.qualifiers.get(\"label\")[0] == \"NLP7#9\":\n #col = colour_dict[\"DREB26\"]\n col = \"#e69f00\"\n elif feature.qualifiers.get(\"label\")[0] == \"NLP7#10\":\n #col = colour_dict[\"ANAC032*\"]\n col = \"#d55e00\"\n elif feature.qualifiers.get(\"label\")[0] == \"ANAC032#4\":\n #col = colour_dict[\"ANAC032*\"]\n col = \"#d55e00\"\n elif feature.qualifiers.get(\"label\")[0] == \"ANAC032#8\":\n #col = colour_dict[\"ARF9/18*\"]\n col = \"#2271B2\"\n elif feature.qualifiers.get(\"label\")[0] == \"ANAC032#3\":\n #col = colour_dict[\"TGA1\"]\n col = \"#F748A5\"\n else: \n return \"black\"\n \n return col\n #'ARF9/18*': '#2271B2', 'ANR1*': '#3DB7E9', 'TGA1': '#F748A5', 'ANAC032*': '#d55e00', 'DREB26': '#e69f00', 'NLP6/7': '#228833'\n\n def compute_feature_open_left(self, feature):\n \"\"\"set to true if feature does not end on the left\"\"\"\n return False\n\n def compute_feature_open_right(self, feature):\n \"\"\"set to true if feature does not end on the right\"\"\"\n return False\n\n def compute_feature_label(self, feature):\n \"\"\"Remove most feature labels\"\"\"\n if feature.type == 'start_codon':\n return \"ATG\"\n if feature.type == 'TRAMP_probe_tested':\n return 
feature.qualifiers.get(\"label\")[0] \n else:\n pass\n\n #return super().compute_feature_label(feature)\n # def compute_feature_min_y_height_of_text_line(self, feature):\n # return 0.1\n\n def compute_feature_fontdict(self, feature):\n \"\"\"change label font to arial, size to 10\"\"\"\n if feature.type == \"TRAMP_probe_tested\":\n #if certain label, align to the right\n if feature.qualifiers.get(\"label\")[0] == \"ANAC032#3\": \n return dict(family='sans-serif',size=10, ha='right')\n # elif feature.qualifiers.get(\"label\")[0] == \"ANAC032#8\":\n # return dict(family='sans-serif',size=10, ha='left')\n if feature.qualifiers.get(\"label\")[0] == \"ANAC032#8\": \n return dict(family='sans-serif',size=10)\n else:\n return dict(family='sans-serif',size=10)\n\n else:\n return dict(family='sans-serif',size=10)\n #return dict([('family','sans-serif'),('sans-serif','Arial'),('size',10)])\n\n #make feature_list so that each feature name is only added once if more than one share the same name\n #feature_list = []\n def compute_feature_legend_text(self, feature):\n \"\"\"add legend if feature label has not been added to legend already\"\"\"\n if feature.type=='promoter':\n pass\n # elif feature.type=='exon':\n # pass\n # elif feature.type=='intron':\n # pass\n # elif feature.type==\"5'UTR\":\n # pass\n # elif feature.qualifiers.get(\"label\")[0] in self.feature_list:\n # pass\n elif feature.qualifiers.get(\"label\")[0] in feature_list:\n pass\n else:\n \n feature_list.append(feature.qualifiers.get(\"label\")[0])\n \n #feature_list.append(feature.qualifiers.get(\"label\")[0])\n return feature.qualifiers.get(\"label\")[0] \n\n\n def compute_filtered_features(self, features):\n \"\"\"Do not display the following features\"\"\"\n return [\n feature for feature in features\n if (feature.type != \"TRAMP_probe\")\n and (feature.type != \"none\")\n and (feature.type != \"DHS\")\n and (feature.type != \"misc_feature\")\n and (feature.type != \"primer\")\n #and (feature.type != 
\"gene\")\n and (feature.type != \"mRNA\")\n and (feature.type != \"CDS\")\n and (feature.type != \"source\")\n and (feature.type != \"misc_RNA\")\n and (feature.qualifiers.get(\"label\")[0] != \"ARID5_ARID6\"\n and (feature.qualifiers.get(\"label\")[0] != \"ARDI5_ARID6\")\n and ('Translation' not in feature.qualifiers.get(\"label\")[0]))\n ]\n",
"_____no_output_____"
],
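The translator's colour handling reduces to a small memoised-palette pattern: the first time a TFBS label is seen, it draws the next colour from `itertools.cycle`; every later occurrence reuses the stored colour, so each TFBS keeps one colour across all promoters and in the legend. A stdlib-only sketch (the labels are illustrative):

```python
from itertools import cycle

# Same hex palette as the colour_list above.
palette = ['#2271B2', '#3DB7E9', '#F748A5', '#d55e00', '#e69f00', '#228833', '#000000']
colour_iterator = cycle(palette)
colour_dict = {}

def colour_for(label):
    """First sighting of a label draws the next palette colour;
    repeat sightings reuse the memoised one, keeping colours stable."""
    if label not in colour_dict:
        colour_dict[label] = next(colour_iterator)
    return colour_dict[label]

print(colour_for('NLP6/7'))  # first label -> first palette colour
print(colour_for('TGA1'))    # second label -> second colour
print(colour_for('NLP6/7'))  # repeat -> same colour as before
```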
[
"def preprocess_record(seqrecord):\n \"\"\"Preprocess the biopython record before feeding it into the translator.\"\"\"\n #get length of the whole sequence\n #seq_length = len(seqrecord.seq)\n # #print(f'{seqrecord.id} + {seq_length}')\n # #change seqrecord locations\n # #if start location is greater than TSS position then the new location is the length of the whole sequence minus the original position\n # def convert_location(location, TSS_position, seq_length):\n # \"\"\"convert locations to be relative to TSS position (or another locaton if you choose)\"\"\"\n # if location > TSS_position:\n # new_location = seq_length - location\n # elif location < TSS_position:\n # new_location = -(TSS_position-location)\n # elif location == TSS_position:\n # new_location = 0\n # return new_location\n # print(seqrecord)\n #Ensure that genbank files are reverse complemented so that plots can be aligned to the right in figure\n new_seqrecord = seqrecord.reverse_complement(id=seqrecord.id +\"_rc\")\n \n for feature in new_seqrecord.features:\n #change strand to None so that features are rectangular\n feature.location.strand = None\n \n if feature.type == 'TFBS':\n #print(feature)\n #change sigil to box\n feature.qualifiers[\"sigil\"] = 'OCTAGON'\n #increase width of TFBS so colour is more visible\n start = feature.location.start\n end = feature.location.end\n #find middle of TFBS\n middle = (end-start)//2 #floor division creating an integar\n \n new_start = start-8+middle\n new_end = end-middle+8\n feature.location = FeatureLocation(new_start,new_end)\n #change name of some TFBSs\n if feature.qualifiers.get(\"label\")[0] == 'ANR1_AGL16':\n feature.qualifiers.get(\"label\")[0] = 'ANR1*'\n elif feature.qualifiers.get(\"label\")[0] == 'ANAC032_NAC002':\n feature.qualifiers.get(\"label\")[0] = 'ANAC032*'\n elif feature.qualifiers.get(\"label\")[0] == 'ANAC032':\n feature.qualifiers.get(\"label\")[0] = 'ANAC032*'\n elif feature.qualifiers.get(\"label\")[0] == 'ARF18/9_ARF2':\n 
feature.qualifiers.get(\"label\")[0] = 'ARF9/18*'\n elif feature.qualifiers.get(\"label\")[0] == 'ARF9/18':\n feature.qualifiers.get(\"label\")[0] = 'ARF9/18*'\n elif feature.qualifiers.get(\"label\")[0] == 'NLP7':\n feature.qualifiers.get(\"label\")[0] = 'NLP6/7'\n \n \n\n #if feature was experimentally validated, add qualifier\n \n return new_seqrecord\n\n",
"_____no_output_____"
],
[
"def gb_file_to_seqrecord(promoter_name):\n \"\"\"load genbankfile into a seqrecord\"\"\"\n #file location\n gb_file=f\"../../data/TRAMP/{promoter_name}.gb\"\n record = SeqIO.read(gb_file, 'genbank')\n #preprocess record\n modified_seqrecord = preprocess_record(record)\n return modified_seqrecord",
"_____no_output_____"
],
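The plotting driver later recovers the genomic window for the ATAC-seq tracks by regex-matching the custom `start_index`/`end_index` fields that Benchling writes onto the GenBank KEYWORDS line (see the note in the first cell). A stdlib-only sketch of that parsing, using a hypothetical keywords string:

```python
import re

# Hypothetical KEYWORDS line, mimicking the Benchling custom fields that
# each promoter .gb file is expected to carry.
keywords = "KEYWORDS    start_index:6015558 end_index:6020108"

start_region = int(re.findall(r"start_index:(\d+)", keywords)[0])
end_region = int(re.findall(r"end_index:(\d+)", keywords)[0])
print(start_region, end_region, end_region - start_region)
```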
[
"def RGB2CMYK(image_file_location, image_file_extension):\n \"\"\"convert image from RGB to CMYK colours\"\"\"\n # Import image\n image = Image.open(image_file_location+image_file_extension)\n print(image.mode)\n if image.mode == 'RGBA':\n print('converting_image')\n cmyk_image = image.convert('CMYK')\n cmyk_image.save(image_file_location+'_CMYK'+image_file_extension)\n #close image\n image.close()\n # img = plt.imread(image_file_location+image_file_extension)\n # #print(image_file_location+image_file_extension)\n # # Create float\n # bgr = img.astype(float)/255.\n\n # # Extract channels\n # with np.errstate(invalid='ignore', divide='ignore'):\n # K = 1 - np.max(bgr, axis=2)\n # C = (1-bgr[...,2] - K)/(1-K)\n # M = (1-bgr[...,1] - K)/(1-K)\n # Y = (1-bgr[...,0] - K)/(1-K)\n\n # # Convert the input BGR image to CMYK colorspace\n # CMYK = (np.dstack((C,M,Y,K)) * 255).astype(np.uint8)\n # cv2.imwrite(image_file_location+'_CMYK'+image_file_extension, CMYK)\n\n # # Split CMYK channels\n # Y, M, C, K = cv2.split(CMYK)\n\n # np.isfinite(C).all()\n # np.isfinite(M).all()\n # np.isfinite(K).all()\n # np.isfinite(Y).all()\n\n # # Save channels\n # cv2.imwrite('C:/path/to/C.jpg', C)\n # cv2.imwrite('C:/path/to/M.jpg', M)\n # cv2.imwrite('C:/path/to/Y.jpg', Y)\n # cv2.imwrite('C:/path/to/K.jpg', K)\n",
"_____no_output_____"
],
[
"# #create a class specifying feature colours\n# #make feature_list and colour_dict so that each feature name and colour is only added once to legend if more than one share the same name\n\n# feature_list = []\n# colour_dict = {}\n# class MyCustomTranslator(BiopythonTranslator):\n# \"\"\"Custom translator iplementing the following theme:\n# -Colour promoter in pale green\n# -colour exons in dark grey\n# -colour introns in light grey\n# -colour TFs from colour palette\"\"\"\n\n# #import colour blind palette\n# #colour palette from mkweb.bcgsc.ca/colorblind/palettes.mhtml\n# # df = pd.read_csv(\"colourblind_palette.csv\", header=0)\n# # #convert to floats\n# # floats=df.divide(255)\n# # #make sets of each row to get the red, green blue colours\n# # CB_colour_palette = floats.apply(tuple, axis = 1)\n# # #make into df\n# # df = CB_colour_palette.to_frame()\n# # #save file\n# # df.to_csv('../../data/TRAMP/colour_list')\n \n# # #turn into a list of colour sets\n# # list_colours = list(CB_colour_palette)\n# # colour_list = list_colours\n# #colour_list = ['#88CCEE', '#44AA99', '#117733', '#332288', '#DDCC77', '#999933','#CC6677', '#882255', '#AA4499', '#DDDDDD']\n# colour_list = ['#2271B2',\n# '#3DB7E9',\n# '#F748A5', \n# '#d55e00',\n# '#e69f00',\n# '#f0e442',\n# '#000000',]\n# #make colour iterator\n# colour_iterator=cycle(colour_list)\n# #change colour cycle\n# #mpl.rcParams['axes.prop_cycle'] = mpl.cycler(color=list_colours)\n# #count = -1\n# #set colour count\n\n\n# def compute_feature_color(self, feature):\n# \"\"\"set colour of each feature\"\"\" \n\n# if feature.type == \"promoter\":\n# return \"#359B73\"\n# elif feature.type == \"exon\":\n# return \"darkgrey\"\n# #return (169,169,169)\n# elif feature.type == \"intron\":\n# return \"lightgrey\" \n# #return (211,211,211)\n# elif feature.type == \"5'UTR\":\n# return \"lightgrey\"\n# elif feature.type == \"TFBS\":\n# if feature.qualifiers.get(\"label\")[0] in colour_dict.keys():\n# col = 
colour_dict[feature.qualifiers.get(\"label\")[0]]\n# else:\n# col = next(self.colour_iterator)\n# colour_dict[feature.qualifiers.get(\"label\")[0]] = col\n \n# return col\n# else:\n# return \"gold\"\n\n# def compute_feature_box_linewidth(self, feature):\n# \"\"\"change shape of features\"\"\"\n# return 0\n\n# def compute_feature_label(self, feature):\n# \"\"\"Remove feature labels\"\"\"\n# #return super().compute_feature_label(feature)\n \n\n# def compute_feature_fontdict(self, feature):\n# \"\"\"change font to arial, size to 10\"\"\"\n# return dict(family='sans-serif',size=10)\n# #return dict([('family','sans-serif'),('sans-serif','Arial'),('size',10)])\n\n# #make feature_list so that each feature name is only added once if more than one share the same name\n# #feature_list = []\n# def compute_feature_legend_text(self, feature):\n# \"\"\"add legend if feature label has not been added to legend already\"\"\"\n# if feature.type=='promoter':\n# pass\n# elif feature.type=='exon':\n# pass\n# elif feature.type=='intron':\n# pass\n# elif feature.type==\"5'UTR\":\n# pass\n# # elif feature.qualifiers.get(\"label\")[0] in self.feature_list:\n# # pass\n# elif feature.qualifiers.get(\"label\")[0] in feature_list:\n# pass\n# else: \n\n# feature_list.append(feature.qualifiers.get(\"label\")[0])\n \n# #feature_list.append(feature.qualifiers.get(\"label\")[0])\n# return feature.qualifiers.get(\"label\")[0] \n\n\n# def compute_filtered_features(self, features):\n# \"\"\"Do not display features the following features\"\"\"\n# return [\n# feature for feature in features\n# if (feature.type != \"TRAMP_probe\")\n# and (feature.type != \"none\")\n# and (feature.type != \"DHS\")\n# and (feature.type != \"misc_feature\")\n# and (feature.type != \"primer\")\n# and (feature.type != \"gene\")\n# and (feature.qualifiers.get(\"label\")[0] != \"ARID5_ARID6\"\n# and (feature.qualifiers.get(\"label\")[0] != \"ARDI5_ARID6\"))\n# ]\n ",
"_____no_output_____"
],
[
"# def genometrack_plot(file_location):\n# \"\"\"make pyGenomeTracks plot\"\"\"\n# fig, axs = plt.subplots(2,1,sharex='col')\n# region = '1',3000,6000\n# chrom_region,start_region,end_region = region\n# ax = axs[1]\n\n# track_config = dict(\n# file=file_location)",
"_____no_output_____"
],
[
"# def make_plot2(dictionary_of_records_seqrecords,dictionary_of_records_promoternames):\n# #length of dictionary\n# length_dict = len(dictionary_of_records_promoternames) \n# height = length_dict//2\n# fig, axs = plt.subplots(length_dict,1,sharex='none',figsize=(10,height*2))\n# def pygenometracks(chrom_region,start_region,end_region,axis):\n# \"\"\"make pygenome track for bigwig files\"\"\" \n# file_location_roots=\"../../data/ATAC-seq/potter2018/GSE116287_Roots_NaOH_Merged.bw\"\n# file_location_shoots=\"../../data/ATAC-seq/potter2018/GSE116287_Shoots_NaOH_Merged.bw\"\n# track_config_roots = dict(file=file_location_roots,overlay_previous = 'share-y',color='brown')\n# track_config_shoots = dict(file=file_location_shoots,overlay_previous = 'share-y',alpha = 0.5)#alpha is transparency\n# tk_roots = pygtk.BigWigTrack(track_config_roots)\n# tk_shoots = pygtk.BigWigTrack(track_config_shoots)\n# tk_roots.plot(axis,chrom_region,start_region,end_region,)\n# tk_shoots.plot(axis,chrom_region,start_region,end_region,)\n# #make graphic records of genes and TFBSs\n# for k,v in dictionary_of_records_seqrecords.items():\n# #take last character of string, double it as twice as many sequence tracks to include open chromatin atacseq data\n# last = int(k[-1])\n# #get promoter name\n# prom_name = dictionary_of_records_promoternames[k]\n# #split on dashes\n# #short_prom_name = prom_name.split('-')[0].upper()\n# #make_graphic_record(v,short_prom_name,axsRight[last-2]\n# #print(v.id)\n# #add atacseq track\n# #first get chromosome number from sequence ID\n# AGI = v.id[v.id.find('(')+1:v.id.find(')')]\n# chrom_region = AGI[2]\n# #then get start and stop region\n# #open genbank file, read third line containing keywords\n# gb_file=f\"../../data/TRAMP/{prom_name}.gb\"\n# with open(gb_file, \"r\") as f:\n# all_lines = f.readlines()\n# start_region = int(re.findall(r\"start_index:(\\d+)\",all_lines)[0])\n# end_region = int(re.findall(r\"end_index:(\\d+)\",all_lines)[0])\n\n# #keywords = 
re.findall(r\"KEYWORDS.*\" \n \n# print(\"start=\"+str(start_region)+\"end=\"+str(end_region)) \n# pygenometracks(chrom_region,start_region,end_region,axs[last-1])\n# #add titles\n# short_prom_name = prom_name.split('-')[0].upper()\n# axs[last-1].set_title(short_prom_name)\n# #change xvalues of open chromatin track\n# region_length = end_region-start_region\n# ax1Ticks = axs[last-1].get_xticks() \n# #ax2Ticks = ax1Ticks\n# def tick_function(X,start_region):\n# V = X-start_region\n# return [\"%.3f\" % z for z in V]\n# #axs[last-1].set_xticks(ax2Ticks)\n# #ax2.set_xbound(ax1.get_xbound())\n# axs[last-1].set_xticklabels(tick_function(ax1Ticks,start_region))\n\n# #plt.xticks(np.arange(end_region-start_region),np.arange(end_region-start_region))\n# #make x axes start at 1\n \n# fig.tight_layout()\n# # chrom_region,start_region,end_region = region\n# # ax = axs[1]\n# # file_location_roots=\"../../data/ATAC-seq/potter2018/GSE116287_Roots_NaOH_Merged.bw\"\n# # file_location_shoots=\"../../data/ATAC-seq/potter2018/GSE116287_Shoots_NaOH_Merged.bw\"\n# # track_config_roots = dict(file=file_location_roots,overlay_previous = 'share-y',color='brown')\n# # track_config_shoots = dict(file=file_location_shoots,overlay_previous = 'share-y',alpha = 0.5)#alpha is transparency\n# # tk_roots = pygtk.BigWigTrack(track_config_roots)\n# # tk_shoots = pygtk.BigWigTrack(track_config_shoots)\n# # tk_roots.plot(axs[0],chrom_region,start_region,end_region,)\n# # tk_shoots.plot(axs[0],chrom_region,start_region,end_region,)\n",
"_____no_output_____"
],
[
"def make_plot(dictionary_of_records_seqrecords,dictionary_of_records_promoternames,dir_name,atacseq=True):\n def make_graphic_record(seqrecord, promoter_name, ax,short_annotation=True,title=True):\n \"\"\"make a graphic record object\"\"\"\n #display figure\n graphic_record = MyCustomTranslator().translate_record(seqrecord)\n #graphic_record.labels_spacing = -5\n #set minimum height of annotations\n if short_annotation is True:\n graphic_record.min_y_height_of_text_line = 0.5\n else:\n graphic_record.min_y_height_of_text_line = 0.5\n # graphic_record.labels_spacing = 10\n\n #set spacing between labels\n #graphic_record.labels_spacing = -5\n #change height to 0 so TFBS can overlap the promoter\n graphic_record.feature_level_height = 0\n #graphic_record = BiopythonTranslator().translate_record(gb_file)\n graphic_record.plot(ax=ax, with_ruler=True,annotate_inline=True,)#,figure_width=10, #strand_in_label_threshold=4,annotate_inline=True so that labels are within the feature\n #add title of promoter\n if title is True:\n ax.title.set_text(promoter_name)\n #return graphic_record\n #set plot parameters\n rcParams['xtick.major.width'] = 2\n rcParams['ytick.major.width'] = 2\n rcParams['font.family'] = 'sans-serif'\n rcParams['font.sans-serif'] = ['Arial']\n #allow font to be edited later in pdf editor\n rcParams ['pdf.fonttype'] = 42 \n #rcParams['axes.linewidth'] = 2\n #rcParams['lines.linewidth'] = 2\n #remove top and right lines\n # rcParams['axes.spines.top'] = False\n # rcParams['axes.spines.right'] = False\n #font size\n rcParams['font.size'] = 11\n\n\n #length of dictionary\n length_dict = len(dictionary_of_records_promoternames)\n #print(length_dict)\n #\n #if including atacseq track, include more subfigures\n if atacseq is True:\n #make plot\n height = length_dict//2+length_dict-1 #add length_dict-1 to include empty grids for spacing between pairs\n ### NEED TO MAKE INTO A GRID LIKE HERE: 
https://stackoverflow.com/questions/51717199/how-to-adjust-space-between-every-second-row-of-subplots-in-matplotlib\n\n \n\n if length_dict < 4:\n fig = plt.figure(constrained_layout=False,figsize=(12,height+1))\n else:\n fig = plt.figure(constrained_layout=False,figsize=(11,height-5))\n \n #make subfigures\n #subfigs = fig.subfigures(1,2, wspace=0.0, width_ratios=[1,5])\n #left legend \n #axsLeft=subfigs[0].subplots(1,1)\n #remove axis\n #axsLeft.axis('off')\n # #right figures\n #create gridspec so that open chromatin is paired with promoter\n n = length_dict # number of double-rows\n m = 2 # number of columns\n\n t = 0.9 # 1-t == top space \n b = 0.1 # bottom space (both in figure coordinates)\n if length_dict < 4:\n msp = -0.7\n sp = 0.3\n\n else:\n msp = -0.3 # minor spacing\n sp = 0.3 # major spacing\n offs=(1+msp)*(t-b)/(2*n+n*msp+(n-1)*sp) # grid offset\n hspace = sp+msp+1 #height space per grid\n\n gso = GridSpec(n,m, bottom=b+offs, top=t, hspace=hspace,width_ratios=[2,10])\n gse = GridSpec(n,m, bottom=b, top=t-offs, hspace=hspace,width_ratios=[2,10])\n\n #fig = plt.figure()\n\n grid = []\n for i in range(n*m):\n grid.append(fig.add_subplot(gso[i]))\n grid.append(fig.add_subplot(gse[i]))\n \n\n \n #print(grid)\n #print(len(grid))\n #split plots into two lists - one for left column and one for the right\n #turn off axes for the plots on the left\n axsRight = []\n axsLeft = []\n count=0\n for number in np.arange(len(grid)):\n if count == 0:\n axsLeft += [grid[number]]\n grid[number].axis('off')\n count +=1\n elif count == 1:\n axsLeft += [grid[number]]\n count +=1\n grid[number].axis('off')\n elif count == 2:\n axsRight += [grid[number]]\n count +=1\n elif count ==3:\n axsRight += [grid[number]]\n count = 0\n\n #make legend span two plots \n axsLeft[0].set_position(gso[0:3].get_position(fig))\n #axsLeft[1].set_position(gso[0:3].get_position(fig)\n \n \n \n \n #move plots closeer to each other\n # for i in range(0,len(axsRight),2):\n # axsRight[i] = 
plt.subplot()\n #print(axsRight)\n \n\n\n\n # num_rows = length_dict*2\n # num_cols = 1\n # row_height = 6\n # space_height = 2\n # num_sep_rows = lambda x: int((x-1)/2)\n # grid = (row_height*num_rows + space_height*num_sep_rows(num_rows), num_cols)\n # #axsRight = subfigs[1].subplots(num_rows,1,sharex=False)\n # #axsRight = subfigs[1]\n # axsRight = []\n\n # for ind_row in range(num_rows):\n # for ind_col in range(num_cols):\n # grid_row = row_height*ind_row + space_height*num_sep_rows(ind_row+1)\n # grid_col = ind_col\n\n # axsRight += [plt.subplot2grid(grid, (grid_row, grid_col), rowspan=row_height)]\n\n \n def pygenometracks(chrom_region,start_region,end_region,axis):\n \"\"\"make pygenome track for bigwig files\"\"\" \n file_location_roots=\"../../data/ATAC-seq/potter2018/GSE116287_Roots_NaOH_Merged.bw\"\n file_location_shoots=\"../../data/ATAC-seq/potter2018/GSE116287_Shoots_NaOH_Merged.bw\" \n track_config_roots = dict(file=file_location_roots,overlay_previous = 'share-y',color='brown',alpha = 0.25,min_value=0, max_value=30)\n track_config_shoots = dict(file=file_location_shoots,overlay_previous = 'share-y',color='teal',alpha = 0.25,min_value=0,max_value=30)\n \n #add lines too\n track_config_roots_lines = dict(file=file_location_roots,overlay_previous = 'share-y',color='brown',type=\"line:1\",min_value=0, max_value=30)\n track_config_shoots_lines = dict(file=file_location_shoots,overlay_previous = 'share-y',color='teal',type=\"line:1\",min_value=0, max_value=30)\n #alpha is transparency\n tk_roots = pygtk.BigWigTrack(track_config_roots)\n tk_shoots = pygtk.BigWigTrack(track_config_shoots)\n tk_roots_lines = pygtk.BigWigTrack(track_config_roots_lines)\n tk_shoots_lines = pygtk.BigWigTrack(track_config_shoots_lines)\n tk_roots.plot(axis,chrom_region,start_region,end_region,)\n tk_shoots.plot(axis,chrom_region,start_region,end_region,)\n tk_roots_lines.plot(axis,chrom_region,start_region,end_region,)\n 
tk_shoots_lines.plot(axis,chrom_region,start_region,end_region,)\n #make graphic records of genes and TFBSs\n for k,v in dictionary_of_records_seqrecords.items():\n #take last character of string, double it as twice as many sequence tracks to include open chromatin atacseq data\n last = int(k[-1])*2+1\n last_chromatin = int(k[-1])*2\n #get promoter name\n prom_name = dictionary_of_records_promoternames[k]\n #split on dashes\n short_prom_name = prom_name.split('-')[0].upper()\n #print(short_prom_name)\n if short_prom_name == \"ANAC032\":\n #print(ANAC032)\n make_graphic_record(v,short_prom_name,axsRight[last-2],short_annotation=False, title=False)\n else:\n make_graphic_record(v,short_prom_name,axsRight[last-2],title=False)\n \n #print(v.id)\n #add atacseq track\n #first get chromosome number from sequence ID\n chrom_region = re.findall(r\"chromosome:TAIR10:(\\d)\",v.id)[0]\n #chrom_region = v.id[v.id.find('TAIR10:')+1:v.id.find(':')]\n #print(v)\n #print(chrom_region)\n #chrom_region = AGI[2]\n #then get start and stop region\n #open genbank file, read third line containing keywords\n gb_file=f\"../../data/TRAMP/{prom_name}.gb\"\n with open(gb_file, \"r\") as f:\n all_lines = f.readlines()\n for line in all_lines:\n if re.match(r'KEYWORDS', line):\n keywords = line \n start_region = int(re.findall(r\"start_index:(\\d+)\",keywords)[0])\n end_region = int(re.findall(r\"end_index:(\\d+)\",keywords)[0])\n #print(start_region) \n pygenometracks(chrom_region,start_region,end_region,axsRight[last_chromatin-2])\n #set xlim\n offset_length = 4550-(end_region-start_region)\n axsRight[last_chromatin-2].set_xlim(start_region-offset_length,end_region)\n #get x and y lim\n #axsRight[last_chromatin-2].set_title(short_prom_name,y=0.5, ) #put title to left of axis\n ##setlocation of the title \n #first transform offset_length to between 0 and 1 for axes location, and offset a little to the left\n trans = (offset_length-25)/4550\n\n \n axsRight[last_chromatin-2].text(x=trans, 
y=0.125, s=short_prom_name, weight=\"extra bold\", fontsize=10,transform=axsRight[last-2].transAxes, ha='right')#transform=axsRight[last_chromatin-2].transAxes\n\n \n #axsRight[last_chromatin-2].invert_xaxis()\n #change xvalues of open chromatin track \n #first get x and y values\n #line = axsRight[last_chromatin-2].get_lines()\n #xd = line.get_xdata()\n #yd = line.get_ydata()\n #print(line)\n #ax1Ticks = axsRight[last_chromatin-2].get_xticks() \n # ax2Ticks = ax1Ticks.copy()\n # def tick_function(X,start_region):\n # V = X-start_region\n # return [\"%.3f\" % z for z in V]\n\n #\n #axs[last-1].set_xticks(ax2Ticks)\n #ax2.set_xbound(ax1.get_xbound())\n #axsRight[last_chromatin-2].set_xticklabels(tick_function(ax2Ticks,start_region))\n #axsRight[last_chromatin-2].set_xscale('function', functions=(forward_function,inverse_function))\n #make x labels integars\n \n\n colour_dict_sorted = {k: v for k,v in sorted(colour_dict.items(), key=lambda item: item[0])}\n #print(colour_dict_sorted)\n handles = []\n labels = []\n for TFBS,colour in colour_dict_sorted.items():\n addition = mpatches.Patch(color=colour)\n #append to handles list\n handles.append(addition)\n labels.append(TFBS)\n\n #use latex style rendering to allow parts to be bold\n \n rc('text', usetex=True)\n\n #handles = sorted(handles)\n\n #create TFBS legend and align left\n #append open chromatin to handles\n #first create custom handler for string in legend to add a second title for open chromatin\n #used https://gist.github.com/Raudcu/44b43c7f3f893fe2f4dd900afb740b7f\n class LegendTitle(object):\n def __init__(self, text_props=None):\n self.text_props = text_props or {}\n super(LegendTitle, self).__init__()\n\n def legend_artist(self, legend, orig_handle, fontsize, handlebox):\n x0, y0 = handlebox.xdescent, handlebox.ydescent\n title = mtext.Text(x0, y0, r\"\\textbf{{{}}}\".format(orig_handle), **self.text_props)\n handlebox.add_artist(title)\n return title\n#r\"\\textbf{'+orig_handle+'}\"\n\n \n \n\n #label 
titles\n title1 = 'Candidate transcription factor binding sites' \n title2 = 'Gene features'\n title3 = 'Open chromatin peaks'\n #add colour patches\n intron = mpatches.Patch(color='lightgrey')\n exon = mpatches.Patch(color='#635147')\n upstream_mrna = mpatches.Patch(color='#DDCC77')\n root = Line2D([0],[0],color='brown', lw=2)\n shoot = Line2D([0],[0],color='teal',lw=2)\n\n #insert handles\n handles.insert(0,title1)\n labels.insert(0,'')\n handles.insert(3,title2)\n labels.insert(3,'')\n handles.insert(4,exon)\n labels.insert(4,'Exon')\n handles.insert(5,title3)\n labels.insert(5,'')\n handles.insert(6,root)\n labels.insert(6,'Root')\n #blank label to make layout look nice\n handles.insert(7,'')\n labels.insert(7,'')\n handles.insert(10,'')\n labels.insert(10,'')\n handles.insert(11,intron)\n labels.insert(11,'Intron')\n #blank label to make layout look nice \n handles.insert(12,'')\n labels.insert(12,'')\n handles.insert(13,shoot)\n labels.insert(13,'Shoot')\n #blank label to make layout look nice\n handles.insert(14,'')\n labels.insert(14,'')\n handles.insert(17,'')\n labels.insert(17,'')\n \n handles.insert(18,upstream_mrna)\n labels.insert(18,'Upstream transcript')\n handles.insert(19,'')\n labels.insert(19,'')\n\n \n\n \n axsLeft[0].legend(handles=handles,labels=labels, loc='upper right',ncol=3,handler_map={str: LegendTitle({'fontsize': 14})})#handler_map={title: LegendTitle({'fontsize': 14})}\n\n\n\n \n #axsLeft[0].legend(handles=handles,labels=labels, loc='upper right', title=r\"\\textbf{Transcription factor binding site}\", title_fontsize='14',ncol=2,)#handler_map={title: LegendTitle({'fontsize': 14})}\n # # Add the legend manually to the current Axes.\n\n #create open chromatin legend below\n #open_chrom_handles = [mpatches.Patch(facecolor='brown', edgecolor='brown', label='Root'),mpatches.Patch(facecolor='green', edgecolor='green', label='Shoot')]\n #axsLeft[4].legend(handles=open_chrom_handles, loc='lower right', title=r\"\\textbf{Open chromatin 
peaks}\", title_fontsize='14',ncol=2)\n\n # #turn off latex rendering of text\n rc('text', usetex=False)\n \n #change x_lim to flip x axis\n for n in np.arange(length_dict):\n last = n*2+1\n \n axsRight[last-2].set_xlim(4550,0)\n \n #change font colour of x axis text\n axsRight[last-2].tick_params(axis='x', colors='black')\n #change width of line\n # for axis in ['bottom','right']:\n # ax.spines[axis].set_linewidth(5)\n\n x_ticks = np.arange(176, 4550, 500) #start stop and step\n axsRight[last-2].set_xticks(x_ticks)\n fig.canvas.draw()\n # if axsRight[last-2] == axsRight[length_dict*2-1]:\n # pass\n # else:\n # #remove xticks\n # #axsRight[last-2].xaxis.set_ticks_position('none')\n # #remove axes\n # ax.axis('off')\n \n #remove all axes\n for ax in axsRight: \n if ax == axsRight[length_dict*2-1]:\n pass\n else:\n #ax.xaxis.set_ticks_position('none')\n ax.axis('off')\n\n \n labels = [item._text for item in axsRight[length_dict*2-1].get_xticklabels()]\n #labels: ['176', '676', '1,176', '1,676', '2,176', '2,676', '3,176', '3,676', '4,176']\n new_labels = []\n for label in labels:\n #remove non-numberic character from string\n label = re.sub(\"[^0-9]\",\"\",label) \n label = 1176-int(label)\n if label == 0:\n label = f'{0} (TSS)'\n\n new_labels.append(label)\n axsRight[length_dict*2-1].set_xticklabels(new_labels)\n #set x axis to be a little closer to the gene\n axsRight[length_dict*2-1].spines['bottom'].set_position(('data',-0.5))\n #increase spacing between bottom plots\n\n # for k,v in dictionary_of_records_seqrecords.items():\n # last = int(k[-1])*2+1\n # last_chromatin = int(k[-1])*2\n # #get promoter name\n # prom_name = dictionary_of_records_promoternames[k]\n # #split on dashes\n # short_prom_name = prom_name.split('-')[0].upper()\n\n #chartBox=axsRight[length_dict*2-1].get_position()\n #axsRight[length_dict*2-2].set_position([chartBox.x0,(chartBox.y0-1),chartBox.width,chartBox.height])\n #axsRight[length_dict*2-2].set_position(['data',-0.2,'data','data'])\n\n 
fig.subplots_adjust(left=0, bottom=-0.1, right=1, top=0, wspace=-0.5, hspace=-0.1)\n\n\n else:\n #make plot\n height = length_dict//2\n fig = plt.figure(constrained_layout=False,figsize=(8,height))\n #make subfigures\n subfigs = fig.subfigures(1,2, wspace=0.0, width_ratios=[1,5])\n #left legend \n axsLeft=subfigs[0].subplots(1,1)\n #remove axis\n axsLeft.axis('off')\n #right figures\n axsRight = subfigs[1].subplots(length_dict,1,sharex=True)\n #move legend to the right\n # box = axsLeft.get_position()\n # box.x0 = box.x0 + 1.8\n # box.x1 = box.x1 + 1.8\n # box.y0 = box.y0 +0.13\n # box.y1 = box.y1 +0.13\n # axsLeft.set_position(box)\n \n #make graphic records\n for k,v in dictionary_of_records_seqrecords.items():\n #take last character of string\n last = int(k[-1])\n #get promoter name\n prom_name = dictionary_of_records_promoternames[k]\n #split on dashes\n short_prom_name = prom_name.split('-')[0].upper()\n make_graphic_record(v,short_prom_name,axsRight[last-1])\n\n #add legend\n #first import colour iterator\n #import colour blind palette\n #colour palette from mkweb.bcgsc.ca/colorblind/palettes.mhtml\n # df = pd.read_csv(\"colourblind_palette.csv\", header=0)\n #convert to floats\n # floats=df.divide(255)\n #make sets of each row to get the red, green blue colours\n # CB_colour_palette = floats.apply(tuple, axis = 1)\n #turn into a list of colour sets\n # list_colours = list(CB_colour_palette)\n #make colour iterator\n #colour_iterator=cycle(list_colours)\n #Use feature_list generated just above the Class MyCustomTranslator(BiopythonTranslator)\n #iterate creating legend from feature list\n #create empty handles list\n #print(colour_dict)\n #sort TFBS names into alphabetical order in colour_dict\n colour_dict_sorted = {k: v for k,v in sorted(colour_dict.items(), key=lambda item: item[0])}\n #print(colour_dict_sorted)\n handles = []\n for TFBS,colour in colour_dict_sorted.items():\n addition = mpatches.Patch(color=colour, label=TFBS)\n #append to handles list\n 
handles.append(addition)\n \n #use latex style rendering to allow parts to be bold\n rc('text', usetex=True)\n \n\n #handles = sorted(handles)\n \n #create legend and align left\n axsLeft.legend(handles=handles, loc='upper right', title=r\"\\textbf{Transcription factor binding site}\", title_fontsize='14',ncol=2)\n \n\n #turn off latex rendering of text\n rc('text', usetex=False)\n\n\n #change x_lim to flip x axis\n for ax in axsRight:\n ax.set_xlim(4550,0)\n #change font colour of x axis text\n ax.tick_params(axis='x', colors='black')\n #change width of line\n # for axis in ['bottom','right']:\n # ax.spines[axis].set_linewidth(5)\n if ax == axsRight[length_dict-1]:\n pass\n else:\n #remove xticks\n ax.xaxis.set_ticks_position('none')\n #remove axes\n #ax.axis('off')\n \n \n\n\n x_ticks = np.arange(176, 4550, 500) #start stop and step\n plt.xticks(x_ticks)\n # for ax in axsRight:\n # if ax == axsRight[length_dict-1]:\n # pass\n # else:\n # #remove xticks\n # ax.set_xticks([])\n #Now change labels of xticks centered around the TSS \n fig.canvas.draw()\n labels = [item._text for item in axsRight[length_dict-1].get_xticklabels()]\n #labels: ['176', '676', '1,176', '1,676', '2,176', '2,676', '3,176', '3,676', '4,176']\n new_labels = []\n for label in labels:\n #remove non-numberic character from string\n label = re.sub(\"[^0-9]\",\"\",label) \n label = 1176-int(label)\n if label == 0:\n label = f'{0} (TSS)'\n\n new_labels.append(label)\n axsRight[length_dict-1].set_xticklabels(new_labels)\n fig.subplots_adjust(left=0.5, bottom=-0.1, right=0, top=1.2, wspace=0, hspace=0)\n \n #print(int(labels[1])+2)\n print(colour_dict)\n #print(colour_iterator)\n\n #combine graphic records\n #all_records = [graphic_record1,graphic_record2]\n #print(feature_list)\n #create figure legend\n # lines_labels = [graphic_record.get_legend_handles_labels() for graphic_record in axsRight]\n # handles, labels = [sum(l, []) for l in zip(*lines_labels)]\n # fig.legend(handles, labels)\n\n 
#subfigs[1].suptitle('Promoters', fontsize='x-large')\n \n # #make subplots\n # fig, (ax1,ax2) = plt.subplots(\n # 1,2, figsize=(12,4),gridspec_kw={'width_ratios':[1,4]}\n # )\n #plot Record Map\n \n #add legend of last record\n #graphic_record1.plot_legend(ax=axsLeft, loc=1, ncol=1, frameon=True)\n #remove whitespace\n #plt.tight_layout()\n \n #add a line at TSS\n #get xtick locations\n xtickslocs = axsRight[length_dict*2-1].get_position()\n xstart = xtickslocs.x0 \n xstop = xtickslocs.x1\n #get length of x axis using figure coordinates \n xlength=xstop-xstart\n #get tss_location after converting from x axis position to figure position\n tss_location = ((4550-1176)/4550)*xlength+xstart\n #add vertical dashed line at TSS position\n line = Line2D((tss_location,tss_location),(.1,.9), color='black', linewidth=2, linestyle='--')\n fig.add_artist(line)\n\n\n #set DPI to 600\n fig.set_dpi(600)\n #fig.tight_layout()\n fig.savefig(f\"{dir_name}/combined.pdf\",bbox_inches='tight')\n fig.savefig(f\"{dir_name}/combined.svg\",bbox_inches='tight')\n fig.savefig(f\"{dir_name}/combined.tiff\",bbox_inches='tight')\n #convert image from RGB to CMYK\n RGB2CMYK(f'{dir_name}/combined', '.tiff')\n # #save figure into memory\n # svg1 = BytesIO()\n # fig.savefig(svg1)\n # #load image into PIL\n # svg2 = Image.open(svg1)\n # #save as TIFF\n # svg2.save(f\"../../data/TRAMP/plots/combined2.tiff\")\n # svg1.close()\n ",
"_____no_output_____"
],
[
"#I need to add in the following function to fix annotation boxes being detached (see https://github.com/Edinburgh-Genome-Foundry/DnaFeaturesViewer/issues/42)\n# This problem is a bit complicated and may need refactoring, but redefining the function below before plotting at least connects the boxes until a proper solution is implemented:\n\n# def new_determine_annotation_height(levels):\n# return 1\n# record.determine_annotation_height = new_determine_annotation_height",
"_____no_output_____"
],
[
"def main(args):\n # parse arguments\n #args = parse_args(args)\n #dependent_variable = \"GC_content\"\n\n # make directory for the plots to be exported to\n dirName = f\"../../data/TRAMP/plots\"\n try:\n # Create target Directory\n os.mkdir(dirName)\n print(\"Directory \", dirName, \" created\")\n except FileExistsError:\n print(\"Directory \", dirName, \" already exists\")\n \n promoter_names=dict([('promoter_name8',\"nlp7-at4g24020_ensembl_plant\"),\n ('promoter_name7',\"nlp6-at1g64530_ensembl_plant\"),\n ('promoter_name9',\"tga1-at5g65210_ensembl_plant\"),\n ('promoter_name4',\"arf18-at3g61830_ensembl_plant\"),\n ('promoter_name3',\"arf9-at4g23980_ensembl_plant\"),\n ('promoter_name1',\"anac032-at1g77450_ensembl_plant\"),\n ('promoter_name2',\"anr1-at2g14210_ensembl_plant\"),\n ('promoter_name5',\"dreb26-at1g21910_ensembl_plant\"),\n ('promoter_name6',\"nir1-at2g15620_ensembl_plant\")])\n #sort promoter names dictionary in alphabetical order\n promoter_names = {k: v for k,v in sorted(promoter_names.items(), key=lambda item: item[0])}\n #load seqrecords\n #create empty dictionary\n seqrecords = {}\n for k,v in promoter_names.items(): \n #add to new dictionary of seqrecords\n seqrecords[k] = gb_file_to_seqrecord(v)\n \n #make plot using dictionary\n #make_plot(seqrecords,promoter_names)\n make_plot(seqrecords,promoter_names,dirName)\n\n",
"_____no_output_____"
],
[
"#figure for SAB 2021 poster\ndef main(args):\n # parse arguments\n #args = parse_args(args)\n #dependent_variable = \"GC_content\"\n\n # make directory for the plots to be exported to\n dirName = f\"../../data/TRAMP/plots/SAB2021\"\n try:\n # Create target Directory\n os.mkdir(dirName)\n print(\"Directory \", dirName, \" created\")\n except FileExistsError:\n print(\"Directory \", dirName, \" already exists\")\n \n promoter_names=dict([('promoter_name2',\"nlp7-at4g24020_ensembl_plant\"),\n #('promoter_name7',\"nlp6-at1g64530_ensembl_plant\"),\n #('promoter_name9',\"tga1-at5g65210_ensembl_plant\"),\n #('promoter_name4',\"arf18-at3g61830_ensembl_plant\"),\n #('promoter_name3',\"arf9-at4g23980_ensembl_plant\"),\n ('promoter_name1',\"anac032-at1g77450_ensembl_plant\")])\n #('promoter_name2',\"anr1-at2g14210_ensembl_plant\"),\n #('promoter_name5',\"dreb26-at1g21910_ensembl_plant\"),\n#('promoter_name6',\"nir1-at2g15620_ensembl_plant\")])\n #sort promoter names dictionary in alphabetical order\n promoter_names = {k: v for k,v in sorted(promoter_names.items(), key=lambda item: item[0])}\n #load seqrecords\n #create empty dictionary\n seqrecords = {}\n for k,v in promoter_names.items(): \n #add to new dictionary of seqrecords\n seqrecords[k] = gb_file_to_seqrecord(v)\n \n #make plot using dictionary\n #make_plot(seqrecords,promoter_names)\n make_plot(seqrecords,promoter_names,dirName)\n\n\n ",
"_____no_output_____"
],
[
"if __name__ == \"__main__\":\n import sys\n\n main(sys.argv[1:])",
"Directory ../../data/TRAMP/plots/SAB2021 already exists\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75c82ed81a95fc39acf5a57d9b86603bf3a754f | 954,673 | ipynb | Jupyter Notebook | twitter-sentiment-analysis-word2vec-doc2vec (2).ipynb | msharma043510/Twitter-Sentiment-Analysis | af8535dd80fc82ba8ce9234b1b0d46ce28163113 | [
"MIT"
] | null | null | null | twitter-sentiment-analysis-word2vec-doc2vec (2).ipynb | msharma043510/Twitter-Sentiment-Analysis | af8535dd80fc82ba8ce9234b1b0d46ce28163113 | [
"MIT"
] | null | null | null | twitter-sentiment-analysis-word2vec-doc2vec (2).ipynb | msharma043510/Twitter-Sentiment-Analysis | af8535dd80fc82ba8ce9234b1b0d46ce28163113 | [
"MIT"
] | null | null | null | 954,673 | 954,673 | 0.923832 | [
[
[
"# Let’s load the libraries\n\nimport re # for regular expressions \nimport nltk # for text manipulation \nimport string \nimport warnings \nimport numpy as np \nimport pandas as pd \nimport seaborn as sns \nimport matplotlib.pyplot as plt \n\npd.set_option(\"display.max_colwidth\", 200) \nwarnings.filterwarnings(\"ignore\", category=DeprecationWarning) \n\n%matplotlib inline",
"_____no_output_____"
],
[
"# Let’s read train and test datasets.\n\ntrain = pd.read_csv('../input/twitter-sentiment-analysis/train_E6oV3lV.csv') \ntest = pd.read_csv('../input/twitter-sentiment-analysis/test_tweets_anuFYb8.csv')",
"_____no_output_____"
]
],
[
[
"Text is a highly unstructured form of data: various types of noise are present in it, and the data is not readily analyzable without pre-processing. The entire process of cleaning and standardizing text, making it noise-free and ready for analysis, is known as text preprocessing. We will divide it into 2 parts:\n\n* Data Inspection\n* Data Cleaning",
"_____no_output_____"
]
],
[
[
"train.head()",
"_____no_output_____"
]
],
[
[
"#### Data Inspection\nLet’s check out a few **non** racist/sexist tweets.",
"_____no_output_____"
]
],
[
[
"train[train['label'] == 0].head(10)",
"_____no_output_____"
]
],
[
[
"Now check out a few racist/sexist tweets.",
"_____no_output_____"
]
],
[
[
"\n\ntrain[train['label'] == 1].head(10)\n",
"_____no_output_____"
]
],
[
[
"There are quite a few words and characters which are not really required. So, we will try to keep only those words which are important and add value.\n\nLet’s check the dimensions of the train and test datasets.",
"_____no_output_____"
]
],
[
[
"train.shape, test.shape",
"_____no_output_____"
]
],
[
[
"Train set has 31,962 tweets and test set has 17,197 tweets.\n\nLet’s have a glimpse at label-distribution in the train dataset.",
"_____no_output_____"
]
],
[
[
"train[\"label\"].value_counts()\n",
"_____no_output_____"
]
],
[
[
"In the train dataset, we have 2,242 (~7%) tweets labeled as racist or sexist, and 29,720 (~93%) tweets labeled as non racist/sexist. So, it is an imbalanced classification challenge.\n\nNow we will check the distribution of tweet lengths, in terms of characters, in both the train and test data.",
"_____no_output_____"
]
],
[
[
"plt.hist(train.tweet.str.len(), bins=20, label='train')\nplt.hist(test.tweet.str.len(), bins=20, label='test')\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"In any natural language processing task, cleaning raw text data is an important step. It helps in getting rid of unwanted words and characters, which in turn helps in obtaining better features. If we skip this step, there is a higher chance that we will be working with noisy and inconsistent data. The objective of this step is to remove noise that is less relevant to finding the sentiment of tweets, such as punctuation, special characters, numbers, and terms which don’t carry much weight in the context of the text.\n\nBefore we begin cleaning, let’s first combine the train and test datasets. Combining the datasets will make it convenient for us to preprocess the data. Later we will split it back into train and test data.",
"_____no_output_____"
]
],
[
[
"combi = train.append(test, ignore_index=True, sort=True)\ncombi.shape",
"_____no_output_____"
]
],
[
[
"Given below is a user-defined function to remove unwanted text patterns from the tweets.",
"_____no_output_____"
]
],
[
[
"def remove_pattern(input_txt, pattern):\n r = re.findall(pattern, input_txt)\n for i in r:\n input_txt = re.sub(i, '', input_txt)\n return input_txt",
"_____no_output_____"
]
],
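Since remove_pattern drives all of the handle cleaning below, a quick standalone check (a hypothetical example, not part of the original data) makes its behaviour concrete — each match found by re.findall is itself re-used as a substitution pattern:

```python
import re

def remove_pattern(input_txt, pattern):
    # every substring matching `pattern` is stripped out of the text
    r = re.findall(pattern, input_txt)
    for i in r:
        input_txt = re.sub(i, '', input_txt)
    return input_txt

cleaned = remove_pattern("@alice thanks for #followfriday @bob", r"@[\w]*")
print(repr(cleaned))  # ' thanks for #followfriday '
```

Note that the spaces around the removed handles survive; they are swept up later when the tweets are split into words.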
[
[
"**1. Removing Twitter Handles (@user)**\n\nLet’s create a new column tidy_tweet; it will contain the cleaned and processed tweets. Note that we have passed “@[\\w]*” as the pattern to the remove_pattern function. It is a regular expression which will pick up any word starting with ‘@’.",
"_____no_output_____"
]
],
[
[
"combi['tidy_tweet'] = np.vectorize(remove_pattern)(combi['tweet'], \"@[\\w]*\") \ncombi.head(10)",
"_____no_output_____"
]
],
[
[
"**2. Removing Punctuations, Numbers, and Special Characters**\n\nHere we will replace everything except characters and hashtags with spaces. The regular expression “[^a-zA-Z#]” means anything except alphabets and ‘#’.",
"_____no_output_____"
]
],
[
[
"combi.tidy_tweet = combi.tidy_tweet.str.replace(\"[^a-zA-Z#]\", \" \")\ncombi.head(10)",
"_____no_output_____"
]
],
[
[
"**3. Removing Short Words**\n\nWe have to be a little careful here in selecting the length of the words which we want to remove. So, I have decided to remove all the words having length 3 or less. For example, terms like “hmm”, “oh” are of very little use. It is better to get rid of them.",
"_____no_output_____"
]
],
[
[
"combi.tidy_tweet = combi.tidy_tweet.apply(lambda x: ' '.join([w for w in x.split() if len(w) > 3]))\ncombi.head(10)",
"_____no_output_____"
]
],
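The same length-3 filter can be sketched on a single hypothetical tweet to see exactly which words survive:

```python
# keep only words longer than 3 characters, mirroring the lambda above
tweet = "when a father is dysfunctional and is so selfish"
cleaned = ' '.join(w for w in tweet.split() if len(w) > 3)
print(cleaned)  # when father dysfunctional selfish
```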
[
[
"You can see the difference between the raw tweets and the cleaned tweets (tidy_tweet) quite clearly. Only the important words in the tweets have been retained and the noise (numbers, punctuations, and special characters) has been removed.",
"_____no_output_____"
],
[
"**4. Text Normalization**\n\nHere we will use nltk’s PorterStemmer() function to normalize the tweets. But before that we will have to tokenize the tweets. Tokens are individual terms or words, and tokenization is the process of splitting a string of text into tokens.",
"_____no_output_____"
]
],
[
[
"tokenized_tweet = combi.tidy_tweet.apply(lambda x: x.split())\ntokenized_tweet.head()",
"_____no_output_____"
],
[
"# Now we can normalize the tokenized tweets.\n\nfrom nltk.stem.porter import * \nstemmer = PorterStemmer() \ntokenized_tweet = tokenized_tweet.apply(lambda x: [stemmer.stem(i) for i in x]) # stemming\ntokenized_tweet.head()",
"_____no_output_____"
],
[
"# Now let’s stitch these tokens back together with a simple whitespace join.\n\nfor i in range(len(tokenized_tweet)):\n tokenized_tweet[i] = ' '.join(tokenized_tweet[i]) \ncombi['tidy_tweet'] = tokenized_tweet\ncombi.head(10)",
"_____no_output_____"
],
[
"all_words = ' '.join([text for text in combi['tidy_tweet']]) \n\nfrom wordcloud import WordCloud\nwordcloud = WordCloud(width=800, height=500, random_state=21, max_font_size=110).generate(all_words) \nplt.figure(figsize=(10, 7)) \nplt.imshow(wordcloud, interpolation=\"bilinear\") \nplt.axis('off')\nplt.show()",
"_____no_output_____"
]
],
[
[
"We can see most of the words are positive or neutral. Words like love, great, friend, life are the most frequent ones. It doesn’t give us any idea about the words associated with the racist/sexist tweets. Hence, we will plot separate wordclouds for both the classes (racist/sexist or not) in our train data.",
"_____no_output_____"
],
[
"**B) Words in non racist/sexist tweets**",
"_____no_output_____"
]
],
[
[
"normal_words =' '.join([text for text in combi['tidy_tweet'][combi['label'] == 0]]) \n\nwordcloud = WordCloud(width=800, height=500, random_state=21, max_font_size=110).generate(normal_words)\nplt.figure(figsize=(10, 7))\nplt.imshow(wordcloud, interpolation=\"bilinear\")\nplt.axis('off')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Most of the frequent words are compatible with the sentiment, i.e, non-racist/sexists tweets. Similarly, we will plot the word cloud for the other sentiment. Expect to see negative, racist, and sexist terms.",
"_____no_output_____"
],
[
"**C) Racist/Sexist Tweets**",
"_____no_output_____"
]
],
[
[
"negative_words = ' '.join([text for text in combi['tidy_tweet'][combi['label'] == 1]])\n\nwordcloud = WordCloud(width=800, height=500, random_state=21, max_font_size=110).generate(negative_words)\nplt.figure(figsize=(10, 7))\nplt.imshow(wordcloud, interpolation=\"bilinear\")\nplt.axis('off')\nplt.show()",
"_____no_output_____"
]
],
[
[
"As we can clearly see, most of the words have negative connotations. So, it seems we have pretty good text data to work on. Next we will look at the hashtags/trends in our Twitter data.",
"_____no_output_____"
]
],
[
[
"# function to collect hashtags \n\ndef hashtag_extract(x):\n hashtags = [] # Loop over the words in the tweet\n for i in x:\n ht = re.findall(r\"#(\\w+)\", i)\n hashtags.append(ht)\n return hashtags",
"_____no_output_____"
],
[
"# extracting hashtags from non racist/sexist tweets \n\nHT_regular = hashtag_extract(combi['tidy_tweet'][combi['label'] == 0]) ",
"_____no_output_____"
],
[
"# extracting hashtags from racist/sexist tweets\n\nHT_negative = hashtag_extract(combi['tidy_tweet'][combi['label'] == 1]) ",
"_____no_output_____"
],
[
"# unnesting list\n\nHT_regular = sum(HT_regular,[]) \nHT_negative = sum(HT_negative,[])",
"_____no_output_____"
]
],
[
[
"Now that we have prepared our lists of hashtags for both the sentiments, we can plot the top ‘n’ hashtags. So, first let’s check the hashtags in the non-racist/sexist tweets.",
"_____no_output_____"
],
[
"**Non-Racist/Sexist Tweets**",
"_____no_output_____"
]
],
[
[
"a = nltk.FreqDist(HT_regular)\nd = pd.DataFrame(\n {\n 'Hashtag': list(a.keys()),\n 'Count': list(a.values())\n }\n) ",
"_____no_output_____"
],
[
"# selecting top 20 most frequent hashtags\n\nd = d.nlargest(columns=\"Count\", n = 20)\nplt.figure(figsize=(20,5))\nax = sns.barplot(data=d, x= \"Hashtag\", y = \"Count\")\nax.set(ylabel = 'Count')\n# plt.xticks(rotation=90)\nplt.show()",
"_____no_output_____"
]
],
[
[
"All these hashtags are positive and it makes sense. I am expecting negative terms in the plot of the second list. Let’s check the most frequent hashtags appearing in the racist/sexist tweets.",
"_____no_output_____"
],
[
"**Racist/Sexist Tweets**",
"_____no_output_____"
]
],
[
[
"a = nltk.FreqDist(HT_negative)\nd = pd.DataFrame(\n {\n 'Hashtag': list(a.keys()),\n 'Count': list(a.values())\n }\n) ",
"_____no_output_____"
],
[
"# selecting top 20 most frequent hashtags\n\nd = d.nlargest(columns=\"Count\", n = 20)\nplt.figure(figsize=(20,5))\nax = sns.barplot(data=d, x= \"Hashtag\", y = \"Count\")\nax.set(ylabel = 'Count')\n# plt.xticks(rotation=90)\nplt.show()",
"_____no_output_____"
]
],
[
[
"As expected, most of the terms are negative with a few neutral terms as well. So, it’s not a bad idea to keep these hashtags in our data as they contain useful information. Next, we will try to extract features from the tokenized tweets.",
"_____no_output_____"
],
[
"#### Bag-of-Words Features\n\nTo analyse a preprocessed data, it needs to be converted into features. Depending upon the usage, text features can be constructed using assorted techniques – Bag of Words, TF-IDF, and Word Embeddings. Read on to understand these techniques in detail.",
"_____no_output_____"
]
],
[
[
"from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer \nimport gensim",
"_____no_output_____"
]
],
[
[
"Let’s start with the **Bag-of-Words** Features.\n\nConsider a Corpus C of D documents {d1,d2…..dD} and N unique tokens extracted out of the corpus C. The N tokens (words) will form a dictionary and the size of the bag-of-words matrix M will be given by D X N. Each row in the matrix M contains the frequency of tokens in document D(i).\n\nLet us understand this using a simple example.\n\nD1: He is a lazy boy. She is also lazy.\n\nD2: Smith is a lazy person.\n\nThe dictionary created would be a list of unique tokens in the corpus =[‘He’,’She’,’lazy’,’boy’,’Smith’,’person’]\n\nHere, D=2, N=6\n\nThe matrix M of size 2 X 6 will be represented as –\n\n| | He | She | lazy | boy | Smith | person |\n|----|----|-----|------|-----|-------|--------|\n| D1 | 1 | 1 | 2 | 1 | 0 | 0 |\n| D2 | 0 | 0 | 1 | 0 | 1 | 1 |\n\nNow the columns in the above matrix can be used as features to build a classification model.",
"_____no_output_____"
]
],
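Before handing things over to CountVectorizer, the toy D1/D2 example above can be reproduced in a few lines of plain Python (using the same six-token dictionary as the example, so this is an illustrative sketch rather than the notebook's pipeline):

```python
docs = [
    "He is a lazy boy. She is also lazy.",  # D1
    "Smith is a lazy person.",              # D2
]
dictionary = ["He", "She", "lazy", "boy", "Smith", "person"]

def bow_row(doc, vocab):
    # count how many times each dictionary token occurs in the document
    tokens = doc.replace(".", "").split()
    return [tokens.count(term) for term in vocab]

M = [bow_row(doc, dictionary) for doc in docs]
print(M)  # [[1, 1, 2, 1, 0, 0], [0, 0, 1, 0, 1, 1]]
```

Each row of M is one document and each column is the count of one dictionary token — exactly the D X N layout described above.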
[
[
"bow_vectorizer = CountVectorizer(max_df=0.90, min_df=2, max_features=1000, stop_words='english')\nbow = bow_vectorizer.fit_transform(combi['tidy_tweet'])\nbow.shape",
"_____no_output_____"
]
],
[
[
"#### TF-IDF Features\n\nThis is another frequency-based method, but it differs from the bag-of-words approach in that it takes into account not just the occurrence of a word in a single document (or tweet) but in the entire corpus.\n\nTF-IDF works by penalising common words by assigning them lower weights, while giving importance to words which are rare in the entire corpus but appear in good numbers in a few documents.\n\nLet’s have a look at the important terms related to TF-IDF:\n\n* TF = (Number of times term t appears in a document)/(Number of terms in the document)\n\n* IDF = log(N/n), where N is the number of documents and n is the number of documents a term t has appeared in.\n\n* TF-IDF = TF*IDF",
"_____no_output_____"
]
],
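The three formulas above can be checked by hand. Here is a minimal sketch on a tiny illustrative two-document corpus (not the tweet data); note how a term appearing in every document gets a weight of zero.

```python
# TF, IDF, and TF-IDF computed from scratch, following the formulas above.
import math

docs = [['he', 'is', 'lazy', 'lazy'],
        ['smith', 'is', 'lazy']]
N = len(docs)  # number of documents

def tf(term, doc):
    # (times term appears in the document) / (terms in the document)
    return doc.count(term) / len(doc)

def idf(term):
    # log(N / n), where n = number of documents containing the term
    n = sum(1 for doc in docs if term in doc)
    return math.log(N / n)

def tfidf(term, doc):
    return tf(term, doc) * idf(term)

# 'lazy' appears in both documents, so IDF = log(2/2) = 0 and TF-IDF = 0:
print(tfidf('lazy', docs[0]))                 # 0.0
# 'he' appears in only one document, so it gets a positive weight:
print(round(tfidf('he', docs[0]), 4))
```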
[
[
"tfidf_vectorizer = TfidfVectorizer(max_df=0.90, min_df=2, max_features=1000, stop_words='english')\ntfidf = tfidf_vectorizer.fit_transform(combi['tidy_tweet'])\ntfidf.shape",
"_____no_output_____"
]
],
[
[
"#### Word2Vec Features\n\nWord embeddings are the modern way of representing words as vectors. The objective of word embeddings is to redefine the high dimensional word features into low dimensional feature vectors by preserving the contextual similarity in the corpus. They are able to achieve tasks like **King -man +woman = Queen**, which is mind-blowing.\n\n\n\nThe advantages of using word embeddings over BOW or TF-IDF are:\n\n1. Dimensionality reduction - significant reduction in the no. of features required to build a model.\n\n1. It capture meanings of the words, semantic relationships and the different types of contexts they are used in.",
"_____no_output_____"
]
],
[
[
"**1. Word2Vec Embeddings**\n\nWord2Vec is not a single algorithm but a combination of two techniques – **CBOW (Continuous bag of words)** and **Skip-gram** model. Both of these are shallow neural networks which map word(s) to the target variable which is also a word(s). Both of these techniques learn weights which act as word vector representations.\n\nCBOW tends to predict the probability of a word given a context. A context may be a single adjacent word or a group of surrounding words. The Skip-gram model works in the reverse manner, it tries to predict the context for a given word.\n\nBelow is a diagrammatic representation of a 1-word context window Word2Vec model.\n\n\nThere are three layers: an input layer, a hidden layer, and an output layer.\n\nThe input layer and the output layer are both one-hot encoded vectors of size [1 X V], where V is the size of the vocabulary (no. of unique words in the corpus). The output layer applies a softmax so that the predicted probabilities over the vocabulary sum to 1. The weights learned by the model are then used as the word vectors.\n\nWe will go ahead with the Skip-gram model as it has the following advantages:\n\n* It can capture two semantics for a single word, i.e. it can have two vector representations of ‘apple’ – one for the company Apple and the other for the fruit.\n\n* Skip-gram with negative sub-sampling generally outperforms CBOW.\n\nWe will train a Word2Vec model on our data to obtain vector representations for all the unique words present in our corpus. There is one more option of using **pre-trained word vectors** instead of training our own model. Some of the freely available pre-trained vectors are:\n\n1. [Google News Word Vectors](https://code.google.com/archive/p/word2vec/)\n\n1. [Freebase names](https://code.google.com/archive/p/word2vec/)\n\n1. [DBPedia vectors (wiki2vec)](https://github.com/idio/wiki2vec#prebuilt-models)\n\nHowever, for now, we will train our own word vectors since the size of the pre-trained word vectors is generally huge.\n\nLet’s train a Word2Vec model on our corpus.",
"_____no_output_____"
]
],
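To make the Skip-gram idea concrete, here is a toy sketch of how (target, context) training pairs are generated from a sentence with a 1-word context window; gensim builds pairs like these internally before learning the weight vectors.

```python
# Generate Skip-gram (target -> context) pairs for a toy sentence.
sentence = ['she', 'is', 'also', 'lazy']
window = 1  # 1-word context window, as in the diagram above

pairs = []
for i, target in enumerate(sentence):
    # every word within `window` positions of the target is a context word
    for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
        if j != i:
            pairs.append((target, sentence[j]))

print(pairs)
# [('she', 'is'), ('is', 'she'), ('is', 'also'), ('also', 'is'),
#  ('also', 'lazy'), ('lazy', 'also')]
```

CBOW would flip the direction of each pair, predicting the target word from its surrounding context instead.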
[
[
"%%time\n\ntokenized_tweet = combi['tidy_tweet'].apply(lambda x: x.split()) # tokenizing \n\nmodel_w2v = gensim.models.Word2Vec(\n tokenized_tweet,\n size=200, # desired no. of features/independent variables\n window=5, # context window size\n min_count=2, # Ignores all words with total frequency lower than 2. \n sg = 1, # 1 for skip-gram model\n hs = 0,\n negative = 10, # for negative sampling\n workers= 32, # no.of cores\n seed = 34\n) \n\nmodel_w2v.train(tokenized_tweet, total_examples= len(combi['tidy_tweet']), epochs=20)",
"CPU times: user 2min 48s, sys: 895 ms, total: 2min 49s\nWall time: 1min 38s\n"
]
],
[
[
"Let’s play a bit with our Word2Vec model and see how it performs. We will specify a word and the model will pull out the most similar words from the corpus.",
"_____no_output_____"
]
],
[
[
"model_w2v.wv.most_similar(positive=\"dinner\")",
"_____no_output_____"
],
[
"model_w2v.wv.most_similar(positive=\"trump\")",
"_____no_output_____"
]
],
[
[
"From the above two examples, we can see that our word2vec model does a good job of finding the most similar words for a given word. But how is it able to do so? That’s because it has learned vectors for every unique word in our data and it uses cosine similarity to find out the most similar vectors (words).\n\nLet’s check the vector representation of any word from our corpus.",
"_____no_output_____"
]
],
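Cosine similarity itself is straightforward. Below is a minimal from-scratch version on two small illustrative vectors; most_similar() applies the same measure to the learned 200-dimensional word vectors.

```python
# Cosine similarity: dot product of the vectors divided by the
# product of their lengths (1 = same direction, 0 = orthogonal).
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))   # 1.0  (same direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))   # 0.0  (orthogonal)
```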
[
[
"model_w2v.wv['food']",
"_____no_output_____"
],
[
"len(model_w2v.wv['food']) # The length of the vector is 200",
"_____no_output_____"
]
],
[
[
"#### Preparing Vectors for Tweets\n\nSince our data contains tweets and not just words, we’ll have to figure out a way to use the word vectors from word2vec model to create vector representation for an entire tweet. There is a simple solution to this problem, we can simply take mean of all the word vectors present in the tweet. The length of the resultant vector will be the same, i.e. 200. We will repeat the same process for all the tweets in our data and obtain their vectors. Now we have 200 word2vec features for our data.\n\nWe will use the below function to create a vector for each tweet by taking the average of the vectors of the words present in the tweet.",
"_____no_output_____"
]
],
[
[
"def word_vector(tokens, size):\n vec = np.zeros(size).reshape((1, size))\n count = 0\n for word in tokens:\n try:\n vec += model_w2v.wv[word].reshape((1, size))\n count += 1.\n except KeyError: # handling the case where the token is not in vocabulary\n continue\n if count != 0:\n vec /= count\n return vec",
"_____no_output_____"
]
],
[
[
"Preparing word2vec feature set…",
"_____no_output_____"
]
],
[
[
"wordvec_arrays = np.zeros((len(tokenized_tweet), 200)) \nfor i in range(len(tokenized_tweet)):\n wordvec_arrays[i,:] = word_vector(tokenized_tweet[i], 200)\nwordvec_df = pd.DataFrame(wordvec_arrays)\nwordvec_df.shape",
"_____no_output_____"
]
],
[
[
"Now we have 200 new features, whereas in Bag of Words and TF-IDF we had 1000 features.",
"_____no_output_____"
],
[
"#### 2. Doc2Vec Embedding\n\nDoc2Vec model is an unsupervised algorithm to generate vectors for sentence/paragraphs/documents. This approach is an extension of the word2vec. The major difference between the two is that doc2vec provides an additional context which is unique for every document in the corpus. This additional context is nothing but another feature vector for the whole document. This document vector is trained along with the word vectors.\n\n\n\n\nLet’s load the required libraries.",
"_____no_output_____"
]
],
[
[
"from tqdm import tqdm \ntqdm.pandas(desc=\"progress-bar\") \nfrom gensim.models.doc2vec import LabeledSentence",
"_____no_output_____"
]
],
[
[
"To implement doc2vec, we have to **labelise** or **tag** each tokenised tweet with unique IDs. We can do so by using Gensim’s *LabeledSentence()* function.",
"_____no_output_____"
]
],
[
[
"def add_label(twt):\n output = []\n for i, s in zip(twt.index, twt):\n output.append(LabeledSentence(s, [\"tweet_\" + str(i)]))\n return output\n\nlabeled_tweets = add_label(tokenized_tweet) # label all the tweets",
"_____no_output_____"
]
],
[
[
"Let’s have a look at the result.",
"_____no_output_____"
]
],
[
[
"labeled_tweets[:6]",
"_____no_output_____"
]
],
[
[
"Now let’s train a **doc2vec** model.",
"_____no_output_____"
]
],
[
[
"%%time \nmodel_d2v = gensim.models.Doc2Vec(dm=1, # dm = 1 for ‘distributed memory’ model\n dm_mean=1, # dm_mean = 1 for using mean of the context word vectors\n vector_size=200, # no. of desired features\n window=5, # width of the context window \n negative=7, # if > 0 then negative sampling will be used\n min_count=5, # Ignores all words with total frequency lower than 5. \n workers=32, # no. of cores \n alpha=0.1, # learning rate \n seed = 23, # for reproducibility\n ) \n\nmodel_d2v.build_vocab([i for i in tqdm(labeled_tweets)])\n\nmodel_d2v.train(labeled_tweets, total_examples= len(combi['tidy_tweet']), epochs=15)",
"100%|██████████| 49159/49159 [00:00<00:00, 1287900.95it/s]\n"
]
],
[
[
"**Preparing doc2vec Feature Set**",
"_____no_output_____"
]
],
[
[
"docvec_arrays = np.zeros((len(tokenized_tweet), 200)) \nfor i in range(len(combi)):\n docvec_arrays[i,:] = model_d2v.docvecs[i].reshape((1,200)) \n\ndocvec_df = pd.DataFrame(docvec_arrays) \ndocvec_df.shape",
"_____no_output_____"
]
],
[
[
"We are now done with all the pre-modeling stages required to get the data in the proper form and shape. We will be building models on the datasets with different feature sets prepared in the earlier sections — Bag-of-Words, TF-IDF, word2vec vectors, and doc2vec vectors. We will use the following algorithms to build models:\n\n1. Logistic Regression\n1. Support Vector Machine\n1. RandomForest\n1. XGBoost\n\n**Evaluation Metric**\n\n**F1 score** is being used as the evaluation metric. It is the weighted average of Precision and Recall. Therefore, this score takes both false positives and false negatives into account. It is suitable for uneven class distribution problems.\n\nThe important components of F1 score are:\n\n1. True Positives (TP) - These are the correctly predicted positive values which means that the value of actual class is yes and the value of predicted class is also yes.\n1. True Negatives (TN) - These are the correctly predicted negative values which means that the value of actual class is no and value of predicted class is also no.\n1. False Positives (FP) – When actual class is no and predicted class is yes.\n1. False Negatives (FN) – When actual class is yes but predicted class is no.\n\n**Precision** = TP/(TP+FP)\n\n**Recall** = TP/(TP+FN)\n\n**F1 Score** = 2(Recall * Precision) / (Recall + Precision)",
"_____no_output_____"
],
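The Precision, Recall, and F1 formulas above can be verified with a quick sketch on hypothetical confusion-matrix counts:

```python
# Compute Precision, Recall, and F1 from illustrative TP/TN/FP/FN counts.
tp, tn, fp, fn = 40, 50, 10, 20

precision = tp / (tp + fp)   # 40 / 50 = 0.8
recall = tp / (tp + fn)      # 40 / 60 ≈ 0.667
f1 = 2 * (recall * precision) / (recall + precision)

print(round(f1, 4))   # 0.7273
```

Because F1 is a harmonic-style mean, it stays low unless both precision and recall are high, which is why it suits our skewed label distribution.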
[
"#### Logistic Regression\n\nLogistic Regression is a classification algorithm. It is used to predict a binary outcome (1 / 0, Yes / No, True / False) given a set of independent variables. You can also think of logistic regression as a special case of linear regression when the outcome variable is categorical, where we are using log of odds as the dependent variable. In simple words, it predicts the probability of occurrence of an event by fitting data to a logit function.\n\nThe following equation is used in Logistic Regression:\n\nlog(p / (1 – p)) = b0 + b1*x1 + b2*x2 + … + bk*xk\n\nIn a typical logistic model plot, the predicted probability never goes below 0 or above 1.\n\nRead this [article](https://www.analyticsvidhya.com/blog/2015/11/beginners-guide-on-logistic-regression-in-r/) to know more about Logistic Regression.\n\n",
"_____no_output_____"
]
],
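The logit (sigmoid) function the text refers to can be sketched in a few lines; it shows why the predicted probability always stays between 0 and 1.

```python
# The sigmoid squashes any real-valued score z into the interval (0, 1).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))     # 0.5 (the decision boundary)
print(sigmoid(10))    # close to 1
print(sigmoid(-10))   # close to 0
```

Note that in the modeling cells below we threshold the predicted probability at 0.3 rather than the default 0.5, to favour recall on the minority class.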
[
[
"from sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import f1_score",
"_____no_output_____"
]
],
[
[
"**Bag-of-Words Features**\n\nWe will first try to fit the logistic regression model on the Bag-of-Words (BoW) features.",
"_____no_output_____"
]
],
[
[
"# Extracting train and test BoW features \ntrain_bow = bow[:31962,:] \ntest_bow = bow[31962:,:] \n\n# splitting data into training and validation set \nxtrain_bow, xvalid_bow, ytrain, yvalid = train_test_split(train_bow, train['label'], random_state=42, test_size=0.3)\n\nlreg = LogisticRegression(solver='lbfgs') \n\n# training the model \nlreg.fit(xtrain_bow, ytrain) \nprediction = lreg.predict_proba(xvalid_bow) # predicting on the validation set \nprediction_int = prediction[:,1] >= 0.3 # if prediction is greater than or equal to 0.3 then 1 else 0 \nprediction_int = prediction_int.astype(np.int) \nf1_score(yvalid, prediction_int) # calculating f1 score for the validation set",
"_____no_output_____"
]
],
[
[
"Now let’s make predictions for the test dataset and create a submission file.",
"_____no_output_____"
]
],
[
[
"test_pred = lreg.predict_proba(test_bow)\ntest_pred_int = test_pred[:,1] >= 0.3\ntest_pred_int = test_pred_int.astype(np.int)\ntest['label'] = test_pred_int\nsubmission = test[['id','label']]\nsubmission.to_csv('sub_lreg_bow.csv', index=False) # writing data to a CSV file",
"_____no_output_____"
]
],
[
[
"**TF-IDF Features**\n\nWe’ll follow the same steps as above, but now for the TF-IDF feature set.",
"_____no_output_____"
]
],
[
[
"train_tfidf = tfidf[:31962,:]\ntest_tfidf = tfidf[31962:,:] \n\nxtrain_tfidf = train_tfidf[ytrain.index]\nxvalid_tfidf = train_tfidf[yvalid.index]\n\nlreg.fit(xtrain_tfidf, ytrain) \n\nprediction = lreg.predict_proba(xvalid_tfidf)\n\nprediction_int = prediction[:,1] >= 0.3\nprediction_int = prediction_int.astype(np.int) \n\nf1_score(yvalid, prediction_int) # calculating f1 score for the validation set",
"_____no_output_____"
]
],
[
[
"**Word2Vec Features**",
"_____no_output_____"
]
],
[
[
"train_w2v = wordvec_df.iloc[:31962,:]\ntest_w2v = wordvec_df.iloc[31962:,:]\n\nxtrain_w2v = train_w2v.iloc[ytrain.index,:]\nxvalid_w2v = train_w2v.iloc[yvalid.index,:]\n\nlreg.fit(xtrain_w2v, ytrain) \n\nprediction = lreg.predict_proba(xvalid_w2v)\n\nprediction_int = prediction[:,1] >= 0.3\nprediction_int = prediction_int.astype(np.int)\n\nf1_score(yvalid, prediction_int)",
"_____no_output_____"
]
],
[
[
"**Doc2Vec Features**",
"_____no_output_____"
]
],
[
[
"train_d2v = docvec_df.iloc[:31962,:]\ntest_d2v = docvec_df.iloc[31962:,:] \n\nxtrain_d2v = train_d2v.iloc[ytrain.index,:]\nxvalid_d2v = train_d2v.iloc[yvalid.index,:]\n\nlreg.fit(xtrain_d2v, ytrain) \n\nprediction = lreg.predict_proba(xvalid_d2v)\n\nprediction_int = prediction[:,1] >= 0.3\nprediction_int = prediction_int.astype(np.int)\n\nf1_score(yvalid, prediction_int)",
"_____no_output_____"
]
],
[
[
"Doc2Vec features do not seem to be capturing the right signals as the F1-score on validation set is quite low.",
"_____no_output_____"
],
[
"#### Support Vector Machine (SVM)\n\nSupport Vector Machine (SVM) is a supervised machine learning algorithm which can be used for both classification and regression challenges. However, it is mostly used in classification problems. In this algorithm, we plot each data item as a point in n-dimensional space (where n is the number of features you have) with the value of each feature being the value of a particular coordinate. Then, we perform classification by finding the hyper-plane that best differentiates the two classes.\n\nRefer to this [article](https://www.analyticsvidhya.com/blog/2017/09/understaing-support-vector-machine-example-code/) to learn more about SVM. Now we will implement SVM on our data using the scikit-learn library.",
"_____no_output_____"
]
],
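A linear SVM ultimately classifies by which side of the learned hyper-plane w·x + b = 0 a point falls on. Here is a toy sketch with a hand-picked (not learned) weight vector, just to illustrate the decision rule:

```python
# Classify a point by the sign of the hyper-plane decision function w.x + b.
def predict(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else 0

# Hypothetical separating hyper-plane x1 = x2 (w and b are made up here;
# in practice svm.SVC learns them from the training data).
w, b = [1.0, -1.0], 0.0
print(predict(w, b, [3.0, 1.0]))   # 1 (one side of the hyper-plane)
print(predict(w, b, [1.0, 3.0]))   # 0 (the other side)
```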
[
[
"from sklearn import svm",
"_____no_output_____"
]
],
[
[
"**Bag-of-Words Features**",
"_____no_output_____"
]
],
[
[
"svc = svm.SVC(kernel='linear', C=1, probability=True).fit(xtrain_bow, ytrain) \nprediction = svc.predict_proba(xvalid_bow) \nprediction_int = prediction[:,1] >= 0.3 \nprediction_int = prediction_int.astype(np.int) \nf1_score(yvalid, prediction_int)",
"_____no_output_____"
]
],
[
[
"Again let’s make predictions for the test dataset and create another submission file.",
"_____no_output_____"
]
],
[
[
"test_pred = svc.predict_proba(test_bow) \ntest_pred_int = test_pred[:,1] >= 0.3 \ntest_pred_int = test_pred_int.astype(np.int) \ntest['label'] = test_pred_int \nsubmission = test[['id','label']] \nsubmission.to_csv('sub_svm_bow.csv', index=False)",
"_____no_output_____"
]
],
[
[
"Here the validation score is slightly lower than the Logistic Regression score for bag-of-words features.",
"_____no_output_____"
],
[
"**TF-IDF Features**",
"_____no_output_____"
]
],
[
[
"svc = svm.SVC(kernel='linear', C=1, probability=True).fit(xtrain_tfidf, ytrain) \nprediction = svc.predict_proba(xvalid_tfidf) \nprediction_int = prediction[:,1] >= 0.3 \nprediction_int = prediction_int.astype(np.int) \nf1_score(yvalid, prediction_int)",
"_____no_output_____"
]
],
[
[
"**Word2Vec Features**",
"_____no_output_____"
]
],
[
[
"svc = svm.SVC(kernel='linear', C=1, probability=True).fit(xtrain_w2v, ytrain) \nprediction = svc.predict_proba(xvalid_w2v) \nprediction_int = prediction[:,1] >= 0.3 \nprediction_int = prediction_int.astype(np.int) \nf1_score(yvalid, prediction_int)",
"_____no_output_____"
]
],
[
[
"**Doc2Vec Features**",
"_____no_output_____"
]
],
[
[
"svc = svm.SVC(kernel='linear', C=1, probability=True).fit(xtrain_d2v, ytrain) \nprediction = svc.predict_proba(xvalid_d2v) \nprediction_int = prediction[:,1] >= 0.3 \nprediction_int = prediction_int.astype(np.int) \nf1_score(yvalid, prediction_int)",
"_____no_output_____"
]
],
[
[
"#### RandomForest\n\nRandom Forest is a versatile machine learning algorithm capable of performing both regression and classification tasks. It is a kind of ensemble learning method, where a few weak models combine to form a powerful model. In Random Forest, we grow multiple trees as opposed to a single decision tree. To classify a new object based on attributes, each tree gives a classification and we say the tree “votes” for that class. The forest chooses the classification having the most votes (over all the trees in the forest).\n\nIt works in the following manner. Each tree is planted & grown as follows:\n\n1. Assume the number of cases in the training set is N. Then, a sample of these N cases is taken at random but with replacement. This sample will be the training set for growing the tree.\n\n1. If there are M input variables, a number m (m<M) is specified such that at each node, m variables are selected at random out of the M. The best split on these m variables is used to split the node. The value of m is held constant while we grow the forest.\n\n1. Each tree is grown to the largest extent possible and there is no pruning.\n\n1. Predict new data by aggregating the predictions of the ntree trees (i.e., majority votes for classification, average for regression).\n\n",
"_____no_output_____"
]
],
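Step 4 above (aggregating the trees' votes) in miniature, with hypothetical per-tree predictions:

```python
# Majority vote over the class predictions of the individual trees.
from collections import Counter

def majority_vote(votes):
    # most_common(1) returns the (class, count) pair with the highest count
    return Counter(votes).most_common(1)[0][0]

tree_votes = [1, 0, 1, 1, 0]   # made-up predictions from 5 trees
print(majority_vote(tree_votes))   # 1
```

RandomForestClassifier performs this aggregation internally (averaging class probabilities rather than raw votes), so we never call it ourselves.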
[
[
"from sklearn.ensemble import RandomForestClassifier",
"_____no_output_____"
]
],
[
[
"**Bag-of-Words Features**\n\nFirst we will train our RandomForest model on the Bag-of-Words features and check its performance on validation set.",
"_____no_output_____"
]
],
[
[
"rf = RandomForestClassifier(n_estimators=400, random_state=11).fit(xtrain_bow, ytrain) \nprediction = rf.predict(xvalid_bow) \nf1_score(yvalid, prediction) # validation score",
"_____no_output_____"
]
],
[
[
"Let’s make predictions for the test dataset and create another submission file.",
"_____no_output_____"
]
],
[
[
"test_pred = rf.predict(test_bow)\ntest['label'] = test_pred\nsubmission = test[['id','label']]\nsubmission.to_csv('sub_rf_bow.csv', index=False)",
"_____no_output_____"
]
],
[
[
"**TF-IDF Features**",
"_____no_output_____"
]
],
[
[
"rf = RandomForestClassifier(n_estimators=400, random_state=11).fit(xtrain_tfidf, ytrain) \nprediction = rf.predict(xvalid_tfidf)\nf1_score(yvalid, prediction)",
"_____no_output_____"
]
],
[
[
"**Word2Vec Features**",
"_____no_output_____"
]
],
[
[
"rf = RandomForestClassifier(n_estimators=400, random_state=11).fit(xtrain_w2v, ytrain) \nprediction = rf.predict(xvalid_w2v)\nf1_score(yvalid, prediction)",
"_____no_output_____"
]
],
[
[
"**Doc2Vec Features**",
"_____no_output_____"
]
],
[
[
"rf = RandomForestClassifier(n_estimators=400, random_state=11).fit(xtrain_d2v, ytrain) \nprediction = rf.predict(xvalid_d2v)\nf1_score(yvalid, prediction)",
"_____no_output_____"
]
],
[
[
"#### XGBoost\n\nExtreme Gradient Boosting (xgboost) is an advanced implementation of the gradient boosting algorithm. It has both a linear model solver and tree learning algorithms. Its ability to do parallel computation on a single machine makes it extremely fast. It also has additional features for doing cross validation and finding important variables. There are many parameters which need to be controlled to optimize the model.\n\nSome key benefits of XGBoost are:\n\n1. **Regularization** - helps in reducing overfitting\n\n1. **Parallel Processing** - XGBoost implements parallel processing and is blazingly fast compared to GBM.\n\n1. **Handling Missing Values** - It has an in-built routine to handle missing values.\n\n1. **Built-in Cross-Validation** - allows the user to run a cross-validation at each iteration of the boosting process\n\nCheck out this wonderful guide on XGBoost parameter tuning.",
"_____no_output_____"
]
],
[
[
"from xgboost import XGBClassifier",
"_____no_output_____"
]
],
[
[
"**Bag-of-Words Features**",
"_____no_output_____"
]
],
[
[
"xgb_model = XGBClassifier(max_depth=6, n_estimators=1000).fit(xtrain_bow, ytrain)\nprediction = xgb_model.predict(xvalid_bow)\nf1_score(yvalid, prediction)",
"_____no_output_____"
],
[
"test_pred = xgb_model.predict(test_bow)\ntest['label'] = test_pred\nsubmission = test[['id','label']]\nsubmission.to_csv('sub_xgb_bow.csv', index=False)",
"_____no_output_____"
]
],
[
[
"**TF-IDF Features**",
"_____no_output_____"
]
],
[
[
"xgb = XGBClassifier(max_depth=6, n_estimators=1000).fit(xtrain_tfidf, ytrain) \nprediction = xgb.predict(xvalid_tfidf)\nf1_score(yvalid, prediction)",
"_____no_output_____"
]
],
[
[
"**Word2Vec Features**",
"_____no_output_____"
]
],
[
[
"xgb = XGBClassifier(max_depth=6, n_estimators=1000, nthread= 3).fit(xtrain_w2v, ytrain) \nprediction = xgb.predict(xvalid_w2v)\nf1_score(yvalid, prediction)",
"_____no_output_____"
]
],
[
[
"The XGBoost model on word2vec features has outperformed all the previous models.",
"_____no_output_____"
],
[
"**Doc2Vec Features**",
"_____no_output_____"
]
],
[
[
"xgb = XGBClassifier(max_depth=6, n_estimators=1000, nthread= 3).fit(xtrain_d2v, ytrain) \nprediction = xgb.predict(xvalid_d2v)\nf1_score(yvalid, prediction)",
"_____no_output_____"
],
[
"import xgboost as xgb",
"_____no_output_____"
]
],
[
[
"Here we will use DMatrices. A DMatrix can contain both the features and the target.",
"_____no_output_____"
]
],
[
[
"dtrain = xgb.DMatrix(xtrain_w2v, label=ytrain) \ndvalid = xgb.DMatrix(xvalid_w2v, label=yvalid) \ndtest = xgb.DMatrix(test_w2v)\n# Parameters that we are going to tune \nparams = {\n 'objective':'binary:logistic',\n 'max_depth':6,\n 'min_child_weight': 1,\n 'eta':.3,\n 'subsample': 1,\n 'colsample_bytree': 1\n }",
"/opt/conda/lib/python3.6/site-packages/xgboost/core.py:587: FutureWarning: Series.base is deprecated and will be removed in a future version\n if getattr(data, 'base', None) is not None and \\\n"
]
],
[
[
"We will prepare a custom evaluation metric to calculate F1 score.",
"_____no_output_____"
]
],
[
[
"def custom_eval(preds, dtrain):\n labels = dtrain.get_label().astype(np.int)\n preds = (preds >= 0.3).astype(np.int)\n return [('f1_score', f1_score(labels, preds))]",
"_____no_output_____"
]
],
[
[
"**General Approach for Parameter Tuning**\n\nWe will follow the steps below to tune the parameters.\n\n1. Choose a relatively high learning rate. Usually a learning rate of 0.3 is used at this stage.\n\n1. Tune tree-specific parameters such as max_depth, min_child_weight, subsample, colsample_bytree keeping the learning rate fixed.\n\n1. Tune the learning rate.\n\n1. Finally tune gamma to avoid overfitting.\n\n*Tuning max_depth and min_child_weight*",
"_____no_output_____"
]
],
[
[
"gridsearch_params = [\n (max_depth, min_child_weight)\n for max_depth in range(6,10)\n for min_child_weight in range(5,8)\n ]\n\nmax_f1 = 0. # initializing with 0 \n\nbest_params = None \n\nfor max_depth, min_child_weight in gridsearch_params:\n print(\"CV with max_depth={}, min_child_weight={}\".format(max_depth,min_child_weight))\n \n # Update our parameters\n params['max_depth'] = max_depth\n params['min_child_weight'] = min_child_weight\n\n # Cross-validation\n cv_results = xgb.cv(\n params,\n dtrain,\n feval= custom_eval,\n num_boost_round=200,\n maximize=True,\n seed=16,\n nfold=5,\n early_stopping_rounds=10\n )\n \n # Finding best F1 Score (inside the loop, so every combination is checked)\n mean_f1 = cv_results['test-f1_score-mean'].max()\n boost_rounds = cv_results['test-f1_score-mean'].idxmax()\n print(\"\\tF1 Score {} for {} rounds\".format(mean_f1, boost_rounds))\n\n if mean_f1 > max_f1:\n max_f1 = mean_f1\n best_params = (max_depth,min_child_weight)\n\nprint(\"Best params: {}, {}, F1 Score: {}\".format(best_params[0], best_params[1], max_f1))",
"CV with max_depth=6, min_child_weight=5\nCV with max_depth=6, min_child_weight=6\nCV with max_depth=6, min_child_weight=7\nCV with max_depth=7, min_child_weight=5\nCV with max_depth=7, min_child_weight=6\nCV with max_depth=7, min_child_weight=7\nCV with max_depth=8, min_child_weight=5\nCV with max_depth=8, min_child_weight=6\nCV with max_depth=8, min_child_weight=7\nCV with max_depth=9, min_child_weight=5\nCV with max_depth=9, min_child_weight=6\nCV with max_depth=9, min_child_weight=7\n\tF1 Score 0.6807784 for 105 rounds\nBest params: 9, 7, F1 Score: 0.6807784\n"
]
],
[
[
"Updating max_depth and min_child_weight parameters.",
"_____no_output_____"
]
],
[
[
"params['max_depth'] = 9 \nparams['min_child_weight'] = 7",
"_____no_output_____"
]
],
[
[
"Tuning *subsample* and *colsample_bytree*",
"_____no_output_____"
]
],
[
[
"gridsearch_params = [\n (subsample, colsample)\n for subsample in [i/10. for i in range(5,10)]\n for colsample in [i/10. for i in range(5,10)]\n]\n\nmax_f1 = 0. \nbest_params = None \n\nfor subsample, colsample in gridsearch_params:\n print(\"CV with subsample={}, colsample={}\".format(subsample,colsample))\n \n # Update our parameters (note: the correct key is 'colsample_bytree')\n params['colsample_bytree'] = colsample\n params['subsample'] = subsample\n \n cv_results = xgb.cv(\n params,\n dtrain,\n feval= custom_eval,\n num_boost_round=200,\n maximize=True,\n seed=16,\n nfold=5,\n early_stopping_rounds=10\n )\n \n # Finding best F1 Score\n mean_f1 = cv_results['test-f1_score-mean'].max()\n boost_rounds = cv_results['test-f1_score-mean'].idxmax()\n print(\"\\tF1 Score {} for {} rounds\".format(mean_f1, boost_rounds))\n \n if mean_f1 > max_f1:\n max_f1 = mean_f1\n best_params = (subsample, colsample) \n\nprint(\"Best params: {}, {}, F1 Score: {}\".format(best_params[0], best_params[1], max_f1))",
"CV with subsample=0.5, colsample=0.5\n\tF1 Score 0.6542134 for 48 rounds\nCV with subsample=0.5, colsample=0.6\n\tF1 Score 0.6542134 for 48 rounds\nCV with subsample=0.5, colsample=0.7\n\tF1 Score 0.6542134 for 48 rounds\nCV with subsample=0.5, colsample=0.8\n\tF1 Score 0.6542134 for 48 rounds\nCV with subsample=0.5, colsample=0.9\n\tF1 Score 0.6542134 for 48 rounds\nCV with subsample=0.6, colsample=0.5\n\tF1 Score 0.6554578 for 69 rounds\nCV with subsample=0.6, colsample=0.6\n\tF1 Score 0.6554578 for 69 rounds\nCV with subsample=0.6, colsample=0.7\n\tF1 Score 0.6554578 for 69 rounds\nCV with subsample=0.6, colsample=0.8\n\tF1 Score 0.6554578 for 69 rounds\nCV with subsample=0.6, colsample=0.9\n\tF1 Score 0.6554578 for 69 rounds\nCV with subsample=0.7, colsample=0.5\n\tF1 Score 0.6645196 for 68 rounds\nCV with subsample=0.7, colsample=0.6\n\tF1 Score 0.6645196 for 68 rounds\nCV with subsample=0.7, colsample=0.7\n\tF1 Score 0.6645196 for 68 rounds\nCV with subsample=0.7, colsample=0.8\n\tF1 Score 0.6645196 for 68 rounds\nCV with subsample=0.7, colsample=0.9\n\tF1 Score 0.6645196 for 68 rounds\nCV with subsample=0.8, colsample=0.5\n\tF1 Score 0.6720572 for 72 rounds\nCV with subsample=0.8, colsample=0.6\n\tF1 Score 0.6720572 for 72 rounds\nCV with subsample=0.8, colsample=0.7\n\tF1 Score 0.6720572 for 72 rounds\nCV with subsample=0.8, colsample=0.8\n\tF1 Score 0.6720572 for 72 rounds\nCV with subsample=0.8, colsample=0.9\n\tF1 Score 0.6720572 for 72 rounds\nCV with subsample=0.9, colsample=0.5\n\tF1 Score 0.6550186 for 37 rounds\nCV with subsample=0.9, colsample=0.6\n\tF1 Score 0.6550186 for 37 rounds\nCV with subsample=0.9, colsample=0.7\n\tF1 Score 0.6550186 for 37 rounds\nCV with subsample=0.9, colsample=0.8\n\tF1 Score 0.6550186 for 37 rounds\nCV with subsample=0.9, colsample=0.9\n\tF1 Score 0.6550186 for 37 rounds\nBest params: 0.8, 0.5, F1 Score: 0.6720572\n"
]
],
[
[
"Updating *subsample* and *colsample_bytree*",
"_____no_output_____"
]
],
[
[
"params['subsample'] = 0.9\nparams['colsample_bytree'] = 0.5",
"_____no_output_____"
]
],
[
[
"Now let’s tune the *learning rate*.",
"_____no_output_____"
]
],
[
[
"max_f1 = 0. \nbest_params = None \nfor eta in [.3, .2, .1, .05, .01, .005]:\n print(\"CV with eta={}\".format(eta))\n # Update ETA\n params['eta'] = eta\n\n # Run CV\n cv_results = xgb.cv(\n params,\n dtrain,\n feval= custom_eval,\n num_boost_round=1000,\n maximize=True,\n seed=16,\n nfold=5,\n early_stopping_rounds=20\n )\n\n # Finding best F1 Score\n mean_f1 = cv_results['test-f1_score-mean'].max()\n boost_rounds = cv_results['test-f1_score-mean'].idxmax()\n print(\"\\tF1 Score {} for {} rounds\".format(mean_f1, boost_rounds))\n \n if mean_f1 > max_f1:\n max_f1 = mean_f1\n best_params = eta \n \nprint(\"Best params: {}, F1 Score: {}\".format(best_params, max_f1))",
"CV with eta=0.3\n\tF1 Score 0.678087 for 97 rounds\nCV with eta=0.2\n\tF1 Score 0.6725521999999999 for 60 rounds\nCV with eta=0.1\n\tF1 Score 0.6811619999999999 for 149 rounds\nCV with eta=0.05\n\tF1 Score 0.6785198 for 243 rounds\nCV with eta=0.01\n\tF1 Score 0.1302024 for 0 rounds\nCV with eta=0.005\n\tF1 Score 0.1302024 for 0 rounds\nBest params: 0.1, F1 Score: 0.6811619999999999\n"
]
],
[
[
"Let’s have a look at the final list of tuned parameters.",
"_____no_output_____"
]
],
[
[
"params = {\n 'colsample_bytree': 0.5,\n 'eta': 0.1,\n 'max_depth': 9,\n 'min_child_weight': 7,\n 'objective': 'binary:logistic',\n 'subsample': 0.9\n}",
"_____no_output_____"
]
],
[
[
"Finally we can now use these tuned parameters in our xgboost model. We have used an early stopping of 10, which means that if the model’s performance doesn’t improve for 10 consecutive rounds, then the model training will be stopped.",
"_____no_output_____"
]
],
[
[
"xgb_model = xgb.train(\n params,\n dtrain,\n feval= custom_eval,\n num_boost_round= 1000,\n maximize=True,\n evals=[(dvalid, \"Validation\")],\n early_stopping_rounds=10\n )",
"[0]\tValidation-error:0.061633\tValidation-f1_score:0.133165\nMultiple eval metrics have been passed: 'Validation-f1_score' will be used for early stopping.\n\nWill train until Validation-f1_score hasn't improved in 10 rounds.\n[1]\tValidation-error:0.058505\tValidation-f1_score:0.133165\n[2]\tValidation-error:0.056836\tValidation-f1_score:0.133165\n[3]\tValidation-error:0.05694\tValidation-f1_score:0.133165\n[4]\tValidation-error:0.055585\tValidation-f1_score:0.133191\n[5]\tValidation-error:0.056315\tValidation-f1_score:0.354509\n[6]\tValidation-error:0.054959\tValidation-f1_score:0.448232\n[7]\tValidation-error:0.054646\tValidation-f1_score:0.515453\n[8]\tValidation-error:0.055272\tValidation-f1_score:0.555822\n[9]\tValidation-error:0.054125\tValidation-f1_score:0.559793\n[10]\tValidation-error:0.053812\tValidation-f1_score:0.586395\n[11]\tValidation-error:0.05256\tValidation-f1_score:0.598425\n[12]\tValidation-error:0.052665\tValidation-f1_score:0.605477\n[13]\tValidation-error:0.052977\tValidation-f1_score:0.610733\n[14]\tValidation-error:0.052977\tValidation-f1_score:0.608964\n[15]\tValidation-error:0.052769\tValidation-f1_score:0.621835\n[16]\tValidation-error:0.053082\tValidation-f1_score:0.619874\n[17]\tValidation-error:0.052977\tValidation-f1_score:0.616372\n[18]\tValidation-error:0.052456\tValidation-f1_score:0.623397\n[19]\tValidation-error:0.052039\tValidation-f1_score:0.622871\n[20]\tValidation-error:0.051935\tValidation-f1_score:0.624693\n[21]\tValidation-error:0.052039\tValidation-f1_score:0.621423\n[22]\tValidation-error:0.051622\tValidation-f1_score:0.622222\n[23]\tValidation-error:0.051622\tValidation-f1_score:0.630025\n[24]\tValidation-error:0.051622\tValidation-f1_score:0.628524\n[25]\tValidation-error:0.051205\tValidation-f1_score:0.628003\n[26]\tValidation-error:0.050475\tValidation-f1_score:0.624066\n[27]\tValidation-error:0.05037\tValidation-f1_score:0.629232\n[28]\tValidation-error:0.050475\tValidation-f1_score:0.632365\n[29]\tValidation-er
ror:0.049432\tValidation-f1_score:0.631229\n[30]\tValidation-error:0.049327\tValidation-f1_score:0.62396\n[31]\tValidation-error:0.049432\tValidation-f1_score:0.629475\n[32]\tValidation-error:0.049536\tValidation-f1_score:0.634711\n[33]\tValidation-error:0.049432\tValidation-f1_score:0.636513\n[34]\tValidation-error:0.04891\tValidation-f1_score:0.638683\n[35]\tValidation-error:0.048493\tValidation-f1_score:0.637562\n[36]\tValidation-error:0.049119\tValidation-f1_score:0.639209\n[37]\tValidation-error:0.04818\tValidation-f1_score:0.639803\n[38]\tValidation-error:0.048389\tValidation-f1_score:0.638158\n[39]\tValidation-error:0.048076\tValidation-f1_score:0.639869\n[40]\tValidation-error:0.047867\tValidation-f1_score:0.647446\n[41]\tValidation-error:0.047763\tValidation-f1_score:0.642504\n[42]\tValidation-error:0.04745\tValidation-f1_score:0.64532\n[43]\tValidation-error:0.047972\tValidation-f1_score:0.646481\n[44]\tValidation-error:0.047972\tValidation-f1_score:0.645953\n[45]\tValidation-error:0.04818\tValidation-f1_score:0.647635\n[46]\tValidation-error:0.047972\tValidation-f1_score:0.649837\n[47]\tValidation-error:0.047867\tValidation-f1_score:0.648208\n[48]\tValidation-error:0.047242\tValidation-f1_score:0.647681\n[49]\tValidation-error:0.047033\tValidation-f1_score:0.652529\n[50]\tValidation-error:0.047137\tValidation-f1_score:0.648072\n[51]\tValidation-error:0.047033\tValidation-f1_score:0.647588\n[52]\tValidation-error:0.046616\tValidation-f1_score:0.64918\n[53]\tValidation-error:0.047242\tValidation-f1_score:0.651961\n[54]\tValidation-error:0.047242\tValidation-f1_score:0.654694\n[55]\tValidation-error:0.047033\tValidation-f1_score:0.658517\n[56]\tValidation-error:0.04672\tValidation-f1_score:0.656887\n[57]\tValidation-error:0.046094\tValidation-f1_score:0.660178\n[58]\tValidation-error:0.046303\tValidation-f1_score:0.661277\n[59]\tValidation-error:0.046824\tValidation-f1_score:0.660729\n[60]\tValidation-error:0.046407\tValidation-f1_score:0.663438\n[61]\tValid
ation-error:0.046512\tValidation-f1_score:0.665045\n[62]\tValidation-error:0.046616\tValidation-f1_score:0.664506\n[63]\tValidation-error:0.046407\tValidation-f1_score:0.663961\n[64]\tValidation-error:0.045886\tValidation-f1_score:0.662348\n[65]\tValidation-error:0.045573\tValidation-f1_score:0.663968\n[66]\tValidation-error:0.045469\tValidation-f1_score:0.666126\n[67]\tValidation-error:0.045782\tValidation-f1_score:0.66775\n[68]\tValidation-error:0.045469\tValidation-f1_score:0.666667\n[69]\tValidation-error:0.045886\tValidation-f1_score:0.660194\n[70]\tValidation-error:0.045782\tValidation-f1_score:0.662903\n[71]\tValidation-error:0.045677\tValidation-f1_score:0.660729\n[72]\tValidation-error:0.044947\tValidation-f1_score:0.666128\n[73]\tValidation-error:0.044322\tValidation-f1_score:0.668285\n[74]\tValidation-error:0.044009\tValidation-f1_score:0.666128\n[75]\tValidation-error:0.044009\tValidation-f1_score:0.670445\n[76]\tValidation-error:0.044113\tValidation-f1_score:0.670999\n[77]\tValidation-error:0.04453\tValidation-f1_score:0.669911\n[78]\tValidation-error:0.04453\tValidation-f1_score:0.665587\n[79]\tValidation-error:0.044217\tValidation-f1_score:0.670455\n[80]\tValidation-error:0.044009\tValidation-f1_score:0.669374\n[81]\tValidation-error:0.043696\tValidation-f1_score:0.669374\n[82]\tValidation-error:0.044217\tValidation-f1_score:0.668842\n[83]\tValidation-error:0.0438\tValidation-f1_score:0.669935\n[84]\tValidation-error:0.044009\tValidation-f1_score:0.670482\n[85]\tValidation-error:0.043592\tValidation-f1_score:0.669935\n[86]\tValidation-error:0.043487\tValidation-f1_score:0.666667\nStopping. Best iteration:\n[76]\tValidation-error:0.044113\tValidation-f1_score:0.670999\n\n"
],
[
"test_pred = xgb_model.predict(dtest)\n# np.int was removed in NumPy 1.20+; use the builtin int instead\ntest['label'] = (test_pred >= 0.3).astype(int)\nsubmission = test[['id','label']]\nsubmission.to_csv('sub_xgb_w2v_finetuned.csv', index=False)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e75c84525adbdb45efe78012363d564d4e2799fe | 256,764 | ipynb | Jupyter Notebook | src/xr_finger_model.ipynb | rajkumargithub/denset.mura | 8ed1da097240eb1be35d92b7e8a4c50e05a6d5a1 | [
"MIT"
] | 2 | 2019-03-08T15:36:04.000Z | 2020-11-24T11:13:40.000Z | src/xr_finger_model.ipynb | rajkumargithub/denset.mura | 8ed1da097240eb1be35d92b7e8a4c50e05a6d5a1 | [
"MIT"
] | null | null | null | src/xr_finger_model.ipynb | rajkumargithub/denset.mura | 8ed1da097240eb1be35d92b7e8a4c50e05a6d5a1 | [
"MIT"
] | 5 | 2019-05-15T22:36:45.000Z | 2020-12-23T11:27:51.000Z | 317.384425 | 49,276 | 0.925461 | [
[
[
"# Abnormality Detection in Musculoskeletal Radiographs",
"_____no_output_____"
],
[
"The objective is to build a machine learning model that can detect an abnormality in X-ray radiographs. Such models can help provide healthcare access in parts of the world where access to skilled radiologists is limited. A study on the Global Burden of Disease and the worldwide impact of all diseases found that “musculoskeletal conditions affect more than 1.7 billion people worldwide. They are the 2nd greatest cause of disabilities, and have the 4th greatest impact on the overall health of the world population when considering both death and disabilities”. (www.usbji.org, n.d.).\n\nThis project attempts to implement a deep neural network using DenseNet169, inspired by the Stanford paper Rajpurkar et al., 2018.",
"_____no_output_____"
],
[
"## XR_FINGER Study Type",
"_____no_output_____"
],
[
"## Phase 3: Data Preprocessing",
"_____no_output_____"
],
[
"As per the paper, I normalized each image to have the same mean & std as the images in the ImageNet training set. The paper scales the variable-sized images to 320 x 320, but I have chosen to scale to 224 x 224. I then augmented the data during training by applying random lateral inversions and rotations of up to 30 degrees using Keras' ImageDataGenerator.",
"_____no_output_____"
]
],
[
[
"from keras.applications.densenet import DenseNet169, DenseNet121, preprocess_input\nfrom keras.preprocessing.image import ImageDataGenerator, load_img, image\nfrom keras.models import Sequential, Model, load_model\nfrom keras.layers import Conv2D, MaxPool2D\nfrom keras.layers import Activation, Dropout, Flatten, Dense\nfrom keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint, Callback\nfrom keras import regularizers\nimport pandas as pd\nfrom tqdm import tqdm\nimport os\nimport numpy as np\nimport random\nfrom keras.optimizers import Adam\nimport keras.backend as K\nimport cv2\nimport matplotlib.pyplot as plt",
"Using TensorFlow backend.\n"
]
],
[
[
"### 3.1 Data preprocessing",
"_____no_output_____"
]
],
[
[
"#Utility function to find the list of files in a directory excluding the hidden files.\ndef listdir_nohidden(path):\n for f in os.listdir(path):\n if not f.startswith('.'):\n yield f",
"_____no_output_____"
]
],
[
[
"### 3.1.1 Creating a CSV file containing the image paths & labels",
"_____no_output_____"
]
],
[
[
"def create_images_metadata_csv(category,study_types):\n \"\"\"\n This function creates a csv file containing the path of images, label.\n \"\"\"\n image_data = {}\n study_label = {'positive': 1, 'negative': 0}\n #study_types = ['XR_ELBOW','XR_FINGER','XR_FOREARM','XR_HAND','XR_HUMERUS','XR_SHOULDER','XR_WRIST']\n #study_types = ['XR_ELBOW']\n i = 0\n image_data[category] = pd.DataFrame(columns=['Path','Count', 'Label'])\n for study_type in study_types: # Iterate throught every study types\n DATA_DIR = 'data/MURA-v1.1/%s/%s/' % (category, study_type)\n patients = list(os.walk(DATA_DIR))[0][1] # list of patient folder names\n for patient in tqdm(patients): # for each patient folder\n for study in os.listdir(DATA_DIR + patient): # for each study in that patient folder\n if(study != '.DS_Store'):\n label = study_label[study.split('_')[1]] # get label 0 or 1\n path = DATA_DIR + patient + '/' + study + '/' # path to this study\n for j in range(len(list(listdir_nohidden(path)))):\n image_path = path + 'image%s.png' % (j + 1)\n image_data[category].loc[i] = [image_path,1, label] # add new row\n i += 1\n image_data[category].to_csv(category+\"_image_data.csv\",index = None, header=False)",
"_____no_output_____"
],
[
"#New function create image array by study level\ndef getImagesInArrayNew(train_dataframe):\n images = []\n labels = []\n for i, data in tqdm(train_dataframe.iterrows()):\n img = cv2.imread(data['Path'])\n# #random rotation\n# angle = random.randint(-30,30)\n# M = cv2.getRotationMatrix2D((img_width/2,img_height/2),angle,1)\n# img = cv2.warpAffine(img,M,(img_width,img_height))\n #resize\n img = cv2.resize(img,(img_width,img_height)) \n img = img[...,::-1].astype(np.float32)\n images.append(img)\n labels.append(data['Label'])\n images = np.asarray(images).astype('float32') \n #normalization\n mean = np.mean(images[:, :, :])\n std = np.std(images[:, :, :])\n images[:, :, :] = (images[:, :, :] - mean) / std\n labels = np.asarray(labels)\n return {'images': images, 'labels': labels}",
"_____no_output_____"
]
],
[
[
"#### 3.1.1.1 Variables initialization",
"_____no_output_____"
]
],
[
[
"img_width, img_height = 224, 224\n#Keras ImageDataGenerator to load, transform the images of the dataset\nBASE_DATA_DIR = 'data/'\nIMG_DATA_DIR = 'MURA-v1.1/'",
"_____no_output_____"
]
],
[
[
"### 3.1.2 XR_FINGER ImageDataGenerators",
"_____no_output_____"
],
[
"I am going to build a model for every study type and ensemble them. Hence I am preparing the data per study type for the model to be trained on.",
"_____no_output_____"
]
],
[
[
"train_data_dir = BASE_DATA_DIR + IMG_DATA_DIR + 'train/XR_FINGER'\nvalid_data_dir = BASE_DATA_DIR + IMG_DATA_DIR + 'valid/XR_FINGER'\n\ntrain_datagen = ImageDataGenerator(\n rotation_range=30,\n horizontal_flip=True\n)\n\ntest_datagen = ImageDataGenerator(\n rotation_range=30,\n horizontal_flip=True\n\n)\n\nstudy_types = ['XR_FINGER']\n\ncreate_images_metadata_csv('train',study_types)\ncreate_images_metadata_csv('valid',study_types)\n\nvalid_image_df = pd.read_csv('valid_image_data.csv', names=['Path','Count', 'Label'])\ntrain_image_df = pd.read_csv('train_image_data.csv', names=['Path', 'Count','Label'])\n\ndd={}\n\ndd['train'] = train_image_df\ndd['valid'] = valid_image_df\n\nvalid_dict = getImagesInArrayNew(valid_image_df)\ntrain_dict = getImagesInArrayNew(train_image_df)\n\ntrain_datagen.fit(train_dict['images'],augment=True)\ntest_datagen.fit(valid_dict['images'],augment=True)\n\nvalidation_generator = test_datagen.flow(\n x=valid_dict['images'],\n y=valid_dict['labels'],\n batch_size = 1\n)\n\ntrain_generator = train_datagen.flow(\n x=train_dict['images'],\n y=train_dict['labels']\n)",
"100%|██████████| 1865/1865 [00:13<00:00, 133.99it/s]\n100%|██████████| 166/166 [00:01<00:00, 148.46it/s]\n461it [00:01, 271.51it/s]\n5106it [00:18, 269.46it/s]\n"
]
],
[
[
"### 3.2 Building a model",
"_____no_output_____"
],
[
"As per the MURA paper, I replaced the fully connected layer with one that has a single output and applied a sigmoid nonlinearity after it. The paper optimizes a weighted binary cross-entropy loss:\n\n$L(X, y) = -w_{T,1}\\, y \\log p(Y = 1|X) - w_{T,0}\\, (1 - y) \\log p(Y = 0|X)$\n\nwhere $p(Y = 1|X)$ is the probability that the network assigns to the abnormal label, $w_{T,1} = |N_T| / (|A_T| + |N_T|)$ and $w_{T,0} = |A_T| / (|A_T| + |N_T|)$, with $|A_T|$ and $|N_T|$ the numbers of abnormal and normal images of study type T in the training set, respectively.\n\nI chose to use the default (unweighted) binary cross-entropy instead. The network is trained with Adam using default parameters, a batch size of 8, and an initial learning rate of 0.0001 that is decayed by a factor of 10 each time the validation loss plateaus after an epoch.",
"_____no_output_____"
],
[
"### 3.2.1 Model paramaters",
"_____no_output_____"
]
],
[
[
"#model parameters for training\n#K.set_learning_phase(1)\nnb_train_samples = len(train_dict['images'])\nnb_validation_samples = len(valid_dict['images'])\nepochs = 10\nbatch_size = 8\nsteps_per_epoch = nb_train_samples//batch_size\nprint(steps_per_epoch)\nn_classes = 1",
"638\n"
],
[
"def build_model():\n base_model = DenseNet169(input_shape=(None, None,3),\n weights='imagenet',\n include_top=False,\n pooling='avg')\n# i = 0\n# total_layers = len(base_model.layers)\n# for layer in base_model.layers:\n# if(i <= total_layers//2):\n# layer.trainable = False\n# i = i+1\n\n x = base_model.output\n predictions = Dense(n_classes,activation='sigmoid')(x)\n model = Model(inputs=base_model.input, outputs=predictions)\n return model",
"_____no_output_____"
],
[
"model = build_model()",
"_____no_output_____"
],
[
"#Compiling the model\nmodel.compile(loss=\"binary_crossentropy\", optimizer='adam', metrics=['acc', 'mse'])",
"_____no_output_____"
],
[
"#callbacks for early stopping incase of reduced learning rate, loss unimprovement\nearly_stop = EarlyStopping(monitor='val_loss', patience=8, verbose=1, min_delta=1e-4)\nreduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=1, verbose=1, min_lr=0.0001)\ncallbacks_list = [early_stop, reduce_lr]",
"_____no_output_____"
]
],
[
[
"### 3.2.2 Training the Model",
"_____no_output_____"
]
],
[
[
"#train the module\nmodel_history = model.fit_generator(\n train_generator,\n epochs=epochs,\n workers=0,\n use_multiprocessing=False, \n steps_per_epoch = nb_train_samples//batch_size,\n validation_data=validation_generator,\n validation_steps=nb_validation_samples //batch_size,\n callbacks=callbacks_list\n)",
"Epoch 1/10\n638/638 [==============================] - 413s 647ms/step - loss: 0.5846 - acc: 0.6858 - mean_squared_error: 0.1984 - val_loss: 0.7364 - val_acc: 0.6140 - val_mean_squared_error: 0.2324\nEpoch 2/10\n638/638 [==============================] - 359s 562ms/step - loss: 0.5443 - acc: 0.7151 - mean_squared_error: 0.1833 - val_loss: 0.7983 - val_acc: 0.5263 - val_mean_squared_error: 0.2928\n\nEpoch 00002: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.\nEpoch 3/10\n638/638 [==============================] - 357s 560ms/step - loss: 0.4824 - acc: 0.7617 - mean_squared_error: 0.1590 - val_loss: 0.5056 - val_acc: 0.7719 - val_mean_squared_error: 0.1696\nEpoch 4/10\n638/638 [==============================] - 357s 560ms/step - loss: 0.4579 - acc: 0.7773 - mean_squared_error: 0.1493 - val_loss: 0.5936 - val_acc: 0.7018 - val_mean_squared_error: 0.2033\n\nEpoch 00004: ReduceLROnPlateau reducing learning rate to 0.0001.\nEpoch 5/10\n638/638 [==============================] - 358s 561ms/step - loss: 0.4337 - acc: 0.7992 - mean_squared_error: 0.1398 - val_loss: 0.6060 - val_acc: 0.7544 - val_mean_squared_error: 0.1956\nEpoch 6/10\n638/638 [==============================] - 358s 561ms/step - loss: 0.4179 - acc: 0.8099 - mean_squared_error: 0.1335 - val_loss: 0.5128 - val_acc: 0.7368 - val_mean_squared_error: 0.1717\nEpoch 7/10\n638/638 [==============================] - 357s 560ms/step - loss: 0.3940 - acc: 0.8223 - mean_squared_error: 0.1249 - val_loss: 0.5659 - val_acc: 0.7368 - val_mean_squared_error: 0.1895\nEpoch 8/10\n638/638 [==============================] - 358s 561ms/step - loss: 0.3737 - acc: 0.8368 - mean_squared_error: 0.1172 - val_loss: 0.8138 - val_acc: 0.6316 - val_mean_squared_error: 0.2559\nEpoch 9/10\n638/638 [==============================] - 358s 561ms/step - loss: 0.3455 - acc: 0.8525 - mean_squared_error: 0.1072 - val_loss: 0.4797 - val_acc: 0.8070 - val_mean_squared_error: 0.1529\nEpoch 10/10\n638/638 
[==============================] - 357s 560ms/step - loss: 0.3262 - acc: 0.8622 - mean_squared_error: 0.1005 - val_loss: 0.5555 - val_acc: 0.7895 - val_mean_squared_error: 0.1637\n"
],
[
"model.save(\"densenet_mura_rs_v3_xr_finger.h5\")",
"_____no_output_____"
]
],
[
[
"### 3.2.3 Visualizing the model",
"_____no_output_____"
]
],
[
[
"#There was a bug in keras to use pydot in the vis_utils class. In order to fix the bug, i had to comment out line#55 in vis_utils.py file and reload the module\n#~/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/utils\nfrom keras.utils import plot_model \nfrom keras.utils.vis_utils import *\nimport keras\nimport importlib\nimportlib.reload(keras.utils.vis_utils)\nimport pydot\nplot_model(model, to_file='images/densenet_archi_xr_finger_v3.png', show_shapes=True)\n",
"_____no_output_____"
]
],
[
[
"### 3.3 Performance Evaluation",
"_____no_output_____"
]
],
[
[
"#Now that we have trained our model, we can see the metrics during the training process\nplt.figure(0)\nplt.plot(model_history.history['acc'],'r')\nplt.plot(model_history.history['val_acc'],'g')\nplt.xticks(np.arange(0, 5, 1))\nplt.rcParams['figure.figsize'] = (8, 6)\nplt.xlabel(\"Num of Epochs\")\nplt.ylabel(\"Accuracy\")\nplt.title(\"Training Accuracy vs Validation Accuracy\")\nplt.legend(['train','validation'])\n\nplt.figure(1)\nplt.plot(model_history.history['loss'],'r')\nplt.plot(model_history.history['val_loss'],'g')\nplt.xticks(np.arange(0, 5, 1))\nplt.rcParams['figure.figsize'] = (8, 6)\nplt.xlabel(\"Num of Epochs\")\nplt.ylabel(\"Loss\")\nplt.title(\"Training Loss vs Validation Loss\")\nplt.legend(['train','validation'])\n\nplt.figure(2)\nplt.plot(model_history.history['mean_squared_error'],'r')\nplt.plot(model_history.history['val_mean_squared_error'],'g')\nplt.xticks(np.arange(0, 5, 1))\nplt.rcParams['figure.figsize'] = (8, 6)\nplt.xlabel(\"Num of Epochs\")\nplt.ylabel(\"MSE\")\nplt.title(\"Training MSE vs Validation MSE\")\nplt.legend(['train','validation'])\n\nplt.show()",
"_____no_output_____"
],
[
"#Now we evaluate the trained model with the validation dataset and make a prediction. \n#The class predicted will be the class with maximum value for each image.\nev = model.evaluate_generator(validation_generator, steps=nb_validation_samples, workers=0, use_multiprocessing=False)\nev[1]\n",
"_____no_output_____"
],
[
"#pred = model.predict_generator(validation_generator, steps=1, batch_size=1, use_multiprocessing=False, max_queue_size=25, verbose=1)\nvalidation_generator.reset()\n#pred = model.predict_generator(validation_generator,steps=nb_validation_samples)\npred_batch = model.predict_on_batch(valid_dict['images'])",
"_____no_output_____"
],
[
"predictions = []\nfor p in pred_batch:\n if(p > 0.5):\n predictions+=[1]\n else:\n predictions+=[0]",
"_____no_output_____"
],
[
"error = np.sum(np.not_equal(predictions, valid_dict['labels'])) / valid_dict['labels'].shape[0] \npred = predictions",
"_____no_output_____"
],
[
"def evaluate_error(model):\n    pred = model.predict_generator(validation_generator, steps=nb_validation_samples, workers=0, use_multiprocessing=False, verbose=1)\n    # The model has a single sigmoid output, so threshold at 0.5 instead of argmax\n    # (argmax over axis 1 of an (N, 1) array would always return 0).\n    predictions = (pred >= 0.5).astype(int).ravel()  # same shape as the label vector\n    error = np.sum(np.not_equal(predictions, valid_dict['labels'])) / valid_dict['labels'].shape[0]\n    return error, predictions\n\nerror, pred = evaluate_error(model)\n",
"465/465 [==============================] - 14s 30ms/step\n"
],
[
"evaluate_error(model)",
"_____no_output_____"
],
[
"print('Confusion Matrix')\nfrom sklearn.metrics import confusion_matrix, classification_report, cohen_kappa_score\nimport seaborn as sn\ncm = confusion_matrix( pred ,valid_dict['labels'])\nplt.figure(figsize = (30,20))\nsn.set(font_scale=1.4) #for label size\nsn.heatmap(cm, annot=True, annot_kws={\"size\": 20},cmap=\"YlGnBu\") # font size\nplt.show()",
"Confusion Matrix\n"
],
[
"print()\nprint('Classification Report')\nprint(classification_report(valid_dict['labels'], pred, target_names=[\"0\",\"1\"]))",
"\nClassification Report\n precision recall f1-score support\n\n 0 0.76 0.78 0.77 214\n 1 0.80 0.79 0.79 247\n\navg / total 0.78 0.78 0.78 461\n\n"
],
[
"from sklearn.metrics import confusion_matrix, classification_report, cohen_kappa_score\ncohen_kappa_score(valid_dict['labels'], pred)",
"_____no_output_____"
]
],
[
[
"### ROC Curve",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import roc_curve\nfpr_keras, tpr_keras, thresholds_keras = roc_curve(valid_dict['labels'], pred_batch)",
"_____no_output_____"
],
[
"from sklearn.metrics import auc\nauc_keras = auc(fpr_keras, tpr_keras)",
"_____no_output_____"
],
[
"plt.figure(1)\nplt.plot([0, 1], [0, 1], 'k--')\nplt.plot(fpr_keras, tpr_keras, label='Keras (area = {:.3f})'.format(auc_keras))\nplt.xlabel('False positive rate')\nplt.ylabel('True positive rate')\nplt.title('ROC curve')\nplt.legend(loc='best')\nplt.show()\n\nplt.figure(2)\nplt.xlim(0.0, 0.2)\nplt.ylim(0.65, 0.9)\nplt.plot([0, 1], [0, 1], 'k--')\nplt.plot(fpr_keras, tpr_keras, label='Keras (area = {:.3f})'.format(auc_keras))\nplt.xlabel('False positive rate')\nplt.ylabel('True positive rate')\nplt.title('ROC curve (zoomed in at top left)')\nplt.legend(loc='best')\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e75c87f8bcc88c89712f07f46be886c8816147f9 | 168,893 | ipynb | Jupyter Notebook | Software-Analysis.ipynb | softvis-research/security-analysis | 461f7507103b61a42b7b366dd1ac7b268710bba8 | [
"Apache-2.0"
] | null | null | null | Software-Analysis.ipynb | softvis-research/security-analysis | 461f7507103b61a42b7b366dd1ac7b268710bba8 | [
"Apache-2.0"
] | null | null | null | Software-Analysis.ipynb | softvis-research/security-analysis | 461f7507103b61a42b7b366dd1ac7b268710bba8 | [
"Apache-2.0"
] | null | null | null | 126.891811 | 56,741 | 0.65162 | [
[
[
"# Vulnerability Analysis with Respect to Known CVEs",
"_____no_output_____"
],
[
"## Research Question\n\nDoes the Java project at hand contain potential security vulnerabilities due to reported CVEs? How critical are those vulnerabilities rated?\n* relevant for service providers/developers and customers/users alike, in order to prevent attacks by third parties that exploit vulnerabilities (e.g. sniffer attacks, denial of service, buffer overflow)\n* the results should enable further analyses and risk assessments",
"_____no_output_____"
],
[
"## Data Sources\n* Java structures of a Git project that was scanned by jQAssistant (enabled by adapting the pom.xml) and stored in a Neo4j instance\n* CVE data via file import and requests to cveapi (the latter only feasible for a manageable data set, since the API server's firewall blocks such requests to protect against denial-of-service attacks)\n\n### Concrete sources\n* project basis: Petclinic → frameworks (artifacts) used & their versions\n* CVE file import: historical CVEs from 2021 as a JSON file (as of 2021-08-23) <br>\n→ older files can be downloaded from https://nvd.nist.gov/vuln/data-feeds\n* CVE API via NVD: import of the 20 most recent CVEs or of all CVEs within a defined publication period <br>\n→ configurable via the URL <br>\n→ a response contains at most 20 CVEs\n\n### Possible weaknesses\n* artifacts/frameworks without a version number, version numbers with unusable prefixes/suffixes\n* artifacts/frameworks that could not be captured by jQAssistant\n* exact matching between the version in use and the vulnerability-relevant version",
"_____no_output_____"
],
[
"## Assumptions\n* The relevant artifacts could be scanned by jQAssistant and stored as artifact nodes with the corresponding labels \"name\" and \"version\".\n* Relevant data such as affected configurations, versions, and the severity of the vulnerabilities can be retrieved via the JSON structure of the CVEs.\n* The artifacts can be matched against the information in the CVEs.",
"_____no_output_____"
],
[
"## Validation\n\n### Data preparation\n* table view of all relevant vulnerabilities (possible in the project)\n* graphical overview of the affected artifacts and the number of associated CVEs\n* graphical overview of high (to-be-analyzed) impacts of the occurring CVEs for a selection of artifacts\n* graphical overview of the base scores of the occurring CVEs\n\n### Actions\n* review of the occurring vulnerabilities and their severity levels\n* validation & evaluation of the vulnerabilities by domain experts\n* planning of the next steps/milestones by the project team",
"_____no_output_____"
],
[
"## Implementation\n* identification of the relevant artifacts via the node label \"Artifact\" and via the property nodes attached to the pom.xml node by the relationship [:HAS_PROPERTY] → extraction of name & version\n* extraction of the CVEs from the JSON file & via cveapi → flattening of nested structures and reduction of the number of columns\n* matching of the identified project artifacts against the affected configurations from the CVEs",
"_____no_output_____"
]
],
[
[
"# Import of all used libraries\n# (the deprecated pandas.io.json.json_normalize import, the unused urllib3.request\n# import, and the duplicate json import were removed; pd.json_normalize is used below)\nimport py2neo\nimport pandas as pd\nimport numpy as np\n\nimport json\nimport openpyxl\n\nimport urllib3\nimport certifi\n\nfrom IPython.display import display, HTML\nimport pygal",
"_____no_output_____"
],
[
"base_html = \"\"\"\n<!DOCTYPE html>\n<html>\n <head>\n <script type=\"text/javascript\" src=\"http://kozea.github.com/pygal.js/javascripts/svg.jquery.js\"></script>\n <script type=\"text/javascript\" src=\"https://kozea.github.io/pygal.js/2.0.x/pygal-tooltips.min.js\"\"></script>\n </head>\n <body>\n <figure>\n {rendered_chart}\n </figure>\n </body>\n</html>\n\"\"\"",
"_____no_output_____"
],
[
"# Connect to Neo4j and store the graph in the variable 'graph'\n\ngraph = py2neo.Graph(host='localhost', user='neo4j', password='neo4j')",
"_____no_output_____"
],
[
"# Query to retrieve all framework artifacts\n# Cleanup of duplicates, test files, and unusable version prefixes/suffixes\n\nquery = \"\"\"\nMATCH (artifact:Artifact) WHERE NOT artifact.name contains 'petclinic' AND NOT artifact.type = 'test-jar'\nWITH DISTINCT artifact \nReturn artifact.name as Artefakt, artifact.version as Version\n\"\"\"\ndf_usedArtifacts = pd.DataFrame(graph.run(query), columns=['Artefakt', 'Version'])\ndf_usedArtifacts['Version'] = df_usedArtifacts['Version'].str.replace('[.-]?[a-zA-Z]+[-]?\\w+([.-]?\\d*)*$','',regex=True)",
"_____no_output_____"
],
[
"# Query for all frameworks stored as properties of the pom.xml\n# Join of both DataFrames, removal of duplicates\n\nquery2 = \"\"\"\nMatch (p:Pom)-[:HAS_PROPERTY]->(pr:Property) \nReturn pr.name as Artefakt, pr.value as Version\n\"\"\"\ndf_PomProperties = pd.DataFrame(graph.run(query2), columns=['Artefakt', 'Version'])\ndf_PomProperties['Artefakt'] = df_PomProperties['Artefakt'].str.replace('.?[vV]ersion','',regex=True)\n\ndf_usedItems = pd.merge(left=df_PomProperties, right=df_usedArtifacts, how='outer', left_on='Artefakt', right_on='Artefakt')\nfor i in df_usedItems.index:\n    version_y = df_usedItems['Version_y'][i]\n    version_x = df_usedItems['Version_x'][i]\n    if pd.isnull(version_x) and (version_y is not None):\n        df_usedItems.loc[i, 'Version_x'] = version_y\n\ndf_usedItems = df_usedItems.sort_values(by=['Artefakt','Version_x'], ascending=False, na_position='last')\ndf_usedItems = df_usedItems.drop_duplicates(subset='Artefakt', keep=\"first\")\ndf_usedItems = df_usedItems.reset_index()\ndf_usedItems = df_usedItems.drop(['index'], axis=1)\n\ndf_usedItems = df_usedItems.drop(['Version_y'], axis=1)\ndf_usedItems.columns =['Artefakt', 'Version']\ndf_usedItems",
"_____no_output_____"
]
],
[
[
"## Data Import via the CVE API or a Corresponding JSON File\nBoth options provide a similar data structure. The request via cveapi merely extends the structure by a few additional key-value pairs, which is why that DataFrame needs additional adjustment.",
"_____no_output_____"
]
],
[
[
"# Load the static CVE data from a JSON file\n# Download via https://nvd.nist.gov/vuln/data-feeds\n\napi = 'false'\n\nwith open('CVE/nvdcve-1.1-2021.json', encoding='utf-8') as staticData:\n    jsonData = json.load(staticData)\ndf_raw = pd.json_normalize(jsonData, record_path =['CVE_Items'])\n",
"_____no_output_____"
],
[
"# Data import via cveapi\n# Importing all CVEs is not recommended, because the server's firewall\n# blocks this to prevent denial-of-service attacks\n# The response always contains at most 20 CVEs\n\napi = 'true'\n\nhttp = urllib3.PoolManager(\n    cert_reqs=\"CERT_REQUIRED\",\n    ca_certs=certifi.where()\n)\n\n# Request the 20 most recent CVEs\nurl ='https://services.nvd.nist.gov/rest/json/cves/1.0?startIndex=20' \n\n# Request CVEs from a defined start date\n#url ='https://services.nvd.nist.gov/rest/json/cves/1.0?pubStartDate=2021-08-01T00:00:00:000 UTC-05:00'\n\n\nr = http.request('GET', url)\nr.status\n\n# The JSON data is parsed & stored in a dictionary\njsonData = json.loads(r.data.decode('utf-8'))\ndf_nested_list = pd.json_normalize(jsonData)\ndf_raw = df_nested_list.loc[:,df_nested_list.columns.isin(['result.CVE_Items'])]\njson_struct = json.loads(df_raw.to_json(orient=\"records\")) \ndf_raw = pd.json_normalize(json_struct,record_path =['result.CVE_Items'])",
"_____no_output_____"
]
],
[
[
"## Data preparation with DataFrames\nThe data from the request or the JSON file is unnested and prepared in several DataFrames to get it ready for the subsequent filtering against the artifacts in use.",
"_____no_output_____"
]
],
[
[
"# Prepare the data as a table with ID, description & severity of each vulnerability\n\n# The DataFrame is reduced to the relevant columns & the columns are renamed (readability)\nvulnerableList= df_raw.columns.isin(['cve.CVE_data_meta.ID', 'impact.baseMetricV3.cvssV3.confidentialityImpact', 'impact.baseMetricV3.cvssV3.integrityImpact', 'impact.baseMetricV3.cvssV3.availabilityImpact', 'impact.baseMetricV3.cvssV3.baseScore', 'impact.baseMetricV3.exploitabilityScore', 'impact.baseMetricV3.impactScore'])\ndf_basic = df_raw.loc[:,vulnerableList]\ndf_basic.columns =['CVE-ID', 'Confidentially Impact', 'Integrity Impact', 'Availability Impact','Base Score', 'Exploitability Score', 'Impact Score']\n\n\n# New DF with the CVE descriptions, since \"cve.description.description_data\" contains a dictionary\ndf_raw2 = df_raw.loc[:,df_raw.columns.isin(['cve.CVE_data_meta.ID', 'cve.description.description_data'])]\n\n# Reload & manipulate the DataFrame to get at the corresponding description\njson_struct = json.loads(df_raw2.to_json(orient=\"records\")) \ndf_desc = pd.json_normalize(json_struct,record_path =['cve.description.description_data'], meta=['cve.CVE_data_meta.ID'])\ndf_desc = df_desc.loc[:,df_desc.columns.isin(['value', 'cve.CVE_data_meta.ID'])]\ndf_desc.columns =['CVE-Beschreibung', 'CVE-ID']\n\n# Join df_basic & df_desc\nbasicList = pd.merge(left=df_basic, right=df_desc, left_on='CVE-ID', right_on='CVE-ID')\nbasicList = basicList[['CVE-ID', 'CVE-Beschreibung', 'Confidentially Impact', 'Integrity Impact', 'Availability Impact','Base Score', 'Impact Score', 'Exploitability Score']]",
"_____no_output_____"
],
[
"# New DF with the configuration descriptions, since \"configurations.nodes\" contains a dictionary\nnewList= df_raw.columns.isin(['cve.CVE_data_meta.ID', 'configurations.nodes'])\ndf_raw3 = df_raw.loc[:,newList]\n\n# Reload & manipulate the DataFrame to get at the corresponding vulnerable configuration\njson_struct = json.loads(df_raw3.to_json(orient=\"records\")) \ndf_conf = pd.json_normalize(json_struct,record_path =['configurations.nodes'], meta=['cve.CVE_data_meta.ID'])\njson_struct = json.loads(df_conf.to_json(orient=\"records\")) \ndf_conf2 = pd.json_normalize(json_struct,record_path =['cpe_match'], meta=['operator', 'cve.CVE_data_meta.ID'])\n\n# Trim & rename the columns\nif api == 'false':\n    df_conf2 = df_conf2.loc[:,df_conf2.columns.isin(['cpe23Uri', 'versionEndIncluding', 'versionEndExcluding', 'versionStartIncluding', 'versionStartExcluding', 'operator', 'cve.CVE_data_meta.ID'])]\n    df_conf2.columns =['cpe23URI', 'Last version (excl)', 'First version (incl)', 'Last version (incl)', 'First version (excl)', 'Connector/Relation', 'CVE-ID']\n    df_conf2 = df_conf2[['CVE-ID', 'Connector/Relation', 'cpe23URI','First version (excl)', 'First version (incl)', 'Last version (excl)', 'Last version (incl)']]\nelif api == 'true':\n    df_conf2 = df_conf2.rename(columns={\"cve.CVE_data_meta.ID\": \"CVE-ID\", \"cpe23Uri\": \"cpe23URI\", \"operator\": \"Connector/Relation\"})\n    df_conf2 = df_conf2.drop(columns=['cpe_name'])",
"_____no_output_____"
],
[
"# Scan the DataFrame of vulnerable configurations against the artifacts in use\nlist1 = []\nlist2 = []\ndf_result = df_conf2[0:0]\ndf_result.insert(len(df_result.columns), \"verwendetes Artefakt\", [])\ndf_result.insert(len(df_result.columns), \"verwendete Version\", [])\n\nfor j in df_usedItems.index:\n    version = df_usedItems['Version'][j]\n    artefakt = df_usedItems['Artefakt'][j]\n    \n    df_search = df_conf2.loc[df_conf2['cpe23URI'].str.contains(':'+artefakt + ':', case=False)]\n    if df_search is not None:\n        lengthDF = df_search.shape[0]\n        for i in range(lengthDF):\n            list1.append(artefakt)\n            list2.append(version)\n        df_search.insert(len(df_search.columns), \"verwendetes Artefakt\", list1)\n        df_search.insert(len(df_search.columns), \"verwendete Version\", list2)\n        list1.clear()\n        list2.clear()\n        df_result = df_result.append(df_search, ignore_index=True)",
"_____no_output_____"
]
],
[
[
"### List of vulnerabilities\nThe following table lists all potential vulnerabilities for the artifacts used in the scanned project. Alongside the CVE ID & the artifact + version number used in the project, a CVE description and the respective scores are listed to help assess relevance and severity.\nThe list is saved as an Excel file.",
"_____no_output_____"
]
],
[
[
"# New DataFrame with compact data (without version comparison)\ndf_compact = df_result[0:0]\ndf_compact = df_result.drop_duplicates(subset='CVE-ID', keep=\"first\")\ndf_compact = df_compact.loc[:,df_compact.columns.isin(['CVE-ID', 'verwendetes Artefakt', 'verwendete Version'])]\ndf_compact = pd.merge(left=basicList, right=df_compact, left_on='CVE-ID', right_on='CVE-ID')\ndf_compact.to_excel(\"result_analysis.xlsx\")\ndf_compact",
"_____no_output_____"
]
],
[
[
"#### Legend\n*Base Score* <br>\n= Represents the intrinsic characteristics and the severity of a vulnerability (constant over longer periods of time and across different environments) <br>\n→ comprises the Impact Score & the Exploitability Score\n\n*Exploitability Score* <br>\n= Reflects the vulnerable component → includes, among other things, the situation and context that can enable the attack\n\n*Impact Score* <br>\n= Severity of the direct consequences of a successful exploit on the affected asset (software system, environment, data, ...) and the direct & foreseeable effect of the exploited vulnerability\n\n\n**Possible ratings**<br>\n*None*: 0 <br>\n*Low*: 0.1 - 3.9<br>\n*Medium*: 4.0 - 6.9<br>\n*High*: 7.0 - 8.9<br>\n*Critical*: 9.0 - 10.0",
"_____no_output_____"
],
[
"### Visualization\nThe following section uses several charts to show the relationship between CVEs, affected artifacts, frequency of occurrence and severity.",
"_____no_output_____"
]
],
[
[
"df_count = df_compact['verwendetes Artefakt'].value_counts().to_frame()\ndf_count['Artefakt'] = df_count.index\ndf_count.columns =['Anzahl auftretender CVE', 'Artefakt']\ndf_count.columns.name = None\ndf_count = df_count.reset_index()\ndf_count = df_count.loc[:,df_count.columns.isin(['Anzahl auftretender CVE', 'Artefakt'])]\ndf_count",
"_____no_output_____"
],
[
"pie_chart = pygal.Pie()\npie_chart.title = 'Number of artifacts with security concerns'\nfor i in range(len(df_count)):\n    artefakt= df_count['Artefakt'][i]\n    anzahl=df_count['Anzahl auftretender CVE'][i]\n    pie_chart.add(artefakt, anzahl)\ndisplay(HTML(base_html.format(rendered_chart=pie_chart.render(is_unicode=True))))",
"_____no_output_____"
]
],
[
[
"### Distribution of high impact ratings across the ten most frequent artifacts\n\n**Legend** <br>\n* *Confidentiality Impact* = impact on the confidentiality of the information accessible through the system (includes, e.g., restricted information access, access only with authorization)\n* *Integrity Impact* = impact on the trustworthiness of the information to be protected, e.g. through unauthorized & unnoticed manipulation by attackers\n* *Availability Impact* = impact of an exploit on the availability of the attacked component\n",
"_____no_output_____"
]
],
[
[
"from pygal.style import DefaultStyle\ntop5 = df_count.head(10)\nlabels = []\nimpactTypes = ['Confidentially Impact', 'Integrity Impact', 'Availability Impact']\ncount_ci = []\ncount_ii = []\ncount_ai = []\n\nline_chart = pygal.StackedBar(show_legend=True, human_readable=True, fill=True, legend_at_bottom=True, print_values=True, style=DefaultStyle(value_font_size=12))\n\nline_chart.title = 'Distribution of high impacts among the top 10'\nline_chart.x_title= 'affected artifacts'\nline_chart.y_title='number of impact hits'\n\nfor j in top5.index:\n    artefakt = top5['Artefakt'][j]\n    \n    \n    df_interimResult = df_compact.loc[df_compact['verwendetes Artefakt'].str.contains(artefakt, case=False)]\n    \n    # count rows rated HIGH in any of the three impact categories\n    count_hi = len(df_interimResult[(df_interimResult[impactTypes[0]] == 'HIGH') | (df_interimResult[impactTypes[1]] == 'HIGH') | (df_interimResult[impactTypes[2]] == 'HIGH')])\n    \n    count_ci.append(len(df_interimResult[df_interimResult[impactTypes[0]] == 'HIGH']))\n    count_ii.append(len(df_interimResult[df_interimResult[impactTypes[1]] == 'HIGH']))\n    count_ai.append(len(df_interimResult[df_interimResult[impactTypes[2]] == 'HIGH']))\n    labels.append(artefakt+' ('+ str(count_hi)+')')\n\nline_chart.add(impactTypes[0], count_ci)\nline_chart.add(impactTypes[1], count_ii)\nline_chart.add(impactTypes[2], count_ai)\n\nline_chart.x_labels = labels\n \n\ndisplay(HTML(base_html.format(rendered_chart=line_chart.render(is_unicode=True))))\n",
"_____no_output_____"
],
[
"df_interimResult = df_compact[0:0]\ncveListC = []\ncveListH = []\ncveListM = []\ncveListL = []\ncveListN = []\n\n\ntreemap_BaseS = pygal.Treemap()\ntreemap_BaseS.title = 'Severity of each vulnerability (CVE) based on its Base Score'\n\nfor j in df_count.index:\n    artefakt = df_count['Artefakt'][j]\n    df_interimResult = df_compact.loc[df_compact['verwendetes Artefakt'].str.contains(artefakt, case=False)]\n    \n    for i in df_interimResult.index:\n        baseScore = df_interimResult['Base Score'][i]\n        if baseScore >= 9:\n            cveListC.append({'value': df_interimResult['Base Score'][i], 'label': (df_interimResult['CVE-ID'][i]+': ' + artefakt)})\n        elif baseScore >= 7 and baseScore < 9:\n            cveListH.append({'value': df_interimResult['Base Score'][i], 'label': (df_interimResult['CVE-ID'][i]+': ' + artefakt)})\n        elif baseScore >= 4 and baseScore < 7:\n            cveListM.append({'value': df_interimResult['Base Score'][i], 'label': (df_interimResult['CVE-ID'][i]+': ' + artefakt)})\n        elif baseScore == 0:\n            # check for 0 first, otherwise the 'Low' branch would swallow it\n            cveListN.append({'value': df_interimResult['Base Score'][i], 'label': (df_interimResult['CVE-ID'][i]+': ' + artefakt)})\n        elif baseScore < 4:\n            cveListL.append({'value': df_interimResult['Base Score'][i], 'label': (df_interimResult['CVE-ID'][i]+': ' + artefakt)})\n    \ntreemap_BaseS.add('Critical', cveListC)\ntreemap_BaseS.add('High', cveListH)\ntreemap_BaseS.add('Medium', cveListM)\ntreemap_BaseS.add('Low', cveListL)\ntreemap_BaseS.add('None', cveListN)\n \n \n\ndisplay(HTML(base_html.format(rendered_chart=treemap_BaseS.render(is_unicode=True))))",
"_____no_output_____"
]
],
[
[
"CVEs with the highest scores should be checked first. → possible prioritization",
"_____no_output_____"
],
[
"## Results\n\n### Software analysis\n* The most frequent CVEs for Petclinic occur due to the use of mysql (in 2020, among others, also FasterXML jackson-databind).\n* Vulnerabilities affecting availability are exploited particularly often.\n* Based on the Base Score, 2 artifacts (postgresql & solr) with 2 critical CVEs each were identified; their vulnerabilities should be analyzed as soon as possible.\n\n### Obstacles / room for improvement\n* affected artifact versions are maintained in four columns instead of two within the JSON file/response <br>\n→ prevents further narrowing of the list<br>\n→ additional manual trimming required\n* missing version numbers on some artifacts\n* improve the interface to the API\n* improve the regular expressions\n* interesting idea: relate affected artifacts to classes and lines of code",
"_____no_output_____"
],
[
"## Next steps\n* Present the results and discuss them with the domain experts and the project team (possibly including some of the stakeholders)\n* Review the results table and further shorten the CVE list\n* Plan & estimate the effort for possible updates & bug fixes\n* Schedule regular CVE scans",
"_____no_output_____"
],
[
"## Sources\n\n- Harrer, M., Software Analytics Canvas, URL: https://www.feststelltaste.de/software-analytics-canvas/, accessed 24.07.2021\n- n.a., NVD Data Feeds, URL: https://nvd.nist.gov/vuln/data-feeds, accessed 31.07.2021\n- n.a., Vulnerability Metrics, URL: https://nvd.nist.gov/vuln-metrics/cvss, accessed 26.08.2021\n- n.a., Common Vulnerability Scoring System version 3.1: Specification Document, URL: https://www.first.org/cvss/specification-document, accessed 26.08.2021",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e75c9bfbf7e1958d7a7bff87344fa3284cefb1e6 | 1,476 | ipynb | Jupyter Notebook | Untitled.ipynb | ehosseiniasl/CAM | 7241bf346d71ec61581ffaba8b08c05321790491 | [
"MIT"
] | null | null | null | Untitled.ipynb | ehosseiniasl/CAM | 7241bf346d71ec61581ffaba8b08c05321790491 | [
"MIT"
] | null | null | null | Untitled.ipynb | ehosseiniasl/CAM | 7241bf346d71ec61581ffaba8b08c05321790491 | [
"MIT"
] | null | null | null | 21.391304 | 52 | 0.579946 | [
[
[
"import argparse\nimport os\nimport random\nimport shutil\nimport time\nimport warnings\nimport sys\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.parallel\nimport torch.backends.cudnn as cudnn\nimport torch.distributed as dist\nimport torch.optim\nimport torch.multiprocessing as mp\nimport torch.utils.data\nimport torch.utils.data.distributed\nimport torchvision.transforms as transforms\nimport torchvision.datasets as datasets\nimport torchvision.models as models\n#from tensorboardX import SummaryWriter\n# import ipdb",
"_____no_output_____"
],
[
"from voc_dataset import voc",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
e75cb66e6d7f198f6e5cf6b65ee8247814311dd7 | 17,476 | ipynb | Jupyter Notebook | services/retriever_test.ipynb | dunkelhaus/BitVision | afbb2c57a409c85dded57fe1dee3283cee991121 | [
"MIT"
] | null | null | null | services/retriever_test.ipynb | dunkelhaus/BitVision | afbb2c57a409c85dded57fe1dee3283cee991121 | [
"MIT"
] | null | null | null | services/retriever_test.ipynb | dunkelhaus/BitVision | afbb2c57a409c85dded57fe1dee3283cee991121 | [
"MIT"
] | null | null | null | 37.582796 | 189 | 0.558595 | [
[
[
"import os\nimport json\nimport requests\nimport moment\nfrom bs4 import BeautifulSoup\nfrom textblob import TextBlob\n\n# Local\nfrom engine import dataset, transformer",
"_____no_output_____"
],
[
"response = requests.get(\"https://www.bitstamp.net/api/ticker/\").json()",
"_____no_output_____"
],
[
"response",
"_____no_output_____"
],
[
"corjson = json.dumps({\n \"error\": False,\n \"data\": {\n \"last\": round(float(response[\"last\"]), 2),\n \"high\": round(float(response[\"high\"]), 2),\n \"low\": round(float(response[\"low\"]), 2),\n \"open\": round(float(response[\"open\"]), 2),\n \"volume\": round(float(response[\"volume\"]), 2)\n }\n }, indent=2)",
"_____no_output_____"
],
[
"corjson",
"_____no_output_____"
],
[
"html = requests.get(\"https://www.coindesk.com/\")\nsoup = BeautifulSoup(html.text, \"html.parser\")\n\ntop_articles = soup.find_all('div', class_=\"card-text-block\")\nbelow_list = soup.find_all('div', class_=\"list-item-card post\")\n\nheadlines = []",
"_____no_output_____"
],
[
"len(below_list), len(top_articles)",
"_____no_output_____"
],
[
"for i in top_articles + below_list:\n date_container = i.find(\"span\", class_=\"card-date\")\n \n if date_container is None:\n # i.e. below_list\n date_container = i.find(\"time\")\n \n date_published = moment.date(date_container.get_text()).format(\"M-D\")\n print(date_published)\n headline_container = i.find(\"h4\") if i.find(\"h4\") else i.find(\"h2\")\n headline = headline_container.get_text().strip()\n print(i.find(\"a\", class_=\"\")[\"href\"])\n \n print(headline)\n headlines.append((headline, date_published, i.find(\"a\", class_=\"\")[\"href\"]))",
"04-01\n/dogecoin-takes-off-after-musk-moonshot\nDOGE Jumps After Tesla’s Musk Promises ‘Literal’ Moonshot\n04-01\n/filecoin-surges-42-replaces-litecoin-as-the-9th-largest-cryptocurrency\nFilecoin Surges 42%, Replaces Litecoin as 9th Largest Digital Asset\n03-31\n/no-joke-chipotle-to-give-away-200k-in-free-burritos-and-bitcoin-on-april-1\nNo Joke: Chipotle to Give Away $200K in Free Burritos and Bitcoin on April 1\n03-31\n/bull-flag-70k-bitcoin-skepticism\n‘Bull Flag’ Call for $70K Bitcoin Draws Skepticism From Rival Analysts\n03-12\n/how-to-create-buy-sell-nfts\nHow to Create, Buy and Sell NFTs\n04-02\n/fincen-names-former-chainalysis-executive-acting-director-as-blanco-resigns\nFinCEN Names Former Chainalysis Executive Acting Director as Blanco Resigns\n04-02\n/former-sec-chairman-jay-clayton-new-bitcoin-regulations\nFormer SEC Chairman Jay Clayton Warns of New Bitcoin Regulations\n04-02\n/ether-price-rises-above-2k-for-first-in-six-weeks\nEther Price Jumps to All-Time High Near $2,100\n04-01\n/coindesk-q1-quarterly-review-retail-institutional-research\nRetail Gains Amid Institutional Influx in Q1: CoinDesk Quarterly Review\n04-01\n/decentraland-launches-dapp-portal-polygon-bypass-high-gas-fees\nDecentraland Launches Dapp Portal With Polygon to Bypass High Gas Fees\n04-02\n/blockchain-recovery-scam-uk-fca\n‘Blockchain Recovery’ Scam Is Posing as a Legit Firm, UK FCA Warns\n04-02\n/handshake-patches-inflation-bug\nDecentralized DNS Project Handshake Patches Inflation Bug\n04-01\n/blog/coindesk-tv-welcomes-launch-sponsors\nCoinDesk TV Welcomes Launch Sponsors\n04-02\n/crypto-as-a-payment-system-here-we-go-again\nCrypto as a Payment System? 
Here We Go Again\n04-01\n/podcasts/coindesk-podcast-network/archegos-fastest-wealth-loss-history\nCorruption, Leverage and Cheap Money: Archegos and the Fastest Loss of Wealth in History\n04-02\n/central-bank-digital-currencies-stimulus-inflation-bank-of-america\nCentral Bank ‘Money Drops’ With Digital Currencies Could Fuel Inflation: Bank of America\n04-02\n/future-fintech-hydro-chinese-mining-farm\nPublicly-Traded Fintech Firm Acquires Chinese Mining Farm for $9M\n04-02\n/bitcoins-drop-in-volatility-may-boost-appeal-make-130k-possible-jpmorgan-says-report\nBitcoin’s Drop in Volatility May Boost Appeal, Make $130K Possible, JPMorgan Says: Report\n04-02\n/bitcoin-decoupled-from-stocks\nBitcoin Decoupled From Stocks in Q1 as Institutional Demand Strengthened: CoinDesk Research\n04-01\n/uniswaps-token-issue\nUniswap’s ‘Token’ Issue\n04-02\n/market-wrap-ether-all-time-high-bitcoin-stalls\nMarket Wrap: Ether Jumps to All-Time High as Bitcoin Stalls Despite JPMorgan’s $130K Call\n04-03\n/thai-central-bank-to-pilot-its-retail-central-bank-digital-currency-in-2022-report\nThai Central Bank to Pilot Its Retail Central Bank Digital Currency in 2022: Report\n04-03\n/u-s-added-more-than-900k-jobs-in-march-blowing-past-estimates\nUS Added More Than 900K Jobs in March, Blowing Past Estimates\n04-03\n/microstrategy-rated-buy-at-btig-partly-on-view-bitcoin-will-hit-95k-by-end-of-2022\nMicroStrategy Rated ‘Buy’ at BTIG Partly on View Bitcoin Will Hit $95K by End of 2022\n04-03\n/bitcoin-mining-difficulty\nBitcoin Mining Difficulty Hits All-Time High as Delayed ASIC Shipments Come Online\n04-01\n/irs-seeks-names-of-circle-customers-transacting-over-20k\nIRS Seeks Names of Circle Customers Transacting Over $20K in Crypto\n04-01\n/market-wrap-bitcoin-below-60k-cardano-sixfold-gains\nMarket Wrap: Bitcoin Stuck Below $60K; Cardano’s Sixfold 1Q Gains Led CoinDesk 20\n04-01\n/coinbases-coin-stock-to-go-live-on-nasdaq-april-14\nCoinbase’s COIN Stock to Go Live on Nasdaq April 
14\n04-01\n/coinshares-partners-with-canadas-3iq-to-launch-new-bitcoin-etf-on-tsx\nCoinShares Partners With Canada’s 3iQ to Launch New Bitcoin ETF on TSX\n04-01\n/former-senior-executive-of-russias-largest-bank-leaves-for-crypto-startup\nFormer Senior Executive of Russia’s Largest Bank Leaves for Crypto Startup\n04-01\n/the-myths-and-realities-of-green-bitcoin\nThe Myths and Realities of ‘Green Bitcoin’\n04-01\n/vindicatio-nft-pioneer-looks-ahead\n‘There’s a Sense of Vindication’: A NFT Pioneer Looks to the Future\n04-01\n/bitcoin-miners-record-revenue\nBitcoin Miners Saw a Monthly Record $1.5B Revenue in March\n"
],
[
"ordered_headlines = sorted(headlines, key=lambda h: h[1], reverse=True)\nprocessed_headlines = []\nfor headline in ordered_headlines:\n    headline_str = headline[0].split('\\n')[0]\n    date_published = headline[1]\n    sentiment = TextBlob(headline_str).sentiment.polarity\n\n    if sentiment > 0:\n        sentiment = \"POS\"\n    elif sentiment == 0:\n        sentiment = \"NEUT\"\n    else:\n        sentiment = \"NEG\"\n\n    processed_headlines += [[\n        date_published,\n        headline_str,\n        sentiment,\n        headline[2]\n    ]]",
"_____no_output_____"
],
[
"processed_headlines",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75cc9d8ef44e4cdd75ce09c8fa92cd91d22885e | 198,282 | ipynb | Jupyter Notebook | courses/machine_learning/datasets/create_datasets.ipynb | AmirQureshi/code-to-run- | bc8e5ee5b55c0408b7436d0f866b3b7e79164daf | [
"Apache-2.0"
] | 58 | 2019-05-16T00:12:11.000Z | 2022-03-14T06:12:12.000Z | courses/machine_learning/datasets/create_datasets.ipynb | AmirQureshi/code-to-run- | bc8e5ee5b55c0408b7436d0f866b3b7e79164daf | [
"Apache-2.0"
] | 1 | 2021-03-26T00:38:05.000Z | 2021-03-26T00:38:05.000Z | courses/machine_learning/datasets/create_datasets.ipynb | AmirQureshi/code-to-run- | bc8e5ee5b55c0408b7436d0f866b3b7e79164daf | [
"Apache-2.0"
] | 46 | 2018-03-03T17:17:27.000Z | 2022-03-24T14:56:46.000Z | 96.534567 | 36,376 | 0.758657 | [
[
[
"<h1> Explore and create ML datasets </h1>\n\nIn this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected.\n\n<div id=\"toc\"></div>\n\nLet's start off with the Python imports that we need.",
"_____no_output_____"
]
],
[
[
"import datalab.bigquery as bq\nimport seaborn as sns\nimport pandas as pd\nimport numpy as np\nimport shutil",
"_____no_output_____"
],
[
"%%javascript\n$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')",
"_____no_output_____"
]
],
[
[
"<h3> Extract sample data from BigQuery </h3>\n\nThe dataset that we will use is <a href=\"https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips\">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows.\n\nLet's write a SQL query to pick up interesting fields from the dataset.",
"_____no_output_____"
]
],
[
[
"%sql --module afewrecords\nSELECT pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude,\ndropoff_latitude, passenger_count, trip_distance, tolls_amount, \nfare_amount, total_amount FROM [nyc-tlc:yellow.trips] LIMIT 10",
"_____no_output_____"
],
[
"trips = bq.Query(afewrecords).to_dataframe()\ntrips",
"_____no_output_____"
]
],
[
[
"Let's increase the number of records so that we can do some neat graphs. There is no guarantee about the order in which records are returned, and so no guarantee about which records get returned if we simply increase the LIMIT. To properly sample the dataset, let's use the HASH of the pickup time and return 1 in 100,000 records -- because there are 1 billion records in the data, we should get back approximately 10,000 records if we do this.",
"_____no_output_____"
]
],
[
[
"%sql --module afewrecords2\nSELECT\n pickup_datetime,\n pickup_longitude, pickup_latitude, \n dropoff_longitude, dropoff_latitude,\n passenger_count,\n trip_distance,\n tolls_amount,\n fare_amount,\n total_amount\nFROM\n [nyc-tlc:yellow.trips]\nWHERE\n ABS(HASH(pickup_datetime)) % $EVERY_N == 1",
"_____no_output_____"
],
[
"trips = bq.Query(afewrecords2, EVERY_N=100000).to_dataframe()\ntrips[:10]",
"_____no_output_____"
]
],
[
[
"<h3> Exploring data </h3>\n\nLet's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.",
"_____no_output_____"
]
],
[
[
"ax = sns.regplot(x=\"trip_distance\", y=\"fare_amount\", ci=None, truncate=True, data=trips)",
"_____no_output_____"
]
],
[
[
"Hmm ... do you see something wrong with the data that needs addressing?\n\nIt appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50).\n\nNote the extra WHERE clauses.",
"_____no_output_____"
]
],
[
[
"%sql --module afewrecords3\nSELECT\n pickup_datetime,\n pickup_longitude, pickup_latitude, \n dropoff_longitude, dropoff_latitude,\n passenger_count,\n trip_distance,\n tolls_amount,\n fare_amount,\n total_amount\nFROM\n [nyc-tlc:yellow.trips]\nWHERE\n (ABS(HASH(pickup_datetime)) % $EVERY_N == 1 AND\n trip_distance > 0 AND fare_amount >= 2.5)",
"_____no_output_____"
],
[
"trips = bq.Query(afewrecords3, EVERY_N=100000).to_dataframe()\nax = sns.regplot(x=\"trip_distance\", y=\"fare_amount\", ci=None, truncate=True, data=trips)",
"_____no_output_____"
]
],
[
[
"What's up with the streaks at \\$45 and \\$50? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable.\n\nLet's examine whether the toll amount is captured in the total amount.",
"_____no_output_____"
]
],
[
[
"tollrides = trips[trips['tolls_amount'] > 0]\ntollrides[tollrides['pickup_datetime'] == '2012-09-05 15:45:00']",
"_____no_output_____"
]
],
[
[
"Looking at a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool.\n\nLet's also look at the distribution of values within the columns.",
"_____no_output_____"
]
],
[
[
"trips.describe()",
"_____no_output_____"
]
],
[
[
"Hmm ... The min, max of longitude look strange.\n\nFinally, let's actually look at the start and end of a few of the trips.",
"_____no_output_____"
]
],
[
[
"def showrides(df, numlines):\n import matplotlib.pyplot as plt\n lats = []\n lons = []\n for iter, row in df[:numlines].iterrows():\n lons.append(row['pickup_longitude'])\n lons.append(row['dropoff_longitude'])\n lons.append(None)\n lats.append(row['pickup_latitude'])\n lats.append(row['dropoff_latitude'])\n lats.append(None)\n\n sns.set_style(\"darkgrid\")\n plt.plot(lons, lats)\n\nshowrides(trips, 10)",
"_____no_output_____"
],
[
"showrides(tollrides, 10)",
"_____no_output_____"
]
],
[
[
"As you'd expect, rides that involve a toll are longer than the typical ride.",
"_____no_output_____"
],
[
"<h3> Quality control and other preprocessing </h3>\n\nWe need to do some clean-up of the data:\n<ol>\n<li>New York City longitudes are around -74 and latitudes are around 41.</li>\n<li>We shouldn't have zero passengers.</li>\n<li>Clean up the total_amount column to reflect only fare_amount and tolls_amount, and then remove those two columns.</li>\n<li>Before the ride starts, we'll know the pickup and dropoff locations, but not the trip distance (that depends on the route taken), so remove it from the ML dataset.</li>\n<li>Discard the timestamp.</li>\n</ol>\n\nWe could do preprocessing in BigQuery, similar to how we removed the zero-distance rides, but just to show you another option, let's do this in Python. In production, we'll have to carry out the same preprocessing on the real-time input data. \n\nThis sort of preprocessing of input data is quite common in ML, especially if the quality-control is dynamic.",
"_____no_output_____"
]
],
[
[
"def preprocess(trips_in):\n trips = trips_in.copy(deep=True)\n trips.fare_amount = trips.fare_amount + trips.tolls_amount\n del trips['tolls_amount']\n del trips['total_amount']\n del trips['trip_distance']\n del trips['pickup_datetime']\n qc = np.all([\\\n trips['pickup_longitude'] > -78, \\\n trips['pickup_longitude'] < -70, \\\n trips['dropoff_longitude'] > -78, \\\n trips['dropoff_longitude'] < -70, \\\n trips['pickup_latitude'] > 37, \\\n trips['pickup_latitude'] < 45, \\\n trips['dropoff_latitude'] > 37, \\\n trips['dropoff_latitude'] < 45, \\\n trips['passenger_count'] > 0,\n ], axis=0)\n return trips[qc]\n\ntripsqc = preprocess(trips)\ntripsqc.describe()",
"_____no_output_____"
]
],
[
[
"The quality control has removed about 300 rows (11400 - 11101) or about 3% of the data. This seems reasonable.\n\nLet's move on to creating the ML datasets.\n\n<h3> Create ML datasets </h3>\n\nLet's split the QCed data randomly into training, validation and test sets.",
"_____no_output_____"
]
],
[
[
"shuffled = tripsqc.sample(frac=1)\ntrainsize = int(len(shuffled['fare_amount']) * 0.70)\nvalidsize = int(len(shuffled['fare_amount']) * 0.15)\n\ndf_train = shuffled.iloc[:trainsize, :]\ndf_valid = shuffled.iloc[trainsize:(trainsize+validsize), :]\ndf_test = shuffled.iloc[(trainsize+validsize):, :]",
"_____no_output_____"
],
[
"df_train.describe()",
"_____no_output_____"
],
[
"df_valid.describe()",
"_____no_output_____"
],
[
"df_test.describe()",
"_____no_output_____"
]
],
[
[
"Let's write out the three dataframes to appropriately named csv files. We can use these csv files for local training (recall that these files represent only 1/100,000 of the full dataset) until we get to the point of using Dataflow and Cloud ML.",
"_____no_output_____"
]
],
[
[
"def to_csv(df, filename):\n outdf = df.copy(deep=False)\n outdf.loc[:, 'key'] = np.arange(0, len(outdf)) # rownumber as key\n # reorder columns so that target is first column\n cols = outdf.columns.tolist()\n cols.remove('fare_amount')\n cols.insert(0, 'fare_amount')\n print (cols) # new order of columns\n outdf = outdf[cols]\n outdf.to_csv(filename, header=False, index_label=False, index=False)\n\nto_csv(df_train, 'taxi-train.csv')\nto_csv(df_valid, 'taxi-valid.csv')\nto_csv(df_test, 'taxi-test.csv')",
"['fare_amount', u'pickup_longitude', u'pickup_latitude', u'dropoff_longitude', u'dropoff_latitude', u'passenger_count', 'key']\n['fare_amount', u'pickup_longitude', u'pickup_latitude', u'dropoff_longitude', u'dropoff_latitude', u'passenger_count', 'key']\n['fare_amount', u'pickup_longitude', u'pickup_latitude', u'dropoff_longitude', u'dropoff_latitude', u'passenger_count', 'key']\n"
],
[
"!head -10 taxi-valid.csv",
"6.0,-74.013667,40.713935,-74.007627,40.702992,2,0\r\n9.3,-74.007025,40.730305,-73.979111,40.752267,1,1\r\n6.9,-73.9664,40.7598,-73.9864,40.7624,1,2\r\n36.8,-73.961938,40.773337,-73.86582,40.769607,1,3\r\n6.5,-73.989408,40.735895,-73.9806,40.745115,1,4\r\n5.5,-73.983033,40.739107,-73.979105,40.74436,6,5\r\n4.9,-73.983879,40.761266,-73.982485,40.768045,1,6\r\n5.3,-73.991107,40.733908,-73.991082,40.74567,3,7\r\n12.0,-73.96837,40.762312,-73.999902,40.720617,1,8\r\n6.9,-73.97555,40.776823,-73.960875,40.770087,1,9\r\n"
]
],
[
[
"<h3> Verify that datasets exist </h3>",
"_____no_output_____"
]
],
[
[
"!ls -l *.csv",
"-rw-r--r-- 1 root root 88622 Feb 26 02:34 taxi-test.csv\r\n-rw-r--r-- 1 root root 417222 Feb 26 02:34 taxi-train.csv\r\n-rw-r--r-- 1 root root 88660 Feb 26 02:34 taxi-valid.csv\r\n"
]
],
[
[
"We have 3 .csv files corresponding to train, valid, test. The ratio of file-sizes corresponds to our split of the data.",
"_____no_output_____"
]
],
[
[
"%bash\nhead taxi-train.csv",
"12.0,-73.987625,40.750617,-73.971163,40.78518,1,0\n4.5,-73.96362,40.774363,-73.953485,40.772665,1,1\n4.5,-73.989649,40.756633,-73.985597,40.765662,1,2\n10.0,-73.9939498901,40.7275238037,-74.0065841675,40.7442398071,1,3\n2.5,-73.950223,40.66896,-73.948112,40.668872,6,4\n7.3,-73.98511,40.742173,-73.96586,40.759668,4,5\n8.1,-73.997638,40.720887,-74.012937,40.716323,2,6\n41.5,-74.004283,40.740476,-73.897273,40.817774,2,7\n5.3,-73.984345,40.755862,-73.98152,40.750347,1,8\n13.5,-73.9615020752,40.7683258057,-73.9846801758,40.7363166809,1,9\n"
]
],
[
[
"Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them.",
"_____no_output_____"
],
[
"<h3> Benchmark </h3>\n\nBefore we start building complex ML models, it is a good idea to come up with a very simple model and use that as a benchmark.\n\nMy model is going to be to simply divide the mean fare_amount by the mean trip_distance to come up with a rate and use that to predict. Let's compute the RMSE of such a model.",
"_____no_output_____"
]
],
[
[
"import datalab.bigquery as bq\nimport pandas as pd\nimport numpy as np\nimport shutil\n\ndef distance_between(lat1, lon1, lat2, lon2):\n # haversine formula to compute distance \"as the crow flies\". Taxis can't fly of course.\n dist = np.degrees(np.arccos(np.minimum(1,np.sin(np.radians(lat1)) * np.sin(np.radians(lat2)) + np.cos(np.radians(lat1)) * np.cos(np.radians(lat2)) * np.cos(np.radians(lon2 - lon1))))) * 60 * 1.515 * 1.609344\n return dist\n\ndef estimate_distance(df):\n return distance_between(df['pickuplat'], df['pickuplon'], df['dropofflat'], df['dropofflon'])\n\ndef compute_rmse(actual, predicted):\n return np.sqrt(np.mean((actual-predicted)**2))\n\ndef print_rmse(df, rate, name):\n print (\"{1} RMSE = {0}\".format(compute_rmse(df['fare_amount'], rate*estimate_distance(df)), name))\n\nFEATURES = ['pickuplon','pickuplat','dropofflon','dropofflat','passengers']\nTARGET = 'fare_amount'\ncolumns = list([TARGET])\ncolumns.extend(FEATURES) # in CSV, target is the first column, after the features\ncolumns.append('key')\ndf_train = pd.read_csv('taxi-train.csv', header=None, names=columns)\ndf_valid = pd.read_csv('taxi-valid.csv', header=None, names=columns)\ndf_test = pd.read_csv('taxi-test.csv', header=None, names=columns)\nrate = df_train['fare_amount'].mean() / estimate_distance(df_train).mean()\nprint (\"Rate = ${0}/km\".format(rate))\nprint_rmse(df_train, rate, 'Train')\nprint_rmse(df_valid, rate, 'Valid') \nprint_rmse(df_test, rate, 'Test') ",
"Rate = $2.58056321263/km\nTrain RMSE = 6.78227475714\nValid RMSE = 6.78227475714\nTest RMSE = 5.56794896998\n"
]
],
[
[
"<h2>Benchmark on same dataset</h2>\n\nThe RMSE depends on the dataset, and for comparison, we have to evaluate on the same dataset each time. We'll use this query in later labs:",
"_____no_output_____"
]
],
[
[
"def create_query(phase, EVERY_N):\n \"\"\"\n phase: 1=train 2=valid\n \"\"\"\n base_query = \"\"\"\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n CONCAT(STRING(pickup_datetime), STRING(pickup_longitude), STRING(pickup_latitude), STRING(dropoff_latitude), STRING(dropoff_longitude)) AS key,\n DAYOFWEEK(pickup_datetime)*1.0 AS dayofweek,\n HOUR(pickup_datetime)*1.0 AS hourofday,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers,\nFROM\n [nyc-tlc:yellow.trips]\nWHERE\n trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n \"\"\"\n\n if EVERY_N == None:\n if phase < 2:\n # training\n query = \"{0} AND ABS(HASH(pickup_datetime)) % 4 < 2\".format(base_query)\n else:\n query = \"{0} AND ABS(HASH(pickup_datetime)) % 4 == {1}\".format(base_query, phase)\n else:\n query = \"{0} AND ABS(HASH(pickup_datetime)) % {1} == {2}\".format(base_query, EVERY_N, phase)\n \n return query\n\nquery = create_query(2, 100000)\ndf_valid = bq.Query(query).to_dataframe()\nprint_rmse(df_valid, 2.56, 'Final Validation Set')",
"Final Validation Set RMSE = 8.02608564676\n"
]
],
[
[
"The simple distance-based rule gives us an RMSE of <b>$8.03</b>. We have to beat this, of course, but you will find that simple rules of thumb like this can be surprisingly difficult to beat.\n\nLet's be ambitious, though, and make our goal to build ML models that have an RMSE of less than $6 on the test set.",
"_____no_output_____"
],
[
"Copyright 2016 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e75cdb6c878c9b0971c9a839b9c0acfbabd61c68 | 33,667 | ipynb | Jupyter Notebook | materials/week4/debugging.ipynb | yuchiaol/ESS-Python-Tutorial | dfa2c77ff16d066833eca753b6d5c847d4e63af5 | [
"MIT"
] | null | null | null | materials/week4/debugging.ipynb | yuchiaol/ESS-Python-Tutorial | dfa2c77ff16d066833eca753b6d5c847d4e63af5 | [
"MIT"
] | null | null | null | materials/week4/debugging.ipynb | yuchiaol/ESS-Python-Tutorial | dfa2c77ff16d066833eca753b6d5c847d4e63af5 | [
"MIT"
] | 1 | 2022-03-11T17:40:39.000Z | 2022-03-11T17:40:39.000Z | 94.83662 | 1,714 | 0.653578 | [
[
[
"https://www.digitalocean.com/community/tutorials/how-to-use-the-python-debugger\nhttps://docs.python.org/3/library/pdb.html",
"_____no_output_____"
]
],
[
[
"import pdb",
"_____no_output_____"
],
[
"def func():\n a = range(20)\n a[30] ",
"_____no_output_____"
],
[
"func()",
"_____no_output_____"
],
[
"%debug",
"> \u001b[0;32m<ipython-input-2-83c0785ed7fb>\u001b[0m(3)\u001b[0;36mfunc\u001b[0;34m()\u001b[0m\n\u001b[0;32m 1 \u001b[0;31m\u001b[0;32mdef\u001b[0m \u001b[0mfunc\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 2 \u001b[0;31m \u001b[0ma\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mrange\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m20\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m----> 3 \u001b[0;31m \u001b[0ma\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m30\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\nipdb> h\n\nDocumented commands (type help <topic>):\n========================================\nEOF cl disable interact next psource rv unt \na clear display j p q s until \nalias commands down jump pdef quit source up \nargs condition enable l pdoc r step w \nb cont exit list pfile restart tbreak whatis\nbreak continue h ll pinfo return u where \nbt d help longlist pinfo2 retval unalias \nc debug ignore n pp run undisplay\n\nMiscellaneous help topics:\n==========================\nexec pdb\n\nipdb> len(a)\n20\nipdb> p a\nrange(0, 20)\nipdb> l\n\u001b[1;32m 1 \u001b[0m\u001b[0;32mdef\u001b[0m \u001b[0mfunc\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2 \u001b[0m \u001b[0ma\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mrange\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m20\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 3 \u001b[0;31m \u001b[0ma\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m30\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\nipdb> q\n"
],
[
"def func():\n a = range(20)\n pdb.set_trace()\n a[30] ",
"_____no_output_____"
],
[
"func()",
"> <ipython-input-6-4fb9c0eb7a4e>(4)func()\n-> a[30]\n(Pdb) q\n"
],
[
"import xarray as xr",
"_____no_output_____"
],
[
"ds = xr.open_mfdataset('CAM*', decode_times=False)",
"_____no_output_____"
],
[
"a = 20\nds.sel(lat=a)",
"_____no_output_____"
],
[
"%debug",
"> \u001b[0;32m/Users/stephanrasp/repositories/ESS-Python-Tutorial/materials/week4/pandas/_libs/hashtable_class_helper.pxi\u001b[0m(339)\u001b[0;36mpandas._libs.hashtable.Float64HashTable.get_item (pandas/_libs/hashtable.c:7415)\u001b[0;34m()\u001b[0m\n\nipdb> u\n> \u001b[0;32m/Users/stephanrasp/anaconda/envs/py36_keras/lib/python3.6/site-packages/pandas/core/indexes/base.py\u001b[0m(2444)\u001b[0;36mget_loc\u001b[0;34m()\u001b[0m\n\u001b[0;32m 2442 \u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_engine\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mget_loc\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mkey\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 2443 \u001b[0;31m \u001b[0;32mexcept\u001b[0m \u001b[0mKeyError\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m-> 2444 \u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_engine\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mget_loc\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_maybe_cast_indexer\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mkey\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 2445 \u001b[0;31m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 2446 \u001b[0;31m \u001b[0mindexer\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mget_indexer\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mkey\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mmethod\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mmethod\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtolerance\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mtolerance\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\nipdb> \n> 
\u001b[0;32m/Users/stephanrasp/anaconda/envs/py36_keras/lib/python3.6/site-packages/pandas/core/indexes/numeric.py\u001b[0m(378)\u001b[0;36mget_loc\u001b[0;34m()\u001b[0m\n\u001b[0;32m 376 \u001b[0;31m \u001b[0;32mpass\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 377 \u001b[0;31m return super(Float64Index, self).get_loc(key, method=method,\n\u001b[0m\u001b[0;32m--> 378 \u001b[0;31m tolerance=tolerance)\n\u001b[0m\u001b[0;32m 379 \u001b[0;31m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 380 \u001b[0;31m \u001b[0;34m@\u001b[0m\u001b[0mcache_readonly\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\nipdb> u\n> \u001b[0;32m/Users/stephanrasp/anaconda/envs/py36_keras/lib/python3.6/site-packages/xarray/core/indexing.py\u001b[0m(95)\u001b[0;36mget_loc\u001b[0;34m()\u001b[0m\n\u001b[0;32m 93 \u001b[0;31m\u001b[0;32mdef\u001b[0m \u001b[0mget_loc\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mindex\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mlabel\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mmethod\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtolerance\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 94 \u001b[0;31m \u001b[0mkwargs\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0m_index_method_kwargs\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmethod\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtolerance\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m---> 95 \u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mindex\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mget_loc\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mlabel\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 96 \u001b[0;31m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 97 \u001b[0;31m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\nipdb> \n> 
\u001b[0;32m/Users/stephanrasp/anaconda/envs/py36_keras/lib/python3.6/site-packages/xarray/core/indexing.py\u001b[0m(165)\u001b[0;36mconvert_label_indexer\u001b[0;34m()\u001b[0m\n\u001b[0;32m 163 \u001b[0;31m \u001b[0mindexer\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnew_index\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mindex\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mget_loc_level\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mlabel\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mitem\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mlevel\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 164 \u001b[0;31m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m--> 165 \u001b[0;31m \u001b[0mindexer\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mget_loc\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mindex\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mlabel\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mitem\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mmethod\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtolerance\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 166 \u001b[0;31m \u001b[0;32melif\u001b[0m \u001b[0mlabel\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdtype\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mkind\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m'b'\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 167 \u001b[0;31m \u001b[0mindexer\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mlabel\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\nipdb> \n> \u001b[0;32m/Users/stephanrasp/anaconda/envs/py36_keras/lib/python3.6/site-packages/xarray/core/indexing.py\u001b[0m(236)\u001b[0;36mremap_label_indexers\u001b[0;34m()\u001b[0m\n\u001b[0;32m 234 \u001b[0;31m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 235 \u001b[0;31m 
idxr, new_idx = convert_label_indexer(index, label,\n\u001b[0m\u001b[0;32m--> 236 \u001b[0;31m dim, method, tolerance)\n\u001b[0m\u001b[0;32m 237 \u001b[0;31m \u001b[0mpos_indexers\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mdim\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0midxr\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 238 \u001b[0;31m \u001b[0;32mif\u001b[0m \u001b[0mnew_idx\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\nipdb> \n> \u001b[0;32m/Users/stephanrasp/anaconda/envs/py36_keras/lib/python3.6/site-packages/xarray/core/coordinates.py\u001b[0m(346)\u001b[0;36mremap_label_indexers\u001b[0;34m()\u001b[0m\n\u001b[0;32m 344 \u001b[0;31m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 345 \u001b[0;31m pos_indexers, new_indexes = indexing.remap_label_indexers(\n\u001b[0m\u001b[0;32m--> 346 \u001b[0;31m \u001b[0mobj\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mv_indexers\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mmethod\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mmethod\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtolerance\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mtolerance\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 347 \u001b[0;31m )\n\u001b[0m\u001b[0;32m 348 \u001b[0;31m \u001b[0;31m# attach indexer's coordinate to pos_indexers\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\nipdb> \n> \u001b[0;32m/Users/stephanrasp/anaconda/envs/py36_keras/lib/python3.6/site-packages/xarray/core/dataset.py\u001b[0m(1466)\u001b[0;36msel\u001b[0;34m()\u001b[0m\n\u001b[0;32m 1464 \u001b[0;31m \"\"\"\n\u001b[0m\u001b[0;32m 1465 \u001b[0;31m pos_indexers, new_indexes = remap_label_indexers(self, method,\n\u001b[0m\u001b[0;32m-> 1466 \u001b[0;31m tolerance, **indexers)\n\u001b[0m\u001b[0;32m 1467 \u001b[0;31m \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m 
\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0misel\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdrop\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mdrop\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mpos_indexers\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 1468 \u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mresult\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_replace_indexes\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mnew_indexes\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\nipdb> \n> \u001b[0;32m<ipython-input-13-49f59a850a2b>\u001b[0m(2)\u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[0;32m 1 \u001b[0;31m\u001b[0ma\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;36m20\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m----> 2 \u001b[0;31m\u001b[0mds\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msel\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mlat\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0ma\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\nipdb> p a\n20\nipdb> ds.lat\n<xarray.DataArray 'lat' (lat: 64)>\narray([-87.863799, -85.096527, -82.312913, -79.525607, -76.7369 , -73.947515,\n -71.157752, -68.367756, -65.577607, -62.787352, -59.99702 , -57.206632,\n -54.4162 , -51.625734, -48.835241, -46.044727, -43.254195, -40.463648,\n -37.67309 , -34.882521, -32.091944, -29.30136 , -26.510769, -23.720174,\n -20.929574, -18.138971, -15.348365, -12.557756, -9.767146, -6.976534,\n -4.185921, -1.395307, 1.395307, 4.185921, 6.976534, 9.767146,\n 12.557756, 15.348365, 18.138971, 20.929574, 23.720174, 26.510769,\n 29.30136 , 32.091944, 34.882521, 37.67309 , 40.463648, 43.254195,\n 46.044727, 48.835241, 51.625734, 54.4162 , 57.206632, 59.99702 ,\n 62.787352, 65.577607, 68.367756, 71.157752, 73.947515, 76.7369 ,\n 79.525607, 82.312913, 85.096527, 87.863799])\nCoordinates:\n * lat (lat) float64 -87.86 -85.1 -82.31 -79.53 -76.74 -73.95 -71.16 ...\nAttributes:\n long_name: latitude\n units: 
degrees_north\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75cddabc510e1b76a5b3e2027e7d33c2cb92862 | 307,864 | ipynb | Jupyter Notebook | DOC2VEC.ipynb | utkueray/CQA | 38f5957fb982845810a5f230f9ab18a171bc9fb7 | [
"MIT"
] | 1 | 2020-08-31T07:20:01.000Z | 2020-08-31T07:20:01.000Z | DOC2VEC.ipynb | zseda/CQA | 38f5957fb982845810a5f230f9ab18a171bc9fb7 | [
"MIT"
] | 5 | 2021-03-30T13:40:28.000Z | 2021-09-22T19:09:19.000Z | DOC2VEC.ipynb | zseda/CQA | 38f5957fb982845810a5f230f9ab18a171bc9fb7 | [
"MIT"
] | 2 | 2020-06-12T20:20:41.000Z | 2021-03-09T16:32:06.000Z | 28.324961 | 5,380 | 0.457069 | [
[
[
"import numpy as np\nimport heapq\nfrom operator import itemgetter\nimport numpy.linalg as LA\nimport xml.etree.ElementTree as et, pandas as pd, re\nfrom bs4 import BeautifulSoup\nimport gensim\nfrom markdown import markdown\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"xtree = et.parse('Posts.xml')\n\nxroot = xtree.getroot()\n\ndfCols = [\"Closed Date\", \"Favorite Count\", \"Comment Count\", \"Answer Count\", \"Tags\", \"Title\",\n \"Last Activity Date\", \"Owner User ID\", \"Body\", \"View Count\", \"Score\", \"Creation Date\", \"Post Type ID\", \n \"ID\", \"Parent ID\", \"Last Edit Date\", \"Last Editor User ID\", \"Accepted Answer ID\"]\ndfRows = []",
"_____no_output_____"
],
[
"for node in xroot:\n closedDate = node.attrib.get(\"ClosedDate\")\n favCount = node.attrib.get(\"FavoriteCount\")\n commentCount = node.attrib.get(\"CommentCount\")\n ansCount = node.attrib.get(\"AnswerCount\")\n tags = node.attrib.get(\"Tags\")\n title = node.attrib.get(\"Title\")\n lastActDate = node.attrib.get(\"LastActivityDate\")\n ownerUserID = node.attrib.get(\"OwnerUserId\")\n body = node.attrib.get(\"Body\")\n viewCount = node.attrib.get(\"ViewCount\") \n score = node.attrib.get(\"Score\") \n creationDate = node.attrib.get(\"CreationDate\") \n postTypeID = node.attrib.get(\"PostTypeId\") \n ID = node.attrib.get(\"Id\") \n parentID = node.attrib.get(\"ParentId\") \n lastEditDate = node.attrib.get(\"LastEditDate\") \n lastEditorUserID = node.attrib.get(\"LastEditorUserId\") \n acceptedAnswerID = node.attrib.get(\"AcceptedAnswerID\")\n \n dfRows.append({\"Closed Date\": closedDate, \"Favorite Count\": favCount, \"Comment Count\": commentCount,\n \"Answer Count\": ansCount, \"Tags\": tags, \"Title\": title, \"Last Activity Date\": lastActDate,\n \"Owner User ID\": ownerUserID, \"Body\": body, \"View Count\": viewCount, \"Score\": score, \n \"Creation Date\": creationDate, \"Post Type ID\": postTypeID, \"ID\": ID, \"Parent ID\": parentID,\n \"Last Edit Date\": lastEditDate, \"Last Editor User ID\": lastEditorUserID, \"Accepted Answer ID\": acceptedAnswerID})",
"_____no_output_____"
],
[
"out = pd.DataFrame(dfRows, columns=dfCols)\n\nout = out.fillna(0)\n\nout['Creation Date'] = pd.to_datetime(out['Creation Date'])\nout['Creation Date'] = out['Creation Date'].dt.strftime('%Y/%m/%d')\nout['Comment Count'] = out['Comment Count'].astype(int)\nout['Owner User ID'] = out['Owner User ID'].astype(int)\nout['Post Type ID'] = out['Post Type ID'].astype(int)\nout['Score'] = out['Score'].astype(int)\nout['Favorite Count'] = out['Favorite Count'].astype(int)\nout['Answer Count'] = out['Answer Count'].astype(int)\nout['View Count'] = out['View Count'].astype(int)\n\nanswers = out[(out['Post Type ID'] == 1)]\n\nanswers = answers[['ID','Creation Date','Tags','Title','Body']]\n\n# the Words column is Title and Body (plus Tags) merged together\nanswers['Words'] = answers[['Title', 'Body', 'Tags']].apply(lambda x: ' '.join(x), axis=1)\n\nanswers['Words'].apply(lambda x: ''.join(BeautifulSoup(markdown(x)).findAll(text=True)))\nanswers.head",
"_____no_output_____"
],
[
"size = len(answers.ID.to_list())",
"_____no_output_____"
],
[
"id_set = answers.ID.to_list() #Documents",
"_____no_output_____"
],
[
"def read_corpus(fname, tokens_only=False):\n for i, line in enumerate(fname):\n tokens = gensim.utils.simple_preprocess(line)\n if tokens_only:\n yield tokens\n else:\n # For training data, add tags\n yield gensim.models.doc2vec.TaggedDocument(tokens, [int(id_set[i])])",
"_____no_output_____"
],
[
"trainData =answers['Words'].tolist()#[:4068]\ntestData =answers['Words'].tolist()[4068:]\ntags = dict(zip(answers.ID.astype(int), answers.Tags))",
"_____no_output_____"
],
[
"train_corpus = list(read_corpus(trainData))\ntest_corpus = list(read_corpus(testData, tokens_only=True))",
"_____no_output_____"
],
[
"idTextDict = dict(zip(answers.ID, answers.Words))",
"_____no_output_____"
],
[
"#model = gensim.models.doc2vec.Doc2Vec(vector_size=100, min_count=1, epochs=200)\nmodel = gensim.models.doc2vec.Doc2Vec(min_count=1,window=5,vector_size=300,workers=5,alpha=0.025,min_alpha=0.00025,dm=1, epochs = 50)\n#model = gensim.models.doc2vec.Doc2Vec(min_count=2,window=15,vector_size=300,workers=5,alpha=0.025,min_alpha=0.00025,dm=0, epochs=100)\nmodel.build_vocab(train_corpus)\nmodel.train(train_corpus, total_examples=model.corpus_count, epochs=model.epochs)",
"_____no_output_____"
],
[
"doc_id = id_set.index(\"14119\")\nprint(doc_id)\ninferred_vector = model.infer_vector(test_corpus[doc_id-4068])\nsims = model.docvecs.most_similar([inferred_vector], topn=100)\nprint(id_set[id_set.index(\"14119\")])",
"4356\n14119\n"
],
[
"print(id_set.index(\"13425\"))\nid_set[id_set.index(\"13425\")]",
"4068\n"
],
[
"for (item,val) in sims:\n print(item, val)",
"14119 0.9364782571792603\n15932 0.37294119596481323\n1289 0.3690992593765259\n13398 0.3602584898471832\n15713 0.3595762550830841\n5274 0.3579294979572296\n7926 0.34869682788848877\n1420 0.34739869832992554\n1897 0.340761661529541\n3024 0.34038621187210083\n16741 0.3371763229370117\n6 0.33657312393188477\n2066 0.3354340195655823\n11208 0.33535096049308777\n13866 0.33177649974823\n3775 0.3302019238471985\n8854 0.32728591561317444\n5258 0.32612329721450806\n6161 0.3257138431072235\n12593 0.3225095868110657\n7 0.32203012704849243\n9924 0.31965571641921997\n2642 0.3181639313697815\n2048 0.31813257932662964\n6584 0.31744369864463806\n6421 0.3167300820350647\n13261 0.31494900584220886\n11500 0.31336528062820435\n7573 0.3126540184020996\n15449 0.31164583563804626\n1376 0.3091354966163635\n9439 0.3075100779533386\n2111 0.3053240180015564\n5018 0.3042410612106323\n4533 0.30351635813713074\n1501 0.30272164940834045\n1658 0.30258041620254517\n5156 0.3021986484527588\n2430 0.30189239978790283\n6125 0.3012242317199707\n1824 0.3003641963005066\n1976 0.29988300800323486\n6291 0.2997512221336365\n11523 0.2979367971420288\n1635 0.2969781160354614\n15866 0.29674333333969116\n7875 0.2967084050178528\n13018 0.29632657766342163\n9905 0.2956051826477051\n2518 0.295107901096344\n2900 0.2948864996433258\n9133 0.29335319995880127\n16667 0.2930881381034851\n11154 0.292486310005188\n140 0.29218900203704834\n9383 0.29157841205596924\n2578 0.2914215326309204\n3176 0.2904726266860962\n13233 0.2898673415184021\n80 0.28968435525894165\n9081 0.2890405058860779\n10621 0.2871703803539276\n2917 0.28714877367019653\n11008 0.2867233455181122\n3771 0.28575828671455383\n3226 0.28517425060272217\n2462 0.28491172194480896\n7268 0.2840617895126343\n7337 0.2838600277900696\n1853 0.2838209867477417\n13631 0.2836076617240906\n10997 0.2819984257221222\n1397 0.28101712465286255\n2646 0.2809758186340332\n12120 0.2801826596260071\n7642 0.27970370650291443\n10866 0.27927282452583313\n1617 0.2785053849220276\n2897 
0.27828747034072876\n2769 0.2778869569301605\n1648 0.27767637372016907\n191 0.2765301465988159\n3006 0.27621781826019287\n9103 0.2755886912345886\n103 0.2754529118537903\n3537 0.27532458305358887\n3950 0.27464228868484497\n8773 0.27435094118118286\n13273 0.274175763130188\n1592 0.2734326124191284\n91 0.27239882946014404\n11056 0.2719342112541199\n2841 0.27173617482185364\n15769 0.2714817523956299\n4008 0.27088794112205505\n28 0.27068468928337097\n7021 0.27050554752349854\n2020 0.2704281806945801\n3187 0.27024275064468384\n2347 0.2700803875923157\n"
],
[
"cosineResultDict = {}\nfor i in range(0, len(test_corpus)):\n inferred_vector = model.infer_vector(test_corpus[i])\n sims = model.docvecs.most_similar([inferred_vector], topn=100)\n cosineResultDict[int(id_set[4068+i])] = sims ",
"_____no_output_____"
],
[
"cosineResultDict[13425]",
"_____no_output_____"
],
[
"relatedId = {}\nwith open(\"relatedFrom13425_test.txt\") as f:\n for line in f:\n (key, val) = line.split(\",\")\n relatedId[int(key)] = [int(i) for i in val.strip().split()]\nrelatedId",
"_____no_output_____"
],
[
"crossCheckDict = {}\nbothRelatedAndSim = {}\nfor key in cosineResultDict.keys():\n counter = 0\n if key in relatedId.keys():\n for rel in relatedId[key]:\n for (simId,sim) in cosineResultDict[key]:\n if rel == simId:\n if key in bothRelatedAndSim.keys():\n bothRelatedAndSim[key].append((simId,sim))\n else:\n bothRelatedAndSim[key] = [(simId,sim)]\n\n counter += 1\n crossCheckDict[key] = counter\n",
"_____no_output_____"
],
[
"crossCheckDict",
"_____no_output_____"
]
],
[
[
"Examples of cases with zero related-question matches:\n14173 - the related questions are barely relevant to the question - our suggestions, ranked by similarity score, are more sensible\n14204 - no related questions at all - our suggestions are somewhat similar\n13849 - only 1-2 words match between the question and its related questions, with no semantic connection - our suggestions are more similar in context",
"_____no_output_____"
]
],
[
[
"plt.bar(range(len(crossCheckDict)), list(crossCheckDict.values()), align='center')\nplt.xticks(range(len(crossCheckDict)), list(crossCheckDict.keys()))\nplt.show()",
"_____no_output_____"
],
[
"listOfVals = crossCheckDict.values()",
"_____no_output_____"
],
[
"len(listOfVals)",
"_____no_output_____"
]
],
[
[
"On average, how many related questions are suggested by both Stack Exchange and our Doc2Vec model",
"_____no_output_____"
]
],
[
[
"sum(listOfVals)/len(listOfVals)",
"_____no_output_____"
],
[
"sum(listOfVals)",
"_____no_output_____"
],
[
"for key in bothRelatedAndSim.keys():\n print(\"https://ai.stackexchange.com/questions/\" + str(key))\n print([i for i,j in bothRelatedAndSim[key]])",
"https://ai.stackexchange.com/questions/13425\n[7446]\nhttps://ai.stackexchange.com/questions/13429\n[15910, 9829, 7580, 4984]\nhttps://ai.stackexchange.com/questions/13434\n[1742, 8637]\nhttps://ai.stackexchange.com/questions/13443\n[5836]\nhttps://ai.stackexchange.com/questions/13444\n[13531]\nhttps://ai.stackexchange.com/questions/13448\n[2127, 1488, 3457]\nhttps://ai.stackexchange.com/questions/13451\n[12954]\nhttps://ai.stackexchange.com/questions/13454\n[13573, 13549, 5769, 12649, 12366, 8929, 7949]\nhttps://ai.stackexchange.com/questions/13457\n[9197, 12053, 5454]\nhttps://ai.stackexchange.com/questions/13464\n[7685, 7832]\nhttps://ai.stackexchange.com/questions/13466\n[12469, 12408]\nhttps://ai.stackexchange.com/questions/13470\n[16850, 5769]\nhttps://ai.stackexchange.com/questions/13472\n[7550]\nhttps://ai.stackexchange.com/questions/13473\n[3518]\nhttps://ai.stackexchange.com/questions/13476\n[16076, 13028]\nhttps://ai.stackexchange.com/questions/13478\n[9197, 10797, 7707]\nhttps://ai.stackexchange.com/questions/13479\n[11125]\nhttps://ai.stackexchange.com/questions/13489\n[4357]\nhttps://ai.stackexchange.com/questions/13495\n[247, 4320, 6167]\nhttps://ai.stackexchange.com/questions/13499\n[13544]\nhttps://ai.stackexchange.com/questions/13502\n[13307, 13232, 12270, 10422]\nhttps://ai.stackexchange.com/questions/13504\n[3081]\nhttps://ai.stackexchange.com/questions/13507\n[8716, 7511]\nhttps://ai.stackexchange.com/questions/13508\n[13317]\nhttps://ai.stackexchange.com/questions/13515\n[9319, 5539, 8016]\nhttps://ai.stackexchange.com/questions/13522\n[12671, 7367, 5093]\nhttps://ai.stackexchange.com/questions/13523\n[13575, 12787]\nhttps://ai.stackexchange.com/questions/13524\n[2314, 13544, 11847]\nhttps://ai.stackexchange.com/questions/13526\n[13470]\nhttps://ai.stackexchange.com/questions/13527\n[13641]\nhttps://ai.stackexchange.com/questions/13529\n[13148, 13152, 11347]\nhttps://ai.stackexchange.com/questions/13531\n[12397, 
9624]\nhttps://ai.stackexchange.com/questions/13538\n[2462]\nhttps://ai.stackexchange.com/questions/13540\n[16597, 11103]\nhttps://ai.stackexchange.com/questions/13544\n[8518, 12671]\nhttps://ai.stackexchange.com/questions/13545\n[5318]\nhttps://ai.stackexchange.com/questions/13549\n[16173, 13738, 13454]\nhttps://ai.stackexchange.com/questions/13550\n[13695, 11976]\nhttps://ai.stackexchange.com/questions/13551\n[15502]\nhttps://ai.stackexchange.com/questions/13552\n[10053, 12397, 13531, 5185]\nhttps://ai.stackexchange.com/questions/13554\n[10975, 11669]\nhttps://ai.stackexchange.com/questions/13555\n[8212, 7832]\nhttps://ai.stackexchange.com/questions/13557\n[4743]\nhttps://ai.stackexchange.com/questions/13560\n[14153, 13994, 13710]\nhttps://ai.stackexchange.com/questions/13562\n[9019]\nhttps://ai.stackexchange.com/questions/13566\n[35]\nhttps://ai.stackexchange.com/questions/13567\n[5904, 5818]\nhttps://ai.stackexchange.com/questions/13568\n[12738]\nhttps://ai.stackexchange.com/questions/13570\n[5081]\nhttps://ai.stackexchange.com/questions/13572\n[1462]\nhttps://ai.stackexchange.com/questions/13573\n[6934, 13454, 5769, 12649, 5107]\nhttps://ai.stackexchange.com/questions/13575\n[13523]\nhttps://ai.stackexchange.com/questions/13577\n[4085, 13555]\nhttps://ai.stackexchange.com/questions/13580\n[4247]\nhttps://ai.stackexchange.com/questions/13593\n[16038]\nhttps://ai.stackexchange.com/questions/13595\n[13572, 3581]\nhttps://ai.stackexchange.com/questions/13596\n[5057]\nhttps://ai.stackexchange.com/questions/13600\n[16353, 4354]\nhttps://ai.stackexchange.com/questions/13604\n[15843, 11485, 11337]\nhttps://ai.stackexchange.com/questions/13605\n[10675]\nhttps://ai.stackexchange.com/questions/13612\n[94]\nhttps://ai.stackexchange.com/questions/13613\n[5111]\nhttps://ai.stackexchange.com/questions/13615\n[5011]\nhttps://ai.stackexchange.com/questions/13617\n[14175, 10579, 
10303]\nhttps://ai.stackexchange.com/questions/13619\n[7339]\nhttps://ai.stackexchange.com/questions/13620\n[12451]\nhttps://ai.stackexchange.com/questions/13627\n[7721, 14326]\nhttps://ai.stackexchange.com/questions/13631\n[11542, 2841, 9133]\nhttps://ai.stackexchange.com/questions/13642\n[13549, 9751, 5991, 5606]\nhttps://ai.stackexchange.com/questions/13643\n[6503, 13487]\nhttps://ai.stackexchange.com/questions/13644\n[3172]\nhttps://ai.stackexchange.com/questions/13647\n[15743, 7916]\nhttps://ai.stackexchange.com/questions/13648\n[15761]\nhttps://ai.stackexchange.com/questions/13657\n[7634]\nhttps://ai.stackexchange.com/questions/13660\n[13371, 11596, 6783]\nhttps://ai.stackexchange.com/questions/13662\n[13572]\nhttps://ai.stackexchange.com/questions/13668\n[16606, 13349, 6126, 12911, 7877]\nhttps://ai.stackexchange.com/questions/13671\n[9909, 2637]\nhttps://ai.stackexchange.com/questions/13683\n[16593]\nhttps://ai.stackexchange.com/questions/13689\n[12487, 9270]\nhttps://ai.stackexchange.com/questions/13692\n[13738]\nhttps://ai.stackexchange.com/questions/13694\n[3291, 3548]\nhttps://ai.stackexchange.com/questions/13697\n[6460]\nhttps://ai.stackexchange.com/questions/13699\n[9392]\nhttps://ai.stackexchange.com/questions/13706\n[15616, 16570, 14090, 12340]\nhttps://ai.stackexchange.com/questions/13709\n[13619, 7707, 1909]\nhttps://ai.stackexchange.com/questions/13710\n[14153, 13994, 13560]\nhttps://ai.stackexchange.com/questions/13725\n[15590]\nhttps://ai.stackexchange.com/questions/13726\n[16535, 16493]\nhttps://ai.stackexchange.com/questions/13732\n[15589, 12271]\nhttps://ai.stackexchange.com/questions/13735\n[16542, 4663]\nhttps://ai.stackexchange.com/questions/13738\n[4683]\nhttps://ai.stackexchange.com/questions/13739\n[16855, 16570, 12340, 8546, 7470]\nhttps://ai.stackexchange.com/questions/13741\n[6488]\nhttps://ai.stackexchange.com/questions/13747\n[1963, 
4409]\nhttps://ai.stackexchange.com/questions/13748\n[2871]\nhttps://ai.stackexchange.com/questions/13750\n[6622, 13944]\nhttps://ai.stackexchange.com/questions/13751\n[11557]\nhttps://ai.stackexchange.com/questions/13755\n[10529, 10549]\nhttps://ai.stackexchange.com/questions/13756\n[16175, 3581]\nhttps://ai.stackexchange.com/questions/13762\n[13886, 13887]\nhttps://ai.stackexchange.com/questions/13771\n[12948, 5638]\nhttps://ai.stackexchange.com/questions/13772\n[4683, 12042]\nhttps://ai.stackexchange.com/questions/13776\n[11929]\nhttps://ai.stackexchange.com/questions/13777\n[5322]\nhttps://ai.stackexchange.com/questions/13785\n[16157]\nhttps://ai.stackexchange.com/questions/13790\n[7723]\nhttps://ai.stackexchange.com/questions/13793\n[7940]\nhttps://ai.stackexchange.com/questions/13794\n[1742, 35]\nhttps://ai.stackexchange.com/questions/13799\n[12216, 16684]\nhttps://ai.stackexchange.com/questions/13800\n[11929, 2517]\nhttps://ai.stackexchange.com/questions/13806\n[13793, 10830, 8251]\nhttps://ai.stackexchange.com/questions/13819\n[15546]\nhttps://ai.stackexchange.com/questions/13820\n[13596]\nhttps://ai.stackexchange.com/questions/13824\n[9425]\nhttps://ai.stackexchange.com/questions/13830\n[5392]\nhttps://ai.stackexchange.com/questions/13832\n[14205, 16506, 14096]\nhttps://ai.stackexchange.com/questions/13836\n[13531, 7362, 6581, 4666]\nhttps://ai.stackexchange.com/questions/13837\n[13460]\nhttps://ai.stackexchange.com/questions/13840\n[2672]\nhttps://ai.stackexchange.com/questions/13842\n[5769]\nhttps://ai.stackexchange.com/questions/13845\n[13523, 2192]\nhttps://ai.stackexchange.com/questions/13850\n[16848, 16131, 10555]\nhttps://ai.stackexchange.com/questions/13852\n[11293]\nhttps://ai.stackexchange.com/questions/13858\n[14280, 2192]\nhttps://ai.stackexchange.com/questions/13861\n[6176]\nhttps://ai.stackexchange.com/questions/13862\n[16803]\nhttps://ai.stackexchange.com/questions/13865\n[9439, 4949, 
1397]\nhttps://ai.stackexchange.com/questions/13866\n[7137]\nhttps://ai.stackexchange.com/questions/13867\n[6953]\nhttps://ai.stackexchange.com/questions/13870\n[5970]\nhttps://ai.stackexchange.com/questions/13875\n[16175]\nhttps://ai.stackexchange.com/questions/13880\n[6274]\nhttps://ai.stackexchange.com/questions/13885\n[15497, 3006]\nhttps://ai.stackexchange.com/questions/13886\n[13762]\nhttps://ai.stackexchange.com/questions/13887\n[13762]\nhttps://ai.stackexchange.com/questions/13891\n[15778, 2474]\nhttps://ai.stackexchange.com/questions/13893\n[11924, 10797, 8605]\nhttps://ai.stackexchange.com/questions/13895\n[16515, 2524, 7434]\nhttps://ai.stackexchange.com/questions/13901\n[15759, 13560]\nhttps://ai.stackexchange.com/questions/13902\n[13276]\nhttps://ai.stackexchange.com/questions/13903\n[13867, 12448, 10164]\nhttps://ai.stackexchange.com/questions/13907\n[9638, 9897]\nhttps://ai.stackexchange.com/questions/13908\n[7683, 7127]\nhttps://ai.stackexchange.com/questions/13910\n[13573, 5769]\nhttps://ai.stackexchange.com/questions/13917\n[11438]\nhttps://ai.stackexchange.com/questions/13921\n[16457, 15796]\nhttps://ai.stackexchange.com/questions/13925\n[6069]\nhttps://ai.stackexchange.com/questions/13926\n[6571]\nhttps://ai.stackexchange.com/questions/13934\n[9105]\nhttps://ai.stackexchange.com/questions/13944\n[5208, 2612]\nhttps://ai.stackexchange.com/questions/13947\n[15676, 13988, 11825, 11328]\nhttps://ai.stackexchange.com/questions/13948\n[6231, 10247]\nhttps://ai.stackexchange.com/questions/13953\n[16854]\nhttps://ai.stackexchange.com/questions/13954\n[15715, 12357]\nhttps://ai.stackexchange.com/questions/13957\n[4683, 13738, 5769, 12649, 3953]\nhttps://ai.stackexchange.com/questions/13959\n[14153]\nhttps://ai.stackexchange.com/questions/13966\n[5769, 12418, 11937, 7949]\nhttps://ai.stackexchange.com/questions/13968\n[12053, 9502, 3403]\nhttps://ai.stackexchange.com/questions/13970\n[13595, 8196]\nhttps://ai.stackexchange.com/questions/13973\n[16294, 
10579]\nhttps://ai.stackexchange.com/questions/13974\n[13973, 10579]\nhttps://ai.stackexchange.com/questions/13975\n[13692, 13573, 13549, 13454, 5769]\nhttps://ai.stackexchange.com/questions/13977\n[7942]\nhttps://ai.stackexchange.com/questions/13978\n[2526]\nhttps://ai.stackexchange.com/questions/13982\n[12759]\nhttps://ai.stackexchange.com/questions/13983\n[9152]\nhttps://ai.stackexchange.com/questions/13986\n[11381]\nhttps://ai.stackexchange.com/questions/13988\n[15676, 11825, 6144, 11328]\nhttps://ai.stackexchange.com/questions/13994\n[14153, 13710]\nhttps://ai.stackexchange.com/questions/13999\n[8327]\nhttps://ai.stackexchange.com/questions/14001\n[11442, 7739]\nhttps://ai.stackexchange.com/questions/14007\n[7389]\nhttps://ai.stackexchange.com/questions/14009\n[4683, 13454]\nhttps://ai.stackexchange.com/questions/14013\n[9914, 8240]\nhttps://ai.stackexchange.com/questions/14020\n[9141, 10540, 8126]\nhttps://ai.stackexchange.com/questions/14032\n[14041, 4167]\nhttps://ai.stackexchange.com/questions/14033\n[4286]\nhttps://ai.stackexchange.com/questions/14041\n[14032, 10472]\nhttps://ai.stackexchange.com/questions/14047\n[5502]\nhttps://ai.stackexchange.com/questions/14052\n[7707]\nhttps://ai.stackexchange.com/questions/14054\n[16283]\nhttps://ai.stackexchange.com/questions/14055\n[14162, 12759, 12927, 11987]\nhttps://ai.stackexchange.com/questions/14056\n[6662, 7715]\nhttps://ai.stackexchange.com/questions/14059\n[13460]\nhttps://ai.stackexchange.com/questions/14061\n[15645, 8038]\nhttps://ai.stackexchange.com/questions/14072\n[11055, 10329, 8251, 7736]\nhttps://ai.stackexchange.com/questions/14073\n[12247, 10303, 8348]\nhttps://ai.stackexchange.com/questions/14078\n[5427]\nhttps://ai.stackexchange.com/questions/14081\n[8168]\nhttps://ai.stackexchange.com/questions/14085\n[13246, 8326]\nhttps://ai.stackexchange.com/questions/14090\n[2]\nhttps://ai.stackexchange.com/questions/14096\n[13832, 7949, 2037]\nhttps://ai.stackexchange.com/questions/14098\n[16608, 
4700]\nhttps://ai.stackexchange.com/questions/14102\n[12810]\nhttps://ai.stackexchange.com/questions/14103\n[13604, 11337]\nhttps://ai.stackexchange.com/questions/14112\n[3172, 13434]\nhttps://ai.stackexchange.com/questions/14113\n[6546]\nhttps://ai.stackexchange.com/questions/14119\n[7573, 15932, 11208, 5258, 1897]\nhttps://ai.stackexchange.com/questions/14132\n[13119, 12472]\nhttps://ai.stackexchange.com/questions/14135\n[7646]\nhttps://ai.stackexchange.com/questions/14137\n[14191]\nhttps://ai.stackexchange.com/questions/14140\n[12049]\nhttps://ai.stackexchange.com/questions/14145\n[5246]\nhttps://ai.stackexchange.com/questions/14147\n[5370]\nhttps://ai.stackexchange.com/questions/14150\n[12499]\nhttps://ai.stackexchange.com/questions/14151\n[4760]\nhttps://ai.stackexchange.com/questions/14153\n[16695, 13994, 13560]\nhttps://ai.stackexchange.com/questions/14159\n[10549]\nhttps://ai.stackexchange.com/questions/14162\n[8251]\nhttps://ai.stackexchange.com/questions/14163\n[12411]\nhttps://ai.stackexchange.com/questions/14165\n[8259, 6167, 4849]\nhttps://ai.stackexchange.com/questions/14174\n[12523, 8929, 4061]\nhttps://ai.stackexchange.com/questions/14175\n[4456, 13617, 11845, 11078]\nhttps://ai.stackexchange.com/questions/14178\n[16176]\nhttps://ai.stackexchange.com/questions/14184\n[5167]\nhttps://ai.stackexchange.com/questions/14188\n[11593]\nhttps://ai.stackexchange.com/questions/14191\n[15372, 14137, 13968]\nhttps://ai.stackexchange.com/questions/14193\n[13399, 5539, 6827]\nhttps://ai.stackexchange.com/questions/14205\n[16542]\nhttps://ai.stackexchange.com/questions/14206\n[14211]\nhttps://ai.stackexchange.com/questions/14207\n[4]\nhttps://ai.stackexchange.com/questions/14212\n[14296, 12135]\nhttps://ai.stackexchange.com/questions/14218\n[15683]\nhttps://ai.stackexchange.com/questions/14219\n[15621, 15712, 14326, 6571]\nhttps://ai.stackexchange.com/questions/14223\n[13392]\nhttps://ai.stackexchange.com/questions/14224\n[13261, 
1824]\nhttps://ai.stackexchange.com/questions/14225\n[11760]\nhttps://ai.stackexchange.com/questions/14228\n[10975]\nhttps://ai.stackexchange.com/questions/14230\n[11760]\nhttps://ai.stackexchange.com/questions/14236\n[16119, 3817]\nhttps://ai.stackexchange.com/questions/14243\n[1970, 3490, 1809]\nhttps://ai.stackexchange.com/questions/14248\n[3469, 13557, 12759, 11987]\nhttps://ai.stackexchange.com/questions/14250\n[11328]\nhttps://ai.stackexchange.com/questions/14254\n[15965, 16328]\nhttps://ai.stackexchange.com/questions/14263\n[9392]\nhttps://ai.stackexchange.com/questions/14267\n[5452]\nhttps://ai.stackexchange.com/questions/14280\n[15621, 6753, 6961]\nhttps://ai.stackexchange.com/questions/14284\n[15666, 14296, 14212, 12135, 7638]\nhttps://ai.stackexchange.com/questions/14290\n[3065]\nhttps://ai.stackexchange.com/questions/14293\n[16608, 16463]\nhttps://ai.stackexchange.com/questions/14296\n[15666, 14212, 12135, 10492, 8605]\nhttps://ai.stackexchange.com/questions/14299\n[15562]\nhttps://ai.stackexchange.com/questions/14303\n[5960]\nhttps://ai.stackexchange.com/questions/14305\n[15468, 13376, 8844, 2514, 6728]\nhttps://ai.stackexchange.com/questions/14310\n[12171, 4700]\nhttps://ai.stackexchange.com/questions/14321\n[15761, 14159, 10529, 4740]\nhttps://ai.stackexchange.com/questions/14322\n[12194]\nhttps://ai.stackexchange.com/questions/14325\n[5408]\nhttps://ai.stackexchange.com/questions/14326\n[15712, 14219]\nhttps://ai.stackexchange.com/questions/14329\n[12366, 6526]\nhttps://ai.stackexchange.com/questions/14332\n[12226, 9197, 5454]\nhttps://ai.stackexchange.com/questions/14333\n[15737, 8427]\nhttps://ai.stackexchange.com/questions/14335\n[16605]\nhttps://ai.stackexchange.com/questions/14341\n[9786, 12345, 249]\nhttps://ai.stackexchange.com/questions/14342\n[12024]\nhttps://ai.stackexchange.com/questions/14346\n[5814]\nhttps://ai.stackexchange.com/questions/14348\n[5462, 13399]\nhttps://ai.stackexchange.com/questions/14354\n[9442, 
7962]\nhttps://ai.stackexchange.com/questions/14355\n[11609, 2980, 5810]\nhttps://ai.stackexchange.com/questions/14357\n[10948]\nhttps://ai.stackexchange.com/questions/14358\n[5656]\nhttps://ai.stackexchange.com/questions/14359\n[3802]\nhttps://ai.stackexchange.com/questions/14363\n[13161, 4750, 6728]\nhttps://ai.stackexchange.com/questions/15366\n[15546]\nhttps://ai.stackexchange.com/questions/15368\n[14351]\nhttps://ai.stackexchange.com/questions/15370\n[4233]\nhttps://ai.stackexchange.com/questions/15371\n[9030]\nhttps://ai.stackexchange.com/questions/15374\n[5462]\nhttps://ai.stackexchange.com/questions/15375\n[6083]\nhttps://ai.stackexchange.com/questions/15379\n[12270]\nhttps://ai.stackexchange.com/questions/15388\n[12402]\nhttps://ai.stackexchange.com/questions/15389\n[15715]\nhttps://ai.stackexchange.com/questions/15392\n[8258, 1393]\nhttps://ai.stackexchange.com/questions/15393\n[9491]\nhttps://ai.stackexchange.com/questions/15397\n[5729]\nhttps://ai.stackexchange.com/questions/15403\n[5814, 5336]\nhttps://ai.stackexchange.com/questions/15408\n[15509, 9319, 4425]\nhttps://ai.stackexchange.com/questions/15409\n[13819]\nhttps://ai.stackexchange.com/questions/15415\n[13706]\nhttps://ai.stackexchange.com/questions/15416\n[4650]\nhttps://ai.stackexchange.com/questions/15417\n[8761]\nhttps://ai.stackexchange.com/questions/15422\n[13706, 10975]\nhttps://ai.stackexchange.com/questions/15426\n[6383]\nhttps://ai.stackexchange.com/questions/15431\n[10329, 6573]\nhttps://ai.stackexchange.com/questions/15434\n[7371]\nhttps://ai.stackexchange.com/questions/15435\n[12600]\nhttps://ai.stackexchange.com/questions/15441\n[8821, 7314]\nhttps://ai.stackexchange.com/questions/15444\n[7638, 10422]\nhttps://ai.stackexchange.com/questions/15446\n[16848, 9306]\nhttps://ai.stackexchange.com/questions/15449\n[10413, 1964, 
2347]\nhttps://ai.stackexchange.com/questions/15461\n[4677]\nhttps://ai.stackexchange.com/questions/15463\n[11261]\nhttps://ai.stackexchange.com/questions/15468\n[13376]\nhttps://ai.stackexchange.com/questions/15473\n[7390, 11787, 7580]\nhttps://ai.stackexchange.com/questions/15475\n[16533]\nhttps://ai.stackexchange.com/questions/15490\n[11178, 11328]\nhttps://ai.stackexchange.com/questions/15491\n[13156, 12284]\nhttps://ai.stackexchange.com/questions/15497\n[16815, 2324, 5410]\nhttps://ai.stackexchange.com/questions/15500\n[12061, 10323, 9814]\nhttps://ai.stackexchange.com/questions/15501\n[4219]\nhttps://ai.stackexchange.com/questions/15502\n[8320]\nhttps://ai.stackexchange.com/questions/15506\n[6503, 13487]\nhttps://ai.stackexchange.com/questions/15508\n[7707, 3403]\nhttps://ai.stackexchange.com/questions/15509\n[10500]\nhttps://ai.stackexchange.com/questions/15522\n[8844, 2655]\nhttps://ai.stackexchange.com/questions/15526\n[12472, 6486, 2733]\nhttps://ai.stackexchange.com/questions/15528\n[6488]\nhttps://ai.stackexchange.com/questions/15529\n[15396, 11337]\nhttps://ai.stackexchange.com/questions/15544\n[10406]\nhttps://ai.stackexchange.com/questions/15554\n[16756, 16502]\nhttps://ai.stackexchange.com/questions/15555\n[2250]\nhttps://ai.stackexchange.com/questions/15558\n[16391]\nhttps://ai.stackexchange.com/questions/15562\n[5536]\nhttps://ai.stackexchange.com/questions/15565\n[2474]\nhttps://ai.stackexchange.com/questions/15577\n[10617]\nhttps://ai.stackexchange.com/questions/15578\n[14280]\nhttps://ai.stackexchange.com/questions/15579\n[15737, 13544]\nhttps://ai.stackexchange.com/questions/15589\n[13732]\nhttps://ai.stackexchange.com/questions/15601\n[8076]\nhttps://ai.stackexchange.com/questions/15612\n[3403]\nhttps://ai.stackexchange.com/questions/15613\n[58]\nhttps://ai.stackexchange.com/questions/15616\n[16570, 5400]\nhttps://ai.stackexchange.com/questions/15617\n[12926, 12226, 11992, 6669, 12053, 
8605]\nhttps://ai.stackexchange.com/questions/15618\n[8215, 14117]\nhttps://ai.stackexchange.com/questions/15621\n[10136, 6961, 5486]\nhttps://ai.stackexchange.com/questions/15622\n[11674, 9180]\nhttps://ai.stackexchange.com/questions/15623\n[9624]\nhttps://ai.stackexchange.com/questions/15624\n[3497]\nhttps://ai.stackexchange.com/questions/15625\n[8258, 12736, 8588]\nhttps://ai.stackexchange.com/questions/15626\n[15612, 5221, 5051]\nhttps://ai.stackexchange.com/questions/15627\n[12878, 8414, 4217]\nhttps://ai.stackexchange.com/questions/15629\n[7715, 3629]\nhttps://ai.stackexchange.com/questions/15631\n[10021]\nhttps://ai.stackexchange.com/questions/15632\n[12411, 16738, 11759, 2342, 13080, 6026, 8821]\nhttps://ai.stackexchange.com/questions/15634\n[1946]\nhttps://ai.stackexchange.com/questions/15637\n[13437]\nhttps://ai.stackexchange.com/questions/15639\n[6488]\nhttps://ai.stackexchange.com/questions/15642\n[12881]\nhttps://ai.stackexchange.com/questions/15645\n[3938]\nhttps://ai.stackexchange.com/questions/15648\n[12764]\nhttps://ai.stackexchange.com/questions/15650\n[1742]\nhttps://ai.stackexchange.com/questions/15658\n[4907]\nhttps://ai.stackexchange.com/questions/15663\n[15666, 9518]\nhttps://ai.stackexchange.com/questions/15666\n[14296]\nhttps://ai.stackexchange.com/questions/15676\n[13988]\nhttps://ai.stackexchange.com/questions/15683\n[3731]\nhttps://ai.stackexchange.com/questions/15685\n[9106, 7195]\nhttps://ai.stackexchange.com/questions/15686\n[13662, 12313]\nhttps://ai.stackexchange.com/questions/15688\n[13097]\nhttps://ai.stackexchange.com/questions/15693\n[7057]\nhttps://ai.stackexchange.com/questions/15695\n[3243]\nhttps://ai.stackexchange.com/questions/15699\n[16667, 16182, 15666, 13377, 6721]\nhttps://ai.stackexchange.com/questions/15701\n[11759, 13080, 16375]\nhttps://ai.stackexchange.com/questions/15707\n[12600]\nhttps://ai.stackexchange.com/questions/15708\n[5057, 13088, 10772, 7640, 
6721]\nhttps://ai.stackexchange.com/questions/15710\n[14246]\nhttps://ai.stackexchange.com/questions/15711\n[54, 123]\nhttps://ai.stackexchange.com/questions/15712\n[6753, 14219]\nhttps://ai.stackexchange.com/questions/15714\n[6927]\nhttps://ai.stackexchange.com/questions/15715\n[12878, 8339]\nhttps://ai.stackexchange.com/questions/15720\n[10948]\nhttps://ai.stackexchange.com/questions/15742\n[13988, 6571, 11178, 11328]\nhttps://ai.stackexchange.com/questions/15751\n[46]\nhttps://ai.stackexchange.com/questions/15754\n[11612]\nhttps://ai.stackexchange.com/questions/15755\n[16316, 1462]\nhttps://ai.stackexchange.com/questions/15761\n[7685, 10555, 10549]\nhttps://ai.stackexchange.com/questions/15769\n[80, 2427]\nhttps://ai.stackexchange.com/questions/15772\n[3345]\nhttps://ai.stackexchange.com/questions/15776\n[12914, 74]\nhttps://ai.stackexchange.com/questions/15778\n[13479, 7145]\nhttps://ai.stackexchange.com/questions/15790\n[1462, 2634]\nhttps://ai.stackexchange.com/questions/15793\n[8296, 3442, 7201, 6460]\nhttps://ai.stackexchange.com/questions/15796\n[4456]\nhttps://ai.stackexchange.com/questions/15797\n[12850]\nhttps://ai.stackexchange.com/questions/15802\n[16807, 9076]\nhttps://ai.stackexchange.com/questions/15808\n[11407, 6343]\nhttps://ai.stackexchange.com/questions/15815\n[4209, 4257]\nhttps://ai.stackexchange.com/questions/15816\n[3817, 13399]\nhttps://ai.stackexchange.com/questions/15817\n[10551]\nhttps://ai.stackexchange.com/questions/15818\n[16226, 8061]\nhttps://ai.stackexchange.com/questions/15837\n[5502]\nhttps://ai.stackexchange.com/questions/15840\n[16266]\nhttps://ai.stackexchange.com/questions/15843\n[8063]\nhttps://ai.stackexchange.com/questions/15848\n[3751]\nhttps://ai.stackexchange.com/questions/15850\n[5741]\nhttps://ai.stackexchange.com/questions/15852\n[15459, 8560, 12524]\nhttps://ai.stackexchange.com/questions/15861\n[12558, 
2942]\nhttps://ai.stackexchange.com/questions/15866\n[13398]\nhttps://ai.stackexchange.com/questions/15874\n[16214, 12558]\nhttps://ai.stackexchange.com/questions/15885\n[15912]\nhttps://ai.stackexchange.com/questions/15890\n[1393]\nhttps://ai.stackexchange.com/questions/15894\n[2158]\nhttps://ai.stackexchange.com/questions/15897\n[9613]\nhttps://ai.stackexchange.com/questions/15907\n[16707]\nhttps://ai.stackexchange.com/questions/15910\n[13307, 9590]\nhttps://ai.stackexchange.com/questions/15912\n[15885]\nhttps://ai.stackexchange.com/questions/15916\n[13324]\nhttps://ai.stackexchange.com/questions/15918\n[16157, 14212]\nhttps://ai.stackexchange.com/questions/15929\n[12311]\nhttps://ai.stackexchange.com/questions/15932\n[7427, 11208, 9133]\nhttps://ai.stackexchange.com/questions/15936\n[15589, 9639, 4672, 7962]\nhttps://ai.stackexchange.com/questions/15939\n[15737, 12544]\nhttps://ai.stackexchange.com/questions/15942\n[15850]\nhttps://ai.stackexchange.com/questions/15949\n[15939]\nhttps://ai.stackexchange.com/questions/15955\n[10203]\nhttps://ai.stackexchange.com/questions/15959\n[16722, 15808]\nhttps://ai.stackexchange.com/questions/15964\n[15971]\nhttps://ai.stackexchange.com/questions/15970\n[11760, 2588]\nhttps://ai.stackexchange.com/questions/15971\n[16803, 15964]\nhttps://ai.stackexchange.com/questions/15973\n[13650]\nhttps://ai.stackexchange.com/questions/15976\n[11816]\nhttps://ai.stackexchange.com/questions/15977\n[7966]\nhttps://ai.stackexchange.com/questions/15980\n[11405]\nhttps://ai.stackexchange.com/questions/15983\n[16087]\nhttps://ai.stackexchange.com/questions/15984\n[13706, 12965]\nhttps://ai.stackexchange.com/questions/15994\n[4048]\nhttps://ai.stackexchange.com/questions/15999\n[9137]\nhttps://ai.stackexchange.com/questions/16006\n[4219, 6662]\nhttps://ai.stackexchange.com/questions/16008\n[9197, 10303, 10352]\nhttps://ai.stackexchange.com/questions/16014\n[6468, 
7088]\nhttps://ai.stackexchange.com/questions/16017\n[9708]\nhttps://ai.stackexchange.com/questions/16023\n[9105]\nhttps://ai.stackexchange.com/questions/16025\n[7940]\nhttps://ai.stackexchange.com/questions/16036\n[9106, 4672]\nhttps://ai.stackexchange.com/questions/16037\n[16176, 11150, 10446, 4283]\nhttps://ai.stackexchange.com/questions/16038\n[8546]\nhttps://ai.stackexchange.com/questions/16045\n[2524]\nhttps://ai.stackexchange.com/questions/16051\n[10048, 12283, 4822]\nhttps://ai.stackexchange.com/questions/16052\n[9639]\nhttps://ai.stackexchange.com/questions/16054\n[12769, 15992, 8844, 2127, 7437]\nhttps://ai.stackexchange.com/questions/16057\n[12490]\nhttps://ai.stackexchange.com/questions/16061\n[5527]\nhttps://ai.stackexchange.com/questions/16062\n[2959, 3335]\nhttps://ai.stackexchange.com/questions/16067\n[16538, 3368, 2693]\nhttps://ai.stackexchange.com/questions/16069\n[9518, 5098]\nhttps://ai.stackexchange.com/questions/16072\n[13149, 13179, 11374]\nhttps://ai.stackexchange.com/questions/16075\n[10724, 10344]\nhttps://ai.stackexchange.com/questions/16077\n[11318]\nhttps://ai.stackexchange.com/questions/16104\n[9425, 7817]\nhttps://ai.stackexchange.com/questions/16108\n[14246]\nhttps://ai.stackexchange.com/questions/16111\n[12640, 8605]\nhttps://ai.stackexchange.com/questions/16112\n[16128]\nhttps://ai.stackexchange.com/questions/16119\n[15993]\nhttps://ai.stackexchange.com/questions/16120\n[12673, 11554]\nhttps://ai.stackexchange.com/questions/16124\n[5325, 12612]\nhttps://ai.stackexchange.com/questions/16128\n[16112, 10003, 8038]\nhttps://ai.stackexchange.com/questions/16131\n[14296, 13619, 7524]\nhttps://ai.stackexchange.com/questions/16136\n[16025, 16322, 7680]\nhttps://ai.stackexchange.com/questions/16138\n[88, 15731]\nhttps://ai.stackexchange.com/questions/16145\n[11079, 9278]\nhttps://ai.stackexchange.com/questions/16148\n[16440, 
3065]\nhttps://ai.stackexchange.com/questions/16152\n[10272]\nhttps://ai.stackexchange.com/questions/16157\n[15918]\nhttps://ai.stackexchange.com/questions/16160\n[11718]\nhttps://ai.stackexchange.com/questions/16161\n[13619, 13119, 7707, 8267]\nhttps://ai.stackexchange.com/questions/16162\n[11542]\nhttps://ai.stackexchange.com/questions/16169\n[8274]\nhttps://ai.stackexchange.com/questions/16172\n[7853, 4711]\nhttps://ai.stackexchange.com/questions/16173\n[13692]\nhttps://ai.stackexchange.com/questions/16175\n[16435]\nhttps://ai.stackexchange.com/questions/16176\n[16037, 9010, 3629]\nhttps://ai.stackexchange.com/questions/16177\n[16722, 16184]\nhttps://ai.stackexchange.com/questions/16179\n[12182, 10003]\nhttps://ai.stackexchange.com/questions/16180\n[6488]\nhttps://ai.stackexchange.com/questions/16182\n[11277]\nhttps://ai.stackexchange.com/questions/16183\n[16147]\nhttps://ai.stackexchange.com/questions/16184\n[16177]\nhttps://ai.stackexchange.com/questions/16212\n[16225]\nhttps://ai.stackexchange.com/questions/16213\n[6343]\nhttps://ai.stackexchange.com/questions/16217\n[3110]\nhttps://ai.stackexchange.com/questions/16218\n[1978]\nhttps://ai.stackexchange.com/questions/16219\n[16834, 16322, 10531, 8605, 7896]\nhttps://ai.stackexchange.com/questions/16221\n[3288]\nhttps://ai.stackexchange.com/questions/16233\n[11374]\nhttps://ai.stackexchange.com/questions/16234\n[5177]\nhttps://ai.stackexchange.com/questions/16237\n[7579]\nhttps://ai.stackexchange.com/questions/16242\n[9392]\nhttps://ai.stackexchange.com/questions/16252\n[12881]\nhttps://ai.stackexchange.com/questions/16255\n[15793]\nhttps://ai.stackexchange.com/questions/16257\n[15837]\nhttps://ai.stackexchange.com/questions/16264\n[9935]\nhttps://ai.stackexchange.com/questions/16266\n[16854, 13572, 4282]\nhttps://ai.stackexchange.com/questions/16282\n[2623]\nhttps://ai.stackexchange.com/questions/16291\n[1288, 
9417]\nhttps://ai.stackexchange.com/questions/16295\n[13560]\nhttps://ai.stackexchange.com/questions/16296\n[8844]\nhttps://ai.stackexchange.com/questions/16311\n[16435]\nhttps://ai.stackexchange.com/questions/16316\n[2917, 6464]\nhttps://ai.stackexchange.com/questions/16322\n[12724, 16616, 16136, 7680, 11676, 9396, 2733]\nhttps://ai.stackexchange.com/questions/16323\n[12021]\nhttps://ai.stackexchange.com/questions/16329\n[13373]\nhttps://ai.stackexchange.com/questions/16330\n[12239, 5855]\nhttps://ai.stackexchange.com/questions/16331\n[12168]\nhttps://ai.stackexchange.com/questions/16336\n[10529]\nhttps://ai.stackexchange.com/questions/16347\n[7685, 8658]\nhttps://ai.stackexchange.com/questions/16353\n[10133, 7684, 13600, 1580]\nhttps://ai.stackexchange.com/questions/16360\n[6429]\nhttps://ai.stackexchange.com/questions/16361\n[16575, 4864]\nhttps://ai.stackexchange.com/questions/16364\n[11220, 8223]\nhttps://ai.stackexchange.com/questions/16365\n[5539]\nhttps://ai.stackexchange.com/questions/16372\n[16176]\nhttps://ai.stackexchange.com/questions/16374\n[14246, 5625]\nhttps://ai.stackexchange.com/questions/16375\n[6026, 10847]\nhttps://ai.stackexchange.com/questions/16377\n[16336]\nhttps://ai.stackexchange.com/questions/16380\n[4964]\nhttps://ai.stackexchange.com/questions/16381\n[12933]\nhttps://ai.stackexchange.com/questions/16384\n[8637]\nhttps://ai.stackexchange.com/questions/16391\n[12657]\nhttps://ai.stackexchange.com/questions/16394\n[4672, 4830, 6069, 5234]\nhttps://ai.stackexchange.com/questions/16395\n[13307]\nhttps://ai.stackexchange.com/questions/16408\n[3172, 5452, 152]\nhttps://ai.stackexchange.com/questions/16418\n[16364, 1877, 8976]\nhttps://ai.stackexchange.com/questions/16419\n[2436]\nhttps://ai.stackexchange.com/questions/16423\n[4394, 7979]\nhttps://ai.stackexchange.com/questions/16435\n[16175]\nhttps://ai.stackexchange.com/questions/16440\n[10975, 13596, 
11669]\nhttps://ai.stackexchange.com/questions/16447\n[15796]\nhttps://ai.stackexchange.com/questions/16448\n[9017, 86]\nhttps://ai.stackexchange.com/questions/16456\n[7255]\nhttps://ai.stackexchange.com/questions/16457\n[9197, 12263, 10798, 7580]\nhttps://ai.stackexchange.com/questions/16459\n[16608]\nhttps://ai.stackexchange.com/questions/16465\n[16058, 12764, 8885]\nhttps://ai.stackexchange.com/questions/16469\n[2409]\nhttps://ai.stackexchange.com/questions/16473\n[16695]\nhttps://ai.stackexchange.com/questions/16478\n[8916]\nhttps://ai.stackexchange.com/questions/16479\n[16570]\nhttps://ai.stackexchange.com/questions/16481\n[4655, 12066]\nhttps://ai.stackexchange.com/questions/16490\n[7369]\nhttps://ai.stackexchange.com/questions/16493\n[16535, 13805, 3282]\nhttps://ai.stackexchange.com/questions/16496\n[11593]\nhttps://ai.stackexchange.com/questions/16498\n[12008]\nhttps://ai.stackexchange.com/questions/16502\n[16479, 6800, 8016]\nhttps://ai.stackexchange.com/questions/16506\n[4282]\nhttps://ai.stackexchange.com/questions/16509\n[13210]\nhttps://ai.stackexchange.com/questions/16510\n[16465]\nhttps://ai.stackexchange.com/questions/16513\n[16612, 16506]\nhttps://ai.stackexchange.com/questions/16515\n[13399]\nhttps://ai.stackexchange.com/questions/16516\n[10010]\nhttps://ai.stackexchange.com/questions/16517\n[9011]\nhttps://ai.stackexchange.com/questions/16524\n[6421]\nhttps://ai.stackexchange.com/questions/16525\n[16538, 9731]\nhttps://ai.stackexchange.com/questions/16531\n[7369, 11535]\nhttps://ai.stackexchange.com/questions/16534\n[16022, 5462]\nhttps://ai.stackexchange.com/questions/16535\n[16493, 5814, 6510, 3302]\nhttps://ai.stackexchange.com/questions/16536\n[3885]\nhttps://ai.stackexchange.com/questions/16538\n[9927, 5452]\nhttps://ai.stackexchange.com/questions/16542\n[16729, 13276]\nhttps://ai.stackexchange.com/questions/16548\n[10953]\nhttps://ai.stackexchange.com/questions/16553\n[15676, 
5285]\nhttps://ai.stackexchange.com/questions/16554\n[6170]\nhttps://ai.stackexchange.com/questions/16555\n[11676]\nhttps://ai.stackexchange.com/questions/16556\n[7838, 12558, 8427, 4621]\nhttps://ai.stackexchange.com/questions/16570\n[13706, 12335, 7756]\nhttps://ai.stackexchange.com/questions/16575\n[3885, 88, 10272]\nhttps://ai.stackexchange.com/questions/16576\n[16656]\nhttps://ai.stackexchange.com/questions/16578\n[7916]\nhttps://ai.stackexchange.com/questions/16581\n[13340]\nhttps://ai.stackexchange.com/questions/16593\n[2634]\nhttps://ai.stackexchange.com/questions/16596\n[8879]\nhttps://ai.stackexchange.com/questions/16599\n[16384, 12065, 2723]\nhttps://ai.stackexchange.com/questions/16605\n[13487]\nhttps://ai.stackexchange.com/questions/16606\n[13349, 6126, 12911, 13668, 7877, 2637]\nhttps://ai.stackexchange.com/questions/16608\n[16707, 16463]\nhttps://ai.stackexchange.com/questions/16610\n[12247, 10082, 8493]\nhttps://ai.stackexchange.com/questions/16611\n[15693]\nhttps://ai.stackexchange.com/questions/16612\n[7367]\nhttps://ai.stackexchange.com/questions/16616\n[10822]\nhttps://ai.stackexchange.com/questions/16617\n[12965]\nhttps://ai.stackexchange.com/questions/16625\n[9270, 4117, 1478]\nhttps://ai.stackexchange.com/questions/16627\n[10387, 13221, 10036, 9768]\nhttps://ai.stackexchange.com/questions/16631\n[5553]\nhttps://ai.stackexchange.com/questions/16634\n[12878, 6111]\nhttps://ai.stackexchange.com/questions/16638\n[11112]\nhttps://ai.stackexchange.com/questions/16642\n[12429, 13444]\nhttps://ai.stackexchange.com/questions/16646\n[16660]\nhttps://ai.stackexchange.com/questions/16650\n[5593]\nhttps://ai.stackexchange.com/questions/16652\n[8215, 5370]\nhttps://ai.stackexchange.com/questions/16655\n[16612]\nhttps://ai.stackexchange.com/questions/16656\n[16576]\nhttps://ai.stackexchange.com/questions/16660\n[2192, 1834]\nhttps://ai.stackexchange.com/questions/16667\n[5625, 13377, 5835]\nhttps://ai.stackexchange.com/questions/16669\n[16848, 14296, 
8721]\nhttps://ai.stackexchange.com/questions/16672\n[15509]\nhttps://ai.stackexchange.com/questions/16674\n[6662]\nhttps://ai.stackexchange.com/questions/16675\n[9652, 6170]\nhttps://ai.stackexchange.com/questions/16676\n[13867, 12448]\nhttps://ai.stackexchange.com/questions/16677\n[5343]\nhttps://ai.stackexchange.com/questions/16679\n[16112, 6040]\nhttps://ai.stackexchange.com/questions/16684\n[15509]\nhttps://ai.stackexchange.com/questions/16688\n[15590]\nhttps://ai.stackexchange.com/questions/16695\n[14140]\nhttps://ai.stackexchange.com/questions/16696\n[12734]\nhttps://ai.stackexchange.com/questions/16697\n[11523]\nhttps://ai.stackexchange.com/questions/16698\n[16111, 11401, 6669, 10203]\nhttps://ai.stackexchange.com/questions/16703\n[15761, 7685, 9217, 2733]\nhttps://ai.stackexchange.com/questions/16705\n[15565]\nhttps://ai.stackexchange.com/questions/16706\n[5174, 4743, 8620, 4421]\nhttps://ai.stackexchange.com/questions/16707\n[16729]\nhttps://ai.stackexchange.com/questions/16713\n[16807, 5682, 9076, 8109]\nhttps://ai.stackexchange.com/questions/16714\n[15731, 11000]\nhttps://ai.stackexchange.com/questions/16722\n[6255, 16177, 5580]\nhttps://ai.stackexchange.com/questions/16725\n[16570]\nhttps://ai.stackexchange.com/questions/16738\n[9158, 8943, 15632, 2342, 9182, 11464]\nhttps://ai.stackexchange.com/questions/16740\n[5899]\nhttps://ai.stackexchange.com/questions/16744\n[16536, 50]\nhttps://ai.stackexchange.com/questions/16746\n[1742, 3172, 7683]\nhttps://ai.stackexchange.com/questions/16750\n[12948, 6343]\nhttps://ai.stackexchange.com/questions/16754\n[4683, 8745]\nhttps://ai.stackexchange.com/questions/16756\n[16502]\nhttps://ai.stackexchange.com/questions/16758\n[16025, 13360, 
10082]\nhttps://ai.stackexchange.com/questions/16775\n[16605]\nhttps://ai.stackexchange.com/questions/16776\n[16605]\nhttps://ai.stackexchange.com/questions/16779\n[14219]\nhttps://ai.stackexchange.com/questions/16801\n[9785]\nhttps://ai.stackexchange.com/questions/16803\n[15971]\nhttps://ai.stackexchange.com/questions/16807\n[3009, 4094]\nhttps://ai.stackexchange.com/questions/16811\n[16854, 12596]\nhttps://ai.stackexchange.com/questions/16812\n[16597]\nhttps://ai.stackexchange.com/questions/16814\n[11825, 2475]\nhttps://ai.stackexchange.com/questions/16815\n[15497]\nhttps://ai.stackexchange.com/questions/16816\n[7940, 12053, 7755]\nhttps://ai.stackexchange.com/questions/16824\n[16153, 4683]\nhttps://ai.stackexchange.com/questions/16825\n[3992]\nhttps://ai.stackexchange.com/questions/16826\n[5343]\nhttps://ai.stackexchange.com/questions/16828\n[8613]\nhttps://ai.stackexchange.com/questions/16829\n[10322, 11593]\nhttps://ai.stackexchange.com/questions/16834\n[12226, 10549]\nhttps://ai.stackexchange.com/questions/16835\n[15524, 12490]\nhttps://ai.stackexchange.com/questions/16836\n[13842]\nhttps://ai.stackexchange.com/questions/16845\n[4396, 10860]\nhttps://ai.stackexchange.com/questions/16846\n[11629]\nhttps://ai.stackexchange.com/questions/16848\n[9306, 10591]\nhttps://ai.stackexchange.com/questions/16850\n[13082, 3953]\nhttps://ai.stackexchange.com/questions/16855\n[16038]\nhttps://ai.stackexchange.com/questions/16857\n[1768, 2260, 172]\nhttps://ai.stackexchange.com/questions/16860\n[8793]\nhttps://ai.stackexchange.com/questions/16865\n[9221]\nhttps://ai.stackexchange.com/questions/16867\n[7159]\nhttps://ai.stackexchange.com/questions/16868\n[15524, 5862, 2776, 13612]\nhttps://ai.stackexchange.com/questions/16869\n[6921]\n"
],
[
"with open(\"DOC2VEC_RELATEDDICT.txt\", \"a\") as f:\n    for key in bothRelatedAndSim:\n        f.write(str(key) + \"\\t\" + str([i for i, j in bothRelatedAndSim[key]]) + \"\\n\")",
"_____no_output_____"
],
[
"for key in crossCheckDict.keys():\n if crossCheckDict[key] == 0:\n print(\"https://ai.stackexchange.com/questions/\" + str(key))\n",
"https://ai.stackexchange.com/questions/13426\nhttps://ai.stackexchange.com/questions/13432\nhttps://ai.stackexchange.com/questions/13435\nhttps://ai.stackexchange.com/questions/13436\nhttps://ai.stackexchange.com/questions/13437\nhttps://ai.stackexchange.com/questions/13449\nhttps://ai.stackexchange.com/questions/13450\nhttps://ai.stackexchange.com/questions/13460\nhttps://ai.stackexchange.com/questions/13475\nhttps://ai.stackexchange.com/questions/13482\nhttps://ai.stackexchange.com/questions/13487\nhttps://ai.stackexchange.com/questions/13494\nhttps://ai.stackexchange.com/questions/13498\nhttps://ai.stackexchange.com/questions/13503\nhttps://ai.stackexchange.com/questions/13510\nhttps://ai.stackexchange.com/questions/13516\nhttps://ai.stackexchange.com/questions/13518\nhttps://ai.stackexchange.com/questions/13556\nhttps://ai.stackexchange.com/questions/13576\nhttps://ai.stackexchange.com/questions/13603\nhttps://ai.stackexchange.com/questions/13607\nhttps://ai.stackexchange.com/questions/13622\nhttps://ai.stackexchange.com/questions/13630\nhttps://ai.stackexchange.com/questions/13633\nhttps://ai.stackexchange.com/questions/13641\nhttps://ai.stackexchange.com/questions/13645\nhttps://ai.stackexchange.com/questions/13646\nhttps://ai.stackexchange.com/questions/13650\nhttps://ai.stackexchange.com/questions/13651\nhttps://ai.stackexchange.com/questions/13654\nhttps://ai.stackexchange.com/questions/13656\nhttps://ai.stackexchange.com/questions/13663\nhttps://ai.stackexchange.com/questions/13666\nhttps://ai.stackexchange.com/questions/13669\nhttps://ai.stackexchange.com/questions/13691\nhttps://ai.stackexchange.com/questions/13693\nhttps://ai.stackexchange.com/questions/13695\nhttps://ai.stackexchange.com/questions/13698\nhttps://ai.stackexchange.com/questions/13720\nhttps://ai.stackexchange.com/questions/13722\nhttps://ai.stackexchange.com/questions/13737\nhttps://ai.stackexchange.com/questions/13765\nhttps://ai.stackexchange.com/questions/13767\nhttps://ai.stackexcha
nge.com/questions/13770\nhttps://ai.stackexchange.com/questions/13775\nhttps://ai.stackexchange.com/questions/13782\nhttps://ai.stackexchange.com/questions/13784\nhttps://ai.stackexchange.com/questions/13797\nhttps://ai.stackexchange.com/questions/13798\nhttps://ai.stackexchange.com/questions/13805\nhttps://ai.stackexchange.com/questions/13808\nhttps://ai.stackexchange.com/questions/13821\nhttps://ai.stackexchange.com/questions/13826\nhttps://ai.stackexchange.com/questions/13838\nhttps://ai.stackexchange.com/questions/13847\nhttps://ai.stackexchange.com/questions/13848\nhttps://ai.stackexchange.com/questions/13849\nhttps://ai.stackexchange.com/questions/13854\nhttps://ai.stackexchange.com/questions/13859\nhttps://ai.stackexchange.com/questions/13873\nhttps://ai.stackexchange.com/questions/13884\nhttps://ai.stackexchange.com/questions/13890\nhttps://ai.stackexchange.com/questions/13897\nhttps://ai.stackexchange.com/questions/13905\nhttps://ai.stackexchange.com/questions/13916\nhttps://ai.stackexchange.com/questions/13928\nhttps://ai.stackexchange.com/questions/13935\nhttps://ai.stackexchange.com/questions/13950\nhttps://ai.stackexchange.com/questions/13952\nhttps://ai.stackexchange.com/questions/13993\nhttps://ai.stackexchange.com/questions/14003\nhttps://ai.stackexchange.com/questions/14006\nhttps://ai.stackexchange.com/questions/14012\nhttps://ai.stackexchange.com/questions/14023\nhttps://ai.stackexchange.com/questions/14025\nhttps://ai.stackexchange.com/questions/14028\nhttps://ai.stackexchange.com/questions/14046\nhttps://ai.stackexchange.com/questions/14050\nhttps://ai.stackexchange.com/questions/14076\nhttps://ai.stackexchange.com/questions/14077\nhttps://ai.stackexchange.com/questions/14079\nhttps://ai.stackexchange.com/questions/14080\nhttps://ai.stackexchange.com/questions/14082\nhttps://ai.stackexchange.com/questions/14084\nhttps://ai.stackexchange.com/questions/14094\nhttps://ai.stackexchange.com/questions/14099\nhttps://ai.stackexchange.com/questions/1410
1\nhttps://ai.stackexchange.com/questions/14107\nhttps://ai.stackexchange.com/questions/14116\nhttps://ai.stackexchange.com/questions/14117\nhttps://ai.stackexchange.com/questions/14121\nhttps://ai.stackexchange.com/questions/14126\nhttps://ai.stackexchange.com/questions/14128\nhttps://ai.stackexchange.com/questions/14130\nhttps://ai.stackexchange.com/questions/14136\nhttps://ai.stackexchange.com/questions/14149\nhttps://ai.stackexchange.com/questions/14152\nhttps://ai.stackexchange.com/questions/14164\nhttps://ai.stackexchange.com/questions/14167\nhttps://ai.stackexchange.com/questions/14172\nhttps://ai.stackexchange.com/questions/14173\nhttps://ai.stackexchange.com/questions/14189\nhttps://ai.stackexchange.com/questions/14194\nhttps://ai.stackexchange.com/questions/14196\nhttps://ai.stackexchange.com/questions/14202\nhttps://ai.stackexchange.com/questions/14204\nhttps://ai.stackexchange.com/questions/14210\nhttps://ai.stackexchange.com/questions/14211\nhttps://ai.stackexchange.com/questions/14213\nhttps://ai.stackexchange.com/questions/14215\nhttps://ai.stackexchange.com/questions/14246\nhttps://ai.stackexchange.com/questions/14249\nhttps://ai.stackexchange.com/questions/14258\nhttps://ai.stackexchange.com/questions/14262\nhttps://ai.stackexchange.com/questions/14273\nhttps://ai.stackexchange.com/questions/14289\nhttps://ai.stackexchange.com/questions/14292\nhttps://ai.stackexchange.com/questions/14301\nhttps://ai.stackexchange.com/questions/14309\nhttps://ai.stackexchange.com/questions/14316\nhttps://ai.stackexchange.com/questions/14320\nhttps://ai.stackexchange.com/questions/14324\nhttps://ai.stackexchange.com/questions/14337\nhttps://ai.stackexchange.com/questions/14338\nhttps://ai.stackexchange.com/questions/14345\nhttps://ai.stackexchange.com/questions/14350\nhttps://ai.stackexchange.com/questions/14351\nhttps://ai.stackexchange.com/questions/14353\nhttps://ai.stackexchange.com/questions/15365\nhttps://ai.stackexchange.com/questions/15367\nhttps://ai.stackexc
hange.com/questions/15372\nhttps://ai.stackexchange.com/questions/15376\nhttps://ai.stackexchange.com/questions/15386\nhttps://ai.stackexchange.com/questions/15387\nhttps://ai.stackexchange.com/questions/15394\nhttps://ai.stackexchange.com/questions/15396\nhttps://ai.stackexchange.com/questions/15398\nhttps://ai.stackexchange.com/questions/15413\nhttps://ai.stackexchange.com/questions/15421\nhttps://ai.stackexchange.com/questions/15433\nhttps://ai.stackexchange.com/questions/15437\nhttps://ai.stackexchange.com/questions/15439\nhttps://ai.stackexchange.com/questions/15448\nhttps://ai.stackexchange.com/questions/15451\nhttps://ai.stackexchange.com/questions/15459\nhttps://ai.stackexchange.com/questions/15466\nhttps://ai.stackexchange.com/questions/15479\nhttps://ai.stackexchange.com/questions/15485\nhttps://ai.stackexchange.com/questions/15504\nhttps://ai.stackexchange.com/questions/15510\nhttps://ai.stackexchange.com/questions/15515\nhttps://ai.stackexchange.com/questions/15524\nhttps://ai.stackexchange.com/questions/15525\nhttps://ai.stackexchange.com/questions/15536\nhttps://ai.stackexchange.com/questions/15539\nhttps://ai.stackexchange.com/questions/15540\nhttps://ai.stackexchange.com/questions/15541\nhttps://ai.stackexchange.com/questions/15542\nhttps://ai.stackexchange.com/questions/15546\nhttps://ai.stackexchange.com/questions/15559\nhttps://ai.stackexchange.com/questions/15566\nhttps://ai.stackexchange.com/questions/15573\nhttps://ai.stackexchange.com/questions/15575\nhttps://ai.stackexchange.com/questions/15582\nhttps://ai.stackexchange.com/questions/15583\nhttps://ai.stackexchange.com/questions/15588\nhttps://ai.stackexchange.com/questions/15590\nhttps://ai.stackexchange.com/questions/15594\nhttps://ai.stackexchange.com/questions/15598\nhttps://ai.stackexchange.com/questions/15611\nhttps://ai.stackexchange.com/questions/15619\nhttps://ai.stackexchange.com/questions/15643\nhttps://ai.stackexchange.com/questions/15644\nhttps://ai.stackexchange.com/questions/15
668\nhttps://ai.stackexchange.com/questions/15672\nhttps://ai.stackexchange.com/questions/15677\nhttps://ai.stackexchange.com/questions/15681\nhttps://ai.stackexchange.com/questions/15691\nhttps://ai.stackexchange.com/questions/15696\nhttps://ai.stackexchange.com/questions/15703\nhttps://ai.stackexchange.com/questions/15704\nhttps://ai.stackexchange.com/questions/15705\nhttps://ai.stackexchange.com/questions/15706\nhttps://ai.stackexchange.com/questions/15713\nhttps://ai.stackexchange.com/questions/15719\nhttps://ai.stackexchange.com/questions/15727\nhttps://ai.stackexchange.com/questions/15729\nhttps://ai.stackexchange.com/questions/15730\nhttps://ai.stackexchange.com/questions/15731\nhttps://ai.stackexchange.com/questions/15737\nhttps://ai.stackexchange.com/questions/15741\nhttps://ai.stackexchange.com/questions/15743\nhttps://ai.stackexchange.com/questions/15746\nhttps://ai.stackexchange.com/questions/15752\nhttps://ai.stackexchange.com/questions/15759\nhttps://ai.stackexchange.com/questions/15764\nhttps://ai.stackexchange.com/questions/15766\nhttps://ai.stackexchange.com/questions/15771\nhttps://ai.stackexchange.com/questions/15774\nhttps://ai.stackexchange.com/questions/15784\nhttps://ai.stackexchange.com/questions/15789\nhttps://ai.stackexchange.com/questions/15792\nhttps://ai.stackexchange.com/questions/15800\nhttps://ai.stackexchange.com/questions/15820\nhttps://ai.stackexchange.com/questions/15824\nhttps://ai.stackexchange.com/questions/15831\nhttps://ai.stackexchange.com/questions/15833\nhttps://ai.stackexchange.com/questions/15834\nhttps://ai.stackexchange.com/questions/15836\nhttps://ai.stackexchange.com/questions/15857\nhttps://ai.stackexchange.com/questions/15859\nhttps://ai.stackexchange.com/questions/15860\nhttps://ai.stackexchange.com/questions/15868\nhttps://ai.stackexchange.com/questions/15873\nhttps://ai.stackexchange.com/questions/15875\nhttps://ai.stackexchange.com/questions/15877\nhttps://ai.stackexchange.com/questions/15882\nhttps://ai.stacke
xchange.com/questions/15883\nhttps://ai.stackexchange.com/questions/15895\nhttps://ai.stackexchange.com/questions/15900\nhttps://ai.stackexchange.com/questions/15903\nhttps://ai.stackexchange.com/questions/15914\nhttps://ai.stackexchange.com/questions/15917\nhttps://ai.stackexchange.com/questions/15924\nhttps://ai.stackexchange.com/questions/15937\nhttps://ai.stackexchange.com/questions/15945\nhttps://ai.stackexchange.com/questions/15946\nhttps://ai.stackexchange.com/questions/15947\nhttps://ai.stackexchange.com/questions/15950\nhttps://ai.stackexchange.com/questions/15951\nhttps://ai.stackexchange.com/questions/15965\nhttps://ai.stackexchange.com/questions/15966\nhttps://ai.stackexchange.com/questions/15972\nhttps://ai.stackexchange.com/questions/15978\nhttps://ai.stackexchange.com/questions/15986\nhttps://ai.stackexchange.com/questions/15992\nhttps://ai.stackexchange.com/questions/15993\nhttps://ai.stackexchange.com/questions/15995\nhttps://ai.stackexchange.com/questions/16004\nhttps://ai.stackexchange.com/questions/16005\nhttps://ai.stackexchange.com/questions/16022\nhttps://ai.stackexchange.com/questions/16026\nhttps://ai.stackexchange.com/questions/16033\nhttps://ai.stackexchange.com/questions/16042\nhttps://ai.stackexchange.com/questions/16046\nhttps://ai.stackexchange.com/questions/16056\nhttps://ai.stackexchange.com/questions/16058\nhttps://ai.stackexchange.com/questions/16073\nhttps://ai.stackexchange.com/questions/16076\nhttps://ai.stackexchange.com/questions/16080\nhttps://ai.stackexchange.com/questions/16087\nhttps://ai.stackexchange.com/questions/16088\nhttps://ai.stackexchange.com/questions/16096\nhttps://ai.stackexchange.com/questions/16097\nhttps://ai.stackexchange.com/questions/16098\nhttps://ai.stackexchange.com/questions/16099\nhttps://ai.stackexchange.com/questions/16101\nhttps://ai.stackexchange.com/questions/16106\nhttps://ai.stackexchange.com/questions/16109\nhttps://ai.stackexchange.com/questions/16133\nhttps://ai.stackexchange.com/questions/
16146\nhttps://ai.stackexchange.com/questions/16147\nhttps://ai.stackexchange.com/questions/16151\nhttps://ai.stackexchange.com/questions/16153\nhttps://ai.stackexchange.com/questions/16170\nhttps://ai.stackexchange.com/questions/16171\nhttps://ai.stackexchange.com/questions/16187\nhttps://ai.stackexchange.com/questions/16191\nhttps://ai.stackexchange.com/questions/16206\nhttps://ai.stackexchange.com/questions/16207\nhttps://ai.stackexchange.com/questions/16208\nhttps://ai.stackexchange.com/questions/16214\nhttps://ai.stackexchange.com/questions/16220\nhttps://ai.stackexchange.com/questions/16222\nhttps://ai.stackexchange.com/questions/16224\nhttps://ai.stackexchange.com/questions/16225\nhttps://ai.stackexchange.com/questions/16226\nhttps://ai.stackexchange.com/questions/16238\nhttps://ai.stackexchange.com/questions/16240\nhttps://ai.stackexchange.com/questions/16251\nhttps://ai.stackexchange.com/questions/16260\nhttps://ai.stackexchange.com/questions/16263\nhttps://ai.stackexchange.com/questions/16268\nhttps://ai.stackexchange.com/questions/16274\nhttps://ai.stackexchange.com/questions/16279\nhttps://ai.stackexchange.com/questions/16283\nhttps://ai.stackexchange.com/questions/16294\nhttps://ai.stackexchange.com/questions/16327\nhttps://ai.stackexchange.com/questions/16328\nhttps://ai.stackexchange.com/questions/16332\nhttps://ai.stackexchange.com/questions/16343\nhttps://ai.stackexchange.com/questions/16345\nhttps://ai.stackexchange.com/questions/16346\nhttps://ai.stackexchange.com/questions/16348\nhttps://ai.stackexchange.com/questions/16351\nhttps://ai.stackexchange.com/questions/16357\nhttps://ai.stackexchange.com/questions/16366\nhttps://ai.stackexchange.com/questions/16367\nhttps://ai.stackexchange.com/questions/16369\nhttps://ai.stackexchange.com/questions/16379\nhttps://ai.stackexchange.com/questions/16383\nhttps://ai.stackexchange.com/questions/16416\nhttps://ai.stackexchange.com/questions/16420\nhttps://ai.stackexchange.com/questions/16424\nhttps://ai.stac
kexchange.com/questions/16427\nhttps://ai.stackexchange.com/questions/16430\nhttps://ai.stackexchange.com/questions/16431\nhttps://ai.stackexchange.com/questions/16438\nhttps://ai.stackexchange.com/questions/16441\nhttps://ai.stackexchange.com/questions/16443\nhttps://ai.stackexchange.com/questions/16444\nhttps://ai.stackexchange.com/questions/16452\nhttps://ai.stackexchange.com/questions/16458\nhttps://ai.stackexchange.com/questions/16463\nhttps://ai.stackexchange.com/questions/16487\nhttps://ai.stackexchange.com/questions/16492\nhttps://ai.stackexchange.com/questions/16505\nhttps://ai.stackexchange.com/questions/16507\nhttps://ai.stackexchange.com/questions/16512\nhttps://ai.stackexchange.com/questions/16514\nhttps://ai.stackexchange.com/questions/16518\nhttps://ai.stackexchange.com/questions/16520\nhttps://ai.stackexchange.com/questions/16529\nhttps://ai.stackexchange.com/questions/16533\nhttps://ai.stackexchange.com/questions/16539\nhttps://ai.stackexchange.com/questions/16545\nhttps://ai.stackexchange.com/questions/16550\nhttps://ai.stackexchange.com/questions/16557\nhttps://ai.stackexchange.com/questions/16564\nhttps://ai.stackexchange.com/questions/16566\nhttps://ai.stackexchange.com/questions/16567\nhttps://ai.stackexchange.com/questions/16572\nhttps://ai.stackexchange.com/questions/16588\nhttps://ai.stackexchange.com/questions/16597\nhttps://ai.stackexchange.com/questions/16598\nhttps://ai.stackexchange.com/questions/16607\nhttps://ai.stackexchange.com/questions/16628\nhttps://ai.stackexchange.com/questions/16629\nhttps://ai.stackexchange.com/questions/16644\nhttps://ai.stackexchange.com/questions/16645\nhttps://ai.stackexchange.com/questions/16664\nhttps://ai.stackexchange.com/questions/16668\nhttps://ai.stackexchange.com/questions/16687\nhttps://ai.stackexchange.com/questions/16689\nhttps://ai.stackexchange.com/questions/16691\nhttps://ai.stackexchange.com/questions/16692\nhttps://ai.stackexchange.com/questions/16711\nhttps://ai.stackexchange.com/question
s/16716\nhttps://ai.stackexchange.com/questions/16717\nhttps://ai.stackexchange.com/questions/16719\nhttps://ai.stackexchange.com/questions/16727\nhttps://ai.stackexchange.com/questions/16728\nhttps://ai.stackexchange.com/questions/16729\nhttps://ai.stackexchange.com/questions/16739\nhttps://ai.stackexchange.com/questions/16741\nhttps://ai.stackexchange.com/questions/16751\nhttps://ai.stackexchange.com/questions/16757\nhttps://ai.stackexchange.com/questions/16760\nhttps://ai.stackexchange.com/questions/16768\nhttps://ai.stackexchange.com/questions/16769\nhttps://ai.stackexchange.com/questions/16772\nhttps://ai.stackexchange.com/questions/16781\nhttps://ai.stackexchange.com/questions/16787\nhttps://ai.stackexchange.com/questions/16792\nhttps://ai.stackexchange.com/questions/16793\nhttps://ai.stackexchange.com/questions/16798\nhttps://ai.stackexchange.com/questions/16799\nhttps://ai.stackexchange.com/questions/16800\nhttps://ai.stackexchange.com/questions/16805\nhttps://ai.stackexchange.com/questions/16810\nhttps://ai.stackexchange.com/questions/16817\nhttps://ai.stackexchange.com/questions/16818\nhttps://ai.stackexchange.com/questions/16819\nhttps://ai.stackexchange.com/questions/16823\nhttps://ai.stackexchange.com/questions/16854\nhttps://ai.stackexchange.com/questions/16863\nhttps://ai.stackexchange.com/questions/16871\n"
],
[
"# Distribution of how many related questions each post has. The original bar\n# call computed `width` from the counts themselves, which is not meaningful;\n# a histogram over the counts matches the intent.\nimport matplotlib.pyplot as plt\n\ncounts = [len(v) for v in relatedId.values()]\nplt.hist(counts, bins=range(1, max(counts) + 2), align='left')\nplt.xlabel('number of related questions')\nplt.ylabel('number of posts')\nplt.show()\n",
"_____no_output_____"
],
[
"tags[1]",
"_____no_output_____"
],
[
"# Turn raw tag strings like \"<a><b>\" into lists of tag names\nfor key in tags.keys():\n    s = tags[key].replace(\"<\", \"\").replace(\">\", \" \")\n    tags[key] = s.strip().split(\" \")\n",
"_____no_output_____"
],
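[
"# Illustrative sketch (assumed sample string, not taken from the run above):\n# the cleanup cell strips angle brackets from a raw tag string such as\n# \"<neural-networks><optimization>\". The standard-library re module gives an\n# equivalent one-liner:\nimport re\nsample = \"<neural-networks><optimization>\"\nprint(re.findall(r\"<([^>]+)>\", sample))  # ['neural-networks', 'optimization']",
"_____no_output_____"
],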
[
"key = 13544\nprint(key, relatedId[key])\nprint(tags[key])\nfor item in relatedId[key]:\n if item in tags.keys():\n print(item, tags[item])",
"13544 [4376, 8518, 2817, 12671, 8962, 6139]\n['neural-networks', 'recurrent-neural-networks', 'optimization', 'logic', 'function-approximation']\n4376 ['neural-networks']\n8518 ['neural-networks', 'machine-learning', 'backpropagation']\n2817 ['optimization', 'heuristics']\n12671 ['neural-networks', 'function-approximation']\n8962 ['machine-learning', 'backpropagation', 'terminology', 'optimization']\n6139 ['neural-networks', 'ai-design', 'optimization']\n"
]
],
[
[
"import requests\n\nAPIKEY = \"unCQQDAhgl)qZ4GZRXVVGQ((\"\n\nquery = \"https://api.stackexchange.com/2.2/questions/\" + str(54) + \"?order=desc&sort=activity&site=ai&key=\" + APIKEY\nresponse = requests.get(query)\n",
"_____no_output_____"
],
[
"print(response.json()[\"items\"][0][\"tags\"])",
"_____no_output_____"
],
[
"The tags returned by the API match the website, but they do not match the ones in the post.xml file",
"_____no_output_____"
],
[
"import time",
"_____no_output_____"
],
[
"betterTags = {}",
"_____no_output_____"
],
[
"for qid in id_set:  # renamed from `id`, which shadows the builtin\n    if qid not in betterTags:\n        query = \"https://api.stackexchange.com/2.2/questions/\" + str(qid) + \"?order=desc&sort=activity&site=ai&key=\" + APIKEY\n        response = requests.get(query)\n        print(response.status_code)\n        betterTags[qid] = response.json()[\"items\"][0][\"tags\"]\n        print(qid, betterTags[qid])\n        time.sleep(1)",
"_____no_output_____"
],
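[
"# Sketch of a batched variant (assumption: the Stack Exchange /questions/{ids}\n# endpoint accepts up to 100 semicolon-separated ids per call, which would cut\n# the request count roughly 100x):\nids = list(id_set)\nfor i in range(0, len(ids), 100):\n    batch = \";\".join(str(x) for x in ids[i:i + 100])\n    query = \"https://api.stackexchange.com/2.2/questions/\" + batch + \"?order=desc&sort=activity&site=ai&key=\" + APIKEY\n    response = requests.get(query)\n    for item in response.json().get(\"items\", []):\n        betterTags[item[\"question_id\"]] = item[\"tags\"]\n    time.sleep(1)",
"_____no_output_____"
],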
[
"betterTags",
"_____no_output_____"
]
],
[
[
"# For each question, compute the fraction of its related questions that share\n# at least one tag with it\ntagVsQuestion = {}\nfor key in relatedId.keys():\n    match = 0\n    mismatch = 0\n    for item in relatedId[key]:\n        if item in tags:\n            if np.in1d(tags[key], tags[item]).any():\n                match += 1\n            else:\n                mismatch += 1  # the original incremented by 0 here, so mismatches were never counted\n    if match + mismatch == 0:\n        tagVsQuestion[key] = 0\n    else:\n        tagVsQuestion[key] = match / (match + mismatch)\n",
"1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n1 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n3 0\n4 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 
0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n1 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 
0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n10 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 
0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 
0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 
0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n10 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n1 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 
0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n10 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n1 0\n2 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n1 0\n2 0\n3 0\n1 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n9 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 0\n2 0\n3 0\n1 0\n2 0\n3 0\n4 0\n5 0\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n8 0\n1 0\n2 0\n"
],
[
"tagVsQuestion[13425]",
"_____no_output_____"
]
],
[
[
"The count of cases where the related tags and the question's tags do not match",
"_____no_output_____"
]
],
[
[
"print([x for x in tagVsQuestion.keys() if tagVsQuestion[x] == 0])",
"[15415, 15451, 15515, 16017, 16490]\n"
]
],
[
[
"Only 5 do not match; that data is not available to us, and they normally work",
"_____no_output_____"
]
],
[
[
"model.save(\"doc2vecmodel\")",
"_____no_output_____"
],
[
"model2 = gensim.models.doc2vec.Doc2Vec.load(\"doc2vecmodel\")",
"_____no_output_____"
],
[
"inferred_vector = model.infer_vector([\"asda\"])\nsims = model.docvecs.most_similar([inferred_vector], topn=100)",
"_____no_output_____"
],
[
"sims",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"raw",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"raw"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e75cf6494b3b9a3d5b4db6f8551ff372a0f72864 | 10,122 | ipynb | Jupyter Notebook | autoencoder/linear-autoencoder/Simple_Autoencoder_Exercise.ipynb | Kshitij09/deep-learning-v2-pytorch | b214e63b7b560122bc5fd5b26bff6946b5078ba6 | [
"MIT"
] | null | null | null | autoencoder/linear-autoencoder/Simple_Autoencoder_Exercise.ipynb | Kshitij09/deep-learning-v2-pytorch | b214e63b7b560122bc5fd5b26bff6946b5078ba6 | [
"MIT"
] | null | null | null | autoencoder/linear-autoencoder/Simple_Autoencoder_Exercise.ipynb | Kshitij09/deep-learning-v2-pytorch | b214e63b7b560122bc5fd5b26bff6946b5078ba6 | [
"MIT"
] | null | null | null | 36.673913 | 416 | 0.596226 | [
[
[
"# A Simple Autoencoder\n\nWe'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.\n\n<img src='notebook_ims/autoencoder_1.png' />\n\n### Compressed Representation\n\nA compressed representation can be great for saving and sharing any kind of data in a way that is more efficient than storing raw data. In practice, the compressed representation often holds key information about an input image and we can use it for denoising images or other kinds of reconstruction and transformation!\n\n<img src='notebook_ims/denoising.png' width=60%/>\n\nIn this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.",
"_____no_output_____"
]
],
[
[
"import torch\nimport numpy as np\nfrom torchvision import datasets\nimport torchvision.transforms as transforms\n\n# convert data to torch.FloatTensor\ntransform = transforms.ToTensor()\n\n# load the training and test datasets\ntrain_data = datasets.MNIST(root='data', train=True,\n download=True, transform=transform)\ntest_data = datasets.MNIST(root='data', train=False,\n download=True, transform=transform)",
"_____no_output_____"
],
[
"# Create training and test dataloaders\n\n# number of subprocesses to use for data loading\nnum_workers = 0\n# how many samples per batch to load\nbatch_size = 20\n\n# prepare data loaders\ntrain_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers)\ntest_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers)",
"_____no_output_____"
]
],
[
[
"### Visualize the Data",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n%matplotlib inline\n \n# obtain one batch of training images\ndataiter = iter(train_loader)\nimages, labels = dataiter.next()\nimages = images.numpy()\n\n# get one image from the batch\nimg = np.squeeze(images[0])\n\nfig = plt.figure(figsize = (5,5)) \nax = fig.add_subplot(111)\nax.imshow(img, cmap='gray')",
"_____no_output_____"
]
],
[
[
"---\n## Linear Autoencoder\n\nWe'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building a simple autoencoder. The encoder and decoder should be made of **one linear layer**. The units that connect the encoder and decoder will be the _compressed representation_.\n\nSince the images are normalized between 0 and 1, we need to use a **sigmoid activation on the output layer** to get values that match this input value range.\n\n<img src='notebook_ims/simple_autoencoder.png' width=50% />\n\n\n#### TODO: Build the graph for the autoencoder in the cell below. \n> The input images will be flattened into 784 length vectors. The targets are the same as the inputs. \n> The encoder and decoder will be made of two linear layers, each.\n> The depth dimensions should change as follows: 784 inputs > **encoding_dim** > 784 outputs.\n> All layers will have ReLu activations applied except for the final output layer, which has a sigmoid activation.\n\n**The compressed representation should be a vector with dimension `encoding_dim=32`.**",
"_____no_output_____"
]
],
[
[
"import torch.nn as nn\nimport torch.nn.functional as F\n\n# define the NN architecture\nclass Autoencoder(nn.Module):\n def __init__(self, encoding_dim):\n super(Autoencoder, self).__init__()\n ## encoder ##\n \n ## decoder ##\n \n\n def forward(self, x):\n # define feedforward behavior \n # and scale the *output* layer with a sigmoid activation function\n \n return x\n\n# initialize the NN\nencoding_dim = 32\nmodel = Autoencoder(encoding_dim)\nprint(model)",
"_____no_output_____"
]
],
[
[
"---\n## Training\n\nHere I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards. \n\nWe are not concerned with labels in this case, just images, which we can get from the `train_loader`. Because we're comparing pixel values in input and output images, it will be best to use a loss that is meant for a regression task. Regression is all about comparing _quantities_ rather than probabilistic values. So, in this case, I'll use `MSELoss`, and compare output images and input images as follows:\n```\nloss = criterion(outputs, images)\n```\n\nOtherwise, this is pretty straightforward training with PyTorch. We flatten our images, pass them into the autoencoder, and record the training loss as we go.",
"_____no_output_____"
]
],
[
[
"# specify loss function\ncriterion = nn.MSELoss()\n\n# specify optimizer\noptimizer = torch.optim.Adam(model.parameters(), lr=0.001)",
"_____no_output_____"
],
[
"# number of epochs to train the model\nn_epochs = 20\n\nfor epoch in range(1, n_epochs+1):\n    # monitor training loss\n    train_loss = 0.0\n    \n    ###################\n    # train the model #\n    ###################\n    for data in train_loader:\n        # _ stands in for labels, here\n        images, _ = data\n        # flatten images\n        images = images.view(images.size(0), -1)\n        # clear the gradients of all optimized variables\n        optimizer.zero_grad()\n        # forward pass: compute predicted outputs by passing inputs to the model\n        outputs = model(images)\n        # calculate the loss\n        loss = criterion(outputs, images)\n        # backward pass: compute gradient of the loss with respect to model parameters\n        loss.backward()\n        # perform a single optimization step (parameter update)\n        optimizer.step()\n        # update running training loss\n        train_loss += loss.item()*images.size(0)\n            \n    # print avg training statistics (per image, since the running loss was weighted by batch size)\n    train_loss = train_loss/len(train_loader.dataset)\n    print('Epoch: {} \\tTraining Loss: {:.6f}'.format(\n        epoch, \n        train_loss\n        ))",
"_____no_output_____"
]
],
[
[
"## Checking out the results\n\nBelow I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.",
"_____no_output_____"
]
],
[
[
"# obtain one batch of test images\ndataiter = iter(test_loader)\nimages, labels = dataiter.next()\n\nimages_flatten = images.view(images.size(0), -1)\n# get sample outputs\noutput = model(images_flatten)\n# prep images for display\nimages = images.numpy()\n\n# output is resized into a batch of images\noutput = output.view(batch_size, 1, 28, 28)\n# use detach when it's an output that requires_grad\noutput = output.detach().numpy()\n\n# plot the first ten input images and then reconstructed images\nfig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(25,4))\n\n# input images on top row, reconstructions on bottom\nfor images, row in zip([images, output], axes):\n for img, ax in zip(images, row):\n ax.imshow(np.squeeze(img), cmap='gray')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)",
"_____no_output_____"
]
],
[
[
"## Up Next\n\nWe're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e75cf678002c6849d3a01d54ca4451d72a4510e3 | 3,543 | ipynb | Jupyter Notebook | day01.ipynb | 1298646087/ghfhg | 6aec697a45c673b63e2869c19cc981dd912d7df8 | [
"Apache-2.0"
] | null | null | null | day01.ipynb | 1298646087/ghfhg | 6aec697a45c673b63e2869c19cc981dd912d7df8 | [
"Apache-2.0"
] | null | null | null | day01.ipynb | 1298646087/ghfhg | 6aec697a45c673b63e2869c19cc981dd912d7df8 | [
"Apache-2.0"
] | null | null | null | 26.051471 | 160 | 0.534575 | [
[
[
"Take one record from the check-in data:\n- 1. Remove the trailing newline\n- 2. Get the person's name and email from the record\n- 3. Apply a simple encryption to the email\n- 4. Concatenate name + gender + address + encrypted email, separated by '-'\n",
"_____no_output_____"
]
],
[
[
"file = 'C:\\\\Users\\\\lenovo\\\\Desktop\\\\python\\\\movies\\\\day01\\\\kaifangX.txt'\nopen_file = open(file,mode='r',encoding='gbk')\nline = open_file.readline()\n# Remove the trailing newline\nprint(line)\nstrip_line = line.strip('\\n')\nprint(strip_line)\n\n# Get the person's name\nname = strip_line.find('陈萌')\nprint(name)\nprint(type(name))\nprint(strip_line[0:2])\nchengmeng = strip_line[0:2]\nprint(chengmeng)\n\n# Get the email address\nemail_index = strip_line.find('[email protected]')\nprint(strip_line[90:107])\nemail = strip_line[90:107]\nprint(email)\n\n# Encrypt the email\nkong = ''\nfor i in email:\n    jiami = chr(ord(i)+2)\n    kong = kong + jiami\nprint(kong)\n\n# String concatenation\n\nprint(chengmeng + '-' + kong)\nopen_file.close()\n",
"陈萌,010-116321,M,19000101,北京市海淀区苏州街3号大恒科技大厦北座6层,100080,10116,010-82808028,010-82828028-208,[email protected],0\n\n陈萌,010-116321,M,19000101,北京市海淀区苏州街3号大恒科技大厦北座6层,100080,10116,010-82808028,010-82828028-208,[email protected],0\n0\n<class 'int'>\n陈萌\n陈萌\[email protected]\[email protected]\nejgpogpiBfkuv0eqo\n陈萌-ejgpogpiBfkuv0eqo\n"
],
[
"file = 'C:\\\\Users\\\\lenovo\\\\Desktop\\\\python\\\\movies\\\\day01\\\\kaifangX.txt'\nopen_file = open(file,mode='r',encoding='gbk')\nline = open_file.readline()\n# Remove the trailing newline\nprint(line)\nstrip_line = line.strip('\\n')\nprint(strip_line)\n\nsplit_line = line.split(',') # Split the line on ','\nprint(split_line)\nprint(type(split_line))\nname = split_line[0] # Get the value by index\nprint(name)\nemail = split_line[9]\nprint(email)\n\nopen_file.close()\n",
"陈萌,010-116321,M,19000101,北京市海淀区苏州街3号大恒科技大厦北座6层,100080,10116,010-82808028,010-82828028-208,[email protected],0\n\n陈萌,010-116321,M,19000101,北京市海淀区苏州街3号大恒科技大厦北座6层,100080,10116,010-82808028,010-82828028-208,[email protected],0\n['陈萌', '010-116321', 'M', '19000101', '北京市海淀区苏州街3号大恒科技大厦北座6层', '100080', '10116', '010-82808028', '010-82828028-208', '[email protected]', '0\\n']\n<class 'list'>\n陈萌\[email protected]\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
e75cfc700c51639d45e6f8f0fe2cd3abe8a6aa02 | 452,326 | ipynb | Jupyter Notebook | explore_rasters.ipynb | matmons/gridfinder_master | 60eaca10c4ada7bb46e31d0ce098d4c63bbaa1a4 | [
"MIT"
] | null | null | null | explore_rasters.ipynb | matmons/gridfinder_master | 60eaca10c4ada7bb46e31d0ce098d4c63bbaa1a4 | [
"MIT"
] | null | null | null | explore_rasters.ipynb | matmons/gridfinder_master | 60eaca10c4ada7bb46e31d0ce098d4c63bbaa1a4 | [
"MIT"
] | null | null | null | 607.965054 | 407,672 | 0.944796 | [
[
[
"import rasterio\nimport numpy as np\nimport os\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"os.getcwd()",
"_____no_output_____"
],
[
"base_path = os.getcwd()+'/Data'\ndataset = '/costs_extended/europe' # ie. '/protected_areas', '/pop'\nfile_path = base_path+dataset\nprint(file_path)",
"/home/andre/Documents/gridfinder/predictive-mapping-global-power/Data/costs_extended/europe\n"
],
[
"s = rasterio.open(file_path+'/BEL.tif')\nslope = s.read(1)\nprint(np.unique(slope))\nplt.figure(figsize=(16,9))\nplt.imshow(slope)\nplt.show()\nnp.histogram(slope,bins=4)",
"[0.1 0.11111111 0.125 0.14285715 0.16666667 0.2\n 0.22222222 0.25 0.2857143 0.33333334 0.5 0.5714286\n 0.66666669 1. 2. 3. 4. 6. ]\n"
],
[
"s = rasterio.open(file_path+'/.tif')\nslope = s.read(1)\nprint(slope.shape, type(slope))\nslope[slope <= 20] = 1\nslope[(slope > 20) & (slope <= 30)] = 2\nslope[slope > 30] = 3\n\nplt.imshow(slope)\nplt.show()\nnp.histogram(slope,bins=10)",
"_____no_output_____"
],
[
"a = np.array([[1,2,3],[4,5,6],[7,8,9]])\nb = np.array([[2,2,2],[3,3,3],[4,4,4]])\nz = np.array([[1,1,1],[1,0,1],[1,0,1]])\nc = np.multiply(a+b,z)\nc",
"_____no_output_____"
],
[
"c[c <= 10] = 4\nc[(c > 10) & (c <= 20)] = 15\nc[(c > 20) & (c <= 40)] = 32\nc",
"_____no_output_____"
],
[
"import csv\n\nwith open('names.csv', 'w', newline='') as csvfile:\n fieldnames = ['first_name', 'age']\n writer = csv.DictWriter(csvfile, fieldnames=fieldnames)\n\n writer.writeheader()\n writer.writerow({'first_name': 'Baked', 'age': None})\n writer.writerow({'first_name': 'Son', 'age': '12'})\n writer.writerow({'first_name': 'Lovely', 'age': '13;15'})\n writer.writerow({'first_name': 'Wonderful', 'age': 16})\n writer.writerow({'first_name': 'Sonderful', 'age': 'low'})",
"_____no_output_____"
],
[
"with open('names.csv', newline='') as csvfile:\n reader = csv.DictReader(csvfile)\n for row in reader:\n print(row['first_name'], row['age'])",
"Baked \nSon 12\nLovely 13;15\nWonderful 16\nSonderful low\n"
],
[
"import geopandas as gpd\nimport pandas as pd",
"_____no_output_____"
],
[
"df = pd.read_csv('names.csv')\ndf[\"weight\"] = 1\ndf.age = df.age.apply(lambda x: x.split(';')[0] if (type(x) == str) else x)\ndf.age = pd.to_numeric(df.age, errors='coerce')\n#df.age = df.age.apply(lambda x: 0 if x in (\"low\", \"medium\") else x)\n#df.age = pd.to_numeric(df.age)\ndf.loc[df[\"age\"] > 12, \"weight\"] = 0\ndf.dropna()\ndf.head()",
"_____no_output_____"
],
[
"base_path = os.getcwd()+'/Data'\ndataset = '/costs_extended/europe/'\nfile_path = base_path+dataset\nprint(file_path)",
"/home/andre/Documents/gridfinder/predictive-mapping-global-power/Data/costs_extended/europe/\n"
],
[
"costs_roads_in = file_path+'roads/PRT.tif'\ncosts_protected_areas_in = file_path+'protected_areas/PRT.tif'\ncosts_slope_in = file_path+'slope/PRT.tif'\ncosts_hv_in = file_path+'hv/PRT.tif'\n\nr = 1\npa = 0.5\ns = 0\n\ncosts_roads_ra = rasterio.open(costs_roads_in)\ncosts_roads = costs_roads_ra.read(1) * r\n\ncosts_protected_areas_ra = rasterio.open(costs_protected_areas_in)\ncosts_protected_areas = costs_protected_areas_ra.read(1) * pa\n\ncosts_slope_ra = rasterio.open(costs_slope_in)\ncosts_slope = costs_slope_ra.read(1) * s\n\ncosts_hv_ra = rasterio.open(costs_hv_in)\ncosts_hv = costs_hv_ra.read(1)\n\n# Elementwise multiplication of HV-lines to properly set all hv containing cells\n# to zero\ncosts = np.multiply(costs_roads + costs_protected_areas + costs_slope, costs_hv)",
"_____no_output_____"
],
[
"plt.imshow(costs_hv)",
"_____no_output_____"
],
[
"country = 'CZE'\n\ngrid_in = base_path+f'/ground_truth/europe/{country}.tif'\ngrid_buff_in = base_path+f'/ground_truth_buffered/europe/{country}.tif'\nguess_in = base_path+f'/mv/europe/base/{country}.tif'\nhv_in = base_path+f'/costs_extended/europe/hv/{country}.tif'\nadmin = os.getcwd()+'/admin_boundaries/europe_trimmed.gpkg'\n\nadmin = gpd.read_file(admin)\ncode = 'adm0_a3'\naoi_in = admin.loc[admin[code] == f'{country}']\n\ndef positives(guesses, truths):\n \"\"\"Calculate true positives, used by accuracy().\n\n Parameters\n ----------\n guesses : numpy array\n Output from model.\n truths : numpy array\n Truth feature converted to array.\n\n Returns\n -------\n tp : float\n Ratio of true positives.\n \"\"\"\n\n yes_guesses = 0\n yes_guesses_correct = 0\n rows = guesses.shape[0]\n cols = guesses.shape[1]\n\n for x in range(0, rows):\n for y in range(0, cols):\n guess = guesses[x, y]\n truth = truths[x, y]\n if guess == 1:\n yes_guesses += 1\n if guess == truth:\n yes_guesses_correct += 1\n\n tp = yes_guesses_correct\n fp = yes_guesses - yes_guesses_correct\n\n return tp, fp\n\n\ndef negatives(guesses, truths):\n \"\"\"Calculate false negatives, used by accuracy().\n\n Parameters\n ----------\n guesses : numpy array\n Output from model.\n truths : numpy array\n Truth feature converted to array.\n\n Returns\n -------\n fn : float\n Ratio of false negatives.\n \"\"\"\n\n actual_grid = 0\n actual_grid_missed = 0\n\n rows = guesses.shape[0]\n cols = guesses.shape[1]\n\n for x in range(0, rows):\n for y in range(0, cols):\n guess = guesses[x, y]\n truth = truths[x, y]\n\n if truth == 1:\n actual_grid += 1\n if guess != truth:\n found = False\n for i in range(-5, 6):\n for j in range(-5, 6):\n if i == 0 and j == 0:\n continue\n\n shift_x = x + i\n shift_y = y + j\n if shift_x < 0 or shift_y < 0:\n continue\n if shift_x >= rows or shift_y >= cols:\n continue\n\n other_guess = guesses[shift_x, shift_y]\n if other_guess == 1:\n found = True\n if not found:\n 
actual_grid_missed += 1\n\n fn = actual_grid_missed\n\n return fn\n\ndef flip_arr_values(arr):\n arr[arr == 1] = 2\n arr[arr == 0] = 1\n arr[arr == 2] = 0\n return arr",
"_____no_output_____"
],
[
"if isinstance(aoi_in, gpd.GeoDataFrame):\n    aoi = aoi_in\nelse:\n    aoi = gpd.read_file(aoi_in)\n\nguess = rasterio.open(guess_in)\nguesses = guess.read(1)\n\ng = rasterio.open(grid_in)\ngrid_raster = g.read(1)\ngrid_raster = flip_arr_values(grid_raster)\n\ng_buff = rasterio.open(grid_buff_in)\ngrid_buff_raster = g_buff.read(1)\ngrid_buff_raster = flip_arr_values(grid_buff_raster)\n\nhv = rasterio.open(hv_in)\nhv_raster = hv.read(1)\n\n#guesses = np.multiply(guesses, hv_raster)\n#grid_raster = np.multiply(grid_raster, hv_raster)\n#grid_buff_raster = np.multiply(grid_buff_raster, hv_raster)\n\nassert grid_raster.shape == grid_buff_raster.shape, \"Ground truth rasters are not same shape\"\nassert guesses.shape == grid_raster.shape, \"Shapes of guesses and ground truth do not match\"\n\ntp, fp = positives(guesses, grid_buff_raster)\nfn = negatives(guesses, grid_raster)\n\nprecision = tp/(tp+fp)\nrecall = tp/(tp+fn)\niou = tp/(tp+fp+fn)\nprint(tp, fp, fn, precision, recall, iou)",
"34363 29577 125 0.5374257116046294 0.9963755509162607 0.5363771169905565\n"
],
[
"# 34363 29577 125 0.5374257116046294 0.9963755509162607 0.5363771169905565\n# 12570 29577 198 0.2982418677485942 0.9844924812030075 0.29684732554020543",
"_____no_output_____"
],
[
"np.histogram(hv_raster)",
"_____no_output_____"
],
[
"pop_raster = rasterio.open(\"./Data/pop/europe/FRA.tif\")\npop = pop_raster.read(1)",
"_____no_output_____"
],
[
"a = \"/Data/pop/base_10/\"\nb = \"/Data/hv/base_10/\"",
"_____no_output_____"
],
[
"a.split(\"/\")[-2]",
"_____no_output_____"
],
[
"base = rasterio.open(\"Data/mv/europe/base_02/FRA.tif\")\nbase_r = base.read(1)\nfilt = rasterio.open(\"Data/mv/europe/lccs_filtered/FRA.tif\")\nfilt_r = filt.read(1)",
"_____no_output_____"
],
[
"np.count_nonzero(base_r)",
"_____no_output_____"
],
[
"np.count_nonzero(filt_r)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75d227ac5d08d5121cc6d5646d4b53143343ef8 | 1,243 | ipynb | Jupyter Notebook | Anjani/Leetcode/Array/Count Good Triplets.ipynb | Anjani100/competitive-coding | 229e4475487412c702e99a45d8ec4f46e6aea241 | [
"MIT"
] | null | null | null | Anjani/Leetcode/Array/Count Good Triplets.ipynb | Anjani100/competitive-coding | 229e4475487412c702e99a45d8ec4f46e6aea241 | [
"MIT"
] | null | null | null | Anjani/Leetcode/Array/Count Good Triplets.ipynb | Anjani100/competitive-coding | 229e4475487412c702e99a45d8ec4f46e6aea241 | [
"MIT"
] | 2 | 2020-10-07T13:48:02.000Z | 2022-03-31T16:10:36.000Z | 21.067797 | 120 | 0.453741 | [
[
[
"def countGoodTriplets(arr, a, b, c):\n count = 0\n for i in range(len(arr) - 2):\n for j in range(i + 1, len(arr) - 1):\n for k in range(j + 1, len(arr)):\n if (abs(arr[i] - arr[j]) <= a) and (abs(arr[j] - arr[k]) <= b) and (abs(arr[i] - arr[k]) <= c):\n count += 1\n return count\n\nprint(countGoodTriplets([1,1,2,2,3], 0, 0, 1))",
"0\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
e75d3ac0e1e1b440bf1386791c9057ebe0b9e406 | 29,290 | ipynb | Jupyter Notebook | nbs/111b_models.MINIROCKET.ipynb | HafizAhmadHassan/tsai | a20344344a2ed928d97317a1b02428e50d803883 | [
"Apache-2.0"
] | null | null | null | nbs/111b_models.MINIROCKET.ipynb | HafizAhmadHassan/tsai | a20344344a2ed928d97317a1b02428e50d803883 | [
"Apache-2.0"
] | null | null | null | nbs/111b_models.MINIROCKET.ipynb | HafizAhmadHassan/tsai | a20344344a2ed928d97317a1b02428e50d803883 | [
"Apache-2.0"
] | 1 | 2021-08-12T20:45:07.000Z | 2021-08-12T20:45:07.000Z | 38.743386 | 160 | 0.547013 | [
[
[
"# default_exp models.MINIROCKET",
"_____no_output_____"
]
],
[
[
"# MINIROCKET\n\n> A Very Fast (Almost) Deterministic Transform for Time Series Classification.",
"_____no_output_____"
]
],
[
[
"#export\nfrom tsai.imports import *\nfrom tsai.utils import *\nfrom tsai.data.external import *\nfrom tsai.models.layers import *",
"_____no_output_____"
],
[
"#export\nfrom sktime.transformations.panel.rocket._minirocket import _fit as minirocket_fit\nfrom sktime.transformations.panel.rocket._minirocket import _transform as minirocket_transform\nfrom sktime.transformations.panel.rocket._minirocket_multivariate import _fit_multi as minirocket_fit_multi\nfrom sktime.transformations.panel.rocket._minirocket_multivariate import _transform_multi as minirocket_transform_multi\nfrom sktime.transformations.panel.rocket import MiniRocketMultivariate\nfrom sklearn.linear_model import RidgeCV, RidgeClassifierCV\nfrom sklearn.ensemble import VotingClassifier, VotingRegressor",
"_____no_output_____"
],
[
"# export\nclass MiniRocketClassifier(sklearn.pipeline.Pipeline):\n    \"\"\"Time series classification using MINIROCKET features and a linear classifier\"\"\"\n    def __init__(self, num_features=10_000, max_dilations_per_kernel=32, random_state=None,\n                 alphas=np.logspace(-3, 3, 7), normalize_features=True, memory=None, verbose=False, scoring=None, class_weight=None, **kwargs):\n        \"\"\"\n        MiniRocketClassifier is recommended for up to 10k time series. \n        For a larger dataset, you can use MINIROCKET (in Pytorch).\n        scoring = None --> defaults to accuracy.\n        \"\"\"\n        self.steps = [('minirocketmultivariate', MiniRocketMultivariate(num_features=num_features, \n                                                                        max_dilations_per_kernel=max_dilations_per_kernel,\n                                                                        random_state=random_state)),\n                      ('ridgeclassifiercv', RidgeClassifierCV(alphas=alphas, \n                                                              normalize=normalize_features, \n                                                              scoring=scoring, \n                                                              class_weight=class_weight, \n                                                              **kwargs))]\n        store_attr()\n        self._validate_steps()\n\n    def __repr__(self):\n        return f'Pipeline(steps={self.steps.copy()})'\n\n    def save(self, fname=None, path='./models'):\n        fname = ifnone(fname, 'MiniRocketClassifier')\n        path = Path(path)\n        filename = path/fname\n        filename.parent.mkdir(parents=True, exist_ok=True)\n        with open(f'{filename}.pkl', 'wb') as output:\n            pickle.dump(self, output, pickle.HIGHEST_PROTOCOL)",
"_____no_output_____"
],
[
"MiniRocketClassifier.__doc__",
"_____no_output_____"
],
[
"#export\ndef load_minirocket(fname, path='./models'):\n    path = Path(path)\n    filename = path/fname\n    with open(f'{filename}.pkl', 'rb') as input:\n        output = pickle.load(input)\n    return output",
"_____no_output_____"
],
[
"# export\nclass MiniRocketRegressor(sklearn.pipeline.Pipeline):\n    \"\"\"Time series regression using MINIROCKET features and a linear regressor\"\"\"\n    def __init__(self, num_features=10000, max_dilations_per_kernel=32, random_state=None,\n                 alphas=np.logspace(-3, 3, 7), *, normalize_features=True, memory=None, verbose=False, scoring=None, **kwargs):\n        \"\"\"\n        MiniRocketRegressor is recommended for up to 10k time series. \n        For a larger dataset, you can use MINIROCKET (in Pytorch).\n        scoring = None --> defaults to r2.\n        \"\"\"\n        self.steps = [('minirocketmultivariate', MiniRocketMultivariate(num_features=num_features,\n                                                                        max_dilations_per_kernel=max_dilations_per_kernel,\n                                                                        random_state=random_state)),\n                      ('ridgecv', RidgeCV(alphas=alphas, normalize=normalize_features, scoring=scoring, **kwargs))]\n        store_attr()\n        self._validate_steps()\n\n    def __repr__(self):\n        return f'Pipeline(steps={self.steps.copy()})'\n\n    def save(self, fname=None, path='./models'):\n        fname = ifnone(fname, 'MiniRocketRegressor')\n        path = Path(path)\n        filename = path/fname\n        filename.parent.mkdir(parents=True, exist_ok=True)\n        with open(f'{filename}.pkl', 'wb') as output:\n            pickle.dump(self, output, pickle.HIGHEST_PROTOCOL)",
"_____no_output_____"
],
[
"#export\ndef load_minirocket(fname, path='./models'):\n    path = Path(path)\n    filename = path/fname\n    with open(f'{filename}.pkl', 'rb') as input:\n        output = pickle.load(input)\n    return output",
"_____no_output_____"
],
[
"# export\nclass MiniRocketVotingClassifier(VotingClassifier):\n    \"\"\"Time series classification ensemble using MINIROCKET features, a linear classifier and majority voting\"\"\"\n    def __init__(self, n_estimators=5, weights=None, n_jobs=-1, num_features=10_000, max_dilations_per_kernel=32, random_state=None, \n                 alphas=np.logspace(-3, 3, 7), normalize_features=True, memory=None, verbose=False, scoring=None, class_weight=None, **kwargs):\n        store_attr()\n        estimators = [(f'est_{i}', MiniRocketClassifier(num_features=num_features, max_dilations_per_kernel=max_dilations_per_kernel, \n                                                        random_state=random_state, alphas=alphas, normalize_features=normalize_features, memory=memory, \n                                                        verbose=verbose, scoring=scoring, class_weight=class_weight, **kwargs)) \n                      for i in range(n_estimators)]\n        super().__init__(estimators, voting='hard', weights=weights, n_jobs=n_jobs, verbose=verbose)\n\n    def __repr__(self): \n        return f'MiniRocketVotingClassifier(n_estimators={self.n_estimators}, \\nsteps={self.estimators[0][1].steps})'\n\n    def save(self, fname=None, path='./models'):\n        fname = ifnone(fname, 'MiniRocketVotingClassifier')\n        path = Path(path)\n        filename = path/fname\n        filename.parent.mkdir(parents=True, exist_ok=True)\n        with open(f'{filename}.pkl', 'wb') as output:\n            pickle.dump(self, output, pickle.HIGHEST_PROTOCOL)",
"_____no_output_____"
],
[
"#export\ndef get_minirocket_preds(X, fname, path='./models', model=None):\n if X.ndim == 1: X = X[np.newaxis][np.newaxis]\n elif X.ndim == 2: X = X[np.newaxis]\n if model is None: \n model = load_minirocket(fname=fname, path=path)\n return model.predict(X)",
"_____no_output_____"
],
[
"# export\nclass MiniRocketVotingRegressor(VotingRegressor):\n    \"\"\"Time series regression ensemble using MINIROCKET features, a linear regressor and a voting regressor\"\"\"\n    def __init__(self, n_estimators=5, weights=None, n_jobs=-1, num_features=10_000, max_dilations_per_kernel=32, random_state=None,\n                 alphas=np.logspace(-3, 3, 7), normalize_features=True, memory=None, verbose=False, scoring=None, **kwargs):\n        store_attr()\n        estimators = [(f'est_{i}', MiniRocketRegressor(num_features=num_features, max_dilations_per_kernel=max_dilations_per_kernel,\n                                                       random_state=random_state, alphas=alphas, normalize_features=normalize_features, memory=memory,\n                                                       verbose=verbose, scoring=scoring, **kwargs))\n                      for i in range(n_estimators)]\n        super().__init__(estimators, weights=weights, n_jobs=n_jobs, verbose=verbose)\n\n    def __repr__(self):\n        return f'MiniRocketVotingRegressor(n_estimators={self.n_estimators}, \\nsteps={self.estimators[0][1].steps})'\n\n    def save(self, fname=None, path='./models'):\n        fname = ifnone(fname, 'MiniRocketVotingRegressor')\n        path = Path(path)\n        filename = path/fname\n        filename.parent.mkdir(parents=True, exist_ok=True)\n        with open(f'{filename}.pkl', 'wb') as output:\n            pickle.dump(self, output, pickle.HIGHEST_PROTOCOL)",
"_____no_output_____"
],
[
"# Univariate classification with sklearn-type API\ndsid = 'OliveOil'\nfname = 'MiniRocketClassifier'\nX_train, y_train, X_test, y_test = get_UCR_data(dsid)\ncls = MiniRocketClassifier()\ncls.fit(X_train, y_train)\ncls.save(fname)\npred = cls.score(X_test, y_test)\ndel cls\ncls = load_minirocket(fname)\ntest_eq(cls.score(X_test, y_test), pred)",
"_____no_output_____"
],
[
"# Multivariate classification with sklearn-type API\ndsid = 'NATOPS'\nX_train, y_train, X_test, y_test = get_UCR_data(dsid)\ncls = MiniRocketClassifier()\ncls.fit(X_train, y_train)\ncls.score(X_test, y_test)",
"_____no_output_____"
],
[
"# Multivariate classification with sklearn-type API\ndsid = 'NATOPS'\nX_train, y_train, X_test, y_test = get_UCR_data(dsid)\ncls = MiniRocketVotingClassifier(5)\ncls.fit(X_train, y_train)\ncls.score(X_test, y_test)",
"_____no_output_____"
],
[
"# Univariate regression with sklearn-type API\nfrom sklearn.metrics import mean_squared_error\ndsid = 'Covid3Month'\nfname = 'MiniRocketRegressor'\nX_train, y_train, X_test, y_test = get_Monash_data(dsid)\nrmse_scorer = make_scorer(mean_squared_error, greater_is_better=False)\nreg = MiniRocketRegressor(scoring=rmse_scorer)\nreg.fit(X_train, y_train)\nreg.save(fname)\ndel reg\nreg = load_minirocket(fname)\ny_pred = reg.predict(X_test)\nrmse = mean_squared_error(y_test, y_pred, squared=False)\nrmse",
"_____no_output_____"
],
[
"# Multivariate regression with sklearn-type API\nfrom sklearn.metrics import mean_squared_error\ndsid = 'AppliancesEnergy'\nX_train, y_train, X_test, y_test = get_Monash_data(dsid)\nrmse_scorer = make_scorer(mean_squared_error, greater_is_better=False)\nreg = MiniRocketRegressor(scoring=rmse_scorer)\nreg.fit(X_train, y_train)\nreg.save(fname)\ndel reg\nreg = load_minirocket(fname)\ny_pred = reg.predict(X_test)\nrmse = mean_squared_error(y_test, y_pred, squared=False)\nrmse",
"_____no_output_____"
],
[
"# Multivariate regression ensemble with sklearn-type API\nreg = MiniRocketVotingRegressor(5, scoring=rmse_scorer)\nreg.fit(X_train, y_train)\ny_pred = reg.predict(X_test)\nrmse = mean_squared_error(y_test, y_pred, squared=False)\nrmse",
"_____no_output_____"
],
[
"#export\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport numpy as np\n\nclass MiniRocketFeatures(nn.Module):\n \"\"\"This is a Pytorch implementation of MiniRocket developed by Malcolm McLean and Ignacio Oguiza\n \n MiniRocket paper citation:\n @article{dempster_etal_2020,\n author = {Dempster, Angus and Schmidt, Daniel F and Webb, Geoffrey I},\n title = {{MINIROCKET}: A Very Fast (Almost) Deterministic Transform for Time Series Classification},\n year = {2020},\n journal = {arXiv:2012.08791}\n }\n Original paper: https://arxiv.org/abs/2012.08791\n Original code: https://github.com/angus924/minirocket\"\"\"\n\n kernel_size, num_kernels, fitting = 9, 84, False\n\n def __init__(self, c_in, seq_len, num_features=10_000, max_dilations_per_kernel=32, random_state=None):\n super(MiniRocketFeatures, self).__init__()\n self.c_in, self.seq_len = c_in, seq_len\n self.num_features = num_features // self.num_kernels * self.num_kernels\n self.max_dilations_per_kernel = max_dilations_per_kernel\n self.random_state = random_state\n\n # Convolution\n indices = torch.combinations(torch.arange(self.kernel_size), 3).unsqueeze(1)\n kernels = (-torch.ones(self.num_kernels, 1, self.kernel_size)).scatter_(2, indices, 2)\n self.kernels = nn.Parameter(kernels.repeat(c_in, 1, 1), requires_grad=False)\n\n # Dilations & padding\n self._set_dilations(seq_len)\n\n # Channel combinations (multivariate)\n if c_in > 1:\n self._set_channel_combinations(c_in)\n\n # Bias\n for i in range(self.num_dilations):\n self.register_buffer(f'biases_{i}', torch.empty((self.num_kernels, self.num_features_per_dilation[i])))\n self.register_buffer('prefit', torch.BoolTensor([False]))\n \n def fit(self, X, chunksize=None):\n num_samples = X.shape[0]\n if chunksize is None:\n chunksize = min(num_samples, self.num_dilations * self.num_kernels)\n else: \n chunksize = min(num_samples, chunksize)\n np.random.seed(self.random_state)\n idxs = np.random.choice(num_samples, chunksize, False)\n 
self.fitting = True\n self(X[idxs])\n self.fitting = False\n \n def forward(self, x):\n _features = []\n for i, (dilation, padding) in enumerate(zip(self.dilations, self.padding)):\n _padding1 = i%2\n \n # Convolution\n C = F.conv1d(x, self.kernels, padding=padding, dilation=dilation, groups=self.c_in)\n if self.c_in > 1: # multivariate\n C = C.reshape(x.shape[0], self.c_in, self.num_kernels, -1)\n channel_combination = getattr(self, f'channel_combinations_{i}')\n C = torch.mul(C, channel_combination)\n C = C.sum(1)\n\n # Bias\n if not self.prefit or self.fitting:\n num_features_this_dilation = self.num_features_per_dilation[i]\n bias_this_dilation = self._get_bias(C, num_features_this_dilation)\n setattr(self, f'biases_{i}', bias_this_dilation) \n if self.fitting:\n if i < self.num_dilations - 1:\n continue\n else:\n self.prefit = torch.BoolTensor([True])\n return\n elif i == self.num_dilations - 1:\n self.prefit = torch.BoolTensor([True])\n else:\n bias_this_dilation = getattr(self, f'biases_{i}')\n \n # Features\n _features.append(self._get_PPVs(C[:, _padding1::2], bias_this_dilation[_padding1::2]))\n _features.append(self._get_PPVs(C[:, 1-_padding1::2, padding:-padding], bias_this_dilation[1-_padding1::2]))\n return torch.cat(_features, dim=1) \n\n def _get_PPVs(self, C, bias):\n C = C.unsqueeze(-1)\n bias = bias.view(1, bias.shape[0], 1, bias.shape[1])\n return (C > bias).float().mean(2).flatten(1)\n\n def _set_dilations(self, input_length):\n num_features_per_kernel = self.num_features // self.num_kernels\n true_max_dilations_per_kernel = min(num_features_per_kernel, self.max_dilations_per_kernel)\n multiplier = num_features_per_kernel / true_max_dilations_per_kernel\n max_exponent = np.log2((input_length - 1) / (9 - 1))\n dilations, num_features_per_dilation = \\\n np.unique(np.logspace(0, max_exponent, true_max_dilations_per_kernel, base = 2).astype(np.int32), return_counts = True)\n num_features_per_dilation = (num_features_per_dilation * 
multiplier).astype(np.int32)\n remainder = num_features_per_kernel - num_features_per_dilation.sum()\n i = 0\n while remainder > 0:\n num_features_per_dilation[i] += 1\n remainder -= 1\n i = (i + 1) % len(num_features_per_dilation)\n self.num_features_per_dilation = num_features_per_dilation\n self.num_dilations = len(dilations)\n self.dilations = dilations\n self.padding = []\n for i, dilation in enumerate(dilations): \n self.padding.append((((self.kernel_size - 1) * dilation) // 2))\n\n def _set_channel_combinations(self, num_channels):\n num_combinations = self.num_kernels * self.num_dilations\n max_num_channels = min(num_channels, 9)\n max_exponent_channels = np.log2(max_num_channels + 1)\n np.random.seed(self.random_state)\n num_channels_per_combination = (2 ** np.random.uniform(0, max_exponent_channels, num_combinations)).astype(np.int32)\n channel_combinations = torch.zeros((1, num_channels, num_combinations, 1))\n for i in range(num_combinations):\n channel_combinations[:, np.random.choice(num_channels, num_channels_per_combination[i], False), i] = 1\n channel_combinations = torch.split(channel_combinations, self.num_kernels, 2) # split by dilation\n for i, channel_combination in enumerate(channel_combinations): \n self.register_buffer(f'channel_combinations_{i}', channel_combination) # per dilation\n\n def _get_quantiles(self, n):\n return torch.tensor([(_ * ((np.sqrt(5) + 1) / 2)) % 1 for _ in range(1, n + 1)]).float()\n\n def _get_bias(self, C, num_features_this_dilation):\n np.random.seed(self.random_state)\n idxs = np.random.choice(C.shape[0], self.num_kernels)\n samples = C[idxs].diagonal().T \n biases = torch.quantile(samples, self._get_quantiles(num_features_this_dilation).to(C.device), dim=1).T\n return biases\n\nMRF = MiniRocketFeatures",
"_____no_output_____"
],
[
"#export \ndef get_minirocket_features(o, model, chunksize=1024, device=None, to_np=False):\n \"\"\"Function used to split a large dataset into chunks, avoiding OOM error.\"\"\"\n device = ifnone(device, default_device())\n model = model.to(device)\n if isinstance(o, np.ndarray): o = torch.from_numpy(o).to(device)\n _features = []\n for oi in torch.split(o, chunksize): \n _features.append(model(oi))\n features = torch.cat(_features).unsqueeze(-1)\n if to_np: return features.cpu().numpy()\n else: return features",
"_____no_output_____"
],
[
"#export\nclass MiniRocketHead(nn.Sequential):\n def __init__(self, c_in, c_out, seq_len=1, bn=True, fc_dropout=0.):\n layers = [Flatten()]\n if bn: layers += [nn.BatchNorm1d(c_in)]\n if fc_dropout: layers += [nn.Dropout(fc_dropout)] \n linear = nn.Linear(c_in, c_out)\n nn.init.constant_(linear.weight.data, 0)\n nn.init.constant_(linear.bias.data, 0) \n layers += [linear]\n head = nn.Sequential(*layers)\n super().__init__(OrderedDict([('backbone', nn.Sequential()), ('head', head)]))",
"_____no_output_____"
],
[
"#export\nclass MiniRocket(nn.Sequential):\n def __init__(self, c_in, c_out, seq_len, num_features=10_000, max_dilations_per_kernel=32, random_state=None, bn=True, fc_dropout=0):\n \n # Backbone\n backbone = MiniRocketFeatures(c_in, seq_len, num_features=num_features, max_dilations_per_kernel=max_dilations_per_kernel, \n random_state=random_state)\n num_features = backbone.num_features\n\n # Head\n self.head_nf = num_features\n layers = [Flatten()]\n if bn: layers += [nn.BatchNorm1d(num_features)]\n if fc_dropout: layers += [nn.Dropout(fc_dropout)] \n linear = nn.Linear(num_features, c_out)\n nn.init.constant_(linear.weight.data, 0)\n nn.init.constant_(linear.bias.data, 0) \n layers += [linear]\n head = nn.Sequential(*layers)\n\n super().__init__(OrderedDict([('backbone', backbone), ('head', head)]))\n\n def fit(self, X, chunksize=None):\n self.backbone.fit(X, chunksize=chunksize)",
"_____no_output_____"
],
[
"# Offline feature calculation\nfrom fastai.torch_core import default_device\nfrom tsai.data.all import *\nfrom tsai.learner import *\ndsid = 'ECGFiveDays'\nX, y, splits = get_UCR_data(dsid, split_data=False)\nmrf = MiniRocketFeatures(c_in=X.shape[1], seq_len=X.shape[2]).to(default_device())\nX_train = torch.from_numpy(X[splits[0]]).to(default_device())\nmrf.fit(X_train)\nX_tfm = get_minirocket_features(X, mrf)\ntfms = [None, TSClassification()]\nbatch_tfms = TSStandardize(by_var=True)\ndls = get_ts_dls(X_tfm, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=256)\nlearn = ts_learner(dls, MiniRocketHead, metrics=accuracy)\nlearn.fit(1, 1e-4, cbs=ReduceLROnPlateau(factor=0.5, min_lr=1e-8, patience=10))",
"_____no_output_____"
],
[
"# Online feature calculation\nfrom fastai.torch_core import default_device\nfrom tsai.data.all import *\nfrom tsai.learner import *\ndsid = 'ECGFiveDays'\nX, y, splits = get_UCR_data(dsid, split_data=False)\ntfms = [None, TSClassification()]\nbatch_tfms = TSStandardize()\ndls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=256)\nlearn = ts_learner(dls, MiniRocket, metrics=accuracy)\nlearn.fit_one_cycle(1, 1e-2)",
"_____no_output_____"
],
[
"#hide\nout = create_scripts(); beep(out)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75d3bc25031b20defc2a5655b758d115fe5560a | 73,915 | ipynb | Jupyter Notebook | test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb | shaheen19/wildfires_causes_consequences | eaf33c98d5b1dff5cec1628d4d1db621b6163f90 | [
"BSD-3-Clause"
] | 1 | 2020-07-05T20:33:22.000Z | 2020-07-05T20:33:22.000Z | test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb | shaheen19/wildfires_causes_consequences | eaf33c98d5b1dff5cec1628d4d1db621b6163f90 | [
"BSD-3-Clause"
] | null | null | null | test_notebooks/test_notebooks_aq/ea-python-2020-09-cold-springs-workflows-h4_workingDRAFT.ipynb | shaheen19/wildfires_causes_consequences | eaf33c98d5b1dff5cec1628d4d1db621b6163f90 | [
"BSD-3-Clause"
] | null | null | null | 33.355144 | 1,235 | 0.567057 | [
[
[
"<img style=\"float: left;\" src=\"earth-lab-logo-rgb.png\" width=\"150\" height=\"150\" />\n\n# Homework Template: Earth Analytics Python Course: Spring 2020",
"_____no_output_____"
],
[
"Before submitting this assignment, be sure to restart the kernel and run all cells. To do this, pull down the Kernel drop down at the top of this notebook. Then select **restart and run all**.\n\nMake sure you fill in any place that says `YOUR CODE HERE` or \"YOUR ANSWER HERE\", as well as your name and collaborators below.\n\n* IMPORTANT: Before you submit your notebook, restart the kernel and run all! Your first cell in the notebook should be `[1]` and all cells should run in order! You will lose points if your notebook does not run. \n\nFor all plots and code in general:\n\n* Add appropriate titles to your plot that clearly and concisely describe what the plot shows (e.g. time, location, phenomenon).\n* Be sure to use the correct bands for each plot.\n* Specify the source of the data for each plot using a plot caption created with `ax.text()`.\n* Place ONLY the code needed to create a plot in the plot cells. Place additional processing code ABOVE that cell (in a separate code cell).\n\nMake sure that you:\n\n* **Only include the package imports, code, data, and outputs that are CRUCIAL to your homework assignment.**\n* Follow PEP 8 standards. Use the `pep8` tool in Jupyter Notebook to ensure proper formatting (however, note that it does not catch everything!).\n* Keep comments concise and strategic. Don't comment every line!\n* Organize your code in a way that makes it easy to follow. \n* Write your code so that it can be run on any operating system. This means that:\n 1. the data should be downloaded in the notebook to ensure it's reproducible.\n 2. all paths should be created dynamically using the os package to ensure that they work across operating systems. \n* Check for spelling errors in your text and code comments\n",
"_____no_output_____"
]
],
[
[
"NAME = \"Sarah Jaffe\"\nCOLLABORATORS = \"Ruby Shaheen\"",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"# Week 09 Homework - Multispectral Remote Sensing II\n\n\n## Include the Plots, Text and Outputs Below\n\nFor all plots:\n\n* Add appropriate titles to your plot that clearly and concisely describe what the plot shows.\n* Be sure to use the correct bands for each plot.\n* Specify the source of the data used for each plot in a plot caption using `ax.text()`.\n\n\n## Project Introduction (10 points)\n\nRead the overview of the cold springs fire: https://www.earthdatascience.org/courses/use-data-open-source-python/data-stories/cold-springs-wildfire/\n\nIn the Markdown cell below, add a 2-4 sentence description of the Cold Springs Fire. This should \ninclude the event:\n1. name, \n2. type, \n3. duration / dates and \n4. location. ",
"_____no_output_____"
],
[
"Notifications of the Cold Springs Fire began on July 9th, 2016. This fire on Hurricane Hill, two miles northeast of Nederland, was reported to have started by an improperly extinguished campfire, burned for 5 days (officially extinguished July 14th, 2016). This fire resulted in the damage of 528 acres and 8 homes.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"# Autograding imports - do not modify this cell\nimport matplotcheck.notebook as nb\nimport matplotcheck.autograde as ag\nimport matplotcheck.raster as rs",
"_____no_output_____"
],
[
"# Import libraries (5 points) \n# Only include imports required to run this notebook\nimport os\nfrom glob import glob\nimport warnings\n\nimport numpy as np\nimport numpy.ma as ma\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import ListedColormap\nimport re \n\nimport rasterio as rio\nfrom rasterio.plot import plotting_extent\nimport geopandas as gpd\n\nimport earthpy as et\nimport earthpy.spatial as es\nimport earthpy.plot as ep\nimport earthpy.mask as em\n\n# Get Landsat data \net.data.get_data(url=\"https://ndownloader.figshare.com/files/21941085\")\n\n# Set working directory\nos.chdir(os.path.join(et.io.HOME, 'earth-analytics', 'data')) \nwarnings.simplefilter('ignore')",
"_____no_output_____"
]
],
[
[
"## Assignment Week 1 - Complete by March 18 \n\n## Define Functions To Process Landsat Data\n\nFor next week (March 18), create the 3 functions below used to process landsat data.\n\nFor all functions, add a docstring that uses numpy style format (for examples, review the [Intro to Earth Data Sciene textbook chapter on Functions](https://www.earthdatascience.org/courses/intro-to-earth-data-science/write-efficient-python-code/functions-modular-code/write-functions-in-python/#docstring)). \n\nThe docstring should include:\n\n * A one sentence description of what the function does.\n * Description of each input variable (parameter), following numpy docstring standards.\n * Description of each output object (return), following numpy docstring standards.",
"_____no_output_____"
],
[
"## Function 1: crop_stack_data function (5 points)\n\nWrite a function called `crop_stack_data` that: \n1. Takes a **list** of raster TIF files and crops all of the files in the list to a given input boundary in GeoPandas GeoDataFrame format.\n * **3 inputs:** \n * 1) list of files (i.e. the files to crop).\n * 2) directory to export cropped files.\n * 3) GeoPandas GeoDataFrame to crop the data.\n2. Returns a stacked **numpy array** of the cropped data and the metadata.\n * **2 outputs:**\n * 1) **numpy array**.\n * 2) **dictionary** containing metadata.",
"_____no_output_____"
]
],
[
[
"# Add your function here. Do NOT modify the function name\ndef crop_stack_data(files_to_crop, crop_dir_path, crop_bound):\n \"\"\"Crops a set of tif files and saves them \n in a crop directory. Returns a stacked numpy \n array of bands.\n \n Parameters\n ----------\n files_to_crop : list\n List of paths to multispectrum scenes \n (.tiff) that will need cropping.\n \n crop_dir_path : str\n The path to an output directory already \n in existance, or will be made, and that \n will store cropped and exported stacked \n bands.\n \n crop_bound : GeoPandas GeoDataFrame\n Vector shape file geodataframe used for \n cropping aoi's from files_to_crop.\n \n Returns\n -------\n all_bands_stack : numpy array(s)\n Stacked and cropped numpy array bands \n (our new aoi's).\n \n fire_crop_utmz13 : GeoPandas GeoDataFrame\n A vector shape file that either shares \n the crs of the stacked bands or is \n reprojected from the crop_bound crs.\n \"\"\"\n # Create output directory \n if not os.path.exists(crop_dir_path):\n os.mkdir(crop_dir_path)\n\n # Reproject boundary .shp to geotiff crs\n with rio.open(files_to_crop[0]) as landsat_src:\n if not crop_bound.crs == landsat_src.crs:\n fire_crop_utmz13 = crop_bound.to_crs(\n landsat_src.crs)\n\n # Crop all geotiff layers in each scene\n es.crop_all(raster_paths=files_to_crop,\n output_dir=crop_dir_path,\n geoms=fire_crop_utmz13,\n overwrite=True)\n\n # Retrieve cropped bands from output directory\n all_bands = sorted(glob(os.path.join(\n crop_dir_path, \"*.tif\")))\n \n # Stacked cropped bands\n all_bands_stack = es.stack(all_bands)\n\n \n############################### DOES NOT INCLUDE DICTIONARY FOR META DATA #############################\n # Stacked cropped bands and new fire boundary\n return all_bands_stack, fire_crop_utmz13",
"_____no_output_____"
]
],
[
[
"## Function 2: mask_data (5 points)\n\nIn the call below, write a function called `mask_data` that: \n1. Masks a numpy array using the Landsat Cloud QA layer. \n * **2 inputs:** \n * 1) numpy array to be masked\n * 2) Landsat QA layer in numpy array format \n2. Returns a masked array. \n * **1 output:**\n * 1) masked numpy array",
"_____no_output_____"
]
],
[
[
"# Add your function here. Do NOT modify the function name\ndef mask_data(arr, path_to_qa):\n \"\"\"Function that masks a numpy array using a \n cloud qa layer.\n \n Parameters\n ----------\n arr : numpy array\n Numpy array(s) of bands of aoi scene(s). \n \n path_to_qa : str\n Path to QA layer(s) associated with aoi(s). \n \n Returns\n -------\n arr : masked numpy array\n Updated numpy array(s) of bands with high cloud\n confidence, clouds and cloud shadows masked. \n \"\"\"\n # Open the qa layer\n with rio.open(path_to_qa[0]) as src:\n mask_arr = src.read(1)\n\n # Cloud mask values\n high_cloud_confidence = em.pixel_flags[\"pixel_qa\"][\n \"L8\"][\n \"High Cloud Confidence\"]\n cloud = em.pixel_flags[\"pixel_qa\"][\"L8\"][\"Cloud\"]\n cloud_shadow = em.pixel_flags[\"pixel_qa\"][\"L8\"][\"Cloud Shadow\"]\n\n all_masked_values = cloud_shadow + cloud + high_cloud_confidence\n \n # Mask the numpy array\n if any(i in np.unique(mask_arr) for i in all_masked_values):\n landsat_masked_bands = em.mask_pixels(arr,\n mask_arr,\n vals=all_masked_values)\n return landsat_masked_bands\n else:\n return arr",
"_____no_output_____"
]
],
[
[
"## Function 3: classify_dnbr (5 points)\n\nIn the cell below, write a function called `classify_dnbr` that: \n1. Classifies a numpy array using classes/bins defined in the function. \n * **1 input:**\n * 1) numpy array containing dNBR data in numpy array format \n2. Returns a classified numpy array. \n * **1 output:**\n * 1) numpy array with classified values (integers)",
"_____no_output_____"
]
],
[
[
"# Add your function here. Do NOT modify the function name\ndef classify_dnbr(arr):\n \"\"\"Function that creates a new numpy array of classified\n values from a difference normalized burn ration (dNBR) \n numpy array. \n \n Parameters\n ----------\n arr : Numpy array\n Numpy array(s) containing dNBR data. \n \n Returns\n -------\n arr_class : Numpy array\n Numpy array(s) containing reclassified \n dNBR values in 5 possible classes. \n \"\"\"\n # Reclassify values #########NOTE: WK 2 NOTEBOOK##########\n class_bins = [-np.inf, -.1, .1, .27, .66, np.inf]\n arr_reclass = np.digitize(arr, class_bins)\n \n return arr_reclass ",
"_____no_output_____"
]
],
[
[
"## Assignment Week 2 - BEGINS March 18 \n\nYou will write the function below next week after learning more about MODIS h4 data in class on March 18, 2020.\n\nBe sure to add a docstring that uses numpy style format (for examples, review the [Intro to Earth Data Sciene textbook chapter on Functions](https://www.earthdatascience.org/courses/intro-to-earth-data-science/write-efficient-python-code/functions-modular-code/write-functions-in-python/#docstring)). \n\nThe docstring should include:\n\n * A one sentence description of what the function does.\n * Description of each input variable (parameter), following numpy docstring standards.\n * Description of each output object (return), following numpy docstring standards.\n \n## Function 4: stack_modis_bands (10 points)\n\nWrite a function called `stack_modis_bands` that: \n1. Loops through an `h4` file to collect and stack all \"band\" layers in the file.\n2. Crops each band to the extent of a given input boundary in GeoPandas GeoDataFrame format.\n3. Returns a stacked numpy array of the cropped data and the metadata.\n\n#### Function Inputs and Outputs\n\n1. Takes a path to an h4 file and returns the reflectance bands cropped to a specific extent. \n * **2 inputs:**\n * 1) string path to hdf h4 file\n * 2) crop_bound in GeoDataFrame format\n2. Returns a classified numpy array. \n * **2 outputs:**\n * 1) numpy array containing cropped MODIS reflectance bands\n * 2) metadata for the numpy array",
"_____no_output_____"
]
],
[
[
"# Add your function here. Do NOT modify the function name\ndef stack_modis_bands(h4_path, crop_bound):\n '''\n Accessing, cropping, stacking and cleaning (masking\n nodata) all bands within h4 files and producing\n outputs necessary to process and plot data.\n \n Parameters\n ----------\n h4_path : str\n Path(s) to h4 file(s).\n \n crop_bound : GeoPandas GeoDataFrame\n Vector shape file geodataframe used for \n cropping aoi's from files_to_crop.\n \n Returns\n ------- \n cleaned_modis_data : numpy arrays\n Stacked and cropped numpy array bands \n (our new aoi's).\n \n crop_meta : dict\n Dictionary containing details of \n stacked numpy arrays in \n cleaned_modis_data.\n \n extent_modis : tuple\n Boundary defined by crop_bound \n required to plot aoi extent.\n \n fire_bound_sin : GeoPandas GeoDataFrame\n A vector shape file that either shares \n the crs of the stacked bands or is \n reprojected from the crop_bound crs. \n '''\n # Temporary hold cropped, stacked bands\n processed_bands = []\n \n # Access bands and band data\n with rio.open(h4_path) as dataset:\n for name in dataset.subdatasets:\n if re.search(\"b0.\\_1$\", name): \n with rio.open(name) as subdataset:\n\n if not crop_bound.crs == subdataset.crs:\n fire_bound_sin = crop_bound.to_crs(\n subdataset.crs)\n\n # Crop bands read with fire boundary\n crop_band, crop_meta = es.crop_image(\n subdataset,fire_bound_sin)\n processed_bands.append(np.squeeze(crop_band))\n \n # Stack\n modis_bands_stack = np.stack(processed_bands) \n\n # Identify and clean array of nodata\n cleaned_modis_data = ma.masked_where(\n modis_bands_stack == crop_meta[\"nodata\"], modis_bands_stack) \n \n # Plotting boundary\n extent_modis = plotting_extent(\n crop_band[0], crop_meta[\"transform\"])\n\n return cleaned_modis_data, crop_meta, fire_bound_sin, extent_modis\n",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"# Figure 1 Overview - Grid of 3 Color InfraRed (CIR) Plots: NAIP, Landsat and MODIS\n\n**You will be able to complete the MODIS subplot after March 18 class!**\n\nCreate a single figure that contains a grid of 3 plots of color infrared (also called false color) composite images using:\n\n* Post Fire NAIP data (this is the data that you downloaded for your week 6 homework)\n* Post Fire Landsat data (use: `et.data.get_data(url=\"https://ndownloader.figshare.com/files/21941085\")`\n* Post Fire MODIS data (use: `et.data.get_data('cold-springs-modis-h5')`\n\nFor each map, be sure to:\n\n* Crop the data to the fire boundary extent.\n* Overlay the fire boundary layer (`vector_layers/fire-boundary-geomac/co_cold_springs_20160711_2200_dd83.shp`).\n* Use the band combination **r = infrared band**, **g = red band**, **b = green** band.\n* Be sure to label each plot with the data type (NAIP vs. Landsat vs. MODIS) and spatial resolution.\n\nHINT: In a CIR image, the NIR band is plotted on the “red” band, the red band is plotted on the \"green\" band and the green band is plotted on the \"blue\" band.",
"_____no_output_____"
]
],
[
[
"# Open fire boundary in this cell\n# Import fire boundary .shp\nfire_path = os.path.join(\"cold-springs-fire\",\n \"vector_layers\",\n \"fire-boundary-geomac\",\n \"co_cold_springs_20160711_2200_dd83.shp\")\nfire_crop = gpd.read_file(fire_path)\nprint(fire_crop.total_bounds)\n\n# View data attributes and CRS.\nprint(fire_crop.shape)\nprint(fire_crop.crs)",
"[-105.49580578 39.97552258 -105.45633769 39.98718058]\n(1, 22)\n{'init': 'epsg:4269'}\n"
]
],
[
[
"## Process Landsat Data\n\nIn the cells below, open and process your Landsat Data using loops and string manipulation of paths. \n\nUse the functions `crop_stack_data()` and `mask_data()` that you wrote above to open, \ncrop and stack each Landsat data scene (pre and post fire).\n\n#### Notes\n\nIf you were implementing this workflow for more scenes, you could write \na helper function that tested the crs of the crop extent. If it needed to be \nreprojected you could do so and write it out as a file, or store is in a dictionary\nfor for reuse \nin your loop. For this assignment rather than introducing additional tasks\nwe will keep it simple and open up and reproject the boundary once.\n\nOne way to do this is to create a connection to one single tif file using\nrasterio. Once you have the `src` object you can grab the crs and reproject\nthe fire boundary. The code to do that is below:\n\n```\nwith rio.open(glob(all_dirs[0] + \"/*.tif\")[0]) as src:\n fire_bound_utmz13 = fire_boundary.to_crs(src.crs)\n```\n",
"_____no_output_____"
]
],
[
[
"# Create loop to process Landsat data in this cell\n# Set paths to scene directories\nbase_path = os.path.join(\"earthpy-downloads\", \n \"landsat-coldsprings-hw\")\n \nall_dirs = glob(os.path.join(base_path, \"*\"))\n\n# Lists for naming convension dictionaries\ncleaned_landsat_data = {}\n\n# Set paths, create output and run functiond to crop/mask scenes \nfor i in all_dirs:\n \n # Paths to all landsat scenes\n all_scenes = sorted(glob(os.path.join(i, \"*.tif\")))\n crop_path = os.path.join(i, \"cropped\")\n \n # Grabbing identifying names/dates\n scene_name = os.path.basename(os.path.normpath(i))\n date = scene_name[10:18]\n\n # Crop and stack all landsat scenes \n stacked_bands, fire_crop_utmz13 = crop_stack_data(\n all_scenes, \n crop_path, \n fire_crop)\n \n # Paths to cropped qa and bands\n cropped_qa = glob(os.path.join(crop_path, \n \"*pixel**crop*.tif\"))\n cropped_scenes = sorted(glob(os.path.join(crop_path, \n \"*band*\")))\n \n # Mask all landsat scenes of bad pixels \n bands_arr, bands_meta = es.stack(cropped_scenes, nodata=-9999)\n cleaned_landsat_data[date] = mask_data(bands_arr, cropped_qa)\n \n # Create plotting extent\n with rio.open(cropped_scenes[1]) as landsat_src:\n extent_landsat = plotting_extent(landsat_src)",
"_____no_output_____"
],
[
"cleaned_landsat_data['20160621']",
"_____no_output_____"
],
[
"# Landsat NDVI processing\nlandsat_prefire_ndvi = es.normalized_diff(\n cleaned_landsat_data[\"20160621\"][4], \n cleaned_landsat_data[\"20160621\"][3])\nlandsat_postfire_ndvi = es.normalized_diff(\n cleaned_landsat_data[\"20160723\"][4], \n cleaned_landsat_data[\"20160723\"][3])\n \nlandsat_dndvi = landsat_postfire_ndvi - landsat_prefire_ndvi",
"_____no_output_____"
]
],
[
[
"## Landsat Function Tests",
"_____no_output_____"
]
],
[
[
"# DO NOT MODIFY - test: crop_stack_data function",
"_____no_output_____"
],
[
"# DO NOT MODIFY - test mask_data function\n",
"_____no_output_____"
],
[
"# DO NOT MODIFY - test classify_dnbr function\n",
"_____no_output_____"
]
],
[
[
"## Process NAIP Post Fire Data\n\nIn the cell below, open and crop the post-fire NAIP data that you downloaded \nfor homework 6.",
"_____no_output_____"
]
],
[
[
"# Process NAIP data\n#Import NAIP 2017 image\nnaip_2017_path = os.path.join(\"cold-springs-fire\", \"naip\", \n \"m_3910505_nw_13_1_20170902\",\n \"m_3910505_nw_13_1_20170902.tif\")\n\n#######################NODATA NOT ACCOUNTED FOR HERE = \"None\"########################################\nwith rio.open(naip_2017_path) as naip_2017_src:\n if landsat_src.crs == naip_2017_src.crs:\n fire_crop_reproj = fire_crop_utmz13\n else:\n fire_crop_reproj = fire_crop.to_crs('epsg:26913')\n \n naip_2017_crop, naip_2017_meta = es.crop_image(\n naip_2017_src, fire_crop_reproj)\n naip_extent = plotting_extent(naip_2017_crop[0], \n naip_2017_meta['transform'])",
"_____no_output_____"
]
],
[
[
"## Process MODIS h4 Data - March 18th, 2020\n\nIn the cells below, open and process your MODIS hdf4 data using loops and \nstring manipulation of paths. You will learn more about working with MODIS \nin class on March 18th.\n\nUse the function `stack_modis_bands()` that you previously wrote in this notebook to open and crop \nthe MODIS data.\n",
"_____no_output_____"
]
],
[
[
"# Process MODIS Data\n# Set paths to scene directories\nmodis_dirs = glob(os.path.join(\"cold-springs-modis-h5\", \n \"*\"))\n\n# Modis dictionary of scene dates\nmodis_bands_dict = {}\n\nfor i in modis_dirs:\n \n # Paths to all modis scenes\n all_scenes = glob(os.path.join(i, \"*.hdf\"))\n \n # Grabbing identifying names/dates\n scene_names = os.path.basename(os.path.normpath(i))\n date = scene_names\n \n # Use modis function to stack and crop\n cleaned_modis_data, modis_meta, modis_boundary, extent_modis = stack_modis_bands(all_scenes[0], fire_crop)\n\n # Creating dictionary\n modis_bands_dict[date] = np.squeeze(cleaned_modis_data)",
"_____no_output_____"
],
[
"# Process modis NDVI\nmodis_prefire_ndvi = es.normalized_diff(\n modis_bands_dict['07_july_2016'][1], \n modis_bands_dict['07_july_2016'][0])\nmodis_postfire_ndvi = es.normalized_diff(\n modis_bands_dict['17_july_2016'][1], \n modis_bands_dict['17_july_2016'][0])\n \nmodis_dndvi = modis_postfire_ndvi - modis_prefire_ndvi",
"_____no_output_____"
],
[
"# DO NOT MODIFY THIS CELL - autograding tests for MODIS function stack_modis_bands\n",
"_____no_output_____"
]
],
[
[
"## Figure 1: Plot CIR for NAIP, Landsat and MODIS Using Post Fire Data (15 points each subplot)\n\nIn the cell below, create a figure with 3 subplots stacked vertically.\n\nIn each subplot, plot a CIR composite image using the post-fire data for:\n\n* NAIP (first figure axis) \n* Landsat (second figure axis) \n* MODIS (third figure axis)\n\nrespectively on this figure.",
"_____no_output_____"
]
],
[
[
"# Plot CIR of Post Fire NAIP, Landsat & MODIS together in one figure\nfig, [ax1, ax2, ax3] = plt.subplots(3, 1, figsize=(12, 18))\n\n# Plot NAIP CIR\nep.plot_rgb(naip_2017_crop,\n extent = naip_extent,\n rgb = [3, 0, 1],\n ax = ax1,\n title=\"NAIP CIR Image\\n Post Cold Springs\" \\\n \" Fire, Colorado\\n 2 September 2017\")\n\nfire_crop_reproj.plot(ax = ax1, color = 'None', \n edgecolor = 'white', linewidth=2)\n\n# Plot Landsat CIR\nep.plot_rgb(cleaned_landsat_data[\"20160723\"],\n rgb=[4,3,2],\n extent=extent_landsat,\n ax=ax2,\n title = \"Landsat CIR Composit Image\\n Post Cold Springs\" \\\n \" Fire, Colorado\\n 23 July 2016\")\n\nfire_crop_utmz13.plot(ax=ax2, color=\"None\", \n edgecolor=\"white\", linewidth=2)\n\n# Plot modis CIR\nep.plot_rgb(modis_bands_dict['17_july_2016'],\n rgb=[4,3,2],\n extent=extent_modis,\n ax=ax3,\n title = \"Modis CIR Composit Image\\n Post Cold Springs\" \\\n \" Fire, Colorado\\n 17 July 2016\")\n\nmodis_boundary.plot(ax=ax3, color=\"None\", \n edgecolor=\"white\", linewidth=2)\n\n\n# Captions\nax1.text(0, -0.1,'2017 1m NAIP Image Data Source: Earth Explorer\\n'\n r'Boundary Data Source: Geospatial Multi-Agency ' \\\n r'Coordination (GeoMAC)', verticalalignment='bottom', \n horizontalalignment='left', transform=ax1.transAxes)\n\nax2.text(0, -0.1, '2017 30m Landsat Image Data Source: ' \\\n 'https://ndownloader.figshare.com/files/21941085\\n' \n r'Boundary Data Source: Geospatial Multi-Agency ' \\\n r'Coordination (GeoMAC)', verticalalignment='bottom', \n horizontalalignment='left', transform=ax2.transAxes)\n\nax3.text(0, -0.1, '2017 500m MODIS Image Data Source: earthpy ' \\\n 'module key \"cold-springs-modis-h5\" \\n'\n r'Boundary Data Source: Geospatial Multi-Agency ' \\\n r'Coordination (GeoMAC)', verticalalignment='bottom', \n horizontalalignment='left', transform=ax3.transAxes)\n\n### DO NOT REMOVE LINE BELOW ###\nplot01_CIR_res_comparison = nb.convert_axes(plt, which_axes=\"all\")",
"_____no_output_____"
],
[
"# DO NOT TOUCH THIS CELL - autograding tests for NAIP subplot\n",
"_____no_output_____"
],
[
"# DO NOT TOUCH THIS CELL - autograding tests for Landsat subplot\n",
"_____no_output_____"
],
[
"# DO NOT TOUCH THIS CELL - autograding tests for MODIS subplot\n",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"# Figure 2: Difference NDVI (dNDVI) Using Landsat & MODIS Data (20 points each subplot)\n\nPlot the NDVI difference using before and after Landsat and MODIS data \nthat cover the Cold Springs Fire study area. For each axis be sure to:\n\n1. overlay the fire boundary (`vector_layers/fire-boundary-geomac/co_cold_springs_20160711_2200_dd83.shp`) on top of the data.\n2. Be sure that the data are cropped using the fire boundary extent.\n\nIn the cell below, create a figure with 2 subplots stacked vertically:\n* Plot dNDVI for Landsat on the first axis of the figure.\n* Plot dNDVI using MODIS on the second axis of the figure.\n\nUse the \"before\" and \"after\" data that you processed above to calculate NDVI difference for both MODIS and Landsat\n\n\n## NDVI Difference\n\nTo create the NDVI Difference raster using \"before\" and \"after\" fire \nLandsat and MODIS data, you must first calculate NDVI for each \ndataset \"before\" and \"after\" the fire. \n\nOnce you have the \"before\" and \"after\" NDVI arrays, you can subtract \nthe pre-fire NDVI array FROM the post-fire NDVI array (post-fire minus pre-fire). \n\nThe resulting array will show you change in the area's NDVI from the first image to the second image.\n\nHINT: Remember, you can use `es.normalized_diff(band_1, band_2)` to get the NDVI of an image. ",
"_____no_output_____"
]
],
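The NDVI-difference workflow described above (compute NDVI before and after, then subtract post-fire minus pre-fire) can be sketched without `earthpy` using plain NumPy. This is a minimal stand-in for `es.normalized_diff`, with toy two-pixel bands whose values are purely illustrative, not real Landsat or MODIS data:

```python
import numpy as np

def normalized_diff(b1, b2):
    """(b1 - b2) / (b1 + b2), with zero-sum pixels masked as NaN."""
    b1, b2 = b1.astype("float64"), b2.astype("float64")
    denom = b1 + b2
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(denom == 0, np.nan, (b1 - b2) / denom)

# Toy NIR and red bands, pre- and post-fire (values are illustrative)
nir_pre = np.array([[0.50, 0.60]])
red_pre = np.array([[0.10, 0.20]])
nir_post = np.array([[0.20, 0.30]])
red_post = np.array([[0.10, 0.20]])

ndvi_pre = normalized_diff(nir_pre, red_pre)
ndvi_post = normalized_diff(nir_post, red_post)
dndvi = ndvi_post - ndvi_pre  # post-fire minus pre-fire, as described above
```

Negative dNDVI values mark pixels where vegetation greenness dropped between the two dates, which is exactly the fire-scar signal the figure is meant to show.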
[
[
"# Plot Difference NDVI for Landsat & MODIS together in one figure\nfig, [ax1, ax2] = plt.subplots(2, 1, figsize=(12, 12))\n\n# Plot Landsat dNDVI\nep.plot_bands(landsat_dndvi, cmap=\"RdYlGn\",\n vmin=-0.6, vmax=0.6, ax = ax1, extent=extent_landsat,\n title=\"Landsat Derived dNDVI\\n 21 June vs. 23 July,\" \\\n \" 2016\\n Cold Springs Fire, Colorado\", scale=False)\n\nfire_crop_utmz13.plot(ax=ax1, color='None', \n edgecolor='black', linewidth=2)\n\n# Plot modis dNDVI\nep.plot_bands(modis_dndvi, cmap=\"RdYlGn\",\n vmin=-0.6, vmax=0.6, ax = ax2, extent=extent_modis,\n title=\"Modis Derived dNDVI\\n 07 July vs. 17 July,\" \\\n \" 2016\\n Cold Springs Fire, Colorado\", scale=False)\n\nmodis_boundary.plot(ax=ax2, color='None', \n edgecolor='black', linewidth=2)\n\n# Captions\nax1.text(0, -0.1, '2017 30m Landsat Image Data Source: ' \\\n 'https://ndownloader.figshare.com/files/21941085\\n' \n r'Boundary Data Source: Geospatial Multi-Agency ' \\\n r'Coordination (GeoMAC)', verticalalignment='bottom', \n horizontalalignment='left', transform=ax1.transAxes)\n\nax2.text(0, -0.1, '2017 500m MODIS Image Data Source: earthpy ' \\\n 'module key \"cold-springs-modis-h5\" \\n'\n r'Boundary Data Source: Geospatial Multi-Agency ' \\\n r'Coordination (GeoMAC)', verticalalignment='bottom', \n horizontalalignment='left', transform=ax2.transAxes)\n\n### DO NOT REMOVE LINE BELOW ###\nplot02_landsat_modis_ndvi_diff = nb.convert_axes(plt, which_axes=\"all\")",
"_____no_output_____"
],
[
"# Ignore this cell - autograding tests for Landsat subplot\n",
"_____no_output_____"
],
[
"# Ignore this cell - autograding tests for MODIS subplot\n",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"# Figure 3 Overview: Difference NBR (dNBR) Using Landsat & MODIS Data (25 points each subplot)\n\nCreate a figure that has two subplots stacked vertically using the same MODIS and Landsat data that you processed above. \n\n* Subplot one: classified dNBR using Landsat data\n* Subplot two: classified dNBR using MODIS data \n\nFor each subplot, overlay the fire extent boundary `vector_layers/fire-boundary-geomac/co_cold_springs_20160711_2200_dd83.shp`\non top of the dNBR map\n\nTo classify each dNBR raster, use the `classify_dnbr()` function that you \ndefined above. \n\nWhen you plot your MODIS data, you may notice that the data does not contain all of the classes that Landsat contains which can range from 1-5. To ensure that your colormap plots properly, set the `vmin=` and `vmax=` parameters to 1 and 5 respectively when you call `ep.plot_bands()`:\n\n`vmin=1, vmax=5`\n\n\n## Figure Legend\n\nYou only need one legend for this figure. The `ep.draw_legend()` function will create a legend of \"boxes\" if you provide it with an:\n\n1. `imshow()` image object\n2. classes : a list of numbers that represent the classes in your numpy array\n3. 
titles: a list of dNBR class names example: `[\"High Severity\", \"Low Severity\"]`\n\n### dNBR Classes\n\nNote: if you scaled your data, you may need to scale the values below by a factor of 10.\n\n| SEVERITY LEVEL | dNBR RANGE |\n|----------------|--------------|\n| Enhanced Regrowth | < -.1 |\n| Unburned | -.1 to +.1 |\n| Low Severity | +.1 to +.27 |\n| Moderate Severity | +.27 to +.66 |\n| High Severity | > .66 |\n\n\nHINT: Your dNBR classification list should look like this:\n`[-np.inf, -.1, .1, .27, .66, np.inf]`\n\nHINT 2: If you want to use them, these are the colors used in the maps on the website:\n\n`[\"g\", \"yellowgreen\", \"peachpuff\", \"coral\", \"maroon\"]`\n\nThe code to create a custom colormap for the plot is below:\n\n```\nnbr_colors = [\"g\", \"yellowgreen\", \"peachpuff\", \"coral\", \"maroon\"]\nnbr_cmap = ListedColormap(nbr_colors)\n```",
"_____no_output_____"
]
],
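The `classify_dnbr()` helper referenced above is defined earlier in the notebook and is not shown here; a minimal sketch of what such a function can look like uses `np.digitize` with the class breaks from the table (the function name and implementation below are illustrative, not the notebook's actual code):

```python
import numpy as np

def classify_dnbr_sketch(dnbr):
    """Bin continuous dNBR values into severity classes 1-5."""
    bins = [-np.inf, -0.1, 0.1, 0.27, 0.66, np.inf]
    # np.digitize returns 1..5 for the five intervals between these edges
    return np.digitize(dnbr, bins)

# One sample value per severity level, in table order
sample = np.array([-0.5, 0.0, 0.2, 0.5, 0.8])
classes = classify_dnbr_sketch(sample)
```

With six bin edges, `np.digitize` maps every finite input to one of five integer classes, which is why `vmin=1, vmax=5` keeps the colormap stable even when a raster is missing some classes.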
[
[
"# Calculate dNBR for Landsat - be sure to use the correct bands!\n# Landsat NBR processing\nlandsat_prefire_nbr = es.normalized_diff(\n cleaned_landsat_data[\"20160621\"][4], \n cleaned_landsat_data[\"20160621\"][6])\nlandsat_postfire_nbr = es.normalized_diff(\n cleaned_landsat_data[\"20160723\"][4], \n cleaned_landsat_data[\"20160723\"][6])\n \nlandsat_dnbr = landsat_prefire_nbr - landsat_postfire_nbr\n\nlandsat_dnbr_reclass = classify_dnbr(landsat_dnbr)",
"_____no_output_____"
],
[
"# Calculate dNBR for MODIS - be sure to use the correct bands!\n# Modis NBR Processing\nmodis_prefire_nbr = es.normalized_diff(\n modis_bands_dict['07_july_2016'][1], \n modis_bands_dict['07_july_2016'][6])\nmodis_postfire_nbr = es.normalized_diff(\n modis_bands_dict['17_july_2016'][1], \n modis_bands_dict['17_july_2016'][6])\n \nmodis_dnbr = modis_prefire_nbr - modis_postfire_nbr\n\nmodis_dnbr_reclass = classify_dnbr(modis_dnbr)",
"_____no_output_____"
],
[
"# Plot Difference NBR (dNBR) for Landsat & MODIS together in one figure\nfig, [ax1, ax2] = plt.subplots(2, 1, figsize=(12, 12))\n\ncolors = [\"g\", \"yellowgreen\", \"peachpuff\", \"coral\", \"maroon\"]\nclass_labels = [\"Enhanced Regrowth\", \"Unburned\", \"Low Severity\",\n \"Moderate Severity\", \"High Severity\"]\ncmap = ListedColormap(colors)\n\n# Plot Landsat dNBR\nim = ax1.imshow(landsat_dnbr_reclass, cmap = cmap, \n extent=extent_landsat)\n\nfire_crop_utmz13.plot(ax=ax1, color='None', \n edgecolor='black', linewidth=2)\n\nep.draw_legend(im, titles=class_labels)\nax1.set_title(\"Landsat Derived dNBR\\n 21 June vs 23 July \" \\\n \"2016\\n Cold Springs Fire, Colorado\")\nax1.set_axis_off()\n\n# Plot Modis dNBR\nim2 = ax2.imshow(modis_dnbr_reclass, cmap = cmap, \n vmin=1, vmax=5, extent=extent_modis)\n\nmodis_boundary.plot(ax=ax2, color='None', \n edgecolor='black', linewidth=2)\n\nax2.set_title(\"Modis Derived dNBR\\n 21 June vs 23 July \" \\\n \"2016\\n Cold Springs Fire, Colorado\")\nax2.set_axis_off()\n\n# Captions\nax1.text(0, -0.1, '2017 30m Landsat Image Data Source: ' \\\n 'https://ndownloader.figshare.com/files/21941085\\n' \n r'Boundary Data Source: Geospatial Multi-Agency ' \\\n r'Coordination (GeoMAC)', verticalalignment='bottom', \n horizontalalignment='left', transform=ax1.transAxes)\n\nax2.text(0, -0.1, '2017 500m MODIS Image Data Source: earthpy ' \\\n 'module key \"cold-springs-modis-h5\" \\n'\n r'Boundary Data Source: Geospatial Multi-Agency ' \\\n r'Coordination (GeoMAC)', verticalalignment='bottom', \n horizontalalignment='left', transform=ax2.transAxes)\n\n### DO NOT REMOVE LINE BELOW ###\nplot03_landsat_dnbr = nb.convert_axes(plt, which_axes=\"all\")",
"_____no_output_____"
],
[
"# Ignore this cell - autograding tests for Landsat subplot\n",
"_____no_output_____"
],
[
"# Ignore this cell - autograding tests for MODIS subplot\n",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"# Landsat vs MODIS Burned Area (10 points)\n\nIn the cell below, print the total area burned in classes 4 and 5 (moderate to high severity) for both datasets \n(Landsat and MODIS).\n\nHINT: Feel free to experiment with loops to complete this part of the homework. ",
"_____no_output_____"
]
],
[
[
"# Total Burned Area in Classes 4 and 5 for Landsat and MODIS\nlandsat_pixel_area = int(landsat_src.res[0]) * int(landsat_src.res[0])\nmodis_pixel_area = int(modis_meta['transform'][0]) * \\\n int(modis_meta['transform'][0])\n\nburned_area_per_source = [landsat_dnbr_reclass, modis_dnbr_reclass]\nburned_area_per_pixel = [landsat_pixel_area, modis_pixel_area]\ntotal_burned_area = []\n\n# for i, z in zip(burned_area_per_source, burned_area_per_pixel):\n# area_loop = ((((i[i == 4]).size)*z) + (((i[i == 5]).size)*z))\n# total_burned_area.append(area_loop)\n\nprint('Landsat total burned area (meters squared):', \n total_burned_area[0])\nprint('Modis total burned area (meters squared):', \n total_burned_area[1])\n\nlandsat_class_4 = (landsat_dnbr_reclass[landsat_dnbr_reclass == 4]).size\nlandsat_class_5 = (landsat_dnbr_reclass[landsat_dnbr_reclass == 5]).size\n\nmodis_class_4 = (modis_dnbr_reclass[modis_dnbr_reclass == 4]).size\nmodis_class_5 = (modis_dnbr_reclass[modis_dnbr_reclass == 5]).size\n\nlandsat_mod_severity_area = landsat_pixel_area * landsat_class_4\nlandsat_high_severity_area = landsat_pixel_area * landsat_class_5\n\nlandsat_total_burn_area = landsat_mod_severity_area + landsat_high_severity_area\n\nmodis_mod_severity_area = modis_pixel_area * modis_class_4\nmodis_high_severity_area = modis_pixel_area * modis_class_5\n\nmodis_total_burn_area = modis_mod_severity_area + modis_high_severity_area\n\nprint('Landsat total burned area:', landsat_total_burn_area, 'meters')\nprint('Modis total burned area:', modis_total_burn_area, 'meters')",
"_____no_output_____"
]
],
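The hint above suggests experimenting with loops; the pixel-count-times-pixel-area math from the cell can be written as a standalone loop like the sketch below. The toy rasters and the nominal 30 m / 463 m pixel sizes are illustrative only, not the notebook's real classified arrays:

```python
import numpy as np

# Toy classified rasters (severity classes 1-5) and pixel areas in m^2
rasters = {
    "Landsat": np.array([[4, 5, 1], [2, 4, 3]]),
    "MODIS": np.array([[4, 1], [5, 5]]),
}
pixel_areas = {"Landsat": 30 * 30, "MODIS": 463 * 463}

burned_areas = {}
for name, arr in rasters.items():
    # Count pixels in the moderate (4) and high (5) severity classes
    n_pixels = np.isin(arr, [4, 5]).sum()
    burned_areas[name] = int(n_pixels) * pixel_areas[name]
    print(name, "class 4-5 burned area:", burned_areas[name],
          "square meters")
```

Because one MODIS pixel covers hundreds of times the ground area of one Landsat pixel, even a small difference in classified pixel counts translates into a large difference in estimated burned area, which is the comparison the question is driving at.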
[
[
"",
"_____no_output_____"
],
[
"# Do not edit this cell! (5 points)\n\n* Each figure specifies the source of the data (for each plot) using a plot caption created with `ax.text()`.",
"_____no_output_____"
],
[
"# Do not edit this cell! (5 points)\n\nThe notebook will also be checked for overall clean code requirements as specified at the **very top** of this notebook! Some of these requirements include (review the top cells for more specifics): \n\n* Notebook begins at cell [1] and runs on any machine in its entirety.\n* PEP 8 format is applied throughout (including lengths of comment and code lines).\n* No additional code or imports in the notebook\n* Notebook is fully reproducible. This means:\n * reproducible paths using the os module.\n * data downloaded using code in the notebook.\n * all imports at top of notebook.",
"_____no_output_____"
],
[
"# Do not edit this cell! (5 points)\n\nAll functions contain docstrings with inputs and outputs clearly identified and following numpy docstring standards.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e75d5058888fbc584bd0804cea5819841acf7037 | 7,573 | ipynb | Jupyter Notebook | Image/edge_detection.ipynb | c11/earthengine-py-notebooks | 144b57e4d952da095ba73c3cc8ce2f36291162ff | [
"MIT"
] | 1 | 2020-05-31T14:19:59.000Z | 2020-05-31T14:19:59.000Z | Image/edge_detection.ipynb | c11/earthengine-py-notebooks | 144b57e4d952da095ba73c3cc8ce2f36291162ff | [
"MIT"
] | null | null | null | Image/edge_detection.ipynb | c11/earthengine-py-notebooks | 144b57e4d952da095ba73c3cc8ce2f36291162ff | [
"MIT"
] | null | null | null | 43.274286 | 1,031 | 0.577182 | [
[
[
"<table class=\"ee-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://github.com/giswqs/earthengine-py-notebooks/tree/master/Image/edge_detection.ipynb\"><img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /> View source on GitHub</a></td>\n <td><a target=\"_blank\" href=\"https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Image/edge_detection.ipynb\"><img width=26px src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png\" />Notebook Viewer</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Image/edge_detection.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /> Run in Google Colab</a></td>\n</table>",
"_____no_output_____"
],
[
"## Install Earth Engine API and geemap\nInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.\nThe following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.\n\n**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).",
"_____no_output_____"
]
],
[
[
"# Installs geemap package\nimport subprocess\n\ntry:\n import geemap\nexcept ImportError:\n print('geemap package not installed. Installing ...')\n subprocess.check_call([\"python\", '-m', 'pip', 'install', 'geemap'])\n\n# Checks whether this notebook is running on Google Colab\ntry:\n import google.colab\n import geemap.eefolium as emap\nexcept:\n import geemap as emap\n\n# Authenticates and initializes Earth Engine\nimport ee\n\ntry:\n ee.Initialize()\nexcept Exception as e:\n ee.Authenticate()\n ee.Initialize() ",
"_____no_output_____"
]
],
[
[
"## Create an interactive map \nThe default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function. ",
"_____no_output_____"
]
],
[
[
"Map = emap.Map(center=[40,-100], zoom=4)\nMap.add_basemap('ROADMAP') # Add Google Map\nMap",
"_____no_output_____"
]
],
[
[
"## Add Earth Engine Python script ",
"_____no_output_____"
]
],
[
[
"# Add Earth Engine dataset\n# Load a Landsat 8 image, select the panchromatic band.\nimage = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318').select('B8')\n\n# Perform Canny edge detection and display the result.\ncanny = ee.Algorithms.CannyEdgeDetector(**{\n 'image': image, 'threshold': 10, 'sigma': 1\n})\nMap.setCenter(-122.054, 37.7295, 10)\nMap.addLayer(canny, {}, 'canny')\n\n# Perform Hough transform of the Canny result and display.\nhough = ee.Algorithms.HoughTransform(canny, 256, 600, 100)\nMap.addLayer(hough, {}, 'hough')\n\n# Load a Landsat 8 image, select the panchromatic band.\nimage = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318').select('B8')\nMap.addLayer(image, {'max': 12000})\n\n# Define a \"fat\" Gaussian kernel.\nfat = ee.Kernel.gaussian(**{\n 'radius': 3,\n 'sigma': 3,\n 'units': 'pixels',\n 'normalize': True,\n 'magnitude': -1\n})\n\n# Define a \"skinny\" Gaussian kernel.\nskinny = ee.Kernel.gaussian(**{\n 'radius': 3,\n 'sigma': 1,\n 'units': 'pixels',\n 'normalize': True,\n})\n\n# Compute a difference-of-Gaussians (DOG) kernel.\ndog = fat.add(skinny)\n\n# Compute the zero crossings of the second derivative, display.\nzeroXings = image.convolve(dog).zeroCrossing()\nMap.setCenter(-122.054, 37.7295, 10)\nMap.addLayer(zeroXings.updateMask(zeroXings), {'palette': 'FF0000'}, 'zero crossings')\n\n",
"_____no_output_____"
]
],
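The "fat minus skinny" Gaussian trick in the cell above can be reproduced without Earth Engine. This NumPy sketch builds the same difference-of-Gaussians (DoG) kernel; the radius, sigma, and magnitude values are copied from the cell, while the helper function itself is an assumption for illustration:

```python
import numpy as np

def gaussian_kernel(radius, sigma, magnitude=1.0):
    """Normalized 2-D Gaussian kernel scaled by `magnitude`."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return magnitude * k / k.sum()

# "fat" (sigma=3, magnitude=-1) plus "skinny" (sigma=1) gives the DoG kernel
dog = gaussian_kernel(3, 3, magnitude=-1.0) + gaussian_kernel(3, 1)
```

Since the two Gaussians each sum to -1 and +1, the DoG kernel sums to approximately zero: convolving it over a flat image region returns roughly zero, and only intensity changes (edges) produce strong responses, which is why its zero crossings trace edges.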
[
[
"## Display Earth Engine data layers ",
"_____no_output_____"
]
],
[
[
"Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.\nMap",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e75d57401973bb52a8697ea483085dea79b29f4c | 10,455 | ipynb | Jupyter Notebook | quests/serverlessml/03_tfdata/labs/input_pipeline.ipynb | juancaob/training-data-analyst | 85ad8c70849466bb87a6c6eb01cd0db883277d51 | [
"Apache-2.0"
] | 2 | 2021-12-29T10:49:00.000Z | 2021-12-31T13:42:35.000Z | quests/serverlessml/03_tfdata/labs/input_pipeline.ipynb | juancaob/training-data-analyst | 85ad8c70849466bb87a6c6eb01cd0db883277d51 | [
"Apache-2.0"
] | null | null | null | quests/serverlessml/03_tfdata/labs/input_pipeline.ipynb | juancaob/training-data-analyst | 85ad8c70849466bb87a6c6eb01cd0db883277d51 | [
"Apache-2.0"
] | null | null | null | 34.166667 | 416 | 0.59933 | [
[
[
"# Input pipeline into Keras\n\nIn this notebook, we will look at how to read large datasets, datasets that may not fit into memory, using TensorFlow. We can use the tf.data pipeline to feed data to Keras models that use a TensorFlow backend.\n\n## Learning Objectives\n1. Use tf.data to read CSV files\n2. Load the training data into memory\n3. Prune the data by removing columns\n4. Use tf.data to map features and labels\n5. Adjust the batch size of our dataset\n6. Shuffle the dataset to optimize for deep learning\n\nEach learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solution/input_pipeline.ipynb). \n\nLet's start off with the Python imports that we need.",
"_____no_output_____"
]
],
[
[
"%%bash\nexport PROJECT=$(gcloud config list project --format \"value(core.project)\")\necho \"Your current GCP Project Name is: \"$PROJECT",
"_____no_output_____"
],
[
"!pip install tensorflow==2.1.0 --user",
"_____no_output_____"
]
],
[
[
"Let's make sure we install the necessary version of tensorflow. After doing the pip install above, click __Restart the kernel__ on the notebook so that the Python environment picks up the new packages.",
"_____no_output_____"
]
],
[
[
"import os, json, math\nimport numpy as np\nimport shutil\nimport logging\n# SET TF ERROR LOG VERBOSITY\nlogging.getLogger(\"tensorflow\").setLevel(logging.ERROR)\nimport tensorflow as tf\n\nprint(\"TensorFlow version: \",tf.version.VERSION)\n\nPROJECT = \"your-gcp-project-here\" # REPLACE WITH YOUR PROJECT NAME\nREGION = \"us-central1\" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1\n\n# Do not change these\nos.environ[\"PROJECT\"] = PROJECT\nos.environ[\"REGION\"] = REGION\nos.environ[\"BUCKET\"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID\n\nif PROJECT == \"your-gcp-project-here\":\n print(\"Don't forget to update your PROJECT name! Currently:\", PROJECT)",
"_____no_output_____"
],
[
"# If you're not using TF 2.0+, let's enable eager execution\nif tf.version.VERSION < '2.0':\n print('Enabling v2 behavior and eager execution; if necessary restart kernel, and rerun notebook')\n tf.enable_v2_behavior()",
"_____no_output_____"
]
],
[
[
"## Locating the CSV files\n\nWe will start with the CSV files that we wrote out in the [first notebook](../01_explore/taxifare.iypnb) of this sequence. Just so you don't have to run the notebook, we saved a copy in ../data",
"_____no_output_____"
]
],
[
[
"!ls -l ../../data/*.csv",
"_____no_output_____"
]
],
[
[
"## Use tf.data to read the CSV files\n\nSee the documentation for [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset).\nIf you have TFRecords (which is recommended), use [make_batched_features_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_batched_features_dataset) instead.",
"_____no_output_____"
]
],
[
[
"CSV_COLUMNS = ['fare_amount', 'pickup_datetime',\n 'pickup_longitude', 'pickup_latitude', \n 'dropoff_longitude', 'dropoff_latitude', \n 'passenger_count', 'key']\nLABEL_COLUMN = 'fare_amount'\nDEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]",
"_____no_output_____"
],
[
"# load the training data\ndef load_dataset(pattern):\n# TODO 1: Use tf.data to read CSV files\n# Tip: Refer to: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/data/experimental/make_csv_dataset\n return tf.data. # complete this line\n\n# TODO 2: Load the training data into memory\ntempds = load_dataset('') # find and load the taxi-train* into memory\nprint(tempds)",
"_____no_output_____"
]
],
[
[
"Note that this is a prefetched dataset. If you loop over the dataset, you'll get the rows one-by-one. Let's convert each row into a Python dictionary:",
"_____no_output_____"
]
],
[
[
"# print a few of the rows\nfor n, data in enumerate(tempds):\n row_data = {k: v.numpy() for k,v in data.items()}\n print(n, row_data)\n if n > 2:\n break",
"_____no_output_____"
]
],
[
[
"What we really need is a dictionary of features + a label. So, we have to do two things to the above dictionary. (1) remove the unwanted column \"key\" and (2) keep the label separate from the features.",
"_____no_output_____"
]
],
[
[
"# get features, label\ndef features_and_labels(row_data):\n # TODO 3: Prune the data by removing column named 'key'\n for unwanted_col in ['pickup_datetime', '']: # specify column to remove \n row_data.pop(unwanted_col)\n label = row_data.pop(LABEL_COLUMN)\n return row_data, label # features, label\n\n# print a few rows to make it sure works\nfor n, data in enumerate(tempds):\n row_data = {k: v.numpy() for k,v in data.items()}\n features, label = features_and_labels(row_data)\n print(n, label, features)\n if n > 2:\n break",
"_____no_output_____"
]
],
[
[
"## Batching\n\nLet's do both (loading, features_label)\nin our load_dataset function, and also add batching.",
"_____no_output_____"
]
],
[
[
"def load_dataset(pattern, batch_size):\n return (\n \n # TODO 4: Use tf.data to map features and labels\n tf.data.experimental.make_csv_dataset() # complete parameters\n .map() # complete with name of features and labels\n )\n\n# TODO 5: Experiment by adjusting batch size\n# try changing the batch size and watch what happens.\ntempds = load_dataset('../../data/taxi-train*', batch_size=2)\n\n\nprint(list(tempds.take(3))) # truncate and print as a list ",
"_____no_output_____"
]
],
[
[
"## Shuffling\n\nWhen training a deep learning model in batches over multiple workers, it is helpful if we [shuffle the data](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/data/Dataset#shuffle). That way, different workers will be working on different parts of the input file at the same time, and so averaging gradients across workers will help. Also, during training, we will need to read the data indefinitely.",
"_____no_output_____"
]
],
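Note that `Dataset.shuffle(1000)` does not shuffle the whole file: each output element is drawn uniformly at random from a sliding buffer of 1000 elements, which is what makes shuffling cheap on datasets that don't fit in memory. A pure-Python sketch of that buffer behavior (illustrative only, not TensorFlow's actual implementation):

```python
import random

def buffered_shuffle(stream, buffer_size, seed=None):
    """Yield items in a randomized order using a fixed-size buffer."""
    rng = random.Random(seed)
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) > buffer_size:
            # Swap a random buffered element to the end, then emit it
            i = rng.randrange(len(buf))
            buf[i], buf[-1] = buf[-1], buf[i]
            yield buf.pop()
    rng.shuffle(buf)  # drain whatever remains in the buffer
    while buf:
        yield buf.pop()

out = list(buffered_shuffle(range(10), buffer_size=4, seed=0))
```

Every input element is emitted exactly once, but the ordering is only as random as the buffer allows; a buffer smaller than the file gives an approximate shuffle, which is usually good enough for training.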
[
[
"def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):\n dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)\n .map(features_and_labels) # features, label\n .cache())\n if mode == tf.estimator.ModeKeys.TRAIN:\n \n # TODO 6: Add dataset.shuffle 1000 to our dataset and have it repeat\n # Tip: Refer to https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/data/Dataset#shuffle\n dataset = dataset.shuffle(1000).repeat()\n dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE\n return dataset\n\ntempds = load_dataset('../../data/taxi-train*', 2, tf.estimator.ModeKeys.TRAIN)\nprint(list(tempds.take(1)))\ntempds = load_dataset('../../data/taxi-valid*', 2, tf.estimator.ModeKeys.EVAL)\nprint(list(tempds.take(1)))",
"_____no_output_____"
]
],
[
[
"In the next notebook, we will build the model using this input pipeline.",
"_____no_output_____"
],
[
"Copyright 2022 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e75d5be995862aa983a79b85e72474627dee2bb2 | 42,780 | ipynb | Jupyter Notebook | Task_2_KMeans_Clustering.ipynb | chetna-source/TSF-Tasks | cab7e00aa09f8f3a92c4c591e83a778137794794 | [
"MIT"
] | null | null | null | Task_2_KMeans_Clustering.ipynb | chetna-source/TSF-Tasks | cab7e00aa09f8f3a92c4c591e83a778137794794 | [
"MIT"
] | null | null | null | Task_2_KMeans_Clustering.ipynb | chetna-source/TSF-Tasks | cab7e00aa09f8f3a92c4c591e83a778137794794 | [
"MIT"
] | null | null | null | 162.045455 | 20,306 | 0.848761 | [
[
[
"### Name : Chetna Nihalani\n### Task 2: K- Means Clustering\n\n",
"_____no_output_____"
]
],
[
[
"# Importing the libraries\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom sklearn import datasets\n\n# Load the iris dataset\niris = datasets.load_iris()\niris_df = pd.DataFrame(iris.data, columns = iris.feature_names)\niris_df.head() # See the first 5 rows",
"_____no_output_____"
],
[
"# Finding the optimum number of clusters for k-means classification\n\nx = iris_df.iloc[:, [0, 1, 2, 3]].values\n\nfrom sklearn.cluster import KMeans\nwcss = []\n\nfor i in range(1, 11):\n kmeans = KMeans(n_clusters = i, init = 'k-means++', \n max_iter = 300, n_init = 10, random_state = 0)\n kmeans.fit(x)\n wcss.append(kmeans.inertia_)\n \n# Plotting the results onto a line graph, \n# `allowing us to observe 'The elbow'\nplt.plot(range(1, 11), wcss)\nplt.title('The elbow method')\nplt.xlabel('Number of clusters')\nplt.ylabel('WCSS') # Within cluster sum of squares\nplt.show()",
"_____no_output_____"
]
],
[
[
"We clearly see why it is called 'The elbow method' from the above graph, the optimum clusters is where the elbow occurs. This is when the within cluster sum of squares (WCSS) doesn't decrease significantly with every iteration.\n\nFrom this we choose the number of clusters as ** '3**'.",
"_____no_output_____"
]
],
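The WCSS quantity that the elbow plot tracks (exposed by scikit-learn as `inertia_`) is just the summed squared distance from each point to its assigned cluster center. A small NumPy check with made-up data:

```python
import numpy as np

def wcss(X, labels, centers):
    """Within-cluster sum of squares for given assignments and centers."""
    return sum(((X[labels == k] - c) ** 2).sum()
               for k, c in enumerate(centers))

# Four points, two clusters; each point sits 1 unit from its center
X = np.array([[0.0, 0.0], [0.0, 2.0], [10.0, 0.0], [10.0, 2.0]])
labels = np.array([0, 0, 1, 1])
centers = np.array([[0.0, 1.0], [10.0, 1.0]])
total = wcss(X, labels, centers)
```

Adding more clusters can only hold this total steady or shrink it, which is why the curve always decreases and the "elbow" (the point of sharply diminishing returns) is the interesting feature.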
[
[
"# Applying kmeans to the dataset / Creating the kmeans classifier\nkmeans = KMeans(n_clusters = 3, init = 'k-means++',\n max_iter = 300, n_init = 10, random_state = 0)\ny_kmeans = kmeans.fit_predict(x)",
"_____no_output_____"
],
[
"# Visualising the clusters - On the first two columns\nplt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1], \n s = 100, c = 'red', label = 'Iris-setosa')\nplt.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1], \n s = 100, c = 'blue', label = 'Iris-versicolour')\nplt.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1],\n s = 100, c = 'green', label = 'Iris-virginica')\n\n# Plotting the centroids of the clusters\nplt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:,1], \n s = 100, c = 'yellow', label = 'Centroids')\n\nplt.legend()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e75d605e7a9c5fc1595187a1ce6ced7c61ae1503 | 17,212 | ipynb | Jupyter Notebook | Correlations.ipynb | adriannebradford/INST_314_examples | e5d0397206fb8a8f9ae5981b015986bbd980ab53 | [
"BSD-3-Clause"
] | null | null | null | Correlations.ipynb | adriannebradford/INST_314_examples | e5d0397206fb8a8f9ae5981b015986bbd980ab53 | [
"BSD-3-Clause"
] | null | null | null | Correlations.ipynb | adriannebradford/INST_314_examples | e5d0397206fb8a8f9ae5981b015986bbd980ab53 | [
"BSD-3-Clause"
] | null | null | null | 38.419643 | 643 | 0.632233 | [
[
[
"# Correlations\n\nWe're now going to look at the relationships between two __numerical__ variables. This will allow us to make this final leap from ANOVA to linear regression. \n\nCorrelations are a numerical representation of the strength of the relationship between two numerical variables - and in a way reflects the ability to predict the second variable given the value of the first variable. \n\n<img src=\"images/horse.png\" width=\"300\" height=\"400\">\n\n\n\nThe bivariate (relationship between __two__ variables) correlation tells us:\n- If the association exists\n- The strength of the association\n- The direction of the association\n\nCorrelations specifically tell us about *__linear__* relationships between two variables. They are represented by the lower-case letter $r$, and range from -1 to 1, where 0 is no correlation (or association) between the two variables, -1 as the strongest possible _negative_ correlation and 1 as the strongest possible _positive_ correlation.\n\nAs you might have guessed, these $r$ values are related to our r-squared ($r^2$) values we've looked at previously. $r$ is the \"coefficient of correlation,\" and when we square that value we get $r^2$ the \"coefficient of determination.\" Keep in mind that while these values are related, the interpretations are different.\n\nSo, what are linear relationships? What is a positive vs. a negative correlation? Let's look at some scatterplots!",
"_____no_output_____"
]
],
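This notebook computes correlations in R, but the relationship described above between $r$ (coefficient of correlation) and $r^2$ (coefficient of determination) is easy to verify numerically; here is a NumPy sketch with made-up, nearly linear data (Python is used here only for consistency with the other examples):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])  # strong positive linear trend

r = np.corrcoef(x, y)[0, 1]  # coefficient of correlation, in [-1, 1]
r_squared = r ** 2           # coefficient of determination
```

An $r$ near +1 here reflects the tight positive linear trend in the toy data; squaring it gives the familiar $r^2$, but remember the two values have different interpretations.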
[
[
"## loading some libraries!\nlibrary(tidyverse) ## all of our normal functions for working with data\nlibrary(ggcorrplot) ## make pretty corrplot\nlibrary(GGally) ## scatterplot matrix function\nlibrary(gtrendsR) ## Google Trends API\n\noptions(repr.plot.width=4, repr.plot.height=3) ## set options for plot size within the notebook -\n# this is only for jupyter notebooks, you can disregard this.",
"_____no_output_____"
],
[
"## loading up our old friend the mpg dataset. \n## remember this is a dataset that is \"built-in\" to R\n## this is not how you load data from elsewhere\ndata(mpg) #load built-in dataset\nhead(mpg) #peek at the first 6 observations\nsummary(mpg) #look at summary of variables",
"_____no_output_____"
],
[
"?mpg ## obtain documentation for r functions or built-in datasets using ?",
"_____no_output_____"
]
],
[
[
"First we're going to plot hwy mpg vs. cty mpg. What type of relationship would we expect between these variables?",
"_____no_output_____"
]
],
[
[
"mpg %>% ggplot (aes(x = hwy, y = cty)) ## the variables we want to plot\n + geom_point() ## the type of plot we want",
"_____no_output_____"
]
],
[
[
"This graph shows an example of positive correlation - see how all of the dots \"line up\" and show a general trend that as the x variable increases, the y variable increases. Because they are both increasing together, the correlation is positive.\n\nRemember the correlation value is an estimation of the _linear_ relationship between the two variables - and this is a clump of dots. We can fit a \"best fit\" line to this graph to show that estimated linear relationship.",
"_____no_output_____"
]
],
[
[
"mpg %>% ggplot(aes(x=hwy, y=cty)) + ## the variables we want to plot\n geom_point()+ ## the type of plot we want\n geom_smooth(method=lm, se=FALSE) ## adding that \"best fit\" line using a linear model",
"_____no_output_____"
]
],
[
[
"This is our \"best fit\" line. These linear relationships are going to form the basis of Linear Regression that we will get into next week, but right now we're going to focus simply on the correlation between these two variables. Note that the distance of the dots from the line affects the strength of the relationship - the closer to the line that the dots are clumped, the stronger the relationship. \n\nLet's look at the relationship between hwy and another numerical variable - displ - instead. displ is the engine displacement, in litres.",
"_____no_output_____"
]
],
[
[
"mpg %>% ggplot(aes(x=hwy, y=displ)) + \n geom_point()+\n geom_smooth(method=lm, se=FALSE)",
"_____no_output_____"
]
],
[
[
"Now what we see is an example of a negative correlation, because the line starts at the top left corner and goes down toward the bottom right. So as hwy increases, displ decreases. These dots seem to be more spread out from the line, so I would guess that the strength of the relationship between hwy and displ (the correlation between the variables) is lower than that between hwy and cty.",
"_____no_output_____"
]
],
[
[
"mpg %>% ggplot(aes(x=hwy, y=cyl)) + \n geom_point()+\n geom_smooth(method=lm, se=FALSE)",
"_____no_output_____"
]
],
[
[
"You need to make sure your variables are truely numeric, and not simply ordinal. Because cylinder can only be 4, 5, 6, or 8, and no values in between those - we see that all of the values line up at those values of y. The line indicates that there may be a general trend of negative association between these variables, but since cyl is ordinal it would not be best suited for a correlation analysis (however we can do an ANOVA using cyl as a categorical grouping variable, and hwy as the numerical DV). There are non-parametric versions of the correlation coefficient that could also be used, but we will not cover in this course.\n\nThere are various statistical tests that are most appropriate for certain types of data. Once you have \"stocked\" your statistical \"toolbelt\" you can do an inference test on any combination of data and variable types, by using the one that is most appropriate for your data types and your RQ.\n\nWe can also quickly inspect the correlations among all of the numerical variables in a dataset. This will be more important when we get into modeling and visualize our data as a preparation for creating our linear models.",
"_____no_output_____"
]
],
[
[
"corr <- round(cor(mtcars), 1) ## obtain all of the correlations within pairs of all the num vars\nggcorrplot(corr) ## plot those correlations",
"_____no_output_____"
]
],
[
[
"This (above) is a heatmap of the strength and direction of the correlations between these variables where very blue is -1 and very read is +1.\n\nWe can also create a scatterplot matrix where a number of variables are compared in scatterplots (like our examples above) in a grid.",
"_____no_output_____"
]
],
[
[
"options(repr.plot.width=8, repr.plot.height=4) ## plot size options for Jupyter notebook ONLY\npairs(iris[,1:4]) ## plot the correlations between pairs of variables in columns 1 through 4 of iris dataset\n",
"_____no_output_____"
]
],
[
[
"We can also use a categorical variable to color the dots in the grid by groups, to see how the relationship between the two numerical variables might be associated with an additional categorical variable....... more about this to come soon.",
"_____no_output_____"
]
],
[
[
"options(repr.plot.width=8, repr.plot.height=4)\npairs(iris[,1:4], col=iris$Species) ## col= adds color based on the grouping variable specified",
"_____no_output_____"
]
],
[
[
"We also have situations when there is no correlation between the two variables: <BR>\n<img src=\"images/r2model.png\" width=\"500\" >\n \nTo summarize:\n<br>\n<img src=\"images/strength.PNG\" width=\"1000\" >\n\nThere are also cases where variables have obvious associations, but they are not linear. They have a __*correlation*__ of 0, but they are __*associated*__. We have to be careful how we use the word correlation in writing our statistical results.\n\n<img src=\"images/curvy.PNG\" width=\"600\" >",
"_____no_output_____"
],
[
"## Assumptions\nThe assumptions are pretty basic, you need two variables, both numeric. \n\n_IF_ you want to do significance testing, they would need to be normally distributed.\n\n## Significance Testing?\nCorrelation coefficients by themselves are interpretable as the size of the relationship between two variables. However, there is also a significance test we can conduct on a correlation which will generate a t-score we can compare to the t-distribution to obtain a p-value.\n\nThe hypotheses for this type of test is pretty basic - is the correlation coefficient significantly different from 0? This tells us nothing about the relative strength, only if a significant effect exists (or not).\n\n#### Non-directional (two-tailed):\n$H_0: r = 0$ <BR>\n$H_A: r \\neq 0$ <BR>\n\n#### Directional (one-tailed):\n$H_0: r = 0$ <BR>\n$H_A: r > 0$ <BR>\n __OR__ <BR>\n $H_A: r < 0$ <BR> \n \n## Reminder: Correlation does not equal Causation\n<img src=\"images/venti.jpg\" width=\"300\" >\n\nCorrelation _could_ be evidence of potential causality, but:\n- there could be a third variable that is actually causing the effect (ice cream sales -> rise in crime)\n\n- we don't know which direction the effect occurs - does X predict Y or does Y predict X?\n\n## Calculating Correlation:\n\n## $r = \\frac{cov_{xy}}{s_xs_y} = \\frac{\\sum{(x-\\bar{x})(y-\\bar{y})}}{\\sqrt{\\sum{(x-\\bar{x})^2}\\sum{(y-\\bar{y})^2}}}$\n\n\nLet's look at some examples now. For fun, we're going to connect to the Google Trends API using the R package `gtrendsR` and get data about certain keyword searches over a period of time.",
"_____no_output_____"
]
],
[
[
"options(repr.plot.width=10, repr.plot.height=4) ## set options for plot size within the notebook -\n# this is only for jupyter notebooks, you can disregard this.\n\n## call the google trends api and return hits info for keyword searches\n## hits are scaled to values between 0 - 100\ntrends <- gtrends(c(\"statistics\", \"pugs\"), geo = \"US-MD\", time = \"2016-01-01 2019-09-30\", low_search_volume = T)\nplot(trends)",
"_____no_output_____"
]
],
[
[
"Keep in mind that to be correlated the variables __*do not*__ have to have the same magnitude - they just have to trend together.",
"_____no_output_____"
]
],
[
[
"## extract the interest_over_time df from the gtrends object\n\ntrend_time <- as_tibble(trends$interest_over_time)\nglimpse(trend_time)",
"_____no_output_____"
],
[
"## look at basic summary statistics\n\ntrend_time %>%\n group_by(keyword) %>%\n summarise(mean(hits), median(hits), var(hits))",
"_____no_output_____"
]
],
[
[
"To use this data we need to pivot it so that our \"long\" format is in \"wide\" format - so that we have hits for statistics and hits for pugs in their own columns.",
"_____no_output_____"
]
],
[
[
"trend_wide <- \n trend_time %>%\n spread(key = keyword, value = hits)\nglimpse(trend_wide)",
"_____no_output_____"
]
],
[
[
"Let's look at a scatterplot.",
"_____no_output_____"
]
],
[
[
"trend_wide %>% \n ggplot (aes(x = statistics, y = pugs)) + \n geom_point() +\n geom_smooth(method=lm, se=FALSE)",
"_____no_output_____"
]
],
[
[
"And finally, let's calculate the correlation between these two variables. For this we will use the function (base R) cor(). The arguments for cor are simple just specify x and y (your two variables to compare).",
"_____no_output_____"
]
],
[
[
"cor(trend_wide$statistics, trend_wide$pugs) \n# correlation between hits for statistics and hits for pugs\n",
"_____no_output_____"
]
],
[
[
"This correlation is not as low as we may have expected. It is a \"weak\" correlation, but is it significantly different from a zero correlation? For that we need to use cor.test() which performs the hypothesis test as well.",
"_____no_output_____"
]
],
[
[
"cor.test(trend_wide$statistics, trend_wide$pugs) ## correlation with CI and t-test",
"_____no_output_____"
]
],
[
[
"So our p-value is below 0.05, so the correlation is statistically significant from zero, and therefore there is at least some correlation. However, the confidence interval for the correlation is 0.09 to 0.36, so the true population correlation may be anywhere between 0.09 (very very low) or 0.36 (small/mediumish).\n\n__NOTE: the values in the output are subject to change as Google Trends samples from the overall data and only returns a small sample of their massive dataset. This is a real-life example of all of that sampling variance we've been talking about.__\n\n\n### Reporting a correlation\nHow would we report this formally?\n\nKeyword searches in Google for \"statistics\" has a small correlation with keyword searches for \"pugs\" ($r = 0.24$) over the time period from 2016 to today within the state of Maryland. While the correlation is significant (p < 0.001), the magnitude of the correlation is small.",
"_____no_output_____"
],
[
"## R-squared\nRemember, our value of the proportion of variance explained is literally this correlation coefficient, $r$, squared - $r^2$. Let's take a look:",
"_____no_output_____"
]
],
[
[
"cor(trend_wide$statistics, trend_wide$pugs)^2 ## calculate the correlation, and square it - ^2",
"_____no_output_____"
]
],
[
[
"The interpretation of this is that this % of the variance in the hits for statistics is explained in the hits for pugs. So there are not many people out there like me that search for statistics and pugs.",
"_____no_output_____"
],
[
"## Fun stuff related to correlations:\n- <a href=\"http://guessthecorrelation.com/\">Guess the Correlation game</a>\n- <a href=\"https://www.tylervigen.com/spurious-correlations\"> Spurious Correlations </a>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e75d6e37a4088d55aadd10e1130e117294e92c6a | 13,148 | ipynb | Jupyter Notebook | Week 14 - Choropleths/Week 14 - Assignment.ipynb | cuinfoscience/INFO3402-Spring2022 | 916d08bf497ee74a7a71ee09c282992698eaa68c | [
"MIT"
] | null | null | null | Week 14 - Choropleths/Week 14 - Assignment.ipynb | cuinfoscience/INFO3402-Spring2022 | 916d08bf497ee74a7a71ee09c282992698eaa68c | [
"MIT"
] | null | null | null | Week 14 - Choropleths/Week 14 - Assignment.ipynb | cuinfoscience/INFO3402-Spring2022 | 916d08bf497ee74a7a71ee09c282992698eaa68c | [
"MIT"
] | null | null | null | 30.018265 | 362 | 0.603057 | [
[
[
"# INFO 3402 – Week 14: Assignment - Solutions\n\n[Brian C. Keegan, Ph.D.](http://brianckeegan.com/) \n[Assistant Professor, Department of Information Science](https://www.colorado.edu/cmci/people/information-science/brian-c-keegan) \nUniversity of Colorado Boulder \n\nCopyright and distributed under an [MIT License](https://opensource.org/licenses/MIT)",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport geopandas as gpd\nimport numpy as np\n\n%matplotlib inline\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"## Question 01: Clean Colorado precinct results (15 pts)\n\nDownload the 2020 \"General Election precinct-level results in Excel\" from the Colorado Secretary of State: https://www.sos.state.co.us/pubs/elections/Results/Archives.html\n\nLoad the results and store as `co_results_df` making sure the \"Precinct\" column is read in as a string. (3 pts)",
"_____no_output_____"
],
[
"Make two boolean indexed filters variables using `co_results_df` to:\n\n1. The \"President/Vice President\" for the \"Office\" column\n2. The Democratic and Republican parties in the \"Party\" column\n\nUse the intersection of the two filters and store as `pres_results_df`. Print the shape and first five rows of `pres_results_df`. (3 pts)",
"_____no_output_____"
],
[
"Use `pres_results_df` to make a pivot table called `pres_precincts_df` with the \"Precinct\" column as an index, the \"Candidate\" column as columns, and the \"Candidate Votes\" as values. Print the shape and first five rows of `pres_precincts_df`. (2 pts)",
"_____no_output_____"
],
[
"Filter `co_results_df` to the \"President/Vice President\" \"Office\" (I would strongly encourage you to re-use the filter variable you should've defined two steps back) and then perform a groupby-aggreation to compute the total \"Candidate Votes\" per \"Precinct\" as `total_precinct_votes`. (2 pts)",
"_____no_output_____"
],
[
"Perform a left join/merge with with `pres_precincts_df` as the left DataFrame and `total_precinct_votes` as the right DataFrame ans store as `precincts_df`. Print the shape and the first five rows of `precincts_df`. (2 pts)",
"_____no_output_____"
],
[
"Rename the columns of `precincts_df` to be \"Republican\", \"Democratic\", and \"Total\". Compute two new columns in `precincts_df` called \"Dem_Share\" and \"Rep_Shape\" that are the fraction of Democratic and Republican Votes (respectively) out of the \"Total\" votes in the precinct. Print the shape and the first five rows of `precincts_df`. (3 pts)",
"_____no_output_____"
],
[
"## Question 02: Boulder County precincts shapefile (15 pts)\n\nDownload and unzip the Boulder County precinct shapefile: https://opendata-bouldercounty.hub.arcgis.com/datasets/precincts\n\nUse GeoPandas's `read_file` function to load the \"Precincts.shp\" shapefile as `boco_precincts_gdf` with the \"PRECINCT\" column as a string. Print the shape and show the first five rows of `boco_precincts_gdf`. (2 pts)",
"_____no_output_____"
],
[
"Use the plot method for GeoPandas ([docs](https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoDataFrame.plot.html)) on `boco_precincts_gdf` and make the \"color\" white, \"edgecolor\" black, and remove the visibility of the ticklabels on the xaxis and yaxis. (3 pts)",
"_____no_output_____"
],
[
"Check to make sure there's some overlap between the \"PRECINCT\" column in `boco_precincts_gdf` and the index in `precincts_df`. Print the length of the intersection of these two sets. (1 pts)",
"_____no_output_____"
],
[
"Perform a left join/merge with with `boco_precincts_gdf` as the left DataFrame and `precincts_df` as the right DataFrame and store as `boulder_precinct_results_df` using the precinct columns/indices as a key. Print the shape and the first five rows of `boulder_precinct_results_df`. (3 pts)",
"_____no_output_____"
],
[
"Make a choropleth of `boulder_precinct_results_df` with \"Rep_Share\" as the \"column\" to visualize. Set the \"vmin\" and \"vmax\" to cover the range [0,1], the \"cmap\" to \"bwr\", the \"edgecolor\" to black, include a legend, and remove the ticklabels. (4 pts)",
"_____no_output_____"
],
[
"Write a few sentences about the patterns you see in the precinct-level results about the 2020 presidential election. (2 pts)",
"_____no_output_____"
],
[
"**Extra credit**. Make a scatterplot of the area of the precincts against the \"Dem_Share\". Keeping in mind that the precinct areas are strongly skewed and need to be log-scaled, report the correlation coefficient, use `linregress` to estimate a model, and interpret the slope parameter. Write a few sentences about what this relationship implies. (4 pts)",
"_____no_output_____"
],
[
"## Question 03: Visualize county-level results (25 pts)\n\nGo to the State of Colorado's State Demography Office and download and unzip the \"Counties\" data under \"2020 Census Statistical Geography\": https://demography.dola.colorado.gov/assets/html/gis.html\n\nUse GeoPandas's `read_file` function to load the \"County_Data_2020.shp\" shapefile as `counties_gdf`. Print the shape and show the first five rows of `counties_gdf`. (2 pts)",
"_____no_output_____"
],
[
"Use the plot method for GeoPandas ([docs](https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoDataFrame.plot.html)) on `counties_gdf` and make the \"color\" white, \"edgecolor\" black, and remove the visibility of the ticklabels on the xaxis and yaxis. (3 pts)",
"_____no_output_____"
],
[
"Use `pres_results_df` to make a pivot table with \"County\" as an index, \"Candidate\" as columns, \"Candidate Votes\" as values, and a sum aggfunc and store as `pres_county_df`. (2 pts)",
"_____no_output_____"
],
[
"Filter `co_results_df` to the \"President/Vice President\" \"Office\" and then perform a groupby-aggreation to compute the total \"Candidate Votes\" per \"County\". Store the aggregated DataFrame as `total_county_votes`. Show the first five rows of `total_county_votes`. (2 pts)",
"_____no_output_____"
],
[
"Merge `pres_county_df` (left) and `counties_df` (right) together as `county_total_df`. Print the shape and show the first five rows of `counties_df`. (3 pts)",
"_____no_output_____"
],
[
"Rename the columns of `counties_df` to be \"Republican\", \"Democratic\", and \"Total\". Compute two new columns in `counties_df` called \"Dem_Share\" and \"Rep_Shape\" that are the fraction of Democratic and Republican Votes (respectively) out of the \"Total\" votes in the precinct. Print the shape and the first five rows of `counties_df`. (3 pts)",
"_____no_output_____"
],
[
"Perform a left join/merge with with `counties_gdf` as the left DataFrame and `counties_df` as the right DataFrame and store as `counties_results_df` using the \"NAME20\" column (left) and index (right). Print the shape and the first five rows of `counties_results_df`. (3 pts)",
"_____no_output_____"
],
[
"Make a choropleth of `counties_results_df` with \"Rep_Share\" as the \"column\" to visualize. Set the \"vmin\" and \"vmax\" to cover the range [0,1], the \"cmap\" to \"bwr\", the \"edgecolor\" to black, include a legend, and remove the ticklabels. (4 pts)",
"_____no_output_____"
],
[
"Write a few sentences about the patterns you see in the county-level results about the 2020 presidential election. (3 pts)",
"_____no_output_____"
],
[
"**Extra credit**. Annotate each county with its name and \"Rep_Share\" at its \"representative_point\" ([docs](https://shapely.readthedocs.io/en/stable/manual.html)). ([Hint](https://stackoverflow.com/a/38902492/1574687)) (3 pts)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e75d76aacbccd33919daba582b3cc5daba2f2c77 | 421,418 | ipynb | Jupyter Notebook | MLP_S_3_compart.ipynb | ryanayf/KidNeYronal | 369d23e8d3a144202b8aa840d427ef67d1253feb | [
"MIT"
] | null | null | null | MLP_S_3_compart.ipynb | ryanayf/KidNeYronal | 369d23e8d3a144202b8aa840d427ef67d1253feb | [
"MIT"
] | null | null | null | MLP_S_3_compart.ipynb | ryanayf/KidNeYronal | 369d23e8d3a144202b8aa840d427ef67d1253feb | [
"MIT"
] | null | null | null | 161.400996 | 26,758 | 0.763736 | [
[
[
"# Dependencies and Data Loading",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nimport time\nimport numpy as np\nimport random\nimport pandas as pd\nimport scipy.io as sio\nfrom collections import Counter\nfrom itertools import product\nfrom scipy.io import loadmat\nimport tensorflow as tf\nfrom keras.utils import np_utils\nfrom tensorflow.keras import optimizers,backend\nfrom keras.models import Sequential\nfrom keras.layers import Dropout, Activation, Dense, Flatten, Lambda, concatenate\nfrom keras.layers.convolutional import Convolution1D,MaxPooling1D\nfrom keras.callbacks import ModelCheckpoint\nfrom matplotlib import pyplot as plt\nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"%tensorflow_version 2.x\ndevice_name = tf.test.gpu_device_name()\nif device_name != '/device:GPU:0':\n raise SystemError('GPU device not found')\nprint('Found GPU at: {}'.format(device_name))",
"Found GPU at: /device:GPU:0\n"
],
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Mounted at /content/drive\n"
],
[
"# load training and ground truth data \nfrom scipy.io import loadmat\n\n# synthetic training data\nfilename = '/content/drive/My Drive/Manuscript_EJNMMI_2021/Github/Traindata_synthetic.mat'\ninput = loadmat(filename)\ninput_spectra = input['spectra_syn_train']\nlabels = input['pH_labels_syn_train']\n",
"_____no_output_____"
],
[
"# load testdata - synthetic spectra\nfilename = '/content/drive/My Drive/Manuscript_EJNMMI_2021/Github/Testdata_synthetic.mat'\ntest_aug = loadmat(filename)\ntest_input_spectra_aug = test_aug['spectra_syn_test']\ntest_labels_aug = test_aug['pH_labels_syn_test']\n\n# load testdata - kidney\nfilename = '/content/drive/My Drive/Manuscript_EJNMMI_2021/Github/Testdata_kidney.mat'\ntest = loadmat(filename)\ntest_input_spectra = test['spectra_kidney_test']\ntest_labels = test['pH_labels_kidney_test']",
"_____no_output_____"
],
[
"display(test_input_spectra_aug.shape)\ndisplay(test_labels_aug.shape)\ndisplay(test_input_spectra.shape)\ndisplay(test_labels.shape)",
"_____no_output_____"
]
],
[
[
"# Preprocessing",
"_____no_output_____"
]
],
[
[
"#train-val split\nX_train, X_val, y_train, y_val = train_test_split(input_spectra, labels, test_size=0.15, random_state=13)\n\nX_train = np.array(X_train).astype('float32')\nX_train = X_train.reshape(X_train.shape + (1,))\nX_val = np.array(X_val).astype('float32')\nX_val = X_val.reshape(X_val.shape + (1,))\n\nX_test = np.array(test_input_spectra).astype('float32')\nX_test = X_test.reshape(X_test.shape + (1,))\nX_test_aug = np.array(test_input_spectra_aug).astype('float32')\nX_test_aug = X_test_aug.reshape(X_test_aug.shape + (1,))\n\ny_train = np.array(y_train)\ny_val = np.array(y_val)\ny_test = np.array(test_labels)\ny_test_aug = np.array(test_labels_aug)\n\ndisplay(X_train.shape)\ndisplay(y_train.shape)\n\nprint(\"Total of \"+str(len(X_train))+\" training samples.\")\nprint(\"Total of \"+str(len(X_val))+\" validation samples.\")\n",
"_____no_output_____"
],
[
"# print(\"Total of \"+str(num_classes)+\" classes.\")\ndisplayind = 2\n\nprint(\"Total of \"+str(len(X_train))+\" training samples.\")\nprint(\"Total of \"+str(len(X_val))+\" validation samples.\")\nplt.plot(X_train[displayind],label='input')\nplt.plot(X_val[displayind],label='ground truth')\nplt.legend(['Test', 'Train'], loc='upper right')\nplt.show()",
"Total of 8500 training samples.\nTotal of 1500 validation samples.\n"
]
],
[
[
"# Model Architecture",
"_____no_output_____"
]
],
[
[
"def mapping_to_target_range( x, target_min=6.32, target_max=7.44) :\n x02 = backend.tanh(x) + 1 # x in range(0,2)\n scale = ( target_max-target_min )/2.\n return x02 * scale + target_min",
"_____no_output_____"
],
[
"# default random initialization for weights\nbackend.clear_session()\n\nmodel = Sequential()\nactivation = 'relu'\nmodel.add(Dense(16, input_shape=(1024,1), activation='relu'))\nmodel.add(MaxPooling1D())\n\nmodel.add(Dense(16, activation='relu'))\nmodel.add(MaxPooling1D())\n\nmodel.add(Dense(32, activation='relu'))\nmodel.add(MaxPooling1D())\n\nmodel.add(Dense(32, activation='relu'))\nmodel.add(MaxPooling1D())\nmodel.add(Dropout(0.10))\n\nmodel.add(Flatten())\nmodel.add(Dense(3,activation=mapping_to_target_range))\nnadam = optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)\nmodel.compile(loss='mean_squared_error', optimizer=nadam)\n\nprint(model.summary())\nprint(\"MLP Model created.\")",
"Model: \"sequential\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n dense (Dense) (None, 1024, 16) 32 \n \n max_pooling1d (MaxPooling1D (None, 512, 16) 0 \n ) \n \n dense_1 (Dense) (None, 512, 16) 272 \n \n max_pooling1d_1 (MaxPooling (None, 256, 16) 0 \n 1D) \n \n dense_2 (Dense) (None, 256, 32) 544 \n \n max_pooling1d_2 (MaxPooling (None, 128, 32) 0 \n 1D) \n \n dense_3 (Dense) (None, 128, 32) 1056 \n \n max_pooling1d_3 (MaxPooling (None, 64, 32) 0 \n 1D) \n \n dropout (Dropout) (None, 64, 32) 0 \n \n flatten (Flatten) (None, 2048) 0 \n \n dense_4 (Dense) (None, 3) 6147 \n \n=================================================================\nTotal params: 8,051\nTrainable params: 8,051\nNon-trainable params: 0\n_________________________________________________________________\nNone\nMLP Model created.\n"
]
],
[
[
"# Training",
"_____no_output_____"
]
],
[
[
"#Params\nepochs = 400\nbatch_size = 200\n\nbest_model_file = '/content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5'\nstart = time.time()\n\nbest_model = ModelCheckpoint(best_model_file, monitor='loss', verbose = 1, save_best_only=True, save_weights_only=False)\nhist = model.fit(X_train,\n y_train,\n validation_data=(X_val, y_val), \n epochs=epochs,\n batch_size=batch_size,\n callbacks = [best_model],\n shuffle = True,\n verbose=1)\n\nprint(\"training time: \",time.time()-start)\nprint(\"done\")",
"Epoch 1/400\n43/43 [==============================] - ETA: 0s - loss: 0.0314\nEpoch 00001: loss improved from inf to 0.03140, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 34s 711ms/step - loss: 0.0314 - val_loss: 0.0018\nEpoch 2/400\n43/43 [==============================] - ETA: 0s - loss: 0.0018\nEpoch 00002: loss improved from 0.03140 to 0.00184, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 23ms/step - loss: 0.0018 - val_loss: 0.0016\nEpoch 3/400\n43/43 [==============================] - ETA: 0s - loss: 0.0017\nEpoch 00003: loss improved from 0.00184 to 0.00173, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 22ms/step - loss: 0.0017 - val_loss: 0.0016\nEpoch 4/400\n43/43 [==============================] - ETA: 0s - loss: 0.0016\nEpoch 00004: loss improved from 0.00173 to 0.00165, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 22ms/step - loss: 0.0016 - val_loss: 0.0015\nEpoch 5/400\n43/43 [==============================] - ETA: 0s - loss: 0.0016\nEpoch 00005: loss improved from 0.00165 to 0.00161, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 22ms/step - loss: 0.0016 - val_loss: 0.0015\nEpoch 6/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0016\nEpoch 00006: loss improved from 0.00161 to 0.00157, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 22ms/step - loss: 0.0016 - val_loss: 0.0015\nEpoch 7/400\n40/43 [==========================>...] 
- ETA: 0s - loss: 0.0016\nEpoch 00007: loss improved from 0.00157 to 0.00155, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 0.0016 - val_loss: 0.0015\nEpoch 8/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0015\nEpoch 00008: loss improved from 0.00155 to 0.00154, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 0.0015 - val_loss: 0.0015\nEpoch 9/400\n42/43 [============================>.] - ETA: 0s - loss: 0.0015\nEpoch 00009: loss improved from 0.00154 to 0.00152, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 0.0015 - val_loss: 0.0015\nEpoch 10/400\n41/43 [===========================>..] - ETA: 0s - loss: 0.0015\nEpoch 00010: loss did not improve from 0.00152\n43/43 [==============================] - 1s 19ms/step - loss: 0.0015 - val_loss: 0.0015\nEpoch 11/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0015\nEpoch 00011: loss improved from 0.00152 to 0.00151, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 0.0015 - val_loss: 0.0014\nEpoch 12/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0015\nEpoch 00012: loss improved from 0.00151 to 0.00151, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 0.0015 - val_loss: 0.0014\nEpoch 13/400\n42/43 [============================>.] - ETA: 0s - loss: 0.0015\nEpoch 00013: loss improved from 0.00151 to 0.00150, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 0.0015 - val_loss: 0.0014\nEpoch 14/400\n42/43 [============================>.] 
- ETA: 0s - loss: 0.0015\nEpoch 00014: loss did not improve from 0.00150\n43/43 [==============================] - 1s 19ms/step - loss: 0.0015 - val_loss: 0.0015\nEpoch 15/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0015\nEpoch 00015: loss improved from 0.00150 to 0.00150, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0015 - val_loss: 0.0014\nEpoch 16/400\n42/43 [============================>.] - ETA: 0s - loss: 0.0015\nEpoch 00016: loss did not improve from 0.00150\n43/43 [==============================] - 1s 19ms/step - loss: 0.0015 - val_loss: 0.0015\nEpoch 17/400\n42/43 [============================>.] - ETA: 0s - loss: 0.0015\nEpoch 00017: loss improved from 0.00150 to 0.00149, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 0.0015 - val_loss: 0.0014\nEpoch 18/400\n41/43 [===========================>..] - ETA: 0s - loss: 0.0015\nEpoch 00018: loss improved from 0.00149 to 0.00148, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 0.0015 - val_loss: 0.0014\nEpoch 19/400\n43/43 [==============================] - ETA: 0s - loss: 0.0015\nEpoch 00019: loss improved from 0.00148 to 0.00147, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0015 - val_loss: 0.0014\nEpoch 20/400\n41/43 [===========================>..] - ETA: 0s - loss: 0.0015\nEpoch 00020: loss improved from 0.00147 to 0.00146, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0015 - val_loss: 0.0014\nEpoch 21/400\n40/43 [==========================>...] 
- ETA: 0s - loss: 0.0015\nEpoch 00021: loss did not improve from 0.00146\n43/43 [==============================] - 1s 19ms/step - loss: 0.0015 - val_loss: 0.0014\nEpoch 22/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0015\nEpoch 00022: loss improved from 0.00146 to 0.00145, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0015 - val_loss: 0.0014\nEpoch 23/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0015\nEpoch 00023: loss did not improve from 0.00145\n43/43 [==============================] - 1s 18ms/step - loss: 0.0015 - val_loss: 0.0014\nEpoch 24/400\n41/43 [===========================>..] - ETA: 0s - loss: 0.0015\nEpoch 00024: loss did not improve from 0.00145\n43/43 [==============================] - 1s 19ms/step - loss: 0.0015 - val_loss: 0.0014\nEpoch 25/400\n43/43 [==============================] - ETA: 0s - loss: 0.0014\nEpoch 00025: loss improved from 0.00145 to 0.00143, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 0.0014 - val_loss: 0.0014\nEpoch 26/400\n41/43 [===========================>..] - ETA: 0s - loss: 0.0014\nEpoch 00026: loss improved from 0.00143 to 0.00142, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0014 - val_loss: 0.0013\nEpoch 27/400\n41/43 [===========================>..] 
- ETA: 0s - loss: 0.0014\nEpoch 00027: loss improved from 0.00142 to 0.00141, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0014 - val_loss: 0.0013\nEpoch 28/400\n43/43 [==============================] - ETA: 0s - loss: 0.0014\nEpoch 00028: loss improved from 0.00141 to 0.00140, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0014 - val_loss: 0.0013\nEpoch 29/400\n42/43 [============================>.] - ETA: 0s - loss: 0.0014\nEpoch 00029: loss did not improve from 0.00140\n43/43 [==============================] - 1s 18ms/step - loss: 0.0014 - val_loss: 0.0013\nEpoch 30/400\n43/43 [==============================] - ETA: 0s - loss: 0.0014\nEpoch 00030: loss improved from 0.00140 to 0.00137, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0014 - val_loss: 0.0013\nEpoch 31/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0013\nEpoch 00031: loss improved from 0.00137 to 0.00136, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0014 - val_loss: 0.0016\nEpoch 32/400\n40/43 [==========================>...] 
- ETA: 0s - loss: 0.0014\nEpoch 00032: loss did not improve from 0.00136\n43/43 [==============================] - 1s 18ms/step - loss: 0.0014 - val_loss: 0.0014\nEpoch 33/400\n43/43 [==============================] - ETA: 0s - loss: 0.0013\nEpoch 00033: loss improved from 0.00136 to 0.00135, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0013 - val_loss: 0.0013\nEpoch 34/400\n43/43 [==============================] - ETA: 0s - loss: 0.0013\nEpoch 00034: loss improved from 0.00135 to 0.00133, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0013 - val_loss: 0.0012\nEpoch 35/400\n42/43 [============================>.] - ETA: 0s - loss: 0.0013\nEpoch 00035: loss improved from 0.00133 to 0.00132, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0013 - val_loss: 0.0013\nEpoch 36/400\n41/43 [===========================>..] - ETA: 0s - loss: 0.0013\nEpoch 00036: loss improved from 0.00132 to 0.00131, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0013 - val_loss: 0.0012\nEpoch 37/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0013\nEpoch 00037: loss improved from 0.00131 to 0.00131, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0013 - val_loss: 0.0012\nEpoch 38/400\n42/43 [============================>.] - ETA: 0s - loss: 0.0013\nEpoch 00038: loss did not improve from 0.00131\n43/43 [==============================] - 1s 18ms/step - loss: 0.0013 - val_loss: 0.0012\nEpoch 39/400\n41/43 [===========================>..] 
- ETA: 0s - loss: 0.0013\nEpoch 00039: loss improved from 0.00131 to 0.00128, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0013 - val_loss: 0.0012\nEpoch 40/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0013\nEpoch 00040: loss did not improve from 0.00128\n43/43 [==============================] - 1s 18ms/step - loss: 0.0013 - val_loss: 0.0012\nEpoch 41/400\n41/43 [===========================>..] - ETA: 0s - loss: 0.0013\nEpoch 00041: loss did not improve from 0.00128\n43/43 [==============================] - 1s 19ms/step - loss: 0.0013 - val_loss: 0.0011\nEpoch 42/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0013\nEpoch 00042: loss did not improve from 0.00128\n43/43 [==============================] - 1s 19ms/step - loss: 0.0013 - val_loss: 0.0012\nEpoch 43/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0013\nEpoch 00043: loss improved from 0.00128 to 0.00126, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0013 - val_loss: 0.0011\nEpoch 44/400\n43/43 [==============================] - ETA: 0s - loss: 0.0013\nEpoch 00044: loss improved from 0.00126 to 0.00126, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0013 - val_loss: 0.0011\nEpoch 45/400\n41/43 [===========================>..] - ETA: 0s - loss: 0.0013\nEpoch 00045: loss improved from 0.00126 to 0.00125, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0013 - val_loss: 0.0011\nEpoch 46/400\n41/43 [===========================>..] 
- ETA: 0s - loss: 0.0013\nEpoch 00046: loss did not improve from 0.00125\n43/43 [==============================] - 1s 18ms/step - loss: 0.0013 - val_loss: 0.0011\nEpoch 47/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0013\nEpoch 00047: loss did not improve from 0.00125\n43/43 [==============================] - 1s 18ms/step - loss: 0.0013 - val_loss: 0.0011\nEpoch 48/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0012\nEpoch 00048: loss improved from 0.00125 to 0.00123, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0012 - val_loss: 0.0011\nEpoch 49/400\n41/43 [===========================>..] - ETA: 0s - loss: 0.0013\nEpoch 00049: loss did not improve from 0.00123\n43/43 [==============================] - 1s 18ms/step - loss: 0.0013 - val_loss: 0.0011\nEpoch 50/400\n41/43 [===========================>..] - ETA: 0s - loss: 0.0012\nEpoch 00050: loss improved from 0.00123 to 0.00120, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0012 - val_loss: 0.0011\nEpoch 51/400\n41/43 [===========================>..] - ETA: 0s - loss: 0.0012\nEpoch 00051: loss improved from 0.00120 to 0.00118, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0012 - val_loss: 0.0011\nEpoch 52/400\n41/43 [===========================>..] - ETA: 0s - loss: 0.0012\nEpoch 00052: loss did not improve from 0.00118\n43/43 [==============================] - 1s 18ms/step - loss: 0.0012 - val_loss: 0.0010\nEpoch 53/400\n40/43 [==========================>...] 
- ETA: 0s - loss: 0.0012\nEpoch 00053: loss improved from 0.00118 to 0.00118, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0012 - val_loss: 0.0010\nEpoch 54/400\n43/43 [==============================] - ETA: 0s - loss: 0.0012\nEpoch 00054: loss did not improve from 0.00118\n43/43 [==============================] - 1s 19ms/step - loss: 0.0012 - val_loss: 0.0010\nEpoch 55/400\n42/43 [============================>.] - ETA: 0s - loss: 0.0012\nEpoch 00055: loss improved from 0.00118 to 0.00117, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0012 - val_loss: 0.0010\nEpoch 56/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0011\nEpoch 00056: loss improved from 0.00117 to 0.00114, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0011 - val_loss: 0.0010\nEpoch 57/400\n43/43 [==============================] - ETA: 0s - loss: 0.0011\nEpoch 00057: loss did not improve from 0.00114\n43/43 [==============================] - 1s 18ms/step - loss: 0.0011 - val_loss: 9.6094e-04\nEpoch 58/400\n43/43 [==============================] - ETA: 0s - loss: 0.0012\nEpoch 00058: loss did not improve from 0.00114\n43/43 [==============================] - 1s 18ms/step - loss: 0.0012 - val_loss: 0.0011\nEpoch 59/400\n43/43 [==============================] - ETA: 0s - loss: 0.0011\nEpoch 00059: loss improved from 0.00114 to 0.00113, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0011 - val_loss: 9.5781e-04\nEpoch 60/400\n40/43 [==========================>...] 
- ETA: 0s - loss: 0.0011\nEpoch 00060: loss improved from 0.00113 to 0.00113, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0011 - val_loss: 0.0011\nEpoch 61/400\n42/43 [============================>.] - ETA: 0s - loss: 0.0011\nEpoch 00061: loss did not improve from 0.00113\n43/43 [==============================] - 1s 18ms/step - loss: 0.0011 - val_loss: 9.5256e-04\nEpoch 62/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0011\nEpoch 00062: loss improved from 0.00113 to 0.00109, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0011 - val_loss: 9.5701e-04\nEpoch 63/400\n42/43 [============================>.] - ETA: 0s - loss: 0.0011\nEpoch 00063: loss did not improve from 0.00109\n43/43 [==============================] - 1s 19ms/step - loss: 0.0011 - val_loss: 9.3354e-04\nEpoch 64/400\n42/43 [============================>.] - ETA: 0s - loss: 0.0011\nEpoch 00064: loss did not improve from 0.00109\n43/43 [==============================] - 1s 18ms/step - loss: 0.0011 - val_loss: 9.0418e-04\nEpoch 65/400\n42/43 [============================>.] - ETA: 0s - loss: 0.0011\nEpoch 00065: loss improved from 0.00109 to 0.00108, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0011 - val_loss: 0.0010\nEpoch 66/400\n40/43 [==========================>...] 
- ETA: 0s - loss: 0.0011\nEpoch 00066: loss improved from 0.00108 to 0.00107, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0011 - val_loss: 8.9968e-04\nEpoch 67/400\n43/43 [==============================] - ETA: 0s - loss: 0.0010\nEpoch 00067: loss improved from 0.00107 to 0.00105, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0010 - val_loss: 9.1371e-04\nEpoch 68/400\n41/43 [===========================>..] - ETA: 0s - loss: 0.0010\nEpoch 00068: loss did not improve from 0.00105\n43/43 [==============================] - 1s 19ms/step - loss: 0.0010 - val_loss: 0.0010\nEpoch 69/400\n41/43 [===========================>..] - ETA: 0s - loss: 0.0011\nEpoch 00069: loss did not improve from 0.00105\n43/43 [==============================] - 1s 18ms/step - loss: 0.0011 - val_loss: 8.5534e-04\nEpoch 70/400\n42/43 [============================>.] - ETA: 0s - loss: 0.0010\nEpoch 00070: loss improved from 0.00105 to 0.00103, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 0.0010 - val_loss: 8.6209e-04\nEpoch 71/400\n43/43 [==============================] - ETA: 0s - loss: 0.0011\nEpoch 00071: loss did not improve from 0.00103\n43/43 [==============================] - 1s 18ms/step - loss: 0.0011 - val_loss: 8.3727e-04\nEpoch 72/400\n42/43 [============================>.] - ETA: 0s - loss: 0.0010\nEpoch 00072: loss improved from 0.00103 to 0.00102, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 0.0010 - val_loss: 9.7264e-04\nEpoch 73/400\n41/43 [===========================>..] 
- ETA: 0s - loss: 0.0011\nEpoch 00073: loss did not improve from 0.00102\n43/43 [==============================] - 1s 20ms/step - loss: 0.0011 - val_loss: 9.0852e-04\nEpoch 74/400\n40/43 [==========================>...] - ETA: 0s - loss: 0.0010\nEpoch 00074: loss improved from 0.00102 to 0.00100, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 0.0010 - val_loss: 9.3649e-04\nEpoch 75/400\n42/43 [============================>.] - ETA: 0s - loss: 9.8614e-04\nEpoch 00075: loss improved from 0.00100 to 0.00099, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 9.8596e-04 - val_loss: 9.5360e-04\nEpoch 76/400\n40/43 [==========================>...] - ETA: 0s - loss: 9.7949e-04\nEpoch 00076: loss improved from 0.00099 to 0.00098, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 9.8206e-04 - val_loss: 8.0741e-04\nEpoch 77/400\n42/43 [============================>.] - ETA: 0s - loss: 9.9703e-04\nEpoch 00077: loss did not improve from 0.00098\n43/43 [==============================] - 1s 19ms/step - loss: 9.9572e-04 - val_loss: 8.6711e-04\nEpoch 78/400\n40/43 [==========================>...] - ETA: 0s - loss: 9.9192e-04\nEpoch 00078: loss did not improve from 0.00098\n43/43 [==============================] - 1s 19ms/step - loss: 9.8931e-04 - val_loss: 8.6052e-04\nEpoch 79/400\n40/43 [==========================>...] - ETA: 0s - loss: 9.7383e-04\nEpoch 00079: loss improved from 0.00098 to 0.00097, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 9.6984e-04 - val_loss: 7.7694e-04\nEpoch 80/400\n41/43 [===========================>..] 
- ETA: 0s - loss: 9.9777e-04\nEpoch 00080: loss did not improve from 0.00097\n43/43 [==============================] - 1s 19ms/step - loss: 9.9497e-04 - val_loss: 7.9262e-04\nEpoch 81/400\n42/43 [============================>.] - ETA: 0s - loss: 9.6358e-04\nEpoch 00081: loss improved from 0.00097 to 0.00097, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 9.6522e-04 - val_loss: 7.5924e-04\nEpoch 82/400\n40/43 [==========================>...] - ETA: 0s - loss: 9.4713e-04\nEpoch 00082: loss improved from 0.00097 to 0.00094, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 9.4429e-04 - val_loss: 7.7757e-04\nEpoch 83/400\n40/43 [==========================>...] - ETA: 0s - loss: 9.5459e-04\nEpoch 00083: loss did not improve from 0.00094\n43/43 [==============================] - 1s 19ms/step - loss: 9.5714e-04 - val_loss: 8.2212e-04\nEpoch 84/400\n40/43 [==========================>...] - ETA: 0s - loss: 9.7700e-04\nEpoch 00084: loss did not improve from 0.00094\n43/43 [==============================] - 1s 19ms/step - loss: 9.7885e-04 - val_loss: 8.5632e-04\nEpoch 85/400\n42/43 [============================>.] - ETA: 0s - loss: 9.6429e-04\nEpoch 00085: loss did not improve from 0.00094\n43/43 [==============================] - 1s 18ms/step - loss: 9.6299e-04 - val_loss: 7.5678e-04\nEpoch 86/400\n41/43 [===========================>..] - ETA: 0s - loss: 9.2346e-04\nEpoch 00086: loss improved from 0.00094 to 0.00092, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 9.2414e-04 - val_loss: 7.9125e-04\nEpoch 87/400\n42/43 [============================>.] 
- ETA: 0s - loss: 9.6384e-04\nEpoch 00087: loss did not improve from 0.00092\n43/43 [==============================] - 1s 19ms/step - loss: 9.6282e-04 - val_loss: 7.7533e-04\nEpoch 88/400\n41/43 [===========================>..] - ETA: 0s - loss: 9.6917e-04\nEpoch 00088: loss did not improve from 0.00092\n43/43 [==============================] - 1s 19ms/step - loss: 9.6317e-04 - val_loss: 7.9020e-04\nEpoch 89/400\n41/43 [===========================>..] - ETA: 0s - loss: 9.1892e-04\nEpoch 00089: loss improved from 0.00092 to 0.00092, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 9.2103e-04 - val_loss: 7.3038e-04\nEpoch 90/400\n40/43 [==========================>...] - ETA: 0s - loss: 9.6667e-04\nEpoch 00090: loss did not improve from 0.00092\n43/43 [==============================] - 1s 19ms/step - loss: 9.8133e-04 - val_loss: 8.6329e-04\nEpoch 91/400\n42/43 [============================>.] - ETA: 0s - loss: 9.1112e-04\nEpoch 00091: loss improved from 0.00092 to 0.00091, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 9.1066e-04 - val_loss: 7.2935e-04\nEpoch 92/400\n40/43 [==========================>...] - ETA: 0s - loss: 9.3449e-04\nEpoch 00092: loss did not improve from 0.00091\n43/43 [==============================] - 1s 19ms/step - loss: 9.3128e-04 - val_loss: 7.4357e-04\nEpoch 93/400\n42/43 [============================>.] - ETA: 0s - loss: 9.2838e-04\nEpoch 00093: loss did not improve from 0.00091\n43/43 [==============================] - 1s 19ms/step - loss: 9.2789e-04 - val_loss: 7.2850e-04\nEpoch 94/400\n42/43 [============================>.] - ETA: 0s - loss: 9.1732e-04\nEpoch 00094: loss did not improve from 0.00091\n43/43 [==============================] - 1s 19ms/step - loss: 9.1644e-04 - val_loss: 7.2711e-04\nEpoch 95/400\n41/43 [===========================>..] 
- ETA: 0s - loss: 8.9909e-04\nEpoch 00095: loss improved from 0.00091 to 0.00090, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 8.9595e-04 - val_loss: 7.5925e-04\nEpoch 96/400\n42/43 [============================>.] - ETA: 0s - loss: 9.3253e-04\nEpoch 00096: loss did not improve from 0.00090\n43/43 [==============================] - 1s 19ms/step - loss: 9.3368e-04 - val_loss: 8.0825e-04\nEpoch 97/400\n41/43 [===========================>..] - ETA: 0s - loss: 9.1269e-04\nEpoch 00097: loss did not improve from 0.00090\n43/43 [==============================] - 1s 19ms/step - loss: 9.1694e-04 - val_loss: 7.2489e-04\nEpoch 98/400\n42/43 [============================>.] - ETA: 0s - loss: 9.0446e-04\nEpoch 00098: loss did not improve from 0.00090\n43/43 [==============================] - 1s 19ms/step - loss: 9.0440e-04 - val_loss: 7.5880e-04\nEpoch 99/400\n42/43 [============================>.] - ETA: 0s - loss: 8.9356e-04\nEpoch 00099: loss improved from 0.00090 to 0.00089, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 8.9343e-04 - val_loss: 7.9338e-04\nEpoch 100/400\n40/43 [==========================>...] - ETA: 0s - loss: 9.1943e-04\nEpoch 00100: loss did not improve from 0.00089\n43/43 [==============================] - 1s 19ms/step - loss: 9.1577e-04 - val_loss: 7.2650e-04\nEpoch 101/400\n41/43 [===========================>..] - ETA: 0s - loss: 8.8990e-04\nEpoch 00101: loss improved from 0.00089 to 0.00089, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 8.9073e-04 - val_loss: 7.3094e-04\nEpoch 102/400\n40/43 [==========================>...] 
- ETA: 0s - loss: 9.2267e-04\nEpoch 00102: loss did not improve from 0.00089\n43/43 [==============================] - 1s 18ms/step - loss: 9.1690e-04 - val_loss: 7.2848e-04\nEpoch 103/400\n43/43 [==============================] - ETA: 0s - loss: 8.9385e-04\nEpoch 00103: loss did not improve from 0.00089\n43/43 [==============================] - 1s 19ms/step - loss: 8.9385e-04 - val_loss: 7.6796e-04\nEpoch 104/400\n42/43 [============================>.] - ETA: 0s - loss: 9.1972e-04\nEpoch 00104: loss did not improve from 0.00089\n43/43 [==============================] - 1s 19ms/step - loss: 9.2042e-04 - val_loss: 7.3338e-04\nEpoch 105/400\n41/43 [===========================>..] - ETA: 0s - loss: 9.1608e-04\nEpoch 00105: loss did not improve from 0.00089\n43/43 [==============================] - 1s 19ms/step - loss: 9.1985e-04 - val_loss: 7.7271e-04\nEpoch 106/400\n40/43 [==========================>...] - ETA: 0s - loss: 8.8966e-04\nEpoch 00106: loss improved from 0.00089 to 0.00089, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 8.8767e-04 - val_loss: 6.9919e-04\nEpoch 107/400\n40/43 [==========================>...] - ETA: 0s - loss: 8.9184e-04\nEpoch 00107: loss did not improve from 0.00089\n43/43 [==============================] - 1s 20ms/step - loss: 8.9168e-04 - val_loss: 7.3670e-04\nEpoch 108/400\n42/43 [============================>.] - ETA: 0s - loss: 8.8881e-04\nEpoch 00108: loss did not improve from 0.00089\n43/43 [==============================] - 1s 19ms/step - loss: 8.8921e-04 - val_loss: 6.9667e-04\nEpoch 109/400\n42/43 [============================>.] - ETA: 0s - loss: 8.8291e-04\nEpoch 00109: loss improved from 0.00089 to 0.00088, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 8.8228e-04 - val_loss: 7.2597e-04\nEpoch 110/400\n40/43 [==========================>...] 
- ETA: 0s - loss: 8.8231e-04\nEpoch 00110: loss did not improve from 0.00088\n43/43 [==============================] - 1s 19ms/step - loss: 8.8456e-04 - val_loss: 8.1596e-04\nEpoch 111/400\n43/43 [==============================] - ETA: 0s - loss: 9.1121e-04\nEpoch 00111: loss did not improve from 0.00088\n43/43 [==============================] - 1s 19ms/step - loss: 9.1121e-04 - val_loss: 7.2007e-04\nEpoch 112/400\n41/43 [===========================>..] - ETA: 0s - loss: 8.6612e-04\nEpoch 00112: loss improved from 0.00088 to 0.00087, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 8.6836e-04 - val_loss: 7.2100e-04\nEpoch 113/400\n42/43 [============================>.] - ETA: 0s - loss: 8.7034e-04\nEpoch 00113: loss did not improve from 0.00087\n43/43 [==============================] - 1s 19ms/step - loss: 8.7077e-04 - val_loss: 7.6953e-04\nEpoch 114/400\n41/43 [===========================>..] - ETA: 0s - loss: 8.7704e-04\nEpoch 00114: loss did not improve from 0.00087\n43/43 [==============================] - 1s 19ms/step - loss: 8.7408e-04 - val_loss: 6.9915e-04\nEpoch 115/400\n42/43 [============================>.] - ETA: 0s - loss: 8.5646e-04\nEpoch 00115: loss improved from 0.00087 to 0.00086, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 8.5584e-04 - val_loss: 7.1293e-04\nEpoch 116/400\n41/43 [===========================>..] - ETA: 0s - loss: 8.8799e-04\nEpoch 00116: loss did not improve from 0.00086\n43/43 [==============================] - 1s 19ms/step - loss: 8.9013e-04 - val_loss: 7.6954e-04\nEpoch 117/400\n42/43 [============================>.] 
- ETA: 0s - loss: 8.8274e-04\nEpoch 00117: loss did not improve from 0.00086\n43/43 [==============================] - 1s 19ms/step - loss: 8.8190e-04 - val_loss: 8.0251e-04\nEpoch 118/400\n43/43 [==============================] - ETA: 0s - loss: 8.8088e-04\nEpoch 00118: loss did not improve from 0.00086\n43/43 [==============================] - 1s 19ms/step - loss: 8.8088e-04 - val_loss: 7.5302e-04\nEpoch 119/400\n41/43 [===========================>..] - ETA: 0s - loss: 8.7917e-04\nEpoch 00119: loss did not improve from 0.00086\n43/43 [==============================] - 1s 19ms/step - loss: 8.8435e-04 - val_loss: 7.3509e-04\nEpoch 120/400\n40/43 [==========================>...] - ETA: 0s - loss: 8.7997e-04\nEpoch 00120: loss did not improve from 0.00086\n43/43 [==============================] - 1s 19ms/step - loss: 8.7497e-04 - val_loss: 7.0510e-04\nEpoch 121/400\n40/43 [==========================>...] - ETA: 0s - loss: 8.7712e-04\nEpoch 00121: loss did not improve from 0.00086\n43/43 [==============================] - 1s 19ms/step - loss: 8.7117e-04 - val_loss: 7.2703e-04\nEpoch 122/400\n41/43 [===========================>..] - ETA: 0s - loss: 8.5719e-04\nEpoch 00122: loss improved from 0.00086 to 0.00086, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 8.5529e-04 - val_loss: 7.0189e-04\nEpoch 123/400\n41/43 [===========================>..] - ETA: 0s - loss: 8.7280e-04\nEpoch 00123: loss did not improve from 0.00086\n43/43 [==============================] - 1s 19ms/step - loss: 8.7330e-04 - val_loss: 7.1129e-04\nEpoch 124/400\n41/43 [===========================>..] - ETA: 0s - loss: 8.6516e-04\nEpoch 00124: loss did not improve from 0.00086\n43/43 [==============================] - 1s 18ms/step - loss: 8.6338e-04 - val_loss: 6.8227e-04\nEpoch 125/400\n41/43 [===========================>..] 
- ETA: 0s - loss: 8.6029e-04\nEpoch 00125: loss did not improve from 0.00086\n43/43 [==============================] - 1s 19ms/step - loss: 8.5872e-04 - val_loss: 7.1049e-04\nEpoch 126/400\n43/43 [==============================] - ETA: 0s - loss: 8.7962e-04\nEpoch 00126: loss did not improve from 0.00086\n43/43 [==============================] - 1s 19ms/step - loss: 8.7962e-04 - val_loss: 7.1813e-04\nEpoch 127/400\n41/43 [===========================>..] - ETA: 0s - loss: 8.6824e-04\nEpoch 00127: loss did not improve from 0.00086\n43/43 [==============================] - 1s 19ms/step - loss: 8.6648e-04 - val_loss: 6.7524e-04\nEpoch 128/400\n41/43 [===========================>..] - ETA: 0s - loss: 8.5210e-04\nEpoch 00128: loss improved from 0.00086 to 0.00085, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 8.5078e-04 - val_loss: 6.9181e-04\nEpoch 129/400\n40/43 [==========================>...] - ETA: 0s - loss: 8.5636e-04\nEpoch 00129: loss did not improve from 0.00085\n43/43 [==============================] - 1s 19ms/step - loss: 8.5516e-04 - val_loss: 6.7280e-04\nEpoch 130/400\n42/43 [============================>.] - ETA: 0s - loss: 8.5588e-04\nEpoch 00130: loss did not improve from 0.00085\n43/43 [==============================] - 1s 19ms/step - loss: 8.5515e-04 - val_loss: 7.0433e-04\nEpoch 131/400\n42/43 [============================>.] - ETA: 0s - loss: 8.5496e-04\nEpoch 00131: loss did not improve from 0.00085\n43/43 [==============================] - 1s 19ms/step - loss: 8.5661e-04 - val_loss: 6.7507e-04\nEpoch 132/400\n40/43 [==========================>...] - ETA: 0s - loss: 8.7076e-04\nEpoch 00132: loss did not improve from 0.00085\n43/43 [==============================] - 1s 19ms/step - loss: 8.6818e-04 - val_loss: 7.3807e-04\nEpoch 133/400\n42/43 [============================>.] 
- ETA: 0s - loss: 8.5526e-04\nEpoch 00133: loss did not improve from 0.00085\n43/43 [==============================] - 1s 19ms/step - loss: 8.5693e-04 - val_loss: 7.6646e-04\nEpoch 134/400\n41/43 [===========================>..] - ETA: 0s - loss: 8.4986e-04\nEpoch 00134: loss improved from 0.00085 to 0.00085, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 8.4944e-04 - val_loss: 6.9102e-04\nEpoch 135/400\n42/43 [============================>.] - ETA: 0s - loss: 8.4416e-04\nEpoch 00135: loss improved from 0.00085 to 0.00084, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 20ms/step - loss: 8.4398e-04 - val_loss: 7.0916e-04\nEpoch 136/400\n41/43 [===========================>..] - ETA: 0s - loss: 8.3286e-04\nEpoch 00136: loss improved from 0.00084 to 0.00083, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 8.3409e-04 - val_loss: 6.6784e-04\nEpoch 137/400\n40/43 [==========================>...] - ETA: 0s - loss: 8.4029e-04\nEpoch 00137: loss did not improve from 0.00083\n43/43 [==============================] - 1s 19ms/step - loss: 8.4226e-04 - val_loss: 6.8771e-04\nEpoch 138/400\n42/43 [============================>.] - ETA: 0s - loss: 8.3428e-04\nEpoch 00138: loss did not improve from 0.00083\n43/43 [==============================] - 1s 19ms/step - loss: 8.3416e-04 - val_loss: 6.9175e-04\nEpoch 139/400\n41/43 [===========================>..] - ETA: 0s - loss: 8.2775e-04\nEpoch 00139: loss improved from 0.00083 to 0.00083, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 21ms/step - loss: 8.2895e-04 - val_loss: 7.1850e-04\nEpoch 140/400\n40/43 [==========================>...] 
- ETA: 0s - loss: 8.4652e-04\nEpoch 00140: loss did not improve from 0.00083\n43/43 [==============================] - 1s 19ms/step - loss: 8.5208e-04 - val_loss: 9.3612e-04\n[... epochs 141-293 elided: loss decreased gradually from 8.5e-04 to 7.5e-04, val_loss from ~6.8e-04 to ~6.1e-04; improved checkpoints saved to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5 ...]\nEpoch 294/400\n42/43 [============================>.] 
- ETA: 0s - loss: 7.7315e-04\nEpoch 00294: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.7378e-04 - val_loss: 6.1149e-04\nEpoch 295/400\n43/43 [==============================] - ETA: 0s - loss: 7.7065e-04\nEpoch 00295: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.7065e-04 - val_loss: 6.1956e-04\nEpoch 296/400\n43/43 [==============================] - ETA: 0s - loss: 7.4897e-04\nEpoch 00296: loss improved from 0.00075 to 0.00075, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 25ms/step - loss: 7.4897e-04 - val_loss: 6.2011e-04\nEpoch 297/400\n43/43 [==============================] - ETA: 0s - loss: 7.6267e-04\nEpoch 00297: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.6267e-04 - val_loss: 6.1769e-04\nEpoch 298/400\n40/43 [==========================>...] - ETA: 0s - loss: 7.5110e-04\nEpoch 00298: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5316e-04 - val_loss: 6.0955e-04\nEpoch 299/400\n43/43 [==============================] - ETA: 0s - loss: 7.7050e-04\nEpoch 00299: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.7050e-04 - val_loss: 6.3769e-04\nEpoch 300/400\n43/43 [==============================] - ETA: 0s - loss: 7.5754e-04\nEpoch 00300: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5754e-04 - val_loss: 6.2374e-04\nEpoch 301/400\n43/43 [==============================] - ETA: 0s - loss: 7.4884e-04\nEpoch 00301: loss improved from 0.00075 to 0.00075, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 22ms/step - loss: 7.4884e-04 - val_loss: 6.0912e-04\nEpoch 302/400\n43/43 [==============================] - ETA: 
0s - loss: 7.6017e-04\nEpoch 00302: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.6017e-04 - val_loss: 6.2257e-04\nEpoch 303/400\n43/43 [==============================] - ETA: 0s - loss: 7.6118e-04\nEpoch 00303: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.6118e-04 - val_loss: 6.1551e-04\nEpoch 304/400\n43/43 [==============================] - ETA: 0s - loss: 7.5736e-04\nEpoch 00304: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5736e-04 - val_loss: 6.1438e-04\nEpoch 305/400\n43/43 [==============================] - ETA: 0s - loss: 7.6577e-04\nEpoch 00305: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.6577e-04 - val_loss: 6.0324e-04\nEpoch 306/400\n43/43 [==============================] - ETA: 0s - loss: 7.6898e-04\nEpoch 00306: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.6898e-04 - val_loss: 6.0196e-04\nEpoch 307/400\n43/43 [==============================] - ETA: 0s - loss: 7.7524e-04\nEpoch 00307: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.7524e-04 - val_loss: 6.1043e-04\nEpoch 308/400\n43/43 [==============================] - ETA: 0s - loss: 7.6154e-04\nEpoch 00308: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.6154e-04 - val_loss: 6.1328e-04\nEpoch 309/400\n43/43 [==============================] - ETA: 0s - loss: 7.5635e-04\nEpoch 00309: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5635e-04 - val_loss: 6.2006e-04\nEpoch 310/400\n42/43 [============================>.] 
- ETA: 0s - loss: 7.5342e-04\nEpoch 00310: loss did not improve from 0.00075\n43/43 [==============================] - 1s 22ms/step - loss: 7.5482e-04 - val_loss: 6.2314e-04\nEpoch 311/400\n40/43 [==========================>...] - ETA: 0s - loss: 7.4721e-04\nEpoch 00311: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5043e-04 - val_loss: 6.1214e-04\nEpoch 312/400\n43/43 [==============================] - ETA: 0s - loss: 7.5511e-04\nEpoch 00312: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5511e-04 - val_loss: 6.0070e-04\nEpoch 313/400\n43/43 [==============================] - ETA: 0s - loss: 7.6164e-04\nEpoch 00313: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.6164e-04 - val_loss: 6.1462e-04\nEpoch 314/400\n43/43 [==============================] - ETA: 0s - loss: 7.6390e-04\nEpoch 00314: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.6390e-04 - val_loss: 6.2450e-04\nEpoch 315/400\n43/43 [==============================] - ETA: 0s - loss: 7.5119e-04\nEpoch 00315: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5119e-04 - val_loss: 6.0216e-04\nEpoch 316/400\n43/43 [==============================] - ETA: 0s - loss: 7.5634e-04\nEpoch 00316: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5634e-04 - val_loss: 6.1511e-04\nEpoch 317/400\n43/43 [==============================] - ETA: 0s - loss: 7.5007e-04\nEpoch 00317: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5007e-04 - val_loss: 6.0200e-04\nEpoch 318/400\n43/43 [==============================] - ETA: 0s - loss: 7.5944e-04\nEpoch 00318: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5944e-04 - val_loss: 
6.2070e-04\nEpoch 319/400\n43/43 [==============================] - ETA: 0s - loss: 7.5479e-04\nEpoch 00319: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5479e-04 - val_loss: 6.0013e-04\nEpoch 320/400\n43/43 [==============================] - ETA: 0s - loss: 7.5868e-04\nEpoch 00320: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5868e-04 - val_loss: 6.2179e-04\nEpoch 321/400\n43/43 [==============================] - ETA: 0s - loss: 7.5595e-04\nEpoch 00321: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5595e-04 - val_loss: 6.1868e-04\nEpoch 322/400\n43/43 [==============================] - ETA: 0s - loss: 7.5847e-04\nEpoch 00322: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5847e-04 - val_loss: 6.2630e-04\nEpoch 323/400\n43/43 [==============================] - ETA: 0s - loss: 7.5383e-04\nEpoch 00323: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5383e-04 - val_loss: 6.2755e-04\nEpoch 324/400\n43/43 [==============================] - ETA: 0s - loss: 7.5024e-04\nEpoch 00324: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5024e-04 - val_loss: 6.0417e-04\nEpoch 325/400\n43/43 [==============================] - ETA: 0s - loss: 7.4732e-04\nEpoch 00325: loss improved from 0.00075 to 0.00075, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 27ms/step - loss: 7.4732e-04 - val_loss: 6.0552e-04\nEpoch 326/400\n43/43 [==============================] - ETA: 0s - loss: 7.4526e-04\nEpoch 00326: loss improved from 0.00075 to 0.00075, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 22ms/step - loss: 7.4526e-04 - val_loss: 
6.1282e-04\nEpoch 327/400\n43/43 [==============================] - ETA: 0s - loss: 7.5306e-04\nEpoch 00327: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5306e-04 - val_loss: 6.2025e-04\nEpoch 328/400\n42/43 [============================>.] - ETA: 0s - loss: 7.4879e-04\nEpoch 00328: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.4914e-04 - val_loss: 6.1275e-04\nEpoch 329/400\n42/43 [============================>.] - ETA: 0s - loss: 7.5900e-04\nEpoch 00329: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5848e-04 - val_loss: 6.0006e-04\nEpoch 330/400\n43/43 [==============================] - ETA: 0s - loss: 7.5734e-04\nEpoch 00330: loss did not improve from 0.00075\n43/43 [==============================] - 1s 21ms/step - loss: 7.5734e-04 - val_loss: 6.0086e-04\nEpoch 331/400\n42/43 [============================>.] - ETA: 0s - loss: 7.6136e-04\nEpoch 00331: loss did not improve from 0.00075\n43/43 [==============================] - 1s 22ms/step - loss: 7.6003e-04 - val_loss: 6.0936e-04\nEpoch 332/400\n42/43 [============================>.] 
- ETA: 0s - loss: 7.4266e-04\nEpoch 00332: loss improved from 0.00075 to 0.00074, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 23ms/step - loss: 7.4475e-04 - val_loss: 6.0296e-04\nEpoch 333/400\n43/43 [==============================] - ETA: 0s - loss: 7.5253e-04\nEpoch 00333: loss did not improve from 0.00074\n43/43 [==============================] - 1s 21ms/step - loss: 7.5253e-04 - val_loss: 6.1537e-04\nEpoch 334/400\n43/43 [==============================] - ETA: 0s - loss: 7.5831e-04\nEpoch 00334: loss did not improve from 0.00074\n43/43 [==============================] - 1s 21ms/step - loss: 7.5831e-04 - val_loss: 6.0117e-04\nEpoch 335/400\n43/43 [==============================] - ETA: 0s - loss: 7.4860e-04\nEpoch 00335: loss did not improve from 0.00074\n43/43 [==============================] - 1s 21ms/step - loss: 7.4860e-04 - val_loss: 6.2022e-04\nEpoch 336/400\n42/43 [============================>.] - ETA: 0s - loss: 7.4968e-04\nEpoch 00336: loss did not improve from 0.00074\n43/43 [==============================] - 1s 22ms/step - loss: 7.4986e-04 - val_loss: 6.1961e-04\nEpoch 337/400\n43/43 [==============================] - ETA: 0s - loss: 7.4729e-04\nEpoch 00337: loss did not improve from 0.00074\n43/43 [==============================] - 1s 22ms/step - loss: 7.4729e-04 - val_loss: 5.9792e-04\nEpoch 338/400\n43/43 [==============================] - ETA: 0s - loss: 7.6439e-04\nEpoch 00338: loss did not improve from 0.00074\n43/43 [==============================] - 1s 21ms/step - loss: 7.6439e-04 - val_loss: 6.2018e-04\nEpoch 339/400\n43/43 [==============================] - ETA: 0s - loss: 7.6093e-04\nEpoch 00339: loss did not improve from 0.00074\n43/43 [==============================] - 1s 21ms/step - loss: 7.6093e-04 - val_loss: 6.2131e-04\nEpoch 340/400\n42/43 [============================>.] 
- ETA: 0s - loss: 7.3322e-04\nEpoch 00340: loss improved from 0.00074 to 0.00073, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 23ms/step - loss: 7.3306e-04 - val_loss: 6.1254e-04\nEpoch 341/400\n43/43 [==============================] - ETA: 0s - loss: 7.4933e-04\nEpoch 00341: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4933e-04 - val_loss: 5.9495e-04\nEpoch 342/400\n42/43 [============================>.] - ETA: 0s - loss: 7.3765e-04\nEpoch 00342: loss did not improve from 0.00073\n43/43 [==============================] - 1s 22ms/step - loss: 7.3732e-04 - val_loss: 5.9162e-04\nEpoch 343/400\n40/43 [==========================>...] - ETA: 0s - loss: 7.4550e-04\nEpoch 00343: loss did not improve from 0.00073\n43/43 [==============================] - 1s 22ms/step - loss: 7.4074e-04 - val_loss: 5.9389e-04\nEpoch 344/400\n43/43 [==============================] - ETA: 0s - loss: 7.3770e-04\nEpoch 00344: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3770e-04 - val_loss: 6.1477e-04\nEpoch 345/400\n43/43 [==============================] - ETA: 0s - loss: 7.4221e-04\nEpoch 00345: loss did not improve from 0.00073\n43/43 [==============================] - 1s 22ms/step - loss: 7.4221e-04 - val_loss: 6.0267e-04\nEpoch 346/400\n43/43 [==============================] - ETA: 0s - loss: 7.4488e-04\nEpoch 00346: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4488e-04 - val_loss: 6.2104e-04\nEpoch 347/400\n43/43 [==============================] - ETA: 0s - loss: 7.4717e-04\nEpoch 00347: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4717e-04 - val_loss: 6.1316e-04\nEpoch 348/400\n42/43 [============================>.] 
- ETA: 0s - loss: 7.4851e-04\nEpoch 00348: loss did not improve from 0.00073\n43/43 [==============================] - 1s 22ms/step - loss: 7.4804e-04 - val_loss: 6.1191e-04\nEpoch 349/400\n43/43 [==============================] - ETA: 0s - loss: 7.4415e-04\nEpoch 00349: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4415e-04 - val_loss: 6.1204e-04\nEpoch 350/400\n43/43 [==============================] - ETA: 0s - loss: 7.4988e-04\nEpoch 00350: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4988e-04 - val_loss: 6.1053e-04\nEpoch 351/400\n43/43 [==============================] - ETA: 0s - loss: 7.4136e-04\nEpoch 00351: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4136e-04 - val_loss: 6.3264e-04\nEpoch 352/400\n43/43 [==============================] - ETA: 0s - loss: 7.5283e-04\nEpoch 00352: loss did not improve from 0.00073\n43/43 [==============================] - 1s 22ms/step - loss: 7.5283e-04 - val_loss: 6.1778e-04\nEpoch 353/400\n43/43 [==============================] - ETA: 0s - loss: 7.6000e-04\nEpoch 00353: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.6000e-04 - val_loss: 6.0077e-04\nEpoch 354/400\n43/43 [==============================] - ETA: 0s - loss: 7.5000e-04\nEpoch 00354: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.5000e-04 - val_loss: 6.1223e-04\nEpoch 355/400\n40/43 [==========================>...] - ETA: 0s - loss: 7.5910e-04\nEpoch 00355: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.5701e-04 - val_loss: 6.2680e-04\nEpoch 356/400\n40/43 [==========================>...] 
- ETA: 0s - loss: 7.5394e-04\nEpoch 00356: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.5359e-04 - val_loss: 5.9552e-04\nEpoch 357/400\n40/43 [==========================>...] - ETA: 0s - loss: 7.5625e-04\nEpoch 00357: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.5184e-04 - val_loss: 5.9686e-04\nEpoch 358/400\n43/43 [==============================] - ETA: 0s - loss: 7.4314e-04\nEpoch 00358: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4314e-04 - val_loss: 6.0007e-04\nEpoch 359/400\n43/43 [==============================] - ETA: 0s - loss: 7.4478e-04\nEpoch 00359: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4478e-04 - val_loss: 5.9969e-04\nEpoch 360/400\n42/43 [============================>.] - ETA: 0s - loss: 7.3929e-04\nEpoch 00360: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3804e-04 - val_loss: 6.2390e-04\nEpoch 361/400\n42/43 [============================>.] - ETA: 0s - loss: 7.5451e-04\nEpoch 00361: loss did not improve from 0.00073\n43/43 [==============================] - 1s 22ms/step - loss: 7.5559e-04 - val_loss: 6.0699e-04\nEpoch 362/400\n43/43 [==============================] - ETA: 0s - loss: 7.4543e-04\nEpoch 00362: loss did not improve from 0.00073\n43/43 [==============================] - 1s 22ms/step - loss: 7.4543e-04 - val_loss: 6.2123e-04\nEpoch 363/400\n42/43 [============================>.] - ETA: 0s - loss: 7.3734e-04\nEpoch 00363: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3781e-04 - val_loss: 6.3183e-04\nEpoch 364/400\n40/43 [==========================>...] 
- ETA: 0s - loss: 7.5191e-04\nEpoch 00364: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.5089e-04 - val_loss: 5.9910e-04\nEpoch 365/400\n41/43 [===========================>..] - ETA: 0s - loss: 7.4620e-04\nEpoch 00365: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4656e-04 - val_loss: 5.9204e-04\nEpoch 366/400\n43/43 [==============================] - ETA: 0s - loss: 7.4696e-04\nEpoch 00366: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4696e-04 - val_loss: 6.0246e-04\nEpoch 367/400\n43/43 [==============================] - ETA: 0s - loss: 7.4070e-04\nEpoch 00367: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4070e-04 - val_loss: 5.9932e-04\nEpoch 368/400\n43/43 [==============================] - ETA: 0s - loss: 7.3366e-04\nEpoch 00368: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3366e-04 - val_loss: 6.0834e-04\nEpoch 369/400\n40/43 [==========================>...] 
- ETA: 0s - loss: 7.4500e-04\nEpoch 00369: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4241e-04 - val_loss: 5.9974e-04\nEpoch 370/400\n43/43 [==============================] - ETA: 0s - loss: 7.3924e-04\nEpoch 00370: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3924e-04 - val_loss: 5.9284e-04\nEpoch 371/400\n43/43 [==============================] - ETA: 0s - loss: 7.4585e-04\nEpoch 00371: loss did not improve from 0.00073\n43/43 [==============================] - 1s 22ms/step - loss: 7.4585e-04 - val_loss: 6.4006e-04\nEpoch 372/400\n43/43 [==============================] - ETA: 0s - loss: 7.3708e-04\nEpoch 00372: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3708e-04 - val_loss: 6.2919e-04\nEpoch 373/400\n43/43 [==============================] - ETA: 0s - loss: 7.4294e-04\nEpoch 00373: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4294e-04 - val_loss: 5.9618e-04\nEpoch 374/400\n43/43 [==============================] - ETA: 0s - loss: 7.3713e-04\nEpoch 00374: loss did not improve from 0.00073\n43/43 [==============================] - 1s 22ms/step - loss: 7.3713e-04 - val_loss: 5.9710e-04\nEpoch 375/400\n43/43 [==============================] - ETA: 0s - loss: 7.3813e-04\nEpoch 00375: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3813e-04 - val_loss: 6.2813e-04\nEpoch 376/400\n42/43 [============================>.] 
- ETA: 0s - loss: 7.2879e-04\nEpoch 00376: loss improved from 0.00073 to 0.00073, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 26ms/step - loss: 7.2931e-04 - val_loss: 6.5777e-04\nEpoch 377/400\n43/43 [==============================] - ETA: 0s - loss: 7.3761e-04\nEpoch 00377: loss did not improve from 0.00073\n43/43 [==============================] - 1s 22ms/step - loss: 7.3761e-04 - val_loss: 6.0391e-04\nEpoch 378/400\n43/43 [==============================] - ETA: 0s - loss: 7.3372e-04\nEpoch 00378: loss did not improve from 0.00073\n43/43 [==============================] - 1s 20ms/step - loss: 7.3372e-04 - val_loss: 6.0467e-04\nEpoch 379/400\n43/43 [==============================] - ETA: 0s - loss: 7.3632e-04\nEpoch 00379: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3632e-04 - val_loss: 6.2740e-04\nEpoch 380/400\n43/43 [==============================] - ETA: 0s - loss: 7.4008e-04\nEpoch 00380: loss did not improve from 0.00073\n43/43 [==============================] - 1s 22ms/step - loss: 7.4008e-04 - val_loss: 6.1374e-04\nEpoch 381/400\n40/43 [==========================>...] - ETA: 0s - loss: 7.4585e-04\nEpoch 00381: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4540e-04 - val_loss: 6.0936e-04\nEpoch 382/400\n43/43 [==============================] - ETA: 0s - loss: 7.3446e-04\nEpoch 00382: loss did not improve from 0.00073\n43/43 [==============================] - 1s 22ms/step - loss: 7.3446e-04 - val_loss: 6.1112e-04\nEpoch 383/400\n40/43 [==========================>...] 
- ETA: 0s - loss: 7.4931e-04\nEpoch 00383: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4621e-04 - val_loss: 6.1087e-04\nEpoch 384/400\n43/43 [==============================] - ETA: 0s - loss: 7.3973e-04\nEpoch 00384: loss did not improve from 0.00073\n43/43 [==============================] - 1s 22ms/step - loss: 7.3973e-04 - val_loss: 6.1185e-04\nEpoch 385/400\n43/43 [==============================] - ETA: 0s - loss: 7.3956e-04\nEpoch 00385: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3956e-04 - val_loss: 6.2808e-04\nEpoch 386/400\n43/43 [==============================] - ETA: 0s - loss: 7.4243e-04\nEpoch 00386: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4243e-04 - val_loss: 6.6126e-04\nEpoch 387/400\n43/43 [==============================] - ETA: 0s - loss: 7.3838e-04\nEpoch 00387: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3838e-04 - val_loss: 5.9769e-04\nEpoch 388/400\n43/43 [==============================] - ETA: 0s - loss: 7.4529e-04\nEpoch 00388: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.4529e-04 - val_loss: 6.0167e-04\nEpoch 389/400\n43/43 [==============================] - ETA: 0s - loss: 7.2624e-04\nEpoch 00389: loss improved from 0.00073 to 0.00073, saving model to /content/drive/My Drive/weights/KidNeYronal_model_mlp-s.h5\n43/43 [==============================] - 1s 27ms/step - loss: 7.2624e-04 - val_loss: 6.0040e-04\nEpoch 390/400\n43/43 [==============================] - ETA: 0s - loss: 7.3656e-04\nEpoch 00390: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3656e-04 - val_loss: 6.0524e-04\nEpoch 391/400\n43/43 [==============================] - ETA: 0s - loss: 7.3948e-04\nEpoch 00391: loss did not improve from 0.00073\n43/43 
[==============================] - 1s 21ms/step - loss: 7.3948e-04 - val_loss: 6.0807e-04\nEpoch 392/400\n43/43 [==============================] - ETA: 0s - loss: 7.3299e-04\nEpoch 00392: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3299e-04 - val_loss: 5.9657e-04\nEpoch 393/400\n43/43 [==============================] - ETA: 0s - loss: 7.3573e-04\nEpoch 00393: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3573e-04 - val_loss: 6.1584e-04\nEpoch 394/400\n43/43 [==============================] - ETA: 0s - loss: 7.3809e-04\nEpoch 00394: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3809e-04 - val_loss: 5.8471e-04\nEpoch 395/400\n43/43 [==============================] - ETA: 0s - loss: 7.2753e-04\nEpoch 00395: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.2753e-04 - val_loss: 5.9882e-04\nEpoch 396/400\n41/43 [===========================>..] 
- ETA: 0s - loss: 7.3676e-04\nEpoch 00396: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3163e-04 - val_loss: 5.8929e-04\nEpoch 397/400\n43/43 [==============================] - ETA: 0s - loss: 7.3736e-04\nEpoch 00397: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3736e-04 - val_loss: 6.0604e-04\nEpoch 398/400\n43/43 [==============================] - ETA: 0s - loss: 7.3073e-04\nEpoch 00398: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3073e-04 - val_loss: 5.9663e-04\nEpoch 399/400\n43/43 [==============================] - ETA: 0s - loss: 7.3916e-04\nEpoch 00399: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3916e-04 - val_loss: 6.3720e-04\nEpoch 400/400\n43/43 [==============================] - ETA: 0s - loss: 7.3359e-04\nEpoch 00400: loss did not improve from 0.00073\n43/43 [==============================] - 1s 21ms/step - loss: 7.3359e-04 - val_loss: 6.0707e-04\ntraining time: 443.7013142108917\ndone\n"
],
[
"plt.rcParams.update({'font.size': 16})\nplt.plot(hist.history['loss'], linewidth=5)\nplt.title('Model Training Loss')\nplt.ylabel('Loss')\nplt.xlabel('Epoch')\nplt.ylim(top=0.01, bottom=0)\n",
"_____no_output_____"
],
[
"plt.plot(hist.history['loss'], linewidth=5)\nplt.title('Model Training Loss')\nplt.ylabel('Loss')\nplt.xlabel('Epoch')\nplt.ylim(top=0.005, bottom=0)",
"_____no_output_____"
],
[
"plt.plot(hist.history['val_loss'], linewidth=5)\nplt.title('Model Validation Loss')\nplt.ylabel('Loss')\nplt.xlabel('Epoch')\nplt.ylim(top=0.01, bottom=0)\n",
"_____no_output_____"
],
[
"plt.plot(hist.history['val_loss'], linewidth=5)\nplt.title('Model Validation Loss')\nplt.ylabel('Loss')\nplt.xlabel('Epoch')\nplt.ylim(top=0.005, bottom=0)",
"_____no_output_____"
]
],
[
[
"# Testing Model Accuracy",
"_____no_output_____"
]
],
[
[
"predict = model.predict(X_test)\ndis_index = 0  # index into the test data",
"_____no_output_____"
],
[
"def plot_predict_gt(predict_row, y_test_row):\n    # Print a prediction vs. ground-truth comparison table for one test sample,\n    # then plot the corresponding input spectrum.\n    # Relies on the module-level dis_index and X_test defined above.\n    predict_output = pd.DataFrame({'predict': predict_row, 'ground truth': y_test_row})\n    predict_output = predict_output.transpose()\n    print('test: ', dis_index + 1)\n    print(predict_output)\n    plt.figure()\n    plt.plot(X_test[dis_index, :])\n    plt.title('spect= %i' % (dis_index + 1))\n    plt.ylabel('Intensity')\n    print('\\n')",
"_____no_output_____"
],
[
"for dis_index in range(len(predict)):\n    plot_predict_gt(predict[dis_index, :], y_test[dis_index, :])",
"test: 1\n 0 1 2\npredict 7.386837 7.048204 6.755214\nground truth 7.429000 7.146000 6.739000\n\n\ntest: 2\n 0 1 2\npredict 7.340264 7.027956 6.835915\nground truth 7.363000 7.103000 6.414000\n\n\ntest: 3\n 0 1 2\npredict 7.353236 7.049346 6.858728\nground truth 7.332000 7.118000 6.381000\n\n\ntest: 4\n 0 1 2\npredict 7.422341 7.076198 6.744566\nground truth 7.395000 6.970000 6.326000\n\n\ntest: 5\n 0 1 2\npredict 7.43129 7.142857 6.960212\nground truth 7.35100 7.166000 6.750000\n\n\ntest: 6\n 0 1 2\npredict 7.429321 7.122438 6.651381\nground truth 7.396000 7.045000 6.529000\n\n\ntest: 7\n 0 1 2\npredict 6.738786 7.0713 6.66346\nground truth 7.431000 7.1500 6.76800\n\n\ntest: 8\n 0 1 2\npredict 7.421052 7.044291 6.853285\nground truth 7.431000 7.073000 6.533000\n\n\n"
]
],
[
[
"# Testing Model Accuracy - Synthetic data\n\n",
"_____no_output_____"
]
],
[
[
"predict_aug = model.predict(X_test_aug)",
"_____no_output_____"
],
[
"def print_predict_gt(predict_aug, y_test_aug):\n    # Print a prediction vs. ground-truth comparison table for one synthetic test sample.\n    # Relies on the module-level dis_index set in the calling cell.\n    predict_output = pd.DataFrame({'predict': predict_aug, 'ground truth': y_test_aug})\n    predict_output = predict_output.transpose()\n    print('test: ', dis_index + 1)\n    print(predict_output)",
"_____no_output_____"
],
[
"for dis_index in range(len(predict_aug)):\n    print_predict_gt(predict_aug[dis_index, :], y_test_aug[dis_index, :])\n",
"test: 1\n 0 1 2\npredict 7.384405 7.057750 6.500827\nground truth 7.366038 7.071987 6.522446\ntest: 2\n 0 1 2\npredict 7.38638 7.069021 6.540631\nground truth 7.44000 7.006862 6.779162\ntest: 3\n 0 1 2\npredict 7.388608 7.059159 6.518758\nground truth 7.367518 7.109335 6.486110\ntest: 4\n 0 1 2\npredict 7.384007 7.063979 6.539958\nground truth 7.393396 7.006035 6.516890\ntest: 5\n 0 1 2\npredict 7.390619 7.051356 6.472352\nground truth 7.405328 7.071277 6.422578\ntest: 6\n 0 1 2\npredict 7.381419 7.060530 6.549490\nground truth 7.375795 7.023096 6.567435\ntest: 7\n 0 1 2\npredict 7.386343 7.060539 6.472683\nground truth 7.400738 7.062025 6.364269\ntest: 8\n 0 1 2\npredict 7.388346 7.050047 6.538689\nground truth 7.390047 7.042914 6.548729\ntest: 9\n 0 1 2\npredict 7.385495 7.050274 6.673564\nground truth 7.398375 6.978933 6.748799\ntest: 10\n 0 1 2\npredict 7.388052 7.047622 6.533160\nground truth 7.385743 7.015931 6.446879\ntest: 11\n 0 1 2\npredict 7.387222 7.072223 6.459566\nground truth 7.384701 7.150000 6.453065\ntest: 12\n 0 1 2\npredict 7.386828 7.077037 6.489162\nground truth 7.378950 7.126594 6.497919\ntest: 13\n 0 1 2\npredict 7.375126 7.067279 6.605557\nground truth 7.330000 7.125268 6.627981\ntest: 14\n 0 1 2\npredict 7.382566 7.046120 6.709905\nground truth 7.346859 6.989944 6.733028\ntest: 15\n 0 1 2\npredict 7.38286 7.053723 6.741304\nground truth 7.40377 7.091446 6.780000\ntest: 16\n 0 1 2\npredict 7.383055 7.053160 6.638666\nground truth 7.381054 7.097785 6.659991\ntest: 17\n 0 1 2\npredict 7.378168 7.064490 6.5516\nground truth 7.381242 7.009982 6.3200\ntest: 18\n 0 1 2\npredict 7.391646 7.012324 6.59342\nground truth 7.387878 6.960000 6.57782\ntest: 19\n 0 1 2\npredict 7.388879 7.063576 6.480732\nground truth 7.390311 7.085053 6.481594\ntest: 20\n 0 1 2\npredict 7.391189 7.051326 6.449616\nground truth 7.383296 7.020925 6.389064\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e75d81cae84e9ffa7d97b2479b6019c950526000 | 63,074 | ipynb | Jupyter Notebook | assets/python/top_words.ipynb | emelinemeline/emelinemeline.github.io | bd78a827af4df03a0dc57b228dac7fc50bfe3384 | [
"MIT"
] | null | null | null | assets/python/top_words.ipynb | emelinemeline/emelinemeline.github.io | bd78a827af4df03a0dc57b228dac7fc50bfe3384 | [
"MIT"
] | 1 | 2022-02-26T02:31:33.000Z | 2022-02-26T02:31:33.000Z | assets/python/top_words.ipynb | emelinemeline/emelinemeline.github.io | bd78a827af4df03a0dc57b228dac7fc50bfe3384 | [
"MIT"
] | 3 | 2017-12-15T04:12:43.000Z | 2018-07-11T03:29:56.000Z | 204.122977 | 13,252 | 0.90267 | [
[
[
"I wrote a post in 2016 on text mining the bible with R. \n[That post](https://emelineliu.com/2016/01/10/bible1/) remains the most popular post on this site according to Google Analytics.\n\nI still use R occasionally at work, butttt I work more on Java and my heart will probably eternally belong to Python.\n(List comprehensions! Need I say more!)\nI figured I would revisit the problem of retrieving the most common words in a text using Python in a Jupyter notebook.\n\n\nThe notebook was converted to an html file using [nbconvert](https://github.com/jupyter/nbconvert). You can also access this notebook in the `_scripts` folder of the website's repo if you want to play around with it.\n\n---\n\nThe following code segment is just some imports:\n- re - for removing punctuation using regex\n- requests - to retrieve the text files from Project Gutenberg\n- seaborn - for plotting\n- Counter - creates a counting dictionary from a list, so the key will be unique items in the list and the value will be the number of occurrences\n- stopwords - a list of commonly occurring words in the English language, such as \"the\" and \"and\"",
"_____no_output_____"
]
],
[
[
"import re\nimport requests\nimport seaborn as sb\nimport matplotlib.pyplot as plt\nfrom collections import Counter\nfrom nltk.corpus import stopwords\n\n\n# These stopwords are commonly used words in the English language\nSTOPWORDS = {re.sub(r'[^\\w\\s]', '', word) for word in stopwords.words('english')}\n# Adding custom words\nSTOPWORDS.add(\"gutenberg\")\nSTOPWORDS.add(\"ebook\")",
"_____no_output_____"
]
],
[
[
"Project Gutenberg has quite a few books available with the option to access them as text files on [their website](http://www.gutenberg.org/wiki/Main_Page). The last time I did this text analysis, I downloaded a variety of text files. \n\nThis time, I'm using the call `requests.get(url)` to retrieve the full text file from the Project Gutenberg website in the `get_text(url)` function. This means I don't have to manually download the file and point to that location in this notebook. I can specify just the url ☺︎\n\nThen, once I retrieve the text from that url, I do some cleaning based on manually investigating the files. There is some extraneous text that occurs before and after those *** lines, so might as well cut that off. I then return the text as a list of lines.\n\nIn the `get_word_counter(url)` function, I call `get_text(url)` to retrieve the text, then do some additional cleaning on the word level. Every word is lower-cased. I also remove all punctuation.\n\nThen, once there's a list of cleaned words, I send that list into the constructor for a Counter, which will create a dictionary of counted entities from the list. The Counter class has a function `.most_common(n)` to retrieve the `n` most commonly occurring items in the list.",
"_____no_output_____"
]
],
[
[
"def get_text(url):\n    text = requests.get(url).text\n    # trim everything before this line\n    clean = text.split(\"*** START OF THIS PROJECT GUTENBERG EBOOK\")[1]\n    # trim everything after this line\n    clean = clean.split(\"*** END OF THIS PROJECT GUTENBERG EBOOK\")[0]\n    # split the text into lines and remove extraneous newlines\n    return [line for line in clean.splitlines() if line.strip()]\n\n\ndef get_word_counter(url):\n    text = get_text(url)\n    # remove punctuation and remove words that are in the STOPWORDS set\n    words = [word.lower() for line in text for word in line.split(\" \") if word]\n    words = [re.sub(r'[^\\w\\s]', '', word) for word in words if re.sub(r'[^\\w\\s]', '', word) not in STOPWORDS]\n    return Counter(words)\n\n\ntop_n = 10\ntitle = \"The King James Bible\"\nurl = \"http://www.gutenberg.org/cache/epub/10/pg10.txt\"\ncounter = get_word_counter(url)\ncounter.most_common(top_n)",
"_____no_output_____"
]
],
[
[
"Looks like \"shall\" is the most common word. I guess it makes sense that the bible would be quite prescriptive in its language...\n\nWell that's cool and all, but let's make a picture too. The snippet below uses the Seaborn plotting library to create a barplot from that counter. I see seaborn is most commonly imported as `sns` [due to a joke referencing Samuel Seaborn from the West Wing](https://stackoverflow.com/questions/41499857/seaborn-why-import-as-sns), but that's just too much for me. So I import it as `sb`.....",
"_____no_output_____"
]
],
[
[
"top_words = counter.most_common(top_n)\nword_label = [word for (word, count) in top_words]\ncount_words = [count for (word, count) in top_words]\nax = sb.barplot(x=word_label, y=count_words, palette=sb.color_palette(\"cubehelix\", 8))\nax.set_title(title)\nax.set(ylabel='Count of words')\nax.set_xticklabels(ax.get_xticklabels(), rotation=40, ha=\"right\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"What about the least common words?\n\nTo retrieve the least common words, this call gets the entire ordered list of (item, occurrence) pairs and then retrieves the `n` last terms:",
"_____no_output_____"
]
],
[
[
"n = 20\nbottom_words = counter.most_common()[:-n-1:-1]\nbottom_words",
"_____no_output_____"
]
],
[
[
"Fascinating, apparently sardonyx is a type of onyx that has layers of sard. What a straightforward name.\n\nThen I figured I might look at some other text files on Project Gutenberg. I picked Ion, Frankenstein, and Little Women for a little further investigation. I shoved those into a dictionary so I could iterate to generate multiple plots. Then might as well put the code to generate the barplot into a function:",
"_____no_output_____"
]
],
[
[
"def create_barplot(title, counter):\n word_label = [word for (word, count) in counter]\n count_words = [count for (word, count) in counter]\n ax = sb.barplot(x=word_label, y=count_words, palette=sb.color_palette(\"cubehelix\", 8))\n ax.set_title(title)\n ax.set(ylabel='Count of words')\n ax.set_xticklabels(ax.get_xticklabels(), rotation=40, ha=\"right\")\n # make plot show up\n plt.show()\n\nmore_titles = {\"Ion\": \"http://www.gutenberg.org/cache/epub/1635/pg1635.txt\",\n \"Frankenstein\": \"http://www.gutenberg.org/files/84/84-0.txt\",\n \"Little Women\": \"http://www.gutenberg.org/cache/epub/514/pg514.txt\"}\n\nfor title, url in more_titles.items():\n counter = get_word_counter(url).most_common(top_n)\n create_barplot(title, counter)",
"_____no_output_____"
]
],
[
[
"I've read Frankenstein and Little Women (this one, many times...), but never Ion. From these plots, I suppose Ion is about Ion.\n\nTotally forgot the utterly depressing narrative of Frankenstein's monster desperately longing for a father. \n\nAnd if there was any doubt as to who the main character of Little Women is, well here we are.\n\nThat's all for now. Feel free to comment below or email me if you have any questions.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
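The top-words notebook above leans on `collections.Counter` for both its most- and least-common queries; a minimal standalone sketch of that pattern (with a made-up word list standing in for the cleaned Gutenberg tokens) might look like:

```python
from collections import Counter

# Made-up tokens standing in for the cleaned Gutenberg words
words = ["shall", "shall", "shall", "unto", "unto", "lord", "sardonyx"]
counter = Counter(words)

# n most common (word, count) pairs, in descending order of count
top = counter.most_common(2)

# n least common: take the full ordered list and slice it from the back,
# exactly as the notebook does with counter.most_common()[:-n-1:-1]
n = 2
bottom = counter.most_common()[:-n - 1:-1]

print(top)     # [('shall', 3), ('unto', 2)]
print(bottom)  # two (word, 1) pairs from the tail of the ranking
```

The backwards slice works because `most_common()` with no argument returns the entire ranking, so reversing its last `n` entries yields the rarest words first.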
e75d979ff5ebcf0d0179ce76e97d19fc7df012bc | 3,005 | ipynb | Jupyter Notebook | Module_7/Module_7_kata.ipynb | CristopherA96/LaunchX_IntroPython_w1 | a45e225c8f2ab643a35f240b36339743d1df4782 | [
"MIT"
] | null | null | null | Module_7/Module_7_kata.ipynb | CristopherA96/LaunchX_IntroPython_w1 | a45e225c8f2ab643a35f240b36339743d1df4782 | [
"MIT"
] | null | null | null | Module_7/Module_7_kata.ipynb | CristopherA96/LaunchX_IntroPython_w1 | a45e225c8f2ab643a35f240b36339743d1df4782 | [
"MIT"
] | null | null | null | 29.174757 | 174 | 0.565058 | [
[
[
"## **Intro Python Course**\n### Module 7: Control structures",
"_____no_output_____"
],
[
"#### **Problem 1:** Using while loops\nYou are asked to create an application that asks a user to enter a list of planets.",
"_____no_output_____"
]
],
[
[
"# Step 1: Declare variables for the user input and the list of planets.\nnew_planet = \"\" \nplanets = [] # Empty list\n\n# Step 2: Create the while loop\nwhile (new_planet != \"done\"):\n    if new_planet: # Check that new_planet holds a value entered by the user; if it does,\n        planets.append(new_planet) # append it to the end of the planets list;\n    new_planet = input(\"Please enter a new planet\") # otherwise ask for a new planet (it can be done, so the loop body is skipped).\n# Note that the while loop only ends once the word done is typed.\nprint(\"The values entered in the list are:\", planets)",
"The values entered in the list are: ['Marte', 'Tierra', 'Venus']\n"
]
],
[
[
"#### **Problem 2:** Create a for loop for the list\nNow, you are asked to display the list of values entered by the user.\n",
"_____no_output_____"
]
],
[
[
"# Step 1: Create a for loop to traverse each element of the list and print it\n# to the screen.\nfor i_planet in planets:\n    print(\"Element\", i_planet, \"of the list is at position\", planets.index(i_planet))",
"Element Marte of the list is at position 0\nElement Tierra of the list is at position 1\nElement Venus of the list is at position 2\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
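The sentinel-controlled `while` loop in the notebook above depends on interactive `input()`; a testable sketch of the same accumulate-until-"done" pattern — with a hypothetical canned sequence of answers standing in for the user — could be:

```python
# Hypothetical canned answers standing in for interactive input()
answers = iter(["Mars", "Earth", "Venus", "done"])

planets = []
new_planet = ""
while new_planet != "done":
    if new_planet:                 # skip the initial empty string
        planets.append(new_planet)
    new_planet = next(answers)     # stands in for input("Please enter a new planet")

# Same traversal as Problem 2, using enumerate instead of list.index
for position, planet in enumerate(planets):
    print("Element", planet, "of the list is at position", position)
```

Using `enumerate` avoids the repeated `planets.index(...)` lookups, which would also misreport positions if a planet name were entered twice.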
e75dc47b4910dda2d6c522fca813b079b25b26f8 | 82,488 | ipynb | Jupyter Notebook | challenge/data_challenge (1).ipynb | elkysandor/IML.HUJI | cb576d0e2cf2890f269143fdefe8fff5f149f4b6 | [
"MIT"
] | null | null | null | challenge/data_challenge (1).ipynb | elkysandor/IML.HUJI | cb576d0e2cf2890f269143fdefe8fff5f149f4b6 | [
"MIT"
] | null | null | null | challenge/data_challenge (1).ipynb | elkysandor/IML.HUJI | cb576d0e2cf2890f269143fdefe8fff5f149f4b6 | [
"MIT"
] | null | null | null | 42.171779 | 7,053 | 0.474857 | [
[
[
"run_in_colab = False\nif 'google.colab' in str(get_ipython()):\n    run_in_colab = True\n    print('Running on CoLab')\nelse:\n    print('Running locally on Jupyter')",
"Running on CoLab\n"
],
[
"if run_in_colab:\n from google.colab import drive\n drive.mount('/content/drive')\nelse: # Set local path \n data_path = \"path/to/data_folder\"",
"Mounted at /content/drive\n"
],
[
"if run_in_colab:\n from google.colab import files\n uploaded = files.upload()",
"_____no_output_____"
],
[
"# from challenge.agoda_cancellation_estimator import AgodaCancellationEstimator\n# from IMLearn.utils import split_train_test\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.tree import DecisionTreeClassifier \nfrom sklearn.preprocessing import normalize\nfrom datetime import datetime\nimport re\nimport io\nimport plotly\nimport plotly.express as px\nimport numpy as np\nimport pandas as pd\n",
"_____no_output_____"
],
[
"COUNTRY_ALPHA2_TO_CONTINENT = {\n \"TL\": \"Asia\",\n 'AB': 'Asia',\n 'AD': 'Europe',\n 'AE': 'Asia',\n 'AF': 'Asia',\n 'AG': 'North America',\n 'AI': 'North America',\n 'AL': 'Europe',\n 'AM': 'Asia',\n 'AO': 'Africa',\n 'AR': 'South America',\n 'AS': 'Oceania',\n 'AT': 'Europe',\n 'AU': 'Oceania',\n 'AW': 'North America',\n 'AX': 'Europe',\n 'AZ': 'Asia',\n 'BA': 'Europe',\n 'BB': 'North America',\n 'BD': 'Asia',\n 'BE': 'Europe',\n 'BF': 'Africa',\n 'BG': 'Europe',\n 'BH': 'Asia',\n 'BI': 'Africa',\n 'BJ': 'Africa',\n 'BL': 'North America',\n 'BM': 'North America',\n 'BN': 'Asia',\n 'BO': 'South America',\n 'BQ': 'North America',\n 'BR': 'South America',\n 'BS': 'North America',\n 'BT': 'Asia',\n 'BV': 'Antarctica',\n 'BW': 'Africa',\n 'BY': 'Europe',\n 'BZ': 'North America',\n 'CA': 'North America',\n 'CC': 'Asia',\n 'CD': 'Africa',\n 'CF': 'Africa',\n 'CG': 'Africa',\n 'CH': 'Europe',\n 'CI': 'Africa',\n 'CK': 'Oceania',\n 'CL': 'South America',\n 'CM': 'Africa',\n 'CN': 'Asia',\n 'CO': 'South America',\n 'CR': 'North America',\n 'CU': 'North America',\n 'CV': 'Africa',\n 'CW': 'North America',\n 'CX': 'Asia',\n 'CY': 'Asia',\n 'CZ': 'Europe',\n 'DE': 'Europe',\n 'DJ': 'Africa',\n 'DK': 'Europe',\n 'DM': 'North America',\n 'DO': 'North America',\n 'DZ': 'Africa',\n 'EC': 'South America',\n 'EE': 'Europe',\n 'EG': 'Africa',\n 'ER': 'Africa',\n 'ES': 'Europe',\n 'ET': 'Africa',\n 'FI': 'Europe',\n 'FJ': 'Oceania',\n 'FK': 'South America',\n 'FM': 'Oceania',\n 'FO': 'Europe',\n 'FR': 'Europe',\n 'GA': 'Africa',\n 'GB': 'Europe',\n 'GD': 'North America',\n 'GE': 'Asia',\n 'GF': 'South America',\n 'GG': 'Europe',\n 'GH': 'Africa',\n 'GI': 'Europe',\n 'GL': 'North America',\n 'GM': 'Africa',\n 'GN': 'Africa',\n 'GP': 'North America',\n 'GQ': 'Africa',\n 'GR': 'Europe',\n 'GS': 'South America',\n 'GT': 'North America',\n 'GU': 'Oceania',\n 'GW': 'Africa',\n 'GY': 'South America',\n 'HK': 'Asia',\n 'HM': 'Antarctica',\n 'HN': 'North America',\n 'HR': 'Europe',\n 
'HT': 'North America',\n 'HU': 'Europe',\n 'ID': 'Asia',\n 'IE': 'Europe',\n 'IL': 'Asia',\n 'IM': 'Europe',\n 'IN': 'Asia',\n 'IO': 'Asia',\n 'IQ': 'Asia',\n 'IR': 'Asia',\n 'IS': 'Europe',\n 'IT': 'Europe',\n 'JE': 'Europe',\n 'JM': 'North America',\n 'JO': 'Asia',\n 'JP': 'Asia',\n 'KE': 'Africa',\n 'KG': 'Asia',\n 'KH': 'Asia',\n 'KI': 'Oceania',\n 'KM': 'Africa',\n 'KN': 'North America',\n 'KP': 'Asia',\n 'KR': 'Asia',\n 'KW': 'Asia',\n 'KY': 'North America',\n 'KZ': 'Asia',\n 'LA': 'Asia',\n 'LB': 'Asia',\n 'LC': 'North America',\n 'LI': 'Europe',\n 'LK': 'Asia',\n 'LR': 'Africa',\n 'LS': 'Africa',\n 'LT': 'Europe',\n 'LU': 'Europe',\n 'LV': 'Europe',\n 'LY': 'Africa',\n 'MA': 'Africa',\n 'MC': 'Europe',\n 'MD': 'Europe',\n 'ME': 'Europe',\n 'MF': 'North America',\n 'MG': 'Africa',\n 'MH': 'Oceania',\n 'MK': 'Europe',\n 'ML': 'Africa',\n 'MM': 'Asia',\n 'MN': 'Asia',\n 'MO': 'Asia',\n 'MP': 'Oceania',\n 'MQ': 'North America',\n 'MR': 'Africa',\n 'MS': 'North America',\n 'MT': 'Europe',\n 'MU': 'Africa',\n 'MV': 'Asia',\n 'MW': 'Africa',\n 'MX': 'North America',\n 'MY': 'Asia',\n 'MZ': 'Africa',\n 'NA': 'Africa',\n 'NC': 'Oceania',\n 'NE': 'Africa',\n 'NF': 'Oceania',\n 'NG': 'Africa',\n 'NI': 'North America',\n 'NL': 'Europe',\n 'NO': 'Europe',\n 'NP': 'Asia',\n 'NR': 'Oceania',\n 'NU': 'Oceania',\n 'NZ': 'Oceania',\n 'OM': 'Asia',\n 'OS': 'Asia',\n 'PA': 'North America',\n 'PE': 'South America',\n 'PF': 'Oceania',\n 'PG': 'Oceania',\n 'PH': 'Asia',\n 'PK': 'Asia',\n 'PL': 'Europe',\n 'PM': 'North America',\n 'PR': 'North America',\n 'PS': 'Asia',\n 'PT': 'Europe',\n 'PW': 'Oceania',\n 'PY': 'South America',\n 'QA': 'Asia',\n 'RE': 'Africa',\n 'RO': 'Europe',\n 'RS': 'Europe',\n 'RU': 'Europe',\n 'RW': 'Africa',\n 'SA': 'Asia',\n 'SB': 'Oceania',\n 'SC': 'Africa',\n 'SD': 'Africa',\n 'SE': 'Europe',\n 'SG': 'Asia',\n 'SH': 'Africa',\n 'SI': 'Europe',\n 'SJ': 'Europe',\n 'SK': 'Europe',\n 'SL': 'Africa',\n 'SM': 'Europe',\n 'SN': 'Africa',\n 'SO': 'Africa',\n 
'SR': 'South America',\n 'SS': 'Africa',\n 'ST': 'Africa',\n 'SV': 'North America',\n 'SY': 'Asia',\n 'SZ': 'Africa',\n 'TC': 'North America',\n 'TD': 'Africa',\n 'TG': 'Africa',\n 'TH': 'Asia',\n 'TJ': 'Asia',\n 'TK': 'Oceania',\n 'TM': 'Asia',\n 'TN': 'Africa',\n 'TO': 'Oceania',\n 'TP': 'Asia',\n 'TR': 'Asia',\n 'TT': 'North America',\n 'TV': 'Oceania',\n 'TW': 'Asia',\n 'TZ': 'Africa',\n 'UA': 'Europe',\n 'UG': 'Africa',\n 'US': 'North America',\n 'UY': 'South America',\n 'UZ': 'Asia',\n 'VC': 'North America',\n 'VE': 'South America',\n 'VG': 'North America',\n 'VI': 'North America',\n 'VN': 'Asia',\n 'VU': 'Oceania',\n 'WF': 'Oceania',\n 'WS': 'Oceania',\n 'XK': 'Europe',\n 'YE': 'Asia',\n 'YT': 'Africa',\n 'ZA': 'Africa',\n 'ZM': 'Africa',\n 'ZW': 'Africa',\n \"A1\": 'Unknown',\n np.nan: \"Unknown\"\n}\nhas_unique = ['charge_option', 'original_payment_type','continent',\"accommadation_type_name\"]\n\nbool_cols = ['is_user_logged_in', 'is_first_booking']\n\nnames_of_non_numeric_cols = ['hotel_country_code', 'accommadation_type_name',\n 'charge_option', 'customer_nationality',\n 'guest_nationality_country_name', 'origin_country_code', 'language',\n 'original_payment_method', 'original_payment_type',\n 'original_payment_currency', 'cancellation_policy_code']\n\ndate_time_cols = ['booking_datetime', 'checkin_date', 'checkout_date',\n 'hotel_live_date']\n",
"_____no_output_____"
],
[
"#not in use\ndef compute_z_score(df):\n    return (df-df.mean())/df.std()\n\ndef fillter_to_binary(val):\n    if val in [0,1,1.0,0.0] or np.isnan(val):\n        return True\n    return False\n\n#not in use\ndef match_to_test_dat(df):\n    df1 = df[(df[\"charge_option\"]!='Pay at Check-in')]\n    df2 = df1[~(df1.accommadation_type_name.isin(['Pay at Check-in','Chalet','Holiday Park / Caravan Park','Homestay','Inn', 'Lodge', 'Love Hotel']))]\n    return df2\n# parses the cancellation policy string into 2 numeric features\ndef prase_to_vec(lst,days):\n    vec=np.zeros(2)\n    if lst:\n        before_D = re.findall(r\"(\\d+)D\", \" \".join(lst))\n        before_N = re.findall(r\"(\\d+)N\", \" \".join(lst))\n        before_P = re.findall(r\"(\\d+)P\", \" \".join(lst))\n        # print(before_D) \n        if before_D:\n            vec[0] = (np.array(before_D).astype(int)).mean()\n        if before_N:\n            vec[1] = (np.array(before_N).astype(int)).mean()\n        if before_P:\n            vec[1]+=((np.array(before_P).astype(int)*days)/100).astype(float).mean()\n        return vec\n    return [0,0]\n# maps a country code to its continent (used in data_preprocessing)\ndef counry_code_to_continent(contry):\n    return COUNTRY_ALPHA2_TO_CONTINENT[contry]\n\ndef remove_not_showing(lst):\n    return [strr for strr in lst if \"D\" in strr]",
"_____no_output_____"
],
[
"def data_preprocessing(full_data,train : bool):\n features = []\n #convert cancellation_datetime to binary clf\n if train:\n full_data.cancellation_datetime = full_data.cancellation_datetime.fillna(0).astype(bool).astype(int)\n # remove h_booking_id \n if \"h_booking_id\" in full_data.columns:\n full_data = full_data.drop(columns=[\"h_booking_id\"],axis=1)\n # add column of continent of each country \n full_data[\"continent\"] = full_data.origin_country_code.apply(counry_code_to_continent)\n #same date features\n full_data[\"quarter_booking\"] = pd.to_datetime(full_data.booking_datetime).dt.quarter\n full_data[\"days_before_checkin\"] = (pd.to_datetime(full_data.checkin_date)-pd.to_datetime(full_data.booking_datetime)).dt.days.abs()\n #convert all date string to unix\n for date_time_col_name in date_time_cols:\n full_data[date_time_col_name] = (pd.to_datetime(full_data[date_time_col_name]).view(np.int64))/1000000000\n # convert all categorial variable to dummies \n for has_unique_col_name in has_unique:\n one_hot = pd.get_dummies(full_data[has_unique_col_name])\n features.append(one_hot.columns)\n full_data = full_data.drop(has_unique_col_name, axis=1)\n full_data = full_data.join(one_hot)\n for bool_col_name in bool_cols:\n full_data[bool_col_name] = full_data[bool_col_name].astype(int)\n # create num_of_booked_days col \n full_data['num_of_booked_days'] = full_data['checkout_date'] - full_data['checkin_date']\n full_data['num_of_booked_days'] = full_data['num_of_booked_days']/(60*60*24)\n # calc price_per_night\n full_data[\"price_per_night\"] = full_data.original_selling_amount/full_data.num_of_booked_days\n # creating 2 features payment_late_cancellation&norm_of_cancellation_policy from policy cancellation\n str_vec = full_data.cancellation_policy_code.str.split(\"_\")\n str_vec = str_vec.apply(remove_not_showing)\n df1 = pd.DataFrame(list(pd.concat([str_vec,(full_data.num_of_booked_days)],axis=1).apply(lambda x: prase_to_vec(x[0], x[1]), 
axis=1)),columns=[\"D\",\"N\"])\n df2 = pd.concat([df1[\"N\"],full_data[\"price_per_night\"]],axis=1)\n full_data[\"payment_late_cancellation\"] = df2[\"N\"]*df2[\"price_per_night\"]\n scale_df1 = df1/(365,30)\n full_data[\"norm_of_cancellation_policy\"]=np.linalg.norm(scale_df1,ord=1,axis=1)\n return full_data,features",
"_____no_output_____"
],
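The `pd.get_dummies` loop inside `data_preprocessing` above swaps each categorical column for indicator columns; a small sketch of that drop-and-join pattern on a toy frame (not the real Agoda data) might be:

```python
import pandas as pd

df = pd.DataFrame({"charge_option": ["Pay Now", "Pay Later", "Pay Now"],
                   "amount": [100, 250, 80]})

# Same pattern as the notebook: dummy-encode, drop the original, join back
one_hot = pd.get_dummies(df["charge_option"])
df = df.drop("charge_option", axis=1)
df = df.join(one_hot)

print(df.columns.tolist())  # ['amount', 'Pay Later', 'Pay Now']
```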
[
"# full_data2 = pd.read_csv(io.BytesIO(uploaded[\"test_set_week_2.csv\"]))\n# match_to_test_dat(full_data2)\n# full_data2.loc[(full_data2.accommadation_type_name=='Chalet')]",
"_____no_output_____"
],
[
"\ndef load_data(filename: str,train : bool,tree : bool):\n    \"\"\"\n    Load Agoda booking cancellation dataset\n    Parameters\n    ----------\n    filename: str\n        Path to Agoda booking cancellation dataset\n\n    Returns\n    -------\n    Design matrix and response vector in either of the following formats:\n    1) Single dataframe with last column representing the response\n    2) Tuple of pandas.DataFrame and Series\n    3) Tuple of ndarray of shape (n_samples, n_features) and ndarray of shape (n_samples,)\n    \"\"\"\n    # TODO - replace below code with any desired preprocessing\n    if run_in_colab and train:\n        full_data = pd.read_csv(io.BytesIO(uploaded[\"agoda_cancellation_train.csv\"]))\n    elif run_in_colab and not train:\n        full_data = pd.read_csv(io.BytesIO(uploaded[\"test_set_week_3.csv\"]))\n    elif not run_in_colab and train:\n        full_data = pd.read_csv(filename)\n    else: \n        full_data = pd.read_csv(filename)\n    full_data,one_hot_feature = data_preprocessing(full_data,train)\n    binary_data = full_data.select_dtypes([np.number]).columns[full_data.select_dtypes([np.number]).applymap(fillter_to_binary).all()]\n    bad_columns = ['Pay at Check-in','Chalet','Holiday Park / Caravan Park','Homestay','Inn', 'Lodge', 'Love Hotel']# because not in test set\n    one_hot_feature = [col_name for columns in one_hot_feature for col_name in columns if col_name not in bad_columns]\n    # we saw that num of rooms has high corr with num of adults so we can keep one of them.\n    \n    #original_payment_method, origin_country_code&guest_nationality_country_name, charge_option - add as features\n    \n    # looks like payment method is not so informative because it correlates with the common ones\n    # wanted_features=[\"norm_of_cancellation_policy\",\"payment_late_cancellation\",\"booking_datetime\",\"checkin_date\",\"hotel_star_rating\",\"guest_is_not_the_customer\",\"no_of_children\",\"no_of_extra_bed\",\"no_of_room\",\"original_selling_amount\",\"is_user_logged_in\",\"is_first_booking\",\n    # \"request_nonesmoke\",\"request_latecheckin\",\"request_highfloor\",\"request_twinbeds\",]+one_hot_feature\n    # wanted_features_tree = [\"norm_of_cancellation_policy\",\"payment_late_cancellation\",\"booking_datetime\",\"checkin_date\",\"hotel_star_rating\",\"guest_is_not_the_customer\",\"no_of_children\",\"no_of_extra_bed\",\"no_of_room\",\"original_selling_amount\",\"is_user_logged_in\",\"is_first_booking\",\n    # \"request_nonesmoke\",\"request_latecheckin\",\"request_highfloor\",\"request_twinbeds\",\"hotel_area_code\",\"hotel_chain_code\"]+one_hot_feature\n    small_good_features = ['norm_of_cancellation_policy', 'payment_late_cancellation',\n       'booking_datetime', 'checkin_date', 'hotel_star_rating',\n       'original_selling_amount', 'hotel_area_code', 'hotel_chain_code',\n       'Pay Later', 'Pay Now',]\n    if tree:\n        features = full_data\n    else: \n        features = full_data\n    features.fillna(0,inplace=True)\n    if train:\n        labels = full_data[\"cancellation_datetime\"]\n        return features, labels\n    return features\n\n\n",
"_____no_output_____"
],
[
"# full_data = pd.read_csv(io.BytesIO(uploaded[\"test_set_week_2.csv\"]))\n# for has_unique_col_name in has_unique:\n# print(full_data[has_unique_col_name].unique())",
"_____no_output_____"
],
[
"train_x,train_y = load_data(\"im on colab\",True,True)\ntest_x = load_data(\"im on colab\",False,True)",
"_____no_output_____"
],
[
"test_x.columns",
"_____no_output_____"
],
[
"small_good_features = ['norm_of_cancellation_policy', 'payment_late_cancellation','price_per_night','num_of_booked_days'\n,'booking_datetime', 'checkin_date', 'hotel_star_rating','Credit Card', 'Gift Card', 'Invoice',\n 'original_selling_amount', 'hotel_area_code', 'hotel_chain_code',\n 'Pay Later', 'Pay Now', 'Africa', 'Asia', 'Europe', 'North America', 'Oceania',\n 'South America', 'Hotel']",
"_____no_output_____"
],
[
"from sklearn.ensemble import GradientBoostingClassifier\nmodel = GradientBoostingClassifier(n_estimators=100,random_state=0).fit(train_x[small_good_features],train_y )\n",
"_____no_output_____"
],
[
"from sklearn.metrics import f1_score\nprint(f1_score(model.predict(test_x[small_good_features]), week2_labels, average='macro'))\n",
"0.5269518206392114\n"
],
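The macro-averaged F1 used to score the week-3 predictions above can be sanity-checked on a tiny hand-computable case; a sketch assuming only scikit-learn's `f1_score`:

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1]
y_pred = [0, 1, 1, 1]

# Class 0: precision 1, recall 1/2 -> F1 = 2/3
# Class 1: precision 2/3, recall 1 -> F1 = 4/5
macro = f1_score(y_true, y_pred, average="macro")
print(macro)  # (2/3 + 4/5) / 2, about 0.7333
```

`average='macro'` computes F1 per class and takes the unweighted mean, so a rare positive class counts as much as the majority class — useful here since cancellations are the minority.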
[
"from google.colab import files\npd.DataFrame(model.predict(test_x[small_good_features]),columns=[\"predicted_values\"]).to_csv(\"207047259_313450876_208346320.csv\",index=False)\nfiles.download(\"207047259_313450876_208346320.csv\")",
"_____no_output_____"
],
[
"# labels = pd.read_csv(io.BytesIO(uploaded[\"test_set_week_1_labels.csv\"]))\n# labels = (labels.iloc[:,0]).map(lambda x: x[-1]).astype(int)",
"_____no_output_____"
],
[
"# from sklearn.feature_selection import SelectFromModel\n# sel = SelectFromModel(RandomForestClassifier(n_estimators = 50))\n# sel.fit(train_x, train_y)",
"_____no_output_____"
],
[
"# train_x[train_x.columns[sel.get_support()]]\n# test_x.hotel_chain_code.unique().size",
"_____no_output_____"
],
[
"# train_x.columns.size",
"_____no_output_____"
],
[
"# [\"newton-cg\",\"lbfgs\", \"liblinear\", \"sag\", \"saga\"]\n# lr_clf = LogisticRegression(solver=\"lbfgs\")\n# dec_tree = DecisionTreeClassifier(min_samples_split=4)\n# tree = dec_tree.fit(train_x, train_y)\n# lr_clf.fit(train_x, train_y)\n",
"_____no_output_____"
],
[
"# forest_cl = clf = RandomForestClassifier(n_estimators=50)\n# forest = forest_cl.fit(X=train_x,y=train_y)\n# forest.score(test_x,labels)",
"_____no_output_____"
],
[
"# f1_score(forest.predict(test_x), labels, average='macro')",
"_____no_output_____"
],
[
"# forest_features = train_x.columns[forest.feature_importances_>= 0.01]\n# forest_features",
"_____no_output_____"
],
[
"# from sklearn.model_selection import train_test_split\n\n# X_train2, X_test2, y_train2, y_test2 = train_test_split(\n# train_x, train_y, test_size=0.3, random_state=42)\n",
"_____no_output_____"
],
[
"# dec_tree2 = DecisionTreeClassifier(max_depth=5, min_samples_leaf=5)\n# dec_tree2.fit(X_train2, y_train2)\n# lr_clf.fit(X_train2, y_train2)",
"_____no_output_____"
],
[
"# dec_tree2.score(X_test2,y_test2)",
"_____no_output_____"
],
[
"# lr_clf.score(X_test2,y_test2)",
"_____no_output_____"
],
[
"\n\n# param_grid = {\n# \"max_depth\": [3,5,10,15,20,None],\n# \"min_samples_split\": [2,5,7,10],\n# \"min_samples_leaf\": [1,2,5]\n# }\n\n# clf = DecisionTreeClassifier(random_state=42)\n# tree_grid_cv = GridSearchCV(clf, param_grid, scoring=\"roc_auc\",verbose=4 ,n_jobs=-1, cv=3).fit(train_x,train_y)",
"_____no_output_____"
],
[
"# ans = pd.DataFrame(grid_cv.cv_results_)\n# ans.sort_values(\"rank_test_score\").head(10)\n",
"_____no_output_____"
],
[
"# param_grid_2 = {\n# \"max_depth\": [5],\n# \"min_samples_split\": [2,5,7,10],\n# \"min_samples_leaf\": [1,2]\n# }\n# tree_grid_cv2 = GridSearchCV(clf, param_grid_2, scoring=\"roc_auc\",verbose=4 ,n_jobs=-1, cv=8).fit(train_x,train_y)",
"_____no_output_____"
],
[
"# ans_2 = pd.DataFrame(tree_grid_cv2.cv_results_)\n# ans_2.sort_values(\"rank_test_score\").head(10)",
"_____no_output_____"
],
[
"# parameters = {\n# 'penalty' : ['l1','l2'], \n# 'C' : np.logspace(-3,3,7),\n# 'solver' : [\"saga\", 'liblinear'],\n# }\n# lr_clf = LogisticRegression(random_state=42)\n# reg_grid_cv = GridSearchCV(lr_clf, parameters, scoring=\"roc_auc\",verbose=4 ,n_jobs=-1, cv=3).fit(train_x,train_y)\n",
"_____no_output_____"
],
[
"# ans2 = pd.DataFrame(grid_cv2.cv_results_)\n# ans2.sort_values(\"rank_test_score\")\n",
"_____no_output_____"
],
[
"# best_featues_logostic = grid_cv2.best_estimator_\n# best_featues_logostic",
"_____no_output_____"
],
[
"# lr_clf = LogisticRegression(penalty=\"l1\",C=1.0,solver=\"liblinear\",max_iter=1000,random_state=42)\n# lr_clf.fit(X=train_x[forest_features],y=train_y)",
"_____no_output_____"
],
[
"# lr_clf.score(X=test_x[forest_features],y=labels)",
"_____no_output_____"
],
[
"# from google.colab import files\n# pd.DataFrame(lr_clf.predict(test_x),columns=[\"predicted_values\"]).to_csv(\"207047259.csv\",index=False)\n# files.download(\"207047259.csv\")",
"_____no_output_____"
],
[
"# from sklearn.feature_selection import SelectFromModel\n# sel = SelectFromModel(LogisticRegression(penalty=\"l1\",C=1.0,solver=\"liblinear\",max_iter=100,random_state=42))\n# sel.fit(train_x, train_y)\n",
"_____no_output_____"
],
[
"# reg_features = train_x.columns[sel.get_support()]",
"_____no_output_____"
],
[
"# lr_clf = LogisticRegression(penalty=\"l1\",C=1.0,solver=\"liblinear\",max_iter=100,random_state=42)\n# lr_clf.fit(X=train_x,y=train_y)\n# f1_score(lr_clf.predict(test_x), labels, average='macro')",
"/usr/local/lib/python3.7/dist-packages/sklearn/svm/_base.py:1208: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.\n ConvergenceWarning,\n"
],
[
"# lr_clf = LogisticRegression(random_state=42)\n# parameters_2 = {\n# 'penalty' : ['l1'], \n# 'C' : [1.0,10.0,100.0],\n# 'solver' : ['liblinear'],\n# }\n# reg_grid_cv2 = GridSearchCV(lr_clf, parameters_2, scoring=\"roc_auc\",verbose=4 ,n_jobs=-1, cv=8).fit(train_x,train_y)",
"_____no_output_____"
],
[
"# small_good_features = ['norm_of_cancellation_policy', 'payment_late_cancellation',\n# 'booking_datetime', 'checkin_date', 'hotel_star_rating',\n# 'original_selling_amount', 'hotel_area_code', 'hotel_chain_code',\n# 'Pay Later', 'Pay Now', 'Africa', 'Asia', 'Europe', 'North America', 'Oceania',\n# 'South America', 'Hotel']#,'Unknown', 'Credit Card', 'Gift Card',\n #'Invoice']",
"_____no_output_____"
],
[
"# from sklearn.ensemble import GradientBoostingClassifier\n# clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,max_depth=1, random_state=0).fit(train_x[small_good_features], train_y)",
"_____no_output_____"
],
[
"# from sklearn.metrics import f1_score\n# f1_score(clf.predict(test_x[small_good_features]), labels, average='macro')",
"_____no_output_____"
],
[
"# train_x.columns",
"_____no_output_____"
],
[
"# from sklearn.model_selection import GridSearchCV\n# parameters = {\n# \"loss\":[\"deviance\"],\n# \"learning_rate\": [0.01, 0.2],\n# \"min_samples_split\": [0.1,0.5],\n# \"min_samples_leaf\":[0.1,0.5],\n# \"max_depth\":[3,5],\n# \"max_features\":[\"log2\",\"sqrt\"],\n# \"criterion\": [\"friedman_mse\", \"mae\"],\n# \"n_estimators\":[10]\n# }\n# #passing the scoring function in the GridSearchCV\n# clf = GridSearchCV(GradientBoostingClassifier(), parameters,verbose=4,scoring=\"f1_macro\",refit=False,cv=2, n_jobs=-1).fit(train_x[small_good_features], train_y)",
"Fitting 2 folds for each of 16 candidates, totalling 32 fits\n"
],
[
"# boost = pd.DataFrame(clf.cv_results_)\n# boost.sort_values(\"rank_test_score\").loc[0,\"params\"]\n",
"_____no_output_____"
],
[
"# model = GradientBoostingClassifier(n_estimators=100,random_state=0).fit(train_x[small_good_features], train_y)\n# for_sub2 = GradientBoostingClassifier(n_estimators=100,learning_rate=1.0,random_state=0).fit(train_x[small_good_features], train_y)\n# from sklearn.metrics import f1_score\n# print(f1_score(for_sub2.predict(test_x[small_good_features]), labels, average='macro'))\n# f1_score(model.predict(test_x[small_good_features]), labels, average='macro')",
"0.4285864756688434\n"
],
[
"# week2 = pd.read_csv(io.BytesIO(uploaded[\"test_set_week_2.csv\"]))\n# week2_labels = pd.read_csv(io.BytesIO(uploaded[\"test_set_labels_week_2.csv\"]))\n# week2_labels = (week2_labels.iloc[:,0]).map(lambda x: x[-1]).astype(int)",
"_____no_output_____"
],
[
"# full_data = pd.read_csv(io.BytesIO(uploaded[\"agoda_cancellation_train.csv\"]))\n# full_data.cancellation_datetime = full_data.cancellation_datetime.fillna(0).astype(bool).astype(int)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75dc808adc2bd4ff869c36ac92d362872a40e42 | 28,877 | ipynb | Jupyter Notebook | notebooks/4-ImageData.ipynb | jasonsydes/DNNWS_2022 | 8464d1120c15d8c731eb9e07c83bc4d3c00edd21 | [
"BSD-3-Clause"
] | null | null | null | notebooks/4-ImageData.ipynb | jasonsydes/DNNWS_2022 | 8464d1120c15d8c731eb9e07c83bc4d3c00edd21 | [
"BSD-3-Clause"
] | null | null | null | notebooks/4-ImageData.ipynb | jasonsydes/DNNWS_2022 | 8464d1120c15d8c731eb9e07c83bc4d3c00edd21 | [
"BSD-3-Clause"
] | 1 | 2022-02-14T22:27:36.000Z | 2022-02-14T22:27:36.000Z | 30.428872 | 272 | 0.567337 | [
[
[
"# Quick Check for Setup",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nimport os\nprint(\"If this cell fails, please check your Open OnDemand setup.\", \n \"Make sure you requested enough resources and the correct environment.\")\n\nassert(tf.__version__==\"2.6.0\")\nassert(int(os.environ[\"SLURM_MEM_PER_NODE\"]) > 8000 )\nassert(int(os.environ[\"SLURM_CPUS_ON_NODE\"]) >= 2 )\n\nprint('Your setup looks good!')\n",
"_____no_output_____"
]
],
[
[
"# Quick Programming Powerup\n\nNumPy arrays have a large number of useful ways to access data; we'll talk about two of them in this powerup!\n\n## Masking\nA handy way to select elements in NumPy is masking:\n\n* This lets you easily do things like select the X values where Y=True\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\n#Here is some fake data\nXs=np.random.uniform(size=(20))\nYs=np.random.uniform(size=(20)) > .8 #about 20% of our Ys should be True\n\nprint(Xs)\nprint(Ys)\n\n# Here we'll use a mask to grab all Xs where Y is True\nmask=(Ys==1)\n\nprint(mask)\n\nprint(Xs[mask])\n#Or, since Ys are already True/False\nprint(Xs[Ys])\n\n",
"_____no_output_____"
]
],
[
[
"## Using Lists as Indices\nA handy way to select elements in NumPy is using lists:\n\n* This lets you easily do things like select the 3 biggest elements in X ",
"_____no_output_____"
]
],
[
[
"#Some other handy tricks\n\nXs=np.random.uniform(size=(5)) #Input Data\nYs=np.random.uniform(size=(5)) >.5 #Target Data\n\nprint(\"Xs\",Xs)\nprint(\"Ys\",Ys)\n\n#Lists of indices work too\n\nprint(\"Selected Xs\",Xs[[1,2,4]])\n\n#This can be useful if you want to grab the labels for the smallest values of X\n\n#argsort gives you the indices that would sort the array\nsort_i=np.argsort(Xs)\nprint('Sorted Xs',Xs[sort_i])\nprint('Largest 3 Xs', Xs[sort_i[-3:]])\n\nbiggest_index=sort_i[-1]\nsmallest_index=sort_i[0]\n\nprint('Label for Largest X ',Ys[biggest_index])\nprint('Label for Smallest X ',Ys[smallest_index])\n\nprint('The Same X 4 Times ',Xs[[1,1,1,1]])\n",
"_____no_output_____"
],
[
"Xs=np.random.uniform(size=(100)) #Input Data\n\"Try to print the 5th, 6th, and 8th largest elements of the Xs above in two lines of code\"\n",
"_____no_output_____"
]
],
[
[
"# Images with Neural Networks",
"_____no_output_____"
],
[
"This notebook makes extensive use of examples and figures from [here](http://cs231n.github.io/convolutional-networks/), which is a great reference for further details.\n\n\n# GOALS\n\n* Understand how image data is stored and used\n* Write a multi-class classification model\n* Be able to use convolutional layers\n* Build a network for image classification\n* Understand over-fitting and some ways to deal with it",
"_____no_output_____"
],
[
"# Example: MNIST - Fashion\n\nFor this example we'll use MNIST-Fashion, a collection of small 28x28 pixel images of various pieces of clothing. It is a common benchmark along with the original MNIST, which is a collection of handwritten digits. We will load the data directly from Keras.\n\n\n\n## The Task\nThis is a multi-class classification problem: identify the type of object in the image\n\n|Label| Class |\n|------ | ------|\n| 0|T-shirt/top|\n| 1|Trouser|\n| 2| Pullover|\n| 3| Dress|\n| 4| Coat|\n| 5| Sandal|\n| 6| Shirt|\n| 7| Sneaker|\n| 8| Bag|\n| 9| Ankle boot|\n \n\n",
"_____no_output_____"
],
[
"## Image Data ",
"_____no_output_____"
],
[
"Here we'll rely on tensorflow and the handy package Keras that comes with it",
"_____no_output_____"
]
],
[
[
"import os\nimport os.path\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom random import random\nfrom sys import version\nprint(\"Import complete\") ",
"_____no_output_____"
],
[
"# Load pre-shuffled MNIST data into train and test sets\n(_xtrain, _ytrain), (X_test, Y_test) = tf.keras.datasets.fashion_mnist.load_data()\n\n#We want to include a develop set so let's split the training set\ntrain_index=[]\ndevelop_index=[]\nfor i in range(len(_xtrain)):\n if random() <0.8:\n train_index.append(i)\n else:\n develop_index.append(i)\nX_train=_xtrain[train_index]\nY_train=_ytrain[train_index]\n\nX_develop=_xtrain[develop_index]\nY_develop=_ytrain[develop_index]\n\n\nnp.set_printoptions(linewidth=115)\nn_targets=np.max(Y_test)+1\nprint('A Single Image:\\n',X_train[0])\nplt.imshow(X_train[0],cmap='gray')\nplt.show()\nprint('Example Label:', Y_train[0])",
"_____no_output_____"
]
],
[
[
"* Note above that the labels are integers from 0-9\n* Also note the images are integers from 0-255 (uint8)\n\nWe will deal with the labels first. Let's make some useful arrays and dictionaries to keep track of what each integer means ",
"_____no_output_____"
]
],
[
[
"# This is useful for making plots; it maps an integer label to a class name\nlookup_dict={\n 0 :'T-shirt/top',\n 1 :'Trouser',\n 2 :'Pullover',\n 3 :'Dress',\n 4 :'Coat',\n 5 :'Sandal',\n 6 :'Shirt',\n 7 :'Sneaker',\n 8 :'Bag',\n 9 :'Ankle boot' \n}\n\n\n#Let's make a list in the order of the labels above so [T-Shirt,Trouser,...]\nlabels=list(lookup_dict.values())\n\n#Check to make sure labels list is in the right order (not guaranteed in python < 3.6)\nif not all([v==lookup_dict[i] for i,v in enumerate(labels) ]):\n print('This looks like an old version of python making labels the long way, you are using python version', version)\n labels=['' for i in range(n_targets) ] #make a list with the right size\n for key in lookup_dict:\n labels[key]=lookup_dict[key] #Assign the dictionary values to the list\n \n#Always good to make simple checks that what you think is going to work actually is working\n#Here we check that our array of labels is in the same order as the dictionary we wrote above\nassert(all([v==lookup_dict[i] for i,v in enumerate(labels) ]))\nprint(\"Array and dictionary are in same order\") \n\n#Another simple check (Keras is well tested, so this will work, but it's good to get in the habit when using your own data)\nassert(len(X_train)==len(Y_train))\nprint(\"X_train and Y_train are the same length\") \nassert(len(X_develop)==len(Y_develop))\nprint(\"X_develop and Y_develop are the same length\") \nassert(len(X_test)==len(Y_test))\nprint(\"X_test and Y_test are the same length\") ",
"_____no_output_____"
]
],
[
[
"# Multi-Class Classification\n\n**Reminder**\n * Classification is a problem where each of our examples (x) belongs to a class (y). Since neural networks are universal function approximators, we can use one to model $P(y|x)$\n\n**Like before, to change our problem we need**\n* The correct activation on our last layer - **softmax**\n* The correct loss function - **categorical_crossentropy**\n\nWe have more than two classes (0,1,2...) and we need to predict the probability of all of them. However, we have a constraint that all the probabilities must sum to one.\n\n**Our network**\n * Inputs are our images\n * Output is a Dense layer with dimension equal to the number of classes\n * Each output represents $\\{P(y=0|x),P(y=1|x),P(y=2|x),\\ ...\\}$.\n * We require $\\sum_i P(y=i|x) = 1$.\n\n* To enforce this we use a different activation function: a **softmax**\n\n * $\\sigma(x)_i= \\frac{e^{x_i}}{\\sum_j e^{x_j}}$\n \n* Our loss function becomes\n\n $L=-\\frac{1}{N}\\sum_i \\sum_n y_{true,i,n}*ln(y_{pred,i,n})$\n\n* What this means\n * $y_{true,i,n}$ is a vector with a 1 in the dimension of the class that example belongs to and a zero everywhere else\n * i.e. Ankle boot = class 9 = (0,0,0,0,0,0,0,0,0,1)\n * The sum in this loss term, $\\sum_n y_{true,i,n}*ln(y_{pred,i,n})$,\n * is zero except for the one term where n = the class of $y_{true}$\n * Then it's just $ln(y_{pred,i,n})$\n * This is the same as binary classification: make $-ln(y_{pred,i,n})$ as small as possible\n\n\n",
"_____no_output_____"
],
[
"Our input dataset stores labels as integers, but our loss function needs them to be **one-hot** encoded\n\n**one-hot** - A vector of zeros except for a single 1 in the entry that represents the class of an object\n * i.e. Ankle boot = class 9 = (0,0,0,0,0,0,0,0,0,1)\n\nKeras has a utility to convert integers like this easily.",
"_____no_output_____"
]
],
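To make the softmax and categorical cross-entropy formulas above concrete, here is a minimal NumPy sketch (not part of the original notebook; the variable names are illustrative only):

```python
import numpy as np

def softmax(x):
    # Subtract the max before exponentiating for numerical stability, then normalize
    e = np.exp(x - np.max(x))
    return e / e.sum()

def categorical_crossentropy(y_true_one_hot, y_pred):
    # Only the predicted probability of the true class contributes to the sum
    return -np.sum(y_true_one_hot * np.log(y_pred))

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs, probs.sum())           # the outputs form a probability distribution

y_true = np.array([1.0, 0.0, 0.0])  # one-hot label: class 0
print(categorical_crossentropy(y_true, probs))
```

Because the label is one-hot, the loss collapses to $-ln$ of the probability assigned to the true class, so it is small exactly when the network is confident and correct.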
[
[
"Y_train_one_hot = tf.keras.utils.to_categorical(Y_train, 10)\nY_develop_one_hot = tf.keras.utils.to_categorical(Y_develop, 10)\nY_test_one_hot = tf.keras.utils.to_categorical(Y_test, 10)\n\nprint('Example:',Y_train[0],'=',Y_train_one_hot[0])",
"_____no_output_____"
]
],
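The `to_categorical` call above can also be sketched in plain NumPy, which makes the one-hot idea explicit (illustrative example, not from the original notebook):

```python
import numpy as np

labels = np.array([9, 0, 3])   # integer class labels, like Y_train
one_hot = np.eye(10)[labels]   # row k of the 10x10 identity matrix is the one-hot vector for class k
print(one_hot[0])              # class 9 -> [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
```

Each row has exactly one `1`, in the column matching the integer label.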
[
[
"Now let's handle the image data\n* Our convolutional neural networks need a shape of Batch x Height x Width x Channels; for us that is (batch_size x 28 x 28 x 1)\n* In this case channels=1, but a color image will have 3 (RGB) and sometimes 4 (RGBA, with a transparency channel) \n* It's much easier for a neural network to handle data in the range 0-1, rather than 0-255, so we will scale the data",
"_____no_output_____"
]
],
[
[
"\nf=plt.figure(figsize=(15,3))\nplt.imshow(np.squeeze(np.hstack(X_train[0:7])),cmap='gray') #hstack arranges the first 7 images into one long image\n\n#Reshape\nX_train = X_train.reshape(X_train.shape[0], 28, 28, 1)\nX_test = X_test.reshape(X_test.shape[0], 28, 28, 1)\nX_develop = X_develop.reshape(X_develop.shape[0], 28, 28, 1)\n\n\nprint(\"Datatype:\",X_train.dtype, \"\\nMax value:\", X_train.max())",
"_____no_output_____"
]
],
[
[
"Notice that the pixel values were imported as an integer array that saturates at `255`. Let's turn the data into floats $\\in [0, 1]$.",
"_____no_output_____"
]
],
[
[
"X_train = X_train.astype('float32')\nX_test = X_test.astype('float32')\nX_develop = X_develop.astype('float32') #don't forget the develop set\n\nif X_train.max()>1: \n X_train = X_train/255\n X_test = X_test/255\n X_develop = X_develop/255\n\nassert(np.max(X_train) <=1)\nassert(np.max(X_test) <=1)\nassert(np.max(X_develop) <=1)\nprint(\"all sets scaled to float values between\", X_train.min(), \"and\", X_train.max())\n",
"_____no_output_____"
]
],
[
[
"# The Take Away\n\n* Image data is 3 dimensional (width,height,channel (i.e color) )\n * It is often stored from 0-255 and should be normalized between 0-1\n* Class labels are given as integers and need to be converted to **one hot** vectors\n \n* Multi-classification problems \n * Use **softmax** as an output\n * Use **Categorical Cross Entropy** as a loss function\n",
"_____no_output_____"
],
[
"# Dense Network for Image Classification\n\n* We can use everything we learned in Lesson 2 for image classification\n* But we need one extra layer\n * Dense layers take 1-D data, not 3-D data\n * Convert between the two by flattening\n * tf.keras.layers.Flatten()\n \nAll this does is reshape the input data\n\n$\\begin{pmatrix}a & b \\\\c & d\\end{pmatrix} \\rightarrow (a,b,c,d)$\n\nLet's try the network below \n",
"_____no_output_____"
]
],
[
[
"input_layer=tf.keras.layers.Input( shape=X_train.shape[1:] ) # Shape here does not include the batch size \n\n## Here is our magic layer to turn image data into something a dense layer can use\nflat_input=tf.keras.layers.Flatten()(input_layer )#Dense layers take a shape of ( batch x features)\n##\nhidden_layer1=tf.keras.layers.Dense(100)(flat_input) \nhidden_layer_activation=tf.keras.layers.LeakyReLU()(hidden_layer1)\nhidden_layer2=tf.keras.layers.Dense(100)(hidden_layer_activation)\nhidden_layer_activation=tf.keras.layers.LeakyReLU()(hidden_layer2)\noutput_layer=tf.keras.layers.Dense(n_targets,activation='softmax')(hidden_layer_activation)\ndense_model=tf.keras.models.Model(input_layer,output_layer)\n\ndense_model.compile(loss='categorical_crossentropy',\n optimizer='adam',\n metrics=['accuracy'])\n\ndense_model.summary()\n\nhistory=dense_model.fit(X_train, Y_train_one_hot, \n batch_size=32, epochs=10, verbose=1,\n validation_data=(X_develop,Y_develop_one_hot)\n )\n",
"_____no_output_____"
]
],
[
[
"## Loss Curves\n\nThe Keras fit function returns a history object that we've ignored until now, but it's a very important tool.\nIt records the loss on the training and development datasets at each epoch, as well as metrics like accuracy.\nLet's plot the loss.\n\n**Most importantly**\n* Is the development loss greater than the training loss?\n * If so, your model is overfitting and will generalize poorly\n\n",
"_____no_output_____"
]
],
[
[
"#We'll do this a lot so let's put it in a function\ndef plot_history(history): \n plt.plot(history.history['loss'],label='Train')\n plt.plot(history.history['val_loss'],label='Develop')\n plt.xlabel('Epochs')\n plt.ylabel('Loss')\n plt.ylim((0,1.5*np.max(history.history['val_loss'])))\n plt.legend()\n plt.show()\nplot_history(history)",
"_____no_output_____"
]
],
[
[
"There are many techniques to deal with over-fitting and we'll talk more about them later, but the easiest way is to just stop the training earlier. You can do this with\n\n\n```keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=0, verbose=0, mode='auto', baseline=None, restore_best_weights=False)```\n\nThis is a callback, or a function that can be used to control the fitting process. It's called at the end of every epoch, or even the end of every batch. We can use these functions by adding them to the fit function with\n\n\n```\nmodel.fit(...,\n callbacks=[func1,func2])\n ```\n \n",
"_____no_output_____"
]
],
[
[
"es=tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=0, verbose=0, mode='auto')\nhistory=dense_model.fit(X_train, Y_train_one_hot, \n batch_size=32, epochs=10, verbose=1,\n validation_data=(X_develop,Y_develop_one_hot),\n callbacks=[es] \n )\n\nplot_history(history)",
"_____no_output_____"
]
],
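The core logic behind the early-stopping callback is simple: track the best validation loss seen so far and stop once it has failed to improve for more than `patience` epochs. Here is a stand-alone, simplified sketch of that idea (illustrative only; the real Keras callback does more bookkeeping):

```python
def early_stopping_point(val_losses, patience=0, min_delta=0.0):
    # Return the index of the epoch at which training would stop,
    # or the last epoch if training runs to completion.
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:   # improvement: record it and reset the patience counter
            best = loss
            wait = 0
        else:
            wait += 1
            if wait > patience:       # no improvement for too long: stop here
                return epoch
    return len(val_losses) - 1

print(early_stopping_point([1.0, 0.8, 0.7, 0.75, 0.74]))  # -> 3, the first epoch with no improvement
```

With `patience=0` the loop stops at the first epoch where the validation loss rises, which matches the behavior we just saw in the fit above.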
[
[
"Since we picked up training where we left off, the early-stopping callback quits training as soon as the development loss stops going down.",
"_____no_output_____"
],
[
"# Exercise\n\nWith that, let's practice writing our own dense-network image classifier.\nWe will use a new dataset, CIFAR-10, as an example.\n\n\nLabels: https://www.cs.toronto.edu/~kriz/cifar.html\n",
"_____no_output_____"
]
],
[
[
"# Load CIFAR data into train and test sets\n(_cfxtrain, _cfytrain), (cfX_test, cfY_test) = tf.keras.datasets.cifar10.load_data()\n\n#Split into Train and Develop\n\ntrain_index=[]\ndevelop_index=[]\nfor i in range(len(_cfxtrain)):\n if random() <0.8:\n train_index.append(i)\n else:\n develop_index.append(i)\ncfX_train=_cfxtrain[train_index]\ncfY_train=_cfytrain[train_index]\n\ncfX_develop=_cfxtrain[develop_index]\ncfY_develop=_cfytrain[develop_index]\n\nf=plt.figure(figsize=(15,3))\nplt.imshow(np.hstack(cfX_train[0:7])) #hstack arranges the first 7 images into one long image\n\n",
"_____no_output_____"
]
],
[
[
"# Step 1 Scale your data to be between 0 and 1",
"_____no_output_____"
]
],
[
[
"\"Your code here normalize cfX_train/test/develop\"",
"_____no_output_____"
],
[
"for data_set in [cfX_train,cfX_develop,cfX_test]:\n assert np.max(data_set)==1., 'Max of your data set is '+str(np.max(data_set))+' not 1'\n assert np.min(data_set)==0., 'Min of your data set is '+str(np.min(data_set))+' not 0'\n\nprint('Great job! Your dataset is normalized correctly')",
"_____no_output_____"
]
],
[
[
"# Step 2 Create One-Hot encoded labels\nName them:\n* cfY_train_one_hot\n* cfY_develop_one_hot\n* cfY_test_one_hot\n",
"_____no_output_____"
]
],
[
[
"\"Your code here\"",
"_____no_output_____"
],
[
"assert 'cfY_train_one_hot' in locals(), 'cfY_train_one_hot not found' \nassert 'cfY_develop_one_hot' in locals(), 'cfY_develop_one_hot not found' \nassert 'cfY_test_one_hot' in locals(), 'cfY_test_one_hot not found' \n\nassert (cfY_train_one_hot).shape[1]==10, 'cfY_train_one_hot not the correct size' \nassert (cfY_develop_one_hot).shape[1]==10, 'cfY_develop_one_hot not the correct size' \nassert (cfY_test_one_hot).shape[1]==10, 'cfY_test_one_hot not the correct size'\nprint(\"One-Hot encoded labels created, correct size\")",
"_____no_output_____"
]
],
[
[
"# Step 3 Create a Dense Neural Network\nWrite your own dense image classifier.\n\nRemember, you'll need: \n* an input layer\n* a flatten layer\n* some dense layers with activations\n* an output layer with a softmax activation\n\nCreate and compile a model named **cifar_model**\n* Make sure the loss is categorical_crossentropy\n",
"_____no_output_____"
]
],
[
[
"\"your code here\"",
"_____no_output_____"
],
[
"assert 'cifar_model' in locals(), \"Could not find cifar_model\"\nassert cifar_model.input_shape ==(None,32,32,3), \"Check your input shape is correct\"\nassert cifar_model.output_shape[1] ==10, \"Check your output shape is correct\"\nassert cifar_model._is_compiled, \"Make sure to compile your model\"\nassert cifar_model.loss=='categorical_crossentropy', \"Check your loss to make sure it's correct\"\nassert (np.abs(np.sum(cifar_model.predict(cfX_train[0:10]),axis=1)-1) < 1e-5).all(), \"Outputs don't sum to 1; make sure you have the correct activation\"\n\nprint('Fantastic Job! It looks like your model is ready to fit.')",
"_____no_output_____"
]
],
[
[
"## Step 4: Fit your Model",
"_____no_output_____"
]
],
[
[
"\"your code here\"",
"_____no_output_____"
]
],
[
[
"## Step 5: Plot your loss curves",
"_____no_output_____"
]
],
[
[
"\"your code here\"",
"_____no_output_____"
]
],
[
[
"# Save Your Model",
"_____no_output_____"
]
],
[
[
"cifar_model.save(\"my_cifar_model\")",
"_____no_output_____"
]
],
[
[
"# Load Your Model",
"_____no_output_____"
]
],
[
[
"loaded_model=tf.keras.models.load_model(\"my_cifar_model\")",
"_____no_output_____"
]
],
[
[
"# Use your Model\n\nWe'll try a quick example with a photo. You can use mine or upload your own to Talapas.\n\n* We need to open our photos and resize/reshape them for our model\n* We need to rescale them to match the training data",
"_____no_output_____"
]
],
[
[
"import PIL.Image #import the Image submodule explicitly; 'import PIL' alone may not expose PIL.Image",
"_____no_output_____"
],
[
"dog_image=PIL.Image.open('/gpfs/projects/bgmp/shared/2019_ML_workshop/datasets/Test_Photos/dog.jpg')\ndog_array=np.asarray(dog_image)\nprint(dog_array.shape)\nplt.imshow(dog_array)",
"_____no_output_____"
]
],
[
[
"# Crop and Resize",
"_____no_output_____"
]
],
[
[
"length,width=dog_image.size\nmin_length=min([length,width])\n\n#Crop a box with coordinates (left,top,right,bottom) 0,0 is the upper left corner\nnew_image=dog_image.crop((0,0,min_length,min_length))\nprint(\"Cropped Size\",new_image.size)\nnew_image=new_image.resize((32,32))\nprint(\"Resized Image\",new_image)\n\nnew_array=np.asarray(new_image)\nplt.imshow(new_array)\n",
"_____no_output_____"
]
],
[
[
"# Put it together into a function",
"_____no_output_____"
]
],
[
[
"def process_image(input_file):\n image=PIL.Image.open(input_file)\n length,width=image.size #use the image passed in, not the global dog_image\n min_length=min([length,width])\n\n #Crop a box with coordinates (left,top,right,bottom) 0,0 is the upper left corner\n new_image=image.crop((0,0,min_length,min_length))\n new_image=new_image.resize((32,32))\n new_array=np.asarray(new_image)\n new_array=new_array/255\n new_array=np.expand_dims(new_array,axis=0)\n return new_array\n ",
"_____no_output_____"
],
[
"my_images=[\n'/gpfs/projects/bgmp/shared/2019_ML_workshop/datasets/Test_Photos/dog.jpg',\n '/gpfs/projects/bgmp/shared/2019_ML_workshop/datasets/Test_Photos/bird.jpg',\n '/gpfs/projects/bgmp/shared/2019_ML_workshop/datasets/Test_Photos/unknown.jpg'\n]\n\ndata=[process_image(f) for f in my_images]\nprint(\"First Image Shape\",data[0].shape)\ndata=np.concatenate(data,axis=0)\nprint(\"Data Set Shape\",data.shape)\n\nplt.imshow(np.hstack(data))\nprint(\"Dataset max,min:\",np.max(data),np.min(data))",
"_____no_output_____"
],
[
"photo_pred=cifar_model.predict(data)\n",
"_____no_output_____"
],
[
"cifar_category=['airplane',\n 'automobile', \n 'bird',\n 'cat', \n 'deer', \n 'dog',\n 'frog', \n 'horse', \n 'ship', \n 'truck']\n\nfor i,pred in enumerate(photo_pred):\n sort_list=np.argsort(pred)\n first=sort_list[-1]\n second=sort_list[-2]\n print(\"Image\",i,\" predicted as:\",cifar_category[first],round(pred[first]*100),\"%\",\" Second place:\",cifar_category[second],round(pred[second]*100),\"%\" )\n _train_data=cfX_train[cfY_train[:,0]==first][0:5]\n plt.imshow(np.hstack(_train_data))\n plt.show()\n",
"_____no_output_____"
]
],
[
[
"# How were your results?\n\nMine were somewhat disappointing; this is one reason among many why convolutional neural networks, which we will see next time, are almost always used in image analysis tasks.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e75dd5fa7635730392d1929d8dde318718bc6694 | 146,271 | ipynb | Jupyter Notebook | borreliaplots_MM1.ipynb | lausted/cl_borrelia_plasmid_plot | fdf501701847c0177c47b923b75eb93c872a993c | [
"Apache-2.0"
] | null | null | null | borreliaplots_MM1.ipynb | lausted/cl_borrelia_plasmid_plot | fdf501701847c0177c47b923b75eb93c872a993c | [
"Apache-2.0"
] | null | null | null | borreliaplots_MM1.ipynb | lausted/cl_borrelia_plasmid_plot | fdf501701847c0177c47b923b75eb93c872a993c | [
"Apache-2.0"
] | null | null | null | 205.725738 | 16,766 | 0.870514 | [
[
[
"<a href=\"https://colab.research.google.com/github/lausted/cl_borrelia_plasmid_plot/blob/main/borreliaplots_MM1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"## Borrelia MM1 Plasmid Plots ##\n\n*Chris Lausted and Chenkai Luo*\n\n*7 Jan 2018*\n\nWe can upload a file called `replicons.fna` before running this notebook. Or, in this case, let's download the plasmids of strain MM1 and put them into one file. \n\n* <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5511922/>\n* <https://www.ncbi.nlm.nih.gov/assembly/GCF_002151505.1/>",
"_____no_output_____"
]
],
[
[
"%%bash\n## Simple function to download a FASTA sequence from NCBI\n## <https://www.ncbi.nlm.nih.gov/genome/738?genome_assembly_id=393461> \ndlncbi () {\n ## Download with wget and rename. \n wget -q -O temp.fna \"https://www.ncbi.nlm.nih.gov/search/api/sequence/$1?report=fasta\"\n ## Rename the plasmid to start with something like lp or cp. \n sed -ri \"s/^>/>$2 /g\" temp.fna\n ## Concatenate all fasta files into one. \n cat temp.fna >> replicons.fna\n}\n\n## Download MM1 accessory genome plasmids.\nrm replicons.fna\n#dlncbi CP031412.1 chr\ndlncbi CP031398.1 cp26 \ndlncbi CP031408.1 cp32-1\ndlncbi CP031410.1 cp32-4\ndlncbi CP031411.1 cp32-5\ndlncbi CP031405.1 cp32-6\ndlncbi CP031407.1 cp32-7\ndlncbi CP031406.1 cp32-9\ndlncbi CP031399.1 cp9 \ndlncbi CP031400.1 lp17\ndlncbi CP031401.1 lp25\ndlncbi CP031402.1 lp28-3\ndlncbi CP031403.1 lp28-4\ndlncbi CP031409.1 lp28-8\ndlncbi CP031404.1 lp36\ndlncbi CP031397.1 lp54\n\n## Preview new file\ncat replicons.fna | grep \">\" ",
">cp26 CP031398.1 Borreliella burgdorferi strain MM1 plasmid plsm_cp26, complete sequence\r\n>cp32-1 CP031408.1 Borreliella burgdorferi strain MM1 plasmid plsm_cp32-1, complete sequence\r\n>cp32-4 CP031410.1 Borreliella burgdorferi strain MM1 plasmid plsm_cp32-4, complete sequence\r\n>cp32-5 CP031411.1 Borreliella burgdorferi strain MM1 plasmid plsm_cp32-5, complete sequence\r\n>cp32-6 CP031405.1 Borreliella burgdorferi strain MM1 plasmid plsm_cp32-6, complete sequence\r\n>cp32-7 CP031407.1 Borreliella burgdorferi strain MM1 plasmid plsm_cp32-7, complete sequence\r\n>cp32-9 CP031406.1 Borreliella burgdorferi strain MM1 plasmid plsm_cp32-9, complete sequence\r\n>cp9 CP031399.1 Borreliella burgdorferi strain MM1 plasmid plsm_cp9, complete sequence\r\n>lp17 CP031400.1 Borreliella burgdorferi strain MM1 plasmid plsm_lp17, complete sequence\r\n>lp25 CP031401.1 Borreliella burgdorferi strain MM1 plasmid plsm_lp25, complete sequence\r\n>lp28-3 CP031402.1 Borreliella burgdorferi strain MM1 plasmid plsm_lp28-3, complete sequence\r\n>lp28-4 CP031403.1 Borreliella burgdorferi strain MM1 plasmid plsm_lp28-4, complete sequence\r\n>lp28-8 CP031409.1 Borreliella burgdorferi strain MM1 plasmid plsm_lp28-8, complete sequence\r\n>lp36 CP031404.1 Borreliella burgdorferi strain MM1 plasmid plsm_lp36, complete sequence\r\n>lp54 CP031397.1 Borreliella burgdorferi strain MM1 plasmid plsm_lp54, complete sequence\r\n"
],
[
"%%bash\n## Match the replicons in the FASTA file to ParA proteins for identification.\n\n## Create a reference amino acid FASTA file of Borrelia ParA proteins.\n## Use Borrelia B31 lp54 A20, A21, A19 and lp21 U04 and cp26 B10. \n## from <https://www.ncbi.nlm.nih.gov/nuccore/AE000790.2>\n## and <https://www.ncbi.nlm.nih.gov/nuccore/AE001582.2>\n## and <https://www.ncbi.nlm.nih.gov/nuccore/AE000792.1> \ncat << 'EOF' > pfam.faa\n>32\nMDKKETKVITIASIKGGVGKSTTSLIFATLLSIKCKVLLIDIDTQASTTSYFFNKIKDNNIDLINRNIYEVLISNLHIDNALITINKNLDLIPSYLTLHKFNSESIPYKEFKLKEQLKLLSNHYDYIILDTNPSLDFTLTNALVCSNYIIIPITAEKWAVESLDLFTFFMNKLLLTLPMYLINTKFKRNNTHRELLKVLEKNNNFLGTISEREDLNKRIAKNDRFDLTKDYIIEYQNTLTAFLNKSSYVH\n>49 \nMEIKINKRNLSESVREEEQALVHYNKLKEKLNINFQKEIYCKIEAMKVLKEIKDKEYYKLDNYSSFDDFAKDYRLARTQTYKYLKIATAIEEGLIEEKYVVKNGINDTICLLKTKESPSLKKSNQNPIKPLRFQLKKEEAYSFYKKNPKLTSFLLEKIFFKEKDFLLKIIKEFETLRNKRK\n>50\nMTALLERLKQKQKELKLDADNKPKAKKGKKATVFSKIEEVKGRKIYHTKIFNDFYTFGISKNEPTKFFISLRGIFNIEDISMFHLFSVREDDEFMGIYYGIRKLDKAFIVKNFNKKETYTLRKCEYIEFRFKKGSVFCYLNGLHILLKKDRVNSPYYNTLLNIILELETELYAFYSKKLSKGGIIPEWIKKRQK\n>57\nMLISFVKESLPMHKIKTTNNNPRNCYNKVQYKLIVLISTICYLNKTHKKYTQKTILYYFNENLRKNGQTISTLRTMQKYIYRLQKEIKVTTNYYQHMGVNSGTEIYYKLNYPKKDCYHKINQHFKEKKETRFQNRVTNYFNKNSDSKMGSVQCESCNSNKNNIKEERKINEIEKYQVINYFNKCNFSCKEILSILLNLNVDKDTMIKIIKTIKRTDIKAKNKNIYFPKSCSKEKQEKLKKILCNTQKELEKSGYNSEQLETNFQKIYENYKYKPHFIIENHKYSDLSYIKRKLEKSIERKKENSKQDYKNLRTNIFNILIEQLKKETNIEILKPIIKEYLNNQKKIEYKKVFRIYYSELLEIINGKHYSNLKKFRKKSVG\n>62\nMYQIKTNKMPFNKVVDRRLKIFWVIQKLSANYFISKKKYSLSNVVAMTNSILEKKGFKRVTKRTIQNDIKIFETLGLIKSHFNPLGKNNGSFTYYTINKALEKLAKKIISTAYFIDKKTKHEKSKNKQLKKIKIIEESQKYKISHQITSHVLSNNISKKYKNSKYSFRRKNQNIKKTINFLEKEIKKKSKSINLEEIKKITENDITYKNSLWNLKDFMEELKEYEEKKIIKFYKKNLEKKKQKIWFMAKKFKNTDFDKLIKKFKIKNKMEREKNYENENQIHTSNNIKNAIVLMKTLIKKQKYDKKIKK\nEOF\n\n## Other proteins of interest:\n## C6: MKKDDQIAAAIALRGMAKDGKFAVK\n## OspC: 
MKKNTLSAILMTLFLFISCNNSGKDGNASANSADESVKGPNLAEISKKITESNAVVLAVKEVEALLASIDEIGSKAIGKRIQANGLQDLQGQNGSLLAGAYAISNLITQKINVLNGLKNSEELKEKINEAKGCSEKFTKKLSESHADIGIQAATDANAKDAILKTNPTKTKGAEELDKLFKAVENLSKAAKEMLANSVKELTSPVVAESPKKP\n\n## Create an example fasta file with one replicons.\ncat << 'EOF' > example.fna\n>lp17_B31\nGACTCAAAACTTTACCCTTTAAATTGCTAACTTTAACTT ATTAGTCTTTTCATGTAATTTAAGTAATTCTGATCAAAACAAATATTCAATTAACCATTATTACTAACCAAAATGATAATA TTAAATAGAAAAACAAG TTGACGGAAATATTATTATTAATGGGATGACTAAAGAAAGTGGCACAGAAACTAAAAAGC TTTTAGAAATTCCAAATGGGAATATTTCTCGACTTAAAGATGCAATTCAATATGGAGGAA GTTTTAGGGCTAAAGATGTTAGAGAAAATCAAACCCAAAAAGAAAACAACAAAGACTCGC ATATTCATGTCGACAATTTTAAAGAATACATACATTTAATCATGCCTAGCATTAACAATA ATGCTGATAGTAGTAGTAGTTATTACTATACCAACTACATAATAAATGGAGACAATTTGT TAAGAATTATTAGCAACTTATAAAAAATCTTTATAAATTACCAATATTCTTGAAAATTTT AATACTATTTTTTTTATATACTATAATATTATGAAAAAAAATCAAAAAAACAAGTGCTCA GAAATAGAAAAAACACAATTAGAAATAATAAATAACCAATCAGAAATAGAAAAACAACTC CATCAATTAGAAATTGAGTTTACTGGGGTATGCCTGCTTTATGTGACAATACATTATTAA ATCTAGAATTGAATAATTATTCTCAAAAAAACTATTAAAATTTTACAACGAAATTCTTAA AAAAGATAATAAAAATTCTTGCGATCTACCAACAATGAATAAATATCTTGATATATTAGA AAAAACAAAAACCATAGTAAAACTATCTTTTAAAAACCAGTCCAAATATATGATTTATTA TAAAATTAATCCCCCTTAAAGTGTTTCGTTCAACAATACAAGACTACTATCAAACAATAG CAGATAAACTAAAACTACGGTTAGAACTAAACTATCCTACTACTATTTAATCGTAAAAAA TATTTCTTTGCAAATTAAGCAATTTAGAAATATAAATGTAAAGACATATATTTTTATTTG ATAAATAATAAAAATTACTGGGGCACTATTTGGAAAAATTTTTAAAAGAAATATTAAGTA TGAATAGCAAAAATAGGCTATCTTCACACTTAATAATTCTTATTTACACACTAAACAACA TTGACCTAAATTCAAAAAATATTGGATACTATAGTAGGGGCTTTATACGCCGTGCGTTTA CTTTTAACATAGATAGATATTGCAATACTAGTAAAGATATTGAAATAGACATAGACTTAT TAATAAAGTATCTCGATTTTTTAGAAAACAACCTAAAAATTATAACTAATAAATATAAAG TAGAAAAAAATATATTCAAACTTTACTACATAATCAATTATCCTTTAAAAATATGTTACA CAAAAATTATGAACTACTATAAATAGACTATATAATGATATTAAAAAGAGAAACATCTTT AGTATATTACTAAAGGTGTTTCTCCCCTTAATCTAAAGTTGTTTTAAGGTGTATAATGGG GGTGATACCATATTTTAAATTATATATCCCAAATTAATTAAAAAATCAGGTATTGCAAAT GTATTATAGTGTCTCATAGGCCTAATAAAGAACAATTAAAACTAAAAAATATATAAATAA 
AACGCAAATTAGAAAAAGAAATAACCGTCATAGTCAAACTTTATTTTAAGAAAAATCCTA AATCTATAATTTATTATAAAGTTAATTGCTCCTTAGAAAGAGTTTTATTAAAAATAAAAG ACTACTACGTATTATTCTATGAAGAATTAAAACAATTTTTACAAAAAATCACTACTACTT AATTATAAATACATTATAAAATAAGCTTATGCAAAACTTTAGAAATATATTGTTTTACGC TAAAAAAATTTAAAAAAATACTGTGCTATATTTATAATATAAATTTAATATAATAGGGGG CTAATTCATTATGGATGGAGTAATTAACAATACATTGGCAAGAATAACAAAGCAAATTTA ATTTGCTAAGAATAAGTTAATCATTCTTGTCAAAACACTAGATCATATGAATAAAAAATT ATTCCATAGTGCAAATAAAAATTATGCTTATTCCTTAATAAGAAGCAAGTTTAATAAGGC TCTAGCTAAAACTAATCAACATGAAGTTGATTCTAAAACCCTGTTAGAATATCTTGAAAT ATTAGAAAAAATCCAAAAGTAATCTTCAAATGTTCCACAAATAAAGAAAATGAAAGCTTT AGAGGCCTTTATACACTCCTTTACCCTGTAGAAGGTTGTTGCACTAAAATTTATAATTCT CATCCTAATATGTAAGCTAATATAATCCAGAAAATTATTTTTGCGAAATAGCATAATACT TTAGAGTATTAAAGGCCTAATAAAGAACAATTAAAGCTAAAAATATATACAATTACAAAA CTAATCCTCCATTTTATGATTAAAAATTAATAATATAACGCTCTAAACAAAATAAAACGT AAATTAGGGAAATTAGTATAAATGCGATAAGGGCTTTAAGTAACTTATCTTCCTTAAGCC ACTTAAAGCCCTTTATCTGGTTATCATCCATTTGGAGCACATAATGCTTCCTAATTAATT GTATATTAAATAGAAAATAAAGTCTAGATATATTAATCTAGTAATATTCTAAATTATAAC CAATATAAATATAAATTGATAGTTCTTATTTCTACAAGCTATTTAAATACCAAAAATCTA AAATATACCCCAAAAACCATATTATATTATTTCAATGAAAATCTATAAAAAACGGGCAAA CTATTTCTACACTAAGAACTATGCAAAAGTATATTTATAGACTACAAAAAGAAATAAAAG TCACAACAAATTACTATCAACATATGGGGGTAAATTCGGGTACAGAAATTTATTATAAGC TTAATTATCCTAAAAAAGATTGTTACCATAAGATAAACCAACACTTTAAAGAAAAAAAAG AAAAAAGATTTCAAAATAGAGTTGCCAACTATTTTAATAAAAATTCTGATACAAAAATGG GTAGTGTACAATTGGAGAATTGTAATAATAATATAAAAGAAGAAAGAAAAATTAACGAAA TAGAAAAGTATCAAGTAATAAAGCATTTCAATAAATGTGACTTTTTATGTAAAGAAATTA TTTCAATTTTATTAACATTAAATATTGATAAAGAAAATATGATTAAAATAATAAAAATCC TAAAAATAACTGAAATTAAATCAAAAAATAAAAATATACGCTTTACTAAATCTTGTATTG CTTAAATATTATTTGGGAAAATTTTTATCTACAAAAACAACAGTCTCAGAAATGATGCAG CAACTCTTATTAGACTATCAAAATAATATAAATAACATACAAACAGATGAAAATGCACTT AAATCTCATACAGAAGATATTTGCAATCAAGTCTCAGAAAAAAGAAAAGAAGCAGAAAAA CTAAAAAATGACATATATTCAATATATAGTAGCCTTTAAATTGCATATTGAAATTGTTAT AATACAAAGCCTGTCTTTAATAAAAGATAGGTTTAACATTCTTACTAAAGATTACAAAAA 
ATTTTTAACCTATAATTTACATAAACTTTAATTCTAATAATAAGTAAACTGATGTTTCTA TAGATCTTTAATTTAATAATATCATTAGTAATAGATTAATATTAGTAATGATATATTCTC TAAAAGAAGTATATTTAAAGGTAATTAATTTTACATTAATTCCCTTTAATAATTTTATTT TTAATAATTGGCGATTACAAACAAAAATTAAAATTGCATAGCAAATTCTTTATGTAAAAG AACCCTTAATAAATCCAATTATGACAATGATTACATTCATTTATTATTAGAATTTGCCTC CAATATTTAACTTTCTAAATTCATCAATAACCCAAAGATAGTAATCTTAAAGATTTAATA AAAAGAAAACCCCTTAAAGATAAGAGGTTTATTTAAAAGAATGTATGGTATGAAGTAAAT ATTGATAATAAATTCTTTTAATTTTAAAAAAGGAGAATATTTAATACTCAATTATTAGTT CAATAATTGAGTATTACCATGTATTTTTAAAATTTTTTTTTAAATTAGATTTTTGCAGAG AAGATCAAATATTATTCAAAGCTAGAATAATGGCTTAAGTAATGCTCATATAAATAAATA AAATATTTTTTCTTATAAATCTTTAAGATTACTATCTTTGGGTTATTGATGAATTTAGAA AGTTAAATATTGGAGGCAAATTCTAATAATAAATGAATGTAATCATTGTCATAATTGGAT TTATTAAGGGTTCTTTTCTACACAGAACAGAAAATAAAGTATTATTTGGATAAAAAAAGA TGAAAGCTTAGAATTTACACATTTATGCCTATATTTATGGCTAATTATAAAAACATAGCA ATAGAAAACTATAAATTAAAGCTATACAAAAAGGAATGTTTAGAAAGAGTATTAATGATT TAGAATTGCATGAAATTGTAAGACAATTATTGTATAGCTATCCAGAATAAAGGAAAAAGC TGTTTTTATAATCCTTGATTTATAAATAGTAGATTATCATCTAACACAAATCTCTGCAAA ATATTGAAAGACATATTCTCTAAAAAATCAACAAAAACTACTTTTATAACTAAAATTTTT ATAAATTTCAGACTTAATCTATAAAAAAGGTTTACTAAAATTATTCAAGACTGCTATCAT ATAATAAGTTGTATAATGAAAAATAAAACTTTTATATTCTTAATATTTTTGATTAAAAAT TTGGCAATATATGCTCAAAATATAAACTATGAATTTAAACAAGCTAAAATTAAAAATCTA AAAGGGATATTTATTAATTATAAAGTCTATCTAGCAGAAAATTTAGCAATTAACAATGTA AAAACCTTAAACCATATTTCTACATTCAAAATCAATCTTGTTATTGACAAAAAAATTGCA ACTTCTATTAAAAATGAGCAAGATGTAATTAGAGCCGGTAATGAATGCGGAATCTTTTTA GAATTTCAAATCAATAACCGCATATACTATACCAAATTTTCATCAATAAAATATATTTTA CAAGCAATTGAGAGTTTTGCTAAAATTAAAAATACAATCAATAATTTAGAAATTAAAAAT CTTGAAGGAAATGGAATTTTTTTATACAAAAATGGTCACTCATACAATTTAAAAACAGAT TTTCAAGAAACAGCAATATTTGTAAATTTTCTTGGCTTTAAAGATAATACTGGAAGACCT TACTTTATATTTTATTACGACAATATAGATGATAAAGATAAAAATTTAAAAACAATCTTA ATTTCCTTTGAAGAATTCCATAATGGAATAAGGGAAGGGCTATTCTTGCTGAGAAATGAA AAAGCGATACTAGAATTTATTAATTTTTCAAAAGATTAGATATTAAAAACATTTTAAAGT CGATATTATAATAAACTAATATCAAAAACCTTGTTTATACTAAACACCCTTAATCTATAT 
CCTTTTTAACAAATAATTACTTAACTTGGGCTTAAAAATTATTTAAATTTAAGTCAACTG CCACCAAGTAATTTAACTTTTGAATTAAAATCATCCAAAAACTTTTTAAAAGGAATCTTA TTAAAACTTACAATAACATCATTTCTGAACGCATAGATTCTAAATGCTGCTATATTATCA TTTCCCAAGGCAACAAATCTTATAAATTCGCTTTCTGAAATAATAAATTTATAAATTTCT ACCTTAATATTTTTTTCAGTAATCACTGTTGTGTAATCATAAGGAAAATATGAATCATAA AAATCGTTATAATAAACAACACGTTTGCTATCCTCTGAAACCAACTCTCCTTTAGCCTCA AATCTAGTATCTGTTATTATCTTTTCGGGCTTTATTTGACTAGAATTTCTAGATTCAACT CTAACTAAAATAAAATATTCCCTTTTATTTTTCAATTTATCATATTTAATAGAAATTACA GACTTAAATATTCCATCTCTAGAATAATTTACCCCAATATCATATTGTGAATCTAATACT TGAGAAAACCTATCATAAGCTATTTGATAAGTTTGACAAGAAACTAAAAAAAACGCCAAA ATAAAATACTTTAAAATAAATTTTGAATTCATTAAAACCCCCCTTAGTTTATCTTATATT TATTTTAAAAAAATTTCTCAAAAATTAGAAAATTAAATAATATAATTAATTCAAATAAAA ACATTCTAATTTTAGAGCTTGAATTAAAATTAAAAAACTATTATTATTAGGTGTTGTTCT AACAAAAACCAATTTAAAGATTAGATTATATCTAATCCCAATTTATATAAATTTCAAAGT GCATCTATAGGAGAAATCGTGTATACTGACCCAAGGTCAATTATTAATACTTACTTTGTA AAAAACAAAAAAATCTTCATTATCAACCCAATAGCTTTTAAATTTGAACTTAATTTTGAA AAAATTATTCAAACAAATTCAAAAGAGAATATAGCACTTAAAACATTTAAAGGCGGAATT ATTAGTCTACAATTAAGGTCAAATGAACTTTCTAAATTACCACAAGACATTCTAAAGGGA AAACTTGAATTTTATATTAACTATGTAAGTGAAGAAAAACTAAAAATAGTTTATGACATG ATGGTAGTCAAAGTTTACACAATTGACTTAAAAGCTATCAATAAAGATGAAATTTACTTA ATCGAACTTAAAATACTTGGATCTATTTATAGAAAAGAAAATATAGAAAATGCGTTTATT CCTATTATAAAAAATAACAATACTTACCTTTTTGAAAACAAAGCAAACAACCAAAAAGTT AATTTACTATTAAAAGGCATCGATAAGACTATTGAACTTCCGTTTTTAACAAAAATCAGC TATTCAAATATTAAAACTCTTAATACCGCAGATATAAAAACTAATGAAAATTTAAATACG AACACTAAAACAACGAATAGAATGTTGCTAAATCTAAGCACAAAAATCTACGAAGAACTA GTTTTAACAAATTCAAATCAAATAAAAACTATCAACAATAAACAAAAAATTCTAAAAACA TTAAGATCTTTTATTGAGAATGAGAATGGGTTGGGAGGTAAAATAAAGTTAATCTTCTTA GAAAGAACAAATTTTTTGATTAAAAATTTAAACTTAAATGACTTAGACTTTATATTAAAT GACATCAATATTATACAAGAAAATAGCTGTATCAGAATCGATATCACCTTATTAGAAGAT AAAATTGAAAAAAAATACTTAAACCAAGCTTCAAATGTAACCCCATTTCTTAAAAATATC ACATTATTGTAATTTTTTTTTGGAAAGACAAATTACTGCAATCTGCCCAAGCTACATAAT CTAATAAGCTCTTGACAAACACCCTAACAAAAGTATTCAACATATAACCTAACATCCTTG 
AAACTAACATGCAATATATAATAAAATGAAATTATGAGGTATTTATGTTAAGTATTAACG GTGTAAGACTTTATTCTTTAAAAGAATTCCAAAATATATTAGAAGATTCTTATAATCTTT CAATTAGCAAAAATACTATATCTAAAAAATCAAAAATTTTAAAATGCGCAATCCATGTAG ACAACCGTCCTTACCTTTTAGAAGATTTTTGCACATATTTTCTAATGGACTTTAGAAGAC CCAAACCTATTACAAATAGCATGAAAGAAACAATTCAATTAAGAATTGAACAAGCAAAAA AAATAATGAGCCGAAAATCTACAAAACACGAAATACAAATTGCTTCAAAAAAGATGGGTC ATATTTTATAATCTATCATATTGACATTTTAAAAACATTGCTATAGAATATAGTAAAAAG CTAATTATTTCGTTTGTGATGCAGTAAGAATTTAGAATGCAAGCCAAATAATTGAAAAAG CTTCTGAAACCACAACAAATTTAACTTCAGAAGCCAGGATAAGATAATGATAATAAAAAT AAAAAATAATGTCAATACAAATTTTAATAATCTCATAACATTAGAAGAAATTATAAAGTA CAATCAAAAAAACGCAAGCTCTAATTTAATAGAATTAAAACGCTCAAGGCTAAAATCATA TTTAACTAAAAAAAGAGCCATATACCAACGAATACTCAAAGTATGCTGGGCAATTGACCT TAAAAACAAACAATACTATAAATCTAACAAACTTAAAACATATTCCACAATAGAAATACA TAATATAGTTAATAAATGCCTTGCAAAAGATAATAAAAAAATATCAATCAGGACCTTAGA ATATGATATATCATTTTTAAATCAAATACTCTTAATAAAAACCAAACTAAAACATTTAGG CAAAGATAACGGAAGCTTTGCATTTTATATACAAAACAAAAATCTTTGGAAACACCGCTT TATAATTATTCAGGAAGCAATTAATAAAGAAATAAAAGAATATTTAAAAGATAAAAAAAT AGTATCTGATTTTTTCAAGGAAATCAACAATACTATAAACAAGAATAATATAAGAAATAT AAAACCTAAAAGCTCAATTGCAGATGAATCAATTGCGGATGTTATACCTAAAGGTATAAA AGGTATAAATAAGATAGAGAATTCTATAGAAAAAAATAATGGAAAAATAAATAAAATTTC TTACAAAGAATATATTGCAAACAAGTTAGTAGAAGTTCACAAAATAGAAAAAATGCAAAT AACAAAAATACTTAAAATAAGCAACAATGAAAAAACCTATATAAATGCATTAAGAAACTT AAAGTTGGCAATAGAAAAATATAAGGAAGAATATAAAATTGAAGACATTTCAAATCATTT TATAAAAGAGTTTAAAAATAAGTATAGTAAAAAAATATGGATGATGAATGGAAAAACTGA CAGAACAAATGACTTTTATGAAATTTGGGAAAAAAGATTTAAAAAAACGTTTTTAAATAA AAATTTAAAAAAACAATATAGAAGTAATTATGAAAAAGAAAATAAAAAGATTATTAATAA CGAAAAAAGAGTAAGTATTATTTTTTCTAACAGTAAAGGTTTTAAAAGAATCAGCAAAAT TAAAATTAATCAAAATTAATTGCTATTTACATATATAAAAAGTATTATTATATATAGGCC CTATTCTAAGAATTTTTTTACTTTAGGAACATTTAATATATTTTCTTACAGTTCATATAT ATATATGCATTTCTATTAAAAAGACACTGAAAAACTCATATTTACCAACTAAACTAAAAT TTTTTAAAATCAGATTGTAAACAAAGCATGCTTTTATTTTTCAAAAAAAATCAATCATTT TTTTTGCAAGATATATGAGTTCTATCATAGTAATTTGAAACATTAGATACTCTGGAAGAA 
TAAAAATGATTTTTATTATTATTCTCATCCAGTTTAATTTGAGAATTTTTAGCCTCAGAC TCTTTTTTGCTTAAACATTGATAATTCAAATTATCCATATATTTCTTTTTAAAATGAGAA GCCCCATCTATCTTGTTTCGATCGTTATTTATTGATAAATCACAACCCAATATTAATAAA AAAGTTAATATTATAAATAATAATGAAATAACAAATTTTCTATTCATAAATTTCTTTCCC CTAAATTTGATTATTAAAATTAACCTTACAAAAACCATAAAGCACAAGTAAACACATTTT AAAAATTTAAAGTTAAGCATTATTGTAGTATATACAGATAATTTGTTTTTCAATATAATA AAATTAGTCCAAATTTATATCCATTTTTAGCATTTTAGCCATTAGAATTATATTCTAAAT TAACCATTTTTTAAAGCAGAATTCGAACAAAATTATATATTCTTATAAAGAAATATAAAC GAATAAGAAATGCAAAAATATTATTTCATAAACCCTAAATAATGCATTAAATTATTATCA TTCTTTTAAATAAAAATTGAAAATTAATTATTGATATTCTAAACCCAAAAGAAATTTTCA AAAGTATTTAGATCTGTAAGATTTTTGTATAGCAAAAGAAAAGTATTGGTGGTTTATGGT GGTATAGAGAGTCGCAAGATAATTATTATATAAATTAAGTTGTTATGGATTTTATTTACA TAAAGCAGATAGATGTTTCCTATTAAGTAAACTATGTAGTAATTATGGTATTAAAAATAC AACTCTAAAATTAAGAGAAACCAGTTGGCCTTTTGGCAGTTGTAGTTCTAAGTGATAGAA ATCTAAATTCAAATCCAAATCTTAAGTCTTATTTATAAAGAAATAAAAGCTAAGACAGGA GCTGCCTTAAGTAAGGCTAGATCATGTATTGGTAAATACTGTTCTTTTAAATAGTTTAGA AAGCTGCCATGAAATAAATCGAGGATTTGTTTCTATAATCTTTGATTTAGAAATAGTAAT TTAAATCTTAAAGCATTGTTAAAATTTTATATTCACAATGCAAATTTTTAGTTAATTTAT GTTTTTTAGATTTTCATTTTTCTTATTTGCTATAAACTTATTAATTTAGGCACCAAATAT TTTAGTTTTTATTTTTTTAAAAAAGAAGTCCCTACCCCCGATCTTATTTGGAGAGGATTA ATAGGGGTGTTGGGGACTATTGGTAGATCGTTTTCTACCTTACATATATAATATTTTATT TTTTTAATTTTGTAAATATTATTTAATAATAATAAATAAAATTAAACGCTTTTAAGTAAA ATGTTAAATTTTGGTTTTTTCCCCATGATTTCCTTTTAATTTTTTAGTTTTGCTATTTGA TTTAGTAGCCTGTTGTTGTGATTGATCTTCAATTTTATTGATGGCTTGTAAAAGCCTTAT TTGGCCGTTATTAAGGTTTTTTTTTCTATGTTTGAGTTTTAAAGGATTTAATTTTTGATT CATTATTTGAATTTCTTTAATTTCTTCTGCAGTTGGTTCATAATTTATCCATTTTTTATC TATCCAGCCTTTTTCCCATTCATAATAGCTTTCATTAAAGTTGGTTTTGTTTTGTTCGTC GAAGTGCCTTGCTAGTGCGGCAATTATTTTTGTTCCATATTTAAGCATTTTTTCTGTGTC TGTTTTGGTAATAAATGTGTCATAATCGTAATTTTCACCGGTTTGCATATTAACAATATC AACTATAAGCTTGAAAGTATTTATGCAATTCATTGCAATATTTGTGGATGAAAAATTAGG ATCACTAAGGGCTTTAATATTAAGTTCATATTTTTTCTTTAAACTTATGGGAATTTTAAA AAAGTCAATTATAAAGCCATTTATTTTTGCTCGATAATTAGCTTCATATTGAGAATTTCC 
AAAGAAAGTTCCTTTTATGCCCCCAATTGTATATTCATTATAGTTTATTGTTATTTCTTT TTGCATTTTTATCCTTTAAAAATATATTAATATAATATAATACTATGTATTAATTGATAT TATATATCAATTAATACATAGTATAAATAAAAAAAATATTTTTAATTTTGTTTTTTTTCT AAACTGTTGTAGATTTTAACTTATCTTTAACCTTTAACCATTTATAAAAAGTCCTTAAGA GAGAAATTTCTTAGGACTTTTTATAAATTTAATAAAGAATAATGAAGAATTGCCCGCCTA AATTTCAAAATAAATTTGGGCGGGGGGTAGGTTTGTAAAATAGATTTATTTTTAGCTCTG AATATACTTTTTATGATATCGGTAGAGGCTTCTAATATAGATAAGACAATAGCTTAGGTG CCGTTTTTAGATTATTAATTAATTTAGAAGGTTAAATATTGTAAGTAAATTCTGGTGATA AATGAATTTTATTTTTATCATAATTTAAATTTTTTTAATAAAAGTTCTTTTTTCTATAAA GAACATAAAATAAGTATTATTTGGCAAAGAGAAGGTTGGCATTTATAAGTACAGAATATA TTTTAACACCAACCAAAAGAAATACTTTTTAAAAGTATTTAGATGTGTAGGATTTTTATA TAATACAGTGTTAAAGCGGCAAGATAGATTTTTATTAAAAAAAATAAGCAAAATCTTATT ACTTGTTCAAATAAATATAAATAAGAGCTTCTATTGCTAAAAAAAGTTTATAGTTTGGCC TTTTGTATGCATGGATTGACTTAAGCTCTATATATAAATTCTTTTAGAGAAATTTAAAGA GGGATAGAACACAAGGATTTCTTAAACATAAGAGCAAGAAAAATAAGCAAGCTTTTAAAG CTAATAATAAAAAAACAACGAAAATAAACTTAAAGAATACTAAAAAAATTATCAAAAAAA CAAAAAACTTCTATTAATGGAGCTAAATCTATATTAAGAATTACTAAGTTACATGAGAAA ATTGTAAATCAAAAAACCTTTTTACACAAGGTATTTTTTGCTTTATAGTTATTTATAAAA ATATAGCAATAGAGAACTTATTAATTAAAGATATGTGAATGTTTGGGTAAAGTATTAATA ATTTAGGATGTTATAAACTTGTAAGACGATTATAGTATAAATCAGAATGGTATGGATCCT TTTTGTATAAAGTAGGTATATATTTTTCATCAAGCAAGCTATTTGTAATTGTGGTGTTAA AAATACAATTCTAAAATTAAGTGGTGCTAGGTAAACTTACAGCGTTTTGTATAATAAAGA TATAAATGCAATTCTAAATCTTAAGTTTTATTGCTATAAAGAAATAAAAGCTAAGACAGG AATTTTCCGAATTTAGGCTTGTTAATCATGCTATTGGTGTGAAAATAAGATATAAAAAGA ACTAAAAAGTTATCGTAGAATGAGACAAAAGCCGTGAATATAGAGATAGTAGTTTGCTAG CTTTTAAGGCCACATCAGATCTAATTCATTAAAAAATTTTGTGAAAGGGGCTTCATAGGT AGAGATTAAAGTAATTAATATGTGATATAAAATAATTAATATGTGATATAAAATAATTAA TATGTGATATAAAATAATTAATATGTGATATAAAATAATTAATATGTGATATAAAATAAT TAATATGTGATATAAAATAATTAATATGTGATATAAAATAATTAATATGTGATATAAAAT AATTAAAAGGAAGTTTGTATGAAAAAAATAGCATTTCATATTCAAAAAGGTGGTGTTGGG AAAACTACCTTAAGCGGAAATATTGCAAGTTATTTATCTAAAACAAAAAAAGTTATATTA GTTGATTGTGATATACAGCAAGCAAGTTCTTCTACATGGTTTCTTAATCATGAAATTCTT 
AAGCTGGATATTAAAGATTTTCTTTTGAAGAAGATGGATGTAGATCAAGTAGTAAGACAA ATACAAAAAAATTTTTATATTTTGCCATGTGTGCCGAGTGGAACTTTTAGAAGAGATGTG CAACATGAATTGCAAGATTTTCCATATTTGATAGATGATTTTTGTTTGGAATTGGAGAAA TTGGGATTTGAATTTGCAATTTTTGATTTATCTCCCAGTTTTGAGCTTTGGGAGCGAAGA ATTATTCTTGCAATGTGTGAAGTTATTACTCCACTGACCCCAGAATTTTTAAGCCTTGAA GGAATTAATATTTTTAAAGAAGAGTTTGAGTCTTTGTTAAAATCTTATAGAAAAAAGGTT AAACATGAGAAGATTATTTGCAATATGCTTAATAAAAGTTTTAAAAGACATAATTTGCAT CTAAGGCAATTTAAAACTTTTGGATATGATCTTTATGAGGTTGGACAAGATGCTAAAATA GCAGAATCTCAGCTATATAAGAAGTCGATTTTTGATTATTATCCGGAGAGTAGATCTGTC TTAGAACTTTCAAGATTGGGAGATGCTTTATGCCTATAAGCAAAGAGGTTAATTTAGAAG AAATTATTAAACAAAATGTTAGCAAATCTAAATTTGCTTTTCGTGATAATGATGTTATTA GCAATAACATTCCAAATGTTAAGCCTATTAGATCAAAAATGACGATCAAGCAAATAAGTG TAGGACTGAAATATGAATATTGGATTGAATTTTATTCTATTTTAGACAAAAAGGGATTAA CAGCTTCTGGTTTTATAAGAACTTTAATTATGGAATATATAGAAGCTTCTAAAAAATAAT AATTTTGTTATTTAAATTTTTAACAAAAAAATAATAAAGAATTATCTACCTATGATTTCT GTGAAATTTAGGTAGATATGAATTTGTTAGTAAATGGATTTATTTTGATTCCGAATATGC TTTTTGATCATATCGGAAAAACTTTCTCTCTAGTATAAATAAGTCAGTAGTTTTTAGATT AAAAATAAGATTTTCAGCAATGCATAAATAATTAGAATATTTTTTCTTATAAGCTTTGAT GAAGATATTGTATTAAATTATTGATGAATTTAGAAGATTGAATATTGGCAATGTGATCTT TATTGCAATTTAATTCATTAATGGTTATTTTCCATAAGGAATATAAAAGTATTATTTTGT GAGACAGGGCATAAGCTCATCATTTGTGTCTATGGTTAGTAACTAGTACTCGGGGGGGGG GGATAATTAACTAAATATATGCAGTGATTATTAAAAGCTATCTCATTTCTATATACACTA AGTAATTATTATAAAATAATAGAGTATATAAGTGCCGATAAAGTTTGTAAAAACAGAATA TATTCTAACACTAATCAAAAAAATTTTTAAAAGTATTTAGATTTGTAAGATTTTTGTATG ACAAAATGTTAAGTGATAGGAAAGATTCTTATAAAAAAACAAGAAAAGTTTTAGCATTAA TCTAGGCAAATATAAAAATAAATTTTCACTCTTAAAGGAGATTAATACTTTAACTCTTTT TTGTGCATGGATTGATTAAAGCCTCGCAGATAATAATTTTTTAGATAAATTAAAAAGGGA AGCAAAACGTAAGGATTTCCTAAATATAAAATGAAAAACCCAATAAGAATAGAAAATGAA TATAAAGCTTATCTAAAATAGGATTTGTAAAGTTATGTCTACATAGGATCATTATAATAA TAAAGTTATTAAAAATTTAGTAGTAGAAAAAATACCGATAATAAATATTATATTTCAGTA GTAGTTGAGAGCTTAAATATTAAAAATAACAATAAAAAAAATAATAAAAAAGAGTCGTGA AAGTAAAAAAACAATCATTCTAAATATTTATAAAAAATAAAAGTAAACTTAAAAATACTG 
GAAGAAAACTATTAAAAAGACAAAGGGCGCTATTAATAGAGCTAAATCTAGATTAAGGGT TGCTAATTTGTATAAGAAAAGTTTAAATTAAATTCAAAGAAAAGACTTTTTATGTAAATT GTCTTATTACTTTTTAGTCAATTATAAAAACATAATGATAGGGAATTTAATCAATTAATT TTATACAAAAAGGAATGTTTTAGTAAAGTTTCAATATGTATGATAGAGATTTAAATGCAA TTTTAAATCTTAAGACTTTCTTTTTATAAAGAAATAAAAAACCAAGGTAAGAACTTCCTA AAGTAAGGTTTGGGGGATTATGTTTTGGCGGATGGCCGGTCTTAATAAAAAATAATTTAA AAAGTTATCATATGAGATGAAGCAATAAGTTATTTTAATGATCTTTGATTTAGAAATAGT AGTTAGCTTGTTGGATACATTTGGCTTAATTACAATCAATACAATAAGTTAAATCATTGT ATTGATTGTAATTAAAGCTATTGATATTATAATAAATATATAAAACTTATTATTAATAAT TAATAGATAAAAAATTAATTTTAACATTTCTATTAATAATAAAAAAATATCAAAACATTT TTTACAAAATCTGATTAATTTTGTGAAAAATGTTTTTAGTTGCAAGTGAAAATATATTAC ACACATAGTCAAGCATGAATACACGATTATTGCTGTCATAAAAATTTTAATTCTCCTTCA ATTTTAGTCTATCTCAATTTTTAAAAAATTCCATTTTTGTTTATCGATAGTTCTGTAGCA TTTATATAAATGCATGTTATAATTTTAATATTTGAGAGTGTTTTTAAATGCCTTTTTAAC TATGTTATTTTTTTGAAAAATTTCCACACAAAAAATTTTTTTGTTTCCTCTTTTTTACTT ATCTTGTTACATTTAATGGGCATTCTTTTTTTCTCTTCTAGGCTTTGAATTAAAAATAAT AAGTATATTATCTATATAATCTTTATTTAATGTTTTAAATAAATTATATTGATCCAGATT TAAGACATAGTTAGGATTTGTTTTTTTAAAATTTTAAAATAATAAGATTTGCTAGGCAAT ATTATTGGATTTTTATATATTTGATTGCAAGCTCTTCAAACGTTATGGCATTTTGACCAA TTTTAATAGATTGCACTTGAGAAAATATTTGTTAGGGTTGTTACTTTTTAAAAGTTTGGT GTTATAAAATATTTTCAAATTTTTTCTTTTGGGGGGATGTAAATTAATACTTTTTCCCTC ATACTTTTTGAAAGTTTTTATATTTTTTGATGGTTTATAGGAAATAATTTTCCCTGTTTT TCTAATCCAAGTAATTTTATATAAACGGCATTTATATATTTTTCTTTTGTATATTTCGTA GTATATAGCTATTATAAGTAA\n>lp17_B31_dup\nGACTCAAAACTTTACCCGTTAAATTGCTAACTTTAACTTCAAAATACTCAACTTTAACCC GAAATGATAAAACTTTGTTTCGCCGTACGATTAGA ATAATCGTTGATCAGGTTTATTGATTATCAATAAACCTGATCTATAATATTATAAGCGGT TTTTGCAAGTTTAATAGGAGCTATAATATCCATGAACAAATTATTGATATTCATTATTTT ATTAGTCTTTTCATGTAATTTAAGTAATTCTGATCAAAATAATCCACTAAACATGTCAAA ACAAATATTCAATTTTCAA ACGAAATTCAAGCGTTAAA AAAATAGAAAAACAAG TTGACGGAAATATTATTATTAATGGGATGACTAAAGAAAGTGGCACAGAAACGATGTTAGAGAAAATCAAACCCAAAAAGAAAACAACAAAGACTCGC ATGCTGATAGTAGTAGTAGTTATTACTATACCAACTACATAATAAATGGAGACAATTTGT TAAGAATTATTAGCAACTTATAAAAAATCTTTATAAATTACCAATATTCTTGAAAATTTT 
AATACTATTTTTTTTATATACTATAATATTATGAAAAAAAATCAAAAAAACAAGTGCTCA GAAATAGAAAAAACACAATTAGAAATAATAAATAACCAATCAGAAATAGAAAAACAACTC CATCAATTAGAAATTGAGTTTACTGGGGTATGCCTGCTTTATGTGACAATACATTATTAA ATCTAGAATTGAATAATTATTCTCAAAAAAACTATTAAAATTTTACAACGAAATTCTTAA AAAAGATAATAAAAATTCTTGCGATCTACCAACAATGAATAAATATCTTGATATATTAGA AAAAACAAAAACCATAGTAAAACTATCTTTTAAAAACCAGTCCAAATATATGATTTATTA TAAAATTAATCCCCCTTAAAGTGTTTCGTTCAACAATACAAGACTACTATCAAACAATAG CAGATAAACTAAAACTACGGTTAGAACTAAACTATCCTACTACTATTTAATCGTAAAAAA TATTTCTTTGCAAATTAAGCAATTTAGAAATATAAATGTAAAGACATATATTTTTATTTG ATAAATAATAAAAATTACTGGGGCACTATTTGGAAAAATTTTTAAAAGAAATATTAAGTA TGAATAGCAAAAATAGGCTATCTTCACACTTAATAATTCTTATTTACACACTAAACAACA TTGACCTAAATTCAAAAAATATTGGATACTATAGTAGGGGCTTTATACGCCGTGCGTTTA CTTTTAACATAGATAGATATTGCAATACTAGTAAAGATATTGAAATAGACATAGACTTAT TAATAAAGTATCTCGATTTTTTAGAAAACAACCTAAAAATTATAACTAATAAATATAAAG TAGAAAAAAATATATTCAAACTTTACTACATAATCAATTATCCTTTAAAAATATGTTACA CAAAAATTATGAACTACTATAAATAGACTATATAATGATATTAAAAAGAGAAACATCTTT AGTATATTACTAAAGGTGTTTCTCCCCTTAATCTAAAGTTGTTTTAAGGTGTATAATGGG GGTGATACCATATTTTAAATTATATATCCCAAATTAATTAAAAAATCAGGTATTGCAAAT GTATTATAGTGTCTCATAGGCCTAATAAAGAACAATTAAAACTAAAAAATATATAAATAA AACGCAAATTAGAAAAAGAAATAACCGTCATAGTCAAACTTTATTTTAAGAAAAATCCTA AATCTATAATTTATTATAAAGTTAATTGCTCCTTAGAAAGAGTTTTATTAAAAATAAAAG ACTACTACGTATTATTCTATGAAGAATTAAAACAATTTTTACAAAAAATCACTACTACTT AATTATAAATACATTATAAAATAAGCTTATGCAAAACTTTAGAAATATATTGTTTTACGC TAAAAAAATTTAAAAAAATACTGTGCTATATTTATAATATAAATTTAATATAATAGGGGG CTAATTCATTATGGATGGAGTAATTAACAATACATTGGCAAGAATAACAAAGCAAATTTA ATTTGCTAAGAATAAGTTAATCATTCTTGTCAAAACACTAGATCATATGAATAAAAAATT ATTCCATAGTGCAAATAAAAATTATGCTTATTCCTTAATAAGAAGCAAGTTTAATAAGGC TCTAGCTAAAACTAATCAACATGAAGTTGATTCTAAAACCCTGTTAGAATATCTTGAAAT ATTAGAAAAAATCCAAAAGTAATCTTCAAATGTTCCACAAATAAAGAAAATGAAAGCTTT AGAGGCCTTTATACACTCCTTTACCCTGTAGAAGGTTGTTGCACTAAAATTTATAATTCT CATCCTAATATGTAAGCTAATATAATCCAGAAAATTATTTTTGCGAAATAGCATAATACT TTAGAGTATTAAAGGCCTAATAAAGAACAATTAAAGCTAAAAATATATACAATTACAAAA 
CTAATCCTCCATTTTATGATTAAAAATTAATAATATAACGCTCTAAACAAAATAAAACGT AAATTAGGGAAATTAGTATAAATGCGATAAGGGCTTTAAGTAACTTATCTTCCTTAAGCC ACTTAAAGCCCTTTATCTGGTTATCATCCATTTGGAGCACATAATGCTTCCTAATTAATT GTATATTAAATAGAAAATAAAGTCTAGATATATTAATCTAGTAATATTCTAAATTATAAC CAATATAAATATAAATTGATAGTTCTTATTTCTACAAGCTATTTAAATACCAAAAATCTA AAATATACCCCAAAAACCATATTATATTATTTCAATGAAAATCTATAAAAAACGGGCAAA CTATTTCTACACTAAGAACTATGCAAAAGTATATTTATAGACTACAAAAAGAAATAAAAG TCACAACAAATTACTATCAACATATGGGGGTAAATTCGGGTACAGAAATTTATTATAAGC TTAATTATCCTAAAAAAGATTGTTACCATAAGATAAACCAACACTTTAAAGAAAAAAAAG AAAAAAGATTTCAAAATAGAGTTGCCAACTAT GTAGTGTACAATTG TAGA TTTCAATTTTATTA TA CTTAAAT CA A CTAA AATAC AT T T TT AA C AAAAGAAA ATTG CA AAGATCAAATATTATTCAAAGCTAGAATAATGGCTTAAGTAATGCTCATATAAATAAATA AAATATTTTTTCTTATAAATCTTTAAGATTACTATCTTTGGGTTATTGATGAATTTAGAA AGTTAAATATTGGAGGCAAATTCTAATAATAAATGAATGTAATCATTGTCATAATTGGAT TTATTAAGGGTTCTTTTCTACACAGAACAGAAAATAAAGTATTATTTGGATAAAAAAAGA TGAAAGCTTAGAATTTACACATTTATGCCTATATTTATGGCTAATTATAAAAACATAGCA ATAGAAAACTATAAATTAAAGCTATACAAAAAGGAATGTTTAGAAAGAGTATTAATGATT TAGAATTGCATGAAATTGTAAGACAATTATTGTATAGCTATCCAGAATAAAGGAAAAAGC TGTTTTTATAATCCTTGATTTATAAATAGT ATATTGAAAGACATATTC ATAAATTTCA ATAATAAGTTGTAT TTGGCAATATAT AAAGGGAT AAAACCT ACTTCTATTA GA CAAGCAATTGAGAGTTTTGCTAAAATTAAAAATACAATCAATAATTTAGAAATTAAAAAT CTTGAAGGAAATGGAATTTTTTTATACAAAAATGGTCACTCATACAATTTAAAAACAGAT TTTCAAGAAACAGCAATATTTGTAAATTTTCTTGGCTTTAAAGATAATACTGGAAGACCT TACTTTATATTTTATTACGACAATATAGATGATAAAGATAAAAATTTAAAAACAATCTTA ATTTCCTTTGAAGAATTCCATAATGGAATAAGGGAAGGGCTATTCTTGCTGAGAAATGAA AAAGCGATACTAGAATTTATTAATTTTTCAAAAGATTAGATATTAAAAACATTTTAAAGT CGATATTATAATAAACTAATATCAAAAACCTTGTTTATACTAAACACCCTTAATCTATAT CCTTTTTAACAAATAATTACTTAACTTGGGCTTAAAAATTATTTAAATTTAAGTCAACTG CCACCAAGTAATTTAACTTTTGAATTAAAATCATCCAAAAACTTTTTAAAAGGAATCTTA TTAAAACTTACAATAACATCATTTCTGAACGCATAGATTCTAAATGCTGCTATATTATCA TTTCCCAAGGCAACAAATCTTATAAATTCGCTTTCTGAAATAATAAATTTATAAATTTCT ACCTTAATATTTTTTTCAGTAATCACTGTTGTGTAATCATAAGGAAAATATGAATCATAA 
AAATCGTTATAATAAACAACACGTTTGCTATCCTCTGAAACCAACTCTCCTTTAGCCTCA AATCTAGTATCTGTTATTATCTTTTCGGGCTTTATTTGACTAGAATTTCTAGATTCAACT CTAACTAAAATAAAATATTCCCTTTTATTTTTCAATTTATCATATTTAATAGAAATTACA GACTTAAATATTCCATCTCTAGAATAATTTACCCCAATATCATATTGTGAATCTAATACT TGAGAAAACCTATCATAAGCTATTTGATAAGTTTGACAAGAAACTAAAAAAAACGCCAAA ATAAAATACTTTAAAATAAATTTTGAATTCATTAAAACCCCCCTTAGTTTATCTTATATT TATTTTAAAAAAATTTCTCAAAAATTAGAAAATTAAATAATATAATTAATTCAAATAAAA ACATTCTAATTTTAGAGCTTGAATTAAAATTAAAAAACTATTATTATTAGGTGTTGTTCT AACAAAAACCAATTTAAAGATTAGATTATATCTAATCCCAATTTATATAAATTTCAAAGT GCATCTATA AAACTTGAATTTTATATTAACTATG CCTATTAT GTTTTAACAAATTCAAATCAAATAAAAACTATCAACAATAAACAAAAAATTCTAAAAACA TTAAGATCTTTTATTGAGAATGAGAATGGGTTGGGAGGTAAAATAAAGTTAATCTTCTTA GAAAGAACAAAGAAAATAGCTGTATCAGAATCGATATCACCTTATTAGAAGAT AAAATTGAAAAAAAATACTTAAACTAACCCCATTTCTTAAAAATATC ACATTATTGTAATTTTTTTTTGGAAAGACAAATTACTGCAATCTGCCCAAGCTACATAAT CTAATAAGCTCTTGACAAACACCCTAACAAAAGTATTCAACATATAACCTAACATCCTTG AAACTAACATGCAATATATAATAAAATGAAATTATGAGGTATTTATGTTAAGTATTAACG GTGTAAGACTTTATTCTTTAAAAGAATTCCAAAATATATTAGAAGATTCTTATAATCTTT CAATTAGCAAAAATACTATATCTAAAAAATCAAAAATTTTAAAATGCGCAATCCATGTAG ACAACCGTCCTTACCTTTTAGAAGATTTTTGCACATATTTTCTAATGGACTTTAGAAGAC CCAAACCTATTACAAATAGCATGAAAGAAACAATTCAATTAAGAATTGAACAAGCAAAAA AAATAATGAGCCGAAAATCTACAAAACACGAAATACAAATTGCTTCAAAAAAGATGGGTC ATATTTTATAATCTATCATATTGACATTTTAAAAACATTGCTATAGAATATAGTAAAAAG CTAATTATTTCGTTTGTGATGCAGTAAGAATTTAGAATGCAAGCCAAATAATTGAAAAAG CTTCTGAAACCACAACAAATTTAACTTCAGAAGCCAGGATAAGATAATGATAATAAAAAT AAAAAATAATGTCAATACAAATTTTAATAATCTCATAACATTAGAAGAAATTATAAAGTA CAATCAAAAAAACGCAAGCTCTAATTTAATAGAATTAAAACGCTCAAGGCTAAAATCATA TTTAACTAAAAAAAGAGCCATATACCAACGAATACTCAAAGTATGCTGGGCAATTGACCT TAAAAACAAACAATACTATAAATCTAACAAACTTAAAACATATTCCACAATAGAAATACA TAATATAGTTAATAAATGCCTTGCAAAAGATAATAAAAAAATATCAATCAGGACCTTAGA ATATGATATATCATTTTTAAATCAAATACTCTTAATAAAAACCAAACTAAAACATTTAGG CAAAGATAACGGAAGCTTTGCATTTTATATACAAAACAAAAATCTTTGGAAACACCGCTT TATAATTATTCAGGAAGCAATTAATAAAGAAATAAAAGAATATTTAAAAGATAAAAAAAT 
AGTATCTGATTTTTTCAAGGAAATCAACAATACTATAAACAAGAATAATATAAGAAATAT AAAACCTAAAAGCTCAATTGCAGATGAATCAATTGCGGATGTTATACCTAAAGGTATAAA AGGTATAAATAAGATAGAGAATTCTATAGAAAAAAATAATGGAAAAATAAATAAAATTTC TTACAAAGAATATATTGCAAACAAGTTAGTAGAAGTTCACAAAATAGAAAAAATGCAAAT AACAAAAATACTTAAAATAAGCAACAATGAAAAAACCTATATAAATGCATTAAGAAACTT AAAGTTGGCAATAGAAAAATATAAGGAAGAATATAAAATTGAAGACATTTCAAATCATTT TATAAAAGAGTTTAAAAATAAGTATAGTAAAAAAATATGGATGATGAATGGAAAAACTGA CAGAACAAATGACTTTTATGAAATTTGGGAAAAAAGATTTAAAAAAACGTTTTTAAATAA AAATTTAAAAAAACAATATAGAAGTAATTATGAAAAAGAAAATAAAAAGATTATTAATAA CGAAAAAAGAGTAAGTATTATTTTTTCTAACAGTAAAGGTTTTAAAAGAATCAGCAAAAT TAAAATTAATCAAAATTAATTGCTATTTACATATATAAAAAGTATTATTATATATAGGCC CTATTCTAAGAATTTTTTTACTTTAGGAACATTTAATATATTTTCTTACAGTTCATATAT ATATATGCATTTCTATTAAAAAGACACTGAAAAACTCATATTTACCAACTAAACTAAAAT TTTTTAAAATCAGATTGTAAACAAAGCATGCTTTTATTTTTCAAAAAAAATCAATCATTT TTTTTGCAAGATATATGAGTTCTATCATAGTAATTTGAAACATTAGATACTCTGGAAGAA TAAAAATGATTTTTATTATTATTCTCATCCAGTTTAATTTGAGAATTTTTAGCCTCAGAC TCTTTTTTGCTTAAACATTGATAATTCAAATTATCCATATATTTCTTTTTAAAATGAGAA GCCCCATCTATCTTGTTTCGATCGTTATTTATTGATAAATCACAACCCAATATTAATAAA AAAGTTAATATTATAAATAATAATGAAATAACAAATTTTCTATTCATAAATTTCTTTCCC CTAAATTTGATTATTAAAATTAACCTTACAAAAACCATAAAGCACAAGTAAACACATTTT AAAAATTTAAAGTTAAGCATTATTGTAGTATATACAGATAATTTGTTTTTCAATATAATA AAATTAGTCCAAATTTATATCCATTTTTAGCATTTTAGCCATTAGAATTATATTCTAAAT TAACCATTTTTTAAAGCAGAATTCGAACAAAATTATATATTCTTATAAAGAAATATAAAC GAATAAGAAATGCAAAAATATTATTTCATAAACCCTAAATAATGCATTAAATTATTATCA TTCTTTTAAATAAAAATTGAAAATTAATTATTGATATTCTAAACCCAAAAGAAATTTTCA AAAGTATTTAGATCTGTAAGATTTTTGTATAGCAAAAGAAAAGTATTGGTGGTTTATGGT GGTATAGAGAGTCGCAAGATAATTATTATATAAATTAAGTTGTTATGGATTTTATTTACA TAAAGCAGATAGATGTTTCCTATTAAGTAAACTATGTAGTAATTATGGTATTAAAAATAC AACTCTAAAATTAAGAGAAACCAGTTGGCCTTTTGGCAGTTGTAGTTCTAAGTGATAGAA ATCTAAATTCAAATCCAAATCTTAAGTCTTATTTTAAGGCTAGATCATGTATTGGTAAATACTGTTCTTTTAAATAGTTTAGA AAGCTGCCATGAAATAAATCGAGGATTTGTTTCTATAATCTTTGATTTAGAAATAGTAAT TTAAATCTTAAAGCATTGTTAAAATTTTATATTCACAATGCAAATTTTTAGTTAATTTAT 
GTTTTTTAGATTTTCATTTTTCTTATTTGCTATAAACTTATTAATTTAGGCACCAAATAT TTTAGTTTTTATTTTTTTAAAAAAGAAGTCCCTACCCCCGATCTTATTTGGAGAGGATTA ATAGGGGTGTTGGGGACTATTGGTAGATCGTTTTCTACCTTACATATATAATATTTTATT TTTTTAATTTTGTAAATATTATTTAATAATAATAAATAAAATTAAACGCTTTTAAGTAAA ATGTTAAATTTTGGTTTTTTCCCCATGATTTCCTTTTAATTTTTTAGTTTTGCTATTTGA TTTAGTAGCCTGTTGTTGTGATTGATCTTCAATTTTATTGATGGCTTGTAAAAGCCTTAT TTGGCCGTTATTAAGGTTTTTTTTTCTATGTTTGAGTTTTAAAGGATTTAATTTTTGATT CATTATTTGAATTTCTTTAATTTCTTCTGCAGTTGGTTCATAATTTATCCATTTTTTATC TATCCAGCCTTTTTCCCATTCATAATAGCTTTCATTAAAGTTGGTTTTGTTTTGTTCGTC GAAGTGCCTTGCTAGTGCGGCAATTATTTTTGTTCCATATTTAAGCATTTTTTCTGTGTC TGTTTTGGTAATAAATGTGTCATAATCGTAATTTTCACCGGTTTGCATATTAACAATATC AACTATAAGCTTGAAAGTATTTATGCAATTCATTGCAATATTTGTGGATGAAAAATTAGG ATCACTAAGGGCTTTAATATTAAGTTCATATTTTTTCTTTAAACTTATGGGAATTTTAAA AAAGTCAATTATAAAGCCATTTATTTTTGCTCGATAATTAGCTTCATATTGAGAATTTCC AAAGAAAGTTCCTTTTATGCCCCCAATTGTATATTCATTATAGTTTATTGTTATTTCTTT TTGCATTTTTATCCTTTAAAAATATATTAATATAATATAATACTATGTATTAATTGATAT TATATATCAATTAATACATAGTATAAATAAAAAAAATATTTTTAATTTTGTTTTTTTTCT AAACTGTTGTAGATTTTAACTTATCTTTAACCTTTAACCATTTATAAAAAGTCCTTAAGA GAGAAATTTCTTAGGACTTTTTATAAATTTAATAAAGAATAATGAAGAATTGCCCGCCTA AATTTCAAAATAAATTTGGGCGGGGGGTAGGTTTGTAAAATAGATTTATTTTTAGCTCTG AATATACTTTTTATGATATCGGTAGAGGCTTCTAATATAGATAAGACAATAGCTTAGGTG CCGTTTTTAGATTATTAATTAATTTAGAAGGTTAAATATTGTAAGTAAATTCTGGTGATA AATGAATTTTATTTTTATCATAATTTAAATTTTTTTAATAAAAGTTCTTTTTTCTATAAA GAACATAAAATAAGTATTATTTGGCAAAGAGAAGGTTGGCATTTATAAGTACAGAATATA TTTTAACACCAACCAAAAGAAATACTTTTTAAAAGTATTTAGATGTGTAGGATTTTTATA TAATACAGTGTTAAAGCGGCAAGATAGATTTTTATTAAAAAAAATAAGCAAAATCTTATT ACTTGTTCAAATAAATATAAATAAGAGCTTCTATTGCTAAAAAAAGTTTATAGTTTGGCC TTTTGTATGCATGGATTGACTTAAGCTCTATATATAAATTCTTTTAGAGAAATTTAAAGA GGGATAGAACACAAGGATTTCTTAAACATAAGAGCAAGAAAAATAAGCAAGCTTTTAAAG CTAATAATAAAAAAACAACGAAAATAAACTTAAAGAATACTAAAAAAATTATCAAAAAAA CAAAAAACTTCTATTAATGGAGCTAAATCTATATTAAGAATTACTAAGTTACATGAGAAA ATTGTAAATCAAAAAACCTTTTTACACAAGGTATTTTTTGCTTTATAGTTATTTATAAAA 
ATATAGCAATAGAGAACTTATTAATTAAAGATATGTGAATGTTTGGGTAAAGTATTAATA ATTTAGGATGTTATAAACTTGTAAGACGATTATAGTATAAATCAGAATGGTATGGATCCT TTTTGTATAAAGTAGGTATATATTTTTCATCAAGCAAGCTATTTGTAATTGTGGTGTTAA AAATACAATTCTAAAATTAAGTGGTGCATTGCTATAAAGAAATAAAAGCTAAGACAGG AATTTTCCGAATTTAGGCTTGTTAATCATGCTATTGGTGTGAAAATAAGATATAAAAAGA ACTAAAAAGTTATCGTAGAATGAGACAAAAGCCGTGAATATAGAGATAGTAGTTTGCTAG CTTTTAAGGCCACATCAGATCTAATTCATTAAAAAATTTTGTGAAAGGGGCTTCATAGGT AGAGATTAAAGTAATTAATATGTGATATAAAATAATTAATATGTGATATAAAATAATTAA TATGTGATATAAAATAATTAATATGTGATATAAAATAATTAATATGTGATATAAAATAAT TAATATGTGATATAAAATAATTAATATGTGATATAAAATAATTAATATGTGATATAAAAT AATTAAAAGGAAGTTTGTATGAAAAAAATAGCATTTCATATTCAAAAAGGTGGTGTTGGG AAAACTACCTTAAGCGGAAATATTGCAAGTTATTTATCTAAAACAAAAAAAGTTATATTA GTTGATTGTGATATACAGCAAGCAAGTTCTTCTACATGGTTTCTTAATCATGAAATTCTT AAGCTGGATATTAAAGATTTTCTTTTGAAGAAGATGGATGTAGATCAAGTAGTAAGACAA ATACAAAAAAATTTTTATATTTTGCCATGTGTGCCGAGTGGAACTTTTAGAAGAGATGTG CAACATGAATTGCAAGATTTTCCATATTTGATAGATGATTTTTGTTTGGAATTGGAGAAA TTGGGATTTGAATTTGCAATTTTTGATTTATCTCCCAGTTTTGAGCTTTGGGAGCGAAGA ATTATTCTTGCAATGTGTGAAGTTATTACTCCACTGACCCCAGAATTTTTAAGCCTTGAA GGAATTAATATTTTTAAAGAAGAGTTTGAGTCTTTGTTAAAATCTTATAGAAAAAAGGTT AAACATGAGAAGATTATTTGCAATATGCTTAATAAAAGTTTTAAAAGACATAATTTGCAT CTAAGGCAATTTAAAACTTTTGGATATGATCTTTATGAGGTTGGACAAGATGCTAAAATA GCAGAATCTCAGCTATATAAGAAGTCGATTTTTGATTATTATCCGGAGAGTAGATCTGTC TTAGAACTTTCAAGATTGGGAGATGCTTTATGCCTATAAGCAAAGAGGTTAATTTAGAAG AAATTATTAAACAAAATGTTAGCAAATCTAAATTTGCTTTTCGTGATAATGATGTTATTA GCAATAACATTCCAAATGTTAAGCCTATTAGATCAAAAATGACGATCAAGCAAATAAGTG TAGGACTGAAATATGAATATTGGATTGAATTTTATTCTATTTTAGACAAAAAGGGATTAA CAGCTTCTGGTTTTATAAGAACTTTAATTATGGAATATATAGAAGCTTCTAAAATAATAAAGAATTATCTACCTATGATTTCT GTGAAATTTAGGTAGATATGAATTTGTTAGTAAATGGATTTATTTTGATTCCGAATATGC TTTTTGATCATATCGGAAAAACTTTCTCTCTAGTATAAATAAGTCAGTAGTTTTTAGATT AAAAATAAGATTTTCAGCAATGCATAAATAATTAGAATATTTTTTCTTATAAGCTTTGAT GAAGATATTGTATTAAATTATTGATGAATTTAGAAGATTGAATATTGGCAATGTGATCTT TATTGCAATTTAATTCATTAATGGTTATTTTCCATAAGGAATATAAAAGTATTATTTTGT 
GAGACAGGGCATAAGCTCATCATTTGTGTCTATGGTTAGTAACTAGTACTCGGGGGGGGG GGATAATTAACTAAATATATGCAGTGA TATGTTATTTTTTTGAAAAATTTCCACACAAAAAATTTTTTTGTTTCCTCTTTTTTACTT ATCTTGTTACATTTAATGGGCATTCTTTTTTTCTCTTCTAGGCTTTGAATTAAAAATAAT AAGTATATTATCTATATAATCTTTATTTAATGTTTTAAATAAATTATATTGATCCAGATT TAAGACATAGTTAGGATTTGTTTTTTTAAAATTTTAAAATAATAAGATTTGCTAGGCAAT ATTATTGGATTTTTATATATTTGATTGCAAGCTCTTCAAACGTTATGGCATTTTGACCAA TTTTAATAGATTGCACTTGAGAAAATA GTTATAAAATATTTTCAAATTTTTTCTTTTGGGGGGATGTAAATTAATACTTTTTCCCTC ATACTTTTTGAAAGTTTTTATATTTTTTGATGGTTTATAGGAAATAATTTTCCCTGTTTT TCTAATCCAAGTAATTTTATATAAACGGCATTTATATATTTTTCTTTTGTATATTTCGTA GTATATAGCTATTATAAGTAA\nEOF\n\n## Download 64-bit Linux Fasta36 binaries, if necessary. \nif ! [ -e /content/fasta-36.3.8g/bin ]; then\n wget -q https://github.com/wrpearson/fasta36/releases/download/fasta-v36.3.8g/fasta-36.3.8g-linux64.tar.gz\n tar -xzf fasta-36.3.8g-linux64.tar.gz\nfi\n\n## Update path with these binaries. This will be forgotten in other cells.\nif [[ \":$PATH:\" != *\":/content/fasta-36.3.8g/bin:\"* ]]; then\n PATH=\"$PATH:/content/fasta-36.3.8g/bin\"\nfi\n\n## Use an example file if user has not uploaded their own FASTA file. \nif ! [ -e replicons.fna ]; then\n ln -s example.fna replicons.fna\nfi\n\n ## Use FASTX36 to find all matches above the threshold \n fastx36 -m 8 -E 0.05 replicons.fna pfam.faa > output.txt\n \n echo \"########## RESULTS ################\"\n echo \"Replicon Match Ident% Length MMC Gaps QStart QEnd SStart SEnd Expect BScore\"\n cat output.txt",
"########## RESULTS ################\nReplicon Match Ident% Length MMC Gaps QStart QEnd SStart SEnd Expect BScore\ncp26\t62\t97.41\t309\t8\t0\t7793\t8719\t1\t309\t9.7e-38\t148.2\ncp26\t32\t46.31\t244\t131\t5\t9232\t9978\t1\t244\t8.1e-13\t65.2\ncp26\t50\t44.27\t192\t107\t4\t8672\t9253\t1\t194\t1.3e-09\t54.1\ncp26\t49\t36.16\t177\t113\t2\t10076\t10606\t2\t180\t2.6e-07\t46.4\ncp32-1\t57\t68.04\t363\t116\t6\t8539\t9636\t14\t379\t1e-73\t268.3\ncp32-1\t32\t59.59\t245\t99\t0\t10191\t10925\t1\t245\t6.6e-46\t175.3\ncp32-1\t49\t54.80\t177\t80\t0\t11019\t11549\t5\t181\t1.3e-31\t127.4\ncp32-1\t50\t43.75\t160\t90\t4\t9724\t10212\t34\t194\t3.7e-19\t86.1\ncp32-1\t62\t29.12\t261\t185\t42\t8647\t9528\t37\t306\t0.035\t30.3\ncp32-4\t57\t66.31\t371\t125\t5\t4991\t6106\t5\t379\t3.4e-55\t205.5\ncp32-4\t32\t38.68\t243\t149\t16\t6661\t7425\t1\t247\t2e-15\t72.8\ncp32-4\t50\t43.75\t160\t90\t4\t6194\t6682\t34\t194\t1.2e-14\t69.8\ncp32-4\t49\t39.66\t174\t105\t7\t7488\t8018\t4\t181\t3.5e-14\t68.2\ncp32-5\t57\t66.21\t364\t123\t6\t7000\t8097\t12\t379\t5.2e-62\t228.9\ncp32-5\t32\t60.41\t245\t97\t1\t8652\t9389\t1\t245\t7.1e-41\t158.1\ncp32-5\t49\t56.50\t177\t77\t4\t9486\t10016\t1\t181\t3.4e-27\t112.2\ncp32-5\t50\t43.75\t160\t90\t4\t8185\t8673\t34\t194\t1.2e-16\t77.3\ncp32-5\t62\t26.69\t251\t184\t25\t7105\t7917\t37\t292\t0.015\t31.0\ncp32-6\t57\t68.47\t352\t111\t4\t7548\t8606\t25\t379\t9.2e-31\t125.0\ncp32-6\t32\t60.41\t245\t97\t2\t9161\t9901\t1\t245\t1.4e-19\t87.3\ncp32-6\t49\t53.85\t169\t78\t4\t10026\t10532\t5\t177\t5.3e-12\t61.6\ncp32-6\t50\t43.75\t160\t90\t4\t8694\t9182\t34\t194\t2.6e-08\t49.4\ncp32-7\t57\t67.58\t364\t118\t5\t4177\t5271\t12\t379\t2.3e-56\t209.5\ncp32-7\t32\t63.79\t243\t88\t0\t5826\t6554\t1\t243\t1e-37\t146.9\ncp32-7\t49\t59.65\t171\t69\t7\t6615\t7130\t1\t177\t2.8e-23\t98.5\ncp32-7\t50\t43.75\t160\t90\t4\t5359\t5847\t34\t194\t1e-14\t70.2\ncp32-9\t57\t67.33\t352\t115\t5\t19084\t20145\t25\t379\t1.5e-24\t104.9\ncp32-9\t32\t45.27\t243\t133\t5\t20701\t21444\t1\t243\t1.1e-10\t58.2\ncp3
2-9\t49\t44.38\t169\t94\t2\t21543\t22049\t7\t177\t9.7e-08\t48.0\ncp32-9\t50\t40.00\t190\t114\t5\t20141\t20722\t4\t194\t9.7e-07\t44.8\ncp9\t57\t69.75\t367\t111\t9\t1\t1125\t12\t379\t1.2e-68\t250.0\ncp9\t49\t56.88\t160\t69\t0\t1816\t2295\t18\t177\t2.1e-26\t108.6\ncp9\t50\t36.75\t166\t105\t4\t1183\t1686\t27\t194\t7.4e-15\t70.3\nlp17\t32\t29.05\t179\t127\t19\t11876\t12451\t15\t199\t5.3e-07\t44.9\nlp17\t62\t34.95\t289\t188\t32\t6808\t7752\t14\t308\t1e-06\t44.3\nlp25\t57\t42.70\t363\t208\t10\t14040\t12940\t1\t369\t1.6e-26\t111.2\nlp25\t32\t54.88\t246\t111\t0\t12358\t11621\t1\t246\t3e-26\t109.6\nlp25\t49\t52.30\t174\t83\t6\t11562\t11023\t1\t174\t3e-17\t79.3\nlp25\t50\t44.86\t185\t102\t11\t12897\t12337\t1\t194\t1.2e-14\t70.7\nlp25\t49\t46.59\t176\t92\t11\t16275\t16821\t1\t174\t1.8e-13\t66.7\nlp25\t32\t49.09\t55\t27\t1\t16062\t16224\t193\t246\t0.00075\t35.2\nlp25\t62\t30.42\t263\t183\t36\t13905\t13021\t35\t301\t0.0027\t33.7\nlp28-3\t32\t41.10\t236\t138\t14\t17346\t18078\t1\t239\t4.2e-20\t89.4\nlp28-3\t50\t41.62\t185\t108\t9\t16807\t17364\t1\t193\t5.7e-20\t88.6\nlp28-3\t49\t39.26\t163\t99\t14\t18284\t18808\t17\t181\t1.1e-10\t57.6\nlp28-3\t50\t41.09\t129\t73\t16\t2859\t2444\t43\t168\t2.6e-09\t53.2\nlp28-3\t62\t30.80\t250\t173\t38\t16029\t16811\t28\t304\t2.3e-05\t40.7\nlp28-4\t57\t59.04\t354\t145\t8\t10906\t11973\t18\t377\t1.9e-19\t87.9\nlp28-4\t32\t51.44\t243\t118\t8\t12524\t13270\t1\t245\t3.6e-12\t63.0\nlp28-4\t49\t44.69\t179\t99\t6\t13350\t13904\t1\t179\t6.9e-08\t48.4\nlp28-4\t50\t46.15\t182\t98\t7\t11994\t12545\t8\t194\t4.7e-07\t45.7\nlp28-8\t57\t47.24\t362\t191\t12\t17462\t16359\t6\t373\t6.7e-38\t149.2\nlp28-8\t32\t56.15\t244\t107\t5\t15801\t15055\t1\t244\t1.4e-32\t130.9\nlp28-8\t50\t49.73\t183\t92\t13\t16334\t15780\t1\t194\t2.9e-21\t92.9\nlp28-8\t49\t47.40\t173\t91\t9\t14934\t14389\t2\t174\t4.6e-18\t82.2\nlp28-8\t62\t33.46\t254\t169\t52\t17339\t16422\t36\t289\t1.6e-05\t41.4\nlp36\t32\t43.07\t202\t115\t13\t9568\t8936\t1\t206\t6.9e-19\t85.4\nlp36\t50\t41.57\t178\t104\t15\t1
0083\t9550\t1\t193\t3.3e-18\t82.8\nlp36\t49\t36.65\t161\t102\t10\t11615\t12118\t17\t180\t3.9e-14\t69.1\nlp36\t62\t34.77\t279\t182\t31\t22899\t23828\t17\t295\t1.9e-10\t57.7\nlp36\t57\t30.57\t350\t241\t87\t10966\t9697\t19\t375\t0.0094\t32.4\nlp54\t32\t100.00\t250\t0\t0\t13708\t14457\t1\t250\t5.4e-55\t205.8\nlp54\t50\t100.00\t194\t0\t0\t13148\t13729\t1\t194\t3.3e-44\t169.6\nlp54\t49\t99.45\t181\t1\t0\t14491\t15033\t1\t181\t1.1e-39\t154.6\nlp54\t57\t41.18\t357\t210\t51\t11889\t13097\t3\t364\t1.8e-16\t78.5\n"
],
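The `-m 8` flag passed to `fastx36` above produces BLAST-style 12-column tabular output, which is what the RESULTS table shows. A minimal stdlib sketch of parsing one such row (`parse_m8_line` is a hypothetical helper, not part of FASTA36; the column names follow the standard tabular format):

```python
# Standard BLAST/FASTA36 "-m 8" tabular columns, in order.
FIELDS = ("query", "subject", "pident", "length", "mismatch", "gapopen",
          "qstart", "qend", "sstart", "send", "evalue", "bitscore")

def parse_m8_line(line):
    """Split one tab-separated hit line into a dict with typed values."""
    row = dict(zip(FIELDS, line.rstrip("\n").split("\t")))
    # Coordinates and counts are integers...
    for key in ("length", "mismatch", "gapopen", "qstart", "qend", "sstart", "send"):
        row[key] = int(row[key])
    # ...identity, e-value and bit score are floats.
    for key in ("pident", "evalue", "bitscore"):
        row[key] = float(row[key])
    return row

# One row copied from the output above: Pfam family 32 on plasmid lp54.
hit = parse_m8_line("lp54\t32\t100.00\t250\t0\t0\t13708\t14457\t1\t250\t5.4e-55\t205.8")
print(hit["subject"], hit["pident"], hit["qstart"], hit["qend"])  # → 32 100.0 13708 14457
```

The later plotting cell pulls exactly these columns (`line.split()[0]`, `[1]`, `[6]`, `[7]`) positionally; naming them once up front makes that indexing self-documenting.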
[
"## Install BioPython library. \n!pip install biopython ",
"Collecting biopython\n Downloading https://files.pythonhosted.org/packages/28/15/8ac646ff24cfa2588b4d5e5ea51e8d13f3d35806bd9498fbf40ef79026fd/biopython-1.73-cp36-cp36m-manylinux1_x86_64.whl (2.2MB)\n 100% |████████████████████████████████| 2.2MB 13.9MB/s \nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from biopython) (1.14.6)\nInstalling collected packages: biopython\nSuccessfully installed biopython-1.73\n"
],
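The plotting cell below calls `Bio.SeqIO.parse` only to record each replicon's length. For illustration, that one lookup can also be done with a few lines of stdlib Python (a hedged sketch of the length bookkeeping, not a replacement for BioPython's full parser):

```python
def fasta_lengths(lines):
    """Map each FASTA record id (first word after '>') to its sequence length."""
    lengths, name = {}, None
    for line in lines:
        line = line.strip()
        if line.startswith(">"):
            name = line[1:].split()[0]   # header line starts a new record
            lengths[name] = 0
        elif name is not None:
            lengths[name] += len(line)   # accumulate residues across wrapped lines
    return lengths

# Toy records; real input would be the lines of replicons.fna.
example = [">lp17 some description", "ACGTACGT", "ACGT", ">cp26", "GGCC"]
print(fasta_lengths(example))  # → {'lp17': 12, 'cp26': 4}
```

This mirrors how the script fills `replen` (e.g. `replen['lp17']`) so each plasmid can be drawn to scale.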
[
"## Use matplotlib to draw the plasmids in replicons.fna / output.txt.\nimport sys\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom itertools import islice\nfrom Bio import SeqIO\n\n# -------- Begin constants -------- #\n\n# Colors\nBASE_COLOR = 'gray'\nSEQUENCE_COLORS = {32: 'red', 49: 'green', 50: 'yellow', 57: 'cyan', 62: 'blue'}\n\n# Linear plot\nBASE_LINE_WIDTH = 15\nSEQUENCE_LINE_WIDTH = 20\nMARKER_LINE_WIDTH = 15\nHORIZONTAL_SCALE_CONSTANT = 1/4000\n# Offsets to avoid labels intersecting plot\nLABEL_Y_ADJUST = 0.005\nLABEL_LEFT_X_ADJUST = 0.07\nLABEL_RIGHT_X_ADJUST = 0.05\n\n# Circular plot\nCHART_BOTTOM = 8\nCHART_THICKNESS = 4.5 ## Was 1.5\nCIRCULAR_SCALE_CONSTANT = 1 ## Was 3\n\n# -------- End constants -------- #\n\n# -------- Begin functions -------- #\n\ndef file_length(file):\n with open(file) as f:\n for i, l in enumerate(f):\n pass\n return i + 1\n\n\ndef linear_plot(data):\n # Scaling\n horizontal_scale = plasmid_length * HORIZONTAL_SCALE_CONSTANT\n plt.figure()\n plt.rcParams['figure.figsize'] = (horizontal_scale, 0.5)\n plt.close('all') \n \n linear = plt.subplot(111)\n linear.set_facecolor('white')\n background = linear.barh(0, plasmid_length, height = 20, color = 'white')\n baseline = linear.barh(0, plasmid_length, height = 1, color = BASE_COLOR)\n \n for index in range(2, data_count):\n sequence = data[index]\n # Get sequence data\n sequence_start = min(sequence[1], sequence[2])\n sequence_end = max(sequence[1], sequence[2])\n sequence_length = sequence_end - sequence_start\n sequence_family = sequence[0]\n\n # Plot gene onto plasmid\n color = SEQUENCE_COLORS[sequence_family]\n gene_plot = linear.barh(0, sequence_length, left = sequence_start, height = 1, color = color)\n \n # Add labels\n label_y = 0\n # Left side label - name of plasmid (second column)\n label_x = -plasmid_length * LABEL_LEFT_X_ADJUST\n if plasmid_length < 10000:\n label_x *= 2\n plt.text(label_x, label_y, \"0\")\n # Right side label - length of plasmid\n label_x = 
plasmid_length + plasmid_length * LABEL_RIGHT_X_ADJUST\n plt.text(label_x, label_y, str(plasmid_length))\n # Top label - plasmid name\n label_x = 0\n label_y = 0.8\n label_text = str(sequence_name) + \" - \" + str(plasmid_length) + \"nt\"\n plt.text(label_x, label_y, label_text)\n \n # Fix axes limits\n xmin, xmax, ymin, ymax = plt.axis('tight')\n plt.ylim(0, 1)\n \n # Hide background and show plot\n plt.axis('off')\n plt.show()\n\n\ndef circular_plot(data):\n # Sizing\n graph_scale = np.sqrt(plasmid_length/9000) * CIRCULAR_SCALE_CONSTANT\n circle_width = CHART_THICKNESS/graph_scale * CIRCULAR_SCALE_CONSTANT\n circle_bottom = CHART_BOTTOM\n plt.figure()\n plt.rcParams['figure.figsize'] = (graph_scale, graph_scale)\n plt.close('all')\n \n circle = plt.subplot(111, polar=True)\n circle.set_theta_offset(np.radians(90)) # Move origin to top instead of right\n baseline = circle.bar(0, circle_width, width = -np.radians(360), bottom = circle_bottom, align='edge', color = BASE_COLOR)\n \n # Use data from first protein family sequence for initial minimum/maximum values\n first_data_point = data[2]\n minimum = min(first_data_point[1], first_data_point[2])\n maximum = max(first_data_point[1], first_data_point[2])\n \n # Center the plot\n for index in range(2, data_count):\n sequence = data[index]\n sequence_start = min(sequence[1], sequence[2])\n sequence_end = max(sequence[1], sequence[2])\n if sequence_start < minimum:\n minimum = sequence_start\n if sequence_end > maximum:\n maximum = sequence_end\n \n center = (minimum + maximum)/2\n \n # Plot\n for index in range(2, data_count):\n sequence = data[index]\n # Get sequence data\n sequence_start = min(sequence[1], sequence[2])\n sequence_end = max(sequence[1], sequence[2])\n sequence_length = sequence_end - sequence_start\n sequence_family = sequence[0]\n\n # Plot gene onto plasmid\n gene_plot_start = ((center - sequence_start) / plasmid_length) * np.radians(360)\n gene_plot_width = (sequence_length / plasmid_length) * 
np.radians(360)\n color = SEQUENCE_COLORS[sequence_family]\n gene_plot = circle.bar(gene_plot_start, circle_width, width = gene_plot_width, bottom = circle_bottom, color = color)\n \n # Label - plasmid name and length\n label_x = 0\n label_y = 0 ## Was 0\n label_text = str(sequence_name) + \"\\n\" + str(plasmid_length) + \"nt\"\n plt.text(label_x, label_y, label_text, ha = 'center')\n \n # Hide background and show plot\n plt.axis('off')\n plt.show()\n\n\n# -------- End functions -------- #\n\n# -------- Begin main program -------- #\n\nplasmid_type = 'l'\nplasmid_length = 56000\ndata_dict = {}\nline_number = 0\n\n## Get sequence length of each record in \"replicons.fna\"\nreplen = {}\nwith open('replicons.fna', 'r') as fna: \n for rec in SeqIO.parse(fna, 'fasta'):\n replen[rec.id] = len(rec.seq) ## replen['lp17'] = 17000\n \n\n# Create dictionary from FASTA output\nfilename = 'output.txt'\nwith open(filename) as data:\n for line in islice(data, 0, file_length(filename)):\n line_number += 1\n plasmid = line.split()[0]\n \n # If sequence does not exist in dictionary, initialize with the plasmid type and length\n if plasmid not in data_dict:\n plasmid_length = replen[plasmid]\n data_dict[plasmid] = [plasmid_type, plasmid_length]\n \n # Pull data from relevant columns in FASTA output\n pf = int(line.split()[1])\n start = min(int(line.split()[6]), int(line.split()[7]))\n end = max(int(line.split()[6]), int(line.split()[7]))\n \n value_cleared = True\n \n # Check to see if overlaps with another sequence\n for tuple_index in range(2, len(data_dict[plasmid])):\n ranges = data_dict[plasmid][tuple_index]\n \n # Test for overlap with previously added sequences\n #if (start > ranges[1] and start < ranges[2]) or (end > ranges[1] and end < ranges[2]):\n # print(\"Overlap at line \" + str(line_number) + \" of input. 
Skipped following data from sequence \" \n # + plasmid + \": Family \" + str(pf) + \" between \" \n # + str(start) + \" and \" + str(end) + \".\")\n # value_cleared = False\n # break\n \n # If no overlap, add data point to value in dictionary for plasmid key\n if value_cleared:\n pf_tuple = (pf, start, end)\n data_dict[plasmid].append(pf_tuple)\n\n# Loops over every key in dictionary and creates a plot for each\nfor plasmid, data in data_dict.items():\n sequence_name = plasmid\n plasmid_type = data[0]\n plasmid_length = data[1]\n data_count = len(data)\n if ('cp' in plasmid):\n circular_plot(data)\n else:\n linear_plot(data)\n",
"_____no_output_____"
],
[
"\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e75ddc5d60ee0fb44420d2ee26e6a3d9f94e9da3 | 11,928 | ipynb | Jupyter Notebook | Modone.ipynb | mrandolph95/STC510 | 829df07fe5b80d2fe7303b4bd9bdfbb7fcfadb53 | [
"CC0-1.0"
] | null | null | null | Modone.ipynb | mrandolph95/STC510 | 829df07fe5b80d2fe7303b4bd9bdfbb7fcfadb53 | [
"CC0-1.0"
] | null | null | null | Modone.ipynb | mrandolph95/STC510 | 829df07fe5b80d2fe7303b4bd9bdfbb7fcfadb53 | [
"CC0-1.0"
] | null | null | null | 44.674157 | 613 | 0.565644 | [
[
[
"<a href=\"https://colab.research.google.com/github/mrandolph95/STC510/blob/main/Modone.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"The FBI is tracking on a potential smuggling ring that is led by a shady character known by his nom de guerre of The Hamburgler and is using social media platforms to help organize her or his efforts. Your mission, should you choose to accept it, is to create a system that uses the API of various services to trace comments made over the last 72 hours that make mention of the terms that he is using as cover: cheese (payments), pickles (firearms), buns (identity covers), meat (targets), and sesame (keys). We need your help tracking this person and associates who may use these terms on social media.",
"_____no_output_____"
],
[
"Write a Python script that draws data from a subreddit (or from multiple subreddits, or from all subreddits) and stores it as a CSV.\nThink about the kinds of questions you might be able to address using data from Reddit. I was looking at /r/Phoenix because I wanted to extract place-based comments and learn from them. Here you are looking for particular keywords, within a particular timeframe. Design an extractor that does this, and also saves the dates and times of these comments to a human-readable version. It should also only collect the last 72 hours-worth of data.",
"_____no_output_____"
]
],
[
[
"!pip install praw\n!pip install pytz # suggested to use because kept getting tzinfo error when importing datetime. this will specify pacific timezone\n\n",
"Requirement already satisfied: praw in /usr/local/lib/python3.7/dist-packages (7.5.0)\nRequirement already satisfied: prawcore<3,>=2.1 in /usr/local/lib/python3.7/dist-packages (from praw) (2.3.0)\nRequirement already satisfied: websocket-client>=0.54.0 in /usr/local/lib/python3.7/dist-packages (from praw) (1.2.3)\nRequirement already satisfied: update-checker>=0.18 in /usr/local/lib/python3.7/dist-packages (from praw) (0.18.0)\nRequirement already satisfied: requests<3.0,>=2.6.0 in /usr/local/lib/python3.7/dist-packages (from prawcore<3,>=2.1->praw) (2.23.0)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3.0,>=2.6.0->prawcore<3,>=2.1->praw) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3.0,>=2.6.0->prawcore<3,>=2.1->praw) (2021.10.8)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3.0,>=2.6.0->prawcore<3,>=2.1->praw) (2.10)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3.0,>=2.6.0->prawcore<3,>=2.1->praw) (1.24.3)\nRequirement already satisfied: pytz in /usr/local/lib/python3.7/dist-packages (2018.9)\nCollecting datetime\n Downloading DateTime-4.3-py2.py3-none-any.whl (60 kB)\n\u001b[K |████████████████████████████████| 60 kB 4.3 MB/s \n\u001b[?25hRequirement already satisfied: pytz in /usr/local/lib/python3.7/dist-packages (from datetime) (2018.9)\nCollecting zope.interface\n Downloading zope.interface-5.4.0-cp37-cp37m-manylinux2010_x86_64.whl (251 kB)\n\u001b[K |████████████████████████████████| 251 kB 14.6 MB/s \n\u001b[?25hRequirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from zope.interface->datetime) (57.4.0)\nInstalling collected packages: zope.interface, datetime\nSuccessfully installed datetime-4.3 zope.interface-5.4.0\n"
],
[
"import praw\nimport time\nfrom time import sleep # importing to slow down execution\nimport datetime \nfrom datetime import datetime\nfrom datetime import timedelta \nfrom praw.reddit import Submission # colab suggested this\nfrom praw.models.reddit.comment import Comment # colab suggested this",
"_____no_output_____"
]
],
[
[
"2. enter login and script info for reddit",
"_____no_output_____"
]
],
[
[
"usr_name = \"nunya_9911\"\nusr_password = \"disposablepassword123!\"\nreddit_app_id = '0Fss0e88a5UL1dWmgk2vug'\nreddit_app_secret = 'AmCxyt0gEFlMe6r2TDs6ILzQfZI5Eg'\nreddit = praw.Reddit(user_agent=\"Mod 1 (by u/nunya_9911)\",\n client_id=reddit_app_id, client_secret=reddit_app_secret,\n # added the check for async as colab suggested I do so.\n username=usr_name, password=usr_password,check_for_async=False) ",
"_____no_output_____"
],
[
"# defines which subreddit we will be looking in. No preference so to subreddit so put 'all'\nsubreddit = reddit.subreddit('all') ",
"_____no_output_____"
]
],
[
[
"**This is an explanation of how to use search.** It is copied from the floating comment when I began to type \"search\". I put it here so it was easy to reference while I was typing everything out.\n\ndef search(query: str, sort: str='relevance', syntax: str='lucene', time_filter: str='all', **generator_kwargs: Any) ->Iterator['praw.models.Submission']\n\nReturn a .ListingGenerator for items that match query.\n\n:param query: The query string to search for.\n:param sort: Can be one of: relevance, hot, top, new, comments. (default:\n relevance).\n:param syntax: Can be one of: cloudsearch, lucene, plain (default: lucene).\n:param time_filter: Can be one of: all, day, hour, month, week, year (default:\n all).\n\nFor more information on building a search query see:\nhttps://www.reddit.com/wiki/search\n\nFor example, to search all subreddits for praw try:",
"_____no_output_____"
]
],
[
[
"# this labels the current date so I can use it later\nrightnow = datetime.now()\n# per the reading about time. the reading said to enter datetime.datetime.timedelta, but that didn't work.\n# did this instead. not sure if Python updated since the reading was published?\ndelta = timedelta(hours=72)\n# this defines the time 3 days ago and converts it from datetime.datetime to a float\nthe_last_seventytwo_hours = datetime.timestamp(rightnow - delta)\n\n",
"_____no_output_____"
],
[
"with open(\"moduleone_missionhamburgler.csv\", 'w') as subfile:\n\n # creating an empty list so I can add things to it\n list_of_found_codenames = [] \n\n # so reddit doesn't get mad at me\n sleep(2)\n\n # in all subreddits, I am searching for submissions that includes the Hamburgler's cover names\n for submission in subreddit.search(\"cheese, bun, meat, pickle, sesame\",'new','lucene','week'):\n # while completing the for loop above, it is going to make sure the submissions were made in the last 3 days\n if submission.created_utc >= the_last_seventytwo_hours:\n sleep(4)\n # adding the submissions from the for loop to the list\n list_of_found_codenames.append(submission.id)\n \n\n\n sleep(4)\n\n# this for loop will help format the submissions I added to the list\n for eachtopic in list_of_found_codenames:\n submission = reddit.submission(eachtopic)\n\n # making sure reddit doesn't get mad. better safe than sorry :)\n sleep(4)\n\n # this formats all the submissions so it is easy to read\n format = '*' + eachtopic + '* \"'\n format += submission.title + '\", written by '\n format += submission.author.name + '. @ '\n format += datetime.strftime(datetime.fromtimestamp(submission.created_utc), ' %A, %B %d, %Y') + '.\\n'\n\n # add everything we formatted to the subfile\n subfile.write(format)\n\n # making sure that it pulls all the comments from reddit by bypassing the \"more\" button\n submission.comments.replace_more(limit=None)\n commentlist = submission.comments.list()\n\n sleep(4)\n\n# opening the file again to add/append the found comments to list\nwith open(\"moduleone_missionhamburgler.csv\", 'a') as subfile:\n\n # same as what I did for the submissions. 
\n    for eachcomment in commentlist:\n        sleep(2)\n\n        format = str(eachcomment) + ','\n        format += eachcomment.body.replace('\\n', '/') + ','\n        # use the comment's own author and timestamp, not the submission's\n        format += eachcomment.author.name + ','\n        format += datetime.strftime(datetime.fromtimestamp(eachcomment.created_utc), ' %A, %B %d, %Y') + ','\n\n        subfile.write(format)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e75de10b423854473c7a8776c0be4108379e0162 | 15,185 | ipynb | Jupyter Notebook | examples/SparkMeasure_Jupyter_Colab_Example.ipynb | knockdata/sparkMeasure | 56496642c51f0a5bf9fbd9fd39c96b0f6e7c5d97 | [
"Apache-2.0"
] | 453 | 2017-03-30T04:21:03.000Z | 2022-03-31T12:00:40.000Z | examples/SparkMeasure_Jupyter_Colab_Example.ipynb | hellodk/sparkMeasure | 781e28d378f6028980e08f75783b5a4fc8a9cd6c | [
"Apache-2.0"
] | 35 | 2017-11-04T00:27:14.000Z | 2022-03-04T09:06:30.000Z | examples/SparkMeasure_Jupyter_Colab_Example.ipynb | hellodk/sparkMeasure | 781e28d378f6028980e08f75783b5a4fc8a9cd6c | [
"Apache-2.0"
] | 101 | 2017-04-20T15:43:22.000Z | 2022-03-19T00:37:44.000Z | 33.082789 | 269 | 0.562002 | [
[
[
"# Jupyter/Colab Notebook to Showcase sparkMeasure APIs for Python",
"_____no_output_____"
],
[
"### [Run on Google Colab Research: <img src=\"https://raw.githubusercontent.com/googlecolab/open_in_colab/master/images/icon128.png\">](https://colab.research.google.com/github/LucaCanali/sparkMeasure/blob/master/examples/SparkMeasure_Jupyter_Colab_Example.ipynb)",
"_____no_output_____"
],
[
"**SparkMeasure is a tool for performance troubleshooting of Apache Spark workloads** \nIt simplifies the collection and analysis of Spark performance metrics. It is also intended as a working example of how to use Spark listeners for collecting and processing Spark executors task metrics data.\n\n**References:**\n- [https://github.com/LucaCanali/sparkMeasure](https://github.com/LucaCanali/sparkMeasure) \n- sparkmeasure Python docs: [docs/Python_shell_and_Jupyter](https://github.com/LucaCanali/sparkMeasure/blob/master/docs/Python_shell_and_Jupyter.md) \n\n**Architecture:**\n\n\nContact: [email protected], February 2019 ",
"_____no_output_____"
]
],
[
[
"# Install Spark \n# Note: This installs the latest Spark version (version 2.4.3, as tested in May 2019)\n\n!pip install pyspark",
"_____no_output_____"
],
[
"from pyspark.sql import SparkSession\n\n# Create Spark Session\n# This example uses a local cluster, you can modify master to use YARN or K8S if available \n# This example downloads sparkMeasure 0.14 for scala 2_11 from maven central\n\nspark = SparkSession \\\n .builder \\\n .master(\"local[*]\") \\\n .appName(\"Test sparkmeasure instrumentation of Python/PySpark code\") \\\n .config(\"spark.jars.packages\",\"ch.cern.sparkmeasure:spark-measure_2.11:0.14\") \\\n .getOrCreate()",
"_____no_output_____"
],
[
"# test that Spark is working OK\nspark.sql(\"select 1 as id, 'Hello world!' as Greeting\").show()",
"+---+------------+\n| id| Greeting|\n+---+------------+\n| 1|Hello world!|\n+---+------------+\n\n"
],
[
"# Install the Python wrapper API for spark-measure\n\n!pip install sparkmeasure",
"_____no_output_____"
],
[
"# Load the Python API in sparkmeasure package\n# an attache the sparkMeasure Listener for stagemetrics to the active Spark session\n\nfrom sparkmeasure import StageMetrics\nstagemetrics = StageMetrics(spark)",
"_____no_output_____"
],
[
"# Define cell and line magic to wrap the instrumentation\nfrom IPython.core.magic import (register_line_magic, register_cell_magic, register_line_cell_magic)\n\n@register_line_cell_magic\ndef sparkmeasure(line, cell=None):\n \"run and measure spark workload. Use: %sparkmeasure or %%sparkmeasure\"\n val = cell if cell is not None else line\n stagemetrics.begin()\n eval(val)\n stagemetrics.end()\n stagemetrics.print_report()",
"_____no_output_____"
],
[
"%%sparkmeasure\nspark.sql(\"select count(*) from range(1000) cross join range(1000) cross join range(100)\").show()",
"+---------+\n| count(1)|\n+---------+\n|100000000|\n+---------+\n\n\nScheduling mode = FIFO\nSpark Context default degree of parallelism = 8\nAggregated Spark stage metrics:\nnumStages => 4\nsum(numTasks) => 25\nelapsedTime => 1488 (1 s)\nsum(stageDuration) => 1456 (1 s)\nsum(executorRunTime) => 10085 (10 s)\nsum(executorCpuTime) => 9582 (10 s)\nsum(executorDeserializeTime) => 172 (0.2 s)\nsum(executorDeserializeCpuTime) => 83 (83 ms)\nsum(resultSerializationTime) => 10 (10 ms)\nsum(jvmGCTime) => 0 (0 ms)\nsum(shuffleFetchWaitTime) => 0 (0 ms)\nsum(shuffleWriteTime) => 10 (10 ms)\nmax(resultSize) => 21343 (20.0 KB)\nsum(numUpdatedBlockStatuses) => 0\nsum(diskBytesSpilled) => 0 (0 Bytes)\nsum(memoryBytesSpilled) => 0 (0 Bytes)\nmax(peakExecutionMemory) => 0\nsum(recordsRead) => 2100\nsum(bytesRead) => 0 (0 Bytes)\nsum(recordsWritten) => 0\nsum(bytesWritten) => 0 (0 Bytes)\nsum(shuffleTotalBytesRead) => 472 (472 Bytes)\nsum(shuffleTotalBlocksFetched) => 8\nsum(shuffleLocalBlocksFetched) => 8\nsum(shuffleRemoteBlocksFetched) => 0\nsum(shuffleBytesWritten) => 472 (472 Bytes)\nsum(shuffleRecordsWritten) => 8\n"
],
[
"# Print additional metrics from accumulables\nstagemetrics.print_accumulables()",
"\nAggregated Spark accumulables of type internal.metric. Sum of values grouped by metric name\nName => sum(value) [group by name]\n\nexecutorCpuTime => 9584 (10 s)\nexecutorDeserializeCpuTime => 85 (85 ms)\nexecutorDeserializeTime => 172 (0.2 s)\nexecutorRunTime => 10085 (10 s)\ninput.recordsRead => 2100\nresultSerializationTime => 10 (10 ms)\nresultSize => 44249 (43.0 KB)\nshuffle.read.fetchWaitTime => 0 (0 ms)\nshuffle.read.localBlocksFetched => 8\nshuffle.read.localBytesRead => 472 (472 Bytes)\nshuffle.read.recordsRead => 8\nshuffle.read.remoteBlocksFetched => 0\nshuffle.read.remoteBytesRead => 0 (0 Bytes)\nshuffle.read.remoteBytesReadToDisk => 0 (0 Bytes)\nshuffle.write.bytesWritten => 472 (472 Bytes)\nshuffle.write.recordsWritten => 8\nshuffle.write.writeTime => 10 (10 ms)\n\nSQL Metrics and other non-internal metrics. Values grouped per accumulatorId and metric name.\nAccid, Name => max(value) [group by accId, name]\n\n 29, data size total => 119 (119 Bytes)\n 30, duration total => 2 (2 ms)\n 31, number of output rows => 1\n 34, aggregate time total => 2 (2 ms)\n 36, duration total => 9897 (10 s)\n 37, number of output rows => 8\n 40, aggregate time total => 9825 (10 s)\n 42, number of output rows => 100000000\n 43, number of output rows => 1000000\n 44, duration total => 9937 (10 s)\n 45, number of output rows => 1000\n 51, number of output rows => 1000\n 57, number of output rows => 100\n"
],
[
"# You can also explicitly Wrap your Spark workload into stagemetrics instrumentation \n# as in this example\nstagemetrics.begin()\n\nspark.sql(\"select count(*) from range(1000) cross join range(1000) cross join range(100)\").show()\n\nstagemetrics.end()\n# Print a summary report\nstagemetrics.print_report()",
"+---------+\n| count(1)|\n+---------+\n|100000000|\n+---------+\n\n\nScheduling mode = FIFO\nSpark Context default degree of parallelism = 8\nAggregated Spark stage metrics:\nnumStages => 4\nsum(numTasks) => 25\nelapsedTime => 1563 (2 s)\nsum(stageDuration) => 1541 (2 s)\nsum(executorRunTime) => 11610 (12 s)\nsum(executorCpuTime) => 11153 (11 s)\nsum(executorDeserializeTime) => 37 (37 ms)\nsum(executorDeserializeCpuTime) => 15 (15 ms)\nsum(resultSerializationTime) => 1 (1 ms)\nsum(jvmGCTime) => 24 (24 ms)\nsum(shuffleFetchWaitTime) => 0 (0 ms)\nsum(shuffleWriteTime) => 1 (1 ms)\nmax(resultSize) => 21343 (20.0 KB)\nsum(numUpdatedBlockStatuses) => 0\nsum(diskBytesSpilled) => 0 (0 Bytes)\nsum(memoryBytesSpilled) => 0 (0 Bytes)\nmax(peakExecutionMemory) => 0\nsum(recordsRead) => 2100\nsum(bytesRead) => 0 (0 Bytes)\nsum(recordsWritten) => 0\nsum(bytesWritten) => 0 (0 Bytes)\nsum(shuffleTotalBytesRead) => 472 (472 Bytes)\nsum(shuffleTotalBlocksFetched) => 8\nsum(shuffleLocalBlocksFetched) => 8\nsum(shuffleRemoteBlocksFetched) => 0\nsum(shuffleBytesWritten) => 472 (472 Bytes)\nsum(shuffleRecordsWritten) => 8\n"
],
[
"# Another way to encapsulate code and instrumentation in a compact form\n\nstagemetrics.runandmeasure(locals(), \"\"\"\nspark.sql(\"select count(*) from range(1000) cross join range(1000) cross join range(100)\").show()\n\"\"\")",
"+---------+\n| count(1)|\n+---------+\n|100000000|\n+---------+\n\n\nScheduling mode = FIFO\nSpark Context default degree of parallelism = 8\nAggregated Spark stage metrics:\nnumStages => 4\nsum(numTasks) => 25\nelapsedTime => 1518 (2 s)\nsum(stageDuration) => 1493 (1 s)\nsum(executorRunTime) => 11473 (11 s)\nsum(executorCpuTime) => 11134 (11 s)\nsum(executorDeserializeTime) => 54 (54 ms)\nsum(executorDeserializeCpuTime) => 18 (18 ms)\nsum(resultSerializationTime) => 0 (0 ms)\nsum(jvmGCTime) => 40 (40 ms)\nsum(shuffleFetchWaitTime) => 0 (0 ms)\nsum(shuffleWriteTime) => 1 (1 ms)\nmax(resultSize) => 21472 (20.0 KB)\nsum(numUpdatedBlockStatuses) => 0\nsum(diskBytesSpilled) => 0 (0 Bytes)\nsum(memoryBytesSpilled) => 0 (0 Bytes)\nmax(peakExecutionMemory) => 0\nsum(recordsRead) => 2100\nsum(bytesRead) => 0 (0 Bytes)\nsum(recordsWritten) => 0\nsum(bytesWritten) => 0 (0 Bytes)\nsum(shuffleTotalBytesRead) => 472 (472 Bytes)\nsum(shuffleTotalBlocksFetched) => 8\nsum(shuffleLocalBlocksFetched) => 8\nsum(shuffleRemoteBlocksFetched) => 0\nsum(shuffleBytesWritten) => 472 (472 Bytes)\nsum(shuffleRecordsWritten) => 8\n"
]
],
[
[
"## Example of collecting data using Task Metrics\nCollecting Spark task metrics at the granularity of each task completion has additional overhead\ncompared to collecting at the stage completion level, so this option should only be used if you need data at this finer granularity, for example because you want\nto study skew effects. Otherwise, consider using stagemetrics aggregation as the preferred choice.\n",
"_____no_output_____"
]
],
[
[
"from sparkmeasure import TaskMetrics\ntaskmetrics = TaskMetrics(spark)\n\ntaskmetrics.begin()\nspark.sql(\"select count(*) from range(1000) cross join range(1000) cross join range(100)\").show()\ntaskmetrics.end()\ntaskmetrics.print_report()",
"+---------+\n| count(1)|\n+---------+\n|100000000|\n+---------+\n\n\nScheduling mode = FIFO\nSpark Contex default degree of parallelism = 8\nAggregated Spark task metrics:\nnumtasks => 25\nelapsedTime => 1478 (1 s)\nsum(duration) => 11402 (11 s)\nsum(schedulerDelay) => 62\nsum(executorRunTime) => 11299 (11 s)\nsum(executorCpuTime) => 11208 (11 s)\nsum(executorDeserializeTime) => 40 (40 ms)\nsum(executorDeserializeCpuTime) => 7 (7 ms)\nsum(resultSerializationTime) => 1 (1 ms)\nsum(jvmGCTime) => 0 (0 ms)\nsum(shuffleFetchWaitTime) => 0 (0 ms)\nsum(shuffleWriteTime) => 0 (0 ms)\nsum(gettingResultTime) => 0 (0 ms)\nmax(resultSize) => 2641 (2.0 KB)\nsum(numUpdatedBlockStatuses) => 0\nsum(diskBytesSpilled) => 0 (0 Bytes)\nsum(memoryBytesSpilled) => 0 (0 Bytes)\nmax(peakExecutionMemory) => 0\nsum(recordsRead) => 2100\nsum(bytesRead) => 0 (0 Bytes)\nsum(recordsWritten) => 0\nsum(bytesWritten) => 0 (0 Bytes)\nsum(shuffleTotalBytesRead) => 472 (472 Bytes)\nsum(shuffleTotalBlocksFetched) => 8\nsum(shuffleLocalBlocksFetched) => 8\nsum(shuffleRemoteBlocksFetched) => 0\nsum(shuffleBytesWritten) => 472 (472 Bytes)\nsum(shuffleRecordsWritten) => 8\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e75de829e764085b52badbd911059a00a92c3680 | 3,276 | ipynb | Jupyter Notebook | PrelimExam.ipynb | cieloanne/Linear-Algebra-58019 | 92477dc11a26264d349062584e4cd2afef9ee7de | [
"Apache-2.0"
] | null | null | null | PrelimExam.ipynb | cieloanne/Linear-Algebra-58019 | 92477dc11a26264d349062584e4cd2afef9ee7de | [
"Apache-2.0"
] | null | null | null | PrelimExam.ipynb | cieloanne/Linear-Algebra-58019 | 92477dc11a26264d349062584e4cd2afef9ee7de | [
"Apache-2.0"
] | null | null | null | 21.695364 | 237 | 0.392247 | [
[
[
"<a href=\"https://colab.research.google.com/github/cieloanne/Linear-Algebra-58019/blob/main/PrelimExam.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"## Prelim Exam",
"_____no_output_____"
],
[
"### Question 1",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\nc = np.eye(4)\nprint(c)",
"[[1. 0. 0. 0.]\n [0. 1. 0. 0.]\n [0. 0. 1. 0.]\n [0. 0. 0. 1.]]\n"
]
],
[
[
"### Question 2",
"_____no_output_____"
]
],
[
[
"answer = 2*c\n\nprint(answer)",
"[[2. 0. 0. 0.]\n [0. 2. 0. 0.]\n [0. 0. 2. 0.]\n [0. 0. 0. 2.]]\n"
]
],
[
[
"### Question 3",
"_____no_output_____"
]
],
[
[
"A = np.array([2,7,4])\nB = np.array([3,9,8])\n\ncross = np.cross(A,B)\nprint(cross)",
"[20 -4 -3]\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e75df9b4e543f145ec72511c015a2c071a3bfdd6 | 17,540 | ipynb | Jupyter Notebook | 3. Initial file generation of CSVs for the graph generation/basal_ganglia_xlsx_to_csv.ipynb | marenpg/jupyter_basal_ganglia | ab4bb2034f559ecdad3a507edd752c290670c2df | [
"CC-BY-4.0"
] | null | null | null | 3. Initial file generation of CSVs for the graph generation/basal_ganglia_xlsx_to_csv.ipynb | marenpg/jupyter_basal_ganglia | ab4bb2034f559ecdad3a507edd752c290670c2df | [
"CC-BY-4.0"
] | null | null | null | 3. Initial file generation of CSVs for the graph generation/basal_ganglia_xlsx_to_csv.ipynb | marenpg/jupyter_basal_ganglia | ab4bb2034f559ecdad3a507edd752c290670c2df | [
"CC-BY-4.0"
] | null | null | null | 58.07947 | 395 | 0.621779 | [
[
[
"# Generate CSV-files from database Excel files\n\nThis script converts the Excel files that are extracted from the original rodent basal ganglia database into CSV files that are used by the notebook that generates the database, `1. Create the Rodent Basal Ganglia Graph.ipynb`.\n\nThe project provides these files, so it is not necessary to run these scripts.",
"_____no_output_____"
]
],
[
[
"## REGIONS\nimport pandas as pd\n\nfiles = [\n (\"../Data/xlsx/basal_ganglia_regions.xlsx\", \"regions_other\", \"../Data/csvs/basal_ganglia/regions/regions_other.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_regions.xlsx\", \"region_records\", \"../Data/csvs/basal_ganglia/regions/region_records.csv\", {\"Original_framework\":int, \"Documentation_score\":int}),\n (\"../Data/xlsx/basal_ganglia_regions.xlsx\", \"original_region_records\", \"../Data/csvs/basal_ganglia/regions/original_region_records.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_regions.xlsx\", \"regions\", \"../Data/csvs/basal_ganglia/regions/regions.csv\", {\"Nomenclature\":int}),\n (\"../Data/xlsx/basal_ganglia_regions.xlsx\", \"region_zones\", \"../Data/csvs/basal_ganglia/regions/region_zones.csv\", {}), \n (\"../Data/xlsx/basal_ganglia_regions.xlsx\", \"nomenclatures\", \"../Data/csvs/basal_ganglia/regions/nomenclatures.csv\", {\"Version\":int, \"Published\":int}), \n (\"../Data/xlsx/basal_ganglia_regions.xlsx\", \"nomenclatures_other\", \"../Data/csvs/basal_ganglia/regions/nomenclatures_other.csv\", {\"Version\":int, \"Published\":int, \"Strain\": int}), \n (\"../Data/xlsx/basal_ganglia_regions.xlsx\", \"BAMS_region_mapping\", \"../Data/csvs/basal_ganglia/regions/BAMS_region_mapping.csv\", {}), \n]\n\nfor xlsx, sheet, csv, converters in files:\n data_xls = pd.read_excel(xlsx, sheet, index_col=None, converters = converters)\n data_xls.to_csv(csv, encoding='utf-8')\n \nprint(\"Converted all regions from xlsx to csv\")\n\n\n## Csvs of brain region, remove prefix in names\nimport pandas as pd\nimport re \n\nroot = \"../Data/csvs/basal_ganglia/regions\"\n\n# Load region csvs\ndf_region = pd.read_csv(root + \"/regions.csv\", dtype=\"object\")\ndf_region_other = pd.read_csv(root + \"/regions_other.csv\", dtype=\"object\")\n\n# Remove the prefix of the name as it only adds a relation to the nomenclature which we again add later\ndf_region[\"Region_name\"] = [re.sub(r'\\w*\\_','', str(x)) for x in 
df_region['Region_name']]\ndf_region_other[\"Region_name\"] = [re.sub(r'\\w*\\_','', str(x)) for x in df_region_other['Region_name']]\n\ndf_region[\"Region_name\"] = [ (x[0].upper() + x[1:]) for x in df_region[\"Region_name\"]]\n\n# Store in common csv\ndf_region.to_csv(root + \"/regions.csv\", encoding='utf-8')\ndf_region_other.to_csv(root + \"/regions_other.csv\", encoding='utf-8')\n\nprint(\"Csv with all regions fixed for regions.csv and regions_other.csv\")",
"Converted all regions from xlsx to csv\nCsv with all regions fixed for regions.csv and regions_other.csv\n"
],
[
"## SUBJECTS\nimport pandas as pd\n\nfiles = [\n (\"../Data/xlsx/basal_ganglia_subjects.xlsx\", \"Age_categories\", \"../Data/csvs/basal_ganglia/subjects/age_categories.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_subjects.xlsx\", \"Sex\", \"../Data/csvs/basal_ganglia/subjects/sex.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_subjects.xlsx\", \"Species\", \"../Data/csvs/basal_ganglia/subjects/species.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_subjects.xlsx\", \"Specimens\", \"../Data/csvs/basal_ganglia/subjects/specimens.csv\", {\"Experiment_ID\":int}),\n (\"../Data/xlsx/basal_ganglia_subjects.xlsx\", \"Strains\", \"../Data/csvs/basal_ganglia/subjects/strains.csv\", {}), \n (\"../Data/xlsx/basal_ganglia_subjects.xlsx\", \"Substrains\", \"../Data/csvs/basal_ganglia/subjects/substrains.csv\", {\"Strain_name\":int}), \n (\"../Data/xlsx/basal_ganglia_subjects.xlsx\", \"Transgenic_lines\", \"../Data/csvs/basal_ganglia/subjects/transgenic_lines.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"derived_ddr_specimen\", \"../Data/csvs/basal_ganglia/subjects/derived_ddr_specimen.csv\", {\"analysisId\": int, \"SpecimenId\":int, \"Species\":int, \"Strain\":int, \"Substrain\":int, \"Transgenic_line_name\":int, \"Sex\":int, \"ageId\":int})\n]\n\nfor xlsx, sheet, csv, converters in files:\n data_xls = pd.read_excel(xlsx, sheet, index_col=None, converters = converters)\n data_xls.to_csv(csv, encoding='utf-8')\n \nprint(\"Converted all subjects from xlsx to csv\")",
"Converted all subjects from xlsx to csv\n"
],
[
"## EXPERIMENTS\nimport pandas as pd\n\nfiles = [\n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Calculations\", \"../Data/csvs/basal_ganglia/experiments/calculations.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Cell_morphologies\", \"../Data/csvs/basal_ganglia/experiments/cell_morphologies.csv\", {\"Region_zone\":int}),\n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Derived_data_EMdetails\", \"../Data/csvs/basal_ganglia/experiments/derived_data_EMdetails.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Derived_data_LFMdetails\", \"../Data/csvs/basal_ganglia/experiments/derived_data_LFMdetails.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Derived_data_records\", \"../Data/csvs/basal_ganglia/experiments/derived_data_records.csv\", {\"Number_of_animals\":int, \"Cell_type_putative\":int, \"Object_of_interest\":int, \"Visualization_method\":int }), \n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Distributions\", \"../Data/csvs/basal_ganglia/experiments/distributions.csv\", {\"Stereology_details_record\":int, \"Related_quantitation\":int, \"Software\":int, \"Cellular_region\":int}), \n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Electron_microscopy_details\", \"../Data/csvs/basal_ganglia/experiments/electron_microscopy_details.csv\", {\"Magnification\":int}), \n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Experiments\", \"../Data/csvs/basal_ganglia/experiments/experiments.csv\", {\"Strain\":int, \"Substrain\":int, \"Transgenic_line_name\":int, \"Sex\":int, \"Age_lower_limit\":int, \"Age_upper_limit\":int, \"Age\":int, \"Weight_lower_limit\":int, \"Weight_upper_limit\":int, \"Anaesthetic\":int, \"Perfusion_fix_medium\":int}), \n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Light_fluorescence_microscopy_d\", \"../Data/csvs/basal_ganglia/experiments/light_fluorescence_microscopy_details.csv\", {}), \n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Microscopes\", 
\"../Data/csvs/basal_ganglia/experiments/microscopes.csv\", {}), \n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Quantitations\", \"../Data/csvs/basal_ganglia/experiments/quantitations.csv\", {\"Region_zone\":int,\"Cellular_target_region\":int,\"Cellular_target_ID\":int, \"Software\":int,\"Stereology_details_record\":int}), \n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Reporter_incubations\", \"../Data/csvs/basal_ganglia/experiments/reporter_incubations.csv\", {}), \n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Reporter_labels\", \"../Data/csvs/basal_ganglia/experiments/reporter_labels.csv\", {}), \n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Reporter_targets\", \"../Data/csvs/basal_ganglia/experiments/reporter_targets.csv\", {}), \n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"ReporterType\", \"../Data/csvs/basal_ganglia/experiments/reporter_types.csv\", {}), \n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Reporters\", \"../Data/csvs/basal_ganglia/experiments/reporters.csv\", {\"Target\":int, \"Label\":int}), \n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Sectioning_details\", \"../Data/csvs/basal_ganglia/experiments/sectioning_details.csv\", {\"Section_thickness\":int,\"Sectioning_instrument\":int}),\n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Sectioning_instruments\", \"../Data/csvs/basal_ganglia/experiments/sectioning_instruments.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Single_cell_labelings\", \"../Data/csvs/basal_ganglia/experiments/single_cell_labelings.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Softwares\", \"../Data/csvs/basal_ganglia/experiments/softwares.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Solutions\", \"../Data/csvs/basal_ganglia/experiments/solutions.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Specimen_treatments\", \"../Data/csvs/basal_ganglia/experiments/specimen_treatments.csv\", 
{\"Specimen_ID\":int,\"Solution\":int}),\n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Stereology_details\", \"../Data/csvs/basal_ganglia/experiments/stereology_details.csv\", {\"Disector_height\":int, \"Investigated_sections\":int, \"Investigated_fields\":int, \"Counted_objects\":int}),\n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"Visualization_protocols\", \"../Data/csvs/basal_ganglia/experiments/visualization_protocols.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"derived_ddr_reporters\", \"../Data/csvs/basal_ganglia/experiments/derived_ddr_reporters.csv\", {\"analysisId\": int, \"SpecimenId\":int, \"reporterId\":int}), \n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"derived_ddr_sectioning_details\", \"../Data/csvs/basal_ganglia/experiments/derived_ddr_sectioning_details.csv\", {\"analysisId\": int, \"SpecimenId\":int, \"sectioningDetailsId\":int}),\n (\"../Data/xlsx/basal_ganglia_experiments.xlsx\", \"derived_ddr_experiments\", \"../Data/csvs/basal_ganglia/experiments/derived_ddr_experiments.csv\", {\"analysisId\": int, \"specimenId\":int, \"analysisId\":int})\n]\n\nfor xlsx, sheet, csv, converters in files:\n data_xls = pd.read_excel(xlsx, sheet, index_col=None, converters = converters)\n data_xls.to_csv(csv, encoding='utf-8')\n \nprint(\"Converted all experiments from xlsx to csv\")",
"Converted all experiments from xlsx to csv\n"
],
[
"## SOURCES\nimport pandas as pd\n\nfiles = [\n (\"../Data/xlsx/basal_ganglia_sources.xlsx\", \"Sources\", \"../Data/csvs/basal_ganglia/sources/sources.csv\", {\"Source_publication_year\":int, \"Source_origin\":int}),\n (\"../Data/xlsx/basal_ganglia_sources.xlsx\", \"Source_origins_lookup\", \"../Data/csvs/basal_ganglia/sources/source_origins_lookup.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_sources.xlsx\", \"Considered_papers\", \"../Data/csvs/basal_ganglia/sources/considered_papers.csv\", {\"Published\":int, \"Journal\":int}),\n (\"../Data/xlsx/basal_ganglia_sources.xlsx\", \"Considered_papers_desicions\", \"../Data/csvs/basal_ganglia/sources/considered_papers_desicions.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_sources.xlsx\", \"Journals\", \"../Data/csvs/basal_ganglia/sources/journals.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_sources.xlsx\", \"Repositories\", \"../Data/csvs/basal_ganglia/sources/repositories.csv\", {}),\n]\n\nfor xlsx, sheet, csv, converters in files:\n data_xls = pd.read_excel(xlsx, sheet, index_col=None, converters = converters)\n data_xls.to_csv(csv, encoding='utf-8')\n\nprint(\"Converted all sources from xlsx to csv\")",
"Converted all sources from xlsx to csv\n"
],
[
"## CELLS\nimport pandas as pd\n\nfiles = [\n (\"../Data/xlsx/basal_ganglia_cells.xlsx\", \"Cell_phenotype_categories\", \"../Data/csvs/basal_ganglia/cells/cell_phenotype_categories.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_cells.xlsx\", \"Cell_phenotypes\", \"../Data/csvs/basal_ganglia/cells/cell_phenotypes.csv\", {\"Phenotype_category\":int}),\n (\"../Data/xlsx/basal_ganglia_cells.xlsx\", \"Cell_type_classifications\", \"../Data/csvs/basal_ganglia/cells/cell_type_classifications.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_cells.xlsx\", \"Cell_types\", \"../Data/csvs/basal_ganglia/cells/cell_types.csv\", {\"Cell_class_membership\":int}),\n (\"../Data/xlsx/basal_ganglia_cells.xlsx\", \"Cell_classes\", \"../Data/csvs/basal_ganglia/cells/cell_classes.csv\", {\"Cell_group_membership\":int}),\n (\"../Data/xlsx/basal_ganglia_cells.xlsx\", \"Cell_groups\", \"../Data/csvs/basal_ganglia/cells/cell_groups.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_cells.xlsx\", \"Cellular_regions\", \"../Data/csvs/basal_ganglia/cells/cellular_regions.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_cells.xlsx\", \"Objects_of_interest\", \"../Data/csvs/basal_ganglia/cells/objects_of_interest.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_cells.xlsx\", \"cell_phenotype_type_categories\", \"../Data/csvs/basal_ganglia/cells/cell_phenotype_type_categories.csv\", {}),\n (\"../Data/xlsx/basal_ganglia_cells.xlsx\", \"celltypes_phenotypes\", \"../Data/csvs/basal_ganglia/cells/celltypes_phenotypes.csv\", {}),\n]\n\nfor xlsx, sheet, csv, converters in files:\n data_xls = pd.read_excel(xlsx, sheet, index_col=None, converters = converters)\n data_xls.to_csv(csv, encoding='utf-8')\n\nprint(\"Converted all Cells from xlsx to csv\")",
"Converted all Cells from xlsx to csv\n"
],
[
"## Analysis similarities\nimport pandas as pd\n\nfiles = [\n (\"../Data/xlsx/analyses_similarity.xlsx\", \"rat\", \"../Data/csvs/graph/analyses_similarity_rat.csv\", {\"id1\":int, \"id2\":int, \"similarity\":float}),\n (\"../Data/xlsx/analyses_similarity.xlsx\", \"mouse\", \"../Data/csvs/graph/analyses_similarity_mouse.csv\", {\"id1\":int, \"id2\":int, \"similarity\":float}),\n (\"../Data/xlsx/analyses_similarity.xlsx\", \"rat2\", \"../Data/csvs/graph/analyses_similarity_rat_2.csv\", {\"id1\":int, \"id2\":int, \"similarity\":float}),\n (\"../Data/xlsx/analyses_similarity.xlsx\", \"rat_all\", \"../Data/csvs/graph/analyses_similarity_rat_all.csv\", {\"id1\":int, \"id2\":int, \"similarity\":float}),\n (\"../Data/xlsx/analyses_similarity.xlsx\", \"mouse_all\", \"../Data/csvs/graph/analyses_similarity_mouse_all.csv\", {\"id1\":int, \"id2\":int, \"similarity\":float}),\n\n]\n\nfor xlsx, sheet, csv, converters in files:\n data_xls = pd.read_excel(xlsx, sheet, index_col=None, converters = converters)\n data_xls.to_csv(csv, encoding='utf-8')\n\nprint(\"Converted all analysis similarities from xlsx to csv\")",
"Converted all analysis similarities from xlsx to csv\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75e0414aecf67d4867030bde146689538a8b5d8 | 75,429 | ipynb | Jupyter Notebook | courses/machine_learning/deepdive2/launching_into_ml/labs/automl_text_classification.ipynb | mohakala/training-data-analyst | 2b0e2647d528a4d0fb5c589e8c549836590b60cf | [
"Apache-2.0"
] | 3 | 2021-09-26T00:11:36.000Z | 2021-12-06T05:55:25.000Z | courses/machine_learning/deepdive2/launching_into_ml/labs/automl_text_classification.ipynb | mohakala/training-data-analyst | 2b0e2647d528a4d0fb5c589e8c549836590b60cf | [
"Apache-2.0"
] | 1 | 2021-09-29T10:41:09.000Z | 2021-09-29T10:42:39.000Z | courses/machine_learning/deepdive2/launching_into_ml/labs/automl_text_classification.ipynb | mohakala/training-data-analyst | 2b0e2647d528a4d0fb5c589e8c549836590b60cf | [
"Apache-2.0"
] | 2 | 2021-10-03T20:44:21.000Z | 2021-12-08T23:15:06.000Z | 48.135929 | 391 | 0.643572 | [
[
[
"# Vertex AI: Create, train, and deploy an AutoML text classification model\n\n## Learning Objective\n\nIn this notebook, you learn how to:\n\n* Create a dataset and import data\n* Train an AutoML model\n* Get and review evaluations for the model\n* Deploy a model to an endpoint\n* Get online predictions\n* Get batch predictions\n\n## Introduction\n\nThis notebook walks you through the major phases of building and using a text classification model on [Vertex AI](https://cloud.google.com/vertex-ai/docs/). In this notebook, you use the \"Happy Moments\" sample dataset to train a model. The resulting model classifies happy moments into categories that reflect the causes of happiness. \n\nEach learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/automl_text_classification.ipynb). \n\n**Make sure to enable the Vertex AI, Cloud Storage, and Compute Engine APIs.**",
"_____no_output_____"
],
[
"### Install additional packages\n\nThis notebook uses the Python SDK for Vertex AI, which is contained in the `python-aiplatform` package. You must first install the package into your development environment.",
"_____no_output_____"
]
],
[
[
"# Setup your dependencies\nimport os\n\n# The Google Cloud Notebook product has specific requirements\nIS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\"/opt/deeplearning/metadata/env_version\")\n\n# Google Cloud Notebook requires dependencies to be installed with '--user'\nUSER_FLAG = \"\"\nif IS_GOOGLE_CLOUD_NOTEBOOK:\n USER_FLAG = \"--user\"\n\n# Upgrade the specified package to the newest available version\n! pip install {USER_FLAG} --upgrade google-cloud-aiplatform google-cloud-storage jsonlines",
"Requirement already satisfied: google-cloud-aiplatform in /opt/conda/lib/python3.7/site-packages (1.1.1)\nCollecting google-cloud-aiplatform\n Downloading google_cloud_aiplatform-1.3.0-py2.py3-none-any.whl (1.3 MB)\n\u001b[K |████████████████████████████████| 1.3 MB 8.6 MB/s eta 0:00:01\n\u001b[?25hRequirement already satisfied: google-cloud-storage in /opt/conda/lib/python3.7/site-packages (1.41.1)\nCollecting google-cloud-storage\n Downloading google_cloud_storage-1.42.0-py2.py3-none-any.whl (105 kB)\n\u001b[K |████████████████████████████████| 105 kB 48.2 MB/s eta 0:00:01\n\u001b[?25hCollecting jsonlines\n Downloading jsonlines-2.0.0-py3-none-any.whl (6.3 kB)\nRequirement already satisfied: proto-plus>=1.10.1 in /opt/conda/lib/python3.7/site-packages (from google-cloud-aiplatform) (1.19.0)\nRequirement already satisfied: google-api-core[grpc]<3.0.0dev,>=1.26.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-aiplatform) (1.31.1)\nRequirement already satisfied: packaging>=14.3 in /opt/conda/lib/python3.7/site-packages (from google-cloud-aiplatform) (21.0)\nRequirement already satisfied: google-cloud-bigquery<3.0.0dev,>=1.15.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-aiplatform) (2.23.2)\nRequirement already satisfied: google-cloud-core<3.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage) (1.7.2)\nRequirement already satisfied: google-auth<3.0dev,>=1.25.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage) (1.34.0)\nRequirement already satisfied: google-resumable-media<3.0dev,>=1.3.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage) (1.3.2)\nRequirement already satisfied: requests<3.0.0dev,>=2.18.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage) (2.25.1)\nRequirement already satisfied: pytz in /opt/conda/lib/python3.7/site-packages (from google-api-core[grpc]<3.0.0dev,>=1.26.0->google-cloud-aiplatform) (2021.1)\nRequirement already 
satisfied: protobuf>=3.12.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core[grpc]<3.0.0dev,>=1.26.0->google-cloud-aiplatform) (3.16.0)\nRequirement already satisfied: setuptools>=40.3.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core[grpc]<3.0.0dev,>=1.26.0->google-cloud-aiplatform) (49.6.0.post20210108)\nRequirement already satisfied: six>=1.13.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core[grpc]<3.0.0dev,>=1.26.0->google-cloud-aiplatform) (1.16.0)\nRequirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core[grpc]<3.0.0dev,>=1.26.0->google-cloud-aiplatform) (1.53.0)\nRequirement already satisfied: grpcio<2.0dev,>=1.29.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core[grpc]<3.0.0dev,>=1.26.0->google-cloud-aiplatform) (1.38.1)\nRequirement already satisfied: rsa<5,>=3.1.4 in /opt/conda/lib/python3.7/site-packages (from google-auth<3.0dev,>=1.25.0->google-cloud-storage) (4.7.2)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from google-auth<3.0dev,>=1.25.0->google-cloud-storage) (4.2.2)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.7/site-packages (from google-auth<3.0dev,>=1.25.0->google-cloud-storage) (0.2.7)\nRequirement already satisfied: google-crc32c<2.0dev,>=1.0 in /opt/conda/lib/python3.7/site-packages (from google-resumable-media<3.0dev,>=1.3.0->google-cloud-storage) (1.1.2)\nRequirement already satisfied: cffi>=1.0.0 in /opt/conda/lib/python3.7/site-packages (from google-crc32c<2.0dev,>=1.0->google-resumable-media<3.0dev,>=1.3.0->google-cloud-storage) (1.14.6)\nRequirement already satisfied: pycparser in /opt/conda/lib/python3.7/site-packages (from cffi>=1.0.0->google-crc32c<2.0dev,>=1.0->google-resumable-media<3.0dev,>=1.3.0->google-cloud-storage) (2.20)\nRequirement already satisfied: pyparsing>=2.0.2 in 
/opt/conda/lib/python3.7/site-packages (from packaging>=14.3->google-cloud-aiplatform) (2.4.7)\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /opt/conda/lib/python3.7/site-packages (from pyasn1-modules>=0.2.1->google-auth<3.0dev,>=1.25.0->google-cloud-storage) (0.4.8)\nRequirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-cloud-storage) (2021.5.30)\nRequirement already satisfied: chardet<5,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-cloud-storage) (4.0.0)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-cloud-storage) (1.26.6)\nRequirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-cloud-storage) (2.10)\nInstalling collected packages: google-cloud-storage, jsonlines, google-cloud-aiplatform\n\u001b[33m WARNING: The script tb-gcp-uploader is installed in '/home/jupyter/.local/bin' which is not on PATH.\n Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.\u001b[0m\nSuccessfully installed google-cloud-aiplatform-1.3.0 google-cloud-storage-1.42.0 jsonlines-2.0.0\n"
]
],
[
[
"Please ignore any incompatibility warnings.\n",
"_____no_output_____"
],
[
"**Restart** the kernel before proceeding further (On the Notebook menu - Kernel - Restart Kernel).\n",
"_____no_output_____"
],
[
"### Set your project ID\n\nFinally, you must initialize the client library before you can send requests to the Vertex AI service. With the Python SDK, you initialize the client library as shown in the following cell. This tutorial also uses the Cloud Storage Python library for accessing batch prediction results.\n\nBe sure to provide the ID for your Google Cloud project in the `project` variable. This notebook uses the `us-central1` region, although you can change it to another region. \n\n**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.",
"_____no_output_____"
]
],
[
[
"# import necessary libraries\nimport os\nfrom datetime import datetime\n\nimport jsonlines\nfrom google.cloud import aiplatform, storage\nfrom google.protobuf import json_format\n\nPROJECT_ID = \"[your-project-id]\"\nREGION = \"us-central1\"\n\n# Get your Google Cloud project ID from gcloud\nif not os.getenv(\"IS_TESTING\"):\n shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID: \", PROJECT_ID)\n\naiplatform.init(project=PROJECT_ID, location=REGION)",
"Project ID: qwiklabs-gcp-00-09d98f4803b0\n"
]
],
[
[
"## Create a dataset and import your data\n\nThe notebook uses the 'Happy Moments' dataset for demonstration purposes. You can change it to another text classification dataset that [conforms to the data preparation requirements](https://cloud.google.com/vertex-ai/docs/datasets/prepare-text#classification).\n\nUsing the Python SDK, you can create a dataset and import the dataset in one call to `TextDataset.create()`, as shown in the following cell.\n\nCreating and importing data is a long-running operation. This next step can take a while. The sample waits for the operation to complete, outputting statements as the operation progresses. The statements contain the full name of the dataset that you will use in the following section.\n\n**Note**: You can close the notebook while you wait for this operation to complete. ",
"_____no_output_____"
]
],
[
[
"# TODO\n# Use a timestamp to ensure unique resources\nTIMESTAMP = # TODO: Your code goes here\n\nsrc_uris = \"gs://cloud-ml-data/NL-classification/happiness.csv\"\ndisplay_name = f\"e2e-text-dataset-{TIMESTAMP}\"",
"_____no_output_____"
],
[
"# TODO\n# create a dataset and import the dataset\nds = # TODO: Your code goes here(\n display_name=display_name,\n gcs_source=src_uris,\n import_schema_uri=aiplatform.schema.dataset.ioformat.text.single_label_classification,\n sync=True,\n)",
"INFO:google.cloud.aiplatform.datasets.dataset:Creating TextDataset\nINFO:google.cloud.aiplatform.datasets.dataset:Create TextDataset backing LRO: projects/259224131669/locations/us-central1/datasets/7829200088927830016/operations/2215787784218607616\nINFO:google.cloud.aiplatform.datasets.dataset:TextDataset created. Resource name: projects/259224131669/locations/us-central1/datasets/7829200088927830016\nINFO:google.cloud.aiplatform.datasets.dataset:To use this TextDataset in another session:\nINFO:google.cloud.aiplatform.datasets.dataset:ds = aiplatform.TextDataset('projects/259224131669/locations/us-central1/datasets/7829200088927830016')\nINFO:google.cloud.aiplatform.datasets.dataset:Importing TextDataset data: projects/259224131669/locations/us-central1/datasets/7829200088927830016\nINFO:google.cloud.aiplatform.datasets.dataset:Import TextDataset data backing LRO: projects/259224131669/locations/us-central1/datasets/7829200088927830016/operations/7120207778425077760\nINFO:google.cloud.aiplatform.datasets.dataset:TextDataset data imported. Resource name: projects/259224131669/locations/us-central1/datasets/7829200088927830016\n"
]
],
[
[
"## Train your text classification model\n\nOnce your dataset has finished importing data, you are ready to train your model. To do this, you first need the full resource name of your dataset, where the full name has the format `projects/[YOUR_PROJECT]/locations/us-central1/datasets/[YOUR_DATASET_ID]`. If you don't have the resource name handy, you can list all of the datasets in your project using `TextDataset.list()`. \n\nAs shown in the following code block, you can pass in the display name of your dataset in the call to `list()` to filter the results.\n",
"_____no_output_____"
]
],
[
[
"# TODO\n# list all of the datasets in your project\ndatasets = # TODO: Your code goes here(filter=f'display_name=\"{display_name}\"')\nprint(datasets)",
"[<google.cloud.aiplatform.datasets.text_dataset.TextDataset object at 0x7fe2544d6dd0> \nresource name: projects/259224131669/locations/us-central1/datasets/7829200088927830016]\n"
]
],
[
[
"When you create a new model, you need a reference to the `TextDataset` object that corresponds to your dataset. You can use the `ds` variable you created previously when you created the dataset or you can also list all of your datasets to get a reference to your dataset. Each item returned from `TextDataset.list()` is an instance of `TextDataset`.\n\nThe following code block shows how to instantiate a `TextDataset` object using a dataset ID. Note that this code is intentionally verbose for demonstration purposes.",
"_____no_output_____"
]
],
[
[
"# Get the dataset ID if it's not available\ndataset_id = \"[your-dataset-id]\"\n\nif dataset_id == \"[your-dataset-id]\":\n # Use the reference to the new dataset captured when we created it\n dataset_id = ds.resource_name.split(\"/\")[-1]\n print(f\"Dataset ID: {dataset_id}\")\n\ntext_dataset = aiplatform.TextDataset(dataset_id)",
"Dataset ID: 7829200088927830016\n"
]
],
[
[
"Now you can begin training your model. Training the model is a two-part process:\n\n1. **Define the training job.** You must provide a display name and the type of training you want when you define the training job.\n2. **Run the training job.** When you run the training job, you need to supply a reference to the dataset to use for training. At this step, you can also configure the data split percentages.\n\nYou do not need to specify [data splits](https://cloud.google.com/vertex-ai/docs/general/ml-use). The training job has a default split of 80% training, 10% testing, and 10% validation if you don't provide these values.\n\nTo train your model, you call `AutoMLTextTrainingJob.run()` as shown in the following snippets. The method returns a reference to your new `Model` object.\n\nAs with importing data into the dataset, training your model can take a substantial amount of time. The client library prints out operation status messages while the training pipeline operation processes. You must wait for the training process to complete before you can get the resource name and ID of your new model, which is required for model evaluation and model deployment.\n\n**Note**: You can close the notebook while you wait for the operation to complete.",
"_____no_output_____"
]
],
[
[
"# Define the training job\ntraining_job_display_name = f\"e2e-text-training-job-{TIMESTAMP}\"\n# TODO\n# constructs a AutoML Text Training Job\njob = # TODO: Your code goes here(\n display_name=training_job_display_name,\n prediction_type=\"classification\",\n multi_label=False,\n)",
"_____no_output_____"
],
[
"model_display_name = f\"e2e-text-classification-model-{TIMESTAMP}\"\n\n# TODO\n# Run the training job\nmodel = # TODO: Your code goes here(\n dataset=text_dataset,\n model_display_name=model_display_name,\n training_fraction_split=0.7,\n validation_fraction_split=0.2,\n test_fraction_split=0.1,\n sync=True,\n)",
"INFO:google.cloud.aiplatform.training_jobs:View Training:\nhttps://console.cloud.google.com/ai/platform/locations/us-central1/training/1280924449289273344?project=259224131669\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_PENDING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_PENDING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current 
state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob 
projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344 current 
state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob run completed. 
Resource name: projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344\nINFO:google.cloud.aiplatform.training_jobs:Model available at projects/259224131669/locations/us-central1/models/274218199967334400\n"
]
],
[
[
"## Get and review model evaluation scores\n\nAfter your model has finished training, you can review its evaluation scores.\n\nFirst, you need to get a reference to the new model. As with datasets, you can either use the reference stored in the `model` variable you created when you trained the model, or you can list all of the models in your project. When listing your models, you can provide filter criteria to narrow down your search.",
"_____no_output_____"
]
],
[
[
"# List the Vertex AI models that match the display name\nmodels = aiplatform.Model.list(filter=f'display_name=\"{model_display_name}\"')\nprint(models)",
"[<google.cloud.aiplatform.models.Model object at 0x7fe24dedda90> \nresource name: projects/259224131669/locations/us-central1/models/274218199967334400]\n"
]
],
[
[
"Using the model name (in the format `projects/[PROJECT_NAME]/locations/us-central1/models/[MODEL_ID]`), you can get its model evaluations. To get model evaluations, you must use the underlying service client.\n\nBuilding a service client requires that you provide the name of the regionalized hostname used for your model. In this tutorial, the hostname is `us-central1-aiplatform.googleapis.com` because the model was created in the `us-central1` location.",
"_____no_output_____"
]
],
[
[
"# Get the ID of the model\nmodel_name = \"[your-model-resource-name]\"\nif model_name == \"[your-model-resource-name]\":\n # Use the `resource_name` of the Model instance you created previously\n model_name = model.resource_name\n print(f\"Model name: {model_name}\")\n\n\n# Get a reference to the Model Service client\nclient_options = {\"api_endpoint\": \"us-central1-aiplatform.googleapis.com\"}\nmodel_service_client = aiplatform.gapic.ModelServiceClient(\n client_options=client_options\n)",
"Model name: projects/259224131669/locations/us-central1/models/274218199967334400\n"
]
],
[
[
"Before you can view the model evaluation, you must first list all of the evaluations for that model. Each model can have multiple evaluations, although a new model is likely to have only one.",
"_____no_output_____"
]
],
[
[
"model_evaluations = model_service_client.list_model_evaluations(parent=model_name)\nmodel_evaluation = list(model_evaluations)[0]",
"_____no_output_____"
]
],
[
[
"Now that you have the model evaluation, you can look at your model's scores. If you have questions about what the scores mean, review the [public documentation](https://cloud.google.com/vertex-ai/docs/training/evaluating-automl-models#text).\n\nThe results returned from the service are formatted as [`google.protobuf.Value`](https://googleapis.dev/python/protobuf/latest/google/protobuf/struct_pb2.html) objects. You can transform the returned object into a `dict` for easier reading and parsing.",
"_____no_output_____"
]
],
[
[
"model_eval_dict = json_format.MessageToDict(model_evaluation._pb)\nmetrics = model_eval_dict[\"metrics\"]\nconfidence_metrics = metrics[\"confidenceMetrics\"]\n\nprint(f'Area under precision-recall curve (AuPRC): {metrics[\"auPrc\"]}')\nfor confidence_scores in confidence_metrics:\n metrics = confidence_scores.keys()\n print(\"\\n\")\n for metric in metrics:\n print(f\"\\t{metric}: {confidence_scores[metric]}\")",
"Area under precision-recall curve (AuPRC): 0.9556533\n\n\n\trecallAt1: 0.8852321\n\tprecisionAt1: 0.8852321\n\trecall: 1.0\n\tprecision: 0.14285715\n\tf1ScoreAt1: 0.8852321\n\tf1Score: 0.25\n\n\n\tf1Score: 0.82728595\n\tf1ScoreAt1: 0.8852321\n\trecall: 0.96202534\n\tprecision: 0.72565246\n\trecallAt1: 0.8852321\n\tprecisionAt1: 0.8852321\n\tconfidenceThreshold: 0.05\n\n\n\tprecision: 0.7781617\n\tprecisionAt1: 0.8852321\n\tf1Score: 0.8556231\n\tf1ScoreAt1: 0.8852321\n\trecallAt1: 0.8852321\n\trecall: 0.950211\n\tconfidenceThreshold: 0.1\n\n\n\tf1ScoreAt1: 0.8852321\n\tprecisionAt1: 0.8852321\n\trecallAt1: 0.8852321\n\trecall: 0.9409283\n\tf1Score: 0.87075365\n\tconfidenceThreshold: 0.15\n\tprecision: 0.8103198\n\n\n\trecallAt1: 0.8852321\n\trecall: 0.9316456\n\tf1Score: 0.87653834\n\tprecisionAt1: 0.8852321\n\tprecision: 0.82758623\n\tconfidenceThreshold: 0.2\n\tf1ScoreAt1: 0.8852321\n\n\n\tf1ScoreAt1: 0.8852321\n\trecall: 0.9206751\n\tf1Score: 0.8794841\n\trecallAt1: 0.8852321\n\tprecisionAt1: 0.8852321\n\tconfidenceThreshold: 0.25\n\tprecision: 0.841821\n\n\n\tprecisionAt1: 0.8852321\n\trecall: 0.9105485\n\tconfidenceThreshold: 0.3\n\tprecision: 0.85363925\n\trecallAt1: 0.8852321\n\tf1ScoreAt1: 0.8852321\n\tf1Score: 0.881176\n\n\n\tf1Score: 0.8820132\n\tprecisionAt1: 0.8852321\n\tprecision: 0.86279255\n\trecallAt1: 0.8852321\n\trecall: 0.9021097\n\tf1ScoreAt1: 0.8852321\n\tconfidenceThreshold: 0.35\n\n\n\tconfidenceThreshold: 0.4\n\tf1ScoreAt1: 0.8854123\n\trecall: 0.8953586\n\tf1Score: 0.8871237\n\tprecisionAt1: 0.88728815\n\tprecision: 0.87903893\n\trecallAt1: 0.8835443\n\n\n\trecall: 0.8835443\n\tprecisionAt1: 0.8906917\n\tf1Score: 0.8861617\n\trecallAt1: 0.8801688\n\tconfidenceThreshold: 0.45\n\tf1ScoreAt1: 0.885399\n\tprecision: 0.88879454\n\n\n\trecallAt1: 0.87257385\n\tprecisionAt1: 0.8913793\n\tconfidenceThreshold: 0.5\n\tf1Score: 0.88187635\n\trecall: 0.87257385\n\tprecision: 0.8913793\n\tf1ScoreAt1: 0.88187635\n\n\n\tf1Score: 
0.88093185\n\tprecisionAt1: 0.9011474\n\trecall: 0.8616034\n\tf1ScoreAt1: 0.88093185\n\tprecision: 0.9011474\n\trecallAt1: 0.8616034\n\tconfidenceThreshold: 0.55\n\n\n\tconfidenceThreshold: 0.6\n\trecall: 0.85316455\n\trecallAt1: 0.85316455\n\tprecisionAt1: 0.9124549\n\tprecision: 0.9124549\n\tf1Score: 0.88181424\n\tf1ScoreAt1: 0.88181424\n\n\n\tconfidenceThreshold: 0.65\n\trecall: 0.8413502\n\tf1ScoreAt1: 0.8784141\n\tf1Score: 0.8784141\n\tprecisionAt1: 0.918894\n\trecallAt1: 0.8413502\n\tprecision: 0.918894\n\n\n\trecall: 0.8303797\n\tprecision: 0.92742693\n\tf1ScoreAt1: 0.8762244\n\tf1Score: 0.8762244\n\trecallAt1: 0.8303797\n\tconfidenceThreshold: 0.7\n\tprecisionAt1: 0.92742693\n\n\n\tprecisionAt1: 0.9354528\n\trecallAt1: 0.8194093\n\tconfidenceThreshold: 0.75\n\tprecision: 0.9354528\n\tf1Score: 0.8735943\n\tf1ScoreAt1: 0.8735943\n\trecall: 0.8194093\n\n\n\tprecisionAt1: 0.94077\n\tprecision: 0.94077\n\trecall: 0.8042194\n\tconfidenceThreshold: 0.8\n\tf1ScoreAt1: 0.867152\n\trecallAt1: 0.8042194\n\tf1Score: 0.867152\n\n\n\tprecisionAt1: 0.9483806\n\trecallAt1: 0.7907173\n\trecall: 0.7907173\n\tprecision: 0.9483806\n\tf1ScoreAt1: 0.8624022\n\tconfidenceThreshold: 0.85\n\tf1Score: 0.8624022\n\n\n\trecallAt1: 0.7763713\n\tprecisionAt1: 0.9533679\n\tf1Score: 0.855814\n\trecall: 0.7763713\n\tf1ScoreAt1: 0.855814\n\tprecision: 0.9533679\n\tconfidenceThreshold: 0.875\n\n\n\tconfidenceThreshold: 0.9\n\tf1ScoreAt1: 0.8509638\n\tprecision: 0.96072185\n\trecallAt1: 0.76371306\n\trecall: 0.76371306\n\tprecisionAt1: 0.96072185\n\tf1Score: 0.8509638\n\n\n\tf1ScoreAt1: 0.84578997\n\tconfidenceThreshold: 0.91\n\tprecision: 0.9623251\n\tprecisionAt1: 0.9623251\n\tf1Score: 0.84578997\n\trecall: 0.75443035\n\trecallAt1: 0.75443035\n\n\n\tconfidenceThreshold: 0.92\n\trecall: 0.7443038\n\trecallAt1: 0.7443038\n\tprecisionAt1: 0.9639344\n\tf1ScoreAt1: 0.84000003\n\tprecision: 0.9639344\n\tf1Score: 0.84000003\n\n\n\trecallAt1: 0.735865\n\tf1ScoreAt1: 0.83524907\n\tf1Score: 
0.83524907\n\tprecisionAt1: 0.96567\n\tconfidenceThreshold: 0.93\n\tprecision: 0.96567\n\trecall: 0.735865\n\n\n\tf1ScoreAt1: 0.8294798\n\trecall: 0.7265823\n\trecallAt1: 0.7265823\n\tf1Score: 0.8294798\n\tprecision: 0.96633\n\tconfidenceThreshold: 0.94\n\tprecisionAt1: 0.96633\n\n\n\tf1Score: 0.8214112\n\trecallAt1: 0.7122363\n\tf1ScoreAt1: 0.8214112\n\trecall: 0.7122363\n\tprecision: 0.97011495\n\tconfidenceThreshold: 0.95\n\tprecisionAt1: 0.97011495\n\n\n\tprecision: 0.97402596\n\tf1ScoreAt1: 0.81200784\n\tconfidenceThreshold: 0.96\n\trecallAt1: 0.6962025\n\tf1Score: 0.81200784\n\trecall: 0.6962025\n\tprecisionAt1: 0.97402596\n\n\n\trecallAt1: 0.68016875\n\tf1Score: 0.8023892\n\tprecisionAt1: 0.9781553\n\trecall: 0.68016875\n\tprecision: 0.9781553\n\tf1ScoreAt1: 0.8023892\n\tconfidenceThreshold: 0.97\n\n\n\trecall: 0.64978904\n\tprecision: 0.98214287\n\tprecisionAt1: 0.98214287\n\tconfidenceThreshold: 0.98\n\trecallAt1: 0.64978904\n\tf1ScoreAt1: 0.7821229\n\tf1Score: 0.7821229\n\n\n\trecall: 0.5915612\n\tf1Score: 0.74062335\n\tprecisionAt1: 0.990113\n\tprecision: 0.990113\n\trecallAt1: 0.5915612\n\tconfidenceThreshold: 0.99\n\tf1ScoreAt1: 0.74062335\n\n\n\tprecision: 0.9923195\n\tprecisionAt1: 0.9923195\n\trecallAt1: 0.54514766\n\tf1ScoreAt1: 0.7037037\n\tf1Score: 0.7037037\n\trecall: 0.54514766\n\tconfidenceThreshold: 0.995\n\n\n\tprecisionAt1: 0.99206346\n\tprecision: 0.99206346\n\trecallAt1: 0.5274262\n\tconfidenceThreshold: 0.996\n\trecall: 0.5274262\n\tf1ScoreAt1: 0.68870527\n\tf1Score: 0.68870527\n\n\n\tprecision: 0.9933665\n\tprecisionAt1: 0.9933665\n\tf1Score: 0.67002237\n\trecall: 0.50548524\n\tf1ScoreAt1: 0.67002237\n\tconfidenceThreshold: 0.997\n\trecallAt1: 0.50548524\n\n\n\tconfidenceThreshold: 0.998\n\tprecision: 0.9929453\n\trecallAt1: 0.4751055\n\tprecisionAt1: 0.9929453\n\trecall: 0.4751055\n\tf1Score: 0.64269406\n\tf1ScoreAt1: 0.64269406\n\n\n\tf1ScoreAt1: 0.58064514\n\tconfidenceThreshold: 0.999\n\trecall: 0.4101266\n\tprecision: 
0.993865\n\tprecisionAt1: 0.993865\n\trecallAt1: 0.4101266\n\tf1Score: 0.58064514\n\n\n\tconfidenceThreshold: 1.0\n\tprecisionAt1: 1.0\n\tf1Score: 0.023352792\n\trecallAt1: 0.011814346\n\tprecision: 1.0\n\trecall: 0.011814346\n\tf1ScoreAt1: 0.023352792\n"
]
],
[
[
"## Deploy your text classification model\n\nOnce your model has completed training, you must deploy it to an _endpoint_ to get online predictions from it. When you deploy the model to an endpoint, a copy of the model is made on the endpoint with a new resource name and display name.\n\nYou can deploy multiple models to the same endpoint and split traffic between the various models assigned to the endpoint. However, you must deploy one model at a time to the endpoint. To change the traffic split percentages, you must assign new values on your second (and subsequent) models each time you deploy a new model.\n\nThe following code block demonstrates how to deploy a model. The code snippet relies on the Python SDK to create a new endpoint for deployment. The call to `model.deploy()` returns a reference to an `Endpoint` object--you need this reference for online predictions in the next section.",
"_____no_output_____"
]
],
[
[
"deployed_model_display_name = f\"e2e-deployed-text-classification-model-{TIMESTAMP}\"\n\n# Deploy the model to a new endpoint\nendpoint = model.deploy(\n    deployed_model_display_name=deployed_model_display_name, sync=True\n)",
"INFO:google.cloud.aiplatform.models:Creating Endpoint\nINFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/259224131669/locations/us-central1/endpoints/7980783159979540480/operations/3267096822232907776\nINFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/259224131669/locations/us-central1/endpoints/7980783159979540480\nINFO:google.cloud.aiplatform.models:To use this Endpoint in another session:\nINFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/259224131669/locations/us-central1/endpoints/7980783159979540480')\nINFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/259224131669/locations/us-central1/endpoints/7980783159979540480\nINFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/259224131669/locations/us-central1/endpoints/7980783159979540480/operations/7878782840660295680\nINFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/259224131669/locations/us-central1/endpoints/7980783159979540480\n"
]
],
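[
[
"# Sketch (not a required tutorial step): deploying a *second* model to the\n# same endpoint with an explicit traffic split, as described above. The\n# variable `second_model` and the 70/30 split are illustrative assumptions,\n# not part of this tutorial's resources.\n#\n# second_model.deploy(\n#     endpoint=endpoint,\n#     deployed_model_display_name=\"my-second-model\",\n#     traffic_percentage=30,  # the remaining 70% stays on the first model\n#     sync=True,\n# )",
"_____no_output_____"
]
],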
[
[
"In case you didn't record the name of the new endpoint, you can get a list of all your endpoints as you did before with datasets and models. For each endpoint, you can list the models deployed to that endpoint. To get a reference to the model that you just deployed, you can check the `display_name` of each model deployed to the endpoint against the model you're looking for.",
"_____no_output_____"
]
],
[
[
"endpoints = aiplatform.Endpoint.list()\n\nendpoint_with_deployed_model = []\n\nfor endpoint_ in endpoints:\n    # Use a distinct loop variable so the global `model` is not shadowed\n    for deployed_model in endpoint_.list_models():\n        if deployed_model.display_name.startswith(deployed_model_display_name):\n            endpoint_with_deployed_model.append(endpoint_)\n\nprint(endpoint_with_deployed_model)",
"[<google.cloud.aiplatform.models.Endpoint object at 0x7fe24de99d10> \nresource name: projects/259224131669/locations/us-central1/endpoints/7980783159979540480]\n"
]
],
[
[
"## Get online predictions from your model\n\nNow that you have your endpoint's resource name, you can get online predictions from the text classification model. To get the online prediction, you send a prediction request to your endpoint.",
"_____no_output_____"
]
],
[
[
"endpoint_name = \"[your-endpoint-name]\"\nif endpoint_name == \"[your-endpoint-name]\":\n    endpoint_name = endpoint.resource_name\n\nprint(f\"Endpoint name: {endpoint_name}\")\n\nendpoint = aiplatform.Endpoint(endpoint_name)\ncontent = \"I got a high score on my math final!\"\n\n# Send an online prediction request to the endpoint\nresponse = endpoint.predict(instances=[{\"content\": content}])\n\nfor prediction_ in response.predictions:\n    ids = prediction_[\"ids\"]\n    display_names = prediction_[\"displayNames\"]\n    confidence_scores = prediction_[\"confidences\"]\n    for count, id in enumerate(ids):\n        print(f\"Prediction ID: {id}\")\n        print(f\"Prediction display name: {display_names[count]}\")\n        print(f\"Prediction confidence score: {confidence_scores[count]}\")",
"Endpoint name: projects/259224131669/locations/us-central1/endpoints/7980783159979540480\nPrediction ID: 5078374828348538880\nPrediction display name: affection\nPrediction confidence score: 4.789760350831784e-05\nPrediction ID: 4213683699893403648\nPrediction display name: achievement\nPrediction confidence score: 0.9997887015342712\nPrediction ID: 7384217837562232832\nPrediction display name: enjoy_the_moment\nPrediction confidence score: 5.908047751290724e-05\nPrediction ID: 466688809921150976\nPrediction display name: bonding\nPrediction confidence score: 2.292021417815704e-05\nPrediction ID: 1619610314527997952\nPrediction display name: leisure\nPrediction confidence score: 5.406829222920351e-05\nPrediction ID: 8825369718320791552\nPrediction display name: nature\nPrediction confidence score: 2.831711753970012e-06\nPrediction ID: 2772531819134844928\nPrediction display name: exercise\nPrediction confidence score: 2.4389159079873934e-05\n"
]
],
[
[
"## Get batch predictions from your model\n\nYou can get batch predictions from a text classification model without deploying it. You must first format all of your prediction instances (prediction input) as JSONL and store the JSONL file in a Google Cloud Storage bucket. You must also provide a Google Cloud Storage bucket to hold your prediction output.\n\nTo start, create your predictions input file in JSONL format. Each line in the JSONL document needs to be formatted like so:\n\n```\n{ \"content\": \"gs://sourcebucket/datasets/texts/source_text.txt\", \"mimeType\": \"text/plain\"}\n```\n\nThe `content` field in the JSON structure must be a Google Cloud Storage URI to another document that contains the text input for prediction.\n[See the documentation for more information.](https://cloud.google.com/ai-platform-unified/docs/predictions/batch-predictions#text)",
"_____no_output_____"
]
],
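[
[
"# Illustrative check (assumption: only the standard-library `json` module is\n# needed): build one JSONL instance line with json.dumps so the quotes are\n# valid JSON double quotes, matching the format shown above. The URI is the\n# placeholder from the example, not a real bucket.\nimport json\n\nexample_uri = \"gs://sourcebucket/datasets/texts/source_text.txt\"\nexample_line = json.dumps({\"content\": example_uri, \"mimeType\": \"text/plain\"})\nprint(example_line)",
"_____no_output_____"
]
],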
[
[
"instances = [\n \"We hiked through the woods and up the hill to the ice caves\",\n \"My kitten is so cute\",\n]\ninput_file_name = \"batch-prediction-input.jsonl\"",
"_____no_output_____"
]
],
[
[
"For batch prediction, you must supply the following:\n\n+ All of your prediction instances as individual files on Google Cloud Storage, as TXT files for your instances\n+ A JSONL file that lists the URIs of all your prediction instances\n+ A Google Cloud Storage bucket to hold the output from batch prediction\n\nFor this tutorial, the following cells create a new Storage bucket, upload individual prediction instances as text files to the bucket, and then create the JSONL file with the URIs of your prediction instances.",
"_____no_output_____"
]
],
[
[
"TIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")\nBUCKET_NAME = \"[your-bucket-name]\"\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"[your-bucket-name]\":\n BUCKET_NAME = f\"automl-text-notebook-{TIMESTAMP}\"\n\nBUCKET_URI = f\"gs://{BUCKET_NAME}\"\n\n! gsutil mb -l $REGION $BUCKET_URI",
"Creating gs://qwiklabs-gcp-00-09d98f4803b0/...\n"
],
[
"import json\n\n# Instantiate the Storage client (named storage_client so the `storage`\n# module is not shadowed) and get a handle to the new bucket\nstorage_client = storage.Client()\nbucket = storage_client.bucket(BUCKET_NAME)\n\n# Iterate over the prediction instances, creating a new TXT file\n# for each.\ninput_file_data = []\nfor count, instance in enumerate(instances):\n    instance_name = f\"input_{count}.txt\"\n    instance_file_uri = f\"{BUCKET_URI}/{instance_name}\"\n\n    # Add the data to store in the JSONL input file.\n    tmp_data = {\"content\": instance_file_uri, \"mimeType\": \"text/plain\"}\n    input_file_data.append(tmp_data)\n\n    # Create the new instance file\n    blob = bucket.blob(instance_name)\n    blob.upload_from_string(instance)\n\n# Serialize each instance with json.dumps so the file is valid JSONL\n# (str(dict) would emit single quotes, which is not valid JSON)\ninput_str = \"\\n\".join([json.dumps(d) for d in input_file_data])\nfile_blob = bucket.blob(f\"{input_file_name}\")\nfile_blob.upload_from_string(input_str)",
"_____no_output_____"
]
],
[
[
"Now that you have the bucket with the prediction instances ready, you can send a batch prediction request to Vertex AI. When you send a request to the service, you must provide the URI of your JSONL file and your output bucket, including the `gs://` prefix.\n\nWith the Python SDK, you can create a batch prediction job by calling `Model.batch_predict()`.",
"_____no_output_____"
]
],
[
[
"job_display_name = \"e2e-text-classification-batch-prediction-job\"\nmodel = aiplatform.Model(model_name=model_name)\n\n# Create and run a batch prediction job\nbatch_prediction_job = model.batch_predict(\n    job_display_name=job_display_name,\n    gcs_source=f\"{BUCKET_URI}/{input_file_name}\",\n    gcs_destination_prefix=f\"{BUCKET_URI}/output\",\n    sync=True,\n)\n\nbatch_prediction_job_name = batch_prediction_job.resource_name",
"INFO:google.cloud.aiplatform.jobs:Creating BatchPredictionJob\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/259224131669/locations/us-central1/batchPredictionJobs/3571004859807170560\nINFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:\nINFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/259224131669/locations/us-central1/batchPredictionJobs/3571004859807170560')\nINFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:\nhttps://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/3571004859807170560?project=259224131669\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/259224131669/locations/us-central1/batchPredictionJobs/3571004859807170560 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob 
projects/259224131669/locations/us-central1/batchPredictionJobs/3571004859807170560 current state:\nJobState.JOB_STATE_SUCCEEDED\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob run completed. Resource name: projects/259224131669/locations/us-central1/batchPredictionJobs/3571004859807170560\n"
]
],
[
[
"Once the batch prediction job completes, the Python SDK prints out the resource name of the batch prediction job in the format `projects/[PROJECT_ID]/locations/[LOCATION]/batchPredictionJobs/[BATCH_PREDICTION_JOB_ID]`. You can query the Vertex AI service for the status of the batch prediction job using its ID.\n\nThe following code snippet demonstrates how to create an instance of the `BatchPredictionJob` class to review its status. Note that you need the full resource name printed out from the Python SDK for this snippet.\n",
"_____no_output_____"
]
],
[
[
"from google.cloud.aiplatform import jobs\n\nbatch_job = jobs.BatchPredictionJob(batch_prediction_job_name)\nprint(f\"Batch prediction job state: {str(batch_job.state)}\")",
"Batch prediction job state: JobState.JOB_STATE_SUCCEEDED\n"
]
],
[
[
"After the batch job has completed, you can view the results of the job in your output Storage bucket. You might want to first list all of the files in your output bucket to find the URI of the output file.",
"_____no_output_____"
]
],
[
[
"BUCKET_OUTPUT = f\"{BUCKET_URI}/output\"\n\n! gsutil ls -a $BUCKET_OUTPUT",
"gs://qwiklabs-gcp-00-09d98f4803b0/output/prediction-e2e-text-classification-model-20210824122127-2021-08-24T17:42:17.359307Z/\n"
]
],
[
[
"The output from the batch prediction job should be contained in a folder (or _prefix_) that includes the name of the batch prediction job plus a time stamp for when it was created.\n\nFor example, if your batch prediction job name is `my-job` and your bucket name is `my-bucket`, the URI of the folder containing your output might look like the following:\n\n```\ngs://my-bucket/output/prediction-my-job-2021-06-04T19:54:25.889262Z/\n```\n\nTo read the batch prediction results, you must download the file locally and open the file. The next cell copies all of the files in `BUCKET_OUTPUT` into a local folder.",
"_____no_output_____"
]
],
[
[
"RESULTS_DIRECTORY = \"prediction_results\"\nRESULTS_DIRECTORY_FULL = f\"{RESULTS_DIRECTORY}/output\"\n\n# Create missing directories\nos.makedirs(RESULTS_DIRECTORY, exist_ok=True)\n\n# Get the Cloud Storage paths for each result\n! gsutil -m cp -r $BUCKET_OUTPUT $RESULTS_DIRECTORY\n\n# Get most recently modified directory\nlatest_directory = max(\n [\n os.path.join(RESULTS_DIRECTORY_FULL, d)\n for d in os.listdir(RESULTS_DIRECTORY_FULL)\n ],\n key=os.path.getmtime,\n)\n\nprint(f\"Local results folder: {latest_directory}\")",
"Copying gs://qwiklabs-gcp-00-09d98f4803b0/output/prediction-e2e-text-classification-model-20210824122127-2021-08-24T17:42:17.359307Z/predictions_00001.jsonl...\n/ [1/1 files][ 945.0 B/ 945.0 B] 100% Done \nOperation completed over 1 objects/945.0 B. \nLocal results folder: prediction_results/output/prediction-e2e-text-classification-model-20210824122127-2021-08-24T17:42:17.359307Z\n"
]
],
[
[
"With all of the results files downloaded locally, you can open them and read the results. In this tutorial, you use the [`jsonlines`](https://jsonlines.readthedocs.io/en/latest/) library to read the output results.\n\nThe following cell opens up the JSONL output file and then prints the predictions for each instance.",
"_____no_output_____"
]
],
[
[
"# Get downloaded results in directory\nresults_files = []\nfor dirpath, subdirs, files in os.walk(latest_directory):\n for file in files:\n if file.find(\"predictions\") >= 0:\n results_files.append(os.path.join(dirpath, file))\n\n\n# Consolidate all the results into a list\nresults = []\nfor results_file in results_files:\n # Open each result\n with jsonlines.open(results_file) as reader:\n for result in reader.iter(type=dict, skip_invalid=True):\n instance = result[\"instance\"]\n prediction = result[\"prediction\"]\n print(f\"\\ninstance: {instance['content']}\")\n for key, output in prediction.items():\n print(f\"\\n{key}: {output}\")",
"\ninstance: gs://qwiklabs-gcp-00-09d98f4803b0/input_1.txt\n\nids: ['5078374828348538880', '7384217837562232832', '4213683699893403648', '8825369718320791552', '1619610314527997952', '466688809921150976', '2772531819134844928']\n\ndisplayNames: ['affection', 'enjoy_the_moment', 'achievement', 'nature', 'leisure', 'bonding', 'exercise']\n\nconfidences: [0.59658015, 0.19601594, 0.18707053, 0.00931904, 0.00615081, 0.0036246842, 0.0012388825]\n\ninstance: gs://qwiklabs-gcp-00-09d98f4803b0/input_0.txt\n\nids: ['8825369718320791552', '7384217837562232832', '4213683699893403648', '1619610314527997952', '5078374828348538880', '466688809921150976', '2772531819134844928']\n\ndisplayNames: ['nature', 'enjoy_the_moment', 'achievement', 'leisure', 'affection', 'bonding', 'exercise']\n\nconfidences: [0.3603325, 0.3316431, 0.28240985, 0.019016903, 0.0025975837, 0.0022325595, 0.0017675894]\n"
]
],
[
[
"## Cleaning up\n\nTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n\nOtherwise, you can delete the individual resources you created in this tutorial:\n\n* Dataset\n* Training job\n* Model\n* Endpoint\n* Batch prediction\n* Batch prediction bucket",
"_____no_output_____"
]
],
[
[
"if os.getenv(\"IS_TESTING\"):\n ! gsutil rm -r $BUCKET_URI\n\nbatch_job.delete()\n\n# `force` parameter ensures that models are undeployed before deletion\nendpoint.delete(force=True)\n\nmodel.delete()\n\ntext_dataset.delete()\n\n# Training job\njob.delete()",
"INFO:google.cloud.aiplatform.base:Deleting BatchPredictionJob : projects/259224131669/locations/us-central1/batchPredictionJobs/3571004859807170560\nINFO:google.cloud.aiplatform.base:Delete BatchPredictionJob backing LRO: projects/259224131669/locations/us-central1/operations/7219005495250518016\nINFO:google.cloud.aiplatform.base:BatchPredictionJob deleted. . Resource name: projects/259224131669/locations/us-central1/batchPredictionJobs/3571004859807170560\nINFO:google.cloud.aiplatform.models:Undeploying Endpoint model: projects/259224131669/locations/us-central1/endpoints/7980783159979540480\nINFO:google.cloud.aiplatform.models:Undeploy Endpoint model backing LRO: projects/259224131669/locations/us-central1/endpoints/7980783159979540480/operations/6552472750399684608\nINFO:google.cloud.aiplatform.models:Endpoint model undeployed. Resource name: projects/259224131669/locations/us-central1/endpoints/7980783159979540480\nINFO:google.cloud.aiplatform.base:Deleting Endpoint : projects/259224131669/locations/us-central1/endpoints/7980783159979540480\nINFO:google.cloud.aiplatform.base:Delete Endpoint backing LRO: projects/259224131669/locations/us-central1/operations/260944070963101696\nINFO:google.cloud.aiplatform.base:Endpoint deleted. . Resource name: projects/259224131669/locations/us-central1/endpoints/7980783159979540480\nINFO:google.cloud.aiplatform.base:Deleting Model : projects/259224131669/locations/us-central1/models/274218199967334400\nINFO:google.cloud.aiplatform.base:Delete Model backing LRO: projects/259224131669/locations/us-central1/operations/6016966607207661568\nINFO:google.cloud.aiplatform.base:Model deleted. . 
Resource name: projects/259224131669/locations/us-central1/models/274218199967334400\nINFO:google.cloud.aiplatform.base:Deleting TextDataset : projects/259224131669/locations/us-central1/datasets/7829200088927830016\nINFO:google.cloud.aiplatform.base:Delete TextDataset backing LRO: projects/259224131669/locations/us-central1/operations/7851761242896072704\nINFO:google.cloud.aiplatform.base:TextDataset deleted. . Resource name: projects/259224131669/locations/us-central1/datasets/7829200088927830016\nINFO:google.cloud.aiplatform.base:Deleting AutoMLTextTrainingJob : projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344\nINFO:google.cloud.aiplatform.base:Delete AutoMLTextTrainingJob backing LRO: projects/259224131669/locations/us-central1/operations/1454397972216283136\nINFO:google.cloud.aiplatform.base:AutoMLTextTrainingJob deleted. . Resource name: projects/259224131669/locations/us-central1/trainingPipelines/1280924449289273344\n"
]
],
[
[
"## Next Steps\n\nAfter completing this tutorial, see the following documentation pages to learn more about Vertex AI:\n\n* [Preparing text training data](https://cloud.google.com/vertex-ai/docs/datasets/prepare-text)\n* [Training an AutoML model using the API](https://cloud.google.com/vertex-ai/docs/training/automl-api#text)\n* [Evaluating AutoML models](https://cloud.google.com/vertex-ai/docs/training/evaluating-automl-models#text)\n* [Deploying a model using the Vertex AI API](https://cloud.google.com/vertex-ai/docs/predictions/deploy-model-api#aiplatform_create_endpoint_sample-python)\n* [Getting online predictions from AutoML models](https://cloud.google.com/vertex-ai/docs/predictions/deploy-model-api#aiplatform_create_endpoint_sample-python)\n* [Getting batch predictions](https://cloud.google.com/vertex-ai/docs/predictions/batch-predictions#text)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e75e2ae0519d39569405f30b3d8f473468459506 | 257,938 | ipynb | Jupyter Notebook | fits/Fit-3P-HL-LHC-FCC-hh.ipynb | talismanbrandi/IML-diHiggs | b355cf2c3488508a3196577f21e689ab94783de1 | [
"MIT"
] | null | null | null | fits/Fit-3P-HL-LHC-FCC-hh.ipynb | talismanbrandi/IML-diHiggs | b355cf2c3488508a3196577f21e689ab94783de1 | [
"MIT"
] | null | null | null | fits/Fit-3P-HL-LHC-FCC-hh.ipynb | talismanbrandi/IML-diHiggs | b355cf2c3488508a3196577f21e689ab94783de1 | [
"MIT"
] | null | null | null | 424.240132 | 121,744 | 0.917279 | [
[
[
"#####################################################################\n# This notebook is authored by: Ayan Paul & Lina Alasfar #\n# Date: May 2022 #\n# If you use this code or the results from this work please cite: # \n# Machine learning the trilinear and light-quark Yukawa couplings #\n# from Higgs pair kinematic shapes #\n# Lina Alasfar, Ramona Gröber, Christophe Grojean, Ayan Paul #\n# and Zuoni Qian #\n# arXiv:2205.XXXXX (https://arxiv.org/abs/2005.XXXXX) # \n#####################################################################\n\nimport pymc3 as pm\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport corner as crn\nimport math\nimport arviz as az\nimport os\nimport theano.tensor as tensor\nimport pandas as pd\nfrom matplotlib import rc\nfrom scipy import optimize\nimport json\nfrom sigma_br_HL_LHC import sigmahh_kukdkl as sigma_HL_LHC\nfrom sigma_br_FCC_hh import sigmahh_kukdkl as sigma_FCC_hh\nfrom matplotlib.ticker import AutoMinorLocator\nrc('text', usetex=True)\nplt.rcParams['text.latex.preamble'] = r\"\\usepackage{amsmath}\"\nplt.rcParams['font.family'] = 'monospace'\n## ***************************************************************************\n## * RC param *\n## ***************************************************************************\nplt.rcParams['xtick.top'] = True\nplt.rcParams['xtick.major.size'] = 10\nplt.rcParams['xtick.minor.size'] = 5\nplt.rcParams['xtick.direction'] = 'in'\nplt.rcParams['ytick.right'] = True\nplt.rcParams['ytick.major.size'] = 10\nplt.rcParams['ytick.minor.size'] = 5\nplt.rcParams['ytick.direction'] = 'in'\nplt.rcParams['xtick.labelsize'] = 22\nplt.rcParams['ytick.labelsize'] = 18\n## ***************************************************************************\nNBINS = 100\n\nwith open('../results/confusion/HL-LHC-BDT/hh-BDT-7class-ku-kd.confusion.json') as f:\n confusion_1 = json.load(f)\n\nwith open('../results/confusion/FCC-hh-BDT/hh-BDT-7class-ku-kd.confusion.json') as f:\n confusion_2 = json.load(f)\n 
\nLambdaNP2 = 1e+3**2\nv4 = 246.**4\nv3 = 246.**3\nmh2 = 125.1**2\nsqrt_2 = np.sqrt(2.0)\n\nkltoCH = lambda x : LambdaNP2/v4*mh2*0.5*(1-x)\nkutoCuH = lambda x : LambdaNP2/v3*(sqrt_2*2.2e-3*(1-x))\nkdtoCdH = lambda x : LambdaNP2/v3*(sqrt_2*4.7e-3*(1-x))\nunity = lambda x : x",
"_____no_output_____"
],
[
"def likelihood_1(x, y, z):\n z2_lim = sigma_HL_LHC(1., 1., 1., 'ku', confusion_1)\n z3_lim = sigma_HL_LHC(1., 1., 1., 'kd', confusion_1)\n z4_lim = sigma_HL_LHC(1., 1., 1., 'tri', confusion_1)\n z5_lim = sigma_HL_LHC(1., 1., 1., 'int', confusion_1)\n \n return -((sigma_HL_LHC(x, y, z, 'ku', confusion_1) - z2_lim)**2/z2_lim \n + (sigma_HL_LHC(x, y, z, 'kd', confusion_1) - z3_lim)**2/z3_lim \n + (sigma_HL_LHC(x, y, z, 'tri', confusion_1) - z4_lim)**2/z4_lim\n + (sigma_HL_LHC(x, y, z, 'int', confusion_1) - z5_lim)**2/z5_lim\n ) * 0.5\n\n\ndef likelihood_2(x, y, z):\n z2_lim = sigma_FCC_hh(1., 1., 1., 'ku', confusion_2)\n z3_lim = sigma_FCC_hh(1., 1., 1., 'kd', confusion_2)\n z4_lim = sigma_FCC_hh(1., 1., 1., 'tri', confusion_2)\n z5_lim = sigma_FCC_hh(1., 1., 1., 'int', confusion_2)\n \n return -((sigma_FCC_hh(x, y, z, 'ku', confusion_2) - z2_lim)**2/z2_lim \n + (sigma_FCC_hh(x, y, z, 'kd', confusion_2) - z3_lim)**2/z3_lim \n + (sigma_FCC_hh(x, y, z, 'tri', confusion_2) - z4_lim)**2/z4_lim\n + (sigma_FCC_hh(x, y, z, 'int', confusion_2) - z5_lim)**2/z5_lim\n ) * 0.5\n\n\ndef runMCMC(likelihood, limits, trace_dir='', config=[], fit=True):\n \"\"\" pyMC3 MCMC run\n argument:\n likelihood: the likelihood function\n limits: an array of the limits for the parameters [r_lowers, r_upper, theta_lower, theta_upper]\n trace_dir: the directory to which the MCMC traces are saves. '' implies none\n config: the setup for the MCMC. 
[MCMC smaple size, target_accept, chains]\n fit: bolean for determining whether to run the fit\n returns:\n trace: if fit is true it returns the trace\n model;if fit is false it returns the model\n \"\"\"\n with pm.Model() as model:\n k1 = pm.Uniform('k1', lower=limits[0], upper=limits[1])\n k2 = pm.Uniform('k2', lower=limits[2], upper=limits[3])\n k3 = pm.Uniform('k3', lower=limits[4], upper=limits[5])\n\n like = pm.Potential('like', likelihood(k1, k2, k3))\n \n if fit:\n with model:\n trace = pm.sample(config[0], tune=int(np.max([1000,config[0]/5])), cores=4, target_accept=config[1], chains=config[2], init='advi_map')\n if trace_dir != '': pm.save_trace(trace=trace, directory=trace_dir, overwrite=True)\n return trace, model\n return model\n\ndef makeCorner(trace, model, filename, collider, label, limit, lambdas=[unity, unity, unity]):\n \"\"\" Corner plot builder\n argument:\n trace: the trace from the pyMC3 run\n filename: the file to save the plot in\n collider: a string with the collider name to attach to the plot\n label: the label for the axes corresponding to the variables (couplings)\n limit: the limit on the variables (couplings)\n lambdas: the conversion lambdas from kappa to Wilson coefficients\n \"\"\"\n var = ['k1', 'k2', 'k3']\n \n samples = np.vstack((lambdas[0](trace['k1']), lambdas[1](trace['k2']), lambdas[2](trace['k3']))).T\n\n if lambdas[0] != unity:\n limit_t = np.array(limit)\n limit_t[0] = lambdas[0](limit[1])\n limit_t[1] = lambdas[0](limit[0])\n limit_t[2] = lambdas[1](limit[3])\n limit_t[3] = lambdas[1](limit[2])\n limit_t[4] = lambdas[2](limit[5])\n limit_t[5] = lambdas[2](limit[4])\n limits = limit_t\n else: limits = limit\n \n \n fig = plt.figure(figsize=(12,12))\n fig = crn.corner(samples,labels = [label[0], label[1], label[2]], \n truths = None, bins=NBINS,\n show_titles=True, title_kwargs={\"fontsize\": 32}, label_kwargs={\"fontsize\": 30},\n levels=(0.6827,0.9545,0.9973), \n plot_contours = True, fill_contours=True, smooth=True, 
smooth1d=None,\n plot_datapoints = False,\n color='#9697ae',\n labelpad=-0.06, fig=fig, title_fmt='.3f', hist_kwargs={'linewidth': 2, 'histtype': 'bar'}, \n range=[(limits[0],limits[1]), (limits[2], limits[3]), (limits[4], limits[5])], truth_color='#343434')\n \n stats_func_1 = {\n 'b0': lambda x: multimode(x, 0, 0.6827),\n 'b1': lambda x: multimode(x, 1, 0.6827),\n }\n \n stats_func_2 = {\n 'b0': lambda x: multimode(x, 0, 0.9545),\n 'b1': lambda x: multimode(x, 1, 0.9545),\n }\n \n stats_func_3 = {\n 'b0': lambda x: multimode(x, 0, 0.9973),\n 'b1': lambda x: multimode(x, 1, 0.9973),\n }\n \n df_1 = pd.DataFrame(az.summary(trace, kind='stats', hdi_prob=0.6827, round_to='none', stat_funcs=stats_func_1))\n df_2 = pd.DataFrame(az.summary(trace, kind='stats', hdi_prob=0.9545, round_to='none', stat_funcs=stats_func_2))\n df_3 = pd.DataFrame(az.summary(trace, kind='stats', hdi_prob=0.9973, round_to='none', stat_funcs=stats_func_3))\n\n for ax in fig.get_axes():\n ax.tick_params(axis='both', labelsize=28, rotation=30)\n ax.xaxis.set_minor_locator(AutoMinorLocator())\n ax.yaxis.set_minor_locator(AutoMinorLocator())\n\n \n ax = fig.get_axes()\n ax[0].tick_params(\n axis='x', # changes apply to the x-axis\n which='both', # both major and minor ticks are affected\n bottom=True, # ticks along the bottom edge are off\n top=False, # ticks along the top edge are off)\n )\n ax[4].tick_params(\n axis='x', # changes apply to the x-axis\n which='both', # both major and minor ticks are affected\n bottom=True, # ticks along the bottom edge are off\n top=False, # ticks along the top edge are off)\n )\n ax[8].tick_params(\n axis='x', # changes apply to the x-axis\n which='both', # both major and minor ticks are affected\n bottom=True, # ticks along the bottom edge are off\n top=False, # ticks along the top edge are off)\n )\n for j in range(3):\n for i in range(NBINS):\n \n lim_1l = min(lambdas[j](df_1.loc[var[j]]['b0']), lambdas[j](df_1.loc[var[j]]['b1']))\n lim_1u = 
max(lambdas[j](df_1.loc[var[j]]['b0']), lambdas[j](df_1.loc[var[j]]['b1']))\n \n lim_2l = min(lambdas[j](df_2.loc[var[j]]['b0']), lambdas[j](df_2.loc[var[j]]['b1']))\n lim_2u = max(lambdas[j](df_2.loc[var[j]]['b0']), lambdas[j](df_2.loc[var[j]]['b1']))\n \n lim_3l = min(lambdas[j](df_3.loc[var[j]]['b0']), lambdas[j](df_3.loc[var[j]]['b1']))\n lim_3u = max(lambdas[j](df_3.loc[var[j]]['b0']), lambdas[j](df_3.loc[var[j]]['b1']))\n \n if ax[4*j].patches[i].xy[0] > lim_1l and ax[4*j].patches[i].xy[0] < lim_1u:\n ax[4*j].patches[i].set_alpha(1)\n elif ax[4*j].patches[i].xy[0] > lim_2l and ax[4*j].patches[i].xy[0] < lim_2u:\n ax[4*j].patches[i].set_alpha(0.5)\n elif ax[4*j].patches[i].xy[0] > lim_3l and ax[4*j].patches[i].xy[0] < lim_3u:\n ax[4*j].patches[i].set_alpha(0.25)\n else:\n ax[4*j].patches[i].set_alpha(0.1)\n \n \n ## 1D histogram labels\n ax[0].set_title(label[0]+r'$ = [{:.3f}, {:.3f}]$'.format(lambdas[0](df_1.loc['k1']['hdi_84.135%']), lambdas[0](df_1.loc['k1']['hdi_15.865%'])), fontsize=28)\n ax[4].set_title(label[1]+r'$ = [{:.3f}, {:.3f}]$'.format(lambdas[1](df_1.loc['k2']['hdi_84.135%']), lambdas[1](df_1.loc['k2']['hdi_15.865%'])), fontsize=28)\n if collider == 'HL-LHC': ax[8].set_title(label[2]+r'$ = [{:.3f}, {:.3f}]$'.format(lambdas[2](df_1.loc['k3']['hdi_84.135%']), lambdas[2](df_1.loc['k3']['hdi_15.865%'])), fontsize=28)\n else: ax[8].set_title(label[2]+r'$ = [{:.3f}, {:.3f}]$'.format(lambdas[2](df_1.loc['k3']['hdi_84.135%']), lambdas[2](df_1.loc['k3']['hdi_15.865%'])), fontsize=28)\n \n ## title\n ax[1].annotate(collider, xy=(0.5, 0.9), xycoords='axes fraction', horizontalalignment='center',\n verticalalignment='center', fontsize=30, fontweight='bold')\n ax[1].annotate('Best Fit Point:', xy=(0.5, 0.75), xycoords='axes fraction', horizontalalignment='center',\n verticalalignment='center', fontsize=28)\n ax[1].annotate(label[0]+r'$ = 0$', xy=(0.5, 0.55), xycoords='axes fraction', horizontalalignment='center',\n verticalalignment='center', fontsize=28)\n 
ax[1].annotate(label[1]+r'$ = 0$', xy=(0.5, 0.35), xycoords='axes fraction', horizontalalignment='center',\n verticalalignment='center', fontsize=28)\n ax[1].annotate(label[2]+r'$ = 0$', xy=(0.5, 0.15), xycoords='axes fraction', horizontalalignment='center',\n verticalalignment='center', fontsize=28)\n \n ## grid\n ax[3].grid(linestyle=':', zorder=0)\n ax[6].grid(linestyle=':', zorder=0)\n ax[7].grid(linestyle=':', zorder=0)\n \n plt.tight_layout()\n fig.savefig(filename, dpi=300, transparent=True,bbox_inches='tight')\n plt.show()\n \n\ndef mode(x):\n \"\"\" Finds the mode of x\n argument:\n x: an array\n \"\"\"\n n, bins = np.histogram(x, bins=101)\n m = np.argmax(n)\n m = (bins[m] + bins[m-1])/2.\n return m\n\ndef multimode(x, n, hdi_prob):\n \"\"\" Finds all the modes in the distribution\n arguments:\n x: the array for the distribution\n n: the identifier for the variable\n \"\"\"\n md = az.hdi(x, hdi_prob=hdi_prob, multimodal=False)\n if len(md) < 2 and n > 1:\n return np.NaN\n else:\n return md[n%2]\n\ndef minimize(likelihood, guess):\n \"\"\" Minimizing routine for finding global mode\n argument:\n likelihood: the likelihood function\n guess: the guess for the mode, [r, theta]\n \"\"\"\n res = optimize.minimize(lambda x: -likelihood(x[0], x[1]), guess, method='BFGS', tol=1e-6)\n return res",
"_____no_output_____"
],
[
"## HL-LHC ku, kd, kl\nlimits = [-1500., 1500., -1500., 1500., -1.5, 6.]\nconfig = [150000, 0.8, 50]\ntrace_1, model_1 = runMCMC(likelihood_1, limits, config=config)\nfilename = '../plots/kappa_u-kappa_d-kappa_l-HL-LHC.pdf'\nmakeCorner(trace_1, model_1, filename, collider='HL-LHC', label=[r\"$C_{u\\phi}$\", r\"$C_{d\\phi}$\", r\"$C_\\phi$\"], limit=limits, lambdas=[kutoCuH, kdtoCdH, kltoCH])",
"Auto-assigning NUTS sampler...\nInitializing NUTS using advi_map...\n"
],
[
"## FCC-hh ku, kd, kl\nlimits = [-160., 160., -150., 150., 0.85, 1.2]\nconfig = [150000, 0.95, 50]\ntrace_2, model_2 = runMCMC(likelihood_2, limits, config=config)\nfilename = '../plots/kappa_u-kappa_d-kappa_l-FCC-hh_poster.pdf'\nmakeCorner(trace_2, model_2, filename, collider='FCC-hh', label=[r\"$C_{u\\phi}$\", r\"$C_{d\\phi}$\", r\"$C_\\phi$\"], limit=limits, lambdas=[kutoCuH, kdtoCdH, kltoCH])",
"Auto-assigning NUTS sampler...\nInitializing NUTS using advi_map...\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
e75e37c99be21765ab95ed51b77d4ab73f5c460c | 6,950 | ipynb | Jupyter Notebook | 1. Highlights of Linear Algebra/3. The Four Fundamental Subspaces/.ipynb_checkpoints/Problem Set I.3-checkpoint.ipynb | nickovchinnikov/LinearAlgebraAndLearningFromData | a797de46881b52628dac0230ba16fac8e149474f | [
"MIT"
] | 2 | 2021-08-31T12:03:28.000Z | 2021-11-22T12:55:54.000Z | 1. Highlights of Linear Algebra/3. The Four Fundamental Subspaces/.ipynb_checkpoints/Problem Set I.3-checkpoint.ipynb | nickovchinnikov/LinearAlgebraAndLearningFromData | a797de46881b52628dac0230ba16fac8e149474f | [
"MIT"
] | null | null | null | 1. Highlights of Linear Algebra/3. The Four Fundamental Subspaces/.ipynb_checkpoints/Problem Set I.3-checkpoint.ipynb | nickovchinnikov/LinearAlgebraAndLearningFromData | a797de46881b52628dac0230ba16fac8e149474f | [
"MIT"
] | null | null | null | 17.550505 | 81 | 0.399856 | [
[
[
"# Problem Set I.3",
"_____no_output_____"
],
[
"## \\# 1",
"_____no_output_____"
],
[
"$Bx=0,AB=C \\implies ABx=Cx, A(Bx)=Cx, A \\cdot 0 = Cx, Cx = 0$ or $ABx=0$",
"_____no_output_____"
],
[
"## \\# 2",
"_____no_output_____"
],
[
"Example of $rank(A^2) < rank(A)$",
"_____no_output_____"
],
[
"$$A=\\begin{bmatrix}\n 0 & 1 \\\\\n 0 & 0\n\\end{bmatrix},\nA^2=\\begin{bmatrix}\n 0 & 1 \\\\\n 0 & 0\n\\end{bmatrix}\n\\cdot\n\\begin{bmatrix}\n 0 & 1 \\\\\n 0 & 0\n\\end{bmatrix}\n=\\begin{bmatrix}\n 0 & 0 \\\\\n 0 & 0\n\\end{bmatrix}$$",
"_____no_output_____"
],
[
"$rank(A^TA)=rank(A)$",
"_____no_output_____"
],
[
"$$A^TA=\\begin{bmatrix}\n 0 & 0 \\\\\n 1 & 0\n\\end{bmatrix}\n\\cdot\n\\begin{bmatrix}\n 0 & 1 \\\\\n 0 & 0\n\\end{bmatrix}\n=\\begin{bmatrix}\n 0 & 1 \\\\\n 0 & 0\n\\end{bmatrix}\n$$",
"_____no_output_____"
],
[
"$rank(AA^T)=rank(A)$",
"_____no_output_____"
],
[
"$$AA^T=\\begin{bmatrix}\n 0 & 1 \\\\\n 0 & 0\n\\end{bmatrix}\n\\cdot\n\\begin{bmatrix}\n 0 & 0 \\\\\n 1 & 0\n\\end{bmatrix}\n=\\begin{bmatrix}\n 1 & 0 \\\\\n 0 & 0\n\\end{bmatrix}\n$$",
"_____no_output_____"
],
[
"## \\# 3",
"_____no_output_____"
],
[
"$$\nC=\\begin{bmatrix}\n A \\\\\n B\n\\end{bmatrix}\n$$",
"_____no_output_____"
],
[
"$$Cx=0\n\\implies \n\\begin{bmatrix}\n A \\\\\n B\n\\end{bmatrix}x=0,\n\\begin{bmatrix}\n Ax \\\\\n Bx\n\\end{bmatrix}=0\n\\implies Ax=0,Bx=0\n$$",
"_____no_output_____"
],
[
"$N(C)=N(A) \\cap N(B)$",
"_____no_output_____"
],
[
"## \\# 4",
"_____no_output_____"
],
[
"$C(A)=C(A^T)$ and $N(A)=N(A^T)$",
"_____no_output_____"
],
[
"$Ax=A^Tx=0$ because $Ax=0$ and $A^Ty=0$ and $A^T=A \\implies x = y$",
"_____no_output_____"
],
[
"$$\\begin{bmatrix}\n u & v\n\\end{bmatrix}=\\begin{bmatrix}\n u & v\n\\end{bmatrix}^T,\n\\begin{bmatrix}\n u & v\n\\end{bmatrix}=\\begin{bmatrix}\n u^* \\\\\n v^*\n\\end{bmatrix}\n$$",
"_____no_output_____"
],
[
"I suppose than $A = A^T \\implies S = S^T$ is symmetric matrix",
"_____no_output_____"
],
[
"## \\# 5",
"_____no_output_____"
],
[
"1) $r = m = n$, $A_1x=b$ has 1 solution for every b\n\n$A_1$ is any full-rank matrix",
"_____no_output_____"
],
[
"2) $r = m < n$, $A_2x=b$ has 1 or $\\infty$ solutions\n\n$$A_2x=\n\\begin{bmatrix}\n 1 & 0\n\\end{bmatrix}\n\\cdot\n\\begin{bmatrix}\n x_1 \\\\\n x_2\n\\end{bmatrix}\n=b\n$$\n\n$A_2$ has an extra column",
"_____no_output_____"
],
[
"3) $r=n < m, A_3x=b$ has 0 or 1 solutions\n\n$A_3=A_2^T$",
"_____no_output_____"
],
[
"4) $r < m, r < n; A_4x = b$ has 0 or $\\infty$ solutions\n\n$$\nA_4=\n\\begin{bmatrix}\n 1 & 0 \\\\\n 0 & 0\n\\end{bmatrix}\n$$\n\nor dependent columns\n\n$$\n\\begin{bmatrix}\n 1 & 2 \\\\\n 1 & 2\n\\end{bmatrix}\n$$\n\n$$\n\\begin{bmatrix}\n 1 & 2 & 3 \\\\\n 1 & 2 & 3 \\\\\n 1 & 2 & 3\n\\end{bmatrix}\n$$\n\nAnd etc.",
"_____no_output_____"
],
[
"## \\# 6",
"_____no_output_____"
],
[
"$Ax=0, A^TAx=A^T(Ax)=A^T0=0$",
"_____no_output_____"
],
[
"$N(A) \\subset N(A^TA)$",
"_____no_output_____"
],
[
"$A^TAx=0$, then $x^TA^TAx=x^T0 \\implies (Ax)^T(Ax)=0 \\implies ||Ax||^2=0$",
"_____no_output_____"
],
[
"$N(A^TA)=N(A)$",
"_____no_output_____"
],
[
"## \\# 7",
"_____no_output_____"
],
[
"$$A=\\begin{bmatrix}\n 0 & 1 \\\\\n 0 & 0\n\\end{bmatrix};\nA^2=\\begin{bmatrix}\n 0 & 0 \\\\\n 0 & 0\n\\end{bmatrix};\n$$",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e75e3f2e5fb1329d6ef1e0f28c3d25b211ca8ca2 | 3,631 | ipynb | Jupyter Notebook | resources/setup_project/solution/mlvtools/split_dataset.ipynb | jim-obrien-orig/mlv-tools-tutorial | 915f052d4e0a27969832d0f2873de239e644b756 | [
"BSD-3-Clause"
] | 61 | 2018-11-15T14:55:29.000Z | 2020-03-23T22:23:51.000Z | resources/setup_project/solution/mlvtools/split_dataset.ipynb | jim-obrien-orig/mlv-tools-tutorial | 915f052d4e0a27969832d0f2873de239e644b756 | [
"BSD-3-Clause"
] | 4 | 2020-04-12T02:47:56.000Z | 2020-08-31T12:33:16.000Z | resources/setup_project/solution/mlvtools/split_dataset.ipynb | jim-obrien-orig/mlv-tools-tutorial | 915f052d4e0a27969832d0f2873de239e644b756 | [
"BSD-3-Clause"
] | 11 | 2019-04-06T11:08:34.000Z | 2020-03-11T17:58:10.000Z | 24.206667 | 85 | 0.585789 | [
[
[
"\"\"\"\n:param str preprocessed_data_path: Path to preprocessed data input file\n:param str train_dataset_path: Path to the train data output file\n:param str test_dataset_path: Path to the test data output file\n:param float test_percent: Percentage of test data (example: 0.15)\n \n:dvc-in preprocessed_data_path: ./data/intermediate/preprocessed_data.json\n:dvc-out train_dataset_path: ./data/intermediate/train_dataset.txt\n:dvc-out test_dataset_path: ./data/intermediate/test_dataset.txt\n:dvc-extra: --test-percent 0.15\n\"\"\"\n# Following code in this cell will not be add in the generated Python script\n# They are values only for notebook purpose\npreprocessed_data_path = '../data/intermediate/preprocessed_data.json'\ntrain_dataset_path = '../data/intermediate/train_dataset.txt'\ntest_dataset_path = '../data/intermediate/test_dataset.txt'\ntest_percent = 0.15",
"_____no_output_____"
],
[
"import json\nwith open(preprocessed_data_path, 'r') as fd:\n preprocessed_data = json.load(fd)",
"_____no_output_____"
],
[
"# No effect\npreprocessed_data",
"_____no_output_____"
],
[
"# No effect\nlen(preprocessed_data)",
"_____no_output_____"
],
[
"from classifier.split import split_dataset\n\n\ntest_dataset, train_dataset = split_dataset(preprocessed_data, test_percent)",
"_____no_output_____"
],
[
"# No effect\nlen(test_dataset), len(train_dataset)",
"_____no_output_____"
],
[
"# No effect\ntest_dataset",
"_____no_output_____"
],
[
"# No effect\nfrom collections import Counter\ntest_review_by_labels = Counter([d.split()[0] for d in test_dataset])\ntrain_review_by_labels = Counter([d.split()[0] for d in train_dataset])\n\ntest_review_by_labels.most_common()",
"_____no_output_____"
],
[
"# No effect\ntrain_review_by_labels.most_common()",
"_____no_output_____"
],
[
"from classifier.helper import write_lines_file\n\nwrite_lines_file(train_dataset_path, train_dataset)\nwrite_lines_file(test_dataset_path, test_dataset)\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75e424fb9491c6fbe676a9b6c897f764d3c8b4d | 60,924 | ipynb | Jupyter Notebook | sklearn.datasets_create_two_datasets_blobs.ipynb | tonyhuang84/notebook_dnn | 59615cc2c6b6daadc5e96ca068552fcfaa147e5c | [
"BSD-2-Clause"
] | null | null | null | sklearn.datasets_create_two_datasets_blobs.ipynb | tonyhuang84/notebook_dnn | 59615cc2c6b6daadc5e96ca068552fcfaa147e5c | [
"BSD-2-Clause"
] | null | null | null | sklearn.datasets_create_two_datasets_blobs.ipynb | tonyhuang84/notebook_dnn | 59615cc2c6b6daadc5e96ca068552fcfaa147e5c | [
"BSD-2-Clause"
] | null | null | null | 454.656716 | 58,108 | 0.946458 | [
[
[
"import everything",
"_____no_output_____"
]
],
[
[
"import matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom sklearn.datasets import make_blobs\n\n%matplotlib inline\nmatplotlib.rcParams['figure.figsize'] = (10.0, 8.0)",
"_____no_output_____"
],
[
"#data, label = make_moons(n_samples=500, noise=0.2, random_state=0)\n#label = label.reshape(500, 1)\n\ndata, label = make_blobs(n_samples=500, centers=2)",
"_____no_output_____"
],
[
"print('data shape :', data.shape)\nprint(data[:5], '\\n')\nprint('label shape:', label.shape)\nprint(label[:5])",
"('data shape :', (500, 2))\n(array([[ -8.66658983, -10.44785679],\n [ 5.11850417, -1.39596025],\n [ -6.57018871, -9.14365721],\n [ -7.97949476, -10.60722113],\n [ -6.89000223, -9.31686095]]), '\\n')\n('label shape:', (500,))\n[0 1 0 0 0]\n"
],
[
"# draw picture\nplt.scatter(data[:,0], data[:,1], s=40, c=label, cmap=plt.cm.Accent)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e75e47d65c430167bcec9c2ef50c5d54f1fb9e13 | 462,445 | ipynb | Jupyter Notebook | charts/.ipynb_checkpoints/life_expectancy-checkpoint.ipynb | NaveenRajuS/lung-disease | 7f9be26979ab4e00d0e2ec3dfde13de2b31b8c5d | [
"MIT"
] | null | null | null | charts/.ipynb_checkpoints/life_expectancy-checkpoint.ipynb | NaveenRajuS/lung-disease | 7f9be26979ab4e00d0e2ec3dfde13de2b31b8c5d | [
"MIT"
] | null | null | null | charts/.ipynb_checkpoints/life_expectancy-checkpoint.ipynb | NaveenRajuS/lung-disease | 7f9be26979ab4e00d0e2ec3dfde13de2b31b8c5d | [
"MIT"
] | null | null | null | 808.47028 | 260,624 | 0.949901 | [
[
[
"# Using Machine Learning to explain and predict the life expectancy of different countries\n\n#### Data Source:\nhttps://www.kaggle.com/kumarajarshi/life-expectancy-who/data\n\n#### Timeframe of the Data:\n2000 - 2015",
"_____no_output_____"
],
[
"## Data Preprocessing",
"_____no_output_____"
]
],
[
[
"# Importing libraries\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd",
"_____no_output_____"
],
[
"# Importing the dataset\nlife_data = pd.read_csv(\"data/Life_expectancy_data.csv\", sep=\",\")\n# Dropping the year column as Life expectancy for each country between 1950 - 2015 is analyzed in another model\nlife_data = life_data.drop('Year', axis = 1)\nlife_data.head()",
"_____no_output_____"
],
[
"# Dealing with categorical data\nstatus = pd.get_dummies(life_data.Status)\nlife_data = pd.concat([life_data, status], axis = 1)\nlife_data = life_data.drop(['Status'], axis = 1)\nlife_data.head()\n",
"_____no_output_____"
],
[
"life_data = life_data.groupby('Country').mean()\nlife_data.head()",
"_____no_output_____"
]
],
[
[
"## Exploratory Data Analysis",
"_____no_output_____"
]
],
[
[
"life_data.columns",
"_____no_output_____"
],
[
"# GDP vs. Life Expectancy\nplt.scatter(life_data['GDP'], life_data['Life expectancy '])\nplt.title('GDP vs. Life Expectancy')\nplt.xlabel('GDP')\nplt.ylabel('Life Expectancy')",
"_____no_output_____"
],
[
"# Adult Mortality vs. Life Expectancy\nplt.scatter(life_data['Adult Mortality'], life_data['Life expectancy '])\nplt.title('Adult Mortality vs. Life Expectancy')\nplt.xlabel('Adult Mortality')\nplt.ylabel('Life Expectancy')",
"_____no_output_____"
],
[
"# Infant Deaths vs. Life Expectancy\nplt.scatter(life_data['infant deaths'], life_data['Life expectancy '])\nplt.title('infant deaths vs. Life Expectancy')\nplt.xlabel('infant deaths')\nplt.ylabel('Life Expectancy')",
"_____no_output_____"
],
[
"# Alcohol vs. Life Expectancy\nplt.scatter(life_data['Alcohol'], life_data['Life expectancy '])\nplt.title('Alcohol vs. Life Expectancy')\nplt.xlabel('Alcohol')\nplt.ylabel('Life Expectancy')",
"_____no_output_____"
],
[
"# percentage expenditure vs. Life Expectancy\nplt.scatter(life_data['percentage expenditure'], life_data['Life expectancy '])\nplt.title('percentage expenditure vs. Life Expectancy')\nplt.xlabel('Percentage Healthcare Expenditure')\nplt.ylabel('Life Expectancy')",
"_____no_output_____"
],
[
"# BMI vs. Life Expectancy\nplt.scatter(life_data[' BMI '], life_data['Life expectancy '])\nplt.title('BMI vs. Life Expectancy')\nplt.xlabel('BMI')\nplt.ylabel('Life Expectancy')",
"_____no_output_____"
],
[
"# HIV/AIDS vs. Life Expectancy\nplt.scatter(life_data[' HIV/AIDS'], life_data['Life expectancy '])\nplt.title('HIV/AIDS vs. Life Expectancy')\nplt.xlabel('HIV/AIDS')\nplt.ylabel('Life Expectancy')",
"_____no_output_____"
],
[
"# Population vs. Life Expectancy\nplt.scatter(life_data['Population'], life_data['Life expectancy '])\nplt.title('Population vs. Life Expectancy')\nplt.xlabel('Population')\nplt.ylabel('Life Expectancy')",
"_____no_output_____"
],
[
"# Population vs. Life Expectancy\nplt.scatter(life_data[' thinness 1-19 years'], life_data['Life expectancy '])\nplt.title('thinness 1-19 years vs. Life Expectancy')\nplt.xlabel('thinness 1-19 years')\nplt.ylabel('Life Expectancy')",
"_____no_output_____"
],
[
"# thinness 5-9 years vs. Life Expectancy\nplt.scatter(life_data[' thinness 5-9 years'], life_data['Life expectancy '])\nplt.title('thinness 5-9 years vs. Life Expectancy')\nplt.xlabel('thinness 5-9 years')\nplt.ylabel('Life Expectancy')",
"_____no_output_____"
],
[
"# Schooling vs. Life Expectancy\nplt.scatter(life_data['Schooling'], life_data['Life expectancy '])\nplt.title('Schooling vs. Life Expectancy')\nplt.xlabel('Schooling')\nplt.ylabel('Life Expectancy')",
"_____no_output_____"
],
[
"import seaborn as sns\nplt.figure(figsize = (14, 10))\nsns.heatmap(life_data.corr(), annot = True)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75e64286946d9d57fb46d49d8e54110c373b228 | 388,614 | ipynb | Jupyter Notebook | Simple-Web-Scraper-Python.ipynb | demetriospogkas/Web-Scraping-with-Beautiful-Soup | 836849305835f4169ec96b82c7c7e8c7ee288d80 | [
"MIT"
] | null | null | null | Simple-Web-Scraper-Python.ipynb | demetriospogkas/Web-Scraping-with-Beautiful-Soup | 836849305835f4169ec96b82c7c7e8c7ee288d80 | [
"MIT"
] | null | null | null | Simple-Web-Scraper-Python.ipynb | demetriospogkas/Web-Scraping-with-Beautiful-Soup | 836849305835f4169ec96b82c7c7e8c7ee288d80 | [
"MIT"
] | 1 | 2019-09-04T04:31:45.000Z | 2019-09-04T04:31:45.000Z | 98.934318 | 70,057 | 0.639833 | [
[
[
"# How to: Scrape the Web\nwith Python + requests + BeautifulSoup",
"_____no_output_____"
],
[
"Before you replicate the following code, make sure you have Python and all dependencies installed.\n- To install package manager brew: \n`/usr/bin/ruby -e \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)\"` \n- To install Python3: `brew install python3` \n- To install Jupyter and use Notebooks: `pip3 install jupyter` \n- To install requests: `pip3 install requests` \n- To install BeautifulSoup: `pip3 install bs4` ",
"_____no_output_____"
],
[
"Documentation: \n- Python: https://www.python.org/doc/\n- requests: http://docs.python-requests.org/en/master/ \n- BeautifulSoup: https://www.crummy.com/software/BeautifulSoup/bs4/doc/",
"_____no_output_____"
],
[
"### Import all the needed dependencies",
"_____no_output_____"
]
],
[
[
"import requests\nfrom bs4 import BeautifulSoup",
"_____no_output_____"
]
],
[
[
"### Grab HTML source code",
"_____no_output_____"
],
[
"##### Send GET request",
"_____no_output_____"
]
],
[
[
"url = 'http://www.imfdb.org/wiki/Category:Movie'\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36',\n 'Connection' : 'keep-alive'\n}\n\nproxies = {\n # Include your proxies if needed\n # 'http':'...',\n # 'https':'...'\n}\n\nresponse = requests.get(url, headers=headers, proxies=proxies)\nresponse",
"_____no_output_____"
]
],
[
[
"##### Save the response",
"_____no_output_____"
]
],
[
[
"text = response.text\ntext",
"_____no_output_____"
]
],
[
[
"### Parse the response with BeautifulSoup",
"_____no_output_____"
]
],
[
[
"souped = BeautifulSoup(text, \"html.parser\")\nsouped",
"_____no_output_____"
]
],
[
[
"### Find the `<div>` for movie pages",
"_____no_output_____"
]
],
[
[
"movie_pages = souped.find('div', attrs={'id':'mw-pages'})\nmovie_pages",
"_____no_output_____"
]
],
[
[
"### Grab all links to movie pages",
"_____no_output_____"
]
],
[
[
"bullets = movie_pages.find_all('li')\nbullets",
"_____no_output_____"
],
[
"urls = [] # Initiate an empty list",
"_____no_output_____"
],
[
"for bullet in bullets: # simple for loop\n url = 'http://www.imfdb.org' + bullet.a['href'] # local scope variable\n print(url) # console.log in JavaScript\n urls.append(url)",
"http://www.imfdb.org/wiki/%2771\nhttp://www.imfdb.org/wiki/%27Burbs,_The\nhttp://www.imfdb.org/wiki/008:_Operation_Exterminate\nhttp://www.imfdb.org/wiki/10_Cloverfield_Lane\nhttp://www.imfdb.org/wiki/10_to_Midnight\nhttp://www.imfdb.org/wiki/100_Bloody_Acres\nhttp://www.imfdb.org/wiki/100_Rifles\nhttp://www.imfdb.org/wiki/10th_Victim,_The\nhttp://www.imfdb.org/wiki/11.6\nhttp://www.imfdb.org/wiki/12_Rounds\nhttp://www.imfdb.org/wiki/12_Strong\nhttp://www.imfdb.org/wiki/12_Years_a_Slave\nhttp://www.imfdb.org/wiki/13\nhttp://www.imfdb.org/wiki/13_Hours:_The_Secret_Soldiers_of_Benghazi\nhttp://www.imfdb.org/wiki/1492:_Conquest_of_Paradise\nhttp://www.imfdb.org/wiki/15_Minutes\nhttp://www.imfdb.org/wiki/15:17_to_Paris,_The\nhttp://www.imfdb.org/wiki/16_Blocks\nhttp://www.imfdb.org/wiki/1612\nhttp://www.imfdb.org/wiki/18-14\nhttp://www.imfdb.org/wiki/1911_(2011)\nhttp://www.imfdb.org/wiki/1922_(2017)\nhttp://www.imfdb.org/wiki/1941\nhttp://www.imfdb.org/wiki/1944\nhttp://www.imfdb.org/wiki/1968_Tunnel_Rats\nhttp://www.imfdb.org/wiki/2_Days_in_the_Valley\nhttp://www.imfdb.org/wiki/2_Fast_2_Furious\nhttp://www.imfdb.org/wiki/2_Guns\nhttp://www.imfdb.org/wiki/2_Lava_2_Lantula!\nhttp://www.imfdb.org/wiki/20_Million_Miles_to_Earth\nhttp://www.imfdb.org/wiki/20000_Leagues_Under_the_Sea\nhttp://www.imfdb.org/wiki/2009:_Lost_Memories\nhttp://www.imfdb.org/wiki/2012\nhttp://www.imfdb.org/wiki/21_Grams\nhttp://www.imfdb.org/wiki/21_Jump_Street_(2012)\nhttp://www.imfdb.org/wiki/22_Bullets\nhttp://www.imfdb.org/wiki/22_Jump_Street\nhttp://www.imfdb.org/wiki/22_Minutes_(22_minuty)\nhttp://www.imfdb.org/wiki/23_(1998)\nhttp://www.imfdb.org/wiki/24:_Redemption\nhttp://www.imfdb.org/wiki/25th_Hour\nhttp://www.imfdb.org/wiki/28_Days_Later\nhttp://www.imfdb.org/wiki/28_Weeks_Later\nhttp://www.imfdb.org/wiki/3_Days_to_Kill\nhttp://www.imfdb.org/wiki/3_Women\nhttp://www.imfdb.org/wiki/30_Days_of_Night\nhttp://www.imfdb.org/wiki/30_Minutes_or_Less_(2011)\nhttp://www.imfdb.org/wiki/3000_Mil
es_to_Graceland\nhttp://www.imfdb.org/wiki/31_North_62_East\nhttp://www.imfdb.org/wiki/317th_Platoon,_The\nhttp://www.imfdb.org/wiki/36th_Precinct\nhttp://www.imfdb.org/wiki/39_Steps,_The_(1935)\nhttp://www.imfdb.org/wiki/39_Steps,_The_(1959)\nhttp://www.imfdb.org/wiki/39_Steps,_The_(2008)\nhttp://www.imfdb.org/wiki/3:10_to_Yuma_(1957)\nhttp://www.imfdb.org/wiki/3:10_to_Yuma_(2007)\nhttp://www.imfdb.org/wiki/4_for_Texas\nhttp://www.imfdb.org/wiki/4.3.2.1\nhttp://www.imfdb.org/wiki/44_Minutes:_The_North_Hollywood_Shootout\nhttp://www.imfdb.org/wiki/47_Ronin\nhttp://www.imfdb.org/wiki/48_Hrs.\nhttp://www.imfdb.org/wiki/5_Days_of_War\nhttp://www.imfdb.org/wiki/52_Pick-Up\nhttp://www.imfdb.org/wiki/55_Days_at_Peking\nhttp://www.imfdb.org/wiki/5th_Wave,_The\nhttp://www.imfdb.org/wiki/6_Days\nhttp://www.imfdb.org/wiki/633_Squadron\nhttp://www.imfdb.org/wiki/6th_Day,_The\nhttp://www.imfdb.org/wiki/7_Seconds\nhttp://www.imfdb.org/wiki/71:_Into_the_Fire\nhttp://www.imfdb.org/wiki/8_Heads_in_a_Duffel_Bag\nhttp://www.imfdb.org/wiki/8_Mile\nhttp://www.imfdb.org/wiki/8_Million_Ways_to_Die\nhttp://www.imfdb.org/wiki/800_Bullets_(800_Malas)\nhttp://www.imfdb.org/wiki/88_Minutes\nhttp://www.imfdb.org/wiki/8MM\nhttp://www.imfdb.org/wiki/99_and_44/100%25_Dead\nhttp://www.imfdb.org/wiki/9th_Company\nhttp://www.imfdb.org/wiki/A_Bad_Good_Man_(Plokhoy_khoroshiy_chelovek)\nhttp://www.imfdb.org/wiki/A_Bay_of_Blood\nhttp://www.imfdb.org/wiki/A_Beautiful_Mind\nhttp://www.imfdb.org/wiki/A_Better_Tomorrow\nhttp://www.imfdb.org/wiki/A_Better_Tomorrow_II\nhttp://www.imfdb.org/wiki/A_Better_Tomorrow_III\nhttp://www.imfdb.org/wiki/A_Bittersweet_Life\nhttp://www.imfdb.org/wiki/A_Boy_and_His_Dog\nhttp://www.imfdb.org/wiki/A_Breath_of_Scandal\nhttp://www.imfdb.org/wiki/A_Bridge_Too_Far\nhttp://www.imfdb.org/wiki/A_Bronx_Tale\nhttp://www.imfdb.org/wiki/A_Bullet_For_Joey\nhttp://www.imfdb.org/wiki/A_Bullet_for_the_General\nhttp://www.imfdb.org/wiki/A_Captain_at_Fifteen_(Pyatnadtsatiletniy_kapitan)\nhtt
p://www.imfdb.org/wiki/A_Captain%27s_Honor_(L%27Honneur_d%27un_capitaine)\nhttp://www.imfdb.org/wiki/A_Christmas_Story\nhttp://www.imfdb.org/wiki/A_Clockwork_Orange\nhttp://www.imfdb.org/wiki/A_Cruel_Romance\nhttp://www.imfdb.org/wiki/A_Dangerous_Man\nhttp://www.imfdb.org/wiki/A_Dark_Truth\nhttp://www.imfdb.org/wiki/A_Day_of_Fury\nhttp://www.imfdb.org/wiki/A_Dear_Boy_(Dorogoy_malchik)\nhttp://www.imfdb.org/wiki/A_Farewell_to_Arms\nhttp://www.imfdb.org/wiki/A_Few_Days_in_September\nhttp://www.imfdb.org/wiki/A_Few_Good_Men\nhttp://www.imfdb.org/wiki/A_Field_in_England\nhttp://www.imfdb.org/wiki/A_Fish_Called_Wanda\nhttp://www.imfdb.org/wiki/A_Fistful_of_Dollars\nhttp://www.imfdb.org/wiki/A_Force_of_One\nhttp://www.imfdb.org/wiki/A_Game_without_Rules_(Hra_bez_pravidel)\nhttp://www.imfdb.org/wiki/A_Gang_Story_(Les_Lyonnais)\nhttp://www.imfdb.org/wiki/A_Generation_(Pokolenie)\nhttp://www.imfdb.org/wiki/A_Gentle_Creature_(Krotkaya)\nhttp://www.imfdb.org/wiki/A_Gentle_Woman_(Une_femme_douce)\nhttp://www.imfdb.org/wiki/A_Golden-coloured_Straw_Hat_(Solomennaya_shlyapka)\nhttp://www.imfdb.org/wiki/A_Good_Day_to_Die_Hard\nhttp://www.imfdb.org/wiki/A_Good_Lad_(Slavnyy_malyy)\nhttp://www.imfdb.org/wiki/A_Good_Man\nhttp://www.imfdb.org/wiki/A_History_of_Violence\nhttp://www.imfdb.org/wiki/A_Hologram_for_the_King\nhttp://www.imfdb.org/wiki/A_Jester%27s_Tale_(Bl%C3%A1znova_kronika)\nhttp://www.imfdb.org/wiki/A_Judgement_in_Stone_(La_C%C3%A9r%C3%A9monie)\nhttp://www.imfdb.org/wiki/A_Life_Less_Ordinary\nhttp://www.imfdb.org/wiki/A_Low_Down_Dirty_Shame\nhttp://www.imfdb.org/wiki/A_Man_Apart\nhttp://www.imfdb.org/wiki/A_Man_Called_Blade_(Mannaja)\nhttp://www.imfdb.org/wiki/A_Man_Called_Magnum_(Napoli_si_ribella)\nhttp://www.imfdb.org/wiki/A_Man_from_the_Boulevard_des_Capucines_(Chelovek_s_bulvara_Kaputsinov)\nhttp://www.imfdb.org/wiki/A_Man_Named_Rocca_(Un_nomm%C3%A9_La_Rocca)\nhttp://www.imfdb.org/wiki/A_Midnight_Clear\nhttp://www.imfdb.org/wiki/A_Million_Ways_to_Die_in_the_West\nhttp
://www.imfdb.org/wiki/A_Most_Violent_Year\nhttp://www.imfdb.org/wiki/A_Night_to_Remember\nhttp://www.imfdb.org/wiki/A_Nightmare_on_Elm_Street_(1984)\nhttp://www.imfdb.org/wiki/A_Nightmare_on_Elm_Street_2:_Freddy%27s_Revenge\nhttp://www.imfdb.org/wiki/A_Noisy_Household_(Bespokoynoe_khozyaystvo)\nhttp://www.imfdb.org/wiki/A_Pain_in_the_Ass_(L%27emmerdeur)_(1973)\nhttp://www.imfdb.org/wiki/A_Pain_in_the_Ass_(L%27emmerdeur)_(2008)\nhttp://www.imfdb.org/wiki/A_Perfect_Getaway\nhttp://www.imfdb.org/wiki/A_Perfect_Murder\nhttp://www.imfdb.org/wiki/A_Perfect_World\nhttp://www.imfdb.org/wiki/A_Pistol_Shot_(Vystrel)\nhttp://www.imfdb.org/wiki/A_Police_Commissioner_Accuses_(Un_comisar_acuza)\nhttp://www.imfdb.org/wiki/A_Prayer_for_Katarina_Horovitzova\nhttp://www.imfdb.org/wiki/A_Professional_Gun_(Il_mercenario)\nhttp://www.imfdb.org/wiki/A_Prophet\nhttp://www.imfdb.org/wiki/A_Quiet_Outpost_(Tikhaya_zastava)\nhttp://www.imfdb.org/wiki/A_Scanner_Darkly\nhttp://www.imfdb.org/wiki/A_Serbian_Film_(Srpski_film)\nhttp://www.imfdb.org/wiki/A_Shot_in_the_Dark\nhttp://www.imfdb.org/wiki/A_Simple_Plan\nhttp://www.imfdb.org/wiki/A_Slight_Case_of_Murder\nhttp://www.imfdb.org/wiki/A_Soldier%27s_Story\nhttp://www.imfdb.org/wiki/A_Sound_of_Thunder\nhttp://www.imfdb.org/wiki/A_Star_Called_Wormwood_(Hvezda_zvan%C3%A1_Pelynek)\nhttp://www.imfdb.org/wiki/A_Step_into_the_Darkness_(Krok_do_tmy)\nhttp://www.imfdb.org/wiki/A_Time_to_Kill\nhttp://www.imfdb.org/wiki/A_Very_Harold_%26_Kumar_3D_Christmas\nhttp://www.imfdb.org/wiki/A_Very_Long_Engagement\nhttp://www.imfdb.org/wiki/A_View_to_a_Kill\nhttp://www.imfdb.org/wiki/A_Walk_Among_the_Tombstones\nhttp://www.imfdb.org/wiki/A_Walk_in_the_Sun\nhttp://www.imfdb.org/wiki/A_While_(Chv%C3%ADle)\nhttp://www.imfdb.org/wiki/A_Woman_at_Her_Window_(Une_femme_%C3%A0_sa_fen%C3%AAtre)\nhttp://www.imfdb.org/wiki/A_Woman_in_Berlin\nhttp://www.imfdb.org/wiki/A_Woman%27s_Secret\nhttp://www.imfdb.org/wiki/A_Yakuza%27s_Daughter_Never_Cries\nhttp://www.imfdb.org/wiki/A_
Youth_Orchestra_(Orkestar_jedne_mladosti)\nhttp://www.imfdb.org/wiki/A-Team,_The_(2010)\nhttp://www.imfdb.org/wiki/Abandoned,_The\nhttp://www.imfdb.org/wiki/Abang_Long_Fadil_2\nhttp://www.imfdb.org/wiki/Abduction_(2011)\nhttp://www.imfdb.org/wiki/Abominable\nhttp://www.imfdb.org/wiki/About_Friends-Comrades_(O_druzyakh-tovarishchakh)\nhttp://www.imfdb.org/wiki/Above_the_Law\nhttp://www.imfdb.org/wiki/Above_Us_the_Waves\nhttp://www.imfdb.org/wiki/Abraham_Lincoln:_Vampire_Hunter\nhttp://www.imfdb.org/wiki/Absolute_Power\nhttp://www.imfdb.org/wiki/Absolution\nhttp://www.imfdb.org/wiki/Abyss,_The\nhttp://www.imfdb.org/wiki/Accountant,_The\nhttp://www.imfdb.org/wiki/Ace_of_Aces_(L%27As_des_as),_The\nhttp://www.imfdb.org/wiki/Ace_Ventura:_Pet_Detective\nhttp://www.imfdb.org/wiki/Ace_Ventura:_When_Nature_Calls\nhttp://www.imfdb.org/wiki/Aces_High\nhttp://www.imfdb.org/wiki/Aces:_Iron_Eagle_III\nhttp://www.imfdb.org/wiki/Across_110th_Street\nhttp://www.imfdb.org/wiki/Across_the_Line:_The_Exodus_of_Charlie_Wright\nhttp://www.imfdb.org/wiki/Across_the_Pacific\nhttp://www.imfdb.org/wiki/Act_of_Aggression_(L%27Agression)\nhttp://www.imfdb.org/wiki/Act_of_Valor\nhttp://www.imfdb.org/wiki/Action_(Aktsiya),_The\nhttp://www.imfdb.org/wiki/Action_B_(Akce_B)\nhttp://www.imfdb.org/wiki/Action_Jackson\nhttp://www.imfdb.org/wiki/Action_Man_(Le_soleil_des_voyous)\nhttp://www.imfdb.org/wiki/Adele_Hasn%27t_Had_Her_Dinner_Yet\nhttp://www.imfdb.org/wiki/Adelheid\nhttp://www.imfdb.org/wiki/Adulthood\nhttp://www.imfdb.org/wiki/Adventures_in_Babysitting\nhttp://www.imfdb.org/wiki/Adventures_of_Buckaroo_Banzai_Across_the_8th_Dimension,_The\nhttp://www.imfdb.org/wiki/Adventures_of_Ford_Fairlane,_The\nhttp://www.imfdb.org/wiki/Adventures_of_Pluto_Nash,_The\n"
],
[
"urls",
"_____no_output_____"
]
],
[
[
"### Find the link to the next page\nConveniently enough, it's the very last `<a>` in the movie_pages `<div>`",
"_____no_output_____"
]
],
[
[
"movie_pages",
"_____no_output_____"
],
[
"movie_pages.find_all('a')",
"_____no_output_____"
],
[
"# This is a list\ntype(movie_pages.find_all('a'))",
"_____no_output_____"
],
[
"next_page = movie_pages.find_all('a')[-1]\nnext_page",
"_____no_output_____"
],
[
"next_page.text",
"_____no_output_____"
],
[
"next_page['href']",
"_____no_output_____"
],
[
"next_page_url = 'http://www.imfdb.org' + next_page['href']\nnext_page_url",
"_____no_output_____"
]
],
[
[
"### Combine it all into one piece of code\nto extract all 5k pages/links",
"_____no_output_____"
]
],
[
[
"urls = []\n\ndef scrape_the_web(url): # Python function with one parameter\n headers = {\n 'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36',\n 'Connection' : 'keep-alive'\n }\n\n proxies = {\n # Don't forget your proxies if you need any\n }\n\n response = requests.get(url, headers=headers, proxies=proxies)\n souped = BeautifulSoup(response.text, \"html.parser\")\n\n movie_pages = souped.find('div', attrs={'id':'mw-pages'})\n\n bullets = movie_pages.find_all('li')\n for bullet in bullets:\n url = 'http://www.imfdb.org' + bullet.a['href']\n urls.append(url)\n next_page = movie_pages.find_all('a')[-1]\n next_page_text = next_page.text\n \n if next_page_text == \"next 200\":\n next_page_url = 'http://www.imfdb.org' + next_page['href']\n print(next_page_url)\n scrape_the_web(next_page_url)\n else:\n pass\n\nurl = 'http://www.imfdb.org/wiki/Category:Movie'\nscrape_the_web(url)",
"http://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Adventures+of+Prince+Florisel+%28Priklyucheniya+printsa+Florizelya%29%2C+The#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Arthur+Hailey%27s+Detective#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Believed+Violent+%28Pr%C3%A9sum%C3%A9+dangereux%29#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Bourne+Supremacy%2C+The#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Cell%2C+The#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Cop+and+a+Half%3A+New+Recruit#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Deadlier+Than+the+Male#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Dogs+of+War%2C+The#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Extraction#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Fracchia+the+Human+Beast+%28Fracchia+la+belva+umana%29#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Going+Back#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Hell+Is+for+Heroes#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Ice+Cold+in+Alex#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Johnny+Guitar#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Last+Known+Address+%28Dernier+domicile+connu%29#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=L%C3%A9on%3A+The+Professional#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Millionairess%2C+The#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Next+of+Kin#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Outlaw%2C+The#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Pretty+Poison#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Mov
ie&pagefrom=Resident+Evil%3A+Retribution#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Scotland%2C+PA.#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Slither#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Strawberries+in+the+Supermarket+%28Jagoda+u+supermarketu%29#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Thirteen+at+Dinner#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Ultimate+Weapon%2C+The#mw-pages\nhttp://www.imfdb.org/index.php?title=Category:Movie&pagefrom=Whisper+%28%C5%A0eptej%29#mw-pages\n"
],
[
"len(urls)",
"_____no_output_____"
],
[
"urls[-1]",
"_____no_output_____"
]
],
[
[
"# Now that we've got every link, let's extract firearm information from each page",
"_____no_output_____"
]
],
[
[
"url = 'http://www.imfdb.org/wiki/American_Graffiti'\n\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36',\n 'Connection' : 'keep-alive'\n}\n\nproxies = {\n # Don't forget your proxies if you need any\n}\n\nresponse = requests.get(url, headers=headers, proxies=proxies)\nsouped = BeautifulSoup(response.text, \"html.parser\")\nsouped",
"_____no_output_____"
],
[
"souped.find_all('span', attrs={'class':'mw-headline'})",
"_____no_output_____"
],
[
"# list comprehension\n[span.text for span in souped.find_all('span', attrs={'class':'mw-headline'})]",
"_____no_output_____"
],
[
"[span.next.next.next.text for span in souped.find_all('span', attrs={'class':'mw-headline'})]",
"_____no_output_____"
]
],
[
[
"### Let's try with another movie",
"_____no_output_____"
]
],
[
[
"url = 'http://www.imfdb.org/wiki/And_All_Will_Be_Quiet_(Potem_nastapi_cisza)'\n\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36',\n 'Connection' : 'keep-alive'\n}\n\nproxies = {\n # Don't forget your proxies if you need any\n}\n\nresponse = requests.get(url, headers=headers, proxies=proxies)\nsouped = BeautifulSoup(response.text, \"html.parser\")\nprint([span.text for span in souped.find_all('span', attrs={'class':'mw-headline'})])\nprint([span.next.next.next.text for span in souped.find_all('span', attrs={'class':'mw-headline'})])",
"[' Pistols ', ' Tokarev TT-33 ', 'Luger P08', ' Submachine Guns ', 'PPSh-41', ' MP40 ', ' Machine Guns ', 'Degtyaryov DP-28', ' MG34 ', 'Goryunov SG-43 Machine Gun', ' Maxim ', ' Rifles ', ' Mosin Nagant M44 Carbine ', ' Mosin Nagant M38 Carbine ', ' Karabiner 98k ', ' Hand Grenades ', ' F-1 hand grenade ', ' Model 24 Stielhandgranate ', ' Others ', ' SPSh Flare Pistol ', ' PTRD-41 ', ' 7.5 cm Pak 40 ', ' 45mm anti-tank gun M1937 (53-K) ', ' 76 mm divisional gun M1942 (ZiS-3)', ' SU-76M ', ' T-34 ']\n[' Tokarev TT-33 ', 'Various characters are seen with a Tokarev TT-33 pistol.\\n', 'Some German NCO and officers carry a Luger P08 pistol.\\n', ' PPSh-41', 'Polish infantrymen are mainly armed with PPSh-41 submachine guns.\\n', 'MP40 is submachine gun used by German infantrymen.\\n', ' Degtyaryov DP-28', 'Polish soldiers mainly use Degtyarev DP-28 machine guns.\\n', 'MG34 machine guns are widely used by German soldiers.\\n', 'Polish soldiers are also occasionally seen with Goryunov SG-43 machine guns.\\n', 'Polish troops are equipped with a Maxim M1910/30 machine guns.\\n', ' Mosin Nagant M44 Carbine ', 'Some Polish soldiers are armed with a Mosin Nagant M44 carbine.\\n', 'But most Polish infantrymen carry older type M38 carbines.\\n', 'The Kar98k carry a few German soldiers.\\n', ' F-1 hand grenade ', 'Polish infantrymen carry F-1 hand grenades and also Model 24 Stielhandgranates.\\n', ' Model 24 Stielhandgranate \"Potato Masher\" high-explosive fragmentation hand grenade', ' SPSh Flare Pistol ', 'Lt. 
Kolski (Marek Perepeczko) gives instruction to the firing a rocket from SPSh Flare Pistol.\\n', 'Polish troops are equipped with PTRD-41 anti-tank rifles.\\n', 'The popular weapon of the German Army is a 7.5 cm Pak 40 anti tank gun.\\n', 'Soviet troops are equipped with 45 mm anti-tank gun M1937 (53-K)s.\\n', 'Polish artillery use against German tanks a 76 mm divisional gun M1942 (ZiS-3).\\n', 'On the battlefield appears also several Polish SU-76M self-propelled guns.\\n', 'The Polish army in the USSR had in service with the Soviet tanks T-34.\\n']\n"
]
],
[
[
"### Remove extra spaces and special characters",
"_____no_output_____"
]
],
[
[
"print([span.next.next.next.text.strip() for span in souped.find_all('span', attrs={'class':'mw-headline'})])",
"['Tokarev TT-33', 'Various characters are seen with a Tokarev TT-33 pistol.', 'Some German NCO and officers carry a Luger P08 pistol.', 'PPSh-41', 'Polish infantrymen are mainly armed with PPSh-41 submachine guns.', 'MP40 is submachine gun used by German infantrymen.', 'Degtyaryov DP-28', 'Polish soldiers mainly use Degtyarev DP-28 machine guns.', 'MG34 machine guns are widely used by German soldiers.', 'Polish soldiers are also occasionally seen with Goryunov SG-43 machine guns.', 'Polish troops are equipped with a Maxim M1910/30 machine guns.', 'Mosin Nagant M44 Carbine', 'Some Polish soldiers are armed with a Mosin Nagant M44 carbine.', 'But most Polish infantrymen carry older type M38 carbines.', 'The Kar98k carry a few German soldiers.', 'F-1 hand grenade', 'Polish infantrymen carry F-1 hand grenades and also Model 24 Stielhandgranates.', 'Model 24 Stielhandgranate \"Potato Masher\" high-explosive fragmentation hand grenade', 'SPSh Flare Pistol', 'Lt. Kolski (Marek Perepeczko) gives instruction to the firing a rocket from SPSh Flare Pistol.', 'Polish troops are equipped with PTRD-41 anti-tank rifles.', 'The popular weapon of the German Army is a 7.5 cm Pak 40 anti tank gun.', 'Soviet troops are equipped with 45 mm anti-tank gun M1937 (53-K)s.', 'Polish artillery use against German tanks a 76 mm divisional gun M1942 (ZiS-3).', 'On the battlefield appears also several Polish SU-76M self-propelled guns.', 'The Polish army in the USSR had in service with the Soviet tanks T-34.']\n"
]
],
[
[
"# Combine everything into one loop",
"_____no_output_____"
]
],
[
[
"len(urls)",
"_____no_output_____"
],
[
"headers = {\n'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36',\n'Connection' : 'keep-alive'\n}\n\nproxies = {\n # Don't forget your proxies if you need any\n}\n\nevery_gun_in_every_movie = []\nuncaught_movies = []\n\nfor url in urls:\n \n movie_title = url.split('wiki/')[1]\n \n response = requests.get(url, headers=headers, proxies=proxies)\n souped = BeautifulSoup(response.text, \"html.parser\")\n\n try:\n guns_depicted = [p.span.text.strip() for p in souped.find_all('h2') if p.span and p.span['class'][0] == 'mw-headline']\n scene_descriptions = [p.span.parent.find_next('p').text.strip() for p in souped.find_all('h2') if p.span and p.span['class'][0] == 'mw-headline']\n except Exception:\n uncaught_movies.append(url)\n continue # skip pages that fail to parse, otherwise the previous page's lists would be reused\n \n for gun, description in zip(guns_depicted, scene_descriptions):\n empty_dictionary = {} # Python dictionaries\n empty_dictionary['movie_title'] = movie_title\n empty_dictionary['gun_used'] = gun\n empty_dictionary['scene_description'] = description\n every_gun_in_every_movie.append(empty_dictionary)",
"_____no_output_____"
],
[
"len(every_gun_in_every_movie)",
"_____no_output_____"
],
[
"len(uncaught_movies)",
"_____no_output_____"
]
],
[
[
"# And while we're at it\n`pip3 install pandas`",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"df = pd.DataFrame(every_gun_in_every_movie)\ndf",
"_____no_output_____"
],
[
"df.movie_title.value_counts().head(8)",
"_____no_output_____"
],
[
"df.gun_used.value_counts().head(8)",
"_____no_output_____"
],
[
"df.to_csv(\"every_gun_in_every_movie.csv\", index=False)",
"_____no_output_____"
],
[
"from matplotlib import pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"df.movie_title.value_counts().head(8).plot(kind='bar')",
"_____no_output_____"
],
[
"plt.style.use('ggplot')\ndf.movie_title.value_counts().head(8).plot(kind='bar', figsize=(10,8))\nplt.savefig('every_gun_in_every_movie.svg')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75e6664ab6acdf9dced575f6a04400173d0dafd | 40,120 | ipynb | Jupyter Notebook | algorithms_in_ipython_notebooks/ipython_nbs/efficiency/fibonacci-tree.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | 1 | 2019-05-10T09:16:23.000Z | 2019-05-10T09:16:23.000Z | algorithms_in_ipython_notebooks/ipython_nbs/efficiency/fibonacci-tree.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | null | null | null | algorithms_in_ipython_notebooks/ipython_nbs/efficiency/fibonacci-tree.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | 1 | 2019-05-10T09:17:28.000Z | 2019-05-10T09:17:28.000Z | 125.768025 | 32,692 | 0.870738 | [
[
[
"%load_ext watermark\n%watermark -a 'Sebastian Raschka' -u -d -v",
"Sebastian Raschka \nlast updated: 2016-06-01 \n\nCPython 3.5.1\nIPython 4.2.0\n"
]
],
[
[
"# Fibonacci Numbers",
"_____no_output_____"
],
[
"A Fibonacci number F(n) is computed as the sum of the two numbers preceding it in the Fibonacci sequence\n\n(0), 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...,\n\nfor example, F(10) = 55.\n\nMore formally, we can define a Fibonacci number F(n) as \n\n$F(n) = F(n-1) + F(n-2)$, for integers $n > 1$:\n\n$$F(n)=\n\\begin{cases} \n 0, & n=0, \\\\\n 1, & n=1, \\\\\n F(n-1) + F(n-2), & n > 1.\n \\end{cases}$$",
"_____no_output_____"
],
[
"The Fibonacci sequence is named after Leonardo Fibonacci, who used it to study rabbit populations in the 13th century. I highly recommend reading the excellent articles on [Wikipedia](https://en.wikipedia.org/wiki/Fibonacci_number) and [Wolfram](http://mathworld.wolfram.com/FibonacciNumber.html), which discuss interesting facts about Fibonacci numbers in great detail.",
"_____no_output_____"
],
[
"The recursive Fibonacci number computation is a typical text book example of a recursive algorithm:",
"_____no_output_____"
]
],
[
[
"def fibo_recurse(n):\n if n <= 1:\n return n\n else:\n return fibo_recurse(n-1) + fibo_recurse(n-2)\n \nprint(fibo_recurse(0))\nprint(fibo_recurse(1))\nprint(fibo_recurse(10))",
"0\n1\n55\n"
]
],
[
[
"However, it is unfortunately a terribly inefficient algorithm with an exponential running time of $O(2^n)$. The main reason it is so slow is that we recompute the same Fibonacci numbers $F(n) = F(n-1) + F(n-2)$ over and over, as shown in the recursion tree below:\n\n",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"For example, assuming $n \\geq 2$, we have \n\n$O(2^{n-1}) + O(2^{n-2}) + O(1) = O(2^n)$\n\nfor $F(n) = F(n-1) + F(n-2)$, where $O(1)$ is for adding the two Fibonacci numbers together.\n\n",
"_____no_output_____"
],
[
"A more efficient way to compute a Fibonacci number is a bottom-up dynamic programming approach with linear runtime, $O(n)$:",
"_____no_output_____"
]
],
[
[
"def fibo_dynamic(n):\n f, f_minus_1 = 0, 1\n for i in range(n):\n f_minus_1, f = f, f + f_minus_1\n return f\n\nprint(fibo_dynamic(0))\nprint(fibo_dynamic(1))\nprint(fibo_dynamic(10))",
"0\n1\n55\n"
]
],
[
[
"(If you are interested in other approaches, I recommend you take a look at the pages on [Wikipedia](https://en.wikipedia.org/wiki/Fibonacci_number) and [Wolfram](http://mathworld.wolfram.com/FibonacciNumber.html).)",
"_____no_output_____"
],
[
"To get a rough idea of the running times of each of our implementations, let's use the `%timeit` magic for F(30).",
"_____no_output_____"
]
],
[
[
"%timeit -r 3 -n 10 fibo_recurse(n=30) ",
"10 loops, best of 3: 499 ms per loop\n"
],
[
"%timeit -r 3 -n 10 fibo_dynamic(n=30) ",
"10 loops, best of 3: 4.05 µs per loop\n"
]
],
[
[
"Finally, let's benchmark our implementations for varying sizes of n:",
"_____no_output_____"
]
],
[
[
"import timeit\n\nfuncs = ['fibo_recurse', 'fibo_dynamic']\norders_n = list(range(0, 50, 10))\ntimes_n = {f:[] for f in funcs}\n\nfor n in orders_n:\n for f in funcs:\n times_n[f].append(min(timeit.Timer('%s(n)' % f, \n 'from __main__ import %s, n' % f)\n .repeat(repeat=3, number=5)))",
"_____no_output_____"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\ndef plot_timing():\n\n labels = [('fibo_recurse', 'fibo_recurse'), \n ('fibo_dynamic', 'fibo_dynamic')]\n\n plt.rcParams.update({'font.size': 12})\n\n fig = plt.figure(figsize=(10, 8))\n for lb in labels:\n plt.plot(orders_n, times_n[lb[0]], \n alpha=0.5, label=lb[1], marker='o', lw=3)\n plt.xlabel('sample size n')\n plt.ylabel('time for 5 computations in seconds [s]')\n plt.legend(loc=2)\n plt.ylim([-1, 300])\n plt.grid()\n plt.show()",
"_____no_output_____"
],
[
"plot_timing()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e75e748b85cd6f9395a7c8b30ec491c1002545e0 | 13,701 | ipynb | Jupyter Notebook | pandas_exercise.ipynb | Tima1117/assignment2-template | b7d378ce460e4042223dc09c23736f6962caa37d | [
"MIT"
] | null | null | null | pandas_exercise.ipynb | Tima1117/assignment2-template | b7d378ce460e4042223dc09c23736f6962caa37d | [
"MIT"
] | null | null | null | pandas_exercise.ipynb | Tima1117/assignment2-template | b7d378ce460e4042223dc09c23736f6962caa37d | [
"MIT"
] | null | null | null | 37.64011 | 467 | 0.544778 | [
[
[
"# Homework 3. Pandas\n\n## Important notes\n\n1. *When you open this file on GitHub, copy the address to this file from the address bar of your browser. Now you can go to [Google Colab](https://colab.research.google.com/), click `File -> Open notebook -> GitHub`, paste the copied URL and click the search button (the one with the magnifying glass to the right of the search input box). Your personal copy of this notebook will now open on Google Colab.*\n2. *Do not delete or change variable names in the code cells below. You may add to each cell as many lines of code as you need, just make sure to assign your solution to the predefined variable(s) in the corresponding cell. Failing to do so will make the tests fail.*\n3. *To save your work, click `File -> Save a copy on GitHub` and __make sure to manually select the correct repository from the dropdown list__.*\n4. *If you mess up with this file and need to start from scratch, you can always find the original one [here](https://github.com/hse-mlwp-2022/assignment3-template/blob/main/pandas_exercise.ipynb). Just open it in Google Colab (see note 1) and save to your repository (see note 3). Remember to backup your code elsewhere, since this action will overwrite your previous work.* \n5. *Exercises 1-4 are mandatory. Your work __will not be graded__ if you fail any one of them. Exercises 5-8 are optional, you can skip them if you want*\n\n## About the Dataset\n\nWe will be using 2019 flight statistics from the United States Department of Transportation’s Bureau of Transportation Statistics (available [here](https://www.transtats.bts.gov/DL_SelectFields.asp?gnoyr_VQ=FMF&QO_fu146_anzr=Nv4%20Pn44vr45) and in your repository as `data/T100_MARKET_ALL_CARRIER.zip`). 
You can load the dataset in pandas using this link: `https://github.com/hse-mlwp-2022/assignment3-template/raw/main/data/T100_MARKET_ALL_CARRIER.zip`.\n\nEach row contains information about a specific route for a given carrier in a given month (e.g., JFK → LAX on Delta Airlines in January). There are 321,409 rows and 41 columns. Note that you don't need to unzip the file to read it in with `pd.read_csv()`.\n\n#### Exercises\n\n##### 1. Read in the data and convert the column names to lowercase to make them easier to work with.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndf = pd.read_csv(\"https://github.com/hse-mlwp-2022/assignment3-template/raw/main/data/T100_MARKET_ALL_CARRIER.zip\")\ndf.columns = df.columns.str.lower()",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"##### 2. What columns are in the data? (0.5 point)",
"_____no_output_____"
]
],
[
[
"columns = df.columns\n\nprint(columns)",
"Index(['passengers', 'freight', 'mail', 'distance', 'unique_carrier',\n 'airline_id', 'unique_carrier_name', 'unique_carrier_entity', 'region',\n 'carrier', 'carrier_name', 'carrier_group', 'carrier_group_new',\n 'origin_airport_id', 'origin_airport_seq_id', 'origin_city_market_id',\n 'origin', 'origin_city_name', 'origin_state_abr', 'origin_state_fips',\n 'origin_state_nm', 'origin_country', 'origin_country_name',\n 'origin_wac', 'dest_airport_id', 'dest_airport_seq_id',\n 'dest_city_market_id', 'dest', 'dest_city_name', 'dest_state_abr',\n 'dest_state_fips', 'dest_state_nm', 'dest_country', 'dest_country_name',\n 'dest_wac', 'year', 'quarter', 'month', 'distance_group', 'class',\n 'data_source'],\n dtype='object')\n"
]
],
[
[
"##### 3. How many distinct carrier names are in the dataset? (0.5 point)",
"_____no_output_____"
]
],
[
[
"carrier_names = df['unique_carrier_name'].nunique()\n\nprint(carrier_names)",
"318\n"
]
],
[
[
"##### 4. Calculate the totals of the `freight`, `mail`, and `passengers` columns for flights from the United Kingdom to the United States. (1 point)",
"_____no_output_____"
]
],
[
[
"uk_to_us = df[(df['origin_country_name'] == 'United Kingdom') & (df['dest_country_name'] == 'United States')]\nfreight_total = uk_to_us['freight'].sum()\nmail_total = uk_to_us['mail'].sum()\npassengers_total = uk_to_us['passengers'].sum()\n\nprint(f\"freight total: {freight_total}\")\nprint(f\"mail total: {mail_total}\")\nprint(f\"passengers total: {passengers_total}\")",
"freight total: 903296879.0\nmail total: 29838395.0\npassengers total: 10685608.0\n"
]
],
[
[
"##### 5. Which 10 carriers flew the most passengers out of the United States to another country? (1.5 points)\nThe result should be a Python iterable, e.g. a list or a corresponding pandas object",
"_____no_output_____"
]
],
[
[
"top_10_by_passengers = df[(df['origin_country_name'] == 'United States') & (df['dest_country_name'] != 'United States')].groupby('unique_carrier_name')['passengers'].sum().nlargest(10).index.tolist()\n\nprint(f\"List of top 10 carriers with max number of passengers flown out of US: {top_10_by_passengers}\")",
"List of top 10 carriers with max number of passengers flown out of US: ['American Airlines Inc.', 'United Air Lines Inc.', 'Delta Air Lines Inc.', 'JetBlue Airways', 'British Airways Plc', 'Lufthansa German Airlines', 'Westjet', 'Air Canada', 'Southwest Airlines Co.', 'Virgin Atlantic Airways']\n"
]
],
[
[
"##### 6. Between which two cities were the most passengers flown? Make sure to account for both directions. (1.5 points)",
"_____no_output_____"
]
],
[
[
"def direction(row):\n origin = row['origin_city_name']\n dest = row['dest_city_name']\n if origin == dest:\n return 'none'\n elif origin > dest:\n origin, dest = dest, origin # order the pair so both directions map to the same key\n return '?'.join([origin, dest])\ndf['direction'] = df[['origin_city_name', 'dest_city_name']].apply(direction, axis=1)\npassengers_by_direction = df.groupby('direction')['passengers'].sum()\ntop_direction = passengers_by_direction.idxmax()\ntop_route_origin_city, top_route_dest_city = top_direction.split('?')\ntop_route_passengers_count = passengers_by_direction.max()\nprint(f\"top route is '{top_route_origin_city} - {top_route_dest_city}' with traffic of {top_route_passengers_count} passengers\")",
"top route is 'Chicago, IL - New York, NY' with traffic of 4131579.0 passengers\n"
]
],
[
[
"##### 7. Find the top 3 carriers for the pair of cities found in #6 and calculate the percentage of passengers each accounted for. (2 points)\nThe result should be a pandas dataframe object with two columns: \n1. carrier name (string)\n2. percentage of passengers (float in the range of 0-100)",
"_____no_output_____"
]
],
[
[
"all_passengers_series = df[df['direction']==top_direction].groupby('unique_carrier_name')['passengers'].sum()\nall_passengers = all_passengers_series.sum()\ntop_3_carriers_df = (all_passengers_series.nlargest(3) / all_passengers * 100).reset_index()\ntop_3_carriers_df",
"_____no_output_____"
]
],
[
[
"##### 8. Find the percentage of international travel per country using total passengers on class F flights. (3 points)",
"_____no_output_____"
]
],
[
[
"international_travel_per_country = ... # Place your code here instead of '...'\n\ninternational_travel_per_country",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e75e7f201bfaf9c9e609996e26801b76948fa46e | 118,603 | ipynb | Jupyter Notebook | code/chap04-Mine.ipynb | dlrow-olleh/ModSimPy | 003a2273b51c169ddf981bb686e74ca97b478a9f | [
"MIT"
] | null | null | null | code/chap04-Mine.ipynb | dlrow-olleh/ModSimPy | 003a2273b51c169ddf981bb686e74ca97b478a9f | [
"MIT"
] | null | null | null | code/chap04-Mine.ipynb | dlrow-olleh/ModSimPy | 003a2273b51c169ddf981bb686e74ca97b478a9f | [
"MIT"
] | null | null | null | 98.181291 | 22,336 | 0.851218 | [
[
[
"# Modeling and Simulation in Python\n\nChapter 4\n\nCopyright 2017 Allen Downey\n\nLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)\n",
"_____no_output_____"
]
],
[
[
"# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim library\nfrom modsim import *",
"_____no_output_____"
]
],
[
[
"## Returning values",
"_____no_output_____"
],
[
"Here's a simple function that returns a value:",
"_____no_output_____"
]
],
[
[
"def add_five(x):\n return x + 5",
"_____no_output_____"
]
],
[
[
"And here's how we call it.",
"_____no_output_____"
]
],
[
[
"y = add_five(3)",
"_____no_output_____"
]
],
[
[
"If you run a function on the last line of a cell, Jupyter displays the result:",
"_____no_output_____"
]
],
[
[
"add_five(5)",
"_____no_output_____"
]
],
[
[
"But that can be a bad habit, because usually if you call a function and don't assign the result to a variable, the result gets discarded.\n\nIn the following example, Jupyter shows the second result, but the first result just disappears.",
"_____no_output_____"
]
],
[
[
"add_five(3)\nadd_five(5)",
"_____no_output_____"
]
],
[
[
"When you call a function that returns a value, it is generally a good idea to assign the result to a variable.",
"_____no_output_____"
]
],
[
[
"y1 = add_five(3)\ny2 = add_five(5)\n\nprint(y1, y2)",
"8 10\n"
]
],
[
[
"**Exercise:** Write a function called `make_state` that creates a `State` object with the state variables `olin=10` and `wellesley=2`, and then returns the new `State` object.\n\nWrite a line of code that calls `make_state` and assigns the result to a variable named `init`.",
"_____no_output_____"
]
],
[
[
"def make_state():\n bikeshare = State(olin=10, wellesley = 2)\n return bikeshare",
"_____no_output_____"
],
[
"init = make_state()",
"_____no_output_____"
]
],
[
[
"## Running simulations",
"_____no_output_____"
],
[
"Here's the code from the previous notebook.",
"_____no_output_____"
]
],
[
[
"def step(state, p1, p2):\n \"\"\"Simulate one minute of time.\n \n state: bikeshare State object\n p1: probability of an Olin->Wellesley customer arrival\n p2: probability of a Wellesley->Olin customer arrival\n \"\"\"\n if flip(p1):\n bike_to_wellesley(state)\n \n if flip(p2):\n bike_to_olin(state)\n \ndef bike_to_wellesley(state):\n \"\"\"Move one bike from Olin to Wellesley.\n \n state: bikeshare State object\n \"\"\"\n if state.olin == 0:\n state.olin_empty += 1\n return\n state.olin -= 1\n state.wellesley += 1\n \ndef bike_to_olin(state):\n \"\"\"Move one bike from Wellesley to Olin.\n \n state: bikeshare State object\n \"\"\"\n if state.wellesley == 0:\n state.wellesley_empty += 1\n return\n state.wellesley -= 1\n state.olin += 1\n \ndef decorate_bikeshare():\n \"\"\"Add a title and label the axes.\"\"\"\n decorate(title='Olin-Wellesley Bikeshare',\n xlabel='Time step (min)', \n ylabel='Number of bikes')",
"_____no_output_____"
]
],
[
[
"Here's a modified version of `run_simulation` that creates a `State` object, runs the simulation, and returns the `State` object.",
"_____no_output_____"
]
],
[
[
"def run_simulation(p1, p2, num_steps):\n \"\"\"Simulate the given number of time steps.\n \n p1: probability of an Olin->Wellesley customer arrival\n p2: probability of a Wellesley->Olin customer arrival\n num_steps: number of time steps\n \"\"\"\n state = State(olin=10, wellesley=2, \n olin_empty=0, wellesley_empty=0)\n \n for i in range(num_steps):\n step(state, p1, p2)\n \n return state",
"_____no_output_____"
]
],
[
[
"Now `run_simulation` doesn't plot anything:",
"_____no_output_____"
]
],
[
[
"state = run_simulation(0.4, 0.2, 60)",
"_____no_output_____"
]
],
[
[
"But after the simulation, we can read the metrics from the `State` object.",
"_____no_output_____"
]
],
[
[
"state.olin_empty",
"_____no_output_____"
]
],
[
[
"Now we can run simulations with different values for the parameters. When `p1` is small, we probably don't run out of bikes at Olin.",
"_____no_output_____"
]
],
[
[
"state = run_simulation(0.2, 0.2, 60)\nstate.olin_empty",
"_____no_output_____"
]
],
[
[
"When `p1` is large, we probably do.",
"_____no_output_____"
]
],
[
[
"state = run_simulation(0.6, 0.2, 60)\nstate.olin_empty",
"_____no_output_____"
]
],
[
[
"## More for loops",
"_____no_output_____"
],
[
"`linspace` creates a NumPy array of equally spaced numbers.",
"_____no_output_____"
]
],
[
[
"p1_array = linspace(0, 1, 5)",
"_____no_output_____"
]
],
[
[
"We can use an array in a `for` loop, like this:",
"_____no_output_____"
]
],
[
[
"for p1 in p1_array:\n print(p1)",
"0.0\n0.25\n0.5\n0.75\n1.0\n"
]
],
[
[
"This will come in handy in the next section.\n\n`linspace` is defined in `modsim.py`. You can get the documentation using `help`.",
"_____no_output_____"
]
],
[
[
"help(linspace)",
"Help on function linspace in module modsim:\n\nlinspace(start, stop, num=50, **options)\n Returns an array of evenly-spaced values in the interval [start, stop].\n \n start: first value\n stop: last value\n num: number of values\n \n Also accepts the same keyword arguments as np.linspace. See\n https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html\n \n returns: array or Quantity\n\n"
]
],
[
[
"`linspace` is based on a NumPy function with the same name. [Click here](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html) to read more about how to use it.",
"_____no_output_____"
],
[
"**Exercise:** \nUse `linspace` to make an array of 10 equally spaced numbers from 1 to 10 (including both).",
"_____no_output_____"
]
],
[
[
"linspace(1,10,10)",
"_____no_output_____"
]
],
[
[
"**Exercise:** The `modsim` library provides a related function called `linrange`. You can view the documentation by running the following cell:",
"_____no_output_____"
]
],
[
[
"help(linrange)",
"Help on function linrange in module modsim:\n\nlinrange(start=0, stop=None, step=1, **options)\n Returns an array of evenly-spaced values in the interval [start, stop].\n \n This function works best if the space between start and stop\n is divisible by step; otherwise the results might be surprising.\n \n By default, the last value in the array is `stop-step`\n (at least approximately).\n If you provide the keyword argument `endpoint=True`,\n the last value in the array is `stop`.\n \n start: first value\n stop: last value\n step: space between values\n \n Also accepts the same keyword arguments as np.linspace. See\n https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html\n \n returns: array or Quantity\n\n"
]
],
[
[
"Use `linrange` to make an array of numbers from 1 to 11 with a step size of 2.",
"_____no_output_____"
]
],
[
[
"linrange(1,11,2)",
"_____no_output_____"
]
],
[
[
"## Sweeping parameters",
"_____no_output_____"
],
[
"`p1_array` contains a range of values for `p1`.",
"_____no_output_____"
]
],
[
[
"p2 = 0.2\nnum_steps = 60\np1_array = linspace(0, 1, 11)",
"_____no_output_____"
]
],
[
[
"The following loop runs a simulation for each value of `p1` in `p1_array`; after each simulation, it prints the number of unhappy customers at the Olin station:",
"_____no_output_____"
]
],
[
[
"for p1 in p1_array:\n state = run_simulation(p1, p2, num_steps)\n print(p1, state.olin_empty)",
"0.0 0\n0.1 0\n0.2 0\n0.30000000000000004 5\n0.4 2\n0.5 4\n0.6000000000000001 14\n0.7000000000000001 17\n0.8 30\n0.9 34\n1.0 41\n"
]
],
[
[
"Now we can do the same thing, but storing the results in a `SweepSeries` instead of printing them.\n\n",
"_____no_output_____"
]
],
[
[
"sweep = SweepSeries()\n\nfor p1 in p1_array:\n state = run_simulation(p1, p2, num_steps)\n sweep[p1] = state.olin_empty",
"_____no_output_____"
],
[
"sweep",
"_____no_output_____"
],
[
"plot(sweep)",
"_____no_output_____"
]
],
[
[
"And then we can plot the results.",
"_____no_output_____"
]
],
[
[
"plot(sweep, label='Olin')\n\ndecorate(title='Olin-Wellesley Bikeshare',\n xlabel='Arrival rate at Olin (p1 in customers/min)', \n ylabel='Number of unhappy customers')\n\nsavefig('figs/chap02-fig02.pdf')",
"Saving figure to file figs/chap02-fig02.pdf\n"
]
],
[
[
"## Exercises\n\n**Exercise:** Wrap this code in a function named `sweep_p1` that takes an array called `p1_array` as a parameter. It should create a new `SweepSeries`, run a simulation for each value of `p1` in `p1_array`, store the results in the `SweepSeries`, and return the `SweepSeries`.\n\nUse your function to plot the number of unhappy customers at Olin as a function of `p1`. Label the axes.",
"_____no_output_____"
]
],
[
[
"def sweep_p1(p1_array):\n results = SweepSeries()\n for p1 in p1_array:\n state = run_simulation(p1, p2, num_steps)\n results[p1] = state.olin_empty\n return results\n ",
"_____no_output_____"
],
[
"unhappy_olin= sweep_p1(p1_array)\n\nplot(unhappy_olin, label='Olin')\n\ndecorate(title='unhappy Olin',\n xlabel='Arrival rate at Olin (p1 in customers/min)', \n ylabel='Number of unhappy customers')",
"_____no_output_____"
]
],
[
[
"**Exercise:** Write a function called `sweep_p2` that runs simulations with `p1=0.5` and a range of values for `p2`. It should store the results in a `SweepSeries` and return the `SweepSeries`.\n",
"_____no_output_____"
]
],
[
[
"def sweep_p2(p2_array):\n results = SweepSeries()\n for p2 in p2_array:\n state = run_simulation(0.5, p2, num_steps)\n results[p2] = state.olin_empty\n return results",
"_____no_output_____"
],
[
"p2_array = linspace(0, 1, 11)\nunhappy_olin_again = sweep_p2(p2_array)\nplot(unhappy_olin_again, label='Olin')\n\ndecorate(title='Unhappy Olin',\n         xlabel='Arrival rate at Wellesley (p2 in customers/min)', \n         ylabel='Number of unhappy customers')",
"_____no_output_____"
]
],
[
[
"## Optional exercises\n\nThe following two exercises are a little more challenging. If you are comfortable with what you have learned so far, you should give them a try. If you feel like you have your hands full, you might want to skip them for now.\n\n**Exercise:** Because our simulations are random, the results vary from one run to another, and the results of a parameter sweep tend to be noisy. We can get a clearer picture of the relationship between a parameter and a metric by running multiple simulations with the same parameter and taking the average of the results.\n\nWrite a function called `run_multiple_simulations` that takes as parameters `p1`, `p2`, `num_steps`, and `num_runs`.\n\n`num_runs` specifies how many times it should call `run_simulation`.\n\nAfter each run, it should store the total number of unhappy customers (at Olin or Wellesley) in a `TimeSeries`. At the end, it should return the `TimeSeries`.\n\nTest your function with parameters\n\n```\np1 = 0.3\np2 = 0.3\nnum_steps = 60\nnum_runs = 10\n```\n\nDisplay the resulting `TimeSeries` and use the `mean` function provided by the `TimeSeries` object to compute the average number of unhappy customers.",
"_____no_output_____"
]
],
[
[
"\ndef run_multiple_simulations(p1, p2, num_steps, num_runs):\n    results = TimeSeries()\n    for i in range(num_runs):\n        state = run_simulation(p1, p2, num_steps)\n        results[i] = state.olin_empty + state.wellesley_empty\n    return results",
"_____no_output_____"
],
[
"olin_sad = run_multiple_simulations(0.3, 0.3, 60, 10)\nplot(olin_sad, label='Olin')\n\ndecorate(title='Unhappy Olin',\n         xlabel='Simulation run', \n         ylabel='Number of unhappy customers')",
"_____no_output_____"
]
],
[
[
"**Exercise:** Continuing the previous exercise, use `run_multiple_simulations` to run simulations with a range of values for `p1` and\n\n```\np2 = 0.3\nnum_steps = 60\nnum_runs = 20\n```\n\nStore the results in a `SweepSeries`, then plot the average number of unhappy customers as a function of `p1`. Label the axes.\n\nWhat value of `p1` minimizes the average number of unhappy customers?",
"_____no_output_____"
]
],
[
[
"# Solution goes here",
"_____no_output_____"
],
[
"# Solution goes here",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e75e93bcda574c3f7fa89f847dd8e7092b68a92e | 34,859 | ipynb | Jupyter Notebook | alg-AD.ipynb | rodolfomedeiros/heart-disease-prediction | fbac8bc662302c429ed2c1a1083fb73e31c22fcb | [
"MIT"
] | 1 | 2020-07-11T11:58:52.000Z | 2020-07-11T11:58:52.000Z | alg-AD.ipynb | rodolfomedeiros/heart-disease-prediction | fbac8bc662302c429ed2c1a1083fb73e31c22fcb | [
"MIT"
] | null | null | null | alg-AD.ipynb | rodolfomedeiros/heart-disease-prediction | fbac8bc662302c429ed2c1a1083fb73e31c22fcb | [
"MIT"
] | null | null | null | 48.617852 | 9,080 | 0.661178 | [
[
[
"import pandas as pd\nimport numpy as np\n\nfrom sklearn.model_selection import train_test_split\n\nfrom sklearn.model_selection import cross_val_score\n\nfrom sklearn.tree import DecisionTreeClassifier as ad, export_graphviz\nfrom sklearn.ensemble import BaggingClassifier\n\nimport graphviz\n\nbasePre = pd.read_csv('./bases/base_pre.csv')\nbaseScaled = pd.read_csv('./bases/base_scaled.csv')\nbasePCACompleta = pd.read_csv('./bases/base_train_completa.csv')\nbasePCAInversa = pd.read_csv('./bases/base_train_correlacao_inversa.csv')\nbasePCAProporcional = pd.read_csv('./bases/base_train_correlacao_proporcional.csv')\nbasePca70 = pd.read_csv('./bases/base_train_70.csv')\nbasePca50 = pd.read_csv('./bases/base_train_50.csv')\n\ncv=5\n\nc = 'entropy'\nmsl = 5\nmss = 5\nmd = None\nrs=0\n\nclf = ad(criterion=c,min_samples_leaf=msl,min_samples_split=mss,max_depth=md,random_state=rs)\nclf",
"_____no_output_____"
]
],
[
[
"SINGLE EXECUTION",
"_____no_output_____"
],
[
"Applying in baseScaled",
"_____no_output_____"
]
],
[
[
"Y = basePre['target']\n\nx_train, x_test, y_train, y_test = train_test_split(baseScaled, Y, test_size=0.30, random_state=0)\n\nclf = ad(criterion=c,min_samples_leaf=msl,min_samples_split=mss,max_depth=md,random_state=rs)\nclf.fit(x_train, y_train)\n",
"_____no_output_____"
],
[
"sc = cross_val_score(clf, baseScaled, Y, cv=cv)",
"_____no_output_____"
],
[
"accArray = np.array([[sc.mean(), sc.std()*2]])",
"_____no_output_____"
]
],
[
[
"Applying in basePCAInversa",
"_____no_output_____"
]
],
[
[
"x_train, x_test, y_train, y_test = train_test_split(basePCAInversa, Y, test_size=0.30, random_state=0)\n\nclf = ad(criterion=c,min_samples_leaf=msl,min_samples_split=mss,max_depth=md,random_state=rs)\nclf.fit(x_train, y_train)\n",
"_____no_output_____"
],
[
"sc = cross_val_score(clf, basePCAInversa, Y, cv=cv)",
"_____no_output_____"
],
[
"accArray = np.append(accArray, [[sc.mean(), sc.std()*2]], axis=0)",
"_____no_output_____"
]
],
[
[
"Applying in basePCAProporcional",
"_____no_output_____"
]
],
[
[
"x_train, x_test, y_train, y_test = train_test_split(basePCAProporcional, Y, test_size=0.30, random_state=0)\n\nclf = ad(criterion=c,min_samples_leaf=msl,min_samples_split=mss,max_depth=md,random_state=rs)\nclf.fit(x_train, y_train)\n",
"_____no_output_____"
],
[
"sc = cross_val_score(clf, basePCAProporcional, Y, cv=cv)",
"_____no_output_____"
],
[
"accArray = np.append(accArray, [[sc.mean(), sc.std()*2]], axis=0)",
"_____no_output_____"
]
],
[
[
"PCA com 70%",
"_____no_output_____"
]
],
[
[
"x_train, x_test, y_train, y_test = train_test_split(basePca70, Y, test_size=0.30, random_state=0)\n\nclf = ad(criterion=c,min_samples_leaf=msl,min_samples_split=mss,max_depth=md,random_state=rs)\nclf.fit(x_train, y_train)\n",
"_____no_output_____"
],
[
"sc = cross_val_score(clf, basePca70, Y, cv=cv)",
"_____no_output_____"
],
[
"accArray = np.append(accArray, [[sc.mean(), sc.std()*2]], axis=0)",
"_____no_output_____"
]
],
[
[
"PCA com 50%",
"_____no_output_____"
]
],
[
[
"x_train, x_test, y_train, y_test = train_test_split(basePca50, Y, test_size=0.30, random_state=0)\n\nclf = ad(criterion=c,min_samples_leaf=msl,min_samples_split=mss,max_depth=md,random_state=rs)\nclf.fit(x_train, y_train)\n",
"_____no_output_____"
],
[
"sc = cross_val_score(clf, basePca50, Y, cv=cv)",
"_____no_output_____"
],
[
"accArray = np.append(accArray, [[sc.mean(), sc.std()*2]], axis=0)",
"_____no_output_____"
],
[
"dfAcc = pd.DataFrame(accArray, columns=['mean', 'std'], index=None)",
"_____no_output_____"
],
[
"dfAcc = (dfAcc*100).apply(np.floor)\ndfAcc",
"_____no_output_____"
],
[
"from plt import *\n\nsingle(dfAcc, 'adSingle.png', '#2E8B57', '#F08080')",
"_____no_output_____"
]
],
[
[
"BAGGING com a melhor single",
"_____no_output_____"
]
],
[
[
"clf = ad(criterion=c,min_samples_leaf=msl,min_samples_split=mss,max_depth=md,random_state=rs)\n\nmodel = BaggingClassifier(clf, n_estimators=5, random_state=0)\n\nsc = cross_val_score(model, basePca50, Y, cv=cv)",
"_____no_output_____"
],
[
"accArray = np.array([[sc.mean(), sc.std()*2]])",
"_____no_output_____"
],
[
"clf = ad(criterion=c,min_samples_leaf=msl,min_samples_split=mss,max_depth=md,random_state=rs)\n\nmodel = BaggingClassifier(clf, n_estimators=10, random_state=0)\n\nsc = cross_val_score(model, basePca50, Y, cv=cv)",
"_____no_output_____"
],
[
"accArray = np.append(accArray, [[sc.mean(), sc.std()*2]], axis=0)",
"_____no_output_____"
],
[
"clf = ad(criterion=c,min_samples_leaf=msl,min_samples_split=mss,max_depth=md,random_state=rs)\n\nmodel = BaggingClassifier(clf, n_estimators=20, random_state=0)\n\nsc = cross_val_score(model, basePca50, Y, cv=cv)",
"_____no_output_____"
],
[
"accArray = np.append(accArray, [[sc.mean(), sc.std()*2]], axis=0)",
"_____no_output_____"
],
[
"clf = ad(criterion=c,min_samples_leaf=msl,min_samples_split=mss,max_depth=md,random_state=rs)\n\nmodel = BaggingClassifier(clf, n_estimators=30, random_state=0)\n\nsc = cross_val_score(model, basePca50, Y, cv=cv)",
"_____no_output_____"
],
[
"accArray = np.append(accArray, [[sc.mean(), sc.std()*2]], axis=0)",
"_____no_output_____"
],
[
"dfAcc = pd.DataFrame(accArray, columns=['mean', 'std'], index=None)",
"_____no_output_____"
],
[
"dfAcc = (dfAcc*100).apply(np.floor)\ndfAcc",
"_____no_output_____"
],
[
"bagging(dfAcc, 'adBagging.png', '#2E8B57', '#F08080')",
"_____no_output_____"
],
[
"def plotTree(tree, df, labelCol, plotTitle):\n    cols = df.drop('target', axis=1).columns\n    \n    graphData = export_graphviz(tree, out_file=None, feature_names=cols, class_names=True, filled=True, rounded=True)\n    \n    graph = graphviz.Source(graphData)\n    graph.render(plotTitle)\n    return graph",
"_____no_output_____"
],
[
"data = baseScaled\n\nx_train, x_test, y_train, y_test = train_test_split(data, Y, test_size=0.30, random_state=0)\n\nclf = ad(criterion=c,min_samples_leaf=msl,min_samples_split=mss,max_depth=md,random_state=rs)\nclf.fit(x_train, y_train)\n\ncols = data.columns\n \ngraphData = export_graphviz(clf, out_file=None, feature_names=cols, class_names=True, filled=True, rounded=True)\n\ngraph = graphviz.Source(graphData)\ngraph.render('Heart Disease')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75ea3c28d29ca5f28ef5ccdb9a07e22a78fdada | 10,843 | ipynb | Jupyter Notebook | main.ipynb | isrugeek/climate_extreme_values | 2bf4a2caf51e3b5dc423da23aec4c607437f850d | [
"MIT"
] | 5 | 2019-08-13T09:37:11.000Z | 2021-03-13T16:40:55.000Z | main.ipynb | isrugeek/climate_extreme_values | 2bf4a2caf51e3b5dc423da23aec4c607437f850d | [
"MIT"
] | null | null | null | main.ipynb | isrugeek/climate_extreme_values | 2bf4a2caf51e3b5dc423da23aec4c607437f850d | [
"MIT"
] | 1 | 2021-05-24T16:54:29.000Z | 2021-05-24T16:54:29.000Z | 24.531674 | 166 | 0.509914 | [
[
[
"%load_ext autoreload\n%autoreload 2\nimport os\nimport warnings\nwarnings.filterwarnings(\"ignore\")\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport torch\nfrom train_main import create_dataset\nfrom train_main import pre_process\nimport utils\nfrom utils import device\nfrom models import *\nimport constants\nfrom matplotlib.pyplot import savefig\nimport forecaster\n",
"_____no_output_____"
]
],
[
[
"## Batch Scheduler to train",
"_____no_output_____"
]
],
[
[
"## !sbatch --gres=gpu:titanxp:1 --mem=32G run.sh",
"_____no_output_____"
]
],
[
[
"## Test for latest data points",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('data/weatherstats_montreal_daily_inter.csv',index_col=0)\ndf = pre_process(df[:360], save_plots=True)",
"_____no_output_____"
],
[
"max_v = create_dataset(df[df['Year'] >= 2018], look_back=28, forecast_horizon=15, batch_size=1)",
"_____no_output_____"
],
[
"max_y_plot = []\nfor input_x, max_y, no_z in max_v:\n max_y_plot.append(max_y[0][0])",
"_____no_output_____"
],
[
"print (len(max_y_plot))\nplt.figure(figsize=(16,3.5))\nplt.title(\"Maxima Temperature per Forecasting Horizon\")\nplt.xlabel(\"Date/Time\")\nplt.ylabel(\"Values\")\nplt.plot(max_y_plot,c=\"red\",alpha=5,label='maxima temperature')\n\nplt.legend()",
"_____no_output_____"
],
[
"plt.figure(1,figsize=(7,5.5))\nplt.subplot(211)\nplt.title('Full time series data',fontsize=14)\nplt.xlabel(\"Days\",fontsize=14)\nplt.ylabel(\"Values\",fontsize=14)\nplt.plot(df[['max_temperature']][:360],'r')\n\n\nplt.subplot(212)\nplt.title('Maxima Temperature per Forecasting Horizon',fontsize=14)\n\nplt.xlabel(\"Forecasting Horizon\",fontsize=14)\nplt.ylabel(\"Values\",fontsize=14)\n\nplt.plot(max_y_plot[:360])\n\nplt.tight_layout()\nfrom matplotlib.pyplot import savefig\nsavefig('./plots/data_maxi.eps')\n\nplt.show()\n",
"_____no_output_____"
],
[
"#ax = df[['max_temperature', 'avg_hourly_temperature','avg_temperature','min_temperature']].plot(title='Full Time Series Data',fontsize=13, figsize = (7,2.5))\n",
"_____no_output_____"
]
],
[
[
"## LSTM Forecaster",
"_____no_output_____"
]
],
[
[
"ep_loss, mean, rms, lower, upper, mae, acc, test_true_y, test_pred_y = forecaster.LSTMForecaster(df)",
"_____no_output_____"
],
[
"print (rms,mae)",
"_____no_output_____"
],
[
"forecast_fi = {}\nfn = 0\nfor i in range(0, len(ep_loss), 15):\n sp = i\n ep = i + 15\n forecast_fi[fn] = ep_loss[sp:ep]\n fn+=1\n \n\nfig, axs = plt.subplots(nrows=5, ncols=6, figsize=(20,10 ))\n\nfor ax, i in zip(axs.flat, forecast_fi):\n ax.plot(forecast_fi[i])\n ax.set_title(str(i)+\"\\'th Forecasting Cycle Error in oC\")\n\nplt.tight_layout()\nplt.show()\n# forecast_fi",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(7,4))\nax.plot(np.array(test_true_y[:100]).reshape(-1),c='blue', label='GT', alpha=5)\nax.plot(mean[:100], label='Prediction', c='green', linestyle='--', alpha=5)\nax.set(title=\"Maxima Forecasting (LSTM)\", ylabel=\"Max temperature\", xlabel=\"Days\") #ylim=(12.7, 13.8))\nax.legend();\nsavefig(\"plots/lstm_pred.eps\")",
"_____no_output_____"
],
[
"pred_fo = {}\ngt_fo = {}\nfn = 0\nfor i in range(0, len(mean), 15):\n sp = i\n ep = i + 15\n pred_fo[fn] = mean[sp:ep]\n fn+=1\nfn = 0\nfor i in range(0, len(test_true_y), 15):\n sp = i\n ep = i + 15\n gt_fo[fn] = test_true_y[sp:ep]\n fn+=1\n \n\nfig, axs = plt.subplots(nrows=5, ncols=6, figsize=(20,10 ))\n\nfor ax, i in zip(axs.flat, pred_fo):\n ax.plot(pred_fo[i])\n ax.plot(gt_fo[i])\n ax.set_title(str(i)+\"\\'th Forecasting Cycle\")\n\nplt.tight_layout()\nplt.show()\nsavefig(\"plots/lstm_forecast_cycle.eps\")",
"_____no_output_____"
]
],
[
[
"## GU-LSTM Forecaster",
"_____no_output_____"
]
],
[
[
"ep_loss, mean, rms, lower, upper, mae, acc, test_true_y, test_pred_y = forecaster.LSTMGUForecaster(df)",
"_____no_output_____"
],
[
"print (rms,mae)",
"_____no_output_____"
],
[
"forecast_fi = {}\nfn = 0\nfor i in range(0, len(ep_loss), 15):\n sp = i\n ep = i + 15\n forecast_fi[fn] = ep_loss[sp:ep]\n fn+=1\n \n\nfig, axs = plt.subplots(nrows=5, ncols=6, figsize=(20,10 ))\n\nfor ax, i in zip(axs.flat, forecast_fi):\n ax.plot(forecast_fi[i])\n ax.set_title(str(i)+\"\\'th Forecasting Cycle Error in oC\")\n\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(7,4))\nax.plot(np.array(test_true_y[:100]).reshape(-1),c='blue', label='GT', alpha=5)\nax.plot(mean[:100], label='Prediction', c='green', linestyle='--', alpha=5)\nax.set(title=\"Maxima Forecasting (GU+LSTM)\", ylabel=\"Max temperature\", xlabel=\"Days\") #ylim=(12.7, 13.8))\nax.legend();\nsavefig(\"plots/gu_lstm_pred.eps\")",
"_____no_output_____"
],
[
"pred_fo = {}\ngt_fo = {}\nfn = 0\nfor i in range(0, len(mean), 15):\n sp = i\n ep = i + 15\n pred_fo[fn] = mean[sp:ep]\n fn+=1\nfn = 0\nfor i in range(0, len(test_true_y), 15):\n sp = i\n ep = i + 15\n gt_fo[fn] = test_true_y[sp:ep]\n fn+=1\n \n\nfig, axs = plt.subplots(nrows=5, ncols=6, figsize=(20,10 ))\n\nfor ax, i in zip(axs.flat, pred_fo):\n ax.plot(pred_fo[i])\n ax.plot(gt_fo[i])\n ax.set_title(str(i)+\"\\'th Forecasting Cycle\")\n\nplt.tight_layout()\nplt.show()\nsavefig(\"plots/gu_lstm_forecast_cycle.eps\")",
"_____no_output_____"
]
],
[
[
"## ENCDEC Forecaster",
"_____no_output_____"
]
],
[
[
"ep_loss, mean, rms, lower, upper, mae, acc, test_true_y, test_pred_y = forecaster.ENCDECForecaster(df)",
"_____no_output_____"
],
[
"print (mae,rms)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(7,4))\nax.plot(np.array(test_true_y[:100]).reshape(-1),c='blue', label='GT', alpha=5)\nax.plot(mean[:100], label='Prediction', c='green', linestyle='--', alpha=5)\nax.set(title=\"Maxima Forecasting (ENCDEC LSTM)\", ylabel=\"Max temperature\", xlabel=\"Days\") #ylim=(12.7, 13.8))\nax.legend();\nsavefig(\"plots/enc_dec.eps\")",
"_____no_output_____"
],
[
"pred_fo = {}\ngt_fo = {}\nfn = 0\nfor i in range(0, len(mean), 15):\n sp = i\n ep = i + 15\n pred_fo[fn] = mean[sp:ep]\n fn+=1\nfn = 0\nfor i in range(0, len(test_true_y), 15):\n sp = i\n ep = i + 15\n gt_fo[fn] = test_true_y[sp:ep]\n fn+=1\n \n\nfig, axs = plt.subplots(nrows=5, ncols=6, figsize=(20,10 ))\n\nfor ax, i in zip(axs.flat, pred_fo):\n ax.plot(pred_fo[i])\n ax.plot(gt_fo[i])\n ax.set_title(str(i)+\"\\'th Forecasting Cycle\")\n\nplt.tight_layout()\nsavefig(\"plots/enc_dec_forecast_cycle.eps\")\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e75ea7261ea816ab492acef145f954e6015ebc5f | 43,402 | ipynb | Jupyter Notebook | ddpg-pendulum/DDPG.ipynb | xlnwel/deep-reinforcement-learning | 59581b9e3402f9e4dc8763d8bc2f54df88f3d885 | [
"MIT"
] | 7 | 2018-08-18T17:13:33.000Z | 2020-05-03T00:12:41.000Z | ddpg-pendulum/DDPG.ipynb | xlnwel/deep-reinforcement-learning | 59581b9e3402f9e4dc8763d8bc2f54df88f3d885 | [
"MIT"
] | null | null | null | ddpg-pendulum/DDPG.ipynb | xlnwel/deep-reinforcement-learning | 59581b9e3402f9e4dc8763d8bc2f54df88f3d885 | [
"MIT"
] | 4 | 2018-10-29T02:43:29.000Z | 2019-02-12T05:08:53.000Z | 192.897778 | 35,132 | 0.898323 | [
[
[
"# Deep Deterministic Policy Gradients (DDPG)\n---\nIn this notebook, we train DDPG with OpenAI Gym's Pendulum-v0 environment.\n\n### 1. Import the Necessary Packages",
"_____no_output_____"
]
],
[
[
"import gym\nimport random\nimport torch\nimport numpy as np\nfrom collections import deque\nimport matplotlib.pyplot as plt\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nfrom MyAgent import Agent",
"_____no_output_____"
]
],
[
[
"### 2. Instantiate the Environment and Agent",
"_____no_output_____"
]
],
[
[
"env = gym.make('Pendulum-v0')\nenv.seed(2)\nagent = Agent(state_size=3, action_size=1)",
"\u001b[33mWARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.\u001b[0m\n\u001b[33mWARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.\u001b[0m\n"
],
[
"print(env.action_space.shape)\nprint(env.action_space.low)\nprint(env.action_space.high)",
"(1,)\n[-2.]\n[2.]\n"
]
],
[
[
"### 3. Train the Agent with DDPG",
"_____no_output_____"
]
],
[
[
"def ddpg(n_episodes=300, max_t=300, print_every=100):\n scores_deque = deque(maxlen=print_every)\n scores = []\n for i_episode in range(1, n_episodes+1):\n state = env.reset()\n # agent.reset() # old implementation\n score = 0\n for t in range(max_t):\n action = agent.act(state)\n next_state, reward, done, _ = env.step(action)\n agent.step(state, action, reward, next_state, done)\n state = next_state\n score += reward\n if done:\n break \n scores_deque.append(score)\n scores.append(score)\n print('\\rEpisode {}\\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)), end=\"\")\n torch.save(agent.actor_main.state_dict(), 'checkpoint_actor.pth')\n torch.save(agent.critic_main.state_dict(), 'checkpoint_critic.pth')\n if i_episode % print_every == 0:\n print('\\rEpisode {}\\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))\n \n return scores\n\nscores = ddpg()\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nplt.plot(np.arange(1, len(scores)+1), scores)\nplt.ylabel('Score')\nplt.xlabel('Episode #')\nplt.show()",
"Episode 100\tAverage Score: -642.86\nEpisode 200\tAverage Score: -198.31\nEpisode 300\tAverage Score: -155.96\n"
]
],
[
[
"### 4. Watch a Smart Agent!",
"_____no_output_____"
]
],
[
[
"agent.actor_main.load_state_dict(torch.load('checkpoint_actor.pth'))\nagent.critic_main.load_state_dict(torch.load('checkpoint_critic.pth'))\n\nstate = env.reset()\nfor t in range(200):\n    action = agent.act(state, add_noise=False)\n    env.render()\n    state, reward, done, _ = env.step(action)\n    if done:\n        break \n\nenv.close()",
"_____no_output_____"
]
],
[
[
"### 6. Explore\n\nIn this exercise, we have provided a sample DDPG agent and demonstrated how to use it to solve an OpenAI Gym environment. To continue your learning, you are encouraged to complete any (or all!) of the following tasks:\n- Amend the various hyperparameters and network architecture to see if you can get your agent to solve the environment faster than this benchmark implementation. Once you build intuition for the hyperparameters that work well with this environment, try solving a different OpenAI Gym task!\n- Write your own DDPG implementation. Use this code as reference only when needed -- try as much as you can to write your own algorithm from scratch.\n- You may also like to implement prioritized experience replay, to see if it speeds learning. \n- The current implementation adds Ornstein-Uhlenbeck noise to the action space. However, it has [been shown](https://blog.openai.com/better-exploration-with-parameter-noise/) that adding noise to the parameters of the neural network policy can improve performance. Make this change to the code, to verify it for yourself!\n- Write a blog post explaining the intuition behind the DDPG algorithm and demonstrating how to use it to solve an RL environment of your choosing. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
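The DDPG notebook row above mentions that the agent adds Ornstein-Uhlenbeck noise to the action space for exploration. The notebook's own `Agent` class is not shown here, so as a standalone illustration, below is a minimal sketch of an OU noise process; the class name `OUNoise` and the default `theta`/`sigma` values are conventional choices, not taken from the repository's `MyAgent` implementation.

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: temporally correlated, mean-reverting noise."""

    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, seed=0):
        self.mu = mu * np.ones(size)
        self.theta = theta
        self.sigma = sigma
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # Reset the internal state back to the long-running mean.
        self.state = self.mu.copy()

    def sample(self):
        # dx = theta * (mu - x) + sigma * N(0, 1): the state drifts toward mu
        # while Gaussian increments keep successive samples correlated.
        dx = self.theta * (self.mu - self.state) \
            + self.sigma * self.rng.standard_normal(len(self.state))
        self.state = self.state + dx
        return self.state

noise = OUNoise(size=1)
trajectory = [float(noise.sample()) for _ in range(1000)]
```

In a DDPG `act()` step, the sampled noise would typically be added to the deterministic policy output and the result clipped to the environment's action bounds (here `[-2, 2]` for Pendulum-v0).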
e75eaadbce89fe854328e25ba34ed88c47bd4266 | 41,320 | ipynb | Jupyter Notebook | Python+Basics+With+Numpy+v3.ipynb | sudharshan-chakra/DeepLearning.ai-Implementation | 98db0483431e92d6496628a360763c082831c3cd | [
"MIT"
] | null | null | null | Python+Basics+With+Numpy+v3.ipynb | sudharshan-chakra/DeepLearning.ai-Implementation | 98db0483431e92d6496628a360763c082831c3cd | [
"MIT"
] | null | null | null | Python+Basics+With+Numpy+v3.ipynb | sudharshan-chakra/DeepLearning.ai-Implementation | 98db0483431e92d6496628a360763c082831c3cd | [
"MIT"
] | null | null | null | 35.28608 | 960 | 0.511084 | [
[
[
"# Python Basics with Numpy (optional assignment)\n\nWelcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need. \n\n**Instructions:**\n- You will be using Python 3.\n- Avoid using for-loops and while-loops, unless you are explicitly told to do so.\n- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.\n- After coding your function, run the cell right below it to check if your result is correct.\n\n**After this assignment you will:**\n- Be able to use iPython Notebooks\n- Be able to use numpy functions and numpy matrix/vector operations\n- Understand the concept of \"broadcasting\"\n- Be able to vectorize code\n\nLet's get started!",
"_____no_output_____"
],
[
"## About iPython Notebooks ##\n\niPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing \"SHIFT\"+\"ENTER\" or by clicking on \"Run Cell\" (denoted by a play symbol) in the upper bar of the notebook. \n\nWe will often specify \"(≈ X lines of code)\" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.\n\n**Exercise**: Set test to `\"Hello World\"` in the cell below to print \"Hello World\" and run the two cells below.",
"_____no_output_____"
]
],
[
[
"### START CODE HERE ### (≈ 1 line of code)\ntest = \"Hello World\"\n### END CODE HERE ###",
"_____no_output_____"
],
[
"print (\"test: \" + test)",
"test: Hello World\n"
]
],
[
[
"**Expected output**:\ntest: Hello World",
"_____no_output_____"
],
[
"<font color='blue'>\n**What you need to remember**:\n- Run your cells using SHIFT+ENTER (or \"Run cell\")\n- Write code in the designated areas using Python 3 only\n- Do not modify the code outside of the designated areas",
"_____no_output_____"
],
[
"## 1 - Building basic functions with numpy ##\n\nNumpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.\n\n### 1.1 - sigmoid function, np.exp() ###\n\nBefore using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().\n\n**Exercise**: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.\n\n**Reminder**:\n$sigmoid(x) = \\frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.\n\n<img src=\"images/Sigmoid.png\" style=\"width:500px;height:228px;\">\n\nTo refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: basic_sigmoid\n\nimport math\n\ndef basic_sigmoid(x):\n \"\"\"\n Compute sigmoid of x.\n\n Arguments:\n x -- A scalar\n\n Return:\n s -- sigmoid(x)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n s = 1/(1+math.exp(-1*x))\n ### END CODE HERE ###\n \n return s",
"_____no_output_____"
],
[
"basic_sigmoid(3)",
"_____no_output_____"
]
],
[
[
"**Expected Output**: \n<table style = \"width:40%\">\n <tr>\n <td>** basic_sigmoid(3) **</td> \n <td>0.9525741268224334 </td> \n </tr>\n\n</table>",
"_____no_output_____"
],
[
"Actually, we rarely use the \"math\" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful. ",
"_____no_output_____"
]
],
[
[
"### One reason why we use \"numpy\" instead of \"math\" in Deep Learning ###\nx = [1, 2, 3]\nbasic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.",
"_____no_output_____"
]
],
[
[
"In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\n# example of np.exp\nx = np.array([1, 2, 3])\nprint(np.exp(x)) # result is (exp(1), exp(2), exp(3))",
"[ 2.71828183 7.3890561 20.08553692]\n"
]
],
[
[
"Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \\frac{1}{x}$ will output s as a vector of the same size as x.",
"_____no_output_____"
]
],
[
[
"# example of vector operation\nx = np.array([1, 2, 3])\nprint (x + 3)",
"[4 5 6]\n"
]
],
[
[
"Any time you need more info on a numpy function, we encourage you to look at [the official documentation](https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.exp.html). \n\nYou can also create a new cell in the notebook and write `np.exp?` (for example) to get quick access to the documentation.\n\n**Exercise**: Implement the sigmoid function using numpy. \n\n**Instructions**: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.\n$$ \\text{For } x \\in \\mathbb{R}^n \\text{, } sigmoid(x) = sigmoid\\begin{pmatrix}\n x_1 \\\\\n x_2 \\\\\n ... \\\\\n x_n \\\\\n\\end{pmatrix} = \\begin{pmatrix}\n \\frac{1}{1+e^{-x_1}} \\\\\n \\frac{1}{1+e^{-x_2}} \\\\\n ... \\\\\n \\frac{1}{1+e^{-x_n}} \\\\\n\\end{pmatrix}\\tag{1} $$",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: sigmoid\n\nimport numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()\n\ndef sigmoid(x):\n \"\"\"\n Compute the sigmoid of x\n\n Arguments:\n x -- A scalar or numpy array of any size\n\n Return:\n s -- sigmoid(x)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n s = 1/(1+np.exp(-1*x))\n ### END CODE HERE ###\n \n return s",
"_____no_output_____"
],
[
"x = np.array([1, 2, 3])\nsigmoid(x)",
"_____no_output_____"
]
],
[
[
"**Expected Output**: \n<table>\n <tr> \n <td> **sigmoid([1,2,3])**</td> \n <td> array([ 0.73105858, 0.88079708, 0.95257413]) </td> \n </tr>\n</table> \n",
"_____no_output_____"
],
[
"### 1.2 - Sigmoid gradient\n\nAs you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.\n\n**Exercise**: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid\\_derivative(x) = \\sigma'(x) = \\sigma(x) (1 - \\sigma(x))\\tag{2}$$\nYou often code this function in two steps:\n1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.\n2. Compute $\\sigma'(x) = s(1-s)$",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: sigmoid_derivative\n\ndef sigmoid_derivative(x):\n \"\"\"\n Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.\n You can store the output of the sigmoid function into variables and then use it to calculate the gradient.\n \n Arguments:\n x -- A scalar or numpy array\n\n Return:\n ds -- Your computed gradient.\n \"\"\"\n \n ### START CODE HERE ### (≈ 2 lines of code)\n s = 1/(1+np.exp(-1*x))\n ds = s*(1-s)\n ### END CODE HERE ###\n \n return ds",
"_____no_output_____"
],
[
"x = np.array([1, 2, 3])\nprint (\"sigmoid_derivative(x) = \" + str(sigmoid_derivative(x)))",
"sigmoid_derivative(x) = [ 0.19661193 0.10499359 0.04517666]\n"
]
],
[
[
"**Expected Output**: \n\n\n<table>\n <tr> \n <td> **sigmoid_derivative([1,2,3])**</td> \n <td> [ 0.19661193 0.10499359 0.04517666] </td> \n </tr>\n</table> \n\n",
"_____no_output_____"
],
[
"### 1.3 - Reshaping arrays ###\n\nTwo common numpy functions used in deep learning are [np.shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) and [np.reshape()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html). \n- X.shape is used to get the shape (dimension) of a matrix/vector X. \n- X.reshape(...) is used to reshape X into some other dimension. \n\nFor example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you \"unroll\", or reshape, the 3D array into a 1D vector.\n\n<img src=\"images/image2vector_kiank.png\" style=\"width:500px;height:300;\">\n\n**Exercise**: Implement `image2vector()` that takes an input of shape (length, height, 3) and returns a vector of shape (length\\*height\\*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:\n``` python\nv = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c\n```\n- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with `image.shape[0]`, etc. ",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: image2vector\ndef image2vector(image):\n \"\"\"\n Argument:\n image -- a numpy array of shape (length, height, depth)\n \n Returns:\n v -- a vector of shape (length*height*depth, 1)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n v = image.reshape(image.shape[0]*image.shape[1]*image.shape[2],1)\n ### END CODE HERE ###\n \n return v",
"_____no_output_____"
],
[
"# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values\nimage = np.array([[[ 0.67826139, 0.29380381],\n [ 0.90714982, 0.52835647],\n [ 0.4215251 , 0.45017551]],\n\n [[ 0.92814219, 0.96677647],\n [ 0.85304703, 0.52351845],\n [ 0.19981397, 0.27417313]],\n\n [[ 0.60659855, 0.00533165],\n [ 0.10820313, 0.49978937],\n [ 0.34144279, 0.94630077]]])\n\nprint (\"image2vector(image) = \" + str(image2vector(image)))",
"image2vector(image) = [[ 0.67826139]\n [ 0.29380381]\n [ 0.90714982]\n [ 0.52835647]\n [ 0.4215251 ]\n [ 0.45017551]\n [ 0.92814219]\n [ 0.96677647]\n [ 0.85304703]\n [ 0.52351845]\n [ 0.19981397]\n [ 0.27417313]\n [ 0.60659855]\n [ 0.00533165]\n [ 0.10820313]\n [ 0.49978937]\n [ 0.34144279]\n [ 0.94630077]]\n"
]
],
[
[
"**Expected Output**: \n\n\n<table style=\"width:100%\">\n <tr> \n <td> **image2vector(image)** </td> \n <td> [[ 0.67826139]\n [ 0.29380381]\n [ 0.90714982]\n [ 0.52835647]\n [ 0.4215251 ]\n [ 0.45017551]\n [ 0.92814219]\n [ 0.96677647]\n [ 0.85304703]\n [ 0.52351845]\n [ 0.19981397]\n [ 0.27417313]\n [ 0.60659855]\n [ 0.00533165]\n [ 0.10820313]\n [ 0.49978937]\n [ 0.34144279]\n [ 0.94630077]]</td> \n </tr>\n \n \n</table>",
"_____no_output_____"
],
[
"### 1.4 - Normalizing rows\n\nAnother common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \\frac{x}{\\| x\\|} $ (dividing each row vector of x by its norm).\n\nFor example, if $$x = \n\\begin{bmatrix}\n 0 & 3 & 4 \\\\\n 2 & 6 & 4 \\\\\n\\end{bmatrix}\\tag{3}$$ then $$\\| x\\| = np.linalg.norm(x, axis = 1, keepdims = True) = \\begin{bmatrix}\n 5 \\\\\n \\sqrt{56} \\\\\n\\end{bmatrix}\\tag{4} $$and $$ x\\_normalized = \\frac{x}{\\| x\\|} = \\begin{bmatrix}\n 0 & \\frac{3}{5} & \\frac{4}{5} \\\\\n \\frac{2}{\\sqrt{56}} & \\frac{6}{\\sqrt{56}} & \\frac{4}{\\sqrt{56}} \\\\\n\\end{bmatrix}\\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.\n\n\n**Exercise**: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: normalizeRows\n\ndef normalizeRows(x):\n \"\"\"\n Implement a function that normalizes each row of the matrix x (to have unit length).\n \n Argument:\n x -- A numpy matrix of shape (n, m)\n \n Returns:\n x -- The normalized (by row) numpy matrix. You are allowed to modify x.\n \"\"\"\n \n ### START CODE HERE ### (≈ 2 lines of code)\n # Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)\n x_norm = np.linalg.norm(x,ord=2,axis=1,keepdims=True)\n \n # Divide x by its norm.\n x = x / x_norm\n ### END CODE HERE ###\n\n return x",
"_____no_output_____"
],
[
"x = np.array([\n [0, 3, 4],\n [1, 6, 4]])\nprint(\"normalizeRows(x) = \" + str(normalizeRows(x)))",
"normalizeRows(x) = [[ 0. 0.6 0.8 ]\n [ 0.13736056 0.82416338 0.54944226]]\n"
]
],
[
[
"**Expected Output**: \n\n<table style=\"width:60%\">\n\n <tr> \n <td> **normalizeRows(x)** </td> \n <td> [[ 0. 0.6 0.8 ]\n [ 0.13736056 0.82416338 0.54944226]]</td> \n </tr>\n \n \n</table>",
"_____no_output_____"
],
[
"**Note**:\nIn normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now! ",
"_____no_output_____"
],
[
"### 1.5 - Broadcasting and the softmax function ####\nA very important concept to understand in numpy is \"broadcasting\". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official [broadcasting documentation](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).",
"_____no_output_____"
],
[
"**Exercise**: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.\n\n**Instructions**:\n- $ \\text{for } x \\in \\mathbb{R}^{1\\times n} \\text{, } softmax(x) = softmax(\\begin{bmatrix}\n x_1 &&\n x_2 &&\n ... &&\n x_n \n\\end{bmatrix}) = \\begin{bmatrix}\n \\frac{e^{x_1}}{\\sum_{j}e^{x_j}} &&\n \\frac{e^{x_2}}{\\sum_{j}e^{x_j}} &&\n ... &&\n \\frac{e^{x_n}}{\\sum_{j}e^{x_j}} \n\\end{bmatrix} $ \n\n- $\\text{for a matrix } x \\in \\mathbb{R}^{m \\times n} \\text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\\begin{bmatrix}\n x_{11} & x_{12} & x_{13} & \\dots & x_{1n} \\\\\n x_{21} & x_{22} & x_{23} & \\dots & x_{2n} \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n x_{m1} & x_{m2} & x_{m3} & \\dots & x_{mn}\n\\end{bmatrix} = \\begin{bmatrix}\n \\frac{e^{x_{11}}}{\\sum_{j}e^{x_{1j}}} & \\frac{e^{x_{12}}}{\\sum_{j}e^{x_{1j}}} & \\frac{e^{x_{13}}}{\\sum_{j}e^{x_{1j}}} & \\dots & \\frac{e^{x_{1n}}}{\\sum_{j}e^{x_{1j}}} \\\\\n \\frac{e^{x_{21}}}{\\sum_{j}e^{x_{2j}}} & \\frac{e^{x_{22}}}{\\sum_{j}e^{x_{2j}}} & \\frac{e^{x_{23}}}{\\sum_{j}e^{x_{2j}}} & \\dots & \\frac{e^{x_{2n}}}{\\sum_{j}e^{x_{2j}}} \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n \\frac{e^{x_{m1}}}{\\sum_{j}e^{x_{mj}}} & \\frac{e^{x_{m2}}}{\\sum_{j}e^{x_{mj}}} & \\frac{e^{x_{m3}}}{\\sum_{j}e^{x_{mj}}} & \\dots & \\frac{e^{x_{mn}}}{\\sum_{j}e^{x_{mj}}}\n\\end{bmatrix} = \\begin{pmatrix}\n softmax\\text{(first row of x)} \\\\\n softmax\\text{(second row of x)} \\\\\n ... \\\\\n softmax\\text{(last row of x)} \\\\\n\\end{pmatrix} $$",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: softmax\n\ndef softmax(x):\n \"\"\"Calculates the softmax for each row of the input x.\n\n Your code should work for a row vector and also for matrices of shape (n, m).\n\n Argument:\n x -- A numpy matrix of shape (n,m)\n\n Returns:\n s -- A numpy matrix equal to the softmax of x, of shape (n,m)\n \"\"\"\n \n ### START CODE HERE ### (≈ 3 lines of code)\n # Apply exp() element-wise to x. Use np.exp(...).\n x_exp = np.exp(x)\n\n # Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).\n x_sum = np.sum(x_exp, axis=1, keepdims=True)\n \n # Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.\n s = x_exp / x_sum\n\n ### END CODE HERE ###\n \n return s",
"_____no_output_____"
],
[
"x = np.array([\n [9, 2, 5, 0, 0],\n [7, 5, 0, 0 ,0]])\nprint(\"softmax(x) = \" + str(softmax(x)))",
"softmax(x) = [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04\n 1.21052389e-04]\n [ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04\n 8.01252314e-04]]\n"
]
],
[
[
"**Expected Output**:\n\n<table style=\"width:60%\">\n\n <tr> \n <td> **softmax(x)** </td> \n <td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04\n 1.21052389e-04]\n [ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04\n 8.01252314e-04]]</td> \n </tr>\n</table>\n",
"_____no_output_____"
],
[
"**Note**:\n- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). **x_exp/x_sum** works due to python broadcasting.\n\nCongratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.",
"_____no_output_____"
],
[
"<font color='blue'>\n**What you need to remember:**\n- np.exp(x) works for any np.array x and applies the exponential function to every coordinate\n- the sigmoid function and its gradient\n- image2vector is commonly used in deep learning\n- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs. \n- numpy has efficient built-in functions\n- broadcasting is extremely useful",
"_____no_output_____"
],
[
"## 2) Vectorization",
"_____no_output_____"
],
[
"\nIn deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.",
"_____no_output_____"
]
],
[
[
"import time\n\nx1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]\nx2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]\n\n### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###\ntic = time.process_time()\ndot = 0\nfor i in range(len(x1)):\n dot+= x1[i]*x2[i]\ntoc = time.process_time()\nprint (\"dot = \" + str(dot) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### CLASSIC OUTER PRODUCT IMPLEMENTATION ###\ntic = time.process_time()\nouter = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros\nfor i in range(len(x1)):\n for j in range(len(x2)):\n outer[i,j] = x1[i]*x2[j]\ntoc = time.process_time()\nprint (\"outer = \" + str(outer) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### CLASSIC ELEMENTWISE IMPLEMENTATION ###\ntic = time.process_time()\nmul = np.zeros(len(x1))\nfor i in range(len(x1)):\n mul[i] = x1[i]*x2[i]\ntoc = time.process_time()\nprint (\"elementwise multiplication = \" + str(mul) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###\nW = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array\ntic = time.process_time()\ngdot = np.zeros(W.shape[0])\nfor i in range(W.shape[0]):\n for j in range(len(x1)):\n gdot[i] += W[i,j]*x1[j]\ntoc = time.process_time()\nprint (\"gdot = \" + str(gdot) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")",
"dot = 278\n ----- Computation time = 0.08468999999999838ms\nouter = [[ 81. 18. 18. 81. 0. 81. 18. 45. 0. 0. 81. 18. 45. 0.\n 0.]\n [ 18. 4. 4. 18. 0. 18. 4. 10. 0. 0. 18. 4. 10. 0.\n 0.]\n [ 45. 10. 10. 45. 0. 45. 10. 25. 0. 0. 45. 10. 25. 0.\n 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0.]\n [ 63. 14. 14. 63. 0. 63. 14. 35. 0. 0. 63. 14. 35. 0.\n 0.]\n [ 45. 10. 10. 45. 0. 45. 10. 25. 0. 0. 45. 10. 25. 0.\n 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0.]\n [ 81. 18. 18. 81. 0. 81. 18. 45. 0. 0. 81. 18. 45. 0.\n 0.]\n [ 18. 4. 4. 18. 0. 18. 4. 10. 0. 0. 18. 4. 10. 0.\n 0.]\n [ 45. 10. 10. 45. 0. 45. 10. 25. 0. 0. 45. 10. 25. 0.\n 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0.]]\n ----- Computation time = 0.5422709999998165ms\nelementwise multiplication = [ 81. 4. 10. 0. 0. 63. 10. 0. 0. 0. 81. 4. 25. 0. 0.]\n ----- Computation time = 0.1170230000000494ms\ngdot = [ 31.65303553 23.21301307 26.93150442]\n ----- Computation time = 0.21097000000014354ms\n"
],
[
"x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]\nx2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]\n\n### VECTORIZED DOT PRODUCT OF VECTORS ###\ntic = time.process_time()\ndot = np.dot(x1,x2)\ntoc = time.process_time()\nprint (\"dot = \" + str(dot) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### VECTORIZED OUTER PRODUCT ###\ntic = time.process_time()\nouter = np.outer(x1,x2)\ntoc = time.process_time()\nprint (\"outer = \" + str(outer) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### VECTORIZED ELEMENTWISE MULTIPLICATION ###\ntic = time.process_time()\nmul = np.multiply(x1,x2)\ntoc = time.process_time()\nprint (\"elementwise multiplication = \" + str(mul) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### VECTORIZED GENERAL DOT PRODUCT ###\ntic = time.process_time()\ndot = np.dot(W,x1)\ntoc = time.process_time()\nprint (\"gdot = \" + str(dot) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")",
"dot = 278\n ----- Computation time = 0.09656799999979704ms\nouter = [[81 18 18 81 0 81 18 45 0 0 81 18 45 0 0]\n [18 4 4 18 0 18 4 10 0 0 18 4 10 0 0]\n [45 10 10 45 0 45 10 25 0 0 45 10 25 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n [63 14 14 63 0 63 14 35 0 0 63 14 35 0 0]\n [45 10 10 45 0 45 10 25 0 0 45 10 25 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n [81 18 18 81 0 81 18 45 0 0 81 18 45 0 0]\n [18 4 4 18 0 18 4 10 0 0 18 4 10 0 0]\n [45 10 10 45 0 45 10 25 0 0 45 10 25 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]\n ----- Computation time = 0.09004300000015064ms\nelementwise multiplication = [81 4 10 0 0 63 10 0 0 0 81 4 25 0 0]\n ----- Computation time = 0.09477399999990865ms\ngdot = [ 31.65303553 23.21301307 26.93150442]\n ----- Computation time = 1.5495900000002116ms\n"
]
],
[
[
"As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger. \n\n**Note** that `np.dot()` performs a matrix-matrix or matrix-vector multiplication. This is different from `np.multiply()` and the `*` operator (which is equivalent to `.*` in Matlab/Octave), which performs an element-wise multiplication.",
"_____no_output_____"
],
[
"### 2.1 Implement the L1 and L2 loss functions\n\n**Exercise**: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.\n\n**Reminder**:\n- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \\hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.\n- L1 loss is defined as:\n$$\\begin{align*} & L_1(\\hat{y}, y) = \\sum_{i=0}^m|y^{(i)} - \\hat{y}^{(i)}| \\end{align*}\\tag{6}$$",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: L1\n\ndef L1(yhat, y):\n \"\"\"\n Arguments:\n yhat -- vector of size m (predicted labels)\n y -- vector of size m (true labels)\n \n Returns:\n loss -- the value of the L1 loss function defined above\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n loss = np.sum(np.abs(y-yhat))\n ### END CODE HERE ###\n \n return loss",
"_____no_output_____"
],
[
"yhat = np.array([.9, 0.2, 0.1, .4, .9])\ny = np.array([1, 0, 0, 1, 1])\nprint(\"L1 = \" + str(L1(yhat,y)))",
"L1 = 1.1\n"
]
],
[
[
"**Expected Output**:\n\n<table style=\"width:20%\">\n\n <tr> \n <td> **L1** </td> \n <td> 1.1 </td> \n </tr>\n</table>\n",
"_____no_output_____"
],
[
"**Exercise**: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then `np.dot(x,x)` = $\\sum_{j=0}^n x_j^{2}$. \n\n- L2 loss is defined as $$\\begin{align*} & L_2(\\hat{y},y) = \\sum_{i=0}^m(y^{(i)} - \\hat{y}^{(i)})^2 \\end{align*}\\tag{7}$$",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: L2\n\ndef L2(yhat, y):\n \"\"\"\n Arguments:\n yhat -- vector of size m (predicted labels)\n y -- vector of size m (true labels)\n \n Returns:\n loss -- the value of the L2 loss function defined above\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n loss = np.sum(np.dot(y-yhat,y-yhat))\n ### END CODE HERE ###\n \n return loss",
"_____no_output_____"
],
[
"yhat = np.array([.9, 0.2, 0.1, .4, .9])\ny = np.array([1, 0, 0, 1, 1])\nprint(\"L2 = \" + str(L2(yhat,y)))",
"L2 = 0.43\n"
]
],
[
[
"**Expected Output**: \n<table style=\"width:20%\">\n <tr> \n <td> **L2** </td> \n <td> 0.43 </td> \n </tr>\n</table>",
"_____no_output_____"
],
[
"Congratulations on completing this assignment. We hope that this little warm-up exercise helps you in the future assignments, which will be more exciting and interesting!",
"_____no_output_____"
],
[
"<font color='blue'>\n**What to remember:**\n- Vectorization is very important in deep learning. It provides computational efficiency and clarity.\n- You have reviewed the L1 and L2 loss.\n- You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc...",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
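The softmax implemented in the numpy-basics notebook row above divides `np.exp(x)` by the row sums directly, which can overflow to `inf` for large inputs. A common numerically stable variant subtracts each row's maximum before exponentiating (softmax is invariant to adding a per-row constant); the sketch below is a standalone illustration, not part of the graded assignment code.

```python
import numpy as np

def softmax_stable(x):
    """Row-wise softmax that avoids overflow for large inputs."""
    x = np.asarray(x, dtype=float)
    # Shifting by the row max leaves the result unchanged mathematically,
    # but keeps every exponent <= 0 so np.exp never overflows.
    shifted = x - np.max(x, axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / np.sum(e, axis=-1, keepdims=True)

# The naive version would produce inf/nan here; the stable one does not.
probs = softmax_stable([[1000.0, 1001.0], [1.0, 2.0]])
```

Each row of `probs` is finite and sums to 1, even though `np.exp(1000.0)` alone overflows a float64.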
e75eb7b41dbedac3f553931e5376d0e5031bbf52 | 465,104 | ipynb | Jupyter Notebook | project/Report/Report.ipynb | SamarpanDas/Global-Terrorism-Analysis | 78eca119ffae341dd3e6a747acbe60c0ced29ca0 | [
"Apache-2.0"
] | null | null | null | project/Report/Report.ipynb | SamarpanDas/Global-Terrorism-Analysis | 78eca119ffae341dd3e6a747acbe60c0ced29ca0 | [
"Apache-2.0"
] | null | null | null | project/Report/Report.ipynb | SamarpanDas/Global-Terrorism-Analysis | 78eca119ffae341dd3e6a747acbe60c0ced29ca0 | [
"Apache-2.0"
] | null | null | null | 145.208867 | 141,162 | 0.801225 | [
[
[
"# **GLOBAL TERRORISM ANALYSIS**\n## PART 4 : REPORT\n#### Author : Samarpan Das",
"_____no_output_____"
],
[
"---\n---",
"_____no_output_____"
],
[
"## **Introduction**\n1. The **Global Terrorism Database** [GTD](https://gtd.terrorismdata.com/files/gtd-1970-2019-4/) is an open-source database including information on terrorist attacks around the world from 1970 through 2019. The GTD includes systematic data on domestic as well as international terrorist incidents that have occurred during this time period and now includes more than 180,000 attacks. The database is maintained by researchers at the **National Consortium for the Study of Terrorism and Responses to Terrorism** [START](https://www.start.umd.edu/gtd/), headquartered at the University of Maryland.\n\n\n\nThe GTD defines terrorism as-\n> \"The threatened or actual use of illegal force and violence by a non-state actor to attain political, economic, religious or social goal through fear, coercion, or intimidation.\"\n\n\n\n",
"_____no_output_____"
],
[
"2. **Characteristics of the Database**\n\n* Contains information on over 201,000 terrorist attacks\n* Currently the most comprehensive unclassified database on terrorist attacks in the world\n* Includes information on more than 88,000 bombings, 19,000 assassinations, and 11,000 kidnappings since 1970\n* Includes information on at least 45 variables for each case, with more recent incidents including information on more than 120 variables\n* More than 4,000,000 news articles and 25,000 news sources were reviewed to collect incident data from 1998 to 2017 alone\n\n\n",
"_____no_output_____"
],
[
"3. **Project Goals**\n* Read the source and do some quick research to understand more about the dataset and its topic\n* Clean the data\n* Perform some preprocessing to get the fields that need to be given the prime focus\n* Perform Exploratory Data Analysis on the dataset\n* Analyze the data more deeply and extract insights\n* Visualize the analysis on Tableau. Please find our report here.",
"_____no_output_____"
],
[
"---\n----",
"_____no_output_____"
],
[
"## 1. PREPARE and Inspection stage\nIn this step we tried to inspect the basic features of the dataset and prepared it to best fit our purpose.\n\nDetailed code and explanation for the same can be found [here](https://github.com/SamarpanDas/Global-Terrorism-Analysis/blob/main/project/1.%20Prepare/Prepare.sql)\n\nThe initial prepare phase was conducted using Google's BigQuery, as the data was too vast to be handled locally and could be efficiently handled by Google's systems.\n\nA glimpse of the code used in the preparation phase:\n\n\n```\n-- Queries the whole database in ascending order of date of occurrence\nSELECT *\nFROM `qwiklabs-gcp-01-28c376c2a71a.terrorism_dataset.terrorism_table`\nORDER BY iyear, imonth, iday\n\n\n-- Checking if duplicates are present in the database. Returns 201183\nSELECT COUNT(DISTINCT(eventid))\nFROM `qwiklabs-gcp-01-28c376c2a71a.terrorism_dataset.terrorism_table`\n\n-- Checking total number of rows. Returns 201183\nSELECT COUNT(eventid)\nFROM `qwiklabs-gcp-01-28c376c2a71a.terrorism_dataset.terrorism_table`\n\n\n-- Drawing some basic insights\n\n-- Query to count the number of times each country has been attacked, in descending order\nSELECT COUNT(eventid) AS AttackCount, country_txt AS Country\nFROM `qwiklabs-gcp-01-28c376c2a71a.terrorism_dataset.terrorism_table`\nGROUP BY country_txt\nORDER BY COUNT(eventid) DESC;\n\n\n-- Query to group together distinct attack types and each of their counts in descending order\nSELECT COUNT(eventid) AS AttackCount, attacktype1_txt AS AttackType\nFROM `qwiklabs-gcp-01-28c376c2a71a.terrorism_dataset.terrorism_table`\nGROUP BY attacktype1_txt\nORDER BY COUNT(eventid) DESC;\n```\n\nA few targeted pivot tables were also formulated during the prepare phase, and some initial graphs and charts were designed.\n\nThose charts can be found [here](https://github.com/SamarpanDas/Global-Terrorism-Analysis/tree/main/project/1.%20Prepare/Prep%20Stage%20Results)\n\n",
"_____no_output_____"
],
[
"---\n---",
"_____no_output_____"
],
[
"## 2. Data Processing\nIn this step we have looked into the features of this dataset in detail and have improved, added, removed and altered some of its features into more convenient forms.\n\nDetailed code and explanation for the same can be found [here](https://github.com/SamarpanDas/Global-Terrorism-Analysis/blob/main/project/2.%20Process/data_preprocessing.ipynb)\n\nThe data was still too huge to be inspected using spreadsheet software, and hence from here on Python has been used to manipulate and work with the data.",
"_____no_output_____"
],
[
"Importing necessary libraries",
"_____no_output_____"
]
],
[
[
"import time\nimport matplotlib.pyplot as plt \nimport matplotlib.ticker as ticker\nfrom matplotlib import animation\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns",
"_____no_output_____"
]
],
[
[
"Connecting Colab to Google Drive",
"_____no_output_____"
]
],
[
[
"from google.colab import drive \ndrive.mount('/content/drive')",
"Mounted at /content/drive\n"
]
],
[
[
"Importing data from Google Drive",
"_____no_output_____"
]
],
[
[
"# BaseForAnalysis.csv was uploaded to Google Drive beforehand\nprimary_df = pd.read_csv('/content/drive/My Drive/BaseForAnalysis.csv', sep=',', encoding=\"ISO-8859-1\")\n",
"_____no_output_____"
]
],
[
[
"Initial layout of the data",
"_____no_output_____"
]
],
[
[
"primary_df.head(10)",
"_____no_output_____"
],
[
"print ('dataframe shape: ', primary_df.shape)",
"dataframe shape: (201183, 29)\n"
]
],
[
[
"#### Changing the content and features of the data\n\nRenaming certain columns to more identifiable names",
"_____no_output_____"
]
],
[
[
"primary_df.rename(columns = \n {'iyear':'year', \n 'imonth':'month',\n 'iday':'day',\n 'country_txt' : 'country',\n 'region_txt' : 'region',\n 'crit1' : 'crit',\n 'attacktype1_txt' : 'attacktype',\n 'targtype1_txt' : 'targettype',\n 'natlty1_txt' : 'nationalityofvic',\n 'gname' : 'organisation',\n 'claimed' : 'claimedresp',\n 'weaptype1_txt' : 'weapontype',\n 'nkill' : 'nkilled',\n 'nkillter' : 'nkillonlyter',\n 'nwound' : 'nwounded',\n 'propextent_txt' : 'propdamageextent',\n 'ishostkid' : 'victimkidnapped',\n 'ransom' : 'ransomdemanded',\n }, inplace = True)",
"_____no_output_____"
],
[
"# Add column ncasualties (number of dead/injured people) by adding nkilled and nwounded\nprimary_df['ncasualties'] = primary_df['nkilled'] + primary_df['nwounded']",
"_____no_output_____"
],
[
"# Limit long strings\nprimary_df['weapontype'] = primary_df['weapontype'].replace(u'Vehicle (not to include vehicle-borne explosives, i.e., car or truck bombs)', 'Vehicle')\n\n\nprimary_df['propdamageextent'] = primary_df['propdamageextent'].replace('Minor (likely < $1 million)', 'Minor')\nprimary_df['propdamageextent'] = primary_df['propdamageextent'].replace('Major (likely > $1 million but < $1 billion)', 'Major')\nprimary_df['propdamageextent'] = primary_df['propdamageextent'].replace('Catastrophic (likely > $1 billion)', 'Catastrophic')",
"_____no_output_____"
]
],
[
[
"Glimpse of the final preprocessed data",
"_____no_output_____"
]
],
[
[
"primary_df.head(10)",
"_____no_output_____"
],
[
"'''\n# Converting the dataframe to a csv file and uploading it to google drive with the name BaseForAnalysis_Version2.csv\nprimary_df.to_csv(\"/content/drive/My Drive/BaseForAnalysis_Version2.csv\", sep = \",\")\n'''",
"_____no_output_____"
]
],
[
[
"### End of data processing\n---\n---",
"_____no_output_____"
],
[
"..",
"_____no_output_____"
],
[
"# 3. Data Analysis & Visualization\n\nHere is a glimpse of the analysis and visualization using Python. For some more advanced analysis and visualizations, refer to the Global Terrorism Analysis Workbook on my Tableau Public Profile.\n\nDetailed code and explanation of the analysis and visualization steps using Python can be found [here](https://github.com/SamarpanDas/Global-Terrorism-Analysis/blob/main/project/3.%20Analysis%20and%20Visualisations/data_analysis.ipynb)\n\nCheck out the **Tableau Public Workbook** [Global Terrorism Analysis by Samarpan](https://public.tableau.com/profile/samarpan.das#!/)\n\nNote: You won't have to create an account or sign in to view my workbook, but Tableau does sometimes have server issues, so refresh the page a few times in case it doesn't open at once.\n",
"_____no_output_____"
],
[
"Importing necessary libraries",
"_____no_output_____"
]
],
[
[
"import time\nimport matplotlib.pyplot as plt \nimport matplotlib.ticker as ticker\nfrom matplotlib import animation\nimport numpy as np\nimport pandas as pd\npd.options.mode.chained_assignment = None\nimport seaborn as sns\nimport plotly.express as px",
"_____no_output_____"
]
],
[
[
"\nConnecting Google Drive to Colab",
"_____no_output_____"
]
],
[
[
"from google.colab import drive \ndrive.mount('/content/drive')",
"Mounted at /content/drive\n"
],
[
"terror_df = pd.read_csv('/content/drive/My Drive/BaseForAnalysis_Version2.csv', sep=',', encoding=\"ISO-8859-1\")",
"_____no_output_____"
]
],
[
[
"Glimpse of the final dataset that will be used to draw the analysis",
"_____no_output_____"
]
],
[
[
"terror_df.head(10)",
"_____no_output_____"
]
],
[
[
"Columns involved",
"_____no_output_____"
]
],
[
[
"terror_df.columns",
"_____no_output_____"
]
],
[
[
"Analysis of the numerical figures in the data frame",
"_____no_output_____"
]
],
[
[
"terror_df[['nkilled', 'nkillonlyter', 'nwounded', 'propdamageextent', \n 'ncasualties']].describe().transpose()",
"_____no_output_____"
],
[
"terror_df.info(verbose = True)",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 201183 entries, 0 to 201182\nData columns (total 31 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Unnamed: 0 201183 non-null int64 \n 1 eventid 201183 non-null int64 \n 2 year 201183 non-null int64 \n 3 month 201183 non-null int64 \n 4 day 201183 non-null int64 \n 5 extended 201183 non-null int64 \n 6 country 201183 non-null object \n 7 region 201183 non-null object \n 8 city 200757 non-null object \n 9 latitude 196556 non-null float64\n 10 longitude 196555 non-null float64\n 11 vicinity 201183 non-null int64 \n 12 crit 201183 non-null int64 \n 13 multiple 201183 non-null int64 \n 14 success 201183 non-null int64 \n 15 suicide 201183 non-null int64 \n 16 attacktype 201183 non-null object \n 17 targettype 201183 non-null object \n 18 nationality 199333 non-null object \n 19 organisation 201183 non-null object \n 20 nperps 130088 non-null float64\n 21 claimedresp 135089 non-null float64\n 22 weapontype 201183 non-null object \n 23 nkilled 189233 non-null float64\n 24 nkillonlyter 133316 non-null float64\n 25 nwounded 182259 non-null float64\n 26 propdamageextent 70498 non-null object \n 27 victimkidnapped 201005 non-null float64\n 28 ransomdemanded 79561 non-null float64\n 29 nreleased 12595 non-null float64\n 30 ncasualties 181582 non-null float64\ndtypes: float64(11), int64(11), object(9)\nmemory usage: 47.6+ MB\n"
]
],
[
[
"\nAnalysis of the number of attacks per year",
"_____no_output_____"
]
],
[
[
"f = plt.figure(figsize=(20, 7))\n\nsns.set(font_scale = 1.1)\nsns.set_theme(style = \"darkgrid\")\nxaxis = sns.countplot(x = 'year', data = terror_df)\nxaxis.set_xticklabels(xaxis.get_xticklabels(), rotation=60)\nplt.ylabel('Count', fontsize=12)\nplt.xlabel('Year', fontsize=12)\nplt.title('Number of Terrorist Attack by Year', fontsize = 12)",
"_____no_output_____"
]
],
[
[
"\nNumber of Attacks per Region (The globe has been divided into 12 distinct regions as per global standards)",
"_____no_output_____"
]
],
[
[
"f = plt.figure(figsize=(16, 8))\n\nsns.set(font_scale=0.7)\nsns.countplot(y='region', data=terror_df)\nplt.ylabel('Region', fontsize=12)\nplt.xlabel('Count', fontsize=12)\nplt.title('Number of Terrorist Attack by Region', fontsize=12)",
"_____no_output_____"
]
],
[
[
"Number of Attacks per Attack Method",
"_____no_output_____"
]
],
[
[
"f = plt.figure(figsize=(20, 8))\n\nsns.set(font_scale=0.8)\nsns.countplot(x='attacktype', data=terror_df,)\nplt.xlabel('Methods of Attack', fontsize=12)\nplt.ylabel('Counts', fontsize=12)\nplt.title('Types of Terrorist Attack ', fontsize=12)",
"_____no_output_____"
]
],
[
[
"Number of Attacks per Type of Targets",
"_____no_output_____"
]
],
[
[
"f = plt.figure(figsize=(20, 8))\n\nsns.set(font_scale=0.8)\nxaxis = sns.countplot(x='targettype', data=terror_df,)\n\nxaxis.set_xticklabels(xaxis.get_xticklabels(), rotation=60)\nplt.xlabel('Target Types', fontsize=12)\nplt.ylabel('Count', fontsize=12)\nplt.title('Types of Target', fontsize=12)",
"_____no_output_____"
]
],
[
[
"\nTop 15 Countries with the most attacks by terror groups",
"_____no_output_____"
]
],
[
[
"fig= plt.figure(figsize=(16, 10))\nsns.set(font_scale=0.9)\nterror_country = sns.barplot(x=terror_df['country'].value_counts()[0:15].index, y=terror_df['country'].value_counts()[0:15], palette='RdYlGn')\nterror_country.set_xticklabels(terror_country.get_xticklabels(), rotation=70)\nterror_country.set_xlabel('Country', fontsize=12)\nterror_country.set_ylabel('Counts', fontsize=12)\nplt.title('Top 15 Countries: Most Attacks by Terrorist Groups', fontsize=12)",
"_____no_output_____"
]
],
[
[
"...",
"_____no_output_____"
],
[
"Analysis of the number of attacks in a region in a particular calendar year.\nThis helps us compare the rise/fall of attacks in a region.",
"_____no_output_____"
]
],
[
[
"region_year = pd.crosstab(terror_df.year, terror_df.region)\n\nregion_year.head(20)",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(16, 10))\n\ncolor_list_reg_yr = ['palegreen', 'lime', 'green', 'Aqua', 'skyblue', 'darkred', 'darkgray', 'tan', \n 'orangered', 'plum', 'salmon', 'mistyrose']\nregion_year.plot(figsize=(14, 10), fontsize=13, color=color_list_reg_yr)\n#region_year.plot(figsize=(14, 10), fontsize=13)\nplt.xlabel('Year', fontsize=12)\nplt.ylabel('Number of Attacks', fontsize=12)\nplt.legend(fontsize=12)\nplt.title('Number of Attacks per Region by Year', fontsize=12)",
"_____no_output_____"
]
],
[
[
"Analysis of the number of attacks of a particular type in a particular calendar year.\nThis helps us compare the rise/fall of attack types over the years.",
"_____no_output_____"
]
],
[
[
"attacktype_year = pd.crosstab(terror_df.year, terror_df.attacktype)\n\nattacktype_year.head(20)",
"_____no_output_____"
]
],
[
[
"...",
"_____no_output_____"
],
[
"## For Additional Analysis insights, refer to\n\n### **Tableau Public Workbook** on \n[Global Terrorism Analysis by Samarpan](https://public.tableau.com/profile/samarpan.das#!/)\n\nNote: You won't have to create an account or sign in to view my workbook, but Tableau does sometimes have server issues, so refresh the web page a few times in case it doesn't open at once.",
"_____no_output_____"
],
[
"### *End of Report*",
"_____no_output_____"
],
[
"---\n---",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e75ec4a90dd7990b1b0f9397fd3e2f331a96228c | 2,587 | ipynb | Jupyter Notebook | new_h1st/Model and Modeler.ipynb | TgithubJ/h1st | 18c8ab2ca5e3a047aea255c636d27fd66bb80ec5 | [
"Apache-2.0"
] | null | null | null | new_h1st/Model and Modeler.ipynb | TgithubJ/h1st | 18c8ab2ca5e3a047aea255c636d27fd66bb80ec5 | [
"Apache-2.0"
] | null | null | null | new_h1st/Model and Modeler.ipynb | TgithubJ/h1st | 18c8ab2ca5e3a047aea255c636d27fd66bb80ec5 | [
"Apache-2.0"
] | null | null | null | 20.054264 | 186 | 0.514109 | [
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"from my_ml_modeler import MyMLModeler",
"_____no_output_____"
],
[
"my_ml_modeler = MyMLModeler()",
"_____no_output_____"
],
[
"my_ml_model = my_ml_modeler.build()",
"_____no_output_____"
],
[
"my_ml_model.predict({\n 'X': pd.DataFrame(\n [[5.1, 3.5, 1.5, 0.2],\n [7.1, 3.5, 1.5, 0.6]], \n columns=['sepal_length','sepal_width','petal_length','petal_width'])\n})",
"/Users/arimo/.venv/py39/lib/python3.9/site-packages/sklearn/base.py:445: UserWarning: X does not have valid feature names, but LogisticRegression was fitted with feature names\n warnings.warn(\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e75ed06de4fbe5c2ce94e941caac3017043bf603 | 1,483 | ipynb | Jupyter Notebook | _downloads/plot_maskedstats.ipynb | scipy-lectures/scipy-lectures.github.com | 637a0d9cc2c95ed196550371e44a4cc6e150c830 | [
"CC-BY-4.0"
] | 48 | 2015-01-13T22:15:34.000Z | 2022-01-04T20:17:41.000Z | _downloads/plot_maskedstats.ipynb | scipy-lectures/scipy-lectures.github.com | 637a0d9cc2c95ed196550371e44a4cc6e150c830 | [
"CC-BY-4.0"
] | 1 | 2017-04-25T09:01:00.000Z | 2017-04-25T13:48:56.000Z | _downloads/plot_maskedstats.ipynb | scipy-lectures/scipy-lectures.github.com | 637a0d9cc2c95ed196550371e44a4cc6e150c830 | [
"CC-BY-4.0"
] | 21 | 2015-03-16T17:52:23.000Z | 2021-02-19T00:02:13.000Z | 27.462963 | 417 | 0.49292 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\nExample: Masked statistics\n==========================\n\nPlot a masked statistics\n\n\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n\ndata = np.loadtxt('../../../../data/populations.txt')\npopulations = np.ma.masked_array(data[:,1:])\nyear = data[:, 0]\n\nbad_years = (((year >= 1903) & (year <= 1910))\n | ((year >= 1917) & (year <= 1918)))\npopulations[bad_years, 0] = np.ma.masked\npopulations[bad_years, 1] = np.ma.masked\n\nplt.plot(year, populations, 'o-') \nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e75ed1e95cdbdcfbfd09da80a37a6dc2eb3c3725 | 72,072 | ipynb | Jupyter Notebook | doc/source/ray-air/examples/huggingface_text_classification.ipynb | jianoaix/ray | 1701b923bc83905f8961c06a6a173e3eba46a936 | [
"Apache-2.0"
] | null | null | null | doc/source/ray-air/examples/huggingface_text_classification.ipynb | jianoaix/ray | 1701b923bc83905f8961c06a6a173e3eba46a936 | [
"Apache-2.0"
] | null | null | null | doc/source/ray-air/examples/huggingface_text_classification.ipynb | jianoaix/ray | 1701b923bc83905f8961c06a6a173e3eba46a936 | [
"Apache-2.0"
] | null | null | null | 46.61837 | 1,165 | 0.559343 | [
[
[
"# Fine-tune a 🤗 Transformers model",
"_____no_output_____"
],
[
"This notebook is based on [an official 🤗 notebook - \"How to fine-tune a model on text classification\"](https://github.com/huggingface/notebooks/blob/6ca682955173cc9d36ffa431ddda505a048cbe80/examples/text_classification.ipynb). The main aim of this notebook is to show the process of conversion from vanilla 🤗 to [Ray AIR](https://docs.ray.io/en/latest/ray-air/getting-started.html) 🤗 without changing the training logic unless necessary.\n\nIn this notebook, we will:\n1. [Set up Ray](#setup)\n2. [Load the dataset](#load)\n3. [Preprocess the dataset](#preprocess)\n4. [Run the training with Ray AIR](#train)\n5. [Predict on test data with Ray AIR](#predict)\n6. [Optionally, share the model with the community](#share)",
"_____no_output_____"
],
[
"Uncomment and run the following line in order to install all the necessary dependencies:",
"_____no_output_____"
]
],
[
[
"#! pip install \"datasets\" \"transformers>=4.19.0\" \"torch>=1.10.0\" \"mlflow\" \"ray[air]>=1.13\"",
"_____no_output_____"
]
],
[
[
"## Set up Ray <a name=\"setup\"></a>",
"_____no_output_____"
],
[
"We will use `ray.init()` to initialize a local cluster. By default, this cluster will be comprised of only the machine you are running this notebook on. You can also run this notebook on an Anyscale cluster.\n\nThis notebook *will not* run in [Ray Client](https://docs.ray.io/en/latest/cluster/ray-client.html) mode.",
"_____no_output_____"
]
],
[
[
"from pprint import pprint\nimport ray\n\nray.init()",
"_____no_output_____"
]
],
[
[
"We can check the resources our cluster is composed of. If you are running this notebook on your local machine or Google Colab, you should see the number of CPU cores and GPUs available on the said machine.",
"_____no_output_____"
]
],
[
[
"pprint(ray.cluster_resources())",
"{'CPU': 2.0,\n 'GPU': 1.0,\n 'accelerator_type:T4': 1.0,\n 'memory': 7855477556.0,\n 'node:172.28.0.2': 1.0,\n 'object_store_memory': 3927738777.0}\n"
]
],
[
[
"In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) model to a text classification task of the [GLUE Benchmark](https://gluebenchmark.com/). We will be running the training using [Ray AIR](https://docs.ray.io/en/latest/ray-air/getting-started.html).\n\nYou can change those two variables to control whether the training (which we will get to later) uses CPUs or GPUs, and how many workers should be spawned. Each worker will claim one CPU or GPU. Make sure not to request more resources than the resources present!\n\nBy default, we will run the training with one GPU worker.",
"_____no_output_____"
]
],
[
[
"use_gpu = True # set this to False to run on CPUs\nnum_workers = 1 # set this to number of GPUs/CPUs you want to use",
"_____no_output_____"
]
],
[
[
"## Fine-tuning a model on a text classification task",
"_____no_output_____"
],
[
"The GLUE Benchmark is a group of nine classification tasks on sentences or pairs of sentences. If you would like to learn more, refer to the [original notebook](https://github.com/huggingface/notebooks/blob/6ca682955173cc9d36ffa431ddda505a048cbe80/examples/text_classification.ipynb).\n\nEach task is named by its acronym, with `mnli-mm` standing for the mismatched version of MNLI (so same training set as `mnli` but different validation and test sets):",
"_____no_output_____"
]
],
[
[
"GLUE_TASKS = [\"cola\", \"mnli\", \"mnli-mm\", \"mrpc\", \"qnli\", \"qqp\", \"rte\", \"sst2\", \"stsb\", \"wnli\"]",
"_____no_output_____"
]
],
[
[
"This notebook is built to run on any of the tasks in the list above, with any model checkpoint from the [Model Hub](https://huggingface.co/models) as long as that model has a version with a classification head. Depending on your model and the GPU you are using, you might need to adjust the batch size to avoid out-of-memory errors. Set those three parameters, then the rest of the notebook should run smoothly:",
"_____no_output_____"
]
],
[
[
"task = \"cola\"\nmodel_checkpoint = \"distilbert-base-uncased\"\nbatch_size = 16",
"_____no_output_____"
]
],
[
[
"### Loading the dataset <a name=\"load\"></a>",
"_____no_output_____"
],
[
"We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). This can be easily done with the functions `load_dataset` and `load_metric`.\n\nApart from `mnli-mm` being a special code, we can directly pass our task name to those functions.\n\nAs Ray AIR doesn't provide integrations for 🤗 Datasets yet, we will simply run the normal 🤗 Datasets code to load the dataset from the Hub.",
"_____no_output_____"
]
],
[
[
"from datasets import load_dataset\n\nactual_task = \"mnli\" if task == \"mnli-mm\" else task\ndatasets = load_dataset(\"glue\", actual_task)",
"_____no_output_____"
]
],
[
[
"The `dataset` object itself is [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasetdict), which contains one key for the training, validation and test set (with more keys for the mismatched validation and test set in the special case of `mnli`).",
"_____no_output_____"
],
[
"We will also need the metric. In order to avoid serialization errors, we will load the metric inside the training workers later. Therefore, now we will just define the function we will use.",
"_____no_output_____"
]
],
[
[
"from datasets import load_metric\n\ndef load_metric_fn():\n return load_metric('glue', actual_task)",
"_____no_output_____"
]
],
[
[
"The metric is an instance of [`datasets.Metric`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Metric).",
"_____no_output_____"
],
[
"### Preprocessing the data <a name=\"preprocess\"></a>",
"_____no_output_____"
],
[
"Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer`, which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put them in a format the model expects, as well as generate the other inputs the model requires.\n\nTo do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure:\n\n- we get a tokenizer that corresponds to the model architecture we want to use,\n- we download the vocabulary used when pretraining this specific checkpoint.",
"_____no_output_____"
]
],
[
[
"from transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)",
"_____no_output_____"
]
],
[
[
"We pass along `use_fast=True` to the call above to use one of the fast tokenizers (backed by Rust) from the 🤗 Tokenizers library. Those fast tokenizers are available for almost all models, but if you got an error with the previous call, remove that argument.",
"_____no_output_____"
],
[
"To preprocess our dataset, we will thus need the names of the columns containing the sentence(s). The following dictionary keeps track of the correspondence task to column names:",
"_____no_output_____"
]
],
[
[
"task_to_keys = {\n \"cola\": (\"sentence\", None),\n \"mnli\": (\"premise\", \"hypothesis\"),\n \"mnli-mm\": (\"premise\", \"hypothesis\"),\n \"mrpc\": (\"sentence1\", \"sentence2\"),\n \"qnli\": (\"question\", \"sentence\"),\n \"qqp\": (\"question1\", \"question2\"),\n \"rte\": (\"sentence1\", \"sentence2\"),\n \"sst2\": (\"sentence\", None),\n \"stsb\": (\"sentence1\", \"sentence2\"),\n \"wnli\": (\"sentence1\", \"sentence2\"),\n}",
"_____no_output_____"
]
],
[
[
"We can then write the function that will preprocess our samples. We just feed them to the `tokenizer` with the argument `truncation=True`. This will ensure that an input longer than what the selected model can handle will be truncated to the maximum length accepted by the model.",
"_____no_output_____"
]
],
[
[
"def preprocess_function(examples, *, tokenizer):\n sentence1_key, sentence2_key = task_to_keys[task]\n if sentence2_key is None:\n return tokenizer(examples[sentence1_key], truncation=True)\n return tokenizer(examples[sentence1_key], examples[sentence2_key], truncation=True)",
"_____no_output_____"
]
],
[
[
"To apply this function on all the sentences (or pairs of sentences) in our dataset, we just use the `map` method of our `dataset` object we created earlier. This will apply the function on all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed in one single command.",
"_____no_output_____"
]
],
[
[
"encoded_datasets = datasets.map(preprocess_function, batched=True, fn_kwargs=dict(tokenizer=tokenizer))",
"_____no_output_____"
]
],
[
[
"For Ray AIR, instead of using 🤗 Dataset objects directly, we will convert them to [Ray Datasets](https://docs.ray.io/en/latest/data/dataset.html). Both are backed by Arrow tables, so the conversion is straightforward. We will use the built-in `ray.data.from_huggingface` function.",
"_____no_output_____"
]
],
[
[
"import ray.data\n\nray_datasets = ray.data.from_huggingface(encoded_datasets)",
"_____no_output_____"
]
],
[
[
"### Fine-tuning the model with Ray AIR <a name=\"train\"></a>",
"_____no_output_____"
],
[
"Now that our data is ready, we can download the pretrained model and fine-tune it.\n\nSince all our tasks are about sentence classification, we use the `AutoModelForSequenceClassification` class.\n\nWe will not go into details about each specific component of the training (see the [original notebook](https://github.com/huggingface/notebooks/blob/6ca682955173cc9d36ffa431ddda505a048cbe80/examples/text_classification.ipynb) for that). The tokenizer is the same as we have used to encode the dataset before.\n\nThe main difference when using Ray AIR is that we need to create our 🤗 Transformers `Trainer` inside a function (`trainer_init_per_worker`) and return it. That function will be passed to the `HuggingFaceTrainer` and run on every Ray worker. The training will then proceed by means of PyTorch DDP.\n\nMake sure that you initialize the model, metric and tokenizer inside that function. Otherwise, you may run into serialization errors.\n\nFurthermore, `push_to_hub=True` is not yet supported. Ray will, however, checkpoint the model at every epoch, allowing you to push it to the Hub manually. We will do that after the training.\n\nIf you wish to use third-party logging libraries, such as MLflow or Weights & Biases, do not set them in `TrainingArguments` (they will be automatically disabled) - instead, you should pass Ray AIR callbacks to `HuggingFaceTrainer`'s `run_config`. In this example, we will use MLflow.",
"_____no_output_____"
]
],
[
[
"from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer\nimport numpy as np\nimport torch\n\nnum_labels = 3 if task.startswith(\"mnli\") else 1 if task==\"stsb\" else 2\nmetric_name = \"pearson\" if task == \"stsb\" else \"matthews_correlation\" if task == \"cola\" else \"accuracy\"\nmodel_name = model_checkpoint.split(\"/\")[-1]\nvalidation_key = \"validation_mismatched\" if task == \"mnli-mm\" else \"validation_matched\" if task == \"mnli\" else \"validation\"\nname = f\"{model_name}-finetuned-{task}\"\n\ndef trainer_init_per_worker(train_dataset, eval_dataset = None, **config):\n print(f\"Is CUDA available: {torch.cuda.is_available()}\")\n metric = load_metric_fn()\n tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)\n model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels)\n args = TrainingArguments(\n name,\n evaluation_strategy=\"epoch\",\n save_strategy=\"epoch\",\n learning_rate=2e-5,\n per_device_train_batch_size=batch_size,\n per_device_eval_batch_size=batch_size,\n num_train_epochs=5,\n weight_decay=0.01,\n push_to_hub=False,\n disable_tqdm=True, # declutter the output a little\n no_cuda=not use_gpu, # you need to explicitly set no_cuda if you want CPUs\n )\n\n def compute_metrics(eval_pred):\n predictions, labels = eval_pred\n if task != \"stsb\":\n predictions = np.argmax(predictions, axis=1)\n else:\n predictions = predictions[:, 0]\n return metric.compute(predictions=predictions, references=labels)\n\n trainer = Trainer(\n model,\n args,\n train_dataset=train_dataset,\n eval_dataset=eval_dataset,\n tokenizer=tokenizer,\n compute_metrics=compute_metrics\n )\n\n print(\"Starting training\")\n return trainer",
"_____no_output_____"
]
],
[
[
"With our `trainer_init_per_worker` complete, we can now instantiate the `HuggingFaceTrainer`. Aside from the function, we set the `scaling_config`, controlling the number of workers and resources used, and the `datasets` we will use for training and evaluation.\n\nWe specify the `MLflowLoggerCallback` inside the `run_config`.",
"_____no_output_____"
]
],
[
[
"from ray.train.huggingface import HuggingFaceTrainer\nfrom ray.air import RunConfig\nfrom ray.tune.integration.mlflow import MLflowLoggerCallback\n\ntrainer = HuggingFaceTrainer(\n trainer_init_per_worker=trainer_init_per_worker,\n scaling_config={\"num_workers\": num_workers, \"use_gpu\": use_gpu},\n datasets={\"train\": ray_datasets[\"train\"], \"evaluation\": ray_datasets[validation_key]},\n run_config=RunConfig(callbacks=[MLflowLoggerCallback(experiment_name=name)])\n)",
"_____no_output_____"
]
],
[
[
"Finally, we call the `fit` method to begin training with Ray AIR. We will save the `Result` object to a variable so we can access metrics and checkpoints.",
"_____no_output_____"
]
],
[
[
"result = trainer.fit()",
"_____no_output_____"
]
],
[
[
"You can use the returned `Result` object to access metrics and the Ray AIR `Checkpoint` associated with the last iteration.",
"_____no_output_____"
]
],
[
[
"result",
"_____no_output_____"
]
],
[
[
"### Predict on test data with Ray AIR <a name=\"predict\"></a>",
"_____no_output_____"
],
[
"You can now use the checkpoint to run prediction with `HuggingFacePredictor`, which wraps around [🤗 Pipelines](https://huggingface.co/docs/transformers/main_classes/pipelines). In order to distribute prediction, we use `BatchPredictor`. While this is not necessary for the very small example we are using (you could use `HuggingFacePredictor` directly), it will scale well to a large dataset.",
"_____no_output_____"
]
],
[
[
"from ray.train.huggingface import HuggingFacePredictor\nfrom ray.train.batch_predictor import BatchPredictor\nimport pandas as pd\n\nsentences = ['Bill whistled past the house.',\n 'The car honked its way down the road.',\n 'Bill pushed Harry off the sofa.',\n 'the kittens yawned awake and played.',\n 'I demand that the more John eats, the more he pay.']\npredictor = BatchPredictor.from_checkpoint(\n checkpoint=result.checkpoint,\n predictor_cls=HuggingFacePredictor,\n task=\"text-classification\",\n)\ndata = ray.data.from_pandas(pd.DataFrame(sentences, columns=[\"sentence\"]))\nprediction = predictor.predict(data)\nprediction = prediction.to_pandas()\nprediction",
"Map Progress (2 actors 1 pending): 0%| | 0/1 [00:12<?, ?it/s]\u001b[2m\u001b[36m(BlockWorker pid=735)\u001b[0m 2022-05-12 18:36:08.491769: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected\nMap Progress (2 actors 1 pending): 100%|██████████| 1/1 [00:16<00:00, 16.63s/it]\n"
]
],
[
[
"### Share the model <a name=\"share\"></a>",
"_____no_output_____"
],
[
"To be able to share your model with the community, there are a few more steps to follow.\n\nWe have conducted the training on the Ray cluster, but share the model from the local environment - this will allow us to easily authenticate.\n\nFirst you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/join) if you haven't already!) then execute the following cell and input your username and password:",
"_____no_output_____"
]
],
[
[
"from huggingface_hub import notebook_login\n\nnotebook_login()",
"_____no_output_____"
]
],
[
[
"Then you need to install Git-LFS. Uncomment the following instructions:",
"_____no_output_____"
]
],
[
[
"# !apt install git-lfs",
"_____no_output_____"
]
],
[
[
"Now, load the model and tokenizer locally, and recreate the 🤗 Transformers `Trainer`:",
"_____no_output_____"
]
],
[
[
"from ray.train.huggingface import load_checkpoint\n\nhf_trainer = load_checkpoint(\n checkpoint=result.checkpoint,\n model=AutoModelForSequenceClassification,\n tokenizer=AutoTokenizer\n)",
"_____no_output_____"
]
],
[
[
"You can now upload the result of the training to the Hub; just execute this instruction:",
"_____no_output_____"
]
],
[
[
"hf_trainer.push_to_hub()",
"_____no_output_____"
]
],
[
[
"You can now share this model with all your friends, family, favorite pets: they can all load it with the identifier `\"your-username/the-name-you-picked\"` so for instance:\n\n```python\nfrom transformers import AutoModelForSequenceClassification\n\nmodel = AutoModelForSequenceClassification.from_pretrained(\"sgugger/my-awesome-model\")\n```",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e75ed9230ff0b3cbfe1669e61d4718ed040f6991 | 67,608 | ipynb | Jupyter Notebook | Misc Notebooks/r8_small-Copy1.ipynb | johnowhitaker/CIRTS | c62f3d2440542e38277a4d60bfba2c0fa22931d1 | [
"MIT"
] | 8 | 2018-11-30T04:05:41.000Z | 2021-08-06T15:57:14.000Z | Misc Notebooks/r8_small-Copy1.ipynb | johnowhitaker/CIRTS | c62f3d2440542e38277a4d60bfba2c0fa22931d1 | [
"MIT"
] | null | null | null | Misc Notebooks/r8_small-Copy1.ipynb | johnowhitaker/CIRTS | c62f3d2440542e38277a4d60bfba2c0fa22931d1 | [
"MIT"
] | 1 | 2019-08-16T18:30:56.000Z | 2019-08-16T18:30:56.000Z | 58.032618 | 32,408 | 0.710863 | [
[
[
"import serial, time\nser = serial.Serial('/dev/ttyACM0', 115200)",
"_____no_output_____"
],
[
"def read_all():\n r = ser.read_all() # clear buffer\n ser.write(b'a')\n while ser.in_waiting < 1:\n pass # wait for a response\n time.sleep(0.05)\n r = ser.read_all()\n t = str(r).split('A')[-1].strip()\n r = [[int(s.strip('b\\'')) for s in str(l).split(',')[:8]] for l in str(r).split('A')[:-1]]\n return(r, t)",
"_____no_output_____"
],
[
"import numpy as np\nr, t = read_all()\nnp.asarray(r)",
"_____no_output_____"
],
[
"from IPython.display import clear_output\nfor i in range(100):\n r, t = read_all()\n print(np.asarray(r))\n time.sleep(0.1)\n clear_output(wait=True)",
"[[729 726 683 489 724 724 719 728]\n [709 731 498 720 717 712 707 723]\n [728 721 722 679 744 738 712 730]\n [714 739 721 726 538 710 712 690]\n [740 731 725 738 737 682 621 735]\n [589 687 714 730 727 743 718 724]\n [674 473 716 721 740 715 685 721]\n [712 721 722 733 719 734 720 596]]\n"
],
[
"def read_av():\n read = np.asarray(read_all()[0]).flatten()\n for i in range(9): # 9 more readings, for 10 in total\n read = read+(np.asarray(read_all()[0]).flatten())\n return read/10",
"_____no_output_____"
],
[
"base = read_all()[0]",
"_____no_output_____"
],
[
"from matplotlib import pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"read = read_all()[0]\nplt.imshow(np.asarray(base)-np.asarray(read))",
"_____no_output_____"
],
[
"base",
"_____no_output_____"
]
],
[
[
"# Image recon",
"_____no_output_____"
]
],
[
[
"# Import required libraries\nfrom image_util import *\nimport skimage.filters\nfrom matplotlib import pyplot as plt\nimport cairocffi as cairo\nimport math, random\nimport numpy as np\nimport pandas as pd\nfrom IPython.display import Image\nfrom scipy.interpolate import interp1d\nimport astra\n\n%matplotlib inline\n\ndef r8_to_sino(readings):\n sino = []\n for e in range(8):\n start = e*8 + (e+2)%8\n end = e*8 + (e+6)%8\n if end-start == 4:\n sino.append(readings[start : end])\n else:\n r = readings[start : (e+1)*8]\n for p in readings[e*8 : end]:\n r.append(p)\n sino.append(r)\n return np.asarray(sino)\n",
"_____no_output_____"
],
[
"nviews = 8\nndetectors = 4\nnvdetectors = 8\n\nIMSIZE = 50\nR = IMSIZE/2\nD = IMSIZE/2\n\n# Transforming from a round fan-beam to a fan-flat projection (See diagram)\nbeta = np.linspace(math.pi/8, 7*math.pi/8, ndetectors)\nalpha = np.asarray([R*math.sin(b-math.pi/2)/(R**2 + D**2)**0.5 for b in beta])\ntau = np.asarray([(R+D)*math.tan(a) for a in alpha])\n\ntau_new = np.linspace(-(max(tau)/2), max(tau)/2, nvdetectors)\n\nvol_geom = astra.create_vol_geom(IMSIZE, IMSIZE)\nangles = np.linspace(0,2*math.pi,nviews);\nd_size = (tau[-1]-tau[0])/nvdetectors\nproj_geom= astra.create_proj_geom('fanflat', d_size, nvdetectors, angles, D, R);\nproj_id = astra.create_projector('line_fanflat', proj_geom, vol_geom)",
"_____no_output_____"
],
[
"base = read_av()",
"_____no_output_____"
],
[
"np.asarray(base).reshape(8,8)",
"_____no_output_____"
],
[
"%%time\nfor i in range(1):\n print(i)\n r2 = read_av()\n readings = (np.asarray(base)-np.asarray(r2))# - base\n readings = r8_to_sino(readings.tolist()) # Get important ones and reorder\n \n readings2 = []\n for r in readings:\n f = interp1d(tau, r, kind='cubic') # Can change to linear\n readings2.append(f(tau_new))\n \n \n sinogram_id = astra.data2d.create('-sino', proj_geom, np.asarray(readings2))\n \n # Plotting sinogram - new (transformed) set of readings\n plt.figure(num=None, figsize=(16, 10), dpi=80, facecolor='w', edgecolor='k')\n ax1 = plt.subplot(1, 3, 1)\n ax1.imshow(readings2) #<< Set title\n\n # Doing the reconstruction, in this case with FBP\n\n rec_id = astra.data2d.create('-vol', vol_geom)\n\n cfg = astra.astra_dict('FBP')\n cfg['ReconstructionDataId'] = rec_id\n cfg['ProjectionDataId'] = sinogram_id\n cfg['ProjectorId'] = proj_id\n\n # Create the algorithm object from the configuration structure\n alg_id = astra.algorithm.create(cfg)\n\n astra.algorithm.run(alg_id, 1)\n\n # Get the result\n rec = astra.data2d.get(rec_id)\n ax2 = plt.subplot(1, 3, 2)\n ax2.imshow(rec)\n norm_rec = rec/(np.amax(np.abs(rec)))\n blurred = skimage.filters.gaussian(norm_rec, 3)\n ax3 = plt.subplot(1, 3, 3)\n ax3.imshow(blurred)\n \n plt.savefig('r8s'+str(i) + '.png')\n print(max(np.asarray(readings2).flatten()))\n",
"0\n17.877346222440426\nCPU times: user 582 ms, sys: 310 ms, total: 891 ms\nWall time: 972 ms\n"
],
[
"# Clean up.\nastra.algorithm.delete(alg_id)\nastra.data2d.delete(rec_id)\nastra.data2d.delete(sinogram_id)\nastra.projector.delete(proj_id)",
"_____no_output_____"
],
[
"np.linspace(math.pi/8, 7*math.pi/8, ndetectors)",
"_____no_output_____"
],
[
"np.linspace(0, math.pi, ndetectors)",
"_____no_output_____"
],
[
"r = []\ny = []\ny2 = []\nfor i in range(50):\n r.append(np.asarray(read_all()[0]).flatten())\n y.append(0)\n y2.append(0)",
"_____no_output_____"
],
[
"for i in range(50):\n r.append(np.asarray(read_all()[0]).flatten())\n y.append(2)\n y2.append(2)",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.ensemble import RandomForestClassifier\n\nX_train, X_test, y_train, y_test = train_test_split(r, y)\n\nregr = RandomForestClassifier(max_depth=5, random_state=0)\nregr.fit(X_train, y_train)\nregr.score(X_test, y_test)",
"_____no_output_____"
],
[
"regr.predict([np.asarray(read_all()[0]).flatten()])",
"_____no_output_____"
],
[
"df = pd.read_csv('r8_small_rotation.csv')\nr = df[[str(i) for i in range(64)]]\ny = df['Y']\n\nfrom sklearn.neural_network import MLPClassifier, MLPRegressor\nfrom sklearn.preprocessing import StandardScaler\nX_train, X_test, y_train, y_test = train_test_split(r, y)\n\nscaler = StandardScaler()\nscaler.fit(X_train)\nX_train = scaler.transform(X_train)\nX_test = scaler.transform(X_test)\n\nmlpc = MLPClassifier(hidden_layer_sizes=(20, 20, 20), max_iter=400)\nmlpc.fit(X_train, y_train)\nprint(mlpc.score(X_test, y_test))",
"0.9557522123893806\n"
],
[
"from IPython.display import clear_output\nav = 0\nwhile True:\n read = [np.asarray(read_all()[0]).flatten()]\n read = scaler.transform(read)\n print(mlpc.predict(read))\n time.sleep(0.1)\n clear_output(wait=True)",
"[1]\n"
],
[
"ser.read_all()",
"_____no_output_____"
],
[
"import pandas as pd\ndf1 = pd.DataFrame(r)\ndf1.head()",
"_____no_output_____"
],
[
"df1['Y'] = y\ndf1.head()",
"_____no_output_____"
],
[
"df1.to_csv('r8_small_rotation.csv', index=False)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75ed928e3f3633426ca730db195b4f28b257ff3 | 121,655 | ipynb | Jupyter Notebook | Tensorflow.ipynb | iamatul1214/CatDealer | 0360c25481a986caba1d5977261611093a83f0e4 | [
"MIT"
] | null | null | null | Tensorflow.ipynb | iamatul1214/CatDealer | 0360c25481a986caba1d5977261611093a83f0e4 | [
"MIT"
] | null | null | null | Tensorflow.ipynb | iamatul1214/CatDealer | 0360c25481a986caba1d5977261611093a83f0e4 | [
"MIT"
] | null | null | null | 41.127451 | 12,650 | 0.614023 | [
[
[
"<a href=\"https://colab.research.google.com/github/iamatul1214/CatDealer/blob/main/Tensorflow.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"import tensorflow as tf",
"_____no_output_____"
],
[
"tf.__version__",
"_____no_output_____"
]
],
[
[
"## Tensor constants\n\nA tensor is a multi-dimensional array.",
"_____no_output_____"
]
],
[
[
"const = tf.constant(43)",
"_____no_output_____"
],
[
"const",
"_____no_output_____"
],
[
"const.numpy()",
"_____no_output_____"
]
],
[
[
"JAX: built on NumPy, but with CUDA support",
"_____no_output_____"
]
],
[
[
"# for specific datatype\n\nconst = tf.constant(43, dtype=tf.float32)\n\nconst",
"_____no_output_____"
],
[
"const.numpy()",
"_____no_output_____"
],
[
"const_mat = tf.constant([[1,3],[4,5]], dtype=tf.float32)\nprint(const_mat)\nconst_mat.numpy()",
"tf.Tensor(\n[[1. 3.]\n [4. 5.]], shape=(2, 2), dtype=float32)\n"
],
[
"const_mat.shape",
"_____no_output_____"
],
[
"const_mat.dtype",
"_____no_output_____"
],
[
"## We can't perform assignment like below\nconst_mat[0][0] = 32",
"_____no_output_____"
]
],
[
[
"## Commonly used method",
"_____no_output_____"
]
],
[
[
"tf.ones(shape=(2,3))",
"_____no_output_____"
],
[
"-1*tf.ones(shape=(2,3))",
"_____no_output_____"
],
[
"6*tf.ones(shape=(2,3))",
"_____no_output_____"
],
[
"tf.zeros(shape=(2,4))",
"_____no_output_____"
]
],
[
[
"## add operations",
"_____no_output_____"
]
],
[
[
"const1 = tf.constant([[1,2,3],[4,5,6]])\nconst2 = tf.constant([[2,5,3],[3,5,8]])",
"_____no_output_____"
],
[
"const1 + const2",
"_____no_output_____"
],
[
"## Adding from tensorflow object\ntf.add(const1, const2)",
"_____no_output_____"
]
],
[
[
"## Random const",
"_____no_output_____"
]
],
[
[
"tf.random.normal(shape=(2,2), mean=0, stddev=1.0)",
"_____no_output_____"
],
[
"tf.random.uniform(shape=(2,2), minval=0, maxval=20)",
"_____no_output_____"
]
],
[
[
"## Variables",
"_____no_output_____"
]
],
[
[
"var1 = tf.Variable([[1,2,3],[4,5,6]])",
"_____no_output_____"
],
[
"var2 = tf.Variable(43)",
"_____no_output_____"
],
[
"var2.assign(32)",
"_____no_output_____"
],
[
"var2",
"_____no_output_____"
],
[
"type(var2)",
"_____no_output_____"
],
[
"var2 = 33",
"_____no_output_____"
],
[
"type(var2)",
"_____no_output_____"
],
[
"var1.assign([[22,2,3],[4,5,6]])",
"_____no_output_____"
],
[
"var1[1,1].assign(34)\n\nvar1",
"_____no_output_____"
],
[
"var1[1][1].assign(34)\n",
"_____no_output_____"
],
[
"var1.assign([[22,2,3],[4,5,6],[3,4,5]])\n",
"_____no_output_____"
]
],
[
[
"## reshaping operation",
"_____no_output_____"
]
],
[
[
"tensor = tf.Variable([[22,2,3],[4,5,6]])\n\ntensor.shape",
"_____no_output_____"
],
[
"tf.reshape(tensor, [3,2])",
"_____no_output_____"
],
[
"tf.reshape(tensor, [1,6])\n",
"_____no_output_____"
],
[
"tf.reshape(tensor, [6,1])\n",
"_____no_output_____"
]
],
[
[
"## other mathematical ops",
"_____no_output_____"
]
],
[
[
"var1",
"_____no_output_____"
],
[
"tf.square(var1)",
"_____no_output_____"
]
],
[
[
"## [Detailed link to the demo of all the functions in TensorFlow](https://colab.research.google.com/drive/12sBvm2ON-gAWNisJD1dpoNTdFri3dEjE?usp=sharing)",
"_____no_output_____"
],
[
"## Broadcasting in TF",
"_____no_output_____"
]
],
[
[
"tensor",
"_____no_output_____"
],
[
"scaler = 4\n\nscaler * tensor",
"_____no_output_____"
],
[
"scaler + tensor",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"## Matrix Multiplication",
"_____no_output_____"
]
],
[
[
"mat_u = tf.constant([[6,7,7]])\nmat_v = tf.constant([[3,4,3]])",
"_____no_output_____"
],
[
"mat_u.shape",
"_____no_output_____"
],
[
"mat_v.shape",
"_____no_output_____"
]
],
[
[
"Rule: the number of columns of matrix A must equal the number of rows of matrix B when computing the product AB.",
"_____no_output_____"
]
],
[
[
"tf.matmul(mat_u, mat_v)",
"_____no_output_____"
],
[
"tf.matmul(mat_u, tf.transpose(mat_v))",
"_____no_output_____"
],
[
"## alternative to above\n\nmat_u @ tf.transpose(mat_v)",
"_____no_output_____"
],
[
"tf.matmul(tf.transpose(mat_u), mat_v)",
"_____no_output_____"
],
[
"mat_u * mat_v # element wise multiplication\n# shape of both should be same",
"_____no_output_____"
]
],
[
[
"## Casting method in tf to change the data type ",
"_____no_output_____"
]
],
[
[
"mat_u.dtype",
"_____no_output_____"
],
[
"tf.cast(mat_u, dtype=tf.int16) ## can be used when quantizing a model to save memory\n",
"_____no_output_____"
],
[
"34.34343434 ## <<< This will be more precise\n34.34",
"_____no_output_____"
]
],
[
[
"## Ragged tensors\nNested variable-length arrays",
"_____no_output_____"
]
],
[
[
"ragged = tf.ragged.constant([[1,2,4,5,6], [1], [135,1]])",
"_____no_output_____"
],
[
"ragged.shape",
"_____no_output_____"
],
[
"ragged[0].shape",
"_____no_output_____"
],
[
"ragged[1].shape\n",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"## Checkpointing to restore matrix values",
"_____no_output_____"
]
],
[
[
"var1 = tf.Variable(5*tf.ones((5,5)))\n\nvar1",
"_____no_output_____"
],
[
"ckpt = tf.train.Checkpoint(var=var1)\nsavepath = ckpt.save(\"./vars.ckpt\")",
"_____no_output_____"
],
[
"var1.assign(tf.zeros((5,5)))",
"_____no_output_____"
],
[
"var1",
"_____no_output_____"
],
[
"ckpt.restore(savepath)",
"_____no_output_____"
],
[
"var1",
"_____no_output_____"
]
],
[
[
"## tf.function",
"_____no_output_____"
],
[
"$z = x^3 * 6 + y^3$",
"_____no_output_____"
]
],
[
[
"def f1(x, y):\n input_var = tf.multiply(x ** 3, 6) + y ** 3\n return tf.reduce_mean(input_tensor=input_var)",
"_____no_output_____"
],
[
"func = tf.function(f1)",
"_____no_output_____"
],
[
"x = tf.constant([3.,-4.])\ny = tf.constant([1.,4.])",
"_____no_output_____"
],
[
"f1(x,y) ## using a simple python function",
"_____no_output_____"
],
[
"func(x, y)",
"_____no_output_____"
],
[
"@tf.function ## tf decorator function\ndef f2(x, y):\n input_var = tf.multiply(x ** 3, 6) + y ** 3\n return tf.reduce_mean(input_tensor=input_var)",
"_____no_output_____"
],
[
"f2(x,y)",
"_____no_output_____"
]
],
[
[
"## Example of decorator",
"_____no_output_____"
]
],
[
[
"def print_me():\n print(\"Hi FSDS\")",
"_____no_output_____"
],
[
"print_me()",
"Hi FSDS\n"
],
[
"print(\"**\"*20)\nprint_me()\nprint(\"**\"*20)",
"****************************************\nHi FSDS\n****************************************\n"
],
[
"def decorate_it(input_func):\n def decorated_func():\n print(\"**\"*20)\n input_func()\n print(\"**\"*20)\n\n return decorated_func",
"_____no_output_____"
],
[
"decorated_func = decorate_it(print_me)\n\ndecorated_func()",
"****************************************\nHi FSDS\n****************************************\n"
],
[
"@decorate_it\ndef print_me2():\n print(\"Hi FSDS\")",
"_____no_output_____"
],
[
"print_me2()",
"****************************************\nHi FSDS\n****************************************\n"
],
[
"@decorate_it\ndef print_my_name():\n print(\"Sunny\")",
"_____no_output_____"
],
[
"print_my_name()",
"****************************************\nSunny\n****************************************\n"
]
],
[
[
"# Calculation of Gradients in tf",
"_____no_output_____"
]
],
[
[
"x = tf.random.normal(shape=(2,2)) ## this creates a const by default\ny = tf.random.normal(shape=(2,2))",
"_____no_output_____"
]
],
[
[
"$f(x,y) = \\sqrt{(x^2 + y^2)}$\n\n$\\nabla f(x,y) = \\frac{\\partial f}{\\partial x} \\hat{\\imath} + \\frac{\\partial f}{\\partial y} \\hat{\\jmath}$",
"_____no_output_____"
]
],
[
[
"with tf.GradientTape() as tape:\n tape.watch(x) ### <<< I want to calculate grad wrt x\n f = tf.sqrt(tf.square(x) + tf.square(y))\n\n df_dx = tape.gradient(f, x)\n\n print(df_dx)",
"tf.Tensor(\n[[-0.9929363 0.86325336]\n [ 0.2952111 -0.9611857 ]], shape=(2, 2), dtype=float32)\n"
],
[
"with tf.GradientTape() as tape:\n tape.watch(y) ### <<< I want to calculate grad wrt y\n f = tf.sqrt(tf.square(x) + tf.square(y))\n\n df_dy = tape.gradient(f, y)\n\n print(df_dy)",
"tf.Tensor(\n[[ 0.1186479 -0.5047708 ]\n [-0.95543206 0.27590245]], shape=(2, 2), dtype=float32)\n"
],
[
"## tape.watch is only needed when x and y are not tf.Variables\n\nwith tf.GradientTape() as tape:\n tape.watch(y) ### <<< I want to calculate grad wrt y\n tape.watch(x) ### <<< I want to calculate grad wrt x\n f = tf.sqrt(tf.square(x) + tf.square(y))\n\n df_dx, df_dy = tape.gradient(f, [x, y]) ## partial diff wrt x and y\n\n print(df_dx)\n print(df_dy)",
"tf.Tensor(\n[[-0.9929363 0.86325336]\n [ 0.2952111 -0.9611857 ]], shape=(2, 2), dtype=float32)\ntf.Tensor(\n[[ 0.1186479 -0.5047708 ]\n [-0.95543206 0.27590245]], shape=(2, 2), dtype=float32)\n"
],
[
"with tf.GradientTape() as tape:\n f = tf.sqrt(tf.square(x) + tf.square(y))\n\n df_dx, df_dy = tape.gradient(f, [x, y]) ## partial diff wrt x and y\n\n print(df_dx)\n print(df_dy)",
"None\nNone\n"
],
[
"x = tf.Variable(tf.random.normal(shape=(2,2)))\ny = tf.Variable(tf.random.normal(shape=(2,2)))",
"_____no_output_____"
],
[
"with tf.GradientTape() as tape:\n f = tf.sqrt(tf.square(x) + tf.square(y))\n\n df_dx, df_dy = tape.gradient(f, [x, y]) ## partial diff wrt x and y\n\n print(df_dx)\n print(df_dy)",
"tf.Tensor(\n[[-0.251249 0.99643266]\n [-0.87177974 -0.33253065]], shape=(2, 2), dtype=float32)\ntf.Tensor(\n[[-0.9679225 0.08439222]\n [ 0.4898982 0.94309247]], shape=(2, 2), dtype=float32)\n"
],
[
"x = tf.Variable(3.)\ny = tf.Variable(2.)",
"_____no_output_____"
],
[
"with tf.GradientTape() as tape:\n f = tf.sqrt(tf.square(x) + tf.square(y))\n\n df_dx, df_dy = tape.gradient(f, [x, y]) ## partial diff wrt x and y\n\n print(df_dx)\n print(df_dy)",
"tf.Tensor(0.8320503, shape=(), dtype=float32)\ntf.Tensor(0.5547002, shape=(), dtype=float32)\n"
],
[
"x/tf.sqrt(tf.square(x) + tf.square(y))",
"_____no_output_____"
],
[
"y/tf.sqrt(tf.square(x) + tf.square(y))",
"_____no_output_____"
]
],
[
[
"## Simple linear regression example",
"_____no_output_____"
],
[
"$f(x) = W.x + b$",
"_____no_output_____"
]
],
[
[
"TRUE_W = 3.0\nTRUE_B = 2.0\n\nNUM_EXAMPLES = 1000\n\nx = tf.random.normal(shape=[NUM_EXAMPLES])",
"_____no_output_____"
],
[
"noise = tf.random.normal(shape=[NUM_EXAMPLES])",
"_____no_output_____"
],
[
"y = x * TRUE_W + TRUE_B + noise",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\nplt.scatter(x, y, c=\"b\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Lets define a model",
"_____no_output_____"
]
],
[
[
"class MyModel(tf.Module):\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n\n # initial weights\n self.w = tf.Variable(5.0)\n self.b = tf.Variable(0.0)\n\n def __call__(self, x):\n return self.w*x + self.b",
"_____no_output_____"
],
[
"class Test:\n def __init__(self, x):\n self.x = x\n\n def __call__(self):\n return self.x ** 3",
"_____no_output_____"
],
[
"obj = Test(2)\nobj",
"_____no_output_____"
],
[
"obj()",
"_____no_output_____"
],
[
"model = MyModel()",
"_____no_output_____"
],
[
"model(3)",
"_____no_output_____"
],
[
"model.w",
"_____no_output_____"
],
[
"model.b",
"_____no_output_____"
],
[
"model.variables",
"_____no_output_____"
]
],
[
[
"### Define loss function",
"_____no_output_____"
]
],
[
[
"def MSE_loss(target_y, predicted_y):\n error = target_y - predicted_y\n squared_error = tf.square(error)\n mse = tf.reduce_mean(squared_error)\n return mse",
"_____no_output_____"
],
[
"plt.scatter(x, y, c=\"b\")\n\npred_y = model(x) ## without training\nplt.scatter(x, pred_y, c=\"r\") \n\nplt.show()",
"_____no_output_____"
],
[
"current_loss = MSE_loss(y, model(x))\ncurrent_loss.numpy()",
"_____no_output_____"
]
],
[
[
"### training function def",
"_____no_output_____"
]
],
[
[
"def train(model, x, y, learning_rate):\n\n with tf.GradientTape() as tape:\n current_loss = MSE_loss(y, model(x))\n\n dc_dw, dc_db = tape.gradient(current_loss, [model.w, model.b])\n\n model.w.assign_sub(learning_rate * dc_dw)\n model.b.assign_sub(learning_rate * dc_db)",
"_____no_output_____"
],
[
"model = MyModel()\n\nWs, bs = [], []\n\nepochs = 10*2\n\nlearning_rate = 0.1\n\nw = model.w.numpy()\nb = model.b.numpy()\n\ninit_loss = MSE_loss(y, model(x)).numpy()\n\nprint(f\"Initial W: {w}, initial bias: {b}, initial_loss: {init_loss}\")",
"Initial W: 5.0, initial bias: 0.0, initial_loss: 9.671940803527832\n"
],
[
"for epoch in range(epochs):\n train(model, x, y, learning_rate)\n\n Ws.append(model.w.numpy())\n bs.append(model.b.numpy())\n\n current_loss = MSE_loss(y, model(x))\n\n print(f\"For epoch: {epoch}, W: {Ws[-1]}, b: {bs[-1]}, current_loss: {current_loss}\")",
"For epoch: 0, W: 4.570666790008545, b: 0.4271250367164612, current_loss: 6.390035629272461\nFor epoch: 1, W: 4.231834411621094, b: 0.7646052837371826, current_loss: 4.343555927276611\nFor epoch: 2, W: 3.964430093765259, b: 1.0312591791152954, current_loss: 3.067443609237671\nFor epoch: 3, W: 3.75339937210083, b: 1.2419540882110596, current_loss: 2.2717032432556152\nFor epoch: 4, W: 3.586859941482544, b: 1.4084358215332031, current_loss: 1.7755054235458374\nFor epoch: 5, W: 3.4554340839385986, b: 1.5399843454360962, current_loss: 1.466092824935913\nFor epoch: 6, W: 3.351719856262207, b: 1.6439313888549805, current_loss: 1.273152470588684\nFor epoch: 7, W: 3.2698755264282227, b: 1.7260695695877075, current_loss: 1.1528406143188477\nFor epoch: 8, W: 3.2052905559539795, b: 1.7909756898880005, current_loss: 1.077817678451538\nFor epoch: 9, W: 3.1543262004852295, b: 1.8422658443450928, current_loss: 1.0310354232788086\nFor epoch: 10, W: 3.1141107082366943, b: 1.8827970027923584, current_loss: 1.0018631219863892\nFor epoch: 11, W: 3.0823774337768555, b: 1.91482675075531, current_loss: 0.9836720824241638\nFor epoch: 12, W: 3.057337999343872, b: 1.940138578414917, current_loss: 0.9723285436630249\nFor epoch: 13, W: 3.037580728530884, b: 1.9601420164108276, current_loss: 0.9652550220489502\nFor epoch: 14, W: 3.021991491317749, b: 1.9759504795074463, current_loss: 0.960844099521637\nFor epoch: 15, W: 3.0096912384033203, b: 1.9884440898895264, current_loss: 0.9580935835838318\nFor epoch: 16, W: 2.999986410140991, b: 1.998318076133728, current_loss: 0.9563782811164856\nFor epoch: 17, W: 2.9923295974731445, b: 2.006121873855591, current_loss: 0.955308735370636\nFor epoch: 18, W: 2.986288547515869, b: 2.0122897624969482, current_loss: 0.9546416997909546\nFor epoch: 19, W: 2.981522560119629, b: 2.017164468765259, current_loss: 0.9542258381843567\n"
],
[
"plt.plot(range(epochs), Ws, 'r', range(epochs), bs, \"b\")\n\nplt.plot([TRUE_W] * epochs, \"r--\", [TRUE_B] * epochs, \"b--\")\n\nplt.legend([\"W\", \"b\", \"True W\", \"True B\"])\n\nplt.show()",
"_____no_output_____"
],
[
"plt.scatter(x, y, c=\"b\")\n\npred_y = model(x) ## after training\nplt.scatter(x, pred_y, c=\"r\") \n\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e75edb4337628cc00b3e6a162276e585c3a9fd92 | 6,840 | ipynb | Jupyter Notebook | DeepRL_For_HPE/.ipynb_checkpoints/DatasetVisualizer-checkpoint.ipynb | muratcancicek/Deep_RL_For_Head_Pose_Est | b3436a61a44d20d8bcfd1341792e0533e3ff9fc2 | [
"Apache-2.0"
] | null | null | null | DeepRL_For_HPE/.ipynb_checkpoints/DatasetVisualizer-checkpoint.ipynb | muratcancicek/Deep_RL_For_Head_Pose_Est | b3436a61a44d20d8bcfd1341792e0533e3ff9fc2 | [
"Apache-2.0"
] | null | null | null | DeepRL_For_HPE/.ipynb_checkpoints/DatasetVisualizer-checkpoint.ipynb | muratcancicek/Deep_RL_For_Head_Pose_Est | b3436a61a44d20d8bcfd1341792e0533e3ff9fc2 | [
"Apache-2.0"
] | null | null | null | 26.823529 | 131 | 0.556871 | [
[
[
"from DatasetHandler.BiwiBrowser import *\nimport keras\nimport numpy as np\n%matplotlib inline\n#from keras import Model \nfrom keras.layers import *\nimport matplotlib.pyplot as plt\nfrom keras.optimizers import SGD\nfrom keras.models import Sequential\nfrom keras.constraints import maxnorm\nfrom keras.applications.vgg16 import VGG16\nfrom sklearn.preprocessing import MinMaxScaler\nfrom keras.preprocessing.image import load_img\nfrom keras.preprocessing.image import img_to_array\nfrom keras.layers.convolutional import MaxPooling2D\nfrom keras.layers.convolutional import Convolution2D\nfrom keras.applications.vgg16 import preprocess_input\nfrom keras.applications.vgg16 import decode_predictions",
"Using Theano backend.\n"
],
[
"def scale(arr):\n scaler = MinMaxScaler(feature_range=(-1, 1))\n scaler = scaler.fit(arr)\n # normalize the dataset\n normalized = scaler.transform(arr)\n return normalized",
"_____no_output_____"
],
[
"def reshaper(m, l, timesteps = 1):\n wasted = (m.shape[0] % timesteps)\n m, l = m[wasted:], l[wasted:]\n l = scale(l)\n m = m.reshape((int(m.shape[0]/timesteps), timesteps, m.shape[1], m.shape[2], m.shape[3]))\n l = l.reshape((int(l.shape[0]/timesteps), timesteps, l.shape[1]))\n l = l[:, -1, :]\n return m, l",
"_____no_output_____"
],
[
"num_datasets = 2",
"_____no_output_____"
],
[
"num_outputs = 1",
"_____no_output_____"
],
[
"timesteps = 1",
"_____no_output_____"
],
[
"#keras.backend.clear_session()\ndef getFinalModel(num_outputs = num_outputs):\n dense_layer_1 = 1#int((patch_size[0] * patch_size[1]) / 1)0010#00000\n dense_layer_2 = 8\n inp = BIWI_Frame_Shape\n vgg_model = VGG16(weights='imagenet', include_top=False, input_shape = BIWI_Frame_Shape)\n rnn = Sequential()\n rnn.add(TimeDistributed(vgg_model, batch_size = timesteps, input_shape=(timesteps, inp[0], inp[1], inp[2])))#\n \n rnn.add(TimeDistributed(Flatten()))\n rnn.add(LSTM(64, dropout=0.2, recurrent_dropout=0.2, stateful=True)) # , activation='relu'\n# rnn.add(TimeDistributed(Dropout(0.2)))\n rnn.add(Dense(num_outputs))\n\n for layer in rnn.layers[:15]:\n layer.trainable = False\n rnn.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae'])\n return rnn",
"_____no_output_____"
],
[
"full_model = getFinalModel(num_outputs = num_outputs)",
"_____no_output_____"
],
[
"biwi = readBIWIDataset(subjectList = [s for s in range(1, num_datasets+1)])#",
"_____no_output_____"
],
[
"c = 0\nframes, labelsList = [], []\nfor inputMatrix, labels in biwi:\n inputMatrix, labels = reshaper(inputMatrix, labels, timesteps = timesteps)\n if c < num_datasets-1:\n full_model.fit(inputMatrix, labels[:, :num_outputs], batch_size = timesteps, epochs=1, verbose=2, shuffle=False) #\n full_model.reset_states()\n frames.append(inputMatrix)\n labelsList.append(scale(labels))\n else:\n frames.append(inputMatrix)\n labelsList.append(scale(labels))\n c += 1\n print('Batch %d done!' % c)",
"_____no_output_____"
],
[
"test_inputMatrix, test_labels = frames[0], labelsList[0]",
"_____no_output_____"
],
[
"predictions = full_model.predict(test_inputMatrix, batch_size = timesteps)",
"_____no_output_____"
],
[
"output1 = numpy.concatenate((test_labels[:, :1], predictions[:, :1]), axis=1)",
"_____no_output_____"
],
[
"plt.plot(output1)",
"_____no_output_____"
],
[
"output1 = numpy.concatenate((test_labels[:, :1], predictions[:, :1]), axis=1)\noutput2 = numpy.concatenate((test_labels[:, 1:2], predictions[:, 1:2]), axis=1)\noutput3 = numpy.concatenate((test_labels[:, 2:3], predictions[:, 2:3]), axis=1)",
"_____no_output_____"
],
[
"# Three subplots sharing both x/y axes\nf, (ax1, ax2, ax3) = plt.subplots(3, sharex=True)\nax1.plot(output1)\nax1.set_title('Sharing both axes')\nax2.plot(output2)\nax3.plot(output3)\n# Fine-tune figure; make subplots close to each other and hide x ticks for\n# all but bottom plot.\nf.subplots_adjust(hspace=0)\nplt.setp([a.get_xticklabels() for a in f.axes[:-1]], visible=False)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75ee75e0dd541d616b114ea86877a8debcb4885 | 1,094 | ipynb | Jupyter Notebook | Untitled.ipynb | nyimbi/caseke | ce4a0fa44cd383bc23900e42f81656f089c8fdd9 | [
"MIT"
] | 1 | 2019-06-03T16:20:35.000Z | 2019-06-03T16:20:35.000Z | Untitled.ipynb | nyimbi/caseke | ce4a0fa44cd383bc23900e42f81656f089c8fdd9 | [
"MIT"
] | 20 | 2020-01-28T22:02:29.000Z | 2022-03-29T22:28:34.000Z | Untitled.ipynb | nyimbi/caseke | ce4a0fa44cd383bc23900e42f81656f089c8fdd9 | [
"MIT"
] | 1 | 2019-06-10T17:20:48.000Z | 2019-06-10T17:20:48.000Z | 15.855072 | 41 | 0.458867 | [
[
[
"import sklearn\n",
"_____no_output_____"
]
],
[
[
"# Simple Example\n\nLet us now evaluate the equation\n$$ y = x^2 $$\n for \n $$ x=25 $$\n ",
"_____no_output_____"
]
],
[
[
"x= 25\ny= x**2\nprint(y)",
"625\n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e75eedddf0689b89efa9f1de95881d4ca2c9a9f9 | 23,457 | ipynb | Jupyter Notebook | week2_value_based/seminar2_MCTS.ipynb | Maverobot/Practical_RL | a089e22c4ef57401e9aa2709f54120d50e469284 | [
"Unlicense"
] | 3 | 2019-07-28T11:04:36.000Z | 2021-10-09T21:44:17.000Z | week2_value_based/seminar2_MCTS.ipynb | torusknot38/Practical_RL | 8e5471eaabc09795824a63e7b96e893693b51f7a | [
"Unlicense"
] | null | null | null | week2_value_based/seminar2_MCTS.ipynb | torusknot38/Practical_RL | 8e5471eaabc09795824a63e7b96e893693b51f7a | [
"Unlicense"
] | 3 | 2018-10-03T21:57:45.000Z | 2020-02-05T22:41:03.000Z | 35.757622 | 282 | 0.551179 | [
[
[
"import gym\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Seminar: Monte-carlo tree search\n\nIn this seminar, we'll implement a vanilla MCTS planning and use it to solve some Gym envs.\n\nBut before we do that, we first need to modify gym env to allow saving and loading game states to facilitate backtracking.",
"_____no_output_____"
]
],
[
[
"from gym.core import Wrapper\nfrom pickle import dumps,loads\nfrom collections import namedtuple\n\n#a container for get_result function below. Works just like tuple, but prettier\nActionResult = namedtuple(\"action_result\",(\"snapshot\",\"observation\",\"reward\",\"is_done\",\"info\"))\n\n\nclass WithSnapshots(Wrapper):\n \"\"\"\n Creates a wrapper that supports saving and loading environemnt states.\n Required for planning algorithms.\n\n This class will have access to the core environment as self.env, e.g.:\n - self.env.reset() #reset original env\n - self.env.ale.cloneState() #make snapshot for atari. load with .restoreState()\n - ...\n\n You can also use reset, step and render directly for convenience.\n - s, r, done, _ = self.step(action) #step, same as self.env.step(action)\n - self.render(close=True) #close window, same as self.env.render(close=True)\n \"\"\"\n\n \n def get_snapshot(self):\n \"\"\"\n :returns: environment state that can be loaded with load_snapshot \n Snapshots guarantee same env behaviour each time they are loaded.\n \n Warning! Snapshots can be arbitrary things (strings, integers, json, tuples)\n Don't count on them being pickle strings when implementing MCTS.\n \n Developer Note: Make sure the object you return will not be affected by \n anything that happens to the environment after it's saved.\n You shouldn't, for example, return self.env. 
\n In case of doubt, use pickle.dumps or deepcopy.\n \n \"\"\"\n self.render() #close popup windows since we can't pickle them\n if self.unwrapped.viewer is not None:\n self.unwrapped.viewer.close()\n self.unwrapped.viewer = None\n return dumps(self.env)\n \n def load_snapshot(self,snapshot):\n \"\"\"\n Loads snapshot as current env state.\n Should not change snapshot inplace (in case of doubt, deepcopy).\n \"\"\"\n \n assert not hasattr(self,\"_monitor\") or hasattr(self.env,\"_monitor\"), \"can't backtrack while recording\"\n\n self.render(close=True) #close popup windows since we can't load into them\n self.env = loads(snapshot)\n \n def get_result(self,snapshot,action):\n \"\"\"\n A convenience function that \n - loads snapshot, \n - commits action via self.step,\n - and takes snapshot again :)\n \n :returns: next snapshot, next_observation, reward, is_done, info\n \n Basically it returns next snapshot and everything that env.step would have returned.\n \"\"\"\n \n <your code here load,commit,take snapshot>\n \n return ActionResult(<next_snapshot>, #fill in the variables\n <next_observation>, \n <reward>, <is_done>, <info>)\n",
"_____no_output_____"
]
],
[
[
"### try out snapshots:\n",
"_____no_output_____"
]
],
[
[
"#make env\nenv = WithSnapshots(gym.make(\"CartPole-v0\"))\nenv.reset()\n\nn_actions = env.action_space.n",
"_____no_output_____"
],
[
"print(\"initial_state:\")\n\nplt.imshow(env.render('rgb_array'))\n\n#create first snapshot\nsnap0 = env.get_snapshot()",
"_____no_output_____"
],
[
"#play without making snapshots (faster)\nwhile True:\n is_done = env.step(env.action_space.sample())[2]\n if is_done: \n print(\"Whoops! We died!\")\n break\n \nprint(\"final state:\")\nplt.imshow(env.render('rgb_array'))\nplt.show()\n",
"_____no_output_____"
],
[
"#reload initial state\nenv.load_snapshot(snap0)\n\nprint(\"\\n\\nAfter loading snapshot\")\nplt.imshow(env.render('rgb_array'))\nplt.show()",
"_____no_output_____"
],
[
"#get outcome (snapshot, observation, reward, is_done, info)\nres = env.get_result(snap0,env.action_space.sample())\n\nsnap1, observation, reward = res[:3]\n\n#second step\nres2 = env.get_result(snap1,env.action_space.sample())",
"_____no_output_____"
]
],
[
[
"# MCTS: Monte-Carlo tree search\n\nIn this section, we'll implement the vanilla MCTS algorithm with UCB1-based node selection.\n\nWe will start by implementing the `Node` class - a simple class that acts like MCTS node and supports some of the MCTS algorithm steps.\n\nThis MCTS implementation makes some assumptions about the environment, you can find those _in the notes section at the end of the notebook_.",
"_____no_output_____"
]
],
[
[
"assert isinstance(env,WithSnapshots)",
"_____no_output_____"
],
[
"class Node:\n \"\"\" a tree node for MCTS \"\"\"\n \n #metadata:\n parent = None #parent Node\n value_sum = 0. #sum of state values from all visits (numerator)\n times_visited = 0 #counter of visits (denominator)\n\n \n def __init__(self,parent,action,):\n \"\"\"\n Creates and empty node with no children.\n Does so by commiting an action and recording outcome.\n \n :param parent: parent Node\n :param action: action to commit from parent Node\n \n \"\"\"\n \n self.parent = parent\n self.action = action \n self.children = set() #set of child nodes\n\n #get action outcome and save it\n res = env.get_result(parent.snapshot,action)\n self.snapshot,self.observation,self.immediate_reward,self.is_done,_ = res\n \n \n def is_leaf(self):\n return len(self.children)==0\n \n def is_root(self):\n return self.parent is None\n \n def get_mean_value(self):\n return self.value_sum / self.times_visited if self.times_visited !=0 else 0\n \n def ucb_score(self,scale=10,max_value=1e100):\n \"\"\"\n Computes ucb1 upper bound using current value and visit counts for node and it's parent.\n \n :param scale: Multiplies upper bound by that. 
From hoeffding inequality, assumes reward range to be [0,scale].\n :param max_value: a value that represents infinity (for unvisited nodes)\n \n \"\"\"\n \n if self.times_visited == 0:\n return max_value\n \n #compute ucb-1 additive component (to be added to mean value)\n #hint: you can use self.parent.times_visited for N times node was considered,\n # and self.times_visited for n times it was visited\n \n U = <your code here>\n \n return self.get_mean_value() + scale*U\n \n \n #MCTS steps\n \n def select_best_leaf(self):\n \"\"\"\n Picks the leaf with highest priority to expand\n Does so by recursively picking nodes with best UCB-1 score until it reaches the leaf.\n \n \"\"\"\n if self.is_leaf():\n return self\n \n children = self.children\n \n best_child = <select best child node in terms of node.ucb_score()>\n \n return best_child.select_best_leaf()\n \n def expand(self):\n \"\"\"\n Expands the current node by creating all possible child nodes.\n Then returns one of those children.\n \"\"\"\n \n assert not self.is_done, \"can't expand from terminal state\"\n\n for action in range(n_actions):\n self.children.add(Node(self,action))\n \n return self.select_best_leaf()\n \n def rollout(self,t_max=10**4):\n \"\"\"\n Play the game from this state to the end (done) or for t_max steps.\n \n On each step, pick action at random (hint: env.action_space.sample()).\n \n Compute sum of rewards from current state till \n Note 1: use env.action_space.sample() for random action\n Note 2: if node is terminal (self.is_done is True), just return 0\n \n \"\"\"\n \n #set env into the appropriate state\n env.load_snapshot(self.snapshot)\n obs = self.observation\n is_done = self.is_done\n \n <your code here - rollout and compute reward>\n\n return rollout_reward\n \n def propagate(self,child_value):\n \"\"\"\n Uses child value (sum of rewards) to update parents recursively.\n \"\"\"\n #compute node value\n my_value = self.immediate_reward + child_value\n \n #update value_sum and 
times_visited\n self.value_sum+=my_value\n self.times_visited+=1\n \n #propagate upwards\n if not self.is_root():\n self.parent.propagate(my_value)\n \n def safe_delete(self):\n \"\"\"safe delete to prevent memory leak in some python versions\"\"\"\n del self.parent\n for child in self.children:\n child.safe_delete()\n del child",
"_____no_output_____"
],
[
"class Root(Node):\n def __init__(self,snapshot,observation):\n \"\"\"\n creates special node that acts like tree root\n :snapshot: snapshot (from env.get_snapshot) to start planning from\n :observation: last environment observation\n \"\"\"\n \n self.parent = self.action = None\n self.children = set() #set of child nodes\n \n #root: load snapshot and observation\n self.snapshot = snapshot\n self.observation = observation\n self.immediate_reward = 0\n self.is_done=False\n \n @staticmethod\n def from_node(node):\n \"\"\"initializes node as root\"\"\"\n root = Root(node.snapshot,node.observation)\n #copy data\n copied_fields = [\"value_sum\",\"times_visited\",\"children\",\"is_done\"]\n for field in copied_fields:\n setattr(root,field,getattr(node,field))\n return root",
"_____no_output_____"
]
],
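As a quick aside (my own standalone sketch, not the seminar's reference solution), the exploration bonus that `ucb_score` leaves as an exercise is commonly computed as `sqrt(2 * ln(N) / n)`, where `N` is the parent's visit count and `n` is the node's own:

```python
import math

def ucb1(mean_value, n_visits, parent_visits, scale=10, max_value=1e100):
    # Unvisited nodes get "infinite" priority so they are tried at least once.
    if n_visits == 0:
        return max_value
    # Hoeffding-style exploration bonus: sqrt(2 * ln(N) / n).
    u = math.sqrt(2 * math.log(parent_visits) / n_visits)
    return mean_value + scale * u

# The bonus shrinks as a node is visited more often, all else being equal.
print(ucb1(0.5, 10, 100) > ucb1(0.5, 50, 100))  # True
```

The `scale` and `max_value` defaults mirror the signature of `Node.ucb_score` above; everything else is illustrative.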
[
[
"## Main MCTS loop\n\nWith all we implemented, MCTS boils down to a trivial piece of code.",
"_____no_output_____"
]
],
[
[
"def plan_mcts(root,n_iters=10):\n \"\"\"\n builds tree with monte-carlo tree search for n_iters iterations\n :param root: tree node to plan from\n :param n_iters: how many select-expand-simulate-propagete loops to make\n \"\"\"\n for _ in range(n_iters):\n\n node = <select best leaf>\n\n if node.is_done:\n node.propagate(0)\n\n else: #node is not terminal\n <expand-simulate-propagate loop>\n \n",
"_____no_output_____"
]
],
[
[
"## Plan and execute\nIn this section, we use the MCTS implementation to find optimal policy.",
"_____no_output_____"
]
],
[
[
"root_observation = env.reset()\nroot_snapshot = env.get_snapshot()\nroot = Root(root_snapshot,root_observation)",
"_____no_output_____"
],
[
"#plan from root:\nplan_mcts(root,n_iters=1000)",
"_____no_output_____"
],
[
"from IPython.display import clear_output\nfrom itertools import count\nfrom gym.wrappers import Monitor\n\ntotal_reward = 0 #sum of rewards\ntest_env = loads(root_snapshot) #env used to show progress\n\nfor i in count():\n \n #get best child\n best_child = <select child with highest mean reward>\n \n #take action\n s,r,done,_ = test_env.step(best_child.action)\n \n #show image\n clear_output(True)\n plt.title(\"step %i\"%i)\n plt.imshow(test_env.render('rgb_array'))\n plt.show()\n\n total_reward += r\n if done:\n print(\"Finished with reward = \",total_reward)\n break\n \n #discard unrealized part of the tree [because not every child matters :(]\n for child in root.children:\n if child != best_child:\n child.safe_delete()\n\n #declare best child a new root\n root = Root.from_node(best_child)\n \n assert not root.is_leaf(), \"We ran out of tree! Need more planning! Try growing tree right inside the loop.\"\n \n #you may want to expand tree here\n #<your code here>\n",
"_____no_output_____"
]
],
[
[
"## Bonus assignments (10+pts each)\n\nThere's a few things you might want to try if you want to dig deeper:\n\n### Node selection and expansion\n\n\"Analyze this\" assignment\n\nUCB-1 is a weak bound as it relies on a very general bounds (Hoeffding Inequality, to be exact). \n* Try playing with alpha. The theoretically optimal alpha for CartPole is 200 (max reward). \n* Use using a different exploration strategy (bayesian UCB, for example)\n* Expand not all but several random actions per `expand` call. See __the notes below__ for details.\n\nThe goal is to find out what gives the optimal performance for `CartPole-v0` for different time budgets (i.e. different n_iter in plan_mcts.\n\nEvaluate your results on `Acrobot-v1` - do the results change and if so, how can you explain it?\n\n\n### Atari-RAM\n\n\"Build this\" assignment\n\nApply MCTS to play atari games. In particular, let's start with ```gym.make(\"MsPacman-ramDeterministic-v0\")```.\n\nThis requires two things:\n* Slightly modify WithSnapshots wrapper to work with atari.\n\n * Atari has a special interface for snapshots:\n ``` \n snapshot = self.env.ale.cloneState()\n ...\n self.env.ale.restoreState(snapshot)\n ```\n * Try it on the env above to make sure it does what you told it to.\n \n* Run MCTS on the game above. \n * Start with small tree size to speed-up computations\n * You will probably want to rollout for 10-100 steps (t_max) for starters\n * Consider using discounted rewards (see __notes at the end__)\n * Try a better rollout policy\n \n \n### Integrate learning into planning\n\nPlanning on each iteration is a costly thing to do. You can speed things up drastically if you train a classifier to predict which action will turn out to be best according to MCTS.\n\nTo do so, just record which action did the MCTS agent take on each step and fit something to [state, mcts_optimal_action]\n* You can also use optimal actions from discarded states to get more (dirty) samples. 
Just don't forget to fine-tune without them.\n* It's also worth a try to use P(best_action|state) from your model to select best nodes in addition to UCB\n* If your model is lightweight enough, try using it as a rollout policy.\n\n__(bonus points)__ While CartPole is glorious enough, try expanding this to ```gym.make(\"MsPacmanDeterministic-v0\")```\n* See previous section on how to wrap atari (you'll get points for both if you run this on atari)\n\n\n### Integrate planning into learning (project, a LOT of points)\n\nIncorporate planning into the agent architecture. \n\nThe goal is to implement [Value Iteration Networks](https://arxiv.org/abs/1602.02867)\n\nFor starters, remember [week7 assignment](https://github.com/yandexdataschool/Practical_RL/blob/master/week7/7.2_seminar_kung_fu.ipynb)? If not, use [this](http://bit.ly/2oZ34Ap) instead.\n\nYou will need to switch it into a maze-like game, consider MsPacman or the games from week7 [Bonus: Neural Maps from here](https://github.com/yandexdataschool/Practical_RL/blob/master/week7/7.3_homework.ipynb).\n\nYou will need to implement a special layer that performs value iteration-like update to a recurrent memory. This can be implemented the same way you did attention from week7 or week8.",
"_____no_output_____"
],
[
"## Notes\n\n\n#### Assumptions\n\nThe full list of assumptions is\n* __Finite actions__ - we enumerate all actions in `expand`\n* __Episodic (finite) MDP__ - while technically it works for infinite mdp, we rollout for $ 10^4$ steps. If you are knowingly infinite, please adjust `t_max` to something more reasonable.\n* __No discounted rewards__ - we assume $\\gamma=1$. If that isn't the case, you only need to change a two lines in `rollout` and use `my_R = r + gamma*child_R` for `propagate`\n* __pickleable env__ - won't work if e.g. your env is connected to a web-browser surfing the internet. For custom envs, you may need to modify get_snapshot/load_snapshot from `WithSnapshots`.\n\n#### On `get_best_leaf` and `expand` functions\n\nThis MCTS implementation only selects leaf nodes for expansion.\nThis doesn't break things down because `expand` adds all possible actions. Hence, all non-leaf nodes are by design fully expanded and shouldn't be selected.\n\nIf you want to only add a few random action on each expand, you will also have to modify `get_best_leaf` to consider returning non-leafs.\n\n#### Rollout policy\n\nWe use a simple uniform policy for rollouts. This introduces a negative bias to good situations that can be messed up completely with random bad action. As a simple example, if you tend to rollout with uniform policy, you better don't use sharp knives and walk near cliffs.\n\nYou can improve that by integrating a reinforcement _learning_ algorithm with a computationally light agent. You can even train this agent on optimal policy found by the tree search.\n\n#### Contributions\n* Reusing some code from 5vision [solution for deephack.RL](https://github.com/5vision/uct_atari), code by Mikhail Pavlov\n* Using some code from [this gist](https://gist.github.com/blole/dfebbec182e6b72ec16b66cc7e331110)",
"_____no_output_____"
]
]
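To make the discounted-reward note above concrete, here is a tiny illustration (my own sketch; the `gamma` values below are arbitrary examples) of the `my_R = r + gamma*child_R` recursion applied along a sequence of rewards:

```python
def discounted_return(rewards, gamma=0.99):
    # Apply my_R = r + gamma * child_R from the last step backwards.
    value = 0.0
    for r in reversed(rewards):
        value = r + gamma * value
    return value

print(discounted_return([1, 1, 1], gamma=0.5))  # 1 + 0.5*(1 + 0.5*1) = 1.75
```

With `gamma=1.0` this reduces to the undiscounted sum the seminar uses by default.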
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e75ef4f743ee2c8fdd88c0fd01c48bfbd11728bb | 5,270 | ipynb | Jupyter Notebook | _notebooks/2022-05-12-tsp_bruteforce.ipynb | ljk233/blog | ce64f2bbb344b8178f7383d79b21816178b1187a | [
"MIT"
] | null | null | null | _notebooks/2022-05-12-tsp_bruteforce.ipynb | ljk233/blog | ce64f2bbb344b8178f7383d79b21816178b1187a | [
"MIT"
] | null | null | null | _notebooks/2022-05-12-tsp_bruteforce.ipynb | ljk233/blog | ce64f2bbb344b8178f7383d79b21816178b1187a | [
"MIT"
] | null | null | null | 25.960591 | 231 | 0.525427 | [
[
[
"# Travelling Salesperson Problem (Brute-force Search)\n> A solution to the travelling salesperson problem using a brute-force search.\n\n- toc: true\n- badges: true\n- comments: true\n- categories: [algorithm, graphs]\n- permalink: /2022/05/12/travelling_salesperson_problem_bruteforce/",
"_____no_output_____"
],
[
"## Notes\n\nThe **Travelling Salesperson Problem** is defined as:\n\n> *Given a set of cities and distances between every pair of cities, the [***Travelling Salesperson***] problem is to find the shortest possible route that visits every city exactly once and returns to the starting point.*\n>\n> *[Traveling Salesman Problem (TSP) Implementation](https://www.geeksforgeeks.org/traveling-salesman-problem-tsp-implementation/)* (GeeksForGeeks)\n\nIn this implementation, we generate permutations and check if the |*path*| < |*min path*|.\n\nWhilst the function works, it is unusuable when |nodes(*g*)| ≥ 11, given $P(11, 11) =$ 36720000 permutations to check!",
"_____no_output_____"
],
[
"## Dependencies",
"_____no_output_____"
]
],
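Before installing anything, it is worth making the factorial blow-up explicit. With the start city held fixed, the brute-force search enumerates `(n-1)!` candidate tours. A quick back-of-the-envelope check (not part of the original notebook):

```python
import math

def n_tours(n_cities):
    # Tours checked by a brute-force search with the start city held fixed.
    return math.factorial(n_cities - 1)

for n in (5, 8, 11):
    print(n, n_tours(n))  # 24, 5040, 3628800 candidate tours respectively
```

This is why the timing experiment below stops at 10 nodes.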
[
[
"import random as rand\nimport math\nimport itertools as it\nimport networkx as nx",
"_____no_output_____"
]
],
[
[
"## Function",
"_____no_output_____"
]
],
[
[
"def bruteforce_tsp(G: nx.Graph, start: object) -> float | int:\n \"\"\"Return the shortest route that visits every city exactly once and\n ends back at the start.\n\n Solves the travelling salesperson with a brute-force search using\n permutations.\n\n Preconditions:\n - G is a complete weighted graph\n - start in G\n - WG[u, v]['weight'] is the distance u -> v\n \"\"\"\n neighbours = set((node for node in G.nodes if node != start))\n min_dist = math.inf\n for path in it.permutations(neighbours):\n u, dist = start, 0\n for v in path:\n dist += G.edges[u, v]['weight']\n u = v\n min_dist = min(min_dist, dist + G.edges[u, start]['weight'])\n\n return min_dist",
"_____no_output_____"
]
],
[
[
"## Example usage\n\n### Initialise the graph",
"_____no_output_____"
]
],
[
[
"cg = nx.complete_graph(['origin', 'a', 'b', 'c', 'd'])\ng = nx.Graph((u, v, {'weight': rand.randint(1, 10)}) for u, v in cg.edges)\nprint(f\"g = {g}\")",
"g = Graph with 5 nodes and 10 edges\n"
]
],
[
[
"### Find the shortest path from the origin",
"_____no_output_____"
]
],
[
[
"print(f\"Shortest path from the origin = {bruteforce_tsp(g, 'origin')}\")",
"Shortest path from the origin = 24\n"
]
],
[
[
"## Performance",
"_____no_output_____"
]
],
[
[
"for n in [4, 6, 8, 10]:\n print(f\"|nodes(g)| = {n}\")\n cg = nx.complete_graph(n)\n g = nx.Graph((u, v, {'weight': rand.randint(1, 10)}) for u, v in cg.edges)\n %timeit bruteforce_tsp(g, 1)",
"|nodes(g)| = 4\n17.6 µs ± 214 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)\n|nodes(g)| = 6\n472 µs ± 5.3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)\n|nodes(g)| = 8\n26.3 ms ± 203 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)\n|nodes(g)| = 10\n2.29 s ± 25.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e75f11274e178b277baac83167f6a86591eace09 | 9,618 | ipynb | Jupyter Notebook | scripts/VectorsEn.ipynb | IlyaGusev/tgcontest | 8945b9f6d1527ca21920998e86a8ecc1ebfdf526 | [
"Apache-2.0"
] | 91 | 2020-01-05T11:46:12.000Z | 2022-03-28T04:50:12.000Z | scripts/VectorsEn.ipynb | gangeleski/tgcontest | 3d8ab5ba140c9a6f928c40e97c917db9f9297321 | [
"Apache-2.0"
] | 1 | 2020-07-10T11:32:47.000Z | 2020-08-05T20:57:10.000Z | scripts/VectorsEn.ipynb | gangeleski/tgcontest | 3d8ab5ba140c9a6f928c40e97c917db9f9297321 | [
"Apache-2.0"
] | 33 | 2020-01-14T17:37:14.000Z | 2022-03-12T15:28:01.000Z | 25.579787 | 120 | 0.533895 | [
[
[
"!pip install pyonmttok fasttext",
"_____no_output_____"
],
[
"!rm -f en_tg_train.tar.gz\n!wget https://www.dropbox.com/s/umd8tyx4wz1wquq/en_tg_train.tar.gz\n!rm -f en_tg_train.json\n!tar -xzvf en_tg_train.tar.gz\n!rm en_tg_train.tar.gz",
"_____no_output_____"
],
[
"# https://www.kaggle.com/pariza/bbc-news-summary/data\n\n!rm -f bbc-news-summary.zip\n!wget https://www.dropbox.com/s/gq76b24q3x5n1ku/bbc-news-summary.zip\n!unzip bbc-news-summary.zip",
"_____no_output_____"
],
[
"# https://www.kaggle.com/rmisra/news-category-dataset\n\n!rm -f news-category-dataset.zip\n!wget https://www.dropbox.com/s/ua18htwqrkwnfpg/news-category-dataset.zip\n!unzip news-category-dataset.zip",
"_____no_output_____"
],
[
"# https://www.kaggle.com/snapcrack/all-the-news\n\n!rm -f all-the-news.zip\n!wget https://www.dropbox.com/s/bacg3cxckeqw6a9/all-the-news.zip\n!unzip all-the-news.zip",
"_____no_output_____"
],
[
"import json\n\nwith open('en_tg_train.json', \"r\") as r:\n tg_train_data = json.load(r)\n\ntg_titles = [record[\"title\"] for record in tg_train_data]\ntg_texts = [record[\"text\"] for record in tg_train_data]\nprint(tg_titles[0])\nprint(tg_texts[0])\nprint(len(tg_titles))",
"_____no_output_____"
],
[
"import os\n\ndef get_bbc_texts(input_directory):\n assert os.path.exists(input_directory)\n records = []\n for rubric_dir in os.listdir(input_directory):\n rubric_dir = os.path.join(input_directory, rubric_dir)\n if not os.path.isdir(rubric_dir):\n continue\n for file_name in os.listdir(rubric_dir):\n file_name = os.path.join(rubric_dir, file_name)\n with open(file_name, \"r\") as r:\n try:\n content = r.read().replace(\"\\n\", \" \")\n except Exception as e:\n continue\n records.append(content)\n return records\n\nbbc_texts = get_bbc_texts(\"BBC News Summary/News Articles\")\nprint(bbc_texts[0])\nprint(len(bbc_texts))",
"_____no_output_____"
],
[
"import json\n\nnc_texts = []\nwith open(\"News_Category_Dataset_v2.json\", \"r\") as r:\n for line in r:\n data = json.loads(line)\n title = data[\"headline\"]\n text = data[\"short_description\"]\n nc_texts.append(title + \" \" + text)\nprint(nc_texts[0])\nprint(len(nc_texts))",
"_____no_output_____"
],
[
"import csv\nimport sys\ncsv.field_size_limit(sys.maxsize)\n\nall_the_news_files = (\"articles1.csv\", \"articles2.csv\", \"articles3.csv\")\natn_titles = []\natn_texts = []\nfor file_name in all_the_news_files:\n with open(file_name, \"r\") as r:\n next(r)\n reader = csv.reader(r, delimiter=',')\n for row in reader:\n _, _, title, _, _, _, _, _, _, text = row\n atn_titles.append(title)\n atn_texts.append(text)\nprint(atn_titles[0])\nprint(atn_texts[0])\nprint(len(atn_titles))",
"_____no_output_____"
],
[
"import pyonmttok\nimport random\ntokenizer = pyonmttok.Tokenizer(\"conservative\")\n\ndef preprocess(text):\n text = str(text).strip().replace(\"\\n\", \" \").replace(\"\\xa0\", \" \").lower()\n tokens, _ = tokenizer.tokenize(text)\n text = \" \".join(tokens)\n return text\n\nall_samples = tg_titles + tg_texts + bbc_texts + nc_texts + atn_titles + atn_texts\nrandom.shuffle(all_samples)\nprocessed_all_samples = [preprocess(text) for text in all_samples]\nprocessed_all_samples = [text for text in processed_all_samples if text.strip()]\nprint(processed_all_samples[0])\nprint(len(processed_all_samples))",
"_____no_output_____"
],
[
"# Clear RAM\ndel tg_titles\ndel tg_texts\ndel bbc_texts\ndel nc_texts\ndel all_samples\ndel atn_titles\ndel atn_texts",
"_____no_output_____"
],
[
"# Clear Disk\n!rm -rf \"BBC News Summary\"\n!rm -rf \"bbc news summary\"\n!rm News_Category_Dataset_v2.json\n!rm en_tg_train.json\n!rm articles1.csv\n!rm articles2.csv\n!rm articles3.csv",
"_____no_output_____"
],
[
"with open(\"train.txt\", \"w\", encoding=\"utf-8\") as w:\n for sample in processed_all_samples:\n w.write(sample.strip() + \"\\n\")",
"_____no_output_____"
],
[
"!tar -czvf en_unsupervised_train.tar.gz train.txt",
"_____no_output_____"
],
[
"from fasttext import train_unsupervised\n\nmodel = train_unsupervised('train.txt', model='skipgram', dim=50, epoch=10, minCount=50, bucket=200000, verbose=2)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75f284425b412f96aaaf743cc4c8e323372836c | 7,840 | ipynb | Jupyter Notebook | polyglot.ipynb | SylvainCorlay/xtensor-polyglot | 34ff276e29c5c2e15442568ab4d8e39a7c8324ca | [
"BSD-3-Clause"
] | 5 | 2019-08-29T09:57:48.000Z | 2019-10-07T21:09:34.000Z | polyglot.ipynb | QuantStack/xtensor-polyglot | acdf29e29ed1d48e107eb91a0258a0e3495b0133 | [
"BSD-3-Clause"
] | null | null | null | polyglot.ipynb | QuantStack/xtensor-polyglot | acdf29e29ed1d48e107eb91a0258a0e3495b0133 | [
"BSD-3-Clause"
] | 2 | 2019-12-26T14:40:22.000Z | 2020-05-29T00:41:31.000Z | 25.209003 | 99 | 0.5 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e75f3a118dc4e7e968aaa5948fd8086471110bb3 | 164,113 | ipynb | Jupyter Notebook | code/seq_sim_demo/20200713_seq_sim_mouse.ipynb | eho-tacc/epi-model-reu | baac6b01c5b0224c3f18222a92308479e5e14532 | [
"MIT"
] | null | null | null | code/seq_sim_demo/20200713_seq_sim_mouse.ipynb | eho-tacc/epi-model-reu | baac6b01c5b0224c3f18222a92308479e5e14532 | [
"MIT"
] | null | null | null | code/seq_sim_demo/20200713_seq_sim_mouse.ipynb | eho-tacc/epi-model-reu | baac6b01c5b0224c3f18222a92308479e5e14532 | [
"MIT"
] | null | null | null | 75.419577 | 60,929 | 0.363762 | [
[
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"# Sequence Similarity Demo\nIn this demo, we will answer the question:\n\n_How does the primary sequence of TMPRSS2 differ between species that one would encounter in a farm environment?_\n\nWe will address this question using sequence alignment and analysis tools from the [Biopython](http://biopython.org/DIST/docs/tutorial/Tutorial.html#sec81) Python library.\n\n## Outline\n\n* Using the [Biopython tutorial](http://biopython.org/DIST/docs/tutorial/Tutorial.html#sec385) as reference\n* Prerequisites\n * Reading on PSSMs: [Rice.edu](https://www.cs.rice.edu/~ogilvie/comp571/2018/09/11/pssm.html)\n\n### Part 1: Preparing input sequences\n\n* Intro to `Bio.Align`\n* Learn how to filter sequence records in a multiple sequence alignment by:\n * Species name\n * Sequence snippets\n* Find the \n* Generate consensus sequences for the cat sequence\n\n### Part 2: Analyzing aligned sequences\n\n* Compare human homolog to the mouse\n * Compute a [log odds substitution matrix](http://biopython.org/DIST/docs/tutorial/Tutorial.html#sec390)\n * What are the log odds of the following polymorphisms?\n * Hydrophobic -> hydrophilic and vice versa\n * Aromatic -> non-aromatic and vice versa\n * Construct a more generalized PSSM for the above categories of \"penalized polymorphisms\"\n * For instance, we want to parse `R -> Y` and `S -> I` to `hydrophilic -> hydrophobic`\n\n## \"Homework\"\n\"Homework\" is a recommendedation. If you find yourself more interested in a different analysis, say in a comparison of variants **within** Homo sapiens, feel free to do that analysis instead.\n\n* Repeat analysis for each of the other domestic species (dog, horse, chicken, etc.)\n* Generate a \"generalized PSSM\" for the other types of penalized polymorphisms, such as `acidic -> basic`, `bulky -> small`, `aromatic -> non-aromatic`, etc.",
"_____no_output_____"
],
[
"We're using [Biopython](http://biopython.org/DIST/docs/tutorial/Tutorial.html) again. If the below import commands fail, you might need to install Biopython from the command line:\n```bash\npip install biopython\n\n# or using poetry\npoetry add biopython\n```",
"_____no_output_____"
]
],
[
[
"from Bio.Align import AlignInfo, MultipleSeqAlignment\nfrom Bio import AlignIO, Alphabet, SeqRecord, Seq, SubsMat",
"_____no_output_____"
]
],
[
[
"# Part #1\n____",
"_____no_output_____"
],
[
"# Read the alignment records\nWe use the Python function from the Biopython package: `Bio.AlignIO.read` to read the trimmed alignment file. This Python function reads the `*.txt` file in the `'fasta'` format and returns an instance of `Bio.Align.MultipleSeqAlignment` (documentation can be found [here](https://biopython.org/DIST/docs/api/Bio.Align.MultipleSeqAlignment-class.html) and [here](http://biopython.org/DIST/docs/tutorial/Tutorial.html#sec81)).",
"_____no_output_____"
]
],
[
[
"alignment = AlignIO.read(open('./trimmed_alg.txt'), format='fasta')\nalignment",
"_____no_output_____"
]
],
[
[
"Each element of this list-like instance is a sequence:",
"_____no_output_____"
]
],
[
[
"alignment[0] ",
"_____no_output_____"
]
],
[
[
"This instance of `Bio.Align.MultipleSeqAlignment` is a lot like a Python list. For instance, you can:",
"_____no_output_____"
]
],
[
[
"# get the number of sequences in this alignment\nprint(\"number of sequence records: \", len(alignment))\n\n# iterate over the sequence records in the alignment\nrecord_counter = 0\nfor record in alignment:\n record_counter += 1\nprint(\"number of sequence records (a different way): \", record_counter)\n\n# get the 100th sequence record in the alignment\nprint(\"ID of the 100th sequence: \", alignment[99].id)",
"number of sequence records: 9757\nnumber of sequence records (a different way): 9757\nID of the 100th sequence: 9796.ENSECAP00000016722\n"
]
],
[
[
"# Filter the sequences in the alignment\nFor now, we're only interested in \"domestic species,\" or species whose scientific name is in the Python list `domestic_sp_names`:",
"_____no_output_____"
]
],
[
[
"domestic_sp_names = [\n 'Homo sapiens', # human\n 'Mus musculus', # mouse\n 'Canis lupus familiaris', # dog\n 'Felis catus', # cat\n 'Bos taurus', # cattle\n 'Equus caballus', # horse\n 'Gallus gallus' # chicken\n]",
"_____no_output_____"
]
],
[
[
"The sequences in the `Bio.Align.MultipleSeqAlignment` are for **all** the species that EggNOG could find, including worms, polar bears, and other species that we're not interested in.\n\nLet's filter out sequences from species whose names are **not** in the list `domestic_sp_names`. To do this, we will:\n1. Get the scientific name for each species, and load it into the `description` attribute of each sequence. This should be familiar from the [descriptive stats demo](../descriptive_stats_demo/eggNOG_alignment_metadata.ipynb).\n2. Use a [list comprehension](https://github.com/wilfredinni/python-cheatsheet#list-comprehension) to get the list of sequences for species that we are interested in.\n3. The above step will generate a Python list; it will need to be converted to an instance of `Bio.Align.MultipleSeqAlignment` if we want to use fancy Biopython analysis tools on it.",
"_____no_output_____"
],
[
"## Step #1: Get scientific name for each species\nThis should be familiar from the [descriptive stats demo](../descriptive_stats_demo/eggNOG_alignment_metadata.ipynb).",
"_____no_output_____"
]
],
[
[
"!ls",
" 20200706_seq_sim-checkpoint.ipynb \"Ty's Playbook.ipynb\" tree.txt\n 20200706_seq_sim.ipynb\t\t extended_members.txt trimmed_alg.txt\n 20200713_seq_sim.ipynb\t\t raw_alg.txt\n"
],
[
"tmprss2_ext = pd.read_table('../seq_sim_demo/extended_members.txt', header=None)\ntmprss2_ext.columns = ['id_1', 'id_2', 'species', '', '']\ntmprss2_ext.head()",
"_____no_output_____"
],
[
"for record in alignment:\n \n # while we're at it, let's make sure that Biopython knows these\n # are protein sequences\n record.seq.alphabet = Alphabet.generic_protein\n \n # from visual inspection we know the name format is XXXX.unique_id,\n # so we split on \".\" and take the last element of the list\n id_code = record.id.split('.')[-1]\n \n # reference the metadata to get the species name\n sp_name = tmprss2_ext[tmprss2_ext['id_1'] == id_code]['species'].values\n \n try:\n sp_name = sp_name.item()\n except ValueError:\n sp_name = None\n \n # assign the species name to the record's description attribute\n record.description = sp_name",
"_____no_output_____"
]
],
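[
[
"As an aside, the `try`/`except ValueError` around `.item()` implements an \"exactly one match, else `None`\" rule: `.item()` raises `ValueError` unless the filtered array holds a single value. The same pattern in a dependency-free sketch (the lookup table and names below are invented for illustration):\n\n```python
# toy metadata table standing in for the real table (made up for this sketch)
toy_metadata = {'ENSP001': 'Homo sapiens', 'ENSM001': 'Mus musculus'}

def lookup_species(id_code):
    '''Return the species for id_code, or None if there is no single match.'''
    matches = [name for key, name in toy_metadata.items() if key == id_code]
    try:
        (species,) = matches  # unpacking fails unless exactly one match
    except ValueError:
        species = None
    return species

print(lookup_species('ENSP001'))  # Homo sapiens
print(lookup_species('missing'))  # None
```",
"_____no_output_____"
]
],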
[
[
"## Step #2: Use a list comprehension to filter to domestic species",
"_____no_output_____"
]
],
[
[
"dom_aln_list = [record for record in alignment\n if record.description in domestic_sp_names]",
"_____no_output_____"
]
],
[
[
"We see that the length of this filtered list is much shorter:",
"_____no_output_____"
]
],
[
[
"print(\"number of records for all species:\", len(alignment))\nprint(\"number of records for domestic species:\", len(dom_aln_list))",
"number of records for all species: 9757\nnumber of records for domestic species: 732\n"
]
],
[
[
"## Step #3: Convert this list to a new `MultipleSeqAlignment` instance",
"_____no_output_____"
]
],
[
[
"dom_aln = MultipleSeqAlignment(dom_aln_list)",
"_____no_output_____"
]
],
[
[
"`dom_aln` has the same data, but is a different type of Python variable:",
"_____no_output_____"
]
],
[
[
"print(\"dom_aln_list is type:\", type(dom_aln_list))\nprint(\"dom_aln is type:\", type(dom_aln))",
"dom_aln_list is type: <class 'list'>\ndom_aln is type: <class 'Bio.Align.MultipleSeqAlignment'>\n"
]
],
[
[
"# Get the sequence of human TMPRSS2\nBefore we start comparing sequences to each other, let's get the sequence of TMPRSS2 in `Homo sapiens`. This is the sequence that we will compare other species' homologs to.\n\nTo do this filtering, let's use a list comprehension, then convert to a `MultipleSeqAlignment`, just like we did before:",
"_____no_output_____"
]
],
[
[
"human_aln_list = [\n record for record in dom_aln\n if record.description == 'Homo sapiens'\n]\nhuman_aln = MultipleSeqAlignment(human_aln_list)",
"_____no_output_____"
]
],
[
[
"We see that there are many records in the alignment that have `Homo sapiens` as the species:",
"_____no_output_____"
]
],
[
[
"len(human_aln)",
"_____no_output_____"
]
],
[
[
"It would be interesting to look at the differences among these 118 variants _within_ the human species, but let's move on to our inter-species analysis for this demo.\n\n## Get the sequence of human isoform 2\n\nLet's find the sequence record that has the same sequence as isoform 2 on the [TMPRSS2 UniProt page](https://www.uniprot.org/uniprot/O15393#O15393-1). The first few residues of this isoform are `MPPAPPGG`:",
"_____no_output_____"
]
],
[
[
"isoform_aln_list = [\n record for record in human_aln\n if 'MPPAPPGG' in str(record.seq).replace(\"-\", \"\")\n]",
"_____no_output_____"
],
[
"print(\"number of human sequences that contain MPPAPPGG:\", len(isoform_aln_list))\nhuman_iso2 = isoform_aln_list[0]\nhuman_iso2",
"number of human sequences that contain MPPAPPGG: 1\n"
]
],
[
[
"This is an aligned sequence, so it has a lot of `-` characters that signify residues that are missing relative to other sequences in `alignment`:",
"_____no_output_____"
]
],
[
[
"str(human_iso2.seq)",
"_____no_output_____"
]
],
[
[
"We can remove these characters using Python's string replacement method, allowing us to more easily look at the amino acid sequence:",
"_____no_output_____"
]
],
[
[
"str(human_iso2.seq).replace('-', '')",
"_____no_output_____"
]
],
[
[
"We also notice that most of the sequence of interest is in the middle of the aligned sequence. Let's trim the aligned sequence to generate a compact aligned sequence that starts with `MPPAPP` and ends with `ADG`. To do this, we will make use of the [`str.index`](https://docs.python.org/2/library/stdtypes.html?highlight=index#str.index) method:",
"_____no_output_____"
]
],
[
[
"index_nterm = str(human_iso2.seq).index('MPPAPP')\nindex_cterm = str(human_iso2.seq).index('ADG')\n\n# since we want to cut at ADG^, not ^ADG, we add 3 characters to this index\nindex_cterm += 3\n\nprint(\"index of N-terminus:\", index_nterm)\nprint(\"index of C-terminus:\", index_cterm)",
"index of N-terminus: 33713\nindex of C-terminus: 38856\n"
]
],
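[
[
"The `+ 3` works because Python slices exclude the end index: `s[i:j]` stops just before position `j`. A tiny standalone sketch with a toy string (not the real alignment):\n\n```python
s = 'xxMPPAPPyyyADGzz'

start = s.index('MPPAPP')  # index of the first residue to keep
end = s.index('ADG') + 3   # +3 so the slice includes all of 'ADG'

print(s[start:end])  # MPPAPPyyyADG
```",
"_____no_output_____"
]
],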
[
[
"We can use these indices to trim to the compact sequence:",
"_____no_output_____"
]
],
[
[
"human_compact = human_iso2[index_nterm:index_cterm]\nstr(human_compact.seq)",
"_____no_output_____"
]
],
[
[
"These N-terminus and C-terminus indices will be useful when we want to trim sequence records for other species.",
"_____no_output_____"
],
[
"# Generate a consensus sequence for the mouse homolog\n\nJust like the sequence records for `Homo sapiens`, the records for the other `domestic_sp_names` have duplicates. For example, let's look at `Mus musculus`:",
"_____no_output_____"
]
],
[
[
"mouse_aln_list = [\n record for record in dom_aln\n if record.description == 'Mus musculus'\n]\nmouse_aln = MultipleSeqAlignment(mouse_aln_list)",
"_____no_output_____"
],
[
"len(mouse_aln)",
"_____no_output_____"
]
],
[
[
"Let's compare one sequence, instead of all 146 variants, of the mouse homolog to the human homolog. To do this, we will generate a **consensus sequence** ([Wikipedia](https://en.wikipedia.org/wiki/Consensus_sequence)) for the mouse variants. We do this in 2 steps:\n1. Generate a `Bio.Align.AlignInfo.SummaryInfo` instance from the `MultipleSeqAlignment`\n2. Call the `SummaryInfo` method `dumb_consensus`, which runs a very simple consensus sequence finding algorithm.",
"_____no_output_____"
],
[
"## Step #1",
"_____no_output_____"
]
],
[
[
"mouse_aln_summary = AlignInfo.SummaryInfo(mouse_aln)\nmouse_aln_summary",
"_____no_output_____"
]
],
[
[
"## Step #2",
"_____no_output_____"
]
],
[
[
"mouse_aln_consensus = mouse_aln_summary.dumb_consensus()\nmouse_aln_consensus",
"_____no_output_____"
]
],
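[
[
"Under the hood, `dumb_consensus` does a simple per-column majority vote: if the most common residue reaches a threshold fraction of the column (0.7 by default), it is kept; otherwise an ambiguous `X` is emitted. A plain-Python sketch of that idea, using made-up toy sequences (an illustration of the concept, not Biopython's exact implementation):\n\n```python
def toy_dumb_consensus(seqs, threshold=0.7, ambiguous='X'):
    '''Per-column majority-vote consensus over equal-length sequences.'''
    consensus = []
    for column in zip(*seqs):
        best = max(set(column), key=column.count)  # most frequent residue
        if column.count(best) / len(column) >= threshold:
            consensus.append(best)
        else:
            consensus.append(ambiguous)
    return ''.join(consensus)

# last column is only 2/3 V, which is below 0.7, so it becomes X
print(toy_dumb_consensus(['MKV', 'MKL', 'MKV']))  # MKX
```",
"_____no_output_____"
]
],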
[
[
"Let's use the N-terminus and C-terminus locations that we calculated above to compact this consensus sequence:",
"_____no_output_____"
]
],
[
[
"mouse_consensus_compact = mouse_aln_consensus[index_nterm:index_cterm]\nstr(mouse_consensus_compact).replace('X', '-')",
"_____no_output_____"
]
],
[
[
"Finally, this consensus sequence is a `Seq`, not a `SeqRecord`. Let's convert it to a `SeqRecord` so we can compare it to the human sequence:",
"_____no_output_____"
]
],
[
[
"# convert 'X' to '-' for consistency with human sequence\n# and convert to a Seq.Seq instance\nmouse_replaced_str = str(mouse_consensus_compact).replace('X', '-')\nmouse_consensus_replaced = Seq.Seq(mouse_replaced_str)\n\n# then convert to a SeqRecord.SeqRecord instance\nmouse_record_compact = SeqRecord.SeqRecord(mouse_consensus_replaced, description='Mus musculus', name='dumb_consensus')\nmouse_record_compact",
"_____no_output_____"
]
],
[
[
"# Part 2: the fun stuff\n**Finally**, we have human TMPRSS2 and a consensus sequence for mouse TMPRSS2. The sequences are aligned and ready for some more advanced analysis with the help of Biopython.\n\nLet's start looking at ways we can compare the two sequences. To start, we will answer the question:\n\n**At every location in the sequence, what is the percent probability that this position will be a tyrosine (Y), leucine (L), or any other amino acid?**\n\nTo do this, we will calculate a [**position specific score matrix**](https://www.cs.rice.edu/~ogilvie/comp571/2018/09/11/pssm.html) (PSSM). Let's generate a new, very short `MultipleSeqAlignment` between our human and mouse sequences:",
"_____no_output_____"
]
],
[
[
"hum_mouse_aln = MultipleSeqAlignment([human_compact, mouse_record_compact])\nhum_mouse_aln",
"_____no_output_____"
]
],
[
[
"Now we can generate a `SummaryInfo` instance like we did before, and calculate the PSSM:",
"_____no_output_____"
]
],
[
[
"hum_mouse_summary = AlignInfo.SummaryInfo(hum_mouse_aln)\nhum_mouse_summary",
"_____no_output_____"
],
[
"hum_mouse_pssm = hum_mouse_summary.pos_specific_score_matrix(human_compact)\nhum_mouse_pssm",
"_____no_output_____"
]
],
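[
[
"To build intuition for what `pos_specific_score_matrix` computes, here is a minimal pure-Python sketch that pairs each residue of a reference sequence with per-column residue counts (the function and toy sequences are invented for this illustration):\n\n```python
from collections import Counter

def toy_pssm(reference, other):
    '''List of (reference residue, residue counts at that column).'''
    rows = []
    for ref_res, other_res in zip(reference, other):
        rows.append((ref_res, dict(Counter([ref_res, other_res]))))
    return rows

for ref_res, counts in toy_pssm('MKV', 'MKL'):
    print(ref_res, counts)
# M {'M': 2}
# K {'K': 2}
# V {'V': 1, 'L': 1}
```",
"_____no_output_____"
]
],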
[
[
"We can look at the data in the PSSM by inspecting the `pssm` attribute.\n\nThe PSSM is a Python list, where each element is a [tuple](https://github.com/wilfredinni/python-cheatsheet#tuple-data-type) of length 2. The first element of the tuple is the amino acid in the human sequence, and the second element is a Python [dictionary](https://github.com/wilfredinni/python-cheatsheet#dictionaries-and-structuring-data). The dictionary keys are all the naturally occurring amino acids, and the values are the number of times that amino acid was found at that position in the alignment.",
"_____no_output_____"
],
[
"## At which positions are the sequences identical?\nTo answer this question, we will use a familiar [for loop](https://github.com/wilfredinni/python-cheatsheet#for-loops-and-the-range-function). When we encounter a `-` in the first element of the `position` tuple, this means that the human sequence had a `-` character at that position. `-` is not an amino acid, so we skip these positions and move on using the [continue statement](https://github.com/wilfredinni/python-cheatsheet#continue-statements).\n\nIn the `print` statement at the end of the cell, we also make use of [formatted strings](https://github.com/wilfredinni/python-cheatsheet#formatted-string-literals-or-f-strings-python-36) in Python 3.6.",
"_____no_output_____"
]
],
[
[
"# we want to keep track of which amino acid our\n# \"cursor\" is on in the for loop\nposition_counter = 0\n\nfor position in hum_mouse_pssm.pssm:\n \n # `position` is the 2-element tuple\n # let's give each element a useful name\n resi_in_human = position[0]\n resi_dict = position[1]\n \n # skip this position if it is a '-'\n # in the human sequence record\n if resi_in_human == '-':\n continue\n else:\n # increment the counter by 1\n position_counter += 1 \n \n # if more than one instance of amino acid\n # `resi_in_human` was found at this position,\n # meaning that the mouse homolog has the same amino acid\n if resi_dict[resi_in_human] == 2:\n print(f\"mouse and human are the same at position \" +\n f\"{position_counter}, which is amino acid {resi_in_human}\")",
"mouse and human are the same at position 16, which is amino acid G\nmouse and human are the same at position 43, which is amino acid G\nmouse and human are the same at position 47, which is amino acid A\nmouse and human are the same at position 82, which is amino acid Y\nmouse and human are the same at position 100, which is amino acid P\nmouse and human are the same at position 106, which is amino acid P\nmouse and human are the same at position 153, which is amino acid S\nmouse and human are the same at position 157, which is amino acid C\nmouse and human are the same at position 162, which is amino acid T\nmouse and human are the same at position 170, which is amino acid C\nmouse and human are the same at position 172, which is amino acid G\nmouse and human are the same at position 176, which is amino acid C\nmouse and human are the same at position 179, which is amino acid G\nmouse and human are the same at position 205, which is amino acid W\nmouse and human are the same at position 222, which is amino acid C\nmouse and human are the same at position 281, which is amino acid C\nmouse and human are the same at position 293, which is amino acid I\nmouse and human are the same at position 295, which is amino acid G\nmouse and human are the same at position 296, which is amino acid G\nmouse and human are the same at position 305, which is amino acid P\nmouse and human are the same at position 306, which is amino acid W\nmouse and human are the same at position 307, which is amino acid Q\nmouse and human are the same at position 309, which is amino acid S\nmouse and human are the same at position 310, which is amino acid L\nmouse and human are the same at position 316, which is amino acid H\nmouse and human are the same at position 318, which is amino acid C\nmouse and human are the same at position 319, which is amino acid G\nmouse and human are the same at position 320, which is amino acid G\nmouse and human are the same at position 323, which is amino acid 
I\nmouse and human are the same at position 327, which is amino acid W\nmouse and human are the same at position 331, which is amino acid A\nmouse and human are the same at position 332, which is amino acid A\nmouse and human are the same at position 333, which is amino acid H\nmouse and human are the same at position 334, which is amino acid C\nmouse and human are the same at position 350, which is amino acid G\nmouse and human are the same at position 371, which is amino acid H\nmouse and human are the same at position 382, which is amino acid D\nmouse and human are the same at position 383, which is amino acid I\nmouse and human are the same at position 384, which is amino acid A\nmouse and human are the same at position 385, which is amino acid L\nmouse and human are the same at position 388, which is amino acid L\nmouse and human are the same at position 402, which is amino acid C\nmouse and human are the same at position 403, which is amino acid L\nmouse and human are the same at position 404, which is amino acid P\nmouse and human are the same at position 416, which is amino acid C\nmouse and human are the same at position 420, which is amino acid G\nmouse and human are the same at position 421, which is amino acid W\nmouse and human are the same at position 422, which is amino acid G\nmouse and human are the same at position 434, which is amino acid L\nmouse and human are the same at position 447, which is amino acid C\nmouse and human are the same at position 461, which is amino acid M\nmouse and human are the same at position 463, which is amino acid C\nmouse and human are the same at position 464, which is amino acid A\nmouse and human are the same at position 465, which is amino acid G\nmouse and human are the same at position 467, which is amino acid L\nmouse and human are the same at position 469, which is amino acid G\nmouse and human are the same at position 472, which is amino acid D\nmouse and human are the same at position 474, which is amino 
acid C\nmouse and human are the same at position 476, which is amino acid G\nmouse and human are the same at position 477, which is amino acid D\nmouse and human are the same at position 478, which is amino acid S\nmouse and human are the same at position 479, which is amino acid G\nmouse and human are the same at position 480, which is amino acid G\nmouse and human are the same at position 481, which is amino acid P\nmouse and human are the same at position 482, which is amino acid L\nmouse and human are the same at position 483, which is amino acid V\nmouse and human are the same at position 490, which is amino acid W\nmouse and human are the same at position 494, which is amino acid G\nmouse and human are the same at position 497, which is amino acid S\nmouse and human are the same at position 498, which is amino acid W\nmouse and human are the same at position 499, which is amino acid G\nmouse and human are the same at position 502, which is amino acid C\nmouse and human are the same at position 508, which is amino acid P\nmouse and human are the same at position 509, which is amino acid G\nmouse and human are the same at position 510, which is amino acid V\nmouse and human are the same at position 511, which is amino acid Y\nmouse and human are the same at position 514, which is amino acid V\nmouse and human are the same at position 520, which is amino acid W\nmouse and human are the same at position 521, which is amino acid I\n"
]
],
[
[
"To make sure our `position_counter` variable is working properly, let's double check that the length of the human sequence (without `-` characters) is indeed 529:",
"_____no_output_____"
]
],
[
[
"# position counter from the above for loop\nprint(f\"the human sequence is {position_counter} amino acids long\")\n\n# calling len(str)\nlength_a_different_way = len(str(hum_mouse_aln[0].seq).replace('-', ''))\nprint(f\"the human sequence is {length_a_different_way} amino acids long\")",
"the human sequence is 529 amino acids long\nthe human sequence is 529 amino acids long\n"
]
],
[
[
"We see that `position_counter` appears to be working as expected!",
"_____no_output_____"
],
[
"## At which positions are amino acids different?\nThe more interesting question is how these sequences differ. We can use a similar for loop to address this question:",
"_____no_output_____"
]
],
[
[
"# we want to keep track of which amino acid our\n# \"cursor\" is on in the for loop\nposition_counter = 0\n\nlist_to_store_diff = list()\n\nfor position in hum_mouse_pssm.pssm:\n \n # `position` is the 2-element tuple\n # let's give each element a useful name\n resi_in_human = position[0]\n resi_dict = position[1]\n \n # skip this position if it is a '-'\n # in the human sequence record\n if resi_in_human == '-':\n continue\n else:\n # increment the counter by 1\n position_counter += 1\n \n # if only one instance of amino acid\n # `resi_in_human` was found at this position,\n # the mouse homolog has a different amino acid here\n if resi_dict[resi_in_human] != 2:\n print(f\"mouse and human differ at position \" +\n f\"{position_counter}, where the human amino acid is {resi_in_human}\")\n # list_to_store_diff.append(position_counter)",
"mouse and human are the same at position 1, which is amino acid M\nmouse and human are the same at position 2, which is amino acid P\nmouse and human are the same at position 3, which is amino acid P\nmouse and human are the same at position 4, which is amino acid A\nmouse and human are the same at position 5, which is amino acid P\nmouse and human are the same at position 6, which is amino acid P\nmouse and human are the same at position 7, which is amino acid G\nmouse and human are the same at position 8, which is amino acid G\nmouse and human are the same at position 9, which is amino acid E\nmouse and human are the same at position 10, which is amino acid S\nmouse and human are the same at position 11, which is amino acid G\nmouse and human are the same at position 12, which is amino acid C\nmouse and human are the same at position 13, which is amino acid E\nmouse and human are the same at position 14, which is amino acid E\nmouse and human are the same at position 15, which is amino acid R\nmouse and human are the same at position 17, which is amino acid A\nmouse and human are the same at position 18, which is amino acid A\nmouse and human are the same at position 19, which is amino acid G\nmouse and human are the same at position 20, which is amino acid H\nmouse and human are the same at position 21, which is amino acid I\nmouse and human are the same at position 22, which is amino acid E\nmouse and human are the same at position 23, which is amino acid H\nmouse and human are the same at position 24, which is amino acid S\nmouse and human are the same at position 25, which is amino acid R\nmouse and human are the same at position 26, which is amino acid Y\nmouse and human are the same at position 27, which is amino acid L\nmouse and human are the same at position 28, which is amino acid S\nmouse and human are the same at position 29, which is amino acid L\nmouse and human are the same at position 30, which is amino acid L\nmouse and human are the same at 
position 31, which is amino acid D\nmouse and human are the same at position 32, which is amino acid A\nmouse and human are the same at position 33, which is amino acid V\nmouse and human are the same at position 34, which is amino acid D\nmouse and human are the same at position 35, which is amino acid N\nmouse and human are the same at position 36, which is amino acid S\nmouse and human are the same at position 37, which is amino acid K\nmouse and human are the same at position 38, which is amino acid M\nmouse and human are the same at position 39, which is amino acid A\nmouse and human are the same at position 40, which is amino acid L\nmouse and human are the same at position 41, which is amino acid N\nmouse and human are the same at position 42, which is amino acid S\nmouse and human are the same at position 44, which is amino acid S\nmouse and human are the same at position 45, which is amino acid P\nmouse and human are the same at position 46, which is amino acid P\nmouse and human are the same at position 48, which is amino acid I\nmouse and human are the same at position 49, which is amino acid G\nmouse and human are the same at position 50, which is amino acid P\nmouse and human are the same at position 51, which is amino acid Y\nmouse and human are the same at position 52, which is amino acid Y\nmouse and human are the same at position 53, which is amino acid E\nmouse and human are the same at position 54, which is amino acid N\nmouse and human are the same at position 55, which is amino acid H\nmouse and human are the same at position 56, which is amino acid G\nmouse and human are the same at position 57, which is amino acid Y\nmouse and human are the same at position 58, which is amino acid Q\nmouse and human are the same at position 59, which is amino acid P\nmouse and human are the same at position 60, which is amino acid E\nmouse and human are the same at position 61, which is amino acid N\nmouse and human are the same at position 62, which is amino 
acid P\nmouse and human are the same at position 63, which is amino acid Y\nmouse and human are the same at position 64, which is amino acid P\nmouse and human are the same at position 65, which is amino acid A\nmouse and human are the same at position 66, which is amino acid Q\nmouse and human are the same at position 67, which is amino acid P\nmouse and human are the same at position 68, which is amino acid T\nmouse and human are the same at position 69, which is amino acid V\nmouse and human are the same at position 70, which is amino acid V\nmouse and human are the same at position 71, which is amino acid P\nmouse and human are the same at position 72, which is amino acid T\nmouse and human are the same at position 73, which is amino acid V\nmouse and human are the same at position 74, which is amino acid Y\nmouse and human are the same at position 75, which is amino acid E\nmouse and human are the same at position 76, which is amino acid V\nmouse and human are the same at position 77, which is amino acid H\nmouse and human are the same at position 78, which is amino acid P\nmouse and human are the same at position 79, which is amino acid A\nmouse and human are the same at position 80, which is amino acid Q\nmouse and human are the same at position 81, which is amino acid Y\nmouse and human are the same at position 83, which is amino acid P\nmouse and human are the same at position 84, which is amino acid S\nmouse and human are the same at position 85, which is amino acid P\nmouse and human are the same at position 86, which is amino acid V\nmouse and human are the same at position 87, which is amino acid P\nmouse and human are the same at position 88, which is amino acid Q\nmouse and human are the same at position 89, which is amino acid Y\nmouse and human are the same at position 90, which is amino acid A\nmouse and human are the same at position 91, which is amino acid P\nmouse and human are the same at position 92, which is amino acid R\nmouse and human are 
the same at position 93, which is amino acid V\nmouse and human are the same at position 94, which is amino acid L\nmouse and human are the same at position 95, which is amino acid T\nmouse and human are the same at position 96, which is amino acid Q\nmouse and human are the same at position 97, which is amino acid A\nmouse and human are the same at position 98, which is amino acid S\nmouse and human are the same at position 99, which is amino acid N\nmouse and human are the same at position 101, which is amino acid V\nmouse and human are the same at position 102, which is amino acid V\nmouse and human are the same at position 103, which is amino acid C\nmouse and human are the same at position 104, which is amino acid T\nmouse and human are the same at position 105, which is amino acid Q\nmouse and human are the same at position 107, which is amino acid K\nmouse and human are the same at position 108, which is amino acid S\nmouse and human are the same at position 109, which is amino acid P\nmouse and human are the same at position 110, which is amino acid S\nmouse and human are the same at position 111, which is amino acid G\nmouse and human are the same at position 112, which is amino acid T\nmouse and human are the same at position 113, which is amino acid V\nmouse and human are the same at position 114, which is amino acid C\nmouse and human are the same at position 115, which is amino acid T\nmouse and human are the same at position 116, which is amino acid S\nmouse and human are the same at position 117, which is amino acid K\nmouse and human are the same at position 118, which is amino acid T\nmouse and human are the same at position 119, which is amino acid K\nmouse and human are the same at position 120, which is amino acid K\nmouse and human are the same at position 121, which is amino acid A\nmouse and human are the same at position 122, which is amino acid L\nmouse and human are the same at position 123, which is amino acid C\nmouse and human are the 
same at position 124, which is amino acid I\nmouse and human are the same at position 125, which is amino acid T\nmouse and human are the same at position 126, which is amino acid L\nmouse and human are the same at position 127, which is amino acid T\nmouse and human are the same at position 128, which is amino acid L\nmouse and human are the same at position 129, which is amino acid G\nmouse and human are the same at position 130, which is amino acid T\nmouse and human are the same at position 131, which is amino acid F\nmouse and human are the same at position 132, which is amino acid L\nmouse and human are the same at position 133, which is amino acid V\nmouse and human are the same at position 134, which is amino acid G\nmouse and human are the same at position 135, which is amino acid A\nmouse and human are the same at position 136, which is amino acid A\nmouse and human are the same at position 137, which is amino acid L\nmouse and human are the same at position 138, which is amino acid A\nmouse and human are the same at position 139, which is amino acid A\nmouse and human are the same at position 140, which is amino acid G\nmouse and human are the same at position 141, which is amino acid L\nmouse and human are the same at position 142, which is amino acid L\nmouse and human are the same at position 143, which is amino acid W\nmouse and human are the same at position 144, which is amino acid K\nmouse and human are the same at position 145, which is amino acid F\nmouse and human are the same at position 146, which is amino acid M\nmouse and human are the same at position 147, which is amino acid G\nmouse and human are the same at position 148, which is amino acid S\nmouse and human are the same at position 149, which is amino acid K\nmouse and human are the same at position 150, which is amino acid C\nmouse and human are the same at position 151, which is amino acid S\nmouse and human are the same at position 152, which is amino acid N\nmouse and human are 
the same at position 154, which is amino acid G\nmouse and human are the same at position 155, which is amino acid I\nmouse and human are the same at position 156, which is amino acid E\nmouse and human are the same at position 158, which is amino acid D\nmouse and human are the same at position 159, which is amino acid S\nmouse and human are the same at position 160, which is amino acid S\nmouse and human are the same at position 161, which is amino acid G\nmouse and human are the same at position 163, which is amino acid C\nmouse and human are the same at position 164, which is amino acid I\nmouse and human are the same at position 165, which is amino acid N\nmouse and human are the same at position 166, which is amino acid P\nmouse and human are the same at position 167, which is amino acid S\nmouse and human are the same at position 168, which is amino acid N\nmouse and human are the same at position 169, which is amino acid W\nmouse and human are the same at position 171, which is amino acid D\nmouse and human are the same at position 173, which is amino acid V\nmouse and human are the same at position 174, which is amino acid S\nmouse and human are the same at position 175, which is amino acid H\nmouse and human are the same at position 177, which is amino acid P\nmouse and human are the same at position 178, which is amino acid G\nmouse and human are the same at position 180, which is amino acid E\nmouse and human are the same at position 181, which is amino acid D\nmouse and human are the same at position 182, which is amino acid E\nmouse and human are the same at position 183, which is amino acid N\nmouse and human are the same at position 184, which is amino acid R\nmouse and human are the same at position 185, which is amino acid C\nmouse and human are the same at position 186, which is amino acid V\nmouse and human are the same at position 187, which is amino acid R\nmouse and human are the same at position 188, which is amino acid L\nmouse and human 
are the same at position 189, which is amino acid Y\nmouse and human are the same at position 190, which is amino acid G\nmouse and human are the same at position 191, which is amino acid P\nmouse and human are the same at position 192, which is amino acid N\nmouse and human are the same at position 193, which is amino acid F\nmouse and human are the same at position 194, which is amino acid I\nmouse and human are the same at position 195, which is amino acid L\nmouse and human are the same at position 196, which is amino acid Q\nmouse and human are the same at position 197, which is amino acid V\nmouse and human are the same at position 198, which is amino acid Y\nmouse and human are the same at position 199, which is amino acid S\nmouse and human are the same at position 200, which is amino acid S\nmouse and human are the same at position 201, which is amino acid Q\nmouse and human are the same at position 202, which is amino acid R\nmouse and human are the same at position 203, which is amino acid K\nmouse and human are the same at position 204, which is amino acid S\nmouse and human are the same at position 206, which is amino acid H\nmouse and human are the same at position 207, which is amino acid P\nmouse and human are the same at position 208, which is amino acid V\nmouse and human are the same at position 209, which is amino acid C\nmouse and human are the same at position 210, which is amino acid Q\nmouse and human are the same at position 211, which is amino acid D\nmouse and human are the same at position 212, which is amino acid D\nmouse and human are the same at position 213, which is amino acid W\nmouse and human are the same at position 214, which is amino acid N\nmouse and human are the same at position 215, which is amino acid E\nmouse and human are the same at position 216, which is amino acid N\nmouse and human are the same at position 217, which is amino acid Y\nmouse and human are the same at position 218, which is amino acid G\nmouse and 
human are the same at position 219, which is amino acid R\nmouse and human are the same at position 220, which is amino acid A\nmouse and human are the same at position 221, which is amino acid A\nmouse and human are the same at position 223, which is amino acid R\nmouse and human are the same at position 224, which is amino acid D\nmouse and human are the same at position 225, which is amino acid M\nmouse and human are the same at position 226, which is amino acid G\nmouse and human are the same at position 227, which is amino acid Y\nmouse and human are the same at position 228, which is amino acid K\nmouse and human are the same at position 229, which is amino acid N\nmouse and human are the same at position 230, which is amino acid N\nmouse and human are the same at position 231, which is amino acid F\nmouse and human are the same at position 232, which is amino acid Y\nmouse and human are the same at position 233, which is amino acid S\nmouse and human are the same at position 234, which is amino acid S\nmouse and human are the same at position 235, which is amino acid Q\nmouse and human are the same at position 236, which is amino acid G\nmouse and human are the same at position 237, which is amino acid I\nmouse and human are the same at position 238, which is amino acid V\nmouse and human are the same at position 239, which is amino acid D\nmouse and human are the same at position 240, which is amino acid D\nmouse and human are the same at position 241, which is amino acid S\nmouse and human are the same at position 242, which is amino acid G\nmouse and human are the same at position 243, which is amino acid S\nmouse and human are the same at position 244, which is amino acid T\nmouse and human are the same at position 245, which is amino acid S\nmouse and human are the same at position 246, which is amino acid F\nmouse and human are the same at position 247, which is amino acid M\nmouse and human are the same at position 248, which is amino acid K\nmouse 
and human are the same at position 249, which is amino acid L\nmouse and human are the same at position 250, which is amino acid N\nmouse and human are the same at position 251, which is amino acid T\nmouse and human are the same at position 252, which is amino acid S\nmouse and human are the same at position 253, which is amino acid A\nmouse and human are the same at position 254, which is amino acid G\nmouse and human are the same at position 255, which is amino acid N\nmouse and human are the same at position 256, which is amino acid V\nmouse and human are the same at position 257, which is amino acid D\nmouse and human are the same at position 258, which is amino acid I\nmouse and human are the same at position 259, which is amino acid Y\nmouse and human are the same at position 260, which is amino acid K\nmouse and human are the same at position 261, which is amino acid K\nmouse and human are the same at position 262, which is amino acid L\nmouse and human are the same at position 263, which is amino acid Y\nmouse and human are the same at position 264, which is amino acid H\nmouse and human are the same at position 265, which is amino acid S\nmouse and human are the same at position 266, which is amino acid D\nmouse and human are the same at position 267, which is amino acid A\nmouse and human are the same at position 268, which is amino acid C\nmouse and human are the same at position 269, which is amino acid S\nmouse and human are the same at position 270, which is amino acid S\nmouse and human are the same at position 271, which is amino acid K\nmouse and human are the same at position 272, which is amino acid A\nmouse and human are the same at position 273, which is amino acid V\nmouse and human are the same at position 274, which is amino acid V\nmouse and human are the same at position 275, which is amino acid S\nmouse and human are the same at position 276, which is amino acid L\nmouse and human are the same at position 277, which is amino acid 
R\nmouse and human are the same at position 278, which is amino acid C\nmouse and human are the same at position 279, which is amino acid I\nmouse and human are the same at position 280, which is amino acid A\nmouse and human are the same at position 282, which is amino acid G\nmouse and human are the same at position 283, which is amino acid V\nmouse and human are the same at position 284, which is amino acid N\nmouse and human are the same at position 285, which is amino acid L\nmouse and human are the same at position 286, which is amino acid N\nmouse and human are the same at position 287, which is amino acid S\nmouse and human are the same at position 288, which is amino acid S\nmouse and human are the same at position 289, which is amino acid R\nmouse and human are the same at position 290, which is amino acid Q\nmouse and human are the same at position 291, which is amino acid S\nmouse and human are the same at position 292, which is amino acid R\nmouse and human are the same at position 294, which is amino acid V\nmouse and human are the same at position 297, which is amino acid E\nmouse and human are the same at position 298, which is amino acid S\nmouse and human are the same at position 299, which is amino acid A\nmouse and human are the same at position 300, which is amino acid L\nmouse and human are the same at position 301, which is amino acid P\nmouse and human are the same at position 302, which is amino acid G\nmouse and human are the same at position 303, which is amino acid A\nmouse and human are the same at position 304, which is amino acid W\nmouse and human are the same at position 308, which is amino acid V\nmouse and human are the same at position 311, which is amino acid H\nmouse and human are the same at position 312, which is amino acid V\nmouse and human are the same at position 313, which is amino acid Q\nmouse and human are the same at position 314, which is amino acid N\nmouse and human are the same at position 315, which is amino 
acid V\nmouse and human are the same at position 317, which is amino acid V\nmouse and human are the same at position 321, which is amino acid S\nmouse and human are the same at position 322, which is amino acid I\nmouse and human are the same at position 324, which is amino acid T\nmouse and human are the same at position 325, which is amino acid P\nmouse and human are the same at position 326, which is amino acid E\nmouse and human are the same at position 328, which is amino acid I\nmouse and human are the same at position 329, which is amino acid V\nmouse and human are the same at position 330, which is amino acid T\nmouse and human are the same at position 335, which is amino acid V\nmouse and human are the same at position 336, which is amino acid E\nmouse and human are the same at position 337, which is amino acid K\nmouse and human are the same at position 338, which is amino acid P\nmouse and human are the same at position 339, which is amino acid L\nmouse and human are the same at position 340, which is amino acid N\nmouse and human are the same at position 341, which is amino acid N\nmouse and human are the same at position 342, which is amino acid P\nmouse and human are the same at position 343, which is amino acid W\nmouse and human are the same at position 344, which is amino acid H\nmouse and human are the same at position 345, which is amino acid W\nmouse and human are the same at position 346, which is amino acid T\nmouse and human are the same at position 347, which is amino acid A\nmouse and human are the same at position 348, which is amino acid F\nmouse and human are the same at position 349, which is amino acid A\nmouse and human are the same at position 351, which is amino acid I\nmouse and human are the same at position 352, which is amino acid L\nmouse and human are the same at position 353, which is amino acid R\nmouse and human are the same at position 354, which is amino acid Q\nmouse and human are the same at position 355, which is 
amino acid S\nmouse and human are the same at position 356, which is amino acid F\nmouse and human are the same at position 357, which is amino acid M\nmouse and human are the same at position 358, which is amino acid F\nmouse and human are the same at position 359, which is amino acid Y\nmouse and human are the same at position 360, which is amino acid G\nmouse and human are the same at position 361, which is amino acid A\nmouse and human are the same at position 362, which is amino acid G\nmouse and human are the same at position 363, which is amino acid Y\nmouse and human are the same at position 364, which is amino acid Q\nmouse and human are the same at position 365, which is amino acid V\nmouse and human are the same at position 366, which is amino acid E\nmouse and human are the same at position 367, which is amino acid K\nmouse and human are the same at position 368, which is amino acid V\nmouse and human are the same at position 369, which is amino acid I\nmouse and human are the same at position 370, which is amino acid S\nmouse and human are the same at position 372, which is amino acid P\nmouse and human are the same at position 373, which is amino acid N\nmouse and human are the same at position 374, which is amino acid Y\nmouse and human are the same at position 375, which is amino acid D\nmouse and human are the same at position 376, which is amino acid S\nmouse and human are the same at position 377, which is amino acid K\nmouse and human are the same at position 378, which is amino acid T\nmouse and human are the same at position 379, which is amino acid K\nmouse and human are the same at position 380, which is amino acid N\nmouse and human are the same at position 381, which is amino acid N\nmouse and human are the same at position 386, which is amino acid M\nmouse and human are the same at position 387, which is amino acid K\nmouse and human are the same at position 389, which is amino acid Q\nmouse and human are the same at position 390, which 
is amino acid K\nmouse and human are the same at position 391, which is amino acid P\nmouse and human are the same at position 392, which is amino acid L\nmouse and human are the same at position 393, which is amino acid T\nmouse and human are the same at position 394, which is amino acid F\nmouse and human are the same at position 395, which is amino acid N\nmouse and human are the same at position 396, which is amino acid D\nmouse and human are the same at position 397, which is amino acid L\nmouse and human are the same at position 398, which is amino acid V\nmouse and human are the same at position 399, which is amino acid K\nmouse and human are the same at position 400, which is amino acid P\nmouse and human are the same at position 401, which is amino acid V\nmouse and human are the same at position 405, which is amino acid N\nmouse and human are the same at position 406, which is amino acid P\nmouse and human are the same at position 407, which is amino acid G\nmouse and human are the same at position 408, which is amino acid M\nmouse and human are the same at position 409, which is amino acid M\nmouse and human are the same at position 410, which is amino acid L\nmouse and human are the same at position 411, which is amino acid Q\nmouse and human are the same at position 412, which is amino acid P\nmouse and human are the same at position 413, which is amino acid E\nmouse and human are the same at position 414, which is amino acid Q\nmouse and human are the same at position 415, which is amino acid L\nmouse and human are the same at position 417, which is amino acid W\nmouse and human are the same at position 418, which is amino acid I\nmouse and human are the same at position 419, which is amino acid S\nmouse and human are the same at position 423, which is amino acid A\nmouse and human are the same at position 424, which is amino acid T\nmouse and human are the same at position 425, which is amino acid E\nmouse and human are the same at position 426, 
which is amino acid E\nmouse and human are the same at position 427, which is amino acid K\nmouse and human are the same at position 428, which is amino acid G\nmouse and human are the same at position 429, which is amino acid K\nmouse and human are the same at position 430, which is amino acid T\nmouse and human are the same at position 431, which is amino acid S\nmouse and human are the same at position 432, which is amino acid E\nmouse and human are the same at position 433, which is amino acid V\nmouse and human are the same at position 435, which is amino acid N\nmouse and human are the same at position 436, which is amino acid A\nmouse and human are the same at position 437, which is amino acid A\nmouse and human are the same at position 438, which is amino acid K\nmouse and human are the same at position 439, which is amino acid V\nmouse and human are the same at position 440, which is amino acid L\nmouse and human are the same at position 441, which is amino acid L\nmouse and human are the same at position 442, which is amino acid I\nmouse and human are the same at position 443, which is amino acid E\nmouse and human are the same at position 444, which is amino acid T\nmouse and human are the same at position 445, which is amino acid Q\nmouse and human are the same at position 446, which is amino acid R\nmouse and human are the same at position 448, which is amino acid N\nmouse and human are the same at position 449, which is amino acid S\nmouse and human are the same at position 450, which is amino acid R\nmouse and human are the same at position 451, which is amino acid Y\nmouse and human are the same at position 452, which is amino acid V\nmouse and human are the same at position 453, which is amino acid Y\nmouse and human are the same at position 454, which is amino acid D\nmouse and human are the same at position 455, which is amino acid N\nmouse and human are the same at position 456, which is amino acid L\nmouse and human are the same at position 
457, which is amino acid I\nmouse and human are the same at position 458, which is amino acid T\nmouse and human are the same at position 459, which is amino acid P\nmouse and human are the same at position 460, which is amino acid A\nmouse and human are the same at position 462, which is amino acid I\nmouse and human are the same at position 466, which is amino acid F\nmouse and human are the same at position 468, which is amino acid Q\nmouse and human are the same at position 470, which is amino acid N\nmouse and human are the same at position 471, which is amino acid V\nmouse and human are the same at position 473, which is amino acid S\nmouse and human are the same at position 475, which is amino acid Q\nmouse and human are the same at position 484, which is amino acid T\nmouse and human are the same at position 485, which is amino acid S\nmouse and human are the same at position 486, which is amino acid K\nmouse and human are the same at position 487, which is amino acid N\nmouse and human are the same at position 488, which is amino acid N\nmouse and human are the same at position 489, which is amino acid I\nmouse and human are the same at position 491, which is amino acid W\nmouse and human are the same at position 492, which is amino acid L\nmouse and human are the same at position 493, which is amino acid I\nmouse and human are the same at position 495, which is amino acid D\nmouse and human are the same at position 496, which is amino acid T\nmouse and human are the same at position 500, which is amino acid S\nmouse and human are the same at position 501, which is amino acid G\nmouse and human are the same at position 503, which is amino acid A\nmouse and human are the same at position 504, which is amino acid K\nmouse and human are the same at position 505, which is amino acid A\nmouse and human are the same at position 506, which is amino acid Y\nmouse and human are the same at position 507, which is amino acid R\nmouse and human are the same at 
position 512, which is amino acid G\nmouse and human are the same at position 513, which is amino acid N\nmouse and human are the same at position 515, which is amino acid M\nmouse and human are the same at position 516, which is amino acid V\nmouse and human are the same at position 517, which is amino acid F\nmouse and human are the same at position 518, which is amino acid T\nmouse and human are the same at position 519, which is amino acid D\nmouse and human are the same at position 522, which is amino acid Y\nmouse and human are the same at position 523, which is amino acid R\nmouse and human are the same at position 524, which is amino acid Q\nmouse and human are the same at position 525, which is amino acid M\nmouse and human are the same at position 526, which is amino acid R\nmouse and human are the same at position 527, which is amino acid A\nmouse and human are the same at position 528, which is amino acid D\nmouse and human are the same at position 529, which is amino acid G\n"
]
],
[
[
"## At which positions do we encounter a hydrophobic -> hydrophilic (or vice versa)?\nFor this question, we will need to make our algorithm a little more complex. We are going to start by making a dataframe that stores amino acid properties, such as volume, hydrophobicity, charge, and so forth. We will use the CSV format of this [table of amino acid properties](https://web.nmsu.edu/~talipovm/lib/exe/fetch.php?media=world:pasted:table08.pdf) and load it into a [Pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/getting_started/10min.html#object-creation).\n\nLet's also narrow our analysis to sites that are implicated as important in cleaving the SARS-CoV-2 S protein. H296, D345 and S441 are the catalytic triad, and D435 is a binding residue ([Meng et al 2020](https://www.biorxiv.org/content/10.1101/2020.02.08.926006v3.full)).\n\nK225 is implicated as important in binding monobasic targets such as S1/S2 domain of S protein ([Ohno et al 2020](https://www.biorxiv.org/content/10.1101/2020.06.12.149229v1.full)). Residue 225 in isoform 1 is actually a Leucine (L); this might have been a typo, since the two previous residues (223 and 224) are both lysines. We will consider both 223 and 224 as important, since they likely both contribute to the positive patch in the binding site, hypothesized to confer preference for monobasic substrates by Ohno et al.",
"_____no_output_____"
]
],
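The property-lookup idea described above can be sketched without pandas, using a plain dictionary. The hydrophobicity numbers below are invented for illustration; the notebook itself reads the real values from the CSV table:

```python
# Hypothetical hydrophobicity values for a few amino acids
# (illustrative only; the notebook loads the real table from CSV).
HYDROPHOBICITY = {"M": 74.0, "S": 36.0, "F": 100.0, "K": 23.0}

def lookup_hydrophobicity(aa):
    """Return the hydrophobicity of a single-letter amino acid code."""
    return HYDROPHOBICITY[aa]

print(lookup_hydrophobicity("S"))
```

The pandas version later in the notebook performs the same lookup with `aa_props.loc[...]['hydrophobicity'].item()`.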
[
[
"for position in human_compact:\n print(type(position))\n print(position[0])\n break",
"<class 'str'>\nM\n"
]
],
[
[
"## Pseudocode outline\n\n```\nhuman MP P APP\ncat LA P ---\n```\n\n1. Iterate over each amino acid. For loop. We don't necessarily need the PSSM here.\n\n```python\nfor position in human_compact:\n```\n2. Get the amino acid at this position, for both human and cat.\n\n```python\nresi_in_human = human_compact[position]\nresi_in_mouse = mouse_compact[position]\n```\n3. Get the hydrophobicity of each amino acid (e.g. 9.0 and 20.0)\n\n```python\nh_hum = get_hydrophobicity(resi_in_human)\nh_mouse = get_hydrophobicity(resi_in_mouse)\n```\n4. Get the (absolute value of) **difference** in hydrophobicity (e.g. 11.0)\n\n```python\nh_hum = 9.00\nh_mouse = 20.0\ndiff = abs(h_hum - h_mouse)\ndiff = 11.0\n```\n5. Is this difference \"large\" (threshold 5.0)? -> yes or no (boolean, True or False)\n\n```python\nif diff < 5.0:\n    # not a change in hydrophobicity\n    no_change_in_h.append(position_counter)\nelse:\n    # is a change in hydrophobicity\n    change_in_h.append(position_counter)\n\nposition_counter += 1\n```\n6. Variable that stores this boolean\n 1. List that has length of human sequence\n 2. Value is this boolean",
"_____no_output_____"
]
],
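The outline above can be turned into a minimal runnable version with toy aligned sequences and made-up hydrophobicity values (the real notebook reads them from a CSV file):

```python
# Illustrative hydrophobicity values; the real numbers come from the CSV.
HYDRO = {"M": 74.0, "P": 41.0, "A": 41.0, "L": 97.0, "S": 36.0}

human_seq = "MPPAP"  # toy stand-in for human_compact
mouse_seq = "LAPSS"  # toy stand-in for mouse_compact

change_in_h, no_change_in_h = [], []
position_counter = 0

for resi_in_human, resi_in_mouse in zip(human_seq, mouse_seq):
    diff = abs(HYDRO[resi_in_human] - HYDRO[resi_in_mouse])
    if diff < 5.0:
        # not a change in hydrophobicity
        no_change_in_h.append(position_counter)
    else:
        # is a change in hydrophobicity
        change_in_h.append(position_counter)
    position_counter += 1

print(change_in_h, no_change_in_h)
```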
[
[
"change_in_h",
"_____no_output_____"
],
[
"aa_props['hydrophobicity']",
"_____no_output_____"
],
[
"def remove_dashes(seq):\n    \"\"\"Remove alignment gap ('-') characters from a sequence string.\"\"\"\n    return seq.replace('-', '')",
"_____no_output_____"
],
[
"\"\".replace(\"-\", '')",
"_____no_output_____"
],
[
"def get_hydrophobicity(aa):\n hydrophobicity = aa_props.loc[[aa]]['hydrophobicity'].item()\n return hydrophobicity",
"_____no_output_____"
],
[
"aa = 'S'\nget_hydrophobicity(aa)",
"_____no_output_____"
],
[
"aa_props = pd.read_csv(\"../../data/amino_acid_properties.csv\")\naa_props.set_index('single_letter', inplace=True)\naa_props",
"_____no_output_____"
]
],
[
[
"See the [PDF format](../../data/amino_acid_properties.pdf) for references and details on how these metrics are calculated.\n\nNext, we will write a Python [function](https://github.com/wilfredinni/python-cheatsheet#functions), in which we pass the single-letter IDs of two amino acids, and get a Python [boolean](https://github.com/wilfredinni/python-cheatsheet#boolean-operators) (variable that stores `True` or `False`) that says whether or not these two amino acids have different hydrophobicity. We arbitrarily define \"difference in hydrophobicity\" here as a difference of 5.0 units between the amino acids' `hydrophobicity` columns.\n\nThe text at the beginning of the function that is wrapped in `\"\"\"` is a special type of [comment](https://github.com/wilfredinni/python-cheatsheet#comments) called a [function docstring](https://github.com/wilfredinni/python-cheatsheet#comments); it tells us what the function does and how to use it.",
"_____no_output_____"
]
],
[
[
"def is_change_in_hydrophobicity(resi1, resi2):\n    \"\"\"This function takes string-type amino acid identifiers `resi1` and `resi2`\n    and compares their hydrophobicities. If the absolute value of the difference\n    between hydrophobicities is greater than `min_diff`, return boolean True.\n    Otherwise, return boolean False.\n    \"\"\"\n    min_diff = 5.0\n    print(f\"comparing hydrophobicity between {resi1} and {resi2}\")\n    h1 = aa_props.loc[[resi1]]['hydrophobicity'].item()\n    h2 = aa_props.loc[[resi2]]['hydrophobicity'].item()\n\n    diff = abs(h1 - h2)\n    print(f\"the difference in hydrophobicity is {diff}\")\n\n    if diff > min_diff:\n        return True\n    else:\n        return False",
"_____no_output_____"
]
],
[
[
"We can quickly test our function with some examples:",
"_____no_output_____"
]
],
[
[
"is_change_in_hydrophobicity('M', 'S')",
"_____no_output_____"
],
[
"is_change_in_hydrophobicity('M', 'F')",
"_____no_output_____"
],
[
"is_change_in_hydrophobicity('M', 'M')",
"_____no_output_____"
]
],
[
[
"### Get list of interesting residues\n\nNext, let's generate a list of positions in the human sequence that are residues of interest, such as the catalytic triad (H296, D345 and S441) and important binding residues (D435, K223, and K224).\n\nIt is important to remember that these positions reported in the literature are relative to the human **isoform 1** sequence, not the **isoform 2** sequence (which we have stored in the variable `human_compact`). Thankfully, the conversion is relatively simple: isoform 2 is simply a splice variant in which the N-terminal methionine is replaced, `M → MPPAPPGGESGCEERGAAGHIEHSRYLSLLDAVDNSKM`. This means that we simply add 37 to the isoform 1 index to get the isoform 2 index. For instance, the catalytic serine S441 in isoform 1 is at position 441 + 37 = 478 in isoform 2.\n\nLastly, amino acid numbering in the literature uses 1 indexing (first amino acid is `M`), while our Python sequence uses 0 indexing. So position 478 with 1-indexing can be indexed using 477 with 0-indexing:",
"_____no_output_____"
]
],
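The +37 offset and the 1-to-0 index shift compose into one small helper; `iso1_to_index` is a name invented here for illustration:

```python
def iso1_to_index(pos_iso1, n_extra=37):
    """Convert a 1-indexed isoform-1 position to a 0-indexed
    isoform-2 position (isoform 2 carries 37 extra N-terminal residues)."""
    return pos_iso1 + n_extra - 1

# Catalytic serine S441 in isoform 1 -> 0-indexed position 477 in isoform 2
print(iso1_to_index(441))
```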
[
[
"len(str(human_compact.seq).replace('-', ''))",
"_____no_output_____"
],
[
"str(human_compact.seq).replace('-', '')[477]",
"_____no_output_____"
]
],
[
[
"We can also check that the other residues of interest are the expected amino acids:\n* H296 in isoform 1 → 296 + 37 = H333 in isoform 2 → 333 - 1 = position 332 with 0-indexing\n* D345 → 381\n* D435 → 471\n* K223 → 259\n* K224 → 260\n\nLet's store these 0-indexed positions in a list so we can use it later:",
"_____no_output_____"
]
],
[
[
"resi_interest = [332, 381, 471, 259, 260]",
"_____no_output_____"
]
],
[
[
"Let's check that these positions are the amino acids we expect, this time using a for loop:",
"_____no_output_____"
]
],
[
[
"for position in resi_interest:\n resi = str(human_compact.seq).replace('-', '')[position]\n print(f\"amino acid at 0-indexed position {position} is {resi}\")",
"_____no_output_____"
]
],
[
[
"### Putting it all together\n\nLet's try using our new function in a for loop. This for loop is a bit different from the previous ones; it's actually simpler. Instead of using the PSSM, we can simply iterate over the positions in the human sequence, get the equivalent amino acid in the cat sequence, and use our function to ask whether the amino acids at that position have different hydrophobicity.\n\nOne new addition to this algorithm (besides our custom function) is the [`range()` function](https://github.com/wilfredinni/python-cheatsheet#for-loops-and-the-range-function).",
"_____no_output_____"
]
],
[
[
"list(range(len(human_compact)))",
"_____no_output_____"
],
[
"# we want to keep track of which amino acid our\n# \"cursor\" is on in the for loop\nposition_counter = 0\n\n# get the entire list of positions in the human sequence as\n# integers. We include dashes in this calculation\nlist_of_positions_including_dashes = range(len(human_compact))\n\nfor position_with_dashes in list_of_positions_including_dashes:\n \n # get the amino acid at this position (dashes included)\n # in both human and cat\n resi_in_human = human_compact[position_with_dashes]\n resi_in_mouse = mouse_record_compact[position_with_dashes]\n \n # skip this position if it is a '-'\n # in the human sequence record\n if resi_in_human == '-':\n continue\n elif position_counter in resi_interest:\n # detect if we are at an important amino acid\n print(f\"* position {resi_in_human}{position_counter} is a residue of interest!\")\n position_counter += 1\n else:\n # increment the counter by 1\n position_counter += 1\n \n # detect amino acid deletions\n if resi_in_mouse == '-':\n print(f'detected a deletion at position {position_counter}')\n continue\n \n # check changes in amino acid properties\n if is_change_in_hydrophobicity(resi_in_human, resi_in_mouse):\n print(f\"detected a change in hydrophobicity at position {position_counter}\")\n \n # TODO: check for other changes in amino acid properties",
"_____no_output_____"
]
],
[
[
"## Goal for the end of this week\n\nFor every position in the human sequence (compared to cat sequence), write an algorithm that prints every time there is a hydrophobic residue in human, and non-hydrophobic (hydrophilic) residue in cats.",
"_____no_output_____"
],
[
"# Other useful `SummaryInfo` tools",
"_____no_output_____"
],
[
"## Compute replacement dictionary",
"_____no_output_____"
]
],
[
[
"hum_mouse_rep_dict = hum_mouse_summary.replacement_dictionary()\n{k: hum_mouse_rep_dict[k] for k in hum_mouse_rep_dict\n if hum_mouse_rep_dict[k] > 0\n and k[0] != k[1]}",
"_____no_output_____"
]
],
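The filtering pattern above works on any dict keyed by (residue, residue) tuples. Here is the same comprehension applied to a toy dictionary; the keys and counts are made up for illustration:

```python
# Toy replacement dictionary: (from_residue, to_residue) -> observed count.
toy_rep_dict = {
    ("M", "M"): 12.0,
    ("M", "L"): 2.0,
    ("S", "T"): 1.0,
    ("S", "A"): 0.0,
}

# Keep only substitutions that were actually observed (non-zero count)
# and that change the residue (k[0] != k[1]).
observed_subs = {k: v for k, v in toy_rep_dict.items()
                 if v > 0 and k[0] != k[1]}
print(observed_subs)
```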
[
[
"## Compute substitution and log odds matrix",
"_____no_output_____"
]
],
[
[
"my_arm = SubsMat.SeqMat(hum_mouse_rep_dict)\nmy_arm",
"_____no_output_____"
],
[
"my_lom = SubsMat.make_log_odds_matrix(my_arm)\nmy_lom",
"_____no_output_____"
]
],
[
[
"# Ty's Work Below",
"_____no_output_____"
]
],
[
[
"position_counter = 0\nfor position in hum_mouse_pssm.pssm:\n    resi_in_human = position[0]\n    resi_dict = position[1]\n    if resi_in_human == \"-\":\n        continue\n    position_counter += 1\n    if resi_dict[resi_in_human] > 1:\n        print(f\"mouse and human are the same at position {position_counter}, which is amino acid {resi_in_human}\")\n        # also flag conserved charged residues\n        if resi_in_human in [\"D\", \"E\"]:\n            print(f\"mouse and human have the same acidic amino acid at position {position_counter}\")\n        elif resi_in_human in [\"R\", \"H\", \"K\"]:\n            print(f\"mouse and human have the same basic amino acid at position {position_counter}\")",
"_____no_output_____"
],
[
"position_counter = 0\nfor position in hum_mouse_pssm.pssm:\n    resi_in_human = position[0]\n    resi_dict = position[1]\n    if resi_in_human == '-':\n        continue\n    position_counter += 1\n    if resi_dict[resi_in_human] > 1:\n        print(f\"mouse and human are the same at position {position_counter}, which is amino acid {resi_in_human}\")",
"_____no_output_____"
]
],
[
[
"# How to do all the species automatically",
"_____no_output_____"
],
[
"1. Define a function `compute_sequence_diff` that (basically) has all the code in this notebook\n2. Use a function\n\n```python\nfor species in species_list:\n compute_sequence_diff('Homo sapiens', 'monkey')\n \ndef compute_sequence_diff(species1, species2):\n \"\"\"\n \"\"\"\n # do stuff with species sequence\n```",
"_____no_output_____"
]
]
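A minimal sketch of that plan, with a hypothetical species-to-sequence mapping and a toy `compute_sequence_diff` that only reports differing positions in pre-aligned sequences (the real function would hold all the comparison logic from this notebook):

```python
def compute_sequence_diff(seq1, seq2):
    """Return 0-indexed positions where two pre-aligned sequences differ."""
    return [i for i, (a, b) in enumerate(zip(seq1, seq2)) if a != b]

# Hypothetical sequences, for illustration only.
species_seqs = {"Homo sapiens": "MPPAP", "mouse": "MPLAP", "cat": "LPPAP"}

for species in ("mouse", "cat"):
    diffs = compute_sequence_diff(species_seqs["Homo sapiens"],
                                  species_seqs[species])
    print(species, diffs)
```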
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e75f3a5b9aa2e67febbc09e10fcaa1ddab4da99d | 5,097 | ipynb | Jupyter Notebook | Traditional methods/Handle Class Imbalance.ipynb | rajahaseeb147/3dFacialPartSegmentation | aedfed75558761295e9bf602b18c2c3b631080e5 | [
"MIT"
] | null | null | null | Traditional methods/Handle Class Imbalance.ipynb | rajahaseeb147/3dFacialPartSegmentation | aedfed75558761295e9bf602b18c2c3b631080e5 | [
"MIT"
] | null | null | null | Traditional methods/Handle Class Imbalance.ipynb | rajahaseeb147/3dFacialPartSegmentation | aedfed75558761295e9bf602b18c2c3b631080e5 | [
"MIT"
] | 1 | 2021-11-03T01:33:26.000Z | 2021-11-03T01:33:26.000Z | 22.257642 | 119 | 0.480871 | [
[
[
"import pandas as pd\nimport numpy as np\nimport os\nimport glob\nfrom sklearn.utils import resample",
"_____no_output_____"
],
[
"ROOT = 'E:/skia_projects/3d_facial_landmark/implementation_1/data_new/temp'",
"_____no_output_____"
],
[
"df = pd.read_csv(os.path.join(ROOT, 'train.csv'), delimiter=',', index_col=False, names=['X', 'Y', 'Z', 'label'])",
"_____no_output_____"
],
[
"# Separate majority and minority classes\ndf_majority = df[df.label==0]\ndf_minority = df[df.label==1]",
"_____no_output_____"
],
[
"print(len(df_majority))\nprint(len(df_minority))",
"60429\n6784\n"
],
[
"# Upsample minority class\ndf_minority_upsampled = resample(df_minority, \n replace=True, # sample with replacement\n n_samples=60429, # to mach majority class elements\n random_state=123) # reproducible results",
"_____no_output_____"
],
[
"print(len(df_minority_upsampled))",
"60429\n"
],
[
"print((df_minority_upsampled).head(5))",
" X Y Z label\n64011 -0.213124 0.185127 0.087760 1\n63883 -0.210399 0.162202 0.094162 1\n61775 0.103098 -0.098548 0.152565 1\n64489 -0.011116 0.194203 0.086627 1\n65647 -0.082892 0.088318 0.162291 1\n"
],
[
"df_upsampled = pd.concat([df_majority, df_minority_upsampled])",
"_____no_output_____"
],
[
"df_upsampled.label.value_counts()",
"_____no_output_____"
],
[
"print(df_upsampled)",
" X Y Z label\n0 0.036309 -0.000011 0.000140 0\n1 0.051504 0.000000 0.000126 0\n2 0.067061 0.000000 0.000126 0\n3 0.091582 0.000000 0.000126 0\n4 0.108457 0.000000 0.000126 0\n... ... ... ... ...\n61692 0.074577 -0.079726 0.145563 1\n64876 -0.025355 0.212418 0.072108 1\n60544 0.159572 -0.067101 0.045794 1\n66829 -0.187689 0.160424 0.067511 1\n64548 -0.016911 0.202529 0.087553 1\n\n[120858 rows x 4 columns]\n"
],
[
"df_upsampled.to_csv(os.path.join(ROOT, 'train_balanced.csv'), mode='w', index=False, header=None)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75f415ad295a087420553bff8a24efb9472fa6d | 356,090 | ipynb | Jupyter Notebook | EDL_8_4_EvoAutoencoder.ipynb | cxbxmxcx/EvolutionaryDeepLearning | eb0bd73d01ca140a1216cc449af85166487ab087 | [
"Apache-2.0"
] | 2 | 2022-02-17T13:20:33.000Z | 2022-03-18T13:39:56.000Z | EDL_8_4_EvoAutoencoder.ipynb | cxbxmxcx/EvolutionaryDeepLearning | eb0bd73d01ca140a1216cc449af85166487ab087 | [
"Apache-2.0"
] | null | null | null | EDL_8_4_EvoAutoencoder.ipynb | cxbxmxcx/EvolutionaryDeepLearning | eb0bd73d01ca140a1216cc449af85166487ab087 | [
"Apache-2.0"
] | 1 | 2021-12-27T13:03:33.000Z | 2021-12-27T13:03:33.000Z | 282.386994 | 71,385 | 0.868224 | [
[
[
"<a href=\"https://colab.research.google.com/github/cxbxmxcx/EvolutionaryDeepLearning/blob/main/EDL_8_4_EvoAutoencoder.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Imports and other setup",
"_____no_output_____"
]
],
[
[
"!pip install livelossplot --quiet\n!pip install deap --quiet",
"\u001b[?25l\r\u001b[K |██ | 10 kB 18.0 MB/s eta 0:00:01\r\u001b[K |████ | 20 kB 12.7 MB/s eta 0:00:01\r\u001b[K |██████ | 30 kB 6.6 MB/s eta 0:00:01\r\u001b[K |████████▏ | 40 kB 3.7 MB/s eta 0:00:01\r\u001b[K |██████████▏ | 51 kB 4.3 MB/s eta 0:00:01\r\u001b[K |████████████▏ | 61 kB 5.1 MB/s eta 0:00:01\r\u001b[K |██████████████▎ | 71 kB 5.3 MB/s eta 0:00:01\r\u001b[K |████████████████▎ | 81 kB 4.8 MB/s eta 0:00:01\r\u001b[K |██████████████████▎ | 92 kB 5.3 MB/s eta 0:00:01\r\u001b[K |████████████████████▍ | 102 kB 4.8 MB/s eta 0:00:01\r\u001b[K |██████████████████████▍ | 112 kB 4.8 MB/s eta 0:00:01\r\u001b[K |████████████████████████▍ | 122 kB 4.8 MB/s eta 0:00:01\r\u001b[K |██████████████████████████▌ | 133 kB 4.8 MB/s eta 0:00:01\r\u001b[K |████████████████████████████▌ | 143 kB 4.8 MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▌ | 153 kB 4.8 MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 160 kB 4.8 MB/s \n\u001b[?25h"
],
[
"import numpy as np\nimport tensorflow as tf\nimport numpy as np\nimport random\n\nfrom tensorflow.keras.datasets import mnist\nfrom tensorflow.keras import layers, models, Input, Model\nfrom tensorflow.keras.callbacks import EarlyStopping\n\nfrom IPython import display\nfrom IPython.display import clear_output\nimport matplotlib.pyplot as plt\nfrom tensorflow.keras.utils import plot_model\nfrom livelossplot import PlotLosses\nplt.gray()\n\n#DEAP\nfrom deap import algorithms\nfrom deap import base\nfrom deap import benchmarks\nfrom deap import creator\nfrom deap import tools",
"_____no_output_____"
]
],
[
[
"CONSTANTS",
"_____no_output_____"
],
[
"Load Fashion Data",
"_____no_output_____"
]
],
[
[
"# load dataset\n(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.fashion_mnist.load_data()\n\n# split dataset\ntrain_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype(\"float32\") / 255.0\ntest_images = test_images.reshape(test_images.shape[0], 28, 28, 1).astype(\"float32\") / 255.0\n\n# reduce dataset for demonstration\n#train_images = train_images[1000:]\n#test_images = test_images[100:]",
"Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz\n32768/29515 [=================================] - 0s 0us/step\n40960/29515 [=========================================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz\n26427392/26421880 [==============================] - 0s 0us/step\n26435584/26421880 [==============================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz\n16384/5148 [===============================================================================================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz\n4423680/4422102 [==============================] - 0s 0us/step\n4431872/4422102 [==============================] - 0s 0us/step\n"
]
],
[
[
"Setup class names and labels for visualization, not training",
"_____no_output_____"
]
],
[
[
"class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',\n 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']",
"_____no_output_____"
]
],
[
[
"Plot some images.",
"_____no_output_____"
]
],
[
[
"import math\n\ndef plot_data(num_images, images, labels):\n grid = math.ceil(math.sqrt(num_images))\n plt.figure(figsize=(grid*2,grid*2))\n for i in range(num_images):\n plt.subplot(grid,grid,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False) \n plt.imshow(images[i].reshape(28,28))\n plt.xlabel(class_names[labels[i]]) \n plt.show()\n\nplot_data(25, train_images, train_labels)\n",
"_____no_output_____"
]
],
[
[
"STAGE 1: Auto-encoders",
"_____no_output_____"
],
[
"Build the Encoder",
"_____no_output_____"
]
],
[
[
"# input layer\ninput_layer = Input(shape=(28, 28, 1))\n\n# encoding architecture\nencoded_layer1 = layers.Conv2D(64, (3, 3), activation='relu', padding='same')(input_layer)\nencoded_layer1 = layers.MaxPool2D( (2, 2), padding='same')(encoded_layer1)\nencoded_layer2 = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(encoded_layer1)\nencoded_layer2 = layers.MaxPool2D( (2, 2), padding='same')(encoded_layer2)\nencoded_layer3 = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(encoded_layer2)\nlatent_view = layers.MaxPool2D( (2, 2), padding='same')(encoded_layer3)",
"_____no_output_____"
]
],
[
[
"Build the Decoder",
"_____no_output_____"
]
],
[
[
"#decoding architecture\ndecoded_layer1 = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(latent_view)\ndecoded_layer1 = layers.UpSampling2D((2, 2))(decoded_layer1)\ndecoded_layer2 = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(decoded_layer1)\ndecoded_layer2 = layers.UpSampling2D((2, 2))(decoded_layer2)\ndecoded_layer3 = layers.Conv2D(64, (3, 3), activation='relu')(decoded_layer2)\ndecoded_layer3 = layers.UpSampling2D((2, 2))(decoded_layer3)\n#output layer\noutput_layer = layers.Conv2D(1, (3, 3), padding='same')(decoded_layer3)",
"_____no_output_____"
]
],
[
[
"Build the Model",
"_____no_output_____"
]
],
[
[
"# compile the model\nmodel = Model(input_layer, output_layer)\nmodel.compile(optimizer='adam', loss='mse')\nmodel.summary()\nplot_model(model)",
"Model: \"model\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n input_1 (InputLayer) [(None, 28, 28, 1)] 0 \n \n conv2d (Conv2D) (None, 28, 28, 64) 640 \n \n max_pooling2d (MaxPooling2D (None, 14, 14, 64) 0 \n ) \n \n conv2d_1 (Conv2D) (None, 14, 14, 32) 18464 \n \n max_pooling2d_1 (MaxPooling (None, 7, 7, 32) 0 \n 2D) \n \n conv2d_2 (Conv2D) (None, 7, 7, 16) 4624 \n \n max_pooling2d_2 (MaxPooling (None, 4, 4, 16) 0 \n 2D) \n \n conv2d_3 (Conv2D) (None, 4, 4, 16) 2320 \n \n up_sampling2d (UpSampling2D (None, 8, 8, 16) 0 \n ) \n \n conv2d_4 (Conv2D) (None, 8, 8, 32) 4640 \n \n up_sampling2d_1 (UpSampling (None, 16, 16, 32) 0 \n 2D) \n \n conv2d_5 (Conv2D) (None, 14, 14, 64) 18496 \n \n up_sampling2d_2 (UpSampling (None, 28, 28, 64) 0 \n 2D) \n \n conv2d_6 (Conv2D) (None, 28, 28, 1) 577 \n \n=================================================================\nTotal params: 49,761\nTrainable params: 49,761\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"history_loss = []\nhistory_val_loss = []\n\ndef add_history(history):\n history_loss.append(history.history[\"loss\"])\n history_val_loss.append(history.history[\"val_loss\"])\n\ndef reset_history():\n global history_loss\n global history_val_loss\n history_loss = []\n history_val_loss = []\n return []\n\ndef plot_results(num_images, images, labels, history):\n add_history(history)\n grid = math.ceil(math.sqrt(num_images))\n plt.figure(figsize=(grid*2,grid*2))\n for i in range(num_images):\n plt.subplot(grid,grid,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False) \n plt.imshow(images[i].reshape(28,28))\n plt.xlabel(class_names[labels[i]]) \n plt.show()\n plt.plot(history_loss, label='loss')\n plt.plot(history_val_loss, label='val_loss')\n plt.legend()\n plt.show() ",
"_____no_output_____"
],
[
"EPOCHS = 3\nhistory = reset_history()\n\nfor i in range(EPOCHS):\n history = model.fit(train_images, train_images, epochs=1, batch_size=2048, validation_data=(test_images, test_images))\n pred_images = model.predict(test_images[:25])\n clear_output()\n plot_results(25, pred_images[:25], test_labels[:25], history)",
"_____no_output_____"
],
[
"#@title Constants\nmax_layers = 10\nmax_neurons = 128\nmin_neurons = 16\nmax_kernel = 3\nmin_kernel = 3\nmax_pool = 2\nmin_pool = 2\n\nCONV_LAYER = -1\nCONV_LAYER_LEN = 4\nBN_LAYER = -3\nBN_LAYER_LEN = 1\nDROPOUT_LAYER = -4\nDROPOUT_LAYER_LEN = 2\nUPCONV_LAYER = -2\nUPCONV_LAYER_LEN = 4",
"_____no_output_____"
],
[
"#@title Encoding scheme\ndef generate_neurons():\n return random.randint(min_neurons, max_neurons)\n\ndef generate_kernel():\n part = []\n part.append(random.randint(min_kernel, max_kernel))\n part.append(random.randint(min_kernel, max_kernel))\n return part\n\ndef generate_bn_layer():\n part = [BN_LAYER] \n return part\n\ndef generate_dropout_layer():\n part = [DROPOUT_LAYER] \n part.append(random.uniform(0,.5)) \n return part\n\ndef generate_conv_layer():\n part = [CONV_LAYER] \n part.append(generate_neurons())\n part.extend(generate_kernel()) \n return part\n\ndef generate_upconv_layer():\n part = [UPCONV_LAYER] \n part.append(generate_neurons())\n part.extend(generate_kernel()) \n return part\n\ndef create_offspring():\n ind = []\n layers = 0\n for i in range(max_layers):\n if i==0: #first layer always convolutational\n ind.extend(generate_conv_layer()) \n layers += 1\n elif random.uniform(0,1)<.5:\n #add convolution layer\n ind.extend(generate_conv_layer())\n layers += 1\n if random.uniform(0,1)<.5:\n #add batchnormalization\n ind.extend(generate_bn_layer())\n if random.uniform(0,1) < .5:\n ind.extend(generate_dropout_layer()) \n for i in range(layers):\n ind.extend(generate_upconv_layer())\n if random.uniform(0,1)<.5:\n #add batchnormalization\n ind.extend(generate_bn_layer())\n if random.uniform(0,1) < .5:\n ind.extend(generate_dropout_layer())\n return ind\n \nindividual = create_offspring()\nprint(individual)",
"[-1, 77, 3, 3, -1, 120, 3, 3, -3, -1, 65, 3, 3, -4, 0.1457169739400423, -1, 125, 3, 3, -4, 0.4112446879141588, -1, 20, 3, 3, -3, -4, 0.4427076677284294, -1, 120, 3, 3, -3, -4, 0.32406907420755815, -2, 89, 3, 3, -2, 126, 3, 3, -3, -4, 0.38140167072851344, -2, 47, 3, 3, -2, 91, 3, 3, -2, 108, 3, 3, -3, -2, 53, 3, 3, -4, 0.37112359234255]\n"
],
[
"def padding(gene):\n return \"same\" if gene == 1 else \"valid\"\n\ndef build_model(individual):\n input_layer = Input(shape=(28, 28, 1)) \n il = len(individual)\n i = 0\n x = input_layer\n while i < il: \n if individual[i] == CONV_LAYER: \n pad=\"same\" \n n = individual[i+1]\n k = (individual[i+2], individual[i+3])\n i += CONV_LAYER_LEN \n x = layers.Conv2D(n, k, activation='relu', padding=pad)(x) \n if x.shape[1] > 7:\n x = layers.MaxPool2D( (2, 2), padding='same')(x)\n elif individual[i] == BN_LAYER: #add batchnormal layer\n x = layers.BatchNormalization()(x)\n i += BN_LAYER_LEN \n elif individual[i] == DROPOUT_LAYER: #add dropout layer \n x = layers.Dropout(individual[i+1])(x) \n i += DROPOUT_LAYER_LEN\n elif individual[i] == UPCONV_LAYER:\n pad=\"same\"\n n = individual[i+1]\n k = (individual[i+2], individual[i+3]) \n x = layers.Conv2D(n, k, activation='relu', padding=pad)(x) \n x = layers.UpSampling2D((2, 2))(x) \n i += CONV_LAYER_LEN \n if x.shape[1] == (28):\n break #model is complete\n else:\n break\n if x.shape[1] == 14:\n x = layers.UpSampling2D((2, 2))(x)\n \n output_layer = layers.Conv2D(1, (3, 3), padding='same')(x)\n model = Model(input_layer, output_layer)\n model.compile(optimizer='adam', loss='mse')\n return model\n\nmodel = build_model(individual) \nmodel.summary()",
"Model: \"model_1\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n input_2 (InputLayer) [(None, 28, 28, 1)] 0 \n \n conv2d_7 (Conv2D) (None, 28, 28, 77) 770 \n \n max_pooling2d_3 (MaxPooling (None, 14, 14, 77) 0 \n 2D) \n \n conv2d_8 (Conv2D) (None, 14, 14, 120) 83280 \n \n max_pooling2d_4 (MaxPooling (None, 7, 7, 120) 0 \n 2D) \n \n batch_normalization (BatchN (None, 7, 7, 120) 480 \n ormalization) \n \n conv2d_9 (Conv2D) (None, 7, 7, 65) 70265 \n \n dropout (Dropout) (None, 7, 7, 65) 0 \n \n conv2d_10 (Conv2D) (None, 7, 7, 125) 73250 \n \n dropout_1 (Dropout) (None, 7, 7, 125) 0 \n \n conv2d_11 (Conv2D) (None, 7, 7, 20) 22520 \n \n batch_normalization_1 (Batc (None, 7, 7, 20) 80 \n hNormalization) \n \n dropout_2 (Dropout) (None, 7, 7, 20) 0 \n \n conv2d_12 (Conv2D) (None, 7, 7, 120) 21720 \n \n batch_normalization_2 (Batc (None, 7, 7, 120) 480 \n hNormalization) \n \n dropout_3 (Dropout) (None, 7, 7, 120) 0 \n \n conv2d_13 (Conv2D) (None, 7, 7, 89) 96209 \n \n up_sampling2d_3 (UpSampling (None, 14, 14, 89) 0 \n 2D) \n \n conv2d_14 (Conv2D) (None, 14, 14, 126) 101052 \n \n up_sampling2d_4 (UpSampling (None, 28, 28, 126) 0 \n 2D) \n \n conv2d_15 (Conv2D) (None, 28, 28, 1) 1135 \n \n=================================================================\nTotal params: 471,241\nTrainable params: 470,721\nNon-trainable params: 520\n_________________________________________________________________\n"
],
[
"individual = create_offspring() \nmodel = build_model(individual)\nmodel.summary()",
"Model: \"model_2\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n input_3 (InputLayer) [(None, 28, 28, 1)] 0 \n \n conv2d_16 (Conv2D) (None, 28, 28, 57) 570 \n \n max_pooling2d_5 (MaxPooling (None, 14, 14, 57) 0 \n 2D) \n \n conv2d_17 (Conv2D) (None, 14, 14, 34) 17476 \n \n max_pooling2d_6 (MaxPooling (None, 7, 7, 34) 0 \n 2D) \n \n batch_normalization_3 (Batc (None, 7, 7, 34) 136 \n hNormalization) \n \n dropout_4 (Dropout) (None, 7, 7, 34) 0 \n \n conv2d_18 (Conv2D) (None, 7, 7, 61) 18727 \n \n conv2d_19 (Conv2D) (None, 7, 7, 43) 23650 \n \n dropout_5 (Dropout) (None, 7, 7, 43) 0 \n \n conv2d_20 (Conv2D) (None, 7, 7, 33) 12804 \n \n batch_normalization_4 (Batc (None, 7, 7, 33) 132 \n hNormalization) \n \n dropout_6 (Dropout) (None, 7, 7, 33) 0 \n \n conv2d_21 (Conv2D) (None, 7, 7, 88) 26224 \n \n up_sampling2d_5 (UpSampling (None, 14, 14, 88) 0 \n 2D) \n \n batch_normalization_5 (Batc (None, 14, 14, 88) 352 \n hNormalization) \n \n conv2d_22 (Conv2D) (None, 14, 14, 26) 20618 \n \n up_sampling2d_6 (UpSampling (None, 28, 28, 26) 0 \n 2D) \n \n conv2d_23 (Conv2D) (None, 28, 28, 1) 235 \n \n=================================================================\nTotal params: 120,924\nTrainable params: 120,614\nNon-trainable params: 310\n_________________________________________________________________\n"
]
],
[
[
"# Creating Mate/Mutation Operators",
"_____no_output_____"
]
],
[
[
"#@title Start of Mate/Mutation Operators\ndef get_layers(ind, layer_type):\n return [a for a in range(len(ind)) if ind[a] == layer_type]\n\ndef swap(ind1, iv1, ind2, iv2, ll):\n ch1 = ind1[iv1:iv1+ll]\n ch2 = ind2[iv2:iv2+ll] \n ind1[iv1:iv1+ll] = ch2\n ind2[iv2:iv2+ll] = ch1\n return ind1, ind2\n\ndef swap_layers(ind1, ind2, layer_type, layer_len):\n c1, c2 = get_layers(ind1, layer_type), get_layers(ind2, layer_type) \n min_c = min(len(c1), len(c2))\n for i in range(min_c):\n if random.random() < 1:\n i1 = random.randint(0, len(c1)-1)\n i2 = random.randint(0, len(c2)-1) \n iv1 = c1.pop(i1)\n iv2 = c2.pop(i2) \n ind1, ind2 = swap(ind1, iv1, ind2, iv2, layer_len) \n return ind1, ind2 \n\ndef crossover(ind1, ind2): \n ind1, ind2 = swap_layers(ind1, ind2, CONV_LAYER, CONV_LAYER_LEN)\n ind1, ind2 = swap_layers(ind1, ind2, UPCONV_LAYER, UPCONV_LAYER_LEN)\n ind1, ind2 = swap_layers(ind1, ind2, BN_LAYER, BN_LAYER_LEN)\n ind1, ind2 = swap_layers(ind1, ind2, DROPOUT_LAYER, DROPOUT_LAYER_LEN)\n return ind1, ind2 \n\nind1 = create_offspring()\nind2 = create_offspring()\nprint(ind1)\nprint(ind2)\n\nind1, ind2 = crossover(ind1, ind2)\nprint(ind1)\nprint(ind2)\n\nmodel = build_model(ind1)\nmodel.summary()\nmodel = build_model(ind2)\nmodel.summary()",
"[-1, 93, 3, 3, -1, 67, 3, 3, -3, -4, 0.027071642365128212, -1, 77, 3, 3, -2, 21, 3, 3, -2, 17, 3, 3, -2, 52, 3, 3, -3]\n[-1, 109, 3, 3, -1, 40, 3, 3, -1, 23, 3, 3, -4, 0.44171814300243506, -1, 17, 3, 3, -3, -4, 0.4532032266493803, -1, 102, 3, 3, -2, 95, 3, 3, -3, -2, 33, 3, 3, -3, -2, 41, 3, 3, -4, 0.4287618783933327, -2, 111, 3, 3, -3, -2, 111, 3, 3, -3, -4, 0.33036928763087736]\n[-1, 109, 3, 3, -1, 17, 3, 3, -3, -4, 0.4532032266493803, -1, 102, 3, 3, -2, 95, 3, 3, -2, 41, 3, 3, -2, 111, 3, 3, -3]\n[-1, 93, 3, 3, -1, 40, 3, 3, -1, 23, 3, 3, -4, 0.44171814300243506, -1, 67, 3, 3, -3, -4, 0.027071642365128212, -1, 77, 3, 3, -2, 21, 3, 3, -3, -2, 33, 3, 3, -3, -2, 17, 3, 3, -4, 0.4287618783933327, -2, 111, 3, 3, -3, -2, 52, 3, 3, -3, -4, 0.33036928763087736]\nModel: \"model_3\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n input_4 (InputLayer) [(None, 28, 28, 1)] 0 \n \n conv2d_24 (Conv2D) (None, 28, 28, 109) 1090 \n \n max_pooling2d_7 (MaxPooling (None, 14, 14, 109) 0 \n 2D) \n \n conv2d_25 (Conv2D) (None, 14, 14, 17) 16694 \n \n max_pooling2d_8 (MaxPooling (None, 7, 7, 17) 0 \n 2D) \n \n batch_normalization_6 (Batc (None, 7, 7, 17) 68 \n hNormalization) \n \n dropout_7 (Dropout) (None, 7, 7, 17) 0 \n \n conv2d_26 (Conv2D) (None, 7, 7, 102) 15708 \n \n conv2d_27 (Conv2D) (None, 7, 7, 95) 87305 \n \n up_sampling2d_7 (UpSampling (None, 14, 14, 95) 0 \n 2D) \n \n conv2d_28 (Conv2D) (None, 14, 14, 41) 35096 \n \n up_sampling2d_8 (UpSampling (None, 28, 28, 41) 0 \n 2D) \n \n conv2d_29 (Conv2D) (None, 28, 28, 1) 370 \n \n=================================================================\nTotal params: 156,331\nTrainable params: 156,297\nNon-trainable params: 34\n_________________________________________________________________\nModel: \"model_4\"\n_________________________________________________________________\n Layer (type) Output Shape Param # 
\n=================================================================\n input_5 (InputLayer) [(None, 28, 28, 1)] 0 \n \n conv2d_30 (Conv2D) (None, 28, 28, 93) 930 \n \n max_pooling2d_9 (MaxPooling (None, 14, 14, 93) 0 \n 2D) \n \n conv2d_31 (Conv2D) (None, 14, 14, 40) 33520 \n \n max_pooling2d_10 (MaxPoolin (None, 7, 7, 40) 0 \n g2D) \n \n conv2d_32 (Conv2D) (None, 7, 7, 23) 8303 \n \n dropout_8 (Dropout) (None, 7, 7, 23) 0 \n \n conv2d_33 (Conv2D) (None, 7, 7, 67) 13936 \n \n batch_normalization_7 (Batc (None, 7, 7, 67) 268 \n hNormalization) \n \n dropout_9 (Dropout) (None, 7, 7, 67) 0 \n \n conv2d_34 (Conv2D) (None, 7, 7, 77) 46508 \n \n conv2d_35 (Conv2D) (None, 7, 7, 21) 14574 \n \n up_sampling2d_9 (UpSampling (None, 14, 14, 21) 0 \n 2D) \n \n batch_normalization_8 (Batc (None, 14, 14, 21) 84 \n hNormalization) \n \n conv2d_36 (Conv2D) (None, 14, 14, 33) 6270 \n \n up_sampling2d_10 (UpSamplin (None, 28, 28, 33) 0 \n g2D) \n \n conv2d_37 (Conv2D) (None, 28, 28, 1) 298 \n \n=================================================================\nTotal params: 124,691\nTrainable params: 124,515\nNon-trainable params: 176\n_________________________________________________________________\n"
],
[
"#@title Mutation\ndef mutate(part, layer_type):\n if layer_type == CONV_LAYER and len(part)==CONV_LAYER_LEN:\n part[1] = int(part[1] * random.uniform(.9, 1.1))\n part[2] = random.randint(min_kernel, max_kernel)\n part[3] = random.randint(min_kernel, max_kernel)\n elif layer_type == UPCONV_LAYER and len(part)==UPCONV_LAYER_LEN:\n part[1] = random.randint(min_kernel, max_kernel)\n part[2] = random.randint(min_kernel, max_kernel)\n elif layer_type == DROPOUT_LAYER and len(part)==DROPOUT_LAYER_LEN:\n part[1] = random.uniform(0, .5) \n else:\n error = f\"mutate ERROR {part}\" \n raise Exception(error) \n return part\n\ndef mutate_layers(ind, layer_type, layer_len):\n layers = get_layers(ind1, layer_type)\n for layer in layers:\n if random.random() < 1:\n try:\n ind[layer:layer+layer_len] = mutate(\n ind[layer:layer+layer_len], layer_type) \n except:\n print(layers)\n return ind \n\nprint(ind1)\n\ndef mutation(ind): \n ind = mutate_layers(ind, CONV_LAYER, CONV_LAYER_LEN)\n ind = mutate_layers(ind, DROPOUT_LAYER, DROPOUT_LAYER_LEN) \n ind = mutate_layers(ind, UPCONV_LAYER, UPCONV_LAYER_LEN)\n return ind,\n\nind, = mutation(ind1)\nprint(ind)\nmodel = build_model(ind1)\nmodel.summary()",
"[-1, 109, 3, 3, -1, 17, 3, 3, -3, -4, 0.4532032266493803, -1, 102, 3, 3, -2, 95, 3, 3, -2, 41, 3, 3, -2, 111, 3, 3, -3]\n[-1, 114, 3, 3, -1, 16, 3, 3, -3, -4, 0.3591074600780886, -1, 102, 3, 3, -2, 3, 3, 3, -2, 3, 3, 3, -2, 3, 3, 3, -3]\nModel: \"model_5\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n input_6 (InputLayer) [(None, 28, 28, 1)] 0 \n \n conv2d_38 (Conv2D) (None, 28, 28, 114) 1140 \n \n max_pooling2d_11 (MaxPoolin (None, 14, 14, 114) 0 \n g2D) \n \n conv2d_39 (Conv2D) (None, 14, 14, 16) 16432 \n \n max_pooling2d_12 (MaxPoolin (None, 7, 7, 16) 0 \n g2D) \n \n batch_normalization_9 (Batc (None, 7, 7, 16) 64 \n hNormalization) \n \n dropout_10 (Dropout) (None, 7, 7, 16) 0 \n \n conv2d_40 (Conv2D) (None, 7, 7, 102) 14790 \n \n conv2d_41 (Conv2D) (None, 7, 7, 3) 2757 \n \n up_sampling2d_11 (UpSamplin (None, 14, 14, 3) 0 \n g2D) \n \n conv2d_42 (Conv2D) (None, 14, 14, 3) 84 \n \n up_sampling2d_12 (UpSamplin (None, 28, 28, 3) 0 \n g2D) \n \n conv2d_43 (Conv2D) (None, 28, 28, 1) 28 \n \n=================================================================\nTotal params: 35,295\nTrainable params: 35,263\nNon-trainable params: 32\n_________________________________________________________________\n"
],
[
"#@title Setting up the Creator\ncreator.create(\"FitnessMin\", base.Fitness, weights=(-1.0,))\ncreator.create(\"Individual\", list, fitness=creator.FitnessMin)",
"_____no_output_____"
],
[
"#@title Create Individual and Population\ntoolbox = base.Toolbox()\ntoolbox.register(\"autoencoder\", create_offspring)\ntoolbox.register(\"individual\", tools.initIterate, creator.Individual, toolbox.autoencoder)\ntoolbox.register(\"population\", tools.initRepeat, list, toolbox.individual)\n\ntoolbox.register(\"select\", tools.selTournament, tournsize=5)",
"_____no_output_____"
],
[
"#@title Register Crossover and Mutation\ntoolbox.register(\"mate\", crossover)\ntoolbox.register(\"mutate\", mutation)",
"_____no_output_____"
],
[
"#@title Register Evaluation\ndef clamp(num, min_value, max_value):\n return max(min(num, max_value), min_value)\n\ndef train(model): \n history = model.fit(train_images, train_images, epochs=3,\n batch_size=2048, validation_data=(test_images, test_images),\n verbose=0)\n return model, history\n\nfits = []\n\ndef evaluate(individual): \n global fits\n try:\n model = build_model(individual)\n model, history = train(model) \n fitness = history.history[\"val_loss\"]\n fits.append(fitness)\n print(\".\", end='') \n return clamp(fitness, 0, np.nanmax(fits)),\n except:\n return np.nanmax(fits), \n\ntoolbox.register(\"evaluate\", evaluate) ",
"_____no_output_____"
],
[
"#@title Optimize the Weights { run: \"auto\" }\nMU = 100 #@param {type:\"slider\", min:5, max:100, step:1}\nNGEN = 1 #@param {type:\"slider\", min:1, max:10, step:1}\nRGEN = 1 #@param {type:\"slider\", min:1, max:5, step:1}\nCXPB = .6\nMUTPB = .3\n\nrandom.seed(64)\n\npop = toolbox.population(n=MU)\nhof = tools.HallOfFame(1)\nstats = tools.Statistics(lambda ind: ind.fitness.values)\nstats.register(\"avg\", np.mean)\nstats.register(\"std\", np.std)\nstats.register(\"min\", np.min)\nstats.register(\"max\", np.max)",
"_____no_output_____"
],
[
"best = None\ngroups = { \"fitness\" : {\"min\", \"max\"}}\nplotlosses = PlotLosses(groups=groups)\n\nfor g in range(NGEN):\n pop, logbook = algorithms.eaSimple(pop, toolbox, \n cxpb=CXPB, mutpb=MUTPB, ngen=RGEN, stats=stats, halloffame=hof, verbose=False)\n best = hof[0] \n \n print(f\"Gen ({(g+1)*RGEN})\") \n for l in logbook:\n plotlosses.update({'min': l[\"min\"], 'max': l[\"max\"]})\n plotlosses.send() # draw, update logs, etc",
"_____no_output_____"
],
[
"model = build_model(best)\n\nEPOCHS = 10\nhistory = reset_history()\n\nfor i in range(EPOCHS):\n history = model.fit(train_images, train_images, epochs=1, batch_size=2048, validation_data=(test_images, test_images))\n pred_images = model.predict(test_images[:25])\n clear_output()\n plot_results(25, pred_images[:25], test_labels[:25], history)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75f45430fc18120d95fad4a59a78a786c8aaf4a | 12,596 | ipynb | Jupyter Notebook | notebooks/deep_learning_intro/raw/tut4.ipynb | qursaan/learntools | 3df5094cb78ed1a6aaca2d16c782ade523d6a92b | [
"Apache-2.0"
] | 359 | 2018-03-23T15:57:52.000Z | 2022-03-25T21:56:28.000Z | notebooks/deep_learning_intro/raw/tut4.ipynb | qursaan/learntools | 3df5094cb78ed1a6aaca2d16c782ade523d6a92b | [
"Apache-2.0"
] | 84 | 2018-06-14T00:06:52.000Z | 2022-02-08T17:25:54.000Z | notebooks/deep_learning_intro/raw/tut4.ipynb | qursaan/learntools | 3df5094cb78ed1a6aaca2d16c782ade523d6a92b | [
"Apache-2.0"
] | 213 | 2018-05-02T19:06:31.000Z | 2022-03-20T15:40:34.000Z | 51.412245 | 527 | 0.66148 | [
[
[
"# Introduction #\n\nRecall from the example in the previous lesson that Keras will keep a history of the training and validation loss over the epochs that it is training the model. In this lesson, we're going to learn how to interpret these learning curves and how we can use them to guide model development. In particular, we'll examine at the learning curves for evidence of *underfitting* and *overfitting* and look at a couple of strategies for correcting it.\n\n# Interpreting the Learning Curves #\n\nYou might think about the information in the training data as being of two kinds: *signal* and *noise*. The signal is the part that generalizes, the part that can help our model make predictions from new data. The noise is that part that is *only* true of the training data; the noise is all of the random fluctuation that comes from data in the real-world or all of the incidental, non-informative patterns that can't actually help the model make predictions. The noise is the part might look useful but really isn't.\n\nWe train a model by choosing weights or parameters that minimize the loss on a training set. You might know, however, that to accurately assess a model's performance, we need to evaluate it on a new set of data, the *validation* data. (You could see our lesson on [model validation](https://www.kaggle.com/dansbecker/model-validation) in *Introduction to Machine Learning* for a review.)\n\nWhen we train a model we've been plotting the loss on the training set epoch by epoch. To this we'll add a plot the validation data too. These plots we call the **learning curves**. 
To train deep learning models effectively, we need to be able to interpret them.\n\n<figure style=\"padding: 1em;\">\n<img src=\"https://i.imgur.com/tHiVFnM.png\" width=\"500\" alt=\"A graph of training and validation loss.\">\n<figcaption style=\"textalign: center; font-style: italic\"><center>The validation loss gives an estimate of the expected error on unseen data.\n</center></figcaption>\n</figure>\n\nNow, the training loss will go down either when the model learns signal or when it learns noise. But the validation loss will go down only when the model learns signal. (Whatever noise the model learned from the training set won't generalize to new data.) So, when a model learns signal both curves go down, but when it learns noise a *gap* is created in the curves. The size of the gap tells you how much noise the model has learned.\n\nIdeally, we would create models that learn all of the signal and none of the noise. This will practically never happen. Instead we make a trade. We can get the model to learn more signal at the cost of learning more noise. So long as the trade is in our favor, the validation loss will continue to decrease. After a certain point, however, the trade can turn against us, the cost exceeds the benefit, and the validation loss begins to rise.\n\n<figure style=\"padding: 1em;\">\n<img src=\"https://i.imgur.com/eUF6mfo.png\" width=\"600\" alt=\"Two graphs. On the left, a line through a few data points with the true fit a parabola. On the right, a curve running through each datapoint with the true fit a parabola.\">\n<figcaption style=\"textalign: center; font-style: italic\"><center>Underfitting and overfitting.\n</center></figcaption>\n</figure>\n\nThis trade-off indicates that there can be two problems that occur when training a model: not enough signal or too much noise. **Underfitting** the training set is when the loss is not as low as it could be because the model hasn't learned enough *signal*. 
**Overfitting** the training set is when the loss is not as low as it could be because the model learned too much *noise*. The trick to training deep learning models is finding the best balance between the two.\n\nWe'll look at a couple of ways of getting more signal out of the training data while reducing the amount of noise.\n\n# Capacity #\n\nA model's **capacity** refers to the size and complexity of the patterns it is able to learn. For neural networks, this will largely be determined by how many neurons it has and how they are connected together. If it appears that your network is underfitting the data, you should try increasing its capacity.\n\nYou can increase the capacity of a network either by making it *wider* (adding more units to existing layers) or by making it *deeper* (adding more layers). Wider networks have an easier time learning more linear relationships, while deeper networks prefer more nonlinear ones. Which is better just depends on the dataset.\n\n```\nmodel = keras.Sequential([\n    layers.Dense(16, activation='relu'),\n    layers.Dense(1),\n])\n\nwider = keras.Sequential([\n    layers.Dense(32, activation='relu'),\n    layers.Dense(1),\n])\n\ndeeper = keras.Sequential([\n    layers.Dense(16, activation='relu'),\n    layers.Dense(16, activation='relu'),\n    layers.Dense(1),\n])\n```\n\nYou'll explore how the capacity of a network can affect its performance in the exercise.\n\n# Early Stopping #\n\nWe mentioned that when a model is too eagerly learning noise, the validation loss may start to increase during training. To prevent this, we can simply stop the training whenever it seems the validation loss isn't decreasing anymore. 
Interrupting the training this way is called **early stopping**.\n\n<figure style=\"padding: 1em;\">\n<img src=\"https://i.imgur.com/eP0gppr.png\" width=500 alt=\"A graph of the learning curves with early stopping at the minimum validation loss, underfitting to the left of it and overfitting to the right.\">\n<figcaption style=\"textalign: center; font-style: italic\"><center>We keep the model where the validation loss is at a minimum.\n</center></figcaption>\n</figure>\n\nOnce we detect that the validation loss is starting to rise again, we can reset the weights back to where the minimum occurred. This ensures that the model won't continue to learn noise and overfit the data.\n\nTraining with early stopping also means we're in less danger of stopping the training too early, before the network has finished learning signal. So besides preventing overfitting from training too long, early stopping can also prevent *underfitting* from not training long enough. Just set your training epochs to some large number (more than you'll need), and early stopping will take care of the rest.\n\n## Adding Early Stopping ##\n\nIn Keras, we include early stopping in our training through a callback. A **callback** is just a function you want to run every so often while the network trains. The early stopping callback will run after every epoch. (Keras has [a variety of useful callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks) pre-defined, but you can [define your own](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/LambdaCallback), too.)",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.callbacks import EarlyStopping\n\nearly_stopping = EarlyStopping(\n    min_delta=0.001, # minimum amount of change to count as an improvement\n    patience=20, # how many epochs to wait before stopping\n    restore_best_weights=True,\n)",
"_____no_output_____"
]
],
[
[
"These parameters say: \"If there hasn't been at least an improvement of 0.001 in the validation loss over the previous 20 epochs, then stop the training and keep the best model you found.\" It can sometimes be hard to tell if the validation loss is rising due to overfitting or just due to random batch variation. The parameters allow us to set some allowances around when to stop.\n\nAs we'll see in our example, we'll pass this callback to the `fit` method along with the loss and optimizer.\n\n# Example - Train a Model with Early Stopping #\n\nLet's continue developing the model from the example in the last tutorial. We'll increase the capacity of that network but also add an early-stopping callback to prevent overfitting.\n\nHere's the data prep again.",
"_____no_output_____"
]
],
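To make the `min_delta`/`patience` rule concrete, here is a minimal plain-Python sketch of the early-stopping decision applied to a precomputed list of per-epoch validation losses. The helper name `early_stop_epoch` is hypothetical; this is not Keras's actual implementation, just the rule it describes.

```python
# A toy sketch (hypothetical, not the Keras implementation) of the
# early-stopping rule: stop once the validation loss has not improved
# by at least `min_delta` for `patience` consecutive epochs, and
# remember the epoch with the best (lowest) loss.

def early_stop_epoch(val_losses, min_delta=0.001, patience=20):
    """Return (stop_epoch, best_epoch) for a list of per-epoch val losses."""
    best_loss = float("inf")
    best_epoch = 0
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss - min_delta:   # counted as an improvement
            best_loss = loss
            best_epoch = epoch
            wait = 0
        else:
            wait += 1
            if wait >= patience:           # ran out of patience: stop here
                return epoch, best_epoch
    return len(val_losses) - 1, best_epoch  # early stopping never triggered

# Loss falls, then plateaus and rises: training halts early.
losses = [0.50, 0.40, 0.35, 0.34, 0.36, 0.37, 0.38, 0.39]
print(early_stop_epoch(losses, min_delta=0.001, patience=3))  # -> (6, 3)
```

With `patience=3`, training halts at epoch 6 while the best loss came at epoch 3; keeping the epoch-3 weights is what `restore_best_weights=True` does.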
[
[
"#$HIDE_INPUT$\nimport pandas as pd\nfrom IPython.display import display\n\nred_wine = pd.read_csv('../input/dl-course-data/red-wine.csv')\n\n# Create training and validation splits\ndf_train = red_wine.sample(frac=0.7, random_state=0)\ndf_valid = red_wine.drop(df_train.index)\ndisplay(df_train.head(4))\n\n# Scale to [0, 1]\nmax_ = df_train.max(axis=0)\nmin_ = df_train.min(axis=0)\ndf_train = (df_train - min_) / (max_ - min_)\ndf_valid = (df_valid - min_) / (max_ - min_)\n\n# Split features and target\nX_train = df_train.drop('quality', axis=1)\nX_valid = df_valid.drop('quality', axis=1)\ny_train = df_train['quality']\ny_valid = df_valid['quality']",
"_____no_output_____"
]
],
[
[
"Now let's increase the capacity of the network. We'll go for a fairly large network, but rely on the callback to halt the training once the validation loss shows signs of increasing.",
"_____no_output_____"
]
],
[
[
"from tensorflow import keras\nfrom tensorflow.keras import layers, callbacks\n\nearly_stopping = callbacks.EarlyStopping(\n    min_delta=0.001, # minimum amount of change to count as an improvement\n    patience=20, # how many epochs to wait before stopping\n    restore_best_weights=True,\n)\n\nmodel = keras.Sequential([\n    layers.Dense(512, activation='relu', input_shape=[11]),\n    layers.Dense(512, activation='relu'),\n    layers.Dense(512, activation='relu'),\n    layers.Dense(1),\n])\nmodel.compile(\n    optimizer='adam',\n    loss='mae',\n)",
"_____no_output_____"
]
],
[
[
"After defining the callback, add it as an argument in `fit` (you can have several, so put it in a list). Choose a large number of epochs when using early stopping, more than you'll need.",
"_____no_output_____"
]
],
[
[
"history = model.fit(\n X_train, y_train,\n validation_data=(X_valid, y_valid),\n batch_size=256,\n epochs=500,\n callbacks=[early_stopping], # put your callbacks in a list\n verbose=0, # turn off training log\n)\n\nhistory_df = pd.DataFrame(history.history)\nhistory_df.loc[:, ['loss', 'val_loss']].plot();\nprint(\"Minimum validation loss: {}\".format(history_df['val_loss'].min()))",
"_____no_output_____"
]
],
[
[
"And sure enough, Keras stopped the training well before the full 500 epochs!\n\n# Your Turn #\n\nNow [**predict how popular a song is**](#$NEXT_NOTEBOOK_URL$) with the *Spotify* dataset.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e75f51d10473f0e814c2aced47bd9323304e85dc | 157,757 | ipynb | Jupyter Notebook | Titanic_dataset_analysis.ipynb | paawan01/Titanic_dataset_analysis | 33316b68f19ac350b53c3425138f05e77af5f7a4 | [
"FTL",
"CNRI-Python"
] | null | null | null | Titanic_dataset_analysis.ipynb | paawan01/Titanic_dataset_analysis | 33316b68f19ac350b53c3425138f05e77af5f7a4 | [
"FTL",
"CNRI-Python"
] | null | null | null | Titanic_dataset_analysis.ipynb | paawan01/Titanic_dataset_analysis | 33316b68f19ac350b53c3425138f05e77af5f7a4 | [
"FTL",
"CNRI-Python"
] | null | null | null | 104.33664 | 35,104 | 0.829459 | [
[
[
"# Titanic : Analysis of a disaster\n\n#### Author - Paawan Mukker\n\n> This notebook strives to answer some chosen questions using simple exploratory data analysis and descriptive statistics on the Titanic dataset (the aim is to avoid using any inferential statistics or machine learning as much as possible). This notebook follows the Cross-Industry Standard Process for Data Mining (CRISP-DM).\n\n\n\n",
"_____no_output_____"
],
[
"## Phase 1 - Business Understanding",
"_____no_output_____"
],
[
"### Introduction",
"_____no_output_____"
],
[
"Titanic is arguably one of the most famous ship voyages of the past, unfortunately not because of something glorious but something unlucky - an accident leading to a shipwreck.\nOn April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships.\n\nOne of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew. Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class.\n\nPlease find more details here at [Kaggle](https://www.kaggle.com/c/titanic).",
"_____no_output_____"
],
[
"### Objective",
"_____no_output_____"
],
[
"#### Business perspective",
"_____no_output_____"
],
[
"The objective of this case study is to answer the following questions:\n- Does having family members on board increase your survival?\n- Was there any survival advantage to a particular gender?\n- Which aspect had the most crucial role to play in passengers' survival?",
"_____no_output_____"
],
[
"#### Technical perspective",
"_____no_output_____"
],
[
"Answer the above posed questions with the appropriate use of data/statistics/visualizations to justify or nullify the proposed hypotheses.",
"_____no_output_____"
],
[
"## Phase 2 - Data Understanding",
"_____no_output_____"
],
[
"### Data Collection\n",
"_____no_output_____"
],
[
"The data has been taken from [Kaggle's Titanic Challenge](https://www.kaggle.com/c/titanic).\nThe description from the same is as follows.\n\nThe data has been split into two groups:\n\n- training set (train.csv) : The training set should be used to build machine learning models. For the training set, we provide the outcome (also known as the “ground truth”) for each passenger. \n\n\n- test set (test.csv) : The test set should be used to see how well the model performs on unseen data. For the test set, we do not provide the ground truth for each passenger.\n\nFor this study we don't need to look into test.csv and we'll focus on train.csv only.",
"_____no_output_____"
],
[
"### Data Description",
"_____no_output_____"
],
[
"Let's import the required libraries.",
"_____no_output_____"
]
],
[
[
"# Libraries for Visualisation/plots.\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nfrom pandas.plotting import radviz\nimport matplotlib.patches as mpatches\n\n\n# Libraries for data handling\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\n\n# Libraries for data modeling\nfrom sklearn.ensemble import RandomForestClassifier",
"_____no_output_____"
]
],
[
[
"#### Let's read our data using pandas:",
"_____no_output_____"
]
],
[
[
"# Load the data as pandas data frame\ntitanic_train = pd.read_csv(\"./data/train.csv\")\ntitanic_test = pd.read_csv(\"./data/test.csv\") # Will not be used",
"_____no_output_____"
]
],
[
[
"#### Show an overview of our data:",
"_____no_output_____"
],
[
"_Dimension of the data:_",
"_____no_output_____"
]
],
[
[
"titanic_train.shape",
"_____no_output_____"
]
],
[
[
"_First few columns:_",
"_____no_output_____"
]
],
[
[
"titanic_train.head()",
"_____no_output_____"
]
],
[
[
"Here is what each of the columns means:\n```\nVariable Name\tDescription\nPassengerId     Passenger Id.\nSurvived\t    1 for Survived, 0 otherwise\nPclass\t        Passenger’s class\nName\t        Passenger’s name\nSex\t            Passenger’s sex\nAge\t            Passenger’s age\nSibSp\t        Number of siblings/spouses on ship\nParch\t        Number of parents/children on ship\nTicket\t        Passenger’s Ticket number\nFare\t        Passenger’s Ticket Fare\nCabin\t        Passenger’s Cabin Number\nEmbarked\t    Place from where Passenger boarded the ship.\n```",
"_____no_output_____"
],
[
"Some other notes about variables from [Kaggle](https://www.kaggle.com/c/titanic) itself:\n\n```\npclass: A proxy for socio-economic status (SES)\n1st = Upper\n2nd = Middle\n3rd = Lower\n\nage: Age is fractional if less than 1. If the age is estimated, is it in the form of xx.5\n\nsibsp: The dataset defines family relations in this way...\nSibling = brother, sister, stepbrother, stepsister\nSpouse = husband, wife (mistresses and fiancés were ignored)\n\nparch: The dataset defines family relations in this way...\nParent = mother, father\nChild = daughter, son, stepdaughter, stepson\nSome children travelled only with a nanny, therefore parch=0 for them.\n```",
"_____no_output_____"
],
[
"#### Compute basic statistics",
"_____no_output_____"
]
],
[
[
"# Use only relevant columns\n\nto_have = ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare']\ntitanic_train[to_have].describe(exclude=[type(None)])",
"_____no_output_____"
]
],
[
[
"#### Assessing Missing Values in columns\n",
"_____no_output_____"
]
],
[
[
"titanic_train.isnull().sum()",
"_____no_output_____"
],
[
"titanic_train.isnull().sum().plot(kind='bar', figsize=(15,4))\n",
"_____no_output_____"
]
],
[
[
"## Phase 3 - Data Preparation",
"_____no_output_____"
],
[
"### Data cleaning",
"_____no_output_____"
],
[
"#### Re-Encode Categorical Features",
"_____no_output_____"
]
],
[
[
"titanic_train.dtypes\n",
"_____no_output_____"
],
[
"drop_categorical_var = ['Name', 'Embarked', 'Ticket', 'Cabin', 'PassengerId']\nbin_categorical_var = ['Sex']\nmulti_categorical_var = ['Pclass']\n",
"_____no_output_____"
],
[
"# Drop not required categorical variable\ntitanic_train.drop(drop_categorical_var, axis=1, inplace=True)\n",
"_____no_output_____"
],
[
"# Re-encode binary categorical variable(s) to be kept in the analysis.\n\nsex_map = {'male':0, 'female':1}\ntitanic_train['Sex'] = titanic_train['Sex'].map(sex_map)\n\n# for attribute in bin_categorical_var:\n \n# for ind, row in enumerate(titanic_train[attribute].value_counts().index):\n# titanic_train[attribute] = titanic_train[attribute].replace(row, ind)",
"_____no_output_____"
],
[
"titanic_train['Sex'].head()\n",
"_____no_output_____"
],
[
"# Re-encode multi categorical variable(s) to be kept in the analysis.\n\nfor attribute in multi_categorical_var:\n \n titanic_train = pd.get_dummies(titanic_train, columns=[attribute])\n ",
"_____no_output_____"
]
],
[
[
"#### Fix missing Values",
"_____no_output_____"
],
[
"Let's look at which columns have the most Null or NA values.",
"_____no_output_____"
]
],
[
[
"# Show count of missing values\ntitanic_train.isnull().sum()\n",
"_____no_output_____"
]
],
[
[
"For replacing missing data there can be multiple strategies. Let's see which strategy suits us best.\nWe can:\n1. Delete rows/columns with missing values - but since the dataset is small, removing rows is not a good way to go, though a field can be dropped from the analysis.\n2. Replace missing values with values inferred from the data, such as the mean or median - this seems an appropriate choice for us.\n3. Randomly set values - not a wise strategy for most cases.\n4. Predict the missing values based on other values from the data - this also seems an appropriate choice for us.\n\nBefore selecting which technique to employ for our case, we need to look at how relevant the missing field is with respect to our objectives. Hence we employ different techniques for different fields. For:\n\n- Age: For all the questions we have posed in the objectives, Age doesn't seem to have an explicit effect but may have an implicit one, hence instead of dropping (1) or predicting (4) we choose to replace it with the mean, i.e. strategy 2.",
"_____no_output_____"
]
],
[
[
"# Impute the missing age values with mean\ntitanic_train.Age=titanic_train.Age.fillna(titanic_train.Age.mean())\n",
"_____no_output_____"
]
],
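The `fillna` call above can be pictured with a plain-Python sketch of strategy 2 (the helper name `impute_mean` is hypothetical; pandas does the equivalent in a single call):

```python
# A plain-Python sketch of mean imputation: replace missing entries with
# the mean of the observed entries, leaving observed values untouched.

def impute_mean(values):
    """Replace None entries with the mean of the non-missing entries."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

ages = [22.0, None, 26.0, None, 30.0]
print(impute_mean(ages))  # -> [22.0, 26.0, 26.0, 26.0, 30.0]
```

Note that mean imputation keeps the column mean unchanged but shrinks its variance, which is an accepted trade-off here given the small dataset.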
[
[
"#### Data Construction",
"_____no_output_____"
],
[
"We can't predict the role of family in survival since we don't have any field that directly corresponds to that. Hence we need to come up with something based on existing fields, which are SibSp (Siblings/Spouse) and Parch (Parent/Children).",
"_____no_output_____"
]
],
[
[
"# Source - http://ahmedbesbes.com/how-to-score-08134-in-titanic-kaggle-challenge.html\n\nfamily = pd.DataFrame()\n\n# introducing a new feature : the size of families (including the passenger)\nfamily[ 'FamilySize' ] = titanic_train[ 'Parch' ] + titanic_train[ 'SibSp' ] + 1\n\n# introducing other features based on the family size\nfamily[ 'Family_Single' ] = family[ 'FamilySize' ].map( lambda s : 1 if s == 1 else 0 )\nfamily[ 'Family_Small' ] = family[ 'FamilySize' ].map( lambda s : 1 if 2 <= s <= 4 else 0 )\nfamily[ 'Family_Large' ] = family[ 'FamilySize' ].map( lambda s : 1 if 5 <= s else 0 )\n\nfamily.head()\n",
"_____no_output_____"
],
[
"# Add family to df and remove Parch and SibSp\n\ntitanic_train.drop(['Parch', 'SibSp'], axis=1, inplace=True)\ntitanic_train = pd.concat( [titanic_train, family], axis=1)\ntitanic_train.head()\n",
"_____no_output_____"
]
],
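The feature construction above boils down to a simple rule per passenger; here is a plain-Python sketch (`family_features` is a hypothetical helper mirroring the pandas code):

```python
# Family size is Parch + SibSp + 1 (the passenger themselves), then
# bucketed into single (1), small (2-4), and large (5+) indicator flags.

def family_features(parch, sibsp):
    size = parch + sibsp + 1
    return {
        "FamilySize": size,
        "Family_Single": 1 if size == 1 else 0,
        "Family_Small": 1 if 2 <= size <= 4 else 0,
        "Family_Large": 1 if size >= 5 else 0,
    }

# A passenger travelling with one parent and two siblings: family of 4.
print(family_features(parch=1, sibsp=2))
```

Exactly one of the three indicator flags is 1 for any passenger, so the buckets partition the data cleanly.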
[
[
"### Phase 4 - Modeling",
"_____no_output_____"
],
[
"#### Modelling technique selection",
"_____no_output_____"
],
[
"For modelling we are using Random Forest classifiers here, and the reasons to choose them are as follows:\n\n- This is a classification problem, so we are restricted to classification algorithms.\n- We have labelled data, hence this becomes a supervised learning problem; also note that we need to predict Survived/Not survived, hence a binary variable.\n- Considering the above two points, we are limited to ML algorithms for a binary supervised classification problem.\n- I choose Random Forest amongst others like SVM, etc. Random forests don't require many hyperparameters to tune, fit well for a limited, simple yet non-linear dataset like ours, and are much easier to train; it's easier to get a good, robust model.",
"_____no_output_____"
],
[
"#### Train-Test split",
"_____no_output_____"
],
[
"Our data is already split: training data is in titanic_train and testing data in titanic_test.\n\n**For this case we're NOT concerned with the test data because our objective is not achieving accuracy in prediction but rather assessing the importance of features. We're modelling only to answer the third question (which attributes are most important for survival), so accuracy in prediction is not our primary focus. For this same reason we're not tuning the ideal hyperparameters using random or grid search.**",
"_____no_output_____"
],
[
"Let's split the training data and labels.",
"_____no_output_____"
]
],
[
[
"# Drop the survived column (or labels for training data)\nx = titanic_train.drop(['Survived'], axis=1)\n\n# Get the label data\ny = titanic_train.Survived\n",
"_____no_output_____"
]
],
[
[
"#### Build Model",
"_____no_output_____"
]
],
[
[
"# Instantiate Random forest classifier\nclf = RandomForestClassifier(random_state=0, max_features=None)\n\n# Fit the training data\nclf_tit = clf.fit(x, y)\n",
"_____no_output_____"
]
],
[
[
"### Phase 5 - Evaluation",
"_____no_output_____"
],
[
"In this section we'll be evaluating the questions asked instead of the model.",
"_____no_output_____"
],
[
"- #### Does having family members on board increase your survival?",
"_____no_output_____"
],
[
"Let us plot the bar graph of the number of family members on the ship vs their survival count.",
"_____no_output_____"
]
],
[
[
"freq1 = pd.value_counts(titanic_train[titanic_train['Survived']==1].FamilySize)\nfreq2 = pd.value_counts(titanic_train[titanic_train['Survived']==0].FamilySize)\nax = pd.concat([ freq2.rename('Not Survived'), freq1.rename('Survived')], axis=1).plot.bar(figsize=(10,7))\nax.set_xlabel('Number of family members on ship')\nax.set_ylabel('Count of Survival/Non-Survival')\n",
"_____no_output_____"
]
],
[
[
"**Ans** : Using the above graph we can see that there’s a survival penalty to singletons and those with family sizes above 4. \nHence to answer the asked question, Yes having family members on board increases your survival but only if you have family members less than 4 beyond that it hurts your chances rather than increasing it.",
"_____no_output_____"
],
[
"- #### Was there any survival advantage to a particular gender?",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(10,5))\nax = fig.add_subplot(111)\ntitanic_train.Survived[titanic_train.Sex == 0].value_counts().plot(kind='bar', label='Male', color='blue')\ntitanic_train.Survived[titanic_train.Sex == 1].value_counts().plot(kind='bar', label='Female', color='red')\nax.set_xticklabels(['Survived', 'Not survived'])\nax.set_xlabel('Survival')\nax.set_ylabel('Count of Survival/Non-Survival')\n\nplt.title(\"Proportion of survival in terms of sex\"); plt.legend(loc='best')\n",
"_____no_output_____"
]
],
[
[
"The above chart is the count of males/females that survived: through this we see that the **number of males surviving is more than that of females.** Also, the **number of males who didn’t survive is also more than the number of females who didn’t survive.** So this doesn’t give the answer, and hence we have the next chart.",
"_____no_output_____"
]
],
[
[
"freq1 = pd.value_counts(titanic_train[titanic_train['Survived']==1].Sex)\nfreq2 = pd.value_counts(titanic_train[titanic_train['Survived']==0].Sex)\nax = pd.concat([ freq2.rename('Not Survived'), freq1.rename('Survived')], axis=1).plot.bar(figsize=(10,5))\nax.set_xticklabels(['male', 'female'])\nax.set_xlabel('Gender of person')\nax.set_ylabel('Count of Survival/Non-Survival')\n",
"_____no_output_____"
]
],
[
[
"This one contrasts the number of survived/not survived male and female passengers and their total count. From this we clearly infer that out of nearly 550 males aboard only 100 survived, that is 18%. Whereas for females, out of 340, nearly 260 survived, i.e. 75%. Hence we answer in the affirmative: yes, **females had a survival advantage over men.** So I want to emphasize that the raw count of survival by gender wasn’t very helpful in answering the asked question, but analyzing the relative percentages certainly is.",
"_____no_output_____"
],
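The relative-percentage argument above can be sketched in a couple of lines. The counts are the approximate figures quoted in the text, so the female rate comes out near 76% rather than the rounded 75%:

```python
# Raw survivor counts can mislead; compare survived / total per group instead.

def survival_rate(survived, total):
    """Survival rate as a whole-number percentage."""
    return round(100 * survived / total)

print(survival_rate(100, 550))  # -> 18 (males)
print(survival_rate(260, 340))  # -> 76 (females)
```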
[
"**Ans** : The clear answer is yes; There was an advantage to women as far as survival is concerned.",
"_____no_output_____"
],
[
"- #### Which aspect had the most crucial role to play in passengers' survival?",
"_____no_output_____"
],
[
"To find the most important feature for survival prediction, let us first plot the features used, sorted by the importance the classifier assigns them.",
"_____no_output_____"
]
],
[
[
"features = pd.DataFrame()\nfeatures['features'] = x.columns\nfeatures['importance'] = clf.feature_importances_\nfeatures.sort_values(by=['importance'], ascending=True, inplace=True)\nfeatures.set_index('features', inplace=True)\nax = features.plot(kind='barh', figsize=(12,6))\nax.set_xlabel('The feature importances (the higher, the more important the feature).')\n",
"_____no_output_____"
]
],
[
[
"The feature importances are numbers computed using scikit-learn. Basically, the idea is to measure the decrease in accuracy on the data when we randomly permute the values for that feature. If the decrease is low, then the feature is not important, and vice-versa. The higher the number, the more important that attribute/feature is.\n\nFor more details refer here - https://stackoverflow.com/questions/15810339/how-are-feature-importances-in-randomforestclassifier-determined",
"_____no_output_____"
],
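The permutation idea described above can be illustrated with a toy sketch. All names below are hypothetical; note that scikit-learn's `feature_importances_` attribute is actually impurity-based, while the permutation variant is available separately as `sklearn.inspection.permutation_importance`.

```python
import random

# Toy permutation importance: shuffle one feature's column and measure how
# much a fixed model's accuracy drops. A big drop means the model relied
# on that feature; no drop means the feature was unimportant to it.

def permutation_importance(model, rows, labels, feature, seed=0):
    def accuracy(data):
        return sum(model(r) == y for r, y in zip(data, labels)) / len(labels)
    rng = random.Random(seed)
    column = [r[feature] for r in rows]
    rng.shuffle(column)                       # permute one column only
    shuffled = [list(r) for r in rows]
    for r, v in zip(shuffled, column):
        r[feature] = v
    return accuracy(rows) - accuracy(shuffled)

# A "model" that only looks at feature 0: shuffling feature 0 can hurt it,
# while shuffling feature 1 never does.
rows = [[0, 9], [0, 8], [1, 7], [1, 6]]
labels = [0, 0, 1, 1]
model = lambda r: r[0]
print(permutation_importance(model, rows, labels, feature=0, seed=0))
print(permutation_importance(model, rows, labels, feature=1, seed=0))
```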
[
"Although it is clear which is the most important factor, Let us see another interesting visualisation for the same, namely Radviz.\n>RadViz is a way of visualizing multi-variate data. It is based on a simple spring tension minimization algorithm. Basically you set up a bunch of points in a plane. In our case they are equally spaced on a unit circle. Each point represents a single attribute. You then pretend that each sample in the data set is attached to each of these points by a spring, the stiffness of which is proportional to the numerical value of that attribute (they are normalized to unit interval). The point in the plane, where our sample settles to (where the forces acting on our sample are at an equilibrium) is where a dot representing our sample will be drawn. Depending on which class that sample belongs it will be colored differently. Description from [here](http://pandas.pydata.org/pandas-docs/stable/visualization.html#visualization-radviz).",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(12,8))\nax = radviz(titanic_train, 'Survived', color=['r','b'])\n\nblue_patch = mpatches.Patch(color='blue', label='Survived')\nred_patch = mpatches.Patch(color='red', label='Not survived')\n\nax.legend(handles=[blue_patch, red_patch])",
"_____no_output_____"
]
],
[
[
"**Inference of plot**:\n\nAs per the description of the radviz plot, we place each (normalised) vector in a unit circle surrounded by each attribute on the plot, as if each attribute is exerting a force on the vector. We can roughly say that the more significant the attribute is, the more force it will exert.\nWe clearly see that most of the Survived points are clustered towards the Sex, Age, and Fare attributes, which also came out to be the most important features _(hence no surprise here!)_.\nAnother interesting thing that came out here is that the other vectors lie in a straight line towards the next important attributes, namely Pclass_3 and FamilySize, although the attribute Family_Single deviates somewhat from this pattern.\nBut overall the radviz plot and the feature importance bar chart are consistent on the top important features for predicting survivability.\n",
"_____no_output_____"
],
[
"**Ans** : Based on the above two visualisations, It turns out that 'Sex' of a person has the most effect in terms of survival. From previous analysis we can see that women were most likely to survive.\nAfter 'Sex', Age seems to be an important factor influencing the survival of a particular person.",
"_____no_output_____"
],
[
"### Conclusion",
"_____no_output_____"
],
[
"In this notebook, we took a look at various questions which highlight the survivability of passengers of the Titanic based on Kaggle’s Titanic dataset. We found out that:\n\n1) Travelling with family members would make your chances of survival go up significantly, as long as you don’t have more than 4 family members. Also, travelling alone can be a lot more dangerous.\n\n2) Then we looked at how being male or female affected your chances of survival. And we learnt that females had an overwhelming advantage, with a whopping 75% survival rate, which for males is only around 18%.\n\n3) Lastly we tried to investigate the most important attribute for a passenger’s survival, and it turned out to be his/her sex, followed by age. I think this is because of the old code of conduct that sailors and captains follow in threatening situations: “Women and children first!”.\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e75f6662bbb49e854998a42b3d26171a665e0123 | 1,041,317 | ipynb | Jupyter Notebook | model1-train-resnet34/2020-03-29-resnet34-experiments.ipynb | w210-accessibility/classify-streetview | d60328484ea992b4cb2ffecb04bb548efaf06f1b | [
"MIT"
] | 2 | 2020-06-23T04:02:50.000Z | 2022-02-08T00:59:24.000Z | model1-train-resnet34/2020-03-29-resnet34-experiments.ipynb | w210-accessibility/classify-streetview | d60328484ea992b4cb2ffecb04bb548efaf06f1b | [
"MIT"
] | null | null | null | model1-train-resnet34/2020-03-29-resnet34-experiments.ipynb | w210-accessibility/classify-streetview | d60328484ea992b4cb2ffecb04bb548efaf06f1b | [
"MIT"
] | null | null | null | 710.796587 | 671,740 | 0.946217 | [
[
[
"# ResNet34 - Experiments",
"_____no_output_____"
],
[
"Welcome to lesson 1! For those of you who are using a Jupyter Notebook for the first time, you can learn about this useful tool in a tutorial we prepared specially for you; click `File`->`Open` now and click `00_notebook_tutorial.ipynb`. \n\nIn this lesson we will build our first image classifier from scratch, and see if we can achieve world-class results. Let's dive in!\n\nEvery notebook starts with the following three lines; they ensure that any edits to libraries you make are reloaded here automatically, and also that any charts or images displayed are shown in this notebook.",
"_____no_output_____"
]
],
[
[
"%reload_ext autoreload\n%autoreload 2\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"We import all the necessary packages. We are going to work with the [fastai V1 library](http://www.fast.ai/2018/10/02/fastai-ai/) which sits on top of [Pytorch 1.0](https://hackernoon.com/pytorch-1-0-468332ba5163). The fastai library provides many useful functions that enable us to quickly and easily build neural networks and train our models.",
"_____no_output_____"
]
],
[
[
"from fastai.vision import *\nfrom fastai.metrics import error_rate",
"_____no_output_____"
]
],
[
[
"If you're using a computer with an unusually small GPU, you may get an out of memory error when running this notebook. If this happens, click Kernel->Restart, uncomment the 2nd line below to use a smaller *batch size* (you'll learn all about what this means during the course), and try again.",
"_____no_output_____"
]
],
[
[
"bs = 64\n# bs = 16 # uncomment this line if you run out of memory even after clicking Kernel->Restart",
"_____no_output_____"
]
],
[
[
"## Looking at the data",
"_____no_output_____"
],
[
"We are going to use the [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/) by [O. M. Parkhi et al., 2012](http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf) which features 12 cat breeds and 25 dog breeds. Our model will need to learn to differentiate between these 37 distinct categories. According to their paper, the best accuracy they could get in 2012 was 59.21%, using a complex model that was specific to pet detection, with separate \"Image\", \"Head\", and \"Body\" models for the pet photos. Let's see how accurate we can be using deep learning!\n\nWe are going to use the `untar_data` function to which we must pass a URL as an argument and which will download and extract the data.",
"_____no_output_____"
]
],
[
[
"help(untar_data)",
"Help on function untar_data in module fastai.datasets:\n\nuntar_data(url:str, fname:Union[pathlib.Path, str]=None, dest:Union[pathlib.Path, str]=None, data=True, force_download=False) -> pathlib.Path\n Download `url` to `fname` if `dest` doesn't exist, and un-tgz to folder `dest`.\n\n"
],
[
"#path = untar_data(URLs.PETS); path\npath = Path(r'/home/ec2-user/SageMaker/classify-streetview/images')\npath",
"_____no_output_____"
],
[
"path.ls()",
"_____no_output_____"
],
[
"#path_anno = path/'annotations'\npath_img = path",
"_____no_output_____"
]
],
[
[
"The first thing we do when we approach a problem is to take a look at the data. We _always_ need to understand very well what the problem is and what the data looks like before we can figure out how to solve it. Taking a look at the data means understanding how the data directories are structured, what the labels are and what some sample images look like.\n\nThe main difference between the handling of image classification datasets is the way labels are stored. In this particular dataset, labels are stored in the filenames themselves. We will need to extract them to be able to classify the images into the correct categories. Fortunately, the fastai library has a handy function made exactly for this, `ImageDataBunch.from_name_re` gets the labels from the filenames using a [regular expression](https://docs.python.org/3.6/library/re.html).",
"_____no_output_____"
]
],
[
[
"fnames = get_image_files(path_img)\nfnames[:5]",
"_____no_output_____"
],
[
"tfms = get_transforms(do_flip=False)\n#data = ImageDataBunch.from_folder(path_img, ds_tfms=tfms, size=224)",
"_____no_output_____"
],
[
"#np.random.seed(2)\n#pat = r'/([^/]+)_\\d+.jpg$'",
"_____no_output_____"
],
[
"#data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs\n# ).normalize(imagenet_stats)",
"_____no_output_____"
],
[
"\n\n# https://docs.fast.ai/vision.data.html#ImageDataBunch.from_folder\ndata = ImageDataBunch.from_folder(path, ds_tfms = tfms, size = 224, bs=bs)\n\n",
"_____no_output_____"
],
[
"data.show_batch(rows=3, figsize=(7,6))",
"_____no_output_____"
],
[
"print(data.classes)\nlen(data.classes),data.c",
"['0_missing', '1_null', '2_obstacle', '3_present', '4_surface_prob']\n"
]
],
[
[
"## Training: resnet34",
"_____no_output_____"
],
[
"Now we will start training our model. We will use a [convolutional neural network](http://cs231n.github.io/convolutional-networks/) backbone and a fully connected head with a single hidden layer as a classifier. Don't know what these things mean? Not to worry, we will dive deeper in the coming lessons. For the moment you need to know that we are building a model which will take images as input and will output the predicted probability for each of the categories (in this case, it will have 37 outputs).\n\nWe will train for 4 epochs (4 cycles through all our data).",
"_____no_output_____"
]
],
[
[
"learn = cnn_learner(data, models.resnet34, metrics=error_rate)",
"_____no_output_____"
],
[
"learn.model",
"_____no_output_____"
],
[
"learn.fit_one_cycle(4)",
"_____no_output_____"
],
[
"learn.save('stage-1')",
"_____no_output_____"
]
],
[
[
"## Results",
"_____no_output_____"
],
[
"Let's see what results we have got. \n\nWe will first see which were the categories that the model most confused with one another. We will try to see if what the model predicted was reasonable or not. In this case the mistakes look reasonable (none of the mistakes seems obviously naive). This is an indicator that our classifier is working correctly. \n\nFurthermore, when we plot the confusion matrix, we can see that the distribution is heavily skewed: the model makes the same mistakes over and over again but it rarely confuses other categories. This suggests that it just finds it difficult to distinguish some specific categories between each other; this is normal behaviour.",
"_____no_output_____"
]
],
[
[
"interp = ClassificationInterpretation.from_learner(learn)\n\nlosses,idxs = interp.top_losses()\n\nlen(data.valid_ds)==len(losses)==len(idxs)",
"_____no_output_____"
],
[
"interp.plot_top_losses(9, figsize=(15,11))",
"_____no_output_____"
],
[
"doc(interp.plot_top_losses)",
"_____no_output_____"
],
[
"interp.plot_confusion_matrix(figsize=(4,4), dpi=60)",
"_____no_output_____"
],
[
"interp.most_confused(min_val=2)",
"_____no_output_____"
]
],
[
[
"## Unfreezing, fine-tuning, and learning rates",
"_____no_output_____"
],
[
"Since our model is working as we expect it to, we will *unfreeze* our model and train some more.",
"_____no_output_____"
]
],
[
[
"learn.unfreeze()",
"_____no_output_____"
],
[
"learn.fit_one_cycle(1)",
"_____no_output_____"
],
[
"learn.load('stage-1');",
"_____no_output_____"
],
[
"learn.lr_find()",
"_____no_output_____"
],
[
"learn.recorder.plot()",
"_____no_output_____"
],
[
"learn.unfreeze()\nlearn.fit_one_cycle(2, max_lr=slice(1e-6,1e-4))",
"_____no_output_____"
]
],
[
[
"That's a pretty accurate model!",
"_____no_output_____"
],
[
"## Training: resnet50",
"_____no_output_____"
],
[
"Now we will train in the same way as before but with one caveat: instead of using resnet34 as our backbone we will use resnet50 (resnet34 is a 34 layer residual network while resnet50 has 50 layers. It will be explained later in the course and you can learn the details in the [resnet paper](https://arxiv.org/pdf/1512.03385.pdf)).\n\nBasically, resnet50 usually performs better because it is a deeper network with more parameters. Let's see if we can achieve a higher performance here. To help it along, let's us use larger images too, since that way the network can see more detail. We reduce the batch size a bit since otherwise this larger network will require more GPU memory.",
"_____no_output_____"
]
],
[
[
"data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),\n size=299, bs=bs//2).normalize(imagenet_stats)",
"_____no_output_____"
],
[
"learn = cnn_learner(data, models.resnet50, metrics=error_rate)",
"_____no_output_____"
],
[
"learn.lr_find()\nlearn.recorder.plot()",
"LR Finder complete, type {learner_name}.recorder.plot() to see the graph.\n"
],
[
"learn.fit_one_cycle(8)",
"Total time: 06:59\nepoch train_loss valid_loss error_rate\n1 0.548006 0.268912 0.076455 (00:57)\n2 0.365533 0.193667 0.064953 (00:51)\n3 0.336032 0.211020 0.073072 (00:51)\n4 0.263173 0.212025 0.060893 (00:51)\n5 0.217016 0.183195 0.063599 (00:51)\n6 0.161002 0.167274 0.048038 (00:51)\n7 0.086668 0.143490 0.044655 (00:51)\n8 0.082288 0.154927 0.046008 (00:51)\n\n"
],
[
"learn.save('stage-1-50')",
"_____no_output_____"
]
],
[
[
"It's astonishing that it's possible to recognize pet breeds so accurately! Let's see if full fine-tuning helps:",
"_____no_output_____"
]
],
[
[
"learn.unfreeze()\nlearn.fit_one_cycle(3, max_lr=slice(1e-6,1e-4))",
"Total time: 03:27\nepoch train_loss valid_loss error_rate\n1 0.097319 0.155017 0.048038 (01:10)\n2 0.074885 0.144853 0.044655 (01:08)\n3 0.063509 0.144917 0.043978 (01:08)\n\n"
]
],
[
[
"If it doesn't, you can always go back to your previous model.",
"_____no_output_____"
]
],
[
[
"learn.load('stage-1-50');",
"_____no_output_____"
],
[
"interp = ClassificationInterpretation.from_learner(learn)",
"_____no_output_____"
],
[
"interp.most_confused(min_val=2)",
"_____no_output_____"
]
],
[
[
"## Other data formats",
"_____no_output_____"
]
],
[
[
"path = untar_data(URLs.MNIST_SAMPLE); path",
"_____no_output_____"
],
[
"tfms = get_transforms(do_flip=False)\ndata = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=26)",
"_____no_output_____"
],
[
"data.show_batch(rows=3, figsize=(5,5))",
"_____no_output_____"
],
[
"learn = cnn_learner(data, models.resnet18, metrics=accuracy)\nlearn.fit(2)",
"Total time: 00:23\nepoch train_loss valid_loss accuracy\n1 0.116117 0.029745 0.991168 (00:12)\n2 0.056860 0.015974 0.994603 (00:10)\n\n"
],
[
"df = pd.read_csv(path/'labels.csv')\ndf.head()",
"_____no_output_____"
],
[
"data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=28)",
"_____no_output_____"
],
[
"data.show_batch(rows=3, figsize=(5,5))\ndata.classes",
"_____no_output_____"
],
[
"data = ImageDataBunch.from_df(path, df, ds_tfms=tfms, size=24)\ndata.classes",
"_____no_output_____"
],
[
"fn_paths = [path/name for name in df['name']]; fn_paths[:2]",
"_____no_output_____"
],
[
"pat = r\"/(\\d)/\\d+\\.png$\"\ndata = ImageDataBunch.from_name_re(path, fn_paths, pat=pat, ds_tfms=tfms, size=24)\ndata.classes",
"_____no_output_____"
],
[
"data = ImageDataBunch.from_name_func(path, fn_paths, ds_tfms=tfms, size=24,\n label_func = lambda x: '3' if '/3/' in str(x) else '7')\ndata.classes",
"_____no_output_____"
],
[
"labels = [('3' if '/3/' in str(x) else '7') for x in fn_paths]\nlabels[:5]",
"_____no_output_____"
],
[
"data = ImageDataBunch.from_lists(path, fn_paths, labels=labels, ds_tfms=tfms, size=24)\ndata.classes",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75f76f0cb8b4224965b24e803c0e65c77378ec3 | 133,169 | ipynb | Jupyter Notebook | Palm Detection and Hand Tracking Model.ipynb | alfaPegasis/Hand-Tracking-Model | a34e77afb1f18cd191988edff972aebfd3b797b2 | [
"MIT"
] | null | null | null | Palm Detection and Hand Tracking Model.ipynb | alfaPegasis/Hand-Tracking-Model | a34e77afb1f18cd191988edff972aebfd3b797b2 | [
"MIT"
] | null | null | null | Palm Detection and Hand Tracking Model.ipynb | alfaPegasis/Hand-Tracking-Model | a34e77afb1f18cd191988edff972aebfd3b797b2 | [
"MIT"
] | null | null | null | 62.025617 | 130 | 0.707267 | [
[
[
"# Install and Import libraries.",
"_____no_output_____"
]
],
[
[
"!pip install mediapipe opencv-python",
"Requirement already satisfied: mediapipe in ./track/lib/python3.9/site-packages (0.8.5)\nRequirement already satisfied: opencv-python in ./track/lib/python3.9/site-packages (4.5.2.54)\nRequirement already satisfied: protobuf>=3.11.4 in ./track/lib/python3.9/site-packages (from mediapipe) (3.17.3)\nRequirement already satisfied: opencv-contrib-python in ./track/lib/python3.9/site-packages (from mediapipe) (4.5.2.54)\nRequirement already satisfied: attrs>=19.1.0 in ./track/lib/python3.9/site-packages (from mediapipe) (21.2.0)\nRequirement already satisfied: numpy in ./track/lib/python3.9/site-packages (from mediapipe) (1.20.3)\nRequirement already satisfied: wheel in ./track/lib/python3.9/site-packages (from mediapipe) (0.36.2)\nRequirement already satisfied: six in ./track/lib/python3.9/site-packages (from mediapipe) (1.16.0)\nRequirement already satisfied: absl-py in ./track/lib/python3.9/site-packages (from mediapipe) (0.13.0)\n"
],
[
"import mediapipe as mp\nimport numpy as np\nimport os\nimport cv2\nimport uuid",
"_____no_output_____"
],
[
"# checking for webcam \ncapture_frames=cv2.VideoCapture(0)\nwhile capture_frames.isOpened():\n ret,frames=capture_frames.read()\n image=cv2.cvtColor(frames,cv2.COLOR_BGR2RGB)\n cv2.imshow(\"Hand Tracking Model\", image)\n if cv2.waitKey(10) & 0xFF ==ord(\"o\"):\n break\ncapture_frames.release()\ncv2.destroyAllWindows()\n\n# comment or do not run this cell after the completion of the project. \n# This is only for testing purpose.",
"_____no_output_____"
]
],
[
[
"# Render joints and landmarks of our hand.\n",
"_____no_output_____"
]
],
[
[
"mp_drawing=mp.solutions.drawing_utils\nmp_hands=mp.solutions.hands",
"_____no_output_____"
]
],
[
[
"# Detecting Images",
"_____no_output_____"
]
],
[
[
"capture_frames=cv2.VideoCapture(0)\nwith mp_hands.Hands(min_detection_confidence=0.8,min_tracking_confidence=0.5)as hands:\n while capture_frames.isOpened():\n ret,frame=capture_frames.read()\n \n #recolor the frame\n image=cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)\n \n #set flag\n image.flags.writeable=False\n \n #make detections\n img_processing=hands.process(image)\n \n #set flag to true\n image.flags.writeable=True\n \n #recolor again\n image=cv2.cvtColor(frame,cv2.COLOR_RGB2BGR)\n \n #results\n print(img_processing)\n \n #for rendering the landmarks\n if img_processing.multi_hand_landmarks:\n for num,hand in enumerate(img_processing.multi_hand_landmarks):\n mp_drawing.draw_landmarks(image,hand,mp_hands.HAND_CONNECTIONS,\n mp_drawing.DrawingSpec(color=(121,22,76), thickness=2, circle_radius=4),\n mp_drawing.DrawingSpec(color=(121,44,250), thickness=2, circle_radius=2))\n \n \n cv2.imshow(\"Hand Tracking\", image)\n if cv2.waitKey(10) & 0xFF == ord(\"q\"):\n break\n \ncapture_frames.release()\ncv2.destroyAllWindows()",
"<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 
'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 
'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 
'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 
'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n<class 'mediapipe.python.solution_base.SolutionOutputs'>\n"
]
],
[
[
"# Save Images in the local folder\n",
"_____no_output_____"
]
],
[
[
"os.mkdir(\"Output Images After Detection\")",
"_____no_output_____"
],
[
"capture_frames=cv2.VideoCapture(0)\nwith mp_hands.Hands(min_detection_confidence=0.8,min_tracking_confidence=0.5)as hands:\n while capture_frames.isOpened():\n ret,frame=capture_frames.read()\n \n #recolor the frame\n image=cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)\n \n #set flag\n image.flags.writeable=False\n \n #make detections\n img_processing=hands.process(image)\n \n #set flag to true\n image.flags.writeable=True\n \n #recolor again\n image=cv2.cvtColor(frame,cv2.COLOR_RGB2BGR)\n \n #results\n print(img_processing)\n \n #for rendering the landmarks\n if img_processing.multi_hand_landmarks:\n for num,hand in enumerate(img_processing.multi_hand_landmarks):\n mp_drawing.draw_landmarks(image,hand,mp_hands.HAND_CONNECTIONS,\n mp_drawing.DrawingSpec(color=(121,22,76), thickness=2, circle_radius=4),\n mp_drawing.DrawingSpec(color=(121,44,250), thickness=2, circle_radius=2))\n \n #Save Our Image\n cv2.imwrite(os.path.join('Output Images After Detection','{}.jpg'.format(uuid.uuid1())),image)\n cv2.imshow(\"Hand Tracking\", image)\n if cv2.waitKey(10) & 0xFF == ord(\"q\"):\n break\n \ncapture_frames.release()\ncv2.destroyAllWindows()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e75f7ff2fee7fdacdfce9cb85f921f55c38a5cde | 6,510 | ipynb | Jupyter Notebook | Ariketak/Errekurtsibitatea.ipynb | mpenagar/Programazioaren-Oinarriak | 831dd1c1ec6cbd290f958328acc0132185b89e96 | [
"MIT"
] | null | null | null | Ariketak/Errekurtsibitatea.ipynb | mpenagar/Programazioaren-Oinarriak | 831dd1c1ec6cbd290f958328acc0132185b89e96 | [
"MIT"
] | null | null | null | Ariketak/Errekurtsibitatea.ipynb | mpenagar/Programazioaren-Oinarriak | 831dd1c1ec6cbd290f958328acc0132185b89e96 | [
"MIT"
] | null | null | null | 20.092593 | 175 | 0.467588 | [
[
[
"### 3 - Zenbaki arrunt baten errepresentazio hamartarrak izango dituen digito kopurua kalkulatzen duen funtzioa. Generalizatu ezazu edozein oinarri erabili ahal izateko.",
"_____no_output_____"
]
],
[
[
"def digito_kopurua(n,oinarria=10):\n #print(n)\n if n < oinarria :\n return 1\n else :\n return 1 + digito_kopurua(n // oinarria, oinarria)\n \n \n#digito_kopurua(863465234)\n#digito_kopurua(24,10)\ndigito_kopurua(24,2)",
"_____no_output_____"
]
],
[
[
"### 4 - Zerrenda batetako elementu handienaren balioa bueltatuko duen funtzioa.",
"_____no_output_____"
]
],
[
[
"def maximoa(z):\n if len(z) == 1 :\n return z[0]\n else :\n a = z[:len(z)//2]\n b = z[len(z)//2:]\n #print(a,b)\n return max(maximoa(a),maximoa(b))",
"_____no_output_____"
],
[
"maximoa([34216,32,46,13465,236,134,632,73452,3452,36,236,2365])",
"_____no_output_____"
]
],
[
[
"Horrenbeste zerrenda sortzeak badu bere kostua...",
"_____no_output_____"
]
],
[
[
"def maximoa_errek(z,i,j):\n if j-i == 1 :\n return z[i]\n else :\n #print(z[i:(i+j)//2],z[(i+j)//2:j])\n return max(maximoa_errek(z,i,(i+j)//2),maximoa_errek(z,(i+j)//2,j))\n\ndef maximoa(z):\n return maximoa_errek(z,0,len(z))",
"_____no_output_____"
],
[
"maximoa_errek([34216,32,46,13465,236,134,632,73452,3452,36,236,2365],0,12)",
"_____no_output_____"
],
[
"maximoa([34216,32,46,13465,236,134,632,73452,3452,36,236,2365])",
"_____no_output_____"
],
[
"def maximoa(z):\n if len(z) == 1 :\n return z[0]\n else :\n return max(z[0],maximoa(z[1:]))",
"_____no_output_____"
],
[
"maximoa([34216,32,46,13465,236,134,632,73452,3452,36,236,2365])",
"_____no_output_____"
]
],
[
[
"Horrenbeste zerrenda sortzeak badu bere kostua...",
"_____no_output_____"
]
],
[
[
"def maximoa_errek(z,i):\n if i == len(z)-1 :\n return z[i]\n else :\n return max(z[i],maximoa_errek(z,i+1))\n\ndef maximoa(z):\n return maximoa_errek(z,0)",
"_____no_output_____"
],
[
"maximoa_errek([34216,32,46,13465,236,134,632,73452,3452,36,236,2365],0)",
"_____no_output_____"
],
[
"maximoa([34216,32,46,13465,236,134,632,73452,3452,36,236,2365])",
"_____no_output_____"
]
],
[
[
"### 7- Karaktere kate baten barnean beste kate baten agerpen kopurua (gainezarmenik gabe) kalkulatuko duen funtzioa.",
"_____no_output_____"
]
],
[
[
"def kontatu(zer,non):\n if len(zer) >= len(non) :\n return 1 if zer==non else 0\n else :\n \n ",
"_____no_output_____"
],
[
"def kontatu(zer,non):\n n = len(zer)\n if n > len(non) :\n return 0\n elif zer == non[:n] :\n return 1 + kontatu(zer,non[n:])\n else :\n return kontatu(zer,non[1:])",
"_____no_output_____"
],
[
"z = \"kaixo\"\nz[10:]",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e75f8f15912b3d8c482ece152dae0c4667e023c4 | 49,366 | ipynb | Jupyter Notebook | text/Chapter9.ipynb | Selubi/tutorial_python | 938fee5866b81448de250abeaa6099ad85571601 | [
"MIT"
] | null | null | null | text/Chapter9.ipynb | Selubi/tutorial_python | 938fee5866b81448de250abeaa6099ad85571601 | [
"MIT"
] | null | null | null | text/Chapter9.ipynb | Selubi/tutorial_python | 938fee5866b81448de250abeaa6099ad85571601 | [
"MIT"
] | null | null | null | 111.184685 | 21,780 | 0.849532 | [
[
[
"import warnings\nwarnings.filterwarnings('ignore') # 実行に影響のない warninig を非表示にします. 非推奨.",
"_____no_output_____"
]
],
[
[
"# Chapter 9: Pytorchによる転移学習\nここではpytorchによる転移学習の実装を行います.<br>\n転移学習とは,事前に他のデータセットで学習した深層学習モデルを特徴量生成器(次元削減器)として使用することで学習用データが少ない時でも過学習しないモデルを作るといったアプローチです<br>\n今回は[Chapter8](./Chapter8.ipynb)で作成した分類器を用いて「0」「1」を分類するモデルを転移学習で作成していきます.<br>",
"_____no_output_____"
]
],
[
[
"import torch\nimport torchvision\nimport torchvision.transforms as transforms\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nprint(torch.__version__)",
"1.9.1\n"
]
],
[
[
"データセットの用意<br>\n**MNISTデータセットから「0」と「1」のみを抽出します**",
"_____no_output_____"
]
],
[
[
"#データの前処理を行うクラスインスタンス\ntransform = transforms.Compose(\n [transforms.Resize((16, 16)),\n transforms.ToTensor(),\n transforms.Normalize((0.5, ), (0.5, ))])\n\nbatch_size = 100\n\n#使用するtrainデータセット\ntrainset = torchvision.datasets.MNIST(root='./data', \n train=True,\n download=True,\n transform=transform)\nmask = ((trainset.targets == 1) | (trainset.targets == 0))\ntrainset.data = trainset.data[mask]\ntrainset.targets = trainset.targets[mask]\n#データ分割\ntrainset, _ = torch.utils.data.random_split(trainset, [10000, len(trainset)-10000])\nprint(len(trainset))\n\n#trainデータをbatchごとに逐次的に取り出してくれるクラスインスタンス\ntrainloader = torch.utils.data.DataLoader(trainset,\n batch_size=batch_size,\n shuffle=True)\n\n#使用するtestデータセット(以下略)\ntestset = torchvision.datasets.MNIST(root='./data', \n train=False, \n download=True, \n transform=transform)\nmask = ((testset.targets == 1) | (testset.targets == 0))\ntestset.data = testset.data[mask]\ntestset.targets = testset.targets[mask]\n\ntestset, _ = torch.utils.data.random_split(testset, [1000, len(testset)-1000])\nprint(len(testset))\n\ntestloader = torch.utils.data.DataLoader(testset, \n batch_size=batch_size,\n shuffle=False)",
"10000\n1000\n"
]
],
[
[
"モデルの定義",
"_____no_output_____"
]
],
[
[
"import torch.nn.functional as F\n#モデルの定義\nclass NeuralNet(torch.nn.Module):\n def __init__(self, n_input=256, n_hidden=16, n_output=8):\n super(NeuralNet, self).__init__()\n self.n_input = n_input\n \n #一層目と二層目の重み行列の定義\n self.l1 = torch.nn.Linear(n_input, n_hidden, bias = True)\n self.l2 = torch.nn.Linear(n_hidden, n_hidden, bias = True)\n self.l3 = torch.nn.Linear(n_hidden, n_output, bias = True)\n \n def forward(self, x):\n #画像データ(2次元)を1次元に落とす\n x = x.view(-1, self.n_input)\n \n #一層目の重み行列をかける\n a1 = self.l1(x)\n \n #活性化関数に通す\n h1 = F.sigmoid(a1)\n #h1 = F.relu(a1)\n \n #二層目の重み行列をかける\n a2 = self.l2(h1)\n \n #活性化関数に通す\n h2 = F.sigmoid(a2)\n #h2 = F.relu(a2)\n \n #三層目の重み行列をかける\n a3 = self.l3(h2)\n \n return a3",
"_____no_output_____"
],
[
"#モデルインスタンスの作成\nmodel = NeuralNet()\n'''\nここで先ほど作成して保存したモデルの情報を呼び出して今回の分類モデルのパラメータに代入します\n'''\nmodel.load_state_dict(torch.load(\"params/model_state_dict.pth\"), strict=False)",
"_____no_output_____"
]
],
[
[
"一度予測をさせてみます.\\\n前回作成したモデルは2~9までの数字しか分類できないので結果はトンチキなものになります",
"_____no_output_____"
]
],
[
[
"#モデルの予想を可視化する関数の作成\ndef prediction(model, num=10, c=2):\n with torch.no_grad():\n img, t = testloader.__iter__().next()\n t_pred = model(img)\n fig = plt.figure(figsize=(12,4))\n ax = []\n for i in range(num):\n print(f'true: {t[i]}, predict: {np.argmax(t_pred[i])+c}')\n ax.append(fig.add_subplot(1, num, i+1))\n ax[i].imshow(img[i, 0], cmap='gray')\n plt.show()\n\n#学習前でどのような予測をするのかを表示\nprediction(model)",
"true: 1, predict: 8\ntrue: 1, predict: 2\ntrue: 0, predict: 5\ntrue: 1, predict: 8\ntrue: 0, predict: 5\ntrue: 0, predict: 2\ntrue: 0, predict: 5\ntrue: 1, predict: 3\ntrue: 0, predict: 5\ntrue: 1, predict: 7\n"
]
],
[
[
"転移学習の用意をします.",
"_____no_output_____"
]
],
[
[
"for param in model.parameters():\n param.requires_grad = False#モデルにある全てのパラメータを固定値に変換する\n \nmodel.l3 = torch.nn.Linear(model.l3.in_features, 2)#モデルの最終層のパラメータを2クラス分類ように書き換えた上で学習パラメータに設定します\nmodel.l3.requires_grad = True#デフォルトでTrueなので本来は書く必要ないですが明示的に\n\n#loss関数の設定\ncriterion = torch.nn.CrossEntropyLoss()\n\n#最適化手法の設定\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1)\n#optimizer = torch.optim.Adam(model.parameters(), lr=0.01)",
"_____no_output_____"
],
[
"#学習\nepochs = 30\ntrain_loss = []\ntest_loss = []\ntest_acc = []\ntrain_num_batchs = np.ceil(len(trainset) / float(batch_size))\ntest_num_batchs = np.ceil(len(testset) / float(batch_size))\nfor epoch in range(epochs):\n loss_sum = 0\n #trainloaderからbatchごとのデータを取り出し\n for X, t in trainloader:\n t_pred = model(X)\n loss = criterion(t_pred, t)\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n loss_sum += loss.detach()\n loss_sum /= train_num_batchs\n train_loss.append(float(loss_sum))\n #今回はテストデータに対する予測誤差や精度が各epochでどのように変化したのかを確かめます.\n with torch.no_grad():\n loss_sum = 0\n correct = 0\n for X, t in testloader:\n t_pred = model(X)\n loss = criterion(t_pred, t)\n loss_sum += loss.detach()\n pred = t_pred.argmax(dim=1, keepdim=True)\n correct += pred.eq(t.view_as(pred)).sum().item()\n loss_sum /= test_num_batchs\n test_loss.append(float(loss_sum))\n test_acc.append(correct/len(testloader))\n print(f'\\repoch: {epoch+1}/{epochs}, train loss: {train_loss[epoch]}, test loss: {test_loss[epoch]}, test_acc: {correct}/{len(testset)}.....', end='')",
"epoch: 30/30, train loss: 0.2612673044204712, test loss: 0.2221093624830246, test_acc: 903/1000......."
],
[
"fig = plt.figure(figsize=(12,4))\n\nax1 = fig.add_subplot(1, 2, 1)\nax1.plot(train_loss, c='b', label='train')\nax1.plot(test_loss, c='r', label='test')\n\nax2 = fig.add_subplot(1, 2, 2)\nax2.plot(test_acc, c='r', label='test')\nplt.plot()",
"_____no_output_____"
],
[
"prediction(model, c=0)",
"true: 1, predict: 0\ntrue: 1, predict: 1\ntrue: 0, predict: 0\ntrue: 1, predict: 1\ntrue: 0, predict: 0\ntrue: 0, predict: 1\ntrue: 0, predict: 0\ntrue: 1, predict: 1\ntrue: 0, predict: 0\ntrue: 1, predict: 1\n"
]
],
[
[
"ある程度の分類ができるようになっていると思います.\\\nうまく分類できているのが確認できたら,モデルのパラメータの代入部分\n```python\nmodel.load_state_dict(torch.load(\"params/model_state_dict.pth\"), strict=False)\n```\nなどをコメントアウトしてもう一度学習させてみましょう.\nおそらく分類精度が悪くなってしまうのが確認できると思います",
"_____no_output_____"
]
],
[
[
"torch.save(model.state_dict(), \"params/01_model_state_dict.pth\")",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"raw",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"raw"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e75f913b52db84a0de2ac0c36aabbdaeb55ebe51 | 419,626 | ipynb | Jupyter Notebook | assign/assignment1/knn.ipynb | zhmz90/CS231N | cb0c883a257f99aa3aabdd7552fc17207db596f9 | [
"MIT"
] | null | null | null | assign/assignment1/knn.ipynb | zhmz90/CS231N | cb0c883a257f99aa3aabdd7552fc17207db596f9 | [
"MIT"
] | null | null | null | assign/assignment1/knn.ipynb | zhmz90/CS231N | cb0c883a257f99aa3aabdd7552fc17207db596f9 | [
"MIT"
] | null | null | null | 589.36236 | 286,882 | 0.930986 | [
[
[
"# k-Nearest Neighbor (kNN) exercise\n\n*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*\n\nThe kNN classifier consists of two stages:\n\n- During training, the classifier takes the training data and simply remembers it\n- During testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples\n- The value of k is cross-validated\n\nIn this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.",
"_____no_output_____"
]
],
[
[
"%matplotlib",
"_____no_output_____"
],
[
"# Run some setup code for this notebook.\n\nimport random\nimport numpy as np\nfrom cs231n.data_utils import load_CIFAR10\nimport matplotlib.pyplot as plt\n\n# This is a bit of magic to make matplotlib figures appear inline in the notebook\n# rather than in a new window.\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# Some more magic so that the notebook will reload external python modules;\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
],
[
"# Load the raw CIFAR-10 data.\ncifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\nX_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n\n# As a sanity check, we print out the size of the training and test data.\nprint 'Training data shape: ', X_train.shape\nprint 'Training labels shape: ', y_train.shape\nprint 'Test data shape: ', X_test.shape\nprint 'Test labels shape: ', y_test.shape",
"Training data shape: (50000, 32, 32, 3)\nTraining labels shape: (50000,)\nTest data shape: (10000, 32, 32, 3)\nTest labels shape: (10000,)\n"
],
[
"# Visualize some examples from the dataset.\n# We show a few examples of training images from each class.\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nnum_classes = len(classes)\nsamples_per_class = 7\nfor y, cls in enumerate(classes):\n idxs = np.flatnonzero(y_train == y)\n idxs = np.random.choice(idxs, samples_per_class, replace=False)\n for i, idx in enumerate(idxs):\n plt_idx = i * num_classes + y + 1\n plt.subplot(samples_per_class, num_classes, plt_idx)\n #print idx\n plt.imshow(X_train[idx].astype('uint8'))\n plt.axis('off')\n if i == 0:\n plt.title(cls)\nplt.show()",
"_____no_output_____"
],
[
"# Subsample the data for more efficient code execution in this exercise\nnum_training = 5000\nmask = range(num_training)\nX_train = X_train[mask]\ny_train = y_train[mask]\n\nnum_test = 500\nmask = range(num_test)\nX_test = X_test[mask]\ny_test = y_test[mask]",
"_____no_output_____"
],
[
"# Reshape the image data into rows\nX_train = np.reshape(X_train, (X_train.shape[0], -1))\nX_test = np.reshape(X_test, (X_test.shape[0], -1))\nprint X_train.shape, X_test.shape",
"(5000, 3072) (500, 3072)\n"
],
[
"from cs231n.classifiers import KNearestNeighbor\n\n# Create a kNN classifier instance. \n# Remember that training a kNN classifier is a noop: \n# the Classifier simply remembers the data and does no further processing \nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)",
"_____no_output_____"
]
],
[
[
"We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: \n\n1. First we must compute the distances between all test examples and all train examples. \n2. Given these distances, for each test example we find the k nearest examples and have them vote for the label\n\nLets begin with computing the distance matrix between all training and test examples. For example, if there are **Ntr** training examples and **Nte** test examples, this stage should result in a **Nte x Ntr** matrix where each element (i,j) is the distance between the i-th test and j-th train example.\n\nFirst, open `cs231n/classifiers/k_nearest_neighbor.py` and implement the function `compute_distances_two_loops` that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.",
"_____no_output_____"
]
],
[
[
"# Open cs231n/classifiers/k_nearest_neighbor.py and implement\n# compute_distances_two_loops.\n\n# Test your implementation:\ndists = classifier.compute_distances_two_loops(X_test)",
"_____no_output_____"
],
[
"# We can visualize the distance matrix: each row is a single test example and\n# its distances to training examples\nplt.imshow(dists, interpolation='none')\nplt.show()",
"_____no_output_____"
]
],
[
[
"**Inline Question #1:** Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)\n\n- What in the data is the cause behind the distinctly bright rows?\n- What causes the columns?",
"_____no_output_____"
],
[
"**Your Answer**: *fill this in.* \n- some test image is similar to every image in the training dataset, in contrast some test image is not.\n- some train image is similar to each test image, in trast some train image is not.\n",
"_____no_output_____"
]
],
[
[
"# Now implement the function predict_labels and run the code below:\n# We use k = 1 (which is Nearest Neighbor).\ny_test_pred = classifier.predict_labels(dists, k=1)\n\n# Compute and print the fraction of correctly predicted examples\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)",
"Got 137 / 500 correct => accuracy: 0.274000\n"
]
],
[
[
"You should expect to see approximately `27%` accuracy. Now lets try out a larger `k`, say `k = 5`:",
"_____no_output_____"
]
],
[
[
"y_test_pred = classifier.predict_labels(dists, k=5)\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)",
"Got 142 / 500 correct => accuracy: 0.284000\n"
]
],
[
[
"You should expect to see a slightly better performance than with `k = 1`.",
"_____no_output_____"
]
],
[
[
"# Now lets speed up distance matrix computation by using partial vectorization\n# with one loop. Implement the function compute_distances_one_loop and run the\n# code below:\ndists_one = classifier.compute_distances_one_loop(X_test)\n\n# To ensure that our vectorized implementation is correct, we make sure that it\n# agrees with the naive implementation. There are many ways to decide whether\n# two matrices are similar; one of the simplest is the Frobenius norm. In case\n# you haven't seen it before, the Frobenius norm of two matrices is the square\n# root of the squared sum of differences of all elements; in other words, reshape\n# the matrices into vectors and compute the Euclidean distance between them.\ndifference = np.linalg.norm(dists - dists_one, ord='fro')\nprint 'Difference was: %f' % (difference, )\nif difference < 0.001:\n print 'Good! The distance matrices are the same'\nelse:\n print 'Uh-oh! The distance matrices are different'",
"_____no_output_____"
],
[
"# Now implement the fully vectorized version inside compute_distances_no_loops\n# and run the code\ndists_two = classifier.compute_distances_no_loops(X_test)\n\n# check that the distance matrix agrees with the one we computed before:\ndifference = np.linalg.norm(dists - dists_two, ord='fro')\nprint 'Difference was: %f' % (difference, )\nif difference < 0.001:\n print 'Good! The distance matrices are the same'\nelse:\n print 'Uh-oh! The distance matrices are different'",
"Difference was: 0.000000\nGood! The distance matrices are the same\n"
],
[
"# Let's compare how fast the implementations are\ndef time_function(f, *args):\n \"\"\"\n Call a function f with args and return the time (in seconds) that it took to execute.\n \"\"\"\n import time\n tic = time.time()\n f(*args)\n toc = time.time()\n return toc - tic\n\n#two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)\n#print 'Two loop version took %f seconds' % two_loop_time\n\none_loop_time = time_function(classifier.compute_distances_one_loop, X_test)\nprint 'One loop version took %f seconds' % one_loop_time\n\n#no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)\n#print 'No loop version took %f seconds' % no_loop_time\n\n# you should see significantly faster performance with the fully vectorized implementation",
"One loop version took 48.089830 seconds\n"
]
],
[
[
"### Cross-validation\n\nWe have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.",
"_____no_output_____"
]
],
[
[
"num_folds = 5\nk_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]\n\nX_train_folds = []\ny_train_folds = []\n################################################################################\n# TODO: #\n# Split up the training data into folds. After splitting, X_train_folds and #\n# y_train_folds should each be lists of length num_folds, where #\n# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #\n# Hint: Look up the numpy array_split function. #\n################################################################################\nnum_example_each_fold = X_train.shape[0] / num_folds\nX_train_folds = np.array_split(X_train, num_folds)\ny_train_folds = np.array_split(y_train, num_folds)\n\nprint \"num_example_each_fold is {}\".format(num_example_each_fold)\nprint \"X_train_folds:\", [data.shape for data in X_train_folds]\nprint \"y_train_folds:\", [data.shape for data in y_train_folds]\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# A dictionary holding the accuracies for different values of k that we find\n# when running cross-validation. After running cross-validation,\n# k_to_accuracies[k] should be a list of length num_folds giving the different\n# accuracy values that we found when using that value of k.\nk_to_accuracies = {}\n\n\n################################################################################\n# TODO: #\n# Perform k-fold cross validation to find the best value of k. For each #\n# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #\n# where in each case you use all but one of the folds as training data and the #\n# last fold as a validation set. Store the accuracies for all fold and all #\n# values of k in the k_to_accuracies dictionary. 
#\n################################################################################\ndef run_knn(X_train, y_train, X_test, y_test, k):\n #print X_train.shape,y_train.shape,X_test.shape,y_test.shape\n classifier = KNearestNeighbor()\n classifier.train(X_train, y_train)\n dists = classifier.compute_distances_no_loops(X_test)\n y_test_pred = classifier.predict_labels(dists, k=k)\n num_correct = np.sum(y_test_pred == y_test)\n num_test = X_test.shape[0]\n accuracy = float(num_correct) / num_test\n return accuracy\n\ndef run_knn_on_nfolds_with_k(X_train_folds,y_train_folds,num_folds,k):\n acc_list = []\n for ind_fold in range(num_folds):\n X_test = X_train_folds[ind_fold]\n y_test = y_train_folds[ind_fold]\n X_train = np.vstack(X_train_folds[0:ind_fold]+X_train_folds[ind_fold+1:])\n y_train = np.hstack(y_train_folds[0:ind_fold]+y_train_folds[ind_fold+1:])\n #print \"y_train_folds:\", [data.shape for data in y_train_folds]\n #print np.vstack(y_train_folds[0:ind_fold]+y_train_folds[ind_fold:]).shape\n #print X_train.shape,y_train.shape,X_test.shape,y_test.shape\n acc = run_knn(X_train,y_train,X_test,y_test,k)\n acc_list.append(acc)\n return acc_list\n\nfor k in k_choices:\n print \"run knn on {} folds data with k: {}\".format(num_folds,k)\n k_to_accuracies[k] = run_knn_on_nfolds_with_k(X_train_folds, y_train_folds, num_folds, k)\n\n\n\n\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Print out the computed accuracies\nfor k in sorted(k_to_accuracies):\n for accuracy in k_to_accuracies[k]:\n print 'k = %d, accuracy = %f' % (k, accuracy)",
"num_example_each_fold is 1000\nX_train_folds: [(1000, 3072), (1000, 3072), (1000, 3072), (1000, 3072), (1000, 3072)]\ny_train_folds: [(1000,), (1000,), (1000,), (1000,), (1000,)]\nrun knn on 5 folds data with k: 1\nrun knn on 5 folds data with k: 3\nrun knn on 5 folds data with k: 5\nrun knn on 5 folds data with k: 8\nrun knn on 5 folds data with k: 10\nrun knn on 5 folds data with k: 12\nrun knn on 5 folds data with k: 15\nrun knn on 5 folds data with k: 20\nrun knn on 5 folds data with k: 50\nrun knn on 5 folds data with k: 100\nk = 1, accuracy = 0.263000\nk = 1, accuracy = 0.257000\nk = 1, accuracy = 0.264000\nk = 1, accuracy = 0.278000\nk = 1, accuracy = 0.266000\nk = 3, accuracy = 0.241000\nk = 3, accuracy = 0.249000\nk = 3, accuracy = 0.243000\nk = 3, accuracy = 0.273000\nk = 3, accuracy = 0.264000\nk = 5, accuracy = 0.258000\nk = 5, accuracy = 0.273000\nk = 5, accuracy = 0.281000\nk = 5, accuracy = 0.290000\nk = 5, accuracy = 0.272000\nk = 8, accuracy = 0.263000\nk = 8, accuracy = 0.288000\nk = 8, accuracy = 0.278000\nk = 8, accuracy = 0.285000\nk = 8, accuracy = 0.277000\nk = 10, accuracy = 0.265000\nk = 10, accuracy = 0.296000\nk = 10, accuracy = 0.278000\nk = 10, accuracy = 0.284000\nk = 10, accuracy = 0.286000\nk = 12, accuracy = 0.260000\nk = 12, accuracy = 0.294000\nk = 12, accuracy = 0.281000\nk = 12, accuracy = 0.282000\nk = 12, accuracy = 0.281000\nk = 15, accuracy = 0.255000\nk = 15, accuracy = 0.290000\nk = 15, accuracy = 0.281000\nk = 15, accuracy = 0.281000\nk = 15, accuracy = 0.276000\nk = 20, accuracy = 0.270000\nk = 20, accuracy = 0.281000\nk = 20, accuracy = 0.280000\nk = 20, accuracy = 0.282000\nk = 20, accuracy = 0.284000\nk = 50, accuracy = 0.271000\nk = 50, accuracy = 0.288000\nk = 50, accuracy = 0.278000\nk = 50, accuracy = 0.269000\nk = 50, accuracy = 0.266000\nk = 100, accuracy = 0.256000\nk = 100, accuracy = 0.270000\nk = 100, accuracy = 0.263000\nk = 100, accuracy = 0.256000\nk = 100, accuracy = 0.263000\n"
],
[
"# plot the raw observations\nfor k in k_choices:\n accuracies = k_to_accuracies[k]\n plt.scatter([k] * len(accuracies), accuracies)\n\n# plot the trend line with error bars that correspond to standard deviation\naccuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])\nprint \"the best mean from k {}\".format(np.argmax(accuracies_mean))\naccuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])\nplt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)\nplt.title('Cross-validation on k')\nplt.xlabel('k')\nplt.ylabel('Cross-validation accuracy')\nplt.show()",
"the best mean from k 4\n"
],
[
"# Based on the cross-validation results above, choose the best value for k, \n# retrain the classifier using all the training data, and test it on the test\n# data. You should be able to get above 28% accuracy on the test data.\nbest_k = 4\n\nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)\ny_test_pred = classifier.predict(X_test, k=best_k)\n\n# Compute and display the accuracy\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)",
"Got 141 / 500 correct => accuracy: 0.282000\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e75f97ee0c56ef2c326b9122e76de17ef2fd3c8d | 27,984 | ipynb | Jupyter Notebook | BERT_distyll.ipynb | blanchefort/text_mining | 59a0b674fd6ea8b7b76773558eb4145df85a4d12 | [
"MIT"
] | 5 | 2020-03-27T06:38:31.000Z | 2022-02-10T10:39:47.000Z | BERT_distyll.ipynb | blanchefort/text_mining | 59a0b674fd6ea8b7b76773558eb4145df85a4d12 | [
"MIT"
] | null | null | null | BERT_distyll.ipynb | blanchefort/text_mining | 59a0b674fd6ea8b7b76773558eb4145df85a4d12 | [
"MIT"
] | 1 | 2020-11-23T20:25:52.000Z | 2020-11-23T20:25:52.000Z | 33.156398 | 318 | 0.472877 | [
[
[
"<a href=\"https://colab.research.google.com/github/blanchefort/text_mining/blob/master/BERT_distyll.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Дистилляция BERT\n\nДообученная модель BERT показывает очень хорошее качество при решении множества NLP-задач. Однако, её не всегда можно применить на практике из-за того, что модель очень большая и работает дастаточно медленно. В связи с этим было придумано несколько способов обойти это ограничение.\n\nОдин из способов - `knowledge distillation`.\n\nСуть метода заключается в следующем. Мы берём две модели - нашу обученную на решение конкретной задачи BERT (модель-учитель) и модель с более простой архитектурой (модель-ученик). Модель-ученик будет обучаться поведению модели-учителя: логиты Берта мы будем подавать модели-ученику в процессе её обучения.\n\nВ качестве модели-учителя возьмём уже обученную ранее модель, классифицирующую названия строительных товаров.",
"_____no_output_____"
],
[
"## Библиотеки",
"_____no_output_____"
]
],
[
[
"pip install transformers catboost",
"_____no_output_____"
],
[
"import os\nimport random\nimport numpy as np\nimport pandas as pd\nimport torch\n\nfrom transformers import AutoConfig, AutoModelForSequenceClassification\nfrom transformers import AutoTokenizer\nfrom torch.utils.data import TensorDataset, DataLoader, SequentialSampler\n\nfrom catboost import Pool, CatBoostRegressor\n\nfrom sklearn.metrics import classification_report\nfrom tqdm.notebook import tqdm\n\nSEED = 22\nos.environ['PYTHONHASHSEED'] = str(SEED)\nrandom.seed(SEED)\nnp.random.seed(SEED)\ntorch.manual_seed(SEED)\ntorch.cuda.manual_seed(SEED)\ntorch.backends.cudnn.benchmark = False\ntorch.backends.cudnn.deterministic = True\n\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\nprint(device.type)\nif device.type == 'cuda':\n print(torch.cuda.get_device_name(0))",
"cuda\nTesla P100-PCIE-16GB\n"
]
],
[
[
"## Загрузка токенизатора, модели, конфигурации",
"_____no_output_____"
]
],
[
[
"# config\nconfig = AutoConfig.from_pretrained('/content/drive/My Drive/colab_data/leroymerlin/model/BERT_model')\n# tokenizer\ntokenizer = AutoTokenizer.from_pretrained('/content/drive/My Drive/colab_data/leroymerlin/model/BERT_model', pad_to_max_length=True)\n# model\nmodel = AutoModelForSequenceClassification.from_pretrained('/content/drive/My Drive/colab_data/leroymerlin/model/BERT_model', config=config)",
"_____no_output_____"
]
],
[
[
"## Подготовка данных",
"_____no_output_____"
]
],
[
[
"category_index = {'Водоснабжение': 8,\n 'Декор': 12,\n 'Инструменты': 4,\n 'Краски': 11,\n 'Кухни': 15,\n 'Напольные покрытия': 5,\n 'Окна и двери': 2,\n 'Освещение': 13,\n 'Плитка': 6,\n 'Сад': 9,\n 'Сантехника': 7,\n 'Скобяные изделия': 10,\n 'Столярные изделия': 1,\n 'Стройматериалы': 0,\n 'Хранение': 14,\n 'Электротовары': 3}\ncategory_index_inverted = dict(map(reversed, category_index.items()))",
"_____no_output_____"
],
[
"df = pd.read_csv('/content/drive/My Drive/colab_data/leroymerlin/to_classifier.csv')\nsentences = df.name.values\nlabels = [category_index[i] for i in df.category_1.values]",
"_____no_output_____"
],
[
"tokens = [tokenizer.encode(\n sent, \n add_special_tokens=True, \n max_length=24, \n pad_to_max_length='right') for sent in sentences]",
"_____no_output_____"
],
[
"tokens_tensor = torch.tensor(tokens)\n#labels_tensor = torch.tensor(labels)",
"_____no_output_____"
],
[
"BATCH_SIZE = 400\n#full_dataset = TensorDataset(tokens_tensor, labels_tensor)\nsampler = SequentialSampler(tokens_tensor)\ndataloader = DataLoader(tokens_tensor, sampler=sampler, batch_size=BATCH_SIZE)",
"_____no_output_____"
]
],
[
[
"## Получение логитов BERT",
"_____no_output_____"
]
],
[
[
"train_logits = []\nwith torch.no_grad():\n model.to(device)\n for batch in tqdm(dataloader):\n batch = batch.to(device)\n outputs = model(batch)\n logits = outputs[0].detach().cpu().numpy()\n train_logits.extend(logits)",
"_____no_output_____"
],
[
"#train_logits = np.vstack(train_logits)",
"_____no_output_____"
]
],
[
[
"## Обучение ученика\n\nТеперь возьмём мультирегрессионную модель от CatBoost и передадим ей все полученные логиты.",
"_____no_output_____"
]
],
[
[
"data_pool = Pool(tokens, train_logits)",
"_____no_output_____"
],
[
"distilled_model = CatBoostRegressor(iterations=2000, \n depth=4, \n learning_rate=.1, \n loss_function='MultiRMSE',\n verbose=200)",
"_____no_output_____"
],
[
"distilled_model.fit(data_pool)",
"0:\tlearn: 11.6947874\ttotal: 275ms\tremaining: 9m 9s\n200:\tlearn: 9.0435970\ttotal: 47s\tremaining: 7m\n400:\tlearn: 8.2920608\ttotal: 1m 32s\tremaining: 6m 10s\n600:\tlearn: 7.7736947\ttotal: 2m 18s\tremaining: 5m 22s\n800:\tlearn: 7.3674586\ttotal: 3m 4s\tremaining: 4m 36s\n1000:\tlearn: 7.0166625\ttotal: 3m 51s\tremaining: 3m 51s\n1200:\tlearn: 6.7202548\ttotal: 4m 38s\tremaining: 3m 5s\n1400:\tlearn: 6.4602129\ttotal: 5m 25s\tremaining: 2m 19s\n1600:\tlearn: 6.2248947\ttotal: 6m 12s\tremaining: 1m 32s\n1800:\tlearn: 6.0164036\ttotal: 7m\tremaining: 46.4s\n1999:\tlearn: 5.8322141\ttotal: 7m 46s\tremaining: 0us\n"
]
],
[
[
"## Сравнение качества моделей",
"_____no_output_____"
]
],
[
[
"category_index_inverted = dict(map(reversed, category_index.items()))",
"_____no_output_____"
]
],
[
[
"### Метрики Берта:",
"_____no_output_____"
]
],
[
[
"print(classification_report(labels, np.argmax(train_logits, axis=1), target_names=category_index_inverted.values()))",
" precision recall f1-score support\n\n Водоснабжение 0.94 0.88 0.91 13377\n Декор 1.00 0.40 0.57 2716\n Инструменты 1.00 0.40 0.58 540\n Краски 0.97 0.81 0.88 20397\n Кухни 0.96 0.91 0.93 29920\nНапольные покрытия 1.00 0.56 0.72 2555\n Окна и двери 1.00 0.61 0.76 2440\n Освещение 0.98 0.92 0.95 30560\n Плитка 0.97 0.96 0.97 23922\n Сад 0.95 0.98 0.96 49518\n Сантехника 0.97 0.74 0.84 24245\n Скобяные изделия 0.85 0.93 0.89 15280\n Столярные изделия 0.58 0.95 0.72 30329\n Стройматериалы 0.98 0.67 0.80 8532\n Хранение 0.97 0.77 0.86 6237\n Электротовары 0.96 0.87 0.92 4019\n\n accuracy 0.89 264587\n macro avg 0.94 0.77 0.83 264587\n weighted avg 0.91 0.89 0.89 264587\n\n"
]
],
[
[
"### Метрики модели-ученика:",
"_____no_output_____"
]
],
[
[
"tokens_pool = Pool(tokens)\n\ndistilled_predicted_logits = distilled_model.predict(tokens_pool, prediction_type='RawFormulaVal') # Probability",
"_____no_output_____"
],
[
"print(classification_report(labels, np.argmax(distilled_predicted_logits, axis=1), target_names=category_index_inverted.values()))",
" precision recall f1-score support\n\n Водоснабжение 0.90 0.53 0.67 13377\n Декор 0.99 0.30 0.46 2716\n Инструменты 0.00 0.00 0.00 540\n Краски 0.97 0.61 0.75 20397\n Кухни 0.85 0.77 0.81 29920\nНапольные покрытия 1.00 0.28 0.44 2555\n Окна и двери 0.96 0.30 0.45 2440\n Освещение 0.92 0.82 0.87 30560\n Плитка 0.94 0.86 0.90 23922\n Сад 0.85 0.86 0.86 49518\n Сантехника 0.91 0.55 0.68 24245\n Скобяные изделия 0.61 0.78 0.69 15280\n Столярные изделия 0.40 0.92 0.56 30329\n Стройматериалы 0.80 0.64 0.71 8532\n Хранение 0.93 0.50 0.65 6237\n Электротовары 0.88 0.24 0.38 4019\n\n accuracy 0.74 264587\n macro avg 0.81 0.56 0.62 264587\n weighted avg 0.82 0.74 0.75 264587\n\n"
]
],
[
[
"Как видим, качество модели-ученика немного хуже качества Берта, но скорее всего модель-ученик сможет иметь то же качество, если мы произведём тонкую настройку гиперпараметров.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
e75fa31f7ed419311d1b98be6bcf4d7fb75d5761 | 4,937 | ipynb | Jupyter Notebook | code/eeg/eeg_param_leadfield_ssp-elems.ipynb | CyclotronResearchCentre/shamo-tutorials | 91fc9aab8d185b24ca8a6678d706292ce56e5a79 | [
"MIT"
] | null | null | null | code/eeg/eeg_param_leadfield_ssp-elems.ipynb | CyclotronResearchCentre/shamo-tutorials | 91fc9aab8d185b24ca8a6678d706292ce56e5a79 | [
"MIT"
] | null | null | null | code/eeg/eeg_param_leadfield_ssp-elems.ipynb | CyclotronResearchCentre/shamo-tutorials | 91fc9aab8d185b24ca8a6678d706292ce56e5a79 | [
"MIT"
] | 1 | 2022-03-06T13:53:39.000Z | 2022-03-06T13:53:39.000Z | 29.386905 | 335 | 0.607859 | [
[
[
"EEG parametric leadfield computation - On all ROI elements\n==========================================================\n\nThe computation of a parametric solution for the EEG leadfield matrix takes almost the same parameters as the single problem.\n\nThe first step is to load the finite element model created before.",
"_____no_output_____"
]
],
[
[
"from shamo import FEM\n\nmodel = FEM.load(\"../../derivatives/fem_from_labels/fem_from_labels.json\")",
"_____no_output_____"
]
],
[
[
"Next, we import the :py:class:`~shamo.eeg.leadfield.parametric.problem.ProbParamEEGLeadfield` class and create an instance of it.",
"_____no_output_____"
]
],
[
[
"from shamo.eeg import ProbParamEEGLeadfield\n\nproblem = ProbParamEEGLeadfield()",
"_____no_output_____"
]
],
[
[
"As for the single problem, we must set the electrical conductivity of the tissues but this time, we must provide probability distributions. If a parameter is fixed, the :py:class:`~shamo.core.distributions.constant.DistConstant` can be used. Otherwise, we can pick from the following probability laws:\n\n* :py:class:`~shamo.core.distributions.uniform.DistUniform`\n* :py:class:`~shamo.core.distributions.normal.DistNormal`\n* :py:class:`~shamo.core.distributions.normal.DistTruncNormal`\n\nFor the sake of this example, we only use uniform distributions and define the ranges with the values reported in :footcite:`mccann_variation_2019`.",
"_____no_output_____"
]
],
[
[
"from shamo import DistUniform\n\nproblem.sigmas.set(\"scalp\", DistUniform(0.137, 2.1))\nproblem.sigmas.set(\"gm\", DistUniform(0.06, 2.47))\nproblem.sigmas.set(\"wm\", DistUniform(0.0646, 0.81))",
"_____no_output_____"
]
],
[
[
"The electrodes and the regions of interest are set as for the :py:class:`~shamo.eeg.leadfield.single.problem.ProbEEGLeadfield`.",
"_____no_output_____"
]
],
[
[
"problem.reference.add(\"IZ\")",
"_____no_output_____"
],
[
"problem.markers.adds([\"NZ\", \"LeftEar\", \"RightEar\"])",
"_____no_output_____"
],
[
"problem.rois.add(\"gm\")",
"_____no_output_____"
]
],
[
[
"Finally, we can solve the problem to generate `n_evals` sub-solutions. The `method` parameter determines how the solutions are solved:\n\n* `\"sequential\"` means each solution is computed one at a time.\n* `\"multiprocessing\"` means `n_proc` solutions are computed in parallel on the same computing node.\n* `\"jobs\"` means a python script is generated for every sub-solution. Those scripts can be run in any way we like, on a HPC unit or on the computer. If this solution is chosen, the :py:func:`~shamo.eeg.leadfield.parametric.problem.ProbParamEEGLeadfield.finalize` method must be called after all the sub-solutions are generated.",
"_____no_output_____"
]
],
[
[
"solution = problem.solve(\"parametric_ssp-elems\", \"../../derivatives/eeg_leadfield\", model, n_evals=4, method=\"multiprocessing\", n_proc=4)",
"_____no_output_____"
]
],
[
[
"We now have multiple sub-solutions accessible with a single parametric solution. To really use the power of those results, we still have to generate a surrogate model.\n\n.. footbibliography::",
"_____no_output_____"
]
]
] | [
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw"
] | [
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
],
[
"code",
"code",
"code"
],
[
"raw"
],
[
"code"
],
[
"raw"
]
] |
e75fb0ba1dad3b8457fbae0197522f9f9d1fb51b | 50,478 | ipynb | Jupyter Notebook | notebooks/TensorBayes.ipynb | jklopf/tensorbayes | 4b0cb3c565e9603a972135ddf7cfbe28a23b3a4a | [
"MIT"
] | 1 | 2018-11-12T16:58:32.000Z | 2018-11-12T16:58:32.000Z | notebooks/TensorBayes.ipynb | jklopf/tensorbayes | 4b0cb3c565e9603a972135ddf7cfbe28a23b3a4a | [
"MIT"
] | null | null | null | notebooks/TensorBayes.ipynb | jklopf/tensorbayes | 4b0cb3c565e9603a972135ddf7cfbe28a23b3a4a | [
"MIT"
] | null | null | null | 38.211961 | 1,759 | 0.54816 | [
[
[
"# TensorBayes\n\n### Adaptation of `BayesC.cpp`\n\n",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf",
"_____no_output_____"
],
[
"import tensorflow_probability as tfp",
"_____no_output_____"
],
[
"import numpy as np",
"_____no_output_____"
],
[
"tfd = tfp.distributions",
"_____no_output_____"
]
],
[
[
"## File input\n\nTo do",
"_____no_output_____"
]
],
[
[
"# Get the numbers of columns in the csv:\n# File I/O here \nfilenames = \"\"\n\ncsv_in = open(filenames, \"r\") # open the csv\nncol = len(csv_in.readline().split(\",\")) # read the first line and count the # of columns\ncsv_in.close() # close the csv\nprint(\"Number of columns in the csv: \" + str(ncol)) # print the # of columns",
"_____no_output_____"
]
],
[
[
"## Reproducibility\n\nSeed setting for reproducible research.",
"_____no_output_____"
]
],
[
[
"# To do: get a numpy seed or look at how TF implements rng.\n\n# each distributions.sample() seen below can be seeded.\n# ex. dist.sample(seed=32): returns a sample of shape=() (scalar).\n\n# Set graph-level seed\ntf.set_random_seed(1234)",
"_____no_output_____"
]
],
[
[
"## Distribution functions\n\n- Random Uniform:  \nreturns a sample from a uniform distribution with limit parameters `lower` and `higher`.\n  \n  \n- Random Normal:  \nreturns a sample from a normal distribution with parameters `mean` and `standard deviation`.\n  \n  \n- Random Beta:  \nreturns a random quantile of a beta distribution with parameters `alpha` and `beta`.\n  \n  \n- Random Inverse Chi$^2$:  \nreturns a random quantile of an inverse chi$^2$ distribution with parameters `degrees of freedom` and `scale`.\n  \n  \n- Random Bernoulli:  \nreturns a sample from a Bernoulli distribution with probability of success `p`.\n  \n  ",
"_____no_output_____"
]
],
[
[
"# Note: written as a translation of BayesC.cpp\n# the function definitions might not be needed,\n# and the declarations of the distributions could be enough\n\ndef runif(lower, higher):\n    dist = tfd.Uniform(lower, higher)\n    return dist.sample()\n\ndef rnorm(mean, sd):\n    dist = tfd.Normal(loc= mean, scale= sd)\n    return dist.sample()\n\ndef rbeta(alpha, beta):\n    dist = tfd.Beta(float(alpha), float(beta))\n    return dist.sample()\n\ndef rinvchisq(df, scale):\n    dist = tfd.InverseGamma(df*0.5, df*scale*0.5)\n    return dist.sample()\n\ndef rbernoulli(p):\n    dist = tfd.Bernoulli(probs=p)\n    return dist.sample()\n",
"_____no_output_____"
]
],
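Side note (not part of the original notebook): the `rinvchisq` helper above encodes the identity Scale-inv-χ²(ν, s²) = Inverse-Gamma(ν/2, ν·s²/2). A quick NumPy cross-check of that parameterization, using the equivalent construction (ν·s²)/χ²(ν); the constants here are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1234)

def rinvchisq_np(df, scale, size):
    # Scale-inv-chi^2(df, scale) samples: (df * scale) / chi^2(df).
    # This is the same distribution as InverseGamma(df/2, df*scale/2),
    # the parameterization used by the notebook's rinvchisq().
    return (df * scale) / rng.chisquare(df, size=size)

df, scale = 10.0, 2.0
samples = rinvchisq_np(df, scale, 200_000)

# For df > 2 the theoretical mean is df*scale/(df-2); the sample mean
# should sit close to it for a large draw.
expected_mean = df * scale / (df - 2)
print(round(float(samples.mean()), 2), expected_mean)
```

If the sample mean were far from `expected_mean`, the `InverseGamma(df/2, df*scale/2)` parameterization in `rinvchisq` would be suspect; matching values support it.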
[
[
"## Sampling functions\n\n- Sampling of the mean \n \n \n- Sampling of the variance of beta \n \n \n- Sampling of the error variance of Y \n \n \n- Sample of the mixture weight \n \n ",
"_____no_output_____"
]
],
[
[
"# sample mean\ndef sample_mu(N, Esigma2, Y, X, beta): #as in BayesC, with the N parameter\n mean = tf.reduce_sum(tf.subtract(Y, tf.matmul(X, beta)))/N\n sd = tf.sqrt(Esigma2/N)\n mu = rnorm(mean, sd)\n return mu\n\n# sample variance of beta\ndef sample_psi2_chisq( beta, NZ, v0B, s0B):\n df=v0B+NZ\n scale=(tf.nn.l2_loss(beta)*2*NZ+v0B*s0B)/(v0B+NZ)\n psi2=rinvchisq(df, scale)\n return(psi2)\n\n\n# sample error variance of Y\ndef sample_sigma_chisq( N, epsilon, v0E, s0E):\n sigma2=rinvchisq(v0E+N, (tf.nn.l2_loss(epsilon)*2+v0E*s0E)/(v0E+N))\n return(sigma2)\n\n\n# sample mixture weight\ndef sample_w( M, NZ):\n w=rbeta(1+NZ,1+(M-NZ))\n return(w)\n\n\n \n \n ",
"_____no_output_____"
]
],
[
[
"## Simulate data",
"_____no_output_____"
]
],
[
[
"def build_toy_dataset(N, beta, sigmaY_true=1):\n \n features = len(beta)\n x = np.random.randn(N, features)\n y = np.dot(x, beta) + np.random.normal(0, sigmaY_true, size=N)\n return x, y\n\nN = 40 # number of data points\nM = 10 # number of features\n\nbeta_true = np.random.randn(M)\nx, y = build_toy_dataset(N, beta_true)\n\n",
"_____no_output_____"
],
[
"# Could be implemented:\n# building datasets using TF API without numpy\n\n",
"_____no_output_____"
],
[
"X = tf.constant(x, shape=[N,M], dtype=tf.float32)",
"_____no_output_____"
],
[
"Y = tf.constant(y, shape = [N,1], dtype=tf.float32)",
"_____no_output_____"
],
[
"bte = rbeta(1,1)",
"_____no_output_____"
],
[
"bte",
"_____no_output_____"
]
],
[
[
"## Parameters setup",
"_____no_output_____"
]
],
[
[
"# Distinction between constant and variables\n# Variables: values might change between evaluation of the graph\n# (if something changes within the graph, it should be a variable)\n\nEmu = tf.Variable(0., trainable=False)\nvEmu = tf.ones([N,1])\nEbeta = tf.zeros([M,1])\nny = tf.zeros(M)\nEw = tf.constant(0.)\nepsilon = Y - tf.matmul(X,Ebeta) - vEmu*Emu\nNZ = tf.constant([0])\n\nEsigma2 = tf.nn.l2_loss(epsilon)/N\nEpsi2 = rbeta(1.,1.)\n\n",
"_____no_output_____"
],
[
"epsilon",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
],
[
"#Standard parameterization of hyperpriors for variances\n#double v0E=0.001,s0E=0.001,v0B=0.001,s0B=0.001;\n\n#Alternative parameterization of hyperpriors for variances\nv0E, v0B = 4, 4\ns0B=((v0B-2)/v0B)*Epsi2\ns0E=((v0E-2)/v0E)*Esigma2",
"_____no_output_____"
],
[
"# pre-computed elements for calculations\nel1 = tf.matmul(tf.transpose(X),X)",
"_____no_output_____"
],
[
"epsilon",
"_____no_output_____"
]
],
[
[
"## Tensorboard graph",
"_____no_output_____"
]
],
[
[
"writer = tf.summary.FileWriter('.')\nwriter.add_graph(tf.get_default_graph())",
"_____no_output_____"
]
],
[
[
"## Gibbs sampling",
"_____no_output_____"
]
],
[
[
"# Open session\nsess = tf.Session()",
"_____no_output_____"
],
[
"# Initialize variables\ninit = tf.global_variables_initializer()\nsess.run(init)",
"_____no_output_____"
],
[
"num_iter = 50",
"_____no_output_____"
],
[
"print(sess.run(tf.report_uninitialized_variables()))\n",
"[]\n"
],
[
"#debug for just 1 marker 0\nepsilon = tf.add(epsilon, X[:,0]*Ebeta[0])\nCj=tf.nn.l2_loss(X[:,0])*2+Esigma2/Epsi2 #adjusted variance\nrj= tf.matmul(tf.reshape(X[:,0], [1,N]),tf.reshape(epsilon, [N,1])) # mean\n\nratio=((tf.exp(-(tf.pow(rj,2))/(2*Cj*Esigma2))*tf.sqrt((Epsi2*Cj)/Esigma2)))\n\nratio=Ew/(Ew+ratio*(1-Ew))",
"_____no_output_____"
],
[
"# debug continuation for marker 0 (X.col(j) was Eigen/C++ syntax; use X[:, 0] here)\nif (ny[0]==0):\n    Ebeta[0]=0\nelif (ny[0]==1):\n    Ebeta[0]=rnorm(rj/Cj,Esigma2/Cj)\n    update = epsilon-X[:,0]*Ebeta[0]\n    sess.run(tf.assign(epsilon,update))    ",
"_____no_output_____"
],
[
"epsilon = tf.multiply(X[:,0],Ebeta[0])\nepsilon",
"_____no_output_____"
],
[
"\n\n\nb0 = rnorm(rj/Cj,Esigma2/Cj)",
"_____no_output_____"
],
[
"b00 = sess.run(b0)\nb00",
"_____no_output_____"
],
[
"sess.run(Cj)",
"_____no_output_____"
],
[
"ww = sess.run(rnorm(1.,1.))\nww",
"_____no_output_____"
],
[
"update_epsilon = epsilon.assign(tf.add(epsilon, X[:,0] * Ebeta[0]))",
"_____no_output_____"
],
[
"mull= X[:,0]*Ebeta[0]\nmull",
"_____no_output_____"
],
[
"epsilon",
"_____no_output_____"
],
[
"epsilon = Y - tf.matmul(X,Ebeta) - vEmu*Emu",
"_____no_output_____"
],
[
"ep31 = tf.squeeze(epsilon, axis=1) + mull\nep31",
"_____no_output_____"
],
[
"ep22 = tf.reshape(epsilon, [40])\nep22",
"_____no_output_____"
],
[
"# actual code (translation fixes: el1 diagonal, ny[j], X[:, j] instead of Eigen's X.col(j))\nfor i in range(num_iter):\n    \n    Emu = sample_mu(N, Esigma2, Y, X, Ebeta)\n    \n    for j in range(M): # implement random column\n        \n        epsilon = epsilon + X[:,j]*Ebeta[j]\n        Cj=el1[j,j]+Esigma2/Epsi2 # adjusted variance (diagonal element of X'X)\n        rj= tf.reduce_sum(X[:,j]*epsilon) # mean term (inner product, not elementwise)\n        \n        ratio=tf.exp(-(tf.pow(rj,2))/(2*Cj*Esigma2))*tf.sqrt((Epsi2*Cj)/Esigma2)\n        ratio=Ew/(Ew+ratio*(1-Ew))\n        \n        # note: item assignment requires Ebeta/ny to be tf.Variables (e.g. tf.scatter_update)\n        if (ny[j]==0):\n            Ebeta[j]=0\n        elif (ny[j]==1):\n            Ebeta[j]=rnorm(rj/Cj,Esigma2/Cj)\n            update = epsilon-X[:,j]*Ebeta[j]\n            sess.run(tf.assign(epsilon,update))    \n    for j in range(M):\n        print(sess.run(Ebeta[j]))\n        print(sess.run(ny[j]))\n    Ew=sample_w(M,NZ)\n    epsilon=Y-tf.matmul(X,Ebeta)-vEmu*Emu\n\n    Epsi2=sample_psi2_chisq(Ebeta,NZ,v0B,s0B)\n    Esigma2=sample_sigma_chisq(N,epsilon,v0E,s0E)",
"_____no_output_____"
],
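As a reference point (not from the original notebook): the TF loop above is a direct translation of the BayesC per-marker update and still needs `tf.Variable`s for the in-place assignments. The update itself is easier to see in plain NumPy. The sketch below mirrors the notebook's `Cj`/`rj`/`ratio` expressions for one sweep over the markers; the toy data, starting values of `Ew`, `Epsi2`, `Esigma2`, and the Bernoulli inclusion step are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1234)

# Toy data matching the notebook's dimensions
N, M = 40, 10
X = rng.standard_normal((N, M))
beta_true = np.zeros(M)
beta_true[:3] = rng.standard_normal(3)          # a few nonzero effects
y = X @ beta_true + rng.standard_normal(N)

# State (names mirror the notebook; starting values are illustrative)
Ebeta = np.zeros(M)
Emu = 0.0
Ew, Epsi2, Esigma2 = 0.5, 1.0, 1.0
epsilon = y - X @ Ebeta - Emu
xtx = np.einsum('ij,ij->j', X, X)               # diag(X'X), the notebook's el1

for j in rng.permutation(M):                    # one sweep, markers in random order
    epsilon = epsilon + X[:, j] * Ebeta[j]      # remove marker j's effect
    Cj = xtx[j] + Esigma2 / Epsi2               # adjusted variance
    rj = X[:, j] @ epsilon                      # mean term
    ratio = np.exp(-rj**2 / (2 * Cj * Esigma2)) * np.sqrt(Epsi2 * Cj / Esigma2)
    p_incl = Ew / (Ew + ratio * (1 - Ew))       # mirrors the notebook's expression
    if rng.random() < p_incl:                   # ny[j] = 1: marker in the model
        Ebeta[j] = rng.normal(rj / Cj, np.sqrt(Esigma2 / Cj))
    else:                                       # ny[j] = 0: spike at zero
        Ebeta[j] = 0.0
    epsilon = epsilon - X[:, j] * Ebeta[j]      # restore the residual

print(int(np.count_nonzero(Ebeta)))
```

Because the residual is updated incrementally (remove marker j, sample, put it back), no full `Y - X @ Ebeta` recomputation is needed inside the inner loop.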
[
"testConst = tf.ones([5,5])",
"_____no_output_____"
],
[
"testConst = tf.constant(5, shape=[5,5])",
"_____no_output_____"
],
[
"sess.run(testConst)",
"_____no_output_____"
],
[
"testConst = testConst * 4",
"_____no_output_____"
],
[
"sess.run(testConst)",
"_____no_output_____"
],
[
"testVar = tf.Variable(tf.ones([5,5]))",
"_____no_output_____"
],
[
"sess.run(testVar)",
"_____no_output_____"
],
[
"testVar = testVar * 3",
"_____no_output_____"
],
[
"sess.run(testVar)",
"_____no_output_____"
],
[
"testVar.assign(testVar*2)",
"_____no_output_____"
],
[
"v = tf.get_variable(\"v\", shape=(), initializer=tf.zeros_initializer())\nassignment = v.assign_add(1)\ntf.global_variables_initializer().run(session = sess)\nsess.run(assignment) # or assignment.op.run(), or assignment.eval()\n",
"_____no_output_____"
],
[
"# WORKING VERSION\n\n\n\n\n\n\n# Create random column order list (dataset) + iterator\ncol_list = tf.data.Dataset.range(ncol).shuffle(buffer_size=ncol)\ncol_next = col_list.make_one_shot_iterator().get_next()\n\n#def scale_zscore(vector):\n# mean, var = tf.nn.moments(vector, axes=[0])\n# normalized_col = tf.map_fn(lambda x: (x - mean)/tf.sqrt(var), vector)\n# return normalized_col\n\n# Launch of graph\nwith tf.Session() as sess:\n\n while True: # Loop on 'col_next', the queue of column iterator\n try:\n index = sess.run(col_next)\n dataset = tf.contrib.data.CsvDataset( # Creates a dataset of the current csv column\n \"ex.csv\",\n [tf.float32],\n select_cols=[index] # Only parse last three columns\n )\n next_element = dataset.make_one_shot_iterator().get_next() # Creates an iterator\n print('Current column to be full pass: ' + str(index))\n current_col = []\n while True: \n try:\n current_col.append(sess.run(next_element)[0]) # Full pass\n except tf.errors.OutOfRangeError: # End of full pass\n \n print(current_col)\n current_col = tf.convert_to_tensor([current_col])\n mean, var = tf.nn.moments(current_col, axes=[0])\n normalized_col = tf.map_fn(lambda x: (x - mean)/tf.sqrt(var), current_col)\n print(normalized_col)\n print('\\n')\n \n break\n\n\n \n\n except tf.errors.OutOfRangeError:\n break\n\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75fb2572937851d9f007e87856a3a023aba28ad | 912,398 | ipynb | Jupyter Notebook | test.ipynb | schlinkertc/Spotify | 2c8eb0ad352236d587573b368b92e0e272fb0297 | [
"MIT"
] | null | null | null | test.ipynb | schlinkertc/Spotify | 2c8eb0ad352236d587573b368b92e0e272fb0297 | [
"MIT"
] | null | null | null | test.ipynb | schlinkertc/Spotify | 2c8eb0ad352236d587573b368b92e0e272fb0297 | [
"MIT"
] | null | null | null | 620.256968 | 430,980 | 0.931614 | [
[
[
"import pandas as pd\nfrom spotify import *",
"_____no_output_____"
],
[
"del_water_tracks = get_artistTracks('Del Water Gap')",
"_____no_output_____"
],
[
"nick_tracks = get_artistTracks('Nick Cianci')",
"_____no_output_____"
]
],
[
[
"## track features ",
"_____no_output_____"
]
],
[
[
"del_water_tracks[:5]",
"_____no_output_____"
],
[
"def tracks_toDF(tracks):\n records = []\n for track in tracks:\n audio_features = sp.audio_features(track['id'])[0]\n audio_features['artists'] = track['artists']\n audio_features['name'] = track['name']\n records.append(audio_features)\n\n df = pd.DataFrame.from_records(records)\n\n df['artists'] = df.apply(lambda x : \"\".join([artist+', ' for artist in x['artists']]).strip(', '),axis=1)\n\n trs = sp.tracks(df['id'].to_list())\n\n df['popularity'] = df['id'].map({x['id']:x['popularity'] for x in trs['tracks']})\n\n df['release_date'] = df['id'].map({x['id']:x['album']['release_date'] for x in trs['tracks']})\n return df\ndf = tracks_toDF(del_water_tracks)",
"_____no_output_____"
],
[
"df[df['name']=='Theory of Emotion']",
"_____no_output_____"
],
[
"df.loc[18,'name']='Theory of Emotion'",
"_____no_output_____"
],
[
"data = df.groupby('name').mean()",
"_____no_output_____"
]
],
[
[
"### Visualize ",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style('whitegrid')\n\n%config InlineBackend.figure_format = 'retina'\n%matplotlib inline\n\nsns.set(color_codes=True)\nsns.set(rc={'figure.figsize':(20,18)})",
"_____no_output_____"
],
[
"nick_df = tracks_toDF(nick_tracks)\nnick_data = nick_df.groupby('name').mean()",
"_____no_output_____"
],
[
"def track_scatterPlot(data,x,y,xlabel,ylabel,title):\n p1 = sns.regplot(data=data,x=x,y=y,fit_reg=False,scatter_kws={'s':400})\n for line in range(0,data.shape[0]):\n p1.text(data[x][line]+.02,data[y][line],data.index[line],\n horizontalalignment='left',size='large',color='black',weight='semibold')\n plt.xlabel(xlabel,fontdict={'size':25})\n plt.ylabel(ylabel,fontdict={'size':25})\n plt.title(title,fontdict={'size':30})\n plt.savefig('images/'+title)",
"_____no_output_____"
],
[
"track_scatterPlot(nick_data,'valence','energy','Happiness Score','Energy Score','Nick Cianci Tracks Ranked by Happiness, Energy')",
"_____no_output_____"
],
[
"track_scatterPlot(data,'valence','danceability','Happiness Score','Danceability Score','DWG Tracks Ranked by Happiness, Danceability')",
"_____no_output_____"
],
[
"samia_tracks = get_artistTracks('Samia')\nsamia_df = tracks_toDF(samia_tracks)\nsamia_df['name']=samia_df['name'].str.title()\nsamia_data = samia_df[samia_df['artists']=='Samia'].groupby('name').mean()",
"_____no_output_____"
],
[
"samia_data",
"_____no_output_____"
],
[
"with_charlie = [\n 'I Am Drunk, And She Is Insane', 'Lost My Cat / Put in a Cage', \n \"Rockman's Pier\", 'Still in Love', 'Lamplight', 'Cut the Rope', \n 'Vanessa', 'High Tops', 'Love Song for Lady Earth', \"Let's Pretend\",\n 'Deirdre, Pt. I', \"Don't Read the Mirror\", 'Laid Down My Arms','Theory of Emotion'\n]",
"_____no_output_____"
],
[
"not_charlie = df[~df['name'].isin(with_charlie)]['valence']",
"_____no_output_____"
],
[
"charlie = df[df['name'].isin(with_charlie)]['valence']",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\nax.boxplot([charlie,not_charlie])\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"## by playlist",
"_____no_output_____"
]
],
[
[
"sam_playlists = sp.user_playlists(1250134147)",
"_____no_output_____"
],
[
"sam_playlist_ids = [\n {'playlist_id':x['id'],'playlist_name':x['name']} for x in sam_playlists['items']]",
"_____no_output_____"
],
[
"sam_playlist_ids",
"_____no_output_____"
],
[
"import time\n\nrecords = []\nfor playlist in sam_playlist_ids:\n    playlist_items = sp.playlist_tracks(playlist['playlist_id'])['items']\n    track_ids = [item['track']['id'] for item in playlist_items]\n    \n    audio_features = [sp.audio_features(x)[0] for x in track_ids]\n    for track in audio_features:\n        track['playlist_name']=playlist['playlist_name']\n    records.extend(audio_features)\n    time.sleep(5)",
"_____no_output_____"
],
[
"playlist_df = pd.DataFrame.from_records(records)",
"_____no_output_____"
],
[
"playlist_df",
"_____no_output_____"
]
],
[
[
"### Songs I've played on ",
"_____no_output_____"
]
],
[
[
"my_tracks=[\n {\n 'name':item['track']['name'],\n 'id':item['track']['id'],\n 'artists':[x['name'] for x in item['track']['artists']]\n } for \n item in sp.playlist_tracks(\"spotify:playlist:4XeFzR948Yyk1X4SsXXogr\")['items']\n]",
"_____no_output_____"
],
[
"my_df = tracks_toDF(my_tracks)",
"_____no_output_____"
],
[
"track_scatterPlot(my_df,'valence','energy','Happiness Score','Energy Score','My Tracks Ranked by Happiness, Energy')",
"_____no_output_____"
],
[
"sns.scatterplot(my_df['valence'],my_df['energy'],hue=my_df['artists'])",
"_____no_output_____"
],
[
"kw = {\n 'data':my_df,\n 'x':'valence','y':'danceability',\n 'hue':'artists',\n 's':400\n}",
"_____no_output_____"
],
[
"sns.scatterplot(**kw)",
"_____no_output_____"
],
[
"sns.regplot(data=data,x=x,y=y,fit_reg=False,scatter_kws={'s':400})",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75fb781cf2a923abb24771b57de1f414f9f776c | 12,928 | ipynb | Jupyter Notebook | 02.great_expectations_validation/3.Valid_data_via_function.ipynb | pengfei99/DataQualityAndValidation | a305ef87087ae0bf19ce057a9c0c5e27e4a6de5a | [
"Apache-2.0"
] | null | null | null | 02.great_expectations_validation/3.Valid_data_via_function.ipynb | pengfei99/DataQualityAndValidation | a305ef87087ae0bf19ce057a9c0c5e27e4a6de5a | [
"Apache-2.0"
] | null | null | null | 02.great_expectations_validation/3.Valid_data_via_function.ipynb | pengfei99/DataQualityAndValidation | a305ef87087ae0bf19ce057a9c0c5e27e4a6de5a | [
"Apache-2.0"
] | null | null | null | 42.666667 | 2,833 | 0.490408 | [
[
[
"# Tutorial 3\nIn tutorials 1 and 2, we have seen how to use Great Expectations as a project framework to validate data. If you don't want to use all the features it provides, you can just use the simple validation methods directly on a dataframe.",
"_____no_output_____"
]
],
[
[
"import great_expectations as ge\nimport pandas as pd\nfrom matplotlib import pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"file_path=\"../data/adult_with_duplicates.csv\"\n\n\ndf = pd.read_csv(file_path)\n\n# convert pandas dataframe to ge dataframe\ndf = ge.dataset.PandasDataset(df)\nprint(df.columns)",
"Index(['age', 'workclass', 'fnlwgt', 'education', 'education-num',\n 'marital-status', 'occupation', 'relationship', 'race', 'sex',\n 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country',\n 'income'],\n dtype='object')\n"
],
[
"df.head()",
"_____no_output_____"
]
],
[
[
"# Apply validation methods directly on the dataframe\n\nThe method below checks if the dataframe has the expected column names. It is equivalent to the YAML config\n\n```yaml\n\"expectations\": [\n    {\n      \"expectation_type\": \"expect_table_columns_to_match_ordered_list\",\n      \"kwargs\": {\n        \"column_list\": [\n          \"age\",\n          \"workclass\",\n          \"fnlwgt\",\n          \"education\",\n          \"education-num\",\n          \"marital-status\",\n          \"occupation\",\n          \"relationship\",\n          \"race\",\n          \"sex\",\n          \"capital-gain\",\n          \"capital-loss\",\n          \"hours-per-week\",\n          \"native-country\",\n          \"income\"\n        ]\n      },\n      \"meta\": {}\n    }\n]\n```",
"_____no_output_____"
]
],
[
[
"column_list= [\n \"age\",\n \"workclass\",\n \"fnlwgt\",\n \"education\",\n \"education-num\",\n \"marital-status\",\n \"occupation\",\n \"relationship\",\n \"race\",\n \"sex\",\n \"capital-gain\",\n \"capital-loss\",\n \"hours-per-week\",\n \"native-country\",\n \"income\",\n # \"toto\"\n ]\ndf.expect_table_columns_to_match_ordered_list(column_list=column_list)",
"_____no_output_____"
]
],
[
[
"The method below checks if the age value is between 0 and 120. It is equivalent to the YAML config\n\n```yaml\n    {\n      \"expectation_type\": \"expect_column_values_to_be_between\",\n      \"kwargs\": {\n        \"column\": \"age\",\n        \"max_value\": 120.0,\n        \"min_value\": 0.0\n      },\n      \"meta\": {}\n    }\n```",
"_____no_output_____"
]
],
[
[
"# ge dataframe provides access to all validation method\n\ndf.expect_column_values_to_be_between(column='age', min_value=0, max_value=120)",
"_____no_output_____"
],
[
"df.expect_column_values_to_not_be_null(\"age\")",
"_____no_output_____"
],
[
"values= (\"Private\", \"Self-emp-not-inc\", \"Self-emp-inc\", \"Federal-gov\", \"Local-gov\", \"State-gov\", \"Without-pay\", \"Never-worked\")\ndf.expect_column_values_to_be_in_set(\"workclass\",value_set=values)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e75fba17ded8ae0bb1a9f24296845f79abff4aaa | 44,559 | ipynb | Jupyter Notebook | scripts/_jhirner/hillsborough_expand_acs_variables.ipynb | ahopejasen/datakind-test | 032202f2913a2af6b2ad29d888da559ad2aa0ee8 | [
"MIT"
] | 12 | 2021-03-01T03:53:03.000Z | 2022-02-15T09:57:52.000Z | scripts/_jhirner/hillsborough_expand_acs_variables.ipynb | ahopejasen/datakind-test | 032202f2913a2af6b2ad29d888da559ad2aa0ee8 | [
"MIT"
] | 17 | 2021-03-04T01:00:05.000Z | 2021-03-12T06:33:22.000Z | scripts/_jhirner/hillsborough_expand_acs_variables.ipynb | ahopejasen/datakind-test | 032202f2913a2af6b2ad29d888da559ad2aa0ee8 | [
"MIT"
] | 21 | 2021-03-02T03:30:49.000Z | 2021-04-13T19:48:01.000Z | 37.634291 | 213 | 0.367445 | [
[
[
"# Expand Hillsborough data set with additional ACS variables",
"_____no_output_____"
],
[
"### Header information\n*DataDive goal targeted:* \"Expand the list of ACS variables available for analysis by joining the processed dataset with the full list of data profile variables.\"\n\n*Contact info*: Josh Hirner, [email protected]",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"### Import the required data sets:\n(1) Processed housing insecurity data as `hb_proc`, (2) the ACS demographic dataset as `hb_acs`, and (3) the ACS data dictionary for interpreting `DPxx_xxxx` codes as `acs_dict`.",
"_____no_output_____"
]
],
[
[
"hb_proc = pd.read_csv(\"../../data/processed/hillsborough_fl_processed_2017_to_2019_20210225.csv\")",
"_____no_output_____"
],
[
"hb_proc.head()",
"_____no_output_____"
],
[
"hb_acs = pd.read_csv(\"../../data/acs/hillsborough_acs5-2018_census.csv\")",
"_____no_output_____"
],
[
"hb_acs.head()",
"_____no_output_____"
],
[
"acs_dict = pd.read_csv(\"../../data/acs/data_dictionary.csv\")",
"_____no_output_____"
],
[
"acs_dict.head()",
"_____no_output_____"
]
],
[
[
"### Expand the processed data set.\nJoin `hb_proc` (processed Hillsborough housing insecurity) and `hb_acs` (Hillsborough ACS demographics) datasets on the GEO ID columns to generate the expanded Hillsborough dataset, `hb_expand`.",
"_____no_output_____"
]
],
[
[
"hb_expand = pd.merge(hb_proc, hb_acs, left_on = \"census_tract_GEOID\", right_on = \"GEOID\", how = \"inner\")",
"_____no_output_____"
],
[
"hb_expand = hb_expand.drop([\"GEOID\", \"index\"], axis = 1)\nhb_expand.head()",
"_____no_output_____"
]
],
[
[
"### Quick evaluation for new correlations\nLet's see if anything interesting popped up in this merge. For illustrative purposes only, we'll restrict this correlation to the `avg-housing-loss-rate` column from the original processed Hillsborough data.",
"_____no_output_____"
]
],
[
[
"hb_corr = hb_expand.corr(method = \"spearman\")",
"_____no_output_____"
],
[
"# Examine correlation coefficients only for avg-housing-loss-rate, \n# and only with newly merged columns (i.e.: not present in the original processed data set)\nhb_housing_loss_corr = hb_corr[\"avg-housing-loss-rate\"].dropna().drop(hb_proc.columns, axis = 0, errors = \"ignore\").sort_values(ascending = True)\nhb_housing_loss_corr = pd.DataFrame(hb_housing_loss_corr)\nhb_housing_loss_corr",
"_____no_output_____"
],
[
"pd.set_option('display.max_colwidth', None)\npd.merge(hb_housing_loss_corr, acs_dict[[\"variable\", \"label\"]], left_index = True, right_on = \"variable\")",
"_____no_output_____"
]
],
[
[
"At a very cursory glance, it appears as though the expanded ACS variables offer both strong positive and strong negative correlations to housing insecurity.",
"_____no_output_____"
],
[
"### Export the expanded data",
"_____no_output_____"
]
],
[
[
"hb_expand.to_csv(\"../../data/processed/hillsborough_fl_processed_expanded_ACS_2017_to_2019_20210225.csv\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
e75fc13fd06842cb84e3e0502c97596dab8bbb56 | 25,017 | ipynb | Jupyter Notebook | code/history_backup/clustering_based_models_v1-reply_reference-200_dim.ipynb | InscribeDeeper/Text-Classification | 9cd3def58b5bd4b722a5b8fdff60a07d977234aa | [
"MIT"
] | null | null | null | code/history_backup/clustering_based_models_v1-reply_reference-200_dim.ipynb | InscribeDeeper/Text-Classification | 9cd3def58b5bd4b722a5b8fdff60a07d977234aa | [
"MIT"
] | null | null | null | code/history_backup/clustering_based_models_v1-reply_reference-200_dim.ipynb | InscribeDeeper/Text-Classification | 9cd3def58b5bd4b722a5b8fdff60a07d977234aa | [
"MIT"
] | null | null | null | 49.246063 | 1,845 | 0.636727 | [
[
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Clustering-based\" data-toc-modified-id=\"Clustering-based-1\"><span class=\"toc-item-num\">1 </span>Clustering based</a></span><ul class=\"toc-item\"><li><span><a href=\"#modeling\" data-toc-modified-id=\"modeling-1.1\"><span class=\"toc-item-num\">1.1 </span>modeling</a></span></li><li><span><a href=\"#prediction\" data-toc-modified-id=\"prediction-1.2\"><span class=\"toc-item-num\">1.2 </span>prediction</a></span></li><li><span><a href=\"#evaluation\" data-toc-modified-id=\"evaluation-1.3\"><span class=\"toc-item-num\">1.3 </span>evaluation</a></span></li></ul></li></ul></div>",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import pairwise_distances\nfrom sklearn import metrics\nfrom sklearn import mixture\nfrom sklearn.cluster import KMeans\nfrom nltk.cluster import KMeansClusterer, cosine_distance\nimport pandas as pd\nfrom sklearn.model_selection import GridSearchCV, train_test_split\nfrom sklearn.pipeline import Pipeline\nfrom matplotlib import pyplot as plt\nimport numpy as np\nfrom sklearn import svm\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.naive_bayes import MultinomialNB\nfrom IPython.core.interactiveshell import InteractiveShell\nfrom sklearn.model_selection import cross_validate\nfrom sklearn.metrics import precision_recall_fscore_support, classification_report, roc_curve, auc, precision_recall_curve\nfrom IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\"",
"_____no_output_____"
],
[
"seeds = 1234",
"_____no_output_____"
],
[
"train = pd.read_json('../data/structured_train.json')\ntest = pd.read_json('../data/structured_test.json')",
"_____no_output_____"
],
[
"# train = train.groupby('label').sample(50, random_state=seeds)\n# test = test.groupby('label').sample(50, random_state=seeds)",
"_____no_output_____"
],
[
"select_cols = [\"global_index\", \"doc_path\", \"label\", \"reply\", \"reference_one\", \"reference_two\",\n \"Subject\", \"From\", \"Lines\", \"Organization\", \"contained_emails\", \"long_string\", \"text\", \"error_message\"]\nprint(\"\\nmay use cols: \\n\", select_cols)\ntrain = train[select_cols]\ntest = test[select_cols]",
"\nmay use cols: \n ['global_index', 'doc_path', 'label', 'reply', 'reference_one', 'reference_two', 'Subject', 'From', 'Lines', 'Organization', 'contained_emails', 'long_string', 'text', 'error_message']\n"
]
],
[
[
"# Clustering based\n- Steps:\n    1. Transform into TF-IDF matrix\n    2. Dimension reduction to 200 dimensions\n    3. Clustering in cosine similarity space (appropriate for word-weight vectors)\n    4. Assign labels with majority vote based on training set labels\n    5. Prediction\n        1. Transform test set into TF-IDF matrix\n        2. Dimension reduction to 200 dimensions\n        3. Make predictions based on the clusters and the cluster-to-label mapping from the training set\n    6. Evaluation\n        1. Based on classification report",
"_____no_output_____"
],
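Side note (an alternative not used in this notebook): the nltk `KMeansClusterer` with `cosine_distance` used below can be slow on larger matrices. A common sklearn-based approximation runs ordinary Euclidean k-means on L2-normalized rows, since ‖u − v‖² = 2(1 − cos(u, v)) for unit vectors; exact spherical k-means would also re-normalize the centroids each step. A minimal sketch on illustrative random data standing in for the SVD-reduced DTM:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(1234)
X = rng.random((100, 20))   # stand-in for the 200-dim reduced DTM

# L2-normalize each row so Euclidean distance orders points like cosine distance.
Xn = normalize(X)
labels = KMeans(n_clusters=5, n_init=10, random_state=1234).fit_predict(Xn)
print(np.bincount(labels))
```

The resulting `labels` can be fed into the same majority-vote cluster-to-label mapping used later in this notebook.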
[
"## modeling",
"_____no_output_____"
]
],
[
[
"train_text = train['reply'] + ' ' + train['reference_one']\ntrain_label = train['label']\ntest_text = test['reply'] + ' ' + test['reference_one']\ntest_label = test['label']",
"_____no_output_____"
],
[
"from sklearn.decomposition import TruncatedSVD\n\n\ndef tfidf_vectorizer(train_text, test_text, min_df=3):\n tfidf_vect = TfidfVectorizer(stop_words=\"english\", min_df=min_df, max_df=0.95)\n dtm_train = tfidf_vect.fit_transform(train_text)\n dtm_test = tfidf_vect.transform(test_text)\n \n word_to_idx = tfidf_vect.vocabulary_\n print(\"num of words:\", len(word_to_idx))\n return dtm_train, dtm_test, word_to_idx, tfidf_vect\n\ndef dimension_reduction(dtm, out_dim=200, verbose=0):\n print(\"Dimension reduction with truncate SVD:\")\n print(\" input columns with \", dtm.shape[1])\n print(\" output columns with \", out_dim)\n\n transform_mapper = TruncatedSVD(n_components=out_dim)\n dtm = transform_mapper.fit_transform(dtm)\n if verbose > 0:\n print(\"singular_values_: \", transform_mapper.singular_values_)\n return dtm, transform_mapper",
"_____no_output_____"
],
[
"def fit_clustering_model(dtm_train, train_label, num_clusters, metric='Cosine', model='KMeans', repeats=20):\n    '''\n    Fit a clustering model on the training DTM, then map each cluster to a\n    class label by majority vote over the training labels.\n    '''\n    assert metric in ['Cosine']\n    assert model in ['KMeans']\n\n    # model training\n    if model == 'KMeans':\n        if metric == 'Cosine':\n            clusterer = KMeansClusterer(num_clusters, cosine_distance, repeats=repeats, avoid_empty_clusters=True)\n            clusters = clusterer.cluster(dtm_train, assign_clusters=True)\n            train_cluster_pred = [clusterer.classify(v) for v in dtm_train]\n\n    elif model == 'GMM':\n        pass \n        # GMM model not good in such case\n        # clusterer = mixture.GaussianMixture(n_components=num_clusters, n_init=repeats, covariance_type='diag')\n        # clusterer.fit(dtm_train)\n        # train_cluster_pred = clusterer.predict(dtm_train)\n    \n    # Mapping clusters into labels\n    df = pd.DataFrame(list(zip(train_label, train_cluster_pred)), columns=['actual_class', 'cluster'])\n    confusion = pd.crosstab(index=df.cluster, columns=df.actual_class)\n    clusters_to_labels = confusion.idxmax(axis=1)\n    \n    print(\"Cluster to label mapping: \")\n    for idx, t in enumerate(clusters_to_labels):\n        print(\"Cluster {} <-> label {}\".format(idx, t))\n    print(\"\\n\")\n\n    return clusterer, clusters_to_labels\n\ndef pred_clustering_model(dtm_test, clusterer, clusters_to_labels):\n    test_cluster_pred = [clusterer.classify(v) for v in dtm_test]\n    predict = [clusters_to_labels[i] for i in test_cluster_pred]\n    return predict",
"_____no_output_____"
],
[
"dtm_train, dtm_test, word_to_idx, tfidf_vect = tfidf_vectorizer(train_text, test_text, min_df=3)\ndtm_train, transform_mapper = dimension_reduction(dtm_train, out_dim=200)\ndtm_test = transform_mapper.transform(dtm_test)\n\nprint('dtm_train.shape', dtm_train.shape)\nprint('dtm_test.shape', dtm_test.shape)",
"num of words: 27588\nDimension reduction with truncate SVD:\n input columns with 27588\n output columns with 200\ndtm_train.shape (11083, 200)\ndtm_test.shape (7761, 200)\n"
],
[
"clusterer, clusters_to_labels = fit_clustering_model(dtm_train, train_label, num_clusters=80, repeats=5)",
"C:\\Users\\Administrator\\Anaconda3\\envs\\py810\\lib\\site-packages\\nltk\\cluster\\util.py:131: RuntimeWarning: invalid value encountered in double_scalars\n return 1 - (numpy.dot(u, v) / (sqrt(numpy.dot(u, u)) * sqrt(numpy.dot(v, v))))\n"
]
],
[
[
"## prediction",
"_____no_output_____"
]
],
[
[
"pred = pred_clustering_model(dtm_test, clusterer, clusters_to_labels)",
"_____no_output_____"
]
],
[
[
"## evaluation",
"_____no_output_____"
]
],
[
[
"from sklearn import preprocessing\n# le = preprocessing.LabelEncoder()\n# encoded_test_label = le.fit_transform(test_label)\n# print(metrics.classification_report(y_true = encoded_test_label, y_pred=pred, target_names=le.classes_))\nprint(metrics.classification_report(y_true = test_label, y_pred=pred))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e75fc5d5da77cceb2c50b5d33932f19b7df7941e | 5,776 | ipynb | Jupyter Notebook | Machine Learning Projects/Useful_Code_Examples/Deep_Learning/ANN-GeoDemographicSegmentation/ANN-GridSearch.ipynb | samlaubscher/HacktoberFest2020-Contributions | b86e06bd93d68e703e8a9d8415db0a8d63c75c4b | [
"MIT"
] | 256 | 2020-09-30T19:31:34.000Z | 2021-11-20T18:09:15.000Z | Machine Learning Projects/Useful_Code_Examples/Deep_Learning/ANN-GeoDemographicSegmentation/ANN-GridSearch.ipynb | samlaubscher/HacktoberFest2020-Contributions | b86e06bd93d68e703e8a9d8415db0a8d63c75c4b | [
"MIT"
] | 293 | 2020-09-30T19:14:54.000Z | 2021-06-06T02:34:47.000Z | Machine Learning Projects/Useful_Code_Examples/Deep_Learning/ANN-GeoDemographicSegmentation/ANN-GridSearch.ipynb | samlaubscher/HacktoberFest2020-Contributions | b86e06bd93d68e703e8a9d8415db0a8d63c75c4b | [
"MIT"
] | 1,620 | 2020-09-30T18:37:44.000Z | 2022-03-03T20:54:22.000Z | 22.740157 | 113 | 0.529778 | [
[
[
"# Artificial Neural Networks\n#### Geo-Demographic Segmentation",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nimport numpy as np\nimport pandas as pd",
"_____no_output_____"
],
[
"from tensorflow.compat.v1 import ConfigProto\nfrom tensorflow.compat.v1 import InteractiveSession\n\nconfig = ConfigProto()\nconfig.gpu_options.allow_growth = True\nsession = InteractiveSession(config=config)",
"_____no_output_____"
]
],
[
[
"## Part 1 - Data Preprocessing",
"_____no_output_____"
],
[
"### Data Loading",
"_____no_output_____"
]
],
[
[
"PATH = \"../../../../Deep_Learning/ANN/Python/Churn_Modelling.csv\"",
"_____no_output_____"
],
[
"dataset = pd.read_csv(PATH)",
"_____no_output_____"
],
[
"dataset.head()",
"_____no_output_____"
],
[
"X = dataset.iloc[:, 3:-1].values\ny = dataset.iloc[:, -1].values",
"_____no_output_____"
]
],
[
[
"### Encoding the Categorical Variables",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import LabelEncoder\n\nle = LabelEncoder()\nX[:, 2] = le.fit_transform(X[:, 2])",
"_____no_output_____"
],
[
"from sklearn.compose import ColumnTransformer\nfrom sklearn.preprocessing import OneHotEncoder\n\nct = ColumnTransformer(transformers=[('encoder', \n OneHotEncoder(), [1])], \n remainder='passthrough')\nX = np.array(ct.fit_transform(X))",
"_____no_output_____"
]
],
[
[
"### Train Test Split",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, \n test_size=0.20, \n random_state=42)",
"_____no_output_____"
]
],
[
[
"### Feature Scaling",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import StandardScaler\n\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.transform(X_test)",
"_____no_output_____"
]
],
[
[
"### Grid Search",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Dropout",
"_____no_output_____"
],
[
"from tensorflow.keras.wrappers.scikit_learn import KerasClassifier\nfrom sklearn.model_selection import GridSearchCV",
"_____no_output_____"
],
[
"def build_classifier(optimizer='adam'):\n    tf.random.set_seed(42)\n    classifier = Sequential()\n    classifier.add(Dense(6, activation='relu', input_shape=(12, )))\n    classifier.add(Dense(3, activation='relu'))\n    classifier.add(Dense(1, activation='sigmoid'))\n    classifier.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])\n    return classifier",
"_____no_output_____"
],
[
"parameters = {\"batch_size\": [25, 32], \"epochs\": [100, 500], \"optimizer\": [\"adam\", \"rmsprop\"]}\n\nclassifier = KerasClassifier(build_fn=build_classifier)\ngs = GridSearchCV(estimator=classifier, param_grid=parameters,\n                  scoring='accuracy', cv=10, n_jobs=-1)\ngs = gs.fit(X_train, y_train)",
"_____no_output_____"
],
[
"best_accuracy = gs.best_score_\nbest_parameters = gs.best_params_\n\nprint(\"Best Accuracy \\t\\t\\t%.2f\" % best_accuracy)\nprint(\"Best Parameters \\t\", best_parameters)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e75fcdfb5cd6c25222f732402fc11377e14e80e2 | 304,441 | ipynb | Jupyter Notebook | lstm_fb.ipynb | ctxj/Financial-Time-Series | 29b1938069152982ec2a56be86458b2189120b03 | [
"Apache-2.0"
] | null | null | null | lstm_fb.ipynb | ctxj/Financial-Time-Series | 29b1938069152982ec2a56be86458b2189120b03 | [
"Apache-2.0"
] | null | null | null | lstm_fb.ipynb | ctxj/Financial-Time-Series | 29b1938069152982ec2a56be86458b2189120b03 | [
"Apache-2.0"
] | null | null | null | 304,441 | 304,441 | 0.758433 | [
[
[
"# LSTM Model\nTrain an LSTM (long short-term memory) model to forecast a time series of stock closing prices \nUse 5 time steps to forecast 1 forward time step ",
"_____no_output_____"
],
[
"Imports",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\nimport tensorflow as tf\n",
"_____no_output_____"
]
],
[
[
"Load the closing price data",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('fb_rsi.csv')\ndf['date'] = pd.to_datetime(df['date'])",
"_____no_output_____"
]
],
[
[
"Convert pandas object to numpy array ",
"_____no_output_____"
]
],
[
[
"close = np.array(df['close'])\ndate = np.array(df['date'])\nprint(len(close))",
"160679\n"
]
],
[
[
"There are 160,679 time steps, split 100,000 for training and the rest for testing ",
"_____no_output_____"
]
],
[
[
"split = 100000\ndate_train = date[:split] #Training split\nx_train = close[:split]\ndate_val = date[split:] #Testing split\nx_val = close[split:]\n\n#Variables for the windowing function below\nwindow_size = 5 #Number of time steps per window\nbatch_size = 250 #Number of sequences loaded into the model at once, depends on GPU memory\nshuffle_buffer_size = 10000 #Instead of shuffling all 100,000 training examples at once, shuffle 10,000 at a time",
"_____no_output_____"
]
],
[
[
"Window function to split the sequence into features and labels \nSequence is split into windows of 5 time steps as the feature and the next time step as the label",
"_____no_output_____"
]
],
[
[
"def windowed_dataset(series, window_size, batch_size, shuffle_buffer):\n series = tf.expand_dims(series, axis=-1)\n ds = tf.data.Dataset.from_tensor_slices(series)\n ds = ds.window(window_size + 1, shift=1, drop_remainder=True)\n ds = ds.flat_map(lambda w: w.batch(window_size + 1))\n ds = ds.shuffle(shuffle_buffer)\n ds = ds.map(lambda w: (w[:-1], w[1:]))\n return ds.batch(batch_size).prefetch(1)",
"_____no_output_____"
]
],
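To make the windowing concrete, here is a plain-NumPy sketch of what `windowed_dataset` produces before shuffling and batching: each length-6 window is split into a 5-step feature sequence `w[:-1]` and a 5-step label sequence `w[1:]` shifted one step forward. This is an illustrative re-implementation, not the notebook's code.

```python
import numpy as np

def make_windows(series, window_size):
    # Mirror of the tf.data pipeline: take windows of (window_size + 1) steps,
    # split each into a (w[:-1], w[1:]) feature/label pair.
    X, y = [], []
    for i in range(len(series) - window_size):
        X.append(series[i:i + window_size])
        y.append(series[i + 1:i + window_size + 1])
    return np.array(X), np.array(y)

X, y = make_windows(np.arange(10.0), window_size=5)
print(X[0])  # [0. 1. 2. 3. 4.]
print(y[0])  # [1. 2. 3. 4. 5.]
```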
[
[
"# LSTM Model\nCombine convolutional layers with LSTM layers for the complete model",
"_____no_output_____"
]
],
[
[
"train_set = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)\n\nmodel = tf.keras.models.Sequential([\n    tf.keras.layers.Conv1D(filters=60, kernel_size=5,\n                           strides=1, padding='causal',\n                           activation='relu', input_shape=[None, 1]), #1D convolution layer\n    tf.keras.layers.LSTM(60, return_sequences=True),\n    tf.keras.layers.LSTM(60, return_sequences=True),\n    tf.keras.layers.Dense(30, activation='relu'),\n    tf.keras.layers.Dense(10, activation='relu'),\n    tf.keras.layers.Dense(1),\n    tf.keras.layers.Lambda(lambda x: x*500)\n]) #Lambda layer multiplies the small network output by 500 to match the scale of stock prices (factor depends on the stock's peak price)\n\noptimizer = tf.keras.optimizers.SGD(lr=1e-5, momentum=0.9)\n\nmodel.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer, metrics=['mae']) #Huber loss, tracking MAE (mean absolute error)",
"_____no_output_____"
]
],
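The model is compiled with `tf.keras.losses.Huber()`, which is quadratic for small errors and linear for large ones, making training less sensitive to price spikes than squared error. A NumPy sketch of the elementwise loss (assuming the Keras default `delta=1.0`):

```python
import numpy as np

def huber(error, delta=1.0):
    # Quadratic inside [-delta, delta], linear outside, joined smoothly.
    abs_e = np.abs(error)
    quadratic = 0.5 * error ** 2
    linear = delta * (abs_e - 0.5 * delta)
    return np.where(abs_e <= delta, quadratic, linear)

losses = huber(np.array([0.5, 3.0]))  # small error -> quadratic, large -> linear
print(losses)
```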
[
[
"Train the model for 600 epochs \nAveraging 7 seconds per epoch, training took about 1 hour 10 minutes",
"_____no_output_____"
]
],
[
[
"history = model.fit(train_set, epochs=600)",
"6\nEpoch 399/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.5849 - mae: 2.0633\nEpoch 400/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.7061 - mae: 2.1753\nEpoch 401/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.3381 - mae: 1.7994\nEpoch 402/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.5575 - mae: 2.0218\nEpoch 403/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.5579 - mae: 2.0143\nEpoch 404/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.6738 - mae: 2.1568\nEpoch 405/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.6589 - mae: 2.1397\nEpoch 406/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.6263 - mae: 2.0867\nEpoch 407/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.5948 - mae: 2.0709\nEpoch 408/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.1096 - mae: 1.5535\nEpoch 409/600\n400/400 [==============================] - 8s 20ms/step - loss: 1.6734 - mae: 2.1566\nEpoch 410/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.5305 - mae: 1.9963\nEpoch 411/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.8236 - mae: 2.2669\nEpoch 412/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.9870 - mae: 2.4531\nEpoch 413/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.2531 - mae: 1.6865\nEpoch 414/600\n400/400 [==============================] - 8s 19ms/step - loss: 0.9024 - mae: 1.3268\nEpoch 415/600\n400/400 [==============================] - 8s 20ms/step - loss: 1.0019 - mae: 1.4780\nEpoch 416/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.4858 - mae: 1.9429\nEpoch 417/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.5762 - mae: 2.0457\nEpoch 418/600\n400/400 [==============================] - 7s 18ms/step - 
loss: 1.3389 - mae: 1.7981\nEpoch 419/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.0765 - mae: 1.4980\nEpoch 420/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.5060 - mae: 1.9719\nEpoch 421/600\n400/400 [==============================] - 8s 20ms/step - loss: 1.1912 - mae: 1.6366\nEpoch 422/600\n400/400 [==============================] - 8s 20ms/step - loss: 1.3516 - mae: 1.8257\nEpoch 423/600\n400/400 [==============================] - 9s 22ms/step - loss: 1.7951 - mae: 2.2717\nEpoch 424/600\n400/400 [==============================] - 9s 21ms/step - loss: 1.6585 - mae: 2.1422\nEpoch 425/600\n400/400 [==============================] - 8s 21ms/step - loss: 1.4196 - mae: 1.8754\nEpoch 426/600\n400/400 [==============================] - 8s 21ms/step - loss: 1.6456 - mae: 2.1283\nEpoch 427/600\n400/400 [==============================] - 8s 20ms/step - loss: 1.3869 - mae: 1.8471\nEpoch 428/600\n400/400 [==============================] - 8s 20ms/step - loss: 1.6411 - mae: 2.1110\nEpoch 429/600\n400/400 [==============================] - 8s 20ms/step - loss: 1.6794 - mae: 2.1632\nEpoch 430/600\n400/400 [==============================] - 8s 20ms/step - loss: 1.6639 - mae: 2.1483\nEpoch 431/600\n400/400 [==============================] - 8s 20ms/step - loss: 1.4836 - mae: 1.9503\nEpoch 432/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.4515 - mae: 1.8901\nEpoch 433/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.5238 - mae: 1.9873\nEpoch 434/600\n400/400 [==============================] - 8s 20ms/step - loss: 1.5361 - mae: 2.0096\nEpoch 435/600\n400/400 [==============================] - 8s 20ms/step - loss: 1.4229 - mae: 1.8839\nEpoch 436/600\n400/400 [==============================] - 8s 20ms/step - loss: 1.6713 - mae: 2.1446\nEpoch 437/600\n400/400 [==============================] - 8s 20ms/step - loss: 1.1247 - mae: 1.5766\nEpoch 438/600\n400/400 
[==============================] - 8s 20ms/step - loss: 1.4537 - mae: 1.9149\nEpoch 439/600\n400/400 [==============================] - 8s 20ms/step - loss: 1.3624 - mae: 1.8203\nEpoch 440/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.6472 - mae: 2.1317\nEpoch 441/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.6369 - mae: 2.1199\nEpoch 442/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6758 - mae: 2.1579\nEpoch 443/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.1671 - mae: 1.6063\nEpoch 444/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6348 - mae: 2.1066\nEpoch 445/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.7310 - mae: 2.2055\nEpoch 446/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.4785 - mae: 1.9432\nEpoch 447/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.6251 - mae: 2.0903\nEpoch 448/600\n400/400 [==============================] - 8s 21ms/step - loss: 1.7280 - mae: 2.2073\nEpoch 449/600\n400/400 [==============================] - 8s 20ms/step - loss: 1.2934 - mae: 1.7336\nEpoch 450/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.6659 - mae: 1.0913\nEpoch 451/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.3238 - mae: 1.7776\nEpoch 452/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.0277 - mae: 1.4632\nEpoch 453/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.5756 - mae: 2.0427\nEpoch 454/600\n400/400 [==============================] - 6s 16ms/step - loss: 1.6578 - mae: 2.1059\nEpoch 455/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.4609 - mae: 1.9144\nEpoch 456/600\n400/400 [==============================] - 6s 16ms/step - loss: 0.7380 - mae: 1.1605\nEpoch 457/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6279 - mae: 
2.0814\nEpoch 458/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.2247 - mae: 1.6772\nEpoch 459/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.8314 - mae: 1.2541\nEpoch 460/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.3121 - mae: 1.7459\nEpoch 461/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6419 - mae: 2.1257\nEpoch 462/600\n400/400 [==============================] - 6s 16ms/step - loss: 1.5466 - mae: 2.0056\nEpoch 463/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.9828 - mae: 2.4526\nEpoch 464/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6315 - mae: 2.1142\nEpoch 465/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6313 - mae: 2.1139\nEpoch 466/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.5665 - mae: 2.0304\nEpoch 467/600\n400/400 [==============================] - 7s 16ms/step - loss: 1.2692 - mae: 1.7287\nEpoch 468/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.5647 - mae: 2.0338\nEpoch 469/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.7325 - mae: 2.2077\nEpoch 470/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6432 - mae: 2.1244\nEpoch 471/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6551 - mae: 2.1385\nEpoch 472/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.3729 - mae: 1.8206\nEpoch 473/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.7420 - mae: 1.1516\nEpoch 474/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.3386 - mae: 1.8006\nEpoch 475/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.6457 - mae: 1.0779\nEpoch 476/600\n400/400 [==============================] - 7s 16ms/step - loss: 1.5043 - mae: 1.9704\nEpoch 477/600\n400/400 [==============================] - 7s 17ms/step - 
loss: 1.6379 - mae: 2.1215\nEpoch 478/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6453 - mae: 2.1291\nEpoch 479/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6590 - mae: 2.1339\nEpoch 480/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.4412 - mae: 1.8987\nEpoch 481/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.5973 - mae: 2.0529\nEpoch 482/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.0133 - mae: 1.4744\nEpoch 483/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.8867 - mae: 2.3664\nEpoch 484/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.1934 - mae: 1.6427\nEpoch 485/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.8351 - mae: 2.3138\nEpoch 486/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.5506 - mae: 2.0226\nEpoch 487/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.7765 - mae: 2.2476\nEpoch 488/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.0242 - mae: 1.4472\nEpoch 489/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.0442 - mae: 1.5030\nEpoch 490/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6844 - mae: 2.1629\nEpoch 491/600\n400/400 [==============================] - 7s 16ms/step - loss: 1.5619 - mae: 2.0398\nEpoch 492/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.2187 - mae: 1.6703\nEpoch 493/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.1336 - mae: 1.5838\nEpoch 494/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.4797 - mae: 1.9052\nEpoch 495/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.0984 - mae: 1.5603\nEpoch 496/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.1988 - mae: 1.6127\nEpoch 497/600\n400/400 
[==============================] - 7s 17ms/step - loss: 0.8465 - mae: 1.2945\nEpoch 498/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.9138 - mae: 1.3738\nEpoch 499/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.2073 - mae: 1.6564\nEpoch 500/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.4427 - mae: 1.8775\nEpoch 501/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.4910 - mae: 1.9461\nEpoch 502/600\n400/400 [==============================] - 6s 16ms/step - loss: 1.4618 - mae: 1.9364\nEpoch 503/600\n400/400 [==============================] - 6s 16ms/step - loss: 1.8323 - mae: 2.2968\nEpoch 504/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.5970 - mae: 2.0795\nEpoch 505/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.4047 - mae: 1.8764\nEpoch 506/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.4754 - mae: 1.9451\nEpoch 507/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6561 - mae: 2.1080\nEpoch 508/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.4798 - mae: 1.9521\nEpoch 509/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.8180 - mae: 1.2375\nEpoch 510/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.9185 - mae: 1.3577\nEpoch 511/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.8483 - mae: 1.2807\nEpoch 512/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.5596 - mae: 1.0082\nEpoch 513/600\n400/400 [==============================] - 7s 16ms/step - loss: 1.0205 - mae: 1.4483\nEpoch 514/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.7247 - mae: 2.1934\nEpoch 515/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.2426 - mae: 1.6937\nEpoch 516/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.5148 - mae: 
1.9776\nEpoch 517/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.9924 - mae: 1.4148\nEpoch 518/600\n400/400 [==============================] - 6s 16ms/step - loss: 0.8676 - mae: 1.2930\nEpoch 519/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.3828 - mae: 1.8465\nEpoch 520/600\n400/400 [==============================] - 7s 16ms/step - loss: 1.5484 - mae: 2.0032\nEpoch 521/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.3121 - mae: 1.7753\nEpoch 522/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.6449 - mae: 1.0808\nEpoch 523/600\n400/400 [==============================] - 7s 16ms/step - loss: 1.2557 - mae: 1.7135\nEpoch 524/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.9896 - mae: 1.4206\nEpoch 525/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.8547 - mae: 1.2651\nEpoch 526/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.1599 - mae: 1.5834\nEpoch 527/600\n400/400 [==============================] - 7s 16ms/step - loss: 1.4033 - mae: 1.8672\nEpoch 528/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.9266 - mae: 2.3877\nEpoch 529/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.4873 - mae: 0.9148\nEpoch 530/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.9901 - mae: 1.4156\nEpoch 531/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6903 - mae: 2.1735\nEpoch 532/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.2559 - mae: 1.6907\nEpoch 533/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.2732 - mae: 1.7274\nEpoch 534/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.5017 - mae: 0.9356\nEpoch 535/600\n400/400 [==============================] - 7s 16ms/step - loss: 0.6974 - mae: 1.1073\nEpoch 536/600\n400/400 [==============================] - 7s 17ms/step - 
loss: 1.2113 - mae: 1.6721\nEpoch 537/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.5959 - mae: 2.0772\nEpoch 538/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.3133 - mae: 1.7772\nEpoch 539/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.6290 - mae: 1.0635\nEpoch 540/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.9007 - mae: 1.3453\nEpoch 541/600\n400/400 [==============================] - 7s 16ms/step - loss: 1.5741 - mae: 2.0568\nEpoch 542/600\n400/400 [==============================] - 6s 16ms/step - loss: 1.8759 - mae: 2.3523\nEpoch 543/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.0877 - mae: 1.5487\nEpoch 544/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.2358 - mae: 1.6846\nEpoch 545/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.9646 - mae: 1.4239\nEpoch 546/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.3492 - mae: 1.8123\nEpoch 547/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.6114 - mae: 1.0452\nEpoch 548/600\n400/400 [==============================] - 7s 16ms/step - loss: 0.6969 - mae: 1.1550\nEpoch 549/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.5358 - mae: 1.9799\nEpoch 550/600\n400/400 [==============================] - 7s 16ms/step - loss: 1.0174 - mae: 1.4647\nEpoch 551/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.7704 - mae: 2.2532\nEpoch 552/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.1686 - mae: 1.6233\nEpoch 553/600\n400/400 [==============================] - 7s 18ms/step - loss: 0.4874 - mae: 0.9286\nEpoch 554/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.7303 - mae: 1.1813\nEpoch 555/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6467 - mae: 2.1038\nEpoch 556/600\n400/400 
[==============================] - 7s 18ms/step - loss: 1.5464 - mae: 2.0277\nEpoch 557/600\n400/400 [==============================] - 8s 19ms/step - loss: 1.5981 - mae: 2.0794\nEpoch 558/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.5519 - mae: 2.0348\nEpoch 559/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.2601 - mae: 1.7232\nEpoch 560/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6244 - mae: 2.1009\nEpoch 561/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.2939 - mae: 1.7352\nEpoch 562/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.2607 - mae: 1.7132\nEpoch 563/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.5752 - mae: 2.0339\nEpoch 564/600\n400/400 [==============================] - 6s 16ms/step - loss: 0.8534 - mae: 1.2673\nEpoch 565/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.5843 - mae: 1.0061\nEpoch 566/600\n400/400 [==============================] - 7s 16ms/step - loss: 0.7928 - mae: 1.2529\nEpoch 567/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.5947 - mae: 1.0265\nEpoch 568/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.2925 - mae: 1.7380\nEpoch 569/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.3418 - mae: 1.8046\nEpoch 570/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.7799 - mae: 1.2412\nEpoch 571/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6746 - mae: 2.1553\nEpoch 572/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.6044 - mae: 2.0783\nEpoch 573/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.3274 - mae: 1.7741\nEpoch 574/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.9014 - mae: 1.3525\nEpoch 575/600\n400/400 [==============================] - 7s 16ms/step - loss: 1.3106 - mae: 
1.7697\nEpoch 576/600\n400/400 [==============================] - 6s 16ms/step - loss: 0.8363 - mae: 1.2459\nEpoch 577/600\n400/400 [==============================] - 6s 16ms/step - loss: 0.4656 - mae: 0.8903\nEpoch 578/600\n400/400 [==============================] - 6s 16ms/step - loss: 0.9666 - mae: 1.4174\nEpoch 579/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.5682 - mae: 2.0537\nEpoch 580/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.1311 - mae: 1.5777\nEpoch 581/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.1593 - mae: 1.6140\nEpoch 582/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.5554 - mae: 2.0393\nEpoch 583/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.5632 - mae: 2.0490\nEpoch 584/600\n400/400 [==============================] - 7s 16ms/step - loss: 1.7185 - mae: 2.1950\nEpoch 585/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.0121 - mae: 1.4516\nEpoch 586/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.0929 - mae: 1.5378\nEpoch 587/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.5412 - mae: 2.0242\nEpoch 588/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.5437 - mae: 2.0287\nEpoch 589/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.5510 - mae: 2.0314\nEpoch 590/600\n400/400 [==============================] - 7s 18ms/step - loss: 1.2694 - mae: 1.7188\nEpoch 591/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.5462 - mae: 0.9668\nEpoch 592/600\n400/400 [==============================] - 7s 18ms/step - loss: 0.7292 - mae: 1.1646\nEpoch 593/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.5493 - mae: 2.0340\nEpoch 594/600\n400/400 [==============================] - 7s 17ms/step - loss: 1.4621 - mae: 1.9408\nEpoch 595/600\n400/400 [==============================] - 7s 17ms/step - 
loss: 1.1439 - mae: 1.6008\nEpoch 596/600\n400/400 [==============================] - 7s 16ms/step - loss: 0.4729 - mae: 0.8958\nEpoch 597/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.5253 - mae: 0.9599\nEpoch 598/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.8528 - mae: 1.2815\nEpoch 599/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.5717 - mae: 0.9897\nEpoch 600/600\n400/400 [==============================] - 7s 17ms/step - loss: 0.8740 - mae: 1.3325\n"
],
[
"Plot MAE and loss against epochs",
"_____no_output_____"
],
[
"mae=history.history['mae']\nloss=history.history['loss']\n\nepochs=range(len(loss))\n\nplt.plot(epochs, mae, 'r')\nplt.plot(epochs, loss, 'b')\nplt.title('MAE and Loss')\nplt.xlabel(\"Epochs\")\nplt.ylabel(\"Accuracy\")\nplt.legend([\"MAE\", \"Loss\"])\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"Zooming into the last 100 epochs: although there are fluctuations, the MAE and loss are still decreasing \nThe model could be trained longer for higher accuracy",
"_____no_output_____"
]
],
[
[
"mae=history.history['mae']\nloss=history.history['loss']\n\nepochs=range(len(loss))\n\nplt.plot(epochs[-100:], mae[-100:], 'r')\nplt.plot(epochs[-100:], loss[-100:], 'b')\nplt.title('MAE and Loss')\nplt.xlabel(\"Epochs\")\nplt.ylabel(\"Accuracy\")\nplt.legend([\"MAE\", \"Loss\"])\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"Saving the model",
"_____no_output_____"
]
],
[
[
"model.save('lstm.h5')",
"_____no_output_____"
]
],
[
[
"Function to forecast the test data \nAs in training, forecasting uses windows of 5 time steps to predict the next time step",
"_____no_output_____"
]
],
[
[
"def model_forecast(model, series, window_size):\n ds = tf.data.Dataset.from_tensor_slices(series)\n ds = ds.window(window_size, shift=1, drop_remainder=True)\n ds = ds.flat_map(lambda w: w.batch(window_size))\n ds = ds.batch(250).prefetch(1)\n forecast = model.predict(ds)\n return forecast",
"_____no_output_____"
]
],
[
[
"Forecast testing data",
"_____no_output_____"
]
],
[
[
"forecast = model_forecast(model, close[..., np.newaxis], window_size)\nforecast = forecast[split - window_size:-1, -1, 0]",
"_____no_output_____"
]
],
[
[
"# Visualising forecasted time series\nThe forecasted series looks similar to the actual stock price; however, the model was unable to match the peaks of the closing prices",
"_____no_output_____"
]
],
[
[
"locator = mdates.MonthLocator()\nfmt = mdates.DateFormatter('%b %y')\n\nplt.plot(date_val, x_val, alpha=0.5)\nplt.plot(date_val, forecast)\n\nx = plt.gca()\nx.xaxis.set_major_locator(locator)\nx.xaxis.set_major_formatter(fmt)\nplt.title('Actual vs Forecast')\nplt.legend(['Actual', 'Forecast'])\nplt.xticks(rotation=45)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Conclusion\nThe model performed well in forecasting the unseen series. Although the peaks were not as high, the movement of the series was captured by the model \nThe model could be optimised by tuning the hyperparameters, adjusting the batch size and learning rate to smooth out the losses during training \nIncreasing the number of training epochs could also improve the model, as the MAE was still decreasing at 600 epochs \n\n## Further Optimising the Model\nThe model could be improved by training with updated series data, as it did not match the actual series after 2 months \nThe model would have to be retrained regularly to maintain high accuracy",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e75fcf89fa405aa42a03d26ce8fa838d15769f8f | 6,634 | ipynb | Jupyter Notebook | examples/deep-learning-notes-and-labs/08b_Deep_Neural_Networks.ipynb | kcavagnolo/udacity_selfdrive | 1e1e884c3f82eec476ccc7b4f4dd6b54f48d032e | [
"MIT"
] | null | null | null | examples/deep-learning-notes-and-labs/08b_Deep_Neural_Networks.ipynb | kcavagnolo/udacity_selfdrive | 1e1e884c3f82eec476ccc7b4f4dd6b54f48d032e | [
"MIT"
] | null | null | null | examples/deep-learning-notes-and-labs/08b_Deep_Neural_Networks.ipynb | kcavagnolo/udacity_selfdrive | 1e1e884c3f82eec476ccc7b4f4dd6b54f48d032e | [
"MIT"
] | null | null | null | 47.049645 | 316 | 0.662044 | [
[
[
"# L2 Deep Neural Networks\n",
"_____no_output_____"
],
[
"### Linear Model Complexity\nCalculate the number of parameters in the linear model in the previous lesson (each input is a 28x28 pixel image and there are 10 classes: letters from A to J.)\n* Answer: 28x28x10 (W) + 1x10 = 7850\n\nGenerally have (N+1)*X\n\n* Small number of parameters: generally (N+1)\\* K parameters, where N = numberof inputs, K = number of outputs.\n* Interactions of inputs are limited because model is linear\n\nBut they are \n* efficient (which makes them cheap and fast to run) and \n* stable. -> Can show mathematically that small changes in input can never yield big changes in output (|W| is bounded) Derivatives are also constant. \n\nWant to keep parameters in linear functions but want entire function to be non-linear. Cannot just multiply $W_1W_2W_3$ because that's equivalent to one linear function. So we have to introduce **non-linearities**.",
"_____no_output_____"
],
[
"### Rectified Linear Units (RELU)\nNon-linear function: Simplest non-linear function\ny = 0 when x <0. y = x when x>= 0.\n\n* How to use this (refer to our linear classifier process): Insert a RELU in the middle. Now have two matrices: One from the input to the RELU and oneo from the RELU to the output.\n* New parameter **H**: the number of RELUs you insert.\n\nBuild network by stacking up simple operations to make the maths simple (**Chain Rule**).\n\nCan write the chain rule in a way that is computationally efficient.\n\n### How to compute derivatives: Back-Propagation \n\nStochastic Gradient Descent: \n1. For each batch of data run forward prop and then back prop.\n2. That will give you gradients for each of the weights in your model.\n3. Apply gradients and learning rates to the original weight sand update them.\n4. Repeat 1-3 many times to optimise your models.\n\nNote: Each block of the backprop typically takes twice the memory and computation of the forward prop blocks. -> Important for sizing your model and fitting it in memory.",
"_____no_output_____"
],
[
"### Training a Deep Neural Network\nIncreasing H is not efficient: you need to make it very big and then it gets hard to train.\n\nInstead, you can add more layers. A deep model is often preferred for two reasons:\n\n1. Parameter efficiency: Can generally get better performance with more parameters if you go deeper rather than wider.\n2. Many natural phenomena have a hierarchical structure. (E.g. Lines and edges -> Geometric shapes -> Objects in image recognition. Model matches abstractions you see in your data.)\n\n**Why did deep networks only become popular recently?** \nDeep models only really shine if you have large amounts of data to train them with. ",
"_____no_output_____"
],
[
"## Regularisation\n\nAnalogy: Skinny jeans are hard to get into, so people usually wear jeans that are a bit too big. Similarly, networks that's just the right size for your data are hard to train. (**Why?**) So in practice we train networks that are way too big for our data and then try our best to prevent them from overfitting.",
"_____no_output_____"
],
[
"### Ways to prevent overfitting\n1. Early termination: Looking at performance on validation set and stop training once performance stops improving.\n2. Regularisation: Putting artificial constraints on your network that implicitly reduce the number of free parameters without making it more difficult to optimise. // Stretch pants.\n\nTwo methods of regularisation:\n1. L2 Regularisation\n2. Dropout\n\n### L2 Regularisation\nAdd term to the loss that penalises large weights, typically by adding L2 norm of your weights multiplied by a small constant to your loss. -> Additional hyperparameter to tune.\n* L2 norm: Sum of the squares of the elements in a vector.\n$$ L' = L + \\beta\\frac{1}{2}||W||^2_2 $$\n\nPros:\n* Simple because you just add it to the loss. You don't need to change the structure of your network. \n\n### Dropout\nRecent new technique for regularisation. \n* Imagine if you have one layer connected to another layer. The values that go from one layer to the next are often called **activations**.\n* Take activations and for every example you train your network on, set half of them to zero. I.e. Randomly destroy half the data that's flowing through your network. Do this over and over. \n* -> Network cannot rely on any given activation to be present because it may get destroyed at any moment.\n* -> This forces network to learn redundant representations to ensure that at least some information remains. -> Seems inefficient but this makes things more robust and prevents overfitting. It also makes your network act as though it's taking a consensus over an ensemble of networks.\n\nOp: If dropout doesn't work for you, you should probably be using a bigger network.\n\n### Evaluating a Dropout-Trained Network\nWhen you evaluate the network that's been trained with dropout, you don't want randomness - you want something deterministic. You want the consensus. You get this by averaging activations. \n\nHow do you get this? 
\n* During training, don't only zero -> Scale remaining activations by factor of two. \n* When evaluating, remove dropout and scaling operations to get an average of the activations that is properly scaled.",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e75ff1e9ee1b7fd47b562a7420bffc5c7d84ae85 | 60,172 | ipynb | Jupyter Notebook | notebooks/topic_modeling (1).ipynb | tcausero/huginn | 904acd21f7306d9227c6de867a83e16e86507140 | [
"MIT"
] | 1 | 2020-03-22T05:11:41.000Z | 2020-03-22T05:11:41.000Z | notebooks/topic_modeling (1).ipynb | naftalic/huginn | 25e86811f1b8fb302ca7fcdce1bdf2b3df75f3dd | [
"MIT"
] | 7 | 2020-11-13T18:46:54.000Z | 2022-02-10T01:43:17.000Z | notebooks/topic_modeling (1).ipynb | naftalic/huginn | 25e86811f1b8fb302ca7fcdce1bdf2b3df75f3dd | [
"MIT"
] | 2 | 2020-03-22T05:11:43.000Z | 2020-04-04T02:30:35.000Z | 70.624413 | 25,716 | 0.60824 | [
[
[
"!pip install nltk\n!pip install gensim\n!pip install pyLDAvis",
"Requirement already satisfied: nltk in /usr/local/lib/python3.6/dist-packages (3.2.5)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from nltk) (1.12.0)\nRequirement already satisfied: gensim in /usr/local/lib/python3.6/dist-packages (3.6.0)\nRequirement already satisfied: six>=1.5.0 in /usr/local/lib/python3.6/dist-packages (from gensim) (1.12.0)\nRequirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.6/dist-packages (from gensim) (1.18.2)\nRequirement already satisfied: smart-open>=1.2.1 in /usr/local/lib/python3.6/dist-packages (from gensim) (1.10.0)\nRequirement already satisfied: scipy>=0.18.1 in /usr/local/lib/python3.6/dist-packages (from gensim) (1.4.1)\nRequirement already satisfied: boto3 in /usr/local/lib/python3.6/dist-packages (from smart-open>=1.2.1->gensim) (1.12.34)\nRequirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from smart-open>=1.2.1->gensim) (2.21.0)\nRequirement already satisfied: google-cloud-storage in /usr/local/lib/python3.6/dist-packages (from smart-open>=1.2.1->gensim) (1.18.1)\nRequirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /usr/local/lib/python3.6/dist-packages (from boto3->smart-open>=1.2.1->gensim) (0.3.3)\nRequirement already satisfied: botocore<1.16.0,>=1.15.34 in /usr/local/lib/python3.6/dist-packages (from boto3->smart-open>=1.2.1->gensim) (1.15.34)\nRequirement already satisfied: jmespath<1.0.0,>=0.7.1 in /usr/local/lib/python3.6/dist-packages (from boto3->smart-open>=1.2.1->gensim) (0.9.5)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->smart-open>=1.2.1->gensim) (2019.11.28)\nRequirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->smart-open>=1.2.1->gensim) (1.24.3)\nRequirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->smart-open>=1.2.1->gensim) (2.8)\nRequirement 
already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->smart-open>=1.2.1->gensim) (3.0.4)\nRequirement already satisfied: google-auth>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from google-cloud-storage->smart-open>=1.2.1->gensim) (1.7.2)\nRequirement already satisfied: google-resumable-media<0.5.0dev,>=0.3.1 in /usr/local/lib/python3.6/dist-packages (from google-cloud-storage->smart-open>=1.2.1->gensim) (0.4.1)\nRequirement already satisfied: google-cloud-core<2.0dev,>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from google-cloud-storage->smart-open>=1.2.1->gensim) (1.0.3)\nRequirement already satisfied: python-dateutil<3.0.0,>=2.1 in /usr/local/lib/python3.6/dist-packages (from botocore<1.16.0,>=1.15.34->boto3->smart-open>=1.2.1->gensim) (2.8.1)\nRequirement already satisfied: docutils<0.16,>=0.10 in /usr/local/lib/python3.6/dist-packages (from botocore<1.16.0,>=1.15.34->boto3->smart-open>=1.2.1->gensim) (0.15.2)\nRequirement already satisfied: cachetools<3.2,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.2.0->google-cloud-storage->smart-open>=1.2.1->gensim) (3.1.1)\nRequirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.2.0->google-cloud-storage->smart-open>=1.2.1->gensim) (46.1.3)\nRequirement already satisfied: rsa<4.1,>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.2.0->google-cloud-storage->smart-open>=1.2.1->gensim) (4.0)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.2.0->google-cloud-storage->smart-open>=1.2.1->gensim) (0.2.8)\nRequirement already satisfied: google-api-core<2.0.0dev,>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from google-cloud-core<2.0dev,>=1.0.0->google-cloud-storage->smart-open>=1.2.1->gensim) (1.16.0)\nRequirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.6/dist-packages (from 
rsa<4.1,>=3.1.4->google-auth>=1.2.0->google-cloud-storage->smart-open>=1.2.1->gensim) (0.4.8)\nRequirement already satisfied: protobuf>=3.4.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2.0.0dev,>=1.14.0->google-cloud-core<2.0dev,>=1.0.0->google-cloud-storage->smart-open>=1.2.1->gensim) (3.10.0)\nRequirement already satisfied: pytz in /usr/local/lib/python3.6/dist-packages (from google-api-core<2.0.0dev,>=1.14.0->google-cloud-core<2.0dev,>=1.0.0->google-cloud-storage->smart-open>=1.2.1->gensim) (2018.9)\nRequirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2.0.0dev,>=1.14.0->google-cloud-core<2.0dev,>=1.0.0->google-cloud-storage->smart-open>=1.2.1->gensim) (1.51.0)\nCollecting pyLDAvis\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/a5/3a/af82e070a8a96e13217c8f362f9a73e82d61ac8fff3a2561946a97f96266/pyLDAvis-2.1.2.tar.gz (1.6MB)\n\u001b[K |████████████████████████████████| 1.6MB 2.7MB/s \n\u001b[?25hRequirement already satisfied: wheel>=0.23.0 in /usr/local/lib/python3.6/dist-packages (from pyLDAvis) (0.34.2)\nRequirement already satisfied: numpy>=1.9.2 in /usr/local/lib/python3.6/dist-packages (from pyLDAvis) (1.18.2)\nRequirement already satisfied: scipy>=0.18.0 in /usr/local/lib/python3.6/dist-packages (from pyLDAvis) (1.4.1)\nRequirement already satisfied: pandas>=0.17.0 in /usr/local/lib/python3.6/dist-packages (from pyLDAvis) (1.0.3)\nRequirement already satisfied: joblib>=0.8.4 in /usr/local/lib/python3.6/dist-packages (from pyLDAvis) (0.14.1)\nRequirement already satisfied: jinja2>=2.7.2 in /usr/local/lib/python3.6/dist-packages (from pyLDAvis) (2.11.1)\nRequirement already satisfied: numexpr in /usr/local/lib/python3.6/dist-packages (from pyLDAvis) (2.7.1)\nRequirement already satisfied: pytest in /usr/local/lib/python3.6/dist-packages (from pyLDAvis) (3.6.4)\nRequirement already satisfied: future in /usr/local/lib/python3.6/dist-packages 
(from pyLDAvis) (0.16.0)\nCollecting funcy\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/ce/4b/6ffa76544e46614123de31574ad95758c421aae391a1764921b8a81e1eae/funcy-1.14.tar.gz (548kB)\n\u001b[K |████████████████████████████████| 552kB 22.2MB/s \n\u001b[?25hRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.17.0->pyLDAvis) (2018.9)\nRequirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.17.0->pyLDAvis) (2.8.1)\nRequirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2>=2.7.2->pyLDAvis) (1.1.1)\nRequirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.6/dist-packages (from pytest->pyLDAvis) (8.2.0)\nRequirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from pytest->pyLDAvis) (1.12.0)\nRequirement already satisfied: pluggy<0.8,>=0.5 in /usr/local/lib/python3.6/dist-packages (from pytest->pyLDAvis) (0.7.1)\nRequirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.6/dist-packages (from pytest->pyLDAvis) (1.3.0)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from pytest->pyLDAvis) (46.1.3)\nRequirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.6/dist-packages (from pytest->pyLDAvis) (19.3.0)\nRequirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.6/dist-packages (from pytest->pyLDAvis) (1.8.1)\nBuilding wheels for collected packages: pyLDAvis, funcy\n Building wheel for pyLDAvis (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for pyLDAvis: filename=pyLDAvis-2.1.2-py2.py3-none-any.whl size=97711 sha256=7276290d94f9a650a56837b73aef59c14225105f7b8c0354abedb3c67b8711ae\n Stored in directory: /root/.cache/pip/wheels/98/71/24/513a99e58bb6b8465bae4d2d5e9dba8f0bef8179e3051ac414\n Building wheel for funcy (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for funcy: filename=funcy-1.14-py2.py3-none-any.whl size=32042 sha256=5a6a3430a1a3d83628df740019e00c864d7f62eac51e3f20de2b0692456415bb\n Stored in directory: /root/.cache/pip/wheels/20/5a/d8/1d875df03deae6f178dfdf70238cca33f948ef8a6f5209f2eb\nSuccessfully built pyLDAvis funcy\nInstalling collected packages: funcy, pyLDAvis\nSuccessfully installed funcy-1.14 pyLDAvis-2.1.2\n"
],
[
"#searched bitcoin on NYT\ndataset = ['The Coder and the Dictator',\n 'Bitcoin Has Lost Steam. But Criminals Still Love It.',\n 'China’s Cryptocurrency Plan Has a Powerful Partner: Big Brother',\n 'China Gives Digital Currencies a Reprieve as Beijing Warms to Blockchain',\n 'Bitcoin is a protocol. Bitcoin is a brand.']",
"_____no_output_____"
],
[
"\nimport spacy\nspacy.load('en')\nfrom spacy.lang.en import English\nimport nltk\nnltk.download('wordnet')\nfrom nltk.corpus import wordnet as wn\nfrom nltk.stem.wordnet import WordNetLemmatizer\nfrom gensim import corpora\nimport pickle\nimport gensim\n\n\nparser = English()\ndef tokenize(text):\n lda_tokens = []\n tokens = parser(text)\n for token in tokens:\n if token.orth_.isspace():\n continue\n elif token.like_url:\n lda_tokens.append('URL')\n elif token.orth_.startswith('@'):\n lda_tokens.append('SCREEN_NAME')\n else:\n lda_tokens.append(token.lower_)\n return lda_tokens",
"[nltk_data] Downloading package wordnet to /root/nltk_data...\n[nltk_data] Unzipping corpora/wordnet.zip.\n"
],
[
"tokenize('Holy guacamole')",
"_____no_output_____"
],
[
"def get_lemma(word):\n lemma = wn.morphy(word)\n if lemma is None:\n return word\n else:\n return lemma\n \ndef get_lemma2(word):\n return WordNetLemmatizer().lemmatize(word)",
"_____no_output_____"
],
[
"print(get_lemma('notebooks'))\nprint(get_lemma2('notebooks'))",
"notebook\nnotebook\n"
],
[
"for w in ['dogs', 'ran', 'discouraged']:\n print(w, get_lemma(w), get_lemma2(w))",
"dogs dog dog\nran run ran\ndiscouraged discourage discouraged\n"
],
[
"nltk.download('stopwords')\nen_stop = set(nltk.corpus.stopwords.words('english'))",
"[nltk_data] Downloading package stopwords to /root/nltk_data...\n[nltk_data] Unzipping corpora/stopwords.zip.\n"
],
[
"list(en_stop)[:5]",
"_____no_output_____"
],
[
"def prepare_text_for_lda(text):\n tokens = tokenize(text)\n tokens = [token for token in tokens if len(token) > 4] #discard short words\n tokens = [token for token in tokens if token not in en_stop] #remove if stop word\n tokens = [get_lemma(token) for token in tokens] #lemmatize each word\n return tokens",
"_____no_output_____"
],
[
"prepare_text_for_lda('Harry Potter is a silly little wizard')",
"_____no_output_____"
],
[
"text_data = [prepare_text_for_lda(i) for i in dataset]",
"_____no_output_____"
],
[
"import random\ntext_data2 = []\nwith open('dataset.csv') as f:\n for line in f:\n tokens = prepare_text_for_lda(line)\n if random.random() > .99:\n #print(tokens)\n text_data2.append(tokens)\ntext_data2[:5]",
"_____no_output_____"
],
[
"# creates dictionary generator\ndictionary = corpora.Dictionary(text_data2)\n",
"_____no_output_____"
],
[
"# creates a list of lists of tuples, with index for each word in bag of words\ncorpus = [dictionary.doc2bow(text) for text in text_data2]\ncorpus[:2]",
"_____no_output_____"
],
[
"#creates a pickle file and dictionary file to save progress\npickle.dump(corpus, open('corpus.pkl', 'wb'))\ndictionary.save('dictionary.gensim')",
"/usr/local/lib/python3.6/dist-packages/smart_open/smart_open_lib.py:410: UserWarning: This function is deprecated, use smart_open.open instead. See the migration notes for details: https://github.com/RaRe-Technologies/smart_open/blob/master/README.rst#migrating-to-the-new-open-function\n 'See the migration notes for details: %s' % _MIGRATION_NOTES_URL\n"
],
[
"#LDA Model instantiation , corpus is the list of tuples, dictionary maps the words to indices\n\nNUM_TOPICS = 5 # arbitrary\nldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics = NUM_TOPICS, id2word=dictionary, passes=15)\nldamodel.save('model5.gensim')",
"/usr/local/lib/python3.6/dist-packages/smart_open/smart_open_lib.py:410: UserWarning: This function is deprecated, use smart_open.open instead. See the migration notes for details: https://github.com/RaRe-Technologies/smart_open/blob/master/README.rst#migrating-to-the-new-open-function\n 'See the migration notes for details: %s' % _MIGRATION_NOTES_URL\n"
],
[
"# Choosing 4 words from each title, what are the central topics\n\ntopics = ldamodel.print_topics(num_words=4)\nfor topic in topics:\n print(topic)",
"(0, '0.053*\"base\" + 0.028*\"using\" + 0.028*\"scale\" + 0.027*\"network\"')\n(1, '0.021*\"sensor\" + 0.021*\"complex\" + 0.021*\"documentary\" + 0.021*\"telling\"')\n(2, '0.027*\"construction\" + 0.015*\"flexible\" + 0.015*\"sheaf\" + 0.015*\"paper\"')\n(3, '0.034*\"database\" + 0.034*\"approach\" + 0.034*\"group\" + 0.018*\"system\"')\n(4, '0.044*\"network\" + 0.030*\"power\" + 0.016*\"efficient\" + 0.016*\"algorithm\"')\n"
],
[
"dataset[-1]",
"_____no_output_____"
],
[
"new_doc = dataset[-1]\nnew_doc = prepare_text_for_lda(new_doc)\nnew_doc_bow = dictionary.doc2bow(new_doc)\nprint(new_doc_bow)\nprint(ldamodel.get_document_topics(new_doc_bow))",
"[]\n[(0, 0.2), (1, 0.2), (2, 0.2), (3, 0.2), (4, 0.2)]\n"
],
[
"ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics = 3, id2word=dictionary, passes=15)\nldamodel.save('model3.gensim')\ntopics = ldamodel.print_topics(num_words=4)\nfor topic in topics:\n print(topic)",
"(0, '0.029*\"network\" + 0.016*\"using\" + 0.016*\"sensor\" + 0.016*\"approach\"')\n(1, '0.040*\"base\" + 0.018*\"algorithm\" + 0.018*\"scale\" + 0.018*\"power\"')\n(2, '0.018*\"delta\" + 0.018*\"constant\" + 0.018*\"inductor\" + 0.018*\"propose\"')\n"
],
[
"ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics = 10, id2word=dictionary, passes=15)\nldamodel.save('model10.gensim')\ntopics = ldamodel.print_topics(num_words=4)\nfor topic in topics:\n print(topic)",
"(0, '0.036*\"construction\" + 0.036*\"scale\" + 0.036*\"base\" + 0.036*\"scalability\"')\n(1, '0.067*\"network\" + 0.035*\"complex\" + 0.035*\"using\" + 0.035*\"efficient\"')\n(2, '0.051*\"datapath\" + 0.051*\"error\" + 0.051*\"overclocking\" + 0.051*\"tradeoff\"')\n(3, '0.056*\"sensor\" + 0.029*\"wireless\" + 0.029*\"story\" + 0.029*\"telling\"')\n(4, '0.023*\"flexible\" + 0.023*\"parallel\" + 0.023*\"tangible\" + 0.023*\"sheet\"')\n(5, '0.042*\"base\" + 0.042*\"network\" + 0.022*\"design\" + 0.022*\"decoding\"')\n(6, '0.025*\"using\" + 0.025*\"directional\" + 0.025*\"coding\" + 0.025*\"broadcast\"')\n(7, '0.054*\"group\" + 0.054*\"efficiency\" + 0.054*\"semantics\" + 0.054*\"recommendation\"')\n(8, '0.025*\"efficient\" + 0.025*\"algorithm\" + 0.025*\"approach\" + 0.025*\"topology\"')\n(9, '0.039*\"speech\" + 0.039*\"inner\" + 0.039*\"cepstral\" + 0.039*\"noise\"')\n"
],
[
"dictionary = gensim.corpora.Dictionary.load('dictionary.gensim')\ncorpus = pickle.load(open('corpus.pkl', 'rb'))\nlda = gensim.models.ldamodel.LdaModel.load('model5.gensim')\n",
"/usr/local/lib/python3.6/dist-packages/smart_open/smart_open_lib.py:410: UserWarning: This function is deprecated, use smart_open.open instead. See the migration notes for details: https://github.com/RaRe-Technologies/smart_open/blob/master/README.rst#migrating-to-the-new-open-function\n 'See the migration notes for details: %s' % _MIGRATION_NOTES_URL\n"
],
[
"import pyLDAvis.gensim\nlda_display = pyLDAvis.gensim.prepare(lda, corpus, dictionary, sort_topics=False)\npyLDAvis.display(lda_display)\n",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75ff6dd1ddbbc6e85e5d6909681239eec983db7 | 19,077 | ipynb | Jupyter Notebook | pykonal_eq/quake_de.ipynb | malcolmw/PyKonalEQ | 9990bf3f703d2e0bfecc2b76f80cf4da633a8f8a | [
"MIT"
] | 1 | 2021-09-10T17:50:45.000Z | 2021-09-10T17:50:45.000Z | pykonal_eq/quake_de.ipynb | malcolmw/PyKonalEQ | 9990bf3f703d2e0bfecc2b76f80cf4da633a8f8a | [
"MIT"
] | null | null | null | pykonal_eq/quake_de.ipynb | malcolmw/PyKonalEQ | 9990bf3f703d2e0bfecc2b76f80cf4da633a8f8a | [
"MIT"
] | null | null | null | 32.224662 | 145 | 0.476647 | [
[
[
"%matplotlib ipympl\n\nimport itertools\nimport inventory\nimport matplotlib.pyplot as plt\nimport multiprocessing as mp\nimport numpy as np\nimport pandas as pd\nimport pykonal\nimport scipy.optimize",
"_____no_output_____"
],
[
"EVENTS = pd.read_hdf(\"../data/catalog.hdf5\", key=\"events\")\nARRIVALS = pd.read_hdf(\"../data/catalog.hdf5\", key=\"arrivals\")\nARRIVALS[\"location\"] = \"\"\nARRIVALS[\"handle\"] = ARRIVALS[\"network\"] + \".\" + ARRIVALS[\"station\"] + \".\" + ARRIVALS[\"location\"] + \".\" + ARRIVALS[\"phase\"]",
"_____no_output_____"
],
[
"class EQLocator(object):\n\n def __init__(\n self, \n tt_inventory,\n delta=np.array([\n 20,\n np.radians(0.2),\n np.radians(0.2),\n 10\n ])\n ):\n \"\"\"\n A class for locating earthquakes.\n \n Positional Arguments\n ====================\n `tt_inventory`: str\n Path to a TTInventory file created using the `compute_tts`\n program.\n \n Keyword Arguments\n =================\n `delta`: length 4 array-like\n Search range around an initial location estimate in \n spherical coordinates (rho, theta, phi, time) with units\n of (km, rad, rad, s).\n \"\"\"\n self._tti = inventory.TTInventory(tt_inventory, mode=\"r\")\n self._delta = np.array(delta)\n\n\n def __exit__(self):\n self.__del__()\n\n\n def __del__(self):\n self.tti.f5.close()\n\n\n @property\n def arrivals(self):\n \"\"\"\n Phase arrival observations used to locate event.\n \n This attribute should be set using a DataFrame with `handle`\n and `time` fields. The `handle` field should be a string \n (e.g., SEED id code) that matches the name of the corresponding\n Dataset object in the TTInventory file. 
The `time` field should\n be a float representing the observed phase arrival time in as a\n UNIX/epoch timestamp (seconds since 1970-01-01T00:00:00).\n \"\"\"\n return (self._arrivals)\n \n @arrivals.setter\n def arrivals(self, value):\n self._arrivals = value.set_index(\"handle\")\n self._recompute_bounds = True\n self._tt = None\n \n @property\n def bs_arrivals(self):\n \"\"\"\n Phase arrival observations with added noise for bootstrap\n uncertainty analysis.\n \"\"\"\n return (self._bs_arrivals)\n \n @bs_arrivals.setter\n def bs_arrivals(self, value):\n self._bs_arrivals = value\n\n\n @property\n def bounds(self):\n \"\"\"\n The bounds of the search range for the optimal location.\n \"\"\"\n if not hasattr(self, \"_bounds\") or self._recompute_bounds is True:\n self.bounds = self.compute_bounds()\n return (self._bounds)\n \n @bounds.setter\n def bounds(self, value):\n self._recompute_bounds = False\n self._bounds = value\n\n\n @property\n def delta(self):\n \"\"\"\n Search range half-widths around initial location.\n \"\"\"\n return (self._delta)\n\n @delta.setter\n def delta(self, value):\n self._delta = value\n\n\n @property\n def method(self):\n \"\"\"\n Optimization method. 
Defaults to scipy.optimize.differential_evolution().\n \"\"\"\n if not hasattr(self, \"_method\"):\n self._method = scipy.optimize.differential_evolution\n \n return (self._method)\n \n @method.setter\n def method(self, value):\n self._method = value\n\n \n @property\n def tt(self):\n \"\"\"\n A dictionary of pykonal.fields.ScalarField3D objects containing\n the traveltime data for each arrival.\n \"\"\"\n if self._tt is None:\n if self._recompute_bounds is True:\n self.bounds = self.compute_bounds()\n self._tt = {handle:\n self._tti.read(\n handle,\n min_coords=self.bounds[0, :-1],\n max_coords=self.bounds[1, :-1]\n )\n for handle in self._arrivals.index\n }\n \n return (self._tt)\n \n @property\n def tti(self):\n return (self._tti)\n\n \n def add_bs_noise(self, coords, seed=None):\n \"\"\"\n Add noise to a copy of arrivals for bootstrap uncertainty\n analysis. Noise is sampled from residual distribution computed\n for provided location coordinates.\n \"\"\"\n arrivals = self.arrivals.copy()\n r = self.residuals(coords, bootstrap=False)\n np.random.seed(seed)\n arrivals[\"time\"] += np.random.choice(r, size=len(r))\n \n self.bs_arrivals = arrivals\n\n \n def bootstrap_analysis(self, loc, n=128, nproc=None):\n \"\"\"\n Run bootstrap uncertainty analysis for location given by `loc`.\n \n Keyword Arguments\n =================\n `n`: int (default=128)\n Number of bootstrap iterations.\n `nproc`: int (default=None)\n Number of simultaneous threads to process.\n \"\"\"\n\n seeds = np.random.randint(0, 2**32-1, n)\n args = (self.arrivals, loc, self.tti.path, self.delta, self.method)\n with mp.Pool(nproc) as pool:\n results = pool.map(\n bootstrap_target,\n ((*args, seed) for seed in seeds)\n )\n \n return (np.stack(results))\n\n\n def compute_bounds(self, loc=None):\n \"\"\"\n Compute the bounds of the search range.\n \n If the optional `loc` keyword argument is provided, the lower \n and upper bounds are simply loc - delta and loc + delta, \n respectively. 
If the `loc` keyword is not provided, than an\n estimate is obtained by a brute-force grid search.\n \"\"\"\n if loc is None:\n loc = self.grid_search()\n return (np.array([loc-self.delta, loc+self.delta]))\n\n\n def grid_search(self):\n \"\"\"\n Return location estimate from brute-force grid search.\n \"\"\"\n t0 = np.array([\n arrival[\"time\"] - self.tti.read(index).values\n for index, arrival in self.arrivals.iterrows()\n ])\n std = np.std(t0, axis=0)\n idx_min = np.unravel_index(np.argmin(std), std.shape)\n t0 = np.mean(t0, axis=0)[idx_min]\n loc = np.array([*self.tti.nodes[idx_min], t0])\n\n return (loc)\n\n \n def locate(\n self,\n *args,\n order=2, \n bootstrap=False,\n **kwargs\n ):\n \"\"\"\n Locate the event using `target` method from scipy.optimize to\n minimize the residual.\n\n Keyword Arguments\n =================\n `order`: int (default=2)\n Order of the Lp-norm that will be minimized.\n `bootstrap`: bool (default=False)\n Whether to compute the location using the arrivals with\n noise added for bootstrap uncertainity analysis.\n `**kwargs`: \n Additional keyword arguments are passed directly to\n the optimization function.\n \"\"\"\n cost = lambda coords: self.norm(coords, order=order, bootstrap=bootstrap)\n return(\n self.method(\n cost,\n *args,\n bounds=self.bounds.T,\n **kwargs\n ).x\n )\n\n \n def norm(self, coords, order=2, bootstrap=False):\n \"\"\"\n Return the Lp-norm of the residuals computed for the given\n location coordinates, `coords`.\n \n Keyword Arguments\n =================\n `order`: int (default=2)\n Order of the Lp-norm that will be minimized.\n `bootstrap`: bool (default=False)\n Whether to compute the location using the arrivals with\n noise added for bootstrap uncertainity analysis.\n \"\"\"\n return (\n np.linalg.norm(\n self.residuals(coords, bootstrap=bootstrap), \n ord=order\n )\n )\n\n\n def residuals(self, coords, bootstrap=False):\n \"\"\"\n Return the residuals computed for the given location\n coordinates, 
`coords`.\n \n Keyword Arguments\n =================\n `bootstrap`: bool (default=False)\n Whether to compute the location using the arrivals with\n noise added for bootstrap uncertainity analysis.\n \"\"\"\n if bootstrap is False:\n arrivals = self.arrivals\n elif bootstrap is True:\n arrivals = self.bs_arrivals\n else:\n raise (ValueError)\n \n tt = np.array([\n self.tt[\n handle\n ].value(\n coords[:3], \n null=np.inf\n )\n for handle in arrivals.index\n ])\n \n return (arrivals[\"time\"].values - coords[3] - tt)\n\n\ndef bootstrap_target(args):\n arrivals, loc, tti_path, delta, method, seed = args\n locator = EQLocator(tti_path, delta=delta)\n locator.method = method\n locator.arrivals = arrivals.reset_index().copy()\n locator.add_bs_noise(loc, seed=seed)\n\n return (locator.locate(bootstrap=True))\n\nlocator = EQLocator(\"../data/tts.hdf5\")",
"_____no_output_____"
],
[
"%%time\nevent = EVENTS.iloc[1]\nlocator.arrivals = ARRIVALS.set_index(\"event_id\").loc[event[\"event_id\"]]\nresult = locator.locate()\nresult",
"CPU times: user 741 ms, sys: 102 ms, total: 843 ms\nWall time: 839 ms\n"
],
[
"%%time\nlocs = locator.bootstrap_analysis(n=16)",
"CPU times: user 402 ms, sys: 45.6 ms, total: 448 ms\nWall time: 5.72 s\n"
]
],
[
[
"# Development of EDT residual",
"_____no_output_____"
]
],
[
[
"import itertools\n\ndef edt(self, coords):\n arrivals = self.arrivals.set_index(\"handle\")\n pairs = list(itertools.product(arrivals.index, arrivals.index))\n r = [\n (arrivals.loc[handle1, \"time\"] - arrivals.loc[handle2, \"time\"] )\n - (self._tt[handle1].value(coords[:3], null=np.inf) - self._tt[handle2].value(coords[:3], null=np.inf))\n for handle1, handle2 in pairs\n ]\n return (r)",
"_____no_output_____"
],
[
"%%time\nevent = EVENTS.iloc[1]\nlocator.arrivals = ARRIVALS.set_index(\"event_id\").loc[event[\"event_id\"]]",
"_____no_output_____"
],
[
"locator._tt = {handle: locator.tti.read(handle) for handle in locator.arrivals.index}",
"_____no_output_____"
],
[
"def residuals(self, coords, bootstrap=False):\n arrivals = locator.arrivals\n pairs = np.array(list(itertools.product(arrivals.index, arrivals.index)))\n ota = locator.arrivals.loc[pairs[:, 0], \"time\"].values \n otb = locator.arrivals.loc[pairs[:, 1], \"time\"].values\n tts = {handle: self._tt[handle].value(coords[:3], null=np.inf) for handle in arrivals.index}\n tta = np.array([tts[handle] for handle in pairs[:, 0]])\n ttb = np.array([tts[handle] for handle in pairs[:, 1]])\n\n return ((ota - otb) - (tta - ttb))",
"_____no_output_____"
],
[
"np.array([*nodes[i, j], 0])",
"_____no_output_____"
],
[
"nodes = locator.tti.nodes[-5]\n# edt_norm = np.zeros(nodes.shape[:-1])\n# for i in range(edt_norm.shape[0]):\n# for j in range(edt_norm.shape[1]):\n# edt_norm[i, j] = np.linalg.norm(residuals(locator, nodes[i, j]))\n\nl2_norm = np.zeros(nodes.shape[:-1])\nfor i in range(l2_norm.shape[0]):\n for j in range(l2_norm.shape[1]):\n l2_norm[i, j] = locator.norm(np.array([*nodes[i, j], loc0.x[-1]]))",
"_____no_output_____"
],
[
"plt.close(\"all\")\nfig, axes = plt.subplots(ncols=2, figsize=(12, 6))\nqmesh = axes[0].pcolormesh(l2_norm)\nfig.colorbar(qmesh, ax=axes[0])\nqmesh = axes[1].pcolormesh(edt_norm)\nfig.colorbar(qmesh, ax=axes[1])",
"_____no_output_____"
],
[
"%%time\nloc0 = locator.differential_evolution(order=2, bootstrap=False)\nloc0.x",
"_____no_output_____"
],
[
"%%time\nboots = np.empty((0, 4))\nfor i in range(100):\n locator.bootstrap_sample(loc0.x)\n loc = locator.differential_evolution(order=2, bootstrap=True)\n boots = np.vstack([boots, loc.x])",
"_____no_output_____"
],
[
"np.degrees(np.std(boots[:, 1])) * 111",
"_____no_output_____"
],
[
"plt.close(\"all\")\nfig, ax = plt.subplots()\nax.hist(boots[:, 0], bins=32)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75ff904b8a07d23641a4706aefebf1a518d2943 | 168,445 | ipynb | Jupyter Notebook | DeepLearningWithPython/06_CatsVsDogs/CatsVsDogs-PretrainedCNN-withAugmentation_Colab.ipynb | anish-pratheepkumar/AI-Machine-Learning-and-Deep-Learning | 3d1873e2ac70b9ee6cca309e96759e1714bc73f1 | [
"MIT"
] | null | null | null | DeepLearningWithPython/06_CatsVsDogs/CatsVsDogs-PretrainedCNN-withAugmentation_Colab.ipynb | anish-pratheepkumar/AI-Machine-Learning-and-Deep-Learning | 3d1873e2ac70b9ee6cca309e96759e1714bc73f1 | [
"MIT"
] | null | null | null | DeepLearningWithPython/06_CatsVsDogs/CatsVsDogs-PretrainedCNN-withAugmentation_Colab.ipynb | anish-pratheepkumar/AI-Machine-Learning-and-Deep-Learning | 3d1873e2ac70b9ee6cca309e96759e1714bc73f1 | [
"MIT"
] | null | null | null | 183.691385 | 26,404 | 0.841266 | [
[
[
"#using a pretrained cnn - VGG16 (trained on imagenet data set of animals) \n#this is available as an application in keras\nfrom keras.applications import VGG16\n\nconv_base = VGG16(weights='imagenet', #this argument is weight initialisation we can use either the imagenet weights or random initialisation\n include_top=False, #False => not to use the dense layers \n input_shape=(150, 150, 3))\n\nimport os\nimport numpy as np\nfrom keras.preprocessing.image import ImageDataGenerator\n\n#defining direcory location for the image input to the network, no need to use os.mkdir here since directory is already made before\nG_Path = '/content/drive/My Drive/Colab Notebooks/'\nbase_dir = G_Path+'cats_and_dogs_small'\ntrain_dir = os.path.join(base_dir, 'train')\nvalidation_dir = os.path.join(base_dir, 'validation')\ntest_dir = os.path.join(base_dir, 'test')\n",
"_____no_output_____"
],
[
"G_Path = '/content/drive/My Drive/Colab Notebooks/'\nbase_dir = G_Path+'cats_and_dogs_small'\nprint (G_Path)\nprint (base_dir)\n",
"/content/drive/My Drive/Colab Notebooks/\n/content/drive/My Drive/Colab Notebooks/cats_and_dogs_small\n"
],
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n"
],
[
"#designing the network architecture with the conv base as first layer and then adding dense classifier to it\nfrom keras import models\nfrom keras import layers\n\nmodel = models.Sequential()\nmodel.add(conv_base)\nmodel.add(layers.Flatten())\nmodel.add(layers.Dense(256, activation='relu'))\nmodel.add(layers.Dense(1, activation='sigmoid'))\n\n#freezing the conv base weights so that it is not modified while feeding the data through it\nprint('This is the number of trainable param tensors '\n 'before freezing the conv base:', len(model.trainable_weights))\n\nconv_base.trainable = False\n\nprint('This is the number of trainable param tensors '\n 'after freezing the conv base:', len(model.trainable_weights))\n#4 indicate we have 2 dense layers each ontaining one weight matrix and one bias vector",
"('This is the number of trainable param tensors before freezing the conv base:', 30)\n('This is the number of trainable param tensors after freezing the conv base:', 4)\n"
],
[
"#training the model end to end with the frozen conv base and augmented data i/p to the network\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras import optimizers\n\n#augmenting data using imagedatagenerator class of keras\ntrain_datagen = ImageDataGenerator(\n rescale=1./255,\n rotation_range=40,\n width_shift_range=0.2,\n height_shift_range=0.2,\n shear_range=0.2,\n zoom_range=0.2,\n horizontal_flip=True,\n fill_mode='nearest')\n#validation data is not augmented only rescaled to values btwn 0 and 1 for easy processing of the data\ntest_datagen = ImageDataGenerator(rescale=1./255)\n\n#generator fot train data\ntrain_generator = train_datagen.flow_from_directory( #using flow from directory method of imagedatagenerator class to generate datas\n train_dir, #specifying directory from which data is to be loaded\n target_size=(150, 150), #resizing the image\n batch_size=20,\n class_mode='binary')\n\n#generator fot validation data\nvalidation_generator = test_datagen.flow_from_directory(\n validation_dir,\n target_size=(150, 150),\n batch_size=20,\n class_mode='binary')\n\n#compiling the designed model\nmodel.compile(loss='binary_crossentropy',\n optimizer=optimizers.RMSprop(lr=2e-5),\n metrics=['acc'])\n\n#fitting the generated augmented images to the corresponding labels by using the compiled model \nhistory = model.fit_generator( \n train_generator, #generating the train data\n steps_per_epoch=100, #breaking the data generation and now its time for weight updates, this marks one epoch\n epochs=30,\n validation_data=validation_generator,\n validation_steps=50)\n",
"Found 2000 images belonging to 2 classes.\n"
],
[
"#plotting the results - training (accuracy and losses) vs validation (accuracy and losses)\nimport matplotlib.pyplot as plt\n\nacc = history.history['acc']\nval_acc = history.history['val_acc']\nloss = history.history['loss']\nval_loss = history.history['val_loss']\n\nepochs = range(1, len(acc) + 1)\n\nplt.plot(epochs, acc, 'bo', label='Training acc')\nplt.plot(epochs, val_acc, 'b', label='Validation acc')\nplt.title('Training and validation accuracy')\nplt.legend()\nplt.figure()\n\nplt.plot(epochs, loss, 'bo', label='Training loss')\nplt.plot(epochs, val_loss, 'b', label='Validation loss')\nplt.title('Training and validation loss')\nplt.legend()\n\nplt.show()",
"_____no_output_____"
],
[
"#finetuning - training by unfreezing only few top layers of conv base\nconv_base.trainable = True #unfreezing the whole convnet\n\nset_trainable = False #initialising a boolean variable\n\nfor layer in conv_base.layers: #iterating over layers in conv net\n if layer.name == 'block5_conv1': #if layer name is block5(the block just above FCN) then making those layers trainable\n set_trainable = True\n if set_trainable:\n layer.trainable = True\n else:\n layer.trainable = False \n \n ",
"_____no_output_____"
],
[
"#compiling the model with a very low learning rate : so that the trainable weights\n#of the convnet is not modified too much, which might harm the representations it is having\nmodel.compile(loss='binary_crossentropy',\n optimizer=optimizers.RMSprop(lr=1e-5), #learning rate is set low \n metrics=['acc'])\n\n#fitting the model with generated augmented i/p images and its labels using the \n#model with top few conv layer set to trainable\nhistory = model.fit_generator(\n train_generator, #loading train data from this directory\n steps_per_epoch=100, #breaking the data generation, with this one epoch is completed and the trainable weights are updated\n epochs=100,\n validation_data=validation_generator, #loading the unaugmented validation data\n validation_steps=50)\n\n\n",
"Epoch 1/100\n100/100 [==============================] - 25s 252ms/step - loss: 0.2850 - acc: 0.8785 - val_loss: 0.2255 - val_acc: 0.9130\nEpoch 2/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.2582 - acc: 0.8890 - val_loss: 0.2091 - val_acc: 0.9130\nEpoch 3/100\n100/100 [==============================] - 23s 227ms/step - loss: 0.2356 - acc: 0.8980 - val_loss: 0.2037 - val_acc: 0.9230\nEpoch 4/100\n100/100 [==============================] - 23s 228ms/step - loss: 0.2255 - acc: 0.9075 - val_loss: 0.2232 - val_acc: 0.9070\nEpoch 5/100\n100/100 [==============================] - 23s 227ms/step - loss: 0.2121 - acc: 0.9090 - val_loss: 0.1978 - val_acc: 0.9240\nEpoch 6/100\n100/100 [==============================] - 23s 229ms/step - loss: 0.1866 - acc: 0.9270 - val_loss: 0.1987 - val_acc: 0.9260\nEpoch 7/100\n100/100 [==============================] - 22s 225ms/step - loss: 0.1836 - acc: 0.9280 - val_loss: 0.2047 - val_acc: 0.9240\nEpoch 8/100\n100/100 [==============================] - 23s 229ms/step - loss: 0.1772 - acc: 0.9260 - val_loss: 0.1966 - val_acc: 0.9200\nEpoch 9/100\n100/100 [==============================] - 22s 225ms/step - loss: 0.1643 - acc: 0.9315 - val_loss: 0.2182 - val_acc: 0.9160\nEpoch 10/100\n100/100 [==============================] - 22s 225ms/step - loss: 0.1626 - acc: 0.9400 - val_loss: 0.1894 - val_acc: 0.9270\nEpoch 11/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.1473 - acc: 0.9420 - val_loss: 0.2551 - val_acc: 0.9120\nEpoch 12/100\n100/100 [==============================] - 23s 227ms/step - loss: 0.1535 - acc: 0.9360 - val_loss: 0.1867 - val_acc: 0.9240\nEpoch 13/100\n100/100 [==============================] - 22s 225ms/step - loss: 0.1285 - acc: 0.9470 - val_loss: 0.2387 - val_acc: 0.9150\nEpoch 14/100\n100/100 [==============================] - 23s 225ms/step - loss: 0.1288 - acc: 0.9485 - val_loss: 0.1832 - val_acc: 0.9370\nEpoch 15/100\n100/100 [==============================] - 23s 
225ms/step - loss: 0.1182 - acc: 0.9540 - val_loss: 0.1967 - val_acc: 0.9290\nEpoch 16/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.1139 - acc: 0.9530 - val_loss: 0.1908 - val_acc: 0.9330\nEpoch 17/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.1061 - acc: 0.9590 - val_loss: 0.2064 - val_acc: 0.9260\nEpoch 18/100\n100/100 [==============================] - 23s 227ms/step - loss: 0.1146 - acc: 0.9520 - val_loss: 0.1860 - val_acc: 0.9360\nEpoch 19/100\n100/100 [==============================] - 23s 229ms/step - loss: 0.0920 - acc: 0.9655 - val_loss: 0.2026 - val_acc: 0.9290\nEpoch 20/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0875 - acc: 0.9630 - val_loss: 0.2408 - val_acc: 0.9320\nEpoch 21/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0945 - acc: 0.9605 - val_loss: 0.2176 - val_acc: 0.9250\nEpoch 22/100\n100/100 [==============================] - 23s 225ms/step - loss: 0.0933 - acc: 0.9665 - val_loss: 0.2193 - val_acc: 0.9320\nEpoch 23/100\n100/100 [==============================] - 23s 228ms/step - loss: 0.0795 - acc: 0.9685 - val_loss: 0.2041 - val_acc: 0.9380\nEpoch 24/100\n100/100 [==============================] - 22s 225ms/step - loss: 0.0773 - acc: 0.9700 - val_loss: 0.1935 - val_acc: 0.9380\nEpoch 25/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0817 - acc: 0.9720 - val_loss: 0.2368 - val_acc: 0.9310\nEpoch 26/100\n100/100 [==============================] - 23s 225ms/step - loss: 0.0837 - acc: 0.9680 - val_loss: 0.1997 - val_acc: 0.9310\nEpoch 27/100\n100/100 [==============================] - 22s 224ms/step - loss: 0.0758 - acc: 0.9735 - val_loss: 0.2131 - val_acc: 0.9350\nEpoch 28/100\n100/100 [==============================] - 23s 225ms/step - loss: 0.0748 - acc: 0.9730 - val_loss: 0.2715 - val_acc: 0.9230\nEpoch 29/100\n100/100 [==============================] - 22s 224ms/step - loss: 0.0705 - acc: 0.9735 - val_loss: 
0.1891 - val_acc: 0.9350\nEpoch 30/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0682 - acc: 0.9755 - val_loss: 0.1987 - val_acc: 0.9330\nEpoch 31/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0557 - acc: 0.9810 - val_loss: 0.2199 - val_acc: 0.9260\nEpoch 32/100\n100/100 [==============================] - 22s 224ms/step - loss: 0.0765 - acc: 0.9715 - val_loss: 0.2233 - val_acc: 0.9320\nEpoch 33/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0637 - acc: 0.9760 - val_loss: 0.1779 - val_acc: 0.9430\nEpoch 34/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0635 - acc: 0.9785 - val_loss: 0.2282 - val_acc: 0.9270\nEpoch 35/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0594 - acc: 0.9780 - val_loss: 0.1866 - val_acc: 0.9330\nEpoch 36/100\n100/100 [==============================] - 22s 223ms/step - loss: 0.0599 - acc: 0.9775 - val_loss: 0.2052 - val_acc: 0.9380\nEpoch 37/100\n100/100 [==============================] - 22s 224ms/step - loss: 0.0676 - acc: 0.9765 - val_loss: 0.1793 - val_acc: 0.9420\nEpoch 38/100\n100/100 [==============================] - 23s 225ms/step - loss: 0.0478 - acc: 0.9825 - val_loss: 0.2026 - val_acc: 0.9330\nEpoch 39/100\n100/100 [==============================] - 22s 224ms/step - loss: 0.0555 - acc: 0.9790 - val_loss: 0.2260 - val_acc: 0.9400\nEpoch 40/100\n100/100 [==============================] - 22s 224ms/step - loss: 0.0495 - acc: 0.9825 - val_loss: 0.2189 - val_acc: 0.9360\nEpoch 41/100\n100/100 [==============================] - 23s 225ms/step - loss: 0.0523 - acc: 0.9810 - val_loss: 0.2199 - val_acc: 0.9360\nEpoch 42/100\n100/100 [==============================] - 23s 225ms/step - loss: 0.0486 - acc: 0.9795 - val_loss: 0.1886 - val_acc: 0.9430\nEpoch 43/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0457 - acc: 0.9815 - val_loss: 0.3262 - val_acc: 0.9160\nEpoch 44/100\n100/100 
[==============================] - 22s 225ms/step - loss: 0.0550 - acc: 0.9820 - val_loss: 0.2086 - val_acc: 0.9400\nEpoch 45/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0434 - acc: 0.9825 - val_loss: 0.2168 - val_acc: 0.9420\nEpoch 46/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0449 - acc: 0.9825 - val_loss: 0.2630 - val_acc: 0.9270\nEpoch 47/100\n100/100 [==============================] - 23s 231ms/step - loss: 0.0403 - acc: 0.9880 - val_loss: 0.2560 - val_acc: 0.9320\nEpoch 48/100\n100/100 [==============================] - 22s 225ms/step - loss: 0.0379 - acc: 0.9860 - val_loss: 0.3204 - val_acc: 0.9190\nEpoch 49/100\n100/100 [==============================] - 23s 231ms/step - loss: 0.0283 - acc: 0.9885 - val_loss: 0.3287 - val_acc: 0.9180\nEpoch 50/100\n100/100 [==============================] - 23s 227ms/step - loss: 0.0486 - acc: 0.9845 - val_loss: 0.2534 - val_acc: 0.9380\nEpoch 51/100\n100/100 [==============================] - 23s 225ms/step - loss: 0.0320 - acc: 0.9885 - val_loss: 0.4533 - val_acc: 0.9030\nEpoch 52/100\n100/100 [==============================] - 23s 227ms/step - loss: 0.0385 - acc: 0.9845 - val_loss: 0.2396 - val_acc: 0.9370\nEpoch 53/100\n100/100 [==============================] - 23s 228ms/step - loss: 0.0340 - acc: 0.9885 - val_loss: 0.2124 - val_acc: 0.9420\nEpoch 54/100\n100/100 [==============================] - 22s 223ms/step - loss: 0.0286 - acc: 0.9895 - val_loss: 0.2735 - val_acc: 0.9310\nEpoch 55/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0389 - acc: 0.9875 - val_loss: 0.2657 - val_acc: 0.9360\nEpoch 56/100\n100/100 [==============================] - 22s 224ms/step - loss: 0.0300 - acc: 0.9895 - val_loss: 0.2497 - val_acc: 0.9440\nEpoch 57/100\n100/100 [==============================] - 23s 229ms/step - loss: 0.0394 - acc: 0.9855 - val_loss: 0.2379 - val_acc: 0.9440\nEpoch 58/100\n100/100 [==============================] - 22s 225ms/step - 
loss: 0.0355 - acc: 0.9885 - val_loss: 0.2792 - val_acc: 0.9390\nEpoch 59/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0420 - acc: 0.9845 - val_loss: 0.2455 - val_acc: 0.9390\nEpoch 60/100\n100/100 [==============================] - 22s 223ms/step - loss: 0.0343 - acc: 0.9865 - val_loss: 0.2507 - val_acc: 0.9330\nEpoch 61/100\n100/100 [==============================] - 23s 230ms/step - loss: 0.0310 - acc: 0.9885 - val_loss: 0.2704 - val_acc: 0.9310\nEpoch 62/100\n100/100 [==============================] - 23s 231ms/step - loss: 0.0328 - acc: 0.9885 - val_loss: 0.2769 - val_acc: 0.9360\nEpoch 63/100\n100/100 [==============================] - 23s 227ms/step - loss: 0.0269 - acc: 0.9930 - val_loss: 0.2698 - val_acc: 0.9380\nEpoch 64/100\n100/100 [==============================] - 22s 225ms/step - loss: 0.0343 - acc: 0.9870 - val_loss: 0.2837 - val_acc: 0.9350\nEpoch 65/100\n100/100 [==============================] - 23s 228ms/step - loss: 0.0321 - acc: 0.9885 - val_loss: 0.2233 - val_acc: 0.9390\nEpoch 66/100\n100/100 [==============================] - 22s 225ms/step - loss: 0.0340 - acc: 0.9895 - val_loss: 0.2366 - val_acc: 0.9440\nEpoch 67/100\n100/100 [==============================] - 23s 227ms/step - loss: 0.0266 - acc: 0.9915 - val_loss: 0.2395 - val_acc: 0.9410\nEpoch 68/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0339 - acc: 0.9880 - val_loss: 0.2381 - val_acc: 0.9390\nEpoch 69/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0286 - acc: 0.9905 - val_loss: 0.2434 - val_acc: 0.9370\nEpoch 70/100\n100/100 [==============================] - 22s 225ms/step - loss: 0.0237 - acc: 0.9915 - val_loss: 0.2884 - val_acc: 0.9330\nEpoch 71/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0286 - acc: 0.9880 - val_loss: 0.2329 - val_acc: 0.9410\nEpoch 72/100\n100/100 [==============================] - 22s 223ms/step - loss: 0.0171 - acc: 0.9950 - val_loss: 0.3459 - 
val_acc: 0.9300\nEpoch 73/100\n100/100 [==============================] - 23s 227ms/step - loss: 0.0296 - acc: 0.9885 - val_loss: 0.2355 - val_acc: 0.9420\nEpoch 74/100\n100/100 [==============================] - 23s 228ms/step - loss: 0.0305 - acc: 0.9900 - val_loss: 0.2325 - val_acc: 0.9410\nEpoch 75/100\n100/100 [==============================] - 22s 224ms/step - loss: 0.0217 - acc: 0.9920 - val_loss: 0.4367 - val_acc: 0.9210\nEpoch 76/100\n100/100 [==============================] - 23s 225ms/step - loss: 0.0240 - acc: 0.9930 - val_loss: 0.2303 - val_acc: 0.9430\nEpoch 77/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0173 - acc: 0.9930 - val_loss: 0.2573 - val_acc: 0.9420\nEpoch 78/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0247 - acc: 0.9910 - val_loss: 0.3878 - val_acc: 0.9260\nEpoch 79/100\n100/100 [==============================] - 22s 224ms/step - loss: 0.0166 - acc: 0.9945 - val_loss: 0.2767 - val_acc: 0.9350\nEpoch 80/100\n100/100 [==============================] - 22s 224ms/step - loss: 0.0254 - acc: 0.9910 - val_loss: 0.2394 - val_acc: 0.9410\nEpoch 81/100\n100/100 [==============================] - 23s 225ms/step - loss: 0.0157 - acc: 0.9940 - val_loss: 0.2417 - val_acc: 0.9400\nEpoch 82/100\n100/100 [==============================] - 22s 224ms/step - loss: 0.0232 - acc: 0.9930 - val_loss: 0.2930 - val_acc: 0.9370\nEpoch 83/100\n100/100 [==============================] - 22s 225ms/step - loss: 0.0272 - acc: 0.9915 - val_loss: 0.2281 - val_acc: 0.9430\nEpoch 84/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0219 - acc: 0.9920 - val_loss: 0.2590 - val_acc: 0.9370\nEpoch 85/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0221 - acc: 0.9935 - val_loss: 0.2792 - val_acc: 0.9340\nEpoch 86/100\n100/100 [==============================] - 23s 225ms/step - loss: 0.0221 - acc: 0.9920 - val_loss: 0.2528 - val_acc: 0.9460\nEpoch 87/100\n100/100 
[==============================] - 23s 229ms/step - loss: 0.0274 - acc: 0.9895 - val_loss: 0.2970 - val_acc: 0.9420\nEpoch 88/100\n100/100 [==============================] - 23s 230ms/step - loss: 0.0153 - acc: 0.9940 - val_loss: 0.3803 - val_acc: 0.9370\nEpoch 89/100\n100/100 [==============================] - 23s 228ms/step - loss: 0.0142 - acc: 0.9965 - val_loss: 0.3061 - val_acc: 0.9290\nEpoch 90/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0253 - acc: 0.9920 - val_loss: 0.2998 - val_acc: 0.9330\nEpoch 91/100\n100/100 [==============================] - 23s 227ms/step - loss: 0.0224 - acc: 0.9925 - val_loss: 0.3233 - val_acc: 0.9330\nEpoch 92/100\n100/100 [==============================] - 23s 227ms/step - loss: 0.0243 - acc: 0.9920 - val_loss: 0.3145 - val_acc: 0.9350\nEpoch 93/100\n100/100 [==============================] - 23s 227ms/step - loss: 0.0184 - acc: 0.9920 - val_loss: 0.3073 - val_acc: 0.9370\nEpoch 94/100\n100/100 [==============================] - 23s 227ms/step - loss: 0.0173 - acc: 0.9945 - val_loss: 0.2766 - val_acc: 0.9470\nEpoch 95/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0173 - acc: 0.9935 - val_loss: 0.3954 - val_acc: 0.9320\nEpoch 96/100\n100/100 [==============================] - 23s 225ms/step - loss: 0.0233 - acc: 0.9925 - val_loss: 0.3294 - val_acc: 0.9300\nEpoch 97/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0163 - acc: 0.9925 - val_loss: 0.2882 - val_acc: 0.9370\nEpoch 98/100\n100/100 [==============================] - 22s 225ms/step - loss: 0.0184 - acc: 0.9925 - val_loss: 0.2997 - val_acc: 0.9390\nEpoch 99/100\n100/100 [==============================] - 23s 226ms/step - loss: 0.0179 - acc: 0.9935 - val_loss: 0.2579 - val_acc: 0.9360\nEpoch 100/100\n100/100 [==============================] - 23s 227ms/step - loss: 0.0226 - acc: 0.9930 - val_loss: 0.2782 - val_acc: 0.9400\n"
],
[
"#plotting the results after unfreezing top 3 layers of pretrained convnet - training (accuracy and losses) vs validation (accuracy and losses)\nimport matplotlib.pyplot as plt\n\nacc = history.history['acc']\nval_acc = history.history['val_acc']\nloss = history.history['loss']\nval_loss = history.history['val_loss']\n\nepochs = range(1, len(acc) + 1)\n\nplt.plot(epochs, acc, 'bo', label='Training acc')\nplt.plot(epochs, val_acc, 'b', label='Validation acc')\nplt.title('Training and validation accuracy')\nplt.legend()\nplt.figure()\n\nplt.plot(epochs, loss, 'bo', label='Training loss')\nplt.plot(epochs, val_loss, 'b', label='Validation loss')\nplt.title('Training and validation loss')\nplt.legend()\n\nplt.show()\n\n#Accuracy increased to above 90 but Validation accuracy doesnot improve much after 15 epochs \n#the training need to be stopped aroung 15 epochs",
"_____no_output_____"
],
[
"#smoothening the curves\ndef smooth_curve(points, factor=0.8):\n smoothed_points = []\n for point in points:\n if smoothed_points:\n previous = smoothed_points[-1]\n smoothed_points.append(previous * factor + point * (1 - factor))\n else:\n smoothed_points.append(point)\n return smoothed_points\n\nplt.plot(epochs,smooth_curve(acc), 'bo', label='Smoothed training acc')\nplt.plot(epochs,smooth_curve(val_acc), 'b', label='Smoothed validation acc')\nplt.title('Training and validation accuracy')\nplt.legend()\nplt.figure()\n\nplt.plot(epochs,smooth_curve(loss), 'bo', label='Smoothed training loss')\nplt.plot(epochs,smooth_curve(val_loss), 'b', label='Smoothed validation loss')\nplt.title('Training and validation loss')\nplt.legend()\nplt.show()\n",
"_____no_output_____"
],
[
"#evaluating the model for the test data\n#generating test data\ntest_generator = test_datagen.flow_from_directory(\n test_dir,\n target_size=(150, 150),\n batch_size=20,\n class_mode='binary')\n#evaluating the result of the test data on the trained model\ntest_loss, test_acc = model.evaluate_generator(test_generator, steps=50)\n\nprint('test acc:', test_acc)",
"Found 1000 images belonging to 2 classes.\n('test acc:', 0.9419999921321869)\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e760036bd981298f1ca2b8e99d68259745f4e1e9 | 12,908 | ipynb | Jupyter Notebook | scikit-learn-official-examples/model_selection/plot_precision_recall.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | 1 | 2019-05-10T09:16:23.000Z | 2019-05-10T09:16:23.000Z | scikit-learn-official-examples/model_selection/plot_precision_recall.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | null | null | null | scikit-learn-official-examples/model_selection/plot_precision_recall.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | 1 | 2019-05-10T09:17:28.000Z | 2019-05-10T09:17:28.000Z | 71.711111 | 4,215 | 0.627595 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\n# Precision-Recall\n\n\nExample of Precision-Recall metric to evaluate classifier output quality.\n\nPrecision-Recall is a useful measure of success of prediction when the\nclasses are very imbalanced. In information retrieval, precision is a\nmeasure of result relevancy, while recall is a measure of how many truly\nrelevant results are returned.\n\nThe precision-recall curve shows the tradeoff between precision and\nrecall for different threshold. A high area under the curve represents\nboth high recall and high precision, where high precision relates to a\nlow false positive rate, and high recall relates to a low false negative\nrate. High scores for both show that the classifier is returning accurate\nresults (high precision), as well as returning a majority of all positive\nresults (high recall).\n\nA system with high recall but low precision returns many results, but most of\nits predicted labels are incorrect when compared to the training labels. A\nsystem with high precision but low recall is just the opposite, returning very\nfew results, but most of its predicted labels are correct when compared to the\ntraining labels. An ideal system with high precision and high recall will\nreturn many results, with all results labeled correctly.\n\nPrecision ($P$) is defined as the number of true positives ($T_p$)\nover the number of true positives plus the number of false positives\n($F_p$).\n\n$P = \\frac{T_p}{T_p+F_p}$\n\nRecall ($R$) is defined as the number of true positives ($T_p$)\nover the number of true positives plus the number of false negatives\n($F_n$).\n\n$R = \\frac{T_p}{T_p + F_n}$\n\nThese quantities are also related to the ($F_1$) score, which is defined\nas the harmonic mean of precision and recall.\n\n$F1 = 2\\frac{P \\times R}{P+R}$\n\nNote that the precision may not decrease with recall. 
The\ndefinition of precision ($\\frac{T_p}{T_p + F_p}$) shows that lowering\nthe threshold of a classifier may increase the denominator, by increasing the\nnumber of results returned. If the threshold was previously set too high, the\nnew results may all be true positives, which will increase precision. If the\nprevious threshold was about right or too low, further lowering the threshold\nwill introduce false positives, decreasing precision.\n\nRecall is defined as $\\frac{T_p}{T_p+F_n}$, where $T_p+F_n$ does\nnot depend on the classifier threshold. This means that lowering the classifier\nthreshold may increase recall, by increasing the number of true positive\nresults. It is also possible that lowering the threshold may leave recall\nunchanged, while the precision fluctuates.\n\nThe relationship between recall and precision can be observed in the\nstairstep area of the plot - at the edges of these steps a small change\nin the threshold considerably reduces precision, with only a minor gain in\nrecall.\n\n**Average precision** (AP) summarizes such a plot as the weighted mean of\nprecisions achieved at each threshold, with the increase in recall from the\nprevious threshold used as the weight:\n\n$\\text{AP} = \\sum_n (R_n - R_{n-1}) P_n$\n\nwhere $P_n$ and $R_n$ are the precision and recall at the\nnth threshold. A pair $(R_k, P_k)$ is referred to as an\n*operating point*.\n\nAP and the trapezoidal area under the operating points\n(:func:`sklearn.metrics.auc`) are common ways to summarize a precision-recall\ncurve that lead to different results. Read more in the\n`User Guide <precision_recall_f_measure_metrics>`.\n\nPrecision-recall curves are typically used in binary classification to study\nthe output of a classifier. In order to extend the precision-recall curve and\naverage precision to multi-class or multi-label classification, it is necessary\nto binarize the output. 
One curve can be drawn per label, but one can also draw\na precision-recall curve by considering each element of the label indicator\nmatrix as a binary prediction (micro-averaging).\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>See also :func:`sklearn.metrics.average_precision_score`,\n :func:`sklearn.metrics.recall_score`,\n :func:`sklearn.metrics.precision_score`,\n :func:`sklearn.metrics.f1_score`</p></div>\n\n",
"_____no_output_____"
]
],
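[
[
"# Illustrative sketch (added for clarity; not part of the original scikit-learn\n# example): the precision, recall and F1 definitions above, evaluated for\n# hypothetical confusion-matrix counts.\ntp, fp, fn = 8, 2, 4  # hypothetical true-positive, false-positive, false-negative counts\nprecision_ex = tp / float(tp + fp)  # P = Tp / (Tp + Fp) -> 0.8\nrecall_ex = tp / float(tp + fn)     # R = Tp / (Tp + Fn) -> 0.666...\nf1_ex = 2 * precision_ex * recall_ex / (precision_ex + recall_ex)  # harmonic mean -> 0.727...\nprint(precision_ex, recall_ex, f1_ex)",
"_____no_output_____"
]
],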
[
[
"from __future__ import print_function",
"_____no_output_____"
]
],
[
[
"In binary classification settings\n--------------------------------------------------------\n\nCreate simple data\n..................\n\nTry to differentiate the two first classes of the iris data\n\n",
"_____no_output_____"
]
],
[
[
"from sklearn import svm, datasets\nfrom sklearn.model_selection import train_test_split\nimport numpy as np\n\niris = datasets.load_iris()\nX = iris.data\ny = iris.target\n\n# Add noisy features\nrandom_state = np.random.RandomState(0)\nn_samples, n_features = X.shape\nX = np.c_[X, random_state.randn(n_samples, 200 * n_features)]\n\n# Limit to the two first classes, and split into training and test\nX_train, X_test, y_train, y_test = train_test_split(X[y < 2], y[y < 2],\n test_size=.5,\n random_state=random_state)\n\n# Create a simple classifier\nclassifier = svm.LinearSVC(random_state=random_state)\nclassifier.fit(X_train, y_train)\ny_score = classifier.decision_function(X_test)",
"_____no_output_____"
]
],
[
[
"Compute the average precision score\n...................................\n\n",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import average_precision_score\naverage_precision = average_precision_score(y_test, y_score)\n\nprint('Average precision-recall score: {0:0.2f}'.format(\n average_precision))",
"_____no_output_____"
]
],
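[
[
"# Illustrative sketch (added; assumes y_test and y_score from the cells above):\n# the AP definition, AP = sum_n (R_n - R_{n-1}) P_n, evaluated directly from\n# the operating points returned by precision_recall_curve. It should agree\n# with the average_precision_score value printed above.\nimport numpy as np\nfrom sklearn.metrics import precision_recall_curve\n\nprec, rec, _ = precision_recall_curve(y_test, y_score)\n# recall decreases along the returned arrays, so -np.diff(rec) gives the\n# recall increments R_n - R_{n-1}\nap_manual = np.sum(-np.diff(rec) * prec[:-1])\nprint('AP from operating points: {0:0.2f}'.format(ap_manual))",
"_____no_output_____"
]
],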
[
[
"Plot the Precision-Recall curve\n................................\n\n",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import precision_recall_curve\nimport matplotlib.pyplot as plt\n\nprecision, recall, _ = precision_recall_curve(y_test, y_score)\n\nplt.step(recall, precision, color='b', alpha=0.2,\n where='post')\nplt.fill_between(recall, precision, step='post', alpha=0.2,\n color='b')\n\nplt.xlabel('Recall')\nplt.ylabel('Precision')\nplt.ylim([0.0, 1.05])\nplt.xlim([0.0, 1.0])\nplt.title('2-class Precision-Recall curve: AP={0:0.2f}'.format(\n average_precision))",
"_____no_output_____"
]
],
[
[
"In multi-label settings\n------------------------\n\nCreate multi-label data, fit, and predict\n...........................................\n\nWe create a multi-label dataset, to illustrate the precision-recall in\nmulti-label settings\n\n",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import label_binarize\n\n# Use label_binarize to be multi-label like settings\nY = label_binarize(y, classes=[0, 1, 2])\nn_classes = Y.shape[1]\n\n# Split into training and test\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=.5,\n random_state=random_state)\n\n# We use OneVsRestClassifier for multi-label prediction\nfrom sklearn.multiclass import OneVsRestClassifier\n\n# Run classifier\nclassifier = OneVsRestClassifier(svm.LinearSVC(random_state=random_state))\nclassifier.fit(X_train, Y_train)\ny_score = classifier.decision_function(X_test)",
"_____no_output_____"
]
],
[
[
"The average precision score in multi-label settings\n....................................................\n\n",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import precision_recall_curve\nfrom sklearn.metrics import average_precision_score\n\n# For each class\nprecision = dict()\nrecall = dict()\naverage_precision = dict()\nfor i in range(n_classes):\n precision[i], recall[i], _ = precision_recall_curve(Y_test[:, i],\n y_score[:, i])\n average_precision[i] = average_precision_score(Y_test[:, i], y_score[:, i])\n\n# A \"micro-average\": quantifying score on all classes jointly\nprecision[\"micro\"], recall[\"micro\"], _ = precision_recall_curve(Y_test.ravel(),\n y_score.ravel())\naverage_precision[\"micro\"] = average_precision_score(Y_test, y_score,\n average=\"micro\")\nprint('Average precision score, micro-averaged over all classes: {0:0.2f}'\n .format(average_precision[\"micro\"]))",
"_____no_output_____"
]
],
[
[
"Plot the micro-averaged Precision-Recall curve\n...............................................\n\n\n",
"_____no_output_____"
]
],
[
[
"plt.figure()\nplt.step(recall['micro'], precision['micro'], color='b', alpha=0.2,\n where='post')\nplt.fill_between(recall[\"micro\"], precision[\"micro\"], step='post', alpha=0.2,\n color='b')\n\nplt.xlabel('Recall')\nplt.ylabel('Precision')\nplt.ylim([0.0, 1.05])\nplt.xlim([0.0, 1.0])\nplt.title(\n 'Average precision score, micro-averaged over all classes: AP={0:0.2f}'\n .format(average_precision[\"micro\"]))",
"_____no_output_____"
]
],
[
[
"Plot Precision-Recall curve for each class and iso-f1 curves\n.............................................................\n\n\n",
"_____no_output_____"
]
],
[
[
"from itertools import cycle\n# setup plot details\ncolors = cycle(['navy', 'turquoise', 'darkorange', 'cornflowerblue', 'teal'])\n\nplt.figure(figsize=(7, 8))\nf_scores = np.linspace(0.2, 0.8, num=4)\nlines = []\nlabels = []\nfor f_score in f_scores:\n x = np.linspace(0.01, 1)\n y = f_score * x / (2 * x - f_score)\n l, = plt.plot(x[y >= 0], y[y >= 0], color='gray', alpha=0.2)\n plt.annotate('f1={0:0.1f}'.format(f_score), xy=(0.9, y[45] + 0.02))\n\nlines.append(l)\nlabels.append('iso-f1 curves')\nl, = plt.plot(recall[\"micro\"], precision[\"micro\"], color='gold', lw=2)\nlines.append(l)\nlabels.append('micro-average Precision-recall (area = {0:0.2f})'\n ''.format(average_precision[\"micro\"]))\n\nfor i, color in zip(range(n_classes), colors):\n l, = plt.plot(recall[i], precision[i], color=color, lw=2)\n lines.append(l)\n labels.append('Precision-recall for class {0} (area = {1:0.2f})'\n ''.format(i, average_precision[i]))\n\nfig = plt.gcf()\nfig.subplots_adjust(bottom=0.25)\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.05])\nplt.xlabel('Recall')\nplt.ylabel('Precision')\nplt.title('Extension of Precision-Recall curve to multi-class')\nplt.legend(lines, labels, loc=(0, -.38), prop=dict(size=14))\n\n\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e76019e3399c3a95dc1304b97713533fa4ada8bf | 670,639 | ipynb | Jupyter Notebook | Data_Cleaning.ipynb | msvrk/Credit-Risk-analysis-and-prediction | 8a99efddd001e445e8591cfc5ad4681bc080d4f9 | [
"MIT"
] | null | null | null | Data_Cleaning.ipynb | msvrk/Credit-Risk-analysis-and-prediction | 8a99efddd001e445e8591cfc5ad4681bc080d4f9 | [
"MIT"
] | null | null | null | Data_Cleaning.ipynb | msvrk/Credit-Risk-analysis-and-prediction | 8a99efddd001e445e8591cfc5ad4681bc080d4f9 | [
"MIT"
] | null | null | null | 212.496515 | 10,052 | 0.889841 | [
[
[
"import pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom scipy.stats import chi2_contingency\nimport numpy as np\nfrom sklearn.feature_selection import VarianceThreshold\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.feature_selection import SelectKBest\nfrom sklearn.feature_selection import f_classif, mutual_info_classif, chi2, mutual_info_regression\nfrom sklearn.feature_selection import RFE\nfrom sklearn.ensemble import RandomForestClassifier\nfrom xgboost import XGBClassifier\nfrom sklearn.impute import KNNImputer\nfrom scipy.stats.mstats import winsorize",
"_____no_output_____"
],
[
"df.to_csv('C:/Users/Progyan/Downloads/Output_clean.csv',index = False)",
"_____no_output_____"
],
[
"df = pd.read_csv('C:/Users/Progyan/Downloads/Output_part_cleaned.csv')",
"C:\\Users\\Progyan\\anaconda3\\lib\\site-packages\\IPython\\core\\interactiveshell.py:3071: DtypeWarning: Columns (61,71,72,73,76,77,78,81,98) have mixed types.Specify dtype option on import or set low_memory=False.\n has_raised = await self.run_ast_nodes(code_ast.body, cell_name,\n"
],
[
"df",
"_____no_output_____"
],
[
"(df.isnull().sum() / df.shape[0] * 100)[0:50]",
"_____no_output_____"
],
[
"df.num_tl_120dpd_2m.isnull().sum()",
"_____no_output_____"
],
[
"df = df.loc[df['zip_code'].notna(),:]",
"_____no_output_____"
],
[
"df = df.loc[df['inq_last_6mths'].notna(),:]",
"_____no_output_____"
],
[
"df = df.loc[df['last_pymnt_d'].notna(),:]",
"_____no_output_____"
],
[
"df = df.loc[df['last_credit_pull_d'].notna(),:]",
"_____no_output_____"
],
[
"df = df.loc[df['collections_12_mths_ex_med'].notna(),:]",
"_____no_output_____"
],
[
"df = df.loc[df['employmentTitle'].notna(),:]",
"_____no_output_____"
],
[
"df = df.loc[df['postCode'].notna(),:]",
"_____no_output_____"
],
[
"df = df.drop(['revol_bal_joint'],axis=1)",
"_____no_output_____"
],
[
"df = df.drop(['sec_app_fico_range_low'],axis=1)",
"_____no_output_____"
],
[
"df = df.drop(['sec_app_fico_range_high'],axis=1)",
"_____no_output_____"
],
[
"df = df.drop(['sec_app_earliest_cr_line'],axis=1)",
"_____no_output_____"
],
[
"df = df.drop(['sec_app_inq_last_6mths'],axis=1)",
"_____no_output_____"
],
[
"df = df.dropna(thresh=500000, axis=1)",
"_____no_output_____"
],
[
"df = df.loc[df['tot_coll_amt'].notna(),:]",
"_____no_output_____"
],
[
"df = df.loc[df['bc_open_to_buy'].notna(),:]",
"_____no_output_____"
],
[
"df = df.loc[df['bc_util'].notna(),:]",
"_____no_output_____"
],
[
"df = df.loc[df['mths_since_recent_bc'].notna(),:]",
"_____no_output_____"
],
[
"df = df.loc[df['num_rev_accts'].notna(),:]",
"_____no_output_____"
],
[
"df = df.loc[df['pct_tl_nvr_dlq'].notna(),:]",
"_____no_output_____"
],
[
"df = df.loc[df['percent_bc_gt_75'].notna(),:]",
"_____no_output_____"
],
[
"df['mths_since_recent_inq'].fillna(df['mths_since_recent_inq'].median(), inplace=True)",
"_____no_output_____"
],
[
"df['mths_since_recent_inq'].fillna(df['mths_since_recent_inq'].median(), inplace=True)",
"_____no_output_____"
],
[
"df['mo_sin_old_il_acct'].fillna(df['mo_sin_old_il_acct'].median(), inplace=True)",
"_____no_output_____"
],
[
"df = df.loc[df['num_tl_120dpd_2m'].notna(),:]",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 677934 entries, 14 to 753123\nData columns (total 94 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 loan_amnt 677934 non-null int64 \n 1 funded_amnt 677934 non-null int64 \n 2 funded_amnt_inv 677934 non-null float64\n 3 issue_d 677934 non-null object \n 4 loan_status 677934 non-null object \n 5 zip_code 677934 non-null object \n 6 inq_last_6mths 677934 non-null float64\n 7 out_prncp_inv 677934 non-null float64\n 8 total_pymnt 677934 non-null float64\n 9 total_pymnt_inv 677934 non-null float64\n 10 total_rec_prncp 677934 non-null float64\n 11 total_rec_int 677934 non-null float64\n 12 total_rec_late_fee 677934 non-null float64\n 13 recoveries 677934 non-null float64\n 14 collection_recovery_fee 677934 non-null float64\n 15 last_pymnt_d 677934 non-null object \n 16 last_pymnt_amnt 677934 non-null float64\n 17 last_credit_pull_d 677934 non-null object \n 18 last_fico_range_high 677934 non-null int64 \n 19 last_fico_range_low 677934 non-null int64 \n 20 collections_12_mths_ex_med 677934 non-null float64\n 21 policy_code 677934 non-null int64 \n 22 application_type 677934 non-null object \n 23 tot_coll_amt 677934 non-null float64\n 24 tot_cur_bal 677934 non-null float64\n 25 total_rev_hi_lim 677934 non-null float64\n 26 acc_open_past_24mths 677934 non-null float64\n 27 avg_cur_bal 677934 non-null float64\n 28 bc_open_to_buy 677934 non-null float64\n 29 bc_util 677934 non-null float64\n 30 mo_sin_old_il_acct 677934 non-null float64\n 31 mo_sin_old_rev_tl_op 677934 non-null float64\n 32 mo_sin_rcnt_rev_tl_op 677934 non-null float64\n 33 mo_sin_rcnt_tl 677934 non-null float64\n 34 mort_acc 677934 non-null float64\n 35 mths_since_recent_bc 677934 non-null float64\n 36 mths_since_recent_inq 677934 non-null float64\n 37 num_accts_ever_120_pd 677934 non-null float64\n 38 num_actv_bc_tl 677934 non-null float64\n 39 num_actv_rev_tl 677934 non-null float64\n 40 num_bc_sats 677934 non-null 
float64\n 41 num_bc_tl 677934 non-null float64\n 42 num_il_tl 677934 non-null float64\n 43 num_op_rev_tl 677934 non-null float64\n 44 num_rev_accts 677934 non-null float64\n 45 num_rev_tl_bal_gt_0 677934 non-null float64\n 46 num_sats 677934 non-null float64\n 47 num_tl_120dpd_2m 677934 non-null float64\n 48 num_tl_30dpd 677934 non-null float64\n 49 num_tl_90g_dpd_24m 677934 non-null float64\n 50 num_tl_op_past_12m 677934 non-null float64\n 51 pct_tl_nvr_dlq 677934 non-null float64\n 52 percent_bc_gt_75 677934 non-null float64\n 53 tax_liens 677934 non-null float64\n 54 tot_hi_cred_lim 677934 non-null float64\n 55 total_bal_ex_mort 677934 non-null float64\n 56 total_bc_limit 677934 non-null float64\n 57 total_il_high_credit_limit 677934 non-null float64\n 58 disbursement_method 677934 non-null object \n 59 id 677934 non-null int64 \n 60 loanAmnt 677934 non-null int64 \n 61 term 677934 non-null int64 \n 62 interestRate 677934 non-null float64\n 63 installment 677934 non-null float64\n 64 grade 677934 non-null object \n 65 subGrade 677934 non-null object \n 66 employmentTitle 677934 non-null float64\n 67 employmentLength 677934 non-null int64 \n 68 homeOwnership 677934 non-null int64 \n 69 annualIncome 677934 non-null float64\n 70 verificationStatus 677934 non-null object \n 71 purpose 677934 non-null int64 \n 72 postCode 677934 non-null float64\n 73 regionCode 677934 non-null int64 \n 74 dti 677934 non-null float64\n 75 delinquency_2years 677934 non-null int64 \n 76 ficoRangeLow 677934 non-null int64 \n 77 ficoRangeHigh 677934 non-null int64 \n 78 openAcc 677934 non-null int64 \n 79 pubRec 677934 non-null int64 \n 80 pubRecBankruptcies 677934 non-null float64\n 81 revolBal 677934 non-null int64 \n 82 revolUtil 677934 non-null float64\n 83 totalAcc 677934 non-null int64 \n 84 initialListStatus 677934 non-null int64 \n 85 applicationType 677934 non-null int64 \n 86 earliesCreditLine 677934 non-null object \n 87 title 677934 non-null float64\n 88 policyCode 677934 
non-null int64 \n 89 isDefault 677934 non-null float64\n 90 earliesCreditLine_month 677934 non-null int64 \n 91 earliesCreditLine_year 677934 non-null int64 \n 92 issue_d_month 677934 non-null int64 \n 93 issue_d_year 677934 non-null int64 \ndtypes: float64(57), int64(26), object(11)\nmemory usage: 491.4+ MB\n"
],
[
"df1=df.select_dtypes(exclude=['object'])\n\nfor column in df1:\n plt.figure()\n df.boxplot([column])",
"<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. 
(To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. 
(To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. 
(To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. 
(To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. 
(To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n<ipython-input-234-55e12a7cc308>:4: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n plt.figure()\n"
],
[
"df = df[df.last_fico_range_high != 0]",
"_____no_output_____"
],
[
"df.last_fico_range_high.min()",
"_____no_output_____"
],
[
"df.loc[df['last_fico_range_low'] ==0,'last_fico_range_high':]",
"_____no_output_____"
],
[
"x = df.loc[:,'last_fico_range_low'].replace(0,300)\ndf.loc[:,'last_fico_range_low'] = x.copy()",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 677630 entries, 14 to 753123\nData columns (total 94 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 loan_amnt 677630 non-null int64 \n 1 funded_amnt 677630 non-null int64 \n 2 funded_amnt_inv 677630 non-null float64\n 3 issue_d 677630 non-null object \n 4 loan_status 677630 non-null object \n 5 zip_code 677630 non-null object \n 6 inq_last_6mths 677630 non-null float64\n 7 out_prncp_inv 677630 non-null float64\n 8 total_pymnt 677630 non-null float64\n 9 total_pymnt_inv 677630 non-null float64\n 10 total_rec_prncp 677630 non-null float64\n 11 total_rec_int 677630 non-null float64\n 12 total_rec_late_fee 677630 non-null float64\n 13 recoveries 677630 non-null float64\n 14 collection_recovery_fee 677630 non-null float64\n 15 last_pymnt_d 677630 non-null object \n 16 last_pymnt_amnt 677630 non-null float64\n 17 last_credit_pull_d 677630 non-null object \n 18 last_fico_range_high 677630 non-null int64 \n 19 last_fico_range_low 677630 non-null int64 \n 20 collections_12_mths_ex_med 677630 non-null float64\n 21 policy_code 677630 non-null int64 \n 22 application_type 677630 non-null object \n 23 tot_coll_amt 677630 non-null float64\n 24 tot_cur_bal 677630 non-null float64\n 25 total_rev_hi_lim 677630 non-null float64\n 26 acc_open_past_24mths 677630 non-null float64\n 27 avg_cur_bal 677630 non-null float64\n 28 bc_open_to_buy 677630 non-null float64\n 29 bc_util 677630 non-null float64\n 30 mo_sin_old_il_acct 677630 non-null float64\n 31 mo_sin_old_rev_tl_op 677630 non-null float64\n 32 mo_sin_rcnt_rev_tl_op 677630 non-null float64\n 33 mo_sin_rcnt_tl 677630 non-null float64\n 34 mort_acc 677630 non-null float64\n 35 mths_since_recent_bc 677630 non-null float64\n 36 mths_since_recent_inq 677630 non-null float64\n 37 num_accts_ever_120_pd 677630 non-null float64\n 38 num_actv_bc_tl 677630 non-null float64\n 39 num_actv_rev_tl 677630 non-null float64\n 40 num_bc_sats 677630 non-null 
float64\n 41 num_bc_tl 677630 non-null float64\n 42 num_il_tl 677630 non-null float64\n 43 num_op_rev_tl 677630 non-null float64\n 44 num_rev_accts 677630 non-null float64\n 45 num_rev_tl_bal_gt_0 677630 non-null float64\n 46 num_sats 677630 non-null float64\n 47 num_tl_120dpd_2m 677630 non-null float64\n 48 num_tl_30dpd 677630 non-null float64\n 49 num_tl_90g_dpd_24m 677630 non-null float64\n 50 num_tl_op_past_12m 677630 non-null float64\n 51 pct_tl_nvr_dlq 677630 non-null float64\n 52 percent_bc_gt_75 677630 non-null float64\n 53 tax_liens 677630 non-null float64\n 54 tot_hi_cred_lim 677630 non-null float64\n 55 total_bal_ex_mort 677630 non-null float64\n 56 total_bc_limit 677630 non-null float64\n 57 total_il_high_credit_limit 677630 non-null float64\n 58 disbursement_method 677630 non-null object \n 59 id 677630 non-null int64 \n 60 loanAmnt 677630 non-null int64 \n 61 term 677630 non-null int64 \n 62 interestRate 677630 non-null float64\n 63 installment 677630 non-null float64\n 64 grade 677630 non-null object \n 65 subGrade 677630 non-null object \n 66 employmentTitle 677630 non-null float64\n 67 employmentLength 677630 non-null int64 \n 68 homeOwnership 677630 non-null int64 \n 69 annualIncome 677630 non-null float64\n 70 verificationStatus 677630 non-null object \n 71 purpose 677630 non-null int64 \n 72 postCode 677630 non-null float64\n 73 regionCode 677630 non-null int64 \n 74 dti 677630 non-null float64\n 75 delinquency_2years 677630 non-null int64 \n 76 ficoRangeLow 677630 non-null int64 \n 77 ficoRangeHigh 677630 non-null int64 \n 78 openAcc 677630 non-null int64 \n 79 pubRec 677630 non-null int64 \n 80 pubRecBankruptcies 677630 non-null float64\n 81 revolBal 677630 non-null int64 \n 82 revolUtil 677630 non-null float64\n 83 totalAcc 677630 non-null int64 \n 84 initialListStatus 677630 non-null int64 \n 85 applicationType 677630 non-null int64 \n 86 earliesCreditLine 677630 non-null object \n 87 title 677630 non-null float64\n 88 policyCode 677630 
non-null int64 \n 89 isDefault 677630 non-null float64\n 90 earliesCreditLine_month 677630 non-null int64 \n 91 earliesCreditLine_year 677630 non-null int64 \n 92 issue_d_month 677630 non-null int64 \n 93 issue_d_year 677630 non-null int64 \ndtypes: float64(57), int64(26), object(11)\nmemory usage: 491.1+ MB\n"
],
[
"df.loc[df.total_il_high_credit_limit>1200000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.tot_coll_amt<37000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.funded_amnt<37000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.total_pymnt<45000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.total_pymnt_inv<43000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.total_rec_int<22000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.total_rec_late_fee<550,:]",
"_____no_output_____"
],
[
"df = df.loc[df.recoveries<24000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.last_pymnt_amnt<37000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.tot_cur_bal<4000000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.total_rev_hi_lim<8000000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.acc_open_past_24mths<55,:]",
"_____no_output_____"
],
[
"df = df.loc[df.avg_cur_bal<370000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.bc_open_to_buy<310000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.mo_sin_old_il_acct<600,:]",
"_____no_output_____"
],
[
"df = df.loc[df.mo_sin_old_rev_tl_op<730,:]",
"_____no_output_____"
],
[
"df = df.loc[df.mo_sin_rcnt_rev_tl_op<330,:]",
"_____no_output_____"
],
[
"df = df.loc[df.mo_sin_rcnt_tl<200,:]",
"_____no_output_____"
],
[
"df = df.loc[df.mort_acc<27,:]",
"_____no_output_____"
],
[
"df = df.loc[df.mths_since_recent_bc<510,:]",
"_____no_output_____"
],
[
"df = df.loc[df.num_actv_rev_tl<40,:]",
"_____no_output_____"
],
[
"df = df.loc[df.num_bc_sats<40,:]",
"_____no_output_____"
],
[
"df = df.loc[df.num_bc_tl<60,:]",
"_____no_output_____"
],
[
"df = df.loc[df.num_il_tl<110,:]",
"_____no_output_____"
],
[
"df = df.loc[df.num_op_rev_tl<60,:]",
"_____no_output_____"
],
[
"df = df.loc[df.num_rev_accts<100,:]",
"_____no_output_____"
],
[
"df = df.loc[df.num_sats<70,:]",
"_____no_output_____"
],
[
"df = df.loc[df.num_tl_op_past_12m<27,:]",
"_____no_output_____"
],
[
"df = df.loc[df.tax_liens<80,:]",
"_____no_output_____"
],
[
"df = df.loc[df.total_bal_ex_mort<3000000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.total_bc_limit<450000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.total_il_high_credit_limit<1700000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.annualIncome<8000000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.dti<60,:]",
"_____no_output_____"
],
[
"df = df.loc[df.delinquency_2years<35,:]",
"_____no_output_____"
],
[
"df = df.loc[df.openAcc<65,:]",
"_____no_output_____"
],
[
"df = df.loc[df.pubRec<50,:]",
"_____no_output_____"
],
[
"df = df.loc[df.revolBal<1600000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.revolUtil<175,:]",
"_____no_output_____"
],
[
"df = df.loc[df.total_rev_hi_lim<1500000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.bc_util<225,:]",
"_____no_output_____"
],
[
"df = df.loc[df.mo_sin_old_il_acct<600,:]",
"_____no_output_____"
],
[
"df = df.loc[df.tax_liens<40,:]",
"_____no_output_____"
],
[
"df = df.loc[df.total_bal_ex_mort<1000000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.total_bc_limit<375000,:]",
"_____no_output_____"
],
[
"df = df.loc[df.total_il_high_credit_limit<1200000,:]",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7601bb8bdfa969e87f81af3c869dfaf1856bfd5 | 484,327 | ipynb | Jupyter Notebook | regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb | katewomack/NuPyCEE | 67169db62a53825ed44de320c50475229bddc34d | [
"BSD-3-Clause"
] | 22 | 2016-05-24T15:59:41.000Z | 2021-08-16T08:32:31.000Z | regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb | katewomack/NuPyCEE | 67169db62a53825ed44de320c50475229bddc34d | [
"BSD-3-Clause"
] | 15 | 2016-05-30T15:57:40.000Z | 2022-01-23T14:20:54.000Z | regression_tests/.ipynb_checkpoints/SYGMA_SSP_h_yield_input-checkpoint.ipynb | katewomack/NuPyCEE | 67169db62a53825ed44de320c50475229bddc34d | [
"BSD-3-Clause"
] | 14 | 2016-10-20T10:13:36.000Z | 2022-03-13T09:14:49.000Z | 176.182976 | 40,784 | 0.886926 | [
[
[
"# Regression test suite: Test of basic SSP GCE features",
"_____no_output_____"
],
[
"Test of an SSP with artificial yields (pure h1 yields) provided in NuGrid tables (no PopIII tests here). The focus is on basic GCE features.\nYou can find the documentation <a href=\"doc/sygma.html\">here</a>.\n\nBefore starting the test, make sure that you use the standard yield input files.\n\n\n\n",
"_____no_output_____"
],
[
"## Outline:",
"_____no_output_____"
],
[
"$\\odot$ Evolution of ISM fine\n\n$\\odot$ Sources of massive and AGB stars distinguished\n\n$\\odot$ Test of final mass of ISM for different IMF boundaries\n\n$\\odot$ Test of Salpeter, Chabrier, Kroupa IMF by checking the evolution of ISM mass (incl. alphaimf)\n\n$\\odot$ Test if SNIa on/off works\n\n$\\odot$ Test of the three SNIa implementations, the evolution of SN1a contributions\n\n$\\odot$ Test of parameter tend, dt and special_timesteps\n\n$\\odot$ Test of parameter mgal\n\n$\\odot$ Test of parameter transitionmass\n\nTODO: test non-linear yield fitting (hard set in code right now, no input parameter provided)\n",
"_____no_output_____"
]
],
[
[
"#from imp import *\n#s=load_source('sygma','/home/nugrid/nugrid/SYGMA/SYGMA_online/SYGMA_dev/sygma.py')\n#%pylab nbagg\nimport sys\nimport sygma as s\nprint s.__file__\nreload(s)\ns.__file__\n#import matplotlib\n#matplotlib.use('nbagg')\nimport matplotlib.pyplot as plt\n#matplotlib.use('nbagg')\nimport numpy as np\nfrom scipy.integrate import quad\nfrom scipy.interpolate import UnivariateSpline\nimport os\n\n# Trigger interactive or non-interactive depending on command line argument\n__RUNIPY__ = sys.argv[0]\n\nif __RUNIPY__:\n %matplotlib inline\nelse:\n %pylab nbagg",
"/Users/christian/Research/NuGRid/NuPyCEE/sygma.py\n"
]
],
[
[
"### IMF notes:",
"_____no_output_____"
],
[
"The IMF allows one to calculate the number of stars $N_{12}$ in the mass interval [m1,m2] with\n\n(I) $N_{12}$ = k_N $\\int _{m1}^{m2} m^{-2.35} dm$,\n\nwhere k_N is the normalization constant. It can be derived from the total mass of the system $M_{tot}$,\nsince the total mass $M_{12}$ in the mass interval above is\n\n(II) $M_{12}$ = k_N $\\int _{m1}^{m2} m^{-1.35} dm$\n\nWith a total mass interval of [1,30] and $M_{tot}=1e11$, $k_N$ can be derived from (II):\n\n$1e11 = k_N/0.35 * (1^{-0.35} - 30^{-0.35})$",
"_____no_output_____"
]
],
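As a quick cross-check of the derivation above, the same Salpeter normalization can be evaluated numerically (a sketch using only `scipy.integrate.quad`; the names `k_N` and `N_tot` mirror the derivation, not the SYGMA API):

```python
from scipy.integrate import quad

# mass-weighted Salpeter IMF over [1, 30], normalized to M_tot = 1e11 Msun
k_N = 1e11 / quad(lambda m: m**-1.35, 1.0, 30.0)[0]  # same as 1e11*0.35/(1 - 30**-0.35)
# the number-weighted integral then gives the total number of stars
N_tot = k_N * quad(lambda m: m**-2.35, 1.0, 30.0)[0]
print(N_tot)  # ~3.69e10, matching the closed-form cells below
```

The printed value should reproduce the closed-form `N_tot` computed in the next cells.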
[
[
"k_N=1e11*0.35/ (1**-0.35 - 30**-0.35) #(I)",
"_____no_output_____"
]
],
[
[
"The total number of stars $N_{tot}$ is then:",
"_____no_output_____"
]
],
[
[
"N_tot=k_N/1.35 * (1**-1.35 - 30**-1.35) #(II)\nprint N_tot",
"36877281297.2\n"
]
],
[
[
"With a yield ejected of $0.1 Msun$, the total amount ejected is:",
"_____no_output_____"
]
],
[
[
"Yield_tot=0.1*N_tot\nprint Yield_tot/1e11",
"0.0368772812972\n"
]
],
[
[
"compared to the simulation:",
"_____no_output_____"
]
],
[
[
"import sygma as s\nreload(s)\ns1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,imf_type='salpeter',imf_bdys=[1,30],iniZ=0.02,hardsetZ=0.0001,\n table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', \n iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')\nYield_tot_sim=s1.history.ism_iso_yield[-1][0]\n#% matplotlib inline",
"SYGMA run in progress..\n SYGMA run completed - Run time: 0.32s\n"
],
[
"import read_yields as ry\npath = os.environ['SYGMADIR']+'/yield_tables/agb_and_massive_stars_nugrid_MESAonly_fryer12delay.txt'\n#path='/home/christian/NuGrid/SYGMA_PROJECT/NUPYCEE/new/nupycee.bitbucket.org/yield_tables/isotope_yield_table.txt'\nytables = ry.read_nugrid_yields(path,excludemass=[32,60])\nzm_lifetime_grid=s1.zm_lifetime_grid_current #__interpolate_lifetimes_grid()\n#return [[metallicities Z1,Z2,...], [masses], [[log10(lifetimesofZ1)],\n# [log10(lifetimesofZ2)],..] ]\n#s1.__find_lifetimes()\n\n#minm1 = self.__find_lifetimes(round(self.zmetal,6),mass=[minm,maxm], lifetime=lifetimemax1)",
"_____no_output_____"
]
],
[
[
"Compare both results:",
"_____no_output_____"
]
],
[
[
"print Yield_tot_sim\nprint Yield_tot\nprint 'ratio should be 1 : ',Yield_tot_sim/Yield_tot",
"3687728129.72\n3687728129.72\nratio should be 1 : 1.0\n"
]
],
[
[
"### Test of distinguishing between massive and AGB sources:",
"_____no_output_____"
],
[
"Boundaries between AGB and massive for Z=0 (1e-4) at 8 (transitionmass parameter)",
"_____no_output_____"
]
],
[
[
"Yield_agb= ( k_N/1.35 * (1**-1.35 - 8.**-1.35) ) * 0.1\nYield_massive= ( k_N/1.35 * (8.**-1.35 - 30**-1.35) ) * 0.1\n",
"_____no_output_____"
],
[
"print 'Should be 1:',Yield_agb/s1.history.ism_iso_yield_agb[-1][0]\nprint 'Should be 1:',Yield_massive/s1.history.ism_iso_yield_massive[-1][0]\nprint 'Test total number of SNII agree with massive star yields: ',sum(s1.history.sn2_numbers)*0.1/Yield_massive\nprint sum(s1.history.sn2_numbers)",
"Should be 1: 1.0\nShould be 1: 1.0\nTest total number of SNII agree with massive star yields: 1.0\n1871484249.69\n"
],
[
"s1.plot_totmasses(source='agb')\ns1.plot_totmasses(source='massive')\ns1.plot_totmasses(source='all')\ns1.plot_totmasses(source='sn1a')",
"_____no_output_____"
]
],
[
[
"### Calculating yield ejection over time ",
"_____no_output_____"
],
[
"For plotting, take the lifetimes/masses from the yield grid:\n\n$\nIni Mass & Age [yrs]\n1Msun = 5.67e9\n1.65 = 1.211e9\n2 = 6.972e8\n3 = 2.471e8\n4 = 1.347e8\n5 = 8.123e7\n6 = 5.642e7\n7 = 4.217e7\n12 = 1.892e7\n15 = 1.381e7\n20 = 9.895e6\n25 = 7.902e6\n$",
"_____no_output_____"
]
],
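The grid above can be inverted into an initial-mass-vs-lifetime relation with a log-log spline, similar to what the SNIa tests further down do with SYGMA's internal grid (a sketch; `k=2` and `s=0` match the `spline_degree1` and `smoothing1` settings used there, everything else is illustrative):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

m = np.array([1, 1.65, 2, 3, 4, 5, 6, 7, 12, 15, 20, 25], dtype=float)
ages = np.array([5.67e9, 1.211e9, 6.972e8, 2.471e8, 1.347e8, 8.123e7,
                 5.642e7, 4.217e7, 1.892e7, 1.381e7, 9.895e6, 7.902e6])
# the spline expects increasing x, so reverse both arrays (lifetimes fall with mass)
spline_lifetime = UnivariateSpline(np.log10(ages[::-1]), np.log10(m[::-1]), k=2, s=0)
# initial mass of stars dying after 1 Gyr, between the 1.65 and 2 Msun grid points
print(10**spline_lifetime(np.log10(1e9)))
```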
[
[
"s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,\\\n imf_bdys=[1,30],iniZ=0,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \\\n sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')\nYield_tot_sim=s1.history.ism_iso_yield[-1][0]",
"SYGMA run in progress..\n SYGMA run completed - Run time: 0.32s\n"
],
[
"s1.plot_mass(specie='H',label='H, sim',color='k',shape='-',marker='o',markevery=800)\nm=[1,1.65,2,3,4,5,6,7,12,15,20,25]\nages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]\ndef yields(m,k_N):\n return ( k_N/1.35 * (m**-1.35 - 30.**-1.35) ) * 0.1\nyields1=[]\nfor m1 in m:\n yields1.append(yields(m1,k_N))\nplt.plot(ages,yields1,marker='+',linestyle='',markersize=15,label='H, semi')\nplt.legend(loc=4)",
"_____no_output_____"
]
],
[
[
"Simulation results in the plot above should agree with semi-analytical calculations.",
"_____no_output_____"
],
[
"### Test of parameter imf_bdys: Selection of different initial mass intervals",
"_____no_output_____"
],
[
"##### Select imf_bdys=[5,20]",
"_____no_output_____"
]
],
[
[
"k_N=1e11*0.35/ (5**-0.35 - 20**-0.35)\nN_tot=k_N/1.35 * (5**-1.35 - 20**-1.35)\nYield_tot=0.1*N_tot",
"_____no_output_____"
],
[
"s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='salpeter',\\\n imf_bdys=[5,20],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \\\n sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')\nYield_tot_sim=s1.history.ism_iso_yield[-1][0]",
"SYGMA run in progress..\n SYGMA run completed - Run time: 0.23s\n"
],
[
"print 'Should be 1:' ,Yield_tot_sim/Yield_tot",
"Should be 1: 1.0\n"
]
],
[
[
"##### Select imf_bdys=[1,5]",
"_____no_output_____"
]
],
[
[
"k_N=1e11*0.35/ (1**-0.35 - 5**-0.35)\nN_tot=k_N/1.35 * (1**-1.35 - 5**-1.35)\nYield_tot=0.1*N_tot",
"_____no_output_____"
],
[
"s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,\\\n imf_bdys=[1,5],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',\\\n sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')\nYield_tot_sim=s1.history.ism_iso_yield[-1][0]",
"SYGMA run in progress..\n SYGMA run completed - Run time: 0.32s\n"
]
],
[
[
"Results:",
"_____no_output_____"
]
],
[
[
"print 'Should be 1: ',Yield_tot_sim/Yield_tot",
"Should be 1: 1.0\n"
]
],
[
[
"### Test of parameter imf_type: Selection of different IMF types",
"_____no_output_____"
],
[
"#### power-law exponent : alpha_imf",
"_____no_output_____"
],
[
"The IMF allows to calculate the number of stars $N_{12}$ in the mass interval [m1,m2] with\n\n$N_{12}$ = k_N $\\int _{m1}^{m2} m^{-alphaimf} dm$\n\nWhere k_N is the normalization constant. It can be derived from the total amount of mass of the system $M_{tot}$\nsince the total mass $M_{12}$ in the mass interval above can be estimated with\n\n$M_{12}$ = k_N $\\int _{m1}^{m2} m^{-(alphaimf-1)} dm$\n\nWith a total mass interval of [1,30] and $M_{tot}=1e11$ the $k_N$ can be derived:\n\n$1e11 = k_N/(alphaimf-2) * (1^{-(alphaimf-2)} - 30^{-(alphaimf-2)})$",
"_____no_output_____"
]
],
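The generalized normalization can again be cross-checked numerically for the test exponent used in the following cells (a sketch; `alphaimf=1.5` and the [1,30] bounds are taken from there):

```python
from scipy.integrate import quad

alphaimf = 1.5
# normalize the mass-weighted IMF m*m**-alphaimf to M_tot = 1e11 Msun over [1, 30]
k_N = 1e11 / quad(lambda m: m**(1.0 - alphaimf), 1.0, 30.0)[0]
N_tot = k_N * quad(lambda m: m**(-alphaimf), 1.0, 30.0)[0]
Yield_tot = 0.1 * N_tot  # 0.1 Msun of H-1 ejected per star
print(Yield_tot)
```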
[
[
"alphaimf = 1.5 #Set test alphaimf",
"_____no_output_____"
],
[
"k_N=1e11*(alphaimf-2)/ (-1**-(alphaimf-2) + 30**-(alphaimf-2))\nN_tot=k_N/(alphaimf-1) * (-1**-(alphaimf-1) + 30**-(alphaimf-1))\nYield_tot=0.1*N_tot",
"_____no_output_____"
],
[
"s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='alphaimf',alphaimf=1.5,imf_bdys=[1,30],hardsetZ=0.0001,\n table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', \n iniabu_table='yield_tables/iniabu/iniab_h1.ppn')\nYield_tot_sim=s1.history.ism_iso_yield[-1][0]",
"SYGMA run in progress..\n SYGMA run completed - Run time: 0.31s\n"
],
[
"print 'Should be 1 :',Yield_tot/Yield_tot_sim",
"Should be 1 : 1.0\n"
]
],
[
[
"#### Chabrier:",
"_____no_output_____"
],
[
"Change interval now from [0.01,30]",
"_____no_output_____"
],
[
"M<1: $IMF(m) = \\frac{0.158}{m} * \\exp{ \\frac{-(log(m) - log(0.08))^2}{2*0.69^2}}$\n\nelse: $IMF(m) = m^{-2.3}$",
"_____no_output_____"
]
],
[
[
"def imf_times_m(mass):\n if mass<=1:\n return 0.158 * np.exp( -np.log10(mass/0.079)**2 / (2.*0.69**2))\n else:\n return mass*0.0443*mass**(-2.3)\nk_N= 1e11/ (quad(imf_times_m,0.01,30)[0] )",
"_____no_output_____"
],
[
"N_tot=k_N/1.3 * 0.0443* (1**-1.3 - 30**-1.3)\nYield_tot=N_tot * 0.1",
"_____no_output_____"
],
[
"s1=s.sygma(iolevel=0,mgal=1e11,dt=1e9,tend=1.3e10,imf_type='chabrier',imf_bdys=[0.01,30],\n hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \n sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')\nYield_tot_sim=s1.history.ism_iso_yield[-1][0]\n",
"SYGMA run in progress..\n SYGMA run completed - Run time: 0.29s\n"
],
[
"print Yield_tot\nprint Yield_tot_sim\nprint 'Should be 1 :',Yield_tot/Yield_tot_sim",
"1844499958.22\n1844499958.22\nShould be 1 : 1.0\n"
],
[
"plt.figure(11)\ns1.plot_mass(fig=11,specie='H',label='H',color='k',shape='-',marker='o',markevery=800)\nm=[1,1.65,2,3,4,5,6,7,12,15,20,25]\nages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]\ndef yields(m,k_N):\n return ( k_N/1.3 * 0.0443*(m**-1.3 - 30.**-1.3) ) * 0.1\nyields1=[]\nfor m1 in m:\n yields1.append(yields(m1,k_N))\nplt.plot(ages,yields1,marker='+',linestyle='',markersize=20,label='semi')\nplt.legend(loc=4)\n\n",
"_____no_output_____"
]
],
[
[
"Simulation should agree with semi-analytical calculations for Chabrier IMF.",
"_____no_output_____"
],
[
"#### Kroupa:",
"_____no_output_____"
],
[
"M<0.08: $IMF(m) = m^{-0.3}$\n\nM<0.5 : $IMF(m) = m^{-1.3}$\n\nelse : $IMF(m) = m^{-2.3}$",
"_____no_output_____"
]
],
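The factors `p1` and `p2` defined in the next cell are continuity coefficients; a minimal sketch showing that they make the piecewise Kroupa IMF join smoothly at the break masses:

```python
p1 = 0.08**(-0.3 + 1.3)  # matches m**-0.3 and p1*m**-1.3 at m = 0.08
p2 = 0.5**(-1.3 + 2.3)   # matches p1*m**-1.3 and p1*p2*m**-2.3 at m = 0.5

def kroupa(m):
    if m < 0.08:
        return m**-0.3
    elif m < 0.5:
        return p1 * m**-1.3
    return p1 * p2 * m**-2.3

# the branches agree at both break masses (up to floating-point noise)
print(kroupa(0.08 - 1e-12) / kroupa(0.08 + 1e-12))
print(kroupa(0.5 - 1e-12) / kroupa(0.5 + 1e-12))
```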
[
[
"def imf_times_m(mass):\n p0=1.\n p1=0.08**(-0.3+1.3)\n p2=0.5**(-1.3+2.3)\n p3= 1**(-2.3+2.3)\n if mass<0.08:\n return mass*p0*mass**(-0.3)\n elif mass < 0.5:\n return mass*p1*mass**(-1.3)\n else: #mass>=0.5:\n return mass*p1*p2*mass**(-2.3)\nk_N= 1e11/ (quad(imf_times_m,0.01,30)[0] )",
"_____no_output_____"
],
[
"p1=0.08**(-0.3+1.3)\np2=0.5**(-1.3+2.3)\nN_tot=k_N/1.3 * p1*p2*(1**-1.3 - 30**-1.3)\nYield_tot=N_tot * 0.1",
"_____no_output_____"
],
[
"reload(s)\ns1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='kroupa',imf_bdys=[0.01,30],\n hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \n sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')\nYield_tot_sim=s1.history.ism_iso_yield[-1][0]\n",
"SYGMA run in progress..\n SYGMA run completed - Run time: 0.44s\n"
],
[
"print 'Should be 1: ',Yield_tot/Yield_tot_sim",
"Should be 1: 1.0\n"
],
[
"plt.figure(111)\ns1.plot_mass(fig=111,specie='H',label='H',color='k',shape='-',marker='o',markevery=800)\nm=[1,1.65,2,3,4,5,6,7,12,15,20,25]\nages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]\ndef yields(m,k_N):\n return ( k_N/1.3 *p1*p2* (m**-1.3 - 30.**-1.3) ) * 0.1\nyields1=[]\nfor m1 in m:\n yields1.append(yields(m1,k_N))\nplt.plot(ages,yields1,marker='+',linestyle='',markersize=20,label='semi')\nplt.legend(loc=4)",
"_____no_output_____"
]
],
[
[
"Simulation results compared with semi-analytical calculations for Kroupa IMF.",
"_____no_output_____"
],
[
"### Test of parameter sn1a_on: on/off mechanism",
"_____no_output_____"
]
],
[
[
"s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=False,sn1a_rate='maoz',imf_type='salpeter',\n imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',\n sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')\ns2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='maoz',imf_type='salpeter',\n imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',\n sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')",
"SYGMA run in progress..\n SYGMA run completed - Run time: 0.31s\nSYGMA run in progress..\n SYGMA run completed - Run time: 0.47s\n"
],
[
"print (s1.history.ism_elem_yield_1a[0]),(s1.history.ism_elem_yield_1a[-1])\nprint (s1.history.ism_elem_yield[0]),(s1.history.ism_elem_yield[-1])\nprint (s2.history.ism_elem_yield_1a[0]),(s2.history.ism_elem_yield_1a[-1])\nprint (s2.history.ism_elem_yield[0]),(s2.history.ism_elem_yield[-1])\nprint (s1.history.ism_elem_yield[-1][0] + s2.history.ism_elem_yield_1a[-1][0])/s2.history.ism_elem_yield[-1][0]\ns2.plot_mass(fig=33,specie='H-1',source='sn1a') #plot s1 data (without sn) cannot be plotted -> error, maybe change plot function?",
"[0] [0.0]\n[100000000000.0] [3687728129.7190337]\n[0] [10000000.000000006]\n[100000000000.0] [3697728129.7190342]\n1.0\n"
]
],
[
[
"############################################################################################",
"_____no_output_____"
],
[
"### Test of parameter sn1a_rate (DTD): Different SN1a rate implementatinos",
"_____no_output_____"
],
[
"Calculate with SNIa and look at SNIa contribution only. Calculated for each implementation from $4*10^7$ until $1.5*10^{10}$ yrs ",
"_____no_output_____"
],
[
"##### DTD taken from Vogelsberger 2013 (sn1a_rate='vogelsberger')",
"_____no_output_____"
],
[
"$\\frac{N_{1a}}{Msun} = \\int _t^{t+\\Delta t} 1.3*10^{-3} * (\\frac{t}{4*10^7})^{-1.12} * \\frac{1.12 -1}{4*10^7}$ for $t>4*10^7 yrs$",
"_____no_output_____"
],
[
"def dtd(t):\n return 1.3e-3*(t/4e7)**-1.12 * ((1.12-1)/4e7)\nn1a_msun= quad(dtd,4e7,1.5e10)[0]\nYield_tot=n1a_msun*1e11*0.1 * 7 #special factor\nprint Yield_tot",
"_____no_output_____"
],
[
"reload(s)\ns1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='vogelsberger',imf_type='salpeter',imf_bdys=[1,30],iniZ=-1,hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt', sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')\nYield_tot_sim=s1.history.ism_iso_yield_1a[-1][0]\n",
"_____no_output_____"
],
[
"print 'Should be 1: ',Yield_tot/Yield_tot_sim",
"_____no_output_____"
],
[
"s1.plot_mass(specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)\nm=[1,1.65,2,3,4,5,6,7,12,15,20,25]\nages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]\ndef yields(t):\n def dtd(t):\n return 1.3e-3*(t/4e7)**-1.12 * ((1.12-1)/4e7)\n return quad(dtd,4e7,t)[0]*1e11*0.1 * 7 #special factor\nyields1=[]\nages1=[]\nfor m1 in m:\n t=ages[m.index(m1)]\n if t>4e7:\n yields1.append(yields(t))\n ages1.append(t)\nplt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')\nplt.legend(loc=4)",
"_____no_output_____"
],
[
"Simulation results should agree with semi-analytical calculations for the SN1 yields.",
"_____no_output_____"
],
[
"### Exponential DTD taken from Wiersma09 (sn1a_rate='wiersmaexp') (maybe transitionmass should replace 8Msun?)",
"_____no_output_____"
],
[
"$\\frac{N_{1a}}{Msun} = \\int _t ^{t+\\Delta t} f_{wd}(t) exp(-t/\\tau)/\\tau$ with \n\nif $M_z(t) >3$ : \n\n$f_{wd}(t) = (\\int _{M(t)}^8 IMF(m) dm)$\n\nelse: \n\n$f_{wd}(t) = 0$\n\nwith $M(t) = max(3, M_z(t))$ and $M_z(t)$ being the mass-lifetime function.\n\nNOTE: This mass-lifetime function needs to be extracted from the simulation (calculated in SYGMA, see below)\n",
"_____no_output_____"
],
[
"The following performs the simulation but also takes the mass-metallicity-lifetime grid from this simulation.\nWith the mass-lifetime spline function calculated the integration can be done further down. See also the fit for this function below.",
"_____no_output_____"
]
],
[
[
"#import read_yields as ry\nimport sygma as s\nreload(s)\nplt.figure(99)\n#interpolate_lifetimes_grid=s22.__interpolate_lifetimes_grid\n#ytables=ry.read_nugrid_yields('yield_tables/isotope_yield_table_h1.txt')\n#zm_lifetime_grid=interpolate_lifetimes_grid(ytables,iolevel=0) 1e7\ns1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='exp',\n imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt', \n sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')\nYield_tot_sim=s1.history.ism_iso_yield_1a[-1][0]\nzm_lifetime_grid=s1.zm_lifetime_grid_current\nidx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0\ngrid_masses=zm_lifetime_grid[1][::-1]\ngrid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]\nspline_degree1=2\nsmoothing1=0\nboundary=[None,None]\nspline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)\nplt.plot(grid_masses,grid_lifetimes,label='spline fit grid points (SYGMA)')\nplt.xlabel('Mini/Msun')\nplt.ylabel('log lifetime')\nm=[1,1.65,2,3,4,5,6,7,12,15,20,25]\nages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]\nplt.plot(np.array(m),np.log10(np.array(ages)),marker='+',markersize=20,label='input yield grid',linestyle='None')\nplt.plot(10**spline_lifetime(np.log10(ages)),np.log10(ages),linestyle='--',label='spline fit SNIa')\nplt.legend()\n#plt.yscale('log')",
"SYGMA run in progress..\n SYGMA run completed - Run time: 1.48s\n"
],
[
"#print grid_lifetimes\n#print grid_masses\n#10**spline_lifetime(np.log10(7.902e6))",
"_____no_output_____"
]
],
[
[
"Small test: Initial mass vs. lifetime from the input yield grid compared to the fit in the the Mass-Metallicity-lifetime plane (done by SYGMA) for Z=0.02.",
"_____no_output_____"
],
[
"A double integration has to be performed in order to solve the complex integral from Wiersma:",
"_____no_output_____"
]
],
[
[
"#following inside function wiersma09_efolding\n\n#if timemin ==0:\n# timemin=1\n\nfrom scipy.integrate import dblquad\ndef spline1(x):\n #x=t\n minm_prog1a=3\n #if minimum progenitor mass is larger than 3Msun due to IMF range:\n #if self.imf_bdys[0]>3:\n # minm_prog1a=self.imf_bdys[0]\n return max(minm_prog1a,10**spline_lifetime(np.log10(x)))\n\n\ndef f_wd_dtd(m,t):\n #print 'time ',t\n #print 'mass ',m\n mlim=10**spline_lifetime(np.log10(t))\n maxm_prog1a=8\n #if maximum progenitor mass is smaller than 8Msun due to IMF range:\n #if 8>self.imf_bdys[1]:\n # maxm_prog1a=self.imf_bdys[1]\n if mlim>maxm_prog1a:\n return 0\n else:\n #Delay time distribution function (DTD)\n tau= 2e9\n mmin=0\n mmax=0\n inte=0\n #follwing is done in __imf()\n def g2(mm):\n return mm*mm**-2.35\n norm=1./quad(g2,1,30)[0]\n #print 'IMF test',norm*m**-2.35\n #imf normalized to 1Msun\n return norm*m**-2.35* np.exp(-t/tau)/tau\n \na= 0.01 #normalization parameter\n#if spline(np.log10(t))\n#a=1e-3/()\na=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )\nn1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] \n# in principle since normalization is set: nb_1a_per_m the above calculation is not necessary anymore\nYield_tot=n1a*1e11*0.1 *1 #7 #special factor",
"_____no_output_____"
],
[
"print Yield_tot_sim\nprint Yield_tot\nprint 'Should be : ', Yield_tot_sim/Yield_tot",
"10000001.8389\n10000000.0\nShould be : 1.00000018389\n"
],
[
"s1.plot_mass(specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)\nyields1=[]\nages1=[]\na= 0.01 #normalization parameter\na=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )\nfor m1 in m:\n t=ages[m.index(m1)]\n yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor \n yields1.append(yields)\n ages1.append(t)\nplt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')\nplt.legend(loc=4)",
"_____no_output_____"
]
],
[
[
"Simulation results compared with semi-analytical calculations for the SN1 sources with Wiersma (exp) implementation.",
"_____no_output_____"
],
[
"#### Compare number of WD's in range",
"_____no_output_____"
]
],
[
[
"sum(s1.wd_sn1a_range1)/sum(s1.wd_sn1a_range)",
"_____no_output_____"
],
[
"s1.plot_sn_distr(xaxis='time',fraction=False)",
"_____no_output_____"
]
],
[
[
"## Wiersmagauss",
"_____no_output_____"
]
],
[
[
"reload(s)\ns2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='gauss',imf_type='salpeter',\n imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, \n sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')\nYield_tot_sim=s2.history.ism_iso_yield_1a[-1][0]\nzm_lifetime_grid=s2.zm_lifetime_grid_current\nidx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0\ngrid_masses=zm_lifetime_grid[1][::-1]\ngrid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]\nspline_degree1=2\nsmoothing1=0\nboundary=[None,None]\nspline = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)\n",
"SYGMA run in progress..\n SYGMA run completed - Run time: 1.42s\n"
],
[
"from scipy.integrate import dblquad\ndef spline1(x):\n #x=t\n return max(3.,10**spline(np.log10(x)))\ndef f_wd_dtd(m,t):\n #print 'time ',t\n #print 'mass ',m\n mlim=10**spline(np.log10(t))\n #print 'mlim',mlim\n if mlim>8.:\n #print t\n #print mlim\n return 0\n else:\n #mmin=max(3.,massfunc(t))\n #mmax=8.\n #imf=self.__imf(mmin,mmax,1)\n #Delay time distribution function (DTD)\n tau= 1e9 #3.3e9 #characteristic delay time\n sigma=0.66e9#0.25*tau \n #sigma=0.2#narrow distribution\n #sigma=0.5*tau #wide distribution\n mmin=0\n mmax=0\n inte=0\n def g2(mm):\n return mm*mm**-2.35\n norm=1./quad(g2,1,30)[0]\n #imf normalized to 1Msun\n return norm*m**-2.35* 1./np.sqrt(2*np.pi*sigma**2) * np.exp(-(t-tau)**2/(2*sigma**2))\n \n#a= 0.0069 #normalization parameter\n#if spline(np.log10(t))\na=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )\nn1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] \nYield_tot=n1a*1e11*0.1 #special factor",
"_____no_output_____"
],
[
"print Yield_tot_sim\nprint Yield_tot\nprint 'Should be 1: ', Yield_tot_sim/Yield_tot",
"10000001.3717\n10000000.0\nShould be 1: 1.00000013717\n"
],
[
"s2.plot_mass(fig=988,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)\nyields1=[]\nages1=[]\nm=[1,1.65,2,3,4,5,6,7,12,15,20,25]\nages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]\nfor m1 in m:\n t=ages[m.index(m1)]\n yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor\n yields1.append(yields)\n ages1.append(t)\nplt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')\nplt.legend(loc=2)",
"_____no_output_____"
]
],
[
[
"Simulation results compared with semi-analytical calculations for the SN1 sources with Wiersma (Gauss) implementation.",
"_____no_output_____"
],
[
"#### Compare number of WD's in range",
"_____no_output_____"
]
],
[
[
"sum(s2.wd_sn1a_range1)/sum(s2.wd_sn1a_range)",
"_____no_output_____"
]
],
[
[
"############################################################################################",
"_____no_output_____"
],
[
"### SNIa implementation: Maoz12 $t^{-1}$",
"_____no_output_____"
]
],
[
[
"import sygma as s\nreload(s)\ns2=s.sygma(iolevel=0,mgal=1e11,dt=1e8,tend=1.3e10,sn1a_rate='maoz',imf_type='salpeter',\n imf_bdys=[1,30],special_timesteps=-1,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',\n sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')",
"SYGMA run in progress..\n SYGMA run completed - Run time: 1.11s\n"
],
[
"Yield_tot_sim=s2.history.ism_iso_yield_1a[-1][0]\nfrom scipy.interpolate import UnivariateSpline\nzm_lifetime_grid=s2.zm_lifetime_grid_current\nidx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0\ngrid_masses=zm_lifetime_grid[1][::-1]\ngrid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]\nspline_degree1=2\nsmoothing1=0\nboundary=[None,None]\nspline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)\n\nfrom scipy.integrate import quad",
"_____no_output_____"
],
[
"def spline1(t):\n minm_prog1a=3\n #if minimum progenitor mass is larger than 3Msun due to IMF range:\n return max(minm_prog1a,10**spline_lifetime(np.log10(t)))\n\n #funciton giving the total (accummulatitive) number of WDs at each timestep\ndef wd_number(m,t):\n #print 'time ',t\n #print 'mass ',m\n mlim=10**spline_lifetime(np.log10(t))\n maxm_prog1a=8\n\n if mlim>maxm_prog1a:\n return 0\n else:\n mmin=0\n mmax=0\n inte=0\n #normalized to 1msun!\n def g2(mm):\n return mm*mm**-2.35\n norm=1./quad(g2,1,30)[0]\n return norm*m**-2.35 #self.__imf(mmin,mmax,inte,m)\n\ndef maoz_sn_rate(m,t):\n return wd_number(m,t)* 4.0e-13 * (t/1.0e9)**-1\n\ndef maoz_sn_rate_int(t):\n return quad( maoz_sn_rate,spline1(t),8,args=t)[0]\n\n#in this formula, (paper) sum_sn1a_progenitors number of \nmaxm_prog1a=8\nlongtimefornormalization=1.3e10 #yrs\nfIa=0.00147\nfIa=1e-3\n#A = (fIa*s2.number_stars_born[1]) / quad(maoz_sn_rate_int,0,longtimefornormalization)[0]\nA = 1e-3 / quad(maoz_sn_rate_int,0,longtimefornormalization)[0]\n\nprint 'Norm. constant A:',A\nn1a= A* quad(maoz_sn_rate_int,0,1.3e10)[0]\nYield_tot=n1a*1e11*0.1 #specialfactor",
"Norm. constant A: 8.4185441437\n"
],
[
"print Yield_tot_sim\nprint Yield_tot\nprint 'Should be 1: ', Yield_tot_sim/Yield_tot",
"10000000.0\n10000000.0\nShould be 1: 1.0\n"
]
],
[
[
"#### Check trend:",
"_____no_output_____"
]
],
[
[
"s2.plot_mass(fig=44,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)\nyields1=[]\nages1=[]\nm=[1,1.65,2,3,4,5,6,7,12,15,20,25]\nages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]\nfor m1 in m:\n t=ages[m.index(m1)]\n #yields= a* dblquad(wdfrac,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 \n yields= A*quad(maoz_sn_rate_int,0,t)[0] *1e11*0.1 #special factor\n yields1.append(yields)\n ages1.append(t)\nplt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')\nplt.legend(loc=2)\nplt.legend(loc=3)",
"_____no_output_____"
]
],
[
[
"### Test of parameter tend, dt and special_timesteps",
"_____no_output_____"
],
[
"#### First constant timestep size of 1e7",
"_____no_output_____"
]
],
[
[
"import sygma as s\ns1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',\n imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \n sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',\n stellar_param_on=False)",
"SYGMA run in progress..\n SYGMA run completed - Run time: 11.12s\n"
],
[
"print 'Should be 0: ',s1.history.age[0]\nprint 'Should be 1: ',s1.history.age[-1]/1.3e10\nprint 'Should be 1: ',s1.history.timesteps[0]/1e7\nprint 'Should be 1: ',s1.history.timesteps[-1]/1e7\nprint 'Should be 1: ',sum(s1.history.timesteps)/1.3e10",
"Should be 0: 0\nShould be 1: 1.0\nShould be 1: 1.0\nShould be 1: 1.0\nShould be 1: 1.0\n"
]
],
[
[
"#### First timestep size of 1e7, then in log space to tend with a total number of steps of 200; Note: changed tend",
"_____no_output_____"
]
],
[
[
"import sygma as s\ns2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.5e9,special_timesteps=200,imf_type='salpeter',\n imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \n sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')",
"SYGMA run in progress..\n SYGMA run completed - Run time: 1.88s\n"
],
[
"print 'Should be 0: ',s2.history.age[0]\nprint 'Should be 1: ',s2.history.age[-1]/1.5e9\nprint 'Should be 201: ',len(s2.history.age)\nprint 'Should be 1: ',s2.history.timesteps[0]/1e7\n#print 'in dt steps: ',s2.history.timesteps[1]/1e7,s1.history.timesteps[2]/1e7,'..; larger than 1e7 at step 91!'\nprint 'Should be 200: ',len(s2.history.timesteps)\nprint 'Should be 1: ',sum(s2.history.timesteps)/1.5e9",
"Should be 0: 0\nShould be 1: 1.0\nShould be 201: 201\nShould be 1: 1.0\nShould be 200: 200\nShould be 1: 1.0\n"
],
[
"plt.figure(55)\nplt.plot(s1.history.age[1:],s1.history.timesteps,label='linear (constant) scaled',marker='+')\nplt.plot(s2.history.age[1:],s2.history.timesteps,label='log scaled',marker='+')\nplt.yscale('log');plt.xscale('log')\nplt.xlabel('age/years');plt.ylabel('timesteps/years');plt.legend(loc=4)",
"_____no_output_____"
]
],
[
[
"#### Choice of dt should not change final composition:",
"_____no_output_____"
],
[
"for special_timesteps:",
"_____no_output_____"
]
],
[
[
"s3=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],\n hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', \n iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)\ns4=s.sygma(iolevel=0,mgal=1e11,dt=1.3e10,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],\n hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',\n iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)\ns5=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=200,imf_type='salpeter',imf_bdys=[1,30],\n hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt',\n iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)\ns6=s.sygma(iolevel=0,mgal=1e11,dt=1.3e10,tend=1.3e10,special_timesteps=200,imf_type='salpeter',imf_bdys=[1,30],\n hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', \n iniabu_table='yield_tables/iniabu/iniab_h1.ppn',stellar_param_on=False)",
"SYGMA run in progress..\n SYGMA run completed - Run time: 11.1s\nSYGMA run in progress..\n SYGMA run completed - Run time: 0.02s\nSYGMA run in progress..\n SYGMA run completed - Run time: 1.87s\nSYGMA run in progress..\n SYGMA run completed - Run time: 1.65s\n"
],
[
"#print s3.history.ism_iso_yield[-1][0] == s4.history.ism_iso_yield[-1][0] why false?\nprint 'should be 1 ',s3.history.ism_iso_yield[-1][0]/s4.history.ism_iso_yield[-1][0]\n#print s3.history.ism_iso_yield[-1][0],s4.history.ism_iso_yield[-1][0]\nprint 'should be 1',s5.history.ism_iso_yield[-1][0]/s6.history.ism_iso_yield[-1][0]\n#print s5.history.ism_iso_yield[-1][0],s6.history.ism_iso_yield[-1][0]\n",
"should be 1 1.0\nshould be 1 1.0\n"
]
],
[
[
"### Test of parameter mgal - the total mass of the SSP",
"_____no_output_____"
],
[
"Test the total isotopic and elemental ISM matter at first and last timestep.",
"_____no_output_____"
]
],
[
[
"s1=s.sygma(iolevel=0,mgal=1e7,dt=1e7,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',\n sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')\ns2=s.sygma(iolevel=0,mgal=1e8,dt=1e8,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',\n sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')\ns3=s.sygma(iolevel=0,mgal=1e9,dt=1e9,tend=1.3e10,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',\n sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')",
"SYGMA run in progress..\n SYGMA run completed - Run time: 0.32s\nSYGMA run in progress..\n SYGMA run completed - Run time: 0.31s\nSYGMA run in progress..\n SYGMA run completed - Run time: 0.27s\n"
],
[
"print 'At timestep 0: ',sum(s1.history.ism_elem_yield[0])/1e7,sum(s2.history.ism_elem_yield[0])/1e8,sum(s3.history.ism_elem_yield[0])/1e9\nprint 'At timestep 0: ',sum(s1.history.ism_iso_yield[0])/1e7,sum(s2.history.ism_iso_yield[0])/1e8,sum(s3.history.ism_iso_yield[0])/1e9",
"At timestep 0: 1.0 1.0 1.0\nAt timestep 0: 1.0 1.0 1.0\n"
],
[
"print 'At last timestep, should be the same fraction: ',sum(s1.history.ism_elem_yield[-1])/1e7,sum(s2.history.ism_elem_yield[-1])/1e8,sum(s3.history.ism_elem_yield[-1])/1e9\nprint 'At last timestep, should be the same fraction: ',sum(s1.history.ism_iso_yield[-1])/1e7,sum(s2.history.ism_iso_yield[-1])/1e8,sum(s3.history.ism_iso_yield[-1])/1e9",
"At last timestep, should be the same fraction: 0.0170583657213 0.0170583657213 0.0170583657213\nAt last timestep, should be the same fraction: 0.0170583657213 0.0170583657213 0.0170583657213\n"
]
],
[
[
"### Test of SN rate: depend on timestep size: shows always mean value of timestep; larger timestep> different mean",
"_____no_output_____"
]
],
[
[
"reload(s)\ns1=s.sygma(iolevel=0,mgal=1e11,dt=7e6,tend=1e8,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,\n table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',\n iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')\ns2=s.sygma(iolevel=0,mgal=1e11,dt=7e6,tend=1e8,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],\n hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, \n sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',\n pop3_table='yield_tables/popIII_h1.txt')\ns3=s.sygma(iolevel=0,mgal=1e11,dt=1e6,tend=1e8,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],\n hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', \n iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')\ns4=s.sygma(iolevel=0,mgal=1e11,dt=3e7,tend=1e8,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,\n table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', \n iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')",
"SYGMA run in progress..\n SYGMA run completed - Run time: 0.34s\nSYGMA run in progress..\n SYGMA run completed - Run time: 0.17s\nSYGMA run in progress..\n SYGMA run completed - Run time: 1.08s\nSYGMA run in progress..\n SYGMA run completed - Run time: 0.05s\n"
],
[
"s1.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 1',label2='SNII, rate 1',marker1='o',marker2='s',shape2='-',markevery=1)\ns2.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 2',label2='SNII rate 2',marker1='d',marker2='p',markevery=1,shape2='-.')\ns4.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 2',label2='SNII rate 2',marker1='d',marker2='+',markevery=1,shape2=':',color2='y')\ns3.plot_sn_distr(rate=True,rate_only='sn2',label1='SN1a, rate, 2',label2='SNII rate 2',marker1='d',marker2='x',markevery=1,shape2='--')\nplt.xlim(6e6,7e7)\n#plt.xlim(6.5e6,4e7)\nplt.vlines(7e6,1e2,1e9)\nplt.ylim(1e2,1e4)",
"/Users/christian/Research/NuGRid/NuPyCEE/sygma.py:2097: RuntimeWarning: invalid value encountered in divide\n sn1a_rate=np.array(sn1anumbers[1:])/ (np.array(self.history.timesteps)/100.)\n/Users/christian/Research/NuGRid/NuPyCEE/sygma.py:2098: RuntimeWarning: invalid value encountered in divide\n sn2_rate=np.array(sn2numbers[1:])/ (np.array(self.history.timesteps)/100.)\n"
],
[
"print s1.history.sn2_numbers[1]/s1.history.timesteps[0]\nprint s2.history.sn2_numbers[1]/s2.history.timesteps[0]\n#print s1.history.timesteps[:5]\n#print s2.history.timesteps[:5]",
"0.0\n0.0\n"
],
[
"s3=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,\n table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', \n iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt',\n stellar_param_on=False)\ns4=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,special_timesteps=-1,imf_type='salpeter',imf_bdys=[1,30],\n hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=True, \n sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn',\n pop3_table='yield_tables/popIII_h1.txt',stellar_param_on=False)",
"SYGMA run in progress..\n SYGMA run completed - Run time: 0.3s\nSYGMA run in progress..\n SYGMA run completed - Run time: 11.46s\n"
]
],
[
[
"##### Rate does not depend on timestep type:",
"_____no_output_____"
]
],
[
[
"s3.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='SN1a, rate',label2='SNII, rate',marker1='o',marker2='s',markevery=1)\ns4.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='SN1a, number',label2='SNII number',marker1='d',marker2='p')\nplt.xlim(3e7,1e10)",
"_____no_output_____"
],
[
"s1.plot_sn_distr(fig=77,rate=True,marker1='o',marker2='s',markevery=5)\ns2.plot_sn_distr(fig=77,rate=True,marker1='x',marker2='^',markevery=1)\n#s1.plot_sn_distr(rate=False)\n#s2.plot_sn_distr(rate=True)\n#s2.plot_sn_distr(rate=False)\nplt.xlim(1e6,1.5e10)\n#plt.ylim(1e2,1e4)",
"_____no_output_____"
]
],
[
[
"### Test of parameter transitionmass: transition from AGB to massive stars",
"_____no_output_____"
],
[
"Check if transitionmass is properly set",
"_____no_output_____"
]
],
[
[
"import sygma as s; reload(s)\ns1=s.sygma(iolevel=0,imf_bdys=[1.65,30],transitionmass=8,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',\n hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \n sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')\ns2=s.sygma(iolevel=0,imf_bdys=[1.65,30],transitionmass=10,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',\n hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \n sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')\nYield_tot_sim_8=s1.history.ism_iso_yield_agb[-1][0]\nYield_tot_sim_10=s2.history.ism_iso_yield_agb[-1][0]",
"SYGMA run in progress..\n SYGMA run completed - Run time: 0.31s\nWarning: Non-default transitionmass chosen. Use in agreement with yield input!\nSYGMA run in progress..\n SYGMA run completed - Run time: 0.3s\n"
],
[
"alphaimf=2.35\nk_N=1e11*(alphaimf-2)/ (-1.65**-(alphaimf-2) + 30**-(alphaimf-2))\n\nN_tot=k_N/(alphaimf-1) * (-1.65**-(alphaimf-1) + 8**-(alphaimf-1))\nYield_tot_8=0.1*N_tot\n\nN_tot=k_N/(alphaimf-1) * (-1.65**-(alphaimf-1) + 10**-(alphaimf-1))\nYield_tot_10=0.1*N_tot\n#N_tot=k_N/(alphaimf-1) * (-1.65**-(alphaimf-1) + 5**-(alphaimf-1))\n#Yield_tot_5=0.1*N_tot",
"_____no_output_____"
],
[
"print '1:',Yield_tot_sim_8/Yield_tot_8\nprint '1:',Yield_tot_sim_10/Yield_tot_10\n#print '1:',Yield_tot_sim_5/Yield_tot_5",
"1: 1.0\n1: 1.0\n"
]
],
[
[
"# 2 starbursts",
"_____no_output_____"
]
],
[
[
"s1=s.sygma(starbursts=[0.1,0.1],iolevel=1,mgal=1e11,dt=1e7,imf_type='salpeter',\n imf_bdys=[1,30],iniZ=0.02,hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',\n sn1a_on=False, sn1a_table='yield_tables/sn1a_h1.txt', \n iniabu_table='yield_tables/iniabu/iniab_h1.ppn',pop3_table='yield_tables/popIII_h1.txt')\n",
"Warning - Use isotopes with care.\n['H-1']\nUse initial abundance of yield_tables/iniabu/iniab_h1.ppn\nNumber of timesteps: 3.0E+01\n### Start with initial metallicity of 1.0000E-04\n###############################\nSYGMA run in progress..\n################## Star formation at 1.000E+07 (Z=1.0000E-04) of 0.1\nMass locked away: 1.000E+10 , new ISM mass: 9.000E+10\n__get_mass_bdys: mass_bdys: [1, 1.325, 1.825, 2.5, 3.5, 4.5, 5.5, 6.5, 8, 13.5, 17.5, 22.5, 30]\n__get_mass_bdys: m_stars [1.0, 1.65, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 12.0, 15.0, 20.0, 25.0]\nStars under consideration (take into account user-selected imf ends):\n1 | 1.0 | 1.325\n1.325 | 1.65 | 1.825\n1.825 | 2.0 | 2.5\n2.5 | 3.0 | 3.5\n3.5 | 4.0 | 4.5\n4.5 | 5.0 | 5.5\n5.5 | 6.0 | 6.5\n6.5 | 7.0 | 8\n8 | 12.0 | 13.5\n13.5 | 15.0 | 17.5\n17.5 | 20.0 | 22.5\n22.5 | 25.0 | 30\nlens: 13 12\nTotal mass of the gas in stars:\nAGB: 7.430E+09\nMassive: 2.570E+09\n25.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 1.000E+07 with lifetime: 8.070E+06\n20.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 1.000E+07 with lifetime: 9.916E+06\n15.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 1.280E+07 with lifetime: 1.366E+07\n12.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 1.640E+07 with lifetime: 1.825E+07\n7.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 3.443E+07 with lifetime: 4.284E+07\n6.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 5.645E+07 with lifetime: 5.688E+07\n5.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 7.228E+07 with lifetime: 8.135E+07\n4.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 1.185E+08 with lifetime: 1.304E+08\n3.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 1.943E+08 with lifetime: 2.528E+08\n2.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 4.080E+08 with lifetime: 7.131E+08\n1.65 Wind (if massive +SN2): of Z= 1.000000E-04 at time 1.097E+09 with lifetime: 1.217E+09\n1.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 2.949E+09 
with lifetime: 5.564E+09\n################## Star formation at 1.280E+07 (Z=1.0000E-04) of 1.0\nMass locked away: 9.000E+10 , new ISM mass: 0.000E+00\n__get_mass_bdys: mass_bdys: [1, 1.325, 1.825, 2.5, 3.5, 4.5, 5.5, 6.5, 8, 13.5, 17.5, 22.5, 30]\n__get_mass_bdys: m_stars [1.0, 1.65, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 12.0, 15.0, 20.0, 25.0]\nStars under consideration (take into account user-selected imf ends):\n1 | 1.0 | 1.325\n1.325 | 1.65 | 1.825\n1.825 | 2.0 | 2.5\n2.5 | 3.0 | 3.5\n3.5 | 4.0 | 4.5\n4.5 | 5.0 | 5.5\n5.5 | 6.0 | 6.5\n6.5 | 7.0 | 8\n8 | 12.0 | 13.5\n13.5 | 15.0 | 17.5\n17.5 | 20.0 | 22.5\n22.5 | 25.0 | 30\nlens: 13 12\nTotal mass of the gas in stars:\nAGB: 6.687E+10\nMassive: 2.313E+10\n25.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 2.100E+07 with lifetime: 8.070E+06\n20.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 2.100E+07 with lifetime: 9.916E+06\n15.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 2.688E+07 with lifetime: 1.366E+07\n12.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 2.688E+07 with lifetime: 1.825E+07\n7.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 4.408E+07 with lifetime: 4.284E+07\n6.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 7.228E+07 with lifetime: 5.688E+07\n5.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 9.255E+07 with lifetime: 8.135E+07\n4.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 1.185E+08 with lifetime: 1.304E+08\n3.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 1.943E+08 with lifetime: 2.528E+08\n2.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 4.080E+08 with lifetime: 7.131E+08\n1.65 Wind (if massive +SN2): of Z= 1.000000E-04 at time 1.097E+09 with lifetime: 1.217E+09\n1.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 2.949E+09 with lifetime: 5.564E+09\ntime and metallicity and total mass:\n1.640E+07 1.0000E-04 2.3518E+06\ntime and metallicity and total mass:\n1.640E+07 1.0000E-04 2.3518E+06\n################## Star formation at 1.640E+07 
(Z=1.0000E-04) of 0.1\nMass locked away: 2.352E+05 , new ISM mass: 2.117E+06\n__get_mass_bdys: mass_bdys: [1, 1.325, 1.825, 2.5, 3.5, 4.5, 5.5, 6.5, 8, 13.5, 17.5, 22.5, 30]\n__get_mass_bdys: m_stars [1.0, 1.65, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 12.0, 15.0, 20.0, 25.0]\nStars under consideration (take into account user-selected imf ends):\n1 | 1.0 | 1.325\n1.325 | 1.65 | 1.825\n1.825 | 2.0 | 2.5\n2.5 | 3.0 | 3.5\n3.5 | 4.0 | 4.5\n4.5 | 5.0 | 5.5\n5.5 | 6.0 | 6.5\n6.5 | 7.0 | 8\n8 | 12.0 | 13.5\n13.5 | 15.0 | 17.5\n17.5 | 20.0 | 22.5\n22.5 | 25.0 | 30\nlens: 13 12\nTotal mass of the gas in stars:\nAGB: 1.747E+05\nMassive: 6.045E+04\n25.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 2.100E+07 with lifetime: 8.070E+06\n20.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 2.688E+07 with lifetime: 9.916E+06\n15.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 2.688E+07 with lifetime: 1.366E+07\n12.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 3.443E+07 with lifetime: 1.825E+07\n7.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 5.645E+07 with lifetime: 4.284E+07\n6.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 7.228E+07 with lifetime: 5.688E+07\n5.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 9.255E+07 with lifetime: 8.135E+07\n4.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 1.185E+08 with lifetime: 1.304E+08\n3.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 1.943E+08 with lifetime: 2.528E+08\n2.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 5.224E+08 with lifetime: 7.131E+08\n1.65 Wind (if massive +SN2): of Z= 1.000000E-04 at time 1.097E+09 with lifetime: 1.217E+09\n1.0 Wind (if massive +SN2): of Z= 1.000000E-04 at time 2.949E+09 with lifetime: 5.564E+09\ntime and metallicity and total mass:\n3.443E+07 1.0000E-04 8.6080E+07\ntime and metallicity and total mass:\n3.443E+07 1.0000E-04 8.6080E+07\ntime and metallicity and total mass:\n7.228E+07 1.0000E-04 2.4975E+08\ntime and metallicity and total mass:\n7.228E+07 
1.0000E-04 2.4975E+08\ntime and metallicity and total mass:\n1.518E+08 1.0000E-04 4.7345E+08\ntime and metallicity and total mass:\n1.518E+08 1.0000E-04 4.7345E+08\ntime and metallicity and total mass:\n3.186E+08 1.0000E-04 7.7998E+08\ntime and metallicity and total mass:\n3.186E+08 1.0000E-04 7.7998E+08\ntime and metallicity and total mass:\n6.690E+08 1.0000E-04 1.1964E+09\ntime and metallicity and total mass:\n6.690E+08 1.0000E-04 1.1964E+09\ntime and metallicity and total mass:\n1.405E+09 1.0000E-04 1.7555E+09\ntime and metallicity and total mass:\n1.405E+09 1.0000E-04 1.7555E+09\ntime and metallicity and total mass:\n2.949E+09 1.0000E-04 2.4969E+09\ntime and metallicity and total mass:\n2.949E+09 1.0000E-04 2.4969E+09\ntime and metallicity and total mass:\n6.192E+09 1.0000E-04 3.4681E+09\ntime and metallicity and total mass:\n6.192E+09 1.0000E-04 3.4681E+09\ntime and metallicity and total mass:\n1.300E+10 1.0000E-04 3.6848E+09\ntime and metallicity and total mass:\n1.300E+10 1.0000E-04 3.6848E+09\n SYGMA run completed - Run time: 0.83s\n"
]
],
[
[
"# imf_yields_range - include yields only in this mass range",
"_____no_output_____"
]
],
[
[
"s0=s.sygma(iolevel=0,iniZ=0.0001,imf_bdys=[0.01,100],imf_yields_range=[1,100],\n hardsetZ=0.0001,table='yield_tables/agb_and_massive_stars_h1.txt',sn1a_on=False, \n sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab_h1.ppn')",
"SYGMA run in progress..\n SYGMA run completed - Run time: 0.31s\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7601d65914367a0c79dc6ea664b04ca0d529106 | 46,849 | ipynb | Jupyter Notebook | 2_objectdetection/2-4-5_SSD_model_forward.ipynb | MilyangParkJaeHun/pytorch_advanced | 479ea101dc656b8705a810f57e8925035eec8ae2 | [
"MIT"
] | null | null | null | 2_objectdetection/2-4-5_SSD_model_forward.ipynb | MilyangParkJaeHun/pytorch_advanced | 479ea101dc656b8705a810f57e8925035eec8ae2 | [
"MIT"
] | null | null | null | 2_objectdetection/2-4-5_SSD_model_forward.ipynb | MilyangParkJaeHun/pytorch_advanced | 479ea101dc656b8705a810f57e8925035eec8ae2 | [
"MIT"
] | null | null | null | 34.600443 | 106 | 0.416914 | [
[
[
"from math import sqrt\nfrom itertools import product\n\nimport pandas as pd\nimport torch\nfrom torch.autograd import Function\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.nn.init as init",
"_____no_output_____"
]
],
[
[
"# vgg",
"_____no_output_____"
]
],
[
[
"def make_vgg():\n layers = []\n in_channels = 3 \n\n cfg = [64, 64, 'M', 128, 128, 'M', 256, 256,\n 256, 'MC', 512, 512, 512, 'M', 512, 512, 512]\n\n for v in cfg:\n if v == 'M':\n layers += [nn.MaxPool2d(kernel_size=2, stride=2)]\n elif v == 'MC':\n layers += [nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True)]\n else:\n conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)\n layers += [conv2d, nn.ReLU(inplace=True)]\n in_channels = v\n\n pool5 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)\n conv6 = nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6)\n conv7 = nn.Conv2d(1024, 1024, kernel_size=1)\n layers += [pool5, conv6,\n nn.ReLU(inplace=True), conv7, nn.ReLU(inplace=True)]\n return nn.ModuleList(layers)\n\nvgg_test = make_vgg()\nprint(vgg_test)\n",
"ModuleList(\n (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (1): ReLU(inplace=True)\n (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (3): ReLU(inplace=True)\n (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (6): ReLU(inplace=True)\n (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (8): ReLU(inplace=True)\n (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (11): ReLU(inplace=True)\n (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (13): ReLU(inplace=True)\n (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (15): ReLU(inplace=True)\n (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=True)\n (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (18): ReLU(inplace=True)\n (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (20): ReLU(inplace=True)\n (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (22): ReLU(inplace=True)\n (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (25): ReLU(inplace=True)\n (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (27): ReLU(inplace=True)\n (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (29): ReLU(inplace=True)\n (30): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)\n (31): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(6, 6), dilation=(6, 6))\n (32): ReLU(inplace=True)\n (33): Conv2d(1024, 1024, kernel_size=(1, 1), stride=(1, 1))\n (34): ReLU(inplace=True)\n)\n"
]
],
[
[
"# extras",
"_____no_output_____"
]
],
[
[
"def make_extras():\n layers = []\n in_channels = 1024 \n\n cfg = [256, 512, 128, 256, 128, 256, 128, 256]\n\n layers += [nn.Conv2d(in_channels, cfg[0], kernel_size=(1))]\n layers += [nn.Conv2d(cfg[0], cfg[1], kernel_size=(3), stride=2, padding=1)]\n layers += [nn.Conv2d(cfg[1], cfg[2], kernel_size=(1))]\n layers += [nn.Conv2d(cfg[2], cfg[3], kernel_size=(3), stride=2, padding=1)]\n layers += [nn.Conv2d(cfg[3], cfg[4], kernel_size=(1))]\n layers += [nn.Conv2d(cfg[4], cfg[5], kernel_size=(3))]\n layers += [nn.Conv2d(cfg[5], cfg[6], kernel_size=(1))]\n layers += [nn.Conv2d(cfg[6], cfg[7], kernel_size=(3))]\n \n return nn.ModuleList(layers)\n\nextras_test = make_extras()\nprint(extras_test)\n",
"ModuleList(\n (0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))\n (1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n (2): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))\n (3): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n (4): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))\n (5): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1))\n (6): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))\n (7): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1))\n)\n"
]
],
[
[
"# loc conf",
"_____no_output_____"
]
],
[
[
"\ndef make_loc_conf(num_classes=21, bbox_aspect_num=[4, 6, 6, 6, 4, 4]):\n\n loc_layers = []\n conf_layers = []\n\n loc_layers += [nn.Conv2d(512, bbox_aspect_num[0]\n * 4, kernel_size=3, padding=1)]\n conf_layers += [nn.Conv2d(512, bbox_aspect_num[0]\n * num_classes, kernel_size=3, padding=1)]\n\n loc_layers += [nn.Conv2d(1024, bbox_aspect_num[1]\n * 4, kernel_size=3, padding=1)]\n conf_layers += [nn.Conv2d(1024, bbox_aspect_num[1]\n * num_classes, kernel_size=3, padding=1)]\n\n loc_layers += [nn.Conv2d(512, bbox_aspect_num[2]\n * 4, kernel_size=3, padding=1)]\n conf_layers += [nn.Conv2d(512, bbox_aspect_num[2]\n * num_classes, kernel_size=3, padding=1)]\n\n loc_layers += [nn.Conv2d(256, bbox_aspect_num[3]\n * 4, kernel_size=3, padding=1)]\n conf_layers += [nn.Conv2d(256, bbox_aspect_num[3]\n * num_classes, kernel_size=3, padding=1)]\n\n loc_layers += [nn.Conv2d(256, bbox_aspect_num[4]\n * 4, kernel_size=3, padding=1)]\n conf_layers += [nn.Conv2d(256, bbox_aspect_num[4]\n * num_classes, kernel_size=3, padding=1)]\n\n loc_layers += [nn.Conv2d(256, bbox_aspect_num[5]\n * 4, kernel_size=3, padding=1)]\n conf_layers += [nn.Conv2d(256, bbox_aspect_num[5]\n * num_classes, kernel_size=3, padding=1)]\n\n return nn.ModuleList(loc_layers), nn.ModuleList(conf_layers)\n\nloc_test, conf_test = make_loc_conf()\nprint(loc_test)\nprint(conf_test)\n",
"ModuleList(\n (0): Conv2d(512, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (1): Conv2d(1024, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (2): Conv2d(512, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (3): Conv2d(256, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (4): Conv2d(256, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (5): Conv2d(256, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n)\nModuleList(\n (0): Conv2d(512, 84, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (1): Conv2d(1024, 126, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (2): Conv2d(512, 126, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (3): Conv2d(256, 126, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (4): Conv2d(256, 84, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (5): Conv2d(256, 84, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n)\n"
]
],
[
[
"# L2Norm",
"_____no_output_____"
]
],
[
[
"class L2Norm(nn.Module):\n def __init__(self, input_channels=512, scale=20):\n super(L2Norm, self).__init__() \n self.weight = nn.Parameter(torch.Tensor(input_channels))\n self.scale = scale \n self.reset_parameters() \n self.eps = 1e-10\n\n def reset_parameters(self):\n init.constant_(self.weight, self.scale) \n\n def forward(self, x):\n # torch.Size([batch_num, 1, 38, 38])\n norm = x.pow(2).sum(dim=1, keepdim=True).sqrt()+self.eps\n x = torch.div(x, norm)\n\n # torch.Size([batch_num, 512, 38, 38])\n weights = self.weight.unsqueeze(\n 0).unsqueeze(2).unsqueeze(3).expand_as(x)\n out = weights * x\n\n return out\n",
"_____no_output_____"
],
[
"class DBox(object):\n def __init__(self, cfg):\n super(DBox, self).__init__()\n\n self.image_size = cfg['input_size'] \n self.feature_maps = cfg['feature_maps']\n self.num_priors = len(cfg[\"feature_maps\"]) \n self.steps = cfg['steps'] \n \n self.min_sizes = cfg['min_sizes']\n \n self.max_sizes = cfg['max_sizes']\n \n self.aspect_ratios = cfg['aspect_ratios']\n\n def make_dbox_list(self):\n mean = []\n # 'feature_maps': [38, 19, 10, 5, 3, 1]\n for k, f in enumerate(self.feature_maps):\n for i, j in product(range(f), repeat=2): \n # 300 / 'steps': [8, 16, 32, 64, 100, 300],\n f_k = self.image_size / self.steps[k]\n\n cx = (j + 0.5) / f_k\n cy = (i + 0.5) / f_k\n\n # DBox [cx,cy, width, height]\n # 'min_sizes': [30, 60, 111, 162, 213, 264]\n s_k = self.min_sizes[k]/self.image_size\n mean += [cx, cy, s_k, s_k]\n\n # DBox [cx,cy, width, height]\n # 'max_sizes': [60, 111, 162, 213, 264, 315],\n s_k_prime = sqrt(s_k * (self.max_sizes[k]/self.image_size))\n mean += [cx, cy, s_k_prime, s_k_prime]\n\n # defBox [cx,cy, width, height]\n for ar in self.aspect_ratios[k]:\n mean += [cx, cy, s_k*sqrt(ar), s_k/sqrt(ar)]\n mean += [cx, cy, s_k/sqrt(ar), s_k*sqrt(ar)]\n\n # DBox torch.Size([8732, 4])\n output = torch.Tensor(mean).view(-1, 4)\n\n output.clamp_(max=1, min=0)\n\n return output\n",
"_____no_output_____"
],
[
"# SSD300\nssd_cfg = {\n 'num_classes': 21, \n 'input_size': 300, \n 'bbox_aspect_num': [4, 6, 6, 6, 4, 4], \n 'feature_maps': [38, 19, 10, 5, 3, 1],\n 'steps': [8, 16, 32, 64, 100, 300], \n 'min_sizes': [30, 60, 111, 162, 213, 264], \n 'max_sizes': [60, 111, 162, 213, 264, 315], \n 'aspect_ratios': [[2], [2, 3], [2, 3], [2, 3], [2], [2]],\n}\n\n# DBox\ndbox = DBox(ssd_cfg)\ndbox_list = dbox.make_dbox_list()\n\n# DBox\npd.DataFrame(dbox_list.numpy())\n",
"_____no_output_____"
]
],
[
[
"# SSD",
"_____no_output_____"
]
],
[
[
"# SSD\nclass SSD(nn.Module):\n\n def __init__(self, phase, cfg):\n super(SSD, self).__init__()\n\n self.phase = phase # train or inference\n self.num_classes = cfg[\"num_classes\"]\n\n # SSD\n self.vgg = make_vgg()\n self.extras = make_extras()\n self.L2Norm = L2Norm()\n self.loc, self.conf = make_loc_conf(\n cfg[\"num_classes\"], cfg[\"bbox_aspect_num\"])\n\n # DBox\n dbox = DBox(cfg)\n self.dbox_list = dbox.make_dbox_list()\n\n if phase == 'inference':\n self.detect = Detect()\n\nssd_test = SSD(phase=\"train\", cfg=ssd_cfg)\nprint(ssd_test)\n",
"SSD(\n (vgg): ModuleList(\n (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (1): ReLU(inplace=True)\n (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (3): ReLU(inplace=True)\n (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (6): ReLU(inplace=True)\n (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (8): ReLU(inplace=True)\n (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (11): ReLU(inplace=True)\n (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (13): ReLU(inplace=True)\n (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (15): ReLU(inplace=True)\n (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=True)\n (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (18): ReLU(inplace=True)\n (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (20): ReLU(inplace=True)\n (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (22): ReLU(inplace=True)\n (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (25): ReLU(inplace=True)\n (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (27): ReLU(inplace=True)\n (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (29): ReLU(inplace=True)\n (30): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)\n (31): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(6, 6), dilation=(6, 6))\n (32): ReLU(inplace=True)\n (33): Conv2d(1024, 1024, kernel_size=(1, 1), stride=(1, 1))\n (34): ReLU(inplace=True)\n )\n (extras): 
ModuleList(\n (0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))\n (1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n (2): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))\n (3): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n (4): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))\n (5): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1))\n (6): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))\n (7): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1))\n )\n (L2Norm): L2Norm()\n (loc): ModuleList(\n (0): Conv2d(512, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (1): Conv2d(1024, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (2): Conv2d(512, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (3): Conv2d(256, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (4): Conv2d(256, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (5): Conv2d(256, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n (conf): ModuleList(\n (0): Conv2d(512, 84, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (1): Conv2d(1024, 126, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (2): Conv2d(512, 126, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (3): Conv2d(256, 126, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (4): Conv2d(256, 84, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (5): Conv2d(256, 84, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n)\n"
],
[
"\n\ndef decode(loc, dbox_list):\n\n\n # DBox[cx, cy, width, height]\n # loc[Δcx, Δcy, Δwidth, Δheight]\n\n boxes = torch.cat((\n dbox_list[:, :2] + loc[:, :2] * 0.1 * dbox_list[:, 2:],\n dbox_list[:, 2:] * torch.exp(loc[:, 2:] * 0.2)), dim=1)\n # boxes torch.Size([8732, 4])\n\n # BBox [cx, cy, width, height] [xmin, ymin, xmax, ymax] \n boxes[:, :2] -= boxes[:, 2:] / 2 \n boxes[:, 2:] += boxes[:, :2] \n\n return boxes\n",
"_____no_output_____"
]
],
[
[
"# Non-Maximum Suppression",
"_____no_output_____"
]
],
[
[
"# Non-Maximum Suppression\n\ndef nm_suppression(boxes, scores, overlap=0.45, top_k=200):\n\n count = 0\n keep = scores.new(scores.size(0)).zero_().long()\n\n x1 = boxes[:, 0]\n y1 = boxes[:, 1]\n x2 = boxes[:, 2]\n y2 = boxes[:, 3]\n area = torch.mul(x2 - x1, y2 - y1)\n\n tmp_x1 = boxes.new()\n tmp_y1 = boxes.new()\n tmp_x2 = boxes.new()\n tmp_y2 = boxes.new()\n tmp_w = boxes.new()\n tmp_h = boxes.new()\n\n v, idx = scores.sort(0)\n\n idx = idx[-top_k:]\n\n while idx.numel() > 0:\n i = idx[-1] \n\n keep[count] = i\n count += 1\n\n if idx.size(0) == 1:\n break\n\n idx = idx[:-1]\n\n torch.index_select(x1, 0, idx, out=tmp_x1)\n torch.index_select(y1, 0, idx, out=tmp_y1)\n torch.index_select(x2, 0, idx, out=tmp_x2)\n torch.index_select(y2, 0, idx, out=tmp_y2)\n\n tmp_x1 = torch.clamp(tmp_x1, min=x1[i])\n tmp_y1 = torch.clamp(tmp_y1, min=y1[i])\n tmp_x2 = torch.clamp(tmp_x2, max=x2[i])\n tmp_y2 = torch.clamp(tmp_y2, max=y2[i])\n\n tmp_w.resize_as_(tmp_x2)\n tmp_h.resize_as_(tmp_y2)\n\n tmp_w = tmp_x2 - tmp_x1\n tmp_h = tmp_y2 - tmp_y1\n\n tmp_w = torch.clamp(tmp_w, min=0.0)\n tmp_h = torch.clamp(tmp_h, min=0.0)\n\n inter = tmp_w*tmp_h\n\n rem_areas = torch.index_select(area, 0, idx) \n union = (rem_areas - inter) + area[i] \n IoU = inter/union\n\n idx = idx[IoU.le(overlap)] \n\n\n return keep, count\n",
"_____no_output_____"
]
],
[
[
"# Detect",
"_____no_output_____"
]
],
[
[
"\nclass Detect(Function):\n\n def __init__(self, conf_thresh=0.01, top_k=200, nms_thresh=0.45):\n self.softmax = nn.Softmax(dim=-1) \n self.conf_thresh = conf_thresh \n self.top_k = top_k \n self.nms_thresh = nms_thresh \n \n def forward(self, loc_data, conf_data, dbox_list):\n num_batch = loc_data.size(0) \n num_dbox = loc_data.size(1) \n num_classes = conf_data.size(2) \n\n conf_data = self.softmax(conf_data)\n\n output = torch.zeros(num_batch, num_classes, self.top_k, 5)\n\n conf_preds = conf_data.transpose(2, 1)\n\n for i in range(num_batch):\n\n decoded_boxes = decode(loc_data[i], dbox_list)\n\n conf_scores = conf_preds[i].clone()\n\n for cl in range(1, num_classes):\n\n c_mask = conf_scores[cl].gt(self.conf_thresh)\n\n scores = conf_scores[cl][c_mask]\n\n if scores.nelement() == 0: \n continue\n\n l_mask = c_mask.unsqueeze(1).expand_as(decoded_boxes)\n # l_mask:torch.Size([8732, 4])\n\n boxes = decoded_boxes[l_mask].view(-1, 4)\n\n ids, count = nm_suppression(\n boxes, scores, self.nms_thresh, self.top_k)\n\n output[i, cl, :count] = torch.cat((scores[ids[:count]].unsqueeze(1),\n boxes[ids[:count]]), 1)\n\n return output # torch.Size([1, 21, 200, 5])\n",
"_____no_output_____"
]
],
[
[
"# SSD",
"_____no_output_____"
]
],
[
[
"\nclass SSD(nn.Module):\n\n def __init__(self, phase, cfg):\n super(SSD, self).__init__()\n\n self.phase = phase # train or inference\n self.num_classes = cfg[\"num_classes\"] \n\n # SSD\n self.vgg = make_vgg()\n self.extras = make_extras()\n self.L2Norm = L2Norm()\n self.loc, self.conf = make_loc_conf(\n cfg[\"num_classes\"], cfg[\"bbox_aspect_num\"])\n\n # DBox\n dbox = DBox(cfg)\n self.dbox_list = dbox.make_dbox_list()\n\n if phase == 'inference':\n self.detect = Detect()\n\n def forward(self, x):\n sources = list() \n loc = list() \n conf = list() \n\n for k in range(23):\n x = self.vgg[k](x)\n\n source1 = self.L2Norm(x)\n sources.append(source1)\n\n for k in range(23, len(self.vgg)):\n x = self.vgg[k](x)\n\n sources.append(x)\n\n for k, v in enumerate(self.extras):\n x = F.relu(v(x), inplace=True)\n if k % 2 == 1: \n sources.append(x)\n\n for (x, l, c) in zip(sources, self.loc, self.conf):\n loc.append(l(x).permute(0, 2, 3, 1).contiguous())\n conf.append(c(x).permute(0, 2, 3, 1).contiguous())\n\n # loc torch.Size([batch_num, 34928])\n # conf torch.Size([batch_num, 183372])\n loc = torch.cat([o.view(o.size(0), -1) for o in loc], 1)\n conf = torch.cat([o.view(o.size(0), -1) for o in conf], 1)\n\n # loc torch.Size([batch_num, 8732, 4])\n # conf torch.Size([batch_num, 8732, 21])\n loc = loc.view(loc.size(0), -1, 4)\n conf = conf.view(conf.size(0), -1, self.num_classes)\n\n output = (loc, conf, self.dbox_list)\n\n if self.phase == \"inference\": # \n # torch.Size([batch_num, 21, 200, 5])\n return self.detect(output[0], output[1], output[2])\n\n else: \n return output\n # (loc, conf, dbox_list)のタプル\n\n\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e76037f5cdb57fd6d349ee82227f38568c19b0c6 | 2,062 | ipynb | Jupyter Notebook | 04_download_test_data.ipynb | florianboergel/pyTEF | 721c0b8fd42564361a6240f27acb423a8525f7b0 | [
"MIT"
] | null | null | null | 04_download_test_data.ipynb | florianboergel/pyTEF | 721c0b8fd42564361a6240f27acb423a8525f7b0 | [
"MIT"
] | 16 | 2021-06-07T21:22:41.000Z | 2022-03-30T05:08:33.000Z | 04_download_test_data.ipynb | florianboergel/pyTEF | 721c0b8fd42564361a6240f27acb423a8525f7b0 | [
"MIT"
] | null | null | null | 31.242424 | 128 | 0.553346 | [
[
[
"# default_exp tutorial\nfrom nbdev import *",
"_____no_output_____"
]
],
[
[
"# Download TEF test data\n\n> Data used for Example 2",
"_____no_output_____"
]
],
[
[
"#export\n# Code from xarray\ntry:\n import pooch\nexcept ImportError as e:\n raise ImportError(\n \"tutorial.download_test_data depends on pooch to download and manage datasets.\"\n \" To proceed please install pooch.\"\n ) from e",
"_____no_output_____"
],
[
"#export\ndef download_test_data(downloadpath):\n filepath_1d_winter = pooch.retrieve(\"https://github.com/florianboergel/pyTEF/raw/master/data/SoH_2_2011_01.nc\",\n known_hash=None, path = downloadpath, fname = \"SoH_2_2011_01.nc\")\n filepath_1d_summer = pooch.retrieve(\"https://github.com/florianboergel/pyTEF/raw/master/data/SoH_2_2011_08.nc\",\n known_hash=None, path = downloadpath, fname = \"SoH_2_2011_08.nc\")\n filepath_2d_winter = pooch.retrieve(\"https://github.com/florianboergel/pyTEF/raw/master/data/Surface_2011_01.nc\",\n known_hash=None, path = downloadpath, fname = \"Surface_2011_01.nc\")\n filepath_2d_summer = pooch.retrieve(\"https://github.com/florianboergel/pyTEF/raw/master/data/Surface_2011_08.nc\",\n known_hash=None, path = downloadpath, fname = \"Surface_2011_08.nc\")\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |