Dataset schema (column: type, observed range):

hexsha: stringlengths (40-40)
size: int64 (6-14.9M)
ext: stringclasses (1 value)
lang: stringclasses (1 value)
max_stars_repo_path: stringlengths (6-260)
max_stars_repo_name: stringlengths (6-119)
max_stars_repo_head_hexsha: stringlengths (40-41)
max_stars_repo_licenses: sequence
max_stars_count: int64 (1-191k)
max_stars_repo_stars_event_min_datetime: stringlengths (24-24)
max_stars_repo_stars_event_max_datetime: stringlengths (24-24)
max_issues_repo_path: stringlengths (6-260)
max_issues_repo_name: stringlengths (6-119)
max_issues_repo_head_hexsha: stringlengths (40-41)
max_issues_repo_licenses: sequence
max_issues_count: int64 (1-67k)
max_issues_repo_issues_event_min_datetime: stringlengths (24-24)
max_issues_repo_issues_event_max_datetime: stringlengths (24-24)
max_forks_repo_path: stringlengths (6-260)
max_forks_repo_name: stringlengths (6-119)
max_forks_repo_head_hexsha: stringlengths (40-41)
max_forks_repo_licenses: sequence
max_forks_count: int64 (1-105k)
max_forks_repo_forks_event_min_datetime: stringlengths (24-24)
max_forks_repo_forks_event_max_datetime: stringlengths (24-24)
avg_line_length: float64 (2-1.04M)
max_line_length: int64 (2-11.2M)
alphanum_fraction: float64 (0-1)
cells: sequence
cell_types: sequence
cell_type_groups: sequence
hexsha: e70dbce23212db5d2d9e911aff2a13958630f237
size: 13,051
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: quantfin/adjusted-factor-based-perf-attrib.ipynb
max_stars_repo_name: georgh0021/PaTSwAPS
max_stars_repo_head_hexsha: 62db213d3ec55a2d396a6ab884f2ce2f70a9feaa
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: 16
max_stars_repo_stars_event_min_datetime: 2018-08-24T13:05:50.000Z
max_stars_repo_stars_event_max_datetime: 2020-03-25T04:34:49.000Z
max_issues_repo_path: quantfin/adjusted-factor-based-perf-attrib.ipynb
max_issues_repo_name: eigenfoo/random
max_issues_repo_head_hexsha: 62db213d3ec55a2d396a6ab884f2ce2f70a9feaa
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: quantfin/adjusted-factor-based-perf-attrib.ipynb
max_forks_repo_name: eigenfoo/random
max_forks_repo_head_hexsha: 62db213d3ec55a2d396a6ab884f2ce2f70a9feaa
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: 4
max_forks_repo_forks_event_min_datetime: 2018-09-18T14:40:51.000Z
max_forks_repo_forks_event_max_datetime: 2019-10-01T13:08:03.000Z
avg_line_length: 40.033742
max_line_length: 900
alphanum_fraction: 0.600567
[ [ [ "# Adjusted Factor-Based Performance Attribution", "_____no_output_____" ], [ "[Link to article here](http://bfjlaward.com/pdf/26059/67-78_Stubbs_colour_JPM_0517.pdf)\n\nSuppose an algorithm is trading, generating a daily profit or loss (PnL). *How much of the PnL came from where, and what can we do to mitigate this risk?*\n\nThis question is the motivation of performance attribution.\n\nJust as an algorithm can use a factor model to make its trading decisions, so too can a factor model be used to analyze an algorithm's trading decisions. In a sense, performance attribution can be thought of as solving the inverse problem of designing an algorithm.", "_____no_output_____" ], [ "First, the authors explain why the factors in a risk model matter. Consider the strategy:\n\n*maximize:* exposure to a growth factor\n\n*subject to:*\n- long only\n- fully invested\n- active risk constraint of ±3% (overall strategy)\n- sector bounds of ±4%\n- asset bounds of ±3%\n\nWe analyze the returns using two risk models:\n\n- RM1 has 10 sector factors and 4 style factors (market sensititvity, momentum, size and value)\n- RM2 is the same as RM1, but **with** the growth factor", "_____no_output_____" ], [ "<style type=\"text/css\">\n.tg {border-collapse:collapse;border-spacing:0;}\n.tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}\n.tg th{font-family:Arial, sans-serif;font-size:14px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}\n.tg .tg-baqh{text-align:center;vertical-align:top}\n.tg .tg-lqy6{text-align:right;vertical-align:top}\n.tg .tg-yw4l{vertical-align:top}\n</style>\n<table class=\"tg\">\n <tr>\n <th class=\"tg-baqh\">Risk Model</th>\n <th class=\"tg-lqy6\">RM1</th>\n <th class=\"tg-lqy6\">RM2</th>\n </tr>\n <tr>\n <td class=\"tg-yw4l\">Returns</td>\n <td class=\"tg-baqh\" colspan=\"2\">1.47%</td>\n </tr>\n <tr>\n <td class=\"tg-yw4l\">Factor Contribution (FC)</td>\n <td class=\"tg-lqy6\">-0.18%</td>\n <td class=\"tg-lqy6\">2.35%</td>\n </tr>\n <tr>\n <td class=\"tg-yw4l\">Specific Contribution (SC)</td>\n <td class=\"tg-lqy6\">1.65%</td>\n <td class=\"tg-lqy6\">-0.88%</td>\n </tr>\n <tr>\n <td class=\"tg-yw4l\">FC-SC Correlation</td>\n <td class=\"tg-lqy6\">-0.09</td>\n <td class=\"tg-lqy6\">-0.32</td>\n </tr>\n</table>", "_____no_output_____" ], [ "As expected, RM2 attributes much more of the returns to the factors. Even worse, RM2 has a significant correlation between the daily factor contribution and the daily specific contribution.", "_____no_output_____" ], [ "## Why is correlation between factor and specific contribution bad?\n\nConsider the following portfolio returns:\n\n$$ r = f + (-0.5f) $$\n\nwhere:\n\n- $r$ are the portfolio's returns\n- $f$ is the factor contribution\n- $-0.5f$ is the specific contribution\n\nIn this case, we have an FC-SC correlation of 1: this means that some (in this case, all) of the specific contribution can be explained by the factor $f$. This is undesireable: we want the specific contribution to be completely idiosyncratic to $f$.\n\nOne assumption of linear regression is that $E(u_i | H) = 0$: i.e. the expected value of each position in the unexplained portfolio is 0. Violation of this assumption leads to biased estimates of $\\lambda$. 
Now, if the unexplained portfolio covaries with the factor-mimicking portfolios, $cov(u, H) \\neq 0 \\implies E(u_i | H) \\neq 0$", "_____no_output_____" ], [ "<div class=\"alert alert-success\">\n**TLDR:** reducing the correlation between factor contributions and specific contributions drives the specific contribution down, thus leading to more accurate inferences from the performance attribution.\n</div>", "_____no_output_____" ], [ "## Mathematics of Factor Attribution\n\nThere are 2 ways to think of factor attribution.", "_____no_output_____" ], [ "### Way 1 (less important):\n\n$$ r = Xf + \\epsilon $$\n\nwhere:\n\n- $n$ is the number of assets\n- $k$ is the number of factors\n- $X$ is an $n \\times k$ factor exposure matrix\n- $f$ is an $n \\times 1$ vector of factor returns\n- $\\epsilon$ is an $n \\times 1$ vector of stock-specific residual returns\n\nIn a cross-sectional returns model, $X$ is given, and $f$ is estimated using WLS regression.\n\nIf this is the case, then it can be shown that\n\n$$ f = H^t r$$\n\nwhere:\n\n- $H = WX(X^tWX)^{-1}$ is an $n \\times k$ matrix whose columns are pure factor-mimicking portfolios\n\nKnowing $f$ and our portfolio $h$ (an $n \\times 1$ vector of our holdings), we thus have our PnL attribution:\n\n$$ h^t r = h^t X f + h^t \\epsilon $$", "_____no_output_____" ], [ "### Way 2 (more important):\n\n$$ h = \\tilde{H}\\lambda + u$$\n\nwhere:\n\n- $\\tilde{H}$ is now **constructed by us** (I used a tilde to reflect that)\n- $\\lambda$ is a $k \\times 1$ vector of the portfolio's factor exposures\n- $u$ is a $k \\times 1$ vector of factor-specific residual exposures\n\n> \"The advantage of this second way of thinking about attribution is that we can see that exposures are not exact:\n> They are least-squares estimates of a linear regression. And as with all regressions, the estimates contain\n> errors and may be biased if all underlying model assumptions are not satisfied.\"\n\nClearly, if $\\lambda = X^th$, this way is no different from way 1.\n\n$$ h = \\tilde{H}\\lambda + u $$\n\n$$ \\implies h = \\tilde{H} X^t h + u $$\n\n$$ \\implies h^t = h^t X \\tilde{H}^t + u^t $$\n\n$$ \\implies h^t r = h^t X \\tilde{H}^t r + u^t r $$\n\n$$ \\implies h^t r = h^t X f + u^t r $$", "_____no_output_____" ], [ "#### So, when does $\\lambda = X^th$?\n\nBasically, never in real life.\n\nThe authors outline some instances in which it is: if you cleverly construct $\\tilde{H}$'s factor-mimicking portfolios using weights that cancel out some bad stuff that we did before (I don't really get this bit).", "_____no_output_____" ], [ "## So, how do we make $cov(u, H) = 0$?\n\nAgain, there are two ways.\n\nWe consider the residual portfolio $u$ as a linear combination of the factor-mimicking portfolios in $H$. Let $H = [H_1 \\: H_2 \\: ... \\: H_k]$.", "_____no_output_____" ], [ "### Way 1: Absolute adjustment\n\nFirst, estimate the $\\beta$s using a time-series regression\n\n- Instead of using the first equation and running a cross-sectional, multivariate regression, use the third equation to run a time-series regression.\n \n- A cross-sectional regression won't work because it introduces some of the aforementioned biases. 
Further, a time series regression has the benefit of being a single regression through time, as opposed to modifying the factor exposures differently in each period.\n\n\n$$ u = \\sum_{j}{\\beta_j \\tilde{H_j}} + \\tilde{u} $$\n\n$$\\implies r^t u = \\sum_{j}{\\beta_j \\: r^t \\tilde{H_j}} + r^t \\tilde{u} $$\n\n$$\\implies r^t u = \\sum_{j}{\\beta_j \\: \\tilde{f_j}} + r^t \\tilde{u} $$\n\nThen,\n\n$$ h = \\tilde{H}\\lambda + u$$\n\n$$ \\implies r^t h = r^t \\tilde{H} \\lambda + r^t u $$\n\n$$ \\implies r^t h = \\sum_{j}{r^t \\tilde{H_j} X_j^t h} + \\sum_{j}{\\beta_j \\: r^t H_j} + r^t \\tilde{u} $$\n\n$$ \\implies r^t h = \\sum_{j}{f_j (X_j^t h + \\beta_j)} + r^t \\tilde{u}$$", "_____no_output_____" ], [ "### Way 2: Relative adjustment\n\nThe authors find that in practice, exposures are typically off by a relative amount, instead of an absolute amount. Therefore, they propose an alternative to the above equation:\n\n$$ r^t h = \\sum_{j}{f_j X_j^t h (1 + \\beta_j)} + r^t \\tilde{u}$$\n\nwhere the $\\beta$s are estimated using the following equation:\n\n$$ r_t^t u_t = \\sum_{j}{f_{tj} X_{tj}^t h_t \\beta_j} + r_t^t \\tilde{u_t} $$\n\n> A relative adjustment can also be more appropriate if factor exposures are changing through time. For these reasons, we prefer the relative adjustment to the absolute adjustment and use it in all computational results.", "_____no_output_____" ], [ "<div class=\"alert alert-warning\">\nBeware of overfitting! The problem as stated is that \"some of the specific contribution is explained by the factor\". Be careful that we do not explain the **noise** in the specific contribution with the factor!\n</div>", "_____no_output_____" ], [ "Relative adjustments help to overcome this problem...\n\n> Because we are making relative adjustments to the exposures, the adjustment procedure will not suddenly allow a factor to explain a large portion of returns when the unadjusted factor exposure is near zero. If the exposure was near zero prior to adjustment, it will remain near zero after the adjustment. In this sense, the proposed adjusted attribution methodology behaves like a Bayesian method with the standard exposures as the prior.\n\nBut more importantly, a robust method is needed to estimate the $\\beta$s. The authors propose the following scheme:\n\n> We use a heuristic variable selection scheme to select the independent variables (factor contributions) of Equation 10 based on their statistical significance, as measured by their $p$-values. We use an iterative regression scheme that starts with all variables present. After each iteration, we remove the variable with the greatest p-value if it is greater than the specified tolerance 0.02. If none of the $p$-values exceed the tolerance, we stop the iterative procedure of removing factors. Thereafter, we employ a reentry procedure in which we consider reentering rejected variables into the regression one at a time. A variable can reenter the regression only if its entry does not increase the $p$-value of any variable (including itself) above the tolerance. After the reentry trials, we run a final regression with the selected variables to compute the final estimate of $\\beta$.\n\nThis sounds to me like it is very susceptible to overfitting.", "_____no_output_____" ], [ "Last remark:\n\n> In our experience, the classical bias/variance trade-off seems to exist in standard attribution results in which variance is the volatility of the unexplained portfolio, and bias is the over- or underestimation of factor contributions. 
", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
hexsha: e70dc42b2a81b6a37e7549049fab991df0d617a8
size: 567,886
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-10-13.ipynb
max_stars_repo_name: pvieito/Radar-STATS
max_stars_repo_head_hexsha: 9ff991a4db776259bc749a823ee6f0b0c0d38108
max_stars_repo_licenses: [ "Apache-2.0" ]
max_stars_count: 9
max_stars_repo_stars_event_min_datetime: 2020-10-14T16:58:32.000Z
max_stars_repo_stars_event_max_datetime: 2021-10-05T12:01:56.000Z
max_issues_repo_path: Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-10-13.ipynb
max_issues_repo_name: pvieito/Radar-STATS
max_issues_repo_head_hexsha: 9ff991a4db776259bc749a823ee6f0b0c0d38108
max_issues_repo_licenses: [ "Apache-2.0" ]
max_issues_count: 3
max_issues_repo_issues_event_min_datetime: 2020-10-08T04:48:35.000Z
max_issues_repo_issues_event_max_datetime: 2020-10-10T20:46:58.000Z
max_forks_repo_path: Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-10-13.ipynb
max_forks_repo_name: Radar-STATS/Radar-STATS
max_forks_repo_head_hexsha: 61d8b3529f6bbf4576d799e340feec5b183338a3
max_forks_repo_licenses: [ "Apache-2.0" ]
max_forks_count: 3
max_forks_repo_forks_event_min_datetime: 2020-09-27T07:39:26.000Z
max_forks_repo_forks_event_max_datetime: 2020-10-02T07:48:56.000Z
avg_line_length: 78.785516
max_line_length: 98,856
alphanum_fraction: 0.729189
[ [ [ "# RadarCOVID-Report", "_____no_output_____" ], [ "## Data Extraction", "_____no_output_____" ] ], [ [ "import datetime\nimport json\nimport logging\nimport os\nimport shutil\nimport tempfile\nimport textwrap\nimport uuid\n\nimport matplotlib.ticker\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\n%matplotlib inline", "_____no_output_____" ], [ "current_working_directory = os.environ.get(\"PWD\")\nif current_working_directory:\n os.chdir(current_working_directory)\n\nsns.set()\nmatplotlib.rcParams[\"figure.figsize\"] = (15, 6)\n\nextraction_datetime = datetime.datetime.utcnow()\nextraction_date = extraction_datetime.strftime(\"%Y-%m-%d\")\nextraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)\nextraction_previous_date = extraction_previous_datetime.strftime(\"%Y-%m-%d\")\nextraction_date_with_hour = datetime.datetime.utcnow().strftime(\"%Y-%m-%d@%H\")", "_____no_output_____" ] ], [ [ "### Constants", "_____no_output_____" ] ], [ [ "spain_region_country_name = \"Spain\"\nspain_region_country_code = \"ES\"\n\nbackend_extraction_days = 7 * 2\ndaily_summary_days = 7 * 4 * 3\ndaily_plot_days = 7 * 4\ntek_dumps_load_limit = daily_summary_days + 1", "_____no_output_____" ] ], [ [ "### Parameters", "_____no_output_____" ] ], [ [ "active_region_parameter = os.environ.get(\"RADARCOVID_REPORT__ACTIVE_REGION\")\nif active_region_parameter:\n active_region_country_code, active_region_country_name = \\\n active_region_parameter.split(\":\")\nelse:\n active_region_country_code, active_region_country_name = \\\n spain_region_country_code, spain_region_country_name", "_____no_output_____" ] ], [ [ "### COVID-19 Cases", "_____no_output_____" ] ], [ [ "confirmed_df = pd.read_csv(\"https://covid19tracking.narrativa.com/csv/confirmed.csv\")\n\nradar_covid_countries = {active_region_country_name}\n\nconfirmed_df = confirmed_df[confirmed_df[\"Country_EN\"].isin(radar_covid_countries)]\nconfirmed_df = confirmed_df[pd.isna(confirmed_df.Region)]\nconfirmed_df.head()", "_____no_output_____" ], [ "confirmed_country_columns = list(filter(lambda x: x.startswith(\"Country_\"), confirmed_df.columns))\nconfirmed_regional_columns = confirmed_country_columns + [\"Region\"]\nconfirmed_df.drop(columns=confirmed_regional_columns, inplace=True)\nconfirmed_df.head()", "_____no_output_____" ], [ "confirmed_df = confirmed_df.sum().to_frame()\nconfirmed_df.tail()", "_____no_output_____" ], [ "confirmed_df.reset_index(inplace=True)\nconfirmed_df.columns = [\"sample_date_string\", \"cumulative_cases\"]\nconfirmed_df.sort_values(\"sample_date_string\", inplace=True)\nconfirmed_df[\"new_cases\"] = confirmed_df.cumulative_cases.diff()\nconfirmed_df[\"covid_cases\"] = confirmed_df.new_cases.rolling(7).mean().round()\nconfirmed_df.tail()", "_____no_output_____" ], [ "extraction_date_confirmed_df = \\\n confirmed_df[confirmed_df.sample_date_string == extraction_date]\nextraction_previous_date_confirmed_df = \\\n confirmed_df[confirmed_df.sample_date_string == extraction_previous_date].copy()\n\nif extraction_date_confirmed_df.empty and \\\n not extraction_previous_date_confirmed_df.empty:\n extraction_previous_date_confirmed_df[\"sample_date_string\"] = extraction_date\n extraction_previous_date_confirmed_df[\"new_cases\"] = \\\n extraction_previous_date_confirmed_df.covid_cases\n extraction_previous_date_confirmed_df[\"cumulative_cases\"] = \\\n extraction_previous_date_confirmed_df.new_cases + \\\n extraction_previous_date_confirmed_df.cumulative_cases\n confirmed_df = 
confirmed_df.append(extraction_previous_date_confirmed_df)\n\nconfirmed_df[\"covid_cases\"] = confirmed_df.covid_cases.fillna(0).astype(int)\nconfirmed_df.tail()", "_____no_output_____" ], [ "confirmed_df[[\"new_cases\", \"covid_cases\"]].plot()", "_____no_output_____" ] ], [ [ "### Extract API TEKs", "_____no_output_____" ] ], [ [ "from Modules.ExposureNotification import exposure_notification_io\n\nraw_zip_path_prefix = \"Data/TEKs/Raw/{backend_identifier}/\"\nraw_zip_path_suffix = \"/TEKs-{backend_identifier}-{sample_date}.zip\"\nraw_zip_paths = [\n \"Current\",\n f\"Daily/{extraction_date}\",\n]\nraw_zip_paths = list(map(lambda x: raw_zip_path_prefix + x + raw_zip_path_suffix, raw_zip_paths))\n\nfail_on_error_backend_identifiers = [active_region_country_code]\nmulti_region_exposure_keys_df = \\\n exposure_notification_io.download_exposure_keys_from_backends(\n days=backend_extraction_days,\n fail_on_error_backend_identifiers=fail_on_error_backend_identifiers,\n save_raw_zip_path=raw_zip_paths)\nmulti_region_exposure_keys_df[\"region\"] = multi_region_exposure_keys_df[\"backend_identifier\"]\nmulti_region_exposure_keys_df.rename(\n columns={\n \"generation_datetime\": \"sample_datetime\",\n \"generation_date_string\": \"sample_date_string\",\n },\n inplace=True)\nmulti_region_exposure_keys_df.head()", "WARNING:root:NoKeysFoundException(\"No exposure keys found on endpoint 'https://stayaway.incm.pt/v1/gaen/exposed/1602547200000' (parameters: {'sample_date': '2020-10-13', 'server_endpoint_url': 'https://stayaway.incm.pt', 'backend_identifier': 'PT'}).\")\n" ], [ "early_teks_df = multi_region_exposure_keys_df[\n multi_region_exposure_keys_df.rolling_period < 144].copy()\nearly_teks_df[\"rolling_period_in_hours\"] = early_teks_df.rolling_period / 6\nearly_teks_df[early_teks_df.sample_date_string != extraction_date] \\\n .rolling_period_in_hours.hist(bins=list(range(24)))", "_____no_output_____" ], [ "early_teks_df[early_teks_df.sample_date_string == extraction_date] \\\n .rolling_period_in_hours.hist(bins=list(range(24)))", "_____no_output_____" ], [ "multi_region_exposure_keys_df = multi_region_exposure_keys_df[[\n \"sample_date_string\", \"region\", \"key_data\"]]\nmulti_region_exposure_keys_df.head()", "_____no_output_____" ], [ "active_regions = \\\n multi_region_exposure_keys_df.groupby(\"region\").key_data.nunique().sort_values().index.unique().tolist()\nactive_regions", "_____no_output_____" ], [ "multi_region_summary_df = multi_region_exposure_keys_df.groupby(\n [\"sample_date_string\", \"region\"]).key_data.nunique().reset_index() \\\n .pivot(index=\"sample_date_string\", columns=\"region\") \\\n .sort_index(ascending=False)\nmulti_region_summary_df.rename(\n columns={\"key_data\": \"shared_teks_by_generation_date\"},\n inplace=True)\nmulti_region_summary_df.rename_axis(\"sample_date\", inplace=True)\nmulti_region_summary_df = multi_region_summary_df.fillna(0).astype(int)\nmulti_region_summary_df = multi_region_summary_df.head(backend_extraction_days)\nmulti_region_summary_df.head()", "_____no_output_____" ], [ "multi_region_without_active_region_exposure_keys_df = \\\n multi_region_exposure_keys_df[multi_region_exposure_keys_df.region != active_region_country_code]\nmulti_region_without_active_region = \\\n multi_region_without_active_region_exposure_keys_df.groupby(\"region\").key_data.nunique().sort_values().index.unique().tolist()\nmulti_region_without_active_region", "_____no_output_____" ], [ "exposure_keys_summary_df = multi_region_exposure_keys_df[\n 
multi_region_exposure_keys_df.region == active_region_country_code]\nexposure_keys_summary_df.drop(columns=[\"region\"], inplace=True)\nexposure_keys_summary_df = \\\n exposure_keys_summary_df.groupby([\"sample_date_string\"]).key_data.nunique().to_frame()\nexposure_keys_summary_df = \\\n exposure_keys_summary_df.reset_index().set_index(\"sample_date_string\")\nexposure_keys_summary_df.sort_index(ascending=False, inplace=True)\nexposure_keys_summary_df.rename(columns={\"key_data\": \"shared_teks_by_generation_date\"}, inplace=True)\nexposure_keys_summary_df.head()", "/opt/hostedtoolcache/Python/3.8.6/x64/lib/python3.8/site-packages/pandas/core/frame.py:4110: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n return super().drop(\n" ] ], [ [ "### Dump API TEKs", "_____no_output_____" ] ], [ [ "tek_list_df = multi_region_exposure_keys_df[\n [\"sample_date_string\", \"region\", \"key_data\"]].copy()\ntek_list_df[\"key_data\"] = tek_list_df[\"key_data\"].apply(str)\ntek_list_df.rename(columns={\n \"sample_date_string\": \"sample_date\",\n \"key_data\": \"tek_list\"}, inplace=True)\ntek_list_df = tek_list_df.groupby(\n [\"sample_date\", \"region\"]).tek_list.unique().reset_index()\ntek_list_df[\"extraction_date\"] = extraction_date\ntek_list_df[\"extraction_date_with_hour\"] = extraction_date_with_hour\n\ntek_list_path_prefix = \"Data/TEKs/\"\ntek_list_current_path = tek_list_path_prefix + f\"/Current/RadarCOVID-TEKs.json\"\ntek_list_daily_path = tek_list_path_prefix + f\"Daily/RadarCOVID-TEKs-{extraction_date}.json\"\ntek_list_hourly_path = tek_list_path_prefix + f\"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json\"\n\nfor path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:\n os.makedirs(os.path.dirname(path), exist_ok=True)\n\ntek_list_df.drop(columns=[\"extraction_date\", \"extraction_date_with_hour\"]).to_json(\n tek_list_current_path,\n lines=True, orient=\"records\")\ntek_list_df.drop(columns=[\"extraction_date_with_hour\"]).to_json(\n tek_list_daily_path,\n lines=True, orient=\"records\")\ntek_list_df.to_json(\n tek_list_hourly_path,\n lines=True, orient=\"records\")\ntek_list_df.head()", "_____no_output_____" ] ], [ [ "### Load TEK Dumps", "_____no_output_____" ] ], [ [ "import glob\n\ndef load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:\n extracted_teks_df = pd.DataFrame(columns=[\"region\"])\n paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + \"/RadarCOVID-TEKs-*.json\"))))\n if limit:\n paths = paths[:limit]\n for path in paths:\n logging.info(f\"Loading TEKs from '{path}'...\")\n iteration_extracted_teks_df = pd.read_json(path, lines=True)\n extracted_teks_df = extracted_teks_df.append(\n iteration_extracted_teks_df, sort=False)\n extracted_teks_df[\"region\"] = \\\n extracted_teks_df.region.fillna(spain_region_country_code).copy()\n if region:\n extracted_teks_df = \\\n extracted_teks_df[extracted_teks_df.region == region]\n return extracted_teks_df", "_____no_output_____" ], [ "daily_extracted_teks_df = load_extracted_teks(\n mode=\"Daily\",\n region=active_region_country_code,\n limit=tek_dumps_load_limit)\ndaily_extracted_teks_df.head()", "_____no_output_____" ], [ "exposure_keys_summary_df_ = daily_extracted_teks_df \\\n .sort_values(\"extraction_date\", ascending=False) \\\n .groupby(\"sample_date\").tek_list.first() \\\n 
.to_frame()\nexposure_keys_summary_df_.index.name = \"sample_date_string\"\nexposure_keys_summary_df_[\"tek_list\"] = \\\n exposure_keys_summary_df_.tek_list.apply(len)\nexposure_keys_summary_df_ = exposure_keys_summary_df_ \\\n .rename(columns={\"tek_list\": \"shared_teks_by_generation_date\"}) \\\n .sort_index(ascending=False)\nexposure_keys_summary_df = exposure_keys_summary_df_\nexposure_keys_summary_df.head()", "_____no_output_____" ] ], [ [ "### Daily New TEKs", "_____no_output_____" ] ], [ [ "tek_list_df = daily_extracted_teks_df.groupby(\"extraction_date\").tek_list.apply(\n lambda x: set(sum(x, []))).reset_index()\ntek_list_df = tek_list_df.set_index(\"extraction_date\").sort_index(ascending=True)\ntek_list_df.head()", "_____no_output_____" ], [ "def compute_teks_by_generation_and_upload_date(date):\n day_new_teks_set_df = tek_list_df.copy().diff()\n try:\n day_new_teks_set = day_new_teks_set_df[\n day_new_teks_set_df.index == date].tek_list.item()\n except ValueError:\n day_new_teks_set = None\n if pd.isna(day_new_teks_set):\n day_new_teks_set = set()\n day_new_teks_df = daily_extracted_teks_df[\n daily_extracted_teks_df.extraction_date == date].copy()\n day_new_teks_df[\"shared_teks\"] = \\\n day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))\n day_new_teks_df[\"shared_teks\"] = \\\n day_new_teks_df.shared_teks.apply(len)\n day_new_teks_df[\"upload_date\"] = date\n day_new_teks_df.rename(columns={\"sample_date\": \"generation_date\"}, inplace=True)\n day_new_teks_df = day_new_teks_df[\n [\"upload_date\", \"generation_date\", \"shared_teks\"]]\n day_new_teks_df[\"generation_to_upload_days\"] = \\\n (pd.to_datetime(day_new_teks_df.upload_date) -\n pd.to_datetime(day_new_teks_df.generation_date)).dt.days\n day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]\n return day_new_teks_df\n\nshared_teks_generation_to_upload_df = pd.DataFrame()\nfor upload_date in daily_extracted_teks_df.extraction_date.unique():\n shared_teks_generation_to_upload_df = \\\n shared_teks_generation_to_upload_df.append(\n compute_teks_by_generation_and_upload_date(date=upload_date))\nshared_teks_generation_to_upload_df \\\n .sort_values([\"upload_date\", \"generation_date\"], ascending=False, inplace=True)\nshared_teks_generation_to_upload_df.tail()", "<ipython-input-24-827222b35590>:4: FutureWarning: `item` has been deprecated and will be removed in a future version\n day_new_teks_set = day_new_teks_set_df[\n" ], [ "today_new_teks_df = \\\n shared_teks_generation_to_upload_df[\n shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()\ntoday_new_teks_df.tail()", "_____no_output_____" ], [ "if not today_new_teks_df.empty:\n today_new_teks_df.set_index(\"generation_to_upload_days\") \\\n .sort_index().shared_teks.plot.bar()", "_____no_output_____" ], [ "generation_to_upload_period_pivot_df = \\\n shared_teks_generation_to_upload_df[\n [\"upload_date\", \"generation_to_upload_days\", \"shared_teks\"]] \\\n .pivot(index=\"upload_date\", columns=\"generation_to_upload_days\") \\\n .sort_index(ascending=False).fillna(0).astype(int) \\\n .droplevel(level=0, axis=1)\ngeneration_to_upload_period_pivot_df.head()", "_____no_output_____" ], [ "new_tek_df = tek_list_df.diff().tek_list.apply(\n lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()\nnew_tek_df.rename(columns={\n \"tek_list\": \"shared_teks_by_upload_date\",\n \"extraction_date\": \"sample_date_string\",}, inplace=True)\nnew_tek_df.tail()", "_____no_output_____" ], [ 
"estimated_shared_diagnoses_df = daily_extracted_teks_df.copy()\nestimated_shared_diagnoses_df[\"new_sample_extraction_date\"] = \\\n pd.to_datetime(estimated_shared_diagnoses_df.sample_date) + datetime.timedelta(1)\nestimated_shared_diagnoses_df[\"extraction_date\"] = pd.to_datetime(estimated_shared_diagnoses_df.extraction_date)\nestimated_shared_diagnoses_df[\"sample_date\"] = pd.to_datetime(estimated_shared_diagnoses_df.sample_date)\nestimated_shared_diagnoses_df.head()", "_____no_output_____" ], [ "# Sometimes TEKs from the same day are uploaded, we do not count them as new TEK devices:\nsame_day_tek_list_df = estimated_shared_diagnoses_df[\n estimated_shared_diagnoses_df.sample_date == estimated_shared_diagnoses_df.extraction_date].copy()\nsame_day_tek_list_df = same_day_tek_list_df[[\"extraction_date\", \"tek_list\"]].rename(\n columns={\"tek_list\": \"same_day_tek_list\"})\nsame_day_tek_list_df.head()", "_____no_output_____" ], [ "shared_teks_uploaded_on_generation_date_df = same_day_tek_list_df.rename(\n columns={\n \"extraction_date\": \"sample_date_string\",\n \"same_day_tek_list\": \"shared_teks_uploaded_on_generation_date\",\n })\nshared_teks_uploaded_on_generation_date_df.shared_teks_uploaded_on_generation_date = \\\n shared_teks_uploaded_on_generation_date_df.shared_teks_uploaded_on_generation_date.apply(len)\nshared_teks_uploaded_on_generation_date_df.head()\nshared_teks_uploaded_on_generation_date_df[\"sample_date_string\"] = \\\n shared_teks_uploaded_on_generation_date_df.sample_date_string.dt.strftime(\"%Y-%m-%d\")\nshared_teks_uploaded_on_generation_date_df.head()", "_____no_output_____" ], [ "estimated_shared_diagnoses_df = estimated_shared_diagnoses_df[\n estimated_shared_diagnoses_df.new_sample_extraction_date == estimated_shared_diagnoses_df.extraction_date]\nestimated_shared_diagnoses_df.head()", "_____no_output_____" ], [ "same_day_tek_list_df[\"extraction_date\"] = \\\n same_day_tek_list_df.extraction_date + datetime.timedelta(1)\nestimated_shared_diagnoses_df = \\\n estimated_shared_diagnoses_df.merge(same_day_tek_list_df, how=\"left\", on=[\"extraction_date\"])\nestimated_shared_diagnoses_df[\"same_day_tek_list\"] = \\\n estimated_shared_diagnoses_df.same_day_tek_list.apply(lambda x: [] if x is np.nan else x)\nestimated_shared_diagnoses_df.head()", "_____no_output_____" ], [ "estimated_shared_diagnoses_df.set_index(\"extraction_date\", inplace=True)\nestimated_shared_diagnoses_df[\"shared_diagnoses\"] = estimated_shared_diagnoses_df.apply(\n lambda x: len(set(x.tek_list).difference(x.same_day_tek_list)), axis=1).copy()\nestimated_shared_diagnoses_df.reset_index(inplace=True)\nestimated_shared_diagnoses_df.rename(columns={\n \"extraction_date\": \"sample_date_string\"}, inplace=True)\nestimated_shared_diagnoses_df = estimated_shared_diagnoses_df[[\"sample_date_string\", \"shared_diagnoses\"]]\nestimated_shared_diagnoses_df[\"sample_date_string\"] = estimated_shared_diagnoses_df.sample_date_string.dt.strftime(\"%Y-%m-%d\")\nestimated_shared_diagnoses_df.head()", "_____no_output_____" ] ], [ [ "### Hourly New TEKs", "_____no_output_____" ] ], [ [ "hourly_extracted_teks_df = load_extracted_teks(\n mode=\"Hourly\", region=active_region_country_code, limit=25)\nhourly_extracted_teks_df.head()", "_____no_output_____" ], [ "hourly_new_tek_count_df = hourly_extracted_teks_df \\\n .groupby(\"extraction_date_with_hour\").tek_list. 
\\\n apply(lambda x: set(sum(x, []))).reset_index().copy()\nhourly_new_tek_count_df = hourly_new_tek_count_df.set_index(\"extraction_date_with_hour\") \\\n .sort_index(ascending=True)\n\nhourly_new_tek_count_df[\"new_tek_list\"] = hourly_new_tek_count_df.tek_list.diff()\nhourly_new_tek_count_df[\"new_tek_count\"] = hourly_new_tek_count_df.new_tek_list.apply(\n lambda x: len(x) if not pd.isna(x) else 0)\nhourly_new_tek_count_df.rename(columns={\n \"new_tek_count\": \"shared_teks_by_upload_date\"}, inplace=True)\nhourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[\n \"extraction_date_with_hour\", \"shared_teks_by_upload_date\"]]\nhourly_new_tek_count_df.head()", "_____no_output_____" ], [ "hourly_estimated_shared_diagnoses_df = hourly_extracted_teks_df.copy()\nhourly_estimated_shared_diagnoses_df[\"new_sample_extraction_date\"] = \\\n pd.to_datetime(hourly_estimated_shared_diagnoses_df.sample_date) + datetime.timedelta(1)\nhourly_estimated_shared_diagnoses_df[\"extraction_date\"] = \\\n pd.to_datetime(hourly_estimated_shared_diagnoses_df.extraction_date)\n\nhourly_estimated_shared_diagnoses_df = hourly_estimated_shared_diagnoses_df[\n hourly_estimated_shared_diagnoses_df.new_sample_extraction_date ==\n hourly_estimated_shared_diagnoses_df.extraction_date]\nhourly_estimated_shared_diagnoses_df = \\\n hourly_estimated_shared_diagnoses_df.merge(same_day_tek_list_df, how=\"left\", on=[\"extraction_date\"])\nhourly_estimated_shared_diagnoses_df[\"same_day_tek_list\"] = \\\n hourly_estimated_shared_diagnoses_df.same_day_tek_list.apply(lambda x: [] if x is np.nan else x)\nhourly_estimated_shared_diagnoses_df[\"shared_diagnoses\"] = hourly_estimated_shared_diagnoses_df.apply(\n lambda x: len(set(x.tek_list).difference(x.same_day_tek_list)), axis=1)\nhourly_estimated_shared_diagnoses_df = \\\n hourly_estimated_shared_diagnoses_df.sort_values(\"extraction_date_with_hour\").copy()\nhourly_estimated_shared_diagnoses_df[\"shared_diagnoses\"] = hourly_estimated_shared_diagnoses_df \\\n .groupby(\"extraction_date\").shared_diagnoses.diff() \\\n .fillna(0).astype(int)\n\nhourly_estimated_shared_diagnoses_df.set_index(\"extraction_date_with_hour\", inplace=True)\nhourly_estimated_shared_diagnoses_df.reset_index(inplace=True)\nhourly_estimated_shared_diagnoses_df = hourly_estimated_shared_diagnoses_df[[\n \"extraction_date_with_hour\", \"shared_diagnoses\"]]\nhourly_estimated_shared_diagnoses_df.head()", "_____no_output_____" ], [ "hourly_summary_df = hourly_new_tek_count_df.merge(\n hourly_estimated_shared_diagnoses_df, on=[\"extraction_date_with_hour\"], how=\"outer\")\nhourly_summary_df.set_index(\"extraction_date_with_hour\", inplace=True)\nhourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()\nhourly_summary_df[\"datetime_utc\"] = pd.to_datetime(\n hourly_summary_df.extraction_date_with_hour, format=\"%Y-%m-%d@%H\")\nhourly_summary_df.set_index(\"datetime_utc\", inplace=True)\nhourly_summary_df = hourly_summary_df.tail(-1)\nhourly_summary_df.head()", "_____no_output_____" ] ], [ [ "### Data Merge", "_____no_output_____" ] ], [ [ "result_summary_df = exposure_keys_summary_df.merge(\n new_tek_df, on=[\"sample_date_string\"], how=\"outer\")\nresult_summary_df.head()", "_____no_output_____" ], [ "result_summary_df = result_summary_df.merge(\n shared_teks_uploaded_on_generation_date_df, on=[\"sample_date_string\"], how=\"outer\")\nresult_summary_df.head()", "_____no_output_____" ], [ "result_summary_df = result_summary_df.merge(\n estimated_shared_diagnoses_df, 
on=[\"sample_date_string\"], how=\"outer\")\nresult_summary_df.head()", "_____no_output_____" ], [ "result_summary_df = confirmed_df.tail(daily_summary_days).merge(\n result_summary_df, on=[\"sample_date_string\"], how=\"left\")\nresult_summary_df.head()", "_____no_output_____" ], [ "result_summary_df[\"sample_date\"] = pd.to_datetime(result_summary_df.sample_date_string)\nresult_summary_df.set_index(\"sample_date\", inplace=True)\nresult_summary_df.drop(columns=[\"sample_date_string\"], inplace=True)\nresult_summary_df.sort_index(ascending=False, inplace=True)\nresult_summary_df.head()", "_____no_output_____" ], [ "with pd.option_context(\"mode.use_inf_as_na\", True):\n result_summary_df = result_summary_df.fillna(0).astype(int)\n result_summary_df[\"teks_per_shared_diagnosis\"] = \\\n (result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)\n result_summary_df[\"shared_diagnoses_per_covid_case\"] = \\\n (result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)\n\nresult_summary_df.head(daily_plot_days)", "_____no_output_____" ], [ "weekly_result_summary_df = result_summary_df \\\n .sort_index(ascending=True).fillna(0).rolling(7).agg({\n \"covid_cases\": \"sum\",\n \"shared_teks_by_generation_date\": \"sum\",\n \"shared_teks_by_upload_date\": \"sum\",\n \"shared_diagnoses\": \"sum\"\n}).sort_index(ascending=False)\n\nwith pd.option_context(\"mode.use_inf_as_na\", True):\n weekly_result_summary_df = weekly_result_summary_df.fillna(0).astype(int)\n weekly_result_summary_df[\"teks_per_shared_diagnosis\"] = \\\n (weekly_result_summary_df.shared_teks_by_upload_date / weekly_result_summary_df.shared_diagnoses).fillna(0)\n weekly_result_summary_df[\"shared_diagnoses_per_covid_case\"] = \\\n (weekly_result_summary_df.shared_diagnoses / weekly_result_summary_df.covid_cases).fillna(0)\n\nweekly_result_summary_df.head()", "_____no_output_____" ], [ "last_7_days_summary = weekly_result_summary_df.to_dict(orient=\"records\")[0]\nlast_7_days_summary", "_____no_output_____" ] ], [ [ "## Report Results", "_____no_output_____" ] ], [ [ "display_column_name_mapping = {\n \"sample_date\": \"Sample\\u00A0Date\\u00A0(UTC)\",\n \"datetime_utc\": \"Timestamp (UTC)\",\n \"upload_date\": \"Upload Date (UTC)\",\n \"generation_to_upload_days\": \"Generation to Upload Period in Days\",\n \"region\": \"Backend Region\",\n \"covid_cases\": \"COVID-19 Cases (7-day Rolling Average)\",\n \"shared_teks_by_generation_date\": \"Shared TEKs by Generation Date\",\n \"shared_teks_by_upload_date\": \"Shared TEKs by Upload Date\",\n \"shared_diagnoses\": \"Shared Diagnoses (Estimation)\",\n \"teks_per_shared_diagnosis\": \"TEKs Uploaded per Shared Diagnosis\",\n \"shared_diagnoses_per_covid_case\": \"Usage Ratio (Fraction of Cases Which Shared Diagnosis)\",\n \"shared_teks_uploaded_on_generation_date\": \"Shared TEKs Uploaded on Generation Date\",\n}", "_____no_output_____" ], [ "summary_columns = [\n \"covid_cases\",\n \"shared_teks_by_generation_date\",\n \"shared_teks_by_upload_date\",\n \"shared_teks_uploaded_on_generation_date\",\n \"shared_diagnoses\",\n \"teks_per_shared_diagnosis\",\n \"shared_diagnoses_per_covid_case\",\n]", "_____no_output_____" ] ], [ [ "### Daily Summary Table", "_____no_output_____" ] ], [ [ "result_summary_df_ = result_summary_df.copy()\nresult_summary_df = result_summary_df[summary_columns]\nresult_summary_with_display_names_df = result_summary_df \\\n .rename_axis(index=display_column_name_mapping) \\\n 
.rename(columns=display_column_name_mapping)\nresult_summary_with_display_names_df", "_____no_output_____" ] ], [ [ "### Daily Summary Plots", "_____no_output_____" ] ], [ [ "result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \\\n .rename_axis(index=display_column_name_mapping) \\\n .rename(columns=display_column_name_mapping)\nsummary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(\n title=f\"Daily Summary\",\n rot=45, subplots=True, figsize=(15, 22), legend=False)\nax_ = summary_ax_list[-1]\nax_.get_figure().tight_layout()\nax_.get_figure().subplots_adjust(top=0.95)\nax_.yaxis.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))\n_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime(\"%Y-%m-%d\").tolist()))", "_____no_output_____" ] ], [ [ "### Daily Generation to Upload Period Table", "_____no_output_____" ] ], [ [ "display_generation_to_upload_period_pivot_df = \\\n generation_to_upload_period_pivot_df \\\n .head(backend_extraction_days)\ndisplay_generation_to_upload_period_pivot_df \\\n .head(backend_extraction_days) \\\n .rename_axis(columns=display_column_name_mapping) \\\n .rename_axis(index=display_column_name_mapping)", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n\nfig, generation_to_upload_period_pivot_table_ax = plt.subplots(\n figsize=(10, 1 + 0.5 * len(display_generation_to_upload_period_pivot_df)))\ngeneration_to_upload_period_pivot_table_ax.set_title(\n \"Shared TEKs Generation to Upload Period Table\")\nsns.heatmap(\n data=display_generation_to_upload_period_pivot_df\n .rename_axis(columns=display_column_name_mapping)\n .rename_axis(index=display_column_name_mapping),\n fmt=\".0f\",\n annot=True,\n ax=generation_to_upload_period_pivot_table_ax)\ngeneration_to_upload_period_pivot_table_ax.get_figure().tight_layout()", "_____no_output_____" ] ], [ [ "### Hourly Summary Plots ", "_____no_output_____" ] ], [ [ "hourly_summary_ax_list = hourly_summary_df \\\n .rename_axis(index=display_column_name_mapping) \\\n .rename(columns=display_column_name_mapping) \\\n .plot.bar(\n title=f\"Last 24h Summary\",\n rot=45, subplots=True, legend=False)\nax_ = hourly_summary_ax_list[-1]\nax_.get_figure().tight_layout()\nax_.get_figure().subplots_adjust(top=0.9)\n_ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime(\"%Y-%m-%d@%H\").tolist()))", "_____no_output_____" ] ], [ [ "### Publish Results", "_____no_output_____" ] ], [ [ "def get_temporary_image_path() -> str:\n return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + \".png\")\n\ndef save_temporary_plot_image(ax):\n if isinstance(ax, np.ndarray):\n ax = ax[0]\n media_path = get_temporary_image_path()\n ax.get_figure().savefig(media_path)\n return media_path\n\ndef save_temporary_dataframe_image(df):\n import dataframe_image as dfi\n media_path = get_temporary_image_path()\n dfi.export(df, media_path)\n return media_path", "_____no_output_____" ], [ "github_repository = os.environ.get(\"GITHUB_REPOSITORY\")\nif github_repository is None:\n github_repository = \"pvieito/Radar-STATS\"\n\ngithub_project_base_url = \"https://github.com/\" + github_repository\n\ndisplay_formatters = {\n display_column_name_mapping[\"teks_per_shared_diagnosis\"]: lambda x: f\"{x:.2f}\",\n display_column_name_mapping[\"shared_diagnoses_per_covid_case\"]: lambda x: f\"{x:.2%}\",\n}\ndaily_summary_table_html = result_summary_with_display_names_df \\\n .head(daily_plot_days) \\\n .rename_axis(index=display_column_name_mapping) \\\n 
.rename(columns=display_column_name_mapping) \\\n .to_html(formatters=display_formatters)\nmulti_region_summary_table_html = multi_region_summary_df \\\n .head(daily_plot_days) \\\n .rename_axis(columns=display_column_name_mapping) \\\n .rename(columns=display_column_name_mapping) \\\n .rename_axis(index=display_column_name_mapping) \\\n .to_html(formatters=display_formatters)\n\nextraction_date_result_summary_df = \\\n result_summary_df[result_summary_df.index == extraction_date]\nextraction_date_result_hourly_summary_df = \\\n hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]\n\ncovid_cases = \\\n extraction_date_result_summary_df.covid_cases.sum()\nshared_teks_by_generation_date = \\\n extraction_date_result_summary_df.shared_teks_by_generation_date.sum()\nshared_teks_by_upload_date = \\\n extraction_date_result_summary_df.shared_teks_by_upload_date.sum()\nshared_diagnoses = \\\n extraction_date_result_summary_df.shared_diagnoses.sum()\nteks_per_shared_diagnosis = \\\n extraction_date_result_summary_df.teks_per_shared_diagnosis.sum()\nshared_diagnoses_per_covid_case = \\\n extraction_date_result_summary_df.shared_diagnoses_per_covid_case.sum()\n\nshared_teks_by_upload_date_last_hour = \\\n extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int)\nshared_diagnoses_last_hour = \\\n extraction_date_result_hourly_summary_df.shared_diagnoses.sum().astype(int)", "_____no_output_____" ], [ "summary_plots_image_path = save_temporary_plot_image(\n ax=summary_ax_list)\nsummary_table_image_path = save_temporary_dataframe_image(\n df=result_summary_with_display_names_df)\nhourly_summary_plots_image_path = save_temporary_plot_image(\n ax=hourly_summary_ax_list)\nmulti_region_summary_table_image_path = save_temporary_dataframe_image(\n df=multi_region_summary_df)\ngeneration_to_upload_period_pivot_table_image_path = save_temporary_plot_image(\n ax=generation_to_upload_period_pivot_table_ax)", "_____no_output_____" ] ], [ [ "### Save Results", "_____no_output_____" ] ], [ [ "report_resources_path_prefix = \"Data/Resources/Current/RadarCOVID-Report-\"\nresult_summary_df.to_csv(\n report_resources_path_prefix + \"Summary-Table.csv\")\nresult_summary_df.to_html(\n report_resources_path_prefix + \"Summary-Table.html\")\nhourly_summary_df.to_csv(\n report_resources_path_prefix + \"Hourly-Summary-Table.csv\")\nmulti_region_summary_df.to_csv(\n report_resources_path_prefix + \"Multi-Region-Summary-Table.csv\")\ngeneration_to_upload_period_pivot_df.to_csv(\n report_resources_path_prefix + \"Generation-Upload-Period-Table.csv\")\n_ = shutil.copyfile(\n summary_plots_image_path,\n report_resources_path_prefix + \"Summary-Plots.png\")\n_ = shutil.copyfile(\n summary_table_image_path,\n report_resources_path_prefix + \"Summary-Table.png\")\n_ = shutil.copyfile(\n hourly_summary_plots_image_path,\n report_resources_path_prefix + \"Hourly-Summary-Plots.png\")\n_ = shutil.copyfile(\n multi_region_summary_table_image_path,\n report_resources_path_prefix + \"Multi-Region-Summary-Table.png\")\n_ = shutil.copyfile(\n generation_to_upload_period_pivot_table_image_path,\n report_resources_path_prefix + \"Generation-Upload-Period-Table.png\")", "_____no_output_____" ] ], [ [ "### Publish Results as JSON", "_____no_output_____" ] ], [ [ "summary_results_api_df = result_summary_df.reset_index()\nsummary_results_api_df[\"sample_date_string\"] = \\\n summary_results_api_df[\"sample_date\"].dt.strftime(\"%Y-%m-%d\")\n\nsummary_results = dict(\n 
extraction_datetime=extraction_datetime,\n extraction_date=extraction_date,\n extraction_date_with_hour=extraction_date_with_hour,\n last_hour=dict(\n shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour,\n shared_diagnoses=shared_diagnoses_last_hour,\n ),\n today=dict(\n covid_cases=covid_cases,\n shared_teks_by_generation_date=shared_teks_by_generation_date,\n shared_teks_by_upload_date=shared_teks_by_upload_date,\n shared_diagnoses=shared_diagnoses,\n teks_per_shared_diagnosis=teks_per_shared_diagnosis,\n shared_diagnoses_per_covid_case=shared_diagnoses_per_covid_case,\n ),\n last_7_days=last_7_days_summary,\n daily_results=summary_results_api_df.to_dict(orient=\"records\"))\nsummary_results = \\\n json.loads(pd.Series([summary_results]).to_json(orient=\"records\"))[0]\n\nwith open(report_resources_path_prefix + \"Summary-Results.json\", \"w\") as f:\n json.dump(summary_results, f, indent=4)", "_____no_output_____" ] ], [ [ "### Publish on README", "_____no_output_____" ] ], [ [ "with open(\"Data/Templates/README.md\", \"r\") as f:\n readme_contents = f.read()\n\nreadme_contents = readme_contents.format(\n extraction_date_with_hour=extraction_date_with_hour,\n github_project_base_url=github_project_base_url,\n daily_summary_table_html=daily_summary_table_html,\n multi_region_summary_table_html=multi_region_summary_table_html)\n\nwith open(\"README.md\", \"w\") as f:\n f.write(readme_contents)", "_____no_output_____" ] ], [ [ "### Publish on Twitter", "_____no_output_____" ] ], [ [ "enable_share_to_twitter = os.environ.get(\"RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER\")\ngithub_event_name = os.environ.get(\"GITHUB_EVENT_NAME\")\n\nif enable_share_to_twitter and github_event_name == \"schedule\":\n import tweepy\n\n twitter_api_auth_keys = os.environ[\"RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS\"]\n twitter_api_auth_keys = twitter_api_auth_keys.split(\":\")\n auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])\n auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])\n\n api = tweepy.API(auth)\n\n summary_plots_media = api.media_upload(summary_plots_image_path)\n summary_table_media = api.media_upload(summary_table_image_path)\n generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path)\n media_ids = [\n summary_plots_media.media_id,\n summary_table_media.media_id,\n generation_to_upload_period_pivot_table_image_media.media_id,\n ]\n\n status = textwrap.dedent(f\"\"\"\n #RadarCOVID Report – {extraction_date_with_hour}\n\n Today:\n - Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour)\n - Shared Diagnoses: ≤{shared_diagnoses:.0f} ({shared_diagnoses_last_hour:+d} last hour)\n - TEKs per Diagnosis: ≥{teks_per_shared_diagnosis:.1f}\n - Usage Ratio: ≤{shared_diagnoses_per_covid_case:.2%}\n\n Week:\n - Shared Diagnoses: ≤{last_7_days_summary[\"shared_diagnoses\"]:.0f}\n - Usage Ratio: ≤{last_7_days_summary[\"shared_diagnoses_per_covid_case\"]:.2%}\n\n More Info: {github_project_base_url}#documentation\n \"\"\")\n status = status.encode(encoding=\"utf-8\")\n api.update_status(status=status, media_ids=media_ids)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
hexsha: e70dc650f36124d5279a33f4aa1f59ea2eeaff60
size: 8,753
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: docs/quantum_chess/quantum_chess_client.ipynb
max_stars_repo_name: PawelPamula/ReCirq
max_stars_repo_head_hexsha: 79a351310cd98f67524a9df0c4ef9f300bf9eea4
max_stars_repo_licenses: [ "Apache-2.0" ]
max_stars_count: 1
max_stars_repo_stars_event_min_datetime: 2021-04-07T09:36:03.000Z
max_stars_repo_stars_event_max_datetime: 2021-04-07T09:36:03.000Z
max_issues_repo_path: docs/quantum_chess/quantum_chess_client.ipynb
max_issues_repo_name: PawelPamula/ReCirq
max_issues_repo_head_hexsha: 79a351310cd98f67524a9df0c4ef9f300bf9eea4
max_issues_repo_licenses: [ "Apache-2.0" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: docs/quantum_chess/quantum_chess_client.ipynb
max_forks_repo_name: PawelPamula/ReCirq
max_forks_repo_head_hexsha: 79a351310cd98f67524a9df0c4ef9f300bf9eea4
max_forks_repo_licenses: [ "Apache-2.0" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 29.772109
max_line_length: 381
alphanum_fraction: 0.60722
[ [ [ "##### Copyright 2020 Google", "_____no_output_____" ] ], [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# Quantum Chess REST Client", "_____no_output_____" ], [ "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://quantumai.google/cirq/experiments/quantum_chess/quantum_chess_client\"><img src=\"https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png\" />View on QuantumAI</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/quantumlib/ReCirq/blob/master/docs/quantum_chess/quantum_chess_client.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/quantumlib/ReCirq/blob/master/docs/quantum_chess/quantum_chess_client.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/github_logo_1x.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/ReCirq/docs/quantum_chess/quantum_chess_client.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/download_icon_1x.png\" />Download notebook</a>\n </td>\n</table>", "_____no_output_____" ], [ "This is a basic client meant to test the server implemented at the end of the [Quantum Chess REST API](./quantum_chess_rest_api.ipynb) documentation. You must run that previous Colab for this one to work.", "_____no_output_____" ], [ "## Setup", "_____no_output_____" ] ], [ [ "!pip install git+https://github.com/quantumlib/ReCirq/ -q\n!pip install requests -q", "_____no_output_____" ] ], [ [ "The server for the Quantum Chess Rest API endpoints should provide you with an ngrok url when you run it. **Paste the url provided by your server in the form below**. If your server is running, the following code should produce the message: \"Running Flask on Google Colab!\"", "_____no_output_____" ] ], [ [ "url = 'http://bd626d83c9ec.ngrok.io/' #@param {type:\"string\"}\n!curl -s $url", "_____no_output_____" ] ], [ [ "You should be able to see the server output indicting a connection was made.", "_____no_output_____" ], [ "## Initialization", "_____no_output_____" ], [ "Make a simple request to initialize a board with the starting occupancy state of all pieces. Using the bitboard format, the initial positions of pieces are given by the hex 0xFFFF00000000FFFF. This initializes all squares in ranks 1, 2, 7, and 8 to be occupied.", "_____no_output_____" ] ], [ [ "import requests\n\ninit_board_json = { 'init_basis_state' : 0xFFFF00000000FFFF }\nresponse = requests.post(url + '/quantumboard/init', json=init_board_json)\n\nprint(response.content)", "_____no_output_____" ] ], [ [ "## Superposition", "_____no_output_____" ], [ "With the board initialized, you can execute a few moves to see what happens. You can create superposition by executing a split move from b1 to a3 and c3. 
Watch the server output to see the execution of this move.", "_____no_output_____" ] ], [ [ "from recirq.quantum_chess.enums import MoveType, MoveVariant\nfrom recirq.quantum_chess.bit_utils import square_to_bit\n\nsplit_b1_a3_c3 = {'square1' : square_to_bit('b1'), 'square2' : square_to_bit('a3'), 'square3' : square_to_bit('c3'), \n 'type' : int(MoveType.SPLIT_JUMP.value), 'variant': int(MoveVariant.BASIC.value)}\nresponse = requests.post(url + '/quantumboard/do_move', json=split_b1_a3_c3)\nprint(response.content)\n", "_____no_output_____" ] ], [ [ "## Entanglement", "_____no_output_____" ], [ "You can see, in the probabilities returned, a roughly 50/50 split for two of the squares. A pawn two-step move, from c2 to c4, will entangle the pawn on c2 with the piece in superposition on a3 and c3.", "_____no_output_____" ] ], [ [ "move_c2_c4 = {'square1' : square_to_bit('c2'), 'square2' : square_to_bit('c4'), 'square3' : 0,'type' : int(MoveType.PAWN_TWO_STEP.value), 'variant': int(MoveVariant.BASIC.value)}\nresponse = requests.post(url + '/quantumboard/do_move', json=move_c2_c4)\nprint(response.content)", "_____no_output_____" ] ], [ [ "## Measurement", "_____no_output_____" ], [ "The probability distribution returned doesn't show the entanglement, but it still exists in the underlying state. You can see this by doing a move that forces a measurement. An excluded move from d1 to c2 will force a measurement of the c2 square. In the server output you should see the collapse of the state, with c2, c3, c4, and a3 taking definite 0 or 100% probabilities.", "_____no_output_____" ] ], [ [ "move_d1_c2 = {'square1' : square_to_bit('d1'), 'square2' : square_to_bit('c2'), 'square3' : 0, 'type' : int(MoveType.JUMP.value), 'variant': int(MoveVariant.EXCLUDED.value)}\nresponse = requests.post(url + '/quantumboard/do_move', json=move_d1_c2)\nprint(response.content)", "_____no_output_____" ] ], [ [ "You can see the entanglement correlation by running the following cell a few times. There should be two different outcomes, the first with both c2 and c3 are 100%, and the second with c4 and a3 both 100%.", "_____no_output_____" ] ], [ [ "response = requests.post(url + '/quantumboard/undo_last_move')\nprint(response.content)\nresponse = requests.post(url + '/quantumboard/do_move', json=move_d1_c2)\nprint(response.content)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
hexsha: e70dc6a6b141279da6c8053182cef7aaeb6e3bc7
size: 979,189
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: Eiru/Mouse_Essential_Genes_Analysis-checkpoint.ipynb
max_stars_repo_name: NMikolajewicz/Lawson2020
max_stars_repo_head_hexsha: 82662ff8183307ec09439dc001834537ec00bda3
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: 2
max_stars_repo_stars_event_min_datetime: 2020-10-03T17:37:54.000Z
max_stars_repo_stars_event_max_datetime: 2021-02-01T02:46:04.000Z
max_issues_repo_path: Eiru/Mouse_Essential_Genes_Analysis-checkpoint.ipynb
max_issues_repo_name: NMikolajewicz/Lawson2020
max_issues_repo_head_hexsha: 82662ff8183307ec09439dc001834537ec00bda3
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: Eiru/Mouse_Essential_Genes_Analysis-checkpoint.ipynb
max_forks_repo_name: NMikolajewicz/Lawson2020
max_forks_repo_head_hexsha: 82662ff8183307ec09439dc001834537ec00bda3
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: 4
max_forks_repo_forks_event_min_datetime: 2020-09-28T01:52:31.000Z
max_forks_repo_forks_event_max_datetime: 2021-07-30T14:05:05.000Z
avg_line_length: 133.041984
max_line_length: 109,060
alphanum_fraction: 0.854899
[ [ [ "%pylab inline\nimport scipy.stats as stats\nimport pandas as pd\nimport gseapy\nrcParams['font.size']=12\nrcParams['pdf.fonttype']=42\nrcParams['font.family'] = 'sans-serif'\nrcParams['font.sans-serif'] = ['Arial']\nimport seaborn as sns\nsns.set_style(\"white\")", "Populating the interactive namespace from numpy and matplotlib\n" ], [ "# Written by Eiru Kim", "_____no_output_____" ] ], [ [ "# Define mCEG0, mNEG0", "_____no_output_____" ] ], [ [ "#### load Human Ref genes\nhuman_ceg_hgnc = pd.read_csv('CEGv2.txt',sep=\"\\t\")['HGNC_ID']\n'''\nManually modified\nCT45A4 -> CT45A3 (HGNC:33269 -> 33268, replaced)\nCXorf1 -> SLITRK2 (HGNC:2562 -> HGNC:13449, replaced)\nPRAMEF3 -> Deleted (Discontinued)\n'''\n\nhuman_neg_hgnc = pd.read_csv('NEGv1.txt',sep=\"\\t\")['HGNC_ID']\nprint (len(human_ceg_hgnc))\nprint (len(human_neg_hgnc))", "684\n926\n" ], [ "human_ceg_hgnc.head(3)", "_____no_output_____" ], [ "human_neg_hgnc.head(3)", "_____no_output_____" ], [ "### load xrefs\n'''\nmanually modified hgnc2entrez file for \nUSP5 (HGNC: 12628, EntrezID: 8078)\nKRTAP11-1 (HGNC:18922, EntrezID: 337880)\nIL25 (HGNC:13765, EntrezID: 64806)\nMC2R (HGNC:6930, EntrezID: 4158)\nOR11A1 (HGNC:8176, EntrezID: 26531)\n\n'''\n\nhgnc2entrez = pd.read_csv('hgnc2entrez_Jul2018',index_col=0,sep=\"\\t\")['Entrez Gene ID'].dropna().astype(int)\nentrez2symbol = pd.read_csv('Homo_sapiens.gene_info_Jul2018',index_col=1,sep=\"\\t\")['Symbol']\nsymbol2entrez_human = pd.Series(entrez2symbol.index.values, index=entrez2symbol )\nentrez2symbol_mouse = pd.read_csv('Mus_musculus.gene_info_Jul2018',index_col=1,sep=\"\\t\")['Symbol']\n\n# ensembl mouse data\nensembldata = pd.read_csv('ensembl2entrez_symbol_Jul2018.txt',index_col=0,sep=\"\\t\")\nensembldata = ensembldata.loc[ensembldata['CCDS ID'].notnull()].drop_duplicates() # select only CCDS genes\n\n\nccds_mouse = pd.read_csv('CCDS.0708.txt',index_col=4,sep=\"\\t\") #current CCDS version\nccds_mouse = ccds_mouse[ccds_mouse['ccds_status']=='Public']\n\n#filtering non ccds genes\nensembl2symbol = dict()\nensembl2entrez = dict()\nentrez2ensembl = dict()\nensembl2ccds = dict()\nfor i in range(len(ensembldata.index)):\n if ensembldata.iloc[i]['NCBI gene ID'] in ccds_mouse['gene_id'].values:\n if ensembldata.index[i] in ensembl2symbol:\n if ensembl2symbol[ensembldata.index[i]] != ensembldata.iloc[i]['Gene name']:\n print (ensembl2symbol[ensembldata.index[i]], ensembldata.iloc[i]['Gene name']) # unmatch check\n ensembl2symbol[ensembldata.index[i]] = ensembldata.iloc[i]['Gene name']\n ensembl2entrez[ensembldata.index[i]] = ensembldata.iloc[i]['NCBI gene ID']\n entrez2ensembl[ensembldata.iloc[i]['NCBI gene ID']] = ensembldata.index[i]\n ensembl2ccds[ensembldata.index[i]] = ensembldata.iloc[i]['CCDS ID']\nensembl2symbol = pd.Series(ensembl2symbol)\nensembl2entrez = pd.Series(ensembl2entrez)\nensembl2ccds = pd.Series(ensembl2ccds)\nentrez2ensembl = pd.Series(entrez2ensembl)", "_____no_output_____" ], [ "ensembl2symbol.head(3)", "_____no_output_____" ], [ "hgnc2entrez.head(3)", "_____no_output_____" ], [ "entrez2symbol.head(3)", "_____no_output_____" ], [ "ccds_mouse.head(3)", "_____no_output_____" ], [ "#convert to updated entrez ID\nhuman_ceg_entrez = hgnc2entrez[human_ceg_hgnc].astype(int)\nhuman_neg_entrez = hgnc2entrez[human_neg_hgnc].astype(int)\n\nprint (len(human_ceg_entrez))\nprint (len(human_neg_entrez))", "684\n926\n" ], [ "#convert to updated symbols\nhuman_ceg_symbol = entrez2symbol[human_ceg_entrez].values\nhuman_neg_symbol = entrez2symbol[human_neg_entrez].values\n\nprint 
(len(human_ceg_symbol))\nprint (len(human_neg_symbol))", "684\n926\n" ], [ "#load MGI homolog info\nfirstcheck = True\nhomologenes = dict()\nhuman2mouse = dict() #xref\nhuman2mouse_duplicated = dict() # xref of duplicated genes(human1 -> mouse 2~3)\nwith open('HOM_AllOrganism.rpt','r') as fp:\n for line in fp:\n if firstcheck == True:\n firstcheck = False\n continue\n linearray = line.rstrip().split(\"\\t\")\n homologeneid = int(linearray[0])\n taxon = int(linearray[2])\n symbol = linearray[3]\n entrezid = int(linearray[4])\n if taxon in [10090,9606]: # Mouse(10090) and Human(9606)\n if homologeneid not in homologenes:\n homologenes[homologeneid] = dict()\n if taxon not in homologenes[homologeneid]:\n homologenes[homologeneid][taxon] = list()\n homologenes[homologeneid][taxon].append((entrezid,symbol))\n\nfor i in homologenes:\n if len(homologenes[i]) == 2:\n if len(homologenes[i][9606])==1 and len(homologenes[i][10090])==1: # only 1 to 1 \n for (hid,hsym) in homologenes[i][9606]:\n if hid in entrez2symbol.index:\n human2mouse[hid] = homologenes[i][10090] # only in ncbi genes\n else:\n pass\n \n elif len(homologenes[i][10090])>1:\n for (hid,hsym) in homologenes[i][9606]:\n if hid in entrez2symbol.index:\n human2mouse_duplicated[hid] = homologenes[i][10090] # only in ncbi genes\n else:\n pass\n ", "_____no_output_____" ], [ "# print human2mouse symbol xref\nwith open (\"Human_entrez_Mouse_symbol\", 'w') as fout:\n for entrezid in human2mouse:\n for (mid,msym) in human2mouse[entrezid]:\n fout.write(str(entrezid) + \"\\t\" + msym + \"\\n\")\n ", "_____no_output_____" ], [ "# print human2mouse symbol xref 1to1 only\nwith open (\"Human_symbol_Mouse_symbol_1to1\", 'w') as fout:\n for entrezid in human2mouse:\n if len(human2mouse[entrezid]) == 1:\n for (mid,msym) in human2mouse[entrezid]:\n fout.write(str(entrez2symbol[entrezid]) + \"\\t\" + msym + \"\\n\")", "_____no_output_____" ], [ "# Convert Human Ref to Mouse Ref. 
Discard non-CCDS genes\n\nmouse_ceg_entrez = list()\nmouse_neg_entrez = list()\n\nnoortholog_ceg = list() # no orthologship\nnoortholog_neg = list()\nduportholog_ceg = list() # there is ortholog but duplicated\nduportholog_neg = list()\nfor entrezid in human_ceg_entrez:\n if entrezid not in human2mouse:\n if entrezid in human2mouse_duplicated:\n duportholog_ceg.append((entrezid,entrez2symbol[entrezid]))\n else:\n noortholog_ceg.append((entrezid,entrez2symbol[entrezid]))\n #nacount+=1\n #print entrezid, entrez2symbol[entrezid]\n else:\n if len(human2mouse[entrezid]) >= 2:\n print (\"%d, %s is duplicated\" % (entrezid,entrez2symbol[entrezid]))\n for (mid,msym) in human2mouse[entrezid]:\n if mid in ccds_mouse['gene_id'].values:\n print (' Mouse:',mid,msym)\n else:\n print( \" Mouse: Not in CCDS\",mid,msym)\n for (mid,msym) in human2mouse[entrezid]:\n if mid in ccds_mouse['gene_id'].values:\n mouse_ceg_entrez.append(mid)\n \nfor entrezid in human_neg_entrez:\n if entrezid not in human2mouse:\n if entrezid in human2mouse_duplicated:\n duportholog_neg.append((entrezid,entrez2symbol[entrezid]))\n else:\n noortholog_neg.append((entrezid,entrez2symbol[entrezid]))\n #nacount+=1\n #print entrezid, entrez2symbol[entrezid]\n else:\n if len(human2mouse[entrezid]) >= 2:\n print (\"%d, %s is duplicated \" % (entrezid,entrez2symbol[entrezid]))\n for (mid,msym) in human2mouse[entrezid]:\n if mid in ccds_mouse['gene_id'].values:\n print (' Mouse:',mid,msym)\n else:\n print (\" Mouse: Not in CCDS\",mid,msym)\n for (mid,msym) in human2mouse[entrezid]:\n if mid in ccds_mouse['gene_id'].values:\n mouse_neg_entrez.append(mid)\nmouse_ceg_entrez = list(set(mouse_ceg_entrez)) # rm duplicates\nmouse_neg_entrez = list(set(mouse_neg_entrez)) # rm duplicates\nprint (len(mouse_ceg_entrez)) \nprint (len(mouse_neg_entrez))", "657\n605\n" ], [ "entrez2symbol_mouse[mouse_ceg_entrez].head(3) # no NA", "_____no_output_____" ], [ "entrez2symbol_mouse[mouse_neg_entrez].head(3) # no NA", "_____no_output_____" ], [ "### load RNAseq for mouse tissues\n# https://www.nature.com/articles/s41598-017-04520-z \n#\n\nli_etal_fpkm = pd.read_csv('Lietal_fpkm_table_supptable6',index_col=0,sep=\"\\t\")\n\nli_etal_fpkm_log = log(li_etal_fpkm.drop(['gene_short_name','gene_type'],1)+0.5)", "_____no_output_____" ], [ "li_etal_fpkm_log.shape", "_____no_output_____" ], [ "li_etal_fpkm.head(3)", "_____no_output_____" ], [ "li_etal_fpkm_log.head(3)", "_____no_output_____" ], [ "neg_in_rnaseq = list()\n\nfor mid in mouse_neg_entrez:\n if entrez2ensembl[mid] not in li_etal_fpkm_log.index:\n print (mid,entrez2symbol_mouse[mid],entrez2ensembl[mid]) #Not in RNA-seq...\n else:\n pass\n neg_in_rnaseq.append(entrez2ensembl[mid])", "14429 Galr3 ENSMUSG00000114755\n18365 Olfr65 ENSMUSG00000110259\n" ], [ "ceg_in_rnaseq = list()\n\nfor mid in mouse_ceg_entrez:\n if entrez2ensembl[mid] not in li_etal_fpkm_log.index:\n print (mid,entrez2symbol_mouse[mid]) #Not in RNA-seq...\n else:\n pass\n ceg_in_rnaseq.append(entrez2ensembl[mid])", "71752 Gtf3c2\n72544 Exosc6\n15469 Prmt1\n66914 Vps28\n" ], [ "gene_mean = li_etal_fpkm_log.loc[neg_in_rnaseq].mean(axis=1)\nneg_filtered = entrez2symbol_mouse[ensembl2entrez[gene_mean[gene_mean<0].index]]\nneg_li = neg_filtered.values\nprint( \"\\n\".join(neg_filtered.values))", 
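Note: the mean-log-expression filter applied in the cell above is reused three more times below (CEG with the Li et al. data, then NEG/CEG with the Soellner et al. data). A minimal sketch of that shared step, assuming a pandas DataFrame of log-transformed expression indexed by Ensembl gene IDs; the helper name filter_by_expression is mine, not part of the original notebook:

import pandas as pd

def filter_by_expression(log_expr, gene_ids, cutoff, keep_above=True):
    # Keep genes whose mean log-expression across all tissues/samples passes the cutoff.
    present = [g for g in gene_ids if g in log_expr.index]
    means = log_expr.loc[present].mean(axis=1)
    mask = means > cutoff if keep_above else means < cutoff
    return list(means[mask].index)

# Roughly equivalent to the cell above (candidate non-essentials must look unexpressed):
# neg_li_ids = filter_by_expression(li_etal_fpkm_log, neg_in_rnaseq, 0, keep_above=False)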
"Cacng2\nOlig3\nB3gnt6\nSlc18a3\nTmprss11a\nIl1f8\nGabra6\nSpaca1\nSlc6a18\nPax1\nPax4\nGm11437\nMmp20\nGalr1\nEnthd1\nOlfr412\nPrss37\nOlfr1425\nZan\nOlfr685\nSstr4\nDdi1\nPdilt\nRtp1\nShcbp1l\nNlrp9b\nSlc22a13\nPdx1\nPdyn\nSamd7\nZswim2\nTex44\nSox1\nOlfr417\nSox14\nOpn1mw\nHcrtr2\nGpr31b\nCatsper4\nPgk2\nCngb3\nIl1f10\nSpata21\nZic3\nOlfr1356\nZp2\nOlfr145\nGh\nGhrh\nMrgprb2\nOlfr554\nTas2r118\nTas2r120\nTas2r121\nTbpl2\nGja8\nTas2r126\nUts2r\nTas2r130\nTas2r131\nFigla\nKrt36\nGk2\nSlc36a3\nKrt86\nKrt84\nRd3\nCdx2\nCdx4\nOlfr692\nSlc22a19\nSohlh1\nTmem207\nKrtap11-1\nOlfr557\nOlfr574\nCldn17\nOlfr1093\nGlra1\nTrpc7\nOlfr368\nGalntl5\nFrmd7\nCer1\nSpata32\nAsz1\nPla2g2e\nZnrf4\nTex28\nChat\nOlfr464\nSlc7a13\nTex13a\nKrt35\nOlfr221\nDefb23\nDsg4\nLalba\nVsx2\nSppl2c\nTspo2\nPiwil1\nCst11\nBpifc\nOlfr91\nFam71b\nGpr50\nLbx1\nUsp29\nAnkrd60\nPglyrp3\nKcnb2\nIfnk\nTas2r135\nTas2r144\nGpx5\n4933402N03Rik\nGpr26\nRbm46\nAtp6v1g3\nOlfr96\nUbqln3\nSerpina12\nOlfr684\nAccsl\nKcna10\nLhx3\nLhx5\nSept14\nNphs2\nCnga2\nGsx1\nGsx2\nKrt75\nPrdm14\nT\nCrnn\nOlfr1023\nOlfr1022\nOtx2\nOlfr1086\nIqcf1\nOlfr716\nVrtn\nOlfr270\nTbc1d21\nSpem1\nPou3f4\nPou4f2\nPou4f3\nKcnk10\nTaar1\nOlfr1151\nOlfr152\nOlfr418\nMageb18\nOlfr365\nDgat2l6\nPrg3\nTaar2\nKrtap13-1\nKcnk18\nSix6\nGpr151\nOlfr1423\nOlfr1424\nTex45\nPpp3r2\nSun5\nMrgprd\nOtud6a\nOosp2\nGrm4\nPrdm13\nOlfr410\nOlfr402\nOlfr411\nSp8\nOlfr419\nCrx\nCrygb\nKrt73\nOlfr502\nOlfr1497\nTktl2\nOlfr935\nProp1\nOlfr556\nCsn2\nCsn3\nScrt2\nTmprss15\nKrt77\nKrt40\nRnase9\nCst9\nTriml1\nGfral\nMagea4\nTmem174\nGpr152\nOlfr231\nZfp804b\nNutm1\nGsc2\nMs4a5\nOlfr273\nOlfr39\nPde6h\nPtf1a\nCyp11b2\nOlfr109\nMrgprx1\nRtl4\nScp2d1\nFam71a\nRbmxl2\nFoxb1\nOlfr982\nSpaca7\nTaar5\nTaar6\nMc2r\nMc3r\nMc5r\nRtp2\nAsic5\nOpalin\nGot1l1\nKcnk16\nTrim60\nOlfr853\nOlfr750\nCst10\nRp1l1\nPrss33\nDazl\nOlfr20\nGpr139\nFoxn1\nMsgn1\nOlfr63\nTas2r139\nOlfr354\nTbr1\nPrlh\nDdx4\nLipm\nPkd1l3\n4930435E12Rik\nCdcp2\nOlfr1496\nOlfr1044\nTrpm1\nOlfr714\nOlfr713\nMtnr1b\nHrh3\nPrlhr\nDmrtb1\nOlfr618\nRax\nOlfr362\nOlfr691\nOlfr520\nWfdc9\nTbx10\nHmx1\nOlfr569\nCela3a\nOlfr568\nSpink14\nOlfr552\nMorc1\nHoxb1\nTrim42\nOpn5\n4933402J07Rik\nNdst4\nHoxd12\nKrtap1-3\nLhfpl5\nIl21\nDmp1\nMfrp\nCacng3\nTrim67\nNpvf\nFoxr1\nNms\nNyx\nBarhl1\nKrt25\nKlk9\nHhla1\nSlitrk1\nCabp5\nCabp2\nChrna6\nIl1f6\nDrd3\nTrpd52l3\nBsnd\nKrt28\nHtr1a\nHtr2c\nBhlhe23\nCct8l1\nHtr5a\nHtr6\nActl7a\nActl7b\nFam47c\nRbp3\nRxfp2\nSpata16\nAdam18\nAsb17\nRbpjl\nTeddm1b\nCtcfl\nNox3\nIrgc1\nPanx3\nVmn1r237\nAdam2\nGpr119\nEfcab3\nAdad1\nTmprss12\nKrt76\nRnase10\nPrss58\nGprc6a\nTph2\nCpxcr1\nMagea10\nGhsr\nCabs1\nCnpy1\nAdgrf2\nEgr4\nPrss41\nKrt26\nRnf17\nAicda\nNkx2-1\nInsrr\nGcm2\nPrss55\nAkp3\nAlppl2\nTmprss11f\nNr2e1\nCstl1\nTlx1\nKrt71\nMmp27\nNlrp5\nAdam30\nTex101\nAlx3\nKhdc3\nRnase13\nRp1\nCacng5\nRpe65\nAntxrl\nXkr7\nFezf2\nBpifb6\nLrit1\nLrit2\nGlt6d1\nNoto\nTnr\nGrk1\nRnase12\nMyf5\nRnase11\nRfx6\nFezf1\nH1foo\nGpr45\nIapp\nIl25\nChrnb3\n1700024G13Rik\nInsm2\nRdh8\nTrhr\nGrm5\nGrm6\nTsga13\nTrpc5\nAipl1\nRho\nNanos2\nDmrtc2\nCcdc83\nTmem132d\nTshb\nTfap2d\nNeurod2\nIl17f\nNeurog1\nPdcl2\nTssk1\nTssk2\nLin28a\nIfnb1\nSlc2a7\nCcdc155\nDmrt1\nUsp26\nDgkk\nSlc25a31\nSlc17a6\nFfar1\nActrt1\nAtoh1\nNeurod6\nNeurod4\nGlra2\nTchhl1\nTyr\nNkx2-2\nBpifa3\nGpr101\nRs1\nHist1h2ak\nHist1h2ba\nSlitrk2\nStpg4\nEvx1\nPrss38\nOlig2\nBanf2\nLim2\nRxfp3\nPcare\nPou5f2\nGpx6\nMs4a13\nKif2b\nLyzl1\nCetn1\nTaar9\nNpsr1\nH2bfm\nAwat2\nAwat1\nMbd3l1\nIl12b\nIl13\nScn10a\nIl17a\nTrpv5\nNpffr1\nSlc6a5\nVax1\nTas1r2\nZfp648\nCcl1\nLyzl6\nI
l9\nTxndc8\nBpifb3\nSlc32a1\nOc90\nFgf3\nFgf4\nFgf6\nActl9\nNobox\nBmp10\nBmp15\nCcdc172\nGdf2\nKrt82\nKlk12\nCntnap5a\nLcn9\nOlfr17\nOlfr15\nOlfr19\nOlfr2\nGucy2f\nTmem225\nOlfr31\nKrtap26-1\nMepe\nOlfr263\nCyp26c1\nTas2r119\nKcnv1\nIsx\nRetnlb\nVmn1r224\nOlfr140\nKrtap15\nOlfr68\nOlfr69\nClec3a\nSult6b1\nClrn1\nIns2\nOlfr453\nOtop3\nTgm6\nOtor\nOtp\nLrrc10\nPnpla5\n" ], [ "gene_mean = li_etal_fpkm_log.loc[ceg_in_rnaseq].mean(axis=1)\nceg_filtered = entrez2symbol_mouse[ensembl2entrez[gene_mean[gene_mean>1].index]]\nceg_li = ceg_filtered.values\nprint (\"\\n\".join(ceg_filtered.values))", "Mybbp1a\nExosc8\nCkap5\nFntb\nDgcr8\nCdc5l\nCpsf1\nHinfp\nPabpc1\nSdad1\nEif6\nTelo2\nXrcc6\nPafah1b1\nSpc24\nGabpa\nImp3\nTrappc1\nPlrg1\nCtdp1\nTmem223\nPolr2h\nTomm40\nIgbp1\nTrmt112\nRpp21\nMrpl18\nLuc7l3\nAtp6v1d\nPcna\nEif3g\nHypk\nTrnau1ap\nGart\nPtcd1\nDhx8\nNup93\nPolr2g\nDnajc9\nIsg20l2\nKif23\nCcna2\nPop1\nCdc27\nEftud2\nMsto1\nDiexf\nKri1\nCse1l\nGnl3\nMed30\nSnrnp70\nSnrpd1\nZpr1\nBysl\nGins2\nCcnk\nInts3\nCox11\nDdx47\nKif11\nCct2\nCct3\nCct4\nCct5\nCct7\nCct8\nPggt1b\nKat8\nRpp38\nLsm12\nPfdn2\nPfn1\nTsr1\nPgam1\nSrsf1\nPgk1\nHaus5\nPhb\nCdk1\nSnrpf\nCdc37\nMrpl57\nGgps1\nNoc4l\nNop9\nPtpa\nDdx55\nCops6\nNploc4\nSs18l2\nEif2s3x\nCdk7\nPolr2i\nEif5a\nSars2\nNol9\nRpl4\nRacgap1\nPpa1\nSnapc4\nWdr70\nCebpz\nFars2\nCdc16\nCenpa\nCenpc1\nTrrap\nMak16\nRpl8\nCfl1\nNol10\nSnu13\nHjurp\nTimm23\nLas1l\nCoq4\nRack1\nTbcd\nSpout1\nTars\nDolk\nPop5\nPuf60\nMepce\nPtpn23\nPlk1\nExosc2\nMphosph10\nGtf3c1\nNup214\nSnrnp35\nAurkb\nTufm\nCoa5\nWdr12\nTut1\nLonp1\nCfap298\nPrpf4\nGps1\nRps3\nCmtr1\nNaa50\nSupv3l1\nTpx2\nClns1a\nSeh1l\nSupt5\nSupt6\nEif2b1\nRpl35a\nHaus1\nTomm22\nSmc4\nRps13\nTtc27\nRfc5\nTrappc3\nYars2\nUspl1\nGpn3\nPak1ip1\nAtp5d\nNcbp2\nTubgcp2\nRps2\nGspt1\nPpil2\nSdhc\nCdc123\nSmu1\nGtf2b\nExosc4\nPola2\nGtf2h1\nGtf2h4\nLars\nRars2\nSnrnp25\nLsm7\nCopa\nCox4i1\nWdr74\nTrmt5\nRps11\nCpsf2\nGuk1\nSf3b5\nNars\nNup85\nWdr33\nCtps\nPcid2\nRrp12\nGins4\nBud23\nOraov1\nSlu7\nSympk\nGtf3c5\nRpl24\nHnrnpu\nPsmd1\nPpp2ca\nMrpl4\nNip7\nNudt21\nMed11\nNup88\nUbl5\nPrim1\nSart3\nEif3d\nNat10\nTti2\nClp1\nPolr3c\nNudcd3\nRpa1\nDmap1\nIars\nNol6\nOsgep\nCopb1\nDimt1\nDdx20\nPsma2\nPsma3\nPsmb1\nPsmb4\nTxnl4a\nRpl3\nPsmb7\nLsm8\nPsmc2\nPrmt5\nCox10\nPsmc3\nPsmc5\nPsmd4\nUtp23\nSrbd1\nMagoh\nHsd17b10\nCycs\nYars\nNup155\nHars\nRps15a\nTbl3\nWdr61\nZfp131\nRnf20\nFtsj3\nAtp5l\nEif2b5\nNus1\nSnw1\nHcfc1\nExosc3\nMcm3\nMcm4\nMcm5\nArcn1\nMcm7\nWdr43\nWdr77\nDad1\nAnapc2\nHdac3\nIscu\nSrp19\nMad2l1\nPrelid3b\nNmd3\nCox15\nLsg1\nTaf1b\nTaf6\nTwnk\nPolr2e\nPhf5a\nTubgcp6\nDdx49\nDdx21\nDdb1\nHeatr1\nSpc25\nExosc7\nDdost\nMrpl53\nDhx15\nRabggtb\nSkp1a\nDhx9\nSys1\nRad21\nInts1\nUtp15\nRad51d\nFam96b\nCpsf4\nRps23\nRps21\nUbtf\nRan\nRpl35\nPolr2l\nRangap1\nPrelid1\nEcd\nPsmg3\nDonson\nRbm8a\nRpl12\nTcp1\nInts8\nRbm14\nDis3\nUpf2\nNaa10\nAnapc4\nEprs\nMrpl28\nArl2\nRpl18a\nElac2\nHnrnpc\nMrpl38\nEif3c\nCmpk1\nHnrnpk\nFarsa\nHnrnpl\nUtp20\nGtf3a\nSamm50\nTonsl\nSnrnp27\nDhps\nAhcy\nDlst\nPolr3h\nPolrmt\nPpp4c\nLsm2\nCcnh\nMcm3ap\nDnm2\nRae1\nRfk\nCcdc84\nDnmt1\nSnrnp200\nActr10\nCopz1\nOrc6\nUqcrfs1\nActl6a\nRbm17\nSae1\nMrps14\nMrps24\nXpo1\nHspd1\nActr2\nWdr3\nDpagt1\nHspa9\nSnrpd2\nCpsf3\nYju2\nRbbp6\nMrps34\nActb\nPrpf38a\nRbmx\nTubgcp3\nVcp\nChmp6\nAlg14\nDdx41\nGrwd1\nAars\nPpwd1\nUpf1\nArih1\nRfc2\nAtr\nUtp4\nTtc1\nDhx37\nBanf1\nGmppb\nIpo13\nTfam\nRheb\nDdx56\nAdsl\nAdss\nGrpel1\nNhp2\nPmpca\nVars2\nThoc5\nSf3a3\nTubg1\nKars\nCrnkl1\nEef2\nCherp\nFarsb\nEif3b\nYrdc\nAlg1\nCnot3\nChmp2a\nImp4\nPrpf19\nEif2s
1\nTimm44\nEif3a\nTimm13\nWdr92\nPrpf38b\nTimm10\nMed27\nNup133\nNol11\nSnrpa1\nPmpcb\nMars\nPrpf31\nDdx18\nGfm1\nSsu72\nUsp39\nEll\nDtymk\nAlg2\nNuf2\nTnpo3\nDhodh\nTmem258\nCdk9\nAtp5o\nRpa2\nVps25\nEif3i\nPsmd12\nRpl10a\nRtcb\nRpl18\nPolr3k\nPsmd13\nMvk\nMyc\nAbce1\nRpl11\nRngtt\nRpl19\nTop1\nPsmd11\nTop2a\nTriap1\nCdc20\nMrpl45\nNop16\nAtp6v0b\nKrr1\nCdc73\nRiok2\nAbcf1\nRpl27\nNkap\nRpl30\nPaics\nRomo1\nRpl23\nSmc1a\nRpl37a\nZfp574\nPsmc6\nBirc5\nDdx27\nTfrc\nPpan\nNop2\nZbtb8os\nEif2b3\nSacm1l\nNcbp1\nNudt4\nRpl14\nErcc2\nErcc3\nPolr1b\nPolr1a\nPolr1c\nErh\nPolr2a\nPolr2c\nSf3b3\nTpt1\nAqr\nRplp0\nNop56\nNepro\nCtr9\nUbe2n\nSbno1\nRps12\nNedd8\nRps16\nNapa\nZmat5\nPsmd3\nCopb2\nNdufa13\nRplp2\nRps18\nGtpbp4\nRps19\nMtg2\nPolr2d\nOgt\nAnapc5\nEif2s2\nRrs1\nNup160\nRps5\nRps6\nThoc2\nTubb5\nMed12\nHuwe1\nNampt\nPsmd14\nRcl1\nRps7\nRps8\nTxn1\nPrpf8\nKansl3\nTti1\nAtp2a2\nCinp\nGemin8\nEif5b\nPnkp\nRpf2\nRrm1\nAtp5a1\nAtp5b\nEif4a3\nAtp5c1\nNsa2\nU2af2\nUbe2m\nMdn1\nUbe2l3\nUba1\nNmt1\nAtp6v1a\nGemin5\nAtp6v1e1\nMars2\nRuvbl2\nAtp6v0c\nPolr3a\nUsp5\nPreb\nDkc1\nCltc\nMrpl46\nCox6b1\nSrrt\nDbr1\nRcc1\nUqcrc1\nPhb2\nCiao1\nSars\nSnrpd3\nSrsf7\nUrod\nNsf\nUxt\nDdx10\nFau\nSec13\nDhx33\nNudc\nVars\nSnapc2\nPsma1\nIlf3\nPsma4\nPsma5\nPsma6\nPsma7\nPsmb2\nPsmb3\nKpnb1\nRpl27a\nSf3b2\nRps20\nWars\nRfc4\nSnapc1\nXab2\nMrpl43\nOgdh\nWee1\nNvl\nSmc2\nDctn5\nSrsf2\nPpat\nSrsf3\nPolr2b\nMettl16\nNle1\nFnta\nThoc3\nWdr75\nBub1b\nBub3\nCops3\nDnaja3\nSf3b1\nNarfl\nInts9\nGmps\n" ], [ "### load RNAseq for mouse tissues set 2\n# https://www.nature.com/articles/sdata2017185#t1\n#\n\nsoellner_etal_rpkm = pd.read_csv('Soellneretal_rpkm_table',index_col=0,sep=\"\\t\")\n\nsoellner_etal_rpkm_log = log(soellner_etal_rpkm+0.5)", "_____no_output_____" ], [ "soellner_etal_rpkm_log.shape", "_____no_output_____" ], [ "soellner_etal_rpkm_log.head(3)", "_____no_output_____" ], [ "# Non essential genes in RNAseq data\nneg_in_rnaseq = list()\n\nfor mid in mouse_neg_entrez:\n if entrez2ensembl[mid] not in soellner_etal_rpkm_log.index:\n print (mid,entrez2symbol_mouse[mid],entrez2ensembl[mid]) #Not in RNA-seq...\n else:\n pass\n neg_in_rnaseq.append(entrez2ensembl[mid])", "14429 Galr3 ENSMUSG00000114755\n18365 Olfr65 ENSMUSG00000110259\n" ], [ "# Core essential genes in RNAseq data\nceg_in_rnaseq = list()\n\nfor mid in mouse_ceg_entrez:\n if entrez2ensembl[mid] not in soellner_etal_rpkm_log.index:\n print (mid,entrez2symbol_mouse[mid],entrez2ensembl[mid]) #Not in RNA-seq...\n else:\n pass\n ceg_in_rnaseq.append(entrez2ensembl[mid])", "272551 Gins2 ENSMUSG00000031821\n12857 Cox4i1 ENSMUSG00000031818\n72544 Exosc6 ENSMUSG00000109941\n66914 Vps28 ENSMUSG00000115987\n234865 Nup133 ENSMUSG00000039509\n67177 Cdt1 ENSMUSG00000006585\n" ], [ "# list of mNEG filtered by RNA-seq data\ngene_mean = soellner_etal_rpkm_log.loc[neg_in_rnaseq].mean(axis=1)\nneg_filtered = entrez2symbol_mouse[ensembl2entrez[gene_mean[gene_mean<0].index]]\nneg_soellner = neg_filtered.values\nprint (\"\\n\".join(neg_filtered.values))", 
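For orientation, the two cutoffs used in these filters map back to linear expression as follows (pylab's log is the natural log, and 0.5 is the pseudocount added when the matrices were log-transformed). Because the filter averages logs, the conversion describes a geometric-mean-like "typical" value rather than the arithmetic mean; a purely illustrative check:

import numpy as np
# mean log(x + 0.5) < 0  ->  typical FPKM/RPKM below exp(0) - 0.5 = 0.5
# mean log(x + 0.5) > 1  ->  typical FPKM/RPKM above exp(1) - 0.5, about 2.22
print(np.exp(0) - 0.5, np.exp(1) - 0.5)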
"Cacng2\nOlig3\nB3gnt6\nSlc18a3\nTmprss11a\nIl1f8\nGabra6\nSpaca1\nSlc6a18\nPax1\nPax4\nMmp20\nGalr1\nEnthd1\nOlfr412\nPrss37\nOlfr1425\nZan\nOlfr685\nSstr4\nDdi1\nPdilt\nRtp1\nShcbp1l\nSlc22a13\nPdx1\nPdyn\nSamd7\nZswim2\nTex44\nSox1\nOlfr417\nSox14\nOpn1mw\nHcrtr2\nGpr31b\nCatsper4\nPgk2\nCngb3\nIl1f10\nSpata21\nZic3\nOlfr1356\nZp2\nOlfr145\nGh\nGhrh\nMrgprb2\nOlfr554\nTas2r118\nTas2r120\nTas2r121\nTbpl2\nGja8\nTas2r126\nUts2r\nTas2r130\nTas2r131\nFigla\nKrt36\nGk2\nSlc36a3\nKrt86\nKrt84\nRd3\nCdx4\nOlfr692\nSlc22a19\nSohlh1\nTmem207\nKrtap11-1\nOlfr557\nOlfr574\nCldn17\nOlfr1093\nGlra1\nTrpc7\nOlfr368\nGalntl5\nFrmd7\nCer1\nSpata32\nAsz1\nPla2g2e\nZnrf4\nTex28\nChat\nOlfr464\nTex13a\nKrt35\nOlfr221\nDefb23\nDsg4\nLalba\nVsx2\nSppl2c\nTspo2\nPiwil1\nCst11\nBpifc\nOlfr91\nFam71b\nGpr50\nLbx1\nUsp29\nAnkrd60\nPglyrp3\nKcnb2\nIfnk\nTas2r135\nTas2r144\nGpx5\n4933402N03Rik\nGpr26\nRbm46\nAtp6v1g3\nOlfr96\nUbqln3\nSerpina12\nOlfr684\nAccsl\nKcna10\nLhx3\nLhx5\nSept14\nNphs2\nCnga2\nGsx1\nGsx2\nKrt75\nPrdm14\nT\nCrnn\nOlfr1023\nOlfr1022\nOtx2\nOlfr1086\nIqcf1\nOlfr716\nVrtn\nOlfr270\nTbc1d21\nSpem1\nPou3f4\nPou4f2\nPou4f3\nTaar1\nOlfr1151\nOlfr152\nOlfr418\nMageb18\nOlfr365\nDgat2l6\nPrg3\nTaar2\nKrtap13-1\nKcnk18\nSix6\nGpr151\nOlfr1423\nOlfr1424\nTex45\nPpp3r2\nSun5\nMrgprd\nOtud6a\nOosp2\n1700028K03Rik\nGrm4\nPrdm13\nOlfr410\nOlfr402\nOlfr411\nSp8\nOlfr419\nCrx\nCrygb\nKrt73\nOlfr502\nOlfr1497\nTktl2\nFndc7\nOlfr935\nProp1\nOlfr556\nCsn2\nCsn3\nScrt2\nTmprss15\nKrt77\nKrt40\nRnase9\nCst8\nCst9\nTriml1\nGfral\nMagea4\nTmem174\nApof\nGpr152\nOlfr231\nZfp804b\nNutm1\nGsc2\nMs4a5\nHao1\nOlfr273\nOlfr39\nPde6h\nPtf1a\nCyp11b2\nOlfr109\nMrgprx1\nMas1\nRtl4\nScp2d1\nFam71a\nRbmxl2\nFoxb1\nOlfr982\nSpaca7\nTaar5\nTaar6\nMc2r\nMc3r\nMc5r\nRtp2\nAsic5\nCyp7a1\nOpalin\nGot1l1\nKcnk16\nTrim60\nOlfr853\nOlfr750\nCst10\nRp1l1\nPrss33\nDazl\nOlfr20\nGpr139\nFoxn1\nMsgn1\nOlfr63\nTas2r139\nOlfr354\nTbr1\nPrlh\nDdx4\nLipm\nPkd1l3\n4930435E12Rik\nCdcp2\nOlfr1496\nOlfr1044\nTrpm1\nOlfr714\nOlfr713\nMtnr1b\nSlc39a12\nHrh3\nPrlhr\nDmrtb1\nOlfr618\nRax\nOlfr362\nOlfr691\nOlfr520\nWfdc9\nTbx10\nHmx1\nOlfr569\nCela3a\nOlfr568\nSpink14\nOlfr552\nMorc1\nHoxb1\nTrim42\nOpn5\n4933402J07Rik\nNdst4\nHoxd12\nKrtap1-3\nLhfpl5\nIl21\nDmp1\nMfrp\nCacng3\nTrim67\nNpvf\nFoxr1\nNms\nNyx\nBarhl1\nKrt25\nKlk9\nHhla1\nSlitrk1\nCabp5\nChrna6\nIl1f6\nDrd3\nIl1f5\nTrpd52l3\nBsnd\nKrt28\nHtr1a\nHtr2c\nBhlhe23\nCct8l1\nHtr5a\nHtr6\nActl7a\nActl7b\nFam47c\nRbp3\nRxfp2\nSpata16\nAdam18\nAsb17\nTeddm1b\nCtcfl\nNox3\nIrgc1\nPanx3\nVmn1r237\nAdam2\nGpr119\nEfcab3\nAdad1\nTmprss12\nKrt76\nRnase10\nPrss58\nGprc6a\nTph2\nCpxcr1\nMagea10\nGhsr\nCabs1\nAdgrf2\nEgr4\nPrss41\nKrt26\nRnf17\nAicda\nNkx2-1\nInsrr\nGcm2\nPrss55\nAlppl2\nTmprss11f\nNr2e1\nCstl1\nTlx1\nKrt71\nMmp27\nNlrp5\nAdam30\nTex101\nAlx3\nKhdc3\nRnase13\nRp1\nCacng5\nRpe65\nAntxrl\nXkr7\nFezf2\nBpifb6\nLrit1\nLrit2\nGlt6d1\nNoto\nTnr\nGrk1\nRnase12\nMyf5\nRnase11\nRfx6\nFezf1\nH1foo\nGpr45\nIl25\nChrnb3\n1700024G13Rik\nInsm2\nRdh8\nTrhr\nCyp11b1\nGrm5\nGrm6\nTsga13\nTrpc5\nAipl1\nRho\nNanos2\nDmrtc2\nCcdc83\nTmem132d\nTshb\nTfap2d\nNeurod2\nIl17f\nNeurog1\nPdcl2\nTssk1\nTssk2\nLin28a\nIfnb1\nSlc2a7\nCcdc155\nDmrt1\nUsp26\nDgkk\nSlc25a31\nSlc17a6\nFfar1\nActrt1\nNeurod6\nNeurod4\nGlra2\nTchhl1\nTyr\nNkx2-2\nBpifa3\nGpr101\nRs1\nHist1h2ak\nHist1h2ba\nSlitrk2\nStpg4\nEvx1\nPrss38\nOlig2\nBanf2\nLim2\nRxfp3\nPcare\nPou5f2\nF9\nGpx6\nMs4a13\nKif2b\nLyzl1\nCetn1\nTaar9\nNpsr1\nSec14l3\nH2bfm\nSerpina7\nAwat2\nAwat1\nMbd3l1\nIl12b\nIl13\nScn10a\nIl17a\nC8b\nTrpv5\nNpffr1\nSlc6a5\nVax1\nTa
s1r2\nZfp648\nCcl1\nLyzl6\nIl9\nTxndc8\nBpifb3\nSlc32a1\nOc90\nFgf3\nFgf4\nFgf6\nActl9\nNobox\nBmp10\nBmp15\nCcdc172\nGdf2\nKrt82\nKlk12\nCntnap5a\nLcn9\nOlfr17\nOlfr15\nOlfr19\nOlfr2\nGucy2f\nTmem225\nOlfr31\nKrtap26-1\nMepe\nOlfr263\nCyp26c1\nTas2r119\nKcnv1\nVmn1r224\nOlfr140\nKrtap15\nOlfr68\nOlfr69\nClec3a\nSult6b1\nClrn1\nOlfr453\nTgm6\nOtor\nOtp\nLrrc10\nPnpla5\n" ], [ "# list of mCEG filtered by RNA-seq data\ngene_mean = soellner_etal_rpkm_log.loc[ceg_in_rnaseq].mean(axis=1)\nceg_filtered = entrez2symbol_mouse[ensembl2entrez[gene_mean[gene_mean>1].index]]\nceg_soellner = ceg_filtered.values\nprint (\"\\n\".join(ceg_filtered.values))", "Mybbp1a\nExosc8\nCkap5\nFntb\nDgcr8\nCdc5l\nCpsf1\nHinfp\nPabpc1\nSdad1\nEif6\nTelo2\nXrcc6\nPafah1b1\nSpc24\nGabpa\nImp3\nTrappc1\nPlrg1\nCtdp1\nGtf3c2\nTmem223\nPolr2h\nTomm40\nIgbp1\nRpp21\nMrpl18\nLuc7l3\nAtp6v1d\nPcna\nEif3g\nHypk\nTrnau1ap\nGart\nPtcd1\nDhx8\nNup93\nPolr2g\nDnajc9\nIsg20l2\nKif23\nCcna2\nPop1\nCdc27\nEftud2\nMsto1\nDiexf\nKri1\nCse1l\nGnl3\nMed30\nSnrnp70\nSnrpd1\nZpr1\nBysl\nCcnk\nInts3\nCox11\nDdx47\nKif11\nCct2\nCct3\nCct4\nCct5\nCct7\nCct8\nPggt1b\nKat8\nRpp38\nLsm12\nPfdn2\nPfn1\nTsr1\nPgam1\nSrsf1\nPgk1\nHaus5\nPhb\nCdk1\nSnrpf\nCdc37\nMrpl57\nGgps1\nNoc4l\nNop9\nPtpa\nDdx55\nCops6\nNploc4\nSs18l2\nCdk7\nPolr2i\nEif5a\nSars2\nNol9\nRpl4\nRacgap1\nPpa1\nSnapc4\nWdr70\nCebpz\nFars2\nCdc16\nCenpa\nCenpc1\nTrrap\nMak16\nRpl8\nCfl1\nNol10\nSnu13\nHjurp\nTimm23\nLas1l\nCoq4\nRack1\nTbcd\nChek1\nSpout1\nTars\nDolk\nPop5\nPuf60\nMepce\nPtpn23\nPlk1\nExosc2\nMphosph10\nGtf3c1\nNup214\nSnrnp35\nAurkb\nTufm\nCoa5\nWdr12\nTut1\nLonp1\nCfap298\nPrpf4\nGps1\nRps3\nCmtr1\nNaa50\nSupv3l1\nTpx2\nClns1a\nSeh1l\nSupt5\nSupt6\nEif2b1\nRpl35a\nHaus1\nTomm22\nSmc4\nRps13\nTtc27\nRfc5\nTrappc3\nYars2\nUspl1\nGpn3\nPak1ip1\nAtp5d\nNcbp2\nTubgcp2\nRps2\nGspt1\nPpil2\nSdhc\nCdc123\nSmu1\nGtf2b\nExosc4\nPola2\nGtf2h1\nGtf2h4\nLars\nRars2\nSnrnp25\nCopa\nWdr74\nTrmt5\nRps11\nCpsf2\nGuk1\nSf3b5\nNars\nNup85\nWdr33\nCtps\nPcid2\nRrp12\nGins4\nBud23\nOraov1\nSlu7\nSympk\nGtf3c5\nHnrnpu\nPsmd1\nPpp2ca\nMrpl4\nNip7\nMed11\nNup88\nUbl5\nPkmyt1\nPrim1\nSart3\nEif3d\nNat10\nTti2\nClp1\nPolr3c\nNudcd3\nRpa1\nDmap1\nIars\nNol6\nOsgep\nCopb1\nDdx20\nPsma2\nPsma3\nPsmb1\nPsmb4\nTxnl4a\nRpl3\nPsmb7\nLsm8\nPsmc2\nPrmt5\nCox10\nPsmc3\nPsmc5\nPsmd4\nUtp23\nSrbd1\nMagoh\nHsd17b10\nCycs\nYars\nNup155\nHars\nRps15a\nTbl3\nWdr61\nZfp131\nRnf20\nFtsj3\nAtp5l\nEif2b5\nNus1\nSnw1\nHcfc1\nExosc3\nMcm3\nMcm4\nMcm5\nArcn1\nMcm7\nWdr43\nWdr77\nDad1\nAnapc2\nHdac3\nIscu\nSrp19\nMad2l1\nPrelid3b\nNmd3\nCox15\nLsg1\nTaf1b\nTaf6\nTwnk\nPolr2e\nPhf5a\nTubgcp6\nDdx49\nDdx21\nDdb1\nHeatr1\nSpc25\nExosc7\nDdost\nMrpl53\nDhx15\nRabggtb\nSkp1a\nDhx9\nSys1\nRad21\nInts1\nUtp15\nRad51d\nFam96b\nCpsf4\nRps23\nRps21\nUbtf\nRan\nRpl35\nPolr2l\nRangap1\nPrelid1\nEcd\nPsmg3\nDonson\nRbm8a\nRpl12\nTcp1\nInts8\nRbm14\nDis3\nUpf2\nNaa10\nAnapc4\nEprs\nMrpl28\nArl2\nRpl18a\nElac2\nHnrnpc\nMrpl38\nEif3c\nCmpk1\nHnrnpk\nFarsa\nHnrnpl\nUtp20\nGtf3a\nSamm50\nTonsl\nSnrnp27\nDhps\nAhcy\nDlst\nPolr3h\nPolrmt\nPpp4c\nLsm2\nPrmt1\nCcnh\nMcm3ap\nDnm2\nRae1\nRfk\nCcdc84\nDnmt1\nSnrnp200\nActr10\nCopz1\nOrc6\nUqcrfs1\nActl6a\nRbm17\nSae1\nMrps14\nMrps24\nXpo1\nHspd1\nActr2\nWdr3\nDpagt1\nHspa9\nSnrpd2\nCpsf3\nYju2\nRbbp6\nMrps34\nActb\nPrpf38a\nRbmx\nTubgcp3\nVcp\nChmp6\nAlg14\nDdx41\nGrwd1\nAars\nPpwd1\nUpf1\nArih1\nRfc2\nAtr\nUtp4\nTtc1\nDhx37\nBanf1\nGmppb\nIpo13\nTfam\nRheb\nDdx56\nAdsl\nAdss\nGrpel1\nNhp2\nPmpca\nVars2\nThoc5\nSf3a3\nTubg1\nKars\nCrnkl1\nEef2\nCherp\nFarsb\nEif3b\nYrdc\nAlg1\nCnot3\nChm
p2a\nImp4\nPrpf19\nEif2s1\nTimm44\nEif3a\nTimm13\nWdr92\nPrpf38b\nTimm10\nMed27\nNol11\nSnrpa1\nPmpcb\nMars\nPrpf31\nDdx18\nGfm1\nSsu72\nUsp39\nEll\nDtymk\nAlg2\nNuf2\nTnpo3\nDhodh\nTmem258\nCdk9\nAtp5o\nRpa2\nVps25\nEif3i\nPsmd12\nRpl10a\nRtcb\nRpl18\nPolr3k\nPsmd13\nMvk\nMyc\nAbce1\nRpl11\nRngtt\nTop1\nPsmd11\nTop2a\nTriap1\nCdc20\nMrpl45\nNop16\nAtp6v0b\nKrr1\nCdc73\nRiok2\nAbcf1\nRpl27\nNkap\nRpl30\nPaics\nRomo1\nRpl23\nSmc1a\nRpl37a\nZfp574\nPsmc6\nBirc5\nDdx27\nTfrc\nPpan\nNop2\nZbtb8os\nEif2b3\nSacm1l\nNcbp1\nNudt4\nRpl14\nErcc2\nErcc3\nPolr1b\nPolr1a\nPolr1c\nErh\nPolr2a\nPolr2c\nSf3b3\nTpt1\nAqr\nRplp0\nNop56\nNepro\nCtr9\nUbe2n\nSbno1\nNedd8\nRps16\nNapa\nZmat5\nPsmd3\nCopb2\nNdufa13\nRplp2\nRps18\nGtpbp4\nRps19\nMtg2\nPolr2d\nOgt\nAnapc5\nEif2s2\nRrs1\nNup160\nRps5\nThoc2\nTubb5\nMed12\nHuwe1\nNampt\nPsmd14\nRcl1\nRps7\nRps8\nTxn1\nPrpf8\nKansl3\nTti1\nAtp2a2\nCinp\nEif5b\nPnkp\nRpf2\nRrm1\nAtp5a1\nAtp5b\nEif4a3\nAtp5c1\nNsa2\nU2af2\nUbe2m\nMdn1\nUbe2l3\nUba1\nNmt1\nAtp6v1a\nGemin5\nAtp6v1e1\nRuvbl2\nAtp6v0c\nPolr3a\nUsp5\nPreb\nDkc1\nCltc\nMrpl46\nCox6b1\nSrrt\nDbr1\nRcc1\nUqcrc1\nPhb2\nCiao1\nSars\nSnrpd3\nSrsf7\nUrod\nNsf\nUxt\nDdx10\nFau\nSec13\nDhx33\nNudc\nVars\nSnapc2\nPsma1\nIlf3\nPsma4\nPsma5\nPsma6\nPsma7\nPsmb2\nPsmb3\nKpnb1\nRpl27a\nSf3b2\nRps20\nWars\nRfc4\nSnapc1\nXab2\nMrpl43\nOgdh\nWee1\nNvl\nSmc2\nDctn5\nSrsf2\nPpat\nSrsf3\nPolr2b\nMettl16\nNle1\nFnta\nThoc3\nWdr75\nBub1b\nBub3\nCops3\nDnaja3\nSf3b1\nNarfl\nInts9\nGmps\n" ], [ "# mean rpkm of CEG\nessgene_mean = soellner_etal_rpkm_log.loc[ceg_in_rnaseq].mean(axis=1)\nessgene_mean[essgene_mean<0]", "_____no_output_____" ], [ "from matplotlib_venn import venn3, venn3_circles,venn2, venn2_circles\ndef dec_to_bin(x,length):\n formatstr = \"%0\"+str(length)+\"d\"\n return formatstr%int(bin(x)[2:])\n\n\n#two set comparison\n\nnumberofset = 2\ncount=dict()\ncount['10'] = len(setdiff1d(ceg_li,ceg_soellner))\ncount['01'] = len(setdiff1d(ceg_soellner, ceg_li))\ncount['11'] = len(intersect1d(ceg_li,ceg_soellner))\n\nfailed_rnaseq_ceg = setxor1d(ceg_li,ceg_soellner) # not intersection\n\nplt.figure(figsize=(8,4))\n#v = venn3(subsets=(count[\"100\"],count[\"010\"],count[\"110\"],count[\"001\"],count[\"101\"],count[\"011\"],count[\"111\"])\nv = venn2(subsets=(1,1,1)\n , set_labels = ('ceg_li', 'ceg_sollner')) ## A, B, AB, C, AC, AB, ABC\nfor text in v.set_labels:\n text.set_fontsize(16)\nfor com in count:\n v.get_label_by_id(com).set_text(count[com]) # to avoid area weighted by count, firstly set 1 and change text\n v.get_label_by_id(com).set_fontsize(14)\n'''\nv.get_label_by_id('100').set_text('First')\nv.get_label_by_id('010').set_text('Second')\nv.get_label_by_id('001').set_text('Third')\n'''\nplt.title(\"CEG validated with RNA-seq data\")\nsavefig('Fig_gene_exp_overlap_CEG.pdf',format='pdf')\nplt.show()\n ", "_____no_output_____" ], [ "from matplotlib_venn import venn3, venn3_circles,venn2, venn2_circles\ndef dec_to_bin(x,length):\n formatstr = \"%0\"+str(length)+\"d\"\n return formatstr%int(bin(x)[2:])\n\n\n#two set comparison\n\nnumberofset = 2\ncount=dict()\ncount['10'] = len(setdiff1d(neg_li,neg_soellner))\ncount['01'] = len(setdiff1d(neg_soellner, neg_li))\ncount['11'] = len(intersect1d(neg_li,neg_soellner))\n\nfailed_rnaseq_neg = setxor1d(neg_li,neg_soellner) # not intersection\n\nplt.figure(figsize=(8,4))\n#v = venn3(subsets=(count[\"100\"],count[\"010\"],count[\"110\"],count[\"001\"],count[\"101\"],count[\"011\"],count[\"111\"])\nv = venn2(subsets=(1,1,1)\n , set_labels = ('neg_li', 'neg_sollner')) ## A, B, AB, C, 
AC, AB, ABC\nfor text in v.set_labels:\n text.set_fontsize(16)\nfor com in count:\n v.get_label_by_id(com).set_text(count[com]) # to avoid area weighted by count, firstly set 1 and change text\n v.get_label_by_id(com).set_fontsize(14)\n'''\nv.get_label_by_id('100').set_text('First')\nv.get_label_by_id('010').set_text('Second')\nv.get_label_by_id('001').set_text('Third')\n'''\nplt.title(\"NEG validated with RNA-seq data\")\nsavefig('Fig_gene_exp_overlap_NEG.pdf',format='pdf')\nplt.show()\n ", "_____no_output_____" ], [ "ceg_intersect = intersect1d(ceg_li,ceg_soellner)\nneg_intersect = intersect1d(neg_li,neg_soellner)", "_____no_output_____" ], [ "# export mCEG0, mCEG0\npd.DataFrame(ceg_intersect).to_csv(\"mCEG0.txt\",index=False)\npd.DataFrame(neg_intersect).to_csv(\"mNEG0.txt\",index=False)", "_____no_output_____" ] ], [ [ "# Analysis of screens (After BAGEL2 analysis)", "_____no_output_____" ] ], [ [ "# load BF data generated through BAGEL2\nbfdata = pd.DataFrame()\nwith open('bflist_woQR','r') as fp:\n for line in fp:\n line = line.rstrip()\n print (line)\n cell_line=line.split(\"/\")[1].split(\"_\")[1] + \"_\" + line.split(\"/\")[1].split(\".\")[3]\n bfdata[cell_line] = pd.read_csv(line,index_col=0,header=0,sep=\"\\t\")['BF']\n \n#remove controls\n\nbfdata = bfdata.drop(['LacZ','luciferase','EGFP'])", "bagel2_results/MUS003_Renca-HA_Drop-out.foldchange.T24.bf\nbagel2_results/MUS004_4T1-HA_Drop-out.foldchange.T14.bf\nbagel2_results/MUS005_CT26_Drop-out.foldchange.T17.bf\nbagel2_results/MUS006_EMT6-HA_Drop-out.foldchange.T11.bf\nbagel2_results/MUS007_MC38-OVA_Dropout.foldchange.T12.bf\nbagel2_results/MUS009_B16-OVA_Drop-out.foldchange.T17.bf\n" ], [ "bfdata.head(3)", "_____no_output_____" ], [ "# save as table\nbfdata.to_csv('bfdata_woQR',sep='\\t')", "_____no_output_____" ], [ "bfdata_selected = bfdata[['Renca-HA_bf',\n '4T1-HA_bf',\n 'CT26_bf',\n 'EMT6-HA_bf',\n 'MC38-OVA_bf',\n 'B16-OVA_bf']]", "_____no_output_____" ], [ "bfdata_selected.head(3)", "_____no_output_____" ], [ "def quantileNormalize(df_input):\n df = df_input.copy()\n #compute rank\n dic = {}\n for col in df:\n dic[col] = df[col].sort_values(na_position='first').values\n sorted_df = pd.DataFrame(dic)\n #rank = sorted_df.mean(axis = 1).tolist()\n rank = sorted_df.median(axis = 1).tolist()\n #sort\n for col in df:\n # compute percentile rank [0,1] for each score in column \n t = df[col].rank( pct=True, method='max' ).values\n # replace percentile values in column with quantile normalized score\n # retrieve q_norm score using calling rank with percentile value\n df[col] = [ np.nanpercentile( rank, i*100 ) if ~np.isnan(i) else np.nan for i in t ]\n return df", "_____no_output_____" ] ], [ [ "# calculate quantile normalized bf\nbfdata_selected_qt = quantileNormalize(bfdata_selected)", "_____no_output_____" ], [ "# save qtnormed bf\nbfdata_selected_qt.to_csv(\"bf_mouse_selected.qtnorm\")", "_____no_output_____" ] ], [ [ "# load qtnormed bf\nbfdata_selected_qt = pd.read_csv(\"bf_mouse_selected.qtnorm\",index_col=0,header=0)", "_____no_output_____" ], [ "bfdata_selected.mean(axis=1).sort_values()", "_____no_output_____" ], [ "### read replicates\n\nbfdata_col = pd.DataFrame()\nwith open (\"bflist_col_woQR\",\"r\") as fp:\n for line in fp:\n line = line.rstrip()\n cell_line=line.split(\"_\")[1] + \"_\" + line.split(\".\")[2] + \"_\" + line.split(\".\")[3]\n bfdata_col[cell_line] = pd.read_csv(line,header = 0,index_col=0,sep=\"\\t\")['BF']\n \nbfdata_col = bfdata_col.drop(['LacZ','luciferase','EGFP']) ", "_____no_output_____" ], [ 
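The quantileNormalize helper defined above builds a reference distribution from the per-rank median of the column-wise sorted values (the mean-based line is commented out) and maps each column onto it by percentile rank, which is what makes Bayes Factors from different screens directly comparable. A toy illustration, not part of the analysis, using that same helper:

import pandas as pd
toy = pd.DataFrame({'screen_A': [5.0, 1.0, -3.0],
                    'screen_B': [20.0, 2.0, -10.0]})
# After normalization the two columns have (near-)identical value distributions,
# while each gene keeps its within-screen ranking.
print(quantileNormalize(toy))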
"bfdata_col", "_____no_output_____" ] ], [ [ "# quantile normalization\n\nbfdata_selected_qt_col = quantileNormalize(bfdata_col)\nbfdata_selected_qt_col.to_csv(\"bf_mouse_selected_cols.qtnorm\")\n\n", "_____no_output_____" ] ], [ [ "# load qtnormed replicates\n\nbfdata_selected_qt_col = pd.read_csv(\"bf_mouse_selected_cols.qtnorm\",index_col=0,header=0)", "_____no_output_____" ], [ "# distribution of mCEG0 and mNEG0 in essentiality (Bayes Factor) data\n\nbins = numpy.linspace(-100, 150, 51)\nhist(bfdata_selected.mean(axis=1),bins,log=True,color='grey')\nhist(bfdata_selected.loc[intersect1d(bfdata_selected.index,ceg_intersect)].mean(axis=1).dropna(),bins,log=True,color='red',alpha=0.5)\nhist(bfdata_selected.loc[intersect1d(bfdata_selected.index,neg_intersect)].mean(axis=1).dropna(),bins,log=True,color='blue',alpha=0.5)\nxlabel('Mean Bayes Factor',size=16)\nylabel('Count',size=16)\nxticks(size=14)\nyticks(size=14)\nlegend(['All','CEG','NEG'])\nsavefig(\"Fig_bf_histogram_ceg_neg.pdf\",format='pdf')\nshow()\n\n", "_____no_output_____" ], [ "# The number of cells where a gene is essential\n\nfig,(ax,ax2) = plt.subplots(2, 1, sharex=True)\n\nbfdata_selected_count = bfdata_selected[bfdata_selected>5].count(axis=1)\nbfdata_selected_ess_count = bfdata_selected_count[ceg_intersect]\nbfdata_selected_noness_count = bfdata_selected_count[neg_intersect]\n\nfor i in range(9):\n ax.bar([i],[len(bfdata_selected_count[bfdata_selected_count==i])],align='center',color='grey')\n ax2.bar([i],[len(bfdata_selected_count[bfdata_selected_count==i])],align='center',color='grey')\n ax2.bar([i],[len(bfdata_selected_ess_count[bfdata_selected_ess_count==i])],align='center',color='red')\n bottom = len(bfdata_selected_ess_count[bfdata_selected_ess_count==i])\n ax2.bar([i],[len(bfdata_selected_noness_count[bfdata_selected_noness_count==i])],align='center',color='blue',bottom=bottom)\nxlabel('The number of cell lines',size=16)\nxticks(range(0,7),size=14)\n\n\n# Delete ticks between plots\nax.spines['bottom'].set_visible(False)\nax2.spines['top'].set_visible(False)\nax.xaxis.tick_top()\nax.tick_params(labeltop='off')\nax2.xaxis.tick_bottom()\n\n# set coverage of each axis\nax.set_ylim([15000,16500])\nax2.set_ylim([0,1500])\nxlim([-1,7])\n# make smaller gap\nsubplots_adjust(wspace=0.10)\nax.set_title(\"Essentiality of genes\")\n\nsavefig(\"Fig_essential_genes_cell_count.pdf\",format='pdf')\n\nshow()", "_____no_output_____" ], [ "# Essential at six cell lines\nprint (\",\".join(bfdata_selected_count[bfdata_selected_count==6].index))", 
"1110004E09Rik,1110037F02Rik,1810026J23Rik,2700060E02Rik,AW822073,Aars,Aars2,Aasdhppt,Abce1,Abt1,Actl6a,Actr10,Actr2,Actr3,Actr6,Adat2,Adat3,Adsl,Ahctf1,Ahcy,Ak6,Aldoa,Alg1,Alg11,Alg14,Alg2,Alyref,Anapc1,Anapc10,Anapc11,Anapc2,Anapc4,Anapc5,Ankle2,Aqr,Arcn1,Arfrp1,Arl2,Armc7,Arpc4,Asna1,Atad3a,Atic,Atp1a1,Atp2a2,Atp5a1,Atp5b,Atp5c1,Atp5d,Atp5j,Atp5j2,Atp5k,Atp5o,Atp6v0b,Atp6v0c,Atp6v0d1,Atp6v1a,Atp6v1b2,Atp6v1c1,Atp6v1e1,Atp6v1f,Atp6v1g1,Atrip,Aurka,Aurkaip1,Aurkb,Banf1,Bard1,Bcas2,Bccip,Bcs1l,Birc5,Bms1,Bop1,Bora,Brf1,Brf2,Brix1,Btf3,Bub1b,Bub3,Bud31,Bysl,C130026I21Rik,C1d,Cad,Caml,Capzb,Cars2,Casc5,Ccdc115,Ccdc12,Ccdc84,Ccdc86,Ccdc94,Ccna2,Ccnb1,Ccnh,Ccnk,Cct2,Cct3,Cct4,Cct5,Cct7,Cct8,Cd3eap,Cdc16,Cdc26,Cdc27,Cdc37,Cdc42,Cdc45,Cdc5l,Cdc6,Cdc73,Cdca8,Cdipt,Cdk1,Cdk12,Cdk7,Cdk9,Cds2,Cebpz,Cenpa,Cenpe,Cenph,Cenpi,Cenpk,Cenpl,Cenpm,Cenpn,Cenpo,Cenpp,Cenpw,Chaf1b,Chek1,Chmp4b,Chmp6,Chordc1,Ciao1,Cinp,Cirh1a,Ckap5,Clns1a,Clp1,Cmtr1,Cnot1,Cnot3,Coasy,Cog1,Cog3,Cog7,Copa,Copb1,Copb2,Cops2,Cops3,Cops4,Cops5,Cops6,Copz1,Cox20,Cox4i1,Cox6c,Cpsf1,Cpsf2,Cpsf3,Cpsf3l,Cpsf4,Cpsf6,Crls1,Crnkl1,Cse1l,Cstf3,Ctcf,Ctdp1,Ctnnbl1,Ctps,Ctr9,Cyc1,D2Wsu81e,Dad1,Dars,Dars2,Dbr1,Dcaf13,Dctn2,Dctn4,Dctn5,Ddb1,Ddost,Ddx1,Ddx10,Ddx18,Ddx19a,Ddx20,Ddx21,Ddx24,Ddx27,Ddx3x,Ddx41,Ddx47,Ddx49,Ddx51,Ddx52,Ddx54,Ddx55,Ddx56,Ddx59,Deb1,Dhdds,Dhfr,Dhodh,Dhps,Dhx15,Dhx33,Dhx37,Dhx9,Dimt1,Dis3,Dkc1,Dmap1,Dna2,Dnaaf5,Dnaja3,Dnajc17,Dnajc2,Dnajc8,Dnlz,Dnm1l,Dnm2,Dohh,Dolk,Dpagt1,Dph2,Dph5,Dph6,Drap1,Dtl,Dtymk,Dut,Duxf3,Dync1h1,Dync1i2,Ears2,Ecd,Ect2,Eef1a1,Eef1e1,Eef1g,Eef2,Eftud1,Eftud2,Eif1ad,Eif2b1,Eif2b2,Eif2b3,Eif2b4,Eif2b5,Eif2s1,Eif2s2,Eif2s3x,Eif3b,Eif3c,Eif3d,Eif3e,Eif3f,Eif3g,Eif3i,Eif3m,Eif4a1,Eif4a3,Eif5,Eif5a,Eif6,Elac2,Elp6,Emg1,Eprs,Eral1,Ercc2,Ercc3,Erh,Esco2,Esf1,Espl1,Etf1,Exosc10,Exosc2,Exosc3,Exosc4,Exosc5,Exosc7,Exosc8,Exosc9,Fam210a,Fam50a,Fam96b,Fars2,Farsa,Farsb,Fau,Fbl,Fbxo5,Fcf1,Fdx1l,Fdxr,Fen1,Fignl1,Fnta,Fntb,Ftsj3,Fxn,Gak,Gapdh,Gar1,Gars,Gart,Gatb,Gemin4,Gemin6,Gemin8,Ggps1,Gins2,Gins3,Gins4,Glrx5,Gltscr2,Gmppb,Gmps,Gnb1l,Gnb2l1,Gnl2,Gnl3,Gnl3l,Gpkow,Gpn1,Gpn2,Gpn3,Gps1,Grpel1,Grwd1,Gtf2a2,Gtf2b,Gtf2f2,Gtf2h1,Gtf2h2,Gtf2h3,Gtf3c1,Gtf3c3,Gtf3c5,Gtpbp4,Guk1,H2afz,Hars,Hars2,Haus1,Haus2,Haus4,Haus5,Haus6,Haus7,Hcfc1,Hdac3,Heatr1,Hgs,Hinfp,Hjurp,Hmgcs1,Hnrnpc,Hnrnpk,Hnrnpl,Hnrnpu,Hscb,Hsd17b10,Hspa5,Hspa8,Hspa9,Hspd1,Hus1,Hyou1,Hypk,Iars,Iars2,Ice1,Idi1,Igbp1,Imp3,Imp4,Impdh2,Incenp,Ints2,Ints3,Ints4,Ints7,Ints9,Ipo11,Ipo13,Ipo7,Iscu,Isg20l2,Isy1,Kansl1,Kars,Kat5,Kat8,Kdm8,Kif11,Kif18b,Kif23,Kif4,Kin,Kpnb1,Kri1,Krr1,Lage3,Las1l,Letm1,Lin52,Lin54,Lonp1,Lrr1,Lsg1,Lsm10,Lsm2,Lsm3,Lsm4,Lsm5,Lsm7,Lsm8,Ltv1,Luc7l3,Lyrm4,Mad2l1,Magoh,Mak16,Mars,Mars2,Mastl,Mbtps1,Mcm2,Mcm3,Mcm4,Mcm5,Mcm6,Mcm7,Mcmbp,Mcrs1,Mdn1,Mecr,Med11,Med14,Med20,Med22,Med26,Med6,Med8,Mepce,Metap1,Metap2,Mettl14,Mettl16,Mettl3,Mis12,Mis18a,Mis18bp1,Mms19,Mms22l,Mphosph10,Mphosph6,Mre11a,Mrpl10,Mrpl11,Mrpl12,Mrpl13,Mrpl17,Mrpl18,Mrpl20,Mrpl22,Mrpl23,Mrpl24,Mrpl3,Mrpl34,Mrpl35,Mrpl36,Mrpl37,Mrpl38,Mrpl39,Mrpl4,Mrpl40,Mrpl41,Mrpl42,Mrpl43,Mrpl45,Mrpl46,Mrpl47,Mrpl48,Mrpl49,Mrpl51,Mrpl52,Mrpl57,Mrps11,Mrps12,Mrps14,Mrps15,Mrps16,Mrps18a,Mrps2,Mrps24,Mrps25,Mrps27,Mrps30,Mrps34,Mrps5,Mrps6,Mrps7,Mrto4,Mtg2,Mthfd1,Mtrr,Mvd,Mvk,Mybbp1a,Myc,Myh9,N6amt1,Naa20,Naa25,Naa50,Naca,Nae1,Naf1,Napa,Narfl,Nars,Nars2,Nat10,Ncapd2,Ncapd3,Ncapg,Ncapg2,Ncaph2,Ncbp1,Ncbp2,Ncl,Ndc80,Ndnl2,Ndor1,Ndufab1,Nedd1,Nedd8,Nelfb,Nfs1,Nfyc,Ngdn,Nhp2,Nhp2l1,Nifk,Nip7,Nkap,Nle1,Nmd3,Nob1,Noc2l,Noc3l,Noc4l,Nol10,Nol11,Nol6,Nol9,Nom1,Nop14,Nop16,Nop2,Nop56,Nop58,Nop9,Npat,Nploc4,Nr
f1,Nsa2,Nsf,Nubp1,Nubp2,Nudc,Nudcd3,Nudt21,Nuf2,Numa1,Nup107,Nup153,Nup160,Nup205,Nup214,Nup43,Nup85,Nup88,Nus1,Nutf2,Nvl,Nxf1,Ogdh,Oip5,Oraov1,Orc1,Orc4,Orc5,Osbp,Osgep,Oxa1l,Oxsm,Pabpc1,Pabpn1,Paf1,Pafah1b1,Paics,Pak1ip1,Palb2,Pam16,Parn,Pars2,Pcid2,Pcna,Pdcd11,Pdcd7,Pelo,Pelp1,Peo1,Pes1,Pfas,Pfdn2,Pfn1,Pgam1,Pgd,Pggt1b,Pgk1,Pgs1,Phax,Phb,Phb2,Phf5a,Pik3r4,Pkm,Plk1,Plrg1,Pmf1,Pmpcb,Pmvk,Pnkp,Pnn,Pno1,Pnpt1,Pola2,Pold1,Pold2,Pold3,Pole,Pole2,Polg,Polg2,Polr1a,Polr1b,Polr1c,Polr1d,Polr1e,Polr2c,Polr2d,Polr2e,Polr2f,Polr2g,Polr2h,Polr2i,Polr2j,Polr2l,Polr3a,Polr3b,Polr3c,Polr3d,Polr3e,Polr3h,Polr3k,Polrmt,Pomp,Pop4,Pop5,Pop7,Ppa1,Ppan,Ppat,Ppil2,Ppil4,Ppp1cb,Ppp1r10,Ppp1r11,Ppp1r12a,Ppp1r15b,Ppp1r7,Ppp1r8,Ppp2ca,Ppp4c,Ppwd1,Prc1,Preb,Prelid1,Prim1,Prim2,Prkrip1,Prkrir,Prmt1,Prmt5,Prpf19,Prpf31,Prpf38a,Prpf38b,Prpf4,Prpf6,Prpf8,Psma1,Psma2,Psma3,Psma4,Psma5,Psma6,Psma7,Psmb1,Psmb2,Psmb3,Psmb4,Psmb6,Psmb7,Psmc1,Psmc2,Psmc3,Psmc4,Psmc5,Psmc6,Psmd1,Psmd11,Psmd12,Psmd14,Psmd3,Psmd4,Psmd6,Psmd7,Psmd8,Psmg3,Psmg4,Pwp2,Pyroxd1,Qars,Qrsl1,Rab7,Rabggta,Rabggtb,Racgap1,Rad1,Rad17,Rad21,Rad50,Rad51,Rad9a,Rae1,Ran,Rangap1,Rars,Rbbp4,Rbbp5,Rbbp6,Rbm14,Rbm19,Rbm22,Rbm25,Rbm39,Rbm48,Rbmx2,Rbx1,Rcc1,Rcl1,Recql4,Rfc2,Rfc3,Rfc5,Rft1,Rint1,Riok1,Rnasek,Rnf20,Rnf40,Rngtt,Rnmt,Romo1,Rpa1,Rpa2,Rpain,Rpap2,Rpf2,Rpia,Rpl10,Rpl10a,Rpl11,Rpl12,Rpl13,Rpl13a,Rpl14,Rpl17,Rpl18,Rpl18a,Rpl19,Rpl23,Rpl23a,Rpl24,Rpl27a,Rpl3,Rpl30,Rpl31,Rpl32,Rpl34,Rpl35,Rpl35a,Rpl37,Rpl37a,Rpl38,Rpl4,Rpl5,Rpl7,Rpl7a,Rpl7l1,Rpl8,Rplp0,Rpn1,Rpp21,Rpp30,Rpp38,Rpp40,Rps11,Rps12,Rps13,Rps14,Rps15,Rps15a,Rps16,Rps17,Rps19,Rps2,Rps20,Rps21,Rps25,Rps26,Rps27,Rps27a,Rps29,Rps3,Rps4x,Rps5,Rps7,Rps8,Rps9,Rpsa,Rpusd4,Rrm1,Rrm2,Rrp1,Rrp12,Rrp15,Rrp36,Rrp7a,Rrp9,Rrs1,Rsl1d1,Rsl24d1,Rtcb,Rtfdc1,Ruvbl1,Ruvbl2,Sacm1l,Sae1,Samm50,Sap30bp,Sars,Sart3,Sbds,Scap,Scfd1,Sdad1,Sde2,Sdhb,Sec13,Sec22b,Sec61a1,Seh1l,Sepsecs,Setd1a,Sf1,Sf3a1,Sf3a2,Sf3a3,Sf3b2,Sf3b3,Sf3b4,Sf3b5,Sf3b6,Sfi1,Sfpq,Shq1,Ska1,Ska2,Ska3,Skiv2l2,Skp1a,Slc25a3,Slc35b1,Slc3a2,Slc7a6,Slc7a6os,Slmo2,Slu7,Smc1a,Smc2,Smc3,Smc4,Smc6,Smg5,Smu1,Snap23,Snapc1,Snapc3,Snapc4,Snapc5,Snf8,Snip1,Snrnp200,Snrnp25,Snrnp70,Snrpa,Snrpa1,Snrpb,Snrpd1,Snrpd2,Snrpd3,Snrpe,Snrpg,Snupn,Snw1,Sod1,Spata5,Spc24,Spc25,Srbd1,Srcap,Srp14,Srp72,Srrm1,Srsf1,Srsf2,Srsf3,Ssrp1,Ssu72,Strap,Stx5a,Suds3,Sugt1,Supt16,Supt4a,Supt5,Supt6,Supv3l1,Sympk,Sys1,Taf10,Taf11,Taf12,Taf13,Taf1a,Taf1b,Taf1c,Taf2,Taf3,Taf5,Taf6,Tamm41,Tango6,Tars,Tars2,Tbca,Tbcb,Tbce,Tbl3,Tbp,Tcp1,Telo2,Terf1,Terf2,Tex10,Tfrc,Thap11,Thg1l,Thoc1,Thoc2,Thoc3,Ticrr,Timeless,Timm13,Timm22,Timm23,Timm44,Timm50,Tinf2,Tma16,Tmem258,Tnpo1,Tnpo3,Toe1,Tomm40,Tomm70a,Tonsl,Top1,Top2a,Top3a,Topbp1,Tpi1,Traip,Trappc1,Trappc11,Trappc3,Trappc4,Trappc5,Trappc8,Trmt10c,Trmt112,Trmt5,Trmt6,Trmt61a,Trnt1,Trpm7,Tsen2,Tsen54,Tsfm,Tsg101,Tsr1,Tsr2,Ttc1,Ttc27,Ttc4,Ttf1,Tti1,Tti2,Ttk,Tubb5,Tubg1,Tubgcp2,Tubgcp4,Tufm,Tut1,Twistnb,Txn1,Txnl4a,Txnl4b,U2af1,U2af2,Uba1,Uba2,Uba3,Ube2i,Ubl5,Ubtf,Ufd1l,Uhrf1,Umps,Upf1,Upf2,Uqcrb,Uqcrc2,Uqcrfs1,Uqcrq,Urb1,Urb2,Uri1,Urod,Uso1,Usp36,Usp39,Utp11l,Utp15,Utp18,Utp20,Utp23,Utp3,Utp6,Uxt,Vars,Vars2,Vcp,Vhl,Vmp1,Vprbp,Vps25,Vps29,Vps4b,Vps72,Wac,Wars,Wars2,Wbp11,Wbscr22,Wdr1,Wdr12,Wdr18,Wdr25,Wdr3,Wdr33,Wdr36,Wdr43,Wdr46,Wdr5,Wdr55,Wdr61,Wdr7,Wdr73,Wdr74,Wdr75,Wdr77,Wdr82,Wdr92,Wee1,Wrap53,Wrb,Xab2,Xpo1,Xrn2,Yae1d1,Yars,Yars2,Yeats2,Yeats4,Ykt6,Yrdc,Ythdc1,Zbtb8os,Zcchc9,Zfp131,Zfp207,Zfp407,Zfp830,Zmat2,Zmat5,Znhit2,Znhit6,Znrd1,Zpr1,Zzz3\n" ], [ "# Essential at six cell lines\nessgenes = bfdata_selected_count[bfdata_selected_count==6].index", 
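Two optional sanity checks around this definition, neither of which is part of the original run. First, the BF > 5 cutoff can be scored against the mCEG0/mNEG0 reference sets defined earlier (helper name is mine); second, the requirement that a gene score BF > 5 in all 6 screens can be relaxed to see how stable the core set is:

def precision_recall_at(bf_series, ess_ref, noness_ref, threshold=5.0):
    # Precision/recall restricted to the labelled reference genes present in the screen.
    called = set(bf_series[bf_series > threshold].index)
    tp = len(called & set(ess_ref))
    fp = len(called & set(noness_ref))
    fn = len(set(ess_ref) & set(bf_series.index)) - tp
    precision = tp / (tp + fp) if (tp + fp) else float('nan')
    recall = tp / (tp + fn) if (tp + fn) else float('nan')
    return precision, recall

# e.g. for one screen:
# precision_recall_at(bfdata_selected['CT26_bf'], ceg_intersect, neg_intersect)

# Size of the candidate core set, and its overlap with mCEG0, as the required
# number of screens is relaxed from 6 to 4 (illustrative only).
for k in range(6, 3, -1):
    core_k = bfdata_selected_count[bfdata_selected_count >= k].index
    print(k, len(core_k), len(intersect1d(core_k, ceg_intersect)))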
"_____no_output_____" ], [ "from matplotlib_venn import venn3, venn3_circles,venn2, venn2_circles\ndef dec_to_bin(x,length):\n formatstr = \"%0\"+str(length)+\"d\"\n return formatstr%int(bin(x)[2:])\n\n\n\n#two set comparison\n\nnumberofset = 2\ncount=dict()\ncount['10'] = len(setdiff1d(essgenes,ceg_intersect))\ncount['01'] = len(setdiff1d(ceg_intersect, essgenes))\ncount['11'] = len(intersect1d(essgenes,ceg_intersect))\n\nplt.figure(figsize=(8,4))\n#v = venn3(subsets=(count[\"100\"],count[\"010\"],count[\"110\"],count[\"001\"],count[\"101\"],count[\"011\"],count[\"111\"])\nv = venn2(subsets=(1,1,1)\n , set_labels = ('Essential genes', 'mCEG0')) ## A, B, AB, C, AC, AB, ABC\nfor text in v.set_labels:\n text.set_fontsize(16)\nfor com in count:\n v.get_label_by_id(com).set_text(count[com]) # to avoid area weighted by count, firstly set 1 and change text\n v.get_label_by_id(com).set_fontsize(14)\n'''\nv.get_label_by_id('100').set_text('First')\nv.get_label_by_id('010').set_text('Second')\nv.get_label_by_id('001').set_text('Third')\n'''\nplt.title(\"Essential genes(= 6 cells, BF > 5) and mCEG0\")\nsavefig('Fig_CEG_ESS_overlap.pdf',format='pdf')\nplt.show()\n ", "_____no_output_____" ], [ "pd.DataFrame(essgenes).to_csv(\"essgenes_6cells.txt\",index=False)", "_____no_output_____" ], [ "# define mCEG1 from screens\n# Essential at 6 cell lines + mean expression > 1\n\n\nexp_gene_soellner = entrez2symbol_mouse[ensembl2entrez[soellner_etal_rpkm_log[soellner_etal_rpkm_log.mean(axis=1)>1].index]]\nexp_gene_li = entrez2symbol_mouse[ensembl2entrez[li_etal_fpkm_log[li_etal_fpkm_log.mean(axis=1)>1].index]]\nmCEG1_mean1 = intersect1d(essgenes,intersect1d(list(exp_gene_soellner.values),list(exp_gene_li.values)))\nprint (len(mCEG1_mean1))\nprint (\",\".join(mCEG1_mean1))", 
"1045\nAars,Aars2,Aasdhppt,Abce1,Abt1,Actl6a,Actr10,Actr2,Actr3,Actr6,Adat2,Adsl,Ahctf1,Ahcy,Ak6,Aldoa,Alg1,Alg11,Alg14,Alg2,Alyref,Anapc1,Anapc10,Anapc11,Anapc2,Anapc4,Anapc5,Ankle2,Aqr,Arcn1,Arfrp1,Arl2,Armc7,Arpc4,Asna1,Atad3a,Atic,Atp1a1,Atp2a2,Atp5a1,Atp5b,Atp5c1,Atp5d,Atp5j,Atp5j2,Atp5k,Atp5o,Atp6v0b,Atp6v0c,Atp6v0d1,Atp6v1a,Atp6v1b2,Atp6v1c1,Atp6v1e1,Atp6v1f,Atp6v1g1,Atrip,Aurka,Aurkaip1,Aurkb,Banf1,Bcas2,Bccip,Bcs1l,Birc5,Bms1,Bop1,Brf1,Brf2,Brix1,Btf3,Bub1b,Bub3,Bud31,Bysl,C1d,Cad,Caml,Capzb,Cars2,Ccdc115,Ccdc12,Ccdc84,Ccdc86,Ccna2,Ccnh,Ccnk,Cct2,Cct3,Cct4,Cct5,Cct7,Cct8,Cd3eap,Cdc16,Cdc26,Cdc27,Cdc37,Cdc42,Cdc45,Cdc5l,Cdc73,Cdca8,Cdipt,Cdk1,Cdk12,Cdk7,Cdk9,Cds2,Cebpz,Cenpa,Cenpl,Cenpo,Chmp4b,Chmp6,Chordc1,Ciao1,Cinp,Ckap5,Clns1a,Clp1,Cmtr1,Cnot1,Cnot3,Coasy,Cog1,Cog3,Cog7,Copa,Copb1,Copb2,Cops2,Cops3,Cops4,Cops5,Cops6,Copz1,Cox20,Cox6c,Cpsf1,Cpsf2,Cpsf3,Cpsf4,Cpsf6,Crls1,Crnkl1,Cse1l,Cstf3,Ctcf,Ctdp1,Ctnnbl1,Ctps,Ctr9,Cyc1,Dad1,Dars,Dars2,Dbr1,Dcaf13,Dctn2,Dctn4,Dctn5,Ddb1,Ddost,Ddx1,Ddx10,Ddx18,Ddx19a,Ddx20,Ddx21,Ddx24,Ddx27,Ddx3x,Ddx41,Ddx47,Ddx49,Ddx51,Ddx52,Ddx54,Ddx55,Ddx56,Ddx59,Dhdds,Dhfr,Dhodh,Dhps,Dhx15,Dhx33,Dhx37,Dhx9,Dis3,Dkc1,Dmap1,Dna2,Dnaaf5,Dnaja3,Dnajc17,Dnajc2,Dnajc8,Dnlz,Dnm1l,Dnm2,Dohh,Dolk,Dpagt1,Dph2,Dph5,Dph6,Drap1,Dtymk,Dync1h1,Dync1i2,Ears2,Ecd,Ect2,Eef1a1,Eef1e1,Eef1g,Eef2,Eftud2,Eif1ad,Eif2b1,Eif2b2,Eif2b3,Eif2b4,Eif2b5,Eif2s1,Eif2s2,Eif3b,Eif3c,Eif3d,Eif3e,Eif3f,Eif3g,Eif3i,Eif3m,Eif4a1,Eif4a3,Eif5,Eif5a,Eif6,Elac2,Elp6,Emg1,Eprs,Eral1,Ercc2,Ercc3,Erh,Esf1,Etf1,Exosc10,Exosc2,Exosc3,Exosc4,Exosc5,Exosc7,Exosc8,Exosc9,Fam210a,Fam50a,Fam96b,Fars2,Farsa,Farsb,Fau,Fbl,Fbxo5,Fcf1,Fdx1l,Fdxr,Fen1,Fnta,Fntb,Ftsj3,Fxn,Gak,Gapdh,Gar1,Gars,Gart,Gatb,Gemin6,Ggps1,Gins4,Glrx5,Gmppb,Gmps,Gnl2,Gnl3,Gnl3l,Gpkow,Gpn1,Gpn2,Gpn3,Gps1,Grpel1,Grwd1,Gtf2a2,Gtf2b,Gtf2f2,Gtf2h1,Gtf2h2,Gtf2h3,Gtf3c1,Gtf3c3,Gtf3c5,Gtpbp4,Guk1,H2afz,Hars,Hars2,Haus1,Haus2,Haus4,Haus5,Haus7,Hcfc1,Hdac3,Heatr1,Hinfp,Hjurp,Hmgcs1,Hnrnpc,Hnrnpk,Hnrnpl,Hnrnpu,Hscb,Hsd17b10,Hspa5,Hspa8,Hspa9,Hspd1,Hus1,Hyou1,Hypk,Iars,Iars2,Ice1,Igbp1,Imp3,Imp4,Impdh2,Incenp,Ints2,Ints3,Ints4,Ints7,Ints9,Ipo11,Ipo13,Ipo7,Iscu,Isg20l2,Isy1,Kansl1,Kars,Kat5,Kat8,Kif11,Kif23,Kin,Kpnb1,Kri1,Krr1,Lage3,Las1l,Letm1,Lin52,Lin54,Lonp1,Lsg1,Lsm10,Lsm2,Lsm3,Lsm4,Lsm5,Lsm8,Ltv1,Luc7l3,Lyrm4,Mad2l1,Magoh,Mak16,Mars,Mbtps1,Mcm2,Mcm3,Mcm4,Mcm5,Mcm6,Mcm7,Mcmbp,Mcrs1,Mdn1,Mecr,Med11,Med14,Med20,Med22,Med26,Med6,Med8,Mepce,Metap1,Metap2,Mettl14,Mettl16,Mettl3,Mis12,Mis18a,Mms19,Mphosph10,Mphosph6,Mre11a,Mrpl10,Mrpl11,Mrpl12,Mrpl13,Mrpl17,Mrpl18,Mrpl20,Mrpl22,Mrpl24,Mrpl3,Mrpl34,Mrpl35,Mrpl36,Mrpl37,Mrpl38,Mrpl39,Mrpl4,Mrpl40,Mrpl41,Mrpl42,Mrpl43,Mrpl45,Mrpl46,Mrpl47,Mrpl48,Mrpl49,Mrpl51,Mrpl52,Mrpl57,Mrps11,Mrps12,Mrps14,Mrps15,Mrps16,Mrps18a,Mrps2,Mrps24,Mrps25,Mrps27,Mrps30,Mrps34,Mrps5,Mrps6,Mrps7,Mrto4,Mtg2,Mthfd1,Mtrr,Mvk,Mybbp1a,Myc,Myh9,N6amt1,Naa20,Naa25,Naa50,Naca,Nae1,Naf1,Napa,Narfl,Nars,Nars2,Nat10,Ncapd2,Ncapd3,Ncaph2,Ncbp1,Ncbp2,Ncl,Ndor1,Ndufab1,Nedd1,Nedd8,Nfs1,Nfyc,Ngdn,Nhp2,Nifk,Nip7,Nkap,Nle1,Nmd3,Nob1,Noc2l,Noc3l,Noc4l,Nol10,Nol11,Nol6,Nol9,Nom1,Nop14,Nop16,Nop2,Nop56,Nop58,Nop9,Npat,Nploc4,Nrf1,Nsa2,Nsf,Nubp1,Nubp2,Nudc,Nudcd3,Nuf2,Numa1,Nup107,Nup153,Nup160,Nup205,Nup214,Nup43,Nup85,Nup88,Nus1,Nutf2,Nvl,Ogdh,Oraov1,Orc4,Orc5,Osbp,Osgep,Oxa1l,Oxsm,Pabpc1,Pabpn1,Paf1,Pafah1b1,Paics,Pak1ip1,Parn,Pars2,Pcid2,Pcna,Pdcd11,Pdcd7,Pelo,Pelp1,Pes1,Pfas,Pfdn2,Pfn1,Pgam1,Pgd,Pggt1b,Pgk1,Pgs1,Phax,Phb,Phb2,Phf5a,Pik3r4,Pkm,Plk1,Plrg1,Pmf1,Pmpcb,Pmvk,Pnkp,Pnn,Pno1,Pnpt1,Pola2,Pold1,Pold2,Pold3,Polg,Polg2,Polr1a,Polr1b,Polr1c,P
olr1d,Polr1e,Polr2c,Polr2d,Polr2e,Polr2f,Polr2g,Polr2h,Polr2i,Polr2j,Polr2l,Polr3a,Polr3b,Polr3c,Polr3d,Polr3e,Polr3h,Polr3k,Polrmt,Pomp,Pop4,Pop5,Pop7,Ppa1,Ppan,Ppat,Ppil2,Ppil4,Ppp1cb,Ppp1r10,Ppp1r11,Ppp1r12a,Ppp1r15b,Ppp1r7,Ppp1r8,Ppp2ca,Ppp4c,Ppwd1,Prc1,Preb,Prelid1,Prim1,Prim2,Prkrip1,Prmt5,Prpf19,Prpf31,Prpf38a,Prpf38b,Prpf4,Prpf6,Prpf8,Psma1,Psma2,Psma3,Psma4,Psma5,Psma6,Psma7,Psmb1,Psmb2,Psmb3,Psmb4,Psmb6,Psmb7,Psmc1,Psmc2,Psmc3,Psmc4,Psmc5,Psmc6,Psmd1,Psmd11,Psmd12,Psmd14,Psmd3,Psmd4,Psmd6,Psmd7,Psmd8,Psmg3,Psmg4,Pwp2,Pyroxd1,Qars,Qrsl1,Rab7,Rabggta,Rabggtb,Racgap1,Rad1,Rad17,Rad21,Rad50,Rad9a,Rae1,Ran,Rangap1,Rars,Rbbp4,Rbbp5,Rbbp6,Rbm14,Rbm22,Rbm25,Rbm39,Rbm48,Rbx1,Rcc1,Rcl1,Rfc2,Rfc3,Rfc5,Rft1,Rint1,Riok1,Rnasek,Rnf20,Rnf40,Rngtt,Rnmt,Romo1,Rpa1,Rpa2,Rpain,Rpap2,Rpf2,Rpia,Rpl10,Rpl10a,Rpl11,Rpl12,Rpl13a,Rpl14,Rpl17,Rpl18,Rpl18a,Rpl23,Rpl27a,Rpl3,Rpl30,Rpl31,Rpl32,Rpl34,Rpl35,Rpl35a,Rpl37,Rpl37a,Rpl38,Rpl4,Rpl5,Rpl7,Rpl7a,Rpl7l1,Rpl8,Rplp0,Rpn1,Rpp21,Rpp30,Rpp38,Rpp40,Rps11,Rps13,Rps14,Rps15,Rps15a,Rps16,Rps17,Rps19,Rps2,Rps20,Rps21,Rps25,Rps26,Rps27a,Rps29,Rps3,Rps4x,Rps5,Rps7,Rps8,Rps9,Rpusd4,Rrm1,Rrm2,Rrp1,Rrp12,Rrp15,Rrp36,Rrp7a,Rrp9,Rrs1,Rsl1d1,Rsl24d1,Rtcb,Ruvbl1,Ruvbl2,Sacm1l,Sae1,Samm50,Sap30bp,Sars,Sart3,Sbds,Scap,Scfd1,Sdad1,Sde2,Sdhb,Sec13,Sec22b,Sec61a1,Seh1l,Sepsecs,Setd1a,Sf1,Sf3a1,Sf3a2,Sf3a3,Sf3b2,Sf3b3,Sf3b4,Sf3b5,Sf3b6,Sfi1,Sfpq,Shq1,Ska2,Skiv2l2,Skp1a,Slc25a3,Slc3a2,Slc7a6,Slc7a6os,Slu7,Smc1a,Smc2,Smc3,Smc4,Smc6,Smg5,Smu1,Snap23,Snapc1,Snapc3,Snapc4,Snapc5,Snf8,Snip1,Snrnp200,Snrnp25,Snrnp70,Snrpa,Snrpa1,Snrpb,Snrpd1,Snrpd2,Snrpd3,Snrpe,Snrpg,Snupn,Snw1,Sod1,Spata5,Spc24,Spc25,Srbd1,Srcap,Srp14,Srp72,Srrm1,Srsf1,Srsf2,Srsf3,Ssrp1,Ssu72,Strap,Stx5a,Suds3,Sugt1,Supt16,Supt4a,Supt5,Supt6,Supv3l1,Sympk,Sys1,Taf10,Taf11,Taf12,Taf13,Taf1b,Taf1c,Taf2,Taf3,Taf5,Taf6,Tamm41,Tars,Tars2,Tbca,Tbcb,Tbce,Tbl3,Tbp,Tcp1,Telo2,Terf1,Terf2,Tex10,Tfrc,Thap11,Thg1l,Thoc1,Thoc2,Thoc3,Timeless,Timm13,Timm22,Timm23,Timm44,Timm50,Tinf2,Tma16,Tmem258,Tnpo1,Tnpo3,Toe1,Tomm40,Tomm70a,Tonsl,Top1,Top2a,Top3a,Topbp1,Tpi1,Trappc1,Trappc11,Trappc3,Trappc4,Trappc5,Trappc8,Trmt10c,Trmt5,Trmt6,Trmt61a,Trnt1,Trpm7,Tsen2,Tsen54,Tsfm,Tsg101,Tsr1,Tsr2,Ttc1,Ttc27,Ttc4,Ttf1,Tti1,Tti2,Tubb5,Tubg1,Tubgcp2,Tubgcp4,Tufm,Tut1,Twistnb,Txn1,Txnl4a,Txnl4b,U2af1,U2af2,Uba1,Uba2,Uba3,Ube2i,Ubl5,Ubtf,Uhrf1,Umps,Upf1,Upf2,Uqcrb,Uqcrc2,Uqcrfs1,Uqcrq,Uri1,Urod,Uso1,Usp36,Usp39,Utp15,Utp18,Utp20,Utp23,Utp3,Utp6,Uxt,Vars,Vars2,Vcp,Vhl,Vmp1,Vps25,Vps29,Vps4b,Vps72,Wac,Wars,Wars2,Wbp11,Wdr1,Wdr12,Wdr18,Wdr3,Wdr33,Wdr36,Wdr43,Wdr46,Wdr5,Wdr55,Wdr61,Wdr7,Wdr73,Wdr74,Wdr75,Wdr77,Wdr82,Wdr92,Wee1,Wrap53,Wrb,Xab2,Xpo1,Xrn2,Yae1d1,Yars,Yars2,Yeats2,Yeats4,Ykt6,Yrdc,Ythdc1,Zbtb8os,Zcchc9,Zfp131,Zfp207,Zfp830,Zmat2,Zmat5,Znhit2,Znhit6,Znrd1,Zpr1,Zzz3\n" ], [ "# check discarded genes\nnonexp_soellner = entrez2symbol_mouse[ensembl2entrez[soellner_etal_rpkm_log[soellner_etal_rpkm_log.mean(axis=1)<=1].index]]\nnonexp_li = entrez2symbol_mouse[ensembl2entrez[li_etal_fpkm_log[li_etal_fpkm_log.mean(axis=1)<=1].index]]\nmCEG1_belowmean1_both = intersect1d(essgenes,intersect1d(list(nonexp_soellner.values),list(nonexp_li.values)))\nmCEG1_belowmean1_either = setdiff1d(intersect1d(essgenes,union1d(list(nonexp_soellner.values),list(nonexp_li.values))),mCEG1_belowmean1_both)\nprint (len(mCEG1_belowmean1_both))\nprint (\" \".join(mCEG1_belowmean1_both))\nprint (len(mCEG1_belowmean1_either))\nprint (\" \".join(mCEG1_belowmean1_either))", "39\nBard1 Bora C130026I21Rik Cdc6 Cenph Cenpi Cenpk Cenpm Cenpn Cenpp Chaf1b Dtl Esco2 Espl1 Fignl1 Gins3 
Kif18b Kif4 Lrr1 Mastl Mis18bp1 Mms22l Ncapg2 Ndc80 Oip5 Orc1 Palb2 Pole Pole2 Rbmx2 Recql4 Ska1 Ska3 Tango6 Ticrr Traip Ttk Urb1 Wdr25\n29\nCcnb1 Cenpe Cenpw Chek1 Dimt1 Dut Eif2s3x Gemin4 Gemin8 Gnb1l Haus6 Idi1 Kdm8 Lsm7 Mars2 Mrpl23 Ncapg Nudt21 Pam16 Rad51 Rbm19 Rpl19 Rpl23a Rpl24 Rps12 Rps27 Rpsa Trmt112 Zfp407\n" ], [ "bfdata[bfdata.max(axis=1)>5].shape", "_____no_output_____" ], [ "# save mCEG1\npd.DataFrame(mCEG1_mean1).to_csv(\"mCEG1_meanexp1.txt\",index=False)", "_____no_output_____" ], [ "from matplotlib_venn import venn3, venn3_circles,venn2, venn2_circles\ndef dec_to_bin(x,length):\n formatstr = \"%0\"+str(length)+\"d\"\n return formatstr%int(bin(x)[2:])\n\n\n\n#two set comparison\n\nnumberofset = 2\ncount=dict()\ncount['10'] = len(setdiff1d(ceg_intersect,mCEG1_mean1))\ncount['01'] = len(setdiff1d(mCEG1_mean1,ceg_intersect))\ncount['11'] = len(intersect1d(mCEG1_mean1,ceg_intersect))\n\nplt.figure(figsize=(8,4))\n#v = venn3(subsets=(count[\"100\"],count[\"010\"],count[\"110\"],count[\"001\"],count[\"101\"],count[\"011\"],count[\"111\"])\nv = venn2(subsets=(1,1,1)\n , set_labels = ('mCEG0', 'mCEG1')) ## A, B, AB, C, AC, AB, ABC\nfor text in v.set_labels:\n text.set_fontsize(16)\nfor com in count:\n v.get_label_by_id(com).set_text(count[com]) # to avoid area weighted by count, firstly set 1 and change text\n v.get_label_by_id(com).set_fontsize(14)\n'''\nv.get_label_by_id('100').set_text('First')\nv.get_label_by_id('010').set_text('Second')\nv.get_label_by_id('001').set_text('Third')\n'''\nplt.title(\"mCEG0 and mCEG1\")\nsavefig('Fig_mCEG0_mCEG1_mean1_overlap.pdf',format='pdf')\nplt.show()\n ", "_____no_output_____" ] ], [ [ "# mCEG1 property (mCEG1 == ess 6 cells + mean exp > 1)", "_____no_output_____" ] ], [ [ "lethal_phenotypes = pd.read_csv('VOC_MammalianPhenotype.lethal',header=None,index_col=0,sep=\"\\t\")\nlethal_phenotypes.head(3)", "_____no_output_____" ], [ "#MGI KO essentials\n#Targeted + NULL\n\nmgi_ess = pd.read_csv(\"MGI_PhenotypicAllele_Jul2018.rpt\",header=None,sep=\"\\t\")\n\ntargeted = mgi_ess[mgi_ess[3] == 'Targeted']\n\nnull_phenotypes = set()\nfor i in targeted.index:\n if pd.isnull(targeted.loc[i][4]) == True or pd.isnull(targeted.loc[i][10]) == True:\n continue\n phenotypes = targeted.loc[i][10].split(\",\")\n exptype = targeted.loc[i][4].split(\"|\")\n symbol = targeted.loc[i][7]\n flag = False\n if 'Null/knockout' in exptype:\n flag=True\n null_phenotypes.add(symbol)\n ", "_____no_output_____" ], [ "len(null_phenotypes)", "_____no_output_____" ], [ "### panther overrepresentation analysis\n\npanther_result = pd.read_csv('panther_mCEG1_mean1_analysis.txt',sep=\"\\t\",index_col=0,header=0)\nindex2desc = {}\nfor term in panther_result.index:\n desc = term.split(\" (\")[0]\n index2desc[term]= desc\npanther_result.rename(index=index2desc,inplace=True)\npanther_result['Client Text Box Input (fold Enrichment)'].at[panther_result['Client Text Box Input (fold Enrichment)']== ' < 0.01'] = 0.01\npanther_result['Client Text Box Input (fold Enrichment)'] = panther_result['Client Text Box Input (fold Enrichment)'].astype('float')\n\nfig,ax = plt.subplots(1, 1,figsize=(7,5))\n\ndesclist = ['mRNA processing','cell adhesion','signal transduction','rRNA metabolic process','tRNA metabolic process','DNA repair','DNA replication','cell differentiation','response to external stimulus','developmental process']\ncount=len(desclist) - 1\nprintterms = panther_result.loc[desclist].sort_values('Client Text Box Input (fold Enrichment)',ascending=False)\nfor term in 
printterms.index:\n \n \n logfold = log(float(printterms['Client Text Box Input (fold Enrichment)'][term]))\n print (term, logfold)\n ax.barh([count],[logfold],align='center',color='#F1E4B3',edgecolor = \"#ABA27F\")\n if logfold>=0:\n annotate(term,(-0.1,count),horizontalalignment='right',verticalalignment='center')\n else:\n annotate(term,(0.1,count),horizontalalignment='left',verticalalignment='center')\n count-=1\ntitle('Functional enrichment of essential genes')\nxlabel('Log fold enrichment',size=16)\nxlim((-5,5))\nxticks(size=14)\nyticks([])\nylim([-1,10])\nsavefig(\"Fig_foldenrichment_mCEG1_mean1.pdf\",format='pdf')\nshow()", "/home/ekim8/anaconda3/lib/python3.7/site-packages/pandas/core/indexing.py:205: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n self._setitem_with_indexer(indexer, value)\n" ], [ "# initiallize table\n\nnonfitness = bfdata_selected_count[bfdata_selected_count==0].index\n\nproperties_mCEG1_mean1 = {}\n", "_____no_output_____" ], [ "### Gene exp mean\ntemp = li_etal_fpkm_log.rename(index=ensembl2symbol)\nprint (\"mCEG1 mean\",temp.loc[intersect1d(temp.index,mCEG1_mean1)].mean().mean())\nprint (\"mNEG0 mean\",temp.loc[intersect1d(temp.index,nonfitness)].mean().mean())\nprint (\"ratio\",log2(temp.loc[intersect1d(temp.index,mCEG1_mean1)].mean().mean() / li_etal_fpkm_log.rename(index=ensembl2symbol).loc[intersect1d(temp.index,nonfitness)].mean().mean()))\n", "mCEG1 mean 3.0909959725937157\nmNEG0 mean 1.1035234739815438\nratio 1.485954455710223\n" ], [ "## P value KS test\n### Gene exp mean and std\ntemp = li_etal_fpkm_log.rename(index=ensembl2symbol)\n\nD,P = stats.ks_2samp(temp.loc[intersect1d(temp.index,mCEG1_mean1)].dropna().values.flatten(),\n temp.loc[intersect1d(temp.index,nonfitness)].dropna().values.flatten())\nprint (\"pvalue =\",P)\n\nproperties_mCEG1_mean1['Gene exp, mean'] = (log2(temp.loc[intersect1d(temp.index,mCEG1_mean1)].mean().mean() / \n temp.loc[intersect1d(temp.index,nonfitness)].mean().mean()), P)\nproperties_mCEG1_mean1['Gene exp, std'] = (log2(std(temp.loc[intersect1d(temp.index,mCEG1_mean1)].dropna().values.flatten()) / \n std(temp.loc[intersect1d(temp.index,nonfitness)].dropna().values.flatten())), P)\n\n", "pvalue = 0.0\n" ] ], [ [ "### mousenet", "_____no_output_____" ] ], [ [ "# MoustNet v2\n# https://www.inetbio.org/mousenet/\nnetwork_mousenet = dict()\nwith open(\"MouseNetV2_symbol.txt\",'r') as fp:\n for line in fp:\n linearray= line.rstrip().split(\"\\t\")\n g1 = linearray[0]\n g2 = linearray[1]\n score = float(linearray[2])\n if g1 not in network_mousenet:\n network_mousenet[g1] = dict()\n if g2 not in network_mousenet:\n network_mousenet[g2] = dict()\n network_mousenet[g1][g2] = score\n network_mousenet[g2][g1] = score\n ", "_____no_output_____" ], [ "degree_list = list()\ndegree_list_mCEG1_mean1 = list()\ndegree_list_noness = list()\nfor g in network_mousenet:\n degree_list.append(len(network_mousenet[g]))\n if g in mCEG1_mean1:\n degree_list_mCEG1_mean1.append(len(network_mousenet[g]))\n if g in nonfitness:\n degree_list_noness.append(len(network_mousenet[g]))\n", "_____no_output_____" ], [ "## P value KS test\n\nD,P = stats.ks_2samp(degree_list_mCEG1_mean1,degree_list_noness)\nprint (P)\nproperties_mCEG1_mean1['MouseNet'] = (log2(mean(degree_list_mCEG1_mean1) / mean(degree_list_noness)),P)\n", "2.5863424858963487e-172\n" ] ], [ [ "### biogrid", 
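The degree-vs-essentiality comparison just done for MouseNet is repeated below for BioGRID and STRING with the same structure: node degree of mCEG1 genes versus non-fitness genes, a KS test, and the log2 ratio of mean degrees. A consolidated sketch of that shared pattern (helper name is mine; stats, log2 and mean are already available via scipy.stats and %pylab):

def degree_enrichment(network, ess_genes, noness_genes):
    # network: dict of {gene: {neighbour: score}}; degree = number of neighbours.
    deg = {g: len(nbrs) for g, nbrs in network.items()}
    ess = [deg[g] for g in ess_genes if g in deg]
    non = [deg[g] for g in noness_genes if g in deg]
    D, P = stats.ks_2samp(ess, non)
    return log2(mean(ess) / mean(non)), P

# e.g. properties_mCEG1_mean1['Biogrid'] = degree_enrichment(network_biogrid, mCEG1_mean1, nonfitness)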
"_____no_output_____" ] ], [ [ "# biogrid\n# https://thebiogrid.org/\nnetwork_biogrid = dict()\nwith open(\"BIOGRID-ALL-3.4.156.tab.mouse.combine\",'r') as fp:\n for line in fp:\n linearray= line.rstrip().split(\"\\t\")\n g1 = linearray[0]\n g2 = linearray[1]\n score = float(linearray[2])\n if g1 not in network_biogrid:\n network_biogrid[g1] = dict()\n if g2 not in network_biogrid:\n network_biogrid[g2] = dict()\n network_biogrid[g1][g2] = score\n network_biogrid[g2][g1] = score\n ", "_____no_output_____" ], [ "degree_list = list()\ndegree_list_ess = list()\ndegree_list_mCEG1_mean1 = list()\ndegree_list_noness = list()\nfor g in network_biogrid:\n degree_list.append(len(network_biogrid[g]))\n if g in mCEG1_mean1:\n degree_list_mCEG1_mean1.append(len(network_biogrid[g]))\n if g in nonfitness:\n degree_list_noness.append(len(network_biogrid[g]))\n\n", "_____no_output_____" ], [ "## P value KS test\nD,P = stats.ks_2samp(degree_list_mCEG1_mean1,degree_list_noness)\nprint (P)\nproperties_mCEG1_mean1['Biogrid'] = (log2(mean(degree_list_mCEG1_mean1) / mean(degree_list_noness)),P)\n\n\n", "8.992750544597178e-16\n" ] ], [ [ "### string", "_____no_output_____" ] ], [ [ "# convert ensembl gene id to symbol\n# xref was obtained from Ensenbl Biomart\n# STRING v10.5 was obtained from string-db.org\n\nensemblprotin2symbol_mouse = pd.read_csv('ensemblprotein2symbol_mouse.txt',index_col=0,header=0,sep=\"\\t\")['Gene name']\nnetwork_string = dict()\nwith open(\"10090.protein.links.v10.5.mod.above500.symbol\",'w') as fout:\n with open(\"10090.protein.links.v10.5.mod.above500\",'r') as fp:\n for line in fp:\n linearray= line.rstrip().split(\"\\t\")\n try:\n g1 = ensemblprotin2symbol_mouse[linearray[0]]\n g2 = ensemblprotin2symbol_mouse[linearray[1]]\n except:\n continue\n score = float(linearray[2])\n fout.write(\"%s\\t%s\\t%f\\n\"%(g1,g2,score))\n if g1 not in network_string:\n network_string[g1] = dict()\n if g2 not in network_string:\n network_string[g2] = dict()\n network_string[g1][g2] = score\n network_string[g2][g1] = score\n", "_____no_output_____" ], [ "degree_list = list()\n\ndegree_list_mCEG1_mean1 = list()\ndegree_list_noness = list()\nfor g in network_string:\n degree_list.append(len(network_string[g]))\n if g in mCEG1_mean1:\n degree_list_mCEG1_mean1.append(len(network_string[g]))\n if g in nonfitness:\n degree_list_noness.append(len(network_string[g]))\n\n\n", "_____no_output_____" ], [ "## P value KS test\nD,P = stats.ks_2samp(degree_list_mCEG1_mean1,degree_list_noness)\nprint (P)\nproperties_mCEG1_mean1['STRING'] = (log2(mean(degree_list_mCEG1_mean1) / mean(degree_list_noness)),P)\n\n\n", "1.8353195864625886e-172\n" ] ], [ [ "### dn/ds", "_____no_output_____" ] ], [ [ "# data was obtained from Ensembl Biomart\n\nhuman_dnds = pd.read_csv('mouse2human_DNDS.txt',index_col=0,header=0,sep=\"\\t\").dropna()\nhuman_dnds = human_dnds[human_dnds['Human homology type']=='ortholog_one2one']\nhuman_dnds['dNdS'] = human_dnds['dN with Human'] / human_dnds['dS with Human']\n\n", "_____no_output_____" ], [ "dnds_mCEG1_mean1 = list()\ndnds_noness = list()\n\nfor i in range(len(human_dnds.index)):\n g = human_dnds.iloc[i]['Gene name']\n if pd.isnull(human_dnds.iloc[i]['dNdS']):\n continue\n if g in nonfitness:\n dnds_noness.append(human_dnds.iloc[i]['dNdS'])\n if g in mCEG1_mean1:\n dnds_mCEG1_mean1.append(human_dnds.iloc[i]['dNdS'])\n \n \n ", "_____no_output_____" ], [ "## P value KS test\nD,P = stats.ks_2samp(dnds_mCEG1_mean1,dnds_noness)\nprint (P)\nproperties_mCEG1_mean1['Human dn/ds'] = (log2( 
mean(dnds_mCEG1_mean1) / mean(dnds_noness) ),P)\n\n\n\n", "1.6723649397907527e-21\n" ] ], [ [ "### mouse ko library", "_____no_output_____" ] ], [ [ "### mouse knockout library from MGI database\n\nmouse_ko_lib_lethal = set()\nwith open ('MGI_PhenoGenoMP.lethal_embryonic','r') as fp:\n for line in fp:\n linearray = line.rstrip().split(\"\\t\")\n gene = linearray[0].split(\"<\")[0]\n mouse_ko_lib_lethal.add(gene)\n \nmouse_ko_lib_lethal = list(mouse_ko_lib_lethal)\n", "_____no_output_____" ], [ "len(mouse_ko_lib_lethal)", "_____no_output_____" ], [ "from matplotlib_venn import venn3, venn3_circles,venn2, venn2_circles\ndef dec_to_bin(x,length):\n formatstr = \"%0\"+str(length)+\"d\"\n return formatstr%int(bin(x)[2:])\n\n\n\n#two set comparison\n\nnumberofset = 2\ncount=dict()\ncount['10'] = len(setdiff1d(mCEG1_mean1,mouse_ko_lib_lethal))\ncount['01'] = len(setdiff1d(mouse_ko_lib_lethal, mCEG1_mean1))\ncount['11'] = len(intersect1d(mCEG1_mean1,mouse_ko_lib_lethal))\n\nplt.figure(figsize=(8,4))\n\nv = venn2(subsets=(1,1,1)\n , set_labels = ('mCEG1', 'Mouse KO Ess (MGI)')) ## A, B, AB, C, AC, AB, ABC\nfor text in v.set_labels:\n text.set_fontsize(16)\nfor com in count:\n v.get_label_by_id(com).set_text(count[com]) # to avoid area weighted by count, firstly set 1 and change text\n v.get_label_by_id(com).set_fontsize(14)\n\n#plt.title(\"Essential genes(= 8 cells, BF > 5)\")\nsavefig('Fig_mCEG1_mean1_MGI_overlap.pdf',format='pdf')\nplt.show()\n ", "_____no_output_____" ], [ "entrez2symbol.value_counts()", "_____no_output_____" ], [ "from matplotlib_venn import venn3, venn3_circles,venn2, venn2_circles\ndef dec_to_bin(x,length):\n formatstr = \"%0\"+str(length)+\"d\"\n return formatstr%int(bin(x)[2:])\n\n\nbfdata_avana = pd.read_csv('bf-17187genes-276screens-Fge85.txt.ccds.idupdated',index_col=0,header=0,sep=\"\\t\")\nbfdata_avana_count = bfdata_avana[bfdata_avana>5].count(axis=1)\navana_coreess_genes = bfdata_avana_count[bfdata_avana_count>=270].index\nprint (len(avana_coreess_genes))\navana_essgenes_mouse = list()\nfor g in avana_coreess_genes:\n try:\n for (entrez,symbol) in human2mouse[symbol2entrez_human[g]]:\n avana_essgenes_mouse.append(symbol)\n except:\n pass\n\n#two set comparison\n\nnumberofset = 2\ncount=dict()\ncount['10'] = len(setdiff1d(mCEG1_mean1,avana_essgenes_mouse))\ncount['01'] = len(setdiff1d(avana_essgenes_mouse, mCEG1_mean1))\ncount['11'] = len(intersect1d(mCEG1_mean1,avana_essgenes_mouse))\n\nplt.figure(figsize=(8,4))\n#v = venn3(subsets=(count[\"100\"],count[\"010\"],count[\"110\"],count[\"001\"],count[\"101\"],count[\"011\"],count[\"111\"])\nv = venn2(subsets=(1,1,1)\n , set_labels = ('mCEG1', 'Avana pan essential (mouse orthologs)')) ## A, B, AB, C, AC, AB, ABC\nfor text in v.set_labels:\n text.set_fontsize(16)\nfor com in count:\n v.get_label_by_id(com).set_text(count[com]) # to avoid area weighted by count, firstly set 1 and change text\n v.get_label_by_id(com).set_fontsize(14)\n'''\nv.get_label_by_id('100').set_text('First')\nv.get_label_by_id('010').set_text('Second')\nv.get_label_by_id('001').set_text('Third')\n'''\n#plt.title(\"Essential genes(= 8 cells, BF > 5)\")\nsavefig('Fig_mCEG1_mean1_Avana_overlap.pdf',format='pdf')\nplt.show()\n ", "500\n" ], [ "# xref downloaded from ensembl biomart\nyeast2mouse = pd.read_csv('ensembl_yeast2mouse.mod',index_col=0,header=None,sep=\"\\t\")[1]\n\n# essential gene data from SGD database\nessgenes_yeast_data = 
pd.read_csv('SGD_YeastMine_Phenotype_Genes_Null_Inviable.tsv',index_col=0,header=None,sep=\"\\t\")\nessgenes_yeast_data = essgenes_yeast_data[essgenes_yeast_data[3]=='Verified'] # leave only verified\nessgenes_yeast = essgenes_yeast_data[1].unique()\nessgenes_yeast_mouse = yeast2mouse[intersect1d(yeast2mouse.index,essgenes_yeast)]", "_____no_output_____" ], [ "from matplotlib_venn import venn3, venn3_circles,venn2, venn2_circles\ndef dec_to_bin(x,length):\n formatstr = \"%0\"+str(length)+\"d\"\n return formatstr%int(bin(x)[2:])\n\n\n#two set comparison\n\nnumberofset = 2\ncount=dict()\ncount['10'] = len(setdiff1d(mCEG1_mean1,essgenes_yeast_mouse))\ncount['01'] = len(setdiff1d(essgenes_yeast_mouse, mCEG1_mean1))\ncount['11'] = len(intersect1d(mCEG1_mean1,essgenes_yeast_mouse))\n\nplt.figure(figsize=(8,4))\n#v = venn3(subsets=(count[\"100\"],count[\"010\"],count[\"110\"],count[\"001\"],count[\"101\"],count[\"011\"],count[\"111\"])\nv = venn2(subsets=(1,1,1)\n , set_labels = ('mCEG1', 'Yeast essential genes (mouse orthologs)')) ## A, B, AB, C, AC, AB, ABC\nfor text in v.set_labels:\n text.set_fontsize(16)\nfor com in count:\n v.get_label_by_id(com).set_text(count[com]) # to avoid area weighted by count, firstly set 1 and change text\n v.get_label_by_id(com).set_fontsize(14)\n'''\nv.get_label_by_id('100').set_text('First')\nv.get_label_by_id('010').set_text('Second')\nv.get_label_by_id('001').set_text('Third')\n'''\n#plt.title(\"Essential genes(= 8 cells, BF > 5)\")\nsavefig('Fig_mCEG1_mean1_Yeast_overlap.pdf',format='pdf')\nplt.show()\n ", "_____no_output_____" ] ], [ [ "### disease gene", "_____no_output_____" ] ], [ [ "diseasegene = pd.read_csv(\"MGI_Geno_DiseaseDO.rpt\",header=None,sep=\"\\t\")\n", "_____no_output_____" ], [ "diseasegene.head(3)", "_____no_output_____" ], [ "dglist = set()\nfor gene in diseasegene[1]:\n dg = gene.split(\"<\")[0]\n dglist.add(dg)\ndglist = list(dglist)", "_____no_output_____" ], [ "# Pvalue Fisher exact test, mCEG1_mean1\nctable = [[ len(intersect1d(dglist,mCEG1_mean1)) , len(setdiff1d(mCEG1_mean1,dglist)) ], \n [ len(intersect1d(dglist,nonfitness)) , len(setdiff1d(nonfitness,dglist)) ]]\n\n\n''' Disease Non-disease\nEss 1 2\nNoness 3 4\n'''\ns,p = stats.fisher_exact(ctable)\nprint (s,p)", "0.31865756532365774 1.8441721116771266e-11\n" ], [ "# save data \nproperties_mCEG1_mean1['Disease genes'] = (log2( (len(intersect1d(dglist,mCEG1_mean1)) / float(len(mCEG1_mean1))) / (len(intersect1d(dglist,nonfitness)) / float(len(nonfitness))) ),\n p)", "_____no_output_____" ] ], [ [ "### genome information", "_____no_output_____" ] ], [ [ "## GENCODE M18 parsing\n\ngene2transcript = dict()\ntranscript2exon = dict()\ntranscript2type = dict()\ntranscript2CDS = dict()\ntranscriptlength = dict()\ngene2location = dict()\nwith open('gencode.vM18.annotation.gtf','r') as fp:\n for line in fp:\n linearray = line.rstrip().split(\"\\t\")\n #print linearray # ['chr1', 'HAVANA', 'gene', '3073253', '3074322', '.', '+', '.', 'gene_id \"ENSMUSG00000102693.1\"; gene_type \"TEC\"; gene_name \"RP23-271O17.1\"; level 2; havana_gene \"OTTMUSG00000049935.1\";']\n chrom = linearray[0]\n featuretype = linearray[2]\n start = int(linearray[3])\n end = int(linearray[4])\n strand = linearray[6]\n tags = [x.strip().replace('\"','').split(' ') for x in linearray[8].split(\";\")]\n \n if featuretype=='exon':\n gene=\"\"\n transcript=\"\"\n #genetype = \"\"\n for tag in tags:\n if tag[0] == 'gene_id':\n gene=tag[1].split(\".\")[0]\n elif tag[0] == 'transcript_id':\n transcript = tag[1]\n if tag[0] == 
'tag' and 'appris_principal' in tag[1]:\n transcript2type[transcript] = tag[1]\n #elif tag[0] == 'tag':\n \n if gene not in gene2transcript:\n gene2transcript[gene]=set()\n gene2transcript[gene].add(transcript)\n if transcript not in transcript2exon:\n transcript2exon[transcript] = 0\n transcriptlength[transcript]=0\n transcript2exon[transcript] += 1\n transcriptlength[transcript] += abs(end-start)+1\n if featuretype=='CDS':\n gene=\"\"\n transcript=\"\"\n #genetype = \"\"\n for tag in tags:\n if tag[0] == 'gene_id':\n gene=tag[1].split(\".\")[0]\n elif tag[0] == 'transcript_id':\n transcript = tag[1]\n\n if transcript not in transcript2CDS:\n transcript2CDS[transcript] = 0\n transcript2CDS[transcript] += abs(end-start)+1\n if gene not in gene2location or abs(gene2location[gene][1] - gene2location[gene][2]) < abs(start-end): # max length for loaction\n gene2location[gene] = (chrom,start,end)\n \n #break", "_____no_output_____" ], [ "# save data\nprint (len(gene2location))\nwith open('mouse_gene2location_ensembl','w') as fout:\n fout.write(\"GENE\\tENTREZID\\tCHR\\tSTART\\tEND\\n\")\n for g in gene2location:\n if g in ensembl2symbol:\n fout.write(\"%s\\t%d\\t%s\\t%d\\t%d\\n\"%(ensembl2symbol[g],0,gene2location[g][0],gene2location[g][1],gene2location[g][2]))", "22551\n" ], [ "# get exon,cds,transcript length\ntemp=list()\n\nexon_mCEG1_mean1=list()\nexon_non=list()\n\ncds_mCEG1_mean1=list()\ncds_non=list()\n\ntrans_mCEG1_mean1 = list()\ntrans_non = list()\nfor gene in gene2transcript:\n count=0\n if gene in ensembl2symbol and ensembl2symbol[gene] in mCEG1_mean1:\n longest = (\"\",0)\n for t in gene2transcript[gene]:\n if t in transcript2type:\n if longest[1] < transcriptlength[t]:\n longest = (t,transcriptlength[t])\n count+=1\n #print ensembl2symbol[gene],maxexon\n exon_mCEG1_mean1.append(transcript2exon[longest[0]])\n cds_mCEG1_mean1.append(transcript2CDS[longest[0]] + 3) # stop codon\n trans_mCEG1_mean1.append(longest[1])\n temp.append(ensembl2symbol[gene])\n '''if transcript2CDS[longest[0]] < longest[1]:\n print (gene, transcript2CDS[longest[0]] +3, longest[1])'''\n if gene in ensembl2symbol and ensembl2symbol[gene] in nonfitness:\n longest = (\"\",0)\n for t in gene2transcript[gene]:\n if t in transcript2type:\n if longest[1] < transcriptlength[t]:\n longest = (t,transcriptlength[t])\n count+=1\n #print ensembl2symbol[gene],maxexon\n exon_non.append(transcript2exon[longest[0]])\n cds_non.append(transcript2CDS[longest[0]] + 3)\n trans_non.append(longest[1])\n temp.append(ensembl2symbol[gene])\n", "_____no_output_____" ], [ "print( log2(float(mean(exon_mCEG1_mean1)) / float(mean(exon_non))))\nprint (log2(float(mean(cds_mCEG1_mean1)) / float(mean(cds_non))))\nprint (log2(float(mean(trans_mCEG1_mean1)) / float(mean(trans_non))))\n\n\n\n", "0.2948582392493737\n-0.08386357775608488\n-0.22385435846273807\n" ], [ "# Pvalue\n\nD,P = stats.ks_2samp(exon_mCEG1_mean1,exon_non)\nprint (P)\nproperties_mCEG1_mean1['Num. 
exons'] = (log2(float(mean(exon_mCEG1_mean1)) / float(mean(exon_non))),P)\nD,P = stats.ks_2samp(cds_mCEG1_mean1,cds_non)\nprint (P)\nproperties_mCEG1_mean1['CDS length'] = (log2(float(mean(cds_mCEG1_mean1)) / float(mean(cds_non))),P)\nD,P = stats.ks_2samp(trans_mCEG1_mean1,trans_non)\nprint (P)\nproperties_mCEG1_mean1['Transcript length'] = (log2(float(mean(trans_mCEG1_mean1)) / float(mean(trans_non))),P)", "7.308840542701409e-29\n2.8919277957800242e-05\n1.2978079442108432e-08\n" ], [ "properties_mCEG1_mean1.keys()", "_____no_output_____" ], [ "fig,ax = plt.subplots(1, 1,figsize=(5,6))\ncount = 0\nbarlist = ['Gene exp, mean','Gene exp, std',' ','STRING','MouseNet',' ','Human dn/ds','Disease genes',' ','Num. exons','CDS length','Transcript length']\n\nfor term in barlist:\n if term == ' ':\n count-=1\n continue\n \n logfold = properties_mCEG1_mean1[term][0]\n p = properties_mCEG1_mean1[term][1]\n ax.barh([count],[logfold],align='center',color='#F1E4B3',edgecolor = \"#ABA27F\",height=0.8)\n if logfold>=0:\n annotate(term,(-0.1,count),horizontalalignment='right',verticalalignment='center')\n if p < 0.0000000001:\n annotate(\"*\",(logfold+0.2,count),verticalalignment='center')\n else:\n annotate(term,(0.1,count),horizontalalignment='left',verticalalignment='center')\n if p < 0.0000000001:\n annotate(\"*\",(logfold-0.2,count),verticalalignment='center')\n \n count-=1\nplot([0,0],[1,-len(barlist)-2],'k-',lw=2)\nxticks([-2,-1,0,1,2],size=14)\nyticks([])\nylim([-len(barlist),1])\nxlabel(\"log2(mCEG1 / nonfitness)\",size=14)\nsavefig('Fig_functional_property_mCEG1.pdf',format='pdf')\nshow()", "_____no_output_____" ] ], [ [ "# density plot", "_____no_output_____" ] ], [ [ "## CTL core genes\n\nCTL = pd.read_csv(\"Geneset_T_Cell_Killing_Core.txt\",header=0,sep=\"\\t\")['GENE']", "_____no_output_____" ], [ "len(CTL)", "_____no_output_____" ], [ "bfdata.describe()", "_____no_output_____" ], [ "bfdata_selected_qt.describe()", "_____no_output_____" ], [ "bfdata.mean(axis=1).plot.density(color='k')\nbfdata.loc[intersect1d(bfdata.index,ceg_intersect)].mean(axis=1).plot.density(color='red')\nbfdata.loc[intersect1d(bfdata.index,neg_intersect)].mean(axis=1).plot.density(color='blue')\nbfdata.loc[intersect1d(bfdata.index,CTL)].mean(axis=1).plot.density(color='green')\nlegend([\"ALL\",'mCEG0','mNEG0','CTL'])\ntitle('Controls (6 cells)')\nxlabel('Bayes Factor')\nsavefig('distribution_bf_ctl.pdf')", "_____no_output_____" ], [ "bfdata.mean(axis=1).plot.density(color='k')\nbfdata.loc[intersect1d(bfdata.index,mCEG1_mean1)].mean(axis=1).plot.density(color='#7570B2')\nbfdata.loc[intersect1d(bfdata.index,neg_intersect)].mean(axis=1).plot.density(color='#189E77')\nbfdata.loc[intersect1d(bfdata.index,CTL)].mean(axis=1).plot.density(color='orange')\nlegend([\"ALL\",'mCEG1','mNEG','CTL'],fontsize=12)\n#title('Controls (6 cells)')\nxlabel('Bayes Factor',size=14)\nylabel('Density',size=14)\nxticks(size=12)\nyticks(size=12)\nsavefig('distribution_bf_ctl_mCEG1_mean1.pdf')", "_____no_output_____" ] ], [ [ "# tissue specific essential genes", "_____no_output_____" ] ], [ [ "#tissue specific essential\n\nkidney = bfdata_selected[(bfdata_selected['Renca-HA_bf']>5) & (bfdata_selected['4T1-HA_bf']<=5) & (bfdata_selected['B16-OVA_bf']<=5) & (bfdata_selected['CT26_bf']<=5) & (bfdata_selected['EMT6-HA_bf']<=5) & (bfdata_selected['MC38-OVA_bf']<=5)]\nprint( len(kidney))\nbreast = bfdata_selected[(bfdata_selected['Renca-HA_bf']<=5) & (bfdata_selected['4T1-HA_bf']>5) & (bfdata_selected['B16-OVA_bf']<=5) & (bfdata_selected['CT26_bf']<=5) & 
(bfdata_selected['EMT6-HA_bf']>5) & (bfdata_selected['MC38-OVA_bf']<=5)]\nprint( len(breast))\ncolon = bfdata_selected[(bfdata_selected['Renca-HA_bf']<=5) & (bfdata_selected['4T1-HA_bf']<=5) & (bfdata_selected['B16-OVA_bf']<=5) & (bfdata_selected['CT26_bf']>5) & (bfdata_selected['EMT6-HA_bf']<=5) & (bfdata_selected['MC38-OVA_bf']>5)]\nprint (len(colon))\nskin = bfdata_selected[(bfdata_selected['Renca-HA_bf']<=5) & (bfdata_selected['4T1-HA_bf']<=5) & (bfdata_selected['B16-OVA_bf']>5) & (bfdata_selected['CT26_bf']<=5) & (bfdata_selected['EMT6-HA_bf']<=5) & (bfdata_selected['MC38-OVA_bf']<=5)]\nprint (len(skin))\n", "107\n11\n23\n330\n" ], [ "ess_atleast1 = bfdata_selected_qt.index[(bfdata_selected_qt>5).sum(axis=1)>=6]\nprint( ess_atleast1)\n\ness_atonly1 = bfdata_selected_qt.index[(bfdata_selected_qt>5).sum(axis=1)==1]\nprint( ess_atonly1)", "Index(['1110004E09Rik', '1110008L16Rik', '1110037F02Rik', '1810026J23Rik',\n '2700060E02Rik', '2810004N23Rik', 'AW822073', 'Aamp', 'Aars', 'Aars2',\n ...\n 'Zfp207', 'Zfp407', 'Zfp830', 'Zmat2', 'Zmat5', 'Znhit2', 'Znhit6',\n 'Znrd1', 'Zpr1', 'Zzz3'],\n dtype='object', name='GENE', length=1179)\nIndex(['1700015E13Rik', '1700025B11Rik', '1700029P11Rik', '1700066M21Rik',\n '1810009A15Rik', '2310007B03Rik', '2310011J03Rik', '2410015M20Rik',\n '2700062C07Rik', '2810006K23Rik',\n ...\n 'Zfat', 'Zfp114', 'Zfp202', 'Zfp367', 'Zfp623', 'Zfp638', 'Zfx', 'Zhx1',\n 'Zkscan1', 'Zxdb'],\n dtype='object', name='GENE', length=802)\n" ], [ "#each top 5\n\nsns.set(font_scale=1.0)\neachtop5 = list()\nfor c in bfdata_selected_qt.dtypes.index:\n eachtop5.extend(list(bfdata_selected_qt[c].loc[ess_atonly1].sort_values(ascending=False).head(5).index))\n\nsns.clustermap(bfdata_selected_qt.loc[eachtop5],linewidth=0,figsize=(7,10),col_cluster=False,row_cluster=False,center=0)\nsavefig(\"clustering_unique_ess_eachtop5.pdf\",format=\"pdf\")", "_____no_output_____" ], [ "sns.set(font_scale=0.3)\nsns.clustermap(bfdata_selected_qt.loc[ess_atonly1],linewidth=0,figsize=(7,30),center=0)\nsavefig(\"clustering_unique_ess.pdf\",format=\"pdf\")", "_____no_output_____" ], [ "sns.set(font_scale=1.0)\ntop20 = bfdata_selected_qt.loc[ess_atonly1].max(axis=1).sort_values(ascending=False).head(30).index\nsns.clustermap(bfdata_selected_qt.loc[top20],linewidth=0,figsize=(7,10),center=0)\nsavefig(\"clustering_unique_ess_top30.pdf\",format=\"pdf\")", "_____no_output_____" ], [ "sns.set(font_scale=1.0)\ntop20 = bfdata_selected_qt.loc[ess_atonly1].max(axis=1).sort_values(ascending=False).head(20).index\nsns.clustermap(bfdata_selected_qt.loc[top20],linewidth=0,figsize=(7,10),center=0)\nsavefig(\"clustering_unique_ess_top20.pdf\",format=\"pdf\")", "_____no_output_____" ], [ "#each top 5 (tissue)\n\nsns.set(font_scale=1.0)\neachtop5_tissue = list()\ntemp = bfdata_selected_qt['Renca-HA_bf'].loc[kidney.index].sort_values(ascending=False).head(5)\neachtop5_tissue.extend(list(temp.index))\ntemp = bfdata_selected_qt['B16-OVA_bf'].loc[skin.index].sort_values(ascending=False).head(5)\neachtop5_tissue.extend(list(temp.index))\ntemp = bfdata_selected_qt[['4T1-HA_bf','EMT6-HA_bf']].mean(axis=1)[breast.index].sort_values(ascending=False).head(5)\neachtop5_tissue.extend(list(temp.index))\n\ntemp = 
bfdata_selected_qt[['CT26_bf','MC38-OVA_bf']].mean(axis=1)[colon.index].sort_values(ascending=False).head(5)\neachtop5_tissue.extend(list(temp.index))\n\n\nsns.clustermap(bfdata_selected_qt[['Renca-HA_bf','B16-OVA_bf','4T1-HA_bf','EMT6-HA_bf','CT26_bf','MC38-OVA_bf']].loc[eachtop5_tissue],linewidth=0,figsize=(7,10),col_cluster=False,row_cluster=False,center=0)\nsavefig(\"clustering_unique_ess_eachtop5_tissue.pdf\",format=\"pdf\")", "_____no_output_____" ], [ "#TF\ntflist = pd.read_csv(\"human2mouse_TF\",header=None,sep=\"\\t\")\nprint (len(tflist.index))\ntflist.head(3)", "1176\n" ], [ "#each top 5 (tissue)\n\nsns.set(font_scale=1.0)\neachtop5_tissue = list()\ntemp = bfdata_selected_qt['Renca-HA_bf'].loc[intersect1d(tflist[1],kidney.index)].sort_values(ascending=False).head(5)\neachtop5_tissue.extend(list(temp.index))\ntemp = bfdata_selected_qt['B16-OVA_bf'].loc[intersect1d(tflist[1],skin.index)].sort_values(ascending=False).head(5)\neachtop5_tissue.extend(list(temp.index))\ntemp = bfdata_selected_qt[['4T1-HA_bf','EMT6-HA_bf']].mean(axis=1)[intersect1d(tflist[1],breast.index)].sort_values(ascending=False).head(5)\neachtop5_tissue.extend(list(temp.index))\n\ntemp = bfdata_selected_qt[['CT26_bf','MC38-OVA_bf']].mean(axis=1)[intersect1d(tflist[1],colon.index)].sort_values(ascending=False).head(5)\neachtop5_tissue.extend(list(temp.index))\n\n\nsns.clustermap(bfdata_selected_qt[['Renca-HA_bf','B16-OVA_bf','4T1-HA_bf','EMT6-HA_bf','CT26_bf','MC38-OVA_bf']].loc[eachtop5_tissue],linewidth=0,figsize=(7,10),col_cluster=False,row_cluster=False,center=0)\n#savefig(\"clustering_unique_ess_eachtop5_tissue.pdf\",format=\"pdf\")", "_____no_output_____" ], [ "#each top 10 (tissue)\n\nsns.set(font_scale=1.0)\neachtop5_tissue = list()\ntemp = bfdata_selected_qt['Renca-HA_bf'].loc[kidney.index].sort_values(ascending=False).head(10)\neachtop5_tissue.extend(list(temp.index))\ntemp = bfdata_selected_qt['B16-OVA_bf'].loc[skin.index].sort_values(ascending=False).head(10)\neachtop5_tissue.extend(list(temp.index))\ntemp = bfdata_selected_qt[['4T1-HA_bf','EMT6-HA_bf']].mean(axis=1)[breast.index].sort_values(ascending=False).head(10)\neachtop5_tissue.extend(list(temp.index))\n\ntemp = bfdata_selected_qt[['CT26_bf','MC38-OVA_bf']].mean(axis=1)[colon.index].sort_values(ascending=False).head(10)\neachtop5_tissue.extend(list(temp.index))\n\n\nsns.clustermap(bfdata_selected_qt[['Renca-HA_bf','B16-OVA_bf','4T1-HA_bf','EMT6-HA_bf','CT26_bf','MC38-OVA_bf']].loc[eachtop5_tissue],linewidth=0,figsize=(7,20),col_cluster=False,row_cluster=False)\nsavefig(\"clustering_unique_ess_eachtop5_tissue.pdf\",format=\"pdf\")", "_____no_output_____" ], [ "bfdata_selected_qt.loc['Brca2']", "_____no_output_____" ], [ "#A mouse tissue transcription factor atlas https://www.nature.com/articles/ncomms15089\ntflist_zhou = pd.read_csv(\"mouse_tf_Zhou_etal\",header=None,sep=\"\\t\")\nprint( len(tflist_zhou))\ntflist_zhou.head(3)", "941\n" ], [ "#TF\ntflist = pd.read_csv(\"human2mouse_TF\",header=None,sep=\"\\t\")\nprint (len(tflist.index))\ntflist.head(3)", "1176\n" ], [ "# TF\n\nsns.set(font_scale=1.0)\n'''eachtop5 = list()\nfor c in bfdata_selected_qt.dtypes.index:\n eachtop5.extend(list(bfdata_selected_qt[c].loc[ess_atonly1].sort_values(ascending=False).head(5).index))'''\n\nsns.clustermap(bfdata_selected_qt.loc[intersect1d(tflist_zhou[0],ess_atonly1)],linewidth=0,figsize=(7,10))\nsavefig(\"clustering_unique_ess_mouseTF.pdf\",format=\"pdf\")", "_____no_output_____" ], [ "# TF\n\nsns.set(font_scale=1.0)\n'''eachtop5 = list()\nfor c in 
bfdata_selected_qt.dtypes.index:\n eachtop5.extend(list(bfdata_selected_qt[c].loc[ess_atonly1].sort_values(ascending=False).head(5).index))'''\n\nsns.clustermap(bfdata_selected_qt.loc[intersect1d(tflist[1],ess_atonly1)],linewidth=0,figsize=(7,10))\nsavefig(\"clustering_unique_ess_TF.pdf\",format=\"pdf\")", "_____no_output_____" ], [ "sns.set(font_scale=0.3)\nsns.clustermap(bfdata_selected_qt.loc[setdiff1d(ess_atleast1,ceg_intersect)],linewidth=0,figsize=(7,30))\nsavefig(\"clustering_all_ess.pdf\",format=\"pdf\")", "_____no_output_____" ], [ "bfdata_selected_qt.loc[CTL].to_excel(\"bf_6cells_qtnormed_ctl.xlsx\")", "_____no_output_____" ], [ "bfdata_selected_qt.to_excel(\"bf_6cells_qtnormed.xlsx\")\nbfdata.to_excel(\"bf_6cells.xlsx\")", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "raw", "code", "raw", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "raw", "raw" ], [ "code", "code", "code", "code" ], [ "raw" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e70dd16e82440271996df275f346297278dab6f5
248,106
ipynb
Jupyter Notebook
assigment_1_Frants_Vladimir.ipynb
vfrantc/data_viz
819b4bd9fc53201da05556c2eb16e11dab6c2100
[ "CC0-1.0" ]
null
null
null
assigment_1_Frants_Vladimir.ipynb
vfrantc/data_viz
819b4bd9fc53201da05556c2eb16e11dab6c2100
[ "CC0-1.0" ]
null
null
null
assigment_1_Frants_Vladimir.ipynb
vfrantc/data_viz
819b4bd9fc53201da05556c2eb16e11dab6c2100
[ "CC0-1.0" ]
null
null
null
389.491366
167,302
0.919885
[ [ [ "<a href=\"https://colab.research.google.com/github/vfrantc/data_viz/blob/master/assigment_1_Frants_Vladimir.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# HW1. Frants Vladimir\n\nUse Vincent van Gogh paintings dataset\nCreate 2 visualizations (described below) using any software\nIf your outputs are still images - use the Add Image function in Milanote and place these 2 images inside your home assignment 1 board\nif you made interactive visualizations and published them in the web, then to the following to share it with the instructor \n\n* take a screenshot, add as an image (same as normal image); \n* use Add Caption function and place the link to the web visualization into the caption.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline", "/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.\n import pandas.util.testing as tm\n" ] ], [ [ "# Load the dataset", "_____no_output_____" ] ], [ [ "!wget https://raw.githubusercontent.com/vfrantc/data_viz/master/data/van_gogh_genre.csv", "--2020-09-27 02:59:08-- https://raw.githubusercontent.com/vfrantc/data_viz/master/data/van_gogh_genre.csv\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 119287 (116K) [text/plain]\nSaving to: ‘van_gogh_genre.csv’\n\nvan_gogh_genre.csv 100%[===================>] 116.49K --.-KB/s in 0.01s \n\n2020-09-27 02:59:08 (9.42 MB/s) - ‘van_gogh_genre.csv’ saved [119287/119287]\n\n" ], [ "df = pd.read_csv('van_gogh_genre.csv')", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ], [ "# Some data wrangling", "_____no_output_____" ] ], [ [ "df.Genre.unique()", "_____no_output_____" ], [ "df.Season.unique()", "_____no_output_____" ], [ "df.Label_Place.unique()", "_____no_output_____" ], [ "df = df.replace({'Label_Place': {'1_Early' : 'Early', \n '2_Paris' : 'Paris', \n '3_Arles' : 'Arles', \n '4_Saint-Remy-de-Provence_asylum' : 'Saint Remy',\n '5_Auvers-sur-Oise' : 'Auvers'},\n 'Season': {'winter': 'Winter',\n 'spring': 'Spring',\n 'summer': 'Summer',\n 'fall': 'Fall'}})", "_____no_output_____" ], [ "", "_____no_output_____" ] ], [ [ "# Visualization 1\n\nExplore the relations between `genres`, `seasons`, and `places` where the artist worked. Are there any patterns present? (you can if you like manipulate these categories, creating larger ones, or breaking them in different ways). 
If it does not work out to use all 3, use 2.", "_____no_output_____" ] ], [ [ "def hist_plot(x, **kwargs):\n plt.hist(x, orientation='horizontal', **kwargs)\n", "_____no_output_____" ], [ "# Initialize a grid of plots with an Axes for each walk\ngrid = sns.FacetGrid(df, \n col='Season',\n col_order=['Winter', 'Spring', 'Summer', 'Fall'], \n row='Label_Place',\n row_order=['Early', 'Paris', 'Arles', 'Saint Remy', 'Auvers'],\n hue='Season', \n palette=\"coolwarm\",\n margin_titles=True)\n\n# Draw a line plot to show the trajectory of each random walk\ngrid.map(hist_plot, \"Genre\")\n[plt.setp(ax.texts, text=\"\") for ax in grid.axes.flat] # remove the original texts\n # important to add this before setting titles\ngrid.set_titles(row_template = '{row_name}', col_template = '{col_name}')\n\nfor ax in grid.axes.flat:\n ax.set_xticks([])\n ax.set_yticklabels(df.Genre.unique())\n\ngrid.set_xlabels('')\ngrid.fig.tight_layout()\n", "_____no_output_____" ] ], [ [ "# Visualization 2\n\nCreate a visualization that highlights some of the \"outliers\" among van Gogh paintings using the data provided. Optional: Add the names of these outlier works.", "_____no_output_____" ] ], [ [ "mean_brightness = df.brightness_median.mean()\nmean_saturation = df.saturation_median.mean()", "_____no_output_____" ], [ "from scipy.spatial.distance import cdist\nimport matplotlib.patheffects as pe", "_____no_output_____" ], [ "distances = []\nnodes = np.c_[df.brightness_median, df.saturation_median]\nfor idx in range(0, df.shape[0]):\n distances.append(cdist([nodes[idx]], np.vstack([nodes[:idx], nodes[(idx+1):]])).min())\ndf['Minimum distance'] = distances\ndf['outlier'] = df['Minimum distance'] > 15", "_____no_output_____" ], [ "plt.figure(figsize=(12, 12))\nsns.set_style(\"ticks\")\npl = sns.scatterplot(x=\"brightness_median\", y=\"saturation_median\", hue='outlier', marker='o', data=df)\nfor idx in range(0, df.shape[0]):\n if df.outlier[idx]:\n pl.text(df.brightness_median[idx]+2, df.saturation_median[idx]+2, df.Title[idx], horizontalalignment='left', size='medium', color='black', path_effects=[pe.withStroke(linewidth=4, foreground=\"white\")])\nplt.legend([],[], frameon=False)\npl.set_ylabel('Saturation median', fontsize=21)\npl.set_xlabel('Brightness median', fontsize=21)\npl.set_xlim(0, 350)\nfor _,s in pl.spines.items():\n s.set_linewidth(3)\n s.set_color('black')\nsns.despine()", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
e70dd24676a7575dbc570947babf65d83f845ffd
96,796
ipynb
Jupyter Notebook
.ipynb_checkpoints/mission_to_mars-checkpoint.ipynb
BassamAziz310/Web-scrape-challenge
7d4973298844ef8c4f0cf630568719a4b609fae0
[ "ADSL" ]
null
null
null
.ipynb_checkpoints/mission_to_mars-checkpoint.ipynb
BassamAziz310/Web-scrape-challenge
7d4973298844ef8c4f0cf630568719a4b609fae0
[ "ADSL" ]
null
null
null
.ipynb_checkpoints/mission_to_mars-checkpoint.ipynb
BassamAziz310/Web-scrape-challenge
7d4973298844ef8c4f0cf630568719a4b609fae0
[ "ADSL" ]
null
null
null
82.590444
21,137
0.623734
[ [ [ "from splinter import Browser\nfrom bs4 import BeautifulSoup\nimport pandas as pd \nimport requests\nimport pymongo\nimport json", "_____no_output_____" ], [ "executable_path = {'executable_path': 'chromedriver.exe'}\nbrowser = Browser('chrome', **executable_path, headless=False)", "_____no_output_____" ], [ "conn = 'mongodb://localhost:27017'\nclient = pymongo.MongoClient(conn)\n", "_____no_output_____" ], [ "db = client.nhl_db\ncollection = db.articles\n", "_____no_output_____" ], [ "url = \"https://mars.nasa.gov/news/8613/a-year-of-surprising-science-from-nasas-insight-mars-mission/\"", "_____no_output_____" ], [ "response = requests.get(url)\n\nresponse \n", "_____no_output_____" ], [ "soup = BeautifulSoup(response.text, 'lxml')\n\nsoup\n", "_____no_output_____" ], [ "results = soup.find_all('div', class_=\"article-item__top\")\n\nresults\n", "_____no_output_____" ], [ "for result in results:\n # scrape the article header \n header = result.find('h1', class_='article-item__headline').text\n \n # scrape the article subheader\n subheader = result.find('h2', class_='article-item__subheader').text\n \n # scrape the datetime\n datetime = result.find('span', class_=\"article-item__date\")['data-date']\n \n # get only the date from the datetime\n date = datetime.split('T')[0]\n \n # print article data\n print('-----------------')\n print(header)\n print(subheader)\n print(date)\n\n # Dictionary to be inserted into MongoDB\n post = {\n 'header': header,\n 'subheader': subheader,\n 'date': date\n }\n\n \n\n ", "_____no_output_____" ], [ "articles = db.articles.find()\nfor article in articles:\n print(article)\n", "_____no_output_____" ], [ "news_p = soup.body.find_all('p')\n\nnews_p", "_____no_output_____" ], [ "soup.body.find('p').text", "_____no_output_____" ], [ "news_title = soup.find_all(\"title\")\n\nnews_title\n", "_____no_output_____" ], [ "featured_image_url = \"https://www.jpl.nasa.gov/spaceimages/images/mediumsize/PIA18846_ip.jpg\"", "_____no_output_____" ], [ "mars_weather = \"InSight sol 445 (2020-02-26), low -92.8ºC (-135.0ºF), high -12.8ºC (8.9ºF),winds, from the SSE at 5.9 m/s (13.3 mph), gusting to 21.1 m/s, (47.3 mph),pressure at 6.30 hPa\"\n\n", "_____no_output_____" ], [ "mars_url = \"https://space-facts.com/mars/\"\n\nbrowser.visit(url)", "_____no_output_____" ], [ "results = soup.find_all('div', class_=\"widget-header\")", "_____no_output_____" ], [ "hemisphere_image_urls = [\n {\"title\": \"Valles Marineris Hemisphere\", \"img_url\": \"https://astrogeology.usgs.gov/cache/images/7cf2da4bf549ed01c17f206327be4db7_valles_marineris_enhanced.tif_full.jpg\"},\n {\"title\": \"Cerberus Hemisphere\", \"img_url\": \"https://astrogeology.usgs.gov/cache/images/cfa62af2557222a02478f1fcd781d445_cerberus_enhanced.tif_full.jpg\"},\n {\"title\": \"Schiaparelli Hemisphere\", \"img_url\": \"https://astrogeology.usgs.gov/cache/images/3cdd1cbf5e0813bba925c9030d13b62e_schiaparelli_enhanced.tif_full.jpg\"},\n {\"title\": \"Syrtis Major Hemisphere\", \"img_url\": \"https://astrogeology.usgs.gov/cache/images/ae209b4e408bb6c3e67b6af38168cf28_syrtis_major_enhanced.tif_full.jpg\"},\n]\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e70ddb35fe3abca214b458339a2b3b5814698155
1,345
ipynb
Jupyter Notebook
resizegif.ipynb
crazysuryaa/Deeplearning_PlayGround
94ddc3ddba157e3eb7cde4514e8824fe2c428551
[ "MIT" ]
null
null
null
resizegif.ipynb
crazysuryaa/Deeplearning_PlayGround
94ddc3ddba157e3eb7cde4514e8824fe2c428551
[ "MIT" ]
null
null
null
resizegif.ipynb
crazysuryaa/Deeplearning_PlayGround
94ddc3ddba157e3eb7cde4514e8824fe2c428551
[ "MIT" ]
null
null
null
23.189655
69
0.537546
[ [ [ "from PIL import Image, ImageSequence\n\n# Output (max) size\nsize = 1920, 1080\n\n# Open source\nim = Image.open(\"in.gif\")\n\n# Get sequence iterator\nframes = ImageSequence.Iterator(im)\n\n# Wrap on-the-fly thumbnail generator\ndef thumbnails(frames):\n for frame in frames:\n thumbnail = frame.copy()\n thumbnail.thumbnail(size, Image.ANTIALIAS)\n yield thumbnail\n\nframes = thumbnails(frames)\n\n# Save output\nom = next(frames) # Handle first frame separately\nom.info = im.info # Copy sequence info\nom.save(\"out.gif\", save_all=True, append_images=list(frames))", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
e70df1965b1080b9d5b08bd02ebec435208c6c81
44,254
ipynb
Jupyter Notebook
notebooks/1.4 - Pandas Best Practices.ipynb
jseabold/ngcm_pandas_2017
ab3d2b55f92b5919cc625f78908a08324855d830
[ "CC0-1.0" ]
1
2017-09-01T22:21:16.000Z
2017-09-01T22:21:16.000Z
notebooks/1.4 - Pandas Best Practices.ipynb
jseabold/ngcm_pandas_2017
ab3d2b55f92b5919cc625f78908a08324855d830
[ "CC0-1.0" ]
null
null
null
notebooks/1.4 - Pandas Best Practices.ipynb
jseabold/ngcm_pandas_2017
ab3d2b55f92b5919cc625f78908a08324855d830
[ "CC0-1.0" ]
1
2020-03-16T11:02:04.000Z
2020-03-16T11:02:04.000Z
28.829967
2,391
0.59563
[ [ [ "# Table of Contents\n <p><div class=\"lev1\"><a href=\"#Idomatic-Pandas\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Idomatic Pandas</a></div><div class=\"lev2\"><a href=\"#Reshaping-DataFrame-objects\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Reshaping DataFrame objects</a></div><div class=\"lev2\"><a href=\"#Exercise\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Exercise</a></div><div class=\"lev2\"><a href=\"#Method-chaining\"><span class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>Method chaining</a></div><div class=\"lev2\"><a href=\"#Pivoting\"><span class=\"toc-item-num\">1.4&nbsp;&nbsp;</span>Pivoting</a></div><div class=\"lev3\"><a href=\"#Exercise\"><span class=\"toc-item-num\">1.4.1&nbsp;&nbsp;</span>Exercise</a></div><div class=\"lev2\"><a href=\"#Data-transformation\"><span class=\"toc-item-num\">1.5&nbsp;&nbsp;</span>Data transformation</a></div><div class=\"lev3\"><a href=\"#Dealing-with-duplicates\"><span class=\"toc-item-num\">1.5.1&nbsp;&nbsp;</span>Dealing with duplicates</a></div><div class=\"lev3\"><a href=\"#Value-replacement\"><span class=\"toc-item-num\">1.5.2&nbsp;&nbsp;</span>Value replacement</a></div><div class=\"lev3\"><a href=\"#Inidcator-variables\"><span class=\"toc-item-num\">1.5.3&nbsp;&nbsp;</span>Inidcator variables</a></div><div class=\"lev3\"><a href=\"#Exercise\"><span class=\"toc-item-num\">1.5.4&nbsp;&nbsp;</span>Exercise</a></div><div class=\"lev3\"><a href=\"#Discretization\"><span class=\"toc-item-num\">1.5.5&nbsp;&nbsp;</span>Discretization</a></div><div class=\"lev3\"><a href=\"#Exercise\"><span class=\"toc-item-num\">1.5.6&nbsp;&nbsp;</span>Exercise</a></div><div class=\"lev2\"><a href=\"#Categorical-Variables\"><span class=\"toc-item-num\">1.6&nbsp;&nbsp;</span>Categorical Variables</a></div><div class=\"lev2\"><a href=\"#Data-aggregation-and-GroupBy-operations\"><span class=\"toc-item-num\">1.7&nbsp;&nbsp;</span>Data aggregation and GroupBy operations</a></div><div class=\"lev3\"><a href=\"#Exercise\"><span class=\"toc-item-num\">1.7.1&nbsp;&nbsp;</span>Exercise</a></div><div class=\"lev3\"><a href=\"#Apply\"><span class=\"toc-item-num\">1.7.2&nbsp;&nbsp;</span>Apply</a></div><div class=\"lev2\"><a href=\"#Exercise\"><span class=\"toc-item-num\">1.8&nbsp;&nbsp;</span>Exercise</a></div><div class=\"lev2\"><a href=\"#References\"><span class=\"toc-item-num\">1.9&nbsp;&nbsp;</span>References</a></div>", "_____no_output_____" ], [ "# Idomatic Pandas\n\n> Q: How do I make my pandas code faster with parallelism?\n\n> A: You don’t need parallelism, you can use Pandas better.\n\n> -- Matthew Rocklin\n\nNow that we have been exposed to the basic functionality of pandas, lets explore some more advanced features that will be useful when addressing more complex data management tasks.\n\nAs most statisticians/data analysts will admit, often the lion's share of the time spent implementing an analysis is devoted to preparing the data itself, rather than to coding or running a particular model that uses the data. This is where Pandas and Python's standard library are beneficial, providing high-level, flexible, and efficient tools for manipulating your data as needed.\n\nAs you may already have noticed, there are sometimes mutliple ways to achieve the same goal using pandas. Importantly, some approaches are better than others, in terms of performance, readability and ease of use. 
We will cover some important ways of maximizing your pandas efficiency.", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "## Reshaping DataFrame objects\n\nIn the context of a single DataFrame, we are often interested in re-arranging the layout of our data. ", "_____no_output_____" ], [ "This dataset in from Table 6.9 of [Statistical Methods for the Analysis of Repeated Measurements](http://www.amazon.com/Statistical-Methods-Analysis-Repeated-Measurements/dp/0387953701) by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia (spasmodic torticollis) from nine U.S. sites.\n\n* Randomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37)\n* Response variable: total score on Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment)\n* TWSTRS measured at baseline (week 0) and weeks 2, 4, 8, 12, 16 after treatment began", "_____no_output_____" ] ], [ [ "cdystonia = pd.read_csv(\"../data/cdystonia.csv\", index_col=None)\ncdystonia.head()", "_____no_output_____" ] ], [ [ "This dataset includes **repeated measurements** of the same individuals (longitudinal data). Its possible to present such information in (at least) two ways: showing each repeated measurement in their own row, or in multiple columns representing multiple measurements.\n", "_____no_output_____" ], [ "The `stack` method **rotates** the data frame so that columns are represented in rows:", "_____no_output_____" ] ], [ [ "stacked = cdystonia.stack()\nstacked", "_____no_output_____" ] ], [ [ "Have a peek at the structure of the index of the stacked data (and the data itself).\n\nTo complement this, `unstack` pivots from rows back to columns.", "_____no_output_____" ] ], [ [ "stacked.unstack().head()", "_____no_output_____" ] ], [ [ "## Exercise\n\nWhich columns uniquely define a row? Create a DataFrame called `cdystonia2` with a hierarchical index based on these columns.", "_____no_output_____" ] ], [ [ "# Write your answer here", "_____no_output_____" ] ], [ [ "If we want to transform this data so that repeated measurements are in columns, we can `unstack` the `twstrs` measurements according to `obs`.", "_____no_output_____" ] ], [ [ "twstrs_wide = cdystonia2['twstrs'].unstack('obs')\ntwstrs_wide.head()", "_____no_output_____" ] ], [ [ "We can now **merge** these reshaped outcomes data with the other variables to create a **wide format** DataFrame that consists of one row for each patient.", "_____no_output_____" ] ], [ [ "cdystonia_wide = (cdystonia[['patient','site','id','treat','age','sex']]\n .drop_duplicates()\n .merge(twstrs_wide, right_index=True, left_on='patient', how='inner'))\ncdystonia_wide.head()", "_____no_output_____" ] ], [ [ "A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking:", "_____no_output_____" ] ], [ [ "(cdystonia.set_index(['patient','site','id','treat','age','sex','week'])['twstrs']\n .unstack('week').head())", "_____no_output_____" ] ], [ [ "To convert our \"wide\" format back to long, we can use the `melt` function, appropriately parameterized. 
This function is useful for `DataFrame`s where one\nor more columns are identifier variables (`id_vars`), with the remaining columns being measured variables (`value_vars`). The measured variables are \"unpivoted\" to\nthe row axis, leaving just two non-identifier columns, a *variable* and its corresponding *value*, which can both be renamed using optional arguments.", "_____no_output_____" ] ], [ [ "pd.melt(cdystonia_wide, id_vars=['patient','site','id','treat','age','sex'], \n var_name='obs', value_name='twsters').head()", "_____no_output_____" ] ], [ [ "This illustrates the two formats for longitudinal data: **long** and **wide** formats. Its typically better to store data in long format because additional data can be included as additional rows in the database, while wide format requires that the entire database schema be altered by adding columns to every row as data are collected.\n\nThe preferable format for analysis depends entirely on what is planned for the data, so it is imporant to be able to move easily between them.", "_____no_output_____" ], [ "## Method chaining\n\nIn the DataFrame reshaping section above, you probably noticed how several methods were strung together to produce a wide format table:", "_____no_output_____" ] ], [ [ "(cdystonia[['patient','site','id','treat','age','sex']]\n .drop_duplicates()\n .merge(twstrs_wide, right_index=True, left_on='patient', how='inner')\n .head())", "_____no_output_____" ] ], [ [ "This approach of seqentially calling methods is called **method chaining**, and despite the fact that it creates very long lines of code that must be properly justified, it allows for the writing of rather concise and readable code. Method chaining is possible because of the pandas convention of returning copies of the results of operations, rather than in-place operations. This allows methods from the returned object to be immediately called, as needed, rather than assigning the output to a variable that might not otherwise be used. For example, without method chaining we would have done the following:", "_____no_output_____" ] ], [ [ "cdystonia_subset = cdystonia[['patient','site','id','treat','age','sex']]\ncdystonia_complete = cdystonia_subset.drop_duplicates()\ncdystonia_merged = cdystonia_complete.merge(twstrs_wide, right_index=True, left_on='patient', how='inner')\ncdystonia_merged.head()", "_____no_output_____" ] ], [ [ "This necessitates the creation of a slew of intermediate variables that we really don't need.\n\nLet's transform another dataset using method chaining. The `measles.csv` file contains de-identified cases of measles from an outbreak in Sao Paulo, Brazil in 1997. The file contains rows of individual records:", "_____no_output_____" ] ], [ [ "measles = pd.read_csv(\"../data/measles.csv\", index_col=0, encoding='latin-1', parse_dates=['ONSET'])\nmeasles.head()", "_____no_output_____" ] ], [ [ "The goal is to summarize this data by age groups and bi-weekly period, so that we can see how the outbreak affected different ages over the course of the outbreak.\n\nThe best approach is to build up the chain incrementally. 
We can begin by generating the age groups (using `cut`) and grouping by age group and the date (`ONSET`):", "_____no_output_____" ] ], [ [ "(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))\n .groupby(['ONSET', 'AGE_GROUP']))", "_____no_output_____" ] ], [ [ "What we then want is the number of occurences in each combination, which we can obtain by checking the `size` of each grouping:", "_____no_output_____" ] ], [ [ "(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))\n .groupby(['ONSET', 'AGE_GROUP'])\n .size()).head(10)", "_____no_output_____" ] ], [ [ "This results in a hierarchically-indexed `Series`, which we can pivot into a `DataFrame` by simply unstacking:", "_____no_output_____" ] ], [ [ "(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))\n .groupby(['ONSET', 'AGE_GROUP'])\n .size()\n .unstack()).head(5)", "_____no_output_____" ] ], [ [ "Now, fill replace the missing values with zeros:", "_____no_output_____" ] ], [ [ "(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))\n .groupby(['ONSET', 'AGE_GROUP'])\n .size()\n .unstack()\n .fillna(0)).head(5)", "_____no_output_____" ] ], [ [ "Finally, we want the counts in 2-week intervals, rather than as irregularly-reported days, which yields our the table of interest:", "_____no_output_____" ] ], [ [ "case_counts_2w = (measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))\n .groupby(['ONSET', 'AGE_GROUP'])\n .size()\n .unstack()\n .fillna(0)\n .resample('2W')\n .sum())\n\ncase_counts_2w", "_____no_output_____" ] ], [ [ "From this, it is easy to create meaningful plots and conduct analyses:", "_____no_output_____" ] ], [ [ "case_counts_2w.plot(cmap='hot')", "_____no_output_____" ] ], [ [ "## Pivoting\n\nThe `pivot` method allows a DataFrame to be transformed easily between long and wide formats in the same way as a pivot table is created in a spreadsheet. It takes three arguments: `index`, `columns` and `values`, corresponding to the DataFrame index (the row headers), columns and cell values, respectively.\n\nFor example, we may want the `twstrs` variable (the response variable) in wide format according to patient, as we saw with the unstacking method above:", "_____no_output_____" ] ], [ [ "cdystonia.pivot(index='patient', columns='obs', values='twstrs').head()", "_____no_output_____" ] ], [ [ "### Exercise\n\nTry pivoting the `cdystonia` DataFrame without specifying a variable for the cell values:", "_____no_output_____" ] ], [ [ "# Write your answer here", "_____no_output_____" ] ], [ [ "A related method, `pivot_table`, creates a spreadsheet-like table with a hierarchical index, and allows the values of the table to be populated using an arbitrary **aggregation function**.", "_____no_output_____" ] ], [ [ "cdystonia.pivot_table(index=['site', 'treat'], columns='week', values='twstrs', \n aggfunc=max).head(20)", "_____no_output_____" ] ], [ [ "For a simple **cross-tabulation** of group frequencies, the `crosstab` function (not a method) aggregates counts of data according to factors in rows and columns. 
The factors may be hierarchical if desired.", "_____no_output_____" ] ], [ [ "pd.crosstab(cdystonia.sex, cdystonia.site)", "_____no_output_____" ] ], [ [ "## Data transformation\n\nThere are a slew of additional operations for DataFrames that we would collectively refer to as **transformations** which include tasks such as:\n\n- removing duplicate values\n- replacing values\n- grouping values.", "_____no_output_____" ], [ "### Dealing with duplicates\n\nWe can easily identify and remove duplicate values from `DataFrame` objects. For example, say we want to remove ships from our `vessels` dataset that have the same name:", "_____no_output_____" ] ], [ [ "vessels = pd.read_csv('../data/AIS/vessel_information.csv')\nvessels.tail(10)", "_____no_output_____" ], [ "vessels.duplicated(subset='names').tail(10)", "_____no_output_____" ] ], [ [ "These rows can be removed using `drop_duplicates`", "_____no_output_____" ] ], [ [ "vessels.drop_duplicates(['names']).tail(10)", "_____no_output_____" ] ], [ [ "### Value replacement\n\nFrequently, we get data columns that are encoded as strings that we wish to represent numerically for the purposes of including it in a quantitative analysis. For example, consider the treatment variable in the cervical dystonia dataset:", "_____no_output_____" ] ], [ [ "cdystonia.treat.value_counts()", "_____no_output_____" ] ], [ [ "A logical way to specify these numerically is to change them to integer values, perhaps using \"Placebo\" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the `map` method to implement the changes.", "_____no_output_____" ] ], [ [ "treatment_map = {'Placebo': 0, '5000U': 1, '10000U': 2}", "_____no_output_____" ], [ "cdystonia['treatment'] = cdystonia.treat.map(treatment_map)\ncdystonia.treatment", "_____no_output_____" ] ], [ [ "Alternately, if we simply want to replace particular values in a `Series` or `DataFrame`, we can use the `replace` method. \n\nAn example where replacement is useful is replacing sentinel values with an appropriate numeric value prior to analysis. A large negative number is sometimes used in otherwise positive-valued data to denote missing values.", "_____no_output_____" ] ], [ [ "scores = pd.Series([99, 76, 85, -999, 84, 95])", "_____no_output_____" ] ], [ [ "In such situations, we can use `replace` to substitute `nan` where the sentinel values occur.", "_____no_output_____" ] ], [ [ "scores.replace(-999, np.nan)", "_____no_output_____" ] ], [ [ "We can also perform the same replacement that we used `map` for with `replace`:", "_____no_output_____" ] ], [ [ "cdystonia2.treat.replace({'Placebo': 0, '5000U': 1, '10000U': 2})", "_____no_output_____" ] ], [ [ "### Inidcator variables\n\nFor some statistical analyses (*e.g.* regression models or analyses of variance), categorical or group variables need to be converted into columns of indicators--zeros and ones--to create a so-called **design matrix**. The Pandas function `get_dummies` (indicator variables are also known as *dummy variables*) makes this transformation straightforward.\n\nLet's consider the DataFrame containing the ships corresponding to the transit segments on the eastern seaboard. The `type` variable denotes the class of vessel; we can create a matrix of indicators for this. 
For simplicity, lets filter out the 5 most common types of ships.\n\n### Exercise\n\nCreate a subset of the `vessels` DataFrame called `vessels5` that only contains the 5 most common types of vessels, based on their prevalence in the dataset.", "_____no_output_____" ] ], [ [ "# Write your answer here", "_____no_output_____" ] ], [ [ "We can now apply `get_dummies` to the vessel type to create 5 indicator variables.", "_____no_output_____" ] ], [ [ "pd.get_dummies(vessels5.type).head(10)", "_____no_output_____" ] ], [ [ "### Discretization\n\nPandas' `cut` function can be used to group continuous or countable data in to bins. Discretization is generally a very **bad idea** for statistical analysis, so use this function responsibly!\n\nLets say we want to bin the ages of the cervical dystonia patients into a smaller number of groups:", "_____no_output_____" ] ], [ [ "cdystonia.age.describe()", "_____no_output_____" ] ], [ [ "Let's transform these data into decades, beginnnig with individuals in their 20's and ending with those in their 80's:", "_____no_output_____" ] ], [ [ "pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90])[:30]", "_____no_output_____" ] ], [ [ "The parentheses indicate an open interval, meaning that the interval includes values up to but *not including* the endpoint, whereas the square bracket is a closed interval, where the endpoint is included in the interval. We can switch the closure to the left side by setting the `right` flag to `False`:", "_____no_output_____" ] ], [ [ "pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90], right=False)[:30]", "_____no_output_____" ] ], [ [ "Since the data are now **ordinal**, rather than numeric, we can give them labels:", "_____no_output_____" ] ], [ [ "pd.cut(cdystonia.age, [20,40,60,80,90], labels=['young','middle-aged','old','really old'])[:30]", "_____no_output_____" ] ], [ [ "A related function `qcut` uses empirical quantiles to divide the data. If, for example, we want the quartiles -- (0-25%], (25-50%], (50-70%], (75-100%] -- we can just specify 4 intervals, which will be equally-spaced by default:", "_____no_output_____" ] ], [ [ "pd.qcut(cdystonia.age, 4)[:30]", "_____no_output_____" ] ], [ [ "Alternatively, one can specify custom quantiles to act as cut points:", "_____no_output_____" ] ], [ [ "quantiles = pd.qcut(vessels.max_loa, [0, 0.01, 0.05, 0.95, 0.99, 1])\nquantiles[:30]", "_____no_output_____" ] ], [ [ "### Exercise\n\nUse the discretized segment lengths as the input for `get_dummies` to create 5 indicator variables for segment length:", "_____no_output_____" ] ], [ [ "# Write your answer here", "_____no_output_____" ] ], [ [ "## Categorical Variables\n\nOne of the keys to maximizing performance in pandas is to use the appropriate **types** for your data wherever possible. In the case of categorical data--either the ordered categories as we have just created, or unordered categories like race, gender or country--the use of the `categorical` to encode string variables as numeric quantities can dramatically improve performance and simplify subsequent analyses.\n\nWhen text data are imported into a `DataFrame`, they are endowed with an `object` dtype. This will result in relatively slow computation because this dtype runs at Python speeds, rather than as Cython code that gives much of pandas its speed. 
We can ameliorate this by employing the `categorical` dtype on such data.", "_____no_output_____" ] ], [ [ "cdystonia_cat = cdystonia.assign(treatment=cdystonia.treat.astype('category')).drop('treat', axis=1)\ncdystonia_cat.dtypes", "_____no_output_____" ], [ "cdystonia_cat.treatment.head()", "_____no_output_____" ], [ "cdystonia_cat.treatment.cat.codes", "_____no_output_____" ] ], [ [ "This creates an **unordered** categorical variable. To create an ordinal variable, we can specify `order=True` as an argument to `astype`:", "_____no_output_____" ] ], [ [ "cdystonia.treat.astype('category', ordered=True).head()", "_____no_output_____" ] ], [ [ "However, this is not the correct order; by default, the categories will be sorted alphabetically, which here gives exactly the reverse order that we need. \n\nTo specify an arbitrary order, we can used the `set_categories` method, as follows:", "_____no_output_____" ] ], [ [ "cdystonia.treat.astype('category').cat.set_categories(['Placebo', '5000U', '10000U'], ordered=True).head()", "_____no_output_____" ] ], [ [ "Notice that we obtained `set_categories` from the `cat` attribute of the categorical variable. This is known as the **category accessor**, and is a device for gaining access to `Categorical` variables' categories, analogous to the string accessor that we have seen previously from text variables.", "_____no_output_____" ] ], [ [ "cdystonia_cat.treatment.cat", "_____no_output_____" ] ], [ [ "Additional categoried can be added, even if they do not currently exist in the `DataFrame`, but are part of the set of possible categories:", "_____no_output_____" ] ], [ [ "cdystonia_cat['treatment'] = (cdystonia.treat.astype('category').cat\n .set_categories(['Placebo', '5000U', '10000U', '20000U'], ordered=True))", "_____no_output_____" ] ], [ [ "To complement this, we can remove categories that we do not wish to retain:", "_____no_output_____" ] ], [ [ "cdystonia_cat.treatment.cat.remove_categories('20000U').head()", "_____no_output_____" ] ], [ [ "Or, even more simply:", "_____no_output_____" ] ], [ [ "cdystonia_cat.treatment.cat.remove_unused_categories().head()", "_____no_output_____" ] ], [ [ "For larger datasets, there is an appreciable gain in performance, both in terms of speed and memory usage.", "_____no_output_____" ] ], [ [ "vessels_merged = (pd.read_csv('../data/AIS/vessel_information.csv', index_col=0)\n .merge(pd.read_csv('../data/AIS/transit_segments.csv'), left_index=True, right_on='mmsi'))", "_____no_output_____" ], [ "vessels_merged['registered'] = vessels_merged.flag.astype('category')", "_____no_output_____" ], [ "%timeit vessels_merged.groupby('flag').avg_sog.mean().sort_values()", "_____no_output_____" ], [ "%timeit vessels_merged.groupby('registered').avg_sog.mean().sort_values()", "_____no_output_____" ], [ "vessels_merged[['flag','registered']].memory_usage()", "_____no_output_____" ] ], [ [ "## Data aggregation and GroupBy operations\n\nOne of the most powerful features of Pandas is its **GroupBy** functionality. On occasion we may want to perform operations on *groups* of observations within a dataset. 
For exmaple:\n\n* **aggregation**, such as computing the sum of mean of each group, which involves applying a function to each group and returning the aggregated results\n* **slicing** the DataFrame into groups and then doing something with the resulting slices (*e.g.* plotting)\n* group-wise **transformation**, such as standardization/normalization", "_____no_output_____" ] ], [ [ "cdystonia_grouped = cdystonia.groupby(cdystonia.patient)", "_____no_output_____" ] ], [ [ "This **grouped** dataset is hard to visualize\n\n", "_____no_output_____" ] ], [ [ "cdystonia_grouped", "_____no_output_____" ] ], [ [ "However, the grouping is only an intermediate step; for example, we may want to **iterate** over each of the patient groups:", "_____no_output_____" ] ], [ [ "for patient, group in cdystonia_grouped:\n print('patient', patient)\n print('group', group)", "_____no_output_____" ] ], [ [ "A common data analysis procedure is the **split-apply-combine** operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table.\n\nFor example, we may want to aggregate our data with with some function.\n\n![split-apply-combine](images/split-apply-combine.png)\n\n<div align=\"right\">*(figure taken from \"Python for Data Analysis\", p.251)*</div>", "_____no_output_____" ], [ "We can aggregate in Pandas using the `aggregate` (or `agg`, for short) method:", "_____no_output_____" ] ], [ [ "cdystonia_grouped.agg(np.mean).head()", "_____no_output_____" ] ], [ [ "Notice that the `treat` and `sex` variables are not included in the aggregation. Since it does not make sense to aggregate non-string variables, these columns are simply ignored by the method.\n\nSome aggregation functions are so common that Pandas has a convenience method for them, such as `mean`:", "_____no_output_____" ] ], [ [ "cdystonia_grouped.mean().head()", "_____no_output_____" ] ], [ [ "The `add_prefix` and `add_suffix` methods can be used to give the columns of the resulting table labels that reflect the transformation:", "_____no_output_____" ] ], [ [ "cdystonia_grouped.mean().add_suffix('_mean').head()", "_____no_output_____" ] ], [ [ "### Exercise\n\nUse the `quantile` method to generate the median values of the `twstrs` variable for each patient.", "_____no_output_____" ] ], [ [ "# Write your answer here", "_____no_output_____" ] ], [ [ "If we wish, we can easily aggregate according to multiple keys:", "_____no_output_____" ] ], [ [ "cdystonia.groupby(['week','site']).mean().head()", "_____no_output_____" ] ], [ [ "Alternately, we can **transform** the data, using a function of our choice with the `transform` method:", "_____no_output_____" ] ], [ [ "normalize = lambda x: (x - x.mean())/x.std()\n\ncdystonia_grouped.transform(normalize).head()", "_____no_output_____" ] ], [ [ "It is easy to do column selection within `groupby` operations, if we are only interested split-apply-combine operations on a subset of columns:", "_____no_output_____" ] ], [ [ "%timeit cdystonia_grouped['twstrs'].mean().head()", "_____no_output_____" ] ], [ [ "Or, as a DataFrame:", "_____no_output_____" ] ], [ [ "cdystonia_grouped[['twstrs']].mean().head()", "_____no_output_____" ] ], [ [ "If you simply want to divide your DataFrame into chunks for later use, its easy to convert them into a dict so that they can be easily indexed out as needed:", "_____no_output_____" ] ], [ [ "chunks = dict(list(cdystonia_grouped))", "_____no_output_____" ], [ "chunks[4]", "_____no_output_____" ] ], [ [ "By 
default, `groupby` groups by row, but we can specify the `axis` argument to change this. For example, we can group our columns by `dtype` this way:", "_____no_output_____" ] ], [ [ "dict(list(cdystonia.groupby(cdystonia.dtypes, axis=1)))", "_____no_output_____" ] ], [ [ "Its also possible to group by one or more levels of a hierarchical index. Recall `cdystonia2`, which we created with a hierarchical index:", "_____no_output_____" ] ], [ [ "cdystonia2.head(10)", "_____no_output_____" ] ], [ [ "The `level` argument specifies which level of the index to use for grouping.", "_____no_output_____" ] ], [ [ "cdystonia2.groupby(level='obs', axis=0)['twstrs'].mean()", "_____no_output_____" ] ], [ [ "### Apply\n\nWe can generalize the split-apply-combine methodology by using `apply` function. This allows us to invoke any function we wish on a grouped dataset and recombine them into a DataFrame.", "_____no_output_____" ], [ "The function below takes a DataFrame and a column name, sorts by the column, and takes the `n` largest values of that column. We can use this with `apply` to return the largest values from every group in a DataFrame in a single call. ", "_____no_output_____" ] ], [ [ "def top(df, column, n=5):\n return df.sort_index(by=column, ascending=False)[:n]", "_____no_output_____" ] ], [ [ "To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield `segments_merged`). Say we wanted to return the 3 longest segments travelled by each ship:", "_____no_output_____" ] ], [ [ "goo = vessels_merged.groupby('mmsi')", "_____no_output_____" ], [ "top3segments = vessels_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']]\ntop3segments.head(15)", "_____no_output_____" ] ], [ [ "Notice that additional arguments for the applied function can be passed via `apply` after the function name. It assumes that the DataFrame is the first argument.", "_____no_output_____" ], [ "## Exercise\n\nLoad the dataset in `titanic.xls`. It contains data on all the passengers that travelled on the Titanic.", "_____no_output_____" ] ], [ [ "from IPython.core.display import HTML\nHTML(filename='../data/titanic.html')", "_____no_output_____" ] ], [ [ "Women and children first?\n\n1. Use the `groupby` method to calculate the proportion of passengers that survived by sex.\n2. Calculate the same proportion, but by class and sex.\n3. Create age categories: children (under 14 years), adolescents (14-20), adult (21-64), and senior(65+), and calculate survival proportions by age category, class and sex.", "_____no_output_____" ] ], [ [ "# Write your answer here", "_____no_output_____" ] ], [ [ "## References\n\n[Python for Data Analysis](http://shop.oreilly.com/product/0636920023784.do) Wes McKinney", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
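The notebook record above walks through pandas discretization, categorical dtypes and split-apply-combine; the short sketch below restates the two core ideas (an ordered categorical plus a grouped aggregation and transformation) in one self-contained snippet. The DataFrame and its columns (patient, treat, twstrs) are invented stand-ins that echo the cervical-dystonia table used in that notebook, not the original data.

    import pandas as pd

    # Hypothetical stand-in for the cdystonia-style table used in the notebook above.
    df = pd.DataFrame({
        'patient': [1, 1, 2, 2, 3, 3],
        'treat': ['Placebo', 'Placebo', '5000U', '5000U', '10000U', '10000U'],
        'twstrs': [32, 30, 41, 35, 28, 22],
    })

    # Ordered categorical: explicit level ordering and less memory than object dtype.
    df['treat'] = pd.Categorical(df['treat'],
                                 categories=['Placebo', '5000U', '10000U'],
                                 ordered=True)

    # Split-apply-combine: mean outcome per treatment group.
    print(df.groupby('treat', observed=False)['twstrs'].mean())

    # Group-wise transformation: standardize within each patient.
    df['twstrs_z'] = df.groupby('patient')['twstrs'].transform(
        lambda x: (x - x.mean()) / x.std())
    print(df)

Grouping by the ordered categorical keeps the treatment levels in their clinical order, which is the same motivation the notebook gives for preferring set_categories over the default alphabetical sorting.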
e70e17e3edd1c6b871e415a7299710f9695221c5
17,876
ipynb
Jupyter Notebook
Analysis.ipynb
halleysfifthinc/ArmMotionStabilityRecoveryPerturbations
ca45732f1a06efa046aa7048a87797ae9586dd55
[ "MIT" ]
null
null
null
Analysis.ipynb
halleysfifthinc/ArmMotionStabilityRecoveryPerturbations
ca45732f1a06efa046aa7048a87797ae9586dd55
[ "MIT" ]
null
null
null
Analysis.ipynb
halleysfifthinc/ArmMotionStabilityRecoveryPerturbations
ca45732f1a06efa046aa7048a87797ae9586dd55
[ "MIT" ]
null
null
null
29.943049
144
0.488252
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e70e185acc329db0468130de5e44514d50d519c3
1,383
ipynb
Jupyter Notebook
Bloco7/aula15.ipynb
Felipe-Tommaselli/Sistemas_evolutivos
db126a2e9e5140d050bd0670d34b1acfcd5a88ca
[ "MIT" ]
null
null
null
Bloco7/aula15.ipynb
Felipe-Tommaselli/Sistemas_evolutivos
db126a2e9e5140d050bd0670d34b1acfcd5a88ca
[ "MIT" ]
null
null
null
Bloco7/aula15.ipynb
Felipe-Tommaselli/Sistemas_evolutivos
db126a2e9e5140d050bd0670d34b1acfcd5a88ca
[ "MIT" ]
null
null
null
28.8125
226
0.630513
[ [ [ "# BLOCK 7: Lecture 15\n\n* link https://drive.google.com/file/d/1IydWtxJlGuDXQ3URtGgXkPKWrgr-xe1e/view\n\n- link to the whiteboard with the summary https://gitlab.com/simoesusp/disciplinas/-/blob/master/SSC0713-Sistemas-Evolutivos-Aplicados-a-Robotica/MaterialAulaDistancia/LousaImagens_PrimeiroSemestre2021/Evolutivos_ESTRATEGIAS_AG03.svg", "_____no_output_____" ], [ "> initial discussion\n\n* he keeps talking about his PhD thesis\n\n- use of the **Black Box** concept\n  - in this case he ran every possible sensor input, trained the neural network, and then used a small memory to \"tabulate\" the outputs the network gave for each input\n\n* he also explained a bit more about operations on data types and that lower-level side of working with memory\n  * low-level optimization techniques", "_____no_output_____" ], [ "- HE DROPPED OUT OF THE LECTURE FOR ABOUT 40 MIN\n\n* after he came back it was just casual discussion", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown" ] ]
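The lecture notes above describe a "black box" trick: evaluate the trained network once for every possible sensor input and store the answers in a small memory, so that at run time the controller only does a table lookup. A rough Python sketch of that idea follows; the stand-in trained_policy function and the assumption of 8 binary sensors are invented for illustration.

    import itertools
    import numpy as np

    # Hypothetical trained policy; a real controller would call the network's predict() here.
    def trained_policy(inputs):
        return int(np.sum(inputs) % 3)   # pretend the output is one of 3 actions

    n_sensors = 8                        # assumed: 8 binary sensors -> 2**8 = 256 possible readings
    lookup = {}
    for bits in itertools.product([0, 1], repeat=n_sensors):
        lookup[bits] = trained_policy(np.array(bits))

    # At run time a sensor reading becomes a constant-time memory read.
    reading = (1, 0, 1, 1, 0, 0, 1, 0)
    print(len(lookup), lookup[reading])

The trade-off is a one-off exhaustive evaluation in exchange for constant-time inference, which only pays off when the input space is small and discrete.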
e70e4eaba2685816c55bbdf90549c2fb3122f5fd
263,724
ipynb
Jupyter Notebook
project/starter_code/aequitas.ipynb
luigisaetta/uci-diabetes
146c4d9adea5e84d828226e0b98c5b27ee74c17f
[ "MIT" ]
null
null
null
project/starter_code/aequitas.ipynb
luigisaetta/uci-diabetes
146c4d9adea5e84d828226e0b98c5b27ee74c17f
[ "MIT" ]
null
null
null
project/starter_code/aequitas.ipynb
luigisaetta/uci-diabetes
146c4d9adea5e84d828226e0b98c5b27ee74c17f
[ "MIT" ]
null
null
null
402.632061
45,136
0.924876
[ [ [ "### Bias analysis using AEquitas toolkit on UCI diabetes dataset", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport aequitas as ae\nfrom aequitas.preprocessing import preprocess_input_df\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "FILE = 'ae_subset.csv'\n\nae_subset_df = pd.read_csv(FILE)", "_____no_output_____" ], [ "ae_subset_df.head()", "_____no_output_____" ], [ "# plot histogram for races\nplt.hist(ae_subset_df['race'])\nplt.grid()", "_____no_output_____" ] ], [ [ "### The majority group is Caucasian. We will take this group as reference group in Bias analysis", "_____no_output_____" ] ], [ [ "# plot histogram for gender\nplt.hist(ae_subset_df['gender'])\nplt.grid()", "_____no_output_____" ] ], [ [ "### Analysis with AEquitas", "_____no_output_____" ] ], [ [ "df, _ = preprocess_input_df(ae_subset_df)", "_____no_output_____" ], [ "from aequitas.group import Group\n\n# compute the crosstab\ng = Group()\nxtab, _ = g.get_crosstabs(df)", "_____no_output_____" ], [ "absolute_metrics = g.list_absolute_metrics(xtab)\n\nxtab[['attribute_name', 'attribute_value'] + absolute_metrics].round(2)", "_____no_output_____" ], [ "from aequitas.plotting import Plot\n \naqp = Plot()\nfdr_plot = aqp.plot_group_metric(xtab, 'fdr')", "_____no_output_____" ], [ "fpr_plot = aqp.plot_group_metric(xtab, 'fpr')", "_____no_output_____" ], [ "fnr_plot = aqp.plot_group_metric(xtab, 'fnr')", "_____no_output_____" ], [ "tpr_plot = aqp.plot_group_metric(xtab, 'tpr')", "_____no_output_____" ], [ "from aequitas.bias import Bias\n \nb = Bias()\nbdf = b.get_disparity_predefined_groups(xtab, original_df=df, \n ref_groups_dict={'race':'Caucasian', 'gender':'Male'}, \n alpha=0.05, \n check_significance=False)", "get_disparity_predefined_group()\n" ], [ "# questo grafico evidenzia la disparità in Asian per quanto riguarda FDR\nfpr_disparity = aqp.plot_disparity(bdf, group_metric='fdr_disparity', \n attribute_name='race')", "_____no_output_____" ], [ "# questo grafico evidenzia la disparità in Asian per quanto riguarda FDR\ntpr_disparity = aqp.plot_disparity(bdf, group_metric='tpr_disparity', \n attribute_name='race')", "_____no_output_____" ], [ "from aequitas.fairness import Fairness\n \nf = Fairness()\nfdf = f.get_group_value_fairness(bdf)", "_____no_output_____" ], [ "fdr_fairness = aqp.plot_fairness_group(fdf, group_metric='fdr', title=True)", "_____no_output_____" ], [ "fdr_disparity_fairness = aqp.plot_fairness_disparity(fdf, group_metric='fdr', attribute_name='race')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
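The notebook above leans on Aequitas' Group, Bias and Fairness classes to report per-group metrics and disparities. The toy sketch below cross-checks what one of those numbers means — the false positive rate per group, and its disparity relative to the reference group — using plain pandas. The columns score and label_value and the reference-group convention follow the Aequitas input layout shown above, but the eight rows of data are invented.

    import pandas as pd

    # Toy scored data in the Aequitas input layout (binary score, binary label).
    df = pd.DataFrame({
        'score':       [1, 0, 1, 1, 0, 0, 1, 0],
        'label_value': [0, 0, 0, 1, 0, 1, 0, 0],
        'race': ['Caucasian', 'Caucasian', 'Asian', 'Asian',
                 'AfricanAmerican', 'AfricanAmerican', 'Hispanic', 'Hispanic'],
    })

    # False positive rate per group: share of score = 1 among label_value = 0 rows.
    negatives = df[df['label_value'] == 0]
    fpr_by_group = negatives.groupby('race')['score'].mean()
    print(fpr_by_group)

    # Disparity in the Aequitas sense: each group's metric divided by the reference group's.
    print(fpr_by_group / fpr_by_group['Caucasian'])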
e70e61bb8fbadc4314f64246c0a2fc78df784afd
15,972
ipynb
Jupyter Notebook
notebook/modelling/finance.ipynb
kreibaum/JuMPTutorials.jl
a2d1744d9d10a013557059e126bfcdf3b5005191
[ "MIT" ]
75
2020-06-15T13:05:14.000Z
2022-02-28T12:58:48.000Z
notebook/modelling/finance.ipynb
kreibaum/JuMPTutorials.jl
a2d1744d9d10a013557059e126bfcdf3b5005191
[ "MIT" ]
34
2019-05-27T05:36:48.000Z
2019-08-22T09:52:29.000Z
notebook/modelling/finance.ipynb
kreibaum/JuMPTutorials.jl
a2d1744d9d10a013557059e126bfcdf3b5005191
[ "MIT" ]
19
2019-10-09T09:32:56.000Z
2020-06-02T17:41:21.000Z
31.25636
164
0.53982
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e70e657d81a4148b570f00fffec8f0ae9fece9da
127,262
ipynb
Jupyter Notebook
MoviesNotebooks/MoviesFlowDescription_top_view.ipynb
UBC-MOAD/outputanalysisnotebooks
50839cde3832d26bac6641427fed03c818fbe170
[ "Apache-2.0" ]
null
null
null
MoviesNotebooks/MoviesFlowDescription_top_view.ipynb
UBC-MOAD/outputanalysisnotebooks
50839cde3832d26bac6641427fed03c818fbe170
[ "Apache-2.0" ]
null
null
null
MoviesNotebooks/MoviesFlowDescription_top_view.ipynb
UBC-MOAD/outputanalysisnotebooks
50839cde3832d26bac6641427fed03c818fbe170
[ "Apache-2.0" ]
null
null
null
280.931567
111,580
0.89788
[ [ [ "### Movie with u, v, w, $\\rho$, tr, top view of canyon and shelf", "_____no_output_____" ] ], [ [ "#KRM\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as mcolors\nimport matplotlib as mpl\n#from MITgcmutils import rdmds # not working\n#%matplotlib inline\nfrom netCDF4 import Dataset\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport struct\nimport xarray as xr\nimport canyon_tools.readout_tools as rout", "_____no_output_____" ] ], [ [ "## Functions", "_____no_output_____" ] ], [ [ "def rel_vort(x,y,u,v):\n \"\"\"-----------------------------------------------------------------------------\n rel_vort calculates the z component of relative vorticity.\n \n INPUT:\n x,y,u,v should be at least 2D arrays in coordinate order (..., Y , X ) \n \n OUTPUT:\n relvort - z-relative vorticity array of size u[...,2:-2,2:-2]\n -----------------------------------------------------------------------------\"\"\"\n \n dvdx = (v[...,1:-1, 2:]-v[...,1:-1, :-2])/(x[...,1:-1, 2:]-x[...,1:-1, :-2])\n dudy = (u[...,2:,1:-1]-u[..., :-2,1:-1])/(y[..., 2:,1:-1]-y[..., :-2,1:-1])\n relvort = dvdx - dudy\n return relvort\n\n\ndef calc_rho(RhoRef,T,S,alpha=2.0E-4, beta=7.4E-4):\n \"\"\"-----------------------------------------------------------------------------\n calc_rho calculates the density using a linear equation of state.\n \n INPUT:\n RhoRef : reference density at the same z as T and S slices. Can be a scalar or a \n vector, depending on the size of T and S.\n T, S : should be at least 2D arrays in coordinate order (..., Y , X ) \n alpha = 2.0E-4 # 1/degC, thermal expansion coefficient\n beta = 7.4E-4, haline expansion coefficient\n OUTPUT:\n rho - Density [...,ny,nx]\n -----------------------------------------------------------------------------\"\"\"\n \n #Linear eq. 
of state \n rho = RhoRef*(np.ones(np.shape(T)) - alpha*(T[...,:,:]) + beta*(S[...,:,:]))\n return rho\n\ndef call_unstag(t):\n UU,VV = rout.unstagger(state.U.isel(T=t),state.V.isel(T=t))\n return(UU,VV)\n\n\ndef call_rho(t):\n T = state.Temp.isel(T=t,Z=zind)\n S = state.S.isel(T=t,Z=zind)\n rho = calc_rho(RhoRef,T,S,alpha=2.0E-4, beta=7.4E-4)\n return(rho) ", "_____no_output_____" ] ], [ [ "## Frame functions", "_____no_output_____" ] ], [ [ " \n# ALONGSHORE VELOCITY \ndef Plot1(t,ax1,UU):\n umin = -0.55\n umax= 0.55\n Uplot=np.ma.array(UU.isel(Z=zind,Xp1=xslice,Y=yslice).data,mask=MaskC[zind,yslice,xslice])\n csU = np.linspace(umin,umax,num=20)\n csU2 = np.linspace(umin,umax,num=10)\n ax1.clear()\n mesh=ax1.contourf(grid.X[xslice]/1000,grid.Y[yslice]/1000,Uplot[:,:],csU,cmap='RdYlBu_r')\n if t == 1: \n cax,kw = mpl.colorbar.make_axes([ax1],location='top',anchor=(0.5,0.0),shrink=0.96)\n cb = plt.colorbar(mesh, cax=cax,ticks=[np.linspace(umin, umax,8) ],format='%.2f',**kw)\n \n ax1.set_axis_bgcolor((205/255.0, 201/255.0, 201/255.0))\n ax1.set_ylabel('Cross-shore distance (km)')\n ax1.text(0.7,0.1,'u ($m/s$)',transform=ax1.transAxes)\n\n# ACROSS-SHORE VELOCITY \ndef Plot2(t,ax2,VV):\n vmin = -0.25\n vmax = 0.25\n Uplot=np.ma.array(VV.isel(Z=zind,Yp1=yslice,X=xslice).data,mask=MaskC[zind,yslice,xslice])\n csU = np.linspace(vmin,vmax,num=20)\n csU2 = np.linspace(vmin,vmax,num=10)\n ax2.clear()\n mesh=ax2.contourf(grid.X[xslice]/1000,grid.Y[yslice]/1000,Uplot[:,:],csU,cmap='RdYlBu_r')\n if t == 1: \n cax,kw = mpl.colorbar.make_axes([ax2],location='top',anchor=(0.5,0.0),shrink=0.96)\n cb = plt.colorbar(mesh, cax=cax,ticks=[np.linspace(vmin,vmax,8) ],format='%.2f',**kw)\n \n ax2.set_axis_bgcolor((205/255.0, 201/255.0, 201/255.0))\n ax2.text(0.7,0.1,'v ($m/s$)',transform=ax2.transAxes)\n\n# VERTICAL VELOCITY \ndef Plot3(t,ax3): \n wmin = -3.0\n wmax = 3.0\n Uplot=np.ma.array(state.W.isel(T=t,X=xslice,Y=yslice,Zl=zind).data,mask=MaskC[zind,yslice,xslice])\n csU = np.linspace(wmin,wmax,num=20)\n csU2 = np.linspace(wmin,wmax,num=10)\n ax3.clear()\n mesh=ax3.contourf(grid.X[xslice]/1000,grid.Y[yslice]/1000,Uplot[:,:]*1000,csU,cmap='RdYlBu_r')\n if t == 1: \n cax,kw = mpl.colorbar.make_axes([ax3],location='top',anchor=(0.5,0.0),shrink=0.96)\n cb = plt.colorbar(mesh, cax=cax,ticks=[np.linspace(wmin,wmax,8) ],format='%.1f',**kw)\n \n ax3.set_axis_bgcolor((205/255.0, 201/255.0, 201/255.0))\n ax3.text(0.6,0.1,'w ($10^{-3}$ $m/s$)',transform=ax3.transAxes)\n props = dict(boxstyle='round', facecolor='white', alpha=0.5)\n ax3.text(1.05,0.86,'day %0.1f' %(t/2.0),fontsize=20,transform=ax3.transAxes,bbox=props)\n\n# ISOPYCNALS\ndef Plot4(t,ax4):\n rho_min = 1020.8 # 1020.6 if z=22 \n rho_max = 1021.6 # 1021.6 if z=22\n density = call_rho(t)\n csU = np.linspace(rho_min,rho_max,num=21)\n csU2 = np.linspace(rho_min,rho_max,num=11)\n ax4.clear()\n mesh=ax4.contourf(grid.X[xslice]/1000,grid.Y[yslice]/1000,\n np.ma.array(density[yslice,xslice].data,mask=MaskC[zind,yslice,xslice]),\n csU,cmap='inferno')\n if t == 1:\n cax,kw = mpl.colorbar.make_axes([ax4],location='top',anchor=(0.5,0),shrink=0.96)\n cb = plt.colorbar(mesh, cax=cax,ticks=[np.linspace(rho_min,rho_max,6) ],format='%.1f',**kw)\n \n CS = ax4.contour(grid.X[xslice]/1000,grid.Y[yslice]/1000,\n np.ma.array(density[yslice,xslice].data,mask=MaskC[zind,yslice,xslice]),\n csU2,colors='k',linewidths=[0.75] )\n ax4.set_axis_bgcolor((205/255.0, 201/255.0, 201/255.0))\n ax4.text(0.6,0.1,r'$\\rho$ ($kg/m^{3}$)',transform=ax4.transAxes)\n ax4.set_ylabel('Cross-shore 
distance (km)')\n ax4.set_xlabel('Alongshore distance (km)')\n\n# TRACER \ndef Plot5(t,ax5): \n tr_min = 4 # 3 if z=22\n tr_max = 13 # 11 if z=22\n csU = np.linspace(tr_min,tr_max,num=19)\n csU2 = np.linspace(tr_min,tr_max,num=9)\n ax5.clear()\n mesh=ax5.contourf(grid.X[xslice]/1000,grid.Y[yslice]/1000,\n np.ma.array(ptracers.Tr1[t,zind,yslice,xslice].data,mask=MaskC[zind,yslice,xslice]),\n csU,cmap='viridis')\n if t == 1:\n cax,kw = mpl.colorbar.make_axes([ax5],location='top',anchor=(0.5,0),shrink=0.96)\n cb = plt.colorbar(mesh, cax=cax,ticks=[np.linspace(tr_min,tr_max,9) ],format='%.1f',**kw)\n \n CS = ax5.contour(grid.X[xslice]/1000,grid.Y[yslice]/1000,\n np.ma.array(ptracers.Tr1[t,zind,yslice,xslice].data,mask=MaskC[zind,yslice,xslice]),\n csU2,colors='k',linewidths=[0.75] )\n ax5.set_axis_bgcolor((205/255.0, 201/255.0, 201/255.0))\n ax5.text(0.6,0.1,'tracer ($Mol/l$)',transform=ax5.transAxes)\n ax5.set_xlabel('Across-shore distance (km)')\n\n# VORTICITY\ndef Plot6(t,ax6,UU,VV):\n vort_min = -40\n vort_max = 40\n relvort = rel_vort(grid.XC.data,grid.YC.data,UU.data,VV.data)\n Uplot=np.ma.array(relvort[zind,yslice,xslice],mask=MaskC[zind,yslice2,xslice2])\n csU = np.linspace(vort_min,vort_max,num=20)\n csU2 = np.linspace(vort_min,vort_max,num=10)\n ax6.clear()\n mesh=ax6.contourf(grid.X[xslice2]/1000,grid.Y[yslice2]/1000,Uplot[:,:]*1E5,\n csU,\n cmap='PiYG_r')\n if t == 1: \n cax,kw = mpl.colorbar.make_axes([ax6],location='top',anchor=(0.5,0.0),shrink=0.96)\n cb = plt.colorbar(mesh, cax=cax,\n ticks=[np.linspace(vort_min,vort_max,8) ],\n format='%.1f',**kw)\n \n ax6.set_axis_bgcolor((205/255.0, 201/255.0, 201/255.0))\n ax6.text(0.6,0.1,'$\\zeta$ ($10^{-5}$ $1/s$)',transform=ax6.transAxes)\n ax6.set_xlabel('Alongshore distance (km)')\n props = dict(boxstyle='round', facecolor='white', alpha=0.5)\n ax6.text(1.05,0.1,'Shelf-break \\n depth',fontsize=15,transform=ax6.transAxes,bbox=props)\n \n", "_____no_output_____" ] ], [ [ "## Set-up", "_____no_output_____" ] ], [ [ "# Grid, state and tracers datasets of base case\ngrid_file = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run38/gridGlob.nc'\ngrid = xr.open_dataset(grid_file)\n\nstate_file = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run38/stateGlob.nc' \nstate = xr.open_dataset(state_file)\n\nptracers_file = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run38/ptracersGlob.nc'\nptracers = xr.open_dataset(ptracers_file)\n\n#RhoRef = np.squeeze(rdmds('/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run38/RhoRef'))\nRhoRef = 999.79998779 # It is constant in all my runs, can't run rdmds", "_____no_output_____" ], [ "# General input\nnx = 360\nny = 360\nnz = 90\nnt = 19 # t dimension size \n\nzind = 28\nxslice = slice(0,359) \nyslice=slice(100,300)\nyslice2=slice(100,300)\nxslice2 = slice(0,358) \n\nhFacmasked = np.ma.masked_values(grid.HFacC.data, 0)\nMaskC = np.ma.getmask(hFacmasked)\n \n ", "_____no_output_____" ], [ "import matplotlib.animation as animation", "_____no_output_____" ], [ "\nsns.set_style('white')\nsns.set_context(\"talk\")\n\n#Empty figures\nfig,((ax1,ax2,ax3),(ax4, ax5,ax6)) = plt.subplots(2, 3, figsize=(15, 8),sharex='col', sharey='row')\nplt.subplots_adjust(hspace =0.1, wspace=0.1)\n\n#Initial image\ndef init():\n UU,VV = call_unstag(0)\n Plot1(0,ax1,UU)\n Plot2(0,ax2,VV)\n Plot3(0,ax3)\n Plot4(0,ax4)\n Plot5(0,ax5)\n Plot6(0,ax6,UU,VV)\n #plt.tight_layout()\n \ndef animate(tt):\n UU,VV = call_unstag(tt)\n Plot1(tt,ax1,UU)\n Plot2(tt,ax2,VV)\n Plot3(tt,ax3)\n Plot4(tt,ax4)\n Plot5(tt,ax5)\n 
Plot6(tt,ax6,UU,VV)\n xticklabels = ax1.get_xticklabels() + ax2.get_xticklabels() + ax3.get_xticklabels()\n plt.setp(xticklabels, visible=False)\n yticklabels = ax2.get_yticklabels() + ax3.get_yticklabels() + ax5.get_yticklabels() + ax6.get_yticklabels()\n plt.setp(yticklabels, visible=False)\n\n\nWriter = animation.writers['ffmpeg']\nwriter = Writer(fps=1, metadata=dict(artist='Me'), bitrate=1800)\n\n\nanim = animation.FuncAnimation(fig, animate, init_func=init,frames=19,repeat=False)\n\n## Save in current folder\n\nanim.save('CNTDIFF_baseFreeSlipnDrag_topview_section_z28.mp4', writer=writer)\n\nplt.show()\n\n\n", "_____no_output_____" ], [ "MaskC[zind,yslice2,xslice2].shape\n", "_____no_output_____" ], [ "relvort[zind,yslice,xslice].shape", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
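The movie notebook above builds its vorticity panel from rel_vort, a centred-difference estimate of dv/dx - du/dy. A quick stand-alone check of that stencil: for solid-body rotation (u = -Ω·y, v = Ω·x) the analytic relative vorticity is 2Ω, which the same differencing should recover on a uniform grid. The grid spacing and the value of Ω below are arbitrary test values, not model output.

    import numpy as np

    omega = 1e-4                      # assumed rotation rate (1/s)
    x = np.arange(0, 50e3, 1e3)       # 1 km spacing
    y = np.arange(0, 40e3, 1e3)
    X, Y = np.meshgrid(x, y)          # shape (ny, nx), matching the (Y, X) ordering above

    U = -omega * Y
    V = omega * X

    # Same centred-difference stencil as rel_vort(): dv/dx - du/dy on the interior points.
    dvdx = (V[1:-1, 2:] - V[1:-1, :-2]) / (X[1:-1, 2:] - X[1:-1, :-2])
    dudy = (U[2:, 1:-1] - U[:-2, 1:-1]) / (Y[2:, 1:-1] - Y[:-2, 1:-1])
    zeta = dvdx - dudy

    print(zeta.mean(), 2 * omega)     # both ~2e-4 for solid-body rotation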
e70e6774c9a2e086c43dc209a583b87add1fbdb3
46,346
ipynb
Jupyter Notebook
monty-hall.ipynb
angela1C/jupyter-teaching-notebooks
7494cab4702b8bfb95f716bf66e9ddf62a67b408
[ "Unlicense" ]
null
null
null
monty-hall.ipynb
angela1C/jupyter-teaching-notebooks
7494cab4702b8bfb95f716bf66e9ddf62a67b408
[ "Unlicense" ]
null
null
null
monty-hall.ipynb
angela1C/jupyter-teaching-notebooks
7494cab4702b8bfb95f716bf66e9ddf62a67b408
[ "Unlicense" ]
null
null
null
35.137225
6,812
0.571506
[ [ [ "# The Monty Hall Problem", "_____no_output_____" ], [ "See the [Wikipedia entry](https://en.wikipedia.org/wiki/Monty_Hall_problem) for a summary of the problem.", "_____no_output_____" ], [ "## Random door selection", "_____no_output_____" ], [ "Here's some code to pick a door at random.", "_____no_output_____" ] ], [ [ "# Python provides a library called random to generate pseudo-random numbers and do stuff with them.\nimport random\n\n# The three doors in a list.\ndoors = ['red', 'green', 'blue']\n\n# Pick a random door.\nprint(random.choice(doors))", "red\n" ] ], [ [ "The pick is meant to give a one third probability to each door. Let's pick 10,000 doors are see if that looks correct.", "_____no_output_____" ] ], [ [ "# 10,000 random doors.\ntenthous = [random.choice(doors) for i in range(10000)]\n\ntenthous", "_____no_output_____" ] ], [ [ "Let's plot it now, and see that each door is picked about a third of the time.", "_____no_output_____" ] ], [ [ "import seaborn as sns\nimport matplotlib.pyplot as pl\n\npl.figure(figsize=(10, 6))\nsns.set(style=\"darkgrid\")\nax = sns.countplot(y=tenthous)\npl.show()", "_____no_output_____" ] ], [ [ "## Simulate the game", "_____no_output_____" ], [ "Let's simulate the game now. Let's:\n\n1. Pick a door to put the car behind.\n2. Have the contestant pick a door.\n3. Have the show host open one of the other doors to reveal a goat.\n4. Ask the contestant if they want to switch.\n5. See how often the contestant wins.\n\nThe question we're looking to answer is whether staying with your original pick makes a difference.", "_____no_output_____" ] ], [ [ "# A function to simulate a game and tell us if the contestant wins.\ndef simulate(stay=True):\n doors = ['red', 'green', 'blue']\n \n # Put the car behind a random door.\n car = random.choice(doors)\n \n # Have the contestant pick a door.\n pick = random.choice(doors)\n \n # Open a door with a goat.\n show = random.choice([door for door in doors if door != car and door != pick])\n\n # Figure out which door was not opened or picked.\n notopen = [door for door in doors if door != pick and door != show][0]\n \n return (car == [pick, notopen][not stay])", "_____no_output_____" ] ], [ [ "So, we can simulate a game in which the contestant stays with their original pick by running the following. A return value of True means they won the car, False means they didn't.", "_____no_output_____" ] ], [ [ "simulate(stay=True)", "_____no_output_____" ] ], [ [ "## Ten thousand times each", "_____no_output_____" ], [ "Let's run the game 10,000 times where the contestant stays, and then 10,000 where they switch. Then we'll see how often they win in each case.", "_____no_output_____" ] ], [ [ "staying = [simulate(stay=True) for i in range(10000)]", "_____no_output_____" ] ], [ [ "Let's plot the result of staying.", "_____no_output_____" ] ], [ [ "ax = sns.countplot(y=staying)\npl.show()", "_____no_output_____" ] ], [ [ "Looks like when the contestant stays, they win only about a third of the time.", "_____no_output_____" ] ], [ [ "switching = [simulate(stay=False) for i in range(10000)]", "_____no_output_____" ], [ "ax = sns.countplot(y=switching)\npl.show()", "_____no_output_____" ] ], [ [ "Looks like you win two thirds of the time if you switch.", "_____no_output_____" ], [ "## End", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ] ]
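The Monty Hall notebook above estimates the win rates by simulating one game at a time; a vectorised sanity check is sketched below. It uses the fact that, once the host has revealed a goat, switching wins exactly when the first pick missed the car, so staying should converge to 1/3 and switching to 2/3. The trial count and seed are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    car = rng.integers(0, 3, size=n)    # door hiding the car in each trial
    pick = rng.integers(0, 3, size=n)   # contestant's first pick

    stay_wins = (pick == car).mean()    # ~1/3
    switch_wins = (pick != car).mean()  # ~2/3: switching wins iff the first pick missed

    print(stay_wins, switch_wins)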
e70e6808719e58b0dab503a3b894a1927b8ddca2
745,757
ipynb
Jupyter Notebook
equity_in_mortality_prediction_DeepLearning.ipynb
williamcaicedo/equity-in-mortality
ab51724e68bb9546cd1a203ecfe54e43ddd36abb
[ "MIT" ]
null
null
null
equity_in_mortality_prediction_DeepLearning.ipynb
williamcaicedo/equity-in-mortality
ab51724e68bb9546cd1a203ecfe54e43ddd36abb
[ "MIT" ]
null
null
null
equity_in_mortality_prediction_DeepLearning.ipynb
williamcaicedo/equity-in-mortality
ab51724e68bb9546cd1a203ecfe54e43ddd36abb
[ "MIT" ]
null
null
null
239.639139
92,964
0.857515
[ [ [ "# Equity in Mortality Prediction\n\nHow our tools perform across ethnical groups and diverse demographics?\n", "_____no_output_____" ], [ "## Load libraries and connect to the database", "_____no_output_____" ] ], [ [ "# Import libraries\nimport numpy as np\nimport os\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sbs\nplt.style.use('ggplot')\n\n# model building\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import roc_curve, roc_auc_score, auc, confusion_matrix, classification_report\nfrom sklearn.pipeline import Pipeline\nfrom sklearn import preprocessing\nfrom sklearn import metrics\nfrom sklearn import impute\n\n# Make pandas dataframes prettier\nfrom IPython.display import display, HTML\n\n# Access data using Google BigQuery.\nfrom google.colab import auth\nfrom google.cloud import bigquery", "_____no_output_____" ], [ "from scipy import stats\nnp.random.seed(1)\n\n\nimport itertools\nimport os\n\nfrom keras import layers, regularizers, optimizers\nfrom keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, Conv1D\nfrom keras.layers import AveragePooling2D, MaxPooling2D, MaxPooling1D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D\nfrom keras.models import Model, load_model\nfrom keras.callbacks import ModelCheckpoint, EarlyStopping\n\n\nimport keras.backend as K\nK.set_image_data_format('channels_last')\n\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import imshow\n\n# https://pypi.python.org/pypi/pydot\n#!apt-get -qq install -y graphviz && pip install -q pydot\n#import pydot\nfrom IPython.display import SVG\nfrom keras.utils.vis_utils import model_to_dot\nfrom keras.utils import plot_model", "Using TensorFlow backend.\n" ], [ "# authenticate\nauth.authenticate_user()", "_____no_output_____" ], [ "# Set up environment variables\nproject_id='hack-aotearoa'\nos.environ[\"GOOGLE_CLOUD_PROJECT\"]=project_id", "_____no_output_____" ], [ "def plot_confusion_matrix(cm, classes,\n normalize=False,\n title='Confusion matrix',filename=None,\n cmap=plt.cm.Blues):\n \"\"\"\n This function prints and plots the confusion matrix.\n Normalization can be applied by setting `normalize=True`.\n \"\"\"\n plt.figure()\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n print(\"Normalized confusion matrix\")\n \n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks, classes, rotation=45)\n plt.yticks(tick_marks, classes)\n\n fmt = '.2f' if normalize else 'd'\n thresh = cm.max() / 2.\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n plt.text(j, i, format(cm[i, j], fmt),\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')\n if filename:\n plt.savefig(filename, dpi=500)\n plt.show()\n \ndef plot_roc_curve(ground_truth, predictions, filename=None):\n fpr, tpr, thr = roc_curve(y_true=ground_truth, y_score=predictions, drop_intermediate=False)\n roc_auc = auc(fpr, tpr)\n plt.figure(figsize=(15,10))\n lw = 2\n plt.plot(fpr, tpr,\n lw=lw, label='ROC curve (area = %0.4f)' % roc_auc)\n plt.plot([0, 1], [0, 1], linestyle='--', lw=2,\n label='Chance', alpha=.8)\n plt.xlim([0.0, 1.0])\n plt.ylim([0.0, 1.05])\n plt.xlabel('1 - Specificity')\n plt.ylabel('Sensitivity')\n 
plt.title('Receiver operating characteristic curve')\n plt.legend(loc=\"lower right\")\n if filename:\n plt.savefig(filename, dpi=500)\n plt.show()\n #return fpr, tpr, thr\n\ndef plot_roc_curve_multiple(data, filename=None):\n plt.figure(figsize=(15,10))\n i = 0\n mean_fpr = np.linspace(0, 1, 100)\n tprs = []\n aucs = []\n for k,v in data.items():\n # Compute ROC curve and area the curve\n fpr, tpr, thresholds = roc_curve(y_true = v[0], y_score = v[1], drop_intermediate=False)\n tprs.append(np.interp(mean_fpr, fpr, tpr))\n tprs[-1][0] = 0.0\n roc_auc = auc(fpr, tpr)\n aucs.append(roc_auc)\n plt.plot(fpr, tpr, lw=4, alpha=0.3,\n label= f'ROC for {k} (AUC = {roc_auc:.4f})')\n i += 1\n \n plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',\n label='Chance', alpha=.8)\n\n plt.xlim([-0.05, 1.05])\n plt.ylim([-0.05, 1.05])\n plt.xlabel('1 - Specificity')\n plt.ylabel('Sensitivity')\n plt.title('Receiver operating characteristic curve')\n plt.legend(loc=\"lower right\")\n if filename:\n plt.savefig(filename, dpi=500)\n plt.show()\n #return mean_fpr, mean_tpr\n ", "_____no_output_____" ] ], [ [ "## Load the patient cohort", "_____no_output_____" ] ], [ [ "%%bigquery data\n\nWITH cohort AS\n (\n SELECT\n i.SUBJECT_ID,\n i.HADM_ID,\n i.ICUSTAY_ID,\n i.los,\n DATETIME_DIFF(i.INTIME, p.DOB, YEAR) as AGE,\n --EXTRACT(EPOCH FROM i.INTIME - p.DOB) / 60.0 / 60.0 / 24.0 / 365.242 AS AGE,\n CASE\n WHEN ad.ADMISSION_TYPE = 'ELECTIVE' THEN 1\n ELSE 0 END AS elective,\n CASE\n WHEN lower(ser.CURR_SERVICE) like '%surg%' then 1\n ELSE 0 END AS surgical,\n CASE\n WHEN ICD9_CODE BETWEEN '042' AND '0449'\n THEN 1\n\t\t ELSE 0 END AS AIDS /* HIV and AIDS */,\n CASE\n WHEN ICD9_CODE BETWEEN '1960' AND '1991' THEN 1\n WHEN ICD9_CODE BETWEEN '20970' AND '20975' THEN 1\n WHEN ICD9_CODE = '20979' THEN 1\n WHEN ICD9_CODE = '78951' THEN 1\n ELSE 0 END AS METASTATIC_CANCER,\n CASE\n WHEN ICD9_CODE BETWEEN '20000' AND '20238' THEN 1 -- lymphoma\n WHEN ICD9_CODE BETWEEN '20240' AND '20248' THEN 1 -- leukemia\n WHEN ICD9_CODE BETWEEN '20250' AND '20302' THEN 1 -- lymphoma\n WHEN ICD9_CODE BETWEEN '20310' AND '20312' THEN 1 -- leukemia\n WHEN ICD9_CODE BETWEEN '20302' AND '20382' THEN 1 -- lymphoma\n WHEN ICD9_CODE BETWEEN '20400' AND '20522' THEN 1 -- chronic leukemia\n WHEN ICD9_CODE BETWEEN '20580' AND '20702' THEN 1 -- other myeloid leukemia\n WHEN ICD9_CODE BETWEEN '20720' AND '20892' THEN 1 -- other myeloid leukemia\n WHEN ICD9_CODE = '2386 ' THEN 1 -- lymphoma\n WHEN ICD9_CODE = '2733 ' THEN 1 -- lymphoma\n ELSE 0 END AS LYMPHOMA,\n RANK()\n OVER (\n PARTITION BY i.SUBJECT_ID\n ORDER BY i.INTIME ) AS ICUSTAY_ID_order,\n CASE\n WHEN ad.deathtime BETWEEN i.INTIME AND i.OUTTIME\n THEN 1\n ELSE 0 END AS mort_icu\n --, d.label as variable_name\n ,\n CASE\n WHEN ITEMID IN (723, 223900) AND VALUENUM >= 1 AND VALUENUM <= 5 --OK\n THEN 'GCSVerbal'\n WHEN ITEMID IN (454, 223901) AND VALUENUM >= 1 AND VALUENUM <= 6 --OK\n THEN 'GCSMotor'\n WHEN ITEMID IN (184, 220739) AND VALUENUM >= 1 AND VALUENUM <= 6 --OK\n THEN 'GCSEyes'\n WHEN ITEMID IN (211, 220045) AND VALUENUM > 0 AND VALUENUM < 300 --OK\n THEN 'HEART RATE'\n WHEN ITEMID IN (51, 442, 455, 6701, 220179, 220050) AND VALUENUM > 0 AND VALUENUM < 400 --OK\n THEN 'SYSTOLIC BP'\n WHEN ITEMID IN (8368, 8440, 8441, 8555, 220180, 220051) AND VALUENUM > 0 AND VALUENUM < 300 --OK\n THEN 'DIASTOLIC BP'\n WHEN ITEMID IN (223761, 678) AND VALUENUM > 70 AND VALUENUM < 120 --OK\n THEN 'TEMPERATURE' -- converted to degC in VALUENUM call\n WHEN ITEMID IN (223762, 676) AND VALUENUM > 10 
AND VALUENUM < 50 --OK\n THEN 'TEMPERATURE'\n WHEN ITEMID IN (223835, 3420, 3422, 190) --AND VALUENUM > 0 AND VALUENUM < 100 --OK\n THEN 'FiO2'\n ELSE NULL END AS measurement_name,\n case\n when ITEMID = 223835\n then case\n when VALUENUM > 0 and VALUENUM <= 1\n then VALUENUM * 100\n -- improperly input data - looks like O2 flow in litres\n when VALUENUM > 1 and VALUENUM < 21\n then null\n when VALUENUM >= 21 and VALUENUM <= 100\n then VALUENUM\n else null end -- unphysiological\n when ITEMID in (3420, 3422)\n -- all these VALUEs are well formatted\n then VALUENUM\n when ITEMID = 190 and VALUENUM > 0.20 and VALUENUM < 1\n -- well formatted but not in %\n then VALUENUM * 100\n WHEN ITEMID IN (223761, 678)\n then (c.VALUENUM -32)/1.8 --convert F to C\n else VALUENUM end AS VALUE,\n DATETIME_DIFF(c.CHARTTIME, i.INTIME, HOUR) AS icu_time_hr\n FROM `physionet-data.mimiciii_clinical.icustays` i\n JOIN `physionet-data.mimiciii_clinical.patients` p ON i.SUBJECT_ID = p.SUBJECT_ID\n JOIN `physionet-data.mimiciii_clinical.admissions` ad ON ad.HADM_ID = i.HADM_ID AND ad.SUBJECT_ID = i.SUBJECT_ID\n JOIN `physionet-data.mimiciii_clinical.diagnoses_icd` icd on ad.HADM_ID = icd.HADM_ID AND ad.SUBJECT_ID = icd.SUBJECT_ID\n JOIN `physionet-data.mimiciii_clinical.services` ser on ad.HADM_ID = ser.HADM_ID AND ad.SUBJECT_ID = ser.SUBJECT_ID\n LEFT JOIN `physionet-data.mimiciii_clinical.chartevents` c\n ON i.ICUSTAY_ID = c.ICUSTAY_ID\n AND c.ERROR != 1\n AND c.CHARTTIME BETWEEN (i.INTIME) AND DATETIME_ADD(i.INTIME, INTERVAL 2 DAY)\n WHERE i.los >= 2)\n SELECT *\n FROM cohort\n WHERE icustay_id_order = 1 AND age > 16 \n AND VALUE IS NOT NULL\n AND MEASUREMENT_NAME IS NOT NULL\n ORDER BY subject_id, icu_time_hr\n LIMIT 10000000", "_____no_output_____" ], [ "data.head()", "_____no_output_____" ], [ "data = data[data.icu_time_hr != 48]\nprint(\"Number of rows: {0}\".format(data.shape[0]))\n#sorted_data = data.sort_values(by=['subject_id', 'measurement_name'])\nmortality_labels = data.drop_duplicates(subset=['SUBJECT_ID'])[['SUBJECT_ID','mort_icu']]\nprint(\"Number of distinct IDs: {0}\".format(mortality_labels.shape[0]))\ninputs = data.pivot_table(index=['SUBJECT_ID', 'measurement_name'], \n columns='icu_time_hr', values='VALUE', aggfunc=np.mean)\nprint(\"Number of distinct IDs after pivoting: {0}\".format(inputs.index.levels[0].shape[0]))\n\nindex_names = inputs.index.names\nnew_index = pd.MultiIndex.from_product(inputs.index.levels)\nnew_index.names = index_names\ninputs = inputs.reindex(new_index).fillna(method='ffill', axis=1).fillna(method='bfill', axis=1)\nprint(\"Number of distinct series after re-indexing: {0}\".format(inputs.index.levels[1].shape[0]))\nprint(\"Number of distinct hours after re-indexing: {0}\".format(inputs.shape[1]))", "Number of rows: 9837994\nNumber of distinct IDs: 1627\nNumber of distinct IDs after pivoting: 1627\nNumber of distinct series after re-indexing: 8\nNumber of distinct hours after re-indexing: 48\n" ], [ "data['measurement_name'] = 'AGE'\nages = data.pivot_table(index=['SUBJECT_ID', 'measurement_name'], \n columns='icu_time_hr', values='AGE', aggfunc=np.mean)\\\n .fillna(axis=1, method='bfill').fillna(axis=1, method='ffill')\ninputs = pd.concat([inputs, ages]).sort_index()\ninputs", "_____no_output_____" ], [ "len(index_names)", "_____no_output_____" ], [ "def model_func(input_shape):\n X_input = Input(input_shape, name='input')\n \n #First layer\n \n #Conv\n short_term_conv = Conv2D(16, (3,1), strides = (1,1), padding = 'same', name = '3h_conv')(X_input)\n medium_term_conv 
= Conv2D(16, (6,1), strides = (1,1), padding = 'same',\n name = '6h_conv')(X_input)\n long_term_conv = Conv2D(16, (12,1), strides = (1,1), padding = 'same',\n name = '12h_conv')(X_input)\n #extra_long_term_conv = Conv2D(64, (24,1), strides = (1,1), padding = 'same',\n # name = '24h_conv')(X_input)\n #ReLU\n short_term_conv = Activation('relu')(short_term_conv)\n medium_term_conv = Activation('relu')(medium_term_conv)\n long_term_conv = Activation('relu')(long_term_conv)\n #extra_long_term_conv = Activation('relu')(extra_long_term_conv)\n #\n #Max pooling\n short_term_conv = AveragePooling2D(pool_size = (1,3), padding = 'same', \n name = '3h_pooled')(short_term_conv)\n medium_term_conv = AveragePooling2D(pool_size = (1,3), padding = 'same', \n name = '6h_pooled')(medium_term_conv)\n long_term_conv = AveragePooling2D(pool_size = (1,3), padding = 'same', \n name = '12h_pooled')(long_term_conv)\n #extra_long_term_conv = MaxPooling2D(pool_size = (1,3), padding = 'same',\n # name = '24h_pooled')(extra_long_term_conv)\n #Concat\n X = layers.concatenate([short_term_conv, medium_term_conv, long_term_conv], axis=3)\n #Dropout\n if use_dropout:\n X = Dropout(conv_drop_prob, name = 'dropout1')(X)\n #BatchNorm\n X = BatchNormalization(axis = 3, name = 'bn1')(X)\n \n \n #Fully connected layer\n X = Flatten()(X)\n X = Dense(256, activation = 'relu', name = 'fc1', kernel_regularizer=regularizers.l2(l2_penalty))(X)\n #Dropout\n if use_dropout:\n X = Dropout(fc_drop_prob, name = 'dropout3')(X)\n #BatchNorm\n X = BatchNormalization(axis = 1, name='bn3')(X)\n \n \n #Output softmax layer\n X = Dense(1, activation = 'sigmoid', name = 'fc2', kernel_regularizer=regularizers.l2(l2_penalty))(X)\n \n model = Model(inputs = X_input, outputs = X, name='icu_mortality_conv')\n return model", "_____no_output_____" ], [ "from sklearn.model_selection import StratifiedKFold\ncv = StratifiedKFold(n_splits=4, random_state=42)\ncv_idx = list(cv.split(X=np.zeros(mortality_labels.shape[0]), y=mortality_labels['mort_icu'].values))\ntrain, validation = list(cv_idx)[0]", "/usr/local/lib/python3.6/dist-packages/sklearn/model_selection/_split.py:296: FutureWarning: Setting a random_state has no effect since shuffle is False. This will raise an error in 0.24. 
You should leave random_state to its default (None), or set shuffle=True.\n FutureWarning\n" ], [ "#mean = inputs.loc[train].mean(level='measurement_name').mean(axis=1)\n#std = inputs.loc[train].std(level='measurement_name').mean(axis=1)\n#Store training fold mean and std to standardize validation folds later\n#Extra dimension is added to make broadcasted division and subtraction possible\n#mean = mean.values[:, None]\n#std = std.values[:, None]", "_____no_output_____" ], [ "X = inputs.values.reshape([inputs.index.levels[0].shape[0], inputs.index.levels[1].shape[0], inputs.shape[1]])\ny = mortality_labels.values\n\n#Reshape so each physiological time series \n#becomes a channel (3D) instead of a matrix row (2D)\nX = X.reshape((-1, 1, inputs.index.levels[1].shape[0], inputs.shape[1])).astype(np.float32)\ny = y[:, 1]\ny = y[:, None].astype(np.float32)\nprint(\"Training inputs shape: {0}\".format(X.shape))\nprint(\"Training labels shape: {0}\".format(y.shape))", "Training inputs shape: (1627, 1, 9, 48)\nTraining labels shape: (1627, 1)\n" ], [ "mean = X[train].mean(axis=0)\nstd = X[train].std(axis=0)\nmean = mean[:, None]\nstd = std[:, None]\nmean.shape", "_____no_output_____" ], [ "print(X[train].shape)\nprint(mean.shape)\n((X[train]-mean)).shape", "(1220, 1, 9, 48)\n(1, 1, 9, 48)\n" ], [ "batch_size = 32\nepochs = 50\nclass_weight = {0:1, 1:10} \nuse_dropout = True\nconv_drop_prob = 0.45\nfc_drop_prob = 0.45\nl2_penalty = 0.0\n\nmodel = model_func((1,inputs.index.levels[1].shape[0],48))\nsgd = optimizers.SGD(lr=0.01, decay=1e-7, momentum=0.9, nesterov=True)\nmodel.compile(optimizer = sgd, loss = 'binary_crossentropy', metrics = ['accuracy'])\n\nscaled_X_train = (X[train] - mean)/std #Standardize\nscaled_X_train = np.nan_to_num(scaled_X_train) #Implicit mean imputation for remaining NaNs\nscaled_X_validation = (X[validation] - mean)/std #Standardize using train set mean. 
No Leaking\nscaled_X_validation = np.nan_to_num(scaled_X_validation) #Implicit mean imputation for remaining NaNs\n\n\nmodel.fit(x = scaled_X_train, y = y[train], \n epochs = epochs, batch_size = batch_size, \n class_weight=class_weight, validation_data=(scaled_X_validation, \n y[validation]))\n\npredictions_training_set = model.predict(scaled_X_train)\npredictions = model.predict(scaled_X_validation)", "Train on 1220 samples, validate on 407 samples\nEpoch 1/50\n1220/1220 [==============================] - 2s 1ms/step - loss: 1.8762 - acc: 0.5139 - val_loss: 2.4247 - val_acc: 0.2432\nEpoch 2/50\n1220/1220 [==============================] - 0s 235us/step - loss: 2.0783 - acc: 0.5057 - val_loss: 1.8544 - val_acc: 0.2531\nEpoch 3/50\n1220/1220 [==============================] - 0s 233us/step - loss: 2.2947 - acc: 0.5000 - val_loss: 1.0670 - val_acc: 0.5184\nEpoch 4/50\n1220/1220 [==============================] - 0s 254us/step - loss: 2.2328 - acc: 0.4770 - val_loss: 0.6066 - val_acc: 0.6634\nEpoch 5/50\n1220/1220 [==============================] - 0s 252us/step - loss: 2.2058 - acc: 0.5033 - val_loss: 1.3217 - val_acc: 0.5111\nEpoch 6/50\n1220/1220 [==============================] - 0s 255us/step - loss: 1.9025 - acc: 0.5385 - val_loss: 0.6717 - val_acc: 0.6757\nEpoch 7/50\n1220/1220 [==============================] - 0s 268us/step - loss: 1.7677 - acc: 0.5500 - val_loss: 1.0406 - val_acc: 0.6364\nEpoch 8/50\n1220/1220 [==============================] - 0s 257us/step - loss: 2.2026 - acc: 0.5303 - val_loss: 0.4593 - val_acc: 0.8034\nEpoch 9/50\n1220/1220 [==============================] - 0s 284us/step - loss: 2.0763 - acc: 0.5393 - val_loss: 0.5921 - val_acc: 0.7469\nEpoch 10/50\n1220/1220 [==============================] - 0s 270us/step - loss: 1.7220 - acc: 0.5459 - val_loss: 1.0967 - val_acc: 0.5774\nEpoch 11/50\n1220/1220 [==============================] - 0s 252us/step - loss: 1.9888 - acc: 0.5820 - val_loss: 0.8794 - val_acc: 0.5381\nEpoch 12/50\n1220/1220 [==============================] - 0s 274us/step - loss: 1.8755 - acc: 0.5262 - val_loss: 1.1800 - val_acc: 0.6462\nEpoch 13/50\n1220/1220 [==============================] - 0s 256us/step - loss: 1.9514 - acc: 0.5697 - val_loss: 0.9486 - val_acc: 0.4840\nEpoch 14/50\n1220/1220 [==============================] - 0s 267us/step - loss: 1.7577 - acc: 0.5877 - val_loss: 1.0550 - val_acc: 0.4767\nEpoch 15/50\n1220/1220 [==============================] - 0s 272us/step - loss: 1.7444 - acc: 0.5590 - val_loss: 0.8296 - val_acc: 0.6413\nEpoch 16/50\n1220/1220 [==============================] - 0s 279us/step - loss: 1.7957 - acc: 0.5697 - val_loss: 0.5134 - val_acc: 0.7862\nEpoch 17/50\n1220/1220 [==============================] - 0s 271us/step - loss: 1.9031 - acc: 0.4984 - val_loss: 0.5571 - val_acc: 0.7150\nEpoch 18/50\n1220/1220 [==============================] - 0s 272us/step - loss: 1.7232 - acc: 0.5648 - val_loss: 0.5883 - val_acc: 0.7273\nEpoch 19/50\n1220/1220 [==============================] - 0s 276us/step - loss: 1.7339 - acc: 0.5639 - val_loss: 0.8422 - val_acc: 0.5627\nEpoch 20/50\n1220/1220 [==============================] - 0s 274us/step - loss: 2.0123 - acc: 0.5320 - val_loss: 0.6152 - val_acc: 0.6830\nEpoch 21/50\n1220/1220 [==============================] - 0s 282us/step - loss: 1.9987 - acc: 0.5533 - val_loss: 1.0922 - val_acc: 0.5233\nEpoch 22/50\n1220/1220 [==============================] - 0s 265us/step - loss: 1.6207 - acc: 0.5910 - val_loss: 0.6295 - val_acc: 0.6904\nEpoch 23/50\n1220/1220 
[==============================] - 0s 276us/step - loss: 1.7005 - acc: 0.5877 - val_loss: 0.6525 - val_acc: 0.7076\nEpoch 24/50\n1220/1220 [==============================] - 0s 263us/step - loss: 1.6907 - acc: 0.5689 - val_loss: 0.4249 - val_acc: 0.8378\nEpoch 25/50\n1220/1220 [==============================] - 0s 246us/step - loss: 1.7148 - acc: 0.5844 - val_loss: 0.5453 - val_acc: 0.7469\nEpoch 26/50\n1220/1220 [==============================] - 0s 243us/step - loss: 1.6024 - acc: 0.5967 - val_loss: 0.8627 - val_acc: 0.7101\nEpoch 27/50\n1220/1220 [==============================] - 0s 237us/step - loss: 1.4112 - acc: 0.6508 - val_loss: 0.6048 - val_acc: 0.7101\nEpoch 28/50\n1220/1220 [==============================] - 0s 237us/step - loss: 1.4922 - acc: 0.5918 - val_loss: 0.4542 - val_acc: 0.8673\nEpoch 29/50\n1220/1220 [==============================] - 0s 244us/step - loss: 1.6769 - acc: 0.5992 - val_loss: 0.5769 - val_acc: 0.7248\nEpoch 30/50\n1220/1220 [==============================] - 0s 237us/step - loss: 1.4752 - acc: 0.6049 - val_loss: 0.4747 - val_acc: 0.8059\nEpoch 31/50\n1220/1220 [==============================] - 0s 239us/step - loss: 1.4826 - acc: 0.6098 - val_loss: 0.4768 - val_acc: 0.7961\nEpoch 32/50\n1220/1220 [==============================] - 0s 242us/step - loss: 1.4675 - acc: 0.6361 - val_loss: 0.6210 - val_acc: 0.6585\nEpoch 33/50\n1220/1220 [==============================] - 0s 242us/step - loss: 1.4112 - acc: 0.5902 - val_loss: 0.5634 - val_acc: 0.7690\nEpoch 34/50\n1220/1220 [==============================] - 0s 236us/step - loss: 1.4441 - acc: 0.6352 - val_loss: 0.4926 - val_acc: 0.8133\nEpoch 35/50\n1220/1220 [==============================] - 0s 248us/step - loss: 1.4899 - acc: 0.5713 - val_loss: 0.4874 - val_acc: 0.8059\nEpoch 36/50\n1220/1220 [==============================] - 0s 239us/step - loss: 1.3492 - acc: 0.6164 - val_loss: 0.5769 - val_acc: 0.7469\nEpoch 37/50\n1220/1220 [==============================] - 0s 237us/step - loss: 1.3724 - acc: 0.6557 - val_loss: 0.5250 - val_acc: 0.7813\nEpoch 38/50\n1220/1220 [==============================] - 0s 232us/step - loss: 1.5123 - acc: 0.5443 - val_loss: 0.6476 - val_acc: 0.7101\nEpoch 39/50\n1220/1220 [==============================] - 0s 241us/step - loss: 1.4685 - acc: 0.6180 - val_loss: 0.4246 - val_acc: 0.8747\nEpoch 40/50\n1220/1220 [==============================] - 0s 233us/step - loss: 1.4256 - acc: 0.6590 - val_loss: 0.5251 - val_acc: 0.7813\nEpoch 41/50\n1220/1220 [==============================] - 0s 262us/step - loss: 1.2416 - acc: 0.6590 - val_loss: 1.0984 - val_acc: 0.5135\nEpoch 42/50\n1220/1220 [==============================] - 0s 248us/step - loss: 1.3382 - acc: 0.6557 - val_loss: 0.7437 - val_acc: 0.5946\nEpoch 43/50\n1220/1220 [==============================] - 0s 242us/step - loss: 1.3969 - acc: 0.6516 - val_loss: 0.7945 - val_acc: 0.6364\nEpoch 44/50\n1220/1220 [==============================] - 0s 239us/step - loss: 1.4190 - acc: 0.6369 - val_loss: 0.6385 - val_acc: 0.7076\nEpoch 45/50\n1220/1220 [==============================] - 0s 241us/step - loss: 1.3816 - acc: 0.5861 - val_loss: 0.4372 - val_acc: 0.8526\nEpoch 46/50\n1220/1220 [==============================] - 0s 263us/step - loss: 1.2981 - acc: 0.6525 - val_loss: 0.5697 - val_acc: 0.7592\nEpoch 47/50\n1220/1220 [==============================] - 0s 251us/step - loss: 1.2103 - acc: 0.6869 - val_loss: 0.5057 - val_acc: 0.8133\nEpoch 48/50\n1220/1220 [==============================] - 0s 266us/step - loss: 1.3898 - acc: 0.6008 - 
val_loss: 0.4487 - val_acc: 0.8378\nEpoch 49/50\n1220/1220 [==============================] - 0s 244us/step - loss: 1.1704 - acc: 0.7033 - val_loss: 0.4742 - val_acc: 0.8108\nEpoch 50/50\n1220/1220 [==============================] - 0s 236us/step - loss: 1.2679 - acc: 0.6475 - val_loss: 0.4232 - val_acc: 0.8575\n" ], [ "params = {'patients': list(mortality_labels.iloc[validation]['SUBJECT_ID'].values)}", "_____no_output_____" ], [ "%%bigquery demographics --params $params\n\nSELECT DISTINCT (p.SUBJECT_ID), ad.RELIGION, ad.ETHNICITY, ad.MARITAL_STATUS, ad.INSURANCE, p.gender\n\nFROM `physionet-data.mimiciii_clinical.admissions` ad\nJOIN `physionet-data.mimiciii_clinical.patients` p on ad.SUBJECT_ID = p.SUBJECT_ID\n\nWHERE p.SUBJECT_ID in UNNEST(@patients)", "_____no_output_____" ], [ "demographics = demographics.drop_duplicates(subset='SUBJECT_ID')\ndemographics.head()", "_____no_output_____" ], [ "predictions_df = pd.DataFrame(data={'predictions':predictions[:,0], \n 'mort_icu': mortality_labels.iloc[validation]['mort_icu'].values,\n 'SUBJECT_ID':mortality_labels.iloc[validation]['SUBJECT_ID'].values})\npredictions_df", "_____no_output_____" ], [ "mimiciii = demographics.merge(predictions_df, on='SUBJECT_ID')\nmimiciii.head()", "_____no_output_____" ], [ "### Function for converting the ethnicity data into coarser categories\ndef ethnicCoding(dfIN, inColName, outColName):\n \n # fill in missing data to 'other'\n dfIN[inColName][dfIN[inColName].isnull()] = \"other\"\n # conver to lower\n dfIN[inColName] = dfIN[inColName].str.lower()\n\n #1\tWHITE\n #12\tWHITE - EASTERN EUROPEAN\n #14\tWHITE - RUSSIAN\n #24\tWHITE - OTHER EUROPEAN\n whiteMask = pd.Series(dfIN[inColName]).str.contains('white').tolist()\n dfIN.loc[whiteMask, outColName] = 'white'\n\n #2\tBLACK/AFRICAN AMERICAN\n #13\tBLACK/CAPE VERDEAN\n #15\tBLACK/HAITIAN\n #16\tCARIBBEAN ISLAND\n #18\tBLACK/AFRICAN\n blackMask = pd.Series(dfIN[inColName]).str.contains('black|caribbean').tolist()\n dfIN.loc[blackMask, outColName] = 'black'\n\n #3\tHISPANIC/LATINO - PUERTO RICAN\n #6\tHISPANIC OR LATINO\n #21\tHISPANIC/LATINO - DOMINICAN\n #22\tHISPANIC/LATINO - CUBAN\n #23\tHISPANIC/LATINO - GUATEMALAN\n #26\tPORTUGUESE\n #28\tSOUTH AMERICAN\n #30\tHISPANIC/LATINO - CENTRAL AMERICAN (OTHER)\n #33\tHISPANIC/LATINO - SALVADORAN\n #35\tWHITE - BRAZILIAN\n #36\tHISPANIC/LATINO - COLOMBIAN\n #38\tHISPANIC/LATINO - MEXICAN\n #41\tHISPANIC/LATINO - HONDURAN\n hispMask = pd.Series(dfIN[inColName]).str.contains('hispanic|portuguese|south american|brazilian').tolist()\n dfIN.loc[hispMask, outColName] = 'hispanic'\n\n #4\tOTHER\n #7\tUNKNOWN/NOT SPECIFIED\n #8\tPATIENT DECLINED TO ANSWER\n #11\tMULTI RACE ETHNICITY\n #17\tUNABLE TO OBTAIN\n unknownMask = pd.Series(dfIN[inColName]).str.contains('other|unknown|declined|multi|unable').tolist()\n dfIN.loc[unknownMask, outColName] = 'other' \n\n #5\tASIAN\n #9\tASIAN - CHINESE\n #19\tASIAN - OTHER\n #20\tASIAN - FILIPINO\n #25\tASIAN - KOREAN\n #27\tASIAN - CAMBODIAN\n #29\tASIAN - ASIAN INDIAN\n #31\tASIAN - VIETNAMESE\n #37\tASIAN - THAI\n #40\tASIAN - JAPANESE\n asianMask = pd.Series(dfIN[inColName]).str.contains('asian').tolist()\n dfIN.loc[asianMask, outColName] = 'asian' \n \n #10 AMERICAN INDIAN/ALASKA NATIVE\n #39\tAMERICAN INDIAN/ALASKA NATIVE FEDERALLY RECOGNIZED TRIBE\n americanindianMask = pd.Series(dfIN[inColName]).str.contains('american indian').tolist()\n dfIN.loc[americanindianMask, outColName] = 'americanindian' \n \n #32\tMIDDLE EASTERN\n middleeasternMask = 
pd.Series(dfIN[inColName]).str.contains('middle').tolist()\n dfIN.loc[middleeasternMask, outColName] = 'middle' \n \n #34\tNATIVE HAWAIIAN OR OTHER PACIFIC ISLANDER\n pacificMask = pd.Series(dfIN[inColName]).str.contains('pacific').tolist()\n dfIN.loc[pacificMask, outColName] = 'pacific' \n \n return(dfIN)\n \n### Function for converting the religion data into coarser categories\ndef religionCoding(dfIN, inColName, outColName):\n \n # fill in missing data to 'other'\n dfIN[inColName][dfIN[inColName].isnull()] = \"other\"\n # conver to lower\n dfIN[inColName] = dfIN[inColName].str.lower()\n\n#1\tEPISCOPALIAN\n#5\tCATHOLIC\n#7\tPROTESTANT QUAKER\n#8\tGREEK ORTHODOX\n#9\tJEHOVAH'S WITNESS\n#10\tUNITARIAN-UNIVERSALIST\n#12\t7TH DAY ADVENTIST\n#13\tBAPTIST\n#16\tMETHODIST\n#18\tCHRISTIAN SCIENTIST\n#19\tROMANIAN EAST. ORTH\n#21\tLUTHERAN\n christianMask = pd.Series(dfIN[inColName]).str.contains('episc|catho|prote|greek|jehov|unit|adven|bapt|meth|scien|roman|luth').tolist()\n dfIN.loc[christianMask, outColName] = 'christian'\n \n#2\tOTHER\n#4\tNOT SPECIFIED\n#6\tUNOBTAINABLE\n#11\tnull\n otherMask = pd.Series(dfIN[inColName]).str.contains('other|specif|unob|null').tolist()\n dfIN.loc[otherMask, outColName] = 'other' \n\n#3\tJEWISH\n#17\tHEBREW\n jewMask = pd.Series(dfIN[inColName]).str.contains('jew|hebr').tolist()\n dfIN.loc[jewMask, outColName] = 'jewish' \n\n#14\tBUDDHIST\n budMask = pd.Series(dfIN[inColName]).str.contains('buddhist').tolist()\n dfIN.loc[budMask, outColName] = 'buddhist' \n\n#15\tMUSLIM\n muslimMask = pd.Series(dfIN[inColName]).str.contains('muslim').tolist()\n dfIN.loc[muslimMask, outColName] = 'muslim' \n\n#20\tHINDU\n hinduMask = pd.Series(dfIN[inColName]).str.contains('hindu').tolist()\n dfIN.loc[hinduMask, outColName] = 'hindu' \n \n return(dfIN)\n \n \ndef doConversions(dfIN):\n dfIN = ethnicCoding(dfIN, 'ETHNICITY', \"EthCat\")\n dfIN = religionCoding(dfIN, \"RELIGION\", \"RelCat\")\n return(dfIN)\n\nmimiciii = doConversions(mimiciii)\nmimiciii.head()", "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:4: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n after removing the cwd from sys.path.\n/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:79: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n" ], [ "mimiciii['predicted_mort_icu'] = mimiciii['predictions'] > 0.5\nmimiciii['predicted_mort_icu'] = mimiciii['predicted_mort_icu'].astype(int)\nmimiciii.head()", "_____no_output_____" ] ], [ [ "## Overall Performance", "_____no_output_____" ] ], [ [ "observed_mortality = mimiciii[['mort_icu']].values\npredicted_mortality = mimiciii[['predicted_mort_icu']].values\nplot_roc_curve(observed_mortality, predicted_mortality, filename='ROC_overall_DL')", "_____no_output_____" ] ], [ [ "## Performance by Gender", "_____no_output_____" ] ], [ [ "sbs.catplot(y='gender', hue='mort_icu', kind='count', data=mimiciii, height=10, aspect=1)", "_____no_output_____" ], [ "data = dict()\ndata['male'] = [mimiciii[mimiciii.gender == 'M'][['mort_icu']].values,\n mimiciii[mimiciii.gender == 'M'][['predicted_mort_icu']].values]\ndata['female'] = [mimiciii[mimiciii.gender == 'F'][['mort_icu']].values,\n 
mimiciii[mimiciii.gender == 'F'][['predicted_mort_icu']].values]\nplot_roc_curve_multiple(data, filename='ROC_gender_DL')\n#print(data.items())", "_____no_output_____" ] ], [ [ "## Performance by Ethnicity", "_____no_output_____" ] ], [ [ "data = dict()\nfor e in mimiciii.EthCat.unique():\n data[e] = [mimiciii[mimiciii.EthCat == e][['mort_icu']].values,\n mimiciii[mimiciii.EthCat == e][['predicted_mort_icu']].values]\n\nplot_roc_curve_multiple(data, filename='ROC_ethnicity_DL')\nsbs.catplot(y='EthCat', hue='mort_icu', kind='count', data=mimiciii, height=10, aspect=1)", "/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_ranking.py:808: UndefinedMetricWarning: No positive samples in y_true, true positive value should be meaningless\n UndefinedMetricWarning)\n" ] ], [ [ "## Performance by Marital Status", "_____no_output_____" ] ], [ [ "sbs.catplot(y='MARITAL_STATUS', hue='mort_icu', kind='count', data=mimiciii, height=10, aspect=1)", "_____no_output_____" ], [ "data = dict()\ndata['married'] = [mimiciii[mimiciii.MARITAL_STATUS == 'MARRIED'][['mort_icu']].values,\n mimiciii[mimiciii.MARITAL_STATUS == 'MARRIED'][['predicted_mort_icu']].values]\ndata['single'] = [mimiciii[mimiciii.MARITAL_STATUS == 'SINGLE'][['mort_icu']].values,\n mimiciii[mimiciii.MARITAL_STATUS == 'SINGLE'][['predicted_mort_icu']].values]\n\nplot_roc_curve_multiple(data, filename='ROC_marital_DL')", "_____no_output_____" ] ], [ [ "## Performance by Insurance Status", "_____no_output_____" ] ], [ [ "data = dict()\nfor e in mimiciii.INSURANCE.unique():\n data[e] = [mimiciii[mimiciii.INSURANCE == e][['mort_icu']].values,\n mimiciii[mimiciii.INSURANCE == e][['predicted_mort_icu']].values]\n\nplot_roc_curve_multiple(data, filename='ROC_insurance_DL')\nsbs.catplot(y='INSURANCE', hue='mort_icu', kind='count', data=mimiciii, height=10, aspect=1)", "/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_ranking.py:808: UndefinedMetricWarning: No positive samples in y_true, true positive value should be meaningless\n UndefinedMetricWarning)\n" ], [ "mimiciii.RelCat.unique()\ndata = dict()\nfor e in mimiciii.RelCat.unique():\n data[e] = [mimiciii[mimiciii.RelCat == e][['mort_icu']].values,\n mimiciii[mimiciii.RelCat == e][['predicted_mort_icu']].values]\n\nplot_roc_curve_multiple(data, filename='ROC_religion_DL')\nsbs.catplot(y='RelCat', hue='mort_icu', kind='count', data=mimiciii, height=10, aspect=1)", "/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_ranking.py:808: UndefinedMetricWarning: No positive samples in y_true, true positive value should be meaningless\n UndefinedMetricWarning)\n" ], [ "!apt-get -qq install -y graphviz && pip install -q pydot\nimport pydot", "_____no_output_____" ], [ "from IPython.display import SVG\nfrom keras.utils.vis_utils import model_to_dot\nfrom keras.utils import plot_model\nplot_model(model, to_file='model.png')", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
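The per-group ROC comparisons above rely on a `plot_roc_curve_multiple` helper defined elsewhere. As a hedged illustration only, a minimal scikit-learn version of that kind of per-group ROC/AUC plot might look like the sketch below; the name `plot_group_roc` and its exact signature are assumptions, not the notebook's actual helper, and the curves are most meaningful when each group's second array holds probability scores rather than thresholded 0/1 predictions.

```python
# Hedged sketch only: a minimal per-group ROC/AUC plot in the spirit of
# plot_roc_curve_multiple; the function name and signature are assumptions.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

def plot_group_roc(groups, filename=None):
    """groups: dict mapping group name -> (y_true, y_score) array pairs."""
    plt.figure(figsize=(8, 6))
    for name, (y_true, y_score) in groups.items():
        y_true = np.ravel(y_true)
        y_score = np.ravel(y_score)
        if y_true.min() == y_true.max():
            # A subgroup containing only one class has no defined ROC curve
            # (this is what the UndefinedMetricWarning in the outputs refers to).
            continue
        fpr, tpr, _ = roc_curve(y_true, y_score)
        plt.plot(fpr, tpr, label=f"{name} (AUC = {auc(fpr, tpr):.2f})")
    plt.plot([0, 1], [0, 1], "k--", linewidth=0.8)  # chance line
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend(loc="lower right")
    if filename is not None:
        plt.savefig(filename)
```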
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
e70e7288225386fcbfdb34c421c0e6c3fcb44888
195,385
ipynb
Jupyter Notebook
Phage_Classification.ipynb
doronser/DNA_classifiaction
aea3135e3beb5bd125ee3399de78659d55158a23
[ "MIT" ]
null
null
null
Phage_Classification.ipynb
doronser/DNA_classifiaction
aea3135e3beb5bd125ee3399de78659d55158a23
[ "MIT" ]
null
null
null
Phage_Classification.ipynb
doronser/DNA_classifiaction
aea3135e3beb5bd125ee3399de78659d55158a23
[ "MIT" ]
null
null
null
195,385
195,385
0.847552
[ [ [ "# 8200-BiomX Challenge: Phage Classification\n### &copy; Doron Serebro\nThis notebook is a draft for the 8200-BiomX challenge. The main goal is to classify DNA sequences as either bacteria or phage. A secondary goal is to identify the bacteria and the phage type (compare to NCBI).\n\nThe notebook contains the main logic and flow of the challenge. Helper functions are available here:\n* my_utils.py\n* encodeFASTA.py - my patch for [fasta_one_hot_encoder](https://github.com/LucaCappelletti94/fasta_one_hot_encoder) that supports varying length sequences\n", "_____no_output_____" ] ], [ [ "from google.colab import drive\ndrive.mount('/content/drive')", "_____no_output_____" ] ], [ [ "## Setup\nLet's start with some imports and configurations.", "_____no_output_____" ] ], [ [ "# /content/drive/MyDrive/Code/Phage Classification/\nimport os\nfrom glob import glob\nimport matplotlib.pyplot as plt;\nimport numpy as np\nimport pandas as pd\nimport cv2\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nimport sys\n\n# define home dir and import my modules\nDATA_DIR = \"/content/drive/MyDrive/Code/PhageClassification\"\nsys.path.append(DATA_DIR)\nfrom my_utils import *\nfrom encodeFASTA import FastaEncoder\n\n\n\n\n# magics\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2", "_____no_output_____" ] ], [ [ "### Parameters\nHere we define some useful parameters for the rest of the notebook", "_____no_output_____" ] ], [ [ "# define constants\nDUMMY_FASTA = f\"{DATA_DIR}/example_training_genes.fasta\"\nTRAIN_FASTA = f\"{DATA_DIR}/train.fasta\"\nTEST_FASTA = f\"{DATA_DIR}/test_shuffled.fasta\"\n\n\nUSE_GPU = 0 # SET THIS TO 1 IF RUNNING WITH A GPU RUNTIME\nUSE_REG = 0\nDNA_SEQUENCE_CLIP=500\nBATCH_SIZE = 16\nVAL_SIZE = 3500\nNUM_EPOCHS = 100\nOPT = keras.optimizers.Adam(learning_rate=0.001)\nLOSS = keras.losses.BinaryCrossentropy(from_logits=False)\nMETRICS = [keras.metrics.BinaryAccuracy(), tf.keras.metrics.AUC()]", "_____no_output_____" ] ], [ [ "### GPU\nFor faster runtime during training, it is recommended to use a GPU.\n", "_____no_output_____" ] ], [ [ "# GPU usage\ndevice_name = tf.test.gpu_device_name()\nif USE_GPU==1:\n if device_name != '/device:GPU:0':\n raise SystemError('GPU device not found')\n else:\n print('Found GPU at: {}'.format(device_name))", "_____no_output_____" ] ], [ [ "### Load Data\nThe data for this challenge is a .fasta file containing ~10k DNA sequences.\n\nEvery 2 consecutive lines in a .fasta file represent a single DNA sequence: the first line is the sequence name and the second line is the sequence itself. Example from the training set:\n\n```\n>Phage-4995\nATGACGGCTGATCAGGTGTTTAACCAAGTGCTGCCTGAAGCTTACAAGCTT...\n```\n\nInstead of working with a string representation, let's use [fasta_one_hot_encoder](https://github.com/LucaCappelletti94/fasta_one_hot_encoder) which loads and encodes a .fasta file in a single function.\n\nOne hot encoding is useful as it converts each DNA sequence to a 2D-array of size (sequence_length,4). 
This essentially means that our input data is a binary image and that allows us to utilize CNNs easily.\n\n*In order to deal with sequences of varying lengths, I added zero-padding to fasta-one-hot-encoder.", "_____no_output_____" ] ], [ [ "# my fix for FastaOneHotEncoder that supports sequences of varying lengths\nfastaEncoder = FastaEncoder(nucleotides = \"acgt\", lower = True, sparse = False, handle_unknown=\"ignore\")\n\n# Load DNA sequences from FASTA\ntrain_seqs = fastaEncoder.transform(TRAIN_FASTA)\ntrain_seqs = train_seqs.astype(np.float32)\nprint(f\"training set shape: {train_seqs.shape}\")\nprint(\"generating labels for train dataset\")\n\n# Load labels from FASTA file\ntrain_labels, train_ids = get_training_labels(TRAIN_FASTA,train_seqs.shape[0],2000)", "training set shape: (9998, 9948, 4)\ngenerating labels for train dataset\nparsed 2000 contigs\nparsed 4000 contigs\nparsed 6000 contigs\nparsed 8000 contigs\n" ] ], [ [ "## EDA\nSo far we know that the training data is roughly 10k sequences, and that the largest DNA sequence is of length 9,948.\nLet's get a better understanding of the data we're dealing with here:", "_____no_output_____" ] ], [ [ "num_phages = np.count_nonzero(train_labels)\nnum_bact = train_labels.shape[0] - num_phages\nprint(f\"Training set has {num_phages} phage sequences and {num_bact} bacteria sequences\")", "Training set has 4998 phage sequences and 5000 bacteria sequences\n" ], [ "# check labels of a given line\ntest_line = 2 #play around with this to check different sequences\nprint(f\"train contig id#{train_ids[2]}: {train_labels[2]}\")\nprint(f\"train contig id#{train_ids[5001]}: {train_labels[5001]}\")", "train contig id#5001: 0\ntrain contig id#2: 1\n" ] ], [ [ "Since each DNA sequence has a different length, we should check to see how the distribution of sequence length.", "_____no_output_____" ] ], [ [ "#visualise sequence length\nseq_lengths = []\nbact_lengths = []\nphage_lengths = []\nfor i in range(train_seqs.shape[0]):\n curr_length = np.max(np.nonzero(train_seqs[i]))\n seq_lengths.append(curr_length)\n if train_labels[i] == 0 :\n bact_lengths.append(curr_length)\n else:\n phage_lengths.append(curr_length)\n\nseq_lengths = np.array(seq_lengths)\nbact_lengths = np.array(seq_lengths)\nphage_lengths = np.array(seq_lengths)\n\nplt.figure(figsize=(10,5))\nplt.suptitle(f\"Sequence Length Histograms\\n Mean: {int(np.mean(seq_lengths))} Std: {int(np.std(seq_lengths))}\")\nplt.subplot(1,2,1)\nplt.hist(seq_lengths)\nplt.xlabel(\"Sequence Length\")\nplt.ylabel(\"Count\")\nplt.subplot(1,2,2)\nplt.hist(seq_lengths, log=True)\nplt.xlabel(\"Sequence Length\")\nplt.ylabel(\"Log(Count)\")", "_____no_output_____" ] ], [ [ "Let's compare", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(10,5))\nplt.suptitle(f\"Sequence Length Histograms Per Category\")\nplt.subplot(1,2,1)\nplt.hist(phage_lengths)\nplt.xlabel(\"Phage Sequence Length\")\nplt.ylabel(\"Count\")\nplt.subplot(1,2,2)\nplt.hist(bact_lengths)\nplt.xlabel(\"Bacteria Sequence Length\")\nplt.ylabel(\"Count\")", "_____no_output_____" ] ], [ [ "Conclusions:\n* dataset is balanced so we don't have to worry about label aguemntation or fancy loss for our model.\n* Sequence length distributes roughly the same between labels. This is good since we won't havbe to worry about accidently learning sequence length as a feature.\n* Looks like most sequences are roughly 1,000 samples long. This means that our dataset in currently mostly zeros obtains from zero-padding. 
Let's fix that.", "_____no_output_____" ], [ "## Sequence Splitting\nIf we use the current dataset as input to a model, it would be problematic for 2 reasons:\n1. In most cases, the input data would be dominated by zeros obtained by zero-padding from length ~1k to ~10k.\n2. The input dimension is very large, which would mean our model would be very large.\n\nIn order to deal with this issue, let's define a maximum length of 500 and split each sequence into smaller sequences of length 500. \n\nFor example, a sequence of length 1,507 would be split into 3 sequences of length 500 and the remaining 7 samples would be ignored.\n\n\nAdvantages:\n\n* No need for zero padding\n* Constant and small input dimension\n* Easy to Implement\n\nDisadvantages:\n\n* Loss of information due to ignored samples\n* Loss of information due to split sequences (representation constraints)\n", "_____no_output_____" ] ], [ [ "# create_dataset is available under my_utils.py\ntrain_dataset_split = create_dataset(train_seqs,train_labels)", "building dataset from 9998 sequences split into 18241 sequences of length 500\n" ] ], [ [ "The last step before we can build and train a model is to shuffle the data and split it into a validation set:", "_____no_output_____" ] ], [ [ "# Shuffle dataset, create validation set\nshuffled_dataset = train_dataset_split.shuffle(buffer_size=20000)\nfull_trainset = shuffled_dataset.batch(BATCH_SIZE) # will be used for final training\nval_dataset = shuffled_dataset.take(3500).batch(BATCH_SIZE)\ntrain_dataset = shuffled_dataset.skip(3500).batch(BATCH_SIZE)", "_____no_output_____" ] ], [ [ "## Build Model\nThe model below is inspired by this article:\n\n[Enhancer Identification using Transfer and Adversarial Deep Learning of DNA Sequences](https://www.biorxiv.org/content/biorxiv/early/2018/02/14/264200.full.pdf)\n\n\n**Why Use 1D Convolutions?**\n\nThe information we seek to learn from DNA sequences comes mostly in the form of specific protein sequences (k-mers, motifs, etc..). 
This means that a 1D covolution is more suited for our model than the \"classic\" 2D convolution which learns spatial features.\n\n\n", "_____no_output_____" ] ], [ [ "def create_model():\n # initializer = tf.keras.initializers.RandomNormal(mean=0., stddev=1.)\n if USE_REG==1:\n l2_reg = tf.keras.regularizers.L2(0.01)\n else:\n l2_reg = None\n model = keras.Sequential(\n [\n layers.Conv1D(input_shape=(500, 4), filters=4, kernel_size=9, strides=1, activation=\"relu\", kernel_regularizer=l2_reg),\n layers.MaxPool1D(3),\n layers.Conv1D(filters=20, kernel_size=5, strides=1, activation=\"relu\", kernel_regularizer=l2_reg),\n layers.MaxPool1D(4),\n layers.Conv1D(filters=30, kernel_size=3, strides=1, activation=\"relu\", kernel_regularizer=l2_reg),\n layers.MaxPool1D(4),\n layers.Flatten(),\n layers.Dense(90, activation=\"relu\", kernel_regularizer=l2_reg),\n layers.Dropout(0.5),\n layers.Dense(45, activation=\"relu\", kernel_regularizer=l2_reg),\n # layers.Dropout(0.5),\n layers.Dense(1, activation=\"sigmoid\"),\n ]\n )\n return model\n\n#create a model on the CPU jsut for summary()\nmodel = create_model()\nmodel.compile(optimizer=OPT,loss=LOSS,metrics=METRICS)\nmodel.summary()", "Model: \"sequential_6\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv1d_18 (Conv1D) (None, 492, 4) 148 \n_________________________________________________________________\nmax_pooling1d_18 (MaxPooling (None, 164, 4) 0 \n_________________________________________________________________\nconv1d_19 (Conv1D) (None, 160, 20) 420 \n_________________________________________________________________\nmax_pooling1d_19 (MaxPooling (None, 40, 20) 0 \n_________________________________________________________________\nconv1d_20 (Conv1D) (None, 38, 30) 1830 \n_________________________________________________________________\nmax_pooling1d_20 (MaxPooling (None, 9, 30) 0 \n_________________________________________________________________\nflatten_6 (Flatten) (None, 270) 0 \n_________________________________________________________________\ndense_18 (Dense) (None, 90) 24390 \n_________________________________________________________________\ndropout_6 (Dropout) (None, 90) 0 \n_________________________________________________________________\ndense_19 (Dense) (None, 45) 4095 \n_________________________________________________________________\ndense_20 (Dense) (None, 1) 46 \n=================================================================\nTotal params: 30,929\nTrainable params: 30,929\nNon-trainable params: 0\n_________________________________________________________________\n" ] ], [ [ "### Train With Validation Split", "_____no_output_____" ] ], [ [ "#TODO: add early stopping using tf.keras.callbacks.EarlyStopping\nwith tf.device('/device:GPU:0'):\n model_gpu = create_model()\n model_gpu.compile(optimizer=OPT,loss=LOSS,metrics=METRICS)\n train_hist = model_gpu.fit(train_dataset, epochs=NUM_EPOCHS, validation_data=val_dataset)", "Epoch 1/100\n922/922 [==============================] - 6s 5ms/step - loss: 0.6935 - binary_accuracy: 0.5110 - auc_2: 0.5029 - val_loss: 0.6922 - val_binary_accuracy: 0.5214 - val_auc_2: 0.5071\nEpoch 2/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.6829 - binary_accuracy: 0.5466 - auc_2: 0.5730 - val_loss: 0.6481 - val_binary_accuracy: 0.6329 - val_auc_2: 0.6876\nEpoch 3/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.6288 - 
binary_accuracy: 0.6463 - auc_2: 0.6983 - val_loss: 0.5966 - val_binary_accuracy: 0.6863 - val_auc_2: 0.7517\nEpoch 4/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.6051 - binary_accuracy: 0.6742 - auc_2: 0.7329 - val_loss: 0.5706 - val_binary_accuracy: 0.7151 - val_auc_2: 0.7830\nEpoch 5/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.5871 - binary_accuracy: 0.6922 - auc_2: 0.7552 - val_loss: 0.5649 - val_binary_accuracy: 0.6986 - val_auc_2: 0.7821\nEpoch 6/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.5739 - binary_accuracy: 0.6988 - auc_2: 0.7690 - val_loss: 0.5603 - val_binary_accuracy: 0.7103 - val_auc_2: 0.7846\nEpoch 7/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.5721 - binary_accuracy: 0.7035 - auc_2: 0.7720 - val_loss: 0.5517 - val_binary_accuracy: 0.7326 - val_auc_2: 0.8037\nEpoch 8/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.5603 - binary_accuracy: 0.7128 - auc_2: 0.7838 - val_loss: 0.5401 - val_binary_accuracy: 0.7331 - val_auc_2: 0.8072\nEpoch 9/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.5505 - binary_accuracy: 0.7204 - auc_2: 0.7934 - val_loss: 0.5364 - val_binary_accuracy: 0.7309 - val_auc_2: 0.8076\nEpoch 10/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.5493 - binary_accuracy: 0.7217 - auc_2: 0.7941 - val_loss: 0.5267 - val_binary_accuracy: 0.7483 - val_auc_2: 0.8207\nEpoch 11/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.5431 - binary_accuracy: 0.7250 - auc_2: 0.7998 - val_loss: 0.5192 - val_binary_accuracy: 0.7520 - val_auc_2: 0.8293\nEpoch 12/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.5388 - binary_accuracy: 0.7255 - auc_2: 0.8039 - val_loss: 0.5109 - val_binary_accuracy: 0.7511 - val_auc_2: 0.8393\nEpoch 13/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.5269 - binary_accuracy: 0.7367 - auc_2: 0.8122 - val_loss: 0.5225 - val_binary_accuracy: 0.7369 - val_auc_2: 0.8345\nEpoch 14/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.5240 - binary_accuracy: 0.7408 - auc_2: 0.8155 - val_loss: 0.4845 - val_binary_accuracy: 0.7709 - val_auc_2: 0.8535\nEpoch 15/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.5147 - binary_accuracy: 0.7434 - auc_2: 0.8230 - val_loss: 0.4967 - val_binary_accuracy: 0.7577 - val_auc_2: 0.8412\nEpoch 16/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.5086 - binary_accuracy: 0.7470 - auc_2: 0.8275 - val_loss: 0.4865 - val_binary_accuracy: 0.7631 - val_auc_2: 0.8459\nEpoch 17/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.5031 - binary_accuracy: 0.7526 - auc_2: 0.8327 - val_loss: 0.4785 - val_binary_accuracy: 0.7754 - val_auc_2: 0.8559\nEpoch 18/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4975 - binary_accuracy: 0.7553 - auc_2: 0.8356 - val_loss: 0.4666 - val_binary_accuracy: 0.7831 - val_auc_2: 0.8632\nEpoch 19/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4923 - binary_accuracy: 0.7600 - auc_2: 0.8408 - val_loss: 0.4754 - val_binary_accuracy: 0.7731 - val_auc_2: 0.8622\nEpoch 20/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4865 - binary_accuracy: 0.7645 - auc_2: 0.8441 - val_loss: 0.4597 - val_binary_accuracy: 0.7834 - val_auc_2: 0.8718\nEpoch 21/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4762 - binary_accuracy: 0.7702 - auc_2: 
0.8516 - val_loss: 0.4496 - val_binary_accuracy: 0.7840 - val_auc_2: 0.8818\nEpoch 22/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4700 - binary_accuracy: 0.7739 - auc_2: 0.8560 - val_loss: 0.4333 - val_binary_accuracy: 0.8086 - val_auc_2: 0.8963\nEpoch 23/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4668 - binary_accuracy: 0.7773 - auc_2: 0.8580 - val_loss: 0.4333 - val_binary_accuracy: 0.7969 - val_auc_2: 0.8798\nEpoch 24/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4566 - binary_accuracy: 0.7806 - auc_2: 0.8651 - val_loss: 0.4343 - val_binary_accuracy: 0.7931 - val_auc_2: 0.8894\nEpoch 25/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4536 - binary_accuracy: 0.7890 - auc_2: 0.8676 - val_loss: 0.4178 - val_binary_accuracy: 0.8086 - val_auc_2: 0.8979\nEpoch 26/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4473 - binary_accuracy: 0.7876 - auc_2: 0.8713 - val_loss: 0.4126 - val_binary_accuracy: 0.8160 - val_auc_2: 0.8972\nEpoch 27/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4412 - binary_accuracy: 0.7936 - auc_2: 0.8749 - val_loss: 0.3912 - val_binary_accuracy: 0.8274 - val_auc_2: 0.9089\nEpoch 28/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4391 - binary_accuracy: 0.7965 - auc_2: 0.8762 - val_loss: 0.3777 - val_binary_accuracy: 0.8314 - val_auc_2: 0.9119\nEpoch 29/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4324 - binary_accuracy: 0.7963 - auc_2: 0.8804 - val_loss: 0.3712 - val_binary_accuracy: 0.8371 - val_auc_2: 0.9237\nEpoch 30/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4239 - binary_accuracy: 0.8005 - auc_2: 0.8856 - val_loss: 0.4451 - val_binary_accuracy: 0.7880 - val_auc_2: 0.9116\nEpoch 31/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4214 - binary_accuracy: 0.8018 - auc_2: 0.8865 - val_loss: 0.3589 - val_binary_accuracy: 0.8474 - val_auc_2: 0.9254\nEpoch 32/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4112 - binary_accuracy: 0.8089 - auc_2: 0.8926 - val_loss: 0.3586 - val_binary_accuracy: 0.8460 - val_auc_2: 0.9262\nEpoch 33/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4041 - binary_accuracy: 0.8119 - auc_2: 0.8961 - val_loss: 0.3403 - val_binary_accuracy: 0.8531 - val_auc_2: 0.9313\nEpoch 34/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3983 - binary_accuracy: 0.8112 - auc_2: 0.8994 - val_loss: 0.3442 - val_binary_accuracy: 0.8491 - val_auc_2: 0.9277\nEpoch 35/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.4058 - binary_accuracy: 0.8138 - auc_2: 0.8949 - val_loss: 0.3371 - val_binary_accuracy: 0.8551 - val_auc_2: 0.9438\nEpoch 36/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3873 - binary_accuracy: 0.8238 - auc_2: 0.9053 - val_loss: 0.3770 - val_binary_accuracy: 0.8277 - val_auc_2: 0.9359\nEpoch 37/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3913 - binary_accuracy: 0.8215 - auc_2: 0.9033 - val_loss: 0.3219 - val_binary_accuracy: 0.8777 - val_auc_2: 0.9491\nEpoch 38/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3770 - binary_accuracy: 0.8276 - auc_2: 0.9108 - val_loss: 0.3211 - val_binary_accuracy: 0.8663 - val_auc_2: 0.9486\nEpoch 39/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3823 - binary_accuracy: 0.8248 - auc_2: 0.9080 - val_loss: 0.3044 
- val_binary_accuracy: 0.8846 - val_auc_2: 0.9522\nEpoch 40/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3765 - binary_accuracy: 0.8257 - auc_2: 0.9101 - val_loss: 0.2997 - val_binary_accuracy: 0.8897 - val_auc_2: 0.9569\nEpoch 41/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3663 - binary_accuracy: 0.8348 - auc_2: 0.9158 - val_loss: 0.2922 - val_binary_accuracy: 0.8823 - val_auc_2: 0.9589\nEpoch 42/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3621 - binary_accuracy: 0.8331 - auc_2: 0.9176 - val_loss: 0.2867 - val_binary_accuracy: 0.8891 - val_auc_2: 0.9576\nEpoch 43/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3547 - binary_accuracy: 0.8362 - auc_2: 0.9207 - val_loss: 0.3060 - val_binary_accuracy: 0.8689 - val_auc_2: 0.9502\nEpoch 44/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3526 - binary_accuracy: 0.8375 - auc_2: 0.9220 - val_loss: 0.3226 - val_binary_accuracy: 0.8583 - val_auc_2: 0.9480\nEpoch 45/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3482 - binary_accuracy: 0.8418 - auc_2: 0.9237 - val_loss: 0.2849 - val_binary_accuracy: 0.8851 - val_auc_2: 0.9578\nEpoch 46/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3442 - binary_accuracy: 0.8431 - auc_2: 0.9258 - val_loss: 0.2612 - val_binary_accuracy: 0.8977 - val_auc_2: 0.9669\nEpoch 47/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3400 - binary_accuracy: 0.8459 - auc_2: 0.9274 - val_loss: 0.2682 - val_binary_accuracy: 0.8957 - val_auc_2: 0.9643\nEpoch 48/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3409 - binary_accuracy: 0.8422 - auc_2: 0.9267 - val_loss: 0.2547 - val_binary_accuracy: 0.8954 - val_auc_2: 0.9666\nEpoch 49/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3345 - binary_accuracy: 0.8513 - auc_2: 0.9302 - val_loss: 0.2267 - val_binary_accuracy: 0.9231 - val_auc_2: 0.9764\nEpoch 50/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3325 - binary_accuracy: 0.8494 - auc_2: 0.9306 - val_loss: 0.2352 - val_binary_accuracy: 0.9129 - val_auc_2: 0.9730\nEpoch 51/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3304 - binary_accuracy: 0.8506 - auc_2: 0.9316 - val_loss: 0.2623 - val_binary_accuracy: 0.8931 - val_auc_2: 0.9679\nEpoch 52/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3320 - binary_accuracy: 0.8489 - auc_2: 0.9307 - val_loss: 0.2463 - val_binary_accuracy: 0.9051 - val_auc_2: 0.9705\nEpoch 53/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3279 - binary_accuracy: 0.8514 - auc_2: 0.9320 - val_loss: 0.2365 - val_binary_accuracy: 0.9117 - val_auc_2: 0.9722\nEpoch 54/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3224 - binary_accuracy: 0.8555 - auc_2: 0.9349 - val_loss: 0.2196 - val_binary_accuracy: 0.9217 - val_auc_2: 0.9789\nEpoch 55/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3150 - binary_accuracy: 0.8607 - auc_2: 0.9382 - val_loss: 0.2174 - val_binary_accuracy: 0.9240 - val_auc_2: 0.9777\nEpoch 56/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3231 - binary_accuracy: 0.8543 - auc_2: 0.9345 - val_loss: 0.2185 - val_binary_accuracy: 0.9203 - val_auc_2: 0.9785\nEpoch 57/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3170 - binary_accuracy: 0.8571 - auc_2: 0.9365 - val_loss: 0.2089 - val_binary_accuracy: 
0.9269 - val_auc_2: 0.9816\nEpoch 58/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3085 - binary_accuracy: 0.8596 - auc_2: 0.9401 - val_loss: 0.2188 - val_binary_accuracy: 0.9231 - val_auc_2: 0.9794\nEpoch 59/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3082 - binary_accuracy: 0.8594 - auc_2: 0.9401 - val_loss: 0.2158 - val_binary_accuracy: 0.9206 - val_auc_2: 0.9790\nEpoch 60/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3087 - binary_accuracy: 0.8598 - auc_2: 0.9403 - val_loss: 0.2053 - val_binary_accuracy: 0.9309 - val_auc_2: 0.9814\nEpoch 61/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3007 - binary_accuracy: 0.8662 - auc_2: 0.9430 - val_loss: 0.2060 - val_binary_accuracy: 0.9277 - val_auc_2: 0.9819\nEpoch 62/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.3018 - binary_accuracy: 0.8638 - auc_2: 0.9427 - val_loss: 0.2454 - val_binary_accuracy: 0.8974 - val_auc_2: 0.9751\nEpoch 63/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2930 - binary_accuracy: 0.8655 - auc_2: 0.9460 - val_loss: 0.2000 - val_binary_accuracy: 0.9300 - val_auc_2: 0.9841\nEpoch 64/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2955 - binary_accuracy: 0.8627 - auc_2: 0.9449 - val_loss: 0.1932 - val_binary_accuracy: 0.9291 - val_auc_2: 0.9813\nEpoch 65/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2945 - binary_accuracy: 0.8651 - auc_2: 0.9453 - val_loss: 0.1904 - val_binary_accuracy: 0.9371 - val_auc_2: 0.9859\nEpoch 66/100\n922/922 [==============================] - 5s 6ms/step - loss: 0.2888 - binary_accuracy: 0.8695 - auc_2: 0.9474 - val_loss: 0.1877 - val_binary_accuracy: 0.9406 - val_auc_2: 0.9852\nEpoch 67/100\n922/922 [==============================] - 5s 6ms/step - loss: 0.2923 - binary_accuracy: 0.8694 - auc_2: 0.9463 - val_loss: 0.2010 - val_binary_accuracy: 0.9246 - val_auc_2: 0.9792\nEpoch 68/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2917 - binary_accuracy: 0.8692 - auc_2: 0.9464 - val_loss: 0.1883 - val_binary_accuracy: 0.9406 - val_auc_2: 0.9864\nEpoch 69/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2852 - binary_accuracy: 0.8710 - auc_2: 0.9486 - val_loss: 0.1903 - val_binary_accuracy: 0.9406 - val_auc_2: 0.9863\nEpoch 70/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2847 - binary_accuracy: 0.8719 - auc_2: 0.9489 - val_loss: 0.1998 - val_binary_accuracy: 0.9326 - val_auc_2: 0.9834\nEpoch 71/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2797 - binary_accuracy: 0.8727 - auc_2: 0.9510 - val_loss: 0.1940 - val_binary_accuracy: 0.9294 - val_auc_2: 0.9833\nEpoch 72/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2902 - binary_accuracy: 0.8681 - auc_2: 0.9465 - val_loss: 0.1839 - val_binary_accuracy: 0.9377 - val_auc_2: 0.9867\nEpoch 73/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2776 - binary_accuracy: 0.8759 - auc_2: 0.9511 - val_loss: 0.1922 - val_binary_accuracy: 0.9357 - val_auc_2: 0.9859\nEpoch 74/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2764 - binary_accuracy: 0.8788 - auc_2: 0.9521 - val_loss: 0.1686 - val_binary_accuracy: 0.9414 - val_auc_2: 0.9881\nEpoch 75/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2798 - binary_accuracy: 0.8729 - auc_2: 0.9504 - val_loss: 0.1738 - val_binary_accuracy: 0.9471 - val_auc_2: 
0.9879\nEpoch 76/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2698 - binary_accuracy: 0.8787 - auc_2: 0.9542 - val_loss: 0.1934 - val_binary_accuracy: 0.9286 - val_auc_2: 0.9816\nEpoch 77/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2750 - binary_accuracy: 0.8749 - auc_2: 0.9521 - val_loss: 0.1784 - val_binary_accuracy: 0.9466 - val_auc_2: 0.9907\nEpoch 78/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2728 - binary_accuracy: 0.8754 - auc_2: 0.9528 - val_loss: 0.1738 - val_binary_accuracy: 0.9423 - val_auc_2: 0.9880\nEpoch 79/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2632 - binary_accuracy: 0.8835 - auc_2: 0.9564 - val_loss: 0.1657 - val_binary_accuracy: 0.9420 - val_auc_2: 0.9888\nEpoch 80/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2699 - binary_accuracy: 0.8771 - auc_2: 0.9539 - val_loss: 0.1658 - val_binary_accuracy: 0.9431 - val_auc_2: 0.9888\nEpoch 81/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2642 - binary_accuracy: 0.8814 - auc_2: 0.9560 - val_loss: 0.1523 - val_binary_accuracy: 0.9540 - val_auc_2: 0.9925\nEpoch 82/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2692 - binary_accuracy: 0.8791 - auc_2: 0.9539 - val_loss: 0.2107 - val_binary_accuracy: 0.9117 - val_auc_2: 0.9858\nEpoch 83/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2592 - binary_accuracy: 0.8847 - auc_2: 0.9575 - val_loss: 0.1493 - val_binary_accuracy: 0.9486 - val_auc_2: 0.9893\nEpoch 84/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2658 - binary_accuracy: 0.8810 - auc_2: 0.9552 - val_loss: 0.1732 - val_binary_accuracy: 0.9346 - val_auc_2: 0.9877\nEpoch 85/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2621 - binary_accuracy: 0.8831 - auc_2: 0.9566 - val_loss: 0.1548 - val_binary_accuracy: 0.9506 - val_auc_2: 0.9920\nEpoch 86/100\n922/922 [==============================] - 5s 6ms/step - loss: 0.2597 - binary_accuracy: 0.8835 - auc_2: 0.9576 - val_loss: 0.1658 - val_binary_accuracy: 0.9443 - val_auc_2: 0.9888\nEpoch 87/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2601 - binary_accuracy: 0.8845 - auc_2: 0.9575 - val_loss: 0.1532 - val_binary_accuracy: 0.9494 - val_auc_2: 0.9904\nEpoch 88/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2542 - binary_accuracy: 0.8850 - auc_2: 0.9592 - val_loss: 0.1785 - val_binary_accuracy: 0.9471 - val_auc_2: 0.9895\nEpoch 89/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2541 - binary_accuracy: 0.8841 - auc_2: 0.9589 - val_loss: 0.1431 - val_binary_accuracy: 0.9531 - val_auc_2: 0.9913\nEpoch 90/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2528 - binary_accuracy: 0.8875 - auc_2: 0.9596 - val_loss: 0.1611 - val_binary_accuracy: 0.9577 - val_auc_2: 0.9919\nEpoch 91/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2561 - binary_accuracy: 0.8855 - auc_2: 0.9585 - val_loss: 0.1438 - val_binary_accuracy: 0.9543 - val_auc_2: 0.9928\nEpoch 92/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2528 - binary_accuracy: 0.8870 - auc_2: 0.9593 - val_loss: 0.1411 - val_binary_accuracy: 0.9597 - val_auc_2: 0.9932\nEpoch 93/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2499 - binary_accuracy: 0.8850 - auc_2: 0.9604 - val_loss: 0.1448 - val_binary_accuracy: 0.9520 - val_auc_2: 0.9927\nEpoch 
94/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2480 - binary_accuracy: 0.8887 - auc_2: 0.9613 - val_loss: 0.1412 - val_binary_accuracy: 0.9583 - val_auc_2: 0.9930\nEpoch 95/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2458 - binary_accuracy: 0.8886 - auc_2: 0.9618 - val_loss: 0.1319 - val_binary_accuracy: 0.9631 - val_auc_2: 0.9942\nEpoch 96/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2445 - binary_accuracy: 0.8897 - auc_2: 0.9620 - val_loss: 0.1299 - val_binary_accuracy: 0.9609 - val_auc_2: 0.9939\nEpoch 97/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2494 - binary_accuracy: 0.8870 - auc_2: 0.9604 - val_loss: 0.1383 - val_binary_accuracy: 0.9591 - val_auc_2: 0.9940\nEpoch 98/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2489 - binary_accuracy: 0.8889 - auc_2: 0.9610 - val_loss: 0.1393 - val_binary_accuracy: 0.9569 - val_auc_2: 0.9933\nEpoch 99/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2438 - binary_accuracy: 0.8903 - auc_2: 0.9618 - val_loss: 0.1319 - val_binary_accuracy: 0.9629 - val_auc_2: 0.9943\nEpoch 100/100\n922/922 [==============================] - 5s 5ms/step - loss: 0.2353 - binary_accuracy: 0.8927 - auc_2: 0.9644 - val_loss: 0.1408 - val_binary_accuracy: 0.9586 - val_auc_2: 0.9932\n" ] ], [ [ "* **NOTE!** When rerunning this notebook, you may have to change the key for the auc plots. Just make sure the plot matches the auc name outputted by model.fit(). Currently it is set to \"auc_2\" but rerunning would probably cause it to reset to just \"auc\"", "_____no_output_____" ] ], [ [ "val_loss = train_hist.history[\"val_loss\"][-1]\nval_acc = train_hist.history[\"val_binary_accuracy\"][-1]\nval_auc = train_hist.history[\"val_auc_2\"][-1]\nprint(f\"Validation Loss:{val_loss:.2f}\\nValidation Accuracy: {val_acc:.2f}\\nValidation AUC: {val_auc:.2f}\")\nplt.figure(figsize=(15, 5))\nplt.suptitle(f\"Validation Training Results\")\nplt.subplot(1,3,1)\nplt.plot(train_hist.history[\"loss\"])\nplt.plot(train_hist.history[\"val_loss\"])\nplt.legend([\"train loss\", \"val loss\"])\nplt.xlabel(\"Epoch\")\nplt.title(\"Loss\")\nplt.subplot(1,3,2)\nplt.plot(train_hist.history[\"binary_accuracy\"])\nplt.plot(train_hist.history[\"val_binary_accuracy\"])\nplt.legend([\"train acc\", \"val acc\"])\nplt.xlabel(\"Epoch\")\nplt.title(\"Model Accuracy\")\nplt.subplot(1,3,3)\nplt.plot(train_hist.history[\"auc_2\"])\nplt.plot(train_hist.history[\"val_auc_2\"])\nplt.xlabel(\"Epoch\")\nplt.title(\"AUC\")\nplt.legend([\"train AUC\", \"val AUC\"])", "Validation Loss:0.14\nValidation Accuracy: 0.96\nValidation AUC: 0.99\n" ], [ "model_gpu.save(f\"{DATA_DIR}/models/model0608_96val_acc.h5\", save_format='h5')", "_____no_output_____" ] ], [ [ "Looks like we have a pretty good model. In order to make it even better, let's train on the entire dataset (remove the validation split) for the final evaluation that will be performed on the test set.\n\n*It is quit strange that validation results are better than training results. 
This might be explained by the fact that during validation dropout is not applied or by the fact that our validation set is too small to represent a good test case.", "_____no_output_____" ], [ "### Train Using All Data\n", "_____no_output_____" ] ], [ [ "with tf.device('/device:GPU:0'):\n final_model_gpu = create_model()\n final_model_gpu.compile(optimizer=OPT,loss=LOSS,metrics=METRICS)\n final_train_hist = final_model_gpu.fit(full_trainset, epochs=NUM_EPOCHS)", "Epoch 1/100\n1141/1141 [==============================] - 6s 4ms/step - loss: 0.6926 - binary_accuracy: 0.5918 - auc_2: 0.6403\nEpoch 2/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.6404 - binary_accuracy: 0.6264 - auc_2: 0.6795\nEpoch 3/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.5869 - binary_accuracy: 0.6975 - auc_2: 0.7570\nEpoch 4/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.5733 - binary_accuracy: 0.7031 - auc_2: 0.7708\nEpoch 5/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.5640 - binary_accuracy: 0.7129 - auc_2: 0.7802\nEpoch 6/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.5550 - binary_accuracy: 0.7195 - auc_2: 0.7891\nEpoch 7/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.5509 - binary_accuracy: 0.7196 - auc_2: 0.7920\nEpoch 8/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.5442 - binary_accuracy: 0.7262 - auc_2: 0.7985\nEpoch 9/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.5399 - binary_accuracy: 0.7306 - auc_2: 0.8030\nEpoch 10/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.5370 - binary_accuracy: 0.7314 - auc_2: 0.8050\nEpoch 11/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.5316 - binary_accuracy: 0.7344 - auc_2: 0.8095\nEpoch 12/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.5264 - binary_accuracy: 0.7386 - auc_2: 0.8143\nEpoch 13/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.5205 - binary_accuracy: 0.7433 - auc_2: 0.8184\nEpoch 14/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.5151 - binary_accuracy: 0.7457 - auc_2: 0.8225\nEpoch 15/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.5106 - binary_accuracy: 0.7487 - auc_2: 0.8265\nEpoch 16/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.5093 - binary_accuracy: 0.7517 - auc_2: 0.8283\nEpoch 17/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.5068 - binary_accuracy: 0.7540 - auc_2: 0.8297\nEpoch 18/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.5003 - binary_accuracy: 0.7556 - auc_2: 0.8348\nEpoch 19/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.5002 - binary_accuracy: 0.7538 - auc_2: 0.8342\nEpoch 20/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4945 - binary_accuracy: 0.7568 - auc_2: 0.8390\nEpoch 21/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4880 - binary_accuracy: 0.7648 - auc_2: 0.8431\nEpoch 22/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4827 - binary_accuracy: 0.7666 - auc_2: 0.8474\nEpoch 23/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4797 - binary_accuracy: 0.7672 - auc_2: 0.8483\nEpoch 24/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4749 - binary_accuracy: 0.7743 - auc_2: 
0.8532\nEpoch 25/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4731 - binary_accuracy: 0.7703 - auc_2: 0.8537\nEpoch 26/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.4639 - binary_accuracy: 0.7802 - auc_2: 0.8606\nEpoch 27/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.4643 - binary_accuracy: 0.7770 - auc_2: 0.8601\nEpoch 28/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4599 - binary_accuracy: 0.7799 - auc_2: 0.8627\nEpoch 29/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4574 - binary_accuracy: 0.7776 - auc_2: 0.8642\nEpoch 30/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4506 - binary_accuracy: 0.7853 - auc_2: 0.8690\nEpoch 31/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4488 - binary_accuracy: 0.7872 - auc_2: 0.8704\nEpoch 32/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4437 - binary_accuracy: 0.7905 - auc_2: 0.8737\nEpoch 33/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4408 - binary_accuracy: 0.7932 - auc_2: 0.8753\nEpoch 34/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4384 - binary_accuracy: 0.7924 - auc_2: 0.8765\nEpoch 35/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4342 - binary_accuracy: 0.7945 - auc_2: 0.8786\nEpoch 36/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4309 - binary_accuracy: 0.7971 - auc_2: 0.8810\nEpoch 37/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.4287 - binary_accuracy: 0.7958 - auc_2: 0.8820\nEpoch 38/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4215 - binary_accuracy: 0.8016 - auc_2: 0.8864\nEpoch 39/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4158 - binary_accuracy: 0.8047 - auc_2: 0.8898\nEpoch 40/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4154 - binary_accuracy: 0.8069 - auc_2: 0.8900\nEpoch 41/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.4143 - binary_accuracy: 0.8037 - auc_2: 0.8903\nEpoch 42/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.4085 - binary_accuracy: 0.8073 - auc_2: 0.8936\nEpoch 43/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.3989 - binary_accuracy: 0.8141 - auc_2: 0.8992\nEpoch 44/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.3989 - binary_accuracy: 0.8138 - auc_2: 0.8988\nEpoch 45/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3934 - binary_accuracy: 0.8187 - auc_2: 0.9018\nEpoch 46/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3948 - binary_accuracy: 0.8168 - auc_2: 0.9011\nEpoch 47/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3889 - binary_accuracy: 0.8134 - auc_2: 0.9031\nEpoch 48/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3879 - binary_accuracy: 0.8195 - auc_2: 0.9045\nEpoch 49/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3832 - binary_accuracy: 0.8231 - auc_2: 0.9073\nEpoch 50/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3803 - binary_accuracy: 0.8238 - auc_2: 0.9085\nEpoch 51/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3824 - binary_accuracy: 0.8220 - auc_2: 0.9075\nEpoch 52/100\n1141/1141 [==============================] - 5s 
4ms/step - loss: 0.3759 - binary_accuracy: 0.8228 - auc_2: 0.9105\nEpoch 53/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.3742 - binary_accuracy: 0.8268 - auc_2: 0.9114\nEpoch 54/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.3704 - binary_accuracy: 0.8302 - auc_2: 0.9133\nEpoch 55/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3711 - binary_accuracy: 0.8308 - auc_2: 0.9131\nEpoch 56/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3637 - binary_accuracy: 0.8335 - auc_2: 0.9164\nEpoch 57/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3613 - binary_accuracy: 0.8345 - auc_2: 0.9174\nEpoch 58/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3621 - binary_accuracy: 0.8355 - auc_2: 0.9173\nEpoch 59/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3582 - binary_accuracy: 0.8364 - auc_2: 0.9189\nEpoch 60/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3567 - binary_accuracy: 0.8356 - auc_2: 0.9195\nEpoch 61/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3549 - binary_accuracy: 0.8386 - auc_2: 0.9209\nEpoch 62/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3478 - binary_accuracy: 0.8388 - auc_2: 0.9241\nEpoch 63/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.3505 - binary_accuracy: 0.8400 - auc_2: 0.9227\nEpoch 64/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.3523 - binary_accuracy: 0.8368 - auc_2: 0.9218\nEpoch 65/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.3490 - binary_accuracy: 0.8392 - auc_2: 0.9232\nEpoch 66/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.3443 - binary_accuracy: 0.8400 - auc_2: 0.9251\nEpoch 67/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3433 - binary_accuracy: 0.8407 - auc_2: 0.9256\nEpoch 68/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3395 - binary_accuracy: 0.8437 - auc_2: 0.9273\nEpoch 69/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3369 - binary_accuracy: 0.8453 - auc_2: 0.9287\nEpoch 70/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3402 - binary_accuracy: 0.8433 - auc_2: 0.9268\nEpoch 71/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3307 - binary_accuracy: 0.8494 - auc_2: 0.9315\nEpoch 72/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3299 - binary_accuracy: 0.8491 - auc_2: 0.9317\nEpoch 73/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3372 - binary_accuracy: 0.8447 - auc_2: 0.9285\nEpoch 74/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3219 - binary_accuracy: 0.8536 - auc_2: 0.9346\nEpoch 75/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.3264 - binary_accuracy: 0.8497 - auc_2: 0.9329\nEpoch 76/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.3258 - binary_accuracy: 0.8508 - auc_2: 0.9334\nEpoch 77/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.3255 - binary_accuracy: 0.8499 - auc_2: 0.9333\nEpoch 78/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3260 - binary_accuracy: 0.8486 - auc_2: 0.9330\nEpoch 79/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3243 - binary_accuracy: 0.8504 - auc_2: 
0.9340\nEpoch 80/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3185 - binary_accuracy: 0.8541 - auc_2: 0.9361\nEpoch 81/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3207 - binary_accuracy: 0.8498 - auc_2: 0.9352\nEpoch 82/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3174 - binary_accuracy: 0.8545 - auc_2: 0.9364\nEpoch 83/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3134 - binary_accuracy: 0.8544 - auc_2: 0.9383\nEpoch 84/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3153 - binary_accuracy: 0.8534 - auc_2: 0.9375\nEpoch 85/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3138 - binary_accuracy: 0.8558 - auc_2: 0.9377\nEpoch 86/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.3114 - binary_accuracy: 0.8554 - auc_2: 0.9388\nEpoch 87/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.3078 - binary_accuracy: 0.8574 - auc_2: 0.9402\nEpoch 88/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.3036 - binary_accuracy: 0.8614 - auc_2: 0.9422\nEpoch 89/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3053 - binary_accuracy: 0.8608 - auc_2: 0.9416\nEpoch 90/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3029 - binary_accuracy: 0.8580 - auc_2: 0.9424\nEpoch 91/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.2976 - binary_accuracy: 0.8627 - auc_2: 0.9440\nEpoch 92/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.2946 - binary_accuracy: 0.8648 - auc_2: 0.9453\nEpoch 93/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.3004 - binary_accuracy: 0.8622 - auc_2: 0.9429\nEpoch 94/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.2977 - binary_accuracy: 0.8617 - auc_2: 0.9440\nEpoch 95/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.2933 - binary_accuracy: 0.8628 - auc_2: 0.9455\nEpoch 96/100\n1141/1141 [==============================] - 5s 4ms/step - loss: 0.2948 - binary_accuracy: 0.8624 - auc_2: 0.9448\nEpoch 97/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.2890 - binary_accuracy: 0.8673 - auc_2: 0.9475\nEpoch 98/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.2940 - binary_accuracy: 0.8645 - auc_2: 0.9455\nEpoch 99/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.2928 - binary_accuracy: 0.8628 - auc_2: 0.9455\nEpoch 100/100\n1141/1141 [==============================] - 5s 5ms/step - loss: 0.2876 - binary_accuracy: 0.8657 - auc_2: 0.9475\n" ], [ "final_loss = final_train_hist.history[\"loss\"][-1]\nfinal_acc = final_train_hist.history[\"binary_accuracy\"][-1]\nfinal_auc = final_train_hist.history[\"auc_2\"][-1]\nplt.figure(figsize=(10, 5))\nplt.title(f\"Final Training Results\\nLoss:{final_loss:.2f} Accuracy: {final_acc:.2f} AUC: {final_auc:.2f}\")\nplt.plot(final_train_hist.history[\"loss\"])\nplt.plot(final_train_hist.history[\"binary_accuracy\"])\nplt.plot(final_train_hist.history[\"auc_2\"])\nplt.legend([\"Loss\", \"Accuracy\", \"AUC\"])\nplt.xlabel(\"Epoch\")", "_____no_output_____" ] ], [ [ "Let's save our model for future work:", "_____no_output_____" ] ], [ [ "final_model_gpu.save(f\"{DATA_DIR}/models/final_model.h5\", save_format='h5')", "_____no_output_____" ] ], [ [ "## Generate Submission\nInference is slightly different than training. 
Instead of splitting sequences and treating them as independant dataset elements, we must now perform an ensemble of all sequences and give a single prediction per sequence.\n\nAdditionally, instead of labels, the sequence names are IDs which we need to pair each sequence with for the submission csv.", "_____no_output_____" ], [ "### Load Test Data\nIn order to properly load the test data we must create pairs of sequence IDs and one=hot encoded DNA sequences:", "_____no_output_____" ] ], [ [ "test_seqs = fastaEncoder.transform(TEST_FASTA)\ntest_seqs = test_seqs.astype(np.float32)\n\n# get_test_labels available under my_utils.py\ntest_ids = get_test_labels(TEST_FASTA,test_seqs.shape[0],500)\nprint(f\"Test sequences shape: {test_seqs.shape} Test IDs shape: {test_ids.shape}\")", "parsed 500 contigs\nparsed 1000 contigs\nparsed 1500 contigs\nparsed 2000 contigs\nTest sequences shape: (2001, 7782, 4) Test IDs shape: (2001,)\n" ] ], [ [ "### Inference\nUsing pandas we can easily build a DataFrame to hold the ids and generate a prediction for each sequence in the test", "_____no_output_____" ] ], [ [ "#build a df for our submission csv\nsubmission_df = pd.DataFrame(columns=[\"Contig Name\", \"Classification\", \"Probability Score\", \"tax_id\"])\nsubmission_df[\"Contig Name\"] = test_ids\nsubmission_df[\"Probability Score\"] = get_preds(test_seqs, final_model_gpu) # available in my_utils.py\nsubmission_df[\"Classification\"] = submission_df[\"Probability Score\"].apply(pred2class) # probability to class", "parsed 0 sequences\nparsed 100 sequences\nparsed 200 sequences\nparsed 300 sequences\nparsed 400 sequences\nparsed 500 sequences\nparsed 600 sequences\nparsed 700 sequences\nparsed 800 sequences\nparsed 900 sequences\nparsed 1000 sequences\nparsed 1100 sequences\nparsed 1200 sequences\nparsed 1300 sequences\nparsed 1400 sequences\nparsed 1500 sequences\nparsed 1600 sequences\nparsed 1700 sequences\nparsed 1800 sequences\nparsed 1900 sequences\nparsed 2000 sequences\n" ], [ "submission_df.to_csv(f\"{DATA_DIR}/Submissions/final_submission.csv\",index=False)\nsubmission_df", "_____no_output_____" ], [ "submission_df.Classification.value_counts()", "_____no_output_____" ] ], [ [ "## Future Plans\n### 1. Data augmentations\nBetter data means better learning. Currently, the dataset is not being fully utilized. First of all, I'm literally ignoring chunks of data since I arbitraily chose an input dimension of 500\n\n\n### 2. Hyperparameter tuning (learning-rate, sequence length)\nGiven more time, I would've tried to further improve results by additional tuning of the model's hyperparamaters. Maybe use Keras Tuner for hyperparameters search.\n\n\n### 3. Sliding window slicing for inference mode\nCurrently inference is using the same slicing mechanism as training which simpley cuts the sequence after every 500 characters and calculate the mean prediction. A smarter idea is to use a sliding window approach that would ensure:\n1. Overlap between different slices means less information is lost due to the slicing operation\n2. The entire input sequence will be used\n\n\n### 4. Different model types\nIdeally, the model would be able to handle DNA of any length without slicing them into smaller sequences and losing the sequence information. For this task, a CNN is probably not the best idea (it was just the easiest for me to implement on a short notice). 
Some better models might include:\n* an RNN element (GRU, or bi-directional GRU) that is able to process sequence data without any constraints on the input dimension (see the sketch below)\n* An embedding mechanism. Recent years have shown just how powerful representation learning can be. If I had more time and compute power, I would've tried to use dna2vec as an embedding layer or maybe even train some sort of autoencoder to map DNA sequences into a latent space.\n", "_____no_output_____" ] ] ]
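The sketch below only illustrates the RNN bullet above; it is not the notebook's model, and the layer sizes and the `create_rnn_model` name are assumptions. It shows how masking plus a bidirectional GRU could consume variable-length one-hot DNA sequences directly, avoiding the fixed 500-sample slicing:

```python
# Hypothetical sketch (not the notebook's model): masking + bidirectional GRU
# so that variable-length one-hot DNA sequences can be classified directly.
from tensorflow import keras
from tensorflow.keras import layers

def create_rnn_model():
    model = keras.Sequential([
        # None in the time dimension accepts sequences of any length;
        # the 4 channels are the one-hot A/C/G/T encoding.
        layers.Input(shape=(None, 4)),
        # Zero-padded timesteps are masked and ignored by the recurrent layer.
        layers.Masking(mask_value=0.0),
        layers.Bidirectional(layers.GRU(64)),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[keras.metrics.BinaryAccuracy(), keras.metrics.AUC()])
    return model
```

Batches of unequal-length sequences would still need per-batch padding (e.g. with `tf.data.Dataset.padded_batch`), but no nucleotides would be discarded the way fixed-length slicing discards the trailing samples.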
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
e70e74b3b504d84175701135c0ba2ea89a1ba84e
98,646
ipynb
Jupyter Notebook
Trainable_STFT/Result.ipynb
keunwoochoi/nnAudio
9f432990fdc5519241d442b985ec9e3434e2b6c5
[ "MIT" ]
1
2020-02-13T05:44:07.000Z
2020-02-13T05:44:07.000Z
Trainable_STFT/Result.ipynb
keunwoochoi/nnAudio
9f432990fdc5519241d442b985ec9e3434e2b6c5
[ "MIT" ]
null
null
null
Trainable_STFT/Result.ipynb
keunwoochoi/nnAudio
9f432990fdc5519241d442b985ec9e3434e2b6c5
[ "MIT" ]
null
null
null
648.986842
94,284
0.946688
[ [ [ "import numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "trainable = np.load('trainable_stft.npy')\nnontrainable = np.load('nontrainable_stft.npy')\n\ntrainable2 = np.load('trainable_mel.npy')\nnontrainable2 = np.load('nontrainable_mel.npy')\n\ntrainable3 = np.load('trainable_cqt.npy')\nnontrainable3 = np.load('nontrainable_cqt.npy')\n\ntrainable_lin = np.load('trainable_stft_linear.npy')\nnontrainable_lin = np.load('nontrainable_stft_linear.npy')\n\ntrainable2_lin = np.load('trainable_mel_lin.npy')\nnontrainable2_lin = np.load('nontrainable_mel_lin.npy')\n\ntrainable3_lin = np.load('trainable_cqt_lin.npy')\nnontrainable3_lin = np.load('nontrainable_cqt_lin.npy')", "_____no_output_____" ], [ "fig, ax = plt.subplots(3,2,figsize=(12,15))\ncols = ['Linear', 'CNN']\nrows = ['STFT', 'MelSpec', 'CQT']\n\nax[0,0].plot(trainable_lin)\nax[0,0].plot(nontrainable_lin)\nax[0,0].set_yscale('log')\nax[0,0].set_title('Linear', size=18)\nax[0,0].legend(['Trainable', 'Non-Trainable'])\nax[0,0].set_ylabel('STFT', size=18)\nax[0,0].tick_params(labelsize=14)\nax[0,0].set_ylim(1e-6,0.2)\n\nax[0,1].plot(trainable)\nax[0,1].plot(nontrainable)\nax[0,1].set_yscale('log')\nax[0,1].set_title('CNN', size=18)\nax[0,1].legend(['Trainable', 'Non-Trainable'])\nax[0,1].tick_params(labelsize=14)\nax[0,1].set_ylim(1e-6,0.2)\n\nax[1,0].plot(trainable2_lin)\nax[1,0].plot(nontrainable2_lin)\nax[1,0].set_yscale('log')\nax[1,0].legend(['Trainable', 'Non-Trainable'])\nax[1,0].set_ylabel('MelSpec', size=18)\nax[1,0].set_xlabel('Epoch', size=14)\nax[1,0].tick_params(labelsize=14)\nax[1,0].set_ylim(1e-6,0.2)\n\nax[1,1].plot(trainable2)\nax[1,1].plot(nontrainable2)\nax[1,1].set_yscale('log')\nax[1,1].legend(['Trainable', 'Non-Trainable'])\nax[1,1].set_xlabel('Epoch', size=14)\nax[1,1].tick_params(labelsize=14)\nax[1,1].set_ylim(1e-6,0.2)\n\nax[2,0].plot(trainable3_lin)\nax[2,0].plot(nontrainable3_lin)\nax[2,0].set_yscale('log')\nax[2,0].legend(['Trainable', 'Non-Trainable'])\nax[2,0].set_ylabel('CQT', size=18)\nax[2,0].set_xlabel('Epoch', size=14)\nax[2,0].tick_params(labelsize=14)\nax[2,0].set_ylim(1e-6,0.2)\n\nax[2,1].plot(trainable3)\nax[2,1].plot(nontrainable3)\nax[2,1].set_yscale('log')\nax[2,1].legend(['Trainable', 'Non-Trainable'])\nax[2,1].set_xlabel('Epoch', size=14)\nax[2,1].tick_params(labelsize=14)\nax[2,1].set_ylim(1e-6,0.2)\n\nfor ax_idx, col in zip(ax[0], cols):\n ax_idx.set_title(col, size=18)\n \n# for ax_idx, row in zip(ax[:,0], rows):\n# ax_idx.annotate(row, xy=(0, 0), xytext=(ax_idx.yaxis.labelpad-5, ax_idx.xaxis.labelpad-3), rotation=90,\n# xycoords=ax_idx.yaxis.label, size=18)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
e70e8b274838edacfbb7dec39e69e37e0a0f417a
16,824
ipynb
Jupyter Notebook
Shasta/pamphlets/chapter7.3.2.ipynb
jbsparks/notebooks
c549af8f39fa1e7683f2e8c01068717f639c4edc
[ "Apache-2.0" ]
1
2020-01-08T16:01:31.000Z
2020-01-08T16:01:31.000Z
Shasta/pamphlets/chapter7.3.2.ipynb
jbsparks/notebooks
c549af8f39fa1e7683f2e8c01068717f639c4edc
[ "Apache-2.0" ]
null
null
null
Shasta/pamphlets/chapter7.3.2.ipynb
jbsparks/notebooks
c549af8f39fa1e7683f2e8c01068717f639c4edc
[ "Apache-2.0" ]
null
null
null
27.854305
237
0.573704
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e70e90676b6209a9f7360f86179f505ad7525612
122,987
ipynb
Jupyter Notebook
snippets/.ipynb_checkpoints/FigureS2-checkpoint.ipynb
zhivkoplias/network_generation_algo
3ee2493443acb8773f54bc70d11469a43a87a973
[ "MIT" ]
null
null
null
snippets/.ipynb_checkpoints/FigureS2-checkpoint.ipynb
zhivkoplias/network_generation_algo
3ee2493443acb8773f54bc70d11469a43a87a973
[ "MIT" ]
null
null
null
snippets/.ipynb_checkpoints/FigureS2-checkpoint.ipynb
zhivkoplias/network_generation_algo
3ee2493443acb8773f54bc70d11469a43a87a973
[ "MIT" ]
null
null
null
486.114625
113,792
0.93205
[ [ [ "## Import libs, set paths and load params", "_____no_output_____" ] ], [ [ "import os, glob\nimport numpy as np\nimport pandas as pd\nimport sys\nsys.path.insert(0, \"../src\")\nimport auxilary_functions as f\nimport subprocess\nimport csv\nimport matplotlib.pyplot as plt\nimport json\n\ncfg = f.get_actual_parametrization(\"../src/config-yeast.json\")\nnetworks = ['gnw','networkx','fflatt']\norganisms = ['yeast']\n\nsizes = ['500', '750', '1000', '1500']\nn_trials = 1\n\nos.chdir('../networks/')\ngnwdir = '/home/erikz/sonnhammer/gnw/'\nfflattdir = '../snippets/'\n\nprint(os.getcwd())\ntopology_dir = os.path.join(os.getcwd(), 'topology')", "/home/erik/sweden/sonnhammer/GeneSnake/generation/network_generation_algo/networks\n" ], [ "test = str(cfg)\n#d_test = dict(test)\nprint(test)", "{'RANDOM_SEED': 18, 'GROWTH_BARABASI': 0.4, 'FFL_PERCENTAGES': 0.27, 'SPARSITY': 2.899, 'TEST_NETWORK_SIZE': 100, 'TEST_NETWORK_LINK_PROB': 0.9, 'N_CORES_TO_USE': -1, 'NETWORK_TO_SEARCH_IN': 'yeast', 'SHUFFLED': 0, 'OUTPUT': 'adj_list', 'NO_CYCLES': 0}\n" ], [ "#collect data\ntopo_list = []\nfor network in networks:\n for number, organism in enumerate(organisms):\n for size in sizes:\n current_dir = os.path.join(topology_dir, network, organism, size)\n #create networks if don't exist\n if not os.path.exists(os.path.abspath(current_dir)):\n \n try:\n print('making dirs...')\n os.mkdir(os.path.abspath(current_dir))\n \n except FileExistsError:\n pass\n \n if network == 'gnw':\n \n print('running gnw...')\n subprocess.call(['java', '-jar', gnwdir+'gnw-3.1.2b.jar', '--extract', '--input-net',\\\n gnwdir+'sandbox/yeast_network.tsv',\\\n '--random-seed', '--greedy-selection', '--subnet-size='+str(size),\\\n '--num-subnets='+str(n_trials), '--output-net-format=0', '--keep-self-interactions',\\\n '-c', gnwdir+'sandbox/settings.txt', '--output-path',\\\n str(current_dir)])\n \n elif network == 'networkx':\n \n print('creating scale-free networkx graphs...') \n f.create_nx_network(n_trials,cfg['SPARSITY'],size,current_dir)\n \n elif network == 'fflatt':\n \n print('running fflatt...')\n #python3 test.py 103 0.4 test_networks/\n subprocess.call(['python3', fflattdir+'test.py', json.dumps(cfg), size,\\\n str(n_trials), str(current_dir)])\n \n \n for rep, file in enumerate(glob.glob(os.path.join(current_dir, '*sv'))):\n topo_list.append(f.analyze_exctracted_network(cfg, file, network, rep, size))\n \n #collect data otherwise\n else:\n for rep, file in enumerate(glob.glob(os.path.join(current_dir, '*sv'))):\n topo_list.append(f.analyze_exctracted_network(cfg, file, network, rep, size))\n\n", "making dirs...\nrunning fflatt...\nmaking dirs...\nrunning fflatt...\nmaking dirs...\nrunning fflatt...\nmaking dirs...\nrunning fflatt...\n" ], [ "df_topo = pd.DataFrame(topo_list, columns = ['ffl-nodes', 'sparsity', 'in-degree',\\\n 'out-degree', 'network', 'size', 'rep'])\ndf_topo[\"size\"] = pd.to_numeric(df_topo[\"size\"])\nos.chdir('../results/tables')\ndf_topo.to_csv('topology_stats_yeast.tsv')", "_____no_output_____" ], [ "tool_colors = ['#00876c', '#e79053','#7dc9e1']\nplt.rcParams.update({'font.size': 24})\nfig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2,2)\nfig.tight_layout(pad=0.07)\n\nin_degree = df_topo[['size','in-degree', 'network']]\nin_degree.groupby([\"network\", \"size\"]).agg(np.mean).unstack(0).\\\n plot(kind = \"bar\", y = \"in-degree\", legend = False,\\\n yerr = np.ravel(in_degree.groupby([\"network\", \"size\"]).agg(np.std)).reshape(len(networks),len(sizes)),\\\n ax=ax3, cmap='Dark2', figsize = (20,18), 
title = 'In-degree', color=tool_colors, xlabel = 'network size')\n\nout_degree = df_topo[['out-degree', 'network', 'size']]\nout_degree.groupby([\"network\", \"size\"]).agg(np.mean).unstack(0).\\\n plot(kind = \"bar\", y = \"out-degree\", legend = False,\\\n yerr = np.ravel(out_degree.groupby([\"network\", \"size\"]).agg(np.std)).reshape(len(networks),len(sizes)),\\\n ax=ax4, cmap='Dark2', figsize = (20,18), title = 'Out-degree', color=tool_colors, xlabel = 'network size')\n\nsparsity = df_topo[['sparsity', 'network', 'size']]\nsparsity.groupby([\"network\", \"size\"]).agg(np.mean).unstack(0).\\\n plot(kind = \"bar\", y = \"sparsity\", legend = False,\\\n yerr = np.ravel(sparsity.groupby([\"network\", \"size\"]).agg(np.std)).reshape(len(networks),len(sizes)),\\\n ax=ax2, cmap='Dark2', figsize = (20,18), title = 'Sparsity', color=tool_colors, xlabel = 'network size')\n\nffl_nodes = df_topo[['ffl-nodes', 'network', 'size']]\nffl_nodes.groupby([\"network\", \"size\"]).agg(np.mean).unstack(0).\\\n plot(kind = \"bar\", y = \"ffl-nodes\", legend = False,\\\n yerr = np.ravel(ffl_nodes.groupby([\"network\", \"size\"]).agg(np.std)).reshape(len(networks),len(sizes)),\\\n ax=ax1, cmap='Dark2', figsize = (20,18), title = 'FFL motif-node participation', color=tool_colors,\\\n xlabel = 'network size')\n\nax1.legend([\"FFLatt\", \"GNW\", \"NetworkX graph\"])\nax1.set_ylabel('counts')\nax2.set_ylabel('average links per node')\nax3.set_ylabel('average in-degree per node')\nax4.set_ylabel('average out-degree per node')\n\nfor ax, ylabel in zip([ax1, ax2, ax3, ax4], ['counts', 'average links per node',\\\n 'average in-degree per node', 'average out-degree per node']):\n ax.set_ylabel(ylabel)\n\n#fig.canvas.draw()\n\nfor ax, label in zip([ax1, ax2, ax3, ax4], ['A', 'B', 'C', 'D']):\n bbox = ax.get_tightbbox(fig.canvas.get_renderer())\n fig.text(bbox.x0, bbox.y1, label, fontsize=28, fontweight=\"bold\", ha=\"center\",\n transform=None)\n\nos.chdir('../figures/')\nplt.savefig(\"figureS1_yeast.svg\")\nplt.savefig(\"figureS1_yeast.png\")", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
e70e9957af22ba3f246c58c6c106cdcab875aaea
67,721
ipynb
Jupyter Notebook
Data visualization/Pandas by AAIC.ipynb
krish1511/Python
1c04a576619341a2562ee805917c9e05e4f40707
[ "MIT" ]
1
2020-06-30T19:36:22.000Z
2020-06-30T19:36:22.000Z
Data visualization/Pandas by AAIC.ipynb
krish1511/Python
1c04a576619341a2562ee805917c9e05e4f40707
[ "MIT" ]
null
null
null
Data visualization/Pandas by AAIC.ipynb
krish1511/Python
1c04a576619341a2562ee805917c9e05e4f40707
[ "MIT" ]
null
null
null
27.295848
112
0.336823
[ [ [ "import pandas as pd\nimport numpy as np", "_____no_output_____" ], [ "data = pd.read_csv('nyc_weather.csv')", "_____no_output_____" ], [ "data.head()", "_____no_output_____" ], [ "data.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 31 entries, 0 to 30\nData columns (total 11 columns):\nEST 31 non-null object\nTemperature 31 non-null int64\nDewPoint 31 non-null int64\nHumidity 31 non-null int64\nSea Level PressureIn 31 non-null float64\nVisibilityMiles 31 non-null int64\nWindSpeedMPH 28 non-null float64\nPrecipitationIn 31 non-null object\nCloudCover 31 non-null int64\nEvents 9 non-null object\nWindDirDegrees 31 non-null int64\ndtypes: float64(2), int64(6), object(3)\nmemory usage: 2.8+ KB\n" ], [ "data.describe()", "_____no_output_____" ], [ "numeric_data = data._get_numeric_data()\nnumeric_data.head(2)", "_____no_output_____" ], [ "#featching categorical features\nobj_data = data.select_dtypes(include=['object']).copy()\nobj_data.head()", "_____no_output_____" ], [ "data['Temperature'].max()", "_____no_output_____" ], [ "data.columns", "_____no_output_____" ], [ "data['Events'].value_counts()", "_____no_output_____" ], [ "data['EST'][data['Events']=='Rain']", "_____no_output_____" ], [ "data['WindSpeedMPH'].mean()", "_____no_output_____" ], [ "data = pd.read_csv('weather_data.csv')\ndata.head()", "_____no_output_____" ], [ "weather_data = [\n ('1/1/2017',32,6,'Rain'),\n ('1/2/2017',35,7,'Sunny'),\n ('1/3/2017',28,2,'snow'),\n ('1/4/2017',24,7,'snow'),\n ('1/5/2017',32,4,'Rain')\n]", "_____no_output_____" ], [ "data = pd.DataFrame(weather_data,columns=['day','temperature','windspeed','event'])\n# data = pd.DataFrame(weather_data,columns=['day','temperature','windspeed','event'],index=[1,2,3,4,5])\n\ndata", "_____no_output_____" ], [ "data.iloc[2:5]", "_____no_output_____" ], [ "data['temperature']", "_____no_output_____" ], [ "data[['temperature','event']]", "_____no_output_____" ], [ "data['temperature'].describe()", "_____no_output_____" ], [ "data[data['temperature'] == data['temperature'].max()]", "_____no_output_____" ], [ "data['temperature'].max()", "_____no_output_____" ], [ "data['day'][data['temperature'] == data['temperature'].max()]", "_____no_output_____" ] ], [ [ "## Group-By", "_____no_output_____" ] ], [ [ "data = pd.read_csv('weather_data_cities.csv')\ndata.head()", "_____no_output_____" ], [ "group = data.groupby('city')\nprint(group)", "<pandas.core.groupby.generic.DataFrameGroupBy object at 0x00000279B9A3F588>\n" ], [ "data['city'].value_counts()", "_____no_output_____" ], [ "for city,cities in group:\n print(city)\n print(cities)", "mumbai\n day city temperature windspeed event\n4 1/1/2017 mumbai 90 5 Sunny\n5 1/2/2017 mumbai 85 12 Fog\n6 1/3/2017 mumbai 87 15 Fog\n7 1/4/2017 mumbai 92 5 Rain\nnew york\n day city temperature windspeed event\n0 1/1/2017 new york 32 6 Rain\n1 1/2/2017 new york 36 7 Sunny\n2 1/3/2017 new york 28 12 Snow\n3 1/4/2017 new york 33 7 Sunny\nparis\n day city temperature windspeed event\n8 1/1/2017 paris 45 20 Sunny\n9 1/2/2017 paris 50 13 Cloudy\n10 1/3/2017 paris 54 8 Cloudy\n11 1/4/2017 paris 42 10 Cloudy\n" ], [ "data[data['city']=='mumbai']", "_____no_output_____" ], [ "group.get_group('mumbai')", "_____no_output_____" ], [ "group.max() # it will give you maximum values of numerical features", "_____no_output_____" ], [ "group.mean()", "_____no_output_____" ], [ "group.describe()", "_____no_output_____" ], [ "indian_weather = pd.DataFrame({\n 'city' : ['Andhra','Telangana','Banglore'],\n 'temperature' : [32,36,30],\n 
'humidity' : [80,60,70]\n})", "_____no_output_____" ], [ "indian_weather", "_____no_output_____" ], [ "us_weather = pd.DataFrame({\n 'city' : ['newyork','chicago','new jersey'],\n 'temperature' : [21,14,28],\n 'humidity' : [68,65,75]\n})", "_____no_output_____" ], [ "us_weather", "_____no_output_____" ], [ "data = pd.concat([indian_weather,us_weather],ignore_index=True)\ndata", "_____no_output_____" ], [ "data = pd.concat([indian_weather,us_weather],ignore_index=True,axis=1)\ndata", "_____no_output_____" ] ], [ [ "## Merge DataFrames", "_____no_output_____" ] ], [ [ "temperature_df = pd.DataFrame({\n 'city' : ['andhra','telangana','banglore','chennai'],\n 'temperature' : [32,36,30,40]\n})\ntemperature_df", "_____no_output_____" ], [ "humidity_df = pd.DataFrame({\n 'city' : ['andhra','telangana','banglore'],\n 'humidity' : [68,65,75]\n})\nhumidity_df", "_____no_output_____" ], [ "pd.merge(temperature_df,humidity_df,how='outer',on='city')", "_____no_output_____" ], [ "pd.merge(temperature_df,humidity_df,on='city')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
e70ea2f3f1bd66c91646093c8f27de7201cd9dfb
19,394
ipynb
Jupyter Notebook
notebooks/.ipynb_checkpoints/1.0-full-model-checkpoint.ipynb
kmkping/school_budget_1_data_science
3a8a1153421fb5c65d427717b16e01c850927f2f
[ "MIT" ]
null
null
null
notebooks/.ipynb_checkpoints/1.0-full-model-checkpoint.ipynb
kmkping/school_budget_1_data_science
3a8a1153421fb5c65d427717b16e01c850927f2f
[ "MIT" ]
null
null
null
notebooks/.ipynb_checkpoints/1.0-full-model-checkpoint.ipynb
kmkping/school_budget_1_data_science
3a8a1153421fb5c65d427717b16e01c850927f2f
[ "MIT" ]
null
null
null
35.45521
372
0.534444
[ [ [ "# From Raw Data to Predictions\n\nThis notebook is designed as a follow-up to the [Machine Learning with the Experts: School Budgets](https://www.datacamp.com/courses/machine-learning-with-the-experts-school-budgets) course on Datacamp. We won't explain all the tools and techniques we use here. If you're curious about any of the tools, code, or methods used here, make sure to check out the course!", "_____no_output_____" ] ], [ [ "from __future__ import division\nfrom __future__ import print_function\n%matplotlib inline\n\n\n# ignore deprecation warnings in sklearn\nimport warnings\n\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nimport os\nimport sys\n\n# add the 'src' directory as one where we can import modules\nsrc_dir = os.path.join(os.getcwd(), os.pardir, 'src')\nsys.path.append(src_dir)\n\nfrom data.multilabel import multilabel_sample_dataframe, multilabel_train_test_split\nfrom features.SparseInteractions import SparseInteractions\nfrom models.metrics import multi_multi_log_loss", "_____no_output_____" ] ], [ [ "# Load Data\n\nFirst, we'll load the entire training data set available from DrivenData. In order to make this notebook run, you will need to: \n - [Sign up for an account on DrivenData](http://www.drivendata.org)\n - [Join the Box-plots for education competition](https://www.drivendata.org/competitions/46/box-plots-for-education-reboot/)\n - Download the competition data to the `data` folder in this repository. Files should be named `TrainingSet.csv` and `TestSet.csv`.\n - Enjoy!", "_____no_output_____" ] ], [ [ "path_to_training_data = os.path.join(os.pardir,\n 'data',\n 'TrainingData.csv')\n\ndf = pd.read_csv(path_to_training_data, index_col=0)\n\nprint(df.shape)", "(400277, 25)\n" ] ], [ [ "# Resample Data\n\n400,277 rows is too many to work with locally while we develop our approach. We'll sample down to 10,000 rows so that it is easy and quick to run our analysis.\n\nWe'll also create dummy variables for our labels and split our sampled dataset into a training set and a test set.", "_____no_output_____" ] ], [ [ "LABELS = ['Function',\n 'Use',\n 'Sharing',\n 'Reporting',\n 'Student_Type',\n 'Position_Type',\n 'Object_Type', \n 'Pre_K',\n 'Operating_Status']\n\nNON_LABELS = [c for c in df.columns if c not in LABELS]\n\nSAMPLE_SIZE = 40000\n\nsampling = multilabel_sample_dataframe(df,\n pd.get_dummies(df[LABELS]),\n size=SAMPLE_SIZE,\n min_count=25,\n seed=43)\n\ndummy_labels = pd.get_dummies(sampling[LABELS])\n\nX_train, X_test, y_train, y_test = multilabel_train_test_split(sampling[NON_LABELS],\n dummy_labels,\n 0.2,\n min_count=3,\n seed=43)", "_____no_output_____" ] ], [ [ "# Create preprocessing tools\n\nWe need tools to preprocess our text and numeric data. We'll create those tools here. 
The `combine_text_columns` function will take a DataFrame of text columns and return a single series where all of the text in the columns has been joined together.\n\nWe'll then create `FunctionTransformer` objects that select our text and numeric data from the dataframe.\n\nFinally, we create a custom scoring method that uses the `multi_multi_log_loss` function that is the evaluation metric for the competition.", "_____no_output_____" ] ], [ [ "NUMERIC_COLUMNS = ['FTE', \"Total\"]\n\ndef combine_text_columns(data_frame, to_drop=NUMERIC_COLUMNS + LABELS):\n \"\"\" Takes the dataset as read in, drops the non-feature, non-text columns and\n then combines all of the text columns into a single vector that has all of\n the text for a row.\n \n :param data_frame: The data as read in with read_csv (no preprocessing necessary)\n :param to_drop (optional): Removes the numeric and label columns by default.\n \"\"\"\n # drop non-text columns that are in the df\n to_drop = set(to_drop) & set(data_frame.columns.tolist())\n text_data = data_frame.drop(to_drop, axis=1)\n \n # replace nans with blanks\n text_data.fillna(\"\", inplace=True)\n \n # joins all of the text items in a row (axis=1)\n # with a space in between\n return text_data.apply(lambda x: \" \".join(x), axis=1)\n", "_____no_output_____" ], [ "combine_text_columns(df, to_drop=NUMERIC_COLUMNS + LABELS)", "_____no_output_____" ], [ "from sklearn.preprocessing import FunctionTransformer\n\nget_text_data = FunctionTransformer(combine_text_columns, validate=False)\nget_numeric_data = FunctionTransformer(lambda x: x[NUMERIC_COLUMNS], validate=False)", "_____no_output_____" ], [ "get_text_data.fit_transform(sampling.head(5))", "_____no_output_____" ], [ "get_numeric_data.fit_transform(sampling.head(5))", "_____no_output_____" ], [ "from sklearn.metrics.scorer import make_scorer\n\nlog_loss_scorer = make_scorer(multi_multi_log_loss)", "_____no_output_____" ] ], [ [ "# Train model pipeline\n\nNow we'll train the final pipeline from the course that takes text and numeric data, does the necessary preprocessing, and trains the classifier.", "_____no_output_____" ] ], [ [ "from sklearn.feature_selection import chi2, SelectKBest\n\nfrom sklearn.pipeline import Pipeline, FeatureUnion\n\nfrom sklearn.preprocessing import Imputer\nfrom sklearn.feature_extraction.text import HashingVectorizer\n\nfrom sklearn.multiclass import OneVsRestClassifier\nfrom sklearn.linear_model import LogisticRegression\n\nfrom sklearn.preprocessing import MaxAbsScaler\n\nTOKENS_ALPHANUMERIC = '[A-Za-z0-9]+(?=\\\\s+)'", "_____no_output_____" ], [ "%%time\n\n# set a reasonable number of features before adding interactions\nchi_k = 300\n\n# create the pipeline object\npl = Pipeline([\n ('union', FeatureUnion(\n transformer_list = [\n ('numeric_features', Pipeline([\n ('selector', get_numeric_data),\n ('imputer', Imputer())\n ])),\n ('text_features', Pipeline([\n ('selector', get_text_data),\n ('vectorizer', HashingVectorizer(token_pattern=TOKENS_ALPHANUMERIC,\n norm=None, binary=False, alternate_sign=False,\n ngram_range=(1, 2))),\n ('dim_red', SelectKBest(chi2, chi_k))\n ]))\n ]\n )),\n ('int', SparseInteractions(degree=2)),\n ('scale', MaxAbsScaler()),\n ('clf', OneVsRestClassifier(LogisticRegression()))\n ])\n\n# fit the pipeline to our training data\npl.fit(X_train, y_train.values)\n\n# print the score of our trained pipeline on our test set\nprint(\"Logloss score of trained pipeline: \", log_loss_scorer(pl, X_test, y_test.values))", "/Users/[email 
protected]/anaconda3/lib/python3.7/site-packages/sklearn/utils/deprecation.py:66: DeprecationWarning: Class Imputer is deprecated; Imputer was deprecated in version 0.20 and will be removed in 0.22. Import impute.SimpleImputer from sklearn instead.\n warnings.warn(msg, category=DeprecationWarning)\n" ] ], [ [ "# Predict holdout set and write submission\n\nFinally, we want to use our trained pipeline to predict the holdout dataset. We will write our predictions to a file, `predictions.csv`, that we can submit on [DrivenData](http://www.drivendata.org)!", "_____no_output_____" ] ], [ [ "path_to_holdout_data = os.path.join(os.pardir,\n 'data',\n 'TestSet.csv')\n\n# Load holdout data\nholdout = pd.read_csv(path_to_holdout_data, index_col=0)\n\n# Make predictions\npredictions = pl.predict_proba(holdout)\n\n# Format correctly in new DataFrame: prediction_df\nprediction_df = pd.DataFrame(columns=pd.get_dummies(df[LABELS]).columns,\n index=holdout.index,\n data=predictions)\n\n\n# Save prediction_df to csv called \"predictions.csv\"\nprediction_df.to_csv(\"predictions.csv\")", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
e70eaabfb79262c340f1f4d461f7e4d5f76df8ea
95,666
ipynb
Jupyter Notebook
Codes/P02_Cleaning_Data.ipynb
Atashnezhad/Natural_language_processing_Project
5f9e882f7fbec66d108347155f2cb6f42612252c
[ "MIT" ]
null
null
null
Codes/P02_Cleaning_Data.ipynb
Atashnezhad/Natural_language_processing_Project
5f9e882f7fbec66d108347155f2cb6f42612252c
[ "MIT" ]
null
null
null
Codes/P02_Cleaning_Data.ipynb
Atashnezhad/Natural_language_processing_Project
5f9e882f7fbec66d108347155f2cb6f42612252c
[ "MIT" ]
null
null
null
37.311232
116
0.389511
[ [ [ "# Cleaning data\nIn this section of the project, the data is called from the dataset folder and some edits are applied.\nAt the beginning some essential libraries are installed.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport regex as re\nimport warnings\nwarnings.filterwarnings('ignore')\nfrom nltk.corpus import stopwords \nfrom nltk.stem.porter import PorterStemmer\nfrom nltk.stem import WordNetLemmatizer\nimport pickle", "_____no_output_____" ] ], [ [ "Install some libraries if it is needed using ```pip install libraries_name```", "_____no_output_____" ] ], [ [ "# !pip install nltk\n# !pip install regex", "_____no_output_____" ] ], [ [ "## Nasa Data", "_____no_output_____" ] ], [ [ "file_path = \"../DataSet/\"\nfile_name = \"df_nasa.csv\"\ndf_nasa = pd.read_csv(file_path+file_name)", "_____no_output_____" ], [ "df_nasa.shape", "_____no_output_____" ] ], [ [ "Check the column names and details as follow.", "_____no_output_____" ] ], [ [ "df_nasa.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 6000 entries, 0 to 5999\nData columns (total 83 columns):\nUnnamed: 0 6000 non-null int64\nindex 6000 non-null int64\nall_awardings 5492 non-null object\nallow_live_comments 4249 non-null object\nauthor 6000 non-null object\nauthor_cakeday 19 non-null object\nauthor_flair_background_color 14 non-null object\nauthor_flair_css_class 58 non-null object\nauthor_flair_richtext 6000 non-null object\nauthor_flair_template_id 30 non-null object\nauthor_flair_text 58 non-null object\nauthor_flair_text_color 58 non-null object\nauthor_flair_type 6000 non-null object\nauthor_fullname 6000 non-null object\nauthor_patreon_flair 6000 non-null bool\nauthor_premium 928 non-null object\nawarders 2359 non-null object\ncan_mod_post 6000 non-null bool\ncontest_mode 6000 non-null bool\ncreated_utc 6000 non-null int64\ncrosspost_parent 8 non-null object\ncrosspost_parent_list 8 non-null object\ndomain 6000 non-null object\nedited 2 non-null float64\nfull_link 6000 non-null object\ngilded 16 non-null float64\ngildings 6000 non-null object\nid 6000 non-null object\nis_crosspostable 6000 non-null bool\nis_meta 6000 non-null bool\nis_original_content 6000 non-null bool\nis_reddit_media_domain 6000 non-null bool\nis_robot_indexable 6000 non-null bool\nis_self 6000 non-null bool\nis_video 6000 non-null bool\nlink_flair_background_color 1464 non-null object\nlink_flair_css_class 3698 non-null object\nlink_flair_richtext 6000 non-null object\nlink_flair_template_id 2101 non-null object\nlink_flair_text 3853 non-null object\nlink_flair_text_color 6000 non-null object\nlink_flair_type 6000 non-null object\nlocked 6000 non-null bool\nmedia 400 non-null object\nmedia_embed 396 non-null object\nmedia_metadata 37 non-null object\nmedia_only 6000 non-null bool\nno_follow 6000 non-null bool\nnum_comments 6000 non-null int64\nnum_crossposts 6000 non-null int64\nog_description 33 non-null object\nog_title 33 non-null object\nover_18 6000 non-null bool\nparent_whitelist_status 6000 non-null object\npermalink 6000 non-null object\npinned 6000 non-null bool\npost_hint 2292 non-null object\npreview 2292 non-null object\npwls 6000 non-null int64\nremoved_by 109 non-null object\nremoved_by_category 170 non-null object\nretrieved_on 6000 non-null int64\nscore 6000 non-null int64\nsecure_media 400 non-null object\nsecure_media_embed 396 non-null object\nselftext 971 non-null object\nsend_replies 6000 non-null bool\nspoiler 6000 non-null bool\nsteward_reports 2757 non-null object\nstickied 6000 non-null 
bool\nsubreddit 6000 non-null object\nsubreddit_id 6000 non-null object\nsubreddit_subscribers 6000 non-null int64\nsubreddit_type 6000 non-null object\nthumbnail 6000 non-null object\nthumbnail_height 2265 non-null float64\nthumbnail_width 2265 non-null float64\ntitle 6000 non-null object\ntotal_awards_received 5492 non-null float64\nupdated_utc 4558 non-null float64\nurl 6000 non-null object\nwhitelist_status 6000 non-null object\nwls 6000 non-null int64\ndtypes: bool(18), float64(6), int64(10), object(49)\nmemory usage: 3.1+ MB\n" ] ], [ [ "Choose following column names.", "_____no_output_____" ] ], [ [ "keep_clmns = ['author', 'created_utc', 'domain', 'id', 'num_comments', 'over_18',\n 'post_hint', 'score', 'selftext',\n 'title']", "_____no_output_____" ], [ "df_nasa_keep_colmn = df_nasa[keep_clmns]", "_____no_output_____" ], [ "df_nasa_keep_colmn.head(5)", "_____no_output_____" ], [ "df_nasa_keep_colmn['title'][8]", "_____no_output_____" ], [ "df_nasa_keep_colmn.isnull().sum()", "_____no_output_____" ] ], [ [ "I choose the same approach as Meghani did toward the imputing and dropping and editing columns.", "_____no_output_____" ] ], [ [ "df_nasa_keep_colmn[\"title\"].fillna(\" \", inplace=True)\ndf_nasa_keep_colmn[\"selftext\"].fillna(\" \", inplace=True)\n\ndf_nasa_keep_colmn['text_merged'] = df_nasa_keep_colmn['title'] + \" \" + df_nasa_keep_colmn['selftext']\ndf_nasa_keep_colmn.drop(columns = [\"title\", \"selftext\"], inplace=True)\n\ndf_nasa_keep_colmn['post_hint'].fillna(\"Empty\", inplace=True)", "_____no_output_____" ] ], [ [ "Double check the colmns for null values.", "_____no_output_____" ] ], [ [ "df_nasa_keep_colmn.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 6000 entries, 0 to 5999\nData columns (total 9 columns):\nauthor 6000 non-null object\ncreated_utc 6000 non-null int64\ndomain 6000 non-null object\nid 6000 non-null object\nnum_comments 6000 non-null int64\nover_18 6000 non-null bool\npost_hint 6000 non-null object\nscore 6000 non-null int64\ntext_merged 6000 non-null object\ndtypes: bool(1), int64(3), object(5)\nmemory usage: 380.9+ KB\n" ], [ "df_nasa_keep_colmn.head()", "_____no_output_____" ], [ "print(df_nasa_keep_colmn['text_merged'][0])\nprint(df_nasa_keep_colmn['text_merged'][5999])", "A star shining through Saturn's rings \nThis is Saturn \n" ] ], [ [ "## Space discussion data.", "_____no_output_____" ] ], [ [ "file_path = \"../DataSet/\"\nfile_name = \"df_space.csv\"\ndf_space = pd.read_csv(file_path+file_name)", "_____no_output_____" ], [ "df_space.shape", "_____no_output_____" ], [ "df_space.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 6000 entries, 0 to 5999\nData columns (total 78 columns):\nUnnamed: 0 6000 non-null int64\nindex 6000 non-null int64\nall_awardings 6000 non-null object\nallow_live_comments 6000 non-null bool\nauthor 6000 non-null object\nauthor_cakeday 22 non-null object\nauthor_flair_background_color 0 non-null float64\nauthor_flair_css_class 0 non-null float64\nauthor_flair_richtext 6000 non-null object\nauthor_flair_text 14 non-null object\nauthor_flair_text_color 14 non-null object\nauthor_flair_type 6000 non-null object\nauthor_fullname 6000 non-null object\nauthor_patreon_flair 6000 non-null bool\nauthor_premium 3787 non-null object\nawarders 6000 non-null object\ncan_mod_post 6000 non-null bool\ncontest_mode 6000 non-null bool\ncreated_utc 6000 non-null int64\ndomain 6000 non-null object\nedited 4 non-null float64\nfull_link 6000 non-null object\ngilded 2 non-null float64\ngildings 6000 non-null 
object\nid 6000 non-null object\nis_crosspostable 6000 non-null bool\nis_meta 6000 non-null bool\nis_original_content 6000 non-null bool\nis_reddit_media_domain 6000 non-null bool\nis_robot_indexable 6000 non-null bool\nis_self 6000 non-null bool\nis_video 6000 non-null bool\nlink_flair_background_color 0 non-null float64\nlink_flair_css_class 1219 non-null object\nlink_flair_richtext 6000 non-null object\nlink_flair_template_id 2 non-null object\nlink_flair_text 1223 non-null object\nlink_flair_text_color 6000 non-null object\nlink_flair_type 6000 non-null object\nlocked 6000 non-null bool\nmedia 443 non-null object\nmedia_embed 431 non-null object\nmedia_metadata 39 non-null object\nmedia_only 6000 non-null bool\nno_follow 6000 non-null bool\nnum_comments 6000 non-null int64\nnum_crossposts 6000 non-null int64\nover_18 6000 non-null bool\nparent_whitelist_status 6000 non-null object\npermalink 6000 non-null object\npinned 6000 non-null bool\npost_hint 2267 non-null object\npreview 2267 non-null object\npwls 6000 non-null int64\nremoved_by 329 non-null object\nremoved_by_category 639 non-null object\nretrieved_on 6000 non-null int64\nscore 6000 non-null int64\nsecure_media 443 non-null object\nsecure_media_embed 431 non-null object\nselftext 1176 non-null object\nsend_replies 6000 non-null bool\nspoiler 6000 non-null bool\nsteward_reports 5999 non-null object\nstickied 6000 non-null bool\nsubreddit 6000 non-null object\nsubreddit_id 6000 non-null object\nsubreddit_subscribers 6000 non-null int64\nsubreddit_type 6000 non-null object\nsuggested_sort 3 non-null object\nthumbnail 6000 non-null object\nthumbnail_height 2215 non-null float64\nthumbnail_width 2215 non-null float64\ntitle 6000 non-null object\ntotal_awards_received 6000 non-null int64\nurl 6000 non-null object\nwhitelist_status 6000 non-null object\nwls 6000 non-null int64\ndtypes: bool(19), float64(7), int64(11), object(41)\nmemory usage: 2.8+ MB\n" ], [ "df_space_keep_colmn = df_space[keep_clmns]", "_____no_output_____" ], [ "df_space_keep_colmn.head(5)", "_____no_output_____" ], [ "df_space_keep_colmn.isnull().sum()", "_____no_output_____" ], [ "df_space_keep_colmn[\"title\"].fillna(\" \", inplace=True)\ndf_space_keep_colmn[\"selftext\"].fillna(\" \", inplace=True)", "_____no_output_____" ], [ "df_space_keep_colmn['text_merged'] = df_space_keep_colmn['title'] + \" \" + df_space_keep_colmn['selftext']\ndf_space_keep_colmn.drop(columns = [\"title\", \"selftext\"], inplace=True)", "_____no_output_____" ], [ "df_space_keep_colmn['post_hint'].fillna(\"Empty\", inplace=True)", "_____no_output_____" ], [ "df_space_keep_colmn.head()", "_____no_output_____" ], [ "df_space_keep_colmn.isnull().sum()", "_____no_output_____" ], [ "print(df_space_keep_colmn['text_merged'][0])\nprint(df_space_keep_colmn['text_merged'][5999])", "Basic Goats Milk Soap Base - Low Sweat \nThe Eastern Veil Nebula \n" ] ], [ [ "Adding a colmn to determine the source of each data (Nasa or Space) and merging two sets of data to one.", "_____no_output_____" ] ], [ [ "#Adding one column to determine the subreddit pulled from\ndf_nasa_keep_colmn[\"subreddit\"] = \"NASA\"\ndf_space_keep_colmn[\"subreddit\"] = \"Space_discussion\"\ndf_reddit = pd.concat([df_nasa_keep_colmn, df_space_keep_colmn], axis = 0, ignore_index=True)\ndf_reddit.head(5)", "_____no_output_____" ], [ "df_reddit.shape", "_____no_output_____" ] ], [ [ "Write a function to do regex on text_merged. 
We are going to do the following edits:\n\n* **Removing \"\\n\" characters**\n* **Removing the [removed] characters**\n* **Use regular expressions to do a find-and-replace**\n* **Making all characters lower case**\n* **Replacing multiple spaces**\n* **Removing stopwords**\n* **Instantiate object of class PorterStemmer and stemming**\n* **Adding space to stitch the words together**\n", "_____no_output_____" ] ], [ [ "# inserting the parent directory into current path\nimport sys; sys.path.insert(1, '../Functions')\nimport text_cleaning, text_cleaning_second_approach\ntext_cleaning.Apply(df_reddit)\n# text_cleaning_second_approach.Apply(df_reddit)", "_____no_output_____" ], [ "df_reddit.shape", "_____no_output_____" ], [ "df_reddit.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 12000 entries, 0 to 11999\nData columns (total 10 columns):\nauthor 12000 non-null object\ncreated_utc 12000 non-null int64\ndomain 12000 non-null object\nid 12000 non-null object\nnum_comments 12000 non-null int64\nover_18 12000 non-null bool\npost_hint 12000 non-null object\nscore 12000 non-null int64\ntext_merged 12000 non-null object\nsubreddit 12000 non-null object\ndtypes: bool(1), int64(3), object(6)\nmemory usage: 855.5+ KB\n" ] ], [ [ "Now let's use the pickle library to save the data.", "_____no_output_____" ] ], [ [ "pickle.dump(df_reddit, open('../DataSet/df_reddit.pkl', 'wb'))", "_____no_output_____" ], [ "pickle.dump(df_nasa_keep_colmn, open('../DataSet/df_nasa_keep_colmn.pkl', 'wb'))\npickle.dump(df_space_keep_colmn, open('../DataSet/df_space_keep_colmn.pkl', 'wb'))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e70eafef13934a560f8325274cdcf391e24ee22f
96,445
ipynb
Jupyter Notebook
CIS522/Week2_Homework.ipynb
felipe-parodi/QuantTools4Neuro
a475b1ef04ed261cbc9687bf15dd5a39402ecee1
[ "MIT" ]
null
null
null
CIS522/Week2_Homework.ipynb
felipe-parodi/QuantTools4Neuro
a475b1ef04ed261cbc9687bf15dd5a39402ecee1
[ "MIT" ]
null
null
null
CIS522/Week2_Homework.ipynb
felipe-parodi/QuantTools4Neuro
a475b1ef04ed261cbc9687bf15dd5a39402ecee1
[ "MIT" ]
null
null
null
136.607649
34,742
0.840054
[ [ [ "<a href=\"https://colab.research.google.com/github/felipe-parodi/DL4DataScience/blob/main/Week2_Homework.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "## Week 2 Homework: Design a deep network for linear regression on the QSAR Fish Toxicity dataset.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport random, time\nimport matplotlib.pylab as plt\n%matplotlib inline \nimport matplotlib as mpl\nfrom tqdm.notebook import tqdm, trange\n\nfrom sklearn.decomposition import PCA\nfrom sklearn.model_selection import train_test_split\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\nurl = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00504/qsar_fish_toxicity.csv'\n\nheaders = ['CIC0', 'SM1_Dz(Z)', 'GATS1i', 'NdsCH', 'NdssC', 'MLogP', 'LC50']\n\ndf1 = pd.read_csv(url, names = headers, header=None, delimiter=\";\")\n\nX = np.array(df1)[:,:-1]\ny = np.array(df1)[:,-1].reshape(-1,1)\n\nprint(f'Data: \\n{df1}')\n\nprint(f'\\ninput shape of X: {X.shape}, ' \n f'targets shape of y: {y.shape}')", "Data: \n CIC0 SM1_Dz(Z) GATS1i NdsCH NdssC MLogP LC50\n0 3.260 0.829 1.676 0 1 1.453 3.770\n1 2.189 0.580 0.863 0 0 1.348 3.115\n2 2.125 0.638 0.831 0 0 1.348 3.531\n3 3.027 0.331 1.472 1 0 1.807 3.510\n4 2.094 0.827 0.860 0 0 1.886 5.390\n.. ... ... ... ... ... ... ...\n903 2.801 0.728 2.226 0 2 0.736 3.109\n904 3.652 0.872 0.867 2 3 3.983 4.040\n905 3.763 0.916 0.878 0 6 2.918 4.818\n906 2.831 1.393 1.077 0 1 0.906 5.317\n907 4.057 1.032 1.183 1 3 4.754 8.201\n\n[908 rows x 7 columns]\n\ninput shape of X: (908, 5), targets shape of y: (908, 1)\n[[3.26 0.829 1.676 0. 1. ]\n [2.189 0.58 0.863 0. 0. ]\n [2.125 0.638 0.831 0. 0. ]\n ...\n [3.763 0.916 0.878 0. 6. ]\n [2.831 1.393 1.077 0. 1. ]\n [4.057 1.032 1.183 1. 3. ]]\n" ], [ "#@markdown ## 1. Decompose and visualize data (2 dims)\n# 1. Decompose and visualize data\npca = PCA(2) # project from 6 to 2 dimensions\npca.fit(X)\nZ = pca.transform(X)\ndef arrow(v1, v2, ax):\n arrowprops=dict(arrowstyle='->', linewidth=2, shrinkA=0, shrinkB=0)\n ax.annotate(\"\", v2, v1, arrowprops=arrowprops)\n\nfig, axes = plt.subplots(1,2, figsize=(12,4))\naxes[0].axis('equal')\naxes[0].scatter(X[:,0], X[:,1])\naxes[1].axis('equal')\naxes[1].set_xlim(-3,3)\naxes[1].scatter(Z[:,0], Z[:,1])\n# for l, v in zip(pca.explained_variance_, pca.components_):\n# arrow([0,0], v*l*3, axes[0])\nfor l, v in zip([1.0,0.16], [np.array([1.0,0.0]),np.array([0.0,1.0])]):\n arrow([0,0], v*l*3, axes[1])\naxes[0].set_title(\"Original\")\naxes[0].set_xlabel('IV')\naxes[0].set_ylabel('DV')\naxes[1].set_title(\"Reduced\")\naxes[1].set_xlabel('IV')\naxes[1].set_ylabel('DV');\n\nprint('Original dimensions: ', X.shape)\nprint('Reduced dimension: ', Z.shape)", "Original dimensions: (908, 6)\nReduced dimension: (908, 2)\n" ], [ "# 2. Split data\ntrain_size = 0.8\ntest_size = 0.2\ntrain_X, test_X, train_y, test_y = train_test_split(X, y, \n train_size=train_size, test_size=test_size, random_state=42)\n\n# Convert numpy array to tensor\nX = torch.from_numpy(train_X.astype(np.float32))\ny = torch.from_numpy(train_y.astype(np.float32))\nx_test = torch.from_numpy(test_X.astype(np.float32))\ny_test = torch.from_numpy(test_y.astype(np.float32))\n\n# 3. 
Build network\ninput_dim = 6\noutput_dim = 1\nh1, h2, h3 = 20, 15, 10\n\nmynetwork = nn.Sequential(nn.Linear(input_dim, h1),\n nn.Linear(h1, h2),\n nn.Linear(h2, h3),\n nn.Linear(h3, output_dim))\n\n\n# 4. Set hyperparameters\nlearning_rate = 0.01\ncriterion = nn.MSELoss()\noptimizer = torch.optim.SGD(mynetwork.parameters(), lr=learning_rate)\n\ntraining_losses = []\ntest_losses = []\nnum_epochs = 500\nepoch_range = trange(num_epochs, desc='loss: ', leave=True)\n\n\n# 5.1 Training \nfor epoch in epoch_range:\n if training_losses:\n epoch_range.set_description(\"loss: {:.6f}\".format(training_losses[-1]))\n epoch_range.refresh()\n time.sleep(0.01)\n\n # Compute loss, backpropagate, update, zero gradients\n training_loss = criterion(mynetwork(X), y)\n training_loss.backward()\n optimizer.step()\n optimizer.zero_grad()\n\n # Compile losses\n training_losses.append(training_loss)\n\n if test_losses:\n epoch_range.set_description(\"loss: {:.6f}\".format(test_losses[-1]))\n epoch_range.refresh()\n time.sleep(0.01)\n\n # Compute loss, backpropagate, update, zero gradients\n test_loss = criterion(mynetwork(x_test), y_test)\n test_loss.backward()\n optimizer.step()\n optimizer.zero_grad()\n\n # Compile losses\n test_losses.append(test_loss)\n\n# Print training and testing loss\npreds_train = mynetwork(X) # 726x6 x 6x20\npreds_test = mynetwork(x_test) # 182x6 x 6x20\ntraining_loss = criterion(preds_train, y)\ntest_loss = criterion(preds_test, y_test)\nprint(f'The training loss is: {training_loss}')\nprint(f'The test loss is: {test_loss}')\n\n# 5.2 Plot training and test loss vs. number of epochs\nplt.figure()\nplt.plot(training_losses, label='training', color='b')\nplt.plot(test_losses, label='test',color= 'r')\nplt.xlabel('Epoch')\nplt.ylabel('Loss')\nplt.title('Losses')\nplt.legend()\nplt.show()\n\npredicted = mynetwork(x_test).detach().numpy()\n\n\n# 6. Plot model's performance\n## doesn't yet plot what i want... i want y=y_hat\nplt.figure()\nplt.scatter(y_test, preds_test.detach().numpy(), \n label='original data', alpha=0.5)\nplt.plot(y_test, y_test, label='regression', color='red') \nplt.xlabel('Actual')\nplt.ylabel('Predicted')\nplt.title('Model Performance')\nplt.legend()\nplt.show()\n", "_____no_output_____" ], [ "# 7. Train the model again by removing a feature\n# train_size = 0.8\n# test_size = 0.2\n# train_X, test_X, train_y, test_y = train_test_split(X, y, \n# train_size=train_size, test_size=test_size, random_state=42)\n\n# # Convert numpy array to tensor\n# X = torch.from_numpy(train_X.astype(np.float32))\n# y = torch.from_numpy(train_y.astype(np.float32))\n# x_test = torch.from_numpy(test_X.astype(np.float32))\n# y_test = torch.from_numpy(test_y.astype(np.float32))\n\n\n##### STEPS ####\n# remove a feature to make X_removed\n# split data to make X_train_removed and X_test_removed\n# train\n\n# Build network\ninput_dim = 5 # remove one feature\noutput_dim = 1\nh1, h2, h3 = 20, 15, 10\n\nmynetwork = nn.Sequential(nn.Linear(input_dim, h1),\n nn.Linear(h1, h2),\n nn.Linear(h2, h3),\n nn.Linear(h3, output_dim))\n\n\n# 4. 
Set hyperparameters\nlearning_rate = 0.01\ncriterion = nn.MSELoss()\noptimizer = torch.optim.SGD(mynetwork.parameters(), lr=learning_rate)\n\nlosses = []\npredictions = []\nnum_epochs = 500\nepoch_range = trange(num_epochs, desc='loss: ', leave=True)\n\n\n# Training \nfor epoch in epoch_range:\n for i in range(6):\n X \n if losses:\n epoch_range.set_description(\"loss: {:.6f}\".format(losses[-1]))\n epoch_range.refresh()\n time.sleep(0.01)\n\n # Compute loss, backpropagate, update, zero gradients\n loss = criterion(mynetwork(X), y)\n loss.backward()\n optimizer.step()\n optimizer.zero_grad()\n\n # Compile losses\n losses.append(loss)\n\npredicted = mynetwork(x_test).detach().numpy()\npredictions.append(predicted)\n\n\n# 6. Plot model's performance\nplt.figure()\nplt.scatter(y_test, preds_test.detach().numpy(), \n label='original data', alpha=0.5)\nplt.plot(y_test, y_test, label='regression', color='red') \nplt.xlabel('Actual')\nplt.ylabel('Predicted')\nplt.title('Model Performance')\nplt.legend()\nplt.show()\n\n\n\n## Remove CICO\n\n## Remove SM1_Dz(Z)\n\n## Remove GATS1i\n\n## Remove NdsCH\n\n## Remove NdssC\n\n## Remove MLogP\n\n\nplt.figure()\n for i, preds in enumerate(predictions):\n plt.subplot(2, 3, i + 1)\n plt.plot(...) # plot original y vs. y_hat\n plt.plot(...) # y=y_hat\n plt.legend()", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code" ] ]
e70eb6acee4a07123a951c01e814de8a799540df
10,775
ipynb
Jupyter Notebook
docs/notebooks/atomic/windows/privilege_escalation/SDWIN-190403133337.ipynb
onesorzer0es/Security-Datasets
6a0eec7d9a2ec6026c6ba239ad647c4f59d2a6ef
[ "MIT" ]
294
2020-08-27T01:41:47.000Z
2021-06-28T00:17:15.000Z
docs/notebooks/atomic/windows/privilege_escalation/SDWIN-190403133337.ipynb
onesorzer0es/Security-Datasets
6a0eec7d9a2ec6026c6ba239ad647c4f59d2a6ef
[ "MIT" ]
18
2020-09-01T14:51:13.000Z
2021-06-22T14:12:04.000Z
docs/notebooks/atomic/windows/privilege_escalation/SDWIN-190403133337.ipynb
onesorzer0es/Security-Datasets
6a0eec7d9a2ec6026c6ba239ad647c4f59d2a6ef
[ "MIT" ]
48
2020-08-31T07:30:05.000Z
2021-06-28T00:17:37.000Z
36.64966
342
0.491137
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e70eb8386e92805d2e04d8a8f939ac7802de2fe3
135,686
ipynb
Jupyter Notebook
notebooks/temp/segregation.ipynb
sdaza/income-mobility-mortality-abm
13a29a7280acdf3ce81df12e8039729c0632c26d
[ "MIT" ]
null
null
null
notebooks/temp/segregation.ipynb
sdaza/income-mobility-mortality-abm
13a29a7280acdf3ce81df12e8039729c0632c26d
[ "MIT" ]
null
null
null
notebooks/temp/segregation.ipynb
sdaza/income-mobility-mortality-abm
13a29a7280acdf3ce81df12e8039729c0632c26d
[ "MIT" ]
null
null
null
325.386091
30,990
0.936523
[ [ [ "# ABM: Residential segregation mechanim", "_____no_output_____" ], [ "The segregation mechanism is an adaptation of Schelling's segregation model. \n\n\nAgents live in neighborhoods. At rate *t*, agents decide whether to move or stay in their neighborhood based on the proportion of people within the same quintile of income (e.g., 5 groups of income). Agents have a tolerance threshold (e.g., 20%) of people in the same quintile of income living in the same neighborhood. If the proportion of people of that quintile is lower than the tolerance threshold, agents move to another neighborhood **chosen randomly** from a pool of neighborhood that has not reach its population limit (e.g., more than 30% its original size).\n\nChanges in segregation are very sensitive to changes in the values of parameters and number of income groups. In this example, I use: \n\n- 20 neighbors with an initial population of 100 agents.\n- 5 income groups.\n- Population limit by neighborhood of 1.30 * 100.\n- Moving rate is 0.1 per year.\n- 100 replicates for each scenario. \n- Income distribution comes from CPS data. \n\nTo measure segregation I use the **neighborhood sorting index or NSI** (Jargowsky's 1996), that compares the income variation across all neighborhoods in a metro area with the income variation across all households in that metro area. If households are segregated across neighborhoods by income, the income variation across\nneighborhoods will be similar to the income variation across households, and the NSI will equal almost 1. If all neighborhoods are perfectly economically integrated (i.e., each neighborhood is a microcosm of the entire metro area) the NSI will be almost 0. Because the NSI is based on relative variances in income, measured income segregation will be influenced by the metro areas’ overall inequality. I also use the **average proportion of similar agents**. ", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport pysdaza as sd\n%matplotlib inline", "_____no_output_____" ], [ "# examining some data\nind = sd.read_files('../output/example01/indiv*.csv')\nagg = sd.read_files('../output/example01/aggregate_data*.csv')", "_____no_output_____" ] ], [ [ "The income distribution looks as expected and the average Gini coefficient of this distribution is 0.36. ", "_____no_output_____" ] ], [ [ "# income distribution from all replicates\nsns.distplot(ind.income);", "_____no_output_____" ], [ "print('Gini', round(agg.gini.mean(),2), 'SD =', round(agg.gini.std(), 4))", "Gini 0.36 SD = 0.0046\n" ], [ "# income distribution highest quintile\nsns.distplot(ind.loc[ind.quintile==5, 'income']);", "_____no_output_____" ], [ "# income distribution lower quintile\nsns.distplot(ind.loc[ind.quintile==1, 'income']);", "_____no_output_____" ] ], [ [ "# Segregation measures", "_____no_output_____" ], [ "Most of replicates (73%) reach convergence (all agents satisfy the moving threshold).", "_____no_output_____" ] ], [ [ "(agg.unhappy==0).value_counts()", "_____no_output_____" ] ], [ [ "NSI changes dramatically due to small changes in the moving threshold. In other words, segregation is very sensitive to changes in the moving threshold. This is related to the way the segregation model is implemented and the number of groups (income quintiles). Standard deviation is between 0.04 and 0.09. So there is about 7% of the variability of segregation due to the stochasticity of the simulation. 
", "_____no_output_____" ] ], [ [ "sns.regplot(agg['threshold'], agg['nsi'], scatter_kws={'alpha':.10}, line_kws={'linestyle':'--', 'linewidth':0.6});", "_____no_output_____" ], [ "agg_group = agg.groupby('iter')\nagg_group.nsi.mean() # mean", "_____no_output_____" ], [ "# standard deviation\nagg_group.nsi.std()", "_____no_output_____" ] ], [ [ "I obtain similar results when observing the proportion of neighbors of similar income quintile. This time, variability increases with the moving threshold. That is, when the moving threshold is higher it becomes more difficult to satisfy that threshold and there is a higher chance agents will move, increasing the variability of similarity. This is confirmed by the plot of moving threshold and number of unhappy agent (i.e., who hasn't satisfied that threshold). The variability of **NSI** is more robust to higher thresholds. ", "_____no_output_____" ] ], [ [ "sns.regplot(agg['threshold'], agg['similar'], scatter_kws={'alpha':0.1});", "_____no_output_____" ], [ "agg_group.similar.std()", "_____no_output_____" ], [ "sns.regplot(agg['threshold'], agg['unhappy'], fit_reg=False, scatter_kws={'alpha':0.1});", "_____no_output_____" ] ], [ [ "In sum, the segregation mechanism seems to work properly. ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
e70ebf3d61ac278494d7ad3575432643ec0c2468
74,753
ipynb
Jupyter Notebook
sagemaker-pipelines-preprocess-train-evaluate-batch-transform.ipynb
river-tiger/sagemaker-pipelines
d3f7fc1571021d9dc7e5daab9bec483ea4a22ab9
[ "MIT" ]
2
2021-01-26T02:28:15.000Z
2021-03-24T02:20:29.000Z
sagemaker-pipelines-preprocess-train-evaluate-batch-transform.ipynb
jihys/sagemaker-pipelines
d3f7fc1571021d9dc7e5daab9bec483ea4a22ab9
[ "MIT" ]
null
null
null
sagemaker-pipelines-preprocess-train-evaluate-batch-transform.ipynb
jihys/sagemaker-pipelines
d3f7fc1571021d9dc7e5daab9bec483ea4a22ab9
[ "MIT" ]
4
2021-01-26T05:06:24.000Z
2021-09-27T08:01:58.000Z
36.607738
421
0.544874
[ [ [ "# Orchestrating Jobs with Amazon SageMaker Model Building Pipelines\n\n***본 노트북 코드는 [영문 노트북](https://github.com/aws/amazon-sagemaker-examples/blob/master/sagemaker-pipelines/sagemaker-pipelines-preprocess-train-evaluate-batch-transform.ipynb)의 번역본으로 직역이 아닌 중간중간 설명을 덧붙였습니다.***\n\nAmazon SageMaker 모델 구축 파이프라인은 머신 러닝 (ML) 애플리케이션 개발자 및 운영 엔지니어가 SageMaker 작업을 조율하고(orchestrate) 재현 가능한 ML 파이프라인을 작성할 수 있는 기능들을 제공합니다. 또한 짧은 지연 시간(latency)으로 실시간 추론을 위한 사용자 지정 빌드 모델을 배포하고 배치 변환(batch transform)으로 오프라인 추론을 실행하고 아티팩트의 계보(lineage)를 추적할 수 있습니다. 프로덕션 워크 플로 배포 및 모니터링, 모델 아티팩트 배포, 간단한 인터페이스를 통해 아티팩트 계보 추적, ML 애플리케이션 개발을 위한 안전 및 모범 사례 패러다임을 준수하는 데 있어 건전한 운영 관행을 도입할 수 있습니다.\n\nSageMaker Pipelines 서비스는 선언적 JSON 사양인 SageMaker 파이프라인 DSL(Domains Specific Language)을 지원합니다. 이 DSL은 파이프라인 매개 변수 및 SageMaker 작업 단계의 DAG(Directed Acyclic Graph)를 정의합니다. SageMaker Python SDK(Software Developer Kit)를 사용하면 엔지니어와 과학자가 파이프라인 DSL 생성을 간소화할 수 있습니다.", "_____no_output_____" ], [ "<br>\n\n## 1. 들어가며\n---\n\n### 1.1. SageMaker Pipelines\n\nSageMaker Pipelines는 아래의 기능들을 지원합니다.\n\n* 파이프라인 (Pipelines) - SageMaker 작업 및 리소스 생성을 조율하기 위한 계 및 조건의 DAG입니다.\n* 처리 작업 단계 (Processing job step) - SageMaker에서 피쳐 엔지니어링, 데이터 유효성 검사, 모델 평가 및 모델 해석과 같은 데이터 처리 워크로드를 실행하는 단순화된 관리 환경입니다.\n* 훈련 작업 단계 (Training job steps) - 훈련 데이터셋의 예를 제시하여 예측을 수행하도록 모델을 훈련시키는 반복적인 프로세스입니다.\n* 조건부 실행 단계 (Conditional execution steps) - 파이프라인에서 분기의 조건부 실행을 제공하는 단계입니다.\n* 모델 단계 등록 (Register model steps) - Amazon SageMaker에서 배포 가능한 모델을 생성하는 데 사용할 수 있는 모델 레지스트리에서 모델 패키지 리소스를 생성하는 단계입니다.\n* 모델 단계 만들기 (Create model steps) - 변환 단계 또는 나중에 엔드포인트로 게시할 때 사용할 모델을 생성허는 단계입니다.\n* 작업 단계 변환 (Transform job steps) - 데이터셋에서 훈련 또는 추론을 방해하는 노이즈 또는 편향(bias)을 제거하고, 대규모 데이터셋에서 추론을 가져오고, 영구 엔드포인트가 필요하지 않을 때 추론을 실행하기 위해 데이터셋을 사전 처리하는 일괄 변환입니다.\n* 매개 변수화된 파이프라인 실행 (Parametrized Pipeline executions) - 지정된 매개 변수에 따라 파이프라인 실행의 변형을 활성화합니다.\n\n[Note] SageMaker Pipelines은 캐싱을 지원합니다. 자세한 내용은 아래 가이드를 참조해 주세요.<br> \nhttps://docs.aws.amazon.com/sagemaker/latest/dg/pipelines-caching.html", "_____no_output_____" ], [ "### 1.2. 노트북 개요\n\n이 노트북은 아래의 방법들을 보여줍니다.\n\n* Pipeline parameters - SageMaker 파이프라인을 매개 변수화하는 데 사용할 수 있는 파이프라인 매개 변수 셋을 정의합니다.\n* Processing step - 클린징, 피쳐 엔지니어링, 입력 데이터를 훈련 및 테스트 데이터셋으로 분할하는 처리 단계를 정의합니다.\n* Training step - 전처리된 훈련 데이터셋에서 모델을 훈련하는 훈련 단계를 정의합니다.\n* Processing step - 테스트 데이터셋에서 훈련된 모델의 성능을 평가하는 처리 단계를 정의합니다.\n* Create Model step - 훈련에 사용되는 모델 아티팩트에서 모델을 생성하는 모델 생성 단계를 정의합니다.\n* Transform step - 생성된 모델을 기반으로 일괄 변환을 수행하는 변환 단계를 정의합니다.\n* Register Model step - 모델 훈련에 사용되는 estimator와 모델 아티팩트에서 모델 패키지를 생성하는 모델 등록 단계를 정의합니다.\n* Conditional step - 이전 단계의 출력을 기반으로 조건을 측정하고 다른 단계를 조건부로 실행하는 조건부 단계를 정의합니다.\n* Pipeline definition - 정의된 매개 변수 및 단계를 사용하여 DAG에서 파이프라인 정의를 정의하고 생성합니다.\n* Pipeline execution - 파이프라인 실행을 시작하고 실행이 완료될 때까지 기다립니다.\n* Model evaluation - 검사를 위해 S3 버켓에서 모델 평가 보고서를 다운로드합니다.\n* 두 번째 파이프라인 실행을 시작합니다.", "_____no_output_____" ], [ "### A SageMaker Pipeline\n\n여러분이 생성하는 파이프라인은 전처리, 훈련, 평가, 모델 생성, 일괄 변환, 모델 등록의 일반적인 머신 러닝 (ML) 애플리케이션 패턴을 따릅니다.\n\n![A typical ML Application pipeline](img/pipeline-full.png)", "_____no_output_____" ], [ "### 1.3. 데이터셋 개요\n\n본 노트북에서 사용하는 데이터셋은 [UCI Machine Learning Abalone Dataset](https://archive.ics.uci.edu/ml/datasets/abalone) [1] 으로 전복의 나이를 추정하는 회귀(regression) 문제입니다. \n\n데이터셋의 컬럼은 length (가장 긴 껍질 측정),diameter (지름), height (높이), whole_weight (전체 전복 무게), shucked_weight (몸통 무게), viscera_weight (내장 무게), shell_weight (껍질 무게), 성별 ('M', 'F', 'I';'I'는 새끼 전복인 경우), ring (껍질의 고리 수) 으로 구성되어 있습니다.\n\n고리 수는 나이를 추측할 수 있는 좋은 근사치로 밝혀졌습니다 (나이 = 고리 * 1.5). 
그러나 이 숫자를 얻으려면 원뿔을 통해 껍질을 자르고, 단면을 염색하고, 현미경을 통해 고리의 수를 계산해야 하는데, 이는 시간이 많이 걸리는 작업입니다. 하지만, 머신 러닝 모델로 고리 수를 예측하는 모델을 구축한다면 물리적인 시간을 절약할 수 있습니다.\n\n[1] Dua, D. and Graff, C. (2019). [UCI Machine Learning Repository](http://archive.ics.uci.edu/ml). Irvine, CA: University of California, School of Information and Computer Science.", "_____no_output_____" ] ], [ [ "import boto3\nimport sagemaker\n\n\nregion = boto3.Session().region_name\nsagemaker_session = sagemaker.session.Session()\nrole = sagemaker.get_execution_role()\ndefault_bucket = sagemaker_session.default_bucket()\nmodel_package_group_name = f\"AbaloneModelPackageGroupName\"", "_____no_output_____" ] ], [ [ "데이터를 여러분 계정의 S3 버켓에 업로드합니다. ", "_____no_output_____" ] ], [ [ "!mkdir -p data", "_____no_output_____" ], [ "local_path = \"data/abalone-dataset.csv\"\n\ns3 = boto3.resource(\"s3\")\ns3.Bucket(f\"sagemaker-servicecatalog-seedcode-{region}\").download_file(\n \"dataset/abalone-dataset.csv\",\n local_path\n)\n\nbase_uri = f\"s3://{default_bucket}/abalone\"\ninput_data_uri = sagemaker.s3.S3Uploader.upload(\n local_path=local_path, \n desired_s3_uri=base_uri,\n)\nprint(input_data_uri)", "s3://sagemaker-us-east-1-387793684046/abalone/abalone-dataset.csv\n" ] ], [ [ "모델 생성 후 배치 변환을 위한 두 번째 데이터셋을 다운로드합니다.", "_____no_output_____" ] ], [ [ "local_path = \"data/abalone-dataset-batch\"\n\ns3 = boto3.resource(\"s3\")\ns3.Bucket(f\"sagemaker-servicecatalog-seedcode-{region}\").download_file(\n \"dataset/abalone-dataset-batch\",\n local_path\n)\n\nbase_uri = f\"s3://{default_bucket}/abalone\"\nbatch_data_uri = sagemaker.s3.S3Uploader.upload(\n local_path=local_path, \n desired_s3_uri=base_uri,\n)\nprint(batch_data_uri)", "s3://sagemaker-us-east-1-387793684046/abalone/abalone-dataset-batch\n" ], [ "import argparse\nimport os\nimport requests\nimport tempfile\n\nimport numpy as np\nimport pandas as pd\n\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import StandardScaler, OneHotEncoder\n\n\n# Specify the column names for the .csv file.\nfeature_columns_names = [\n \"sex\",\n \"length\",\n \"diameter\",\n \"height\",\n \"whole_weight\",\n \"shucked_weight\",\n \"viscera_weight\",\n \"shell_weight\",\n]\nlabel_column = \"rings\"\n\nfeature_columns_dtype = {\n \"sex\": str,\n \"length\": np.float64,\n \"diameter\": np.float64,\n \"height\": np.float64,\n \"whole_weight\": np.float64,\n \"shucked_weight\": np.float64,\n \"viscera_weight\": np.float64,\n \"shell_weight\": np.float64\n}\nlabel_column_dtype = {\"rings\": np.float64}\n\n\ndef merge_two_dicts(x, y):\n z = x.copy()\n z.update(y)\n return z\n\n\n\ndf = pd.read_csv(\n f\"./data/abalone-dataset.csv\",\n header=None, \n names=feature_columns_names + [label_column],\n dtype=merge_two_dicts(feature_columns_dtype, label_column_dtype)\n)\nnumeric_features = list(feature_columns_names)\nnumeric_features.remove(\"sex\")\nnumeric_transformer = Pipeline(\n steps=[\n (\"imputer\", SimpleImputer(strategy=\"median\")),\n (\"scaler\", StandardScaler())\n ]\n)\n\ncategorical_features = [\"sex\"]\ncategorical_transformer = Pipeline(\n steps=[\n (\"imputer\", SimpleImputer(strategy=\"constant\", fill_value=\"missing\")),\n (\"onehot\", OneHotEncoder(handle_unknown=\"ignore\"))\n ]\n)\n\npreprocess = ColumnTransformer(\n transformers=[\n (\"num\", numeric_transformer, numeric_features),\n (\"cat\", categorical_transformer, categorical_features)\n ]\n)\n \ny = df.pop(\"rings\")\nX_pre = 
preprocess.fit_transform(df)\ny_pre = y.to_numpy().reshape(len(y), 1)\n \nX = np.concatenate((y_pre, X_pre), axis=1)\n\npd.DataFrame(X).to_csv(f\"data/all.csv\", header=False, index=False)", "_____no_output_____" ] ], [ [ "<br>\n\n## 2. 파이프라인 정의\n---\n\n### 2.1. 파이프라인 파라메터 정의: 파이프라인 실행 매개 변수화를 위한 매개 변수 정의\n\n파이프라인을 매개 변수화하는 데 사용할 수 있는 파이프라인 매개 변수를 정의합니다. 매개 변수를 사용하면 파이프라인 정의를 수정하지 않고도 사용자 지정 파이프라인 실행 및 일정을 설정할 수 있습니다.\n\n지원되는 매개 변수 유형들은 다음과 같습니다.\n\n* `ParameterString` - `str`파이썬 타입을 나타냅니다.\n* `ParameterInteger` - `int` 파이썬 타입을 나타냅니다.\n* `ParameterFloat` - `float` 파이썬 타입을 나타냅니다.\n\n이러한 매개 변수들은 파이프라인 실행 시 재정의할 수 있는 기본값 제공을 지원합니다. 지정된 기본값은 매개 변수 유형의 인스턴스여야 합니다.\n\n이 워크플로에 정의된 매개 변수들은 다음과 같습니다.\n\n* `processing_instance_type` - 처리 job의 `ml.*` 인스턴스 타입입니다.\n* `processing_instance_count` - 처리 job의 인스턴스 개수입니다.\n* `training_instance_type` - 훈련 job의 `ml.*` 인스턴스 타입입니다.\n* `model_approval_status` - CI/CD 목적으로 훈련된 모델을 등록하기 위한 승인 상태입니다. (\"PendingManualApproval\"이 기본값)\n* `input_data` - 입력 데이터의 S3 버켓 URI 위치입니다.\n* `batch_data` - 배치 데이터의 S3 버켓 URI 위치입니다.", "_____no_output_____" ] ], [ [ "from sagemaker.workflow.parameters import (\n ParameterInteger,\n ParameterString,\n)\n\nprocessing_instance_count = ParameterInteger(\n name=\"ProcessingInstanceCount\",\n default_value=1\n)\nprocessing_instance_type = ParameterString(\n name=\"ProcessingInstanceType\",\n default_value=\"ml.m5.xlarge\"\n)\ntraining_instance_type = ParameterString(\n name=\"TrainingInstanceType\",\n default_value=\"ml.m5.xlarge\"\n)\nmodel_approval_status = ParameterString(\n name=\"ModelApprovalStatus\",\n default_value=\"PendingManualApproval\"\n)\ninput_data = ParameterString(\n name=\"InputData\",\n default_value=input_data_uri,\n)\nbatch_data = ParameterString(\n name=\"BatchData\",\n default_value=batch_data_uri,\n)", "_____no_output_____" ] ], [ [ "![Define Parameters](img/pipeline-1.png)", "_____no_output_____" ], [ "### 2.2. 피쳐 엔지니어링을 위한 처리 단계(Processing Step) 정의\n\n이 섹션에서는 전처리 스크립트를 포함하는 `preprocessing_abalone.py` 파일을 작성합니다. `%%writefile` 매직 커맨드를 사용해 스크립트를 업데이트하고 이 셀을 다시 실행하여 최신 버전으로 덮어쓸 수 있습니다. 전처리 스크립트는 scikit-learn을 사용하여 다음을 수행합니다.\n\n- 누락된 성별 카테고리 데이터를 채우고 훈련에 적합하도록 인코딩합니다.\n- 성별 및 링 숫자 데이터를 제외한 모든 숫자 필드의 크기를 조정하고 정규화합니다.\n- 데이터를 훈련, 검증 및 테스트 데이터셋으로 분할합니다.\n- 처리 단계는 입력 데이터에서 스크립트를 실행합니다. 훈련 단계에서는 사전 처리된 훈련 피쳐 및 레이블을 사용하여 모델을 훈련합니다. 
평가 단계에서는 훈련된 모델과 사전 처리된 테스트 피쳐 및 레이블을 사용하여 모델을 평가합니다.", "_____no_output_____" ] ], [ [ "!mkdir -p abalone", "_____no_output_____" ], [ "%%writefile abalone/preprocessing.py\nimport argparse\nimport os\nimport requests\nimport tempfile\n\nimport numpy as np\nimport pandas as pd\n\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import StandardScaler, OneHotEncoder\n\n\n# Since we get a headerless CSV file we specify the column names here.\nfeature_columns_names = [\n \"sex\",\n \"length\",\n \"diameter\",\n \"height\",\n \"whole_weight\",\n \"shucked_weight\",\n \"viscera_weight\",\n \"shell_weight\",\n]\nlabel_column = \"rings\"\n\nfeature_columns_dtype = {\n \"sex\": str,\n \"length\": np.float64,\n \"diameter\": np.float64,\n \"height\": np.float64,\n \"whole_weight\": np.float64,\n \"shucked_weight\": np.float64,\n \"viscera_weight\": np.float64,\n \"shell_weight\": np.float64\n}\nlabel_column_dtype = {\"rings\": np.float64}\n\n\ndef merge_two_dicts(x, y):\n z = x.copy()\n z.update(y)\n return z\n\n\nif __name__ == \"__main__\":\n base_dir = \"/opt/ml/processing\"\n\n df = pd.read_csv(\n f\"{base_dir}/input/abalone-dataset.csv\",\n header=None, \n names=feature_columns_names + [label_column],\n dtype=merge_two_dicts(feature_columns_dtype, label_column_dtype)\n )\n numeric_features = list(feature_columns_names)\n numeric_features.remove(\"sex\")\n numeric_transformer = Pipeline(\n steps=[\n (\"imputer\", SimpleImputer(strategy=\"median\")),\n (\"scaler\", StandardScaler())\n ]\n )\n\n categorical_features = [\"sex\"]\n categorical_transformer = Pipeline(\n steps=[\n (\"imputer\", SimpleImputer(strategy=\"constant\", fill_value=\"missing\")),\n (\"onehot\", OneHotEncoder(handle_unknown=\"ignore\"))\n ]\n )\n\n preprocess = ColumnTransformer(\n transformers=[\n (\"num\", numeric_transformer, numeric_features),\n (\"cat\", categorical_transformer, categorical_features)\n ]\n )\n \n y = df.pop(\"rings\")\n X_pre = preprocess.fit_transform(df)\n y_pre = y.to_numpy().reshape(len(y), 1)\n \n X = np.concatenate((y_pre, X_pre), axis=1)\n \n np.random.shuffle(X)\n train, validation, test = np.split(X, [int(.7*len(X)), int(.85*len(X))])\n\n \n pd.DataFrame(train).to_csv(f\"{base_dir}/train/train.csv\", header=False, index=False)\n pd.DataFrame(validation).to_csv(f\"{base_dir}/validation/validation.csv\", header=False, index=False)\n pd.DataFrame(test).to_csv(f\"{base_dir}/test/test.csv\", header=False, index=False)", "Writing abalone/preprocessing.py\n" ] ], [ [ "다음으로 `SKLearnProcessor` 프로세서의 인스턴스를 만들고 `ProcessingStep`에서 사용합니다.\n\n또한, 이 노트북 전체에서 사용할 프레임워크 버전(`framework_version`)을 지정합니다.\n\n프로세서 인스턴스에서 사용하는 `processing_instance_type` 및 `processing_instance_count` 매개 변수를 확인합니다.", "_____no_output_____" ] ], [ [ "from sagemaker.sklearn.processing import SKLearnProcessor\n\n\nframework_version = \"0.23-1\"\n\nsklearn_processor = SKLearnProcessor(\n framework_version=framework_version,\n instance_type=processing_instance_type,\n instance_count=processing_instance_count,\n base_job_name=\"sklearn-abalone-process\",\n role=role,\n)", "_____no_output_____" ] ], [ [ "마지막으로 프로세서 인스턴스를 사용하여 입력 및 출력 채널, 파이프라인이 파이프라인 실행을 호출 할때 실행될 코드와 함께 `ProcessingStep` 을 생성합니다. 이는 Python SDK 프로세서 인스턴스의 `run` 메소드와 유사합니다.\n\n`ProcessingStep`에 전달된 `input_data` 매개 변수는 단계에서 사용되는 입력 데이터입니다. 이 입력 데이터는 실행될 때 프로세서 인스턴스에서 사용됩니다.\n\n또한 처리 작업의 출력 구성에 지정된 `\"train_data\"` 및 `\"test_data\"` 채널을 확인합니다. 
`Properties` 단계는 후속 단계에서 사용할 수 있으며, 실행 시 런타임 값을 확인할 수 있습니다. 특히 이 사용법은 훈련 단계를 정의할 때 호출됩니다.\n", "_____no_output_____" ] ], [ [ "from sagemaker.processing import ProcessingInput, ProcessingOutput\nfrom sagemaker.workflow.steps import ProcessingStep\n \n\nstep_process = ProcessingStep(\n name=\"AbaloneProcess\",\n processor=sklearn_processor,\n inputs=[\n ProcessingInput(source=input_data, destination=\"/opt/ml/processing/input\"), \n ],\n outputs=[\n ProcessingOutput(output_name=\"train\", source=\"/opt/ml/processing/train\"),\n ProcessingOutput(output_name=\"validation\", source=\"/opt/ml/processing/validation\"),\n ProcessingOutput(output_name=\"test\", source=\"/opt/ml/processing/test\")\n ],\n code=\"abalone/preprocessing.py\",\n)", "_____no_output_____" ] ], [ [ "![Define a Processing Step for Feature Engineering](img/pipeline-2.png)", "_____no_output_____" ], [ "### 2.3. 모델 훈련을 위한 훈련 단계(Training Step) 정의\n\n이 섹션에서는 Amazon SageMaker의 [XGBoost Algorithm](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html) 빌트인 알고리즘을 사용하여 이 데이터셋을 훈련합니다. 기존 SageMaker와 동일하게 XGBoost 알고리즘 및 입력 데이터셋에 대한 Estimator를 구성합니다. 일반적인 훈련 스크립트는 입력 채널에서 데이터를 로드하고, 하이퍼 파라메터로 훈련을 구성하고, 모델을 훈련시키고, 나중에 엔드포인트에 호스팅할 수 있도록 모델을 `model_dir`에 저장합니다.\n\n`training_instance_type` 매개 변수는 파이프라인의 여러 위치에서 사용될 수 있다는 점을 참조하세요. 이 경우 `training_instance_type`은 estimator로 전달됩니다.", "_____no_output_____" ] ], [ [ "from sagemaker.estimator import Estimator\n\n\nmodel_path = f\"s3://{default_bucket}/AbaloneTrain\"\nimage_uri = sagemaker.image_uris.retrieve(\n framework=\"xgboost\",\n region=region,\n version=\"1.0-1\",\n py_version=\"py3\",\n instance_type=training_instance_type,\n)\nxgb_train = Estimator(\n image_uri=image_uri,\n instance_type=training_instance_type,\n instance_count=1,\n output_path=model_path,\n role=role,\n)\nxgb_train.set_hyperparameters(\n objective=\"reg:linear\",\n num_round=50,\n max_depth=5,\n eta=0.2,\n gamma=4,\n min_child_weight=6,\n subsample=0.7,\n silent=0\n)", "_____no_output_____" ] ], [ [ "마지막으로 estimator 인스턴스와 이전 `ProcessingStep`의 속성을 사용하여 `TrainingStep`을 생성합니다. 이는 Python SDK의 estimator `fit` 메서드와 유사합니다.\n\n구체적으로 `\"train_data\"` 출력 채널의 `S3Uri`를 `TrainingStep`으로 전달합니다. 또한, 파이프라인에서 모델 평가를 위해 다른 `\"test_data\"` 출력 채널을 사용합니다. 파이프라인 단계의 `properties`는 설명 호출의 해당 응답의 object 모델과 일치합니다. 이러한 속성은 placeholder 값으로 참조될 수 있으며 런타임 시 확인할 수 있습니다. 예를 들어 `ProcessingStep` `properties` 속성은 [DescribeProcessingJob](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeProcessingJob.html) 응답 object의 object 모델과 일치합니다.\n", "_____no_output_____" ] ], [ [ "from sagemaker.inputs import TrainingInput\nfrom sagemaker.workflow.steps import TrainingStep\n\n\nstep_train = TrainingStep(\n name=\"AbaloneTrain\",\n estimator=xgb_train,\n inputs={\n \"train\": TrainingInput(\n s3_data=step_process.properties.ProcessingOutputConfig.Outputs[\n \"train\"\n ].S3Output.S3Uri,\n content_type=\"text/csv\"\n ),\n \"validation\": TrainingInput(\n s3_data=step_process.properties.ProcessingOutputConfig.Outputs[\n \"validation\"\n ].S3Output.S3Uri,\n content_type=\"text/csv\"\n )\n },\n)", "_____no_output_____" ] ], [ [ "![Define a Training Step to Train a Model](img/pipeline-3.png)", "_____no_output_____" ], [ "### 2.4. 
훈련된 모델을 평가하기 위한 모델 평가 단계(Evaluation Step) 정의\n\n먼저 모델 평가를 수행하는 처리 단계에 지정된 평가 스크립트를 작성합니다.\n\n파이프라인 실행 후 분석을 위해 결과 `evaluation.json`을 검사할 수 있습니다.\n\n평가 스크립트는 `xgboost`를 사용하여 다음을 수행합니다.\n\n* 모델 로드\n* 테스트 데이터 로드\n* 테스트 데이터에 대한 예측 수행\n* 정확도(accuracy) 및 ROC 곡선을 포함한 분류 보고서(classification report) 작성\n* 평가 보고서를 평가 디렉터리에 저장", "_____no_output_____" ] ], [ [ "%%writefile abalone/evaluation.py\nimport json\nimport pathlib\nimport pickle\nimport tarfile\n\nimport joblib\nimport numpy as np\nimport pandas as pd\nimport xgboost\n\nfrom sklearn.metrics import mean_squared_error\n\n\nif __name__ == \"__main__\":\n model_path = f\"/opt/ml/processing/model/model.tar.gz\"\n with tarfile.open(model_path) as tar:\n tar.extractall(path=\".\")\n \n model = pickle.load(open(\"xgboost-model\", \"rb\"))\n\n test_path = \"/opt/ml/processing/test/test.csv\"\n df = pd.read_csv(test_path, header=None)\n \n y_test = df.iloc[:, 0].to_numpy()\n df.drop(df.columns[0], axis=1, inplace=True)\n \n X_test = xgboost.DMatrix(df.values)\n \n predictions = model.predict(X_test)\n\n mse = mean_squared_error(y_test, predictions)\n std = np.std(y_test - predictions)\n report_dict = {\n \"regression_metrics\": {\n \"mse\": {\n \"value\": mse,\n \"standard_deviation\": std\n },\n },\n }\n\n output_dir = \"/opt/ml/processing/evaluation\"\n pathlib.Path(output_dir).mkdir(parents=True, exist_ok=True)\n \n evaluation_path = f\"{output_dir}/evaluation.json\"\n with open(evaluation_path, \"w\") as f:\n f.write(json.dumps(report_dict))", "Writing abalone/evaluation.py\n" ] ], [ [ "다음으로 `ScriptProcessor` 프로세서의 인스턴스를 만들고 `ProcessingStep`에서 사용합니다.\n\n프로세서에 전달된 `processing_instance_type` 매개 변수에 유의하세요.", "_____no_output_____" ] ], [ [ "from sagemaker.processing import ScriptProcessor\n\n\nscript_eval = ScriptProcessor(\n image_uri=image_uri,\n command=[\"python3\"],\n instance_type=processing_instance_type,\n instance_count=1,\n base_job_name=\"script-abalone-eval\",\n role=role,\n)", "_____no_output_____" ] ], [ [ "프로세서 인스턴스를 사용하여 입력 및 출력 채널과 파이프라인이 파이프라인 실행을 호출할 때 실행될 코드와 함께 `ProcessingStep`을 생성합니다. 이는 Python SDK에서 프로세서 인스턴스의 `run` 메서드와 유사합니다.\n\n구체적으로, `step_train` 속성의 `S3ModelArtifacts`와 `step_process` 속성의 `\"test_data\"` 출력 채널의 `S3Uri`가 입력으로 전달됩니다. `TrainingStep` 및 `ProcessingStep` `properties` 속성은 각각 [DescribeTrainingJob](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeTrainingJob.html) 및 [DescribeProcessingJob](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeProcessingJob.html) 응답 object의 object 모델과 일치합니다.", "_____no_output_____" ] ], [ [ "from sagemaker.workflow.properties import PropertyFile\n\n\nevaluation_report = PropertyFile(\n name=\"EvaluationReport\",\n output_name=\"evaluation\",\n path=\"evaluation.json\"\n)\nstep_eval = ProcessingStep(\n name=\"AbaloneEval\",\n processor=script_eval,\n inputs=[\n ProcessingInput(\n source=step_train.properties.ModelArtifacts.S3ModelArtifacts,\n destination=\"/opt/ml/processing/model\"\n ),\n ProcessingInput(\n source=step_process.properties.ProcessingOutputConfig.Outputs[\n \"test\"\n ].S3Output.S3Uri,\n destination=\"/opt/ml/processing/test\"\n )\n ],\n outputs=[\n ProcessingOutput(output_name=\"evaluation\", source=\"/opt/ml/processing/evaluation\"),\n ],\n code=\"abalone/evaluation.py\",\n property_files=[evaluation_report],\n)", "_____no_output_____" ] ], [ [ "![Define a Model Evaluation Step to Evaluate the Trained Model](img/pipeline-4.png)", "_____no_output_____" ], [ "### 2.5. 
모델 생성을 위한 모델 생성 단계(Create Model Step) 정의\n\n예제 모델을 사용하여 배치 변환을 수행하려면 SageMaker 모델을 생성해야 합니다. \n\n구체적으로 `TrainingStep`, `step_train` 속성에서 `S3ModelArtifacts`를 전달합니다. `TrainingStep` `properties` 속성은 [DescribeTrainingJob](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeTrainingJob.html) 응답 object의 object 모델과 일치합니다.", "_____no_output_____" ] ], [ [ "from sagemaker.model import Model\n\n\nmodel = Model(\n image_uri=image_uri,\n model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,\n sagemaker_session=sagemaker_session,\n role=role,\n)", "_____no_output_____" ] ], [ [ "SageMaker 모델을 생성하기 위해 모델 입력값(`instance_type` 및 `accelerator_type`)을 제공한 다음 이전에 정의된 입력 및 모델 인스턴스를 전달하는 `CreateModelStep`을 정의합니다.", "_____no_output_____" ] ], [ [ "from sagemaker.inputs import CreateModelInput\nfrom sagemaker.workflow.steps import CreateModelStep\n\n\ninputs = CreateModelInput(\n instance_type=\"ml.m5.large\",\n accelerator_type=\"ml.eia1.medium\",\n)\nstep_create_model = CreateModelStep(\n name=\"AbaloneCreateModel\",\n model=model,\n inputs=inputs,\n)", "_____no_output_____" ] ], [ [ "### 2.6. 배치 변환을 수행하기위한 변환 단계(Transform Step) 정의\n\n이제 모델 인스턴스가 정의되었으므로 적절한 모델 유형, 컴퓨팅 인스턴스 유형 및 원하는 출력 S3 URI를 사용하여 Transformer 인스턴스를 생성합니다.\n\n구체적으로 `CreateModelStep`, `step_create_model` 속성에서 `ModelName`을 전달합니다. `CreateModelStep` `properties` 속성은 [DescribeModel](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeModel.html) 응답 object의 object 모델과 일치합니다.", "_____no_output_____" ] ], [ [ "from sagemaker.transformer import Transformer\n\n\ntransformer = Transformer(\n model_name=step_create_model.properties.ModelName,\n instance_type=\"ml.m5.xlarge\",\n instance_count=1,\n output_path=f\"s3://{default_bucket}/AbaloneTransform\"\n)", "_____no_output_____" ] ], [ [ "앞에서 정의한 `batch_data` 파이프라인 매개 변수를 사용하여 transformer 인스턴스와 `TransformInput`을 전달합니다.", "_____no_output_____" ] ], [ [ "from sagemaker.inputs import TransformInput\nfrom sagemaker.workflow.steps import TransformStep\n\n\nstep_transform = TransformStep(\n name=\"AbaloneTransform\",\n transformer=transformer,\n inputs=TransformInput(data=batch_data)\n)", "_____no_output_____" ] ], [ [ "### 2.7. 모델 패키지 생성을 위한 모델 등록 단계(Register Model Step) 정의\n\n훈련 단계에 지정된 estimator 인스턴스를 사용하여 `RegisterModel`의 인스턴스를 생성합니다. 파이프라인에서 `RegisterModel`을 실행한 결과는 모델 패키지입니다. 모델 패키지는 추론에 필요한 모든 요소를 패키징하는 재사용 가능한 모델 아티팩트 추상화입니다. 주로 선택적인 모델 가중치 위치와 함께 사용할 추론 이미지를 정의하는 추론 사양으로 구성됩니다.\n\n모델 패키지 그룹은 모델 패키지의 컬렉션입니다. 특정 ML 비즈니스 문제에 대해 모델 패키지 그룹을 생성할 수 있으며 모델 패키지의 새 버전을 여기에 추가할 수 있습니다. 일반적으로 고객은 SageMaker 파이프라인을 실행할 때마다 모델 패키지 버전을 그룹에 추가할 수 있도록 SageMaker 파이프 라인에 대한 `ModelPackageGroup`을 생성해야 합니다.\n\n`RegisterModel`의 구성은 Python SDK에 있는 estimator 인스턴스의 `register` 메서드와 유사합니다.\n\n구체적으로 `TrainingStep`, `step_train` 속성에서 `S3ModelArtifacts`를 전달합니다. 
`TrainingStep` `properties` 속성은 [DescribeTrainingJob](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DescribeTrainingJob.html) 응답 object의 object 모델과 일치합니다.\n\n이 노트북에 제공된 특정 모델 패키지 그룹 이름은 모델 레지스트리에서 사용할 수 있으며, CI/CD는 SageMaker 프로젝트에서 작동합니다.", "_____no_output_____" ] ], [ [ "from sagemaker.model_metrics import MetricsSource, ModelMetrics \nfrom sagemaker.workflow.step_collections import RegisterModel\n\n\nmodel_metrics = ModelMetrics(\n model_statistics=MetricsSource(\n s3_uri=\"{}/evaluation.json\".format(\n step_eval.arguments[\"ProcessingOutputConfig\"][\"Outputs\"][0][\"S3Output\"][\"S3Uri\"]\n ),\n content_type=\"application/json\"\n )\n)\nstep_register = RegisterModel(\n name=\"AbaloneRegisterModel\",\n estimator=xgb_train,\n model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,\n content_types=[\"text/csv\"],\n response_types=[\"text/csv\"],\n inference_instances=[\"ml.t2.medium\", \"ml.m5.xlarge\"],\n transform_instances=[\"ml.m5.xlarge\"],\n model_package_group_name=model_package_group_name,\n approval_status=model_approval_status,\n model_metrics=model_metrics,\n)", "_____no_output_____" ] ], [ [ "![Define a Create Model Step and Batch Transform to Process Data in Batch at Scale](img/pipeline-5.png)", "_____no_output_____" ], [ "### 2.8. 정확도를 확인하고 조건부로 모델을 생성하고 배치 변환을 실행하고 모델 레지스트리에 모델을 등록하기 위한 조건 단계(Condition Step) 정의\n\n이 단계에서는 평가 단계 `step_eval에` 의해 결정된 모델의 정확도가 지정된 값을 초과하는 경우에만 모델이 등록됩니다. `ConditionStep`을 사용하면 파이프라인이 단계 속성의 조건에 따라 파이프 라인 DAG에서 조건부 실행을 지원할 수 있습니다.\n\n아래 코드 셀에서는 다음을 수행합니다.\n\n- 평가 단계 `step_eval`의 출력에서 찾은 정확도 값에 `ConditionLessThanOrEqualTo`를 정의합니다.\n- `ConditionStep`의 조건 목록에 있는 조건을 사용합니다.\n- `CreateModelStep` 및 `TransformStep` 단계와 `RegisterModel` 단계 컬렉션을 조건이 `True`로 평가되는 경우에만 실행되는 `ConditionStep`의 `if_steps`에 전달합니다.", "_____no_output_____" ] ], [ [ "from sagemaker.workflow.conditions import ConditionLessThanOrEqualTo\nfrom sagemaker.workflow.condition_step import (\n ConditionStep,\n JsonGet,\n)\n\n\ncond_lte = ConditionLessThanOrEqualTo(\n left=JsonGet(\n step=step_eval,\n property_file=evaluation_report,\n json_path=\"regression_metrics.mse.value\",\n ),\n right=6.0\n)\n\nstep_cond = ConditionStep(\n name=\"AbaloneMSECond\",\n conditions=[cond_lte],\n if_steps=[step_register, step_create_model, step_transform],\n else_steps=[], \n)", "_____no_output_____" ] ], [ [ "![Define a Condition Step to Check Accuracy and Conditionally Execute Steps](img/pipeline-6.png)", "_____no_output_____" ], [ "### 2.9. 파이프라인 정의 (Parameters, Steps, Conditions로 구성)\n\n이 섹션을 통해 앞 섹션에서 정의한 단계들을 파이프라인으로 결합하여 실행할 수 있습니다.\n\n`Pipeline` 인스턴스 생성 시 `name`, `parameters`, 그리고 `steps`이 필요합니다. `name`은 `(account, region)` 쌍에서 고유해야 합니다.\n\nNote:\n\n* 정의에 사용된 모든 매개 변수가 있어야 합니다.\n* 파이프라인으로 전달된 단계들은 실행 순서대로 나열할 필요가 없습니다. SageMaker Pipeline 서비스는 실행을 완료하기 위한 단계로 데이터 종속성 DAG를 해결합니다.\n* 단계들은 파이프라인 단계 리스트와 모든 조건 단계 if/else 리스트에서 고유해야 합니다.", "_____no_output_____" ] ], [ [ "from sagemaker.workflow.pipeline import Pipeline\n\n\npipeline_name = f\"AbalonePipeline\"\npipeline = Pipeline(\n name=pipeline_name,\n parameters=[\n processing_instance_type, \n processing_instance_count,\n training_instance_type,\n model_approval_status,\n input_data,\n batch_data,\n ],\n steps=[step_process, step_train, step_eval, step_cond],\n)", "_____no_output_____" ] ], [ [ "![Define a Pipeline of Parameters, Steps, and Conditions](img/pipeline-7.png)", "_____no_output_____" ], [ "<br>\n\n## 3. 파이프라인 실행\n---\n\n파이프라인 정의를 생성하였으면, 이를 곧바로 SageMaker에 제출하여 파이프라인을 실행할 수 있습니다.\n\n### 3.0. 
(Optional) 파이프라인 정의 검토\n\n파이프라인 정의의 JSON을 검사하여 파이프라인이 잘 정의되어 있고 매개 변수 및 단계 속성이 올바르게 해석되는지 확인할 수 있습니다.", "_____no_output_____" ] ], [ [ "import json\n\ndefinition = json.loads(pipeline.definition())\ndefinition", "No finished training job found associated with this estimator. Please make sure this estimator is only used for building workflow config\n" ] ], [ [ "### 3.1. 파이프라인을 SageMaker에 제출하고 실행 시작\n\n파이프라인 서비스에 파이프라인 정의를 제출합니다. 전달된 역할은 파이프라인 서비스에서 단계들 내에서 정의된 모든 작업들을 생성하는 데 사용됩니다.\n", "_____no_output_____" ] ], [ [ "pipeline.upsert(role_arn=role)", "No finished training job found associated with this estimator. Please make sure this estimator is only used for building workflow config\n" ] ], [ [ "Start the pipeline and accept all of the default parameters.", "_____no_output_____" ] ], [ [ "execution = pipeline.start()", "_____no_output_____" ] ], [ [ "### 3.2. Pipeline Operations: 파이프라인 실행 검사 및 대기\n\n파이프라인 실행을 확인합니다.", "_____no_output_____" ] ], [ [ "execution.describe()", "_____no_output_____" ] ], [ [ "실행이 완료될 때까지 기다리세요.", "_____no_output_____" ] ], [ [ "execution.wait()", "_____no_output_____" ] ], [ [ "실행 단계들을 나열합니다. step executor 서비스에서 처리된 파이프라인의 단계들입니다.", "_____no_output_____" ] ], [ [ "execution.list_steps()", "_____no_output_____" ] ], [ [ "### 3.3. 모델 평가 검토\n\n파이프라인이 완료된 후, 결과 모델 평가를 검토합니다. S3에서 결과 `evaluation.json` 파일을 다운로드하고 보고서를 인쇄합니다.", "_____no_output_____" ] ], [ [ "from pprint import pprint\n\n\nevaluation_json = sagemaker.s3.S3Downloader.read_file(\"{}/evaluation.json\".format(\n step_eval.arguments[\"ProcessingOutputConfig\"][\"Outputs\"][0][\"S3Output\"][\"S3Uri\"]\n))\npprint(json.loads(evaluation_json))", "{'regression_metrics': {'mse': {'standard_deviation': 2.1106958936144284,\n 'value': 4.455134079123846}}}\n" ] ], [ [ "### 3.4. Lineage\n\n파이프라인에서 생성된 아티팩트의 계보를 검토합니다.", "_____no_output_____" ] ], [ [ "import time\nfrom sagemaker.lineage.visualizer import LineageTableVisualizer\n\n\nviz = LineageTableVisualizer(sagemaker.session.Session())\nfor execution_step in reversed(execution.list_steps()):\n print(execution_step)\n display(viz.show(pipeline_execution_step=execution_step))\n time.sleep(5)", "{'StepName': 'AbaloneProcess', 'StartTime': datetime.datetime(2020, 12, 6, 9, 1, 16, 364000, tzinfo=tzlocal()), 'EndTime': datetime.datetime(2020, 12, 6, 9, 6, 27, 391000, tzinfo=tzlocal()), 'StepStatus': 'Succeeded', 'Metadata': {'ProcessingJob': {'Arn': 'arn:aws:sagemaker:us-east-1:387793684046:processing-job/pipelines-xfyoqflqx40c-abaloneprocess-zcek38fjoj'}}}\n" ] ], [ [ "### 3.5. Parametrized Executions: 파이프라인 실행에 대한 기본 매개 변수 오버라이드\n\n파이프라인의 추가 실행을 구동하고 다른 파이프라인 매개 변수를 지정할 수 있습니다. `parameters` 인수는 매개 변수 이름을 포함하는 사전(dictionary)이며 기본값을 오버라이드합니다.\n\n모델의 성능에 따라 컴퓨팅 최적화 인스턴스 유형에서 다른 파이프라인 실행을 시작하고 모델 승인 상태를 자동으로 \"Approved\"로 설정할 수 있습니다. 
즉, `RegisterModel` 단계에서 생성된 모델 패키지 버전이 SageMaker 프로젝트와 같은 CI/CD 파이프라인을 통해 자동으로 배포할 준비가 되었음을 의미합니다.", "_____no_output_____" ] ], [ [ "execution = pipeline.start(\n parameters=dict(\n ProcessingInstanceType=\"ml.c5.xlarge\",\n ModelApprovalStatus=\"Approved\",\n )\n)", "_____no_output_____" ], [ "execution.wait()", "_____no_output_____" ], [ "execution.list_steps()", "_____no_output_____" ] ], [ [ "파이프라인 실행이 완료되면 Amazon S3에서 `evaluation.json` 파일을 다운로드하여 보고서를 검토합니다.", "_____no_output_____" ] ], [ [ "evaluation_json = sagemaker.s3.S3Downloader.read_file(\"{}/evaluation.json\".format(\n step_eval.arguments[\"ProcessingOutputConfig\"][\"Outputs\"][0][\"S3Output\"][\"S3Uri\"]\n))\njson.loads(evaluation_json)", "_____no_output_____" ] ], [ [ "### 3.6. (Optional) 파이프라인 실행 중지 및 삭제\n\n파이프라인 작업을 마치면 진행 중인 실행을 중지하고 파이프라인을 삭제할 수 있습니다.", "_____no_output_____" ] ], [ [ "#execution.stop()", "_____no_output_____" ], [ "pipeline.delete()", "_____no_output_____" ] ] ]
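As a small follow-up to the model registry discussion above, the sketch below shows one way to inspect the versions that the `RegisterModel` step adds to the `AbaloneModelPackageGroupName` group and to approve the newest one with `boto3`. It is an illustrative addition, not part of the original notebook; approving a version here has the same effect as starting the pipeline with `ModelApprovalStatus="Approved"`.

```python
import boto3

# Illustrative sketch (not from the original notebook): list the model package
# versions registered by the pipeline and approve the most recent one.
sm_client = boto3.client("sagemaker")

packages = sm_client.list_model_packages(
    ModelPackageGroupName="AbaloneModelPackageGroupName",
    SortBy="CreationTime",
    SortOrder="Descending",
)["ModelPackageSummaryList"]

for package in packages:
    print(package["ModelPackageArn"], package["ModelApprovalStatus"])

if packages:
    # Approving a version marks it as ready for deployment by a CI/CD pipeline,
    # the same outcome as overriding ModelApprovalStatus at pipeline start.
    sm_client.update_model_package(
        ModelPackageArn=packages[0]["ModelPackageArn"],
        ModelApprovalStatus="Approved",
    )
```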
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
e70ed6c07e79ff2d725c71ee0f9d46ac91e23785
37,768
ipynb
Jupyter Notebook
support-notebooks/Perceptron_Intuicao.ipynb
victorsanunes/DLFS
0ce4825efcac651302faf5bd266a8736418f4590
[ "MIT" ]
null
null
null
support-notebooks/Perceptron_Intuicao.ipynb
victorsanunes/DLFS
0ce4825efcac651302faf5bd266a8736418f4590
[ "MIT" ]
null
null
null
support-notebooks/Perceptron_Intuicao.ipynb
victorsanunes/DLFS
0ce4825efcac651302faf5bd266a8736418f4590
[ "MIT" ]
null
null
null
56.793985
10,948
0.762868
[ [ [ "__Objetivos__: \n- entender como o perceptron funciona intuitivamente, tanto em regressão quanto em classificação.", "_____no_output_____" ], [ "# Sumário", "_____no_output_____" ], [ "[Regressão](#Regressão)\n\n[Classificação](#Classificação)\n- [Porta AND](#Porta-AND)\n- [Porta OR](#Porta-OR)\n- [Porta XOR](#Porta-XOR)", "_____no_output_____" ], [ "# Imports", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport ipywidgets as wg\nfrom ipywidgets import interactive, fixed\n\n%matplotlib inline\n\n# jupyter nbextension enable --py widgetsnbextension --sys-prefix\n# restart jupyter notebook", "_____no_output_____" ] ], [ [ "# Regressão ", "_____no_output_____" ] ], [ [ "df = pd.read_csv('data/medidas.csv')\nprint(df.shape)\ndf.head(10)", "(100, 2)\n" ], [ "x = df.Altura\ny = df.Peso\n\nplt.figure()\nplt.scatter(x, y)\nplt.xlabel('Altura')\nplt.ylabel('Peso')", "_____no_output_____" ], [ "def plot_line(w, b):\n plt.figure(0, figsize=(20,4))\n plt.subplot(1,3,3)\n plt.scatter(x, y)\n y_pred = x*w + b\n plt.plot(x, y_pred, c='red')\n plt.xlim(140, 210)\n plt.ylim(40, 120)\n \n plt.subplot(1,3,2)\n x_ = np.array([0, x.max()])\n y_ = x_*w + b\n plt.scatter(x, y)\n plt.plot(x_, y_, c='red')\n plt.xlim(0, 210)\n plt.ylim(-160, 120)\n \n plt.subplot(1,3,1)\n mse = np.mean((y - y_pred)**2)\n loss.append(mse)\n plt.plot(loss)\n plt.title('Loss')\n \n plt.show()", "_____no_output_____" ], [ "loss = []\n\ninteractive_plot = interactive(plot_line, w=(1, 1.5, 0.01), b=(-200, 0, 1))\noutput = interactive_plot.children[-1]\noutput.layout_height = '350px'\ninteractive_plot", "_____no_output_____" ], [ "from sklearn.linear_model import LinearRegression\n\nreg = LinearRegression()\nreg.fit(x.values.reshape(-1,1), y)\nprint(\"w: {:.2f} \\nb: {:.2f}\".format(reg.coef_[0], reg.intercept_))", "w: 1.37 \nb: -157.47\n" ] ], [ [ "# Classificação", "_____no_output_____" ] ], [ [ "def plot_line(w1, w2, b):\n x1, x2 = np.meshgrid(np.linspace(0,1,100), np.linspace(0,1,100))\n x_mesh = np.array([x1.ravel(), x2.ravel()]).T\n \n plt.figure(0, figsize=(10,4))\n plt.subplot(1,2,2)\n plt.scatter(x[:,0], x[:,1], c=y.ravel(), s=100, cmap='bwr')\n \n y_mesh = np.dot(x_mesh, np.array([w1, w2]).T) + b\n y_mesh = np.where(y_mesh <= 0, 0, 1)\n\n plt.contourf(x1, x2, y_mesh.reshape(x1.shape), cmap='bwr')\n \n y_pred = np.dot(x, np.array([w1, w2]).T) + b\n y_bin = np.where(y_pred <= 0, 0, 1)\n print('{0} => {1}'.format(y_pred, y_bin))\n \n plt.subplot(1,2,1)\n mse = np.mean((y.ravel() - y_bin)**2)\n loss.append(mse)\n plt.plot(loss)\n plt.title('Loss')\n \n plt.show()", "_____no_output_____" ] ], [ [ "### Porta AND", "_____no_output_____" ] ], [ [ "x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])\ny = np.array([[0, 0, 0, 1]]).T\n\nprint(x, y, sep='\\n')", "[[0 0]\n [0 1]\n [1 0]\n [1 1]]\n[[0]\n [0]\n [0]\n [1]]\n" ], [ "plt.scatter(x[:,0], x[:,1], c=y.ravel(), s=50, cmap='bwr')", "_____no_output_____" ], [ "loss = []\n\ninteractive_plot = interactive(plot_line, w1=(-1,1,0.01), w2=(-1,1,0.01), b=(-1.5, 1.5, 0.01))\ninteractive_plot", "_____no_output_____" ] ], [ [ "### Porta OR", "_____no_output_____" ] ], [ [ "x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])\ny = np.array([[0, 1, 1, 1]]).T\n\nprint(x, y, sep='\\n')", "[[0 0]\n [0 1]\n [1 0]\n [1 1]]\n[[0]\n [1]\n [1]\n [1]]\n" ], [ "plt.scatter(x[:,0], x[:,1], c=y.ravel(), s=50, cmap='bwr')", "_____no_output_____" ], [ "loss = []\n\ninteractive_plot = interactive(plot_line, w1=(-1,1,0.01), w2=(-1,1,0.01), b=(-1.5, 1.5, 
0.01))\ninteractive_plot", "_____no_output_____" ] ], [ [ "### Porta XOR", "_____no_output_____" ] ], [ [ "x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])\ny = np.array([[0, 1, 1, 0]]).T\n\nprint(x, y, sep='\\n')", "_____no_output_____" ], [ "plt.scatter(x[:,0], x[:,1], c=y.ravel(), s=50, cmap='bwr')", "_____no_output_____" ], [ "loss = []\n\ninteractive_plot = interactive(plot_line, w1=(-1,1,0.01), w2=(-1,1,0.01), b=(-1.5, 1.5, 0.01))\ninteractive_plot", "_____no_output_____" ] ] ]
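In the notebook above the weights w1, w2 and the bias b are only tuned by hand with sliders. As a complementary sketch (an addition here, not code from the original notebook), the snippet below applies the classic perceptron learning rule to the same AND-gate data; with the AND labels it converges in a handful of epochs, while swapping in the XOR labels keeps the error count above zero, matching the intuition that a single perceptron cannot separate XOR.

```python
import numpy as np

# Perceptron learning rule on the AND gate (a sketch, not from the original notebook).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # AND labels; try [0, 1, 1, 0] for XOR to see it fail

w = np.zeros(2)
b = 0.0
lr = 0.1

for epoch in range(20):
    errors = 0
    for xi, target in zip(X, y):
        pred = 1 if np.dot(w, xi) + b > 0 else 0
        update = lr * (target - pred)
        w += update * xi
        b += update
        errors += int(update != 0.0)
    if errors == 0:
        break

print("w =", w, "b =", b)
print("predictions:", [(1 if np.dot(w, xi) + b > 0 else 0) for xi in X])
```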
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e70eefd8a4ae2bca06cbcdce643300554b9f30d6
42,629
ipynb
Jupyter Notebook
Examples/figure_gesture_cnn.ipynb
eLeVeNnN/shinnosuke-gpu
e222da99e8a5e6c56ea7a4c094d91fe8ff9d0069
[ "MIT" ]
8
2019-08-21T02:34:39.000Z
2020-08-15T14:46:58.000Z
Examples/figure_gesture_cnn.ipynb
eLeVeNnN/shinnosuke-gpu
e222da99e8a5e6c56ea7a4c094d91fe8ff9d0069
[ "MIT" ]
1
2019-12-24T08:14:01.000Z
2019-12-24T08:14:01.000Z
Examples/figure_gesture_cnn.ipynb
eLeVeNnN/shinnosuke-gpu
e222da99e8a5e6c56ea7a4c094d91fe8ff9d0069
[ "MIT" ]
3
2019-08-09T01:32:11.000Z
2020-05-04T09:36:14.000Z
91.478541
18,300
0.709024
[ [ [ "# load dataset from shinnosuke", "_____no_output_____" ] ], [ [ "import cupy as cp\nfrom shinnosuke.datasets import figure_gesture\nfrom shinnosuke.layers.Convolution import Conv2D,MaxPooling2D\nfrom shinnosuke.layers.Activation import Activation\nfrom shinnosuke.layers.Normalization import BatchNormalization\nfrom shinnosuke.layers.FC import Flatten,Dense\nfrom shinnosuke.layers.Base import Input\nfrom shinnosuke.models import Model\nfrom shinnosuke.utils.Preprocess import to_categorical\nfrom shinnosuke.utils.Optimizers import StochasticGradientDescent", "_____no_output_____" ], [ "batch_size = 256\nnum_classes = 6\nepochs = 50", "_____no_output_____" ] ], [ [ "# load data", "_____no_output_____" ] ], [ [ "train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes=figure_gesture.load_data()\n\ntrainX=train_set_x_orig/255 \n\ntestX=test_set_x_orig/255\n\ntrainy=to_categorical(train_set_y_orig)\ntesty=to_categorical(test_set_y_orig)\n\nprint('x_train shape:',trainX.shape)\nprint('y_train shape:',trainy.shape)\nprint('x_test shape:',testX.shape)\nprint('y_test shape:',testy.shape)\n", "x_train shape: (1080, 3, 64, 64)\ny_train shape: (1080, 6)\nx_test shape: (120, 3, 64, 64)\ny_test shape: (120, 6)\n" ] ], [ [ "# show a picture of data", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n%matplotlib inline\n# rerange the data to channel last as we want to show it\nshow_img=trainX[0].transpose(1,2,0)\nplt.imshow(show_img)", "_____no_output_____" ] ], [ [ "# If use Convolutional networks in shinnosuke,remember that data fotmat must be (batch_size,channels,height,width)", "_____no_output_____" ] ], [ [ "X_input=Input(shape=(None,3,64,64))\nX=Conv2D(8,(5,5),padding='VALID',initializer='normal',activation='relu')(X_input)\nX=BatchNormalization(axis=1)(X)\nX=MaxPooling2D((4,4))(X)\nX=Conv2D(16,(3,3),padding='VALID',initializer='normal',activation='relu')(X)\nX=MaxPooling2D((4,4))(X)\nX=Flatten()(X)\nX=Dense(6,activation='softmax',initializer='normal')(X)\nmodel=Model(inputs=X_input,outputs=X)\nmodel.compile(optimizer=StochasticGradientDescent(lr=0.1),loss='sparse_categorical_cross_entropy')\nmodel.fit(trainX,trainy,batch_size=batch_size,epochs=epochs,validation_data=(testX,testy))\nscore = model.evaluate(testX, testy)\nprint('Test loss:', score[1])\nprint('Test accuracy:', score[0])", "\u001b[0;31m Epoch[1/50]\n1080/1080 [==============================>] -2s -452ms/batch -batch_loss: 1.6824 -batch_acc: 0.2143 -val_loss: 1.7985 -val_acc: 0.1667\n\u001b[0;31m Epoch[2/50]\n1080/1080 [==============================>] -2s -400ms/batch -batch_loss: 1.6428 -batch_acc: 0.4286 -val_loss: 1.7974 -val_acc: 0.1667\n\u001b[0;31m Epoch[3/50]\n1080/1080 [==============================>] -2s -347ms/batch -batch_loss: 1.6592 -batch_acc: 0.3036 -val_loss: 1.8705 -val_acc: 0.1667\n\u001b[0;31m Epoch[4/50]\n1080/1080 [==============================>] -2s -356ms/batch -batch_loss: 1.5264 -batch_acc: 0.4107 -val_loss: 1.7877 -val_acc: 0.1667\n\u001b[0;31m Epoch[5/50]\n1080/1080 [==============================>] -2s -419ms/batch -batch_loss: 1.6166 -batch_acc: 0.4107 -val_loss: 1.7490 -val_acc: 0.2333\n\u001b[0;31m Epoch[6/50]\n1080/1080 [==============================>] -2s -402ms/batch -batch_loss: 1.4577 -batch_acc: 0.3393 -val_loss: 2.4635 -val_acc: 0.1667\n\u001b[0;31m Epoch[7/50]\n1080/1080 [==============================>] -2s -381ms/batch -batch_loss: 1.2900 -batch_acc: 0.4643 -val_loss: 1.7999 -val_acc: 0.2250\n\u001b[0;31m Epoch[8/50]\n1080/1080 
[==============================>] -2s -419ms/batch -batch_loss: 1.6943 -batch_acc: 0.3571 -val_loss: 1.5102 -val_acc: 0.3583\n\u001b[0;31m Epoch[9/50]\n1080/1080 [==============================>] -2s -399ms/batch -batch_loss: 1.2475 -batch_acc: 0.5000 -val_loss: 1.3217 -val_acc: 0.5083\n\u001b[0;31m Epoch[10/50]\n1080/1080 [==============================>] -2s -365ms/batch -batch_loss: 1.4758 -batch_acc: 0.4286 -val_loss: 2.1101 -val_acc: 0.2167\n\u001b[0;31m Epoch[11/50]\n1080/1080 [==============================>] -2s -342ms/batch -batch_loss: 1.7193 -batch_acc: 0.1607 -val_loss: 3.0315 -val_acc: 0.2000\n\u001b[0;31m Epoch[12/50]\n1080/1080 [==============================>] -2s -339ms/batch -batch_loss: 1.1868 -batch_acc: 0.4643 -val_loss: 1.4093 -val_acc: 0.4750\n\u001b[0;31m Epoch[13/50]\n1080/1080 [==============================>] -2s -338ms/batch -batch_loss: 1.0691 -batch_acc: 0.7143 -val_loss: 1.6177 -val_acc: 0.3583\n\u001b[0;31m Epoch[14/50]\n1080/1080 [==============================>] -2s -338ms/batch -batch_loss: 0.9859 -batch_acc: 0.6786 -val_loss: 1.5468 -val_acc: 0.4167\n\u001b[0;31m Epoch[15/50]\n1080/1080 [==============================>] -2s -336ms/batch -batch_loss: 1.3675 -batch_acc: 0.4286 -val_loss: 1.9912 -val_acc: 0.2833\n\u001b[0;31m Epoch[16/50]\n1080/1080 [==============================>] -2s -336ms/batch -batch_loss: 1.0449 -batch_acc: 0.6071 -val_loss: 2.4149 -val_acc: 0.2917\n\u001b[0;31m Epoch[17/50]\n1080/1080 [==============================>] -2s -338ms/batch -batch_loss: 1.1349 -batch_acc: 0.5536 -val_loss: 1.8559 -val_acc: 0.4917\n\u001b[0;31m Epoch[18/50]\n1080/1080 [==============================>] -2s -413ms/batch -batch_loss: 1.1549 -batch_acc: 0.6071 -val_loss: 1.1190 -val_acc: 0.6083\n\u001b[0;31m Epoch[19/50]\n1080/1080 [==============================>] -2s -395ms/batch -batch_loss: 0.9719 -batch_acc: 0.6250 -val_loss: 3.3213 -val_acc: 0.2917\n\u001b[0;31m Epoch[20/50]\n1080/1080 [==============================>] -2s -414ms/batch -batch_loss: 1.1916 -batch_acc: 0.5000 -val_loss: 3.0144 -val_acc: 0.2750\n\u001b[0;31m Epoch[21/50]\n1080/1080 [==============================>] -2s -378ms/batch -batch_loss: 0.9888 -batch_acc: 0.6786 -val_loss: 1.2482 -val_acc: 0.5667\n\u001b[0;31m Epoch[22/50]\n1080/1080 [==============================>] -2s -401ms/batch -batch_loss: 0.8162 -batch_acc: 0.7679 -val_loss: 1.9143 -val_acc: 0.4500\n\u001b[0;31m Epoch[23/50]\n1080/1080 [==============================>] -2s -396ms/batch -batch_loss: 0.7910 -batch_acc: 0.6786 -val_loss: 1.1916 -val_acc: 0.5667\n\u001b[0;31m Epoch[24/50]\n1080/1080 [==============================>] -2s -411ms/batch -batch_loss: 0.8542 -batch_acc: 0.6786 -val_loss: 1.5116 -val_acc: 0.4333\n\u001b[0;31m Epoch[25/50]\n1080/1080 [==============================>] -2s -372ms/batch -batch_loss: 0.8654 -batch_acc: 0.7321 -val_loss: 4.0990 -val_acc: 0.2333\n\u001b[0;31m Epoch[26/50]\n1080/1080 [==============================>] -2s -345ms/batch -batch_loss: 1.1264 -batch_acc: 0.6071 -val_loss: 1.4645 -val_acc: 0.5250\n\u001b[0;31m Epoch[27/50]\n1080/1080 [==============================>] -2s -335ms/batch -batch_loss: 0.7295 -batch_acc: 0.7857 -val_loss: 0.9017 -val_acc: 0.6417\n\u001b[0;31m Epoch[28/50]\n1080/1080 [==============================>] -2s -411ms/batch -batch_loss: 0.7574 -batch_acc: 0.7143 -val_loss: 2.4740 -val_acc: 0.3917\n\u001b[0;31m Epoch[29/50]\n1080/1080 [==============================>] -2s -413ms/batch -batch_loss: 1.0254 -batch_acc: 0.7321 -val_loss: 2.4774 -val_acc: 
0.3000\n\u001b[0;31m Epoch[30/50]\n1080/1080 [==============================>] -2s -411ms/batch -batch_loss: 1.2767 -batch_acc: 0.5357 -val_loss: 1.5218 -val_acc: 0.4167\n\u001b[0;31m Epoch[31/50]\n1080/1080 [==============================>] -2s -412ms/batch -batch_loss: 0.9058 -batch_acc: 0.7321 -val_loss: 1.0185 -val_acc: 0.6250\n\u001b[0;31m Epoch[32/50]\n1080/1080 [==============================>] -2s -391ms/batch -batch_loss: 0.8514 -batch_acc: 0.7143 -val_loss: 1.6374 -val_acc: 0.4500\n\u001b[0;31m Epoch[33/50]\n1080/1080 [==============================>] -2s -392ms/batch -batch_loss: 0.7333 -batch_acc: 0.7500 -val_loss: 0.9147 -val_acc: 0.6750\n\u001b[0;31m Epoch[34/50]\n1080/1080 [==============================>] -2s -412ms/batch -batch_loss: 0.7488 -batch_acc: 0.6250 -val_loss: 2.7518 -val_acc: 0.3917\n\u001b[0;31m Epoch[35/50]\n1080/1080 [==============================>] -2s -411ms/batch -batch_loss: 0.9342 -batch_acc: 0.6607 -val_loss: 1.0114 -val_acc: 0.6167\n\u001b[0;31m Epoch[36/50]\n1080/1080 [==============================>] -2s -389ms/batch -batch_loss: 0.8444 -batch_acc: 0.6250 -val_loss: 1.0990 -val_acc: 0.6000\n\u001b[0;31m Epoch[37/50]\n1080/1080 [==============================>] -2s -356ms/batch -batch_loss: 0.6121 -batch_acc: 0.8214 -val_loss: 0.9272 -val_acc: 0.6667\n\u001b[0;31m Epoch[38/50]\n1080/1080 [==============================>] -2s -341ms/batch -batch_loss: 0.8054 -batch_acc: 0.6964 -val_loss: 1.5120 -val_acc: 0.5250\n\u001b[0;31m Epoch[39/50]\n1080/1080 [==============================>] -2s -339ms/batch -batch_loss: 0.8949 -batch_acc: 0.7143 -val_loss: 0.9750 -val_acc: 0.6250\n\u001b[0;31m Epoch[40/50]\n1080/1080 [==============================>] -2s -340ms/batch -batch_loss: 0.7070 -batch_acc: 0.7679 -val_loss: 1.7789 -val_acc: 0.4500\n\u001b[0;31m Epoch[41/50]\n1080/1080 [==============================>] -2s -417ms/batch -batch_loss: 0.5363 -batch_acc: 0.8393 -val_loss: 1.2037 -val_acc: 0.6333\n\u001b[0;31m Epoch[42/50]\n1080/1080 [==============================>] -2s -377ms/batch -batch_loss: 0.6382 -batch_acc: 0.8571 -val_loss: 2.8164 -val_acc: 0.3083\n\u001b[0;31m Epoch[43/50]\n1080/1080 [==============================>] -2s -411ms/batch -batch_loss: 0.6792 -batch_acc: 0.7321 -val_loss: 1.4654 -val_acc: 0.5000\n\u001b[0;31m Epoch[44/50]\n1080/1080 [==============================>] -2s -416ms/batch -batch_loss: 0.7480 -batch_acc: 0.7321 -val_loss: 6.1761 -val_acc: 0.2083\n\u001b[0;31m Epoch[45/50]\n1080/1080 [==============================>] -2s -385ms/batch -batch_loss: 0.5687 -batch_acc: 0.7857 -val_loss: 0.7262 -val_acc: 0.7333\n\u001b[0;31m Epoch[46/50]\n1080/1080 [==============================>] -2s -403ms/batch -batch_loss: 0.5333 -batch_acc: 0.7679 -val_loss: 1.1714 -val_acc: 0.6250\n\u001b[0;31m Epoch[47/50]\n1080/1080 [==============================>] -2s -414ms/batch -batch_loss: 1.0777 -batch_acc: 0.5714 -val_loss: 2.5837 -val_acc: 0.4083\n\u001b[0;31m Epoch[48/50]\n1080/1080 [==============================>] -2s -416ms/batch -batch_loss: 0.5450 -batch_acc: 0.8214 -val_loss: 0.7097 -val_acc: 0.7417\n\u001b[0;31m Epoch[49/50]\n1080/1080 [==============================>] -2s -395ms/batch -batch_loss: 0.5901 -batch_acc: 0.8214 -val_loss: 0.9280 -val_acc: 0.6417\n\u001b[0;31m Epoch[50/50]\n1080/1080 [==============================>] -2s -415ms/batch -batch_loss: 0.3810 -batch_acc: 0.8750 -val_loss: 0.8400 -val_acc: 0.7250\nTest loss: 0.8400094619850206\nTest accuracy: 0.725\n" ] ], [ [ "# Compare to Keras-gpu", "_____no_output_____" ] ], [ [ 
"import keras\nfrom keras.models import Sequential,Model\nfrom keras.layers import Dense, Dropout, Flatten,Input,Conv2D, MaxPooling2D,BatchNormalization,Activation", "_____no_output_____" ] ], [ [ "# Convert data to numpy array ", "_____no_output_____" ] ], [ [ "trainX=cp.asnumpy(trainX)\ntrainy=cp.asnumpy(trainy)\ntestX=cp.asnumpy(testX)\ntesty=cp.asnumpy(testy)", "_____no_output_____" ], [ "X_input=Input(shape=(3,64,64))\nX=Conv2D(8,(5,5),padding='VALID',kernel_initializer='normal',activation='relu',data_format='channels_first')(X_input)\nX=BatchNormalization(axis=1)(X)\nX=MaxPooling2D((4,4))(X)\nX=Conv2D(16,(3,3),padding='SAME',kernel_initializer='normal',activation='relu',data_format='channels_first')(X)\nX=MaxPooling2D((4,4))(X)\nX=Flatten()(X)\nX=Dense(6,kernel_initializer='normal',activation='softmax')(X)\nmodel=Model(inputs=X_input,outputs=X)\nmodel.compile(optimizer=keras.optimizers.sgd(lr=0.1),loss='categorical_crossentropy',metrics=['accuracy'])\nmodel.fit(trainX,trainy,batch_size=batch_size,epochs=epochs,validation_data=(testX,testy))\nscore = model.evaluate(testX, testy)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])", "Train on 1080 samples, validate on 120 samples\nEpoch 1/50\n1080/1080 [==============================] - 2s 1ms/step - loss: 1.7893 - acc: 0.1991 - val_loss: 2.2341 - val_acc: 0.2167\nEpoch 2/50\n1080/1080 [==============================] - 1s 582us/step - loss: 1.7418 - acc: 0.2685 - val_loss: 1.7823 - val_acc: 0.1750\nEpoch 3/50\n1080/1080 [==============================] - 1s 579us/step - loss: 1.7220 - acc: 0.2676 - val_loss: 6.9724 - val_acc: 0.1917\nEpoch 4/50\n1080/1080 [==============================] - 1s 576us/step - loss: 1.7523 - acc: 0.2963 - val_loss: 1.9221 - val_acc: 0.2750\nEpoch 5/50\n1080/1080 [==============================] - 1s 574us/step - loss: 1.6098 - acc: 0.4019 - val_loss: 13.4317 - val_acc: 0.1667\nEpoch 6/50\n1080/1080 [==============================] - 1s 576us/step - loss: 2.0057 - acc: 0.1944 - val_loss: 5.3820 - val_acc: 0.1667\nEpoch 7/50\n1080/1080 [==============================] - 1s 575us/step - loss: 1.7155 - acc: 0.2787 - val_loss: 3.1049 - val_acc: 0.1667\nEpoch 8/50\n1080/1080 [==============================] - 1s 575us/step - loss: 1.6286 - acc: 0.3611 - val_loss: 3.0411 - val_acc: 0.3083\nEpoch 9/50\n1080/1080 [==============================] - 1s 574us/step - loss: 1.5576 - acc: 0.3704 - val_loss: 3.6891 - val_acc: 0.2333\nEpoch 10/50\n1080/1080 [==============================] - 1s 572us/step - loss: 1.5150 - acc: 0.4019 - val_loss: 3.4651 - val_acc: 0.1917\nEpoch 11/50\n1080/1080 [==============================] - 1s 594us/step - loss: 1.4551 - acc: 0.4204 - val_loss: 8.0961 - val_acc: 0.1750\nEpoch 12/50\n1080/1080 [==============================] - 1s 574us/step - loss: 1.6825 - acc: 0.4630 - val_loss: 2.6860 - val_acc: 0.3250\nEpoch 13/50\n1080/1080 [==============================] - 1s 560us/step - loss: 1.3930 - acc: 0.5046 - val_loss: 3.6602 - val_acc: 0.2417\nEpoch 14/50\n1080/1080 [==============================] - 1s 564us/step - loss: 1.4434 - acc: 0.4917 - val_loss: 2.1057 - val_acc: 0.3917\nEpoch 15/50\n1080/1080 [==============================] - 1s 561us/step - loss: 1.3100 - acc: 0.4750 - val_loss: 2.1778 - val_acc: 0.3917\nEpoch 16/50\n1080/1080 [==============================] - 1s 562us/step - loss: 1.3037 - acc: 0.4991 - val_loss: 2.7823 - val_acc: 0.3833\nEpoch 17/50\n1080/1080 [==============================] - 1s 566us/step - loss: 1.3264 - acc: 0.4963 - val_loss: 2.1820 - 
val_acc: 0.4583\nEpoch 18/50\n1080/1080 [==============================] - 1s 575us/step - loss: 1.3062 - acc: 0.5185 - val_loss: 2.1134 - val_acc: 0.4250\nEpoch 19/50\n1080/1080 [==============================] - 1s 562us/step - loss: 1.2777 - acc: 0.5213 - val_loss: 1.7933 - val_acc: 0.4167\nEpoch 20/50\n1080/1080 [==============================] - 1s 562us/step - loss: 1.3256 - acc: 0.5472 - val_loss: 1.1784 - val_acc: 0.5417\nEpoch 21/50\n1080/1080 [==============================] - 1s 561us/step - loss: 1.1962 - acc: 0.5491 - val_loss: 1.1587 - val_acc: 0.6000\nEpoch 22/50\n1080/1080 [==============================] - 1s 570us/step - loss: 1.1016 - acc: 0.5731 - val_loss: 1.3375 - val_acc: 0.4417\nEpoch 23/50\n1080/1080 [==============================] - 1s 561us/step - loss: 1.1513 - acc: 0.5731 - val_loss: 1.2104 - val_acc: 0.5000\nEpoch 24/50\n1080/1080 [==============================] - 1s 565us/step - loss: 1.0779 - acc: 0.6028 - val_loss: 1.2800 - val_acc: 0.4917\nEpoch 25/50\n1080/1080 [==============================] - 1s 568us/step - loss: 1.0913 - acc: 0.5648 - val_loss: 1.4441 - val_acc: 0.4667\nEpoch 26/50\n1080/1080 [==============================] - 1s 559us/step - loss: 0.9886 - acc: 0.6407 - val_loss: 1.3369 - val_acc: 0.4333\nEpoch 27/50\n1080/1080 [==============================] - 1s 555us/step - loss: 1.0046 - acc: 0.6167 - val_loss: 1.2075 - val_acc: 0.5500\nEpoch 28/50\n1080/1080 [==============================] - 1s 559us/step - loss: 1.0517 - acc: 0.6120 - val_loss: 1.1315 - val_acc: 0.6000\nEpoch 29/50\n1080/1080 [==============================] - 1s 556us/step - loss: 0.8810 - acc: 0.6593 - val_loss: 1.4238 - val_acc: 0.3833\nEpoch 30/50\n1080/1080 [==============================] - 1s 560us/step - loss: 0.8979 - acc: 0.6676 - val_loss: 1.4343 - val_acc: 0.5083\nEpoch 31/50\n1080/1080 [==============================] - 1s 559us/step - loss: 0.9800 - acc: 0.6713 - val_loss: 1.7131 - val_acc: 0.3250\nEpoch 32/50\n1080/1080 [==============================] - 1s 558us/step - loss: 0.9322 - acc: 0.6537 - val_loss: 1.2236 - val_acc: 0.6000\nEpoch 33/50\n1080/1080 [==============================] - 1s 557us/step - loss: 0.8170 - acc: 0.6935 - val_loss: 1.4700 - val_acc: 0.4583\nEpoch 34/50\n1080/1080 [==============================] - 1s 560us/step - loss: 1.0100 - acc: 0.6481 - val_loss: 3.2682 - val_acc: 0.4083\nEpoch 35/50\n1080/1080 [==============================] - 1s 556us/step - loss: 0.7894 - acc: 0.7056 - val_loss: 3.2516 - val_acc: 0.3500\nEpoch 36/50\n1080/1080 [==============================] - 1s 563us/step - loss: 0.8123 - acc: 0.6750 - val_loss: 0.9845 - val_acc: 0.6667\nEpoch 37/50\n1080/1080 [==============================] - 1s 557us/step - loss: 0.7315 - acc: 0.7278 - val_loss: 1.2379 - val_acc: 0.6000\nEpoch 38/50\n1080/1080 [==============================] - 1s 559us/step - loss: 0.8074 - acc: 0.7065 - val_loss: 1.2145 - val_acc: 0.6083\nEpoch 39/50\n1080/1080 [==============================] - 1s 558us/step - loss: 0.8554 - acc: 0.7074 - val_loss: 1.2367 - val_acc: 0.5333\nEpoch 40/50\n1080/1080 [==============================] - 1s 558us/step - loss: 0.7774 - acc: 0.7352 - val_loss: 1.4015 - val_acc: 0.4667\nEpoch 41/50\n1080/1080 [==============================] - 1s 557us/step - loss: 0.6294 - acc: 0.7759 - val_loss: 1.3922 - val_acc: 0.5083\nEpoch 42/50\n1080/1080 [==============================] - 1s 564us/step - loss: 1.3876 - acc: 0.5907 - val_loss: 12.3663 - val_acc: 0.1917\nEpoch 43/50\n1080/1080 [==============================] - 1s 
556us/step - loss: 0.7498 - acc: 0.7398 - val_loss: 9.9578 - val_acc: 0.2750\nEpoch 44/50\n1080/1080 [==============================] - 1s 559us/step - loss: 0.9369 - acc: 0.6583 - val_loss: 10.6797 - val_acc: 0.2500\nEpoch 45/50\n1080/1080 [==============================] - 1s 561us/step - loss: 0.6902 - acc: 0.7509 - val_loss: 10.2411 - val_acc: 0.2000\nEpoch 46/50\n1080/1080 [==============================] - 1s 567us/step - loss: 0.8248 - acc: 0.6963 - val_loss: 10.5316 - val_acc: 0.2000\nEpoch 47/50\n1080/1080 [==============================] - 1s 560us/step - loss: 0.7044 - acc: 0.7509 - val_loss: 6.3822 - val_acc: 0.4167\nEpoch 48/50\n1080/1080 [==============================] - 1s 556us/step - loss: 1.1423 - acc: 0.6046 - val_loss: 5.9455 - val_acc: 0.3000\nEpoch 49/50\n1080/1080 [==============================] - 1s 557us/step - loss: 0.6565 - acc: 0.7639 - val_loss: 5.7669 - val_acc: 0.3333\nEpoch 50/50\n1080/1080 [==============================] - 1s 559us/step - loss: 0.7406 - acc: 0.7306 - val_loss: 3.0138 - val_acc: 0.4417\n120/120 [==============================] - 0s 641us/step\nTest loss: 3.0137996037801105\nTest accuracy: 0.44166666666666665\n" ] ] ], [ [ "# We can see that shinnosuke-gpu performs better than Keras-gpu on both the training and test datasets. The shared problem is speed: Keras-gpu is faster than shinnosuke-gpu, which may be because Keras is backed by CUDA and most operations run directly on the GPU, while shinnosuke-gpu is written with cupy (a Python library), so there are frequent data exchanges between the GPU and host memory that noticeably slow it down. I will try to solve this problem in the future.", "_____no_output_____" ] ] ]
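To make the transfer-overhead point above concrete, here is a rough timing sketch (an illustration added here, not part of the original notebook): it compares a matrix multiply that stays on the GPU with the same multiply forced through a device-to-host-to-device round trip via `cp.asnumpy`/`cp.asarray`. The explicit synchronize calls are needed because CuPy launches kernels asynchronously; exact numbers will vary by GPU.

```python
import time
import cupy as cp

# Rough sketch: GPU-resident matmul vs. one that round-trips through host memory.
a = cp.random.rand(2048, 2048).astype(cp.float32)
b = cp.random.rand(2048, 2048).astype(cp.float32)

def timed(fn, repeats=10):
    cp.cuda.Stream.null.synchronize()   # make sure pending kernels are finished
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    cp.cuda.Stream.null.synchronize()   # wait for the timed kernels as well
    return (time.perf_counter() - start) / repeats

def gpu_resident():
    cp.matmul(a, b)

def with_round_trip():
    a_host = cp.asnumpy(a)              # device -> host copy
    a_back = cp.asarray(a_host)         # host -> device copy
    cp.matmul(a_back, b)

print(f"GPU-resident matmul : {timed(gpu_resident) * 1e3:.2f} ms")
print(f"with host round trip: {timed(with_round_trip) * 1e3:.2f} ms")
```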
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
e70f11f516bbacd97fd015b4c98686047368c5ef
26,975
ipynb
Jupyter Notebook
version-01.ipynb
iffishells/ScrapeLinkedin
83a52c9e592dcd174e3c5bca2f1829bb5a8533f0
[ "Apache-2.0" ]
null
null
null
version-01.ipynb
iffishells/ScrapeLinkedin
83a52c9e592dcd174e3c5bca2f1829bb5a8533f0
[ "Apache-2.0" ]
null
null
null
version-01.ipynb
iffishells/ScrapeLinkedin
83a52c9e592dcd174e3c5bca2f1829bb5a8533f0
[ "Apache-2.0" ]
null
null
null
67.4375
2,042
0.615496
[ [ [ "import numpy as np\nimport pandas as pd\nfrom selenium import webdriver\nimport time\n\n# wati untill event is clickable\nfrom selenium.webdriver.support.wait import WebDriverWait \n\n# expected condition\nfrom selenium.webdriver.support import expected_conditions as EC\n\nfrom selenium.webdriver.common.by import By", "_____no_output_____" ], [ "# 1. connect to linkedin and collect posting on jobs related to terms:\n\n# - bioinformatics\n# - computational biology\n# - data science for biomedical data\n# - biomedical research analytics\n# - research data science\n# - biological data analysis\n\n# retrieve data, clean and store in a structured way\n\n# 2. structure data by category - paid/unpaid internship, - career opportunity/ job, - fresher job, - research project (for students)\n\n# list company\n# salary range/type if exists\n# link to post\n\n# end result - exportable as a csv or call by cell\n\n# 3. requirements: - location (or virtual/at home), - educational background (none, BA, MSc, PhD) - experience requirements (how many years)\n\n# 4. specific terms: specific terms we are looking for like bioinformatics, computational biology, biomedical, infectious, oncology, biotechnology, pharma\n\n# 5. link to job details\n\n# Submit working code in colab", "_____no_output_____" ], [ "class linkedin:\n \n def __init__(self,UserName,Email,Password):\n self.UserName = UserName\n self.Email = Email\n self.Password = Password\n self.JobAreaList = ['bioinformatics' , 'computational biology', 'data science for biomedical data','biomedical research analytics' ,'research data science',' biological data analysis']\n \n def login(self):\n \n driver = webdriver.Chrome('chromedriver_linux64/chromedriver')\n driver.get('https://www.linkedin.com/login?fromSignIn=true&trk=guest_homepage-basic_nav-header-signin')\n # driver.find_element_by_class_name('nav__button-secondary').click()\n \n time.sleep(2)\n \n try:\n \n username = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, \"username\")))\n username.clear()\n username.send_keys(self.Email)\n time.sleep(2)\n \n password = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, \"password\")))\n password.send_keys(self.Password)\n time.sleep(2)\n \n \n submit = driver.find_element_by_xpath(\"//button[@type ='submit']\").click()\n \n jobs = driver.get(\"https://www.linkedin.com/jobs/\")\n \n search_bars = driver.find_elements_by_class_name('jobs-search-box__text-input')\n \n \n search_keywords = search_bars[0]\n \n print(\"search_keywords : \",search_keywords)\n search_keywords.send_keys('data analyist')\n logging.info(\"Entring the keywords\")\n time.sleep(5)\n \n search_box = driver.find_element_by_class_name('jobs-search-box__input')\n print('search box ',search_box)\n \n # search_location = search_bars[1]\n # search_location.send_keys(\"Pakistan\")\n # logging.info(\"Inserting location\")\n \n # search button\n \n search = driver.find_element_by_class_name('jobs-search-box__submit-button')\n print(\"search ::>> \",search)\n \n search.click()\n logging.info(\"Pressing Search button\")\n time.sleep(10)\n finally:\n driver.quit()\n \n \nif __name__ == \"__main__\":\n \n iffishells = linkedin('iffishells' ,'[email protected]','razerblade123!@#')\n \n # login\n iffishells.login()\n \n \n \n \n \n ", "/tmp/ipykernel_249836/894814008.py:11: DeprecationWarning: executable_path has been deprecated, please pass in a Service object\n driver = 
webdriver.Chrome('chromedriver_linux64/chromedriver')\n/tmp/ipykernel_249836/894814008.py:29: DeprecationWarning: find_element_by_* commands are deprecated. Please use find_element() instead\n submit = driver.find_element_by_xpath(\"//button[@type ='submit']\").click()\n/tmp/ipykernel_249836/894814008.py:33: DeprecationWarning: find_elements_by_* commands are deprecated. Please use find_elements() instead\n search_bars = driver.find_elements_by_class_name('jobs-search-box__text-input')\n" ], [ "from selenium import webdriver\nfrom selenium.webdriver.common.keys import Keys\nfrom selenium.common.exceptions import TimeoutException, NoSuchElementException\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.common.proxy import Proxy, ProxyType\nimport time\nfrom pathlib import Path\nimport requests\nfrom bs4 import BeautifulSoup\nimport logging\nimport pickle\nimport os\n\n\nclass LinkedInBot:\n def __init__(self, delay=5):\n if not os.path.exists(\"data\"):\n os.makedirs(\"data\")\n log_fmt = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'\n logging.basicConfig(level=logging.INFO, format=log_fmt)\n self.delay=delay\n logging.info(\"Starting driver\")\n self.driver = webdriver.Chrome('chromedriver_linux64/chromedriver')\n\n def login(self, email, password):\n \"\"\"Go to linkedin and login\"\"\"\n # go to linkedin:\n logging.info(\"Logging in\")\n self.driver.maximize_window()\n self.driver.get('https://www.linkedin.com/login')\n time.sleep(self.delay)\n\n self.driver.find_element_by_id('username').send_keys(email)\n self.driver.find_element_by_id('password').send_keys(password)\n\n self.driver.find_element_by_id('password').send_keys(Keys.RETURN)\n time.sleep(self.delay)\n\n def save_cookie(self, path):\n with open(path, 'wb') as filehandler:\n pickle.dump(self.driver.get_cookies(), filehandler)\n\n def load_cookie(self, path):\n with open(path, 'rb') as cookiesfile:\n cookies = pickle.load(cookiesfile)\n for cookie in cookies:\n self.driver.add_cookie(cookie)\n\n def search_linkedin(self, keywords, location):\n \"\"\"Enter keywords into search bar\n \"\"\"\n logging.info(\"Searching jobs page\")\n self.driver.get(\"https://www.linkedin.com/jobs/\")\n # search based on keywords and location and hit enter\n self.wait_for_element_ready(By.CLASS_NAME, 'jobs-search-box__text-input')\n time.sleep(self.delay)\n search_bars = self.driver.find_elements_by_class_name('jobs-search-box__text-input')\n search_keywords = search_bars[0]\n search_keywords.send_keys(keywords)\n \n \n # search_location = search_bars[2]\n # search_location.send_keys(location)\n # time.sleep(self.delay)\n search_keywords.send_keys(Keys.RETURN)\n logging.info(\"Keyword search successful\")\n time.sleep(self.delay)\n \n def wait(self, t_delay=None):\n \"\"\"Just easier to build this in here.\n Parameters\n ----------\n t_delay [optional] : int\n seconds to wait.\n \"\"\"\n delay = self.delay if t_delay == None else t_delay\n time.sleep(delay)\n\n def scroll_to(self, job_list_item):\n \"\"\"Just a function that will scroll to the list item in the column \n \"\"\"\n self.driver.execute_script(\"arguments[0].scrollIntoView();\", job_list_item)\n job_list_item.click()\n time.sleep(self.delay)\n \n def get_position_data(self, job):\n \"\"\"Gets the position data for a posting.\n Parameters\n ----------\n job : Selenium webelement\n Returns\n -------\n list of strings : [position, company, 
location, details]\n \"\"\"\n [position, company, location] = job.text.split('\\n')[:3]\n details = self.driver.find_element_by_id(\"job-details\").text\n return [position, company, location, details]\n\n def wait_for_element_ready(self, by, text):\n try:\n WebDriverWait(self.driver, self.delay).until(EC.presence_of_element_located((by, text)))\n except TimeoutException:\n logging.debug(\"wait_for_element_ready TimeoutException\")\n pass\n\n def close_session(self):\n \"\"\"This function closes the actual session\"\"\"\n logging.info(\"Closing session\")\n self.driver.close()\n\n def run(self, email, password, keywords, location):\n if os.path.exists(\"data/cookies.txt\"):\n self.driver.get(\"https://www.linkedin.com/\")\n self.load_cookie(\"data/cookies.txt\")\n self.driver.get(\"https://www.linkedin.com/\")\n else:\n self.login(\n email=email,\n password=password\n )\n self.save_cookie(\"data/cookies.txt\")\n\n logging.info(\"Begin linkedin keyword search\")\n self.search_linkedin(keywords, location)\n self.wait()\n\n # scrape pages,only do first 8 pages since after that the data isn't \n # well suited for me anyways: \n for page in range(2, 8):\n # get the jobs list items to scroll through:\n jobs = self.driver.find_elements_by_class_name(\"occludable-update\")\n for job in jobs:\n self.scroll_to(job)\n [position, company, location, details] = self.get_position_data(job)\n\n # do something with the data...\n\n # go to next page:\n bot.driver.find_element_by_xpath(f\"//button[@aria-label='Page {page}']\").click()\n bot.wait()\n logging.info(\"Done scraping.\")\n logging.info(\"Closing DB connection.\")\n bot.close_session()\n\n\nif __name__ == \"__main__\":\n email = \"[email protected]\"\n password = 'razerblade123!@#'\n bot = LinkedInBot()\n bot.run(email, password, \"Data Scientist\", \"Pakistan\")\n", "2022-03-14 07:38:46,165 - root - INFO - Starting driver\n/tmp/ipykernel_536046/1342474180.py:25: DeprecationWarning: executable_path has been deprecated, please pass in a Service object\n self.driver = webdriver.Chrome('chromedriver_linux64/chromedriver')\n2022-03-14 07:38:58,939 - root - INFO - Begin linkedin keyword search\n2022-03-14 07:38:58,941 - root - INFO - Searching jobs page\n/tmp/ipykernel_536046/1342474180.py:59: DeprecationWarning: find_elements_by_* commands are deprecated. Please use find_elements() instead\n search_bars = self.driver.find_elements_by_class_name('jobs-search-box__text-input')\n2022-03-14 07:39:05,828 - root - INFO - Keyword search successful\n/tmp/ipykernel_536046/1342474180.py:131: DeprecationWarning: find_elements_by_* commands are deprecated. Please use find_elements() instead\n jobs = self.driver.find_elements_by_class_name(\"occludable-update\")\n/tmp/ipykernel_536046/1342474180.py:139: DeprecationWarning: find_element_by_* commands are deprecated. Please use find_element() instead\n bot.driver.find_element_by_xpath(f\"//button[@aria-label='Page {page}']\").click()\n" ] ] ]
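The `run` loop above stops at a `# do something with the data...` placeholder, while the task notes at the top of the notebook ask for results exportable as a CSV. Below is one simple way to fill that gap, sketched under the assumption that postings are collected in memory and written once after scraping; the column names and output path are illustrative choices, not part of the original code.

```python
import pandas as pd

records = []  # filled with one dict per scraped posting

def store_position(position, company, location, details):
    # Called in place of the "do something with the data" placeholder inside run().
    records.append({
        "position": position,
        "company": company,
        "location": location,
        "details": details,
    })

def export_csv(path="data/postings.csv"):
    # One row per posting; keep details as raw text so keyword filters
    # (e.g. bioinformatics, computational biology, biomedical) can run later.
    df = pd.DataFrame(records)
    df = df.drop_duplicates(subset=["position", "company", "location"])
    df.to_csv(path, index=False)
    return df
```

Inside `run`, the placeholder line would become `store_position(position, company, location, details)`, with `export_csv()` called once after the page loop finishes.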
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
e70f11f5c04c6ac9f6312685cf9b9c27ef842877
612,317
ipynb
Jupyter Notebook
Module 2/Chapter01/Chapter 1_prep_untidy.ipynb
PacktPublishing/-Numerical-Computing-with-Python
b0cc82511ebc2143eb190cb60df482f45468b267
[ "MIT" ]
7
2019-03-09T05:44:23.000Z
2021-11-29T20:49:22.000Z
Module 2/Chapter01/Chapter 1_prep_untidy.ipynb
PacktPublishing/-Numerical-Computing-with-Python
b0cc82511ebc2143eb190cb60df482f45468b267
[ "MIT" ]
null
null
null
Module 2/Chapter01/Chapter 1_prep_untidy.ipynb
PacktPublishing/-Numerical-Computing-with-Python
b0cc82511ebc2143eb190cb60df482f45468b267
[ "MIT" ]
3
2018-12-20T12:52:41.000Z
2021-06-21T04:35:50.000Z
159.832159
85,346
0.828894
[ [ [ "%matplotlib inline", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "print('Hello Plots!')", "Hello Plots!\n" ], [ "N = 50\nx = np.random.rand(N)\ny = np.random.rand(N)\ncolors = np.random.rand(N)\narea = np.pi * (20 * np.random.rand(N))**2\n\nplt.scatter(x, y, s=area, c=colors, alpha=np.random.rand()*0.5)\nplt.show()", "_____no_output_____" ], [ "import matplotlib as mpl", "_____no_output_____" ], [ "mpl.style.available", "_____no_output_____" ], [ "N = 30\n\nnp.random.seed(42)\nx1 = np.random.rand(N)\ny1 = np.random.rand(N)\n\nnp.random.seed(24)\nx2 = np.random.rand(N)\ny2 = np.random.rand(N)\n\nplt.scatter(x1, y1)\nplt.scatter(x2, y2)\nplt.show()", "_____no_output_____" ], [ "N = 30\nmpl.style.use('classic')\nnp.random.seed(42)\nx1 = np.random.rand(N)\ny1 = np.random.rand(N)\n\n\nnp.random.seed(24)\nx2 = np.random.rand(N)\ny2 = np.random.rand(N)\n\nplt.scatter(x1, y1,label='a')\nplt.scatter(x2, y2,label='b')\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "mpl.rcParams.update(mpl.rcParamsDefault)\nN = 30\nnp.random.seed(42)\nx1 = np.random.rand(N)\ny1 = np.random.rand(N)\n\n\nnp.random.seed(24)\nx2 = np.random.rand(N)\ny2 = np.random.rand(N)\n\nplt.scatter(x1, y1,label='a')\nplt.scatter(x2, y2,label='b')\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt\n\nN = M = 200\nX, Y = np.ogrid[0:20:N*1j, 0:20:M*1j]\ndata = np.sin(np.pi * X*2 / 20) * np.cos(np.pi * Y*2 / 20)\n\nfig, (ax2, ax1) = plt.subplots(1, 2, figsize=(7, 3))\nim = ax1.imshow(data, extent=[0, 200, 0, 200])\nax1.set_title(\"v2.0: 'viridis'\")\nfig.colorbar(im, ax=ax1, shrink=0.8)\n\nim2 = ax2.imshow(data, extent=[0, 200, 0, 200], cmap='jet')\nfig.colorbar(im2, ax=ax2, shrink=0.8)\nax2.set_title(\"classic: 'jet'\")\n\nfig.tight_layout()", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt\n\nN = M = 200\nX, Y = np.ogrid[0:20:N*1j, 0:20:M*1j]\ndata = np.sin(np.pi * X*2 / 2000) * np.cos(np.pi * Y*2 / 10)\n\nfig, (ax2, ax1) = plt.subplots(1, 2, figsize=(7, 3))\nim = ax1.imshow(data, extent=[0, 200, 0, 200])\nax1.set_title(\"v2.0: 'viridis'\")\nfig.colorbar(im, ax=ax1, shrink=0.8)\n\nim2 = ax2.imshow(data, extent=[0, 200, 0, 200], cmap='jet')\nfig.colorbar(im2, ax=ax2, shrink=0.8)\nax2.set_title(\"classic: 'jet'\")\n\n#fig.tight_layout()\nplt.tight_layout()\n#plt.show()", "_____no_output_____" ], [ "list(mpl.rcParams['axes.prop_cycle'])", "_____no_output_____" ], [ "'#1f77b4'", "_____no_output_____" ], [ "import os, sys\nos.path.dirname(sys.executable)", "_____no_output_____" ], [ "# Code source: Gaël Varoquaux\n# Modified for documentation by Jaques Grobler\n# License: BSD 3 clause\n\n\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom sklearn import datasets\nfrom sklearn.decomposition import PCA\n\n# import some data to play with\niris = datasets.load_iris()\nX = iris.data[:, :2] # we only take the first two features.\nY = iris.target\n\nx_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5\ny_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5\n\nplt.figure(2, figsize=(8, 6))\nplt.clf()\n\n# Plot the training points\nplt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)\nplt.xlabel('Sepal length')\nplt.ylabel('Sepal width')\n\nplt.xlim(x_min, x_max)\nplt.ylim(y_min, y_max)\nplt.xticks(())\nplt.yticks(())\n\n# To getter a better understanding of interaction of the dimensions\n# plot the first three PCA dimensions\nfig = plt.figure(1, 
figsize=(8, 6))\nax = Axes3D(fig, elev=-150, azim=110)\nX_reduced = PCA(n_components=3).fit_transform(iris.data)\nax.scatter(X_reduced[:, 0], X_reduced[:, 1], X_reduced[:, 2], c=Y,\n cmap=plt.cm.Paired)\nax.set_title(\"First three PCA directions\")\nax.set_xlabel(\"1st eigenvector\")\nax.w_xaxis.set_ticklabels([])\nax.set_ylabel(\"2nd eigenvector\")\nax.w_yaxis.set_ticklabels([])\nax.set_zlabel(\"3rd eigenvector\")\nax.w_zaxis.set_ticklabels([])\n\nplt.show()", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n# Set the random seed for consistency\nnp.random.seed(12)\n\nfig, ax = plt.subplots(1)\n\n# Show the whole color range\nfor i in range(8):\n x = np.random.normal(loc=i, size=1000)\n y = np.random.normal(loc=i, size=1000)\n ax.scatter(x, y, label=str(i))\nax.legend()\n", "_____no_output_____" ], [ "# https://software-carpentry.org/blog/2012/05/an-exercise-with-matplotlib-and-numpy.html\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom datetime import datetime\nimport os\n\nevent_types = ['Rain', 'Thunderstorm', 'Snow', 'Fog']\nnum_events = len(event_types)\n\ndef event2int(event):\n return event_types.index(event)\n\ndef date2int(date_str):\n date = datetime.strptime(date_str, '%Y-%m-%d')\n return date.toordinal()\n\ndef r_squared(actual, ideal):\n actual_mean = np.mean(actual)\n ideal_dev = np.sum([(val - actual_mean)**2 for val in ideal])\n actual_dev = np.sum([(val - actual_mean)**2 for val in actual])\n\n return ideal_dev / actual_dev\n\ndef read_weather(file_name):\n dtypes = np.dtype({ 'names' : ('timestamp', 'max temp', 'mean temp', 'min temp', 'events'),\n 'formats' : [np.int, np.float, np.float, np.float, 'S100'] })\n\n data = np.loadtxt(file_name, delimiter=',', skiprows=1,\n converters = { 0 : date2int },\n usecols=(0,1,2,3,21), dtype=dtypes)\n\n return data\ndef temp_plot(dates, mean_temps):\n\n year_start = datetime(2012, 1, 1)\n days = [(d - year_start).days + 1 for d in dates]\n\n fig = pyplot.figure()\n pyplot.title('Temperatures in Bloomington 2012')\n pyplot.ylabel('Mean Temperature (F)')\n pyplot.xlabel('Day of Year')\n\n pyplot.plot(days, mean_temps, marker='o')\n\n return fig", "_____no_output_____" ], [ "data = read_weather('data/weather.csv')\nmin_temps = data['min temp']\nmean_temps = data['mean temp']\nmax_temps = data['max temp']\ndates = [datetime.fromordinal(d) for d in data['timestamp']]\nevents = data['events']\n\nif not os.path.exists('plots'):\n os.mkdir('plots')\n\nfig = temp_plot(dates, mean_temps)\nfig.savefig('plots/day_vs_temp.png')", "_____no_output_____" ], [ "import matplotlib as mpl\nmpl.rcParams.update(mpl.rcParamsDefault)\na = np.arange(20)\nplt.plot(a,a**2,label = 'log')\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "mpl.style.use('classic')\na = np.arange(20)\nplt.plot(a,a**2,label = 'log')\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\nplt.plot([1,2,3,4])\nplt.ylabel('some numbers')\nplt.show()", "_____no_output_____" ], [ "# facebook emoji copied from http://emojipedia.org/facebook/\nimport matplotlib.pyplot as plt\nmpl.rcParams.update(mpl.rcParamsDefault)\n\nfig, ax = plt.subplots()\ntick_labels = [ '❤️', '😂', '😯', '😢', '😡'] # cannot show '👍'\n\ny = [24,12,16,2,1]\nx = range(5)\nax.bar(x, y, tick_label=tick_labels, align='center',facecolor='#3b5998')\nax.xaxis.set_tick_params(labelsize=36)\n\nax.set_title('эмоции-реакции в Facebook')", "_____no_output_____" ], [ "# facebook emoji copied from http://emojipedia.org/facebook/\nimport matplotlib.pyplot as 
plt\n\nmpl.style.use('classic')\nfig, ax = plt.subplots()\ntick_labels = ['👍', '❤️', '😂', '😯', '😢', '😡']\n\ny = [42,24,12,16,2,1]\nx = range(6)\nax.bar(x, y, tick_label=tick_labels, align='center',facecolor='#3b5998')\nax.xaxis.set_tick_params(labelsize=20)\n\nax.set_title('эмоции-реакции в Facebook')\nplt.show()\n", "_____no_output_____" ], [ "mpl.font_manager.findSystemFonts(fontpaths=None, fontext='ttf')", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\nimport numpy as np\nfrom matplotlib import rc_context\nimport matplotlib.patches as mpatches\n\nfig, all_ax = plt.subplots(3, 2, figsize=(4, 6), tight_layout=True)\n\ndef demo(ax_top, ax_mid, ax_bottom, rcparams, label):\n labels = 'Frogs', 'Hogs', 'Dogs', 'Logs'\n fracs = [15, 30, 45, 10]\n\n explode = (0, 0.05, 0, 0)\n\n ax_top.set_title(label)\n\n with rc_context(rc=rcparams):\n ax_top.pie(fracs, labels=labels)\n ax_top.set_aspect('equal')\n ax_mid.bar(range(len(fracs)), fracs, tick_label=labels)\n plt.setp(ax_mid.get_xticklabels(), rotation=-45)\n grid = np.mgrid[0.2:0.8:3j, 0.2:0.8:3j].reshape(2, -1).T\n\n ax_bottom.set_xlim(0, .75)\n ax_bottom.set_ylim(0, .75)\n ax_bottom.add_artist(mpatches.Rectangle(grid[1] - [0.025, 0.05],\n 0.05, 0.1))\n ax_bottom.add_artist(mpatches.RegularPolygon(grid[3], 5, 0.1))\n ax_bottom.add_artist(mpatches.Ellipse(grid[4], 0.2, 0.1))\n ax_bottom.add_artist(mpatches.Circle(grid[0], 0.1))\n ax_bottom.axis('off')\n\ndemo(*all_ax[:, 0], rcparams={'patch.force_edgecolor': True,\n 'patch.facecolor': 'b'}, label='classic')\ndemo(*all_ax[:, 1], rcparams={}, label='v2.0')", "c:\\users\\claire\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\matplotlib\\figure.py:1742: UserWarning: This figure includes Axes that are not compatible with tight_layout, so its results might be incorrect.\n warnings.warn(\"This figure includes Axes that are not \"\n" ], [ "import matplotlib.pyplot as plt\nimport numpy as np\nfrom matplotlib import rc_context\nimport matplotlib.patches as mpatches\n\nfig, all_ax = plt.subplots(3, 2, figsize=(4, 6), tight_layout=True)\n\ndef demo(ax_top, ax_mid, ax_bottom, rcparams, label):\n labels = ['Like','Love','Haha','Wow','Sad','Angry']\n rxns = [42,24,12,16,4,2]\n total = sum(rxns)\n fracs = [x/total for x in rxns]\n\n explode = (0, 0.05, 0, 0)\n\n ax_top.set_title(label)\n\n with rc_context(rc=rcparams):\n ax_top.pie(fracs, labels=labels)\n ax_top.set_aspect('equal')\n ax_mid.bar(range(len(fracs)), fracs, tick_label=labels)\n plt.setp(ax_mid.get_xticklabels(), rotation=-45)\n grid = np.mgrid[0.2:0.8:3j, 0.2:0.8:3j].reshape(2, -1).T\n\n ax_bottom.set_xlim(0, .75)\n ax_bottom.set_ylim(0, .75)\n ax_bottom.add_artist(mpatches.Rectangle(grid[1] - [0.025, 0.05],0.05, 0.1))\n ax_bottom.add_artist(mpatches.RegularPolygon(grid[3], 5, 0.1))\n ax_bottom.add_artist(mpatches.Ellipse(grid[4], 0.2, 0.1))\n ax_bottom.add_artist(mpatches.Circle(grid[0], 0.1))\n ax_bottom.axis('off')\n\ndemo(*all_ax[:, 0], rcparams={'patch.force_edgecolor': True,\n 'patch.facecolor': 'b'}, label='classic')\ndemo(*all_ax[:, 1], rcparams={}, label='v2.0')", "c:\\users\\claire\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\matplotlib\\figure.py:1742: UserWarning: This figure includes Axes that are not compatible with tight_layout, so its results might be incorrect.\n warnings.warn(\"This figure includes Axes that are not \"\n" ], [ "import matplotlib.pyplot as plt\nimport numpy as np\nfrom matplotlib import rc_context\nimport matplotlib.patches as mpatches\n\nfig, all_ax = 
plt.subplots(1, 2, figsize=(4, 2), tight_layout=True)\n\ndef demo(ax_top, rcparams, label):\n labels = ['Like','Love','Haha','Wow','Sad','Angry']\n rxns = [42,24,12,16,4,2]\n total = sum(rxns)\n fracs = [x/total for x in rxns]\n colors = [\"#3366cc\", \"#dc3912\", \"#ff9900\", \"#109618\", \"#990099\", \"#0099c6\", \"#dd4477\", \"#66aa00\", \"#b82e2e\", \"#316395\", \"#994499\", \"#22aa99\", \"#aaaa11\", \"#6633cc\", \"#e67300\", \"#8b0707\", \"#651067\", \"#329262\", \"#5574a6\", \"#3b3eac\"]\n\n explode = (0, 0.05, 0, 0)\n\n ax_top.set_title(label)\n\n with rc_context(rc=rcparams):\n ax_top.pie(fracs, labels=labels,colors=colors)\n ax_top.set_aspect('equal')\n grid = np.mgrid[0.2:0.8:3j, 0.2:0.8:3j].reshape(2, -1).T\n\ndemo(all_ax[0], rcparams={'patch.force_edgecolor': True,\n 'patch.facecolor': 'b'}, label='classic')\ndemo(all_ax[1], rcparams={}, label='v2.0')", "c:\\users\\claire\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\matplotlib\\figure.py:1742: UserWarning: This figure includes Axes that are not compatible with tight_layout, so its results might be incorrect.\n warnings.warn(\"This figure includes Axes that are not \"\n" ], [ "# https://www.quora.com/How-can-I-draw-a-heart-using-Python\nimport matplotlib.pyplot as plt\nimport numpy as np\nt = np.arange(0,2*np.pi, 0.1)\nx = 16*np.sin(t)**3\ny = 13*np.cos(t)-5*np.cos(2*t)-2*np.cos(3*t)-np.cos(4*t)\nplt.plot(x,y)\nplt.savefig('heart.png',dpi=900)\nplt.show()", "_____no_output_____" ], [ "# http://telliott99.blogspot.hk/2011/02/plotting-taylor-series-for-sine.html\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize=(3,2))\nX = np.linspace(-np.pi, np.pi, 300)\nC,S = np.cos(X), np.sin(X)\n\nplt.plot(X,C)\nplt.plot(X,S)\n\nplt.show()", "_____no_output_____" ], [ "%matplotlib notebook", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize=(3,2))\nX = np.linspace(-np.pi, np.pi, 300)\nC,S = np.cos(X), np.sin(X)\n\nplt.plot(X,C)\nplt.plot(X,S)\n\nplt.show()", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize=(3,2))\nX = np.linspace(-np.pi, np.pi, 300)\nC,S = np.cos(X), np.sin(X)\n\nplt.plot(X,C)\nplt.plot(X,S)\n\nplt.show()", "_____no_output_____" ], [ "np.linspace(2,10,5,dtype=np.int32)", "_____no_output_____" ], [ "np.arange(2,11,2,dtype=np.int32)", "_____no_output_____" ], [ "evens = [2,4,6,8,10]\nnp.array(evens)", "_____no_output_____" ], [ "import pandas as pd\npd.DataFrame(evens)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e70f1b54cb51984dcc45d8b08601a65e7b55bfc2
6,629
ipynb
Jupyter Notebook
Custom_TF_framework/main_MNIST.ipynb
OH-Seoyoung/ML_Optimization_Methods
7afdd4912e9b4821b266539651410b394d197de4
[ "MIT" ]
1
2021-11-22T03:32:55.000Z
2021-11-22T03:32:55.000Z
Custom_TF_framework/main_MNIST.ipynb
OH-Seoyoung/ML_Optimization_Methods
7afdd4912e9b4821b266539651410b394d197de4
[ "MIT" ]
null
null
null
Custom_TF_framework/main_MNIST.ipynb
OH-Seoyoung/ML_Optimization_Methods
7afdd4912e9b4821b266539651410b394d197de4
[ "MIT" ]
null
null
null
29.59375
144
0.537487
[ [ [ "## 0. Import Packages", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pylab as plt\nimport tensorflow as tf\nimport keras\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Flatten\nfrom keras.layers.convolutional import Conv2D, MaxPooling2D\nfrom keras.models import load_model\nfrom tensorflow.keras.optimizers import Adam\n\n# import tensorflow.compat.v1 as tf\n# tf.disable_v2_behavior() ", "_____no_output_____" ] ], [ [ "## 1. Make dataset", "_____no_output_____" ] ], [ [ "# Download the mnist dataset using keras\ndata_train, data_test = tf.keras.datasets.mnist.load_data()\n\n# Parse images and labels\n(images_train, labels_train) = data_train\n(images_test, labels_test) = data_test", "_____no_output_____" ], [ "x_train = images_train.reshape(60000, 28, 28, 1)\nx_test = images_test.reshape(10000, 28, 28, 1)\ny_train = labels_train\ny_test = labels_test", "_____no_output_____" ] ], [ [ "## 2. Modeling", "_____no_output_____" ] ], [ [ "batch_size = 512\nnum_classes = 10\nepochs = 1", "_____no_output_____" ], [ "model = Sequential()\nmodel.add(Conv2D(32, kernel_size=(5, 5), strides=(1, 1), padding='same',\n activation='relu',\n input_shape=(28,28,1)))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(Conv2D(64, (5, 5), activation='relu', padding='same'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(Dropout(0.25))\nmodel.add(Flatten())\nmodel.add(Dense(1024, activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(num_classes, activation='softmax'))\nmodel.summary()", "Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_2 (Conv2D) (None, 28, 28, 32) 832 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 14, 14, 32) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 14, 14, 64) 51264 \n_________________________________________________________________\nmax_pooling2d_3 (MaxPooling2 (None, 7, 7, 64) 0 \n_________________________________________________________________\ndropout_2 (Dropout) (None, 7, 7, 64) 0 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 3136) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 1024) 3212288 \n_________________________________________________________________\ndropout_3 (Dropout) (None, 1024) 0 \n_________________________________________________________________\ndense_3 (Dense) (None, 10) 10250 \n=================================================================\nTotal params: 3,274,634\nTrainable params: 3,274,634\nNon-trainable params: 0\n_________________________________________________________________\n" ] ], [ [ "- keras framework", "_____no_output_____" ] ], [ [ "model.compile(loss = keras.losses.sparse_categorical_crossentropy, optimizer = Adam(learning_rate = 0.0001), metrics = ['accuracy'])\nhist = model.fit(x_train, y_train,\n epochs = epochs,\n batch_size = batch_size,\n verbose = 1, \n validation_data=(x_test, y_test))", "118/118 [==============================] - 100s 845ms/step - loss: 3.3267 - accuracy: 0.7644 - val_loss: 0.1672 - val_accuracy: 0.9565\n" ] ], [ [ "- keras custom framework", "_____no_output_____" ] ], [ [ "model.compile(loss='sparse_categorical_crossentropy', optimizer = custom_Adam, metrics=['accuracy'])\nhist = 
model.fit(x_train, y_train,\n epochs = epochs,\n batch_size = batch_size,\n verbose = 1, \n validation_data=(x_test, y_test))", "_____no_output_____" ], [ "score1 = model.evaluate(x_train, y_train, verbose = 0)\nscore2 = model.evaluate(x_test, y_test, verbose = 0)\nprint('Train accuracy:', score1[1])\nprint('Test accuracy:', score2[1])", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
e70f1eea4597e23e0e9caac2af8e66a7dc8f944b
52,421
ipynb
Jupyter Notebook
Inteligência Artificial/Natural Language Processing (NLP)/1. Introdução ao NLP e Análise de Sentimento/1.5 Gerando matrizes e visualizando word frequencies.ipynb
dantebarross/vamos-programar
d9a14fadda5d2109db75ff46c3e8742a169d5186
[ "MIT" ]
2
2021-01-14T22:41:11.000Z
2021-01-27T21:41:25.000Z
Inteligência Artificial/Natural Language Processing (NLP)/1. Introdução ao NLP e Análise de Sentimento/.ipynb_checkpoints/1.5 Gerando matrizes e visualizando word frequencies-checkpoint.ipynb
dantebarross/vamos-programar
d9a14fadda5d2109db75ff46c3e8742a169d5186
[ "MIT" ]
null
null
null
Inteligência Artificial/Natural Language Processing (NLP)/1. Introdução ao NLP e Análise de Sentimento/.ipynb_checkpoints/1.5 Gerando matrizes e visualizando word frequencies-checkpoint.ipynb
dantebarross/vamos-programar
d9a14fadda5d2109db75ff46c3e8742a169d5186
[ "MIT" ]
null
null
null
107.420082
34,852
0.832472
[ [ [ "# 1.5 Gerando matrizes e visualizando word frequencies\n### 1.5.1 Retomando\nNos guias anteriores, vimos como são feitas as matrizes de sentenças. Cada um dos tweets tinha três parâmetros correspondentes, o bias, a soma das frequências positivas e a soma das frequências negativas (exemplo: [1, 4, 2]).\n\nExemplificando, a matriz de uma lista de sentenças então ficaria de modo semelhante a esse: \n[[1, 3, 5],\n[1, 0, 2],\n[1, 6, 5]]\n\nCada linha corresponde aos três features de cada sentença, porém o que separa as linhas é a vírgula. Sendo assim, é apensa uma questão de organização visual a linearização da matriz.\n\nResumindo, o que vamos ver agora depende da função build_freqs e da função process_tweet, ambas se encontram no arquivo **utils.py**.\n```\nfreqs = build_freqs(tweets,labels) # Cria o dicionário de frequências usando a função build_freqs()\nX = np.zeros((m,3)) # Matriz \"x\" inicia zerada e possui três parâmetros [0, 0, 0]\nfor i in range(m): Para cada tweet...\n p_tweet = process_tweet(tweets[i]) # Pré-processar (tokenização, stop words, stem, etc)\n x[i,:] = extract_features(p_tweet,freqs) # Extrai as features somando as frequências positivas e negativas\n```\nVamos importar as funções do utils.py para nos auxiliar no cálculo das **word frequencies**, e então visualizar isso no _dataset_.\n\n### 1.5.2 Carregando bibliotecas, funções e _dataset_", "_____no_output_____" ] ], [ [ "import nltk # Biblioteca de NLP\nfrom nltk.corpus import twitter_samples # Corpus de tweets\nimport matplotlib.pyplot as plt # Visualização gráfica\nimport numpy as np # Biblioteca de Ciência da Computação e operações de matriz\nnltk.download('stopwords') # Para a função process_tweet\n\n# import our convenience functions\nfrom utils import process_tweet, build_freqs # O utils.py, bem como suas funções, está dentro dessa mesma pasta. \n\n# select the lists of positive and negative tweets\nall_positive_tweets = twitter_samples.strings('positive_tweets.json')\nall_negative_tweets = twitter_samples.strings('negative_tweets.json')\n\n# concatenate the lists, 1st part is the positive tweets followed by the negative\ntweets = all_positive_tweets + all_negative_tweets\n\n# let's see how many tweets we have\nprint(\"Number of tweets: \", len(tweets))", "[nltk_data] Downloading package stopwords to\n[nltk_data] C:\\Users\\Dante\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package stopwords is already up-to-date!\n" ] ], [ [ "Por fim, teremos um array etiquetado. Ele pode ser entendido como uma lista comum, porém é otimizado para atividades computacionais e manipulação.\n\nNossas 10.000 sentenças são dividas em 5.000 positivas e 5.000 negativas, que são preenchidas respectivamentes por '1' e '0'. Algumas operações úteis da biblioteca **numpy** nos auxiliam a realizar tais alterações.\n- ``p.ones()`` cria um array '1';\n- ``np.zeros()`` cria um array '0';\n- ``np.append()`` concatena arrays.", "_____no_output_____" ] ], [ [ "# array NumPy representando o número de cada label dos tweets\nlabels = np.append(np.ones((len(all_positive_tweets))), np.zeros((len(all_negative_tweets))))", "_____no_output_____" ] ], [ [ "### 1.5.3 Dicionários\nPodemos criar **dicionários** em Python. Eles são coleções mutáveis e indexadas, e itens como pares de valor-chave. Para que consultas sejam feitas em tempo constante, os dicionários utilizam o _data structure_ **hash table** (https://en.wikipedia.org/wiki/Hash_table).\n\nOs dicionários são muito utilizados no NLP exatamente por conta do resgate rápido de itens. 
Vamos definir um dicionário utilizando chaves (brackets):", "_____no_output_____" ] ], [ [ "dicionario = {'chave1': 1, 'chave2' : 2}", "_____no_output_____" ] ], [ [ "A variável **dicionario** agora está relacionada a um dicionário com duas entradas. Cada entrada possui uma **chave** e um **valor**. Podemos utilizar quaisquer tipos de valor nas entradas (float, int, tuple, string).\n\nPara **adicionar e modificar entradas**, chamamos a variável e utilizamos formato de colchete para indicar a chave a ser adicionada/alterada.", "_____no_output_____" ] ], [ [ "dicionario['chave3'] = 3\ndicionario['chave1'] = 4\nprint(dicionario)", "{'chave1': 4, 'chave2': 2, 'chave3': 3}\n" ] ], [ [ "Para **acessar valores e chaves**, podemos utilizar dois métodos:\n1. Utilizando o colchete (square bracket) para valores que existem;\n2. Utilizando o método **.get()**.", "_____no_output_____" ] ], [ [ "print(dicionario['chave3'])", "3\n" ], [ "print(dicionario['chave4']) # A chave não existe, então retornará 'KeyError'", "_____no_output_____" ] ], [ [ "Utilizando o **if** e **else** ou o método **.get()**, já não existe erro pois apresentamos um valor padrão se a chave não for encontrada.", "_____no_output_____" ] ], [ [ "# Nesses dois casos, a chave será encontrada\nif 'chave1' in dicionario:\n print('Achei!', dicionario['chave1'])\nelse:\n print('Essa chave ainda não foi definida')\n\nprint('Item encontrado!:', dicionario.get('chave1', 'não existe'))\n\n\n# Nesse caso, retornou 'não existe' pois não foi encontrado \nprint('Pesquisando item que não existe:', dicionario.get('chave4', 'não existe')) ", "_____no_output_____" ] ], [ [ "### 1.5.4 Dicionário de word frequency\nAgora podemos olhar atentamente para a função **build_freqs** (que se encontra no utils.py).", "_____no_output_____" ] ], [ [ "def build_freqs(tweets, ys):\n \"\"\"Build frequencies.\n Input:\n tweets: a list of tweets\n ys: an m x 1 array with the sentiment label of each tweet\n (either 0 or 1)\n Output:\n freqs: a dictionary mapping each (word, sentiment) pair to its\n frequency\n \"\"\"\n # Converta o array np em lista, pois o zip precisa de um iterável.\n # O .squeeze é necessário ou a lista termina com um elemento apenas.\n # Observe também que este é apenas um NOP se ys já for uma lista.\n yslist = np.squeeze(ys).tolist()\n\n # Comece com um dicionário vazio e preencha-o realizando um looping em todos os tweets\n # Um segundo looping também será feito em todas as palavras processadas de cada tweet\n freqs = {}\n for y, tweet in zip(yslist, tweets):\n for word in process_tweet(tweet):\n pair = (word, y)\n if pair in freqs:\n freqs[pair] += 1\n else:\n freqs[pair] = 1 \n return freqs\n\n # Este é apenas um outro modo de realizar os mesmos loops, substituindo o if e else pelo método .get, como já visto.\n for y, tweet in zip(yslist, tweets):\n for word in process_tweet(tweet):\n pair = (word, y)\n freqs[pair] = freqs.get(pair, 0) + 1", "_____no_output_____" ] ], [ [ "O par **(word, y)** dirá respeito a uma **chave**, sendo **word** um elemento do tweet processado e **y** um integer, '1' para positivo e '0' para negativo. No dicionário, a **chave** (key) dirá respeito ao elemento word, e seu **valor** (value) corresponderá ao número de aparecimentos no corpus inteiro. Ao longo dos loopings, mais tweets e seus elementos vão sendo analisados e as frequências vão aumentando. 
Vamos observar alguns exemplos para entender melhor o resultado final:\n\n```\n# \"folowfriday\" aparece \"25 vezes\" nos tweets \"positivos\"\n('followfriday', 1.0): 25\n\n# \"shame\" aparece \"19 vezes\" nos tweets \"negativos\"\n('shame', 0.0): 19\n```", "_____no_output_____" ], [ "Agora que já compreendemos melhor como realizar o cálculo de frequências e como guardar em um dicionário, vamos avançar mais um passo!\n\nComo já temos os dicionários e seus elementos (**word**, **y** e **frequência**), agora basta **alimentar a nossa lista de tweets e etiquetas**.", "_____no_output_____" ] ], [ [ "freqs = build_freqs(tweets, labels) # Vamos utilizar a função para criar os dicionários em 'freqs'\nprint(f'type(freqs) = {type(freqs)}') # Vamos checar qual o _data type_ da variável 'freqs'\nprint(f'len(freqs) = {len(freqs)}') # Vamos checar o tamanho do dicionário", "_____no_output_____" ], [ "print(freqs) # E agora um print de todo o dicionário", "_____no_output_____" ] ], [ [ "### 1.5.5 Tabela de contagem de palavras\nVamos escolher algumas palavras que gostaríamos de visualizar. Primeiro vamos indicá-las em uma lista. Criamos uma segunda lista que será a **representação de nossa tabela de contagem de palavras**.", "_____no_output_____" ] ], [ [ "keys = ['happi', 'merri', 'nice', 'good', 'bad', 'sad', 'mad', 'best', 'pretti',\n '❤', ':)', ':(', '😒', '😬', '😄', '😍', '♛',\n 'song', 'idea', 'power', 'play', 'magnific']\n\n# lista representando nossa tabela de contagem de palavras\ndata = [] # cada elemento consistirá de uma sub-lista com esse padrão: [<word>, <positive_count>, <negative_count>]\n\n\nfor word in keys: # para cada elemento em nossa lista de palavras\n\n positivas = 0\n negativas = 0\n \n if (word, 1) in freqs: # se word for \"1\" no dicionário de frequências\n positivas = freqs[(word, 1)]\n \n # retrieve number of negative counts\n if (word, 0) in freqs: # se word for \"0\" no dicionário de frequências\n negativas = freqs[(word, 0)]\n \n # append the word counts to the table\n data.append([word, positivas, negativas])\n \ndata", "_____no_output_____" ] ], [ [ "Percebeu como a palavra \"happi\" (_lemma_ de _happy_) apareceu muito mais em sentenças positivas do que em negativas? Vamos utilizar um gráfico de dispersão (_scatter plot_) para visualizar a tabela que criamos. Nesse caso, a contagem não será crua (_raw_) mas sim indicada em escala logarítmica (para um entendimento mais palpável das medidas). A linha vermelha marca a fronteira entre positivo e negativo. As palavras mais **marginais** são as mais **positivas/negativas** e as **centrais** são **neutras**.", "_____no_output_____" ] ], [ [ "fig, ax = plt.subplots(figsize = (8, 8))\n\nx = np.log([x[1] + 1 for x in data]) # converte os dados crus de \"x\" em escala logarítmica e +1 para não ocasionar log(0)\ny = np.log([x[2] + 1 for x in data]) # igualmente para \"y\"\n\nax.scatter(x, y) # plota um pontinho para cada palavra\n\nplt.xlabel(\"Log Positive count\") # axis labels\nplt.ylabel(\"Log Negative count\")\n\nfor i in range(0, len(data)): # Adiciona as palavras na mesma posição dos pontinhos\n ax.annotate(data[i][0], (x[i], y[i]), fontsize=14)\n\nax.plot([0, 9], [0, 9], color = 'red') # Plota a linha vermelha que vai de \"0\" a \"9\"\nplt.show()", "_____no_output_____" ] ], [ [ "O emoticon \":)\", em nossa lista **[':)', 3568, 2]**, por ter uma contagem muito superior às outras palavras e em relação às próprias sentenças negativas, pôde ser melhor visualizado através da contagem logarítmica. 
O mesmo diz respeito ao emoticon \":(\". \n\nQual será o significado desse emoticon de coroa? (no gráfico representado por um quadrado)\n\nFinalmente! Realizamos a análise de sentimentos de cada sentença, contamos quantas vezes as nossas palavras (words) aparecem em cada tipo de sentenças (positivas e negativas) e representamos por meio de um gráfico utilizando escala logarítmica.\n\nCom o passar do tempo e através da prática, vai tudo se tornando cada vez mais compreensível! Vamos avançar?", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e70f233d2795a21c36c573124f3a15588bad8668
12,238
ipynb
Jupyter Notebook
notebooks/Data_cleanv2.0.ipynb
Cpizzle1/Texas_energy_use_weather_proj
9fd97ebdd76846853b0dfda807c16ff4991a5848
[ "MIT" ]
null
null
null
notebooks/Data_cleanv2.0.ipynb
Cpizzle1/Texas_energy_use_weather_proj
9fd97ebdd76846853b0dfda807c16ff4991a5848
[ "MIT" ]
null
null
null
notebooks/Data_cleanv2.0.ipynb
Cpizzle1/Texas_energy_use_weather_proj
9fd97ebdd76846853b0dfda807c16ff4991a5848
[ "MIT" ]
null
null
null
25.285124
574
0.557771
[ [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')\nimport seaborn as sbn\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import confusion_matrix, roc_curve, auc\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.impute import KNNImputer\nfrom math import ceil\n\nimport scipy.stats as stats\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.pipeline import Pipeline\n\nimport matplotlib.pyplot as plt\n\nfrom pandas.plotting import scatter_matrix\n\nfrom sklearn.linear_model import LinearRegression, Ridge, Lasso\nfrom sklearn.model_selection import train_test_split, KFold\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.base import clone\nimport scipy.stats as scs\nimport statsmodels.api as sm\n\n\n\n\nfrom sklearn.datasets import make_classification\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import KFold, train_test_split\nfrom sklearn.metrics import accuracy_score, precision_score, recall_score\n\n\nimport timeit\nimport datetime as dt", "_____no_output_____" ], [ "data = pd.read_csv('~/Downloads/EIA930_BALANCE_2020_Jan_Jun.csv')\ndata_2 = pd.read_csv('~/Downloads/EIA930_BALANCE_2020_Jul_Dec.csv')", "/Users/cp/opt/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3147: DtypeWarning: Columns (11,14,15,16,17,19,20,21) have mixed types.Specify dtype option on import or set low_memory=False.\n interactivity=interactivity, compiler=compiler, result=result)\n/Users/cp/opt/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3147: DtypeWarning: Columns (11,14,16,17,19,20,21) have mixed types.Specify dtype option on import or set low_memory=False.\n interactivity=interactivity, compiler=compiler, result=result)\n" ], [ "def change_cols_to_floats(dataframe,lst):\n \n for i in lst:\n dataframe[i] = dataframe[i].str.replace(',', '')\n dataframe[i] = dataframe[i].astype(float)\n return dataframe\ndef make_date_time_col(df):\n df['Hour Number'] = df_total['Hour Number'].replace(24, 0)\n df['Hour Number'] = df_total['Hour Number'].replace(25, 0)\n# df['Hour Number'] = df_total['Hour Number']== '25']\n \n \n df['Data Date']= df['Data Date'].astype(str)\n# df['Data Date']= df['Data Date'].replace(['/', '-'])\n df['Hour Number'] = df['Hour Number'].astype(str)\n df['New_datetime'] = df['Data Date'].map(str) + \" \" + df['Hour Number']\n \n \n df['Hour Number'] = df['Hour Number'].astype(int)\n \n \n \n return df", "_____no_output_____" ], [ "lst_cols = ['Demand (MW)','Net Generation (MW) from Natural Gas', 'Net Generation (MW) from Nuclear','Net Generation (MW) from All Petroleum Products','Net Generation (MW) from Hydropower and Pumped Storage', 'Net Generation (MW) from Solar', 'Net Generation (MW) from Wind', 'Net Generation (MW) from Other Fuel Sources','Net Generation (MW)','Demand Forecast (MW)', 'Total Interchange (MW)', 'Net Generation (MW) (Adjusted)','Net Generation (MW) from Coal','Sum(Valid DIBAs) (MW)','Demand (MW) (Imputed)', 'Net Generation (MW) (Imputed)','Demand (MW) (Adjusted)']\ndata_convert = change_cols_to_floats(data, lst_cols)\ndata_2_convert = change_cols_to_floats(data_2, lst_cols)", "_____no_output_____" ], [ "lst_data = [data_convert,data_2_convert]\ndf_total = pd.concat(lst_data)", "_____no_output_____" ], [ "# small_sample = df_total.sample(n=400)\ndf_total.info()", 
"_____no_output_____" ], [ "df_total.head()", "_____no_output_____" ], [ "# df_total['Hour Number'] = df_total['Hour Number'].replace(24, 0)\n# df_total['Hour Number'] = df_total['Hour Number'].replace(25, 0)\n\nmake_date_time_col(df_total)\n\n\n# small_sample['New_datetime'] = pd.to_datetime(small_sample['New_datetime'],infer_datetime_format=True, format ='%m/%d/%Y %H')", "_____no_output_____" ], [ "df_total.info()", "_____no_output_____" ], [ "df_total['New_datetime']= df_total['New_datetime'].apply(lambda x:f'{x}:00:00')", "_____no_output_____" ], [ "# df_total[df_total['Hour Number']== '25']\n# df_total[df_total['Hour Number']== '24']\n", "_____no_output_____" ], [ "df_total.info()", "_____no_output_____" ], [ "sample_data = df_total.sample(n =500)", "_____no_output_____" ], [ "sample_data.info()\nsample_data.head(30)", "_____no_output_____" ], [ "df_total['New_datetime'] = pd.to_datetime(df_total['New_datetime'],infer_datetime_format=True, format ='%m/%d/%Y %H')", "_____no_output_____" ], [ "df_total.info()", "_____no_output_____" ], [ "df_total['Hour Number'].unique()", "_____no_output_____" ], [ "df_total.head(15)", "_____no_output_____" ] ], [ [ "# EXPLORATORY DATA ", "_____no_output_____" ] ], [ [ "sample_data['New_datetime'] = pd.to_datetime(sample_data['New_datetime'],infer_datetime_format=True, format ='%m/%d/%Y %H')", "_____no_output_____" ], [ "sample_data.head()", "_____no_output_____" ], [ "sample_data['Demand Delta'] = sample_data['Demand Forecast (MW)']- sample_data['Demand (MW)']", "_____no_output_____" ], [ "sample_data['Net Generation Delta'] = sample_data['Net Generation (MW)']- sample_data['Demand (MW)']\nsample_data.head()", "_____no_output_____" ], [ "sample_data.hist(column = 'Demand (MW)')\n\n\n# x = np.arange(0, 23)\n# # fig, ax =plt.subplots(figsize =(12,12))\n# # ax.hist(y, bins = 24)\n# fig, ax = plt.subplots(figsize = (12,12))\n# ax.plot(x, y)", "_____no_output_____" ], [ "average_hourly_demand = sample_data.groupby(['Hour Number']).mean('Demand (MW)')\n# average_hourly_demand.set_index('Hour Number')\naverage_hourly_demand\n", "_____no_output_____" ], [ "# sample_data.hist(figsize=(16, 16));\n\n# average_hourly_demand['Hour Number'].sort()\nsample_data.info()", "_____no_output_____" ], [ "average_hourly_demand = sample_data.groupby(['Hour Number'])\naverage_hourly_demand.get_group(1)\n", "_____no_output_____" ], [ "# average_hourly_demand['Hour Number']\n\n\n# y = average_hourly_demand['Demand (MW)']\n# fig, ax = plt.subplots(figsize =(12,12))\n# ax.plot(sample_data['Hour Number'], sample_data['Demand (MW)'].mean())", "_____no_output_____" ], [ "filt =sample_data['Hour Number']==1\nhour1 = sample_data.loc[filt]['Demand (MW)'].mean()\nprint(hour1)", "_____no_output_____" ], [ "filt =df_total['Hour Number']==1\nhour1 = df_total[filt]['Demand (MW)'].mean()\nprint(hour1)\n", "_____no_output_____" ], [ "def make_hourly_demand_means(df,lst):\n d = {}\n for i in (lst):\n filt =df['Hour Number']==i\n d[i] = sample_data.loc[filt]['Demand (MW)'].mean()\n return d\n\n \n ", "_____no_output_____" ], [ "lst_hours = np.arange(0,24)\n# make_hourly_demand_means(df_total, lst_hours)\nmake_hourly_demand_means(sample_data, lst_hours)\n", "_____no_output_____" ], [ "sample_data.info()\nsample_data.head()", "_____no_output_____" ], [ "df_total.info()\ndf_total.head()\n", "_____no_output_____" ], [ "sample2 = df_total.sample(n =200)\nsample2.head(30)", "_____no_output_____" ], [ "make_hourly_demand_means(sample2, lst_hours)", "_____no_output_____" ], [ "sample_data['Demand 
(MW)'].mean()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e70f27beb2f9bcf189c535d72d3cfb98893c1506
243,933
ipynb
Jupyter Notebook
dataset/reddit/scrape_reddit_stcok_posts.ipynb
sendash-app/study_stocks_sentiments
39b5dcde79c0caed7f0cab0dac57d9c71da65cee
[ "MIT" ]
1
2020-03-09T20:31:09.000Z
2020-03-09T20:31:09.000Z
dataset/reddit/scrape_reddit_stcok_posts.ipynb
sendash-app/study_stocks_sentiments
39b5dcde79c0caed7f0cab0dac57d9c71da65cee
[ "MIT" ]
null
null
null
dataset/reddit/scrape_reddit_stcok_posts.ipynb
sendash-app/study_stocks_sentiments
39b5dcde79c0caed7f0cab0dac57d9c71da65cee
[ "MIT" ]
1
2019-02-19T08:08:37.000Z
2019-02-19T08:08:37.000Z
48.476351
93
0.432803
[ [ [ "import praw\nimport pandas as pd\nimport datetime as dt", "_____no_output_____" ], [ "reddit = praw.Reddit(client_id='', \\\n client_secret='', \\\n user_agent='reddit_stocks_post', \\\n username='', \\\n password='')", "_____no_output_____" ], [ "def get_date(created):\n return dt.datetime.fromtimestamp(created)", "_____no_output_____" ], [ "def get_stock_reddit_post(subreddit, keywords, timestamp_start):\n \n subreddit = reddit.subreddit(subreddit)\n \n top_subreddit = subreddit.search(query=keywords, limit=1000)\n \n topics_dict = { \"title\":[], \\\n \"score\":[], \\\n \"id\":[], \"url\":[], \\\n \"comms_num\": [], \\\n \"created\": [], \\\n \"body\":[]}\n \n for submission in top_subreddit:\n topics_dict[\"title\"].append(submission.title)\n topics_dict[\"score\"].append(submission.score)\n topics_dict[\"id\"].append(submission.id)\n topics_dict[\"url\"].append(submission.url)\n topics_dict[\"comms_num\"].append(submission.num_comments)\n topics_dict[\"created\"].append(submission.created)\n topics_dict[\"body\"].append(submission.selftext)\n \n topics_data = pd.DataFrame(topics_dict)\n \n _timestamp = topics_data[\"created\"].apply(get_date)\n \n topics_data = topics_data.assign(timestamp = _timestamp)\n \n # sort value ascending by unix_timestamp_decode_2\n topics_data = topics_data.sort_values(by='timestamp',ascending=True)\n \n topics_data = topics_data[topics_data['created'] > timestamp_start]\n \n return topics_data", "_____no_output_____" ], [ "df_FB = get_stock_reddit_post('Stocks', '$FB', 1514764800)\ndf_NFLX = get_stock_reddit_post('Stocks', '$NFLX', 1514764800)\ndf_GOOGL = get_stock_reddit_post('Stocks', '$GOOGL', 1514764800)\ndf_GOOG = get_stock_reddit_post('Stocks', '$GOOG', 1514764800)\ndf_AMZN = get_stock_reddit_post('Stocks', '$AMZN', 1514764800)", "_____no_output_____" ], [ "df_AMZN", "_____no_output_____" ], [ "df_FB", "_____no_output_____" ], [ "df_NFLX", "_____no_output_____" ], [ "df_GOOGL", "_____no_output_____" ], [ "df_GOOG", "_____no_output_____" ], [ "df_GOOG.to_csv('GOOG_reddit_2018_2019.csv', index=False)\ndf_GOOGL.to_csv('GOOGL_reddit_2018_2019.csv', index=False) \ndf_FB.to_csv('FB_reddit_2018_2019.csv', index=False) \ndf_AMZN.to_csv('AMZN_reddit_2018_2019.csv', index=False) \ndf_NFLX.to_csv('NFLX_reddit_2018_2019.csv', index=False) ", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e70f2e6a38802784255fc81d347aa44d9203a02b
263,977
ipynb
Jupyter Notebook
code/NetworkX_cancer_cell_feeding.ipynb
asifabdullah-git/StellarGraph
e3fbf42086b9b64e978494622d8d8347122170c9
[ "MIT" ]
null
null
null
code/NetworkX_cancer_cell_feeding.ipynb
asifabdullah-git/StellarGraph
e3fbf42086b9b64e978494622d8d8347122170c9
[ "MIT" ]
null
null
null
code/NetworkX_cancer_cell_feeding.ipynb
asifabdullah-git/StellarGraph
e3fbf42086b9b64e978494622d8d8347122170c9
[ "MIT" ]
null
null
null
142.075888
184,862
0.812995
[ [ [ "!pip install stellargraph", "Collecting stellargraph\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/74/78/16b23ef04cf6fb24a7dea9fd0e03c8308a56681cc5efe29f16186210ba04/stellargraph-1.2.1-py3-none-any.whl (435kB)\n\r\u001b[K |▊ | 10kB 15.4MB/s eta 0:00:01\r\u001b[K |█▌ | 20kB 2.2MB/s eta 0:00:01\r\u001b[K |██▎ | 30kB 2.7MB/s eta 0:00:01\r\u001b[K |███ | 40kB 3.0MB/s eta 0:00:01\r\u001b[K |███▊ | 51kB 2.5MB/s eta 0:00:01\r\u001b[K |████▌ | 61kB 2.8MB/s eta 0:00:01\r\u001b[K |█████▎ | 71kB 3.1MB/s eta 0:00:01\r\u001b[K |██████ | 81kB 3.4MB/s eta 0:00:01\r\u001b[K |██████▊ | 92kB 3.6MB/s eta 0:00:01\r\u001b[K |███████▌ | 102kB 3.4MB/s eta 0:00:01\r\u001b[K |████████▎ | 112kB 3.4MB/s eta 0:00:01\r\u001b[K |█████████ | 122kB 3.4MB/s eta 0:00:01\r\u001b[K |█████████▉ | 133kB 3.4MB/s eta 0:00:01\r\u001b[K |██████████▌ | 143kB 3.4MB/s eta 0:00:01\r\u001b[K |███████████▎ | 153kB 3.4MB/s eta 0:00:01\r\u001b[K |████████████ | 163kB 3.4MB/s eta 0:00:01\r\u001b[K |████████████▉ | 174kB 3.4MB/s eta 0:00:01\r\u001b[K |█████████████▌ | 184kB 3.4MB/s eta 0:00:01\r\u001b[K |██████████████▎ | 194kB 3.4MB/s eta 0:00:01\r\u001b[K |███████████████ | 204kB 3.4MB/s eta 0:00:01\r\u001b[K |███████████████▉ | 215kB 3.4MB/s eta 0:00:01\r\u001b[K |████████████████▋ | 225kB 3.4MB/s eta 0:00:01\r\u001b[K |█████████████████▎ | 235kB 3.4MB/s eta 0:00:01\r\u001b[K |██████████████████ | 245kB 3.4MB/s eta 0:00:01\r\u001b[K |██████████████████▉ | 256kB 3.4MB/s eta 0:00:01\r\u001b[K |███████████████████▋ | 266kB 3.4MB/s eta 0:00:01\r\u001b[K |████████████████████▎ | 276kB 3.4MB/s eta 0:00:01\r\u001b[K |█████████████████████ | 286kB 3.4MB/s eta 0:00:01\r\u001b[K |█████████████████████▉ | 296kB 3.4MB/s eta 0:00:01\r\u001b[K |██████████████████████▋ | 307kB 3.4MB/s eta 0:00:01\r\u001b[K |███████████████████████▍ | 317kB 3.4MB/s eta 0:00:01\r\u001b[K |████████████████████████ | 327kB 3.4MB/s eta 0:00:01\r\u001b[K |████████████████████████▉ | 337kB 3.4MB/s eta 0:00:01\r\u001b[K |█████████████████████████▋ | 348kB 3.4MB/s eta 0:00:01\r\u001b[K |██████████████████████████▍ | 358kB 3.4MB/s eta 0:00:01\r\u001b[K |███████████████████████████ | 368kB 3.4MB/s eta 0:00:01\r\u001b[K |███████████████████████████▉ | 378kB 3.4MB/s eta 0:00:01\r\u001b[K |████████████████████████████▋ | 389kB 3.4MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▍ | 399kB 3.4MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▏ | 409kB 3.4MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▉ | 419kB 3.4MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▋| 430kB 3.4MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 440kB 3.4MB/s \n\u001b[?25hRequirement already satisfied: numpy>=1.14 in /usr/local/lib/python3.6/dist-packages (from stellargraph) (1.18.5)\nRequirement already satisfied: scikit-learn>=0.20 in /usr/local/lib/python3.6/dist-packages (from stellargraph) (0.22.2.post1)\nRequirement already satisfied: scipy>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from stellargraph) (1.4.1)\nRequirement already satisfied: tensorflow>=2.1.0 in /usr/local/lib/python3.6/dist-packages (from stellargraph) (2.3.0)\nRequirement already satisfied: gensim>=3.4.0 in /usr/local/lib/python3.6/dist-packages (from stellargraph) (3.6.0)\nRequirement already satisfied: pandas>=0.24 in /usr/local/lib/python3.6/dist-packages (from stellargraph) (1.0.5)\nRequirement already satisfied: matplotlib>=2.2 in /usr/local/lib/python3.6/dist-packages (from stellargraph) (3.2.2)\nRequirement already satisfied: networkx>=2.2 in 
/usr/local/lib/python3.6/dist-packages (from stellargraph) (2.5)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.20->stellargraph) (0.16.0)\nRequirement already satisfied: tensorflow-estimator<2.4.0,>=2.3.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->stellargraph) (2.3.0)\nRequirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->stellargraph) (3.12.4)\nRequirement already satisfied: gast==0.3.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->stellargraph) (0.3.3)\nRequirement already satisfied: keras-preprocessing<1.2,>=1.1.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->stellargraph) (1.1.2)\nRequirement already satisfied: google-pasta>=0.1.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->stellargraph) (0.2.0)\nRequirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->stellargraph) (1.1.0)\nRequirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->stellargraph) (3.3.0)\nRequirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->stellargraph) (1.32.0)\nRequirement already satisfied: six>=1.12.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->stellargraph) (1.15.0)\nRequirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->stellargraph) (1.12.1)\nRequirement already satisfied: tensorboard<3,>=2.3.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->stellargraph) (2.3.0)\nRequirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->stellargraph) (0.35.1)\nRequirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->stellargraph) (0.10.0)\nRequirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->stellargraph) (1.6.3)\nRequirement already satisfied: h5py<2.11.0,>=2.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->stellargraph) (2.10.0)\nRequirement already satisfied: smart-open>=1.2.1 in /usr/local/lib/python3.6/dist-packages (from gensim>=3.4.0->stellargraph) (2.1.1)\nRequirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.24->stellargraph) (2.8.1)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.24->stellargraph) (2018.9)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.2->stellargraph) (0.10.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.2->stellargraph) (2.4.7)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.2->stellargraph) (1.2.0)\nRequirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx>=2.2->stellargraph) (4.4.2)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.9.2->tensorflow>=2.1.0->stellargraph) (50.3.0)\nRequirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from 
tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (3.2.2)\nRequirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (1.7.0)\nRequirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (1.17.2)\nRequirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (2.23.0)\nRequirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (1.0.1)\nRequirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (0.4.1)\nRequirement already satisfied: boto3 in /usr/local/lib/python3.6/dist-packages (from smart-open>=1.2.1->gensim>=3.4.0->stellargraph) (1.14.59)\nRequirement already satisfied: boto in /usr/local/lib/python3.6/dist-packages (from smart-open>=1.2.1->gensim>=3.4.0->stellargraph) (2.49.0)\nRequirement already satisfied: importlib-metadata; python_version < \"3.8\" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (1.7.0)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (4.1.1)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (0.2.8)\nRequirement already satisfied: rsa<5,>=3.1.4; python_version >= \"3\" in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (4.6)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (1.24.3)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (3.0.4)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (2.10)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (2020.6.20)\nRequirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (1.3.0)\nRequirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /usr/local/lib/python3.6/dist-packages (from boto3->smart-open>=1.2.1->gensim>=3.4.0->stellargraph) (0.3.3)\nRequirement already satisfied: jmespath<1.0.0,>=0.7.1 in /usr/local/lib/python3.6/dist-packages (from boto3->smart-open>=1.2.1->gensim>=3.4.0->stellargraph) (0.10.0)\nRequirement already satisfied: botocore<1.18.0,>=1.17.59 in /usr/local/lib/python3.6/dist-packages (from boto3->smart-open>=1.2.1->gensim>=3.4.0->stellargraph) (1.17.59)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from 
importlib-metadata; python_version < \"3.8\"->markdown>=2.6.8->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (3.1.0)\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.6/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (0.4.8)\nRequirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->stellargraph) (3.1.0)\nRequirement already satisfied: docutils<0.16,>=0.10 in /usr/local/lib/python3.6/dist-packages (from botocore<1.18.0,>=1.17.59->boto3->smart-open>=1.2.1->gensim>=3.4.0->stellargraph) (0.15.2)\nInstalling collected packages: stellargraph\nSuccessfully installed stellargraph-1.2.1\n" ], [ "#url = '/content/drive/My Drive/Colab Notebooks/stellargraph/Data/graph.csv'\n#url1 = 'https://raw.githubusercontent.com/asifabdullah-git/StellarGraph/master/Data/graph_final.csv'\n#url2 = 'https://raw.githubusercontent.com/asifabdullah-git/StellarGraph/master/Data/nodes.csv'\n\nurl1 = 'https://raw.githubusercontent.com/asifabdullah-git/StellarGraph/master/Data/network_graph/datasets_2738_4529_stack_network_links.csv'\nurl2 = 'https://raw.githubusercontent.com/asifabdullah-git/StellarGraph/master/Data/network_graph/datasets_2738_4529_stack_network_nodes.csv'", "_____no_output_____" ], [ "import numpy as np\nimport pandas as pd\nfrom pandas import DataFrame, Series\nfrom sklearn.model_selection import train_test_split\nimport matplotlib.pyplot as plt\n\n#import stellargraph as sg\nimport tensorflow as tf\nimport networkx as nx", "_____no_output_____" ], [ "G = nx.Graph(day=\"Stackoverflow\")", "_____no_output_____" ], [ "df_nodes = pd.read_csv(url2, encoding='utf-8')\ndf_edges = pd.read_csv(url1, encoding='utf-8')", "_____no_output_____" ], [ "df_nodes", "_____no_output_____" ], [ "#First, find out all the features with type object in the data:\n\nobjList = df_nodes.select_dtypes(include = \"object\").columns\nprint (objList)", "Index(['name'], dtype='object')\n" ], [ "#First, find out all the features with type object in the data:\n\nobjList1 = df_edges.select_dtypes(include = \"object\").columns\nprint (objList1)", "Index(['source', 'target'], dtype='object')\n" ], [ "#Label Encoding for object to numeric conversion\n\nfrom sklearn.preprocessing import LabelEncoder\nle = LabelEncoder()\n\nfor feat in objList:\n df_nodes[feat] = le.fit_transform(df_nodes[feat].astype(str))\n\nprint (df_nodes.info())", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 115 entries, 0 to 114\nData columns (total 3 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 name 115 non-null int64 \n 1 group 115 non-null int64 \n 2 nodesize 115 non-null float64\ndtypes: float64(1), int64(2)\nmemory usage: 2.8 KB\nNone\n" ], [ "from sklearn.preprocessing import LabelEncoder\nle = LabelEncoder()\n\nfor feat in objList1:\n df_edges[feat] = le.fit_transform(df_edges[feat].astype(str))\n\nprint (df_edges.info())", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 490 entries, 0 to 489\nData columns (total 3 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 source 490 non-null int64 \n 1 target 490 non-null int64 \n 2 value 490 non-null float64\ndtypes: float64(1), int64(2)\nmemory usage: 11.6 KB\nNone\n" ], [ "df_edges.sort_values(by=['source'])\n", "_____no_output_____" ], [ "labels = df_nodes['group']\nlabels", 
"_____no_output_____" ], [ "df_nodes.sort_values(by=['group'])", "_____no_output_____" ], [ "for index, row in df_nodes.iterrows():\n G.add_node(row['name'], group=row['group'], nodesize=row['nodesize'])", "_____no_output_____" ], [ "for index, row in df_edges.iterrows():\n G.add_weighted_edges_from([(row['source'], row['target'], row['value'])])", "_____no_output_____" ], [ "'''for index, row in df_edges.iterrows():\n G.add_weighted_edges_from([(row['Edge1'], row['Edge2'], row['Weight'])])'''", "_____no_output_____" ], [ "#obtain the adjacency matrix (A)\nA = nx.adjacency_matrix(G)\nprint('Graph info: ', nx.info(G))\n\n#Inspect the node features\nprint('\\nGraph Nodes: ', G.nodes.data())", "Graph info: Name: \nType: Graph\nNumber of nodes: 115\nNumber of edges: 245\nAverage degree: 4.2609\n\nGraph Nodes: [(41.0, {'group': 6.0, 'nodesize': 272.45}), (22.0, {'group': 6.0, 'nodesize': 341.17}), (40.0, {'group': 8.0, 'nodesize': 29.83}), (89.0, {'group': 8.0, 'nodesize': 52.84}), (83.0, {'group': 3.0, 'nodesize': 70.14}), (84.0, {'group': 3.0, 'nodesize': 55.31}), (44.0, {'group': 4.0, 'nodesize': 87.46}), (94.0, {'group': 4.0, 'nodesize': 63.62}), (42.0, {'group': 6.0, 'nodesize': 140.18}), (17.0, {'group': 1.0, 'nodesize': 189.83}), (19.0, {'group': 1.0, 'nodesize': 268.11}), (12.0, {'group': 2.0, 'nodesize': 129.55}), (18.0, {'group': 2.0, 'nodesize': 321.13}), (65.0, {'group': 4.0, 'nodesize': 47.01}), (48.0, {'group': 6.0, 'nodesize': 649.16}), (50.0, {'group': 6.0, 'nodesize': 208.29}), (80.0, {'group': 3.0, 'nodesize': 8.52}), (78.0, {'group': 3.0, 'nodesize': 59.03}), (70.0, {'group': 6.0, 'nodesize': 361.22}), (62.0, {'group': 6.0, 'nodesize': 165.43}), (91.0, {'group': 8.0, 'nodesize': 18.0}), (0.0, {'group': 2.0, 'nodesize': 75.08}), (77.0, {'group': 3.0, 'nodesize': 13.61}), (90.0, {'group': 8.0, 'nodesize': 12.37}), (54.0, {'group': 6.0, 'nodesize': 9.73}), (85.0, {'group': 6.0, 'nodesize': 30.55}), (38.0, {'group': 10.0, 'nodesize': 17.95}), (10.0, {'group': 10.0, 'nodesize': 11.04}), (93.0, {'group': 2.0, 'nodesize': 64.62}), (33.0, {'group': 3.0, 'nodesize': 14.27}), (64.0, {'group': 3.0, 'nodesize': 117.36}), (60.0, {'group': 3.0, 'nodesize': 50.95}), (45.0, {'group': 4.0, 'nodesize': 15.29}), (36.0, {'group': 5.0, 'nodesize': 12.71}), (35.0, {'group': 5.0, 'nodesize': 54.48}), (31.0, {'group': 14.0, 'nodesize': 12.5}), (32.0, {'group': 14.0, 'nodesize': 11.38}), (30.0, {'group': 2.0, 'nodesize': 12.88}), (55.0, {'group': 2.0, 'nodesize': 8.32}), (107.0, {'group': 2.0, 'nodesize': 12.73}), (111.0, {'group': 2.0, 'nodesize': 19.38}), (4.0, {'group': 4.0, 'nodesize': 229.86}), (46.0, {'group': 8.0, 'nodesize': 610.65}), (86.0, {'group': 10.0, 'nodesize': 27.02}), (2.0, {'group': 6.0, 'nodesize': 35.41}), (24.0, {'group': 1.0, 'nodesize': 40.91}), (74.0, {'group': 1.0, 'nodesize': 438.67}), (104.0, {'group': 14.0, 'nodesize': 16.87}), (113.0, {'group': 4.0, 'nodesize': 11.37}), (9.0, {'group': 5.0, 'nodesize': 13.17}), (63.0, {'group': 5.0, 'nodesize': 9.49}), (8.0, {'group': 6.0, 'nodesize': 126.59}), (13.0, {'group': 2.0, 'nodesize': 11.28}), (53.0, {'group': 6.0, 'nodesize': 32.12}), (71.0, {'group': 2.0, 'nodesize': 10.32}), (66.0, {'group': 2.0, 'nodesize': 30.19}), (51.0, {'group': 6.0, 'nodesize': 25.38}), (114.0, {'group': 6.0, 'nodesize': 23.77}), (34.0, {'group': 1.0, 'nodesize': 9.39}), (110.0, {'group': 6.0, 'nodesize': 46.74}), (47.0, {'group': 8.0, 'nodesize': 22.45}), (59.0, {'group': 8.0, 'nodesize': 10.3}), (52.0, {'group': 8.0, 'nodesize': 13.78}), (15.0, 
{'group': 5.0, 'nodesize': 23.91}), (56.0, {'group': 5.0, 'nodesize': 108.54}), (7.0, {'group': 7.0, 'nodesize': 18.79}), (99.0, {'group': 7.0, 'nodesize': 17.53}), (21.0, {'group': 6.0, 'nodesize': 18.71}), (95.0, {'group': 12.0, 'nodesize': 9.45}), (1.0, {'group': 12.0, 'nodesize': 12.22}), (97.0, {'group': 6.0, 'nodesize': 31.05}), (108.0, {'group': 8.0, 'nodesize': 18.94}), (82.0, {'group': 8.0, 'nodesize': 27.08}), (96.0, {'group': 11.0, 'nodesize': 8.95}), (87.0, {'group': 11.0, 'nodesize': 12.7}), (5.0, {'group': 4.0, 'nodesize': 14.79}), (79.0, {'group': 3.0, 'nodesize': 13.85}), (49.0, {'group': 9.0, 'nodesize': 10.02}), (25.0, {'group': 9.0, 'nodesize': 22.85}), (3.0, {'group': 9.0, 'nodesize': 30.05}), (6.0, {'group': 7.0, 'nodesize': 29.09}), (67.0, {'group': 4.0, 'nodesize': 12.58}), (57.0, {'group': 1.0, 'nodesize': 44.21}), (75.0, {'group': 1.0, 'nodesize': 10.53}), (109.0, {'group': 5.0, 'nodesize': 19.71}), (100.0, {'group': 5.0, 'nodesize': 11.98}), (43.0, {'group': 6.0, 'nodesize': 8.44}), (28.0, {'group': 3.0, 'nodesize': 10.82}), (106.0, {'group': 6.0, 'nodesize': 8.38}), (76.0, {'group': 1.0, 'nodesize': 52.7}), (29.0, {'group': 1.0, 'nodesize': 13.27}), (37.0, {'group': 9.0, 'nodesize': 24.84}), (105.0, {'group': 2.0, 'nodesize': 18.13}), (72.0, {'group': 3.0, 'nodesize': 39.03}), (92.0, {'group': 2.0, 'nodesize': 154.23}), (102.0, {'group': 5.0, 'nodesize': 15.67}), (27.0, {'group': 8.0, 'nodesize': 11.39}), (103.0, {'group': 2.0, 'nodesize': 23.56}), (101.0, {'group': 2.0, 'nodesize': 19.36}), (23.0, {'group': 9.0, 'nodesize': 9.81}), (26.0, {'group': 6.0, 'nodesize': 8.25}), (88.0, {'group': 5.0, 'nodesize': 11.63}), (16.0, {'group': 6.0, 'nodesize': 13.28}), (112.0, {'group': 2.0, 'nodesize': 11.18}), (14.0, {'group': 2.0, 'nodesize': 13.68}), (61.0, {'group': 2.0, 'nodesize': 10.92}), (39.0, {'group': 10.0, 'nodesize': 11.18}), (11.0, {'group': 8.0, 'nodesize': 8.61}), (98.0, {'group': 6.0, 'nodesize': 10.13}), (81.0, {'group': 13.0, 'nodesize': 9.46}), (68.0, {'group': 13.0, 'nodesize': 19.38}), (20.0, {'group': 9.0, 'nodesize': 10.66}), (69.0, {'group': 6.0, 'nodesize': 12.62}), (73.0, {'group': 5.0, 'nodesize': 9.85}), (58.0, {'group': 1.0, 'nodesize': 27.21})]\n" ], [ "", "_____no_output_____" ], [ "color_map = {1:'#f09494', 2:'#eebcbc', 3:'#72bbd0', 4:'#91f0a1', 5:'#629fff', 6:'#bcc2f2', \n 7:'#eebcbc', 8:'#f1f0c0', 9:'#d2ffe7', 10:'#caf3a6', 11:'#ffdf55', 12:'#ef77aa', \n 13:'#d6dcff', 14:'#d2f5f0'}\n", "_____no_output_____" ], [ "colors = [color_map[G.nodes[node]['group']] for node in G]\nsizes = [G.nodes[node]['nodesize']*10 for node in G]", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n\nplt.rcParams[\"figure.figsize\"] = (30,30)\n\nnx.draw(G, node_color=colors, node_size=sizes, with_labels=True, font_weight='bold')\nplt.show()", "_____no_output_____" ], [ "", "_____no_output_____" ] ], [ [ "# Adjacency Matrix (A) and Node Features Matrix (X)", "_____no_output_____" ] ], [ [ "#Get the Adjacency Matrix (A) and Node Features Matrix (X) as numpy array\nA = np.array(nx.attr_matrix(G, node_attr='group')[0])\nX = np.array(nx.attr_matrix(G, node_attr='group')[1])\nX = np.expand_dims(X,axis=1)\n\nprint('Shape of A: ', A.shape)\nprint('\\nShape of X: ', X.shape)\nprint('\\nAdjacency Matrix (A):\\n', A)\nprint('\\nNode Features Matrix (X):\\n', X)", "Shape of A: (14, 14)\n\nShape of X: (14, 1)\n\nAdjacency Matrix (A):\n [[12. 0. 1. 0. 1. 0. 0. 2. 0. 0. 0. 0. 0. 0.]\n [ 0. 49. 0. 0. 0. 2. 0. 0. 1. 0. 0. 0. 0. 0.]\n [ 1. 0. 20. 0. 1. 9. 0. 0. 0. 0. 
0. 0. 0. 0.]\n [ 0. 0. 0. 14. 2. 0. 0. 1. 0. 0. 0. 0. 0. 0.]\n [ 1. 0. 1. 2. 13. 1. 0. 0. 1. 0. 0. 0. 0. 0.]\n [ 0. 2. 9. 0. 1. 66. 1. 1. 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0. 1. 2. 0. 0. 0. 0. 0. 0. 0.]\n [ 2. 0. 0. 1. 0. 1. 0. 27. 1. 0. 0. 0. 0. 0.]\n [ 0. 1. 0. 0. 1. 0. 0. 1. 7. 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 4. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3.]]\n\nNode Features Matrix (X):\n [[ 1.]\n [ 2.]\n [ 3.]\n [ 4.]\n [ 5.]\n [ 6.]\n [ 7.]\n [ 8.]\n [ 9.]\n [10.]\n [11.]\n [12.]\n [13.]\n [14.]]\n" ] ], [ [ "# Converting the label to one-hot encoding", "_____no_output_____" ] ], [ [ "from keras.utils import to_categorical", "_____no_output_____" ], [ "def encode_label(labels):\n label_encoder = LabelEncoder()\n labels = label_encoder.fit_transform(labels)\n labels = to_categorical(labels)\n return labels, label_encoder.classes_\n\nlabels_encoded, classes = encode_label(labels)", "_____no_output_____" ], [ "classes", "_____no_output_____" ], [ "labels_encoded", "_____no_output_____" ] ], [ [ "# GCN", "_____no_output_____" ] ], [ [ "!pip install dgl", "Collecting dgl\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/4d/05/9627fd225854f9ab77984f79405e78def50eb673a962940ed30fc07e9ac6/dgl-0.5.2-cp36-cp36m-manylinux1_x86_64.whl (3.5MB)\n\u001b[K |████████████████████████████████| 3.5MB 3.4MB/s \n\u001b[?25hRequirement already satisfied: numpy>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from dgl) (1.18.5)\nRequirement already satisfied: scipy>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from dgl) (1.4.1)\nRequirement already satisfied: networkx>=2.1 in /usr/local/lib/python3.6/dist-packages (from dgl) (2.5)\nRequirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.6/dist-packages (from dgl) (2.23.0)\nRequirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx>=2.1->dgl) (4.4.2)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->dgl) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->dgl) (2020.6.20)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->dgl) (2.10)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->dgl) (1.24.3)\nInstalling collected packages: dgl\nSuccessfully installed dgl-0.5.2\n" ], [ "!pip install spektral", "Collecting spektral\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/8e/2e/3b5bb768d0568f9568bcde08f42738b17bada5f5329221222edfad0838f6/spektral-0.6.1-py3-none-any.whl (95kB)\n\r\u001b[K |███▍ | 10kB 8.6MB/s eta 0:00:01\r\u001b[K |██████▉ | 20kB 2.1MB/s eta 0:00:01\r\u001b[K |██████████▎ | 30kB 2.7MB/s eta 0:00:01\r\u001b[K |█████████████▊ | 40kB 3.1MB/s eta 0:00:01\r\u001b[K |█████████████████▏ | 51kB 2.5MB/s eta 0:00:01\r\u001b[K |████████████████████▋ | 61kB 2.8MB/s eta 0:00:01\r\u001b[K |████████████████████████ | 71kB 3.0MB/s eta 0:00:01\r\u001b[K |███████████████████████████▌ | 81kB 3.3MB/s eta 0:00:01\r\u001b[K |███████████████████████████████ | 92kB 3.6MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 102kB 3.1MB/s \n\u001b[?25hRequirement already satisfied: tensorflow>=2.1.0 in 
/usr/local/lib/python3.6/dist-packages (from spektral) (2.3.0)\nRequirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from spektral) (2.23.0)\nRequirement already satisfied: networkx in /usr/local/lib/python3.6/dist-packages (from spektral) (2.5)\nRequirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from spektral) (1.0.5)\nRequirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from spektral) (0.22.2.post1)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from spektral) (1.18.5)\nRequirement already satisfied: lxml in /usr/local/lib/python3.6/dist-packages (from spektral) (4.2.6)\nRequirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from spektral) (0.16.0)\nRequirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from spektral) (1.4.1)\nRequirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->spektral) (1.1.0)\nRequirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->spektral) (1.12.1)\nRequirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->spektral) (3.12.4)\nRequirement already satisfied: h5py<2.11.0,>=2.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->spektral) (2.10.0)\nRequirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->spektral) (0.35.1)\nRequirement already satisfied: gast==0.3.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->spektral) (0.3.3)\nRequirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->spektral) (1.32.0)\nRequirement already satisfied: tensorflow-estimator<2.4.0,>=2.3.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->spektral) (2.3.0)\nRequirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->spektral) (1.6.3)\nRequirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->spektral) (0.10.0)\nRequirement already satisfied: tensorboard<3,>=2.3.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->spektral) (2.3.0)\nRequirement already satisfied: keras-preprocessing<1.2,>=1.1.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->spektral) (1.1.2)\nRequirement already satisfied: six>=1.12.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->spektral) (1.15.0)\nRequirement already satisfied: google-pasta>=0.1.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->spektral) (0.2.0)\nRequirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=2.1.0->spektral) (3.3.0)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->spektral) (2.10)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->spektral) (3.0.4)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->spektral) (1.24.3)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->spektral) (2020.6.20)\nRequirement already satisfied: decorator>=4.3.0 in 
/usr/local/lib/python3.6/dist-packages (from networkx->spektral) (4.4.2)\nRequirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas->spektral) (2.8.1)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->spektral) (2018.9)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.9.2->tensorflow>=2.1.0->spektral) (50.3.0)\nRequirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=2.1.0->spektral) (0.4.1)\nRequirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=2.1.0->spektral) (1.7.0)\nRequirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=2.1.0->spektral) (3.2.2)\nRequirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=2.1.0->spektral) (1.17.2)\nRequirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=2.1.0->spektral) (1.0.1)\nRequirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->spektral) (1.3.0)\nRequirement already satisfied: importlib-metadata; python_version < \"3.8\" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->spektral) (1.7.0)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->spektral) (4.1.1)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->spektral) (0.2.8)\nRequirement already satisfied: rsa<5,>=3.1.4; python_version >= \"3\" in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->spektral) (4.6)\nRequirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->spektral) (3.1.0)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < \"3.8\"->markdown>=2.6.8->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->spektral) (3.1.0)\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.6/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow>=2.1.0->spektral) (0.4.8)\nInstalling collected packages: spektral\nSuccessfully installed spektral-0.6.1\n" ], [ "import dgl\nfrom dgl.nn.pytorch import GraphConv\nfrom sklearn import preprocessing\nfrom spektral.layers import GraphConv", "DGL backend not selected or invalid. 
Assuming PyTorch for now.\nUsing backend: pytorch\n" ], [ "# Parameters\nchannels = 16 # Number of channels in the first layer\ndropout = 0.5 # Dropout rate for the features\nl2_reg = 5e-4 # L2 regularization rate\nlearning_rate = 1e-2 # Learning rate\nepochs = 200 # Number of training epochs\nes_patience = 10 # Patience for early stopping\n\n# Preprocessing operations\nA = GraphConv.preprocess(A).astype('f4')", "_____no_output_____" ], [ "F = df_nodes.iloc[:,1:3]\nF = len(F. columns)", "_____no_output_____" ], [ "N = G.nodes\nN = len(N)", "_____no_output_____" ], [ "num_classes = len(set(labels))\nprint('\\nNumber of classes: ', num_classes)", "\nNumber of classes: 14\n" ], [ "import tensorflow as tf\nimport tensorflow \n\nfrom tensorflow import keras\nfrom keras.layers import Dense\nfrom tensorflow.python.keras.layers import Input, Dense\nfrom keras.layers import Dropout\nfrom tensorflow.keras import layers\nfrom tensorflow.keras import regularizers", "_____no_output_____" ], [ "# Model definition\nX_in = Input(shape=(F, ))\nfltr_in = Input((N, ), sparse=True)\n\ndropout_1 = Dropout(dropout)(X_in)\ngraph_conv_1 = GraphConv(channels,\n activation='relu',\n kernel_regularizer=regularizers.l2(l2_reg),\n use_bias=False)([dropout_1, fltr_in])\n\ndropout_2 = Dropout(dropout)(graph_conv_1)\ngraph_conv_2 = GraphConv(num_classes,\n activation='softmax',\n use_bias=False)([dropout_2, fltr_in])\n", "_____no_output_____" ], [ "import tensorflow.keras as keras", "_____no_output_____" ], [ "# Build model\nmodel = keras.Model(inputs=[X_in, fltr_in], outputs=graph_conv_2)\noptimizer = tf.keras.optimizers.Adam(lr=learning_rate)\nmodel.compile(optimizer=optimizer,\n loss='categorical_crossentropy',\n weighted_metrics=['acc'])\nmodel.summary()\n", "Model: \"functional_5\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_12 (InputLayer) [(None, 2)] 0 \n__________________________________________________________________________________________________\ndropout_6 (Dropout) (None, 2) 0 input_12[0][0] \n__________________________________________________________________________________________________\ninput_13 (InputLayer) [(None, 115)] 0 \n__________________________________________________________________________________________________\ngraph_conv_1 (GraphConv) (None, 16) 32 dropout_6[0][0] \n input_13[0][0] \n__________________________________________________________________________________________________\ndropout_7 (Dropout) (None, 16) 0 graph_conv_1[0][0] \n__________________________________________________________________________________________________\ngraph_conv_2 (GraphConv) (None, 14) 224 dropout_7[0][0] \n input_13[0][0] \n==================================================================================================\nTotal params: 256\nTrainable params: 256\nNon-trainable params: 0\n__________________________________________________________________________________________________\n" ], [ "", "_____no_output_____" ] ], [ [ "# Train the Graph Convolutional Networks", "_____no_output_____" ] ], [ [ "# Train model\n#validation_data = ([X, A], labels_encoded, val_mask)\nmodel.fit([X, A],\n labels_encoded,\n #sample_weight=train_mask,\n epochs=epochs,\n batch_size=N,\n #validation_data=validation_data,\n shuffle=False,\n callbacks=[\n EarlyStopping(patience=es_patience, 
restore_best_weights=True),\n \n ])", "_____no_output_____" ], [ "", "_____no_output_____" ] ], [ [ "# Draw the graph", "_____no_output_____" ] ], [ [ "color_map = {1:'#f09494', 2:'#eebcbc', 3:'#72bbd0', 4:'#91f0a1', 5:'#629fff', 6:'#bcc2f2', \n 7:'#eebcbc', 8:'#f1f0c0', 9:'#d2ffe7', 10:'#caf3a6', 11:'#ffdf55', 12:'#ef77aa', \n 13:'#d6dcff', 14:'#d2f5f0'}\n", "_____no_output_____" ], [ "from matplotlib.pyplot import figure\n#figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')", "_____no_output_____" ], [ "plt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')\noptions = {\n 'edge_color': '#FFDEA2',\n 'width': 1,\n 'with_labels': True,\n 'font_weight': 'regular',\n}", "_____no_output_____" ], [ "colors = [color_map[G.nodes[node]['group']] for node in G]\nsizes = [G.nodes[node]['nodesize']*10 for node in G]", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n\nplt.rcParams[\"figure.figsize\"] = (50,80)\n\nnx.draw(G, node_color=colors, node_size=sizes, with_labels=True, font_weight='bold')\nplt.show()", "_____no_output_____" ], [ "nx.draw(G, node_color=colors, node_size=sizes, pos=nx.spring_layout(G, k=0.25, iterations=10), **options)\nax = plt.gca()\nax.collections[0].set_edgecolor(\"#555555\") \nplt.show()", "_____no_output_____" ], [ "df1 = pd.read_csv(url1, encoding='utf-8')", "_____no_output_____" ], [ "Graphtype = nx.Graph()", "_____no_output_____" ], [ "G = nx.parse_edgelist(df1, delimiter=',', create_using=Graphtype,\n nodetype=int, data=(('weight', float),))", "_____no_output_____" ], [ "G.number_of_nodes", "_____no_output_____" ], [ "\nG = nx.Graph()", "_____no_output_____" ], [ "G.add_nodes_from(df1)", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "FG = nx.Graph()\nFG.add_weighted_edges_from([(1, 2, 0.125), (1, 3, 0.75), (2, 4, 1.2), (3, 4, 0.375)])", "_____no_output_____" ], [ "FG", "_____no_output_____" ], [ "import matplotlib.pyplot as plt", "_____no_output_____" ], [ "G = nx.petersen_graph()\nplt.subplot(121)\n\nnx.draw(G, with_labels=True, font_weight='bold')\n", "_____no_output_____" ], [ "plt.subplot(122)\n\nnx.draw_shell(G, nlist=[range(5, 10), range(5)], with_labels=True, font_weight='bold')", "_____no_output_____" ], [ "df1.head()", "_____no_output_____" ], [ "#First, find out all the features with type object in the data:\n\nobjList = df1.select_dtypes(include = \"object\").columns\nprint (objList)", "_____no_output_____" ], [ "#Label Encoding for object to numeric conversion\n\nfrom sklearn.preprocessing import LabelEncoder\nle = LabelEncoder()\n\nfor feat in objList:\n df1[feat] = le.fit_transform(df1[feat].astype(str))\n\nprint (df1.info())", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "from mlxtend.preprocessing import one_hot\nfrom sklearn.preprocessing import OneHotEncoder\nimport numpy as np", "_____no_output_____" ], [ "X = df1.iloc[:,0:3]", "_____no_output_____" ], [ "arr = X.to_numpy()", "_____no_output_____" ], [ "OHE = OneHotEncoder()", "_____no_output_____" ], [ "y = OHE.fit(arr)", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e70f346499914ff64f803287534160ace653e15e
303,068
ipynb
Jupyter Notebook
notebooks/visualizations/streamlit_network_page.ipynb
btatkinson/NCAA-basketball-Capstone
dbc967471379263c64bad53833504376e069801a
[ "MIT" ]
null
null
null
notebooks/visualizations/streamlit_network_page.ipynb
btatkinson/NCAA-basketball-Capstone
dbc967471379263c64bad53833504376e069801a
[ "MIT" ]
null
null
null
notebooks/visualizations/streamlit_network_page.ipynb
btatkinson/NCAA-basketball-Capstone
dbc967471379263c64bad53833504376e069801a
[ "MIT" ]
null
null
null
204.637407
250,804
0.872425
[ [ [ "\n\n\"\"\"\n\nThis notebook is for creating interactives and static images for our network visualization page\n\n\"\"\"\n\nimport gc\nimport os\nimport pickle\n\nimport numpy as np\nimport pandas as pd\nimport networkx as nx\nimport matplotlib.pyplot as plt\n\nfrom tqdm import tqdm\nfrom fuzzywuzzy import process\nfrom matplotlib.colors import to_rgb\nfrom sklearn.preprocessing import MinMaxScaler\n\nDATA_PATH = '../../data/'\n\ndef save_dict(di_, filename_):\n with open(filename_, 'wb') as f:\n pickle.dump(di_, f)\n\ndef load_dict(filename_):\n with open(filename_, 'rb') as f:\n ret_di = pickle.load(f)\n return ret_di\n\ngc.collect()\n\n", "_____no_output_____" ] ], [ [ "\n### Undirected networks\n\n- Pace\n- Rebounds\n", "_____no_output_____" ] ], [ [ "\n# 1. Load Data\n\n# get opponent team id\ndef get_opponent_team_id(data):\n \n opps = data.copy()[['game_id','team_id']].drop_duplicates().reset_index(drop=True)\n opps['team_AorB'] = opps.groupby(['game_id'])['team_id'].rank('dense').astype(int).map({\n 1:'A',\n 2:'B'\n })\n opps = opps.pivot(index='game_id', columns=['team_AorB'], values='team_id').reset_index()\n opps.columns=['game_id','team_id','opp_id']\n opps2 = opps.copy()\n opps2.columns=['game_id','opp_id','team_id']\n opps = pd.concat([opps, opps2], axis=0).dropna().reset_index(drop=True)\n opps['team_id'] = opps['team_id'].astype(int)\n opps['opp_id'] = opps['opp_id'].astype(int)\n \n return opps\n\n", "_____no_output_____" ], [ "\n\ndef get_possessions(pbox_data):\n \n opponent_ids = get_opponent_team_id(pbox_data.copy())\n ## estimate number of possessions from box score \n poss = pbox_data.groupby(['game_id','team_id'])[['fga','to','fta','oreb']].sum().reset_index()\n ## commonly used possession estimate formula\n ## (FGA – OR) + TO + (0.44 * FTA)\n poss['tm_poss'] = (poss['fga'].copy()-poss['oreb'].copy())+poss['to'].copy()+(0.44*poss['fta'].copy())\n poss = poss.drop(columns=['fga','to','fta','oreb'])\n\n possession_key = opponent_ids.copy().merge(poss, how='left', on=['game_id','team_id'])\n poss = poss.rename(columns={'team_id':'opp_id','tm_poss':'opp_poss'})\n possession_key = possession_key.copy().merge(poss, how='left', on=['game_id','opp_id'])\n possession_key['game_possessions'] = possession_key[['tm_poss','opp_poss']].copy().mean(axis=1)\n possession_key = possession_key.drop(columns=['tm_poss','opp_poss','opp_id'])\n\n return possession_key.sort_values(by='game_id').reset_index(drop=True)\n\ndef add_player_boxscore_features(data):\n \n #pbox\n data['fgm'] = data['fg'].apply(lambda x: x.split('-')[0])\n data['fga'] = data['fg'].apply(lambda x: x.split('-')[-1])\n data['fg3m'] = data['fg3'].apply(lambda x: x.split('-')[0])\n data['fg3a'] = data['fg3'].apply(lambda x: x.split('-')[-1])\n data['ftm'] = data['ft'].apply(lambda x: x.split('-')[0])\n data['fta'] = data['ft'].apply(lambda x: x.split('-')[-1])\n\n data['fgm']= data['fgm'].replace('',0)\n data['fgm'] = data['fgm'].astype(int)\n data['fga']= data['fga'].replace('',0)\n data['fga'] = data['fga'].astype(int)\n data['ftm']= data['ftm'].replace('',0)\n data['ftm'] = data['ftm'].astype(int)\n data['fta']= data['fta'].replace('',0)\n data['fta'] = data['fta'].astype(int)\n \n data['oreb']= data['oreb'].replace('',0)\n data['oreb'] = data['oreb'].astype(int)\n data['dreb']= data['dreb'].replace('',0)\n data['dreb'] = data['dreb'].astype(int)\n data['reb']= data['reb'].replace('',0)\n data['reb'] = data['reb'].astype(int)\n\n data['fg3m']= data['fg3m'].replace('',0)\n data['fg3m'] = 
data['fg3m'].astype(int)\n data['fg3a']= data['fg3a'].replace('',0)\n data['fg3a'] = data['fg3a'].astype(int)\n\n data['fg2m'] = data['fgm'].copy()-data['fg3m'].copy()\n data['fg2a'] = data['fga'].copy()-data['fg3a'].copy()\n \n possess = get_possessions(data.copy())\n data = data.merge(possess, how='left', on=['game_id','team_id'])\n\n data['fg%'] = (data['fgm'].copy()/data['fga'].copy()).fillna(0)\n data['fg2%'] = (data['fg2m'].copy()/data['fg2a'].copy()).fillna(0)\n data['fg3%'] = (data['fg3m'].copy()/data['fg3a'].copy()).fillna(0)\n\n data['eFG%'] = ((data['fgm'].copy()+(data['fg3m'].copy()*0.5))/data['fga'].copy()).fillna(0)\n data['TS%'] = ((data['pts'].copy())/(2*(data['fga'].copy()+(0.44*data['fta'].copy())))).fillna(0)\n # pbox[['fg','fg3m','fga']].dtypes\n data['pts_pm'] = data['pts'].copy()/data['min'].copy()\n data['reb_pm'] = data['reb'].copy()/data['min'].copy()\n data['ast_pm'] = data['ast'].copy()/data['min'].copy()\n data['stl_pm'] = data['stl'].copy()/data['min'].copy()\n data['blk_pm'] = data['blk'].copy()/data['min'].copy()\n data['to_pm'] = data['to'].copy()/data['min'].copy()\n data['pf_pm'] = data['pf'].copy()/data['min'].copy()\n \n ## could be improved with OT markers\n ## percentage of estimated possessions player took part of\n data['player_possessions'] = data['game_possessions'].copy()*(data['min'].copy()/(40*2)) # times 2 because game possessions = \n \n data['pts_pp'] = data['pts'].copy()/data['player_possessions'].copy()\n data['reb_pp'] = data['reb'].copy()/data['player_possessions'].copy()\n data['ast_pp'] = data['ast'].copy()/data['player_possessions'].copy()\n data['stl_pp'] = data['stl'].copy()/data['player_possessions'].copy()\n data['blk_pp'] = data['blk'].copy()/data['player_possessions'].copy()\n data['to_pp'] = data['to'].copy()/data['player_possessions'].copy()\n data['pf_pp'] = data['pf'].copy()/data['player_possessions'].copy()\n \n return data\n\n\ndef clean_player_boxscores(data):\n for stat_col in ['min','pts','oreb','dreb','reb','ast','stl','blk','to','pf']:\n data[stat_col] = data[stat_col].replace('--',0)\n data[stat_col] = data[stat_col].astype(int)\n\n return data\n\n\ndef load_player_boxscore_season(year):\n return add_player_boxscore_features(\\\n clean_player_boxscores(\\\n pd.read_csv(os.path.join(DATA_PATH, f'ESPN/player_boxscores/{year}.csv'))))\n\n\n## used for visualizations\ndef rgb_to_hsl(r, g, b):\n r = float(r)\n g = float(g)\n b = float(b)\n high = max(r, g, b)\n low = min(r, g, b)\n h, s, v = ((high + low) / 2,)*3\n\n if high == low:\n h = 0.0\n s = 0.0\n else:\n d = high - low\n s = d / (2 - high - low) if l > 0.5 else d / (high + low)\n h = {\n r: (g - b) / d + (6 if g < b else 0),\n g: (b - r) / d + 2,\n b: (r - g) / d + 4,\n }[high]\n h /= 6\n\n return h, s, v\n\ndef complementaryColor(my_hex):\n if my_hex[0] == '#':\n my_hex = my_hex[1:]\n rgb = (my_hex[0:2], my_hex[2:4], my_hex[4:6])\n comp = ['%02X' % (255 - int(a, 16)) for a in rgb]\n return ''.join(comp)\n\ndef determine_darker_color(c1, c2):\n r, g, b = c1\n r2, g2, b2 = c2\n hsp1 = 0.299 * (r * r) + 0.587 * (g * g) + 0.114 * (b * b)\n hsp2 = 0.299 * (r2 * r2) + 0.587 * (g2 * g2) + 0.114 * (b2 * b2)\n \n if hsp1 > hsp2:\n # darker is hsp2\n return 0\n elif hsp1 < hsp2:\n return 1\n else:\n print(hsp1, hsp2)\n raise ValueError() # same color\n\n\ndef load_colors():\n \n team_meta = pd.read_csv('team_meta.csv')\n teams_id2conf = 
team_meta.copy().drop_duplicates(subset=['ESPN_team_id'])[['ESPN_team_id','conference_name']].set_index('ESPN_team_id').to_dict()['conference_name']\n\n # fill nas with complementary\n # don't want these being the same\n team_meta['secondary_color'] = np.where(team_meta['primary_color']==team_meta['secondary_color'], np.nan, team_meta['secondary_color'].copy())\n # one special case\n team_meta.loc[team_meta['ESPN_team_id']==57, 'secondary_color'] = 'FA4616'\n team_meta['rgb_primary'] = team_meta['primary_color'].apply(lambda x: to_rgb('#'+x))\n team_meta['secondary_color'] = team_meta['secondary_color'].fillna(team_meta['primary_color'].apply(lambda x: complementaryColor(x)))\n team_meta['rgb_secondary'] = team_meta['secondary_color'].apply(lambda x: to_rgb('#'+str(x)))\n team_meta['primary_darker'] = team_meta.apply(lambda x: determine_darker_color(x.rgb_primary, x.rgb_secondary), axis=1)\n\n team_meta['darker_color'] = np.where(team_meta['primary_darker']==1, team_meta['primary_color'].copy(), team_meta['secondary_color'].copy())\n team_meta['lighter_color'] = np.where(team_meta['primary_darker']==0, team_meta['primary_color'].copy(), team_meta['secondary_color'].copy())\n team_meta['darker_color'] = '#' + team_meta['darker_color'].copy()\n team_meta['lighter_color'] = '#' + team_meta['lighter_color'].copy()\n\n team_dark = team_meta.copy()[['ESPN_team_id','darker_color']].set_index('ESPN_team_id').to_dict()['darker_color']\n team_light = team_meta.copy()[['ESPN_team_id','lighter_color']].set_index('ESPN_team_id').to_dict()['lighter_color']\n\n return team_dark, team_light\n\n\npbox = load_player_boxscore_season(2022)\nopponent_ids = get_opponent_team_id(pbox.copy())\n## create name mapping\nplayers_id2name = pbox.copy().drop_duplicates(subset=['athlete_id'])[['athlete_id','athlete_display_name']].set_index('athlete_id').to_dict()['athlete_display_name']\nplayers_name2id = {v:k for k,v in players_id2name.items()}\n\nplayers_id2team = pbox.copy().drop_duplicates(subset=['athlete_id'],keep='last')[['athlete_id','team_short_display_name']].set_index('athlete_id').to_dict()['team_short_display_name']\n\nteams_id2name = pbox.copy().drop_duplicates(subset=['team_id'])[['team_id','team_short_display_name']].set_index('team_id').to_dict()['team_short_display_name']\nteams_name2id = {v:k for k,v in teams_id2name.items()}\n\n# save_dict(existing_dict, os.path.join(DATA_PATH, 'IDs/kenpom2tname'))\n# save_dict({v:k for k,v in existing_dict.items()}, os.path.join(DATA_PATH, 'IDs/tname2kenpom'))\n\nteam_dark, team_light = load_colors()\nconferences = pd.read_csv('team_meta.csv')\n\n", "/var/folders/6j/0sqk1ykn5f10xfsflg6djktr0000gn/T/ipykernel_1494/1973450377.py:173: DtypeWarning: Columns (6,7,8,9,10,11,12,13,14) have mixed types.Specify dtype option on import or set low_memory=False.\n pbox = load_player_boxscore_season(2022)\n" ], [ "\n# save_dict(teams_id2name, os.path.join(DATA_PATH, 'IDs/teams_id2name'))\n# save_dict(teams_name2id, os.path.join(DATA_PATH, 'IDs/teams_name2id'))\n# save_dict(teams_id2name, '../../src/network_viz_data/teams_id2name')\n# save_dict(teams_name2id, '../../src/network_viz_data/teams_name2id')\n", "_____no_output_____" ] ], [ [ "\n\n#### Pace Network\n", "_____no_output_____" ] ], [ [ "\n# need to create possession estimates\ndef create_pace_net(data):\n \n opp_data = get_opponent_team_id(data.copy())\n game_possessions = get_possessions(data.copy())\n net_data = pd.merge(opp_data.copy(), game_possessions.copy(), how='left', on=['game_id','team_id'])\n net_data = 
net_data.drop_duplicates(subset=['game_id']).reset_index(drop=True)\n net_data['team_id'] = net_data['team_id'].astype(int)\n net_data['opp_id'] = net_data['opp_id'].astype(int)\n \n ## scale weight of edges\n mms = MinMaxScaler()\n net_data['game_possessions'] = mms.fit_transform(net_data['game_possessions'].values.reshape(-1,1))\n \n nodes = list(set(net_data['team_id'].unique()).union(set(net_data.opp_id.unique())))\n edges = [tuple([int(e[0]),int(e[1]),e[2]]) for e in net_data[['team_id','opp_id','game_possessions']].values.copy()]\n\n pace_net = nx.MultiGraph()\n pace_net.add_nodes_from(nodes)\n pace_net.add_weighted_edges_from(edges)\n pace_ranks = pd.Series(nx.pagerank(pace_net,alpha=1)).reset_index()\n pace_ranks.columns=['team_id','prank']\n pace_ranks['team_name'] = pace_ranks['team_id'].map(teams_id2name)\n \n degrees = pd.DataFrame.from_dict(pace_net.degree())\n degrees.columns=['team_id','degree']\n pace_ranks = pace_ranks.copy().merge(degrees, how='left', on='team_id')\n pace_ranks['pace_rating'] = 100000* (pace_ranks['prank'].copy()/pace_ranks['degree'].copy())\n pace_ranks = pace_ranks.sort_values(by='pace_rating', ascending=False)\n \n pace_ranks = pace_ranks.loc[pace_ranks['degree']>22].drop(columns=['prank','degree']) # minimum 25 games played (some DII teams show up otherwise)\n \n pace_ranks['pace_rank'] =pace_ranks['pace_rating'].rank(method='dense', ascending=False)\n pace_ranks['pace_rank']=pace_ranks['pace_rank'].astype(int)\n return pace_ranks.reset_index(drop=True), pace_net\n\npace_series, pace_network = create_pace_net(pbox.copy())\npace_series = pace_series.merge(conferences.rename(columns={'ESPN_team_id':'team_id'})[['team_id','ESPN_conference_id','conference_name']])\n\n", "_____no_output_____" ], [ "\ndef load_kenpom(season):\n kp = pd.read_csv(os.path.join(DATA_PATH, f'kenpom/{season}.csv'))\n return kp.rename(columns={'TeamName':'kenpom_name'})\nk22 = load_kenpom(2022)\n", "_____no_output_____" ] ], [ [ "\n### Need to name match with KenPom to show rankings from there\n", "_____no_output_____" ] ], [ [ "\n# existing_map = {} # starts over\nambiguous = []\n\n\ndef name_matching(names, choices, existing):\n \n for i, name in enumerate(names):\n if i % 25 == 0:\n print(f\"We are through {i} teams\")\n if name in existing:\n continue\n top_5 = process.extract(name, [en for en in choices if en not in list(existing.values())], limit=8)\n if top_5[0][1] >= 98:\n existing[name] = top_5[0][0]\n print(f\"{name} == {top_5[0][0]}\")\n else: \n ## ask\n print(\"See any matches? 
Use 0 or blank to continue.\")\n print(name, \" \", top_5)\n print(\"Otherwise use numbers 1-8\")\n resp = input()\n if not resp.isdigit():\n print(\"invalid response, skipping\")\n ambiguous.append(name) \n elif int(resp)>8:\n print(\"invalid response, skipping\")\n ambiguous.append(name) \n elif int(resp)==0:\n print(\"skipping\")\n ambiguous.append(name) \n elif int(resp)=='':\n print(\"skipping\")\n ambiguous.append(name) \n else:\n resp = int(resp)\n existing[name] = top_5[resp-1][0]\n print(f\"{name} == {top_5[resp-1][0]}\")\n \n return existing\n\n# knames = list(k22['kenpom_name'].unique())\n# espn_names = list(pace_series['team_name'].unique())\n\n# existing_dict = load_dict(os.path.join(DATA_PATH, 'IDs/kenpom2tname'))\n# existing_dict = name_matching(knames, espn_names, existing_dict) # if new names are needed\n", "_____no_output_____" ], [ "\n# save_dict(existing_dict, os.path.join(DATA_PATH, 'IDs/kenpom2tname'))\n# save_dict({v:k for k,v in existing_dict.items()}, os.path.join(DATA_PATH, 'IDs/tname2kenpom'))\n\n", "_____no_output_____" ], [ "# for streamlit\n# sorted(pace_series.conference_name.unique())\n", "_____no_output_____" ], [ "\n\n", "_____no_output_____" ] ], [ [ "\n#### Code for graphing conference undirected pace graphs\n", "_____no_output_____" ] ], [ [ "\ntname2kenpom = load_dict(os.path.join(DATA_PATH, 'IDs/tname2kenpom'))\npace_series['kenpom_name'] = pace_series['team_name'].map(tname2kenpom)\n## merge kenpom\npace_series = pace_series.merge(k22[['kenpom_name','Tempo','AdjTempo','RankTempo','RankAdjTempo']].copy(), how='left', on='kenpom_name')\npace_series[['pace_rank','RankTempo','RankAdjTempo']].corr('spearman')\n\n", "_____no_output_____" ], [ "nx.write_gml(pace_network, \"../../src/network_viz_data/pace_graph.gml\", stringizer=str)\npace_network = nx.read_gml('../../src/network_viz_data/pace_graph.gml')\n# Read graph", "_____no_output_____" ], [ "# pace_series.to_csv('../../src/network_viz_data/pace_df.csv',index=False)", "_____no_output_____" ], [ "subteams = pace_series.copy().loc[pace_series['conference_name']=='ACC'].reset_index(drop=True)\nsubnodes = list(subteams['team_id'].unique())\nsub_g = pace_network.subgraph([int(sn) for sn in subnodes])\nsub_g.nodes()\n# nx.draw(sub_g)", "_____no_output_____" ], [ "\nplt.style.use('ggplot')\ndef draw_pace_subgraph(pace_df, conf):\n \n # pace_series\n subteams = pace_df.copy().loc[pace_df['conference_name']==conf].reset_index(drop=True)\n prnk = pace_df.copy().set_index('team_id').to_dict()['pace_rating']\n subnodes = list(subteams['team_id'].unique())\n\n sub_g = pace_network.subgraph([str(sn) for sn in subnodes])\n\n fig, axes= plt.subplots(2,1, figsize=(14, 18))\n ax1 = axes[0]\n sub_wedges = sub_g.edges(data=\"weight\")\n sub_edges = sub_g.edges()\n sub_nodes = sub_g.nodes()\n\n pos = nx.circular_layout(sub_g)\n weights = [e * 5 for u,v,e in sub_wedges]\n colors = [team_dark[int(n)] for n in sub_nodes]\n # fig, ax = plt.subplots(figsize=(22,16))\n nx.draw(sub_g, edge_color=weights, pos=pos, labels={n:teams_id2name[int(n)] for n in sub_nodes}, \n node_color=[prnk[int(n)] for n in sub_nodes], node_size=[7e2*prnk[int(n)] for n in sub_nodes], font_size=10, font_color=\"black\",\n cmap=plt.cm.bwr, edge_cmap=plt.cm.bwr, ax=ax1)\n\n ax2=axes[1]\n ax2.set_title(\"Pace Network vs. 
KenPom \\nPace Ratings\",fontsize=24)\n ax2.set_xlabel(\"Pace Network\\nPageRank\", fontsize=16)\n ax2.set_ylabel(\"KenPom Adjusted\\n Tempo\", fontsize=16)\n ax2.scatter(subteams.pace_rating, subteams.Tempo, c=[prnk[int(n)] for n in subteams.team_id.values],\n cmap=plt.cm.bwr, s=350)\n n = subteams.team_name\n for i, txt in enumerate(n):\n ax2.annotate(txt, (subteams.pace_rating[i]+0.03, subteams.Tempo[i]))\n\n plt.show()\n \n return\n\ndraw_pace_subgraph(pace_series, 'SEC')\n\n", "_____no_output_____" ], [ "\n\npace_series.loc[pace_series['conference_name']=='ACC']\n", "_____no_output_____" ], [ "\npbox\n", "_____no_output_____" ], [ "\nprnk[153]\n", "_____no_output_____" ], [ "\n\n# nx.draw(pace_network)\n", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e70f37402f3a4fc77ae63828a3eb4fc2f67b115c
331,576
ipynb
Jupyter Notebook
Notebooks/Spectral Graph Drawing.ipynb
darkeclipz/jupyter-notebooks
5de784244ad9db12cfacbbec3053b11f10456d7e
[ "Unlicense" ]
1
2018-08-28T12:16:12.000Z
2018-08-28T12:16:12.000Z
Notebooks/Spectral Graph Drawing.ipynb
darkeclipz/jupyter-notebooks
5de784244ad9db12cfacbbec3053b11f10456d7e
[ "Unlicense" ]
null
null
null
Notebooks/Spectral Graph Drawing.ipynb
darkeclipz/jupyter-notebooks
5de784244ad9db12cfacbbec3053b11f10456d7e
[ "Unlicense" ]
null
null
null
598.512635
41,080
0.938035
[ [ [ "%pylab inline\nimport numpy as np\nfrom scipy.linalg import eigh", "Populating the interactive namespace from numpy and matplotlib\n" ] ], [ [ "# Graph Drawing and Energy Minimization $^{[\\mathrm{ref. 1}]}$", "_____no_output_____" ], [ "Let $G =(V,E)$ be some undirected graph. Say $|\\ V\\ |=m$. The idea is to assign a point $\\rho(v_i)$ in $\\mathbb{R}^n$ to the vertex $v_i \\in V$, for every $v_i \\in V$, and to draw a line segment between the points $\\rho(v_i)$ and $\\rho(v_j)$. Thus, a _graph drawing_ is a function $\\rho : V \\rightarrow \\mathbb{R}^n$.\n\nWe define the _matrix of a graph drawing $\\rho$ (in $\\mathbb{R}^n$)_ as a $m\\times n$ matrix $R$ whose _i_th row consists of the row vector $\\rho(v_i)$ corresponding to the point representation $v_i$ in $\\mathbb{R}^n$. Typically, we want $n<m$; in fact $n$ should be much smaller than $n$.\n\nA representation is _balanced_ iff the sum of the entries of every column is zero, that is $\\boldsymbol{1}^\\mathrm{T}R=0$.", "_____no_output_____" ], [ "**Proposition:** Let $G = (V,W)$ be a weighted graph, with $|\\ V\\ |=m$ and $W$ an $n\\times n$ symmetric matrix, and let $R$ be the matrix of a graph drawing $\\rho$ of G in $\\mathbb{R}^n$ (a $m\\times n$ matrix). If $L=D-W$ is the unnormalized Laplacian matrix associated with $W$, then: $ \\mathcal{E}(R)=\\mathrm{tr}(R^\\mathrm{T}LR).$\n", "_____no_output_____" ], [ "**Theorem:** Let $G=(V,W)$ be a weighted graph with $|\\ V\\ |=m$. If $L=D-W$ is the (unnormalized) Laplacian of G, and if the eigenvalues of $L$ are $0=\\lambda_1<\\lambda_2\\leq\\lambda_3\\leq\\ldots\\leq\\lambda_m$, then the minimal energy of any balanced orthogonal graph drawing of $G$ in $\\mathbb{R}^n$ is equal to $\\lambda_2+\\ldots+\\lambda_{n+1}$ (in particular, this implies that $n<m$). 
The $m\\times n$ matrix $R$ consisting of any unit eigenvectors $u_2,\\ldots,u_{n-1}$ associated with $\\lambda_2\\leq\\ldots\\leq\\lambda_{n+1}$ yields a balanced orthognal graph drawing of minimal energy; it satisfies the condition $R^\\mathrm{T}R=I$.", "_____no_output_____" ] ], [ [ "def spectral(A):\n D = np.eye(len(A)) * np.array(np.sum(A, axis=1)) # degree matrix\n L = D - A # Laplacian L = degree matrix - adjacency matrix\n w, v = eigh(L) # w = eigenvalues, v = eigenvectors\n x = v[:,1]; y = v[:,2] # spectral coordinates\n return {i: (x[i], y[i]) for i in range(len(A))} \n\ndef spectral_matrix(A):\n D = np.eye(len(A)) * np.array(np.sum(A, axis=1)) # degree matrix\n L = D - A # Laplacian L = degree matrix - adjacency matrix\n w, v = eigh(L) # w = eigenvalues, v = eigenvectors\n x = v[:,1]; y = v[:,2] # spectral coordinates\n return np.matrix([x,y])", "_____no_output_____" ] ], [ [ "# Examples of Graph Drawings", "_____no_output_____" ], [ "## Plotting", "_____no_output_____" ] ], [ [ "def gplot(A, S, size=6,c='b',title=''):\n plt.figure(figsize=(size,size))\n for i in range(len(A)):\n for j in range(len(A)):\n if A[i,j] > 0:\n plot((S[i][0], S[j][0]), (S[i][1], S[j][1]), c=c, lw=1)\n x, y = zip(*S.values())\n plot(x, y, 'o', c=c);\n if title: plt.title(title)", "_____no_output_____" ] ], [ [ "## Example 1: Square", "_____no_output_____" ], [ "Consider the graph with four nodes whose adjacency matrix is:\n\n$$ A = \\begin{bmatrix} 0 & 1 & 1 & 0 \\\\ 1 & 0 & 0 & 1 \\\\ 1 & 0 & 0 & 1 \\\\ 0 & 1 & 1 & 0 \\end{bmatrix}$$", "_____no_output_____" ] ], [ [ "A = np.matrix([[0,1,1,0],[1,0,0,1],[1,0,0,1],[0,1,1,0]]) # adjancency matrix\nS = spectral(A)\ngplot(A,S)", "_____no_output_____" ] ], [ [ "## Example 2: 5-points", "_____no_output_____" ], [ "Another example with a graph consisting of $5$ points.", "_____no_output_____" ] ], [ [ "A = np.matrix([[0,1,1,0,0],[1,0,1,1,1],[1,1,0,1,0],[0,1,1,0,1],[0,1,0,1,0]])\nS = spectral(A)\ngplot(A,S)", "_____no_output_____" ] ], [ [ "## Example 3: Complete graph ($K_n)$ of order $n$", "_____no_output_____" ], [ "In a complete graph $K_n$ of order $n$, all nodes are connected to each other. The adjancency matrix for $K_n$, can be defined as $K_n = \\boldsymbol{1}-I$. Which is a square matrix of size $n$, except for the diagonal, which is zero.", "_____no_output_____" ] ], [ [ "def K(n): return np.ones(n) - np.eye(n)", "_____no_output_____" ] ], [ [ "Notice that we want to have $n<m$ where $n$ is much smaller. However, we can see that $m=n-1$, which violates that $n$ should be much smaller. ", "_____no_output_____" ] ], [ [ "for i in range(3, 10):\n A = K(i)\n S = spectral(A)\n gplot(A,S,title='Spectral graph of $K_{}$'.format(i))", "_____no_output_____" ], [ "A = K(14)\nS = spectral(A)\ngplot(A,S)", "_____no_output_____" ] ], [ [ "## Example 5: Ring graph", "_____no_output_____" ], [ "A ring graph is a graph with one cycle, and where the degree of all the nodes is exactly 2. We can create a ring graph adjacency matrix $M$ by constructing the identity matrix $I$. If we shift (and wrap around) all the values in the rows of the matrix, We get that we are going from $A\\rightarrow B$, $B\\rightarrow C$, and so forth.", "_____no_output_____" ] ], [ [ "A = np.roll(np.eye(4), 1, axis=1)\nA", "_____no_output_____" ] ], [ [ "Because this is an undirected graph, we should also set the adjacencies for $B\\rightarrow A$, $C\\rightarrow B$, and so forth. 
this is easly achieved with $A+A^T$, because the adjacency matrix of an undirected graph is symmetrical.", "_____no_output_____" ] ], [ [ "A + A.T", "_____no_output_____" ] ], [ [ "This gives the following function that construct an adjacency matrix for a ring graph of size $n$:", "_____no_output_____" ] ], [ [ "def ring(n):\n A = np.roll(np.eye(n), 1, axis=1)\n return A + A.T", "_____no_output_____" ] ], [ [ "If we create a spectral graph of the ring graph, we get the following result:", "_____no_output_____" ] ], [ [ "for i in range(3, 12, 2):\n A = ring(i)\n S = spectral(A)\n gplot(A,S, title='Ring graph ($n={}$)'.format(i))", "_____no_output_____" ] ], [ [ "# References", "_____no_output_____" ], [ "The following references have been used:", "_____no_output_____" ], [ "1. Chapter 18, Spectral Graph Drawing (http://www.cis.upenn.edu/~cis515/cis515-15-graph-drawing.pdf)\n2. https://en.wikipedia.org/wiki/Laplacian_matrix\n3. https://en.wikipedia.org/wiki/Degree_matrix\n4. https://www.johndcook.com/blog/2016/01/15/spectral-coordinates-in-python/", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
e70f3bb8491393c83e0852651849ffb273ceedb2
10,042
ipynb
Jupyter Notebook
cmws/examples/timeseries_real/experimental_notebooks/Untitled.ipynb
tuananhle7/hmws
175f77a2b386ce5a9598b61c982e053e7ecff8a2
[ "MIT" ]
null
null
null
cmws/examples/timeseries_real/experimental_notebooks/Untitled.ipynb
tuananhle7/hmws
175f77a2b386ce5a9598b61c982e053e7ecff8a2
[ "MIT" ]
null
null
null
cmws/examples/timeseries_real/experimental_notebooks/Untitled.ipynb
tuananhle7/hmws
175f77a2b386ce5a9598b61c982e053e7ecff8a2
[ "MIT" ]
null
null
null
36.919118
2,835
0.511352
[ [ [ "# Data", "_____no_output_____" ] ], [ [ "# path = \"/om/user/lbh/wsvae/examples/timeseries/UCR_TS_Archive_2015.zip\"\npath = \"/om/user/lbh/wsvae/examples/timeseries/UCR_TS_Archive_2015/data.p\"", "_____no_output_____" ], [ "import numpy as np\nimport random\nimport pickle\nfrom tqdm import tqdm", "_____no_output_____" ], [ "def lukes_make_data():\n \"\"\"\n https://github.com/insperatum/wsvae/blob/master/examples/timeseries/main-timeseries.py#L81\n \"\"\"\n # output_dir = \"./plots/timeseries_filtered\"\n # os.makedirs(output_dir, exist_ok=True)\n\n # Init\n n_data = 2000\n n_timepoints=256\n np.random.seed(0)\n random.seed(0)\n\n # Read file\n with open(path, \"rb\") as f:\n d_in = pickle.load(f)\n\n # Make arrays\n data = []\n testdata = []\n all_timeseries = [x for X in d_in for x in X['data']]\n # if args.shuffle: random.shuffle(all_timeseries)\n for x in tqdm(all_timeseries):\n #if len(x)<n_timepoints: continue\n #lower = math.floor((len(x)-n_timepoints)/2)\n #upper = len(x) - math.ceil((len(x)-n_timepoints)/2)\n if len(x)<n_timepoints+1: continue\n lower = 0\n upper = n_timepoints\n x = np.array(x)\n\n # Centre the timeseries\n if x.std()==0: continue\n x = (x - x.mean()) / x.std()\n x = list(x)\n\n # Append\n data.append(x[lower:upper])\n testdata.append(x[upper:upper+100])\n\n # Break\n if len(data) > n_data*2: break\n\n\n # Add more datasets\n # -- Airlines\n airlines=np.array([112, 115, 118, 125, 132, 130, 129, 125, 121, 128, 135, 141, 148, 148, 148, 142, 136, 127, 119, 111, 104, 111, 118, 116, 115, 120, 126, 133, 141, 138, 135, 130, 125, 137, 149, 159, 170, 170, 170, 164, 158, 145, 133, 123, 114, 127, 140, 142, 145, 147, 150, 164, 178, 170, 163, 167, 172, 175, 178, 188, 199, 199, 199, 191, 184, 173, 162, 154, 146, 156, 166, 168, 171, 175, 180, 186, 193, 187, 181, 182, 183, 200, 218, 224, 230, 236, 242, 225, 209, 200, 191, 181, 172, 183, 194, 195, 196, 196, 196, 216, 236, 235, 235, 232, 229, 236, 243, 253, 264, 268, 272, 254, 237, 224, 211, 195, 180, 190, 201, 202, 204, 196, 188, 211, 235, 231, 227, 230, 234, 249, 264, 283, 302, 297, 293, 276, 259, 244, 229, 216, 203, 216, 229, 235, 242, 237, 233, 250, 267, 268, 269, 269, 270, 292, 315, 339, 364, 355, 347, 329, 312, 293, 274, 255, 237, 257, 278, 281, 284, 280, 277, 297, 317, 315, 313, 315, 318, 346, 374, 393, 413, 409, 405, 380, 355, 330, 306, 288, 271, 288, 306, 310, 315, 308, 301, 328, 356, 352, 348, 351, 355, 388, 422, 443, 465, 466, 467, 435, 404, 375, 347, 326, 305, 320, 336, 338, 340, 329, 318, 340, 362, 355, 348, 355, 363, 399, 435, 463, 491, 498, 505, 454, 404, 381, 359, 334, 310, 323, 337, 348, 360, 351, 342, 374, 406, 401, 396, 408, 420, 446, 472, 510, 548, 553, 559, 511, 463, 435, 407, 384, 362, 383, 405, 411, 417, 404, 391, 405, 419, 440, 461, 466, 472, 503, 535, 578, 622, 614, 606, 557, 508, 484, 461, 425, 390, 411]).astype(np.float32)\n airlines=(airlines - airlines.mean())/airlines.std()\n airlines = airlines.tolist()\n data.append(airlines[:n_timepoints])\n testdata.append(airlines[n_timepoints:])\n\n # -- Mauna\n mauna=np.array([-26.4529, -26.4529, -26.4529, -26.4529, -24.7129, -24.6629, -26.3029, -27.2329, -28.8229, -26.5829, -24.0029, -25.6129, -28.8229, -23.1329, -22.1329, -22.5729, -23.9829, -27.1629, -24.4629, -23.6229, -22.3829, -23.5829, -25.1529, -23.6029, -22.4729, -21.1529, -21.5529, -22.3029, -20.7729, -19.9229, -22.4229, -24.3929, -25.9529, -20.2729, -23.4629, -25.4629, -25.2929, -24.4829, -23.4529, -22.7229, -21.2729, -20.0029, -20.2929, -24.3529, -24.8629, -23.2929, -22.7429, -21.5429, 
-19.7729, -18.4129, -19.1229, -17.7429, -17.1629, -18.0729, -19.6129, -22.9029, -20.2029, -17.1429, -16.8029, -18.0229, -21.8329, -18.1629, -17.7429, -15.5029, -14.7829, -15.4629, -16.2729, -19.7829, -20.3829, -18.0429, -17.1029, -16.1829, -15.2329, -14.5029, -17.4729, -19.0629, -17.0329, -14.9829, -14.3829, -13.2429, -18.8029, -17.3629, -14.4129, -12.0929, -14.1129, -15.8429, -17.3229, -15.6629, -13.6229, -10.6629, -9.6829, -10.0929, -9.5129, -9.0729, -9.9129, -10.9829, -12.7629, -14.7929, -10.1229, -8.2029, -10.2529, -12.1029, -13.8229, -12.6729, -10.4229, -9.6029, -8.6629, -7.5829, -7.2929, -7.8229, -9.1129, -11.2229, -10.4829, -9.2429, -5.4229, -9.4129, -7.1929, -4.1529, -4.2729, -5.6229, -7.4829, -9.4029, -8.2429, -5.9329, -5.4029, -2.6929, -4.4329, -8.3029, -6.8729, -5.4329, -1.3929, -0.9929, -5.0629, -3.9529, -2.9329, 0.3471, 0.0871, -1.6729, -3.8029, -2.5529, -1.4129, 0.5371, 1.3971, 1.1871, -2.3429, -4.1929, -1.6729, 0.3571, 0.9371, 0.2271, 0.8271, 2.3471, 3.1171, 4.9171, 5.2671, -0.8129, 0.8171, 3.8371, 6.1871, 6.7671, 2.5271, 0.9271, 0.6371, 4.7971, 5.6971, 7.3771, 5.7771, 2.6971, 2.0071, 4.7371, 5.8571, 6.3071, 8.8271, 9.0871, 4.1971, 8.2671, 11.4271, 12.0571, 10.2271, 8.2771, 9.1771, 10.5971, 13.2571, 13.5071, 11.7371, 9.5071, 9.1371, 10.3671, 12.5371, 13.2271, 14.9971, 12.6571, 8.7971, 12.0471, 12.5571, 13.5871, 17.1771, 11.8671, 17.0871, 14.8671, 12.8371, 13.2371, 14.5371, 14.9971, 17.2971, 18.1171, 15.4071, 11.8171, 14.6371, 16.7471, 19.5171, 18.7871, 13.6771, 15.8971, 15.5871, 17.3971, 18.5371, 19.8871, 21.4871, 17.4371, 18.5971, 20.1671, 21.0171, 21.8371, 24.1871, 18.6071, 20.2671, 23.9871, 26.4471, 27.1271, 25.4771, 22.0671, 25.9871, 28.9771, 28.8371, 24.5071, 25.8471, 26.9771, 29.4971, 24.4571, 24.5671, 26.1271, 28.1171, 29.9571, 27.3871, 25.7971, 31.3571, 33.9471, 32.3371, 30.8271, 30.8371, 32.1871, 33.5371, 33.5371, 33.5371, 33.5371])\n mauna=(mauna-mauna.mean())/mauna.std()\n mauna=mauna.tolist()\n data.append(mauna[:n_timepoints])\n testdata.append(mauna[n_timepoints:])\n\n data_novel=data[n_data:]\n data=data[:n_data]\n# testdata_novel=testdata[n_data:]\n# testdata=testdata[:n_data]\n print(\"Loaded\", len(data), \"timeseries\")\n return data, data_novel", "_____no_output_____" ], [ "train, test = lukes_make_data()", "_____no_output_____" ], [ "import torch", "_____no_output_____" ], [ "torch.tensor(train, device=)", "_____no_output_____" ], [ "len(train)", "_____no_output_____" ], [ "len(testdata_novel)", "_____no_output_____" ], [ "type(data)", "_____no_output_____" ], [ "len(data)", "_____no_output_____" ], [ "type(data[0])", "_____no_output_____" ], [ "len(data[0])", "_____no_output_____" ], [ "len(data_novel)", "_____no_output_____" ], [ "len(testdata_novel)", "_____no_output_____" ], [ "len(testdata)", "_____no_output_____" ], [ "len(testdata[0])", "_____no_output_____" ], [ "len(data)", "_____no_output_____" ], [ "import matplotlib.pyplot as plt", "_____no_output_____" ], [ "plt.plot(testdata[-1])", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e70f3c23d6bc8f25ee490bc9625ea2ee13c9b606
10,823
ipynb
Jupyter Notebook
match_rxrx_kaggle_ids.ipynb
alxndrkalinin/kaggle-rcic-1st
f228f0f68d4388f25cf415d799df9dea3b9ab88e
[ "MIT" ]
null
null
null
match_rxrx_kaggle_ids.ipynb
alxndrkalinin/kaggle-rcic-1st
f228f0f68d4388f25cf415d799df9dea3b9ab88e
[ "MIT" ]
null
null
null
match_rxrx_kaggle_ids.ipynb
alxndrkalinin/kaggle-rcic-1st
f228f0f68d4388f25cf415d799df9dea3b9ab88e
[ "MIT" ]
null
null
null
38.379433
1,729
0.592257
[ [ [ "import numpy as np\nimport pandas as pd\nfrom pathlib import Path\nfrom tqdm import tqdm", "_____no_output_____" ], [ "root = Path('/home/user/data/rxrx1/images/')\ntrain = pd.read_csv(root / \"train.csv\")\ntest = pd.read_csv(root / \"test.csv\")\n\ntrain_controls = pd.read_csv(root / \"train_controls.csv\")\ntest_controls = pd.read_csv(root / \"test_controls.csv\")\n\nrxrx_config = pd.read_csv(root.parent / \"rxrx1.csv\")", "_____no_output_____" ], [ "def replace_with_rxrx_id(kaggle_df, rxrx_df):\n for i in kaggle_df['id_code'].unique():\n rxrx_sirna_id = rxrx_df.loc[rxrx_df['well_id']==i, 'sirna_id']\n if not rxrx_sirna_id.empty and len(rxrx_sirna_id)==2:\n kaggle_df.loc[kaggle_df['id_code']==i, 'sirna'] = list(rxrx_sirna_id)[0]\n else:\n print(f'No match for experiment {i}')\n print(f'From shape {kaggle_df.shape} dropping {kaggle_df[kaggle_df[\"id_code\"]==i]}')\n kaggle_df.drop(kaggle_df[kaggle_df['id_code']==i].index, axis=0, inplace=True)\n print(f'New shape {kaggle_df.shape}')\n return kaggle_df\n\ndef get_ids_dict(kaggle_df, rxrx_df):\n ids = {}\n for i in kaggle_df['id_code'].unique():\n rxrx_sirna_id = rxrx_df.loc[rxrx_df['well_id']==i, 'sirna_id']\n if not rxrx_sirna_id.empty and len(rxrx_sirna_id)==2:\n print(kaggle_df.loc[kaggle_df['id_code']==i, 'sirna'])\n print(list(rxrx_sirna_id)[0])\n break\n ids[kaggle_df.loc[kaggle_df['id_code']==i, 'sirna']] = list(rxrx_sirna_id)[0]\n else:\n print(f'No match for experiment {i}')\n return ids", "_____no_output_____" ] ], [ [ "### Check sizes of Kaggle and RxRx labels ", "_____no_output_____" ] ], [ [ "train['well_type'] = 'unknown'\ntest['well_type'] = 'unknown'\nkaggle_all_train = pd.concat([train, train_controls])\nkaggle_all_test = pd.concat([test, test_controls])\nprint(f'All Kaggle train ids: {kaggle_all_train[\"id_code\"].nunique()}')\nprint(f'All Kaggle test ids: {kaggle_all_test[\"id_code\"].nunique()}')", "All Kaggle train ids: 40614\nAll Kaggle test ids: 22145\n" ], [ "rxrx_train = rxrx_config.loc[rxrx_config['dataset']=='train', :]\nrxrx_test = rxrx_config.loc[rxrx_config['dataset']=='test', :]\n\nprint(f'All train ids: {rxrx_train[\"well_id\"].nunique()}')\nprint(f'All test ids: {rxrx_test[\"well_id\"].nunique()}')", "All train ids: 40612\nAll test ids: 22143\n" ], [ "# experiments missing from the RxRx dataset\nprint(f'Missing from RxRx train: \\\n{set(kaggle_all_train[\"id_code\"]).difference(set(rxrx_train[\"well_id\"]))}')\nprint(f'Missing from RxRx test: \\\n{set(kaggle_all_test[\"id_code\"]).difference(set(rxrx_test[\"well_id\"]))}')", "Missing from RxRx train: {'HUVEC-06_1_B18', 'RPE-04_3_E04'}\nMissing from RxRx test: {'HUVEC-18_3_D23', 'RPE-09_2_J16'}\n" ] ], [ [ "### Get dictionary of correspondence", "_____no_output_____" ], [ "### Replace Kaggle well IDs with RxRx", "_____no_output_____" ] ], [ [ "kaggle_all_train = replace_with_rxrx_id(kaggle_all_train, rxrx_train)\nkaggle_train = kaggle_all_train.loc[kaggle_all_train['well_type']=='unknown', :]\nkaggle_train.drop(['well_type'], axis=1, inplace=True)\nkaggle_train.to_csv(root / \"kaggle_train.csv\", index=False)\nkaggle_train_controls = kaggle_all_train[kaggle_all_train['well_type']!='unknown']\nkaggle_train_controls.to_csv(root / \"kaggle_train_controls.csv\", index=False)", "No match for experiment HUVEC-06_1_B18\nFrom shape (40614, 6) dropping id_code experiment plate well sirna well_type\n13305 HUVEC-06_1_B18 HUVEC-06 1 B18 sirna_777 unknown\nNew shape (40613, 6)\nNo match for experiment RPE-04_3_E04\nFrom shape (40613, 6) dropping id_code 
experiment plate well sirna well_type\n29378 RPE-04_3_E04 RPE-04 3 E04 sirna_612 unknown\nNew shape (40612, 6)\n" ], [ "kaggle_all_test = replace_with_rxrx_id(kaggle_all_test, rxrx_test)\nkaggle_test = kaggle_all_test.loc[kaggle_all_test['well_type']=='unknown', :]\nkaggle_test.drop(['well_type'], axis=1, inplace=True)\nkaggle_test.to_csv(root / \"kaggle_test.csv\", index=False)\nkaggle_test_controls = kaggle_all_test[kaggle_all_test['well_type']!='unknown']\nkaggle_test_controls_cols = kaggle_test_controls.columns.to_list()\nkaggle_test_controls = kaggle_test_controls[kaggle_test_controls_cols[:-2] +\n kaggle_test_controls_cols[-2:]]\nkaggle_test_controls.to_csv(root / \"kaggle_test_controls.csv\", index=False)", "No match for experiment HUVEC-18_3_D23\nFrom shape (22145, 6) dropping id_code experiment plate well well_type sirna\n6149 HUVEC-18_3_D23 HUVEC-18 3 D23 unknown NaN\nNew shape (22144, 6)\nNo match for experiment RPE-09_2_J16\nFrom shape (22144, 6) dropping id_code experiment plate well well_type sirna\n14828 RPE-09_2_J16 RPE-09 2 J16 unknown NaN\nNew shape (22143, 6)\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ] ]
e70f522450dde4bad967005046422ef6796250ff
4,451
ipynb
Jupyter Notebook
misc/ipynb/cat-cat.ipynb
oneoffcoder/py-pair
df9be0ad969cf7c4ce2c037029fa5e513919655c
[ "Apache-2.0" ]
16
2020-11-19T14:18:10.000Z
2022-02-12T03:27:50.000Z
misc/ipynb/cat-cat.ipynb
oneoffcoder/py-pair
df9be0ad969cf7c4ce2c037029fa5e513919655c
[ "Apache-2.0" ]
null
null
null
misc/ipynb/cat-cat.ipynb
oneoffcoder/py-pair
df9be0ad969cf7c4ce2c037029fa5e513919655c
[ "Apache-2.0" ]
1
2020-11-26T22:39:44.000Z
2020-11-26T22:39:44.000Z
29.282895
99
0.312289
[ [ [ "from random import choice\n\n\nx_domain = ['a', 'b', 'c']\ny_domain = ['a', 'b']\n\nget_x = lambda: choice(x_domain)\nget_y = lambda: choice(y_domain)\nget_data = lambda: {f'x{i}':v for i, v in enumerate((get_x(), get_y(), get_x(), get_y()))}\n\ndata = [get_data() for _ in range(10)]", "_____no_output_____" ], [ "from itertools import combinations, chain\n\ndef to_count(d):\n def count(k1, k2):\n tups = [(k1, d[k1]), (k2, d[k2])]\n tups = sorted(tups, key=lambda t: t[0])\n \n return (tups[0][0], tups[1][0], tups[0][1], tups[1][1]), 1\n \n return [count(k1, k2) for k1, k2 in combinations(d.keys(), 2)]\n \nt = map(lambda d: to_count(d), data)\nt = chain(*t)", "_____no_output_____" ], [ "list(t)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
e70f5d29ecf1a73ce9c2bab994ea9a4b198e29b3
112,143
ipynb
Jupyter Notebook
models/baseline_hierarchical/baseline-myotis.ipynb
FrankFundel/BAT
70c422d9af093a5c5e4d7486f7a206bc87478a9e
[ "MIT" ]
null
null
null
models/baseline_hierarchical/baseline-myotis.ipynb
FrankFundel/BAT
70c422d9af093a5c5e4d7486f7a206bc87478a9e
[ "MIT" ]
null
null
null
models/baseline_hierarchical/baseline-myotis.ipynb
FrankFundel/BAT
70c422d9af093a5c5e4d7486f7a206bc87478a9e
[ "MIT" ]
null
null
null
68.968635
62,108
0.718743
[ [ [ "# Dataset", "_____no_output_____" ] ], [ [ "import sys\nsys.path.append('../../datasets/')\nfrom prepare_individuals import prepare, germanBats\nimport matplotlib.pyplot as plt\nimport torch\nimport numpy as np\nimport tqdm\nimport pickle\n\nclasses = germanBats", "_____no_output_____" ], [ "patch_len = 44 # 88 bei 44100, 44 bei 22050 = 250ms ~ 25ms\n\nX_train, Y_train, X_test, Y_test, X_val, Y_val = prepare(\"../../datasets/prepared.h5\", classes, patch_len)", "100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 18/18 [00:15<00:00, 1.18it/s]\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 18/18 [00:06<00:00, 2.66it/s]\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 18/18 [00:03<00:00, 4.52it/s]\n" ], [ "with open('../call_nocall.indices', 'rb') as file:\n indices, labels = pickle.load(file)\n \n train_indices = indices[0][:len(X_train)]\n test_indices = indices[1][:len(X_test)]\n val_indices = indices[2][:len(X_val)]\n \n X_train = X_train[train_indices]\n X_test = X_test[test_indices]\n X_val = X_val[val_indices]\n \n Y_train = Y_train[train_indices]\n Y_test = Y_test[test_indices]\n Y_val = Y_val[val_indices]", "_____no_output_____" ], [ "print(\"Total calls:\", len(X_train) + len(X_test) + len(X_val))\nprint(X_train.shape, Y_train.shape)", "Total calls: 33868\n(19839, 44, 257) (19839,)\n" ], [ "'''species = [0, 1]\ndef filterSpecies(s, X, Y):\n idx = np.in1d(Y, s)\n return X[idx], Y[idx]\n\nX_train, Y_train = filterSpecies(species, X_train, Y_train)\nX_test, Y_test = filterSpecies(species, X_test, Y_test)\nX_val, Y_val = filterSpecies(species, X_val, Y_val)\n\nclasses = {\n \"Rhinolophus ferrumequinum\": 0,\n \"Rhinolophus hipposideros\": 1,\n}'''\n\nspecies = np.asarray([7, 7, 0, 1, 2, 3, 4, 5, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7])\n\nY_train = species[Y_train]\nY_test = species[Y_test]\nY_val = species[Y_val]\n\nclasses = {\n \"Myotis daubentonii\": 0,\n \"Myotis brandtii\": 1,\n \"Myotis mystacinus\": 2,\n \"Myotis emarginatus\": 3,\n \"Myotis nattereri\": 4,\n \"Myotis myotis\": 5,\n \"Myotis dasycneme\": 6,\n \"Other\": 7,\n}\n\nprint(\"Total calls:\", len(X_train) + len(X_test) + len(X_val))\nprint(X_train.shape, Y_train.shape)", "Total calls: 33868\n(19839, 44, 257) (19839,)\n" ] ], [ [ "# Model", "_____no_output_____" ] ], [ [ "import time\nimport datetime\nimport tqdm\nimport torch.nn as nn\nimport torchvision\nfrom torch.cuda.amp import autocast\nfrom torch.utils.data import TensorDataset, DataLoader\nfrom timm.data.mixup import Mixup", "_____no_output_____" ], [ "use_stochdepth = False\nuse_mixedprecision = False\nuse_imbalancedsampler = False\nuse_sampler = False\nuse_cosinescheduler = True\nuse_reduceonplateu = False\nuse_nadam = False\nuse_mixup = False", "_____no_output_____" ], [ "mixup_args = {\n 'mixup_alpha': 1.,\n 'cutmix_alpha': 0.,\n 'cutmix_minmax': None,\n 'prob': 1.0,\n 'switch_prob': 0.,\n 'mode': 'batch',\n 'label_smoothing': 0,\n 'num_classes': len(list(classes))}\nmixup_fn = Mixup(**mixup_args)", "_____no_output_____" ], [ "class Block(nn.Module):\n def __init__(self, num_layers, in_channels, out_channels, identity_downsample=None, stride=1):\n assert num_layers in [18, 34, 50, 101, 152], \"should be a a valid architecture\"\n 
super(Block, self).__init__()\n self.num_layers = num_layers\n if self.num_layers > 34:\n self.expansion = 4\n else:\n self.expansion = 1\n \n # ResNet50, 101, and 152 include additional layer of 1x1 kernels\n self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0)\n self.bn1 = nn.BatchNorm2d(out_channels)\n if self.num_layers > 34:\n self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=stride, padding=1)\n else:\n # for ResNet18 and 34, connect input directly to (3x3) kernel (skip first (1x1))\n self.conv2 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1)\n \n self.bn2 = nn.BatchNorm2d(out_channels)\n self.conv3 = nn.Conv2d(out_channels, out_channels * self.expansion, kernel_size=1, stride=1, padding=0)\n self.bn3 = nn.BatchNorm2d(out_channels * self.expansion)\n self.relu = nn.ReLU()\n self.identity_downsample = identity_downsample\n\n def forward(self, x):\n identity = x\n if self.num_layers > 34:\n x = self.conv1(x)\n x = self.bn1(x)\n x = self.relu(x)\n x = self.conv2(x)\n x = self.bn2(x)\n x = self.relu(x)\n x = self.conv3(x)\n x = self.bn3(x)\n\n if self.identity_downsample is not None:\n identity = self.identity_downsample(identity)\n\n x = torchvision.ops.stochastic_depth(input=x, p=0.25, mode='batch', training=self.training) # randomly zero input tensor\n x += identity\n x = self.relu(x)\n return x", "_____no_output_____" ], [ "class ResNet(nn.Module):\n def __init__(self, num_layers, block, image_channels, num_classes):\n assert num_layers in [18, 34, 50, 101, 152], f'ResNet{num_layers}: Unknown architecture! Number of layers has ' \\\n f'to be 18, 34, 50, 101, or 152 '\n super(ResNet, self).__init__()\n if num_layers < 50:\n self.expansion = 1\n else:\n self.expansion = 4\n if num_layers == 18:\n layers = [2, 2, 2, 2]\n elif num_layers == 34 or num_layers == 50:\n layers = [3, 4, 6, 3]\n elif num_layers == 101:\n layers = [3, 4, 23, 3]\n else:\n layers = [3, 8, 36, 3]\n self.in_channels = 64\n self.conv1 = nn.Conv2d(image_channels, 64, kernel_size=7, stride=2, padding=3)\n self.bn1 = nn.BatchNorm2d(64)\n self.relu = nn.ReLU()\n self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n\n # ResNetLayers\n self.layer1 = self.make_layers(num_layers, block, layers[0], intermediate_channels=64, stride=1)\n self.layer2 = self.make_layers(num_layers, block, layers[1], intermediate_channels=128, stride=2)\n self.layer3 = self.make_layers(num_layers, block, layers[2], intermediate_channels=256, stride=2)\n self.layer4 = self.make_layers(num_layers, block, layers[3], intermediate_channels=512, stride=2)\n\n self.avgpool = nn.AdaptiveAvgPool2d((1, 1))\n self.fc = nn.Linear(512 * self.expansion, num_classes)\n\n def forward(self, x):\n x = self.conv1(x)\n x = self.bn1(x)\n x = self.relu(x)\n x = self.maxpool(x)\n\n x = self.layer1(x)\n x = self.layer2(x)\n x = self.layer3(x)\n x = self.layer4(x)\n\n x = self.avgpool(x)\n x = x.reshape(x.shape[0], -1)\n x = self.fc(x)\n return x\n\n def make_layers(self, num_layers, block, num_residual_blocks, intermediate_channels, stride):\n layers = []\n\n identity_downsample = nn.Sequential(nn.Conv2d(self.in_channels, intermediate_channels*self.expansion, kernel_size=1, stride=stride),\n nn.BatchNorm2d(intermediate_channels*self.expansion))\n layers.append(block(num_layers, self.in_channels, intermediate_channels, identity_downsample, stride))\n self.in_channels = intermediate_channels * self.expansion # 256\n for i in range(num_residual_blocks - 1):\n 
layers.append(block(num_layers, self.in_channels, intermediate_channels)) # 256 -> 64, 64*4 (256) again\n return nn.Sequential(*layers)", "_____no_output_____" ], [ "def train_epoch(model, epoch, criterion, optimizer, scheduler, dataloader, device):\n model.train()\n \n running_loss = 0.0\n running_corrects = 0\n \n num_batches = len(dataloader)\n num_samples = len(dataloader.dataset)\n \n for batch, (inputs, labels) in enumerate(tqdm.tqdm(dataloader)):\n # Transfer Data to GPU if available\n inputs, labels = inputs.to(device), labels.to(device)\n if use_mixup:\n inputs, labels = mixup_fn(inputs, labels)\n \n # Clear the gradients\n optimizer.zero_grad()\n \n with autocast(enabled=use_mixedprecision):\n # Forward Pass\n outputs = model(inputs)\n _, predictions = torch.max(outputs, 1)\n\n # Compute Loss\n loss = criterion(outputs, labels)\n \n # Calculate gradients\n loss.backward()\n \n # Update Weights\n optimizer.step()\n \n # Calculate Loss\n running_loss += loss.item() * inputs.size(0)\n if use_mixup:\n running_corrects += (predictions == torch.max(labels, 1)[1]).sum().item()\n else:\n running_corrects += (predictions == labels).sum().item()\n \n # Perform learning rate step\n if use_cosinescheduler:\n scheduler.step(epoch + batch / num_batches)\n \n epoch_loss = running_loss / num_samples\n epoch_acc = running_corrects / num_samples\n \n return epoch_loss, epoch_acc", "_____no_output_____" ], [ "def test_epoch(model, epoch, criterion, optimizer, dataloader, device):\n model.eval()\n \n num_batches = len(dataloader)\n num_samples = len(dataloader.dataset)\n \n with torch.no_grad():\n running_loss = 0.0\n running_corrects = 0\n\n for batch, (inputs, labels) in enumerate(tqdm.tqdm(dataloader)):\n # Transfer Data to GPU if available\n inputs, labels = inputs.to(device), labels.to(device)\n if use_mixup:\n labels = torch.nn.functional.one_hot(labels.to(torch.int64), num_classes=len(list(classes))).float()\n\n # Clear the gradients\n optimizer.zero_grad()\n\n # Forward Pass\n outputs = model(inputs)\n _, predictions = torch.max(outputs, 1)\n\n # Compute Loss\n loss = criterion(outputs, labels)\n\n # Update Weights\n # optimizer.step()\n\n # Calculate Loss\n running_loss += loss.item() * inputs.size(0)\n if use_mixup:\n running_corrects += (predictions == torch.max(labels, 1)[1]).sum().item()\n else:\n running_corrects += (predictions == labels).sum().item()\n\n epoch_loss = running_loss / num_samples\n epoch_acc = running_corrects / num_samples\n \n return epoch_loss, epoch_acc", "_____no_output_____" ], [ "batch_size = 64\nepochs = 20\nlr = 0.05\nwarmup_epochs = 5\nwd = 0.01", "_____no_output_____" ], [ "from torchsampler import ImbalancedDatasetSampler\nfrom torch.utils.data import WeightedRandomSampler\n\n'''# Experiment: wrong sampling\nX = np.concatenate([X_train, X_test, X_val])\nY = np.concatenate([Y_train, Y_test, Y_val])\n\nfull_data = TensorDataset(torch.Tensor(np.expand_dims(X, axis=1)), torch.from_numpy(Y))\ntrain_size = int(0.75 * len(full_data))\ntest_size = len(full_data) - train_size\nval_size = int(0.2 * test_size)\ntest_size -= val_size\n\ntrain_data, test_data, val_data = torch.utils.data.random_split(full_data, [train_size, test_size, val_size],\n generator=torch.Generator().manual_seed(42))'''\n\nif use_mixup and len(X_train) % 2 != 0:\n X_train = X_train[:-1]\n Y_train = Y_train[:-1]\n\ntrain_data = TensorDataset(torch.Tensor(np.expand_dims(X_train, axis=1)), torch.from_numpy(Y_train))\ntest_data = TensorDataset(torch.Tensor(np.expand_dims(X_test, axis=1)), 
torch.from_numpy(Y_test))\nval_data = TensorDataset(torch.Tensor(np.expand_dims(X_val, axis=1)), torch.from_numpy(Y_val))\n\nif use_imbalancedsampler:\n train_loader = DataLoader(train_data, sampler=ImbalancedDatasetSampler(train_data), batch_size=batch_size)\n test_loader = DataLoader(test_data, sampler=ImbalancedDatasetSampler(test_data), batch_size=batch_size)\n val_loader = DataLoader(val_data, sampler=ImbalancedDatasetSampler(val_data), batch_size=batch_size)\nelif use_sampler:\n def getSampler(y):\n _, counts = np.unique(y, return_counts=True)\n weights = [len(y)/c for c in counts]\n samples_weights = [weights[t] for t in y]\n return WeightedRandomSampler(samples_weights, len(y))\n \n train_loader = DataLoader(train_data, sampler=getSampler(Y_train), batch_size=batch_size)\n test_loader = DataLoader(test_data, sampler=getSampler(Y_test), batch_size=batch_size)\n val_loader = DataLoader(val_data, sampler=getSampler(Y_val), batch_size=batch_size)\nelse:\n train_loader = DataLoader(train_data, batch_size=batch_size)\n test_loader = DataLoader(test_data, batch_size=batch_size)\n val_loader = DataLoader(val_data, batch_size=batch_size)", "_____no_output_____" ], [ "model = ResNet(18, Block, image_channels=1, num_classes=len(list(classes)))\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\nif torch.cuda.device_count() > 1:\n print(\"Let's use\", torch.cuda.device_count(), \"GPUs!\")\n model = nn.DataParallel(model, device_ids=[0, 1])\nmodel.to(device)\nprint(device)", "cuda:0\n" ], [ "import wandb\n\nwandb.init(project=\"BAT-baseline-hierarchical\", entity=\"frankfundel\")\n\nwandb.config = {\n \"learning_rate\": lr,\n \"epochs\": epochs,\n \"batch_size\": batch_size\n}\n\ncriterion = nn.CrossEntropyLoss()\nif use_mixup:\n criterion = nn.BCEWithLogitsLoss()\n\noptimizer = torch.optim.SGD(model.parameters(), lr=lr)\nif use_nadam:\n optimizer = torch.optim.NAdam(model.parameters(), lr=lr, weight_decay=wd)\n\nscheduler = None\nif use_cosinescheduler:\n scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer=optimizer, T_0=warmup_epochs, T_mult=1)\nif use_reduceonplateu:\n scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)\n\nmin_val_loss = np.inf\n\ntorch.autograd.set_detect_anomaly(True)", "Failed to detect the name of this notebook, you can set it manually with the WANDB_NOTEBOOK_NAME environment variable to enable code saving.\n\u001b[34m\u001b[1mwandb\u001b[0m: Currently logged in as: \u001b[33mfrankfundel\u001b[0m (use `wandb login --relogin` to force relogin)\n" ], [ "for epoch in range(epochs):\n end = time.time()\n print(f\"==================== Starting at epoch {epoch} ====================\", flush=True)\n \n train_loss, train_acc = train_epoch(model, epoch, criterion, optimizer, scheduler, train_loader, device)\n print('Training loss: {:.4f} Acc: {:.4f}'.format(train_loss, train_acc), flush=True)\n \n val_loss, val_acc = test_epoch(model, epoch, criterion, optimizer, val_loader, device)\n print('Validation loss: {:.4f} Acc: {:.4f}'.format(val_loss, val_acc), flush=True)\n \n if use_reduceonplateu:\n scheduler.step(val_loss)\n \n wandb.log({\n \"train_loss\": train_loss,\n \"train_acc\": train_acc,\n \"val_loss\": val_loss,\n \"val_acc\": val_acc,\n })\n \n if min_val_loss > val_loss:\n print('val_loss decreased, saving model', flush=True)\n min_val_loss = val_loss\n \n # Saving State Dict\n torch.save(model.state_dict(), 'baseline_myotis.pth')", "==================== Starting at epoch 0 ====================\n" ], [ 
"wandb.finish()", "\n" ], [ "model.load_state_dict(torch.load('baseline_myotis.pth'))\ncompiled_model = torch.jit.script(model)\ntorch.jit.save(compiled_model, 'baseline_myotis.pt')", "_____no_output_____" ], [ "from sklearn.metrics import confusion_matrix\nimport seaborn as sn\nimport pandas as pd\n\nY_pred = []\nY_true = []\ncorrects = 0\n\nmodel.eval()\n\n# iterate over test data\nfor inputs, labels in tqdm.tqdm(test_loader):\n output = model(inputs.cuda()) # Feed Network\n\n output = (torch.max(output, 1)[1]).data.cpu().numpy()\n Y_pred.extend(output) # Save Prediction\n\n labels = labels.data.cpu().numpy()\n Y_true.extend(labels) # Save Truth", "100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 138/138 [00:17<00:00, 7.79it/s]\n" ], [ "# Build confusion matrix\ncf_matrix = confusion_matrix(Y_true, Y_pred)\ndf_cm = pd.DataFrame(cf_matrix / np.sum(cf_matrix, axis=-1), index = [i for i in classes],\n columns = [i for i in classes])\nplt.figure(figsize = (12,7))\nsn.heatmap(df_cm, annot=True)\nplt.savefig('baseline_myotis_cf.png')", "_____no_output_____" ], [ "from sklearn.metrics import f1_score\ncorrects = np.equal(Y_pred, Y_true).sum()\nprint(\"Test accuracy:\", corrects/len(Y_pred))\nprint(\"F1-score:\", f1_score(Y_true, Y_pred, average=None).mean())", "Test accuracy: 0.8969188944268237\nF1-score: 0.7259438594446604\n" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e70f60cf2d6d8636f339f14c67607e937e6ab4b5
58,157
ipynb
Jupyter Notebook
M1890-Hidrologia/Precipitacion/docs/Ej5_Precipitacion.ipynb
NorAhmed1/Clases
da0f90c2a9da99a973d01b27e1c1bfaced443c69
[ "MIT" ]
5
2020-07-06T00:02:46.000Z
2022-03-01T03:47:59.000Z
M1890-Hidrologia/Precipitacion/docs/Ej5_Precipitacion.ipynb
Ahmed-Yahia-cs/Clases
104a7632c41c278444fca4cd2ca76d986062768f
[ "MIT" ]
14
2020-01-08T11:11:03.000Z
2020-01-12T16:42:32.000Z
M1890-Hidrologia/Precipitacion/docs/Ej5_Precipitacion.ipynb
casadoj/GISH_Hidrologia
104a7632c41c278444fca4cd2ca76d986062768f
[ "MIT" ]
16
2020-04-22T06:39:42.000Z
2022-02-01T13:20:58.000Z
67.782051
18,984
0.736747
[ [ [ "# Ejercicios de precipitación\n\n## <font color=steelblue>Exercise 5 - Curva intensidad-duración-frecuencia\n\n<font color=steelblue>Construye la curva IDF (intensidad-duración-frecuencia) a partir de la información en la tabla *ChiAnnMax* del archivo *RainfallData.xlsx*.<tfont>", "_____no_output_____" ] ], [ [ "import numpy as np\n\nimport pandas as pd\n\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n# plt.style.use('dark_background')\nplt.style.use('seaborn-whitegrid')\n\nfrom scipy.stats import genextreme\nfrom scipy.optimize import curve_fit", "_____no_output_____" ] ], [ [ "Las **curvas de intensidad-duración-frecuencia (IDF)** son una aproximación habitual en los proyectos de hidrología para definir las tormentas de diseño. Las curvas IDF relacionan la intensidad de la precipitación, con su duración y su frecuencia de ocurrencia (expresada como periodo de retorno).\n \n<img src=\"img/IDF curves.JPG\" alt=\"Mountain View\" style=\"width:500px\">\n\n> <font color=grey>Curva de intensidad-duración-frecuenca para la ciudad de Oklahoma. *(Applied Hydrology. Chow, 1988)*\n\nCuando se va a diseñar una estructura hidráulica (puente, drenaje, presa...), es necesario conocer la intensidad máxima de precipitación que puede ocurrir para un periodo de retorno y una duración de la tormenta. El periodo de retorno suele estar definido por la normativa para cada tipo de estructura; el peor escenario de duración de la tormenta es el tiempo de concentración de la cuenca de drenaje de la estructura.\n\n**Curvas IDF empíricas**<br>\nPara construir las curvas IDF a partir de datos locales, se lleva a cabo un análisis de frecuencia de extremos. Los valores de entrada son la serie anual de máxima intensidad de precipitación para diversas duraciones de tormenta. La serie correspondiente a cada duración se ajusta a una función de distribución de valores extremos para estimar el periodo de retorno. \n\n**Curvas IDF analíticas**\nPara generar las curvas IDF analíticas no es necesario el análisis de frecuencia de extremos anterior. En su lugar, se ajusta una ecuación representativa de la curva IDF a las observaciones.\n", "_____no_output_____" ], [ "### Importación y análisis de datos\nPara generar las curvas de intensidad-duración-frecuencia se necesitan los máximos anuales de precipitación acumulada a distintas duraciones. En nuestro caso estudiaremos eventos de duración 1, 6 y 24 horas.", "_____no_output_____" ] ], [ [ "# Cargar los datos de intensidad\nintensity = pd.read_excel('../data/RainfallData.xlsx', sheet_name='ChiAnnMax', skiprows=7,\n usecols=[0, 5, 6, 7], index_col='Year')\n# Convertir datos de pulgadas a mm\nintensity = intensity * 25.4\n# Corregir columnas\nD = np.array([1, 6 , 24]) # duración de la tormenta\nintensity.columns = D\nintensity.head()", "_____no_output_____" ] ], [ [ "Vamos a generar un gráfico que muestre ordenadas de menor a mayor las series de máxima intensidad de precipitación para las tres duraciones que estamos analizando.\n\nEn este gráfico se observa que a menor duración, la intensidad es siempre mayor. 
Además, se intuye una mayor variabilidad (mayor pendiente) de la intensidad a menor duracion.", "_____no_output_____" ] ], [ [ "# Configurar el gráfico\nfig = plt.figure(figsize=(10, 6))\nplt.title('Series ordenadas de máxima intensidad anual', fontsize=16, weight='bold')\nplt.xlabel('', fontsize=13)\nplt.xlim((0, 25))\nplt.ylabel('intensidad (mm/h)', fontsize=13)\nplt.ylim((0, 60))\n\n# Tres gráficos de dispersión para cada duración de tormenta\nplt.scatter(range(intensity.shape[0]), intensity.sort_values(1)[1], label='1 h')\nplt.scatter(range(intensity.shape[0]), intensity.sort_values(6)[6], label='6 h')\nplt.scatter(range(intensity.shape[0]), intensity.sort_values(24)[24], label='24 h')\n\n# Leyenda\nfig.legend(loc=8, ncol= 3, fontsize=13);", "_____no_output_____" ] ], [ [ "### Ajuste de la función GEV a los datos\n\nHemos de ajustar una distribución estadística de extremos a los datos. A partir de este ajuste podremos calcular los periodos de retorno. Utilizaremos la función de distribución **GEV (generalized extreme values)**. La función de distribución GEV sigue, para el caso de variables siempre positivas como la precipitación, la siguiente ecuación:\n\n$$F(s,\\xi)=e^{-(1+\\xi s)^{-1/\\xi}} \\quad \\forall \\xi>0$$\n$$ s = \\frac{x-\\mu}{\\sigma} \\quad \\sigma>0$$\n\nDonde $s$ es la variable de estudio estandarizada a través del parámetro de posición $\\mu$ y el parámetro de escala $\\sigma$, y $\\xi$ es el parámetro de forma. Por tanto, la distribución GEV tiene 3 parámetros. En la siguiente figura se muestra la función de densidad y la función de distribución de extremos del tipo II, la distribución de Frechet, para diversos valores de los parámetros de escala y forma.\n\n<img src=\"img/Frechet.png\" alt=\"Mountain View\" style=\"width:600px\">\n\nPara ajustar la función GEV utilizaremos la función `genextreme.fit` del paquete `SciPy.stats` de Python. Esta función devuelve los valores de los 3 parámetros de la GEV (forma, localización y escala) que mejor se ajustan a nuestros datos.", "_____no_output_____" ] ], [ [ "# Ejemplo\n# Ajustar la GEV para duración 1 h\npar_int1h = genextreme.fit(intensity[1])", "_____no_output_____" ], [ "print('Parámetros ajustados para la intensidad en 1 h:')\nprint('xi =', round(par_int1h[0], 4))\nprint('mu =', round(par_int1h[1], 4))\nprint('sigma =', round(par_int1h[2], 4))", "Parámetros ajustados para la intensidad en 1 h:\nxi = 0.2339\nmu = 31.7407\nsigma = 10.3977\n" ] ], [ [ "Lo haremos con un bucle para las tres duraciones (1, 6 y 24 h). 
Los parámetros se guardarán en el data frame *parametros*.", "_____no_output_____" ] ], [ [ "# Ajustar los parámetros de las 3 duraciones\nparametros = pd.DataFrame(index=['xi', 'mu', 'sigma'], columns=D)\nfor duracion in D:\n # Ajustar la GEV y guardar los parámetros\n parametros[duracion] = genextreme.fit(intensity[duracion])\nparametros", "_____no_output_____" ] ], [ [ "### Curva IDF empírica\n\nLa **probabilidad de no excedencia** (el valor de la función de distribución) y el **periodo de retorno** de una variable estan relacionados mediante la siguiente ecuación:\n\n\\\\[R = \\frac{1}{1-CDF(x)}\\\\]\n\nDonde $R$ es el periodo de retorno en años, y $CDF(x)$ (del inglés, cumulative density function) es el valor de la función de distribución (o probabilidad de no excendencia) del valor de precipitación $x$.\n\nA partir de esta expresión se pueden calcular los **cuantiles** de un **periodo de retorno** dado:\n\n\\\\[CDF(x) = \\frac{R-1}{R} = 1 - \\frac{1}{R}\\\\]\n\nAnalizaremos los periodos de retorno de 10, 25, 50 y 100 años. Calculamos los cuantiles ($Q$) correspondientes a estos periodos de retorno de acuerdo a las distribuciones anteriormente ajustadas.", "_____no_output_____" ] ], [ [ "# Periodos de retorno\nR = np.array([10, 25, 50, 100], dtype=\"float64\")", "_____no_output_____" ], [ "# Probabilidad de no excedencia\nQ = 1. - 1. / R", "_____no_output_____" ] ], [ [ "Como ejemplo, generamos los valores extremos de la intensidad de una tormenta de 1 h de duración para las probabilidades de no excedencia (Q). Para ello utilizamos la función `genextrem.ppf` (*percent point function*) del paquete `SciPy.stats`.", "_____no_output_____" ] ], [ [ "# intensidad de 1 h para los periodos de retorno\nP1 = genextreme.ppf(Q, *parametros[1]) # ppf: percent point function\n\nprint('Intensidad de precipitación en 1 h según periodo de retorno:')\nfor i, Tr in enumerate(R):\n print('I(Tr=', int(Tr), ') = ', round(P1[i], 1), ' mm/h', sep='')\n", "Intensidad de precipitación en 1 h según periodo de retorno:\nI(Tr=10) = 49.9 mm/h\nI(Tr=25) = 55.2 mm/h\nI(Tr=50) = 58.3 mm/h\nI(Tr=100) = 61.0 mm/h\n" ] ], [ [ "Podemos iterar el cálculo de extremos para cada una de las duraciones y cuantiles, guardando los datos en un *data frame* al que llamaremos *IDF*, el cual podemos graficar.", "_____no_output_____" ] ], [ [ "# data frame con los valores de la curva IDF\nIDFe = pd.DataFrame(index=R, columns=D)\nIDFe.index.name = 'Tr'\nfor duracion in D:\n IDFe[duracion] = genextreme(*parametros[duracion]).ppf(Q)\nIDFe", "_____no_output_____" ], [ "# guardar la tabla de resultados\nIDFe.to_csv('../output/Ej5_Resultados IDF analítica.csv', float_format='%.1f')", "_____no_output_____" ] ], [ [ "Gráfico de líneas que muestra, para cada periodo de retorno, la intensidad de precipitación en función de la duración de la tormenta. \n\nSólo tenemos los datos para tres duraciones de tormenta, motivo por el que la curva es tan quebrada. 
Para solventar este problema habría que repetir el cálculo para más duraciones de tormenta, o aplicar las **curvas IDF analíticas**.", "_____no_output_____" ] ], [ [ "# configuración del gráfico\nfig = plt.figure(figsize=(12, 6))\nplt.title('Curva IDF', fontsize=16, weight='bold')\nplt.xlabel('duración (h)', fontsize=13)\nplt.xlim(0, IDF.columns.max() + 1)\nplt.ylabel('intensidad (mm/h)', fontsize=13)\nplt.ylim((0, 80))\ncolor = ['tan', 'darkkhaki', 'olive', 'darkolivegreen']\n\nfor i, Tr in enumerate(IDF.index):\n plt.plot(IDF.loc[Tr,:], color=color[i], label='Tr = ' + str(int(Tr)) + ' años')\n\nfig.legend(loc=8, ncol=4, fontsize=12);\n\n# guardar la figura\nplt.savefig('../output/Ej5_IDF empírica.png', dpi=300)", "_____no_output_____" ] ], [ [ "### Curva IDF analítica\nHasta ahora hemos calculado una serie de puntos de la **curva IDF**, los correspondientes a las tormentas de 1, 6 y 24 h para los periodos de retorno de 10, 25, 50 y 100 años. Aplicando las ecuaciones analíticas de la curva IDF, podemos generar la curva completa.\n\nDos de las formas analíticas de la curva IDF son:\n\n\\\\[I = \\frac{a}{(D + c)^b}\\\\]\n\n\\\\[I = \\frac{a}{D ^b + c}\\\\]\n\ndonde \\\\(I\\\\) es la intensidad de preciptiación, \\\\(D\\\\) es la duración de la tormenta, \\\\(a\\\\) es una constante dependiente del periodo de retorno y \\\\(b\\\\) y \\\\(c\\\\) son constantes que dependen de la localización del estudio.\n\nAsumiremos que la relación entre $a$ y el periodo de retorno sigue la siguiente función lineal:\n\n\\\\[a = d \\cdot R + e\\\\]\n\nCrearemos funciones de Python para estas curvas analíticas.", "_____no_output_____" ] ], [ [ "def IDF_type_I(x, b, c, d, e):\n \"\"\"Calcula la intensidad de la precipitación para un periodo de retorno y duración de la tormenta dadas a\n partir de la fórmula I = d * R + e / (D + c)**b. \n \n Parámetros:\n -----------\n x: list [2x1]. Par de valores de periodo de retorno(años) y duración (h)\n b: float. Parámetro de la curva IDF\n c: float. Parámetro de la curva IDF\n d: float. Parámetro de la curva IDF\n e: float. Parámetro de la curva IDF\n \n Salida:\n -------\n I: float. Intensidad de precipitación (mm/h)\"\"\"\n \n a = d * x[0] + e\n I = a / (x[1] + c)**b\n return I\n\ndef IDF_type_II(x, b, c, d, e):\n \"\"\"Calcula la intensidad de la precipitación para un periodo de retorno y duración de la tormenta dadas a\n partir de la fórmula I = d * R + e / (D**b + c). \n \n Parámetros:\n -----------\n x: list [2x1]. Par de valores de periodo de retorno(años) y duración (h)\n b: float. Parámetro de la curva IDF\n c: float. Parámetro de la curva IDF\n d: float. Parámetro de la curva IDF\n e: float. Parámetro de la curva IDF\n \n Salida:\n -------\n I: float. Intensidad de precipitación (mm/h)\"\"\"\n \n a = d * x[0] + e\n I = a / (x[1]**b + c)\n return I\n\ndef IDF_type_III(x, b, c, d, e):\n \"\"\"Calcula la intensidad de la precipitación para un periodo de retorno y duración de la tormenta dadas a\n partir de la fórmula I = d * R**e / (D + c)**b. \n \n Parámetros:\n -----------\n x: list [2x1]. Par de valores de periodo de retorno(años) y duración (h)\n b: float. Parámetro de la curva IDF\n c: float. Parámetro de la curva IDF\n d: float. Parámetro de la curva IDF\n e: float. Parámetro de la curva IDF\n \n Salida:\n -------\n I: float. 
Intensidad de precipitación (mm/h)\"\"\"\n \n a = d * x[0]**e \n I = a / (x[1] + c)**b\n return I\n\ndef IDF_type_IV(x, b, c, d, e):\n \"\"\"Calcula la intensidad de la precipitación para un periodo de retorno y duración de la tormenta dadas a\n partir de la fórmula I = d * R**e / (D**b + c). \n \n Parámetros:\n -----------\n x: list [2x1]. Par de valores de periodo de retorno(años) y duración (h)\n b: float. Parámetro de la curva IDF\n c: float. Parámetro de la curva IDF\n d: float. Parámetro de la curva IDF\n e: float. Parámetro de la curva IDF\n \n Salida:\n -------\n I: float. Intensidad de precipitación (mm/h)\"\"\"\n \n a = d * x[0]**e\n I = a / (x[1]**b + c)\n return I ", "_____no_output_____" ] ], [ [ "Para ajustar la curva hemos de crear primero una malla de pares de valores de periodo de retorno y duración. Utilizaremos las tres duraciones ('D') y los cuatro periodos de retorno ('R') ya empleados hasta ahora, para los cuales hemos calculado la intensidad de precipitación asociada (data frame 'IDF').", "_____no_output_____" ] ], [ [ "# malla con todas las posibles combinaciones de periodo de retorno 'R' y duración 'D'\n(RR, DD) = np.meshgrid(R, D)\nRR.shape, DD.shape", "_____no_output_____" ], [ "# convertir 'RR' y 'DD' en un vector unidimensional\nRR = RR.reshape(-1)\nDD = DD.reshape(-1)\nRR.shape, DD.shape", "_____no_output_____" ], [ "# unir los vectores 'RR' y 'DD'\nRD = np.vstack([RR, DD])\n\nRD.shape", "_____no_output_____" ], [ "# vector unidimensional a partir de 'IDF'\nI = np.hstack([IDF[1], IDF[6], IDF[24]])\n\nI.shape", "_____no_output_____" ] ], [ [ "Para ajustar la curva utilizaremos la función `curve_fit` de `SciPy.optimize`. A esta función hemos de asignarle la función de la curva a ajustar, los valores de entrada (pares retorno-duración) y el valor de la función en esos pares (intensidad). La función devuelve un vector con los parámetros de la curva optimizados y otro vector con las covarianza entre dichos parámetros", "_____no_output_____" ] ], [ [ "# ajustar la curva\ncurva = IDF_type_IV\npopt, pcov = curve_fit(curva, RD, I)\n\nprint('Parámetros optimizados de la curva IDF analítica')\nfor i, par in enumerate(['b', 'c', 'd', 'e']):\n print(par, '=', round(popt[i], 4))", "_____no_output_____" ], [ "# guardar parámetros optimizados\nIDFa = pd.DataFrame(popt, index=['b', 'c', 'd', 'e']).transpose()\nIDFa.to_csv('../output/Ej5_Parámetros IDF analítica.csv', float_format='%.5f')", "_____no_output_____" ], [ "fig = plt.figure(figsize=(12, 6))\nplt.xlim(0, D.max()+1)\nplt.xlabel('duración (h)', fontsize=13)\nplt.ylabel('intensidad (mm/h)', fontsize=13)\ncolor = ['tan', 'darkkhaki', 'olive', 'darkolivegreen']\n\nxx = np.linspace(.25, D.max(), 1000) # valores de duración\ny = np.zeros((xx.size,)) # vector vacío de valores de intensidad\n\nfor i, Tr in enumerate(R): # para cada periodo de retorno\n for j, d in enumerate(xx): # para cada duración\n y[j] = curva((Tr, d), *popt)\n # gráfico de línea\n plt.plot(xx, y, color=color[i], label='Tr = ' + str(int(Tr)) + ' años')\n # gráfico de dispersión\n plt.scatter(D, IDF.loc[Tr], s=8, marker='o', c=color[i], label=None)\n\nfig.legend(loc=8, ncol=4, fontsize=12);\n\n# guardar figura\nplt.savefig('../output/Ej5_IDF analítica.png', dpi=300)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e70f625e91ae64eeab49a00384797e305b84b84d
9,652
ipynb
Jupyter Notebook
notebooks/example_sqlalchemy.ipynb
monocongo/sqlalchemy_examples
16c5e109fec58707836a86d3c6aedd4796b43b5f
[ "MIT" ]
null
null
null
notebooks/example_sqlalchemy.ipynb
monocongo/sqlalchemy_examples
16c5e109fec58707836a86d3c6aedd4796b43b5f
[ "MIT" ]
null
null
null
notebooks/example_sqlalchemy.ipynb
monocongo/sqlalchemy_examples
16c5e109fec58707836a86d3c6aedd4796b43b5f
[ "MIT" ]
null
null
null
38
573
0.634791
[ [ [ "# SQLAlchemy Example\nAn example of object-relational mapping (ORM) using SQLAlchemy.", "_____no_output_____" ], [ "#### Import packages and modules used in subsequent cells.", "_____no_output_____" ] ], [ [ "import configparser\nimport sqlalchemy\nimport sqlalchemy_utils", "_____no_output_____" ] ], [ [ "#### Define a function for parsing database connection parameters from a configuration file.", "_____no_output_____" ] ], [ [ "def _database_config(config_file, section='postgresql'):\n\n # create a parser\n config_parser = configparser.ConfigParser()\n\n # read configuration file\n config_parser.read(config_file)\n\n # get section, default to postgresql\n db_config = {}\n if config_parser.has_section(section):\n params = config_parser.items(section)\n for param in params:\n db_config[param[0]] = param[1]\n else:\n raise Exception(f'Section {section} not found in the {config_file} file')\n\n return db_config", "_____no_output_____" ] ], [ [ "#### Read database connection parameters and use these to configure an SQLAlchemy Engine instance.", "_____no_output_____" ] ], [ [ "params = _database_config(\"C:/home/data/pullpoint/database.ini\")\ndb_connection_details = f\"postgresql+psycopg2://{params['user']}:{params['password']}@{params['host']}/{params['database']}\"\nengine = sqlalchemy.create_engine(db_connection_details, echo=True)", "_____no_output_____" ] ], [ [ "#### Create an instance of the declarative base class and define an associated mapped class for notifications.", "_____no_output_____" ] ], [ [ "Base = sqlalchemy.ext.declarative.declarative_base()\n\nclass Notification(Base):\n \n __tablename__ = 'notifications'\n \n # object attributes (columns in each row)\n notification_id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True, autoincrement=True)\n ip_address = sqlalchemy.Column(sqlalchemy_utils.IPAddressType)\n message = sqlalchemy.Column(sqlalchemy.Unicode(255))\n \n # simple representation\n def __repr__(self):\n return f\"<Notification(notification_id={self.notification_id}, ip_address={self.ip_address}, message={self.message})>\"", "_____no_output_____" ] ], [ [ "#### MetaData and Table objects\nThe Notifications class we've declared defines metadata information about a corresponding table in our database. We now have a `Table` object (for our `notifications` table) which is part of a `MetaData` registry. The MetaData object is available from our declarative base object as an attribute, `.metadata`, and it includes the ability to emit a limited set of schema generation commands to the database. 
The `notifications` table can now be created by calling the MetaData.create_all() method, passing in our `Engine` instance as a source of database connectivity.", "_____no_output_____" ] ], [ [ "Base.metadata.create_all(engine)", "2019-02-11 16:49:52,595 INFO sqlalchemy.engine.base.Engine select version()\n2019-02-11 16:49:52,597 INFO sqlalchemy.engine.base.Engine {}\n2019-02-11 16:49:52,683 INFO sqlalchemy.engine.base.Engine select current_schema()\n2019-02-11 16:49:52,684 INFO sqlalchemy.engine.base.Engine {}\n2019-02-11 16:49:52,770 INFO sqlalchemy.engine.base.Engine SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1\n2019-02-11 16:49:52,771 INFO sqlalchemy.engine.base.Engine {}\n2019-02-11 16:49:52,833 INFO sqlalchemy.engine.base.Engine SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1\n2019-02-11 16:49:52,833 INFO sqlalchemy.engine.base.Engine {}\n2019-02-11 16:49:52,927 INFO sqlalchemy.engine.base.Engine show standard_conforming_strings\n2019-02-11 16:49:52,928 INFO sqlalchemy.engine.base.Engine {}\n2019-02-11 16:49:53,017 INFO sqlalchemy.engine.base.Engine select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s\n2019-02-11 16:49:53,018 INFO sqlalchemy.engine.base.Engine {'name': 'notifications'}\n2019-02-11 16:49:53,073 INFO sqlalchemy.engine.base.Engine \nCREATE TABLE notifications (\n\tnotification_id SERIAL NOT NULL, \n\tip_address VARCHAR(50), \n\tmessage VARCHAR(255), \n\tPRIMARY KEY (notification_id)\n)\n\n\n2019-02-11 16:49:53,074 INFO sqlalchemy.engine.base.Engine {}\n2019-02-11 16:49:53,135 INFO sqlalchemy.engine.base.Engine COMMIT\n" ] ], [ [ "#### Create an instance of the mapped class and insert a corresponding record into the corresponding database table\nWe will create a `Session` class that will be bound to the database via the `Engine` instance, and we'll use an instance of this `Session` class to access the database via connections from a pool maintained by the associated `Engine`.", "_____no_output_____" ] ], [ [ "initial_notification = Notification(ip_address=\"12.34.56.78\", message=\"First notification message.\")\nsecond_notification = Notification(ip_address=\"12.34.56.79\", message=\"Second notification message.\")\n\nSession = sqlalchemy.orm.sessionmaker(bind=engine)\nsession = Session()\n\nsession.add(initial_notification)\nsession.add(second_notification)", "_____no_output_____" ] ], [ [ "At this point the notification object hasn't been saved as a row in the database table, since the instance is **pending**, i.e. the SQL to persist the notification object won't be issued until it is needed, using a process known as a **flush**. For example, if we perform a lookup of the notification via a query then the `Session` will first flush all pending information before issuing the query SQL.\n\nBelow we'll query for all `Notification` objects (i.e. rows in the `notifications` table) filtered by the IP address attribute we've used for the initial notification, \"12.34.56.78\", and get the first one found in the list (and in this case there should be only one). 
This will trigger the flushing of the pending insertions from the above `Session.add()` calls, which will happen before the SQL for the query is issued.", "_____no_output_____" ] ], [ [ "notification = session.query(Notification).filter_by(ip_address=\"12.34.56.78\").first()\nnotification", "2019-02-11 17:22:07,487 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)\n2019-02-11 17:22:07,488 INFO sqlalchemy.engine.base.Engine INSERT INTO notifications (ip_address, message) VALUES (%(ip_address)s, %(message)s) RETURNING notifications.notification_id\n2019-02-11 17:22:07,488 INFO sqlalchemy.engine.base.Engine {'ip_address': '12.34.56.78', 'message': 'This is a notification message.'}\n2019-02-11 17:22:07,652 INFO sqlalchemy.engine.base.Engine SELECT notifications.notification_id AS notifications_notification_id, notifications.ip_address AS notifications_ip_address, notifications.message AS notifications_message \nFROM notifications \nWHERE notifications.ip_address = %(ip_address_1)s \n LIMIT %(param_1)s\n2019-02-11 17:22:07,653 INFO sqlalchemy.engine.base.Engine {'ip_address_1': '12.34.56.78', 'param_1': 1}\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e70f692416bb5f09f8f9ce102e747a78ce117d16
498,385
ipynb
Jupyter Notebook
ARIMA.ipynb
zifeng53/Dow-Jones-Industrial-Average-Forecasting
58538e7c0102a5f311307d0a7ccdb42ae4354b36
[ "MIT" ]
null
null
null
ARIMA.ipynb
zifeng53/Dow-Jones-Industrial-Average-Forecasting
58538e7c0102a5f311307d0a7ccdb42ae4354b36
[ "MIT" ]
null
null
null
ARIMA.ipynb
zifeng53/Dow-Jones-Industrial-Average-Forecasting
58538e7c0102a5f311307d0a7ccdb42ae4354b36
[ "MIT" ]
null
null
null
471.063327
112,954
0.923387
[ [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport os\nimport warnings\nwarnings.filterwarnings('ignore')\nplt.style.use('fivethirtyeight')\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 10, 6\nfrom statsmodels.tsa.stattools import adfuller\nfrom statsmodels.tsa.seasonal import seasonal_decompose\nfrom statsmodels.tsa.arima_model import ARIMA\n#from pyramid.arima import auto_arima\n#from pmdarima.arima import auto_arima\nfrom sklearn.metrics import mean_squared_error, mean_absolute_error\nimport math\n", "_____no_output_____" ], [ "from google.colab import drive\ndrive.mount('/content/gdrive')", "Drive already mounted at /content/gdrive; to attempt to forcibly remount, call drive.mount(\"/content/gdrive\", force_remount=True).\n" ], [ "# data = pd.read_csv('/content/gdrive/My Drive/Dataset/newdata.csv', index_col=['Date'], parse_dates=['Date'])\ndata = pd.read_csv('/content/gdrive/My Drive/Dataset/newdata.csv')\ndata.head(10)", "_____no_output_____" ], [ "drop_cols = [ 'Vol.', 'Change %','Open','High','Low']\ndata.drop(drop_cols, axis=1, inplace=True)", "_____no_output_____" ], [ "data.head()", "_____no_output_____" ], [ "data.tail()", "_____no_output_____" ] ], [ [ "Visualize the per day closing price of the stock.", "_____no_output_____" ] ], [ [ "x = data.Date\ny = data.Price", "_____no_output_____" ], [ "#plot close price\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.xlabel('Date')\nplt.ylabel('Close Prices')\nplt.plot(data['Date'],data['Price'])\nplt.title('DJIA')\nplt.xticks(np.arange(0,360,50),data['Date'][0:360:50])\nplt.show()", "_____no_output_____" ], [ "df_close = data['Price']\nplt.xlabel('Date')\nplt.ylabel('Close Prices')\nplt.plot(data['Date'],data['Price'])\ndf_close.plot(style='k.')\nplt.xticks(np.arange(0,360,50),data['Date'][0:360:50])\nplt.title('Scatter plot of closing price')\nplt.show()\n", "_____no_output_____" ], [ "#Test for staionarity\ndef test_stationarity(timeseries):\n #Determing rolling statistics\n rolmean = timeseries.rolling(12).mean()\n rolstd = timeseries.rolling(12).std()\n #Plot rolling statistics:\n plt.plot(timeseries, color='blue',label='Original')\n plt.plot(rolmean, color='red', label='Rolling Mean')\n plt.plot(rolstd, color='black', label = 'Rolling Std')\n plt.legend(loc='best')\n plt.title('Rolling Mean and Standard Deviation')\n plt.show(block=False)\n \n print(\"Results of dickey fuller test\")\n adft = adfuller(timeseries,autolag='AIC')\n # output for dft will give us without defining what the values are.\n #hence we manually write what values does it explains using a for loop\n output = pd.Series(adft[0:4],index=['Test Statistics','p-value','No. 
of lags used','Number of observations used'])\n for key,values in adft[4].items():\n output['critical value (%s)'%key] = values\n print(output)\n \ntest_stationarity(df_close)", "_____no_output_____" ], [ "from pylab import rcParams\nrcParams['figure.figsize'] = 10, 6\ndf_log = np.log(df_close)\nmoving_avg = df_log.rolling(12).mean()\nstd_dev = df_log.rolling(12).std()\nplt.legend(loc='best')\nplt.title('Moving Average')\nplt.plot(std_dev, color =\"black\", label = \"Standard Deviation\")\nplt.plot(moving_avg, color=\"red\", label = \"Mean\")\nplt.legend()\nplt.show()", "No handles with labels found to put in legend.\n" ], [ "#split data into train and training set\ntrain_data, test_data = df_log[3:int(len(df_log)*0.7)], df_log[int(len(df_log)*0.7):]\nplt.figure(figsize=(10,6))\nplt.grid(True)\nplt.xlabel('Date')\nplt.ylabel('Closing Prices')\nplt.xticks(np.arange(0,360,50),data['Date'][0:360:50])\nplt.plot(df_log, 'green', label='Train data')\nplt.plot(test_data, 'blue', label='Test data')\nplt.title('DJIA Train and Test Data')\nplt.legend()", "_____no_output_____" ], [ "from pandas.plotting import lag_plot\nfrom pandas import datetime\nfrom statsmodels.tsa.arima_model import ARIMA\nfrom sklearn.metrics import mean_squared_error", "_____no_output_____" ], [ "pip install --upgrade numpy", "Requirement already up-to-date: numpy in /usr/local/lib/python3.7/dist-packages (1.20.2)\n" ], [ "from statsmodels.tsa.arima_model import ARIMA", "_____no_output_____" ], [ "pip install pmdarima\n", "Requirement already satisfied: pmdarima in /usr/local/lib/python3.7/dist-packages (1.8.1)\nRequirement already satisfied: pandas>=0.19 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.1.5)\nRequirement already satisfied: scikit-learn>=0.22 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (0.22.2.post1)\nRequirement already satisfied: scipy>=1.3.2 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.4.1)\nRequirement already satisfied: numpy>=1.17.3 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.20.2)\nRequirement already satisfied: setuptools!=50.0.0,>=38.6.0 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (54.2.0)\nRequirement already satisfied: statsmodels!=0.12.0,>=0.11 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (0.12.2)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.0.1)\nRequirement already satisfied: urllib3 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.24.3)\nRequirement already satisfied: Cython!=0.29.18,>=0.29 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (0.29.22)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.19->pmdarima) (2018.9)\nRequirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.19->pmdarima) (2.8.1)\nRequirement already satisfied: patsy>=0.5 in /usr/local/lib/python3.7/dist-packages (from statsmodels!=0.12.0,>=0.11->pmdarima) (0.5.1)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas>=0.19->pmdarima) (1.15.0)\n" ], [ "from pmdarima.arima import auto_arima", "_____no_output_____" ], [ "\nmodel_autoARIMA = auto_arima(train_data, start_p=0, start_q=0,test='adf', max_p=3, max_q=3, m=1, d=None, seasonal=False, start_P=0, D=0, trace=True,error_action='ignore', suppress_warnings=True, stepwise=True)\n", "Performing stepwise search to minimize aic\n 
ARIMA(0,1,0)(0,0,0)[0] intercept : AIC=-842.829, Time=0.07 sec\n ARIMA(1,1,0)(0,0,0)[0] intercept : AIC=-841.041, Time=0.05 sec\n ARIMA(0,1,1)(0,0,0)[0] intercept : AIC=-841.057, Time=0.06 sec\n ARIMA(0,1,0)(0,0,0)[0] : AIC=-840.559, Time=0.04 sec\n ARIMA(1,1,1)(0,0,0)[0] intercept : AIC=-839.494, Time=0.08 sec\n\nBest model: ARIMA(0,1,0)(0,0,0)[0] intercept\nTotal fit time: 0.315 seconds\n" ], [ "model_autoARIMA.plot_diagnostics(figsize=(15,8))\nplt.show()", "_____no_output_____" ], [ "model = ARIMA(train_data, order=(0, 1, 0)) \nfitted = model.fit(disp=-1) \nprint(fitted.summary())", " ARIMA Model Results \n==============================================================================\nDep. Variable: D.Price No. Observations: 247\nModel: ARIMA(0, 1, 0) Log Likelihood 423.414\nMethod: css S.D. of innovations 0.044\nDate: Sun, 18 Apr 2021 AIC -842.829\nTime: 14:46:40 BIC -835.810\nSample: 1 HQIC -840.003\n \n==============================================================================\n coef std err z P>|z| [0.025 0.975]\n------------------------------------------------------------------------------\nconst 0.0058 0.003 2.075 0.038 0.000 0.011\n==============================================================================\n" ], [ "# Forecast\nfc, se, conf = fitted.forecast(109, alpha=0.05) # 95% confidence\nfc_series = pd.Series(fc, index=test_data.index)\nlower_series = pd.Series(conf[:, 0], index=test_data.index)\nupper_series = pd.Series(conf[:, 1], index=test_data.index)\nplt.figure(figsize=(12,5), dpi=100)\nplt.plot(train_data, label='training')\nplt.plot(test_data, color = 'blue', label='Actual Stock Price')\nplt.plot(fc_series, color = 'orange',label='Predicted Stock Price')\nplt.fill_between(lower_series.index, lower_series, upper_series, \n color='k', alpha=.10)\nplt.title('Dow Jones Industrial Average(DJIA) ARIMA Forecast')\nplt.xlabel('Date')\nplt.ylabel('Actual Stock Price')\nplt.legend(loc='upper left', fontsize=8)\nplt.xticks(np.arange(0,360,50),data['Date'][0:360:50])\nplt.show()", "_____no_output_____" ], [ "# report performance\nmse = mean_squared_error(test_data, fc)\nprint('MSE: '+str(mse))\nmae = mean_absolute_error(test_data, fc)\nprint('MAE: '+str(mae))\nrmse = math.sqrt(mean_squared_error(test_data, fc))\nprint('RMSE: '+str(rmse))\nmape = np.mean(np.abs(fc - test_data)/np.abs(test_data))\nprint('MAPE: '+str(mape))", "MSE: 0.03488236228613365\nMAE: 0.16322125170765395\nRMSE: 0.18676820469805253\nMAPE: 0.01646521229454659\n" ], [ "def model_diagnostics(residuals, model_obj):\n # For Breusch-Godfrey we have to pass the results object\n godfrey = acorr_breusch_godfrey(model_obj, nlags= 40)\n ljung = acorr_ljungbox(residuals, lags= 40)\n shap = shapiro(residuals)\n j_bera = jarque_bera(residuals)\n print('Results of Ljung-Box:')\n print('Null Hypothesis: No auotcorrelation')\n print('P-Value =< Alpha(.05) => Reject Null')\n print(f'p-values: {ljung[1]}\\n')\n print('Results of Breusch-Godfrey:')\n print('Null Hypothesis: No auotcorrelation')\n print('P-Value =< Alpha(.05) => Reject Null') \n print(f'p-values: {godfrey[1]}\\n')\n print('Results of Shapiro-Wilks:')\n print('Null Hypothesis: Data is normally distributed')\n print('P-Value =< Alpha(.05) => Reject Null') \n print(f'p-value: {shap[1]}\\n')\n print('Results of Jarque-Bera:')\n print('Null Hypothesis: Data is normally distributed')\n print('P-Value =< Alpha(.05) => Reject Null') \n print(f'p-value: {j_bera[1]}')\n\ndef plot_diagnostics(residuals):\n residuals.plot(title='ARIMA Residuals', figsize=(15, 10))\n 
plt.show()\n fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(20, 10))\n ax[0].set_title('ARIMA Residuals KDE')\n ax[1].set_title('ARIMA Resduals Probability Plot') \n residuals.plot(kind='kde', ax=ax[0])\n probplot(residuals, dist='norm', plot=ax[1])\n plt.show() ", "_____no_output_____" ], [ "best_parameters = (0, 1, 0)\nmodel = ARIMA(data['bc_pm10'], order=best_parameters)\nmodel_fit = model.fit(disp=-1)\nresid = model_fit.resid\n\nmodel_diagnostics(resid, model_fit)\nplot_diagnostics(resid)", "_____no_output_____" ], [ " acf(resid(final.arma))", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e70f6c6487dad561d09b730b741f4372203d09b7
187,005
ipynb
Jupyter Notebook
Project4/Project4_sul_part4 (4).ipynb
wiggs555/cse7324project
6bc6e51ccbdbf0b80abbb0e7f0a64ae150831abd
[ "Unlicense" ]
null
null
null
Project4/Project4_sul_part4 (4).ipynb
wiggs555/cse7324project
6bc6e51ccbdbf0b80abbb0e7f0a64ae150831abd
[ "Unlicense" ]
null
null
null
Project4/Project4_sul_part4 (4).ipynb
wiggs555/cse7324project
6bc6e51ccbdbf0b80abbb0e7f0a64ae150831abd
[ "Unlicense" ]
1
2019-02-05T07:45:51.000Z
2019-02-05T07:45:51.000Z
112.585792
36,672
0.774541
[ [ [ "# Preparation ", "_____no_output_____" ] ], [ [ "# dependencies\nimport pandas as pd\nimport numpy as np\nimport missingno as msno \nimport matplotlib.pyplot as plt\nimport re\nfrom sklearn.model_selection import train_test_split\n\nfrom textwrap import wrap\nfrom sklearn.preprocessing import StandardScaler\nimport warnings\nwarnings.filterwarnings(\"ignore\")\nimport math\n%matplotlib inline", "_____no_output_____" ], [ "# import data\nshelter_outcomes = pd.read_csv(\"C:/Users/sulem/OneDrive/Desktop/machin learnign/Project3/aac_shelter_outcomes.csv\")\n# filter animal type for just cats\ncats = shelter_outcomes[shelter_outcomes['animal_type'] == 'Cat']\n#print(cats.head())\n\n# remove age_upon_outcome and recalculate to standard units (days)\nage = cats.loc[:,['datetime', 'date_of_birth']]\n# convert to datetime\nage.loc[:,'datetime'] = pd.to_datetime(age['datetime'])\nage.loc[:,'date_of_birth'] = pd.to_datetime(age['date_of_birth'])\n# calculate cat age in days\ncats.loc[:,'age'] = (age.loc[:,'datetime'] - age.loc[:,'date_of_birth']).dt.days\n# get dob info\ncats['dob_month'] = age.loc[:, 'date_of_birth'].dt.month\ncats['dob_day'] = age.loc[:, 'date_of_birth'].dt.day\ncats['dob_dayofweek'] = age.loc[:, 'date_of_birth'].dt.dayofweek\n# get month from datetime\ncats['month'] = age.loc[:,'datetime'].dt.month\n# get day of month\ncats['day'] = age.loc[:,'datetime'].dt.day\n# get day of week\ncats['dayofweek'] = age.loc[:, 'datetime'].dt.dayofweek\n# get hour of day\ncats['hour'] = age.loc[:, 'datetime'].dt.hour\n# get quarter\ncats['quarter'] = age.loc[:, 'datetime'].dt.quarter\n\n# clean up breed attribute\n# get breed attribute for processing\n# convert to lowercase, remove mix and strip whitespace\n# remove space in 'medium hair' to match 'longhair' and 'shorthair'\n# split on either space or '/'\nbreed = cats.loc[:, 'breed'].str.lower().str.replace('mix', '').str.replace('medium hair', 'mediumhair').str.strip().str.split('/', expand=True)\ncats['breed'] = breed[0]\ncats['breed1'] = breed[1]\n\n# clean up color attribute\n# convert to lowercase\n# strip spaces\n# split on '/'\ncolor = cats.loc[:, 'color'].str.lower().str.strip().str.split('/', expand=True)\ncats['color'] = color[0]\ncats['color1'] = color[1]\n\n# clean up sex_upon_outcome\nsex = cats['sex_upon_outcome'].str.lower().str.strip().str.split(' ', expand=True)\nsex[0].replace('spayed', True, inplace=True)\nsex[0].replace('neutered', True, inplace=True)\nsex[0].replace('intact', False, inplace=True)\nsex[1].replace(np.nan, 'unknown', inplace=True)\ncats['spayed_neutered'] = sex[0]\ncats['sex'] = sex[1]\n\n# add in domesticated attribute\ncats['domestic'] = np.where(cats['breed'].str.contains('domestic'), 1, 0)\n\n# combine outcome and outcome subtype into a single attribute\ncats['outcome_subtype'] = cats['outcome_subtype'].str.lower().str.replace(' ', '-').fillna('unknown')\ncats['outcome_type'] = cats['outcome_type'].str.lower().str.replace(' ', '-').fillna('unknown')\ncats['outcome'] = cats['outcome_type'] + '_' + cats['outcome_subtype']\n\n# drop unnecessary columns\ncats.drop(columns=['animal_id', 'name', 'animal_type', 'age_upon_outcome', 'date_of_birth', 'datetime', 'monthyear', 'sex_upon_outcome', 'outcome_subtype', 'outcome_type'], inplace=True)\n#print(cats['outcome'].value_counts())\n\ncats.head()\n", "_____no_output_____" ], [ "cats.drop(columns=['breed1'], inplace=True)\n# Breed, Color, Color1, Spayed_Netured and Sex attributes need to be one hot encoded\ncats_ohe = pd.get_dummies(cats, columns=['breed', 
'color', 'color1', 'spayed_neutered', 'sex'])\ncats_ohe.head()\nout_t={'euthanasia_suffering' : 0, 'died_in-kennel' : 0, 'return-to-owner_unknown' : 0, 'transfer_partner' : 1, 'euthanasia_at-vet' : 2, 'adoption_foster' : 3, 'died_in-foster' : 0, 'transfer_scrp' : 4, 'euthanasia_medical' : 0, 'transfer_snr' : 0, 'died_enroute' : 0, 'rto-adopt_unknown' : 0, 'missing_in-foster' : 0, 'adoption_offsite' : 0, 'adoption_unknown' :5,'euthanasia_rabies-risk' : 0, 'unknown_unknown' : 0, 'adoption_barn' : 0, 'died_unknown' : 0, 'died_in-surgery' : 0, 'euthanasia_aggressive' : 0, 'euthanasia_unknown' : 0, 'missing_unknown' : 0, 'missing_in-kennel' : 0, 'missing_possible-theft' : 0, 'died_at-vet' : 0, 'disposal_unknown' : 0, 'euthanasia_underage' : 0, 'transfer_barn' : 0}\n#output is converted from string to catogries 0 to 5 represent each output\n# separate outcome from data\noutcome = cats_ohe['outcome']\ncats_ohe.drop(columns=['outcome'])\n\nprint(cats_ohe.head())\n\n# split the data\nX_train, X_test, y_train, y_test = train_test_split(cats_ohe, outcome, test_size=0.2, random_state=0)\nX_train.drop(columns=['outcome'], inplace=True)\ny_train = [out_t[item] for item in y_train]\n#print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)", " age dob_month dob_day dob_dayofweek month day dayofweek hour \\\n0 15 7 7 0 7 22 1 16 \n8 59 6 16 0 8 14 3 18 \n9 95 3 26 2 6 29 6 17 \n10 366 3 27 2 3 28 4 14 \n17 24 12 16 0 1 9 3 19 \n\n quarter domestic ... color1_tortie point color1_tricolor \\\n0 3 1 ... 0 0 \n8 3 1 ... 0 0 \n9 2 1 ... 0 0 \n10 1 1 ... 0 0 \n17 1 1 ... 0 0 \n\n color1_white color1_yellow spayed_neutered_False spayed_neutered_True \\\n0 0 0 1 0 \n8 1 0 1 0 \n9 0 0 0 1 \n10 1 0 0 1 \n17 1 0 1 0 \n\n spayed_neutered_unknown sex_female sex_male sex_unknown \n0 0 0 1 0 \n8 0 1 0 0 \n9 0 1 0 0 \n10 0 1 0 0 \n17 0 0 1 0 \n\n[5 rows x 141 columns]\n" ], [ "x_train_ar=X_train.values\ny_target_ar=np.asarray(y_train)\nx_train_ar = StandardScaler().fit(x_train_ar).transform(x_train_ar)\nprint(x_train_ar.shape)\nprint(y_target_ar.shape)\nunique, counts = np.unique(y_target_ar, return_counts=True)\nnp.asarray((unique, counts))\nplt.pie(np.asarray(( counts)), labels=np.unique(y_target_ar), startangle=90, autopct='%.1f%%')\nplt.show()", "(23537, 140)\n(23537,)\n" ] ], [ [ "# Evaluation ", "_____no_output_____" ], [ "# Modeling ", "_____no_output_____" ], [ "# Exceptional Work ", "_____no_output_____" ] ], [ [ "# Example adapted from https://github.com/rasbt/python-machine-learning-book/blob/master/code/ch12/ch12.ipynb\n# Original Author: Sebastian Raschka\n\n# This is the optional book we use in the course, excellent intuitions and straightforward programming examples\n# please note, however, that this code has been manipulated to reflect our assumptions and notation.\nimport numpy as np\nfrom scipy.special import expit\nimport pandas as pd\nimport sys\n\n# start with a simple base classifier, which can't be fit or predicted\n# it only has internal classes to be used by classes that will subclass it\nclass TwoLayerPerceptronBase(object):\n def __init__(self, n_hidden=30,\n C=0.0, epochs=500, eta=0.001, random_state=None,phi='sig',n_ner=2,cf='quad'):\n np.random.seed(random_state)\n self.n_hidden = n_hidden\n self.l2_C = C\n self.epochs = epochs\n self.eta = eta\n self.phi=phi\n self.n_ner=n_ner\n self.cf=cf\n @staticmethod\n def _encode_labels(y):\n \"\"\"Encode labels into one-hot representation\"\"\"\n onehot = pd.get_dummies(y).values.T\n \n return onehot\n\n def _initialize_weights(self):\n 
\"\"\"Initialize weights with small random numbers.\"\"\"\n #W1_num_elems = (self.n_features_ + 1)*self.n_hidden\n #W1 = np.random.uniform(-1.0, 1.0,size=W1_num_elems)\n #W1 = W1.reshape(self.n_hidden, self.n_features_ + 1) # reshape to be W\n \n #W2_num_elems = (self.n_hidden + 1)*self.n_output_\n #W2 = np.random.uniform(-1.0, 1.0, size=W2_num_elems)\n #W2 = W2.reshape(self.n_output_, self.n_hidden + 1)\n\n for i in range(self.n_ner):\n if i==0:\n vars()[\"W\" + str(i + 1) +\"_num_elems\"] = (self.n_features_ + 1)*self.n_hidden\n vars()[\"W\" + str(i + 1)] = np.random.uniform(-1.0, 1.0,size=vars()[\"W\" + str(i + 1) +\"_num_elems\"])\n vars()[\"W\" + str(i + 1)] = vars()[\"W\" + str(i+1)].reshape(self.n_hidden, self.n_features_ + 1) # reshape to be W \n\n if i>0:\n vars()[\"W\" + str(i + 1)+\"_num_elems\"] = (self.n_hidden + 1)*self.n_hidden\n vars()[\"W\" + str(i + 1)] = np.random.uniform(-1.0, 1.0,size=vars()[\"W\" + str(i + 1)+\"_num_elems\"])\n vars()[\"W\" + str(i + 1)] = vars()[\"W\" + str(i+1)].reshape(self.n_hidden, self.n_hidden + 1) # reshape to be W \n \n if i==(self.n_ner-1):\n vars()[\"W\" + str(i + 1)+\"_num_elems\"] = (self.n_hidden + 1)*self.n_output_ \n vars()[\"W\" + str(i + 1)] = np.random.uniform(-1.0, 1.0,size=vars()[\"W\" + str(i + 1)+\"_num_elems\"])\n vars()[\"W\" + str(i + 1)] = vars()[\"W\" + str(i+1)].reshape(self.n_output_, self.n_hidden + 1)\n\n return vars()\n \n @staticmethod\n def _sigmoid(z,phi):\n \"\"\"Use scipy.special.expit to avoid overflow\"\"\"\n # 1.0 / (1.0 + np.exp(-z))\n if phi=='sig': \n return expit(z)\n if phi=='lin': \n return z\n if phi=='silu': \n return expit(z)*z\n if phi=='relu': \n bol= z>=0 \n #z=bol*z\n return np.maximum(0,z.copy())\n \n @staticmethod\n def _add_bias_unit(X, how='column'):\n \"\"\"Add bias unit (column or row of 1s) to array at index 0\"\"\"\n if how == 'column':\n ones = np.ones((X.shape[0], 1))\n X_new = np.hstack((ones, X))\n elif how == 'row':\n ones = np.ones((1, X.shape[1]))\n X_new = np.vstack((ones, X))\n return X_new\n \n @staticmethod\n def _L2_reg(lambda_, W,n):\n \"\"\"Compute L2-regularization cost\"\"\"\n # only compute for non-bias terms\n W_sum=0\n for i in range(n):\n W_sum=np.mean(W['W'+str(i+1)][:, 1:] ** 2)+W_sum\n \n sqr=np.sqrt(W_sum)\n return (lambda_/2.0) *sqr\n \n def _cost(self,Al,Y_enc,W):\n '''Get the objective function value'''\n cost = np.mean((Y_enc-Al)**2)\n L2_term = self._L2_reg(self.l2_C, W,self.n_ner)\n return cost + L2_term\n \n def _feedforward(self, X, W, n_ner):\n \"\"\"Compute feedforward step\n \"\"\" \n # for i in range(5):\n # n = 1\n # globals()[\"A\" + str(i + 1)] = a + b\n # print(globals()[\"Temp\" + str(i + 1)])\n #n = n + 1\n #A1 = self._add_bias_unit(X, how='column')\n #A1 = A1.T\n # Z1 = W1 @ A1\n #A2 = self._sigmoid(Z1,self.phi)\n #A2 = self._add_bias_unit(A2, how='row')\n #Z2 = W2 @ A2\n #A3 = self._sigmoid(Z2,'sig')\n \n for i in range(self.n_ner+1):\n if i==0: \n vars()[\"A\"+str(i+1)]=self._add_bias_unit(X, how='column')\n vars()[\"A\"+str(i+1)]=vars()[\"A\"+str(i+1)].T\n vars()[\"Z\"+str(i+1)]=W[\"W\"+str(i+1)] @ vars()[\"A\"+str(i+1)]\n #print(\"A\"+str(i+1))\n #print(vars()[\"A\"+str(i+1)])\n if (i>0) and i!=(self.n_ner):\n vars()[\"A\"+str(i+1)]=self._sigmoid(vars()[\"Z\"+str(i)],self.phi)\n vars()[\"A\"+str(i+1)]=self._add_bias_unit(vars()[\"A\"+str(i+1)], how='row')\n vars()[\"Z\"+str(i+1)]=W[\"W\"+str(i+1)]@vars()[\"A\"+str(i+1)]\n #print(\"A\"+str(i+1))\n #print(vars()[\"A\"+str(i+1)])\n if i==(self.n_ner):\n 
vars()[\"A\"+str(i+1)]=self._sigmoid(vars()[\"Z\"+str(i)],'sig')\n #print(\"A\"+str(i+1))\n #print(vars()[\"A\"+str(i+1)])\n \n return vars()\n def _div(b,A_,phi):\n \n if phi=='sig': \n return A_*(1-A_)\n if phi=='lin': \n return 1\n if phi=='silu':\n return (expit(A_)*A_)+(expit(A_)*(1-expit(A_)*A_))\n if phi=='relu': \n bol= A_>=0 \n return 1 \n \n def _get_gradient(self, F, Y_enc, W):\n \"\"\" Compute gradient step using backpropagation.\n \"\"\"\n # vectorized backpropagation\n #Z1_with_bias = self._add_bias_unit(Z1,how='row')\n #Z2_with_bias = self._add_bias_unit(Z2,how='row')\n #V2 = -2*(Y_enc-A3)*self._div(A3,self.phi) # last layer sensitivity\n #V1 = self._div(A2,self.phi)*(W2.T @ V2) # back prop the sensitivity \n \n #grad2 = V2 @ A2.T # no bias on final layer\n #grad1 = V1[1:,:] @ A1.T # dont back prop sensitivity of bias\n if self.cf==\"quad\": \n vars()['V'+str(self.n_ner)] = -2*(Y_enc-F[\"A\"+str(self.n_ner+1)])*self._div(F[\"A\"+str(self.n_ner+1)],'sig')\n if self.phi=='relu': \n vars()['V'+str(self.n_ner)][F[\"Z\"+str(self.n_ner)]<=0] = 0 \n \n \n if self.cf==\"ce\": \n vars()['V'+str(self.n_ner)] = -2*(Y_enc-F[\"A\"+str(self.n_ner+1)])\n \n vars()['grad'+str(self.n_ner)] = vars()['V'+str(self.n_ner)] @ F[\"A\"+str(self.n_ner)].T \n\n vars()['grad'+str(self.n_ner)][:, 1:] += W[\"W\"+str(self.n_ner)][:, 1:] * self.l2_C\n for i in range(self.n_ner-1):\n l=self.n_ner-1-i\n if l==self.n_ner-1:\n \n vars()[\"Z\"+str(l)+\"_with_bias\"] = self._add_bias_unit(F[\"Z\"+str(l)],how='row')\n \n vars()['V'+str(l)] = self._div(F[\"A\"+str(l+1)],self.phi)*(W[\"W\"+str(l+1)].T @ vars()['V'+str(l+1)])\n if self.phi=='relu':\n vars()['V'+str(l)][vars()[\"Z\"+str(l)+\"_with_bias\"]<=0] = 0\n if l!=self.n_ner-1:\n \n vars()[\"Z\"+str(l)+\"_with_bias\"] = self._add_bias_unit(F[\"Z\"+str(l)],how='row')\n vars()['V'+str(l)] = self._div(F[\"A\"+str(l+1)],self.phi)*(W[\"W\"+str(l+1)].T @ vars()['V'+str(l+1)][1:, :]) \n if self.phi=='relu':\n vars()['V'+str(l)][vars()[\"Z\"+str(l)+\"_with_bias\"]<=0] = 0\n \n \n vars()['grad'+str(l)]=vars()['V'+str(l)][1:,:] @ F[\"A\"+str(l)].T\n \n \n vars()['grad'+str(l)][:, 1:] += W[\"W\"+str(l)][:, 1:] * self.l2_C\n # regularize weights that are not bias terms\n #grad1[:, 1:] += W1[:, 1:] * self.l2_C\n #grad2[:, 1:] += W2[:, 1:] * self.l2_C\n \n \n return vars()\n \n def predict(self, X):\n \"\"\"Predict class labels\"\"\"\n p = self._feedforward(X, self.W,self.n_ner)\n #print(p[\"A\"+str(self.n_ner+1)])\n \n y_pred = np.argmax(p[\"A\"+str(self.n_ner+1)], axis=0)\n return y_pred", "_____no_output_____" ], [ "from sklearn.metrics import accuracy_score\n# just start with the vectorized version and minibatch\nclass TLPMiniBatch(TwoLayerPerceptronBase):\n def __init__(self, alpha=0.0, decrease_const=0.0, shuffle=True, \n minibatches=1, **kwds): \n # need to add to the original initializer \n self.alpha = alpha\n self.decrease_const = decrease_const\n self.shuffle = shuffle\n self.minibatches = minibatches\n # but keep other keywords\n super().__init__(**kwds)\n \n \n def fit(self, X, y, print_progress=False):\n \"\"\" Learn weights from training data. 
With mini-batch\"\"\"\n X_data, y_data = X.copy(), y.copy()\n Y_enc = self._encode_labels(y)\n \n # init weights and setup matrices\n self.n_features_ = X_data.shape[1]\n self.n_output_ = Y_enc.shape[0]\n \n #self.vars()[\"W\" + str(i + 1)]= self._initialize_weights(i)\n self.W=self._initialize_weights()\n \n #print(self.W['W1'])\n for i in range(self.n_ner):\n vars()[\"delta_W\"+str(i + 1)+\"_prev\"] = np.zeros(self.W[\"W\" + str(i + 1)].shape)\n #delta_W2_prev = np.zeros(self.W[2].shape)\n\n self.cost_ = []\n self.score_ = []\n # get starting acc\n self.score_.append(accuracy_score(y_data,self.predict(X_data)))\n for i in range(self.epochs):\n\n # adaptive learning rate\n self.eta /= (1 + self.decrease_const*i)\n\n if print_progress>0 and (i+1)%print_progress==0:\n sys.stderr.write('\\rEpoch: %d/%d' % (i+1, self.epochs))\n sys.stderr.flush()\n\n if self.shuffle:\n idx_shuffle = np.random.permutation(y_data.shape[0])\n X_data, Y_enc, y_data = X_data[idx_shuffle], Y_enc[:, idx_shuffle], y_data[idx_shuffle]\n\n mini = np.array_split(range(y_data.shape[0]), self.minibatches)\n mini_cost = []\n for idx in mini:\n\n # feedforward\n \n F = self._feedforward(X_data[idx],self.W,self.n_ner)\n \n # F[\"A\"+str(self.n_ner+1)] \n \n cost = self._cost(F[\"A\"+str(self.n_ner+1)],Y_enc[:, idx],self.W)\n mini_cost.append(cost) # this appends cost of mini-batch only\n\n # compute gradient via backpropagation\n grad= self._get_gradient(F= F, \n Y_enc=Y_enc[:, idx],\n W=self.W)\n \n # momentum calculations\n for i in range(self.n_ner):\n #delta_W1, delta_W2 = self.eta * grad1, self.eta * grad2\n #self.W1 -= (delta_W1 + (self.alpha * delta_W1_prev))\n #self.W2 -= (delta_W2 + (self.alpha * delta_W2_prev))\n #delta_W1_prev, delta_W2_prev = delta_W1, delta_W2\n vars()[\"delta_W\"+str(i + 1)] = self.eta * grad[\"grad\"+str(i + 1)]\n self.W[\"W\"+str(i + 1)] -=(vars()[\"delta_W\"+str(i + 1)]+ (self.alpha * vars()[\"delta_W\"+str(i + 1)+\"_prev\"]))\n vars()[\"delta_W\"+str(i + 1)+\"_prev\"]=vars()[\"delta_W\"+str(i + 1)] \n self.cost_.append(mini_cost)\n self.score_.append(accuracy_score(y_data,self.predict(X_data)))\n \n return self", "_____no_output_____" ], [ "# lets load up the handwritten digit dataset\nfrom sklearn.datasets import load_digits\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.preprocessing import StandardScaler\nimport numpy as np\n\nds = load_digits()\nX = ds.data/16.0-0.5\ny = ds.target\nX_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.2)", "_____no_output_____" ], [ "%%time\nparams = dict(n_hidden=50, \n C=.0001, # tradeoff L2 regularizer\n epochs=100, # iterations\n eta=0.001, # learning rate\n random_state=1,\n phi='relu',n_ner=3,cf='ce')\nnn_mini = TLPMiniBatch(**params,\n alpha=0.001,# momentum calculation\n decrease_const=0.0001, # decreasing eta\n minibatches=50, # minibatch size\n shuffle=True)\n\n\n \nnn_mini.fit(X_train, y_train, print_progress=50)\nyhat = nn_mini.predict(X_train)\nprint('Accuracy:',accuracy_score(y_train,yhat))", "Epoch: 100/100" ], [ "from sklearn.preprocessing import StandardScaler\nfrom sklearn.datasets import load_iris\nimport numpy as np\nimport plotly\n\nds = load_iris()\nX = ds.data\ny = ds.target\nx_train_ar = StandardScaler().fit(X).transform(X)\nX_train1, X_test1, y_train1, y_test1 = train_test_split(X,y,test_size = 0.2)", "_____no_output_____" ], [ "%%time\nparams = dict(n_hidden=50, \n C=.0001, # tradeoff L2 regularizer\n epochs=100, # iterations\n eta=0.001, # learning rate\n random_state=1,\n 
phi='sig',n_ner=3,cf='ce')\nparams31 = dict(n_hidden=50, \n C=.0001, # tradeoff L2 regularizer\n epochs=100, # iterations\n eta=0.001, # learning rate\n random_state=1,\n phi='sig',n_ner=3,cf='quad')\n\n\nnn_mini = TLPMiniBatch(**params,\n alpha=0.001,# momentum calculation\n decrease_const=0.0001, # decreasing eta\n minibatches=50, # minibatch size\n shuffle=True)\n\nnn_mini31 = TLPMiniBatch(**params31,\n alpha=0.001,# momentum calculation\n decrease_const=1e-5, # decreasing eta\n minibatches=50, # minibatch size\n shuffle=True)\n decrease_const=1e-5, # decreasing eta\n minibatches=50, # minibatch size\n shuffle=True)\n\n\n \nnn_mini31.fit(X_train1, y_train1, print_progress=50)\nyhat = nn_mini31.predict(X_train1)\nprint('Accuracy:',accuracy_score(y_train1,yhat))", "Epoch: 100/100" ], [ "from sklearn.metrics import accuracy_score\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.style.use('ggplot')\n\ndef print_result(nn,X_train,y_train,X_test,y_test,title=\"\",color=\"red\"):\n \n print(\"=================\")\n print(title,\":\")\n yhat = nn.predict(X_train)\n print('Resubstitution acc:',accuracy_score(y_train,yhat))\n \n yhat = nn.predict(X_test)\n print('Validation acc:',accuracy_score(y_test,yhat))\n \n if hasattr(nn,'val_score_'):\n plt.plot(range(len(nn.val_score_)), nn.val_score_, color=color,label=title)\n plt.ylabel('Validation Accuracy')\n else:\n plt.plot(range(len(nn.score_)), nn.score_, color=color,label=title)\n plt.ylabel('Resub Accuracy')\n \n plt.xlabel('Epochs')\n plt.tight_layout()\n plt.legend(loc='best')\n plt.grid(True)", "_____no_output_____" ], [ "paramslin = dict(n_hidden=50, \n C=.0001, # tradeoff L2 regularizer\n epochs=100, # iterations\n eta=0.001, # learning rate\n random_state=1,\n phi='sig',n_ner=3,cf='ce')\nparamslin31 = dict(n_hidden=50, \n C=.0001, # tradeoff L2 regularizer\n epochs=100, # iterations\n eta=0.001, # learning rate\n random_state=1,\n phi='lin',n_ner=3,cf='ce')\nnn_minilin = TLPMiniBatch(**paramslin,\n alpha=0.001,# momentum calculation\n decrease_const=0.0001, # decreasing eta\n minibatches=50, # minibatch size\n shuffle=True)\n\nnn_minilin31 = TLPMiniBatch(**paramslin31,\n alpha=0.001,# momentum calculation\n decrease_const=1e-5, # decreasing eta\n minibatches=50, # minibatch size\n shuffle=True)", "_____no_output_____" ], [ "%time nn_mini.fit(X_train1, y_train1, print_progress=10)\n%time nn_mini31.fit(X_train1, y_train1, print_progress=10)", "Epoch: 100/100" ], [ "nn_mini31._initialize_weights()", "_____no_output_____" ], [ "print_result(nn_mini,X_train1,y_train1,X_test1,y_test1,title=\"Cross Entropy Loss\",color=\"red\")\nprint_result(nn_mini31,X_train1,y_train1,X_test1,y_test1,title=\"Quadratic Loss\",color=\"blue\")\nplt.show()", "=================\nCross Entropy Loss :\nResubstitution acc: 0.9666666666666667\nValidation acc: 0.9666666666666667\n=================\nQuadratic Loss :\nResubstitution acc: 0.9333333333333333\nValidation acc: 0.9333333333333333\n" ], [ "%time nn_minilin.fit(X_train1, y_train1, print_progress=10)\n%time nn_minilin31.fit(X_train1, y_train1, print_progress=10)", "Epoch: 10/1000" ], [ "nn_minilin31._initialize_weights()", "_____no_output_____" ], [ "print_result(nn_minilin,X_train1,y_train1,X_test1,y_test1,title=\"Cross Entropy Loss sig\",color=\"red\")\nprint_result(nn_minilin31,X_train1,y_train1,X_test1,y_test1,title=\"Cross Entropy Loss lin\",color=\"blue\")\nplt.show()", "=================\nCross Entropy Loss sig :\nResubstitution acc: 0.9666666666666667\nValidation acc: 
0.9666666666666667\n=================\nCross Entropy Loss lin :\nResubstitution acc: 0.975\nValidation acc: 0.9333333333333333\n" ], [ "params2 = dict(n_hidden=50, \n C=.0001, # tradeoff L2 regularizer\n epochs=50, # iterations\n eta=0.001, # learning rate\n random_state=1,\n phi='relu',n_ner=2,cf='ce')\nparams3 = dict(n_hidden=50, \n C=.0001, # tradeoff L2 regularizer\n epochs=50, # iterations\n eta=0.001, # learning rate\n random_state=1,\n phi='relu',n_ner=3,cf='ce')\nnn_mini2 = TLPMiniBatch(**params2,\n alpha=0.001,# momentum calculation\n decrease_const=0.0001, # decreasing eta\n minibatches=50, # minibatch size\n shuffle=True)\n\nnn_mini3 = TLPMiniBatch(**params3,\n alpha=0.001,# momentum calculation\n decrease_const=1e-5, # decreasing eta\n minibatches=50, # minibatch size\n shuffle=True)", "_____no_output_____" ], [ "%time nn_mini2.fit(X_train1, y_train1, print_progress=10)\n%time nn_mini3.fit(X_train1, y_train1, print_progress=10)", "Epoch: 10/50" ], [ "nn_mini3._initialize_weights()", "_____no_output_____" ], [ "print_result(nn_mini2,X_train1,y_train1,X_test1,y_test1,title=\"CE sig 2 layers\",color=\"red\")\nprint_result(nn_mini3,X_train1,y_train1,X_test1,y_test1,title=\"CE sig 3 layers\",color=\"blue\")\nplt.show()", "=================\nCE sig 2 layers :\nResubstitution acc: 0.9583333333333334\nValidation acc: 0.9666666666666667\n=================\nCE sig 3 layers :\nResubstitution acc: 0.9833333333333333\nValidation acc: 0.9333333333333333\n" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e70f8b78d3e995ad1a862d330e8582042c5eac46
6,056
ipynb
Jupyter Notebook
Data Analysis and Model/exploratory_data_analysis/Labeling_Joyce.ipynb
rexlintc/Autoreply
ebdc1f645bf8ff3d46448045b82ebb52d5be4256
[ "MIT" ]
1
2018-05-16T22:42:29.000Z
2018-05-16T22:42:29.000Z
Data Analysis and Model/exploratory_data_analysis/Labeling_Joyce.ipynb
rexlintc/Autoreply
ebdc1f645bf8ff3d46448045b82ebb52d5be4256
[ "MIT" ]
null
null
null
Data Analysis and Model/exploratory_data_analysis/Labeling_Joyce.ipynb
rexlintc/Autoreply
ebdc1f645bf8ff3d46448045b82ebb52d5be4256
[ "MIT" ]
1
2018-05-04T20:01:34.000Z
2018-05-04T20:01:34.000Z
29.980198
371
0.547061
[ [ [ "# Labeling\n\nHopefully this notebook should make the tedious task of labeling as painless as possible for you all.\n\n---\n### BEFORE YOU START: Rename your Jupyter Notebook so it doesn't cause merge conflicts.\n\n__We will assign numbers to each category as follows:__\n1. `miscl.` -- Any miscellaneous emails that don't belong to one of the categories above (anything that we can't generate a generic response to)\n2. `conflicts` -- Anything related to midterm/final scheduling conflicts\n3. `regrade` -- Anything related to HW/Lab/Exam regrades\n4. `hw` -- Anything related to homework (e.g. submissions)\n5. `enrollment` -- Anything related to Calcentral/course enrollment issues\n6. `internal`-- Anything related to course logistics, hiring (interviews), or other internal administrative issues\n7. `dsp` -- Anything related to dsp letters or accommodations\n\n__Keep in mind:__\n\nAs you read through the emails, treat all emails independently from each other, even if it's in the same thread so we don't accidently classify an email as something if it doesn't actually contain indicators that it belongs in a category!\n\n__Example:__\n\nEmail 1: \"okay thanks, i will email you closer to the exam date. as a heads up you will be escorted by a ta from your 294 class to the dsp exam room (it will be in the dsp exam room because of this special circumstance) i.e. you won't have time to go home/take a break if you are accepting this accommodation. please confirm best\"\n\nEmail 2: \"got it. i hope this reply serves as confirmation. thanks\"\n\nThe first email should be classified as 7 ('DSP') because from just its content alone, that would be our best guess. If we read the earlier thread, we'd know it has to do with Midterm Conflicts, but try not to let that bias your classification to best help our model. Likewise, the second email should be 1 ('miscl') even though we know from the threat the context.", "_____no_output_____" ] ], [ [ "import pandas as pd", "_____no_output_____" ], [ "#read emails from csv into dataframe\n#Replace **YOUR_NAME**\nemails = pd.read_csv('../data/joyce_label.csv')", "_____no_output_____" ], [ "# display all emails\ncontent = emails['Body']\n#for i, em in enumerate(content):\n #print('EMAIL!!!', i, '\\n', ' ', em, '\\n')", "_____no_output_____" ], [ "# Each list should contain ten numbers -- each number corresponds to a category as listed above. 
\n#For ex, l0 contains the classifications for emails 0-9.\n#See Keiko or Rohan's notebook for an example of how this is done.\nl0 = [8,8,8,8,1,8,3,3,3,3]\nl1 = [3,3,4,4,2,2,7,7,7,7]\nl2 = [1,1,1,1,1,1,2,2,1,1]\nl3 = [1,7,1,7,4,1,4,4,4,2]\nl4 = [7,7,2,1,1,1,1,1,1,1]\nl5 = [1,1,4,4,4,4,1,4,3,3]\nl6 = [3,4,2,1,2,2,2,2,4,1]\nl7 = [1,1,1,1,2,2,1,2,4,4]\nl8 = [1,4,1,1,1,4,4,2,7,1]\nl9 = [7,1,1,1,1,1,1,1,7,1]\nl10 = [4,4,1,1,4,1,1,1,1,1]\nl11 = [1,1,1,4,2,1,4,4,4,4]\nl12 = [4,4,4,7,4,2,4,1,1,1]\nl13 = [1,1,1,1,1,1,4,1,2,2]\nl14 = [1,1,1,2,1,5,5,1,1,2]\nl15 = [1,1,1,4,4,4,1,1,8,1]\nl16 = [8,1,2,1,4,1,7,7,1,1]\nl17 = [2,1,1,1,1,1,1,1,1,1]\nl18 = [2,2,1,3,1,1,2,2,2,7]\nl19 = [1,1,7,7,1,1,1,1,1,1]\nl20 = [1,1,1,2,2,2,2,2,2,1]\n\ntee = [0,1,2,3,4,5,6,7,8,9]\nl21 = [2,2,1,1,1,1,2,2,1,2]\nl22 = [1,1,1,2,7,7,1,1,2,1]\nl23 = [2,2,2,2,2,1,1,2,2,1]\nl24 = [1,1,1,1,1,2,1,2,1,1]\nl25 = [1,1,1,1,1,1,2,2,1,1]\nl26 = [2,1,1,1,1,1,2,1,1,2]\nl27 = [7,1,2,1,1,1,1,1,1,1]\nl28 = [1,7,4,4,4,4,1,1,3,1]\nl29 = [1,1,1,4,4,4,4,4,4,4]\nl30 = [4,4,1,1]", "_____no_output_____" ], [ "l = l0+l1+l2+l3+l4+l5+l6+l7+l8+l9+l10+l11+l12+l13+l14+l15+l16+l17+l18+l19+l20+l21+l22+l23+l24+l25+l26+l27+l28+l29+l30\nlen(l)", "_____no_output_____" ], [ "emails['Category'] = l", "_____no_output_____" ], [ "#emails.iloc[:,[0,6,7]]", "_____no_output_____" ], [ "#replace **YOUR_NAME**\n\nemails.to_csv('../data/joyce_labeled.csv', index=False, sep=',', encoding='utf-8')", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e70f910a5883282db7692a2ec363b5d7de983816
114,427
ipynb
Jupyter Notebook
sms_using__Stemmer_with_countvectorizer.ipynb
ksdkamesh99/Email-Spam-Classifier
b334b679e84edc5f8847e3ed5e801838c948fba1
[ "MIT" ]
5
2020-10-29T05:31:20.000Z
2021-04-30T19:44:02.000Z
sms_using__Stemmer_with_countvectorizer.ipynb
ksdkamesh99/Email-Spam-Classifier
b334b679e84edc5f8847e3ed5e801838c948fba1
[ "MIT" ]
null
null
null
sms_using__Stemmer_with_countvectorizer.ipynb
ksdkamesh99/Email-Spam-Classifier
b334b679e84edc5f8847e3ed5e801838c948fba1
[ "MIT" ]
2
2020-06-22T05:39:32.000Z
2022-03-18T03:43:10.000Z
43.098682
215
0.492305
[ [ [ "cd /content/drive/My Drive/Spam Classifier", "/content/drive/My Drive/Spam Classifier\n" ], [ "import nltk\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.model_selection import train_test_split\nfrom nltk.corpus import stopwords\nfrom nltk.stem import PorterStemmer,WordNetLemmatizer\nfrom nltk.tokenize import word_tokenize\nimport sklearn.metrics as m\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.svm import SVC\nfrom sklearn.tree import DecisionTreeClassifier\n", "_____no_output_____" ], [ "nltk.download('punkt')", "[nltk_data] Downloading package punkt to /root/nltk_data...\n[nltk_data] Unzipping tokenizers/punkt.zip.\n" ], [ "nltk.download('stopwords')", "[nltk_data] Downloading package stopwords to /root/nltk_data...\n[nltk_data] Unzipping corpora/stopwords.zip.\n" ], [ "nltk.download('wordnet')", "[nltk_data] Downloading package wordnet to /root/nltk_data...\n[nltk_data] Unzipping corpora/wordnet.zip.\n" ], [ "dataset=pd.read_csv('spam.csv',encoding='latin-1')\ndataset", "_____no_output_____" ], [ "sent=dataset.iloc[:,[1]]['v2']", "_____no_output_____" ], [ "sent", "_____no_output_____" ], [ "label=dataset.iloc[:,[0]]['v1']", "_____no_output_____" ], [ "label", "_____no_output_____" ], [ "from sklearn.preprocessing import LabelEncoder", "_____no_output_____" ], [ "le=LabelEncoder()\nlabel=le.fit_transform(label)", "_____no_output_____" ], [ "label", "_____no_output_____" ], [ "le.classes_", "_____no_output_____" ], [ "import re", "_____no_output_____" ], [ "len(set(stopwords.words('english')))", "_____no_output_____" ], [ "stem=PorterStemmer()", "_____no_output_____" ], [ "sent", "_____no_output_____" ], [ "sentences=[]\nfor sen in sent:\n senti=re.sub('[^A-Za-z]',' ',sen)\n senti=senti.lower()\n words=word_tokenize(senti)\n word=[stem.stem(i) for i in words if i not in stopwords.words('english')]\n senti=' '.join(word)\n sentences.append(senti)\n", "_____no_output_____" ], [ "sentences", "_____no_output_____" ], [ "from sklearn.feature_extraction.text import CountVectorizer", "_____no_output_____" ], [ "cv=CountVectorizer(max_features=5000)", "_____no_output_____" ], [ "features=cv.fit_transform(sentences)", "_____no_output_____" ], [ "features=features.toarray()", "_____no_output_____" ], [ "features", "_____no_output_____" ], [ "len(cv.get_feature_names())", "_____no_output_____" ], [ "feature_train,feature_test,label_train,label_test=train_test_split(features,label,test_size=0.2,random_state=7)", "_____no_output_____" ] ], [ [ "#Naive Bayies", "_____no_output_____" ] ], [ [ "model=MultinomialNB()\nmodel.fit(feature_train,label_train)", "_____no_output_____" ], [ "label_pred=model.predict(feature_test)", "_____no_output_____" ], [ "label_pred", "_____no_output_____" ], [ "label_test", "_____no_output_____" ], [ "m.accuracy_score(label_test,label_pred)", "_____no_output_____" ], [ "print(m.classification_report(label_test,label_pred))", " precision recall f1-score support\n\n 0 0.99 0.99 0.99 970\n 1 0.93 0.96 0.95 145\n\n accuracy 0.99 1115\n macro avg 0.96 0.97 0.97 1115\nweighted avg 0.99 0.99 0.99 1115\n\n" ], [ "print(m.confusion_matrix(label_test,label_pred))", "[[960 10]\n [ 6 139]]\n" ] ], [ [ "#SVC", "_____no_output_____" ] ], [ [ "model=SVC(kernel='linear')\nmodel.fit(feature_train,label_train)", "_____no_output_____" ], [ "label_pred=model.predict(feature_test)", "_____no_output_____" ], [ "m.accuracy_score(label_test,label_pred)", "_____no_output_____" ], [ 
"label_pred", "_____no_output_____" ], [ "label_test", "_____no_output_____" ], [ "print(m.classification_report(label_test,label_pred))", " precision recall f1-score support\n\n 0 0.99 1.00 0.99 970\n 1 0.98 0.92 0.95 145\n\n accuracy 0.99 1115\n macro avg 0.98 0.96 0.97 1115\nweighted avg 0.99 0.99 0.99 1115\n\n" ], [ "print(m.confusion_matrix(label_test,label_pred))", "[[967 3]\n [ 11 134]]\n" ] ], [ [ "#LogisticRegression", "_____no_output_____" ] ], [ [ "model=LogisticRegression()\nmodel.fit(feature_train,label_train)", "_____no_output_____" ], [ "label_pred=model.predict(feature_test)", "_____no_output_____" ], [ "m.accuracy_score(label_test,label_pred)", "_____no_output_____" ], [ "label_pred", "_____no_output_____" ], [ "label_test", "_____no_output_____" ], [ "print(m.classification_report(label_test,label_pred))", " precision recall f1-score support\n\n 0 0.99 1.00 0.99 970\n 1 0.99 0.90 0.95 145\n\n accuracy 0.99 1115\n macro avg 0.99 0.95 0.97 1115\nweighted avg 0.99 0.99 0.99 1115\n\n" ], [ "print(m.confusion_matrix(label_test,label_pred))", "[[969 1]\n [ 14 131]]\n" ] ], [ [ "#Decision Tree", "_____no_output_____" ] ], [ [ "model=DecisionTreeClassifier()\nmodel.fit(feature_train,label_train)\n", "_____no_output_____" ], [ "label_pred=model.predict(feature_test)", "_____no_output_____" ], [ "m.accuracy_score(label_test,label_pred)", "_____no_output_____" ], [ "label_pred", "_____no_output_____" ], [ "label_test", "_____no_output_____" ], [ "print(m.classification_report(label_test,label_pred))", " precision recall f1-score support\n\n 0 0.98 0.99 0.99 970\n 1 0.94 0.89 0.91 145\n\n accuracy 0.98 1115\n macro avg 0.96 0.94 0.95 1115\nweighted avg 0.98 0.98 0.98 1115\n\n" ], [ "print(m.confusion_matrix(label_test,label_pred))", "[[962 8]\n [ 16 129]]\n" ], [ "", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e70f9127136ae9877334f159a56950a1d096448e
8,201
ipynb
Jupyter Notebook
Checkpoint 3 Drill.ipynb
mouax209/mouax209
85ec8564ead807a225f3ff36d7cf485462d895f4
[ "CC-BY-3.0" ]
null
null
null
Checkpoint 3 Drill.ipynb
mouax209/mouax209
85ec8564ead807a225f3ff36d7cf485462d895f4
[ "CC-BY-3.0" ]
null
null
null
Checkpoint 3 Drill.ipynb
mouax209/mouax209
85ec8564ead807a225f3ff36d7cf485462d895f4
[ "CC-BY-3.0" ]
null
null
null
40.399015
144
0.520912
[ [ [ "import math\n\n-------------First you create a root node.\n#find item in a list\ndef find(item, list):\n for i in list:\n if item(i): \n return True\n else:\n return False\n\n-------------next two lines say that if you're already exclusively (If all observations are 'A', label root node 'A' and return.\n If all observations are 'B', label root node 'B' and return.\n If no attributes return the root note labeled with the most common Outcome.) one class, just label with that class and you're done.\n#find most common value for an attribute\ndef majority(attributes, data, target):\n #find target attribute\n valFreq = {}\n #find target in data\n index = attributes.index(target)\n #calculate frequency of values in target attr\n for tuple in data:\n if (valFreq.has_key(tuple[index])):\n valFreq[tuple[index]] += 1 \n else:\n valFreq[tuple[index]] = 1\n max = 0\n major = \"\"\n for key in valFreq.keys():\n if valFreq[key]>max:\n max = valFreq[key]\n major = key\n return major\n \n \n-------------real algorithm. For each value vi of each attribute ai, calculate the entropy.\n#Calculates the entropy of the given data set for the target attr\ndef entropy(attributes, data, targetAttr):\n\n valFreq = {}\n dataEntropy = 0.0\n \n #find index of the target attribute\n i = 0\n for entry in attributes:\n if (targetAttr == entry):\n break\n ++i\n \n # Calculate the frequency of each of the values in the target attr\n for entry in data:\n if (valFreq.has_key(entry[i])):\n valFreq[entry[i]] += 1.0\n else:\n valFreq[entry[i]] = 1.0\n\n # Calculate the entropy of the data for the target attr\n for freq in valFreq.values():\n dataEntropy += (-freq/len(data)) * math.log(freq/len(data), 2) \n \n return dataEntropy\n\n\ndef gain(attributes, data, attr, targetAttr):\n \"\"\"\n Calculates the information gain (reduction in entropy) that would\n result by splitting the data on the chosen attribute (attr).\n \"\"\"\n valFreq = {}\n subsetEntropy = 0.0\n \n--------------If no attributes return the root note labeled with the most common Outcome.\n Otherwise, start:\n For each value vi of each attribute ai, calculate the entropy.\n \n #find index of the attribute\n i = attributes.index(attr)\n\n # Calculate the frequency of each of the values in the target attribute\n for entry in data:\n if (valFreq.has_key(entry[i])):\n valFreq[entry[i]] += 1.0\n else:\n valFreq[entry[i]] = 1.0\n # Calculate the sum of the entropy for each subset of records weighted\n # by their probability of occuring in the training set.\n for val in valFreq.keys():\n valProb = valFreq[val] / sum(valFreq.values())\n dataSubset = [entry for entry in data if entry[i] == val]\n subsetEntropy += valProb * entropy(attributes, dataSubset, targetAttr)\n\n # Subtract the entropy of the chosen attribute from the entropy of the\n # whole data set with respect to the target attribute (and return it)\n return (entropy(attributes, data, targetAttr) - subsetEntropy)\n\n#choose best attibute \ndef chooseAttr(data, attributes, target):\n best = attributes[0]\n maxGain = 0;\n for attr in attributes:\n newGain = gain(attributes, data, attr, target) \n if newGain>maxGain:\n maxGain = newGain\n best = attr\n return best\n\n#get values in the column of the given attribute \ndef getValues(data, attributes, attr):\n index = attributes.index(attr)\n values = []\n for entry in data:\n if entry[index] not in values:\n values.append(entry[index])\n return values\n\ndef getExamples(data, attributes, best, val):\n examples = [[]]\n index = attributes.index(best)\n for entry 
in data:\n #find entries with the give value\n if (entry[index] == val):\n newEntry = []\n #add value if it is not in best column\n for i in range(0,len(entry)):\n if(i != index):\n newEntry.append(entry[i])\n examples.append(newEntry)\n examples.remove([])\n return examples\n\n--------------The attribute for this node is then ai\n Split the tree to below based on the rule ai = vi\ndef makeTree(data, attributes, target, recursion):\n recursion += 1\n #Returns a new decision tree based on the examples given.\n data = data[:]\n vals = [record[attributes.index(target)] for record in data]\n default = majority(attributes, data, target)\n\n\n-------------Else at the new node start a subtree (Observationsvi, Target Outcome, Attributes - {ai}) and repeat the algorithm\n # If the dataset is empty or the attributes list is empty, return the\n # default value. When checking the attributes list for emptiness, we\n # need to subtract 1 to account for the target attribute.\n if not data or (len(attributes) - 1) <= 0:\n return default\n # If all the records in the dataset have the same classification,\n # return that classification.\n elif vals.count(vals[0]) == len(vals):\n return vals[0]\n else:\n # Choose the next best attribute to best classify our data\n best = chooseAttr(data, attributes, target)\n # Create a new decision tree/node with the best attribute and an empty\n # dictionary object--we'll fill that up next.\n tree = {best:{}}\n \n # Create a new decision tree/sub-node for each of the values in the\n # best attribute field\n for val in getValues(data, attributes, best):\n # Create a subtree for the current value under the \"best\" field\n examples = getExamples(data, attributes, best, val)\n newAttr = attributes[:]\n newAttr.remove(best)\n subtree = makeTree(examples, newAttr, target, recursion)\n \n # Add the new subtree to the empty dictionary object in our new\n # tree/node we just created.\n tree[best][val] = subtree\n \n return tree\n", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown" ] ]
e70f9162bae91da529ee1f2a49128561793440b7
525,419
ipynb
Jupyter Notebook
docs/practices/cv/super_resolution_sub_pixel.ipynb
luotao1/docs
2ebc0351fa1060426253fbea3559e84a55c7cb7c
[ "Apache-2.0" ]
37
2021-05-28T08:59:49.000Z
2022-03-16T12:41:43.000Z
docs/practices/cv/super_resolution_sub_pixel.ipynb
luotao1/docs
2ebc0351fa1060426253fbea3559e84a55c7cb7c
[ "Apache-2.0" ]
896
2021-05-14T16:05:54.000Z
2022-03-31T08:58:33.000Z
docs/practices/cv/super_resolution_sub_pixel.ipynb
luotao1/docs
2ebc0351fa1060426253fbea3559e84a55c7cb7c
[ "Apache-2.0" ]
138
2021-05-17T02:57:09.000Z
2022-03-30T08:23:54.000Z
639.195864
156,456
0.944423
[ [ [ "# 通过Sub-Pixel实现图像超分辨率\n**作者:** [Ralph LU](https://github.com/ralph0813)<br>\n**日期:** 2021.12 <br>\n**摘要:** 本示例通过Sub-Pixel实现图像超分辨率。", "_____no_output_____" ], [ "## 一、简要介绍\n\n在计算机视觉中,图像超分辨率(Image Super Resolution)是指由一幅低分辨率图像或图像序列恢复出高分辨率图像。图像超分辨率技术分为超分辨率复原和超分辨率重建。\n\n本示例简要介绍如何通过飞桨开源框架,实现图像超分辨率。包括数据集的定义、模型的搭建与训练。\n\n参考论文:《Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network》\n\n论文链接:https://arxiv.org/abs/1609.05158", "_____no_output_____" ], [ "## 二、环境设置\n导入一些比较基础常用的模块,确认自己的飞桨版本。", "_____no_output_____" ] ], [ [ "import os\nimport io\nimport math\nimport random\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom PIL import Image\n\nfrom IPython.display import display\n\nimport paddle\nfrom paddle.io import Dataset\nfrom paddle.vision.transforms import transforms\n\nprint(paddle.__version__)", "2.2.1\n" ] ], [ [ "## 三、数据集\n### 3.1 数据集下载\n本案例使用BSR_bsds500数据集,下载链接:http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/BSR/BSR_bsds500.tgz", "_____no_output_____" ] ], [ [ "!wget --no-check-certificate --no-cookies --header \"Cookie: oraclelicense=accept-securebackup-cookie\" http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/BSR/BSR_bsds500.tgz\n!tar -zxvf BSR_bsds500.tgz", "_____no_output_____" ] ], [ [ "### 3.2 数据集概览\n```\nBSR\n├── BSDS500\n│   └── data\n│   ├── groundTruth\n│   │   ├── test\n│   │   ├── train\n│   │   └── val\n│   └── images\n│   ├── test\n│   ├── train\n│   └── val\n├── bench\n│   ├── benchmarks\n│   ├── data\n│   │   ├── ...\n│   │   └── ...\n│   └── source\n└── documentation\n```\n\n可以看到需要的图片文件在BSR/BSDS500/images文件夹下,train、test各200张,val为100张。", "_____no_output_____" ], [ "### 3.3 数据集类定义\n飞桨(PaddlePaddle)数据集加载方案是统一使用Dataset(数据集定义) + DataLoader(多进程数据集加载)。\n\n首先先进行数据集的定义,数据集定义主要是实现一个新的Dataset类,继承父类paddle.io.Dataset,并实现父类中以下两个抽象方法,__getitem__和__len__:\n```python\nclass MyDataset(Dataset):\n def __init__(self):\n ...\n\n # 每次迭代时返回数据和对应的标签\n def __getitem__(self, idx):\n return x, y\n\n # 返回整个数据集的总数\n def __len__(self):\n return count(samples)\n```\n", "_____no_output_____" ] ], [ [ "class BSD_data(Dataset):\n \"\"\"\n 继承paddle.io.Dataset类\n \"\"\"\n def __init__(self, mode='train',image_path=\"BSR/BSDS500/data/images/\"):\n \"\"\"\n 实现构造函数,定义数据读取方式,划分训练和测试数据集\n \"\"\"\n super(BSD_data, self).__init__()\n \n self.mode = mode.lower()\n if self.mode == 'train':\n self.image_path = os.path.join(image_path,'train')\n elif self.mode == 'val':\n self.image_path = os.path.join(image_path,'val')\n else:\n raise ValueError('mode must be \"train\" or \"val\"')\n \n # 原始图像的缩放大小\n self.crop_size = 300\n # 缩放倍率\n self.upscale_factor = 3\n # 缩小后送入神经网络的大小\n self.input_size = self.crop_size // self.upscale_factor\n # numpy随机数种子\n self.seed=1337\n # 图片集合\n self.temp_images = []\n # 加载数据\n self._parse_dataset()\n \n def transforms(self, img):\n \"\"\"\n 图像预处理工具,用于将升维(100, 100) => (100, 100,1),\n 并对图像的维度进行转换从HWC变为CHW\n \"\"\"\n if len(img.shape) == 2:\n img = np.expand_dims(img, axis=2)\n return img.transpose((2, 0, 1))\n \n def __getitem__(self, idx):\n \"\"\"\n 返回 缩小3倍后的图片 和 原始图片\n \"\"\"\n \n # 加载原始图像\n img = self._load_img(self.temp_images[idx])\n # 将原始图像缩放到(3, 300, 300)\n img = img.resize([self.crop_size,self.crop_size], Image.BICUBIC)\n\n #转换为YCbCr图像\n ycbcr = img.convert(\"YCbCr\")\n\n # 因为人眼对亮度敏感,所以只取Y通道\n y, cb, cr = ycbcr.split()\n y = np.asarray(y,dtype='float32')\n y = y / 255.0\n\n # 缩放后的图像和前面采取一样的操作\n img_ = img.resize([self.input_size,self.input_size], Image.BICUBIC)\n ycbcr_ = 
img_.convert(\"YCbCr\")\n y_, cb_, cr_ = ycbcr_.split()\n y_ = np.asarray(y_,dtype='float32')\n y_ = y_ / 255.0\n\n # 升纬并将HWC转换为CHW\n y = self.transforms(y)\n x = self.transforms(y_)\n\n # x为缩小3倍后的图片(1, 100, 100) y是原始图片(1, 300, 300)\n return x, y\n\n\n def __len__(self):\n \"\"\"\n 实现__len__方法,返回数据集总数目\n \"\"\"\n return len(self.temp_images)\n \n def _sort_images(self, img_dir):\n \"\"\"\n 对文件夹内的图像进行按照文件名排序\n \"\"\"\n files = []\n\n for item in os.listdir(img_dir):\n if item.split('.')[-1].lower() in [\"jpg\",'jpeg','png']:\n files.append(os.path.join(img_dir, item))\n\n return sorted(files)\n \n def _parse_dataset(self):\n \"\"\"\n 处理数据集\n \"\"\"\n self.temp_images = self._sort_images(self.image_path)\n random.Random(self.seed).shuffle(self.temp_images)\n \n def _load_img(self, path):\n \"\"\"\n 从磁盘读取图片\n \"\"\"\n with open(path, 'rb') as f:\n img = Image.open(io.BytesIO(f.read()))\n img = img.convert('RGB')\n return img", "_____no_output_____" ] ], [ [ "### 3.4 PetDataSet数据集抽样展示\n实现好BSD_data数据集后,我们来测试一下数据集是否符合预期,因为BSD_data是一个可以被迭代的Class,我们通过for循环从里面读取数据进行展示。", "_____no_output_____" ] ], [ [ "# 测试定义的数据集\ntrain_dataset = BSD_data(mode='train')\nval_dataset = BSD_data(mode='val')\n\nprint('=============train dataset=============')\nx, y = train_dataset[0]\nx = x[0]\ny = y[0]\nx = x * 255\ny = y * 255\nimg_ = Image.fromarray(np.uint8(x), mode=\"L\")\nimg = Image.fromarray(np.uint8(y), mode=\"L\")\ndisplay(img_)\ndisplay(img_.size)\ndisplay(img)\ndisplay(img.size)", "=============train dataset=============\n" ] ], [ [ "## 四、模型组网\nSub_Pixel_CNN是一个全卷积网络,网络结构比较简单,这里采用Layer类继承方式组网。", "_____no_output_____" ] ], [ [ "class Sub_Pixel_CNN(paddle.nn.Layer):\n\n def __init__(self, upscale_factor=3, channels=1):\n super(Sub_Pixel_CNN, self).__init__()\n \n self.conv1 = paddle.nn.Conv2D(channels,64,5,stride=1, padding=2)\n self.conv2 = paddle.nn.Conv2D(64,64,3,stride=1, padding=1)\n self.conv3 = paddle.nn.Conv2D(64,32,3,stride=1, padding=1)\n self.conv4 = paddle.nn.Conv2D(32,channels * (upscale_factor ** 2),3,stride=1, padding=1)\n\n def forward(self, x):\n x = self.conv1(x)\n x = self.conv2(x)\n x = self.conv3(x)\n x = self.conv4(x)\n x = paddle.nn.functional.pixel_shuffle(x,3)\n return x", "_____no_output_____" ] ], [ [ "### 4.1 模型封装", "_____no_output_____" ] ], [ [ "# 模型封装\nmodel = paddle.Model(Sub_Pixel_CNN())", "_____no_output_____" ] ], [ [ "### 4.2 模型可视化\n调用飞桨提供的summary接口对组建好的模型进行可视化,方便进行模型结构和参数信息的查看和确认。", "_____no_output_____" ] ], [ [ "model.summary((1,1, 100, 100))", "---------------------------------------------------------------------------\n Layer (type) Input Shape Output Shape Param # \n===========================================================================\n Conv2D-5 [[1, 1, 100, 100]] [1, 64, 100, 100] 1,664 \n Conv2D-6 [[1, 64, 100, 100]] [1, 64, 100, 100] 36,928 \n Conv2D-7 [[1, 64, 100, 100]] [1, 32, 100, 100] 18,464 \n Conv2D-8 [[1, 32, 100, 100]] [1, 9, 100, 100] 2,601 \n===========================================================================\nTotal params: 59,657\nTrainable params: 59,657\nNon-trainable params: 0\n---------------------------------------------------------------------------\nInput size (MB): 0.04\nForward/backward pass size (MB): 12.89\nParams size (MB): 0.23\nEstimated Total Size (MB): 13.16\n---------------------------------------------------------------------------\n\n" ] ], [ [ "## 五、模型训练", "_____no_output_____" ], [ "### 5.1 
启动模型训练\n\n使用模型代码进行Model实例生成,使用prepare接口定义优化器、损失函数和评价指标等信息,用于后续训练使用。在所有初步配置完成后,调用fit接口开启训练执行过程,调用fit时只需要将前面定义好的训练数据集、测试数据集、训练轮次(Epoch)和批次大小(batch_size)配置好即可。", "_____no_output_____" ] ], [ [ "model.prepare(paddle.optimizer.Adam(learning_rate=0.001,parameters=model.parameters()),\n paddle.nn.MSELoss()\n )\n\n# 启动模型训练,指定训练数据集,设置训练轮次,设置每次数据集计算的批次大小,设置日志格式\nmodel.fit(train_dataset,\n epochs=20,\n batch_size=16,\n verbose=1)", "The loss value printed in the log is the current step, and the metric is the average value of previous steps.\nEpoch 1/20\nstep 13/13 [==============================] - loss: 0.1233 - 112ms/step \nEpoch 2/20\nstep 13/13 [==============================] - loss: 0.0427 - 113ms/step \nEpoch 3/20\nstep 13/13 [==============================] - loss: 0.0259 - 117ms/step \nEpoch 4/20\nstep 13/13 [==============================] - loss: 0.0208 - 113ms/step \nEpoch 5/20\nstep 13/13 [==============================] - loss: 0.0174 - 112ms/step \nEpoch 6/20\nstep 13/13 [==============================] - loss: 0.0110 - 112ms/step \nEpoch 7/20\nstep 13/13 [==============================] - loss: 0.0131 - 110ms/step \nEpoch 8/20\nstep 13/13 [==============================] - loss: 0.0102 - 114ms/step \nEpoch 9/20\nstep 13/13 [==============================] - loss: 0.0083 - 111ms/step \nEpoch 10/20\nstep 13/13 [==============================] - loss: 0.0061 - 112ms/step \nEpoch 11/20\nstep 13/13 [==============================] - loss: 0.0047 - 111ms/step \nEpoch 12/20\nstep 13/13 [==============================] - loss: 0.0074 - 112ms/step \nEpoch 13/20\nstep 13/13 [==============================] - loss: 0.0043 - 112ms/step \nEpoch 14/20\nstep 13/13 [==============================] - loss: 0.0059 - 112ms/step \nEpoch 15/20\nstep 13/13 [==============================] - loss: 0.0058 - 112ms/step \nEpoch 16/20\nstep 13/13 [==============================] - loss: 0.0045 - 112ms/step \nEpoch 17/20\nstep 13/13 [==============================] - loss: 0.0048 - 112ms/step \nEpoch 18/20\nstep 13/13 [==============================] - loss: 0.0031 - 112ms/step \nEpoch 19/20\nstep 13/13 [==============================] - loss: 0.0049 - 112ms/step \nEpoch 20/20\nstep 13/13 [==============================] - loss: 0.0063 - 112ms/step \n" ] ], [ [ "## 六、模型预测", "_____no_output_____" ], [ "### 6.1 预测\n我们可以直接使用model.predict接口来对数据集进行预测操作,只需要将预测数据集传递到接口内即可。", "_____no_output_____" ] ], [ [ "predict_results = model.predict(val_dataset)", "Predict begin...\nstep 100/100 [==============================] - 8ms/step \nPredict samples: 100\n" ] ], [ [ "### 6.2 定义预测结果可视化函数", "_____no_output_____" ] ], [ [ "import math\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes\nfrom mpl_toolkits.axes_grid1.inset_locator import mark_inset\n\ndef psnr(img1, img2):\n \"\"\"\n PSMR计算函数\n \"\"\"\n mse = np.mean( (img1/255. - img2/255.) 
** 2 )\n if mse < 1.0e-10:\n return 100\n PIXEL_MAX = 1\n return 20 * math.log10(PIXEL_MAX / math.sqrt(mse))\n\ndef plot_results(img, title='results', prefix='out'):\n \"\"\"\n 画图展示函数\n \"\"\"\n img_array = np.asarray(img, dtype='float32')\n img_array = img_array.astype(\"float32\") / 255.0\n\n fig, ax = plt.subplots()\n im = ax.imshow(img_array[::-1], origin=\"lower\")\n\n plt.title(title)\n axins = zoomed_inset_axes(ax, 2, loc=2)\n axins.imshow(img_array[::-1], origin=\"lower\")\n\n x1, x2, y1, y2 = 200, 300, 100, 200\n axins.set_xlim(x1, x2)\n axins.set_ylim(y1, y2)\n\n plt.yticks(visible=False)\n plt.xticks(visible=False)\n\n mark_inset(ax, axins, loc1=1, loc2=3, fc=\"none\", ec=\"blue\")\n plt.savefig(str(prefix) + \"-\" + title + \".png\")\n plt.show()\n \ndef get_lowres_image(img, upscale_factor):\n \"\"\"\n 缩放图片\n \"\"\"\n return img.resize(\n (img.size[0] // upscale_factor, img.size[1] // upscale_factor),\n Image.BICUBIC,\n )\n\ndef upscale_image(model, img):\n '''\n 输入小图,返回上采样三倍的大图像\n '''\n # 把图片复转换到YCbCr格式\n ycbcr = img.convert(\"YCbCr\")\n y, cb, cr = ycbcr.split()\n y = np.asarray(y, dtype='float32')\n y = y / 255.0\n img = np.expand_dims(y, axis=0) # 升维度到(1,w,h)一张image\n img = np.expand_dims(img, axis=0) # 升维度到(1,1,w,h)一个batch\n img = np.expand_dims(img, axis=0) # 升维度到(1,1,1,w,h)可迭代的batch\n \n out = model.predict(img) # predict输入要求为可迭代的batch\n\n out_img_y = out[0][0][0] # 得到predict输出结果\n out_img_y *= 255.0\n\n # 把图片复转换回RGB格式\n out_img_y = out_img_y.reshape((np.shape(out_img_y)[1], np.shape(out_img_y)[2]))\n out_img_y = Image.fromarray(np.uint8(out_img_y), mode=\"L\")\n out_img_cb = cb.resize(out_img_y.size, Image.BICUBIC)\n out_img_cr = cr.resize(out_img_y.size, Image.BICUBIC)\n out_img = Image.merge(\"YCbCr\", (out_img_y, out_img_cb, out_img_cr)).convert(\n \"RGB\"\n )\n return out_img\n\ndef main(model, img, upscale_factor=3):\n # 读取图像\n with open(img, 'rb') as f:\n img = Image.open(io.BytesIO(f.read()))\n # 缩小三倍\n lowres_input = get_lowres_image(img, upscale_factor)\n w = lowres_input.size[0] * upscale_factor\n h = lowres_input.size[1] * upscale_factor\n # 将缩小后的图片再放大三倍\n lowres_img = lowres_input.resize((w, h)) \n # 确保未经缩放的图像和其他两张图片大小一致\n highres_img = img.resize((w, h))\n # 得到缩小后又经过 Efficient Sub-Pixel CNN放大的图片\n prediction = upscale_image(model, lowres_input)\n psmr_low = psnr(np.asarray(lowres_img), np.asarray(highres_img))\n psmr_pre = psnr(np.asarray(prediction), np.asarray(highres_img))\n # 展示三张图片\n plot_results(lowres_img, \"lowres\")\n plot_results(highres_img, \"highres\")\n plot_results(prediction, \"prediction\")\n print(\"psmr_low:\", psmr_low, \"psmr_pre:\", psmr_pre)", "_____no_output_____" ] ], [ [ "### 6.3 执行预测\n从我们的预测数据集中抽1个张图片来看看预测的效果,展示一下原图、小图和预测结果。", "_____no_output_____" ] ], [ [ "main(model,'BSR/BSDS500/data/images/test/100007.jpg')", "Predict begin...\nstep 1/1 [==============================] - 3ms/step\nPredict samples: 1\n" ] ], [ [ "# 7.模型保存\n将模型保存到 checkpoint/model_final ,并保留训练参数", "_____no_output_____" ] ], [ [ "model.save('checkpoint/model_final',training=True)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e70fac589e7fed67ef2c04940a4acccebb802583
2,325
ipynb
Jupyter Notebook
18_Searching.ipynb
ChicagoPark/DSA
a88c3fb8481f795d2f3aec12e7ac0ef8107b3e02
[ "CECILL-B" ]
null
null
null
18_Searching.ipynb
ChicagoPark/DSA
a88c3fb8481f795d2f3aec12e7ac0ef8107b3e02
[ "CECILL-B" ]
null
null
null
18_Searching.ipynb
ChicagoPark/DSA
a88c3fb8481f795d2f3aec12e7ac0ef8107b3e02
[ "CECILL-B" ]
null
null
null
19.871795
61
0.449032
[ [ [ "# Search\n\n## Linear Search", "_____no_output_____" ] ], [ [ "def linearSearch(arr, value):\n for i, v in enumerate(arr):\n if v == value:\n return 1\n return -1\n\nprint(linearSearch([20,40,30,50,90],10))", "-1\n" ] ], [ [ "## Binary Search ", "_____no_output_____" ] ], [ [ "import math\n\ndef binarySearch(arr, value):\n # define pointers\n start = 0\n end = len(arr)-1\n middle = math.floor((start+end)/2)\n while not(arr[middle]== value) and start <= end:\n if value < arr[middle]:\n end = middle - 1\n else:\n start = middle + 1\n middle = math.floor((start+end)/2)\n \n if arr[middle] == value:\n return f\"index: {middle}\"\n else:\n return -1\n\ncustArray = [8,9,12,15,17,19,20,21,28, 29]\nbinarySearch(custArray, 25)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e70fc152b6d86b9119c6d90b582b4110c97fd17c
1,023
ipynb
Jupyter Notebook
Untitled.ipynb
anifort/xgb-ml-ops
f5492366364359b595f863012428b42ea7220c6b
[ "MIT" ]
1
2022-03-16T17:18:28.000Z
2022-03-16T17:18:28.000Z
Untitled.ipynb
anifort/vertex-xgb-ml-ops
f5492366364359b595f863012428b42ea7220c6b
[ "MIT" ]
null
null
null
Untitled.ipynb
anifort/vertex-xgb-ml-ops
f5492366364359b595f863012428b42ea7220c6b
[ "MIT" ]
null
null
null
20.877551
173
0.57478
[ [ [ "!GET https://us-central1-aiplatform.googleapis.com/v1beta1/projects/feature-store-mars21/locations/us-central1/metadataStores/default/artifacts?pageSize=10&pageToken=0", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
e70fc8ee01264f5beb279528ce8b97c080bcad5a
212,567
ipynb
Jupyter Notebook
ML Pipeline Preparation.ipynb
alwz1/disaster_response_pipelines
e3d84bdac8c9054983292441ab72dee3074901ff
[ "MIT" ]
null
null
null
ML Pipeline Preparation.ipynb
alwz1/disaster_response_pipelines
e3d84bdac8c9054983292441ab72dee3074901ff
[ "MIT" ]
null
null
null
ML Pipeline Preparation.ipynb
alwz1/disaster_response_pipelines
e3d84bdac8c9054983292441ab72dee3074901ff
[ "MIT" ]
null
null
null
43.927878
35,280
0.430462
[ [ [ "# ML Pipeline Preparation\nFollow the instructions below to help you create your ML pipeline.\n### 1. Import libraries and load data from database.\n- Import Python libraries\n- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)\n- Define feature and target variables X and Y", "_____no_output_____" ] ], [ [ "# import libraries\nimport numpy as np\nimport pandas as pd\nfrom sqlalchemy import create_engine\nimport sqlite3\nimport re\nimport nltk\nfrom nltk.tokenize import word_tokenize\nfrom nltk.tokenize import sent_tokenize\nfrom nltk.corpus import stopwords\nfrom nltk.stem import WordNetLemmatizer\nfrom nltk import PorterStemmer\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.pipeline import FeatureUnion\nfrom sklearn.preprocessing import Normalizer\nfrom sklearn.ensemble import AdaBoostClassifier\nfrom xgboost import XGBClassifier\nfrom sklearn.multioutput import MultiOutputClassifier\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer\nfrom sklearn.base import BaseEstimator, TransformerMixin\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import classification_report\nfrom sklearn.model_selection import GridSearchCV\n\nnltk.download(['words', 'punkt', 'stopwords',\n 'averaged_perceptron_tagger',\n 'maxent_ne_chunker', 'wordnet'])", "[nltk_data] Downloading package words to\n[nltk_data] /Users/ayemyatwinshwe/nltk_data...\n[nltk_data] Package words is already up-to-date!\n[nltk_data] Downloading package punkt to\n[nltk_data] /Users/ayemyatwinshwe/nltk_data...\n[nltk_data] Package punkt is already up-to-date!\n[nltk_data] Downloading package stopwords to\n[nltk_data] /Users/ayemyatwinshwe/nltk_data...\n[nltk_data] Package stopwords is already up-to-date!\n[nltk_data] Downloading package averaged_perceptron_tagger to\n[nltk_data] /Users/ayemyatwinshwe/nltk_data...\n[nltk_data] Package averaged_perceptron_tagger is already up-to-\n[nltk_data] date!\n[nltk_data] Downloading package maxent_ne_chunker to\n[nltk_data] /Users/ayemyatwinshwe/nltk_data...\n[nltk_data] Package maxent_ne_chunker is already up-to-date!\n[nltk_data] Downloading package wordnet to\n[nltk_data] /Users/ayemyatwinshwe/nltk_data...\n[nltk_data] Package wordnet is already up-to-date!\n" ], [ "# load data from database\nengine = create_engine('sqlite:///DisasterResponse.db')\ndf = pd.read_sql_table('message_category', engine)", "_____no_output_____" ], [ "df.head(2)", "_____no_output_____" ], [ "# number of distinct observations\ndf.nunique()", "_____no_output_____" ], [ "# number of missing values\ndf.isnull().sum()", "_____no_output_____" ], [ "# drop id, original\ndf.drop(['id', 'original'], axis=1, inplace=True)", "_____no_output_____" ], [ "df.head(2)", "_____no_output_____" ], [ "# Check distribution of message categories\ncategory_names = df.loc[:, 'related':'direct_report'].columns\ncategory_counts = (df.loc[:, 'related':'direct_report']\n ).sum().sort_values(ascending=False)", "_____no_output_____" ], [ "category_counts.plot(kind='bar', figsize=(\n 10, 5), title='Distribution of message categories')", "_____no_output_____" ], [ "X = df['message'].values\nY = df.loc[:,'related':'direct_report'].values", "_____no_output_____" ], [ "# check messages and categories\nrnd = np.random.randint(df.shape[0])\nprint(X[rnd])\ndf.iloc[rnd]", "Beyond the ISDR Secretariat and OCHA, let me note that WMO has also much to offer in the area 
of scientific and technological expertise.\n" ] ], [ [ "### 2. Write a tokenization function to process your text data", "_____no_output_____" ] ], [ [ "url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\\(\\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'\n\n\ndef tokenize(text):\n \"\"\"\n 1. Replace url in the text with 'urlplaceholder'\n 2. Remove punctuations and use lower cases\n 3. Remove stopwords and lemmatize tokens\n\n Args: text\n Returns: cleaned tokens of text\n \"\"\"\n detected_urls = re.findall(url_regex, text)\n for url in detected_urls:\n text = text.replace(url, \"urlplaceholder\")\n\n text = re.sub(r\"[^a-zA-Z0-9]\", \" \", text.lower())\n tokens = word_tokenize(text)\n\n stop_words = stopwords.words(\"english\")\n lemmatizer = WordNetLemmatizer()\n\n clean_tokens = [lemmatizer.lemmatize(tok)\n for tok in tokens if tok not in stop_words]\n\n return clean_tokens", "_____no_output_____" ] ], [ [ "### 3. Build a machine learning pipeline\nThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.", "_____no_output_____" ] ], [ [ "pipeline_ada = Pipeline([\n ('vect', CountVectorizer(tokenizer=tokenize)),\n ('tfidf', TfidfTransformer(use_idf=True)),\n ('clf', MultiOutputClassifier(AdaBoostClassifier())),\n])", "_____no_output_____" ], [ "pipeline_ada.get_params()", "_____no_output_____" ] ], [ [ "### 4. Train pipeline\n- Split data into train and test sets\n- Train pipeline", "_____no_output_____" ] ], [ [ "X_train, X_test, Y_train, Y_test = train_test_split(\n X, Y, test_size=0.2, random_state=42)", "_____no_output_____" ], [ "pipeline_ada.fit(X_train, Y_train)", "_____no_output_____" ] ], [ [ "### 5. Test your model\nReport the f1 score, precision and recall for each output category of the dataset. 
You can do this by iterating through the columns and calling sklearn's `classification_report` on each.", "_____no_output_____" ] ], [ [ "Y_pred = pipeline_ada.predict(X_test)", "_____no_output_____" ], [ "(Y_pred == Y_test).mean()", "_____no_output_____" ], [ "def display_results(y_test, y_pred, y_col):\n \"\"\"\n Display f1 score, precision, recall, accuracy and confusion_matrix\n for each category of the test dataset\n \"\"\"\n\n clf_report = classification_report(y_test, y_pred)\n confusion_mat = confusion_matrix(y_test, y_pred)\n accuracy = (y_pred == y_test).mean()\n print('\\n')\n print(y_col, \":\")\n print('\\n')\n print(clf_report)\n print('confusion_matrix :')\n print(confusion_mat)\n print('\\n')\n print('Accuracy =', accuracy)\n print('-'*65)", "_____no_output_____" ], [ "for i in range(Y_test.shape[1]):\n display_results(Y_test[:, i], Y_pred[:, i],\n df.loc[:, 'related':'direct_report'].columns[i])", "\n\nrelated :\n\n\n precision recall f1-score support\n\n 0 0.70 0.24 0.36 1245\n 1 0.80 0.97 0.88 3998\n\n accuracy 0.80 5243\n macro avg 0.75 0.61 0.62 5243\nweighted avg 0.78 0.80 0.76 5243\n\nconfusion_matrix :\n[[ 304 941]\n [ 131 3867]]\n\n\nAccuracy = 0.7955369063513256\n-----------------------------------------------------------------\n\n\nrequest :\n\n\n precision recall f1-score support\n\n 0 0.90 0.97 0.93 4352\n 1 0.78 0.47 0.59 891\n\n accuracy 0.89 5243\n macro avg 0.84 0.72 0.76 5243\nweighted avg 0.88 0.89 0.88 5243\n\nconfusion_matrix :\n[[4231 121]\n [ 470 421]]\n\n\nAccuracy = 0.8872782757962998\n-----------------------------------------------------------------\n\n\noffer :\n\n\n precision recall f1-score support\n\n 0 1.00 1.00 1.00 5219\n 1 0.00 0.00 0.00 24\n\n accuracy 0.99 5243\n macro avg 0.50 0.50 0.50 5243\nweighted avg 0.99 0.99 0.99 5243\n\nconfusion_matrix :\n[[5212 7]\n [ 24 0]]\n\n\nAccuracy = 0.9940873545679955\n-----------------------------------------------------------------\n\n\naid_related :\n\n\n precision recall f1-score support\n\n 0 0.74 0.89 0.81 3079\n 1 0.78 0.57 0.66 2164\n\n accuracy 0.76 5243\n macro avg 0.76 0.73 0.73 5243\nweighted avg 0.76 0.76 0.75 5243\n\nconfusion_matrix :\n[[2737 342]\n [ 939 1225]]\n\n\nAccuracy = 0.7556742323097463\n-----------------------------------------------------------------\n\n\nmedical_help :\n\n\n precision recall f1-score support\n\n 0 0.94 0.99 0.96 4808\n 1 0.63 0.27 0.37 435\n\n accuracy 0.93 5243\n macro avg 0.78 0.63 0.67 5243\nweighted avg 0.91 0.93 0.91 5243\n\nconfusion_matrix :\n[[4740 68]\n [ 319 116]]\n\n\nAccuracy = 0.9261872973488461\n-----------------------------------------------------------------\n\n\nmedical_products :\n\n\n precision recall f1-score support\n\n 0 0.96 0.99 0.98 4964\n 1 0.62 0.29 0.40 279\n\n accuracy 0.95 5243\n macro avg 0.79 0.64 0.69 5243\nweighted avg 0.94 0.95 0.94 5243\n\nconfusion_matrix :\n[[4913 51]\n [ 197 82]]\n\n\nAccuracy = 0.9526988365439634\n-----------------------------------------------------------------\n\n\nsearch_and_rescue :\n\n\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5107\n 1 0.60 0.18 0.28 136\n\n accuracy 0.98 5243\n macro avg 0.79 0.59 0.63 5243\nweighted avg 0.97 0.98 0.97 5243\n\nconfusion_matrix :\n[[5090 17]\n [ 111 25]]\n\n\nAccuracy = 0.9755864962807553\n-----------------------------------------------------------------\n\n\nsecurity :\n\n\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5147\n 1 0.13 0.02 0.04 96\n\n accuracy 0.98 5243\n macro avg 0.56 0.51 0.51 5243\nweighted avg 0.97 0.98 0.97 
5243\n\nconfusion_matrix :\n[[5134 13]\n [ 94 2]]\n\n\nAccuracy = 0.9795918367346939\n-----------------------------------------------------------------\n\n\nmilitary :\n\n\n precision recall f1-score support\n\n 0 0.98 0.99 0.99 5085\n 1 0.57 0.29 0.39 158\n\n accuracy 0.97 5243\n macro avg 0.78 0.64 0.69 5243\nweighted avg 0.97 0.97 0.97 5243\n\nconfusion_matrix :\n[[5051 34]\n [ 112 46]]\n\n\nAccuracy = 0.9721533473202365\n-----------------------------------------------------------------\n\n\nchild_alone :\n\n\n precision recall f1-score support\n\n 0 1.00 1.00 1.00 5243\n\n accuracy 1.00 5243\n macro avg 1.00 1.00 1.00 5243\nweighted avg 1.00 1.00 1.00 5243\n\nconfusion_matrix :\n[[5243]]\n\n\nAccuracy = 1.0\n-----------------------------------------------------------------\n\n\nwater :\n\n\n precision recall f1-score support\n\n 0 0.98 0.99 0.98 4908\n 1 0.75 0.67 0.71 335\n\n accuracy 0.96 5243\n macro avg 0.87 0.83 0.84 5243\nweighted avg 0.96 0.96 0.96 5243\n\nconfusion_matrix :\n[[4835 73]\n [ 112 223]]\n\n\nAccuracy = 0.9647148579057792\n-----------------------------------------------------------------\n\n\nfood :\n\n\n precision recall f1-score support\n\n 0 0.96 0.98 0.97 4659\n 1 0.82 0.68 0.74 584\n\n accuracy 0.95 5243\n macro avg 0.89 0.83 0.86 5243\nweighted avg 0.94 0.95 0.95 5243\n\nconfusion_matrix :\n[[4571 88]\n [ 188 396]]\n\n\nAccuracy = 0.9473583826053786\n-----------------------------------------------------------------\n\n\nshelter :\n\n\n precision recall f1-score support\n\n 0 0.96 0.98 0.97 4775\n 1 0.76 0.56 0.64 468\n\n accuracy 0.94 5243\n macro avg 0.86 0.77 0.81 5243\nweighted avg 0.94 0.94 0.94 5243\n\nconfusion_matrix :\n[[4692 83]\n [ 208 260]]\n\n\nAccuracy = 0.9444974251382796\n-----------------------------------------------------------------\n\n\nclothing :\n\n\n precision recall f1-score support\n\n 0 0.99 1.00 0.99 5173\n 1 0.67 0.34 0.45 70\n\n accuracy 0.99 5243\n macro avg 0.83 0.67 0.72 5243\nweighted avg 0.99 0.99 0.99 5243\n\nconfusion_matrix :\n[[5161 12]\n [ 46 24]]\n\n\nAccuracy = 0.9889376311272172\n-----------------------------------------------------------------\n\n\nmoney :\n\n\n precision recall f1-score support\n\n 0 0.99 0.99 0.99 5131\n 1 0.52 0.31 0.39 112\n\n accuracy 0.98 5243\n macro avg 0.75 0.65 0.69 5243\nweighted avg 0.98 0.98 0.98 5243\n\nconfusion_matrix :\n[[5099 32]\n [ 77 35]]\n\n\nAccuracy = 0.9792103757390807\n-----------------------------------------------------------------\n\n\nmissing_people :\n\n\n precision recall f1-score support\n\n 0 0.99 1.00 0.99 5180\n 1 0.71 0.19 0.30 63\n\n accuracy 0.99 5243\n macro avg 0.85 0.59 0.65 5243\nweighted avg 0.99 0.99 0.99 5243\n\nconfusion_matrix :\n[[5175 5]\n [ 51 12]]\n\n\nAccuracy = 0.9893190921228304\n-----------------------------------------------------------------\n\n\nrefugees :\n\n\n precision recall f1-score support\n\n 0 0.98 0.99 0.98 5073\n 1 0.57 0.28 0.38 170\n\n accuracy 0.97 5243\n macro avg 0.77 0.64 0.68 5243\nweighted avg 0.96 0.97 0.96 5243\n\nconfusion_matrix :\n[[5037 36]\n [ 122 48]]\n\n\nAccuracy = 0.9698645813465573\n-----------------------------------------------------------------\n\n\ndeath :\n\n\n precision recall f1-score support\n\n 0 0.97 0.99 0.98 4996\n 1 0.80 0.48 0.60 247\n\n accuracy 0.97 5243\n macro avg 0.89 0.74 0.79 5243\nweighted avg 0.97 0.97 0.97 5243\n\nconfusion_matrix :\n[[4966 30]\n [ 128 119]]\n\n\nAccuracy = 0.9698645813465573\n-----------------------------------------------------------------\n\n\nother_aid :\n\n\n precision 
recall f1-score support\n\n 0 0.88 0.98 0.93 4551\n 1 0.52 0.15 0.23 692\n\n accuracy 0.87 5243\n macro avg 0.70 0.56 0.58 5243\nweighted avg 0.83 0.87 0.84 5243\n\nconfusion_matrix :\n[[4455 96]\n [ 589 103]]\n\n\nAccuracy = 0.8693496090024795\n-----------------------------------------------------------------\n\n\ninfrastructure_related :\n\n\n precision recall f1-score support\n\n 0 0.94 0.99 0.97 4907\n 1 0.41 0.08 0.14 336\n\n accuracy 0.93 5243\n macro avg 0.68 0.54 0.55 5243\nweighted avg 0.91 0.93 0.91 5243\n\nconfusion_matrix :\n[[4867 40]\n [ 308 28]]\n\n\nAccuracy = 0.9336257867633034\n-----------------------------------------------------------------\n\n\ntransport :\n\n\n precision recall f1-score support\n\n 0 0.96 1.00 0.98 5008\n 1 0.68 0.20 0.30 235\n\n accuracy 0.96 5243\n macro avg 0.82 0.60 0.64 5243\nweighted avg 0.95 0.96 0.95 5243\n\nconfusion_matrix :\n[[4986 22]\n [ 189 46]]\n\n\nAccuracy = 0.9597558649628075\n-----------------------------------------------------------------\n\n\nbuildings :\n\n\n precision recall f1-score support\n\n 0 0.97 0.99 0.98 4974\n 1 0.71 0.38 0.50 269\n\n accuracy 0.96 5243\n macro avg 0.84 0.69 0.74 5243\nweighted avg 0.95 0.96 0.95 5243\n\nconfusion_matrix :\n[[4932 42]\n [ 166 103]]\n\n\nAccuracy = 0.9603280564562273\n-----------------------------------------------------------------\n\n\nelectricity :\n\n\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5128\n 1 0.61 0.22 0.32 115\n\n accuracy 0.98 5243\n macro avg 0.80 0.61 0.66 5243\nweighted avg 0.97 0.98 0.98 5243\n\nconfusion_matrix :\n[[5112 16]\n [ 90 25]]\n\n\nAccuracy = 0.9797825672325005\n-----------------------------------------------------------------\n\n\ntools :\n\n\n precision recall f1-score support\n\n 0 0.99 1.00 1.00 5208\n 1 0.20 0.03 0.05 35\n\n accuracy 0.99 5243\n macro avg 0.60 0.51 0.52 5243\nweighted avg 0.99 0.99 0.99 5243\n\nconfusion_matrix :\n[[5204 4]\n [ 34 1]]\n\n\nAccuracy = 0.9927522410833493\n-----------------------------------------------------------------\n\n\nhospitals :\n\n\n precision recall f1-score support\n\n 0 0.99 1.00 0.99 5191\n 1 0.36 0.15 0.22 52\n\n accuracy 0.99 5243\n macro avg 0.68 0.58 0.61 5243\nweighted avg 0.99 0.99 0.99 5243\n\nconfusion_matrix :\n[[5177 14]\n [ 44 8]]\n\n\nAccuracy = 0.9889376311272172\n-----------------------------------------------------------------\n\n\nshops :\n\n\n precision recall f1-score support\n\n 0 1.00 1.00 1.00 5218\n 1 0.25 0.04 0.07 25\n\n accuracy 0.99 5243\n macro avg 0.62 0.52 0.53 5243\nweighted avg 0.99 0.99 0.99 5243\n\nconfusion_matrix :\n[[5215 3]\n [ 24 1]]\n\n\nAccuracy = 0.9948502765592219\n-----------------------------------------------------------------\n\n\naid_centers :\n\n\n precision recall f1-score support\n\n 0 0.99 1.00 0.99 5179\n 1 0.25 0.05 0.08 64\n\n accuracy 0.99 5243\n macro avg 0.62 0.52 0.54 5243\nweighted avg 0.98 0.99 0.98 5243\n\nconfusion_matrix :\n[[5170 9]\n [ 61 3]]\n\n\nAccuracy = 0.986648865153538\n-----------------------------------------------------------------\n\n\nother_infrastructure :\n\n\n precision recall f1-score support\n\n 0 0.96 0.99 0.98 5018\n 1 0.43 0.12 0.18 225\n\n accuracy 0.96 5243\n macro avg 0.70 0.55 0.58 5243\nweighted avg 0.94 0.96 0.94 5243\n\nconfusion_matrix :\n[[4984 34]\n [ 199 26]]\n\n\nAccuracy = 0.9555597940110624\n-----------------------------------------------------------------\n\n\nweather_related :\n\n\n precision recall f1-score support\n\n 0 0.88 0.96 0.92 3771\n 1 0.86 0.68 0.76 1472\n\n accuracy 0.88 5243\n 
macro avg 0.87 0.82 0.84 5243\nweighted avg 0.88 0.88 0.87 5243\n\nconfusion_matrix :\n[[3607 164]\n [ 476 996]]\n\n\nAccuracy = 0.8779324814037764\n-----------------------------------------------------------------\n" ] ], [ [ "### 6. Improve your model\nUse grid search to find better parameters. ", "_____no_output_____" ] ], [ [ "%timeit\n\nparameters = {\n\n # 'vect__max_df': [0.75, 1.0],\n 'vect__max_features': [500, 2000],\n 'vect__ngram_range': [(1, 1), (1, 2)],\n # 'tfidf__smooth_idf': [True, False],\n # 'tfidf__sublinear_tf': [True, False],\n # 'tfidf__use_idf': [True, False],\n 'clf__estimator__learning_rate': [0.5, 1.0],\n 'clf__estimator__n_estimators': [50, 100]\n\n}\n\ncv_ada = GridSearchCV(pipeline_ada, param_grid=parameters,\n cv=2, n_jobs=-1, verbose=2)\n\ncv_ada.fit(X_train, Y_train)", "Fitting 2 folds for each of 16 candidates, totalling 32 fits\n" ], [ "cv_ada.best_params_", "_____no_output_____" ] ], [ [ "### 7. Test your model\nShow the accuracy, precision, and recall of the tuned model. \n\nSince this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!", "_____no_output_____" ] ], [ [ "Y_pred = cv_ada.predict(X_test)", "_____no_output_____" ], [ "(Y_pred == Y_test).mean()", "_____no_output_____" ], [ "for i in range(Y_test.shape[1]):\n display_results(Y_test[:, i], Y_pred[:, i],\n df.loc[:, 'related':'direct_report'].columns[i])", "\n\nrelated :\n\n\n precision recall f1-score support\n\n 0 0.75 0.25 0.38 1245\n 1 0.81 0.97 0.88 3998\n\n accuracy 0.80 5243\n macro avg 0.78 0.61 0.63 5243\nweighted avg 0.79 0.80 0.76 5243\n\nconfusion_matrix :\n[[ 312 933]\n [ 104 3894]]\n\n\nAccuracy = 0.8022124737745565\n-----------------------------------------------------------------\n\n\nrequest :\n\n\n precision recall f1-score support\n\n 0 0.90 0.98 0.94 4352\n 1 0.84 0.44 0.58 891\n\n accuracy 0.89 5243\n macro avg 0.87 0.71 0.76 5243\nweighted avg 0.89 0.89 0.88 5243\n\nconfusion_matrix :\n[[4275 77]\n [ 499 392]]\n\n\nAccuracy = 0.8901392332633988\n-----------------------------------------------------------------\n\n\noffer :\n\n\n precision recall f1-score support\n\n 0 1.00 1.00 1.00 5219\n 1 0.00 0.00 0.00 24\n\n accuracy 1.00 5243\n macro avg 0.50 0.50 0.50 5243\nweighted avg 0.99 1.00 0.99 5243\n\nconfusion_matrix :\n[[5219 0]\n [ 24 0]]\n\n\nAccuracy = 0.9954224680526416\n-----------------------------------------------------------------\n\n\naid_related :\n\n\n precision recall f1-score support\n\n 0 0.75 0.89 0.82 3079\n 1 0.79 0.57 0.67 2164\n\n accuracy 0.76 5243\n macro avg 0.77 0.73 0.74 5243\nweighted avg 0.77 0.76 0.75 5243\n\nconfusion_matrix :\n[[2755 324]\n [ 923 1241]]\n\n\nAccuracy = 0.7621590692351707\n-----------------------------------------------------------------\n\n\nmedical_help :\n\n\n precision recall f1-score support\n\n 0 0.93 0.99 0.96 4808\n 1 0.63 0.18 0.28 435\n\n accuracy 0.92 5243\n macro avg 0.78 0.58 0.62 5243\nweighted avg 0.91 0.92 0.90 5243\n\nconfusion_matrix :\n[[4763 45]\n [ 357 78]]\n\n\nAccuracy = 0.9233263398817471\n-----------------------------------------------------------------\n\n\nmedical_products :\n\n\n precision recall f1-score support\n\n 0 0.96 0.99 0.98 4964\n 1 0.72 0.25 0.37 279\n\n accuracy 0.95 5243\n macro avg 0.84 0.62 0.67 5243\nweighted avg 0.95 0.95 0.94 5243\n\nconfusion_matrix :\n[[4937 27]\n [ 209 
70]]\n\n\nAccuracy = 0.9549876025176426\n-----------------------------------------------------------------\n\n\nsearch_and_rescue :\n\n\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5107\n 1 0.68 0.12 0.21 136\n\n accuracy 0.98 5243\n macro avg 0.83 0.56 0.60 5243\nweighted avg 0.97 0.98 0.97 5243\n\nconfusion_matrix :\n[[5099 8]\n [ 119 17]]\n\n\nAccuracy = 0.9757772267785619\n-----------------------------------------------------------------\n\n\nsecurity :\n\n\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5147\n 1 0.17 0.01 0.02 96\n\n accuracy 0.98 5243\n macro avg 0.57 0.50 0.50 5243\nweighted avg 0.97 0.98 0.97 5243\n\nconfusion_matrix :\n[[5142 5]\n [ 95 1]]\n\n\nAccuracy = 0.9809269502193401\n-----------------------------------------------------------------\n\n\nmilitary :\n\n\n precision recall f1-score support\n\n 0 0.98 0.99 0.98 5085\n 1 0.53 0.18 0.27 158\n\n accuracy 0.97 5243\n macro avg 0.75 0.59 0.63 5243\nweighted avg 0.96 0.97 0.96 5243\n\nconfusion_matrix :\n[[5059 26]\n [ 129 29]]\n\n\nAccuracy = 0.9704367728399771\n-----------------------------------------------------------------\n\n\nchild_alone :\n\n\n precision recall f1-score support\n\n 0 1.00 1.00 1.00 5243\n\n accuracy 1.00 5243\n macro avg 1.00 1.00 1.00 5243\nweighted avg 1.00 1.00 1.00 5243\n\nconfusion_matrix :\n[[5243]]\n\n\nAccuracy = 1.0\n-----------------------------------------------------------------\n\n\nwater :\n\n\n precision recall f1-score support\n\n 0 0.98 0.99 0.98 4908\n 1 0.77 0.64 0.70 335\n\n accuracy 0.96 5243\n macro avg 0.87 0.81 0.84 5243\nweighted avg 0.96 0.96 0.96 5243\n\nconfusion_matrix :\n[[4842 66]\n [ 120 215]]\n\n\nAccuracy = 0.9645241274079726\n-----------------------------------------------------------------\n\n\nfood :\n\n\n precision recall f1-score support\n\n 0 0.97 0.98 0.97 4659\n 1 0.84 0.72 0.77 584\n\n accuracy 0.95 5243\n macro avg 0.90 0.85 0.87 5243\nweighted avg 0.95 0.95 0.95 5243\n\nconfusion_matrix :\n[[4578 81]\n [ 165 419]]\n\n\nAccuracy = 0.9530802975395766\n-----------------------------------------------------------------\n\n\nshelter :\n\n\n precision recall f1-score support\n\n 0 0.95 0.99 0.97 4775\n 1 0.81 0.51 0.62 468\n\n accuracy 0.95 5243\n macro avg 0.88 0.75 0.80 5243\nweighted avg 0.94 0.95 0.94 5243\n\nconfusion_matrix :\n[[4719 56]\n [ 230 238]]\n\n\nAccuracy = 0.9454510776273126\n-----------------------------------------------------------------\n\n\nclothing :\n\n\n precision recall f1-score support\n\n 0 0.99 1.00 0.99 5173\n 1 0.76 0.27 0.40 70\n\n accuracy 0.99 5243\n macro avg 0.88 0.64 0.70 5243\nweighted avg 0.99 0.99 0.99 5243\n\nconfusion_matrix :\n[[5167 6]\n [ 51 19]]\n\n\nAccuracy = 0.9891283616250238\n-----------------------------------------------------------------\n\n\nmoney :\n\n\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5131\n 1 0.59 0.17 0.26 112\n\n accuracy 0.98 5243\n macro avg 0.79 0.58 0.63 5243\nweighted avg 0.97 0.98 0.97 5243\n\nconfusion_matrix :\n[[5118 13]\n [ 93 19]]\n\n\nAccuracy = 0.9797825672325005\n-----------------------------------------------------------------\n" ] ], [ [ "### 8. Try improving your model further. Here are a few ideas:\n* try other machine learning algorithms\n* add other features besides the TF-IDF", "_____no_output_____" ] ], [ [ "# Add two customer transformers\n\n\ndef tokenize_2(text):\n \"\"\"\n Tokenize the input text. 
This function is called in StartingVerbExtractor.\n \"\"\"\n\n url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\\(\\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'\n detected_urls = re.findall(url_regex, text)\n for url in detected_urls:\n text = text.replace(url, \"urlplaceholder\")\n\n tokens = word_tokenize(text)\n lemmatizer = WordNetLemmatizer()\n clean_tokens = [lemmatizer.lemmatize(\n tok).lower().strip() for tok in tokens]\n\n return clean_tokens\n\n\nclass StartingVerbExtractor(BaseEstimator, TransformerMixin):\n\n def starting_verb(self, text):\n \"\"\" return true if the first word is an appropriate verb or RT for retweet \"\"\"\n # tokenize by sentences\n sentence_list = nltk.sent_tokenize(text)\n\n for sentence in sentence_list:\n # tokenize each sentence into words and tag part of speech\n pos_tags = nltk.pos_tag(tokenize_2(sentence))\n # index pos_tags to get the first word and part of speech tag\n first_word, first_tag = pos_tags[0]\n\n # return true if the first word is an appropriate verb or RT for retweet\n if first_tag in ['VB', 'VBP'] or first_word == 'RT':\n return True\n return False\n\n def fit(self, x, y=None):\n \"\"\" Fit \"\"\"\n return self\n\n def transform(self, X):\n \"\"\" Transform \"\"\"\n X_tagged = pd.Series(X).apply(self.starting_verb)\n return pd.DataFrame(X_tagged)\n\n\n# Count the number of tokens\nclass TextLengthExtractor(BaseEstimator, TransformerMixin):\n\n def text_len_count(self, text):\n \"\"\" Count the number of tokens \"\"\"\n text_length = len(tokenize(text))\n return text_length\n\n def fit(self, x, y=None):\n \"\"\" Fit \"\"\"\n return self\n\n def transform(self, X):\n \"\"\" Transform \"\"\"\n X_text_len = pd.Series(X).apply(self.text_len_count)\n return pd.DataFrame(X_text_len)", "_____no_output_____" ], [ "pipeline_xgb = Pipeline([\n ('features', FeatureUnion([\n\n ('text_pipeline', Pipeline([\n ('vect', CountVectorizer(tokenizer=tokenize,\n # max_features=5000,\n # max_df=0.75,\n )),\n ('tfidf', TfidfTransformer(use_idf=True))\n ])),\n\n ('txt_length', TextLengthExtractor()),\n ('start_verb', StartingVerbExtractor())\n ])),\n\n ('norm', Normalizer()),\n\n ('clf', MultiOutputClassifier(XGBClassifier(\n # max_depth=3,\n # learning_rate=0.2,\n # max_delta_step=2,\n # colsample_bytree=0.7,\n # colsample_bylevel=0.7,\n # subsample=0.8,\n # n_estimators=150,\n tree_method='hist',\n )))\n])", "_____no_output_____" ], [ "pipeline_xgb.fit(X_train, Y_train)", "_____no_output_____" ], [ "Y_pred = pipeline_xgb.predict(X_test)", "_____no_output_____" ], [ "(Y_pred == Y_test).mean()", "_____no_output_____" ], [ "for i in range(Y_test.shape[1]):\n display_results(Y_test[:, i], Y_pred[:, i],\n df.loc[:, 'related':'direct_report'].columns[i])", "\n\nrelated :\n\n\n precision recall f1-score support\n\n 0 0.68 0.49 0.57 1245\n 1 0.85 0.93 0.89 3998\n\n accuracy 0.82 5243\n macro avg 0.77 0.71 0.73 5243\nweighted avg 0.81 0.82 0.81 5243\n\nconfusion_matrix :\n[[ 607 638]\n [ 283 3715]]\n\n\nAccuracy = 0.8243372115201221\n-----------------------------------------------------------------\n\n\nrequest :\n\n\n precision recall f1-score support\n\n 0 0.91 0.97 0.94 4352\n 1 0.81 0.53 0.64 891\n\n accuracy 0.90 5243\n macro avg 0.86 0.75 0.79 5243\nweighted avg 0.89 0.90 0.89 5243\n\nconfusion_matrix :\n[[4238 114]\n [ 416 475]]\n\n\nAccuracy = 0.8989128361625024\n-----------------------------------------------------------------\n\n\noffer :\n\n\n precision recall f1-score support\n\n 0 1.00 1.00 1.00 5219\n 1 0.00 0.00 0.00 24\n\n accuracy 1.00 5243\n macro avg 
0.50 0.50 0.50 5243\nweighted avg 0.99 1.00 0.99 5243\n\nconfusion_matrix :\n[[5219 0]\n [ 24 0]]\n\n\nAccuracy = 0.9954224680526416\n-----------------------------------------------------------------\n\n\naid_related :\n\n\n precision recall f1-score support\n\n 0 0.78 0.87 0.83 3079\n 1 0.78 0.66 0.71 2164\n\n accuracy 0.78 5243\n macro avg 0.78 0.76 0.77 5243\nweighted avg 0.78 0.78 0.78 5243\n\nconfusion_matrix :\n[[2686 393]\n [ 743 1421]]\n\n\nAccuracy = 0.7833301544917032\n-----------------------------------------------------------------\n\n\nmedical_help :\n\n\n precision recall f1-score support\n\n 0 0.94 0.99 0.96 4808\n 1 0.65 0.29 0.40 435\n\n accuracy 0.93 5243\n macro avg 0.80 0.64 0.68 5243\nweighted avg 0.91 0.93 0.91 5243\n\nconfusion_matrix :\n[[4742 66]\n [ 311 124]]\n\n\nAccuracy = 0.9280946023269121\n-----------------------------------------------------------------\n\n\nmedical_products :\n\n\n precision recall f1-score support\n\n 0 0.96 0.99 0.98 4964\n 1 0.64 0.29 0.40 279\n\n accuracy 0.95 5243\n macro avg 0.80 0.64 0.69 5243\nweighted avg 0.94 0.95 0.95 5243\n\nconfusion_matrix :\n[[4919 45]\n [ 198 81]]\n\n\nAccuracy = 0.9536524890329964\n-----------------------------------------------------------------\n\n\nsearch_and_rescue :\n\n\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5107\n 1 0.60 0.21 0.31 136\n\n accuracy 0.98 5243\n macro avg 0.79 0.60 0.65 5243\nweighted avg 0.97 0.98 0.97 5243\n\nconfusion_matrix :\n[[5088 19]\n [ 108 28]]\n\n\nAccuracy = 0.9757772267785619\n-----------------------------------------------------------------\n\n\nsecurity :\n\n\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5147\n 1 0.33 0.02 0.04 96\n\n accuracy 0.98 5243\n macro avg 0.66 0.51 0.51 5243\nweighted avg 0.97 0.98 0.97 5243\n\nconfusion_matrix :\n[[5143 4]\n [ 94 2]]\n\n\nAccuracy = 0.9813084112149533\n-----------------------------------------------------------------\n\n\nmilitary :\n\n\n precision recall f1-score support\n\n 0 0.98 0.99 0.99 5085\n 1 0.57 0.32 0.41 158\n\n accuracy 0.97 5243\n macro avg 0.78 0.65 0.70 5243\nweighted avg 0.97 0.97 0.97 5243\n\nconfusion_matrix :\n[[5048 37]\n [ 108 50]]\n\n\nAccuracy = 0.9723440778180431\n-----------------------------------------------------------------\n\n\nchild_alone :\n\n\n precision recall f1-score support\n\n 0 1.00 1.00 1.00 5243\n\n accuracy 1.00 5243\n macro avg 1.00 1.00 1.00 5243\nweighted avg 1.00 1.00 1.00 5243\n\nconfusion_matrix :\n[[5243]]\n\n\nAccuracy = 1.0\n-----------------------------------------------------------------\n\n\nwater :\n\n\n precision recall f1-score support\n\n 0 0.98 0.99 0.98 4908\n 1 0.78 0.72 0.75 335\n\n accuracy 0.97 5243\n macro avg 0.88 0.85 0.87 5243\nweighted avg 0.97 0.97 0.97 5243\n\nconfusion_matrix :\n[[4841 67]\n [ 95 240]]\n\n\nAccuracy = 0.9691016593553309\n-----------------------------------------------------------------\n\n\nfood :\n\n\n precision recall f1-score support\n\n 0 0.97 0.98 0.98 4659\n 1 0.81 0.80 0.81 584\n\n accuracy 0.96 5243\n macro avg 0.89 0.89 0.89 5243\nweighted avg 0.96 0.96 0.96 5243\n\nconfusion_matrix :\n[[4553 106]\n [ 118 466]]\n\n\nAccuracy = 0.9572763684913218\n-----------------------------------------------------------------\n\n\nshelter :\n\n\n precision recall f1-score support\n\n 0 0.96 0.98 0.97 4775\n 1 0.75 0.60 0.67 468\n\n accuracy 0.95 5243\n macro avg 0.86 0.79 0.82 5243\nweighted avg 0.94 0.95 0.94 5243\n\nconfusion_matrix :\n[[4684 91]\n [ 188 280]]\n\n\nAccuracy = 
0.9467861911119588\n-----------------------------------------------------------------\n" ], [ "pipeline_xgb.get_params()", "_____no_output_____" ], [ "# Use grid search to find better parameters.\n\n%timeit\n\nparameters = {\n\n 'clf__estimator__max_depth': [3, 4],\n 'clf__estimator__learning_rate': [0.2, 0.5],\n 'clf__estimator__max_delta_step': [2, 3],\n 'clf__estimator__colsample_bytree': [0.5, 0.7],\n 'clf__estimator__colsample_bylevel': [0.5, 0.7],\n 'clf__estimator__subsample': [0.5, 0.8],\n 'clf__estimator__n_estimators': [100, 150]\n\n}\n\ncv_xgb = GridSearchCV(pipeline_xgb, param_grid=parameters,\n cv=2, n_jobs=-1, verbose=2)\n\ncv_xgb.fit(X_train, Y_train)", "Fitting 2 folds for each of 128 candidates, totalling 256 fits\n" ], [ "cv_xgb.best_params_", "_____no_output_____" ], [ "Y_pred = cv_xgb.predict(X_test)", "_____no_output_____" ], [ "(Y_pred == Y_test).mean()", "_____no_output_____" ], [ "for i in range(Y_test.shape[1]):\n display_results(Y_test[:, i], Y_pred[:, i],\n df.loc[:, 'related':'direct_report'].columns[i])", "\n\nrelated :\n\n\n precision recall f1-score support\n\n 0 0.71 0.41 0.52 1245\n 1 0.84 0.95 0.89 3998\n\n accuracy 0.82 5243\n macro avg 0.77 0.68 0.70 5243\nweighted avg 0.81 0.82 0.80 5243\n\nconfusion_matrix :\n[[ 507 738]\n [ 210 3788]]\n\n\nAccuracy = 0.8191874880793439\n-----------------------------------------------------------------\n\n\nrequest :\n\n\n precision recall f1-score support\n\n 0 0.91 0.98 0.94 4352\n 1 0.82 0.54 0.65 891\n\n accuracy 0.90 5243\n macro avg 0.86 0.76 0.80 5243\nweighted avg 0.90 0.90 0.89 5243\n\nconfusion_matrix :\n[[4244 108]\n [ 410 481]]\n\n\nAccuracy = 0.9012016021361816\n-----------------------------------------------------------------\n\n\noffer :\n\n\n precision recall f1-score support\n\n 0 1.00 1.00 1.00 5219\n 1 0.00 0.00 0.00 24\n\n accuracy 1.00 5243\n macro avg 0.50 0.50 0.50 5243\nweighted avg 0.99 1.00 0.99 5243\n\nconfusion_matrix :\n[[5219 0]\n [ 24 0]]\n\n\nAccuracy = 0.9954224680526416\n-----------------------------------------------------------------\n\n\naid_related :\n\n\n precision recall f1-score support\n\n 0 0.78 0.88 0.82 3079\n 1 0.79 0.64 0.71 2164\n\n accuracy 0.78 5243\n macro avg 0.78 0.76 0.77 5243\nweighted avg 0.78 0.78 0.78 5243\n\nconfusion_matrix :\n[[2707 372]\n [ 781 1383]]\n\n\nAccuracy = 0.7800877360289911\n-----------------------------------------------------------------\n\n\nmedical_help :\n\n\n precision recall f1-score support\n\n 0 0.94 0.99 0.96 4808\n 1 0.66 0.27 0.38 435\n\n accuracy 0.93 5243\n macro avg 0.80 0.63 0.67 5243\nweighted avg 0.91 0.93 0.91 5243\n\nconfusion_matrix :\n[[4748 60]\n [ 317 118]]\n\n\nAccuracy = 0.9280946023269121\n-----------------------------------------------------------------\n\n\nmedical_products :\n\n\n precision recall f1-score support\n\n 0 0.96 0.99 0.98 4964\n 1 0.73 0.29 0.41 279\n\n accuracy 0.96 5243\n macro avg 0.84 0.64 0.69 5243\nweighted avg 0.95 0.96 0.95 5243\n\nconfusion_matrix :\n[[4934 30]\n [ 199 80]]\n\n\nAccuracy = 0.9563227160022888\n-----------------------------------------------------------------\n\n\nsearch_and_rescue :\n\n\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5107\n 1 0.64 0.20 0.30 136\n\n accuracy 0.98 5243\n macro avg 0.81 0.60 0.65 5243\nweighted avg 0.97 0.98 0.97 5243\n\nconfusion_matrix :\n[[5092 15]\n [ 109 27]]\n\n\nAccuracy = 0.9763494182719817\n-----------------------------------------------------------------\n\n\nsecurity :\n\n\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 
5147\n 1 0.33 0.01 0.02 96\n\n accuracy 0.98 5243\n macro avg 0.66 0.51 0.51 5243\nweighted avg 0.97 0.98 0.97 5243\n\nconfusion_matrix :\n[[5145 2]\n [ 95 1]]\n\n\nAccuracy = 0.9814991417127599\n-----------------------------------------------------------------\n\n\nmilitary :\n\n\n precision recall f1-score support\n\n 0 0.98 0.99 0.99 5085\n 1 0.56 0.32 0.40 158\n\n accuracy 0.97 5243\n macro avg 0.77 0.65 0.70 5243\nweighted avg 0.97 0.97 0.97 5243\n\nconfusion_matrix :\n[[5046 39]\n [ 108 50]]\n\n\nAccuracy = 0.9719626168224299\n-----------------------------------------------------------------\n\n\nchild_alone :\n\n\n precision recall f1-score support\n\n 0 1.00 1.00 1.00 5243\n\n accuracy 1.00 5243\n macro avg 1.00 1.00 1.00 5243\nweighted avg 1.00 1.00 1.00 5243\n\nconfusion_matrix :\n[[5243]]\n\n\nAccuracy = 1.0\n-----------------------------------------------------------------\n\n\nwater :\n\n\n precision recall f1-score support\n\n 0 0.98 0.99 0.98 4908\n 1 0.78 0.70 0.74 335\n\n accuracy 0.97 5243\n macro avg 0.88 0.84 0.86 5243\nweighted avg 0.97 0.97 0.97 5243\n\nconfusion_matrix :\n[[4843 65]\n [ 100 235]]\n\n\nAccuracy = 0.9685294678619111\n-----------------------------------------------------------------\n\n\nfood :\n\n\n precision recall f1-score support\n\n 0 0.97 0.98 0.98 4659\n 1 0.83 0.78 0.80 584\n\n accuracy 0.96 5243\n macro avg 0.90 0.88 0.89 5243\nweighted avg 0.96 0.96 0.96 5243\n\nconfusion_matrix :\n[[4565 94]\n [ 128 456]]\n\n\nAccuracy = 0.957657829486935\n-----------------------------------------------------------------\n" ] ], [ [ "### 9. Export your model as a pickle file", "_____no_output_____" ] ], [ [ "import pickle\npickle.dump(pipeline_xgb,open('./models/model_xgb','wb'))", "_____no_output_____" ] ], [ [ "### 10. Use this notebook to complete `train.py`\nUse the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
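The grid-search dictionaries in the record above address nested parameters with double underscores (vect__max_features, clf__estimator__n_estimators, and so on). A tiny self-contained sketch of that naming convention, using a stripped-down pipeline rather than the notebook's full setup:

    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
    from sklearn.multioutput import MultiOutputClassifier
    from sklearn.ensemble import AdaBoostClassifier

    pipe = Pipeline([
        ("vect", CountVectorizer()),
        ("tfidf", TfidfTransformer()),
        ("clf", MultiOutputClassifier(AdaBoostClassifier())),
    ])

    # Each nesting level adds one "__": step name, then the wrapped estimator,
    # then that estimator's own parameter.
    params = pipe.get_params()
    print("clf__estimator__n_estimators" in params)   # True
    print("vect__ngram_range" in params)              # True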
e70fcdfce0a07bf3b01d685e344812de1597edbe
239,330
ipynb
Jupyter Notebook
Charts using SkyField.ipynb
bradleypallen/skyfield-notebooks
5192c5cb592968d4877e332bcfc7902070d86f27
[ "MIT" ]
1
2020-09-19T23:05:48.000Z
2020-09-19T23:05:48.000Z
Charts using SkyField.ipynb
bradleypallen/skyfield-notebooks
5192c5cb592968d4877e332bcfc7902070d86f27
[ "MIT" ]
null
null
null
Charts using SkyField.ipynb
bradleypallen/skyfield-notebooks
5192c5cb592968d4877e332bcfc7902070d86f27
[ "MIT" ]
null
null
null
1,068.4375
235,020
0.960933
[ [ [ "%matplotlib inline\nimport numpy as np \nfrom matplotlib import pyplot as plt\nfrom skyfield import almanac, api, data\nimport skychart", "_____no_output_____" ], [ "plt.rcParams['figure.figsize'] = [15, 15]", "_____no_output_____" ], [ "load = api.Loader('./data')", "_____no_output_____" ], [ "manhattan_beach = api.Topos('33.881519 N', '118.388177 W')", "_____no_output_____" ], [ "ts = load.timescale()", "_____no_output_____" ], [ "ephemeris = load('de421.bsp')", "_____no_output_____" ], [ "with load.open(data.hipparcos.URL) as f:\n df = data.hipparcos.load_dataframe(f)", "_____no_output_____" ], [ "earth = ephemeris['earth']\nt = ts.now()", "_____no_output_____" ], [ "bright = df[df['magnitude'] <= 5.5]", "_____no_output_____" ], [ "len(bright)", "_____no_output_____" ], [ "bright_stars = api.Star.from_dataframe(bright)", "_____no_output_____" ], [ "t = ts.now()\nastrometric = earth.at(t).observe(bright_stars)\nra, dec, distance = astrometric.radec()", "_____no_output_____" ], [ "observer = earth + manhattan_beach", "_____no_output_____" ], [ "chart = skychart.AltAzFullSkyChart(observer, t)", "_____no_output_____" ], [ "chart.plot_stars(bright)", "_____no_output_____" ], [ "chart.plot_ephemeris_object(ephemeris['sun'], 150, 'y')\nchart.plot_ephemeris_object(ephemeris['mercury'], 70, 'brown')\nchart.plot_ephemeris_object(ephemeris['venus'], 90, 'g')\nchart.plot_ephemeris_object(ephemeris['moon'], 150, 'b')\nchart.plot_ephemeris_object(ephemeris['mars'], 70, 'r')\nchart.plot_ephemeris_object(ephemeris['JUPITER BARYCENTER'], 90, 'y')\nchart.plot_ephemeris_object(ephemeris['SATURN BARYCENTER'], 80, 'y')", "_____no_output_____" ], [ "chart.display()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
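The record above stops at RA/Dec for the bright stars; a local altitude/azimuth for the same kind of observer comes from observe(...).apparent().altaz(). A short self-contained sketch (the Mars target and the print format are illustrative choices, not taken from the notebook):

    from skyfield.api import Topos, load

    ts = load.timescale()
    t = ts.now()
    eph = load('de421.bsp')                      # cached after the first download
    observer = eph['earth'] + Topos('33.881519 N', '118.388177 W')

    alt, az, distance = observer.at(t).observe(eph['mars']).apparent().altaz()
    print('Mars: alt %.1f deg, az %.1f deg' % (alt.degrees, az.degrees))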
e70fd2b043c36036e9f0c991f177f331f7582eb3
443,786
ipynb
Jupyter Notebook
notebooks/keras_example.ipynb
Chen-Zhao/python-lrcurve
57d158156e817b84a1fba6d6d6530c090da312cd
[ "MIT" ]
175
2019-11-28T20:39:42.000Z
2022-03-10T03:25:22.000Z
notebooks/keras_example.ipynb
Chen-Zhao/python-lrcurve
57d158156e817b84a1fba6d6d6530c090da312cd
[ "MIT" ]
11
2019-12-04T15:56:55.000Z
2022-02-15T00:18:30.000Z
notebooks/keras_example.ipynb
Chen-Zhao/python-lrcurve
57d158156e817b84a1fba6d6d6530c090da312cd
[ "MIT" ]
12
2019-12-01T08:47:27.000Z
2021-08-04T13:16:22.000Z
44.42747
47,951
0.467572
[ [ [ "!pip install lrcurve", "_____no_output_____" ], [ "import sklearn.datasets\nimport sklearn.model_selection\nimport tensorflow.keras as keras\nfrom lrcurve import KerasLearningCurve", "_____no_output_____" ], [ "# define dataset\nx_train, x_test, y_train, y_test = sklearn.model_selection.train_test_split(\n *sklearn.datasets.load_iris(return_X_y=True),\n random_state=0\n)\n\n# define model\nmodel = keras.Sequential()\nmodel.add(keras.Input(shape=(4, )))\nmodel.add(keras.layers.Dense(32, activation='tanh'))\nmodel.add(keras.layers.Dense(16, activation='tanh'))\nmodel.add(keras.layers.Dense(3))\n\n# Compile the model\nmodel.compile(optimizer=keras.optimizers.Adam(),\n loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=[keras.metrics.SparseCategoricalAccuracy()],\n run_eagerly=False)\n\n\nhistory = model.fit(x_train, y_train,\n batch_size=x_train.shape[0],\n epochs=500,\n validation_data=(x_test, y_test),\n validation_freq=50,\n callbacks=[KerasLearningCurve()],\n verbose=0)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
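KerasLearningCurve draws the curves while fit is running; the History object returned by model.fit already holds the same numbers once training ends. A minimal matplotlib sketch for plotting them afterwards (it assumes a history object like the one produced in the record; with validation_freq > 1 the val_* series are shorter than the training series):

    import matplotlib.pyplot as plt

    def plot_history(history):
        # history.history maps metric names to per-epoch value lists.
        for name, values in history.history.items():
            plt.plot(values, label=name)
        plt.xlabel("epoch")
        plt.ylabel("value")
        plt.legend()
        plt.show()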
e70fdda3d5af775aa9d41361f06bf81c2b71aaf0
12,226
ipynb
Jupyter Notebook
chapters/chapter_3/3_5_yelp_dataset_preprocessing_FULL.ipynb
TRBoom/PyTorchNLPBook
3ace692a8076e559bf4115e92ae5c932acb9956c
[ "Apache-2.0" ]
null
null
null
chapters/chapter_3/3_5_yelp_dataset_preprocessing_FULL.ipynb
TRBoom/PyTorchNLPBook
3ace692a8076e559bf4115e92ae5c932acb9956c
[ "Apache-2.0" ]
null
null
null
chapters/chapter_3/3_5_yelp_dataset_preprocessing_FULL.ipynb
TRBoom/PyTorchNLPBook
3ace692a8076e559bf4115e92ae5c932acb9956c
[ "Apache-2.0" ]
null
null
null
12,226
12,226
0.583265
[ [ [ "import collections\nimport numpy as np\nimport pandas as pd\nimport re\n\nfrom argparse import Namespace\nfrom google.colab import drive\ndrive.mount('/content/drive')\n%cd drive/MyDrive/CSC-project/PyTorchNLPBook/\n!pip install -r requirements.txt", "_____no_output_____" ], [ "args = Namespace(\n raw_train_dataset_csv=\"data/yelp/raw_train.csv\",\n raw_test_dataset_csv=\"data/yelp/raw_test.csv\",\n train_proportion=0.7,\n val_proportion=0.3,\n output_munged_csv=\"data/yelp/reviews_with_splits_full.csv\",\n seed=1337\n)", "_____no_output_____" ], [ "# Read raw data\ntrain_reviews = pd.read_csv(args.raw_train_dataset_csv, header=None, names=['rating', 'review'])\ntrain_reviews = train_reviews[~pd.isnull(train_reviews.review)]\ntest_reviews = pd.read_csv(args.raw_test_dataset_csv, header=None, names=['rating', 'review'])\ntest_reviews = test_reviews[~pd.isnull(test_reviews.review)]", "_____no_output_____" ], [ "train_reviews.head()", "_____no_output_____" ], [ "test_reviews.head()", "_____no_output_____" ], [ "# Unique classes\nset(train_reviews.rating)", "_____no_output_____" ], [ "# Splitting train by rating\n# Create dict\nby_rating = collections.defaultdict(list)\nfor _, row in train_reviews.iterrows():\n by_rating[row.rating].append(row.to_dict())", "_____no_output_____" ], [ "# Create split data\nfinal_list = []\nnp.random.seed(args.seed)\n\nfor _, item_list in sorted(by_rating.items()):\n\n np.random.shuffle(item_list)\n \n n_total = len(item_list)\n n_train = int(args.train_proportion * n_total)\n n_val = int(args.val_proportion * n_total)\n \n # Give data point a split attribute\n for item in item_list[:n_train]:\n item['split'] = 'train'\n \n for item in item_list[n_train:n_train+n_val]:\n item['split'] = 'val'\n\n # Add to final list\n final_list.extend(item_list)", "_____no_output_____" ], [ "for _, row in test_reviews.iterrows():\n row_dict = row.to_dict()\n row_dict['split'] = 'test'\n final_list.append(row_dict)", "_____no_output_____" ], [ "# Write split data to file\nfinal_reviews = pd.DataFrame(final_list)", "_____no_output_____" ], [ "final_reviews.split.value_counts()", "_____no_output_____" ], [ "final_reviews.review.head()", "_____no_output_____" ], [ "final_reviews[pd.isnull(final_reviews.review)]", "_____no_output_____" ], [ "# Preprocess the reviews\ndef preprocess_text(text):\n if type(text) == float:\n print(text)\n text = text.lower()\n text = re.sub(r\"([.,!?])\", r\" \\1 \", text)\n text = re.sub(r\"[^a-zA-Z.,!?]+\", r\" \", text)\n return text\n \nfinal_reviews.review = final_reviews.review.apply(preprocess_text)", "_____no_output_____" ], [ "final_reviews['rating'] = final_reviews.rating.apply({1: 'negative', 2: 'positive'}.get)", "_____no_output_____" ], [ "final_reviews.head()", "_____no_output_____" ], [ "final_reviews.to_csv(args.output_munged_csv, index=False)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
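The preprocess_text regexes in the record above pad . , ! ? with spaces and collapse every other non-letter run into a single space. A quick stand-alone check (the sample sentence is made up):

    import re

    def preprocess_text(text):
        text = text.lower()
        text = re.sub(r"([.,!?])", r" \1 ", text)      # pad punctuation with spaces
        text = re.sub(r"[^a-zA-Z.,!?]+", r" ", text)   # collapse digits/symbols/whitespace
        return text

    print(preprocess_text("Great food!! 5/5, would go again."))
    # letters and . , ! ? survive; the digits and the slash collapse to spaces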
e70fe17b44406150e1284d11a41f74f79fd06089
8,612
ipynb
Jupyter Notebook
examples/train_backtest_analyze.ipynb
vanshg/qlib
7c1c6ea71a63763b919458dcc236e3a9eba99458
[ "MIT" ]
null
null
null
examples/train_backtest_analyze.ipynb
vanshg/qlib
7c1c6ea71a63763b919458dcc236e3a9eba99458
[ "MIT" ]
2
2021-03-31T20:02:55.000Z
2021-12-13T20:47:04.000Z
examples/train_backtest_analyze.ipynb
vanshg/qlib
7c1c6ea71a63763b919458dcc236e3a9eba99458
[ "MIT" ]
null
null
null
25.40413
136
0.531352
[ [ [ "import sys\nfrom pathlib import Path\n\nimport qlib\nimport pandas as pd\nfrom qlib.config import REG_CN\nfrom qlib.contrib.model.gbdt import LGBModel\nfrom qlib.contrib.estimator.handler import Alpha158\nfrom qlib.contrib.strategy.strategy import TopkDropoutStrategy\nfrom qlib.contrib.evaluate import (\n backtest as normal_backtest,\n risk_analysis,\n)\nfrom qlib.utils import exists_qlib_data", "_____no_output_____" ], [ "# use default data\n# NOTE: need to download data from remote: python scripts/get_data.py qlib_data_cn --target_dir ~/.qlib/qlib_data/cn_data\nprovider_uri = \"~/.qlib/qlib_data/cn_data\" # target_dir\nif not exists_qlib_data(provider_uri):\n print(f\"Qlib data is not found in {provider_uri}\")\n sys.path.append(str(Path.cwd().parent.joinpath(\"scripts\")))\n from get_data import GetData\n GetData().qlib_data_cn(target_dir=provider_uri)\nqlib.init(provider_uri=provider_uri, region=REG_CN)", "_____no_output_____" ], [ "MARKET = \"csi300\"\nBENCHMARK = \"SH000300\"", "_____no_output_____" ] ], [ [ "# train model", "_____no_output_____" ] ], [ [ "###################################\n# train model\n###################################\nDATA_HANDLER_CONFIG = {\n \"dropna_label\": True,\n \"start_date\": \"2008-01-01\",\n \"end_date\": \"2020-08-01\",\n \"market\": MARKET,\n}\n\nTRAINER_CONFIG = {\n \"train_start_date\": \"2008-01-01\",\n \"train_end_date\": \"2014-12-31\",\n \"validate_start_date\": \"2015-01-01\",\n \"validate_end_date\": \"2016-12-31\",\n \"test_start_date\": \"2017-01-01\",\n \"test_end_date\": \"2020-08-01\",\n}\n\n# use default DataHandler\n# custom DataHandler, refer to: TODO: DataHandler api url\nx_train, y_train, x_validate, y_validate, x_test, y_test = Alpha158(**DATA_HANDLER_CONFIG).get_split_data(**TRAINER_CONFIG)\n\n\nMODEL_CONFIG = {\n \"loss\": \"mse\",\n \"colsample_bytree\": 0.8879,\n \"learning_rate\": 0.0421,\n \"subsample\": 0.8789,\n \"lambda_l1\": 205.6999,\n \"lambda_l2\": 580.9768,\n \"max_depth\": 8,\n \"num_leaves\": 210,\n \"num_threads\": 20,\n}\n# use default model\n# custom Model, refer to: TODO: Model api url\nmodel = LGBModel(**MODEL_CONFIG)\nmodel.fit(x_train, y_train, x_validate, y_validate)\n_pred = model.predict(x_test)\n_pred = pd.DataFrame(_pred, index=x_test.index, columns=y_test.columns)\n\n# backtest requires pred_score\npred_score = pd.DataFrame(index=_pred.index)\npred_score[\"score\"] = _pred.iloc(axis=1)[0]\n\n", "_____no_output_____" ] ], [ [ "# backtest", "_____no_output_____" ] ], [ [ "###################################\n# backtest\n###################################\nSTRATEGY_CONFIG = {\n \"topk\": 50,\n \"n_drop\": 5}\nBACKTEST_CONFIG = {\n \"verbose\": False,\n \"limit_threshold\": 0.095,\n \"account\": 100000000,\n \"benchmark\": BENCHMARK,\n \"deal_price\": \"close\",\n \"open_cost\": 0.0005,\n \"close_cost\": 0.0015,\n \"min_cost\": 5,\n \n}\n\n# use default strategy\n# custom Strategy, refer to: TODO: Strategy api url\nstrategy = TopkDropoutStrategy(**STRATEGY_CONFIG)\nreport_normal, positions_normal = normal_backtest(pred_score, strategy=strategy, **BACKTEST_CONFIG)\n", "_____no_output_____" ] ], [ [ "# analyze", "_____no_output_____" ] ], [ [ "###################################\n# analyze\n# If need a more detailed analysis, refer to: examples/train_and_bakctest.ipynb\n###################################\nanalysis = dict()\nanalysis[\"excess_return_without_cost\"] = risk_analysis(report_normal[\"return\"] - report_normal[\"bench\"])\nanalysis[\"excess_return_with_cost\"] = risk_analysis(\n 
report_normal[\"return\"] - report_normal[\"bench\"] - report_normal[\"cost\"]\n)\nanalysis_df = pd.concat(analysis) # type: pd.DataFrame\nprint(analysis_df)", "_____no_output_____" ] ], [ [ "# analyze graphs", "_____no_output_____" ] ], [ [ "from qlib.contrib.report import analysis_model, analysis_position\nfrom qlib.data import D\npred_df_dates = pred_score.index.get_level_values(level='datetime')\nreport_normal_df = report_normal\npositions = positions_normal\npred_df = pred_score", "_____no_output_____" ] ], [ [ "## analysis position", "_____no_output_____" ] ], [ [ "stock_ret = D.features(D.instruments(MARKET), ['Ref($close, -1)/$close - 1'], pred_df_dates.min(), pred_df_dates.max())\nstock_ret.columns = ['label']", "_____no_output_____" ] ], [ [ "### report", "_____no_output_____" ] ], [ [ "analysis_position.report_graph(report_normal_df)", "_____no_output_____" ] ], [ [ "### risk analysis", "_____no_output_____" ] ], [ [ "analysis_position.risk_analysis_graph(analysis_df, report_normal_df)", "_____no_output_____" ] ], [ [ "## analysis model", "_____no_output_____" ] ], [ [ "label_df = D.features(D.instruments(MARKET), ['Ref($close, -2)/Ref($close, -1) - 1'], pred_df_dates.min(), pred_df_dates.max())\nlabel_df.columns = ['label']", "_____no_output_____" ] ], [ [ "### score IC", "_____no_output_____" ] ], [ [ "pred_label = pd.concat([label_df, pred_df], axis=1, sort=True).reindex(label_df.index)\nanalysis_position.score_ic_graph(pred_label)", "_____no_output_____" ] ], [ [ "### model performance", "_____no_output_____" ] ], [ [ "analysis_model.model_performance_graph(pred_label)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
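risk_analysis in the record above reports annualized statistics for a daily excess-return series. A rough, generic approximation of those numbers with plain pandas (the synthetic series, the 252-trading-day factor and the additive drawdown are simplifying assumptions; qlib's own conventions may differ):

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    excess = pd.Series(rng.normal(0.0005, 0.01, 250))     # fake daily excess returns

    ann_return = excess.mean() * 252
    ann_vol = excess.std() * np.sqrt(252)
    info_ratio = ann_return / ann_vol
    max_drawdown = (excess.cumsum() - excess.cumsum().cummax()).min()

    print(ann_return, ann_vol, info_ratio, max_drawdown)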
e70ff17fd89d06be868db6ec98c01c064f6bf205
8,879
ipynb
Jupyter Notebook
recommendation_algorithm.ipynb
viniaraujoo/Recommender-system
712386fa874851d0cc4776d3d82c7dbbebd4cf88
[ "Apache-2.0" ]
null
null
null
recommendation_algorithm.ipynb
viniaraujoo/Recommender-system
712386fa874851d0cc4776d3d82c7dbbebd4cf88
[ "Apache-2.0" ]
null
null
null
recommendation_algorithm.ipynb
viniaraujoo/Recommender-system
712386fa874851d0cc4776d3d82c7dbbebd4cf88
[ "Apache-2.0" ]
null
null
null
32.405109
650
0.589368
[ [ [ "import pandas as pd\nimport os\nfrom surprise import Dataset, KNNBasic, Reader, accuracy, SVD\nfrom surprise.model_selection import cross_validate, PredefinedKFold", "_____no_output_____" ] ], [ [ "## Problem Description \nThe main goal of this project is to develop a recommendation support system. For this analysis we use the [MovieLens](https://grouplens.org/datasets/movielens/) database, whose data were collected through the MovieLens web site (movielens.umn.edu) over the seven-month period from September 19, 1997 to April 22, 1998. The data have been cleaned up - users who had fewer than 20 ratings or did not have complete demographic information were removed from this dataset. Detailed descriptions of the data file can be found at the end of that file.\nFrom these data a recommendation system based on the two algorithms explained below was developed.", "_____no_output_____", "## Reading the Dataset\nThe code below is responsible for reading a specific subset of the [ml-100k](https://grouplens.org/datasets/movielens/) database, where we use the specific test set and the item base, considering the 1000 users present. " ] ], [ [ "items_stream = open('ml-100k/u.item', 'r')\nitem_data = items_stream.read().split('\\n')\nitem_data = list(map(lambda item: item.split('|')[:2], item_data))\nitems_stream.close()", "_____no_output_____" ], [ "database = pd.read_csv('ml-100k/u1.base.csv')\nuser_set = set(database.user_id)\nitem_set = set(database.item_id)\nnot_watch = {user: item_set.difference(database.query('user_id == %s' %(user)).item_id) for user in user_set}", "_____no_output_____" ], [ "files_dir = os.path.expanduser('ml-100k/')\nreader = Reader('ml-100k')", "_____no_output_____" ] ], [ [ "## Reading the Test Set\nFor the test set we consider fold 1 contained in the dataset.", "_____no_output_____" ] ], [ [ "train_file = files_dir + 'u%d.base'\ntest_file = files_dir + 'u%d.test'\nfolds_files = [(train_file % i, test_file % i) for i in [1]]\n\ndata = Dataset.load_from_folds(folds_files, reader=reader)\npkf = PredefinedKFold()", "_____no_output_____" ] ], [ [ "## Algorithms Used \nFor this analysis we use two different algorithms in order to compare the results; both algorithms are available in the surprise library. They are:\n+ KNN\n+ SVD", "_____no_output_____", "### KNN\n+ The main idea of KNN is to determine the classification label of a sample based on the neighbouring samples drawn from a training set.\n+ Steps:\n + 1 - Choose an arbitrary vertex as the current vertex.\n + 2 - Find the lowest-weight edge connected to the current vertex and to an unvisited vertex V.\n + 3 - Make V the current vertex.\n + 4 - Mark V as visited.\n + 5 - If every vertex in the domain has been visited, stop the algorithm.\n + 6 - Otherwise go back to step 2.\n+ More detail on the formulation and on how the library implements the algorithm [here](http://surprise.readthedocs.io/en/stable/knn_inspired.html#surprise.prediction_algorithms.knns.KNNBasic).", "_____no_output_____", "### SVD\nThe famous SVD algorithm, popularized by Simon Funk during the Netflix Prize. When baselines are not used, this is equivalent to probabilistic matrix factorization.\nMore detail on the formulation and on how the library implements the algorithm [here](http://surprise.readthedocs.io/en/stable/matrix_factorization.html#matrix-factorization-based-algorithms).", "_____no_output_____" ] ], [ [ "sim_options = {\n    'name': 'cosine',\n    'user_based': True  # compute similarities between users\n}\n\nalgo = KNNBasic(sim_options=sim_options, k=4, min_k=2)\nalgo_svd = SVD()\nfor trainset, testset in pkf.split(data):\n\n    # train and test algorithm.\n    algo_svd.fit(trainset)\n    algo.fit(trainset)\n    predictions = algo.test(testset)\n    predictions_svd = algo_svd.test(testset)\n    accuracy.rmse(predictions,verbose=True)\n    accuracy.rmse(predictions_svd,verbose=True)\n    ", "Computing the cosine similarity matrix...\nDone computing similarity matrix.\nRMSE: 1.1118\nRMSE: 0.9513\n" ] ], [ [ "## Methods Used.\nBelow are the methods that return the top 5 movies according to SVD or KNN, and also the method that returns the top 5 users who, according to KNN, have a profile similar to the profile selected in the query. ", "_____no_output_____" ] ], [ [ "def get_top_5(uid):\n    top = []\n    items = not_watch[int(uid)]\n    \n    for item in items:\n        top.append((item, algo.predict(uid=uid, iid=str(item)).est))\n    \n    return sorted(top, key=lambda item: item[1], reverse=True)[:5]\n\n\ndef get_top_5_movies_KNN(uid):\n    top_5 = get_top_5(uid)\n    return [item_data[int(item[0])][1] for item in top_5]", "_____no_output_____" ], [ "def get_top2_5(uid):\n    top = []\n    items = not_watch[int(uid)]\n    \n    for item in items:\n        top.append((item, algo_svd.predict(uid=uid, iid=str(item)).est))\n    \n    return sorted(top, key=lambda item: item[1], reverse=True)[:5]\n\ndef get_top_5_movies_SVD(uid):\n    top_5 = get_top2_5(uid)\n    return [item_data[int(item[0])][1] for item in top_5]", "_____no_output_____" ], [ "def get_top_5_neighbors(uid):\n    inner_uid = algo.trainset.to_inner_uid(uid)\n    neighbords = algo.get_neighbors(iid=inner_uid, k=5)\n    return [algo.trainset.to_raw_uid(iid) for iid in neighbords]", "_____no_output_____" ] ], [ [ "## Example of the application. User id: 11. ", "_____no_output_____", "| KNN Recommendations | SVD Recommendations | Nearest Users |\n| ------------- |:-------------:| -----:|\n| Angels and Insects (1995) | Mighty Aphrodite (1995) | 9 |\n| Mother (1996) | Maltese Falcon, The (1941) | 34 |\n| That Old Feeling (1997) | Ulee's Gold (1997) | 86 |\n| Ayn Rand: A Sense of Life (1997) | Legends of the Fall (1994) | 88 |\n| Cure, The (1995) | Brazil (1985) | 93 |", "_____no_output_____", "### Accuracy Analysis of the Algorithms. \n| Algorithm | RMSE | \n| ------------- |:-------------:|\n| KNN | 1.1118 |\n| SVD| 0.9513 |", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ] ]
e7100cec332e3f885dff3698bfcc7a0be7c44571
63,692
ipynb
Jupyter Notebook
ImageToPix_PixToImage/ImageToPix_PixToImage/ImageToPix_PixToImage.ipynb
wahmed555/Image_Processing
d327a1f122228a40d668ee9425f945a4b21735dc
[ "MIT" ]
null
null
null
ImageToPix_PixToImage/ImageToPix_PixToImage/ImageToPix_PixToImage.ipynb
wahmed555/Image_Processing
d327a1f122228a40d668ee9425f945a4b21735dc
[ "MIT" ]
null
null
null
ImageToPix_PixToImage/ImageToPix_PixToImage/ImageToPix_PixToImage.ipynb
wahmed555/Image_Processing
d327a1f122228a40d668ee9425f945a4b21735dc
[ "MIT" ]
null
null
null
28.69009
132
0.262231
[ [ [ "from PIL import Image\nimport numpy as np \nimport pandas as pd", "_____no_output_____" ] ], [ [ "Converting Image to Pixels In a CSV", "_____no_output_____" ] ], [ [ "#creating image object\ncolourImg = Image.open(\"image.png\")\n\n#converting image to rgb format\ncolourPixels = colourImg.convert(\"RGB\") \n\n#converting image pixels to array and resizing array\ncolourArray = np.array(colourPixels.getdata()).reshape(colourImg.size + (3,))\n\n#np.moveaxis Move axes of an array to new positions.\n#Other axes remain in their original order.\n#Compute an array where the subarrays contain index \n#values 0, 1, … varying only along the corresponding axis.\nindicesArray = np.moveaxis(np.indices(colourImg.size), 0, 2) \nprint(colourImg.size) \n \n \n#Stack arrays in sequence depth wise (along third axis).\n#-1 is for removing the axis formed when axises are moved\nallArray = np.dstack((indicesArray, colourArray)).reshape((-1, 5)) \n\n#converting numpy array to dataframe\ndf_rgb_xy = pd.DataFrame(allArray, columns=[\"y\", \"x\", \"red\",\"green\",\"blue\"]) ", "(1100, 619)\n" ], [ "#shape of pixels array\ncolourArray.shape", "_____no_output_____" ], [ "#dataframe formes by the pixels of image with their corresponding position\ndf_rgb_xy", "_____no_output_____" ], [ "#saving dataframe to csv\ndf_rgb_xy.to_csv('xy_rgb_ValuesOfImage.csv')", "_____no_output_____" ], [ "#calculation RGB in percentage\ndf_rgb_percentage_xy=df_rgb_xy\n#appending RGB percentage columns\ndf_rgb_percentage_xy['R%'] = (df_rgb_xy.red /255)*100\ndf_rgb_percentage_xy['G%'] = (df_rgb_xy.green /255)*100\ndf_rgb_percentage_xy['B%'] = (df_rgb_xy.blue /255)*100\n", "_____no_output_____" ], [ "#dataframe with RGB percentage columns\ndf_rgb_percentage_xy", "_____no_output_____" ], [ "#saving RGB percentage dataframe to csv\ndf_rgb_percentage_xy.to_csv('rgb_percentage_xy_ValuesOfImage.csv')", "_____no_output_____" ] ], [ [ "Converting Pixels From CSV to Image", "_____no_output_____" ] ], [ [ "#importing RGB percentage csv to dataframe\n\ndf_rgb_percentage_xy = pd.read_csv(\"rgb_percentage_xy_ValuesOfImage.csv\") \n", "_____no_output_____" ], [ "#selecting columns which we need to display image\ndf_rgb=df_rgb_percentage_xy[[\"red\",\"green\",\"blue\"]]", "_____no_output_____" ], [ "#dataframe of selected columns\ndf_rgb", "_____no_output_____" ], [ "#inserting column in dataframe taht contain max rgb value\ndf_rgb['Max_rgb_value'] = \"255\" ", "D:\\anaconda\\lib\\site-packages\\ipykernel\\__main__.py:2: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n from ipykernel import kernelapp as app\n" ], [ "#converting dataframe to array\npixArray=df_rgb.values", "_____no_output_____" ], [ "#extracting height and width of image\nheight, width=colourImg.size", "_____no_output_____" ], [ "#reshaping array for applying in function\npixArray=pixArray.reshape(width,height, 4)", "_____no_output_____" ], [ "#reshaped array\npixArray", "_____no_output_____" ], [ "print(pixArray.shape)\nprint(pixArray.dtype)\n", "(619, 1100, 4)\nobject\n" ], [ "#converting datatype of array\npixArray_unsigned = pixArray.astype('uint8') \n", "_____no_output_____" ], [ "#converting pixels array to image\nimage2 = Image.fromarray(pixArray_unsigned)\n", "_____no_output_____" ], [ "#saving to a file\nimage2.save('new.png')", 
"_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7100f929b401cac5559ce12c73b0712e2e23f82
87,311
ipynb
Jupyter Notebook
spectrum1.ipynb
salvol/metalsf2
4900173da33114216890ba2e57e12c18ec051416
[ "MIT" ]
null
null
null
spectrum1.ipynb
salvol/metalsf2
4900173da33114216890ba2e57e12c18ec051416
[ "MIT" ]
null
null
null
spectrum1.ipynb
salvol/metalsf2
4900173da33114216890ba2e57e12c18ec051416
[ "MIT" ]
null
null
null
87,311
87,311
0.820366
[ [ [ "# We will use spekpy to plot X ray spectrum as a function of voltage, target and filters : https://bitbucket.org/spekpy/spekpy_release/wiki/Home\n\n## the code below show the plot of a spectrum for a fixed voltage , a W target and with a filter of 0.5 mm of Aluminium. study the influence of\n\n### Voltage : 40kV to 160kV without filter and with 0.5 mm Al filter : effect on mean energy ?\n### Filter : compare with 0.5 mm Al filter and without filter at 40 kV : effect on mean energy ?\n### Filter : compare with 1 mm Cu filter and without filter at 160 kV : effect on mean energy ?\n### target : compare W and Mo target at 50V with 0.5 mm Al : effect on mean energy ?", "_____no_output_____" ] ], [ [ "%matplotlib notebook\nimport spekpy as sp # Import SpekPy\nimport numpy as np # import numpy\nimport matplotlib.pyplot as plt # Import library for plotting\nimport xraydb\n\nVoltage = 50 # kV\ntheta = 20 # X ray beam angle / anode in °\nFilter_Material = 'Al'\nFilter_Thickness = 0.5 # in mm (put 0 if no filter)\nTarget_Material ='W'\n\ns = sp.Spek(kvp=Voltage,th=theta,targ=Target_Material) # Create a spectrum\ns.filter(Filter_Material,Filter_Thickness) # Filter the spectrum thickness in mm\nenergy, intensity = s.get_spectrum(edges=True) # Get the spectrum\n\n# value to change \nVoltage2 = 50 # kV\ntheta2 = 20 # X ray beam angle / anode in °\nFilter_Material2 = 'Al'\nFilter_Thickness2 = 0.5 # in mm (put 0 if no filter)\nTarget_Material2 ='W'\n\ns2 = sp.Spek(kvp=Voltage2,th=theta2,targ=Target_Material2) # Create a spectrum\ns2.filter(Filter_Material2,Filter_Thickness2) # Filter the spectrum thickness in mm\nenergy2, intensity2 = s2.get_spectrum(edges=True) # Get the spectrum\n\n# Plot the spectrum\nplt.plot(energy, intensity, label='%d kV %s Filter %0.2f mm target %s' %(Voltage,Filter_Material,Filter_Thickness,Target_Material)) \nplt.plot(energy2, intensity2, label='%d kV %s Filter %0.2f mm target %s' %(Voltage2,Filter_Material2,Filter_Thickness2,Target_Material2)) \nplt.xlabel('Energy [keV]')\nplt.ylabel('Fluence per mAs per unit energy [photons/cm2/mAs/keV]')\nplt.legend(loc='best')\nplt.show()\n\n# compute mean energy\nmean_energy = np.sum(energy * intensity / np.sum (intensity))\nplt.title('mean energy = %0.2f in keV' %(mean_energy))\nprint(mean_energy)\n\nmean_energy2 = np.sum(energy2 * intensity2 / np.sum (intensity2))\nplt.title('mean energy = %0.2f in keV' %(mean_energy2))\nprint(mean_energy2)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code" ] ]
e71011e7b811f070d9333b6fc85784545d10b7b1
417,644
ipynb
Jupyter Notebook
svhn-model.ipynb
k-chuang/tf-svhn
b19e39fcca911ae91156ede60a37d2a409804e98
[ "MIT" ]
3
2018-11-13T08:02:40.000Z
2021-08-08T09:08:39.000Z
svhn-model.ipynb
k-chuang/tf-svhn
b19e39fcca911ae91156ede60a37d2a409804e98
[ "MIT" ]
1
2019-05-12T11:32:20.000Z
2019-05-12T11:32:20.000Z
svhn-model.ipynb
k-chuang/tf-svhn
b19e39fcca911ae91156ede60a37d2a409804e98
[ "MIT" ]
6
2019-08-04T04:46:46.000Z
2020-09-16T04:59:18.000Z
149.854324
137,640
0.841434
[ [ [ "# Street View House Numbers (SVHN)\n\n* Author: Kevin Chuang [@k-chuang](https://www.github.com/k-chuang)\n* Created on: September 14, 2018\n* Description: Implementation of a deep neural network (CNN) using TensorFlow to recognize images of sequences of digits (Google's street view house numbers) \n* Dataset: [SVHN dataset](http://ufldl.stanford.edu/housenumbers/)\n\n-----------", "_____no_output_____" ], [ "# Model Training Steps\n\n1. Construction phase\n - Build static computational graph using TensorFlow\n2. Execution phase\n - Initiate session to execute operations in the graph (e.g. as minimizing loss)", "_____no_output_____" ] ], [ [ "# OS packages\nimport os\nimport sys\n\n# linear algebra\nimport numpy as np\n\n# data processing \nimport pandas as pd\n\n# data visualizations\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\n# file system/structure \nimport h5py\n\n# deep learning framework\nimport tensorflow as tf\n\n# time packages\nimport time\nfrom datetime import timedelta\n\n# utils\nfrom sklearn.utils import shuffle\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (20.0, 10.0)\n\ntf.logging.set_verbosity(tf.logging.INFO)\n\nprint(\"Tensorflow version: \" + tf.__version__)", "c:\\users\\kevin\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n" ] ], [ [ "# Load data\n\n- Load the preprocessed data from the previous notebook using `h5py`\n- Save all the image attributes to use later", "_____no_output_____" ] ], [ [ "# Open the HDF5 file containing the datasets\nwith open(h5py.File('data/SVHN_multi_digit_norm_grayscale.h5','r')) as h5f:\n X_train = h5f['X_train'][:]\n y_train = h5f['y_train'][:]\n X_val = h5f['X_val'][:]\n y_val = h5f['y_val'][:]\n X_test = h5f['X_test'][:]\n y_test = h5f['y_test'][:]\n\n\nprint('Training set', X_train.shape, y_train.shape)\nprint('Validation set', X_val.shape, y_val.shape)\nprint('Test set', X_test.shape, y_test.shape)", "Training set (225754, 32, 32, 1) (225754, 5)\nValidation set (10000, 32, 32, 1) (10000, 5)\nTest set (13068, 32, 32, 1) (13068, 5)\n" ], [ "# Get the image data information & dimensions\ntrain_count, img_height, img_width, num_channels = X_train.shape\n\n# Get label information\nnum_digits, num_labels = y_train.shape[1], len(np.unique(y_train))", "_____no_output_____" ] ], [ [ "# Helper Functions\n\n- Create helper functions to make notebook easier to read and reduce code duplication\n - Helper functions include plotting, initializing variables in TF graph, designing model, etc.", "_____no_output_____" ], [ "### Plot Images", "_____no_output_____" ] ], [ [ "def plot_images(images, nrows, ncols, cls_true, cls_pred=None):\n \"\"\" Helper function for plotting nrows * ncols images\n \"\"\"\n fig, axes = plt.subplots(nrows, ncols, figsize=(16, 2*nrows))\n for i, ax in enumerate(axes.flat): \n # Pretty string with actual label\n true_number = ''.join(str(x) for x in cls_true[i] if x != 10)\n if cls_pred is None:\n title = \"Label: {0}\".format(true_number)\n else:\n # Pretty string with predicted label\n pred_number = ''.join(str(x) for x in cls_pred[i] if x != 10)\n title = \"Label: {0}, Pred: {1}\".format(true_number, pred_number) \n \n if images[i].shape == (32, 32, 3):\n ax.imshow(images[i])\n else:\n ax.imshow(images[i,:,:,0], 
cmap=\"gray\")\n ax.set_title(title) \n ax.set_xticks([]); ax.set_yticks([])", "_____no_output_____" ] ], [ [ "### Create new variables & initialization\n\n- Create new Tensorflow variables with a given shape\n- Initialize according to [Xavier & Glorot](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf) initialization scheme\n - will experiment with the [He](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf) intialization scheme as well\n- Things to note:\n - The idea under both Xavier and He initialization is to preserve variance of activation values between layers.\n - `xavier_initializer_conv2d()` is the same as `xavier_initializer()`\n - just an alias to differentiate\n - `He` initialization is supposedly good for ReLu functions", "_____no_output_____" ] ], [ [ "def init_conv_weights_xavier(shape, name):\n return tf.get_variable(name, shape, initializer=tf.contrib.layers.xavier_initializer_conv2d())\n\ndef init_fc_weights_xavier(shape, name):\n return tf.get_variable(name, shape, initializer=tf.contrib.layers.xavier_initializer())\n\ndef init_conv_weights_he(shape, name):\n return tf.get_variable(name, shape, initializer=tf.keras.initializers.he_uniform())\n\ndef init_fc_weights_he(shape, name):\n return tf.get_variable(name, shape, initializer=tf.keras.initializers.he_uniform())\n\ndef init_biases(shape):\n return tf.Variable(tf.constant(0.0, shape=shape))", "_____no_output_____" ] ], [ [ "## Create layers in neural network\n\n- Create functions for different layers in the computational graph our neural network in TensorFlow.\n- Overview:\n - Convolution layer\n - Flatten layer\n - Fully connected or Dense layer", "_____no_output_____" ], [ "### Convolution Layer\n\n- Convolution layers usually create feature maps of each indvidual image by convolving a fixed size filter across the image\n- Conv layers will create abstract representations of the image \n - The early layers will detect low level features, such as edges or blobs\n - The later layers will start learning high level features from the combination of low features in the earlier layers\n- Convolution layers are usually used in image feature extraction, because they are shift invariant and can recognize small patterns in subsamples of an image\n\nCommon ConvNet architectures follows the pattern:\n\n`INPUT > [[CONV -> RELU]*N -> POOL?]*M -> [FC -> RELU]*K -> FC`\n\nSource: http://cs231n.github.io/convolutional-networks/", "_____no_output_____" ], [ "### Batch Normalization (Used)\n\n- BN reduces the amount by what the hidden unit values shift around (covariance shift)\n- BN allows each layer of a network to learn by itself a little bit more independently of other layers\n- BN allows us to use higher learning rates, and controls exploding & vanishing gradients\n- BN can also help reduce overfitting, with slight regularization effects\n - Similiar to dropout, it adds some noise to each hidden layer's activations.\n - We will still use dropout layers, but will use less dropout, thus keeping more information.\n- How does BN work?\n - To increase stability of a neural network, BN normalizes the output of a previous activation layer by subtracting the batch mean and dividng by the batch standard deviation\n \nSource: [Batch Normalization in Neural Networks](https://towardsdatascience.com/batch-normalization-in-neural-networks-1ac91516821c)\n\n### Maxout Layer (Not Used)\n\n- A maxout layer is simply a layer where the activation function is the max of the inputs. 
\n- As stated in the paper (below), even an MLP with 2 maxout units can approximate any function.\n- Similiar to ReLU (no saturation, linear regime of operation), and does not have its drawbacks of dying ReLU (or dying/vanishing gradients)\n\nSource: [Maxout Network](https://arxiv.org/pdf/1302.4389v4.pdf)", "_____no_output_____" ] ], [ [ "def conv_layer(input_tensor, # The input or previous layer\n filter_size, # Width and height of each filter\n in_channels, # Number of channels in previous layer\n num_filters, # Number of filters\n layer_name, # Layer description name\n pooling, # Average pooling\n initializer='xavier'): # He or Xavier initialization \n \n # Add layer name scopes for better graph visualization\n with tf.name_scope(layer_name):\n \n # Shape of the filter-weights for the convolution\n shape = [filter_size, filter_size, in_channels, num_filters]\n\n # Create weights and biases\n if initializer == 'he':\n weights = init_conv_weights_he(shape, layer_name + '/weights')\n else:\n weights = init_conv_weights_xavier(shape, layer_name + '/weights')\n \n biases = init_biases([num_filters])\n \n # Add histogram summaries for weights\n tf.summary.histogram(layer_name + '/weights', weights)\n \n # Create the TensorFlow operation for convolution, with S=1 and zero padding\n activations = tf.nn.conv2d(input_tensor, weights, [1, 1, 1, 1], 'SAME') + biases\n \n # Add Batch Normalization\n activations = tf.layers.batch_normalization(activations)\n\n # Rectified Linear Unit (ReLU)\n# activations = tf.nn.relu(activations)\n activations = tf.nn.leaky_relu(activations, alpha=0.10)\n# activations = tf.contrib.layers.maxout(activations, num_units=num_filters)\n\n # pooling layer\n if pooling:\n # Create a pooling layer with F=2, S=1 and zero padding\n# activations = tf.nn.max_pool(activations, [1, 2, 2, 1], [1, 2, 2, 1], 'SAME')\n activations = tf.nn.avg_pool(activations, [1, 2, 2, 1], [1, 2, 2, 1], 'SAME')\n\n return activations", "_____no_output_____" ] ], [ [ "### Flatten Layer\n\n- A convolutional layer produces an output tensor with 4 dimensions.\n - (Batch_size, height, width, channels) or NHWC format\n- A dense or fully connected layer will typically be added after convolution layers, and these layers can only take in 2-dim tensors. \n - Need to reduce the 4-dim tensor to 2-dim, or flatten the (height, width, channels) axes \n- Example: \n - `Input shape`: (?, 16, 16, 64) ---> `Flatten Layer` ---> `Output shape`: (?, 16 x 16 x 64) or (?, 16384).", "_____no_output_____" ] ], [ [ "def flatten_layer(input_tensor):\n \"\"\" Helper function for transforming a 4D tensor to 2D\n \"\"\"\n # Get the shape of the input_tensor.\n input_tensor_shape = input_tensor.get_shape()\n\n # Calculate the volume of the input tensor\n num_activations = input_tensor_shape[1:4].num_elements()\n \n # Reshape the input_tensor to 2D: (?, num_activations)\n input_tensor_flat = tf.reshape(input_tensor, [-1, num_activations])\n\n # Return the flattened input_tensor and the number of activations\n return input_tensor_flat, num_activations", "_____no_output_____" ] ], [ [ "### Fully Connected (Dense) Layer\n\n- Neurons in a fully connected layer have full connections to all activations in the previous layer (Perceptron)\n- Their activations can hence be computed with a matrix multiplication followed by a bias offset.\n- Then, they can this can be passed through an optional leaky ReLU function", "_____no_output_____" ] ], [ [ "def fc_layer(input_tensor, # The previous layer, \n input_dim, # Num. inputs from prev. 
layer\n output_dim, # Num. outputs\n layer_name, # The layer name\n relu=False): \n\n # Add layer name scopes for better graph visualization\n with tf.name_scope(layer_name):\n \n # Create new weights and biases.\n weights = init_fc_weights_xavier([input_dim, output_dim], layer_name + '/weights')\n# weights = init_fc_weights_he([input_dim, output_dim], layer_name + '/weights')\n\n biases = init_biases([output_dim])\n \n # Add histogram summaries for weights\n tf.summary.histogram(layer_name + '/weights', weights)\n\n # Calculate the layer activation\n activations = tf.matmul(input_tensor, weights) + biases\n \n if relu:\n activations = tf.nn.leaky_relu(activations, alpha=0.10)\n# activations = tf.nn.relu(activations)\n# activations = tf.contrib.layers.maxout(activations, num_units=output_dim)\n\n return activations", "_____no_output_____" ] ], [ [ "# TensorFlow Model\n\n- Initialize the configurations of the CNN and the data dimensions. \n- Create placeholder variables (input variables, dropout)\n- Create model architecture / computational graph\n- Define loss function\n- Define optimization method\n- Define evaluation metric", "_____no_output_____" ] ], [ [ "# Optimizer learning rate\n# learning_rate = 0.0001\n\n# Conv Block 1\nfilter_size1 = filter_size2 = 5 \nnum_filters1 = num_filters2 = 32 \n\n# Conv Block 2\n\nfilter_size3 = filter_size4 = 5 \nnum_filters3 = num_filters4 = 64\n\n# Conv Block 3\nfilter_size5 = filter_size6 = filter_size7 = 5 \nnum_filters5 = num_filters6 = num_filters7 = 128 \n\n# Fully-connected layers\nfc1_size = fc2_size = 256", "_____no_output_____" ] ], [ [ "## Placeholder variables", "_____no_output_____" ] ], [ [ "with tf.name_scope(\"input\"):\n \n # Placeholders for feeding input images\n x = tf.placeholder(tf.float32, shape=(None, img_height, img_width, num_channels), name='x')\n y_ = tf.placeholder(tf.int64, shape=[None, num_digits], name='y_')\n\nwith tf.name_scope(\"dropout\"):\n \n # Dropout rate applied after the pooling layers\n p_keep_1 = tf.placeholder(tf.float32)\n tf.summary.scalar('conv_keep_probability', p_keep_1)\n\n # Dropout rate using between the fully-connected layers\n p_keep_2 = tf.placeholder(tf.float32)\n tf.summary.scalar('fc_keep_probability', p_keep_2)", "_____no_output_____" ] ], [ [ "## Model Definition\n\n- The architecture of my model can be summarized as:\n\n", "_____no_output_____" ] ], [ [ "# Conv Block 1\nconv_1 = conv_layer(x, filter_size1, num_channels, num_filters1, \"conv_1\", pooling=False)\nconv_2 = conv_layer(conv_1, filter_size2, num_filters1, num_filters2, \"conv_2\", pooling=True)\ndrop_block1 = tf.nn.dropout(conv_2, p_keep_1) # Dropout\n\n# Conv Block 2\nconv_3 = conv_layer(conv_2, filter_size3, num_filters2, num_filters3, \"conv_3\", pooling=False)\nconv_4 = conv_layer(conv_3, filter_size4, num_filters3, num_filters4, \"conv_4\", pooling=True)\ndrop_block2 = tf.nn.dropout(conv_4, p_keep_1) # Dropout\n\n# Conv Block 3\nconv_5 = conv_layer(drop_block2, filter_size5, num_filters4, num_filters5, \"conv_5\", pooling=False)\nconv_6 = conv_layer(conv_5, filter_size6, num_filters5, num_filters6, \"conv_6\", pooling=False)\nconv_7 = conv_layer(conv_6, filter_size7, num_filters6, num_filters7, \"conv_7\", pooling=True)\nflat_tensor, num_activations = flatten_layer(tf.nn.dropout(conv_7, p_keep_2)) # Dropout\n\n# Fully-connected 1\nfc_1 = fc_layer(flat_tensor, num_activations, fc1_size, 'fc_1', relu=True)\ndrop_fc2 = tf.nn.dropout(fc_1, p_keep_2) # Dropout\n\n# Fully-connected 2\nfc_2 = fc_layer(drop_fc2, fc1_size, fc2_size, 
'fc_2', relu=True)\n\n# Parallel softmax layers\nlogits_1 = fc_layer(fc_2, fc2_size, num_labels, 'softmax1')\nlogits_2 = fc_layer(fc_2, fc2_size, num_labels, 'softmax2')\nlogits_3 = fc_layer(fc_2, fc2_size, num_labels, 'softmax3')\nlogits_4 = fc_layer(fc_2, fc2_size, num_labels, 'softmax4')\nlogits_5 = fc_layer(fc_2, fc2_size, num_labels, 'softmax5')\n\n# Stack the logits together to make a prediction for an image (5 digit sequence prediction)\ny_pred = tf.stack([logits_1, logits_2, logits_3, logits_4, logits_5])\n\n# The class-number is the index of the largest element\ny_pred_cls = tf.transpose(tf.argmax(y_pred, axis=2))", "_____no_output_____" ] ], [ [ "## Loss Function\n\n- Calculate the loss by taking the average loss of every individual example for each of our 5 digits and adding them together. \n- Using `tf.nn.sparse_softmax_cross_entropy_with_logits` allows us to skip using `OneHotEncoding` on our label values.", "_____no_output_____" ] ], [ [ "with tf.name_scope('loss'):\n \n # Calculate the loss for each individual digit in the sequence\n loss1 = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_1, labels=y_[:, 0]))\n loss2 = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_2, labels=y_[:, 1]))\n loss3 = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_3, labels=y_[:, 2]))\n loss4 = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_4, labels=y_[:, 3]))\n loss5 = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_5, labels=y_[:, 4]))\n\n # Calculate the total loss for all predictions\n loss = loss1 + loss2 + loss3 + loss4 + loss5\n \n # Create tensorboard logs for loss\n tf.summary.scalar('loss', loss)", "_____no_output_____" ] ], [ [ "## Optimizer\n\n- We have a loss function that must be minimized, now we need an optimizer to optimize on this loss function.\n- `Adam` is a good starting optimizer historically & empirically\n - replacement of Stochastic Gradient Descent\n - adaptively changes the learning rate\n - maintains an adaptive per parameter learning rate that is based on the average of the first moment of the gradients (mean) and the second moment of the gradients (uncentered variance) \n- In addition to using the Adam optimizer, we also exponentially decay the learning rate by half (or 0.5) every 20 epochs\n - ~440 steps per epoch for batch size 512, so ~8800 steps for 20 epochs\n - Very useful & effective to decay the learning rate to prevent overshooting the optimal loss & overfitting ", "_____no_output_____" ] ], [ [ "with tf.name_scope('optimizer'):\n \n # Global step is required to compute the decayed learning rate\n global_step = tf.Variable(0, name='global_step', trainable=False)\n\n # Drop learning rate by half every 20 epochs\n decay_step = 8800\n \n # Apply exponential decay to the learning rate\n learning_rate = tf.train.exponential_decay(1e-3, global_step, decay_step, 0.5, staircase=True)\n\n # Add scalar summary for learning rate (for Tensorboard)\n tf.summary.scalar('learning_rate', learning_rate)\n\n # Construct a new Adam optimizer\n optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss, global_step=global_step)", "_____no_output_____" ] ], [ [ "## Evaluation Metric\n\n- To evaluate the performance of our model, we calculate the average accuracy across all samples\n- To explain further:\n - `correct_prediction` is defined as the number of correctly classified sequences (if one digit is wrong in sequence, the whole 
prediction is incorrect)\n - First, we check if the batch of predictions & class labels are equal (`tf.equal` produces boolean tensors), and then cast it to a float data type (False -> 0.0, True -> 1.0)\n - **A correctly classified image would have a tensor of all 1's**\n - **An incorrectly classified image would have be a tensor with at least one 0.**\n - Then, we get the minimum value in each of the boolean tensors by using the `tf.reduce_min` function.\n - **A correctly classified image would have an output tensor minimum of 1**\n - **An incorrectly classified image would have an output tensor minimum of 0**\n - Finally, we calculate the mean of the `correct_prediction` tensor by summing up the 1's & 0's produced in `correct_prediction` & dividing by the total number of samples & multiplying by 100 to get a percentage.", "_____no_output_____" ] ], [ [ "with tf.name_scope(\"accuracy\"):\n \n # Correct prediction is when predicted class equals the true class of each image\n correct_prediction = tf.reduce_min(tf.cast(tf.equal(y_pred_cls, y_), tf.float32), 1)\n\n # Cast predictions to float and calculate the mean\n accuracy = tf.reduce_mean(correct_prediction) * 100.0\n \n # Add scalar summary for accuracy tensor for accuracy\n tf.summary.scalar('accuracy', accuracy)", "_____no_output_____" ] ], [ [ "# Tensorflow Run\n\n- Once TF graph has been created, we have to create a TensorFlow session which is used to execute the graph.\n- Then, to save time from training again, we will initialize checkpoints ", "_____no_output_____" ] ], [ [ "# Launch the graph in a session\nsession = tf.Session()", "_____no_output_____" ] ], [ [ "## Checkpoints & TensorBoard\n\n- Use checkpoints to save variables of the neural network\n - Can be reloaded quickly without having to train the network again\n - Create a Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph.\n- Write the TensorBoard summaries using the `tf.summary.FileWriter` class. 
\n - Create two separate log files one for the training set and one for the validation set", "_____no_output_____" ] ], [ [ "SVHN_VERSION = 'svhn_v16'\nCHECKPOINT_PATH = os.path.join('checkpoints', SVHN_VERSION)\nLOG_DIR = os.path.join('logs', SVHN_VERSION)\n\nif not os.path.exists(CHECKPOINT_PATH):\n os.makedirs(CHECKPOINT_PATH)", "_____no_output_____" ], [ "saver = tf.train.Saver()\n\n# Let's try to find the latest checkpoint - if any\ntry:\n print(\"Attempting to restore from last checkpoint ...\")\n \n # Finds the filename of latest saved checkpoint file\n last_chk_path = tf.train.latest_checkpoint(checkpoint_dir=CHECKPOINT_PATH)\n\n # Try and load the data in the checkpoint.\n saver.restore(session, save_path=last_chk_path)\n print(\"Restored checkpoint from:\", last_chk_path)\n \n# If the code above runs into an exception - initialize all the variables\nexcept:\n print(\"Failed to restore checkpoint - initializing variables\")\n session.run(tf.global_variables_initializer())\n", "Restoring last checkpoint ...\nFailed to restore checkpoint - initializing variables\n" ], [ "# Merge all the summaries and write them out to LOG_DIR\nmerged = tf.summary.merge_all()\n\n# Pass the graph to the writer to display it in TensorBoard\ntrain_writer = tf.summary.FileWriter(LOG_DIR + '/train', session.graph)\nvalidation_writer = tf.summary.FileWriter(LOG_DIR + '/validation')", "_____no_output_____" ] ], [ [ "# Model Training\n\n- Initialize neural network hyperparameters\n - batch size\n - dropout\n - epoch size\n - display size\n- Things I learned:\n - `tf.nn.dropout` uses a keep_prob, so this is the percentage of connections to keep, not drop out rate (unlike `tf.layers.dropout` and the Keras `Dropout` layer.\n- Reasoning for dropout layers\n - Increase regularization to decrease overfitting of deep neural network\n - Since we have batch normalization in convolutional layers, which has regularization effects, we will drop less of the convolutional layers\n - Keep dropout at fully connected layers higher (since no regularization occurs there)", "_____no_output_____" ] ], [ [ "# Epoch size\nepochs = 20\n\n# Display step to print out & for writing to tensorboard (put together for now)\ndisplay_step = 200\n\n# Batch size\nbatch_size = 512\n\n# Dropout applied between the conv layers (This is keep probability not dropout rate)\nd1 = 0.50\n\n# Dropout applied to the fully-connected layers\nd2 = 0.50", "_____no_output_____" ] ], [ [ "### Batch Data Generator\n\n- In each iteration, a new batch of data is selected from the training set\n - In our case a new batch of 512 images will be selected (comes out to around 440 iterations to run through whole dataset)\n- `feed_dict` function (Used in training, evaluating, and making predictions)\n - will return a batch of the data based on batch_size & step\n - **AND IMPORTANT**, at every epoch (when step is 0) it will shuffle the data\n- `evaluate_batch` function (Used in training)\n - will split the validation and test set into batches and calculates the accuracy over all of the batches.\n- `get_batch` function (Not used)\n - is not directly used, but could also be used further to save on memory (Python `generator`)", "_____no_output_____" ] ], [ [ "def get_batch(X, y, batch_size=512):\n for i in np.arange(0, y.shape[0], batch_size):\n end = min(X.shape[0], i + batch_size)\n yield(X[i:end],y[i:end])\n\n\ndef feed_dict(X, y, step=0):\n \"\"\" Make a TensorFlow feed_dict mapping data onto the placeholders\n \"\"\"\n \n# # Shuffle the data after every epoch so the 
algorithm doesn't seem the same order every epoch\n if step == 0:\n print('Shuffling data after each epoch....')\n X, y = shuffle(X, y)\n \n # Calculate the offset\n offset = (step * batch_size) % (y.shape[0] - batch_size)\n \n # Get the batch data\n xs, ys = X[offset:offset + batch_size], y[offset:offset+batch_size]\n \n return {x: xs, y_: ys, p_keep_1: d1, p_keep_2: d2}\n\n\ndef evaluate_batch(test, batch_size):\n \"\"\" Evaluate in batches \n \"\"\"\n # Store the cumulative accuracy over all batches\n cumulative_accuracy = 0.0\n \n # Get the number of images\n n_images = y_test.shape[0] if test else y_val.shape[0]\n \n # Numer of batches needed to evaluate all images\n n_batches = n_images // batch_size + 1\n \n # Iterate over all the batches\n for i in range(n_batches):\n \n # Calculate the offset\n offset = i * batch_size\n \n if test:\n # Get the batch from the test set\n xs, ys = X_test[offset:offset+batch_size], y_test[offset:offset+batch_size]\n else:\n # Get batch from the validation set\n xs, ys = X_val[offset:offset+batch_size], y_val[offset:offset+batch_size]\n \n cumulative_accuracy += session.run(accuracy,\n {x: xs, y_: ys, p_keep_1: 1.0, p_keep_2: 1.0})\n \n # Return the average accuracy over all batches\n return cumulative_accuracy / float(n_batches)", "_____no_output_____" ], [ "def train_model(max_epochs, display_step, batch_size, total_train):\n \n # To calculate total time of training\n start_time = time.time()\n \n # Calculate the number of steps based on total number of training data & batch size:\n num_of_iterations = total_train // batch_size + 1\n \n for epoch in range(max_epochs):\n print('=====================================================')\n print('Epoch', epoch+1 , ':')\n print('=====================================================')\n \n for step in range(num_of_iterations):\n train_acc, summary, i, _ = session.run([accuracy, merged, global_step, optimizer], \n feed_dict=feed_dict(X_train, y_train, step))\n train_writer.add_summary(summary, i)\n\n if(step % display_step == 0) or (step == num_of_iterations - 1):\n \n # Display the minibatch accuracy\n print(\"Minibatch accuracy at step %d: %.4f\" % (i, train_acc))\n \n val_summary, val_acc= session.run([merged, accuracy], \n feed_dict={x: X_val, y_: y_val, p_keep_1: 1.0, p_keep_2: 1.0})\n\n print(\"Validation accuracy at step %d: %.4f\" % (i, val_acc))\n validation_writer.add_summary(val_summary, i)\n\n \n # Calculate the accuracy on the validation-set after epoch is done\n valid_acc = evaluate_batch(test=False, batch_size=batch_size)\n print(\"Validation accuracy after epoch %i: %.4f\" % (epoch+1, valid_acc))\n \n print ('Epoch', epoch+1, 'completed out of', max_epochs)\n\n \n # Calculate net time\n time_diff = time.time() - start_time\n print(\"Time usage: \" + str(timedelta(seconds=int(round(time_diff)))))\n \n # Calculate and display the testset accuracy\n test_acc = evaluate_batch(test=True, batch_size=batch_size)\n print(\"Test accuracy: %.4f\" % test_acc)\n \n # Save all the variables of the TensorFlow graph\n saver.save(session, save_path= os.path.join(CHECKPOINT_PATH, SVHN_VERSION), global_step=global_step)\n print('Model saved in file: {}'.format(os.path.join(CHECKPOINT_PATH, SVHN_VERSION)))\n \n print('=====================================================')\n print()\n \n print()\n print(\"Final test accuracy: %.4f\" % test_acc)\n # Calculate total time\n total_time = time.time() - start_time\n print(\"Total time usage: \" + str(timedelta(seconds=int(round(total_time)))))", "_____no_output_____" 
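The sequence-level accuracy used in this notebook can be hard to read off the TensorFlow ops alone. The tiny NumPy sketch below (not part of the training code; the digit arrays are made-up examples, with 10 as the blank label used in preprocessing) shows why the `reduce_min` step makes a single wrong digit count the whole prediction as incorrect.

```python
# Tiny NumPy illustration of the sequence-accuracy metric described earlier.
import numpy as np

y_true = np.array([[1, 9, 10, 10, 10],     # "19"
                   [2, 5, 3, 10, 10]])     # "253"
y_hat  = np.array([[1, 9, 10, 10, 10],     # all digits correct
                   [2, 5, 8, 10, 10]])     # one digit wrong -> whole sequence wrong

per_digit = (y_hat == y_true).astype(np.float32)   # mirrors tf.cast(tf.equal(...))
per_image = per_digit.min(axis=1)                  # mirrors tf.reduce_min(..., 1)
accuracy = per_image.mean() * 100.0                # mirrors tf.reduce_mean(...) * 100
print(accuracy)                                    # 50.0
```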
], [ "train_model(max_epochs=20, display_step=200, batch_size=512, total_train=train_count)", "=====================================================\nEpoch 1 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 1: 1.5800\nValidation accuracy at step 201: 6.7800\nValidation accuracy at step 401: 49.5400\nValidation accuracy at step 441: 54.2800\nValidation accuracy after epoch 1: 54.1108\nEpoch 1 completed out of 20\nTime usage: 0:00:59\nTest accuracy: 59.6927\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 2 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 442: 54.2700\nValidation accuracy at step 642: 69.7600\nValidation accuracy at step 842: 75.9700\nValidation accuracy at step 882: 75.7100\nValidation accuracy after epoch 2: 75.7623\nEpoch 2 completed out of 20\nTime usage: 0:01:56\nTest accuracy: 79.9403\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 3 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 883: 75.5900\nValidation accuracy at step 1083: 79.8500\nValidation accuracy at step 1283: 81.1100\nValidation accuracy at step 1323: 81.5700\nValidation accuracy after epoch 3: 81.6659\nEpoch 3 completed out of 20\nTime usage: 0:02:53\nTest accuracy: 84.1772\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 4 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 1324: 81.4600\nValidation accuracy at step 1524: 82.6500\nValidation accuracy at step 1724: 83.8500\nValidation accuracy at step 1764: 84.3700\nValidation accuracy after epoch 4: 84.4261\nEpoch 4 completed out of 20\nTime usage: 0:03:50\nTest accuracy: 87.5222\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 5 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 1765: 84.1200\nValidation accuracy at step 1965: 84.4700\nValidation accuracy at step 2165: 84.4200\nValidation accuracy at step 2205: 85.1400\nValidation accuracy after epoch 5: 85.2384\nEpoch 5 completed out of 20\nTime usage: 0:04:47\nTest accuracy: 88.2926\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 6 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 2206: 85.4700\nValidation accuracy at step 2406: 85.8900\nValidation accuracy at step 2606: 85.3600\nValidation accuracy at step 2646: 86.4300\nValidation accuracy after epoch 6: 86.5154\nEpoch 6 completed out of 20\nTime usage: 0:05:44\nTest accuracy: 88.6163\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 7 
:\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 2647: 86.1700\nValidation accuracy at step 2847: 86.5500\nValidation accuracy at step 3047: 87.0400\nValidation accuracy at step 3087: 86.6400\nValidation accuracy after epoch 7: 86.7636\nEpoch 7 completed out of 20\nTime usage: 0:06:41\nTest accuracy: 89.7377\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 8 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 3088: 86.7100\nValidation accuracy at step 3288: 86.7900\nValidation accuracy at step 3488: 87.2000\nValidation accuracy at step 3528: 87.0000\nValidation accuracy after epoch 8: 87.0893\nEpoch 8 completed out of 20\nTime usage: 0:07:38\nTest accuracy: 90.0839\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 9 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 3529: 87.4000\nValidation accuracy at step 3729: 87.7000\nValidation accuracy at step 3929: 86.9800\nValidation accuracy at step 3969: 86.8900\nValidation accuracy after epoch 9: 86.9818\nEpoch 9 completed out of 20\nTime usage: 0:08:35\nTest accuracy: 90.0641\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 10 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 3970: 87.3600\nValidation accuracy at step 4170: 87.7500\nValidation accuracy at step 4370: 87.3600\nValidation accuracy at step 4410: 88.0100\nValidation accuracy after epoch 10: 88.1187\nEpoch 10 completed out of 20\nTime usage: 0:09:32\nTest accuracy: 90.7634\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 11 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 4411: 87.4700\nValidation accuracy at step 4611: 88.0900\nValidation accuracy at step 4811: 88.4600\nValidation accuracy at step 4851: 87.8700\nValidation accuracy after epoch 11: 87.9820\nEpoch 11 completed out of 20\nTime usage: 0:10:29\nTest accuracy: 90.4896\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 12 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 4852: 88.0800\nValidation accuracy at step 5052: 87.8700\nValidation accuracy at step 5252: 87.8800\nValidation accuracy at step 5292: 87.9300\nValidation accuracy after epoch 12: 88.0233\nEpoch 12 completed out of 20\nTime usage: 0:11:26\nTest accuracy: 90.6610\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 13 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 5293: 88.4200\nValidation accuracy at step 5493: 
88.2100\nValidation accuracy at step 5693: 88.4100\nValidation accuracy at step 5733: 88.0200\nValidation accuracy after epoch 13: 88.1198\nEpoch 13 completed out of 20\nTime usage: 0:12:23\nTest accuracy: 91.2032\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 14 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 5734: 88.2000\nValidation accuracy at step 5934: 87.9600\nValidation accuracy at step 6134: 88.4400\nValidation accuracy at step 6174: 89.3200\nValidation accuracy after epoch 14: 89.4152\nEpoch 14 completed out of 20\nTime usage: 0:13:20\nTest accuracy: 92.0507\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 15 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 6175: 88.9400\nValidation accuracy at step 6375: 88.7900\nValidation accuracy at step 6575: 88.7400\nValidation accuracy at step 6615: 89.0400\nValidation accuracy after epoch 15: 89.1504\nEpoch 15 completed out of 20\nTime usage: 0:14:17\nTest accuracy: 92.2986\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 16 :\n=====================================================\nShuffling data after each epoch....\n" ], [ "train_model(max_epochs=20, display_step=200, batch_size=512, total_train=train_count)", "=====================================================\nEpoch 1 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 8821: 89.7300\nValidation accuracy at step 9021: 90.4600\nValidation accuracy at step 9221: 90.2300\nValidation accuracy at step 9261: 90.7000\nValidation accuracy after epoch 1: 90.7887\nEpoch 1 completed out of 20\nTime usage: 0:00:56\nTest accuracy: 93.3401\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 2 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 9262: 90.7400\nValidation accuracy at step 9462: 90.3300\nValidation accuracy at step 9662: 90.2300\nValidation accuracy at step 9702: 90.4100\nValidation accuracy after epoch 2: 90.4797\nEpoch 2 completed out of 20\nTime usage: 0:01:53\nTest accuracy: 93.6795\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 3 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 9703: 90.5100\nValidation accuracy at step 9903: 90.3700\nValidation accuracy at step 10103: 90.4200\nValidation accuracy at step 10143: 90.4100\nValidation accuracy after epoch 3: 90.4797\nEpoch 3 completed out of 20\nTime usage: 0:02:50\nTest accuracy: 93.7000\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 4 :\n=====================================================\nShuffling data after each epoch....\nValidation 
accuracy at step 10144: 90.4600\nValidation accuracy at step 10344: 90.7100\nValidation accuracy at step 10544: 90.7300\nValidation accuracy at step 10584: 90.6400\nValidation accuracy after epoch 4: 90.6957\nEpoch 4 completed out of 20\nTime usage: 0:03:47\nTest accuracy: 93.3489\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 5 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 10585: 90.7900\nValidation accuracy at step 10785: 90.3400\nValidation accuracy at step 10985: 90.8500\nValidation accuracy at step 11025: 90.5800\nValidation accuracy after epoch 5: 90.6543\nEpoch 5 completed out of 20\nTime usage: 0:04:44\nTest accuracy: 93.2561\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 6 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 11026: 90.6600\nValidation accuracy at step 11226: 90.2300\nValidation accuracy at step 11426: 90.4800\nValidation accuracy at step 11466: 90.5500\nValidation accuracy after epoch 6: 90.6078\nEpoch 6 completed out of 20\nTime usage: 0:05:42\nTest accuracy: 93.3463\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 7 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 11467: 90.6700\nValidation accuracy at step 11667: 90.6900\nValidation accuracy at step 11867: 90.5000\nValidation accuracy at step 11907: 90.5400\nValidation accuracy after epoch 7: 90.6239\nEpoch 7 completed out of 20\nTime usage: 0:06:39\nTest accuracy: 93.4815\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 8 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 11908: 90.6800\nValidation accuracy at step 12108: 89.8600\nValidation accuracy at step 12308: 90.7200\nValidation accuracy at step 12348: 90.9200\nValidation accuracy after epoch 8: 90.9863\nEpoch 8 completed out of 20\nTime usage: 0:07:36\nTest accuracy: 93.5040\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 9 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 12349: 90.9900\nValidation accuracy at step 12549: 90.5100\nValidation accuracy at step 12749: 90.8200\nValidation accuracy at step 12789: 90.4700\nValidation accuracy after epoch 9: 90.5555\nEpoch 9 completed out of 20\nTime usage: 0:08:33\nTest accuracy: 93.0533\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 10 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 12790: 90.6300\nValidation accuracy at step 12990: 90.5600\nValidation accuracy at step 13190: 90.6300\nValidation accuracy at step 13230: 90.7200\nValidation 
accuracy after epoch 10: 90.8082\nEpoch 10 completed out of 20\nTime usage: 0:09:30\nTest accuracy: 93.4815\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 11 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 13231: 90.7200\nValidation accuracy at step 13431: 89.8100\nValidation accuracy at step 13631: 90.6000\nValidation accuracy at step 13671: 90.8200\nValidation accuracy after epoch 11: 90.9059\nEpoch 11 completed out of 20\nTime usage: 0:10:27\nTest accuracy: 93.4426\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 12 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 13672: 90.8500\nValidation accuracy at step 13872: 90.4700\nValidation accuracy at step 14072: 90.4200\nValidation accuracy at step 14112: 90.5900\nValidation accuracy after epoch 12: 90.6813\nEpoch 12 completed out of 20\nTime usage: 0:11:24\nTest accuracy: 93.5484\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 13 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 14113: 90.8400\nValidation accuracy at step 14313: 91.0100\nValidation accuracy at step 14513: 90.4700\nValidation accuracy at step 14553: 90.6100\nValidation accuracy after epoch 13: 90.6577\nEpoch 13 completed out of 20\nTime usage: 0:12:21\nTest accuracy: 93.7369\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 14 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 14554: 90.6900\nValidation accuracy at step 14754: 90.8200\nValidation accuracy at step 14954: 90.5300\nValidation accuracy at step 14994: 90.5400\nValidation accuracy after epoch 14: 90.6152\nEpoch 14 completed out of 20\nTime usage: 0:13:18\nTest accuracy: 93.4364\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 15 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 14995: 90.8000\nValidation accuracy at step 15195: 90.5600\nValidation accuracy at step 15395: 90.3800\nValidation accuracy at step 15435: 90.6300\nValidation accuracy after epoch 15: 90.7204\nEpoch 15 completed out of 20\nTime usage: 0:14:15\nTest accuracy: 93.6147\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 16 :\n=====================================================\nShuffling data after each epoch....\n" ], [ "train_model(max_epochs=20, display_step=200, batch_size=512, total_train=train_count)", "=====================================================\nEpoch 1 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 17641: 91.2000\nValidation accuracy at step 17841: 
91.2100\nValidation accuracy at step 18041: 91.0300\nValidation accuracy at step 18081: 91.1300\nValidation accuracy after epoch 1: 91.2173\nEpoch 1 completed out of 20\nTime usage: 0:00:56\nTest accuracy: 93.8052\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 2 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 18082: 91.2800\nValidation accuracy at step 18282: 91.2900\nValidation accuracy at step 18482: 91.4900\nValidation accuracy at step 18522: 91.2000\nValidation accuracy after epoch 2: 91.2856\nEpoch 2 completed out of 20\nTime usage: 0:01:53\nTest accuracy: 93.8878\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 3 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 18523: 91.2200\nValidation accuracy at step 18723: 91.2200\nValidation accuracy at step 18923: 91.2100\nValidation accuracy at step 18963: 90.9600\nValidation accuracy after epoch 3: 91.0340\nEpoch 3 completed out of 20\nTime usage: 0:02:50\nTest accuracy: 94.1036\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 4 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 18964: 90.9900\nValidation accuracy at step 19164: 90.9600\nValidation accuracy at step 19364: 91.1800\nValidation accuracy at step 19404: 91.3800\nValidation accuracy after epoch 4: 91.4355\nEpoch 4 completed out of 20\nTime usage: 0:03:47\nTest accuracy: 94.1105\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 5 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 19405: 91.3600\nValidation accuracy at step 19605: 91.1500\nValidation accuracy at step 19805: 91.3600\nValidation accuracy at step 19845: 91.1700\nValidation accuracy after epoch 5: 91.2477\nEpoch 5 completed out of 20\nTime usage: 0:04:45\nTest accuracy: 93.9486\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 6 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 19846: 91.1900\nValidation accuracy at step 20046: 91.1900\nValidation accuracy at step 20246: 91.0900\nValidation accuracy at step 20286: 91.0200\nValidation accuracy after epoch 6: 91.1098\nEpoch 6 completed out of 20\nTime usage: 0:05:42\nTest accuracy: 94.0325\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 7 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 20287: 91.2300\nValidation accuracy at step 20487: 91.1700\nValidation accuracy at step 20687: 91.3400\nValidation accuracy at step 20727: 91.1300\nValidation accuracy after epoch 7: 91.1914\nEpoch 7 completed out of 20\nTime 
usage: 0:06:39\nTest accuracy: 94.0975\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 8 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 20728: 91.2500\nValidation accuracy at step 20928: 91.1600\nValidation accuracy at step 21128: 91.1100\nValidation accuracy at step 21168: 91.3000\nValidation accuracy after epoch 8: 91.3833\nEpoch 8 completed out of 20\nTime usage: 0:07:36\nTest accuracy: 94.2559\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 9 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 21169: 91.3100\nValidation accuracy at step 21369: 90.9200\nValidation accuracy at step 21569: 91.0500\nValidation accuracy at step 21609: 91.2100\nValidation accuracy after epoch 9: 91.2695\nEpoch 9 completed out of 20\nTime usage: 0:08:33\nTest accuracy: 94.0531\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 10 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 21610: 91.2700\nValidation accuracy at step 21810: 91.1000\nValidation accuracy at step 22010: 91.0300\nValidation accuracy at step 22050: 90.9100\nValidation accuracy after epoch 10: 90.9766\nEpoch 10 completed out of 20\nTime usage: 0:09:30\nTest accuracy: 93.8946\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 11 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 22051: 91.0200\nValidation accuracy at step 22251: 91.2700\nValidation accuracy at step 22451: 90.5600\nValidation accuracy at step 22491: 91.2600\nValidation accuracy after epoch 11: 91.3528\nEpoch 11 completed out of 20\nTime usage: 0:10:27\nTest accuracy: 94.1105\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 12 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 22492: 91.2600\nValidation accuracy at step 22692: 91.3200\nValidation accuracy at step 22892: 91.2100\nValidation accuracy at step 22932: 91.0100\nValidation accuracy after epoch 12: 91.0915\nEpoch 12 completed out of 20\nTime usage: 0:11:24\nTest accuracy: 93.8502\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 13 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 22933: 91.0500\nValidation accuracy at step 23133: 91.2400\nValidation accuracy at step 23333: 91.1400\nValidation accuracy at step 23373: 91.1500\nValidation accuracy after epoch 13: 91.2540\nEpoch 13 completed out of 20\nTime usage: 0:12:22\nTest accuracy: 93.8653\nModel saved in file: 
checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 14 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 23374: 91.1800\nValidation accuracy at step 23574: 91.5500\nValidation accuracy at step 23774: 91.3400\nValidation accuracy at step 23814: 91.5100\nValidation accuracy after epoch 14: 91.5625\nEpoch 14 completed out of 20\nTime usage: 0:13:19\nTest accuracy: 94.1180\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 15 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 23815: 91.4700\nValidation accuracy at step 24015: 91.0400\nValidation accuracy at step 24215: 91.1500\nValidation accuracy at step 24255: 91.1600\nValidation accuracy after epoch 15: 91.2466\nEpoch 15 completed out of 20\nTime usage: 0:14:16\nTest accuracy: 94.1193\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 16 :\n=====================================================\nShuffling data after each epoch....\n" ], [ "train_model(max_epochs=20, display_step=200, batch_size=512, total_train=train_count)", "=====================================================\nEpoch 1 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 26461: 91.3400\nValidation accuracy at step 26661: 91.4700\nValidation accuracy at step 26861: 91.4900\nValidation accuracy at step 26901: 91.4500\nValidation accuracy after epoch 1: 91.5298\nEpoch 1 completed out of 20\nTime usage: 0:00:56\nTest accuracy: 94.3064\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 2 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 26902: 91.4900\nValidation accuracy at step 27102: 91.6700\nValidation accuracy at step 27302: 91.3800\nValidation accuracy at step 27342: 91.6100\nValidation accuracy after epoch 2: 91.6774\nEpoch 2 completed out of 20\nTime usage: 0:01:53\nTest accuracy: 94.3064\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 3 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 27343: 91.6800\nValidation accuracy at step 27543: 91.4600\nValidation accuracy at step 27743: 91.3900\nValidation accuracy at step 27783: 91.2800\nValidation accuracy after epoch 3: 91.3551\nEpoch 3 completed out of 20\nTime usage: 0:02:50\nTest accuracy: 94.3898\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 4 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 27784: 91.3400\nValidation accuracy at step 27984: 91.6200\nValidation accuracy at step 28184: 91.4700\nValidation accuracy at step 28224: 91.5300\nValidation accuracy after epoch 4: 91.6165\nEpoch 4 
completed out of 20\nTime usage: 0:03:48\nTest accuracy: 94.1480\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 5 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 28225: 91.5400\nValidation accuracy at step 28425: 91.6000\nValidation accuracy at step 28625: 91.3500\nValidation accuracy at step 28665: 91.5700\nValidation accuracy after epoch 5: 91.6383\nEpoch 5 completed out of 20\nTime usage: 0:04:45\nTest accuracy: 94.2402\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 6 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 28666: 91.5800\nValidation accuracy at step 28866: 91.6600\nValidation accuracy at step 29066: 91.4400\nValidation accuracy at step 29106: 91.3700\nValidation accuracy after epoch 6: 91.4430\nEpoch 6 completed out of 20\nTime usage: 0:05:42\nTest accuracy: 94.2996\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 7 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 29107: 91.3900\nValidation accuracy at step 29307: 91.7400\nValidation accuracy at step 29507: 91.4000\nValidation accuracy at step 29547: 91.5300\nValidation accuracy after epoch 7: 91.5820\nEpoch 7 completed out of 20\nTime usage: 0:06:39\nTest accuracy: 94.2751\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 8 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 29548: 91.5800\nValidation accuracy at step 29748: 91.4800\nValidation accuracy at step 29948: 91.4600\nValidation accuracy at step 29988: 91.4500\nValidation accuracy after epoch 8: 91.5384\nEpoch 8 completed out of 20\nTime usage: 0:07:36\nTest accuracy: 94.1863\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 9 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 29989: 91.4000\nValidation accuracy at step 30189: 91.8000\nValidation accuracy at step 30389: 91.4800\nValidation accuracy at step 30429: 91.6400\nValidation accuracy after epoch 9: 91.7153\nEpoch 9 completed out of 20\nTime usage: 0:08:33\nTest accuracy: 94.3584\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 10 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 30430: 91.7300\nValidation accuracy at step 30630: 91.4100\nValidation accuracy at step 30830: 91.6200\nValidation accuracy at step 30870: 91.6300\nValidation accuracy after epoch 10: 91.7055\nEpoch 10 completed out of 20\nTime usage: 0:09:30\nTest accuracy: 94.3283\nModel saved in file: 
checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 11 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 30871: 91.6500\nValidation accuracy at step 31071: 91.4000\nValidation accuracy at step 31271: 91.7000\nValidation accuracy at step 31311: 91.5900\nValidation accuracy after epoch 11: 91.6751\nEpoch 11 completed out of 20\nTime usage: 0:10:28\nTest accuracy: 94.3058\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 12 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 31312: 91.6200\nValidation accuracy at step 31512: 91.4000\nValidation accuracy at step 31712: 91.4600\nValidation accuracy at step 31752: 91.6000\nValidation accuracy after epoch 12: 91.6849\nEpoch 12 completed out of 20\nTime usage: 0:11:25\nTest accuracy: 94.4342\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 13 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 31753: 91.6200\nValidation accuracy at step 31953: 91.5600\nValidation accuracy at step 32153: 91.3700\nValidation accuracy at step 32193: 91.5700\nValidation accuracy after epoch 13: 91.6211\nEpoch 13 completed out of 20\nTime usage: 0:12:22\nTest accuracy: 94.2614\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 14 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 32194: 91.6100\nValidation accuracy at step 32394: 91.6300\nValidation accuracy at step 32594: 91.4900\nValidation accuracy at step 32634: 91.6000\nValidation accuracy after epoch 14: 91.6935\nEpoch 14 completed out of 20\nTime usage: 0:13:19\nTest accuracy: 94.2777\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 15 :\n=====================================================\nShuffling data after each epoch....\nValidation accuracy at step 32635: 91.5900\nValidation accuracy at step 32835: 91.5700\nValidation accuracy at step 33035: 91.5400\nValidation accuracy at step 33075: 91.5900\nValidation accuracy after epoch 15: 91.6492\nEpoch 15 completed out of 20\nTime usage: 0:14:16\nTest accuracy: 94.2095\nModel saved in file: checkpoints\\svhn_v16\\svhn_v16\n=====================================================\n\n=====================================================\nEpoch 16 :\n=====================================================\nShuffling data after each epoch....\n" ] ], [ [ "## Summary of training\n\n- I ran the model through 80 epochs (or 80 complete run throughs of the whole training dataset), which only took about an hour (thanks to my NVIDIA 1070 GPU as well as CUDA & cuDNN)\n- The test accuracy seems to grow more slowly and then fluctuate in later epochs \n - Initially, it grows fast, then starts to increase very slowly\n- **Could run for more epochs to potentially have mariginal gains**", "_____no_output_____" ], [ "# Model 
Evaluation", "_____no_output_____" ], [ "## Test set performance\n\n- Predict test set & calculate accuracy based on predictions\n- Remember to disable dropout during predictions & evaluations\n - Dropout is only applied in training\n- Two types of evaluation:\n - Original Image evaluation (Multiple digits)\n - Individual Digit evaluation (Single digit)", "_____no_output_____" ], [ "### Original Image Evaluation (Multiple digits)", "_____no_output_____" ] ], [ [ "# Feed the test set with dropout disabled\ntest_feed_dict={\n x: X_test,\n y_: y_test,\n p_keep_1: 1.0,\n p_keep_2: 1.0\n}\n\n# Generate predictions for the testset\ntest_pred = session.run(y_pred_cls, feed_dict=test_feed_dict)\n\n# Display the predictions\ntest_pred", "_____no_output_____" ], [ "test_pred.shape", "_____no_output_____" ], [ "def calculate_accuracy(a, b):\n \"\"\" Calculating the % of similar rows in two numpy arrays \n \"\"\"\n # Compare two numpy arrays row-wise\n correct = np.sum(np.all(a == b, axis=1))\n return 100.0 * (correct / float(a.shape[0]))\n\n\ntotal_acc = calculate_accuracy(test_pred, y_test)\n\nprint('Multiple Digit Test Accuracy: %.3f %%' % total_acc)", "Multiple Digit Test Accuracy: 94.399 %\n" ] ], [ [ "### Individual Digit Evaluation\n\n- Calculate the model's accuracy on each individual digit only counting the non missing values\n- Plot confusion matrix to show performance of each class\n - Imbalanced classes, so using a confusion matrix is a better indicator of true performance\n - Calculate F1 score, which can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst score at 0. \n - F1 = 2 * (precision * recall) / (precision + recall)\n - In the multi-class case, this is the weighted average of the F1 score of each class.\n - F1 score is a good metric for measuring performances of models with imbalanced classes\n - Produce a classification report for each individual digit\n - Precision, recall, and F1 scores", "_____no_output_____" ] ], [ [ "from sklearn.metrics import accuracy_score\n\n# Find the position of the non missing labels\nnon_zero = np.where(y_test.flatten() != 10)\n\n# Calculate the accuracy on the individual digit level\nind_acc = accuracy_score(test_pred.flatten()[non_zero], y_test.flatten()[non_zero]) * 100.0\n\nprint('Individual Digit Test Accuracy: %.3f %%' % ind_acc)", "Individual Digit Test Accuracy: 96.620 %\n" ], [ "from sklearn.metrics import confusion_matrix\n\n# Set the figure size\nplt.figure(figsize=(12, 8))\n\n# Calculate the confusion matrix\ncm = confusion_matrix(y_test.flatten()[non_zero], test_pred.flatten()[non_zero])\n\n# Normalize the confusion matrix\ncm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] * 100.0\n\n# Visualize the confusion matrix\nsns.heatmap(cm, annot=True, cmap='Reds', fmt='.1f', square=True);", "c:\\users\\kevin\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\ipykernel_launcher.py:10: RuntimeWarning: invalid value encountered in true_divide\n # Remove the CWD from sys.path while we load stuff.\n" ], [ "from sklearn.metrics import f1_score\n\nf1 = f1_score(test_pred.flatten()[non_zero], y_test.flatten()[non_zero], average='weighted')\n\nprint('Individual Digit F1 Score: %.4f' % f1)", "Individual Digit F1 Score: 0.9643\n" ], [ "from sklearn.metrics import classification_report\n\ncls_report = classification_report(test_pred.flatten()[non_zero], y_test.flatten()[non_zero], digits=4)\nprint(cls_report)", " precision recall f1-score support\n\n 0 0.9713 0.9565 
0.9639 1771\n 1 0.9761 0.9768 0.9765 5095\n 2 0.9716 0.9834 0.9774 4099\n 3 0.9622 0.9523 0.9572 2912\n 4 0.9723 0.9847 0.9785 2491\n 5 0.9581 0.9699 0.9639 2355\n 6 0.9626 0.9572 0.9599 1988\n 7 0.9658 0.9731 0.9694 2004\n 8 0.9422 0.9583 0.9502 1632\n 9 0.9549 0.9603 0.9576 1586\n 10 0.0000 0.0000 0.0000 99\n\navg / total 0.9625 0.9662 0.9643 26032\n\n" ] ], [ [ "### Analysis\n\n- It seems like the digits `8` and `3` seem to have the lowest f1-score\n - This maybe due to them being very similiar to each other and other digits at different angles and thus easily misclassified", "_____no_output_____" ], [ "## Number of digits per image \n\n- Let's see if the number of digits per image had an impact on the model's performance", "_____no_output_____" ] ], [ [ "# For every possible sequence length\nfor num_digits in range(1, 6):\n \n # Find all images with that given sequence length (returns an boolean array of True & Falses)\n images = np.where((y_test != 10).sum(1) == num_digits)\n \n # Calculate the accuracy on those images\n acc = calculate_accuracy(test_pred[images], y_test[images])\n \n print(\"%d digit accuracy %.3f %%\" % (num_digits, acc))", "1 digit accuracy 94.805 %\n2 digit accuracy 95.010 %\n3 digit accuracy 92.023 %\n4 digit accuracy 87.671 %\n5 digit accuracy 0.000 %\n" ] ], [ [ "### Analysis\n\n- Results:\n - The number of digits per image had a huge impact on model's performance. 0% of the 5 digit images were classified, and as the number of digits per image increased, there seems to be a decrease in accuracy.\n - This could be because of imbalanced classes, where there were not a lot of training data with 5 digits per image as shown in the exploration/preprocessing notebook [svhn-preprocessing.ipynb]().\n - With more 5 digit training images, we could potentially increase performance\n - This is also a downfall of the designed neural network, seeing that it is not very scale invariant. \n - 5 digit images seem to be very tightly packed together vs. 1 digit images where the digit is scaled differently", "_____no_output_____" ], [ "## Visualization of images\n\n- Correctly classified images\n- Incorrectly classified images", "_____no_output_____" ] ], [ [ "# Find the correctly classified examples\ncorrect = np.array([(a==b).all() for a, b in zip(test_pred, y_test)])\n\n# Select the incorrectly classified examples\nimages = X_test[correct]\ncls_true = y_test[correct]\ncls_pred = test_pred[correct]\n\n# Plot the correctly- classified examples\nplot_images(images, 6, 6, cls_true, cls_pred);", "_____no_output_____" ], [ "# Find the incorrectly classified examples\nincorrect = np.invert(correct)\n\n# Select the incorrectly classified examples\nimages = X_test[incorrect]\ncls_true = y_test[incorrect]\ncls_pred = test_pred[incorrect]\n\n# Plot the mis-classified examples\nplot_images(images, 6, 6, cls_true, cls_pred);", "_____no_output_____" ], [ "# End the Tensorflow Session\nsession.close()", "_____no_output_____" ] ] ]
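A minimal, self-contained sketch of the evaluation logic described in the cells above: sequence-level accuracy versus per-digit accuracy with the blank class `10` masked out, plus a weighted F1 score. The tiny `y_true`/`y_pred` arrays below are illustrative assumptions, not data from the notebook:

```python
import numpy as np
from sklearn.metrics import f1_score

# Toy labels: 5 digit slots per image, class 10 = "no digit" padding (as in the notebook).
y_true = np.array([[1, 9, 4, 10, 10],
                   [2, 4, 6, 10, 10]])
y_pred = np.array([[1, 9, 4, 10, 10],
                   [2, 4, 8, 10, 10]])  # one digit misread in the second image

# Sequence (whole-image) accuracy: every slot in a row must match.
seq_acc = 100.0 * np.mean(np.all(y_pred == y_true, axis=1))

# Digit-level accuracy / weighted F1: flatten and drop the padding class first.
mask = y_true.flatten() != 10
digit_acc = 100.0 * np.mean(y_pred.flatten()[mask] == y_true.flatten()[mask])
digit_f1 = f1_score(y_true.flatten()[mask], y_pred.flatten()[mask],
                    average="weighted", zero_division=0)

print(f"sequence accuracy: {seq_acc:.1f} %  digit accuracy: {digit_acc:.1f} %  F1: {digit_f1:.4f}")
```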
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ] ]
e7104f2ac7cb3ac137ee0254bc7e95bfe5400d9b
177,041
ipynb
Jupyter Notebook
python_cheat_sheet.ipynb
joaopcanario/python-cheatsheet
da65794caeafa165ecc2be59c4523b13eeb11619
[ "MIT" ]
null
null
null
python_cheat_sheet.ipynb
joaopcanario/python-cheatsheet
da65794caeafa165ecc2be59c4523b13eeb11619
[ "MIT" ]
null
null
null
python_cheat_sheet.ipynb
joaopcanario/python-cheatsheet
da65794caeafa165ecc2be59c4523b13eeb11619
[ "MIT" ]
1
2021-01-18T21:49:20.000Z
2021-01-18T21:49:20.000Z
24.643792
509
0.50621
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e7105bba270a752241fd6d55987efc33085d3401
22,210
ipynb
Jupyter Notebook
probings/rapids_on_colab_probing.ipynb
JohnTigue/colab_utils
84635a548afe0a583cf86c6e2b6de3819730461d
[ "MIT" ]
null
null
null
probings/rapids_on_colab_probing.ipynb
JohnTigue/colab_utils
84635a548afe0a583cf86c6e2b6de3819730461d
[ "MIT" ]
null
null
null
probings/rapids_on_colab_probing.ipynb
JohnTigue/colab_utils
84635a548afe0a583cf86c6e2b6de3819730461d
[ "MIT" ]
1
2021-08-19T01:34:20.000Z
2021-08-19T01:34:20.000Z
41.90566
221
0.444484
[ [ [ "# JFT metanotes\n2019-06-21, starting from https://rapids.ai/ clicked through to [Go to example notebook](https://colab.research.google.com/drive/1XTKHiIcvyL5nuldx0HSL_dUa8yopzy_Y#forceEdit=true&offline=true&sandboxMode=true)\n\n\n# Environment Sanity Check #\n\nClick the _Runtime_ dropdown at the top of the page, then _Change Runtime Type_ and confirm the instance type is _GPU_.\n\nCheck the output of `!nvidia-smi` to make sure you've been allocated a Tesla T4.", "_____no_output_____" ] ], [ [ "!nvidia-smi", "Fri Jun 21 08:22:44 2019 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 418.67 Driver Version: 410.79 CUDA Version: 10.0 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n|===============================+======================+======================|\n| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |\n| N/A 48C P8 16W / 70W | 0MiB / 15079MiB | 0% Default |\n+-------------------------------+----------------------+----------------------+\n \n+-----------------------------------------------------------------------------+\n| Processes: GPU Memory |\n| GPU PID Type Process name Usage |\n|=============================================================================|\n| No running processes found |\n+-----------------------------------------------------------------------------+\n" ], [ "import pynvml\n\npynvml.nvmlInit()\nhandle = pynvml.nvmlDeviceGetHandleByIndex(0)\ndevice_name = pynvml.nvmlDeviceGetName(handle)\n\nif device_name != b'Tesla T4':\n raise Exception(\"\"\"\n Unfortunately this instance does not have a T4 GPU.\n \n Please make sure you've configured Colab to request a GPU instance type.\n \n Sometimes Colab allocates a Tesla K80 instead of a T4. Resetting the instance.\n\n If you get a K80 GPU, try Runtime -> Reset all runtimes...\n \"\"\")\nelse:\n print('Woo! You got the right kind of GPU!')", "Woo! You got the right kind of GPU!\n" ] ], [ [ "#Setup:\n\n1. Install most recent Miniconda release compatible with Google Colab's Python install (3.6.7)\n2. Install RAPIDS libraries\n3. Set necessary environment variables\n4. Copy RAPIDS .so files into current working directory, a workaround for conda/colab interactions", "_____no_output_____" ] ], [ [ "%%time \n\n# intall miniconda\n!wget -c https://repo.continuum.io/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh\n!chmod +x Miniconda3-4.5.4-Linux-x86_64.sh\n!bash ./Miniconda3-4.5.4-Linux-x86_64.sh -b -f -p /usr/local\n\n# install RAPIDS packages\n!conda install -q -y --prefix /usr/local -c conda-forge \\\n -c rapidsai-nightly/label/cuda10.0 -c nvidia/label/cuda10.0 \\\n cudf cuml\n\n# set environment vars\nimport sys, os, shutil\nsys.path.append('/usr/local/lib/python3.6/site-packages/')\nos.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'\nos.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cuda/nvvm/libdevice/'\n\n# copy .so files to current working dir\nfor fn in ['libcudf.so', 'librmm.so']:\n shutil.copy('/usr/local/lib/'+fn, os.getcwd())", "--2019-06-21 08:26:41-- https://repo.continuum.io/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh\nResolving repo.continuum.io (repo.continuum.io)... 104.18.200.79, 104.18.201.79, 2606:4700::6812:c84f, ...\nConnecting to repo.continuum.io (repo.continuum.io)|104.18.200.79|:443... connected.\nHTTP request sent, awaiting response... 
416 Requested Range Not Satisfiable\n\n The file is already fully retrieved; nothing to do.\n\nPREFIX=/usr/local\ninstalling: python-3.6.5-hc3d631a_2 ...\nPython 3.6.5 :: Anaconda, Inc.\ninstalling: ca-certificates-2018.03.07-0 ...\ninstalling: conda-env-2.6.0-h36134e3_1 ...\ninstalling: libgcc-ng-7.2.0-hdf63c60_3 ...\ninstalling: libstdcxx-ng-7.2.0-hdf63c60_3 ...\ninstalling: libffi-3.2.1-hd88cf55_4 ...\ninstalling: ncurses-6.1-hf484d3e_0 ...\ninstalling: openssl-1.0.2o-h20670df_0 ...\ninstalling: tk-8.6.7-hc745277_3 ...\ninstalling: xz-5.2.4-h14c3975_4 ...\ninstalling: yaml-0.1.7-had09818_2 ...\ninstalling: zlib-1.2.11-ha838bed_2 ...\ninstalling: libedit-3.1.20170329-h6b74fdf_2 ...\ninstalling: readline-7.0-ha6073c6_4 ...\ninstalling: sqlite-3.23.1-he433501_0 ...\ninstalling: asn1crypto-0.24.0-py36_0 ...\ninstalling: certifi-2018.4.16-py36_0 ...\ninstalling: chardet-3.0.4-py36h0f667ec_1 ...\ninstalling: idna-2.6-py36h82fb2a8_1 ...\ninstalling: pycosat-0.6.3-py36h0a5515d_0 ...\ninstalling: pycparser-2.18-py36hf9f622e_1 ...\ninstalling: pysocks-1.6.8-py36_0 ...\ninstalling: ruamel_yaml-0.15.37-py36h14c3975_2 ...\ninstalling: six-1.11.0-py36h372c433_1 ...\ninstalling: cffi-1.11.5-py36h9745a5d_0 ...\ninstalling: setuptools-39.2.0-py36_0 ...\ninstalling: cryptography-2.2.2-py36h14c3975_0 ...\ninstalling: wheel-0.31.1-py36_0 ...\ninstalling: pip-10.0.1-py36_0 ...\ninstalling: pyopenssl-18.0.0-py36_0 ...\ninstalling: urllib3-1.22-py36hbe7ace6_0 ...\ninstalling: requests-2.18.4-py36he2e5f8d_1 ...\ninstalling: conda-4.5.4-py36_0 ...\ninstallation finished.\nWARNING:\n You currently have a PYTHONPATH environment variable set. This may cause\n unexpected behavior when running the Python interpreter in Miniconda3.\n For best results, please verify that your PYTHONPATH only points to\n directories of packages that are compatible with the Python interpreter\n in Miniconda3: /usr/local\nSolving environment: ...working... 
done\n\n## Package Plan ##\n\n environment location: /usr/local\n\n added / updated specs: \n - cudf\n - cuml\n\n\nThe following packages will be downloaded:\n\n package | build\n ---------------------------|-----------------\n cudf-0.8.0a1 | py36_1182 3.2 MB rapidsai-nightly/label/cuda10.0\n libcudf-0.8.0a1 | cuda10.0_1182 16.8 MB rapidsai-nightly/label/cuda10.0\n ------------------------------------------------------------\n Total: 20.0 MB\n\nThe following NEW packages will be INSTALLED:\n\n arrow-cpp: 0.12.1-py36h0e61e49_0 conda-forge \n boost-cpp: 1.68.0-h11c811c_1000 conda-forge \n bzip2: 1.0.6-h14c3975_1002 conda-forge \n cudatoolkit: 10.0.130-0 \n cudf: 0.8.0a1-py36_1182 rapidsai-nightly/label/cuda10.0\n cuml: 0.8.0a-cuda10.0_py36_1456 rapidsai-nightly/label/cuda10.0\n cython: 0.29.10-py36he1b5a44_0 conda-forge \n icu: 58.2-hf484d3e_1000 conda-forge \n libblas: 3.8.0-7_openblas conda-forge \n libcblas: 3.8.0-7_openblas conda-forge \n libcudf: 0.8.0a1-cuda10.0_1182 rapidsai-nightly/label/cuda10.0\n libcuml: 0.8.0a-cuda10.0_1456 rapidsai-nightly/label/cuda10.0\n libcumlmg: 0.0.0.dev0-cuda10.0_373 nvidia/label/cuda10.0 \n libgfortran: 3.0.0-1 conda-forge \n liblapack: 3.8.0-7_openblas conda-forge \n libnvstrings: 0.8.0a-cuda10.0_126 rapidsai-nightly/label/cuda10.0\n libprotobuf: 3.6.1-hdbcaa40_1001 conda-forge \n librmm: 0.8.0a-cuda10.0_40 rapidsai-nightly/label/cuda10.0\n llvmlite: 0.28.0-py36hdbcaa40_0 conda-forge \n numba: 0.43.1-py36hf2d7682_0 conda-forge \n numpy: 1.16.4-py36h95a1406_0 conda-forge \n nvstrings: 0.8.0a-py36_126 rapidsai-nightly/label/cuda10.0\n openblas: 0.3.5-ha44fe06_0 conda-forge \n pandas: 0.24.2-py36hb3f55d8_0 conda-forge \n parquet-cpp: 1.5.1-4 conda-forge \n pyarrow: 0.12.1-py36hbbcf98d_0 conda-forge \n python-dateutil: 2.8.0-py_0 conda-forge \n pytz: 2019.1-py_0 conda-forge \n rmm: 0.8.0a-py36_40 rapidsai-nightly/label/cuda10.0\n thrift-cpp: 0.12.0-h0a07b25_1002 conda-forge \n\nThe following packages will be UPDATED:\n\n ca-certificates: 2018.03.07-0 --> 2019.6.16-hecc5488_0 conda-forge\n certifi: 2018.4.16-py36_0 --> 2019.6.16-py36_0 conda-forge\n conda: 4.5.4-py36_0 --> 4.6.14-py36_0 conda-forge\n cryptography: 2.2.2-py36h14c3975_0 --> 2.7-py36h72c5cf5_0 conda-forge\n libgcc-ng: 7.2.0-hdf63c60_3 --> 9.1.0-hdf63c60_0 \n libstdcxx-ng: 7.2.0-hdf63c60_3 --> 9.1.0-hdf63c60_0 \n openssl: 1.0.2o-h20670df_0 --> 1.1.1b-h14c3975_1 conda-forge\n python: 3.6.5-hc3d631a_2 --> 3.6.7-h381d211_1004 conda-forge\n sqlite: 3.23.1-he433501_0 --> 3.28.0-h8b20d00_0 conda-forge\n tk: 8.6.7-hc745277_3 --> 8.6.9-hed695b0_1002 conda-forge\n\nPreparing transaction: ...working... done\nVerifying transaction: ...working... done\nExecuting transaction: ...working... done\nCPU times: user 433 ms, sys: 143 ms, total: 576 ms\nWall time: 1min 40s\n" ] ], [ [ "# cuDF and cuML Examples #\n\nNow you can run code! 
\n\nWhat follows are basic examples where all processing takes place on the GPU.", "_____no_output_____" ], [ "#[cuDF](https://github.com/rapidsai/cudf)#\n\nLoad a dataset into a GPU memory resident DataFrame and perform a basic calculation.\n\nEverything from CSV parsing to calculating tip percentage and computing a grouped average is done on the GPU.\n\n_Note_: You must import nvstrings and nvcategory before cudf, else you'll get errors.", "_____no_output_____" ] ], [ [ "!pwd\n#!ls -l\n!ls -l sample_data\n", "/content\ntotal 55504\n-r-xr-xr-x 1 root root 1697 Jan 1 2000 anscombe.json\n-rw-r--r-- 1 root root 301141 Jun 18 16:14 california_housing_test.csv\n-rw-r--r-- 1 root root 1706430 Jun 18 16:14 california_housing_train.csv\n-rw-r--r-- 1 root root 18289443 Jun 18 16:14 mnist_test.csv\n-rw-r--r-- 1 root root 36523880 Jun 18 16:14 mnist_train_small.csv\n-r-xr-xr-x 1 root root 930 Jan 1 2000 README.md\n" ], [ "import nvstrings, nvcategory, cudf\nimport io, requests\n\n# download CSV file from GitHub\n#pre-jft: url=\"https://github.com/plotly/datasets/raw/master/tips.csv\"\n#\n# JFT:\n# Seemingly a well known data set:\n# https://support.10xgenomics.com/single-cell-gene-expression/datasets/1.1.0/frozen_pbmc_donor_b\n# 17MB gene/cell matrix (filtered)\n\nimport urllib.request as ureq\nurl = \"http://cf.10xgenomics.com/samples/cell-exp/1.1.0/frozen_pbmc_donor_b/frozen_pbmc_donor_b_filtered_gene_bc_matrices.tar.gz\"\nfname=\"frozen_pbmc_donor_b_filtered_gene_bc_matrices.tar.gz\"\nureq.urlretrieve(url, fname)\n#content = requests.get(url).content.decode('utf-8')\n\nimport tarfile\nif (fname.endswith(\"tar.gz\")):\n tar = tarfile.open(fname, \"r:gz\")\n tar.extractall()\n tar.close()\n", "_____no_output_____" ], [ "#!ls\n!ls filtered_matrices_mex/hg19", "barcodes.tsv genes.tsv matrix.mtx\n" ], [ "# read CSV from memory\ntips_df = cudf.read_csv(io.StringIO(content))\ntips_df['tip_percentage'] = tips_df['tip']/tips_df['total_bill']*100\n\n# display average tip by dining party size\nprint(tips_df.groupby('size').tip_percentage.mean())", "size\n1 21.729201548727808\n2 16.57191917348289\n3 15.215685473711831\n4 14.594900639351334\n5 14.149548965142023\n6 15.622920072028379\nName: tip_percentage, dtype: float64\n" ] ], [ [ "#[cuML](https://github.com/rapidsai/cuml)#\n\nThis snippet loads a \n\nAs above, all calculations are performed on the GPU.", "_____no_output_____" ] ], [ [ "import cuml\n\n# Create and populate a GPU DataFrame\ndf_float = cudf.DataFrame()\ndf_float['0'] = [1.0, 2.0, 5.0]\ndf_float['1'] = [4.0, 2.0, 1.0]\ndf_float['2'] = [4.0, 2.0, 1.0]\n\n# Setup and fit clusters\ndbscan_float = cuml.DBSCAN(eps=1.0, min_samples=1)\ndbscan_float.fit(df_float)\n\nprint(dbscan_float.labels_)", "0 0\n1 1\n2 2\ndtype: int32\n" ] ], [ [ "# Next Steps #\n\nFor an overview of how you can access and work with your own datasets in Colab, check out [this guide](https://towardsdatascience.com/3-ways-to-load-csv-files-into-colab-7c14fcbdcb92).\n\nFor more RAPIDS examples, check out our RAPIDS notebooks repos:\n1. https://github.com/rapidsai/notebooks\n2. https://github.com/rapidsai/notebooks-extended", "_____no_output_____" ] ] ]
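A hedged, CPU-only companion to the cuDF tip example above: the same computation in plain pandas, useful for sanity-checking the GPU result. It assumes pandas and requests are available and uses the tips.csv URL that is referenced (commented out) in the cell above:

```python
import io
import requests
import pandas as pd

# tips.csv URL noted (commented out) in the cuDF cell above
url = "https://github.com/plotly/datasets/raw/master/tips.csv"
content = requests.get(url).content.decode("utf-8")

# read CSV from memory and compute tip percentage, mirroring the cuDF version
tips_df = pd.read_csv(io.StringIO(content))
tips_df["tip_percentage"] = tips_df["tip"] / tips_df["total_bill"] * 100

# display average tip by dining party size
print(tips_df.groupby("size")["tip_percentage"].mean())
```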
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e71060d529f7cf7f18f5f3475d5fb4688f4d1c7c
20,716
ipynb
Jupyter Notebook
Modyfikacja_Dannych/Przygotowanie danych po imporcie_lab.ipynb
MarekKras/Analiza_Dannych_01
11554348ab50736817bd2a96671680bb9a820648
[ "Unlicense" ]
null
null
null
Modyfikacja_Dannych/Przygotowanie danych po imporcie_lab.ipynb
MarekKras/Analiza_Dannych_01
11554348ab50736817bd2a96671680bb9a820648
[ "Unlicense" ]
null
null
null
Modyfikacja_Dannych/Przygotowanie danych po imporcie_lab.ipynb
MarekKras/Analiza_Dannych_01
11554348ab50736817bd2a96671680bb9a820648
[ "Unlicense" ]
null
null
null
29.936416
136
0.407849
[ [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport math as math\nfb = pd.read_csv(\"./Data/mrbean_facebook_statuses_with_nulls.csv\")\nfb.head()", "_____no_output_____" ], [ "fb.info(memory_usage='deep')", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 56 entries, 0 to 55\nData columns (total 15 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 status_id 56 non-null object \n 1 status_message 40 non-null object \n 2 link_name 56 non-null object \n 3 status_type 56 non-null object \n 4 status_link 56 non-null object \n 5 status_published 56 non-null object \n 6 num_reactions 54 non-null float64\n 7 num_comments 55 non-null float64\n 8 num_shares 56 non-null float64\n 9 num_likes 56 non-null int64 \n 10 num_loves 56 non-null int64 \n 11 num_wows 56 non-null int64 \n 12 num_hahas 56 non-null int64 \n 13 num_sads 56 non-null int64 \n 14 num_angrys 56 non-null int64 \ndtypes: float64(3), int64(6), object(6)\nmemory usage: 34.4 KB\n" ], [ "fb['status_type'].unique()", "_____no_output_____" ], [ "fb = pd.read_csv(\"./Data/mrbean_facebook_statuses_with_nulls.csv\",usecols= [\"status_message\",\"status_type\",\"link_name\",\n\"num_reactions\",\"num_shares\",\"num_likes\"])", "_____no_output_____" ], [ "fb.head()", "_____no_output_____" ], [ "fb.info(memory_usage='deep')", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 56 entries, 0 to 55\nData columns (total 6 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 status_message 40 non-null object \n 1 link_name 56 non-null object \n 2 status_type 56 non-null object \n 3 num_reactions 54 non-null float64\n 4 num_shares 56 non-null float64\n 5 num_likes 56 non-null int64 \ndtypes: float64(2), int64(1), object(3)\nmemory usage: 16.1 KB\n" ], [ "len(fb)", "_____no_output_____" ], [ "fb.nunique()", "_____no_output_____" ], [ "fb[\"status_type\"].nunique()", "_____no_output_____" ], [ "fb[\"status_type\"].value_counts()", "_____no_output_____" ], [ "fb[\"status_type\"] = fb[\"status_type\"].astype('category')", "_____no_output_____" ], [ "fb[\"link_name\"].nunique()", "_____no_output_____" ], [ "fb[\"link_name\"].value_counts().head()", "_____no_output_____" ], [ "fb[\"link_name\"] = fb[\"link_name\"].astype('category')\n", "_____no_output_____" ], [ "fb.info(memory_usage='deep')", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 56 entries, 0 to 55\nData columns (total 6 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 status_message 40 non-null object \n 1 link_name 56 non-null category\n 2 status_type 56 non-null category\n 3 num_reactions 54 non-null float64 \n 4 num_shares 56 non-null float64 \n 5 num_likes 56 non-null int64 \ndtypes: category(2), float64(2), int64(1), object(1)\nmemory usage: 12.7 KB\n" ], [ "fb[\"num_reactions\"].fillna(0,inplace = True)\nfb[\"num_shares\"].fillna(0,inplace = True)", "_____no_output_____" ], [ "fb[\"num_reactions\"] = fb[\"num_reactions\"].astype('int')\nfb[\"num_shares\"] = fb[\"num_shares\"].astype('int')\nfb[\"num_likes\"] = fb[\"num_likes\"].astype('int')", "_____no_output_____" ], [ "fb.info(memory_usage='deep')", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 56 entries, 0 to 55\nData columns (total 6 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 status_message 40 non-null object \n 1 link_name 56 non-null category\n 2 status_type 56 non-null category\n 3 num_reactions 56 non-null int32 \n 4 num_shares 56 non-null int32 \n 5 
num_likes 56 non-null int32 \ndtypes: category(2), int32(3), object(1)\nmemory usage: 12.0 KB\n" ] ] ]
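A small generic sketch of the memory-shrinking steps applied to `fb` above (low-cardinality object columns converted to `category`, float columns filled with 0 and cast to integers). The `shrink` helper, its cardinality threshold, and the `demo` frame are illustrative assumptions, not part of the original notebook:

```python
import pandas as pd

def shrink(df: pd.DataFrame, max_unique_ratio: float = 0.5) -> pd.DataFrame:
    """Convert low-cardinality object columns to 'category' and fill/downcast floats."""
    out = df.copy()
    for col in out.columns:
        if out[col].dtype == object and out[col].nunique() / len(out) <= max_unique_ratio:
            out[col] = out[col].astype("category")
        elif pd.api.types.is_float_dtype(out[col]):
            # same fillna(0) + integer cast used for num_reactions/num_shares above
            out[col] = out[col].fillna(0).astype("int32")
    return out

demo = pd.DataFrame(
    {"status_type": ["photo", "video", "photo", "photo", "video", "photo"],
     "num_shares": [10.0, None, 3.0, 7.0, 2.0, 5.0]}
)
print("before:", demo.memory_usage(deep=True).sum(), "bytes")
print("after: ", shrink(demo).memory_usage(deep=True).sum(), "bytes")
```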
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e71064e7c7e79abee02f3d1d9f4aff7ea62016ff
426,551
ipynb
Jupyter Notebook
labs/lab_02/lab_02.ipynb
uspas/2020_optimization_and_ml
68f197705733706dd89728751e8e9651db8b6988
[ "Apache-2.0" ]
5
2022-01-11T20:52:08.000Z
2022-03-31T11:32:41.000Z
labs/lab_02/lab_02.ipynb
uspas/2020_optimization_and_ml
68f197705733706dd89728751e8e9651db8b6988
[ "Apache-2.0" ]
null
null
null
labs/lab_02/lab_02.ipynb
uspas/2020_optimization_and_ml
68f197705733706dd89728751e8e9651db8b6988
[ "Apache-2.0" ]
3
2022-01-21T17:53:23.000Z
2022-02-16T03:24:01.000Z
1,113.710183
391,312
0.948032
[ [ [ "# Lab 02 - Multi-Objective Optimization\n## Tasks\n- Plot travesal of scipy optimization on Rosenbrock function\n- Conduct multi-objective optimization on AWA photoinjector example", "_____no_output_____" ], [ "# Set up environment", "_____no_output_____" ] ], [ [ "pip install git+https://github.com/uspas/optimization_and_ml --quiet", "Note: you may need to restart the kernel to use updated packages.\n" ], [ "%reset -f\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import minimize\n#matplotlib graphs will be included in your notebook, next to the code:\n%matplotlib inline\n\n#import toy accelerator package\nfrom uspas_ml.accelerator_toy_models import awa_model\nimport torch\n\n#import pygmo\nimport pygmo as pg", "_____no_output_____" ], [ "# implementation of Rosenbrock function - see https://en.wikipedia.org/wiki/Test_functions_for_optimization\ndef rosen(x):\n '''\n x : input point shape (n,dim)\n dim : dimension of input space\n \n example usage\n rosen(np.random.rand(2,2), 2) \n \n '''\n \n #do calculation\n return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)\n \n#plot in 2D\ndef plot_rosen():\n n = 100\n x = np.linspace(-2,2,n)\n y = np.linspace(-1,3,n)\n xx = np.meshgrid(x,y)\n pts = np.vstack([ele.ravel() for ele in xx]).T\n \n f = []\n for pt in pts:\n f += [rosen(pt)]\n\n f = np.array(f)\n fig,ax = plt.subplots()\n c = ax.pcolor(*xx, f.reshape(n,n))\n fig.colorbar(c)\n \n return fig, ax", "_____no_output_____" ], [ "plot_rosen()", "/home/vagrant/.pyenv/versions/py3/lib/python3.7/site-packages/ipykernel_launcher.py:29: MatplotlibDeprecationWarning: shading='flat' when X and Y have the same dimensions as C is deprecated since 3.3. Either specify the corners of the quadrilaterals with X and Y, or pass shading='auto', 'nearest' or 'gouraud', or set rcParams['pcolor.shading']. This will become an error two minor releases later.\n" ] ], [ [ "## Second order optimization methods of black box functions with restarts\n\n<div class=\"alert alert-block alert-info\">\n \n**Task:**\nUse `scipy.optimize.minimize()` implementation of the following methods (Nelder-Mead, L-BFGS-B, Powell) to optimize the 2D Rosenbrock function and plot the trajectory of each method through 2D space.\n\n**Task:**\nWrite a second optimizer function that repeats L-BFGS-B optimization of a 10D Rosenbrock function starting with 10 different random initial points and return the best results. \n \n</div>", "_____no_output_____" ] ], [ [ "#your code here", " fun: 3.7461927208730536e-11\n hess_inv: array([[0.48965874, 0.97921278],\n [0.97921278, 1.96321588]])\n jac: array([-5.71761226e-07, -1.35533544e-06])\n message: 'Optimization terminated successfully.'\n nfev: 72\n nit: 17\n njev: 24\n status: 0\n success: True\n x: array([0.99999388, 0.99998775])\n" ] ], [ [ "## Multi-objective optimization\nHere we will find the pareto front of the AWA photoinjector problem (see below). Input variables are shown in red and output variables are shown in blue. Both the inputs and outputs are normalized to [-1,1]. 
Our goal is to minimize all of the output beam parameters.\n\n![Figure5.png](attachment:Figure5.png)", "_____no_output_____" ] ], [ [ "# get AWA model\nmodel = awa_model.AWAModel()\nprint(model.features)\nprint(model.targets)\n\nx = torch.rand(5,6)\nmodel.predict(x)", "['P0', 'P1', 'G0', 'G1', 'K1', 'K2']\n['rms_x', 'rms_y', 'rms_s', 'emit_x', 'emit_y', 'emit_s', 'dE']\n" ] ], [ [ "## First try multi-objective optimization on a test problem\nTo start with we try a test problem - see description here https://datacrayon.com/posts/search-and-optimisation/practical-evolutionary-algorithms/synthetic-objective-functions-and-zdt1/ and here (page 488) https://ro.ecu.edu.au/cgi/viewcontent.cgi?article=3021&context=ecuworks", "_____no_output_____" ] ], [ [ "zdt = pg.problem(pg.zdt())", "_____no_output_____" ], [ "#do example NSGA-II optimization\n\n# create population\npop = pg.population(zdt, size=20)\n# select algorithm\nalgo = pg.algorithm(pg.nsga2(gen=1))\nalgo.set_verbosity(100)\nprint(algo)\n# run optimization\npop = algo.evolve(pop)\n# extract results\nfits, vectors = pop.get_f(), pop.get_x()", "Algorithm name: NSGA-II: [stochastic]\n\tC++ class name: pagmo::nsga2\n\n\tThread safety: basic\n\nExtra info:\n\tGenerations: 1\n\tCrossover probability: 0.95\n\tDistribution index for crossover: 10\n\tMutation probability: 0.01\n\tDistribution index for mutation: 50\n\tSeed: 2750693800\n\tVerbosity: 100\n" ] ], [ [ "<div class=\"alert alert-block alert-info\">\n \n**Task:**\nPlot the Pareto front and calculate the front hypervolume with the reference point (11,11). See https://esa.github.io/pygmo2/tutorials/moo.html and https://esa.github.io/pygmo2/tutorials/tutorials.html#hypervolumes for utilities.\n \n</div>", "_____no_output_____" ] ], [ [ "#define problem class for pygmo\nclass AWAProblem:\n def __init__(self):\n self.model = awa_model.AWAModel()\n \n def get_nobj(self):\n return 7\n\n def fitness(self, x):\n x = torch.tensor(x).reshape(1,-1).float()\n return self.model.predict(x).detach().numpy().flatten().astype(np.float)\n \n def get_bounds(self):\n return ([-1]*6,[1]*6)", "_____no_output_____" ] ], [ [ "<div class=\"alert alert-block alert-info\">\n \n**Task:**\nUse the above code to find the 7D pareto front of the AWA multi-objective problem. Plot the front projected onto the bunch length vs. horizontal emittance subspace.\n \n</div>", "_____no_output_____" ] ] ]
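One possible sketch (not the official lab solution) for the multi-start task left as `#your code here` above: repeat L-BFGS-B from random initial points on the 10-D Rosenbrock function and keep the best result. The restart bounds of ±2 and the random seed are assumptions chosen for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def rosen(x):
    # same Rosenbrock definition as earlier in the notebook (valid for any dimension)
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

def multistart_lbfgsb(fun, dim=10, n_restarts=10, low=-2.0, high=2.0, seed=0):
    """Run L-BFGS-B from n_restarts random starting points and return the best result."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_restarts):
        x0 = rng.uniform(low, high, size=dim)
        res = minimize(fun, x0, method="L-BFGS-B")
        if best is None or res.fun < best.fun:
            best = res
    return best

best = multistart_lbfgsb(rosen)
print(f"best objective: {best.fun:.3e}")
print("best x:", np.round(best.x, 3))
```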
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7106977531086e6e2afd71976370427afa1b327
7,373
ipynb
Jupyter Notebook
docs/notebooks/041_routing_electical.ipynb
simbilod/gdsfactory
4d76db32674c3edb4d16260e3177ee29ef9ce11d
[ "MIT" ]
null
null
null
docs/notebooks/041_routing_electical.ipynb
simbilod/gdsfactory
4d76db32674c3edb4d16260e3177ee29ef9ce11d
[ "MIT" ]
null
null
null
docs/notebooks/041_routing_electical.ipynb
simbilod/gdsfactory
4d76db32674c3edb4d16260e3177ee29ef9ce11d
[ "MIT" ]
null
null
null
26.810909
106
0.551065
[ [ [ "# Routing electrical\n\nFor routing low speed DC electrical ports you can use sharp corners instead of smooth bends.\n\nYou can also define `port.orientation = None` to ignore the port orientation for low speed DC ports.", "_____no_output_____" ], [ "## Single route functions\n\n### get_route_electrical\n\n\nGet route_electrical `bend = wire_corner` defaults to 90 degrees bend.", "_____no_output_____" ] ], [ [ "import gdsfactory as gf\n\nc = gf.Component(\"pads\")\npt = c << gf.components.pad_array(orientation=270, columns=3)\npb = c << gf.components.pad_array(orientation=90, columns=3)\npt.move((70, 200))\nc", "_____no_output_____" ], [ "c = gf.Component(\"pads_with_routes_with_bends\")\npt = c << gf.components.pad_array(orientation=270, columns=3)\npb = c << gf.components.pad_array(orientation=90, columns=3)\npt.move((70, 200))\nroute = gf.routing.get_route_electrical(\n pt.ports[\"e11\"], pb.ports[\"e11\"], bend=\"bend_euler\", radius=30\n)\nc.add(route.references)\nc", "_____no_output_____" ], [ "c = gf.Component(\"pads_with_routes_with_wire_corners\")\npt = c << gf.components.pad_array(orientation=270, columns=3)\npb = c << gf.components.pad_array(orientation=90, columns=3)\npt.move((70, 200))\nroute = gf.routing.get_route_electrical(\n pt.ports[\"e11\"], pb.ports[\"e11\"], bend=\"wire_corner\"\n)\nc.add(route.references)\nc", "_____no_output_____" ], [ "c = gf.Component(\"pads_with_routes_with_wire_corners_no_orientation\")\npt = c << gf.components.pad_array(orientation=None, columns=3)\npb = c << gf.components.pad_array(orientation=None, columns=3)\npt.move((70, 200))\nroute = gf.routing.get_route_electrical(\n pt.ports[\"e11\"], pb.ports[\"e11\"], bend=\"wire_corner\"\n)\nc.add(route.references)\nc", "_____no_output_____" ] ], [ [ "### route_quad", "_____no_output_____" ] ], [ [ "c = gf.Component(\"pads_route_quad\")\npt = c << gf.components.pad_array(orientation=270, columns=3)\npb = c << gf.components.pad_array(orientation=90, columns=3)\npt.move((100, 200))\nroute = gf.routing.route_quad(pt.ports[\"e11\"], pb.ports[\"e11\"], layer=(49, 0))\nc.add(route)\nc", "_____no_output_____" ] ], [ [ "### get_route_from_steps", "_____no_output_____" ] ], [ [ "c = gf.Component(\"pads_route_from_steps\")\npt = c << gf.components.pad_array(orientation=270, columns=3)\npb = c << gf.components.pad_array(orientation=90, columns=3)\npt.move((100, 200))\nroute = gf.routing.get_route_from_steps(\n pb.ports[\"e11\"],\n pt.ports[\"e11\"],\n steps=[\n {\"y\": 200},\n ],\n cross_section=gf.cross_section.metal3,\n bend=gf.components.wire_corner,\n)\nc.add(route.references)\nc", "_____no_output_____" ], [ "c = gf.Component(\"pads_route_from_steps_None_orientation\")\npt = c << gf.components.pad_array(orientation=None, columns=3)\npb = c << gf.components.pad_array(orientation=None, columns=3)\npt.move((100, 200))\nroute = gf.routing.get_route_from_steps(\n pb.ports[\"e11\"],\n pt.ports[\"e11\"],\n steps=[\n {\"y\": 200},\n ],\n cross_section=gf.cross_section.metal3,\n bend=gf.components.wire_corner,\n)\nc.add(route.references)\nc", "_____no_output_____" ] ], [ [ "## Bundle of routes (get_bundle_electrical)", "_____no_output_____" ] ], [ [ "import gdsfactory as gf\n\nc = gf.Component(\"pads_bundle\")\npt = c << gf.components.pad_array(orientation=270, columns=3)\npb = c << gf.components.pad_array(orientation=90, columns=3)\npt.move((100, 200))\n\nroutes = gf.routing.get_bundle_electrical(\n pb.ports, pt.ports, end_straight_length=60, separation=30\n)\n\nfor route in routes:\n 
c.add(route.references)\nc", "_____no_output_____" ] ], [ [ "## get bundle from steps", "_____no_output_____" ] ], [ [ "c = gf.Component(\"pads_bundle_steps\")\npt = c << gf.components.pad_array(\n gf.partial(gf.components.pad, size=(30, 30)),\n orientation=270,\n columns=3,\n spacing=(50, 0),\n)\npb = c << gf.components.pad_array(orientation=90, columns=3)\npt.move((300, 500))\n\nroutes = gf.routing.get_bundle_from_steps_electrical(\n pb.ports, pt.ports, end_straight_length=60, separation=30, steps=[{\"dy\": 100}]\n)\n\nfor route in routes:\n c.add(route.references)\n\nc", "_____no_output_____" ] ] ]
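A minimal variation on the bundle example above, kept strictly to calls already shown in this notebook (only the pad-array `columns` count and the component name differ), offered as a sketch for experimenting with `separation` and `end_straight_length`:

```python
import gdsfactory as gf

c = gf.Component("pads_bundle_two_columns")
pt = c << gf.components.pad_array(orientation=270, columns=2)
pb = c << gf.components.pad_array(orientation=90, columns=2)
pt.move((100, 200))

# same electrical bundle call as above, just on a smaller pad array
routes = gf.routing.get_bundle_electrical(
    pb.ports, pt.ports, end_straight_length=60, separation=30
)
for route in routes:
    c.add(route.references)
c
```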
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7108b187db53d18569ce9d4096f299346bb34bc
91,640
ipynb
Jupyter Notebook
Entity Explorer - Linux Host.ipynb
rvoak-MS/Azure-Sentinel-Notebooks
c3481af258a7f30e842e9828d2c789b9b078b72d
[ "MIT" ]
null
null
null
Entity Explorer - Linux Host.ipynb
rvoak-MS/Azure-Sentinel-Notebooks
c3481af258a7f30e842e9828d2c789b9b078b72d
[ "MIT" ]
null
null
null
Entity Explorer - Linux Host.ipynb
rvoak-MS/Azure-Sentinel-Notebooks
c3481af258a7f30e842e9828d2c789b9b078b72d
[ "MIT" ]
null
null
null
48.900747
4,550
0.585279
[ [ [ "# Entity Explorer - Linux Host\n <details>\n <summary>&nbsp;<u>Details...</u></summary>\n\n **Notebook Version:** 1.1<br>\n **Python Version:** Python 3.6 (including Python 3.6 - AzureML)<br>\n **Required Packages**: kqlmagic, msticpy, pandas, pandas_bokeh, numpy, matplotlib, networkx, seaborn, datetime, ipywidgets, ipython, dnspython, ipwhois, folium, maxminddb_geolite2<br>\n **Platforms Supported**:\n - Azure Notebooks Free Compute\n - Azure Notebooks DSVM\n - OS Independent\n\n **Data Sources Required**:\n - Log Analytics/Azure Sentinel - Syslog, Secuirty Alerts, Auditd, Azure Network Analytics.\n - (Optional) - AlienVault OTX (requires account and API key)\n </details>\n\nThis Notebooks brings together a series of tools and techniques to enable threat hunting within the context of a singular Linux host. The notebook utilizes a range of data sources to achieve this but in order to support the widest possible range of scenarios this Notebook prioritizes using common Syslog data. If there is detailed auditd data available for a host you may wish to edit the Notebook to rely primarily on this dataset, as it currently stands auditd is used when available to provide insight not otherwise available via Syslog.", "_____no_output_____" ], [ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><ul class=\"toc-item\"><li><span><a href=\"#Notebook-initialization\" data-toc-modified-id=\"Notebook-initialization-0.1\"><span class=\"toc-item-num\">0.1&nbsp;&nbsp;</span>Notebook initialization</a></span></li><li><span><a href=\"#Get-WorkspaceId-and-Authenticate-to-Log-Analytics\" data-toc-modified-id=\"Get-WorkspaceId-and-Authenticate-to-Log-Analytics-0.2\"><span class=\"toc-item-num\">0.2&nbsp;&nbsp;</span>Get WorkspaceId and Authenticate to Log Analytics</a></span></li></ul></li><li><span><a href=\"#Set-Hunting-Time-Frame\" data-toc-modified-id=\"Set-Hunting-Time-Frame-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Set Hunting Time Frame</a></span><ul class=\"toc-item\"><li><span><a href=\"#Select-Host-to-Investigate\" data-toc-modified-id=\"Select-Host-to-Investigate-1.1\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Select Host to Investigate</a></span></li></ul></li><li><span><a href=\"#Host-Summary\" data-toc-modified-id=\"Host-Summary-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Host Summary</a></span><ul class=\"toc-item\"><li><span><a href=\"#Host-Alerts\" data-toc-modified-id=\"Host-Alerts-2.1\"><span class=\"toc-item-num\">2.1&nbsp;&nbsp;</span>Host Alerts</a></span></li></ul></li><li><span><a href=\"#Re-scope-Hunting-Time-Frame\" data-toc-modified-id=\"Re-scope-Hunting-Time-Frame-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Re-scope Hunting Time Frame</a></span></li><li><span><a href=\"#How-to-use-this-Notebook\" data-toc-modified-id=\"How-to-use-this-Notebook-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span>How to use this Notebook</a></span></li><li><span><a href=\"#Host-Logon-Events\" data-toc-modified-id=\"Host-Logon-Events-5\"><span class=\"toc-item-num\">5&nbsp;&nbsp;</span>Host Logon Events</a></span><ul class=\"toc-item\"><li><span><a href=\"#Logon-Sessions\" data-toc-modified-id=\"Logon-Sessions-5.1\"><span class=\"toc-item-num\">5.1&nbsp;&nbsp;</span>Logon Sessions</a></span><ul class=\"toc-item\"><li><span><a href=\"#Session-Details\" data-toc-modified-id=\"Session-Details-5.1.1\"><span class=\"toc-item-num\">5.1.1&nbsp;&nbsp;</span>Session Details</a></span></li><li><span><a 
href=\"#Raw-data-from-user-session\" data-toc-modified-id=\"Raw-data-from-user-session-5.1.2\"><span class=\"toc-item-num\">5.1.2&nbsp;&nbsp;</span>Raw data from user session</a></span></li></ul></li><li><span><a href=\"#Process-Tree-from-session\" data-toc-modified-id=\"Process-Tree-from-session-5.2\"><span class=\"toc-item-num\">5.2&nbsp;&nbsp;</span>Process Tree from session</a></span></li><li><span><a href=\"#Sudo-Session-Investigation\" data-toc-modified-id=\"Sudo-Session-Investigation-5.3\"><span class=\"toc-item-num\">5.3&nbsp;&nbsp;</span>Sudo Session Investigation</a></span></li></ul></li><li><span><a href=\"#User-Activity\" data-toc-modified-id=\"User-Activity-6\"><span class=\"toc-item-num\">6&nbsp;&nbsp;</span>User Activity</a></span></li><li><span><a href=\"#Application-Activity\" data-toc-modified-id=\"Application-Activity-7\"><span class=\"toc-item-num\">7&nbsp;&nbsp;</span>Application Activity</a></span><ul class=\"toc-item\"><li><span><a href=\"#Display-process-tree\" data-toc-modified-id=\"Display-process-tree-7.1\"><span class=\"toc-item-num\">7.1&nbsp;&nbsp;</span>Display process tree</a></span></li><li><span><a href=\"#Application-Logs-with-associated-Threat-Intelligence\" data-toc-modified-id=\"Application-Logs-with-associated-Threat-Intelligence-7.2\"><span class=\"toc-item-num\">7.2&nbsp;&nbsp;</span>Application Logs with associated Threat Intelligence</a></span></li></ul></li><li><span><a href=\"#Network-Activity\" data-toc-modified-id=\"Network-Activity-8\"><span class=\"toc-item-num\">8&nbsp;&nbsp;</span>Network Activity</a></span><ul class=\"toc-item\"><li><span><a href=\"#Choose-ASNs/IPs-to-Check-for-Threat-Intel-Reports\" data-toc-modified-id=\"Choose-ASNs/IPs-to-Check-for-Threat-Intel-Reports-8.1\"><span class=\"toc-item-num\">8.1&nbsp;&nbsp;</span>Choose ASNs/IPs to Check for Threat Intel Reports</a></span></li></ul></li><li><span><a href=\"#Configuration\" data-toc-modified-id=\"Configuration-9\"><span class=\"toc-item-num\">9&nbsp;&nbsp;</span>Configuration</a></span><ul class=\"toc-item\"><li><span><a href=\"#msticpyconfig.yaml-configuration-File\" data-toc-modified-id=\"msticpyconfig.yaml-configuration-File-9.1\"><span class=\"toc-item-num\">9.1&nbsp;&nbsp;</span><code>msticpyconfig.yaml</code> configuration File</a></span></li></ul></li></ul></div>", "_____no_output_____" ], [ "# Hunting Hypothesis: \nOur broad initial hunting hypothesis is that a particular Linux host in our environment has been compromised, we will need to hunt from a range of different positions to validate or disprove this hypothesis.\n", "_____no_output_____" ], [ "---\n### Notebook initialization\nThe next cell:\n- Checks for the correct Python version\n- Checks versions and optionally installs required packages\n- Imports the required packages into the notebook\n- Sets a number of configuration options.\n\nThis should complete without errors. 
If you encounter errors or warnings look at the following two notebooks:\n- [TroubleShootingNotebooks](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/TroubleShootingNotebooks.ipynb)\n- [ConfiguringNotebookEnvironment](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/ConfiguringNotebookEnvironment.ipynb)\n\nIf you are running in the Azure Sentinel Notebooks environment (Azure Notebooks or Azure ML) you can run live versions of these notebooks:\n- [Run TroubleShootingNotebooks](./TroubleShootingNotebooks.ipynb)\n- [Run ConfiguringNotebookEnvironment](./ConfiguringNotebookEnvironment.ipynb)\n\nYou may also need to do some additional configuration to successfully use functions such as Threat Intelligence service lookup and Geo IP lookup. \nThere are more details about this in the `ConfiguringNotebookEnvironment` notebook and in these documents:\n- [msticpy configuration](https://msticpy.readthedocs.io/en/latest/getting_started/msticpyconfig.html)\n- [Threat intelligence provider configuration](https://msticpy.readthedocs.io/en/latest/data_acquisition/TIProviders.html#configuration-file)", "_____no_output_____" ] ], [ [ "from pathlib import Path\nimport os\nimport sys\nimport warnings\nfrom IPython.display import display, HTML, Markdown\n\nREQ_PYTHON_VER=(3, 6)\nREQ_MSTICPY_VER=(0, 6, 0)\n\ndisplay(HTML(\"<h3>Starting Notebook setup...</h3>\"))\nif Path(\"./utils/nb_check.py\").is_file():\n from utils.nb_check import check_python_ver, check_mp_ver\n\n check_python_ver(min_py_ver=REQ_PYTHON_VER)\n try:\n check_mp_ver(min_msticpy_ver=REQ_MSTICPY_VER)\n except ImportError:\n !pip install --upgrade msticpy\n if \"msticpy\" in sys.modules:\n importlib.reload(sys.modules[\"msticpy\"])\n else:\n import msticpy\n check_mp_ver(REQ_MSTICPY_VER)\n \n\n# If not using Azure Notebooks, install msticpy with\n# !pip install msticpy\nfrom msticpy.nbtools import nbinit\nextra_imports = [\n \"msticpy.nbtools, observationlist\",\n \"msticpy.nbtools.foliummap, get_map_center\",\n \"msticpy.common.exceptions, MsticpyException\",\n \"msticpy.sectools.syslog_utils, create_host_record\",\n \"msticpy.sectools.syslog_utils, cluster_syslog_logons_df\",\n \"msticpy.sectools.syslog_utils, risky_sudo_sessions\",\n \"msticpy.sectools.ip_utils, convert_to_ip_entities\",\n \"msticpy.sectools, auditdextract\",\n \"msticpy.sectools.cmd_line, risky_cmd_line\",\n \"pyvis.network, Network\",\n \"re\",\n \"math, pi\",\n \"ipwhois, IPWhois\",\n \"bokeh.plotting, show\",\n \"bokeh.plotting, Row\",\n \"bokeh.models, ColumnDataSource\",\n \"bokeh.models, FactorRange\",\n \"bokeh.transform, factor_cmap\",\n \"bokeh.transform, cumsum\",\n \"bokeh.palettes, viridis\",\n \"dns, reversename\",\n \"dns, resolver\",\n \"ipaddress, ip_address\",\n \"functools, lru_cache\",\n \"datetime,,dt\"\n]\nadditional_packages = [\n \"oauthlib\", \"pyvis\", \"python-whois\"\n]\nnbinit.init_notebook(\n namespace=globals(),\n additional_packages=additional_packages,\n extra_imports=extra_imports,\n);\n\nWIDGET_DEFAULTS = {\n \"layout\": widgets.Layout(width=\"95%\"),\n \"style\": {\"description_width\": \"initial\"},\n}\nfrom bokeh.plotting import figure", "_____no_output_____" ] ], [ [ "### Get WorkspaceId and Authenticate to Log Analytics\n <details>\n <summary> <u>Details...</u></summary>\nIf you are using user/device authentication, run the following cell. \n- Click the 'Copy code to clipboard and authenticate' button.\n- This will pop up an Azure Active Directory authentication dialog (in a new tab or browser window). 
The device code will have been copied to the clipboard. \n- Select the text box and paste (Ctrl-V/Cmd-V) the copied value. \n- You should then be redirected to a user authentication page where you should authenticate with a user account that has permission to query your Log Analytics workspace.\n\nUse the following syntax if you are authenticating using an Azure Active Directory AppId and Secret:\n```\n%kql loganalytics://tenant(aad_tenant).workspace(WORKSPACE_ID).clientid(client_id).clientsecret(client_secret)\n```\ninstead of\n```\n%kql loganalytics://code().workspace(WORKSPACE_ID)\n```\n\nNote: you may occasionally see a JavaScript error displayed at the end of the authentication - you can safely ignore this.<br>\nOn successful authentication you should see a ```popup schema``` button.\nTo find your Workspace Id go to [Log Analytics](https://ms.portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.OperationalInsights%2Fworkspaces). Look at the workspace properties to find the ID.\n </details>", "_____no_output_____" ] ], [ [ "#See if we have an Azure Sentinel Workspace defined in our config file, if not let the user specify Workspace and Tenant IDs\nfrom msticpy.nbtools.wsconfig import WorkspaceConfig\nws_config = WorkspaceConfig()\ntry:\n ws_id = ws_config['workspace_id']\n ten_id = ws_config['tenant_id']\n md(\"Workspace details collected from config file\")\n config = True\nexcept:\n md('Please go to your Log Analytics workspace, copy the workspace ID'\n ' and/or tenant Id and paste here to enable connection to the workspace and querying of it..<br> ')\n ws_id = nbwidgets.GetEnvironmentKey(env_var='WORKSPACE_ID',\n prompt='Please enter your Log Analytics Workspace Id:', auto_display=True)\n ten_id = nbwidgets.GetEnvironmentKey(env_var='TENANT_ID',\n prompt='Please enter your Log Analytics Tenant Id:', auto_display=True)\n config = False\n", "_____no_output_____" ], [ "# Establish a query provider for Azure Sentinel and connect to it\nif config is False:\n ws_id = ws_id.value\n ten_id = ten_id.value\nqry_prov = QueryProvider('LogAnalytics')\nqry_prov.connect(connection_str=ws_config.code_connect_str)", "_____no_output_____" ] ], [ [ "## Set Hunting Time Frame\nTo begin the hunt we need to et the time frame in which you wish to test your compromised host hunting hypothesis within. Use the widget below to select your start and end time for the hunt. ", "_____no_output_____" ] ], [ [ "query_times = nbwidgets.QueryTime(units='day',\n max_before=14, max_after=1, before=1)\nquery_times.display()", "_____no_output_____" ] ], [ [ "### Select Host to Investigate\nSelect the host you want to test your hunting hypothesis against, only hosts with Syslog data within the time frame you specified are available. If the host you wish to select is not present try adjusting your time frame.", "_____no_output_____" ] ], [ [ "#Get a list of hosts with syslog data in our hunting timegframe to provide easy selection\nsyslog_query = f\"\"\"Syslog | where TimeGenerated between (datetime({query_times.start}) .. 
datetime({query_times.end})) | summarize by Computer\"\"\"\nmd(\"Collecting avaliable host details...\")\nhosts_list = qry_prov._query_provider.query(query=syslog_query)\nif isinstance(hosts_list, pd.DataFrame) and not hosts_list.empty:\n hosts = hosts_list[\"Computer\"].unique().tolist()\n host_text = nbwidgets.SelectItem(description='Select host to investigate: ', \n item_list=hosts, width='75%', auto_display=True)\nelse:\n display(md(\"There are no hosts with syslog data in this time period to investigate\"))", "_____no_output_____" ] ], [ [ "## Host Summary\nBelow is a overview of the selected host based on available data sources.", "_____no_output_____" ] ], [ [ "hostname=host_text.value\naz_net_df = None\n# Collect data on the host\nall_syslog_query = f\"Syslog | where TimeGenerated between (datetime({query_times.start}) .. datetime({query_times.end})) | where Computer =~ '{hostname}'\"\"\"\nall_syslog_data = qry_prov.exec_query(all_syslog_query)\nif isinstance(all_syslog_data, pd.DataFrame) and not all_syslog_data.empty:\n heartbeat_query = f\"\"\"Heartbeat | where TimeGenerated >= datetime({query_times.start}) | where TimeGenerated <= datetime({query_times.end})| where Computer == '{hostname}' | top 1 by TimeGenerated desc nulls last\"\"\"\n if \"AzureNetworkAnalytics_CL\" in qry_prov.schema:\n aznet_query = f\"\"\"AzureNetworkAnalytics_CL | where TimeGenerated >= datetime({query_times.start}) | where TimeGenerated <= datetime({query_times.end}) | where VirtualMachine_s has '{hostname}' | where ResourceType == 'NetworkInterface' | top 1 by TimeGenerated desc | project PrivateIPAddresses = PrivateIPAddresses_s, PublicIPAddresses = PublicIPAddresses_s\"\"\"\n print(\"Getting network data...\")\n az_net_df = qry_prov.exec_query(query=aznet_query)\n print(\"Getting host data...\")\n host_hb = qry_prov.exec_query(query=heartbeat_query)\n\n # Create host entity record, with Azure network data if any is avaliable\n if az_net_df is not None and isinstance(az_net_df, pd.DataFrame) and not az_net_df.empty:\n host_entity = create_host_record(syslog_df=all_syslog_data, heartbeat_df=host_hb, az_net_df=az_net_df)\n else:\n host_entity = create_host_record(syslog_df=all_syslog_data, heartbeat_df=host_hb)\n\n md(\n \"<b>Host Details</b><br>\"\n f\"<b>Hostname</b>: {host_entity.computer}<br>\"\n f\"<b>OS</b>: {host_entity.OSType} {host_entity.OSName}<br>\"\n f\"<b>IP Address</b>: {host_entity.IPAddress.Address}<br>\"\n f\"<b>Location</b>: {host_entity.IPAddress.Location.CountryName}<br>\"\n f\"<b>Installed Applications</b>: {host_entity.Applications}<br>\"\n )\nelse:\n md_warn(\"No Syslog data found, check hostname and timeframe.\")\n md(\"The data query may be timing out, consider reducing the timeframe size.\")", "_____no_output_____" ] ], [ [ "### Host Alerts & Bookmarks\nThis section provides an overview of any security alerts or Hunting Bookmarks in Azure Sentinel related to this host, this will help scope and guide our hunt.", "_____no_output_____" ] ], [ [ "related_alerts = qry_prov.SecurityAlert.list_related_alerts(\n query_times, host_name=hostname)\nrealted_bookmarks = qry_prov.AzureSentinel.list_bookmarks_for_entity(query_times, entity_id=hostname)\nif isinstance(related_alerts, pd.DataFrame) and not related_alerts.empty:\n host_alert_items = (related_alerts[['AlertName', 'TimeGenerated']]\n .groupby('AlertName').TimeGenerated.agg('count').to_dict())\n\n def print_related_alerts(alertDict, entityType, entityName):\n if len(alertDict) > 0:\n md(f\"Found {len(alertDict)} different alert 
types related to this {entityType} (\\'{entityName}\\')\")\n for (k, v) in alertDict.items():\n md(f\"- {k}, Count of alerts: {v}\")\n else:\n md(f\"No alerts for {entityType} entity \\'{entityName}\\'\")\n\n print_related_alerts(host_alert_items, 'host', host_entity.HostName)\n nbdisplay.display_timeline(\n data=related_alerts, source_columns=[\"AlertName\"], title=\"Host alerts over time\", height=300, color=\"red\")\nelse:\n md('No related alerts found.')\n \nif isinstance(realted_bookmarks, pd.DataFrame) and not realted_bookmarks.empty:\n nbdisplay.display_timeline(data=realted_bookmarks, source_columns=[\"BookmarkName\"], height=200, color=\"orange\", title=\"Host bookmarks over time\",)\nelse:\n md('No related bookmarks found.')", "_____no_output_____" ], [ "rel_alert_select = None\n\ndef show_full_alert(selected_alert):\n global security_alert, alert_ip_entities\n security_alert = SecurityAlert(\n rel_alert_select.selected_alert)\n nbdisplay.display_alert(security_alert, show_entities=True)\n\n# Show selected alert when selected\nif isinstance(related_alerts, pd.DataFrame) and not related_alerts.empty:\n related_alerts['CompromisedEntity'] = related_alerts['Computer']\n md('### Click on alert to view details.')\n rel_alert_select = nbwidgets.SelectAlert(alerts=related_alerts,\n action=show_full_alert)\n rel_alert_select.display()\nelse:\n md('No related alerts found.')", "_____no_output_____" ] ], [ [ "## Re-scope Hunting Time Frame\nBased on the security alerts for this host we can choose to re-scope our hunting time frame.", "_____no_output_____" ] ], [ [ "if rel_alert_select is None or rel_alert_select.selected_alert is None:\n start = query_times.start\nelse:\n start = rel_alert_select.selected_alert['TimeGenerated']\n\n# Set new investigation time windows based on the selected alert\ninvest_times = nbwidgets.QueryTime(\n units='day', max_before=24, max_after=12, before=1, after=1, origin_time=start)\ninvest_times.display()", "_____no_output_____" ] ], [ [ "## How to use this Notebook\nWhilst this notebook is linear in layout it doesn't need to be linear in usage. We have selected our host to investigate and set an initial hunting time-frame to work within. We can now start to test more specific hunting hypothesis with the aim of validating our broader initial hunting hypothesis. To do this we can start by looking at:\n- <a>Host Logon Events</a>\n- <a>User Activity</a>\n- <a>Application Activity</a>\n- <a>Network Activity</a>\n\nYou can choose to start below with a hunt in host logon events or choose to jump to one of the other sections listed above. The order in which you choose to run each of these major sections doesn't matter, they are each self contained. You may also choose to rerun sections based on your findings from running other sections.", "_____no_output_____" ], [ "This notebook uses external threat intelligence sources to enrich data. 
The next cell loads the TILookup class.\n> **Note**: to use TILookup you will need configuration settings in your msticpyconfig.yaml\n> <br>see [TIProviders documenation](https://msticpy.readthedocs.io/en/latest/TIProviders.html)\n> <br>and [Configuring Notebook Environment notebook](./ConfiguringNotebookEnvironment.ipynb)\n> <br>or [ConfiguringNotebookEnvironment (GitHub static view)](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/ConfiguringNotebookEnvironment.ipynb)", "_____no_output_____" ] ], [ [ "tilookup = TILookup()\nmd(\"Threat intelligence provider loading complete.\")", "_____no_output_____" ] ], [ [ "## Host Logon Events\n**Hypothesis:** That an attacker has gained legitimate access to the host via compromised credentials and has logged into the host to conduct malicious activity. \n\nThis section provides an overview of logon activity for the host within our hunting time frame, the purpose of this is to allow for the identification of anomalous logons or attempted logons.", "_____no_output_____" ] ], [ [ "\n# Collect logon events for this, seperate them into sucessful and unsucessful and cluster sucessful one into sessions\nlogon_events = qry_prov.LinuxSyslog.user_logon(start=invest_times.start, end=invest_times.end, host_name=hostname)\nremote_logons = None\nfailed_logons = None\n\nif isinstance(logon_events, pd.DataFrame) and not logon_events.empty:\n remote_logons = (logon_events[logon_events['LogonResult'] == 'Success'])\n failed_logons = (logon_events[logon_events['LogonResult'] == 'Failure'])\nelse:\n print(\"No logon events in this timeframe\")\n\n\nif not remote_logons.empty or not failed_logons.empty:\n#Provide a timeline of sucessful and failed logon attempts to aid identification of potential brute force attacks\n display(Markdown('### Timeline of sucessful host logons.'))\n tooltip_cols = ['User', 'ProcessName', 'SourceIP']\n if rel_alert_select is not None:\n logon_timeline = nbdisplay.display_timeline(data=remote_logons, overlay_data=failed_logons, source_columns=tooltip_cols, height=200, overlay_color=\"red\", alert = rel_alert_select.selected_alert)\n else:\n logon_timeline = nbdisplay.display_timeline(data=remote_logons, overlay_data=failed_logons, source_columns=tooltip_cols, height=200, overlay_color=\"red\")\n display(Markdown('<b>Key:</b><p style=\"color:darkblue\">Sucessful logons </p><p style=\"color:Red\">Failed Logon Attempts (via su)</p>')) \n\n all_df = pd.DataFrame(dict(successful= remote_logons['ProcessName'].value_counts(), failed = failed_logons['ProcessName'].value_counts())).fillna(0)\n fail_data = pd.value_counts(failed_logons['User'].values, sort=True).head(10).reset_index(name='value').rename(columns={'User':'Count'})\n fail_data['angle'] = fail_data['value']/fail_data['value'].sum() * 2*pi\n fail_data['color'] = viridis(len(fail_data))\n fp = figure(plot_height=350, plot_width=450, title=\"Relative Frequencies of Failed Logons by Account\", toolbar_location=None, tools=\"hover\", tooltips=\"@index: @value\")\n fp.wedge(x=0, y=1, radius=0.5, start_angle=cumsum('angle', include_zero=True), end_angle=cumsum('angle'), line_color=\"white\", fill_color='color', legend='index', source=fail_data)\n\n sucess_data = pd.value_counts(remote_logons['User'].values, sort=False).reset_index(name='value').rename(columns={'User':'Count'})\n sucess_data['angle'] = sucess_data['value']/sucess_data['value'].sum() * 2*pi\n sucess_data['color'] = viridis(len(sucess_data))\n sp = figure(plot_height=350, width=450, title=\"Relative Frequencies of 
Sucessful Logons by Account\", toolbar_location=None, tools=\"hover\", tooltips=\"@index: @value\")\n sp.wedge(x=0, y=1, radius=0.5, start_angle=cumsum('angle', include_zero=True), end_angle=cumsum('angle'), line_color=\"white\", fill_color='color', legend='index', source=sucess_data)\n\n fp.axis.axis_label=None\n fp.axis.visible=False\n fp.grid.grid_line_color = None\n sp.axis.axis_label=None\n sp.axis.visible=False\n sp.grid.grid_line_color = None\n\n\n processes = all_df.index.values.tolist()\n results = all_df.columns.values.tolist()\n fail_sucess_data = {'processes' :processes,\n 'sucess' : all_df['successful'].values.tolist(),\n 'failure': all_df['failed'].values.tolist()}\n\n palette = viridis(2)\n x = [ (process, result) for process in processes for result in results ]\n counts = sum(zip(fail_sucess_data['sucess'], fail_sucess_data['failure']), ()) \n source = ColumnDataSource(data=dict(x=x, counts=counts))\n b = figure(x_range=FactorRange(*x), plot_height=350, plot_width=450, title=\"Failed and Sucessful logon attempts by process\",\n toolbar_location=None, tools=\"\", y_minor_ticks=2)\n b.vbar(x='x', top='counts', width=0.9, source=source, line_color=\"white\",\n fill_color=factor_cmap('x', palette=palette, factors=results, start=1, end=2))\n b.y_range.start = 0\n b.x_range.range_padding = 0.1\n b.xaxis.major_label_orientation = 1\n b.xgrid.grid_line_color = None\n\n show(Row(sp,fp,b))\n\n ip_list = [convert_to_ip_entities(i)[0] for i in remote_logons['SourceIP']]\n ip_fail_list = [convert_to_ip_entities(i)[0] for i in failed_logons['SourceIP']]\n \n location = get_map_center(ip_list + ip_fail_list)\n folium_map = FoliumMap(location = location, zoom_start=1.4)\n #Map logon locations to allow for identification of anomolous locations\n if len(ip_fail_list) > 0:\n md('<h3>Map of Originating Location of Logon Attempts</h3>')\n icon_props = {'color': 'red'}\n folium_map.add_ip_cluster(ip_entities=ip_fail_list, **icon_props)\n if len(ip_list) > 0:\n icon_props = {'color': 'green'}\n folium_map.add_ip_cluster(ip_entities=ip_list, **icon_props)\n display(folium_map.folium_map)\n md('<p style=\"color:red\">Warning: the folium mapping library '\n 'does not display correctly in some browsers.</p><br>'\n 'If you see a blank image please retry with a different browser.') \n", "_____no_output_____" ] ], [ [ "### Logon Sessions\nBased on the detail above if you wish to focus your hunt on a particular user jump to the [User Activity](#user) section. Alternatively to further further refine our hunt we need to select a logon session to view in more detail. Select a session from the list below to continue. 
Sessions that occurred at the time an alert was raised for this host, or where the user has a abnormal ratio of failed to successful login attempts are highlighted.", "_____no_output_____" ] ], [ [ "logon_sessions_df = None\ntry:\n print(\"Clustering logon sessions...\")\n logon_sessions_df = cluster_syslog_logons_df(logon_events)\nexcept Exception as err:\n print(f\"Error clustering logons: {err}\")\n\nif logon_sessions_df is not None:\n logon_sessions_df[\"Alerts during session?\"] = np.nan\n # check if any alerts occur during logon window.\n logon_sessions_df['Start (UTC)'] = [(time - dt.timedelta(seconds=5)) for time in logon_sessions_df['Start']]\n logon_sessions_df['End (UTC)'] = [(time + dt.timedelta(seconds=5)) for time in logon_sessions_df['End']]\n\n for TimeGenerated in related_alerts['TimeGenerated']:\n logon_sessions_df.loc[(TimeGenerated >= logon_sessions_df['Start (UTC)']) & (TimeGenerated <= logon_sessions_df['End (UTC)']), \"Alerts during session?\"] = \"Yes\"\n\n logon_sessions_df.loc[logon_sessions_df['User'] == 'root', \"Root?\"] = \"Yes\"\n logon_sessions_df.replace(np.nan, \"No\", inplace=True)\n\n ratios = []\n for _, row in logon_sessions_df.iterrows():\n suc_fail = logon_events.apply(lambda x: True if x['User'] == row['User'] and x[\"LogonResult\"] == 'Success' else(\n False if x['User'] == row['User'] and x[\"LogonResult\"] == 'Failure' else None), axis=1)\n numofsucess = len(suc_fail[suc_fail == True].index)\n numoffail = len(suc_fail[suc_fail == False].index)\n if numoffail == 0:\n ratio = 1\n else:\n ratio = numofsucess/numoffail\n ratios.append(ratio)\n logon_sessions_df[\"Sucessful to failed logon ratio\"] = ratios\n\n def color_cells(val):\n if isinstance(val, str):\n color = 'yellow' if val == \"Yes\" else 'white'\n elif isinstance(val, float):\n color = 'yellow' if val > 0.5 else 'white'\n else:\n color = 'white'\n return 'background-color: %s' % color \n\n display(logon_sessions_df[['User','Start (UTC)', 'End (UTC)', 'Alerts during session?', 'Sucessful to failed logon ratio', 'Root?']]\n .style.applymap(color_cells).hide_index())\n\n logon_items = (\n logon_sessions_df[['User','Start (UTC)', 'End (UTC)']]\n .to_string(header=False, index=False, index_names=False)\n .split('\\n')\n )\n logon_sessions_df[\"Key\"] = logon_items \n logon_sessions_df.set_index('Key', inplace=True)\n logon_dict = logon_sessions_df[['User','Start (UTC)', 'End (UTC)']].to_dict('index')\n\n logon_selection = nbwidgets.SelectItem(description='Select logon session to investigate: ',\n item_dict=logon_dict , width='80%', auto_display=True)\nelse:\n md(\"No logon sessions during this timeframe\")", "_____no_output_____" ] ], [ [ "#### Session Details", "_____no_output_____" ] ], [ [ "def view_syslog(selected_facility):\n return [syslog_events.query('Facility == @selected_facility')]\n\n# Produce a summary of user modification actions taken\n if \"Add\" in x:\n return len(add_events.replace(\"\", np.nan).dropna(subset=['User'])['User'].unique().tolist())\n elif \"Modify\" in x:\n return len(mod_events.replace(\"\", np.nan).dropna(subset=['User'])['User'].unique().tolist())\n elif \"Delete\" in x:\n return len(del_events.replace(\"\", np.nan).dropna(subset=['User'])['User'].unique().tolist())\n else:\n return \"\"\n\ncrn_tl_data = {}\nuser_tl_data = {}\nsudo_tl_data = {}\nsudo_sessions = None\ntooltip_cols = ['SyslogMessage']\nif logon_sessions_df is not None:\n #Collect data based on the session selected for investigation\n invest_sess = {'StartTimeUtc': 
logon_selection.value.get('Start (UTC)'), 'EndTimeUtc': logon_selection.value.get(\n 'End (UTC)'), 'Account': logon_selection.value.get('User'), 'Host': hostname}\n session = entityschema.HostLogonSession(invest_sess)\n syslog_events = qry_prov.LinuxSyslog.all_syslog(\n start=session.StartTimeUtc, end=session.EndTimeUtc, host_name=session.Host)\n sudo_events = qry_prov.LinuxSyslog.sudo_activity(\n start=session.StartTimeUtc, end=session.EndTimeUtc, host_name=session.Host, user=session.Account)\n \n if isinstance(sudo_events, pd.DataFrame) and not sudo_events.empty:\n try:\n sudo_sessions = cluster_syslog_logons_df(logon_events=sudo_events)\n except MsticpyException:\n pass\n\n # Display summary of cron activity in session\n cron_events = qry_prov.LinuxSyslog.cron_activity(\n start=session.StartTimeUtc, end=session.EndTimeUtc, host_name=session.Host)\n if not isinstance(cron_events, pd.DataFrame) or cron_events.empty:\n md(f'<h3> No Cron activity for {session.Host} between {session.StartTimeUtc} and {session.EndTimeUtc}</h3>')\n else:\n cron_events['CMD'].replace('', np.nan, inplace=True)\n crn_tl_data = {\"Cron Exections\": {\"data\": cron_events[['TimeGenerated', 'CMD', 'CronUser', 'SyslogMessage']].dropna(), \"source_columns\": tooltip_cols, \"color\": \"Blue\"},\n \"Cron Edits\": {\"data\": cron_events.loc[cron_events['SyslogMessage'].str.contains('EDIT')], \"source_columns\": tooltip_cols, \"color\": \"Green\"}}\n md('<h2> Most common commands run by cron:</h2>')\n md('This shows how often each cron job was exected within the specified time window')\n cron_commands = (cron_events[['EventTime', 'CMD']]\n .groupby(['CMD']).count()\n .dropna()\n .style\n .set_table_attributes('width=900px, text-align=center')\n .background_gradient(cmap='Reds', low=0.5, high=1)\n .format(\"{0:0>1.0f}\"))\n display(cron_commands)\n\n # Display summary of user and group creations, deletions and modifications during the session\n user_activity = qry_prov.LinuxSyslog.user_group_activity(\n start=session.StartTimeUtc, end=session.EndTimeUtc, host_name=session.Host)\n if not isinstance(user_activity, pd.DataFrame) or user_activity.empty:\n md(f'<h3>No user or group moidifcations for {session.Host} between {session.StartTimeUtc} and {session.EndTimeUtc}></h3>')\n else:\n add_events = user_activity[user_activity['UserGroupAction'].str.contains(\n 'Add')]\n del_events = user_activity[user_activity['UserGroupAction'].str.contains(\n 'Delete')]\n mod_events = user_activity[user_activity['UserGroupAction'].str.contains(\n 'Modify')]\n user_activity['Count'] = user_activity.groupby('UserGroupAction')['UserGroupAction'].transform('count')\n if add_events.empty and del_events.empty and mod_events.empty:\n md('<h2> Users and groups added or deleted:</h2<>')\n md(f'No users or groups were added or deleted on {host_entity.HostName} between {query_times.start} and {query_times.end}')\n user_tl_data = {}\n else:\n md(\"<h2>Users added, modified or deleted</h2>\")\n display(user_activity[['UserGroupAction','Count']].drop_duplicates().style.hide_index())\n account_actions = pd.DataFrame({\"User Additions\": [add_events.replace(\"\", np.nan).dropna(subset=['User'])['User'].unique().tolist()],\n \"User Modifications\": [mod_events.replace(\"\", np.nan).dropna(subset=['User'])['User'].unique().tolist()],\n \"User Deletions\": [del_events.replace(\"\", np.nan).dropna(subset=['User'])['User'].unique().tolist()]})\n display(account_actions.style.hide_index())\n user_tl_data = {\"User adds\": {\"data\": add_events, 
\"source_columns\": tooltip_cols, \"color\": \"Orange\"},\n \"User deletes\": {\"data\": del_events, \"source_columns\": tooltip_cols, \"color\": \"Red\"},\n \"User modfications\": {\"data\": mod_events, \"source_columns\": tooltip_cols, \"color\": \"Grey\"}}\n \n # Display sudo activity during session\n if not isinstance(sudo_sessions, pd.DataFrame) or sudo_sessions.empty:\n md(f\"<h3>No Sudo sessions for {session.Host} between {logon_selection.value.get('Start (UTC)')} and {logon_selection.value.get('End (UTC)')}</h3>\")\n sudo_tl_data = {}\n else:\n sudo_start = sudo_events[sudo_events[\"SyslogMessage\"].str.contains(\n \"pam_unix.+session opened\")].rename(columns={\"Sudoer\": \"User\"})\n sudo_tl_data = {\"Host logons\": {\"data\": remote_logons, \"source_columns\": tooltip_cols, \"color\": \"Cyan\"},\n \"Sudo sessions\": {\"data\": sudo_start, \"source_columns\": tooltip_cols, \"color\": \"Purple\"}}\n try:\n risky_actions = cmd_line.risky_cmd_line(events=sudo_events, log_type=\"Syslog\")\n suspicious_events = cmd_speed(\n cmd_events=sudo_events, time=60, events=2, cmd_field=\"Command\")\n except:\n risky_actions = None\n suspicious_events = None\n if risky_actions is None and suspicious_events is None:\n pass\n else:\n risky_sessions = risky_sudo_sessions(\n risky_actions=risky_actions, sudo_sessions=sudo_sessions, suspicious_actions=suspicious_events)\n for key in risky_sessions:\n if key in sudo_sessions:\n sudo_sessions[f\"{key} - {risky_sessions[key]}\"] = sudo_sessions.pop(\n key)\n \n if isinstance(sudo_events, pd.DataFrame):\n sudo_events_val = sudo_events[['EventTime', 'CommandCall']][sudo_events['CommandCall']!=\"\"].dropna(how='any', subset=['CommandCall'])\n if sudo_events_val.empty:\n md(f\"No sucessful sudo activity for {hostname} between {logon_selection.value.get('Start (UTC)')} and {logon_selection.value.get('End (UTC)')}\")\n else:\n sudo_events.replace(\"\", np.nan, inplace=True)\n md('<h2> Frequency of sudo commands</h2>')\n md('This shows how many times each command has been run with sudo. 
/bin/bash is usally associated with the use of \"sudo -i\"')\n sudo_commands = (sudo_events[['EventTime', 'CommandCall']]\n .groupby(['CommandCall'])\n .count()\n .dropna()\n .style\n .set_table_attributes('width=900px, text-align=center')\n .background_gradient(cmap='Reds', low=.5, high=1)\n .format(\"{0:0>3.0f}\"))\n display(sudo_commands)\n else:\n md(f\"No sucessful sudo activity for {hostname} between {logon_selection.value.get('Start (UTC)')} and {logon_selection.value.get('End (UTC)')}\") \n\n # Display a timeline of all activity during session\n crn_tl_data.update(user_tl_data)\n crn_tl_data.update(sudo_tl_data)\n if crn_tl_data:\n md('<h2> Session Timeline.</h2>')\n nbdisplay.display_timeline(\n data=crn_tl_data, title='Session Timeline', height=300)\nelse:\n md(\"No logon sessions during this timeframe\")", "_____no_output_____" ] ], [ [ "#### Raw data from user session\nUse this syslog message data to further investigate suspicous activity during the session", "_____no_output_____" ] ], [ [ "if isinstance(logon_sessions_df, pd.DataFrame) and not logon_sessions_df.empty:\n #Return syslog data and present it to the use for investigation\n session_syslog = qry_prov.LinuxSyslog.all_syslog(\n start=session.StartTimeUtc, end=session.EndTimeUtc, host_name=session.Host)\n if session_syslog.empty:\n display(HTML(\n f' No syslog for {session.Host} between {session.StartTimeUtc} and {session.EndTimeUtc}'))\n\n\n def view_sudo(selected_cmd):\n return [sudo_events.query('CommandCall == @selected_cmd')[\n ['TimeGenerated', 'SyslogMessage', 'Sudoer', 'SudoTo', 'Command', 'CommandCall']]]\n\n # Show syslog messages associated with selected sudo command\n md(\"<h3>View all messages associated with a sudo command</h3>\")\n items = sudo_events['CommandCall'].dropna().unique().tolist()\n display(nbwidgets.SelectItem(item_list=items, action=view_sudo))\nelse:\n md(\"No logon sessions during this timeframe\")", "_____no_output_____" ], [ "if isinstance(logon_sessions_df, pd.DataFrame) and not logon_sessions_df.empty:\n # Display syslog messages from the session witht he facility selected\n items = syslog_events['Facility'].dropna().unique().tolist()\n md(\"<h3>View all messages associated with a syslog facility</h3>\")\n display(nbwidgets.SelectItem(item_list=items, action=view_syslog))\nelse:\n md(\"No logon sessions during this timeframe\")", "_____no_output_____" ] ], [ [ "### Process Tree from session", "_____no_output_____" ] ], [ [ "if isinstance(logon_sessions_df, pd.DataFrame) and not logon_sessions_df.empty:\n display(HTML(\"<h3>Process Trees from session</h3>\"))\n print(\"Building process tree, this may take some time...\")\n # Find the table with auditd data in\n regex = '.*audit.*\\_cl?'\n matches = ((re.match(regex, key, re.IGNORECASE)) for key in qry_prov.schema)\n for match in matches:\n if match != None:\n audit_table = match.group(0)\n\n # Retrieve auditd data\n if audit_table:\n audit_data = qry_prov.LinuxAudit.auditd_all(\n start=session.StartTimeUtc, end=session.EndTimeUtc, host_name=hostname\n )\n if isinstance(audit_data, pd.DataFrame) and not audit_data.empty:\n audit_events = auditdextract.extract_events_to_df(\n data=audit_data\n )\n\n process_tree = auditdextract.generate_process_tree(audit_data=audit_events)\n process_tree.mp_process_tree.plot()\n else:\n display(HTML(\"No auditd data avaliable to build process tree\"))\n else:\n display(HTML(\"No auditd data avaliable to build process tree\"))\nelse:\n md(\"No logon sessions during this timeframe\")", 
"_____no_output_____" ] ], [ [ "Click [here](#app) to start a process/application focused hunt or continue with session based hunt below by selecting a sudo session to investigate.", "_____no_output_____" ], [ "### Sudo Session Investigation\nSudo activity is often required by an attacker to conduct actions on target, and more granular data is avalibale for sudo sessions allowing for deeper level hunting within these sesions.", "_____no_output_____" ] ], [ [ "if logon_sessions_df is not None and sudo_sessions is not None:\n sudo_items = sudo_sessions[['User','Start', 'End']].to_string(header=False,\n index=False,\n index_names=False).split('\\n')\n sudo_sessions[\"Key\"] = sudo_items\n sudo_sessions.set_index('Key', inplace=True)\n sudo_dict = sudo_sessions[['User','Start', 'End']].to_dict('index')\n\n sudo_selection = nbwidgets.SelectItem(description='Select sudo session to investigate: ',\n item_dict=sudo_dict, width='100%', height='300px', auto_display=True)\nelse:\n sudo_selection = None\n md(\"No logon sessions during this timeframe\")", "_____no_output_____" ], [ "#Collect data associated with the sudo session selected\nsudo_events = None\nfrom msticpy.sectools.tiproviders.ti_provider_base import TISeverity\n\ndef ti_check_sev(severity, threshold):\n severity = TISeverity.parse(severity)\n threshold = TISeverity.parse(threshold)\n return severity.value >= threshold.value\n\nif sudo_selection:\n sudo_sess = {'StartTimeUtc': sudo_selection.value.get('Start'), 'EndTimeUtc': sudo_selection.value.get(\n 'End'), 'Account': sudo_selection.value.get('User'), 'Host': hostname}\n sudo_session = entityschema.HostLogonSession(sudo_sess)\n sudo_events = qry_prov.LinuxSyslog.sudo_activity(start=sudo_session.StartTimeUtc.round(\n '-1s') - pd.Timedelta(seconds=1), end=(sudo_session.EndTimeUtc.round('1s')+ pd.Timedelta(seconds=1)), host_name=sudo_session.Host)\n if isinstance(sudo_events, pd.DataFrame) and not sudo_events.empty:\n display(sudo_events.replace('', np.nan).dropna(axis=0, subset=['Command'])[\n ['TimeGenerated', 'Command', 'CommandCall', 'SyslogMessage']])\n # Extract IOCs from the data\n ioc_extractor = iocextract.IoCExtract()\n os_family = host_entity.OSType if host_entity.OSType else 'Linux'\n print('Extracting IoCs.......')\n ioc_df = ioc_extractor.extract(data=sudo_events,\n columns=['SyslogMessage'],\n os_family=os_family,\n ioc_types=['ipv4', 'ipv6', 'dns', 'url',\n 'md5_hash', 'sha1_hash', 'sha256_hash'])\n if len(ioc_df) > 0:\n ioc_count = len(\n ioc_df[[\"IoCType\", \"Observable\"]].drop_duplicates())\n md(f\"Found {ioc_count} IOCs\")\n #Lookup the extracted IOCs in TI feed\n ti_resps = tilookup.lookup_iocs(data=ioc_df[[\"IoCType\", \"Observable\"]].drop_duplicates(\n ).reset_index(), obs_col='Observable', ioc_type_col='IoCType')\n i = 0\n ti_hits = []\n ti_resps.reset_index(drop=True, inplace=True)\n while i < len(ti_resps):\n if ti_resps['Result'][i] == True and ti_check_sev(ti_resps['Severity'][i], 1):\n ti_hits.append(ti_resps['Ioc'][i])\n i += 1\n else:\n i += 1\n md(f\"Found {len(ti_hits)} IoCs in Threat Intelligence\")\n for ioc in ti_hits:\n md(f\"Messages containing IoC found in TI feed: {ioc}\")\n display(sudo_events[sudo_events['SyslogMessage'].str.contains(\n ioc)][['TimeGenerated', 'SyslogMessage']])\n else:\n md(\"No IoC patterns found in Syslog Messages.\")\n else:\n md('No sudo messages for this session')\n\n\nelse:\n md(\"No Sudo session to investigate\")", "_____no_output_____" ] ], [ [ "Jump to:\n- <a>Host Logon Events</a>\n- <a>Application Activity</a>\n- 
<a>Network Activity</a>", "_____no_output_____" ], [ "<a></a>\n## User Activity\n**Hypothesis:** That an attacker has gained access to the host and is using a user account to conduct actions on the host.\n\nThis section provides an overview of activity by user within our hunting time frame, the purpose of this is to allow for the identification of anomalous activity by a user. This hunt can be driven be investigation of suspected users or as a hunt across all users seen on the host.", "_____no_output_____" ] ], [ [ "# Get list of users with logon or sudo sessions on host\nlogon_events = qry_prov.LinuxSyslog.user_logon(query_times, host_name=hostname)\nusers = logon_events['User'].replace('', np.nan).dropna().unique().tolist()\nall_users = list(users)\n\n\nif isinstance(sudo_events, pd.DataFrame) and not sudo_events.empty:\n sudoers = sudo_events['Sudoer'].replace(\n '', np.nan).dropna().unique().tolist()\n all_users.extend(x for x in sudoers if x not in all_users)\n\n# Pick Users\nif not logon_events.empty:\n user_select = nbwidgets.SelectItem(description='Select user to investigate: ',\n item_list=all_users, width='75%', auto_display=True)\nelse:\n md(\"There was no user activity in the timeframe specified.\")\n user_select = None", "_____no_output_____" ], [ "folium_user_map = FoliumMap()\n\ndef view_sudo(cmd):\n return [user_sudo_hold.query('CommandCall == @cmd')[\n ['TimeGenerated', 'HostName', 'Command', 'CommandCall', 'SyslogMessage']]]\nuser_sudo_hold = None\nif user_select is not None:\n # Get all syslog relating to these users\n username = user_select.value\n user_events = all_syslog_data[all_syslog_data['SyslogMessage'].str.contains(username)]\n logon_sessions = cluster_syslog_logons_df(logon_events)\n\n # Display all logons associated with the user\n md(f\"<h1> User Logon Activity for {username}</h1>\")\n user_logon_events = logon_events[logon_events['User'] == username]\n try:\n user_logon_sessions = cluster_syslog_logons_df(user_logon_events)\n except:\n user_logon_sessions = None\n \n user_remote_logons = (\n user_logon_events[user_logon_events['LogonResult'] == 'Success']\n )\n user_failed_logons = (\n user_logon_events[user_logon_events['LogonResult'] == 'Failure']\n )\n if not user_remote_logons.empty:\n for _, row in logon_sessions_df.iterrows():\n end = row['End']\n user_sudo_events = qry_prov.LinuxSyslog.sudo_activity(start=user_remote_logons.sort_values(\n by='TimeGenerated')['TimeGenerated'].iloc[0], end=end, host_name=hostname, user=username)\n else: \n user_sudo_events = None\n\n if user_logon_sessions is None and user_remote_logons.empty and user_failed_logons.empty:\n pass\n else:\n display(HTML(\n f\"{len(user_remote_logons)} sucessfull logons and {len(user_failed_logons)} failed logons for {username}\"))\n\n display(Markdown('### Timeline of host logon attempts.'))\n tooltip_cols = ['SyslogMessage']\n dfs = {\"User Logons\" :user_remote_logons, \"Failed Logons\": user_failed_logons, \"Sudo Events\" :user_sudo_events}\n user_tl_data = {}\n\n for k,v in dfs.items():\n if v is not None and not v.empty:\n user_tl_data.update({k :{\"data\":v,\"source_columns\":tooltip_cols}})\n\n nbdisplay.display_timeline(\n data=user_tl_data, title=\"User logon timeline\", height=300)\n \n all_user_df = pd.DataFrame(dict(successful= user_remote_logons['ProcessName'].value_counts(), failed = user_failed_logons['ProcessName'].value_counts())).fillna(0)\n processes = all_user_df.index.values.tolist()\n results = all_user_df.columns.values.tolist()\n user_fail_sucess_data = 
{'processes' :processes,\n 'sucess' : all_user_df['successful'].values.tolist(),\n 'failure': all_user_df['failed'].values.tolist()}\n\n palette = viridis(2)\n x = [ (process, result) for process in processes for result in results ]\n counts = sum(zip(user_fail_sucess_data['sucess'], fail_sucess_data['failure']), ()) \n source = ColumnDataSource(data=dict(x=x, counts=counts))\n b = figure(x_range=FactorRange(*x), plot_height=350, plot_width=450, title=\"Failed and Sucessful logon attempts by process\",\n toolbar_location=None, tools=\"\", y_minor_ticks=2)\n b.vbar(x='x', top='counts', width=0.9, source=source, line_color=\"white\",\n fill_color=factor_cmap('x', palette=palette, factors=results, start=1, end=2))\n b.y_range.start = 0\n b.x_range.range_padding = 0.1\n b.xaxis.major_label_orientation = 1\n b.xgrid.grid_line_color = None\n user_logons = pd.DataFrame({\"Sucessful Logons\" : [int(all_user_df['successful'].sum())],\n \"Failed Logons\" : [int(all_user_df['failed'].sum())]}).T\n user_logon_data = pd.value_counts(user_logon_events['LogonResult'].values, sort=True).head(10).reset_index(name='value').rename(columns={'User':'Count'})\n user_logon_data = user_logon_data[user_logon_data['index']!=\"Unknown\"].copy()\n user_logon_data['angle'] = user_logon_data['value']/user_logon_data['value'].sum() * 2*pi\n user_logon_data['color'] = viridis(len(user_logon_data))\n p = figure(plot_height=350, plot_width=450, title=\"Relative Frequencies of Failed Logons by Account\", toolbar_location=None, tools=\"hover\", tooltips=\"@index: @value\")\n p.axis.visible = False\n p.xgrid.visible = False\n p.ygrid.visible = False\n p.wedge(x=0, y=1, radius=0.5, start_angle=cumsum('angle', include_zero=True), end_angle=cumsum('angle'), line_color=\"white\", fill_color='color', legend='index', source=user_logon_data)\n show(Row(p,b)) \n \n user_ip_list = [convert_to_ip_entities(i)[0] for i in user_remote_logons['SourceIP']]\n user_ip_fail_list = [convert_to_ip_entities(i)[0] for i in user_failed_logons['SourceIP']]\n \n user_location = get_map_center(ip_list + ip_fail_list)\n user_folium_map = FoliumMap(location = location, zoom_start=1.4)\n #Map logon locations to allow for identification of anomolous locations\n if len(ip_fail_list) > 0:\n md('<h3>Map of Originating Location of Logon Attempts</h3>')\n icon_props = {'color': 'red'}\n user_folium_map.add_ip_cluster(ip_entities=user_ip_fail_list, **icon_props)\n if len(ip_list) > 0:\n icon_props = {'color': 'green'}\n user_folium_map.add_ip_cluster(ip_entities=user_ip_list, **icon_props)\n display(user_folium_map.folium_map)\n md('<p style=\"color:red\">Warning: the folium mapping library '\n 'does not display correctly in some browsers.</p><br>'\n 'If you see a blank image please retry with a different browser.') \n \n #Display sudo activity of the user \n if not isinstance(user_sudo_events, pd.DataFrame) or user_sudo_events.empty:\n md(f\"<h3>No sucessful sudo activity for {username}</h3>\")\n else:\n user_sudo_hold = user_sudo_events\n user_sudo_commands = (user_sudo_events[['EventTime', 'CommandCall']].replace('', np.nan).groupby(['CommandCall']).count().dropna().style.set_table_attributes('width=900px, text-align=center').background_gradient(cmap='Reds', low=.5, high=1).format(\"{0:0>3.0f}\"))\n display(user_sudo_commands)\n md(\"Select a sudo command to investigate in more detail\")\n display(nbwidgets.SelectItem(item_list=items, action=view_sudo))\nelse:\n md(\"No user session selected\")", "_____no_output_____" ], [ "# If the user has sudo activity 
extract and IOCs from the logs and look them up in TI feeds\nif not isinstance(user_sudo_hold, pd.DataFrame) or user_sudo_hold.empty:\n md(f\"No sudo messages data\")\nelse:\n # Extract IOCs\n ioc_extractor = iocextract.IoCExtract()\n os_family = host_entity.OSType if host_entity.OSType else 'Linux'\n print('Extracting IoCs.......')\n ioc_df = ioc_extractor.extract(data=user_sudo_hold,\n columns=['SyslogMessage'],\n os_family=os_family,\n ioc_types=['ipv4', 'ipv6', 'dns', 'url', 'md5_hash', 'sha1_hash', 'sha256_hash'])\n if len(ioc_df) > 0:\n ioc_count = len(ioc_df[[\"IoCType\", \"Observable\"]].drop_duplicates())\n md(f\"Found {ioc_count} IOCs\")\n ti_resps = tilookup.lookup_iocs(data=ioc_df[[\"IoCType\", \"Observable\"]].drop_duplicates(\n ).reset_index(), obs_col='Observable', ioc_type_col='IoCType')\n i = 0\n ti_hits = []\n ti_resps.reset_index(drop=True, inplace=True)\n while i < len(ti_resps):\n if ti_resps['Result'][i] == True and ti_check_sev(ti_resps['Severity'][i], 1):\n ti_hits.append(ti_resps['Ioc'][i])\n i += 1\n else:\n i += 1\n md(f\"Found {len(ti_hits)} IoCs in Threat Intelligence\")\n for ioc in ti_hits:\n md(f\"Messages containing IoC found in TI feed: {ioc}\")\n display(user_sudo_hold[user_sudo_hold['SyslogMessage'].str.contains(\n ioc)][['TimeGenerated', 'SyslogMessage']])\n else:\n md(\"No IoC patterns found in Syslog Message.\")", "_____no_output_____" ] ], [ [ "Jump to:\n- <a>Host Logon Events</a>\n- <a>User Activity</a>\n- <a>Network Activity</a>", "_____no_output_____" ], [ "<a></a>\n## Application Activity\n\n**Hypothesis:** That an attacker has compromised an application running on the host and is using the applications process to conduct actions on the host.\n\nThis section provides an overview of activity by application within our hunting time frame, the purpose of this is to allow for the identification of anomalous activity by an application. 
This hunt can be driven be investigation of suspected applications or as a hunt across all users seen on the host.", "_____no_output_____" ] ], [ [ "# Get list of Applications\napps = all_syslog_data['ProcessName'].replace('', np.nan).dropna().unique().tolist()\nsystem_apps = ['sudo', 'CRON', 'systemd-resolved', 'snapd',\n '50-motd-news', 'systemd-logind', 'dbus-deamon', 'crontab']\nif len(host_entity.Applications) > 0:\n installed_apps = []\n installed_apps.extend(x for x in apps if x not in system_apps)\n\n # Pick Applications\n app_select = nbwidgets.SelectItem(description='Select sudo session to investigate: ',\n item_list=installed_apps, width='75%', auto_display=True)\nelse:\n display(HTML(\"No applications other than stand OS applications present\"))", "_____no_output_____" ], [ "# Get all syslog relating to these Applications\napp = app_select.value\napp_data = all_syslog_data[all_syslog_data['ProcessName'] == app].copy()\n\n# App log volume over time\nif isinstance(app_data, pd.DataFrame) and not app_data.empty:\n app_data_volume = app_data.set_index(\n \"TimeGenerated\").resample('5T').count()\n app_data_volume.reset_index(level=0, inplace=True)\n app_data_volume.rename(columns={\"TenantId\" : \"NoOfLogMessages\"}, inplace=True)\n nbdisplay.display_timeline_values(data=app_data_volume, y='NoOfLogMessages', source_columns=['NoOfLogMessages'], title=f\"{app} log volume over time\") \n \n app_high_sev = app_data[app_data['SeverityLevel'].isin(\n ['emerg', 'alert', 'crit', 'err', 'warning'])]\n if isinstance(app_high_sev, pd.DataFrame) and not app_high_sev.empty:\n app_hs_volume = app_high_sev.set_index(\n \"TimeGenerated\").resample('5T').count()\n app_hs_volume.reset_index(level=0, inplace=True)\n app_hs_volume.rename(columns={\"TenantId\" : \"NoOfLogMessages\"}, inplace=True)\n nbdisplay.display_timeline_values(data=app_hs_volume, y='NoOfLogMessages', source_columns=['NoOfLogMessages'], title=f\"{app} high severity log volume over time\") \n\nrisky_messages = risky_cmd_line(events=app_data, log_type=\"Syslog\", cmd_field=\"SyslogMessage\")\nif risky_messages:\n print(risky_messages)", "_____no_output_____" ] ], [ [ "### Display process tree\nDue to the large volume of data involved you may wish to make you query window smaller", "_____no_output_____" ] ], [ [ "if rel_alert_select is None or rel_alert_select.selected_alert is None:\n start = query_times.start\nelse:\n start = rel_alert_select.selected_alert['TimeGenerated']\n\n# Set new investigation time windows based on the selected alert\nproc_invest_times = nbwidgets.QueryTime(units='hours',\n max_before=6, max_after=3, before=2, origin_time=start)\nproc_invest_times.display()", "_____no_output_____" ], [ "audit_table = None\napp_audit_data = None\napp = app_select.value\nprocess_tree_data = None\nregex = '.*audit.*\\_cl?'\n# Find the table with auditd data in and collect the data\nmatches = ((re.match(regex, key, re.IGNORECASE)) for key in qry_prov.schema)\nfor match in matches:\n if match != None:\n audit_table = match.group(0)\n\n#Check if the amount of data expected to be returned is a reasonable size, if not prompt before continuing\nif audit_table != None:\n if isinstance(app_audit_data, pd.DataFrame):\n pass\n else:\n print('Collecting audit data, please wait this may take some time....')\n app_audit_query_count = f\"\"\"{audit_table} \n | where TimeGenerated >= datetime({proc_invest_times.start}) \n | where TimeGenerated <= datetime({proc_invest_times.end}) \n | where Computer == '{hostname}'\n | summarize count()\n 
\"\"\"\n \n count_check = qry_prov.exec_query(query=app_audit_query_count)\n\n if count_check['count_'].iloc[0] > 100000 and not count_check.empty:\n size = count_check['count_'].iloc[0]\n print(\n f\"You are returning a very large dataset ({size} rows).\",\n \"It is reccomended that you consider scoping the size\\n\",\n \"of your query down.\\n\",\n \"Are you sure you want to proceed?\"\n )\n response = (input(\"Y/N\") or \"N\")\n \n if (\n (count_check['count_'].iloc[0] < 100000)\n or (count_check['count_'].iloc[0] > 100000\n and response.casefold().startswith(\"y\"))\n ):\n print(\"querying audit data...\")\n audit_data = qry_prov.LinuxAudit.auditd_all(\n start=proc_invest_times.start, end=proc_invest_times.end, host_name=hostname\n )\n if isinstance(audit_data, pd.DataFrame) and not audit_data.empty:\n print(\"building process tree...\")\n audit_events = auditdextract.extract_events_to_df(\n data=audit_data\n )\n \n process_tree_data = auditdextract.generate_process_tree(audit_data=audit_events)\n plot_lim = 1000\n if len(process_tree) > plot_lim:\n md_warn(f\"More than {plot_lim} processes to plot, limiting to top {plot_lim}.\")\n process_tree[:plot_lim].mp_process_tree.plot(legend_col=\"exe\")\n else:\n process_tree.mp_process_tree.plot(legend_col=\"exe\")\n size = audit_events.size\n print(f\"Collected {size} rows of data\")\n else:\n md(\"No audit events avalaible\")\n else:\n print(\"Resize query window\")\n \nelse:\n md(\"No audit events avalaible\")", "_____no_output_____" ], [ "md(f\"<h3>Process tree for {app}</h3>\")\nif process_tree_data is not None:\n process_tree_df = process_tree_data[process_tree_data[\"exe\"].str.contains(app, na=False)].copy()\n if not process_tree_df.empty: \n app_roots = process_tree_data.apply(lambda x: ptree.get_root(process_tree_data, x), axis=1)\n trees = []\n for root in app_roots[\"source_index\"].unique():\n trees.append(process_tree_data[process_tree_data[\"path\"].str.startswith(root)])\n app_proc_trees = pd.concat(trees)\n app_proc_trees.mp_process_tree.plot(legend_col=\"exe\", show_table=True)\n else:\n display(f\"No process tree data avaliable for {app}\")\n process_tree = None\nelse:\n md(\"No data avaliable to build process tree\")", "_____no_output_____" ] ], [ [ "### Application Logs with associated Threat Intelligence\nThese logs are associated with the process being investigated and include IOCs that appear in our TI feeds.", "_____no_output_____" ] ], [ [ "# Extract IOCs from syslog assocated with the selected process\nioc_extractor = iocextract.IoCExtract()\nos_family = host_entity.OSType if host_entity.OSType else 'Linux'\nmd('Extracting IoCs...')\nioc_df = ioc_extractor.extract(data=app_data,\n columns=['SyslogMessage'],\n os_family=os_family,\n ioc_types=['ipv4', 'ipv6', 'dns', 'url',\n 'md5_hash', 'sha1_hash', 'sha256_hash'])\n\nif process_tree_data is not None and not process_tree_data.empty:\n app_process_tree = app_proc_trees.dropna(subset=['cmdline'])\n audit_ioc_df = ioc_extractor.extract(data=app_process_tree,\n columns=['cmdline'],\n os_family=os_family,\n ioc_types=['ipv4', 'ipv6', 'dns', 'url',\n 'md5_hash', 'sha1_hash', 'sha256_hash'])\n\n ioc_df = ioc_df.append(audit_ioc_df)\n# Look up IOCs in TI feeds\nif len(ioc_df) > 0:\n ioc_count = len(ioc_df[[\"IoCType\", \"Observable\"]].drop_duplicates())\n md(f\"Found {ioc_count} IOCs\")\n md(\"Looking up threat intel...\")\n ti_resps = tilookup.lookup_iocs(data=ioc_df[[\n \"IoCType\", \"Observable\"]].drop_duplicates().reset_index(drop=True), obs_col='Observable')\n i = 0\n 
ti_hits = []\n ti_resps.reset_index(drop=True, inplace=True)\n while i < len(ti_resps):\n if ti_resps['Result'][i] == True and ti_check_sev(ti_resps['Severity'][i], 1):\n ti_hits.append(ti_resps['Ioc'][i])\n i += 1\n else:\n i += 1\n display(HTML(f\"Found {len(ti_hits)} IoCs in Threat Intelligence\"))\n for ioc in ti_hits:\n display(HTML(f\"Messages containing IoC found in TI feed: {ioc}\"))\n display(app_data[app_data['SyslogMessage'].str.contains(\n ioc)][['TimeGenerated', 'SyslogMessage']])\nelse:\n md(\"<h3>No IoC patterns found in Syslog Message.</h3>\")", "_____no_output_____" ] ], [ [ "Jump to:\n- <a>Host Logon Events</a>\n- <a>User Activity</a>\n- <a>Application Activity</a>", "_____no_output_____" ], [ "## Network Activity\n**Hypothesis:** That an attacker is remotely communicating with the host in order to compromise the host or for C2 or data exfiltration purposes after compromising the host.\n\nThis section provides an overview of network activity to and from the host during hunting time frame, the purpose of this is to allow for the identification of anomalous network traffic. If you wish to investigate a specific IP in detail it is recommended that you use the IP Explorer Notebook (include link).", "_____no_output_____" ] ], [ [ "# Get list of IPs from Syslog and Azure Network Data\nioc_extractor = iocextract.IoCExtract()\nos_family = host_entity.OSType if host_entity.OSType else 'Linux'\nprint('Finding IP Addresses this may take a few minutes.......')\nsyslog_ips = ioc_extractor.extract(data=all_syslog_data,\n columns=['SyslogMessage'],\n os_family=os_family,\n ioc_types=['ipv4', 'ipv6'])\n\n\nif 'AzureNetworkAnalytics_CL' not in qry_prov.schema:\n az_net_comms_df = None\n az_ips = None\nelse:\n if hasattr(host_entity, 'private_ips') and hasattr(host_entity, 'public_ips'):\n all_host_ips = host_entity.private_ips + \\\n host_entity.public_ips + [host_entity.IPAddress]\n else:\n all_host_ips = [host_entity.IPAddress]\n host_ips = {'\\'{}\\''.format(i.Address) for i in all_host_ips}\n host_ip_list = ','.join(host_ips)\n\n az_ip_where = f\"\"\"| where (VMIPAddress in (\"{host_ip_list}\") or SrcIP in (\"{host_ip_list}\") or DestIP in (\"{host_ip_list}\")) and (AllowedOutFlows > 0 or AllowedInFlows > 0)\"\"\"\n az_net_comms_df = qry_prov.AzureNetwork.az_net_analytics(\n start=query_times.start, end=query_times.end, host_name=hostname, where_clause=az_ip_where)\n if isinstance(az_net_comms_df, pd.DataFrame) and not az_net_comms_df.empty:\n az_ips = az_net_comms_df.query(\"PublicIPs != @host_entity.IPAddress\")\n else:\n az_ips = None\nif len(syslog_ips):\n IPs = syslog_ips[['IoCType', 'Observable']].drop_duplicates('Observable')\n display(f\"Found {len(IPs)} IP Addresses assoicated with the host\")\nelse:\n md(\"### No IoC patterns found in Syslog Message.\")\n \nif az_ips is not None:\n ips = az_ips['PublicIps'].drop_duplicates(\n ) + syslog_ips['Observable'].drop_duplicates()\nelse:\n ips = syslog_ips['Observable'].drop_duplicates()\n\nif isinstance(az_net_comms_df, pd.DataFrame) and not az_net_comms_df.empty:\n import warnings\n\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n\n az_net_comms_df['TotalAllowedFlows'] = az_net_comms_df['AllowedOutFlows'] + \\\n az_net_comms_df['AllowedInFlows']\n sns.catplot(x=\"L7Protocol\", y=\"TotalAllowedFlows\",\n col=\"FlowDirection\", data=az_net_comms_df)\n sns.relplot(x=\"FlowStartTime\", y=\"TotalAllowedFlows\",\n col=\"FlowDirection\", kind=\"line\",\n hue=\"L7Protocol\", 
data=az_net_comms_df).set_xticklabels(rotation=50)\n\n nbdisplay.display_timeline(data=az_net_comms_df.query('AllowedOutFlows > 0'),\n overlay_data=az_net_comms_df.query(\n 'AllowedInFlows > 0'),\n title='Network Flows (out=blue, in=green)',\n time_column='FlowStartTime',\n source_columns=[\n 'FlowType', 'AllExtIPs', 'L7Protocol', 'FlowDirection'],\n height=300)\nelse:\n md('<h3>No Azure network data for specified time range.</h3>')", "_____no_output_____" ] ], [ [ "### Choose ASNs/IPs to Check for Threat Intel Reports\nChoose from the list of Selected ASNs for the IPs you wish to check on. Then select the IP(s) that you wish to check against Threat Intelligence data.\nThe Source list is populated with all ASNs found in the syslog and network flow data.", "_____no_output_____" ] ], [ [ "#Lookup each IP in whois data and extract the ASN\n@lru_cache(maxsize=1024)\ndef whois_desc(ip_lookup, progress=False):\n try:\n ip = ip_address(ip_lookup)\n except ValueError:\n return \"Not an IP Address\"\n if ip.is_private:\n return \"private address\"\n if not ip.is_global:\n return \"other address\"\n whois = IPWhois(ip)\n whois_result = whois.lookup_whois()\n if progress:\n print(\".\", end=\"\")\n return whois_result[\"asn_description\"]\n\n# Summarise network data by ASN\nASN_List = []\nprint(\"WhoIs Lookups\")\nASNs = ips.apply(lambda x: whois_desc(x, True))\nIP_ASN = pd.DataFrame(dict(IPs=ips, ASN=ASNs)).reset_index()\nx = IP_ASN.groupby([\"ASN\"]).count().drop(\n 'index', axis=1).sort_values('IPs', ascending=False)\ndisplay(x)\nASN_List = x.index\n\n# Select an ASN to investigate in more detail\nselection = widgets.SelectMultiple(\n options=ASN_List,\n width=900,\n description='Select ASN to investigate',\n disabled=False\n)\nselection", "_____no_output_____" ], [ "# For every IP associated with the selected ASN look them up in TI feeds\nip_invest_list = None\nip_selection = None\nfor ASN in selection.value:\n if ip_invest_list is None:\n ip_invest_list = (IP_ASN[IP_ASN[\"ASN\"] == ASN]['IPs'].tolist())\n else:\n ip_invest_list + (IP_ASN[IP_ASN[\"ASN\"] == ASN]['IPs'].tolist())\n\nif ip_invest_list is not None:\n ioc_ip_list = []\n if len(ip_invest_list) > 0:\n ti_resps = tilookup.lookup_iocs(data=ip_invest_list, providers=[\"OTX\"])\n i = 0\n ti_hits = []\n while i < len(ti_resps):\n if ti_resps['Details'][i]['pulse_count'] > 0:\n ti_hits.append(ti_resps['Ioc'][i])\n i += 1\n else:\n i += 1\n display(HTML(f\"Found {len(ti_hits)} IoCs in Threat Intelligence\"))\n for ioc in ti_hits:\n ioc_ip_list.append(ioc)\n\n #Show IPs found in TI feeds for further investigation \n if len(ioc_ip_list) > 0: \n display(HTML(\"Select an IP whcih appeared in TI to investigate further\"))\n ip_selection = nbwidgets.SelectItem(description='Select IP Address to investigate: ', item_list = ioc_ip_list, width='95%', auto_display=True)\n \nelse:\n md(\"No IPs to investigate\")", "_____no_output_____" ], [ "# Get all syslog for the IPs\nif ip_selection is not None:\n display(HTML(\"Syslog data associated with this IP Address\"))\n sys_hits = all_syslog_data[all_syslog_data['SyslogMessage'].str.contains(\n ip_selection.value)]\n display(sys_hits)\n os_family = host_entity.OSType if host_entity.OSType else 'Linux'\n\n display(HTML(\"TI result for this IP Address\"))\n display(ti_resps[ti_resps['Ioc'] == ip_selection.value])\nelse:\n md(\"No IP address selected\")", "_____no_output_____" ] ], [ [ "## Configuration\n\n### `msticpyconfig.yaml` configuration File\nYou can configure primary and secondary TI providers and any 
required parameters in the `msticpyconfig.yaml` file. This is read from the current directory or you can set an environment variable (`MSTICPYCONFIG`) pointing to its location.\n\nTo configure this file see the [ConfigureNotebookEnvironment notebook](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/ConfiguringNotebookEnvironment.ipynb)", "_____no_output_____" ] ] ]
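The cells above repeat one pattern in several places — extract IoCs from a message column, look them up with `TILookup`, and keep only responses above a severity threshold (the `ti_check_sev` helper). As a hedged illustration, the sketch below consolidates that pattern into a single helper. It reuses only calls that already appear in those cells (`IoCExtract.extract`, `lookup_iocs`, `TISeverity.parse`) and takes the already-created `ioc_extractor` and `tilookup` objects as parameters; the function name and signature are illustrative, not part of msticpy.

```python
# Sketch only: consolidates the repeated extract -> lookup -> filter steps above.
# `ioc_extractor` and `tilookup` are the objects created earlier in the notebook;
# column names ('IoCType', 'Observable', 'Result', 'Severity') mirror those cells.
import pandas as pd
from msticpy.sectools.tiproviders.ti_provider_base import TISeverity

def iocs_with_ti_hits(data: pd.DataFrame, ioc_extractor, tilookup,
                      column: str = "SyslogMessage", os_family: str = "Linux",
                      threshold=1) -> pd.DataFrame:
    """Return TI responses for IoCs found in `column` at or above `threshold` severity."""
    ioc_df = ioc_extractor.extract(
        data=data,
        columns=[column],
        os_family=os_family,
        ioc_types=["ipv4", "ipv6", "dns", "url",
                   "md5_hash", "sha1_hash", "sha256_hash"],
    )
    if len(ioc_df) == 0:
        return ioc_df
    ti_resps = tilookup.lookup_iocs(
        data=ioc_df[["IoCType", "Observable"]].drop_duplicates().reset_index(drop=True),
        obs_col="Observable",
        ioc_type_col="IoCType",
    ).reset_index(drop=True)
    if ti_resps.empty:
        return ti_resps
    min_sev = TISeverity.parse(threshold)
    keep = ti_resps.apply(
        lambda row: bool(row["Result"])
        and TISeverity.parse(row["Severity"]).value >= min_sev.value,
        axis=1,
    )
    return ti_resps[keep]
```

Called as `iocs_with_ti_hits(sudo_events, ioc_extractor, tilookup)`, it returns the same responses whose `Ioc` values the `Result`/`Severity` loops in the sudo, user, and application cells collect into `ti_hits`, so the per-cell display logic could stay unchanged.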
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
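From the values shown, `cell_type_groups` appears to collect consecutive cells of the same type, with `cell_types` keeping one label per group (the first element of each group above alternates exactly like the list before it). That reading is inferred from the data rather than documented here, so treat the sketch below — a run-length grouping with `itertools.groupby` — as an assumption about how these two columns relate.

```python
# Hypothetical derivation of cell_type_groups / cell_types from a flat per-cell
# type sequence. The run-length grouping rule is an assumption inferred from the
# values shown above, not a documented schema.
from itertools import groupby
from typing import List, Tuple

def group_cell_types(flat_types: List[str]) -> Tuple[List[List[str]], List[str]]:
    """Group consecutive cells of the same type and label each group by that type."""
    groups = [list(run) for _, run in groupby(flat_types)]
    labels = [run[0] for run in groups]
    return groups, labels

groups, labels = group_cell_types(
    ["markdown", "markdown", "code", "markdown", "code", "code"]
)
# groups -> [['markdown', 'markdown'], ['code'], ['markdown'], ['code', 'code']]
# labels -> ['markdown', 'code', 'markdown', 'code']
```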
e71090067f9ad5d5a7cd783825e28bbd9981e14f
583579
ipynb
Jupyter Notebook
2-Valued-Based Methods/monte-carlo/Monte_Carlo_Solution.ipynb
zhaolongkzz/DRL-of-Udacity
331aeb5d61c769f94c6847a902f6a781af690bc2
[ "MIT" ]
null
null
null
2-Valued-Based Methods/monte-carlo/Monte_Carlo_Solution.ipynb
zhaolongkzz/DRL-of-Udacity
331aeb5d61c769f94c6847a902f6a781af690bc2
[ "MIT" ]
null
null
null
2-Valued-Based Methods/monte-carlo/Monte_Carlo_Solution.ipynb
zhaolongkzz/DRL-of-Udacity
331aeb5d61c769f94c6847a902f6a781af690bc2
[ "MIT" ]
null
null
null
1,176.570565
284,812
0.954351
[ [ [ "# Monte Carlo Methods\n\nIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. \n\nWhile we have provided some starter code, you are welcome to erase these hints and write your code from scratch.\n\n### Part 0: Explore BlackjackEnv\n\nWe begin by importing the necessary packages.", "_____no_output_____" ] ], [ [ "import sys\nimport gym\nimport numpy as np\nfrom collections import defaultdict\n\nfrom plot_utils import plot_blackjack_values, plot_policy", "_____no_output_____" ] ], [ [ "Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.", "_____no_output_____" ] ], [ [ "env = gym.make('Blackjack-v0')", "_____no_output_____" ] ], [ [ "Each state is a 3-tuple of:\n- the player's current sum $\\in \\{0, 1, \\ldots, 31\\}$,\n- the dealer's face up card $\\in \\{1, \\ldots, 10\\}$, and\n- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).\n\nThe agent has two potential actions:\n\n```\n STICK = 0\n HIT = 1\n```\nVerify this by running the code cell below.", "_____no_output_____" ] ], [ [ "print(env.observation_space)\nprint(env.action_space)", "Tuple(Discrete(32), Discrete(11), Discrete(2))\nDiscrete(2)\n" ] ], [ [ "Execute the code cell below to play Blackjack with a random policy. \n\n(_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)", "_____no_output_____" ] ], [ [ "for i_episode in range(3):\n state = env.reset()\n while True:\n print(state)\n action = env.action_space.sample()\n state, reward, done, info = env.step(action)\n if done:\n print('End game! Reward: ', reward)\n print('You won :)\\n') if reward > 0 else print('You lost :(\\n')\n break", "(18, 7, False)\nEnd game! Reward: 1.0\nYou won :)\n\n(18, 8, False)\n(20, 8, False)\nEnd game! Reward: -1\nYou lost :(\n\n(20, 3, False)\nEnd game! Reward: 1.0\nYou won :)\n\n" ] ], [ [ "### Part 1: MC Prediction\n\nIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). \n\nWe will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. \n\nThe function accepts as **input**:\n- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.\n\nIt returns as **output**:\n- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \\ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. 
In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.", "_____no_output_____" ] ], [ [ "def generate_episode_from_limit_stochastic(bj_env):\n episode = []\n state = bj_env.reset()\n while True:\n probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]\n action = np.random.choice(np.arange(2), p=probs)\n next_state, reward, done, info = bj_env.step(action)\n episode.append((state, action, reward))\n state = next_state\n if done:\n break\n return episode", "_____no_output_____" ] ], [ [ "Execute the code cell below to play Blackjack with the policy. \n\n(*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)", "_____no_output_____" ] ], [ [ "for i in range(3):\n print(generate_episode_from_limit_stochastic(env))", "[((17, 7, False), 0, -1.0)]\n[((20, 8, False), 0, 1.0)]\n[((16, 5, True), 1, 0), ((16, 5, False), 1, -1)]\n" ] ], [ [ "Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.\n\nYour algorithm has three arguments:\n- `env`: This is an instance of an OpenAI Gym environment.\n- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.\n- `generate_episode`: This is a function that returns an episode of interaction.\n- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).\n\nThe algorithm returns as output:\n- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.", "_____no_output_____" ] ], [ [ "def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):\n # initialize empty dictionaries of arrays\n returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))\n N = defaultdict(lambda: np.zeros(env.action_space.n))\n Q = defaultdict(lambda: np.zeros(env.action_space.n))\n # loop over episodes\n for i_episode in range(1, num_episodes+1):\n # monitor progress\n if i_episode % 1000 == 0:\n print(\"\\rEpisode {}/{}.\".format(i_episode, num_episodes), end=\"\")\n sys.stdout.flush()\n # generate an episode\n episode = generate_episode(env)\n # obtain the states, actions, and rewards\n states, actions, rewards = zip(*episode)\n # prepare for discounting\n discounts = np.array([gamma**i for i in range(len(rewards)+1)])\n # update the sum of the returns, number of visits, and action-value \n # function estimates for each state-action pair in the episode\n for i, state in enumerate(states):\n returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])\n N[state][actions[i]] += 1.0\n Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]\n return Q", "_____no_output_____" ] ], [ [ "Use the cell below to obtain the action-value function estimate $Q$. 
We have also plotted the corresponding state-value function.\n\nTo check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.", "_____no_output_____" ] ], [ [ "# obtain the action-value function\nQ = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)\n\n# obtain the corresponding state-value function\nV_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \\\n for k, v in Q.items())\n\n# plot the state-value function\nplot_blackjack_values(V_to_plot)", "Episode 500000/500000." ] ], [ [ "### Part 2: MC Control\n\nIn this section, you will write your own implementation of constant-$\\alpha$ MC control. \n\nYour algorithm has four arguments:\n- `env`: This is an instance of an OpenAI Gym environment.\n- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.\n- `alpha`: This is the step-size parameter for the update step.\n- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).\n\nThe algorithm returns as output:\n- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.\n- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.\n\n(_Feel free to define additional functions to help you to organize your code._)", "_____no_output_____" ] ], [ [ "def generate_episode_from_Q(env, Q, epsilon, nA):\n \"\"\" generates an episode from following the epsilon-greedy policy \"\"\"\n episode = []\n state = env.reset()\n while True:\n action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \\\n if state in Q else env.action_space.sample()\n next_state, reward, done, info = env.step(action)\n episode.append((state, action, reward))\n state = next_state\n if done:\n break\n return episode\n\ndef get_probs(Q_s, epsilon, nA):\n \"\"\" obtains the action probabilities corresponding to epsilon-greedy policy \"\"\"\n policy_s = np.ones(nA) * epsilon / nA\n best_a = np.argmax(Q_s)\n policy_s[best_a] = 1 - epsilon + (epsilon / nA)\n return policy_s\n\ndef update_Q(env, episode, Q, alpha, gamma):\n \"\"\" updates the action-value function estimate using the most recent episode \"\"\"\n states, actions, rewards = zip(*episode)\n # prepare for discounting\n discounts = np.array([gamma**i for i in range(len(rewards)+1)])\n for i, state in enumerate(states):\n old_Q = Q[state][actions[i]] \n Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)\n return Q", "_____no_output_____" ], [ "def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):\n nA = env.action_space.n\n # initialize empty dictionary of arrays\n Q = defaultdict(lambda: np.zeros(nA))\n epsilon = eps_start\n # loop over episodes\n for i_episode in range(1, num_episodes+1):\n # monitor progress\n if i_episode % 1000 == 0:\n print(\"\\rEpisode {}/{}.\".format(i_episode, num_episodes), end=\"\")\n sys.stdout.flush()\n # set the value of epsilon\n epsilon = max(epsilon*eps_decay, eps_min)\n # generate an episode by following epsilon-greedy policy\n episode = generate_episode_from_Q(env, Q, epsilon, nA)\n # update the action-value function estimate using the episode\n Q = update_Q(env, episode, Q, alpha, gamma)\n # determine the policy corresponding to the final action-value function 
estimate\n policy = dict((k,np.argmax(v)) for k, v in Q.items())\n return policy, Q", "_____no_output_____" ] ], [ [ "Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.", "_____no_output_____" ] ], [ [ "# obtain the estimated optimal policy and action-value function\npolicy, Q = mc_control(env, 500000, 0.02)", "Episode 500000/500000." ] ], [ [ "Next, we plot the corresponding state-value function.", "_____no_output_____" ] ], [ [ "# obtain the corresponding state-value function\nV = dict((k,np.max(v)) for k, v in Q.items())\n\n# plot the state-value function\nplot_blackjack_values(V)", "_____no_output_____" ] ], [ [ "Finally, we visualize the policy that is estimated to be optimal.", "_____no_output_____" ] ], [ [ "# plot the policy\nplot_policy(policy)", "_____no_output_____" ] ], [ [ "The **true** optimal policy $\\pi_*$ can be found in Figure 5.2 of the [textbook](http://go.udacity.com/rl-textbook) (and appears below). Compare your final estimate to the optimal policy - how close are you able to get? If you are not happy with the performance of your algorithm, take the time to tweak the decay rate of $\\epsilon$, change the value of $\\alpha$, and/or run the algorithm for more episodes to attain better results.\n\n![True Optimal Policy](images/optimal.png)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
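The constant-alpha control record above folds each observed return into the action-value table with the update Q(s,a) <- Q(s,a) + alpha * (G - Q(s,a)). A minimal numeric sketch of one such update (the numbers are made up purely for illustration):

```python
# One constant-alpha Monte Carlo update applied to invented numbers,
# showing how far a single episode moves the estimate.
alpha = 0.02          # step size, as in mc_control(env, 500000, 0.02)
q_old = 0.5           # current estimate Q(s, a)
episode_return = 1.0  # discounted return G observed after taking a in s

q_new = q_old + alpha * (episode_return - q_old)
print(q_new)  # 0.51 -- the estimate moves a small, fixed fraction toward G
```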
e7109a61dc76d5a55a41ece29ec94cec48fc423b
57,099
ipynb
Jupyter Notebook
notebooks/community/gapic/custom/showcase_tfhub_image_classification_online.ipynb
shenzhimo2/vertex-ai-samples
06fcfbff4800e4aa9a69266dd9b1d3e51a618b47
[ "Apache-2.0" ]
2
2021-10-02T02:17:20.000Z
2021-11-17T10:35:01.000Z
notebooks/community/gapic/custom/showcase_tfhub_image_classification_online.ipynb
shenzhimo2/vertex-ai-samples
06fcfbff4800e4aa9a69266dd9b1d3e51a618b47
[ "Apache-2.0" ]
4
2021-08-18T18:58:26.000Z
2022-02-10T07:03:36.000Z
notebooks/community/gapic/custom/showcase_tfhub_image_classification_online.ipynb
shenzhimo2/vertex-ai-samples
06fcfbff4800e4aa9a69266dd9b1d3e51a618b47
[ "Apache-2.0" ]
null
null
null
38.295775
499
0.621184
[ [ [ "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# Vertex client library: TF Hub image classification model for online prediction\n\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_tfhub_image_classification_online.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_tfhub_image_classification_online.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n<br/><br/><br/>", "_____no_output_____" ], [ "## Overview\n\n\nThis tutorial demonstrates how to use the Vertex client library for Python to deploy a pretrained TensorFlow Hub image classification model for online prediction.", "_____no_output_____" ], [ "### Dataset\n\nThe dataset used for this tutorial is the [Flowers dataset](https://www.tensorflow.org/datasets/catalog/tf_flowers) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of flower an image is from a class of five flowers: daisy, dandelion, rose, sunflower, or tulip.", "_____no_output_____" ], [ "### Objective\n\nIn this tutorial, you will deploy a TensorFlow Hub pretrained model, and then do a prediction on the deployed model by sending data.\n\nThe steps performed include:\n\n- Download a TensorFlow Hub pretrained model.\n- Retrieve and load the model artifacts.\n- Upload the model as a Vertex `Model` resource.\n- Deploy the `Model` resource to a serving `Endpoint` resource.\n- Make a prediction.\n- Undeploy the `Model` resource.", "_____no_output_____" ], [ "### Costs\n\nThis tutorial uses billable components of Google Cloud (GCP):\n\n* Vertex AI\n* Cloud Storage\n\nLearn about [Vertex AI\npricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage\npricing](https://cloud.google.com/storage/pricing), and use the [Pricing\nCalculator](https://cloud.google.com/products/calculator/)\nto generate a cost estimate based on your projected usage.", "_____no_output_____" ], [ "## Installation\n\nInstall the latest version of Vertex client library.", "_____no_output_____" ] ], [ [ "import os\nimport sys\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install -U google-cloud-aiplatform $USER_FLAG", "_____no_output_____" ] ], [ [ "Install the latest GA version of *google-cloud-storage* library as well.", "_____no_output_____" ] ], [ [ "! 
pip3 install -U google-cloud-storage $USER_FLAG", "_____no_output_____" ] ], [ [ "### Restart the kernel\n\nOnce you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.", "_____no_output_____" ] ], [ [ "if not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "_____no_output_____" ] ], [ [ "## Before you begin\n\n### GPU runtime\n\n*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**\n\n### Set up your Google Cloud project\n\n**The following steps are required, regardless of your notebook environment.**\n\n1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)\n\n3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)\n\n4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.\n\n5. Enter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.", "_____no_output_____" ] ], [ [ "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}", "_____no_output_____" ], [ "if PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)", "_____no_output_____" ], [ "! gcloud config set project $PROJECT_ID", "_____no_output_____" ] ], [ [ "#### Region\n\nYou can also change the `REGION` variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.\n\n- Americas: `us-central1`\n- Europe: `europe-west4`\n- Asia Pacific: `asia-east1`\n\nYou may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations)", "_____no_output_____" ] ], [ [ "REGION = \"us-central1\" # @param {type: \"string\"}", "_____no_output_____" ] ], [ [ "#### Timestamp\n\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.", "_____no_output_____" ] ], [ [ "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "_____no_output_____" ] ], [ [ "### Authenticate your Google Cloud account\n\n**If you are using Google Cloud Notebook**, your environment is already authenticated. 
Skip this step.\n\n**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\n\n**Otherwise**, follow these steps:\n\nIn the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.\n\n**Click Create service account**.\n\nIn the **Service account name** field, enter a name, and click **Create**.\n\nIn the **Grant this service account access to project** section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select **Vertex Administrator**. Type \"Storage Object Admin\" into the filter box, and select **Storage Object Admin**.\n\nClick Create. A JSON file that contains your key downloads to your local environment.\n\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "_____no_output_____" ] ], [ [ "# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "_____no_output_____" ] ], [ [ "### Create a Cloud Storage bucket\n\n**The following steps are required, regardless of your notebook environment.**\n\nWhen you submit a custom training job using the Vertex client library, you upload a Python package\ncontaining your training code to a Cloud Storage bucket. Vertex runs\nthe code from this package. In this tutorial, Vertex also saves the\ntrained model that results from your job in the same bucket. You can then\ncreate an `Endpoint` resource based on this output in order to serve\nonline predictions.\n\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.", "_____no_output_____" ] ], [ [ "BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}", "_____no_output_____" ], [ "if BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP", "_____no_output_____" ] ], [ [ "**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.", "_____no_output_____" ] ], [ [ "! gsutil mb -l $REGION $BUCKET_NAME", "_____no_output_____" ] ], [ [ "Finally, validate access to your Cloud Storage bucket by examining its contents:", "_____no_output_____" ] ], [ [ "! 
gsutil ls -al $BUCKET_NAME", "_____no_output_____" ] ], [ [ "### Set up variables\n\nNext, set up some variables used throughout the tutorial.\n### Import libraries and define constants", "_____no_output_____", "#### Import Vertex client library\n\nImport the Vertex client library into our Python environment.", "_____no_output_____" ] ], [ [ "import time\n\nfrom google.cloud.aiplatform import gapic as aip\nfrom google.protobuf import json_format\nfrom google.protobuf.json_format import MessageToJson, ParseDict\nfrom google.protobuf.struct_pb2 import Struct, Value", "_____no_output_____" ] ], [ [ "#### Vertex constants\n\nSet up the following constants for Vertex:\n\n- `API_ENDPOINT`: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.\n- `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.", "_____no_output_____" ] ], [ [ "# API service endpoint\nAPI_ENDPOINT = \"{}-aiplatform.googleapis.com\".format(REGION)\n\n# Vertex location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION", "_____no_output_____" ] ], [ [ "#### Hardware Accelerators\n\nSet the hardware accelerators (e.g., GPU), if any, for prediction.\n\nSet the variable `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:\n\n (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)\n\nFor GPU, available accelerators include:\n - aip.AcceleratorType.NVIDIA_TESLA_K80\n - aip.AcceleratorType.NVIDIA_TESLA_P100\n - aip.AcceleratorType.NVIDIA_TESLA_P4\n - aip.AcceleratorType.NVIDIA_TESLA_T4\n - aip.AcceleratorType.NVIDIA_TESLA_V100\n\nOtherwise specify `(None, None)` to use a container image to run on a CPU.", "_____no_output_____" ] ], [ [ "if os.getenv(\"IS_TESTING_DEPOLY_GPU\"):\n DEPLOY_GPU, DEPLOY_NGPU = (\n aip.AcceleratorType.NVIDIA_TESLA_K80,\n int(os.getenv(\"IS_TESTING_DEPOLY_GPU\")),\n )\nelse:\n DEPLOY_GPU, DEPLOY_NGPU = (None, None)", "_____no_output_____" ] ], [ [ "#### Container (Docker) image\n\nNext, we will set the Docker container images for prediction.\n\n- Set the variable `TF` to the TensorFlow version of the container image. For example, `2-1` would be version 2.1, and `1-15` would be version 1.15. 
The following list shows some of the pre-built images available:\n\n - TensorFlow 1.15\n - `gcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest`\n - `gcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest`\n - TensorFlow 2.1\n - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest`\n - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest`\n - TensorFlow 2.2\n - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest`\n - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest`\n - TensorFlow 2.3\n - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest`\n - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest`\n - XGBoost\n - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest`\n - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest`\n - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest`\n - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest`\n - Scikit-learn\n - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest`\n - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest`\n - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest`\n\nFor the latest list, see [Pre-built containers for prediction](https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers)", "_____no_output_____" ] ], [ [ "if os.getenv(\"IS_TESTING_TF\"):\n TF = os.getenv(\"IS_TESTING_TF\")\nelse:\n TF = \"2-1\"\n\nif TF[0] == \"2\":\n if DEPLOY_GPU:\n DEPLOY_VERSION = \"tf2-gpu.{}\".format(TF)\n else:\n DEPLOY_VERSION = \"tf2-cpu.{}\".format(TF)\nelse:\n if DEPLOY_GPU:\n DEPLOY_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n DEPLOY_VERSION = \"tf-cpu.{}\".format(TF)\n\nDEPLOY_IMAGE = \"gcr.io/cloud-aiplatform/prediction/{}:latest\".format(DEPLOY_VERSION)\n\nprint(\"Deployment:\", DEPLOY_IMAGE, DEPLOY_GPU)", "_____no_output_____" ] ], [ [ "#### Machine Type\n\nNext, set the machine type to use for prediction.\n\n- Set the variable `DEPLOY_COMPUTE` to configure the compute resources for the VM you will use for prediction.\n - `machine type`\n - `n1-standard`: 3.75GB of memory per vCPU.\n - `n1-highmem`: 6.5GB of memory per vCPU\n - `n1-highcpu`: 0.9 GB of memory per vCPU\n - `vCPUs`: number of \\[2, 4, 8, 16, 32, 64, 96 \\]\n\n*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*", "_____no_output_____" ] ], [ [ "if os.getenv(\"IS_TESTING_DEPLOY_MACHINE\"):\n MACHINE_TYPE = os.getenv(\"IS_TESTING_DEPLOY_MACHINE\")\nelse:\n MACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nDEPLOY_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Deploy machine type\", DEPLOY_COMPUTE)", "_____no_output_____" ] ], [ [ "# Tutorial\n\nNow you are ready to deploy a TensorFlow Hub pretrained image classification model.", "_____no_output_____" ], [ "## Set up clients\n\nThe Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.\n\nYou will use different clients in this tutorial for different steps in the workflow. 
So set them all up upfront.\n\n- Model Service for `Model` resources.\n- Endpoint Service for deployment.\n- Prediction Service for serving.", "_____no_output_____" ] ], [ [ "# client options same for all services\nclient_options = {\"api_endpoint\": API_ENDPOINT}\n\n\ndef create_model_client():\n client = aip.ModelServiceClient(client_options=client_options)\n return client\n\n\ndef create_endpoint_client():\n client = aip.EndpointServiceClient(client_options=client_options)\n return client\n\n\ndef create_prediction_client():\n client = aip.PredictionServiceClient(client_options=client_options)\n return client\n\n\nclients = {}\nclients[\"model\"] = create_model_client()\nclients[\"endpoint\"] = create_endpoint_client()\nclients[\"prediction\"] = create_prediction_client()\n\nfor client in clients.items():\n print(client)", "_____no_output_____" ] ], [ [ "## Get pretrained model from TFHub\n\nNext, you download a pre-trained model from $(TENSORFLOW) Hub.", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nimport tensorflow_hub as hub\n\nIMAGE_SHAPE = (224, 224)\n\nmodel = tf.keras.Sequential(\n [\n hub.KerasLayer(\n \"https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/4\",\n input_shape=IMAGE_SHAPE + (3,),\n )\n ]\n)\n\nmodel_path_to_deploy = BUCKET_NAME + \"/resnet\"", "_____no_output_____" ] ], [ [ "## Upload the model for serving\n\nNext, you will upload your TF.Keras model from the custom job to Vertex `Model` service, which will create a Vertex `Model` resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.\n\n### How does the serving function work\n\nWhen you send a request to an online prediction server, the request is received by a HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a `tf.string`.\n\nThe serving function consists of two parts:\n\n- `preprocessing function`:\n - Converts the input (`tf.string`) to the input shape and data type of the underlying model (dynamic graph).\n - Performs the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.\n- `post-processing function`:\n - Converts the model output to format expected by the receiving application -- e.q., compresses the output.\n - Packages the output for the the receiving application -- e.g., add headings, make JSON object, etc.\n\nBoth the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.\n\nOne consideration you need to consider when building serving functions for TF.Keras models is that they run as static graphs. That means, you cannot use TF graph operations that require a dynamic graph. 
If you do, you will get an error during the compile of the serving function which will indicate that you are using an EagerTensor which is not supported.", "_____no_output_____" ], [ "### Serving function for image data\n\nTo pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.\n\nTo resolve this, define a serving function (`serving_fn`) and attach it to the model as a preprocessing step. Add a `@tf.function` decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).\n\nWhen you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (`tf.string`), which is passed to the serving function (`serving_fn`). The serving function preprocesses the `tf.string` into raw (uncompressed) numpy bytes (`preprocess_fn`) to match the input requirements of the model:\n- `io.decode_jpeg`- Decompresses the JPG image which is returned as a Tensorflow tensor with three channels (RGB).\n- `image.convert_image_dtype` - Changes integer pixel values to float 32.\n- `image.resize` - Resizes the image to match the input shape for the model.\n- `resized / 255.0` - Rescales (normalization) the pixel data between 0 and 1.\n\nAt this point, the data can be passed to the model (`m_call`).", "_____no_output_____" ] ], [ [ "CONCRETE_INPUT = \"numpy_inputs\"\n\n\ndef _preprocess(bytes_input):\n decoded = tf.io.decode_jpeg(bytes_input, channels=3)\n decoded = tf.image.convert_image_dtype(decoded, tf.float32)\n resized = tf.image.resize(decoded, size=(32, 32))\n rescale = tf.cast(resized / 255.0, tf.float32)\n return rescale\n\n\[email protected](input_signature=[tf.TensorSpec([None], tf.string)])\ndef preprocess_fn(bytes_inputs):\n decoded_images = tf.map_fn(\n _preprocess, bytes_inputs, dtype=tf.float32, back_prop=False\n )\n return {\n CONCRETE_INPUT: decoded_images\n } # User needs to make sure the key matches model's input\n\n\[email protected](input_signature=[tf.TensorSpec([None], tf.string)])\ndef serving_fn(bytes_inputs):\n images = preprocess_fn(bytes_inputs)\n prob = m_call(**images)\n return prob\n\n\nm_call = tf.function(model.call).get_concrete_function(\n [tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]\n)\n\ntf.saved_model.save(\n model, model_path_to_deploy, signatures={\"serving_default\": serving_fn}\n)", "_____no_output_____" ] ], [ [ "## Get the serving function signature\n\nYou can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.\n\nFor your purpose, you need the signature of the serving function. Why? Well, when we send our data for prediction as a HTTP request packet, the image data is base64 encoded, and our TF.Keras model takes numpy input. 
Your serving function will do the conversion from base64 to a numpy array.\n\nWhen making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.", "_____no_output_____" ] ], [ [ "loaded = tf.saved_model.load(model_path_to_deploy)\n\nserving_input = list(\n loaded.signatures[\"serving_default\"].structured_input_signature[1].keys()\n)[0]\nprint(\"Serving function input:\", serving_input)", "_____no_output_____" ] ], [ [ "### Upload the model\n\nUse this helper function `upload_model` to upload your model, stored in SavedModel format, up to the `Model` service, which will instantiate a Vertex `Model` resource instance for your model. Once you've done that, you can use the `Model` resource instance in the same way as any other Vertex `Model` resource instance, such as deploying to an `Endpoint` resource for serving predictions.\n\nThe helper function takes the following parameters:\n\n- `display_name`: A human readable name for the `Model` resource.\n- `image_uri`: The container image for the model deployment.\n- `model_uri`: The Cloud Storage path to our SavedModel artifact. For this tutorial, this is the Cloud Storage location where the TensorFlow Hub model was saved with `tf.saved_model.save()`, which we specified in the variable `model_path_to_deploy`.\n\nThe helper function calls the `Model` client service's method `upload_model`, which takes the following parameters:\n\n- `parent`: The Vertex location root path for `Dataset`, `Model` and `Endpoint` resources.\n- `model`: The specification for the Vertex `Model` resource instance.\n\nLet's now dive deeper into the Vertex model specification `model`. This is a dictionary object that consists of the following fields:\n\n- `display_name`: A human readable name for the `Model` resource.\n- `metadata_schema_uri`: Since your model was built without a Vertex `Dataset` resource, you will leave this blank (`''`).\n- `artifact_uri`: The Cloud Storage path where the model is stored in SavedModel format.\n- `container_spec`: This is the specification for the Docker container that will be installed on the `Endpoint` resource, from which the `Model` resource will serve predictions. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated.\n\nUploading a model into a Vertex Model resource returns a long running operation, since it may take a few moments. You call response.result(), which is a synchronous call and will return when the Vertex Model resource is ready.\n\nThe helper function returns the Vertex fully qualified identifier for the corresponding Vertex Model instance upload_model_response.model. 
You will save the identifier for subsequent steps in the variable model_to_deploy_id.", "_____no_output_____" ] ], [ [ "IMAGE_URI = DEPLOY_IMAGE\n\n\ndef upload_model(display_name, image_uri, model_uri):\n model = {\n \"display_name\": display_name,\n \"metadata_schema_uri\": \"\",\n \"artifact_uri\": model_uri,\n \"container_spec\": {\n \"image_uri\": image_uri,\n \"command\": [],\n \"args\": [],\n \"env\": [{\"name\": \"env_name\", \"value\": \"env_value\"}],\n \"ports\": [{\"container_port\": 8080}],\n \"predict_route\": \"\",\n \"health_route\": \"\",\n },\n }\n response = clients[\"model\"].upload_model(parent=PARENT, model=model)\n print(\"Long running operation:\", response.operation.name)\n upload_model_response = response.result(timeout=180)\n print(\"upload_model_response\")\n print(\" model:\", upload_model_response.model)\n return upload_model_response.model\n\n\nmodel_to_deploy_id = upload_model(\n \"flowers-\" + TIMESTAMP, IMAGE_URI, model_path_to_deploy\n)", "_____no_output_____" ] ], [ [ "### Get `Model` resource information\n\nNow let's get the model information for just your model. Use this helper function `get_model`, with the following parameter:\n\n- `name`: The Vertex unique identifier for the `Model` resource.\n\nThis helper function calls the Vertex `Model` client service's method `get_model`, with the following parameter:\n\n- `name`: The Vertex unique identifier for the `Model` resource.", "_____no_output_____" ] ], [ [ "def get_model(name):\n response = clients[\"model\"].get_model(name=name)\n print(response)\n\n\nget_model(model_to_deploy_id)", "_____no_output_____" ] ], [ [ "## Deploy the `Model` resource\n\nNow deploy the trained Vertex custom `Model` resource. This requires two steps:\n\n1. Create an `Endpoint` resource for deploying the `Model` resource to.\n\n2. Deploy the `Model` resource to the `Endpoint` resource.", "_____no_output_____" ], [ "### Create an `Endpoint` resource\n\nUse this helper function `create_endpoint` to create an endpoint to deploy the model to for serving predictions, with the following parameter:\n\n- `display_name`: A human readable name for the `Endpoint` resource.\n\nThe helper function uses the endpoint client service's `create_endpoint` method, which takes the following parameter:\n\n- `display_name`: A human readable name for the `Endpoint` resource.\n\nCreating an `Endpoint` resource returns a long running operation, since it may take a few moments to provision the `Endpoint` resource for serving. You call `response.result()`, which is a synchronous call and will return when the Endpoint resource is ready. 
The helper function returns the Vertex fully qualified identifier for the `Endpoint` resource: `response.name`.", "_____no_output_____" ] ], [ [ "ENDPOINT_NAME = \"flowers_endpoint-\" + TIMESTAMP\n\n\ndef create_endpoint(display_name):\n endpoint = {\"display_name\": display_name}\n response = clients[\"endpoint\"].create_endpoint(parent=PARENT, endpoint=endpoint)\n print(\"Long running operation:\", response.operation.name)\n\n result = response.result(timeout=300)\n print(\"result\")\n print(\" name:\", result.name)\n print(\" display_name:\", result.display_name)\n print(\" description:\", result.description)\n print(\" labels:\", result.labels)\n print(\" create_time:\", result.create_time)\n print(\" update_time:\", result.update_time)\n return result\n\n\nresult = create_endpoint(ENDPOINT_NAME)", "_____no_output_____" ] ], [ [ "Now get the unique identifier for the `Endpoint` resource you created.", "_____no_output_____" ] ], [ [ "# The full unique ID for the endpoint\nendpoint_id = result.name\n# The short numeric ID for the endpoint\nendpoint_short_id = endpoint_id.split(\"/\")[-1]\n\nprint(endpoint_id)", "_____no_output_____" ] ], [ [ "### Compute instance scaling\n\nYou have several choices on scaling the compute instances for handling your online prediction requests:\n\n- Single Instance: The online prediction requests are processed on a single compute instance.\n - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.\n\n- Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.\n - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.\n\n- Auto Scaling: The online prediction requests are split across a scaleable number of compute instances.\n - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES) number of compute instances to provision, depending on load conditions.\n\nThe minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count`, in your subsequent deployment request.", "_____no_output_____" ] ], [ [ "MIN_NODES = 1\nMAX_NODES = 1", "_____no_output_____" ] ], [ [ "### Deploy `Model` resource to the `Endpoint` resource\n\nUse this helper function `deploy_model` to deploy the `Model` resource to the `Endpoint` resource you created for serving predictions, with the following parameters:\n\n- `model`: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.\n- `deploy_model_display_name`: A human readable name for the deployed model.\n- `endpoint`: The Vertex fully qualified endpoint identifier to deploy the model to.\n\nThe helper function calls the `Endpoint` client service's method `deploy_model`, which takes the following parameters:\n\n- `endpoint`: The Vertex fully qualified `Endpoint` resource identifier to deploy the `Model` resource to.\n- `deployed_model`: The requirements specification for deploying the model.\n- `traffic_split`: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.\n - If only one model, 
then specify as **{ \"0\": 100 }**, where \"0\" refers to this model being uploaded and 100 means 100% of the traffic.\n - If there are existing models on the endpoint, for which the traffic will be split, then use `model_id` to specify as **{ \"0\": percent, model_id: percent, ... }**, where `model_id` is the model id of an existing model to the deployed endpoint. The percents must add up to 100.\n\nLet's now dive deeper into the `deployed_model` parameter. This parameter is specified as a Python dictionary with the minimum required fields:\n\n- `model`: The Vertex fully qualified model identifier of the (upload) model to deploy.\n- `display_name`: A human readable name for the deployed model.\n- `disable_container_logging`: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.\n- `dedicated_resources`: This refers to how many compute instances (replicas) that are scaled for serving prediction requests.\n - `machine_spec`: The compute instance to provision. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated.\n - `min_replica_count`: The number of compute instances to initially provision, which you set earlier as the variable `MIN_NODES`.\n - `max_replica_count`: The maximum number of compute instances to scale to, which you set earlier as the variable `MAX_NODES`.\n\n#### Traffic Split\n\nLet's now dive deeper into the `traffic_split` parameter. This parameter is specified as a Python dictionary. This might at first be a tad bit confusing. Let me explain, you can deploy more than one instance of your model to an endpoint, and then set how much (percent) goes to each instance.\n\nWhy would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but it only get's say 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.\n\n#### Response\n\nThe method returns a long running operation `response`. We will wait sychronously for the operation to complete by calling the `response.result()`, which will block until the model is deployed. 
If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.", "_____no_output_____" ] ], [ [ "DEPLOYED_NAME = \"flowers_deployed-\" + TIMESTAMP\n\n\ndef deploy_model(\n model, deployed_model_display_name, endpoint, traffic_split={\"0\": 100}\n):\n\n if DEPLOY_GPU:\n machine_spec = {\n \"machine_type\": DEPLOY_COMPUTE,\n \"accelerator_type\": DEPLOY_GPU,\n \"accelerator_count\": DEPLOY_NGPU,\n }\n else:\n machine_spec = {\n \"machine_type\": DEPLOY_COMPUTE,\n \"accelerator_count\": 0,\n }\n\n deployed_model = {\n \"model\": model,\n \"display_name\": deployed_model_display_name,\n \"dedicated_resources\": {\n \"min_replica_count\": MIN_NODES,\n \"max_replica_count\": MAX_NODES,\n \"machine_spec\": machine_spec,\n },\n \"disable_container_logging\": False,\n }\n\n response = clients[\"endpoint\"].deploy_model(\n endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split\n )\n\n print(\"Long running operation:\", response.operation.name)\n result = response.result()\n print(\"result\")\n deployed_model = result.deployed_model\n print(\" deployed_model\")\n print(\" id:\", deployed_model.id)\n print(\" model:\", deployed_model.model)\n print(\" display_name:\", deployed_model.display_name)\n print(\" create_time:\", deployed_model.create_time)\n\n return deployed_model.id\n\n\ndeployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)", "_____no_output_____" ] ], [ [ "## Make a online prediction request\n\nNow do a online prediction to your deployed model.", "_____no_output_____" ], [ "### Get test item\n\nYou will use an example image from your dataset as a test item.", "_____no_output_____" ] ], [ [ "FLOWERS_CSV = \"gs://cloud-ml-data/img/flower_photos/all_data.csv\"\n\ntest_images = ! gsutil cat $FLOWERS_CSV | head -n1\ntest_image = test_images[0].split(\",\")[0]\nprint(test_image)", "_____no_output_____" ] ], [ [ "### Prepare the request content\n\nYou are going to send the flowers image as compressed JPG image, instead of the raw uncompressed bytes:\n\n- `tf.io.read_file`: Read the compressed JPG images back into memory as raw bytes.\n- `base64.b64encode`: Encode the raw bytes into a base 64 encoded string.", "_____no_output_____" ] ], [ [ "import base64\n\nbytes = tf.io.read_file(test_image)\nb64str = base64.b64encode(bytes.numpy()).decode(\"utf-8\")", "_____no_output_____" ] ], [ [ "### Send the prediction request\n\nOk, now you have a test image. Use this helper function `predict_image`, which takes the following parameters:\n\n- `image`: The test image data as a numpy array.\n- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource was deployed to.\n- `parameters_dict`: Additional parameters for serving.\n\nThis function calls the prediction client service `predict` method with the following parameters:\n\n- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource was deployed to.\n- `instances`: A list of instances (encoded images) to predict.\n- `parameters`: Additional parameters for serving.\n\nTo pass the image data to the prediction service, in the previous step you encoded the bytes into base64 -- which makes the content safe from modification when transmitting binary data over the network. 
You need to tell the serving binary where your model is deployed to, that the content has been base64 encoded, so it will decode it on the other end in the serving binary.\n\nEach instance in the prediction request is a dictionary entry of the form:\n\n {serving_input: {'b64': content}}\n\n- `input_name`: the name of the input layer of the underlying model.\n- `'b64'`: A key that indicates the content is base64 encoded.\n- `content`: The compressed JPG image bytes as a base64 encoded string.\n\nSince the `predict()` service can take multiple images (instances), you will send your single image as a list of one image. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the `predict()` service.\n\nThe `response` object returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction:\n\n- `predictions`: Confidence level for the prediction, between 0 and 1, for each of the classes.", "_____no_output_____" ] ], [ [ "def predict_image(image, endpoint, parameters_dict):\n # The format of each instance should conform to the deployed model's prediction input schema.\n instances_list = [{serving_input: {\"b64\": image}}]\n instances = [json_format.ParseDict(s, Value()) for s in instances_list]\n\n response = clients[\"prediction\"].predict(\n endpoint=endpoint, instances=instances, parameters=parameters_dict\n )\n print(\"response\")\n print(\" deployed_model_id:\", response.deployed_model_id)\n predictions = response.predictions\n print(\"predictions\")\n for prediction in predictions:\n print(\" prediction:\", prediction)\n\n\npredict_image(b64str, endpoint_id, None)", "_____no_output_____" ] ], [ [ "## Undeploy the `Model` resource\n\nNow undeploy your `Model` resource from the serving `Endpoint` resoure. 
Use this helper function `undeploy_model`, which takes the following parameters:\n\n- `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed to.\n- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` is deployed to.\n\nThis function calls the endpoint client service's method `undeploy_model`, with the following parameters:\n\n- `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed.\n- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource is deployed.\n- `traffic_split`: How to split traffic among the remaining deployed models on the `Endpoint` resource.\n\nSince this is the only deployed model on the `Endpoint` resource, you simply can leave `traffic_split` empty by setting it to {}.", "_____no_output_____" ] ], [ [ "def undeploy_model(deployed_model_id, endpoint):\n response = clients[\"endpoint\"].undeploy_model(\n endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}\n )\n print(response)\n\n\nundeploy_model(deployed_model_id, endpoint_id)", "_____no_output_____" ] ], [ [ "# Cleaning up\n\nTo clean up all GCP resources used in this project, you can [delete the GCP\nproject](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n\nOtherwise, you can delete the individual resources you created in this tutorial:\n\n- Dataset\n- Pipeline\n- Model\n- Endpoint\n- Batch Job\n- Custom Job\n- Hyperparameter Tuning Job\n- Cloud Storage Bucket", "_____no_output_____" ] ], [ [ "delete_dataset = True\ndelete_pipeline = True\ndelete_model = True\ndelete_endpoint = True\ndelete_batchjob = True\ndelete_customjob = True\ndelete_hptjob = True\ndelete_bucket = True\n\n# Delete the dataset using the Vertex fully qualified identifier for the dataset\ntry:\n if delete_dataset and \"dataset_id\" in globals():\n clients[\"dataset\"].delete_dataset(name=dataset_id)\nexcept Exception as e:\n print(e)\n\n# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline\ntry:\n if delete_pipeline and \"pipeline_id\" in globals():\n clients[\"pipeline\"].delete_training_pipeline(name=pipeline_id)\nexcept Exception as e:\n print(e)\n\n# Delete the model using the Vertex fully qualified identifier for the model\ntry:\n if delete_model and \"model_to_deploy_id\" in globals():\n clients[\"model\"].delete_model(name=model_to_deploy_id)\nexcept Exception as e:\n print(e)\n\n# Delete the endpoint using the Vertex fully qualified identifier for the endpoint\ntry:\n if delete_endpoint and \"endpoint_id\" in globals():\n clients[\"endpoint\"].delete_endpoint(name=endpoint_id)\nexcept Exception as e:\n print(e)\n\n# Delete the batch job using the Vertex fully qualified identifier for the batch job\ntry:\n if delete_batchjob and \"batch_job_id\" in globals():\n clients[\"job\"].delete_batch_prediction_job(name=batch_job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the custom job using the Vertex fully qualified identifier for the custom job\ntry:\n if delete_customjob and \"job_id\" in globals():\n clients[\"job\"].delete_custom_job(name=job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job\ntry:\n if delete_hptjob and \"hpt_job_id\" in globals():\n 
clients[\"job\"].delete_hyperparameter_tuning_job(name=hpt_job_id)\nexcept Exception as e:\n print(e)\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil rm -r $BUCKET_NAME", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
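The prediction section of the record above sends the test image as base64 text inside each request instance. A rough, self-contained sketch of that round trip (the byte string is a placeholder, and the instance key must match the deployed model's serving-signature input, looked up as `serving_input` in the record):

```python
import base64

# Placeholder bytes standing in for a compressed JPEG read with tf.io.read_file().
image_bytes = b"\xff\xd8\xff\xe0 fake jpeg payload"

# Encode for transport: base64 text is safe to embed in a JSON prediction request.
b64str = base64.b64encode(image_bytes).decode("utf-8")

# Assumed input name for illustration; the real key comes from the serving signature.
instance = {"bytes_inputs": {"b64": b64str}}

# The serving side recovers the original bytes before decoding the JPEG.
assert base64.b64decode(b64str) == image_bytes
print(instance["bytes_inputs"]["b64"][:16], "...")
```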
e710a140ff6c9a84a2790bb420f8596c2e6f08d4
40,628
ipynb
Jupyter Notebook
jupyter/The+Unit+Commitment+Problem+Local.jupyter-py36.ipynb
vostertag/DO-Samples
1bc3837a2f7f8db023b0a7087840f69ee7515689
[ "Apache-2.0" ]
null
null
null
jupyter/The+Unit+Commitment+Problem+Local.jupyter-py36.ipynb
vostertag/DO-Samples
1bc3837a2f7f8db023b0a7087840f69ee7515689
[ "Apache-2.0" ]
null
null
null
jupyter/The+Unit+Commitment+Problem+Local.jupyter-py36.ipynb
vostertag/DO-Samples
1bc3837a2f7f8db023b0a7087840f69ee7515689
[ "Apache-2.0" ]
null
null
null
35.114952
371
0.585163
[ [ [ "# The Unit Commitment Problem (UCP)\n\nThis tutorial includes everything you need to set up IBM Decision Optimization CPLEX Modeling for Python (DOcplex), build a Mathematical Programming model, and get its solution by solving the model on the cloud with IBM ILOG CPLEX Optimizer.\n\nWhen you finish this tutorial, you'll have a foundational knowledge of _Prescriptive Analytics_.\n\n>This notebook is part of [Prescriptive Analytics for Python](https://rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html).\n\n>It requires an [installation of CPLEX Optimizers](http://ibmdecisionoptimization.github.io/docplex-doc/getting_started.html)\n\nDiscover us [here](https://developer.ibm.com/docloud)\n\n\nTable of contents:\n\n* [Describe the business problem](#Describe-the-business-problem)\n* [How decision optimization (prescriptive analytics) can help](#How--decision-optimization-can-help)\n* [Use decision optimization](#Use-decision-optimization)\n * [Step 1: Import the library](#Step-1:-Import-the-library)\n * [Step 2: Model the Data](#Step-2:-Model-the-data)\n * [Step 3: Prepare the data](#Step-3:-Prepare-the-data)\n * [Step 4: Set up the prescriptive model](#Step-4:-Set-up-the-prescriptive-model)\n * [Define the decision variables](#Define-the-decision-variables)\n * [Express the business constraints](#Express-the-business-constraints)\n * [Express the objective](#Express-the-objective)\n * [Solve with Decision Optimization](#Solve-with-Decision-Optimization)\n * [Step 5: Investigate the solution and run an example analysis](#Step-5:-Investigate-the-solution-and-then-run-an-example-analysis)\n* [Summary](#Summary)\n\n****", "_____no_output_____" ], [ "## Describe the business problem\n\n* The Model estimates the lower cost of generating electricity within a given plan. \nDepending on the demand for electricity, we turn on or off units that generate power and which have operational properties and costs.\n\n* The Unit Commitment Problem answers the question \"Which power generators should I run at which times and at what level in order to satisfy the demand for electricity?\". This model helps users to find not only a feasible answer to the question, but one that also optimizes its solution to meet as many of the electricity company's overall goals as possible. \n", "_____no_output_____" ], [ "## How decision optimization can help\n\n* Prescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes. \n\n* Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes. \n\n* Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage. 
\n<br/>\n\n<u>With prescriptive analytics, you can:</u> \n\n* Automate the complex decisions and trade-offs to better manage your limited resources.\n* Take advantage of a future opportunity or mitigate a future risk.\n* Proactively update recommendations based on changing events.\n* Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.", "_____no_output_____" ], [ "## Checking minimum requirements\nThis notebook uses some features of *pandas* that are available in version 0.17.1 or above.", "_____no_output_____" ] ], [ [ "import pip\nREQUIRED_MINIMUM_PANDAS_VERSION = '0.17.1'\ntry:\n import pandas as pd\n assert pd.__version__ >= REQUIRED_MINIMUM_PANDAS_VERSION\nexcept:\n raise Exception(\"Version \" + REQUIRED_MINIMUM_PANDAS_VERSION + \" or above of Pandas is required to run this notebook\")", "_____no_output_____" ] ], [ [ "## Use decision optimization", "_____no_output_____" ], [ "### Step 1: Import the library\n\nRun the following code to the import the Decision Optimization CPLEX Modeling library. The *DOcplex* library contains the two modeling packages, Mathematical Programming (*docplex.mp*) and Constraint Programming (*docplex.cp*).", "_____no_output_____" ] ], [ [ "import sys\ntry:\n import docplex.mp\nexcept:\n raise Exception('Please install docplex. See https://pypi.org/project/docplex/')", "_____no_output_____" ] ], [ [ "### Step 2: Model the data\n#### Load data from a *pandas* DataFrame\n\nData for the Unit Commitment Problem is provided as a *pandas* DataFrame.\nFor a standalone notebook, we provide the raw data as Python collections,\nbut real data could be loaded\nfrom an Excel sheet, also using *pandas*.", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom pandas import DataFrame, Series\n\n# make matplotlib plots appear inside the notebook\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 11, 5 ############################ <-Use this to change the plot", "_____no_output_____" ] ], [ [ "Update the configuration of notebook so that display matches browser window width.", "_____no_output_____" ] ], [ [ "from IPython.core.display import HTML\nHTML(\"<style>.container { width:100%; }</style>\")", "_____no_output_____" ] ], [ [ "#### Available energy technologies\n\nThe following *df_energy* DataFrame stores CO<sub>2</sub> cost information, indexed by energy type.", "_____no_output_____" ] ], [ [ "energies = [\"coal\", \"gas\", \"diesel\", \"wind\"]\ndf_energy = DataFrame({\"co2_cost\": [30, 5, 15, 0]}, index=energies)\n\n# Display the 'df_energy' Data Frame\ndf_energy", "_____no_output_____" ] ], [ [ "The following *df_units* DataFrame stores common elements for units of a given technology.", "_____no_output_____" ] ], [ [ "all_units = [\"coal1\", \"coal2\", \n \"gas1\", \"gas2\", \"gas3\", \"gas4\", \n \"diesel1\", \"diesel2\", \"diesel3\", \"diesel4\"]\n \nucp_raw_unit_data = {\n \"energy\": [\"coal\", \"coal\", \"gas\", \"gas\", \"gas\", \"gas\", \"diesel\", \"diesel\", \"diesel\", \"diesel\"],\n \"initial\" : [400, 350, 205, 52, 155, 150, 78, 76, 0, 0],\n \"min_gen\": [100, 140, 78, 52, 54.25, 39, 17.4, 15.2, 4, 2.4],\n \"max_gen\": [425, 365, 220, 210, 165, 158, 90, 87, 20, 12],\n \"operating_max_gen\": [400, 350, 205, 197, 155, 150, 78, 76, 20, 12],\n \"min_uptime\": [15, 15, 6, 5, 5, 4, 3, 3, 1, 1],\n \"min_downtime\":[9, 8, 7, 4, 3, 2, 2, 2, 1, 1],\n \"ramp_up\": [212, 150, 101.2, 94.8, 58, 50, 40, 60, 20, 12],\n \"ramp_down\": [183, 198, 
95.6, 101.7, 77.5, 60, 24, 45, 20, 12],\n \"start_cost\": [5000, 4550, 1320, 1291, 1280, 1105, 560, 554, 300, 250],\n \"fixed_cost\": [208.61, 117.37, 174.12, 172.75, 95.353, 144.52, 54.417, 54.551, 79.638, 16.259],\n \"variable_cost\": [22.536, 31.985, 70.5, 69, 32.146, 54.84, 40.222, 40.522, 116.33, 76.642],\n }\n\ndf_units = DataFrame(ucp_raw_unit_data, index=all_units)\n\n# Display the 'df_units' Data Frame\ndf_units", "_____no_output_____" ] ], [ [ "### Step 3: Prepare the data", "_____no_output_____" ], [ "The *pandas* *merge* operation is used to create a join between the *df_units* and *df_energy* DataFrames. Here, the join is performed based on the *'energy'* column of *df_units* and index column of *df_energy*.\n\nBy default, *merge* performs an *inner* join. That is, the resulting DataFrame is based on the **intersection** of keys from both input DataFrames.", "_____no_output_____" ] ], [ [ "# Add a derived co2-cost column by merging with df_energies\n# Use energy key from units and index from energy dataframe\ndf_up = pd.merge(df_units, df_energy, left_on=\"energy\", right_index=True)\ndf_up.index.names=['units']\n\n# Display first rows of new 'df_up' Data Frame\ndf_up.head()", "_____no_output_____" ] ], [ [ "The demand is stored as a *pandas* _Series_ indexed from 1 to the number of periods.", "_____no_output_____" ] ], [ [ "raw_demand = [1259.0, 1439.0, 1289.0, 1211.0, 1433.0, 1287.0, 1285.0, 1227.0, 1269.0, 1158.0, 1277.0, 1417.0, 1294.0, 1396.0, 1414.0, 1386.0,\n 1302.0, 1215.0, 1433.0, 1354.0, 1436.0, 1285.0, 1332.0, 1172.0, 1446.0, 1367.0, 1243.0, 1275.0, 1363.0, 1208.0, 1394.0, 1345.0, \n 1217.0, 1432.0, 1431.0, 1356.0, 1360.0, 1364.0, 1286.0, 1440.0, 1440.0, 1313.0, 1389.0, 1385.0, 1265.0, 1442.0, 1435.0, 1432.0, \n 1280.0, 1411.0, 1440.0, 1258.0, 1333.0, 1293.0, 1193.0, 1440.0, 1306.0, 1264.0, 1244.0, 1368.0, 1437.0, 1236.0, 1354.0, 1356.0, \n 1383.0, 1350.0, 1354.0, 1329.0, 1427.0, 1163.0, 1339.0, 1351.0, 1174.0, 1235.0, 1439.0, 1235.0, 1245.0, 1262.0, 1362.0, 1184.0, \n 1207.0, 1359.0, 1443.0, 1205.0, 1192.0, 1364.0, 1233.0, 1281.0, 1295.0, 1357.0, 1191.0, 1329.0, 1294.0, 1334.0, 1265.0, 1207.0, \n 1365.0, 1432.0, 1199.0, 1191.0, 1411.0, 1294.0, 1244.0, 1256.0, 1257.0, 1224.0, 1277.0, 1246.0, 1243.0, 1194.0, 1389.0, 1366.0, \n 1282.0, 1221.0, 1255.0, 1417.0, 1358.0, 1264.0, 1205.0, 1254.0, 1276.0, 1435.0, 1335.0, 1355.0, 1337.0, 1197.0, 1423.0, 1194.0, \n 1310.0, 1255.0, 1300.0, 1388.0, 1385.0, 1255.0, 1434.0, 1232.0, 1402.0, 1435.0, 1160.0, 1193.0, 1422.0, 1235.0, 1219.0, 1410.0, \n 1363.0, 1361.0, 1437.0, 1407.0, 1164.0, 1392.0, 1408.0, 1196.0, 1430.0, 1264.0, 1289.0, 1434.0, 1216.0, 1340.0, 1327.0, 1230.0, \n 1362.0, 1360.0, 1448.0, 1220.0, 1435.0, 1425.0, 1413.0, 1279.0, 1269.0, 1162.0, 1437.0, 1441.0, 1433.0, 1307.0, 1436.0, 1357.0, \n 1437.0, 1308.0, 1207.0, 1420.0, 1338.0, 1311.0, 1328.0, 1417.0, 1394.0, 1336.0, 1160.0, 1231.0, 1422.0, 1294.0, 1434.0, 1289.0]\nnb_periods = len(raw_demand)\nprint(\"nb periods = {}\".format(nb_periods))\n\ndemand = Series(raw_demand, index = range(1, nb_periods+1))\n\n# plot demand\ndemand.plot(title=\"Demand\")", "_____no_output_____" ] ], [ [ "### Step 4: Set up the prescriptive model", "_____no_output_____" ] ], [ [ "from docplex.mp.environment import Environment\nenv = Environment()\nenv.print_information()", "_____no_output_____" ] ], [ [ "#### Create the DOcplex model\nThe model contains all the business constraints and defines the objective.", "_____no_output_____" ] ], [ [ "from docplex.mp.model import Model\n\nucpm = 
Model(\"ucp\")", "_____no_output_____" ] ], [ [ "#### Define the decision variables\n\nDecision variables are:\n\n- The variable *in_use[u,t]* is 1 if and only if unit _u_ is working at period _t_.\n- The variable *turn_on[u,t]* is 1 if and only if unit _u_ is in production at period _t_.\n- The variable *turn_off[u,t]* is 1 if unit _u_ is switched off at period _t_.\n- The variable *production[u,t]* is a continuous variables representing the production of energy for unit _u_ at period _t_.", "_____no_output_____" ] ], [ [ "units = all_units\n# periods range from 1 to nb_periods included\nperiods = range(1, nb_periods+1)\n\n# in use[u,t] is true iff unit u is in production at period t\nin_use = ucpm.binary_var_matrix(keys1=units, keys2=periods, name=\"in_use\")\n\n# true if unit u is turned on at period t\nturn_on = ucpm.binary_var_matrix(keys1=units, keys2=periods, name=\"turn_on\")\n\n# true if unit u is switched off at period t\n# modeled as a continuous 0-1 variable, more on this later\nturn_off = ucpm.continuous_var_matrix(keys1=units, keys2=periods, lb=0, ub=1, name=\"turn_off\")\n\n# production of energy for unit u at period t\nproduction = ucpm.continuous_var_matrix(keys1=units, keys2=periods, name=\"p\")\n\n# at this stage we have defined the decision variables.\nucpm.print_information()", "_____no_output_____" ], [ "# Organize all decision variables in a DataFrame indexed by 'units' and 'periods'\ndf_decision_vars = DataFrame({'in_use': in_use, 'turn_on': turn_on, 'turn_off': turn_off, 'production': production})\n# Set index names\ndf_decision_vars.index.names=['units', 'periods']\n\n# Display first few rows of 'df_decision_vars' DataFrame\ndf_decision_vars.head()", "_____no_output_____" ] ], [ [ "#### Express the business constraints\n\n##### Linking in-use status to production\n\nWhenever the unit is in use, the production must be within the minimum and maximum generation.\n", "_____no_output_____" ] ], [ [ "# Create a join between 'df_decision_vars' and 'df_up' Data Frames based on common index id (ie: 'units')\n# In 'df_up', one keeps only relevant columns: 'min_gen' and 'max_gen'\ndf_join_decision_vars_up = df_decision_vars.join(df_up[['min_gen', 'max_gen']], how='inner')\n\n# Display first few rows of joined Data Frames\ndf_join_decision_vars_up.head()", "_____no_output_____" ], [ "import pandas as pb\nprint(pd.__version__)\n", "_____no_output_____" ], [ "# When in use, the production level is constrained to be between min and max generation.\nfor item in df_join_decision_vars_up.itertuples(index=False):\n ucpm += (item.production <= item.max_gen * item.in_use)\n ucpm += (item.production >= item.min_gen * item.in_use)", "_____no_output_____" ] ], [ [ "##### Initial state\nThe solution must take into account the initial state. 
The initial state of use of the unit is determined by its initial production level.", "_____no_output_____" ] ], [ [ "# Initial state\n# If initial production is nonzero, then period #1 is not a turn_on\n# else turn_on equals in_use\n# Dual logic is implemented for turn_off\nfor u in units:\n if df_up.initial[u] > 0:\n # if u is already running, not starting up\n ucpm.add_constraint(turn_on[u, 1] == 0)\n # turnoff iff not in use\n ucpm.add_constraint(turn_off[u, 1] + in_use[u, 1] == 1)\n else:\n # turn on at 1 iff in use at 1\n ucpm.add_constraint(turn_on[u, 1] == in_use[u, 1])\n # already off, not switched off at t==1\n ucpm.add_constraint(turn_off[u, 1] == 0)\nucpm.print_information()", "_____no_output_____" ] ], [ [ "##### Ramp-up / ramp-down constraint\nVariations of the production level over time in a unit is constrained by a ramp-up / ramp-down process.\n\nWe use the *pandas* *groupby* operation to collect all decision variables for each unit in separate series. Then, we iterate over units to post constraints enforcing the ramp-up / ramp-down process by setting upper bounds on the variation of the production level for consecutive periods.", "_____no_output_____" ] ], [ [ "# Use groupby operation to process each unit\nfor unit, r in df_decision_vars.groupby(level='units'):\n u_ramp_up = df_up.ramp_up[unit]\n u_ramp_down = df_up.ramp_down[unit]\n u_initial = df_up.initial[unit]\n # Initial ramp up/down\n # Note that r.production is a Series that can be indexed as an array (ie: first item index = 0)\n ucpm.add_constraint(r.production[0] - u_initial <= u_ramp_up)\n ucpm.add_constraint(u_initial - r.production[0] <= u_ramp_down)\n for (p_curr, p_next) in zip(r.production, r.production[1:]):\n ucpm.add_constraint(p_next - p_curr <= u_ramp_up)\n ucpm.add_constraint(p_curr - p_next <= u_ramp_down)\n\nucpm.print_information()", "_____no_output_____" ] ], [ [ "##### Turn on / turn off\nThe following constraints determine when a unit is turned on or off.\n\nWe use the same *pandas* *groupby* operation as in the previous constraint to iterate over the sequence of decision variables for each unit.", "_____no_output_____" ] ], [ [ "# Turn_on, turn_off\n# Use groupby operation to process each unit\nfor unit, r in df_decision_vars.groupby(level='units'):\n for (in_use_curr, in_use_next, turn_on_next, turn_off_next) in zip(r.in_use, r.in_use[1:], r.turn_on[1:], r.turn_off[1:]):\n # if unit is off at time t and on at time t+1, then it was turned on at time t+1\n ucpm.add_constraint(in_use_next - in_use_curr <= turn_on_next)\n\n # if unit is on at time t and time t+1, then it was not turned on at time t+1\n # mdl.add_constraint(in_use_next + in_use_curr + turn_on_next <= 2)\n\n # if unit is on at time t and off at time t+1, then it was turned off at time t+1\n ucpm.add_constraint(in_use_curr - in_use_next + turn_on_next == turn_off_next)\nucpm.print_information() ", "_____no_output_____" ] ], [ [ "##### Minimum uptime and downtime\nWhen a unit is turned on, it cannot be turned off before a *minimum uptime*. 
Conversely, when a unit is turned off, it cannot be turned on again before a *minimum downtime*.\n\nAgain, let's use the same *pandas* *groupby* operation to implement this constraint for each unit.", "_____no_output_____" ] ], [ [ "# Minimum uptime, downtime\nfor unit, r in df_decision_vars.groupby(level='units'):\n min_uptime = df_up.min_uptime[unit]\n min_downtime = df_up.min_downtime[unit]\n # Note that r.turn_on and r.in_use are Series that can be indexed as arrays (ie: first item index = 0)\n for t in range(min_uptime, nb_periods):\n ctname = \"min_up_{0!s}_{1}\".format(*r.index[t])\n ucpm.add_constraint(ucpm.sum(r.turn_on[(t - min_uptime) + 1:t + 1]) <= r.in_use[t], ctname)\n\n for t in range(min_downtime, nb_periods):\n ctname = \"min_down_{0!s}_{1}\".format(*r.index[t])\n ucpm.add_constraint(ucpm.sum(r.turn_off[(t - min_downtime) + 1:t + 1]) <= 1 - r.in_use[t], ctname)\n", "_____no_output_____" ] ], [ [ "##### Demand constraint\nTotal production level must be equal or higher than demand on any period.\n\nThis time, the *pandas* operation *groupby* is performed on *\"periods\"* since we have to iterate over the list of all units for each period.", "_____no_output_____" ] ], [ [ "# Enforcing demand\n# we use a >= here to be more robust, \n# objective will ensure we produce efficiently\nfor period, r in df_decision_vars.groupby(level='periods'):\n total_demand = demand[period]\n ctname = \"ct_meet_demand_%d\" % period\n ucpm.add_constraint(ucpm.sum(r.production) >= total_demand, ctname) ", "_____no_output_____" ] ], [ [ "#### Express the objective\n\nOperating the different units incur different costs: fixed cost, variable cost, startup cost, co2 cost.\n\nIn a first step, we define the objective as a non-weighted sum of all these costs.\n\nThe following *pandas* *join* operation groups all the data to calculate the objective in a single DataFrame.", "_____no_output_____" ] ], [ [ "# Create a join between 'df_decision_vars' and 'df_up' Data Frames based on common index ids (ie: 'units')\n# In 'df_up', one keeps only relevant columns: 'fixed_cost', 'variable_cost', 'start_cost' and 'co2_cost'\ndf_join_obj = df_decision_vars.join(\n df_up[['fixed_cost', 'variable_cost', 'start_cost', 'co2_cost']], how='inner')\n\n# Display first few rows of joined Data Frame\ndf_join_obj.head()", "_____no_output_____" ], [ "# objective\ntotal_fixed_cost = ucpm.sum(df_join_obj.in_use * df_join_obj.fixed_cost)\ntotal_variable_cost = ucpm.sum(df_join_obj.production * df_join_obj.variable_cost)\ntotal_startup_cost = ucpm.sum(df_join_obj.turn_on * df_join_obj.start_cost)\ntotal_co2_cost = ucpm.sum(df_join_obj.production * df_join_obj.co2_cost)\ntotal_economic_cost = total_fixed_cost + total_variable_cost + total_startup_cost\n\ntotal_nb_used = ucpm.sum(df_decision_vars.in_use)\ntotal_nb_starts = ucpm.sum(df_decision_vars.turn_on)\n\n# store expression kpis to retrieve them later.\nucpm.add_kpi(total_fixed_cost , \"Total Fixed Cost\")\nucpm.add_kpi(total_variable_cost, \"Total Variable Cost\")\nucpm.add_kpi(total_startup_cost , \"Total Startup Cost\")\nucpm.add_kpi(total_economic_cost, \"Total Economic Cost\")\nucpm.add_kpi(total_co2_cost , \"Total CO2 Cost\")\nucpm.add_kpi(total_nb_used, \"Total #used\")\nucpm.add_kpi(total_nb_starts, \"Total #starts\")\n\n# minimize sum of all costs\nucpm.minimize(total_fixed_cost + total_variable_cost + total_startup_cost + total_co2_cost)", "_____no_output_____" ] ], [ [ "#### Solve with Decision Optimization\n\nIf you're using a Community Edition of CPLEX runtimes, 
depending on the size of the problem, the solve stage may fail and will require a paid subscription or product installation.", "_____no_output_____" ] ], [ [ "ucpm.print_information()", "_____no_output_____" ], [ "assert ucpm.solve(), \"!!! Solve of the model fails\"", "_____no_output_____" ], [ "ucpm.report()", "_____no_output_____" ] ], [ [ "### Step 5: Investigate the solution and then run an example analysis\n\nNow let's store the results in a new *pandas* DataFrame.\n\nFor convenience, the different figures are organized into pivot tables with *periods* as row index and *units* as columns. The *pandas* *unstack* operation does this for us.", "_____no_output_____" ] ], [ [ "df_prods = df_decision_vars.production.apply(lambda v: v.solution_value).unstack(level='units')\ndf_used = df_decision_vars.in_use.apply(lambda v: v.solution_value).unstack(level='units')\ndf_started = df_decision_vars.turn_on.apply(lambda v: v.solution_value).unstack(level='units')\n\n# Display the first few rows of the pivoted 'production' data\ndf_prods.head()", "_____no_output_____" ] ], [ [ "From these raw DataFrame results, we can compute _derived_ results.\nFor example, for a given unit and period, the _reserve_ r(u,t) is defined as\nthe unit's maximum generation minus the current production.", "_____no_output_____" ] ], [ [ "df_spins = DataFrame(df_up.max_gen.to_dict(), index=periods) - df_prods\n\n# Display the first few rows of the 'df_spins' Data Frame, representing the reserve for each unit, over time\ndf_spins.head()", "_____no_output_____" ] ], [ [ "Let's plot the evolution of the reserves for the *\"coal2\"* unit:", "_____no_output_____" ] ], [ [ "df_spins.coal2.plot(style='o-', ylim=[0,200])", "_____no_output_____" ] ], [ [ "Now we want to sum all unit reserves to compute the _global_ spinning reserve.\nWe need to sum all columns of the DataFrame to get an aggregated time series. 
We use the *pandas* **sum** method\nwith axis=1 (for rows).", "_____no_output_____" ] ], [ [ "global_spin = df_spins.sum(axis=1)\nglobal_spin.plot(title=\"Global spinning reserve\")", "_____no_output_____" ] ], [ [ "#### Number of plants online by period\n\nThe total number of plants online at each period t is the sum of in_use variables for all units at this period.\nAgain, we use the *pandas* sum with axis=1 (for rows) to sum over all units.", "_____no_output_____" ] ], [ [ "df_used.sum(axis=1).plot(title=\"Number of plants online\", kind='line', style=\"r-\", ylim=[0, len(units)])", "_____no_output_____" ] ], [ [ "#### Costs by period", "_____no_output_____" ] ], [ [ "# extract unit cost data\nall_costs = [\"fixed_cost\", \"variable_cost\", \"start_cost\", \"co2_cost\"]\ndf_costs = df_up[all_costs]\n\nrunning_cost = df_used * df_costs.fixed_cost\nstartup_cost = df_started * df_costs.start_cost\nvariable_cost = df_prods * df_costs.variable_cost\nco2_cost = df_prods * df_costs.co2_cost\ntotal_cost = running_cost + startup_cost + variable_cost + co2_cost\n\nrunning_cost.sum(axis=1).plot(style='g')\nstartup_cost.sum(axis=1).plot(style='r')\nvariable_cost.sum(axis=1).plot(style='b',logy=True)\nco2_cost.sum(axis=1).plot(style='k')", "_____no_output_____" ] ], [ [ "#### Cost breakdown by unit and by energy", "_____no_output_____" ] ], [ [ "# Calculate sum by column (by default, axis = 0) to get total cost for each unit\ncost_by_unit = total_cost.sum()\n\n# Create a dictionary storing energy type for each unit, from the corresponding pandas Series\nunit_energies = df_up.energy.to_dict()\n\n# Group cost by unit type and plot total cost by energy type in a pie chart\ngb = cost_by_unit.groupby(unit_energies)\n# gb.sum().plot(kind='pie')\ngb.sum().plot.pie(figsize=(6, 6),autopct='%.2f',fontsize=15)\n\nplt.title('total cost by energy type', bbox={'facecolor':'0.8', 'pad':5})", "_____no_output_____" ] ], [ [ "### Arbitration between CO<sub>2</sub> cost and economic cost\n\nEconomic cost and CO<sub>2</sub> cost usually push in opposite directions.\nIn the above discussion, we have minimized the raw sum of economic cost and CO<sub>2</sub> cost, without weights.\nBut how good could we be on CO<sub>2</sub>, regardless of economic constraints? 
\nTo know this, let's solve again with CO<sub>2</sub> cost as the only objective.\n", "_____no_output_____" ] ], [ [ "# first retrieve the co2 and economic kpis\nco2_kpi = ucpm.kpi_by_name(\"co2\") # does a name matching\neco_kpi = ucpm.kpi_by_name(\"eco\")\nprev_co2_cost = co2_kpi.compute()\nprev_eco_cost = eco_kpi.compute()\nprint(\"* current CO2 cost is: {}\".format(prev_co2_cost))\nprint(\"* current $$$ cost is: {}\".format(prev_eco_cost))\n# now set the objective\nold_objective = ucpm.objective_expr # save it\nucpm.minimize(co2_kpi.as_expression())", "_____no_output_____" ], [ "assert ucpm.solve(), \"Solve failed\"", "_____no_output_____" ], [ "min_co2_cost = ucpm.objective_value\nmin_co2_eco_cost = eco_kpi.compute()\nprint(\"* absolute minimum for CO2 cost is {}\".format(min_co2_cost))\nprint(\"* at this point $$$ cost is {}\".format(min_co2_eco_cost))", "_____no_output_____" ] ], [ [ "As expected, we get a significantly lower CO<sub>2</sub> cost when minimized alone, at the price of a higher economic cost.\n\nWe could do a similar analysis for economic cost to estimate the absolute minimum of\nthe economic cost, regardless of CO<sub>2</sub> cost.", "_____no_output_____" ] ], [ [ "# minimize only economic cost\nucpm.minimize(eco_kpi.as_expression())", "_____no_output_____" ], [ "assert ucpm.solve(), \"Solve failed\"", "_____no_output_____" ], [ "min_eco_cost = ucpm.objective_value\nmin_eco_co2_cost = co2_kpi.compute()\nprint(\"* absolute minimum for $$$ cost is {}\".format(min_eco_cost))\nprint(\"* at this point CO2 cost is {}\".format(min_eco_co2_cost))", "_____no_output_____" ] ], [ [ "Again, the absolute minimum for economic cost is lower than the figure we obtained in the original model where we minimized the _sum_ of economic and CO<sub>2</sub> costs, but here we significantly increase the CO<sub>2</sub>.\n\nBut what happens in between these two extreme points?\n\nTo investigate, we will divide the interval of CO<sub>2</sub> cost values in smaller intervals, add an upper limit on CO<sub>2</sub>,\nand minimize economic cost with this constraint. This will give us a Pareto optimal point with at most this CO<sub>2</sub> value.\n\nTo avoid adding many constraints, we add only one constraint with an extra variable, and we change only the upper bound\nof this CO<sub>2</sub> limit variable between successive solves.\n\nThen we iterate (with a fixed number of iterations) and collect the cost values. 
", "_____no_output_____" ] ], [ [ "# add extra variable\nco2_limit = ucpm.continuous_var(lb=0)\n# add a named constraint which limits total co2 cost to this variable:\nmax_co2_ctname = \"ct_max_co2\"\nco2_ct = ucpm.add_constraint(co2_kpi.as_expression() <= co2_limit, max_co2_ctname) ", "_____no_output_____" ], [ "co2min = min_co2_cost\nco2max = min_eco_co2_cost\ndef explore_ucp(nb_iters, eps=1e-5):\n\n step = (co2max-co2min)/float(nb_iters)\n co2_ubs = [co2min + k * step for k in range(nb_iters+1)]\n\n # ensure we minimize eco\n ucpm.minimize(eco_kpi.as_expression())\n all_co2s = []\n all_ecos = []\n for k in range(nb_iters+1):\n co2_ub = co2min + k * step\n print(\" iteration #{0} co2_ub={1}\".format(k, co2_ub))\n co2_limit.ub = co2_ub + eps\n assert ucpm.solve() is not None, \"Solve failed\"\n cur_co2 = co2_kpi.compute()\n cur_eco = eco_kpi.compute()\n all_co2s.append(cur_co2)\n all_ecos.append(cur_eco)\n return all_co2s, all_ecos", "_____no_output_____" ], [ "#explore the co2/eco frontier in 50 points\nco2s, ecos = explore_ucp(nb_iters=50)", "_____no_output_____" ], [ "# normalize all values by dividing by their maximum\neco_max = min_co2_eco_cost\nnxs = [c / co2max for c in co2s]\nnys = [e / eco_max for e in ecos]\n# plot a scatter chart of x=co2, y=costs\nplt.scatter(nxs, nys)\n# plot as one point\nplt.plot(prev_co2_cost/co2max, prev_eco_cost/eco_max, \"rH\", markersize=16)\nplt.xlabel(\"co2 cost\")\nplt.ylabel(\"economic cost\")\nplt.show()", "_____no_output_____" ] ], [ [ "This figure demonstrates that the result obtained in the initial model clearly favored\neconomic cost over CO<sub>2</sub> cost: CO<sub>2</sub> cost is well above 95% of its maximum value.", "_____no_output_____" ], [ "## Summary\n\nYou learned how to set up and use IBM Decision Optimization CPLEX Modeling for Python to formulate a Mathematical Programming model and solve it with IBM Decision Optimization on Cloud.", "_____no_output_____" ], [ "#### References\n* [CPLEX Modeling for Python documentation](https://rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html)\n* [Decision Optimization on Cloud](https://developer.ibm.com/docloud/)\n* Need help with DOcplex or to report a bug? Please go [here](https://developer.ibm.com/answers/smartspace/docloud).\n* Contact us at [email protected].", "_____no_output_____" ], [ "Copyright © 2017-2018 IBM. IPLA licensed Sample Materials.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
e710a31aad8138eaf0d9ad0435fd780ceb37ce9c
12,439
ipynb
Jupyter Notebook
Data_Series/Data_Series3.ipynb
MarekKras/Analiza_Dannych_01
11554348ab50736817bd2a96671680bb9a820648
[ "Unlicense" ]
null
null
null
Data_Series/Data_Series3.ipynb
MarekKras/Analiza_Dannych_01
11554348ab50736817bd2a96671680bb9a820648
[ "Unlicense" ]
null
null
null
Data_Series/Data_Series3.ipynb
MarekKras/Analiza_Dannych_01
11554348ab50736817bd2a96671680bb9a820648
[ "Unlicense" ]
null
null
null
21.899648
1,517
0.494493
[ [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport math as math", "_____no_output_____" ], [ "monotonicList = (1,2,4,67,99)\nmonotonicSeries = pd.Series(monotonicList)\nmonotonicSeries", "_____no_output_____" ], [ "monotonicSeries.sum()", "_____no_output_____" ], [ "monotonicSeries.min()", "_____no_output_____" ], [ "monotonicSeries.max()", "_____no_output_____" ], [ "monotonicSeries.mean()", "_____no_output_____" ], [ "monotonicSeries.count()", "_____no_output_____" ], [ "monotonicSeries.size()", "_____no_output_____" ], [ "monotonicSeries.size", "_____no_output_____" ], [ "monotonicSeries.product()", "_____no_output_____" ], [ "monotonicSeries.index", "_____no_output_____" ], [ "monotonicSeries.keys()", "_____no_output_____" ], [ "monotonicSeries.values", "_____no_output_____" ], [ "monotonicSeries.get_values()\n", "_____no_output_____" ], [ "monotonicSeries.to_list()", "_____no_output_____" ], [ "monotonicSeries.add(10)", "_____no_output_____" ], [ "monotonicSeries", "_____no_output_____" ], [ "newSeries = monotonicSeries.add(10)", "_____no_output_____" ], [ "newSeries", "_____no_output_____" ], [ "currencies = ['USD', 'EUR', 'PLN', 'EUR','EUR']\ncountries = ['USA', 'Spain', 'Poland', 'Portugal', 'Italy']", "_____no_output_____" ], [ "countrySeries = pd.Series(countries,currencies)\ncountrySeries", "_____no_output_____" ], [ "curSeries = pd.Series(data = countries, index = currencies)\ncurSeries", "_____no_output_____" ], [ "dicCoutry = {'USD':'USA', 'USD':'USA'}", "_____no_output_____" ], [ "dicCoutry", "_____no_output_____" ], [ "dicCoutry = {'USD':'USA', 'USD':'Ecuador'}", "_____no_output_____" ], [ "dicCoutry", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e710b102847438050d5b534ab3d2fe2eac89f587
40,123
ipynb
Jupyter Notebook
src/plotter/jupyter/deep-recurrence-plot.ipynb
code-rius/data-randomness-and-regularities
ac54c03ac00aef32f84ccc8d6208ee50dda87bd1
[ "MIT" ]
null
null
null
src/plotter/jupyter/deep-recurrence-plot.ipynb
code-rius/data-randomness-and-regularities
ac54c03ac00aef32f84ccc8d6208ee50dda87bd1
[ "MIT" ]
null
null
null
src/plotter/jupyter/deep-recurrence-plot.ipynb
code-rius/data-randomness-and-regularities
ac54c03ac00aef32f84ccc8d6208ee50dda87bd1
[ "MIT" ]
null
null
null
90.776018
14,536
0.756698
[ [ [ "import numpy as np\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras.models import Sequential, load_model\nfrom tensorflow.keras.layers import Activation, Dense, Flatten, BatchNormalization, Dropout, Conv2D, MaxPool2D\nfrom tensorflow.keras.optimizers import Adam\nfrom tensorflow.keras.metrics import categorical_crossentropy\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\nfrom sklearn.metrics import plot_confusion_matrix, confusion_matrix \nfrom sklearn.metrics import accuracy_score\nfrom sklearn.utils import shuffle\nimport itertools\nimport os\nimport shutil\nimport random\nimport glob\nimport matplotlib.pyplot as plt \nimport warnings\nwarnings.simplefilter(action='ignore',category=FutureWarning)\n%matplotlib inline", "_____no_output_____" ], [ "# Optional - enable GPU accelleration\nphysical_devices = tf.config.experimental.list_physical_devices('GPU')\nprint(\"Num GPUs Available: \", len(physical_devices))\ntf.config.experimental.set_memory_growth(physical_devices[0], True)", "Num GPUs Available: 1\n" ], [ "os.chdir('data/')\nif os.path.isdir('train/chaotic') is False:\n os.makedirs('train/chaotic')\n os.makedirs('train/periodic')\n os.makedirs('train/trend')\n os.makedirs('valid/chaotic')\n os.makedirs('valid/periodic')\n os.makedirs('valid/trend')\n os.makedirs('test/chaotic')\n os.makedirs('test/periodic')\n os.makedirs('test/trend')\n \n for c in random.sample(glob.glob('chaotic*'), 850):\n shutil.move(c, 'train/chaotic')\n for c in random.sample(glob.glob('periodic*'), 850):\n shutil.move(c, 'train/periodic')\n for c in random.sample(glob.glob('trend*'), 850):\n shutil.move(c, 'train/trend')\n for c in random.sample(glob.glob('chaotic*'), 100):\n shutil.move(c, 'valid/chaotic')\n for c in random.sample(glob.glob('periodic*'), 100):\n shutil.move(c, 'valid/periodic')\n for c in random.sample(glob.glob('trend*'), 100):\n shutil.move(c, 'valid/trend')\n for c in random.sample(glob.glob('chaotic*'), 50):\n shutil.move(c, 'test/chaotic')\n for c in random.sample(glob.glob('periodic*'), 50):\n shutil.move(c, 'test/periodic')\n for c in random.sample(glob.glob('trend*'), 50):\n shutil.move(c, 'test/trend')\n \nos.chdir('../')", "_____no_output_____" ], [ "train_path = 'data/train'\nvalid_path = 'data/valid'\ntest_path = 'data/test'", "_____no_output_____" ], [ "train_batches = ImageDataGenerator(preprocessing_function=tf.keras.applications.vgg16.preprocess_input) \\\n.flow_from_directory(directory=train_path, target_size=(224,224), classes=['periodic', 'trend', 'chaotic'], batch_size=10) \nvalid_batches = ImageDataGenerator(preprocessing_function=tf.keras.applications.vgg16.preprocess_input) \\\n.flow_from_directory(directory=valid_path, target_size=(224,224), classes=['periodic', 'trend', 'chaotic'], batch_size=10)\ntest_batches = ImageDataGenerator(preprocessing_function=tf.keras.applications.vgg16.preprocess_input) \\\n.flow_from_directory(directory=test_path, target_size=(224,224), classes=['periodic', 'trend', 'chaotic'], batch_size=10, shuffle=False)", "Found 2550 images belonging to 3 classes.\nFound 300 images belonging to 3 classes.\nFound 150 images belonging to 3 classes.\n" ], [ "print(test_batches.class_indices)", "{'periodic': 0, 'trend': 1, 'chaotic': 2}\n" ], [ "model = Sequential([\n Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same', input_shape=(224,224,3)),\n Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'),\n MaxPool2D(pool_size=(2,2), strides=2),\n Dropout(0.25),\n 
Conv2D(filters=128, kernel_size=(3,3), activation='relu', padding='same'),\n Conv2D(filters=128, kernel_size=(3,3), activation='relu', padding='same'),\n MaxPool2D(pool_size=(2,2), strides=2),\n Dropout(0.25),\n Conv2D(filters=256, kernel_size=(3,3), activation='relu', padding='same'),\n Conv2D(filters=256, kernel_size=(3,3), activation='relu', padding='same'),\n MaxPool2D(pool_size=(2,2), strides=2),\n Dropout(0.25),\n Conv2D(filters=512, kernel_size=(3,3), activation='relu', padding='same'),\n Conv2D(filters=512, kernel_size=(3,3), activation='relu', padding='same'),\n MaxPool2D(pool_size=(2,2), strides=2),\n Dropout(0.25),\n Flatten(),\n Dense(units=3, activation='softmax')\n])", "_____no_output_____" ], [ "model.summary()", "Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d (Conv2D) (None, 224, 224, 64) 1792 \n_________________________________________________________________\nconv2d_1 (Conv2D) (None, 224, 224, 64) 36928 \n_________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 112, 112, 64) 0 \n_________________________________________________________________\ndropout (Dropout) (None, 112, 112, 64) 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 112, 112, 128) 73856 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 112, 112, 128) 147584 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 56, 56, 128) 0 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 56, 56, 128) 0 \n_________________________________________________________________\nconv2d_4 (Conv2D) (None, 56, 56, 256) 295168 \n_________________________________________________________________\nconv2d_5 (Conv2D) (None, 56, 56, 256) 590080 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 28, 28, 256) 0 \n_________________________________________________________________\ndropout_2 (Dropout) (None, 28, 28, 256) 0 \n_________________________________________________________________\nconv2d_6 (Conv2D) (None, 28, 28, 512) 1180160 \n_________________________________________________________________\nconv2d_7 (Conv2D) (None, 28, 28, 512) 2359808 \n_________________________________________________________________\nmax_pooling2d_3 (MaxPooling2 (None, 14, 14, 512) 0 \n_________________________________________________________________\ndropout_3 (Dropout) (None, 14, 14, 512) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 100352) 0 \n_________________________________________________________________\ndense (Dense) (None, 3) 301059 \n=================================================================\nTotal params: 4,986,435\nTrainable params: 4,986,435\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "model.compile(optimizer=Adam(learning_rate=0.001), \n loss='categorical_crossentropy', metrics=['accuracy'])", "_____no_output_____" ], [ "model.fit(x=train_batches, validation_data=valid_batches, epochs=10, verbose=2)", "Epoch 1/10\n" ] ], [ [ "# Predict", "_____no_output_____" ] ], [ [ "predictions = model.predict(x=test_batches, verbose=0)", "_____no_output_____" ], [ "from sklearn.metrics import confusion_matrix\nfrom 
sklearn.metrics import plot_confusion_matrix\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.svm import SVC\nimport sklearn.svm\n\nclass_names=['periodic','trending','chaotic']\ntitle = \"Recurrence plot model confusion matrix\"\nclassifier = SVC(kernel='linear', C=1).fit(predictions, test_batches.classes)\nnp.set_printoptions(precision=2)\n\ntitles_options = [(\"Normallaized confusion matrix\", \"true\")]\n\ndisp=plot_confusion_matrix(classifier, predictions, test_batches.classes,\n display_labels=class_names,\n cmap=plt.cm.Blues)\n\ndisp.ax_.set_title(title)\nplt.savefig('confusion_matrix.png')\nplt.show()", "_____no_output_____" ], [ "if os.path.isfile('models/recurrence_plot_model.h5') is False:\n model.save('models/recurrence_plot_model.h5')", "_____no_output_____" ], [ "model = load_model('models/recurrence_plot_model.h5')", "_____no_output_____" ], [ "model.summary()", "Model: \"sequential_16\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_140 (Conv2D) (None, 224, 224, 64) 1792 \n_________________________________________________________________\nconv2d_141 (Conv2D) (None, 224, 224, 64) 36928 \n_________________________________________________________________\nmax_pooling2d_59 (MaxPooling (None, 112, 112, 64) 0 \n_________________________________________________________________\ndropout_16 (Dropout) (None, 112, 112, 64) 0 \n_________________________________________________________________\nconv2d_142 (Conv2D) (None, 112, 112, 128) 73856 \n_________________________________________________________________\nconv2d_143 (Conv2D) (None, 112, 112, 128) 147584 \n_________________________________________________________________\nmax_pooling2d_60 (MaxPooling (None, 56, 56, 128) 0 \n_________________________________________________________________\ndropout_17 (Dropout) (None, 56, 56, 128) 0 \n_________________________________________________________________\nconv2d_144 (Conv2D) (None, 56, 56, 256) 295168 \n_________________________________________________________________\nconv2d_145 (Conv2D) (None, 56, 56, 256) 590080 \n_________________________________________________________________\nconv2d_146 (Conv2D) (None, 56, 56, 256) 590080 \n_________________________________________________________________\nmax_pooling2d_61 (MaxPooling (None, 28, 28, 256) 0 \n_________________________________________________________________\ndropout_18 (Dropout) (None, 28, 28, 256) 0 \n_________________________________________________________________\nconv2d_147 (Conv2D) (None, 28, 28, 512) 1180160 \n_________________________________________________________________\nconv2d_148 (Conv2D) (None, 28, 28, 512) 2359808 \n_________________________________________________________________\nconv2d_149 (Conv2D) (None, 28, 28, 512) 2359808 \n_________________________________________________________________\nmax_pooling2d_62 (MaxPooling (None, 14, 14, 512) 0 \n_________________________________________________________________\nflatten_16 (Flatten) (None, 100352) 0 \n_________________________________________________________________\ndense_28 (Dense) (None, 3) 301059 \n=================================================================\nTotal params: 7,936,323\nTrainable params: 7,936,323\nNon-trainable params: 0\n_________________________________________________________________\n" ] ] ]
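The confusion matrix above is built by fitting a linear SVC on the network's softmax outputs. As an alternative sketch (not part of the original notebook, and assuming the `predictions` and `test_batches` objects defined above), the matrix can also be computed directly from the predicted class indices:

```python
# Compare the argmax of the softmax outputs with the generator's true labels directly.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

y_pred = np.argmax(predictions, axis=-1)  # predicted class index per test image
y_true = test_batches.classes             # true labels (valid because shuffle=False)

print(confusion_matrix(y_true, y_pred))
print("accuracy:", accuracy_score(y_true, y_pred))
```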
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
e710d33d615eec10b7c1bcc7c222ee9ddeca271a
25,517
ipynb
Jupyter Notebook
examples/tmaze_demo.ipynb
spetey/pymdp
1623215c133fe32b9f79dc628237fb006f34c013
[ "MIT" ]
29
2020-04-30T21:26:53.000Z
2020-11-18T20:31:23.000Z
examples/tmaze_demo.ipynb
spetey/pymdp
1623215c133fe32b9f79dc628237fb006f34c013
[ "MIT" ]
null
null
null
examples/tmaze_demo.ipynb
spetey/pymdp
1623215c133fe32b9f79dc628237fb006f34c013
[ "MIT" ]
3
2020-04-30T17:22:53.000Z
2020-11-20T09:46:50.000Z
44.070812
859
0.658541
[ [ [ "# Active Inference Demo: T-Maze Environment\nThis demo notebook provides a full walk-through of active inference using the `Agent()` class of `pymdp`. The canonical example used here is the 'T-maze' task, often used in the active inference literature in discussions of epistemic behavior (see, for example, [\"Active Inference and Epistemic Value\"](https://pubmed.ncbi.nlm.nih.gov/25689102/))", "_____no_output_____" ], [ "### Imports\n\nFirst, import `pymdp` and the modules we'll need.", "_____no_output_____" ] ], [ [ "import os\nimport sys\nimport pathlib\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport copy\n\npath = pathlib.Path(os.getcwd())\nmodule_path = str(path.parent) + '/'\nsys.path.append(module_path)\n\nfrom pymdp.agent import Agent\nfrom pymdp import utils\nfrom pymdp.envs import TMazeEnv", "_____no_output_____" ] ], [ [ "### Auxiliary Functions\n\nDefine some utility functions that will be helpful for plotting.", "_____no_output_____" ] ], [ [ "def plot_beliefs(belief_dist, title=\"\"):\n plt.grid(zorder=0)\n plt.bar(range(belief_dist.shape[0]), belief_dist, color='r', zorder=3)\n plt.xticks(range(belief_dist.shape[0]))\n plt.title(title)\n plt.show()\n \ndef plot_likelihood(A, title=\"\"):\n ax = sns.heatmap(A, cmap=\"OrRd\", linewidth=2.5)\n plt.xticks(range(A.shape[1]))\n plt.yticks(range(A.shape[0]))\n plt.title(title)\n plt.show()", "_____no_output_____" ] ], [ [ "## Environment\n\nHere we consider an agent navigating a three-armed 'T-maze,' with the agent starting in a central location of the maze. The bottom arm of the maze contains an informative cue, which signals in which of the two top arms ('Left' or 'Right', the ends of the 'T') a reward is likely to be found. \n\nAt each timestep, the environment is described by the joint occurrence of two qualitatively-different 'kinds' of states (hereafter referred to as _hidden state factors_). These hidden state factors are independent of one another.\n\nWe represent the first hidden state factor (`Location`) as a $ 1 \\ x \\ 4 $ vector that encodes the current position of the agent, and can take the following values: {`CENTER`, `RIGHT ARM`, `LEFT ARM`, or `CUE LOCATION`}. For example, if the agent is in the `CUE LOCATION`, the current state of this factor would be $s_1 = [0 \\ 0 \\ 0 \\ 1]$.\n\nWe represent the second hidden state factor (`Reward Condition`) as a $ 1 \\ x \\ 2 $ vector that encodes the reward condition of the trial: {`Reward on Right`, or `Reward on Left`}. A trial where the condition is reward is `Reward on Left` is thus encoded as the state $s_2 = [0 \\ 1]$.\n\nThe environment is designed such that when the agent is located in the `RIGHT ARM` and the reward condition is `Reward on Right`, the agent has a specified probability $a$ (where $a > 0.5$) of receiving a reward, and a low probability $b = 1 - a$ of receiving a 'loss' (we can think of this as an aversive or unpreferred stimulus). If the agent is in the `LEFT ARM` for the same reward condition, the reward probabilities are swapped, and the agent experiences loss with probability $a$, and reward with lower probability $b = 1 - a$. These reward contingencies are intuitively swapped for the `Reward on Left` condition. \n\nFor instance, we can encode the state of the environment at the first time step in a `Reward on Right` trial with the following pair of hidden state vectors: $s_1 = [1 \\ 0 \\ 0 \\ 0]$, $s_2 = [1 \\ 0]$, where we assume the agent starts sitting in the central location. 
If the agent moved to the right arm, then the corresponding hidden state vectors would now be $s_1 = [0 \\ 1 \\ 0 \\ 0]$, $s_2 = [1 \\ 0]$. This highlights the _independence_ of the two hidden state factors -- the location of the agent ($s_1$) can change without affecting the identity of the reward condition ($s_2$).\n", "_____no_output_____" ], [ "### 1. Initialize environment\nNow we can initialize the T-maze environment using the built-in `TMazeEnv` class from the `pymdp.envs` module.", "_____no_output_____" ], [ "Choose reward probabilities $a$ and $b$, where $a$ and $b$ are the probabilities of reward / loss in the 'correct' arm, and the probabilities of loss / reward in the 'incorrect' arm. Which arm counts as 'correct' vs. 'incorrect' depends on the reward condition (state of the 2nd hidden state factor).", "_____no_output_____" ] ], [ [ "reward_probabilities = [0.98, 0.02] # probabilities used in the original SPM T-maze demo", "_____no_output_____" ] ], [ [ "Initialize an instance of the T-maze environment", "_____no_output_____" ] ], [ [ "env = TMazeEnv(reward_probs = reward_probabilities)", "_____no_output_____" ] ], [ [ "### Structure of the state --> outcome mapping\nWe can 'peer into' the rules encoded by the environment (also known as the _generative process_ ) by looking at the probability distributions that map from hidden states to observations. Following the SPM version of active inference, we refer to this collection of probabilistic relationships as the `A` array. In the case of the true rules of the environment, we refer to this array as `A_gp` (where the suffix `_gp` denotes the generative process). \n\nIt is worth outlining what constitute the agent's observations in this task. In this T-maze demo, we have three sensory channels or observation modalities: `Location`, `Reward`, and `Cue`. \n\n>The `Location` observation values are identical to the `Location` hidden state values. In this case, the agent always unambiguously observes its own state - if the agent is in `RIGHT ARM`, it receives a `RIGHT ARM` observation in the corresponding modality. This might be analogized to a 'proprioceptive' sense of one's own place.\n\n>The `Reward` observation modality assumes the values `No Reward`, `Reward` or `Loss`. The `No Reward` (index 0) observation is observed whenever the agent isn't occupying one of the two T-maze arms (the right or left arms). The `Reward` (index 1) and `Loss` (index 2) observations are observed in the right and left arms of the T-maze, with associated probabilities that depend on the reward condition (i.e. on the value of the second hidden state factor).\n\n> The `Cue` observation modality assumes the values `Cue Right`, `Cue Left`. This observation unambiguously signals the reward condition of the trial, and therefore in which arm the `Reward` observation is more probable. When the agent occupies the other arms, the `Cue` observation will be `Cue Right` or `Cue Left` with equal probability. However (as we'll see below when we intialise the agent), the agent's beliefs about the likelihood mapping render these observations uninformative and irrelevant to state inference.\n\nIn `pymdp`, we store the set of probability distributions encoding the conditional probabilities of observations, under different configurations of hidden states, as a set of matrices referred to as the likelihood mapping or `A` array (this is a convention borrowed from SPM). 
The likelihood mapping _for a single modality_ is stored as a single matrix `A[i]` within the larger likelihood array, where `i` is the index of the corresponding modality. Each modality-specific A matrix has `n_observations[i]` rows, and as many lagging dimensions (e.g. columns, 'slices' and higher-order dimensions) as there are hidden state factors. `n_observations[i]` tells you the number of observation values for observation modality `i`, and is usually stored as a property of the `Env` class (e.g. `env.n_observations`).\n\n", "_____no_output_____" ] ], [ [ "A_gp = env.get_likelihood_dist()", "_____no_output_____" ], [ "plot_likelihood(A_gp[1][:,:,0],'Reward Right')", "_____no_output_____" ], [ "plot_likelihood(A_gp[1][:,:,1],'Reward Left')", "_____no_output_____" ], [ "plot_likelihood(A_gp[2][:,3,:],'Cue Mapping')", "_____no_output_____" ] ], [ [ "### Transition Dynamics\n\nWe represent the dynamics of the environment (e.g. changes in the location of the agent and changes to the reward condition) as conditional probability distributions that encode the likelihood of transitions between the states of a given hidden state factor. These distributions are collected into the so-called `B` array, also known as _transition likelihoods_ or _transition distribution_ . As with the `A` array, we denote the true probabilities describing the environmental dynamics as `B_gp`. Each sub-matrix `B_gp[f]` of the larger array encodes the transition probabilities between state-values of a given hidden state factor with index `f`. These matrices encode dynamics as Markovian transition probabilities, such that the entry $i,j$ of a given matrix encodes the probability of transition to state $i$ at time $t+1$, given state $j$ at $t$. ", "_____no_output_____" ] ], [ [ "B_gp = env.get_transition_dist()", "_____no_output_____" ] ], [ [ "For example, we can inspect the 'dynamics' of the `Reward Condition` factor by indexing into the appropriate sub-matrix of `B_gp`", "_____no_output_____" ] ], [ [ "plot_likelihood(B_gp[1][:,:,0],'Reward Condition Transitions')", "_____no_output_____" ] ], [ [ "The above transition array is the 'trivial' identity matrix, meaning that the reward condition doesn't change over time (it's mapped from whatever its current value is to the same value at the next timestep).", "_____no_output_____" ], [ "### (Controllable-) Transition Dynamics\n\nImportantly, some hidden state factors are _controllable_ by the agent, meaning that the probability of being in state $i$ at $t+1$ isn't merely a function of the state at $t$, but also of actions (or from the agent's perspective, _control states_ ). So now each transition likelihood encodes conditional probability distributions over states at $t+1$, where the conditioning variables are both the states _and_ the actions at time $t$. 
This extra conditioning on actions is encoded via an optional third dimension to each factor-specific `B` matrix.\n\nFor example, in our case the first hidden state factor (`Location`) is under the control of the agent, which means the corresponding transition likelihoods `B[0]` are index-able by both previous state and action.", "_____no_output_____" ] ], [ [ "plot_likelihood(B_gp[0][:,:,0],'Transition likelihood for \"Move to Center\"')", "_____no_output_____" ], [ "plot_likelihood(B_gp[0][:,:,1],'Transition likelihood for \"Move to Right Arm\"')", "_____no_output_____" ], [ "plot_likelihood(B_gp[0][:,:,2],'Transition likelihood for \"Move to Left Arm\"')", "_____no_output_____" ], [ "plot_likelihood(B_gp[0][:,:,3],'Transition likelihood for \"Move to Cue Location\"')", "_____no_output_____" ] ], [ [ "## The generative model\nNow we can move onto setting up the generative model of the agent - namely, the agent's beliefs about how hidden states give rise to observations, and how hidden states transition among eachother.\n\nIn almost all MDPs, the critical building blocks of this generative model are the agent's representation of the observation likelihood, which we'll refer to as `A_gm`, and its representation of the transition likelihood, or `B_gm`. \n\nHere, we assume the agent has a veridical representation of the rules of the T-maze (namely, how hidden states cause observations) as well as its ability to control its own movements with certain consequences (i.e. 'noiseless' transitions).", "_____no_output_____" ] ], [ [ "A_gm = copy.deepcopy(A_gp) # make a copy of the true observation likelihood to initialize the observation model\nB_gm = copy.deepcopy(B_gp) # make a copy of the true transition likelihood to initialize the transition model", "_____no_output_____" ] ], [ [ "### Note !\nIt is not necessary, or even in many cases _important_ , that the generative model is a veridical representation of the generative process. This distinction between generative model (essentially, beliefs entertained by the agent and its interaction with the world) and the generative process (the actual dynamical system 'out there' generating sensations) is of crucial importance to the active inference formalism and (in our experience) often overlooked in code.\n\nIt is for notational and computational convenience that we encode the generative process using `A` and `B` matrices. By doing so, it simply puts the rules of the environment in a data structure that can easily be converted into the Markovian-style conditional distributions useful for encoding the agent's generative model.\n\nStrictly speaking, however, all the generative process needs to do is generate observations and be 'perturbable' by actions. The way in which it does so can be arbitrarily complex, non-linear, and unaccessible by the agent.", "_____no_output_____" ], [ "## Introducing the `Agent()` class\n\nIn `pymdp`, we have abstracted much of the computations required for active inference into the `Agent()` class, a flexible object that can be used to store necessary aspects of the generative model, the agent's instantaneous observations and actions, and perform action / perception using functions like `Agent.infer_states` and `Agent.infer_policies`. \n\nAn instance of `Agent` is straightforwardly initialized with a call to `Agent()` with a list of optional arguments.\n", "_____no_output_____" ], [ "In our call to `Agent()`, we need to constrain the default behavior with some of our T-Maze-specific needs. 
For example, we want to make sure that the agent's beliefs about transitions are constrained by the fact that it can only control the `Location` factor - _not_ the `Reward Condition` (which we assumed stationary across an epoch of time). Therefore we specify this using a list of indices that will be passed as the `control_fac_idx` argument of the `Agent()` constructor. \n\nEach element in the list specifies a hidden state factor (in terms of its index) that is controllable by the agent. Hidden state factors whose indices are _not_ in this list are assumed to be uncontrollable.", "_____no_output_____" ] ], [ [ "controllable_indices = [0] # this is a list of the indices of the hidden state factors that are controllable", "_____no_output_____" ] ], [ [ "Now we can construct our agent...", "_____no_output_____" ] ], [ [ "agent = Agent(A=A_gm, B=B_gm, control_fac_idx=controllable_indices)", "_____no_output_____" ] ], [ [ "Now we can inspect properties (and change) of the agent as we see fit. Let's look at the initial beliefs the agent has about its starting location and reward condition, encoded in the prior over hidden states $P(s)$, known in SPM-lingo as the `D` array.", "_____no_output_____" ] ], [ [ "plot_beliefs(agent.D[0],\"Beliefs about initial location\")", "_____no_output_____" ], [ "plot_beliefs(agent.D[1],\"Beliefs about reward condition\")", "_____no_output_____" ] ], [ [ "Let's make it so that agent starts with precise and accurate prior beliefs about its starting location.", "_____no_output_____" ] ], [ [ "agent.D[0] = utils.onehot(0, agent.num_states[0])", "_____no_output_____" ] ], [ [ "And now confirm that our agent knows (i.e. has accurate beliefs about) its initial state by visualizing its priors again.", "_____no_output_____" ] ], [ [ "plot_beliefs(agent.D[0],\"Beliefs about initial location\")", "_____no_output_____" ] ], [ [ "Another thing we want to do in this case is make sure the agent has a 'sense' of reward / loss and thus a motivation to be in the 'correct' arm (the arm that maximizes the probability of getting the reward outcome).\n\nWe can do this by changing the prior beliefs about observations, the `C` array (also known as the _prior preferences_ ). This is represented as a collection of distributions over observations for each modality. It is initialized by default to be all 0s. This means agent has no preference for particular outcomes. Since the second modality (index `1` of the `C` array) is the `Reward` modality, with the index of the `Reward` outcome being `1`, and that of the `Loss` outcome being `2`, we populate the corresponding entries with values whose relative magnitudes encode the preference for one outcome over another (technically, this is encoded directly in terms of relative log-probabilities). \n\nOur ability to make the agent's prior beliefs that it tends to observe the outcome with index `1` in the `Reward` modality, more often than the outcome with index `2`, is what makes this modality a Reward modality in the first place -- otherwise, it would just be an arbitrary observation with no extrinsic value _per se_. 
", "_____no_output_____" ] ], [ [ "agent.C[1][1] = 3.0\nagent.C[1][2] = -3.0", "_____no_output_____" ], [ "plot_beliefs(agent.C[1],\"Prior beliefs about observations\")", "_____no_output_____" ] ], [ [ "## Active Inference\nNow we can start off the T-maze with an initial observation and run active inference via a loop over a desired time interval.", "_____no_output_____" ] ], [ [ "T = 5 # number of timesteps\n\nobs = env.reset() # reset the environment and get an initial observation\n\n# these are useful for displaying read-outs during the loop over time\nreward_conditions = [\"Right\", \"Left\"]\nlocation_observations = ['CENTER','RIGHT ARM','LEFT ARM','CUE LOCATION']\nreward_observations = ['No reward','Reward!','Loss!']\ncue_observations = ['Cue Right','Cue Left']\nmsg = \"\"\" === Starting experiment === \\n Reward condition: {}, Observation: [{}, {}, {}]\"\"\"\nprint(msg.format(reward_conditions[env.reward_condition], location_observations[obs[0]], reward_observations[obs[1]], cue_observations[obs[2]]))\n\nfor t in range(T):\n qx = agent.infer_states(obs)\n\n q_pi, efe = agent.infer_policies()\n\n action = agent.sample_action()\n\n msg = \"\"\"[Step {}] Action: [Move to {}]\"\"\"\n print(msg.format(t, location_observations[int(action[0])]))\n\n obs = env.step(action)\n\n msg = \"\"\"[Step {}] Observation: [{}, {}, {}]\"\"\"\n print(msg.format(t, location_observations[obs[0]], reward_observations[obs[1]], cue_observations[obs[2]]))", "_____no_output_____" ] ], [ [ "The agent begins by moving to the `CUE LOCATION` to resolve its uncertainty about the reward condition - this is because it knows it will get an informative cue in this location, which will signal the true reward condition unambiguously. At the beginning of the next timestep, the agent then uses this observaiton to update its posterior beliefs about states `qx[1]` to reflect the true reward condition. Having resolved its uncertainty about the reward condition, the agent then moves to `RIGHT ARM` to maximize utility and continues to do so, given its (correct) beliefs about the reward condition and the mapping between hidden states and reward observations. \n\nNotice, perhaps confusingly, that the agent continues to receive observations in the 3rd modality (i.e. samples from `A_gp[2]`). These are observations of the form `Cue Right` or `Cue Left`. However, these 'cue' observations are random and totally umambiguous unless the agent is in the `CUE LOCATION` - this is reflected by totally entropic distributions in the corresponding columns of `A_gp[2]` (and the agents beliefs about this ambiguity, reflected in `A_gm[2]`. See below.", "_____no_output_____" ] ], [ [ "plot_likelihood(A_gp[2][:,:,0],'Cue Observations when condition is Reward on Right, for Different Locations')", "_____no_output_____" ], [ "plot_likelihood(A_gp[2][:,:,1],'Cue Observations when condition is Reward on Left, for Different Locations')", "_____no_output_____" ] ], [ [ "The final column on the right side of these matrices represents the distribution over cue observations, conditioned on the agent being in `CUE LOCATION` and the appropriate Reward Condition. 
This demonstrates that cue observations are uninformative / lacking epistemic value for the agent, _unless_ they are in `CUE LOCATION`.", "_____no_output_____" ], [ "Now we can inspect the agent's final beliefs about the reward condition characterizing the 'trial,' having undergone 5 timesteps of active inference.", "_____no_output_____" ] ], [ [ "plot_beliefs(qx[1],\"Final posterior beliefs about reward condition\")", "_____no_output_____" ] ] ]
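A minimal follow-up sketch (not part of the original notebook): it re-runs the trial loop above several times with a freshly constructed agent and counts how often the `Reward!` outcome (index `1` of the second modality) is observed. It assumes `Agent`, `utils`, `env`, `A_gm`, `B_gm`, and `controllable_indices` are defined exactly as above; `run_trial`, `n_trials`, and the choice of 20 trials are illustrative, not part of the original code.

```python
def run_trial(T=5):
    # Fresh agent per trial, set up exactly as in the cells above (names assumed available in scope)
    agent = Agent(A=A_gm, B=B_gm, control_fac_idx=controllable_indices)
    agent.D[0] = utils.onehot(0, agent.num_states[0])  # precise belief: start in CENTER
    agent.C[1][1] = 3.0    # prefer 'Reward!'
    agent.C[1][2] = -3.0   # avoid 'Loss!'

    obs = env.reset()
    rewarded = False
    for _ in range(T):
        agent.infer_states(obs)
        agent.infer_policies()
        action = agent.sample_action()
        obs = env.step(action)
        rewarded = rewarded or (obs[1] == 1)  # outcome index 1 in the Reward modality == 'Reward!'
    return rewarded

n_trials = 20  # illustrative
wins = sum(run_trial() for _ in range(n_trials))
print(f"Reward observed in {wins}/{n_trials} trials")
```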
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ] ]
e710f83c4e18f6faeb6143e7b47402a37303638a
228,192
ipynb
Jupyter Notebook
scripts/download_from_AWS.ipynb
yabmtm/Mpro-Manuscript
73dc79109b618f110b8768da2b6d10539fd06acf
[ "MIT" ]
null
null
null
scripts/download_from_AWS.ipynb
yabmtm/Mpro-Manuscript
73dc79109b618f110b8768da2b6d10539fd06acf
[ "MIT" ]
null
null
null
scripts/download_from_AWS.ipynb
yabmtm/Mpro-Manuscript
73dc79109b618f110b8768da2b6d10539fd06acf
[ "MIT" ]
null
null
null
42.187465
190
0.473895
[ [ [ "# Downloading simulation data from AWS \n\nOctober 18, 2021\n\nIn this notebook, Vince and Rashad are trying to use Matt's example code to download some simulation data of our choosing\n\n", "_____no_output_____" ] ], [ [ "import mdtraj as md\nimport itertools\nimport numpy as np\nimport matplotlib\nfrom matplotlib import pyplot as plt\n\nimport os, urllib, subprocess, glob\nfrom tqdm import tqdm\n\nclass DownloadProgressBar(tqdm):\n def update_to(self, b=1, bsize=1, tsize=None):\n if tsize is not None:\n self.total = tsize\n self.update(b * bsize - self.n)\n\n\ndef download(url, output_path):\n with DownloadProgressBar(unit='B', unit_scale=True,\n miniters=1, desc=url.split('/')[-1]) as t:\n urllib.request.urlretrieve(url, filename=output_path, reporthook=t.update_to)\n\ndef run_cmd(cmd):\n subprocess.check_output(cmd, stderr=subprocess.STDOUT,shell=True).decode().split('\\n')", "_____no_output_____" ], [ "### pull data down from AWS and post-process trajectories\n\n# download xtc and (gro/top)\n\nurl_prefix = 'https://fah-public-data-covid19-absolute-free-energy.s3.us-east-2.amazonaws.com'\nproject = 14823\nruns = range(1) # just RUN0\nclones = range(1) # just CLONE0\nfor run in runs:\n for clone in clones:\n \n PRC_dir = f'data/P{project}_R{run}_C{clone}'\n if not os.path.exists(PRC_dir):\n os.makedirs(PRC_dir)\n download(f'{url_prefix}/setup_files/p{project}/RUN0/npt.gro',\n f'data/P{project}_R{run}_C{clone}/npt.gro')\n download(f'{url_prefix}/setup_files/p{project}/RUN0/topol.top',\n f'data/P{project}_R{run}_C{clone}/topol.top')\n gen = 0\n while True:\n try:\n print(f'\\nProcessing P{project}_R{run}_C{clone}_G{gen}')\n download(f'{url_prefix}/PROJ{project}/RUN{run}/CLONE{clone}/results{gen}/traj_comp.xtc',\n f'data/P{project}_R{run}_C{clone}/traj_comp.xtc')\n except Exception as e:\n print(e)\n break\n \n path = f'data/P{project}_R{run}_C{clone}'\n \n ### WARNING: This next section needs an installation of gmx in your path to work!!!!\n \n # Step 1: Build a custom *.tpr for the subset of atoms (molecules \"LIG\" and \"system1\") in the *.xtc trajectories\n \n ## write a dummy *.mdp for minimization (we will need a tpr for trjconv)\n write_mdp_cmd = f'echo \"integrator = steep\" > {path}/xtc.mdp'\n run_cmd(write_mdp_cmd)\n \n ## make an index file for just the atoms in the xtc\n \"\"\"Example:\n 0 System : 54272 atoms\n 1 Other : 64 atoms\n 2 LIG : 64 atoms\n 3 NA : 30 atoms\n 4 CL : 30 atoms\n ---> 5 Protein : 4657 atoms\n 6 Protein-H : 2352 atoms\n \"\"\"\n make_index_cmd = f'echo \"5|2\\nq\\n\" | gmx make_ndx -f {path}/npt.gro -o {path}/index.ndx'\n run_cmd(make_index_cmd)\n \n # make a (.top) for xtc atoms, omitting the last three lines:\n \"\"\"\n [ molecules ]\n ; Compound #mols\n LIG 1\n system1 1\n omit X HOH 16497\n omit X NA 30\n omit X CL 30\n \"\"\"\n fin = open(f'{path}/topol.top', 'r')\n topol_lines = fin.readlines()\n fin.close()\n fout = open(f'{path}/xtc.top', 'w')\n fout.writelines(topol_lines[:-3])\n fout.close()\n \n # write a *.gro file for just the xtc atoms\n make_xtcgro_cmd = f'echo \"24\\n\" | gmx editconf -f {path}/npt.gro -n {path}/index.ndx -o {path}/xtc.gro'\n run_cmd(make_xtcgro_cmd)\n\n # write a *.ndx file for just the xtc atoms\n make_xtcndx_cmd = f'echo \"3|2\\nq\\n\" | gmx make_ndx -f {path}/xtc.gro -o {path}/xtc.ndx'\n run_cmd(make_xtcndx_cmd)\n \n # gmx grompp to make a fake *.tpr\n make_xtctpr_cmd = f'gmx grompp -f {path}/xtc.mdp -c {path}/xtc.gro -p {path}/xtc.top -o {path}/xtc.tpr'\n run_cmd(make_xtctpr_cmd)\n \n # gmx trjconv for PBC 
correction\n pbc_correct_cmd = f'echo \"3\\n14\\n\" | gmx trjconv -f {path}/traj_comp.xtc -s {path}/xtc.tpr -n {path}/xtc.ndx -pbc mol -center -o {path}/traj_{str(gen).zfill(4)}.xtc'\n run_cmd(pbc_correct_cmd)\n \n # for cmd in [write_mdp_cmd, make_index_cmd, make_xtctop_cmd, make_xtcgro_cmd,\n # make_xtcndx_cmd, make_xtctpr_cmd, pbc_correct_cmd]:\n # subprocess.check_output(cmd, stderr=subprocess.STDOUT,shell=True).decode().split('\\n')\n\n #for cmd in [write_mdp_cmd, make_index_cmd]:\n # subprocess.check_output(cmd, stderr=subprocess.STDOUT,shell=True).decode().split('\\n')\n \n \n traj = md.load(f'{path}/traj_{str(gen).zfill(4)}.xtc',top = f'{path}/xtc.gro')\n PHE140_indices = [a.index for a in traj.topology.atoms if a.residue.index in [141] and a.name in ['CG','CD1','CD2','CE1','CE2','CZ']]\n HIS163_indices = [a.index for a in traj.topology.atoms if a.residue.index in [164]and a.name in ['CG','ND1','CD2','CE1','NE2']]\n traj_PHE140_indices = traj.atom_slice(PHE140_indices)\n traj_HIS163_indices = traj.atom_slice(HIS163_indices)\n coords_PHE140_com = md.compute_center_of_mass(traj_PHE140_indices)\n coords_HIS163_com = md.compute_center_of_mass(traj_HIS163_indices)\n hacked_traj = traj\n\n ## creating hacked traj 0 and 1\n hacked_traj.xyz[:,0,:] = coords_PHE140_com # PHE140 trajectory\n hacked_traj.xyz[:,1,:] = coords_HIS163_com # HIS163 trajectory\n\n\n ## computing the distance between the center of mass of the PHE140 and HIS163 ring\n PHE140_HIS163_distances = md.compute_distances(hacked_traj, [[0,1]])[:,0]\n np.save(f'{path}/PHE140_HIS163_distnces_G{str(gen).zfill(4)}', PHE140_HIS163_distances)\n\n gen += 1\n \n # file_list = ['traj_comp.xtc','xtc.mdp','xtc.top','xtc.ndx','xtc.gro','xtc.tpr','index.ndx']\n file_list = ['traj_comp.xtc','xtc.mdp','xtc.top','xtc.ndx','xtc.tpr']\n for file in glob.glob(f'{path}/*'):\n if any(substring in file for substring in file_list):\n os.remove(file)\n \n # remove backups:\n for file in glob.glob(f'{path}/#*'):\n os.remove(file)\n \n #gen += 1\n\n \n ###### Hay Rashad: to concentate all these xtc files (per gen) into one long trajectory xtc file:\n ### $ gmx trjcat -o all.xtc -f traj_????.xtc -cat\n ", "npt.gro: 2.45MB [00:01, 2.38MB/s] \ntopol.top: 2.65MB [00:00, 8.07MB/s] \ntraj_comp.xtc: 4%|▍ | 8.19k/209k [00:00<00:04, 44.8kB/s]" ], [ "topol_lines", "_____no_output_____" ], [ "line_indices_to_grab = []\nfor j in range(len(topol_lines)):\n if topol_lines[j].count('ParmEd') > 0:\n line_indices_to_grab.append(j)\n\nprint('line_indices_to_grab', line_indices_to_grab)", "line_indices_to_grab [9]\n" ], [ "my_grofile = f'data/P{project}_R{run}_C{clone}/npt.gro'\nfin = open(my_grofile, 'r')\nlines = fin.readlines()", "_____no_output_____" ] ] ]
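As a possible next step (a sketch, not part of the original notebook), the per-gen distance arrays saved with `np.save` above can be stitched back together and plotted. This assumes the `data/P{project}_R{run}_C{clone}` directory layout and the `PHE140_HIS163_distnces_G*.npy` filenames produced by the loop above; the project/run/clone indices are just the ones used in this notebook.

```python
import glob
import numpy as np
import matplotlib.pyplot as plt

project, run, clone = 14823, 0, 0
path = f'data/P{project}_R{run}_C{clone}'

# Load the per-gen distance arrays in gen order and join them into one time series
npy_files = sorted(glob.glob(f'{path}/PHE140_HIS163_distnces_G*.npy'))
distances = np.concatenate([np.load(f) for f in npy_files])

plt.plot(distances)
plt.xlabel('frame (concatenated over gens)')
plt.ylabel('PHE140-HIS163 COM distance (nm)')  # mdtraj distances are reported in nm
plt.title(f'P{project} R{run} C{clone}')
plt.show()
```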
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
e710fc7080900366ba3d49ecca235bcd3bb8c186
62,372
ipynb
Jupyter Notebook
basic_ml/notebooks/pandas/pandas_parse_json_column.ipynb
jmetzz/ml-laboratory
26b1e87bd0d80efa4f15280f7f32ad46d59efc1f
[ "MIT" ]
1
2021-09-10T16:55:35.000Z
2021-09-10T16:55:35.000Z
basic_ml/notebooks/pandas/pandas_parse_json_column.ipynb
jmetzz/ml-laboratory
26b1e87bd0d80efa4f15280f7f32ad46d59efc1f
[ "MIT" ]
14
2022-03-12T01:06:08.000Z
2022-03-30T14:30:22.000Z
basic_ml/notebooks/pandas/pandas_parse_json_column.ipynb
jmetzz/ml-laboratory
26b1e87bd0d80efa4f15280f7f32ad46d59efc1f
[ "MIT" ]
null
null
null
63.00202
2,260
0.581735
[ [ [ "from typing import Dict, Any, Tuple, Optional\nimport pandas as pd\nfrom pandas import json_normalize", "_____no_output_____" ], [ "configs = \"\"\"\n[\n {\n \"country_code\": \"BR\",\n \"item_group_code\": \"COOLING\",\n \"market_configuration\": {\n \"moc\": {\n \"low_price_percentage\": 0.1,\n \"high_price_percentage\": 0.1,\n \"medium_price_percentage\": 0.1,\n \"lower_price_range_threshold\": 0,\n \"upper_price_range_threshold\": 999999999\n },\n \"ce\": {\n \"low_price_percentage\": 0.1,\n \"high_price_percentage\": 0.1,\n \"medium_price_percentage\": 0.1,\n \"lower_price_range_threshold\": 0,\n \"upper_price_range_threshold\": 999999999\n }\n }\n },\n {\n \"country_code\": \"DE\",\n \"item_group_code\": \"COOLING\",\n \"market_configuration\": {\n \"ce\": {\n \"low_price_percentage\": 0.1,\n \"high_price_percentage\": 0.1,\n \"medium_price_percentage\": 0.1,\n \"lower_price_range_threshold\": 0,\n \"upper_price_range_threshold\": 999999999\n }\n }\n },\n {\n \"country_code\": \"CN\",\n \"item_group_code\": \"COOLING\",\n \"market_configuration\": {\n \"moc\": {\n \"low_price_percentage\": 0.1,\n \"high_price_percentage\": 0.1,\n \"medium_price_percentage\": 0.1,\n \"lower_price_range_threshold\": 0,\n \"upper_price_range_threshold\": 999999999\n }\n }\n },\n {\n \"country_code\": \"JP\",\n \"item_group_code\": \"COOLING\",\n \"market_configuration\": null\n }\n]\n\"\"\"\n", "_____no_output_____" ], [ "raw = pd.read_json(configs, orient=\"records\")\nraw", "_____no_output_____" ], [ "raw['market_configuration'].notna()", "_____no_output_____" ], [ "r_de = raw.at[0, 'market_configuration']\nr_de\n", "_____no_output_____" ], [ "r_cn = raw.at[1, 'market_configuration']\nr_jp = raw.at[2, 'market_configuration']\nr_jp\n", "_____no_output_____" ], [ "raw = raw[raw['market_configuration'].notna()]\nraw", "_____no_output_____" ], [ "temp = raw['market_configuration']\ntemp", "_____no_output_____" ], [ "transformed = temp.transform(lambda x: json_normalize(data=x))", "_____no_output_____" ], [ "transformed.iloc[0]", "_____no_output_____" ], [ "temp= raw.copy()\ntemp", "_____no_output_____" ] ], [ [ "# TEST 2", "_____no_output_____" ] ], [ [ "raw = pd.read_json(configs, orient=\"records\")\nraw = raw[raw['market_configuration'].notna()]\nraw", "_____no_output_____" ], [ "df_keys = raw.loc[:, ['country_code', 'item_group_code']]\ndf_values = raw.loc[:, ['market_configuration']]", "_____no_output_____" ], [ "df_keys", "_____no_output_____" ], [ "df_values", "_____no_output_____" ], [ "# df_values.transform(lambda x: json_normalize(data=x), axis='index')", "_____no_output_____" ], [ "import numpy as np\n\ndf = pd.DataFrame()\nfor _, row in df_values.iterrows():\n entry_df = json_normalize(data=row)\n print(entry_df.head())\n print(\" ---- \")\n\n", " moc.low_price_percentage moc.high_price_percentage \\\n0 0.1 0.1 \n\n moc.medium_price_percentage moc.lower_price_range_threshold \\\n0 0.1 0 \n\n moc.upper_price_range_threshold ce.low_price_percentage \\\n0 999999999 0.1 \n\n ce.high_price_percentage ce.medium_price_percentage \\\n0 0.1 0.1 \n\n ce.lower_price_range_threshold ce.upper_price_range_threshold \n0 0 999999999 \n ---- \n ce.low_price_percentage ce.high_price_percentage \\\n0 0.1 0.1 \n\n ce.medium_price_percentage ce.lower_price_range_threshold \\\n0 0.1 0 \n\n ce.upper_price_range_threshold \n0 999999999 \n ---- \n moc.low_price_percentage moc.high_price_percentage \\\n0 0.1 0.1 \n\n moc.medium_price_percentage moc.lower_price_range_threshold \\\n0 0.1 0 \n\n 
moc.upper_price_range_threshold  \n0                        999999999  \n ----  \n" ], [ "df_values.transform(lambda x: None if x.empty else json_normalize(data=x))", "_____no_output_____" ] ], [ [ "# TEST 3", "_____no_output_____" ] ], [ [ "l = list()\nfor _, row in raw[raw[\"market_configuration\"].notna()].iterrows():\n    config = row[\"market_configuration\"]\n    config['country_code'] = row['country_code'] \n    config['item_group_code'] = row['item_group_code']\n\n    t = json_normalize(data=row[\"market_configuration\"])\n    l.append(t)\nl[0]", "_____no_output_____" ] ] ]
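A compact alternative to the row-by-row experiments above (a sketch, not from the original notebook): normalize the whole `market_configuration` column in one call and re-attach the key columns. It assumes `raw`, `pd`, and `json_normalize` as defined/imported earlier in this notebook; the names `valid`, `flat`, and `result` are illustrative.

```python
# Keep only rows with a configuration, flatten the nested dicts once, then re-attach the keys
valid = raw[raw["market_configuration"].notna()].reset_index(drop=True)

flat = json_normalize(valid["market_configuration"].tolist())  # dotted columns, e.g. 'moc.low_price_percentage'
result = pd.concat([valid[["country_code", "item_group_code"]], flat], axis=1)
result
```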
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
e710fcb04d776a2d4c27caa1abf4a7a8ecc3a42e
237,591
ipynb
Jupyter Notebook
msa/models/pytorch_review/cnn_fashion_mini_mnist.ipynb
mnguyen0226/ai-assurance-research
1ad522f14f14eb77b01be9dcd6a42f847b3a9738
[ "MIT" ]
null
null
null
msa/models/pytorch_review/cnn_fashion_mini_mnist.ipynb
mnguyen0226/ai-assurance-research
1ad522f14f14eb77b01be9dcd6a42f847b3a9738
[ "MIT" ]
null
null
null
msa/models/pytorch_review/cnn_fashion_mini_mnist.ipynb
mnguyen0226/ai-assurance-research
1ad522f14f14eb77b01be9dcd6a42f847b3a9738
[ "MIT" ]
null
null
null
65.039967
62,488
0.721378
[ [ [ "# CNN Fashion MNIST Mini Project\n- Fashion MNIST:\n - 10 classes\n - 60000 training images\n - 10000 testing images", "_____no_output_____" ], [ "## 1. Prepare the data:\n- E = Extract - Get Fashion MNIST image data from the source\n- T = Transform - Put data to tensor form\n- L = Load - Put data to object for easier accessed", "_____no_output_____" ] ], [ [ "import torch\nimport torchvision \nimport torchvision.transforms as transforms # image transformation", "_____no_output_____" ], [ "train_set = torchvision.datasets.FashionMNIST(\n root = './data/FashionMNIST', # directory to be download\n train = True, # trainable dataset\n download = True, # Download to local machine\n transform = transforms.Compose([ # Convert images into Tensor Transformation\n transforms.ToTensor()\n ])\n)", "_____no_output_____" ], [ "train_loader = torch.utils.data.DataLoader(train_set, batch_size = 10) # can have shuffle and batchsize\n\n# This allow us to query of the dataset", "_____no_output_____" ] ], [ [ "### Better understand the dataset", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n\ntorch.set_printoptions(linewidth=120)", "_____no_output_____" ], [ "len(train_set)", "_____no_output_____" ], [ "# Check number of labels\ntrain_set.train_labels", "C:\\Users\\nguye\\anaconda3\\lib\\site-packages\\torchvision\\datasets\\mnist.py:54: UserWarning: train_labels has been renamed targets\n warnings.warn(\"train_labels has been renamed targets\")\n" ], [ "# Check number of images in each class\ntrain_set.train_labels.bincount()", "_____no_output_____" ], [ "# Check elements in a train set\nsample_input = next(iter(train_set))\ntuple(sample_input)\nimage, label = sample_input\n\nprint(image.shape)\nprint(label)\nplt.imshow(image.squeeze(), cmap=\"gray\")\nprint(f\"label: {label}\")", "torch.Size([1, 28, 28])\n9\nlabel: 9\n" ], [ "# Analyse the batch\nbatch = next(iter(train_loader))\nimages, labels = batch", "_____no_output_____" ], [ "images.shape", "_____no_output_____" ], [ "labels.shape", "_____no_output_____" ], [ "grid = torchvision.utils.make_grid(images, nrow=10)\nplt.figure(figsize=(15,15))\nplt.imshow(np.transpose(grid, (1,2,0)))\nprint(labels)", "tensor([9, 0, 0, 3, 0, 2, 7, 2, 5, 5])\n" ] ], [ [ "## 2. Build model\n- Methods = Function\n- Attributes = Representation of the data\n- Parameters vs Arguments: \n - parameters = local to the funciton\n - argument = values assigned to parameters by the caller of the function\n => Parameter is in_channels, arguments = 1, 6, 12....\n \n- kernel_size = set of filter size\n- out_channels = set of number of filters => This can be called as feature maps\n- out_features = set of size of output tensor", "_____no_output_____" ] ], [ [ "import torch.nn as nn\nimport torch.nn.functional as F", "_____no_output_____" ], [ "class Network(nn.Module):\n def __init__(self):\n super().__init__()\n # Convolutional layers\n \n # NOTE param: color channel of image, number of filter, size of kernel, stride\n self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, stride=1) # in_channel = 1 = grayscale, hyperparam, hyperparam\n self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5, stride=1) # we in crease the output channel when have extra conv layers\n \n # Why the out channel = 6 or why the in_features is 12*4*4 => flatten. 
4x4?\n \n # Fully connected layers\n self.fc1 = nn.Linear(in_features=12*4*4, out_features=120, bias=True) # we also shrink the number of features to number of class that we have\n self.fc2 = nn.Linear(in_features = 120, out_features=60, bias=True)\n self.out = nn.Linear(in_features = 60, out_features=10, bias=True) \n \n def forward(self, t):\n # input layer\n t = t\n print(f\"TESTING: {t.shape}\")\n \n # convolution 1, not \n t = self.conv1(t)\n t = F.relu(t) # operation do not use weight, unlike layers\n t = F.max_pool2d(t, kernel_size=2, stride=2) # operation do not use weight, unlike layers\n \n # convolution 2: => relu => maxpool\n t = self.conv2(t)\n # WHY do we need these 2 layers?\n t = F.relu(t) \n t = F.max_pool2d(t, kernel_size=2, stride=2) # how to determine these values?\n \n # Transition from Conv to Linear will require flatten\n t = t.reshape(-1, 12*4*4) # 4x4 = shape of reduce image (originally 28x28)\n \n # linear 1:\n t = self.fc1(t)\n t = F.relu(t)\n \n # linear 2:\n t = self.fc2(t)\n t = F.relu(t)\n \n # output:\n t = self.out(t)\n# t = F.softmax(t, dim=1) # we will use crossentropy loss which used the softmax already\n \n return t\n ", "_____no_output_____" ], [ "network = Network()\nprint(network)", "Network(\n (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))\n (conv2): Conv2d(6, 12, kernel_size=(5, 5), stride=(1, 1))\n (fc1): Linear(in_features=192, out_features=120, bias=True)\n (fc2): Linear(in_features=120, out_features=60, bias=True)\n (out): Linear(in_features=60, out_features=10, bias=True)\n)\n" ], [ "# Analyze the layer weight\nprint(network.conv1.weight.shape) # (out, in, kernel - height&width of filter)\nprint(network.conv2.weight.shape) # (out, in, kernel)\nprint(network.fc1.weight.shape) # (out, in)\nprint(network.fc2.weight.shape) # (out, in)\nprint(network.out.weight.shape) # (out, in)", "torch.Size([6, 1, 5, 5])\ntorch.Size([12, 6, 5, 5])\ntorch.Size([120, 192])\ntorch.Size([60, 120])\ntorch.Size([10, 60])\n" ], [ "for name, param in network.named_parameters():\n print(f\"{name} \\t\\t {param.shape}\")", "conv1.weight \t\t torch.Size([6, 1, 5, 5])\nconv1.bias \t\t torch.Size([6])\nconv2.weight \t\t torch.Size([12, 6, 5, 5])\nconv2.bias \t\t torch.Size([12])\nfc1.weight \t\t torch.Size([120, 192])\nfc1.bias \t\t torch.Size([120])\nfc2.weight \t\t torch.Size([60, 120])\nfc2.bias \t\t torch.Size([60])\nout.weight \t\t torch.Size([10, 60])\nout.bias \t\t torch.Size([10])\n" ] ], [ [ "### Understanding Linear Layer", "_____no_output_____" ] ], [ [ "in_features = torch.tensor([1,2,3,4], dtype=torch.float32)\n\nfc = nn.Linear(in_features=4, out_features=3)\n\nmatmul_fc = fc(in_features)\nprint(f\"The matrix multiplication answer is {matmul_fc}\")", "The matrix multiplication answer is tensor([0.7207, 1.8323, 1.8387], grad_fn=<AddBackward0>)\n" ] ], [ [ "### Forward Propagation", "_____no_output_____" ] ], [ [ "# turn an image into a batch to feed in the NN\nimage.unsqueeze(0).shape", "_____no_output_____" ], [ "pred = network(image.unsqueeze(0)) # we put in the param for the forward function", "TESTING: torch.Size([1, 1, 28, 28])\n" ], [ "pred.shape # 1 image in a batch & 10 prediction", "_____no_output_____" ], [ "print(pred.argmax(dim=1))", "tensor([5])\n" ], [ "label # prediction is incorrect compared to the ground truth", "_____no_output_____" ] ], [ [ "### Forward Propagation with Batch", "_____no_output_____" ] ], [ [ "data_loader = torch.utils.data.DataLoader(\n train_set, batch_size=10\n) # this is an iteratro", "_____no_output_____" ], [ 
"batch = next(iter(data_loader))", "_____no_output_____" ], [ "images, labels = batch", "_____no_output_____" ], [ "print(images.shape)\nprint(labels.shape)", "torch.Size([10, 1, 28, 28])\ntorch.Size([10])\n" ], [ "preds = network(images)", "TESTING: torch.Size([10, 1, 28, 28])\n" ], [ "print(preds) \n# we can understand this as 10 predictions (out_channels) of prediction, use softmax or max_arg to provide the most prediction", "tensor([[-0.0911, 0.0854, 0.0196, 0.0619, -0.0486, 0.1152, -0.1129, 0.0421, -0.0517, -0.0074],\n [-0.0859, 0.0913, 0.0144, 0.0642, -0.0479, 0.1092, -0.1145, 0.0461, -0.0506, -0.0031],\n [-0.0935, 0.0778, 0.0275, 0.0584, -0.0541, 0.1161, -0.1104, 0.0351, -0.0505, -0.0061],\n [-0.0932, 0.0801, 0.0238, 0.0594, -0.0523, 0.1168, -0.1112, 0.0385, -0.0505, -0.0052],\n [-0.0876, 0.0871, 0.0162, 0.0599, -0.0538, 0.1152, -0.1140, 0.0488, -0.0543, -0.0058],\n [-0.0894, 0.0843, 0.0197, 0.0623, -0.0471, 0.1156, -0.1117, 0.0410, -0.0500, -0.0058],\n [-0.0924, 0.0803, 0.0179, 0.0621, -0.0485, 0.1189, -0.1111, 0.0353, -0.0507, -0.0043],\n [-0.0889, 0.0832, 0.0159, 0.0625, -0.0475, 0.1192, -0.1104, 0.0427, -0.0475, -0.0059],\n [-0.0995, 0.0725, 0.0246, 0.0572, -0.0509, 0.1183, -0.1036, 0.0301, -0.0517, -0.0081],\n [-0.1010, 0.0778, 0.0190, 0.0598, -0.0499, 0.1175, -0.1103, 0.0371, -0.0520, -0.0109]],\n grad_fn=<AddmmBackward>)\n" ], [ "preds.argmax(dim=1)", "_____no_output_____" ], [ "labels", "_____no_output_____" ], [ "# Calculate the correct labels compared with the prediction\ndef compare(preds, labels):\n \"\"\"Provides totals number of correct prediction\n \n Parameters\n ----------\n preds:\n list of prediction\n labels:\n list of ground truths\n \"\"\"\n result = preds.argmax(dim=1).eq(labels).sum()\n \n return result\n\nprint(compare(preds, labels))", "tensor(2)\n" ] ], [ [ "## 3. The Training Process:\n- 1. Get batch from the training set\n- 2. Pass batch to the network\n- 3. Calculate the loss (difference betwee the predicted values and the true values) - LOSS FUNCTION\n- 4. Calcualte the gradient of the loss function wrt the netowork's weights - BACK PROP\n- 5. Update the weights using the gradients to reduce the loss - OPTIMIZATION ALGO\n- 6. Repeat steps 1-5 until one epoch is completed\n- 7. 
Repeat steps 1-6 for as many epochs required to obtain the desired level of accuracy\n\n=> 1 epochs = complete pass thru all samples of the training dataset", "_____no_output_____" ] ], [ [ "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\nimport torchvision\nimport torchvision.transforms as transforms\n\ntorch.set_printoptions(linewidth=120)", "_____no_output_____" ], [ "def get_num_correct(preds, labels):\n return preds.argmax(dim=1).eq(labels).sum().item()", "_____no_output_____" ], [ "class Network(nn.Module):\n def __init__(self):\n super().__init__()\n # Convolutional layers\n \n self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, stride=1) # in_channel = 1 = grayscale, hyperparam, hyperparam\n self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5, stride=1) # we in crease the output channel when have extra conv layers\n \n # Fully connected layers\n self.fc1 = nn.Linear(in_features=12*4*4, out_features=120, bias=True) # we also shrink the number of features to number of class that we have\n self.fc2 = nn.Linear(in_features = 120, out_features=60, bias=True)\n self.out = nn.Linear(in_features = 60, out_features=10, bias=True) \n \n def forward(self, t):\n # input layer\n t = t\n \n # convolution 1, not \n t = self.conv1(t)\n t = F.relu(t) # operation do not use weight, unlike layers\n t = F.max_pool2d(t, kernel_size=2, stride=2) # operation do not use weight, unlike layers\n \n # convolution 2: => relu => maxpool\n t = self.conv2(t)\n # WHY do we need these 2 layers?\n t = F.relu(t) \n t = F.max_pool2d(t, kernel_size=2, stride=2) # how to determine these values?\n \n # Transition from Conv to Linear will require flatten\n t = t.reshape(-1, 12*4*4) # 4x4 = shape of reduce image (originally 28x28)\n \n # linear 1:\n t = self.fc1(t)\n t = F.relu(t)\n \n # linear 2:\n t = self.fc2(t)\n t = F.relu(t)\n \n # output:\n t = self.out(t)\n \n return t", "_____no_output_____" ], [ "train_set = torchvision.datasets.FashionMNIST(\n root=\"./data/FashionMNIST\",\n train=True,\n download=True,\n transform=transforms.Compose([ # convert image to \n transforms.ToTensor()\n ]))", "_____no_output_____" ], [ "network = Network()", "_____no_output_____" ], [ "train_loader = torch.utils.data.DataLoader(train_set, batch_size = 100) # pass the training set and divide into batch of 100\nbatch = next(iter(train_loader)) # get a sample batch\nimages, labels = batch", "_____no_output_____" ] ], [ [ "### Calcualting the Loss", "_____no_output_____" ] ], [ [ "preds = network(images) # pass the batch of image thru the network\nloss = F.cross_entropy(preds, labels) # calculate the loss\nloss.item() # get the loss\n\n# item() contains the loss of entire mini-batch, but divided by the batch size.\n# Goal: we want the loss to be decrease", "_____no_output_____" ] ], [ [ "### Calculating the Gradients. 
Back Prop", "_____no_output_____" ] ], [ [ "print(network.conv1.weight.grad)", "None\n" ], [ "loss.backward() # calculate the graident", "_____no_output_____" ], [ "print(network.conv1.weight.grad)\nprint(network.conv1.weight.grad.shape)", "tensor([[[[ 4.6323e-04, 1.4078e-03, 8.0984e-04, 5.5834e-04, 1.9952e-03],\n [ 7.8312e-04, 1.6110e-03, 1.3740e-03, 1.2380e-03, 2.7021e-03],\n [ 1.2505e-03, 1.7090e-03, 1.0253e-03, 1.1032e-03, 2.6810e-03],\n [ 6.2296e-04, 1.3367e-03, 1.1384e-03, 1.6654e-03, 2.5975e-03],\n [ 4.3458e-04, 1.5364e-03, 1.6356e-03, 2.4002e-03, 2.9890e-03]]],\n\n\n [[[-1.8531e-03, -2.3051e-03, -3.4586e-03, -3.1435e-03, -2.5931e-03],\n [-2.1732e-03, -2.8829e-03, -4.0729e-03, -3.3431e-03, -2.9297e-03],\n [-3.1861e-03, -3.6186e-03, -3.7462e-03, -3.0499e-03, -2.5995e-03],\n [-3.5344e-03, -3.3483e-03, -4.0562e-03, -3.5834e-03, -2.7679e-03],\n [-2.7358e-03, -3.2995e-03, -3.4873e-03, -2.8166e-03, -2.0110e-03]]],\n\n\n [[[-3.0482e-04, -2.1922e-04, 5.3109e-04, 2.7711e-04, -2.8756e-05],\n [-6.9535e-04, -1.7091e-04, 5.7366e-04, -7.5365e-05, -6.1022e-04],\n [-5.4535e-04, 1.5849e-04, 5.3918e-04, -9.2436e-05, -3.8979e-04],\n [-6.4054e-04, 3.9764e-05, 1.8756e-04, -4.1711e-05, -6.1706e-04],\n [-3.4663e-04, -9.3658e-06, 1.8173e-04, -3.4340e-05, -6.8764e-04]]],\n\n\n [[[ 2.2721e-05, -1.3973e-04, -8.3932e-06, 7.4019e-05, -1.2792e-04],\n [-1.6098e-05, -1.3476e-04, -1.8720e-05, -4.6422e-05, -3.9426e-05],\n [-5.9533e-05, -5.3049e-05, -4.3887e-05, -5.0863e-05, -9.6049e-06],\n [-1.0295e-04, -3.8701e-05, -2.7050e-05, -5.0257e-05, 6.6775e-06],\n [-3.8006e-04, -1.2987e-04, -8.9985e-05, -6.6788e-05, 1.1706e-04]]],\n\n\n [[[-6.1105e-04, -5.6807e-04, -7.5599e-04, -4.9543e-04, 5.1358e-05],\n [-6.9693e-04, -6.1293e-04, -8.3058e-04, -3.7291e-04, -1.9534e-05],\n [-8.8220e-04, -7.0973e-04, -8.5431e-04, -4.7484e-04, -3.9267e-05],\n [-8.9634e-04, -8.8763e-04, -9.8413e-04, -5.1151e-04, -4.1058e-05],\n [-8.9721e-04, -9.3097e-04, -1.0691e-03, -6.1728e-04, -1.3642e-04]]],\n\n\n [[[ 5.0506e-04, 5.1439e-04, 1.7679e-04, 1.8973e-05, -3.1018e-06],\n [ 4.6289e-04, 3.3825e-04, 1.2572e-04, -4.3104e-05, -8.1421e-05],\n [ 4.1588e-04, 4.3632e-04, 4.4355e-04, 9.6350e-05, 1.2512e-04],\n [ 3.5385e-04, 3.8503e-04, 5.6245e-04, 4.1100e-04, 6.6381e-04],\n [ 3.1215e-04, 4.3475e-04, 7.3892e-04, 9.2901e-04, 8.4748e-04]]]])\ntorch.Size([6, 1, 5, 5])\n" ] ], [ [ "### Updating the networks' weights with optimizer", "_____no_output_____" ] ], [ [ "optimizer = optim.Adam(network.parameters(), lr=0.01) #initialize optimier", "_____no_output_____" ], [ "loss.item() # just to see the current loss without training", "_____no_output_____" ], [ "get_num_correct(preds,labels) # just to see correct prediction without training", "_____no_output_____" ], [ "optimizer.step() # updates the weight", "_____no_output_____" ], [ "preds = network(images)\nloss = F.cross_entropy(preds, labels)", "_____no_output_____" ], [ "loss.item()", "_____no_output_____" ], [ "get_num_correct(preds, labels)", "_____no_output_____" ] ], [ [ "### Wrap up training in a single batch", "_____no_output_____" ] ], [ [ "network = Network()\n\ntrain_loader = torch.utils.data.DataLoader(train_set, batch_size = 100)\noptimizer = optim.Adam(network.parameters(), lr=0.01)\n\nbatch = next(iter(train_loader)) # get batch\nimages, labels = batch\n\npreds = network(images)\nloss = F.cross_entropy(preds, labels) # calculate loss\n\nloss.backward() # calculate gradient/ backprop. 
Note, this does not affect the loss but just the learning hyperparam\noptimizer.step() # Update the weight\n\n############################################\nprint(f\"Loss 2 step optimizer (update) {loss.item()}\") # loss before training\npreds = network(images)\nloss = F.cross_entropy(preds, labels) # calculate loss\nprint(f\"Loss 3 step optimizer (update) {loss.item()}\") # loss after training\n", "Loss 2 step optimizer (update) 2.291348695755005\nLoss 3 step optimizer (update) 2.2746458053588867\n" ] ], [ [ "### A Full Training Loop for all batch & multiple epochs", "_____no_output_____" ] ], [ [ "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\nimport torchvision\nimport torchvision.transforms as transforms\n\ntorch.set_printoptions(linewidth=120)", "_____no_output_____" ], [ "def get_num_correct(preds, labels):\n return preds.argmax(dim=1).eq(labels).sum().item()", "_____no_output_____" ], [ "class Network(nn.Module):\n def __init__(self):\n super().__init__()\n # Convolutional layers\n \n self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, stride=1) # in_channel = 1 = grayscale, hyperparam, hyperparam\n self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5, stride=1) # we in crease the output channel when have extra conv layers\n \n # Fully connected layers\n self.fc1 = nn.Linear(in_features=12*4*4, out_features=120, bias=True) # we also shrink the number of features to number of class that we have\n self.fc2 = nn.Linear(in_features = 120, out_features=60, bias=True)\n self.out = nn.Linear(in_features = 60, out_features=10, bias=True) \n \n def forward(self, t):\n # input layer\n t = t\n \n # convolution 1, not \n t = self.conv1(t)\n t = F.relu(t) # operation do not use weight, unlike layers\n t = F.max_pool2d(t, kernel_size=2, stride=2) # operation do not use weight, unlike layers\n \n # convolution 2: => relu => maxpool\n t = self.conv2(t)\n # WHY do we need these 2 layers?\n t = F.relu(t) \n t = F.max_pool2d(t, kernel_size=2, stride=2) # how to determine these values?\n \n # Transition from Conv to Linear will require flatten\n t = t.reshape(-1, 12*4*4) # 4x4 = shape of reduce image (originally 28x28)\n \n # linear 1:\n t = self.fc1(t)\n t = F.relu(t)\n \n # linear 2:\n t = self.fc2(t)\n t = F.relu(t)\n \n # output:\n t = self.out(t)\n \n return t", "_____no_output_____" ], [ "train_set = torchvision.datasets.FashionMNIST(\n root=\"./data/FashionMNIST\",\n train=True,\n download=True,\n transform=transforms.Compose([ # convert image to \n transforms.ToTensor()\n ]))", "_____no_output_____" ], [ "network = Network()\n\ntrain_loader = torch.utils.data.DataLoader(train_set, batch_size = 100)\noptimizer = optim.Adam(network.parameters(), lr=0.01)\n\nfor epoch in range(5):\n\n total_loss = 0\n total_correct = 0\n\n for i, batch in enumerate(train_loader):\n # print(f\"Batch {i}\")\n images, labels = batch\n\n preds = network(images)\n loss = F.cross_entropy(preds, labels) # calculate loss\n\n # Each weight has the corresponsing Gradient\n # before we calculate a new gradient for the same weight via each batch we have to zero out the gradient.\n # we want to use the new calculated gradient to update the weight. \n\n optimizer.zero_grad() \n loss.backward() # calculate gradient/ backprop. 
Note, this does not affect the loss but just the learning hyperparam\n optimizer.step() # Update the weight\n\n total_loss += loss.item()\n total_correct += get_num_correct(preds, labels)\n\n print(f\"epoch: {epoch}, total_correct: {total_correct}, loss: {total_loss}\")", "epoch: 0, total_correct: 47859, loss: 325.8240841627121\nepoch: 1, total_correct: 51775, loss: 226.27133131027222\nepoch: 2, total_correct: 52480, loss: 205.96931199729443\nepoch: 3, total_correct: 52738, loss: 198.12887558341026\nepoch: 4, total_correct: 53023, loss: 190.52423013746738\n" ], [ "print(f\"The accuracy rate is {total_correct / len(train_set)}\") # total correct of the latest trained model", "The accuracy rate is 0.8837166666666667\n" ] ], [ [ "### Confusion Matrix - Analyze CNN Results, building & Plotting a Confusion Matrix\n- Which prediction classes confuse the network", "_____no_output_____" ] ], [ [ "print(len(train_set))\nprint(len(train_set.targets))", "60000\n60000\n" ], [ "# Getting predictions for entire training set\ndef get_all_preds(model, data_loader): # data_loader for batches, model = trained model\n all_preds = torch.tensor([])\n for batch in data_loader:\n images, labels = batch\n \n preds = model(images)\n all_preds = torch.cat((all_preds, preds), dim=0)\n return all_preds", "_____no_output_____" ], [ "# when doing test prediction, we don't want tracking the gradient\nwith torch.no_grad():\n prediction_loader = torch.utils.data.DataLoader(train_set, batch_size = 10000)\n train_preds = get_all_preds(network, prediction_loader)", "_____no_output_____" ], [ "train_preds.shape", "_____no_output_____" ], [ "preds_correct = get_num_correct(train_preds, train_set.targets) # prediction, labels\nprint(f\"accuracy: {preds_correct/len(train_set)}\")", "accuracy: 0.8715666666666667\n" ] ], [ [ "#### Build a confusion matrix", "_____no_output_____" ] ], [ [ "#labels\ntrain_set.targets", "_____no_output_____" ], [ "# Prediction\ntrain_preds.argmax(dim=1)", "_____no_output_____" ], [ "# pair each labels & the prediction\nstacked = torch.stack(\n (\n train_set.targets, train_preds.argmax(dim=1)\n ), dim = 1 # dim = 1 to pair up one by one\n)", "_____no_output_____" ], [ "print(stacked.shape)", "torch.Size([60000, 2])\n" ], [ "stacked", "_____no_output_____" ], [ "stacked[0].tolist()", "_____no_output_____" ], [ "confusion_matrix = torch.zeros(10,10, dtype=torch.int64)\nconfusion_matrix", "_____no_output_____" ], [ "for pair in stacked: \n # for each pair of prediction (does not have to be in order), just add 1 to where you predict it\n labels, preds = pair.tolist()\n confusion_matrix[labels, preds] = confusion_matrix[labels, preds] + 1", "_____no_output_____" ], [ "confusion_matrix", "_____no_output_____" ] ], [ [ "#### Plot the confusion matrix", "_____no_output_____" ] ], [ [ "import itertools\nimport numpy as np\nimport matplotlib.pyplot as plt\ndef plot_confusion_matrix(cm, classes, normalize = False, title=\"Confusion Matrix\", cmap=plt.cm.Blues):\n \"\"\"Prints and plots the confusion matrix\n \"\"\"\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np,newaxis]\n print(\"Normalized confusion matrix\")\n else:\n print(\"Unnormalized confusion matrix\")\n print(cm)\n \n plt.imshow(cm, interpolation=\"nearest\", cmap = cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks, classes, rotation=45)\n plt.yticks(tick_marks, classes)\n \n fmt = \".2f\" if normalize else 'd'\n thresh = cm.max() / 2.\n for i, j in 
itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n plt.text(j, i, format(cm[i, j], fmt),\n horizontalalignment = \"center\",\n color = \"white\" if cm[i,j] > thresh else \"black\")\n \n plt.tight_layout()\n plt.ylabel(\"True labels\")\n plt.xlabel(\"Prediction\")", "_____no_output_____" ], [ "from sklearn.metrics import confusion_matrix\n", "_____no_output_____" ], [ "cm = confusion_matrix(train_set.targets, train_preds.argmax(dim=1))\nprint(type(cm))\ncm", "<class 'numpy.ndarray'>\n" ], [ "names = (\"T-shirt/top\", \"Trouser\", \"Pullover\", \"Dress\", \"Coat\", \"Sandal\", \"Shirt\", \"Sneaker\", \"Bag\", \"Ankle Boot\")\nplt.figure(figsize=(10,10))\nplot_confusion_matrix(cm, names)", "Unnormalized confusion matrix\n[[5089 14 80 235 12 13 508 0 49 0]\n [ 9 5755 3 208 2 14 2 0 7 0]\n [ 98 8 4614 144 769 10 309 0 48 0]\n [ 137 24 8 5651 120 0 45 0 15 0]\n [ 13 31 361 395 4615 3 517 0 65 0]\n [ 1 0 0 5 0 5859 0 81 5 49]\n [1063 19 640 288 445 9 3471 0 65 0]\n [ 0 0 0 0 0 133 0 5703 4 160]\n [ 17 3 54 29 10 39 30 14 5803 1]\n [ 0 0 1 6 0 41 0 214 4 5734]]\n" ] ], [ [ "### Concatenating vs Stacking\n- Concatenating = Joins a sequence of tensors along an existing axis\n- Stacking = Joins a sequence of tensors along a new axis", "_____no_output_____" ], [ "# Using TensorBoard-----------------------------------------------------------------------", "_____no_output_____" ] ], [ [ "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\nimport torchvision\nimport torchvision.transforms as transforms\n\ntorch.set_printoptions(linewidth=120)\ntorch.set_grad_enabled(True)\n\nfrom torch.utils.tensorboard import SummaryWriter", "_____no_output_____" ], [ "def get_num_correct(preds, labels):\n return preds.argmax(dim=1).eq(labels).sum().item()", "_____no_output_____" ], [ "class Network(nn.Module):\n def __init__(self):\n super().__init__()\n # Convolutional layers\n \n self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, stride=1) # in_channel = 1 = grayscale, hyperparam, hyperparam\n self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5, stride=1) # we in crease the output channel when have extra conv layers\n \n # Fully connected layers\n self.fc1 = nn.Linear(in_features=12*4*4, out_features=120, bias=True) # we also shrink the number of features to number of class that we have\n self.fc2 = nn.Linear(in_features = 120, out_features=60, bias=True)\n self.out = nn.Linear(in_features = 60, out_features=10, bias=True) \n \n def forward(self, t):\n # input layer\n t = t\n \n # convolution 1, not \n t = self.conv1(t)\n t = F.relu(t) # operation do not use weight, unlike layers\n t = F.max_pool2d(t, kernel_size=2, stride=2) # operation do not use weight, unlike layers\n \n # convolution 2: => relu => maxpool\n t = self.conv2(t)\n # WHY do we need these 2 layers?\n t = F.relu(t) \n t = F.max_pool2d(t, kernel_size=2, stride=2) # how to determine these values?\n \n # Transition from Conv to Linear will require flatten\n t = t.reshape(-1, 12*4*4) # 4x4 = shape of reduce image (originally 28x28)\n \n # linear 1:\n t = self.fc1(t)\n t = F.relu(t)\n \n # linear 2:\n t = self.fc2(t)\n t = F.relu(t)\n \n # output:\n t = self.out(t)\n \n return t", "_____no_output_____" ], [ "train_set = torchvision.datasets.FashionMNIST(\n root=\"./data/FashionMNIST\",\n train=True,\n download=True,\n transform=transforms.Compose([ # convert image to \n transforms.ToTensor()\n ]))", "_____no_output_____" ] ], [ [ "### Adding TB to dataset and 
network", "_____no_output_____" ] ], [ [ "train_loader = torch.utils.data.DataLoader(train_set, batch_size = 100, shuffle=True)\ntb = SummaryWriter()\n\nnetwork = Network()\nimages, labels = next(iter(train_loader))\ngrid = torchvision.utils.make_grid(images) # Create a grid to hold images\n\ntb.add_image(\"images\", grid)\ntb.add_graph(network, images)\ntb.close()", "_____no_output_____" ] ], [ [ "### Adding TB to the training loop", "_____no_output_____" ] ], [ [ "network = Network()\ntrain_loader = torch.utils.data.DataLoader(train_set, batch_size = 100, shuffle = True)\noptimizer = optim.Adam(network.parameters(), lr=0.01)\n\n# For batch dataset + model analysis\ntb = SummaryWriter()\n\nimages, labels = next(iter(train_loader))\ngrid = torchvision.utils.make_grid(images) # Create a grid to hold images\n\ntb.add_image(\"images\", grid)\ntb.add_graph(network, images)\n\n# For training process analysis\nfor epoch in range(10):\n\n total_loss = 0\n total_correct = 0\n\n for batch in train_loader:\n images, labels = batch\n\n preds = network(images)\n loss = F.cross_entropy(preds, labels) # calculate loss\n\n optimizer.zero_grad() \n loss.backward() # calculate gradient/ backprop. Note, this does not affect the loss but just the learning hyperparam\n optimizer.step() # Update the weight5\n\n total_loss += loss.item()\n total_correct += get_num_correct(preds, labels)\n\n tb.add_scalar(\"Loss\", total_loss, epoch)\n tb.add_scalar(\"Number Correct\", total_correct, epoch)\n tb.add_scalar(\"Accuracy\", total_correct / len(train_set), epoch)\n \n tb.add_histogram(\"conv1.bias\", network.conv1.bias, epoch)\n tb.add_histogram(\"conv1.weight\", network.conv1.weight, epoch)\n tb.add_histogram(\"conv1.weight.grad\", network.conv1.weight.grad, epoch)\n \n print(f\"epoch: {epoch}, total_correct: {total_correct}, loss: {total_loss}\")\n\ntb.close()", "epoch: 0, total_correct: 47578, loss: 330.02903857827187\nepoch: 1, total_correct: 51653, loss: 228.3071929216385\nepoch: 2, total_correct: 52057, loss: 213.3909364193678\nepoch: 3, total_correct: 52444, loss: 201.3433803766966\nepoch: 4, total_correct: 52873, loss: 191.35578165203333\nepoch: 5, total_correct: 52957, loss: 189.49972957372665\nepoch: 6, total_correct: 53161, loss: 182.23613695800304\nepoch: 7, total_correct: 53254, loss: 182.96793319284916\nepoch: 8, total_correct: 53325, loss: 180.26875261217356\nepoch: 9, total_correct: 53653, loss: 171.32632698118687\n" ] ], [ [ "# Hyperparameter Tuning & Experienting", "_____no_output_____" ] ], [ [ "from itertools import product", "_____no_output_____" ], [ "parameters = dict(\n lr = [0.01, 0.001],\n batch_size = [10,100,1000],\n shuffle = [True, False]\n)", "_____no_output_____" ], [ "param_values = [v for v in parameters.values()]\nparam_values", "_____no_output_____" ], [ "# This alternative for the 2 for loop for all combination\nfor lr, batch_size, shuffle in product(*param_values):\n print(lr, batch_size, shuffle)", "0.01 10 True\n0.01 10 False\n0.01 100 True\n0.01 100 False\n0.01 1000 True\n0.01 1000 False\n0.001 10 True\n0.001 10 False\n0.001 100 True\n0.001 100 False\n0.001 1000 True\n0.001 1000 False\n" ], [ "for lr, batch_size, shuffle in product(*param_values):\n comment = f\"batch_size={batch_size} lr={lr} shuffle={shuffle}\"\n \n batch_size = 100\n lr = 0.01\n\n network = Network()\n train_loader = torch.utils.data.DataLoader(train_set, batch_size = batch_size, shuffle = True)\n optimizer = optim.Adam(network.parameters(), lr=lr)\n\n # For batch dataset + model analysis\n images, labels = 
next(iter(train_loader))\n grid = torchvision.utils.make_grid(images) # Create a grid to hold images\n\n tb = SummaryWriter(comment = comment) # append the name of the run\n tb.add_image(\"images\", grid)\n tb.add_graph(network, images)\n\n # For training process analysis\n for epoch in range(10):\n\n total_loss = 0\n total_correct = 0\n\n for batch in train_loader:\n images, labels = batch\n\n preds = network(images)\n loss = F.cross_entropy(preds, labels) # calculate loss\n\n optimizer.zero_grad() \n loss.backward() # calculate gradient/ backprop. Note, this does not affect the loss but just the learning hyperparam\n optimizer.step() # Update the weight\n\n total_loss += loss.item() * batch_size\n total_correct += get_num_correct(preds, labels)\n\n tb.add_scalar(\"Loss\", total_loss, epoch)\n tb.add_scalar(\"Number Correct\", total_correct, epoch)\n tb.add_scalar(\"Accuracy\", total_correct / len(train_set), epoch)\n\n # tb.add_histogram(\"conv1.bias\", network.conv1.bias, epoch)\n # tb.add_histogram(\"conv1.weight\", network.conv1.weight, epoch)\n # tb.add_histogram(\"conv1.weight.grad\", network.conv1.weight.grad, epoch)\n\n for name, weight in network.named_parameters():\n tb.add_histogram(name, weight, epoch)\n tb.add_histogram(f\"{name}.grad\", weight.grad, epoch)\n\n print(f\"epoch: {epoch}, total_correct: {total_correct}, loss: {total_loss}\")\n\n tb.close()", "epoch: 0, total_correct: 47815, loss: 32627.08657681942\nepoch: 1, total_correct: 51718, loss: 22260.237954556942\nepoch: 2, total_correct: 52419, loss: 20644.833785295486\nepoch: 3, total_correct: 52798, loss: 19458.965423703194\nepoch: 4, total_correct: 53003, loss: 19034.481520950794\nepoch: 5, total_correct: 53271, loss: 18492.442212998867\nepoch: 6, total_correct: 53417, loss: 17817.013681679964\nepoch: 7, total_correct: 53486, loss: 17938.345924019814\nepoch: 8, total_correct: 53619, loss: 17453.28206717968\nepoch: 9, total_correct: 53580, loss: 17477.22753509879\nepoch: 0, total_correct: 46356, loss: 35690.80906510353\nepoch: 1, total_correct: 51323, loss: 23711.545085906982\nepoch: 2, total_correct: 52143, loss: 21316.894641518593\nepoch: 3, total_correct: 52621, loss: 20285.44818609953\nepoch: 4, total_correct: 52753, loss: 19711.36677339673\nepoch: 5, total_correct: 53007, loss: 18897.72866666317\nepoch: 6, total_correct: 53153, loss: 18425.33391714096\nepoch: 7, total_correct: 53159, loss: 18565.76368138194\nepoch: 8, total_correct: 53381, loss: 17964.81671333313\nepoch: 9, total_correct: 53311, loss: 18306.530352681875\nepoch: 0, total_correct: 44554, loss: 40406.589302420616\nepoch: 1, total_correct: 50544, loss: 25712.07067221403\nepoch: 2, total_correct: 51450, loss: 23124.77108836174\nepoch: 3, total_correct: 51911, loss: 22040.09489491582\nepoch: 4, total_correct: 52085, loss: 21369.326788187027\nepoch: 5, total_correct: 52246, loss: 21107.723239064217\nepoch: 6, total_correct: 52528, loss: 20302.074022591114\nepoch: 7, total_correct: 52568, loss: 20341.403813660145\nepoch: 8, total_correct: 52693, loss: 19968.143731355667\nepoch: 9, total_correct: 52664, loss: 19794.090458750725\nepoch: 0, total_correct: 47020, loss: 33960.67530810833\nepoch: 1, total_correct: 51538, loss: 22986.73229366541\nepoch: 2, total_correct: 52163, loss: 21209.345690906048\nepoch: 3, total_correct: 52531, loss: 20198.757615685463\nepoch: 4, total_correct: 52790, loss: 19570.826382935047\nepoch: 5, total_correct: 53087, loss: 18779.24553900957\nepoch: 6, total_correct: 53166, loss: 18482.16244056821\nepoch: 7, total_correct: 53300, 
loss: 18360.81723868847\nepoch: 8, total_correct: 53337, loss: 18220.240525901318\nepoch: 9, total_correct: 53434, loss: 17799.655033648014\nepoch: 0, total_correct: 46530, loss: 35534.46931988001\nepoch: 1, total_correct: 51205, loss: 23584.97524559498\nepoch: 2, total_correct: 51954, loss: 21781.90327435732\nepoch: 3, total_correct: 52195, loss: 20818.449698388577\nepoch: 4, total_correct: 52534, loss: 20316.45976603031\nepoch: 5, total_correct: 52625, loss: 19817.181876301765\nepoch: 6, total_correct: 52932, loss: 19420.408706367016\nepoch: 7, total_correct: 52989, loss: 18973.46661090851\nepoch: 8, total_correct: 53094, loss: 18770.466816425323\nepoch: 9, total_correct: 53279, loss: 18540.807612240314\nepoch: 0, total_correct: 47396, loss: 33841.07643067837\nepoch: 1, total_correct: 51464, loss: 23069.676284492016\nepoch: 2, total_correct: 52092, loss: 21162.001590430737\nepoch: 3, total_correct: 52613, loss: 19909.17019546032\nepoch: 4, total_correct: 52769, loss: 19623.87887239456\nepoch: 5, total_correct: 53128, loss: 18771.23445570469\nepoch: 6, total_correct: 53242, loss: 18318.224046379328\nepoch: 7, total_correct: 53358, loss: 18083.061026781797\nepoch: 8, total_correct: 53374, loss: 17979.729913920164\nepoch: 9, total_correct: 53526, loss: 17808.34444463253\nepoch: 0, total_correct: 47496, loss: 33324.724800884724\nepoch: 1, total_correct: 51599, loss: 22413.968540728092\nepoch: 2, total_correct: 52242, loss: 20874.532824754715\nepoch: 3, total_correct: 52552, loss: 20034.52707082033\nepoch: 4, total_correct: 52789, loss: 19392.623429000378\nepoch: 5, total_correct: 52874, loss: 19108.930475264788\nepoch: 6, total_correct: 52999, loss: 18741.07948690653\nepoch: 7, total_correct: 53213, loss: 18316.71445891261\nepoch: 8, total_correct: 53231, loss: 18199.224837124348\nepoch: 9, total_correct: 53423, loss: 17868.811705708504\nepoch: 0, total_correct: 47322, loss: 33581.98929429054\nepoch: 1, total_correct: 51573, loss: 22732.692924141884\nepoch: 2, total_correct: 52272, loss: 20805.599881708622\nepoch: 3, total_correct: 52474, loss: 20082.680636644363\nepoch: 4, total_correct: 52835, loss: 19386.16304844618\nepoch: 5, total_correct: 52880, loss: 19094.42046880722\nepoch: 6, total_correct: 53120, loss: 18467.592690885067\nepoch: 7, total_correct: 53224, loss: 18333.08691754937\nepoch: 8, total_correct: 53205, loss: 18319.795460253954\nepoch: 9, total_correct: 53432, loss: 17886.299324035645\nepoch: 0, total_correct: 46360, loss: 35888.596464693546\nepoch: 1, total_correct: 51338, loss: 23612.54615932703\nepoch: 2, total_correct: 52127, loss: 21429.33057397604\nepoch: 3, total_correct: 52513, loss: 20325.778460502625\nepoch: 4, total_correct: 52756, loss: 19834.023685753345\nepoch: 5, total_correct: 52862, loss: 19340.33272266388\nepoch: 6, total_correct: 52837, loss: 19155.930253118277\nepoch: 7, total_correct: 53040, loss: 18724.377931654453\nepoch: 8, total_correct: 53164, loss: 18629.095577448606\nepoch: 9, total_correct: 53369, loss: 18357.340314239264\nepoch: 0, total_correct: 46518, loss: 35022.46422767639\nepoch: 1, total_correct: 51400, loss: 22897.744515538216\nepoch: 2, total_correct: 52155, loss: 21097.114764153957\nepoch: 3, total_correct: 52466, loss: 20359.921458363533\nepoch: 4, total_correct: 52712, loss: 19486.170861124992\nepoch: 5, total_correct: 52943, loss: 19195.890572667122\nepoch: 6, total_correct: 53057, loss: 18653.28980088234\nepoch: 7, total_correct: 53238, loss: 18442.972961068153\nepoch: 8, total_correct: 53271, loss: 18402.76671499014\nepoch: 9, 
total_correct: 53422, loss: 17905.239026993513\nepoch: 0, total_correct: 46370, loss: 36095.42239308357\nepoch: 1, total_correct: 51322, loss: 23602.086704969406\nepoch: 2, total_correct: 52126, loss: 21612.434799969196\nepoch: 3, total_correct: 52348, loss: 20723.108020424843\nepoch: 4, total_correct: 52699, loss: 20131.296561658382\nepoch: 5, total_correct: 52772, loss: 19704.853954911232\nepoch: 6, total_correct: 52940, loss: 19421.88045978546\nepoch: 7, total_correct: 52954, loss: 19318.173514306545\nepoch: 8, total_correct: 52970, loss: 19167.52364486456\nepoch: 9, total_correct: 53153, loss: 18328.64599376917\nepoch: 0, total_correct: 46730, loss: 35292.53210425377\nepoch: 1, total_correct: 51115, loss: 23995.792649686337\nepoch: 2, total_correct: 51927, loss: 21715.699139237404\nepoch: 3, total_correct: 52288, loss: 20744.744351506233\nepoch: 4, total_correct: 52626, loss: 19907.143668830395\nepoch: 5, total_correct: 52848, loss: 19198.414004594088\nepoch: 6, total_correct: 53035, loss: 18810.788298398256\nepoch: 7, total_correct: 53238, loss: 18336.914777755737\nepoch: 8, total_correct: 53271, loss: 18180.050624907017\nepoch: 9, total_correct: 53432, loss: 17870.177245885134\n" ] ], [ [ "## Training Loop Run Builder Class\n- A more convienient way compared to for loop", "_____no_output_____" ] ], [ [ "from collections import OrderedDict\nfrom collections import namedtuple\nfrom itertools import product", "_____no_output_____" ], [ "class RunBuilder():\n @staticmethod\n def get_runs(params):\n # Build runs for us, based on the params we passed in\n Run = namedtuple(\"Run\", params.keys())\n \n runs = []\n for v in product(*params.values()):\n runs.append(Run(*v))\n \n return runs", "_____no_output_____" ], [ "params = OrderedDict(\n lr = [0.01, 0.001],\n batch_size = [1000, 10000]\n)", "_____no_output_____" ], [ "runs = RunBuilder.get_runs(params)\nruns", "_____no_output_____" ], [ "for run in runs:\n print(run, run.lr, run.batch_size)", "Run(lr=0.01, batch_size=1000) 0.01 1000\nRun(lr=0.01, batch_size=10000) 0.01 10000\nRun(lr=0.001, batch_size=1000) 0.001 1000\nRun(lr=0.001, batch_size=10000) 0.001 10000\n" ], [ "for run in RunBuilder.get_runs(params):\n comment = f\"-{run}\"\n \n batch_size = 100\n lr = 0.01\n\n network = Network()\n train_loader = torch.utils.data.DataLoader(train_set, batch_size = batch_size, shuffle = True)\n optimizer = optim.Adam(network.parameters(), lr=lr)\n\n # For batch dataset + model analysis\n images, labels = next(iter(train_loader))\n grid = torchvision.utils.make_grid(images) # Create a grid to hold images\n\n tb = SummaryWriter(comment = comment) # append the name of the run\n tb.add_image(\"images\", grid)\n tb.add_graph(network, images)\n\n # For training process analysis\n for epoch in range(10):\n\n total_loss = 0\n total_correct = 0\n\n for batch in train_loader:\n images, labels = batch\n\n preds = network(images)\n loss = F.cross_entropy(preds, labels) # calculate loss\n\n optimizer.zero_grad() \n loss.backward() # calculate gradient/ backprop. 
Note, this does not affect the loss but just the learning hyperparam\n optimizer.step() # Update the weight\n\n total_loss += loss.item() * batch_size\n total_correct += get_num_correct(preds, labels)\n\n tb.add_scalar(\"Loss\", total_loss, epoch)\n tb.add_scalar(\"Number Correct\", total_correct, epoch)\n tb.add_scalar(\"Accuracy\", total_correct / len(train_set), epoch)\n\n # tb.add_histogram(\"conv1.bias\", network.conv1.bias, epoch)\n # tb.add_histogram(\"conv1.weight\", network.conv1.weight, epoch)\n # tb.add_histogram(\"conv1.weight.grad\", network.conv1.weight.grad, epoch)\n\n for name, weight in network.named_parameters():\n tb.add_histogram(name, weight, epoch)\n tb.add_histogram(f\"{name}.grad\", weight.grad, epoch)\n\n print(f\"epoch: {epoch}, total_correct: {total_correct}, loss: {total_loss}\")\n\n tb.close()", "_____no_output_____" ] ], [ [ "## CNN Training Loop Refactoring - Simultaneous Hyperparameter Testing\n- Clean up training loop", "_____no_output_____" ] ], [ [ "!pip3 install simplejson", "Collecting simplejson\n Downloading simplejson-3.17.2.tar.gz (83 kB)\nBuilding wheels for collected packages: simplejson\n Building wheel for simplejson (setup.py): started\n Building wheel for simplejson (setup.py): finished with status 'done'\n Created wheel for simplejson: filename=simplejson-3.17.2-cp38-cp38-win_amd64.whl size=74464 sha256=2d95ef48d10147106e8306a89985ab904994e7ba70781cc4771366a4c7ee9840\n Stored in directory: c:\\users\\nguye\\appdata\\local\\pip\\cache\\wheels\\17\\72\\7d\\df0984c925921e22322ea462a6f861e9d0617881192deb9b8d\nSuccessfully built simplejson\nInstalling collected packages: simplejson\nSuccessfully installed simplejson-3.17.2\n" ], [ "import time \nimport pandas as pd\nfrom IPython.display import display\nfrom IPython.display import clear_output\nimport simplejson as json\n\n# Run Manager Class for separating tensorboard code\nclass RunManager():\n def __init__(self):\n self.epoch_count = 0\n self.epoch_loss = 0\n self.epoch_num_correct = 0\n self.epoch_start_time = None\n \n self.run_params = None\n self.run_count = 0\n self.run_data = []\n self.run_start_time = None\n \n self.network = None\n self.loader = None\n self.tb = None\n \n def begin_run(self, run, network, loader):\n self.run_start_time = time.time()\n \n self.run_params = run\n self.run_count += 1\n \n self.network = network\n self.loader = loader\n self.tb = SummaryWriter(comment=f\"-{run}\")\n \n images, labels = next(iter(self.loader))\n grid = torchvision.utils.make_grid(images)\n \n self.tb.add_image(\"images\", grid)\n self.tb.add_graph(self.network, images)\n \n def end_run(self):\n self.tb.close()\n self.epoch_count = 0\n \n def begin_epoch(self):\n self.epoch_start_time = time.time()\n \n def begin_epoch(self):\n self.epoch_start_time = time.time()\n self.epoch_count += 1\n self.epoch_loss = 0\n self.epoch_num_correct = 0\n \n def end_epoch(self):\n epoch_duration = time.time() - self.epoch_start_time\n run_duration = time.time() - self.run_start_time\n \n loss = self.epoch_loss / len(self.loader.dataset)\n accuracy = self.epoch_num_correct / len(self.loader.dataset)\n \n self.tb.add_scalar(\"Loss\", loss, self.epoch_count)\n self.tb.add_scalar(\"Accuracy\", accuracy, self.epoch_count)\n \n for name, param in self.network.named_parameters():\n self.tb.add_histogram(name, param, self.epoch_count)\n self.tb.add_histogram(f\"{name}.grad\", param.grad, self.epoch_count)\n \n # built pandas to analyze data outside of TB\n results = OrderedDict()\n results[\"run\"] = self.run_count\n 
results[\"epoch\"] = self.epoch_count\n        results[\"loss\"] = loss\n        results[\"accuracy\"] = accuracy\n        results[\"epoch duration\"] = epoch_duration\n        results[\"run duration\"] = run_duration\n        for k,v in self.run_params._asdict().items(): results[k] = v # allow us to see what results match with what param\n        self.run_data.append(results)\n        df = pd.DataFrame.from_dict(self.run_data, orient=\"columns\")\n        \n        # update in ipynb in real time\n        clear_output(wait=True)\n        display(df)\n        \n    def track_loss(self, loss):\n        self.epoch_loss += loss.item() * self.loader.batch_size\n        \n    def track_num_correct(self, preds, labels):\n        self.epoch_num_correct += self._get_num_correct(preds, labels)\n        \n    @torch.no_grad()\n    def _get_num_correct(self, preds, labels):\n        return preds.argmax(dim=1).eq(labels).sum().item()\n    \n    def save(self, fileName):\n        pd.DataFrame.from_dict(\n            self.run_data,\n            orient=\"columns\"\n        ).to_csv(f\"{fileName}.csv\") # save in csv\n        \n        # to create in tensorboard \n        with open(f\"{fileName}.json\", \"w\", encoding=\"utf-8\") as f:\n            json.dump(self.run_data, f, ensure_ascii=False, indent = 4)", "_____no_output_____" ], [ "params = OrderedDict(\n    lr = [0.01],\n    batch_size = [1000,2000],\n    num_workers = [0,1,2,4,8,16]\n#     shuffle = [True, False]\n)\nm = RunManager()\n\nfor run in RunBuilder.get_runs(params):\n    network = Network()\n    loader = torch.utils.data.DataLoader(train_set, batch_size=run.batch_size, shuffle=True, num_workers=run.num_workers) # num_workers speeds up the dataloader; shuffle is fixed to True since it is not part of the run params here\n    optimizer = optim.Adam(network.parameters(), lr=run.lr) # use the learning rate of the current run\n\n    m.begin_run(run, network, loader)\n    for epoch in range(5):\n        m.begin_epoch()\n        for batch in loader:\n            images = batch[0]\n            labels = batch[1]\n            preds = network(images) # pass batch\n            loss = F.cross_entropy(preds, labels) # calculate loss\n            optimizer.zero_grad() # zero gradient\n            loss.backward() # back prop for calculating gradient\n            optimizer.step() # update weights\n            \n            m.track_loss(loss)\n            m.track_num_correct(preds, labels)\n        \n        m.end_epoch()\n    m.end_run()\nm.save(\"results\")", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e710ff4ab5380d7608c7f7f035b74b1260e8f2f0
121,077
ipynb
Jupyter Notebook
notebooks/classifier_gpy_gaussian_process.ipynb
ziqizh/adversarial-robustness-toolbox
801d714e3f5a4651dfce587ce01674724b7fc318
[ "MIT" ]
2
2019-10-26T08:35:37.000Z
2020-09-02T18:38:00.000Z
notebooks/classifier_gpy_gaussian_process.ipynb
MohammedAbuibaid/adversarial-robustness-toolbox
548febd02770bf06d9e0bb34974b3d98ec889865
[ "MIT" ]
null
null
null
notebooks/classifier_gpy_gaussian_process.ipynb
MohammedAbuibaid/adversarial-robustness-toolbox
548febd02770bf06d9e0bb34974b3d98ec889865
[ "MIT" ]
1
2019-12-22T22:18:15.000Z
2019-12-22T22:18:15.000Z
506.598326
31,988
0.941252
[ [ [ "# Gaussian Process Classification with GPy\n\nIn this notebook, we want to show how to apply a sime GPy classifier and craft adversarail examples on it.\nLet us start by importing all things we might use and trainnig a model and visualizing it.", "_____no_output_____" ] ], [ [ "from art.attacks import HighConfidenceLowUncertainty, ProjectedGradientDescent\nfrom art.classifiers import GPyGaussianProcessClassifier\n\nimport GPy\nfrom sklearn.datasets import make_moons\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm", "_____no_output_____" ] ], [ [ "## Training a classifier\nWe will first train a classifier. The classifier is limited to binary classification problems and scales quatratically with the data, so we use a very simple and basic dataset here.\n\nOnce the code runs, we see a summary of the model and a visualization of the classifier. The shade of the samples is directly related to the confidence of the GP in its classification.", "_____no_output_____" ] ], [ [ "np.random.seed(6)\nX, y = make_moons(n_samples=100, noise=0.1)\n#getting a kernel for GPy. Gradients work for any kernel.\ngpkern = GPy.kern.RBF(np.shape(X)[1])\n#get the model\nm = GPy.models.GPClassification(X, y.reshape(-1,1), kernel=gpkern)\nm.rbf.lengthscale.fix(0.4)\n#determining the infernce method\nm.inference_method = GPy.inference.latent_function_inference.laplace.Laplace()\n#now train the model\nm.optimize(messages=True, optimizer='lbfgs')\n#apply ART to the model\nm_art = GPyGaussianProcessClassifier(m)\n#getting additional test data\nXt, Yt = make_moons(n_samples=10, noise=0.1)\nplt.scatter(X[:,0],X[:,1],c=cm.hot(m_art.predict(X)[:,0].reshape(-1)))\nplt.show()", "Running L-BFGS-B (Scipy implementation) Code:\n runtime i f |g| \n 01s17 0012 1.649933e+01 2.167350e-05 \n 01s89 0021 1.648188e+01 1.741903e-10 \nRuntime: 01s89\nOptimization status: Converged\n\n" ] ], [ [ "## Targeting a classifier\nWe will now craft attacks on this classifier. One are the adversarial examples introduced by Grosse et al. (https://arxiv.org/abs/1812.02606) which are specificallt targeting Gaussian Process classifiers. We then apply one of the other attacks of ART, PGD by Madry et al. (https://arxiv.org/abs/1706.06083), as an example. \n\n### Confidence optimized adversarial examples\nWe craft adversarial examples which are optimized for confidence. We plot the initial seeds for the adversarial examples in green and the resulting adversarial examples in black, and connected initial and final point using a straight line (which is not equivalent to the path the optimization took).\n\nWe observe that some examples are not moving towards the other class, but instead seem to move randomly away from the data. 
This stems from the problem that the Gaussian Processes' gradients point away from the data in all directions, and might lead the attack far away from the actual boundary.", "_____no_output_____" ] ], [ [ "# get attack\nattack = HighConfidenceLowUncertainty(m_art,conf=0.75,min_val=-1.0,max_val=2.0)\n# generate examples and plot them\nadv = attack.generate(Xt)\nplt.scatter(X[:,0],X[:,1],c=cm.hot(m_art.predict(X)[:,0].reshape(-1)))\nfor i in range(np.shape(Xt)[0]):\n    plt.scatter(Xt[:,0],Xt[:,1],c='green')\n    plt.scatter(adv[:,0],adv[:,1],c='k')\n    plt.arrow(Xt[i,0], Xt[i,1], adv[i,0]-Xt[i,0], adv[i,1]-Xt[i,1])", "_____no_output_____" ] ], [ [ "### Uncertainty optimized adversarial examples\nWe can additionally optimize for uncertainty by setting unc_increase to 0.9, thereby forcing the adversarial examples to be closer to the original training data.", "_____no_output_____" ] ], [ [ "attack = HighConfidenceLowUncertainty(m_art,unc_increase=0.9,min_val=0.0,max_val=2.0)\nadv = attack.generate(Xt)\nplt.scatter(X[:,0],X[:,1],c=cm.hot(m_art.predict(X)[:,0].reshape(-1)))\nfor i in range(np.shape(Xt)[0]):\n    plt.scatter(Xt[:,0],Xt[:,1],c='green')\n    plt.scatter(adv[:,0],adv[:,1],c='k')\n    plt.arrow(Xt[i,0], Xt[i,1], adv[i,0]-Xt[i,0], adv[i,1]-Xt[i,1])\nplt.show()", "_____no_output_____" ] ], [ [ "### PGD on Gaussian process classification\nTo conclude, we show how to compute PGD adversarial examples on our model. We observe that, as before, many attempts fail, as the model misleads the attack to take a wrong path away from the boundary, where samples are classified by default as either of the classes.", "_____no_output_____" ] ], [ [ "attack = ProjectedGradientDescent(m_art,eps=0.5,eps_step=0.2) #TODO,targeted=True)\nadv = attack.generate(Xt)\nplt.scatter(X[:,0],X[:,1],c=cm.hot(m_art.predict(X)[:,0].reshape(-1)))\nfor i in range(np.shape(Xt)[0]):\n    plt.scatter(Xt[:,0],Xt[:,1],c='green')\n    plt.scatter(adv[:,0],adv[:,1],c='k')\n    plt.arrow(Xt[i,0], Xt[i,1], adv[i,0]-Xt[i,0], adv[i,1]-Xt[i,1])\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e710ff82d149a50425cf9bdd4dd4fafc1a9519ed
15,468
ipynb
Jupyter Notebook
Final_Report.ipynb
sille1994/Optimization-of-Hyperparameters-in-a-Convolution-Neural-Network
5976467bfa35e2679c0fa85c73187c6b6381d115
[ "MIT" ]
null
null
null
Final_Report.ipynb
sille1994/Optimization-of-Hyperparameters-in-a-Convolution-Neural-Network
5976467bfa35e2679c0fa85c73187c6b6381d115
[ "MIT" ]
null
null
null
Final_Report.ipynb
sille1994/Optimization-of-Hyperparameters-in-a-Convolution-Neural-Network
5976467bfa35e2679c0fa85c73187c6b6381d115
[ "MIT" ]
1
2020-09-25T16:17:14.000Z
2020-09-25T16:17:14.000Z
54.464789
1,077
0.668671
[ [ [ "# <center> Optimization of Hyperparameters in a Convolution Neural Network </center>\n\n<center>By Cecilie Dura André</center>", "_____no_output_____" ], [ "<img src=\"./Documentation/images/renderplot.png\" />\nImage from: https://ax.dev/tutorials/tune_cnn.html", "_____no_output_____" ], [ "---\n# Authors\n\nCecilie Dura André <br>\nDanish Technical University, Healthcare Technology", "_____no_output_____" ], [ "---\n# Abstract\n\nIn the last couple of years, there has been a major shift in training and using convolutional neural networks as a second opinion in medical detection and diagnostic. It has been proven in some medical cases that the neural networks outperform the detectors [1–7]. Thus, they are thought to be able to work as a second opinion to minimize the time used on detecting and diagnosing a patient, while increasing sensitivity and specificity. Convolution neural networks can be trained by fitting the network of weights iteratively to a wished outcome by a known input [8]. For each convolution neural networks, a few hyperparameters have to be picked [8]. These parameters are often chosen before training and they are not changed under training. These parameters are picked based on experience and retraining the model a couple of times. This project will look into finding the best hyperparameters using Bayesian optimization. Thus, retraining the model is no longer necessary and time will be saved. There will also be statistical evidence for the chosen hyperparameters. ", "_____no_output_____" ], [ "----\n# Statement of Need\n\nThe author is from a country where all medical data is saved on the same database. Data from people have been stored on this database since the end of the 1960s, thus a huge amount of data has been stored. With the right permissions, this data can be used for scientific purposes that will help doctors to see a correlation between biomarkers and diseases and to diagnose patients. This author is in a group that workes on creating algorithms that can help doctors diagnose patients as a second opinion. This software is needed because of the huge amount of data used to train and test these models. Training a model can take up to a week and if they have to be retrained multiple times months can go by. This software will make retraining unnecessary and safe weeks of training. There will also be statistical evidence for the chosen hyperparameters. ", "_____no_output_____" ], [ "----\n# Installation instructions\n\nStart by downloading the project by opening the terminal and write: \n - git clone https://gitlab.msu.edu/andrecec/cmse802_spring2020_hyperparamterop.git\nWhen the folder is cloned, the packages used to run the porject has to be installed. The folling us used for that. Python version 3.7 is used in this project and should be installed beforehand.\n\n\nOpen the terminal and then run:\n - conda install pytorch torchvision cpuonly -c pytorch\n - conda install numpy==1.16.1\n - conda install matplotlib==3.1.0\n - conda install botorch -c pytorch -c gpytorch\n - pip install ax-platform\n\nOr \n\nOpen the terminal and make sure you are in the \n\"cmse802_spring2020_hyperparamterop\"- folder. <br>\nThen run: <br>\n - conda env create --prefix ./envs --file ./Software/requirements.yml\n - conda activate ./envs\n\nOr <br>\n\nOpen the terminal and make sure you are in the \n\"cmse802_spring2020_hyperparamterop\"- folder. 
<br>\nThen run: \n - make init \n - conda activate ./envs \n", "_____no_output_____" ], [ "----\n# Unit Tests\n\nWhen the repository is downloaded and the Python packages is installed a unit test should be made to make sure the functions works on your computer. This is done in following way: \n - Open the terminal and make sure you are in the \"cmse802_spring2020_hyperparamterop\"- folder. \n - Run \"make test\" in the terminal", "_____no_output_____" ], [ "----\n# Example of implementing Bayesian optimization\n\nThis author used the module called Ax to use Bayesian optimization to find the best hyperparameters. This is what the author noticed that should be done to implement Ax. Other people can use this project as an example of how to implement Bayesian optimization into their convolutional neural network. To start with they will be able to see that the first difference they should make is in the training function. ", "_____no_output_____" ] ], [ [ "\ndef train_bayesian_optimization():\n # ...\n # Define the hyperparameters\n optimizer = optim.Adam(net.parameters(), lr=parameters.get(\"lr\", 0.001))\n # ...\n for _ in range(num_epochs):\n for i in range(num_batches):\n #...\n #...\n #...\n return net, mean_cost, accuracy\n\ndef eval_bayesian_optimization():\n #...\n # Calculating the accuracy\n return float(correct/num_batches)", "_____no_output_____" ] ], [ [ "In the function \"train_bayesian_optimization\" it can be seen that the parameters have to be given as a class. We will come back to that later in this section. It is important that the function gives the trained network back because it is used in Bayesian optimization. The mean cost and accuracy will be used later on for training the network and will be ignored under the Bayesian optimization step. In this case, the evaluation function, \"eval_bayesian_optimization\", is often the same. It should only give the accuracy back. \n\n", "_____no_output_____" ] ], [ [ "def evaluate_hyperparameters(parameterization):\n \"\"\" Train and evaluate the network to find the best parameters\n Args:\n parameterization: The hyperparameters that should be evaluated\n Returns:\n float: classification accuracy \"\"\"\n net = Net()\n net, _, _ = train_bayesian_optimization(net=net, input_picture=DATA['x_train'],\\\n label_picture=DATA['y_train'], parameters=parameterization,)\n\n return eval_bayesian_optimization(net=net, input_picture=DATA['x_valid'],\\\n label_picture=DATA['y_valid'],)", "_____no_output_____" ] ], [ [ "An evaluation function has to be made. 
In this project it is called \"evaluate_hyperparameters()\", which take the class, parameters, and train a new network each time with the function \"train_bayesian_optimization()\" and evaluate the trained network with \"eval_bayesian_optimization()\" to get the accuracy.", "_____no_output_____" ] ], [ [ "from ax.service.managed_loop import optimize\n\nrun = False\nif run == True:\n \n ####################################\n # THIS FOLLOWING STEP SHOULD BE USED\n ####################################\n \n BEST_PARAMETERS, VALUES, EXPERIMENT, MODEL = optimize(parameters=[{\"name\": \"lr\", \"type\": \"range\",\\\n \"bounds\": [1e-6, 0.4], \"log_scale\": True},], evaluation_function=evaluate_hyperparameters,\\\n objective_name='accuracy',)\n \n # Findin the best hyperparameter for training the network\n DATA1 = EXPERIMENT.fetch_data()\n DF = DATA1.df\n BEST_ARM_NAME = DF.arm_name[DF['mean'] == DF['mean'].max()].values[0]\n BEST_ARM = EXPERIMENT.arms_by_name[BEST_ARM_NAME]", "_____no_output_____" ] ], [ [ "Here, the class, parameters, is defined. In this project, only the best learning rate is found, but other parameters could also be used e.g. momentum, beta values, and the number of epochs. The evaluation function we defined before should be given to the \"optimizer function\", which is the Bayesian optimization algorithm. The next lines are used to get the best hyperparameters, which can be used to train the network.", "_____no_output_____" ], [ "---\n# Methodology\n\n<table>\n<tr>\n<td> <img src=\"./Documentation/images/training.png\" alt=\"Drawing\" style=\"width: 250px;\"/> </td>\n<td> <img src=\"./Documentation/images/costs.png\" style=\"width: 250px;\"/> </td>\n</tr>\n</table>\n<table>\n<tr>\n<td> <img src=\"./Documentation/images/validation.png\" alt=\"Drawing\" style=\"width: 250px;\"/> </td>\n<td> <img src=\"./Documentation/images/test.png\" style=\"width: 250px;\"/> </td>\n</tr>\n</table>\n\nHere it can be seen for both train -, validation -, and test results that the convolution neural network with hyperparameter optimization gets better results faster, but after 10 epochs the hyperparameter optimization (HO) convolution neural network and Non-HO convolution neural network looks similar in mean accuracy. The mean accuracy is the solid line and the standard deviation is the shadow with the belonging color to the line. The accuracy of the convolution neural network does not get worse with HO. This could be a concern since HO could over train the network, but it is not seen here. \n\nWe have to remember that the convolutional neural network used in this paper is small, thus there is a limit to its accuracy. HO does show promising results since the same results are achieved with fewer epochs than the convolutional neural network wit non-HO. Therefore, with bigger networks and with more hyperparameters, time might be saved in the length and with equal or better results. Now, there will also be statistical evidence for the chosen hyperparameters.\n\nThe project differs from the submission guideline in one way. Bayesian optimization is only used before training the convolution neural networks, whereas the guideline would have used Bayesian optimization after several epochs. Thus, getting better hyperparameters. 
This is not needed since the optimization algorithm finds the best overall hyperparameter used for training a network and therefore training the network with a new hyperparameter is not needed.", "_____no_output_____" ], [ "---\n# Concluding Remarks\n\nIn this project, the author has learned to use Bayesian optimization with complex algorithms such as convolutional neural networks. This has contributed to me looking more into the descriptions of the functions and how they should be used. In this project, it is important since Bayesian optimization and optimization of convolutional neural networks should not interfere with each other. The goal of this project was reached with optimal results, which indicate that further work with decisions of hyperparameters can be handled with this new knowledge of Bayesian optimization and implementation. The results from this project are going to be shared with a research group, so they no longer have to use brute force to find the optimal hyperparameters. \n\nThe author also got a sense of what it takes to write better code and write it more beautiful. Together with new methods to check whether the codes work, make environments, and in general a small insight into what it takes to write good code in general. This was a pleasure. ", "_____no_output_____" ], [ "----\n# References\n\n\n[1] Sindhu Ramachandran S, Jose George, and Shibon Skaria. “Using YOLO baseddeep learning network for real time detection and localization of lung nodulesfrom low dose CT scans”. In: February 2018 (2019).doi:10.1117/12.2293699.[14]Aiden Nibali, Zhen He, and Dennis Wollersheim. “Pulmonary nodule classifica-tion with deep residual networks”. eng. In:International Journal of ComputerAssisted Radiology and Surgery12.10 (2017), pages 1799–1808.issn: 1861-6410.<br/>\n[2] Wentao Zhu et al. “DeepLung: Deep 3D dual path nets for automated pul-monary nodule detection and classification”. In:Proceedings - 2018 IEEE Win-ter Conference on Applications of Computer Vision, WACV 20182018-Janua(2018), pages 673–681.doi:10.1109/WACV.2018.00079.<br/>\n[3] Manu Sharma, Jignesh S Bhatt, and Manjunath V Joshi. “Early detection oflung cancer from classification using deep learning”. In: April 2018 (2019).doi:10.1117/12.2309530.<br/>\n[4] Emre Dandil et al. “Artificial neural network-based classification system forlung nodules on computed tomography scans”. eng. In:2014 6th InternationalConference of Soft Computing and Pattern Recognition (SoCPaR). IEEE, 2014,pages 382–386.isbn: 9781479959341.<br/>\n[5] Jinsa Kuruvilla and K Gunavathi. “Lung cancer classification using neural net-works for CT images.” eng. In:Computer methods and programs in biomedicine113.1 (2014), pages 202–209.issn: 1872-7565.url:http://search.proquest.com/docview/1461341321/.<br/>\n[6] Carmen Krewer et al. “Immediate effectiveness of single-session therapeutic in-terventions in pusher behaviour.” eng. In:Gait posture37.2 (2013), pages 246–250.issn: 1879-2219.url:http://search.proquest.com/docview/1282049046/.<br/>\n[7] L.B. Nascimento, A.C. De Paiva, and A.C. Silva. “Lung nodules classificationin CT images using Shannon and Simpson Diversity Indices and SVM”. In:volume 7376. 2012, pages 454–466.isbn: 9783642315367.<br/>\n[8] Hargrave, Marschall. 2019. “Deep Learning.” April 30. https://www.investopedia.com/terms/d/deep-learning.asp.<br/>\n[9] Aravikumar, Meghan. 2018. \"Let’s Talk Bayesian Optimization.\" November 16. https://mlconf.com/blog/lets-talk-bayesian-optimization/.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
e7110c64b7d8f4f5760602d15ada73fcb7ef7e81
157,450
ipynb
Jupyter Notebook
examples/Lorenz_inverse_forced_Colab.ipynb
zhang-liu-official/project3-pinn-test
fcf586a4b15176ee4595bcb5c9b0bc9f3b18f5a8
[ "Apache-2.0" ]
955
2019-06-21T21:56:02.000Z
2022-03-31T03:44:45.000Z
examples/Lorenz_inverse_forced_Colab.ipynb
zhang-liu-official/project3-pinn-test
fcf586a4b15176ee4595bcb5c9b0bc9f3b18f5a8
[ "Apache-2.0" ]
517
2019-07-25T16:47:44.000Z
2022-03-31T17:37:58.000Z
examples/Lorenz_inverse_forced_Colab.ipynb
zhang-liu-official/project3-pinn-test
fcf586a4b15176ee4595bcb5c9b0bc9f3b18f5a8
[ "Apache-2.0" ]
374
2019-06-24T00:44:16.000Z
2022-03-30T08:17:36.000Z
280.659537
39,748
0.879047
[ [ [ "<a href=\"https://colab.research.google.com/github/lululxvi/deepxde/blob/master/examples/Lorenz_inverse_forced_Colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Description\n\nThis notebook aims at the identification of the parameters of the modified Lorenz attractor (with exogenous input)\n\nBuilt upon: \n* Lorenz attractor example from DeepXDE (Lu's code)\n* https://github.com/lululxvi/deepxde/issues/79\n* kind help from Lu, greatly acknowledged\n\n# Install lib and imports", "_____no_output_____" ] ], [ [ "\"\"\"Backend supported: tensorflow.compat.v1, tensorflow\"\"\"\n!pip install deepxde\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport re\nimport numpy as np\nimport requests\nimport io\nimport matplotlib.pyplot as plt\n\nimport deepxde as dde\nfrom deepxde.backend import tf\n\nimport scipy as sp\nimport scipy.interpolate as interp\nfrom scipy.integrate import odeint\n", "Collecting deepxde\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/cf/8e/ee29b85e5892f9775e68a7842a3e808a3d935e2731a00c2ef5f47579b195/DeepXDE-0.8.5-py3-none-any.whl (67kB)\n\u001b[K |████████████████████████████████| 71kB 2.2MB/s \n\u001b[?25hCollecting salib\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/f7/33/cee4d64f7c40f33c08cf5ef5c9b1fb5e51f194b5deceefb5567112800b70/SALib-1.3.11.tar.gz (856kB)\n\u001b[K |████████████████████████████████| 860kB 8.7MB/s \n\u001b[?25hRequirement already satisfied: tensorflow>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from deepxde) (2.3.0)\nRequirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from deepxde) (1.4.1)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from deepxde) (1.18.5)\nRequirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from deepxde) (0.22.2.post1)\nRequirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from deepxde) (3.2.2)\nRequirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from salib->deepxde) (1.1.2)\nRequirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14.0->deepxde) (3.3.0)\nRequirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14.0->deepxde) (0.35.1)\nRequirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14.0->deepxde) (1.12.1)\nRequirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14.0->deepxde) (0.10.0)\nRequirement already satisfied: six>=1.12.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14.0->deepxde) (1.15.0)\nRequirement already satisfied: h5py<2.11.0,>=2.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14.0->deepxde) (2.10.0)\nRequirement already satisfied: tensorboard<3,>=2.3.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14.0->deepxde) (2.3.0)\nRequirement already satisfied: google-pasta>=0.1.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14.0->deepxde) (0.2.0)\nRequirement already satisfied: keras-preprocessing<1.2,>=1.1.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14.0->deepxde) (1.1.2)\nRequirement already satisfied: tensorflow-estimator<2.4.0,>=2.3.0 in 
/usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14.0->deepxde) (2.3.0)\nRequirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14.0->deepxde) (1.32.0)\nRequirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14.0->deepxde) (3.12.4)\nRequirement already satisfied: gast==0.3.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14.0->deepxde) (0.3.3)\nRequirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14.0->deepxde) (1.1.0)\nRequirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow>=1.14.0->deepxde) (1.6.3)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->deepxde) (0.16.0)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->deepxde) (1.2.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->deepxde) (2.4.7)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->deepxde) (0.10.0)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->deepxde) (2.8.1)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->salib->deepxde) (2018.9)\nRequirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (1.7.0)\nRequirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (1.17.2)\nRequirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (1.0.1)\nRequirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (3.2.2)\nRequirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (50.3.0)\nRequirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (0.4.1)\nRequirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (2.23.0)\nRequirement already satisfied: rsa<5,>=3.1.4; python_version >= \"3\" in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (4.6)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (0.2.8)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (4.1.1)\nRequirement already satisfied: importlib-metadata; python_version < \"3.8\" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (2.0.0)\nRequirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from 
google-auth-oauthlib<0.5,>=0.4.1->tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (1.3.0)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (2.10)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (3.0.4)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (1.24.3)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (2020.6.20)\nRequirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.6/dist-packages (from rsa<5,>=3.1.4; python_version >= \"3\"->google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (0.4.8)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < \"3.8\"->markdown>=2.6.8->tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (3.2.0)\nRequirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<3,>=2.3.0->tensorflow>=1.14.0->deepxde) (3.1.0)\nBuilding wheels for collected packages: salib\n Building wheel for salib (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for salib: filename=SALib-1.3.11-py2.py3-none-any.whl size=729665 sha256=959eb24383de6204e4fb3a66a5e7d8ee7ab00adb5e306e70995ee8d6dac9a325\n Stored in directory: /root/.cache/pip/wheels/62/ed/f9/a0b98754ffb2191b98324b96cbbeb1bd5d9598b39ab996b429\nSuccessfully built salib\nInstalling collected packages: salib, deepxde\nSuccessfully installed deepxde-0.8.5 salib-1.3.11\nUsing TensorFlow 2 backend.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/compat/v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.\nInstructions for updating:\nnon-resource variables are not supported in the long term\n" ] ], [ [ "# Generate data", "_____no_output_____" ] ], [ [ "# true values, see p. 
15 in https://arxiv.org/abs/1907.04502\nC1true = 10\nC2true = 15\nC3true = 8 / 3\n\n# time points\nmaxtime = 3\ntime = np.linspace(0, maxtime, 200)\nex_input = 10 * np.sin(2 * np.pi * time) # exogenous input\n\n# interpolate time / lift vectors (for using exogenous variable without fixed time stamps)\ndef ex_func(t):\n spline = sp.interpolate.Rbf(\n time, ex_input, function=\"thin_plate\", smooth=0, episilon=0\n )\n # return spline(t[:,0:])\n return spline(t)\n\n\n# function that returns dy/dt\ndef LorezODE(x, t): # Modified Lorenz system (with exogenous input).\n x1, x2, x3 = x\n dxdt = [\n C1true * (x2 - x1),\n x1 * (C2true - x3) - x2,\n x1 * x2 - C3true * x3 + ex_func(t),\n ]\n return dxdt\n\n\n# initial condition\nx0 = [-8, 7, 27]\n\n# solve ODE\nx = odeint(LorezODE, x0, time)\n\n# plot results\nplt.plot(time, x, time, ex_input)\nplt.xlabel(\"time\")\nplt.ylabel(\"x(t)\")\nplt.show()\n\ntime = time.reshape(-1, 1)\ntime.shape\n", "_____no_output_____" ] ], [ [ "# Perform identification", "_____no_output_____" ] ], [ [ "# parameters to be identified\nC1 = tf.Variable(1.0)\nC2 = tf.Variable(1.0)\nC3 = tf.Variable(1.0)\n\n# interpolate time / lift vectors (for using exogenous variable without fixed time stamps)\ndef ex_func2(t):\n spline = sp.interpolate.Rbf(\n time, ex_input, function=\"thin_plate\", smooth=0, episilon=0\n )\n return spline(t[:, 0:])\n # return spline(t)\n\n\n# define system ODEs\ndef Lorenz_system(x, y, ex):\n \"\"\"Modified Lorenz system (with exogenous input).\n dy1/dx = 10 * (y2 - y1)\n dy2/dx = y1 * (28 - y3) - y2\n dy3/dx = y1 * y2 - 8/3 * y3 + u\n \"\"\"\n y1, y2, y3 = y[:, 0:1], y[:, 1:2], y[:, 2:]\n dy1_x = dde.grad.jacobian(y, x, i=0)\n dy2_x = dde.grad.jacobian(y, x, i=1)\n dy3_x = dde.grad.jacobian(y, x, i=2)\n return [\n dy1_x - C1 * (y2 - y1),\n dy2_x - y1 * (C2 - y3) + y2,\n dy3_x - y1 * y2 + C3 * y3 - ex,\n # dy3_x - y1 * y2 + C3 * y3 - 10*tf.math.sin(2*np.pi*x),\n ]\n\n\ndef boundary(_, on_initial):\n return on_initial\n\n\n# define time domain\ngeom = dde.geometry.TimeDomain(0, maxtime)\n\n# Initial conditions\nic1 = dde.IC(geom, lambda X: x0[0], boundary, component=0)\nic2 = dde.IC(geom, lambda X: x0[1], boundary, component=1)\nic3 = dde.IC(geom, lambda X: x0[2], boundary, component=2)\n\n# Get the training data\nobserve_t, ob_y = time, x\n# boundary conditions\nobserve_y0 = dde.PointSetBC(observe_t, ob_y[:, 0:1], component=0)\nobserve_y1 = dde.PointSetBC(observe_t, ob_y[:, 1:2], component=1)\nobserve_y2 = dde.PointSetBC(observe_t, ob_y[:, 2:3], component=2)\n\n# define data object\ndata = dde.data.PDE(\n geom,\n Lorenz_system,\n [ic1, ic2, ic3, observe_y0, observe_y1, observe_y2],\n num_domain=400,\n num_boundary=2,\n anchors=observe_t,\n auxiliary_var_function=ex_func2,\n)\n\nplt.plot(observe_t, ob_y)\nplt.xlabel(\"Time\")\nplt.legend([\"x\", \"y\", \"z\"])\nplt.title(\"Training data\")\nplt.show()\n\n# define FNN architecture and compile\nnet = dde.maps.FNN([1] + [40] * 3 + [3], \"tanh\", \"Glorot uniform\")\nmodel = dde.Model(data, net)\nmodel.compile(\"adam\", lr=0.001)\n\n# callbacks for storing results\nfnamevar = \"variables.dat\"\nvariable = dde.callbacks.VariableValue([C1, C2, C3], period=1, filename=fnamevar)\n\nlosshistory, train_state = model.train(epochs=60000, callbacks=[variable])\n", "_____no_output_____" ] ], [ [ "Plots", "_____no_output_____" ] ], [ [ "# reopen saved data using callbacks in fnamevar\nlines = open(fnamevar, \"r\").readlines()\n\n# read output data in fnamevar (this line is a long story...)\nChat = np.array(\n [\n 
np.fromstring(\n min(re.findall(re.escape(\"[\") + \"(.*?)\" + re.escape(\"]\"), line), key=len),\n sep=\",\",\n )\n for line in lines\n ]\n)\n\nl, c = Chat.shape\n\nplt.plot(range(l), Chat[:, 0], \"r-\")\nplt.plot(range(l), Chat[:, 1], \"k-\")\nplt.plot(range(l), Chat[:, 2], \"g-\")\nplt.plot(range(l), np.ones(Chat[:, 0].shape) * C1true, \"r--\")\nplt.plot(range(l), np.ones(Chat[:, 1].shape) * C2true, \"k--\")\nplt.plot(range(l), np.ones(Chat[:, 2].shape) * C3true, \"g--\")\nplt.legend([\"C1hat\", \"C2hat\", \"C3hat\", \"True C1\", \"True C2\", \"True C3\"], loc=\"right\")\nplt.xlabel(\"Epoch\")\nplt.show()\n\n\nyhat = model.predict(observe_t)\n\nplt.plot(observe_t, ob_y, \"-\", observe_t, yhat, \"--\")\nplt.xlabel(\"Time\")\nplt.legend([\"x\", \"y\", \"z\", \"xh\", \"yh\", \"zh\"])\nplt.title(\"Training data\")\nplt.show()\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e71120878aa5325b91511a14852976e52649dcaf
16,168
ipynb
Jupyter Notebook
keras/thu.nguyen/1 VGG model.ipynb
tuanthi/Machine-Learning-Course
591166a75860d1499fc8f9538e854a7b6f97f61a
[ "MIT" ]
null
null
null
keras/thu.nguyen/1 VGG model.ipynb
tuanthi/Machine-Learning-Course
591166a75860d1499fc8f9538e854a7b6f97f61a
[ "MIT" ]
null
null
null
keras/thu.nguyen/1 VGG model.ipynb
tuanthi/Machine-Learning-Course
591166a75860d1499fc8f9538e854a7b6f97f61a
[ "MIT" ]
null
null
null
39.724816
1,044
0.547316
[ [ [ "# 1. VGG\n### Finetuning VGG on Fruit dataset\n<img src=\"images/VGG16.png\" style=\"width:750px;height:350px;\">\n<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **VGG16 Model** </center></caption>", "_____no_output_____" ] ], [ [ "# https://deeplearningcourses.com/c/advanced-computer-vision\n# https://www.udemy.com/advanced-computer-vision\nfrom __future__ import print_function, division\nfrom builtins import range, input\n# Note: you may need to update your version of future\n# sudo pip install -U future\n\nfrom keras.layers import Input, Lambda, Dense, Flatten\nfrom keras.models import Model\nfrom keras.applications.vgg16 import VGG16\nfrom keras.applications.vgg16 import preprocess_input\nfrom keras.preprocessing import image\nfrom keras.preprocessing.image import ImageDataGenerator\n\nfrom sklearn.metrics import confusion_matrix\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom glob import glob\n\n\n# re-size all the images to this\nIMAGE_SIZE = [100, 100] # feel free to change depending on dataset\n\n# training config:\nepochs = 5\nbatch_size = 32\n\n# https://www.kaggle.com/paultimothymooney/blood-cells\n# train_path = '../large_files/blood_cell_images/TRAIN'\n# valid_path = '../large_files/blood_cell_images/TEST'\n\n# https://www.kaggle.com/moltean/fruits\n# train_path = '../large_files/fruits-360/Training'\n# valid_path = '../large_files/fruits-360/Validation'\ntrain_path = '../large_files/fruits-360/Training'\nvalid_path = '../large_files/fruits-360/Validation'\n\n# useful for getting number of files\nimage_files = glob(train_path + '/*/*.jp*g')\nvalid_image_files = glob(valid_path + '/*/*.jp*g')\n\n# useful for getting number of classes\nfolders = glob(train_path + '/*')\n\n\n# look at an image for fun\n# plt.imshow(image.load_img(np.random.choice(image_files)))\n# plt.show()\n\n\n\n\n# add preprocessing layer to the front of VGG\nvgg = VGG16(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False)\n\n# don't train existing weights\nfor layer in vgg.layers:\n layer.trainable = False\n\n# our layers - you can add more if you want\nx = Flatten()(vgg.output)\n# x = Dense(1000, activation='relu')(x)\nprediction = Dense(len(folders), activation='softmax')(x)\n\n# create a model object\nmodel = Model(inputs=vgg.input, outputs=prediction)\n\n# view the structure of the model\nmodel.summary()\n\n# tell the model what cost and optimization method to use\nmodel.compile(\n loss='categorical_crossentropy',\n optimizer='rmsprop',\n metrics=['accuracy']\n)\n\n\n\n\n\n# create an instance of ImageDataGenerator\ngen = ImageDataGenerator(\n rotation_range=20,\n width_shift_range=0.1,\n height_shift_range=0.1,\n shear_range=0.1,\n zoom_range=0.2,\n horizontal_flip=True,\n vertical_flip=True,\n preprocessing_function=preprocess_input\n)\n\n\n# test generator to see how it works and some other useful things\n\n# get label mapping for confusion matrix plot later\ntest_gen = gen.flow_from_directory(valid_path, target_size=IMAGE_SIZE)\nprint(test_gen.class_indices)\nlabels = [None] * len(test_gen.class_indices)\nfor k, v in test_gen.class_indices.items():\n labels[v] = k\n\n# should be a strangely colored image (due to VGG weights being BGR)\n# for x, y in test_gen:\n# print(\"min:\", x[0].min(), \"max:\", x[0].max())\n# plt.title(labels[np.argmax(y[0])])\n# plt.imshow(x[0])\n# plt.show()\n# break\n\n\n\n\n# create generators\ntrain_generator = gen.flow_from_directory(\n train_path,\n target_size=IMAGE_SIZE,\n shuffle=True,\n 
batch_size=batch_size,\n)\nvalid_generator = gen.flow_from_directory(\n valid_path,\n target_size=IMAGE_SIZE,\n shuffle=True,\n batch_size=batch_size,\n)\n\nprint(len(image_files), len(valid_image_files) , len(valid_image_files) // batch_size, len(image_files) // batch_size)\n# fit the model\nr = model.fit_generator(\n train_generator,\n validation_data=valid_generator,\n epochs=epochs,\n steps_per_epoch=len(image_files) // batch_size,\n validation_steps=len(valid_image_files) // batch_size,\n)\n\n\n\n\n\ndef get_confusion_matrix(data_path, N):\n # we need to see the data in the same order\n # for both predictions and targets\n print(\"Generating confusion matrix\", N)\n predictions = []\n targets = []\n i = 0\n for x, y in gen.flow_from_directory(data_path, target_size=IMAGE_SIZE, shuffle=False, batch_size=batch_size * 2):\n i += 1\n if i % 50 == 0:\n print(i)\n p = model.predict(x)\n p = np.argmax(p, axis=1)\n y = np.argmax(y, axis=1)\n predictions = np.concatenate((predictions, p))\n targets = np.concatenate((targets, y))\n if len(targets) >= N:\n break\n\n cm = confusion_matrix(targets, predictions)\n return cm\n\n\ncm = get_confusion_matrix(train_path, len(image_files))\nprint(cm)\nvalid_cm = get_confusion_matrix(valid_path, len(valid_image_files))\nprint(valid_cm)\n\n\n# plot some data\n\n# loss\nplt.plot(r.history['loss'], label='train loss')\nplt.plot(r.history['val_loss'], label='val loss')\nplt.legend()\nplt.show()\n\n# accuracies\nplt.plot(r.history['acc'], label='train acc')\nplt.plot(r.history['val_acc'], label='val acc')\nplt.legend()\nplt.show()\n\nfrom util import plot_confusion_matrix\nplot_confusion_matrix(cm, labels, title='Train confusion matrix')\nplot_confusion_matrix(valid_cm, labels, title='Validation confusion matrix')", "/home/thunguyen/miniconda2/envs/py36/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\nUsing TensorFlow backend.\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code" ] ]
e71146ee951cd2c69dba88b8cfda4c3d92acfdb0
12,616
ipynb
Jupyter Notebook
Modeluwu.ipynb
controlledchaos2002/Potato_sick_classfier
8c9fec79489ded74a341de934b63bb6a3c42a620
[ "Apache-2.0" ]
null
null
null
Modeluwu.ipynb
controlledchaos2002/Potato_sick_classfier
8c9fec79489ded74a341de934b63bb6a3c42a620
[ "Apache-2.0" ]
null
null
null
Modeluwu.ipynb
controlledchaos2002/Potato_sick_classfier
8c9fec79489ded74a341de934b63bb6a3c42a620
[ "Apache-2.0" ]
null
null
null
32.348718
3,039
0.401633
[ [ [ "# loading the dataset", "_____no_output_____" ] ], [ [ "\nimport tensorflow as tf\nfrom tensorflow.keras import models, layers\nimport matplotlib.pyplot as plt\nfrom IPython.display import HTML", "_____no_output_____" ], [ "BATCH_SIZE = 32\nIMAGE_SIZE = 256\nCHANNELS=3\nEPOCHS=50", "_____no_output_____" ], [ "dataset = tf.keras.preprocessing.image_dataset_from_directory(\n \"PlantVillage\",\n seed=123,\n shuffle=True,\n image_size=(IMAGE_SIZE,IMAGE_SIZE),\n batch_size=BATCH_SIZE\n)", "Found 2152 files belonging to 3 classes.\n" ], [ "len(dataset)\n\nclass_names = dataset.class_names\n", "_____no_output_____" ] ], [ [ "# Displaying the images ", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(10,10))\nfor image_batch, label_batch in dataset.take(1):\n for i in range(12):\n ax=plt.subplot(3,4,i+1)\n plt.imshow(image_batch[i].numpy().astype(\"uint8\"))\n plt.title(class_names[label_batch[i]])\n plt.axis(\"off\")\n \n ", "_____no_output_____" ], [ "train_ds = dataset.take(54)\nlen(train_ds)", "_____no_output_____" ], [ "test_ds = dataset.skip(54)\nlen(test_ds)", "_____no_output_____" ], [ "val_size=0.1\nlen(dataset)*val_size", "_____no_output_____" ], [ "val_ds = test_ds.take(6)\nlen(val_ds)", "_____no_output_____" ], [ "\ntest_ds = test_ds.skip(6)\nlen(test_ds)", "_____no_output_____" ] ], [ [ "# Making a partition function", "_____no_output_____" ] ], [ [ "def get_dataset_partitions_tf(ds, train_split=0.8, val_split=0.1, test_split=0.1, shuffle=True, shuffle_size=10000):\n assert (train_split + test_split + val_split) == 1\n \n ds_size = len(ds)\n \n if shuffle:\n ds = ds.shuffle(shuffle_size, seed=12)\n \n train_size = int(train_split * ds_size)\n val_size = int(val_split * ds_size)\n \n train_ds = ds.take(train_size) \n val_ds = ds.skip(train_size).take(val_size)\n test_ds = ds.skip(train_size).skip(val_size)\n \n return train_ds, val_ds, test_ds", "_____no_output_____" ], [ "\ntrain_ds, val_ds, test_ds = get_dataset_partitions_tf(dataset)", "_____no_output_____" ], [ "train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=tf.data.AUTOTUNE)\nval_ds = val_ds.cache().shuffle(1000).prefetch(buffer_size=tf.data.AUTOTUNE)\ntest_ds = test_ds.cache().shuffle(1000).prefetch(buffer_size=tf.data.AUTOTUNE)", "_____no_output_____" ] ], [ [ "# model build", "_____no_output_____" ] ], [ [ "resize_and_rescale = tf.keras.Sequential([\n layers.experimental.preprocessing.Resizing(IMAGE_SIZE, IMAGE_SIZE),\n layers.experimental.preprocessing.Rescaling(1./255),\n])", "_____no_output_____" ], [ "data_augmentation = tf.keras.Sequential([\n layers.experimental.preprocessing.RandomFlip(\"horizontal_and_vertical\"),\n layers.experimental.preprocessing.RandomRotation(0.2),\n])", "_____no_output_____" ], [ "input_shape = (BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, CHANNELS)\nn_classes = 3\n\nmodel = models.Sequential([\n resize_and_rescale,\n data_augmentation,\n layers.Conv2D(32, kernel_size = (3,3), activation='relu', input_shape=input_shape),\n layers.MaxPooling2D((2, 2)),\n layers.Conv2D(64, kernel_size = (3,3), activation='relu'),\n layers.MaxPooling2D((2, 2)),\n layers.Conv2D(64, kernel_size = (3,3), activation='relu'),\n layers.MaxPooling2D((2, 2)),\n layers.Conv2D(64, (3, 3), activation='relu'),\n layers.MaxPooling2D((2, 2)),\n layers.Conv2D(64, (3, 3), activation='relu'),\n layers.MaxPooling2D((2, 2)),\n layers.Conv2D(64, (3, 3), activation='relu'),\n layers.MaxPooling2D((2, 2)),\n layers.Flatten(),\n layers.Dense(64, activation='relu'),\n layers.Dense(n_classes, 
activation='softmax'),\n])\n\nmodel.build(input_shape=input_shape)", "_____no_output_____" ], [ "model.compile(\n    optimizer='adam',\n    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),\n    metrics=['accuracy']\n)", "_____no_output_____" ], [ "history = model.fit(\n    train_ds,\n    batch_size=BATCH_SIZE,\n    validation_data=val_ds,\n    verbose=1,\n    epochs=50,\n)", "_____no_output_____" ], [ "scores = model.evaluate(test_ds)", "_____no_output_____" ], [ "import numpy as np\nfor images_batch, labels_batch in test_ds.take(1):\n    \n    first_image = images_batch[0].numpy().astype('uint8')\n    first_label = labels_batch[0].numpy()\n    \n    print(\"first image to predict\")\n    plt.imshow(first_image)\n    print(\"actual label:\",class_names[first_label])\n    \n    batch_prediction = model.predict(images_batch)\n    print(\"predicted label:\",class_names[np.argmax(batch_prediction[0])])", "_____no_output_____" ], [ "\ndef predict(model, img):\n    img_array = tf.keras.preprocessing.image.img_to_array(img) # convert the image passed to the function, rather than relying on loop globals\n    img_array = tf.expand_dims(img_array, 0) # Create a batch\n\n    predictions = model.predict(img_array)\n\n    predicted_class = class_names[np.argmax(predictions[0])]\n    confidence = round(100 * (np.max(predictions[0])), 2)\n    return predicted_class, confidence", "_____no_output_____" ], [ "plt.figure(figsize=(15, 15))\nfor images, labels in test_ds.take(1):\n    for i in range(9):\n        ax = plt.subplot(3, 3, i + 1)\n        plt.imshow(images[i].numpy().astype(\"uint8\"))\n        \n        predicted_class, confidence = predict(model, images[i].numpy())\n        actual_class = class_names[labels[i]] \n        \n        plt.title(f\"Actual: {actual_class},\\n Predicted: {predicted_class}.\\n Confidence: {confidence}%\")\n        \n        plt.axis(\"off\")", "_____no_output_____" ], [ "import os\nmodel_version=max([int(i) for i in os.listdir(\"../models\") + [0]])+1\nmodel.save(f\"../models/{model_version}\")", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7114ad3b12a4534a00efe80413ddea6621512ba
67,331
ipynb
Jupyter Notebook
examples/DocFormer_for_MLM.ipynb
shabie/docformer
fd3e818aa0bca7d3bb8700a66ad5462976a182be
[ "MIT" ]
73
2021-10-12T07:53:00.000Z
2022-03-30T13:46:11.000Z
examples/DocFormer_for_MLM.ipynb
uakarsh/docformer
fd3e818aa0bca7d3bb8700a66ad5462976a182be
[ "MIT" ]
19
2021-10-03T10:26:06.000Z
2022-03-30T17:56:05.000Z
examples/DocFormer_for_MLM.ipynb
uakarsh/docformer
fd3e818aa0bca7d3bb8700a66ad5462976a182be
[ "MIT" ]
25
2021-10-01T02:37:35.000Z
2022-03-22T13:07:10.000Z
35.270299
415
0.497735
[ [ [ "<a href=\"https://colab.research.google.com/github/uakarsh/docformer/blob/master/examples/DocFormer_for_MLM.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "## Make the environment CUDA Enabled (so that, it would be easy to process everything)", "_____no_output_____" ], [ "### 1. About the Notebook:\n\nThis notebook, demonstrates using DocFormer for the purpose of Masked language Modeling (without pre-trained weights)", "_____no_output_____" ] ], [ [ "## Installing the dependencies (might take some time)\n\n%%capture\n!pip install pytesseract\n!sudo apt install tesseract-ocr\n!pip install transformers\n!pip install pytorch-lightning\n!pip install einops\n!pip install accelerate\n!pip install tqdm\n!pip install torchmetrics", "_____no_output_____" ], [ "%%capture\n!pip install 'Pillow==7.1.2'", "_____no_output_____" ], [ "## Cloning the repository\n\n%%capture\n!git clone https://github.com/shabie/docformer.git", "_____no_output_____" ], [ "## Importing the libraries\n\nimport os\nimport pickle\nimport pytesseract\nimport numpy as np\nimport pandas as pd\nfrom PIL import Image,ImageDraw\nimport torch\nfrom torchvision.transforms import ToTensor\nimport torch.nn as nn\nfrom torch.utils.data import Dataset,DataLoader\n\nimport math\nimport torch.nn.functional as F\nimport torchvision.models as models\nfrom einops import rearrange\nfrom torch import Tensor\n\n\n## Adding the path of docformer to system path\nimport sys\nsys.path.append('/content/docformer/src/docformer/')\n\n\n\n## Importing the functions from the DocFormer Repo\nfrom dataset import create_features\nfrom modeling import DocFormerEncoder,ResNetFeatureExtractor,DocFormerEmbeddings,LanguageFeatureExtractor\nfrom transformers import BertTokenizerFast", "_____no_output_____" ], [ "## Setting some hyperparameters\n\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\n\nconfig = {\n \"coordinate_size\": 96, ## (768/8), 8 for each of the 8 coordinates of x, y\n \"hidden_dropout_prob\": 0.1,\n \"hidden_size\": 768,\n \"image_feature_pool_shape\": [7, 7, 256],\n \"intermediate_ff_size_factor\": 4,\n \"max_2d_position_embeddings\": 1024,\n \"max_position_embeddings\": 512,\n \"max_relative_positions\": 8,\n \"num_attention_heads\": 12,\n \"num_hidden_layers\": 12,\n \"pad_token_id\": 0,\n \"shape_size\": 96,\n \"vocab_size\": 30522,\n \"layer_norm_eps\": 1e-12,\n}", "_____no_output_____" ] ], [ [ "## 2. 
Making the dataset", "_____no_output_____" ] ], [ [ "class DocumentDataset(Dataset):\n def __init__(self,entries,tokenizer,labels = None, use_mlm = False):\n\n self.use_mlm = use_mlm\n self.entries = entries\n self.labels = labels\n self.tokenizer = tokenizer\n self.config = config\n\n def __len__(self) -> int:\n return len(self.entries)\n \n def __getitem__(self,index):\n \n ''' \n Returns only four required inputs, \n * resized_scaled_img\n * input_ids\n * x_features\n * y_features\n\n If labels are not None, then labels also\n '''\n encoding = create_features(self.entries[index],self.tokenizer, apply_mask_for_mlm=self.use_mlm)\n\n if self.labels==None:\n\n if self.use_mlm:\n return encoding['resized_scaled_img'],encoding['input_ids'],encoding['x_features'],encoding['y_features'], encoding['mlm_labels']\n\n else:\n return encoding['resized_scaled_img'],encoding['input_ids'],encoding['x_features'],encoding['y_features']\n\n return encoding['resized_scaled_img'],encoding['input_ids'],encoding['x_features'],encoding['y_features'], self.labels[index]", "_____no_output_____" ], [ "tokenizer = BertTokenizerFast.from_pretrained(\"bert-base-uncased\")", "_____no_output_____" ] ], [ [ "##### Downloading the RVL-CDIP dataset, it contains few images for the purpose of MLM (from invoice classes of RVL-CDIP dataset)", "_____no_output_____" ] ], [ [ "%%capture\n!git clone https://github.com/uakarsh/sample_rvl_cdip_dataset.git", "_____no_output_____" ], [ "base_path = '/content/sample_rvl_cdip_dataset/RVL-CDIP Invoice Class Dataset'\nfp = pd.DataFrame({'image_id':[os.path.join(base_path,i) for i in os.listdir(base_path)]})", "_____no_output_____" ], [ "train_ds = DocumentDataset(fp['image_id'].values.tolist(),tokenizer = tokenizer, use_mlm = True)\n\ndef collate_fn(batch):\n return tuple(zip(*batch))\n\ntrain_data_loader = DataLoader(train_ds,\n batch_size=2,\n shuffle=True,\n num_workers=0,\n )", "_____no_output_____" ] ], [ [ "## 3. Making the model and doing the propagation", "_____no_output_____" ] ], [ [ "class DocFormerForMLM(nn.Module):\n \n def __init__(self, config):\n super().__init__()\n\n self.resnet = ResNetFeatureExtractor()\n self.embeddings = DocFormerEmbeddings(config)\n self.lang_emb = LanguageFeatureExtractor()\n self.config = config\n self.dropout = nn.Dropout(config['hidden_dropout_prob'])\n self.linear_layer = nn.Linear(in_features = config['hidden_size'], out_features = config['vocab_size'])\n self.encoder = DocFormerEncoder(config)\n\n def forward(self, x_feat, y_feat, img, token):\n v_bar_s, t_bar_s = self.embeddings(x_feat,y_feat)\n v_bar = self.resnet(img)\n t_bar = self.lang_emb(token)\n out = self.encoder(t_bar,v_bar,t_bar_s,v_bar_s)\n out = self.linear_layer(out)\n\n return out", "_____no_output_____" ], [ "model = DocFormerForMLM(config).to(device)", "Some weights of the model checkpoint at microsoft/layoutlm-base-uncased were not used when initializing LayoutLMForTokenClassification: ['cls.predictions.transform.dense.weight', 'cls.predictions.decoder.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight']\n- This IS expected if you are initializing LayoutLMForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\n- This IS NOT expected if you are initializing LayoutLMForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\nSome weights of LayoutLMForTokenClassification were not initialized from the model checkpoint at microsoft/layoutlm-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight']\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n" ], [ "## Using a single batch for the forward propagation\nfeatures = next(iter(train_data_loader))\nimg,token,x_feat,y_feat, labels = features", "_____no_output_____" ], [ "## Transferring it to device\n\nimg = img.to(device)\ntoken = token.to(device)\nx_feat = x_feat.to(device)\ny_feat = y_feat.to(device)\nlabels = labels.to(device)", "_____no_output_____" ], [ "## Forward Propagation\n\nout = model(x_feat, y_feat, img, token)", "_____no_output_____" ], [ "## Initializing, the loss and optimizer\n\ncriterion = nn.CrossEntropyLoss()\ncriterion = criterion.to(device)\noptimizer = torch.optim.AdamW(model.parameters(), lr= 5e-5)\n\n\n## Calculating the loss and back propagating\noptimizer.zero_grad()\nloss = criterion(out.transpose(1,2), labels.long())\nloss.backward()\noptimizer.step()", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
e71152552b19474573938a2b853a9d5d61b29e8c
1,958
ipynb
Jupyter Notebook
Untitled1.ipynb
prrajaveen/Python
7dd16f7cca5c8446dbcff277d6afcedc192758ca
[ "MIT" ]
null
null
null
Untitled1.ipynb
prrajaveen/Python
7dd16f7cca5c8446dbcff277d6afcedc192758ca
[ "MIT" ]
null
null
null
Untitled1.ipynb
prrajaveen/Python
7dd16f7cca5c8446dbcff277d6afcedc192758ca
[ "MIT" ]
null
null
null
22
225
0.443309
[ [ [ "<a href=\"https://colab.research.google.com/github/prrajaveen/Python/blob/master/Untitled1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "print('praveen kumar')", "praveen kumar\n" ], [ "print(2+3)", "5\n" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ] ]
e71153d9b7b89ad09caf00d9390da26e423b2e36
5,398
ipynb
Jupyter Notebook
strava_analysis/.ipynb_checkpoints/data_cleaning-checkpoint.ipynb
annakoretchko/strava_analysis
5eafda68f3ee80a8628e131a04ad6526f7a61d28
[ "MIT" ]
null
null
null
strava_analysis/.ipynb_checkpoints/data_cleaning-checkpoint.ipynb
annakoretchko/strava_analysis
5eafda68f3ee80a8628e131a04ad6526f7a61d28
[ "MIT" ]
null
null
null
strava_analysis/.ipynb_checkpoints/data_cleaning-checkpoint.ipynb
annakoretchko/strava_analysis
5eafda68f3ee80a8628e131a04ad6526f7a61d28
[ "MIT" ]
null
null
null
33.7375
141
0.632642
[ [ [ "# Running Exploration (Strava and Garmin)\n\nThis explores Strava and Garmin data", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nfrom datetime import datetime", "_____no_output_____" ], [ "df_garmin_month = pd.read_csv(r'/Users/anna/Google Drive/Projects/garmin_analysis/garmin_analysis/data/Month.csv')\ndf_strava_combined = pd.read_csv(r'/Users/anna/Google Drive/Projects/strava_analysis/strava_analysis/data/combined_activities.csv')\ndf_strava_api = pd.read_csv(r'/Users/anna/Google Drive/Projects/strava_analysis/strava_analysis/data/new_activities_raw.csv')\ndf_strava_historic = pd.read_csv(r'/Users/anna/Google Drive/Projects/strava_analysis/strava_analysis/data/historic_activities_raw.csv')", "_____no_output_____" ], [ "# clean new api data\n# create unique id to match new and old later\ndf_strava_api = df_strava_api.rename(columns = {\"id\" :\"activity_id\"})\n# convert to miles\ndf_strava_api[\"distance\"] = ((0.621371 * df_strava_api[\"distance\"])/1000).round(decimals = 2)\ndf_strava_api['average_speed'] = df_strava_api['average_speed'].round(decimals = 3)\ndf_strava_api['max_speed'] = df_strava_api['max_speed'].round(decimals = 3)\ndf_strava_api['average_cadence'] = df_strava_api['average_cadence'].round(decimals = 1)\ndf_strava_api['average_watts'] = df_strava_api['average_watts'].round(decimals = 1)\n#Break date into start time and date\ndf_strava_api['start_date_local'] = pd.to_datetime(df_strava_api['start_date_local'])\ndf_strava_api['start_time'] = df_strava_api['start_date_local'].dt.time\ndf_strava_api['start_date_local'] = df_strava_api['start_date_local'].dt.date\n# create new column to join with (this is the 'real' date)\ndf_strava_api['activity_date'] = df_strava_api['start_date_local']\n\n", "_____no_output_____" ], [ "df_strava_historic = df_strava_historic.rename(columns=str.lower)\ndf_strava_historic.columns = df_strava_historic.columns.str.replace(\" \",\"_\")\ndf_strava_historic[\"distance\"] = df_strava_historic[\"distance\"].str.replace(\",\",\"\")", "_____no_output_____" ], [ "# sorted(df_strava_historic.columns.values)\n# sorted(df_strava_api.columns.values)", "_____no_output_____" ], [ "#df_strava_historic['distance'] = df_strava_historic['distance'].round(decimals = 3)\ndf_strava_historic[\"distance\"] = pd.to_numeric(df_strava_historic[\"distance\"])\ndf_strava_historic['average_speed'].fillna((df_strava_historic['distance'] / df_strava_historic['moving_time'])*1000, inplace=True)\ndf_strava_historic[\"distance\"] = (0.621371 * df_strava_historic[\"distance\"]).round(decimals = 2) # convert to miles\ndf_strava_historic['average_speed'] = df_strava_historic['average_speed'].round(decimals = 3)\ndf_strava_historic['max_speed'] = df_strava_historic['max_speed'].round(decimals = 3)\ndf_strava_historic['average_cadence'] = df_strava_historic['average_cadence'].round(decimals = 1)\ndf_strava_historic['average_watts'] = df_strava_historic['average_watts'].round(decimals = 1)\ndf_strava_historic['activity_date'] = pd.to_datetime(df_strava_historic['activity_date'])\ndf_strava_historic['activity_date'] = df_strava_historic['activity_date'].dt.date\n\n\n\n\n", "_____no_output_____" ], [ "test_car = 'activity_date'\nprint(df_strava_api.loc[2, [test_car]] == df_strava_historic.loc[947, [test_car]])\nprint(df_strava_api.loc[2, [test_car]])\nprint(df_strava_historic.loc[947, [test_car]])", "activity_date True\ndtype: bool\nactivity_date 2021-05-20\nName: 2, dtype: object\nactivity_date 2021-05-20\nName: 947, dtype: object\n" ], [ "# 
concats the two dfs\ndf_combo = pd.concat([df_strava_api,df_strava_historic])", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7116a81dd8b70d3b2fe3a6a05587a39d2c8be7d
116,170
ipynb
Jupyter Notebook
example/demo_lca.ipynb
qihongl/pylca
9d782d4e25b50faba048f79d0fb90180d0298fa0
[ "MIT" ]
5
2019-03-22T02:29:48.000Z
2021-01-26T22:41:10.000Z
example/demo_lca.ipynb
qihongl/pylca
9d782d4e25b50faba048f79d0fb90180d0298fa0
[ "MIT" ]
null
null
null
example/demo_lca.ipynb
qihongl/pylca
9d782d4e25b50faba048f79d0fb90180d0298fa0
[ "MIT" ]
null
null
null
484.041667
68,608
0.942378
[ [ [ "# install the package for google colab \n!pip install pylca ", "_____no_output_____" ], [ "\"\"\"\nLCA, demonstrate the effect of competition\n\"\"\"\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom pylca import LCA\n\nsns.set(style='white', palette='colorblind', context='talk')\nnp.random.seed(0)\n%matplotlib inline ", "_____no_output_____" ], [ "\"\"\"model params\n\"\"\"\nn_units = 3\n# input weights\nw_input = 1\n# decision param\nleak = .5\ncompetition = 1\nself_excit = 0\n# time step size\ndt = .1\n#\nself_excit = 0\nw_cross = 0\noffset = 0\nnoise_sd = .1\n\n# init LCA\nlca = LCA(\n n_units, dt, leak, competition,\n self_excit=self_excit, w_input=w_input, w_cross=w_cross,\n offset=offset, noise_sd=noise_sd,\n)\n\n\"\"\"run LCA\n\"\"\"\n# make inputs: turning on more and more units\nT = 25\ninput_patterns = list(np.tril(np.ones((n_units, n_units)), k=0))\n# run LCA for all input patterns\nvals = []\nfor input_pattern in input_patterns:\n input_seq = np.tile(input_pattern, (T, 1))\n vals.append(lca.run(input_seq))", "_____no_output_____" ], [ "\"\"\"plot\nif more units are activated, they compete and inhibit each other,\nas a result, the uncertainty of the system is larger\n\"\"\"\n\ntitle_list = ['Turn on %d units' % (k+1) for k in range(n_units)]\n\nf, axes = plt.subplots(n_units, 1, figsize=(8, 3*n_units), sharex=True)\nfor i, ax in enumerate(axes):\n ax.plot(vals[i])\n ax.set_title(f'{title_list[i]} (i.e. input = {input_patterns[i]})')\n ax.set_ylabel('LCA activity')\n ax.set_ylim([-.05, 1.05])\n ax.axhline(0, linestyle='--', color='grey')\naxes[-1].set_xlabel('Time')\nf.tight_layout()\nsns.despine()", "_____no_output_____" ], [ "\"\"\" run a larger simulation\nplot the max activity as a function of the number of units get activated\n\"\"\"\n# use more units, zero noise to clean the pattern\nn_units = 7\nnoise_sd = 0\n# init LCA\nlca = LCA(\n n_units, dt, leak, competition,\n self_excit=self_excit, w_input=w_input, w_cross=w_cross,\n offset=offset, noise_sd=noise_sd,\n)\n\n\"\"\"run LCA\n\"\"\"\n# make inputs: turning on more and more units\ninput_patterns = list(np.tril(np.ones((n_units, n_units)), k=0))\n# run LCA for all input patterns\nvals = []\nfor input_pattern in input_patterns:\n input_seq = np.tile(input_pattern, (T, 1))\n vals.append(lca.run(input_seq))", "_____no_output_____" ], [ "\"\"\"plot\nagain, if more units are activated, they compete and inhibit each other,\nas a result, the uncertainty of the system is larger\n\"\"\"\nf, ax = plt.subplots(1, 1, figsize=(8, 4), sharex=True)\ncol_pal = sns.color_palette('GnBu_d', n_colors=n_units)\nfor i in range(n_units):\n ax.plot(np.max(vals[i], axis=1), color=col_pal[i])\nax.axhline(0, linestyle='--', color='grey')\nax.set_title(f'The effect of turning on more and more units')\nax.set_xlabel('Time')\nax.set_ylabel('MAX(LCA activity)')\nax.set_ylim([-.05, 1.05])\nlegend_list = [f'%d / {n_units}' % (k+1) for k in range(n_units)]\nleg = f.legend(legend_list, frameon=False, bbox_to_anchor=(1.15, .9))\nleg.set_title('# units ON', prop = {'size':'x-large'})\nf.tight_layout()\nsns.despine()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
e7116bcc4c63a43353ca28e237bde258aa77a0ef
107,204
ipynb
Jupyter Notebook
reinforcement/Dynamic_Programming.ipynb
Bato803/Deep-Learning-Nano-Degree
78181be03c79a1d607164c89d40f65e70895fc7f
[ "MIT" ]
null
null
null
reinforcement/Dynamic_Programming.ipynb
Bato803/Deep-Learning-Nano-Degree
78181be03c79a1d607164c89d40f65e70895fc7f
[ "MIT" ]
null
null
null
reinforcement/Dynamic_Programming.ipynb
Bato803/Deep-Learning-Nano-Degree
78181be03c79a1d607164c89d40f65e70895fc7f
[ "MIT" ]
null
null
null
111.438669
21,504
0.83121
[ [ [ "# Mini Project: Dynamic Programming\n\nIn this notebook, you will write your own implementations of many classical dynamic programming algorithms. \n\nWhile we have provided some starter code, you are welcome to erase these hints and write your code from scratch.", "_____no_output_____" ], [ "### Part 0: Explore FrozenLakeEnv\n\nUse the code cell below to create an instance of the [FrozenLake](https://github.com/openai/gym/blob/master/gym/envs/toy_text/frozen_lake.py) environment.", "_____no_output_____" ] ], [ [ "%matplotlib inline", "_____no_output_____" ], [ "from frozenlake import FrozenLakeEnv\n\nenv = FrozenLakeEnv()", "_____no_output_____" ] ], [ [ "The agent moves through a $4 \\times 4$ gridworld, with states numbered as follows:\n```\n[[ 0 1 2 3]\n [ 4 5 6 7]\n [ 8 9 10 11]\n [12 13 14 15]]\n```\nand the agent has 4 potential actions:\n```\nLEFT = 0\nDOWN = 1\nRIGHT = 2\nUP = 3\n```\n\nThus, $\\mathcal{S}^+ = \\{0, 1, \\ldots, 15\\}$, and $\\mathcal{A} = \\{0, 1, 2, 3\\}$. Verify this by running the code cell below.", "_____no_output_____" ] ], [ [ "# print the state space and action space\nprint(env.observation_space)\nprint(env.action_space)\n\n# print the total number of states and actions\nprint(env.nS)\nprint(env.nA)", "Discrete(16)\nDiscrete(4)\n16\n4\n" ] ], [ [ "Dynamic programming assumes that the agent has full knowledge of the MDP. We have already amended the `frozenlake.py` file to make the one-step dynamics accessible to the agent. \n\nExecute the code cell below to return the one-step dynamics corresponding to a particular state and action. In particular, `env.P[1][0]` returns the the probability of each possible reward and next state, if the agent is in state 1 of the gridworld and decides to go left.", "_____no_output_____" ] ], [ [ "env.P[13][0]", "_____no_output_____" ] ], [ [ "Each entry takes the form \n```\nprob, next_state, reward, done\n```\nwhere: \n- `prob` details the conditional probability of the corresponding (`next_state`, `reward`) pair, and\n- `done` is `True` if the `next_state` is a terminal state, and otherwise `False`.\n\nThus, we can interpret `env.P[1][0]` as follows:\n$$\n\\mathbb{P}(S_{t+1}=s',R_{t+1}=r|S_t=1,A_t=0) = \\begin{cases}\n \\frac{1}{3} \\text{ if } s'=1, r=0\\\\\n \\frac{1}{3} \\text{ if } s'=0, r=0\\\\\n \\frac{1}{3} \\text{ if } s'=5, r=0\\\\\n 0 \\text{ else}\n \\end{cases}\n$$\n\nFeel free to change the code cell above to explore how the environment behaves in response to other (state, action) pairs.", "_____no_output_____" ], [ "### Part 1: Iterative Policy Evaluation\n\nIn this section, you will write your own implementation of iterative policy evaluation.\n\nYour algorithm should accept four arguments as **input**:\n- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.\n- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.\n- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).\n- `theta`: This is a very small positive number that is used to decide if the estimate has sufficiently converged to the true value function (default value: `1e-8`).\n\nThe algorithm returns as **output**:\n- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). 
`V[s]` contains the estimated value of state `s` under the input policy.\n\nPlease complete the function in the code cell below.", "_____no_output_____" ] ], [ [ "import numpy as np\n\ndef policy_evaluation(env, policy, gamma=1, theta=1e-8):\n V = np.zeros(env.nS)\n \n delta = 1\n while delta > theta:\n delta = 0\n for s in range(env.nS):\n v = V[s]\n \n sum_over_action = 0\n for a in range(env.nA):\n prob_of_a_given_s = policy[s][a]\n \n sum_over_next_state = 0\n for ns in range(len(env.P[s][a])):\n prob_of_ns = env.P[s][a][ns][0]\n next_state = env.P[s][a][ns][1]\n reward = env.P[s][a][ns][2]\n done_or_not = env.P[s][a][ns][3]\n sum_over_next_state += prob_of_ns * (reward + gamma*V[next_state])\n \n sum_over_action += prob_of_a_given_s * sum_over_next_state\n \n V[s] = sum_over_action\n delta = max(delta, abs(v-V[s]))\n \n return V", "_____no_output_____" ] ], [ [ "We will evaluate the equiprobable random policy $\\pi$, where $\\pi(a|s) = \\frac{1}{|\\mathcal{A}(s)|}$ for all $s\\in\\mathcal{S}$ and $a\\in\\mathcal{A}(s)$. \n\nUse the code cell below to specify this policy in the variable `random_policy`.", "_____no_output_____" ] ], [ [ "random_policy = np.ones([env.nS, env.nA]) / env.nA", "_____no_output_____" ] ], [ [ "Run the next code cell to evaluate the equiprobable random policy and visualize the output. The state-value function has been reshaped to match the shape of the gridworld.", "_____no_output_____" ] ], [ [ "from plot_utils import plot_values\n\n# evaluate the policy \nV = policy_evaluation(env, random_policy)\n\nplot_values(V)", "_____no_output_____" ] ], [ [ "Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! \n\n**Note:** In order to ensure accurate results, make sure that your `policy_evaluation` function satisfies the requirements outlined above (with four inputs, a single output, and with the default values of the input arguments unchanged).", "_____no_output_____" ] ], [ [ "import check_test\n\ncheck_test.run_check('policy_evaluation_check', policy_evaluation)", "_____no_output_____" ] ], [ [ "### Part 2: Obtain $q_\\pi$ from $v_\\pi$\n\nIn this section, you will write a function that takes the state-value function estimate as input, along with some state $s\\in\\mathcal{S}$. It returns the **row in the action-value function** corresponding to the input state $s\\in\\mathcal{S}$. That is, your function should accept as input both $v_\\pi$ and $s$, and return $q_\\pi(s,a)$ for all $a\\in\\mathcal{A}(s)$.\n\nYour algorithm should accept four arguments as **input**:\n- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.\n- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.\n- `s`: This is an integer corresponding to a state in the environment. It should be a value between `0` and `(env.nS)-1`, inclusive.\n- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).\n\nThe algorithm returns as **output**:\n- `q`: This is a 1D numpy array with `q.shape[0]` equal to the number of actions (`env.nA`). 
`q[a]` contains the (estimated) value of state `s` and action `a`.\n\nPlease complete the function in the code cell below.", "_____no_output_____" ] ], [ [ "def q_from_v(env, V, s, gamma=1):\n q = np.zeros(env.nA)\n \n for a in range(env.nA):\n \n sum_over_next_state = 0\n \n for ns in range(len(env.P[s][a])):\n \n prob_of_next_state = env.P[s][a][ns][0]\n next_state = env.P[s][a][ns][1]\n reward = env.P[s][a][ns][2]\n sum_over_next_state += prob_of_next_state * (reward+gamma*V[next_state])\n \n q[a] = sum_over_next_state\n \n return q", "_____no_output_____" ] ], [ [ "Run the code cell below to print the action-value function corresponding to the above state-value function.", "_____no_output_____" ] ], [ [ "Q = np.zeros([env.nS, env.nA])\nfor s in range(env.nS):\n Q[s] = q_from_v(env, V, s)\nprint(\"Action-Value Function:\")\nprint(Q)", "Action-Value Function:\n[[ 0.0147094 0.01393978 0.01393978 0.01317015]\n [ 0.00852356 0.01163091 0.0108613 0.01550788]\n [ 0.02444514 0.02095298 0.02406033 0.01435346]\n [ 0.01047649 0.01047649 0.00698432 0.01396865]\n [ 0.02166487 0.01701828 0.01624865 0.01006281]\n [ 0. 0. 0. 0. ]\n [ 0.05433538 0.04735105 0.05433538 0.00698432]\n [ 0. 0. 0. 0. ]\n [ 0.01701828 0.04099204 0.03480619 0.04640826]\n [ 0.07020885 0.11755991 0.10595784 0.05895312]\n [ 0.18940421 0.17582037 0.16001424 0.04297382]\n [ 0. 0. 0. 0. ]\n [ 0. 0. 0. 0. ]\n [ 0.08799677 0.20503718 0.23442716 0.17582037]\n [ 0.25238823 0.53837051 0.52711478 0.43929118]\n [ 0. 0. 0. 0. ]]\n" ] ], [ [ "Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! \n\n**Note:** In order to ensure accurate results, make sure that the `q_from_v` function satisfies the requirements outlined above (with four inputs, a single output, and with the default values of the input arguments unchanged).", "_____no_output_____" ] ], [ [ "check_test.run_check('q_from_v_check', q_from_v)", "_____no_output_____" ] ], [ [ "### Part 3: Policy Improvement\n\nIn this section, you will write your own implementation of policy improvement. \n\nYour algorithm should accept three arguments as **input**:\n- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.\n- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.\n- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).\n\nThe algorithm returns as **output**:\n- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.\n\nPlease complete the function in the code cell below. You are encouraged to use the `q_from_v` function you implemented above.", "_____no_output_____" ] ], [ [ "def policy_improvement(env, V, gamma=1):\n policy = np.zeros([env.nS, env.nA]) / env.nA\n \n for s in range(env.nS):\n \n Q = q_from_v(env, V, s)\n best_action = np.argmax(Q)\n policy[s][best_action] = 1\n \n\n return policy", "_____no_output_____" ] ], [ [ "Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! 
\n\n**Note:** In order to ensure accurate results, make sure that the `policy_improvement` function satisfies the requirements outlined above (with three inputs, a single output, and with the default values of the input arguments unchanged).\n\nBefore moving on to the next part of the notebook, you are strongly encouraged to check out the solution in **Dynamic_Programming_Solution.ipynb**. There are many correct ways to approach this function!", "_____no_output_____" ] ], [ [ "check_test.run_check('policy_improvement_check', policy_improvement)", "_____no_output_____" ] ], [ [ "### Part 4: Policy Iteration\n\nIn this section, you will write your own implementation of policy iteration. The algorithm returns the optimal policy, along with its corresponding state-value function.\n\nYour algorithm should accept three arguments as **input**:\n- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.\n- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).\n- `theta`: This is a very small positive number that is used to decide if the policy evaluation step has sufficiently converged to the true value function (default value: `1e-8`).\n\nThe algorithm returns as **output**:\n- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.\n- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.\n\nPlease complete the function in the code cell below. You are strongly encouraged to use the `policy_evaluation` and `policy_improvement` functions you implemented above.", "_____no_output_____" ] ], [ [ "import copy\n\ndef policy_iteration(env, gamma=1, theta=1e-8):\n policy = np.ones([env.nS, env.nA]) / env.nA\n \n stable = False\n while not stable:\n V = policy_evaluation(env, policy, gamma, theta)\n new_policy = policy_improvement(env, V, gamma)\n if (new_policy == policy).all():\n stable = True\n else:\n policy = new_policy\n\n return policy, V", "_____no_output_____" ] ], [ [ "Run the next code cell to solve the MDP and visualize the output. The optimal state-value function has been reshaped to match the shape of the gridworld.\n\n**Compare the optimal state-value function to the state-value function from Part 1 of this notebook**. _Is the optimal state-value function consistently greater than or equal to the state-value function for the equiprobable random policy?_", "_____no_output_____" ] ], [ [ "# obtain the optimal policy and optimal state-value function\npolicy_pi, V_pi = policy_iteration(env)\n\n# print the optimal policy\nprint(\"\\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):\")\nprint(policy_pi,\"\\n\")\n\nplot_values(V_pi)", "\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):\n[[ 1. 0. 0. 0.]\n [ 0. 0. 0. 1.]\n [ 0. 0. 0. 1.]\n [ 0. 0. 0. 1.]\n [ 1. 0. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 0. 0. 0. 1.]\n [ 0. 1. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 0. 0. 1. 0.]\n [ 0. 1. 0. 0.]\n [ 1. 0. 0. 0.]] \n\n" ] ], [ [ "Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! 
\n\n**Note:** In order to ensure accurate results, make sure that the `policy_iteration` function satisfies the requirements outlined above (with three inputs, two outputs, and with the default values of the input arguments unchanged).", "_____no_output_____" ] ], [ [ "check_test.run_check('policy_iteration_check', policy_iteration)", "_____no_output_____" ] ], [ [ "### Part 5: Truncated Policy Iteration\n\nIn this section, you will write your own implementation of truncated policy iteration. \n\nYou will begin by implementing truncated policy evaluation. Your algorithm should accept five arguments as **input**:\n- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.\n- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.\n- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.\n- `max_it`: This is a positive integer that corresponds to the number of sweeps through the state space (default value: `1`).\n- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).\n\nThe algorithm returns as **output**:\n- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.\n\nPlease complete the function in the code cell below.", "_____no_output_____" ] ], [ [ "def truncated_policy_evaluation(env, policy, V, max_it=1, gamma=1):\n \n counter = 0\n while counter < max_it:\n \n for s in range(env.nS):\n \n sum_over_action = 0\n for a in range(env.nA):\n \n sum_over_next_state = 0\n for ns in range(len(env.P[s][a])):\n \n prob_next_state = env.P[s][a][ns][0]\n next_state = env.P[s][a][ns][1]\n reward = env.P[s][a][ns][2]\n sum_over_next_state += prob_next_state * (reward + gamma*V[next_state])\n \n sum_over_action += policy[s][a] * sum_over_next_state\n \n V[s] = sum_over_action\n \n counter += 1\n \n return V", "_____no_output_____" ] ], [ [ "Next, you will implement truncated policy iteration. Your algorithm should accept five arguments as **input**:\n- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.\n- `max_it`: This is a positive integer that corresponds to the number of sweeps through the state space (default value: `1`).\n- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).\n- `theta`: This is a very small positive number that is used for the stopping criterion (default value: `1e-8`).\n\nThe algorithm returns as **output**:\n- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.\n- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). 
`V[s]` contains the estimated value of state `s`.\n\nPlease complete the function in the code cell below.", "_____no_output_____" ] ], [ [ "def truncated_policy_iteration(env, max_it=1, gamma=1, theta=1e-8):\n V = np.zeros(env.nS)\n policy = np.zeros([env.nS, env.nA]) / env.nA\n \n while True:\n policy = policy_improvement(env, V, gamma)\n V_old = copy.copy(V)\n V = truncated_policy_evaluation(env, policy, V, max_it, gamma)\n \n if max(abs(V_old - V)) < theta:\n break\n \n return policy, V", "_____no_output_____" ] ], [ [ "Run the next code cell to solve the MDP and visualize the output. The state-value function has been reshaped to match the shape of the gridworld.\n\nPlay with the value of the `max_it` argument. Do you always end with the optimal state-value function?", "_____no_output_____" ] ], [ [ "policy_tpi, V_tpi = truncated_policy_iteration(env, max_it=2)\n\n# print the optimal policy\nprint(\"\\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):\")\nprint(policy_tpi,\"\\n\")\n\n# plot the optimal state-value function\nplot_values(V_tpi)", "\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):\n[[ 1. 0. 0. 0.]\n [ 0. 0. 0. 1.]\n [ 0. 0. 0. 1.]\n [ 0. 0. 0. 1.]\n [ 1. 0. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 0. 0. 0. 1.]\n [ 0. 1. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 0. 0. 1. 0.]\n [ 0. 1. 0. 0.]\n [ 1. 0. 0. 0.]] \n\n" ] ], [ [ "Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! \n\n**Note:** In order to ensure accurate results, make sure that the `truncated_policy_iteration` function satisfies the requirements outlined above (with four inputs, two outputs, and with the default values of the input arguments unchanged).", "_____no_output_____" ] ], [ [ "check_test.run_check('truncated_policy_iteration_check', truncated_policy_iteration)", "_____no_output_____" ] ], [ [ "### Part 6: Value Iteration\n\nIn this section, you will write your own implementation of value iteration.\n\nYour algorithm should accept three arguments as input:\n- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.\n- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).\n- `theta`: This is a very small positive number that is used for the stopping criterion (default value: `1e-8`).\n\nThe algorithm returns as **output**:\n- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.\n- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.", "_____no_output_____" ] ], [ [ "def value_iteration(env, gamma=1, theta=1e-8):\n V = np.zeros(env.nS)\n \n while True:\n \n delta = 0\n for s in range(env.nS):\n v_old = V[s]\n V[s] = max(q_from_v(env, V, s, gamma))\n delta = max(delta, abs(v_old - V[s]))\n \n if delta < theta:\n break\n \n policy = policy_improvement(env, V, gamma)\n \n \n return policy, V", "_____no_output_____" ] ], [ [ "Use the next code cell to solve the MDP and visualize the output. 
The state-value function has been reshaped to match the shape of the gridworld.", "_____no_output_____" ] ], [ [ "policy_vi, V_vi = value_iteration(env)\n\n# print the optimal policy\nprint(\"\\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):\")\nprint(policy_vi,\"\\n\")\n\n# plot the optimal state-value function\nplot_values(V_vi)", "\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):\n[[ 1. 0. 0. 0.]\n [ 0. 0. 0. 1.]\n [ 0. 0. 0. 1.]\n [ 0. 0. 0. 1.]\n [ 1. 0. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 0. 0. 0. 1.]\n [ 0. 1. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 0. 0. 1. 0.]\n [ 0. 1. 0. 0.]\n [ 1. 0. 0. 0.]] \n\n" ] ], [ [ "Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! \n\n**Note:** In order to ensure accurate results, make sure that the `value_iteration` function satisfies the requirements outlined above (with three inputs, two outputs, and with the default values of the input arguments unchanged).", "_____no_output_____" ] ], [ [ "check_test.run_check('value_iteration_check', value_iteration)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7116d1bdf69a214e06487c9e580b357fc18892a
89,233
ipynb
Jupyter Notebook
header_footer/biosignalsnotebooks_environment/categories/Evaluate/classification_game_volume_4.ipynb
Boris69bg/biosignalsnotebooks
ed183aeb8161ff8a829a5444e956cb0b368ec51b
[ "MIT" ]
1
2019-06-02T07:50:41.000Z
2019-06-02T07:50:41.000Z
notebookToHtml/biosignalsnotebooks_html_publish/Categories/Evaluate/classification_game_volume_4.ipynb
Boris69bg/biosignalsnotebooks
ed183aeb8161ff8a829a5444e956cb0b368ec51b
[ "MIT" ]
null
null
null
notebookToHtml/biosignalsnotebooks_html_publish/Categories/Evaluate/classification_game_volume_4.ipynb
Boris69bg/biosignalsnotebooks
ed183aeb8161ff8a829a5444e956cb0b368ec51b
[ "MIT" ]
null
null
null
43.806087
5,029
0.531474
[ [ [ "<table width=\"100%\">\n <tr style=\"border-bottom:solid 2pt #009EE3\">\n <td style=\"text-align:left\" width=\"10%\">\n <a href=\"classification_game_volume_4.dwipynb\" download><img src=\"../../images/icons/download.png\"></a>\n </td>\n <td style=\"text-align:left\" width=\"10%\">\n <a href=\"https://mybinder.org/v2/gh/biosignalsnotebooks/biosignalsnotebooks/biosignalsnotebooks_binder?filepath=biosignalsnotebooks_environment%2Fcategories%2FEvaluate%2Fclassification_game_volume_4.dwipynb\" target=\"_blank\"><img src=\"../../images/icons/program.png\" title=\"Be creative and test your solutions !\"></a>\n </td>\n <td></td>\n <td style=\"text-align:left\" width=\"5%\">\n <a href=\"../MainFiles/biosignalsnotebooks.ipynb\"><img src=\"../../images/icons/home.png\"></a>\n </td>\n <td style=\"text-align:left\" width=\"5%\">\n <a href=\"../MainFiles/contacts.ipynb\"><img src=\"../../images/icons/contacts.png\"></a>\n </td>\n <td style=\"text-align:left\" width=\"5%\">\n <a href=\"https://github.com/biosignalsnotebooks/biosignalsnotebooks\" target=\"_blank\"><img src=\"../../images/icons/github.png\"></a>\n </td>\n <td style=\"border-left:solid 2pt #009EE3\" width=\"15%\">\n <img src=\"../../images/ost_logo.png\">\n </td>\n </tr>\n</table>", "_____no_output_____" ], [ "<link rel=\"stylesheet\" href=\"../../styles/theme_style.css\">\n<!--link rel=\"stylesheet\" href=\"../../styles/header_style.css\"-->\n<link rel=\"stylesheet\" href=\"https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css\">\n\n<table width=\"100%\">\n <tr>\n <td id=\"image_td\" width=\"15%\" class=\"header_image_color_12\"><div id=\"image_img\"\n class=\"header_image_12\"></div></td>\n <td class=\"header_text\"> Stone, Paper or Scissor Game - Train and Classify [Volume 4] </td>\n </tr>\n</table>", "_____no_output_____" ], [ "<div id=\"flex-container\">\n <div id=\"diff_level\" class=\"flex-item\">\n <strong>Difficulty Level:</strong> <span class=\"fa fa-star checked\"></span>\n <span class=\"fa fa-star checked\"></span>\n <span class=\"fa fa-star\"></span>\n <span class=\"fa fa-star\"></span>\n <span class=\"fa fa-star\"></span>\n </div>\n <div id=\"tag\" class=\"flex-item-tag\">\n <span id=\"tag_list\">\n <table id=\"tag_list_table\">\n <tr>\n <td class=\"shield_left\">Tags</td>\n <td class=\"shield_right\" id=\"tags\">evaluate&#9729;machine-learning&#9729;features&#9729;quality&#9729;cross-validation</td>\n </tr>\n </table>\n </span>\n <!-- [OR] Visit https://img.shields.io in order to create a tag badge-->\n </div>\n</div>", "_____no_output_____" ], [ "<span class=\"color4\"><strong>Previous Notebooks that are part of \"Stone, Paper or Scissor Game - Train and Classify\" module</strong></span>\n<ul>\n <li><a href=\"../Train_and_Classify/classification_game_volume_1.ipynb\"><strong>Stone, Paper or Scissor Game - Train and Classify [Volume 1] | Experimental Setup <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" style=\"display:inline\"></strong></a></li>\n <li><a href=\"../Train_and_Classify/classification_game_volume_2.ipynb\"><strong>Stone, Paper or Scissor Game - Train and Classify [Volume 2] | Feature Extraction <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" style=\"display:inline\"></strong></a></li>\n <li><a href=\"../Train_and_Classify/classification_game_volume_3.ipynb\"><strong>Stone, Paper or Scissor Game - Train and Classify [Volume 3] | Training a Classifier <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" 
style=\"display:inline\"></strong></a></li>\n</ul> \n\n<table width=\"100%\">\n <tr>\n <td style=\"text-align:left;font-size:12pt;border-top:dotted 2px #62C3EE\">\n <span class=\"color1\">&#9740;</span> In order to ensure that our classification system is functional we need to evaluate it in an objective way.\n <br>\n At our final volume (current <span class=\"color4\"><strong>Jupyter Notebook</strong></span>) an evaluation methodology will be described taking into consideration a particular cross-validation technique.\n </td>\n </tr>\n</table>\n<hr>", "_____no_output_____" ], [ "<p style=\"font-size:20pt;color:#62C3EE;padding-bottom:5pt\">Performance Evaluation</p>\n<strong>Brief Intro</strong>\n<br>\nWhen implementing a classification system it is considered to be extremely important to have an objective understanding of how said system would behave when interacting with new testing examples.\n\nA classifier should function correctly when the testing examples are very similar to the training examples. However, if there is a testing example with characteristics that are somewhat disparate, the robustness of the system will be challenged.\n\nThus, what makes a classifier robust is his capacity to establish correspondences even when the similarities are more tenuous.\nTo estimate the quality of the implemented system, there were different methods to be followed, namely <span class=\"color1\"><strong>Cross-Layer Estimation</strong></span> and <span class=\"color13\"><strong>Leave One Out</strong></span> (see an <a href=\"https://www.cs.cmu.edu/~schneide/tut5/node42.html\">external reference <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" style=\"display:inline\"></a>).\n\nIn <span class=\"color1\"><strong>Cross-Layer Estimation</strong></span> the training set is divided into $N$ subsets with approximately the same number of examples. Using <i>N−1</i> iterations, each of the $N$ subsets acquires the role of testing set, while the remaining <i>N−1</i> subsets are used to train a \"partial\" classifier. Finally, an estimate of the error of the\noriginal classifier is obtained through the partial errors of the <i>N−1</i> partial classifiers.\n\nRelatively to the <span class=\"color13\"><strong>Leave One Out</strong></span> method, it is a particular case of cross-layer estimation. It involves creating a number of partial classifiers which is equal to the number of training examples. 
In each iteration, one training example assumes the role of testing example, while the rest are used to train the \"partial\" classifier.\n\nFortunately there are built-in function on <a href=\"https://scikit-learn.org/stable/index.html\">scikit-learn <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" style=\"display:inline\"></a>, which will be applied in the current <span class=\"color4\"><strong>Jupyter Notebook</strong></span>.", "_____no_output_____" ], [ "<p class=\"steps\">0 - Import of the needed packages for a correct execution of the current <span class=\"color4\">Jupyter Notebook</span></p>", "_____no_output_____" ] ], [ [ "# Python package that contains functions specialised on \"Machine Learning\" tasks.\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.model_selection import cross_val_score, LeaveOneOut\n\n# biosignalsnotebooks own package that supports some functionalities used on the Jupyter Notebooks.\nimport biosignalsnotebooks as bsnb\n\n# Package containing a diversified set of function for statistical processing and also provide support to array operations.\nfrom numpy import array", "_____no_output_____" ] ], [ [ "<p class=\"steps\">1 - Replicate the training procedure of <a href=\"../Train_and_Classify/classification_game_volume_3.ipynb\"><span class=\"color4\">Volume 3 of \"Classification Game\" Jupyter Notebook <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" style=\"display:inline\"></span></a></p>\n<p class=\"steps\">1.1 - Load of all the extracted features from our training data </p>\n<span class=\"color13\" style=\"font-size:30px\">&#9888;</span> This step was done internally !!! For now don't be worried about that, remember only that a dictionary (called <span class=\"color7\"><strong>\"features_class_dict\"</strong></span>), with the list of all features values and classes of training examples, is available from <a href=\"../Train_and_Classify/classification_game_volume_3.ipynb\"><span class=\"color4\">Volume 3 of \"Classification Game\" Jupyter Notebook <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" style=\"display:inline\"></span></a>", "_____no_output_____" ] ], [ [ "# Package dedicated to the manipulation of json files.\nfrom json import loads\n\n# Specification of filename and relative path.\nrelative_path = \"../../signal_samples/classification_game/features\"\nfilename = \"classification_game_features_final.json\"\n\n# Load of data inside file, storing it inside a Python dictionary.\nwith open(relative_path + \"/\" + filename) as file:\n features_class_dict = loads(file.read())", "_____no_output_____" ] ], [ [ "<span class=\"color4\"><strong>List of Dictionary keys</strong></span>", "_____no_output_____" ] ], [ [ "features_class_dict.keys()", "_____no_output_____" ] ], [ [ "<p class=\"steps\">1.2 - Storage of dictionary content into separate variables </p>", "_____no_output_____" ] ], [ [ "features_list = features_class_dict[\"features_list_final\"]\nclass_training_examples = features_class_dict[\"class_labels\"]", "_____no_output_____" ] ], [ [ "<p class=\"steps\">1.3 - Let's select two sets of features. 
Set A will be identical to the one used on <a href=\"../Train_and_Classify/classification_game_volume_3.ipynb\"><span class=\"color4\">Volume 3 of \"Classification Game\" Jupyter Notebook <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" style=\"display:inline\"></span></a>, while set B is a more restricted one, formed by three features (one from each used sensor)</p>", "_____no_output_____" ], [ "<span class=\"color1\"><strong>Set of Features A</strong></span>\n<ul>\n <li>$\\sigma_{emg\\,flexor}$</li>\n <li>$zcr_{emg\\,flexor}$</li>\n <li>$\\sigma_{emg\\,flexor}^{abs}$</li>\n <li>$\\sigma_{emg\\,adductor}$</li>\n <li>$\\sigma_{emg\\,adductor}^{abs}$</li>\n <li>$\\sigma_{acc\\,z}$</li>\n <li>$max_{acc\\,z}$</li>\n <li>$m_{acc\\,z}$</li>\n</ul>\n\n\\[$\\sigma_{emg\\,flexor}$, $max_{emg\\,flexor}$, $zcr_{emg\\,flexor}$, $\\sigma_{emg\\,flexor}^{abs}$, $\\sigma_{emg\\,adductor}$, $max_{emg\\,adductor}$, $zcr_{emg\\,adductor}$, $\\sigma_{emg\\,adductor}^{abs}$, $\\mu_{acc\\,z}$, $\\sigma_{acc\\,z}$, $max_{acc\\,z}$, $zcr_{acc\\,z}$, $m_{acc\\,z}$\\] \n\n= \\[True, False, True, True, True, False, False, True, False, True, True, False, True\\] <span class=\"color1\">(List of entries that contain relevant features are flagged with \"True\")</span>", "_____no_output_____" ] ], [ [ "# Access each training example and exclude meaningless entries.\n# Entries that we want to keep are marked with \"True\" flag.\nacception_labels_a = [True, False, True, True, True, False, \n False, True, False, True, True, False, True]\ntraining_examples_a = []\nfor example_nbr in range(0, len(features_list)):\n training_examples_a += [list(array(features_list[example_nbr])[array(acception_labels_a)])]", "_____no_output_____" ] ], [ [ "<span class=\"color7\"><strong>Set of Features B</strong></span> (one random feature from each sensor, i.e, a set with 3 features)\n<ul>\n <li>$zcr_{emg\\,flexor}$</li>\n <li>$\\sigma_{emg\\,adductor}$</li>\n <li>$m_{acc\\,z}$</li>\n</ul>\n\n\\[$\\sigma_{emg\\,flexor}$, $max_{emg\\,flexor}$, $zcr_{emg\\,flexor}$, $\\sigma_{emg\\,flexor}^{abs}$, $\\sigma_{emg\\,adductor}$, $max_{emg\\,adductor}$, $zcr_{emg\\,adductor}$, $\\sigma_{emg\\,adductor}^{abs}$, $\\mu_{acc\\,z}$, $\\sigma_{acc\\,z}$, $max_{acc\\,z}$, $zcr_{acc\\,z}$, $m_{acc\\,z}$\\] \n\n= \\[False, True, False, False, True, False, False, False, False, False, False, False, True\\] <span class=\"color7\">(List entries that contain relevant features are flagged with \"True\")</span>", "_____no_output_____" ] ], [ [ "# Access each training example and exclude meaningless entries.\nacception_labels_b = [False, True, False, False, True, False, False, False, False, False, False, False, True] # Entries that we want to keep are marked with \"True\" flag.\ntraining_examples_b = []\nfor example_nbr in range(0, len(features_list)):\n training_examples_b += [list(array(features_list[example_nbr])[array(acception_labels_b)])]", "_____no_output_____" ] ], [ [ "<p class=\"steps\">1.4 - Two classifiers will be trained, using the features contained inside the two previous sets of features</p>", "_____no_output_____" ], [ "<span class=\"color1\"><strong>Set of Features A</strong></span>", "_____no_output_____" ] ], [ [ "# k-Nearest Neighbour object initialisation.\nknn_classifier_a = KNeighborsClassifier()\n\n# Fit model to data.\nknn_classifier_a.fit(training_examples_a, class_training_examples) ", "_____no_output_____" ] ], [ [ "<span class=\"color7\"><strong>Set of Features B</strong></span>", "_____no_output_____" ] ], [ [ 
"# k-Nearest Neighbour object initialisation.\nknn_classifier_b = KNeighborsClassifier()\n\n# Fit model to data.\nknn_classifier_b.fit(training_examples_b, class_training_examples) ", "_____no_output_____" ] ], [ [ "<p class=\"steps\">2 - Usage of \"cross_val_score\" function of <a href=\"https://scikit-learn.org/stable/index.html\">scikit-learn <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" style=\"display:inline\"></a> package</p>\nWith this function it will be possible to specify a cross-validation method in order to the performance of our classification system can be accessed. In the current <span class=\"color4\"><strong>Jupyter Notebook</strong></span> it will be used one of the previously described cross-validation methods:\n<ul>\n <li><span class=\"color13\"><strong>Leave One Out</strong></span></li>\n</ul>", "_____no_output_____" ], [ "<p class=\"steps\">2.1 - Classifier trained with <span class=\"color1\"><strong>Set of Features A</strong></span></p>", "_____no_output_____" ] ], [ [ "leave_one_out_score_a = cross_val_score(knn_classifier_a, training_examples_a, class_training_examples, scoring=\"accuracy\", cv=LeaveOneOut())\n\n# Average accuracy of classifier.\nmean_l1o_score_a = leave_one_out_score_a.mean()\n\n# Standard Deviation of the previous estimate.\nstd_l1o_score_a = leave_one_out_score_a.std()", "_____no_output_____" ], [ "from sty import fg, rs\nprint(fg(232,77,14) + \"\\033[1mAverage Accuracy of Classifier:\\033[0m\" + fg.rs)\nprint(str(mean_l1o_score_a * 100) + \" %\")\n\nprint(fg(98,195,238) + \"\\033[1mStandard Deviation:\\033[0m\" + fg.rs)\nprint(\"+-\" + str(round(std_l1o_score_a, 1) * 100) + \" %\")", "\u001b[38;2;232;77;14m\u001b[1mAverage Accuracy of Classifier:\u001b[0m\u001b[39m\n90.0 %\n\u001b[38;2;98;195;238m\u001b[1mStandard Deviation:\u001b[0m\u001b[39m\n+-30.0 %\n" ] ], [ [ "<p class=\"steps\">2.2 - Classifier trained with <span class=\"color7\"><strong>Set of Features B</strong></span></p>", "_____no_output_____" ] ], [ [ "leave_one_out_score_b = cross_val_score(knn_classifier_b, training_examples_b, class_training_examples, scoring=\"accuracy\", cv=LeaveOneOut())\n\n# Average accuracy of classifier.\nmean_l1o_score_b = leave_one_out_score_b.mean()\n\n# Standard Deviation of the previous estimate.\nstd_l1o_score_b = leave_one_out_score_b.std()", "_____no_output_____" ], [ "from sty import fg, rs\nprint(fg(232,77,14) + \"\\033[1mAverage Accuracy of Classifier:\\033[0m\" + fg.rs)\nprint(str(mean_l1o_score_b * 100) + \" %\")\n\nprint(fg(98,195,238) + \"\\033[1mStandard Deviation:\\033[0m\" + fg.rs)\nprint(\"+-\" + str(round(std_l1o_score_b, 1) * 100) + \" %\")", "\u001b[38;2;232;77;14m\u001b[1mAverage Accuracy of Classifier:\u001b[0m\u001b[39m\n70.0 %\n\u001b[38;2;98;195;238m\u001b[1mStandard Deviation:\u001b[0m\u001b[39m\n+-50.0 %\n" ] ], [ [ "As you can see, different sets of features produced two classifiers with a very distinct performance. We clearly understand that the first set of features <span class=\"color1\"><strong>Set A</strong></span> ensures a more effective training stage and consequently prepares better the classifier to receive and classify correctly new training examples !", "_____no_output_____" ], [ "We reach the end of the \"Classification Game\". 
This 4-Volume long journey reveals the wonderful world of <strong>Machine Learning</strong>, however the contents included in the Notebooks represent only a small sample of the full potential of this research area.\n\n<strong><span class=\"color7\">We hope that you have enjoyed this guide. </span><span class=\"color2\">biosignalsnotebooks</span><span class=\"color4\"> is an environment in continuous expansion, so don't stop your journey and learn more with the remaining <a href=\"../MainFiles/biosignalsnotebooks.ipynb\">Notebooks <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" style=\"display:inline\"></a></span></strong> !", "_____no_output_____" ], [ "<hr>\n<table width=\"100%\">\n <tr>\n <td style=\"border-right:solid 3px #009EE3\" width=\"20%\">\n <img src=\"../../images/ost_logo.png\">\n </td>\n <td width=\"40%\" style=\"text-align:left\">\n <a href=\"../MainFiles/aux_files/biosignalsnotebooks_presentation.pdf\" target=\"_blank\">&#9740; Project Presentation</a>\n <br>\n <a href=\"https://github.com/biosignalsnotebooks/biosignalsnotebooks\" target=\"_blank\">&#9740; GitHub Repository</a>\n <br>\n <a href=\"https://pypi.org/project/biosignalsnotebooks/\" target=\"_blank\">&#9740; How to install biosignalsnotebooks Python package ?</a>\n <br>\n <a href=\"../MainFiles/signal_samples.ipynb\">&#9740; Signal Library</a>\n </td>\n <td width=\"40%\" style=\"text-align:left\">\n <a href=\"../MainFiles/biosignalsnotebooks.ipynb\">&#9740; Notebook Categories</a>\n <br>\n <a href=\"../MainFiles/by_diff.ipynb\">&#9740; Notebooks by Difficulty</a>\n <br>\n <a href=\"../MainFiles/by_signal_type.ipynb\">&#9740; Notebooks by Signal Type</a>\n <br>\n <a href=\"../MainFiles/by_tag.ipynb\">&#9740; Notebooks by Tag</a>\n </td>\n </tr>\n</table>", "_____no_output_____" ] ], [ [ "from biosignalsnotebooks.__notebook_support__ import css_style_apply\ncss_style_apply()", ".................... CSS Style Applied to Jupyter Notebook .........................\n" ], [ "%%html\n<script>\n // AUTORUN ALL CELLS ON NOTEBOOK-LOAD!\n require(\n ['base/js/namespace', 'jquery'],\n function(jupyter, $) {\n $(jupyter.events).on(\"kernel_ready.Kernel\", function () {\n console.log(\"Auto-running all cells-below...\");\n jupyter.actions.call('jupyter-notebook:run-all-cells-below');\n jupyter.actions.call('jupyter-notebook:save-notebook');\n });\n }\n );\n</script>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ] ]
e7118be082e3feaef28c6793de4a89bb21215caa
79,047
ipynb
Jupyter Notebook
.ipynb_checkpoints/test_ppt_python-checkpoint.ipynb
DataScienceLead/MacroeconomicsII
7862fe1033f245797cf1c34ff300c421a5219f0d
[ "MIT" ]
1
2020-08-01T10:57:42.000Z
2020-08-01T10:57:42.000Z
.ipynb_checkpoints/test_ppt_python-checkpoint.ipynb
DataScienceLead/MacroeconomicsII
7862fe1033f245797cf1c34ff300c421a5219f0d
[ "MIT" ]
null
null
null
.ipynb_checkpoints/test_ppt_python-checkpoint.ipynb
DataScienceLead/MacroeconomicsII
7862fe1033f245797cf1c34ff300c421a5219f0d
[ "MIT" ]
2
2018-11-05T11:15:28.000Z
2019-10-03T08:05:03.000Z
332.130252
35,716
0.935456
[ [ [ "import pandas as pd\n#import matplotlib as plt\nimport matplotlib.pyplot as plt\nimport datetime\nimport numpy as np", "_____no_output_____" ], [ "# Open data from Statistics Norway, www.ssb.no\n# Quarterly national account: \n# https://www.ssb.no/en/statbank/list/knr\n\ndataset = \"http://www.ssb.no/statbank/sq/10010628/\"\ndf = pd.read_excel(dataset, skiprows=3, skipfooter=48)", "_____no_output_____" ], [ "# Simple time series plot\ndf['Konsum i husholdninger og ideelle organisasjoner'].plot()\nplt.show()", "_____no_output_____" ], [ "\n# Add title and legend\ndf['Konsum i husholdninger og ideelle organisasjoner'].plot()\nplt.title(\"Figure 1.1: Consumption\")\nplt.legend()\nplt.xlabel('Date', fontdict=None, labelpad=None)\nplt.ylabel('MNOK')\n\n# save figure and use in presentataion etc. \nfolder ='C:\\\\Users\\\\username\\\\Documents\\\\GitHub\\\\MacroeconomicsII\\\\'\nfilename = folder + 'consumption1.png'\nplt.savefig(filename)\nplt.show()", "_____no_output_____" ], [ "# Create growth rates:\n# df[log_C] = ...\ndf['Dc'] = np.log(df['Konsum i husholdninger og ideelle organisasjoner']).diff(4)\n\ndf['DC_Y'] = df['Konsum i husholdninger og ideelle organisasjoner'].diff(4)/(df['Bruttonasjonalprodukt Fastlands-Norge, markedsverdi'].shift(4))\n# Remember that using difference of the logarithm is an approximation that works as long as the relavtive change is small\n# Note small letters for logarithms\n\n# Figure with title and legend\ndf['DC_Y'].plot()\nplt.title(\"Figure 1.2: Yearly change in consumption\")\nplt.legend()\nplt.xlabel('Date', fontdict=None, labelpad=None)\nplt.ylabel('MNOK')\n\n# save figure and use in presentataion etc. \nfilename_2 = folder + 'consumption2.png'\nplt.savefig(filename_2)\nplt.show()", "_____no_output_____" ], [ "#Powerpoint presentation\nfrom pptx import Presentation\nfrom pptx.util import Inches\n", "_____no_output_____" ], [ "#Slide 1\nprs = Presentation()\ntitle_slide_layout = prs.slide_layouts[0]\nslide = prs.slides.add_slide(title_slide_layout)\ntitle = slide.shapes.title\nsubtitle = slide.placeholders[1]\n\ntitle.text = \"Powerpoint via Python\"\nsubtitle.text = \"Consumption\"", "_____no_output_____" ], [ "#Side 2 - med bilde\nblank_slide_layout = prs.slide_layouts[6]\nslide = prs.slides.add_slide(blank_slide_layout)\n#####\n\n\ntxBox = slide.shapes.add_textbox(left = Inches(4), top=Inches(1), width=Inches(1), height=Inches(1))\ntf = txBox.text_frame\n\n\ntf.text = \"Consumption\"\n#####\nbilde = filename\n\n\npic = slide.shapes.add_picture(bilde, left=Inches(1), top=Inches(2),width=Inches(7.5))\n\n\n", "_____no_output_____" ], [ "#Slide 3 \nblank_slide_layout = prs.slide_layouts[6]\nslide = prs.slides.add_slide(blank_slide_layout)\n#####\n\n\ntxBox = slide.shapes.add_textbox(left = Inches(4), top=Inches(1), width=Inches(1), height=Inches(1))\ntf = txBox.text_frame\n\n\ntf.text = \"Endring konsum\"\n#####\nbilde_2 = filename_2\n\n\npic2 = slide.shapes.add_picture(bilde_2, left=Inches(1), top=Inches(2),width=Inches(7.5))\n\nprs.save('ppt_joakim.pptx') #Bare lag en ny pptx-fil der du lagrer med samme navn i samme folder.", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e711a44bb98a5b6510c49d55f7f1e540ee6a0f4e
174,899
ipynb
Jupyter Notebook
superseded/traffic_sign_model_rev6.ipynb
alexandrosanat/traffic-sign-recognition
f48ba4793f1775aab3a1cf3b7e69025a05390a7d
[ "MIT" ]
null
null
null
superseded/traffic_sign_model_rev6.ipynb
alexandrosanat/traffic-sign-recognition
f48ba4793f1775aab3a1cf3b7e69025a05390a7d
[ "MIT" ]
null
null
null
superseded/traffic_sign_model_rev6.ipynb
alexandrosanat/traffic-sign-recognition
f48ba4793f1775aab3a1cf3b7e69025a05390a7d
[ "MIT" ]
null
null
null
312.878354
109,172
0.901658
[ [ [ "import os\nimport zipfile\nimport tensorflow as tf\nfrom tensorflow.keras.optimizers import RMSprop\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\nfrom tensorflow.keras import layers\nfrom tensorflow.keras import Model\nimport requests\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nprint(tf.__version__)\nprint('TensorFlow version:',tf.__version__)\nprint('Keras version:',tf.keras.__version__)", "2.4.1\nTensorFlow version: 2.4.1\nKeras version: 2.4.0\n" ], [ "physical_devices = tf.config.list_physical_devices(\"GPU\")\ntf.config.experimental.set_memory_growth(physical_devices[0], True)", "_____no_output_____" ], [ "classes = pd.read_csv(\"data/Train.csv\")\n\nmin_width, max_width = max(classes.Width), min(classes.Width)\nmin_height, max_height = max(classes.Height), min(classes.Height)\n\nprint(np.mean([min_width, max_width]))\nprint(np.mean([min_height, max_height]))", "134.0\n125.0\n" ], [ "classes_no = len(classes.ClassId.unique())\nprint(\"There are {} unique classes in the dataset.\".format(classes_no))", "There are 43 unique classes in the dataset.\n" ] ], [ [ "Load the data and use data augmentation", "_____no_output_____" ] ], [ [ "cwd = os.getcwd()\nbase_dir = os.path.join(cwd, 'data')\ntrain_path= os.path.join(base_dir, 'Train')\ntest_path= os.path.join(base_dir, 'Test')", "_____no_output_____" ], [ "BATCH_SIZE = 150\nSTEPS_PER_EPOCH = 2000\nTARGET_SIZE = (32, 32)", "_____no_output_____" ], [ "# Create a data generator for the training images\ntrain_datagen = ImageDataGenerator(\n rescale=1./255,\n rotation_range=10,\n width_shift_range=0.1,\n height_shift_range=0.1,\n zoom_range=0.2,\n validation_split=0.2) # val 20%\n\n# Create a data generator for the validation images\nval_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.2)\n\n#Split data to training and validation datasets\ntrain_data = train_datagen.flow_from_directory(train_path, \n target_size=TARGET_SIZE, \n color_mode='grayscale',\n batch_size=BATCH_SIZE, \n class_mode='categorical',\n shuffle=True,\n seed=2,\n subset = 'training') \n\nval_data = val_datagen.flow_from_directory(train_path, \n target_size=TARGET_SIZE, \n color_mode='grayscale',\n batch_size=BATCH_SIZE, \n class_mode='categorical',\n shuffle=False,\n seed=2,\n subset = 'validation')\n\ndatagen = ImageDataGenerator(rescale=1./255)\ntest_data = datagen.flow_from_directory(test_path,\n target_size=TARGET_SIZE, \n color_mode='grayscale',\n class_mode='categorical',\n batch_size=BATCH_SIZE, \n shuffle=True)", "Found 31368 images belonging to 43 classes.\nFound 7841 images belonging to 43 classes.\nFound 12630 images belonging to 43 classes.\n" ], [ "X_batch, y_batch = next(train_data, 15)", "_____no_output_____" ], [ "fig, ax = plt.subplots(1, 15, figsize=(20, 5))\nfig.tight_layout()\n\nfor i in range(15):\n ax[i].imshow(X_batch[i].reshape(32, 32))\n plt.axis(\"off\")", "_____no_output_____" ], [ "callback = tf.keras.callbacks.EarlyStopping(monitor='categorical_crossentropy', patience=10)", "_____no_output_____" ] ], [ [ "LeNEt 5 model ", "_____no_output_____" ] ], [ [ "def leNet():\n \n filters_no=60\n filter_size=(5,5)\n filter_size2=(3,3)\n size_of_pool=(2,2)\n no_of_nodes=500\n\n model = tf.keras.Sequential()\n\n model.add(layers.Conv2D(filters=filters_no, kernel_size=filter_size, activation='relu', input_shape=(32, 32, 1)))\n \n model.add(layers.Conv2D(filters=filters_no, kernel_size=filter_size, activation='relu'))\n \n model.add(layers.MaxPooling2D(pool_size=size_of_pool))\n\n 
model.add(layers.Conv2D(filters=filters_no//2, kernel_size=filter_size2, activation='relu'))\n \n model.add(layers.Conv2D(filters=filters_no//2, kernel_size=filter_size2, activation='relu'))\n\n model.add(layers.MaxPooling2D(pool_size=size_of_pool))\n\n model.add(layers.Dropout(0.5))\n\n model.add(layers.Flatten())\n\n model.add(layers.Dense(units=no_of_nodes, activation='relu'))\n\n model.add(layers.Dropout(0.5))\n \n model.add(layers.Dense(units=classes_no, activation = 'softmax'))\n \n return model", "_____no_output_____" ], [ "from tensorflow.keras.optimizers import Adam\n\nlenet = leNet()\nlenet.compile(optimizer=Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])", "_____no_output_____" ], [ "# we train our model again (this time fine-tuning the top 2 inception blocks\n# alongside the top Dense layers\n\nhistory = lenet.fit(\n train_data,\n steps_per_epoch= train_data.samples // BATCH_SIZE, # One pass through entire training dataset\n epochs=25,\n validation_data=val_data,\n validation_steps= val_data.samples // BATCH_SIZE, # One pass through entire validation dataset\n #validation_freq=10,\n verbose=1)", "Epoch 1/25\n 33/209 [===>..........................] - ETA: 2:24 - loss: 0.1936 - accuracy: 0.9400" ], [ "acc = history.history['accuracy']\nval_acc = history.history['val_accuracy']\nloss = history.history['loss']\nval_loss = history.history['val_loss']\n\nepochs = range(len(acc))\n\nplt.plot(epochs, acc, 'bo', label='Training accuracy')\nplt.plot(epochs, val_acc, 'b', label='Validation accuracy')\nplt.title('Training and validation accuracy')\n\nplt.figure()\n\nplt.plot(epochs, loss, 'bo', label='Training Loss')\nplt.plot(epochs, val_loss, 'b', label='Validation Loss')\nplt.title('Training and validation loss')\nplt.legend()\n\nplt.show()", "_____no_output_____" ], [ "lenet.save('trained_model/lenet_500_epochs') ", "INFO:tensorflow:Assets written to: trained_model/lenet_500_epochs\\assets\n" ], [ "BATCH_SIZE = 200\nTARGET_SIZE = (32, 32)", "_____no_output_____" ], [ "# Create a data generator for the training images\ntrain_datagen = ImageDataGenerator(rescale=1./255,\n rotation_range=20,\n width_shift_range=0.2,\n height_shift_range=0.2,\n horizontal_flip=True,\n validation_split=0.2) # val 20%\n\n# Create a data generator for the validation images\nval_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.2)\n\n#Split data to training and validation datasets\ntrain_data = train_datagen.flow_from_directory(train_path, \n target_size=TARGET_SIZE, \n color_mode='rgb',\n batch_size=BATCH_SIZE, \n class_mode='categorical',\n shuffle=True,\n seed=2,\n subset = 'training') \n\nval_data = val_datagen.flow_from_directory(train_path, \n target_size=TARGET_SIZE, \n color_mode='rgb',\n batch_size=BATCH_SIZE, \n class_mode='categorical',\n shuffle=False,\n seed=2,\n subset = 'validation')\n\ndatagen = ImageDataGenerator(rescale=1./255)\ntest_data = datagen.flow_from_directory(test_path,\n target_size=TARGET_SIZE, \n color_mode='rgb',\n class_mode='categorical',\n batch_size=BATCH_SIZE, \n shuffle=True)", "Found 31368 images belonging to 43 classes.\nFound 7841 images belonging to 43 classes.\nFound 12630 images belonging to 43 classes.\n" ], [ "# we train our model again (this time fine-tuning the top 2 inception blocks\n# alongside the top Dense layers\n\nhistory = lenet.fit(\n train_data,\n steps_per_epoch= train_data.samples // BATCH_SIZE, # One pass through entire training dataset\n epochs=50,\n validation_data=val_data,\n validation_steps= val_data.samples 
// BATCH_SIZE, # One pass through entire validation dataset\n #validation_freq=10,\n verbose=1)", "Epoch 1/50\n156/156 [==============================] - 28s 179ms/step - loss: 1.4984 - accuracy: 0.5241 - val_loss: 1.6963 - val_accuracy: 0.4779\nEpoch 2/50\n156/156 [==============================] - 28s 179ms/step - loss: 1.4985 - accuracy: 0.5267 - val_loss: 1.6989 - val_accuracy: 0.4763\nEpoch 3/50\n156/156 [==============================] - 28s 180ms/step - loss: 1.5042 - accuracy: 0.5278 - val_loss: 1.6896 - val_accuracy: 0.4783\nEpoch 4/50\n156/156 [==============================] - 28s 178ms/step - loss: 1.5037 - accuracy: 0.5262 - val_loss: 1.6932 - val_accuracy: 0.4827\nEpoch 5/50\n156/156 [==============================] - 28s 181ms/step - loss: 1.5091 - accuracy: 0.5250 - val_loss: 1.6976 - val_accuracy: 0.4767\nEpoch 6/50\n156/156 [==============================] - 28s 182ms/step - loss: 1.5083 - accuracy: 0.5220 - val_loss: 1.6925 - val_accuracy: 0.4791\nEpoch 7/50\n156/156 [==============================] - 30s 193ms/step - loss: 1.5047 - accuracy: 0.5293 - val_loss: 1.6971 - val_accuracy: 0.4781\nEpoch 8/50\n156/156 [==============================] - 29s 188ms/step - loss: 1.5019 - accuracy: 0.5313 - val_loss: 1.6932 - val_accuracy: 0.4768\nEpoch 9/50\n156/156 [==============================] - 29s 183ms/step - loss: 1.5078 - accuracy: 0.5258 - val_loss: 1.6936 - val_accuracy: 0.4800\nEpoch 10/50\n156/156 [==============================] - 30s 194ms/step - loss: 1.5077 - accuracy: 0.5245 - val_loss: 1.6979 - val_accuracy: 0.4794\nEpoch 11/50\n156/156 [==============================] - 30s 191ms/step - loss: 1.5028 - accuracy: 0.5272 - val_loss: 1.6990 - val_accuracy: 0.4801\nEpoch 12/50\n156/156 [==============================] - 31s 202ms/step - loss: 1.4986 - accuracy: 0.5248 - val_loss: 1.6900 - val_accuracy: 0.4805\nEpoch 13/50\n156/156 [==============================] - 31s 201ms/step - loss: 1.5114 - accuracy: 0.5249 - val_loss: 1.6945 - val_accuracy: 0.4771\nEpoch 14/50\n156/156 [==============================] - 31s 199ms/step - loss: 1.5048 - accuracy: 0.5227 - val_loss: 1.6949 - val_accuracy: 0.4818\nEpoch 15/50\n156/156 [==============================] - 29s 188ms/step - loss: 1.5021 - accuracy: 0.5272 - val_loss: 1.6942 - val_accuracy: 0.4799\nEpoch 16/50\n156/156 [==============================] - 29s 183ms/step - loss: 1.5040 - accuracy: 0.5266 - val_loss: 1.6908 - val_accuracy: 0.4826\nEpoch 17/50\n156/156 [==============================] - 31s 200ms/step - loss: 1.5023 - accuracy: 0.5252 - val_loss: 1.6941 - val_accuracy: 0.4824\nEpoch 18/50\n156/156 [==============================] - 30s 195ms/step - loss: 1.4926 - accuracy: 0.5260 - val_loss: 1.6935 - val_accuracy: 0.4808\nEpoch 19/50\n156/156 [==============================] - 32s 203ms/step - loss: 1.5002 - accuracy: 0.5292 - val_loss: 1.6956 - val_accuracy: 0.4790\nEpoch 20/50\n156/156 [==============================] - 31s 201ms/step - loss: 1.4946 - accuracy: 0.5308 - val_loss: 1.6908 - val_accuracy: 0.4805\nEpoch 21/50\n107/156 [===================>..........] - ETA: 8s - loss: 1.5027 - accuracy: 0.5259 - ETA: 10s - loss: 1.5050 " ] ] ]
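The cells above create a `test_data` generator but the excerpt never scores the trained network on it. Below is a minimal sketch of how that evaluation might look; it is an illustrative assumption, not code from the original notebook, and it presumes the `lenet` model, `BATCH_SIZE` and a `test_data` generator whose `color_mode` and `target_size` match the model input (the model above expects 32×32 grayscale images).

```python
# Hypothetical hold-out evaluation of the trained model on the test generator.
test_loss, test_acc = lenet.evaluate(
    test_data,                              # tf.keras accepts the generator directly (TF 2.x)
    steps=test_data.samples // BATCH_SIZE,  # one pass over the test set
    verbose=1,
)
print("Test accuracy: %.2f %%" % (test_acc * 100))
```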
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e711b1d582d9fe13f5c21bc52d242c5125c91a28
5,521
ipynb
Jupyter Notebook
qiskit_qudits/test/gate_ideas/levelswitch_implementation_v2.ipynb
q-inho/QuditsTeam-1
9935eedd7d8258619a35424a98f2a71776b61e28
[ "Apache-2.0" ]
1
2021-10-20T09:23:47.000Z
2021-10-20T09:23:47.000Z
qiskit_qudits/test/gate_ideas/levelswitch_implementation_v2.ipynb
q-inho/QuditsTeam-1
9935eedd7d8258619a35424a98f2a71776b61e28
[ "Apache-2.0" ]
null
null
null
qiskit_qudits/test/gate_ideas/levelswitch_implementation_v2.ipynb
q-inho/QuditsTeam-1
9935eedd7d8258619a35424a98f2a71776b61e28
[ "Apache-2.0" ]
null
null
null
39.719424
1,240
0.468031
[ [ [ "# This code is from Qiskit Hackathon 2021 by the team\n# Qiskit for high dimensional multipartite quantum states.\n#\n# Author: Hoang Van Do\n#\n# (C) Copyright 2021 Hoang Van Do, Tim Alexis Körner, Inho Choi, Timothé Presles and Élie Gouzien.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\nimport numpy as np\nfrom qiskit import QuantumCircuit, QuantumRegister, AncillaRegister\nfrom qiskit.exceptions import QiskitError\n\n#Pi coupling between m and l level. The inverse of the LevelsSwitch function is itself\ndef level_switch(m, l, dimension):\n if m > dimension or l > dimension:\n raise QiskitError('The level is higher than the dimension')\n n=int(np.ceil(np.log2(dimension)))\n qreg =QuantumRegister(n)\n areg = AncillaRegister(1)\n circuit=QuantumCircuit(qreg, areg)\n control_qubits = qreg[:]\n target_qubit = areg[0]\n\n #save indices of qubits which are 1 for states m, l\n marray=[]\n larray=[]\n for i in range(n):\n if (m >> i) & 1 != 1:\n marray.append(i)\n for i in range(n):\n if (l >> i) & 1 != 1:\n larray.append(i)\n\n #control on m, l\n if len(marray)>0:\n circuit.x(marray)\n circuit.mcx(control_qubits,target_qubit)\n if len(marray)>0:\n circuit.x(marray)\n if len(larray)>0:\n circuit.x(larray)\n circuit.mcx(control_qubits,target_qubit)\n if len(larray)>0:\n circuit.x(larray)\n \n #swap\n for i in range(n):\n if (( m >> i) & 1) != (( l >> i) & 1):\n circuit.cx(n, i)\n \n #control on m, l to reset auxiliary qubit \n if len(marray) > 0:\n circuit.x(marray)\n circuit.mcx(control_qubits,target_qubit)\n if len(marray) > 0:\n circuit.x(marray)\n if len(larray) > 0:\n circuit.x(larray)\n circuit.mcx(control_qubits,target_qubit)\n if len(larray) > 0:\n circuit.x(larray)\n \n return circuit\n\nfrom qiskit import Aer, execute\nbackend = Aer.get_backend('unitary_simulator')\nnp.set_printoptions(linewidth=200, precision=2, suppress=True)\n\nqc = level_switch(2, 3, 8)\nprint(qc)\njob = execute(qc, backend)\nresult = job.result()\nU = result.get_unitary(qc)\n\nN = int(U.shape[0]/2)\nprint(\"Auxiliary qubit should start and end in state |0> (only look at top left of matrix)\")\nprint(U[:N,:N])\n", " ┌───┐ ┌───┐ ┌───┐┌───┐ ┌───┐ \nq12_0: ┤ X ├──■──┤ X ├───────■───────┤ X ├┤ X ├──■──┤ X ├───────■───────\n └───┘ │ └───┘ │ └─┬─┘└───┘ │ └───┘ │ \nq12_1: ───────■──────────────■─────────┼─────────■──────────────■───────\n ┌───┐ │ ┌───┐┌───┐ │ ┌───┐ │ ┌───┐ │ ┌───┐┌───┐ │ ┌───┐\nq12_2: ┤ X ├──■──┤ X ├┤ X ├──■──┤ X ├──┼──┤ X ├──■──┤ X ├┤ X ├──■──┤ X ├\n └───┘┌─┴─┐└───┘└───┘┌─┴─┐└───┘ │ └───┘┌─┴─┐└───┘└───┘┌─┴─┐└───┘\n a1_0: ─────┤ X ├──────────┤ X ├───────■───────┤ X ├──────────┤ X ├─────\n └───┘ └───┘ └───┘ └───┘ \nAuxiliary qubit should start and end in state |0> (only look at top left of matrix)\n[[1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]\n [0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]\n [0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]\n [0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]\n [0.+0.j 0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j]\n [0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j]\n [0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j]\n [0.+0.j 
0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 1.+0.j]]\n" ] ] ]
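The comment at the top of the cell above states that the level-switch operation is its own inverse. A small sketch of how that claim could be checked numerically is shown below; it reuses the `level_switch` function and simulator imports from the cell above, and the compose/allclose check is an illustrative assumption rather than part of the original notebook.

```python
import numpy as np
from qiskit import Aer, execute

# Apply the level switch twice: the result should act as the identity on the
# main register when the ancilla starts in |0> (top-left block of the unitary).
qc = level_switch(2, 3, 8)
doubled = qc.compose(qc)

result = execute(doubled, Aer.get_backend('unitary_simulator')).result()
U = result.get_unitary(doubled)
N = U.shape[0] // 2

print("Level switch is its own inverse:", np.allclose(U[:N, :N], np.eye(N)))
```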
[ "code" ]
[ [ "code" ] ]
e711b1f12b64ea1c0273207bbdc9b7029d634e26
12,324
ipynb
Jupyter Notebook
python/Python08_Basics_Variables.ipynb
HasanIjaz-HB/Quantum-Computing
53c2df99cd2efbfb827857125991342f336a3097
[ "MIT" ]
null
null
null
python/Python08_Basics_Variables.ipynb
HasanIjaz-HB/Quantum-Computing
53c2df99cd2efbfb827857125991342f336a3097
[ "MIT" ]
null
null
null
python/Python08_Basics_Variables.ipynb
HasanIjaz-HB/Quantum-Computing
53c2df99cd2efbfb827857125991342f336a3097
[ "MIT" ]
null
null
null
27.632287
309
0.511928
[ [ [ "<table>\n <tr>\n <td style=\"background-color:#ffffff;\"><a href=\"https://qsoftware.lu.lv/index.php/qworld/\" target=\"_blank\"><img src=\"..\\images\\qworld.jpg\" width=\"70%\" align=\"left\"></a></td>\n <td style=\"background-color:#ffffff;\" width=\"*\"></td>\n <td style=\"background-color:#ffffff;vertical-align:text-top;\"><a href=\"https://qsoftware.lu.lv\" target=\"_blank\"><img src=\"..\\images\\logo.jpg\" width=\"25%\" align=\"right\"></a></td> \n </tr>\n <tr><td colspan=\"3\" align=\"right\" style=\"color:#777777;background-color:#ffffff;font-size:12px;\">\n prepared by <a href=\"http://abu.lu.lv\" target=\"_blank\">Abuzer Yakaryilmaz</a>\n </td></tr>\n <tr><td colspan=\"3\" align=\"right\" style=\"color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;\">\n This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros.\n </td></tr>\n</table>\n$ \\newcommand{\\bra}[1]{\\langle #1|} $\n$ \\newcommand{\\ket}[1]{|#1\\rangle} $\n$ \\newcommand{\\braket}[2]{\\langle #1|#2\\rangle} $\n$ \\newcommand{\\inner}[2]{\\langle #1,#2\\rangle} $\n$ \\newcommand{\\biginner}[2]{\\left\\langle #1,#2\\right\\rangle} $\n$ \\newcommand{\\mymatrix}[2]{\\left( \\begin{array}{#1} #2\\end{array} \\right)} $\n$ \\newcommand{\\myvector}[1]{\\mymatrix{c}{#1}} $\n$ \\newcommand{\\myrvector}[1]{\\mymatrix{r}{#1}} $\n$ \\newcommand{\\mypar}[1]{\\left( #1 \\right)} $\n$ \\newcommand{\\mybigpar}[1]{ \\Big( #1 \\Big)} $\n$ \\newcommand{\\sqrttwo}{\\frac{1}{\\sqrt{2}}} $\n$ \\newcommand{\\dsqrttwo}{\\dfrac{1}{\\sqrt{2}}} $\n$ \\newcommand{\\onehalf}{\\frac{1}{2}} $\n$ \\newcommand{\\donehalf}{\\dfrac{1}{2}} $\n$ \\newcommand{\\hadamard}{ \\mymatrix{rr}{ \\sqrttwo & \\sqrttwo \\\\ \\sqrttwo & -\\sqrttwo }} $\n$ \\newcommand{\\vzero}{\\myvector{1\\\\0}} $\n$ \\newcommand{\\vone}{\\myvector{0\\\\1}} $\n$ \\newcommand{\\vhadamardzero}{\\myvector{ \\sqrttwo \\\\ \\sqrttwo } } $\n$ \\newcommand{\\vhadamardone}{ \\myrvector{ \\sqrttwo \\\\ -\\sqrttwo } } $\n$ \\newcommand{\\myarray}[2]{ \\begin{array}{#1}#2\\end{array}} $\n$ \\newcommand{\\X}{ \\mymatrix{cc}{0 & 1 \\\\ 1 & 0} } $\n$ \\newcommand{\\Z}{ \\mymatrix{rr}{1 & 0 \\\\ 0 & -1} } $\n$ \\newcommand{\\Htwo}{ \\mymatrix{rrrr}{ \\frac{1}{2} & \\frac{1}{2} & \\frac{1}{2} & \\frac{1}{2} \\\\ \\frac{1}{2} & -\\frac{1}{2} & \\frac{1}{2} & -\\frac{1}{2} \\\\ \\frac{1}{2} & \\frac{1}{2} & -\\frac{1}{2} & -\\frac{1}{2} \\\\ \\frac{1}{2} & -\\frac{1}{2} & -\\frac{1}{2} & \\frac{1}{2} } } $\n$ \\newcommand{\\CNOT}{ \\mymatrix{cccc}{1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & 0 & 1 \\\\ 0 & 0 & 1 & 0} } $\n$ \\newcommand{\\norm}[1]{ \\left\\lVert #1 \\right\\rVert } $", "_____no_output_____" ], [ "<h2> Basics of Python: Variables </h2>\n\nWe review using variables in Python here. \n\nPlease run each cell and check the results.\n\n<b> Indention of codes <u>matters</u> in Python!</b> \n\nIn this notebook, each line of code should start from the left without any indention. Otherwise, you will get a syntax error.\n\nComments can be indented.\n\nThe codes belonging to a conditional or loop statement or a function/procedure are indented. 
We will see them later.", "_____no_output_____" ] ], [ [ "# This is a comment\n# A comment is used for explanations/descriptions/etc.\n# Comments do not affect the programs", "_____no_output_____" ], [ "# let's define an integer variable named a\na = 5\n\n# let's print its value\nprint(a)", "_____no_output_____" ], [ "# let's define three integer variables named a, b, and c\na = 2\nb = 4\nc = a + b # summation of a and b\n\n# let's print their values together \nprint(a,b,c)\n# a single space will automatically appear in between", "_____no_output_____" ], [ "# let's print their values in reverse order\nprint(c,b,a)", "_____no_output_____" ], [ "# let's print their summation and multiplication\nprint(a+b+c,a*b*c)", "_____no_output_____" ], [ "# let's define variables with string/text values\n\nhw = \"hello world\" # we can use double quotes\nhqw = 'hello quantum world' # we can use single quotes\n\n# let's print them\nprint(hw)\nprint(hqw)", "_____no_output_____" ], [ "# let's print them together by inserting another string in between\nprint(hw,\"and\",hqw)", "_____no_output_____" ], [ "# let's concatenate a few strings\nd = \"Hello \" + 'World' + \" but \" + 'Quantum ' + \"World\" \n\n# let's print the result\nprint(d)", "_____no_output_____" ], [ "# let's print numeric and string values together\nprint(\"a =\",a,\", b =\",b,\", a+b =\",a+b)", "_____no_output_____" ], [ "# let's subtract two numbers\nd = a-b\nprint(a,b,d)", "_____no_output_____" ], [ "# let's divide two numbers\nd = a/b\nprint(a,b,d)", "_____no_output_____" ], [ "# let's divide integers over integers\n# the result is always an integer (with possible integer remainder)\nd = 33 // 6\nprint(d)", "_____no_output_____" ], [ "# reminder/mod operator\nr = 33 % 6 \n# 33 mod 6 = 3\n# or when 33 is divided by 6 over integers, the reminder is 3\n# 33 = 5 * 6 + 3\n\n# let's print the result\nprint(r) ", "_____no_output_____" ], [ "# Booleen variables\nt = True\nf = False\n\n# let's print their values\nprint(t,f)", "_____no_output_____" ], [ "# print their negations\nprint(not t) \nprint(\"the negation of\",t,\"is\",not t)\n\nprint(not f)\nprint(\"the negation of\",f,\"is\",not f)", "_____no_output_____" ], [ "# define a float variable\n\nd = -3.4444\n\n# let's print its value and its square\nprint(d, d * d)", "_____no_output_____" ] ], [ [ "Let's use parentheses in our expressions.\n\n$(23 * 13)-(11 * 15) $\n\nHere $*$ represents the multiplication operator", "_____no_output_____" ] ], [ [ "e = (23*13) - (11 * 15)\nprint(e)", "_____no_output_____" ] ], [ [ "Let's consider a more complex expression.\n\n$ -3 * (123- 34 * 11 ) + 4 * (5+ (23 * 15) ) $", "_____no_output_____" ] ], [ [ "# we can use more than one variable\n\n# left is the variable for the left part of the expression\n# we start with the multiplication inside the parentheses\nleft = 34*11\n# we continue with the substruction inside the parentheses\n# we reuse the variable named left\nleft = 123 - left\n# we reuse left again for the multiplication with -3\nleft = -3 * left\n\n# right is the variable for the right part of the expression\n# we use the same idea here\nright = 23 * 15\nright = 5 + right\nright = 4 * right\n\n# at the end, we use left for the result\nleft = left + right\n\n# let's print the result\nprint(left)", "_____no_output_____" ] ], [ [ "<h3> Task 1 </h3>\n\nDefine three variables $n1$, $n2$, and $n3$, and set their values to $3$, $-4$, and $6$.\n\nDefine a new variable $r1$, and set its value to $ (2 \\cdot n1 + 3 \\cdot n2) \\cdot 2 - 5 \\cdot n3 $, where 
$\\cdot$ represents the multiplication operator. \n\n<i>The multiplication operator in python (and in many other programming languages) is *.</i>\n\nThen, print the value of $r1$. \n\nAs you may verify it by yourself, the result should be $-42$.", "_____no_output_____" ] ], [ [ "#\n# your solution is here\n#\n", "_____no_output_____" ] ], [ [ "<a href=\"Python08_Basics_Variables_Solutions.ipynb#task1\">click for our solution</a>", "_____no_output_____" ], [ "<h3> Task 2 </h3>\n\nBy using the same variables (you may not need to define them again), please print the following value\n$$\n \\dfrac{(n1-n2)\\cdot(n2-n3)}{(n3-n1)\\cdot(n3+1)} \n$$\n\nYou should see $ -3.3333333333333335 $ as the outcome.", "_____no_output_____" ] ], [ [ "#\n# your solution is here\n#\n", "_____no_output_____" ] ], [ [ "<a href=\"Python08_Basics_Variables_Solutions.ipynb#task2\">click for our solution</a>", "_____no_output_____" ], [ "<h3> Task 3 </h3>\n\nDefine variables N and S, and set their values to your name and surname. \n\nThen, print the values of N and S with a prefix phrase \"hello from the quantum world to\".", "_____no_output_____" ] ], [ [ "#\n# your solution is here\n#\n", "_____no_output_____" ] ], [ [ "<a href=\"Python08_Basics_Variables_Solutions.ipynb#task3\">click for our solution</a>", "_____no_output_____" ] ] ]
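As a small optional addendum to the floor-division (`//`) and remainder (`%`) cells earlier in this Notebook (an illustrative sketch, not part of the original material): Python guarantees that `a == (a // b) * b + (a % b)` for any integers with `b != 0`, and with a negative operand `//` rounds toward negative infinity, so the remainder always takes the sign of the divisor.

```python
# Floor division and remainder always satisfy a == (a // b) * b + (a % b).
for a, b in [(33, 6), (-33, 6), (33, -6)]:
    q, r = a // b, a % b
    print(a, "=", q, "*", b, "+", r, "| identity holds:", a == q * b + r)
```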
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
e711b3aac795dd780125d53c54b7e8cb53125583
15,764
ipynb
Jupyter Notebook
14 - Interpret Models.ipynb
JacksonSun/DP-100JA-Designing-and-Implementing-a-Data-Science-Solution-on-Azure
66799c120af57f3032824eb73fd17a0636660bfc
[ "MIT" ]
1
2021-08-17T06:31:58.000Z
2021-08-17T06:31:58.000Z
14 - Interpret Models.ipynb
JacksonSun/DP-100JA-Designing-and-Implementing-a-Data-Science-Solution-on-Azure
66799c120af57f3032824eb73fd17a0636660bfc
[ "MIT" ]
null
null
null
14 - Interpret Models.ipynb
JacksonSun/DP-100JA-Designing-and-Implementing-a-Data-Science-Solution-on-Azure
66799c120af57f3032824eb73fd17a0636660bfc
[ "MIT" ]
null
null
null
35.109131
273
0.552969
[ [ [ "# モデルを解釈する\r\n\r\nAzure Machine Learning を使用して、各機能が予測ラベルに与える影響の量を定量化する *Explainer* を使用して、モデルを解釈できます。一般的な Explainer は多く、それぞれ異なる種類のモデリング アルゴリズムに適しています。ただし、それらを使用する基本的なアプローチは同じです。\r\n\r\n## SDK パッケージのインストール\r\n\r\nこのノートブックのコードを実行するには、最新バージョンの **azureml-sdk** および **azureml-widgets** パッケージに加えて、**azureml-explain-model** パッケージが必要です。また、Azure ML 解釈可能性ライブラリ (**azureml-interpret**) も使用します。これを使用すると、Azure ML 実験でトレーニングされていない場合や、Azure ML ワークスペースに登録されていない場合でも、多くの一般的な種類のモデルを解釈できます。\r\n\r\n次のセルを実行して、これらのパッケージがインストールされていることを確認します。 ", "_____no_output_____" ] ], [ [ "!pip show azureml-explain-model azureml-interpret", "_____no_output_____" ] ], [ [ "## モデルを説明する\r\n\r\nAzure Machine Learning の外部でトレーニングされたモデルから始めましょう - 下のセルを実行して、デシジョン ツリー分類モデルをトレーニングします。", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport joblib\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.metrics import roc_curve\n\n# 糖尿病データセットを読み込む\r\nprint(\"Loading Data...\")\ndata = pd.read_csv('data/diabetes.csv')\n\n# 特徴とラベルを分離する\r\nfeatures = ['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']\nlabels = ['not-diabetic', 'diabetic']\nX, y = data[features].values, data['Diabetic'].values\n\n# データをトレーニング セットとテスト セットに分割する\r\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)\n\n# デシジョン ツリー モデルをトレーニングする\r\nprint('Training a decision tree model')\nmodel = DecisionTreeClassifier().fit(X_train, y_train)\n\n# 精度を計算する\r\ny_hat = model.predict(X_test)\nacc = np.average(y_hat == y_test)\nprint('Accuracy:', acc)\n\n# AUC を計算する\r\ny_scores = model.predict_proba(X_test)\nauc = roc_auc_score(y_test,y_scores[:,1])\nprint('AUC: ' + str(auc))\n\nprint('Model trained.')", "_____no_output_____" ] ], [ [ "トレーニング プロセスでは、ホールドバック検証データセットに基づいてモデル評価メトリックが生成されるため、予測の精度を把握できます。しかし、データの特徴は予測にどのような影響を与えるのでしょうか?", "_____no_output_____" ], [ "### モデルの説明を取得する\r\n\r\n先にインストールした Azure ML の解釈可能性ライブラリから、モデルに適した Explainer を取得しましょう。Explainer には多くの種類があります。この例では、適切な [SHAP](https://github.com/slundberg/shap) モデル Explainer を呼び出すことによって、多くの種類のモデルを説明するために使用できる「ブラック ボックス」の説明である表形式の *Explainer* を使用します。", "_____no_output_____" ] ], [ [ "from interpret.ext.blackbox import TabularExplainer\n\n# 「特徴」と「クラス」フィールドはオプションです\r\ntab_explainer = TabularExplainer(model,\n X_train, \n features=features, \n classes=labels)\nprint(tab_explainer, \"ready!\")", "_____no_output_____" ] ], [ [ "### *グローバル*な特徴の重要度を取得する\r\n\r\n最初に行うことは、全体的な*特徴の重要度*を評価することによってモデルを説明しようとすることです - つまり、各特徴がトレーニング データセット全体に基づいて予測に影響を与える程度を定量化します。", "_____no_output_____" ] ], [ [ "# ここでトレーニング データまたはテスト データを使用できます\r\nglobal_tab_explanation = tab_explainer.explain_global(X_train)\n\n# 重要度別の上位の特徴を取得する\r\nglobal_tab_feature_importance = global_tab_explanation.get_feature_importance_dict()\nfor feature, importance in global_tab_feature_importance.items():\n print(feature,\":\", importance)", "_____no_output_____" ] ], [ [ "特徴の重要度が順位付けされ、最も重要な機能が最初に表示されます。\r\n\r\n### *ローカル*な特徴の重要度を取得する\r\n\r\n全体的な見解がありますが、個々の観察を説明はどうですか? 
可能性のある各ラベル値を予測する決定に各機能が影響を与えた程度を定量化して、個々の予測の*ローカル*説明を生成しましょう。この場合、バイナリ モデルであるため、2 つのラベル (糖尿病以外と糖尿病) があります。また、データセット内の個々の観測値に対するこれらのラベル値の各特徴の影響を定量化できます。テスト データセットの最初の 2 つのケースを評価するだけです。", "_____no_output_____" ] ], [ [ "# 説明したい観測値を取得する (最初の 2 つ)\r\nX_explain = X_test[0:2]\n\n# 予測を取得する\r\npredictions = model.predict(X_explain)\n\n# ローカルな説明を取得する\r\nlocal_tab_explanation = tab_explainer.explain_local(X_explain)\n\n# 各ラベルの特徴の名前と重要度を取得する\r\nlocal_tab_features = local_tab_explanation.get_ranked_local_names()\nlocal_tab_importance = local_tab_explanation.get_ranked_local_values()\n\nfor l in range(len(local_tab_features)):\n print('Support for', labels[l])\n label = local_tab_features[l]\n for o in range(len(label)):\n print(\"\\tObservation\", o + 1)\n feature_list = label[o]\n total_support = 0\n for f in range(len(feature_list)):\n print(\"\\t\\t\", feature_list[f], ':', local_tab_importance[l][o][f])\n total_support += local_tab_importance[l][o][f]\n print(\"\\t\\t ----------\\n\\t\\t Total:\", total_support, \"Prediction:\", labels[predictions[o]])", "_____no_output_____" ] ], [ [ "## モデル トレーニング実験に説明可能性を追加する\r\n\r\nこれまで見てきたように、Azure Machine Learning の外部でトレーニングされたモデルの説明を生成できます。ただし、Azure Machine Learning ワークスペースでモデルをトレーニングして登録するために実験を使用する場合は、モデルの説明を生成してログに記録できます。\r\n\r\n次のセルでコードを実行して、ワークスペースに接続します。\r\n\r\n> **注**: Azure サブスクリプションでまだ認証済みのセッションを確立していない場合は、リンクをクリックして認証コードを入力し、Azure にサインインして認証するよう指示されます。", "_____no_output_____" ] ], [ [ "import azureml.core\nfrom azureml.core import Workspace\n\n# 保存された構成ファイルからワークスペースを読み込む\r\nws = Workspace.from_config()\nprint('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))", "_____no_output_____" ] ], [ [ "### 実験を使用してモデルをトレーニングして説明する\r\n\r\nでは、実験を作成して、必要なファイルをローカル フォルダーに配置しましょう - この場合、糖尿病データの同じ CSV ファイルを使用してモデルをトレーニングします。", "_____no_output_____" ] ], [ [ "import os, shutil\nfrom azureml.core import Experiment\n\n# 実験ファイル用フォルダーを作成する\r\nexperiment_folder = 'diabetes_train_and_explain'\nos.makedirs(experiment_folder, exist_ok=True)\n\n# データ ファイルを実験フォルダーにコピーする\r\nshutil.copy('data/diabetes.csv', os.path.join(experiment_folder, \"diabetes.csv\"))", "_____no_output_____" ] ], [ [ "次の特徴を含む以外 Azure ML トレーニング スクリプトと同様のトレーニング スクリプトを作成します。\r\n\r\n- 以前使用したモデルの説明を生成する同じライブラリがインポートされ、グローバルな説明を生成するために使用されます\r\n- **ExplanationClient** ライブラリを使用して、説明を実験出力にアップロードします", "_____no_output_____" ] ], [ [ "%%writefile $experiment_folder/diabetes_training.py\n# ライブラリをインポートする\r\nimport pandas as pd\nimport numpy as np\nimport joblib\nimport os\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.metrics import roc_curve\n\n# Azure ML 実行ライブラリをインポートする\r\nfrom azureml.core.run import Run\n\n# モデルの説明用ライブラリをインポートする\r\nfrom azureml.interpret import ExplanationClient\nfrom interpret.ext.blackbox import TabularExplainer\n\n# 実験実行コンテキストを取得する\r\nrun = Run.get_context()\n\n# 糖尿病データセットを読み込む\r\nprint(\"Loading Data...\")\ndata = pd.read_csv('diabetes.csv')\n\nfeatures = ['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']\nlabels = ['not-diabetic', 'diabetic']\n\n# 特徴とラベルを分離する\r\nX, y = data[features].values, data['Diabetic'].values\n\n# データをトレーニング セットとテスト セットに分割する\r\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)\n\n# デシジョン ツリー モデルをトレーニングする\r\nprint('Training a decision tree model')\nmodel = DecisionTreeClassifier().fit(X_train, y_train)\n\n# 
精度を計算する\r\ny_hat = model.predict(X_test)\nacc = np.average(y_hat == y_test)\nrun.log('Accuracy', np.float(acc))\n\n# AUC を計算する\r\ny_scores = model.predict_proba(X_test)\nauc = roc_auc_score(y_test,y_scores[:,1])\nrun.log('AUC', np.float(auc))\n\nos.makedirs('outputs', exist_ok=True)\n# 出力フォルダーに保存されたファイルは、自動的に実験レコードにアップロードされます\r\njoblib.dump(value=model, filename='outputs/diabetes.pkl')\n\n# 説明を取得する\r\nexplainer = TabularExplainer(model, X_train, features=features, classes=labels)\nexplanation = explainer.explain_global(X_test)\n\n# Explanation Client を取得し、説明をアップロードする\r\nexplain_client = ExplanationClient.from_run(run)\nexplain_client.upload_model_explanation(explanation, comment='Tabular Explanation')\n\n# 実行を完了する\r\nrun.complete()", "_____no_output_____" ] ], [ [ "実験にはスクリプトを実行するための Python 環境が必要なため、そのための Conda 仕様を定義します。トレーニング環境には **azureml-interpret** ライブラリが含まれているので、スクリプトは **TabularExplainer** を作成して **ExplainerClient** クラスを使用できる点に留意してください。", "_____no_output_____" ] ], [ [ "%%writefile $experiment_folder/interpret_env.yml\nname: batch_environment\ndependencies:\n- python=3.6.2\n- scikit-learn\n- pandas\n- pip\n- pip:\n - azureml-defaults\n - azureml-interpret", "_____no_output_____" ] ], [ [ "これで実験を実行できます。", "_____no_output_____" ] ], [ [ "from azureml.core import Experiment, ScriptRunConfig, Environment\nfrom azureml.widgets import RunDetails\n\n\n# 実験用 Python 環境を作成する\r\nexplain_env = Environment.from_conda_specification(\"explain_env\", experiment_folder + \"/interpret_env.yml\")\n\n# スクリプト構成を作成する\r\nscript_config = ScriptRunConfig(source_directory=experiment_folder,\n script='diabetes_training.py',\n environment=explain_env) \n\n# 実験を送信する\r\nexperiment_name = 'mslearn-diabetes-explain'\nexperiment = Experiment(workspace=ws, name=experiment_name)\nrun = experiment.submit(config=script_config)\nRunDetails(run).show()\nrun.wait_for_completion()", "_____no_output_____" ] ], [ [ "## 特徴の重要度の値を取得する\r\n\r\n実験の実行が完了したら、**ExplanationClient** クラスを使用して、実行用に登録された説明から特徴の重要度を取得できます。", "_____no_output_____" ] ], [ [ "from azureml.interpret import ExplanationClient\n\n# 特徴の説明を取得する\r\nclient = ExplanationClient.from_run(run)\nengineered_explanations = client.download_model_explanation()\nfeature_importances = engineered_explanations.get_feature_importance_dict()\n\n# 全体的な特徴の重要度\r\nprint('Feature\\tImportance')\nfor key, value in feature_importances.items():\n print(key, '\\t', value)", "_____no_output_____" ] ], [ [ "## Azure Machine Learning Studio でモデルの説明を表示する\r\n\r\nまた、実行の詳細ウィジェットの**実行の詳細を表示**リンクをクリックすると、Azure Machine Learning Studio の実行が表示され、**説明**タブを表示できます。次に以下を実行します。\r\n\r\n1. 表形式の Explainer の説明 ID を選択します。\r\n2. 全体的なグローバル特徴の重要度を示す**特徴の重要度の集計**グラフを表示します。\r\n3. テスト データの各データ ポイントを示す**個別の特徴の重要度**グラフを表示します。\r\n4. 個々のポイントを選択すると、選択したデータ ポイントの個々の予測のローカル特徴の重要度が表示されます。\r\n5. 「**新しいコホート**」 ボタンを使って、次の設定でデータのサブセットを定義します。\r\n - **データセット コホート名**:25歳未満\r\n - **フィルターを選択する**: データセット\r\n - 25歳未満(新しいコホートを保存する前に、必ずこのフィルターを追加する)。\r\n6. 25歳以上の年齢フィルターを使用して、「**25歳以上**」 という名前の 2 つ目のコホートを作成します。\r\n6. **集計機能の重要度**の視覚化を確認し、定義した 2 つのコホートの相対的な機能の重要度を比較します。コホートを比較する能力により、データ母集団の複数のサブセットについて、機能が予測にどのような影響を与えるかを確認できます。\r\n\r\n", "_____no_output_____" ], [ "**詳細情報**: Azure ML での Explainer の使用の詳細については、[ドキュメント](https://docs.microsoft.com/azure/machine-learning/how-to-machine-learning-interpretability)を参照してください。 ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e711b6a2b30ea1f0b3c82acbb95d75b4a09ba24e
57,975
ipynb
Jupyter Notebook
notebooks/intro_to_dl_frameworks.ipynb
rahulsarkar906/ContinuosLearning
5fee47eadbe10763452e5fc135e54398ff808cd5
[ "MIT" ]
249
2018-09-26T18:00:29.000Z
2022-03-29T22:31:44.000Z
notebooks/intro_to_dl_frameworks.ipynb
Sh-imaa/colab
a4ace01caac49f3a83ef78ee2ad4308e35ef15ca
[ "MIT" ]
13
2018-11-01T20:11:16.000Z
2022-01-18T10:27:15.000Z
notebooks/intro_to_dl_frameworks.ipynb
Sh-imaa/colab
a4ace01caac49f3a83ef78ee2ad4308e35ef15ca
[ "MIT" ]
80
2018-10-06T10:19:42.000Z
2022-03-31T08:59:35.000Z
39.681725
5,970
0.522398
[ [ [ "# Open-Source Frameworks for Deep Learning: an Overview\n\n\nThis notebook is part of the 2 hrs talk given at the Univerity of Bologna (DISI), Nuovo Campus Universitario, Via Pavese 50, Cesena, FC the 13th of December 2018, 10-12 am. Remember to include the copyright if you want to use, modify or distribute this notebook! :-) Slides of the talk are available [here](https://docs.google.com/presentation/d/1fbTKtp9xOlCL4JtpiLF39x7Vxq2Z-jgn77CTqcBrwEE/edit?usp=sharing).\n\n\n\n", "_____no_output_____" ], [ "\n---\n\n**Abstract** : The rise of deep learning over the last decade has led to profound changes in the landscape of the machine learning software stack both for research and production. In this talk we will provide a comprehensive overview of the *open-source deep learning frameworks* landscape with both a theoretical and hands-on approach. After a brief introduction and historical contextualization, we will highlight common features and distinctions of their recent developments. Finally, we will take at deeper look into three of the most used deep learning frameworks today: *Caffe*, *Tensorflow*, *PyTorch*; with practical examples and considerations worth reckoning in the choice of such libraries.\n\n**Short Bio** : [Vincenzo Lomonaco](https://vincenzolomonaco.com) is a Deep Learning PhD student at the University of Bologna and founder of [ContinualAI.org](https://continualai.org). He is also the PhD students representative at the Department of Computer Science of Engineering (DISI) and teaching assistant of the courses *“Machine Learning”* and *“Computer Architectures”* in the same department. Previously, he was a Machine Learning software engineer at IDL in-line Devices and a Master Student at the University of Bologna where he graduated cum laude in 2015 with the dissertation [“Deep Learning for Computer Vision: a Comparison Between CNNs and HTMs on Object Recognition Tasks\"](https://amslaurea.unibo.it/9095/).\n\n---", "_____no_output_____" ], [ "** Connecting a local runtime**\n\nIn case resources are not enough for you (no GPU for example), you can always connect another [local runtime](https://research.google.com/colaboratory/local-runtimes.html) or to a [runtime on a Google Compute Engine instance](https://research.google.com/colaboratory/local-runtimes.html). However, this notebook has been designed to run fast enough on simple CPUs so you shouldn't fined any trouble here, using a free *hosted account*.\n\n\n**Requisites to run it locally, outside colab (not recommended)**\n\n* Python 3.x\n* Jupyter\n* Numpy\n* Matplolib\n* Pytorch 0.4.0\n* Caffe 1.0.0\n* Tensorflow 1.12", "_____no_output_____" ], [ "# Hands-on session (45 minutes)\n\nIn this session we will try to learn an evaluate a Convolutional Neural Networks model on [MNIST](http://yann.lecun.com/exdb/mnist/) using three of the most used deep learning frameworks today: *Caffe*, *Tensorflow*, *PyTorch* with an expected timeframe of 15 minutes for each of them. This will allow us to grasp what it means to train a deep model with such libraries and compare the different Python APIs for this simple use case.", "_____no_output_____" ], [ "## Google Colaboratory\n\nFirst of all, take a moment to look around and discover Google Colab if you haven't before! You can run the commands below to understand how much resources you're using and are still available. Then consider also that you can also connect you Google Drive for additional space or for easily loading your own files. 
Check out the [official tutorial](https://colab.research.google.com/) of the Google Colaboratory for more information.\n\nYou can always reset the entire VM with \"*Runtime > Reset all runtime*\" in case of difficulty. Make also sure you're using the GPU or TPU in the same tab (\"*Runtime > Change runtime type*\").", "_____no_output_____" ] ], [ [ "!free -m\n!df -h\n!nvidia-smi", " total used free shared buff/cache available\nMem: 13022 2998 1992 67 8031 10763\nSwap: 0 0 0\nFilesystem Size Used Avail Use% Mounted on\noverlay 359G 18G 323G 6% /\ntmpfs 6.4G 0 6.4G 0% /dev\ntmpfs 6.4G 0 6.4G 0% /sys/fs/cgroup\n/dev/sda1 365G 22G 344G 6% /opt/bin\ntmpfs 6.4G 8.0K 6.4G 1% /var/colab\nshm 6.0G 4.0K 6.0G 1% /dev/shm\ntmpfs 6.4G 0 6.4G 0% /sys/firmware\nSat Dec 15 13:57:37 2018 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 396.44 Driver Version: 396.44 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n|===============================+======================+======================|\n| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |\n| N/A 39C P0 71W / 149W | 1157MiB / 11441MiB | 0% Default |\n+-------------------------------+----------------------+----------------------+\n \n+-----------------------------------------------------------------------------+\n| Processes: GPU Memory |\n| GPU PID Type Process name Usage |\n|=============================================================================|\n+-----------------------------------------------------------------------------+\n" ] ], [ [ "Questions to explore:\n\n* How to connect your Google Drive with Google Colab?\n* How to import a new notebook and save it to your GDrive?\n* How to use files which are contained in your GDrive?\n\nSome tips here: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d\n", "_____no_output_____" ], [ "## Loading the MNIST Benchamark", "_____no_output_____" ], [ "in this section we load the common MNIST benchmark which we will use for our examples. 
We will take advantage of the *ContinualAI* calab scripts for easy loading of the MNIST images as numpy tensors:", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "!git clone https://github.com/ContinualAI/colab.git continualai/colab", "fatal: destination path 'continualai/colab' already exists and is not an empty directory.\n" ], [ "from continualai.colab.scripts import mnist\nmnist.init()", "Downloading train-images-idx3-ubyte.gz...\nDownloading t10k-images-idx3-ubyte.gz...\nDownloading train-labels-idx1-ubyte.gz...\nDownloading t10k-labels-idx1-ubyte.gz...\nDownload complete.\nSave complete.\n" ], [ "x_train, t_train, x_test, t_test = mnist.load()\n\nprint(\"x_train dim and type: \", x_train.shape, x_train.dtype)\nprint(\"t_train dim and type: \", t_train.shape, t_train.dtype)\nprint(\"x_test dim and type: \", x_test.shape, x_test.dtype)\nprint(\"t_test dim and type: \", t_test.shape, t_test.dtype)", "x_train dim and type: (60000, 1, 28, 28) float32\nt_train dim and type: (60000,) uint8\nx_test dim and type: (10000, 1, 28, 28) float32\nt_test dim and type: (10000,) uint8\n" ] ], [ [ "Let us take a look at these images:", "_____no_output_____" ] ], [ [ "f, axarr = plt.subplots(2,2)\naxarr[0,0].imshow(x_train[1, 0], cmap=\"gray\")\naxarr[0,1].imshow(x_train[2, 0], cmap=\"gray\")\naxarr[1,0].imshow(x_train[3, 0], cmap=\"gray\")\naxarr[1,1].imshow(x_train[4, 0], cmap=\"gray\")\nnp.vectorize(lambda ax:ax.axis('off'))(axarr);", "_____no_output_____" ] ], [ [ "## Common Constants", "_____no_output_____" ], [ "Now we can move on and define some common constants that we will share across the DL framework experiments:", "_____no_output_____" ] ], [ [ "# we will use time to measure speed\nimport time\n\n# number of classes in the MNIST dataset\nnum_class = 10\n\n# number of epochs we will use for each training\nn_epochs = 2\n\n# mini-batch size for SGD\nminibatch_size = 100\n\n# Iterations for epoch for the two sets\ntr_it_for_epoch = t_train.shape[0] // minibatch_size\nte_it_for_epoch = t_test.shape[0] // minibatch_size\nprint(\"train iterations: \", tr_it_for_epoch)\nprint(\"test iterations: \", te_it_for_epoch)", "train iterations: 600\ntest iterations: 100\n" ] ], [ [ "## Training a ConvNet with Caffe\n\nLet us focus on *Caffe*. First of all let us install the library. Luckily enough for Ubuntu (>= 17.04) there is a packaged version we can install simply with:", "_____no_output_____" ] ], [ [ "# !apt install -y caffe-cpu\n!apt install -y caffe-cuda", "Reading package lists... Done\nBuilding dependency tree \nReading state information... Done\ncaffe-cuda is already the newest version (1.0.0-6build1).\n0 upgraded, 0 newly installed, 0 to remove and 8 not upgraded.\n" ] ], [ [ "Let us now import the library and check the version:", "_____no_output_____" ] ], [ [ "import caffe\ncaffe.__version__\n", "_____no_output_____" ] ], [ [ "Now we can set the hardware type:", "_____no_output_____" ] ], [ [ "#caffe.set_mode_cpu()\ncaffe.set_device(0)\ncaffe.set_mode_gpu()", "_____no_output_____" ] ], [ [ "Great! Now that we have caffe imported and configured, we can focus on the definition of our ConvNet and the training/testing procedures. The easiest way to define the network structure and the opimization parameters is to define two separate prototxts files. 
In this case we I have already prepared the net.prototxt and solver.prototxt files which we can import front the *ContinualAI-colab* toolchain:", "_____no_output_____" ] ], [ [ "!cp continualai/colab/extras/net.prototxt .\n!cp continualai/colab/extras/solver.prototxt .\nsolver_name = \"solver.prototxt\"", "_____no_output_____" ] ], [ [ "Before moving on let's visualize them with the awesome netscope tool: \n\n* [net.prototxt](http://ethereon.github.io/netscope/#/gist/fbc84e148391c5bd953a5ec7d613b0f9)\n* [solver.prototxt](https://gist.github.com/vlomonaco/82dbff5eab77e146b489d27a5cd5923f)", "_____no_output_____" ], [ "Now we can define our test method:", "_____no_output_____" ] ], [ [ "def test(net, x, y, test_iters, test_batch_size):\n \"\"\" test the trained net \"\"\"\n\n acc = 0\n loss = 0\n for it in range(test_iters):\n if it % 100 == 1: print(\"+\", end=\"\", flush = True)\n start = it * test_batch_size\n end = (it + 1) * test_batch_size\n net.blobs['data'].data[...] = x[start:end]\n net.blobs['label'].data[...] = y[start:end]\n \n blobs = net.forward([\"accuracy\", \"loss\"])\n acc += blobs[\"accuracy\"]\n loss += blobs[\"loss\"]\n\n return acc / test_iters, loss / test_iters", "_____no_output_____" ] ], [ [ "Load the solver and start the training procedure:", "_____no_output_____" ] ], [ [ "solver = caffe.get_solver(solver_name)\n\nt_start = time.time()\nprint(\"Start Training\")\n\nfor epoch in range(n_epochs):\n print(\"Epoch\", epoch, \" \", end=\"\")\n for it in range(tr_it_for_epoch):\n\n if it % 100 == 1: print(\".\", end=\"\", flush=True) \n start = it * minibatch_size\n end = (it + 1) * minibatch_size\n solver.net.blobs['data'].data[...] = x_train[start:end]\n solver.net.blobs['label'].data[...] = t_train[start:end]\n solver.step(1)\n\n train_acc, train_loss = test(solver.test_nets[0], x_train, t_train,\n tr_it_for_epoch, minibatch_size)\n test_acc, _ = test(solver.test_nets[0], x_test, t_test,\n te_it_for_epoch, minibatch_size)\n print(\" Train loss: %.4f Train acc: %.2f %% Test acc: %.2f %%\" %\n (train_loss, train_acc * 100, test_acc * 100))\n\nt_elapsed = time.time()-t_start\nprint(\"---------------------------------------------\")\nprint ('%d patterns (%.2f sec.) -> %.2f patt/sec' % \n (x_train.shape[0]*n_epochs, t_elapsed, \n x_train.shape[0]*n_epochs / t_elapsed))\nprint(\"---------------------------------------------\")", "Start Training\nEpoch 0 ......+++++++ Train loss: 0.0988 Train acc: 96.82 % Test acc: 96.65 %\nEpoch 1 ......+++++++ Train loss: 0.0451 Train acc: 98.54 % Test acc: 98.13 %\n---------------------------------------------\n120000 patterns (112.29 sec.) -> 1068.70 patt/sec\n---------------------------------------------\n" ] ], [ [ "### [Extra] Define model and solver from Python\n\nOf course is it possible to define the network directly in Python as shown below. 
However we leave this part as optional for the reader to explore.", "_____no_output_____" ] ], [ [ "from caffe import layers as L\nfrom caffe import params as P\nfrom caffe.proto import caffe_pb2\nfrom google.protobuf import text_format\n\ndef get_net(num_classes=10, train_mb_size=100):\n \"\"\" Define net and return it as String \"\"\"\n\n net = caffe.NetSpec()\n\n net.data = L.Input(\n shape=[dict(dim=[train_mb_size, 1, 28, 28, ])], ntop=1,\n include=dict(phase=caffe.TRAIN)\n )\n net.test_data = L.Input(\n shape=[dict(dim=[100, 1, 28, 28, ])], ntop=1,\n include=dict(phase=caffe.TEST)\n )\n net.label = L.Input(\n shape=[dict(dim=[train_mb_size])], ntop=1,\n include=dict(phase=caffe.TRAIN)\n )\n net.test_label = L.Input(\n shape=[dict(dim=[100])], ntop=1,\n include=dict(phase=caffe.TEST)\n )\n net.conv1 = L.Convolution(\n net.data, kernel_size=5,\n num_output=32, param=[dict(lr_mult=1), dict(lr_mult=2)],\n weight_filler=dict(type='xavier'),\n bias_filler=dict(type='constant')\n )\n net.relu1 = L.ReLU(net.conv1, in_place=True)\n\n net.conv2 = L.Convolution(\n net.relu1, kernel_size=5,\n num_output=32, param=[dict(lr_mult=1), dict(lr_mult=2)],\n weight_filler=dict(type='xavier'),\n bias_filler=dict(type='constant')\n )\n net.relu2 = L.ReLU(net.conv2, in_place=True)\n\n net.fc1 = L.InnerProduct(\n net.relu2, num_output=500,\n param=[dict(lr_mult=1), dict(lr_mult=2)],\n weight_filler=dict(type='xavier'),\n bias_filler=dict(type='constant')\n )\n net.relu3 = L.ReLU(net.fc1, in_place=True)\n\n net.out = L.InnerProduct(\n net.fc1, num_output=num_classes,\n param=[dict(lr_mult=1), dict(lr_mult=2)],\n weight_filler=dict(type='xavier'),\n bias_filler=dict(type='constant')\n )\n\n net.loss = L.SoftmaxWithLoss(net.out, net.label)\n \n net.accuracy = L.Accuracy(\n net.out, net.test_label, include=dict(phase=caffe.TEST)\n )\n\n proto = str(net.to_proto())\n proto = proto.replace('test_data', 'data').replace('test_label', 'label')\\\n .replace('test_target', 'target')\n \n return proto\n\ndef get_solver( net, base_lr, random_seed=1, lr_policy=\"step\", gamma=0.1,\n stepsize=100000000, momentum=0.9, weight_decay=0.0005, test_iter=0,\n test_interval=1000, display=20, solver_mode=caffe_pb2.SolverParameter.GPU):\n \"\"\" Define solver and return it as String \"\"\"\n\n solver_config = caffe_pb2.SolverParameter()\n\n solver_config.random_seed = random_seed\n solver_config.test_iter.append(1)\n solver_config.test_interval = 1\n solver_config.net = net\n solver_config.base_lr = base_lr\n solver_config.lr_policy = lr_policy\n solver_config.gamma = gamma\n solver_config.stepsize = stepsize\n solver_config.momentum = momentum\n solver_config.weight_decay = weight_decay\n solver_config.snapshot_format = caffe_pb2.SolverParameter.HDF5\n solver_config.solver_mode = solver_mode\n\n solver_config = text_format.MessageToString(\n solver_config, float_format='.6g'\n )\n\n return solver_config", "_____no_output_____" ], [ "net_name = \"net.prototxt\"\n\nwith open(net_name, \"w\") as wf:\n wf.write(get_net())\n \nwith open(solver_name, \"w\") as wf:\n wf.write(get_solver(net_name, base_lr=0.01))", "_____no_output_____" ] ], [ [ "Questions to explore:\n\n* How to recover the weights of a particular layer?\n* How to get the activations of a particular layer?\n* How to cast a classifier into a Fully Convolutional Network?\n\nSome tips here: https://github.com/BVLC/caffe/blob/master/examples/net_surgery.ipynb\n", "_____no_output_____" ], [ "## Training a ConvNet with Tensorflow\n\nLet us move to the second framework we are 
considering: *Tensorflow*. We don't need to install it since it's already pre-loaded in Google Colaboratory (guess why! :'D). So let us start by importing it and checking the version:", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nprint(tf.__version__)\ntf.reset_default_graph()", "1.12.0\n" ] ], [ [ "Then we can define directly the network structure using the tf.layers API:", "_____no_output_____" ] ], [ [ "x = tf.placeholder(tf.float32, shape=[minibatch_size, 1, 28, 28])\nt = tf.to_int64(tf.placeholder(tf.int32, shape=[minibatch_size]))\n\n# First Convolutional Layer\nx_image = tf.reshape(x, [-1,28,28,1])\nconv1 = tf.layers.conv2d(\n x_image, 32, 5, strides=(1, 1), padding='valid', activation=tf.nn.relu\n)\n\n# Second Convolutional Layer\nconv2 = tf.layers.conv2d(\n conv1, 32, 5, strides=(1, 1), padding='valid', activation=tf.nn.relu\n)\npool2 = tf.layers.flatten(conv2)\n\n# Densely Connected Layer\nfc1 = tf.layers.dense(pool2, 500, name=\"fc1\", activation=tf.nn.relu)\n\n# Output Layer\ny_logits = tf.layers.dense(fc1, num_class, name=\"logits\")\n\n# Train and Evaluate the Model\ncross_entropy = tf.reduce_mean(\n tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y_logits, labels=t)\n)\noptim = tf.train.MomentumOptimizer(1e-2, momentum=0.9)\ntrain_step = optim.minimize(cross_entropy)\ncorrect_prediction = tf.equal(tf.argmax(y_logits,1), t)\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\nloss = tf.reduce_mean(cross_entropy)", "_____no_output_____" ] ], [ [ "As for caffe we can define the *test* function:", "_____no_output_____" ] ], [ [ "def test(x_set, y_set, test_iters, test_batch_size):\n \"\"\" testing set accuracy: can be used for train and test\"\"\"\n \n accuracy_sum = 0.0\n for it in range(test_iters):\n if it % 100 == 1: print(\"+\", end=\"\", flush = True)\n start = it * test_batch_size\n end = (it + 1) * test_batch_size\n accuracy_sum += sess.run(\n fetches=accuracy,\n feed_dict={x: x_set[start:end], t: y_set[start:end]}\n )\n \n return accuracy_sum / test_iters", "_____no_output_____" ] ], [ [ "And then we can use the computational graph just created within an interactive session:", "_____no_output_____" ] ], [ [ "sess = tf.InteractiveSession()\nsess.run(tf.global_variables_initializer())\n\nt_start = time.time()\nprint(\"Start Training\")\ntrain_loss = 0\nfor epoch in range(n_epochs):\n print(\"Epoch\", epoch, \" \", end=\"\")\n for it in range(tr_it_for_epoch):\n if it % 100 == 1: print(\".\", end=\"\", flush = True)\n start = it * minibatch_size\n end = (it + 1) * minibatch_size\n batch_loss, _ = sess.run(\n fetches=[loss, train_step],\n feed_dict={x: x_train[start:end], t: t_train[start:end]}\n )\n train_loss += batch_loss\n\n train_loss = train_loss / tr_it_for_epoch\n train_acc = test(\n x_train, t_train, tr_it_for_epoch, minibatch_size,\n ) \n test_acc = test(\n x_test, t_test, te_it_for_epoch, minibatch_size,\n )\n\n print(\" Train loss: %.4f Train acc: %.2f %% Test acc: %.2f %%\" %\n (train_loss, train_acc * 100, test_acc * 100))\n \nt_elapsed = time.time()-t_start\nprint(\"---------------------------------------------\")\nprint ('%d patterns (%.2f sec.) 
-> %.2f patt/sec' % \n (x_train.shape[0]*n_epochs, t_elapsed, \n x_train.shape[0]*n_epochs / t_elapsed))\nprint(\"---------------------------------------------\")\n\nsess.close()", "Start Training\nEpoch 0 ......+++++++ Train loss: 0.2559 Train acc: 97.60 % Test acc: 97.40 %\nEpoch 1 ......+++++++ Train loss: 0.0604 Train acc: 98.26 % Test acc: 97.80 %\n---------------------------------------------\n120000 patterns (20.27 sec.) -> 5920.06 patt/sec\n---------------------------------------------\n" ] ], [ [ "### [Extra] Keras API", "_____no_output_____" ], [ "As an bonus we leave to the reader the same implementation but using the keras API. Very easy, isn't it? But remember, with great abstraction power comes great responsibility...", "_____no_output_____" ] ], [ [ "tf.keras.backend.set_image_data_format('channels_first')\n\nmodel = tf.keras.models.Sequential([\n tf.keras.layers.Conv2D(\n 32, (5, 5), strides=(1, 1), padding='valid', activation=tf.nn.relu\n ),\n tf.keras.layers.Conv2D(\n 32, (5, 5), strides=(1, 1), padding='valid', activation=tf.nn.relu\n ),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(500, activation=tf.nn.relu),\n tf.keras.layers.Dense(num_class, activation=tf.nn.softmax)\n])\noptim = tf.keras.optimizers.SGD(lr=0.01, momentum=0.9)\nmodel.compile(optimizer=optim,\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\nt_start = time.time()\nprint(\"Start Training\")\n\nmodel.fit(x_train, t_train, batch_size=minibatch_size, epochs=n_epochs)\nmodel.evaluate(x_test, t_test)\n\nt_elapsed = time.time()-t_start\nprint(\"---------------------------------------------\")\nprint ('%d patterns (%.2f sec.) -> %.2f patt/sec' % \n (x_train.shape[0]*n_epochs, t_elapsed, \n x_train.shape[0]*n_epochs / t_elapsed))\nprint(\"---------------------------------------------\")", "Start Training\nEpoch 1/2\n60000/60000 [==============================] - 8s 131us/step - loss: 0.0366 - acc: 0.9888\nEpoch 2/2\n60000/60000 [==============================] - 8s 130us/step - loss: 0.0252 - acc: 0.9920\n10000/10000 [==============================] - 1s 98us/step\n---------------------------------------------\n120000 patterns (17.05 sec.) -> 7040.14 patt/sec\n---------------------------------------------\n" ] ], [ [ "Questions to explore:\n\n* What happens if you comment the first line of code?\n* What if you also change the convolution padding from \"valid\" to \"same\"?\n\nSome tips here: https://keras.io/layers/convolutional/\n", "_____no_output_____" ], [ "## Training a ConvNet with PyTorch", "_____no_output_____" ], [ "Let us delve now into the third and last framework we will consider: Pytorch. 
First of all let us install it and import it.", "_____no_output_____" ] ], [ [ "# http://pytorch.org/\nfrom os import path\nfrom wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag\nplatform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())\n\naccelerator = 'cu80' #'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'\nprint('Platform:', platform, 'Accelerator:', accelerator)\n\n!pip install --upgrade --force-reinstall -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.0-{platform}-linux_x86_64.whl torchvision\n\nimport torch\nprint('Torch', torch.__version__, 'CUDA', torch.version.cuda)", "Platform: cp36-cp36m Accelerator: cu80\ntcmalloc: large alloc 1073750016 bytes == 0x5b6a8000 @ 0x7f19cfe102a4 0x591a07 0x5b5d56 0x502e9a 0x506859 0x502209 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x507641 0x502209 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x507641 0x504c28 0x502540 0x502f3d 0x507641\n\u001b[31mjupyter-console 6.0.0 has requirement prompt-toolkit<2.1.0,>=2.0.0, but you'll have prompt-toolkit 1.0.15 which is incompatible.\u001b[0m\n\u001b[31mgoogle-colab 0.0.1a1 has requirement six~=1.11.0, but you'll have six 1.12.0 which is incompatible.\u001b[0m\n\u001b[31mfeaturetools 0.4.1 has requirement pandas>=0.23.0, but you'll have pandas 0.22.0 which is incompatible.\u001b[0m\n\u001b[31mcufflinks 0.14.6 has requirement plotly>=3.0.0, but you'll have plotly 1.12.12 which is incompatible.\u001b[0m\nTorch 0.4.0 CUDA 8.0.61\n" ], [ "import torch\ntorch.cuda.is_available()", "_____no_output_____" ], [ "# switch to False to use CPU\nuse_cuda = True\n\nuse_cuda = use_cuda and torch.cuda.is_available()\ndevice = torch.device(\"cuda\" if use_cuda else \"cpu\");\ntorch.manual_seed(1);", "_____no_output_____" ], [ "import torch.nn as nn\nimport torchvision.datasets as datasets\nimport torchvision.transforms as transforms\nimport torch.optim as optim\nimport torch.nn.functional as F", "_____no_output_____" ] ], [ [ "**Questions to explore:**\n\n* What's new in Pythorch 0.4?\n\nSome tips here: https://pytorch.org/blog/pytorch-0_4_0-migration-guide/\n", "_____no_output_____" ], [ "Great! 
So now we can define the network and the independent forward function which can dynamically change depending on the input data:", "_____no_output_____" ] ], [ [ "class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 32, kernel_size=5)\n self.conv2 = nn.Conv2d(32, 32, kernel_size=5)\n self.fc1 = nn.Linear(512, 500)\n self.fc2 = nn.Linear(500, num_class)\n\n def forward(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2(x), 2))\n x = x.view(-1, 512)\n x = F.relu(self.fc1(x))\n x = self.fc2(x)\n return F.log_softmax(x, dim=1)", "_____no_output_____" ] ], [ [ "We can define the test method as before:", "_____no_output_____" ] ], [ [ "def test(model, device, x_test, t_test, test_iters, test_batch_size):\n model.eval()\n test_loss = 0\n correct = 0\n for it in range(test_iters):\n if it % 100 == 1: print(\"+\", end=\"\", flush = True)\n start = it * test_batch_size\n end = (it + 1) * test_batch_size\n with torch.no_grad():\n x = torch.from_numpy(x_test[start:end])\n y = torch.from_numpy(t_test[start:end]).long()\n x, y = x.to(device), y.to(device)\n output = model(x)\n # sum up batch loss\n test_loss += F.cross_entropy(output, y).item() \n # get the index of the max log-probability\n pred = output.max(1, keepdim=True)[1] \n correct += pred.eq(y.view_as(pred)).sum().item()\n\n return correct / len(t_test)", "_____no_output_____" ] ], [ [ "Then finally define the model and start the training:", "_____no_output_____" ] ], [ [ "model = Net().to(device)\noptimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)\nmodel.train()\n\nt_start = time.time()\nprint(\"Start Training\")\ntrain_loss = 0\n\nfor epoch in range(n_epochs):\n print(\"Epoch\", epoch, \" \", end=\"\")\n train_loss = 0\n for it in range(tr_it_for_epoch):\n if it % 100 == 1: print(\".\", end=\"\", flush = True)\n start = it * minibatch_size\n end = (it + 1) * minibatch_size\n x = torch.from_numpy(x_train[start:end])\n y = torch.from_numpy(t_train[start:end]).long()\n x, y = x.to(device), y.to(device)\n \n optimizer.zero_grad()\n\n output = model(x)\n loss = F.cross_entropy(output, y)\n loss.backward()\n optimizer.step()\n train_loss += loss\n \n train_loss = train_loss / tr_it_for_epoch\n train_acc = test(\n model, device, x_train, t_train, tr_it_for_epoch, minibatch_size,\n ) \n test_acc = test(\n model, device, x_test, t_test, te_it_for_epoch, minibatch_size,\n )\n\n print(\" Train loss: %.4f Train acc: %.2f %% Test acc: %.2f %%\" %\n (train_loss, train_acc * 100, test_acc * 100))\n \nt_elapsed = time.time()-t_start\nprint(\"---------------------------------------------\")\nprint ('%d patterns (%.2f sec.) -> %.2f patt/sec' % \n (x_train.shape[0]*n_epochs, t_elapsed, \n x_train.shape[0]*n_epochs / t_elapsed))\nprint(\"---------------------------------------------\")", "Start Training\nEpoch 0 ......+++++++ Train loss: 0.3784 Train acc: 97.12 % Test acc: 97.43 %\nEpoch 1 ......+++++++ Train loss: 0.0778 Train acc: 98.06 % Test acc: 98.05 %\n---------------------------------------------\n120000 patterns (8.33 sec.) -> 14403.05 patt/sec\n---------------------------------------------\n" ] ], [ [ "Wow! 
~98% accuracy in such a short time.\n\n**Questions to explore:**\n\n* Can you find a better parametrization to improve the final accuracy?\n* Can you change the network architecture to improve the final accuracy?\n* Can you achieve the same performances with a smaller architecture?\n* What's the difference in accuracy if you change convolutions with fully connected layers?\n* Can you improve the speed of the training for all the frameworks described above?\n* What are the pros and cons of each framework in this simple example?\n\nSome tips here: http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#4d4e495354", "_____no_output_____" ], [ "This concludes our little tour of the thre most used open-source frameworks for Deep Learning. Please make a PR if you spot any error or you want to contribute to the **ContinualAI-Colab** project! :-) ", "_____no_output_____" ], [ "**Copyright (c) 2018. Continual AI. All rights reserved. **\n\nSee the accompanying LICENSE file in the GitHub repository for terms. \n\n*Date: 27-11-2018 \nAuthor: Vincenzo Lomonaco \nE-mail: [email protected] \nWebsite: continualai.org* ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
e711bf9447d3ad83152d33822643d1841738aebe
8,973
ipynb
Jupyter Notebook
docs/notebooks/inference.ipynb
mcoughlin/AIQC
a272a9e3e5b2393dd7ee1eab8ceddf9b455066fd
[ "BSD-3-Clause" ]
null
null
null
docs/notebooks/inference.ipynb
mcoughlin/AIQC
a272a9e3e5b2393dd7ee1eab8ceddf9b455066fd
[ "BSD-3-Clause" ]
null
null
null
docs/notebooks/inference.ipynb
mcoughlin/AIQC
a272a9e3e5b2393dd7ee1eab8ceddf9b455066fd
[ "BSD-3-Clause" ]
null
null
null
25.784483
338
0.488465
[ [ [ "# Inference", "_____no_output_____" ], [ "Down the road, you will need to make real-life predictions using the models that you've trained.", "_____no_output_____" ], [ "Inference is a breeze with AIQC because it persists all of the information that we need to preprocess our new samples and reconstruct our model.\n\nNormally, the challenge with inference is being able to preprocess your new samples the same way as your processed your training samples. Additionally, if you provide labels with your new data for the purpose of evaluation, then PyTorch requires you to reconstruct parts of your model like your optimizer in order to calculate loss.", "_____no_output_____" ], [ "---", "_____no_output_____" ] ], [ [ "import aiqc\nfrom aiqc import datum\nfrom aiqc import tests", "_____no_output_____" ] ], [ [ "Below we're just making a trained model so that we have examples to work with for making inference-based predictions.", "_____no_output_____" ] ], [ [ "%%capture\nqueue_multiclass = tests.make_test_queue('keras_multiclass')\nqueue_multiclass.run_jobs()", "_____no_output_____" ] ], [ [ "## Predictor", "_____no_output_____" ], [ "Let's say that we have a trained model in the form of a `Predictor`,", "_____no_output_____" ] ], [ [ "predictor = queue_multiclass.jobs[0].predictors[0]", "_____no_output_____" ] ], [ [ "and that we have samples that we want to generate predictions for.", "_____no_output_____" ], [ "## New Splitset", "_____no_output_____" ] ], [ [ "df = datum.to_pandas('iris.tsv').sample(10)", "_____no_output_____" ], [ "df[:5]", "_____no_output_____" ] ], [ [ "We'll fashion a new `Splitset` of the samples that we want to predict using the high-level API.\n\n- Leave the `label_column` blank if you are conducting pure inference where you don't know the real Label/target.\n- Otherwise, `splitset.label` will be used to generate metrics for your new predictions.", "_____no_output_____" ] ], [ [ "splitset = aiqc.Pipeline.Tabular.make(\n dataFrame_or_filePath = df\n , label_column = 'species'\n)", "_____no_output_____" ] ], [ [ "## Run Inference", "_____no_output_____" ], [ "Then pass that `Splitset` to `Predictor.infer()`.", "_____no_output_____" ], [ "During `infer`, it will validate that the schema of your new Splitset's `Feature` and `Label` match the schema of the original training Splitset. It will also ignore any splits that you make, fetching the entire Feature and Label.", "_____no_output_____" ], [ "- `Dataset.Tabular` schema includes column ordering and dtype.\n- `Dataset.Image` schema includes Pillow size (height/width) and mode (color dimensions).", "_____no_output_____" ] ], [ [ "prediction = predictor.infer(splitset_id=splitset.id)", "_____no_output_____" ] ], [ [ "- The key in the dictionary-based `Prediction` attributes will be equal to the `str(splitset.id)`.\n- If you trained on encoded Labels, don't worry, the output will be `inverse_transform`'ed.", "_____no_output_____" ] ], [ [ "prediction.predictions", "_____no_output_____" ] ], [ [ "For more information on the `Prediction` object, reference the [Low-Level API](api_low_level.html) documentation.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e711c1e43ffc26dd55dd6bc65c5b9024c472d96a
187,997
ipynb
Jupyter Notebook
aaStats/Covariance.ipynb
uah-cao1/CEO
40dbf7db365d9cd14268cd36b1c789b22750d552
[ "Zlib" ]
18
2016-02-29T12:41:52.000Z
2021-12-03T15:10:34.000Z
aaStats/Covariance.ipynb
uah-cao1/CEO
40dbf7db365d9cd14268cd36b1c789b22750d552
[ "Zlib" ]
23
2015-04-27T14:17:19.000Z
2021-11-29T22:19:12.000Z
aaStats/Covariance.ipynb
uah-cao1/CEO
40dbf7db365d9cd14268cd36b1c789b22750d552
[ "Zlib" ]
17
2015-04-09T14:13:16.000Z
2022-02-17T10:03:00.000Z
554.563422
43,124
0.952712
[ [ [ "import ceo\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ], [ "NL = 48\nn = NL\nNA = NL+1\nD = 25.5\nd = D/NL", "_____no_output_____" ], [ "atm = ceo.GmtAtmosphere(0.15,25)\nngs = ceo.Source(\"V\",resolution=(NA,NA))", "_____no_output_____" ] ], [ [ "# Angle of arrival covariance", "_____no_output_____" ] ], [ [ "aa = ceo.AaStatsMatrix(NL,atm,d,ngs)\n\nfig,ax = plt.subplots(figsize=(10,4))\nh = ax.matshow(np.hstack(np.vsplit(aa.cov.host(),4)))\nfig.colorbar(h,ax=ax,orientation='horizontal')", "_____no_output_____" ] ], [ [ "# Phase/Angle of arrival covariance", "_____no_output_____" ] ], [ [ "d = D/(NA-1)\npa = ceo.PaStats(NA,NL,1,atm,d,ngs,ngs)\n\nfig,ax = plt.subplots(figsize=(6,4))\nh = ax.matshow(np.hstack(np.vsplit(pa.cov.host().reshape(-1,NA + NL -1),2)))\nfig.colorbar(h,ax=ax,orientation='horizontal')", "_____no_output_____" ], [ "osf = 4\nNP = osf*NL+1\npa = ceo.PaStats(NP,NL,osf,atm,d,ngs,ngs)\n\nfig,ax = plt.subplots(figsize=(6,4))\nh = ax.matshow(np.hstack(np.vsplit(pa.cov.host().reshape(-1,NP + NL -1),2)))\nfig.colorbar(h,ax=ax,orientation='horizontal')", "_____no_output_____" ], [ "pa = ceo.APaStats(NA,NL,1,atm,d,ngs,0)\n\nfig,ax = plt.subplots(figsize=(6,4))\nh = ax.matshow(np.hstack(np.vsplit(pa.cov.host().reshape(-1,NA + NL -1),2)))\nfig.colorbar(h,ax=ax,orientation='horizontal')", "_____no_output_____" ], [ "pa = ceo.APaStats(NA,NL,1,atm,d,ngs,ceo.constants.ARCMIN2RAD*5)\n\nfig,ax = plt.subplots(figsize=(6,4))\nh = ax.matshow(np.hstack(np.vsplit(pa.cov.host().reshape(-1,NA + NL -1),2)))\nfig.colorbar(h,ax=ax,orientation='horizontal')", "_____no_output_____" ], [ "pa = ceo.APaStats(NA,NL,1,atm,d,ngs,ceo.constants.ARCMIN2RAD*10)\n\nfig,ax = plt.subplots(figsize=(6,4))\nh = ax.matshow(np.hstack(np.vsplit(pa.cov.host().reshape(-1,NA + NL -1),2)))\nfig.colorbar(h,ax=ax,orientation='horizontal')", "_____no_output_____" ], [ "osf = 4\nNP = osf*NL+1\npa = ceo.APaStats(NP,NL,osf,atm,d,ngs,ceo.constants.ARCMIN2RAD*10)\n\nfig,ax = plt.subplots(figsize=(6,4))\nh = ax.matshow(np.hstack(np.vsplit(pa.cov.host().reshape(-1,NP + NL -1),2)))\nfig.colorbar(h,ax=ax,orientation='horizontal')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
e711c396d6ac8049801840444617ee129a828693
59,543
ipynb
Jupyter Notebook
ch-algorithms/deutsch-josza.ipynb
ThePrez/qiskit-textbook
f76197ae05ed17157e994adce884dd9f6cf62b18
[ "Apache-2.0" ]
2
2019-09-16T17:52:16.000Z
2019-12-12T03:01:37.000Z
ch-algorithms/deutsch-josza.ipynb
ThePrez/qiskit-textbook
f76197ae05ed17157e994adce884dd9f6cf62b18
[ "Apache-2.0" ]
null
null
null
ch-algorithms/deutsch-josza.ipynb
ThePrez/qiskit-textbook
f76197ae05ed17157e994adce884dd9f6cf62b18
[ "Apache-2.0" ]
null
null
null
111.503745
16,292
0.826764
[ [ [ "# Deutsch-Josza Algorithm", "_____no_output_____" ], [ "In this section, we first introduce the Deutsch-Josza problem, and classical and quantum algorithms to solve it. We then implement the quantum algorithm using Qiskit, and run on a simulator and device.", "_____no_output_____" ], [ "## Contents\n\n1. [Introduction](#introduction)\n - [Deutsch-Josza Problem](#djproblem)\n - [Deutsch-Josza Algorithm](#djalgorithm)\n\n2. [Example](#example)\n\n3. [Qiskit Implementation](#implementation)\n - [Simulation](#simulation)\n - [Device](#device)\n\n4. [Problems](#problems)\n\n5. [References](#references)", "_____no_output_____" ], [ "## 1. Introduction <a id='introduction'></a>", "_____no_output_____" ], [ "The Deutsch-Josza algorithm, first introduced in Reference [1], was the first example of a quantum algorithm that performs better than the best classical algorithm. It showed that there can be advantages in using a quantum computer as a computational tool for a specific problem.", "_____no_output_____" ], [ "### 1a. Deutsch-Josza Problem <a id='djproblem'> </a>", "_____no_output_____" ], [ "We are given a hidden Boolean function $f$, which takes as as input a string of bits, and returns either $0$ or $1$, that is \n<center>$f(\\{x_0,x_1,x_2,...\\}) \\rightarrow 0 \\textrm{ or } 1 \\textrm{ , where } x_n \\textrm{ is } 0 \\textrm{ or } 1$.\n\nFor a string of $n$ bits, there are a total of $2^n$ combinations. The property of the given Boolean function is that it is guaranteed to either be balanced or constant. A constant function returns all $0$'s or all $1$'s for any input, while a balanced function returns $0$'s for exactly half of all inputs and $1$'s for the other half. Our task is to determine whether the given function is balanced or constant. \n\nNote that the Deutsch-Josza problem is an $n$-bit extension of the single bit Deutsch problem. ", "_____no_output_____" ], [ "### 1b. Deutsch-Josza Algorithm <a id='djalgorithm'> </a>", "_____no_output_____" ], [ "#### Classical Solution\n\nClassically, in the best case, two queries to the oracle can determine if the hidden Boolean function, $f(x)$, is balanced: \ne.g. if we get both $f(0,0,0,... \\rightarrow 0)$ and $f(1,0,0,... \\rightarrow 1)$ we know the function is balanced as we have obtained the two different outputs. \n\nIn the worst case, if we continue to see the same output for each input we try, we will have to check exactly $2^{n-1}+1$ inputs to be certain that $f(x)$ is constant: \ne.g. for a $4$-bit string, if we checked $8$ out of the $16$ possible combinations, getting all $0$'s, it is still possible that the $9^\\textrm{th}$ input returns a $1$ and $f(x)$ is balanced. Probabilistically, this is a very unlikely event. In fact, if we get the same result continually in succession, we can express the probability that the function is constant as a function of $k$ inputs as:\n$$ P_\\textrm{constant}(k) = 1 - \\frac{1}{2^{k-1}} \\qquad \\textrm{for } k \\leq 2^{n-1}$$\nRealistically, we could opt to truncate our classical algorithm early, say if we were over x% confident. 
But if we want to be 100% confident, we would need to check $2^{n-1}+1$ inputs.", "_____no_output_____" ], [ "#### Quantum Solution\n\nUsing a quantum computer, we can solve this problem with 100% confidence after only one call to the function $f(x)$, provided we have the function $f$ implemented as a quantum oracle, which maps the state $\\vert x\\rangle \\vert y\\rangle $ to $ \\vert x\\rangle \\vert y \\oplus f(x)\\rangle$, where $\\oplus$ is addition modulo $2$. Below is the generic circuit for the Deutsh-Josza algorithm.\n\n<img src=\"images/deutsch_steps.png\" width=\"600\">\n\nNow, let's go through the steps of the algorithm:\n\n<ol>\n <li>\n Prepare two quantum registers. The first is an $n$-qubit reqister initialised to $\\vert 0 \\rangle$, and the second is a one-qubit register initialised to $\\vert 1\\rangle$:\n $$\\vert \\psi_0 \\rangle = \\vert0\\rangle^{\\otimes n} \\vert 1\\rangle$$\n </li>\n \n <li>\n Apply a Hadamard gate to each qubit:\n $$\\vert \\psi_1 \\rangle = \\frac{1}{\\sqrt{2^{n+1}}}\\sum_{x=0}^{2^n-1} \\vert x\\rangle \\vert 0\\rangle - \\vert 1 \\rangle$$\n </li>\n \n <li>\n Apply the quantum oracle $\\vert x\\rangle \\vert y\\rangle $ to $ \\vert x\\rangle \\vert y \\oplus f(x)\\rangle$:\n \\begin{aligned}\n \\lvert \\psi_2 \\rangle \n & = \\frac{1}{\\sqrt{2^{n+1}}}\\sum_{x=0}^{2^n-1} \\vert x\\rangle (\\vert f(x)\\rangle - \\vert 1 \\oplus f(x)\\rangle) \\\\ \n & = \\frac{1}{\\sqrt{2^{n+1}}}\\sum_{x=0}^{2^n-1}(-1)^{f(x)}|x\\rangle ( |0\\rangle - |1\\rangle ) \n \\end{aligned}\n since each $x,f(x)$ is either $0$ or $1$.\n </li>\n\n <li>\n At this point the second single qubit register may be ignored. Apply a Hadamard gate to each qubit in the first register:\n \\begin{aligned}\n \\lvert \\psi_3 \\rangle \n & = \\frac{1}{2^n}\\sum_{x=0}^{2^n-1}(-1)^{f(x)}\n \\left[ \\sum_{y=0}^{2^n-1}(-1)^{x \\cdot y} \n \\vert y \\rangle \\right] \\\\\n & = \\frac{1}{2^n}\\sum_{y=0}^{2^n-1}\n \\left[ \\sum_{x=0}^{2^n-1}(-1)^{f(x)}(-1)^{x \\cdot y} \\right]\n \\vert y \\rangle\n \\end{aligned}\n where $x \\cdot y = x_0y_0 \\oplus x_1y_1 \\oplus \\ldots \\oplus x_{n-1}y_{n-1}$ is the sum of the bitwise product.\n </li>\n\n <li>\n Measure the first register. Notice that the probability of measuring $\\vert 0 \\rangle ^{\\otimes n} = \\lvert \\frac{1}{2^n}\\sum_{x=0}^{2^n-1}(-1)^{f(x)} \\rvert^2$, which evaluates to $1$ if $f(x)$ is constant and $0$ if $f(x)$ is balanced. \n </li>\n\n</ol>\n\n**Why does this work?**\n\n$\\qquad$ When the hidden Boolean function is *constant*, the quantum states before and after querying the oracle are the same. The inverse of the Hadamard gate is the Hadamard gate itself. Thus, by Step 4, we essentially reverse Step 2 to obtain the initial quantum state of all-zero at the first register. \n\n$\\qquad$ When the hidden Boolean function is *balanced*, the quantum state after querying the oracle is orthogonal to the quantum state before querying the oracle. Thus, by Step 4, when reverting the operation, we must end up with a quantum state that is orthogonal to the initial quantum state of all-zero at the first register. This means we should never obtain the all-zero state. \n", "_____no_output_____" ], [ "##### Quantum Oracle\n\nThe key to the Deutsch-Josza Algorithm is the implementation of the quantum oracle. \n\nFor a constant function, it is simple:\n\n$\\qquad$ 1. if f(x) = 0, then apply the $I$ gate to the qubit in register 2. \n$\\qquad$ 2. 
if f(x) = 1, then apply the $X$ gate to the qubit in register 2.\n\nFor a balanced function, it is more complicated:\n\n$\\qquad$ There are $2^{n}-2$ different configurations for an $n$-qubit balanced function. These can be defined by one of the bitstrings from $1$ to $2^n-1$ inclusive. Given a particular bitstring, $a$, the oracle is the bitwise product of $x$ and $a$, which is implemented as a multi-qubit f-controlled-NOT gate with the second register, as per Reference [2]. ", "_____no_output_____" ], [ "## 2. Example <a id='example'></a>\n\nLet's go through a specfic example for a two bit balanced function with $a = 3$.\n\n<ol>\n <li> The first register of two qubits is initialized to zero and the second register qubit to one \n $$\\lvert \\psi_0 \\rangle = \\lvert 0 0 \\rangle_1 \\lvert 1 \\rangle_2 $$ \n </li>\n \n <li> Apply Hadamard on all qubits\n $$\\lvert \\psi_1 \\rangle = \\frac{1}{2} \\left( \\lvert 0 0 \\rangle_1 + \\lvert 0 1 \\rangle_1 + \\lvert 1 0 \\rangle_1 + \\lvert 1 1 \\rangle_1 \\right) \\frac{1}{\\sqrt{2}} \\left( \\lvert 0 \\rangle_2 - \\lvert 1 \\rangle_2 \\right) $$ \n </li>\n \n <li> For $a=3$, (11 in binary) the oracle function can be implemented as $\\text{Q}_f = CX_{1a}CX_{2a}$, \n \\begin{align*}\n \\lvert \\psi_2 \\rangle = \\frac{1}{2\\sqrt{2}} \\left[ \\lvert 0 0 \\rangle_1 \\left( \\lvert 0 \\oplus 0 \\oplus 0 \\rangle_2 - \\lvert 1 \\oplus 0 \\oplus 0 \\rangle_2 \\right) \\\\\n + \\lvert 0 1 \\rangle_1 \\left( \\lvert 0 \\oplus 0 \\oplus 1 \\rangle_2 - \\lvert 1 \\oplus 0 \\oplus 1 \\rangle_2 \\right) \\\\\n + \\lvert 1 0 \\rangle_1 \\left( \\lvert 0 \\oplus 1 \\oplus 0 \\rangle_2 - \\lvert 1 \\oplus 1 \\oplus 0 \\rangle_2 \\right) \\\\\n + \\lvert 1 1 \\rangle_1 \\left( \\lvert 0 \\oplus 1 \\oplus 1 \\rangle_2 - \\lvert 1 \\oplus 1 \\oplus 1 \\rangle_2 \\right) \\right]\n \\end{align*}\n </li>\n \n Thus\n \\begin{aligned}\n \\lvert \\psi_2 \\rangle & = \\frac{1}{2\\sqrt{2}} \\left[ \\lvert 0 0 \\rangle_1 \\left( \\lvert 0 \\rangle_2 - \\lvert 1 \\rangle_2 \\right) - \\lvert 0 1 \\rangle_1 \\left( \\lvert 0 \\rangle_2 - \\lvert 1 \\rangle_2 \\right) - \\lvert 1 0 \\rangle_1 \\left( \\lvert 0 \\rangle_2 - \\lvert 1 \\rangle_2 \\right) + \\lvert 1 1 \\rangle_1 \\left( \\lvert 0 \\rangle_2 - \\lvert 1 \\rangle_2 \\right) \\right] \\\\\n & = \\frac{1}{2} \\left( \\lvert 0 0 \\rangle_1 - \\lvert 0 1 \\rangle_1 - \\lvert 1 0 \\rangle_1 + \\lvert 1 1 \\rangle_1 \\right) \\frac{1}{\\sqrt{2}} \\left( \\lvert 0 \\rangle_2 - \\lvert 1 \\rangle_2 \\right) \\\\\n & = \\frac{1}{\\sqrt{2}} \\left( \\lvert 0 \\rangle_{10} - \\lvert 1 \\rangle_{10} \\right)\\frac{1}{\\sqrt{2}} \\left( \\lvert 0 \\rangle_{11} - \\lvert 1 \\rangle_{11} \\right)\\frac{1}{\\sqrt{2}} \\left( \\lvert 0 \\rangle_2 - \\lvert 1 \\rangle_2 \\right)\n \\end{aligned}\n </li>\n \n <li> Apply Hadamard on the first register\n $$ \\lvert \\psi_3\\rangle = \\lvert 1 \\rangle_{10} \\lvert 1 \\rangle_{11} \\left( \\lvert 0 \\rangle_2 - \\lvert 1 \\rangle_2 \\right) $$\n </li>\n \n <li> Measuring the first two qubits will give the non-zero $11$, indicating a balanced function.\n </li>\n</ol>\n", "_____no_output_____" ], [ "## 3. 
Qiskit Implementation <a id='implementation'></a>\n\nWe now implement the Deutsch-Josza algorithm for the example of a two bit balanced function with $a = 3$.", "_____no_output_____" ] ], [ [ "# initialization\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport numpy as np\n\n# importing Qiskit\nfrom qiskit import IBMQ, BasicAer\nfrom qiskit.providers.ibmq import least_busy\nfrom qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute\n\n# import basic plot tools\nfrom qiskit.tools.visualization import plot_histogram", "_____no_output_____" ], [ "# set the length of the $n$-bit string. \nn = 2\n\n# set the oracle, b for balanced, c for constant\noracle = \"b\"\n\n# if the oracle is balanced, set b\nif oracle == \"b\":\n b = 3 # np.random.randint(1,2**n) uncomment for a random value\n\n# if the oracle is constant, set c = 0 or 1 randomly.\nif oracle == \"c\":\n c = np.random.randint(2)", "_____no_output_____" ], [ "# Creating registers\n# n qubits for querying the oracle and one qubit for storing the answer\nqr = QuantumRegister(n+1)\ncr = ClassicalRegister(n)\n\ndjCircuit = QuantumCircuit(qr, cr)\nbarriers = True\n\n# Since all qubits are initialized to |0>, we need to flip the second register qubit to the the |1> state\ndjCircuit.x(qr[n])\n\n# Apply barrier \nif barriers:\n djCircuit.barrier()\n\n# Apply Hadamard gates to all qubits\ndjCircuit.h(qr) \n \n# Apply barrier \nif barriers:\n djCircuit.barrier()\n\n# Query the oracle \nif oracle == \"c\": # if the oracle is constant, return c\n if c == 1:\n djCircuit.x(qr[n])\n else:\n djCircuit.iden(qr[n])\nelse: # otherwise, the oracle is balanced and it returns the inner product of the input with b (non-zero bitstring) \n for i in range(n):\n if (b & (1 << i)):\n djCircuit.cx(qr[i], qr[n])\n\n# Apply barrier \nif barriers:\n djCircuit.barrier()\n\n# Apply Hadamard gates to the first register after querying the oracle\nfor i in range(n):\n djCircuit.h(qr[i])\n\n# Measure the first register\nfor i in range(n):\n djCircuit.measure(qr[i], cr[i])", "_____no_output_____" ], [ "djCircuit.draw(output='mpl')", "_____no_output_____" ] ], [ [ "### 3b. Experiment with Simulators <a id='simulation'></a>\n\nWe can run the above circuit on the simulator. ", "_____no_output_____" ] ], [ [ "# use local simulator\nbackend = BasicAer.get_backend('qasm_simulator')\nshots = 1024\nresults = execute(djCircuit, backend=backend, shots=shots).result()\nanswer = results.get_counts()\n\nplot_histogram(answer)", "_____no_output_____" ] ], [ [ "We can see that the result of the measurement is $11$ as expected.", "_____no_output_____" ], [ "### 3a. Experiment with Real Devices <a id='device'></a>\n\nWe can run the circuit on the real device as shown below.", "_____no_output_____" ] ], [ [ "# Load our saved IBMQ accounts and get the least busy backend device with less than or equal to 5 qubits\nIBMQ.load_accounts()\nIBMQ.backends()\nbackend = least_busy(IBMQ.backends(filters=lambda x: x.configuration().n_qubits <= 5 and \n not x.configuration().simulator and x.status().operational==True))\nprint(\"least busy backend: \", backend)", "least busy backend: ibmqx4\n" ], [ "# Run our circuit on the least busy backend. 
Monitor the execution of the job in the queue\nfrom qiskit.tools.monitor import job_monitor\n\nshots = 1024\njob = execute(djCircuit, backend=backend, shots=shots)\n\njob_monitor(job, interval = 2)", "Job Status: job has successfully run\n" ], [ "# Get the results of the computation\nresults = job.result()\nanswer = results.get_counts()\n\nplot_histogram(answer)", "_____no_output_____" ] ], [ [ "As we can see, most of the results are $11$. The other results are due to errors in the quantum computation. ", "_____no_output_____" ], [ "## 4. Problems <a id='problems'></a>\n\n1. The above [implementation](#implementation) of Deutsch-Josza is for a balanced function with a two bit input of 3. Modify the implementation for a constant function. Are the results what you expect? Explain.\n2. The above [implementation](#implementation) of Deutsch-Josza is for a balanced function with a two bit random input. Modify the implementation for a balanced function with a 4 bit input of 13. Are the results what you expect? Explain.", "_____no_output_____" ], [ "## 5. References <a id='references'></a>\n\n1. David Deutsch and Richard Jozsa (1992). \"Rapid solutions of problems by quantum computation\". Proceedings of the Royal Society of London A. 439: 553–558. [doi:10.1098/rspa.1992.0167](https://doi.org/10.1098%2Frspa.1992.0167).\n2. R. Cleve; A. Ekert; C. Macchiavello; M. Mosca (1998). \"Quantum algorithms revisited\". Proceedings of the Royal Society of London A. 454: 339–354. [doi:10.1098/rspa.1998.0164](https://doi.org/10.1098%2Frspa.1998.0164).", "_____no_output_____" ] ], [ [ "qiskit.__qiskit_version__", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ] ]
e711db20a9c7cff4440652f2087b900bfcba2d00
25,682
ipynb
Jupyter Notebook
timotion.ipynb
ahmadkammonah/ActuatorHunt
6eedde1373ed3bc1969aa4a4bf6c8c575a2a26c7
[ "MIT" ]
null
null
null
timotion.ipynb
ahmadkammonah/ActuatorHunt
6eedde1373ed3bc1969aa4a4bf6c8c575a2a26c7
[ "MIT" ]
null
null
null
timotion.ipynb
ahmadkammonah/ActuatorHunt
6eedde1373ed3bc1969aa4a4bf6c8c575a2a26c7
[ "MIT" ]
null
null
null
45.860714
121
0.513083
[ [ [ "import csv\nimport requests\nfrom bs4 import BeautifulSoup", "_____no_output_____" ], [ "site = \"https://www.timotion.com\"\npage = \"https://www.timotion.com/en/products/intro/linear-actuators/lists?guid=1481269298\"\n\nhtml = requests.get(page)\nif html.status_code==200:\n soup = BeautifulSoup(html.text, 'lxml')\n div = [div for div in soup.find_all('div',class_='product-text', href=True) if a.text]\n links = [a['href'] for a in soup.find_all('a', href=True) if a.text]\n links = links[78:120]", "_____no_output_____" ], [ "data", "_____no_output_____" ], [ "#for link in links:\ndata = []\n\nfor link in links:\n html = requests.get(site+link)\n if html.status_code==200:\n soup = BeautifulSoup(html.text, 'lxml')\n title = (soup.find('h1').text)\n specs = [spec.text for spec in (soup.find('ul', class_=\"dot\")).find_all('li')]\n\n specs.insert(0, title)\n data.append(specs)", "_____no_output_____" ], [ "with open(\"timotion.csv\", \"w+\") as fileWriter:\n wr = csv.writer(fileWriter)\n wr.writerows(data)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
e711e3e183fd775b4dd590cefdb3aca18db78d2d
29,258
ipynb
Jupyter Notebook
hw_sst.ipynb
manpreet2000/cs224u
4b9683dba06214a1beab2b7562830e396876b5e0
[ "Apache-2.0" ]
1
2021-04-14T09:07:19.000Z
2021-04-14T09:07:19.000Z
hw_sst.ipynb
manpreet2000/cs224u
4b9683dba06214a1beab2b7562830e396876b5e0
[ "Apache-2.0" ]
null
null
null
hw_sst.ipynb
manpreet2000/cs224u
4b9683dba06214a1beab2b7562830e396876b5e0
[ "Apache-2.0" ]
null
null
null
38.146023
486
0.618839
[ [ [ "# Homework and bake-off: Stanford Sentiment Treebank", "_____no_output_____" ] ], [ [ "__author__ = \"Christopher Potts\"\n__version__ = \"CS224u, Stanford, Fall 2020\"", "_____no_output_____" ] ], [ [ "## Contents\n\n1. [Overview](#Overview)\n1. [Methodological note](#Methodological-note)\n1. [Set-up](#Set-up)\n1. [A softmax baseline](#A-softmax-baseline)\n1. [RNNClassifier wrapper](#RNNClassifier-wrapper)\n1. [Error analysis](#Error-analysis)\n1. [Homework questions](#Homework-questions)\n 1. [Sentiment words alone [2 points]](#Sentiment-words-alone-[2-points])\n 1. [A more powerful vector-averaging baseline [2 points]](#A-more-powerful-vector-averaging-baseline-[2-points])\n 1. [Sentiment shifters [2 points]](#Sentiment-shifters-[2-points])\n 1. [Your original system [3 points]](#Your-original-system-[3-points])\n1. [Bake-off [1 point]](#Bake-off-[1-point])", "_____no_output_____" ], [ "## Overview\n\nThis homework and associated bake-off are devoted to the Stanford Sentiment Treebank (SST). The homework questions ask you to implement some baseline systems and some original feature functions, and the bake-off challenge is to define a system that does extremely well at the SST task.\n\nWe'll focus on the ternary task as defined by `sst.ternary_class_func` This isn't used in the literature but I think it is the best version of the SST problem for the reasons given [here](sst_01_overview.ipynb#Modeling-the-SST-labels).\n\nThe SST test set will be used for the bake-off evaluation. This dataset is already publicly distributed, so we are counting on people not to cheat by develping their models on the test set. You must do all your development without using the test set at all, and then evaluate exactly once on the test set and turn in the results, with no further system tuning or additional runs. __Much of the scientific integrity of our field depends on people adhering to this honor code__. \n\nOur only additional restriction is that you cannot use any of the subtree labels as input features. You can have your system learn to predict them (as intended), but no feature function can make use of them.\n\nOne of our goals for this homework and bake-off is to encourage you to engage in __the basic development cycle for supervised models__, in which you\n\n1. Write a new feature function. We recommend starting with something simple.\n1. Use `sst.experiment` to evaluate your new feature function, with at least `fit_softmax_classifier`.\n1. If you have time, compare your feature function with `unigrams_phi` using `sst.compare_models` or `utils.mcnemar`. (For discussion, see [this notebook section](sst_02_hand_built_features.ipynb#Statistical-comparison-of-classifier-models).)\n1. Return to step 1, or stop the cycle and conduct a more rigorous evaluation with hyperparameter tuning and assessment on the `dev` set.\n\n[Error analysis](#Error-analysis) is one of the most important methods for steadily improving a system, as it facilitates a kind of human-powered hill-climbing on your ultimate objective. Often, it takes a careful human analyst just a few examples to spot a major pattern that can lead to a beneficial change to the feature representations.", "_____no_output_____" ], [ "## Methodological note\n\nYou don't have to use the experimental framework defined below (based on `sst`). 
However, if you don't use `sst.experiment` as below, then make sure you're training only on `train`, evaluating on `dev`, and that you report with \n\n```\nfrom sklearn.metrics import classification_report\nclassification_report(y_dev, predictions)\n```\nwhere `y_dev = [y for tree, y in sst.dev_reader(class_func=sst.ternary_class_func)]`. We'll focus on the value at `macro avg` under `f1-score` in these reports.", "_____no_output_____" ], [ "## Set-up\n\nSee [the first notebook in this unit](sst_01_overview.ipynb#Set-up) for set-up instructions.", "_____no_output_____" ] ], [ [ "from collections import Counter\nfrom nltk.tree import Tree\nimport numpy as np\nimport os\nimport pandas as pd\nimport random\nfrom sklearn.linear_model import LogisticRegression\nimport sst\nimport torch.nn as nn\nfrom torch_rnn_classifier import TorchRNNClassifier\nfrom torch_tree_nn import TorchTreeNN\nimport utils", "_____no_output_____" ], [ "SST_HOME = os.path.join('data', 'trees')", "_____no_output_____" ] ], [ [ "## A softmax baseline\n\nThis example is here mainly as a reminder of how to use our experimental framework with linear models.", "_____no_output_____" ] ], [ [ "def unigrams_phi(tree):\n \"\"\"The basis for a unigrams feature function.\n\n Parameters\n ----------\n tree : nltk.tree\n The tree to represent.\n\n Returns\n -------\n Counter\n A map from strings to their counts in `tree`. (Counter maps a\n list to a dict of counts of the elements in that list.)\n\n \"\"\"\n return Counter(tree.leaves())", "_____no_output_____" ] ], [ [ "Thin wrapper around `LogisticRegression` for the sake of `sst.experiment`:", "_____no_output_____" ] ], [ [ "def fit_softmax_classifier(X, y):\n mod = LogisticRegression(\n fit_intercept=True,\n solver='liblinear',\n multi_class='ovr')\n mod.fit(X, y)\n return mod", "_____no_output_____" ] ], [ [ "The experimental run with some notes:", "_____no_output_____" ] ], [ [ "softmax_experiment = sst.experiment(\n SST_HOME,\n unigrams_phi, # Free to write your own!\n fit_softmax_classifier, # Free to write your own!\n train_reader=sst.train_reader, # Fixed by the competition.\n assess_reader=sst.dev_reader, # Fixed until the bake-off.\n class_func=sst.ternary_class_func) # Fixed by the bake-off rules.", "_____no_output_____" ] ], [ [ "`softmax_experiment` contains a lot of information that you can use for analysis; see [this section below](#Error-analysis) for starter code.", "_____no_output_____" ], [ "## RNNClassifier wrapper\n\nThis section illustrates how to use `sst.experiment` with `TorchRNNClassifier`. The same basic patterns hold for using `TorchTreeNN`; see [sst_03_neural_networks.ipynb](sst_03_neural_networks.ipynb) for additional discussion.", "_____no_output_____" ], [ "To featurize examples for an RNN, we just get the words in order, letting the model take care of mapping them into an embedding space.", "_____no_output_____" ] ], [ [ "def rnn_phi(tree):\n return tree.leaves()", "_____no_output_____" ] ], [ [ "The model wrapper gets the vocabulary using `sst.get_vocab`. If you want to use pretrained word representations in here, then you can have `fit_rnn_classifier` build that space too; see [this notebook section for details](sst_03_neural_networks.ipynb#Pretrained-embeddings). 
See also [torch_model_base.py](torch_model_base.py) for details on the many optimization parameters that `TorchRNNClassifier` accepts.", "_____no_output_____" ] ], [ [ "def fit_rnn_classifier(X, y):\n sst_glove_vocab = utils.get_vocab(X, mincount=2)\n mod = TorchRNNClassifier(\n sst_glove_vocab,\n early_stopping=True)\n mod.fit(X, y)\n return mod", "_____no_output_____" ], [ "rnn_experiment = sst.experiment(\n SST_HOME,\n rnn_phi,\n fit_rnn_classifier,\n vectorize=False, # For deep learning, use `vectorize=False`.\n assess_reader=sst.dev_reader)", "_____no_output_____" ] ], [ [ "## Error analysis\n\nThis section begins to build an error-analysis framework using the dicts returned by `sst.experiment`. These have the following structure:\n\n```\n'model': trained model\n'phi': the feature function used\n'train_dataset':\n 'X': feature matrix\n 'y': list of labels\n 'vectorizer': DictVectorizer,\n 'raw_examples': list of raw inputs, before featurizing \n'assess_dataset': same structure as the value of 'train_dataset'\n'predictions': predictions on the assessment data\n'metric': `score_func.__name__`, where `score_func` is an `sst.experiment` argument\n'score': the `score_func` score on the assessment data\n```\nThe following function just finds mistakes, and returns a `pd.DataFrame` for easy subsequent processing:", "_____no_output_____" ] ], [ [ "def find_errors(experiment):\n \"\"\"Find mistaken predictions.\n\n Parameters\n ----------\n experiment : dict\n As returned by `sst.experiment`.\n\n Returns\n -------\n pd.DataFrame\n\n \"\"\"\n raw_examples = experiment['assess_dataset']['raw_examples']\n raw_examples = [\" \".join(tree.leaves()) for tree in raw_examples]\n df = pd.DataFrame({\n 'raw_examples': raw_examples,\n 'predicted': experiment['predictions'],\n 'gold': experiment['assess_dataset']['y']})\n df['correct'] = df['predicted'] == df['gold']\n return df", "_____no_output_____" ], [ "softmax_analysis = find_errors(softmax_experiment)", "_____no_output_____" ], [ "rnn_analysis = find_errors(rnn_experiment)", "_____no_output_____" ] ], [ [ "Here we merge the sotmax and RNN experiments into a single DataFrame:", "_____no_output_____" ] ], [ [ "analysis = softmax_analysis.merge(\n rnn_analysis, left_on='raw_examples', right_on='raw_examples')\n\nanalysis = analysis.drop('gold_y', axis=1).rename(columns={'gold_x': 'gold'})", "_____no_output_____" ] ], [ [ "The following code collects a specific subset of examples; small modifications to its structure will give you different interesting subsets:", "_____no_output_____" ] ], [ [ "# Examples where the softmax model is correct, the RNN is not,\n# and the gold label is 'positive'\n\nerror_group = analysis[\n (analysis['predicted_x'] == analysis['gold'])\n &\n (analysis['predicted_y'] != analysis['gold'])\n &\n (analysis['gold'] == 'positive')\n]", "_____no_output_____" ], [ "error_group.shape[0]", "_____no_output_____" ], [ "for ex in error_group['raw_examples'].sample(5):\n print(\"=\"*70)\n print(ex)", "_____no_output_____" ] ], [ [ "## Homework questions\n\nPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.)", "_____no_output_____" ], [ "### Sentiment words alone [2 points]\n\nNLTK includes an easy interface to [Minqing Hu and Bing Liu's __Opinion Lexicon__](https://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html), which consists of a list of positive words and a list of negative words. 
How much of the ternary SST story does this lexicon tell?\n\nFor this problem, submit code to do the following:\n\n1. Create a feature function `op_unigrams_phi` on the model of `unigrams_phi` above, but filtering the vocabulary to just items that are members of the Opinion Lexicon. Submit this feature function. You can use `test_op_unigrams_phi` to check your work.\n\n1. Evaluate your feature function with `sst.experiment`, with all the same parameters as were used to create `softmax_experiment` in [A softmax baseline](#A-softmax-baseline) above, except of course for the feature function.\n\n1. Use `utils.mcnemar` to compare your feature function with the results in `softmax_experiment`. The information you need for this is in `softmax_experiment` and your own `sst.experiment` results. Submit your evaluation code. You can assume `softmax_experiment` is already in memory, but your code should create the other objects necessary for this comparison.", "_____no_output_____" ] ], [ [ "from nltk.corpus import opinion_lexicon\n\n# Use set for fast membership checking:\npositive = set(opinion_lexicon.positive())\nnegative = set(opinion_lexicon.negative())\n\ndef op_unigrams_phi(tree):\n pass\n ##### YOUR PART 1 CODE HERE\n\n\n##### YOUR PART 2 CODE HERE\n\n\n##### YOUR PART 3 CODE HERE\n\n", "_____no_output_____" ], [ "def test_op_unigrams_phi(func):\n tree = Tree.fromstring(\"\"\"(4 (2 NLU) (4 (2 is) (4 amazing)))\"\"\")\n expected = {\"enlightening\": 1}\n result = func(tree)\n assert result == expected, \\\n (\"Error for `op_unigrams_phi`: \"\n \"Got `{}` which differs from `expected` \"\n \"in `test_op_unigrams_phi`\".format(result))", "_____no_output_____" ], [ "test_op_unigrams_phi(op_unigrams_phi)", "_____no_output_____" ] ], [ [ "### A more powerful vector-averaging baseline [2 points]\n\nIn [Distributed representations as features](sst_03_neural_networks.ipynb#Distributed-representations-as-features), we looked at a baseline for the ternary SST problem in which each example is modeled as the sum of its GloVe representations. A `LogisticRegression` model was used for prediction. A neural network might do better with these representations, since there might be complex relationships between the input feature dimensions that a linear classifier can't learn. \n\nTo address this question, we want to get set up to run the experiment with a shallow neural classifier. Thus, your task is to write and submit a model wrapper function around `TorchShallowNeuralClassifier`. This function should implement hyperparameter search according to this specification:\n\n* Set `early_stopping=True` for all experiments.\n* Using 3-fold cross-validation, exhaustively explore this set of hyperparameter combinations:\n * The hidden dimensionality at 50, 100, and 200.\n * The hidden activation function as `nn.Tanh()` and `nn.ReLU()`.\n* For all other parameters to `TorchShallowNeuralClassifier`, use the defaults.\n\n\nSee [this notebook section](sst_02_hand_built_features.ipynb#Hyperparameter-search) for examples. You are not required to run a full evaluation with this function using `sst.experiment`, but we assume you will want to.\n\nWe're not evaluating the quality of your model. (We've specified the protocols completely, but there will still be variation in the results.) 
However, the primary goal of this question is to get you thinking more about this strong baseline feature representation scheme for SST, so we're sort of hoping you feel compelled to try out variations on your own.", "_____no_output_____" ] ], [ [ "from torch_shallow_neural_classifier import TorchShallowNeuralClassifier\n\ndef fit_shallow_neural_classifier_with_hyperparameter_search(X, y):\n pass\n ##### YOUR CODE HERE\n", "_____no_output_____" ] ], [ [ "### Sentiment shifters [2 points]", "_____no_output_____" ], [ "Some words have greater power than others to shift sentiment around. Because the SST has sentiment labels on all of its subconstituents, it provides an opportunity to study these shifts in detail. This question takes a first step in that direction by asking you to identify some of these sentiment shifters automatically.\n\nMore specifically, the task is to identify words that effect a particularly large shift between the value of their sibling node and the value of their mother node. For instance, in the tree", "_____no_output_____" ] ], [ [ "tree = Tree.fromstring(\n \"\"\"(1 (2 Astrology) (1 (2 is) (1 (2 not) (4 enlightening))))\"\"\")\n\ntree", "_____no_output_____" ] ], [ [ "we have the shifter calculations:\n \n* *not*: `1 - 4 = -3`\n* *enlightening*: `1 - 2 = -1`\n* *is*: `1 - 1 = 0`\n* *Astrology*: `1 - 1 = 0`.\n \n__Your task__: write a function `sentiment_shifters` that accepts a `tree` argument and returns a dict mapping words to their list of shifts in `tree`. You can then run `view_top_shifters` to see the results. In addition, you can use `test_sentiment_shifters` to test your function directly. It uses the above example as the basis for the test.\n\n__Tips__:\n\n* You'll probably want to use `tree.subtrees()` to inspect all of the subtrees in each tree.\n* `len(tree)` counts the number of children (immediate descendants) of `tree`.\n* `isinstance(subtree[0][0], str)` will test whether the left daughter of subtree has a lexical child.\n* `tree.label()` gives the label for any tree or subtree.\n* Your SST reader should use `replace_root_score=False` so that you keep the root node label.", "_____no_output_____" ] ], [ [ "from collections import defaultdict\nfrom operator import itemgetter\n\ndef sentiment_shifters(tree, diffs=defaultdict(list)):\n \"\"\"\n Calculates the shifts in `tree`.\n\n Parameters\n ----------\n tree : nltk.tree.Tree\n\n diffs: defaultdict(list)\n This accumulates the results for `tree`, and `view_top_shifters`\n accumulates all these results into a single dict.\n\n Returns\n -------\n defaultdict mapping words to their list of shifts in `tree`.\n\n \"\"\"\n pass\n ### YOUR CODE HERE", "_____no_output_____" ], [ "def test_sentiment_shifters(func):\n \"\"\"func should be `sentiment_shifters`\"\"\"\n tree = Tree.fromstring(\n \"\"\"(1 (2 Astrology) (1 (2 is) (1 (2 not) (4 enlightening))))\"\"\")\n expected = {\"not\": [-3], \"enlightening\": [-1], \"is\": [0], \"Astrology\": [0]}\n result = func(tree)\n assert result == expected, \\\n (\"Error for `sentiment_shifters`: \"\n \"Got\\n\\n\\t{}\\n\\nwhich differs from `expected` \"\n \"in `test_sentiment_shifters`\".format(result))", "_____no_output_____" ], [ "test_sentiment_shifters(sentiment_shifters)", "_____no_output_____" ] ], [ [ "The following utility will let you use `sentiment_shifters`. 
The resulting insights could inform new feature functions.", "_____no_output_____" ] ], [ [ "def view_top_shifters(top_n=10, mincount=100):\n diffs = defaultdict(list)\n for tree, label in sst.train_reader(SST_HOME) :\n these_diffs = sentiment_shifters(tree, diffs=diffs)\n diffs = {key: np.mean(vals) for key, vals in diffs.items()\n if len(vals) >= mincount}\n diffs = sorted(diffs.items(), key=itemgetter(1))\n segs = ((\"Negative\", diffs[:top_n]), (\"Positive\", diffs[-top_n:]))\n for label, seg in segs:\n print(\"\\nTop {} {} shifters:\\n\".format(top_n, label))\n for key, val in seg:\n print(key, val)\n\n\nview_top_shifters()", "_____no_output_____" ] ], [ [ "### Your original system [3 points]\n\nYour task is to develop an original model for the SST ternary problem, predicting only the root-level labels. There are many options. If you spend more than a few hours on this homework problem, you should consider letting it grow into your final project! Here are some relatively manageable ideas that you might try:\n\n1. We didn't systematically evaluate the `bidirectional` option to the `TorchRNNClassifier`. Similarly, that model could be tweaked to allow multiple LSTM layers (at present there is only one), and you could try adding layers to the classifier portion of the model as well.\n\n1. We've already glimpsed the power of rich initial word representations, and later in the course we'll see that smart initialization usually leads to a performance gain in NLP, so you could perhaps achieve a winning entry with a simple model that starts in a great place.\n\n1. Our [practical introduction to contextual word representations](contextualreps.ipynb) covers pretrained representations and interfaces that are likely to boost the performance of any system.\n\n1. The `TreeNN` and `TorchTreeNN` don't perform all that well, and this could be for the same reason that RNNs don't peform well: the gradient signal doesn't propagate reliably down inside very deep trees. [Tai et al. 2015](https://www.aclweb.org/anthology/P15-1150/) sought to address this with TreeLSTMs, which are fairly easy to implement in PyTorch.\n\nWe want to emphasize that this needs to be an __original__ system. It doesn't suffice to download code from the Web, retrain, and submit. You can build on others' code, but you have to do something new and meaningful with it.\n\nIn the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development, just to help us understand how systems performed overall.", "_____no_output_____" ] ], [ [ "# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:\n# 1) Textual description of your system.\n# 2) The code for your original system.\n# 3) The score achieved by your system in place of MY_NUMBER.\n# With no other changes to that line.\n# You should report your score as a decimal value <=1.0\n# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS\n\n# START COMMENT: Enter your system description in this cell.\n# My peak score was: MY_NUMBER\nif 'IS_GRADESCOPE_ENV' not in os.environ:\n pass\n\n# STOP COMMENT: Please do not remove this comment.", "_____no_output_____" ] ], [ [ "## Bake-off [1 point]\n\nAs we said above, the bake-off evaluation data is the official SST test-set release. 
For this bake-off, you'll evaluate your original system from the above homework problem on the test set, using the ternary class problem. Rules:\n\n1. Only one evaluation is permitted.\n1. No additional system tuning is permitted once the bake-off has started.\n\nThe cells below this one constitute your bake-off entry.\n\nSystems that enter will receive the additional homework point, and systems that achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.\n\nLate entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.\n\nThe announcement will include the details on where to submit your entry.", "_____no_output_____" ] ], [ [ "# Enter your bake-off assessment code in this cell.\n# Place your code in the scope of the 'IS_GRADESCOPE_ENV'\n# conditional.\n# Please do not remove this comment.\nif 'IS_GRADESCOPE_ENV' not in os.environ:\n pass\n # Please enter your code in the scope of the above conditional.\n ##### YOUR CODE HERE\n", "_____no_output_____" ], [ "# On an otherwise blank line in this cell, please enter\n# your macro-average F1 value as reported by the code above.\n# Please enter only a number between 0 and 1 inclusive.\n# Please do not remove this comment.\nif 'IS_GRADESCOPE_ENV' not in os.environ:\n pass\n # Please enter your score in the scope of the above conditional.\n ##### YOUR CODE HERE\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
e711f00cd231d0d7181f2ea54578f4279af7850c
740,152
ipynb
Jupyter Notebook
Financial_bubble_prediction_with_the_LPPL_model.ipynb
wpla/LPPLModel
0d9d0936cde8dcead0a8b6e5949eba3b78961f23
[ "MIT" ]
5
2020-04-11T05:53:15.000Z
2021-07-27T06:01:20.000Z
Financial_bubble_prediction_with_the_LPPL_model.ipynb
wpla/LPPLModel
0d9d0936cde8dcead0a8b6e5949eba3b78961f23
[ "MIT" ]
null
null
null
Financial_bubble_prediction_with_the_LPPL_model.ipynb
wpla/LPPLModel
0d9d0936cde8dcead0a8b6e5949eba3b78961f23
[ "MIT" ]
null
null
null
748.384226
573,272
0.931337
[ [ [ "# Financial bubble prediction using the LPPL model\n\n### Wolfgang Plaschg, [email protected]\n\nLPPL model\n\n$$ \\mathbb{E}[\\ln(p(t))] = A + B(t_c - t)^m + C(t_c - t)^m\\cos(\\omega\\ln(t_c - t) - \\phi) $$\n\nwith\n\n$$\n\\begin{align}\n0.1 \\leq{} m & \\leq 0.9 \\\\\n6 \\leq{} \\omega & \\leq 13 \\\\\n |C\\,| & \\le 1 \\\\\n B & \\le 0\n\\end{align}\n$$", "_____no_output_____" ], [ "A model example:\n\n![model example](https://i.imgur.com/VSSHKAA.png)", "_____no_output_____" ], [ "A data example:\n\n![data](https://i.imgur.com/lDiiSxW.png)", "_____no_output_____" ], [ "We want to fit a model for a given data:\n\n![model + data](https://i.imgur.com/WwOtMss.png)", "_____no_output_____" ], [ "The original model has 7 parameters $A$, $B$, $C$, $m$, $\\omega$, $\\phi$ and $t_c$ but can be reduced to a non-linear optimization problem in 3 variables:\n\n![alt text](https://i.imgur.com/z6WyzRv.png)", "_____no_output_____" ], [ "Example: Bitcoin bubble\n\n![alt text](https://i.imgur.com/zpt3RMK.png)", "_____no_output_____" ] ], [ [ "%pylab inline\nimport scipy\nimport pandas as pd\n\n# disable warnings\nnp.seterr(divide='ignore', invalid='ignore', over='ignore')", "Populating the interactive namespace from numpy and matplotlib\n" ], [ "DATA_SIZE = 400\nNOISE_FACTOR = 0.5\nCUTOFF = 0.8", "_____no_output_____" ] ], [ [ "## Generate a model and test data", "_____no_output_____" ] ], [ [ "# LPPL 4 factor model\n\ntc = 6\nm = np.random.uniform(0.1, 0.9) # 0.1 <= m <= 0.9\nomega = np.random.uniform(6, 13) # 6 <= omega <= 13\n\nC = abs(np.random.normal()) # |C| < 1\nB = np.random.uniform(-10, 0) # B < 0\nA = 200\nphi = 10\n\nt = np.linspace(0, tc, num=DATA_SIZE)\nline_data = A + B * (tc - t) ** m + C * (tc - t) ** m * np.cos(omega * np.log(tc - t) - phi)\nline_data_index = np.linspace(0, tc, len(line_data))\nlog_prices = [x + np.random.normal(0, NOISE_FACTOR) for x in line_data]\nlog_prices = log_prices[:int(DATA_SIZE * CUTOFF)]\nt_cutoff = t[:int(DATA_SIZE * CUTOFF)]\nfactor = 1 / max(t_cutoff)\nt_cutoff = t_cutoff * factor\nline_data_index = line_data_index * factor\nt = t * factor\ntc = max(t)\n\nsimulated_data = pd.Series(data=log_prices, index=t_cutoff)\n\nprint(\"tc: %.2f\" % tc)\nplot(simulated_data, '.')\n# plot(line_data_index, line_data, 'b-')", "tc: 1.25\n" ] ], [ [ "## Generate test data using geometric brownian motion", "_____no_output_____" ] ], [ [ "# x0 = start value, mu = drift, sigma = volatility\n\ndef make_gbm_data(x0=200, mu=0.8, sigma=0.6, data_size=DATA_SIZE):\n n = int(data_size * CUTOFF)\n dt = 1/n\n x = pd.DataFrame()\n t = np.linspace(0, 1, n)\n step = np.exp((mu - sigma**2 / 2) * dt) * np.exp(sigma * np.random.normal(0, np.sqrt(dt), (1, n)))\n return pd.Series(data = x0 * step.cumprod(), index=t)\n\ngbm_data = make_gbm_data()\n\nplot(gbm_data, '.')", "_____no_output_____" ], [ "# LPPL 3 factor model\n\nC1 = C * np.cos(phi)\nC2 = C * np.sin(phi)\n\nline_data = A + B * (tc - t) ** m + C1 * (tc - t) ** m * np.cos(omega * np.log(tc - t)) + \\\n C2 * (tc - t) ** m * np.sin(omega * np.log(tc - t))\nline_data_index = np.linspace(0, tc, len(line_data))\nlog_prices = [x + np.random.normal(0, NOISE_FACTOR) for x in line_data]\nlog_prices = log_prices[:int(DATA_SIZE * CUTOFF)]\nt_cutoff = t[:int(DATA_SIZE * CUTOFF)]\nfactor = 1 / max(t_cutoff)\nt_cutoff = t_cutoff * factor\nline_data_index = line_data_index * factor\nt = t * factor\ntc = max(t)\n\nsimulated_data = pd.Series(data=log_prices, index=t_cutoff)\n\nprint(\"tc: %.2f\" % tc)\nplot(simulated_data, 
'.')\nplot(line_data_index, line_data, 'b-')", "tc: 1.25\n" ], [ "# delete data points\nimport copy\ncutout_data = copy.deepcopy(simulated_data)\ndel_ival = []\nfor i in range(len(cutout_data.index)):\n ival = cutout_data.index[i]\n if ival >= 0.4 and ival <= 0.7:\n del_ival.append(ival)\nfor ival in del_ival:\n del cutout_data[ival]\n\nplot(cutout_data, '.')\nplot(line_data_index, line_data, 'b-')", "_____no_output_____" ], [ "def reduce_data(data, target_size):\n while len(data) > target_size:\n del_keys = np.random.choice(data.index, len(data.index) - target_size)\n removed = 0\n for k in del_keys:\n try:\n del data[k]\n removed += 1\n except:\n pass\nreduced_data = gbm_data.copy()\nreduce_data(reduced_data, 159)\nplot(gbm_data, '.')\nfigure()\nplot(reduced_data, '.')", "_____no_output_____" ], [ "x = gbm_data.values\n\nx1 = min(gbm_data.values)\nx2 = max(gbm_data.values)\nb = (x1 + x2) / (x1 - x2)\na = (-1 - b) / x1\nscaled = np.array(gbm_data.values) * a + b\n\ndata = line_data[:int(DATA_SIZE * CUTOFF)] + scaled * 1.1\ngbm_sim_data = pd.Series(data, index=t_cutoff)\n\nplot(gbm_sim_data.values, '.')\nplot(line_data, 'b-')", "_____no_output_____" ], [ "def F1_get_linear_parameters(X, stock_data):\n tc, m, omega = X\n \n t = np.array(stock_data.index)\n y = np.array(stock_data.values)\n \n N = len(stock_data)\n f = (tc - t) ** m\n g = (tc - t) ** m * np.cos(omega * np.log(tc - t))\n h = (tc - t) ** m * np.sin(omega * np.log(tc - t))\n \n LHS = np.array([[N, sum(f), sum(g), sum(h) ],\n [sum(f), sum(f**2), sum(f*g), sum(f*h) ],\n [sum(g), sum(f*g), sum(g**2), sum(g*h) ],\n [sum(h), sum(f*h), sum(g*h), sum(h**2)]])\n \n RHS = np.array([[sum(y)], \n [sum(y*f)],\n [sum(y*g)],\n [sum(y*h)]])\n \n A, B, C1, C2 = np.linalg.solve(LHS, RHS)\n return A, B, C1, C2\n\ndef F1(X, stock_data):\n tc, m, omega = X\n t = np.array(stock_data.index)\n y = np.array(stock_data.values)\n A, B, C1, C2 = F1_get_linear_parameters(X, stock_data)\n error = y - A - B * (tc - t) ** m - C1 * (tc - t) ** m * np.cos(omega * np.log(tc - t)) - \\\n C2 * (tc - t) ** m * np.sin(omega * np.log(tc - t))\n cost = sum(error ** 2)\n return cost \n\ndef F1_normalized(result, stock_data):\n x1 = min(stock_data.values)\n x2 = max(stock_data.values)\n b = (x1 + x2) / (x1 - x2)\n a = (-1 - b) / x1\n data = np.array(stock_data.values) * a + b\n stock_data_norm = pd.Series(data=data, index=stock_data.index)\n return F1(result, stock_data_norm)\n\nfrom scipy import optimize \n\nclass Result:\n def __init__(self):\n self.success = None\n\n # model parameters \n self.tc = None\n self.m = None\n self.omega = None\n self.A = None\n self.B = None\n self.C1 = None\n self.C2 = None\n self.C = None\n self.pruned = None # True if one of the parameters has been pruned to the \n # valid range after fitting.\n self.price_tc = None # Estimated price at tc\n self.price_chg = None # Price difference between est. 
price at tc and last price in percent\n\n self.mse = None # mean square error \n self.mse_hist = [] # history of mean square errors\n self.norm_mse = None # normalized mean square error\n self.opt_rv = None # Return object from optimize function\n self.tc_start = []\n self.m_start = []\n self.omega_start = []\n \ndef LPPL_fit(data, tries=20, min_distance=0.2):\n rv = Result()\n fitted_parameters = None\n mse_min = None\n fitted_pruned = False\n\n tc_min, tc_max = 1, 1.6 # Critical time\n m_min, m_max = 0.1, 0.5 # Convexity: smaller is more convex\n omega_min, omega_max = 6, 13 # Number of oscillations\n\n # Scaling parameters to scale tc, m and omega to range 0 .. 1\n\n tc_scale_b = tc_min / (tc_min - tc_max)\n tc_scale_a = -tc_scale_b / tc_min\n\n m_scale_b = m_min / (m_min - m_max)\n m_scale_a = -m_scale_b / m_min\n\n omega_scale_b = omega_min / (omega_min - omega_max)\n omega_scale_a = -omega_scale_b / omega_min\n\n for n in range(tries):\n\n found = False\n\n while not found:\n tc_start = numpy.random.uniform(low=tc_min, high=tc_max)\n m_start = numpy.random.uniform(low=m_min, high=m_max)\n omega_start = numpy.random.uniform(low=omega_min, high=omega_max)\n found = True\n\n for i in range(len(rv.tc_start)):\n # Scale values to range 0 .. 1 \n # Calculate distance and reject starting point if too close to \n # already used starting point\n a = np.array([tc_start * tc_scale_a + tc_scale_b,\n m_start * m_scale_a + m_scale_b,\n omega_start * omega_scale_a + omega_scale_b])\n b = np.array([rv.tc_start[i] * tc_scale_a + tc_scale_b, \n rv.m_start[i] * m_scale_a + m_scale_b, \n rv.omega_start[i] * omega_scale_a + omega_scale_b])\n distance = numpy.linalg.norm(a - b)\n if distance < min_distance:\n found = False\n # print(\"Points to close together: \", a, b)\n break\n\n rv.tc_start.append(tc_start)\n rv.m_start.append(m_start)\n rv.omega_start.append(omega_start)\n\n x0 = [tc_start, m_start, omega_start]\n\n try:\n opt_rv = optimize.minimize(F1, x0, args=(data,), method='Nelder-Mead') \n if opt_rv.success:\n\n tc_est, m_est, omega_est = opt_rv.x\n pruned = False\n\n if tc_est < tc_min:\n tc_est = tc_min\n pruned = True\n elif tc_est > tc_max:\n tc_est = tc_max\n pruned = True\n\n if m_est < m_min:\n m_est = m_min\n pruned = True\n elif m_est > m_max:\n m_est = m_max\n pruned = True\n\n if omega_est < omega_min:\n omega_est = omega_min\n pruned = True\n elif omega_est > omega_max:\n omega_est = omega_max\n pruned = True\n\n mse = F1([tc_est, m_est, omega_est], data)\n\n if mse_min is None or mse < mse_min:\n fitted_parameters = [tc_est, m_est, omega_est]\n fitted_pruned = pruned\n mse_min = mse\n rv.mse_hist.append(mse)\n else:\n rv.mse_hist.append(mse_min)\n except LinAlgError as e:\n # print(\"Exception occurred: \", e)\n pass\n\n if fitted_parameters is not None:\n rv.tc, rv.m, rv.omega = fitted_parameters\n rv.A, rv.B, rv.C1, rv.C2 = F1_get_linear_parameters(fitted_parameters, data)\n rv.C = abs(rv.C1) + abs(rv.C2)\n rv.price_tc = rv.A + rv.B * (0.001) ** rv.m + \\\n rv.C1 * (0.001) ** rv.m * np.cos(rv.omega * np.log(0.001)) + \\\n rv.C2 * (0.001) ** rv.m * np.sin(rv.omega * np.log(0.001))\n rv.price_chg = (rv.price_tc - data.iat[-1]) / data.iat[-1] * 100\n rv.pruned = fitted_pruned\n rv.mse = mse_min\n rv.norm_mse = F1_normalized(fitted_parameters, data) / len(data) * 1000\n rv.success = True\n return rv", "_____no_output_____" ], [ "data = gbm_data\ndata = simulated_data\n# data = gbm_sim_data\ndata = reduced_data\ndata = cutout_data\n\nrv = LPPL_fit(data)\n\nif rv.success:\n line_points 
= len(data.values) * rv.tc\n t_ = np.linspace(0, rv.tc, num=line_points)\n est_line_data = rv.A + rv.B * (rv.tc - t_) ** rv.m + \\\n rv.C1 * (rv.tc - t_) ** rv.m * np.cos(rv.omega * np.log(rv.tc - t_)) + \\\n rv.C2 * (rv.tc - t_) ** rv.m * np.sin(rv.omega * np.log(rv.tc - t_))\n est_line_data_index = np.linspace(0, rv.tc, len(est_line_data))\n\n price_tc = rv.A + rv.B * (0.001) ** rv.m + \\\n rv.C1 * (0.001) ** rv.m * np.cos(rv.omega * np.log(0.001)) + \\\n rv.C2 * (0.001) ** rv.m * np.sin(rv.omega * np.log(0.001))\n\n print()\n print(\"== RESULTS ==\")\n print(\" price tc: %.2f (%.2f)\" % (price_tc, est_line_data[-2]))\n print(\" tc: real value: % 8.2f estimation: % 8.2f\" % (tc, rv.tc))\n print(\" m: real value: % 8.2f estimation: % 8.2f\" % (m, rv.m))\n print(\"omega: real value: % 8.2f estimation: % 8.2f\" % (omega, rv.omega))\n print(\" A: real value: % 8.2f estimation: % 8.2f\" % (A, rv.A))\n print(\" B: real value: % 8.2f estimation: % 8.2f\" % (B, rv.B))\n print(\" C1: real value: % 8.2f estimation: % 8.2f\" % (C1, rv.C1))\n print(\" C2: real value: % 8.2f estimation: % 8.2f\" % (C2, rv.C2))\n print()\n print(\"== ERROR STATISTICS ==\")\n print(\"Mean square error: % 20.2f\" % (rv.mse,))\n print(\" MSE (normalized): % 20.2f\" % (rv.norm_mse,))\n\n plot(data.index, data.values, '.')\n plot(est_line_data_index, est_line_data, 'b-')\n title(\"MSE: %d, NMSE: %.2f, tc: %.2f\" % (rv.mse, rv.norm_mse, rv.tc))", "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:11: DeprecationWarning: object of type <class 'numpy.float64'> cannot be safely interpreted as an integer.\n # This is added back by InteractiveShellApp.init_path()\n" ] ], [ [ "# Cost function plots\nModel contraints\n$$\n\\begin{align}\n1 \\leq{} & t_c \\leq 4 \\\\\n0.1 \\leq{} & m \\leq 0.9 \\\\\n6 \\leq{} & \\omega \\leq 13 \n\\end{align}\n$$", "_____no_output_____" ] ], [ [ "# Calculate values\n\nMESH_SIZE = 100\nLEVELS = 50\n\ntc_est, m_est, omega_est = rv.tc, rv.m, rv.omega\ntc_ = np.linspace(1.0, 4.0, MESH_SIZE)\nm_ = np.linspace(0.1, 0.9, MESH_SIZE)\nomega_ = np.linspace(6, 13, MESH_SIZE)\n\nX1, Y1 = np.meshgrid(tc_, m_)\nZ1 = np.zeros((MESH_SIZE, MESH_SIZE))\nfor i in range(len(tc_)):\n for j in range(len(m_)):\n Z1[i, j] = F1([tc_[i], m_[j], omega_est], data)\n\nX2, Y2 = np.meshgrid(tc_, omega_)\nZ2 = np.zeros((MESH_SIZE, MESH_SIZE))\nfor i in range(len(tc_)):\n for j in range(len(omega_)):\n Z2[i, j] = F1([tc_[i], m_est, omega_[j]], data)\n\nX3, Y3 = np.meshgrid(m_, omega_)\nZ3 = np.zeros((MESH_SIZE, MESH_SIZE))\nfor i in range(len(m_)):\n for j in range(len(omega_)):\n Z3[i, j] = F1([tc_est, m_[i], omega_[j]], data)\n\n", "_____no_output_____" ], [ "from mpl_toolkits.mplot3d.axes3d import get_test_data\n# This import registers the 3D projection, but is otherwise unused.\nfrom mpl_toolkits.mplot3d import Axes3D # noqa: F401 unused import\n\n# Plot graph\n\nfig = plt.figure(figsize=(18., 30.))\n\n# Plot tc vs m\nax = fig.add_subplot(4, 2, 1)\nc = ax.contour(X1, Y1, Z1, LEVELS)\n# ax.plot(tc, m, 'ro', markersize=10)\n# ax.plot(tc_est, m_est, 'r*', markersize=15)\nax.set_xlabel(r\"$t_c$\", fontsize=18)\nax.set_ylabel(r\"m\", fontsize=18)\nax.plot(tc_est, m_est, 'r*', markersize=15)\nplt.colorbar(c, ax=ax)\n\n# 3D surface\nax = fig.add_subplot(4, 2, 5, projection='3d')\nsurf = ax.plot_surface(X1, Y1, Z1, cmap=cm.coolwarm, linewidth=0, antialiased=False)\n\n# Plot omega vs tc\nax = fig.add_subplot(4, 2, 3)\nc = ax.contour(X2, Y2, Z2, LEVELS)\n# ax.plot(tc, m, 'ro', markersize=10)\n# ax.plot(tc_est, m_est, 'r*', 
markersize=15)\nax.set_xlabel(r\"$t_c$\", fontsize=18)\nax.set_ylabel(r\"$\\omega$\", fontsize=18)\nax.plot(tc_est, omega_est, 'r*', markersize=15)\nplt.colorbar(c, ax=ax)\n\n# 3D surface\nax = fig.add_subplot(4, 2, 7, projection='3d')\nsurf = ax.plot_surface(X2, Y2, Z2, cmap=cm.coolwarm, linewidth=0, antialiased=False)\n\n# Plot omega vs m\nax = fig.add_subplot(4, 2, 4)\nc = ax.contour(X3, Y3, Z3, LEVELS)\n# ax.plot(tc, m, 'ro', markersize=10)\n# ax.plot(tc_est, m_est, 'r*', markersize=15)\nax.set_xlabel(r\"$m$\", fontsize=18)\nax.set_ylabel(r\"$\\omega$\", fontsize=18)\nax.plot(m_est, omega_est, 'r*', markersize=15)\nplt.colorbar(c, ax=ax)\n\n# 3D surface\nax = fig.add_subplot(4, 2, 8, projection='3d')\nsurf = ax.plot_surface(X3, Y3, Z3, cmap=cm.coolwarm, linewidth=0, antialiased=False)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e712008b6d4710312d4db20dee328ff84b166072
10,394
ipynb
Jupyter Notebook
jupyter-notebooks/data-api-tutorials/search_and_download_quickstart.ipynb
mapninja/notebooks
a3b535ea9a57d31c0d12c2881e837fd480517f9e
[ "Apache-2.0" ]
483
2017-05-16T16:41:57.000Z
2022-03-28T04:52:39.000Z
jupyter-notebooks/data-api-tutorials/search_and_download_quickstart.ipynb
mapninja/notebooks
a3b535ea9a57d31c0d12c2881e837fd480517f9e
[ "Apache-2.0" ]
111
2017-05-23T19:47:11.000Z
2022-03-30T11:00:18.000Z
jupyter-notebooks/data-api-tutorials/search_and_download_quickstart.ipynb
mapninja/notebooks
a3b535ea9a57d31c0d12c2881e837fd480517f9e
[ "Apache-2.0" ]
267
2017-07-18T16:17:07.000Z
2022-03-29T11:59:04.000Z
28.244565
295
0.565903
[ [ [ "# Getting started with the Data API", "_____no_output_____" ], [ "### **Let's search & download some imagery of farmland near Stockton, CA. Here are the steps we'll follow:**\n\n1. Define an Area of Interest (AOI)\n2. Save our AOI's coordinates to GeoJSON format\n3. Create a few search filters\n4. Search for imagery using those filters\n5. Activate an image for downloading\n6. Download an image", "_____no_output_____" ], [ "### Requirements\n- Python 2.7 or 3+\n- requests\n- A [Planet API Key](https://www.planet.com/account/#/)", "_____no_output_____" ], [ "## Define an Area of Interest", "_____no_output_____" ], [ "An **Area of Interest** (or *AOI*) is how we define the geographic \"window\" out of which we want to get data.\n\nFor the Data API, this could be a simple bounding box with four corners, or a more complex shape, as long as the definition is in [GeoJSON](http://geojson.org/) format. \n\nFor this example, let's just use a simple box. To make it easy, I'll use [geojson.io](http://geojson.io/) to quickly draw a shape & generate GeoJSON output for our box:", "_____no_output_____" ], [ "![geojsonio.png](images/geojsonio.png)", "_____no_output_____" ], [ "We only need the \"geometry\" object for our Data API request:", "_____no_output_____" ] ], [ [ "# Stockton, CA bounding box (created via geojson.io) \ngeojson_geometry = {\n \"type\": \"Polygon\",\n \"coordinates\": [\n [ \n [-121.59290313720705, 37.93444993515032],\n [-121.27017974853516, 37.93444993515032],\n [-121.27017974853516, 38.065932950547484],\n [-121.59290313720705, 38.065932950547484],\n [-121.59290313720705, 37.93444993515032]\n ]\n ]\n}", "_____no_output_____" ] ], [ [ "## Create Filters", "_____no_output_____" ], [ "Now let's set up some **filters** to further constrain our Data API search:", "_____no_output_____" ] ], [ [ "# get images that overlap with our AOI \ngeometry_filter = {\n \"type\": \"GeometryFilter\",\n \"field_name\": \"geometry\",\n \"config\": geojson_geometry\n}\n\n# get images acquired within a date range\ndate_range_filter = {\n \"type\": \"DateRangeFilter\",\n \"field_name\": \"acquired\",\n \"config\": {\n \"gte\": \"2016-08-31T00:00:00.000Z\",\n \"lte\": \"2016-09-01T00:00:00.000Z\"\n }\n}\n\n# only get images which have <50% cloud coverage\ncloud_cover_filter = {\n \"type\": \"RangeFilter\",\n \"field_name\": \"cloud_cover\",\n \"config\": {\n \"lte\": 0.5\n }\n}\n\n# combine our geo, date, cloud filters\ncombined_filter = {\n \"type\": \"AndFilter\",\n \"config\": [geometry_filter, date_range_filter, cloud_cover_filter]\n}", "_____no_output_____" ] ], [ [ "## Searching: Items and Assets", "_____no_output_____" ], [ "Planet's products are categorized as **items** and **assets**: an item is a single picture taken by a satellite at a certain time. Items have multiple asset types including the image in different formats, along with supporting metadata files.\n\nFor this demonstration, let's get a satellite image that is best suited for analytic applications; i.e., a 4-band image with spectral data for Red, Green, Blue and Near-infrared values. 
To get the image we want, we will specify an item type of `PSScene4Band`, and asset type `analytic`.\n\nYou can learn more about item & asset types in Planet's Data API [here](https://planet.com/docs/reference/data-api/items-assets/).\n\nNow let's search for all the items that match our filters:", "_____no_output_____" ] ], [ [ "import os\nimport json\nimport requests\nfrom requests.auth import HTTPBasicAuth\n\n# API Key stored as an env variable\nPLANET_API_KEY = os.getenv('PL_API_KEY')\n\n\nitem_type = \"PSScene4Band\"\n\n# API request object\nsearch_request = {\n \"item_types\": [item_type], \n \"filter\": combined_filter\n}\n\n# fire off the POST request\nsearch_result = \\\n requests.post(\n 'https://api.planet.com/data/v1/quick-search',\n auth=HTTPBasicAuth(PLANET_API_KEY, ''),\n json=search_request)\n\nprint(json.dumps(search_result.json(), indent=1))", "_____no_output_____" ] ], [ [ "Our search returns metadata for all of the images within our AOI that match our date range and cloud coverage filters. It looks like there are multiple images here; let's extract a list of just those image IDs:", "_____no_output_____" ] ], [ [ "# extract image IDs only\nimage_ids = [feature['id'] for feature in search_result.json()['features']]\nprint(image_ids)", "_____no_output_____" ] ], [ [ "Since we just want a single image, and this is only a demonstration, for our purposes here we can arbitrarily select the first image in that list. Let's do that, and get the `asset` list available for that image:", "_____no_output_____" ] ], [ [ "# For demo purposes, just grab the first image ID\nid0 = image_ids[0]\nid0_url = 'https://api.planet.com/data/v1/item-types/{}/items/{}/assets'.format(item_type, id0)\n\n# Returns JSON metadata for assets in this ID. Learn more: planet.com/docs/reference/data-api/items-assets/#asset\nresult = \\\n requests.get(\n id0_url,\n auth=HTTPBasicAuth(PLANET_API_KEY, '')\n )\n\n# List of asset types available for this particular satellite image\nprint(result.json().keys())\n", "_____no_output_____" ] ], [ [ " ## Activation and Downloading\n \nThe Data API does not pre-generate assets, so they are not always immediately availiable to download. In order to download an asset, we first have to **activate** it.\n\nRemember, earlier we decided we wanted a color-corrected image best suited for *analytic* applications. We can check the status of the analytic asset we want to download like so:\n ", "_____no_output_____" ] ], [ [ "# This is \"inactive\" if the \"analytic\" asset has not yet been activated; otherwise 'active'\nprint(result.json()['analytic']['status'])", "_____no_output_____" ] ], [ [ "Let's now go ahead and **activate** that asset for download:", "_____no_output_____" ] ], [ [ "# Parse out useful links\nlinks = result.json()[u\"analytic\"][\"_links\"]\nself_link = links[\"_self\"]\nactivation_link = links[\"activate\"]\n\n# Request activation of the 'analytic' asset:\nactivate_result = \\\n requests.get(\n activation_link,\n auth=HTTPBasicAuth(PLANET_API_KEY, '')\n )", "_____no_output_____" ] ], [ [ "At this point, we wait for the activation status for the asset we are requesting to change from `inactive` to `active`. 
We can monitor this by polling the \"status\" of the asset:", "_____no_output_____" ] ], [ [ "activation_status_result = \\\n requests.get(\n self_link,\n auth=HTTPBasicAuth(PLANET_API_KEY, '')\n )\n \nprint(activation_status_result.json()[\"status\"])", "_____no_output_____" ] ], [ [ "Once the asset has finished activating (status is \"active\"), we can download it. \n\n*Note: the download link on an active asset is temporary*", "_____no_output_____" ] ], [ [ "# Image can be downloaded by making a GET with your Planet API key, from here:\ndownload_link = activation_status_result.json()[\"location\"]\nprint(download_link)", "_____no_output_____" ] ], [ [ "![stockton_thumb.png](images/stockton_thumb.png)", "_____no_output_____" ], [ " ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e712102d259f879013b6aef48c1212d2dca2e596
146,678
ipynb
Jupyter Notebook
data_cleaning/Exploratory Data Analysis and Cleaning.ipynb
Gendo90/citi-bike-project
729d723b4cdb0651a633cc8c27448ef7015cff3e
[ "MIT" ]
null
null
null
data_cleaning/Exploratory Data Analysis and Cleaning.ipynb
Gendo90/citi-bike-project
729d723b4cdb0651a633cc8c27448ef7015cff3e
[ "MIT" ]
null
null
null
data_cleaning/Exploratory Data Analysis and Cleaning.ipynb
Gendo90/citi-bike-project
729d723b4cdb0651a633cc8c27448ef7015cff3e
[ "MIT" ]
null
null
null
66.370136
14,240
0.616193
[ [ [ "import pandas as pd\nimport datetime\nimport time\nfrom scipy.spatial import distance", "_____no_output_____" ], [ "test_df = pd.read_csv(\"../raw_data/202008-citibike-tripdata.csv\")\n\ntest_df = test_df.drop_duplicates()\ntest_df", "_____no_output_____" ], [ "test_df[\"rider_age\"] = test_df[\"birth year\"].apply(lambda x: datetime.datetime.now().year - x)\n\n\nprint(test_df[\"rider_age\"].min())\n#clear issues with maximum rider age...\nprint(test_df[\"rider_age\"].max())\ntest_df = test_df.loc[test_df[\"rider_age\"] <= 80]\n\n#confirm new max age\nprint(test_df[\"rider_age\"].max())\ntest_df", "16\n136\n80\n" ], [ "test_df[\"rider_age\"].value_counts()", "_____no_output_____" ], [ "rider_ages = test_df[\"rider_age\"].value_counts().sort_index()\nrider_ages", "_____no_output_____" ], [ "#some issues for data where ages are too high, maybe too many values (duplicates?) for age = 51\n#can compare to other data sets from previous months, too\nrider_ages.plot(kind=\"bar\")", "_____no_output_____" ], [ "rider_ages.plot()", "_____no_output_____" ], [ "print(test_df[\"usertype\"].value_counts())", "Subscriber 1671914\nCustomer 657600\nName: usertype, dtype: int64\n" ], [ "print(test_df.loc[test_df[\"rider_age\"] == 51][\"usertype\"].value_counts())", "Customer 272417\nSubscriber 38715\nName: usertype, dtype: int64\n" ], [ "print(test_df.loc[test_df[\"rider_age\"] == 51][\"start station name\"].value_counts())", "12 Ave & W 40 St 2135\nWest St & Chambers St 1912\nPier 40 - Hudson River Park 1490\nWest St & Liberty St 1488\nBroadway & W 60 St 1486\n ... \nE 133 St & Cypress Pl 6\nInwood Ave & W 170 St 6\nNelson Ave & W 172 St 4\nW 170 St & University Ave 4\nGrand Concourse & E 156 St 2\nName: start station name, Length: 1039, dtype: int64\n" ], [ "print(test_df.loc[test_df[\"rider_age\"] == 51][\"end station name\"].value_counts())", "12 Ave & W 40 St 2126\nWest St & Chambers St 2115\nPier 40 - Hudson River Park 1644\nChristopher St & Greenwich St 1549\nBroadway & W 60 St 1524\n ... \nGrove St PATH 1\nE 153 St & E 157 St 1\nColumbus Dr at Exchange Pl 1\nHeights Elevator 1\nHamilton Park 1\nName: end station name, Length: 1046, dtype: int64\n" ], [ "#test data to get approx. 
mph over ride duration - may have off values, probably be good to know\ncheck_dist = distance.cdist([[40.719586, -74.043117]], [[40.727596, -74.044247]], 'cityblock')\nprint(check_dist*69*3600/(384))\n\ndef getDist(row):\n start = [row[\"start station latitude\"], row[\"start station longitude\"]]\n end = [row[\"end station latitude\"], row[\"end station longitude\"]]\n sec = row[\"tripduration\"]\n total_coord_dist = distance.cdist([start], [end], 'cityblock')\n return (total_coord_dist*69*3600/(sec))[0][0]", "[[5.9124375]]\n" ], [ "#add average speed calculated column\ntest_df[\"avg_speed\"] = test_df.apply(lambda row: getDist(row), axis=1)\ntest_df", "_____no_output_____" ], [ "#show bike speed frequency breakdown - 0 just means that it was a round-trip (bike returned to start location)\nbike_speeds = test_df[\"avg_speed\"].sort_values()\nbike_speeds = bike_speeds.reset_index(drop=True)\nbike_speeds.plot(kind=\"hist\")", "_____no_output_____" ], [ "#need to make a histogram showing start times, too\nstart_times = test_df[\"starttime\"].sort_values()\nstart_times = start_times.reset_index(drop=True)\nplottable_start_times = pd.to_datetime(start_times)\naugust_time_df = pd.DataFrame({\"full_time\":plottable_start_times})\naugust_time_df[\"day\"] = august_time_df[\"full_time\"].map(lambda x: x.day)\naugust_time_df[\"day\"].plot(kind=\"hist\")", "_____no_output_____" ], [ "#weekends get more traffic (esp. Saturday) - likely from out of town/recreational use, probably not commuters\nday_freq = august_time_df[\"day\"].value_counts()\nday_freq = day_freq.sort_index()\nday_freq.plot(kind=\"bar\")", "_____no_output_____" ], [ "#see if the same approximate trends are apparent here, too - like the 51 year old data\n\ntest_df_2 = pd.read_csv(\"../raw_data/202007-citibike-tripdata.csv\")\n\ntest_df_2", "_____no_output_____" ], [ "def getRiderAge(df):\n df[\"rider_age\"] = df[\"birth year\"].apply(lambda x: datetime.datetime.now().year - x)\n \ndef getAvgSpeed(df):\n df[\"avg_speed\"] = df.apply(lambda row: getDist(row), axis=1)\n \n#start to make a cleaning/organizing function for new data frame imports\ndef addCols(df):\n getRiderAge(df)\n #remove all riders older than about 80 - not really a huge demographic for cylcing anyway\n getAvgSpeed(df)\n \naddCols(test_df_2)\n\nprint(test_df_2[\"rider_age\"].max())\ntest_df_2", "147\n" ], [ "#can see the same sort of chart, apparently there are a lot of 51 year olds that use this?\nrider_ages2 = test_df_2[\"rider_age\"].value_counts().sort_index()\nrider_ages2.plot()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]