hexsha (stringlengths 40–40) | size (int64 6–14.9M) | ext (stringclasses 1 value) | lang (stringclasses 1 value) | max_stars_repo_path (stringlengths 6–260) | max_stars_repo_name (stringlengths 6–119) | max_stars_repo_head_hexsha (stringlengths 40–41) | max_stars_repo_licenses (sequence) | max_stars_count (int64 1–191k ⌀) | max_stars_repo_stars_event_min_datetime (stringlengths 24–24 ⌀) | max_stars_repo_stars_event_max_datetime (stringlengths 24–24 ⌀) | max_issues_repo_path (stringlengths 6–260) | max_issues_repo_name (stringlengths 6–119) | max_issues_repo_head_hexsha (stringlengths 40–41) | max_issues_repo_licenses (sequence) | max_issues_count (int64 1–67k ⌀) | max_issues_repo_issues_event_min_datetime (stringlengths 24–24 ⌀) | max_issues_repo_issues_event_max_datetime (stringlengths 24–24 ⌀) | max_forks_repo_path (stringlengths 6–260) | max_forks_repo_name (stringlengths 6–119) | max_forks_repo_head_hexsha (stringlengths 40–41) | max_forks_repo_licenses (sequence) | max_forks_count (int64 1–105k ⌀) | max_forks_repo_forks_event_min_datetime (stringlengths 24–24 ⌀) | max_forks_repo_forks_event_max_datetime (stringlengths 24–24 ⌀) | avg_line_length (float64 2–1.04M) | max_line_length (int64 2–11.2M) | alphanum_fraction (float64 0–1) | cells (sequence) | cell_types (sequence) | cell_type_groups (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e7531bc29004df10550c4b2bd7f952956cfea5b9 | 1,656 | ipynb | Jupyter Notebook | sonstiges/DSP_Python_Matlab/15.11 DFT Therory.ipynb | thieleju/studium | f23db7c7d2c30a2f0095cfdd25a4944c39d80d82 | [
"MIT"
] | 2 | 2021-11-16T22:53:25.000Z | 2021-11-17T12:30:49.000Z | sonstiges/DSP_Python_Matlab/15.11 DFT Therory.ipynb | thieleju/studium | f23db7c7d2c30a2f0095cfdd25a4944c39d80d82 | [
"MIT"
] | 1 | 2022-02-23T18:56:51.000Z | 2022-02-23T19:09:20.000Z | sonstiges/DSP_Python_Matlab/15.11 DFT Therory.ipynb | thieleju/studium | f23db7c7d2c30a2f0095cfdd25a4944c39d80d82 | [
"MIT"
] | 1 | 2022-01-24T16:54:10.000Z | 2022-01-24T16:54:10.000Z | 21.230769 | 77 | 0.532609 | [
[
[
"# Discrete Fourier Transform\n### Fourier-Transform is the Link between Time and Frequency Domain\nTime Domain: s(t) | t <br>\nFrequency Domain: S(f) | f \n\n**continuous** Fourier-Transform <br>\n**discrete** Fourier-Transform (focus)\n\nm: frequency index <br>\nk: time index\n\n\n",
"_____no_output_____"
],
[
"### Time Domain to Frequency Domain\n\n",
"_____no_output_____"
],
[
"\n\n### Calculation\n",
"_____no_output_____"
],
[
"### Plot result\n",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e75327cb23d72751c0bbd9d941efec654e6aa1b6 | 356,377 | ipynb | Jupyter Notebook | Result_Analysis_dg_g.ipynb | HaTT2018/Deep_Gravity | b31285eec68723f252cf4aa859f13d7521912868 | [
"MIT"
] | null | null | null | Result_Analysis_dg_g.ipynb | HaTT2018/Deep_Gravity | b31285eec68723f252cf4aa859f13d7521912868 | [
"MIT"
] | null | null | null | Result_Analysis_dg_g.ipynb | HaTT2018/Deep_Gravity | b31285eec68723f252cf4aa859f13d7521912868 | [
"MIT"
] | null | null | null | 408.221077 | 210,164 | 0.930506 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport ipdb\nimport deep_gravity_utils as dgu",
"_____no_output_____"
]
],
[
[
"# Deep Gravity ",
"_____no_output_____"
]
],
[
[
"big_res_df1 = pd.DataFrame(columns=['cpc', 'nrmse', 'nmae', 'smape'])",
"_____no_output_____"
],
[
"OD = np.load('./data/3d_daily.npy').sum(axis=2)[:48, :48]\nOD_max = OD.max(axis=1).reshape(-1, 1)\nOD_max_pred = OD_max[-14:]\n\nlabels = np.load('./res/dg and g/labels.npy')[-14*48:]\nlabels = (labels.reshape(14, 48) * OD_max_pred).reshape(-1, 1)\n\nfor i in range(25):\n run = i + 1\n path = './runs/run%i/'%run\n pred_run = np.load(path+'pred.npy')[-14*48:]\n pred_run = (pred_run.reshape(14, 48) * OD_max_pred).reshape(-1, 1)\n cpc = dgu.get_CPC(pred_run, labels)\n nrmse = dgu.nrmse_loss_func(pred_run, labels, 0)\n nmae = dgu.nmae_loss_func(pred_run, labels, 0)\n smape = dgu.smape_loss_func(pred_run, labels, 0)\n big_res_df1.loc[run, :] = [cpc, nrmse, nmae, smape]\n ",
"_____no_output_____"
],
[
"big_res_df1.loc['mean', :] = big_res_df1.mean().values\nbig_res_df1.loc['std', :] = big_res_df1.std().values\n\nbig_res_df1.to_csv('./res/dg and g/results_dg_25_runs.csv')\nbig_res_df1.loc[['mean', 'std']]",
"_____no_output_____"
],
[
"OD = np.load('./data/3d_daily.npy').sum(axis=2)[:48, :48]\nOD_max = OD.max(axis=1).reshape(-1, 1)\nOD_max_pred = OD_max[-14:]",
"_____no_output_____"
],
[
"run = 11\npath = './runs/run%i/'%run\npred = np.load(path+'pred.npy')[-14*48:]\nlabels = np.load('./res/dg and g/labels.npy')[-14*48:]\n\npred = (pred.reshape(14, 48) * OD_max_pred).reshape(-1, 1)\nlabels = (labels.reshape(14, 48) * OD_max_pred).reshape(-1, 1)\n\nlabels_df = pd.DataFrame(labels).sort_values(by=0, ascending=False)\nl_ind = labels_df.index\nlabels_df.index = range(labels_df.shape[0])\npred_df = pd.DataFrame(pred).loc[l_ind]\npred_df.index = range(pred_df.shape[0])\n\nfig_res = plt.figure()\nax0 = fig_res.add_subplot(1, 1, 1)\nax0.plot(labels_df[0], '.', label='ground truth', ms=10)\nax0.plot(pred_df[0], '.', label='pred', ms=10)\nax0.set_xlabel('OD Pair', fontsize=12)\nax0.set_ylabel('# Annual Trips', fontsize=12)\nplt.xticks(fontsize=12)\nplt.yticks(fontsize=12)\nplt.legend(fontsize=12)\nplt.grid(ls='--')",
"_____no_output_____"
],
[
"m = 0.0\n\nprint('The mae loss is %.4f'%dgu.mae_loss_func(pred, labels, m))\nprint('The mape loss is %.4f'%dgu.mape_loss_func(pred, labels, m))\nprint('The smape loss is %.4f'%dgu.smape_loss_func(pred, labels, m))\nprint('The nrmse loss is %.4f'%dgu.nrmse_loss_func(pred, labels, m))\nprint('The nmae loss is %.4f'%dgu.nmae_loss_func(pred, labels, m))\nprint('The CPC is %.4f'%dgu.get_CPC(pred, labels))",
"The mae loss is 32711.9884\nThe mape loss is 4.7522\nThe smape loss is 0.8279\nThe nrmse loss is 0.0875\nThe nmae loss is 0.0458\nThe CPC is 0.7160\n"
]
],
[
[
"# Gravity",
"_____no_output_____"
]
],
[
[
"g_pred_df = pd.read_csv('./res/dg and g/g_pred.csv', index_col=0)\ng_pred = g_pred_df['0'].values[-14*48:].reshape(-1, 1)\ng_pred.shape",
"_____no_output_____"
],
[
"labels = np.load('./res/dg and g/labels.npy')[-14*48:]\n\nlabels = (labels.reshape(14, 48) * OD_max_pred).reshape(-1, 1)\n\nlabels_df = pd.DataFrame(labels).sort_values(by=0, ascending=False)\nl_ind = labels_df.index\nlabels_df.index = range(labels_df.shape[0])\ng_pred_df = pd.DataFrame(g_pred).loc[l_ind]\ng_pred_df.index = range(g_pred_df.shape[0])\n\nfig_res = plt.figure()\nax0 = fig_res.add_subplot(1, 1, 1)\nax0.plot(labels_df[0], '.', label='ground truth', ms=10)\nax0.plot(g_pred_df[0], '.', label='g_pred', ms=10)\nax0.set_xlabel('OD Pair', fontsize=12)\nax0.set_ylabel('# Annual Trips', fontsize=12)\nplt.xticks(fontsize=12)\nplt.yticks(fontsize=12)\nplt.legend(fontsize=12)\nplt.grid(ls='--')",
"_____no_output_____"
],
[
"m = 0.0\n\nprint('The mae loss is %.4f'%dgu.mae_loss_func(g_pred, labels, m))\nprint('The mape loss is %.4f'%dgu.mape_loss_func(g_pred, labels, m))\nprint('The smape loss is %.4f'%dgu.smape_loss_func(g_pred, labels, m))\nprint('The nrmse loss is %.4f'%dgu.nrmse_loss_func(g_pred, labels, m))\nprint('The nmae loss is %.4f'%dgu.nmae_loss_func(g_pred, labels, m))\nprint('The CPC is %.4f'%dgu.get_CPC(g_pred, labels))",
"The mae loss is 55596.9690\nThe mape loss is 17.3799\nThe smape loss is 1.1917\nThe nrmse loss is 0.1396\nThe nmae loss is 0.0778\nThe CPC is 0.3505\n"
]
],
[
[
"# Two models in one graph",
"_____no_output_____"
]
],
[
[
"run = 11\npath = './runs/run%i/'%run\npred = np.load(path+'pred.npy')[-14*48:]\nlabels = np.load('./res/dg and g/labels.npy')[-14*48:]\n\npred = (pred.reshape(14, 48) * OD_max_pred).reshape(-1, 1)\nlabels = (labels.reshape(14, 48) * OD_max_pred).reshape(-1, 1)\n\nlabels_df = pd.DataFrame(labels).sort_values(by=0, ascending=False)\nl_ind = labels_df.index\nlabels_df.index = range(labels_df.shape[0])\ng_pred_df = pd.DataFrame(g_pred).loc[l_ind]\ng_pred_df.index = range(g_pred_df.shape[0])\npred_df = pd.DataFrame(pred).loc[l_ind]\npred_df.index = range(pred_df.shape[0])\n\nms = 8\nfs = 14\nfig_res = plt.figure(figsize=[12, 3])\nax0 = fig_res.add_subplot(1, 2, 1)\nax0.plot(pred_df[0], '.', label='DG', ms=ms, c='red')\nax0.plot(labels_df[0], '.', label='Ground Truth', ms=ms, c='blue')\nax0.set_xlabel('Sorted OD Pair', fontsize=fs)\nax0.set_ylabel('Number of Annual Trips', fontsize=fs)\nplt.xticks(fontsize=fs)\nplt.yticks(fontsize=fs)\nax0.legend(fontsize=fs)\nax0.grid(ls='--')\n\nax1 = fig_res.add_subplot(1, 2, 2)\nax1.plot(g_pred_df[0], '.', label='GM', ms=ms, c='coral')\nax1.plot(labels_df[0], '.', label='Ground Truth', ms=ms, c='blue')\nax1.set_xlabel('Sorted OD Pair', fontsize=fs)\nax1.set_ylabel('Number of Annual Trips', fontsize=fs)\nplt.xticks(fontsize=fs)\nplt.yticks(fontsize=fs)\nax1.legend(fontsize=fs)\nax1.grid(ls='--')\n\nplt.tight_layout()\nfig_res.savefig('./res/dg and g/res.png', dpi=500)",
"_____no_output_____"
]
],
[
[
"# Spatial Visualization",
"_____no_output_____"
]
],
[
[
"import geopandas as gpd",
"_____no_output_____"
],
[
"data_X_all = gpd.read_file('./data/data_X_all.shp')\nstops = pd.read_csv('./data/stops_order.csv', index_col=0).iloc[:48, :]",
"_____no_output_____"
],
[
"data_X_all.head(2)",
"_____no_output_____"
],
[
"bart_coor = pd.read_csv('./data/station-coor.csv', index_col=0)\nbart_coor.head(2)",
"_____no_output_____"
],
[
"fs = 12\n\nfig = plt.figure(figsize=[20, 10], dpi=50)\nax0 = fig.add_subplot(133)\nplt.xticks(rotation=30, fontsize=fs)\nplt.yticks(fontsize=fs)\n# ax0.set_title('DG Generation')\nax1 = fig.add_subplot(131)\nplt.xticks(rotation=30, fontsize=fs)\nplt.yticks(fontsize=fs)\n# ax1.set_title('Ground Truth')\nax2 = fig.add_subplot(132)\nplt.xticks(rotation=30, fontsize=fs)\nplt.yticks(fontsize=fs)\n# ax2.set_title('GM Generation')\n\ndata_X_all.plot(column='TotPop', ax=ax0, cmap='RdPu')\ndata_X_all.plot(column='TotPop', ax=ax1, cmap='RdPu')\ndata_X_all.plot(column='TotPop', ax=ax2, cmap='RdPu')\n\n# plot trips\nnum_pred_stations = pred.shape[0] // 50 # 50 means that there are 50 stations in total\npred_stations = stops.iloc[-num_pred_stations:, 0]\n\nlow_lon_pred_dest = []\nlow_lat_pred_dest = []\nmid_lon_pred_dest = []\nmid_lat_pred_dest = []\nhigh_lon_pred_dest = []\nhigh_lat_pred_dest = []\nlow_lon_pred_origin = []\nlow_lat_pred_origin = []\nmid_lon_pred_origin = []\nmid_lat_pred_origin = []\nhigh_lon_pred_origin = []\nhigh_lat_pred_origin = []\n\nlow_lon_g_pred_dest = []\nlow_lat_g_pred_dest = []\nmid_lon_g_pred_dest = []\nmid_lat_g_pred_dest = []\nhigh_lon_g_pred_dest = []\nhigh_lat_g_pred_dest = []\nlow_lon_g_pred_origin = []\nlow_lat_g_pred_origin = []\nmid_lon_g_pred_origin = []\nmid_lat_g_pred_origin = []\nhigh_lon_g_pred_origin = []\nhigh_lat_g_pred_origin = []\n\nlow_lon_labels_dest = []\nlow_lat_labels_dest = []\nmid_lon_labels_dest = []\nmid_lat_labels_dest = []\nhigh_lon_labels_dest = []\nhigh_lat_labels_dest = []\nlow_lon_labels_origin = []\nlow_lat_labels_origin = []\nmid_lon_labels_origin = []\nmid_lat_labels_origin = []\nhigh_lon_labels_origin = []\nhigh_lat_labels_origin = []\n\nfor i in pred_stations.index:\n    station_dest = pred_stations.loc[i]\n    for j in stops.index:\n        station_origin = stops.loc[j, 'stop']\n        \n        lon_origin = bart_coor.loc[bart_coor['abbr']==station_origin, 'lon'].values[0]\n        lat_origin = bart_coor.loc[bart_coor['abbr']==station_origin, 'lat'].values[0]\n        lon_dest = bart_coor.loc[bart_coor['abbr']==station_dest, 'lon'].values[0]\n        lat_dest = bart_coor.loc[bart_coor['abbr']==station_dest, 'lat'].values[0]\n        \n        dest_ind = i-34\n        origin_ind = j+1\n        trip_ind = dest_ind*origin_ind - 1\n        \n        if pred[trip_ind]<0.3*labels.max():\n            low_lon_pred_dest.append(lon_dest)\n            low_lat_pred_dest.append(lat_dest)\n            low_lon_pred_origin.append(lon_origin)\n            low_lat_pred_origin.append(lat_origin)\n#             ax0.plot([lon_origin, lon_dest], [lat_origin, lat_dest], 'b', lw=0.1)\n        elif pred[trip_ind]>=0.3*labels.max() and pred[trip_ind]<0.6*labels.max():\n            mid_lon_pred_dest.append(lon_dest)\n            mid_lat_pred_dest.append(lat_dest)\n            mid_lon_pred_origin.append(lon_origin)\n            mid_lat_pred_origin.append(lat_origin)\n#             ax0.plot([lon_origin, lon_dest], [lat_origin, lat_dest], 'red', lw=0.6*labels.max())\n        elif pred[trip_ind]>=0.6*labels.max():\n            high_lon_pred_dest.append(lon_dest)\n            high_lat_pred_dest.append(lat_dest)\n            high_lon_pred_origin.append(lon_origin)\n            high_lat_pred_origin.append(lat_origin)\n#             ax0.plot([lon_origin, lon_dest], [lat_origin, lat_dest], 'yellow', lw=2)\n        \n        if g_pred[trip_ind]<0.3*labels.max():\n            low_lon_g_pred_dest.append(lon_dest)\n            low_lat_g_pred_dest.append(lat_dest)\n            low_lon_g_pred_origin.append(lon_origin)\n            low_lat_g_pred_origin.append(lat_origin)\n#             ax0.plot([lon_origin, lon_dest], [lat_origin, lat_dest], 'b', lw=0.1)\n        elif g_pred[trip_ind]>=0.3*labels.max() and pred[trip_ind]<0.6*labels.max():\n            mid_lon_g_pred_dest.append(lon_dest)\n            mid_lat_g_pred_dest.append(lat_dest)\n            mid_lon_g_pred_origin.append(lon_origin)\n            mid_lat_g_pred_origin.append(lat_origin)\n#             ax0.plot([lon_origin, lon_dest], [lat_origin, lat_dest], 'red', lw=0.6*labels.max())\n        elif g_pred[trip_ind]>=0.6*labels.max():\n            high_lon_g_pred_dest.append(lon_dest)\n            high_lat_g_pred_dest.append(lat_dest)\n            high_lon_g_pred_origin.append(lon_origin)\n            high_lat_g_pred_origin.append(lat_origin)\n#             ax0.plot([lon_origin, lon_dest], [lat_origin, lat_dest], 'yellow', lw=2)\n        \n        if labels[trip_ind]<0.3*labels.max():\n            low_lon_labels_dest.append(lon_dest)\n            low_lat_labels_dest.append(lat_dest)\n            low_lon_labels_origin.append(lon_origin)\n            low_lat_labels_origin.append(lat_origin)\n#             ax1.plot([lon_origin, lon_dest], [lat_origin, lat_dest], 'b', lw=0.1)\n        elif labels[trip_ind]>=0.3*labels.max() and labels[trip_ind]<0.6*labels.max():\n            mid_lon_labels_dest.append(lon_dest)\n            mid_lat_labels_dest.append(lat_dest)\n            mid_lon_labels_origin.append(lon_origin)\n            mid_lat_labels_origin.append(lat_origin)\n#             ax1.plot([lon_origin, lon_dest], [lat_origin, lat_dest], 'red', lw=0.6*labels.max())\n        elif labels[trip_ind]>=0.6*labels.max():\n            high_lon_labels_dest.append(lon_dest)\n            high_lat_labels_dest.append(lat_dest)\n            high_lon_labels_origin.append(lon_origin)\n            high_lat_labels_origin.append(lat_origin)\n#             ax1.plot([lon_origin, lon_dest], [lat_origin, lat_dest], 'yellow', lw=2)\n\n# dg pred\nfor i in range(len(low_lon_pred_dest)):\n    ax0.plot([low_lon_pred_origin[i], low_lon_pred_dest[i]], [low_lat_pred_origin[i], low_lat_pred_dest[i]], 'b', lw=0.3)\n    \nfor i in range(len(mid_lon_pred_dest)):\n    ax0.plot([mid_lon_pred_origin[i], mid_lon_pred_dest[i]], [mid_lat_pred_origin[i], mid_lat_pred_dest[i]], 'red', lw=1)\n    \nfor i in range(len(high_lon_pred_dest)):\n    ax0.plot([high_lon_pred_origin[i], high_lon_pred_dest[i]], [high_lat_pred_origin[i], high_lat_pred_dest[i]], 'yellow', lw=1.5)\n\n# gm pred\nfor i in range(len(low_lon_g_pred_dest)):\n    ax2.plot([low_lon_g_pred_origin[i], low_lon_g_pred_dest[i]], [low_lat_g_pred_origin[i], low_lat_g_pred_dest[i]], 'b', lw=0.3)\n    \nfor i in range(len(mid_lon_g_pred_dest)):\n    ax2.plot([mid_lon_g_pred_origin[i], mid_lon_g_pred_dest[i]], [mid_lat_g_pred_origin[i], mid_lat_g_pred_dest[i]], 'red', lw=1)\n    \nfor i in range(len(high_lon_g_pred_dest)):\n    ax2.plot([high_lon_g_pred_origin[i], high_lon_g_pred_dest[i]], [high_lat_g_pred_origin[i], high_lat_g_pred_dest[i]], 'yellow', lw=1.5)\n\n# labels\nfor i in range(len(low_lon_labels_dest)):\n    ax1.plot([low_lon_labels_origin[i], low_lon_labels_dest[i]], [low_lat_labels_origin[i], low_lat_labels_dest[i]], 'b', lw=0.3)\n    \nfor i in range(len(mid_lon_labels_dest)):\n    ax1.plot([mid_lon_labels_origin[i], mid_lon_labels_dest[i]], [mid_lat_labels_origin[i], mid_lat_labels_dest[i]], 'red', lw=1)\n    \nfor i in range(len(high_lon_labels_dest)):\n    ax1.plot([high_lon_labels_origin[i], high_lon_labels_dest[i]], [high_lat_labels_origin[i], high_lat_labels_dest[i]], 'yellow', lw=1.5)\n\n\n\n# plot stations\nfor i in bart_coor.index:\n    lon = bart_coor.loc[i, 'lon']\n    lat = bart_coor.loc[i, 'lat']\n    ax0.plot(lon, lat, 'r.')\n    ax1.plot(lon, lat, 'r.')\n    ax2.plot(lon, lat, 'r.')\n\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7532998353cbe9c456db11e18791703da306d22 | 170,191 | ipynb | Jupyter Notebook | Pymaceuticals/pymaceuticals_starter.ipynb | MBoerenko/Matplotlib-Challenge | e2ec7dd75f0b43c7e5922f9822c3ea45fbb2ee72 | [
"MIT"
] | null | null | null | Pymaceuticals/pymaceuticals_starter.ipynb | MBoerenko/Matplotlib-Challenge | e2ec7dd75f0b43c7e5922f9822c3ea45fbb2ee72 | [
"MIT"
] | null | null | null | Pymaceuticals/pymaceuticals_starter.ipynb | MBoerenko/Matplotlib-Challenge | e2ec7dd75f0b43c7e5922f9822c3ea45fbb2ee72 | [
"MIT"
] | null | null | null | 111.967763 | 19,608 | 0.812804 | [
[
[
"## Observations and Insights ",
"_____no_output_____"
]
],
[
[
"#Look across all previously generated figures and tables and write at least three observations or inferences that \n#can be made from the data. Include these observations at the top of the notebook.\n\n#1. Two of the drugs with more promising results also had the most mice tested.\n#2. There were very few outliers in the data which adds credence to the data since the outliers can skew the \n    #results in the direction of the outliers.\n#3. While the results of Capomulin had a significant effect on the tumor size over 45 days, the mouse's weight appeared\n    #to have an adverse effect and would require more study.",
"_____no_output_____"
],
[
"# Dependencies and Setup\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport scipy.stats as st\nimport numpy as np\nfrom scipy.stats import linregress\n\n# Import data files\nmouse_metadata_path = \"data/Mouse_metadata.csv\"\nstudy_results_path = \"data/Study_results.csv\"\n\n# Read the mouse data and the study results\nmouse_metadata = pd.read_csv(mouse_metadata_path)\nstudy_results = pd.read_csv(study_results_path)\n\n# Combine the data into a single dataset. \nstudy_complete = pd.merge(mouse_metadata, study_results, how=\"left\", on=[\"Mouse ID\", \"Mouse ID\"])\n\n# Display the data table for preview\nstudy_complete",
"_____no_output_____"
],
[
"# Checking the number of mice.\nstudy_complete[\"Mouse ID\"].count()",
"_____no_output_____"
],
[
"# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint. \nduplicateDFRow=study_complete[study_complete.duplicated([\"Mouse ID\",\"Timepoint\"])]",
"_____no_output_____"
],
[
"# Optional: Get all the data for the duplicate mouse ID. \nprint(duplicateDFRow)",
" Mouse ID Drug Regimen Sex Age_months Weight (g) Timepoint \\\n909 g989 Propriva Female 21 26 0 \n911 g989 Propriva Female 21 26 5 \n913 g989 Propriva Female 21 26 10 \n915 g989 Propriva Female 21 26 15 \n917 g989 Propriva Female 21 26 20 \n\n Tumor Volume (mm3) Metastatic Sites \n909 45.000000 0 \n911 47.570392 0 \n913 49.880528 0 \n915 53.442020 0 \n917 54.657650 1 \n"
],
[
"# Create a clean DataFrame by dropping the duplicate mouse by its ID.\nclean_df=study_complete.drop_duplicates(subset=[\"Mouse ID\", \"Timepoint\"], keep='first')",
"_____no_output_____"
],
[
"# Checking the number of mice in the clean DataFrame.\nclean_df[\"Mouse ID\"].count()",
"_____no_output_____"
]
],
[
[
"## Summary Statistics",
"_____no_output_____"
]
],
[
[
"# A summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen\nmice_count = clean_df.groupby(['Drug Regimen']).count()['Mouse ID']\nmean = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].mean()\nmedian = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].median()\nvariance = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].var()\nstdv = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].std()\nsem = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].sem()\nsummary_df = pd.DataFrame({'Mice Count': mice_count,\"Mean\": mean, \"Median\": median, \"Var.\": variance, \"Std Dev\": stdv, \"SEM\": sem})\n\nsummary_df",
"_____no_output_____"
]
],
[
[
"## Bar and Pie Charts",
"_____no_output_____"
]
],
[
[
"# A bar plot showing the total number of mice for each treatment throughout the course of the study using pandas. \nsummary_df.plot(y='Mice Count', kind='bar', color = 'blue')\nplt.title(\"Mice Count per Drug Treatment\")\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
],
[
"# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.\nplt.bar(summary_df.index, summary_df[\"Mice Count\"], color='b', alpha=0.5, align='center')\nplt.xticks(rotation = 'vertical')\nplt.xlabel(\"Drug Regimen\")\nplt.ylabel(\"Count of Mice\")\nplt.title(\"Mice count per Drug Regimen\")\nplt.ylim(0, max(summary_df[\"Mice Count\"])+10)\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
],
[
"# Create a dataframe of Mouse IDs grouped by gender\ngender_df = clean_df.groupby(['Sex']).count()\ncount_df = gender_df.rename(columns = {'Mouse ID':'Mice Count'})",
"_____no_output_____"
],
[
"# Generate a pie plot showing the distribution of female versus male mice using pandas\ncolors = [\"pink\", \"lightblue\"]\nexplode = (0, 0.1)\ncount_df.plot(kind = \"pie\", y=\"Mice Count\", figsize= (4,4), startangle = 45, explode=explode, legend = False, colors=colors, autopct=\"%1.1f%%\")\nplt.title('Female vs Male Mice')\nplt.show() ",
"_____no_output_____"
],
[
"# Generate a pie plot showing the distribution of female versus male mice using pyplot\nplt.title(\"Female vs Male Mice\")\nexplode = (0, 0.1)\ncolors = [\"pink\", \"lightblue\"]\nplt.pie(count_df['Mice Count'], explode=explode, labels=count_df.index, colors=colors,\n autopct=\"%1.1f%%\", shadow=True, startangle=90)\nplt.axis(\"equal\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Quartiles, Outliers and Boxplots",
"_____no_output_____"
]
],
[
[
"# Calculate the final tumor volume of each mouse across four of the treatment regimens: \n# Capomulin, Ramicane, Infubinol, and Ceftamin\n\n# Start by getting the last (greatest) timepoint for each mouse \nmax_df = clean_df.groupby('Mouse ID')['Timepoint'].max()\n\n# Merge this group df with the original dataframe to get the tumor volume at the last timepoint\nmerged_df = pd.merge(max_df, clean_df, how=\"left\", on=\"Timepoint\")\nmerged_df",
"_____no_output_____"
],
[
"#Group the tumor volumes by drug \"Capomulin\"\ncapomulin_data_df = merged_df[merged_df['Drug Regimen'].isin(['Capomulin'])].sort_values(by='Tumor Volume (mm3)')\ntumors_capomulin = capomulin_data_df['Tumor Volume (mm3)']\ntumors_capomulin",
"_____no_output_____"
],
[
"# Calculate the IQR for 'Capomulin' and quantitatively determine if there are any potential outliers.\nquartiles = tumors_capomulin.quantile([.25,.5,.75])\nlowerq = quartiles[0.25]\nupperq = quartiles[0.75]\niqr = upperq-lowerq\n\nlower_bound = lowerq - (1.5*iqr)\nupper_bound = upperq + (1.5*iqr)\nprint(f\"Values below {lower_bound} could be outliers.\")\nprint(f\"Values above {upper_bound} could be outliers.\")",
"Values below 23.818219883749997 could be outliers.\nValues above 54.87014197375 could be outliers.\n"
],
[
"#Group the tumor volumes by drug \"Ramicane\"\nRamicane_data_df = merged_df[merged_df['Drug Regimen'].isin(['Ramicane'])].sort_values(by='Tumor Volume (mm3)')\ntumors_ramicane = Ramicane_data_df['Tumor Volume (mm3)']\ntumors_ramicane",
"_____no_output_____"
],
[
"# Calculate the IQR for \"Ramicane\" and quantitatively determine if there are any potential outliers.\nquartiles = tumors_ramicane.quantile([.25,.5,.75])\nlowerq = quartiles[0.25]\nupperq = quartiles[0.75]\niqr = upperq-lowerq\nlower_bound = lowerq - (1.5*iqr)\nupper_bound = upperq + (1.5*iqr)\nprint(f\"Values below {lower_bound} could be outliers.\")\nprint(f\"Values above {upper_bound} could be outliers.\")",
"Values below 19.334691524999986 could be outliers.\nValues above 57.275253245000016 could be outliers.\n"
],
[
"#Group the tumor volumes by drug 'Infubinol'\ninfubinol_data_df = merged_df[merged_df['Drug Regimen'].isin(['Infubinol'])].sort_values(by='Tumor Volume (mm3)')\ntumors_infubinol = infubinol_data_df['Tumor Volume (mm3)']\ntumors_infubinol",
"_____no_output_____"
],
[
"# Calculate the IQR for 'Infubinol' and quantitatively determine if there are any potential outliers.\nquartiles = tumors_infubinol.quantile([.25,.5,.75])\nlowerq = quartiles[0.25]\nupperq = quartiles[0.75]\niqr = upperq-lowerq\nlower_bound = lowerq - (1.5*iqr)\nupper_bound = upperq + (1.5*iqr)\nprint(f\"Values below {lower_bound} could be outliers.\")\nprint(f\"Values above {upper_bound} could be outliers.\")\n",
"Values below 27.255846989999984 could be outliers.\nValues above 86.26845163000002 could be outliers.\n"
],
[
"#Group the tumor volumes by drug 'Ceftamin'\nceftamin_data_df = merged_df[merged_df['Drug Regimen'].isin(['Ceftamin'])].sort_values(by='Tumor Volume (mm3)')\ntumors_ceftamin = ceftamin_data_df['Tumor Volume (mm3)']\ntumors_ceftamin",
"_____no_output_____"
],
[
"# Calculate the IQR for 'Ceftamin' and quantitatively determine if there are any potential outliers.\nquartiles = tumors_ceftamin.quantile([.25,.5,.75])\nlowerq = quartiles[0.25]\nupperq = quartiles[0.75]\niqr = upperq-lowerq\nlower_bound = lowerq - (1.5*iqr)\nupper_bound = upperq + (1.5*iqr)\nprint(f\"Values below {lower_bound} could be outliers.\")\nprint(f\"Values above {upper_bound} could be outliers.\")",
"Values below 28.31287333499999 could be outliers.\nValues above 84.56355513500002 could be outliers.\n"
],
[
"# Generate outlier plot of final total volume\nlabels = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin'] \nfig1, ax = plt.subplots()\nax.set_title('Final Tumor Volume in Drug Regimen')\nax.set_xlabel('Drug Regimen')\nax.set_ylabel('Final Tumor Volume (mm3)')\nplt.ylim(15, 90)\ngreen_diamond = dict(markerfacecolor='g', marker='D')\nax.boxplot([tumors_capomulin,tumors_ramicane, tumors_infubinol, tumors_ceftamin],labels = labels,patch_artist=True,flierprops=green_diamond)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Line and Scatter Plots",
"_____no_output_____"
]
],
[
[
"capomulin_data_df",
"_____no_output_____"
],
[
"# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin\nCapomulin_line_df = merged_df.loc[merged_df['Mouse ID']=='s185',:].sort_values(by=\"Timepoint\", ascending = False)\n\nx_axis = Capomulin_line_df['Timepoint']\ntumor_volume = Capomulin_line_df['Tumor Volume (mm3)']\n\nplt.title(\"Capomulin Results on Mouse s185\")\nplt.xlabel(\"Timepoint(Days)\")\nplt.ylabel(\"Tumor Volume (mm3)\")\n\nplt.plot(x_axis, tumor_volume, marker=\"o\", color=\"red\", linewidth=0.5)\nplt.show()",
"_____no_output_____"
],
[
"# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen\nCapomulin_scatter_df = merged_df.loc[merged_df['Drug Regimen']=='Capomulin',:]\n\navg_tumor=Capomulin_scatter_df.groupby(['Mouse ID']).mean()\n\nx_axis = avg_tumor['Weight (g)']\ny_axis = avg_tumor['Tumor Volume (mm3)']\n\nfig1, ax1 = plt.subplots(figsize=(8, 5))\nplt.title(\"Mouse Weight vs Average Tumor Volumes\", fontsize = 16)\nplt.xlabel('Weight (g)')\nplt.ylabel('Average Tumor Volume (mm3)')\nax1.set_ylim([29,47])\nplt.scatter(x_axis, y_axis, marker=\"o\", color=\"purple\")\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Correlation and Regression",
"_____no_output_____"
]
],
[
[
"# Calculate the correlation coefficient and linear regression model \n# for mouse weight and average tumor volume for the Capomulin regimen\ncorrelation = st.pearsonr(avg_tumor['Weight (g)'],avg_tumor['Tumor Volume (mm3)'])\nprint(f\"The correlation between both factors is {round(correlation[0],2)}\")",
"The correlation between both factors is 0.87\n"
],
[
"# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen\n#including the linear regression model\nx_values = avg_tumor['Weight (g)']\ny_values = avg_tumor['Tumor Volume (mm3)']\n\n(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)\nregress_values = x_values * slope + intercept\n\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\n\nplt.scatter(x_values,y_values)\n\nplt.plot(x_values,regress_values,\"r-\")\nax1.annotate(line_eq, xy=(20, 40), xycoords='data',xytext=(0.8, 0.95), textcoords='axes fraction',horizontalalignment='right', verticalalignment='top',fontsize=30,color=\"red\")\n\nplt.title(\"Mouse Weight vs Average Tumor Volumes\", fontsize = 16)\nplt.xlabel('Weight (g)')\nplt.ylabel('Average Tumor Volume (mm3)')\n\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\n\nprint(f\"The r-squared is: {rvalue**2}\")\nplt.show()",
"The r-squared is: 0.7532769048399547\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7532f25c5d9e47fd83dd3efb5717eae992720a5 | 6,146 | ipynb | Jupyter Notebook | examples/notebooks/models_training/beta_tc_vae_training.ipynb | clementchadebec/benchmark_VAE | 943e231f9e5dfa40b4eec14d4536f1c229ad9be1 | [
"Apache-2.0"
] | 143 | 2021-10-17T08:43:33.000Z | 2022-03-31T11:10:53.000Z | examples/notebooks/models_training/beta_tc_vae_training.ipynb | louis-j-vincent/benchmark_VAE | 943e231f9e5dfa40b4eec14d4536f1c229ad9be1 | [
"Apache-2.0"
] | 6 | 2022-01-21T17:40:09.000Z | 2022-03-16T13:09:22.000Z | examples/notebooks/models_training/beta_tc_vae_training.ipynb | louis-j-vincent/benchmark_VAE | 943e231f9e5dfa40b4eec14d4536f1c229ad9be1 | [
"Apache-2.0"
] | 18 | 2021-12-16T15:17:08.000Z | 2022-03-15T01:30:13.000Z | 22.595588 | 104 | 0.539701 | [
[
[
"# If you run on colab uncomment the following line\n#!pip install git+https://github.com/clementchadebec/benchmark_VAE.git",
"_____no_output_____"
],
[
"import torch\nimport torchvision.datasets as datasets\n\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"mnist_trainset = datasets.MNIST(root='../../data', train=True, download=True, transform=None)\n\ntrain_dataset = mnist_trainset.data[:-10000].reshape(-1, 1, 28, 28) / 255.\neval_dataset = mnist_trainset.data[-10000:].reshape(-1, 1, 28, 28) / 255.",
"_____no_output_____"
],
[
"from pythae.models import BetaTCVAE, BetaTCVAEConfig\nfrom pythae.trainers import BaseTrainerConfig\nfrom pythae.pipelines.training import TrainingPipeline\nfrom pythae.models.nn.benchmarks.mnist import Encoder_VAE_MNIST, Decoder_AE_MNIST",
"_____no_output_____"
],
[
"config = BaseTrainerConfig(\n output_dir='my_model',\n learning_rate=1e-3,\n batch_size=100,\n num_epochs=100,\n)\n\n\nmodel_config = BetaTCVAEConfig(\n input_dim=(1, 28, 28),\n latent_dim=16,\n beta=2.,\n alpha=1,\n gamma=1\n\n)\n\nmodel = BetaTCVAE(\n model_config=model_config,\n encoder=Encoder_VAE_MNIST(model_config), \n decoder=Decoder_AE_MNIST(model_config) \n)",
"_____no_output_____"
],
[
"pipeline = TrainingPipeline(\n training_config=config,\n model=model\n)",
"_____no_output_____"
],
[
"pipeline(\n train_data=train_dataset,\n eval_data=eval_dataset\n)",
"_____no_output_____"
],
[
"import os",
"_____no_output_____"
],
[
"last_training = sorted(os.listdir('my_model'))[-1]\ntrained_model = BetaTCVAE.load_from_folder(os.path.join('my_model', last_training, 'final_model'))",
"_____no_output_____"
],
[
"from pythae.samplers import NormalSampler",
"_____no_output_____"
],
[
"# create normal sampler\nnormal_samper = NormalSampler(\n model=trained_model\n)",
"_____no_output_____"
],
[
"# sample\ngen_data = normal_samper.sample(\n num_samples=25\n)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"# show results with normal sampler\nfig, axes = plt.subplots(nrows=5, ncols=5, figsize=(10, 10))\n\nfor i in range(5):\n for j in range(5):\n axes[i][j].imshow(gen_data[i*5 +j].cpu().squeeze(0), cmap='gray')\n axes[i][j].axis('off')\nplt.tight_layout(pad=0.)",
"_____no_output_____"
],
[
"from pythae.samplers import GaussianMixtureSampler, GaussianMixtureSamplerConfig",
"_____no_output_____"
],
[
"# set up gmm sampler config\ngmm_sampler_config = GaussianMixtureSamplerConfig(\n n_components=10\n)\n\n# create gmm sampler\ngmm_sampler = GaussianMixtureSampler(\n sampler_config=gmm_sampler_config,\n model=trained_model\n)\n\n# fit the sampler\ngmm_sampler.fit(train_dataset)",
"_____no_output_____"
],
[
"# sample\ngen_data = gmm_sampler.sample(\n num_samples=25\n)",
"_____no_output_____"
],
[
"# show results with gmm sampler\nfig, axes = plt.subplots(nrows=5, ncols=5, figsize=(10, 10))\n\nfor i in range(5):\n for j in range(5):\n axes[i][j].imshow(gen_data[i*5 +j].cpu().squeeze(0), cmap='gray')\n axes[i][j].axis('off')\nplt.tight_layout(pad=0.)",
"_____no_output_____"
]
],
[
[
"## ... the other samplers work the same",
"_____no_output_____"
]
]
] | [
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
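The notebook record above reloads the latest run with `sorted(os.listdir('my_model'))[-1]`, which presumably works because the output folder names embed a sortable timestamp. A minimal pure-Python sketch of that selection step (the folder names below are made up for illustration):

```python
import os
import tempfile

def latest_run(output_dir):
    """Return the lexicographically last entry of output_dir.

    Mirrors the sorted(os.listdir('my_model'))[-1] idiom above: it picks
    the most recent run only when folder names embed a sortable timestamp.
    """
    runs = sorted(os.listdir(output_dir))
    if not runs:
        raise FileNotFoundError("no training runs in " + output_dir)
    return runs[-1]

# Demo with hypothetical timestamped run folders.
with tempfile.TemporaryDirectory() as root:
    for name in ("training_2022-01-20_09-00-00",
                 "training_2022-01-21_17-40-09",
                 "training_2022-01-19_23-59-59"):
        os.makedirs(os.path.join(root, name))
    print(latest_run(root))  # training_2022-01-21_17-40-09
```

Sorting by `os.path.getmtime` would be a more robust alternative if the naming scheme ever changes.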
e75330edfe97f564a369e2cdf010368f42fce2c4 | 72,729 | ipynb | Jupyter Notebook | hands-on/.ipynb_checkpoints/housing-checkpoint.ipynb | seguranca-publica/learning-machine-learning | 3f2cc3f2f70e9f57b565de1e36611dd706f828e1 | [
"MIT"
] | null | null | null | hands-on/.ipynb_checkpoints/housing-checkpoint.ipynb | seguranca-publica/learning-machine-learning | 3f2cc3f2f70e9f57b565de1e36611dd706f828e1 | [
"MIT"
] | null | null | null | hands-on/.ipynb_checkpoints/housing-checkpoint.ipynb | seguranca-publica/learning-machine-learning | 3f2cc3f2f70e9f57b565de1e36611dd706f828e1 | [
"MIT"
] | null | null | null | 162.341518 | 58,236 | 0.857471 | [
[
[
"import os\nimport tarfile\nfrom six.moves import urllib\n\nDOWNLOAD_ROOT = \"https://raw.githubusercontent.com/ageron/handson-ml/master/\"\nHOUSING_PATH = \"datasets/housing\"\nHOUSING_URL = DOWNLOAD_ROOT + HOUSING_PATH + \"/housing.tgz\"\n\ndef fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):\n if not os.path.isdir(housing_path):\n os.makedirs(housing_path)\n tgz_path = os.path.join(housing_path, \"housing.tgz\")\n urllib.request.urlretrieve(housing_url, tgz_path)\n housing_tgz = tarfile.open(tgz_path)\n housing_tgz.extractall(path=housing_path)\n housing_tgz.close()",
"_____no_output_____"
],
[
"fetch_housing_data()",
"_____no_output_____"
],
[
"import pandas \n\ndef load_housing_data(housing_path=HOUSING_PATH):\n csv_path = os.path.join(housing_path, \"housing.csv\")\n return pandas.read_csv(csv_path)",
"_____no_output_____"
],
[
"housing = load_housing_data()\nhousing.head()",
"_____no_output_____"
],
[
"housing.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 20640 entries, 0 to 20639\nData columns (total 10 columns):\nlongitude 20640 non-null float64\nlatitude 20640 non-null float64\nhousing_median_age 20640 non-null float64\ntotal_rooms 20640 non-null float64\ntotal_bedrooms 20433 non-null float64\npopulation 20640 non-null float64\nhouseholds 20640 non-null float64\nmedian_income 20640 non-null float64\nmedian_house_value 20640 non-null float64\nocean_proximity 20640 non-null object\ndtypes: float64(9), object(1)\nmemory usage: 1.6+ MB\n"
],
[
"housing.describe()",
"_____no_output_____"
],
[
"%matplotlib inline\n\nimport matplotlib.pyplot as ploter\n\nhousing.hist(bins=50, figsize=(20,15))\nploter.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
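The housing record above joins paths with `os.path.join` and reads the CSV with pandas. For reference, a dependency-free sketch of the same load step using only the standard library (the sample row is made up):

```python
import csv
import os
import tempfile

def load_csv(dirpath, filename="housing.csv"):
    """Read a CSV into a list of dicts (stdlib stand-in for pandas.read_csv)."""
    csv_path = os.path.join(dirpath, filename)
    with open(csv_path, newline="") as f:
        return list(csv.DictReader(f))

# Demo on a tiny synthetic file with the same column names.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "housing.csv"), "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["longitude", "latitude", "median_house_value"])
        writer.writerow(["-122.23", "37.88", "452600.0"])
    rows = load_csv(d)
    print(rows[0]["median_house_value"])  # 452600.0
```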
e753369dd0ebdf2771f44ba295a525af5cec3745 | 90,656 | ipynb | Jupyter Notebook | notebooks/beta-error-bernoulli-hypothesis-test.ipynb | garciparedes/r-examples | 0e0e18439ad859f97eafb27c5e7f77d33da28bc6 | [
"Apache-2.0"
] | 1 | 2017-09-15T19:56:31.000Z | 2017-09-15T19:56:31.000Z | notebooks/beta-error-bernoulli-hypothesis-test.ipynb | garciparedes/r-examples | 0e0e18439ad859f97eafb27c5e7f77d33da28bc6 | [
"Apache-2.0"
] | 5 | 2018-03-23T09:34:55.000Z | 2019-01-09T14:13:32.000Z | notebooks/beta-error-bernoulli-hypothesis-test.ipynb | garciparedes/r-examples | 0e0e18439ad859f97eafb27c5e7f77d33da28bc6 | [
"Apache-2.0"
] | null | null | null | 370.02449 | 81,660 | 0.907772 | [
[
[
"# Contraste Bilateral: Cálculo del Error de tipo II\n\n## Parámetro $p$ en variables de $Bernoulli$\n\n#### Autor:\nSergio García Prado - [garciparedes.me](https://garciparedes.me)\n\n#### Fecha:\nAbril de 2018\n\n#### Agradecimientos:\nMe gustaría agradecer a la profesora [Pilar Rodríguez del Tío](http://www.eio.uva.es/~pilar/) la revisión y correcciones sobre este trabajo.",
"_____no_output_____"
],
[
"## Descripción:\n\nLas variables aleatorias de $Bernoulli$ surgen cuando se pretende estudiar fenómenos de carácter binario como por ejemplo, la ocurrencia o no de un determinado suceso. Estas variables se caracterizan por el ratio de ocurrencia del suceso de estudio, el cual se denota por $p \\in [0, 1]$. \n\n\nSin embargo, este parámetro es desconocido en la mayoría de casos, por lo que es habitual que surja la pregunta sobre qué valor es el que toma. Para hacer inferencia sobre $p$ existen distintas técnicas, entre las que se encuentran los contrastes de hipótesis. En los contrastes se utilizan por dos parámetros de error conocidos como $\\alpha$ y $\\beta$ que representan la probabilidad de rechazar la hipótesis cuando era cierta y de aceptarla cuando era falsa.\n\nEstos errores están relacionados entre si, y cuando uno disminuye el otro aumenta, por lo que es necesario estudiar su comportamiento en detalle para llegar a extraer conclusiones razonables de nuestros datos.\n\nEn este trabajo nos hemos centrado en estudiar la variación del error de tipo II (probabilidad de aceptar la hipótesis $p = c$ cuando en realidad era falsa) en el contraste bilateral para el parámetro p sobre una muestra aleatoria simple de variables de Bernoulli.",
"_____no_output_____"
],
[
"## Procedimiento:\n\nSea: \n\n$$X_1,..., X_i,..., X_n \\ m.a.s \\mid X_i \\sim B(p) $$\n\nSabemos que:\n\n$$\\widehat{p} = \\bar{X} = \\frac{\\sum_{i = 1}^nX_i}{n}$$\n\nPara realizar el contraste:\n\n$$H_0: p = c\\\\H_1:p \\neq c$$\n\nSabemos que:\n\n\\begin{align}\n\\widehat{p} \\simeq N(p, \\frac{p(1-p)}{n}) &\\quad \\text{(bajo cualquier hipótesis)}\\\\\n\\widehat{p} \\simeq N(c, \\frac{c(1-c)}{n}) &\\quad \\text{(bajo $H_0$)}\n\\end{align}\n\nSi tipificamos para en ambos casos:\n\n\\begin{align}\n\\frac{\\widehat{p} - p}{\\sqrt{\\frac{p(1-p)}{n}}} \\simeq N(0, 1) &\\quad \\text{(bajo cualquier hipótesis)}\\\\\n\\frac{\\widehat{p} - c}{\\sqrt{\\frac{c(1-c)}{n}}} \\simeq N(0, 1) &\\quad \\text{(bajo $H_0$)}\n\\end{align}\n\nAhora, queremos construir la región crítica o de rechazo tal que: \n\n$$P_{H_0}(C) = \\alpha$$\n\nPor lo tanto:\n\n\\begin{align}\nC \n&= \\left\\{ \\left| \\ N(0,1) \\ \\right| \\geq Z_{1 - \\frac{\\alpha}{2}}\\right\\} \\\\\n&= \\left\\{ \\left| \\ \\frac{\\widehat{p} - c}{\\sqrt{\\frac{c(1-c)}{n}}} \\ \\right| \\geq Z_{1 - \\frac{\\alpha}{2}} \\right\\} \\\\\n&= \\left\\{ \\left| \\ \\widehat{p} - c \\ \\right| \\geq Z_{1 - \\frac{\\alpha}{2}} \\sqrt{\\frac{c(1-c)}{n}}\\right\\} \\\\\n&= \\left\\{ \\widehat{p} \\leq c - Z_{1 - \\frac{\\alpha}{2}} \\sqrt{\\frac{c(1-c)}{n}}\\right\\} \\cup \\left\\{c + Z_{1 - \\frac{\\alpha}{2}} \\sqrt{\\frac{c(1-c)}{n}} \\leq \\widehat{p}\\right\\} \\\\\n\\end{align}\n\n(Donde $Z_{1 - \\frac{\\alpha}{2}}$ se refiere al cuantil $1-\\alpha/2$ de la distribución Normal estándar)\n\nSin embargo, buscamos calcular el error de tipo II: \n\n$$\\beta\\left(p\\right) = P_p(\\bar{C})$$\n\nLuego, obtenemos el complementario de la región crítica:\n\n\\begin{align}\n\\bar{C} \n&= \\left\\{ \\left| \\ N(0,1) \\ \\right| < Z_{1 - \\frac{\\alpha}{2}}\\right\\} \\\\\n&= \\left\\{ \\left| \\ \\frac{\\widehat{p} - c}{\\sqrt{\\frac{c(1-c)}{n}}} \\ \\right| < Z_{1 - \\frac{\\alpha}{2}} \\right\\} \\\\\n&= \\left\\{ \\left| \\ 
\\widehat{p} - c\\ \\right| < Z_{1 - \\frac{\\alpha}{2}} \\sqrt{\\frac{c(1-c)}{n}}\\right\\} \\\\\n&= \\left\\{ c - Z_{1-\\frac{\\alpha}{2}} \\sqrt{\\frac{c(1-c)}{n}} < \\widehat{p} < c + Z_{1 - \\frac{\\alpha}{2}} \\sqrt{\\frac{c(1-c)}{n}}\\right\\} \\\\\n\\end{align}\n\nPara después desarrollar el cálculo de dicha probabilidad:\n\n\\begin{align}\n\\beta(p) \n&= P_p(\\bar{C}) \\\\\n&= P_p\\left(\\left| \\frac{\\widehat{p} - c}{\\sqrt{\\frac{c(1-c)}{n}}} \\right| < Z_{1 - \\frac{\\alpha}{2}}\\right) \\\\\n&= P_p\\left(c - Z_{\\frac{\\alpha}{2}} \\sqrt{\\frac{c(1-c)}{n}} < \\widehat{p} < c + Z_{ 1- \\frac{\\alpha}{2}} \\sqrt{\\frac{c(1-c)}{n}}\\right) \\\\\n&= \\Phi\\left(\\frac{\\left(c + Z_{1 - \\frac{\\alpha}{2}} \\sqrt{\\frac{c(1-c)}{n}}\\right)-p}{\\sqrt{\\frac{p(1-p)}{n}}}\\right) - \\Phi\\left(\\frac{\\left(c - Z_{1 - \\frac{\\alpha}{2}} \\sqrt{\\frac{c(1-c)}{n}}\\right)-p}{\\sqrt{\\frac{p(1-p)}{n}}}\\right)\n\\end{align}\n\n(Donde $\\Phi(x)$ se refiere a la función de distribución de la Normal estándar)\n\nA continuación se muestra la implementación de los cálculos superiores en __R__:",
"_____no_output_____"
],
[
"## Implementación:",
"_____no_output_____"
],
[
"#### Cálculo del varlor crítico de nivel $\\alpha$ para variables de Bernoulli:",
"_____no_output_____"
]
],
[
[
"CriticalValue <- function(n, p, alpha) {\n qnorm(alpha) * sqrt((p * (1 - p)) / n)\n}",
"_____no_output_____"
]
],
[
[
"#### Cálculo de la probabilidad $P(\\bar{C})$:",
"_____no_output_____"
]
],
[
[
"PNegateC <- function(p, n, c, alpha) {\n pnorm(\n (\n c + CriticalValue(n, c, 1 - alpha / 2) - p\n ) / sqrt((p * (1 - p)) / n)\n ) - \n pnorm(\n (\n c - CriticalValue(n, c, 1 - alpha / 2) - p\n ) / sqrt((p * (1 - p)) / n)\n )\n}",
"_____no_output_____"
]
],
[
[
"#### Representación gráfica de $\\beta(p)$ tomando distintos valores $c$ y $n$ (manteniendo $\\alpha$ fijo) para comprobar su variación ",
"_____no_output_____"
]
],
[
[
"n.vec <- 10 ^ (1:3)\nc.vec <- c(0.25, 0.5, 0.75)\np <- seq(0, 1, length = 200)",
"_____no_output_____"
]
],
[
[
"## Resultados:",
"_____no_output_____"
]
],
[
[
"par(mfrow = c(length(n.vec), length(c.vec)))\nfor (n in n.vec) {\n for (c in c.vec) {\n plot(p, 1 - PNegateC(p, n, c, 0.05), type = \"l\", \n main = paste(\"c =\", c, \"\\nn =\", n),\n ylab = \"A(p) = 1 - B(p)\")\n }\n}",
"_____no_output_____"
]
],
[
[
"Tal y como se puede apreciar, la función del error tan solo es simétrica en el caso $c = \\frac{1}{2}$, lo cual es más marcado para valores de $n$ pequeños. Además, conforme aumenta $n$ la función $\\beta(p)$ se vuelve mucho más apuntada entorno a $c$, por lo que el error de tipo II se mantiene muy bajo excepto para valores $p \\simeq c$. Esto tiene sentido ya que el error cometido por rechazar que el verdadero valor de $p$ es $c$ será bajo cuando realmente no sea $c$, mientras que será elevado cuando si que lo sea. En dicho caso, este error está condicionado por el error de no rechazo, es decir, de tipo I (en este ejemplo $\\alpha = 0.05$).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
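The R helper `PNegateC` above computes β(p) from the closed form derived in the notebook. A Python transcription of the same expression, using `math.erf` for the standard normal CDF Φ and a bisection inverse as an illustrative stand-in for R's `qnorm`:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_quantile(q):
    """Invert phi by bisection (stand-in for R's qnorm)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if phi(mid) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def beta_error(p, n, c, alpha):
    """Type II error beta(p) of the two-sided test H0: p = c."""
    z = z_quantile(1.0 - alpha / 2.0)
    half_width = z * math.sqrt(c * (1.0 - c) / n)
    se = math.sqrt(p * (1.0 - p) / n)
    return phi((c + half_width - p) / se) - phi((c - half_width - p) / se)

# At p = c the test accepts with probability 1 - alpha, so beta(c) = 0.95 here.
print(round(beta_error(0.5, 100, 0.5, 0.05), 3))  # 0.95
```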
e7533a717b3bd7ac3a88894b79432f4625c09be4 | 3,666 | ipynb | Jupyter Notebook | object_detection_for_image_cropping/chiroptera/archive/chiroptera_split_train_test.ipynb | aubricot/computer_vision_with_eol_images | 33f5df56568992b01ad953364c77f9fd0a977b2f | [
"MIT"
] | 4 | 2020-06-02T19:01:23.000Z | 2021-06-01T20:01:29.000Z | object_detection_for_image_cropping/chiroptera/archive/chiroptera_split_train_test.ipynb | aubricot/computer_vision_with_eol_images | 33f5df56568992b01ad953364c77f9fd0a977b2f | [
"MIT"
] | null | null | null | object_detection_for_image_cropping/chiroptera/archive/chiroptera_split_train_test.ipynb | aubricot/computer_vision_with_eol_images | 33f5df56568992b01ad953364c77f9fd0a977b2f | [
"MIT"
] | 2 | 2020-06-02T21:49:00.000Z | 2021-04-21T07:42:30.000Z | 39 | 293 | 0.599564 | [
[
[
"<a href=\"https://colab.research.google.com/github/aubricot/object_detection_for_image_cropping/blob/master/chiroptera_split_train_test.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Split EOL user crops dataset into train and test\n---\n*Last Updated 11 February 2020* \nInstead of creating image annotations from scratch, EOL user-generated cropping coordinates are used to create training and testing data to teach object detection models and evaluate model accuracy for YOLO via darkflow, SSD and Faster-RCNN object detection models, respectively. \n\nFollowing the [Pareto principle](https://en.wikipedia.org/wiki/Pareto_principle), 80% of the original EOL crops dataset are randomly selected to be training data and the remaining 20% will be used to test model accuracy. \n\nResulting train and test datasets are exported for further pre-processing in [chiroptera_preprocessing.ipynb](https://github.com/aubricot/object_detection_for_image_cropping/blob/master/chiroptera_preprocessing.ipynb), before they are ready to use with the object detection models.",
"_____no_output_____"
]
],
[
[
"# Mount google drive to import/export files\nfrom google.colab import drive\ndrive.mount('/content/drive')",
"_____no_output_____"
],
[
"import pandas as pd\nimport numpy as np\n\n# Read in EOL user-generated cropping data\ncrops = pd.read_csv('drive/My Drive/fall19_smithsonian_informatics/train/chiroptera_crops.tsv', sep=\"\\t\", header=0)\nprint(crops.head())\n\n# Randomly select 80% of data to use for training\n# set seed with random_state=2 for reproducible results\nidx = crops.sample(frac = 0.8, random_state=2).index\ntrain = crops.iloc[idx]\nprint(train.head())\n\n# Select the remaining 20% of data for testing using the inverse index from above\ntest = crops.iloc[crops.index.difference(idx)]\nprint(test.head())\n\n# Write test and train to tsvs \ntrain.to_csv('drive/My Drive/fall19_smithsonian_informatics/train/chiroptera_crops_train.tsv', sep='\\t', header=True, index=False)\ntrain.to_csv('drive/My Drive/fall19_smithsonian_informatics/train/chiroptera_crops_test.tsv', sep='\\t', header=True, index=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
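The split above uses `df.sample(frac=0.8, random_state=2)` for the training rows and the complement of its index for testing. The same Pareto-style 80/20 split in plain Python, seeded for reproducibility like `random_state`:

```python
import random

def train_test_split(rows, train_frac=0.8, seed=2):
    """Seeded split: sample training positions; the complement is the test set."""
    rng = random.Random(seed)
    k = int(round(train_frac * len(rows)))
    train_idx = set(rng.sample(range(len(rows)), k))
    train = [row for i, row in enumerate(rows) if i in train_idx]
    test = [row for i, row in enumerate(rows) if i not in train_idx]
    return train, test

rows = list(range(100))
train, test = train_test_split(rows)
print(len(train), len(test))  # 80 20
```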
e753492847111749d66b71bce95525bffca45ff1 | 981 | ipynb | Jupyter Notebook | Hwk3/Untitled.ipynb | Dr-Spicy/Neural-Network | 404e7a3134a108dc86959d3cf46081cd30efbbdd | [
"MIT"
] | null | null | null | Hwk3/Untitled.ipynb | Dr-Spicy/Neural-Network | 404e7a3134a108dc86959d3cf46081cd30efbbdd | [
"MIT"
] | null | null | null | Hwk3/Untitled.ipynb | Dr-Spicy/Neural-Network | 404e7a3134a108dc86959d3cf46081cd30efbbdd | [
"MIT"
] | null | null | null | 18.166667 | 76 | 0.502548 | [
[
[
"import numpy as np\nnp.arange(10000)[np.arange(10000) % 1000 ==0]",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
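The one-liner above builds `np.arange(10000)` and keeps the multiples of 1000 with a boolean mask. For comparison, the equivalent selection in plain Python:

```python
def multiples_of(step, stop):
    """Plain-Python equivalent of arr[arr % step == 0] for arr = np.arange(stop)."""
    return [i for i in range(stop) if i % step == 0]

print(multiples_of(1000, 10000))
```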
e7534d2e696ec47320c901bdb24fc5901d8d7954 | 4,367 | ipynb | Jupyter Notebook | .ipynb_checkpoints/SQL_fbook-checkpoint.ipynb | conner-mcnicholas/jupyter_notebooks | 30033f0b2a2fceb892000b6a24f263a518c1916e | [
"MIT"
] | null | null | null | .ipynb_checkpoints/SQL_fbook-checkpoint.ipynb | conner-mcnicholas/jupyter_notebooks | 30033f0b2a2fceb892000b6a24f263a518c1916e | [
"MIT"
] | null | null | null | .ipynb_checkpoints/SQL_fbook-checkpoint.ipynb | conner-mcnicholas/jupyter_notebooks | 30033f0b2a2fceb892000b6a24f263a518c1916e | [
"MIT"
] | null | null | null | 23.863388 | 83 | 0.428212 | [
[
[
"import pandas as pd\nimport psycopg2\nimport sqlalchemy\n",
"_____no_output_____"
],
[
"%load_ext sql\n",
"_____no_output_____"
],
[
"# Format %sql dialect+driver://username:password@host:port/database\n# Example format\n%sql postgresql://postgres:password@localhost/postgres",
"_____no_output_____"
],
[
"%%sql\n\nCreate table If Not Exists students(id int, name varchar(255), marks int);\nTruncate table students;\ninsert into students (id, name, marks) values ('1', 'Julia', '88');\ninsert into students (id, name, marks) values ('2', 'Samantha', '68');\ninsert into students (id, name, marks) values ('3', 'Maria', '100');\ninsert into students (id, name, marks) values ('4', 'Scarlet', '78');\ninsert into students (id, name, marks) values ('5', 'Ashley', '63');\ninsert into students (id, name, marks) values ('6', 'Jane', '81');\n",
" * postgresql://postgres:***@localhost/postgres\nDone.\nDone.\n1 rows affected.\n1 rows affected.\n1 rows affected.\n1 rows affected.\n1 rows affected.\n1 rows affected.\n"
],
[
"%%sql\n\nselect (case when grade < 8 then 'NULL'\n else name end) as name, \n (case when grade > 10 then 10\n else grade end) as grade, marks from \n(select name,\ncast(1+floor(marks/10) as int) as grade,\nmarks \nfrom students) as s\norder by grade desc,name asc,marks asc",
" * postgresql://postgres:***@localhost/postgres\n6 rows affected.\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
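The final query above buckets marks into grades with `1 + floor(marks/10)` capped at 10, masks names below grade 8, and orders by grade desc, name asc, marks asc. A plain-Python rendering of the same CASE logic on the notebook's sample rows:

```python
def report(students):
    """Replicate the SQL: grade = min(10, 1 + marks // 10); mask name if grade < 8."""
    rows = []
    for name, marks in students:
        grade = min(10, 1 + marks // 10)
        rows.append(("NULL" if grade < 8 else name, grade, marks))
    # ORDER BY grade DESC, name ASC, marks ASC
    return sorted(rows, key=lambda r: (-r[1], r[0], r[2]))

students = [("Julia", 88), ("Samantha", 68), ("Maria", 100),
            ("Scarlet", 78), ("Ashley", 63), ("Jane", 81)]
for row in report(students):
    print(row)
```

Like the SQL, this emits the literal string 'NULL' rather than a true NULL, which is what the original exercise's output format expects.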
e75353c5133642852d20cb2b5395bb28b3892216 | 6,042 | ipynb | Jupyter Notebook | JavaScripts/Image/LandcoverCleanup.ipynb | YuePanEdward/earthengine-py-notebooks | cade6a81dd4dbbfb1b9b37aaf6955de42226cfc5 | [
"MIT"
] | 1 | 2020-11-16T08:00:11.000Z | 2020-11-16T08:00:11.000Z | JavaScripts/Image/LandcoverCleanup.ipynb | mllzl/earthengine-py-notebooks | cade6a81dd4dbbfb1b9b37aaf6955de42226cfc5 | [
"MIT"
] | null | null | null | JavaScripts/Image/LandcoverCleanup.ipynb | mllzl/earthengine-py-notebooks | cade6a81dd4dbbfb1b9b37aaf6955de42226cfc5 | [
"MIT"
] | null | null | null | 45.428571 | 1,031 | 0.606091 | [
[
[
"<table class=\"ee-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://github.com/giswqs/earthengine-py-notebooks/tree/master/JavaScripts/Image/LandcoverCleanup.ipynb\"><img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /> View source on GitHub</a></td>\n <td><a target=\"_blank\" href=\"https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/JavaScripts/Image/LandcoverCleanup.ipynb\"><img width=26px src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png\" />Notebook Viewer</a></td>\n <td><a target=\"_blank\" href=\"https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=JavaScripts/Image/LandcoverCleanup.ipynb\"><img width=58px src=\"https://mybinder.org/static/images/logo_social.png\" />Run in binder</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/JavaScripts/Image/LandcoverCleanup.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /> Run in Google Colab</a></td>\n</table>",
"_____no_output_____"
],
[
"## Install Earth Engine API and geemap\nInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.\nThe following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.\n\n**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).",
"_____no_output_____"
]
],
[
[
"# Installs geemap package\nimport subprocess\n\ntry:\n import geemap\nexcept ImportError:\n print('geemap package not installed. Installing ...')\n subprocess.check_call([\"python\", '-m', 'pip', 'install', 'geemap'])\n\n# Checks whether this notebook is running on Google Colab\ntry:\n import google.colab\n import geemap.eefolium as emap\nexcept:\n import geemap as emap\n\n# Authenticates and initializes Earth Engine\nimport ee\n\ntry:\n ee.Initialize()\nexcept Exception as e:\n ee.Authenticate()\n ee.Initialize() ",
"_____no_output_____"
]
],
[
[
"## Create an interactive map \nThe default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function. ",
"_____no_output_____"
]
],
[
[
"Map = emap.Map(center=[40,-100], zoom=4)\nMap.add_basemap('ROADMAP') # Add Google Map\nMap",
"_____no_output_____"
]
],
[
[
"## Add Earth Engine Python script ",
"_____no_output_____"
]
],
[
[
"# Add Earth Engine dataset\n",
"_____no_output_____"
]
],
[
[
"## Display Earth Engine data layers ",
"_____no_output_____"
]
],
[
[
"Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.\nMap",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e75370dde794afd0a71056dfcea6aac5c0b6d111 | 6,476 | ipynb | Jupyter Notebook | 002_Python_Functions_Built_in/049_Python_open().ipynb | ATrain951/01.python_function-milaan9 | 0e776b98dd6349efe2789ded1d54ccb453325414 | [
"MIT"
] | 167 | 2021-06-28T03:50:28.000Z | 2022-03-21T14:56:29.000Z | 002_Python_Functions_Built_in/049_Python_open().ipynb | Dengjinjing04/04_Python_Functions | 09bcc22a5596fef91b8c59c018588b3791140cb0 | [
"MIT"
] | null | null | null | 002_Python_Functions_Built_in/049_Python_open().ipynb | Dengjinjing04/04_Python_Functions | 09bcc22a5596fef91b8c59c018588b3791140cb0 | [
"MIT"
] | 155 | 2021-06-28T03:55:09.000Z | 2022-03-21T14:56:30.000Z | 28.782222 | 194 | 0.548023 | [
[
[
"<small><small><i>\nAll the IPython Notebooks in this lecture series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/04_Python_Functions/tree/main/002_Python_Functions_Built_in)**\n</i></small></small>",
"_____no_output_____"
],
[
"# Python `open()`\n\nThe **`open()`** function opens the file (if possible) and returns the corresponding file object.\n\n**Syntax**:\n\n```python\nopen(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None)\n```",
"_____no_output_____"
],
[
"## `open()` Parameters\n\n* **file** - path-like object (representing a file system path)\n* **mode (optional)** - mode while opening a file. If not provided, it defaults to **`'r'`** (open for reading in text mode). Available file modes are:\n\n| Mode | Description |\n|:----| :--- |\n| **`'r'`** | **Open a file for reading. (default)** |\n| **`'w'`** | **Open a file for writing. Creates a new file if it does not exist or truncates the file if it exists.** |\n| **`'x'`** | **Open a file for exclusive creation. If the file already exists, the operation fails.** |\n| **`'a'`** | **Open for appending at the end of the file without truncating it. Creates a new file if it does not exist.** |\n| **`'t'`** | **Open in text mode. (default)** |\n| **`'b'`** | **Open in binary mode.** |\n| **`'+'`** | **Open a file for updating (reading and writing)** |\n\n* **buffering** (optional) - used for setting buffering policy\n* **encoding** (optional) - the encoding format\n* **errors** (optional) - string specifying how to handle encoding/decoding errors\n* **newline** (optional) - how newlines mode works (available values: **`None`**, **`' '`**, **`'\\n'`**, **`'r'`**, and **`'\\r\\n'`**\n* **closefd** (optional) - must be **`True`** (default); if given otherwise, an exception will be raised\n* **opener** (optional) - a custom opener; must return an open file descriptor",
"_____no_output_____"
],
[
"## Return Value from `open()`\n\nThe **`open()`** function returns a file object which can used to read, write and modify the file.\n\nIf the file is not found, it raises the **`FileNotFoundError`** exception.",
"_____no_output_____"
]
],
[
[
"# Example 1: How to open a file in Python?\n\n# opens test.text file of the current directory\nf = open(\"test.txt\")\n\n# To get the current directory\n#import os\n#os.getcwd()\n\n# specifying the full path\nf = open(\"C:/Python99/README.txt\")",
"_____no_output_____"
]
],
[
[
"Since the mode is omitted, the file is opened in **`'r'`** mode; opens for reading.",
"_____no_output_____"
]
],
[
[
"# Example 2: Providing mode to open()\n\n# opens the file in reading mode\n#f = open(\"path_to_file\", mode='r')\nf = open(\"C:/Python99/README.txt\", mode='r')\n\n# opens the file in writing mode \n#f = open(\"path_to_file\", mode = 'w')\nf = open(\"C:/Python99/README.txt\", mode='w')\n\n# opens for writing to the end \n#f = open(\"path_to_file\", mode = 'a')\nf = open(\"C:/Python99/README.txt\", mode='a')",
"_____no_output_____"
]
],
[
[
"Python's default encoding is ASCII. You can easily change it by passing the **`encoding`** parameter.",
"_____no_output_____"
]
],
[
[
"#f = open(\"path_to_file\", mode = 'r', encoding='utf-8')\nf = open(\"C:/Python99/README.txt\", mode = 'r', encoding='utf-8')",
"_____no_output_____"
]
],
[
[
">Recommended Reading: **[Python File Input/Output](https://github.com/milaan9/05_Python_Files/blob/main/001_Python_File_Input_Output.ipynb)**",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
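The modes described in the record above can be exercised in a few lines — write (`'w'`), append (`'a'`), then read (`'r'`) with an explicit encoding (the file name is arbitrary):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "demo.txt")

    # 'w' creates the file, or truncates it if it already exists.
    with open(path, mode="w", encoding="utf-8") as f:
        f.write("first line\n")

    # 'a' appends at the end without truncating.
    with open(path, mode="a", encoding="utf-8") as f:
        f.write("second line\n")

    # 'r' (the default) opens the file for reading.
    with open(path, mode="r", encoding="utf-8") as f:
        lines = f.read().splitlines()

print(lines)  # ['first line', 'second line']
```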
e7538177b81a604a2ea30f58d39c4838be634b53 | 183,601 | ipynb | Jupyter Notebook | Misc/Spark/Scala/Exploration-Modeling-and-Scoring-using-Scala.ipynb | jfindlay/Azure-MachineLearning-DataScience | 47e7d6f09db019f9b985ce451c8a6857d101f30b | [
"CC-BY-4.0",
"MIT"
] | 390 | 2015-01-14T13:33:39.000Z | 2019-06-24T21:28:30.000Z | Misc/Spark/Scala/Exploration-Modeling-and-Scoring-using-Scala.ipynb | Jun-NIBS/Azure-MachineLearning-DataScience | 47e7d6f09db019f9b985ce451c8a6857d101f30b | [
"CC-BY-4.0",
"MIT"
] | 23 | 2015-02-06T08:22:16.000Z | 2019-01-25T02:39:48.000Z | Misc/Spark/Scala/Exploration-Modeling-and-Scoring-using-Scala.ipynb | Jun-NIBS/Azure-MachineLearning-DataScience | 47e7d6f09db019f9b985ce451c8a6857d101f30b | [
"CC-BY-4.0",
"MIT"
] | 362 | 2015-01-27T22:48:45.000Z | 2019-06-18T13:11:55.000Z | 108.064155 | 37,520 | 0.833209 | [
[
[
"# Exploration and ML Models with Sampled NYC Taxi Trip and Fare Dataset using Scala",
"_____no_output_____"
],
[
"###### Expected time to run this Notebook: \nAbout 5 mins on a HDInsight Spark (version 1.6) cluster with 4 worker nodes (D12)",
"_____no_output_____"
],
[
"## INTRODUCTION, OBJECTIVE AND ORGANIZATION",
"_____no_output_____"
],
[
"### INTRODUCTION\nHere we show some features and capabilities of Spark's MLlib and SparkML toolkits for supervised machine learning (ML) problems using <a href=\"http://www.andresmh.com/nyctaxitrips/\" target=\"_blank\">the NYC taxi trip and fare data-set from 2013</a>. We take a 0.1% sample of this data-set (about 170K rows, 35 Mb) to to show an end-to-end data science process, including data exploration, visualization, modeling, model saving, and scoring for binary classification and regression problems using this data-set. \n\nThis notebook is written in Scala. We have released similar notebooks in <a href=\"https://azure.microsoft.com/en-us/documentation/articles/machine-learning-data-science-spark-overview/\" target=\"_blank\">Python</a> earlier to show how to perform end-to-end data-science process. We provide a shorter version of the notebook here, showing the process and functions in Scala.",
"_____no_output_____"
],
[
"### OBJECTIVE: To show use of Spark's native Spark ML and MLlib machine learning functions for ML tasks using Scala\n\n\n#### We address two common supervised machine learning tasks\n1. Regression problem: Prediction of the tip amonut ($)\n2. Binary classification: Prediction of tip or no-tip (1/0) for a taxi trip\n\n#### Modeling section is be split into steps: \n1. Model training \n2. Model evaluation on a test data set with relevant accuracy metrics\n3. Saving model in Azure blob for future consumption\n4. Loading a saved model to score a data-set\n\nWe present three appreaches: linear (logistic or linear regression), Random Forest and Boosted Trees.\n\n\n#### NOTES:\n1. At this time models can be created using MLlib (with RDDs) as well as Spark ML functions (with data-frames). As of Spark version 1.6, there are some limitations in terms of available functions to save/load models created using Spark ML functions (e.g. Cross-validator models from various modeling approaches besides linear models).\n\n2. For using tree-based modeling functions from Spark ML and MLlib, target and features have to be indexed or vectorized appropriately. We have shown how to accomplish this in the data preparation and feature engineering section.",
"_____no_output_____"
],
[
"### ORGANIZATION: We have organized this walkthrough into the following sections: ",
"_____no_output_____"
],
[
"#### [1. Data ingestion from public blob](#ingestion)\nIngests CSV file, creates data-frame, and registre temporary table",
"_____no_output_____"
],
[
"#### [2. Data exploration & visualization](#exploration)\nVisualize data using Jupyter's autovisualization features, as well as, Python's matplotlib functions on pandas data-frames",
"_____no_output_____"
],
[
"#### [3. Data preparation, feature engineering, and feature transformation](#transformation)\nIndex categorical features, convert numerical variable to categorical using SQL, create RRDs (LabeledPoint) and data-frames for input into modeling functions",
"_____no_output_____"
],
[
"#### [4. Binary classification problem: Modeling, model evaluation and persistance](#binary)\nCreate binary classification models, save model in blob, and evaluate model on test data",
"_____no_output_____"
],
[
"#### [5. Regression problem: Modeling, model evaluation, and persistance](#regression)\nCreate regression models, save model in blob, load model, and evaluate test data",
"_____no_output_____"
],
[
"#### [6. Advanced training with cross-validation and model hyper-parameter tuning](#advanced)\nWe show how to optimize model using CV or train-validation split and model hyper-parameter sweeping",
"_____no_output_____"
],
[
"#### [7. Automated scoring and consumption of ML models built on Spark](#consumption)\nWe provide pointers describing how a saved ML model can be used to automatically score new data-sets",
"_____no_output_____"
],
[
"### BACKGROUND\n\n1. <a href=\"https://azure.microsoft.com/en-us/documentation/articles/hdinsight-apache-spark-zeppelin-notebook-jupyter-spark-sql/\" target=\"_blank\">How to provision a HDI cluster running Spark</a>\n\n2. <a href=\"http://www.andresmh.com/nyctaxitrips/\" target=\"_blank\">NYC 2013 Taxi data</a>",
"_____no_output_____"
],
[
"<hr>",
"_____no_output_____"
]
],
[
[
"import java.util.Calendar\nval beginningTime = Calendar.getInstance().getTime()",
"Creating SparkContext as 'sc'\nCreating HiveContext as 'sqlContext'\nbeginningTime: java.util.Date = Sun Jul 31 17:11:46 UTC 2016"
]
],
[
[
"<hr>",
"_____no_output_____"
],
[
"### IMPORT FUNCTIONS",
"_____no_output_____"
]
],
[
[
"import org.apache.spark.sql.SQLContext\nimport org.apache.spark.sql.functions._\nimport java.text.SimpleDateFormat\nimport java.util.Calendar\nimport sqlContext.implicits._\nimport org.apache.spark.sql.Row\n\n// Spark SQL functions\nimport org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType, FloatType, DoubleType}\nimport org.apache.spark.sql.functions.rand\n\n// Spark ML functions\nimport org.apache.spark.ml.Pipeline\nimport org.apache.spark.ml.feature.{StringIndexer, VectorAssembler, OneHotEncoder, VectorIndexer, Binarizer}\nimport org.apache.spark.ml.tuning.{ParamGridBuilder, TrainValidationSplit, CrossValidator}\nimport org.apache.spark.ml.regression.{LinearRegression, LinearRegressionModel, RandomForestRegressor, RandomForestRegressionModel, GBTRegressor, GBTRegressionModel}\nimport org.apache.spark.ml.classification.{LogisticRegression, LogisticRegressionModel, RandomForestClassifier, RandomForestClassificationModel, GBTClassifier, GBTClassificationModel}\nimport org.apache.spark.ml.evaluation.{BinaryClassificationEvaluator, RegressionEvaluator, MulticlassClassificationEvaluator}\n\n// MLlib functions\nimport org.apache.spark.mllib.linalg.{Vector, Vectors}\nimport org.apache.spark.mllib.util.MLUtils\nimport org.apache.spark.mllib.classification.{LogisticRegressionWithLBFGS, LogisticRegressionModel}\nimport org.apache.spark.mllib.regression.{LabeledPoint, LinearRegressionWithSGD, LinearRegressionModel}\nimport org.apache.spark.mllib.tree.{GradientBoostedTrees, RandomForest}\nimport org.apache.spark.mllib.tree.configuration.BoostingStrategy\nimport org.apache.spark.mllib.tree.model.{GradientBoostedTreesModel, RandomForestModel, Predict}\nimport org.apache.spark.mllib.evaluation.{BinaryClassificationMetrics, MulticlassMetrics, RegressionMetrics}\n\nval sqlContext = new SQLContext(sc)",
"sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@179ce90b"
]
],
[
[
"<hr>",
"_____no_output_____"
],
[
"<a name=\"ingestion\"></a>\n## 1. DATA INGESTION: Read in joined 0.1% taxi trip and fare file (as tsv), format and clean data, and create data-frame",
"_____no_output_____"
],
[
"###### Specify the location of the input file and the storage location for models in the Azure blob that is attached to the cluster",
"_____no_output_____"
]
],
[
[
"// Location of training data\nval taxi_train_file = sc.textFile(\"wasb://[email protected]/Data/NYCTaxi/JoinedTaxiTripFare.Point1Pct.Train.tsv\")\nval header = taxi_train_file.first;\n\n// Set model storage directory path. This is where models will be saved.\nval modelDir = \"wasb:///user/remoteuser/NYCTaxi/Models/\"; //The last backslash is needed;",
"modelDir: String = wasb:///user/remoteuser/NYCTaxi/Models/"
]
],
[
[
"###### Import data, create RDD and define data-frame according to schema",
"_____no_output_____"
]
],
[
[
"val starttime = Calendar.getInstance().getTime()\n\n/* DEFINE SCHEMA BASED ON HEADER OF THE FILE */\nval sqlContext = new SQLContext(sc)\nval taxi_schema = StructType(\n Array(\n StructField(\"medallion\", StringType, true), \n StructField(\"hack_license\", StringType, true),\n StructField(\"vendor_id\", StringType, true), \n StructField(\"rate_code\", DoubleType, true),\n StructField(\"store_and_fwd_flag\", StringType, true),\n StructField(\"pickup_datetime\", StringType, true),\n StructField(\"dropoff_datetime\", StringType, true),\n StructField(\"pickup_hour\", DoubleType, true),\n StructField(\"pickup_week\", DoubleType, true),\n StructField(\"weekday\", DoubleType, true),\n StructField(\"passenger_count\", DoubleType, true),\n StructField(\"trip_time_in_secs\", DoubleType, true),\n StructField(\"trip_distance\", DoubleType, true),\n StructField(\"pickup_longitude\", DoubleType, true),\n StructField(\"pickup_latitude\", DoubleType, true),\n StructField(\"dropoff_longitude\", DoubleType, true),\n StructField(\"dropoff_latitude\", DoubleType, true),\n StructField(\"direct_distance\", StringType, true),\n StructField(\"payment_type\", StringType, true),\n StructField(\"fare_amount\", DoubleType, true),\n StructField(\"surcharge\", DoubleType, true),\n StructField(\"mta_tax\", DoubleType, true),\n StructField(\"tip_amount\", DoubleType, true),\n StructField(\"tolls_amount\", DoubleType, true),\n StructField(\"total_amount\", DoubleType, true),\n StructField(\"tipped\", DoubleType, true),\n StructField(\"tip_class\", DoubleType, true)\n )\n )\n\n/* CAST VARIABLES ACCORDING TO SCHEMA */\nval taxi_temp = (taxi_train_file.map(_.split(\"\\t\"))\n .filter((r) => r(0) != \"medallion\")\n .map(p => Row(p(0), p(1), p(2),\n p(3).toDouble, p(4), p(5), p(6), p(7).toDouble, p(8).toDouble, p(9).toDouble, p(10).toDouble,\n p(11).toDouble, p(12).toDouble, p(13).toDouble, p(14).toDouble, p(15).toDouble, p(16).toDouble,\n p(17), p(18), p(19).toDouble, p(20).toDouble, p(21).toDouble, p(22).toDouble,\n p(23).toDouble, p(24).toDouble, p(25).toDouble, p(26).toDouble)))\n\n\n/* CREATE INITIAL DATA-FRAME, DROP COLUMNS, AND CREATE CLEANED DATA-FRAME BY FILTERING FOR UNDESIRED VALUES OR OUTLIERS */\nval taxi_train_df = sqlContext.createDataFrame(taxi_temp, taxi_schema)\n\nval taxi_df_train_cleaned = (taxi_train_df.drop(taxi_train_df.col(\"medallion\"))\n .drop(taxi_train_df.col(\"hack_license\")).drop(taxi_train_df.col(\"store_and_fwd_flag\"))\n .drop(taxi_train_df.col(\"pickup_datetime\")).drop(taxi_train_df.col(\"dropoff_datetime\"))\n .drop(taxi_train_df.col(\"pickup_longitude\")).drop(taxi_train_df.col(\"pickup_latitude\"))\n .drop(taxi_train_df.col(\"dropoff_longitude\")).drop(taxi_train_df.col(\"dropoff_latitude\"))\n .drop(taxi_train_df.col(\"surcharge\")).drop(taxi_train_df.col(\"mta_tax\"))\n .drop(taxi_train_df.col(\"direct_distance\")).drop(taxi_train_df.col(\"tolls_amount\"))\n .drop(taxi_train_df.col(\"total_amount\")).drop(taxi_train_df.col(\"tip_class\"))\n .filter(\"passenger_count > 0 and passenger_count < 8 AND payment_type in ('CSH', 'CRD') AND tip_amount >= 0 AND tip_amount < 30 AND fare_amount >= 1 AND fare_amount < 150 AND trip_distance > 0 AND trip_distance < 100 AND trip_time_in_secs > 30 AND trip_time_in_secs < 7200\"));\n\n/* CACHE AND MATERIALIZE CLEANED DATA-FRAME IN MEMORY */\ntaxi_df_train_cleaned.cache()\ntaxi_df_train_cleaned.count()\n\n/* REGISTER DATA-FRAME AS A TEMP-TABLE IN SQL-CONTEXT */\ntaxi_df_train_cleaned.registerTempTable(\"taxi_train\")\n\n/* GET TIME TO RUN THE CELL */\nval endtime = Calendar.getInstance().getTime()\nval elapsedtime = ((endtime.getTime() - starttime.getTime())/1000).toString;\nprintln(\"Time taken to run the above cell: \" + elapsedtime + \" seconds.\");",
"Time taken to run the above cell: 8 seconds."
]
],
[
[
"###### Query table and import results in a data-frame",
"_____no_output_____"
]
],
[
[
"val sqlStatement = \"\"\"\n SELECT fare_amount, passenger_count, tip_amount, tipped\n FROM taxi_train \n WHERE passenger_count > 0 AND passenger_count < 7\n AND fare_amount > 0 AND fare_amount < 200\n AND payment_type in ('CSH', 'CRD')\n AND tip_amount > 0 AND tip_amount < 25\n\"\"\"\nval sqlResultsDF = sqlContext.sql(sqlStatement)\n\nsqlResultsDF.show(3)",
"+-----------+---------------+----------+------+\n|fare_amount|passenger_count|tip_amount|tipped|\n+-----------+---------------+----------+------+\n| 13.5| 1.0| 2.9| 1.0|\n| 16.0| 2.0| 3.4| 1.0|\n| 10.5| 2.0| 1.0| 1.0|\n+-----------+---------------+----------+------+\nonly showing top 3 rows"
]
],
[
[
"<hr>",
"_____no_output_____"
],
[
"<a name=\"exploration\"></a>\n## 2. DATA EXPLORATION AND VISUALIZATION: Plotting of target variables and features",
"_____no_output_____"
],
[
"###### Query a table and import results into data-frame, create a local pandas data-frame, and then visualize using Jupyter's autovisualization feature",
"_____no_output_____"
],
[
"###### NOTE: \nYou can use the <b>`%%local`</b> magic to run your code locally on the Jupyter server, which is the headnode of the HDInsight cluster. Here's a typical use case for this scenario. \n\nBy default, the output of any code snippet that you run from a Jupyter notebook is available within the context of the session that is persisted on the worker nodes. However, if you want to save a trip to the worker nodes for every computation, and all the data that you need for your computation is available locally on the Jupyter server node (which is the headnode), you can use the `%%local` magic to run the code snippet on the Jupyter server. Typically, you would use `%%local` magic in conjunction with the `%%sql` magic with `-o` parameter. The `-o` parameter would persist the output of the SQL query locally and then `%%local` magic would trigger the next set of code snippet to run locally against the output of the SQL queries that is persisted locally.\n\nIn the cells below, the %%local magic creates a local data-frame, sqlResults, which can be used for plotting using matplotlib. This is used multiple times in this walkthrough. <b>If the amount of data is large, please sample to create a data-frame that can fit in local memory</b>.",
"_____no_output_____"
],
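[
"For instance, if the query result is too large to fit on the headnode, it can be down-sampled before being persisted locally. The `-m sample -r` options below are assumed to be supported by the sparkmagic version installed on the cluster:\n\n```\n%%sql -q -o sqlResults -m sample -r 0.1\nSELECT fare_amount, tip_amount FROM taxi_train\n```",
"_____no_output_____"
],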
[
"----------\n##### NOTE:\n\nAutomatic visualization of queries \n\nThe Spark kernel automatically visualizes the output of SQL (HiveQL) queries. You are given the option to choose between several different types of visualizations:\n- Table\n- Pie\n- Line \n- Area\n- Bar",
"_____no_output_____"
]
],
[
[
"%%sql -q -o sqlResults\nSELECT fare_amount, passenger_count, tip_amount, tipped FROM taxi_train WHERE passenger_count > 0 AND passenger_count < 7 AND fare_amount > 0 AND fare_amount < 200 AND payment_type in ('CSH', 'CRD') AND tip_amount > 0 AND tip_amount < 25",
"_____no_output_____"
]
],
[
[
"### Visualize using Jupyter autovisualization feature",
"_____no_output_____"
]
],
[
[
"%%local\nsqlResults",
"_____no_output_____"
]
],
[
[
"### One can plot using Python code once the data-frame is in local context as pandas data-frame",
"_____no_output_____"
]
],
[
[
"%%local\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# TIP BY PAYMENT TYPE AND PASSENGER COUNT\nax1 = sqlResults[['tip_amount']].plot(kind='hist', bins=25, facecolor='lightblue')\nax1.set_title('Tip amount distribution')\nax1.set_xlabel('Tip Amount ($)')\nax1.set_ylabel('Counts')\nplt.suptitle('')\nplt.show()\n\n# TIP BY PASSENGER COUNT\nax2 = sqlResults.boxplot(column=['tip_amount'], by=['passenger_count'])\nax2.set_title('Tip amount by Passenger count')\nax2.set_xlabel('Passenger count')\nax2.set_ylabel('Tip Amount ($)')\nplt.suptitle('')\nplt.show()\n\n# TIP AMOUNT BY FARE AMOUNT, POINTS ARE SCALED BY PASSENGER COUNT\nax = sqlResults.plot(kind='scatter', x= 'fare_amount', y = 'tip_amount', c='blue', alpha = 0.10, s=5*(sqlResults.passenger_count))\nax.set_title('Tip amount by Fare amount')\nax.set_xlabel('Fare Amount ($)')\nax.set_ylabel('Tip Amount ($)')\nplt.axis([-2, 80, -2, 20])\nplt.show()",
"_____no_output_____"
]
],
[
[
"<a name=\"transformation\"></a>\n## 3. CREATING FEATURES, TRANSFORMATION OF FEATURES, AND DATA PREP FOR INPUT INTO MODELING FUNCTIONS",
"_____no_output_____"
],
[
"### Create a new feature by binning hours into traffic time buckets",
"_____no_output_____"
]
],
[
[
"/* CREATE FOUR BUCKETS FOR TRAFFIC TIMES */\nval sqlStatement = \"\"\"\n SELECT *,\n CASE\n WHEN (pickup_hour <= 6 OR pickup_hour >= 20) THEN \"Night\" \n WHEN (pickup_hour >= 7 AND pickup_hour <= 10) THEN \"AMRush\" \n WHEN (pickup_hour >= 11 AND pickup_hour <= 15) THEN \"Afternoon\"\n WHEN (pickup_hour >= 16 AND pickup_hour <= 19) THEN \"PMRush\"\n END as TrafficTimeBins\n FROM taxi_train \n\"\"\"\nval taxi_df_train_with_newFeatures = sqlContext.sql(sqlStatement)\n\n/* CACHE DATA-FRAME IN MEMORY & MATERIALIZE DF IN MEMORY */\ntaxi_df_train_with_newFeatures.cache()\ntaxi_df_train_with_newFeatures.count()",
"res35: Long = 126050"
]
],
[
[
"### Indexing and one-hot encoding of categorical features",
"_____no_output_____"
],
[
"Here we transform only four variables, which are character strings, as examples. Other variables, such as weekday, that are represented by numerical values can also be indexed as categorical variables.\n\nFor indexing we use the StringIndexer function, and for one-hot encoding the OneHotEncoder function from Spark ML.",
"_____no_output_____"
]
],
[
[
"// HERE WE CREATE INDEXES, AND ONE-HOT ENCODED VECTORS FOR SEVERAL CATEGORICAL FEATURES\nval starttime = Calendar.getInstance().getTime()\n\nval stringIndexer = new StringIndexer().setInputCol(\"vendor_id\").setOutputCol(\"vendorIndex\").fit(taxi_df_train_with_newFeatures)\nval indexed = stringIndexer.transform(taxi_df_train_with_newFeatures)\nval encoder = new OneHotEncoder().setInputCol(\"vendorIndex\").setOutputCol(\"vendorVec\")\nval encoded1 = encoder.transform(indexed)\n\nval stringIndexer = new StringIndexer().setInputCol(\"rate_code\").setOutputCol(\"rateIndex\").fit(encoded1)\nval indexed = stringIndexer.transform(encoded1)\nval encoder = new OneHotEncoder().setInputCol(\"rateIndex\").setOutputCol(\"rateVec\")\nval encoded2 = encoder.transform(indexed)\n\nval stringIndexer = new StringIndexer().setInputCol(\"payment_type\").setOutputCol(\"paymentIndex\").fit(encoded2)\nval indexed = stringIndexer.transform(encoded2)\nval encoder = new OneHotEncoder().setInputCol(\"paymentIndex\").setOutputCol(\"paymentVec\")\nval encoded3 = encoder.transform(indexed)\n\nval stringIndexer = new StringIndexer().setInputCol(\"TrafficTimeBins\").setOutputCol(\"TrafficTimeBinsIndex\").fit(encoded3)\nval indexed = stringIndexer.transform(encoded3)\nval encoder = new OneHotEncoder().setInputCol(\"TrafficTimeBinsIndex\").setOutputCol(\"TrafficTimeBinsVec\")\nval encodedFinal = encoder.transform(indexed)\n\n/* GET TIME TO RUN THE CELL */\nval endtime = Calendar.getInstance().getTime()\nval elapsedtime = ((endtime.getTime() - starttime.getTime())/1000).toString;\nprintln(\"Time taken to run the above cell: \" + elapsedtime + \" seconds.\");",
"Time taken to run the above cell: 4 seconds."
]
],
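[
[
"The four StringIndexer/OneHotEncoder pairs in the previous cell can equivalently be chained into a single Spark ML `Pipeline`, which keeps the fitted stages together for later reuse on new data. A sketch under that assumption (only the first pair is written out; the others follow the same pattern):\n\n```\n// Illustrative sketch: one indexer/encoder pair shown, the rest are analogous\nval stages: Array[org.apache.spark.ml.PipelineStage] = Array(\n new StringIndexer().setInputCol(\"vendor_id\").setOutputCol(\"vendorIndex\"),\n new OneHotEncoder().setInputCol(\"vendorIndex\").setOutputCol(\"vendorVec\")\n // ...repeat the pair for rate_code, payment_type, and TrafficTimeBins\n)\nval pipelineModel = new Pipeline().setStages(stages).fit(taxi_df_train_with_newFeatures)\nval encodedFinal = pipelineModel.transform(taxi_df_train_with_newFeatures)\n```",
"_____no_output_____"
]
],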
[
[
"### Split the data-set into training and test sets. Add a random number (between 0 and 1) to each row (in the \"rand\" column); this column can be used to select cross-validation folds during training",
"_____no_output_____"
]
],
[
[
"val starttime = Calendar.getInstance().getTime()\n\nval samplingFraction = 0.25;\nval trainingFraction = 0.75; \nval testingFraction = (1-trainingFraction);\nval seed = 1234;\nval encodedFinalSampledTmp = encodedFinal.sample(withReplacement = false, fraction = samplingFraction, seed = seed)\nval sampledDFcount = encodedFinalSampledTmp.count().toInt\n\nval generateRandomDouble = udf(() => {\n scala.util.Random.nextDouble\n})\n\nval encodedFinalSampled = encodedFinalSampledTmp.withColumn(\"rand\", generateRandomDouble());\n\n// SPLIT SAMPLED DATA-FRAME INTO TRAIN/TEST, WITH A RANDOM COLUMN ADDED FOR DOING CV (SHOWN LATER)\n// INCLUDE RAND COLUMN FOR CREATING CROSS-VALIDATION FOLDS\nval splits = encodedFinalSampled.randomSplit(Array(trainingFraction, testingFraction), seed = seed)\nval trainData = splits(0)\nval testData = splits(1)\n\n/* GET TIME TO RUN THE CELL */\nval endtime = Calendar.getInstance().getTime()\nval elapsedtime = ((endtime.getTime() - starttime.getTime())/1000).toString;\nprintln(\"Time taken to run the above cell: \" + elapsedtime + \" seconds.\");",
"Time taken to run the above cell: 2 seconds."
]
],
[
[
"### Specify the target (dependent) variable and the features to be used for training. Create indexed or one-hot encoded training and testing input LabeledPoint RDDs or data-frames.",
"_____no_output_____"
]
],
[
[
"val starttime = Calendar.getInstance().getTime()\n\n// MAP NAMES OF FEATURES AND TARGETS FOR CLASSIFICATION AND REGRESSION PROBLEMS.\nval featuresIndOneHot = List(\"paymentVec\", \"vendorVec\", \"rateVec\", \"TrafficTimeBinsVec\", \"pickup_hour\", \"weekday\", \"passenger_count\", \"trip_time_in_secs\", \"trip_distance\", \"fare_amount\").map(encodedFinalSampled.columns.indexOf(_))\nval featuresIndIndex = List(\"paymentIndex\", \"vendorIndex\", \"rateIndex\", \"TrafficTimeBinsIndex\", \"pickup_hour\", \"weekday\", \"passenger_count\", \"trip_time_in_secs\", \"trip_distance\", \"fare_amount\").map(encodedFinalSampled.columns.indexOf(_))\n\n// Specify the target for classification ('tipped') and regression ('tip_amount') problems\nval targetIndBinary = List(\"tipped\").map(encodedFinalSampled.columns.indexOf(_))\nval targetIndRegression = List(\"tip_amount\").map(encodedFinalSampled.columns.indexOf(_))\n\n// Indexed LabeledPoint RDD objects\nval indexedTRAINbinary = trainData.rdd.map(r => LabeledPoint(r.getDouble(targetIndBinary(0).toInt), Vectors.dense(featuresIndIndex.map(r.getDouble(_)).toArray)))\nval indexedTESTbinary = testData.rdd.map(r => LabeledPoint(r.getDouble(targetIndBinary(0).toInt), Vectors.dense(featuresIndIndex.map(r.getDouble(_)).toArray)))\nval indexedTRAINreg = trainData.rdd.map(r => LabeledPoint(r.getDouble(targetIndRegression(0).toInt), Vectors.dense(featuresIndIndex.map(r.getDouble(_)).toArray)))\nval indexedTESTreg = testData.rdd.map(r => LabeledPoint(r.getDouble(targetIndRegression(0).toInt), Vectors.dense(featuresIndIndex.map(r.getDouble(_)).toArray)))\n\n//Indexed DFs that can be used for training using Spark ML functions\nval indexedTRAINbinaryDF = indexedTRAINbinary.toDF()\nval indexedTESTbinaryDF = indexedTESTbinary.toDF()\nval indexedTRAINregDF = indexedTRAINreg.toDF()\nval indexedTESTregDF = indexedTESTreg.toDF()\n\n// One-hot encoded (vectorized) DFs that can be used for training using Spark ML functions\nval assemblerOneHot = new VectorAssembler().setInputCols(Array(\"paymentVec\", \"vendorVec\", \"rateVec\", \"TrafficTimeBinsVec\", \"pickup_hour\", \"weekday\", \"passenger_count\", \"trip_time_in_secs\", \"trip_distance\", \"fare_amount\")).setOutputCol(\"features\")\nval OneHotTRAIN = assemblerOneHot.transform(trainData) \nval OneHotTEST = assemblerOneHot.transform(testData)\n\n/* GET TIME TO RUN THE CELL */\nval endtime = Calendar.getInstance().getTime()\nval elapsedtime = ((endtime.getTime() - starttime.getTime())/1000).toString;\nprintln(\"Time taken to run the above cell: \" + elapsedtime + \" seconds.\");",
"Time taken to run the above cell: 4 seconds."
]
],
[
[
"### Automatically categorizing and vectorizing features and the target for use as input in tree-based modeling functions in Spark ML\nProperly categorize the target and features for use in tree-based modeling functions in Spark ML: \n1. The target for binary classification (tipped - 0/1) is binarized based on a threshold of 0.5\n2. Features are automatically categorized: if the number of distinct numerical values for a feature is < 32, that feature is treated as categorical.",
"_____no_output_____"
]
],
[
[
"// CATEGORIZE FEATURES AND BINARIZE TARGET FOR BINARY CLASSIFICATION PROBLEM //\n// Train data\nval indexer = new VectorIndexer().setInputCol(\"features\").setOutputCol(\"featuresCat\").setMaxCategories(32)\nval indexerModel = indexer.fit(indexedTRAINbinaryDF)\nval indexedTrainwithCatFeat = indexerModel.transform(indexedTRAINbinaryDF)\nval binarizer: Binarizer = new Binarizer().setInputCol(\"label\").setOutputCol(\"labelBin\").setThreshold(0.5)\nval indexedTRAINwithCatFeatBinTarget = binarizer.transform(indexedTrainwithCatFeat)\n\n// Test data\nval indexerModel = indexer.fit(indexedTESTbinaryDF)\nval indexedTestwithCatFeat = indexerModel.transform(indexedTESTbinaryDF)\nval binarizer: Binarizer = new Binarizer().setInputCol(\"label\").setOutputCol(\"labelBin\").setThreshold(0.5)\nval indexedTESTwithCatFeatBinTarget = binarizer.transform(indexedTestwithCatFeat)\n\n// CATEGORIZE FEATURES FOR REGRESSION PROBLEM //\n// Create properly indexed and categorized DFs for tree-based models\n// Train data\nval indexer = new VectorIndexer().setInputCol(\"features\").setOutputCol(\"featuresCat\").setMaxCategories(32)\nval indexerModel = indexer.fit(indexedTRAINregDF)\nval indexedTRAINwithCatFeat = indexerModel.transform(indexedTRAINregDF)\n\n// Test data\nval indexerModel = indexer.fit(indexedTESTregDF)\nval indexedTESTwithCatFeat = indexerModel.transform(indexedTESTregDF)",
"indexedTESTwithCatFeat: org.apache.spark.sql.DataFrame = [label: double, features: vector, featuresCat: vector]"
]
],
[
[
"<hr>",
"_____no_output_____"
],
[
"<a name=\"binary\"></a>\n## 4. BINARY CLASSIFICATION MODEL TRAINING: Predicting tip or no tip (target: tipped = 1/0)",
"_____no_output_____"
],
[
"### Create a Logistic regression model using Spark ML's LogisticRegression function, save the model in blob, and predict on test data",
"_____no_output_____"
]
],
[
[
"// Create Logistic regression model \nval lr = new LogisticRegression().setLabelCol(\"tipped\").setFeaturesCol(\"features\").setMaxIter(10).setRegParam(0.3).setElasticNetParam(0.8)\nval lrModel = lr.fit(OneHotTRAIN)\n\n// Predict on test data-set\nval predictions = lrModel.transform(OneHotTEST)\n\n// Select BinaryClassificationEvaluator to compute test error\nval evaluator = new BinaryClassificationEvaluator().setLabelCol(\"tipped\").setRawPredictionCol(\"probability\").setMetricName(\"areaUnderROC\")\nval ROC = evaluator.evaluate(predictions)\nprintln(\"ROC on test data = \" + ROC)\n\n// Save Model\nval datestamp = Calendar.getInstance().getTime().toString.replaceAll(\" \", \".\").replaceAll(\":\", \"_\");\nval modelName = \"LogisticRegression__\"\nval filename = modelDir.concat(modelName).concat(datestamp)\nlrModel.save(filename);",
"_____no_output_____"
]
],
[
[
"##### Example: Load saved model and score test data-set",
"_____no_output_____"
]
],
[
[
"val starttime = Calendar.getInstance().getTime()\n\nval savedModel = org.apache.spark.ml.classification.LogisticRegressionModel.load(filename)\nprintln(s\"Coefficients: ${savedModel.coefficients} Intercept: ${savedModel.intercept}\")\n\n// score the model on test data.\nval predictions = savedModel.transform(OneHotTEST).select(\"tipped\",\"probability\",\"rawPrediction\")\npredictions.registerTempTable(\"testResults\")\n\n// Select BinaryClassificationEvaluator to compute test error\nval evaluator = new BinaryClassificationEvaluator().setLabelCol(\"tipped\").setRawPredictionCol(\"probability\").setMetricName(\"areaUnderROC\")\nval ROC = evaluator.evaluate(predictions)\n\n/* GET TIME TO RUN THE CELL */\nval endtime = Calendar.getInstance().getTime()\nval elapsedtime = ((endtime.getTime() - starttime.getTime())/1000).toString;\nprintln(\"Time taken to run the above cell: \" + elapsedtime + \" seconds.\");\n\nprintln(\"ROC on test data = \" + ROC)",
"ROC on test data = 0.9827381497557599"
]
],
[
[
"##### Example: Use Python on local pandas data-frames to plot ROC curve",
"_____no_output_____"
]
],
[
[
"%%sql -q -o sqlResults\nselect tipped, probability from testResults",
"_____no_output_____"
],
[
"%%local\n%matplotlib inline\nfrom sklearn.metrics import roc_curve,auc\n\nsqlResults['probFloat'] = sqlResults.apply(lambda row: row['probability'].values()[0][1], axis=1)\npredictions_pddf = sqlResults[[\"tipped\",\"probFloat\"]]\n\n#predictions_pddf = sqlResults.rename(columns={'_1': 'probability', 'tipped': 'label'})\nprob = predictions_pddf[\"probFloat\"] \nfpr, tpr, thresholds = roc_curve(predictions_pddf['tipped'], prob, pos_label=1);\nroc_auc = auc(fpr, tpr)\n\nplt.figure(figsize=(5,5))\nplt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)\nplt.plot([0, 1], [0, 1], 'k--')\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.05])\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('ROC Curve')\nplt.legend(loc=\"lower right\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Create Random Forest classification model using Spark ML RandomForestClassifier function, and evaluate model on test-data",
"_____no_output_____"
]
],
[
[
"val starttime = Calendar.getInstance().getTime()\n\n// Random Forest Classifier with Spark ML\nval rf = new RandomForestClassifier().setLabelCol(\"labelBin\").setFeaturesCol(\"featuresCat\").setNumTrees(10).setSeed(1234)\n\n// Fit the model\nval rfModel = rf.fit(indexedTRAINwithCatFeatBinTarget)\nval predictions = rfModel.transform(indexedTESTwithCatFeatBinTarget)\n\nval evaluator = new MulticlassClassificationEvaluator().setLabelCol(\"label\").setPredictionCol(\"prediction\").setMetricName(\"f1\")\nval Test_f1Score = evaluator.evaluate(predictions)\nprintln(\"F1 score on test data: \" + Test_f1Score);\n\n/* GET TIME TO RUN THE CELL */\nval endtime = Calendar.getInstance().getTime()\nval elapsedtime = ((endtime.getTime() - starttime.getTime())/1000).toString;\nprintln(\"Time taken to run the above cell: \" + elapsedtime + \" seconds.\");\n\n//* Binary classification evaluation metrics*//\nval evaluator = new BinaryClassificationEvaluator().setLabelCol(\"label\").setRawPredictionCol(\"probability\").setMetricName(\"areaUnderROC\")\nval ROC = evaluator.evaluate(predictions)\nprintln(\"ROC on test data = \" + ROC)",
"ROC on test data = 0.9847103571552683"
]
],
[
[
"### Create Gradient boosting tree classification model using MLlib's GradientBoostedTrees function, and evaluate model on test-data",
"_____no_output_____"
]
],
[
[
"// Train a GBT Classification model using MLlib and LabeledPoint\nval starttime = Calendar.getInstance().getTime()\n\nval boostingStrategy = BoostingStrategy.defaultParams(\"Classification\")\nboostingStrategy.numIterations = 20\nboostingStrategy.treeStrategy.numClasses = 2\nboostingStrategy.treeStrategy.maxDepth = 5\n// categoricalFeaturesInfo maps the index of each indexed categorical feature to its number of categories\nboostingStrategy.treeStrategy.categoricalFeaturesInfo = Map[Int, Int]((0,2),(1,2),(2,6),(3,4))\n\nval gbtModel = GradientBoostedTrees.train(indexedTRAINbinary, boostingStrategy)\n\n// Save Model in blob location\nval datestamp = Calendar.getInstance().getTime().toString.replaceAll(\" \", \".\").replaceAll(\":\", \"_\");\nval modelName = \"GBT_Classification__\"\nval filename = modelDir.concat(modelName).concat(datestamp)\ngbtModel.save(sc, filename);\n\n// Evaluate model on test instances and compute test error\nval labelAndPreds = indexedTESTbinary.map { point =>\n val prediction = gbtModel.predict(point.features)\n (point.label, prediction)\n}\nval testErr = labelAndPreds.filter(r => r._1 != r._2).count.toDouble / indexedTESTbinary.count()\n//println(\"Learned classification GBT model:\\n\" + gbtModel.toDebugString)\nprintln(\"Test Error = \" + testErr)\n\n// Use Binary and Multiclass Metrics to evaluate model on Test data\nval metrics = new MulticlassMetrics(labelAndPreds)\nprintln(s\"Precision: ${metrics.precision}\")\nprintln(s\"Recall: ${metrics.recall}\")\nprintln(s\"F1 Score: ${metrics.fMeasure}\")\n\nval metrics = new BinaryClassificationMetrics(labelAndPreds)\nprintln(s\"Area under PR curve: ${metrics.areaUnderPR}\")\nprintln(s\"Area under ROC curve: ${metrics.areaUnderROC}\")\n\n/* GET TIME TO RUN THE CELL */\nval endtime = Calendar.getInstance().getTime()\nval elapsedtime = ((endtime.getTime() - starttime.getTime())/1000).toString;\nprintln(\"Time taken to run the above cell: \" + elapsedtime + \" seconds.\");\n\nprintln(s\"Area under ROC curve: ${metrics.areaUnderROC}\")",
"Area under ROC curve: 0.9846895479241554"
]
],
[
[
"<hr>",
"_____no_output_____"
],
[
"<a name=\"regression\"></a>\n## 5. REGRESSION MODEL TRAINING: Predicting tip amount",
"_____no_output_____"
],
[
"### Create Linear Regression model using Spark ML LinearRegression function, save model and evaluate model on test-data",
"_____no_output_____"
]
],
[
[
"// Create Regularized Linear Regression model using Spark ML function and data-frame\nval starttime = Calendar.getInstance().getTime()\n\nval lr = new LinearRegression().setLabelCol(\"tip_amount\").setFeaturesCol(\"features\").setMaxIter(10).setRegParam(0.3).setElasticNetParam(0.8)\n\n// Fit the model using data-frame\nval lrModel = lr.fit(OneHotTRAIN)\nprintln(s\"Coefficients: ${lrModel.coefficients} Intercept: ${lrModel.intercept}\")\n\n// Summarize the model over the training set and print out some metrics\nval trainingSummary = lrModel.summary\nprintln(s\"numIterations: ${trainingSummary.totalIterations}\")\nprintln(s\"objectiveHistory: ${trainingSummary.objectiveHistory.toList}\")\ntrainingSummary.residuals.show()\nprintln(s\"RMSE: ${trainingSummary.rootMeanSquaredError}\")\nprintln(s\"r2: ${trainingSummary.r2}\")\n\n// Save Model in blob\nval datestamp = Calendar.getInstance().getTime().toString.replaceAll(\" \", \".\").replaceAll(\":\", \"_\");\nval modelName = \"LinearRegression__\"\nval filename = modelDir.concat(modelName).concat(datestamp)\nlrModel.save(filename);\n\n// Print coefficients\nprintln(s\"Coefficients: ${lrModel.coefficients} Intercept: ${lrModel.intercept}\")\n\n// score the model on test data.\nval predictions = lrModel.transform(OneHotTEST)\n\n// evaluate the model on Test data\nval evaluator = new RegressionEvaluator().setLabelCol(\"tip_amount\").setPredictionCol(\"prediction\").setMetricName(\"r2\")\nval r2 = evaluator.evaluate(predictions)\nprintln(\"R-sqr on test data = \" + r2)\n\n/* GET TIME TO RUN THE CELL */\nval endtime = Calendar.getInstance().getTime()\nval elapsedtime = ((endtime.getTime() - starttime.getTime())/1000).toString;\nprintln(\"Time taken to run the above cell: \" + elapsedtime + \" seconds.\");",
"Time taken to run the above cell: 13 seconds."
]
],
[
[
"##### EXAMPLE: Load a saved LinearRegression model from blob and score test data-set",
"_____no_output_____"
]
],
[
[
"val starttime = Calendar.getInstance().getTime()\n\nval savedModel = org.apache.spark.ml.regression.LinearRegressionModel.load(filename)\nprintln(s\"Coefficients: ${savedModel.coefficients} Intercept: ${savedModel.intercept}\")\n\n// score the model on test data.\nval predictions = savedModel.transform(OneHotTEST).select(\"tip_amount\",\"prediction\")\npredictions.registerTempTable(\"testResults\")\n\n// evaluate the model on Test data\nval evaluator = new RegressionEvaluator().setLabelCol(\"tip_amount\").setPredictionCol(\"prediction\").setMetricName(\"r2\")\nval r2 = evaluator.evaluate(predictions)\nprintln(\"R-sqr on test data = \" + r2)\n\n/* GET TIME TO RUN THE CELL */\nval endtime = Calendar.getInstance().getTime()\nval elapsedtime = ((endtime.getTime() - starttime.getTime())/1000).toString;\nprintln(\"Time taken to run the above cell: \" + elapsedtime + \" seconds.\");\n\nprintln(\"R-sqr on test data = \" + r2)",
"R-sqr on test data = 0.5960320470835743"
]
],
[
[
"##### Example: Query test results as data-frame and visualize using Jupyter autoviz & Python matplotlib",
"_____no_output_____"
]
],
[
[
"%%sql -q -o sqlResults\nselect * from testResults",
"_____no_output_____"
],
[
"%%local\nsqlResults",
"_____no_output_____"
]
],
[
[
"##### Create plots using Python matplotlib",
"_____no_output_____"
]
],
[
[
"%%local\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nax = sqlResults.plot(kind='scatter', figsize = (6,6), x='tip_amount', y='prediction', color='blue', alpha = 0.25, label='Actual vs. predicted');\nfit = np.polyfit(sqlResults['tip_amount'], sqlResults['prediction'], deg=1)\nax.set_title('Actual vs. Predicted Tip Amounts ($)')\nax.set_xlabel(\"Actual\")\nax.set_ylabel(\"Predicted\")\nax.plot(sqlResults['tip_amount'], fit[0] * sqlResults['tip_amount'] + fit[1], color='magenta')\nplt.axis([-1, 15, -1, 8])\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Create Gradient boosting tree regression model using Spark ML GBTRegressor function, and evaluate model on test-data",
"_____no_output_____"
]
],
[
[
"val starttime = Calendar.getInstance().getTime()\n\n// Train a GBT Regression model.\nval gbt = new GBTRegressor().setLabelCol(\"label\").setFeaturesCol(\"featuresCat\").setMaxIter(10)\nval gbtModel = gbt.fit(indexedTRAINwithCatFeat)\n\n// Make predictions.\nval predictions = gbtModel.transform(indexedTESTwithCatFeat)\n\n// Compute Test set R2\nval evaluator = new RegressionEvaluator().setLabelCol(\"label\").setPredictionCol(\"prediction\").setMetricName(\"r2\")\nval Test_R2 = evaluator.evaluate(predictions)\n\n\n/* GET TIME TO RUN THE CELL */\nval endtime = Calendar.getInstance().getTime()\nval elapsedtime = ((endtime.getTime() - starttime.getTime())/1000).toString;\nprintln(\"Time taken to run the above cell: \" + elapsedtime + \" seconds.\");\n\nprintln(\"Test R-sqr is: \" + Test_R2);",
"Test R-sqr is: 0.7667229448874853"
]
],
[
[
"<hr>",
"_____no_output_____"
],
[
"<a name=\"advanced\"></a>\n## 6. ADVANCED MODELING UTILITIES: In this section, we show ML utilities that are frequently used for model optimization\n\n<b>We show three different ways to optimize ML models using parameter sweeping:</b>\n1. Split data into train & validation sets, optimize model using hyper-parameter sweeping on training set and evaluation on validation set (Linear Regression)\n2. Optimize model using cross-validation and hyper-parameter sweeping, using Spark ML's CrossValidator function (Binary Classification)\n3. Optimize model using custom cross-validation and parameter-sweeping code to utilize any ML function and parameter-set (Linear Regression)",
"_____no_output_____"
],
[
"### Split data into train & validation sets, optimize model using hyper-parameter sweeping on training set and evaluation on validation set (Linear Regression)",
"_____no_output_____"
]
],
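The train/validation sweep described above can be sketched outside Spark in plain Python. The toy data, the through-origin ridge fit, and the grid below are illustrative stand-ins, not the Spark ML API:

```python
import random

random.seed(0)

# Toy data: y = 2*x plus uniform noise
data = [(x, 2 * x + random.uniform(-0.5, 0.5)) for x in range(100)]
random.shuffle(data)

# 75/25 train/validation split, mirroring setTrainRatio(0.75)
split = int(0.75 * len(data))
train, valid = data[:split], data[split:]

def fit_ridge(points, reg):
    """1-D ridge fit through the origin: w = sum(x*y) / (sum(x*x) + reg)."""
    return (sum(x * y for x, y in points)
            / (sum(x * x for x, _ in points) + reg))

def mse(points, w):
    return sum((y - w * x) ** 2 for x, y in points) / len(points)

# Sweep the grid: train on the training split, score on the validation split
grid = [0.1, 0.01, 0.001]            # analogous to addGrid(lr.regParam, ...)
best_reg, best_err = None, float("inf")
for reg in grid:
    err = mse(valid, fit_ridge(train, reg))
    if err < best_err:
        best_reg, best_err = reg, err

print(best_reg, round(best_err, 3))
```

The same pattern, fit on the training split, score each grid point on the validation split, keep the best, is what TrainValidationSplit automates.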
[
[
"val starttime = Calendar.getInstance().getTime()\n\n// Rename tip_amount as label\nval OneHotTRAINLabeled = OneHotTRAIN.select(\"tip_amount\",\"features\").withColumnRenamed(existingName=\"tip_amount\",newName=\"label\") \nval OneHotTESTLabeled = OneHotTEST.select(\"tip_amount\",\"features\").withColumnRenamed(existingName=\"tip_amount\",newName=\"label\")\nOneHotTRAINLabeled.cache()\nOneHotTESTLabeled.cache()\n\n// Define the estimator function: LinearRegression function\nval lr = new LinearRegression().setLabelCol(\"label\").setFeaturesCol(\"features\").setMaxIter(10)\n\n// Define parameter grid\nval paramGrid = new ParamGridBuilder().addGrid(lr.regParam, Array(0.1, 0.01, 0.001)).addGrid(lr.fitIntercept).addGrid(lr.elasticNetParam, Array(0.1, 0.5, 0.9)).build()\n\n// Define pipeline with train-validation split, with 75% in training set, specify estimator, evaluator, parameter grid\nval trainPct = 0.75\nval trainValidationSplit = new TrainValidationSplit().setEstimator(lr).setEvaluator(new RegressionEvaluator).setEstimatorParamMaps(paramGrid).setTrainRatio(trainPct)\n\n// Run train validation split, and choose the best set of parameters.\nval model = trainValidationSplit.fit(OneHotTRAINLabeled)\n\n// Make predictions on test data. model is the model with combination of parameters that performed best.\nval testResults = model.transform(OneHotTESTLabeled).select(\"label\", \"prediction\")\n\n// Compute Test set R2\nval evaluator = new RegressionEvaluator().setLabelCol(\"label\").setPredictionCol(\"prediction\").setMetricName(\"r2\")\nval Test_R2 = evaluator.evaluate(testResults)\n\n/* GET TIME TO RUN THE CELL */\nval endtime = Calendar.getInstance().getTime()\nval elapsedtime = ((endtime.getTime() - starttime.getTime())/1000).toString;\nprintln(\"Time taken to run the above cell: \" + elapsedtime + \" seconds.\");\n\nprintln(\"Test R-sqr is: \" + Test_R2);",
"Test R-sqr is: 0.6229443508226747"
]
],
[
[
"### Optimize model using cross-validation and hyper-parameter sweeping, using Spark ML's CrossValidator function (Binary Classification)",
"_____no_output_____"
]
],
[
[
"val starttime = Calendar.getInstance().getTime()\n\n// Create data-frames with properly labeled columns for use with train/test split\nval indexedTRAINwithCatFeatBinTargetRF = indexedTRAINwithCatFeatBinTarget.select(\"labelBin\",\"featuresCat\").withColumnRenamed(existingName=\"labelBin\",newName=\"label\").withColumnRenamed(existingName=\"featuresCat\",newName=\"features\")\nval indexedTESTwithCatFeatBinTargetRF = indexedTESTwithCatFeatBinTarget.select(\"labelBin\",\"featuresCat\").withColumnRenamed(existingName=\"labelBin\",newName=\"label\").withColumnRenamed(existingName=\"featuresCat\",newName=\"features\")\nindexedTRAINwithCatFeatBinTargetRF.cache()\nindexedTESTwithCatFeatBinTargetRF.cache()\n\n// Define the estimator function\nval rf = new RandomForestClassifier().setLabelCol(\"label\").setFeaturesCol(\"features\").setImpurity(\"gini\").setSeed(1234).setFeatureSubsetStrategy(\"auto\").setMaxBins(32)\n\n// Define parameter grid\nval paramGrid = new ParamGridBuilder().addGrid(rf.maxDepth, Array(4,8)).addGrid(rf.numTrees, Array(5,10)).addGrid(rf.minInstancesPerNode, Array(100,300)).build()\n\n// Specify number of folds\nval numFolds = 3\n\n// Define cross-validation with estimator, evaluator, parameter grid, and number of folds\nval CrossValidator = new CrossValidator().setEstimator(rf).setEvaluator(new BinaryClassificationEvaluator).setEstimatorParamMaps(paramGrid).setNumFolds(numFolds)\n\n// Run cross-validation, and choose the best set of parameters.\nval model = CrossValidator.fit(indexedTRAINwithCatFeatBinTargetRF)\n\n// Make predictions on test data. model is the model with combination of parameters that performed best.\nval testResults = model.transform(indexedTESTwithCatFeatBinTargetRF).select(\"label\", \"prediction\")\n\n// Compute Test F1 score\nval evaluator = new MulticlassClassificationEvaluator().setLabelCol(\"label\").setPredictionCol(\"prediction\").setMetricName(\"f1\")\nval Test_f1Score = evaluator.evaluate(testResults)\n\n/* GET TIME TO RUN THE CELL */\nval endtime = Calendar.getInstance().getTime()\nval elapsedtime = ((endtime.getTime() - starttime.getTime())/1000).toString;\nprintln(\"Time taken to run the above cell: \" + elapsedtime + \" seconds.\");",
"Time taken to run the above cell: 35 seconds."
]
],
[
[
"### Optimize model using custom cross-validation and parameter-sweeping code to utilize any ML function and parameter-set (Linear Regression)\n\n#### Optimize model using custom code. Then identify best model parameters based on lowest cross-validated RMSE, create final model, and evaluate model on test data. Save model in blob. Load model, score test-data, and evaluate accuracy.",
"_____no_output_____"
]
],
[
[
"val starttime = Calendar.getInstance().getTime()\n\n// Define parameter grid and number of folds\nval paramGrid = new ParamGridBuilder().addGrid(rf.maxDepth, Array(5,10)).addGrid(rf.numTrees, Array(10,25,50)).build()\n\nval nFolds = 3\nval numModels = paramGrid.size\nval numParamsinGrid = 2\n\n// Specify the number of categories of categorical variables\nval categoricalFeaturesInfo = Map[Int, Int]((0,2),(1,2),(2,6),(3,4))\n\nvar maxDepth = -1\nvar numTrees = -1\nvar param = \"\"\nvar paramval = -1\nvar validateLB = -1.0\nvar validateUB = -1.0\nval h = 1.0 / nFolds;\nval RMSE = Array.fill(numModels)(0.0)\n\n// Create k folds\nval splits = MLUtils.kFold(indexedTRAINbinary, numFolds = nFolds, seed=1234)\n\n\n// Loop through k-folds and parameter grid to get and identify best parameter set by best accuracy\nfor (i <- 0 to (nFolds-1)) {\n validateLB = i * h\n validateUB = (i + 1) * h\n val validationCV = trainData.filter($\"rand\" >= validateLB && $\"rand\" < validateUB)\n val trainCV = trainData.filter($\"rand\" < validateLB || $\"rand\" >= validateUB)\n val validationLabPt = validationCV.rdd.map(r => LabeledPoint(r.getDouble(targetIndRegression(0).toInt), Vectors.dense(featuresIndIndex.map(r.getDouble(_)).toArray)));\n val trainCVLabPt = trainCV.rdd.map(r => LabeledPoint(r.getDouble(targetIndRegression(0).toInt), Vectors.dense(featuresIndIndex.map(r.getDouble(_)).toArray)));\n validationLabPt.cache()\n trainCVLabPt.cache()\n\n for (nParamSets <- 0 to (numModels-1)) {\n for (nParams <- 0 to (numParamsinGrid-1)) {\n param = paramGrid(nParamSets).toSeq(nParams).param.toString.split(\"__\")(1)\n paramval = paramGrid(nParamSets).toSeq(nParams).value.toString.toInt\n if (param == \"maxDepth\") {maxDepth = paramval}\n if (param == \"numTrees\") {numTrees = paramval}\n }\n val rfModel = RandomForest.trainRegressor(trainCVLabPt, categoricalFeaturesInfo=categoricalFeaturesInfo,\n numTrees=numTrees, maxDepth=maxDepth,\n featureSubsetStrategy=\"auto\",impurity=\"variance\", maxBins=32)\n val labelAndPreds = validationLabPt.map { point =>\n val prediction = rfModel.predict(point.features)\n ( prediction, point.label )\n }\n val validMetrics = new RegressionMetrics(labelAndPreds)\n val rmse = validMetrics.rootMeanSquaredError\n RMSE(nParamSets) += rmse\n }\n validationLabPt.unpersist();\n trainCVLabPt.unpersist();\n}\nval minRMSEindex = RMSE.indexOf(RMSE.min)\n\n// Get best parameters from CV and parameter sweep\nvar best_maxDepth = -1\nvar best_numTrees = -1\nfor (nParams <- 0 to (numParamsinGrid-1)) {\n param = paramGrid(minRMSEindex).toSeq(nParams).param.toString.split(\"__\")(1)\n paramval = paramGrid(minRMSEindex).toSeq(nParams).value.toString.toInt\n if (param == \"maxDepth\") {best_maxDepth = paramval}\n if (param == \"numTrees\") {best_numTrees = paramval}\n}\n\n// Create best model with best parameters and full training data-set\nval best_rfModel = RandomForest.trainRegressor(indexedTRAINreg, categoricalFeaturesInfo=categoricalFeaturesInfo,\n numTrees=best_numTrees, maxDepth=best_maxDepth,\n featureSubsetStrategy=\"auto\",impurity=\"variance\", maxBins=32)\n\n// Save best RF model in blob\nval datestamp = Calendar.getInstance().getTime().toString.replaceAll(\" \", \".\").replaceAll(\":\", \"_\");\nval modelName = \"BestCV_RF_Regression__\"\nval filename = modelDir.concat(modelName).concat(datestamp)\nbest_rfModel.save(sc, filename);\n\n// Predict on Test set with best model and evaluate\nval labelAndPreds = indexedTESTreg.map { point =>\n val prediction = best_rfModel.predict(point.features)\n ( prediction, point.label )\n }\n\nval test_rmse = new RegressionMetrics(labelAndPreds).rootMeanSquaredError\nval test_rsqr = new RegressionMetrics(labelAndPreds).r2\n\n/* GET TIME TO RUN THE CELL */\nval endtime = Calendar.getInstance().getTime()\nval elapsedtime = ((endtime.getTime() - starttime.getTime())/1000).toString;\nprintln(\"Time taken to run the above cell: \" + elapsedtime + \" seconds.\");",
"Time taken to run the above cell: 61 seconds."
]
],
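Stripped of the Spark specifics, the custom k-fold parameter sweep above follows this pattern. This is a minimal pure-Python sketch; the through-origin slope "model" is a placeholder for the random forest, and the toy data and grid are illustrative only:

```python
def k_fold_indices(n, k):
    """Yield (validation_indices, training_indices) for k contiguous folds."""
    fold = n // k
    for i in range(k):
        hi = (i + 1) * fold if i < k - 1 else n
        yield (list(range(i * fold, hi)),
               [j for j in range(n) if j < i * fold or j >= hi])

data = [(x, 3 * x) for x in range(12)]        # toy regression data, y = 3x
grid = [{"maxDepth": 5}, {"maxDepth": 10}]    # stand-in parameter grid
rmse_sum = [0.0] * len(grid)                  # accumulated RMSE per grid point

for val_idx, trn_idx in k_fold_indices(len(data), 3):
    xs = [data[i][0] for i in trn_idx]
    ys = [data[i][1] for i in trn_idx]
    for g, params in enumerate(grid):
        # Placeholder "model": a through-origin slope fit; the notebook trains
        # a random forest with `params` at this point instead
        slope = sum(ys) / sum(xs)
        sq_err = [(slope * data[i][0] - data[i][1]) ** 2 for i in val_idx]
        rmse_sum[g] += (sum(sq_err) / len(sq_err)) ** 0.5

best_params = grid[rmse_sum.index(min(rmse_sum))]  # lowest accumulated RMSE wins
print(best_params)
```

As in the Scala cell, each grid point's RMSE is accumulated across folds and the parameter set with the smallest total is retrained on the full training data.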
[
[
"##### Load best RF model from blob and score test file",
"_____no_output_____"
]
],
[
[
"val savedRFModel = RandomForestModel.load(sc, filename)\n\nval labelAndPreds = indexedTESTreg.map { point =>\n val prediction = savedRFModel.predict(point.features)\n ( prediction, point.label )\n }\nval test_rmse = new RegressionMetrics(labelAndPreds).rootMeanSquaredError\nval test_rsqr = new RegressionMetrics(labelAndPreds).r2",
"test_rsqr: Double = 0.7847314211279889"
]
],
[
[
"<hr>",
"_____no_output_____"
],
[
"<a name=\"consumption\"></a>\n## 7. AUTOMATICALLY CONSUMING SPARK-BUILT ML MODELS\n\nWe have previously published a description and code (pySpark) to show how one can automatically load and score new data-sets with ML models built in Spark and saved in Azure blobs. We do not repeat that description here, but simply point the users to the <a href=\"https://azure.microsoft.com/en-us/documentation/articles/machine-learning-data-science-spark-overview/\" target=\"_blank\">previously published description on automated model consumption</a>. Users need to follow the instructions provided earlier, and replace the python code with Scala code shown above to enable automated consumption.",
"_____no_output_____"
],
[
"<hr>",
"_____no_output_____"
]
],
[
[
"/* GET TIME TO RUN THE NOTEBOOK */\nval finalTime = Calendar.getInstance().getTime()\nval totalTime = ((finalTime.getTime() - beginningTime.getTime())/1000).toString;\nprintln(\"Time taken to run the above cell: \" + totalTime + \" seconds.\");",
"Time taken to run the above cell: 295 seconds."
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
e75388c9f8b033c063c50cf4a9e446c25a4ac3b3 | 104,080 | ipynb | Jupyter Notebook | Climate_starter.ipynb | lovenalee/sqlalchemy-challenge | 6c3dec4ff7282b424f00f3452eb6d753caef8504 | [
"ADSL"
] | null | null | null | Climate_starter.ipynb | lovenalee/sqlalchemy-challenge | 6c3dec4ff7282b424f00f3452eb6d753caef8504 | [
"ADSL"
] | null | null | null | Climate_starter.ipynb | lovenalee/sqlalchemy-challenge | 6c3dec4ff7282b424f00f3452eb6d753caef8504 | [
"ADSL"
] | null | null | null | 153.510324 | 56,484 | 0.883493 | [
[
[
"%matplotlib inline\nfrom matplotlib import style\nstyle.use('fivethirtyeight')\nimport matplotlib.pyplot as plt\nfrom pprint import pprint",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd",
"_____no_output_____"
],
[
"import datetime as dt",
"_____no_output_____"
]
],
[
[
"# Reflect Tables into SQLAlchemy ORM",
"_____no_output_____"
]
],
[
[
"# Python SQL toolkit and Object Relational Mapper\nimport sqlalchemy\nfrom sqlalchemy.ext.automap import automap_base\nfrom sqlalchemy.orm import Session\nfrom sqlalchemy import create_engine, func\nfrom sqlalchemy import Column, Integer, String, Float",
"_____no_output_____"
],
[
"engine = create_engine(\"sqlite:///Resources/hawaii.sqlite\")\nconn=engine.connect()\n",
"_____no_output_____"
],
[
"# reflect an existing database into a new model\nBase = automap_base()\n# reflect the tables\nBase.prepare(engine, reflect=True)",
"_____no_output_____"
],
[
"# We can view all of the classes that automap found\nBase.classes.keys()",
"_____no_output_____"
],
[
"# Save references to each table\nMeasurement = Base.classes.measurement\nStation = Base.classes.station",
"_____no_output_____"
],
[
"# Create our session (link) from Python to the DB\nsession = Session(engine)",
"_____no_output_____"
]
],
[
[
"# Exploratory Climate Analysis",
"_____no_output_____"
]
],
[
[
"# Design a query to retrieve the last 12 months of precipitation data and plot the results\n\n# Calculate the date 1 year ago from the last data point in the database\nlast_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()[0]\nlast_date = dt.datetime.strptime(last_date, \"%Y-%m-%d\")\nlast_year_date = last_date - dt.timedelta(days=365)\n# Perform a query to retrieve the data and precipitation scores\ndate_prcp_query = (session.query(Measurement.date, Measurement.prcp).filter(Measurement.date > last_year_date).order_by(Measurement.date).all())\n# Save the query results as a Pandas DataFrame and set the index to the date column\nprcp_df = pd.DataFrame(date_prcp_query, columns=[\"Date\", \"Precipitation Score\"])\nprcp_df = prcp_df.set_index('Date')\n# Sort the dataframe by date\nprcp_df = prcp_df.sort_values(by=['Date'])\n# Use Pandas Plotting with Matplotlib to plot the data\nprcp_table = prcp_df.plot(grid=True, figsize=(12,6))\nprcp_table.set_title(\"Precipitation Scores by Date\", pad=20)\nprcp_table.set_ylabel(\"Precipitation Score\",labelpad=20)\nprcp_table.set_xlabel(\"Date\",labelpad=20)\nplt.show()",
"_____no_output_____"
],
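The "one year ago" arithmetic used in the query above can be checked in isolation. The last observation date is hard-coded here as an assumption for illustration; the notebook queries it from the table:

```python
import datetime as dt

# Last observation date in the table (assumed here; query it as above in practice)
last_date = dt.datetime.strptime("2017-08-23", "%Y-%m-%d")
one_year_ago = last_date - dt.timedelta(days=365)
print(one_year_ago.strftime("%Y-%m-%d"))  # 2016-08-23
```

Note that `timedelta(days=365)` is a fixed 365-day window, not a calendar year; for this span (which contains no February 29) the two coincide.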
[
"# Use Pandas to calcualte the summary statistics for the precipitation data\nprcp_df.describe()",
"_____no_output_____"
],
[
"# Design a query to show how many stations are available in this dataset?\nstation_count = session.query(Station.name).count()\nprint(f'There are {station_count} stations available in this dataset.')",
"There are 9 stations available in this dataset.\n"
],
[
"# What are the most active stations? (i.e. what stations have the most rows)?\n# List the stations and the counts in descending order.\nactive_stations = (\n session.query(\n Measurement.station, \n Station.name, \n func.count(Measurement.id)\n )\n .filter(Measurement.station == Station.station)\n .group_by(Measurement.station)\n .order_by(func.count(Measurement.id).desc())\n .all())\npprint(active_stations)",
"[('USC00519281', 'WAIHEE 837.5, HI US', 2772),\n ('USC00519397', 'WAIKIKI 717.2, HI US', 2724),\n ('USC00513117', 'KANEOHE 838.1, HI US', 2709),\n ('USC00519523', 'WAIMANALO EXPERIMENTAL FARM, HI US', 2669),\n ('USC00516128', 'MANOA LYON ARBO 785.2, HI US', 2612),\n ('USC00514830', 'KUALOA RANCH HEADQUARTERS 886.9, HI US', 2202),\n ('USC00511918', 'HONOLULU OBSERVATORY 702.2, HI US', 1979),\n ('USC00517948', 'PEARL CITY, HI US', 1372),\n ('USC00518838', 'UPPER WAHIAWA 874.3, HI US', 511)]\n"
],
[
"# Using the station id from the previous query, calculate the lowest temperature recorded, \n# highest temperature recorded, and average temperature of the most active station?\nactive_station_temp = (\n session.query(\n func.min(Measurement.tobs),\n func.max(Measurement.tobs),\n func.avg(Measurement.tobs),\n )\n .filter(Measurement.station == active_stations[0][0])\n .all()\n)\nprint(f'The most active station {active_stations[0][0]} has a highest recorded temperature of {active_station_temp[0][1]}, and a lowest recorded temperature of {active_station_temp[0][0]}. The average temperature of the most active station is {active_station_temp[0][2]}')\n",
"The most active station USC00519281 has a highest recorded temperature of 85.0, and a lowest recorded temperature of 54.0. The average temperature of the most active station is 71.66378066378067\n"
],
[
"# Choose the station with the highest number of temperature observations.\n# Query the last 12 months of temperature observation data for this station and plot the results as a histogram\nmost_temp_observation = (\n session.query(\n Measurement.date, \n Measurement.tobs)\n .filter(Measurement.station == active_stations[0][0])\n .filter(Measurement.date > last_year_date)\n .order_by(Measurement.date)\n .all()\n)\nmost_temp_df = pd.DataFrame(most_temp_observation)\nmost_temp_df = most_temp_df.set_index(\"date\").sort_index(ascending=True)\nmost_temp_df",
"_____no_output_____"
],
[
"temp_hist = plt.hist(most_temp_df['tobs'])\nplt.title('Last 12 months of temperature observation data for WAIHEE 837.5, HI US station')\nplt.xlabel('Temperature')\nplt.ylabel('Counts')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Bonus Challenge Assignment",
"_____no_output_____"
]
],
[
[
"# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d' \n# and return the minimum, average, and maximum temperatures for that range of dates\ndef calc_temps(start_date, end_date):\n \"\"\"TMIN, TAVG, and TMAX for a list of dates.\n \n Args:\n start_date (string): A date string in the format %Y-%m-%d\n end_date (string): A date string in the format %Y-%m-%d\n \n Returns:\n TMIN, TAVG, and TMAX\n \"\"\"\n \n return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\\\n filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()\n\n# function usage example\nprint(calc_temps('2012-02-28', '2012-03-05'))",
"[(62.0, 69.57142857142857, 74.0)]\n"
],
[
"# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax \n# for your trip using the previous year's data for those same dates.\n",
"_____no_output_____"
],
[
"# Plot the results from your previous query as a bar chart. \n# Use \"Trip Avg Temp\" as your Title\n# Use the average temperature for the y value\n# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)\n",
"_____no_output_____"
],
[
"# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.\n# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation\n\n",
"_____no_output_____"
],
[
"# Create a query that will calculate the daily normals \n# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)\n\ndef daily_normals(date):\n \"\"\"Daily Normals.\n \n Args:\n date (str): A date string in the format '%m-%d'\n \n Returns:\n A list of tuples containing the daily normals, tmin, tavg, and tmax\n \n \"\"\"\n \n sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]\n return session.query(*sel).filter(func.strftime(\"%m-%d\", Measurement.date) == date).all()\n \ndaily_normals(\"01-01\")",
"_____no_output_____"
],
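For reference, the grouping behind `daily_normals` — keying each observation on its `%m-%d` part, as the `func.strftime` filter does — can be sketched in plain Python over a toy set of observations:

```python
from collections import defaultdict

# Toy (date, tobs) observations spanning several years
records = [
    ("2015-01-01", 62.0), ("2016-01-01", 64.0), ("2017-01-01", 66.0),
    ("2016-01-02", 60.0), ("2017-01-02", 70.0),
]

by_day = defaultdict(list)
for date, tobs in records:
    by_day[date[5:]].append(tobs)   # key on the "%m-%d" part, like func.strftime

# (tmin, tavg, tmax) per calendar day across all years
normals = {day: (min(t), sum(t) / len(t), max(t)) for day, t in by_day.items()}
print(normals["01-01"])  # (62.0, 64.0, 66.0)
```

Each calendar day collects every year's reading, and the min/avg/max over that pool are exactly the daily normals the SQL version computes.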
[
"# calculate the daily normals for your trip\n# push each tuple of calculations into a list called `normals`\n\n# Set the start and end date of the trip\n\n# Use the start and end date to create a range of dates\n\n# Strip off the year and save a list of %m-%d strings\n\n# Loop through the list of %m-%d strings and calculate the normals for each date\n",
"_____no_output_____"
],
[
"# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index\n",
"_____no_output_____"
],
[
"# Plot the daily normals as an area plot with `stacked=False`\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7539828bc8d9a9becf07b0c8e4efd74be8771d1 | 898,232 | ipynb | Jupyter Notebook | examples/Generating random surfaces - perez method.ipynb | FrictionTribologyEnigma/SlipPY | a97fc5470ce5008ef4c049b82077cd1871f6ef2c | [
"MIT"
] | 5 | 2018-10-30T03:30:42.000Z | 2020-05-18T17:09:55.000Z | examples/Generating random surfaces - perez method.ipynb | FrictionTribologyEnigma/SlipPY | a97fc5470ce5008ef4c049b82077cd1871f6ef2c | [
"MIT"
] | null | null | null | examples/Generating random surfaces - perez method.ipynb | FrictionTribologyEnigma/SlipPY | a97fc5470ce5008ef4c049b82077cd1871f6ef2c | [
"MIT"
] | null | null | null | 3,389.554717 | 297,948 | 0.964565 | [
[
[
"# Generating random surfaces - Perez method\nOften in tribology we will want to generate random surfaces with particular properties; we can use these as roughness in simulations to investigate how our contact changes with specific roughness parameters. Slippy contains several methods for making randomly rough surfaces. These are:\n\n- RandomFilterSurface\n- RandomPerezSurface\n- HurstFractalSurface\n\nThese methods vary in terms of performance and flexibility: \n\nFiltering of random sequences, as provided by RandomFilterSurface, allows the user to specify the autocorrelation function of the surface profile and gives some flexibility over the distribution of surface heights. \n\nThe Perez method allows the power spectral density and the height function to be set. The method will give a surface that perfectly matches one of these specifications and matches the other to a settable tolerance. \n\nHurst fractals allow control over the frequency components present in the surface. The amplitude of each component is defined by a simple function in the frequency domain. This gives no control over the height distribution.\n\n---\nIn this notebook we will run through an example of generating a random rough surface by the Perez method.\n\nLet's start by importing everything we will need, and setting the random seed so we always get the same sequence of random numbers. This just ensures that the surfaces generated will be the same every time this is run.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport slippy.surface as s # surface generation and manipulation\nimport numpy as np # numerical functions\nimport scipy.stats as stats # statistical distributions\nimport matplotlib.pyplot as plt # plotting\nnp.random.seed(1)",
"_____no_output_____"
]
],
[
[
"# RandomPerezSurface\nThe RandomPerezSurface class implements the method described in the reference below:\n\nFrancesc Pérez-Ràfols, Andreas Almqvist,\nGenerating randomly rough surfaces with given height probability distribution and power spectrum,\nTribology International,\nVolume 131,\n2019,\nPages 591-604,\nISSN 0301-679X,\nhttps://doi.org/10.1016/j.triboint.2018.11.020.\n(http://www.sciencedirect.com/science/article/pii/S0301679X18305607)\n \nThis method iterates between a surface with the required height function and another surface with the required PSD. As the iterations progress, these surfaces converge. \n\nThis random surface requires us to set the height probability density function and the power spectral density of the output surface. Unlike the random filter method, this method can only generate a single realisation of the random surface; further realisations require the optimisation problem to be solved again, with new random values. However, this method converges much more quickly than the filter method.\n\n---\n\nWe will start by generating a realistic power spectrum; the origin of this spectrum should be in the top left corner as in the shown example:",
"_____no_output_____"
]
],
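A minimal numpy sketch of this alternation — not slippy's implementation — imposes the target PSD by replacing FFT magnitudes while keeping phases, then imposes the target height distribution by rank-order substitution. The targets below are arbitrary illustrative choices, and the real part of the inverse FFT is taken for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64

# Illustrative targets: a sorted height sample (defines the height distribution)
# and an FFT magnitude map (the square root of a PSD, up to scaling)
target_heights = np.sort(rng.standard_normal(n * n))
target_mag = np.abs(np.fft.fft2(rng.standard_normal((n, n))))

z = rng.standard_normal((n, n))
for _ in range(20):
    # Step 1: impose the PSD -- keep the phases, replace the magnitudes
    spec = np.fft.fft2(z)
    z = np.real(np.fft.ifft2(target_mag * np.exp(1j * np.angle(spec))))
    # Step 2: impose the height distribution -- rank-order substitution
    flat = z.ravel()
    flat[np.argsort(flat)] = target_heights
    z = flat.reshape(n, n)

# Step 2 ran last, so the height distribution is matched exactly here
print(np.allclose(np.sort(z.ravel()), target_heights))  # True
```

Whichever projection runs last is matched exactly, which is the role of the `exact` argument shown later in this notebook.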
[
[
"beta = 10 # the drop off length of the acf\nsigma = 1 # the roughness of the surface\nqx = np.arange(-128,128)\nqy = np.arange(-128,128)\nQx, Qy = np.meshgrid(qx, qy)\nCq = sigma**2*beta/(2*np.pi*(beta**2+Qx**2+Qy**2)**0.5) # the PSD of the surface\nCq = np.fft.fftshift(Cq)\nplt.imshow(Cq)\nplt.colorbar()",
"_____no_output_____"
]
],
[
[
"Next we will generate a height probability density function:",
"_____no_output_____"
]
],
[
[
"a = 0.5\nheight_distribution = stats.lognorm(a)\nx = np.linspace(height_distribution.ppf(0.01),\n height_distribution.ppf(0.99), 100)\nplt.plot(x, height_distribution.pdf(x))\n_ = plt.gca().set_title(\"Set height probability density function\")",
"_____no_output_____"
]
],
[
[
"Now we can make a surface realisation:",
"_____no_output_____"
]
],
[
[
"my_surface = s.RandomPerezSurface(target_psd = Cq, height_distribution=height_distribution,\n grid_spacing=1,\n generate=True)\n_ = my_surface.show(['profile', 'psd', 'histogram'], figsize = (14,4))",
"_____no_output_____"
]
],
[
[
"We can make another random surface with normally distributed heights as follows:",
"_____no_output_____"
]
],
[
[
"np.random.seed(1)\nnormal_distribution = stats.norm()\nmy_surface = s.RandomPerezSurface(target_psd = Cq, height_distribution=normal_distribution,\n grid_spacing=1,\n generate=True, exact='heights')\n_ = my_surface.show(['profile', 'psd', 'histogram'], figsize = (14,4))",
"_____no_output_____"
]
],
[
[
"As shown, the surfaces do not perfectly fit the PSD; however, the height function is well represented. A better fit to the PSD can be achieved by using the PSD estimate from the original paper:",
"_____no_output_____"
]
],
[
[
"np.random.seed(1)\nnormal_distribution = stats.norm()\nmy_surface = s.RandomPerezSurface(target_psd = Cq, height_distribution=normal_distribution,\n grid_spacing=1,\n generate=True, exact='psd')\n_ = my_surface.show(['profile', 'psd', 'histogram'], figsize = (14,4))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e753a433807d464dfe118509125bde64bfee9fd9 | 107,583 | ipynb | Jupyter Notebook | book_recommendation_knn.ipynb | PratikChowdhury/Book-Recommendation-Engine | 071ca3b3f80e166b8e061aa5f7db8f8022a3783e | [
"MIT"
] | null | null | null | book_recommendation_knn.ipynb | PratikChowdhury/Book-Recommendation-Engine | 071ca3b3f80e166b8e061aa5f7db8f8022a3783e | [
"MIT"
] | null | null | null | book_recommendation_knn.ipynb | PratikChowdhury/Book-Recommendation-Engine | 071ca3b3f80e166b8e061aa5f7db8f8022a3783e | [
"MIT"
] | null | null | null | 39.538037 | 509 | 0.243273 | [
[
[
"*Note: You are currently reading this using Google Colaboratory which is a cloud-hosted version of Jupyter Notebook. This is a document containing both text cells for documentation and runnable code cells. If you are unfamiliar with Jupyter Notebook, watch this 3-minute introduction before starting this challenge: https://www.youtube.com/watch?v=inN8seMm7UI*\n\n---\n\nIn this challenge, you will create a book recommendation algorithm using **K-Nearest Neighbors**.\n\nYou will use the [Book-Crossings dataset](http://www2.informatik.uni-freiburg.de/~cziegler/BX/). This dataset contains 1.1 million ratings (scale of 1-10) of 270,000 books by 90,000 users. \n\nAfter importing and cleaning the data, use `NearestNeighbors` from `sklearn.neighbors` to develop a model that shows books that are similar to a given book. The Nearest Neighbors algorithm measures distance to determine the “closeness” of instances.\n\nCreate a function named `get_recommends` that takes a book title (from the dataset) as an argument and returns a list of 5 similar books with their distances from the book argument.\n\nThis code:\n\n`get_recommends(\"The Queen of the Damned (Vampire Chronicles (Paperback))\")`\n\nshould return:\n\n```\n[\n 'The Queen of the Damned (Vampire Chronicles (Paperback))',\n [\n ['Catch 22', 0.793983519077301], \n ['The Witching Hour (Lives of the Mayfair Witches)', 0.7448656558990479], \n ['Interview with the Vampire', 0.7345068454742432],\n ['The Tale of the Body Thief (Vampire Chronicles (Paperback))', 0.5376338362693787],\n ['The Vampire Lestat (Vampire Chronicles, Book II)', 0.5178412199020386]\n ]\n]\n```\n\nNotice that the data returned from `get_recommends()` is a list. The first element in the list is the book title passed in to the function. The second element in the list is a list of five more lists. Each of the five lists contains a recommended book and the distance from the recommended book to the book passed in to the function.\n\nIf you graph the dataset (optional), you will notice that most books are not rated frequently. To ensure statistical significance, remove from the dataset users with less than 200 ratings and books with less than 100 ratings.\n\nThe first three cells import libraries you may need and the data to use. The final cell is for testing. Write all your code in between those cells.",
"_____no_output_____"
]
],
[
[
"# import libraries (you may add additional imports but you may not have to)\nimport numpy as np\nimport pandas as pd\nfrom scipy.sparse import csr_matrix\nfrom sklearn.neighbors import NearestNeighbors\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"# get data files\n!wget https://cdn.freecodecamp.org/project-data/books/book-crossings.zip\n\n!unzip book-crossings.zip\n\nbooks_filename = 'BX-Books.csv'\nratings_filename = 'BX-Book-Ratings.csv'",
"--2022-01-15 15:48:56-- https://cdn.freecodecamp.org/project-data/books/book-crossings.zip\nResolving cdn.freecodecamp.org (cdn.freecodecamp.org)... 104.26.2.33, 104.26.3.33, 172.67.70.149, ...\nConnecting to cdn.freecodecamp.org (cdn.freecodecamp.org)|104.26.2.33|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 26085508 (25M) [application/zip]\nSaving to: ‘book-crossings.zip’\n\nbook-crossings.zip 100%[===================>] 24.88M 92.3MB/s in 0.3s \n\n2022-01-15 15:48:56 (92.3 MB/s) - ‘book-crossings.zip’ saved [26085508/26085508]\n\nArchive: book-crossings.zip\n inflating: BX-Book-Ratings.csv \n inflating: BX-Books.csv \n inflating: BX-Users.csv \n"
],
[
"# import csv data into dataframes\ndf_books = pd.read_csv(\n books_filename,\n encoding = \"ISO-8859-1\",\n sep=\";\",\n header=0,\n names=['isbn', 'title', 'author'],\n usecols=['isbn', 'title', 'author'],\n dtype={'isbn': 'str', 'title': 'str', 'author': 'str'})\n\ndf_ratings = pd.read_csv(\n ratings_filename,\n encoding = \"ISO-8859-1\",\n sep=\";\",\n header=0,\n names=['user', 'isbn', 'rating'],\n usecols=['user', 'isbn', 'rating'],\n dtype={'user': 'int32', 'isbn': 'str', 'rating': 'float32'})",
"_____no_output_____"
],
[
"# add your code here - consider creating a new cell for each section of code\nfilter_1=df_ratings['user'].value_counts()\nfilter_2=df_ratings['isbn'].value_counts()\ndf_ratings = df_ratings[~df_ratings['user'].isin(filter_1[filter_1 < 200].index) & ~df_ratings['isbn'].isin(filter_2[filter_2 < 100].index)]",
"_____no_output_____"
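The filtering step above keeps only heavy raters and frequently rated books. The same idea can be sketched with plain dicts instead of pandas `value_counts`/`isin` (toy data and toy thresholds of 2, purely illustrative):

```python
from collections import Counter

ratings = [("u1", "b1"), ("u1", "b2"), ("u2", "b1"), ("u2", "b1")]  # (user, isbn) pairs
user_counts = Counter(u for u, _ in ratings)
book_counts = Counter(b for _, b in ratings)

# keep rows whose user AND book both clear their thresholds (toy thresholds: 2 and 2)
kept = [(u, b) for u, b in ratings
        if user_counts[u] >= 2 and book_counts[b] >= 2]
print(kept)
```

With the real thresholds (200 ratings per user, 100 per book) the same two-pass count-then-filter shape applies.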
],
[
"df_table = df_ratings.pivot_table(index='isbn', columns='user', values='rating').fillna(0)\ndf_table",
"_____no_output_____"
],
[
"df_table.index = df_table.join(df_books.set_index('isbn'))['title']\ndf_table",
"_____no_output_____"
],
[
"# function to return recommended books - this will be tested\ndef get_recommends(book = \"\"):\n recommended_books = []\n nbrs = NearestNeighbors(n_neighbors=6, metric=\"cosine\").fit(df_table.values)\n distances, indices = nbrs.kneighbors([df_table.loc[book].values], n_neighbors=6)\n for i in range(1,6):\n recommended_books.append([df_table.index[indices[0][-i]], distances[0][-i]])\n\n return [book,recommended_books]",
"_____no_output_____"
]
],
[
[
"Use the cell below to test your function. The `test_book_recommendation()` function will inform you if you passed the challenge or need to keep trying.",
"_____no_output_____"
]
],
[
[
"books = get_recommends(\"Where the Heart Is (Oprah's Book Club (Paperback))\")\nprint(books)\n\ndef test_book_recommendation():\n test_pass = True\n recommends = get_recommends(\"Where the Heart Is (Oprah's Book Club (Paperback))\")\n if recommends[0] != \"Where the Heart Is (Oprah's Book Club (Paperback))\":\n test_pass = False\n recommended_books = [\"I'll Be Seeing You\", 'The Weight of Water', 'The Surgeon', 'I Know This Much Is True']\n recommended_books_dist = [0.8, 0.77, 0.77, 0.77]\n for i in range(2): \n if recommends[1][i][0] not in recommended_books:\n test_pass = False\n if abs(recommends[1][i][1] - recommended_books_dist[i]) >= 0.05:\n test_pass = False\n if test_pass:\n print(\"You passed the challenge! 🎉🎉🎉🎉🎉\")\n else:\n print(\"You haven't passed yet. Keep trying!\")\n\ntest_book_recommendation()",
"[\"Where the Heart Is (Oprah's Book Club (Paperback))\", [[\"I'll Be Seeing You\", 0.8016211], ['The Weight of Water', 0.77085835], ['The Surgeon', 0.7699411], ['I Know This Much Is True', 0.7677075], ['The Lovely Bones: A Novel', 0.7234864]]]\nYou passed the challenge! 🎉🎉🎉🎉🎉\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e753a7a100f0bc5096670fa025e0c7763fcaf6a7 | 3,852 | ipynb | Jupyter Notebook | django_ecommerce_detailView.ipynb | djangojeng-e/django_tutorial | 78a5f8e17253a32f43079b2c17ffe4cecbd3c3f0 | [
"MIT"
] | null | null | null | django_ecommerce_detailView.ipynb | djangojeng-e/django_tutorial | 78a5f8e17253a32f43079b2c17ffe4cecbd3c3f0 | [
"MIT"
] | 9 | 2021-03-19T10:01:27.000Z | 2022-01-13T03:05:42.000Z | django_ecommerce_detailView.ipynb | djangojeng-e/django_tutorial | 78a5f8e17253a32f43079b2c17ffe4cecbd3c3f0 | [
"MIT"
] | 1 | 2020-05-01T12:55:48.000Z | 2020-05-01T12:55:48.000Z | 50.025974 | 969 | 0.615265 | [
[
[
"# views.py \n\nfrom django.views.generic import ListView, DetailView \n\n\nclass ProductDetailView(DetailView):\n queryset = Product.objects.all() \n template_name = 'products/detail.html'\n \n def get_context_data(self, *args, **kwargs):\n context = super(ProductDetailView, self).get_context_data(*args, **kwargs)\n print(context)\n context['abc'] = 123\n return context \n \n \n \ndef product_detail_view(request):\n queryset = Product.objects.all()\n context = {\n 'object_list': queryset\n }\n return render(request, 'products/detail.html', context)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
e753b843860fadc407ed076c2602eed1498a94ed | 273,973 | ipynb | Jupyter Notebook | Regression_electrical.ipynb | CrucifierBladex/electrical_regression | 2d04296cea13974c7be1cdaefab0379e8c5fe785 | [
"MIT"
] | null | null | null | Regression_electrical.ipynb | CrucifierBladex/electrical_regression | 2d04296cea13974c7be1cdaefab0379e8c5fe785 | [
"MIT"
] | null | null | null | Regression_electrical.ipynb | CrucifierBladex/electrical_regression | 2d04296cea13974c7be1cdaefab0379e8c5fe785 | [
"MIT"
] | 1 | 2022-01-23T05:56:44.000Z | 2022-01-23T05:56:44.000Z | 234.968268 | 101,254 | 0.882003 | [
[
[
"<a href=\"https://colab.research.google.com/github/CrucifierBladex/electrical_regression/blob/main/Regression_electrical.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"\ndf=pd.read_csv('/content/Data_for_UCI_named.csv')\ndf.head()",
"_____no_output_____"
],
[
"from sklearn.preprocessing import LabelEncoder\nencoder=LabelEncoder()\ndf['stabf']=encoder.fit_transform(df['stabf'])\ndf.head()",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 10000 entries, 0 to 9999\nData columns (total 14 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 tau1 10000 non-null float64\n 1 tau2 10000 non-null float64\n 2 tau3 10000 non-null float64\n 3 tau4 10000 non-null float64\n 4 p1 10000 non-null float64\n 5 p2 10000 non-null float64\n 6 p3 10000 non-null float64\n 7 p4 10000 non-null float64\n 8 g1 10000 non-null float64\n 9 g2 10000 non-null float64\n 10 g3 10000 non-null float64\n 11 g4 10000 non-null float64\n 12 stab 10000 non-null float64\n 13 stabf 10000 non-null int64 \ndtypes: float64(13), int64(1)\nmemory usage: 1.1 MB\n"
],
[
"df.describe()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nplt.figure(figsize=(12,10))\nplt.style.use('ggplot')\ndf.plot(kind='hist',figsize=(12,10))",
"_____no_output_____"
],
[
"import seaborn as sns\nsns.set_style(style='dark')\nsns.heatmap(df.corr(),annot=True)",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 10000 entries, 0 to 9999\nData columns (total 14 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 tau1 10000 non-null float64\n 1 tau2 10000 non-null float64\n 2 tau3 10000 non-null float64\n 3 tau4 10000 non-null float64\n 4 p1 10000 non-null float64\n 5 p2 10000 non-null float64\n 6 p3 10000 non-null float64\n 7 p4 10000 non-null float64\n 8 g1 10000 non-null float64\n 9 g2 10000 non-null float64\n 10 g3 10000 non-null float64\n 11 g4 10000 non-null float64\n 12 stab 10000 non-null float64\n 13 stabf 10000 non-null int64 \ndtypes: float64(13), int64(1)\nmemory usage: 1.1 MB\n"
],
[
"from sklearn.neighbors import LocalOutlierFactor\nlof=LocalOutlierFactor(n_neighbors=5)\nscores=lof.fit_predict(df)\n\nw=[i for i in scores if i == (-1)]\nprint(w)\n\n##no outliers found\n\n\n",
"[]\n"
],
[
"sns.boxplot(data=df)",
"_____no_output_____"
],
[
"x=df.drop(['stabf'],axis=1)\ny=df['stabf']\nimport numpy as np\nx=np.array(x)\ny=np.array(y)",
"_____no_output_____"
],
[
"from yellowbrick.features import Rank1D,Rank2D\nvisualizer=Rank1D(algorithm='shapiro')\nvisualizer.fit(x,y)\nvisualizer.transform(x)\nvisualizer.poof()\n",
"/usr/local/lib/python3.6/dist-packages/scipy/stats/morestats.py:1676: UserWarning:\n\np-value may not be accurate for N > 5000.\n\n"
],
[
"visualizer=Rank2D(algorithm='pearson')\nvisualizer.fit(x,y)\nvisualizer.transform(x)\nvisualizer.poof()",
"_____no_output_____"
],
[
"visualizer=Rank2D(algorithm='covariance')\nvisualizer.fit(x,y)\nvisualizer.transform(x)\nvisualizer.poof()",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"from sklearn.preprocessing import StandardScaler\nscaler=StandardScaler()\n\nfrom sklearn.model_selection import train_test_split\n# train_test_split returns (x_train, x_test, y_train, y_test) in this order\nx_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.3)\nx_train=scaler.fit_transform(x_train)\nx_test=scaler.transform(x_test)  # transform (not fit_transform) so the test set reuses the training scaling",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestRegressor\nmodel=RandomForestRegressor(n_estimators=100)\nmodel.fit(x_train,y_train)",
"_____no_output_____"
],
[
"model.score(x_test,y_test)",
"_____no_output_____"
],
[
"y_pred=model.predict(x_test)\nfrom sklearn import metrics\nmetrics.mean_absolute_error(y_test,y_pred)",
"_____no_output_____"
],
[
"from yellowbrick.regressor import ResidualsPlot\nvisualizer=ResidualsPlot(model)\nvisualizer.fit(x_train,y_train)\nvisualizer.score(x_test,y_test)\nvisualizer.poof()",
"_____no_output_____"
],
[
"from yellowbrick.regressor import ResidualsPlot,PredictionError\nvisualizer=PredictionError(model)\nvisualizer.fit(x_train,y_train)\nvisualizer.score(x_test,y_test)\nvisualizer.poof()",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e753c2f2c21fb968bc9c45ffbe4762c7b600eff2 | 67,958 | ipynb | Jupyter Notebook | 1_Lineare_Regression.ipynb | spielmann-cloud/Machine-Learning-Course-2021 | d5364987cb85ce75e0482d7f88f22f78a6962bc9 | [
"MIT"
] | null | null | null | 1_Lineare_Regression.ipynb | spielmann-cloud/Machine-Learning-Course-2021 | d5364987cb85ce75e0482d7f88f22f78a6962bc9 | [
"MIT"
] | null | null | null | 1_Lineare_Regression.ipynb | spielmann-cloud/Machine-Learning-Course-2021 | d5364987cb85ce75e0482d7f88f22f78a6962bc9 | [
"MIT"
] | null | null | null | 107.189274 | 30,256 | 0.850628 | [
[
[
"# Linear Regression",
"_____no_output_____"
],
[
"In this notebook we use linear regression to make predictions on the \"Advertising\" dataset. The goal is to predict sales revenue (\"sales\") based on advertising spending (in the \"TV\", \"radio\", and \"newspaper\" channels).",
"_____no_output_____"
],
[
"### Loading the Advertising dataset",
"_____no_output_____"
],
[
"First we load the data from the csv file `advertising.csv` into a pandas DataFrame and take a quick look at it.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndata_raw = pd.read_csv(\"data/advertising.csv\")\ndata_raw.head()",
"_____no_output_____"
]
],
[
[
"The `head` function only shows the first 5 data points in the DataFrame. To find out how many data points the DataFrame contains, we look at the `shape` attribute.",
"_____no_output_____"
]
],
[
[
"rows, cols = data_raw.shape\nprint(\"Number of rows:\", rows)\nprint(\"Number of columns:\", cols)",
"Number of rows: 200\nNumber of columns: 5\n"
]
],
[
[
"The first column only contains a running index and is not needed for the prediction, so it is removed.",
"_____no_output_____"
]
],
[
[
"data = data_raw.drop(columns=['index'])\ndata.head()",
"_____no_output_____"
]
],
[
[
"Next we visualize the data points using the `matplotlib` library.\nTo do so, we create a plot that shows the `TV` data on the x-axis and the `sales` data on the y-axis.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\nplt.figure(figsize=(16, 8))\nplt.scatter(data['TV'], data['sales'])\nplt.xlabel(\"TV advertising budget (€)\")\nplt.ylabel(\"Sales (€)\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Training the linear regression",
"_____no_output_____"
],
[
"As a first model we train a linear regression with only one feature, the `TV` column.\n\nBefore we start training, we split the available data into training and test data, with the training data containing 80% of the original data and the test data 20%.",
"_____no_output_____"
]
],
[
[
"train_data = data.sample(frac=0.8, random_state=0)\ntest_data = data.drop(train_data.index) # data not contained in train_data\n\nprint('Shape of the training data:', train_data.shape)\nprint('Shape of the test data:', test_data.shape)",
"Shape of the training data: (160, 4)\nShape of the test data: (40, 4)\n"
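The 80/20 split done with `sample(frac=0.8)` can be sketched without pandas as a seeded shuffle-and-slice over row indices (illustrative only; `sample` draws without replacement, which this mirrors):

```python
import random

def train_test_split_indices(n, train_frac=0.8, seed=0):
    # shuffle row indices reproducibly, then slice into train/test parts
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = int(n * train_frac)
    return idx[:cut], idx[cut:]

train_idx, test_idx = train_test_split_indices(200, train_frac=0.8, seed=0)
print(len(train_idx), len(test_idx))
```

Every row lands in exactly one of the two parts, which is what `data.drop(train_data.index)` guarantees above.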
]
],
[
[
"Next we train a linear regression on the training data with the feature `TV` and the label `sales`.\nFor this we create:\n1. A DataFrame with the feature `TV`, which we call `X_train`\n2. A Series with the label, which we call `y_train`\n\nTo get `X_train` as a DataFrame and not as a Series, we have to pass `TV` as part of a list. The following code shows the difference:",
"_____no_output_____"
]
],
[
[
"X_series = train_data['TV'] # only TV selected\nprint(\"Data type of X_series:\", type(X_series))\nX_df = train_data[['TV']] # list with TV as its only element\nprint(\"Data type of X_df:\", type(X_df))\n\nX_train = X_df # the features must be a DataFrame, not a Series\ny_train = train_data['sales']\nprint(\"Data type of y_train:\", type(y_train))",
"Data type of X_series: <class 'pandas.core.series.Series'>\nData type of X_df: <class 'pandas.core.frame.DataFrame'>\nData type of y_train: <class 'pandas.core.series.Series'>\n"
]
],
[
[
"Now comes the actual training of the model:",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LinearRegression\n\nreg = LinearRegression()\nreg.fit(X_train, y_train)",
"_____no_output_____"
]
],
[
[
"The linear regression is now trained and the model weights are available in the `reg` variable. We can now print the regression line.",
"_____no_output_____"
]
],
[
[
"print(f\"Regression line: y = {reg.intercept_} + {reg.coef_[0]}*TV\")",
"Regression line: y = 6.745792674540394 + 0.04950397743349263*TV\n"
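For a single feature, the intercept and slope printed here come from the ordinary-least-squares formulas slope = cov(x, y)/var(x) and intercept = mean(y) − slope·mean(x). A small self-contained sketch (toy data, not the Advertising numbers):

```python
def fit_simple_ols(xs, ys):
    # slope = cov(x, y) / var(x); intercept = mean(y) - slope * mean(x)
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return my - slope * mx, slope

intercept, slope = fit_simple_ols([1, 2, 3, 4], [3, 5, 7, 9])  # data lies on y = 1 + 2x
print(intercept, slope)
```

`LinearRegression.fit` solves the same least-squares problem, generalized to multiple features.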
]
],
[
[
"With the trained model we can now make predictions on individual data points.",
"_____no_output_____"
]
],
[
[
"dataPoint = X_train.iloc[0] # first data point from the training data\nprediction = reg.predict([dataPoint]) # the predict method expects a list of data points \nprint(f\"With a TV advertising budget of {dataPoint[0]}€, sales of {prediction[0]}€ are achieved.\")",
"With a TV advertising budget of 69.2€, sales of 10.171467912938084€ are achieved.\n"
]
],
[
[
"To visualize what the trained regression line looks like, we use the model to make predictions on the training data points.",
"_____no_output_____"
]
],
[
[
"prediction_train = reg.predict(X_train) # prediction on all training data at once\n\nplt.figure(figsize=(16, 8))\nplt.scatter(data['TV'], data['sales']) # training data points\nplt.plot(X_train, prediction_train, 'r') # regression line\nplt.xlabel(\"TV advertising budget ($)\")\nplt.ylabel(\"Sales (Euro)\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Testing the regression model",
"_____no_output_____"
],
[
"To check the quality of the trained regression model, we use it to make predictions on the test data and compute the MSE.",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import mean_squared_error\nX_test = test_data[['TV']] # X_test must be a DataFrame\ny_test = test_data['sales'] # y_test must be a Series\nprediction_test = reg.predict(X_test)\nmse_test = mean_squared_error(y_test, prediction_test)\nprint(\"Mean squared error (MSE) on test data:\", mse_test)",
"Mean squared error (MSE) on test data: 14.41037265386388\n"
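The metric used here can be written out directly: mean squared error is the average of the squared residuals between actual and predicted values. A minimal sketch of what `sklearn.metrics.mean_squared_error` computes:

```python
def mean_squared_error(y_true, y_pred):
    # average squared difference between actual and predicted values
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(mean_squared_error([3.0, 5.0], [2.0, 7.0]))  # residuals -1 and 2 -> (1 + 4) / 2 = 2.5
```

Squaring penalizes large errors disproportionately, which is why a few bad predictions can dominate the score.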
]
],
[
[
"### Multidimensional linear regression",
"_____no_output_____"
],
[
"We now extend the linear regression by additionally using the two features `radio` and `newspaper`.",
"_____no_output_____"
]
],
[
[
"X_train = train_data[[\"TV\", \"radio\", \"newspaper\"]]\ny_train = train_data['sales']\nreg_all = LinearRegression()\nreg_all.fit(X_train, y_train)\nprint(f\"Regression: Y = {reg_all.intercept_} + {reg_all.coef_[0]}*TV + {reg_all.coef_[1]}*radio + {reg_all.coef_[2]}*newspaper\")",
"Regression: Y = 2.9008471054251608 + 0.04699763711005833*TV + 0.1822877768933094*radio + -0.0012975074726833402*newspaper\n"
]
],
[
[
"Finally, we use the new model to again make predictions on the test data.",
"_____no_output_____"
]
],
[
[
"X_test = test_data[[\"TV\", \"radio\", \"newspaper\"]]\ny_test = test_data['sales']\npredictions = reg_all.predict(X_test)\nmse = mean_squared_error(y_test, predictions)\nprint(\"Mean squared error (MSE) on test data: %.2f\" % mse)",
"Mean squared error (MSE) on test data: 3.16\n"
]
],
[
[
"As we can see, the prediction of the multidimensional model is better than the simple linear regression (the MSE is considerably smaller). Adding the new features therefore improves the predictive power of the model.",
"_____no_output_____"
],
[
"### Exercise",
"_____no_output_____"
],
[
"Determine which of the three features \"TV\", \"radio\", and \"newspaper\" has the strongest predictive power for the label \"sales\". To do so, train three models, each with one of these features, and compare the MSE on the test data.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
e753c82b22aa6e1f321dfbe64d30ac1a986a01df | 33,090 | ipynb | Jupyter Notebook | input/inputs.ipynb | opotowsky/opra_sims | 67a6b160604115056bd4a0f53bfb3728a17f7bb6 | [
"BSD-3-Clause"
] | null | null | null | input/inputs.ipynb | opotowsky/opra_sims | 67a6b160604115056bd4a0f53bfb3728a17f7bb6 | [
"BSD-3-Clause"
] | null | null | null | input/inputs.ipynb | opotowsky/opra_sims | 67a6b160604115056bd4a0f53bfb3728a17f7bb6 | [
"BSD-3-Clause"
] | null | null | null | 40.851852 | 171 | 0.39175 | [
[
[
"import subprocess\nimport numpy as np",
"_____no_output_____"
],
[
"# Variable params\nintro_time = 12 * 5 # intro time for additive\nsim_duration = 12 * 100 # might need longer for transitions\ntrans_time = 12 * 15 # transition time for EG01 --> EGXX\n# Transition time is assumed to always be greater than intro time\n\n# \"Non-Variable\" Params\nn_init_rxtrs = 100\nn_assem_core = 3\nassem_size = 29565\ninit_total_assem_size = n_init_rxtrs * n_assem_core * assem_size\ncycle_time = 18\nrxtr_life = 12 * 60 ",
"_____no_output_____"
]
],
[
[
"# EG01 \n\n## Prototype Definitions",
"_____no_output_____"
]
],
[
[
"init_uox_source = {'name' : 'SourceInitUOX', \n 'config' : {'Source' : {'outcommod' : 'InitUOX',\n 'outrecipe' : 'UOX_no232',\n 'inventory_size' : init_total_assem_size\n }\n }\n }",
"_____no_output_____"
],
[
"from eg01_facilities import (natu_source, non_source, add_source, enrich, \n du_store, store_no232, mix_no232, store_232, \n mix_232, lwr_cool, lwr_store, sink)",
"_____no_output_____"
],
[
"# play around with contraining additive source throughput\n#add_source['config']['Source']['throughput'] = 1e5",
"_____no_output_____"
],
[
"# LWR prototype for full additive availability upon introduction\nlwr_full = {'name' : 'LWR',\n 'lifetime' : rxtr_life,\n 'config' : {'Reactor' : {'fuel_incommods' : {'val' : ['UOX_Add', 'UOX_Non', 'InitUOX']},\n 'fuel_outcommods' : {'val' : ['SpentUOX_Add', 'SpentUOX_Non', 'SpentUOX_Non']},\n 'fuel_inrecipes' : {'val' : ['UOX_232', 'UOX_no232', 'UOX_no232']},\n 'fuel_outrecipes' : {'val' : ['SpentUOX_232', 'SpentUOX_no232', 'SpentUOX_no232']},\n 'fuel_prefs' : {'val' : [1, 2, 2.5]},\n 'pref_change_times' : {'val' : intro_time},\n 'pref_change_commods' : {'val' : 'UOX_Add'},\n 'pref_change_values' : {'val' : 3},\n 'cycle_time' : cycle_time,\n 'refuel_time' : 0,\n 'assem_size' : assem_size,\n 'n_assem_core' : n_assem_core,\n 'n_assem_batch' : 1,\n 'power_name' : 'PowerLWR',\n 'power_cap' : 1000\n }\n }\n }",
"_____no_output_____"
]
],
[
[
"## Regions and Institutions\n\n### 1. Init LWR Fleet (Deploy Inst)",
"_____no_output_____"
]
],
[
[
"init_lwr_prototypes = ['LWR' for x in range(0, n_init_rxtrs)]\nn_builds = [1 for x in range(0, n_init_rxtrs)]\n# staggering build times over first 18 timesteps so that reactors \n# don't all cycle together\nbuild_times = [x + 1 for x in range(0, 17) for y in range(0,6)]\ndel build_times[-3:-1]\n# Lifetimes borrowed from previous EG scenario work presuming a \n# start year of 2000 and first decommission in 2015. Updated to \n# better stagger decommissioning.\nold_lives = [181, 186, 191, 196, \n 201, 206, 211, 216, 221, 226, 231, 236, 241, 246, 251, 256, 261, 266, 271, 276, 281, 286, 291, 296,\n 301, 306, 311, 316, 321, 326, 331, 336, 341, 346, 351, 356, 361, 366, 371, 376, 381, 386, 391, 396,\n 401, 406, 411, 416, 421, 426, 431, 436, 441, 446, 451, 456, 461, 466, 471, 476, 481, 486, 491, 496,\n 501, 506, 511, 516, 521, 526, 531, 536, 541, 546, 551, 556, 561, 566, 571, 576, 581, 586, 591, 596,\n 601, 606, 611, 616, 621, 626, 631, 636, 641, 646, 651, 656, 661, 666, 671, 676\n ]\n# Overwrite lifetimes to have decommission start 1.5 year in, \n# expecting a sim start of 2022\nlifetimes = [x - 163 for x in old_lives]\n\nassert len(init_lwr_prototypes) == n_init_rxtrs\nassert len(n_builds) == n_init_rxtrs\nassert len(build_times) == n_init_rxtrs\nassert len(lifetimes) == n_init_rxtrs",
"_____no_output_____"
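As a sanity check on the staggering logic above: 17 distinct build months × 6 reactors per month gives 102 entries, and deleting two entries brings the list down to the 100 initial reactors. A standalone sketch that mirrors the cell's computation (same expressions, no new behavior):

```python
# 17 build months (1..17), 6 reactors per month -> 102 entries
build_times = [x + 1 for x in range(0, 17) for y in range(0, 6)]
# drop two trailing entries so the list matches the 100 initial reactors
del build_times[-3:-1]
print(len(build_times), build_times[0], build_times[-1])
```

`del build_times[-3:-1]` removes the third- and second-to-last elements (both month 17), leaving four reactors in the final build month.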
],
[
"init_fleet = {'name' : 'InitFleet', \n 'config' : {'DeployInst' : {'prototypes' : {'val' : init_lwr_prototypes},\n 'n_build' : {'val' : n_builds},\n 'build_times' : {'val' : build_times},\n 'lifetimes' : {'val' : lifetimes}\n }\n }\n }",
"_____no_output_____"
]
],
[
[
"### 2. EG01 FC Facilities: Manager Inst",
"_____no_output_____"
]
],
[
[
"eg1_fc_prototypes = ['SourceNatU', 'SourceNonIsos', 'SourceAddIsos', 'Enrichment', 'StorageDepU', \n 'UOXStrNon', 'UOXMixNon', 'UOXStrAdd', 'UOXMixAdd', 'LWR', \n 'UOXCool', 'UOXStr', 'Waste'\n ]\n\neg1_fc_inst = {'name' : 'FCInstEG01', \n 'initialfacilitylist' : {'entry' : [{'number' : 1, 'prototype' : 'SourceInitUOX'},\n {'number' : 1, 'prototype' : 'SourceNatU'},\n {'number' : 1, 'prototype' : 'SourceNonIsos'},\n {'number' : 1, 'prototype' : 'SourceAddIsos'},\n {'number' : 1, 'prototype' : 'Enrichment'},\n {'number' : 1, 'prototype' : 'StorageDepU'},\n {'number' : 1, 'prototype' : 'UOXStrNon'},\n {'number' : 1, 'prototype' : 'UOXMixNon'},\n {'number' : 1, 'prototype' : 'UOXStrAdd'},\n {'number' : 1, 'prototype' : 'UOXMixAdd'},\n {'number' : 1, 'prototype' : 'UOXCool'},\n {'number' : 1, 'prototype' : 'UOXStr'},\n {'number' : 1, 'prototype' : 'Waste'}]\n },\n 'config' : {'ManagerInst' : {'prototypes' : {'val' : eg1_fc_prototypes}}}\n }",
"_____no_output_____"
]
],
[
[
"### 3. Growth Regions: Pick Flat or 1% Growth",
"_____no_output_____"
]
],
[
[
"# 1% growth per year\n\nmonth_grow_rate = 0.01 / 12\nexp_str = '100000 ' + str(month_grow_rate) + ' 0'\nexp_func = {'piece' : [{'start' : 18,\n 'function' : {'type' : 'exponential', 'params' : exp_str}\n }\n ]\n }\ngrowth_region = {'GrowthRegion' : {'growth' : {'item' : [{'commod' : 'PowerLWR',\n 'piecewise_function' : exp_func\n }\n ]\n }\n }\n }",
"_____no_output_____"
],
[
"# flat\nlin_func = {'piece' : [{'start' : 18,\n 'function' : {'type' : 'linear', 'params' : '0 100000'}\n }\n ]\n }\ngrowth_region = {'GrowthRegion' : {'growth' : {'item' : [{'commod' : 'PowerLWR',\n 'piecewise_function' : lin_func\n }\n ]\n }\n }\n }",
"_____no_output_____"
]
],
[
[
"# Recipes\n- Here: \n 1. Depleted U\n 2. Natural U\n- 100 ppt Init U232 Additive Recipes (in recipe_100ppt.py):\n 1. NonAdditive U Isotopes (U234)\n 2. Additive U Isotopes (U232, U233, U234)\n 3. Almost UOX NonAdditive Enr Ratio\n 4. Almost UOX Additive Enr Ratio\n 5. UOX without Additive\n 6. UOX with Additive\n 7. Spent UOX from #5\n 8. Spent UOX from #6",
"_____no_output_____"
]
],
[
[
"dep_u = {'name' : 'DU',\n 'basis' : 'mass',\n 'nuclide' : [{'id' : 'U235', 'comp' : 0.0025}, \n {'id' : 'U238', 'comp' : 0.9975}]\n }\nnat_u = {'name' : 'NU',\n 'basis' : 'mass',\n 'nuclide' : [{'id' : 'U235', 'comp' : 0.007110}, \n {'id' : 'U238', 'comp' : 0.992890}]\n }",
"_____no_output_____"
],
[
"# Recipes from 100 ppt U232 additive @ beginning of enrichment\nfrom recipe_100ppt import (isos_no232, isos_232, enr_no232, enr_232, \n uox_no232, uox_10pct232, uox_50pct232, uox_232, \n spent_no232, spent_10pct232, spent_50pct232, \n spent_232, ff_no232, ff_232, spentff_no232, \n spentff_232\n )",
"_____no_output_____"
]
],
[
[
"# Main Input File ",
"_____no_output_____"
]
],
[
[
"control = {'duration' : sim_duration, \n 'startmonth' : 1, \n 'startyear' : 2022,\n #'dt' : 86400, \n #'explicit_inventory' : True\n }",
"_____no_output_____"
],
[
"def run_sim(filebase, sim):\n in_file = filebase + '.py'\n sim_file = '../output/' + filebase + '.sqlite'\n\n with open(in_file, 'w') as file: \n file.write('SIMULATION = ' + str(sim))\n subprocess.run(['rm', sim_file])\n subprocess.run(['cyclus', in_file, '-o', sim_file]) \n return",
"_____no_output_____"
]
],
[
[
"## EG01",
"_____no_output_____"
]
],
[
[
"archetypes = {'spec' : [{'lib' : 'cycamore', 'name' : 'Source'},\n {'lib' : 'cycamore', 'name' : 'Enrichment'},\n {'lib' : 'cycamore', 'name' : 'Mixer'},\n {'lib' : 'cycamore', 'name' : 'Reactor'},\n {'lib' : 'cycamore', 'name' : 'Storage'},\n {'lib' : 'cycamore', 'name' : 'Sink'},\n {'lib' : 'cycamore', 'name' : 'DeployInst'},\n {'lib' : 'cycamore', 'name' : 'ManagerInst'},\n {'lib' : 'cycamore', 'name' : 'GrowthRegion'},\n ]\n }",
"_____no_output_____"
],
[
"# full additive availability at intro -- skip down for ramp-up sim\nfull_region = {'name' : 'GrowthRegion', \n 'config' : growth_region, \n 'institution' : [init_fleet, eg1_fc_inst]\n }\nfull_sim = {'simulation' : {'control' : control,\n 'archetypes' : archetypes,\n 'region' : full_region,\n 'facility' : [init_uox_source, natu_source, non_source, add_source, enrich,\n du_store, store_no232, mix_no232, store_232, mix_232, \n lwr_full, lwr_cool, lwr_store, sink\n ],\n 'recipe' : [dep_u, nat_u, isos_no232, isos_232, enr_no232, enr_232, \n uox_no232, uox_232, spent_no232, spent_232\n ]\n }\n }",
"_____no_output_____"
],
[
"#run_sim('01_full-add_flat-pwr', full_sim)\n#run_sim('01_full-add_grow-pwr', full_sim)",
"_____no_output_____"
]
],
[
[
"# EG01 --> 23\n## Prototype Definitions",
"_____no_output_____"
]
],
[
[
"# remember that mixer facilities have fake ratios as of 3/9/22\n# only making fast fuel without additive, because the additive doesn't make sense for MOX\n# (prototypes exist to have a split, but creating both streams does not work as expected)\nfrom eg23_facilities import (eg23_sink, non_lwr_cool, add_lwr_cool, lwr_sep, \n sfr_mix_no232, non_sfr_cool, sfr_sep)",
"_____no_output_____"
],
[
"sfr = {'name' : 'SFR', \n 'lifetime' : rxtr_life,\n 'config' : {'Reactor' : {'fuel_incommods' : {'val' : ['FF_Non']},\n 'fuel_outcommods' : {'val' : ['SpentFF_Non']},\n 'fuel_inrecipes' : {'val' : ['FF_no232']},\n 'fuel_outrecipes' : {'val' : ['SpentFF_no232']},\n 'fuel_prefs' : {'val' : [1]},\n 'cycle_time' : 14,\n 'refuel_time' : 0,\n 'assem_size' : 7490,\n 'n_assem_core' : 5,\n 'n_assem_batch' : 1,\n 'power_name' : 'PowerSFR',\n 'power_cap' : 400\n }\n }\n }",
"_____no_output_____"
]
],
[
[
"## Regions and Institutions",
"_____no_output_____"
]
],
[
[
"eg1_23_fc_prototypes = ['SourceNatU', 'SourceNonIsos', 'SourceAddIsos', 'Enrichment', 'StorageDepU', \n 'UOXStrNon', 'UOXMixNon', 'UOXStrAdd', 'UOXMixAdd', 'LWR', \n 'UOXCoolNon', 'UOXCoolAdd', 'Waste', 'SFR'\n ]\n\neg1_23_fc_inst = {'name' : 'FCInstEG01-23', \n 'initialfacilitylist' : {'entry' : [{'number' : 1, 'prototype' : 'SourceInitUOX'},\n {'number' : 1, 'prototype' : 'SourceNatU'},\n {'number' : 1, 'prototype' : 'SourceNonIsos'},\n {'number' : 1, 'prototype' : 'SourceAddIsos'},\n {'number' : 1, 'prototype' : 'Enrichment'},\n {'number' : 1, 'prototype' : 'StorageDepU'},\n {'number' : 1, 'prototype' : 'UOXStrNon'},\n {'number' : 1, 'prototype' : 'UOXMixNon'},\n {'number' : 1, 'prototype' : 'UOXStrAdd'},\n {'number' : 1, 'prototype' : 'UOXMixAdd'},\n {'number' : 1, 'prototype' : 'UOXCoolNon'},\n {'number' : 1, 'prototype' : 'UOXCoolAdd'},\n {'number' : 1, 'prototype' : 'Waste'}]\n },\n 'config' : {'ManagerInst' : {'prototypes' : {'val' : eg1_23_fc_prototypes}\n }\n }\n }\n\neg23_fc_prototypes = ['UOXSep', 'FFMixNon', 'FFCoolNon', 'FFSep']\neg23_nbuilds = [1 for x in range(0, len(eg23_fc_prototypes))]\neg23_buildtimes = [trans_time for x in range(0, len(eg23_fc_prototypes))]\neg23_deploy = {'name' : 'EG23Deploy', \n 'config' : {'DeployInst' : {'prototypes' : {'val' : eg23_fc_prototypes},\n 'n_build' : {'val' : eg23_nbuilds},\n 'build_times' : {'val' : eg23_buildtimes}\n }\n }\n }",
"_____no_output_____"
],
[
"# exponential power demand\n## THIS IS NOT FUNCTIONAL, see 3/17 slides ##\nmonth_grow_rate = 0.01 / 12\na_0 = 100000\nexp_str_0 = str(a_0) + ' ' + str(month_grow_rate) + ' 0'\nexp_func_0 = {'type' : 'exponential', 'params' : exp_str_0}\n\n# exponential transition too:\n# in theory the deployment of SFR doesn't need to be exponential but since\n# the power commod is linked to this, this is how it will be for now. \n# maybe manual deployments do make the most sense to avoid this constraint\n#\n# todo: automate getting a_trans from previously calculated power curve\n# ...will be especially beneficial for ramp-up approach\npwr_trans = 114000\na_trans = pwr_trans / np.exp(-month_grow_rate * trans_time)\nexp_str_1 = str(a_trans) + ' -' + str(month_grow_rate) + ' 0'\nexp_str_2 = str(a_0) + ' ' + str(month_grow_rate) + ' 0'\nexp_func_1 = {'type' : 'exponential', 'params' : exp_str_1}\nexp_func_2 = {'type' : 'exponential', 'params' : exp_str_2}\n\nexp_func_lwr = {'piece' : [{'start' : 18,\n 'function' : exp_func_0\n },\n {'start' : trans_time,\n 'function' : exp_func_1\n }\n ]\n }\nexp_func_sfr = {'piece' : [{'start' : trans_time,\n 'function' : exp_func_2\n }\n ]\n }\ngrowth_region = {'GrowthRegion' : {'growth' : {'item' : [{'commod' : 'PowerLWR',\n 'piecewise_function' : exp_func_lwr\n },\n {'commod' : 'PowerSFR',\n 'piecewise_function' : exp_func_sfr\n }\n ]\n }\n }\n }",
"_____no_output_____"
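The `a_trans` back-calculation above solves pwr = a·e^(−r·t) for a, so that the declining LWR demand curve passes through the transition power at t = trans_time. Sketched with the notebook's own numbers (0.01/12 monthly rate, 180-month transition, 114000 MWe target):

```python
import math

rate = 0.01 / 12       # monthly growth rate
trans_time = 12 * 15   # transition month
pwr_trans = 114000     # LWR power target at the transition

# solve pwr_trans = a_trans * exp(-rate * trans_time) for a_trans
a_trans = pwr_trans / math.exp(-rate * trans_time)
print(round(a_trans, 1))
```

By construction, plugging `a_trans` back into the declining exponential at `trans_time` recovers `pwr_trans` exactly.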
],
[
"# linear power demand\nm = 20\nlin_func_0 = {'type' : 'linear', 'params' : '0 100000'}\nlin_func_1 = {'type' : 'linear', 'params' : '-' + str(m) + ' 0'}\nlin_func_2 = {'type' : 'linear', 'params' : str(m) + ' 0'}\nlin_func_lwr = {'piece' : [{'start' : 18,\n 'function' : lin_func_0\n },\n {'start' : trans_time,\n 'function' : lin_func_1\n }\n ]\n }\nlin_func_sfr = {'piece' : [{'start' : trans_time,\n 'function' : lin_func_2\n }\n ]\n }\ngrowth_region = {'GrowthRegion' : {'growth' : {'item' : [{'commod' : 'PowerLWR',\n 'piecewise_function' : lin_func_lwr\n },\n {'commod' : 'PowerSFR',\n 'piecewise_function' : lin_func_sfr\n }\n ]\n }\n }\n }",
"_____no_output_____"
],
[
"archetypes = {'spec' : [{'lib' : 'cycamore', 'name' : 'Source'},\n {'lib' : 'cycamore', 'name' : 'Enrichment'},\n {'lib' : 'cycamore', 'name' : 'Mixer'},\n {'lib' : 'cycamore', 'name' : 'Reactor'},\n {'lib' : 'cycamore', 'name' : 'Storage'},\n {'lib' : 'cycamore', 'name' : 'Separations'},\n {'lib' : 'cycamore', 'name' : 'Sink'},\n {'lib' : 'cycamore', 'name' : 'DeployInst'},\n {'lib' : 'cycamore', 'name' : 'ManagerInst'},\n {'lib' : 'cycamore', 'name' : 'GrowthRegion'},\n ]\n }\n\n\n# full additive availability at intro\nfull_region = {'name' : 'GrowthRegion', \n 'config' : growth_region, \n 'institution' : [init_fleet, eg1_23_fc_inst, eg23_deploy]\n }\nfull_sim = {'simulation' : {'control' : control,\n 'archetypes' : archetypes,\n 'region' : full_region,\n 'facility' : [init_uox_source, natu_source, non_source, add_source, enrich,\n du_store, store_no232, mix_no232, store_232, mix_232, lwr_full, \n non_lwr_cool, add_lwr_cool, lwr_sep, sfr_mix_no232, sfr,\n sfr_sep, non_sfr_cool, eg23_sink\n ],\n 'recipe' : [dep_u, nat_u, isos_no232, isos_232, enr_no232, enr_232, \n uox_no232, uox_232, spent_no232, spent_232, ff_no232, \n spentff_no232 \n ]\n }\n }",
"_____no_output_____"
],
[
"#run_sim('23_full-add_flat-pwr', full_sim)\nrun_sim('23_full-add_grow-pwr', full_sim)",
"_____no_output_____"
]
],
[
[
"# Ramp-up Approach EG01 Only (so far)\n\n10% additive fuel in refuel for 3 cycles, 50% for the next 3, and 100% after that.",
"_____no_output_____"
]
],
[
[
"from eg01_facilities import (store_pct_no232, store_pct_232, \n mix_50pct232, mix_10pct232)\n\n# LWR prototype for partial additive availability/slow utility uptake upon introduction\nintro_50 = intro_time + 3 * cycle_time\nintro_100 = intro_50 + 3 * cycle_time\nlwr_ramp = {'name' : 'LWR',\n 'lifetime' : rxtr_life,\n 'config' : {'Reactor' : {'fuel_incommods' : {'val' : ['UOX_Add', 'UOX_50pctAdd', 'UOX_10pctAdd', 'UOX_Non', 'InitUOX']},\n 'fuel_outcommods' : {'val' : ['SpentUOX_Add', 'SpentUOX_50pctAdd', 'SpentUOX_10pctAdd', 'SpentUOX_Non', 'SpentUOX_Non']},\n 'fuel_inrecipes' : {'val' : ['UOX_232', 'UOX_50pct232', 'UOX_10pct232', 'UOX_no232', 'UOX_no232']},\n 'fuel_outrecipes' : {'val' : ['SpentUOX_232', 'SpentUOX_50pct232', 'SpentUOX_10pct232', 'SpentUOX_no232', 'SpentUOX_no232']},\n 'fuel_prefs' : {'val' : [1, 1, 1, 2, 2.5]},\n 'pref_change_times' : {'val' : [intro_time, intro_50, intro_100]},\n 'pref_change_commods' : {'val' : ['UOX_10pctAdd', 'UOX_50pctAdd', 'UOX_Add']},\n 'pref_change_values' : {'val' : [3, 4, 5]},\n 'cycle_time' : cycle_time,\n 'refuel_time' : 0,\n 'assem_size' : assem_size,\n 'n_assem_core' : n_assem_core,\n 'n_assem_batch' : 1,\n 'power_cap' : 1000\n }\n }\n }",
"_____no_output_____"
],
[
"eg1_ramp_prototypes = ['StorageRampNon', 'StorageRampAdd', \n 'Mixer50pctAdd', 'Mixer10pctAdd'\n ]\n\nt = intro_time\n\neg1_ramp_inst = {'name' : 'RampInstEG01', \n 'config' : {'DeployInst' : {'prototypes' : {'val' : eg1_ramp_prototypes},\n 'n_build' : {'val' : [1, 1, 1, 1]},\n 'build_times' : {'val' : [t, t, t, t]}\n }\n }\n }",
"_____no_output_____"
],
[
"# ramp up additive availability at intro\nramp_region = {'name' : 'GrowthRegion', \n 'config' : growth_region, \n 'institution' : [init_fleet, eg1_fc_inst, eg1_ramp_inst]\n }\nramp_sim = {'simulation' : {'control' : control,\n 'archetypes' : archetypes,\n 'region' : ramp_region,\n 'facility' : [init_uox_source, natu_source, non_source, add_source, enrich,\n du_store, store_no232, mix_no232, store_232, mix_232, \n store_pct_no232, store_pct_232, mix_50pct232, mix_10pct232, \n lwr_ramp, lwr_cool, lwr_store, sink\n ],\n 'recipe' : [dep_u, nat_u, isos_no232, isos_232, enr_no232, enr_232, \n uox_no232, uox_10pct232, uox_50pct232, uox_232,\n spent_no232, spent_10pct232, spent_50pct232, spent_232\n ]\n }\n }",
"_____no_output_____"
],
[
"run_sim('01_ramp-add_flat-pwr', ramp_sim)\n#run_sim('01_ramp-add_grow-pwr', ramp_sim)",
"_____no_output_____"
]
],
[
[
"# Set of Simulations \n\nList of Simulation Scenarios (24):\n\nIf each scenario is run with both flat power and 1% power growth, this doubles to 48 simulations.\n\n- EG Scenarios\n    1. 01\n    2. 01-23\n    3. 01-29\n- Init Additive Concentration\n    1. 100ppt\n    2. ???pp?\n- Date\n    1. long before transition\n    2. closer to transition\n- Rate\n    1. full availability\n    2. ramp up availability",
"_____no_output_____"
]
],
[
[
"# File Names:\nfor eg in ['01', '23', '29']:\n for ppx in ['100ppt', '100ppb']:\n for date in ['05yr', '15yr']:\n for rate in ['full', 'ramp']:\n file = eg + '_' + ppx + '_' + date + '_' + rate\n print(file)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
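A note on the exponential-demand back-calculation in the record above: the decaying LWR piece's amplitude `a_trans = pwr_trans / np.exp(-month_grow_rate * trans_time)` is chosen so that the piece passes through `pwr_trans` at the transition month. A minimal standalone sketch in plain Python — the `trans_time` value of 960 months is hypothetical, and evaluating a piece as `a * exp(r * t)` at absolute month `t` is my reading of the GrowthRegion exponential parameters, not a Cyclus API call:

```python
import math

month_grow_rate = 0.01 / 12
trans_time = 960                 # hypothetical transition month, not from the source
pwr_trans = 114000

# Same back-calculation as in the notebook cell above
a_trans = pwr_trans / math.exp(-month_grow_rate * trans_time)

def exp_piece(t, a, r):
    # One 'exponential' piece with params 'a r 0', evaluated at absolute month t
    return a * math.exp(r * t)

# The decaying LWR piece should hit exactly pwr_trans at the transition month
print(round(exp_piece(trans_time, a_trans, -month_grow_rate)))  # 114000
```

The same check at `t = 0` simply recovers a piece's amplitude.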
e753de7c3e690b41e2e7398435f6ce750f2c27fe | 3,907 | ipynb | Jupyter Notebook | examples/reference/widgets/DateRangeSlider.ipynb | slamer59/panel | 963a63af57e786a8121f2f2bff1f6533fe7ddffe | [
"BSD-3-Clause"
] | 2 | 2021-05-03T15:19:04.000Z | 2021-05-13T16:02:25.000Z | examples/reference/widgets/DateRangeSlider.ipynb | slamer59/panel | 963a63af57e786a8121f2f2bff1f6533fe7ddffe | [
"BSD-3-Clause"
] | 10 | 2021-03-30T13:50:55.000Z | 2022-01-13T02:54:45.000Z | examples/reference/widgets/DateRangeSlider.ipynb | slamer59/panel | 963a63af57e786a8121f2f2bff1f6533fe7ddffe | [
"BSD-3-Clause"
] | 2 | 2021-05-07T19:20:08.000Z | 2021-11-11T20:37:57.000Z | 35.518182 | 474 | 0.611723 | [
[
[
"import datetime as dt\nimport panel as pn\n\npn.extension()",
"_____no_output_____"
]
],
[
[
"The ``DateRangeSlider`` widget allows selecting a date range using a slider with two handles.\n\nFor more information about listening to widget events and laying out widgets refer to the [widgets user guide](../../user_guide/Widgets.ipynb). Alternatively you can learn how to build GUIs by declaring parameters independently of any specific widgets in the [param user guide](../../user_guide/Param.ipynb). To express interactivity entirely using Javascript without the need for a Python server take a look at the [links user guide](../../user_guide/Param.ipynb).\n\n#### Parameters:\n\nFor layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).\n\n\n##### Core\n\n* **``start``** (datetime): The range's lower bound\n* **``end``** (datetime): The range's upper bound\n* **``value``** (tuple): Tuple of upper and lower bounds of the selected range expressed as datetime types\n* **``value_throttled``** (tuple): Tuple of upper and lower bounds of the selected range expressed as datetime types where events are throttled by `callback_throttle` value.\n\n##### Display\n\n* **``bar_color``** (color): Color of the slider bar as a hexadecimal RGB value\n* **``callback_policy``** (str, **DEPRECATED**): Policy to determine when slider events are triggered (one of 'continuous', 'throttle', 'mouseup')\n* **``callback_throttle``** (int): Number of milliseconds to pause between callback calls as the slider is moved\n* **``direction``** (str): Whether the slider should go from left to right ('ltr') or right to left ('rtl')\n* **``disabled``** (boolean): Whether the widget is editable\n* **``name``** (str): The title of the widget\n* **``orientation``** (str): Whether the slider should be displayed in a 'horizontal' or 'vertical' orientation.\n* **``tooltips``** (boolean): Whether to display tooltips on the slider handle\n\n___\n\nThe slider start and end can be adjusted by dragging the handles and the whole range can be shifted by dragging the selected range.",
"_____no_output_____"
]
],
[
[
"date_range_slider = pn.widgets.DateRangeSlider(\n name='Date Range Slider',\n start=dt.datetime(2017, 1, 1), end=dt.datetime(2019, 1, 1),\n value=(dt.datetime(2017, 1, 1), dt.datetime(2018, 1, 10))\n)\n\ndate_range_slider",
"_____no_output_____"
]
],
[
[
"``DateRangeSlider.value`` returns a tuple of datetime values that can be read out and set like other widgets:",
"_____no_output_____"
]
],
[
[
"date_range_slider.value",
"_____no_output_____"
]
],
[
[
"### Controls\n\nThe `DateRangeSlider` widget exposes a number of options which can be changed from both Python and Javascript. Try out the effect of these parameters interactively:",
"_____no_output_____"
]
],
[
[
"pn.Row(date_range_slider.controls(jslink=True), date_range_slider)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
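The ``value`` tuple documented above is bounded by ``start`` and ``end``; the clamping idea can be illustrated with a tiny standalone sketch — this function is hypothetical and for illustration only, not Panel's actual implementation:

```python
import datetime as dt

def clamp_range(value, start, end):
    # Order the tuple, then clamp both ends into [start, end]
    # (illustration of the bounding behaviour, not Panel code)
    lo, hi = sorted(value)
    return (max(lo, start), min(hi, end))

start, end = dt.datetime(2017, 1, 1), dt.datetime(2019, 1, 1)
value = (dt.datetime(2016, 6, 1), dt.datetime(2018, 1, 10))
print(clamp_range(value, start, end))  # lower bound clamped up to start
```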
e753ebf0553ac6d80d1f6e57af4b780d57bf2be6 | 37,199 | ipynb | Jupyter Notebook | 03_sale_price_prediction.ipynb | Octodon-D/project_house_price | 0eff85f22a7e434870226ed285c04d45f7a16d69 | [
"MIT"
] | null | null | null | 03_sale_price_prediction.ipynb | Octodon-D/project_house_price | 0eff85f22a7e434870226ed285c04d45f7a16d69 | [
"MIT"
] | null | null | null | 03_sale_price_prediction.ipynb | Octodon-D/project_house_price | 0eff85f22a7e434870226ed285c04d45f7a16d69 | [
"MIT"
] | null | null | null | 116.246875 | 27,081 | 0.86962 | [
[
[
"# 3. Sale price prediction\nThe aim of this part is to predict the sale price of houses as accurately as possible with a multivariate linear regression. \n\n## Setup\nImport required libraries and load fitted and cleaned data from the first notebook.",
"_____no_output_____"
]
],
[
[
"# load the libraries\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nfrom scipy import stats\n\nfrom datetime import datetime, date, time, timedelta\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.metrics import mean_squared_error, r2_score\n\nimport seaborn as sns\nimport statsmodels.api as sms\nimport statsmodels.formula.api as smf\n\n%matplotlib inline",
"_____no_output_____"
],
[
"# import cleaned data\ndf = pd.read_csv('data/df_cleaned.csv')",
"_____no_output_____"
]
],
[
[
"## Split dataframe into test and train set\nWe do not know the true sales price until a house has been successfully sold. In order to test the model before applying it to new and unknown data, we have to split the data into a train and a test dataset. While building the model I only work with the train dataset and keep the test part untouched. Later on, the test set is used to compare the model's predicted prices with the true sales prices, which indicates how well the model recovers them.\nTo separate the dataset into training and testing data, we use a feature of Scikit-Learn: Train-Test-Split.",
"_____no_output_____"
]
],
[
[
"# remove columns because they do not provide prognostic information\n#df.drop(['sqft_above', 'sqft_basement', 'yr_renovated', 'lat', 'long', 'yr_sale', \n# 'mo_sale', 'sqft_price', 'sqft_lot_price'], axis=1, inplace=True)",
"_____no_output_____"
],
[
"# define descriptive variables\nall_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'waterfront', 'view', \n 'condition', 'grade', 'yr_built', 'zipcode', 'sqft_living15', 'sqft_lot15', 'renovated', 'basement']\n\n# X contains all descriptive variables defined above\nX = df[all_features]\n\n# y contains price\ny = df.price",
"_____no_output_____"
],
[
"# split data into train and test data\n# 25 % of the data is used for the subsequent testing of the prognostic quality\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)",
"_____no_output_____"
],
[
"# check how much data is in each dataset\nprint(\"X_train (features for the model to learn from): \", X_train.shape)\nprint(\"y_train (labels for the model to learn from): \", y_train.shape)\nprint(\"X_test (features to test the model's accuracy against): \", X_test.shape)\nprint(\"y_test (labels to test the model's accuracy with): \", y_test.shape)",
"X_train (features for the model to learn from): (16197, 15)\ny_train (labels for the model to learn from): (16197,)\nX_test (features to test the model's accuracy against): (5399, 15)\ny_test (labels to test the model's accuracy with): (5399,)\n"
]
],
[
[
"## Correlations with price\nTo get an idea of which features are most interesting for the model, we take another look at the Pearson correlation coefficients.",
"_____no_output_____"
]
],
[
[
"# combine X_train and y_train again to use only the training set\nX_training = X_train.merge(y_train, left_index=True, right_index=True)",
"_____no_output_____"
],
[
"corr_price = X_training.corrwith(X_training.price).sort_values(ascending=False)\ncorr_price = corr_price[1:] # exclude price\n\n# plot correlation with price\nfig, ax = plt.subplots(figsize = (10,5))\nsns.barplot(x=corr_price.index, y=corr_price);\nplt.xticks(rotation='75');\nax.set_ylabel('Correlation')\nax.set_xlabel('Features')\nax.set_title('Correlation with price', fontsize = 20);",
"_____no_output_____"
]
],
[
[
"## Multiple Linear Regression\nAbove we see that some features are more correlated with the price than others. To test different feature combinations, three different models will be built: one with all features (15 features), one with only the features that have a Pearson correlation above 0.5 (4 features), and one with all features with a Pearson correlation above 0.1 (10 features).",
"_____no_output_____"
]
],
[
[
"# training of the model\n# first model with all features\nmodel1 = LinearRegression()\nmodel1.fit(X_train, y_train)\n\n# second model with features with a correlation higher than 0.5\n# determine variables to pass into the model\nmodel2 = LinearRegression()\nfeatures2 = ['sqft_living', 'grade', 'sqft_living15', 'bathrooms']\nmodel2.fit(X_train[features2], y_train)\n\n# third model with features with a correlation higher than 0.1\n# determine variables to pass into the model\nmodel3 = LinearRegression()\nfeatures3 = ['sqft_living', 'grade', 'sqft_living15', 'bathrooms', 'view', 'bedrooms', 'waterfront', 'floors', 'basement', 'renovated']\nmodel3.fit(X_train[features3], y_train)",
"_____no_output_____"
]
],
[
[
"## Evaluation \nThe R<sup>2</sup> and the adjusted R<sup>2</sup> indicate the percentage of variance of the target variable (price) explained by the model. Adjusted R<sup>2</sup> is a modified version of R<sup>2</sup> that has been adjusted for the number of explanatory variables. It penalises the addition of unnecessary variables and allows comparison of regression models with different numbers of explanatory variables.",
"_____no_output_____"
]
],
[
[
"# evaluation model 1\nprint('R^2: ', model1.score(X_test, y_test).round(4))\nprint('adj. R^2: ', (1-(1-model1.score(X_test, y_test))*(X_test.shape[0]- 1)/(X_test.shape[0]-X_test.shape[1]-1)).round(4))",
"R^2: 0.6412\nadj. R^2: 0.6402\n"
],
[
"# evaluation model 2\nprint('R^2: ', model2.score(X_test[features2], y_test).round(4))\nprint('adj. R^2: ', (1-(1-model2.score(X_test[features2], y_test))*(X_test[features2].shape[0]- 1)/(X_test[features2].shape[0]-X_test[features2].shape[1]-1)).round(4))",
"R^2: 0.5359\nadj. R^2: 0.5356\n"
],
[
"# evaluation model 3\nprint('R^2: ', model3.score(X_test[features3], y_test).round(4))\nprint('adj. R^2: ', (1-(1-model3.score(X_test[features3], y_test))*(X_test[features3].shape[0]- 1)/(X_test[features3].shape[0]-X_test[features3].shape[1]-1)).round(4))",
"R^2: 0.5987\nadj. R^2: 0.5979\n"
]
],
[
[
"### Conclusion\nThe model with all features has the highest R<sup>2</sup> and adjusted R<sup>2</sup>. This means the model explains about 64% of the variance in house prices when all features are included.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
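The adjusted R<sup>2</sup> arithmetic that is inlined in the three evaluation cells above reduces to one small formula; a standalone sketch in plain Python, checked against model 1's reported scores:

```python
def adjusted_r2(r2, n_obs, n_features):
    # Penalise R^2 for the number of explanatory variables:
    # 1 - (1 - R^2) * (n - 1) / (n - p - 1)
    return 1 - (1 - r2) * (n_obs - 1) / (n_obs - n_features - 1)

# Model 1: R^2 = 0.6412 on the 5399-row test set with 15 features
print(round(adjusted_r2(0.6412, 5399, 15), 4))  # 0.6402
```

Factoring this into a helper would also remove the repeated one-line expressions in the three evaluation cells.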
e754021b66b1b01daef45b2b77dd7015fb1765a6 | 46,800 | ipynb | Jupyter Notebook | Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb | giaminhhoang/MLND-sagemaker-deployment | 997172b41e082e76240c316cde4e463b4cdce4b3 | [
"MIT"
] | null | null | null | Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb | giaminhhoang/MLND-sagemaker-deployment | 997172b41e082e76240c316cde4e463b4cdce4b3 | [
"MIT"
] | null | null | null | Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb | giaminhhoang/MLND-sagemaker-deployment | 997172b41e082e76240c316cde4e463b4cdce4b3 | [
"MIT"
] | null | null | null | 74.88 | 14,684 | 0.749145 | [
[
[
"# Predicting Boston Housing Prices\n\n## Using XGBoost in SageMaker (Batch Transform)\n\n_Deep Learning Nanodegree Program | Deployment_\n\n---\n\nAs an introduction to using SageMaker's High Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.\n\nThe documentation for the high level API can be found on the [ReadTheDocs page](http://sagemaker.readthedocs.io/en/latest/)\n\n## General Outline\n\nTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.\n\n1. Download or otherwise retrieve the data.\n2. Process / Prepare the data.\n3. Upload the processed data to S3.\n4. Train a chosen model.\n5. Test the trained model (typically using a batch transform job).\n6. Deploy the trained model.\n7. Use the deployed model.\n\nIn this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.",
"_____no_output_____"
],
[
"## Step 0: Setting up the notebook\n\nWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport os\n\nimport numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\n\nfrom sklearn.datasets import load_boston\nimport sklearn.model_selection",
"_____no_output_____"
]
],
[
[
"In addition to the modules above, we need to import the various bits of SageMaker that we will be using. ",
"_____no_output_____"
]
],
[
[
"import sagemaker\nfrom sagemaker import get_execution_role\nfrom sagemaker.amazon.amazon_estimator import get_image_uri\nfrom sagemaker.predictor import csv_serializer\n\n# This is an object that represents the SageMaker session that we are currently operating in. This\n# object contains some useful information that we will need to access later such as our region.\nsession = sagemaker.Session()\n\n# This is an object that represents the IAM role that we are currently assigned. When we construct\n# and launch the training job later we will need to tell it what IAM role it should have. Since our\n# use case is relatively simple we will simply assign the training job the role we currently have.\nrole = get_execution_role()",
"_____no_output_____"
]
],
[
[
"## Step 1: Downloading the data\n\nFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.",
"_____no_output_____"
]
],
[
[
"boston = load_boston()",
"_____no_output_____"
]
],
[
[
"## Step 2: Preparing and splitting the data\n\nGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.",
"_____no_output_____"
]
],
[
[
"# First we package up the input data and the target variable (the median value) as pandas dataframes. This\n# will make saving the data to a file a little easier later on.\n\nX_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)\nY_bos_pd = pd.DataFrame(boston.target)\n\n# We split the dataset into 2/3 training and 1/3 testing sets.\nX_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)\n\n# Then we split the training set further into 2/3 training and 1/3 validation sets.\nX_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)",
"_____no_output_____"
]
],
[
[
"## Step 3: Uploading the data files to S3\n\nWhen a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details.\n\n### Save the data locally\n\nFirst we need to create the test, train and validation csv files which we will then upload to S3.",
"_____no_output_____"
]
],
[
[
"# This is our local data directory. We need to make sure that it exists.\ndata_dir = '../data/boston'\nif not os.path.exists(data_dir):\n os.makedirs(data_dir)",
"_____no_output_____"
],
[
"# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header\n# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and\n# validation data, it is assumed that the first entry in each row is the target variable.\n\nX_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)\n\npd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)\npd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)",
"_____no_output_____"
]
],
[
[
"### Upload to S3\n\nSince we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.",
"_____no_output_____"
]
],
[
[
"prefix = 'boston-xgboost-HL'\n\ntest_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)\nval_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)\ntrain_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)",
"_____no_output_____"
]
],
[
[
"## Step 4: Train the XGBoost model\n\nNow that we have the training and validation data uploaded to S3, we can construct our XGBoost model and train it. We will be making use of the high level SageMaker API to do this which will make the resulting code a little easier to read at the cost of some flexibility.\n\nTo construct an estimator, the object which we wish to train, we need to provide the location of a container which contains the training code. Since we are using a built in algorithm this container is provided by Amazon. However, the full name of the container is a bit lengthy and depends on the region that we are operating in. Fortunately, SageMaker provides a useful utility method called `get_image_uri` that constructs the image name for us.\n\nTo use the `get_image_uri` method we need to provide it with our current region, which can be obtained from the session object, and the name of the algorithm we wish to use. In this notebook we will be using XGBoost however you could try another algorithm if you wish. The list of built in algorithms can be found in the list of [Common Parameters](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html).",
"_____no_output_____"
]
],
[
[
"# As stated above, we use this utility method to construct the image name for the training container.\ncontainer = get_image_uri(session.boto_region_name, 'xgboost')\n\n# Now that we know which container to use, we can construct the estimator object.\nxgb = sagemaker.estimator.Estimator(container, # The image name of the training container\n role, # The IAM role to use (our current role in this case)\n train_instance_count=1, # The number of instances to use for training\n train_instance_type='ml.m4.xlarge', # The type of instance to use for training\n output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),\n # Where to save the output (the model artifacts)\n sagemaker_session=session) # The current SageMaker session",
"'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.\nThere is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:\n\tget_image_uri(region, 'xgboost', '1.0-1').\nParameter image_name will be renamed to image_uri in SageMaker Python SDK v2.\n"
]
],
[
[
"Before asking SageMaker to begin the training job, we should probably set any model specific hyperparameters. There are quite a few that can be set when using the XGBoost algorithm, below are just a few of them. If you would like to change the hyperparameters below or modify additional ones you can find additional information on the [XGBoost hyperparameter page](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost_hyperparameters.html)",
"_____no_output_____"
]
],
[
[
"xgb.set_hyperparameters(max_depth=5,\n eta=0.2,\n gamma=4,\n min_child_weight=6,\n subsample=0.8,\n objective='reg:linear',\n early_stopping_rounds=10,\n num_round=200)",
"_____no_output_____"
]
],
[
[
"Now that we have our estimator object completely set up, it is time to train it. To do this we make sure that SageMaker knows our input data is in csv format and then execute the `fit` method.",
"_____no_output_____"
]
],
[
[
"# This is a wrapper around the location of our train and validation data, to make sure that SageMaker\n# knows our data is in csv format.\ns3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')\ns3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')\n\nxgb.fit({'train': s3_input_train, 'validation': s3_input_validation})",
"'s3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2.\n's3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2.\n"
]
],
[
[
"## Step 5: Test the model\n\nNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. To start with, we need to build a transformer object from our fit model.",
"_____no_output_____"
]
],
[
[
"xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')",
"Parameter image will be renamed to image_uri in SageMaker Python SDK v2.\n"
]
],
[
[
"Next we ask SageMaker to begin a batch transform job using our trained model and applying it to the test data we previously stored in S3. We need to make sure to provide SageMaker with the type of data that we are providing to our model, in our case `text/csv`, so that it knows how to serialize our data. In addition, we need to make sure to let SageMaker know how to split our data up into chunks if the entire data set happens to be too large to send to our model all at once.\n\nNote that when we ask SageMaker to do this it will execute the batch transform job in the background. Since we need to wait for the results of this job before we can continue, we use the `wait()` method. An added benefit of this is that we get some output from our batch transform job which lets us know if anything went wrong.",
"_____no_output_____"
]
],
[
[
"xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')",
"_____no_output_____"
],
[
"xgb_transformer.wait()",
"...........................\n.\u001b[32m2020-09-10T23:00:13.072:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD\u001b[0m\n\u001b[34mArguments: serve\u001b[0m\n\u001b[34m[2020-09-10 23:00:12 +0000] [1] [INFO] Starting gunicorn 19.7.1\u001b[0m\n\u001b[34m[2020-09-10 23:00:12 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)\u001b[0m\n\u001b[34m[2020-09-10 23:00:12 +0000] [1] [INFO] Using worker: gevent\u001b[0m\n\u001b[34m[2020-09-10 23:00:12 +0000] [36] [INFO] Booting worker with pid: 36\u001b[0m\n\u001b[34m[2020-09-10 23:00:12 +0000] [37] [INFO] Booting worker with pid: 37\u001b[0m\n\u001b[34m[2020-09-10 23:00:13 +0000] [38] [INFO] Booting worker with pid: 38\u001b[0m\n\u001b[34m[2020-09-10 23:00:13 +0000] [39] [INFO] Booting worker with pid: 39\u001b[0m\n\u001b[34m[2020-09-10:23:00:13:INFO] Model loaded successfully for worker : 36\u001b[0m\n\u001b[34m[2020-09-10:23:00:13:INFO] Model loaded successfully for worker : 38\u001b[0m\n\u001b[35mArguments: serve\u001b[0m\n\u001b[35m[2020-09-10 23:00:12 +0000] [1] [INFO] Starting gunicorn 19.7.1\u001b[0m\n\u001b[35m[2020-09-10 23:00:12 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)\u001b[0m\n\u001b[35m[2020-09-10 23:00:12 +0000] [1] [INFO] Using worker: gevent\u001b[0m\n\u001b[35m[2020-09-10 23:00:12 +0000] [36] [INFO] Booting worker with pid: 36\u001b[0m\n\u001b[35m[2020-09-10 23:00:12 +0000] [37] [INFO] Booting worker with pid: 37\u001b[0m\n\u001b[35m[2020-09-10 23:00:13 +0000] [38] [INFO] Booting worker with pid: 38\u001b[0m\n\u001b[35m[2020-09-10 23:00:13 +0000] [39] [INFO] Booting worker with pid: 39\u001b[0m\n\u001b[35m[2020-09-10:23:00:13:INFO] Model loaded successfully for worker : 36\u001b[0m\n\u001b[35m[2020-09-10:23:00:13:INFO] Model loaded successfully for worker : 38\u001b[0m\n\u001b[34m[2020-09-10:23:00:13:INFO] Model loaded successfully for worker : 37\u001b[0m\n\u001b[34m[2020-09-10:23:00:13:INFO] Sniff delimiter as 
','\u001b[0m\n\u001b[34m[2020-09-10:23:00:13:INFO] Determined delimiter of CSV input is ','\u001b[0m\n\u001b[34m[2020-09-10:23:00:13:INFO] Model loaded successfully for worker : 39\u001b[0m\n\u001b[35m[2020-09-10:23:00:13:INFO] Model loaded successfully for worker : 37\u001b[0m\n\u001b[35m[2020-09-10:23:00:13:INFO] Sniff delimiter as ','\u001b[0m\n\u001b[35m[2020-09-10:23:00:13:INFO] Determined delimiter of CSV input is ','\u001b[0m\n\u001b[35m[2020-09-10:23:00:13:INFO] Model loaded successfully for worker : 39\u001b[0m\n"
]
],
[
[
"Now that the batch transform job has finished, the resulting output is stored on S3. Since we wish to analyze the output inside of our notebook we can use a bit of notebook magic to copy the output file from its S3 location and save it locally.",
"_____no_output_____"
]
],
[
[
"!aws s3 cp --recursive $xgb_transformer.output_path $data_dir",
"Completed 2.3 KiB/2.3 KiB (36.4 KiB/s) with 1 file(s) remaining\rdownload: s3://sagemaker-us-east-2-444100773610/xgboost-2020-09-10-22-55-41-172/test.csv.out to ../data/boston/test.csv.out\r\n"
]
],
[
[
"To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.",
"_____no_output_____"
]
],
[
[
"Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)",
"_____no_output_____"
],
[
"plt.scatter(Y_test, Y_pred)\nplt.xlabel(\"Median Price\")\nplt.ylabel(\"Predicted Price\")\nplt.title(\"Median Price vs Predicted Price\")",
"_____no_output_____"
]
],
[
[
"## Optional: Clean up\n\nThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.",
"_____no_output_____"
]
],
[
[
"# First we will remove all of the files contained in the data_dir directory\n!rm $data_dir/*\n\n# And then we delete the directory itself\n!rmdir $data_dir",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e75406ecdc480f3f98d2c6f68eaa310deeb4a69c | 342,798 | ipynb | Jupyter Notebook | Python/DataPreprocessing.ipynb | happyrabbit/IntroDataScience | d6376908f4bda3d61268c9f048e1f6cb799af881 | [
"CC0-1.0"
] | 32 | 2018-06-07T23:03:59.000Z | 2022-03-29T23:52:23.000Z | Python/DataPreprocessing.ipynb | happyrabbit/IntroDataScience | d6376908f4bda3d61268c9f048e1f6cb799af881 | [
"CC0-1.0"
] | 16 | 2017-08-28T14:57:44.000Z | 2020-10-16T22:51:42.000Z | Python/DataPreprocessing.ipynb | happyrabbit/IntroDataScience | d6376908f4bda3d61268c9f048e1f6cb799af881 | [
"CC0-1.0"
] | 12 | 2019-08-01T16:23:26.000Z | 2021-08-20T17:34:52.000Z | 181.952229 | 234,210 | 0.849139 | [
[
[
"<a href=\"https://colab.research.google.com/github/happyrabbit/IntroDataScience/blob/master/Python/DataPreprocessing.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Data Preprocessing Notebook\n\nIn this notebook, we will show how to use Python to preprocess the data.",
"_____no_output_____"
]
],
[
[
"# Load packages\nimport pandas as pd\nimport numpy as np\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.preprocessing import scale, power_transform\nfrom sklearn.feature_selection import VarianceThreshold\nfrom scipy import stats\nfrom statistics import mean\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import hist\nfrom sklearn.impute import KNNImputer\nfrom mlxtend.plotting import scatterplotmatrix\nimport seaborn as sns\n\n# Read data\ndat = pd.read_csv(\"http://bit.ly/2P5gTw4\")\ndat[:6]",
"_____no_output_____"
]
],
[
[
"# 01 Data Cleaning\n\nAfter you load the data, the first thing to do is check how many variables there are, the types of the variables, the distributions, and data errors. You can get descriptive statistics of the data using the `describe()` function:",
"_____no_output_____"
]
],
[
[
"dat.describe()",
"_____no_output_____"
]
],
[
[
"You can check missing values and column types quickly using `info()`:",
"_____no_output_____"
]
],
[
[
"dat.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1000 entries, 0 to 999\nData columns (total 19 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 age 1000 non-null int64 \n 1 gender 1000 non-null object \n 2 income 816 non-null float64\n 3 house 1000 non-null object \n 4 store_exp 1000 non-null float64\n 5 online_exp 1000 non-null float64\n 6 store_trans 1000 non-null int64 \n 7 online_trans 1000 non-null int64 \n 8 Q1 1000 non-null int64 \n 9 Q2 1000 non-null int64 \n 10 Q3 1000 non-null int64 \n 11 Q4 1000 non-null int64 \n 12 Q5 1000 non-null int64 \n 13 Q6 1000 non-null int64 \n 14 Q7 1000 non-null int64 \n 15 Q8 1000 non-null int64 \n 16 Q9 1000 non-null int64 \n 17 Q10 1000 non-null int64 \n 18 segment 1000 non-null object \ndtypes: float64(3), int64(13), object(3)\nmemory usage: 148.6+ KB\n"
]
],
[
[
"Are there any problems? Questionnaire responses Q1-Q10 seem reasonable: the minimum is 1 and the maximum is 5. Recall that the questionnaire score is 1-5. The number of store transactions (`store_trans`) and online transactions (`online_trans`) makes sense too. Things to pay attention to are:\n\n1. There are some missing values.\n2. There are outliers for store expenses (`store_exp`). The maximum value is 50000. Who would spend $50000 a year buying clothes?!\n3. There is a negative value (-500) in `store_exp`, which is not logical.\n4. Someone is 300 years old.\n\nHow to deal with these? Depending on the real situation, if the sample size is large enough, it does not hurt to delete those problematic samples. Here we have 1000 observations. Since a marketing survey is usually expensive, it is better to set these values as missing and impute them instead of deleting the rows.",
"_____no_output_____"
]
],
[
[
"# set problematic values as missings\ndat.loc[dat.age > 100, 'age'] = np.nan\ndat.loc[dat.store_exp < 0, 'store_exp'] = np.nan\ndat.loc[dat.income.isnull(), 'income'] = np.nan\n# see the results\n# some of the values are set as NA\ndat[['income','age', 'store_exp']].info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1000 entries, 0 to 999\nData columns (total 3 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 income 816 non-null float64\n 1 age 999 non-null float64\n 2 store_exp 999 non-null float64\ndtypes: float64(3)\nmemory usage: 23.6 KB\n"
]
],
[
[
"# 02-Missing Value",
"_____no_output_____"
],
[
"## 02.1-Impute missing values with `median`, `mode`, `mean`, or `constant`\n\nYou can set the imputation strategy using `strategy` argument.\n\n- If “`mean`”, then replace missing values using the mean along each column. Can only be used with numeric data.\n- If “`median`”, then replace missing values using the median along each column. Can only be used with numeric data.\n- If “`most_frequent`”, then replace missing using the most frequent value along each column. Can be used with strings or numeric data.\n- If “`constant`”, then replace missing values with fill_value. Can be used with strings or numeric data.\n",
"_____no_output_____"
]
],
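The four strategies are easy to reason about by hand. A small sketch of what `SimpleImputer` computes internally, in pure Python/numpy (the toy column values below are made up for illustration):

```python
import numpy as np
from collections import Counter

col = np.array([1.0, np.nan, 3.0, 3.0])    # numeric column with a missing value
cats = ["Female", "Male", None, "Female"]  # categorical column with a missing value

observed = col[~np.isnan(col)]
mean_fill = np.where(np.isnan(col), observed.mean(), col)        # strategy="mean"
median_fill = np.where(np.isnan(col), np.median(observed), col)  # strategy="median"

# strategy="most_frequent" works for strings too: fill with the mode of observed values
mode = Counter(c for c in cats if c is not None).most_common(1)[0][0]
cat_fill = [mode if c is None else c for c in cats]

print(mean_fill, median_fill, cat_fill)
```

Here the missing numeric value becomes 7/3 under `mean` and 3.0 under `median`, and the missing category becomes "Female" under `most_frequent`.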
[
[
"impdat = dat[['income','age','store_exp']]\nimp_mean = SimpleImputer(strategy=\"mean\")\nimp_mean.fit(impdat)\nimpdat = imp_mean.transform(impdat)",
"_____no_output_____"
],
[
"impdat = pd.DataFrame(data=impdat, columns=[\"income\", \"age\",'store_exp'])\nimpdat.head()",
"_____no_output_____"
]
],
[
[
"Let us replace the columns in `dat` with the imputed columns.",
"_____no_output_____"
]
],
[
[
"# replace the columns in `dat` with the imputed columns\ndat2 = dat.drop(columns = ['income','age','store_exp'])\ndat_imputed = pd.concat([dat2.reset_index(drop=True), impdat] , axis=1)\ndat_imputed.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1000 entries, 0 to 999\nData columns (total 19 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 gender 1000 non-null object \n 1 house 1000 non-null object \n 2 online_exp 1000 non-null float64\n 3 store_trans 1000 non-null int64 \n 4 online_trans 1000 non-null int64 \n 5 Q1 1000 non-null int64 \n 6 Q2 1000 non-null int64 \n 7 Q3 1000 non-null int64 \n 8 Q4 1000 non-null int64 \n 9 Q5 1000 non-null int64 \n 10 Q6 1000 non-null int64 \n 11 Q7 1000 non-null int64 \n 12 Q8 1000 non-null int64 \n 13 Q9 1000 non-null int64 \n 14 Q10 1000 non-null int64 \n 15 segment 1000 non-null object \n 16 income 1000 non-null float64\n 17 age 1000 non-null float64\n 18 store_exp 1000 non-null float64\ndtypes: float64(4), int64(12), object(3)\nmemory usage: 148.6+ KB\n"
]
],
[
[
"## 02.2-K-nearest neighbors",
"_____no_output_____"
]
],
[
[
"impdat = dat[['income','age','store_exp']]\nimp_knn = KNNImputer(n_neighbors=2, weights=\"uniform\")\nimpdat = imp_knn.fit_transform(impdat)\nimpdat = pd.DataFrame(data=impdat, columns=[\"income\", \"age\",'store_exp'])\nimpdat.head()",
"_____no_output_____"
],
[
"# replace the columns in `dat` with the imputed columns\ndat2 = dat.drop(columns = ['income','age','store_exp'])\ndat_imputed = pd.concat([dat2.reset_index(drop=True), impdat] , axis=1)\ndat_imputed.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1000 entries, 0 to 999\nData columns (total 19 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 gender 1000 non-null object \n 1 house 1000 non-null object \n 2 online_exp 1000 non-null float64\n 3 store_trans 1000 non-null int64 \n 4 online_trans 1000 non-null int64 \n 5 Q1 1000 non-null int64 \n 6 Q2 1000 non-null int64 \n 7 Q3 1000 non-null int64 \n 8 Q4 1000 non-null int64 \n 9 Q5 1000 non-null int64 \n 10 Q6 1000 non-null int64 \n 11 Q7 1000 non-null int64 \n 12 Q8 1000 non-null int64 \n 13 Q9 1000 non-null int64 \n 14 Q10 1000 non-null int64 \n 15 segment 1000 non-null object \n 16 income 1000 non-null float64\n 17 age 1000 non-null float64\n 18 store_exp 1000 non-null float64\ndtypes: float64(4), int64(12), object(3)\nmemory usage: 148.6+ KB\n"
]
],
[
[
"# 03-Centering and Scaling\n\nLet’s standardize the variables `income` and `age` from the imputed data `dat_imputed`.\n\n- `axis`: axis used to compute the means and standard deviations along. If 0, standardize each column; otherwise (if 1), standardize each row.\n- `with_mean`: if True, center the data before scaling.\n- `with_std`: if True, scale the data to unit standard deviation.",
"_____no_output_____"
]
],
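`scale` with `with_mean=True, with_std=True` is just the z-score $(x - \bar{x})/s$ applied column-wise. A sketch with numpy on toy values (note that sklearn uses the population standard deviation, i.e. `ddof=0`):

```python
import numpy as np

x = np.array([10.0, 20.0, 30.0, 40.0])
z = (x - x.mean()) / x.std(ddof=0)  # same convention as sklearn's scale()

print(z.mean(), z.std(ddof=0))  # mean ~0, standard deviation ~1
```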
[
[
"dat_s = dat_imputed[['income', 'age']]\ndat_sed = scale(dat_s, axis = 0, with_mean = True, with_std = True)",
"_____no_output_____"
]
],
[
[
"After centering and scaling, the features have mean 0 and standard deviation 1.",
"_____no_output_____"
]
],
[
[
"dat_sed = pd.DataFrame(data=dat_sed, columns=[\"income\", \"age\"])\ndat_sed.describe()",
"_____no_output_____"
]
],
[
[
"# 04-Resolve Skewness\n\nWe can use `sklearn.preprocessing.power_transform` to resolve skewness in the data. Currently, `power_transform` supports the Box-Cox transform and the Yeo-Johnson transform. The optimal parameter for stabilizing variance and minimizing skewness is estimated through maximum likelihood. Let's apply Box-Cox transformation.",
"_____no_output_____"
]
],
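For a fixed parameter $\lambda$, the Box-Cox transform is $(x^\lambda - 1)/\lambda$ (and $\log x$ when $\lambda = 0$); `power_transform` simply chooses $\lambda$ by maximum likelihood. A sketch of the fixed-$\lambda$ formula on toy positive values:

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox transform for x > 0 at a given lambda."""
    x = np.asarray(x, dtype=float)
    if lam == 0:
        return np.log(x)
    return (x ** lam - 1.0) / lam

print(box_cox([1.0, 4.0, 9.0], 0.5))  # 2*(sqrt(x)-1) -> [0. 2. 4.]
```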
[
[
"dat_skew = dat_imputed[['income', 'age']]\ndat_skew_res = power_transform(dat_skew, method = 'box-cox')",
"_____no_output_____"
],
[
"dat_skew_res = pd.DataFrame(data=dat_skew_res, columns=[\"income\", \"age\"])",
"_____no_output_____"
],
[
"fig, axs = plt.subplots(2)\nfig.suptitle('Before (top) and after (bottom) transformation')\naxs[0].hist(dat_imputed.income)\naxs[1].hist(dat_skew_res.income)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# 05-Resolve Outliers\n\nBox plot, histogram and some other basic visualizations can be used to initially check whether there are outliers. For example, we can visualize numerical non-survey variables using scatter matrix plot:",
"_____no_output_____"
]
],
[
[
"# select numerical non-survey data\nsubdat = dat_imputed[[\"age\", \"income\", \"store_exp\", \n \"online_exp\", \"store_trans\", \"online_trans\"]]\nsubdat = pd.DataFrame(data=subdat, columns=[\"age\", \"income\", \"store_exp\", \n \"online_exp\", \"store_trans\", \"online_trans\"])\nplts = pd.plotting.scatter_matrix(subdat, alpha=0.2, figsize = (12,12))",
"_____no_output_____"
]
],
[
[
"Let us use MAD (section 5.5 of the book) to detect outliers. The result here is slightly different because we use the imputed data `dat_imputed`.\n",
"_____no_output_____"
]
],
[
[
"# calculate the median absolute deviation (MAD) for income\nincome = dat_imputed.income\nymad = stats.median_abs_deviation(income, scale='normal')\n# calculate z-score\nzs = (income - mean(income))/ymad\n# count the number of outliers\nsum(zs > 3.5)",
"_____no_output_____"
]
],
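The scipy call above returns the MAD scaled by roughly 1.4826 so that it estimates the standard deviation under normality. The same rule can be written out directly with numpy only (synthetic values with one planted outlier; the z-score here uses the mean, mirroring the cell above):

```python
import numpy as np

x = np.array([10.0, 11.0, 9.0, 10.5, 9.5, 100.0])  # 100 is the planted outlier

mad = 1.4826 * np.median(np.abs(x - np.median(x)))  # consistency-scaled MAD
zs = (x - x.mean()) / mad                           # same z-score form as the cell above
n_outliers = int(np.sum(zs > 3.5))
print(n_outliers)  # only the planted outlier exceeds 3.5
```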
[
[
"# 06-Collinearity\n\nChecking correlations is an important part of the exploratory data analysis process. In python, you can visualize correlation structure of a set of predictors using [seaborn](https://seaborn.pydata.org) library.",
"_____no_output_____"
]
],
[
[
"# select non-survey numerical variables\ndf = dat_imputed[[\"age\", \"income\", \"store_exp\", \"online_exp\", \"store_trans\", \"online_trans\"]]\ndf = pd.DataFrame(df, columns = [\"age\", \"income\", \"store_exp\", \"online_exp\", \"store_trans\", \"online_trans\"])\ncor_plot = sns.heatmap(df.corr(), annot = True)",
"_____no_output_____"
]
],
[
[
"The closer the correlation is to 0, the lighter the color is. Let us write a `findCorrelation()` function to remove a minimum number of predictors to ensure all pairwise correlations are below a certain threshold.",
"_____no_output_____"
]
],
[
[
"## Drop out highly correlated features in Python\ndef findCorrelation(df, cutoff = 0.8):\n    \"\"\"\n    Given a numeric pd.DataFrame, this will find highly correlated features,\n    and return a list of features to remove\n    params:\n    - df : pd.DataFrame\n    - cutoff : correlation threshold, will remove one of pairs of features with\n    a correlation greater than this value\n    \"\"\"\n    # Create correlation matrix\n    corr_matrix = df.corr().abs()\n    # Select upper triangle of correlation matrix\n    upper = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(bool))\n    # Find index of feature columns with correlation greater than cutoff\n    to_drop = [column for column in upper.columns if any(upper[column] > cutoff)]\n    return to_drop",
"_____no_output_____"
]
],
[
[
"Remove the columns:",
"_____no_output_____"
]
],
[
[
"removeCols = findCorrelation(df, cutoff = 0.7)\nprint(removeCols)\ndf1 = df.drop(columns = removeCols) \n# check the new cor matrix\ndf1.corr()",
"['store_trans', 'online_trans']\n"
]
],
[
[
"# 07-Sparse Variables",
"_____no_output_____"
]
],
[
[
"# create a data frame with sparse variables\ncol1 = [0,0,0,0,1,1,0,0,0,0,0,0,]\ncol2 = range(0,len(col1))\na_dict = {\"col1\":col1, \"col2\":col2}\ndf = pd.DataFrame(a_dict)\ndf",
"_____no_output_____"
]
],
[
[
"Define a function to remove columns that have a low variance. An instance of the `VarianceThreshold` class can be created by specifying the “threshold” argument, which defaults to 0.0 and removes only columns with a single value.",
"_____no_output_____"
]
],
[
[
"def variance_threshold_selector(df, threshold=0):\n \"\"\"\n Given a numeric pd.DataFrame, this will remove columns that have a low variance and the return \n the resulted dataframe\n params:\n - df : input dataframe from which to compute variances.\n - threshold : Features with a training set variance lower than this threshold will be removed. \n The default is to keep all features with non-zero variance, i.e. remove the features \n that have the same value in all samples.\n \"\"\"\n selector = VarianceThreshold(threshold)\n selector.fit(df)\n res = df[df.columns[selector.get_support(indices=True)]]\n return res",
"_____no_output_____"
],
[
"# apply the function to the generated data set\nvariance_threshold_selector(df, threshold = 0.2)",
"_____no_output_____"
]
],
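The threshold is compared against each column's population variance. For the toy frame above, `col1` is a binary column with variance $p(1-p) = \frac{1}{6}\cdot\frac{5}{6} \approx 0.139 < 0.2$, so it is dropped. A numpy-only sketch of the same filter:

```python
import numpy as np

X = np.column_stack([
    [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0],  # sparse binary column (col1)
    np.arange(12),                          # col2
]).astype(float)

variances = X.var(axis=0)            # population variance, as VarianceThreshold uses
keep = np.where(variances > 0.2)[0]  # indices of columns passing the threshold
print(variances.round(3), keep)      # col1 variance ~0.139 fails; only col2 survives
```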
[
[
"# 08-Encode Dummy Variables\n\nLet’s encode `gender` and `house` from `dat_imputed` to dummy variables. You can use `get_dummies` function from `pandas`:",
"_____no_output_____"
]
],
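Dummy (one-hot) encoding just creates one indicator column per category level. A pure-Python sketch of what `get_dummies` produces for a single column (toy values, not the survey data):

```python
# Toy categorical column (hypothetical values)
gender = ["Female", "Male", "Female"]

levels = sorted(set(gender))  # ['Female', 'Male']
# One indicator list per level: 1 where the row matches the level, 0 otherwise
dummies = {lvl: [int(g == lvl) for g in gender] for lvl in levels}
print(dummies)  # {'Female': [1, 0, 1], 'Male': [0, 1, 0]}
```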
[
[
"df = dat_imputed[['gender', 'house']]\npd.get_dummies(df).head()",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e754081911d2ed2f7b2fe46e08668764a49a5dbc | 356,785 | ipynb | Jupyter Notebook | Neural Networks and Deep Learning/Logistic Regression as a Neural Network/Week-2/Logistic Regression as a Neural Network/Logistic+Regression+with+a+Neural+Network+mindset+v5.ipynb | adyaan1989/Deep-Learning | 3812b50dc5ae30f2766bd54d01a01ff9093b963f | [
"Apache-2.0"
] | null | null | null | Neural Networks and Deep Learning/Logistic Regression as a Neural Network/Week-2/Logistic Regression as a Neural Network/Logistic+Regression+with+a+Neural+Network+mindset+v5.ipynb | adyaan1989/Deep-Learning | 3812b50dc5ae30f2766bd54d01a01ff9093b963f | [
"Apache-2.0"
] | null | null | null | Neural Networks and Deep Learning/Logistic Regression as a Neural Network/Week-2/Logistic Regression as a Neural Network/Logistic+Regression+with+a+Neural+Network+mindset+v5.ipynb | adyaan1989/Deep-Learning | 3812b50dc5ae30f2766bd54d01a01ff9093b963f | [
"Apache-2.0"
] | null | null | null | 267.856607 | 226,946 | 0.900495 | [
[
[
"# Logistic Regression with a Neural Network mindset\n\nWelcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.\n\n**Instructions:**\n- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.\n\n**You will learn to:**\n- Build the general architecture of a learning algorithm, including:\n - Initializing parameters\n - Calculating the cost function and its gradient\n - Using an optimization algorithm (gradient descent) \n- Gather all three functions above into a main model function, in the right order.",
"_____no_output_____"
],
[
"## 1 - Packages ##\n\nFirst, let's run the cell below to import all the packages that you will need during this assignment. \n- [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.\n- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.\n- [matplotlib](http://matplotlib.org) is a famous library to plot graphs in Python.\n- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport h5py\nimport scipy\nfrom PIL import Image\nfrom scipy import ndimage\nfrom lr_utils import load_dataset\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## 2 - Overview of the Problem set ##\n\n**Problem Statement**: You are given a dataset (\"data.h5\") containing:\n - a training set of m_train images labeled as cat (y=1) or non-cat (y=0)\n - a test set of m_test images labeled as cat or non-cat\n - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).\n\nYou will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.\n\nLet's get more familiar with the dataset. Load the data by running the following code.",
"_____no_output_____"
]
],
[
[
"# Loading the data (cat/non-cat)\ntrain_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()",
"_____no_output_____"
]
],
[
[
"We added \"_orig\" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).\n\nEach line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the `index` value and re-run to see other images. ",
"_____no_output_____"
]
],
[
[
"# Example of a picture\nindex = 25\nplt.imshow(train_set_x_orig[index])\nprint (\"y = \" + str(train_set_y[:, index]) + \", it's a '\" + classes[np.squeeze(train_set_y[:, index])].decode(\"utf-8\") + \"' picture.\")",
"y = [1], it's a 'cat' picture.\n"
]
],
[
[
"Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs. \n\n**Exercise:** Find the values for:\n - m_train (number of training examples)\n - m_test (number of test examples)\n - num_px (= height = width of a training image)\nRemember that `train_set_x_orig` is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access `m_train` by writing `train_set_x_orig.shape[0]`.",
"_____no_output_____"
]
],
[
[
"### START CODE HERE ### (≈ 3 lines of code)\nm_train = train_set_x_orig.shape[0]\nm_test = test_set_x_orig.shape[0]\nnum_px = train_set_x_orig.shape[1]\n### END CODE HERE ###\n\nprint (\"Number of training examples: m_train = \" + str(m_train))\nprint (\"Number of testing examples: m_test = \" + str(m_test))\nprint (\"Height/Width of each image: num_px = \" + str(num_px))\nprint (\"Each image is of size: (\" + str(num_px) + \", \" + str(num_px) + \", 3)\")\nprint (\"train_set_x shape: \" + str(train_set_x_orig.shape))\nprint (\"train_set_y shape: \" + str(train_set_y.shape))\nprint (\"test_set_x shape: \" + str(test_set_x_orig.shape))\nprint (\"test_set_y shape: \" + str(test_set_y.shape))",
"Number of training examples: m_train = 209\nNumber of testing examples: m_test = 50\nHeight/Width of each image: num_px = 64\nEach image is of size: (64, 64, 3)\ntrain_set_x shape: (209, 64, 64, 3)\ntrain_set_y shape: (1, 209)\ntest_set_x shape: (50, 64, 64, 3)\ntest_set_y shape: (1, 50)\n"
]
],
[
[
"**Expected Output for m_train, m_test and num_px**: \n<table style=\"width:15%\">\n <tr>\n <td>**m_train**</td>\n <td> 209 </td> \n </tr>\n \n <tr>\n <td>**m_test**</td>\n <td> 50 </td> \n </tr>\n \n <tr>\n <td>**num_px**</td>\n <td> 64 </td> \n </tr>\n \n</table>\n",
"_____no_output_____"
],
[
"For convenience, you should now reshape images of shape (num_px, num_px, 3) in a numpy-array of shape (num_px $*$ num_px $*$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.\n\n**Exercise:** Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num\\_px $*$ num\\_px $*$ 3, 1).\n\nA trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$*$c$*$d, a) is to use: \n```python\nX_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X\n```",
"_____no_output_____"
]
],
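The reshape trick is easy to verify on a tiny array before applying it to the image data (toy shapes, for illustration only):

```python
import numpy as np

X = np.arange(2 * 3 * 3 * 3).reshape(2, 3, 3, 3)  # 2 tiny "images" of shape (3, 3, 3)
X_flat = X.reshape(X.shape[0], -1).T              # -> shape (27, 2), one column per image
print(X_flat.shape)
```

Each column of `X_flat` is one example flattened in C order, which is exactly what the exercise asks for.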
[
[
"# Reshape the training and test examples\n\n### START CODE HERE ### (≈ 2 lines of code)\ntrain_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0],-1).T\ntest_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0],-1).T\n### END CODE HERE ###\n\nprint (\"train_set_x_flatten shape: \" + str(train_set_x_flatten.shape))\nprint (\"train_set_y shape: \" + str(train_set_y.shape))\nprint (\"test_set_x_flatten shape: \" + str(test_set_x_flatten.shape))\nprint (\"test_set_y shape: \" + str(test_set_y.shape))\nprint (\"sanity check after reshaping: \" + str(train_set_x_flatten[0:5,0]))",
"train_set_x_flatten shape: (12288, 209)\ntrain_set_y shape: (1, 209)\ntest_set_x_flatten shape: (12288, 50)\ntest_set_y shape: (1, 50)\nsanity check after reshaping: [17 31 56 22 33]\n"
]
],
[
[
"**Expected Output**: \n\n<table style=\"width:35%\">\n <tr>\n <td>**train_set_x_flatten shape**</td>\n <td> (12288, 209)</td> \n </tr>\n <tr>\n <td>**train_set_y shape**</td>\n <td>(1, 209)</td> \n </tr>\n <tr>\n <td>**test_set_x_flatten shape**</td>\n <td>(12288, 50)</td> \n </tr>\n <tr>\n <td>**test_set_y shape**</td>\n <td>(1, 50)</td> \n </tr>\n <tr>\n <td>**sanity check after reshaping**</td>\n <td>[17 31 56 22 33]</td> \n </tr>\n</table>",
"_____no_output_____"
],
[
"To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.\n\nOne common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel).\n\n<!-- During the training of your model, you're going to multiply weights and add biases to some initial inputs in order to observe neuron activations. Then you backpropagate with the gradients to train the model. But, it is extremely important for each feature to have a similar range such that our gradients don't explode. You will see that more in detail later in the lectures. !--> \n\nLet's standardize our dataset.",
"_____no_output_____"
]
],
[
[
"train_set_x = train_set_x_flatten/255.\ntest_set_x = test_set_x_flatten/255.",
"_____no_output_____"
]
],
[
[
"<font color='blue'>\n**What you need to remember:**\n\nCommon steps for pre-processing a new dataset are:\n- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)\n- Reshape the datasets such that each example is now a vector of size (num_px \\* num_px \\* 3, 1)\n- \"Standardize\" the data",
"_____no_output_____"
],
[
"## 3 - General Architecture of the learning algorithm ##\n\nIt's time to design a simple algorithm to distinguish cat images from non-cat images.\n\nYou will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why **Logistic Regression is actually a very simple Neural Network!**\n\n<img src=\"images/LogReg_kiank.png\" style=\"width:650px;height:400px;\">\n\n**Mathematical expression of the algorithm**:\n\nFor one example $x^{(i)}$:\n$$z^{(i)} = w^T x^{(i)} + b \\tag{1}$$\n$$\\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\\tag{2}$$ \n$$ \\mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \\log(a^{(i)}) - (1-y^{(i)} ) \\log(1-a^{(i)})\\tag{3}$$\n\nThe cost is then computed by summing over all training examples:\n$$ J = \\frac{1}{m} \\sum_{i=1}^m \\mathcal{L}(a^{(i)}, y^{(i)})\\tag{6}$$\n\n**Key steps**:\nIn this exercise, you will carry out the following steps: \n - Initialize the parameters of the model\n - Learn the parameters for the model by minimizing the cost \n - Use the learned parameters to make predictions (on the test set)\n - Analyse the results and conclude",
"_____no_output_____"
],
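Equations (2)-(3) for a single example fit in a few lines. A quick sketch with toy numbers, showing that a confident correct prediction gives a small loss while a confident wrong one gives a large loss:

```python
import math

def loss(a, y):
    """Cross-entropy loss L(a, y) for one example, as in equation (3)."""
    return -y * math.log(a) - (1 - y) * math.log(1 - a)

# a = 0.9 is a confident "cat" prediction: cheap if y = 1, expensive if y = 0
print(loss(0.9, 1), loss(0.9, 0))
```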
[
"## 4 - Building the parts of our algorithm ## \n\nThe main steps for building a Neural Network are:\n1. Define the model structure (such as number of input features) \n2. Initialize the model's parameters\n3. Loop:\n - Calculate current loss (forward propagation)\n - Calculate current gradient (backward propagation)\n - Update parameters (gradient descent)\n\nYou often build 1-3 separately and integrate them into one function we call `model()`.\n\n### 4.1 - Helper functions\n\n**Exercise**: Using your code from \"Python Basics\", implement `sigmoid()`. As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \\frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: sigmoid\n\ndef sigmoid(z):\n \"\"\"\n Compute the sigmoid of z\n\n Arguments:\n z -- A scalar or numpy array of any size.\n\n Return:\n s -- sigmoid(z)\n \"\"\"\n\n ### START CODE HERE ### (≈ 1 line of code)\n s = 1 / (1 + (np.exp(-z)))\n ### END CODE HERE ###\n \n return s",
"_____no_output_____"
],
[
"print (\"sigmoid([0, 2]) = \" + str(sigmoid(np.array([0,2]))))",
"sigmoid([0, 2]) = [ 0.5 0.88079708]\n"
]
],
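A quick pure-Python sanity check of the same formula (no numpy needed for scalars):

```python
import math

def sigmoid_scalar(z):
    # 1 / (1 + e^(-z)), the same formula as the vectorized version above
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid_scalar(0), sigmoid_scalar(2))  # 0.5 and ~0.8808
```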
[
[
"**Expected Output**: \n\n<table>\n <tr>\n <td>**sigmoid([0, 2])**</td>\n <td> [ 0.5 0.88079708]</td> \n </tr>\n</table>",
"_____no_output_____"
],
[
"### 4.2 - Initializing parameters\n\n**Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: initialize_with_zeros\n\ndef initialize_with_zeros(dim):\n \"\"\"\n This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.\n \n Argument:\n dim -- size of the w vector we want (or number of parameters in this case)\n \n Returns:\n w -- initialized vector of shape (dim, 1)\n b -- initialized scalar (corresponds to the bias)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n w = np.zeros((dim,1))\n b = 0\n ### END CODE HERE ###\n\n assert(w.shape == (dim, 1))\n assert(isinstance(b, float) or isinstance(b, int))\n \n return w, b",
"_____no_output_____"
],
[
"dim = 2\nw, b = initialize_with_zeros(dim)\nprint (\"w = \" + str(w))\nprint (\"b = \" + str(b))",
"w = [[ 0.]\n [ 0.]]\nb = 0\n"
]
],
[
[
"**Expected Output**: \n\n\n<table style=\"width:15%\">\n <tr>\n <td> ** w ** </td>\n <td> [[ 0.]\n [ 0.]] </td>\n </tr>\n <tr>\n <td> ** b ** </td>\n <td> 0 </td>\n </tr>\n</table>\n\nFor image inputs, w will be of shape (num_px $\\times$ num_px $\\times$ 3, 1).",
"_____no_output_____"
],
[
"### 4.3 - Forward and Backward propagation\n\nNow that your parameters are initialized, you can do the \"forward\" and \"backward\" propagation steps for learning the parameters.\n\n**Exercise:** Implement a function `propagate()` that computes the cost function and its gradient.\n\n**Hints**:\n\nForward Propagation:\n- You get X\n- You compute $A = \\sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m-1)}, a^{(m)})$\n- You calculate the cost function: $J = -\\frac{1}{m}\\sum_{i=1}^{m}y^{(i)}\\log(a^{(i)})+(1-y^{(i)})\\log(1-a^{(i)})$\n\nHere are the two formulas you will be using: \n\n$$ \\frac{\\partial J}{\\partial w} = \\frac{1}{m}X(A-Y)^T\\tag{7}$$\n$$ \\frac{\\partial J}{\\partial b} = \\frac{1}{m} \\sum_{i=1}^m (a^{(i)}-y^{(i)})\\tag{8}$$",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: propagate\n\ndef propagate(w, b, X, Y):\n \"\"\"\n Implement the cost function and its gradient for the propagation explained above\n\n Arguments:\n w -- weights, a numpy array of size (num_px * num_px * 3, 1)\n b -- bias, a scalar\n X -- data of size (num_px * num_px * 3, number of examples)\n Y -- true \"label\" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)\n\n Return:\n cost -- negative log-likelihood cost for logistic regression\n dw -- gradient of the loss with respect to w, thus same shape as w\n db -- gradient of the loss with respect to b, thus same shape as b\n \n Tips:\n - Write your code step by step for the propagation. np.log(), np.dot()\n \"\"\"\n \n m = X.shape[1]\n \n # FORWARD PROPAGATION (FROM X TO COST)\n ### START CODE HERE ### (≈ 2 lines of code)\n A = sigmoid(np.dot(w.T,X)+b) # compute activation\n cost = -1 / m * np.sum(Y*np.log(A)+(1-Y)*np.log(1-A), axis = 1, keepdims = True) # compute cost\n ### END CODE HERE ###\n \n # BACKWARD PROPAGATION (TO FIND GRAD)\n ### START CODE HERE ### (≈ 2 lines of code)\n dw = 1 / m * np.dot(X,(A-Y).T)\n db = 1 / m * np.sum(A-Y)\n ### END CODE HERE ###\n\n assert(dw.shape == w.shape)\n assert(db.dtype == float)\n cost = np.squeeze(cost)\n assert(cost.shape == ())\n \n grads = {\"dw\": dw,\n \"db\": db}\n \n return grads, cost",
"_____no_output_____"
],
[
"w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])\ngrads, cost = propagate(w, b, X, Y)\nprint (\"dw = \" + str(grads[\"dw\"]))\nprint (\"db = \" + str(grads[\"db\"]))\nprint (\"cost = \" + str(cost))",
"dw = [[ 0.99845601]\n [ 2.39507239]]\ndb = 0.00145557813678\ncost = 5.801545319394553\n"
]
],
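The values printed above can be re-derived directly from equations (7) and (8), independent of the helper function, which is a useful way to debug dimension errors. A sketch with the same toy inputs:

```python
import numpy as np

w = np.array([[1.0], [2.0]])
b = 2.0
X = np.array([[1.0, 2.0, -1.0], [3.0, 4.0, -3.2]])
Y = np.array([[1, 0, 1]])
m = X.shape[1]

A = 1.0 / (1.0 + np.exp(-(np.dot(w.T, X) + b)))           # forward pass, equation (2)
cost = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))  # equation (6)
dw = np.dot(X, (A - Y).T) / m                             # equation (7)
db = np.sum(A - Y) / m                                    # equation (8)
print(dw.ravel(), db, cost)
```

The printed values should match `propagate()`'s output for the same inputs.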
[
[
"**Expected Output**:\n\n<table style=\"width:50%\">\n <tr>\n <td> ** dw ** </td>\n <td> [[ 0.99845601]\n [ 2.39507239]]</td>\n </tr>\n <tr>\n <td> ** db ** </td>\n <td> 0.00145557813678 </td>\n </tr>\n <tr>\n <td> ** cost ** </td>\n <td> 5.801545319394553 </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"### 4.4 - Optimization\n- You have initialized your parameters.\n- You are also able to compute a cost function and its gradient.\n- Now, you want to update the parameters using gradient descent.\n\n**Exercise:** Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\\theta$, the update rule is $ \\theta = \\theta - \\alpha \\text{ } d\\theta$, where $\\alpha$ is the learning rate.",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: optimize\n\ndef optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):\n \"\"\"\n This function optimizes w and b by running a gradient descent algorithm\n \n Arguments:\n w -- weights, a numpy array of size (num_px * num_px * 3, 1)\n b -- bias, a scalar\n X -- data of shape (num_px * num_px * 3, number of examples)\n Y -- true \"label\" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)\n num_iterations -- number of iterations of the optimization loop\n learning_rate -- learning rate of the gradient descent update rule\n print_cost -- True to print the loss every 100 steps\n \n Returns:\n params -- dictionary containing the weights w and bias b\n grads -- dictionary containing the gradients of the weights and bias with respect to the cost function\n costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.\n \n Tips:\n You basically need to write down two steps and iterate through them:\n 1) Calculate the cost and the gradient for the current parameters. Use propagate().\n 2) Update the parameters using gradient descent rule for w and b.\n \"\"\"\n \n costs = []\n \n for i in range(num_iterations):\n \n \n # Cost and gradient calculation (≈ 1-4 lines of code)\n ### START CODE HERE ### \n grads, cost = propagate(w, b, X, Y)\n ### END CODE HERE ###\n \n # Retrieve derivatives from grads\n dw = grads[\"dw\"]\n db = grads[\"db\"]\n \n # update rule (≈ 2 lines of code)\n ### START CODE HERE ###\n w = w - learning_rate * dw\n b = b - learning_rate * db\n ### END CODE HERE ###\n \n # Record the costs\n if i % 100 == 0:\n costs.append(cost)\n \n # Print the cost every 100 training iterations\n if print_cost and i % 100 == 0:\n print (\"Cost after iteration %i: %f\" %(i, cost))\n \n params = {\"w\": w,\n \"b\": b}\n \n grads = {\"dw\": dw,\n \"db\": db}\n \n return params, grads, costs",
"_____no_output_____"
],
[
"params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)\n\nprint (\"w = \" + str(params[\"w\"]))\nprint (\"b = \" + str(params[\"b\"]))\nprint (\"dw = \" + str(grads[\"dw\"]))\nprint (\"db = \" + str(grads[\"db\"]))",
"w = [[ 0.19033591]\n [ 0.12259159]]\nb = 1.92535983008\ndw = [[ 0.67752042]\n [ 1.41625495]]\ndb = 0.219194504541\n"
]
],
[
[
"**Expected Output**: \n\n<table style=\"width:40%\">\n <tr>\n <td> **w** </td>\n <td>[[ 0.19033591]\n [ 0.12259159]] </td>\n </tr>\n \n <tr>\n <td> **b** </td>\n <td> 1.92535983008 </td>\n </tr>\n <tr>\n <td> **dw** </td>\n <td> [[ 0.67752042]\n [ 1.41625495]] </td>\n </tr>\n <tr>\n <td> **db** </td>\n <td> 0.219194504541 </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"**Exercise:** The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the `predict()` function. There are two steps to computing predictions:\n\n1. Calculate $\\hat{Y} = A = \\sigma(w^T X + b)$\n\n2. Convert the entries of A into 0 (if activation <= 0.5) or 1 (if activation > 0.5), storing the predictions in a vector `Y_prediction`. If you wish, you can use an `if`/`else` statement in a `for` loop (though there is also a way to vectorize this). ",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: predict\n\ndef predict(w, b, X):\n '''\n Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)\n \n Arguments:\n w -- weights, a numpy array of size (num_px * num_px * 3, 1)\n b -- bias, a scalar\n X -- data of size (num_px * num_px * 3, number of examples)\n \n Returns:\n Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X\n '''\n \n m = X.shape[1]\n Y_prediction = np.zeros((1,m))\n w = w.reshape(X.shape[0], 1)\n \n # Compute vector \"A\" predicting the probabilities of a cat being present in the picture\n ### START CODE HERE ### (≈ 1 line of code)\n A = sigmoid(np.dot(w.T,X)+b)\n ### END CODE HERE ###\n \n for i in range(A.shape[1]):\n \n # Convert probabilities A[0,i] to actual predictions p[0,i]\n ### START CODE HERE ### (≈ 4 lines of code)\n Y_prediction[0,i] = np.where(A[0,i]>0.5,1,0)\n ### END CODE HERE ###\n \n assert(Y_prediction.shape == (1, m))\n \n return Y_prediction",
"_____no_output_____"
],
[
"w = np.array([[0.1124579],[0.23106775]])\nb = -0.3\nX = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])\nprint (\"predictions = \" + str(predict(w, b, X)))",
"predictions = [[ 1. 1. 0.]]\n"
]
],
[
[
"**Expected Output**: \n\n<table style=\"width:30%\">\n <tr>\n <td>\n **predictions**\n </td>\n <td>\n [[ 1. 1. 0.]]\n </td> \n </tr>\n\n</table>\n",
"_____no_output_____"
],
[
"<font color='blue'>\n**What to remember:**\nYou've implemented several functions that:\n- Initialize (w,b)\n- Optimize the loss iteratively to learn parameters (w,b):\n - computing the cost and its gradient \n - updating the parameters using gradient descent\n- Use the learned (w,b) to predict the labels for a given set of examples",
"_____no_output_____"
],
[
"## 5 - Merge all functions into a model ##\n\nYou will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts), in the right order.\n\n**Exercise:** Implement the model function. Use the following notation:\n - Y_prediction_test for your predictions on the test set\n - Y_prediction_train for your predictions on the train set\n - w, costs, grads for the outputs of optimize()",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: model\n\ndef model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):\n    \"\"\"\n    Builds the logistic regression model by calling the function you've implemented previously\n    \n    Arguments:\n    X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)\n    Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)\n    X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)\n    Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)\n    num_iterations -- hyperparameter representing the number of iterations to optimize the parameters\n    learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()\n    print_cost -- Set to true to print the cost every 100 iterations\n    \n    Returns:\n    d -- dictionary containing information about the model.\n    \"\"\"\n    \n    ### START CODE HERE ###\n    \n    # initialize parameters with zeros (≈ 1 line of code)\n    w, b = initialize_with_zeros(X_train.shape[0])\n\n    # Gradient descent (≈ 1 line of code)\n    parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)\n    \n    # Retrieve parameters w and b from dictionary \"parameters\"\n    w = parameters[\"w\"]\n    b = parameters[\"b\"]\n    \n    # Predict test/train set examples (≈ 2 lines of code)\n    Y_prediction_test = predict(w, b, X_test)\n    Y_prediction_train = predict(w, b, X_train)\n\n    ### END CODE HERE ###\n\n    # Print train/test Errors\n    print(\"train accuracy: {} %\".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))\n    print(\"test accuracy: {} %\".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))\n\n    \n    d = {\"costs\": costs,\n         \"Y_prediction_test\": Y_prediction_test, \n         \"Y_prediction_train\" : Y_prediction_train, \n         \"w\" : w, \n         \"b\" : b,\n         \"learning_rate\" : learning_rate,\n         \"num_iterations\": num_iterations}\n    \n    return d",
"_____no_output_____"
]
],
[
[
"Run the following cell to train your model.",
"_____no_output_____"
]
],
[
[
"d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2500, learning_rate = 0.009, print_cost = True)",
"Cost after iteration 0: 0.693147\nCost after iteration 100: 0.726194\nCost after iteration 200: 1.452277\nCost after iteration 300: 0.871654\nCost after iteration 400: 0.617655\nCost after iteration 500: 0.409132\nCost after iteration 600: 0.248640\nCost after iteration 700: 0.168364\nCost after iteration 800: 0.150399\nCost after iteration 900: 0.139503\nCost after iteration 1000: 0.130313\nCost after iteration 1100: 0.122320\nCost after iteration 1200: 0.115257\nCost after iteration 1300: 0.108951\nCost after iteration 1400: 0.103281\nCost after iteration 1500: 0.098152\nCost after iteration 1600: 0.093489\nCost after iteration 1700: 0.089233\nCost after iteration 1800: 0.085332\nCost after iteration 1900: 0.081745\nCost after iteration 2000: 0.078435\nCost after iteration 2100: 0.075372\nCost after iteration 2200: 0.072529\nCost after iteration 2300: 0.069885\nCost after iteration 2400: 0.067419\ntrain accuracy: 100.0 %\ntest accuracy: 70.0 %\n"
]
],
[
[
"**Expected Output**: \n\n<table style=\"width:40%\"> \n\n <tr>\n <td> **Cost after iteration 0 ** </td> \n <td> 0.693147 </td>\n </tr>\n <tr>\n <td> <center> $\\vdots$ </center> </td> \n <td> <center> $\\vdots$ </center> </td> \n </tr> \n <tr>\n <td> **Train Accuracy** </td> \n <td> 99.04306220095694 % </td>\n </tr>\n\n <tr>\n <td>**Test Accuracy** </td> \n <td> 70.0 % </td>\n </tr>\n</table> \n\n\n",
"_____no_output_____"
],
[
"**Comment**: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!\n\nAlso, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the `index` variable) you can look at predictions on pictures of the test set.",
"_____no_output_____"
]
],
[
[
"# Example of a picture that was wrongly classified.\nindex = 1\nplt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))\nprint (\"y = \" + str(test_set_y[0,index]) + \", you predicted that it is a \\\"\" + classes[d[\"Y_prediction_test\"][0,index]].decode(\"utf-8\") + \"\\\" picture.\")",
"y = 1, you predicted that it is a \"cat\" picture.\n"
]
],
[
[
"Let's also plot the cost function and the gradients.",
"_____no_output_____"
]
],
[
[
"# Plot learning curve (with costs)\ncosts = np.squeeze(d['costs'])\nplt.plot(costs)\nplt.ylabel('cost')\nplt.xlabel('iterations (per hundreds)')\nplt.title(\"Learning rate =\" + str(d[\"learning_rate\"]))\nplt.show()",
"_____no_output_____"
]
],
[
[
"**Interpretation**:\nYou can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting. ",
"_____no_output_____"
],
[
"## 6 - Further analysis (optional/ungraded exercise) ##\n\nCongratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\\alpha$. ",
"_____no_output_____"
],
[
"#### Choice of learning rate ####\n\n**Reminder**:\nIn order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may \"overshoot\" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.\n\nLet's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the `learning_rates` variable to contain, and see what happens. ",
"_____no_output_____"
]
],
[
[
"learning_rates = [0.01, 0.001, 0.0001]\nmodels = {}\nfor i in learning_rates:\n print (\"learning rate is: \" + str(i))\n models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)\n print ('\\n' + \"-------------------------------------------------------\" + '\\n')\n\nfor i in learning_rates:\n plt.plot(np.squeeze(models[str(i)][\"costs\"]), label= str(models[str(i)][\"learning_rate\"]))\n\nplt.ylabel('cost')\nplt.xlabel('iterations (hundreds)')\n\nlegend = plt.legend(loc='upper center', shadow=True)\nframe = legend.get_frame()\nframe.set_facecolor('0.90')\nplt.show()",
"learning rate is: 0.01\ntrain accuracy: 99.52153110047847 %\ntest accuracy: 68.0 %\n\n-------------------------------------------------------\n\nlearning rate is: 0.001\ntrain accuracy: 88.99521531100478 %\ntest accuracy: 64.0 %\n\n-------------------------------------------------------\n\nlearning rate is: 0.0001\ntrain accuracy: 68.42105263157895 %\ntest accuracy: 36.0 %\n\n-------------------------------------------------------\n\n"
]
],
[
[
"**Interpretation**: \n- Different learning rates give different costs and thus different prediction results.\n- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost). \n- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.\n- In deep learning, we usually recommend that you: \n    - Choose the learning rate that better minimizes the cost function.\n    - If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.) \n",
"_____no_output_____"
],
[
"## 7 - Test with your own image (optional/ungraded exercise) ##\n\nCongratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:\n 1. Click on \"File\" in the upper bar of this notebook, then click \"Open\" to go on your Coursera Hub.\n 2. Add your image to this Jupyter Notebook's directory, in the \"images\" folder\n 3. Change your image's name in the following code\n 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!",
"_____no_output_____"
]
],
[
[
"## START CODE HERE ## (PUT YOUR IMAGE NAME) \nmy_image = \"my_image.jpg\" # change this to the name of your image file \n## END CODE HERE ##\n\n# We preprocess the image to fit your algorithm.\nfname = \"images/\" + my_image\nimage = np.array(ndimage.imread(fname, flatten=False))\nmy_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T\nmy_predicted_image = predict(d[\"w\"], d[\"b\"], my_image)\n\nplt.imshow(image)\nprint(\"y = \" + str(np.squeeze(my_predicted_image)) + \", your algorithm predicts a \\\"\" + classes[int(np.squeeze(my_predicted_image)),].decode(\"utf-8\") + \"\\\" picture.\")",
"y = 0.0, your algorithm predicts a \"non-cat\" picture.\n"
]
],
[
[
"<font color='blue'>\n**What to remember from this assignment:**\n1. Preprocessing the dataset is important.\n2. You implemented each function separately: initialize(), propagate(), optimize(). Then you built a model().\n3. Tuning the learning rate (which is an example of a \"hyperparameter\") can make a big difference to the algorithm. You will see more examples of this later in this course!",
"_____no_output_____"
],
[
"Finally, if you'd like, we invite you to try different things on this Notebook. Make sure you submit before trying anything. Once you submit, things you can play with include:\n - Play with the learning rate and the number of iterations\n - Try different initialization methods and compare the results\n - Test other preprocessings (center the data, or divide each row by its standard deviation)",
"_____no_output_____"
],
[
"Bibliography:\n- http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/\n- https://stats.stackexchange.com/questions/211436/why-do-we-normalize-images-by-subtracting-the-datasets-image-mean-and-not-the-c",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
e7540a1d8b3685d623a99aafe75af9cd68746e84 | 4,755 | ipynb | Jupyter Notebook | termo-1-soal.ipynb | msuherma/termodinamika | e23001f6947a2599523ac7117b354a6c2f788bfe | ["MIT"] | null | null | null | termo-1-soal.ipynb | msuherma/termodinamika | e23001f6947a2599523ac7117b354a6c2f788bfe | ["MIT"] | null | null | null | termo-1-soal.ipynb | msuherma/termodinamika | e23001f6947a2599523ac7117b354a6c2f788bfe | ["MIT"] | null | null | null | 30.286624 | 151 | 0.561935 | [
[
[
"# Known\nh_1=6 # height of the Hg-oil column, m\nh_2=2 # height of the water-Hg column, m\nSG_Hg = 13.6 # specific gravity / relative density\nomega_H2O= 9800 # specific weight of water, N/m^3\nomega_Hg= SG_Hg * omega_H2O\n\n# Asked: the pressure difference P3-P1\n# Answer:\n# deltaP is the pressure difference P3-P1\ndeltaP=omega_Hg*h_1 - omega_H2O*h_2 \n\nprint('The pressure difference P3-P1 is %f Pa'%deltaP)",
"The pressure difference P3-P1 is 780080.000000 Pa\n"
],
[
"# Known:\nV = 15 # volume, m^3\nm_H2O= 5 # mass, kg\nT_1 = 40 # temperature (1), C\nT_2 = 86 # temperature (2), C\nvf_1 = 0.001008 # steam table, liquid, at temperature (1) \nvg_1 = 19.52 # steam table, gas, at temperature (1)\n\n# For interpolation\nvf_T85 =0.001032\nvg_T85 =2.828\nvf_T90 =0.001036\nvg_T90 =2.361\nP_T85=0.05783 #MPa\nP_T90=0.07013 #MPa\n\nvf_2 = ((86-85)*(vf_T90-vf_T85)/(90-85))+vf_T85\nvg_2 = ((86-85)*(vg_T90-vg_T85)/(90-85))+vg_T85\n# Find: a. the quality (x) for the saturated mixture condition\n#       b. the saturation pressures Psat_1 and Psat_2\n# Answer:\nv=V/m_H2O\nx_1 = (v-vf_1)/(vg_1-vf_1)\nx_2 = (v-vf_2)/(vg_2-vf_2)\nP_T86 = ((86-85)*(P_T90-P_T85)/(90-85))+P_T85\nprint('The quality (x) at 40 C is %f'%x_1)\nprint('From the saturated H2O table at 40 C, the saturation pressure is 0.007383 MPa')\nprint('===================================================')\nprint('Linear interpolation results')\nprint('The specific liquid volume (vf @86 C) at 86 C is %f m^3/kg'%vf_2)\nprint('The specific gas volume (vg @86 C) at 86 C is %f m^3/kg'%vg_2)\n\nprint('The quality (x) at 86 C is %f'%x_2)\nprint('Interpolating linearly in the saturated H2O table at 86 C, the saturation pressure is %f MPa'%P_T86)",
"The quality (x) at 40 C is 0.153645\nFrom the saturated H2O table at 40 C, the saturation pressure is 0.007383 MPa\n===================================================\nLinear interpolation results\nThe specific liquid volume (vf @86 C) at 86 C is 0.001033 m^3/kg\nThe specific gas volume (vg @86 C) at 86 C is 2.734600 m^3/kg\nThe quality (x) at 86 C is 1.097089\nInterpolating linearly in the saturated H2O table at 86 C, the saturation pressure is 0.060290 MPa\n"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
e7540f3fe2479aea8b2b0dc7bf0bd59dcc43a5fb | 107,265 | ipynb | Jupyter Notebook | DOGE-INR Prediction.ipynb | 3Hamza/Machine-Learning | 2fde8dd0e912f8442fa1498193fbbb18d152395d | ["Apache-2.0"] | null | null | null | DOGE-INR Prediction.ipynb | 3Hamza/Machine-Learning | 2fde8dd0e912f8442fa1498193fbbb18d152395d | ["Apache-2.0"] | null | null | null | DOGE-INR Prediction.ipynb | 3Hamza/Machine-Learning | 2fde8dd0e912f8442fa1498193fbbb18d152395d | ["Apache-2.0"] | null | null | null | 114.844754 | 29,992 | 0.829683 | [
[
[
"import numpy as np\nimport pandas as pd \nimport matplotlib.pyplot as plt\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.svm import SVC",
"_____no_output_____"
],
[
"df=pd.read_csv('DOGE-INR.csv')",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"type(df)",
"_____no_output_____"
],
[
"df=df.dropna()",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"# Visualize the plot\nplt.figure(figsize=(16,8))\nplt.title('DogeCoin')\nplt.xlabel('Days')\nplt.ylabel('Close Price')\nplt.plot(df['Close'])\nplt.show()",
"_____no_output_____"
],
[
"df=df[['Close']]\ndf.head(5)",
"_____no_output_____"
],
[
"# Create a new variable 'x' that predicts x days into the future\nfuture_days=30\n# Create a target column shifted to 'x' days up\ndf['Prediction']=df[['Close']].shift(-future_days)\ndf",
"<ipython-input-10-bd1ab7ff0aaf>:4: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n df['Prediction']=df[['Close']].shift(-future_days)\n"
],
[
"# Create a feature dataset (X) and convert it to numpy array and remove the last 'x' days from the dataset\nX=np.array(df.drop(['Prediction'],axis=1))[:-future_days]\nX.shape",
"_____no_output_____"
],
[
"# Create a target dataset of datatype array and remove the last 'x' units/days \ny=np.array(df['Prediction'])[:-future_days]",
"_____no_output_____"
],
[
"y.shape",
"_____no_output_____"
],
[
"# Split the dataset into train and test with 75%\nfrom sklearn.model_selection import train_test_split\nX_train, X_test,y_train, y_test= train_test_split(X,y,test_size=0.25)",
"_____no_output_____"
],
[
"# Preparing the models using linear regression and Decision Tree Regression\n#Fitting the model into linear regression\nln=LinearRegression().fit(X_train,y_train)\n\n#Fitting the model into Decision Tree Regression\ndr=DecisionTreeRegressor().fit(X_train,y_train)",
"_____no_output_____"
],
[
"# Get the last 'x' rows of the feature dataset\nx_future=np.array(df.drop(['Prediction'],axis=1))[:-future_days]\nx_future=df.tail(future_days)\nx_future=np.array(x_future)\nx_future = x_future[~np.isnan(x_future)]\nx_future=x_future.reshape(-1,1)\n",
"_____no_output_____"
],
[
"# Predict the model\ntree_prediction = dr.predict(x_future)\nprint(tree_prediction)\nprint()\n\nlr_prediction = ln.predict(x_future)\nprint(lr_prediction)\n",
"[5.369837 5.369837 2.704501 2.704501 5.369837 5.369837 5.369837 5.369837\n 5.369837 5.369837 5.369837 5.369837 5.369837 5.369837 5.369837 5.369837\n 2.550038 2.550038 2.550038 2.550038 2.550038 2.550038 2.550038 2.550038\n 2.550038 2.550038 2.550038 2.550038 2.550038 2.550038]\n\n[ 2.99338846 3.28048718 3.67810021 3.66850438 3.61259457 3.51136563\n 3.5681433 3.53067724 3.49289366 3.03576533 3.22228424 3.25093767\n 3.32692816 3.14817764 3.07915823 2.68769782 5.29355087 23.09277545\n 13.32617653 17.96729068 16.82456863 15.04905814 17.9144219 26.27473099\n 22.93669506 28.41141859 39.29634781 39.39373842 34.85160241 36.72038294]\n"
],
[
"# Visualize the data\npredictions= tree_prediction\n\nvalid=df[X.shape[0]:]\nvalid['Prediction']=predictions\nplt.figure(figsize=(15,7))\nplt.title('DTR')\nplt.xlabel('Days')\nplt.ylabel('Close')\nplt.plot(df['Close'])\nplt.plot(valid[['Close','Prediction']])\nplt.legend(['Original', 'Validation', 'Prediction'])\nplt.show()",
"<ipython-input-18-d34efb385b49>:5: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n valid['Prediction']=predictions\n"
],
[
"predictions= lr_prediction\n\nvalid=df[X.shape[0]:]\nvalid['Prediction']=predictions\nplt.figure(figsize=(15,7))\nplt.title('Linear Regression')\nplt.xlabel('Days')\nplt.ylabel('Close')\nplt.plot(df['Close'])\nplt.plot(valid[['Close','Prediction']])\nplt.legend(['Original', 'Validation', 'Prediction'])\nplt.show()",
"<ipython-input-19-5f999a79e8f6>:4: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n valid['Prediction']=predictions\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7541e2c70230d4eab3dbce83c3bf078e402109d | 34,570 | ipynb | Jupyter Notebook | R&D_Profit-visualize-loss.ipynb | nwmsno1/tensorflow_base | 158592f87285a6d2516ae4d72e6015054db5991c | [
"MIT"
] | null | null | null | R&D_Profit-visualize-loss.ipynb | nwmsno1/tensorflow_base | 158592f87285a6d2516ae4d72e6015054db5991c | [
"MIT"
] | null | null | null | R&D_Profit-visualize-loss.ipynb | nwmsno1/tensorflow_base | 158592f87285a6d2516ae4d72e6015054db5991c | [
"MIT"
] | null | null | null | 61.403197 | 15,340 | 0.692537 | [
[
[
"### 1. Load the data",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndf = pd.read_csv('50_Startups.csv')\ndf.head()",
"_____no_output_____"
]
],
[
[
"### 2. Normalize the data",
"_____no_output_____"
]
],
[
[
"def normalize_feature(df):\n return df.apply(lambda column: (column - column.mean())/column.std())\n\ndf = normalize_feature(df[['R&D Spend', 'Marketing Spend', 'Profit']])\ndf.head()",
"_____no_output_____"
],
[
"# Data exploration (3D)\n# import matplotlib.pyplot as plt\n# from mpl_toolkits import mplot3d\n# fig = plt.figure()\n# ax = plt.axes(projection='3d')\n# ax.set_xlabel('R&D Spend')\n# ax.set_ylabel('Marketing Spend')\n# ax.set_zlabel('Profit')\n# ax.scatter3D(df['R&D Spend'], df['Marketing Spend'], df['Profit'], c=df['Profit'], cmap='Reds')",
"_____no_output_____"
]
],
[
[
"### 3. Prepare the data",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\n# To make the matrix product convenient, add a column x0 of all ones\nones = pd.DataFrame({'ones': np.ones(len(df))}) # ones is an n-row, 1-column frame; x0 is always 1\ndf = pd.concat([ones, df], axis=1) # concatenate the data by columns\n\nX_data = np.array(df[df.columns[0:3]])\nY_data = np.array(df[df.columns[-1]]).reshape(len(df), 1)\n\nprint(X_data.shape, type(X_data))\nprint(Y_data.shape, type(Y_data))\ndf.head()",
"(50, 3) <class 'numpy.ndarray'>\n(50, 1) <class 'numpy.ndarray'>\n"
]
],
[
[
"### 4. Build the linear regression model (dataflow graph)",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\n\ntf.reset_default_graph() # https://www.cnblogs.com/demo-deng/p/10365889.html\n\nalpha = 0.01 # learning rate\nepoch = 500 # number of passes over the full training set\n\n# Build the linear regression model (dataflow graph)\nwith tf.name_scope('input'):\n    # input X, shape [50,3]\n    X = tf.placeholder(tf.float32, X_data.shape, name='X')\n    # input Y, shape [50,1]\n    Y = tf.placeholder(tf.float32, Y_data.shape, name='Y')\n\n# weight variable W, shape [3,1]\nwith tf.name_scope('hypothesis'):\n    # open question: tf.get_variable_scope().reuse_variables() # https://cloud.tencent.com/developer/article/1335672\n    W = tf.get_variable(\"weights\", (X_data.shape[1], 1), initializer=tf.constant_initializer())\n\n    # hypothesis h(x) = w0*x0+w1*x1+w2*x2, where x0 is always 1\n    # predicted value Y_pred, shape [50,1]\n    Y_pred = tf.matmul(X, W)\n\nwith tf.name_scope('loss'):\n    # least-squares loss; Y_pred - Y is a vector of shape [50,1]\n    # tf.matmul(a, b, transpose_a=True) means: transpose of a times b, i.e. [1,50] x [50,1]\n    # loss operation loss_op\n    loss_op = 1 / (2 * len(X_data)) * tf.matmul((Y_pred - Y), (Y_pred - Y), transpose_a=True)\n\nwith tf.name_scope('train'):\n    # gradient descent optimizer opt\n    opt = tf.train.GradientDescentOptimizer(learning_rate=alpha)\n    # single training step train_op\n    train_op = opt.minimize(loss_op)",
"_____no_output_____"
]
],
[
[
"### 5. Create the session (runtime environment)",
"_____no_output_____"
]
],
[
[
"# Create the session (runtime environment)\nwith tf.Session() as sess:\n    # initialize global variables\n    sess.run(tf.global_variables_initializer())\n    \n    # create a FileWriter instance\n    writer = tf.summary.FileWriter(\"./summary/linear-regression-1/\", sess.graph)\n    \n    loss_data = []\n    # start training the model\n    # the training set is small, so instead of mini-batch gradient descent we train on the full dataset each step\n    for e in range(1, epoch+1):\n        sess.run(train_op, feed_dict={X: X_data, Y: Y_data})\n        if e % 10 == 0:\n            _,loss, w = sess.run([train_op, loss_op, W], feed_dict={X: X_data, Y: Y_data})\n            # record how the loss changes over training\n            loss_data.append(float(loss))\n            log_str = \"Epoch %d \\t Loss=%.4g \\t Model: y = %.4gx1 + %.4gx2 + %.4g\"\n            print(log_str % (e, loss, w[1], w[2], w[0]))\n    \n# close the FileWriter output stream\nwriter.close()\n# tensorboard --logdir ./ --host localhost",
"Epoch 10 \t Loss=0.3661 \t Model: y = 0.09726x1 + 0.07332x2 + 7.451e-11\nEpoch 20 \t Loss=0.2701 \t Model: y = 0.1796x1 + 0.1324x2 + 7.078e-10\nEpoch 30 \t Loss=0.2035 \t Model: y = 0.2495x1 + 0.1798x2 + 2.98e-10\nEpoch 40 \t Loss=0.1571 \t Model: y = 0.3091x1 + 0.2174x2 + -1.155e-09\nEpoch 50 \t Loss=0.1247 \t Model: y = 0.36x1 + 0.2471x2 + 8.196e-10\nEpoch 60 \t Loss=0.1019 \t Model: y = 0.4037x1 + 0.2702x2 + 7.078e-10\nEpoch 70 \t Loss=0.08567 \t Model: y = 0.4414x1 + 0.2879x2 + 1.49e-10\nEpoch 80 \t Loss=0.07409 \t Model: y = 0.4741x1 + 0.3011x2 + 8.009e-10\nEpoch 90 \t Loss=0.06568 \t Model: y = 0.5026x1 + 0.3107x2 + 1.639e-09\nEpoch 100 \t Loss=0.05949 \t Model: y = 0.5275x1 + 0.3173x2 + 2.403e-09\nEpoch 110 \t Loss=0.05484 \t Model: y = 0.5495x1 + 0.3215x2 + 3.427e-09\nEpoch 120 \t Loss=0.05128 \t Model: y = 0.569x1 + 0.3237x2 + 3.856e-09\nEpoch 130 \t Loss=0.04849 \t Model: y = 0.5863x1 + 0.3244x2 + 3.334e-09\nEpoch 140 \t Loss=0.04624 \t Model: y = 0.6019x1 + 0.3237x2 + 3.669e-09\nEpoch 150 \t Loss=0.04439 \t Model: y = 0.616x1 + 0.322x2 + 3.93e-09\nEpoch 160 \t Loss=0.04283 \t Model: y = 0.6288x1 + 0.3194x2 + 4.061e-09\nEpoch 170 \t Loss=0.04148 \t Model: y = 0.6405x1 + 0.3162x2 + 4.675e-09\nEpoch 180 \t Loss=0.0403 \t Model: y = 0.6512x1 + 0.3125x2 + 4.899e-09\nEpoch 190 \t Loss=0.03924 \t Model: y = 0.6611x1 + 0.3085x2 + 5.029e-09\nEpoch 200 \t Loss=0.03829 \t Model: y = 0.6704x1 + 0.3041x2 + 5.141e-09\nEpoch 210 \t Loss=0.03741 \t Model: y = 0.679x1 + 0.2995x2 + 5.364e-09\nEpoch 220 \t Loss=0.03661 \t Model: y = 0.687x1 + 0.2947x2 + 5.662e-09\nEpoch 230 \t Loss=0.03587 \t Model: y = 0.6946x1 + 0.2899x2 + 5.886e-09\nEpoch 240 \t Loss=0.03518 \t Model: y = 0.7018x1 + 0.285x2 + 6.072e-09\nEpoch 250 \t Loss=0.03454 \t Model: y = 0.7086x1 + 0.2801x2 + 6.426e-09\nEpoch 260 \t Loss=0.03394 \t Model: y = 0.7151x1 + 0.2752x2 + 6.91e-09\nEpoch 270 \t Loss=0.03337 \t Model: y = 0.7213x1 + 0.2703x2 + 7.488e-09\nEpoch 280 \t Loss=0.03284 \t Model: y = 0.7272x1 + 0.2655x2 + 7.618e-09\nEpoch 290 \t Loss=0.03234 \t Model: y = 0.7328x1 + 0.2608x2 + 7.618e-09\nEpoch 300 \t Loss=0.03188 \t Model: y = 0.7382x1 + 0.2561x2 + 7.749e-09\nEpoch 310 \t Loss=0.03143 \t Model: y = 0.7435x1 + 0.2515x2 + 7.637e-09\nEpoch 320 \t Loss=0.03102 \t Model: y = 0.7485x1 + 0.247x2 + 8.177e-09\nEpoch 330 \t Loss=0.03063 \t Model: y = 0.7533x1 + 0.2426x2 + 7.972e-09\nEpoch 340 \t Loss=0.03026 \t Model: y = 0.758x1 + 0.2383x2 + 8.028e-09\nEpoch 350 \t Loss=0.02992 \t Model: y = 0.7625x1 + 0.2341x2 + 8.009e-09\nEpoch 360 \t Loss=0.02959 \t Model: y = 0.7668x1 + 0.23x2 + 8.177e-09\nEpoch 370 \t Loss=0.02928 \t Model: y = 0.771x1 + 0.226x2 + 8.326e-09\nEpoch 380 \t Loss=0.02899 \t Model: y = 0.7751x1 + 0.2221x2 + 8.345e-09\nEpoch 390 \t Loss=0.02872 \t Model: y = 0.779x1 + 0.2183x2 + 8.401e-09\nEpoch 400 \t Loss=0.02846 \t Model: y = 0.7828x1 + 0.2146x2 + 8.55e-09\nEpoch 410 \t Loss=0.02822 \t Model: y = 0.7865x1 + 0.211x2 + 9.015e-09\nEpoch 420 \t Loss=0.02799 \t Model: y = 0.7901x1 + 0.2075x2 + 9.052e-09\nEpoch 430 \t Loss=0.02778 \t Model: y = 0.7935x1 + 0.2041x2 + 9.518e-09\nEpoch 440 \t Loss=0.02758 \t Model: y = 0.7969x1 + 0.2008x2 + 9.611e-09\nEpoch 450 \t Loss=0.02739 \t Model: y = 0.8001x1 + 0.1976x2 + 9.63e-09\nEpoch 460 \t Loss=0.02721 \t Model: y = 0.8033x1 + 0.1945x2 + 9.667e-09\nEpoch 470 \t Loss=0.02704 \t Model: y = 0.8063x1 + 0.1914x2 + 9.947e-09\nEpoch 480 \t Loss=0.02688 \t Model: y = 0.8093x1 + 0.1885x2 + 9.891e-09\nEpoch 490 \t Loss=0.02673 \t Model: y = 0.8122x1 + 0.1856x2 + 9.965e-09\nEpoch 500 \t Loss=0.02659 \t Model: y = 0.815x1 + 0.1829x2 + 9.928e-09\n"
]
],
[
[
"### 6. 可视化损失值",
"_____no_output_____"
]
],
[
[
"print(len(loss_data), loss_data)",
"50 [0.3661213517189026, 0.27011415362358093, 0.20349600911140442, 0.15711453557014465, 0.12467682361602783, 0.10185541212558746, 0.08567407727241516, 0.07408539205789566, 0.0656803622841835, 0.059489019215106964, 0.05484314635396004, 0.051282115280628204, 0.04848800227046013, 0.046241018921136856, 0.04438899829983711, 0.04282620921730995, 0.041478972882032394, 0.040295686572790146, 0.039239950478076935, 0.03828589990735054, 0.03741493821144104, 0.03661353141069412, 0.03587164729833603, 0.03518170863389969, 0.03453788533806801, 0.03393556922674179, 0.033371005207300186, 0.03284109756350517, 0.03234320878982544, 0.03187505528330803, 0.031434621661901474, 0.031020084396004677, 0.03062981739640236, 0.03026231937110424, 0.02991621196269989, 0.029590202495455742, 0.02928311936557293, 0.028993820771574974, 0.0287212785333395, 0.02846449799835682, 0.02822258323431015, 0.027994660660624504, 0.027779920026659966, 0.027577588334679604, 0.02738695777952671, 0.027207350358366966, 0.027038123458623886, 0.02687867358326912, 0.026728447526693344, 0.026586897671222687]\n"
],
[
"import seaborn as sns\nimport matplotlib.pyplot as plt\nsns.set(context='notebook', style='whitegrid', palette='dark')\n\nax = sns.lineplot(x='epoch', y='loss', data=pd.DataFrame({'loss': loss_data, 'epoch': np.arange(epoch/10)}))\nax.set_xlabel('epoch')\nax.set_ylabel('loss')\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e75444b30e54cb3c875935da56c0fdcd2f30e459 | 403,031 | ipynb | Jupyter Notebook | src/plots/notebook/Heatmaps.ipynb | nicksspark/codon-usage | 62e8ca16285f01080ba2419cee3869da32e3c1db | [
"MIT"
] | 1 | 2021-11-28T20:48:54.000Z | 2021-11-28T20:48:54.000Z | src/plots/notebook/Heatmaps.ipynb | nicksspark/codon-usage | 62e8ca16285f01080ba2419cee3869da32e3c1db | [
"MIT"
] | null | null | null | src/plots/notebook/Heatmaps.ipynb | nicksspark/codon-usage | 62e8ca16285f01080ba2419cee3869da32e3c1db | [
"MIT"
] | 2 | 2020-11-06T21:06:26.000Z | 2021-11-28T20:49:02.000Z | 1,098.177112 | 207,808 | 0.951899 | [
[
[
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"vrl_df = pd.read_csv(\"../heatmap_inputs/vrl_data.csv\", index_col=0)\nphg_df = pd.read_csv(\"../heatmap_inputs/phg_data.csv\", index_col=0)\nbct_df = pd.read_csv(\"../heatmap_inputs/bct_data.csv\", index_col=0)\narc_df = pd.read_csv(\"../heatmap_inputs/arc_data.csv\", index_col=0)\neuk_df = pd.read_csv(\"../heatmap_inputs/euk_data.csv\", index_col=0)",
"_____no_output_____"
],
[
"vrl_df.head(5)",
"_____no_output_____"
],
[
"#Heatmaps without normalizing \n\nfig,(ax1,ax2, ax3, ax4, ax5) = plt.subplots(5, 1, figsize=(10,10))\n\ng1 = sns.heatmap(vrl_df, cbar=False, ax=ax1)\nax1.title.set_text('vrl')\ng2 = sns.heatmap(phg_df, cbar=False, ax=ax2)\nax2.title.set_text('phg')\ng3 = sns.heatmap(bct_df, cbar=False, ax=ax3)\nax3.title.set_text('bct')\ng4 = sns.heatmap(arc_df, cbar=False, ax=ax4)\nax4.title.set_text('arc')\ng5 = sns.heatmap(euk_df, cbar=False, ax=ax5)\nax5.title.set_text('euk')\n\nfig.tight_layout()",
"_____no_output_____"
],
[
"#Heatmaps after normalizing \nfig,(ax1,ax2, ax3, ax4, ax5) = plt.subplots(5, 1, figsize=(10,10))\n\ng1 = sns.heatmap(vrl_df,vmin=0, vmax=0.27543, cbar=False, ax=ax1)\nax1.title.set_text('vrl')\ng2 = sns.heatmap(phg_df,vmin=0, vmax=0.27543, cbar=False, ax=ax2)\nax2.title.set_text('phg')\ng3 = sns.heatmap(bct_df,vmin=0, vmax=0.27543, cbar=False, ax=ax3)\nax3.title.set_text('bct')\ng4 = sns.heatmap(arc_df,vmin=0, vmax=0.27543, cbar=False, ax=ax4)\nax4.title.set_text('arc')\ng5 = sns.heatmap(euk_df,vmin=0, vmax=0.27543, cbar=False, ax=ax5)\nax5.title.set_text('euk')\n\nfig.tight_layout()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e754491cde8491fadd41065a73c2e5336b047942 | 1,918 | ipynb | Jupyter Notebook | present/presentation_introduction.ipynb | weekmo/biodb_team3 | c2918f629d42c1137dcc249a549991e92f63a7f9 | [
"Apache-2.0"
] | null | null | null | present/presentation_introduction.ipynb | weekmo/biodb_team3 | c2918f629d42c1137dcc249a549991e92f63a7f9 | [
"Apache-2.0"
] | 1 | 2021-11-15T17:47:33.000Z | 2021-11-15T17:47:33.000Z | present/presentation_introduction.ipynb | weekmo/biodb_team3 | c2918f629d42c1137dcc249a549991e92f63a7f9 | [
"Apache-2.0"
] | null | null | null | 21.311111 | 77 | 0.553702 | [
[
[
"# Investigating the opportunities\n\n- Research for a protein database which has OMIM ids\n- In pathway databases OMIM is missing\n- Human Protein Reference Database\n- UniProt\n",
"_____no_output_____"
],
[
"# Why UniProt?\n\n- OMIM id is available for proteins\n- More than 500,000 proteins\n- It is easy to access and use (txt)\n- Other databases has connection to UniProt Ids\n- Provides an up-to-date, comprehensive body of protein information\n- Continuously updated for new data every four weeks\n",
"_____no_output_____"
],
[
"# Source file\n\n\n",
"_____no_output_____"
],
[
"# Novelty\n\n- Similar task was addressed by others (offline)\n- The databases mostly focus on mapping gene to disease associations\n- Protein disease associations can be more interesting in drug design\n",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e754669fd037df1f64bb6c3d422f710650271573 | 332,720 | ipynb | Jupyter Notebook | reproduce_paper_figures/make_sm_figure_4.ipynb | flowersteam/holmes | e38fb8417ec56cfde8142eddd0f751e319e35d8c | [
"MIT"
] | 6 | 2020-12-19T00:16:16.000Z | 2022-01-28T14:59:21.000Z | reproduce_paper_figures/make_sm_figure_4.ipynb | Evolutionary-Intelligence/holmes | e38fb8417ec56cfde8142eddd0f751e319e35d8c | [
"MIT"
] | null | null | null | reproduce_paper_figures/make_sm_figure_4.ipynb | Evolutionary-Intelligence/holmes | e38fb8417ec56cfde8142eddd0f751e319e35d8c | [
"MIT"
] | 1 | 2021-05-24T14:58:26.000Z | 2021-05-24T14:58:26.000Z | 26.929988 | 39,487 | 0.366801 | [
[
[
"# Figure 4 SM",
"_____no_output_____"
],
[
"# General",
"_____no_output_____"
]
],
[
[
"# default print properties\nmultiplier = 2\n\npixel_cm_ration = 36.5\n\nwidth_full = int(13.95 * pixel_cm_ration) * multiplier\nwidth_half = int(13.95/2 * pixel_cm_ration) * multiplier\n\nheight_default_1 = int(4 * pixel_cm_ration) * multiplier\n\n# margins in pixel\ntop_margin = 5 * multiplier \nleft_margin = 35 * multiplier \nright_margin = 0 * multiplier \nbottom_margin = 25 * multiplier \n\nfont_size = 5 * multiplier \nfont_family='Times New Roman'\n\nline_width = 2 * multiplier ",
"_____no_output_____"
],
[
"# Define and load data\nimport autodisc as ad\nimport ipywidgets\nimport plotly\nimport numpy as np\nimport collections\nimport os\nimport sys\nplotly.offline.init_notebook_mode(connected=True)\n\ndata_filters = collections.OrderedDict()\ndata_filters['none'] = []\ndata_filters['non dead'] = ('classifier_dead.data', '==', False)\ndata_filters['SLP'] = ('classifier_animal.data', '==', True)\ndata_filters['TLP'] = (('classifier_dead.data', '==', False), 'and', ('classifier_animal.data', '==', False))\n\norg_experiment_definitions = dict()\n\norg_experiment_definitions['main_paper'] = [\n dict(id = '1',\n directory = '../experiments/Random Exploration',\n name = 'Random Exploration',\n is_default = True),\n dict(id = '2',\n directory = '../experiments/IMGEP-VAE',\n name = 'IMGEP-VAE',\n is_default = True),\n dict(id = '3',\n directory = '../experiments/IMGEP-HOLMES',\n name = 'IMGEP-HOLMES',\n is_default = True),\n dict(id = '4',\n directory = '../experiments/IMGEP-HOLMES (SLP)',\n name = 'IMGEP-HOLMES (SLP)',\n is_default = True),\n dict(id = '5',\n directory = '../experiments/IMGEP-HOLMES (TLP)',\n name = 'IMGEP-HOLMES (TLP)',\n is_default = True), \n]\n\nrepetition_ids = list(range(10))\n\n# define names and load the data\nexperiment_name_format = '<name>' # <id>, <name>\n\n#global experiment_definitions\nexperiment_definitions = []\nexperiment_statistics = []\n\ncurrent_experiment_list = 'main_paper'\n\nexperiment_definitions = []\nfor org_exp_def in org_experiment_definitions[current_experiment_list]:\n new_exp_def = dict()\n new_exp_def['directory'] = org_exp_def['directory']\n if 'is_default' in org_exp_def:\n new_exp_def['is_default'] = org_exp_def['is_default']\n\n if 'name' in org_exp_def:\n new_exp_def['id'] = ad.gui.jupyter.misc.replace_str_from_dict(experiment_name_format, {'id': org_exp_def['id'], 'name': org_exp_def['name']})\n else:\n new_exp_def['id'] = ad.gui.jupyter.misc.replace_str_from_dict(experiment_name_format, {'id': org_exp_def['id']})\n\n 
experiment_definitions.append(new_exp_def)\n\nexperiment_statistics = dict()\nfor experiment_definition in experiment_definitions:\n experiment_statistics[experiment_definition['id']] = ad.gui.jupyter.misc.load_statistics(experiment_definition['directory'])\n ",
"_____no_output_____"
],
[
"# Parameters\nnum_of_bins_per_dimension = list(range(5,35))\n\nBC_bvae_analytic_space_ranges = dict()\nBC_patchbvae_analytic_space_ranges = dict()\nBC_leniastatistics_analytic_space_ranges = dict()\nBC_ellipticalfourier_analytic_space_ranges = dict()\nBC_spectrumfourier_analytic_space_ranges = dict()\nfor i in range(8):\n BC_bvae_analytic_space_ranges[('BC_bvae_analytic_space_representations','data','[{}]'.format(i))] = (0, 1)\n BC_patchbvae_analytic_space_ranges[('BC_patchbvae_analytic_space_representations','data','[{}]'.format(i))] = (0, 1)\n BC_leniastatistics_analytic_space_ranges[('BC_leniastatistics_analytic_space_representations','data','[{}]'.format(i))] = (0, 1)\n BC_ellipticalfourier_analytic_space_ranges[('BC_ellipticalfourier_analytic_space_representations','data','[{}]'.format(i))] = (0, 1)\n BC_spectrumfourier_analytic_space_ranges[('BC_spectrumfourier_analytic_space_representations','data','[{}]'.format(i))] = (0, 1)\n\ndefault_config = dict(\n plotly_format = 'svg',\n layout = dict(\n xaxis = dict(\n title = 'bins per dimension',\n range = [num_of_bins_per_dimension[0], num_of_bins_per_dimension[-1]],\n showline = True,\n linewidth = 1,\n zeroline=False,\n ),\n yaxis = dict(\n title = 'number of bins',\n showline = True,\n linewidth = 1,\n zeroline=False,\n ),\n font = dict(\n family=font_family, \n size=font_size, \n ),\n width=width_half, # in cm\n height=height_default_1, # in cm\n \n margin = dict(\n l=left_margin, #left margin in pixel\n r=right_margin, #right margin in pixel\n b=bottom_margin, #bottom margin in pixel\n t=top_margin, #top margin in pixel\n ),\n\n legend=dict(\n xanchor='right',\n yanchor='bottom',\n y=0.04,\n x=1,\n ), \n \n updatemenus=[],\n\n ),\n \n default_trace = dict(\n x = num_of_bins_per_dimension,\n ),\n \n default_std_trace= dict(\n x = num_of_bins_per_dimension + num_of_bins_per_dimension[::-1],\n ),\n \n default_colors = ['rgb(0,0,0)',\n 'rgb(204,121,167)', \n #'rgb(0,114,178)',\n 'rgb(230,159,0)', \n 
'rgb(0,158,115)',\n 'rgb(240,228,66)',\n 'rgb(213,94,0)', \n 'rgb(86,180,233)',\n 'rgb(214,39,40)',\n 'rgb(148,103,189)',\n 'rgb(140,86,75)',\n 'rgb(127,127,127)'],\n \n default_mean_trace = dict(line=dict(width = line_width)),\n \n mean_traces = [\n dict(line = dict(dash = 'dot')),\n dict(line = dict(dash = 'dash')),\n dict(line = dict(dash = 'dashdot')),\n dict(line = dict(dash = 'solid')),\n dict(line = dict(dash = 'longdashdot')),\n dict(line = dict(dash = 'longdash')),\n dict(line = dict(dash = 'solid')),\n dict(line = dict(dash = 'dash')),\n dict(line = dict(dash = 'dashdot')),\n dict(line = dict(dash = 'dot')),\n dict(line = dict(dash = 'longdash')),\n dict(line = dict(dash = 'longdashdot')),\n ],\n \n)\n",
"_____no_output_____"
]
],
[
[
"# Dependence of diversity on number of bins",
"_____no_output_____"
]
],
[
[
"# General Functions to load data\ndef calc_number_explored_bins(vectors, data_filter_inds, bin_config, ignore_out_of_range_values=True):\n\n number_explored_bins_per_step = []\n step_idx = 0\n \n # if there is a filter, fill all initial temsteps were there is no filtered entity with zero\n if data_filter_inds is not None:\n cur_n_bins = 0\n while step_idx < len(data_filter_inds) and data_filter_inds[step_idx] == False:\n number_explored_bins_per_step.append(cur_n_bins)\n step_idx += 1\n \n # create section borders\n bins_per_dim = []\n for dim_config in bin_config:\n bins_per_dim.append(np.linspace(dim_config[0], dim_config[1], num=dim_config[2]+1))\n\n # identify point for every vector\n count_per_section = collections.defaultdict(int)\n\n for vector in vectors:\n\n section = []\n\n # check each dimension\n for dim_idx in range(len(vector)):\n\n # identify at which section in de fined grid the value falls\n idxs = np.where(bins_per_dim[dim_idx] > vector[dim_idx])[0]\n\n if len(idxs) == 0:\n # value is larger than upper grid border\n #warnings.warn('A Vector with value {} is outside the defined grid for dimension {}.'.format(vector[dim_idx], dim_idx))\n\n if ignore_out_of_range_values:\n section = None\n break\n else:\n section_idx = len(bins_per_dim[dim_idx])\n\n elif idxs[0] == 0:\n # value is smaller than lower grid border\n #warnings.warn('A Vector with value {} is outside the defined grid for dimension {}.'.format(vector[dim_idx], dim_idx))\n\n if ignore_out_of_range_values:\n section = None\n break\n else:\n section_idx = -1\n else:\n section_idx = idxs[0]-1\n\n section.append(section_idx)\n\n if section is not None:\n section = tuple(section)\n count_per_section[section] += 1\n \n cur_n_bins = len(count_per_section)\n \n number_explored_bins_per_step.append(cur_n_bins)\n step_idx += 1\n \n if data_filter_inds is not None:\n # fill same number of bins for several time steps if data is filterd out\n while step_idx < len(data_filter_inds) and 
data_filter_inds[step_idx] == False:\n number_explored_bins_per_step.append(cur_n_bins)\n step_idx += 1\n \n return np.array(number_explored_bins_per_step)\n\n\ndef calc_number_explored_bins_for_experiments(experiment_definitions, source_data, space_defintion, num_of_bins_per_dimension=5, ignore_out_of_range_values=False, data_filter=None):\n \n data_filter_inds = None\n if data_filter is not None and data_filter:\n # filter data according data_filter the given filter\n data_filter_inds = ad.gui.jupyter.misc.filter_experiments_data(source_data, data_filter)\n \n data_number_explored_bins_per_exp = dict()\n\n \n data_diversity = dict()\n \n for exp_def in experiment_definitions:\n exp_id = exp_def['id']\n \n cur_diversity_data = []\n \n rep_data_matricies = []\n\n cur_bin_config = []\n cur_matrix_data = []\n \n cur_data_filter_inds = data_filter_inds[exp_id] if data_filter_inds is not None else None \n\n # load data and define the bin_config\n for dim_name, dim_ranges in space_defintion.items():\n\n # define the bin configuration for the current parameter\n cur_bin_config.append((dim_ranges[0], dim_ranges[1], num_of_bins_per_dimension))\n \n # get all repetition data for the current paramter\n try:\n cur_data = ad.gui.jupyter.misc.get_experiment_data(data=source_data, experiment_id=exp_id, data_source=dim_name, repetition_ids='all', data_filter_inds=cur_data_filter_inds)\n \n except Exception as err:\n if not isinstance(err, KeyError):\n raise Exception('Error during loading of data for Experiment {!r} (Datasource = {!r} )!'.format(exp_id, dim_name)) from err\n else:\n # could not load data\n warnings.warn('Could not load data for Experiment {!r} (Datasource = {!r} )!'.format(exp_id, dim_name))\n \n cur_data = []\n \n # go over repetitions\n for rep_idx, cur_rep_data in enumerate(cur_data):\n cur_rep_data = np.array([cur_rep_data]).transpose()\n\n if rep_idx >= len(rep_data_matricies):\n rep_data_matricies.append(cur_rep_data)\n else:\n rep_data_matricies[rep_idx] = 
np.hstack((rep_data_matricies[rep_idx], cur_rep_data))\n\n cur_run_parameter_bin_descr_per_exp = []\n for rep_idx, rep_matrix_data in enumerate(rep_data_matricies):\n cur_rep_data_filter_inds = data_filter_inds[exp_id][rep_idx] if cur_data_filter_inds is not None else None \n rep_data = calc_number_explored_bins(rep_matrix_data, cur_rep_data_filter_inds, cur_bin_config, ignore_out_of_range_values=ignore_out_of_range_values)\n cur_diversity_data.append(rep_data)\n \n data_diversity[exp_id] = dict()\n data_diversity[exp_id]['n_explored_bins'] = np.array(cur_diversity_data)\n\n return data_diversity",
"_____no_output_____"
]
],
[
[
"## BC Elliptical Fourier Analytic Space - SLP",
"_____no_output_____"
]
],
[
[
"# Collect Data\nnew_data = dict()\n\nfor cur_num_of_bins in num_of_bins_per_dimension:\n\n cur_diversity = calc_number_explored_bins_for_experiments(\n experiment_definitions, \n experiment_statistics, \n BC_ellipticalfourier_analytic_space_ranges, \n num_of_bins_per_dimension=cur_num_of_bins,\n data_filter=data_filters['SLP'])\n\n for cur_exp_idx, cur_exp_data in cur_diversity.items():\n\n if cur_exp_idx not in new_data:\n new_data[cur_exp_idx] = dict()\n new_data[cur_exp_idx]['diversity_dependent_on_number_of_bins_per_dim'] = cur_exp_data['n_explored_bins'][:,-1]\n else:\n new_data[cur_exp_idx]['diversity_dependent_on_number_of_bins_per_dim'] = np.vstack((new_data[cur_exp_idx]['diversity_dependent_on_number_of_bins_per_dim'], cur_exp_data['n_explored_bins'][:,-1]))\n\nfor exp_id in new_data.keys():\n new_data[exp_id]['diversity_dependent_on_number_of_bins_per_dim'] = new_data[exp_id]['diversity_dependent_on_number_of_bins_per_dim'].transpose()\n\ndata_statistic_space_all_diversity_dependence_on_number_of_bins = new_data \n ",
"_____no_output_____"
],
[
"# plot data\nimport copy \n \n# PLOTTING\nconfig = copy.deepcopy(default_config)\n\nconfig['layout']['yaxis']['range'] = [0,1500]\n\nfig = ad.gui.jupyter.plot_scatter_per_datasource(\n experiment_ids=[exp_def['id'] for exp_def in experiment_definitions],\n repetition_ids=repetition_ids, \n data=data_statistic_space_all_diversity_dependence_on_number_of_bins, \n data_source=['diversity_dependent_on_number_of_bins_per_dim'],\n config=config) \n\n#plotly.io.write_image(fig, 'sm_figure_4_SLP.pdf')",
"/home/mayalen/miniconda3/envs/holmes/lib/python3.6/site-packages/plotly/tools.py:465: DeprecationWarning:\n\nplotly.tools.make_subplots is deprecated, please use plotly.subplots.make_subplots instead\n\n"
]
],
[
[
"## BC Lenia Statistics Analytic Space - TLP",
"_____no_output_____"
]
],
[
[
"# Collect Data\nnew_data = dict()\n\nfor cur_num_of_bins in num_of_bins_per_dimension:\n\n cur_diversity = calc_number_explored_bins_for_experiments(\n experiment_definitions, \n experiment_statistics, \n BC_leniastatistics_analytic_space_ranges, \n num_of_bins_per_dimension=cur_num_of_bins,\n data_filter=data_filters['TLP'])\n\n for cur_exp_idx, cur_exp_data in cur_diversity.items():\n\n if cur_exp_idx not in new_data:\n new_data[cur_exp_idx] = dict()\n new_data[cur_exp_idx]['diversity_dependent_on_number_of_bins_per_dim'] = cur_exp_data['n_explored_bins'][:,-1]\n else:\n new_data[cur_exp_idx]['diversity_dependent_on_number_of_bins_per_dim'] = np.vstack((new_data[cur_exp_idx]['diversity_dependent_on_number_of_bins_per_dim'], cur_exp_data['n_explored_bins'][:,-1]))\n\nfor exp_id in new_data.keys():\n new_data[exp_id]['diversity_dependent_on_number_of_bins_per_dim'] = new_data[exp_id]['diversity_dependent_on_number_of_bins_per_dim'].transpose()\n\ndata_statistic_space_all_diversity_dependence_on_number_of_bins = new_data \n ",
"_____no_output_____"
],
[
"# plot data\nimport copy \n \n# PLOTTING\nconfig = copy.deepcopy(default_config)\n\nconfig['layout']['yaxis']['range'] = [0,1700]\n\nfig = ad.gui.jupyter.plot_scatter_per_datasource(\n experiment_ids=[exp_def['id'] for exp_def in experiment_definitions],\n repetition_ids=repetition_ids, \n data=data_statistic_space_all_diversity_dependence_on_number_of_bins, \n data_source=['diversity_dependent_on_number_of_bins_per_dim'],\n config=config) \n\n#plotly.io.write_image(fig, 'sm_figure_4_TLP.pdf')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e754711babd0f4dcab981d06cc2404be397169fb | 567,189 | ipynb | Jupyter Notebook | targeting_systematics/LRG_DESvsDECaLS_Optical.ipynb | biprateep/DESI-notebooks | 2f10bdc4ceff961186aee1576b16c3ae9a7642ba | [
"MIT"
] | null | null | null | targeting_systematics/LRG_DESvsDECaLS_Optical.ipynb | biprateep/DESI-notebooks | 2f10bdc4ceff961186aee1576b16c3ae9a7642ba | [
"MIT"
] | null | null | null | targeting_systematics/LRG_DESvsDECaLS_Optical.ipynb | biprateep/DESI-notebooks | 2f10bdc4ceff961186aee1576b16c3ae9a7642ba | [
"MIT"
] | null | null | null | 951.659396 | 175,388 | 0.951734 | [
[
[
"# Compare the LRG Optical-selection sytematics between DES and DECaLS regions \nWe do LASSO based training using a healpix map of `nside=128` but do the testing using a healpix map of `nside=32` to reduce variance.\n\nAlso we fit the model **only** to the DECaLS region but testing is done on **both** DES and DECaLS",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport healpy as hp\n\nimport matplotlib.pyplot as plt\nimport matplotlib.lines as lines\n\nfrom astropy.table import Table as T\nfrom astropy.coordinates import SkyCoord\n\nfrom scipy.stats import binned_statistic, iqr\n\nfrom sklearn.linear_model import SGDRegressor\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.metrics import mean_squared_error, median_absolute_error\n\nfrom helpFunc import plot_hpix",
"_____no_output_____"
]
],
[
[
"Load the data and select only `DECaLS`",
"_____no_output_____"
]
],
[
[
"hpTable = T.read(\"/home/bid13/code/desi/DESI-LASSO/data_new/heapix_map_lrg_optical_nominal_20191024_clean_combined_128.fits\")\npix_area = hp.pixelfunc.nside2pixarea(128, degrees=True)\n\n#Moving to pandas\ndata=hpTable.to_pandas()\ndata=data.dropna()\ndata=data.reset_index(drop=True)\ndata[\"region\"] = data[\"region\"].str.decode(\"utf-8\")",
"_____no_output_____"
],
[
"#Select DECaLS for training\ndata = data[data.region==\"decals\"]\n\n#put in galactic long and lat\ncoords = SkyCoord(ra = data.ra, dec =data.dec, unit = \"deg\")\ndata[\"cos(l)\"] = coords.galactic.l.radian\ndata[\"cos(b)\"] =coords.galactic.b.radian\n\ndata[\"cos(l)\"] = np.cos(data[\"cos(l)\"])\ndata[\"cos(b)\"] = np.cos(data[\"cos(b)\"])\n\n#The regression is weighted using the fraction of area occupied in the pixel\ndata[\"weight\"] = data[\"pix_frac\"]/data[\"pix_frac\"].max()\n\ndata[\"pix_area\"] = pix_area*data[\"pix_frac\"]\ndata[\"pix_pop\"] = data[\"density\"]*data[\"pix_area\"]\n\n#Columns to keep\ncolumns = ['EBV', 'galdepth_gmag', 'galdepth_rmag', 'galdepth_zmag','psfdepth_w1mag', 'PSFSIZE_G', 'PSFSIZE_R', 'PSFSIZE_Z', 'stardens_log',\"cos(l)\",\"cos(b)\"]\n\n#Scale the training data by subtracting the mean and dividing by std for each feature\nscaler = StandardScaler()\nscaled_data= scaler.fit_transform(data[columns])",
"_____no_output_____"
]
],
[
[
"### Create a linear model to predict surface density while performing variable selection using LASSO",
"_____no_output_____"
],
[
"**Weighted LASSO trained using Stochastic Gradient Descent** \nLASSO is a regularized linear regression method which sets slopes of un-important predictors to zero. The penalizing coefficient $\\alpha$ is fixed using a grid search and the $R^2$ metric via cross validation (CV). We select the value of $\\alpha$ so that it maximises $R^2$ while using the minimum set of predictors with non-zero slopes. Each data point is weighted using the fraction of area that is filled in the corresponding pixel. (The procedure to select an optimal value of $\\alpha$ has been omited to preserve the brevity of this notebook).",
"_____no_output_____"
]
],
[
[
"alpha_sel = 0.8\n\n#Weighted LASSO\nlasso_sgd = SGDRegressor(loss=\"squared_loss\", penalty=\"l1\", l1_ratio=1, alpha =0.8, random_state=200, tol=1e-6, max_iter=1e5, eta0=1e-4)\n\n\nlasso_sgd.fit(scaled_data, data.density, sample_weight=data[\"weight\"])",
"_____no_output_____"
]
],
[
[
"### Test the trained model with DES region with `nside=32`",
"_____no_output_____"
],
[
"Load the data and select only `DES+DECaLS`",
"_____no_output_____"
]
],
[
[
"hpTable_32 = T.read(\"/home/bid13/code/desi/DESI-LASSO/data_new/heapix_map_lrg_optical_nominal_20191024_clean_combined_32.fits\")\n\ndata_32 = hpTable_32.to_pandas()\ndata_32 = data_32.dropna()\ndata_32 = data_32.reset_index(drop=True)\ndata_32[\"region\"] = data_32[\"region\"].str.decode(\"utf-8\")\ndata_32 = data_32[data_32[\"region\"]!=\"bm\"]\n\n#put in galactic long and lat\ncoords = SkyCoord(ra = data_32.ra, dec =data_32.dec, unit = \"deg\")\ndata_32[\"cos(l)\"] = coords.galactic.l.radian\ndata_32[\"cos(b)\"] =coords.galactic.b.radian\n\ndata_32[\"cos(l)\"] = np.cos(data_32[\"cos(l)\"])\ndata_32[\"cos(b)\"] = np.cos(data_32[\"cos(b)\"])\n\ndata_32[\"weight\"] = data_32[\"pix_frac\"]/data_32[\"pix_frac\"].max()\n\ndata_32[\"pix_area\"] = pix_area*data_32[\"pix_frac\"]\ndata_32[\"pix_pop\"] = data_32[\"density\"]*data_32[\"pix_area\"]\n\nscaled_data_32 = scaler.transform(data_32[columns])",
"_____no_output_____"
]
],
[
[
"### The distribution of fractional total residuals \nTotal fractional residuals are defined as: $\\dfrac{\\text{(observed density - predicted density)}}{\\text{observed density}}$",
"_____no_output_____"
],
[
"Here we compare the total fractional residuals for the linear model trained on `DECaLS` vs. the fractional deviation from a ‘constant-only’ model where the predicted density is simply the mean density of the `DECaLS + DES` region.",
"_____no_output_____"
]
],
[
[
"#Linear Model\ndata_32[\"lin_res\"] = (data_32[\"density\"] - lasso_sgd.predict(scaled_data_32))\ndata_32[\"frac_lin_res\"] = data_32[\"lin_res\"]/data_32[\"density\"]\n\n#constant-only Model\ndata_32[\"cons_res\"] = (data_32[\"density\"] - np.mean(data_32[data_32[\"region\"]==\"decals\"][\"density\"]))\ndata_32[\"frac_cons_res\"] = data_32[\"cons_res\"]/data_32[\"density\"]\n\n\ndata_32_des = data_32[data_32.region==\"des\"].copy()\ndata_32_decals = data_32[data_32.region==\"decals\"].copy()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(1,2, figsize=(18,6))\n\n\nax[0].hist(data_32[\"frac_cons_res\"], bins = 20, label=\"DES+DECaLS\", alpha =0.2, density=True)\nax[0].hist(data_32_des[\"frac_cons_res\"], bins=20, alpha=0.8, label=\"DES\", histtype = \"step\", lw =2,density=True)\nax[0].hist(data_32_decals[\"frac_cons_res\"], bins=20, alpha=0.8, label=\"DECaLS\", histtype = \"step\", lw=2,density=True)\nax[0].grid(alpha=0.5, ls =\"--\")\nax[0].legend(loc=0, prop={'size': 15})\nax[0].set_xlabel(\"Fractional residuals from constant-only (i.e., mean density) model\",size=15)\n\nax[1].hist(data_32[\"frac_lin_res\"], bins = 20, label=\"DES+DECaLS\", alpha =0.2, density=True)\nax[1].hist(data_32_des[\"frac_lin_res\"], bins=20, alpha=0.8, label=\"DES\", histtype = \"step\", lw =2,density=True)\nax[1].hist(data_32_decals[\"frac_lin_res\"], bins=20, alpha=0.8, label=\"DECaLS\", histtype = \"step\", lw=2,density=True)\nax[1].grid(alpha=0.5, ls =\"--\")\nax[1].legend(loc=0, prop={'size': 15})\nax[1].set_xlabel(\"Fractional residuals from linear model\", size=15)\n\nfig.suptitle(\"Normalized histograms of fractional residuals trained only on DECaLS\", size=20)",
"_____no_output_____"
]
],
[
[
"### Print statistics for the fractional residuals",
"_____no_output_____"
],
[
"**Linear Model**",
"_____no_output_____"
]
],
[
[
"print(\"DES+Decals:\")\nprint(\"Mean:\", round(data_32[\"lin_res\"].sum()/data_32[\"density\"].sum(), 4), \"Median:\", round(np.median(data_32[\"frac_lin_res\"]),4))\nprint()\nprint(\"DES:\")\nprint(\"Mean:\", round(data_32_des[\"lin_res\"].sum()/data_32_des[\"density\"].sum(),4), \"Median:\", round(np.median(data_32_des[\"frac_lin_res\"]),4))\nprint()\nprint(\"DECaLS:\")\nprint(\"Mean:\", round(data_32_decals[\"lin_res\"].sum()/data_32_decals[\"density\"].sum(),4), \"Median:\", round(np.median(data_32_decals[\"frac_lin_res\"]),4) )",
"DES+Decals:\nMean: -0.001 Median: -0.0062\n\nDES:\nMean: -0.0091 Median: -0.0196\n\nDECaLS:\nMean: 0.0002 Median: -0.0049\n"
]
],
[
[
"**Constant-only (i.e., mean density) Model**",
"_____no_output_____"
]
],
[
[
"print(\"DES+Decals:\")\nprint(\"Mean:\", round(data_32[\"cons_res\"].sum()/data_32[\"density\"].sum(), 4), \"Median:\", round(np.median(data_32[\"frac_cons_res\"]),4))\nprint()\nprint(\"DES:\")\nprint(\"Mean:\", round(data_32_des[\"cons_res\"].sum()/data_32_des[\"density\"].sum(),4), \"Median:\", round(np.median(data_32_des[\"frac_cons_res\"]),4))\nprint()\nprint(\"DECaLS:\")\nprint(\"Mean:\", round(data_32_decals[\"cons_res\"].sum()/data_32_decals[\"density\"].sum(),4), \"Median:\", round(np.median(data_32_decals[\"frac_cons_res\"]),4) )",
"DES+Decals:\nMean: 0.0001 Median: -0.0053\n\nDES:\nMean: 0.0006 Median: -0.0091\n\nDECaLS:\nMean: 0.0 Median: -0.0051\n"
]
],
[
[
"### **Summary:** \nThere is about 0.1% difference in the average density of LRGs between the `DES` and `DECaLS` regions. However, if one fits a linear model for the dependence of LRG density on imaging properties and systematics using only the `DECaLS` area, one predicts a density difference comparable to this.",
"_____no_output_____"
],
[
"### Residuals from each predictor after model was trained on DECaLS but tested on DES",
"_____no_output_____"
],
[
"Below we plot the residuals for each predictor separately. We first determine the total residual for the model as $$\\text{Total residual = observed density - predicted density}$$ \nNow the residual for each predictor is calculated as: $\\text{Total residual}+C_i\\times x_i$ \nwhere $C_i$ is the slope corresponding to each predictor in the linear model and $x_i$ represents the values for each predictor. We are essentially adding back the contribution of each predictor one by one to the total residual. The straight lines denote the component of the linear model fitted to `DECaLS` while the points show binned residual for the `DES` region. The error bars are the maximum of the standard error or the poisson error in each bin. All the residuals are converted to fractions by dividing them by the average density in the `DES` region. The histograms in each plot show the normalized distributions of each predictor for the `DES` and `DECaLS` regions.",
"_____no_output_____"
]
],
[
[
"fig, axs = plt.subplots(3,4, figsize = (18,12))\nfig.delaxes(axs[2][3])\n\naxs = axs.flatten()\naxs_twin = [ax.twinx() for ax in axs]\nfig.delaxes(axs_twin[-1])\n\nscaled_32_des = scaler.transform(data_32_des[columns])\n\narray_des = np.array(data_32_des[columns])\narray_decals = np.array(data_32_decals[columns])\narray_data = np.array(data_32[columns])\n\navg_density = np.mean(data_32_des[\"density\"])\n\nnum_bins =5\n\nfor i, ax in enumerate(axs[:-1]):\n \n residual = (data_32_des[\"lin_res\"] + scaled_32_des[:,i]*lasso_sgd.coef_[i])\n\n \n #Bin the data\n bin_res, bin_edges, bin_num = binned_statistic(array_des[:,i], residual, statistic = \"mean\", bins=num_bins)\n \n frac_mean = bin_res/avg_density\n \n std, bin_edges,_ = binned_statistic(array_des[:,i], residual, statistic = \"std\", bins=num_bins)\n std = std/avg_density\n \n \n # Standard error: Standard deviation of each bin divided by sqrt(population)\n pop, _ = np.histogram(array_des[:,i], bins=num_bins)\n #Should be in terms of densities\n std_err = std/np.sqrt(pop)\n \n #Poisson error\n pois_err = np.zeros(num_bins)\n for b in range(num_bins):\n mask = (bin_num==b+1)\n data_bin = data_32_des[mask].copy()\n pop_bin = data_bin[\"pix_pop\"].sum()\n area_bin = data_bin[\"pix_area\"].sum()\n pois_err[b] = np.sqrt(pop_bin)/area_bin\n pois_err = pois_err/avg_density\n \n #Error is maximum of the standard error or the poisson error\n error = np.maximum(std_err, pois_err)\n \n x_bin = (bin_edges[1:]+bin_edges[:-1])/2\n \n x_line = np.linspace(array_data[:,i].min(), array_data[:,i].max(), 10)\n \n # rescale x\n x_line_scaled = np.zeros((10, len(columns)))\n x_line_scaled[:,i] = x_line\n x_line_scaled = scaler.transform(x_line_scaled)\n \n hist_des, des_bin_edges = np.histogram(array_des[:,i], bins=15, density=True)\n hist_decals, decals_bin_edges = np.histogram(array_decals[:,i], bins=15, density=True)\n \n \n normalize = 25\n \n\n axs_twin[i].hist(des_bin_edges[:-1], bins=des_bin_edges, weights=hist_des/normalize, label=\"des\", lw=1, histtype=\"step\", color=\"r\")\n axs_twin[i].hist(decals_bin_edges[:-1], bins=decals_bin_edges, weights=hist_decals/normalize, label=\"decals\", lw=1, histtype=\"step\", color=\"g\")\n\n axs_twin[i].set_ylim(0,1)\n axs_twin[i].axis(\"off\")\n \n \n ax.errorbar(x_bin, frac_mean, yerr = error, fmt=\"o\", ms=7, lw=1.5, capsize=3)\n \n ax.plot(x_line, (lasso_sgd.coef_[i]*x_line_scaled[:,i])/avg_density, c= \"C0\", ls =\"--\", lw=2)\n ax.set_xlabel(columns[i], size=15)\n\nhandle1 = lines.Line2D([], [], c='r')\nhandle2 = lines.Line2D([], [], c='g')\nhandle3 = lines.Line2D([], [], c='C0', ls =\"--\", lw=2)\nhandle4 = lines.Line2D([], [], c='C0', marker=\"o\", ls=\"\")\n\nfig.legend( (handle1,handle2,handle3, handle4), (\"DES Population\", \"DeCALS Population\", \"Component of fit to DECaLS\", \"Binned residuals for DES\"), loc=\"center right\", bbox_to_anchor=(1.05,0.25), prop={'size': 15}, ncol=1)\n\nfig.text(0.5, -0.05, r\"Predictors\", ha='center',size=30) #Common x label\nfig.text(-0.05, 0.5, \"Fractional Residuals\", va='center', rotation='vertical',size=30) #Common y label\nfig.suptitle(\"Fractional Residuals for each predictor\", size=30, y=1.05)\nplt.tight_layout()",
"_____no_output_____"
]
],
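The cell above bins residuals with `scipy.stats.binned_statistic`, turns the per-bin scatter into a standard error, and keeps the larger of that and a Poisson error. A minimal numpy-only sketch of the binned-mean / standard-error part on made-up data (the toy signal `y = 2x + noise` and all names here are illustrative, not the DES catalogue):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=1000)             # predictor values
y = 2.0 * x + rng.normal(scale=0.1, size=1000)   # residual-like signal

num_bins = 5
edges = np.linspace(0.0, 1.0, num_bins + 1)
# np.digitize against the inner edges gives bin indices 0..num_bins-1
idx = np.digitize(x, edges[1:-1])

bin_mean = np.array([y[idx == b].mean() for b in range(num_bins)])
bin_std = np.array([y[idx == b].std(ddof=1) for b in range(num_bins)])
pop = np.array([(idx == b).sum() for b in range(num_bins)])
std_err = bin_std / np.sqrt(pop)                 # standard error of each bin mean
centers = 0.5 * (edges[:-1] + edges[1:])
```

The Poisson term in the notebook is the same idea per bin, `sqrt(total counts) / total area`, again divided by the average density.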
[
[
"### **Summary:** \nWe see that the linear model fitted to `DECaLS` tends to describe the offsets to `DES` pretty well. This also shows that selection is quite uniform across the two regions.",
"_____no_output_____"
],
[
"### Plot HEALPix maps of the residuals",
"_____no_output_____"
],
[
"**Fractional residuals from linear model**",
"_____no_output_____"
]
],
[
[
"hp_map = plot_hpix(data_32, 32, \"frac_lin_res\", region=\"bm\")",
"_____no_output_____"
]
],
[
[
"**Fractional Residuals from constant-only model**",
"_____no_output_____"
]
],
[
[
"hp_map = plot_hpix(data_32, 32, \"frac_cons_res\", region=\"bm\")",
"_____no_output_____"
]
],
[
[
"### **Summary:** \nFrom the maps we see that the offsets in the predicted densities stay roughly the same over the `DECaLS` and `DES` regions. The linear model tends to predict higher densities for the `DES` region than the `DECaLS` region which is also evident from the histograms above and the statistics shown below.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7547237cbb0553a6f67c26a7fbed7df7d9b3d8e | 26,202 | ipynb | Jupyter Notebook | playground.ipynb | baldnate/src-pbs-to-csv | 1b0183b23778fe600b18888fbda85b019d375b8e | [
"BSD-3-Clause"
] | 2 | 2022-02-27T22:05:59.000Z | 2022-03-19T14:41:46.000Z | playground.ipynb | baldnate/src-pbs-to-csv | 1b0183b23778fe600b18888fbda85b019d375b8e | [
"BSD-3-Clause"
] | 4 | 2021-10-19T16:30:58.000Z | 2021-10-21T16:39:32.000Z | playground.ipynb | baldnate/src-pbs-to-csv | 1b0183b23778fe600b18888fbda85b019d375b8e | [
"BSD-3-Clause"
] | null | null | null | 38.419355 | 134 | 0.414968 | [
[
[
"# Building a csv of all your PBs\n-- a short story by baldnate",
"_____no_output_____"
],
[
"# Get user id",
"_____no_output_____"
]
],
[
[
"import json\nimport pandas as pd\nimport requests\nimport math\n\nusers = {}\ndef getUserId(username):\n if username not in users:\n url = \"https://www.speedrun.com/api/v1/users?name=\" + username\n data = requests.get(url).json()['data']\n if len(data) == 1:\n users[username] = data[0]['id']\n else:\n raise Exception('Searched for ' + username + ', got back ' + str(len(data)) + ' entries (expected 1)') \n return users[username]\n\nuserid = getUserId('baldnate')",
"_____no_output_____"
]
],
[
[
"# Get PBs",
"_____no_output_____"
]
],
[
[
"def getPBs(userid, all = False):\n url = \"https://www.speedrun.com/api/v1/users/\" + userid + \"/personal-bests?embed=game,category,region,platform,players\"\n data = requests.get(url).json()['data']\n pbdf = pd.DataFrame(data)\n pbdf = pbdf.join(pbdf['run'].apply(pd.Series), rsuffix='run')\n pbdf.drop(axis=1, columns=['run'], inplace=True)\n if all:\n allurl = \"https://www.speedrun.com/api/v1/runs?user=\" + userid + \"&embed=game,category,region,platform,players\"\n data = requests.get(allurl).json()['data']\n alldf = pd.DataFrame(data)\n alldf['place'] = math.nan\n pbdf = pbdf.append(alldf)\n return pbdf\n\nrawdf = getPBs(userid, all=True)\nrunsdf = pd.DataFrame()",
"_____no_output_____"
],
[
"rawdf.tail()",
"_____no_output_____"
]
],
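The `pbdf.join(pbdf['run'].apply(pd.Series))` step above flattens the nested `run` dicts by hand; `pd.json_normalize` does the same flattening directly. A sketch on made-up records (the field names mimic the API shape but are illustrative, not the real speedrun.com payload):

```python
import pandas as pd

records = [
    {"id": "a1", "run": {"date": "2020-01-01", "times": {"primary_t": 123.4}}},
    {"id": "b2", "run": {"date": "2020-02-02", "times": {"primary_t": 98.7}}},
]
# nested keys become dotted column names like "run.times.primary_t"
flat = pd.json_normalize(records)
```

This avoids the `rsuffix` bookkeeping entirely, since nested fields never collide with top-level names.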
[
[
"# Co-Op - aka: write player(s) to a column",
"_____no_output_____"
],
[
"# \"Simple\" Columns",
"_____no_output_____"
]
],
[
[
"runsdf['place'] = rawdf['place']\nrunsdf['gamename'] = rawdf.apply(lambda x: x.game['data']['names']['international'], axis=1)\nrunsdf['categoryname'] = rawdf.apply(lambda x: x.category['data']['name'], axis=1)\nrunsdf['time'] = rawdf.apply(lambda x: x.times['primary_t'], axis=1)\nrunsdf['date'] = rawdf.apply(lambda x: x.date, axis=1)\nrunsdf['video'] = rawdf.apply(lambda x: x.videos['links'][0]['uri'], axis=1)\nrunsdf['comment'] = rawdf.apply(lambda x: str(x.comment).replace('\\n', ' ').replace('\\r', ' ') , axis=1)",
"_____no_output_____"
]
],
[
[
"# Columns that need optional handling",
"_____no_output_____"
]
],
[
[
"def getRegion(x):\n if len(x.region['data']) == 0:\n return \"None\"\n else:\n return x.region['data']['name']\n\nrunsdf['regionname'] = rawdf.apply(lambda x: getRegion(x), axis=1)\n",
"_____no_output_____"
],
[
"def getPlatform(x):\n if len(x.platform['data']) == 0:\n return \"None\"\n else:\n return x.platform['data']['name']\n\nrunsdf['platformname'] = rawdf.apply(lambda x: getPlatform(x), axis=1)",
"_____no_output_____"
]
],
[
[
"# Sub-Categories\n\nMemoized for speed and kindness.",
"_____no_output_____"
]
],
[
[
"varMemo = {}",
"_____no_output_____"
],
[
"def getVariable(variableid):\n if variableid not in varMemo:\n url = \"https://www.speedrun.com/api/v1/variables/\" + variableid\n response = requests.get(url)\n varMemo[variableid] = response.json()['data']\n return varMemo[variableid]\n\ndef getValue(variableid, valueid):\n var = getVariable(variableid)\n return var['values']['values'][valueid]['label']\n\ndef getSubCategories(x):\n if x['values'] == {}:\n return \"None\"\n else:\n vals = []\n for varid, valid in x['values'].items():\n if getVariable(varid)['is-subcategory']:\n vals.append(getValue(varid, valid)) \n return \" -- \".join(vals)\n\nrunsdf['subcategories'] = rawdf.apply(lambda x: getSubCategories(x), axis=1)",
"_____no_output_____"
]
],
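The `varMemo` dict above is a hand-rolled memo cache: each variable id hits the API only once, and every later call is a dictionary lookup. The same pattern isolated, with a fake "expensive" lookup standing in for the HTTP call (all names here are illustrative):

```python
calls = {"n": 0}   # counts how many "expensive" lookups actually ran
memo = {}

def get_variable(variable_id):
    # only the first lookup per id does the expensive work
    if variable_id not in memo:
        calls["n"] += 1
        memo[variable_id] = {"id": variable_id, "is-subcategory": True}
    return memo[variable_id]

first = get_variable("var1")
second = get_variable("var1")   # served from the cache, no new lookup
other = get_variable("var2")
```

For pure functions, `functools.lru_cache` gives the same behaviour without the explicit dict.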
[
[
"# Dump to a csv",
"_____no_output_____"
]
],
[
[
"def getPlayers(x):\n players = []\n for p in x.players['data']:\n players.append(p['names']['international']) \n return \", \".join(players)\n\nrunsdf['players'] = rawdf.apply(lambda x: getPlayers(x), axis=1)",
"_____no_output_____"
],
[
"import csv\nrunsdf.fillna(value=\"\").to_csv('runs.csv', index=False, quoting=csv.QUOTE_NONNUMERIC)\nprint('csv exported')",
"csv exported\n"
],
[
"csvdf = pd.read_csv('runs.csv')\ncsvdf.tail()",
"_____no_output_____"
]
]
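The export cell uses `quoting=csv.QUOTE_NONNUMERIC`, which wraps every non-numeric field in quotes so free-text fields with embedded commas survive the round trip. A small in-memory sketch of that behaviour (the sample row is made up):

```python
import csv
import io

import pandas as pd

df = pd.DataFrame({"game": ["Super Metroid, any%"], "time": [2562.0]})
buf = io.StringIO()
df.to_csv(buf, index=False, quoting=csv.QUOTE_NONNUMERIC)
text = buf.getvalue()   # strings are quoted, numbers are written bare

buf.seek(0)
back = pd.read_csv(buf)  # the embedded comma parses back into one field
```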
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e75473c358b3d0d197fa6cb892b3da4501204f8d | 5,980 | ipynb | Jupyter Notebook | .ipynb_checkpoints/R arbeidsbok-checkpoint.ipynb | pandaAPIkurs/kursdata | 3ca734f181844c0e69f39f47c2c9bf889d254949 | [
"MIT"
] | 1 | 2021-10-12T11:53:29.000Z | 2021-10-12T11:53:29.000Z | .ipynb_checkpoints/R arbeidsbok-checkpoint.ipynb | pandaAPIkurs/kursdata | 3ca734f181844c0e69f39f47c2c9bf889d254949 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/R arbeidsbok-checkpoint.ipynb | pandaAPIkurs/kursdata | 3ca734f181844c0e69f39f47c2c9bf889d254949 | [
"MIT"
] | 3 | 2021-07-05T11:42:40.000Z | 2021-10-19T11:54:14.000Z | 22.566038 | 295 | 0.478763 | [
[
[
"# Example in R",
"_____no_output_____"
],
[
"## Intro to notebooks\n\n- Jupyter notebooks \n- Run a cell with ctrl + enter\n - Shift + enter to run and go to the next cell",
"_____no_output_____"
],
[
"## Step 1\n\nFirst we need to load the modules we need in R.\n\nSelect the next cell and press Shift + enter",
"_____no_output_____"
]
],
[
[
"# Load libraries\n\nlibrary(httr) # Library for queries\nlibrary(rjstat) # Library for handling the json-stat format",
"_____no_output_____"
]
],
[
[
"## Step 2\n\nNow we will build the query, which consists of the URL (pointing to the application that will run the program) and the query text (which we fetch from SSB).\n\nHere you can swap in your own queries. \n\nRemember: \n- use `'` at the start and end of the query text\n- change the table number in the URL (the five digits at the end) `'https://data.ssb.no/api/v0/no/table/11616/'`",
"_____no_output_____"
]
],
[
[
"# Commuting\n\nurl <-'https://data.ssb.no/api/v0/no/table/11616/'\n\ndata <-\n'{\n \"query\": [\n {\n \"code\": \"Region\",\n \"selection\": {\n \"filter\": \"all\",\n \"values\": [\n \"*\"\n ]\n }\n },\n {\n \"code\": \"ContentsCode\",\n \"selection\": {\n \"filter\": \"item\",\n \"values\": [\n \"Innpendlere\",\n \"Utpendlere\"\n ]\n }\n },\n {\n \"code\": \"Tid\",\n \"selection\": {\n \"filter\": \"item\",\n \"values\": [\n \"2015\",\n \"2016\",\n \"2017\",\n \"2018\",\n \"2019\",\n \"2020\"\n ]\n }\n }\n ],\n \"response\": {\n \"format\": \"json-stat2\"\n }\n}'",
"_____no_output_____"
]
],
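The query above is just the JSON body of a POST to Statistics Norway's PxWeb API. The same request sketched in Python: it builds and serializes the payload, with the actual `requests.post` call commented out so the sketch runs offline (the URL and table number are the ones from the cell above):

```python
import json

url = "https://data.ssb.no/api/v0/no/table/11616/"
query = {
    "query": [
        {"code": "Region", "selection": {"filter": "all", "values": ["*"]}},
        {"code": "ContentsCode",
         "selection": {"filter": "item", "values": ["Innpendlere", "Utpendlere"]}},
        {"code": "Tid",
         "selection": {"filter": "item",
                       "values": ["2015", "2016", "2017", "2018", "2019", "2020"]}},
    ],
    "response": {"format": "json-stat2"},
}
body = json.dumps(query)
# import requests
# response = requests.post(url, data=body)

roundtrip = json.loads(body)  # check the payload survives serialization
```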
[
[
"## Step 3\n\nNow we send the query to SSB and receive a response. The response will consist of the data we ordered, provided we did everything right.",
"_____no_output_____"
]
],
[
[
"temp <- POST(url , body = data, encode = \"json\", verbose())",
"_____no_output_____"
]
],
[
[
"We can also check the metadata of the response:",
"_____no_output_____"
]
],
[
[
"print(temp)",
"_____no_output_____"
]
],
[
[
"## Step 4\n\nOnce we have status 200 we can look at the dataset we have downloaded. Try replacing `naming = \"id\"` with `naming = \"label\"` and run the cell again. What happens?",
"_____no_output_____"
]
],
[
[
"# Save the response to a variable\ntabell <- fromJSONstat(content(temp, \"text\"), naming = \"id\", use_factors = F)\n\n# Show the first rows of the table\nhead(tabell)",
"_____no_output_____"
]
],
[
[
"## Step 5\n\nIf we want to store the data locally, we can write it to a text file. It will be saved in the same folder as this script. It is also possible to enter a file path on your own machine if you want to try saving it there. Otherwise it can be downloaded from the menu on the left (under the folder icon)",
"_____no_output_____"
]
],
[
[
"write.csv(tabell, \"11616.csv\")",
"_____no_output_____"
]
],
[
[
"## Bonus - fetch table metadata",
"_____no_output_____"
]
],
[
[
"content(temp)$updated # Shows when the data was last updated by SSB",
"_____no_output_____"
],
[
"content(temp)$label # Shows the title of the table",
"_____no_output_____"
],
[
"content(temp)$source # Shows who owns the table",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e754795f6e757bcff842d221e1157c31fa0c002c | 2,189 | ipynb | Jupyter Notebook | cs224w/stepwise.ipynb | kidrabit/Data-Visualization-Lab-RND | baa19ee4e9f3422a052794e50791495632290b36 | [
"Apache-2.0"
] | 1 | 2022-01-18T01:53:34.000Z | 2022-01-18T01:53:34.000Z | cs224w/stepwise.ipynb | kidrabit/Data-Visualization-Lab-RND | baa19ee4e9f3422a052794e50791495632290b36 | [
"Apache-2.0"
] | null | null | null | cs224w/stepwise.ipynb | kidrabit/Data-Visualization-Lab-RND | baa19ee4e9f3422a052794e50791495632290b36 | [
"Apache-2.0"
] | null | null | null | 31.724638 | 91 | 0.51439 | [
[
[
"import time\n\nimport pandas as pd\n\n# Assumes the helper functions processSubset, forward and backward are defined elsewhere.\ndef Stepwise_model(X,y):\n Stepmodels = pd.DataFrame(columns=[\"AIC\", \"model\"])\n tic = time.time()\n predictors = []\n Smodel_before = processSubset(X,y,predictors+['const'])['AIC']\n # 1 to 10 predictors: shift the index range 0-9 -> 1-10\n for i in range(1, len(X.columns.difference(['const'])) + 1):\n Forward_result = forward(X=X, y=y, predictors=predictors) # constant added\n print('forward')\n Stepmodels.loc[i] = Forward_result\n predictors = Stepmodels.loc[i][\"model\"].model.exog_names\n predictors = [ k for k in predictors if k != 'const']\n Backward_result = backward(X=X, y=y, predictors=predictors)\n if Backward_result['AIC']< Forward_result['AIC']:\n Stepmodels.loc[i] = Backward_result\n predictors = Stepmodels.loc[i][\"model\"].model.exog_names\n Smodel_before = Stepmodels.loc[i][\"AIC\"]\n predictors = [ k for k in predictors if k != 'const']\n print('backward')\n if Stepmodels.loc[i]['AIC']> Smodel_before:\n break\n else:\n Smodel_before = Stepmodels.loc[i][\"AIC\"]\n toc = time.time()\n print(\"Total elapsed time:\", (toc - tic), \"seconds.\")\n return (Stepmodels['model'][len(Stepmodels['model'])])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
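The cell above drives `forward` / `backward` helpers (defined elsewhere in the repo) that score candidate models by AIC. A self-contained sketch of the forward half on synthetic data, using ordinary least squares and the Gaussian AIC `n*log(RSS/n) + 2k`; all names are illustrative, not the course's `processSubset` implementation:

```python
import numpy as np

def aic_ols(X, y):
    # Gaussian-likelihood AIC for an OLS fit with k columns in X
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + 2 * k

def forward_select(X, y):
    # Greedily add the predictor that lowers AIC most; stop when none helps.
    n, p = X.shape
    const = np.ones((n, 1))
    chosen = []
    best_aic = aic_ols(const, y)
    while True:
        scores = [(aic_ols(np.hstack([const, X[:, chosen + [j]]]), y), j)
                  for j in range(p) if j not in chosen]
        if not scores:
            break
        score, j = min(scores)
        if score >= best_aic:
            break
        best_aic, chosen = score, chosen + [j]
    return chosen, best_aic

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 1] - 2.0 * X[:, 3] + rng.normal(scale=0.5, size=200)
chosen, best_aic = forward_select(X, y)  # should recover predictors 1 and 3
```

The stepwise loop above adds a backward pass after each forward step; this sketch only shows the forward direction, which is where the AIC bookkeeping lives.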
e7547b7172cce1beda8c6e61c9a6cf03cb9b2daf | 287,670 | ipynb | Jupyter Notebook | docs/tutorials/Combining_uncertainties_with_specutils.ipynb | dkrolikowski/muler | 6ac4d0a7c5095b39b63a5fbabe50ad72c018d84c | [
"MIT"
] | 7 | 2021-04-21T21:09:20.000Z | 2021-11-24T23:30:06.000Z | docs/tutorials/Combining_uncertainties_with_specutils.ipynb | dkrolikowski/muler | 6ac4d0a7c5095b39b63a5fbabe50ad72c018d84c | [
"MIT"
] | 71 | 2020-12-16T16:53:55.000Z | 2022-03-30T18:50:29.000Z | docs/tutorials/Combining_uncertainties_with_specutils.ipynb | dkrolikowski/muler | 6ac4d0a7c5095b39b63a5fbabe50ad72c018d84c | [
"MIT"
] | 6 | 2021-04-27T19:21:16.000Z | 2021-10-06T18:00:04.000Z | 278.480155 | 80,588 | 0.908374 | [
[
[
"# How specutils and muler propagate uncertainty\nIn this notebook we explore how `specutils` does uncertainty propagation, and how to combine two spectra with different---but overlapping---extents. That is, if a spectrum has close to, but not exactly the same wavelengths, how does specutils combine them? These two issues are important for combining spectra of the same objects taken from different exposures and different nights, when the target's rest-frame wavelength solution can change greater than one pixel.",
"_____no_output_____"
]
],
[
[
"from specutils import Spectrum1D\nimport numpy as np\nfrom astropy.nddata import StdDevUncertainty\nimport astropy.units as u\nimport matplotlib.pyplot as plt\n%config InlineBackend.figure_format='retina'",
"_____no_output_____"
]
],
[
[
"### Spectrum 1: $S/N=20$\n\nFirst we'll make a spectrum with signal-to-noise ratio equal to 20, and mean of 1.0.",
"_____no_output_____"
]
],
[
[
"N_points = 300\nfake_wavelength = np.linspace(500, 600, num=N_points)*u.nm\nmean_val, sigma = 1.0, 0.05\nsnr = mean_val / sigma\nknown_uncertainties = np.repeat(sigma, N_points) * u.Watt / u.cm**2\nfake_flux = np.random.normal(loc=mean_val, scale=known_uncertainties) * u.Watt / u.cm**2",
"_____no_output_____"
],
[
"spec1 = Spectrum1D(spectral_axis=fake_wavelength, \n flux=fake_flux, \n uncertainty=StdDevUncertainty(known_uncertainties))",
"_____no_output_____"
],
[
"known_uncertainties.value[0:7]",
"_____no_output_____"
],
[
"plt.axhline(1.0, linestyle='dashed', color='k', zorder=10)\nplt.errorbar(spec1.wavelength.value, spec1.flux.value, yerr=spec1.uncertainty.array, \n linestyle='none', marker='o', ecolor='k', alpha=0.5)\nplt.ylim(0, 1.5);",
"_____no_output_____"
]
],
[
[
"### Spectrum 2: $S/N = 50$ and *conspicuously* offset in wavelength\n\nNow we'll make a spectrum with signal-to-noise ratio equal to 50, and mean of 0.5. The wavelength axes are *offset* by 10 nanometers.",
"_____no_output_____"
]
],
[
[
"N_points2 = N_points\nfake_wavelength2 = np.linspace(510, 610, num=N_points2)*u.nm\nmean_val2, sigma2 = 0.5, 0.01\nsnr = mean_val2 / sigma2\nknown_uncertainties2 = np.repeat(sigma2, N_points2) * u.Watt / u.cm**2\nfake_flux2 = np.random.normal(loc=mean_val2, scale=known_uncertainties2) * u.Watt / u.cm**2",
"_____no_output_____"
],
[
"spec2 = Spectrum1D(spectral_axis=fake_wavelength2, \n flux=fake_flux2, \n uncertainty=StdDevUncertainty(known_uncertainties2))",
"_____no_output_____"
]
],
[
[
"### Add Spectrum 1 and Spectrum 2: What happens?\n\nWe expect the uncertainties to add *in quadrature*: \n$$ \\sigma_{net} = \\sqrt{\\sigma_1^2 + \\sigma_2^2}$$\n$$ \\sigma_{net} = \\sqrt{0.05^2 + 0.01^2} $$\nwhich evaluates to:\n",
"_____no_output_____"
]
],
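The quadrature sum can be checked with plain Python, independent of specutils; `math.hypot` computes exactly `sqrt(a**2 + b**2)`:

```python
import math

sigma1, sigma2 = 0.05, 0.01
sigma_net = math.hypot(sigma1, sigma2)  # sqrt(sigma1**2 + sigma2**2)
```

For per-pixel arrays of uncertainties, `np.hypot` does the same thing elementwise.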
[
[
"np.hypot(0.05, 0.01)",
"_____no_output_____"
],
[
"spec_net = spec1 + spec2",
"_____no_output_____"
],
[
"spec_net.uncertainty[0:7]",
"_____no_output_____"
]
],
[
[
"Woohoo! Specutils *automatically* propagates the error correctly! You can turn this error propagation *off* (I'm not sure why you would want to) by calling the method with a kwarg:",
"_____no_output_____"
]
],
[
[
"spec_net_no_error_propagation = spec1.add(spec2, propagate_uncertainties=False)",
"_____no_output_____"
],
[
"spec_net_no_error_propagation.uncertainty[0:7]",
"_____no_output_____"
]
],
[
[
"### Wait, but what about the offset? How did it deal with the non-overlapping edges?",
"_____no_output_____"
]
],
[
[
"plt.errorbar(spec1.wavelength.value, spec1.flux.value, yerr=spec1.uncertainty.array, \n linestyle='none', marker='o', label='Spec1: $S/N=20$')\n\nplt.errorbar(spec2.wavelength.value, spec2.flux.value, yerr=spec2.uncertainty.array, \n linestyle='none', marker='o', markersize=1, label='Spec2: $S/N=50$')\n\nplt.errorbar(spec_net.wavelength.value, spec_net.flux.value, yerr=spec_net.uncertainty.array, \n linestyle='none', marker='o', \n label='Spec net = Spec + Spec2 : $\\sigma_{net}=\\sqrt{\\sigma^2 + \\sigma_2^2}$')\nplt.legend(loc='best')\nplt.ylim(0, 2.5)",
"_____no_output_____"
]
],
[
[
"Whoa! Specutils pretends like the signals are *aligned to Spectrum 1*. That is probably not the desired behavior for such an extreme offset as this one, but may be \"good enough\" for spectra that are either exactly aligned or within a pixel. It depends on your science application. PRV applications should not just round-to-the-nearest pixel, since they are trying to infer changes at sub-pixel levels.",
"_____no_output_____"
],
[
"What if you add the two spectra *the other way around*? Math says addition should be commutative...",
"_____no_output_____"
]
],
[
[
"spec_alt = spec2 + spec1",
"_____no_output_____"
],
[
"plt.errorbar(spec1.wavelength.value, spec1.flux.value, yerr=spec1.uncertainty.array, \n linestyle='none', marker='o', label='Spec1: $S/N=20$')\n\nplt.errorbar(spec2.wavelength.value, spec2.flux.value, yerr=spec2.uncertainty.array, \n linestyle='none', marker='o', markersize=1, label='Spec2: $S/N=50$')\n\nplt.errorbar(spec_alt.wavelength.value, spec_alt.flux.value, yerr=spec_alt.uncertainty.array, \n linestyle='none', marker='o', label='Spec alt = Spec2 + Spec : $\\sigma_{net}=\\sqrt{\\sigma^2 + \\sigma_2^2}$')\n\nplt.legend(loc='best')\nplt.ylim(0, 2.5);",
"_____no_output_____"
]
],
[
[
"Weird, so the wavelengths of the result are taken from the bounds of the *first* argument. This means \"addition is not commutative\" in specutils. Let's see why:",
"_____no_output_____"
]
],
[
[
"spec_net.spectral_axis",
"_____no_output_____"
],
[
"spec_alt.spectral_axis",
"_____no_output_____"
],
[
"spec1.add(spec2, compare_wcs=None).spectral_axis",
"_____no_output_____"
]
],
[
[
"Pixels instead of nanometers!",
"_____no_output_____"
]
],
[
[
"spec2.add(spec1, compare_wcs='first_found').spectral_axis",
"_____no_output_____"
],
[
"spec1.add(spec2, compare_wcs='first_found').spectral_axis",
"_____no_output_____"
]
],
[
[
"A ha! The `compare_wcs` kwarg controls what-to-do with the mis-matched spectral axes. Basically the sum of the two spectra just takes the wavelength labels from the first spectrum and uses those, when `compare_wcs='first_found'`---the default---is provided. It doesn't actually interpolate or anything fancy...",
"_____no_output_____"
],
[
"## Resampling one spectrum to another's wavelength axis\nAnd *then* adding them together.",
"_____no_output_____"
],
[
"Can we resample mis-aligned spectra and still get reasonable error propagation? How would that work?",
"_____no_output_____"
]
],
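The core of resampling onto another spectrum's wavelength axis, in the spirit of `LinearInterpolatedResampler` rather than the flux-conserving rebin, can be sketched with plain `np.interp`, filling the non-overlapping edges with NaN. The grids below are toy stand-ins for the offset `spec1` / `spec2` axes:

```python
import numpy as np

wave2 = np.linspace(510.0, 610.0, 11)   # source grid (offset, like spec2)
flux2 = np.full(11, 0.5)                # flat spectrum at 0.5
wave1 = np.linspace(500.0, 600.0, 11)   # target grid (like spec1)

# linear interpolation onto the target grid; NaN outside the overlap
resampled = np.interp(wave1, wave2, flux2, left=np.nan, right=np.nan)
```

This is only valid for smooth spectra; the flux-conserving resampler used below handles sharp features and uneven bins properly.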
[
[
"from specutils.manipulation import FluxConservingResampler, LinearInterpolatedResampler",
"_____no_output_____"
],
[
"resampler = FluxConservingResampler(extrapolation_treatment='nan_fill')",
"_____no_output_____"
],
[
"%%capture \n#This method throws a warning for an unknown reason...\nresampled_spec2 = resampler(spec2, spec1.spectral_axis)",
"_____no_output_____"
],
[
"np.all(resampled_spec2.wavelength == spec1.wavelength)",
"_____no_output_____"
],
[
"resampled_spec2.uncertainty[0:50]",
"_____no_output_____"
]
],
[
[
"Hmmm... this process causes a new type of uncertainty \"inverse variance\" instead of std deviation... I'm not sure why! Hmm... We'll manually convert? Variance is just standard-deviation squared. Inverse is just one-over-that...",
"_____no_output_____"
]
],
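The conversion done by hand in the next cells, isolated: inverse variance is `1/sigma**2`, so going back is `sigma = sqrt(1/ivar)`. A numpy round-trip check (the sigma values are arbitrary):

```python
import numpy as np

sigma = np.array([0.05, 0.01, 0.02])
ivar = 1.0 / sigma**2             # StdDevUncertainty -> inverse variance
sigma_back = np.sqrt(1.0 / ivar)  # inverse variance -> StdDevUncertainty
```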
[
[
"new_sigma = np.sqrt(1/resampled_spec2.uncertainty.array)",
"_____no_output_____"
],
[
"resampled_spec2.uncertainty = StdDevUncertainty(new_sigma)",
"_____no_output_____"
],
[
"spec_final = spec1.add(resampled_spec2, propagate_uncertainties=True)",
"_____no_output_____"
],
[
"spec_final.uncertainty[0:50]",
"_____no_output_____"
]
],
[
[
"Voila! It worked! I am not sure why specutils does not go the whole way and do this conversion automatically.",
"_____no_output_____"
]
],
[
[
"plt.errorbar(spec1.wavelength.value, spec1.flux.value, yerr=spec1.uncertainty.array, \n linestyle='none', marker='o', label='Spec1: $S/N=20$')\n\nplt.errorbar(spec2.wavelength.value, spec2.flux.value, yerr=spec2.uncertainty.array, \n linestyle='none', marker='o', markersize=1, label='Spec2: $S/N=50$')\nplt.errorbar(resampled_spec2.wavelength.value, resampled_spec2.flux.value, yerr=new_sigma, \n linestyle='none', marker='o', markersize=1, label='Resampled Spec2: $S/N=50$')\n\nplt.errorbar(spec_final.wavelength.value, spec_final.flux.value, yerr=spec_final.uncertainty.array, \n linestyle='none', marker='o', label='Spec final = ResampledSpec2 + Spec1 : $\\sigma_{net}=?$')\n\nplt.legend(loc='best')\nplt.ylim(0, 2.5)",
"_____no_output_____"
]
],
[
[
"Okay! Now we understand how specutils combines spectra, both for error propagation and for comparing the spectral axes of the two spectra. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e75493b51f556ee63702c97fd4fd6a10489c6aa7 | 10,446 | ipynb | Jupyter Notebook | Intro to TensorFlow for Deep Learning/08_01_common_patterns.ipynb | Shahid1993/udacity-courses | 01ad5a785bbf61c7b416ac8d0332d549fd182f1e | [
"MIT"
] | null | null | null | Intro to TensorFlow for Deep Learning/08_01_common_patterns.ipynb | Shahid1993/udacity-courses | 01ad5a785bbf61c7b416ac8d0332d549fd182f1e | [
"MIT"
] | null | null | null | Intro to TensorFlow for Deep Learning/08_01_common_patterns.ipynb | Shahid1993/udacity-courses | 01ad5a785bbf61c7b416ac8d0332d549fd182f1e | [
"MIT"
] | null | null | null | 26.312343 | 295 | 0.472047 | [
[
[
"<a href=\"https://colab.research.google.com/github/Shahid1993/udacity-courses/blob/master/Intro%20to%20TensorFlow%20for%20Deep%20Learning/08_01_common_patterns.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"##### Copyright 2018 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Common patterns",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c01_common_patterns.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c01_common_patterns.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"## Setup",
"_____no_output_____"
]
],
[
[
"from __future__ import absolute_import, division, print_function, unicode_literals",
"_____no_output_____"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"def plot_series(time, series, format=\"-\", start=0, end=None, label=None):\n plt.plot(time[start:end], series[start:end], format, label=label)\n plt.xlabel(\"Time\")\n plt.ylabel(\"Value\")\n if label:\n plt.legend(fontsize=14)\n plt.grid(True)",
"_____no_output_____"
]
],
[
[
"## Trend and Seasonality",
"_____no_output_____"
]
],
[
[
"def trend(time, slope=0):\n return slope * time",
"_____no_output_____"
]
],
[
[
"Let's create a time series that just trends upward:",
"_____no_output_____"
]
],
[
[
"time = np.arange(4 * 365 + 1)\nbaseline = 10\nseries = baseline + trend(time, 0.1)\n\nplt.figure(figsize=(10, 6))\nplot_series(time, series)\nplt.show()",
"_____no_output_____"
],
[
"time",
"_____no_output_____"
],
[
"series",
"_____no_output_____"
]
],
[
[
"Now let's generate a time series with a seasonal pattern:",
"_____no_output_____"
]
],
[
[
"def seasonal_pattern(season_time):\n \"\"\"Just an arbitrary pattern, you can change it if you wish\"\"\"\n return np.where(season_time < 0.4,\n np.cos(season_time * 2 * np.pi),\n 1 / np.exp(3 * season_time))\n\ndef seasonality(time, period, amplitude=1, phase=0):\n \"\"\"Repeats the same pattern at each period\"\"\"\n season_time = ((time + phase) % period) / period\n return amplitude * seasonal_pattern(season_time)",
"_____no_output_____"
],
[
"amplitude = 40\nseries = seasonality(time, period=365, amplitude=amplitude)\n\nplt.figure(figsize=(10, 6))\nplot_series(time, series)\nplt.show()",
"_____no_output_____"
]
],
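The seasonality helper repeats the same pattern every `period` samples, which can be checked directly. This sketch redefines the notebook's two functions so it is self-contained:

```python
import numpy as np

def seasonal_pattern(season_time):
    return np.where(season_time < 0.4,
                    np.cos(season_time * 2 * np.pi),
                    1 / np.exp(3 * season_time))

def seasonality(time, period, amplitude=1, phase=0):
    season_time = ((time + phase) % period) / period
    return amplitude * seasonal_pattern(season_time)

time = np.arange(4 * 365 + 1)
series = seasonality(time, period=365, amplitude=40)
# shifting by exactly one period reproduces the series sample for sample
shifted = seasonality(time + 365, period=365, amplitude=40)
```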
[
[
"Now let's create a time series with both trend and seasonality:",
"_____no_output_____"
]
],
[
[
"slope = 0.05\nseries = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)\n\nplt.figure(figsize=(10, 6))\nplot_series(time, series)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Noise",
"_____no_output_____"
],
[
"In practice few real-life time series have such a smooth signal. They usually have some noise, and the signal-to-noise ratio can sometimes be very low. Let's generate some white noise:",
"_____no_output_____"
]
],
[
[
"def white_noise(time, noise_level=1, seed=None):\n rnd = np.random.RandomState(seed)\n return rnd.randn(len(time)) * noise_level",
"_____no_output_____"
],
[
"noise_level = 5\nnoise = white_noise(time, noise_level, seed=42)\n\nplt.figure(figsize=(10, 6))\nplot_series(time, noise)\nplt.show()",
"_____no_output_____"
]
],
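With the noise level in hand, the signal-to-noise ratio of a flat signal is just `mean / sigma`. A quick seeded check that the generator above delivers the requested level (the flat signal at 100 is a made-up example, not from the course):

```python
import numpy as np

def white_noise(time, noise_level=1, seed=None):
    rnd = np.random.RandomState(seed)
    return rnd.randn(len(time)) * noise_level

time = np.arange(10000)
noise = white_noise(time, noise_level=5, seed=42)
empirical_level = noise.std()                 # should be close to 5
snr_of_flat_signal = 100.0 / empirical_level  # roughly 20 for a flat signal at 100
```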
[
[
"Now let's add this white noise to the time series:",
"_____no_output_____"
]
],
[
[
"series += noise\n\nplt.figure(figsize=(10, 6))\nplot_series(time, series)\nplt.show()",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e754b4d1ffe6e9f4beb9ccac1736b851c569843e | 100,976 | ipynb | Jupyter Notebook | Kaggle-Competitions/CrowdFlower/Initial Analysis.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | 1 | 2019-05-10T09:16:23.000Z | 2019-05-10T09:16:23.000Z | Kaggle-Competitions/CrowdFlower/Initial Analysis.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | null | null | null | Kaggle-Competitions/CrowdFlower/Initial Analysis.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | 1 | 2019-05-10T09:17:28.000Z | 2019-05-10T09:17:28.000Z | 213.029536 | 21,350 | 0.910484 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline",
"_____no_output_____"
],
[
"# load in the training and test set\ncrowd_train = pd.read_csv('./data/train.csv/train.csv', index_col='id', na_values=[''])\ncrowd_test = pd.read_csv('./data/test.csv/test.csv', index_col='id', na_values=[''])",
"_____no_output_____"
],
[
"# structure of the training set\ncrowd_train.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 10158 entries, 1 to 32668\nData columns (total 5 columns):\nquery 10158 non-null object\nproduct_title 10158 non-null object\nproduct_description 7714 non-null object\nmedian_relevance 10158 non-null int64\nrelevance_variance 10158 non-null float64\ndtypes: float64(1), int64(1), object(3)\nmemory usage: 357.1+ KB\n"
]
],
[
[
"### There are some missing values for product_description attribute in the training set.",
"_____no_output_____"
]
],
[
[
"# structure of the test set\ncrowd_test.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 22513 entries, 3 to 32671\nData columns (total 3 columns):\nquery 22513 non-null object\nproduct_title 22513 non-null object\nproduct_description 17086 non-null object\ndtypes: object(3)\nmemory usage: 439.7+ KB\n"
]
],
[
[
"### There are missing values for product_description attribute in the test set.",
"_____no_output_____"
]
],
[
[
"# figuring out unique query terms in training set\ntrain_query = crowd_train['query'].unique()",
"_____no_output_____"
],
[
"# figuring out unqiue query terms in test set\ntest_query = crowd_test['query'].unique()",
"_____no_output_____"
],
[
"# unique values in the training set\ntrain_query[:10]",
"_____no_output_____"
],
[
"# unique values in the test set\ntest_query[:10]",
"_____no_output_____"
],
[
"# lets find out those queries that overlap in training as well as test set\noverlapping_queries = (list(set(train_query) & set(test_query)))",
"_____no_output_____"
],
[
"# lets group examples by median_relevance\ng = crowd_train.groupby('median_relevance')",
"_____no_output_____"
],
[
"# lets find out length of the product title\ng.get_group(1).apply(lambda x: len(x['product_title'].split(' ')), axis=1).plot();",
"_____no_output_____"
],
[
"g.get_group(1).apply(lambda x: len(x['product_title'].split(' ')), axis=1).mean()",
"_____no_output_____"
],
[
"g.get_group(2).apply(lambda x: len(x['product_title'].split(' ')), axis=1).plot();",
"_____no_output_____"
],
[
"# mean word length\ng.get_group(2).apply(lambda x: len(x['product_title'].split(' ')), axis=1).mean()",
"_____no_output_____"
],
[
"g.get_group(3).apply(lambda x: len(x['product_title'].split(' ')), axis=1).plot();",
"_____no_output_____"
],
[
"g.get_group(3).apply(lambda x: len(x['product_title'].split(' ')), axis=1).mean()",
"_____no_output_____"
],
[
"g.get_group(4).apply(lambda x: len(x['product_title'].split(' ')), axis=1).plot();",
"_____no_output_____"
],
[
"g.get_group(4).apply(lambda x: len(x['product_title'].split(' ')), axis=1).mean()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e754cbf5e755dd41c1dd51aeef8dd0a3804f1710 | 46,481 | ipynb | Jupyter Notebook | Mathematics/Mathematical Modeling/02.07-Fed-Batch-Bioreactor.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
] | null | null | null | Mathematics/Mathematical Modeling/02.07-Fed-Batch-Bioreactor.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
] | null | null | null | Mathematics/Mathematical Modeling/02.07-Fed-Batch-Bioreactor.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
] | 2 | 2022-02-09T15:41:33.000Z | 2022-02-11T07:47:40.000Z | 192.86722 | 37,936 | 0.880962 | [
[
[
"<!--NOTEBOOK_HEADER-->\n*This notebook contains course material from [CBE30338](https://jckantor.github.io/CBE30338)\nby Jeffrey Kantor (jeff at nd.edu); the content is available [on Github](https://github.com/jckantor/CBE30338.git).\nThe text is released under the [CC-BY-NC-ND-4.0 license](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode),\nand code is released under the [MIT license](https://opensource.org/licenses/MIT).*",
"_____no_output_____"
],
[
"<!--NAVIGATION-->\n< [Exothermic Continuous Stirred Tank Reactor](http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/02.06-Exothermic-CSTR.ipynb) | [Contents](toc.ipynb) | [Model Library](http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/02.08-Model-Library.ipynb) ><p><a href=\"https://colab.research.google.com/github/jckantor/CBE30338/blob/master/notebooks/02.07-Fed-Batch-Bioreactor.ipynb\"><img align=\"left\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open in Google Colaboratory\"></a><p><a href=\"https://raw.githubusercontent.com/jckantor/CBE30338/master/notebooks/02.07-Fed-Batch-Bioreactor.ipynb\"><img align=\"left\" src=\"https://img.shields.io/badge/Github-Download-blue.svg\" alt=\"Download\" title=\"Download Notebook\"></a>",
"_____no_output_____"
],
[
"# Fed-Batch Bioreactor",
"_____no_output_____"
],
[
"## Model Development\n\nMass balances for a fed-batch bioreactor are given by\n\n$$\\begin{align*}\n\\frac{d(XV)}{dt} & = V r_g(X,S) \\\\\n\\frac{d(PV)}{dt} & = V r_P(X,S) \\\\\n\\frac{d(SV)}{dt} & = F S_f - \\frac{1}{Y_{X/S}}V r_g(X,S)\n\\end{align*}$$\n\nwhere $X$ is cell concentration, $P$ is product concentration, and $S$ is substrate concentration, all given in units of grams/liter. The reactor is fed with fresh substrate at concentration $S_f$ and flowrate $F(t)$ in liters per hour. The volume (in liters) is therefore changing \n\n$$\\frac{dV}{dt} = F(t)$$\n\nRate $r_g(X,S)$ is the production of fresh cell biomass in units of grams/liter/hr. The cell specific growth is expressed as\n\n$$r_g(X,S) = \\mu(S)X$$\n\nwhere $\\mu(S)$ is the cell specific growth rate. In the Monod model, the specific growth rate is a function of substrate concentration given by\n\n$$\\mu(S) = \\mu_{max}\\frac{S}{K_S + S}$$\n\nwhere $\\mu_{max}$ is the maximum specific growth rate, and $K_S$ is the half saturation constant which is the value of $S$ for which $\\mu = \\frac{1}{2}\\mu_{max}$.\n\nFor this model, the product is assumed to be a by-product of cell growth\n\n$$r_P(X,S) = Y_{P/X}r_g(X,S)$$\n\nwhere $Y_{P/X}$ is the product yield coefficient defined as\n\n$$Y_{P/X} = \\frac{\\mbox{mass of product formed}}{\\mbox{mass of new cells formed}}$$\n\nThe model further assumes that substrate is consumed is proportion to the mass of new cells formed where $Y_{X/S}$ is the yield coefficient for new cells\n\n$$Y_{P/X} = \\frac{\\mbox{mass of new cells formed}}{\\mbox{mass of substrate consumed}}$$",
"_____no_output_____"
],
[
"### Dilution Effect\n\nOne aspect of the fed-batch model is that volume is not constant, therefore the cell, product, and substrate concentrations are subject to a dilution effect. Mathematically, the chain rule of differential calculus provides a means to recast the state of model in terms of the intensive concentration variables $X$, $P$, and $S$, and extensive volume $V$.\n\n$$\\begin{align*}\n\\frac{d(XV)}{dt} & = V\\frac{dX}{dt} + X\\frac{dV}{dt} = V\\frac{dX}{dt} + F(t)X \\\\\n\\frac{d(PV)}{dt} & = V\\frac{dP}{dt} + P\\frac{dV}{dt} = V\\frac{dP}{dt} + F(t)P \\\\\n\\frac{d(SV)}{dt} & = V\\frac{dS}{dt} + S\\frac{dV}{dt} = V\\frac{dS}{dt} + F(t)S\n\\end{align*}$$\n\nRearranging and substituting into the mass balances gives\n\n$$\\begin{align*}\n\\frac{dX}{dt} & = - \\frac{F(t)}{V}X + r_g(X,S) \\\\\n\\frac{dP}{dt} & = - \\frac{F(t)}{V}P + r_P(X,S) \\\\\n\\frac{dS}{dt} & = \\frac{F(t)}{V}(S_f - S) - \\frac{1}{Y_{X/S}}r_g(X,S) \\\\\n\\frac{dV}{dt} & = F(t)\n\\end{align*}$$",
"_____no_output_____"
],
[
"## Python Implementation",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.integrate import odeint\n\n# parameter values\n\nmumax = 0.20 # 1/hour\nKs = 1.00 # g/liter\nYxs = 0.5 # g/g\nYpx = 0.2 # g/g\nSf = 10.0 # g/liter\n\n# inlet flowrate\n\ndef F(t):\n return 0.05\n\n# reaction rates\n\ndef mu(S):\n return mumax*S/(Ks + S)\n\ndef Rg(X,S):\n return mu(S)*X\n \ndef Rp(X,S):\n return Ypx*Rg(X,S)\n\n# differential equations\n\ndef xdot(x,t):\n X,P,S,V = x\n dX = -F(t)*X/V + Rg(X,S)\n dP = -F(t)*P/V + Rp(X,S)\n dS = F(t)*(Sf-S)/V - Rg(X,S)/Yxs\n dV = F(t)\n return [dX,dP,dS,dV]",
"_____no_output_____"
]
],
[
[
"## Simulation",
"_____no_output_____"
]
],
[
[
"IC = [0.05, 0.0, 10.0, 1.0]\n\nt = np.linspace(0,50)\nsol = odeint(xdot,IC,t)\nX,P,S,V = sol.transpose()\n\nplt.plot(t,X)\nplt.plot(t,P)\nplt.plot(t,S)\nplt.plot(t,V)\n\nplt.xlabel('Time [hr]')\nplt.ylabel('Concentration [g/liter]')\nplt.legend(['Cell Conc.',\n 'Product Conc.',\n 'Substrate Conc.',\n 'Volume [liter]'])",
"_____no_output_____"
]
],
[
[
"<!--NAVIGATION-->\n< [Exothermic Continuous Stirred Tank Reactor](http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/02.06-Exothermic-CSTR.ipynb) | [Contents](toc.ipynb) | [Model Library](http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/02.08-Model-Library.ipynb) ><p><a href=\"https://colab.research.google.com/github/jckantor/CBE30338/blob/master/notebooks/02.07-Fed-Batch-Bioreactor.ipynb\"><img align=\"left\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open in Google Colaboratory\"></a><p><a href=\"https://raw.githubusercontent.com/jckantor/CBE30338/master/notebooks/02.07-Fed-Batch-Bioreactor.ipynb\"><img align=\"left\" src=\"https://img.shields.io/badge/Github-Download-blue.svg\" alt=\"Download\" title=\"Download Notebook\"></a>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e754ccff840db48d08312c6c96ab69589fae3f99 | 184,743 | ipynb | Jupyter Notebook | shap_dt.ipynb | tteofili/xai-playground | 3ffe1bb31a68870ac2898201da9097ec4577b9f4 | [
"Apache-2.0"
] | null | null | null | shap_dt.ipynb | tteofili/xai-playground | 3ffe1bb31a68870ac2898201da9097ec4577b9f4 | [
"Apache-2.0"
] | null | null | null | shap_dt.ipynb | tteofili/xai-playground | 3ffe1bb31a68870ac2898201da9097ec4577b9f4 | [
"Apache-2.0"
] | null | null | null | 289.112676 | 64,756 | 0.90227 | [
[
[
"import pandas as pd\nimport sklearn\nimport numpy as np\nfrom sklearn.model_selection import train_test_split\nfrom sklearn import preprocessing\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn import preprocessing\nimport matplotlib.pyplot as plt\nimport shap",
"_____no_output_____"
],
[
"df = pd.read_csv('SBAnational.csv')\ndf.head()",
"Columns (9) have mixed types.Specify dtype option on import or set low_memory=False.\n"
],
[
"target = 'MIS_Status'\n\nmoney_columns = ['DisbursementGross', 'BalanceGross', 'ChgOffPrinGr', 'GrAppv', 'SBA_Appv']\n\nle = preprocessing.LabelEncoder()\nfor column_name in df.columns:\n if column_name not in money_columns and df[column_name].dtype == object and column_name != target:\n df[column_name] = le.fit_transform(df[column_name].astype(str))\n\nfor c in money_columns:\n df[c] = df[c].replace('\\$|,','', regex=True).replace('\\(','-', regex=True).replace('\\)','', regex=True)\n pd.to_numeric(df[c])\n\ndf[target] = le.fit_transform(df[target].astype(str))\n \ndf = df.replace([np.inf, -np.inf], np.nan)\ndf = df.dropna()\ndf.head()",
"_____no_output_____"
],
[
"Y = df[target]\nX = df.drop(columns=[target])\n\n# Split the data into train and test data:\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2)",
"_____no_output_____"
],
[
"rf = DecisionTreeRegressor(max_depth=2)\nrf.fit(X_train, Y_train) \nprint(rf.feature_importances_)\nimportances = rf.feature_importances_\nindices = np.argsort(importances)\nfeatures = X_train.columns\nplt.title('Feature Importances')\nplt.barh(range(len(indices)), importances[indices], color='b', align='center')\nplt.yticks(range(len(indices)), [features[i] for i in indices])\nplt.xlabel('Relative Importance')\nplt.show()",
"[0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0.97889533 0. 0.02110467 0. 0.\n 0. 0. ]\n"
],
[
"shap_values = shap.TreeExplainer(rf).shap_values(X_train)\nshap.summary_plot(shap_values, X_train, plot_type=\"bar\")",
"Setting feature_perturbation = \"tree_path_dependent\" because no background data was given.\n"
],
[
"shap.summary_plot(shap_values, X_train)",
"_____no_output_____"
],
[
"shap_interaction_values = shap.TreeExplainer(rf).shap_interaction_values(X_train)",
"_____no_output_____"
],
[
"shap.summary_plot(shap_interaction_values, X_train)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e754d8bc25de1adb64ce719ae4e42b84a8c1eea4 | 79,429 | ipynb | Jupyter Notebook | MoviesNotebooks/MoviesFlowDescription.ipynb | UBC-MOAD/outputanalysisnotebooks | 50839cde3832d26bac6641427fed03c818fbe170 | [
"Apache-2.0"
] | null | null | null | MoviesNotebooks/MoviesFlowDescription.ipynb | UBC-MOAD/outputanalysisnotebooks | 50839cde3832d26bac6641427fed03c818fbe170 | [
"Apache-2.0"
] | null | null | null | MoviesNotebooks/MoviesFlowDescription.ipynb | UBC-MOAD/outputanalysisnotebooks | 50839cde3832d26bac6641427fed03c818fbe170 | [
"Apache-2.0"
] | null | null | null | 181.344749 | 63,506 | 0.86126 | [
[
[
"### Movie with u, v, w, $\\rho$, tr, vorticity alongshore section",
"_____no_output_____"
]
],
[
[
"#KRM\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as mcolors\nimport matplotlib as mpl\n#from MITgcmutils import rdmds # not working\n#%matplotlib inline\nimport os\nfrom netCDF4 import Dataset\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport struct\nimport xarray as xr\nimport canyon_tools.readout_tools as rout",
"_____no_output_____"
]
],
[
[
"## Functions",
"_____no_output_____"
]
],
[
[
"def rel_vort(x,y,u,v):\n \"\"\"-----------------------------------------------------------------------------\n rel_vort calculates the z component of relative vorticity.\n \n INPUT:\n x,y,u,v should be at least 2D arrays in coordinate order (..., Y , X ) \n \n OUTPUT:\n relvort - z-relative vorticity array of size u[...,2:-2,2:-2]\n -----------------------------------------------------------------------------\"\"\"\n \n dvdx = (v[...,1:-1, 2:]-v[...,1:-1, :-2])/(x[...,1:-1, 2:]-x[...,1:-1, :-2])\n dudy = (u[...,2:,1:-1]-u[..., :-2,1:-1])/(y[..., 2:,1:-1]-y[..., :-2,1:-1])\n relvort = dvdx - dudy\n return relvort\n\n\ndef calc_rho(RhoRef,T,S,alpha=2.0E-4, beta=7.4E-4):\n \"\"\"-----------------------------------------------------------------------------\n calc_rho calculates the density using a linear equation of state.\n \n INPUT:\n RhoRef : reference density at the same z as T and S slices. Can be a scalar or a \n vector, depending on the size of T and S.\n T, S : should be at least 2D arrays in coordinate order (..., Y , X ) \n alpha = 2.0E-4 # 1/degC, thermal expansion coefficient\n beta = 7.4E-4, haline expansion coefficient\n OUTPUT:\n rho - Density [...,ny,nx]\n -----------------------------------------------------------------------------\"\"\"\n \n #Linear eq. of state \n rho = RhoRef*(np.ones(np.shape(T)) - alpha*(T[...,:,:]) + beta*(S[...,:,:]))\n return rho\n\ndef call_unstag(t):\n UU,VV = rout.unstagger(state.U.isel(T=t),state.V.isel(T=t))\n return(UU,VV)\n\n\ndef call_rho(t):\n T = state.Temp.isel(T=t,Y=yind)\n S = state.S.isel(T=t,Y=yind)\n rho = calc_rho(RhoRef,T,S,alpha=2.0E-4, beta=7.4E-4)\n return(rho) ",
"_____no_output_____"
]
],
[
[
"## Frame functions",
"_____no_output_____"
]
],
[
[
"# if y = 230, z from 0 to 56\n# if y=245, z from 0 to 47 \n# if y=260, z from 0 to 30 \nsns.set_style('dark')\n\n# ALONGSHORE VELOCITY \ndef Plot1(t,ax1,UU):\n umin = -0.55 # 0.50\n umax= 0.55\n Uplot=np.ma.array(UU.isel(Y=yind).data,mask=MaskC[:,yind,:])\n csU = np.linspace(umin,umax,num=20)\n csU2 = np.linspace(umin,umax,num=10)\n ax1.clear()\n #mesh=ax1.contourf(grid.X/1000,grid.Z[:47],Uplot[:47,:],csU,cmap='RdYlBu_r') # full shelf\n mesh=ax1.contourf(grid.X[120:240]/1000,grid.Z[:47],Uplot[:47,120:240],csU,cmap='RdYlBu_r') # zoom canyon\n \n if t == 1: \n cax,kw = mpl.colorbar.make_axes([ax1],location='top',anchor=(0.5,0.0),shrink=0.96)\n cb = plt.colorbar(mesh, cax=cax,ticks=[np.linspace(umin, umax,8) ],format='%.2f',**kw)\n \n #ax1.set_axis_bgcolor((205/255.0, 201/255.0, 201/255.0))\n ax1.set_ylabel('Depth (m)')\n ax1.text(0.7,0.05,'u (m/s)',transform=ax1.transAxes)\n\n# ACROSS-SHORE VELOCITY \ndef Plot2(t,ax2,VV):\n vmin = -0.25\n vmax = 0.25\n Uplot=np.ma.array(VV.isel(Yp1=yind).data,mask=MaskC[:,yind,:])\n csU = np.linspace(vmin,vmax,num=20)\n csU2 = np.linspace(vmin,vmax,num=10)\n ax2.clear()\n #mesh=ax2.contourf(grid.X/1000,grid.Z[:47],Uplot[:47,:],csU,cmap='RdYlBu_r') # full shelf\n mesh=ax2.contourf(grid.X[120:240]/1000,grid.Z[:47],Uplot[:47,120:240],csU,cmap='RdYlBu_r') # canyon zoom\n \n if t == 1: \n cax,kw = mpl.colorbar.make_axes([ax2],location='top',anchor=(0.5,0.0),shrink=0.96)\n cb = plt.colorbar(mesh, cax=cax,ticks=[np.linspace(vmin,vmax,8) ],format='%.2f',**kw)\n \n #ax2.set_axis_bgcolor((205/255.0, 201/255.0, 201/255.0))\n ax2.text(0.7,0.05,'v (m/s)',transform=ax2.transAxes)\n\n# VERTICAL VELOCITY \ndef Plot3(t,ax3): \n wmin = -5.0\n wmax = 5.0\n Uplot=np.ma.array(state.W.isel(T=t,Y=yind).data,mask=MaskC[:,yind,:])\n csU = np.linspace(wmin,wmax,num=20)\n csU2 = np.linspace(wmin,wmax,num=10)\n ax3.clear()\n #mesh=ax3.contourf(grid.X/1000,grid.Z[:47],Uplot[:47,:]*1000,csU,cmap='RdYlBu_r')\n 
mesh=ax3.contourf(grid.X[120:240]/1000,grid.Z[:47],Uplot[:47,120:240]*1000,csU,cmap='RdYlBu_r')\n \n if t == 1: \n cax,kw = mpl.colorbar.make_axes([ax3],location='top',anchor=(0.5,0.0),shrink=0.96)\n cb = plt.colorbar(mesh, cax=cax,ticks=[np.linspace(wmin,wmax,8) ],format='%.1f',**kw)\n \n #ax3.set_axis_bgcolor((205/255.0, 201/255.0, 201/255.0))\n ax3.text(0.65,0.05,'w ($10^{-3}$ m/s)',transform=ax3.transAxes)\n props = dict(boxstyle='round', facecolor='white', alpha=0.5)\n ax3.text(1.05,0.86,'day %0.1f' %(t/2.0),fontsize=20,transform=ax3.transAxes,bbox=props)\n\n# ISOPYCNALS\ndef Plot4(t,ax4):\n rho_min = 1020.4-1000 # 1020.4\n rho_max = 1021.9-1000 # 1022.4 if y=230,1021.4 if y=260,1021.9 if y=245\n density = call_rho(t)\n csU = np.linspace(rho_min,rho_max,num=21) #21\n csU2 = np.linspace(rho_min,rho_max,num=31) #31\n ax4.clear()\n #mesh=ax4.contourf(grid.X/1000,grid.Z[:47],\n # np.ma.array(density[:47,:].data,mask=MaskC[:47,yind,:]),\n # csU,cmap='inferno')\n mesh=ax4.contourf(grid.X[120:240]/1000,grid.Z[:47],\n np.ma.array(density[:47,120:240].data,mask=MaskC[:47,yind,120:240])-1000,\n csU,cmap='inferno')\n \n if t == 1:\n cax,kw = mpl.colorbar.make_axes([ax4],location='top',anchor=(0.5,0),shrink=0.96)\n cb = plt.colorbar(mesh, cax=cax,ticks=[np.linspace(rho_min,rho_max,6) ],format='%.1f',**kw)\n \n #CS = ax4.contour(grid.X/1000,grid.Z[:47],\n # np.ma.array(density[:47,:].data,mask=MaskC[:47,yind,:]),\n # csU2,colors='k',linewidths=[0.75] )\n CS = ax4.contour(grid.X[120:240]/1000,grid.Z[:47],\n np.ma.array(density[:47,120:240].data,mask=MaskC[:47,yind,120:240])-1000,\n csU2,colors='k',linewidths=[0.75] )\n \n #ax4.set_axis_bgcolor((205/255.0, 201/255.0, 201/255.0))\n ax4.text(0.7,0.05,r'$\\sigma$ (kg/m$^{3}$)',transform=ax4.transAxes)\n ax4.set_ylabel('Depth (m)')\n ax4.set_xlabel('Alongshore distance (km)')\n\n# TRACER \ndef Plot5(t,ax5): \n tr_min = 0\n tr_max = 17 # 21 if y=230, 12 if y=260, 17 if y=245\n csU = np.linspace(tr_min,tr_max,num=25)\n csU2 = 
np.linspace(tr_min,tr_max,num=31)\n ax5.clear()\n #mesh=ax5.contourf(grid.X/1000,grid.Z[:47],\n # np.ma.array(ptracers.Tr1[t,:47,yind,:].data,mask=MaskC[:47,yind,:]),\n # csU,cmap='viridis')\n mesh=ax5.contourf(grid.X[120:240]/1000,grid.Z[:47],\n np.ma.array(ptracers.Tr1[t,:47,yind,120:240].data,mask=MaskC[:47,yind,120:240]),\n csU,cmap='viridis')\n \n if t == 1:\n cax,kw = mpl.colorbar.make_axes([ax5],location='top',anchor=(0.5,0),shrink=0.96)\n cb = plt.colorbar(mesh, cax=cax,ticks=[np.linspace(tr_min,tr_max,9) ],format='%.1f',**kw)\n \n #CS = ax5.contour(grid.X/1000,grid.Z[:47],\n # np.ma.array(ptracers.Tr1[t,:47,yind,:].data,mask=MaskC[:47,yind,:]),\n # csU2,colors='k',linewidths=[0.75] )\n CS = ax5.contour(grid.X[120:240]/1000,grid.Z[:47],\n np.ma.array(ptracers.Tr1[t,:47,yind,120:240].data,mask=MaskC[:47,yind,120:240]),\n csU2,colors='k',linewidths=[0.75] )\n #ax5.set_axis_bgcolor((205/255.0, 201/255.0, 201/255.0))\n ax5.text(0.75,0.05,'Tracer \\n ($\\mu$mol/l)',transform=ax5.transAxes)\n ax5.set_xlabel('Alongshore distance (km)')\n\n# VORTICITY\ndef Plot6(t,ax6,UU,VV):\n vort_min = -50\n vort_max = 50\n relvort = rel_vort(grid.XC.data,grid.YC.data,UU.data,VV.data)\n Uplot=np.ma.array(relvort[:,yind-1,:],mask=MaskC[:,yind,1:-1])\n csU = np.linspace(vort_min,vort_max,num=20)\n csU2 = np.linspace(vort_min,vort_max,num=10)\n ax6.clear()\n #mesh=ax6.contourf(grid.X[1:-1]/1000,grid.Z[:47],Uplot[:47,:]*1E5,\n # csU,\n # cmap='PiYG_r')\n mesh=ax6.contourf(grid.X[120:240]/1000,grid.Z[:47],Uplot[:47,120:240]*1E5,\n csU,\n cmap='PiYG_r')\n \n if t == 1: \n cax,kw = mpl.colorbar.make_axes([ax6],location='top',anchor=(0.5,0.0),shrink=0.96)\n cb = plt.colorbar(mesh, cax=cax,\n ticks=[np.linspace(vort_min,vort_max,8) ],\n format='%.1f',**kw)\n \n #ax6.set_axis_bgcolor((205/255.0, 201/255.0, 201/255.0))\n ax6.text(0.65,0.05,'$\\zeta$ ($10^{-5}$ s$^{-1}$)',transform=ax6.transAxes)\n ax6.set_xlabel('Alongshore distance (km)')\n props = dict(boxstyle='round', 
facecolor='white', alpha=0.5)\n ax6.text(1.05,0.1,'Near \\n mid-length ',fontsize=15,transform=ax6.transAxes,bbox=props)\n \n",
"_____no_output_____"
]
],
[
[
"## Set-up",
"_____no_output_____"
]
],
[
[
"# Grid, state and tracers datasets of base case\ngrid_file = '/data/kramosmu/results/TracerExperiments/3DVISC_REALISTIC/run01/gridGlob.nc'\ngrid = xr.open_dataset(grid_file)\n\nstate_file = '/data/kramosmu/results/TracerExperiments/3DVISC_REALISTIC/run01/stateGlob.nc' \nstate = xr.open_dataset(state_file)\n\nptracers_file = '/data/kramosmu/results/TracerExperiments/3DVISC_REALISTIC/run01/ptracersGlob.nc'\nptracers = xr.open_dataset(ptracers_file)\n\n#RhoRef = np.squeeze(rdmds('/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run38/RhoRef'))\nRhoRef = 999.79998779 # It is constant in all my runs, can't run rdmds",
"_____no_output_____"
],
[
"# General input\nnx = 616\nny = 360\nnz = 90\nnt = 19 # t dimension size \n\nyind = 245 # y index for alongshore cross-section\n\nhFacmasked = np.ma.masked_values(grid.HFacC.data, 0)\nMaskC = np.ma.getmask(hFacmasked)\n \n ",
"_____no_output_____"
],
[
"import matplotlib.animation as animation\nprint(animation.writers.list())",
"['ffmpeg', 'ffmpeg_file', 'avconv', 'avconv_file', 'imagemagick', 'html', 'pillow', 'imagemagick_file']\n"
],
[
"sns.set_style('white')\nsns.set_context(\"talk\")\n\n#Empty figures\nfig,((ax1,ax2,ax3),(ax4, ax5,ax6)) = plt.subplots(2, 3, figsize=(15, 8),sharex='col', sharey='row')\nplt.subplots_adjust(hspace =0.1, wspace=0.1)\n\n#Initial image\ndef init():\n UU,VV = call_unstag(0)\n Plot1(0,ax1,UU)\n Plot2(0,ax2,VV)\n Plot3(0,ax3)\n Plot4(0,ax4)\n Plot5(0,ax5)\n Plot6(0,ax6,UU,VV)\n #plt.tight_layout()\n \ndef animate(tt):\n UU,VV = call_unstag(tt)\n Plot1(tt,ax1,UU)\n Plot2(tt,ax2,VV)\n Plot3(tt,ax3)\n Plot4(tt,ax4)\n Plot5(tt,ax5)\n Plot6(tt,ax6,UU,VV)\n xticklabels = ax1.get_xticklabels() + ax2.get_xticklabels() + ax3.get_xticklabels()\n plt.setp(xticklabels, visible=False)\n yticklabels = ax2.get_yticklabels() + ax3.get_yticklabels() + ax5.get_yticklabels() + ax6.get_yticklabels()\n plt.setp(yticklabels, visible=False)\n\n\nWriter = animation.writers['ffmpeg']\nwriter = Writer(fps=1, metadata=dict(artist='Me'), bitrate=1800)\n\n\nanim = animation.FuncAnimation(fig, animate, init_func=init,frames=19,repeat=False)\nanim.save('3DVISC_REALISTIC_run01__alongshore_section_y245_ZOOM.mp4', writer=writer)\n\nplt.show()\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e754ef8a7e96dbad9c1051fc8e6b3c4ba7b4b098 | 6,727 | ipynb | Jupyter Notebook | Code/IPython/bootcamp_data_management.ipynb | ljsun88/data_bootcamp_nyu | abc2486060672e19f5e4a71342cb8ca05155db83 | [
"MIT"
] | 74 | 2015-01-14T22:51:39.000Z | 2021-01-31T17:23:58.000Z | Code/IPython/bootcamp_data_management.ipynb | ljsun88/data_bootcamp_nyu | abc2486060672e19f5e4a71342cb8ca05155db83 | [
"MIT"
] | 13 | 2015-03-18T20:24:40.000Z | 2016-05-06T13:44:33.000Z | Code/IPython/bootcamp_data_management.ipynb | ljsun88/data_bootcamp_nyu | abc2486060672e19f5e4a71342cb8ca05155db83 | [
"MIT"
] | 60 | 2015-03-24T00:05:50.000Z | 2021-05-12T15:15:32.000Z | 25.481061 | 153 | 0.434518 | [
[
[
"# Data management with Pandas \n\nAn overview of some of the data management tools in Python's [Pandas package](http://pandas.pydata.org/pandas-docs/version/0.17.1/). Includes:\n\n* Selecting variables \n* Selecting observations \n\n* Indexing \n\n* Groupby \n* Stacking \n\n* Doubly indexed dataframes \n\n* Combining dataframes (concat) \n* Merging dataframes\n\nThis notebook was written by Dave Backus for the NYU Stern course [Data Bootcamp](http://databootcamp.nyuecon.com/). ",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n%matplotlib inline ",
"_____no_output_____"
]
],
[
[
"## Reminders\n\n* Dataframes \n* Index and columns ",
"_____no_output_____"
],
[
"## Selecting variables \n\n",
"_____no_output_____"
],
[
"## Datasets\n\nWe take these examples from the data input chapter: \n\n* Penn World Table \n* World Economic Outlook \n* UN Population Data\n\nAll of them come in an unfriendly form; our goal is to fix them. Here we extract small subsets to work with so that we can follow all the steps. ",
"_____no_output_____"
],
[
"### Penn World Table \n\nThis one comes with countries stacked on top of each others. \n\n",
"_____no_output_____"
]
],
[
[
"data = {'countrycode': ['CHN', 'CHN', 'CHN', 'FRA', 'FRA', 'FRA', 'USA', 'USA', 'USA'],\n 'pop': [1124.7939240000001, 1246.8400649999999, 1318.1701519999999, 58.183173999999994,\n 60.764324999999999, 64.731126000000003, 253.33909699999998, 282.49630999999999,\n 310.38394799999998],\n 'rgdpe': [2611027.0, 4951485.0, 11106452.0, 1293837.0, 1752570.125, 2031723.25,\n 7964788.5, 11494606.0, 13151344.0],\n 'year': [1990, 2000, 2010, 1990, 2000, 2010, 1990, 2000, 2010]}\npwt = pd.DataFrame(data)\npwt",
"_____no_output_____"
],
[
"### UN Population Data ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
e754ff3fd74efa5dda9433651b89910f1d952f3e | 160,006 | ipynb | Jupyter Notebook | src/md-codes/single_particle-x4-x2-OV-Operator.ipynb | kadupitiya/RNN-MD | 9350ab209126983bff79f34b34e2f68f038e536c | [
"Apache-2.0"
] | 7 | 2020-05-19T02:24:37.000Z | 2021-05-27T11:01:24.000Z | src/md-codes/single_particle-x4-x2-OV-Operator.ipynb | kadupitiya/RNN-MD | 9350ab209126983bff79f34b34e2f68f038e536c | [
"Apache-2.0"
] | 1 | 2021-02-13T01:12:09.000Z | 2021-02-13T01:12:09.000Z | src/md-codes/single_particle-x4-x2-OV-Operator.ipynb | kadupitiya/RNN-MD | 9350ab209126983bff79f34b34e2f68f038e536c | [
"Apache-2.0"
] | 4 | 2020-05-20T20:50:35.000Z | 2022-01-11T08:20:04.000Z | 384.629808 | 143,992 | 0.924584 | [
[
[
"# Define vector 3D class",
"_____no_output_____"
]
],
[
[
"import math\nimport numpy as np\n\nclass Vector3D:\n def __init__(self, initial_x = 0.0, initial_y = 0.0, initial_z = 0.0):\n self.x = initial_x\n self.y = initial_y\n self.z = initial_z\n \n def magnitude(self):\n return math.sqrt(self.x**2 + self.y**2 + self.z**2)\n \n def sqd_magnitude(self):\n return self.x**2 + self.y**2 + self.z**2\n \n # Operator overloading for adding two vecs \n def __add__(self, v):\n return Vector3D(self.x + v.x, self.y + v.y, self.z + v.z)\n \n def __mul__(self, multiplier):\n if isinstance(multiplier, type(self)):\n return Vector3D(self.x*multiplier.x, self.y*multiplier.y, self.z*multiplier.z)\n else:\n return Vector3D(self.x*multiplier, self.y*multiplier, self.z*multiplier)\n \n def __rmul__(self, multiplier):\n return self.__mul__(multiplier)\n \n def __truediv__(self, divisor):\n if isinstance(divisor, type(self)):\n return Vector3D(self.x/divisor.x, self.y/divisor.y, self.z/divisor.z)\n else:\n return Vector3D(self.x/divisor, self.y/divisor, self.z/divisor)\n \n def __sub__(self, v):\n return Vector3D(self.x - v.x, self.y - v.y, self.z - v.z)\n \n def __eq__(self, other):\n if isinstance(other, Vector3D):\n return self.x == other.x and self.y == other.y and self.z == other.z\n \n #printing overloaded\n def __str__(self): \n return \"x=\" + str(self.x) + \", y=\" + str(self.y) + \", z=\" + str(self.z)\n",
"_____no_output_____"
]
],
[
[
"# Define Particle class",
"_____no_output_____"
]
],
[
[
"import math\nfrom decimal import *\n\nclass Particle:\n \n def __init__(self, initial_m = 1.0, diameter = 2.0, initial_position = Vector3D(0.0, 0.0, 0.0), initial_velocity = Vector3D(0.0, 0.0, 0.0)):\n self.m = initial_m\n self.d = diameter\n self.position = Vector3D(initial_position.x, 0.0, 0.0)\n self.position_with_ov = Vector3D(initial_position.x, 0.0, 0.0)\n self.position_with_ov_op = Vector3D(initial_position.x, 0.0, 0.0)\n self.prev_position_ov = Vector3D(initial_position.x, 0.0, 0.0)\n self.prev_position = Vector3D(initial_position.x, 0.0, 0.0)\n self.prev2_position = Vector3D(initial_position.x, 0.0, 0.0) \n self.velocity = Vector3D(initial_velocity.x, 0.0, 0.0)\n self.velocity_ov_op = Vector3D(initial_velocity.x, 0.0, 0.0)\n self.sigma = 1.0\n self.eps = 1.0\n self.force = Vector3D(0.0, 0.0, 0.0)\n self.force_VV_OP = Vector3D(0.0, 0.0, 0.0)\n self.prev_force = Vector3D(0.0, 0.0, 0.0)\n \n def volume(self):\n self.volume = (4.0/3.0) * math.pi * ((self.d/2.0)**3)\n \n def update_position(self, dt):\n self.position.x = self.position.x + (self.velocity.x * dt)\n self.position_with_ov.x= self.position.x\n \n #position updated to a full time-step \n def update_position_with_VV_OV(self, dt):\n \n self.prev_position.x = self.position.x\n \n self.position.x = self.position.x + (self.velocity.x * dt)\n\n # OV position update\n temp = self.position_with_ov.x*2 - self.prev_position_ov.x + (self.force.x * (dt**2)/self.m)\n \n self.prev_position_ov.x= self.position_with_ov.x\n\n self.position_with_ov.x = temp\n \n self.prev_force.x=self.force.x\n \n self.position_with_ov_op.x = self.position.x\n \n #velocity computation velocity v(1)\n self.velocity_ov_op.x= (self.position_with_ov_op.x-self.prev2_position.x)/(2.0*dt)\n \n #print(\"OV error: \"+str(self.position_with_ov-self.position))\n \n \n #position updated to a full time-step \n def update_position_with_VV_OV_Operator(self, dt):\n self.position.x = self.position.x + (self.velocity.x * dt)\n\n # OV position update\n 
temp1 = self.position_with_ov.x*2 - self.prev_position_ov.x + (self.force.x * (dt**2)/self.m)\n \n self.prev_position_ov.x = self.position_with_ov.x\n self.position_with_ov.x = temp1\n \n # OV operator position update #.magnitude()\n #print(self.prev_force)\n getcontext().prec = 100\n value = Decimal(self.force_VV_OP.x) / Decimal(self.prev_force.x)\n \n #print(value)\n \n dt2_m = (self.position_with_ov_op.x - self.prev_position.x*2 + self.prev2_position.x)*float(value)\n temp2 = self.position_with_ov_op.x*2 - self.prev_position.x + dt2_m\n\n \n self.prev2_position.x = self.prev_position.x\n self.prev_position.x = self.position_with_ov_op.x\n self.position_with_ov_op.x = temp2\n \n self.prev_force.x=self.force_VV_OP.x\n \n #velocity computation velocity v(2)\n self.velocity_ov_op.x= (self.position_with_ov_op.x-self.prev2_position.x)/(2.0*dt)\n \n def get_force_on_block(self):\n self.force=Vector3D(0.0, 0.0, 0.0)\n self.force.x = self.position.x - self.position.x**3\n #print(self.force)\n self.force_VV_OP=Vector3D(0.0, 0.0, 0.0)\n self.force_VV_OP.x = self.position_with_ov_op.x - self.position_with_ov_op.x**3\n #print(self.force_VV_OP)\n \n\n def update_velocity(self, dt):\n self.velocity = self.velocity + (self.force * (dt / self.m)) \n\n def kinetic_energy(self):\n self.ke = 0.5 * self.m * (self.velocity.magnitude()**2)\n self.ke_OV_OP = 0.5 * self.m * (self.velocity_ov_op.magnitude()**2)\n \n def get_energy_on_block(self):\n self.pe = ((self.position.x **4)/4) - ((self.position.x **2)/2)\n # consider a fixed particle in 0, 0, 0\n self.pe_OV_OP = ((self.prev_position.x **4)/4) - ((self.prev_position.x **2)/2)\n \n \n def print_pos_error(self):\n ov_error= self.position_with_ov-self.position\n ovop_error= self.position_with_ov_op-self.position\n print(\"OV error: \"+str(ov_error.x**2))\n print(\"OVP error: \"+str(ovop_error.x**2))\n ",
"_____no_output_____"
]
],
[
[
"# Velocity verlet code",
"_____no_output_____"
]
],
[
[
"import math\nimport time\n\ndef velocity_verlet(mass=None, initial_pos=2.0, time=100, deltaT=0.01):\n \n print(\"Modeling the block-spring system\")\n print(\"Need a useful abstraction of the problem: a point particle\")\n print(\"Make a Particle class\")\n print(\"Set up initial conditions\")\n\n \n sphere = Particle(initial_m = 1.0, diameter = 2.0, initial_position = Vector3D(0.0, 0.0, 0.0), initial_velocity = Vector3D(0.0, 0.0, 0.0))\n sphere_volume = sphere.volume()\n print(\"volume of a unit (radius = 1) sphere is {}\".format(sphere_volume))\n \n # inputs\n if mass is None:\n print(\"enter mass of the block: \")\n time.sleep(0.1) # This sleep is not needed, just added to get input box below the print statements\n mass = float(input())\n \n block = Particle(initial_m = mass, diameter = 2.0, initial_position = Vector3D(initial_pos, 0.0, 0.0), initial_velocity = Vector3D(0.0, 0.0, 0.0));\n \n # we can compute the initial force on the block\n block.get_force_on_block()\n\n #Print the system\n print(\"mass of the block is {}\".format(block.m))\n print(\"initial position of the block is {}\".format(block.position.x))\n print(\"initial velocity of the block is {}\".format(block.velocity.x))\n print(\"initial force on the block is {}\".format(block.force.x))\n \n t = time\n dt=deltaT\n S = int(t // dt)\n\n #simulation Begins here\n simulated_result = open(\"data/dynamics_mass={}_x0={}_t={}_deltaT={}.out\".format(mass,initial_pos,t, dt), \"w\")\n block.get_energy_on_block()\n block.kinetic_energy()\n simulated_result.write(\"{0} {1} {2} {3} {4} {5} {6} {7}\\n\".format(0*dt, block.position.x, block.position_with_ov_op.x, block.velocity.x, block.ke, block.pe, (block.ke + block.pe), (block.ke_OV_OP + block.pe_OV_OP)))\n \n \n ## 1st unroll for getting r1 from r0 using VV\n block.update_velocity(dt/2.0) #update velocity half timestep\n block.update_position(dt) #update position full timestep\n block.get_force_on_block()\n block.update_velocity(dt/2.0)\n #filing the time, 
position of the block\n block.kinetic_energy()\n block.get_energy_on_block()\n \n ## 2nd unroll for getting r2 from r1 using VV\n block.update_velocity(dt/2.0) #update velocity half timestep\n block.update_position_with_VV_OV(dt) #update position full timestep\n block.get_force_on_block()\n block.update_velocity(dt/2.0)\n #filing the time, position of the block\n block.kinetic_energy()\n block.get_energy_on_block()\n \n \n \n for i in range(2,S+1):\n block.update_velocity(dt/2.0) #update velocity half timestep\n block.update_position_with_VV_OV_Operator(dt) #update position full timestep\n block.get_force_on_block()\n block.update_velocity(dt/2.0)\n #filing the time, position of the block\n block.kinetic_energy()\n block.get_energy_on_block()\n \n if i%10000==0: \n block.print_pos_error()\n \n simulated_result.write(\"{0} {1} {2} {3} {4} {5} {6} {7} \\n\".format((i+1)*dt, block.position.x, block.position_with_ov_op.x, block.velocity.x, block.ke, block.pe, (block.ke + block.pe), (block.ke_OV_OP + block.pe_OV_OP)))\n \n simulated_result.close() \n print(\"Simulation is over.\")",
"_____no_output_____"
]
],
[
[
"# Run the code",
"_____no_output_____"
]
],
[
[
"import time\nstart = time.time()\n\n# Run the program\n# mass=1.0, initial_pos=1.0, time=100, deltaT=0.01\nparams__ = (2.0, -2.0, 100, 0.01)\nvelocity_verlet(*params__)\n\nend = time.time()\nprint(\"Time: \"+str(end - start))",
"Modeling the block-spring system\nNeed a useful abstraction of the problem: a point particle\nMake a Particle class\nSet up initial conditions\nvolume of a unit (radius = 1) sphere is None\nmass of the block is 2.0\ninitial position of the block is -2.0\ninitial velocity of the block is 0.0\ninitial force on the block is 6.0\nSimulation is over.\nTime: 0.24263930320739746\n"
]
],
[
[
"# Plot the graphs",
"_____no_output_____"
]
],
[
[
"# Visualize the data\n'''\nGNUPlot\nplot 'exact_dynamics.out' with lines, 'simulated_dynamics.out' using 1:2 with lp pt 6 title \"position\", 'simulated_dynamics.out' using 1:3 with p pt 4 title \"velocity\", 'simulated_dynamics.out' u 1:4 w p title \"kinetic\", 'simulated_dynamics.out' u 1:5 w p title \"potential\", \"simulated_dynamics.out\" u 1:6 w p title \"total\"\n'''\n\nimport matplotlib.pyplot as plt\n#%matplotlib notebook\n#%matplotlib notebook\n%matplotlib inline\n\n\nimport numpy as np\n\nsimulated_result_file = np.loadtxt(\"data/dynamics_mass={}_x0={}_t={}_deltaT={}.out\".format(*params__))\n\nfig=plt.figure(figsize=(12, 6))\n\n#plt.plot(exact_dynamics_file[:,0],exact_dynamics_file[:,1],'r+', label='exact_dynamics', linewidth=1, markersize=3, linestyle='dashed')\n#plt.plot(simulated_result_file[:,0],simulated_result_file[:,1], label='position')\n#plt.plot(simulated_result_file[:,0],simulated_result_file[:,2], label='position_OV_OP')\n#plt.plot(simulated_result_file[:,0],simulated_result_file[:,2], label='velocity')\n#plt.plot(simulated_result_file[:,0],simulated_result_file[:,3], label='kinetic')\n#plt.plot(simulated_result_file[:,0],simulated_result_file[:,5], label='potential')\nplt.plot(simulated_result_file[:,0],abs(simulated_result_file[:,6]-simulated_result_file[0,6])/simulated_result_file[0,6], label='total')\nplt.plot(simulated_result_file[:,0],abs(simulated_result_file[:,7]-simulated_result_file[0,7])/simulated_result_file[0,7], label='total_OV_OP')\nplt.xlabel('time')\nplt.ylabel('Position X(t)')\nplt.legend()\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7550aa900b60e38612030bd8336cea9bda802eb | 8,750 | ipynb | Jupyter Notebook | Programming Assignment 1/Getting started with iPython Notebook.ipynb | Drishtant-Shri/Coursera-UW-Machine-Learning-Foundations-A-Case-Study-Approach | 0d6287472dc7bb79f34d50a7b1b054417dbfcf13 | [
"MIT"
] | 10 | 2018-02-07T12:25:35.000Z | 2021-02-10T15:56:55.000Z | Programming Assignment 1/Getting started with iPython Notebook.ipynb | Drishtant-Shri/Coursera-UW-Machine-Learning-Foundations-A-Case-Study-Approach | 0d6287472dc7bb79f34d50a7b1b054417dbfcf13 | [
"MIT"
] | null | null | null | Programming Assignment 1/Getting started with iPython Notebook.ipynb | Drishtant-Shri/Coursera-UW-Machine-Learning-Foundations-A-Case-Study-Approach | 0d6287472dc7bb79f34d50a7b1b054417dbfcf13 | [
"MIT"
] | 14 | 2018-05-15T00:53:51.000Z | 2020-10-01T08:52:50.000Z | 16.634981 | 269 | 0.460571 | [
[
[
"#Installing Python and GraphLab Create",
"_____no_output_____"
],
[
"Please follow the installation instructions here before getting started:\n\n\n##We have done\n* Installed Python\n* Started Ipython Notebook",
"_____no_output_____"
],
[
"#Getting started with Python",
"_____no_output_____"
]
],
[
[
"print 'Hello World!'",
"Hello World!\n"
]
],
[
[
"##Create some variables in Python",
"_____no_output_____"
]
],
[
[
"i = 4 #int",
"_____no_output_____"
],
[
"type(i)",
"_____no_output_____"
],
[
"f = 4.1 #float",
"_____no_output_____"
],
[
"type(f)",
"_____no_output_____"
],
[
"b = True #boolean variable",
"_____no_output_____"
],
[
"s = \"This is a string!\"",
"_____no_output_____"
],
[
"print s",
"This is a string!\n"
]
],
[
[
"##Advanced python types",
"_____no_output_____"
]
],
[
[
"l = [3,1,2] #list",
"_____no_output_____"
],
[
"print l",
"[3, 1, 2]\n"
],
[
"d = {'foo':1, 'bar':2.3, 's':'my first dictionary'} #dictionary",
"_____no_output_____"
],
[
"print d",
"{'s': 'my first dictionary', 'foo': 1, 'bar': 2.3}\n"
],
[
"print d['foo'] #element of a dictionary",
"1\n"
],
[
"n = None #Python's null type",
"_____no_output_____"
],
[
"type(n)",
"_____no_output_____"
]
],
[
[
"##Advanced printing",
"_____no_output_____"
]
],
[
[
"print \"Our float value is %s. Our int value is %s.\" % (f,i) #Python is pretty good with strings",
"Our float value is 4.1. Our int value is 4.\n"
]
],
[
[
"##Conditional statements in python",
"_____no_output_____"
]
],
[
[
"if i == 1 and f > 4:\n print \"The value of i is 1 and f is greater than 4.\"\nelif i > 4 or f > 4:\n print \"i or f are both greater than 4.\"\nelse:\n print \"both i and f are less than or equal to 4\"\n",
"i or f are both greater than 4.\n"
]
],
[
[
"##Conditional loops",
"_____no_output_____"
]
],
[
[
"print l",
"[3, 1, 2]\n"
],
[
"for e in l:\n print e",
"3\n1\n2\n"
]
],
[
[
"Note that in Python, we don't use {} or other markers to indicate the part of the loop that gets iterated. Instead, we just indent and align each of the iterated statements with spaces or tabs. (You can use as many as you want, as long as the lines are aligned.)",
"_____no_output_____"
]
],
[
[
"counter = 6\nwhile counter < 10:\n print counter\n counter += 1",
"6\n7\n8\n9\n"
]
],
[
[
"#Creating functions in Python\n\nAgain, we don't use {}, but just indent the lines that are part of the function.",
"_____no_output_____"
]
],
[
[
"def add2(x):\n y = x + 2\n return y",
"_____no_output_____"
],
[
"i = 5",
"_____no_output_____"
],
[
"add2(i)",
"_____no_output_____"
]
],
[
[
"We can also define simple functions with lambdas:",
"_____no_output_____"
]
],
[
[
"square = lambda x: x*x",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7550d23a4466cf12ace0a0331ba00f3854397da | 124,446 | ipynb | Jupyter Notebook | Course_1_Getting started with TensorFlow 2/Tensorflow_2_week_3.ipynb | nagar-mayank/TensorFlow-2-for-Deep-Learning-by-Imperial-College-London | 6aac6f522fa921ef8c2e21b8cdd418d2c2a1ff58 | [
"MIT"
] | null | null | null | Course_1_Getting started with TensorFlow 2/Tensorflow_2_week_3.ipynb | nagar-mayank/TensorFlow-2-for-Deep-Learning-by-Imperial-College-London | 6aac6f522fa921ef8c2e21b8cdd418d2c2a1ff58 | [
"MIT"
] | null | null | null | Course_1_Getting started with TensorFlow 2/Tensorflow_2_week_3.ipynb | nagar-mayank/TensorFlow-2-for-Deep-Learning-by-Imperial-College-London | 6aac6f522fa921ef8c2e21b8cdd418d2c2a1ff58 | [
"MIT"
] | null | null | null | 124,446 | 124,446 | 0.898687 | [
[
[
"import tensorflow as tf\nprint(tf.__version__)",
"2.4.1\n"
]
],
[
[
"# Validation, regularisation and callbacks",
"_____no_output_____"
],
[
" ## Coding tutorials\n #### [1. Validation sets](#coding_tutorial_1)\n #### [2. Model regularisation](#coding_tutorial_2)\n #### [3. Introduction to callbacks](#coding_tutorial_3)\n #### [4. Early stopping / patience](#coding_tutorial_4)",
"_____no_output_____"
],
[
"***\n<a id=\"coding_tutorial_1\"></a>\n## Validation sets",
"_____no_output_____"
],
[
"#### Load the data",
"_____no_output_____"
]
],
[
[
"# Load the diabetes dataset\nfrom sklearn.datasets import load_diabetes\n\n\ndiabetes_dataset = load_diabetes()\nprint(diabetes_dataset.keys())",
"dict_keys(['data', 'target', 'DESCR', 'feature_names', 'data_filename', 'target_filename'])\n"
],
[
"print(diabetes_dataset['DESCR'])",
".. _diabetes_dataset:\n\nDiabetes dataset\n----------------\n\nTen baseline variables, age, sex, body mass index, average blood\npressure, and six blood serum measurements were obtained for each of n =\n442 diabetes patients, as well as the response of interest, a\nquantitative measure of disease progression one year after baseline.\n\n**Data Set Characteristics:**\n\n :Number of Instances: 442\n\n :Number of Attributes: First 10 columns are numeric predictive values\n\n :Target: Column 11 is a quantitative measure of disease progression one year after baseline\n\n :Attribute Information:\n - Age\n - Sex\n - Body mass index\n - Average blood pressure\n - S1\n - S2\n - S3\n - S4\n - S5\n - S6\n\nNote: Each of these 10 feature variables have been mean centered and scaled by the standard deviation times `n_samples` (i.e. the sum of squares of each column totals 1).\n\nSource URL:\nhttps://www4.stat.ncsu.edu/~boos/var.select/diabetes.html\n\nFor more information see:\nBradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani (2004) \"Least Angle Regression,\" Annals of Statistics (with discussion), 407-499.\n(https://web.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf)\n"
],
[
"# Save the input and target variables\ndata = diabetes_dataset['data']\ntargets = diabetes_dataset['target']\ndata.shape, targets.shape",
"_____no_output_____"
],
[
"# Normalise the target data (this will make clearer training curves)\n\ntargets = (targets - targets.mean()) / targets.std()",
"_____no_output_____"
],
[
"# Split the data into train and test sets\nfrom sklearn.model_selection import train_test_split\n\n\ntrain_data, test_data, train_targets, test_targets = train_test_split(data, targets, test_size=0.1)\nprint(train_data.shape)\nprint(test_data.shape)\nprint(train_targets.shape)\nprint(test_targets.shape)",
"(397, 10)\n(45, 10)\n(397,)\n(45,)\n"
]
],
[
[
"#### Train a feedforward neural network model",
"_____no_output_____"
]
],
[
[
"# Build the model\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.keras.models import Sequential\n\n\ndef get_model():\n model = Sequential([\n Dense(128, activation='relu', input_shape=(train_data.shape[1], 1)),\n Dense(128, activation='relu'),\n Dense(128, activation='relu'),\n Dense(128, activation='relu'),\n Dense(128, activation='relu'),\n Dense(64, activation='relu'),\n Dense(1),\n ])\n return model",
"_____no_output_____"
],
[
"# Print the model summary\nmodel = get_model()\nmodel.summary()",
"Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_7 (Dense) (None, 10, 128) 256 \n_________________________________________________________________\ndense_8 (Dense) (None, 10, 128) 16512 \n_________________________________________________________________\ndense_9 (Dense) (None, 10, 128) 16512 \n_________________________________________________________________\ndense_10 (Dense) (None, 10, 128) 16512 \n_________________________________________________________________\ndense_11 (Dense) (None, 10, 128) 16512 \n_________________________________________________________________\ndense_12 (Dense) (None, 10, 64) 8256 \n_________________________________________________________________\ndense_13 (Dense) (None, 10, 1) 65 \n=================================================================\nTotal params: 74,625\nTrainable params: 74,625\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"# Compile the model\nmodel.compile(optimizer='adam', loss='mae', metrics=['mae'])",
"_____no_output_____"
],
[
"# Train the model, with some of the data reserved for validation\nimport numpy as np\nhistory = model.fit(train_data[..., np.newaxis], train_targets, epochs=50, validation_split=0.15, batch_size=64, verbose=False)\n",
"_____no_output_____"
],
[
"# Evaluate the model on the test set\nmodel.evaluate(test_data[..., np.newaxis], test_targets)\n",
"2/2 [==============================] - 0s 5ms/step - loss: 0.8980 - mae: 0.8980\n"
]
],
[
[
"#### Plot the learning curves",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"# Plot the training and validation loss\n\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('Loss vs. epochs')\nplt.ylabel('Loss')\nplt.xlabel('Epoch')\nplt.legend(['Training', 'Validation'], loc='upper right')\nplt.show()",
"_____no_output_____"
]
],
[
[
"***\n<a id=\"coding_tutorial_2\"></a>\n## Model regularisation",
"_____no_output_____"
],
[
"#### Adding regularisation with weight decay and dropout",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.layers import Dropout\nfrom tensorflow.keras import regularizers",
"_____no_output_____"
],
[
"def get_regularised_model(wd, rate):\n model = Sequential([\n Dense(128, kernel_regularizer=regularizers.l2(wd), activation=\"relu\", input_shape=(train_data.shape[1],)),\n Dropout(rate),\n Dense(128, kernel_regularizer=regularizers.l2(wd), activation=\"relu\"),\n Dropout(rate),\n Dense(128, kernel_regularizer=regularizers.l2(wd), activation=\"relu\"),\n Dropout(rate),\n Dense(128, kernel_regularizer=regularizers.l2(wd), activation=\"relu\"),\n Dropout(rate),\n Dense(128, kernel_regularizer=regularizers.l2(wd), activation=\"relu\"),\n Dropout(rate),\n Dense(128, kernel_regularizer=regularizers.l2(wd), activation=\"relu\"),\n Dense(1)\n ])\n return model",
"_____no_output_____"
],
[
"# Re-build the model with weight decay and dropout layers\nmodel = get_regularised_model(1e-5, 0.3)",
"_____no_output_____"
],
[
"# Compile the model\nmodel.compile(optimizer='adam', loss='mse', metrics=['mae'])",
"_____no_output_____"
],
[
"# Train the model, with some of the data reserved for validation\nhistory = model.fit(train_data, train_targets, epochs=100, validation_split=0.15, batch_size=64, verbose=False)",
"_____no_output_____"
],
[
"# Evaluate the model on the test set\nmodel.evaluate(test_data, test_targets)",
"2/2 [==============================] - 0s 5ms/step - loss: 0.6260 - mae: 0.6243\n"
]
],
[
[
"#### Plot the learning curves",
"_____no_output_____"
]
],
[
[
"# Plot the training and validation loss\n\nimport matplotlib.pyplot as plt\n\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('Loss vs. epochs')\nplt.ylabel('Loss')\nplt.xlabel('Epoch')\nplt.legend(['Training', 'Validation'], loc='upper right')\nplt.show()",
"_____no_output_____"
]
],
[
[
"***\n<a id=\"coding_tutorial_3\"></a>\n## Introduction to callbacks",
"_____no_output_____"
],
[
"#### Example training callback",
"_____no_output_____"
]
],
[
[
"# Write a custom callback \n# https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/Callback\nfrom tensorflow.keras.callbacks import Callback\n\n\nclass TrainingCallback(Callback):\n def on_training_begin(self, logs=None):\n print('Starting training...')\n \n def on_training_end(self, logs=None):\n print('Training Finished')\n\n def on_epoch_begin(self, epoch, logs=None):\n print(f'Training starting epoch {epoch}')\n\n def on_epoch_end(self, epoch, logs=None):\n print(f'Training end epoch {epoch}')\n\n def on_train_batch_begin(self, batch, logs=None):\n print(f'Training starting batch {batch}')\n\n def on_train_batch_end(self, batch, logs=None):\n print(f'Training end batch {batch}')\n\n\nclass TestingCallback(Callback):\n def on_testing_begin(self, logs=None):\n print('Starting testing...')\n \n def on_training_end(self, logs=None):\n print('Testing Finished')\n\n def on_test_batch_begin(self, batch, logs=None):\n print(f'Testing starting batch {batch}')\n\n def on_test_batch_end(self, batch, logs=None):\n print(f'Testing end batch {batch}')\n\n\nclass PredictionCallback(Callback):\n def on_prediction_begin(self, logs=None):\n print('Starting prediction...')\n \n def on_prediction_end(self, logs=None):\n print('prediction Finished')\n\n def on_prediction_batch_begin(self, batch, logs=None):\n print(f'prediction starting batch {batch}')\n\n def on_prediction_batch_end(self, batch, logs=None):\n print(f'prediction end batch {batch}')",
"_____no_output_____"
],
[
"# Re-build the model\nmodel = get_regularised_model(1e-5, 0.3)",
"_____no_output_____"
],
[
"# Compile the model\nmodel.compile(optimizer='adam', loss='mse')",
"_____no_output_____"
]
],
[
[
"#### Train the model with the callback",
"_____no_output_____"
]
],
[
[
"# Train the model, with some of the data reserved for validation\nhistory = model.fit(train_data, train_targets, epochs=3, verbose=False, callbacks=[TrainingCallback()], validation_split=0.2)",
"Training starting epoch 0\nTraining starting batch 0\nTraining end batch 0\nTraining starting batch 1\nTraining end batch 1\nTraining starting batch 2\nTraining end batch 2\nTraining starting batch 3\nTraining end batch 3\nTraining starting batch 4\nTraining end batch 4\nTraining starting batch 5\nTraining end batch 5\nTraining starting batch 6\nTraining end batch 6\nTraining starting batch 7\nTraining end batch 7\nTraining starting batch 8\nTraining end batch 8\nTraining starting batch 9\nTraining end batch 9\nTraining end epoch 0\nTraining starting epoch 1\nTraining starting batch 0\nTraining end batch 0\nTraining starting batch 1\nTraining end batch 1\nTraining starting batch 2\nTraining end batch 2\nTraining starting batch 3\nTraining end batch 3\nTraining starting batch 4\nTraining end batch 4\nTraining starting batch 5\nTraining end batch 5\nTraining starting batch 6\nTraining end batch 6\nTraining starting batch 7\nTraining end batch 7\nTraining starting batch 8\nTraining end batch 8\nTraining starting batch 9\nTraining end batch 9\nTraining end epoch 1\nTraining starting epoch 2\nTraining starting batch 0\nTraining end batch 0\nTraining starting batch 1\nTraining end batch 1\nTraining starting batch 2\nTraining end batch 2\nTraining starting batch 3\nTraining end batch 3\nTraining starting batch 4\nTraining end batch 4\nTraining starting batch 5\nTraining end batch 5\nTraining starting batch 6\nTraining end batch 6\nTraining starting batch 7\nTraining end batch 7\nTraining starting batch 8\nTraining end batch 8\nTraining starting batch 9\nTraining end batch 9\nTraining end epoch 2\n"
],
[
"# Evaluate the model\nmodel.evaluate(test_data, test_targets, verbose=False, callbacks=[TestingCallback()])",
"Testing starting batch 0\nTesting end batch 0\nTesting starting batch 1\nTesting end batch 1\n"
],
[
"# Make predictions with the model\nmodel.predict(test_data, callbacks=[PredictionCallback()], verbose=False)",
"_____no_output_____"
]
],
[
[
"***\n<a id=\"coding_tutorial_4\"></a>\n## Early stopping / patience",
"_____no_output_____"
],
[
"#### Re-train the models with early stopping",
"_____no_output_____"
]
],
[
[
"# Re-train the unregularised model\nimport numpy as np\nunregularised_model = get_model()\nunregularised_model.compile(optimizer='adam', loss='mae')\nunreg_history = unregularised_model.fit(train_data[...,np.newaxis], train_targets, epochs=100,\n validation_split=0.15, batch_size=64, verbose=False,\n callbacks=[tf.keras.callbacks.EarlyStopping(patience=2)])",
"_____no_output_____"
],
[
"# Evaluate the model on the test set\nunregularised_model.evaluate(test_data[..., np.newaxis], test_targets, verbose=2)\n",
"2/2 - 0s - loss: 0.7867\n"
],
[
"# Re-train the regularised model\nregularised_model = get_regularised_model(1e-5, 0.2)\nregularised_model.compile(optimizer='adam', loss='mse')\nreg_history = regularised_model.fit(train_data, train_targets, epochs=100,\n validation_split=0.15, batch_size=64,verbose=False,\n callbacks=[tf.keras.callbacks.EarlyStopping(patience=10)])",
"_____no_output_____"
],
[
"# Evaluate the model on the test set\nregularised_model.evaluate(test_data, test_targets, verbose=2)\n",
"2/2 - 0s - loss: 0.5251\n"
]
],
[
[
"#### Plot the learning curves",
"_____no_output_____"
]
],
[
[
"# Plot the training and validation loss\n\nimport matplotlib.pyplot as plt\n\nfig = plt.figure(figsize=(12, 5))\n\nfig.add_subplot(121)\n\nplt.plot(unreg_history.history['loss'])\nplt.plot(unreg_history.history['val_loss'])\nplt.title('Unregularised model: loss vs. epochs')\nplt.ylabel('Loss')\nplt.xlabel('Epoch')\nplt.legend(['Training', 'Validation'], loc='upper right')\n\nfig.add_subplot(122)\n\nplt.plot(reg_history.history['loss'])\nplt.plot(reg_history.history['val_loss'])\nplt.title('Regularised model: loss vs. epochs')\nplt.ylabel('Loss')\nplt.xlabel('Epoch')\nplt.legend(['Training', 'Validation'], loc='upper right')\n\nplt.show()",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7550e31b5a5b46ea0b0948bbdb94412aad63134 | 221,581 | ipynb | Jupyter Notebook | Wiki_data.ipynb | QQrex/Movies-ETL | d42286f928889936f4b5998aeb711060ea072bba | [
"MIT"
] | null | null | null | Wiki_data.ipynb | QQrex/Movies-ETL | d42286f928889936f4b5998aeb711060ea072bba | [
"MIT"
] | null | null | null | Wiki_data.ipynb | QQrex/Movies-ETL | d42286f928889936f4b5998aeb711060ea072bba | [
"MIT"
] | null | null | null | 40.141486 | 183 | 0.389839 | [
[
[
"# Load dep\nimport json\nimport pandas as pd\nimport numpy as np\nimport os",
"_____no_output_____"
],
[
"file_dir = 'C:\\\\Users\\Billy\\Desktop\\Bootcamp\\Mods\\Mod 8\\Movies-ETL\\Resources'\n\nwith open(f'{file_dir}\\wikipedia-movies.json', mode='r') as file:\n wiki_movies_raw = json.load(file)",
"_____no_output_____"
],
[
"len(wiki_movies_raw)",
"_____no_output_____"
],
[
"# First 5 records\nwiki_movies_raw[:5]",
"_____no_output_____"
],
[
"# Last 5 records\nwiki_movies_raw[-5:]",
"_____no_output_____"
],
[
"# Some records in the middle\nwiki_movies_raw[3600:3605]",
"_____no_output_____"
],
[
"wiki_movies_df = pd.DataFrame(wiki_movies_raw)\n\nwiki_movies_df.head()",
"_____no_output_____"
],
[
"wiki_movies_df.columns.tolist()",
"_____no_output_____"
],
[
"wiki_movies = [movie for movie in wiki_movies_raw \n if ('Director' in movie or 'Directed by' in movie)\n and 'imdb_link' in movie\n and 'No. of episodes' not in movie]\n\nlen(wiki_movies)",
"_____no_output_____"
],
[
"wiki_movies_df2 = pd.DataFrame(wiki_movies)\n\nlen(wiki_movies_df2.columns)",
"_____no_output_____"
],
[
"wiki_movies_df[wiki_movies_df['Arabic'].notnull()]",
"_____no_output_____"
],
[
"wiki_movies_df[wiki_movies_df['Arabic'].notnull()]['url']",
"_____no_output_____"
],
[
"sorted(wiki_movies_df.columns.tolist())",
"_____no_output_____"
],
[
"def clean_movie(movie):\n movie = dict(movie) #create a non-destructive copy\n \n # Remove alternative language titles\n alt_titles = {}\n for key in ['Also known as','Arabic','Cantonese','Chinese','French',\n 'Hangul','Hebrew','Hepburn','Japanese','Literally',\n 'Mandarin','McCune–Reischauer','Original title','Polish',\n 'Revised Romanization','Romanized','Russian',\n 'Simplified','Traditional','Yiddish']:\n if key in movie:\n alt_titles[key] = movie[key]\n movie.pop(key)\n if len(alt_titles) > 0:\n movie['alt_titles'] = alt_titles \n \n # Change column name\n def change_column_name(old_name, new_name):\n if old_name in movie:\n movie[new_name] = movie.pop(old_name)\n change_column_name('Adaptation by', 'Writer(s)')\n change_column_name('Country of origin', 'Country')\n change_column_name('Directed by', 'Director')\n change_column_name('Distributed by', 'Distributor')\n change_column_name('Edited by', 'Editor(s)')\n change_column_name('Length', 'Running time')\n change_column_name('Original release', 'Release date')\n change_column_name('Music by', 'Composer(s)')\n change_column_name('Produced by', 'Producer(s)')\n change_column_name('Producer', 'Producer(s)')\n change_column_name('Productioncompanies ', 'Production company(s)')\n change_column_name('Productioncompany ', 'Production company(s)')\n change_column_name('Released', 'Release Date')\n change_column_name('Release Date', 'Release date')\n change_column_name('Screen story by', 'Writer(s)')\n change_column_name('Screenplay by', 'Writer(s)')\n change_column_name('Story by', 'Writer(s)')\n change_column_name('Theme music composer', 'Composer(s)')\n change_column_name('Written by', 'Writer(s)')\n \n return movie",
"_____no_output_____"
],
[
"# list comprehension for clean wiki_wiki movies\nclean_movies = [clean_movie(movie) for movie in wiki_movies]\nwiki_movies_df = pd.DataFrame(clean_movies)\nsorted(wiki_movies_df.columns.tolist())",
"_____no_output_____"
],
[
"# extract imbd id and remove duplicate rows\nwiki_movies_df['imdb_id'] = wiki_movies_df['imdb_link'].str.extract(r'(tt\\d{7})')\nprint(len(wiki_movies_df))\nwiki_movies_df.drop_duplicates(subset='imdb_id', inplace=True)\nprint(len(wiki_movies_df))\nwiki_movies_df.head()",
"7076\n7033\n"
],
[
"# Count null values in each column\n[[column,wiki_movies_df[column].isnull().sum()] for column in wiki_movies_df.columns]",
"_____no_output_____"
],
[
"# list of columns < 90% null values\nwiki_columns_to_keep = [column for column in wiki_movies_df.columns if wiki_movies_df[column].isnull().sum() < len(wiki_movies_df) * 0.9]\nwiki_movies_df = wiki_movies_df[wiki_columns_to_keep]\n\nwiki_movies_df",
"_____no_output_____"
],
[
"wiki_movies_df.dtypes",
"_____no_output_____"
],
[
"# create box office variable and dropna\nbox_office = wiki_movies_df['Box office'].dropna()",
"_____no_output_____"
],
[
"def is_not_a_string(x):\n return type(x) != str\nbox_office[box_office.map(is_not_a_string)]",
"_____no_output_____"
],
[
"# remove string\n#lambda x: type(x) != str\n#box_office[box_office.map(lambda x: type(x) !=str)]",
"_____no_output_____"
],
[
"box_office = box_office.apply(lambda x: ' '.join(x) if type(x) == list else x)\n\nbox_office",
"_____no_output_____"
],
[
"# import regular express\n\nimport re",
"_____no_output_____"
],
[
"# Form 1 for regular expression ($123.4 millions and billions)\n\nform_one = r'\\$\\d+\\.?\\d*\\s*[mb]illion'\n\nbox_office.str.contains(form_one, flags=re.IGNORECASE, na=False).sum()",
"_____no_output_____"
],
[
"# Form 2 for regular expression ($123,456,789)\n\nform_two = r'\\$\\d{1,3}(?:,\\d{3})+'\nbox_office.str.contains(form_two, flags=re.IGNORECASE, na=False).sum()",
"_____no_output_____"
],
[
"\nmatches_form_one = box_office.str.contains(form_one, flags=re.IGNORECASE, na=False)\nmatches_form_two = box_office.str.contains(form_two, flags=re.IGNORECASE, na=False)\n\nbox_office[~matches_form_one & ~matches_form_two]",
"_____no_output_____"
],
[
"# fix ranges\n\nbox_office = box_office.str.replace(r'\\$.*[-—–](?![a-z])', '$', regex=True)",
"_____no_output_____"
],
[
"# Form 1 edit for regular expression ($123.4 millions and billions)\n\nform_one = r'\\$\\s*\\d+\\.?\\d*\\s*[mb]illi?on'\n\nprint(box_office.str.contains(form_one, flags=re.IGNORECASE, na=False).sum())\n\n# Form 2 edit for regular expression ($123,456,789)\n\nform_two = r'\\$\\s*\\d{1,3}(?:[,\\.]\\d{3})+(?!\\s[mb]illion)'\nprint(box_office.str.contains(form_two, flags=re.IGNORECASE, na=False).sum())",
"3909\n1559\n"
],
[
"def parse_dollars(s):\n # if s is not a string, return NaN\n if type(s) != str:\n return np.nan\n\n # if input is of the form $###.# million\n if re.match(r'\\$\\s*\\d+\\.?\\d*\\s*milli?on', s, flags=re.IGNORECASE):\n\n # remove dollar sign and \" million\"\n s = re.sub('\\$|\\s|[a-zA-Z]','', s)\n\n # convert to float and multiply by a million\n value = float(s) * 10**6\n\n # return value\n return value\n\n # if input is of the form $###.# billion\n elif re.match(r'\\$\\s*\\d+\\.?\\d*\\s*billi?on', s, flags=re.IGNORECASE):\n\n # remove dollar sign and \" billion\"\n s = re.sub('\\$|\\s|[a-zA-Z]','', s)\n\n # convert to float and multiply by a billion\n value = float(s) * 10**9\n\n # return value\n return value\n\n # if input is of the form $###,###,###\n elif re.match(r'\\$\\s*\\d{1,3}(?:[,\\.]\\d{3})+(?!\\s[mb]illion)', s, flags=re.IGNORECASE):\n\n # remove dollar sign and commas\n s = re.sub('\\$|,','', s)\n\n # convert to float\n value = float(s)\n\n # return value\n return value\n\n # otherwise, return NaN\n else:\n return np.nan",
"_____no_output_____"
],
[
"wiki_movies_df['box_office'] = box_office.str.extract(f'({form_one}|{form_two})', flags=re.IGNORECASE)[0].apply(parse_dollars)",
"_____no_output_____"
],
[
"wiki_movies_df.drop('Box office', axis=1, inplace=True)\n\nwiki_movies_df",
"_____no_output_____"
],
[
"# drop budgets\n\nbudget = wiki_movies_df['Budget'].dropna()",
"_____no_output_____"
],
[
"# convert list to string\n\nbudget = budget.map(lambda x: ' '.join(x) if type(x) ==list else x)",
"_____no_output_____"
],
[
"# remove $ -\n\nbudget = budget.str.replace(r'\\$.*[-—–](?![a-z])', '$', regex=True)\n\nbudget",
"_____no_output_____"
],
[
"matches_form_one = budget.str.contains(form_one, flags=re.IGNORECASE, na=False)\nmatches_form_two = budget.str.contains(form_two, flags=re.IGNORECASE, na=False)\nbudget[~matches_form_one & ~matches_form_two]",
"_____no_output_____"
],
[
"budget = budget.str.replace(r'\\[\\d+\\]\\s*', '')\nbudget[~matches_form_one & ~matches_form_two]",
"C:\\Users\\Billy\\AppData\\Local\\Temp/ipykernel_22772/3746335845.py:1: FutureWarning: The default value of regex will change from True to False in a future version.\n budget = budget.str.replace(r'\\[\\d+\\]\\s*', '')\n"
],
[
"wiki_movies_df['budget'] = budget.str.extract(f'({form_one}|{form_two})', flags=re.IGNORECASE)[0].apply(parse_dollars)",
"_____no_output_____"
],
[
"wiki_movies_df.drop('Budget', axis=1, inplace=True)\n\nwiki_movies_df",
"_____no_output_____"
],
[
"# release date\n\nrelease_date = wiki_movies_df['Release date'].dropna().apply(lambda x: ' '.join(x) if type(x) == list else x)\n\nrelease_date.head()",
"_____no_output_____"
],
[
"# release date re forms\n\ndate_form_one = r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\\s[123]?\\d,\\s\\d{4}'\ndate_form_two = r'\\d{4}.[01\\d.[01]\\d.[0123]\\d]'\ndate_form_three = r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\\s\\d{4}'\ndate_form_four = r'\\d{4}'",
"_____no_output_____"
],
[
"#extract the dates\n\nrelease_date.str.extract(f'({date_form_one}|{date_form_two}|{date_form_three}|{date_form_four})', flags=re.IGNORECASE)",
"_____no_output_____"
],
[
"wiki_movies_df['release_date'] = pd.to_datetime(release_date.str.extract(f'({date_form_one}|{date_form_two}|{date_form_three}|{date_form_four})')[0], infer_datetime_format=True)",
"_____no_output_____"
],
[
"wiki_movies_df",
"_____no_output_____"
],
[
"# Parse Running time\n\nrunning_time = wiki_movies_df['Running time'].dropna().apply(lambda x: ' '.join(x) if type(x) == list else x)\n\nrunning_time",
"_____no_output_____"
],
[
"# checking how many running time are in mins\n\nrunning_time.str.contains(r'^\\d*\\s*minutes$', flags=re.IGNORECASE, na=False).sum()",
"_____no_output_____"
],
[
"# check what is happening to other running times\n\nrunning_time[running_time.str.contains(r'^\\d*\\s*minutes$', flags=re.IGNORECASE, na=False) != True]",
"_____no_output_____"
],
[
"# change to accept m for miniutes\nrunning_time.str.contains(r'^\\d*\\s*m', flags=re.IGNORECASE, na=False).sum()",
"_____no_output_____"
],
[
"running_time[running_time.str.contains(r'^\\d*\\s*m', flags=re.IGNORECASE, na=False) != True]",
"_____no_output_____"
],
[
"running_time_extract = running_time.str.extract(r'(\\d+)\\s*ho?u?r?s?\\s*(\\d*)|(\\d+)\\s*m')\n\nrunning_time_extract",
"_____no_output_____"
],
[
"running_time_extract = running_time_extract.apply(lambda col: pd.to_numeric(col, errors='coerce')).fillna(0)",
"_____no_output_____"
],
[
"running_time_extract",
"_____no_output_____"
],
[
"wiki_movies_df['running_time'] = running_time_extract.apply(lambda row: row[0]*60 + row[1] if row[2] == 0 else row[2], axis=1)\n\nwiki_movies_df",
"_____no_output_____"
],
[
"wiki_movies_df.drop('Running time', axis=1, inplace=True)",
"_____no_output_____"
],
[
"wiki_movies_df.count()",
"_____no_output_____"
],
[
"save_file = os.path.join(\"Resources\", \"clean_wiki_data.csv\")\n\nwiki_movies_df.to_csv(save_file)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75512f5cbc1b766536406e4cd86ae7846bb39ae | 347,149 | ipynb | Jupyter Notebook | notebooks/partitioning.ipynb | windisch/dtreeviz | d14af9ca6ef16232cf405ae3384d65daa71be188 | [
"MIT"
] | 1 | 2020-10-09T14:21:03.000Z | 2020-10-09T14:21:03.000Z | notebooks/partitioning.ipynb | ysy970/dtreeviz | f5152b70e58f54befc641aedd8deaa658e1f441a | [
"MIT"
] | 1 | 2020-12-10T07:56:04.000Z | 2020-12-11T14:08:26.000Z | notebooks/partitioning.ipynb | ysy970/dtreeviz | f5152b70e58f54befc641aedd8deaa658e1f441a | [
"MIT"
] | null | null | null | 730.84 | 67,240 | 0.953277 | [
[
[
"import numpy as np\nimport pandas as pd\n\nfrom sklearn.linear_model import LinearRegression, Ridge, Lasso, LogisticRegression\nfrom sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor\n\nfrom sklearn.ensemble import RandomForestClassifier, RandomForestRegressor\nfrom sklearn.datasets import load_boston, load_iris, load_wine, load_digits, \\\n load_breast_cancer, load_diabetes\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import confusion_matrix, precision_score, recall_score\n\nimport matplotlib.pyplot as plt\n#%config InlineBackend.figure_format = 'svg'\n\nfrom sklearn import tree\nfrom dtreeviz.trees import *\nfrom dtreeviz.models.sklearn_decision_trees import ShadowSKDTree\n",
"_____no_output_____"
]
],
[
[
"## Regression",
"_____no_output_____"
]
],
[
[
"df_cars = pd.read_csv(\"../data/cars.csv\")\nX = df_cars.drop('MPG', axis=1)\ny = df_cars['MPG']",
"_____no_output_____"
],
[
"features_reg_univar = [\"WGT\"]\ntarget_reg = \"MPG\"\ndtr_univar = DecisionTreeRegressor(max_depth=3, criterion=\"mae\")\ndtr_univar.fit(X[features_reg_univar], y)\n\nskdtree_univar = ShadowSKDTree(dtr_univar, X[features_reg_univar], y, features_reg_univar, target_reg)\n",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(1,1, figsize=(4,2.5))\n\nrtreeviz_univar(dtr_univar, X[features_reg_univar], y, features_reg_univar, target_reg, ax=ax)\n\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(1,1, figsize=(4,2.5))\n\nrtreeviz_univar(skdtree_univar, ax=ax)\n\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
],
[
"features_reg_bivar_3d = [\"WGT\", \"ENG\"]\ntarget_reg_bivar_3d = \"MPG\"\ndtr_bivar_3d = DecisionTreeRegressor(max_depth=3, criterion=\"mae\")\ndtr_bivar_3d.fit(X[features_reg_bivar_3d], y)\n\nskdtree_bivar_3d = ShadowSKDTree(dtr_bivar_3d, X[features_reg_bivar_3d], y, features_reg_bivar_3d, target_reg_bivar_3d)",
"_____no_output_____"
],
[
"rtreeviz_bivar_3D(dtr_bivar_3d,\n X[features_reg_bivar_3d], y,\n feature_names=features_reg_bivar_3d,\n target_name=target_reg_bivar_3d,\n fontsize=10,\n elev=30,\n azim=20,\n dist=10,\n show={'splits','title'},\n colors={'tesselation_alpha':.5})",
"_____no_output_____"
],
[
"rtreeviz_bivar_3D(skdtree_bivar_3d, \n fontsize=10,\n elev=30,\n azim=20,\n dist=10,\n show={'splits','title'},\n colors={'tesselation_alpha':.5})",
"_____no_output_____"
],
[
"rtreeviz_bivar_heatmap(dtr_bivar_3d, X[features_reg_bivar_3d], y, feature_names=features_reg_bivar_3d, target_name=target_reg_bivar_3d)",
"_____no_output_____"
],
[
"rtreeviz_bivar_heatmap(skdtree_bivar_3d)\n",
"_____no_output_____"
]
],
[
[
"## Classification",
"_____no_output_____"
]
],
[
[
"iris = load_iris()\nX = iris.data\nX = X[:,2].reshape(-1,1) # petal length (cm)\ny = iris.target\nlen(X), len(y)\n\nfeature_c_univar = \"petal length (cm)\"\ntarget_c_univar = \"iris\"\nclass_names_univar = list(iris.target_names)",
"_____no_output_____"
],
[
"dtc_univar = DecisionTreeClassifier(max_depth=1, min_samples_leaf=1)\ndtc_univar.fit(X, y)\n\nskdtree_c_univar = ShadowSKDTree(dtc_univar, X, y, feature_c_univar, target_c_univar, class_names_univar)",
"_____no_output_____"
],
[
"figsize = (6,2)\nctreeviz_univar(dtc_univar, X, y, \n feature_names=feature_c_univar, target_name=target_c_univar, class_names=class_names_univar,\n nbins=40, gtype='barstacked',\n show={'splits','title'})\nplt.tight_layout()\nplt.show()\n",
"_____no_output_____"
],
[
"figsize = (6,2)\nctreeviz_univar(skdtree_c_univar,\n nbins=40, gtype='barstacked',\n show={'splits','title'})\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
],
[
"wine = load_wine()\nX = wine.data\nX = X[:,[12,6]]\ny = wine.target\nlen(X), len(y)\n\ncolors = {'classes':\n [None, # 0 classes\n None, # 1 class\n [\"#FEFEBB\",\"#a1dab4\"], # 2 classes\n [\"#FEFEBB\",\"#D9E6F5\",'#a1dab4'], # 3\n ]\n }\n\nfeature_c_bivar = ['proline','flavanoid']\ntarget_c_bivar = \"wine\"\nclass_name_bivar = list(wine.target_names)\nfeature_c_bivar, target_c_bivar, class_name_bivar",
"_____no_output_____"
],
[
"dtc_bivar = DecisionTreeClassifier(max_depth=2)\ndtc_bivar.fit(X, y)\n\nskdtree_c_bivar = ShadowSKDTree(dtc_bivar, X, y, feature_c_bivar, target_c_bivar, class_name_bivar)",
"_____no_output_____"
],
[
"ctreeviz_bivar(dtc_bivar, X, y, \n feature_names=feature_c_bivar, target_name=target_c_bivar, class_names=class_name_bivar,\n show={'splits', \"legend\"}, \n colors={'scatter_edge': 'black'})\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
],
[
"ctreeviz_bivar(skdtree_c_bivar,\n show={'splits', \"legend\"}, \n colors={'scatter_edge': 'black'})\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75513c8d93300acb4a8c5eb3efdb6495fce4e09 | 2,907 | ipynb | Jupyter Notebook | ReproducingMLpipelines/Paper6/DataPreprocessing.ipynb | CompareML/AIM-Manuscript | 4cf118c1f06e8a1843d56e1f7f8f3d1698aac248 | [
"MIT"
] | null | null | null | ReproducingMLpipelines/Paper6/DataPreprocessing.ipynb | CompareML/AIM-Manuscript | 4cf118c1f06e8a1843d56e1f7f8f3d1698aac248 | [
"MIT"
] | null | null | null | ReproducingMLpipelines/Paper6/DataPreprocessing.ipynb | CompareML/AIM-Manuscript | 4cf118c1f06e8a1843d56e1f7f8f3d1698aac248 | [
"MIT"
] | null | null | null | 34.2 | 263 | 0.627795 | [
[
[
"### Load and transform dataset. \nInstall Bioconductor biocLite package in order to access the golubEsets library. [golubEsets](https://bioconductor.org/packages/release/data/experiment/manuals/golubEsets/man/golubEsets.pdf) contains the raw data used by Todd Golub in the original paper.\n\nWe use the scale method in the original paper instead of the thresholding algorithm in this paper for now.",
"_____no_output_____"
]
],
[
[
"## Most code is commented in this cell since it is unnecessary and time-consuming to run it everytime.\n# options(repos='http://cran.rstudio.com/') \n# source(\"http://bioconductor.org/biocLite.R\")\n# biocLite(\"golubEsets\")\nsuppressMessages(library(golubEsets))\n#Training data predictor and response\n\ndata(Golub_Train)\ngolub_train_p = t(exprs(Golub_Train))\ngolub_train_r =pData(Golub_Train)[, \"ALL.AML\"]\n#Testing data predictor\ndata(Golub_Test)\ngolub_test_p = t(exprs(Golub_Test))\ngolub_test_r = pData(Golub_Test)[, \"ALL.AML\"]\n\n# Thresholding\ngolub_train_pp = golub_train_p\ngolub_train_pp[golub_train_pp<100] = 100\ngolub_train_pp[golub_train_pp>16000] = 16000\n\n# Filtering\ngolub_filter = function(x, r = 5, d=500){\n minval = min(x)\n maxval = max(x)\n (maxval/minval>r)&&(maxval-minval>d)\n}\nindex = apply(golub_train_pp, 2, golub_filter)\ngolub_index = (1:7129)[index]\ngolub_train_pp = golub_train_pp[, golub_index]\n\ngolub_test_pp = golub_test_p\ngolub_test_pp[golub_test_pp<100] = 100\ngolub_test_pp[golub_test_pp>16000] = 16000\ngolub_test_pp = golub_test_pp[, golub_index]\n\n# Log Transformation\ngolub_train_p_trans = log10(golub_train_pp)\ngolub_test_p_trans = log10(golub_test_pp)\n\n# Normalization\ntrain_m = colMeans(golub_train_p_trans)\ntrain_sd = apply(golub_train_p_trans, 2, sd)\ngolub_train_p_trans = t((t(golub_train_p_trans)-train_m)/train_sd)\ngolub_test_p_trans = t((t(golub_test_p_trans)-train_m)/train_sd)\nsave(golub_train_p_trans, golub_test_p_trans, golub_train_r, golub_test_r, golub_train_pp, golub_test_pp,file = \"DP.rda\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |
e75530dfb794d8c25be706df137e8f7886876802 | 81,147 | ipynb | Jupyter Notebook | docs/start/02_train2.ipynb | SanstyleLab/pytorch-book | d615ddade989559dfeaf8e765d294e1616af320a | [
"Apache-2.0"
] | 1 | 2021-11-05T01:25:28.000Z | 2021-11-05T01:25:28.000Z | docs/start/02_train2.ipynb | SanstyleLab/pytorch-book | d615ddade989559dfeaf8e765d294e1616af320a | [
"Apache-2.0"
] | null | null | null | docs/start/02_train2.ipynb | SanstyleLab/pytorch-book | d615ddade989559dfeaf8e765d294e1616af320a | [
"Apache-2.0"
] | null | null | null | 93.272414 | 228 | 0.624817 | [
[
[
"cd ../../apps/",
"e:\\kaggle\\pytorch-book\\apps\n"
],
[
"import time\nfrom pathlib import Path\n\nfrom random import randint\nfrom matplotlib import pyplot as plt\n\nimport torch as np\nfrom torchvision.utils import save_image\n\nfrom models.CSA import CSA\nfrom tools.toml import load_option\nfrom opt.dataset import init_dataset\n\nfrom tools.file import mkdir\nfrom utils.torch_loader import Loader\n\n\ndef array2image(x):\n x *= 255\n x = x.detach().cpu().numpy()\n return x.astype('uint8').transpose((1, 2, 0))\n\ndef mask_op(mask):\n mask = mask.cuda()\n mask = mask[0][0]\n mask = np.unsqueeze(mask, 0)\n mask = np.unsqueeze(mask, 1)\n mask = mask.byte()\n return mask",
"_____no_output_____"
]
],
[
[
"## 模型定义",
"_____no_output_____"
]
],
[
[
"# 超参数设定\n## 固定参数\nepochs = 1000\ndisplay_freq = 200\nsave_epoch_freq = 1\n\n## 模型参数\nalpha = 1\nbeta = 0.2\n\n\nmodel_name = f'CSA-crop-{alpha}-{beta}'",
"_____no_output_____"
],
[
"base_opt = load_option('../options/base.toml')\nopt = load_option('../options/train-new.toml')\nopt.update(base_opt)\nopt.update({'name': model_name}) # 设定模型名称\nmodel = CSA(beta, **opt)\n\nimage_save_dir = model.save_dir / 'images'\nmkdir(image_save_dir)",
"initialize network with normal\ninitialize network with normal\ninitialize network with normal\ninitialize network with normal\n---------- Networks initialized -------------\nUnetGeneratorCSA(\n (model): UnetSkipConnectionBlock_3(\n (model): Sequential(\n (0): Conv2d(6, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (1): UnetSkipConnectionBlock_3(\n (model): Sequential(\n (0): LeakyReLU(negative_slope=0.2, inplace=True)\n (1): Conv2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(3, 3), dilation=(2, 2))\n (2): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (3): LeakyReLU(negative_slope=0.2, inplace=True)\n (4): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (5): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (6): UnetSkipConnectionBlock_3(\n (model): Sequential(\n (0): LeakyReLU(negative_slope=0.2, inplace=True)\n (1): Conv2d(128, 128, kernel_size=(4, 4), stride=(2, 2), padding=(3, 3), dilation=(2, 2))\n (2): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (3): LeakyReLU(negative_slope=0.2, inplace=True)\n (4): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (5): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (6): CSA(\n (model): Sequential(\n (0): LeakyReLU(negative_slope=0.2, inplace=True)\n (1): Conv2d(256, 256, kernel_size=(4, 4), stride=(2, 2), padding=(3, 3), dilation=(2, 2))\n (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (3): LeakyReLU(negative_slope=0.2, inplace=True)\n (4): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (5): CSA_model(threshold: 0.3125 ,triple_weight 1)\n (6): InnerCos(skip: True ,strength: 1)\n (7): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (8): UnetSkipConnectionBlock_3(\n (model): Sequential(\n (0): 
LeakyReLU(negative_slope=0.2, inplace=True)\n (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(3, 3), dilation=(2, 2))\n (2): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (3): LeakyReLU(negative_slope=0.2, inplace=True)\n (4): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (5): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (6): UnetSkipConnectionBlock_3(\n (model): Sequential(\n (0): LeakyReLU(negative_slope=0.2, inplace=True)\n (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(3, 3), dilation=(2, 2))\n (2): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (3): LeakyReLU(negative_slope=0.2, inplace=True)\n (4): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (5): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (6): UnetSkipConnectionBlock_3(\n (model): Sequential(\n (0): LeakyReLU(negative_slope=0.2, inplace=True)\n (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(3, 3), dilation=(2, 2))\n (2): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (3): LeakyReLU(negative_slope=0.2, inplace=True)\n (4): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (5): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (6): UnetSkipConnectionBlock_3(\n (model): Sequential(\n (0): LeakyReLU(negative_slope=0.2, inplace=True)\n (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(3, 3), dilation=(2, 2))\n (2): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (3): LeakyReLU(negative_slope=0.2, inplace=True)\n (4): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (5): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (6): 
UnetSkipConnectionBlock_3(\n (model): Sequential(\n (0): LeakyReLU(negative_slope=0.2, inplace=True)\n (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(3, 3), dilation=(2, 2))\n (2): ReLU(inplace=True)\n (3): ConvTranspose2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (4): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n )\n )\n (7): ReLU(inplace=True)\n (8): ConvTranspose2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (9): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (10): ReLU(inplace=True)\n (11): ConvTranspose2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (12): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n )\n )\n (7): ReLU(inplace=True)\n (8): ConvTranspose2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (9): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (10): ReLU(inplace=True)\n (11): ConvTranspose2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (12): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n )\n )\n (7): ReLU(inplace=True)\n (8): ConvTranspose2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (9): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (10): ReLU(inplace=True)\n (11): ConvTranspose2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (12): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n )\n )\n (7): ReLU(inplace=True)\n (8): ConvTranspose2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (9): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (10): ReLU(inplace=True)\n (11): ConvTranspose2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (12): InstanceNorm2d(512, 
eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n )\n )\n (9): InnerCos2(skip: True ,strength: 1)\n (10): ReLU(inplace=True)\n (11): ConvTranspose2d(1024, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (12): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (13): ReLU(inplace=True)\n (14): ConvTranspose2d(256, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (15): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n )\n )\n (7): ReLU(inplace=True)\n (8): ConvTranspose2d(512, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (9): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (10): ReLU(inplace=True)\n (11): ConvTranspose2d(128, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (12): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n )\n )\n (7): ReLU(inplace=True)\n (8): ConvTranspose2d(256, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (9): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (10): ReLU(inplace=True)\n (11): ConvTranspose2d(64, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (12): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n )\n )\n (2): ReLU(inplace=True)\n (3): ConvTranspose2d(128, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n )\n )\n)\nTotal number of parameters: 77692291\nUnetGenerator(\n (model): UnetSkipConnectionBlock(\n (model): Sequential(\n (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (1): UnetSkipConnectionBlock(\n (model): Sequential(\n (0): LeakyReLU(negative_slope=0.2, inplace=True)\n (1): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (2): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (3): UnetSkipConnectionBlock(\n (model): Sequential(\n (0): 
LeakyReLU(negative_slope=0.2, inplace=True)\n (1): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (3): UnetSkipConnectionBlock(\n (model): Sequential(\n (0): LeakyReLU(negative_slope=0.2, inplace=True)\n (1): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (2): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (3): UnetSkipConnectionBlock(\n (model): Sequential(\n (0): LeakyReLU(negative_slope=0.2, inplace=True)\n (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (2): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (3): UnetSkipConnectionBlock(\n (model): Sequential(\n (0): LeakyReLU(negative_slope=0.2, inplace=True)\n (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (2): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (3): UnetSkipConnectionBlock(\n (model): Sequential(\n (0): LeakyReLU(negative_slope=0.2, inplace=True)\n (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (2): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (3): UnetSkipConnectionBlock(\n (model): Sequential(\n (0): LeakyReLU(negative_slope=0.2, inplace=True)\n (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (2): ReLU(inplace=True)\n (3): ConvTranspose2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (4): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n )\n )\n (4): ReLU(inplace=True)\n (5): ConvTranspose2d(1024, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (6): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n )\n )\n (4): ReLU(inplace=True)\n (5): ConvTranspose2d(1024, 512, kernel_size=(4, 4), stride=(2, 2), 
padding=(1, 1))\n (6): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n )\n )\n (4): ReLU(inplace=True)\n (5): ConvTranspose2d(1024, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (6): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n )\n )\n (4): ReLU(inplace=True)\n (5): ConvTranspose2d(1024, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n )\n )\n (4): ReLU(inplace=True)\n (5): ConvTranspose2d(512, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (6): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n )\n )\n (4): ReLU(inplace=True)\n (5): ConvTranspose2d(256, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (6): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n )\n )\n (2): ReLU(inplace=True)\n (3): ConvTranspose2d(128, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (4): Tanh()\n )\n )\n)\nTotal number of parameters: 54419459\nNLayerDiscriminator(\n (model): Sequential(\n (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (1): LeakyReLU(negative_slope=0.2, inplace=True)\n (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (3): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (4): LeakyReLU(negative_slope=0.2, inplace=True)\n (5): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (7): LeakyReLU(negative_slope=0.2, inplace=True)\n (8): Conv2d(256, 512, kernel_size=(4, 4), stride=(1, 1), padding=(1, 1))\n (9): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)\n (10): LeakyReLU(negative_slope=0.2, inplace=True)\n (11): Conv2d(512, 1, kernel_size=(4, 4), 
stride=(1, 1), padding=(1, 1))\n )\n)\nTotal number of parameters: 2766529\nPFDiscriminator(\n (model): Sequential(\n (0): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (1): LeakyReLU(negative_slope=0.2, inplace=True)\n (2): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n (3): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (4): LeakyReLU(negative_slope=0.2, inplace=True)\n (5): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))\n )\n)\nTotal number of parameters: 10487296\n-----------------------------------------------\n"
],
[
"opt = init_dataset(200)\nloader = Loader(**opt)\ntrainset = loader.trainset # 训练集\nmaskset = loader.maskset # mask 数据集",
"{'E:/kaggle/datasets/building/╓╨╛░┤σ┬Σ╖τ├▓': 0, 'E:/kaggle/datasets/building/中景村落风貌': 809, 'E:/kaggle/datasets/building/航拍总图': 281, 'E:/kaggle/datasets/building/近景建筑风貌': 583, 'E:/kaggle/datasets/building/远景村落风貌': 1349}\n"
],
[
"# 训练阶段\nstart_epoch = 0\ntotal_steps = 0\niter_start_time = time.time()\nfor epoch in range(start_epoch, epochs):\n epoch_start_time = time.time()\n epoch_iter = 0\n for batch, mask in zip(trainset, maskset):\n image = batch[0]\n mask = mask_op(mask)\n total_steps += model.batch_size\n epoch_iter += model.batch_size\n # it not only sets the input data with mask, but also sets the latent mask.\n model.set_input(image, mask)\n model.set_gt_latent()\n model.optimize_parameters()\n if total_steps % display_freq == 0:\n real_A, real_B, fake_B = model.get_current_visuals()\n # real_A=input, real_B=ground truth fake_b=output\n pic = (np.cat([real_A, real_B, fake_B], dim=0) + 1) / 2.0\n image_name = f\"epoch{epoch}-{total_steps}-{alpha}.png\"\n save_image(pic, image_save_dir/image_name, ncol=1)\n if total_steps % 100 == 0:\n errors = model.get_current_errors()\n t = (time.time() - iter_start_time) / model.batch_size\n print(\n f\"Epoch/total_steps/alpha-beta: {epoch}/{total_steps}/{alpha}-{beta}\", dict(errors))\n if epoch % save_epoch_freq == 0:\n print(f'保存模型 Epoch {epoch}, iters {total_steps} 在 {model.save_dir}')\n model.save(epoch)\n print(\n f'Epoch/Epochs {epoch}/{epochs-1} 花费时间:{time.time() - epoch_start_time}s')\n model.update_learning_rate()",
"Epoch/total_steps/alpha-beta: 0/100/1-0.2 {'G_GAN': 5.518022537231445, 'G_L1': 55.588680267333984, 'D': 1.1141009330749512, 'F': 0.07483334094285965}\nEpoch/total_steps/alpha-beta: 0/200/1-0.2 {'G_GAN': 5.759289741516113, 'G_L1': 55.08604049682617, 'D': 0.6345841288566589, 'F': 0.04530204087495804}\nEpoch/total_steps/alpha-beta: 0/300/1-0.2 {'G_GAN': 6.937740325927734, 'G_L1': 59.467960357666016, 'D': 0.5065065026283264, 'F': 0.03301296383142471}\nEpoch/total_steps/alpha-beta: 0/400/1-0.2 {'G_GAN': 6.193876266479492, 'G_L1': 38.149436950683594, 'D': 1.1900465488433838, 'F': 0.05299185961484909}\nEpoch/total_steps/alpha-beta: 0/500/1-0.2 {'G_GAN': 7.556830406188965, 'G_L1': 39.45917892456055, 'D': 0.2415945678949356, 'F': 0.022815663367509842}\nEpoch/total_steps/alpha-beta: 0/600/1-0.2 {'G_GAN': 6.890033721923828, 'G_L1': 65.72925567626953, 'D': 0.5194847583770752, 'F': 0.026784880086779594}\nEpoch/total_steps/alpha-beta: 0/700/1-0.2 {'G_GAN': 7.203421592712402, 'G_L1': 55.98662567138672, 'D': 0.1983884871006012, 'F': 0.02233431488275528}\nEpoch/total_steps/alpha-beta: 0/800/1-0.2 {'G_GAN': 8.526432037353516, 'G_L1': 60.987518310546875, 'D': 0.08916746079921722, 'F': 0.02979384921491146}\n保存模型 Epoch 0, iters 800 在 ..\\result\\CSA-crop-1-0.2\nEpoch/Epochs 0/999 花费时间:2294.4748616218567s\nlearning rate = 0.0002\nEpoch/total_steps/alpha-beta: 1/900/1-0.2 {'G_GAN': 7.642114639282227, 'G_L1': 78.49203491210938, 'D': 0.12604835629463196, 'F': 0.017340093851089478}\nEpoch/total_steps/alpha-beta: 1/1000/1-0.2 {'G_GAN': 7.608176231384277, 'G_L1': 43.2374382019043, 'D': 0.14779514074325562, 'F': 0.015945453196763992}\nEpoch/total_steps/alpha-beta: 1/1100/1-0.2 {'G_GAN': 6.903014659881592, 'G_L1': 78.60047912597656, 'D': 0.4338143765926361, 'F': 0.0416969358921051}\nEpoch/total_steps/alpha-beta: 1/1200/1-0.2 {'G_GAN': 9.23589038848877, 'G_L1': 56.984107971191406, 'D': 0.5504209995269775, 'F': 0.018008030951023102}\nEpoch/total_steps/alpha-beta: 1/1300/1-0.2 {'G_GAN': 
6.421850204467773, 'G_L1': 61.57182312011719, 'D': 0.45247137546539307, 'F': 0.011279342696070671}
Epoch/total_steps/alpha-beta: 1/1400/1-0.2 {'G_GAN': 8.797816276550293, 'G_L1': 99.80023193359375, 'D': 0.23331935703754425, 'F': 0.020383726805448532}
Epoch/total_steps/alpha-beta: 1/1500/1-0.2 {'G_GAN': 6.7051191329956055, 'G_L1': 31.690582275390625, 'D': 0.2988111972808838, 'F': 0.028201300650835037}
Epoch/total_steps/alpha-beta: 1/1600/1-0.2 {'G_GAN': 7.0514116287231445, 'G_L1': 85.72132873535156, 'D': 0.1930747926235199, 'F': 0.013776193372905254}
Saved model Epoch 1, iters 1600 at ..\result\CSA-crop-1-0.2
Epoch/Epochs 1/999 elapsed time: 2240.1388833522797s
learning rate = 0.0002
[Training-log output continues in the same format through epoch 30: the four losses {'G_GAN', 'G_L1', 'D', 'F'} are logged every 100 steps, a checkpoint is saved every 800 iters to ..\result\CSA-crop-1-0.2, and each epoch takes roughly 2240 s. The learning rate stays at 0.0002 through epoch 18 and then decays each epoch (0.0001980198, 0.0001960396, ..., 0.0001762376 before epoch 30). The captured output breaks off mid-entry at epoch 30, step 24500.]
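As an aside, each per-step entry in the log above follows a fixed pattern (`Epoch/total_steps/alpha-beta: E/S/A-B {loss dict}`), so it can be machine-parsed. A minimal sketch in Python, assuming only the format shown above (the function name `parse_log_line` and the sample string are illustrative, not part of the training code):

```python
import ast
import re

# Matches: "Epoch/total_steps/alpha-beta: <epoch>/<steps>/<alpha>-<beta> {<loss dict>}"
_LOG_RE = re.compile(
    r"Epoch/total_steps/alpha-beta:\s*(\d+)/(\d+)/([\d.]+)-([\d.]+)\s*(\{.*\})"
)

def parse_log_line(line):
    """Parse one per-step log entry into (epoch, steps, alpha, beta, losses).

    Returns None for lines in another format (e.g. checkpoint or
    learning-rate messages).
    """
    m = _LOG_RE.match(line)
    if m is None:
        return None
    epoch, steps = int(m.group(1)), int(m.group(2))
    alpha, beta = float(m.group(3)), float(m.group(4))
    # The loss dict is valid Python literal syntax, so literal_eval is safe here.
    losses = ast.literal_eval(m.group(5))
    return epoch, steps, alpha, beta, losses

sample = (
    "Epoch/total_steps/alpha-beta: 1/1400/1-0.2 "
    "{'G_GAN': 8.797816276550293, 'G_L1': 99.80023193359375, "
    "'D': 0.23331935703754425, 'F': 0.020383726805448532}"
)
record = parse_log_line(sample)
```

Applied line by line over the captured output, this yields a loss curve per key ('G_GAN', 'G_L1', 'D', 'F') that is easier to inspect than the raw dump.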
33.35374069213867, 'D': 0.03885803371667862, 'F': 0.004575077444314957}\nEpoch/total_steps/alpha-beta: 27/22300/1-0.2 {'G_GAN': 7.3414835929870605, 'G_L1': 55.86278533935547, 'D': 0.06007916480302811, 'F': 0.002228633500635624}\nEpoch/total_steps/alpha-beta: 27/22400/1-0.2 {'G_GAN': 8.329330444335938, 'G_L1': 40.72560501098633, 'D': 0.017307186499238014, 'F': 0.002368675544857979}\n保存模型 Epoch 27, iters 22400 在 ..\\result\\CSA-crop-1-0.2\nEpoch/Epochs 27/999 花费时间:2245.424254179001s\nlearning rate = 0.000180198\nEpoch/total_steps/alpha-beta: 28/22500/1-0.2 {'G_GAN': 8.699195861816406, 'G_L1': 66.82012939453125, 'D': 0.058796510100364685, 'F': 0.0025405113119632006}\nEpoch/total_steps/alpha-beta: 28/22600/1-0.2 {'G_GAN': 7.9345173835754395, 'G_L1': 78.65081787109375, 'D': 0.018228482455015182, 'F': 0.002032086020335555}\nEpoch/total_steps/alpha-beta: 28/22700/1-0.2 {'G_GAN': 7.631791591644287, 'G_L1': 55.65678405761719, 'D': 0.030407560989260674, 'F': 0.0020625197794288397}\nEpoch/total_steps/alpha-beta: 28/22800/1-0.2 {'G_GAN': 7.669142246246338, 'G_L1': 48.89533615112305, 'D': 0.024406706914305687, 'F': 0.0022532606963068247}\nEpoch/total_steps/alpha-beta: 28/22900/1-0.2 {'G_GAN': 5.767544269561768, 'G_L1': 23.03849220275879, 'D': 0.5869188904762268, 'F': 0.003424305934458971}\nEpoch/total_steps/alpha-beta: 28/23000/1-0.2 {'G_GAN': 8.303125381469727, 'G_L1': 34.163665771484375, 'D': 0.029416248202323914, 'F': 0.0018054952379316092}\nEpoch/total_steps/alpha-beta: 28/23100/1-0.2 {'G_GAN': 8.158636093139648, 'G_L1': 74.27262878417969, 'D': 0.01963939517736435, 'F': 0.0036374139599502087}\nEpoch/total_steps/alpha-beta: 28/23200/1-0.2 {'G_GAN': 7.551119327545166, 'G_L1': 25.352590560913086, 'D': 0.035640645772218704, 'F': 0.001792275346815586}\n保存模型 Epoch 28, iters 23200 在 ..\\result\\CSA-crop-1-0.2\nEpoch/Epochs 28/999 花费时间:2243.9498064517975s\nlearning rate = 0.0001782178\nEpoch/total_steps/alpha-beta: 29/23300/1-0.2 {'G_GAN': 8.161771774291992, 'G_L1': 
28.111188888549805, 'D': 0.007250278256833553, 'F': 0.0030810129828751087}\nEpoch/total_steps/alpha-beta: 29/23400/1-0.2 {'G_GAN': 7.662878036499023, 'G_L1': 29.13936424255371, 'D': 0.03483784198760986, 'F': 0.0036559763830155134}\nEpoch/total_steps/alpha-beta: 29/23500/1-0.2 {'G_GAN': 8.416187286376953, 'G_L1': 22.083412170410156, 'D': 0.015447132289409637, 'F': 0.0018566425424069166}\nEpoch/total_steps/alpha-beta: 29/23600/1-0.2 {'G_GAN': 6.429597854614258, 'G_L1': 30.08060073852539, 'D': 0.2464236170053482, 'F': 0.002432726789265871}\nEpoch/total_steps/alpha-beta: 29/23700/1-0.2 {'G_GAN': 8.03027057647705, 'G_L1': 18.990314483642578, 'D': 0.005123190116137266, 'F': 0.0017168023623526096}\nEpoch/total_steps/alpha-beta: 29/23800/1-0.2 {'G_GAN': 7.746734619140625, 'G_L1': 45.38220977783203, 'D': 0.020565040409564972, 'F': 0.001739036408253014}\nEpoch/total_steps/alpha-beta: 29/23900/1-0.2 {'G_GAN': 8.145263671875, 'G_L1': 22.485469818115234, 'D': 0.008120417594909668, 'F': 0.002415680792182684}\nEpoch/total_steps/alpha-beta: 29/24000/1-0.2 {'G_GAN': 6.887392044067383, 'G_L1': 72.38180541992188, 'D': 0.15967223048210144, 'F': 0.002369710709899664}\n保存模型 Epoch 29, iters 24000 在 ..\\result\\CSA-crop-1-0.2\nEpoch/Epochs 29/999 花费时间:2241.8910686969757s\nlearning rate = 0.0001762376\nEpoch/total_steps/alpha-beta: 30/24100/1-0.2 {'G_GAN': 8.455358505249023, 'G_L1': 44.419620513916016, 'D': 0.057264987379312515, 'F': 0.004567265976220369}\nEpoch/total_steps/alpha-beta: 30/24200/1-0.2 {'G_GAN': 8.731988906860352, 'G_L1': 28.390640258789062, 'D': 0.06632992625236511, 'F': 0.0016776493284851313}\nEpoch/total_steps/alpha-beta: 30/24300/1-0.2 {'G_GAN': 7.621546268463135, 'G_L1': 39.21030807495117, 'D': 0.025209009647369385, 'F': 0.004008825868368149}\nEpoch/total_steps/alpha-beta: 30/24400/1-0.2 {'G_GAN': 8.586206436157227, 'G_L1': 48.30557632446289, 'D': 0.09471330046653748, 'F': 0.0015354871284216642}\nEpoch/total_steps/alpha-beta: 30/24500/1-0.2 {'G_GAN': 7.417546272277832, 
'G_L1': 44.19021224975586, 'D': 0.03500431030988693, 'F': 0.0031865204218775034}\nEpoch/total_steps/alpha-beta: 30/24600/1-0.2 {'G_GAN': 7.757794380187988, 'G_L1': 40.792240142822266, 'D': 0.012213332578539848, 'F': 0.008470403030514717}\nEpoch/total_steps/alpha-beta: 30/24700/1-0.2 {'G_GAN': 7.540193557739258, 'G_L1': 21.742013931274414, 'D': 0.019589561969041824, 'F': 0.002744878875091672}\nEpoch/total_steps/alpha-beta: 30/24800/1-0.2 {'G_GAN': 7.719584941864014, 'G_L1': 38.571990966796875, 'D': 0.027254346758127213, 'F': 0.005582602694630623}\n保存模型 Epoch 30, iters 24800 在 ..\\result\\CSA-crop-1-0.2\nEpoch/Epochs 30/999 花费时间:2240.4953978061676s\nlearning rate = 0.0001742574\nEpoch/total_steps/alpha-beta: 31/24900/1-0.2 {'G_GAN': 8.017459869384766, 'G_L1': 22.96689224243164, 'D': 0.003309932304546237, 'F': 0.0016904742224141955}\nEpoch/total_steps/alpha-beta: 31/25000/1-0.2 {'G_GAN': 8.22296142578125, 'G_L1': 43.51144790649414, 'D': 0.024556394666433334, 'F': 0.0029970533214509487}\nEpoch/total_steps/alpha-beta: 31/25100/1-0.2 {'G_GAN': 8.202264785766602, 'G_L1': 22.760787963867188, 'D': 0.011181453242897987, 'F': 0.0013762610033154488}\nEpoch/total_steps/alpha-beta: 31/25200/1-0.2 {'G_GAN': 7.737740516662598, 'G_L1': 29.908117294311523, 'D': 0.011087203398346901, 'F': 0.0018069373909384012}\nEpoch/total_steps/alpha-beta: 31/25300/1-0.2 {'G_GAN': 7.607811450958252, 'G_L1': 27.030330657958984, 'D': 0.02061172015964985, 'F': 0.002197978086769581}\nEpoch/total_steps/alpha-beta: 31/25400/1-0.2 {'G_GAN': 7.859828948974609, 'G_L1': 23.269733428955078, 'D': 0.016604989767074585, 'F': 0.0020086471922695637}\nEpoch/total_steps/alpha-beta: 31/25500/1-0.2 {'G_GAN': 7.176191329956055, 'G_L1': 25.43027114868164, 'D': 0.09436474740505219, 'F': 0.003587976098060608}\nEpoch/total_steps/alpha-beta: 31/25600/1-0.2 {'G_GAN': 7.882669448852539, 'G_L1': 26.447750091552734, 'D': 0.007880685850977898, 'F': 0.003998642787337303}\n保存模型 Epoch 31, iters 25600 在 
..\\result\\CSA-crop-1-0.2\nEpoch/Epochs 31/999 花费时间:2247.6270802021027s\nlearning rate = 0.0001722772\nEpoch/total_steps/alpha-beta: 32/25700/1-0.2 {'G_GAN': 7.37361478805542, 'G_L1': 50.86867141723633, 'D': 0.052151940762996674, 'F': 0.0018901193980127573}\nEpoch/total_steps/alpha-beta: 32/25800/1-0.2 {'G_GAN': 6.746369361877441, 'G_L1': 60.299827575683594, 'D': 0.1564328819513321, 'F': 0.0020164239685982466}\nEpoch/total_steps/alpha-beta: 32/25900/1-0.2 {'G_GAN': 7.336899757385254, 'G_L1': 54.357784271240234, 'D': 0.058892495930194855, 'F': 0.003617411945015192}\nEpoch/total_steps/alpha-beta: 32/26000/1-0.2 {'G_GAN': 8.493104934692383, 'G_L1': 23.42197608947754, 'D': 0.027886860072612762, 'F': 0.0010997793870046735}\nEpoch/total_steps/alpha-beta: 32/26100/1-0.2 {'G_GAN': 6.479972839355469, 'G_L1': 25.710529327392578, 'D': 0.26717549562454224, 'F': 0.004488482140004635}\nEpoch/total_steps/alpha-beta: 32/26200/1-0.2 {'G_GAN': 8.181245803833008, 'G_L1': 20.100704193115234, 'D': 0.018170606344938278, 'F': 0.004974279552698135}\nEpoch/total_steps/alpha-beta: 32/26300/1-0.2 {'G_GAN': 5.9872870445251465, 'G_L1': 82.3907470703125, 'D': 0.5010002255439758, 'F': 0.0023362948559224606}\nEpoch/total_steps/alpha-beta: 32/26400/1-0.2 {'G_GAN': 8.375683784484863, 'G_L1': 23.043020248413086, 'D': 0.01623411849141121, 'F': 0.000978397554717958}\n保存模型 Epoch 32, iters 26400 在 ..\\result\\CSA-crop-1-0.2\nEpoch/Epochs 32/999 花费时间:2275.732986688614s\nlearning rate = 0.000170297\nEpoch/total_steps/alpha-beta: 33/26500/1-0.2 {'G_GAN': 7.9573869705200195, 'G_L1': 35.2063102722168, 'D': 0.0062308465130627155, 'F': 0.0012718993239104748}\nEpoch/total_steps/alpha-beta: 33/26600/1-0.2 {'G_GAN': 6.878335952758789, 'G_L1': 44.3088264465332, 'D': 0.15189921855926514, 'F': 0.0019176076166331768}\nEpoch/total_steps/alpha-beta: 33/26700/1-0.2 {'G_GAN': 7.759462356567383, 'G_L1': 20.847213745117188, 'D': 0.009537670761346817, 'F': 0.0025901412591338158}\nEpoch/total_steps/alpha-beta: 
33/26800/1-0.2 {'G_GAN': 6.837032794952393, 'G_L1': 65.01419830322266, 'D': 0.16352999210357666, 'F': 0.0015054140239953995}\nEpoch/total_steps/alpha-beta: 33/26900/1-0.2 {'G_GAN': 7.362067222595215, 'G_L1': 71.94444274902344, 'D': 0.06963425874710083, 'F': 0.0019400737946853042}\nEpoch/total_steps/alpha-beta: 33/27000/1-0.2 {'G_GAN': 7.604213714599609, 'G_L1': 58.557857513427734, 'D': 0.043337590992450714, 'F': 0.0017693773843348026}\nEpoch/total_steps/alpha-beta: 33/27100/1-0.2 {'G_GAN': 8.388092994689941, 'G_L1': 66.79051971435547, 'D': 0.04460863023996353, 'F': 0.002756285248324275}\nEpoch/total_steps/alpha-beta: 33/27200/1-0.2 {'G_GAN': 7.951116561889648, 'G_L1': 26.81524658203125, 'D': 0.024190768599510193, 'F': 0.001771372277289629}\n保存模型 Epoch 33, iters 27200 在 ..\\result\\CSA-crop-1-0.2\nEpoch/Epochs 33/999 花费时间:2303.8208458423615s\nlearning rate = 0.0001683168\nEpoch/total_steps/alpha-beta: 34/27300/1-0.2 {'G_GAN': 7.875994682312012, 'G_L1': 25.477964401245117, 'D': 0.006411257199943066, 'F': 0.00386531138792634}\nEpoch/total_steps/alpha-beta: 34/27400/1-0.2 {'G_GAN': 7.213046550750732, 'G_L1': 43.94091796875, 'D': 0.07668862491846085, 'F': 0.0018929485231637955}\nEpoch/total_steps/alpha-beta: 34/27500/1-0.2 {'G_GAN': 7.252318382263184, 'G_L1': 30.42011833190918, 'D': 0.05034797266125679, 'F': 0.001807230757549405}\nEpoch/total_steps/alpha-beta: 34/27600/1-0.2 {'G_GAN': 8.24752426147461, 'G_L1': 58.524600982666016, 'D': 0.018284127116203308, 'F': 0.0017263232730329037}\nEpoch/total_steps/alpha-beta: 34/27700/1-0.2 {'G_GAN': 8.065179824829102, 'G_L1': 25.04204559326172, 'D': 0.007939932867884636, 'F': 0.0018257210031151772}\nEpoch/total_steps/alpha-beta: 34/27800/1-0.2 {'G_GAN': 8.115760803222656, 'G_L1': 34.73466873168945, 'D': 0.009024995379149914, 'F': 0.002416804898530245}\nEpoch/total_steps/alpha-beta: 34/27900/1-0.2 {'G_GAN': 8.071806907653809, 'G_L1': 23.847681045532227, 'D': 0.021876957267522812, 'F': 
0.0038563059642910957}\nEpoch/total_steps/alpha-beta: 34/28000/1-0.2 {'G_GAN': 8.148441314697266, 'G_L1': 29.521400451660156, 'D': 0.00792113970965147, 'F': 0.002031065756455064}\n保存模型 Epoch 34, iters 28000 在 ..\\result\\CSA-crop-1-0.2\nEpoch/Epochs 34/999 花费时间:2264.706836462021s\nlearning rate = 0.0001663366\nEpoch/total_steps/alpha-beta: 35/28100/1-0.2 {'G_GAN': 9.286863327026367, 'G_L1': 21.131092071533203, 'D': 0.13842421770095825, 'F': 0.0018789139576256275}\nEpoch/total_steps/alpha-beta: 35/28200/1-0.2 {'G_GAN': 8.505218505859375, 'G_L1': 37.81462478637695, 'D': 0.058449480682611465, 'F': 0.0013020221376791596}\nEpoch/total_steps/alpha-beta: 35/28300/1-0.2 {'G_GAN': 7.629211902618408, 'G_L1': 51.59615707397461, 'D': 0.026820950210094452, 'F': 0.0018706825794652104}\nEpoch/total_steps/alpha-beta: 35/28400/1-0.2 {'G_GAN': 5.966914176940918, 'G_L1': 63.73518753051758, 'D': 0.4677260220050812, 'F': 0.0019567771814763546}\nEpoch/total_steps/alpha-beta: 35/28500/1-0.2 {'G_GAN': 7.657004356384277, 'G_L1': 30.41579818725586, 'D': 0.02877647802233696, 'F': 0.0012610526755452156}\nEpoch/total_steps/alpha-beta: 35/28600/1-0.2 {'G_GAN': 7.948775291442871, 'G_L1': 27.328908920288086, 'D': 0.004160258453339338, 'F': 0.001692614401690662}\nEpoch/total_steps/alpha-beta: 35/28700/1-0.2 {'G_GAN': 5.842987060546875, 'G_L1': 77.45301818847656, 'D': 0.7266921997070312, 'F': 0.0047453660517930984}\nEpoch/total_steps/alpha-beta: 35/28800/1-0.2 {'G_GAN': 7.69786262512207, 'G_L1': 90.14388275146484, 'D': 0.02010192908346653, 'F': 0.002731677610427141}\n保存模型 Epoch 35, iters 28800 在 ..\\result\\CSA-crop-1-0.2\nEpoch/Epochs 35/999 花费时间:2252.1114785671234s\nlearning rate = 0.0001643564\nEpoch/total_steps/alpha-beta: 36/28900/1-0.2 {'G_GAN': 8.16215991973877, 'G_L1': 60.4095573425293, 'D': 0.01852986589074135, 'F': 0.0020778735633939505}\nEpoch/total_steps/alpha-beta: 36/29000/1-0.2 {'G_GAN': 7.182176113128662, 'G_L1': 41.9015007019043, 'D': 0.0819399282336235, 'F': 
0.0024390127509832382}\nEpoch/total_steps/alpha-beta: 36/29100/1-0.2 {'G_GAN': 7.702878952026367, 'G_L1': 28.158533096313477, 'D': 0.01701553910970688, 'F': 0.0028477925807237625}\nEpoch/total_steps/alpha-beta: 36/29200/1-0.2 {'G_GAN': 8.494702339172363, 'G_L1': 43.88920211791992, 'D': 0.032209232449531555, 'F': 0.0017581216525286436}\nEpoch/total_steps/alpha-beta: 36/29300/1-0.2 {'G_GAN': 9.025153160095215, 'G_L1': 58.69982147216797, 'D': 0.1241658627986908, 'F': 0.002193690277636051}\nEpoch/total_steps/alpha-beta: 36/29400/1-0.2 {'G_GAN': 8.13463306427002, 'G_L1': 33.57855224609375, 'D': 0.00461883470416069, 'F': 0.001785267610102892}\nEpoch/total_steps/alpha-beta: 36/29500/1-0.2 {'G_GAN': 7.844721794128418, 'G_L1': 37.07835388183594, 'D': 0.0057810829021036625, 'F': 0.00427527679130435}\nEpoch/total_steps/alpha-beta: 36/29600/1-0.2 {'G_GAN': 7.794288158416748, 'G_L1': 55.677433013916016, 'D': 0.011163191869854927, 'F': 0.0032508838921785355}\n保存模型 Epoch 36, iters 29600 在 ..\\result\\CSA-crop-1-0.2\nEpoch/Epochs 36/999 花费时间:2297.8845319747925s\nlearning rate = 0.0001623762\nEpoch/total_steps/alpha-beta: 37/29700/1-0.2 {'G_GAN': 7.7387566566467285, 'G_L1': 38.53538131713867, 'D': 0.011235179379582405, 'F': 0.0015433173393830657}\nEpoch/total_steps/alpha-beta: 37/29800/1-0.2 {'G_GAN': 8.627163887023926, 'G_L1': 26.8377742767334, 'D': 0.0671268180012703, 'F': 0.001791026326827705}\nEpoch/total_steps/alpha-beta: 37/29900/1-0.2 {'G_GAN': 7.626511573791504, 'G_L1': 60.78753662109375, 'D': 0.10778381675481796, 'F': 0.0018895952962338924}\nEpoch/total_steps/alpha-beta: 37/30000/1-0.2 {'G_GAN': 7.478549003601074, 'G_L1': 17.54232406616211, 'D': 0.030389655381441116, 'F': 0.001302587566897273}\nEpoch/total_steps/alpha-beta: 37/30100/1-0.2 {'G_GAN': 8.228598594665527, 'G_L1': 60.54544448852539, 'D': 0.013345221057534218, 'F': 0.001516076736152172}\nEpoch/total_steps/alpha-beta: 37/30200/1-0.2 {'G_GAN': 7.970338344573975, 'G_L1': 45.44022750854492, 'D': 0.012497944757342339, 
'F': 0.0013308614725247025}\nEpoch/total_steps/alpha-beta: 37/30300/1-0.2 {'G_GAN': 7.989448547363281, 'G_L1': 20.279512405395508, 'D': 0.0060158371925354, 'F': 0.001658424735069275}\nEpoch/total_steps/alpha-beta: 37/30400/1-0.2 {'G_GAN': 7.671058654785156, 'G_L1': 25.4240779876709, 'D': 0.016341283917427063, 'F': 0.0012650436256080866}\n保存模型 Epoch 37, iters 30400 在 ..\\result\\CSA-crop-1-0.2\nEpoch/Epochs 37/999 花费时间:2241.747955560684s\nlearning rate = 0.000160396\nEpoch/total_steps/alpha-beta: 38/30500/1-0.2 {'G_GAN': 8.126453399658203, 'G_L1': 31.337350845336914, 'D': 0.006814325228333473, 'F': 0.0010628155432641506}\nEpoch/total_steps/alpha-beta: 38/30600/1-0.2 {'G_GAN': 8.656627655029297, 'G_L1': 23.81717872619629, 'D': 0.05111401155591011, 'F': 0.0021094484254717827}\nEpoch/total_steps/alpha-beta: 38/30700/1-0.2 {'G_GAN': 8.484375953674316, 'G_L1': 27.377342224121094, 'D': 0.016362421214580536, 'F': 0.005972051061689854}\n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e7553aa0d6d6198bffea2b99b230f6fbd7a6ecc4 | 72,909 | ipynb | Jupyter Notebook | 01 Getting Started with PCSE.ipynb | kkj154393476/pcse_notebooks | 112baacb3a5f45ffe854615f9989afd7493e40b4 | [
"MIT"
] | null | null | null | 01 Getting Started with PCSE.ipynb | kkj154393476/pcse_notebooks | 112baacb3a5f45ffe854615f9989afd7493e40b4 | [
"MIT"
] | null | null | null | 01 Getting Started with PCSE.ipynb | kkj154393476/pcse_notebooks | 112baacb3a5f45ffe854615f9989afd7493e40b4 | [
"MIT"
] | null | null | null | 150.950311 | 55,468 | 0.877669 | [
[
[
"<img style=\"float: right;\" src=\"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAOIAAAAjCAYAAACJpNbGAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAABR0RVh0Q3JlYXRpb24gVGltZQAzLzcvMTNND4u/AAAAHHRFWHRTb2Z0d2FyZQBBZG9iZSBGaXJld29ya3MgQ1M26LyyjAAACMFJREFUeJztnD1y20gWgD+6nJtzAsPhRqKL3AwqwQdYDpXDZfoEppNNTaWbmD7BUEXmI3EPMFCR2YI1UDQpdAPqBNzgvRZA/BGUZEnk9FeFIgj0z2ugX7/XP+jGer2mLv/8b6d+4Efgf/8KG0+Zn8XyXLx+bgEslqegcfzxSY3Irrx6bgEsFssBWsRGowGufwHAYtq7u+H6fUCOxTTWax4wBAbr+SRqNDKesOv3gN/133sW0yh927j1mucIaFWINl7PJ+OcvMcfW8Bol3iN44+mLIOsTCp3UJFfAETr+WRQcG8EOJpunEnTyDlYzycbeWr5xxq3jOF6PglK8ix9buv5xCsrAzBkMV1l5OwD/aJ4BXzV3+8F9z4gz/hTSbz8cxc84FuNvDc4VIsYA7+qohmGwAnycA194G22YqUYlZxv4vpN4AuwBv4oON5m8k3TVLnK4sYFcRyN86dWvCwnlCvFCeUVvwX8CkSZZ5eWs5mLJWE/VZThBMgpfirPk5J4f1SU4QsQ6LNP4+j9OkSUKdRiGlD87CWe3PcyR5PFdAhc1cz/joOziMoIeVF95GX1EGVY6bWhvsAeZQrm+kON80PDneD6PRbTi4LQpmJfsZieFaR1qXlXURh3y2BaBPyG63sspv0t6e+CKJTrf2YxHe8Qr6z8AXBdGbMoHgCTshgr4AiItfxljenPJGv5roCi+rGVw1TExTTWl99ThRsglfYHUnF7SMv+Bhjn4idxbhFLGiAu6gjXD3LuUBF5VzWi3CoAfMP1kxe7mNYZMT5DLFgf13eAXi3ZtvMOsUb3V3J5/mmqy+/66RbnTC1LFdfIu/kd8Qx2bTQeg2GBTPfiUF1TgHNE0QaIq/JDX9RKr/WBy/V8EhfEHWncWMO2EKV8S7UypYnYdE2r+o8gyj5MHXVYsZh+JnG7A+3LPQxR5g9II/UJ148ockmrybqm2+Qapo6gppwB8J7EM6jqaz8u0lhfkXgB58BKPam6rvEdh2kRARbTMa7/HXEfVqnW8hxxWwE+5+JJRTYd9CM90gxw/XFuMKMo/yTNDzUkLnbr6rCYnuH6N8igQ3CvNPJproDPuH6MKMd4Z5kMUjnrh98tn1if72/Ie729Vzq708L0YV3/HGmgB4iHsjOProhhd1lrEr4zaz/FvM4lolTnqWum/6jKmeuDmFb1jHylNg96hPQbhcU0wPVBXESvQI4W5aNshsK4jeOPhSOcOaThMVb48dhU8m2UlR+29ZHzrqyhLL0EaTROteGt67EYIsT6F1HXC/ikcvS00dl51PRwLaIwQtzCxGWRFnRMkT8v/SyAy8I+iliHJtDUsHHq7imipE42GtJanxdcB6mgQcm9MmKNs1m5F9MI13+n+cXZSEpAeV8mQgZqNkmU/HsuT7kf4PrGhXcK0h1SXv7iPKsJKCrDYvoV17+meMqhiDFlll7GEb4U3iseAf+k7mqksmU9qUoaj73E7TEtol3iZnks7Moai8WylUN3TS0WANbzyYv2rqxFtFheANYi7iGNRoPOrO2QGTQIu8vhU8vSmbWNDAHQD7vLYWfWbgFx2F3ee3FBZ9ZuIgMpTWAQdpeRXm9pPoPOrD3UMCtkQM4BRmF3ubG6ZZdxkOfCWsT9pU96CuX56KfOjeIFVC8Ar8NI0xuyOQJsVkWl8xzptQGPNY/6xFiLuL+0gIu0FVTrNESmbK7C7tLrzNpmPW0EeGF32UyFN19UnCAT4ZHGWWnYqDNrB4jViZ
BK/kbD9sLuMiBZSD8AVp1Z+0LD/NmZta+BIzOS3pm1xwBhd9kvkeEGUbQeqSmIdHhkXnGs5fIQRUxPV1x0Zm2zMuoq7C69rU/yBWAt4v7iAd86s/ZaDweZP+wBvwBOZ9b2SCrrmPzk+AWizA09j1QxMK4gZumcWKUWMvkdA56mfxN2l7GmHWk6V2F32Qi7yxaIsmnYHvkJ9zEQqAwBotQXwK2m0c+EN/Kk8zPTZiOkIWrp/xNTnpeOtYh7iFauN+k5W+0vXab6UsbyecAw229SxWiG3aVZ7NBCKrGHuneazy2iyBeIuxkjk9UDE1bzOtJ4IzbdwysNN0D6dnf9Rk3/iKSBWOnhUbASSWW+DbvLWM+HKreZ3O/r77gza5u842w6LxFrEfcTj+Jv3mK4q7Co63hE+fI6E94hUaT0cry+XushSuvoNZO2CdsCrlXJHDYVMUIUJso2BmhfL+wuV6rMvVR6AXnS1428XupaE7Hwnrqkg4cMGD0lr3NfpVegrUw1m2sN0+crNirEX1uTqiPbPoyI/QSKKmqA9I9aer+fcR2zxIj7GiMV+EYVIkZc3r5eH2rYI+0vnpBYIE/vGwUCdYM7s3agbqXJu58VIOwug86sfd2ZtSPNKwi7S9PHy4UnscCmXKuUZQRdsqbPwCHp2754pKYnW0akcZBO/x2df29XnvA//6iV8T3TSluBmOQlR+v5JNvaHixlDZRalRZifbZaAg3vIIrkmP6YVu6owI1M9x2r0vVIFCBGXNLS96Ph45IGY2ey6e1DY20UMaLGItUXoIhVvCv5tvDg2MWLqYNaoKBKWe6Z7gBR8OwAzZOyD4poBmtidlwt/gIxw/QHz0+oWKIoj19fRz8p3YOjoV8195F5l31ltZ5PfnluISyW+/IK6SPstRIiH/FaLHvLa2R+6F6f978AVsD7v0vf0HK4vNK9VfbVojSBceP4o/PcglgsD8GMmjaRbRCc1PEQIrbv45nlIfleIrs778XkrcWSZXMcXPZyqbvfxy7ckuyqHJPslJzH9c3We2ZRbx1O/07ziJbDI1FE2Qwp4n4DNzHJhkZF16+3bnwrCmi40U2eWoj7KZvobn7+YtKO1vPJVyyWPSZrER1kNU0TqfienpvlaWZR7oX+3tba6lxcX7MK3tNfo2RlpNc8tthsIFbAKYtpsA+TtRbLNp5/H4/EFXX0MOfbOGUxvbCKaDkEnl8Rq0jc1ayFjhFFjKwiWg6B/wNk+JCXXNBIXQAAAABJRU5ErkJggg==\">\n\n",
"_____no_output_____"
],
[
"# Getting Started with PCSE/WOFOST\n\nThis Jupyter notebook will introduce PCSE and explain the basics of running models with PCSE, taking WOFOST as an example.\n\nAllard de Wit, March 2018\n\n**Prerequisites for running this notebook**\n\nSeveral packages need to be installed for running PCSE/WOFOST:\n\n 1. `PCSE` and its dependencies. See the [PCSE user guide](http://pcse.readthedocs.io/en/stable/installing.html) for more information;\n 2. `pandas` for processing and storing WOFOST output;\n 3. `matplotlib` for generating charts\n",
"_____no_output_____"
],
[
"## Importing the relevant modules\n",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport sys, os\nimport pcse\nimport pandas\nimport matplotlib\nmatplotlib.style.use(\"ggplot\")\nimport matplotlib.pyplot as plt\nprint(\"This notebook was built with:\")\nprint(\"python version: %s \" % sys.version)\nprint(\"PCSE version: %s\" % pcse.__version__)",
"This notebook was built with:\npython version: 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)] \nPCSE version: 5.4.2\n"
]
],
[
[
"## Starting from the internal demo database\nFor demonstration purposes, we can start WOFOST with a single function call. This function reads all relevant data from the internal demo databases. In the next notebook we will demonstrate how to read data from external sources.\n\nThe command below starts WOFOST in potential production mode for winter-wheat for a location in Southern Spain.",
"_____no_output_____"
]
],
[
[
"wofostPP = pcse.start_wofost(mode=\"pp\")",
"_____no_output_____"
]
],
[
[
"You have just successfully initialized a PCSE/WOFOST object in the Python interpreter, which is in its initial state and waiting to do some simulation. We can now advance the model state, for example, by 1 day:\n",
"_____no_output_____"
]
],
[
[
"wofostPP.run()",
"_____no_output_____"
]
],
[
[
"Advancing the crop simulation by only 1 day is often not so useful, so the number of days to simulate can be specified as well:",
"_____no_output_____"
]
],
[
[
"wofostPP.run(days=10)",
"_____no_output_____"
]
],
[
[
"## Getting information about state and rate variables\nRetrieving information about the calculated model states or rates can be done with the `get_variable()` method on a PCSE object. For example, to retrieve the leaf area index value in the current model state you can do:",
"_____no_output_____"
]
],
[
[
"wofostPP.get_variable(\"LAI\")",
"_____no_output_____"
],
[
"wofostPP.run(days=25)\nwofostPP.get_variable(\"LAI\")",
"_____no_output_____"
]
],
[
[
"This shows that after 11 days the LAI value is 0.287. When we advance time by another 25 days, the LAI increases to 1.528. The `get_variable()` method can retrieve any state or rate variable that is defined somewhere in the model. \n\nFinally, we can finish the crop season by letting it run until the model terminates because the crop reaches maturity or the harvest date:",
"_____no_output_____"
]
],
[
[
"wofostPP.run_till_terminate()",
"_____no_output_____"
]
],
[
[
"Note that before or after the crop cycle, the object representing the crop does not exist and therefore retrieving a crop-related variable results in a `None` value. Of course the simulation results are stored and can be obtained; see the next section.",
"_____no_output_____"
]
],
[
[
"print(wofostPP.get_variable(\"LAI\"))",
"None\n"
]
],
[
[
"## Retrieving and displaying WOFOST output\nWe can retrieve the results of the simulation at each time step using `get_output()`. In Python terms this returns a list of dictionaries, one dictionary for each time step of the simulation. Each dictionary contains the key:value pairs of the state or rate variables that were stored at that time step.\n\n",
"_____no_output_____"
]
],
[
[
"output = wofostPP.get_output()",
"_____no_output_____"
]
],
[
[
"The most convenient way to handle the output from WOFOST is to use the `pandas` module to convert it into a dataframe. Pandas DataFrames can be converted to a variety of formats including Excel, CSV or database tables.",
"_____no_output_____"
]
],
[
[
"dfPP = pandas.DataFrame(output).set_index(\"day\")\ndfPP.tail()",
"_____no_output_____"
]
],
[
[
"Besides the output at each time step, WOFOST also provides summary output which summarizes the crop cycle and provides you with the total crop biomass, total yield, maximum LAI and other variables. In case of crop rotations, the summary output will consist of several sets of variables, one for each crop cycle.",
"_____no_output_____"
]
],
[
[
"summary_output = wofostPP.get_summary_output()\nmsg = \"Reached maturity at {DOM} with total biomass {TAGP:.1f} kg/ha, \" \\\n \"a yield of {TWSO:.1f} kg/ha with a maximum LAI of {LAIMAX:.2f}.\"\nfor crop_cycle in summary_output:\n print(msg.format(**crop_cycle))",
"Reached maturity at 2000-05-31 with total biomass 18091.0 kg/ha, a yield of 8729.4 kg/ha with a maximum LAI of 6.23.\n"
]
],
[
[
"## Visualizing output\nThe pandas module is also very useful for generating charts from simulation results. In this case we generate graphs of leaf area index and crop biomass including total biomass and grain yield.",
"_____no_output_____"
]
],
[
[
"fig, (axis1, axis2) = plt.subplots(nrows=1, ncols=2, figsize=(16,8))\ndfPP.LAI.plot(ax=axis1, label=\"LAI\", color='k')\ndfPP.TAGP.plot(ax=axis2, label=\"Total biomass\")\ndfPP.TWSO.plot(ax=axis2, label=\"Yield\")\naxis1.set_title(\"Leaf Area Index\")\naxis2.set_title(\"Crop biomass\")\nfig.autofmt_xdate()\nr = fig.legend()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e755489b6da04802eb767a49b1fd8947c1a681b0 | 47,312 | ipynb | Jupyter Notebook | 1.DataFrames y Series-ejercicio.ipynb | Javierhidalgo95/Hidalgo-Lopez---PC-PYTHON | 52f08b9e1d40584491c28b685c6ffafdf38d06e1 | [
"Apache-2.0"
] | null | null | null | 1.DataFrames y Series-ejercicio.ipynb | Javierhidalgo95/Hidalgo-Lopez---PC-PYTHON | 52f08b9e1d40584491c28b685c6ffafdf38d06e1 | [
"Apache-2.0"
] | null | null | null | 1.DataFrames y Series-ejercicio.ipynb | Javierhidalgo95/Hidalgo-Lopez---PC-PYTHON | 52f08b9e1d40584491c28b685c6ffafdf38d06e1 | [
"Apache-2.0"
] | null | null | null | 33.365303 | 180 | 0.393515 | [
[
[
"# Importar Pandas",
"_____no_output_____"
]
],
[
[
"#importa pandas\nimport pandas as pd",
"_____no_output_____"
],
[
"pd.__version__",
"_____no_output_____"
]
],
[
[
"# Crear una Serie",
"_____no_output_____"
],
[
"Explore series en python en el siguiente [link](https://pandas.pydata.org/pandas-docs/stable/user_guide/10min.html) en las primeras lineas del documento",
"_____no_output_____"
]
],
[
[
"# Crea una Serie de los numeros 10, 20 and 10.\ns = pd.Series([10, 20, 10])\ns",
"_____no_output_____"
],
[
"# Crea una Serie con tres objetos: 'rojo', 'verde', 'azul'\ns = pd.Series([\"rojo\", \"verde\", \"azul\"])\ns",
"_____no_output_____"
]
],
[
[
"# Crear un Dataframe",
"_____no_output_____"
]
],
[
[
"# Crea un dataframe vacío llamado 'df'\ndicx= {}\ndf_dataframe = pd.DataFrame(dicx)\ndf_dataframe\ndf = pd.DataFrame()",
"_____no_output_____"
],
[
"# Crea una nueva columna en el dataframe, y asignale la primera serie que has creado\n\nserie1 = [[10, 20, 10]]\n\n# nombre de columnas\ncolumnas= [\"C1\", \"C2\", \"C3\"]\n# Ayuda para funcion -> shift + tab\ndf_serie1 = pd.DataFrame(data=serie1, columns = columnas)\ndf_serie1",
"_____no_output_____"
],
[
"# Crea otra columna en el dataframe y asignale la segunda Serie que has creado\nserie2 = [[\"rojo\", \"verde\", \"azul\"]]\n\n# nombre de columnas\ncolumnas= [\"C1\", \"C2\", \"C3\"]\n# Ayuda para funcion -> shift + tab\ndf_serie2 = pd.DataFrame(data=serie2, columns = columnas)\ndf_serie2",
"_____no_output_____"
]
],
[
[
"# Leer un dataframe",
"_____no_output_____"
]
],
[
[
"# Lee el archivo llamado 'avengers.csv\" localizado en la carpeta \"data\" y crea un DataFrame, llamado 'avengers'. \n# El archivo está localizado en \"data/avengers.csv\"\ndf = pd.read_csv('./src/pandas/avengers.csv', sep=',')\ndf.head()",
"_____no_output_____"
]
],
[
[
"# Inspeccionar un dataframe",
"_____no_output_____"
]
],
[
[
"# Muestra las primeras 5 filas del DataFrame.\ndf.head(5)",
"_____no_output_____"
],
[
"# Muestra las primeras 10 filas del DataFrame. \ndf.head(10)",
"_____no_output_____"
],
[
"# Muestra las últimas 5 filas del DataFrame.\ndf.tail(5)",
"_____no_output_____"
]
],
[
[
"# Tamaño del DataFrame",
"_____no_output_____"
]
],
[
[
"# Muestra el tamaño del DataFrame\ndf.shape",
"_____no_output_____"
]
],
[
[
"# Data types en un DataFrame",
"_____no_output_____"
]
],
[
[
"# Muestra los data types del dataframe\ndf.dtypes",
"_____no_output_____"
]
],
[
[
"# Editar el indice (index)",
"_____no_output_____"
]
],
[
[
"# Cambia el indice a la columna \"fecha_inicio\".\ndf2 = df.set_index(\"fecha_inicio\").copy()\ndf2.head()",
"_____no_output_____"
]
],
[
[
"# Ordenar el indice",
"_____no_output_____"
]
],
[
[
"# Ordena el índice de forma descendiente\ndf.sort_values(by=[\"URL\", \"nombre\", \"n_apariciones\", \"actual\", \"genero\", \"fecha_inicio\", \"Notes\"], ascending=[False, False, False, False, False, False, False,])",
"_____no_output_____"
]
],
[
[
"# Resetear el indice",
"_____no_output_____"
]
],
[
[
"# Resetea el índice\ndf2.reset_index(drop=True, inplace=True)\ndf2.head()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7554a868fa50358ce4bfef765fcc723f848b4d5 | 160,883 | ipynb | Jupyter Notebook | OCEA-267/Lectures/W7_L13.ipynb | profxj/ocea200 | 562077f498d4283fb5d456b634e8f2f0bcaf539c | [
"BSD-3-Clause"
] | null | null | null | OCEA-267/Lectures/W7_L13.ipynb | profxj/ocea200 | 562077f498d4283fb5d456b634e8f2f0bcaf539c | [
"BSD-3-Clause"
] | 3 | 2019-10-09T04:04:54.000Z | 2019-11-28T16:12:30.000Z | OCEA-267/Lectures/W7_L13.ipynb | profxj/ocea200 | 562077f498d4283fb5d456b634e8f2f0bcaf539c | [
"BSD-3-Clause"
] | null | null | null | 131.333061 | 70,084 | 0.83598 | [
[
[
"# Lecture 13 Examples",
"_____no_output_____"
]
],
[
[
"# imports\nimport numpy as np\nfrom scipy.ndimage import uniform_filter1d\nfrom scipy.stats import shapiro, bartlett\nfrom matplotlib import pyplot as plt\nimport pandas\n\nfrom statsmodels.tsa.seasonal import seasonal_decompose\nimport statsmodels.api as sm\nfrom statsmodels.stats.stattools import durbin_watson\nimport statsmodels.formula.api as smf\nfrom statsmodels.graphics.tsaplots import plot_acf, plot_pacf\nfrom statsmodels.tsa.stattools import pacf\n\nimport pymannkendall as mk",
"_____no_output_____"
]
],
[
[
"# Phosphorus",
"_____no_output_____"
],
[
"## Load",
"_____no_output_____"
]
],
[
[
"data_file = '../Data/samsonvillebrook_phosphorus_quarterly.txt'\ndf = pandas.read_table(data_file, delim_whitespace=True, names=['time','P'])\ndf.set_index('time', inplace=True)\ndf.head()",
"_____no_output_____"
]
],
[
[
"## Plot",
"_____no_output_____"
]
],
[
[
"df.P.plot()",
"_____no_output_____"
]
],
[
[
"## Mann-Kendall",
"_____no_output_____"
]
],
[
[
"result = mk.original_test(df.P)\nresult",
"_____no_output_____"
]
],
[
[
"## Significant p-value!",
"_____no_output_____"
],
[
"## Fit linear trend",
"_____no_output_____"
]
],
[
[
"time = np.arange(len(df)) + 1\ndf['time'] = time",
"_____no_output_____"
],
[
"formula = \"P ~ time\"\nmod_ols = smf.glm(formula=formula, data=df).fit()#, family=sm.families.Binomial()).fit()",
"_____no_output_____"
],
[
"mod_ols.summary()",
"_____no_output_____"
],
[
"mod_ols.pvalues",
"_____no_output_____"
]
],
[
[
"----",
"_____no_output_____"
],
[
"# Faux dataset",
"_____no_output_____"
]
],
[
[
"## Load\n\ndata_file2 = '../Data/pollution_data_stationY.txt'\ndf2 = pandas.read_table(data_file2, delim_whitespace=True)\ndf2.head()",
"_____no_output_____"
]
],
[
[
"## Date",
"_____no_output_____"
]
],
[
[
"dates = []\nfor index, row in df2.iterrows():\n dates.append(f'{int(row.year)}-{int(row.month)}')\ndates = pandas.to_datetime(dates)\ndf2['date'] = dates\ndf2.set_index('date', inplace=True)\ndf2.head()",
"_____no_output_____"
]
],
[
[
"## Plot",
"_____no_output_____"
]
],
[
[
"df2.y.plot()",
"_____no_output_____"
]
],
[
[
"## Fit with seasonal trend ",
"_____no_output_____"
],
[
"### Dummies for seasonal",
"_____no_output_____"
]
],
[
[
"dummy = np.zeros((len(df2), 11), dtype=int)\nfor i in np.arange(11):\n for j in np.arange(len(df2)):\n if df2.month.values[j] == i+1:\n dummy[j,i] = 1",
"_____no_output_____"
],
[
"dummies = []\nfor idum in np.arange(11):\n key = f'dum{idum}'\n dummies.append(key)\n df2[key] = dummy[:,idum]",
"_____no_output_____"
],
[
"df2.head()",
"_____no_output_____"
]
],
[
[
"### Time",
"_____no_output_____"
]
],
[
[
"time = np.arange(len(df2)) + 1\ndf2['time'] = time",
"_____no_output_____"
]
],
[
[
"### Fit",
"_____no_output_____"
]
],
[
[
"formula = \"y ~ dum0 + dum1 + dum2 + dum3 + dum4 + dum5 + dum6 + dum7 + dum8 + dum9 + dum10 + time\"\nols2 = smf.glm(formula=formula, data=df2).fit()#, family=sm.families.Binomial()).fit()",
"_____no_output_____"
],
[
"ols2.summary()",
"_____no_output_____"
]
],
[
[
"## Plot",
"_____no_output_____"
]
],
[
[
"df2['ols'] = ols2.fittedvalues",
"_____no_output_____"
],
[
"fig = plt.figure()\nfig.set_size_inches((12, 9))\nax = df2.y.plot(ylabel='y', label='data', marker='o', ls='')\n#\ndf2.ols.plot(ax=ax, color='k', label='model')\n#\nax.legend(fontsize=15)\n#\n#set_fontsize(ax, 17)",
"_____no_output_____"
]
],
[
[
"## Explore residuals",
"_____no_output_____"
],
[
"### Durbin-Watson",
"_____no_output_____"
]
],
[
[
"resids = df2.y-ols2.fittedvalues",
"_____no_output_____"
],
[
"durbin_watson(resids)",
"_____no_output_____"
]
],
[
[
"### Shapiro",
"_____no_output_____"
]
],
[
[
"shapiro(resids)",
"_____no_output_____"
]
],
[
[
"### Not normal!!",
"_____no_output_____"
],
[
"## Try a Seasonal MK test!",
"_____no_output_____"
]
],
[
[
"mk2_results = mk.seasonal_test(df2.y, period=12)",
"_____no_output_____"
],
[
"mk2_results",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
e75558e5033f1c30dbbfd5434a3a64a03f360a84 | 14,715 | ipynb | Jupyter Notebook | courses/datacamp/notes/python/sklearn/recommender.ipynb | othrif/DataInsights | bd7340a6384fa44ffe91ac029863814e674ab043 | [
"MIT"
] | null | null | null | courses/datacamp/notes/python/sklearn/recommender.ipynb | othrif/DataInsights | bd7340a6384fa44ffe91ac029863814e674ab043 | [
"MIT"
] | null | null | null | courses/datacamp/notes/python/sklearn/recommender.ipynb | othrif/DataInsights | bd7340a6384fa44ffe91ac029863814e674ab043 | [
"MIT"
] | null | null | null | 92.54717 | 1,748 | 0.682025 | [
[
[
"---\ntitle: \"Music recommender system with full pipeline\"\ndate: 2020-04-12T14:41:32+02:00\nauthor: \"Othmane Rifki\"\ntype: technical_note\ndraft: false\n---",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom scipy.sparse import csr_matrix\ndf = pd.read_csv('artists/scrobbler-small-sample.csv', index_col=0)\nartists = csr_matrix(df.transpose())\nartist_names = [x.strip('\\n').split(' ')[0] for x in open('artists/artists.csv').readlines()]",
"_____no_output_____"
]
],
[
[
"Compute the normalized NMF features:",
"_____no_output_____"
]
],
[
[
"# Perform the necessary imports\nfrom sklearn.decomposition import NMF\nfrom sklearn.preprocessing import Normalizer, MaxAbsScaler\nfrom sklearn.pipeline import make_pipeline\n\n# Create a MaxAbsScaler: scaler\nscaler = MaxAbsScaler()\n\n# Create an NMF model: nmf\nnmf = NMF(n_components=20)\n\n# Create a Normalizer: normalizer\nnormalizer = Normalizer()\n\n# Create a pipeline: pipeline\npipeline = make_pipeline(scaler, nmf, normalizer)\n\n# Apply fit_transform to artists: norm_features\nnorm_features = pipeline.fit_transform(artists)\nnorm_features",
"_____no_output_____"
],
[
"# Import pandas\nimport pandas as pd\n\n# Create a DataFrame: df\ndf = pd.DataFrame(norm_features, index=artist_names)\ndisplay(df)\n# Select row of 'Bruce Springsteen': artist\nartist = df.loc['Bruce Springsteen']\n\n# Compute cosine similarities: similarities\nsimilarities = df.dot(artist)\n\n# Display those with highest cosine similarity\nprint(similarities.nlargest())",
"_____no_output_____"
]
]
] | [
"raw",
"code",
"markdown",
"code"
] | [
[
"raw"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7555c61c4ee8494cb8e16da666fe0e832f72b29 | 5,245 | ipynb | Jupyter Notebook | 06 - K-Means on SVD Data.ipynb | avdeev-andrew/mlbootcamp6 | 16b874c350c872b5614cbbeac2875980ecb1b66b | [
"MIT"
] | null | null | null | 06 - K-Means on SVD Data.ipynb | avdeev-andrew/mlbootcamp6 | 16b874c350c872b5614cbbeac2875980ecb1b66b | [
"MIT"
] | null | null | null | 06 - K-Means on SVD Data.ipynb | avdeev-andrew/mlbootcamp6 | 16b874c350c872b5614cbbeac2875980ecb1b66b | [
"MIT"
] | null | null | null | 19.072727 | 68 | 0.485415 | [
[
[
"# 06 - K-Means on SVD Data",
"_____no_output_____"
],
[
"#### Imports",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nsns.set(style=\"white\")",
"_____no_output_____"
]
],
[
[
"#### Constants",
"_____no_output_____"
]
],
[
[
"n_clusters = 100",
"_____no_output_____"
],
[
"models_folder = \"models/\"\ntrain_data_fn = models_folder+'train_data.pkl'\ntarget_fn = models_folder+'target.pkl'\ntest_data_fn = models_folder+'test_data.pkl'\n\nweight_multiplier_fn = models_folder+\"weight_multiplier.pkl\"",
"_____no_output_____"
]
],
[
[
"#### Functions",
"_____no_output_____"
]
],
[
[
"import os.path\nfrom sklearn.externals import joblib\n\ndef Load(filename):\n if os.path.isfile(filename):\n return joblib.load(filename)\n \ndef Save(obj, filename):\n joblib.dump(obj, filename)",
"_____no_output_____"
]
],
[
[
"# Loading data",
"_____no_output_____"
]
],
[
[
"train = Load(train_data_fn)\ntest = Load(test_data_fn)\ntarget = Load(target_fn)",
"_____no_output_____"
],
[
"weight_multiplier = Load(weight_multiplier_fn)",
"_____no_output_____"
],
[
"print(train.shape)\nprint(test.shape)",
"(427994, 1000)\n(181024, 1000)\n"
],
[
"data = np.concatenate((train, test), axis=0)\nprint(data.shape)",
"(609018, 1000)\n"
],
[
"from sklearn.cluster import KMeans\n\nkmeans = KMeans(\n n_clusters=n_clusters,\n# init='k-means++',\n n_init=10,\n max_iter=300,\n tol=0.0001,\n precompute_distances='auto',\n verbose=0,\n random_state=None,\n copy_x=True,\n n_jobs=-1,\n algorithm='auto'\n)",
"_____no_output_____"
],
[
"%%time\nkmeans = kmeans.fit(data)",
"CPU times: user 1min 20s, sys: 6.16 s, total: 1min 26s\nWall time: 20min 38s\n"
],
[
"Save(kmeans.labels_,models_folder+'kmeans_n100.pkl')",
"_____no_output_____"
],
[
"n_clusters = 2\n\nkmeans = KMeans(\n n_clusters=n_clusters,\n# init='k-means++',\n n_init=10,\n max_iter=300,\n tol=0.0001,\n precompute_distances='auto',\n verbose=0,\n random_state=None,\n copy_x=True,\n n_jobs=-1,\n algorithm='auto'\n)",
"_____no_output_____"
],
[
"%%time\nkmeans = kmeans.fit(data)",
"CPU times: user 1min 21s, sys: 5.19 s, total: 1min 26s\nWall time: 1min 50s\n"
],
[
"Save(kmeans.labels_,models_folder+'kmeans_n2.pkl')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7555dbddb759e8f8790262f883234daedbebb5d | 45,247 | ipynb | Jupyter Notebook | t81_558_class_03_3_save_load.ipynb | sanjayssane/t81_558_deep_learning | dd186c240f9d0faeda70e81648d439a0f63ea8cc | [
"Apache-2.0"
] | 1 | 2020-12-15T19:35:48.000Z | 2020-12-15T19:35:48.000Z | t81_558_class_03_3_save_load.ipynb | sanjayssane/t81_558_deep_learning | dd186c240f9d0faeda70e81648d439a0f63ea8cc | [
"Apache-2.0"
] | null | null | null | t81_558_class_03_3_save_load.ipynb | sanjayssane/t81_558_deep_learning | dd186c240f9d0faeda70e81648d439a0f63ea8cc | [
"Apache-2.0"
] | null | null | null | 39.413763 | 587 | 0.561429 | [
[
[
"# T81-558: Applications of Deep Neural Networks\n**Module 3: Introduction to TensorFlow**\n* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)\n* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).",
"_____no_output_____"
],
[
"# Module 3 Material\n\n* Part 3.1: Deep Learning and Neural Network Introduction [[Video]](https://www.youtube.com/watch?v=zYnI4iWRmpc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_03_1_neural_net.ipynb)\n* Part 3.2: Introduction to Tensorflow & Keras [[Video]](https://www.youtube.com/watch?v=PsE73jk55cE&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_03_2_keras.ipynb)\n* **Part 3.3: Saving and Loading a Keras Neural Network** [[Video]](https://www.youtube.com/watch?v=-9QfbGM1qGw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_03_3_save_load.ipynb)\n* Part 3.4: Early Stopping in Keras to Prevent Overfitting [[Video]](https://www.youtube.com/watch?v=m1LNunuI2fk&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_03_4_early_stop.ipynb)\n* Part 3.5: Extracting Weights and Manual Calculation [[Video]](https://www.youtube.com/watch?v=7PWgx16kH8s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_03_5_weights.ipynb)",
"_____no_output_____"
],
[
"# Part 3.3: Saving and Loading a Keras Neural Network\n\nComplex neural networks will take a long time to fit/train. It is helpful to be able to save these neural networks so that they can be reloaded later. A reloaded neural network will not require retraining. Keras provides three formats for neural network saving.\n\n* **YAML** - Stores the neural network structure (no weights) in the [YAML file format](https://en.wikipedia.org/wiki/YAML).\n* **JSON** - Stores the neural network structure (no weights) in the [JSON file format](https://en.wikipedia.org/wiki/JSON).\n* **HDF5** - Stores the complete neural network (with weights) in the [HDF5 file format](https://en.wikipedia.org/wiki/Hierarchical_Data_Format). Do not confuse HDF5 with [HDFS](https://en.wikipedia.org/wiki/Apache_Hadoop). They are different. We do not use HDFS in this class.\n\nUsually you will want to save in HDF5.",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Activation\nimport pandas as pd\nimport io\nimport os\nimport requests\nimport numpy as np\nfrom sklearn import metrics\n\nsave_path = \".\"\n\ndf = pd.read_csv(\n \"https://data.heatonresearch.com/data/t81-558/auto-mpg.csv\", \n na_values=['NA', '?'])\n\ncars = df['name']\n\n# Handle missing value\ndf['horsepower'] = df['horsepower'].fillna(df['horsepower'].median())\n\n# Pandas to Numpy\nx = df[['cylinders', 'displacement', 'horsepower', 'weight',\n 'acceleration', 'year', 'origin']].values\ny = df['mpg'].values # regression\n\n# Build the neural network\nmodel = Sequential()\nmodel.add(Dense(25, input_dim=x.shape[1], activation='relu')) # Hidden 1\nmodel.add(Dense(10, activation='relu')) # Hidden 2\nmodel.add(Dense(1)) # Output\nmodel.compile(loss='mean_squared_error', optimizer='adam')\nmodel.fit(x,y,verbose=2,epochs=100)\n\n# Predict\npred = model.predict(x)\n\n# Measure RMSE error. RMSE is common for regression.\nscore = np.sqrt(metrics.mean_squared_error(pred,y))\nprint(f\"Before save score (RMSE): {score}\")\n\n# save neural network structure to JSON (no weights)\nmodel_json = model.to_json()\nwith open(os.path.join(save_path,\"network.json\"), \"w\") as json_file:\n json_file.write(model_json)\n\n# save neural network structure to YAML (no weights)\nmodel_yaml = model.to_yaml()\nwith open(os.path.join(save_path,\"network.yaml\"), \"w\") as yaml_file:\n yaml_file.write(model_yaml)\n\n# save entire network to HDF5 (save everything, suggested)\nmodel.save(os.path.join(save_path,\"network.h5\"))",
"_____no_output_____"
]
],
[
[
"The code below does not retrain the neural network; it reloads the saved network so the weights from the previous fit are reused.",
"_____no_output_____"
],
[
"Now we reload the network and perform another prediction. The RMSE should match the previous one exactly if the neural network was really saved and reloaded.",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.models import load_model\nmodel2 = load_model(os.path.join(save_path,\"network.h5\"))\npred = model2.predict(x)\n# Measure RMSE error. RMSE is common for regression.\nscore = np.sqrt(metrics.mean_squared_error(pred,y))\nprint(f\"After load score (RMSE): {score}\")",
"_____no_output_____"
]
],
[
[
"# Part 3.4: Early Stopping in Keras to Prevent Overfitting",
"_____no_output_____"
],
[
"**Overfitting** occurs when a neural network is trained to the point that it begins to memorize rather than generalize. \n\n\n\nIt is important to segment the original dataset into several datasets:\n\n* **Training Set**\n* **Validation Set**\n* **Holdout Set**\n\nThere are several different ways that these sets can be constructed. The following programs demonstrate some of these.\n\nThe first method is a training and validation set. The training data are used to train the neural network until the validation set no longer improves. This attempts to stop at a near-optimal training point. This method will only give accurate \"out of sample\" predictions for the validation set, which is usually 20% or so of the data. The predictions for the training data will be overly optimistic, as these were the data that the neural network was trained on. \n\n",
"_____no_output_____"
],
[
"### Early Stopping with Classification",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport io\nimport requests\nimport numpy as np\nfrom sklearn import metrics\nfrom sklearn.model_selection import train_test_split\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Activation\nfrom tensorflow.keras.callbacks import EarlyStopping\n\ndf = pd.read_csv(\n \"https://data.heatonresearch.com/data/t81-558/iris.csv\", \n na_values=['NA', '?'])\n\n# Convert to numpy - Classification\nx = df[['sepal_l', 'sepal_w', 'petal_l', 'petal_w']].values\ndummies = pd.get_dummies(df['species']) # Classification\nspecies = dummies.columns\ny = dummies.values\n\n# Split into validation and training sets\nx_train, x_test, y_train, y_test = train_test_split( \n x, y, test_size=0.25, random_state=42)\n\n# Build neural network\nmodel = Sequential()\nmodel.add(Dense(50, input_dim=x.shape[1], activation='relu')) # Hidden 1\nmodel.add(Dense(25, activation='relu')) # Hidden 2\nmodel.add(Dense(y.shape[1],activation='softmax')) # Output\nmodel.compile(loss='categorical_crossentropy', optimizer='adam')\n\nmonitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto',\n restore_best_weights=True)\nmodel.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor],verbose=2,epochs=1000)\n",
"Train on 112 samples, validate on 38 samples\nEpoch 1/1000\n112/112 - 0s - loss: 1.2631 - val_loss: 1.1849\nEpoch 2/1000\n112/112 - 0s - loss: 1.1055 - val_loss: 1.0706\nEpoch 3/1000\n112/112 - 0s - loss: 1.0157 - val_loss: 1.0093\nEpoch 4/1000\n112/112 - 0s - loss: 0.9774 - val_loss: 0.9663\nEpoch 5/1000\n112/112 - 0s - loss: 0.9449 - val_loss: 0.9235\nEpoch 6/1000\n112/112 - 0s - loss: 0.9073 - val_loss: 0.8794\nEpoch 7/1000\n112/112 - 0s - loss: 0.8659 - val_loss: 0.8314\nEpoch 8/1000\n112/112 - 0s - loss: 0.8203 - val_loss: 0.7881\nEpoch 9/1000\n112/112 - 0s - loss: 0.7832 - val_loss: 0.7506\nEpoch 10/1000\n112/112 - 0s - loss: 0.7527 - val_loss: 0.7151\nEpoch 11/1000\n112/112 - 0s - loss: 0.7208 - val_loss: 0.6813\nEpoch 12/1000\n112/112 - 0s - loss: 0.6904 - val_loss: 0.6484\nEpoch 13/1000\n112/112 - 0s - loss: 0.6607 - val_loss: 0.6162\nEpoch 14/1000\n112/112 - 0s - loss: 0.6322 - val_loss: 0.5841\nEpoch 15/1000\n112/112 - 0s - loss: 0.6048 - val_loss: 0.5549\nEpoch 16/1000\n112/112 - 0s - loss: 0.5787 - val_loss: 0.5287\nEpoch 17/1000\n112/112 - 0s - loss: 0.5544 - val_loss: 0.5034\nEpoch 18/1000\n112/112 - 0s - loss: 0.5315 - val_loss: 0.4796\nEpoch 19/1000\n112/112 - 0s - loss: 0.5096 - val_loss: 0.4580\nEpoch 20/1000\n112/112 - 0s - loss: 0.4902 - val_loss: 0.4378\nEpoch 21/1000\n112/112 - 0s - loss: 0.4706 - val_loss: 0.4188\nEpoch 22/1000\n112/112 - 0s - loss: 0.4536 - val_loss: 0.4019\nEpoch 23/1000\n112/112 - 0s - loss: 0.4396 - val_loss: 0.3865\nEpoch 24/1000\n112/112 - 0s - loss: 0.4215 - val_loss: 0.3709\nEpoch 25/1000\n112/112 - 0s - loss: 0.4077 - val_loss: 0.3573\nEpoch 26/1000\n112/112 - 0s - loss: 0.3938 - val_loss: 0.3440\nEpoch 27/1000\n112/112 - 0s - loss: 0.3815 - val_loss: 0.3318\nEpoch 28/1000\n112/112 - 0s - loss: 0.3707 - val_loss: 0.3208\nEpoch 29/1000\n112/112 - 0s - loss: 0.3579 - val_loss: 0.3102\nEpoch 30/1000\n112/112 - 0s - loss: 0.3465 - val_loss: 0.3000\nEpoch 31/1000\n112/112 - 0s - loss: 0.3372 - val_loss: 0.2905\nEpoch 
32/1000\n112/112 - 0s - loss: 0.3268 - val_loss: 0.2841\nEpoch 33/1000\n112/112 - 0s - loss: 0.3190 - val_loss: 0.2756\nEpoch 34/1000\n112/112 - 0s - loss: 0.3139 - val_loss: 0.2665\nEpoch 35/1000\n112/112 - 0s - loss: 0.3011 - val_loss: 0.2597\nEpoch 36/1000\n112/112 - 0s - loss: 0.2998 - val_loss: 0.2516\nEpoch 37/1000\n112/112 - 0s - loss: 0.2883 - val_loss: 0.2436\nEpoch 38/1000\n112/112 - 0s - loss: 0.2789 - val_loss: 0.2397\nEpoch 39/1000\n112/112 - 0s - loss: 0.2717 - val_loss: 0.2315\nEpoch 40/1000\n112/112 - 0s - loss: 0.2655 - val_loss: 0.2248\nEpoch 41/1000\n112/112 - 0s - loss: 0.2585 - val_loss: 0.2189\nEpoch 42/1000\n112/112 - 0s - loss: 0.2498 - val_loss: 0.2168\nEpoch 43/1000\n112/112 - 0s - loss: 0.2465 - val_loss: 0.2124\nEpoch 44/1000\n112/112 - 0s - loss: 0.2418 - val_loss: 0.2028\nEpoch 45/1000\n112/112 - 0s - loss: 0.2334 - val_loss: 0.1979\nEpoch 46/1000\n112/112 - 0s - loss: 0.2278 - val_loss: 0.1952\nEpoch 47/1000\n112/112 - 0s - loss: 0.2225 - val_loss: 0.1889\nEpoch 48/1000\n112/112 - 0s - loss: 0.2178 - val_loss: 0.1840\nEpoch 49/1000\n112/112 - 0s - loss: 0.2128 - val_loss: 0.1829\nEpoch 50/1000\n112/112 - 0s - loss: 0.2073 - val_loss: 0.1759\nEpoch 51/1000\n112/112 - 0s - loss: 0.2010 - val_loss: 0.1701\nEpoch 52/1000\n112/112 - 0s - loss: 0.2002 - val_loss: 0.1659\nEpoch 53/1000\n112/112 - 0s - loss: 0.1927 - val_loss: 0.1655\nEpoch 54/1000\n112/112 - 0s - loss: 0.1937 - val_loss: 0.1653\nEpoch 55/1000\n112/112 - 0s - loss: 0.1870 - val_loss: 0.1557\nEpoch 56/1000\n112/112 - 0s - loss: 0.1794 - val_loss: 0.1523\nEpoch 57/1000\n112/112 - 0s - loss: 0.1760 - val_loss: 0.1492\nEpoch 58/1000\n112/112 - 0s - loss: 0.1716 - val_loss: 0.1447\nEpoch 59/1000\n112/112 - 0s - loss: 0.1692 - val_loss: 0.1408\nEpoch 60/1000\n112/112 - 0s - loss: 0.1666 - val_loss: 0.1405\nEpoch 61/1000\n112/112 - 0s - loss: 0.1662 - val_loss: 0.1408\nEpoch 62/1000\n112/112 - 0s - loss: 0.1584 - val_loss: 0.1322\nEpoch 63/1000\n112/112 - 0s - loss: 0.1567 - 
val_loss: 0.1296\nEpoch 64/1000\n112/112 - 0s - loss: 0.1541 - val_loss: 0.1314\nEpoch 65/1000\n112/112 - 0s - loss: 0.1497 - val_loss: 0.1279\nEpoch 66/1000\n112/112 - 0s - loss: 0.1469 - val_loss: 0.1234\nEpoch 67/1000\n112/112 - 0s - loss: 0.1453 - val_loss: 0.1198\nEpoch 68/1000\n112/112 - 0s - loss: 0.1430 - val_loss: 0.1194\nEpoch 69/1000\n112/112 - 0s - loss: 0.1404 - val_loss: 0.1211\nEpoch 70/1000\n112/112 - 0s - loss: 0.1395 - val_loss: 0.1156\nEpoch 71/1000\n112/112 - 0s - loss: 0.1355 - val_loss: 0.1163\nEpoch 72/1000\n112/112 - 0s - loss: 0.1328 - val_loss: 0.1123\nEpoch 73/1000\n112/112 - 0s - loss: 0.1307 - val_loss: 0.1094\nEpoch 74/1000\n112/112 - 0s - loss: 0.1297 - val_loss: 0.1063\nEpoch 75/1000\n112/112 - 0s - loss: 0.1291 - val_loss: 0.1045\nEpoch 76/1000\n112/112 - 0s - loss: 0.1262 - val_loss: 0.1055\nEpoch 77/1000\n112/112 - 0s - loss: 0.1291 - val_loss: 0.1192\nEpoch 78/1000\n112/112 - 0s - loss: 0.1264 - val_loss: 0.1033\nEpoch 79/1000\n112/112 - 0s - loss: 0.1227 - val_loss: 0.0987\nEpoch 80/1000\n112/112 - 0s - loss: 0.1198 - val_loss: 0.0987\nEpoch 81/1000\n112/112 - 0s - loss: 0.1194 - val_loss: 0.1028\nEpoch 82/1000\n112/112 - 0s - loss: 0.1151 - val_loss: 0.0956\nEpoch 83/1000\n112/112 - 0s - loss: 0.1159 - val_loss: 0.0934\nEpoch 84/1000\n112/112 - 0s - loss: 0.1139 - val_loss: 0.0978\nEpoch 85/1000\n112/112 - 0s - loss: 0.1133 - val_loss: 0.0967\nEpoch 86/1000\n112/112 - 0s - loss: 0.1140 - val_loss: 0.0994\nEpoch 87/1000\n112/112 - 0s - loss: 0.1090 - val_loss: 0.0894\nEpoch 88/1000\n112/112 - 0s - loss: 0.1090 - val_loss: 0.0875\nEpoch 89/1000\n112/112 - 0s - loss: 0.1119 - val_loss: 0.0882\nEpoch 90/1000\n112/112 - 0s - loss: 0.1082 - val_loss: 0.0950\nEpoch 91/1000\n112/112 - 0s - loss: 0.1065 - val_loss: 0.0925\nEpoch 92/1000\n112/112 - 0s - loss: 0.1047 - val_loss: 0.0842\nEpoch 93/1000\n112/112 - 0s - loss: 0.1091 - val_loss: 0.0829\nEpoch 94/1000\n112/112 - 0s - loss: 0.1082 - val_loss: 0.0909\nEpoch 95/1000\n112/112 - 0s 
- loss: 0.1018 - val_loss: 0.0858\nEpoch 96/1000\n112/112 - 0s - loss: 0.0999 - val_loss: 0.0804\nEpoch 97/1000\n112/112 - 0s - loss: 0.1024 - val_loss: 0.0802\nEpoch 98/1000\n112/112 - 0s - loss: 0.0995 - val_loss: 0.0883\nEpoch 99/1000\n112/112 - 0s - loss: 0.0988 - val_loss: 0.0917\nEpoch 100/1000\n112/112 - 0s - loss: 0.1002 - val_loss: 0.0826\nEpoch 101/1000\nRestoring model weights from the end of the best epoch.\n112/112 - 0s - loss: 0.0962 - val_loss: 0.0797\nEpoch 00101: early stopping\n"
]
],
[
[
"As you can see from above, not all of the requested epochs were used. The neural network training stopped once the validation set no longer improved.",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import accuracy_score\n\npred = model.predict(x_test)\npredict_classes = np.argmax(pred,axis=1)\nexpected_classes = np.argmax(y_test,axis=1)\ncorrect = accuracy_score(expected_classes,predict_classes)\nprint(f\"Accuracy: {correct}\")",
"Accuracy: 1.0\n"
]
],
[
[
"### Early Stopping with Regression",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Activation\nimport pandas as pd\nimport io\nimport os\nimport requests\nimport numpy as np\nfrom sklearn import metrics\n\ndf = pd.read_csv(\n \"https://data.heatonresearch.com/data/t81-558/auto-mpg.csv\", \n na_values=['NA', '?'])\n\ncars = df['name']\n\n# Handle missing value\ndf['horsepower'] = df['horsepower'].fillna(df['horsepower'].median())\n\n# Pandas to Numpy\nx = df[['cylinders', 'displacement', 'horsepower', 'weight',\n 'acceleration', 'year', 'origin']].values\ny = df['mpg'].values # regression\n\n# Split into validation and training sets\nx_train, x_test, y_train, y_test = train_test_split( \n x, y, test_size=0.25, random_state=42)\n\n# Build the neural network\nmodel = Sequential()\nmodel.add(Dense(25, input_dim=x.shape[1], activation='relu')) # Hidden 1\nmodel.add(Dense(10, activation='relu')) # Hidden 2\nmodel.add(Dense(1)) # Output\nmodel.compile(loss='mean_squared_error', optimizer='adam')\n\nmonitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto',\n restore_best_weights=True)\nmodel.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor],verbose=2,epochs=1000)",
"Train on 298 samples, validate on 100 samples\nEpoch 1/1000\n298/298 - 0s - loss: 137009.1588 - val_loss: 87400.7916\nEpoch 2/1000\n298/298 - 0s - loss: 64687.9471 - val_loss: 34750.5637\nEpoch 3/1000\n298/298 - 0s - loss: 14863.5487 - val_loss: 437.6791\nEpoch 4/1000\n298/298 - 0s - loss: 1920.5421 - val_loss: 4129.8140\nEpoch 5/1000\n298/298 - 0s - loss: 2874.4793 - val_loss: 669.1977\nEpoch 6/1000\n298/298 - 0s - loss: 388.7574 - val_loss: 602.8470\nEpoch 7/1000\n298/298 - 0s - loss: 574.0564 - val_loss: 379.9243\nEpoch 8/1000\n298/298 - 0s - loss: 277.5623 - val_loss: 286.2166\nEpoch 9/1000\n298/298 - 0s - loss: 300.7139 - val_loss: 269.4909\nEpoch 10/1000\n298/298 - 0s - loss: 258.5796 - val_loss: 257.4868\nEpoch 11/1000\n298/298 - 0s - loss: 258.8578 - val_loss: 254.6119\nEpoch 12/1000\n298/298 - 0s - loss: 252.2497 - val_loss: 245.8697\nEpoch 13/1000\n298/298 - 0s - loss: 248.8389 - val_loss: 243.8566\nEpoch 14/1000\n298/298 - 0s - loss: 244.4187 - val_loss: 238.7814\nEpoch 15/1000\n298/298 - 0s - loss: 238.9818 - val_loss: 234.5445\nEpoch 16/1000\n298/298 - 0s - loss: 236.7845 - val_loss: 231.0035\nEpoch 17/1000\n298/298 - 0s - loss: 232.2934 - val_loss: 228.9457\nEpoch 18/1000\n298/298 - 0s - loss: 229.4774 - val_loss: 223.4574\nEpoch 19/1000\n298/298 - 0s - loss: 227.1223 - val_loss: 218.9363\nEpoch 20/1000\n298/298 - 0s - loss: 221.9513 - val_loss: 216.4789\nEpoch 21/1000\n298/298 - 0s - loss: 218.9378 - val_loss: 211.9149\nEpoch 22/1000\n298/298 - 0s - loss: 215.4493 - val_loss: 209.9197\nEpoch 23/1000\n298/298 - 0s - loss: 214.1163 - val_loss: 203.0125\nEpoch 24/1000\n298/298 - 0s - loss: 207.9805 - val_loss: 199.8764\nEpoch 25/1000\n298/298 - 0s - loss: 204.4530 - val_loss: 196.3883\nEpoch 26/1000\n298/298 - 0s - loss: 200.6158 - val_loss: 191.7415\nEpoch 27/1000\n298/298 - 0s - loss: 198.5157 - val_loss: 189.0866\nEpoch 28/1000\n298/298 - 0s - loss: 194.7323 - val_loss: 184.6138\nEpoch 29/1000\n298/298 - 0s - loss: 191.0196 - val_loss: 
181.2299\nEpoch 30/1000\n298/298 - 0s - loss: 188.4134 - val_loss: 178.2012\nEpoch 31/1000\n298/298 - 0s - loss: 184.4825 - val_loss: 176.0514\nEpoch 32/1000\n298/298 - 0s - loss: 181.8757 - val_loss: 169.4620\nEpoch 33/1000\n298/298 - 0s - loss: 177.8738 - val_loss: 171.5512\nEpoch 34/1000\n298/298 - 0s - loss: 174.6477 - val_loss: 162.5667\nEpoch 35/1000\n298/298 - 0s - loss: 177.4287 - val_loss: 159.0651\nEpoch 36/1000\n298/298 - 0s - loss: 169.2184 - val_loss: 160.3227\nEpoch 37/1000\n298/298 - 0s - loss: 167.7838 - val_loss: 152.2763\nEpoch 38/1000\n298/298 - 0s - loss: 165.2119 - val_loss: 154.2723\nEpoch 39/1000\n298/298 - 0s - loss: 159.2664 - val_loss: 145.9608\nEpoch 40/1000\n298/298 - 0s - loss: 156.9077 - val_loss: 145.2549\nEpoch 41/1000\n298/298 - 0s - loss: 154.3356 - val_loss: 140.0966\nEpoch 42/1000\n298/298 - 0s - loss: 152.0847 - val_loss: 139.5768\nEpoch 43/1000\n298/298 - 0s - loss: 151.5686 - val_loss: 135.4901\nEpoch 44/1000\n298/298 - 0s - loss: 148.0668 - val_loss: 131.2476\nEpoch 45/1000\n298/298 - 0s - loss: 143.9285 - val_loss: 132.7091\nEpoch 46/1000\n298/298 - 0s - loss: 140.7696 - val_loss: 126.0253\nEpoch 47/1000\n298/298 - 0s - loss: 138.6575 - val_loss: 126.5558\nEpoch 48/1000\n298/298 - 0s - loss: 135.8795 - val_loss: 121.8080\nEpoch 49/1000\n298/298 - 0s - loss: 133.8290 - val_loss: 121.2612\nEpoch 50/1000\n298/298 - 0s - loss: 131.2323 - val_loss: 116.3520\nEpoch 51/1000\n298/298 - 0s - loss: 129.6460 - val_loss: 115.3310\nEpoch 52/1000\n298/298 - 0s - loss: 126.7419 - val_loss: 114.1026\nEpoch 53/1000\n298/298 - 0s - loss: 125.9019 - val_loss: 111.1985\nEpoch 54/1000\n298/298 - 0s - loss: 125.0483 - val_loss: 108.5914\nEpoch 55/1000\n298/298 - 0s - loss: 121.6166 - val_loss: 110.9844\nEpoch 56/1000\n298/298 - 0s - loss: 118.7622 - val_loss: 103.5990\nEpoch 57/1000\n298/298 - 0s - loss: 117.9291 - val_loss: 104.9465\nEpoch 58/1000\n298/298 - 0s - loss: 115.4261 - val_loss: 100.3285\nEpoch 59/1000\n298/298 - 0s - loss: 114.4067 - 
val_loss: 100.5623\nEpoch 60/1000\n298/298 - 0s - loss: 112.1221 - val_loss: 97.5862\nEpoch 61/1000\n298/298 - 0s - loss: 111.3041 - val_loss: 96.5822\nEpoch 62/1000\n298/298 - 0s - loss: 108.3902 - val_loss: 93.8154\nEpoch 63/1000\n298/298 - 0s - loss: 107.3160 - val_loss: 93.5471\nEpoch 64/1000\n298/298 - 0s - loss: 105.4188 - val_loss: 90.5398\nEpoch 65/1000\n298/298 - 0s - loss: 106.7369 - val_loss: 91.4318\nEpoch 66/1000\n298/298 - 0s - loss: 103.6693 - val_loss: 87.3595\nEpoch 67/1000\n298/298 - 0s - loss: 102.3900 - val_loss: 87.9832\nEpoch 68/1000\n298/298 - 0s - loss: 100.5562 - val_loss: 85.1072\nEpoch 69/1000\n298/298 - 0s - loss: 98.5360 - val_loss: 85.6733\nEpoch 70/1000\n298/298 - 0s - loss: 97.5029 - val_loss: 83.7361\nEpoch 71/1000\n298/298 - 0s - loss: 97.5912 - val_loss: 82.2137\nEpoch 72/1000\n298/298 - 0s - loss: 95.3032 - val_loss: 80.9050\nEpoch 73/1000\n298/298 - 0s - loss: 95.0331 - val_loss: 80.0183\nEpoch 74/1000\n298/298 - 0s - loss: 93.8799 - val_loss: 78.0280\nEpoch 75/1000\n298/298 - 0s - loss: 92.1775 - val_loss: 76.6894\nEpoch 76/1000\n298/298 - 0s - loss: 93.9282 - val_loss: 81.0375\nEpoch 77/1000\n298/298 - 0s - loss: 92.5300 - val_loss: 74.6066\nEpoch 78/1000\n298/298 - 0s - loss: 89.0045 - val_loss: 76.7518\nEpoch 79/1000\n298/298 - 0s - loss: 87.9711 - val_loss: 73.6240\nEpoch 80/1000\n298/298 - 0s - loss: 86.4187 - val_loss: 73.5404\nEpoch 81/1000\n298/298 - 0s - loss: 85.1361 - val_loss: 72.9517\nEpoch 82/1000\n298/298 - 0s - loss: 84.2285 - val_loss: 70.8196\nEpoch 83/1000\n298/298 - 0s - loss: 83.5711 - val_loss: 73.0940\nEpoch 84/1000\n298/298 - 0s - loss: 83.3114 - val_loss: 68.8215\nEpoch 85/1000\n298/298 - 0s - loss: 85.2272 - val_loss: 77.8712\nEpoch 86/1000\n298/298 - 0s - loss: 84.8784 - val_loss: 66.9690\nEpoch 87/1000\n298/298 - 0s - loss: 84.3240 - val_loss: 74.7963\nEpoch 88/1000\n298/298 - 0s - loss: 82.6671 - val_loss: 66.4218\nEpoch 89/1000\n298/298 - 0s - loss: 78.9253 - val_loss: 66.0465\nEpoch 
90/1000\n298/298 - 0s - loss: 78.2948 - val_loss: 67.1132\nEpoch 91/1000\n298/298 - 0s - loss: 77.8454 - val_loss: 64.5615\nEpoch 92/1000\n298/298 - 0s - loss: 76.5378 - val_loss: 63.7461\nEpoch 93/1000\n298/298 - 0s - loss: 76.3956 - val_loss: 65.1476\nEpoch 94/1000\n298/298 - 0s - loss: 75.6534 - val_loss: 64.3530\nEpoch 95/1000\n298/298 - 0s - loss: 74.4515 - val_loss: 62.2992\nEpoch 96/1000\n298/298 - 0s - loss: 75.0175 - val_loss: 65.6056\nEpoch 97/1000\n298/298 - 0s - loss: 75.4062 - val_loss: 60.3103\nEpoch 98/1000\n298/298 - 0s - loss: 73.4676 - val_loss: 63.0665\nEpoch 99/1000\n298/298 - 0s - loss: 73.8604 - val_loss: 59.1017\nEpoch 100/1000\n298/298 - 0s - loss: 75.2177 - val_loss: 69.7093\nEpoch 101/1000\n298/298 - 0s - loss: 76.5957 - val_loss: 57.7874\nEpoch 102/1000\n298/298 - 0s - loss: 70.4228 - val_loss: 63.4300\nEpoch 103/1000\n298/298 - 0s - loss: 69.7966 - val_loss: 56.9887\nEpoch 104/1000\n298/298 - 0s - loss: 70.7405 - val_loss: 63.1973\nEpoch 105/1000\n298/298 - 0s - loss: 75.2547 - val_loss: 55.7992\nEpoch 106/1000\n298/298 - 0s - loss: 67.7083 - val_loss: 59.9058\nEpoch 107/1000\n298/298 - 0s - loss: 66.1408 - val_loss: 54.8671\nEpoch 108/1000\n298/298 - 0s - loss: 70.5219 - val_loss: 68.1857\nEpoch 109/1000\n298/298 - 0s - loss: 68.6804 - val_loss: 53.9880\nEpoch 110/1000\n298/298 - 0s - loss: 66.7275 - val_loss: 56.5978\nEpoch 111/1000\n298/298 - 0s - loss: 65.0595 - val_loss: 54.1060\nEpoch 112/1000\n298/298 - 0s - loss: 63.8615 - val_loss: 56.1610\nEpoch 113/1000\n298/298 - 0s - loss: 63.5641 - val_loss: 52.9413\nEpoch 114/1000\n298/298 - 0s - loss: 63.4639 - val_loss: 54.6012\nEpoch 115/1000\n298/298 - 0s - loss: 62.6168 - val_loss: 53.1920\nEpoch 116/1000\n298/298 - 0s - loss: 61.7351 - val_loss: 50.9684\nEpoch 117/1000\n298/298 - 0s - loss: 63.5669 - val_loss: 56.6385\nEpoch 118/1000\n298/298 - 0s - loss: 61.4415 - val_loss: 50.0127\nEpoch 119/1000\n298/298 - 0s - loss: 60.7425 - val_loss: 57.8498\nEpoch 120/1000\n298/298 - 0s - 
loss: 61.2164 - val_loss: 49.3895\nEpoch 121/1000\n298/298 - 0s - loss: 60.2546 - val_loss: 49.6976\nEpoch 122/1000\n298/298 - 0s - loss: 59.2859 - val_loss: 48.5382\nEpoch 123/1000\n298/298 - 0s - loss: 58.9762 - val_loss: 50.3346\nEpoch 124/1000\n298/298 - 0s - loss: 58.9458 - val_loss: 47.6183\nEpoch 125/1000\n298/298 - 0s - loss: 57.7480 - val_loss: 50.9688\nEpoch 126/1000\n298/298 - 0s - loss: 57.2021 - val_loss: 48.4334\nEpoch 127/1000\n298/298 - 0s - loss: 56.0933 - val_loss: 48.6423\nEpoch 128/1000\n298/298 - 0s - loss: 56.0916 - val_loss: 46.6327\nEpoch 129/1000\n298/298 - 0s - loss: 55.1348 - val_loss: 47.3380\nEpoch 130/1000\n298/298 - 0s - loss: 54.5392 - val_loss: 45.5798\nEpoch 131/1000\n298/298 - 0s - loss: 55.0560 - val_loss: 46.4440\nEpoch 132/1000\n298/298 - 0s - loss: 55.1137 - val_loss: 50.3307\nEpoch 133/1000\n298/298 - 0s - loss: 54.9968 - val_loss: 45.0395\nEpoch 134/1000\n298/298 - 0s - loss: 54.1360 - val_loss: 48.7353\nEpoch 135/1000\n298/298 - 0s - loss: 52.8437 - val_loss: 43.5325\nEpoch 136/1000\n298/298 - 0s - loss: 51.9040 - val_loss: 50.6398\nEpoch 137/1000\n298/298 - 0s - loss: 52.5419 - val_loss: 43.6542\nEpoch 138/1000\n298/298 - 0s - loss: 51.0480 - val_loss: 44.7285\nEpoch 139/1000\n298/298 - 0s - loss: 51.3169 - val_loss: 44.8340\nEpoch 140/1000\n298/298 - 0s - loss: 50.1141 - val_loss: 41.7541\nEpoch 141/1000\n298/298 - 0s - loss: 50.2745 - val_loss: 42.6962\nEpoch 142/1000\n298/298 - 0s - loss: 49.2871 - val_loss: 43.4447\nEpoch 143/1000\n298/298 - 0s - loss: 49.1571 - val_loss: 41.6067\nEpoch 144/1000\n298/298 - 0s - loss: 48.6497 - val_loss: 41.7728\nEpoch 145/1000\n298/298 - 0s - loss: 47.9751 - val_loss: 41.4525\nEpoch 146/1000\n298/298 - 0s - loss: 47.7075 - val_loss: 39.7075\nEpoch 147/1000\n298/298 - 0s - loss: 47.1009 - val_loss: 42.2388\nEpoch 148/1000\n298/298 - 0s - loss: 46.8240 - val_loss: 39.7196\nEpoch 149/1000\n298/298 - 0s - loss: 46.4609 - val_loss: 42.2989\nEpoch 150/1000\n298/298 - 0s - loss: 48.0583 - 
val_loss: 38.4602\nEpoch 151/1000\n298/298 - 0s - loss: 46.5334 - val_loss: 43.8147\nEpoch 152/1000\n298/298 - 0s - loss: 46.7716 - val_loss: 40.3131\nEpoch 153/1000\n298/298 - 0s - loss: 47.0893 - val_loss: 37.4092\nEpoch 154/1000\n298/298 - 0s - loss: 44.4238 - val_loss: 38.6971\nEpoch 155/1000\n298/298 - 0s - loss: 44.1358 - val_loss: 36.7729\nEpoch 156/1000\n298/298 - 0s - loss: 43.6989 - val_loss: 39.5008\nEpoch 157/1000\n298/298 - 0s - loss: 43.0400 - val_loss: 36.1819\nEpoch 158/1000\n298/298 - 0s - loss: 44.1855 - val_loss: 36.5305\nEpoch 159/1000\n298/298 - 0s - loss: 43.9221 - val_loss: 38.8542\nEpoch 160/1000\n298/298 - 0s - loss: 42.5634 - val_loss: 35.4242\nEpoch 161/1000\n298/298 - 0s - loss: 41.8758 - val_loss: 37.1866\nEpoch 162/1000\n298/298 - 0s - loss: 41.5447 - val_loss: 35.2277\nEpoch 163/1000\n298/298 - 0s - loss: 41.2064 - val_loss: 34.6725\nEpoch 164/1000\n298/298 - 0s - loss: 40.9822 - val_loss: 35.0514\nEpoch 165/1000\n298/298 - 0s - loss: 40.1962 - val_loss: 34.8705\nEpoch 166/1000\n298/298 - 0s - loss: 39.8260 - val_loss: 33.5002\nEpoch 167/1000\n298/298 - 0s - loss: 40.1474 - val_loss: 35.9322\nEpoch 168/1000\n298/298 - 0s - loss: 39.7041 - val_loss: 33.0713\nEpoch 169/1000\n298/298 - 0s - loss: 39.3530 - val_loss: 32.5961\nEpoch 170/1000\n298/298 - 0s - loss: 39.3592 - val_loss: 34.3898\nEpoch 171/1000\n298/298 - 0s - loss: 39.9318 - val_loss: 37.0751\nEpoch 172/1000\n298/298 - 0s - loss: 39.9988 - val_loss: 31.7385\nEpoch 173/1000\n298/298 - 0s - loss: 38.5538 - val_loss: 31.9029\nEpoch 174/1000\n298/298 - 0s - loss: 39.0118 - val_loss: 33.9289\nEpoch 175/1000\n298/298 - 0s - loss: 38.1769 - val_loss: 34.0615\nEpoch 176/1000\n298/298 - 0s - loss: 37.7205 - val_loss: 31.3360\nEpoch 177/1000\n298/298 - 0s - loss: 38.3103 - val_loss: 30.4050\nEpoch 178/1000\n298/298 - 0s - loss: 37.4394 - val_loss: 34.8860\nEpoch 179/1000\n298/298 - 0s - loss: 36.0916 - val_loss: 30.1852\nEpoch 180/1000\n298/298 - 0s - loss: 35.0402 - val_loss: 
29.9700\nEpoch 181/1000\n298/298 - 0s - loss: 35.0743 - val_loss: 30.8582\nEpoch 182/1000\n298/298 - 0s - loss: 34.9778 - val_loss: 29.4909\nEpoch 183/1000\n298/298 - 0s - loss: 35.5301 - val_loss: 32.6701\nEpoch 184/1000\n298/298 - 0s - loss: 35.1328 - val_loss: 31.4080\nEpoch 185/1000\n298/298 - 0s - loss: 34.8071 - val_loss: 28.7977\nEpoch 186/1000\n298/298 - 0s - loss: 33.5718 - val_loss: 29.4352\nEpoch 187/1000\n298/298 - 0s - loss: 32.8953 - val_loss: 27.9879\nEpoch 188/1000\n298/298 - 0s - loss: 33.0373 - val_loss: 32.1209\nEpoch 189/1000\n298/298 - 0s - loss: 32.7321 - val_loss: 27.8982\nEpoch 190/1000\n298/298 - 0s - loss: 32.6381 - val_loss: 27.3850\nEpoch 191/1000\n298/298 - 0s - loss: 32.2747 - val_loss: 27.2659\nEpoch 192/1000\n298/298 - 0s - loss: 32.4851 - val_loss: 27.0773\nEpoch 193/1000\n298/298 - 0s - loss: 31.9399 - val_loss: 26.8742\nEpoch 194/1000\n298/298 - 0s - loss: 31.4582 - val_loss: 30.4675\nEpoch 195/1000\n298/298 - 0s - loss: 31.7919 - val_loss: 26.1536\nEpoch 196/1000\n298/298 - 0s - loss: 30.5031 - val_loss: 27.7287\nEpoch 197/1000\n298/298 - 0s - loss: 30.5504 - val_loss: 26.4488\nEpoch 198/1000\n298/298 - 0s - loss: 29.9834 - val_loss: 25.7465\nEpoch 199/1000\n298/298 - 0s - loss: 31.6555 - val_loss: 26.2356\nEpoch 200/1000\n298/298 - 0s - loss: 29.6036 - val_loss: 24.8315\nEpoch 201/1000\n298/298 - 0s - loss: 29.1212 - val_loss: 24.8808\nEpoch 202/1000\n298/298 - 0s - loss: 29.4223 - val_loss: 24.4004\nEpoch 203/1000\n298/298 - 0s - loss: 30.4341 - val_loss: 26.0114\nEpoch 204/1000\n298/298 - 0s - loss: 29.7155 - val_loss: 30.6579\nEpoch 205/1000\n298/298 - 0s - loss: 30.2008 - val_loss: 25.4360\nEpoch 206/1000\n298/298 - 0s - loss: 28.6162 - val_loss: 25.2825\nEpoch 207/1000\nRestoring model weights from the end of the best epoch.\n298/298 - 0s - loss: 28.0181 - val_loss: 25.5654\nEpoch 00207: early stopping\n"
],
[
"# Measure RMSE error. RMSE is common for regression.\npred = model.predict(x_test)\nscore = np.sqrt(metrics.mean_squared_error(pred,y_test))\nprint(f\"Final score (RMSE): {score}\")",
"Final score (RMSE): 4.939672608054\n"
]
],
[
[
"# Part 3.5: Extracting Keras Weights and Manual Neural Network Calculation\n\nIn this section we will build a neural network and analyze it down to the individual weights. We will train a simple neural network that learns the XOR function. It is not hard to simply hand-code the neurons to provide an [XOR function](https://en.wikipedia.org/wiki/Exclusive_or); however, for simplicity, we will allow Keras to train this network for us. We will just use 100K epochs on the ADAM optimizer. This is massive overkill, but it gets the result, and our focus here is not on tuning. The neural network is small: two inputs, two hidden neurons, and a single output.",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Activation\nimport numpy as np\n\n# Create a dataset for the XOR function\nx = np.array([\n [0,0],\n [1,0],\n [0,1],\n [1,1]\n])\n\ny = np.array([\n 0,\n 1,\n 1,\n 0\n])\n\n# Build the network\n# sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)\n\ndone = False\ncycle = 1\n\nwhile not done:\n print(\"Cycle #{}\".format(cycle))\n cycle+=1\n model = Sequential()\n model.add(Dense(2, input_dim=2, activation='relu')) \n model.add(Dense(1)) \n model.compile(loss='mean_squared_error', optimizer='adam')\n model.fit(x,y,verbose=0,epochs=10000)\n\n # Predict\n pred = model.predict(x)\n \n # Check if successful. It takes several runs with this small of a network\n done = pred[0]<0.01 and pred[3]<0.01 and pred[1] > 0.9 and pred[2] > 0.9 \n print(pred)",
"_____no_output_____"
],
[
"pred[3]",
"_____no_output_____"
]
],
[
[
"The output above should have two numbers near 0.0 for the first and fourth spots (input [[0,0]] and [[1,1]]). The middle two numbers should be near 1.0 (input [[1,0]] and [[0,1]]). These numbers are in scientific notation. Due to random starting weights, it is sometimes necessary to run the above through several cycles to get a good result.\n\nNow that the neural network is trained, let's dump the weights. ",
"_____no_output_____"
]
],
[
[
"# Dump weights\nfor layerNum, layer in enumerate(model.layers):\n weights = layer.get_weights()[0]\n biases = layer.get_weights()[1]\n \n for toNeuronNum, bias in enumerate(biases):\n print(f'{layerNum}B -> L{layerNum+1}N{toNeuronNum}: {bias}')\n \n for fromNeuronNum, wgt in enumerate(weights):\n for toNeuronNum, wgt2 in enumerate(wgt):\n print(f'L{layerNum}N{fromNeuronNum} -> L{layerNum+1}N{toNeuronNum} = {wgt2}')",
"_____no_output_____"
]
],
[
[
"If you rerun this, you will probably get different weights. There are many ways to solve the XOR function.\n\nIn the next section, we copy/paste the weights from above and recreate the calculations done by the neural network. Because weights can change with each training, the weights used for the code below came from this:\n\n```\n0B -> L1N0: -1.2913415431976318\n0B -> L1N1: -3.021530048386012e-08\nL0N0 -> L1N0 = 1.2913416624069214\nL0N0 -> L1N1 = 1.1912699937820435\nL0N1 -> L1N0 = 1.2913411855697632\nL0N1 -> L1N1 = 1.1912697553634644\n1B -> L2N0: 7.626241297587034e-36\nL1N0 -> L2N0 = -1.548777461051941\nL1N1 -> L2N0 = 0.8394404649734497\n```",
"_____no_output_____"
]
],
[
[
"input0 = 0\ninput1 = 1\n\nhidden0Sum = (input0*1.3)+(input1*1.3)+(-1.3)\nhidden1Sum = (input0*1.2)+(input1*1.2)+(0)\n\nprint(hidden0Sum) # 0\nprint(hidden1Sum) # 1.2\n\nhidden0 = max(0,hidden0Sum)\nhidden1 = max(0,hidden1Sum)\n\nprint(hidden0) # 0\nprint(hidden1) # 1.2\n\noutputSum = (hidden0*-1.6)+(hidden1*0.8)+(0)\nprint(outputSum) # 0.96\n\noutput = max(0,outputSum)\n\nprint(output) # 0.96",
"_____no_output_____"
]
],
[
[
"# Module 3 Assignment\n\nYou can find the first assignment here: [assignment 3](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class3.ipynb)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7556553ce6cd0785af5dd1a8235f7f3621c5944 | 603,806 | ipynb | Jupyter Notebook | session/deprecated/ocr/word-detection.ipynb | AetherPrior/malaya | 45d37b171dff9e92c5d30bd7260b282cd0912a7d | [
"MIT"
] | 88 | 2021-01-06T10:01:31.000Z | 2022-03-30T17:34:09.000Z | session/deprecated/ocr/word-detection.ipynb | AetherPrior/malaya | 45d37b171dff9e92c5d30bd7260b282cd0912a7d | [
"MIT"
] | 43 | 2021-01-14T02:44:41.000Z | 2022-03-31T19:47:42.000Z | session/deprecated/ocr/word-detection.ipynb | AetherPrior/malaya | 45d37b171dff9e92c5d30bd7260b282cd0912a7d | [
"MIT"
] | 38 | 2021-01-06T07:15:03.000Z | 2022-03-19T05:07:50.000Z | 514.315162 | 81,848 | 0.936799 | [
[
[
"import sys\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport cv2",
"_____no_output_____"
],
[
"image = cv2.cvtColor(cv2.imread(\"semarak-jawi.2.jpg\"), cv2.COLOR_BGR2RGB)\nplt.imshow(image)",
"_____no_output_____"
],
[
"SMALL_HEIGHT = 800\n\ndef resize(img, height=SMALL_HEIGHT, always=False):\n    \"\"\"Resize image to given height.\"\"\"\n    if (img.shape[0] > height or always):\n        rat = height / img.shape[0]\n        return cv2.resize(img, (int(rat * img.shape[1]), height))\n    \n    return img\n\ndef implt(img, cmp=None, t=''):\n    \"\"\"Show image using plt.\"\"\"\n    plt.imshow(img, cmap=cmp)\n    plt.title(t)\n    plt.show()\n    \ndef ratio(img, height=SMALL_HEIGHT):\n    \"\"\"Getting scale ratio.\"\"\"\n    return img.shape[0] / height\n\ndef edges_det(img, min_val, max_val):\n    \"\"\" Preprocessing (gray, thresh, filter, border) + Canny edge detection \"\"\"\n    img = cv2.cvtColor(resize(img), cv2.COLOR_BGR2GRAY)\n\n    # Applying blur and threshold\n    img = cv2.bilateralFilter(img, 9, 75, 75)\n    img = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 115, 4)\n    implt(img, 'gray', 'Adaptive Threshold')\n\n    # Median blur replaces the center pixel by the median of the pixels under the kernel\n    # => removes thin details\n    img = cv2.medianBlur(img, 11)\n\n    # Add black border - detection of border touching pages\n    # Contour can't touch side of image\n    img = cv2.copyMakeBorder(img, 5, 5, 5, 5, cv2.BORDER_CONSTANT, value=[0, 0, 0])\n    implt(img, 'gray', 'Median Blur + Border')\n\n    return cv2.Canny(img, min_val, max_val)",
"_____no_output_____"
],
[
"edges_image = edges_det(image, 200, 250)\n\n# Close gaps between edges (double page close => rectangle kernel)\nedges_image = cv2.morphologyEx(edges_image, cv2.MORPH_CLOSE, np.ones((5, 11)))\nimplt(edges_image, 'gray', 'Edges')",
"_____no_output_____"
],
[
"\ndef four_corners_sort(pts):\n \"\"\" Sort corners: top-left, bot-left, bot-right, top-right\"\"\"\n diff = np.diff(pts, axis=1)\n summ = pts.sum(axis=1)\n return np.array([pts[np.argmin(summ)],\n pts[np.argmax(diff)],\n pts[np.argmax(summ)],\n pts[np.argmin(diff)]])\n\n\ndef contour_offset(cnt, offset):\n \"\"\" Offset contour because of 5px border \"\"\"\n cnt += offset\n cnt[cnt < 0] = 0\n return cnt\n\n\ndef find_page_contours(edges, img):\n \"\"\" Finding corner points of page contour \"\"\"\n # Getting contours \n contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)\n \n # Finding biggest rectangle otherwise return original corners\n height = edges.shape[0]\n width = edges.shape[1]\n MIN_COUNTOUR_AREA = height * width * 0.5\n MAX_COUNTOUR_AREA = (width - 10) * (height - 10)\n\n max_area = MIN_COUNTOUR_AREA\n page_contour = np.array([[0, 0],\n [0, height-5],\n [width-5, height-5],\n [width-5, 0]])\n\n for cnt in contours:\n perimeter = cv2.arcLength(cnt, True)\n approx = cv2.approxPolyDP(cnt, 0.03 * perimeter, True)\n\n # Page has 4 corners and it is convex\n if (len(approx) == 4 and\n cv2.isContourConvex(approx) and\n max_area < cv2.contourArea(approx) < MAX_COUNTOUR_AREA):\n \n max_area = cv2.contourArea(approx)\n page_contour = approx[:, 0]\n\n # Sort corners and offset them\n page_contour = four_corners_sort(page_contour)\n return contour_offset(page_contour, (-5, -5))",
"_____no_output_____"
],
[
"page_contour = find_page_contours(edges_image, resize(image))\nprint(\"PAGE CONTOUR:\")\nprint(page_contour)\nimplt(cv2.drawContours(resize(image), [page_contour], -1, (0, 255, 0), 3))\n\n \n# Recalculate to original scale\npage_contour = page_contour.dot(ratio(image))",
"PAGE CONTOUR:\n[[ 0 1]\n [ 0 799]\n [1146 798]\n [1145 0]]\n"
],
[
"def persp_transform(img, s_points):\n \"\"\" Transform perspective from start points to target points \"\"\"\n # Euclidean distance - calculate maximum height and width\n height = max(np.linalg.norm(s_points[0] - s_points[1]),\n np.linalg.norm(s_points[2] - s_points[3]))\n width = max(np.linalg.norm(s_points[1] - s_points[2]),\n np.linalg.norm(s_points[3] - s_points[0]))\n \n # Create target points\n t_points = np.array([[0, 0],\n [0, height],\n [width, height],\n [width, 0]], np.float32)\n \n # getPerspectiveTransform() needs float32\n if s_points.dtype != np.float32:\n s_points = s_points.astype(np.float32)\n \n M = cv2.getPerspectiveTransform(s_points, t_points) \n return cv2.warpPerspective(img, M, (int(width), int(height)))\n \n \nnewImage = persp_transform(image, page_contour)\n#image = newImage\n#implt(newImage, t='Result')\n\nimage = newImage\nimplt(image, t='Result')",
"_____no_output_____"
],
[
"img = cv2.cvtColor(newImage, cv2.COLOR_RGB2GRAY)\nimplt(img, 'gray')",
"_____no_output_____"
],
[
"def sobel(channel):\n \"\"\" The Sobel Operator\"\"\"\n sobelX = cv2.Sobel(channel, cv2.CV_16S, 1, 0)\n sobelY = cv2.Sobel(channel, cv2.CV_16S, 0, 1)\n # Combine x, y gradient magnitudes sqrt(x^2 + y^2)\n sobel = np.hypot(sobelX, sobelY)\n sobel[sobel > 255] = 255\n return np.uint8(sobel)\n\n\ndef edge_detect(im):\n \"\"\" \n Edge detection \n The Sobel operator is applied for each image layer (RGB)\n \"\"\"\n return np.max(np.array([sobel(im[:,:, 0]), sobel(im[:,:, 1]), sobel(im[:,:, 2]) ]), axis=0)\n\n# Image pre-processing - blur, edges, threshold, closing\n#blurred = cv2.GaussianBlur(image, (1,1), 20)\nedges = edge_detect(image)\nret, edges = cv2.threshold(edges, 50, 255, cv2.THRESH_BINARY)\nbw_image = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 4), np.uint8))\n\nimplt(edges, 'gray', 'Sobel operator')\nimplt(bw_image, 'gray', 'Final closing')",
"_____no_output_____"
],
[
"def union(a,b):\n    x = min(a[0], b[0])\n    y = min(a[1], b[1])\n    w = max(a[0]+a[2], b[0]+b[2]) - x\n    h = max(a[1]+a[3], b[1]+b[3]) - y\n    return [x, y, w, h]\n\ndef intersect(a,b):\n    x = max(a[0], b[0])\n    y = max(a[1], b[1])\n    w = min(a[0]+a[2], b[0]+b[2]) - x\n    h = min(a[1]+a[3], b[1]+b[3]) - y\n    print(w)\n    if w<0 or h<0:\n        return False\n    return True\n\ndef group_rectangles(rec):\n    \"\"\"\n    Union intersecting rectangles\n    Args:\n        rec - list of rectangles in form [x, y, w, h]\n    Return:\n        list of grouped rectangles \n    \"\"\"\n    tested = [False for i in range(len(rec))]\n    final = []\n    i = 0\n    while i < len(rec):\n        if not tested[i]:\n            j = i+1\n            while j < len(rec):\n                if not tested[j] and intersect(rec[i], rec[j]):\n                    rec[i] = union(rec[i], rec[j])\n                    tested[j] = True\n                    j = i\n                j += 1\n            final += [rec[i]]\n        i += 1\n            \n    return final",
"_____no_output_____"
],
[
"def text_detect(img, original):\n \"\"\" Text detection using contours \"\"\"\n # Resize image\n small = resize(img, 2000)\n image = resize(original, 2000)\n cp_image = image.copy()\n \n # Finding contours\n mask = np.zeros(small.shape, np.uint8)\n cnt, hierarchy = cv2.findContours(np.copy(small), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)\n \n implt(img, 'gray')\n \n # Variables for contour index and words' bounding boxes\n index = 0 \n boxes = []\n # CCOMP hierarchy: [Next, Previous, First Child, Parent]\n # cv2.RETR_CCOMP - contours into 2 levels\n # Go through all contours in first level\n while (index >= 0):\n x,y,w,h = cv2.boundingRect(cnt[index])\n # Get only the contour\n cv2.drawContours(mask, cnt, index, (255, 255, 255), cv2.FILLED)\n maskROI = mask[y:y+h, x:x+w]\n # Ratio of white pixels to area of bounding rectangle\n r = cv2.countNonZero(maskROI) / (w * h)\n \n # Limits for text (white pixel ratio, width, height)\n # TODO Test h/w and w/h ratios\n if r > 0.1 and 2000 > w > 10 and 1600 > h > 10 and h/w < 3 and w/h < 10:\n boxes += [[x, y, w, h]]\n \n # Index of next contour\n index = hierarchy[0][index][0]\n \n # Group intersecting rectangles\n #boxes = group_rectangles(boxes)\n bounding_boxes = np.array([0,0,0,0])\n for (x, y, w, h) in boxes:\n cv2.rectangle(cp_image, (x, y),(x+w,y+h), (0, 255, 0), 8)\n bounding_boxes = np.vstack((bounding_boxes, np.array([x, y, x+w, y+h])))\n\n implt(cp_image, t='Bounding rectangles')\n\n # Recalculate coordinates to original scale\n boxes = bounding_boxes.dot(ratio(image, small.shape[0])).astype(np.int64)\n return boxes[1:]",
"_____no_output_____"
],
[
"boxes = text_detect(bw_image, image)\nprint(\"Number of boxes:\", len(boxes))",
"_____no_output_____"
],
[
"def sort_words(box):\n boxes = box.copy()\n \"\"\"Sort boxes - (x, y, x+w, y+h) from left to right, top to bottom.\"\"\"\n mean_height = sum([y2 - y1 for _, y1, _, y2 in boxes]) / len(boxes)\n \n boxes.view('i8,i8,i8,i8').sort(order=['f1'], axis=0)\n current_line = boxes[0][1]\n lines = []\n tmp_line = []\n for box in boxes:\n if box[1] > current_line + mean_height:\n lines.append(tmp_line)\n tmp_line = [box]\n current_line = box[1] \n continue\n tmp_line.append(box)\n lines.append(tmp_line)\n \n for line in lines:\n line.sort(key=lambda box: box[0])\n \n return lines",
"_____no_output_____"
],
[
"sorted_boxes = sort_words(boxes)\nsorted_boxes",
"_____no_output_____"
],
[
"x1, y1, x2, y2 = sorted_boxes[0][-2]",
"_____no_output_____"
],
[
"plt.imshow(image[y1: y2, x1:x2])",
"_____no_output_____"
],
[
"image.shape",
"_____no_output_____"
],
[
"import glob\nwikis = glob.glob('wiki/*.png')\nlen(wikis)",
"_____no_output_____"
],
[
"wikis[-1]",
"_____no_output_____"
],
[
"image_ = cv2.cvtColor(cv2.imread(wikis[-1]), cv2.COLOR_BGR2RGB)\nplt.imshow(image_)",
"_____no_output_____"
],
[
"image[y1: y2, x1:x2]",
"_____no_output_____"
],
[
"class HysterThresh: \n def __init__(self, img):\n img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)\n img = 255 - img\n img = (img - np.min(img)) / (np.max(img) - np.min(img)) * 255 \n hist, bins = np.histogram(img.ravel(), 256, [0,256])\n \n self.high = np.argmax(hist) + 65\n self.low = np.argmax(hist) + 45\n self.diff = 255 - self.high\n \n self.img = img\n self.im = np.zeros(img.shape, dtype=img.dtype)\n self.hyster()\n \n def hyster_rec(self, r, c):\n h, w = self.img.shape\n for ri in [r-1, r+1]:\n for ci in [c-1, c+1]:\n if (h > ri >= 0\n and w > ci >= 0\n and self.im[ri, ci] == 0\n and self.high > self.img[ri, ci] >= self.low): \n self.im[ri, ci] = self.img[ri, ci] + self.diff\n self.hyster_rec(ri, ci) \n \n def hyster(self):\n r, c = self.img.shape\n for ri in range(r):\n for ci in range(c):\n if (self.img[ri, ci] >= self.high):\n self.im[ri, ci] = 255\n self.img[ri, ci] = 255\n self.hyster_rec(ri, ci)\n \n implt(self.im, 'gray', 'Hister Thresh')\n\n\ndef binary_otsu_norm(img): \n return cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]\n\n\ndef bilateral_norm(img):\n img = cv2.bilateralFilter(img, 9, 15, 30)\n return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)\n\n\ndef histogram_norm(img):\n img = bilateral_norm(img)\n add_img = 255 - cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]\n img = 255 - img\n img = (img - np.min(img)) / (np.max(img) - np.min(img)) * 255 \n hist, bins = np.histogram(img.ravel(), 256, [0,256])\n \n img = img.astype(np.uint8)\n\n ret,thresh4 = cv2.threshold(img,np.argmax(hist)+10,255,cv2.THRESH_TOZERO)\n return add_img\n return cv2.add(add_img, thresh4, dtype=cv2.CV_8UC1)\n\ndef normalization2(img): \n implt(255 - img, 'gray', 'Original') \n implt(255 - bilateral_norm(img), 'gray', 'Bilateral')\n implt(255 - binary_otsu_norm(img), 'gray', 'Binary OTSU')\n implt(histogram_norm(img), 'gray', 'Binary OTSU + (Filter + TO_ZERO)')\n HysterThresh(cv2.bilateralFilter(img, 10, 10, 30))",
"_____no_output_____"
],
[
"histogram_norm(image[y1: y2, x1:x2, 0])",
"_____no_output_____"
],
[
"implt(histogram_norm(image[y1: y2, x1:x2, 0]), 'gray', 'Binary OTSU + (Filter + TO_ZERO)')",
"_____no_output_____"
],
[
"implt(cv2.bitwise_not(image_))",
"_____no_output_____"
],
[
"from scipy.ndimage.interpolation import map_coordinates\nfrom scipy.ndimage.filters import gaussian_filter\n\ndef elastic_transform(image, alpha, sigma, alpha_affine, random_state=None):\n if random_state is None:\n random_state = np.random.RandomState(None)\n\n shape = image.shape\n shape_size = shape[:2]\n \n blur_size = int(4*sigma) | 1\n dx = alpha * cv2.GaussianBlur((random_state.rand(*shape) * 2 - 1),\n ksize=(blur_size, blur_size),\n sigmaX=sigma)\n dy = alpha * cv2.GaussianBlur((random_state.rand(*shape) * 2 - 1),\n ksize=(blur_size, blur_size),\n sigmaX=sigma)\n\n x, y = np.meshgrid(np.arange(shape[1]), np.arange(shape[0]))\n indices = np.reshape(y+dy, (-1, 1)), np.reshape(x+dx, (-1, 1))\n\n image = map_coordinates(image, indices, order=1, mode='constant').reshape(shape)\n \n implt(image, 'gray')\n \n # Random affine\n center_square = np.float32(shape_size) // 2\n print(center_square)\n square_size = min(shape_size) // 4\n pts1 = np.float32([center_square + square_size,\n [center_square[0]+square_size, \n center_square[1]-square_size],\n center_square - square_size])\n pts2 = pts1 + random_state.uniform(-alpha_affine, alpha_affine, size=pts1.shape).astype(np.float32)\n M = cv2.getAffineTransform(pts1, pts2)\n image = cv2.warpAffine(image, M, shape_size[::-1], borderMode=cv2.BORDER_CONSTANT)\n\n return image",
"_____no_output_____"
],
[
"im = cv2.bitwise_not(image_)[:,:,0]\nim = im[:, np.min(np.where(im > 0)[1]) - 10:]\nimplt(elastic_transform(im, im.shape[1] * 5, im.shape[1] * 0.2, im.shape[1] * 0.001), 'gray')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75569d12bdfc56cde12a9c11279359faca767e5 | 44,247 | ipynb | Jupyter Notebook | Cropping_Plates.ipynb | msaarthak/ALPR-yolov3 | 7b9aa17b6ff3c953e0aa274eee0242266f52c374 | [
"MIT"
] | null | null | null | Cropping_Plates.ipynb | msaarthak/ALPR-yolov3 | 7b9aa17b6ff3c953e0aa274eee0242266f52c374 | [
"MIT"
] | null | null | null | Cropping_Plates.ipynb | msaarthak/ALPR-yolov3 | 7b9aa17b6ff3c953e0aa274eee0242266f52c374 | [
"MIT"
] | null | null | null | 34.220418 | 483 | 0.350803 | [
[
[
"",
"_____no_output_____"
],
[
"! pip install pydrive \nimport os\nfrom pydrive.auth import GoogleAuth\nfrom pydrive.drive import GoogleDrive\nfrom google.colab import auth\nfrom oauth2client.client import GoogleCredentials\n# 1. Authenticate and create the PyDrive client.\nauth.authenticate_user()\ngauth = GoogleAuth()\ngauth.credentials = GoogleCredentials.get_application_default()\ndrive = GoogleDrive(gauth)\nlocal_download_path = os.path.expanduser('~/data')\ntry:\n os.makedirs(local_download_path)\nexcept: pass\nfrom google.colab import drive\ndrive.mount('/content/drive/',force_remount=True)",
"Requirement already satisfied: pydrive in /usr/local/lib/python3.6/dist-packages (1.3.1)\nRequirement already satisfied: oauth2client>=4.0.0 in /usr/local/lib/python3.6/dist-packages (from pydrive) (4.1.3)\nRequirement already satisfied: PyYAML>=3.0 in /usr/local/lib/python3.6/dist-packages (from pydrive) (3.13)\nRequirement already satisfied: google-api-python-client>=1.2 in /usr/local/lib/python3.6/dist-packages (from pydrive) (1.7.11)\nRequirement already satisfied: pyasn1-modules>=0.0.5 in /usr/local/lib/python3.6/dist-packages (from oauth2client>=4.0.0->pydrive) (0.2.8)\nRequirement already satisfied: httplib2>=0.9.1 in /usr/local/lib/python3.6/dist-packages (from oauth2client>=4.0.0->pydrive) (0.11.3)\nRequirement already satisfied: six>=1.6.1 in /usr/local/lib/python3.6/dist-packages (from oauth2client>=4.0.0->pydrive) (1.12.0)\nRequirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from oauth2client>=4.0.0->pydrive) (0.4.8)\nRequirement already satisfied: rsa>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from oauth2client>=4.0.0->pydrive) (4.0)\nRequirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.2->pydrive) (0.0.3)\nRequirement already satisfied: google-auth>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.2->pydrive) (1.7.2)\nRequirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.2->pydrive) (3.0.1)\nRequirement already satisfied: cachetools<3.2,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.4.1->google-api-python-client>=1.2->pydrive) (3.1.1)\nRequirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.4.1->google-api-python-client>=1.2->pydrive) (45.1.0)\nGo to this URL in a browser: 
https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n\nEnter your authorization code:\n··········\nMounted at /content/drive/\n"
],
[
"%cd 'drive/My Drive'",
"/content/drive/My Drive\n"
],
[
"%cd darknet",
"/content/drive/My Drive/darknet\n"
],
[
"import pandas as pd\nimport json\n",
"_____no_output_____"
],
[
"#loading results of YOLOv3 Detection\nwith open('result_plates.json') as file:\n data = json.load(file)\ndf = pd.DataFrame(data)\ndf",
"_____no_output_____"
],
[
"med=[]\ndef check(x):\n for y in x:\n if y['name']==\"NumberPlate\":\n return True\n return False\ndf['plates']=df['objects'].transform(lambda temp:check(temp))",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df=df[df['plates']]",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df['filename']=df['filename'].transform(lambda f:f.split('/')[-1])\nin_book=list(df['filename'])",
"/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"%ls",
"\u001b[0m\u001b[01;34m3rdparty\u001b[0m/ CMakeLists.txt \u001b[01;34mimg2\u001b[0m/ README.md\nappveyor.yml darknet \u001b[01;34minclude\u001b[0m/ result.json\n\u001b[01;34mbackup\u001b[0m/ darknet53.conv.74 json_mjpeg_streams.sh result_plates.json\nbad.list DarknetConfig.cmake.in LICENSE \u001b[01;34mresults\u001b[0m/\n\u001b[01;34mbuild\u001b[0m/ darknet.py Makefile \u001b[01;34mscripts\u001b[0m/\nbuild.ps1 darknet_video.py net_cam_v3.sh \u001b[01;34msrc\u001b[0m/\nbuild.sh \u001b[01;34mdata\u001b[0m/ \u001b[01;34mobj\u001b[0m/ valid.txt\n\u001b[01;34mcfg\u001b[0m/ image_yolov2.sh output.log video_v2.sh\n\u001b[01;34mcmake\u001b[0m/ image_yolov3.sh predictions.jpg video_yolov3.sh\n"
],
[
"%cd data/",
"/content/drive/My Drive/darknet/data\n"
],
[
"%cd ..",
"/content/drive/My Drive/darknet\n"
],
[
"%cd img2/",
"/content/drive/My Drive/darknet/img2\n"
],
[
"import subprocess\nproc=subprocess.Popen('ls', shell=True, stdout=subprocess.PIPE, )\noutput=proc.communicate()[0]\noutput=output.decode('utf-8').split('\\n')\noutput=list(output)",
"_____no_output_____"
],
[
"p=set(output)-set(in_book)",
"_____no_output_____"
],
[
"for x in p:\n %rm $x",
"rm: missing operand\nTry 'rm --help' for more information.\n"
],
[
"med=[]\ndef check2(x):\n for y in x:\n if y['name']==\"NumberPlate\":\n return y\n return False\ndf['plate_data']=df['objects'].transform(lambda temp:check2(temp))",
"/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:7: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n import sys\n"
],
[
"df",
"_____no_output_____"
],
[
"df=df.drop(['frame_id','objects','plates'],axis=1)\n",
"_____no_output_____"
],
[
"df['plate_data'][0]",
"_____no_output_____"
],
[
"data=df",
"_____no_output_____"
],
[
"def details(x,flag_2):\n co=x['relative_coordinates']\n wid=float(co['width'])/2.0\n hi=float(co['height'])/2.0\n if flag_2=='y2':\n return float(co['center_y'])+hi\n if flag_2=='x2':\n return float(co['center_x'])+wid\n if flag_2=='x1':\n return float(co['center_x'])-wid\n if flag_2=='y1':\n return float(co['center_y'])-hi\ndata['x1']=data['plate_data'].transform(lambda f:details(f,'x1')) \ndata['y1']=data['plate_data'].transform(lambda f:details(f,'y1')) \ndata['x2']=data['plate_data'].transform(lambda f:details(f,'x2')) \ndata['y2']=data['plate_data'].transform(lambda f:details(f,'y2')) ",
"_____no_output_____"
],
[
"from PIL import Image\ncount=0\nfor x,y in data.iterrows():\n img = Image.open('/content/drive/My Drive/darknet/img2/'+str(y['filename']))\n dim=img.size\n w,h=dim[0],dim[1]\n x1=y['x1']*w\n x2=y['x2']*w\n y1=y['y1']*h\n y2=y['y2']*h\n area=(x1,y1,x2,y2)\n print(count)\n cropped_img = img.crop(area)\n cropped_img=cropped_img.convert('RGB')\n cropped_img.save(\"/content/drive/My Drive/darknet/im/\"+str(x)+'.jpg')\n count+=1",
"0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\n36\n37\n38\n39\n40\n41\n42\n43\n44\n45\n46\n47\n48\n49\n50\n51\n52\n53\n54\n55\n56\n57\n58\n59\n60\n61\n62\n63\n64\n65\n66\n67\n68\n69\n70\n71\n72\n73\n74\n75\n76\n77\n78\n79\n80\n81\n82\n83\n84\n85\n86\n87\n88\n89\n90\n91\n92\n93\n94\n95\n96\n97\n98\n99\n100\n101\n102\n103\n104\n105\n106\n107\n108\n109\n110\n111\n112\n113\n114\n115\n116\n117\n118\n119\n120\n121\n122\n123\n124\n125\n126\n127\n128\n129\n130\n131\n132\n133\n134\n135\n136\n137\n138\n139\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7556a9f31d422de19067fc47e63e8ed464098f0 | 36,075 | ipynb | Jupyter Notebook | Naval Group/naval-group_selenium_bs4.ipynb | heyakshayhere/bs4 | 7de0ce38dd89013b3959610a273d19b3ea6d741d | [
"MIT"
] | null | null | null | Naval Group/naval-group_selenium_bs4.ipynb | heyakshayhere/bs4 | 7de0ce38dd89013b3959610a273d19b3ea6d741d | [
"MIT"
] | null | null | null | Naval Group/naval-group_selenium_bs4.ipynb | heyakshayhere/bs4 | 7de0ce38dd89013b3959610a273d19b3ea6d741d | [
"MIT"
] | null | null | null | 65.353261 | 4,352 | 0.648926 | [
[
[
"url :\n\nhttps://www.naval-group.com/en/documents",
"_____no_output_____"
]
],
[
[
"import os \nos.environ['KMP_DUPLICATE_LIB_OK']='True'\n\nimport pandas as pd,requests,bs4,re,time,io,pytesseract,easyocr,random,textstat,urllib.request\nfrom pdfminer.high_level import extract_text\nfrom PIL import Image\nfrom pathlib import Path\nfrom pdf2image import convert_from_path\nfrom selenium.webdriver.common.by import By\nfrom goose3 import Goose\nfrom datetime import datetime\nfrom bs4 import BeautifulSoup\nfrom selenium import webdriver\n\nreader = easyocr.Reader(['en'])\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\n%autosave 1",
"_____no_output_____"
],
[
"#driver for operation\nfrom webdriver_manager.chrome import ChromeDriverManager\noption = webdriver.ChromeOptions()\noption.add_argument('headless')\ndriver = webdriver.Chrome(ChromeDriverManager().install(),options=option)",
"\n\n====== WebDriver manager ======\nCurrent google-chrome version is 102.0.5005\nGet LATEST chromedriver version for 102.0.5005 google-chrome\nDriver [C:\\Users\\AKSHAY SATPUTE\\.wdm\\drivers\\chromedriver\\win32\\102.0.5005.61\\chromedriver.exe] found in cache\n"
],
[
"SITE_NAME='Naval-group'\n\nDOMAIN = \"https://www.naval-group.com\"\n\nSITE_LINK=\"https://www.naval-group.com/en/documents\"",
"_____no_output_____"
],
[
"def parse_webpage_bs(search_url):\n \n headers = {\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:90.0) Gecko/20100101 Firefox/90.0\"}\n try:\n site_request = requests.get(search_url, headers=headers, timeout=10)\n except requests.exceptions.RequestException as e:\n print(e)\n site_request = None\n if site_request != None and site_request.status_code==200:\n site_soup = bs4.BeautifulSoup(site_request.content, \"lxml\")\n else:\n site_soup = None\n return site_soup\n\ndef remove_esc_chars(text):\n return text.replace(\"\\n\", \" \").replace(\"\\t\", \" \").replace(\"\\r\", \" \")\n\ndef get_text(link): \n g = Goose()\n article_extract = g.extract(url=link)\n article = remove_esc_chars(article_extract.cleaned_text)\n meta_data = remove_esc_chars(article_extract.meta_description)\n whole_data = meta_data+article\n text = whole_data.strip()\n\n if textstat.lexicon_count(text, removepunct=True) < 5:\n try:\n response = requests.get(link)\n text = remove_esc_chars(extract_text(io.BytesIO(response.content)))\n\n if textstat.lexicon_count(text, removepunct=True) < 5:\n texts = \"\"\n r = requests.get(link)\n filename = Path('temp.pdf')\n filename.write_bytes(r.content)\n\n pages = convert_from_path('temp.pdf', 500)\n for x in pages:\n x.save(\"temp.jpg\")\n output = reader.readtext(\"temp.jpg\")\n for o in output:\n texts += o[1]\n\n text = remove_esc_chars(texts)\n\n if textstat.lexicon_count(text, removepunct=True) < 5:\n texts = \"\"\n soup = parse_webpage_bs(link)\n if soup!= None:\n ps = soup.findAll('p')\n for p in ps:\n texts+= p.text\n\n text = remove_esc_chars(texts)\n except:\n text = \"\"\n \n return text",
"_____no_output_____"
],
[
"article_list = []\n\npagination = 0\nlast_page = 0\n\nwhile pagination <= last_page:\n url = f\"https://www.naval-group.com/en/documents?page={pagination}\"\n driver.get(url)\n \n #accepting cookies\n try:\n driver.find_element(By.XPATH,'''//*[@id=\"orejime\"]/div[1]/div/div/div/ul/li[1]/button''').click()\n except:\n pass\n\n #getting last page count\n if last_page <= 1:\n lp = driver.find_element(By.XPATH,'''//*[@id=\"block-mainpagecontent\"]/div[2]/div/nav/ul/li[2]''').text.split(\"\\n\")[1]\n last_page_count = int(lp.strip())\n last_page = last_page_count - 1\n\n elements = driver.find_elements(By.XPATH,'''//li[@role=\"article\"]''')\n \n #creating empty lists to append data \n published_dates,titles,texts,links,thumbnails,authors = [],[],[],[],[],[]\n \n #links,thumbnails,authors\n for e in range(1,len(elements)+1):\n thumbnail = \"https://upload.wikimedia.org/wikipedia/commons/8/8a/Naval_Group_Logo.png\"\n thumbnails.append(thumbnail)\n link = driver.find_element(By.XPATH,f'''/html/body/div[2]/div/main/div[2]/div[2]/div[2]/div/ul/li[{e}]/div/div[2]/a''').get_attribute(\"href\")\n links.append(link)\n authors.append(SITE_NAME)\n\n #published_dates,titles,texts\n for link in links[:3]:\n driver.get(link)\n published_date = driver.find_element(By.XPATH,'''//*[@id=\"block-headerblock\"]/div/div[3]/p/span[1]''').text.strip()\n published_dates.append(published_date)\n title = driver.find_element(By.XPATH,'''//*[@id=\"block-headerblock\"]/div/div[1]/h1''').text.strip()\n titles.append(title)\n doc_link = driver.find_element(By.XPATH,'''//*[@id=\"block-socialsharedocumentblock\"]/div/div[2]/p/a''').get_attribute(\"href\")\n text = get_text(doc_link)\n if textstat.lexicon_count(text, removepunct=True) < 5: \n soup = parse_webpage_bs(link)\n ps = soup.find('div',{'id' :'block-mainpagecontent'}).text.strip()\n text = remove_esc_chars(ps).strip()\n texts.append(text)\n print(published_date,title)\n \n #zippig all the data togather \n zipped = 
list(zip(published_dates,titles,texts,links,thumbnails,authors))\n\n #unwinding and appending to the main list\n for published_date,title,text,link,thumbnail,author in zipped:\n article = (published_date.strip(),title.strip(),text.strip(),link.strip(),thumbnail.strip(),author.strip())\n article_list.append(article)\n \n pagination +=1",
"09 JUNE 2022 Corporate social responsibility report 2021\n20 MAY 2022 Suppliers - purchasing - quality requirements and forms\n17 MAY 2022 Yearbook 2021\n01 SEPTEMBER 2020 Compliance code of conduct (spanish)\n01 SEPTEMBER 2020 Compliance code of conduct (arabic)\n01 SEPTEMBER 2020 Compliance programme key procedures\n"
],
[
"temp_df = pd.DataFrame(article_list,columns=['date','title','article','url','thumbnail','author'])\ntemp_df.head()",
"_____no_output_____"
],
[
"def see_data(iloc_no=random.randint(0,len(temp_df))-1):\n print(temp_df.iloc[iloc_no]['date'],temp_df.iloc[iloc_no]['title'])\n print(f\"\\n{temp_df.iloc[iloc_no]['author']} {temp_df.iloc[iloc_no]['url']}\")\n urllib.request.urlretrieve(temp_df.iloc[iloc_no]['thumbnail'], \"temp.jpg\")\n display(Image.open(\"temp.jpg\"))\n print(f\"\\n{temp_df.iloc[iloc_no]['article']}\")\n\nsee_data()",
"01 SEPTEMBER 2020 Compliance programme key procedures\n\nNaval-group https://www.naval-group.com/en/compliance-programme-key-procedures\n"
],
[
"#to csv\ntemp_df.to_csv(f'{SITE_NAME} news.csv',index = False)\n\n#to json\ntemp_df.to_json(f'{SITE_NAME} news.json')",
"_____no_output_____"
],
[
"#to get rid of unwanteed trash created by the model use \ndef remove_trash():\n try:\n try:\n os.remove(\"temp.pdf\")\n except:\n pass\n os.remove(\"temp.jpg\")\n print(\"Trash removed successfully\")\n except:\n print(\"No trash found\")\n\nremove_trash()",
"Trash removed successfully\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7556b2969cca6bc76bfb120dbd15c2f7c6ffef4 | 864,753 | ipynb | Jupyter Notebook | Ensemble_Techniques/.ipynb_checkpoints/DecisionTreeRegressor_MPGData-checkpoint.ipynb | Aujasvi-Moudgil/Ensemble-Learning | 0ca2abd57cd2c27fbd09fdc8b59ec07567f21d2d | [
"MIT"
] | 1 | 2020-08-06T09:57:11.000Z | 2020-08-06T09:57:11.000Z | Ensemble_Techniques/DecisionTreeRegressor_MPGData.ipynb | Aujasvi-Moudgil/Ensemble-Learning | 0ca2abd57cd2c27fbd09fdc8b59ec07567f21d2d | [
"MIT"
] | null | null | null | Ensemble_Techniques/DecisionTreeRegressor_MPGData.ipynb | Aujasvi-Moudgil/Ensemble-Learning | 0ca2abd57cd2c27fbd09fdc8b59ec07567f21d2d | [
"MIT"
] | null | null | null | 257.213861 | 745,840 | 0.90667 | [
[
[
"# Analysing car mpg data set using Decision Tree Regressor",
"_____no_output_____"
]
],
[
[
"# To enable plotting graphs in Jupyter notebook\n%matplotlib inline \n",
"_____no_output_____"
],
[
"# Numerical libraries\nimport numpy as np \n\nfrom sklearn.model_selection import train_test_split\n\n# Import Linear Regression machine learning library\nfrom sklearn.tree import DecisionTreeRegressor\n\n# to handle data in form of rows and columns \nimport pandas as pd \n\n# importing ploting libraries\nimport matplotlib.pyplot as plt \n\n#importing seaborn for statistical plots\nimport seaborn as sns\n\nfrom sklearn.utils import resample\n",
"_____no_output_____"
],
[
"# reading the CSV file into pandas dataframe\nmpg_df = pd.read_csv(\"D:\\\\Ml_Data\\car-mpg.csv\") ",
"_____no_output_____"
],
[
"# Check top few records to get a feel of the data structure\nmpg_df.head(50)",
"_____no_output_____"
],
[
"mpg_df.describe().transpose() # horsepower is missing",
"_____no_output_____"
],
[
"temp = pd.DataFrame(mpg_df.hp.str.isdigit()) \ntemp[temp['hp'] == False]",
"_____no_output_____"
],
[
"mpg_df = mpg_df.replace('?', np.nan)",
"_____no_output_____"
],
[
"mpg_df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 398 entries, 0 to 397\nData columns (total 10 columns):\nmpg 398 non-null float64\ncyl 398 non-null int64\ndisp 398 non-null float64\nhp 392 non-null object\nwt 398 non-null int64\nacc 398 non-null float64\nyr 398 non-null int64\norigin 398 non-null int64\ncar_type 398 non-null int64\ncar_name 398 non-null object\ndtypes: float64(3), int64(5), object(2)\nmemory usage: 31.2+ KB\n"
],
[
"mpg_df['hp'] = mpg_df['hp'].astype('float64')",
"_____no_output_____"
],
[
"numeric_cols = mpg_df.drop('car_name', axis=1)\n\n# Copy the 'mpg' column alone into the y dataframe. This is the dependent variable\ncar_names = pd.DataFrame(mpg_df[['car_name']])\n\n\nnumeric_cols = numeric_cols.apply(lambda x: x.fillna(x.median()),axis=0)\nmpg_df = numeric_cols.join(car_names) # Recreating mpg_df by combining numerical columns with car names\n\nmpg_df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 398 entries, 0 to 397\nData columns (total 10 columns):\nmpg 398 non-null float64\ncyl 398 non-null int64\ndisp 398 non-null float64\nhp 398 non-null float64\nwt 398 non-null int64\nacc 398 non-null float64\nyr 398 non-null int64\norigin 398 non-null int64\ncar_type 398 non-null int64\ncar_name 398 non-null object\ndtypes: float64(4), int64(5), object(1)\nmemory usage: 31.2+ KB\n"
]
],
[
[
"## Let us do a pair plot analysis to visually check study the data",
"_____no_output_____"
]
],
[
[
"# This is done using scatter matrix function which creates a dashboard reflecting useful information about the dimensions\n# The result can be stored as a .png file and opened in say, paint to get a larger view \n\nmpg_df_attr = mpg_df.iloc[:, 0:9]\nmpg_df_attr['dispercyl'] = mpg_df_attr['disp'] / mpg_df_attr['cyl']\nsns.pairplot(mpg_df_attr, diag_kind='kde', hue = 'origin') # to plot density curve instead of histogram\n\n#sns.pairplot(mpg_df_attr) # to plot histogram, the default",
"C:\\Users\\Mukesh\\Anaconda3\\lib\\site-packages\\statsmodels\\nonparametric\\kde.py:494: RuntimeWarning: invalid value encountered in true_divide\n binned = fast_linbin(X,a,b,gridsize)/(delta*nobs)\nC:\\Users\\Mukesh\\Anaconda3\\lib\\site-packages\\statsmodels\\nonparametric\\kdetools.py:34: RuntimeWarning: invalid value encountered in double_scalars\n FAC1 = 2*(np.pi*bw/RANGE)**2\nC:\\Users\\Mukesh\\Anaconda3\\lib\\site-packages\\numpy\\core\\_methods.py:26: RuntimeWarning: invalid value encountered in reduce\n return umr_maximum(a, axis, None, out, keepdims)\n"
]
],
[
[
"# Step 5 DecisionTree Regression",
"_____no_output_____"
]
],
[
[
"from scipy.stats import zscore\n\nmpg_df_attr = mpg_df.loc[:, 'mpg':'origin']\nmpg_df_attr_z = mpg_df_attr.apply(zscore)\n\nmpg_df_attr_z.pop('origin') # Remove \"origin\" and \"yr\" columns\nmpg_df_attr_z.pop('yr')\n\narray = mpg_df_attr_z.values\nX = array[:,1:5] # select all rows and first 4 columns which are the attributes\ny = array[:,0] # select all rows and the 0th column which is the classification \"Yes\", \"No\" for diabeties\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=1)\n",
"_____no_output_____"
],
[
"#regressor = DecisionTreeRegressor(random_state=0, max_depth=3)\nregressor = DecisionTreeRegressor(random_state=0)\n\nregressor.fit(X_train , y_train)\nfeature_importances = regressor.feature_importances_\n\n\nfeature_names = mpg_df_attr.columns[1:9]\nprint(feature_names)\n\nk = 8\ntop_k_idx = feature_importances.argsort()[-k:][::-1]\nprint(feature_names[top_k_idx], feature_importances)",
"Index(['cyl', 'disp', 'hp', 'wt', 'acc', 'yr', 'origin'], dtype='object')\nIndex(['cyl', 'hp', 'wt', 'disp'], dtype='object') [0.57854977 0.07257386 0.20872587 0.14015051]\n"
],
[
"print(regressor.score(X_train, y_train))\nprint(regressor.score(X_test, y_test))",
"0.9997388230765724\n0.5694361756850345\n"
],
[
"#Overfit tree with poor performance on test data. Only 56% R^2 !!! The model is unable to explain 44% of the \n# variance in test data!!!",
"_____no_output_____"
],
[
"y_pred = regressor.predict(X_test)",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score\n\nvalues = mpg_df_attr.values # converting the original dataframe into an array of values ",
"_____no_output_____"
],
[
"from sklearn.utils import resample\n\n# configure bootstrap\nn_iterations = 200 # Number of bootstrap samples to create\nn_size = int(len(values) * 1) # picking only 50 % of the given data in every bootstrap sample\n\n# run bootstrap\nstats = list()\nfor i in range(n_iterations):\n\t# prepare train and test sets\n\ttrain = resample(values, n_samples=n_size) # Sampling with replacement \n\ttest = np.array([x for x in values if x.tolist() not in train.tolist()]) # picking rest of the data not considered in sample\n\n # fit model\n\tmodel = DecisionTreeRegressor()\n\tmodel.fit(train[:,:-1], train[:,-1])\n\n \n # evaluate model\n\tpredictions = model.predict(test[:,:-1])\n\tscore = model.score(test[:,:-1], test[:,-1])\n\n\tprint(score)\n\tstats.append(score)\n",
"0.5170738478622987\n0.5445859872611465\n0.4972705812924168\n0.5593311758360302\n0.4335071200684772\n0.5943249363147467\n0.49642951600105795\n0.44972867048537846\n0.2621062992125984\n0.3486736488036083\n0.41342952275249717\n0.3777862595419847\n0.39497749015194167\n0.648964896489649\n0.4389302453818583\n0.6659899142052524\n0.4812030075187969\n0.42129836904381196\n0.31776388378080184\n0.3387270041627501\n0.5328282828282829\n0.3470059674874819\n0.5367170626349893\n0.5538184836745987\n0.5805351521511017\n0.4561892461423424\n0.4159175619381716\n0.6213692946058091\n0.4572984749455338\n0.4780260986950652\n0.49996083653168333\n0.3924050632911392\n0.36297977713578217\n0.44025455284002435\n0.40232995114618586\n0.28934603471451115\n0.3087368690750705\n0.38098168949069083\n0.5543877421805747\n0.5475100758248514\n0.49056186403185387\n0.49001385361348426\n0.5148967551622419\n0.46544255511734656\n0.4069288540741338\n0.36873977086743037\n0.25810899063928605\n0.3917654398003743\n0.3016445145302995\n0.4385361491366856\n0.48288122201738215\n0.49396411092985326\n0.3714384508990318\n0.3108929406264608\n0.49182808716707027\n0.49647783974170817\n0.4177588938499208\n0.5233436783160168\n0.4833844580777095\n0.47436800500900056\n0.3741478231522106\n0.4935350573180486\n0.5532319391634981\n0.5067246835443038\n0.4698643105080466\n0.5677516536773854\n0.2219324553396793\n0.3760757314974183\n0.14613137592689696\n0.4258142340168878\n0.3603100081973322\n0.40813751774168133\n0.3148528405201917\n0.2364016736401673\n0.3399955253933926\n0.42425\n0.411815471520477\n0.3870177587262708\n0.6332931242460796\n0.27574407644134546\n0.5020889286780066\n0.3155351342786976\n0.48255295811338866\n0.2057135046473483\n0.4020338983050848\n0.21789567016403122\n0.44989147518898287\n0.4485027223230489\n0.452455438341215\n0.36375968992248076\n0.558990866896242\n0.3523222606343002\n0.4401305057096248\n0.5267749699157642\n0.49121522693997066\n0.4433198380566801\n0.40435706695005313\n0.5212898497618175\n0.25530603861426804\n0.
662356704878699\n0.5774487471526197\n0.5551481592337624\n0.4456988502562681\n0.22114285714285709\n0.4876788255561382\n0.2703681798018629\n0.49560269011898606\n0.4991933315407368\n0.3171361156171658\n0.32482269503546113\n0.41403194263363735\n0.432780847145488\n0.41333333333333333\n0.39463793535454783\n0.3609318432236738\n0.12991636995942712\n0.31128311522502394\n0.34081854400579514\n0.5763953850951045\n0.4601571268237936\n0.3185022716590945\n0.4700148910247732\n0.46914493143318103\n0.391304347826087\n0.29571159283694615\n0.5414691943127962\n0.3964808043875685\n0.28632719088479397\n0.3575608253772713\n0.44893255701115964\n0.46131325805102463\n0.6172722638899002\n0.47089453641097373\n0.3469643753135976\n0.35016942650725663\n0.24281734978113811\n0.5921579851173441\n0.41912206855081163\n0.27118989405052973\n0.6570432674092597\n0.38918452466561204\n0.2697506758786422\n0.449819019414281\n0.22073602264685055\n0.5100715224054884\n0.4969950352756728\n0.35312697409981053\n0.526922573038124\n0.47302677804668597\n0.25771550195104664\n0.6181163848713248\n0.38921651221566983\n0.4856792979114225\n0.5344426682168291\n0.5514681892332791\n0.3984635679366805\n0.3977995758218452\n0.5182022471910113\n0.586839875741316\n0.5941517519536174\n0.2031249999999999\n0.4754464285714285\n0.30146912185069474\n0.4896481830027508\n0.35230007077140846\n0.4225024092515258\n0.46844660194174764\n0.5379525956085502\n0.487849675049449\n0.4885775003320495\n0.4034133069502845\n0.4003795066413661\n0.17727429955032858\n0.3234454791228597\n0.2370965448599461\n0.4664739884393063\n0.3317865429234338\n0.11416207710464199\n0.22362598144182722\n0.5200453857791225\n0.462163655014598\n0.4289958092408891\n0.10530679933664999\n0.4845959595959597\n0.44757526323594826\n0.41471518987341766\n0.504156010230179\n0.378194589747422\n0.3983286908077994\n0.35726324989075464\n0.20554518794987997\n0.312717071867486\n0.49014778325123143\n0.4151276480173819\n0.4733290488431877\n0.41655616594051886\n0.35799099222055414\n0.323005888650
9636\n0.27654511094001544\n0.2756598240469208\n"
],
[
"from matplotlib import pyplot\n\n# plot scores\npyplot.hist(stats)\npyplot.show()\n# confidence intervals\nalpha = 0.95 # for 95% confidence \np = ((1.0-alpha)/2.0) * 100 # tail regions on right and left .25 on each side indicated by P value (border)\nlower = max(0.0, np.percentile(stats, p)) \np = (alpha+((1.0-alpha)/2.0)) * 100\nupper = min(1.0, np.percentile(stats, p))\nprint('%.1f confidence interval %.1f%% and %.1f%%' % (alpha*100, lower*100, upper*100))",
"_____no_output_____"
],
[
"# Use Ensemble Techniques \n\n#Bagging -\n\n# In the following lines, we call the bagging classifer with oob_score (out of bag score) set to true which false by default\n# This makes the baggingclassifier use the 37% unused data for testing\n# Compare the performance of the BGCL with regularized dt above. \n# Though not required, you can keep separate test data (outside the bootstrap sampling) on which we test the BGCL\n# \n\nfrom sklearn.ensemble import BaggingRegressor\nbgcl = BaggingRegressor(n_estimators=100, max_samples=1.0 , oob_score=True)\n\nbgcl = bgcl.fit(X, y)\n\nprint(bgcl.oob_score_)",
"0.7529779570652158\n"
],
[
"values = mpg_df_attr.values\n\n\n# configure bootstrap\nn_iterations = 500 # Number of bootstrap samples to create\nn_size = int(len(values) * 1) # Define size of each bootstrap sample\n# run bootstrap\nstats = list()\n\n\nfor i in range(n_iterations):\n\n\ttrain = resample(values, n_samples=n_size) # Sampling with replacement \n\ttest = np.array([x for x in values if x.tolist() not in train.tolist()]) # picking rest of the data not considered in sample\n\n\n# evaluate model on test data. Not OOB\n\tbgcl.fit(train[:,:-1], train[:,-1])\n\tscore = bgcl.score(test[:,:-1], test[:,-1])\n\tprint(score)\n\tstats.append(score)\n",
"0.6250622103899841\n0.7109393503880426\n0.7098621110433191\n0.6261018010487119\n0.68408914753572\n0.6916758913072506\n0.5948430014534885\n0.6482826729106629\n0.6544516971279373\n0.5331992214896899\n0.6074344449520329\n0.7765866314697841\n0.6533137890394816\n0.6470252669039146\n0.6657473504273503\n0.5751652877138413\n0.653407800982801\n0.645748143115942\n0.6480659719231764\n0.6557574019245004\n0.6529802516335327\n0.7207085294117648\n0.6517076437057895\n0.7762090402113295\n0.7491041903217087\n0.6088061494796595\n0.6650975073509757\n0.6833117704280156\n0.7237297329877644\n0.7325765080836762\n0.6804320474777448\n0.7003621719927253\n0.695069457082196\n0.6174480000000001\n0.6565367163626262\n0.6461621730694606\n0.622291782502044\n0.5522687750385209\n0.6881604831560284\n0.6579177523927138\n0.6693603868194842\n0.5516924688115561\n0.6793132031703074\n0.5434653629938967\n0.6925240980197125\n0.6911361126551788\n0.5450807392996109\n0.5701587707529767\n0.68744114816088\n0.6261695291679006\n0.6797347435897436\n0.670035632183908\n0.5917564938737041\n0.5405319753481672\n0.6269272740223664\n0.6881159736055776\n0.614125854945055\n0.7085220063191153\n0.6690221474953617\n0.6500945512412477\n0.6497247098646035\n0.6392607523939808\n0.6619626915179277\n0.6377767108938548\n0.6561068110572812\n0.6127047419016076\n0.5454575488599349\n0.6880598538650642\n0.6975697741935485\n0.6612565037685387\n0.6591301600733376\n0.6962711402474954\n0.5348743177570093\n0.5203704346009194\n0.6775081661696938\n0.6345751404494382\n0.709332369643779\n0.5889196288365455\n0.7013056527799288\n0.7570537032067624\n0.6819857635834631\n0.6543132200742043\n0.6699006515463917\n0.570361032938919\n0.7229942645208292\n0.6780855759141173\n0.6299367283950619\n0.8093230849130096\n0.6640361707700367\n0.6297445420326224\n0.6320614630960157\n0.6528906622296173\n0.6061973991740621\n0.6163392782839514\n0.6392263011152416\n0.6234014751552795\n0.6723107333870613\n0.6784989593871946\n0.5533457055214723\n0.5042405989272944\n0.644301781
3567593\n0.754813947368421\n0.3166480305838737\n0.5123288932073138\n0.5683951949420443\n0.7070405351170568\n0.5621637655062025\n0.6171506739409499\n0.7699268128161889\n0.6668329918032787\n0.5791010071561092\n0.7446852902180318\n0.6084172505575138\n0.7145436789772728\n0.5880072587784008\n0.6074034884724981\n0.6961400030641949\n0.601806626506024\n0.5297530823928501\n0.6599305878750766\n0.6333512850228372\n0.7480812652068126\n0.5009994642571565\n0.4876929392446633\n0.5926278606965174\n0.6607338902147972\n0.5907999292564061\n0.5461435769828926\n0.6457332155760738\n0.5196747362829172\n0.5664367632367632\n0.6496601522615317\n0.7162950808151792\n0.8088161801553495\n0.5658341281953453\n0.5443155116387028\n0.7375506651634723\n0.5897837574445046\n0.5996975764740653\n0.692226488527447\n0.7114532012087988\n0.7088166285430134\n0.6961944014294222\n0.65422062418158\n0.6962946792883956\n0.694218281822944\n0.7375206747177735\n0.5699809670465046\n0.7050774167009461\n0.6508390014725474\n0.6520177313736903\n0.7014145569620254\n0.7195788670384502\n0.7112193974272174\n0.6527366062500001\n0.6332675512665862\n0.5687460635661176\n0.6125329389620864\n0.6590992122107336\n0.7006832665682761\n0.7286810793237972\n0.7336801228640023\n0.6671702933151433\n0.6977760765550238\n0.7667992840778924\n0.5735668311944718\n0.7635296944024207\n0.6149474593495936\n0.6301818383167219\n0.6312468715777177\n0.6210786309674158\n0.5307446216069068\n0.617543240258716\n0.5688762663885579\n0.617257916986933\n0.6478270425856567\n0.6096232206759443\n0.6603047438788661\n0.7132918551719308\n0.7115376712328766\n0.5677308022922638\n0.5824931173495564\n0.6207530779753762\n0.5596304560716283\n0.6924340577716643\n0.660968730066686\n0.6625186764455058\n0.6962465037651759\n0.6339098341599503\n0.5968725768690494\n0.6631859583409465\n0.7407276218220341\n0.7512673779527559\n0.6703461773700305\n0.6263786900765986\n0.7103191550451189\n0.7023312538878521\n0.6994874800213107\n0.6588266730141458\n0.6008166736842104\n0.693218214260224\n0
.785821052631579\n0.7199009837467922\n0.6585218183111616\n0.7226773359153469\n0.6753883116883117\n0.5668797271829848\n0.5924207792207792\n0.6217361702127661\n0.6988981333713309\n0.7694140316205533\n0.7204445096742187\n0.6269691134588884\n0.7152505055776632\n0.5672443699313787\n0.6595044008875741\n0.6877473085866852\n0.6992975883867946\n0.6453291293850305\n0.7129663924458116\n0.665823113964687\n0.713512830482115\n0.65238803733516\n0.5930319910686788\n0.7556811333794057\n0.7081927591008553\n0.708059170212766\n0.6727116129032258\n0.7255653240201785\n0.6235643709907341\n0.7437610948600085\n0.597318018018018\n0.7124264639639639\n0.6181236289972507\n0.7860985893416927\n0.7289121374779683\n0.6467655523438481\n0.5653858751481629\n0.5901482597431853\n0.6373409267660402\n0.6985087534338109\n0.6895111689156235\n0.756877367896311\n0.6498193845124602\n0.6253850844004658\n0.6536563454759107\n0.7190627534774594\n0.638076471395253\n0.69895738736103\n0.7200707191780823\n0.5729500000000001\n0.732812710611883\n0.6704132083571837\n0.739491475042064\n0.6098439095334204\n0.7031395130434782\n0.667675239540083\n0.7077126412724989\n0.7461002195389681\n0.6872771360929766\n0.7279378787878787\n0.6929675903678789\n0.6213555905943648\n0.6482442947702061\n0.6976088947990544\n0.5820108052728639\n0.6674918236192416\n0.6924668306133095\n0.7704056166727784\n0.6636279837518464\n0.6797292550977945\n0.5961751340774026\n0.6301969392792495\n0.688354820415879\n0.7109408935208805\n0.5647851425280738\n0.6914629820749593\n0.6182740750670241\n0.6136166828699203\n0.6093923795476892\n0.6048118919302228\n0.6794366299832374\n0.7420202563502283\n0.5772474801629851\n0.7248433076869757\n0.6927859664685532\n0.7190191322214166\n0.5500730263157895\n0.6564784477945282\n0.6179945068664171\n0.7025496996996996\n0.5818253070310102\n0.5773043937439536\n0.6733917445742906\n0.6414594018624642\n0.6719086252189141\n0.7559051216300506\n0.5609404740515513\n0.7055575526932085\n0.7105272525849335\n0.6376487533720265\n0.63924585017835
91\n0.6616023186237845\n0.6924525573770492\n0.681980312093628\n0.6468719512195122\n0.6696687593895785\n0.6754261363636365\n0.7033196092977251\n0.6023656067445566\n0.6548401909454882\n0.6021386298842865\n0.6798391435464415\n0.680731550802139\n0.8028586826347306\n0.7380794170403588\n0.5779721078431374\n0.5859506154137473\n0.7189661384046487\n0.6533513885701171\n0.7244320527392589\n0.6544620031796503\n0.513241302087499\n0.7176143222850985\n0.6616418114833866\n0.57267168593708\n0.5621928474534468\n0.6652938687561213\n0.7700493546404426\n0.7006281787196207\n0.7230025787965615\n0.6987739356669821\n0.6571499645440363\n0.6338883043302307\n0.6733821808510638\n0.6240704584527221\n0.622211394120038\n0.6380207617152163\n0.6860600906835487\n0.6565846637426901\n0.6167184052156469\n0.6384462382536624\n0.6405130585839223\n0.5819168116755543\n0.6692716303708064\n0.651526124075128\n0.5594034245642701\n0.6277958541815583\n0.7441705882352941\n0.6792384732111592\n0.6793904415372036\n0.6320693745373798\n0.7099123892008933\n0.6671957777928141\n0.7595562131877603\n0.711763619402985\n0.7569004021937843\n0.5546281651376146\n0.6107639743402828\n0.6925192632646164\n0.6071229586582316\n0.7325321126242179\n0.6821584116214334\n0.5364795369987597\n0.5791370361484325\n0.6947521834061134\n0.7387725490196079\n0.6902931062714917\n0.6939303992740472\n0.6734019162735849\n0.6865464350297081\n0.5647742082738945\n0.5815337254651236\n0.6478897471910112\n0.6946491717142231\n0.6749641861219196\n0.7193119516116684\n0.7254479571984436\n0.6021590712342321\n0.6354300876537338\n0.7181311320754717\n0.6501668099331424\n0.6835206831358709\n0.6587669946846718\n0.6498513117754731\n0.7851254528377835\n0.6575952431740615\n0.6475389589337175\n0.6330947554398363\n0.6732818375532714\n0.6551408458244112\n0.6776149632653059\n0.6174785805264646\n0.5603253704801423\n0.5953006365740741\n0.6973047250000001\n0.6180606552786334\n0.5916784073889166\n0.500181404189294\n0.594937344206801\n0.5930007850320553\n0.7188324575586096\n0.5890
145601700458\n0.7226092294725013\n0.6755012336975678\n0.7763674984596426\n0.4548069989243457\n0.7170208924949291\n0.5148948979591838\n0.6594599782194392\n0.6528523652365237\n0.6497255102040815\n0.6138925909295015\n0.639969140523723\n0.7129925833552113\n0.6655684729064041\n0.5004177009873062\n0.6713461893764434\n0.5051012987012986\n0.6329367647058823\n0.6809138873484908\n0.6976160714285714\n0.7187934490027564\n0.7388242252291576\n0.6800372286744092\n0.7215468164794008\n0.7897192961347571\n0.6836789929742388\n0.6967360106455576\n0.7160995508982035\n0.6514249738574154\n0.6705556733543583\n0.6392570997771077\n0.6790764617691154\n"
],
[
"from matplotlib import pyplot\n\n# plot scores\npyplot.hist(stats)\npyplot.show()\n# confidence intervals\nalpha = 0.95 # for 95% confidence \np = ((1.0-alpha)/2.0) * 100 # tail regions on right and left .25 on each side indicated by P value (border)\nlower = max(0.0, np.percentile(stats, p)) \np = (alpha+((1.0-alpha)/2.0)) * 100\nupper = min(1.0, np.percentile(stats, p))\nprint('%.1f confidence interval %.1f%% and %.1f%%' % (alpha*100, lower*100, upper*100))",
"_____no_output_____"
],
[
"#Long Term performance on OOB data rather than a separate dataset\n\n\nvalues = mpg_df_attr_z.values # This is already done, showing it here for ease of reference\n\n\n\n# configure bootstrap\nn_iterations = 1000 # Number of bootstrap samples to create\nn_size = int(len(values) * 1) # picking only 50 % of the given data in every bootstrap sample\n\n# run bootstrap\nstats = list()\nfor i in range(n_iterations):\n\n\tbgcl.fit(values[:,:-1], values[:,-1])\n\tscore = bgcl.oob_score_\n\tprint(score)\n\tstats.append(score)\n",
"0.7053111895311244\n0.7092893372120896\n0.701888005900472\n0.7094249387485012\n0.7068182236224241\n0.7040045244423432\n0.7060921339476998\n0.715317385569022\n0.7091536566061147\n0.7057323291540045\n0.701839335319881\n0.7027610656770884\n0.7080203259730112\n0.7097463978890463\n0.7122935055358901\n0.7086449574874552\n0.7156065620944365\n0.7086584615532484\n0.719113358446394\n0.7107065127737924\n0.7047416806092748\n0.714772250072214\n0.7119715307180806\n0.7107818603969993\n0.7051439555629688\n0.69812156261218\n0.7178485244537611\n0.7154615090607206\n0.7067127407911074\n0.7110236069952598\n0.6953305403380505\n0.7103166762849475\n0.7109692094396367\n0.7190680349272662\n0.6996337558761374\n0.7056993221457677\n0.7059310968991228\n0.7059375395573136\n0.7188154065019943\n0.7134215274457928\n0.708043184845629\n0.7075346648820612\n0.7160410614752041\n0.7049908012088932\n0.7000376310363711\n0.7031866520321071\n0.7057412600164914\n0.701826952939131\n0.7077144355720034\n0.7105100654445486\n0.7019961456022394\n0.699093668078484\n0.7102322526495339\n0.7022226863693402\n0.7036990885497311\n0.7076345392064209\n0.7132882805530998\n0.7180323542101432\n0.7070899650175614\n0.7018721071301299\n0.7100845607758144\n0.7051875723931859\n0.7114257118841765\n0.7064704832405286\n0.7104781273247814\n0.7208591843355887\n0.7115113851833123\n0.7057086453681535\n0.7100022731954221\n0.7098920220962074\n0.7155253031704357\n0.7043532154249095\n0.7044376186679207\n0.7107034810014119\n0.7010060285501769\n0.7077527688158641\n0.7040258389586439\n0.7070745921945985\n0.706799060896806\n0.7013185302716513\n0.7149305199630688\n0.7044481123183062\n0.7013954044779143\n0.7088892075892049\n0.7107081600622533\n0.7070619928583137\n0.7089256543277581\n0.7142114910066857\n0.7077266296345026\n0.7076163501219195\n0.7068186592838197\n0.7006638467672869\n0.7101811385361145\n0.7132789439531584\n0.7050092433572892\n0.7059091793199768\n0.7123451089910107\n0.7169118995993549\n0.7044181046911373\n0.6931324482094123\n0.70351754
99320518\n0.7103485119366049\n0.7116454650094097\n0.705426300286421\n0.7134147447321759\n0.7064044859682701\n0.711212492127154\n0.7088946383588373\n0.7090708770714558\n0.7022086134905923\n0.7123219302528941\n0.7025196486076721\n0.7105568773391093\n0.7123673025223767\n0.7061684562215024\n0.7054695864158449\n0.7152818049842853\n0.7008041787240451\n0.7079283572610688\n0.7089042633697413\n0.7127125139888546\n0.7096654059659708\n0.7045219931538023\n0.7036367647104325\n0.7048938150090195\n0.7094667238549388\n0.7087835650815555\n0.7054252457677199\n0.7108051571255452\n0.7062220728856583\n0.7033467933040111\n0.7048493425489473\n0.7124800985429414\n0.718106107027503\n0.7155475447084692\n0.7123205621336692\n0.7073844380169164\n0.7005422091260298\n0.7085996831360964\n0.7075486294005617\n0.7124682887998137\n0.7047431355709601\n0.7068379297250251\n0.7074073217151011\n0.7117445041755777\n0.7180249748391865\n0.7095876278192373\n0.7112985460087231\n0.7035442091031757\n0.7011782208570333\n0.7057939186824911\n0.7146911169105125\n0.714932177763506\n0.7111153257688836\n0.7130345241876257\n0.7063218488821752\n0.693166798016573\n0.7126140687553133\n0.7157423540373522\n0.714642018001816\n0.7107681661485152\n0.7137301112860543\n0.7122531009025881\n0.7134916469798469\n0.7179144921248547\n0.707988806120618\n0.7002260733735611\n0.7008127319663445\n0.703905562400525\n0.7074010692054882\n0.7066556539170433\n0.7057309465972842\n0.7154877525392862\n0.7118710555056841\n0.7038023697994591\n0.7087513424659281\n0.7128458987694217\n0.7141612042145222\n0.7005685228141414\n0.7065874708912427\n0.7105315693898697\n0.710757353636756\n0.7144400031457022\n0.7116572400309431\n0.7103442875732764\n0.7214382237570891\n0.7062995414956474\n0.7047684832079715\n0.7122864785666733\n0.7122538193464167\n0.7050079233574976\n0.6992795924419088\n0.7060075955160197\n0.7052557967238653\n0.7123661364489705\n0.698181754437621\n0.7177382330521085\n0.7037508569584326\n0.711223186503902\n0.7093002550214955\n0.7109800467506653\n0
.714345163657498\n0.7087401801519584\n0.7104235298089798\n0.7097418182902961\n0.7047797034962087\n0.7108446496863001\n0.7146813268562042\n0.7096802888298976\n0.7156862267665696\n0.702370648863939\n0.7047929631411287\n0.7063662303676901\n0.7088134412465794\n0.7041584100120994\n0.7033770659182277\n0.7121323962671458\n0.7109219310959936\n0.708244953917418\n0.7121885295412188\n0.7150996863455201\n0.7058645169816014\n0.7099086699323113\n0.7117679561513093\n0.7164619806893555\n0.7051977142593462\n0.708755677011086\n0.7158065328882419\n0.7071288791243926\n0.7043886845828418\n0.7165500652859064\n0.7041545465777364\n0.7066935653520823\n0.699610795239181\n0.7056054453481935\n0.7098046044747865\n0.7046641731079673\n0.7083154826494179\n0.7136464425542501\n0.708637924433972\n0.708806248070268\n0.7045899734211478\n0.706524326021532\n0.7034162054039614\n0.7145596640021182\n0.7052165606710374\n0.7035344282232516\n0.7081965670986101\n0.7079357415555967\n0.7053187703063134\n0.7051865441246045\n0.7024133415933362\n0.7063474869753962\n0.7024566805676493\n0.712276571441298\n0.6951316566420123\n0.7123512187056742\n0.698931436248876\n0.7057979354434496\n0.7079605250237867\n0.7065180807505959\n0.71032472993441\n0.7083574777431056\n0.7018888282916063\n0.7117699822653589\n0.7076884948824269\n0.7163629980727109\n0.7069939063582945\n0.708607528486147\n0.7059576161501756\n0.713106495617077\n0.7028601848829028\n0.6982950521347759\n0.7097506198205177\n0.7048391565354477\n0.710690015895167\n0.7103862892301311\n0.7075812497208642\n0.7054493436258445\n0.7145867077334663\n0.7069634639623967\n0.7069850476302488\n0.7012999347003447\n0.7154384073958442\n0.7081574575667878\n0.7034399155911601\n0.7058600957398453\n0.7083888732552072\n0.7055714378917037\n0.7034731127343838\n0.7023579207290402\n0.711226274767517\n0.7060888875506142\n0.7022207730834203\n0.7072815719106132\n0.7108793600220978\n0.7059196211669122\n0.7025335697815553\n0.7096339040825179\n0.7047932288626175\n0.7079366602248434\n0.711228349276186
8\n0.701381913175368\n0.7035859663066468\n0.7091696087096382\n0.7120878954242955\n0.703607158730154\n0.694081914627129\n0.708870783230439\n0.7059065781531932\n0.7066432019110859\n0.7095403422058848\n0.7095056783714467\n0.7121428803218415\n0.707168600396331\n0.7080976947476746\n0.7130945148668459\n0.7105558641554843\n0.7167953013031194\n0.7165992385429837\n0.7085858210911413\n0.7049825265213235\n0.7098437080926191\n0.71465443375032\n0.7055953487787197\n0.6945845474029034\n0.6999860639125216\n0.7091335479593264\n0.7007640621468217\n0.6920252474128659\n0.711616099998017\n0.7110025293986748\n0.7090638048224602\n0.700116571884689\n0.6995027295396801\n0.7125339392528612\n0.7051014244962461\n0.7054551646619849\n0.7007759630300459\n0.7108301218961652\n0.7050016705649638\n0.7079390182194076\n0.7171702911079356\n0.7075663766431652\n0.703809855711786\n0.6986241765124622\n0.70478650219158\n0.7081218651520002\n0.7059003548166125\n0.7082097417502282\n0.7066374587285271\n0.7025322148355848\n0.7070026945187351\n0.7061255998419006\n0.7111360479444404\n0.7017904898845353\n0.7010859280423112\n0.707674459028281\n0.7028278394580921\n0.7106159339421102\n0.6997119905867986\n0.711474454079907\n0.7023782963708721\n0.7114765201714252\n0.7131924883564539\n0.7173590751871557\n0.7011753783809955\n0.7159511928976998\n0.6998073000853042\n0.7096158517767588\n0.6994763926850431\n0.7148122517866535\n0.7079637491367716\n0.7101736214129701\n0.7124908643637857\n0.7136917402899765\n0.7078533706602\n0.7054714241510431\n0.7102918974328936\n0.7072546379631492\n0.7082917567542565\n0.7162276856248994\n0.7104529295951302\n0.7093104679764262\n0.7071958461375714\n0.7126599462381501\n0.7118416359299724\n0.708771165363411\n0.697213628874098\n0.7081751672661566\n0.7119675353916085\n0.7108438563921483\n0.704817345359148\n0.7017913830044189\n0.6995875765745048\n0.7165638066251925\n0.7019529154018055\n0.71125499527774\n0.7102386864829391\n0.7133678383997428\n0.7122176863442576\n0.7082880806440974\n0.7015833839258575\
n0.7168751146345382\n0.7224852671761486\n0.7110654599607693\n0.7032353201935686\n0.7015266135878874\n0.7072507027732893\n0.7149116641122397\n0.7089033479907072\n0.7033250980739206\n0.7064326345863361\n0.7059615545354415\n0.7135135863562265\n0.7087845058776971\n0.702358654819409\n0.7054069154243441\n0.6915199316429543\n0.7067868930336533\n0.7058211392398338\n0.7055723306122635\n0.7110933439315161\n0.7114295035040554\n0.698739525212211\n0.713803065198509\n0.7040618285964249\n0.7026202481803294\n0.7023040866776151\n0.6982645512412202\n0.7034181709580953\n0.7099599685767719\n0.713192938469846\n0.7088074194867302\n0.7093597370169291\n"
],
[
"from matplotlib import pyplot\n\n# plot scores\npyplot.hist(stats)\npyplot.show()\n# confidence intervals\nalpha = 0.95 # for 95% confidence \np = ((1.0-alpha)/2.0) * 100 # tail regions on right and left .25 on each side indicated by P value (border)\nlower = max(0.0, np.percentile(stats, p)) \np = (alpha+((1.0-alpha)/2.0)) * 100\nupper = min(1.0, np.percentile(stats, p))\nprint('%.1f confidence interval %.1f%% and %.1f%%' % (alpha*100, lower*100, upper*100))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7556d37161d5887ff62cf1d24d41da7d08d59ac | 11,519 | ipynb | Jupyter Notebook | 01_INRIX_data_preprocessing_journal18/INRIX_data_preprocessing_04_change_file_names.ipynb | jingzbu/InverseVITraffic | c0d33d91bdd3c014147d58866c1a2b99fb8a9608 | [
"MIT"
] | null | null | null | 01_INRIX_data_preprocessing_journal18/INRIX_data_preprocessing_04_change_file_names.ipynb | jingzbu/InverseVITraffic | c0d33d91bdd3c014147d58866c1a2b99fb8a9608 | [
"MIT"
] | null | null | null | 01_INRIX_data_preprocessing_journal18/INRIX_data_preprocessing_04_change_file_names.ipynb | jingzbu/InverseVITraffic | c0d33d91bdd3c014147d58866c1a2b99fb8a9608 | [
"MIT"
] | null | null | null | 140.47561 | 9,705 | 0.844778 | [
[
[
"# cf. http://world77.blog.51cto.com/414605/552326\n\n#!/usr/bin/env python\nimport os\nimport shutil\nimport time\n\ndata_dir = \"/home/jzh/Dropbox/Research/Data-driven_estimation_inverse_optimization/INRIX/Raw_data\"\n\n# 判断是否存在路径\nif os.path.isdir(data_dir): \n print (\"Directory exists\")\nelse:\n print (\"Directory does not exist; please input right dir\") # 如果不存在,就提示\n\nfilelist = []\n\nfilelist = os.listdir(data_dir) # 得到文件名\n\nprint(filelist)\n\ndata_dir = \"/home/jzh/Dropbox/Research/Data-driven_estimation_inverse_optimization/INRIX/Raw_data/\"\n\nfor i in filelist:\n NewFile = i.replace(\"Link\", \"link\") \n # print NewFile \n shutil.move(data_dir + i, data_dir + NewFile) ",
"Directory exists\n['filtered_INRIX_attribute_table_journal_link_15.xlsx', 'filtered_INRIX_attribute_table_ext_link_28.xlsx', 'filtered_INRIX_attribute_table_journal_link_120.xlsx', 'filtered_INRIX_attribute_table_ext_link_29.xlsx', 'filtered_INRIX_attribute_table_journal_link_89.xlsx', 'filtered_INRIX_attribute_table_journal_link_32.xlsx', 'filtered_INRIX_attribute_table.xlsx', 'filtered_INRIX_attribute_table_journal_link_38.xlsx', 'filtered_INRIX_attribute_table_ext_link_20.xlsx', 'filtered_INRIX_attribute_table_journal_link_86.xlsx', 'filtered_INRIX_attribute_table_ext_link_30.xlsx', 'filtered_INRIX_attribute_table_journal_link_29.xlsx', 'MA_highway_network_topology_.pdf', 'filtered_INRIX_attribute_table_journal_link_23.xlsx', 'filtered_INRIX_attribute_table_journal_link_20.xlsx', 'filtered_INRIX_attribute_table_journal_link_39.xlsx', 'filtered_INRIX_attribute_table_link_5.xlsx', 'filtered_INRIX_attribute_table_journal_link_30.xlsx', 'filtered_INRIX_attribute_table_journal_link_21.xlsx', 'filtered_INRIX_attribute_table_journal_link_65.xlsx', 'filtered_INRIX_attribute_table_journal_link_87.xlsx', 'filtered_INRIX_attribute_table_journal_link_91.xlsx', 'filtered_INRIX_attribute_table_journal_link_117.xlsx', 'filtered_INRIX_attribute_table_journal_link_70.xlsx', 'MA_highway_network_submap.png', 'filtered_INRIX_attribute_table_journal_link_94.xlsx', 'filtered_INRIX_attribute_table_journal_link_115.xlsx', 'filtered_INRIX_attribute_table_journal_link_109.xlsx', 'filtered_INRIX_attribute_table_journal_link_53.xlsx', 'filtered_INRIX_attribute_table_journal_link_24.xlsx', 'filtered_INRIX_attribute_table_journal_link_16.xlsx', 'filtered_INRIX_attribute_table_journal_link_88.xlsx', 'filtered_INRIX_attribute_table_journal_link_59.xlsx', 'roadinv_id_to_tmc_lookup.xlsx', 'filtered_INRIX_attribute_table_journal_link_96.xlsx', 'filtered_INRIX_attribute_table_ext_link_6.xlsx', 'filtered_INRIX_attribute_table_ext_link_10.xlsx', 
'filtered_INRIX_attribute_table_journal_link_101.xlsx', 'filtered_INRIX_attribute_table_journal_link_76.xlsx', 'filtered_INRIX_attribute_table_journal_link_58.xlsx', 'filtered_INRIX_attribute_table_journal_link_97.xlsx', 'filtered_INRIX_attribute_table_journal_link_57.xlsx', 'filtered_INRIX_attribute_table_ext_link_26.xlsx', 'filtered_INRIX_attribute_table_journal_link_2.xlsx', 'filtered_INRIX_attribute_table_journal_link_44.xlsx', 'filtered_INRIX_attribute_table_ext_link_9.xlsx', 'filtered_INRIX_attribute_table_journal_link_95.xlsx', 'filtered_INRIX_attribute_table_ext_link_12.xlsx', 'filtered_INRIX_attribute_table_journal_link_127.xlsx', 'filtered_INRIX_attribute_table_journal_link_48.xlsx', 'filtered_INRIX_attribute_table_ext_link_17.xlsx', 'filtered_INRIX_attribute_table_journal_link_90.xlsx', 'filtered_INRIX_attribute_table_ext_link_7.xlsx', 'filtered_INRIX_attribute_table_journal_link_69.xlsx', 'filtered_INRIX_attribute_table_ext_link_25.xlsx', 'filtered_INRIX_attribute_table_ext_link_24.xlsx', 'filtered_INRIX_attribute_table_journal_link_68.xlsx', 'filtered_INRIX_attribute_table_journal_link_27.xlsx', 'filtered_INRIX_attribute_table_journal_link_121.xlsx', 'filtered_INRIX_attribute_table_journal_link_102.xlsx', 'filtered_INRIX_attribute_table_journal_link_119.xlsx', 'filtered_INRIX_attribute_table_ext_link_18.xlsx', 'filtered_INRIX_attribute_table_journal_link_106.xlsx', 'filtered_INRIX_attribute_table_ext_link_8.xlsx', 'filtered_INRIX_attribute_table_journal_link_37.xlsx', 'filtered_INRIX_attribute_table_journal_link_46.xlsx', 'filtered_INRIX_attribute_table_journal_link_40.xlsx', 'filtered_INRIX_attribute_table_journal_link_105.xlsx', 'filtered_INRIX_attribute_table_journal_link_7.xlsx', 'filtered_INRIX_attribute_table_link_11.xlsx', 'filtered_INRIX_attribute_table_journal_link_9.xlsx', 'filtered_INRIX_attribute_table_link_1.xlsx', 'filtered_INRIX_attribute_table_ext_link_14.xlsx', 'filtered_INRIX_attribute_table_journal_link_10.xlsx', 
'filtered_INRIX_attribute_table_ext_link_13.xlsx', 'filtered_INRIX_attribute_table_ext_link_32.xlsx', 'filtered_INRIX_attribute_table_journal_link_99.xlsx', 'filtered_INRIX_attribute_table_link_3.xlsx', 'filtered_INRIX_attribute_table_link_4.xlsx', 'filtered_INRIX_attribute_table_journal_link_129.xlsx', 'filtered_INRIX_attribute_table_journal_link_5.xlsx', 'filtered_INRIX_attribute_table_journal_link_25.xlsx', 'filtered_INRIX_attribute_table_journal_link_77.xlsx', 'filtered_INRIX_attribute_table_journal_link_104.xlsx', 'filtered_INRIX_attribute_table_journal_link_56.xlsx', 'filtered_INRIX_attribute_table_journal_link_118.xlsx', 'filtered_INRIX_attribute_table_journal_link_43.xlsx', 'filtered_INRIX_attribute_table_ext_link_5.xlsx', 'filtered_INRIX_attribute_table_journal_link_63.xlsx', 'filtered_INRIX_attribute_table_journal_link_114.xlsx', 'filtered_INRIX_attribute_table_journal_link_41.xlsx', 'filtered_INRIX_attribute_table_link_2.xlsx', 'filtered_INRIX_attribute_table_journal_link_4.xlsx', 'filtered_INRIX_attribute_table_journal_link_100.xlsx', 'filtered_INRIX_attribute_table_journal_link_31.xlsx', 'filtered_INRIX_attribute_table_journal_link_33.xlsx', 'filtered_INRIX_attribute_table_journal_link_75.xlsx', 'filtered_INRIX_attribute_table_ext_link_23.xlsx', 'filtered_INRIX_attribute_table_journal_link_73.xlsx', 'filtered_INRIX_attribute_table_journal_link_126.xlsx', 'filtered_INRIX_attribute_table_ext_link_11.xlsx', 'filtered_INRIX_attribute_table_journal_link_116.xlsx', 'filtered_INRIX_attribute_table_ext_link_16.xlsx', 'filtered_INRIX_attribute_table_journal_link_67.xlsx', 'filtered_INRIX_attribute_table_journal_link_28.xlsx', 'filtered_INRIX_attribute_table_journal_link_72.xlsx', 'filtered_INRIX_attribute_table_journal_link_60.xlsx', 'filtered_INRIX_attribute_table_journal_link_92.xlsx', 'filtered_INRIX_attribute_table_journal_link_18.xlsx', 'filtered_INRIX_attribute_table_journal_link_49.xlsx', 'filtered_INRIX_attribute_table_journal_link_8.xlsx', 
'filtered_INRIX_attribute_table_journal_link_111.xlsx', 'filtered_INRIX_attribute_table_journal_link_81.xlsx', 'filtered_INRIX_attribute_table_journal_link_64.xlsx', 'filtered_INRIX_attribute_table_link_6.xlsx', 'filtered_INRIX_attribute_table_journal_link_51.xlsx', 'capacity_attribute_table_add_column_idx.xlsx', 'filtered_INRIX_attribute_table_ext_link_4.xlsx', 'filtered_INRIX_attribute_table_journal_link_79.xlsx', 'filtered_INRIX_attribute_table_link_10.xlsx', 'filtered_INRIX_attribute_table_journal_link_107.xlsx', 'filtered_INRIX_attribute_table_journal_link_124.xlsx', 'filtered_INRIX_attribute_table_ext_link_22.xlsx', 'filtered_INRIX_attribute_table_journal_link_6.xlsx', 'filtered_INRIX_attribute_table_journal_link_78.xlsx', 'filtered_INRIX_attribute_table_journal_link_80.xlsx', 'filtered_INRIX_attribute_table_journal_link_3.xlsx', 'filtered_INRIX_attribute_table_link_7.xlsx', 'filtered_INRIX_attribute_table_journal_link_50.xlsx', 'filtered_INRIX_attribute_table_journal_link_125.xlsx', 'filtered_INRIX_attribute_table_journal_link_26.xlsx', 'filtered_INRIX_attribute_table_link_12.xlsx', 'filtered_INRIX_attribute_table_ext_link_31.xlsx', 'filtered_INRIX_attribute_table_journal_link_19.xlsx', 'capacity_attribute_table.xlsx', 'filtered_INRIX_attribute_table_journal_link_128.xlsx', 'filtered_INRIX_attribute_table_journal_link_108.xlsx', 'filtered_INRIX_attribute_table_journal_link_84.xlsx', 'filtered_INRIX_attribute_table_journal_link_103.xlsx', 'filtered_INRIX_attribute_table_journal_link_71.xlsx', 'filtered_INRIX_attribute_table_journal_link_82.xlsx', 'filtered_INRIX_attribute_table_journal_link_122.xlsx', 'filtered_INRIX_attribute_table_journal_link_112.xlsx', 'filtered_INRIX_attribute_table_ext_link_27.xlsx', 'filtered_INRIX_attribute_table_journal_link_12.xlsx', 'filtered_INRIX_attribute_table_journal_link_55.xlsx', 'filtered_INRIX_attribute_table_journal_link_93.xlsx', 'filtered_INRIX_attribute_table_ext_link_15.xlsx', 'filtered_capacity_attribute_table.xlsx', 
'filtered_INRIX_attribute_table_journal_link_110.xlsx', 'filtered_INRIX_attribute_table_journal_link_13.xlsx', 'filtered_INRIX_attribute_table_journal_link_1.xlsx', 'filtered_INRIX_attribute_table_journal_link_61.xlsx', 'filtered_INRIX_attribute_table_journal_link_47.xlsx', 'filtered_INRIX_attribute_table_journal_link_22.xlsx', 'filtered_INRIX_attribute_table_ext_link_19.xlsx', 'filtered_INRIX_attribute_table_journal_link_45.xlsx', 'filtered_INRIX_attribute_table_link_9.xlsx', 'filtered_INRIX_attribute_table_ext_link_1.xlsx', 'filtered_INRIX_attribute_table_journal_link_36.xlsx', 'filtered_INRIX_attribute_table_journal_link_35.xlsx', 'filtered_INRIX_attribute_table_journal_link_62.xlsx', 'filtered_INRIX_attribute_table_ext_link_21.xlsx', 'filtered_INRIX_attribute_table_ext_link_2.xlsx', 'filtered_INRIX_attribute_table_journal_link_34.xlsx', 'filtered_INRIX_attribute_table_journal_link_17.xlsx', 'filtered_INRIX_attribute_table_journal_link_74.xlsx', 'filtered_INRIX_attribute_table_ext.xlsx', '.~lock.filtered_INRIX_attribute_table.xlsx#', 'filtered_INRIX_attribute_table_journal_link_85.xlsx', 'filtered_INRIX_attribute_table_journal_link_83.xlsx', 'filtered_INRIX_attribute_table_journal_link_11.xlsx', 'filtered_INRIX_attribute_table_journal_link_113.xlsx', 'filtered_INRIX_attribute_table_journal_link_54.xlsx', 'filtered_INRIX_attribute_table_journal_link_42.xlsx', 'filtered_INRIX_attribute_table_journal_link_52.xlsx', 'filtered_INRIX_attribute_table_journal_link_123.xlsx', 'filtered_INRIX_attribute_table_journal_link_98.xlsx', 'filtered_INRIX_attribute_table_journal.xlsx', 'filtered_INRIX_attribute_table_journal_link_66.xlsx', 'filtered_INRIX_attribute_table_journal_link_14.xlsx', 'filtered_INRIX_attribute_table_ext_link_3.xlsx', 'filtered_INRIX_attribute_table_link_8.xlsx']\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
e755700f2d82f53d000e56f8c9fb52e8b0c054f4 | 6,114 | ipynb | Jupyter Notebook | nbs/00_data_zoo.ipynb | artemlops/customer-segmentation-toolkit | 32074ee141f715c4cf9052885595df42866b69f9 | [
"Apache-2.0"
] | null | null | null | nbs/00_data_zoo.ipynb | artemlops/customer-segmentation-toolkit | 32074ee141f715c4cf9052885595df42866b69f9 | [
"Apache-2.0"
] | 2 | 2021-06-07T10:10:31.000Z | 2021-07-20T13:39:40.000Z | nbs/00_data_zoo.ipynb | artemlops/customer-segmentation-toolkit | 32074ee141f715c4cf9052885595df42866b69f9 | [
"Apache-2.0"
] | null | null | null | 42.755245 | 1,959 | 0.51701 | [
[
[
"# default_exp data_zoo",
"_____no_output_____"
]
],
[
[
"# Data Zoo\n> Download datasets produced within current project",
"_____no_output_____"
]
],
[
[
"# export\nimport logging\nimport pandas as pd\nfrom pathlib import Path\nfrom typing import List\nfrom os.path import normpath",
"_____no_output_____"
],
[
"# export\nBASE_URL = 'https://raw.githubusercontent.com/artemlops/customer-segmentation-toolkit/master'\nSUPPORTED_SUFFIXES = {'.csv'}\nENCODING = \"ISO-8859-1\"\n\ndef download_data_csv(path_relative: str,\n base_url: str = BASE_URL,\n encoding: str = ENCODING,\n datetime_columns: List[str] = ()) -> pd.DataFrame:\n path_relative = Path(path_relative)\n if path_relative.suffix not in SUPPORTED_SUFFIXES:\n raise ValueError(f\"Can't download data {path_relative}: not a: {SUPPORTED_SUFFIXES}\")\n url = f'{base_url}/{normpath(path_relative)}'\n logging.info(f\"Downloading dataset '{url}'\")\n df = pd.read_csv(url, encoding=encoding)\n for column in datetime_columns:\n df[column] = pd.to_datetime(df[column])\n return df",
"_____no_output_____"
],
[
"DATA = 'data/output/04_data_analyse_customers/no_live_data__cleaned__purchase_clusters__train__customer_clusters.csv'\n\ndf1 = download_data_csv(DATA)\ndf2 = download_data_csv(f'/{DATA}')\ndf3 = download_data_csv(f'/../{DATA}')\ndf4 = download_data_csv(f'/../././{DATA}')\ndf5 = download_data_csv(f'////{DATA}')\n\nassert df1.shape == df2.shape == df3.shape == df4.shape == df5.shape",
"_____no_output_____"
],
[
"try:\n df = download_data_csv(\"not-found\")\nexcept:\n pass\nelse:\n assert False, \"should not be here\"",
"_____no_output_____"
],
[
"df = download_data_csv('data/data.csv', datetime_columns=['InvoiceDate'])\ndf.head()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e75575ec8057e6e3694e1c6f0a88b5484ae398d5 | 116,819 | ipynb | Jupyter Notebook | mavenn/examples/models/train_mpsa_ge_additive.ipynb | jbkinney/mavenn | 28c827cf6dc37f949b345391716b4f10fb7a6e5e | [
"MIT"
] | 12 | 2020-09-15T04:20:48.000Z | 2022-02-12T00:51:05.000Z | mavenn/examples/models/train_mpsa_ge_additive.ipynb | jbkinney/mavenn | 28c827cf6dc37f949b345391716b4f10fb7a6e5e | [
"MIT"
] | 12 | 2020-06-07T21:15:59.000Z | 2022-03-03T18:10:46.000Z | mavenn/examples/models/train_mpsa_ge_additive.ipynb | jbkinney/mavenn | 28c827cf6dc37f949b345391716b4f10fb7a6e5e | [
"MIT"
] | 1 | 2022-01-04T18:22:27.000Z | 2022-01-04T18:22:27.000Z | 122.32356 | 52,752 | 0.790479 | [
[
[
"# Standard imports\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport time\n\n# Insert path to mavenn beginning of path\nimport os\nimport sys\n\n# Load mavenn\nimport mavenn\nprint(mavenn.__path__)",
"['/Users/jkinney/github/mavenn/mavenn']\n"
],
[
"# Load example data\ndata_df = mavenn.load_example_dataset('mpsa')\n\n# Separate test from data_df\nix_test = data_df['set']=='test'\ntest_df = data_df[ix_test].reset_index(drop=True)\nprint(f'test N: {len(test_df):,}')\n\n# Remove test data from data_df\ndata_df = data_df[~ix_test].reset_index(drop=True)\nprint(f'training + validation N: {len(data_df):,}')\ndata_df.head(10)",
"test N: 6,249\ntraining + validation N: 24,234\n"
],
[
"# Get sequence length\nL = len(data_df['x'][0])\n\n# Define model\nmodel = mavenn.Model(L=L,\n alphabet='rna',\n gpmap_type='additive', \n regression_type='GE',\n ge_noise_model_type='SkewedT',\n ge_heteroskedasticity_order=2)\n",
"_____no_output_____"
],
[
"# Set training data\nmodel.set_data(x=data_df['x'],\n y=data_df['y'],\n validation_flags=(data_df['set']=='validation'),\n shuffle=True)",
"N = 24,234 observations set as training data.\nUsing 24.8% for validation.\nData shuffled.\nTime to set data: 0.48 sec.\n"
],
[
"# Fit model to data\nmodel.fit(learning_rate=.001,\n epochs=1000,\n batch_size=200,\n early_stopping=True,\n early_stopping_patience=30,\n linear_initialization=False)",
"Epoch 1/1000\n92/92 [==============================] - 1s 6ms/step - loss: 268.2306 - I_var: -0.5132 - val_loss: 254.5117 - val_I_var: -0.4241\nEpoch 2/1000\n92/92 [==============================] - 0s 3ms/step - loss: 246.9376 - I_var: -0.3570 - val_loss: 237.5490 - val_I_var: -0.3057\nEpoch 3/1000\n92/92 [==============================] - 0s 3ms/step - loss: 232.0611 - I_var: -0.2567 - val_loss: 224.8442 - val_I_var: -0.2179\nEpoch 4/1000\n92/92 [==============================] - 0s 2ms/step - loss: 221.3263 - I_var: -0.1895 - val_loss: 216.0564 - val_I_var: -0.1578\nEpoch 5/1000\n92/92 [==============================] - 0s 3ms/step - loss: 214.1755 - I_var: -0.1331 - val_loss: 210.2031 - val_I_var: -0.1188\nEpoch 6/1000\n92/92 [==============================] - 0s 2ms/step - loss: 209.3592 - I_var: -0.1024 - val_loss: 205.8030 - val_I_var: -0.0901\nEpoch 7/1000\n92/92 [==============================] - 0s 3ms/step - loss: 205.4109 - I_var: -0.0753 - val_loss: 202.4029 - val_I_var: -0.0684\nEpoch 8/1000\n92/92 [==============================] - 0s 3ms/step - loss: 202.1129 - I_var: -0.0515 - val_loss: 199.5084 - val_I_var: -0.0505\nEpoch 9/1000\n92/92 [==============================] - 0s 2ms/step - loss: 198.8590 - I_var: -0.0312 - val_loss: 196.4095 - val_I_var: -0.0302\nEpoch 10/1000\n92/92 [==============================] - 0s 3ms/step - loss: 196.2547 - I_var: -0.0116 - val_loss: 194.4191 - val_I_var: -0.0170\nEpoch 11/1000\n92/92 [==============================] - 0s 3ms/step - loss: 194.2773 - I_var: -0.0030 - val_loss: 194.1011 - val_I_var: -0.0152\nEpoch 12/1000\n92/92 [==============================] - 0s 2ms/step - loss: 193.1258 - I_var: 0.0085 - val_loss: 191.8575 - val_I_var: 2.8401e-04\nEpoch 13/1000\n92/92 [==============================] - 0s 2ms/step - loss: 191.7923 - I_var: 0.0179 - val_loss: 190.8446 - val_I_var: 0.0076\nEpoch 14/1000\n92/92 [==============================] - 0s 2ms/step - loss: 190.7292 - I_var: 0.0251 - val_loss: 189.8521 
- val_I_var: 0.0150\nEpoch 15/1000\n92/92 [==============================] - 0s 3ms/step - loss: 189.7500 - I_var: 0.0287 - val_loss: 188.8520 - val_I_var: 0.0221\nEpoch 16/1000\n92/92 [==============================] - 0s 2ms/step - loss: 188.6816 - I_var: 0.0383 - val_loss: 187.7283 - val_I_var: 0.0309\nEpoch 17/1000\n92/92 [==============================] - 0s 2ms/step - loss: 187.7262 - I_var: 0.0525 - val_loss: 186.6064 - val_I_var: 0.0395\nEpoch 18/1000\n92/92 [==============================] - 0s 2ms/step - loss: 186.6794 - I_var: 0.0539 - val_loss: 185.5329 - val_I_var: 0.0478\nEpoch 19/1000\n92/92 [==============================] - 0s 2ms/step - loss: 185.5726 - I_var: 0.0607 - val_loss: 184.2486 - val_I_var: 0.0577\nEpoch 20/1000\n92/92 [==============================] - 0s 2ms/step - loss: 184.4433 - I_var: 0.0691 - val_loss: 183.0619 - val_I_var: 0.0672\nEpoch 21/1000\n92/92 [==============================] - 0s 2ms/step - loss: 183.3029 - I_var: 0.0811 - val_loss: 181.6407 - val_I_var: 0.0784\nEpoch 22/1000\n92/92 [==============================] - 0s 2ms/step - loss: 182.0850 - I_var: 0.0898 - val_loss: 180.4285 - val_I_var: 0.0884\nEpoch 23/1000\n92/92 [==============================] - 0s 2ms/step - loss: 180.8802 - I_var: 0.0926 - val_loss: 178.9515 - val_I_var: 0.1000\nEpoch 24/1000\n92/92 [==============================] - 0s 2ms/step - loss: 179.8844 - I_var: 0.1059 - val_loss: 177.7772 - val_I_var: 0.1096\nEpoch 25/1000\n92/92 [==============================] - 0s 2ms/step - loss: 178.7890 - I_var: 0.1095 - val_loss: 176.6228 - val_I_var: 0.1188\nEpoch 26/1000\n92/92 [==============================] - 0s 2ms/step - loss: 177.9755 - I_var: 0.1236 - val_loss: 175.7617 - val_I_var: 0.1260\nEpoch 27/1000\n92/92 [==============================] - 0s 3ms/step - loss: 177.3794 - I_var: 0.1180 - val_loss: 176.4082 - val_I_var: 0.1231\nEpoch 28/1000\n92/92 [==============================] - 0s 2ms/step - loss: 176.8279 - I_var: 0.1240 - val_loss: 
174.5007 - val_I_var: 0.1367\nEpoch 29/1000\n92/92 [==============================] - 0s 2ms/step - loss: 176.5177 - I_var: 0.1382 - val_loss: 174.1222 - val_I_var: 0.1402\nEpoch 30/1000\n92/92 [==============================] - 0s 2ms/step - loss: 176.2588 - I_var: 0.1341 - val_loss: 173.8210 - val_I_var: 0.1430\nEpoch 31/1000\n92/92 [==============================] - 0s 2ms/step - loss: 175.9686 - I_var: 0.1373 - val_loss: 173.5596 - val_I_var: 0.1451\nEpoch 32/1000\n92/92 [==============================] - 0s 2ms/step - loss: 175.8260 - I_var: 0.1356 - val_loss: 173.4511 - val_I_var: 0.1462\nEpoch 33/1000\n92/92 [==============================] - 0s 3ms/step - loss: 175.6408 - I_var: 0.1370 - val_loss: 173.2629 - val_I_var: 0.1482\nEpoch 34/1000\n92/92 [==============================] - 0s 2ms/step - loss: 175.5785 - I_var: 0.1405 - val_loss: 173.1040 - val_I_var: 0.1493\nEpoch 35/1000\n92/92 [==============================] - 0s 2ms/step - loss: 175.3913 - I_var: 0.1398 - val_loss: 172.9339 - val_I_var: 0.1510\nEpoch 36/1000\n92/92 [==============================] - 0s 2ms/step - loss: 175.2706 - I_var: 0.1440 - val_loss: 172.7683 - val_I_var: 0.1522\nEpoch 37/1000\n92/92 [==============================] - 0s 2ms/step - loss: 175.1754 - I_var: 0.1433 - val_loss: 172.7124 - val_I_var: 0.1527\nEpoch 38/1000\n92/92 [==============================] - 0s 3ms/step - loss: 175.0405 - I_var: 0.1395 - val_loss: 172.5327 - val_I_var: 0.1545\nEpoch 39/1000\n92/92 [==============================] - 0s 3ms/step - loss: 174.9081 - I_var: 0.1429 - val_loss: 172.4553 - val_I_var: 0.1554\nEpoch 40/1000\n92/92 [==============================] - 0s 2ms/step - loss: 174.8007 - I_var: 0.1499 - val_loss: 172.4006 - val_I_var: 0.1560\nEpoch 41/1000\n92/92 [==============================] - 0s 2ms/step - loss: 174.7791 - I_var: 0.1504 - val_loss: 172.1709 - val_I_var: 0.1577\nEpoch 42/1000\n92/92 [==============================] - 0s 2ms/step - loss: 174.5209 - I_var: 0.1463 - 
val_loss: 172.2369 - val_I_var: 0.1571\nEpoch 43/1000\n92/92 [==============================] - 0s 2ms/step - loss: 174.3712 - I_var: 0.1436 - val_loss: 171.9542 - val_I_var: 0.1598\nEpoch 44/1000\n92/92 [==============================] - 0s 2ms/step - loss: 174.3566 - I_var: 0.1508 - val_loss: 171.7974 - val_I_var: 0.1610\nEpoch 45/1000\n92/92 [==============================] - 0s 2ms/step - loss: 174.1443 - I_var: 0.1417 - val_loss: 171.6707 - val_I_var: 0.1619\nEpoch 46/1000\n92/92 [==============================] - 0s 2ms/step - loss: 174.0996 - I_var: 0.1452 - val_loss: 171.6974 - val_I_var: 0.1617\nEpoch 47/1000\n92/92 [==============================] - 0s 2ms/step - loss: 174.0608 - I_var: 0.1518 - val_loss: 171.5171 - val_I_var: 0.1631\nEpoch 48/1000\n92/92 [==============================] - 0s 2ms/step - loss: 173.9358 - I_var: 0.1555 - val_loss: 171.3940 - val_I_var: 0.1641\nEpoch 49/1000\n92/92 [==============================] - 0s 2ms/step - loss: 174.0512 - I_var: 0.1567 - val_loss: 171.3353 - val_I_var: 0.1649\nEpoch 50/1000\n92/92 [==============================] - 0s 3ms/step - loss: 173.8512 - I_var: 0.1576 - val_loss: 171.2834 - val_I_var: 0.1653\nEpoch 51/1000\n92/92 [==============================] - 0s 2ms/step - loss: 173.7057 - I_var: 0.1546 - val_loss: 171.3958 - val_I_var: 0.1654\nEpoch 52/1000\n92/92 [==============================] - 0s 2ms/step - loss: 173.5615 - I_var: 0.1498 - val_loss: 171.1150 - val_I_var: 0.1669\nEpoch 53/1000\n92/92 [==============================] - 0s 3ms/step - loss: 173.5603 - I_var: 0.1606 - val_loss: 171.0643 - val_I_var: 0.1678\nEpoch 54/1000\n92/92 [==============================] - 0s 2ms/step - loss: 173.4648 - I_var: 0.1581 - val_loss: 170.9795 - val_I_var: 0.1685\nEpoch 55/1000\n92/92 [==============================] - 0s 2ms/step - loss: 173.3506 - I_var: 0.1607 - val_loss: 170.8067 - val_I_var: 0.1694\nEpoch 56/1000\n92/92 [==============================] - 0s 2ms/step - loss: 173.3000 - I_var: 0.1589 
- val_loss: 171.0574 - val_I_var: 0.1672\nEpoch 57/1000\n92/92 [==============================] - 0s 2ms/step - loss: 173.2437 - I_var: 0.1498 - val_loss: 170.7196 - val_I_var: 0.1702\nEpoch 58/1000\n92/92 [==============================] - 0s 2ms/step - loss: 173.0173 - I_var: 0.1478 - val_loss: 170.9501 - val_I_var: 0.1681\n"
],
[
"# Save model\nmodel.save('mpsa_ge_additive')",
"Model saved to these files:\n\tmpsa_ge_additive.pickle\n\tmpsa_ge_additive.h5\n"
],
[
"# Load model\nmodel = mavenn.load('mpsa_ge_additive')",
"Model loaded from these files:\n\tmpsa_ge_additive.pickle\n\tmpsa_ge_additive.h5\n"
],
[
"# Get x and y\nx_test = test_df['x'].values\ny_test = test_df['y'].values",
"_____no_output_____"
],
[
"# Show training history\nprint('On test data:')\n\n# Compute variational information\nI_var, dI_var = model.I_variational(x=x_test, y=y_test)\nprint(f'I_var_test: {I_var:.3f} +- {dI_var:.3f} bits') \n\n# Compute predictive information\nI_pred, dI_pred = model.I_predictive(x=x_test, y=y_test)\nprint(f'I_pred_test: {I_pred:.3f} +- {dI_pred:.3f} bits')\n\nI_var_hist = model.history['I_var']\nval_I_var_hist = model.history['val_I_var']\n\nfig, ax = plt.subplots(1,1,figsize=[4,4])\nax.plot(I_var_hist, label='I_var_train')\nax.plot(val_I_var_hist, label='I_var_val')\nax.axhline(I_var, color='C2', linestyle=':', label='I_var_test')\nax.axhline(I_pred, color='C3', linestyle=':', label='I_pred_test')\nax.legend()\nax.set_xlabel('epochs')\nax.set_ylabel('bits')\nax.set_title('training hisotry')\nax.set_ylim([0, I_pred*1.2]);",
"On test data:\nI_var_test: 0.222 +- 0.028 bits\nI_pred_test: 0.260 +- 0.015 bits\n"
],
[
"# Predict latent phentoype values (phi) on test data\nphi_test = model.x_to_phi(x_test)\n\n# Predict measurement values (yhat) on test data\nyhat_test = model.x_to_yhat(x_test)\n\n# Set phi lims and create grid in phi space\nphi_lim = [min(phi_test)-.5, max(phi_test)+.5]\nphi_grid = np.linspace(phi_lim[0], phi_lim[1], 1000)\n\n# Compute yhat each phi gridpoint\nyhat_grid = model.phi_to_yhat(phi_grid)\n\n# Compute 90% CI for each yhat\nq = [0.05, 0.95] #[0.16, 0.84]\nyqs_grid = model.yhat_to_yq(yhat_grid, q=q)\n\n# Create figure\nfig, ax = plt.subplots(1, 1, figsize=[4, 4])\n\n# Illustrate measurement process with GE curve\nax.scatter(phi_test, y_test, color='C0', s=5, alpha=.2, label='test data')\nax.plot(phi_grid, yhat_grid, linewidth=2, color='C1',\n label='$\\hat{y} = g(\\phi)$')\nax.plot(phi_grid, yqs_grid[:, 0], linestyle='--', color='C1', label='68% CI')\nax.plot(phi_grid, yqs_grid[:, 1], linestyle='--', color='C1')\nax.set_xlim(phi_lim)\nax.set_xlabel('latent phenotype ($\\phi$)')\nax.set_ylabel('measurement ($y$)')\nax.set_title('measurement process')\nax.legend()\n\n# Fix up plot\nfig.tight_layout()\nplt.show()",
"_____no_output_____"
],
[
"# Test simulate_data\nsim_df = model.simulate_dataset(N=1000)\nsim_df.head()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75577db66693e7c0538d77a854a55ee49bd30a0 | 11,670 | ipynb | Jupyter Notebook | examples/notebook/contrib/slitherlink.ipynb | jspricke/or-tools | 45770b833997f827d322e929b1ed4781c4e60d44 | [
"Apache-2.0"
] | 1 | 2020-07-18T16:24:09.000Z | 2020-07-18T16:24:09.000Z | examples/notebook/contrib/slitherlink.ipynb | jspricke/or-tools | 45770b833997f827d322e929b1ed4781c4e60d44 | [
"Apache-2.0"
] | 1 | 2021-02-23T10:22:55.000Z | 2021-02-23T13:57:14.000Z | examples/notebook/contrib/slitherlink.ipynb | jspricke/or-tools | 45770b833997f827d322e929b1ed4781c4e60d44 | [
"Apache-2.0"
] | 1 | 2021-03-16T14:30:59.000Z | 2021-03-16T14:30:59.000Z | 38.9 | 89 | 0.492888 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7558010f76662449f347c07fe19d4f5f22d37d1 | 29,111 | ipynb | Jupyter Notebook | DCGAN_2_tensorflow_turtorial_bad_result.ipynb | kuThang/GAN | 53eaaceecbd219c76ccebc62dd4e680ee0ec83a7 | [
"Apache-2.0"
] | null | null | null | DCGAN_2_tensorflow_turtorial_bad_result.ipynb | kuThang/GAN | 53eaaceecbd219c76ccebc62dd4e680ee0ec83a7 | [
"Apache-2.0"
] | null | null | null | DCGAN_2_tensorflow_turtorial_bad_result.ipynb | kuThang/GAN | 53eaaceecbd219c76ccebc62dd4e680ee0ec83a7 | [
"Apache-2.0"
] | null | null | null | 63.98022 | 16,206 | 0.77469 | [
[
[
"import os\nimport shutil\nfrom sklearn.utils import shuffle\nimport numpy as np\nimport json\nimport cv2\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"!pip install imageio",
"Requirement already satisfied: imageio in /usr/local/lib/python3.6/dist-packages (2.4.1)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from imageio) (1.16.4)\nRequirement already satisfied: pillow in /usr/local/lib/python3.6/dist-packages (from imageio) (4.3.0)\nRequirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from pillow->imageio) (0.46)\n"
],
[
"import tensorflow as tf\ntf.enable_eager_execution()\n\nimport glob\nimport imageio\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport PIL\nimport time\n\nfrom IPython import display",
"_____no_output_____"
],
[
"(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()\n\ntrain_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')\ntrain_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]\nBUFFER_SIZE = 60000\nBATCH_SIZE = 256",
"_____no_output_____"
],
[
"train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)",
"_____no_output_____"
],
[
"def make_generator_model():\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))\n model.add(tf.keras.layers.BatchNormalization())\n model.add(tf.keras.layers.LeakyReLU())\n \n model.add(tf.keras.layers.Reshape((7, 7, 256)))\n assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size\n \n model.add(tf.keras.layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))\n assert model.output_shape == (None, 7, 7, 128) \n model.add(tf.keras.layers.BatchNormalization())\n model.add(tf.keras.layers.LeakyReLU())\n\n model.add(tf.keras.layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))\n assert model.output_shape == (None, 14, 14, 64) \n model.add(tf.keras.layers.BatchNormalization())\n model.add(tf.keras.layers.LeakyReLU())\n\n model.add(tf.keras.layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))\n assert model.output_shape == (None, 28, 28, 1)\n \n return model",
"_____no_output_____"
],
[
"\ndef make_discriminator_model():\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same'))\n model.add(tf.keras.layers.LeakyReLU())\n model.add(tf.keras.layers.Dropout(0.3))\n \n model.add(tf.keras.layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))\n model.add(tf.keras.layers.LeakyReLU())\n model.add(tf.keras.layers.Dropout(0.3))\n \n model.add(tf.keras.layers.Flatten())\n model.add(tf.keras.layers.Dense(1))\n \n return model",
"_____no_output_____"
],
[
"generator = make_generator_model()\ndiscriminator = make_discriminator_model()",
"_____no_output_____"
],
[
"def generator_loss(generated_output):\n return tf.losses.sigmoid_cross_entropy(tf.ones_like(generated_output), generated_output)",
"_____no_output_____"
],
[
"def discriminator_loss(real_output, generated_output):\n # [1,1,...,1] with real output since it is true and we want our generated examples to look like it\n real_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.ones_like(real_output), logits=real_output)\n\n # [0,0,...,0] with generated images since they are fake\n generated_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.zeros_like(generated_output), logits=generated_output)\n\n total_loss = real_loss + generated_loss\n\n return total_loss",
"_____no_output_____"
],
[
"generator_optimizer = tf.train.AdamOptimizer(1e-4)\ndiscriminator_optimizer = tf.train.AdamOptimizer(1e-4)",
"_____no_output_____"
],
[
"EPOCHS = 50\nnoise_dim = 100\nnum_examples_to_generate = 16\n\n# We'll re-use this random vector used to seed the generator so\n# it will be easier to see the improvement over time.\nrandom_vector_for_generation = tf.random_normal([num_examples_to_generate,\n noise_dim])",
"_____no_output_____"
],
[
"def train_step(images):\n # generating noise from a normal distribution\n noise = tf.random_normal([BATCH_SIZE, noise_dim])\n \n with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:\n generated_images = generator(noise, training=True)\n \n real_output = discriminator(images, training=True)\n generated_output = discriminator(generated_images, training=True)\n \n gen_loss = generator_loss(generated_output)\n disc_loss = discriminator_loss(real_output, generated_output)\n \n gradients_of_generator = gen_tape.gradient(gen_loss, generator.variables)\n gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.variables)\n \n generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.variables))\n discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.variables))",
"_____no_output_____"
],
[
"\ntrain_step = tf.contrib.eager.defun(train_step)",
"WARNING: Logging before flag parsing goes to stderr.\nW0705 07:24:43.309714 139963691317120 lazy_loader.py:50] \nThe TensorFlow contrib module will not be included in TensorFlow 2.0.\nFor more information, please see:\n * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md\n * https://github.com/tensorflow/addons\n * https://github.com/tensorflow/io (for I/O related ops)\nIf you depend on functionality not listed there, please file an issue.\n\n"
],
[
"def train(dataset, epochs): \n for epoch in range(epochs):\n start = time.time()\n \n for images in dataset:\n train_step(images)\n\n display.clear_output(wait=True)\n generate_and_save_images(generator,\n epoch + 1,\n random_vector_for_generation)\n \n \n \n print ('Time taken for epoch {} is {} sec'.format(epoch + 1,\n time.time()-start))\n # generating after the final epoch\n display.clear_output(wait=True)\n generate_and_save_images(generator,\n epochs,\n random_vector_for_generation)",
"_____no_output_____"
],
[
"def generate_and_save_images(model, epoch, test_input):\n # make sure the training parameter is set to False because we\n # don't want to train the batchnorm layer when doing inference.\n predictions = model(test_input, training=False)\n\n fig = plt.figure(figsize=(4,4))\n \n for i in range(predictions.shape[0]):\n plt.subplot(4, 4, i+1)\n plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')\n plt.axis('off')\n \n plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))\n plt.show()",
"_____no_output_____"
],
[
"\n%%time\ntrain(train_dataset, EPOCHS)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75598d5c93b1cf271ad32c17834a1cb248d63e7 | 24,647 | ipynb | Jupyter Notebook | nbs/03_augment.ipynb | tijeco/berteome | a6f8e5acbbc5a66acc53b69ca721d4bb6fb3e29f | [
"Apache-2.0"
] | 1 | 2022-01-05T23:58:31.000Z | 2022-01-05T23:58:31.000Z | nbs/03_augment.ipynb | tijeco/berteome | a6f8e5acbbc5a66acc53b69ca721d4bb6fb3e29f | [
"Apache-2.0"
] | 4 | 2021-09-22T20:31:07.000Z | 2021-11-02T22:18:24.000Z | nbs/03_augment.ipynb | tijeco/berteome | a6f8e5acbbc5a66acc53b69ca721d4bb6fb3e29f | [
"Apache-2.0"
] | null | null | null | 34.616573 | 2,102 | 0.43198 | [
[
[
"Alright, so I think I have a pretty neat idea for this package, so I'm just going to document a few of my thoughts. This is an augment package, where the idea is to take a protein sequence and generate a bunch of point mutational variants. I think this would be quite handy in certain applications, for instance antimicrobial peptide classification. There aren't too many known examples of antimicrobial peptides, but tons and tons of examples of non-antimicrobial peptides. If this were a computer vision problem, say we were trying to build a classifier for four leaf clovers vs three leaf clovers, you'd have a much easier time finding examples of three leaf vs four leaf. To compensate for this so that you have an evenish sampling you can augment the dataset byt manipulating the pictures you do have and jittering the images about to get kind of pseudoreplicates. Sure, there's lot's of ways to approach that problem, but for protein sequences it is not so easy to augment those sequences. If they are all in the same gene family, you can make a multiple sequence alignment of them and reconstruct their ancestral states using phylogenetic approaches, this would double the data size! But if that isn't the case then what can you do? Well you can take the single point mutation approach. Say for a protein of length L, generate N variants where you randomly mutate a single amino residue. That is sort of fine in practice, but we know that some mutations are more feleterious than others, if only there were a way to determine which mutations weren't likely to be deleterious... Aha! So I think we can take the dataframes generated from ESM or prot_bert and generate the top k most likely mutational variants per residue. Let's think about how that could potentially augment the data set. So for a single peptide of length L, if we take the top k substitutions per residue, that gives L*k variants, for the toy MENDEL example where L is 6, this would generate 30 variants! 
Which is quite substantial, for larger proteins I think you could get quite a reasonably large number of variants.\n\nAlright, so here is what I plan to do, the prediction dataframe is pretty core to this, though for demonstration purposes, I'll probably just handmake a MENDEL dataframe. From that the main function will be an augmentPep function which will take a prediction df from a single peptide, as well as k to dictate how many mutational variants per residue. An obvious caveat is the fact that the wild type residue will probably often be in the top k possible substitutions, so there will need to be a small check to ensure we are getting the top k residues that are not wt. Beyond that, two additional functions come to mind that can be built on top of this, augmentFasta and augmentPeps, which just take augmentPep and apply it to either a fasta file or an iterable list of peptides. I should also think about how to name mutants, I feel like just posResidue + ResidueSub would be sufficient to tag that to the original id.\n\nLet's try to jump into this, and make a dummy MENDEL dataframe.",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"mendelDF = pd.DataFrame.from_dict({\n\t#\t A,C,D,E,F,G,H,I,K,L,M,N,P,Q,R,S,T,V,W,Y\n\t\"M\":[0,0,0,0,0,0,0,0,0,8,9,7,0,0,0,0,0,0,0,0],\n\t\"E\":[0,0,7,8,9,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n\t\"N\":[0,0,0,0,0,0,0,0,0,9,8,7,0,0,0,0,0,0,0,0],\n\t\"D\":[0,7,8,9,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n\t# \"E\":[0,0,9,8,7,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], # redundant... for the example\n\t\"L\":[0,0,0,0,0,0,0,0,8,9,7,0,0,0,0,0,0,0,0,0]\n}, orient=\"index\", columns=list(\"ACDEFGHIKLMNPQRSTVWY\")).reset_index().rename(columns={\"index\": \"wt\"})\n\n\nmendelDF",
"_____no_output_____"
]
],
[
[
"Alright so this very much a barebones dummy df, but I think it will work.\n\nSo for this example I would only want to sample k=2, to go through each residue it should be something like\n\n1. M: L,N\n2. E: D,F\n3. N: L,M\n4. D: C,E\n<!-- 5. E: D,F -->\n6. L: K,M\n\nSo that should give us 12 variant sequences! (Edit:10..)\n\nSo let me go ahead and think about what the logic needs to be for this. There is certainly a chance that I will be able to use some fancy pandas stuff to do this, but for now I will just focus on what might work.\n\nSo I definitely know that I need to grab the top k values in each column.. wait that's not right. No, I need to get the top values for each row! Then the tricky part is figuring out if the column name is the same value as the index.\n\n\nOh gravy, there definitely has to be some sort of SQLy or panday way to get this\n",
"_____no_output_____"
]
],
[
[
"def augmentPep(df, k):\n\tseqList = list(df[\"wt\"])\n\tvariantDict = {}\n\tfor index, row in df.iterrows():\n\t\tscores = row[list(\"ACDEFGHIKLMNPQRSTVWY\")]\n\t\ttop_k_scores = scores.where(scores.index != row[\"wt\"]).sort_values(ascending=False).head(k)\n\n\t\ttop_k_subs = list(top_k_scores.index)\n\t\tfor res in top_k_subs:\n\t\t\t\n\t\t\tseqCopy = seqList.copy()\n\t\t\tseqCopy[index] = res\n\t\t\tvariantDict[f\"{index}x{res}\"] = ''.join(seqCopy) \n\n\treturn variantDict\n",
"_____no_output_____"
],
[
"assert augmentPep(mendelDF, 2) == {'0xL': 'LENDL',\n '0xN': 'NENDL',\n '1xF': 'MFNDL',\n '1xD': 'MDNDL',\n '2xL': 'MELDL',\n '2xM': 'MEMDL',\n '3xE': 'MENEL',\n '3xC': 'MENCL',\n '4xK': 'MENDK',\n '4xM': 'MENDM'}",
"_____no_output_____"
]
],
[
[
"ALright!! In principle, that's working!!\n\nThe output structure needs to be modified so that the mutational variant is annotated as \"pos_res\".\n\nI think I like this..\n\nLet's see how it works on esm/prot_bert dataframes",
"_____no_output_____"
]
],
[
[
"from berteome import prot_bert",
"Some weights of the model checkpoint at Rostlab/prot_bert were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']\n- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n"
],
[
"mendel_prot_bert_DF = prot_bert.bertPredictionDF(\"MENDEL\")",
"_____no_output_____"
],
[
"mendel_prot_bert_DF",
"_____no_output_____"
],
[
"augmentPep(mendel_prot_bert_DF, 2)",
"_____no_output_____"
],
[
"from berteome import esm",
"_____no_output_____"
],
[
"mendel_esm_DF = esm.esmPredictionDF(\"MENDEL\")",
"_____no_output_____"
],
[
"augmentPep(mendel_esm_DF, 2)",
"_____no_output_____"
]
],
[
[
"As far as I'm concerned, this is working! It shouldn't be too difficult to use this to make a `augmentPeps` and `augmentFasta`, which takes multiple peptides and returns their top k variants.\n\nI don't entirely need this right this moment, so maybe I should just work on that when I definitely need it or at least have more time, because it will probably take a couple of hours to finish.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e7559f0c7b1cebe2a4bb78241db8cdd316d50816 | 7,340 | ipynb | Jupyter Notebook | BK-slides/IntroDay2.ipynb | BernhardKonrad/2014-02-22-SFU | 25cfac5f703943ddaa98cc30bb267eacba48d07f | [
"CC-BY-3.0"
] | 1 | 2016-12-15T17:57:12.000Z | 2016-12-15T17:57:12.000Z | BK-slides/IntroDay2.ipynb | BernhardKonrad/2014-02-22-SFU | 25cfac5f703943ddaa98cc30bb267eacba48d07f | [
"CC-BY-3.0"
] | null | null | null | BK-slides/IntroDay2.ipynb | BernhardKonrad/2014-02-22-SFU | 25cfac5f703943ddaa98cc30bb267eacba48d07f | [
"CC-BY-3.0"
] | null | null | null | 37.070707 | 268 | 0.531335 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e755b87ed51d3a1ec9fd88adb20ad5b36957e0aa | 489,329 | ipynb | Jupyter Notebook | Visualizando_dados_WordCloud.ipynb | EduardoMoraesRitter/Introdu-o-NLP-an-lise-de-sentimento | 9c305a85d8ba06cfbd97bfe54e37c05c4fd9cd7d | [
"MIT"
] | 1 | 2022-02-11T21:53:38.000Z | 2022-02-11T21:53:38.000Z | Visualizando_dados_WordCloud.ipynb | EduardoMoraesRitter/Introducao-NLP-classificacao | 9c305a85d8ba06cfbd97bfe54e37c05c4fd9cd7d | [
"MIT"
] | null | null | null | Visualizando_dados_WordCloud.ipynb | EduardoMoraesRitter/Introducao-NLP-classificacao | 9c305a85d8ba06cfbd97bfe54e37c05c4fd9cd7d | [
"MIT"
] | null | null | null | 1,097.150224 | 99,820 | 0.939435 | [
[
[
"<a href=\"https://colab.research.google.com/github/EduardoMoraesRitter/Introducao-NLP-analise-sentimento/blob/master/Visualizando_dados_WordCloud.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"#gera o grafico na lina\n#%matplotlib inline\n\nimport pandas as pd\nfrom wordcloud import WordCloud",
"_____no_output_____"
],
[
"#base de palavras\ntextos = [\n 'eu nao gostei disso', \n 'eu nao quero isso', \n 'nao vou mais aceitar isso',\n 'eu nao gosto disso',\n 'gosto disso eu agora',\n 'voce goataria de que ir conosco'\n ]\nresenha = pd.DataFrame(textos, columns=['texto'])\nresenha",
"_____no_output_____"
],
[
"# compreensão de lista - https://www.alura.com.br/artigos/simplicando-o-processamento-com-compreensao-de-lista-do-python\ntodas_palavras = ' '.join([texto for texto in resenha.texto])\n\n#todas_palavras[:3]\nprint('quantidade de palavras: ',len(todas_palavras))",
"quantidade de palavras: 135\n"
],
[
"#contruir as palavra\nnuvem_palavra = WordCloud().generate(todas_palavras)\nnuvem_palavra",
"_____no_output_____"
],
[
"#tranformar o objeto em visual\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"plt.figure()\nplt.imshow(nuvem_palavra)\nplt.show()",
"_____no_output_____"
],
[
"#mudar o tamanho da distribuicao\nnuvem_palavra = WordCloud(width=800, height=500).generate(todas_palavras)\n\nplt.figure()\nplt.imshow(nuvem_palavra)\nplt.show()",
"_____no_output_____"
],
[
"#definir o tamanho maximo para as palavras mudando a escala\nnuvem_palavra = WordCloud(width=800, height=500, max_font_size=110).generate(todas_palavras)\n\nplt.figure()\nplt.imshow(nuvem_palavra)\nplt.show()",
"_____no_output_____"
],
[
"nuvem_palavra = WordCloud(width=800, \n height=500, \n max_font_size=110).generate(todas_palavras)\n\n#tamanho da imagem\nplt.figure(figsize=(10,7))\nplt.imshow(nuvem_palavra)\nplt.show()",
"_____no_output_____"
],
[
"nuvem_palavra = WordCloud(width=800, \n height=500, \n max_font_size=110).generate(todas_palavras)\n\nplt.figure(figsize=(10,7))\n#deixar as imagens mais nitidas\nplt.imshow(nuvem_palavra, interpolation='bilinear')\nplt.show()",
"_____no_output_____"
],
[
"#collocation calculo da frequancia pela palavra e nao pelo bigrama\nnuvem_palavra = WordCloud(width=800, \n height=500, \n max_font_size=110,\n collocations = False\n ).generate(todas_palavras)\n\nplt.figure(figsize=(10,7))\nplt.imshow(nuvem_palavra, interpolation='bilinear')\nplt.axis('off')\nplt.show()",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e755b93d2da0fa9d7b0f08f9832c7c51bea121b0 | 144,748 | ipynb | Jupyter Notebook | create_POSTING_LIST_caputring_FREQ.ipynb | ar8372/Search-Engine | 8281eacec433b19fa282247c552a43089b3a53c3 | [
"Apache-2.0"
] | null | null | null | create_POSTING_LIST_caputring_FREQ.ipynb | ar8372/Search-Engine | 8281eacec433b19fa282247c552a43089b3a53c3 | [
"Apache-2.0"
] | null | null | null | create_POSTING_LIST_caputring_FREQ.ipynb | ar8372/Search-Engine | 8281eacec433b19fa282247c552a43089b3a53c3 | [
"Apache-2.0"
] | null | null | null | 44.994716 | 199 | 0.493278 | [
[
[
"#@markdown <br><center><img src='https://upload.wikimedia.org/wikipedia/commons/thumb/d/da/Google_Drive_logo.png/600px-Google_Drive_logo.png' height=\"50\" alt=\"Gdrive-logo\"/></center>\n#@markdown <center><h3>Mount GDrive to /content/drive</h3></center><br>\nMODE = \"MOUNT\" #@param [\"MOUNT\", \"UNMOUNT\"]\n#Mount your Gdrive! \nfrom google.colab import drive\ndrive.mount._DEBUG = False\nif MODE == \"MOUNT\":\n drive.mount('/content/drive', force_remount=True)\nelif MODE == \"UNMOUNT\":\n try:\n drive.flush_and_unmount()\n except ValueError:\n pass\n get_ipython().system_raw(\"rm -rf /root/.config/Google/DriveFS\")",
"Mounted at /content/drive\n"
],
[
"import nltk\nfrom nltk.corpus import stopwords\nfrom nltk.stem.porter import PorterStemmer\nporter = PorterStemmer()\n\nnltk.download('stopwords')\n\nstop_words=set(stopwords.words(\"english\"))\nprint(type(stop_words), len(stop_words))\n\nimport re\nre.sub(\"ab\",'*','absabdosba')\n\ndef rem_symbols(line):\n #return re.sub(\"[^A-Aa-z0-9\\s]+\",'', line)\n return re.sub('[^A-Za-z0-9\\s]+', '', line).lower() # do again\n\nimport csv\nimport sys\ncsv.field_size_limit(sys.maxsize) # use it to read whole line no limit on line size\n\nBLOCKSIZE= 10000\n####################################################################################\nimport pandas as pd\nA = pd.DataFrame({},columns=['Title','Author','info_abt_paper','text_block'])\n\ndef load_row(my_row):\n A.loc[len(A.index)+1] = my_row\n\nwith open(\"/content/drive/MyDrive/CRAN/Unzipped_cran/cran.all.1400\", 'r') as x:\n #next(x) # it is used when first line contains column names\n full_string = x.read()\n full_string = full_string.replace('\\n',' ').split(\".I\")[1:]\n for seg in full_string:\n seg = seg.replace(\".T\",\"_\").replace(\".A\",\"_\").replace(\".B\",\"_\").replace(\".W\",\"_\")\n seg=seg.split(\"_\")[1:]\n load_row(seg[:4])\nA.index.name = 'ID'\nA=A.reset_index()\n\n######################################################################################",
"[nltk_data] Downloading package stopwords to /root/nltk_data...\n[nltk_data] Package stopwords is already up-to-date!\n<class 'set'> 179\n"
],
[
"A.head(2)",
"_____no_output_____"
],
[
"\ndef bsbi(l):\n #freq_dist= defaultdict(set)\n freq_dist = []\n current_block =0\n total_files = 0\n line_no =0\n #########################################\n for i,line in A.iterrows():\n line_no += 1\n ID,\tTitle,\tAuthor,\tinfo_abt_paper,\ttext_block = line\n if l=='Text':\n p= text_block\n elif l=='Title':\n p= Title\n elif l=='Info':\n p = info_abt_paper\n elif l =='Author':\n p= Author\n for word in p.split(): # Splitting text_bloc to terms\n word = rem_symbols(word)\n if word and word not in stop_words:\n word = porter.stem(word)\n # so add this \n #t= freq_dist[word]\n #freq_dist[word]= t+[word]\n current_block += 1\n freq_dist.append((word,ID))\n \"\"\"\n if word not in freq_dist:\n # so first time\n current_block += 1\n #if id not in freq_dist[word]:\n if not freq_dist[word].__contains__(ID):\n # the id is not added\n freq_dist[word].add(ID)\n current_block += 1\"\"\"\n # if word in freq_dist:\n # # present\n # t = freq_dist[word]\n # freq_dist[word] = t+[ID]\n # else:\n # # not present\n # freq_dist[word] = [str(ID)]\n if current_block>= BLOCKSIZE:\n # IT IS FULL now \n sorted_list = sorted(freq_dist, key= lambda _:_[0])\n with open(f'/content/drive/MyDrive/CRAN/Dumps_1/{l}_dump/OP{total_files}.txt', 'w') as f:\n for word, doc_ids in sorted_list:\n f.write(word)\n for doc_id in doc_ids:\n f.write(f' {doc_id}')\n f.write('\\n') # writing one line is completed\n current_block = 0\n freq_dist.clear()\n total_files += 1\n #print(line_no, 'row done') # so n lines copied to the new file\n # test\n #if total_files >= 5:\n # return\n print(\"OP{}\".format(total_files-1),'dumped','|',line_no,'rows done')\n # last time we have something in freq_dist but we have not dumped it \n sorted_list = sorted(freq_dist, key = lambda _:_[0])\n with open(f'/content/drive/MyDrive/CRAN/Dumps_1/{l}_dump/OP{total_files}.txt','w') as f:\n # this is for last values\n # TODO:: DO IT BY FN SO NO REPEATATION OF CODE\n for word, doc_ids in sorted_list:\n f.write(word)\n #for doc_id in 
doc_ids:\n f.write(f' {doc_ids}')\n f.write('\\n') \n current_block = 0\n freq_dist.clear()\n total_files += 1\n #print(line_no, 'row done')\n print(\"OP{}\".format(total_files-1),'dumped','|',line_no,'rows done') \n#c= [\"Text\",\"Title\",\"Info\",\"Author\"] \nbsbi(\"Author\") \n############################################################################################################",
"OP0 dumped | 1 rows done\nOP1 dumped | 2 rows done\nOP2 dumped | 3 rows done\nOP3 dumped | 4 rows done\nOP4 dumped | 5 rows done\nOP5 dumped | 6 rows done\nOP6 dumped | 7 rows done\nOP7 dumped | 8 rows done\nOP8 dumped | 9 rows done\nOP9 dumped | 10 rows done\nOP10 dumped | 11 rows done\nOP11 dumped | 12 rows done\nOP12 dumped | 13 rows done\nOP13 dumped | 14 rows done\nOP14 dumped | 15 rows done\nOP15 dumped | 16 rows done\nOP16 dumped | 17 rows done\nOP17 dumped | 18 rows done\nOP18 dumped | 19 rows done\nOP19 dumped | 20 rows done\nOP20 dumped | 21 rows done\nOP21 dumped | 22 rows done\nOP22 dumped | 23 rows done\nOP23 dumped | 24 rows done\nOP24 dumped | 25 rows done\nOP25 dumped | 26 rows done\nOP26 dumped | 27 rows done\nOP27 dumped | 28 rows done\nOP28 dumped | 29 rows done\nOP29 dumped | 30 rows done\nOP30 dumped | 31 rows done\nOP31 dumped | 32 rows done\nOP32 dumped | 33 rows done\nOP33 dumped | 34 rows done\nOP34 dumped | 35 rows done\nOP35 dumped | 36 rows done\nOP36 dumped | 37 rows done\nOP37 dumped | 38 rows done\nOP38 dumped | 39 rows done\nOP39 dumped | 40 rows done\nOP40 dumped | 41 rows done\nOP41 dumped | 42 rows done\nOP42 dumped | 43 rows done\nOP43 dumped | 44 rows done\nOP44 dumped | 45 rows done\nOP45 dumped | 46 rows done\nOP46 dumped | 47 rows done\nOP47 dumped | 48 rows done\nOP48 dumped | 49 rows done\nOP49 dumped | 50 rows done\nOP50 dumped | 51 rows done\nOP51 dumped | 52 rows done\nOP52 dumped | 53 rows done\nOP53 dumped | 54 rows done\nOP54 dumped | 55 rows done\nOP55 dumped | 56 rows done\nOP56 dumped | 57 rows done\nOP57 dumped | 58 rows done\nOP58 dumped | 59 rows done\nOP59 dumped | 60 rows done\nOP60 dumped | 61 rows done\nOP61 dumped | 62 rows done\nOP62 dumped | 63 rows done\nOP63 dumped | 64 rows done\nOP64 dumped | 65 rows done\nOP65 dumped | 66 rows done\nOP66 dumped | 67 rows done\nOP67 dumped | 68 rows done\nOP68 dumped | 69 rows done\nOP69 dumped | 70 rows done\nOP70 dumped | 71 rows done\nOP71 dumped | 72 rows 
done\nOP72 dumped | 73 rows done\nOP73 dumped | 74 rows done\nOP74 dumped | 75 rows done\nOP75 dumped | 76 rows done\nOP76 dumped | 77 rows done\nOP77 dumped | 78 rows done\nOP78 dumped | 79 rows done\nOP79 dumped | 80 rows done\nOP80 dumped | 81 rows done\nOP81 dumped | 82 rows done\nOP82 dumped | 83 rows done\nOP83 dumped | 84 rows done\nOP84 dumped | 85 rows done\nOP85 dumped | 86 rows done\nOP86 dumped | 87 rows done\nOP87 dumped | 88 rows done\nOP88 dumped | 89 rows done\nOP89 dumped | 90 rows done\nOP90 dumped | 91 rows done\nOP91 dumped | 92 rows done\nOP92 dumped | 93 rows done\nOP93 dumped | 94 rows done\nOP94 dumped | 95 rows done\nOP95 dumped | 96 rows done\nOP96 dumped | 97 rows done\nOP97 dumped | 98 rows done\nOP98 dumped | 99 rows done\nOP99 dumped | 100 rows done\nOP100 dumped | 101 rows done\nOP101 dumped | 102 rows done\nOP102 dumped | 103 rows done\nOP103 dumped | 104 rows done\nOP104 dumped | 105 rows done\nOP105 dumped | 106 rows done\nOP106 dumped | 107 rows done\nOP107 dumped | 108 rows done\nOP108 dumped | 109 rows done\nOP109 dumped | 110 rows done\nOP110 dumped | 111 rows done\nOP111 dumped | 112 rows done\nOP112 dumped | 113 rows done\nOP113 dumped | 114 rows done\nOP114 dumped | 115 rows done\nOP115 dumped | 116 rows done\nOP116 dumped | 117 rows done\nOP117 dumped | 118 rows done\nOP118 dumped | 119 rows done\nOP119 dumped | 120 rows done\nOP120 dumped | 121 rows done\nOP121 dumped | 122 rows done\nOP122 dumped | 123 rows done\nOP123 dumped | 124 rows done\nOP124 dumped | 125 rows done\nOP125 dumped | 126 rows done\nOP126 dumped | 127 rows done\nOP127 dumped | 128 rows done\nOP128 dumped | 129 rows done\nOP129 dumped | 130 rows done\nOP130 dumped | 131 rows done\nOP131 dumped | 132 rows done\nOP132 dumped | 133 rows done\nOP133 dumped | 134 rows done\nOP134 dumped | 135 rows done\nOP135 dumped | 136 rows done\nOP136 dumped | 137 rows done\nOP137 dumped | 138 rows done\nOP138 dumped | 139 rows done\nOP139 dumped | 140 rows done\nOP140 
dumped | 141 rows done\nOP141 dumped | 142 rows done\nOP142 dumped | 143 rows done\nOP143 dumped | 144 rows done\nOP144 dumped | 145 rows done\nOP145 dumped | 146 rows done\nOP146 dumped | 147 rows done\nOP147 dumped | 148 rows done\nOP148 dumped | 149 rows done\nOP149 dumped | 150 rows done\nOP150 dumped | 151 rows done\nOP151 dumped | 152 rows done\nOP152 dumped | 153 rows done\nOP153 dumped | 154 rows done\nOP154 dumped | 155 rows done\nOP155 dumped | 156 rows done\nOP156 dumped | 157 rows done\nOP157 dumped | 158 rows done\nOP158 dumped | 159 rows done\nOP159 dumped | 160 rows done\nOP160 dumped | 161 rows done\nOP161 dumped | 162 rows done\nOP162 dumped | 163 rows done\nOP163 dumped | 164 rows done\nOP164 dumped | 165 rows done\nOP165 dumped | 166 rows done\nOP166 dumped | 167 rows done\nOP167 dumped | 168 rows done\nOP168 dumped | 169 rows done\nOP169 dumped | 170 rows done\nOP170 dumped | 171 rows done\nOP171 dumped | 172 rows done\nOP172 dumped | 173 rows done\nOP173 dumped | 174 rows done\nOP174 dumped | 175 rows done\nOP175 dumped | 176 rows done\nOP176 dumped | 177 rows done\nOP177 dumped | 178 rows done\nOP178 dumped | 179 rows done\nOP179 dumped | 180 rows done\nOP180 dumped | 181 rows done\nOP181 dumped | 182 rows done\nOP182 dumped | 183 rows done\nOP183 dumped | 184 rows done\nOP184 dumped | 185 rows done\nOP185 dumped | 186 rows done\nOP186 dumped | 187 rows done\nOP187 dumped | 188 rows done\nOP188 dumped | 189 rows done\nOP189 dumped | 190 rows done\nOP190 dumped | 191 rows done\nOP191 dumped | 192 rows done\nOP192 dumped | 193 rows done\nOP193 dumped | 194 rows done\nOP194 dumped | 195 rows done\nOP195 dumped | 196 rows done\nOP196 dumped | 197 rows done\nOP197 dumped | 198 rows done\nOP198 dumped | 199 rows done\nOP199 dumped | 200 rows done\nOP200 dumped | 201 rows done\nOP201 dumped | 202 rows done\nOP202 dumped | 203 rows done\nOP203 dumped | 204 rows done\nOP204 dumped | 205 rows done\nOP205 dumped | 206 rows done\nOP206 dumped | 207 rows 
done\nOP207 dumped | 208 rows done\n... [repetitive progress log truncated: OP208 through OP1398 dumped] ...\nOP1399 dumped | 1400 rows done\n"
],
[
"def bsbi2(k):\n    # Merge the per-file posting dumps OP0.txt .. OP1399.txt into one index file\n    file_names = [f'/content/drive/MyDrive/CRAN/Dumps_1/{k}_dump/OP{i}.txt' for i in range(0, 1400)]\n    from collections import defaultdict\n    my_dict = defaultdict(list)  # term -> list of doc ids (repeats carry term frequency)\n    for file_path in file_names:\n        with open(file_path, 'r') as x:\n            for line in x:\n                term, doc_id = line.split()\n                my_dict[term].append(doc_id)  # was `my_dict[a] = list(b)`, which split the doc id into single digits\n    # Write merged postings as: term <total freq> <doc>_<freq> <doc>_<freq> ...\n    with open(f'/content/drive/MyDrive/CRAN/Dumps_1/{k}_combine/merged_{k}.txt', 'w') as x:\n        for term, id_list in my_dict.items():\n            freq_dist = [(doc, str(id_list.count(doc))) for doc in set(id_list)]\n            total = sum(int(f) for _, f in freq_dist)\n            postings = ' '.join(doc + '_' + f for doc, f in freq_dist)\n            x.write(term + ' ' + str(total) + ' ' + postings + '\\n')\n# c = [\"Text\", \"Title\", \"Info\", \"Author\"]\nbsbi2(\"Author\")",
"_____no_output_____"
],
[
"# run all",
"_____no_output_____"
],
[
"c = [\"Info\"]\nfor i in c:\n    bsbi(i)\n    bsbi2(i)\n    print(i)\n    print(\"_\" * 40)",
"OP0 dumped | 1 rows done\nOP1 dumped | 2 rows done\n ... (repetitive per-file progress lines OP2 through OP1190 elided) ...\nOP1191 dumped | 1192
rows done\nOP1192 dumped | 1193 rows done\nOP1193 dumped | 1194 rows done\nOP1194 dumped | 1195 rows done\nOP1195 dumped | 1196 rows done\nOP1196 dumped | 1197 rows done\nOP1197 dumped | 1198 rows done\nOP1198 dumped | 1199 rows done\nOP1199 dumped | 1200 rows done\nOP1200 dumped | 1201 rows done\nOP1201 dumped | 1202 rows done\nOP1202 dumped | 1203 rows done\nOP1203 dumped | 1204 rows done\nOP1204 dumped | 1205 rows done\nOP1205 dumped | 1206 rows done\nOP1206 dumped | 1207 rows done\nOP1207 dumped | 1208 rows done\nOP1208 dumped | 1209 rows done\nOP1209 dumped | 1210 rows done\nOP1210 dumped | 1211 rows done\nOP1211 dumped | 1212 rows done\nOP1212 dumped | 1213 rows done\nOP1213 dumped | 1214 rows done\nOP1214 dumped | 1215 rows done\nOP1215 dumped | 1216 rows done\nOP1216 dumped | 1217 rows done\nOP1217 dumped | 1218 rows done\nOP1218 dumped | 1219 rows done\nOP1219 dumped | 1220 rows done\nOP1220 dumped | 1221 rows done\nOP1221 dumped | 1222 rows done\nOP1222 dumped | 1223 rows done\nOP1223 dumped | 1224 rows done\nOP1224 dumped | 1225 rows done\nOP1225 dumped | 1226 rows done\nOP1226 dumped | 1227 rows done\nOP1227 dumped | 1228 rows done\nOP1228 dumped | 1229 rows done\nOP1229 dumped | 1230 rows done\nOP1230 dumped | 1231 rows done\nOP1231 dumped | 1232 rows done\nOP1232 dumped | 1233 rows done\nOP1233 dumped | 1234 rows done\nOP1234 dumped | 1235 rows done\nOP1235 dumped | 1236 rows done\nOP1236 dumped | 1237 rows done\nOP1237 dumped | 1238 rows done\nOP1238 dumped | 1239 rows done\nOP1239 dumped | 1240 rows done\nOP1240 dumped | 1241 rows done\nOP1241 dumped | 1242 rows done\nOP1242 dumped | 1243 rows done\nOP1243 dumped | 1244 rows done\nOP1244 dumped | 1245 rows done\nOP1245 dumped | 1246 rows done\nOP1246 dumped | 1247 rows done\nOP1247 dumped | 1248 rows done\nOP1248 dumped | 1249 rows done\nOP1249 dumped | 1250 rows done\nOP1250 dumped | 1251 rows done\nOP1251 dumped | 1252 rows done\nOP1252 dumped | 1253 rows done\nOP1253 dumped | 1254 rows 
done\nOP1254 dumped | 1255 rows done\nOP1255 dumped | 1256 rows done\nOP1256 dumped | 1257 rows done\nOP1257 dumped | 1258 rows done\nOP1258 dumped | 1259 rows done\nOP1259 dumped | 1260 rows done\nOP1260 dumped | 1261 rows done\nOP1261 dumped | 1262 rows done\nOP1262 dumped | 1263 rows done\nOP1263 dumped | 1264 rows done\nOP1264 dumped | 1265 rows done\nOP1265 dumped | 1266 rows done\nOP1266 dumped | 1267 rows done\nOP1267 dumped | 1268 rows done\nOP1268 dumped | 1269 rows done\nOP1269 dumped | 1270 rows done\nOP1270 dumped | 1271 rows done\nOP1271 dumped | 1272 rows done\nOP1272 dumped | 1273 rows done\nOP1273 dumped | 1274 rows done\nOP1274 dumped | 1275 rows done\nOP1275 dumped | 1276 rows done\nOP1276 dumped | 1277 rows done\nOP1277 dumped | 1278 rows done\nOP1278 dumped | 1279 rows done\nOP1279 dumped | 1280 rows done\nOP1280 dumped | 1281 rows done\nOP1281 dumped | 1282 rows done\nOP1282 dumped | 1283 rows done\nOP1283 dumped | 1284 rows done\nOP1284 dumped | 1285 rows done\nOP1285 dumped | 1286 rows done\nOP1286 dumped | 1287 rows done\nOP1287 dumped | 1288 rows done\nOP1288 dumped | 1289 rows done\nOP1289 dumped | 1290 rows done\nOP1290 dumped | 1291 rows done\nOP1291 dumped | 1292 rows done\nOP1292 dumped | 1293 rows done\nOP1293 dumped | 1294 rows done\nOP1294 dumped | 1295 rows done\nOP1295 dumped | 1296 rows done\nOP1296 dumped | 1297 rows done\nOP1297 dumped | 1298 rows done\nOP1298 dumped | 1299 rows done\nOP1299 dumped | 1300 rows done\nOP1300 dumped | 1301 rows done\nOP1301 dumped | 1302 rows done\nOP1302 dumped | 1303 rows done\nOP1303 dumped | 1304 rows done\nOP1304 dumped | 1305 rows done\nOP1305 dumped | 1306 rows done\nOP1306 dumped | 1307 rows done\nOP1307 dumped | 1308 rows done\nOP1308 dumped | 1309 rows done\nOP1309 dumped | 1310 rows done\nOP1310 dumped | 1311 rows done\nOP1311 dumped | 1312 rows done\nOP1312 dumped | 1313 rows done\nOP1313 dumped | 1314 rows done\nOP1314 dumped | 1315 rows done\nOP1315 dumped | 1316 rows done\nOP1316 
dumped | 1317 rows done\nOP1317 dumped | 1318 rows done\nOP1318 dumped | 1319 rows done\nOP1319 dumped | 1320 rows done\nOP1320 dumped | 1321 rows done\nOP1321 dumped | 1322 rows done\nOP1322 dumped | 1323 rows done\nOP1323 dumped | 1324 rows done\nOP1324 dumped | 1325 rows done\nOP1325 dumped | 1326 rows done\nOP1326 dumped | 1327 rows done\nOP1327 dumped | 1328 rows done\nOP1328 dumped | 1329 rows done\nOP1329 dumped | 1330 rows done\nOP1330 dumped | 1331 rows done\nOP1331 dumped | 1332 rows done\nOP1332 dumped | 1333 rows done\nOP1333 dumped | 1334 rows done\nOP1334 dumped | 1335 rows done\nOP1335 dumped | 1336 rows done\nOP1336 dumped | 1337 rows done\nOP1337 dumped | 1338 rows done\nOP1338 dumped | 1339 rows done\nOP1339 dumped | 1340 rows done\nOP1340 dumped | 1341 rows done\nOP1341 dumped | 1342 rows done\nOP1342 dumped | 1343 rows done\nOP1343 dumped | 1344 rows done\nOP1344 dumped | 1345 rows done\nOP1345 dumped | 1346 rows done\nOP1346 dumped | 1347 rows done\nOP1347 dumped | 1348 rows done\nOP1348 dumped | 1349 rows done\nOP1349 dumped | 1350 rows done\nOP1350 dumped | 1351 rows done\nOP1351 dumped | 1352 rows done\nOP1352 dumped | 1353 rows done\nOP1353 dumped | 1354 rows done\nOP1354 dumped | 1355 rows done\nOP1355 dumped | 1356 rows done\nOP1356 dumped | 1357 rows done\nOP1357 dumped | 1358 rows done\nOP1358 dumped | 1359 rows done\nOP1359 dumped | 1360 rows done\nOP1360 dumped | 1361 rows done\nOP1361 dumped | 1362 rows done\nOP1362 dumped | 1363 rows done\nOP1363 dumped | 1364 rows done\nOP1364 dumped | 1365 rows done\nOP1365 dumped | 1366 rows done\nOP1366 dumped | 1367 rows done\nOP1367 dumped | 1368 rows done\nOP1368 dumped | 1369 rows done\nOP1369 dumped | 1370 rows done\nOP1370 dumped | 1371 rows done\nOP1371 dumped | 1372 rows done\nOP1372 dumped | 1373 rows done\nOP1373 dumped | 1374 rows done\nOP1374 dumped | 1375 rows done\nOP1375 dumped | 1376 rows done\nOP1376 dumped | 1377 rows done\nOP1377 dumped | 1378 rows done\nOP1378 dumped | 1379 
rows done\nOP1379 dumped | 1380 rows done\nOP1380 dumped | 1381 rows done\nOP1381 dumped | 1382 rows done\nOP1382 dumped | 1383 rows done\nOP1383 dumped | 1384 rows done\nOP1384 dumped | 1385 rows done\nOP1385 dumped | 1386 rows done\nOP1386 dumped | 1387 rows done\nOP1387 dumped | 1388 rows done\nOP1388 dumped | 1389 rows done\nOP1389 dumped | 1390 rows done\nOP1390 dumped | 1391 rows done\nOP1391 dumped | 1392 rows done\nOP1392 dumped | 1393 rows done\nOP1393 dumped | 1394 rows done\nOP1394 dumped | 1395 rows done\nOP1395 dumped | 1396 rows done\nOP1396 dumped | 1397 rows done\nOP1397 dumped | 1398 rows done\nOP1398 dumped | 1399 rows done\nOP1399 dumped | 1400 rows done\nInfo\n________________________________________\n"
],
[
"print('DONE')",
"DONE\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e755cad7199dcdc30396d8a289c5f22939c6ab00 | 335,997 | ipynb | Jupyter Notebook | docs/pangeo_forge_recipes/tutorials/hdf_reference/reference_cmip6.ipynb | chc-ucsb/pangeo-forge-recipes | e8468df20aa2b78e0bc56e291b5e7e2bbf07d8e5 | [
"Apache-2.0"
] | null | null | null | docs/pangeo_forge_recipes/tutorials/hdf_reference/reference_cmip6.ipynb | chc-ucsb/pangeo-forge-recipes | e8468df20aa2b78e0bc56e291b5e7e2bbf07d8e5 | [
"Apache-2.0"
] | null | null | null | docs/pangeo_forge_recipes/tutorials/hdf_reference/reference_cmip6.ipynb | chc-ucsb/pangeo-forge-recipes | e8468df20aa2b78e0bc56e291b5e7e2bbf07d8e5 | [
"Apache-2.0"
] | null | null | null | 80.343615 | 87,994 | 0.622925 | [
[
[
"# HDF Reference Recipe for CMIP6\n\nThis example illustrates how to create a {class}`pangeo_forge_recipes.recipes.HDFReferenceRecipe`.\nThis recipe does not actually copy the original source data.\nInstead, it generates metadata files which reference and index the original data, allowing it to be accessed more efficiently.\nFor more background, see [this blog post](https://medium.com/pangeo/fake-it-until-you-make-it-reading-goes-netcdf4-data-on-aws-s3-as-zarr-for-rapid-data-access-61e33f8fe685).\n\nAs the input for this recipe, we will use some CMIP6 NetCDF4 files provided by ESGF and stored in Amazon S3 ([CMIP6 AWS Open Data Page](https://registry.opendata.aws/cmip6/)).\nMany CMIP6 simulations spread their outputs over many HDF5/ NetCDF4 files, in order to limit the individual file size.\nThis can be inconvenient for analysis.\nIn this recipe, we will see how to virtually concatenate many HDF5 files into one big virtual Zarr dataset.",
"_____no_output_____"
],
[
"## Define the FilePattern\n\nLet's pick a random dataset: ocean model output from the GFDL ocean model from the [OMIP](https://www.wcrp-climate.org/modelling-wgcm-mip-catalogue/cmip6-endorsed-mips-article/1063-modelling-cmip6-omip) experiments.",
"_____no_output_____"
]
],
[
[
"import s3fs\nfs = s3fs.S3FileSystem(anon=True)\nbase_path = 's3://esgf-world/CMIP6/OMIP/NOAA-GFDL/GFDL-CM4/omip1/r1i1p1f1/Omon/thetao/gr/v20180701/'\nall_paths = fs.ls(base_path)\nall_paths",
"_____no_output_____"
]
],
[
[
"We see there are 15 individual NetCDF files. Let's time how long it takes to open and display one of them using Xarray.\n\n```{note}\nThe argument `decode_coords='all'` helps Xarray promote all of the `_bnds` variables to coordinates (rather than data variables).\n```",
"_____no_output_____"
]
],
[
[
"import xarray as xr",
"_____no_output_____"
],
[
"%%time\nds_orig = xr.open_dataset(fs.open(all_paths[0]), engine='h5netcdf', chunks={}, decode_coords='all')\nds_orig",
"<timed exec>:1: UserWarning: Variable(s) referenced in cell_measures not in variables: ['areacello', 'volcello']\n"
]
],
[
[
"It took ~30 seconds to open this one dataset. So it would take 7-8 minutes for us to open every file. This would be annoyingly slow.\n\nAs a first step in our recipe, we create a `File Pattern <../../recipe_user_guide/file_patterns>` to represent the input files.\nIn this case, since we already have a list of inputs, we just use the `pattern_from_file_sequence` convenience function.",
"_____no_output_____"
]
],
[
[
"from pangeo_forge_recipes.patterns import pattern_from_file_sequence\npattern = pattern_from_file_sequence(['s3://' + path for path in all_paths], 'time')\npattern",
"_____no_output_____"
]
],
[
[
"## Define the Recipe\n\nOnce we have our `FilePattern` defined, defining our Recipe is straightforward.\nThe only custom options we need are to specify that we'll be accessing the source files anonymously and to use `decode_coords='all'` when opening them.",
"_____no_output_____"
]
],
[
[
"from pangeo_forge_recipes.recipes.reference_hdf_zarr import HDFReferenceRecipe\n\nrec = HDFReferenceRecipe(\n pattern,\n xarray_open_kwargs={\"decode_coords\": \"all\"},\n netcdf_storage_options={\"anon\": True}\n)\nrec",
"_____no_output_____"
]
],
[
[
"## Storage\n\nIf the recipe excecution occurs in a Bakery, cloud storage will be assigned automatically.\n\nFor this example, we use the recipe's default storage, which is a temporary local directory.",
"_____no_output_____"
],
[
"## Execute with Dask\n\nThis runs relatively slowly in serial on a small laptop, but it would scale out very well on the cloud.",
"_____no_output_____"
]
],
[
[
"from dask.diagnostics import ProgressBar\ndelayed = rec.to_dask()\nwith ProgressBar():\n delayed.compute()",
"[##################################### ] | 94% Completed | 6min 21.1s"
]
],
[
[
"## Examine the Result\n\n### Load with Intake\n\nThe easiest way to load the dataset created by `fsspec_reference_maker` is via intake.\nAn intake catalog is automatically created in the target.",
"_____no_output_____"
]
],
[
[
"cat_url = f\"{rec.target}/reference.yaml\"\ncat_url",
"_____no_output_____"
],
[
"import intake\ncat = intake.open_catalog(cat_url)\ncat",
"_____no_output_____"
]
],
[
[
"To load the data lazily:",
"_____no_output_____"
]
],
[
[
"%time ds = cat.data.to_dask()\nds",
"CPU times: user 153 ms, sys: 12.4 ms, total: 166 ms\nWall time: 777 ms\n"
]
],
[
[
"Note that it opened immediately! 🎉\n\n```{note}\nThe Zarr chunks of the reference dataset correspond 1:1 to the HDF5 chunks in the original dataset.\nThese chunks are often smaller than optimal for cloud-based analysis.\n```\n\nIf we want to pass custom options to xarray when loading the dataset, we do so as follows.\nIn this example, we specify larger chunks and fix the coords.",
"_____no_output_____"
]
],
[
[
"ds = cat.data(\n chunks={'time': 10, 'lev': -1, 'lat': -1, 'lon': -1},\n decode_coords='all'\n).to_dask()\nds",
"/Users//miniconda3/envs/pangeo-forge-recipes/lib/python3.9/site-packages/xarray/core/dataset.py:409: UserWarning: Specified Dask chunks (10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10) would separate on disks chunk shape 3600 for dimension time. This could degrade performance. Consider rechunking after loading instead.\n _check_chunks_compatibility(var, output_chunks, preferred_chunks)\n"
]
],
[
[
"### Manual Loading\n\nIt is also possible to load the reference dataset directly with xarray, bypassing intake.",
"_____no_output_____"
]
],
[
[
"ref_url = f\"{rec.target}/reference.json\"\nref_url",
"_____no_output_____"
],
[
"import fsspec\nm = fsspec.get_mapper(\n \"reference://\",\n fo=ref_url,\n target_protocol=\"file\",\n remote_protocol=\"s3\",\n remote_options=dict(anon=True),\n skip_instance_cache=True,\n)\nds = xr.open_dataset(\n m,\n engine='zarr',\n backend_kwargs={'consolidated': False},\n chunks={},\n decode_coords=\"all\"\n)\nds",
"_____no_output_____"
]
],
[
[
"### Problem with Time Encoding\n\n```{warning}\nThere is currently a bug with time encoding in [fsspec reference maker](https://github.com/intake/fsspec-reference-maker/issues/69)\nwhich causes the time coordinate in this dataset to be messed up.\nUntil this bug is fixed, we can manually override the time variable.\n```\n\nWe know the data are monthly, starting in Jan. 1708.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport datetime as dt\n\nds = ds.assign_coords(\n time=pd.date_range(start='1708-01', freq='MS', periods=ds.dims['time']) + dt.timedelta(days=14)\n)\nds.time",
"_____no_output_____"
]
],
[
[
"### Make a Map\n\nLet's just verify that we can read and visualize the data. We'll compare the first year to the last year.",
"_____no_output_____"
]
],
[
[
"ds_ann = ds.resample(time='A').mean()\nsst_diff = ds_ann.thetao.isel(time=-1, lev=0) - ds_ann.thetao.isel(time=0, lev=0)\nsst_diff.plot()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e755de71d94a4a0ed8a9d5cb5f527b8b7203f5ea | 8,728 | ipynb | Jupyter Notebook | 04/day-4.ipynb | jdhalimi/aoc-2018 | aeea38420dccb8bdef314daf84e7ac259fa757a6 | [
"Unlicense"
] | null | null | null | 04/day-4.ipynb | jdhalimi/aoc-2018 | aeea38420dccb8bdef314daf84e7ac259fa757a6 | [
"Unlicense"
] | null | null | null | 04/day-4.ipynb | jdhalimi/aoc-2018 | aeea38420dccb8bdef314daf84e7ac259fa757a6 | [
"Unlicense"
] | null | null | null | 25.97619 | 117 | 0.423579 | [
[
[
"# Advent Of Code 2018 - DAY 4",
"_____no_output_____"
],
[
"Jean-David HALIMI, 2018\n\nhttps://adventofcode.com/2018/day/4",
"_____no_output_____"
],
[
"## Strategy\n- read input\n- sort alphabetically\n- parse lines and build list (date, guard, minutes)\n- use pandas dataframes to sum and search asleep time\n\n",
"_____no_output_____"
]
],
[
[
"import re\nimport numpy as np\nimport pandas as pd\n\n\ndef dumps(line):\n \"\"\"writes the date as in sample\"\"\"\n d, g, l = line[0], line[1], line[2:]\n return '{} {:04} {}'.format(d, g, ''.join(('#' if x else '.') for x in l))\n\n\ndef read(input_file):\n \"\"\"reads input and returns input sorted\"\"\"\n inputs = []\n with open(input_file) as f:\n for l in f:\n inputs.append(l.strip())\n inputs.sort()\n return inputs\n\n\ndef parse_input(inputs):\n \"\"\"parse input to represent minutes as 0 or 1\"\"\"\n begin = re.compile(r\"\\[([0-9]+)-([0-9]+)-([0-9]+) ([0-9]+):([0-9]+)\\] Guard #([0-9]+) begins shift\")\n wakes_up = re.compile(r\"\\[([0-9]+)-([0-9]+)-([0-9]+) ([0-9]+):([0-9]+)\\] wakes up\")\n asleep = re.compile(r\"\\[([0-9]+)-([0-9]+)-([0-9]+) ([0-9]+):([0-9]+)\\] falls asleep\")\n\n state = 0\n g = 0\n lines = []\n for l in inputs:\n l = l.strip()\n b = begin.match(l.strip())\n w = wakes_up.match(l.strip())\n a = asleep.match(l.strip())\n\n if b:\n y, m, d, h, mi, g = (int(x) for x in b.groups())\n state = 0\n continue\n elif w:\n y, m, d, h, mi = (int(x) for x in w.groups())\n state = 0\n elif a:\n y, m, d, h, mi =(int(x) for x in a.groups())\n state = 1\n else:\n continue\n \n if not lines or lines[-1][0] != (y, m, d, g):\n lines.append([(y, m, d, g),[1-state] * 60])\n lines[-1][1] = lines[-1][1][0:mi]+([state] * (60-mi))\n\n return [['{}-{:02}-{:02}'.format(y, m, d), g] + l for (y, m, d, g), l in lines]\n ",
"_____no_output_____"
]
],
[
[
"## Sample\n- read sample\n- apply the strategy",
"_____no_output_____"
]
],
[
[
"def sample():\n return [x.strip() for x in \n \"\"\"\n [1518-11-01 00:00] Guard #10 begins shift\n [1518-11-01 00:05] falls asleep\n [1518-11-01 00:25] wakes up\n [1518-11-01 00:30] falls asleep\n [1518-11-01 00:55] wakes up\n [1518-11-01 23:58] Guard #99 begins shift\n [1518-11-02 00:40] falls asleep\n [1518-11-02 00:50] wakes up\n [1518-11-03 00:05] Guard #10 begins shift\n [1518-11-03 00:24] falls asleep\n [1518-11-03 00:29] wakes up\n [1518-11-04 00:02] Guard #99 begins shift\n [1518-11-04 00:36] falls asleep\n [1518-11-04 00:46] wakes up\n [1518-11-05 00:03] Guard #99 begins shift\n [1518-11-05 00:45] falls asleep\n [1518-11-05 00:55] wakes up\n \"\"\".split('\\n') if x]\n\ninputs = sample()\nlines = parse_input(inputs)\nprint('\\n'.join(dumps(l) for l in lines))\n",
"1518-11-01 0010 .....####################.....#########################.....\n1518-11-02 0099 ........................................##########..........\n1518-11-03 0010 ........................#####...............................\n1518-11-04 0099 ....................................##########..............\n1518-11-05 0099 .............................................##########.....\n"
]
],
[
[
"## Part 1",
"_____no_output_____"
]
],
[
[
"def strategy_1(lines):\n # create the dataframe with minutes as columns\n df = pd.DataFrame(lines, columns=['date', 'guard']+[str(x) for x in range(60)])\n df.set_index(['date', 'guard'], inplace=True)\n \n # sums the total asleep time per line\n df['total'] = df['0':'59'].sum(axis=1)\n \n # sum columns (minutes + total) by guard\n df2 = df.reset_index().set_index(['guard']).drop(['date'], axis=1).groupby('guard').sum()\n \n # finds the guard with greatest total\n guard = df2['total'].idxmax()\n \n # sum all columns (after drop total) for this guard\n minute = int(df2.drop(['total'], axis=1).loc[guard].idxmax())\n \n return guard * minute",
"_____no_output_____"
]
],
[
[
"### Sample",
"_____no_output_____"
]
],
[
[
"lines = parse_input(sample())\nprint(strategy_1(lines))",
"240\n"
]
],
[
[
"### Input file",
"_____no_output_____"
]
],
[
[
"lines = parse_input(read('input.txt'))\nprint(strategy_1(lines))",
"19874\n"
]
],
[
[
"## Part 2",
"_____no_output_____"
]
],
[
[
"def strategy_2(lines):\n df = pd.DataFrame(lines, columns=['date', 'guard']+[str(x) for x in range(60)])\n df.set_index(['date', 'guard'], inplace=True)\n \n # sum group by guard\n df2 = df.reset_index().set_index(['guard']).drop(['date'], axis=1).groupby('guard').sum()\n guard = df2.max(axis=1).idxmax()\n minute = df2.loc[guard].idxmax()\n minute = int(minute)\n return guard * minute",
"_____no_output_____"
]
],
[
[
"### Sample",
"_____no_output_____"
]
],
[
[
"lines = parse_input(sample())\nprint(strategy_2(lines))",
"4455\n"
]
],
[
[
"### Input file",
"_____no_output_____"
]
],
[
[
"lines = parse_input(read('input.txt'))\nprint(strategy_2(lines))",
"22687\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e755e61782a465f803488f66ec794a40c65638cf | 81,339 | ipynb | Jupyter Notebook | Data/my_model/Untitled.ipynb | James-Hagerman/DS-BW-3 | be60960f520148bc51f5343cc729a62a33d700f2 | [
"MIT"
] | null | null | null | Data/my_model/Untitled.ipynb | James-Hagerman/DS-BW-3 | be60960f520148bc51f5343cc729a62a33d700f2 | [
"MIT"
] | null | null | null | Data/my_model/Untitled.ipynb | James-Hagerman/DS-BW-3 | be60960f520148bc51f5343cc729a62a33d700f2 | [
"MIT"
] | 2 | 2020-10-23T02:00:32.000Z | 2021-02-24T23:26:20.000Z | 45.59361 | 1,546 | 0.57596 | [
[
[
"import pandas as pd \nimport category_encoders as ce\nimport numpy\nimport matplotlib\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.metrics import accuracy_score\nimport matplotlib.pyplot as plt\nfrom sklearn.model_selection import train_test_split\nfrom category_encoders import OrdinalEncoder, OneHotEncoder\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn import preprocessing",
"_____no_output_____"
],
[
"df = pd.read_csv('https://raw.githubusercontent.com/James-Hagerman/DS-BW-3/main/cannabis.csv')",
"_____no_output_____"
]
],
[
[
"# Explore Data",
"_____no_output_____"
]
],
[
[
"df.head()",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2351 entries, 0 to 2350\nData columns (total 6 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Strain 2351 non-null object \n 1 Type 2351 non-null object \n 2 Rating 2351 non-null float64\n 3 Effects 2351 non-null object \n 4 Flavor 2305 non-null object \n 5 Description 2318 non-null object \ndtypes: float64(1), object(5)\nmemory usage: 110.3+ KB\n"
],
[
"df.describe()",
"_____no_output_____"
],
[
"df.describe(include = 'all')",
"_____no_output_____"
],
[
"df.isnull().sum()\n",
"_____no_output_____"
],
[
"df = df.dropna()",
"_____no_output_____"
],
[
"df.isnull().sum()",
"_____no_output_____"
]
],
[
[
"# Split data",
"_____no_output_____"
]
],
[
[
"train, val = train_test_split(df, test_size=0.2, \n random_state=42)\n\ntarget = 'Strain'\n\nX_train = train.drop(columns = target)\ny_train = train[target]\n\nX_val = val.drop(columns = target)\ny_val = val[target]\n",
"_____no_output_____"
],
[
"print('Baseline Accuracy:', y_train.value_counts(normalize=True).max())",
"Baseline Accuracy: 0.001098297638660077\n"
]
],
[
[
"## Model Building \n",
"_____no_output_____"
]
],
[
[
"model = make_pipeline(\n OneHotEncoder(use_cat_names = True),\n RandomForestRegressor(random_state = 42, n_jobs = -1)\n)\n\nmodel.fit(X_train, y_train)",
"c:\\users\\james\\.virtualenvs\\ds-bw-3-9onecz87\\lib\\site-packages\\category_encoders\\utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead\n elif pd.api.types.is_categorical(cols):\n"
],
[
"encoder = ce.OrdinalEncoder(cols = ['Effects'])",
"_____no_output_____"
],
[
"# le = preprocessing.LabelEncoder()\n# df['Effects'] = le.fit_transform(df.Effects)",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df1 = df['Effects']",
"_____no_output_____"
],
[
"list = df1.to_list()",
"_____no_output_____"
],
[
"list",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e755fb2a549af8e12f941e9b1e775dd7e54ab4c2 | 109,115 | ipynb | Jupyter Notebook | Deep_Learning/Artificial_Neural_Network_Titanic.ipynb | Ironspine/zoli | 8e149b3458741343ea20dd9c6023dbe61d8abf14 | [
"Apache-2.0"
] | 6 | 2020-06-21T09:08:55.000Z | 2021-07-28T14:54:30.000Z | Deep_Learning/Artificial_Neural_Network_Titanic.ipynb | Ironspine/zoli | 8e149b3458741343ea20dd9c6023dbe61d8abf14 | [
"Apache-2.0"
] | null | null | null | Deep_Learning/Artificial_Neural_Network_Titanic.ipynb | Ironspine/zoli | 8e149b3458741343ea20dd9c6023dbe61d8abf14 | [
"Apache-2.0"
] | null | null | null | 61.786523 | 47,220 | 0.632177 | [
[
[
"import numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"df = pd.read_csv('titanic.csv', index_col = False)\ndf2 = df.copy()\ntest = pd.read_csv('test_titanic.csv', index_col = False)",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"test",
"_____no_output_____"
],
[
"# Particular columns must be dropped since those columns are missing in the test set\n\ndf.drop(['adult_male', 'class', 'who', 'deck', 'embark_town', 'alive', 'alone'], axis = 1, inplace = True)",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"# Pre-processing\n\nnum = round(df[['age']].mean(), 2)\ndf['age'] = df['age'].fillna(float(num))\ndf = pd.get_dummies(df, drop_first = True)",
"_____no_output_____"
],
[
"X = df.drop('survived', axis = 1).values\ny = df['survived'].values\n\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 101)",
"_____no_output_____"
],
[
"from sklearn.preprocessing import MinMaxScaler\n\nscaler = MinMaxScaler()\nX_train = scaler.fit_transform(X_train)\nX_test = scaler.transform(X_test)",
"_____no_output_____"
],
[
"print(X_train.shape)\nprint(X_test.shape)",
"(623, 8)\n(268, 8)\n"
],
[
"import tensorflow as tf\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dropout, Dense\nfrom numpy.random import seed\n\nfrom sklearn.model_selection import GridSearchCV\nfrom tensorflow.keras.wrappers.scikit_learn import KerasClassifier\nfrom keras.callbacks import EarlyStopping, ModelCheckpoint",
"_____no_output_____"
],
[
"def titanic_model():\n seed(42)\n tf.random.set_seed(42)\n\n model = Sequential()\n\n model.add(Dense(units = 8, activation = 'relu'))\n model.add(Dropout(0.2))\n\n model.add(Dense(units = 4, activation = 'relu'))\n model.add(Dropout(0.2))\n\n model.add(Dense(units = 4, activation = 'relu'))\n model.add(Dropout(0.2))\n\n model.add(Dense(units = 1, activation = 'sigmoid'))\n\n model.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])\n return model\nmodel = titanic_model()\nes = EarlyStopping(monitor = \"val_loss\", mode = \"auto\", patience = 25)\ntraining = model.fit(X_train, y_train, epochs = 100, batch_size = 16, verbose = 2,\n validation_data = (X_test, y_test), callbacks = [es])",
"Train on 623 samples, validate on 268 samples\nEpoch 1/100\n623/623 - 3s - loss: 0.7276 - accuracy: 0.4318 - val_loss: 0.7104 - val_accuracy: 0.3284\nEpoch 2/100\n623/623 - 0s - loss: 0.7026 - accuracy: 0.4462 - val_loss: 0.6950 - val_accuracy: 0.5299\nEpoch 3/100\n623/623 - 0s - loss: 0.6891 - accuracy: 0.5762 - val_loss: 0.6880 - val_accuracy: 0.5709\nEpoch 4/100\n623/623 - 0s - loss: 0.6808 - accuracy: 0.6308 - val_loss: 0.6867 - val_accuracy: 0.5784\nEpoch 5/100\n623/623 - 0s - loss: 0.6786 - accuracy: 0.6308 - val_loss: 0.6849 - val_accuracy: 0.5746\nEpoch 6/100\n623/623 - 0s - loss: 0.6739 - accuracy: 0.6340 - val_loss: 0.6832 - val_accuracy: 0.5746\nEpoch 7/100\n623/623 - 0s - loss: 0.6694 - accuracy: 0.6356 - val_loss: 0.6817 - val_accuracy: 0.5746\nEpoch 8/100\n623/623 - 0s - loss: 0.6671 - accuracy: 0.6308 - val_loss: 0.6804 - val_accuracy: 0.5746\nEpoch 9/100\n623/623 - 0s - loss: 0.6620 - accuracy: 0.6324 - val_loss: 0.6786 - val_accuracy: 0.5746\nEpoch 10/100\n623/623 - 0s - loss: 0.6584 - accuracy: 0.6340 - val_loss: 0.6757 - val_accuracy: 0.5746\nEpoch 11/100\n623/623 - 0s - loss: 0.6502 - accuracy: 0.6324 - val_loss: 0.6652 - val_accuracy: 0.5746\nEpoch 12/100\n623/623 - 0s - loss: 0.6404 - accuracy: 0.6661 - val_loss: 0.6532 - val_accuracy: 0.5746\nEpoch 13/100\n623/623 - 0s - loss: 0.6294 - accuracy: 0.6870 - val_loss: 0.6413 - val_accuracy: 0.5933\nEpoch 14/100\n623/623 - 0s - loss: 0.6165 - accuracy: 0.7111 - val_loss: 0.6294 - val_accuracy: 0.6269\nEpoch 15/100\n623/623 - 0s - loss: 0.5964 - accuracy: 0.7368 - val_loss: 0.6164 - val_accuracy: 0.7127\nEpoch 16/100\n623/623 - 0s - loss: 0.6006 - accuracy: 0.7127 - val_loss: 0.6052 - val_accuracy: 0.7463\nEpoch 17/100\n623/623 - 0s - loss: 0.5980 - accuracy: 0.7079 - val_loss: 0.5964 - val_accuracy: 0.7575\nEpoch 18/100\n623/623 - 0s - loss: 0.5880 - accuracy: 0.7143 - val_loss: 0.5898 - val_accuracy: 0.7649\nEpoch 19/100\n623/623 - 0s - loss: 0.5766 - accuracy: 0.7352 - val_loss: 0.5822 - 
val_accuracy: 0.7649\nEpoch 20/100\n623/623 - 0s - loss: 0.5801 - accuracy: 0.7303 - val_loss: 0.5756 - val_accuracy: 0.7649\nEpoch 21/100\n623/623 - 0s - loss: 0.5628 - accuracy: 0.7287 - val_loss: 0.5699 - val_accuracy: 0.7649\nEpoch 22/100\n623/623 - 0s - loss: 0.5573 - accuracy: 0.7560 - val_loss: 0.5660 - val_accuracy: 0.7649\nEpoch 23/100\n623/623 - 0s - loss: 0.5582 - accuracy: 0.7319 - val_loss: 0.5619 - val_accuracy: 0.7612\nEpoch 24/100\n623/623 - 0s - loss: 0.5587 - accuracy: 0.7239 - val_loss: 0.5563 - val_accuracy: 0.7687\nEpoch 25/100\n623/623 - 0s - loss: 0.5368 - accuracy: 0.7640 - val_loss: 0.5533 - val_accuracy: 0.7649\nEpoch 26/100\n623/623 - 0s - loss: 0.5636 - accuracy: 0.7223 - val_loss: 0.5524 - val_accuracy: 0.7575\nEpoch 27/100\n623/623 - 0s - loss: 0.5415 - accuracy: 0.7608 - val_loss: 0.5442 - val_accuracy: 0.7612\nEpoch 28/100\n623/623 - 0s - loss: 0.5434 - accuracy: 0.7448 - val_loss: 0.5416 - val_accuracy: 0.7649\nEpoch 29/100\n623/623 - 0s - loss: 0.5391 - accuracy: 0.7496 - val_loss: 0.5414 - val_accuracy: 0.7575\nEpoch 30/100\n623/623 - 0s - loss: 0.5305 - accuracy: 0.7560 - val_loss: 0.5368 - val_accuracy: 0.7537\nEpoch 31/100\n623/623 - 0s - loss: 0.5386 - accuracy: 0.7512 - val_loss: 0.5365 - val_accuracy: 0.7537\nEpoch 32/100\n623/623 - 0s - loss: 0.5375 - accuracy: 0.7528 - val_loss: 0.5320 - val_accuracy: 0.7537\nEpoch 33/100\n623/623 - 0s - loss: 0.5125 - accuracy: 0.7737 - val_loss: 0.5325 - val_accuracy: 0.7500\nEpoch 34/100\n623/623 - 0s - loss: 0.5156 - accuracy: 0.7689 - val_loss: 0.5261 - val_accuracy: 0.7537\nEpoch 35/100\n623/623 - 0s - loss: 0.5364 - accuracy: 0.7576 - val_loss: 0.5234 - val_accuracy: 0.7537\nEpoch 36/100\n623/623 - 0s - loss: 0.5319 - accuracy: 0.7576 - val_loss: 0.5258 - val_accuracy: 0.7500\nEpoch 37/100\n623/623 - 0s - loss: 0.5187 - accuracy: 0.7624 - val_loss: 0.5207 - val_accuracy: 0.7537\nEpoch 38/100\n623/623 - 0s - loss: 0.5281 - accuracy: 0.7496 - val_loss: 0.5163 - val_accuracy: 
0.7575\nEpoch 39/100\n623/623 - 0s - loss: 0.5190 - accuracy: 0.7608 - val_loss: 0.5117 - val_accuracy: 0.7687\nEpoch 40/100\n623/623 - 0s - loss: 0.5149 - accuracy: 0.7753 - val_loss: 0.5096 - val_accuracy: 0.7612\nEpoch 41/100\n623/623 - 0s - loss: 0.4990 - accuracy: 0.7817 - val_loss: 0.5120 - val_accuracy: 0.7575\nEpoch 42/100\n623/623 - 0s - loss: 0.5136 - accuracy: 0.7721 - val_loss: 0.5099 - val_accuracy: 0.7575\nEpoch 43/100\n623/623 - 0s - loss: 0.5112 - accuracy: 0.7801 - val_loss: 0.5057 - val_accuracy: 0.7612\nEpoch 44/100\n623/623 - 0s - loss: 0.5061 - accuracy: 0.7737 - val_loss: 0.5053 - val_accuracy: 0.7612\nEpoch 45/100\n623/623 - 0s - loss: 0.5153 - accuracy: 0.7721 - val_loss: 0.5038 - val_accuracy: 0.7612\nEpoch 46/100\n623/623 - 0s - loss: 0.5280 - accuracy: 0.7592 - val_loss: 0.5026 - val_accuracy: 0.7575\nEpoch 47/100\n623/623 - 0s - loss: 0.5200 - accuracy: 0.7705 - val_loss: 0.5036 - val_accuracy: 0.7575\nEpoch 48/100\n623/623 - 0s - loss: 0.4951 - accuracy: 0.7785 - val_loss: 0.5014 - val_accuracy: 0.7575\nEpoch 49/100\n623/623 - 0s - loss: 0.5064 - accuracy: 0.7801 - val_loss: 0.4994 - val_accuracy: 0.7612\nEpoch 50/100\n623/623 - 0s - loss: 0.5049 - accuracy: 0.7721 - val_loss: 0.4975 - val_accuracy: 0.7649\nEpoch 51/100\n623/623 - 0s - loss: 0.5129 - accuracy: 0.7608 - val_loss: 0.4945 - val_accuracy: 0.7649\nEpoch 52/100\n623/623 - 0s - loss: 0.5059 - accuracy: 0.7592 - val_loss: 0.4942 - val_accuracy: 0.7649\nEpoch 53/100\n623/623 - 0s - loss: 0.4952 - accuracy: 0.7721 - val_loss: 0.4922 - val_accuracy: 0.7649\nEpoch 54/100\n623/623 - 0s - loss: 0.5023 - accuracy: 0.7544 - val_loss: 0.4921 - val_accuracy: 0.7649\nEpoch 55/100\n623/623 - 0s - loss: 0.5270 - accuracy: 0.7737 - val_loss: 0.4940 - val_accuracy: 0.7799\nEpoch 56/100\n623/623 - 0s - loss: 0.4802 - accuracy: 0.7865 - val_loss: 0.4874 - val_accuracy: 0.7649\nEpoch 57/100\n623/623 - 0s - loss: 0.5133 - accuracy: 0.7705 - val_loss: 0.4867 - val_accuracy: 0.7687\nEpoch 
58/100\n623/623 - 0s - loss: 0.4911 - accuracy: 0.7753 - val_loss: 0.4868 - val_accuracy: 0.7761\nEpoch 59/100\n623/623 - 0s - loss: 0.5040 - accuracy: 0.7592 - val_loss: 0.4819 - val_accuracy: 0.7649\nEpoch 60/100\n623/623 - 0s - loss: 0.4803 - accuracy: 0.7961 - val_loss: 0.4813 - val_accuracy: 0.7649\nEpoch 61/100\n623/623 - 0s - loss: 0.4675 - accuracy: 0.7849 - val_loss: 0.4794 - val_accuracy: 0.7649\nEpoch 62/100\n623/623 - 0s - loss: 0.4743 - accuracy: 0.7961 - val_loss: 0.4790 - val_accuracy: 0.7687\nEpoch 63/100\n623/623 - 0s - loss: 0.4962 - accuracy: 0.7849 - val_loss: 0.4795 - val_accuracy: 0.7836\nEpoch 64/100\n623/623 - 0s - loss: 0.4729 - accuracy: 0.7897 - val_loss: 0.4750 - val_accuracy: 0.7687\nEpoch 65/100\n623/623 - 0s - loss: 0.4877 - accuracy: 0.7753 - val_loss: 0.4772 - val_accuracy: 0.7799\nEpoch 66/100\n623/623 - 0s - loss: 0.4777 - accuracy: 0.7994 - val_loss: 0.4720 - val_accuracy: 0.7761\nEpoch 67/100\n623/623 - 0s - loss: 0.4871 - accuracy: 0.7817 - val_loss: 0.4696 - val_accuracy: 0.7724\nEpoch 68/100\n623/623 - 0s - loss: 0.5185 - accuracy: 0.7640 - val_loss: 0.4742 - val_accuracy: 0.7836\nEpoch 69/100\n623/623 - 0s - loss: 0.4949 - accuracy: 0.7913 - val_loss: 0.4741 - val_accuracy: 0.7910\nEpoch 70/100\n623/623 - 0s - loss: 0.4711 - accuracy: 0.7994 - val_loss: 0.4689 - val_accuracy: 0.7836\nEpoch 71/100\n623/623 - 0s - loss: 0.4736 - accuracy: 0.7881 - val_loss: 0.4658 - val_accuracy: 0.7724\nEpoch 72/100\n623/623 - 0s - loss: 0.4614 - accuracy: 0.8090 - val_loss: 0.4670 - val_accuracy: 0.7836\nEpoch 73/100\n623/623 - 0s - loss: 0.4864 - accuracy: 0.7833 - val_loss: 0.4656 - val_accuracy: 0.7836\nEpoch 74/100\n623/623 - 0s - loss: 0.4852 - accuracy: 0.7978 - val_loss: 0.4645 - val_accuracy: 0.7836\nEpoch 75/100\n623/623 - 0s - loss: 0.5042 - accuracy: 0.7721 - val_loss: 0.4650 - val_accuracy: 0.7836\nEpoch 76/100\n623/623 - 0s - loss: 0.4873 - accuracy: 0.7849 - val_loss: 0.4680 - val_accuracy: 0.7910\nEpoch 77/100\n623/623 - 0s - 
loss: 0.4871 - accuracy: 0.7785 - val_loss: 0.4651 - val_accuracy: 0.7910\nEpoch 78/100\n623/623 - 0s - loss: 0.4925 - accuracy: 0.7705 - val_loss: 0.4670 - val_accuracy: 0.7910\nEpoch 79/100\n623/623 - 0s - loss: 0.4496 - accuracy: 0.8010 - val_loss: 0.4617 - val_accuracy: 0.7873\nEpoch 80/100\n623/623 - 0s - loss: 0.4789 - accuracy: 0.7801 - val_loss: 0.4596 - val_accuracy: 0.7910\n"
],
[
"print(model.summary())",
"Model: \"sequential_10\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_40 (Dense) multiple 72 \n_________________________________________________________________\ndropout_27 (Dropout) multiple 0 \n_________________________________________________________________\ndense_41 (Dense) multiple 36 \n_________________________________________________________________\ndropout_28 (Dropout) multiple 0 \n_________________________________________________________________\ndense_42 (Dense) multiple 20 \n_________________________________________________________________\ndropout_29 (Dropout) multiple 0 \n_________________________________________________________________\ndense_43 (Dense) multiple 5 \n=================================================================\nTotal params: 133\nTrainable params: 133\nNon-trainable params: 0\n_________________________________________________________________\nNone\n"
],
[
"losses = pd.DataFrame(training.history)\nplt.figure(figsize = (10, 8))\nplt.plot(losses['loss'], label = 'loss')\nplt.plot(losses['val_loss'], label = 'validation_loss')\nplt.title('Model losses')\nplt.ylabel('Loss')\nplt.xlabel('Epoch')\nplt.legend()",
"_____no_output_____"
],
[
"from sklearn.metrics import classification_report, confusion_matrix\n\nprediction = model.predict_classes(X_test)\n\nprint(classification_report(y_test, prediction))\nprint('\\n')\nprint(confusion_matrix(y_test, prediction))",
" precision recall f1-score support\n\n 0 0.79 0.92 0.85 154\n 1 0.85 0.67 0.75 114\n\n accuracy 0.81 268\n macro avg 0.82 0.79 0.80 268\nweighted avg 0.82 0.81 0.81 268\n\n\n\n[[141 13]\n [ 38 76]]\n"
],
[
"# GridsearchCV\n\nmodel = KerasClassifier(build_fn = titanic_model, verbose = 0)\n\nbatch_size = [16, 32, 64]\nepochs = [50, 100]\nparam_grid = dict(batch_size = batch_size, epochs = epochs)\n\ngrid = GridSearchCV(estimator = model, \n param_grid = param_grid,\n cv = 3,\n verbose = 2, n_jobs = -1)\n\ngrid_result = grid.fit(X_train, y_train)",
"Fitting 3 folds for each of 6 candidates, totalling 18 fits\n"
],
[
"grid_result",
"_____no_output_____"
],
[
"grid.best_params_",
"_____no_output_____"
],
[
"# Make predictions on test set\n\ntest.drop(['PassengerId','Name', 'Ticket', 'Cabin'], axis = 1, inplace = True)\nnum = round(test[['Age']].mean(), 2)\ntest['Age'] = test['Age'].fillna(float(num))\ntest = pd.get_dummies(test, drop_first = True)",
"_____no_output_____"
],
[
"test_solution = pd.DataFrame(test, columns = ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare', 'Sex_male',\n 'Embarked_Q', 'Embarked_S'])\ntest_solution",
"_____no_output_____"
],
[
"test_scaled = scaler.transform(test)\nprediction_test = model.predict_classes(test_scaled)\nprediction_test = prediction_test.ravel()\nprediction_test",
"C:\\Users\\User\\Anaconda3\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\sequential.py:342: RuntimeWarning: invalid value encountered in greater\n return (proba > 0.5).astype('int32')\n"
],
[
"test_solution['Survived'] = prediction_test\ntest_solution",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e755fb2a7794dd91c1f9338ac366b690b80d718f | 53,790 | ipynb | Jupyter Notebook | other/python_recap.ipynb | alirezadir/deep-learning | 332833c28b9a20fd7078eaf270b9838b821e2ade | [
"MIT"
] | 1 | 2020-03-15T23:55:52.000Z | 2020-03-15T23:55:52.000Z | other/python_recap.ipynb | iraghavr/deep-learning | 332833c28b9a20fd7078eaf270b9838b821e2ade | [
"MIT"
] | null | null | null | other/python_recap.ipynb | iraghavr/deep-learning | 332833c28b9a20fd7078eaf270b9838b821e2ade | [
"MIT"
] | 1 | 2019-11-26T16:11:43.000Z | 2019-11-26T16:11:43.000Z | 56.980932 | 33,764 | 0.778323 | [
[
[
"import matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
[
"%matplotlib inline ",
"_____no_output_____"
],
[
"x = np.linspace(0,1,300)\nfor w in range(2,6,2):\n plt.plot(x, np.sin(2*np.pi*w*x))\n",
"_____no_output_____"
],
[
"%pdb\n\nnumbers= \"hello\"\nsum(numbers)",
"Automatic pdb calling has been turned ON\n"
],
[
"#scalar - has 0 dim\nv = np.array(5)\nv",
"_____no_output_____"
],
[
"v.shape # 0 dim",
"_____no_output_____"
],
[
"v.dtype # dtype: can be int64, uint64, ..... In python it's just int, float, etc",
"_____no_output_____"
],
[
"# vector - 1 dim - pass a list\nv = np.array([1,2,3,4])",
"_____no_output_____"
],
[
"v.shape # (4,) means 1 dim (it's a tuple) ",
"_____no_output_____"
],
[
"x = v.reshape(1,4)\nx",
"_____no_output_____"
],
[
"x.shape",
"_____no_output_____"
],
[
"x = v [None, :]\nx",
"_____no_output_____"
],
[
"x.shape",
"_____no_output_____"
],
[
"x = v[:,None]\nx",
"_____no_output_____"
],
[
"x.shape",
"_____no_output_____"
],
[
"# matrix - 2D - pass a list of lists\nm = np.array([[1,2,3], [4,5,6], [7,8,9]])\nm",
"_____no_output_____"
],
[
"m.shape",
"_____no_output_____"
],
[
"# tensor, any n-dim matrix \nt = np.array([[[[1],[2]],[[3],[4]],[[5],[6]]],[[[7],[8]],\\\n [[9],[10]],[[11],[12]]],[[[13],[14]],[[15],[16]],[[17],[17]]]])\nt.shape",
"_____no_output_____"
]
],
[
[
"## matrices",
"_____no_output_____"
],
[
"### element wise product",
"_____no_output_____"
]
],
[
[
"a = np.array([[1,2], [3,4]])\nb = np.array([[4,5], [6,7]])\n",
"_____no_output_____"
],
[
"a * b # element wise",
"_____no_output_____"
],
[
"np.multiply(a, b) # element wise",
"_____no_output_____"
]
],
[
[
"### matrix product",
"_____no_output_____"
]
],
[
[
"np.dot(a,b)",
"_____no_output_____"
],
[
"np.matmul(a,b)",
"_____no_output_____"
],
[
"a.dot(b)",
"_____no_output_____"
],
[
"# transpose \na_t = a.T\na_t",
"_____no_output_____"
],
[
"a = np.array([2])\nb= np.array([[2],[ 3]])\nprint(a)\nprint(b)\na.dot(b)",
"[2]\n[[2]\n [3]]\n"
],
[
"a_t[0][1] = 5 # note: the transpose is a view of the original array, so a change in a_t changes a too\na",
"_____no_output_____"
],
[
"x = np.array([0.5, -0.2, 0.1]) # (3,)\ny = np.array([2,4]) #(2,)\nprint(x[:,None] * y) #(3,1) * (2,) = (3,2) and element-wise",
"[[ 1. 2. ]\n [-0.4 -0.8]\n [ 0.2 0.4]]\n"
],
[
"# used for mini bacthes\na = list(range(10))\nfor i in range(0,10,3):\n print(a[i:i+3])",
"[0, 1, 2]\n[3, 4, 5]\n[6, 7, 8]\n[9]\n"
],
[
"## Glob",
"_____no_output_____"
],
[
"# glob\nimport glob \nimport numpy as np\nglob.glob('*.py')\nnp.array([item for item in sorted(glob.glob('*.py'))])",
"_____no_output_____"
],
[
"# exceptions ",
"_____no_output_____"
],
[
"try: \n    print(1/0)\nexcept ZeroDivisionError: # except is similar to catch \n    print(\"you can't divide by 0\")",
"you can't divide by 0\n"
],
[
"def pos_test(a):\n if (a>0):\n pass\n else: \n raise ValueError(\"should be positive\")\n #or raise Exception(\"should be positive\")\ntry: \n a = -2\n pos_test(a)\nexcept ValueError as err:\n print(err)\n print('we got value err')\n",
"should be positive\nwe got value err\n"
],
[
"# # write codes to file\n# with open('codes', 'w') as f:\n# codes.tofile(f) # codes are numpy arrays\n \n# # write labels to file\n# import csv\n# with open('labels', 'w') as f:\n# writer = csv.writer(f, delimiter='\\n')\n# writer.writerow(labels)\n\n# # read codes and labels from file\n# import csv\n\n# with open('labels') as f:\n# reader = csv.reader(f, delimiter='\\n')\n# labels = np.array([each for each in reader if len(each) > 0]).squeeze()\n# with open('codes') as f:\n# codes = np.fromfile(f, dtype=np.float32)\n# codes = codes.reshape((len(labels), -1))",
"_____no_output_____"
],
[
"## Random ",
"_____no_output_____"
],
[
"import random \na = np.array(range(10))\nrandom.shuffle(a)\nprint(a)\n",
"[1 9 8 5 6 2 7 4 3 0]\n"
],
[
"## tqdm ",
"_____no_output_____"
],
[
"from tqdm import tqdm \nfor i in tqdm(range(10)):\n print(i)",
"100%|██████████| 10/10 [00:00<00:00, 7583.27it/s]"
]
],
[
[
"## backprop ",
"_____no_output_____"
]
],
[
[
"activation_function = lambda x : 1 / (1 + np.exp(-x)) \nlist(map(activation_function, [2,3]))",
"_____no_output_____"
],
[
"import numpy as np\n\ndef sigmoid(x):\n \"\"\"\n Calculate sigmoid\n \"\"\"\n return 1 / (1 + np.exp(-x))\n\n\nx = np.array([0.5, 0.1, -0.2])\ntarget = 0.6\nlearnrate = 0.5\n\nweights_input_hidden = np.array([[0.5, -0.6],\n [0.1, -0.2],\n [0.1, 0.7]])\n\nweights_hidden_output = np.array([0.1, -0.3])\n\n## Forward pass\nhidden_layer_input = np.dot(x, weights_input_hidden)\nhidden_layer_output = sigmoid(hidden_layer_input)\n\noutput_layer_in = np.dot(hidden_layer_output, weights_hidden_output)\noutput = sigmoid(output_layer_in)\n\n## Backwards pass\n## TODO: Calculate output error\nerror = target - output\n\n# TODO: Calculate error term for output layer\noutput_error_term = error * output * (1 - output)\n\n# TODO: Calculate error term for hidden layer\nhidden_error_term = output_error_term * weights_hidden_output * sigmoid(hidden_layer_input) * (1 - sigmoid(hidden_layer_input))\n\n# TODO: Calculate change in weights for hidden layer to output layer\ndelta_w_h_o = learnrate * output_error_term * hidden_layer_output\n\n# TODO: Calculate change in weights for input layer to hidden layer\ndelta_w_i_h = learnrate * hidden_error_term * x[:,None]\n\nprint('Change in weights for hidden layer to output layer:')\nprint(delta_w_h_o)\nprint('Change in weights for input layer to hidden layer:')\nprint(delta_w_i_h)\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e756018f5b848e6438970f385024e5914f957f7f | 12,310 | ipynb | Jupyter Notebook | examples/compiler/03_iet-A.ipynb | millennial-geoscience/devito | 4e5903a22c7a238ac66d6179523ffa440de98ac0 | [
"MIT"
] | 1 | 2020-06-08T20:44:35.000Z | 2020-06-08T20:44:35.000Z | examples/compiler/03_iet-A.ipynb | millennial-geoscience/devito | 4e5903a22c7a238ac66d6179523ffa440de98ac0 | [
"MIT"
] | 3 | 2020-11-30T05:38:22.000Z | 2022-03-07T14:02:05.000Z | examples/compiler/03_iet-A.ipynb | millennial-geoscience/devito | 4e5903a22c7a238ac66d6179523ffa440de98ac0 | [
"MIT"
] | 1 | 2021-01-05T07:27:35.000Z | 2021-01-05T07:27:35.000Z | 25.381443 | 312 | 0.514785 | [
[
[
"In this tutorial we will show how to access and navigate the Iteration/Expression Tree (IET) rooted in an `Operator`.\n\n\n# Part I - Top Down\n\nLet's start with a fairly trivial example. First of all, we disable all performance-related optimizations, to maximize the simplicity of the created IET as well as the readability of the generated code.",
"_____no_output_____"
]
],
[
[
"from devito import configuration\nconfiguration['opt'] = 'noop'\nconfiguration['language'] = 'C'",
"_____no_output_____"
]
],
[
[
"Then, we create a `TimeFunction` with 3 points in each of the space `Dimension`s _x_ and _y_.",
"_____no_output_____"
]
],
[
[
"from devito import Grid, TimeFunction\n\ngrid = Grid(shape=(3, 3))\nu = TimeFunction(name='u', grid=grid)",
"_____no_output_____"
]
],
[
[
"We now create an `Operator` that increments by 1 all points in the computational domain.",
"_____no_output_____"
]
],
[
[
"from devito import Eq, Operator\n\neq = Eq(u.forward, u+1)\nop = Operator(eq)",
"_____no_output_____"
]
],
[
[
"An `Operator` is an IET node that can generate, JIT-compile, and run low-level code (e.g., C). Just like all other types of IET nodes, it has a number of metadata attached. For example, we can query an `Operator` to retrieve the input/output `Function`s.",
"_____no_output_____"
]
],
[
[
"op.input",
"_____no_output_____"
],
[
"op.output",
"_____no_output_____"
]
],
[
[
"If we print `op`, we can see what the generated code looks like.",
"_____no_output_____"
]
],
[
[
"print(op)",
"#define _POSIX_C_SOURCE 200809L\n#include \"stdlib.h\"\n#include \"math.h\"\n#include \"sys/time.h\"\n\nstruct dataobj\n{\n void *restrict data;\n int * size;\n int * npsize;\n int * dsize;\n int * hsize;\n int * hofs;\n int * oofs;\n} ;\n\nstruct profiler\n{\n double section0;\n} ;\n\n\nint Kernel(struct dataobj *restrict u_vec, const int time_M, const int time_m, struct profiler * timers, const int x_M, const int x_m, const int y_M, const int y_m)\n{\n float (*restrict u)[u_vec->size[1]][u_vec->size[2]] __attribute__ ((aligned (64))) = (float (*)[u_vec->size[1]][u_vec->size[2]]) u_vec->data;\n for (int time = time_m, t0 = (time)%(2), t1 = (time + 1)%(2); time <= time_M; time += 1, t0 = (time)%(2), t1 = (time + 1)%(2))\n {\n struct timeval start_section0, end_section0;\n gettimeofday(&start_section0, NULL);\n /* Begin section0 */\n for (int x = x_m; x <= x_M; x += 1)\n {\n for (int y = y_m; y <= y_M; y += 1)\n {\n u[t1][x + 1][y + 1] = u[t0][x + 1][y + 1] + 1;\n }\n }\n /* End section0 */\n gettimeofday(&end_section0, NULL);\n timers->section0 += (double)(end_section0.tv_sec-start_section0.tv_sec)+(double)(end_section0.tv_usec-start_section0.tv_usec)/1000000;\n }\n return 0;\n}\n\n"
]
],
[
[
"An `Operator` is the root of an IET that typically consists of several nested `Iteration`s and `Expression`s – two other fundamental IET node types. The user-provided SymPy equations are wrapped within `Expressions`. Loop nests embedding such expressions are constructed by suitably nesting `Iterations`.\n\nThe Devito compiler constructs the IET from a collection of `Cluster`s, which represent a higher-level intermediate representation (not covered in this tutorial).\n\nThe Devito compiler also attaches to the IET key computational properties, such as _sequential_, _parallel_, and _affine_, which are derived through data dependence analysis.\n\nWe can print the IET structure of an `Operator`, as well as the attached computational properties, using the utility function `pprint`.",
"_____no_output_____"
]
],
[
[
"from devito.tools import pprint\npprint(op)",
"<Callable Kernel>\n <ArrayCast>\n <List (0, 1, 0)>\n\n <[affine,sequential,wrappable] Iteration time::time::(time_m, time_M, 1)>\n <TimedList (2, 1, 2)>\n <Section (1)>\n\n <[affine,parallel,parallel=,tilable] Iteration x::x::(x_m, x_M, 1)>\n <[affine,parallel,parallel=,tilable] Iteration y::y::(y_m, y_M, 1)>\n <ExpressionBundle (1)>\n\n <Expression u[t1, x + 1, y + 1] = u[t0, x + 1, y + 1] + 1>\n\n\n\n"
]
],
[
[
"In this example, `op` is represented as a `<Callable Kernel>`. Attached to it are metadata, such as `_headers` and `_includes`, as well as the `body`, which includes the children IET nodes. Here, the body is the concatenation of an `ArrayCast` and a `List` object.\n",
"_____no_output_____"
]
],
[
[
"op._headers",
"_____no_output_____"
],
[
"op._includes",
"_____no_output_____"
],
[
"op.body",
"_____no_output_____"
]
],
[
[
"We can explicitly traverse the `body` until we locate the user-provided `SymPy` equations.",
"_____no_output_____"
]
],
[
[
"print(op.body[0]) # Printing the ArrayCast",
"float (*restrict u)[u_vec->size[1]][u_vec->size[2]] __attribute__ ((aligned (64))) = (float (*)[u_vec->size[1]][u_vec->size[2]]) u_vec->data;\n"
],
[
"print(op.body[1]) # Printing the List",
"for (int time = time_m, t0 = (time)%(2), t1 = (time + 1)%(2); time <= time_M; time += 1, t0 = (time)%(2), t1 = (time + 1)%(2))\n{\n struct timeval start_section0, end_section0;\n gettimeofday(&start_section0, NULL);\n /* Begin section0 */\n for (int x = x_m; x <= x_M; x += 1)\n {\n for (int y = y_m; y <= y_M; y += 1)\n {\n u[t1][x + 1][y + 1] = u[t0][x + 1][y + 1] + 1;\n }\n }\n /* End section0 */\n gettimeofday(&end_section0, NULL);\n timers->section0 += (double)(end_section0.tv_sec-start_section0.tv_sec)+(double)(end_section0.tv_usec-start_section0.tv_usec)/1000000;\n}\n"
]
],
[
[
"Below we access the `Iteration` representing the time loop.",
"_____no_output_____"
]
],
[
[
"t_iter = op.body[1].body[0]\nt_iter",
"_____no_output_____"
]
],
[
[
"We can for example inspect the `Iteration` to discover what its iteration bounds are.",
"_____no_output_____"
]
],
[
[
"t_iter.limits",
"_____no_output_____"
]
],
[
[
"And as we keep going down through the IET, we can eventually reach the `Expression` wrapping the user-provided SymPy equation.",
"_____no_output_____"
]
],
[
[
"expr = t_iter.nodes[0].body[0].body[0].nodes[0].nodes[0].body[0]\nexpr.view",
"_____no_output_____"
]
],
[
[
"Of course, there are mechanisms in place to, for example, find all `Expression`s in a given IET. The Devito compiler has a number of IET visitors, among which `FindNodes`, usable to retrieve all nodes of a particular type. So we can easily get all `Expression`s within `op` as follows",
"_____no_output_____"
]
],
[
[
"from devito.ir.iet import Expression, FindNodes\nexprs = FindNodes(Expression).visit(op)\nexprs[0].view",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e756098a52e880885c4500100073e6af12e4c706 | 278,858 | ipynb | Jupyter Notebook | Major_Project_4.ipynb | allokkk/Diabetic-Detection | 70dc12c824c365e894696d8f590928a61eaa341c | [
"MIT"
] | null | null | null | Major_Project_4.ipynb | allokkk/Diabetic-Detection | 70dc12c824c365e894696d8f590928a61eaa341c | [
"MIT"
] | null | null | null | Major_Project_4.ipynb | allokkk/Diabetic-Detection | 70dc12c824c365e894696d8f590928a61eaa341c | [
"MIT"
] | null | null | null | 176.492405 | 162,844 | 0.673633 | [
[
[
"<a href=\"https://colab.research.google.com/github/allokkk/Diabetic-Detection/blob/master/Major_Project_4.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"#mount the drive\nfrom google.colab import drive\ndrive.mount('/content/drive')",
"Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n\nEnter your authorization code:\n··········\nMounted at /content/drive\n"
],
[
"import os\nfolders=os.listdir(\"/content/drive/My Drive/Data train\")\nprint(folders)",
"['17725_left.jpeg', '17725_right.jpeg', '17728_left.jpeg', '17728_right.jpeg', '1772_left.jpeg', '1772_right.jpeg', '17731_left.jpeg', '17731_right.jpeg', '17732_left.jpeg', '17732_right.jpeg', '17734_left.jpeg', '17734_right.jpeg', '17736_left.jpeg', '17736_right.jpeg', '17743_left.jpeg', '17743_right.jpeg', '17746_left.jpeg', '17746_right.jpeg', '17747_left.jpeg', '17747_right.jpeg', '17749_left.jpeg', '17749_right.jpeg', '17750_left.jpeg', '17750_right.jpeg', '17759_left.jpeg', '17759_right.jpeg', '17760_left.jpeg', '17760_right.jpeg', '17762_left.jpeg', '17762_right.jpeg', '17768_left.jpeg', '17768_right.jpeg', '17769_left.jpeg', '17769_right.jpeg', '17770_left.jpeg', '17770_right.jpeg', '17779_left.jpeg', '17779_right.jpeg', '1777_left.jpeg', '1777_right.jpeg', '17782_left.jpeg', '17782_right.jpeg', '17788_left.jpeg', '17788_right.jpeg', '17792_left.jpeg', '17792_right.jpeg', '17794_left.jpeg', '17794_right.jpeg', '17796_left.jpeg', '17796_right.jpeg', '17798_left.jpeg', '17798_right.jpeg', '1779_left.jpeg', '1779_right.jpeg', '177_right.jpeg', '17801_left.jpeg', '17801_right.jpeg', '17802_left.jpeg', '17802_right.jpeg', '17805_left.jpeg', '17805_right.jpeg', '17806_left.jpeg', '17806_right.jpeg', '17808_left.jpeg', '17808_right.jpeg', '17811_left.jpeg', '17811_right.jpeg', '17814_left.jpeg', '17814_right.jpeg', '17817_left.jpeg', '17817_right.jpeg', '17819_left.jpeg', '17819_right.jpeg', '17831_left.jpeg', '17831_right.jpeg', '17838_left.jpeg', '17838_right.jpeg', '17840_left.jpeg', '17840_right.jpeg', '17843_left.jpeg', '17843_right.jpeg', '17846_left.jpeg', '17846_right.jpeg', '17847_left.jpeg', '17847_right.jpeg', '17850_left.jpeg', '17852_left.jpeg', '17852_right.jpeg', '17855_left.jpeg', '17855_right.jpeg', '17857_left.jpeg', '17857_right.jpeg', '17869_left.jpeg', '17869_right.jpeg', '1786_left.jpeg', '1786_right.jpeg', '17871_left.jpeg', '17871_right.jpeg', '17872_left.jpeg', '17872_right.jpeg', '17873_left.jpeg', '17873_right.jpeg', '17875_left.jpeg', 
'17875_right.jpeg', '17876_left.jpeg', '17876_right.jpeg', '17879_left.jpeg', '17879_right.jpeg', '17887_left.jpeg', '17887_right.jpeg', '17888_left.jpeg', '17888_right.jpeg', '17899_left.jpeg', '17900_left.jpeg', '17900_right.jpeg', '17901_left.jpeg', '17901_right.jpeg', '17903_left.jpeg', '17903_right.jpeg', '17912_left.jpeg', '17912_right.jpeg', '17921_left.jpeg', '17921_right.jpeg', '17922_left.jpeg', '17922_right.jpeg', '17925_left.jpeg', '17925_right.jpeg', '17931_left.jpeg', '17931_right.jpeg', '17934_left.jpeg', '17934_right.jpeg', '1793_left.jpeg', '1793_right.jpeg', '17940_left.jpeg', '17940_right.jpeg', '17945_left.jpeg', '17945_right.jpeg', '17953_left.jpeg', '17953_right.jpeg', '17967_left.jpeg', '17967_right.jpeg', '17973_left.jpeg', '17973_right.jpeg', '17974_left.jpeg', '17974_right.jpeg', '17979_left.jpeg', '17979_right.jpeg', '17983_left.jpeg', '17983_right.jpeg', '17985_left.jpeg', '17985_right.jpeg', '17990_left.jpeg', '17990_right.jpeg', '17992_left.jpeg', '17992_right.jpeg', '17998_left.jpeg', '17998_right.jpeg', '1799_left.jpeg', '1799_right.jpeg', '179_left.jpeg', '179_right.jpeg', '18000_left.jpeg', '18000_right.jpeg', '18010_left.jpeg', '18010_right.jpeg', '18011_left.jpeg', '18011_right.jpeg', '18015_left.jpeg', '18015_right.jpeg', '18017_left.jpeg', '18017_right.jpeg', '18020_left.jpeg', '18020_right.jpeg', '18021_right.jpeg', '18022_left.jpeg', '18022_right.jpeg', '18023_left.jpeg', '18023_right.jpeg', '18026_left.jpeg', '18026_right.jpeg', '18028_left.jpeg', '18028_right.jpeg', '18029_left.jpeg', '18029_right.jpeg', '18030_left.jpeg', '18030_right.jpeg', '18031_left.jpeg', '18031_right.jpeg', '18035_left.jpeg', '18035_right.jpeg', '18039_left.jpeg', '18039_right.jpeg', '1803_left.jpeg', '1803_right.jpeg', '18042_left.jpeg', '18042_right.jpeg', '18045_left.jpeg', '18045_right.jpeg', '18046_left.jpeg', '18046_right.jpeg', '18048_left.jpeg', '18048_right.jpeg', '18054_left.jpeg', '18054_right.jpeg', '18056_left.jpeg', '18056_right.jpeg', 
'18058_left.jpeg', '18058_right.jpeg', '18059_left.jpeg', '18059_right.jpeg', '18065_left.jpeg', '18065_right.jpeg', '18066_left.jpeg', '18066_right.jpeg', '18070_left.jpeg', '18070_right.jpeg', '18071_left.jpeg', '18071_right.jpeg', '18072_left.jpeg', '18072_right.jpeg', '18073_left.jpeg', '18073_right.jpeg', '18080_left.jpeg', '18080_right.jpeg', '18085_left.jpeg', '18085_right.jpeg', '1808_left.jpeg', '1808_right.jpeg', '18092_left.jpeg', '18092_right.jpeg', '18093_left.jpeg', '18093_right.jpeg', '18095_left.jpeg', '18095_right.jpeg', '18099_left.jpeg', '18099_right.jpeg', '18100_left.jpeg', '18100_right.jpeg', '18101_left.jpeg', '18101_right.jpeg', '18107_left.jpeg', '18107_right.jpeg', '18109_left.jpeg', '18109_right.jpeg', '18119_left.jpeg', '18119_right.jpeg', '18120_left.jpeg', '18120_right.jpeg', '18126_left.jpeg', '18126_right.jpeg', '18128_left.jpeg', '18128_right.jpeg', '18136_left.jpeg', '18136_right.jpeg', '18138_left.jpeg', '18138_right.jpeg', '18143_left.jpeg', '18143_right.jpeg', '18147_left.jpeg', '18147_right.jpeg', '18149_right.jpeg', '18150_left.jpeg', '18150_right.jpeg', '18151_left.jpeg', '18151_right.jpeg', '18157_left.jpeg', '18157_right.jpeg', '18158_left.jpeg', '18158_right.jpeg', '18172_left.jpeg', '18172_right.jpeg', '18173_left.jpeg', '18173_right.jpeg', '18175_left.jpeg', '18175_right.jpeg', '18182_left.jpeg', '18182_right.jpeg', '18183_left.jpeg', '18183_right.jpeg', '18184_left.jpeg', '18184_right.jpeg', '18185_left.jpeg', '18185_right.jpeg', '18186_left.jpeg', '18186_right.jpeg', '18187_left.jpeg', '18187_right.jpeg', '18188_left.jpeg', '18188_right.jpeg', '18191_left.jpeg', '18191_right.jpeg', '18197_left.jpeg', '18197_right.jpeg', '18198_left.jpeg', '18198_right.jpeg', '18203_left.jpeg', '18203_right.jpeg', '18205_left.jpeg', '18205_right.jpeg', '18210_left.jpeg', '18210_right.jpeg', '18213_left.jpeg', '18213_right.jpeg', '18218_left.jpeg', '18218_right.jpeg', '18219_left.jpeg', '18219_right.jpeg', '18226_left.jpeg', 
'18226_right.jpeg', '18227_left.jpeg', '18227_right.jpeg', '18233_left.jpeg', '18233_right.jpeg', '18235_left.jpeg', '18235_right.jpeg', '1823_left.jpeg', '1823_right.jpeg', '18245_right.jpeg', '18252_left.jpeg', '18252_right.jpeg', '18253_left.jpeg', '18253_right.jpeg', '18255_left.jpeg', '18255_right.jpeg', '18259_left.jpeg', '18259_right.jpeg', '18267_left.jpeg', '18267_right.jpeg', '1826_left.jpeg', '1826_right.jpeg', '18275_left.jpeg', '18275_right.jpeg', '18277_left.jpeg', '18277_right.jpeg', '18279_left.jpeg', '18279_right.jpeg', '1827_left.jpeg', '1827_right.jpeg', '18293_left.jpeg', '18293_right.jpeg', '18294_left.jpeg', '18294_right.jpeg', '18297_left.jpeg', '18297_right.jpeg', '18298_left.jpeg', '18300_left.jpeg', '18300_right.jpeg', '18301_left.jpeg', '18301_right.jpeg', '18303_left.jpeg', '18303_right.jpeg', '18307_left.jpeg', '18307_right.jpeg', '18308_left.jpeg', '18308_right.jpeg', '18309_left.jpeg', '18309_right.jpeg', '18310_left.jpeg', '18310_right.jpeg', '18318_left.jpeg', '18318_right.jpeg', '18320_left.jpeg', '18320_right.jpeg', '18325_left.jpeg', '18325_right.jpeg', '18329_left.jpeg', '18329_right.jpeg', '18333_left.jpeg', '18333_right.jpeg', '18337_right.jpeg', '1833_left.jpeg', '1833_right.jpeg', '18345_left.jpeg', '18345_right.jpeg', '18346_left.jpeg', '18346_right.jpeg', '18350_left.jpeg', '18350_right.jpeg', '18353_left.jpeg', '18353_right.jpeg', '18355_left.jpeg', '18355_right.jpeg', '18357_left.jpeg', '18357_right.jpeg', '1835_left.jpeg', '1835_right.jpeg', '18362_left.jpeg', '18362_right.jpeg', '18363_left.jpeg', '18363_right.jpeg', '18365_left.jpeg', '18365_right.jpeg', '18368_left.jpeg', '18368_right.jpeg', '1836_left.jpeg', '1836_right.jpeg', '18370_left.jpeg', '18370_right.jpeg', '18372_left.jpeg', '18372_right.jpeg', '18380_left.jpeg', '18380_right.jpeg', '1839_left.jpeg', '1839_right.jpeg', '18402_left.jpeg', '18402_right.jpeg', '18404_left.jpeg', '18404_right.jpeg', '18405_left.jpeg', '18405_right.jpeg', '18408_left.jpeg', 
'18408_right.jpeg', '18416_left.jpeg', '18416_right.jpeg', '18417_left.jpeg', '18417_right.jpeg', '18418_left.jpeg', '18418_right.jpeg', '18422_left.jpeg', '18422_right.jpeg', '18426_left.jpeg', '18426_right.jpeg', '18429_left.jpeg', '18429_right.jpeg', '18433_left.jpeg', '18433_right.jpeg', '18434_left.jpeg', '18434_right.jpeg', '18437_left.jpeg', '18437_right.jpeg', '18438_left.jpeg', '18438_right.jpeg', '18443_left.jpeg', '18443_right.jpeg', '18444_left.jpeg', '18444_right.jpeg', '18447_left.jpeg', '18447_right.jpeg', '18449_left.jpeg', '18449_right.jpeg', '18450_left.jpeg', '18450_right.jpeg', '18452_left.jpeg', '18452_right.jpeg', '18456_left.jpeg', '18456_right.jpeg', '18458_left.jpeg', '18458_right.jpeg', '18463_left.jpeg', '18463_right.jpeg', '18465_left.jpeg', '18470_left.jpeg', '18470_right.jpeg', '18471_left.jpeg', '18471_right.jpeg', '18474_left.jpeg', '18474_right.jpeg', '18476_left.jpeg', '18476_right.jpeg', '18478_left.jpeg', '18478_right.jpeg', '1847_left.jpeg', '1847_right.jpeg', '18483_left.jpeg', '18483_right.jpeg', '18486_left.jpeg', '18491_left.jpeg', '18491_right.jpeg', '18493_left.jpeg', '18493_right.jpeg', '18496_left.jpeg', '18496_right.jpeg', '1849_left.jpeg', '1849_right.jpeg', '18502_left.jpeg', '18502_right.jpeg', '18512_left.jpeg', '18512_right.jpeg', '18515_left.jpeg', '18515_right.jpeg', '18517_left.jpeg', '18517_right.jpeg', '1851_left.jpeg', '1851_right.jpeg', '18524_left.jpeg', '18524_right.jpeg', '18527_left.jpeg', '18527_right.jpeg', '18528_left.jpeg', '18528_right.jpeg', '18534_left.jpeg', '18534_right.jpeg', '18535_left.jpeg', '18535_right.jpeg', '18542_left.jpeg', '18542_right.jpeg', '18546_left.jpeg', '18546_right.jpeg', '18548_left.jpeg', '18548_right.jpeg', '18558_left.jpeg', '18558_right.jpeg', '1855_left.jpeg', '1855_right.jpeg', '18560_left.jpeg', '18560_right.jpeg', '18567_left.jpeg', '18567_right.jpeg', '18573_left.jpeg', '18573_right.jpeg', '18574_left.jpeg', '18574_right.jpeg', '18575_left.jpeg', '18575_right.jpeg', 
'18576_left.jpeg', '18576_right.jpeg', '18583_left.jpeg', '18583_right.jpeg', '18584_left.jpeg', '18584_right.jpeg', '18594_left.jpeg', '18594_right.jpeg', '18597_left.jpeg', '18597_right.jpeg', '18598_left.jpeg', '18598_right.jpeg', '1859_left.jpeg', '1859_right.jpeg', '18601_left.jpeg', '18601_right.jpeg', '18603_left.jpeg', '18603_right.jpeg', '18605_left.jpeg', '18605_right.jpeg', '18608_left.jpeg', '18608_right.jpeg', '18610_left.jpeg', '18610_right.jpeg', '18612_left.jpeg', '18612_right.jpeg', '18616_left.jpeg', '18616_right.jpeg', '18618_left.jpeg', '18618_right.jpeg', '18619_left.jpeg', '18619_right.jpeg', '18625_left.jpeg', '18625_right.jpeg', '18631_left.jpeg', '18631_right.jpeg', '18633_left.jpeg', '18633_right.jpeg', '18647_left.jpeg', '18647_right.jpeg', '18648_left.jpeg', '18648_right.jpeg', '18653_left.jpeg', '18653_right.jpeg', '18655_left.jpeg', '18655_right.jpeg', '18656_left.jpeg', '18656_right.jpeg', '18663_left.jpeg', '18663_right.jpeg', '18666_left.jpeg', '18666_right.jpeg', '18667_left.jpeg', '18667_right.jpeg', '18671_left.jpeg', '18671_right.jpeg', '18678_left.jpeg', '18678_right.jpeg', '1867_left.jpeg', '1867_right.jpeg', '18681_left.jpeg', '18681_right.jpeg', '18684_left.jpeg', '18684_right.jpeg', '18685_left.jpeg', '18685_right.jpeg', '1868_left.jpeg', '1868_right.jpeg', '18696_left.jpeg', '18696_right.jpeg', '18699_left.jpeg', '18699_right.jpeg', '186_left.jpeg', '186_right.jpeg', '18704_left.jpeg', '18704_right.jpeg', '18706_left.jpeg', '18706_right.jpeg', '18709_left.jpeg', '18709_right.jpeg', '18715_left.jpeg', '18715_right.jpeg', '18717_left.jpeg', '18717_right.jpeg', '18722_left.jpeg', '18722_right.jpeg', '1872_left.jpeg', '1872_right.jpeg', '18734_left.jpeg', '18734_right.jpeg', '18736_left.jpeg', '18736_right.jpeg', '18737_left.jpeg', '18737_right.jpeg', '1873_left.jpeg', '1873_right.jpeg', '18744_left.jpeg', '18744_right.jpeg', '18745_left.jpeg', '18745_right.jpeg', '18748_left.jpeg', '18748_right.jpeg', '18750_left.jpeg', 
'18750_right.jpeg', '18759_left.jpeg', '18759_right.jpeg', '18762_left.jpeg', '18762_right.jpeg', '18763_left.jpeg', '18763_right.jpeg', '18767_left.jpeg', '18767_right.jpeg', '18770_left.jpeg', '18770_right.jpeg', '18779_left.jpeg', '18779_right.jpeg', '18780_left.jpeg', '18780_right.jpeg', '18783_left.jpeg', '18783_right.jpeg', '18786_left.jpeg', '18786_right.jpeg', '18790_left.jpeg', '18790_right.jpeg', '18798_left.jpeg', '18798_right.jpeg', '18802_left.jpeg', '18802_right.jpeg', '18803_left.jpeg', '18803_right.jpeg', '18804_left.jpeg', '18804_right.jpeg', '18807_left.jpeg', '18807_right.jpeg', '1880_left.jpeg', '1880_right.jpeg', '18812_left.jpeg', '18812_right.jpeg', '18815_left.jpeg', '18815_right.jpeg', '18816_left.jpeg', '18816_right.jpeg', '18819_left.jpeg', '18819_right.jpeg', '18823_left.jpeg', '18823_right.jpeg', '18824_left.jpeg', '18824_right.jpeg', '18825_left.jpeg', '18825_right.jpeg', '18826_left.jpeg', '18826_right.jpeg', '18829_left.jpeg', '18829_right.jpeg', '18836_left.jpeg', '18836_right.jpeg', '18837_left.jpeg', '18837_right.jpeg', '18846_left.jpeg', '18846_right.jpeg', '18850_left.jpeg', '18850_right.jpeg', '18853_left.jpeg', '18853_right.jpeg', '18861_left.jpeg', '18861_right.jpeg', '18863_left.jpeg', '18863_right.jpeg', '18864_left.jpeg', '18866_left.jpeg', '18866_right.jpeg', '18875_left.jpeg', '18875_right.jpeg', '18876_left.jpeg', '18876_right.jpeg', '1887_left.jpeg', '1887_right.jpeg', '18880_left.jpeg', '18880_right.jpeg', '18881_left.jpeg', '18881_right.jpeg', '1889_left.jpeg', '1889_right.jpeg', '18901_left.jpeg', '18901_right.jpeg', '18902_left.jpeg', '18902_right.jpeg', '18903_left.jpeg', '18903_right.jpeg', '18905_left.jpeg', '18905_right.jpeg', '18909_left.jpeg', '18909_right.jpeg', '18911_left.jpeg', '18911_right.jpeg', '18915_left.jpeg', '18915_right.jpeg', '18920_left.jpeg', '18920_right.jpeg', '18924_left.jpeg', '18924_right.jpeg', '18930_left.jpeg', '18930_right.jpeg', '18941_left.jpeg', '18941_right.jpeg', 
'18942_left.jpeg', '18942_right.jpeg', '18946_left.jpeg', '18946_right.jpeg', '18947_left.jpeg', '18947_right.jpeg', '18948_left.jpeg', '18948_right.jpeg', '18951_left.jpeg', '18951_right.jpeg', '18971_left.jpeg', '18971_right.jpeg', '18974_left.jpeg', '18974_right.jpeg', '18982_left.jpeg', '18982_right.jpeg', '18984_left.jpeg', '18984_right.jpeg', '18986_left.jpeg', '18986_right.jpeg', '18987_left.jpeg', '18987_right.jpeg', '18991_left.jpeg', '18991_right.jpeg', '18993_left.jpeg', '18993_right.jpeg', '19000_left.jpeg', '19000_right.jpeg', '19005_left.jpeg', '19005_right.jpeg', '19011_left.jpeg', '19011_right.jpeg', '19018_left.jpeg', '19018_right.jpeg', '19020_left.jpeg', '19020_right.jpeg', '19022_left.jpeg', '19022_right.jpeg', '19030_left.jpeg', '19030_right.jpeg', '19041_left.jpeg', '19041_right.jpeg', '19043_left.jpeg', '19043_right.jpeg', '19045_left.jpeg', '19045_right.jpeg', '19047_left.jpeg', '19047_right.jpeg', '19056_left.jpeg', '19056_right.jpeg', '19059_left.jpeg', '19059_right.jpeg', '19061_left.jpeg', '19061_right.jpeg', '19062_left.jpeg', '19062_right.jpeg', '19063_left.jpeg', '19063_right.jpeg', '19067_left.jpeg', '19067_right.jpeg', '1906_left.jpeg', '1906_right.jpeg', '19070_left.jpeg', '19070_right.jpeg', '19073_left.jpeg', '19073_right.jpeg', '19075_left.jpeg', '19075_right.jpeg', '19076_left.jpeg', '19076_right.jpeg', '1907_left.jpeg', '1907_right.jpeg', '19088_left.jpeg', '19088_right.jpeg', '19099_left.jpeg', '19099_right.jpeg', '190_left.jpeg', '190_right.jpeg', '19108_left.jpeg', '19108_right.jpeg', '19110_left.jpeg', '19110_right.jpeg', '19114_left.jpeg', '19114_right.jpeg', '19115_left.jpeg', '19115_right.jpeg', '19120_left.jpeg', '19120_right.jpeg', '19124_left.jpeg', '19124_right.jpeg', '19134_left.jpeg', '19134_right.jpeg', '19145_left.jpeg', '19145_right.jpeg', '19149_left.jpeg', '19149_right.jpeg', '1914_left.jpeg', '1914_right.jpeg', '19154_left.jpeg', '19154_right.jpeg', '1915_left.jpeg', '1915_right.jpeg', '19161_left.jpeg', 
'19161_right.jpeg', '19163_left.jpeg', '19163_right.jpeg', '19166_left.jpeg', '19166_right.jpeg', '19168_left.jpeg', '19168_right.jpeg', '1916_left.jpeg', '1916_right.jpeg', '19172_left.jpeg', '19172_right.jpeg', '19179_left.jpeg', '19179_right.jpeg', '19180_left.jpeg', '19180_right.jpeg', '19183_left.jpeg', '19183_right.jpeg', '19193_left.jpeg', '19193_right.jpeg', '19197_left.jpeg', '19197_right.jpeg', '19199_left.jpeg', '19199_right.jpeg', '19200_left.jpeg', '19200_right.jpeg', '19201_left.jpeg', '19201_right.jpeg', '19202_left.jpeg', '19202_right.jpeg', '19209_left.jpeg', '19209_right.jpeg', '19213_left.jpeg', '19213_right.jpeg', '19217_left.jpeg', '19217_right.jpeg', '19221_left.jpeg', '19221_right.jpeg', '19223_left.jpeg', '19223_right.jpeg', '19229_left.jpeg', '19229_right.jpeg', '19238_left.jpeg', '19238_right.jpeg', '19249_left.jpeg', '19249_right.jpeg', '19254_left.jpeg', '19254_right.jpeg', '19262_left.jpeg', '19262_right.jpeg', '19266_left.jpeg', '19266_right.jpeg', '19272_left.jpeg', '19272_right.jpeg', '19280_left.jpeg', '19280_right.jpeg', '19285_left.jpeg', '19285_right.jpeg', '19289_left.jpeg', '19289_right.jpeg', '1928_left.jpeg', '1928_right.jpeg', '19291_left.jpeg', '19291_right.jpeg', '19299_left.jpeg', '19299_right.jpeg', '1929_left.jpeg', '1929_right.jpeg', '192_left.jpeg', '192_right.jpeg', '19301_left.jpeg', '19301_right.jpeg', '19306_left.jpeg', '19306_right.jpeg', '19309_left.jpeg', '19309_right.jpeg', '19328_left.jpeg', '19328_right.jpeg', '1933_left.jpeg', '1933_right.jpeg', '19342_left.jpeg', '19342_right.jpeg', '19343_left.jpeg', '19343_right.jpeg', '19344_left.jpeg', '19344_right.jpeg', '19346_left.jpeg', '19346_right.jpeg', '19347_left.jpeg', '19347_right.jpeg', '19348_left.jpeg', '19348_right.jpeg', '19350_left.jpeg', '19350_right.jpeg', '19356_left.jpeg', '19356_right.jpeg', '19357_left.jpeg', '19357_right.jpeg', '19358_left.jpeg', '19358_right.jpeg', '19364_left.jpeg', '19364_right.jpeg', '19367_left.jpeg', '19367_right.jpeg', 
'19368_left.jpeg', '19368_right.jpeg', '1936_left.jpeg', '1936_right.jpeg', '19375_left.jpeg', '19375_right.jpeg', '19377_left.jpeg', '19377_right.jpeg', '19380_left.jpeg', '19380_right.jpeg', '19388_left.jpeg', '19388_right.jpeg', '19399_left.jpeg', '19399_right.jpeg', '19402_left.jpeg', '19402_right.jpeg', '19406_left.jpeg', '19406_right.jpeg', '19408_left.jpeg', '19408_right.jpeg', '19409_left.jpeg', '19409_right.jpeg', '1940_left.jpeg', '1940_right.jpeg', '19410_left.jpeg', '19410_right.jpeg', '19417_left.jpeg', '19417_right.jpeg', '19421_left.jpeg', '19421_right.jpeg', '19431_left.jpeg', '19431_right.jpeg', '19433_left.jpeg', '19433_right.jpeg', '19441_left.jpeg', '19441_right.jpeg', '19442_left.jpeg', '19442_right.jpeg', '19447_left.jpeg', '19447_right.jpeg', '19448_left.jpeg', '19448_right.jpeg', '19451_left.jpeg', '19451_right.jpeg', '19452_left.jpeg', '19452_right.jpeg', '19454_left.jpeg', '19454_right.jpeg', '19456_left.jpeg', '19456_right.jpeg', '19458_left.jpeg', '19458_right.jpeg', '19460_left.jpeg', '19460_right.jpeg', '19465_left.jpeg', '19465_right.jpeg', '19471_left.jpeg', '19471_right.jpeg', '19473_left.jpeg', '19473_right.jpeg', '19476_right.jpeg', '1947_left.jpeg', '1947_right.jpeg', '19484_left.jpeg', '19484_right.jpeg', '19489_left.jpeg', '19489_right.jpeg', '19491_left.jpeg', '19491_right.jpeg', '19493_left.jpeg', '19493_right.jpeg', '19494_left.jpeg', '19494_right.jpeg', '194_left.jpeg', '194_right.jpeg', '15699_left.jpeg', '15699_right.jpeg', '156_left.jpeg', '156_right.jpeg', '15709_left.jpeg', '15709_right.jpeg', '1570_left.jpeg', '1570_right.jpeg', '15711_left.jpeg', '15711_right.jpeg', '15712_left.jpeg', '15712_right.jpeg', '15718_left.jpeg', '15718_right.jpeg', '15720_left.jpeg', '15720_right.jpeg', '15732_left.jpeg', '15732_right.jpeg', '15745_left.jpeg', '15745_right.jpeg', '15747_left.jpeg', '15747_right.jpeg', '15751_left.jpeg', '15751_right.jpeg', '15752_left.jpeg', '15752_right.jpeg', '15756_left.jpeg', '15761_left.jpeg', 
'15761_right.jpeg', '15763_left.jpeg', '15763_right.jpeg', '15776_left.jpeg', '15776_right.jpeg', '15780_left.jpeg', '15780_right.jpeg', '15788_left.jpeg', '15788_right.jpeg', '15791_left.jpeg', '15791_right.jpeg', '15793_left.jpeg', '15793_right.jpeg', '15795_left.jpeg', '15795_right.jpeg', '15797_left.jpeg', '15797_right.jpeg', '15798_left.jpeg', '15798_right.jpeg', '15799_left.jpeg', '15799_right.jpeg', '157_left.jpeg', '157_right.jpeg', '15800_left.jpeg', '15800_right.jpeg', '15801_left.jpeg', '15801_right.jpeg', '15808_left.jpeg', '15808_right.jpeg', '15810_left.jpeg', '15810_right.jpeg', '15816_left.jpeg', '15816_right.jpeg', '15817_left.jpeg', '15817_right.jpeg', '15821_left.jpeg', '15821_right.jpeg', '15828_left.jpeg', '15828_right.jpeg', '15847_left.jpeg', '15847_right.jpeg', '15848_left.jpeg', '15848_right.jpeg', '15855_left.jpeg', '15855_right.jpeg', '15858_left.jpeg', '15858_right.jpeg', '15863_left.jpeg', '15863_right.jpeg', '15864_left.jpeg', '15864_right.jpeg', '15867_left.jpeg', '15867_right.jpeg', '15868_right.jpeg', '15870_left.jpeg', '15870_right.jpeg', '15874_left.jpeg', '15874_right.jpeg', '15879_left.jpeg', '15879_right.jpeg', '15884_left.jpeg', '15884_right.jpeg', '15899_left.jpeg', '15899_right.jpeg', '15902_left.jpeg', '15902_right.jpeg', '15914_left.jpeg', '15914_right.jpeg', '15916_left.jpeg', '15916_right.jpeg', '15918_left.jpeg', '15918_right.jpeg', '15921_left.jpeg', '15921_right.jpeg', '15923_right.jpeg', '15926_left.jpeg', '15926_right.jpeg', '15928_left.jpeg', '15928_right.jpeg', '15930_left.jpeg', '15930_right.jpeg', '15932_left.jpeg', '15932_right.jpeg', '15935_left.jpeg', '15935_right.jpeg', '15938_left.jpeg', '15938_right.jpeg', '15941_left.jpeg', '15941_right.jpeg', '15946_left.jpeg', '15946_right.jpeg', '15960_left.jpeg', '15960_right.jpeg', '15961_left.jpeg', '15961_right.jpeg', '1596_left.jpeg', '1596_right.jpeg', '15970_left.jpeg', '15970_right.jpeg', '15971_left.jpeg', '15971_right.jpeg', '15973_left.jpeg', 
'15973_right.jpeg', '15975_left.jpeg', '15975_right.jpeg', '15980_left.jpeg', '15980_right.jpeg', '15981_left.jpeg', '15981_right.jpeg', '15982_left.jpeg', '15982_right.jpeg', '15985_left.jpeg', '15985_right.jpeg', '15987_left.jpeg', '15987_right.jpeg', '15988_left.jpeg', '15988_right.jpeg', '15989_left.jpeg', '15989_right.jpeg', '15991_left.jpeg', '15991_right.jpeg', '15994_left.jpeg', '15994_right.jpeg', '1599_left.jpeg', '1599_right.jpeg', '16007_left.jpeg', '16007_right.jpeg', '16011_left.jpeg', '16013_left.jpeg', '16013_right.jpeg', '16028_left.jpeg', '16028_right.jpeg', '16035_left.jpeg', '16035_right.jpeg', '16042_left.jpeg', '16042_right.jpeg', '16046_left.jpeg', '16046_right.jpeg', '16047_left.jpeg', '16047_right.jpeg', '16049_left.jpeg', '16049_right.jpeg', '16058_left.jpeg', '16058_right.jpeg', '16061_left.jpeg', '16061_right.jpeg', '16071_left.jpeg', '16071_right.jpeg', '16076_left.jpeg', '16076_right.jpeg', '1607_left.jpeg', '1607_right.jpeg', '16080_left.jpeg', '16080_right.jpeg', '16088_left.jpeg', '16088_right.jpeg', '1608_left.jpeg', '1608_right.jpeg', '16093_left.jpeg', '16093_right.jpeg', '16095_left.jpeg', '16095_right.jpeg', '16099_left.jpeg', '16099_right.jpeg', '16101_left.jpeg', '16109_right.jpeg', '1610_left.jpeg', '1610_right.jpeg', '16111_left.jpeg', '16111_right.jpeg', '16115_left.jpeg', '16115_right.jpeg', '16129_left.jpeg', '16129_right.jpeg', '16130_left.jpeg', '16130_right.jpeg', '16131_left.jpeg', '16134_left.jpeg', '16134_right.jpeg', '16142_left.jpeg', '16142_right.jpeg', '16145_left.jpeg', '16145_right.jpeg', '16149_right.jpeg', '1614_left.jpeg', '1614_right.jpeg', '16150_left.jpeg', '16150_right.jpeg', '16170_left.jpeg', '16170_right.jpeg', '16171_left.jpeg', '16171_right.jpeg', '16177_left.jpeg', '16177_right.jpeg', '16178_left.jpeg', '16178_right.jpeg', '16179_left.jpeg', '16179_right.jpeg', '16181_left.jpeg', '16181_right.jpeg', '16185_left.jpeg', '16185_right.jpeg', '16189_left.jpeg', '16189_right.jpeg', '16191_left.jpeg', 
'16191_right.jpeg', '16193_left.jpeg', '16193_right.jpeg', '16197_left.jpeg', '16197_right.jpeg', '16198_left.jpeg', '16198_right.jpeg', '16203_left.jpeg', '16203_right.jpeg', '16218_left.jpeg', '16218_right.jpeg', '16220_left.jpeg', '16220_right.jpeg', '16222_left.jpeg', '16222_right.jpeg', '16223_left.jpeg', '16224_left.jpeg', '16224_right.jpeg', '16233_left.jpeg', '16233_right.jpeg', '16234_left.jpeg', '16234_right.jpeg', '16236_left.jpeg', '16236_right.jpeg', '16249_left.jpeg', '16249_right.jpeg', '1624_left.jpeg', '1624_right.jpeg', '16258_left.jpeg', '16258_right.jpeg', '16260_left.jpeg', '16260_right.jpeg', '16265_left.jpeg', '16265_right.jpeg', '16272_left.jpeg', '16272_right.jpeg', '16279_left.jpeg', '16279_right.jpeg', '16280_left.jpeg', '16280_right.jpeg', '16283_left.jpeg', '16283_right.jpeg', '16284_left.jpeg', '16284_right.jpeg', '16287_left.jpeg', '16287_right.jpeg', '16288_left.jpeg', '16288_right.jpeg', '16289_left.jpeg', '16289_right.jpeg', '16290_left.jpeg', '16290_right.jpeg', '16294_left.jpeg', '16294_right.jpeg', '16299_left.jpeg', '16299_right.jpeg', '162_left.jpeg', '162_right.jpeg', '16302_left.jpeg', '16302_right.jpeg', '16309_left.jpeg', '16309_right.jpeg', '1630_left.jpeg', '1630_right.jpeg', '16310_left.jpeg', '16310_right.jpeg', '16318_right.jpeg', '16320_left.jpeg', '16320_right.jpeg', '16321_left.jpeg', '16321_right.jpeg', '1632_left.jpeg', '1632_right.jpeg', '16335_left.jpeg', '16335_right.jpeg', '16343_left.jpeg', '16343_right.jpeg', '16359_left.jpeg', '16359_right.jpeg', '16360_left.jpeg', '16360_right.jpeg', '16369_left.jpeg', '16369_right.jpeg', '1636_left.jpeg', '1636_right.jpeg', '16375_left.jpeg', '16375_right.jpeg', '16377_left.jpeg', '16377_right.jpeg', '16379_left.jpeg', '16379_right.jpeg', '16384_left.jpeg', '16384_right.jpeg', '16385_left.jpeg', '16385_right.jpeg', '16388_left.jpeg', '16388_right.jpeg', '16390_left.jpeg', '16390_right.jpeg', '16391_left.jpeg', '16391_right.jpeg', '16392_left.jpeg', '16392_right.jpeg', 
'1639_left.jpeg', '1639_right.jpeg', '16401_left.jpeg', '16401_right.jpeg', '16403_left.jpeg', '16403_right.jpeg', '16405_right.jpeg', '16408_left.jpeg', '16408_right.jpeg', '16410_left.jpeg', '16410_right.jpeg', '16413_left.jpeg', '16413_right.jpeg', '16414_left.jpeg', '16414_right.jpeg', '16417_left.jpeg', '16417_right.jpeg', '16418_right.jpeg', '1641_left.jpeg', '1641_right.jpeg', '16429_left.jpeg', '16429_right.jpeg', '16432_left.jpeg', '16432_right.jpeg', '16439_left.jpeg', '16439_right.jpeg', '16445_left.jpeg', '16446_left.jpeg', '16446_right.jpeg', '16451_left.jpeg', '16451_right.jpeg', '16452_left.jpeg', '16452_right.jpeg', '16458_left.jpeg', '16458_right.jpeg', '16460_left.jpeg', '16460_right.jpeg', '16466_left.jpeg', '16466_right.jpeg', '16468_left.jpeg', '16468_right.jpeg', '16479_left.jpeg', '16479_right.jpeg', '16486_left.jpeg', '16486_right.jpeg', '16496_right.jpeg', '16497_left.jpeg', '16497_right.jpeg', '16499_left.jpeg', '16499_right.jpeg', '164_left.jpeg', '164_right.jpeg', '16509_left.jpeg', '16509_right.jpeg', '1650_left.jpeg', '1650_right.jpeg', '16515_left.jpeg', '16515_right.jpeg', '1651_left.jpeg', '1651_right.jpeg', '16530_left.jpeg', '16530_right.jpeg', '16532_left.jpeg', '16532_right.jpeg', '16533_left.jpeg', '16533_right.jpeg', '16534_left.jpeg', '16534_right.jpeg', '16536_left.jpeg', '16536_right.jpeg', '16542_left.jpeg', '16542_right.jpeg', '16543_left.jpeg', '16543_right.jpeg', '16545_left.jpeg', '16545_right.jpeg', '16547_left.jpeg', '16547_right.jpeg', '16550_left.jpeg', '16550_right.jpeg', '16555_left.jpeg', '16555_right.jpeg', '16558_left.jpeg', '16558_right.jpeg', '16559_left.jpeg', '16559_right.jpeg', '16561_left.jpeg', '16561_right.jpeg', '16565_left.jpeg', '16565_right.jpeg', '16569_left.jpeg', '16569_right.jpeg', '16571_left.jpeg', '16571_right.jpeg', '16579_left.jpeg', '16579_right.jpeg', '16581_left.jpeg', '16581_right.jpeg', '16582_left.jpeg', '16582_right.jpeg', '1658_left.jpeg', '1658_right.jpeg', '16594_left.jpeg', 
'16594_right.jpeg', '16597_left.jpeg', '16597_right.jpeg', '16599_left.jpeg', '16599_right.jpeg', '16600_left.jpeg', '16602_left.jpeg', '16602_right.jpeg', '16604_left.jpeg', '16604_right.jpeg', '1660_left.jpeg', '1660_right.jpeg', '16610_left.jpeg', '16610_right.jpeg', '16612_left.jpeg', '16612_right.jpeg', '16616_left.jpeg', '16616_right.jpeg', '16620_left.jpeg', '16620_right.jpeg', '16623_left.jpeg', '16624_left.jpeg', '16624_right.jpeg', '16627_left.jpeg', '16627_right.jpeg', '16629_left.jpeg', '16629_right.jpeg', '16630_left.jpeg', '16630_right.jpeg', '16631_left.jpeg', '16631_right.jpeg', '16633_left.jpeg', '16633_right.jpeg', '16634_left.jpeg', '16634_right.jpeg', '16636_left.jpeg', '16636_right.jpeg', '16639_left.jpeg', '16639_right.jpeg', '16640_left.jpeg', '16640_right.jpeg', '16641_right.jpeg', '16642_left.jpeg', '16642_right.jpeg', '16649_left.jpeg', '16649_right.jpeg', '16650_left.jpeg', '16650_right.jpeg', '16651_left.jpeg', '16651_right.jpeg', '16658_left.jpeg', '16658_right.jpeg', '16670_left.jpeg', '16670_right.jpeg', '16672_left.jpeg', '16672_right.jpeg', '16679_left.jpeg', '16679_right.jpeg', '1667_left.jpeg', '1667_right.jpeg', '16683_left.jpeg', '16683_right.jpeg', '16693_left.jpeg', '16693_right.jpeg', '1669_left.jpeg', '1669_right.jpeg', '16701_left.jpeg', '16701_right.jpeg', '16702_left.jpeg', '16702_right.jpeg', '16707_left.jpeg', '16707_right.jpeg', '16727_left.jpeg', '16727_right.jpeg', '16728_left.jpeg', '16728_right.jpeg', '1672_left.jpeg', '1672_right.jpeg', '16730_left.jpeg', '16730_right.jpeg', '16732_left.jpeg', '16732_right.jpeg', '16735_left.jpeg', '16735_right.jpeg', '1673_left.jpeg', '1673_right.jpeg', '16740_left.jpeg', '16740_right.jpeg', '16748_left.jpeg', '16748_right.jpeg', '16749_left.jpeg', '16749_right.jpeg', '1674_left.jpeg', '1674_right.jpeg', '16750_left.jpeg', '16750_right.jpeg', '16753_left.jpeg', '16753_right.jpeg', '16754_left.jpeg', '16754_right.jpeg', '16757_left.jpeg', '16757_right.jpeg', '16763_left.jpeg', 
'16763_right.jpeg', '16764_left.jpeg', '16764_right.jpeg', '16768_left.jpeg', '16768_right.jpeg', '16774_left.jpeg', '16774_right.jpeg', '16778_left.jpeg', '16778_right.jpeg', '16785_right.jpeg', '16789_left.jpeg', '16789_right.jpeg', '1678_left.jpeg', '1678_right.jpeg', '16791_left.jpeg', '16791_right.jpeg', '16803_left.jpeg', '16803_right.jpeg', '16807_left.jpeg', '16807_right.jpeg', '16818_left.jpeg', '16818_right.jpeg', '16826_left.jpeg', '16826_right.jpeg', '16828_left.jpeg', '16828_right.jpeg', '16832_left.jpeg', '16832_right.jpeg', '16835_left.jpeg', '16835_right.jpeg', '16836_left.jpeg', '16836_right.jpeg', '16848_left.jpeg', '16848_right.jpeg', '16850_left.jpeg', '16850_right.jpeg', '16855_left.jpeg', '16855_right.jpeg', '16857_left.jpeg', '16857_right.jpeg', '16863_left.jpeg', '16863_right.jpeg', '16866_left.jpeg', '16866_right.jpeg', '16871_left.jpeg', '16871_right.jpeg', '16872_left.jpeg', '16872_right.jpeg', '16877_left.jpeg', '16877_right.jpeg', '1687_left.jpeg', '1687_right.jpeg', '1688_left.jpeg', '1688_right.jpeg', '16890_left.jpeg', '16890_right.jpeg', '16899_left.jpeg', '16899_right.jpeg', '1689_left.jpeg', '1689_right.jpeg', '16907_left.jpeg', '16907_right.jpeg', '16909_left.jpeg', '16909_right.jpeg', '16919_left.jpeg', '16919_right.jpeg', '16927_left.jpeg', '16927_right.jpeg', '16931_left.jpeg', '16931_right.jpeg', '16936_left.jpeg', '16936_right.jpeg', '16937_left.jpeg', '16937_right.jpeg', '16940_left.jpeg', '16940_right.jpeg', '16943_left.jpeg', '16943_right.jpeg', '16946_left.jpeg', '16946_right.jpeg', '16947_left.jpeg', '16947_right.jpeg', '16952_left.jpeg', '16952_right.jpeg', '16963_left.jpeg', '16963_right.jpeg', '16964_left.jpeg', '16964_right.jpeg', '16967_left.jpeg', '16967_right.jpeg', '16969_left.jpeg', '16969_right.jpeg', '1696_left.jpeg', '1696_right.jpeg', '16973_left.jpeg', '16973_right.jpeg', '16977_left.jpeg', '16977_right.jpeg', '16978_left.jpeg', '16978_right.jpeg', '1698_left.jpeg', '1698_right.jpeg', '16991_left.jpeg', 
'16993_left.jpeg', '16993_right.jpeg', '17001_left.jpeg', '17001_right.jpeg', '17002_left.jpeg', '17002_right.jpeg', '17004_left.jpeg', '17004_right.jpeg', '17005_left.jpeg', '17005_right.jpeg', '17009_left.jpeg', '17009_right.jpeg', '17010_left.jpeg', '17010_right.jpeg', '17011_left.jpeg', '17011_right.jpeg', '17021_left.jpeg', '17021_right.jpeg', '17025_left.jpeg', '17025_right.jpeg', '17028_left.jpeg', '17028_right.jpeg', '17029_left.jpeg', '17029_right.jpeg', '1702_left.jpeg', '1702_right.jpeg', '17030_left.jpeg', '17030_right.jpeg', '17036_left.jpeg', '17036_right.jpeg', '17038_left.jpeg', '17038_right.jpeg', '17040_left.jpeg', '17040_right.jpeg', '17048_left.jpeg', '17048_right.jpeg', '17052_left.jpeg', '17052_right.jpeg', '17065_left.jpeg', '17065_right.jpeg', '17066_left.jpeg', '17066_right.jpeg', '17067_left.jpeg', '17067_right.jpeg', '17075_left.jpeg', '17075_right.jpeg', '17076_left.jpeg', '17076_right.jpeg', '17083_left.jpeg', '17083_right.jpeg', '17086_left.jpeg', '17086_right.jpeg', '17088_right.jpeg', '17089_left.jpeg', '17089_right.jpeg', '17097_left.jpeg', '17097_right.jpeg', '170_left.jpeg', '170_right.jpeg', '17106_left.jpeg', '17106_right.jpeg', '1710_left.jpeg', '17114_left.jpeg', '17114_right.jpeg', '17117_left.jpeg', '17117_right.jpeg', '17121_left.jpeg', '17121_right.jpeg', '17125_left.jpeg', '17125_right.jpeg', '17133_left.jpeg', '17133_right.jpeg', '17134_left.jpeg', '17143_left.jpeg', '17143_right.jpeg', '17152_left.jpeg', '17152_right.jpeg', '17169_left.jpeg', '17169_right.jpeg', '17173_left.jpeg', '17173_right.jpeg', '17176_left.jpeg', '17176_right.jpeg', '17181_left.jpeg', '17181_right.jpeg', '17185_left.jpeg', '17185_right.jpeg', '17186_left.jpeg', '17186_right.jpeg', '17190_left.jpeg', '17190_right.jpeg', '17191_left.jpeg', '17191_right.jpeg', '17195_left.jpeg', '17195_right.jpeg', '1719_left.jpeg', '1719_right.jpeg', '17202_left.jpeg', '17202_right.jpeg', '17207_left.jpeg', '17207_right.jpeg', '17209_left.jpeg', '17209_right.jpeg', 
'17215_left.jpeg', '17215_right.jpeg', '17217_left.jpeg', '17217_right.jpeg', '17221_left.jpeg', '17221_right.jpeg', '1722_right.jpeg', '17232_left.jpeg', '17232_right.jpeg', '17234_left.jpeg', '17234_right.jpeg', '17235_left.jpeg', '17235_right.jpeg', '17240_right.jpeg', '17243_left.jpeg', '17243_right.jpeg', '17244_left.jpeg', '17244_right.jpeg', '17248_left.jpeg', '17248_right.jpeg', '1724_left.jpeg', '1724_right.jpeg', '17259_left.jpeg', '17259_right.jpeg', '17266_left.jpeg', '17266_right.jpeg', '17277_left.jpeg', '17277_right.jpeg', '1727_left.jpeg', '1727_right.jpeg', '17284_left.jpeg', '17284_right.jpeg', '17287_left.jpeg', '17287_right.jpeg', '17290_left.jpeg', '17290_right.jpeg', '17295_left.jpeg', '17295_right.jpeg', '17298_left.jpeg', '17298_right.jpeg', '17299_left.jpeg', '17299_right.jpeg', '172_left.jpeg', '172_right.jpeg', '17302_left.jpeg', '17302_right.jpeg', '17303_left.jpeg', '17303_right.jpeg', '17308_left.jpeg', '17308_right.jpeg', '17312_left.jpeg', '17312_right.jpeg', '17314_left.jpeg', '17314_right.jpeg', '17320_left.jpeg', '17320_right.jpeg', '17322_left.jpeg', '17322_right.jpeg', '17326_left.jpeg', '17326_right.jpeg', '17331_left.jpeg', '17331_right.jpeg', '17332_left.jpeg', '17332_right.jpeg', '17337_left.jpeg', '17337_right.jpeg', '1733_left.jpeg', '1733_right.jpeg', '17344_left.jpeg', '17344_right.jpeg', '17348_left.jpeg', '17348_right.jpeg', '17350_left.jpeg', '17350_right.jpeg', '17355_left.jpeg', '17355_right.jpeg', '17357_left.jpeg', '17357_right.jpeg', '17359_left.jpeg', '17359_right.jpeg', '17362_left.jpeg', '17362_right.jpeg', '17366_left.jpeg', '17366_right.jpeg', '17367_left.jpeg', '17367_right.jpeg', '1736_left.jpeg', '1736_right.jpeg', '17373_left.jpeg', '17373_right.jpeg', '17375_left.jpeg', '17375_right.jpeg', '17380_left.jpeg', '17380_right.jpeg', '17382_left.jpeg', '17382_right.jpeg', '17384_left.jpeg', '17384_right.jpeg', '17389_left.jpeg', '17389_right.jpeg', '17399_left.jpeg', '17399_right.jpeg', '17400_left.jpeg', 
[… cell output truncated: long listing of image filenames of the form '<id>_left.jpeg' / '<id>_right.jpeg' omitted …]
'17546_right.jpeg', '17543_right.jpeg', '17543_left.jpeg', '17540_right.jpeg', '17546_left.jpeg', '1120_left.jpeg', '17555_right.jpeg', '17564_right.jpeg', '17567_left.jpeg', '1756_left.jpeg', '17567_right.jpeg', '1756_right.jpeg', '17564_left.jpeg', '1120_right.jpeg', '17578_right.jpeg', '17578_left.jpeg', '17555_left.jpeg', '11211_right.jpeg', '17606_right.jpeg', '17592_right.jpeg', '17608_right.jpeg', '17611_left.jpeg', '17608_left.jpeg', '17592_left.jpeg', '11216_right.jpeg', '17606_left.jpeg', '17581_right.jpeg', '17581_left.jpeg', '17611_right.jpeg', '11217_left.jpeg', '17613_left.jpeg', '17634_left.jpeg', '17613_right.jpeg', '17639_left.jpeg', '17634_right.jpeg', '11217_right.jpeg', '17636_right.jpeg', '17639_right.jpeg', '17631_left.jpeg', '17636_left.jpeg', '17631_right.jpeg', '11221_left.jpeg', '17649_left.jpeg', '17649_right.jpeg', '17675_right.jpeg', '17660_right.jpeg', '17675_left.jpeg', '11221_right.jpeg', '17667_left.jpeg', '17667_right.jpeg', '17684_left.jpeg', '17660_left.jpeg', '17645_right.jpeg', '11222_left.jpeg', '11222_right.jpeg', '17684_right.jpeg', '17698_left.jpeg', '17689_left.jpeg', '17696_left.jpeg', '17686_left.jpeg', '17698_right.jpeg', '11230_left.jpeg', '17709_left.jpeg', '17689_right.jpeg', '17696_right.jpeg', '17686_right.jpeg', '17723_right.jpeg', '17739_left.jpeg', '17751_left.jpeg', '17716_right.jpeg', '17751_right.jpeg', '17716_left.jpeg', '11230_right.jpeg', '17723_left.jpeg', '17709_right.jpeg', '17739_right.jpeg', '17757_left.jpeg', '11231_left.jpeg', '17757_right.jpeg', '11231_right.jpeg', '11232_left.jpeg', '17767_left.jpeg', '17766_right.jpeg', '17785_left.jpeg', '17767_right.jpeg', '1776_left.jpeg', '1775_left.jpeg', '1775_right.jpeg', '1776_right.jpeg', '17766_left.jpeg', '11232_right.jpeg', '11234_left.jpeg', '17785_right.jpeg', '17800_left.jpeg', '17800_right.jpeg', '17827_left.jpeg', '17822_right.jpeg', '177_left.jpeg', '17827_right.jpeg', '11234_right.jpeg', '17822_left.jpeg', '17809_right.jpeg', '11238_left.jpeg', 
'11238_right.jpeg', '17809_left.jpeg', '1123_left.jpeg', '17850_right.jpeg', '17853_left.jpeg', '1785_left.jpeg', '17853_right.jpeg', '1123_right.jpeg', '17833_left.jpeg', '17845_right.jpeg', '17833_right.jpeg', '17845_left.jpeg', '1785_right.jpeg', '17860_left.jpeg', '11241_right.jpeg', '17860_right.jpeg', '11243_left.jpeg', '17861_left.jpeg', '17864_left.jpeg', '17861_right.jpeg', '17868_left.jpeg', '11243_right.jpeg', '17882_right.jpeg', '17885_left.jpeg', '17868_right.jpeg', '17864_right.jpeg', '17882_left.jpeg', '11246_left.jpeg', '17885_right.jpeg', '178_left.jpeg', '17899_right.jpeg', '17891_right.jpeg', '17891_left.jpeg', '178_right.jpeg', '11246_right.jpeg', '17916_left.jpeg', '17915_left.jpeg', '17916_right.jpeg', '17915_right.jpeg', '11249_left.jpeg', '17957_left.jpeg', '17942_right.jpeg', '17950_right.jpeg', '17950_left.jpeg', '17942_left.jpeg', '17957_right.jpeg', '11249_right.jpeg', '17936_left.jpeg', '17939_left.jpeg', '17936_right.jpeg', '17939_right.jpeg', '11261_left.jpeg', '17960_right.jpeg', '17962_left.jpeg', '17972_left.jpeg', '17972_right.jpeg', '11261_right.jpeg', '17989_right.jpeg', '17986_right.jpeg', '17989_left.jpeg', '17986_left.jpeg', '17960_left.jpeg', '17962_right.jpeg', '11262_left.jpeg', '17999_left.jpeg', '17999_right.jpeg', '18005_left.jpeg', '18014_left.jpeg', '18014_right.jpeg', '11262_right.jpeg', '18001_left.jpeg', '18001_right.jpeg', '18008_left.jpeg', '18008_right.jpeg', '18005_right.jpeg', '11263_left.jpeg', '18016_left.jpeg', '18025_right.jpeg', '18021_left.jpeg', '18027_right.jpeg', '18016_right.jpeg', '11263_right.jpeg', '1801_left.jpeg', '1802_left.jpeg', '18025_left.jpeg', '1801_right.jpeg', '18027_left.jpeg', '11268_left.jpeg', '1802_right.jpeg', '18038_right.jpeg', '18067_left.jpeg', '18057_right.jpeg', '18057_left.jpeg', '11268_right.jpeg', '18040_left.jpeg', '18060_right.jpeg', '18040_right.jpeg', '18060_left.jpeg', '18038_left.jpeg', '11270_left.jpeg', '18069_left.jpeg', '11270_right.jpeg', '18069_right.jpeg', 
'18067_right.jpeg', '18078_left.jpeg', '18103_left.jpeg', '18078_right.jpeg', '18105_left.jpeg', '18103_right.jpeg', '18090_right.jpeg', '18090_left.jpeg', '11271_left.jpeg', '11271_right.jpeg', '11273_left.jpeg', '18105_right.jpeg', '18124_right.jpeg', '1810_left.jpeg', '18113_left.jpeg', '18113_right.jpeg', '11273_right.jpeg', '18130_left.jpeg', '18133_left.jpeg', '1810_right.jpeg', '18130_right.jpeg', '18124_left.jpeg', '1127_left.jpeg', '18133_right.jpeg', '18163_right.jpeg', '18171_right.jpeg', '18176_right.jpeg', '18155_right.jpeg', '18176_left.jpeg', '1127_right.jpeg', '18163_left.jpeg', '18155_left.jpeg', '18171_left.jpeg', '18149_left.jpeg', '11283_left.jpeg', '18178_left.jpeg', '11283_right.jpeg', '1817_right.jpeg', '18193_left.jpeg', '18189_right.jpeg', '1817_left.jpeg', '18178_right.jpeg', '18189_left.jpeg', '18194_right.jpeg', '18193_right.jpeg', '18194_left.jpeg', '11288_left.jpeg', '11288_right.jpeg', '11289_left.jpeg', '11289_right.jpeg', '11291_left.jpeg', '18201_right.jpeg', '18206_right.jpeg', '18215_right.jpeg', '18206_left.jpeg', '18225_right.jpeg', '11291_right.jpeg', '18231_right.jpeg', '18215_left.jpeg', '18225_left.jpeg', '18231_left.jpeg', '18201_left.jpeg', '11298_left.jpeg', '18257_right.jpeg', '18239_left.jpeg', '18232_left.jpeg', '18257_left.jpeg', '11298_right.jpeg', '11299_left.jpeg', '18245_left.jpeg', '18239_right.jpeg', '18232_right.jpeg', '18261_left.jpeg', '18274_left.jpeg', '18261_right.jpeg', '11299_right.jpeg', '18289_right.jpeg', '18276_left.jpeg', '18296_left.jpeg', '18289_left.jpeg', '18304_left.jpeg', '11301_left.jpeg', '18296_right.jpeg', '18276_right.jpeg', '18304_right.jpeg', '18274_right.jpeg', '18298_right.jpeg', '18315_right.jpeg', '11301_right.jpeg', '11304_left.jpeg', '18319_left.jpeg', '1832_right.jpeg', '18315_left.jpeg', '1831_left.jpeg', '18319_right.jpeg', '1831_right.jpeg', '18323_right.jpeg', '18323_left.jpeg', '11304_right.jpeg', '1832_left.jpeg', '18352_left.jpeg', '11305_left.jpeg', '18352_right.jpeg', 
'18351_right.jpeg', '18358_right.jpeg', '18354_right.jpeg', '18354_left.jpeg', '18364_left.jpeg', '18337_left.jpeg', '18358_left.jpeg', '18351_left.jpeg', '11305_right.jpeg', '18367_left.jpeg', '11308_left.jpeg', '18367_right.jpeg', '18374_right.jpeg', '18382_right.jpeg', '18382_left.jpeg', '11308_right.jpeg', '18378_left.jpeg', '18392_left.jpeg', '18364_right.jpeg', '18378_right.jpeg', '18374_left.jpeg', '11310_left.jpeg', '18395_left.jpeg', '18425_right.jpeg', '18400_left.jpeg', '18395_right.jpeg', '18420_left.jpeg', '11310_right.jpeg', '18400_right.jpeg', '18436_left.jpeg', '18392_right.jpeg', '18420_right.jpeg', '18425_left.jpeg', '11313_left.jpeg', '18446_right.jpeg', '18445_left.jpeg', '18440_left.jpeg', '18445_right.jpeg', '18436_right.jpeg', '18446_left.jpeg', '11313_right.jpeg', '18465_right.jpeg', '18472_right.jpeg', '18440_right.jpeg', '18472_left.jpeg', '11314_left.jpeg', '18488_left.jpeg', '11314_right.jpeg', '18488_right.jpeg', '18481_right.jpeg', '18473_right.jpeg', '18487_right.jpeg', '18495_left.jpeg', '18486_right.jpeg', '18481_left.jpeg', '18487_left.jpeg', '18473_left.jpeg', '11315_left.jpeg', '18497_right.jpeg', '11315_right.jpeg', '18507_left.jpeg', '18498_right.jpeg', '184_right.jpeg', '18497_left.jpeg', '11316_left.jpeg', '11316_right.jpeg', '18498_left.jpeg', '18499_left.jpeg', '184_left.jpeg', '18495_right.jpeg', '18499_right.jpeg', '11317_left.jpeg', '11317_right.jpeg', '18507_right.jpeg', '18523_right.jpeg', '18536_right.jpeg', '18509_left.jpeg', '18536_left.jpeg', '18539_left.jpeg', '11319_left.jpeg', '18516_right.jpeg', '18516_left.jpeg', '18509_right.jpeg', '18523_left.jpeg', '11319_right.jpeg', '18552_right.jpeg', '1131_left.jpeg', '1853_left.jpeg', '18555_left.jpeg', '18552_left.jpeg', '18539_right.jpeg', '18556_left.jpeg', '18541_right.jpeg', '1853_right.jpeg', '18555_right.jpeg', '18541_left.jpeg', '1131_right.jpeg', '11321_left.jpeg', '11321_right.jpeg', '11325_left.jpeg', '11325_right.jpeg', '18569_right.jpeg', 
'18566_left.jpeg', '18562_left.jpeg', '18569_left.jpeg', '18562_right.jpeg', '18566_right.jpeg', '18556_right.jpeg', '11335_left.jpeg', '1856_right.jpeg', '1856_left.jpeg', '18570_left.jpeg', '11335_right.jpeg', '18570_right.jpeg', '18587_left.jpeg', '18580_right.jpeg', '18579_left.jpeg', '18572_right.jpeg', '18572_left.jpeg', '11341_left.jpeg', '18587_right.jpeg', '18589_left.jpeg', '18579_right.jpeg', '18580_left.jpeg', '11341_right.jpeg', '18615_left.jpeg', '18639_left.jpeg', '1862_left.jpeg', '18622_left.jpeg', '18602_left.jpeg', '18615_right.jpeg', '18602_right.jpeg', '11345_left.jpeg', '18622_right.jpeg', '18589_right.jpeg', '1862_right.jpeg', '18665_right.jpeg', '11345_right.jpeg', '18665_left.jpeg', '18673_left.jpeg', '18639_right.jpeg', '18643_left.jpeg', '18644_left.jpeg', '18644_right.jpeg', '11362_left.jpeg', '18673_right.jpeg', '18686_left.jpeg', '18643_right.jpeg', '1869_right.jpeg', '11362_right.jpeg', '18701_right.jpeg', '18688_right.jpeg', '18689_left.jpeg', '18689_right.jpeg', '18701_left.jpeg', '18708_left.jpeg', '1869_left.jpeg', '18688_left.jpeg', '18686_right.jpeg', '11364_left.jpeg', '11364_right.jpeg', '18713_left.jpeg', '18725_left.jpeg', '18724_right.jpeg', '18710_left.jpeg', '18724_left.jpeg', '18716_right.jpeg', '18716_left.jpeg', '11370_right.jpeg', '18708_right.jpeg', '18710_right.jpeg', '18713_right.jpeg', '11371_left.jpeg', '11371_right.jpeg', '18728_right.jpeg', '18756_right.jpeg', '18760_left.jpeg', '18725_right.jpeg', '18738_right.jpeg', '18738_left.jpeg', '18732_left.jpeg', '18756_left.jpeg', '18728_left.jpeg', '11375_left.jpeg', '18732_right.jpeg', '11375_right.jpeg', '1878_right.jpeg', '1878_left.jpeg', '18828_left.jpeg', '1881_left.jpeg', '18827_right.jpeg', '18817_right.jpeg', '1881_right.jpeg', '11378_left.jpeg', '18827_left.jpeg', '18760_right.jpeg', '18817_left.jpeg', '11378_right.jpeg', '18845_left.jpeg', '18828_right.jpeg', '1886_left.jpeg', '18864_right.jpeg', '18845_right.jpeg', '11382_left.jpeg', '18862_right.jpeg', 
'18842_left.jpeg', '11382_right.jpeg', '1886_right.jpeg', '18862_left.jpeg', '18842_right.jpeg', '11385_left.jpeg', '18870_right.jpeg', '18874_left.jpeg', '18883_left.jpeg', '18883_right.jpeg', '18886_left.jpeg', '18870_left.jpeg', '11385_right.jpeg', '18904_right.jpeg', '18886_right.jpeg', '18904_left.jpeg', '18874_right.jpeg', '18907_left.jpeg', '18908_right.jpeg', '18922_left.jpeg', '18908_left.jpeg', '18916_right.jpeg', '18907_right.jpeg', '11387_left.jpeg', '18916_left.jpeg', '18922_right.jpeg', '18914_left.jpeg', '18914_right.jpeg', '11387_right.jpeg', '18931_left.jpeg', '18931_right.jpeg', '18923_left.jpeg', '18933_right.jpeg', '18933_left.jpeg', '18939_right.jpeg', '11388_left.jpeg', '18929_left.jpeg', '18923_right.jpeg', '18939_left.jpeg', '18929_right.jpeg', '11388_right.jpeg', '18944_left.jpeg', '18954_right.jpeg', '1893_left.jpeg', '18954_left.jpeg', '18944_right.jpeg', '18950_left.jpeg', '1138_left.jpeg', '1893_right.jpeg', '18945_right.jpeg', '18950_right.jpeg', '18945_left.jpeg', '1138_right.jpeg', '18970_left.jpeg', '11391_left.jpeg', '18972_left.jpeg', '18968_right.jpeg', '18960_right.jpeg', '18960_left.jpeg', '18968_left.jpeg', '18970_right.jpeg', '18965_right.jpeg', '18965_left.jpeg', '18972_right.jpeg', '11391_right.jpeg', '18995_left.jpeg', '18995_right.jpeg', '18999_left.jpeg', '19004_left.jpeg', '18976_right.jpeg', '11392_left.jpeg', '19004_right.jpeg', '18999_right.jpeg', '18976_left.jpeg', '18975_left.jpeg', '18975_right.jpeg', '11392_right.jpeg', '19008_left.jpeg', '19014_left.jpeg', '19017_right.jpeg', '19014_right.jpeg', '19015_right.jpeg', '11395_left.jpeg', '19025_right.jpeg', '19017_left.jpeg', '19015_left.jpeg', '19008_right.jpeg', '19025_left.jpeg', '11395_right.jpeg', '19046_left.jpeg', '19029_left.jpeg', '1905_right.jpeg', '19026_right.jpeg', '19029_right.jpeg', '11397_left.jpeg', '11397_right.jpeg', '1905_left.jpeg', '19046_right.jpeg', '19026_left.jpeg', '19052_right.jpeg', '19052_left.jpeg', '1139_left.jpeg', '19060_left.jpeg', 
'1139_right.jpeg', '19083_right.jpeg', '19064_left.jpeg', '19083_left.jpeg', '19060_right.jpeg', '19071_right.jpeg', '19086_right.jpeg', '19071_left.jpeg', '19064_right.jpeg', '19086_left.jpeg', '11407_left.jpeg', '11407_right.jpeg', '19092_left.jpeg', '19103_right.jpeg', '19096_left.jpeg', '19103_left.jpeg', '19096_right.jpeg', '19092_right.jpeg', '11409_left.jpeg', '19093_right.jpeg', '19097_right.jpeg', '19097_left.jpeg', '19093_left.jpeg', '11409_right.jpeg', '19127_right.jpeg', '1140_left.jpeg', '1910_left.jpeg', '19116_right.jpeg', '19116_left.jpeg', '19104_left.jpeg', '19104_right.jpeg', '1910_right.jpeg', '19127_left.jpeg', '19130_left.jpeg', '19130_right.jpeg', '1140_right.jpeg', '19141_right.jpeg', '19155_left.jpeg', '19141_left.jpeg', '19158_left.jpeg', '11420_left.jpeg', '19140_right.jpeg', '19140_left.jpeg', '19155_right.jpeg', '19143_right.jpeg', '19158_right.jpeg', '19143_left.jpeg', '11420_right.jpeg', '19167_left.jpeg', '11427_left.jpeg', '19160_left.jpeg', '11427_right.jpeg', '19169_right.jpeg', '19167_right.jpeg', '19160_right.jpeg', '19169_left.jpeg', '19171_right.jpeg', '19174_right.jpeg', '19171_left.jpeg', '19174_left.jpeg', '11428_left.jpeg', '19177_left.jpeg', '19189_left.jpeg', '1919_right.jpeg', '19177_right.jpeg', '1919_left.jpeg', '19178_left.jpeg', '19178_right.jpeg', '11428_right.jpeg', '19184_right.jpeg', '19184_left.jpeg', '19189_right.jpeg', '1142_left.jpeg', '11437_left.jpeg', '1920_left.jpeg', '19207_left.jpeg', '19206_right.jpeg', '1920_right.jpeg', '19207_right.jpeg', '19206_left.jpeg', '11437_right.jpeg', '19219_left.jpeg', '19219_right.jpeg', '19226_left.jpeg', '19226_right.jpeg', '19236_left.jpeg', '11438_left.jpeg', '19261_left.jpeg', '19237_left.jpeg', '19261_right.jpeg', '19236_right.jpeg', '19252_right.jpeg', '19237_right.jpeg', '19252_left.jpeg', '19257_right.jpeg', '19257_left.jpeg', '11438_right.jpeg', '1926_left.jpeg', '11444_left.jpeg', '1926_right.jpeg', '19269_left.jpeg', '19269_right.jpeg', '19271_left.jpeg', 
'19273_left.jpeg', '19273_right.jpeg', '19270_left.jpeg', '19270_right.jpeg', '19271_right.jpeg', '11444_right.jpeg', '19302_left.jpeg', '11446_left.jpeg', '19288_right.jpeg', '19296_left.jpeg', '19293_left.jpeg', '19293_right.jpeg', '19278_left.jpeg', '19278_right.jpeg', '19296_right.jpeg', '19288_left.jpeg', '19302_right.jpeg', '11446_right.jpeg', '19327_left.jpeg', '11455_left.jpeg', '19326_left.jpeg', '19325_left.jpeg', '19314_right.jpeg', '19322_left.jpeg', '19314_left.jpeg', '19327_right.jpeg', '19325_right.jpeg', '19322_right.jpeg', '19326_right.jpeg', '11455_right.jpeg', '19340_left.jpeg', '1145_left.jpeg', '1932_left.jpeg', '19340_right.jpeg', '19339_left.jpeg', '19329_left.jpeg', '19339_right.jpeg', '1932_right.jpeg', '19329_right.jpeg', '19331_right.jpeg', '19331_left.jpeg', '1145_right.jpeg', '19369_right.jpeg', '11462_left.jpeg', '19376_left.jpeg', '19366_left.jpeg', '19378_left.jpeg', '19369_left.jpeg', '19400_left.jpeg', '19366_right.jpeg', '19400_right.jpeg', '11462_right.jpeg', '19378_right.jpeg', '19376_right.jpeg', '11474_left.jpeg', '19414_left.jpeg', '1941_right.jpeg', '19422_left.jpeg', '19422_right.jpeg', '11474_right.jpeg', '1941_left.jpeg', '19412_right.jpeg', '19405_right.jpeg', '19412_left.jpeg', '19405_left.jpeg', '19414_right.jpeg', '11476_left.jpeg', '1943_left.jpeg', '19464_right.jpeg', '19467_left.jpeg', '19464_left.jpeg', '19467_right.jpeg', '19444_left.jpeg', '1943_right.jpeg', '11476_right.jpeg', '19462_left.jpeg', '19444_right.jpeg', '19462_right.jpeg', '1147_left.jpeg', '19475_right.jpeg', '19475_left.jpeg', '19472_left.jpeg', '19478_left.jpeg', '19477_right.jpeg', '19477_left.jpeg', '19474_right.jpeg', '19474_left.jpeg', '1147_right.jpeg', '19476_left.jpeg', '19472_right.jpeg', '11484_left.jpeg', '19481_left.jpeg', '19490_right.jpeg', '19486_left.jpeg', '19490_left.jpeg', '19478_right.jpeg', '11484_right.jpeg', '19485_right.jpeg', '19485_left.jpeg', '19498_left.jpeg', '19486_right.jpeg', '19481_right.jpeg', '11492_left.jpeg', 
'19498_right.jpeg', '11492_right.jpeg', '11494_left.jpeg', '11494_right.jpeg', '11496_left.jpeg', '11496_right.jpeg', '114_left.jpeg', '114_right.jpeg', '11500_left.jpeg', '11500_right.jpeg', '11503_left.jpeg', '11503_right.jpeg', '11504_left.jpeg', '11504_right.jpeg', '11512_left.jpeg', '11512_right.jpeg', '11526_left.jpeg', '11526_right.jpeg', '11529_left.jpeg', '11529_right.jpeg', '11541_left.jpeg', '11541_right.jpeg', '11543_left.jpeg', '11543_right.jpeg', '11546_left.jpeg', '11546_right.jpeg', '11547_left.jpeg', '11547_right.jpeg', '11550_left.jpeg', '11550_right.jpeg', '11573_left.jpeg', '11573_right.jpeg', '11575_left.jpeg', '11575_right.jpeg', '11579_left.jpeg', '11584_left.jpeg', '11584_right.jpeg', '11593_left.jpeg', '11593_right.jpeg', '11594_left.jpeg', '11594_right.jpeg', '11603_left.jpeg', '11603_right.jpeg', '11610_left.jpeg', '11610_right.jpeg', '11612_left.jpeg', '11612_right.jpeg', '1161_left.jpeg', '1161_right.jpeg', '11626_left.jpeg', '11626_right.jpeg', '11631_left.jpeg', '11631_right.jpeg', '11638_left.jpeg', '11638_right.jpeg', '11640_left.jpeg', '11640_right.jpeg', '11642_left.jpeg', '11642_right.jpeg', '11645_left.jpeg', '11645_right.jpeg', '11646_left.jpeg', '11646_right.jpeg', '11648_right.jpeg', '11651_left.jpeg', '11651_right.jpeg', '11655_left.jpeg', '11655_right.jpeg', '11661_left.jpeg', '11661_right.jpeg', '11665_left.jpeg', '11665_right.jpeg', '11677_left.jpeg', '11677_right.jpeg', '11679_left.jpeg', '11679_right.jpeg', '1167_left.jpeg', '1167_right.jpeg', '11697_left.jpeg', '11697_right.jpeg', '11698_right.jpeg', '11703_left.jpeg', '11703_right.jpeg', '11707_left.jpeg', '11707_right.jpeg', '11719_left.jpeg', '11719_right.jpeg', '11726_left.jpeg', '11726_right.jpeg', '1172_left.jpeg', '1172_right.jpeg', '11734_left.jpeg', '11734_right.jpeg', '11736_left.jpeg', '11736_right.jpeg', '11738_left.jpeg', '11740_left.jpeg', '11740_right.jpeg', '11741_left.jpeg', '11741_right.jpeg', '11744_left.jpeg', '11744_right.jpeg', '11749_left.jpeg', 
'11749_right.jpeg', '11754_left.jpeg', '11754_right.jpeg', '11758_left.jpeg', '11758_right.jpeg', '11759_left.jpeg', '11759_right.jpeg', '11768_left.jpeg', '11768_right.jpeg', '11770_left.jpeg', '11770_right.jpeg', '11772_left.jpeg', '11772_right.jpeg', '11775_left.jpeg', '11775_right.jpeg', '11778_right.jpeg', '11779_left.jpeg', '11779_right.jpeg', '1177_left.jpeg', '1177_right.jpeg', '11780_left.jpeg', '11780_right.jpeg', '11781_left.jpeg', '11781_right.jpeg', '11789_left.jpeg', '11789_right.jpeg', '1178_right.jpeg', '11790_left.jpeg', '11790_right.jpeg', '11791_left.jpeg', '11791_right.jpeg', '11793_left.jpeg', '11793_right.jpeg', '11796_left.jpeg', '11796_right.jpeg', '1179_left.jpeg', '1179_right.jpeg', '11805_left.jpeg', '11805_right.jpeg', '11807_right.jpeg', '1180_left.jpeg', '1180_right.jpeg', '11811_left.jpeg', '11811_right.jpeg', '11815_left.jpeg', '11815_right.jpeg', '11817_left.jpeg', '11817_right.jpeg', '11818_left.jpeg', '11818_right.jpeg', '11824_left.jpeg', '11824_right.jpeg', '11826_left.jpeg', '11826_right.jpeg', '1182_left.jpeg', '1182_right.jpeg', '11833_left.jpeg', '11833_right.jpeg', '11838_left.jpeg', '11838_right.jpeg', '11845_left.jpeg', '11845_right.jpeg', '11854_left.jpeg', '11854_right.jpeg', '11855_left.jpeg', '11855_right.jpeg', '11856_left.jpeg', '15324_left.jpeg', '15321_right.jpeg', '15320_right.jpeg', '10851_right.jpeg', '15335_right.jpeg', '15353_left.jpeg', '10861_left.jpeg', '1536_left.jpeg', '15366_left.jpeg', '1536_right.jpeg', '15337_right.jpeg', '15335_left.jpeg', '15353_right.jpeg', '15337_left.jpeg', '15366_right.jpeg', '10861_right.jpeg', '15408_right.jpeg', '1541_left.jpeg', '15380_right.jpeg', '15380_left.jpeg', '15408_left.jpeg', '15411_left.jpeg', '15411_right.jpeg', '1540_left.jpeg', '1540_right.jpeg', '15388_left.jpeg', '10865_left.jpeg', '15433_left.jpeg', '10865_right.jpeg', '15431_left.jpeg', '15431_right.jpeg', '15425_right.jpeg', '15425_left.jpeg', '1544_left.jpeg', '1541_right.jpeg', '15437_left.jpeg', 
'15437_right.jpeg', '15433_right.jpeg', '10873_left.jpeg', '1544_right.jpeg', '10873_right.jpeg', '15465_left.jpeg', '15471_right.jpeg', '15465_right.jpeg', '15471_left.jpeg', '15470_left.jpeg', '15481_right.jpeg', '15476_right.jpeg', '15476_left.jpeg', '15481_left.jpeg', '10874_left.jpeg', '15506_left.jpeg', '15506_right.jpeg', '15493_right.jpeg', '15505_right.jpeg', '15505_left.jpeg', '15507_left.jpeg', '10874_right.jpeg', '15493_left.jpeg', '15507_right.jpeg', '15495_left.jpeg', '15495_right.jpeg', '10879_left.jpeg', '15553_left.jpeg', '15530_left.jpeg', '15530_right.jpeg', '15553_right.jpeg', '1551_right.jpeg', '10879_right.jpeg', '1551_left.jpeg', '1550_left.jpeg', '1550_right.jpeg', '15551_left.jpeg', '15551_right.jpeg', '10883_left.jpeg', '15578_left.jpeg', '1557_left.jpeg', '1557_right.jpeg', '15567_left.jpeg', '15578_right.jpeg', '15554_left.jpeg', '10883_right.jpeg', '15576_left.jpeg', '15554_right.jpeg', '15576_right.jpeg', '15567_right.jpeg', '10897_left.jpeg', '15580_left.jpeg', '15580_right.jpeg', '15585_right.jpeg', '15587_right.jpeg', '15594_right.jpeg', '15594_left.jpeg', '15588_left.jpeg', '10897_right.jpeg', '15588_right.jpeg', '15585_left.jpeg', '15587_left.jpeg', '108_left.jpeg', '15602_right.jpeg', '15603_right.jpeg', '15596_right.jpeg', '15627_left.jpeg', '15603_left.jpeg', '108_right.jpeg', '15612_left.jpeg', '15602_left.jpeg', '15595_left.jpeg', '15612_right.jpeg', '15596_left.jpeg', '10904_left.jpeg', '15631_left.jpeg', '10904_right.jpeg', '15631_right.jpeg', '15638_right.jpeg', '15638_left.jpeg', '15627_right.jpeg', '15636_left.jpeg', '15636_right.jpeg', '15630_right.jpeg', '15630_left.jpeg', '10908_left.jpeg', '1563_left.jpeg', '10908_right.jpeg', '15656_right.jpeg', '15644_right.jpeg', '15652_left.jpeg', '1563_right.jpeg', '15640_left.jpeg', '15656_left.jpeg', '15645_left.jpeg', '15640_right.jpeg', '15652_right.jpeg', '10913_left.jpeg', '15645_right.jpeg', '10913_right.jpeg', '15657_left.jpeg', '15657_right.jpeg', '15674_left.jpeg', 
'15667_left.jpeg', '10915_left.jpeg', '15688_right.jpeg', '15690_left.jpeg', '15674_right.jpeg', '15683_right.jpeg', '15683_left.jpeg', '15667_right.jpeg', '10915_right.jpeg', '15694_right.jpeg', '10916_left.jpeg', '15694_left.jpeg', '15690_right.jpeg', '15696_left.jpeg', '15696_right.jpeg', '15708_left.jpeg', '15717_left.jpeg', '10916_right.jpeg', '1569_left.jpeg', '15708_right.jpeg', '1569_right.jpeg', '10919_left.jpeg', '15736_left.jpeg', '15734_right.jpeg', '15728_right.jpeg', '15722_right.jpeg', '15734_left.jpeg', '15717_right.jpeg', '15736_right.jpeg', '15742_left.jpeg', '10919_right.jpeg', '15728_left.jpeg', '15722_left.jpeg', '15758_left.jpeg', '15755_right.jpeg', '15750_left.jpeg', '1574_left.jpeg', '15742_right.jpeg', '15758_right.jpeg', '10923_left.jpeg', '15756_right.jpeg', '15755_left.jpeg', '15750_right.jpeg', '1574_right.jpeg', '10923_right.jpeg', '15767_right.jpeg', '15775_left.jpeg', '15775_right.jpeg', '15794_right.jpeg', '15772_left.jpeg', '15772_right.jpeg', '15794_left.jpeg', '10925_left.jpeg', '15781_right.jpeg', '15781_left.jpeg', '15767_left.jpeg', '15796_left.jpeg', '10925_right.jpeg', '1579_right.jpeg', '15796_right.jpeg', '15811_left.jpeg', '1579_left.jpeg', '15809_right.jpeg', '15811_right.jpeg', '15809_left.jpeg', '15807_left.jpeg', '15807_right.jpeg', '10934_left.jpeg', '15849_right.jpeg', '10934_right.jpeg', '15835_left.jpeg', '15859_right.jpeg', '15859_left.jpeg', '15857_right.jpeg', '15857_left.jpeg', '15851_right.jpeg', '15851_left.jpeg', '10936_right.jpeg', '15849_left.jpeg', '15835_right.jpeg', '1093_left.jpeg', '15886_left.jpeg', '15868_left.jpeg', '1588_left.jpeg', '15886_right.jpeg', '1093_right.jpeg', '1586_left.jpeg', '15873_left.jpeg', '15885_left.jpeg', '15873_right.jpeg', '15885_right.jpeg', '1586_right.jpeg', '10942_left.jpeg', '10942_right.jpeg', '1588_right.jpeg', '15892_right.jpeg', '15892_left.jpeg', '15893_left.jpeg', '15897_right.jpeg', '10945_left.jpeg', '15897_left.jpeg', '15894_left.jpeg', '15894_right.jpeg', 
'15893_right.jpeg', '15898_left.jpeg', '10945_right.jpeg', '15923_left.jpeg', '10947_left.jpeg', '15937_left.jpeg', '15898_right.jpeg', '1592_left.jpeg', '15942_right.jpeg', '1589_right.jpeg', '1589_left.jpeg', '1592_right.jpeg', '15942_left.jpeg', '15937_right.jpeg', '10947_right.jpeg', '15944_left.jpeg', '10962_left.jpeg', '15954_left.jpeg', '15944_right.jpeg', '15948_left.jpeg', '15952_right.jpeg', '15949_right.jpeg', '15952_left.jpeg', '15948_right.jpeg', '15954_right.jpeg', '15949_left.jpeg', '10962_right.jpeg', '1595_left.jpeg', '1595_right.jpeg', '15964_left.jpeg', '15958_left.jpeg', '15955_right.jpeg', '15964_right.jpeg', '10966_left.jpeg', '15979_right.jpeg', '15958_right.jpeg', '15979_left.jpeg', '15955_left.jpeg', '10966_right.jpeg', '159_right.jpeg', '15993_right.jpeg', '159_left.jpeg', '16011_right.jpeg', '15993_left.jpeg', '16009_left.jpeg', '1096_left.jpeg', '16009_right.jpeg', '16021_left.jpeg', '1600_left.jpeg', '1600_right.jpeg', '1096_right.jpeg', '16022_left.jpeg', '10974_left.jpeg', '16022_right.jpeg', '16027_right.jpeg', '16025_left.jpeg', '16025_right.jpeg', '16031_left.jpeg', '16030_right.jpeg', '16021_right.jpeg', '16027_left.jpeg', '16030_left.jpeg', '10974_right.jpeg', '16036_left.jpeg', '10975_left.jpeg', '16031_right.jpeg', '16036_right.jpeg', '16038_left.jpeg', '16043_left.jpeg', '16038_right.jpeg', '10975_right.jpeg', '16065_left.jpeg', '16060_left.jpeg', '16043_right.jpeg', '16060_right.jpeg', '10976_left.jpeg', '1609_left.jpeg', '1609_right.jpeg', '16114_left.jpeg', '16109_left.jpeg', '16101_right.jpeg', '16065_right.jpeg', '16110_right.jpeg', '10976_right.jpeg', '16110_left.jpeg', '16085_right.jpeg', '16085_left.jpeg', '10977_left.jpeg', '16122_left.jpeg', '1611_right.jpeg', '16126_left.jpeg', '16122_right.jpeg', '1611_left.jpeg', '16131_right.jpeg', '10977_right.jpeg', '10983_left.jpeg', '16132_right.jpeg', '16126_right.jpeg', '16114_right.jpeg', '16132_left.jpeg', '10983_right.jpeg', '16139_left.jpeg', '16136_right.jpeg', 
'11992_left.jpeg', '1199_right.jpeg', '11992_right.jpeg', '11990_right.jpeg', '10312_left.jpeg', '11989_left.jpeg', '11990_left.jpeg', '10312_right.jpeg', '12001_right.jpeg', '10319_left.jpeg', '12040_left.jpeg', '12014_left.jpeg', '12001_left.jpeg', '12031_right.jpeg', '12026_left.jpeg', '12014_right.jpeg', '12031_left.jpeg', '10319_right.jpeg', '12026_right.jpeg', '12040_right.jpeg', '10320_left.jpeg', '12042_left.jpeg', '12061_right.jpeg', '12067_right.jpeg', '1206_left.jpeg', '12061_left.jpeg', '12067_left.jpeg', '12068_left.jpeg', '12042_right.jpeg', '12058_right.jpeg', '10320_right.jpeg', '12058_left.jpeg', '10321_left.jpeg', '12089_left.jpeg', '12098_left.jpeg', '12076_left.jpeg', '12100_left.jpeg', '12098_right.jpeg', '12070_left.jpeg', '1206_right.jpeg', '10321_right.jpeg', '12076_right.jpeg', '12070_right.jpeg', '12089_right.jpeg', '10326_left.jpeg', '12100_right.jpeg', '12101_left.jpeg', '12105_left.jpeg', '12105_right.jpeg', '12106_right.jpeg', '12110_right.jpeg', '12106_left.jpeg', '12101_right.jpeg', '12111_left.jpeg', '10326_right.jpeg', '12110_left.jpeg', '10328_left.jpeg', '12139_left.jpeg', '12139_right.jpeg', '12114_right.jpeg', '1211_right.jpeg', '12144_left.jpeg', '12124_left.jpeg', '12114_left.jpeg', '12124_right.jpeg', '10328_right.jpeg', '1211_left.jpeg', '12111_right.jpeg', '10334_left.jpeg', '12146_left.jpeg', '10334_right.jpeg', '12156_left.jpeg', '12155_right.jpeg', '12155_left.jpeg', '12144_right.jpeg', '12149_left.jpeg', '12146_right.jpeg', '12148_left.jpeg', '12148_right.jpeg', '12149_right.jpeg', '10339_left.jpeg', '12156_right.jpeg', '12171_left.jpeg', '12175_left.jpeg', '12168_right.jpeg', '12175_right.jpeg', '12157_right.jpeg', '10339_right.jpeg', '12179_left.jpeg', '12171_right.jpeg', '12157_left.jpeg', '12168_left.jpeg', '10343_left.jpeg', '10343_right.jpeg', '12182_right.jpeg', '12179_right.jpeg', '12206_right.jpeg', '12182_left.jpeg', '12193_left.jpeg', '12193_right.jpeg', '1034_left.jpeg', '12206_left.jpeg', 
'12208_left.jpeg', '12205_right.jpeg', '12205_left.jpeg', '1034_right.jpeg', '12209_right.jpeg', '12213_left.jpeg', '10352_left.jpeg', '12208_right.jpeg', '12239_left.jpeg', '12213_right.jpeg', '12215_left.jpeg', '12215_right.jpeg', '12227_right.jpeg', '12227_left.jpeg', '12209_left.jpeg', '10352_right.jpeg', '12264_right.jpeg', '10353_left.jpeg', '12246_right.jpeg', '12258_right.jpeg', '12239_right.jpeg', '12264_left.jpeg', '12268_left.jpeg', '12255_left.jpeg', '12246_left.jpeg', '12258_left.jpeg', '12255_right.jpeg', '10353_right.jpeg', '1226_left.jpeg', '10356_left.jpeg', '1226_right.jpeg', '12268_right.jpeg', '12294_left.jpeg', '12290_left.jpeg', '12278_left.jpeg', '1228_right.jpeg', '12290_right.jpeg', '1228_left.jpeg', '12278_right.jpeg', '10359_left.jpeg', '12295_right.jpeg', '10359_right.jpeg', '12294_right.jpeg', '122_left.jpeg', '12303_right.jpeg', '10367_left.jpeg', '12303_left.jpeg', '12305_right.jpeg', '12305_left.jpeg', '122_right.jpeg', '12306_left.jpeg', '12295_left.jpeg', '10367_right.jpeg', '12311_left.jpeg', '10368_left.jpeg', '12327_left.jpeg', '12313_right.jpeg', '12320_right.jpeg', '12320_left.jpeg', '12313_left.jpeg', '12321_right.jpeg', '12321_left.jpeg', '12306_right.jpeg', '12311_right.jpeg', '10368_right.jpeg', '10369_left.jpeg', '12348_right.jpeg', '12329_left.jpeg', '12348_left.jpeg', '12333_left.jpeg', '12335_left.jpeg', '12329_right.jpeg', '12333_right.jpeg', '10369_right.jpeg', '12335_right.jpeg', '12352_right.jpeg', '12327_right.jpeg', '1036_left.jpeg', '12395_left.jpeg', '12378_right.jpeg', '12403_right.jpeg', '12378_left.jpeg', '12396_left.jpeg', '12392_right.jpeg', '12395_right.jpeg', '1036_right.jpeg', '12403_left.jpeg', '12392_left.jpeg', '12396_right.jpeg', '10374_left.jpeg', '12409_left.jpeg', '12409_right.jpeg', '12418_right.jpeg', '12432_right.jpeg', '12429_left.jpeg', '12418_left.jpeg', '10374_right.jpeg', '12429_right.jpeg', '12413_left.jpeg', '12413_right.jpeg', '12432_left.jpeg', '10377_left.jpeg', '12458_right.jpeg', 
'12441_right.jpeg', '12447_left.jpeg', '12450_right.jpeg', '12445_left.jpeg', '12450_left.jpeg', '10377_right.jpeg', '12447_right.jpeg', '12458_left.jpeg', '12445_right.jpeg', '12441_left.jpeg', '1037_left.jpeg', '12460_left.jpeg', '12461_right.jpeg', '1037_right.jpeg', '12460_right.jpeg', '10386_left.jpeg', '12468_right.jpeg', '12468_left.jpeg', '12471_left.jpeg', '12469_left.jpeg', '12469_right.jpeg', '12471_right.jpeg', '12461_left.jpeg', '10386_right.jpeg', '12472_left.jpeg', '12491_left.jpeg', '12503_left.jpeg', '12491_right.jpeg', '12472_right.jpeg', '12480_right.jpeg', '10389_left.jpeg', '12492_left.jpeg', '12492_right.jpeg', '12519_left.jpeg', '12480_left.jpeg', '10389_right.jpeg', '12527_left.jpeg', '10391_left.jpeg', '12553_left.jpeg', '12519_right.jpeg', '12553_right.jpeg', '12527_right.jpeg', '12563_right.jpeg', '12567_left.jpeg', '12563_left.jpeg', '12523_left.jpeg', '12523_right.jpeg', '10391_right.jpeg', '12581_left.jpeg', '10397_left.jpeg', '12575_left.jpeg', '12567_right.jpeg', '12591_left.jpeg', '12581_right.jpeg', '12575_right.jpeg', '12586_right.jpeg', '12586_left.jpeg', '12590_right.jpeg', '12590_left.jpeg', '10397_right.jpeg', '10398_left.jpeg', '12591_right.jpeg', '12604_left.jpeg', '12602_right.jpeg', '12606_left.jpeg', '12604_right.jpeg', '12606_right.jpeg', '10398_right.jpeg', '12595_left.jpeg', '12595_right.jpeg', '12602_left.jpeg', '12614_left.jpeg', '10400_left.jpeg', '12614_right.jpeg', '12620_right.jpeg', '12633_left.jpeg', '12620_left.jpeg', '10400_right.jpeg', '12631_left.jpeg', '12630_right.jpeg', '12630_left.jpeg', '12631_right.jpeg', '12626_left.jpeg', '12626_right.jpeg', '10407_left.jpeg', '12649_right.jpeg', '12639_right.jpeg', '12649_left.jpeg', '12633_right.jpeg', '12655_left.jpeg', '10407_right.jpeg', '12639_left.jpeg', '12643_left.jpeg', '12654_right.jpeg', '12643_right.jpeg', '12654_left.jpeg', '10409_left.jpeg', '12660_right.jpeg', '12660_left.jpeg', '12678_right.jpeg', '12676_left.jpeg', '12676_right.jpeg', 
'12667_left.jpeg', '12655_right.jpeg', '10409_right.jpeg', '12667_right.jpeg', '12662_left.jpeg', '12678_left.jpeg', '10413_left.jpeg', '10413_right.jpeg', '12689_right.jpeg', '12689_left.jpeg', '1269_left.jpeg', '1267_right.jpeg', '1267_left.jpeg', '12694_left.jpeg', '12699_left.jpeg', '10415_left.jpeg', '10415_right.jpeg', '12680_left.jpeg', '12680_right.jpeg', '12699_right.jpeg', '10416_left.jpeg', '1269_right.jpeg', '12715_left.jpeg', '12700_left.jpeg', '12708_right.jpeg', '12708_left.jpeg', '10416_right.jpeg', '12729_right.jpeg', '12729_left.jpeg', '12725_left.jpeg', '12725_right.jpeg', '12715_right.jpeg', '10419_left.jpeg', '12751_left.jpeg', '12731_left.jpeg', '12731_right.jpeg', '12745_left.jpeg', '12751_right.jpeg', '12741_left.jpeg', '10419_right.jpeg', '12745_right.jpeg', '12746_right.jpeg', '12741_right.jpeg', '12746_left.jpeg', '10421_left.jpeg', '12759_left.jpeg', '10421_right.jpeg', '12759_right.jpeg', '12761_left.jpeg', '12779_left.jpeg', '12761_right.jpeg', '12779_right.jpeg', '12783_right.jpeg', '12770_right.jpeg', '12783_left.jpeg', '12770_left.jpeg', '10426_left.jpeg', '12793_right.jpeg', '12794_left.jpeg', '12798_left.jpeg', '12794_right.jpeg', '12793_left.jpeg', '12798_right.jpeg', '12792_right.jpeg', '12790_left.jpeg', '12790_right.jpeg', '12792_left.jpeg', '12799_left.jpeg', '12831_right.jpeg', '12831_left.jpeg', '12799_right.jpeg', '12817_right.jpeg', '12817_left.jpeg', '10426_right.jpeg', '12805_right.jpeg', '12805_left.jpeg', '12822_left.jpeg', '12822_right.jpeg', '10429_left.jpeg', '12847_left.jpeg', '12851_right.jpeg', '12839_right.jpeg', '12849_right.jpeg', '12847_right.jpeg', '12839_left.jpeg', '12842_left.jpeg', '10429_right.jpeg', '12851_left.jpeg', '12849_left.jpeg', '12842_right.jpeg', '1042_left.jpeg', '12866_right.jpeg', '1042_right.jpeg', '12878_right.jpeg', '12861_left.jpeg', '12861_right.jpeg', '12866_left.jpeg', '12865_left.jpeg', '12865_right.jpeg', '12859_right.jpeg', '12859_left.jpeg', '12878_left.jpeg', 
'10437_left.jpeg', '12900_left.jpeg', '10437_right.jpeg', '12907_right.jpeg', '12910_left.jpeg', '1290_left.jpeg', '1290_right.jpeg', '12900_right.jpeg', '12910_right.jpeg', '12907_left.jpeg', '1289_left.jpeg', '1289_right.jpeg', '10439_left.jpeg', '12920_left.jpeg', '10439_right.jpeg', '12912_left.jpeg', '12945_left.jpeg', '12933_right.jpeg', '12920_right.jpeg', '12946_left.jpeg', '12945_right.jpeg', '1043_left.jpeg', '12933_left.jpeg', '12924_right.jpeg', '12924_left.jpeg', '1043_right.jpeg', '12950_right.jpeg', '10440_right.jpeg', '12950_left.jpeg', '12968_left.jpeg', '1296_left.jpeg', '12965_left.jpeg', '12967_right.jpeg', '12968_right.jpeg', '12965_right.jpeg', '12967_left.jpeg', '12946_right.jpeg', '10444_left.jpeg', '10444_right.jpeg', '12988_right.jpeg', '12990_left.jpeg', '1296_right.jpeg', '12986_right.jpeg', '12979_right.jpeg', '12988_left.jpeg', '1297_right.jpeg', '10448_left.jpeg', '12979_left.jpeg', '1297_left.jpeg', '12986_left.jpeg', '10448_right.jpeg', '13012_right.jpeg', '12990_right.jpeg', '13002_right.jpeg', '12992_left.jpeg', '13006_left.jpeg', '13006_right.jpeg', '13016_left.jpeg', '10451_left.jpeg', '13012_left.jpeg', '12992_right.jpeg', '13002_left.jpeg', '10451_right.jpeg', '13019_left.jpeg', '13016_right.jpeg', '13038_left.jpeg', '13036_left.jpeg', '13031_right.jpeg', '10452_left.jpeg', '13019_right.jpeg', '13038_right.jpeg', '13036_right.jpeg', '13022_right.jpeg', '13031_left.jpeg', '10452_right.jpeg', '13075_right.jpeg', '10457_left.jpeg', '13051_left.jpeg', '13078_right.jpeg', '13065_right.jpeg', '13078_left.jpeg', '13065_left.jpeg', '13075_left.jpeg', '13052_left.jpeg', '13052_right.jpeg', '13051_right.jpeg', '10457_right.jpeg', '13095_right.jpeg', '10458_left.jpeg', '10007_left.jpeg', '10003_left.jpeg', '10014_right.jpeg', '10015_right.jpeg', '10015_left.jpeg', '10014_left.jpeg', '10007_right.jpeg', '10003_right.jpeg', '10030_right.jpeg', '10030_left.jpeg', '10029_left.jpeg', '10029_right.jpeg', '10009_left.jpeg', '10046_left.jpeg', 
'10031_right.jpeg', '10046_right.jpeg', '10043_right.jpeg', '10042_right.jpeg', '10035_right.jpeg', '10042_left.jpeg', '10009_right.jpeg', '10035_left.jpeg', '10043_left.jpeg', '10031_left.jpeg', '1000_left.jpeg', '10053_right.jpeg', '10058_left.jpeg', '10061_right.jpeg', '10069_right.jpeg', '10058_right.jpeg', '10099_left.jpeg', '1000_right.jpeg', '10099_right.jpeg', '10069_left.jpeg', '10061_left.jpeg', '10053_left.jpeg', '10010_left.jpeg', '10104_left.jpeg', '10124_left.jpeg', '10112_left.jpeg', '10116_right.jpeg', '10115_right.jpeg', '10112_right.jpeg', '10010_right.jpeg', '10109_left.jpeg', '10104_right.jpeg', '10115_left.jpeg', '10116_left.jpeg', '10013_left.jpeg', '10013_right.jpeg', '10124_right.jpeg', '10154_right.jpeg', '10144_right.jpeg', '10131_right.jpeg', '10144_left.jpeg', '10017_left.jpeg', '10153_right.jpeg', '10131_left.jpeg', '10130_left.jpeg', '10154_left.jpeg', '10017_right.jpeg', '10130_right.jpeg', '10022_left.jpeg', '10161_right.jpeg', '10170_left.jpeg', '10160_left.jpeg', '10173_left.jpeg', '10161_left.jpeg', '10170_right.jpeg', '10166_left.jpeg', '10022_right.jpeg', '10160_right.jpeg', '10166_right.jpeg', '10173_right.jpeg', '10028_left.jpeg', '10194_right.jpeg', '10194_left.jpeg', '10184_left.jpeg', '10183_left.jpeg', '10190_left.jpeg', '1017_left.jpeg', '1017_right.jpeg', '10028_right.jpeg', '10183_right.jpeg', '10184_right.jpeg', '10190_right.jpeg', '1002_left.jpeg', '10206_left.jpeg', '10206_right.jpeg', '1021_right.jpeg', '10218_right.jpeg', '1021_left.jpeg', '10220_left.jpeg', '1002_right.jpeg', '1020_left.jpeg', '1020_right.jpeg', '10207_left.jpeg', '10207_right.jpeg', '10032_left.jpeg', '10032_right.jpeg', '10226_left.jpeg', '10220_right.jpeg', '10223_left.jpeg', '10221_left.jpeg', '10230_left.jpeg', '10223_right.jpeg', '10047_left.jpeg', '10221_right.jpeg', '10224_left.jpeg', '10226_right.jpeg', '10224_right.jpeg', '10047_right.jpeg', '10234_right.jpeg', '10230_right.jpeg', '10265_left.jpeg', '10260_left.jpeg', '10260_right.jpeg', 
'10234_left.jpeg', '10263_left.jpeg', '10048_left.jpeg', '10263_right.jpeg', '10244_left.jpeg', '10244_right.jpeg', '10048_right.jpeg', '10266_left.jpeg', '10266_right.jpeg', '10265_right.jpeg', '10288_right.jpeg', '10278_left.jpeg', '10050_left.jpeg', '10276_right.jpeg', '10276_left.jpeg', '10278_right.jpeg', '10288_left.jpeg', '10295_left.jpeg', '10050_right.jpeg', '1029_left.jpeg', '10059_left.jpeg', '10295_right.jpeg', '10297_left.jpeg', '10306_right.jpeg', '10317_right.jpeg', '10306_left.jpeg', '10317_left.jpeg', '10325_left.jpeg', '10297_right.jpeg', '1029_right.jpeg', '10059_right.jpeg', '10337_right.jpeg', '10065_left.jpeg', '1032_right.jpeg', '10356_right.jpeg', '10340_right.jpeg', '10337_left.jpeg', '10065_right.jpeg', '10333_left.jpeg', '10340_left.jpeg', '1032_left.jpeg', '10325_right.jpeg', '10333_right.jpeg', '10073_left.jpeg', '10363_left.jpeg', '1035_left.jpeg', '10384_right.jpeg', '1035_right.jpeg', '10363_right.jpeg', '10372_right.jpeg', '10073_right.jpeg', '10384_left.jpeg', '10379_left.jpeg', '10379_right.jpeg', '10372_left.jpeg', '10078_left.jpeg', '10408_left.jpeg', '10403_right.jpeg', '10401_left.jpeg', '10401_right.jpeg', '10078_right.jpeg', '10403_left.jpeg', '10388_left.jpeg', '10408_right.jpeg', '10404_left.jpeg', '10388_right.jpeg', '10404_right.jpeg', '10081_left.jpeg', '10431_left.jpeg', '10081_right.jpeg', '1041_left.jpeg', '10431_right.jpeg', '10427_left.jpeg', '1041_right.jpeg', '1040_left.jpeg', '1040_right.jpeg', '10427_right.jpeg', '10438_right.jpeg', '10438_left.jpeg', '10085_left.jpeg', '10440_left.jpeg', '10085_right.jpeg', '10454_left.jpeg', '10464_right.jpeg', '10454_right.jpeg', '10468_left.jpeg', '10464_left.jpeg', '10475_left.jpeg', '10468_right.jpeg', '1044_right.jpeg', '1044_left.jpeg', '1008_left.jpeg', '10487_right.jpeg', '1008_right.jpeg', '10492_left.jpeg', '10489_right.jpeg', '10475_right.jpeg', '10487_left.jpeg', '10489_left.jpeg', '10481_right.jpeg', '10484_right.jpeg', '10484_left.jpeg', '10481_left.jpeg', 
'10092_left.jpeg', '1050_right.jpeg', '10092_right.jpeg', '10523_left.jpeg', '104_right.jpeg', '10515_right.jpeg', '10492_right.jpeg', '1051_left.jpeg', '1051_right.jpeg', '1050_left.jpeg', '10515_left.jpeg', '104_left.jpeg', '10094_left.jpeg', '10529_left.jpeg', '10094_right.jpeg', '10545_left.jpeg', '10542_left.jpeg', '10534_left.jpeg', '10534_right.jpeg', '10551_left.jpeg', '10529_right.jpeg', '10545_right.jpeg', '10523_right.jpeg', '10542_right.jpeg', '1055_right.jpeg', '10095_left.jpeg', '10568_left.jpeg', '1055_left.jpeg', '10558_left.jpeg', '10553_left.jpeg', '10551_right.jpeg', '10558_right.jpeg', '10568_right.jpeg', '10553_right.jpeg', '10583_left.jpeg', '10095_right.jpeg', '1059_left.jpeg', '10603_left.jpeg', '10606_right.jpeg', '1059_right.jpeg', '10619_left.jpeg', '100_left.jpeg', '10583_right.jpeg', '10606_left.jpeg', '1058_right.jpeg', '10603_right.jpeg', '1058_left.jpeg', '100_right.jpeg', '1061_right.jpeg', '10100_left.jpeg', '10624_left.jpeg', '10619_right.jpeg', '10623_left.jpeg', '10622_left.jpeg', '10622_right.jpeg', '10623_right.jpeg', '10624_right.jpeg', '10626_left.jpeg', '1061_left.jpeg', '10100_right.jpeg', '10109_right.jpeg', '10643_right.jpeg', '10649_left.jpeg', '10626_right.jpeg', '10643_left.jpeg', '10636_left.jpeg', '1011_left.jpeg', '10636_right.jpeg', '10645_right.jpeg', '10646_left.jpeg', '10646_right.jpeg', '10645_left.jpeg', '1011_right.jpeg', '10680_right.jpeg', '10649_right.jpeg', '10650_left.jpeg', '10650_right.jpeg', '10680_left.jpeg', '10658_left.jpeg', '10674_left.jpeg', '10674_right.jpeg', '10658_right.jpeg', '10688_left.jpeg', '10120_left.jpeg', '10698_right.jpeg', '10120_right.jpeg', '10695_right.jpeg', '10698_left.jpeg', '10694_left.jpeg', '10694_right.jpeg', '10701_right.jpeg', '10701_left.jpeg', '10695_left.jpeg', '10705_left.jpeg', '10688_right.jpeg', '10125_left.jpeg', '10731_left.jpeg', '10720_left.jpeg', '10731_right.jpeg', '10727_left.jpeg', '10727_right.jpeg', '10710_left.jpeg', '10705_right.jpeg', 
'10125_right.jpeg', '10720_right.jpeg', '10732_left.jpeg', '10710_right.jpeg', '10126_left.jpeg', '10755_right.jpeg', '10126_right.jpeg', '10732_right.jpeg', '10771_left.jpeg', '10751_right.jpeg', '10754_right.jpeg', '10754_left.jpeg', '10763_left.jpeg', '10763_right.jpeg', '10755_left.jpeg', '10751_left.jpeg', '10129_left.jpeg', '10771_right.jpeg', '10129_right.jpeg', '10781_right.jpeg', '10785_left.jpeg', '10781_left.jpeg', '10782_right.jpeg', '10782_left.jpeg', '1012_left.jpeg', '10779_left.jpeg', '10779_right.jpeg', '10786_left.jpeg', '10785_right.jpeg', '10787_left.jpeg', '10794_right.jpeg', '10792_right.jpeg', '10794_left.jpeg', '10787_right.jpeg', '10792_left.jpeg', '10811_left.jpeg', '10786_right.jpeg', '10808_left.jpeg', '10808_right.jpeg', '1012_right.jpeg', '10823_left.jpeg', '10811_right.jpeg', '10832_left.jpeg', '10823_right.jpeg', '10832_right.jpeg', '10814_right.jpeg', '10134_left.jpeg', '10822_right.jpeg', '10833_left.jpeg', '10814_left.jpeg', '10822_left.jpeg', '10134_right.jpeg', '10838_left.jpeg', '10135_left.jpeg', '10838_right.jpeg', '10844_right.jpeg', '10842_right.jpeg', '10848_left.jpeg', '10845_left.jpeg', '10845_right.jpeg', '10135_right.jpeg', '10833_right.jpeg', '10844_left.jpeg', '10842_left.jpeg', '10137_left.jpeg', '10864_right.jpeg', '10854_right.jpeg', '10853_right.jpeg', '10848_right.jpeg', '10854_left.jpeg', '10853_left.jpeg', '10137_right.jpeg']\n"
],
[
"image_data=[]",
"_____no_output_____"
],
[
"import os\n\nfrom keras.preprocessing import image\nfrom PIL import ImageFile\n\n# Skip corrupt/truncated JPEGs instead of raising an error while loading.\nImageFile.LOAD_TRUNCATED_IMAGES = True\n\n# Load every image listed in `folders`, resized to ResNet50's 224x224 input.\nfor ix in folders:\n    path = os.path.join(\"/content/drive/My Drive/Data train\", ix)\n    img = image.load_img(path, target_size=(224, 224))\n    img_array = image.img_to_array(img)\n    image_data.append(img_array)",
"Using TensorFlow backend.\n"
],
[
"\nprint(len(image_data))",
"8395\n"
],
[
"import numpy as np\n\nx_train = np.array(image_data)\n\nprint(x_train.shape)",
"(8395, 224, 224, 3)\n"
]
],
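The `x_train` array built above holds raw pixel values in [0, 255]; a common next step (not shown in the original notebook) is to scale them to [0, 1] before training. A minimal sketch, using a tiny stand-in batch with the same (N, 224, 224, 3) shape:

```python
import numpy as np

def normalize_images(x):
    """Scale pixel arrays from [0, 255] to float32 in [0, 1]."""
    return x.astype("float32") / 255.0

# Tiny stand-in batch with the notebook's (N, 224, 224, 3) image shape.
batch = np.random.randint(0, 256, size=(2, 224, 224, 3), dtype=np.uint8)
scaled = normalize_images(batch)
```

In the notebook this would be applied to the real `x_train` in place of the stand-in batch.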
[
[
"## Preparing y_train\n",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nfilter_data = pd.read_csv(\"/content/drive/My Drive/file1.csv\")\nprint(filter_data.shape)\nfilter_data.head(n=21)\n",
"(8395, 3)\n"
],
[
"y_train = filter_data[\"level\"]\ny_train.shape",
"_____no_output_____"
]
],
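The `level` column read from file1.csv supplies one integer severity label per image. A hedged sketch of this extraction with a tiny stand-in DataFrame (the `image` column here is hypothetical; only `level` is confirmed by the notebook):

```python
import pandas as pd

# Stand-in for file1.csv: one row per image with its severity level.
filter_data = pd.DataFrame({
    "image": ["10003_left", "10003_right", "10007_left"],  # hypothetical column
    "level": [0, 3, 1],
})

y_train = filter_data["level"]
counts = y_train.value_counts().to_dict()  # class distribution per level
```

Checking `value_counts()` like this is a quick way to spot class imbalance before training.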
[
[
"## One-hot vector conversion",
"_____no_output_____"
]
],
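`np_utils.to_categorical` in the next cell turns each integer level 0-4 into a length-5 indicator vector. What it does can be sketched in plain NumPy:

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Minimal stand-in for keras' to_categorical on integer labels."""
    out = np.zeros((len(labels), num_classes), dtype="float32")
    out[np.arange(len(labels)), labels] = 1.0
    return out

levels = np.array([0, 2, 4, 1])   # example severity levels
one_hot = to_one_hot(levels, 5)   # shape (4, 5), exactly one 1.0 per row
```

This is why `y_train.shape` becomes (8395, 5) after the conversion below.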
[
[
"from keras.utils import np_utils\n\ny_train = np_utils.to_categorical(y_train)\nprint(x_train.shape, y_train.shape)\n",
"(8395, 224, 224, 3) (8395, 5)\n"
],
[
"# Create the ResNet50 model\nfrom keras.applications.resnet50 import ResNet50\nfrom keras.preprocessing import image\nfrom keras.optimizers import Adam\nfrom keras.layers import *\nfrom keras.models import Model\n\nfrom matplotlib import pyplot as plt",
"_____no_output_____"
],
[
"model = ResNet50(include_top=False, weights='imagenet', input_shape=(224, 224, 3))",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:66: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:541: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4479: The name tf.truncated_normal is deprecated. Please use tf.random.truncated_normal instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:190: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:197: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:203: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:207: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:216: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:223: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:2041: The name tf.nn.fused_batch_norm is deprecated. 
Please use tf.compat.v1.nn.fused_batch_norm instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:148: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4267: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.\n\n"
],
[
"model.summary()",
"Model: \"resnet50\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 224, 224, 3) 0 \n__________________________________________________________________________________________________\nconv1_pad (ZeroPadding2D) (None, 230, 230, 3) 0 input_1[0][0] \n__________________________________________________________________________________________________\nconv1 (Conv2D) (None, 112, 112, 64) 9472 conv1_pad[0][0] \n__________________________________________________________________________________________________\nbn_conv1 (BatchNormalization) (None, 112, 112, 64) 256 conv1[0][0] \n__________________________________________________________________________________________________\nactivation_1 (Activation) (None, 112, 112, 64) 0 bn_conv1[0][0] \n__________________________________________________________________________________________________\npool1_pad (ZeroPadding2D) (None, 114, 114, 64) 0 activation_1[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_1 (MaxPooling2D) (None, 56, 56, 64) 0 pool1_pad[0][0] \n__________________________________________________________________________________________________\nres2a_branch2a (Conv2D) (None, 56, 56, 64) 4160 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2a (BatchNormalizati (None, 56, 56, 64) 256 res2a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_2 (Activation) (None, 56, 56, 64) 0 bn2a_branch2a[0][0] \n__________________________________________________________________________________________________\nres2a_branch2b (Conv2D) (None, 56, 56, 64) 36928 
activation_2[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2b (BatchNormalizati (None, 56, 56, 64) 256 res2a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_3 (Activation) (None, 56, 56, 64) 0 bn2a_branch2b[0][0] \n__________________________________________________________________________________________________\nres2a_branch2c (Conv2D) (None, 56, 56, 256) 16640 activation_3[0][0] \n__________________________________________________________________________________________________\nres2a_branch1 (Conv2D) (None, 56, 56, 256) 16640 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2c (BatchNormalizati (None, 56, 56, 256) 1024 res2a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn2a_branch1 (BatchNormalizatio (None, 56, 56, 256) 1024 res2a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_1 (Add) (None, 56, 56, 256) 0 bn2a_branch2c[0][0] \n bn2a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_4 (Activation) (None, 56, 56, 256) 0 add_1[0][0] \n__________________________________________________________________________________________________\nres2b_branch2a (Conv2D) (None, 56, 56, 64) 16448 activation_4[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2a (BatchNormalizati (None, 56, 56, 64) 256 res2b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_5 (Activation) (None, 56, 56, 64) 0 bn2b_branch2a[0][0] 
\n__________________________________________________________________________________________________\nres2b_branch2b (Conv2D) (None, 56, 56, 64) 36928 activation_5[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2b (BatchNormalizati (None, 56, 56, 64) 256 res2b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_6 (Activation) (None, 56, 56, 64) 0 bn2b_branch2b[0][0] \n__________________________________________________________________________________________________\nres2b_branch2c (Conv2D) (None, 56, 56, 256) 16640 activation_6[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2c (BatchNormalizati (None, 56, 56, 256) 1024 res2b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_2 (Add) (None, 56, 56, 256) 0 bn2b_branch2c[0][0] \n activation_4[0][0] \n__________________________________________________________________________________________________\nactivation_7 (Activation) (None, 56, 56, 256) 0 add_2[0][0] \n__________________________________________________________________________________________________\nres2c_branch2a (Conv2D) (None, 56, 56, 64) 16448 activation_7[0][0] \n__________________________________________________________________________________________________\nbn2c_branch2a (BatchNormalizati (None, 56, 56, 64) 256 res2c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_8 (Activation) (None, 56, 56, 64) 0 bn2c_branch2a[0][0] \n__________________________________________________________________________________________________\nres2c_branch2b (Conv2D) (None, 56, 56, 64) 36928 activation_8[0][0] 
\n__________________________________________________________________________________________________\nbn2c_branch2b (BatchNormalizati (None, 56, 56, 64) 256 res2c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_9 (Activation) (None, 56, 56, 64) 0 bn2c_branch2b[0][0] \n__________________________________________________________________________________________________\nres2c_branch2c (Conv2D) (None, 56, 56, 256) 16640 activation_9[0][0] \n__________________________________________________________________________________________________\nbn2c_branch2c (BatchNormalizati (None, 56, 56, 256) 1024 res2c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_3 (Add) (None, 56, 56, 256) 0 bn2c_branch2c[0][0] \n activation_7[0][0] \n__________________________________________________________________________________________________\nactivation_10 (Activation) (None, 56, 56, 256) 0 add_3[0][0] \n__________________________________________________________________________________________________\nres3a_branch2a (Conv2D) (None, 28, 28, 128) 32896 activation_10[0][0] \n__________________________________________________________________________________________________\nbn3a_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_11 (Activation) (None, 28, 28, 128) 0 bn3a_branch2a[0][0] \n__________________________________________________________________________________________________\nres3a_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_11[0][0] \n__________________________________________________________________________________________________\nbn3a_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3a_branch2b[0][0] 
\n__________________________________________________________________________________________________\nactivation_12 (Activation) (None, 28, 28, 128) 0 bn3a_branch2b[0][0] \n__________________________________________________________________________________________________\nres3a_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_12[0][0] \n__________________________________________________________________________________________________\nres3a_branch1 (Conv2D) (None, 28, 28, 512) 131584 activation_10[0][0] \n__________________________________________________________________________________________________\nbn3a_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn3a_branch1 (BatchNormalizatio (None, 28, 28, 512) 2048 res3a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_4 (Add) (None, 28, 28, 512) 0 bn3a_branch2c[0][0] \n bn3a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_13 (Activation) (None, 28, 28, 512) 0 add_4[0][0] \n__________________________________________________________________________________________________\nres3b_branch2a (Conv2D) (None, 28, 28, 128) 65664 activation_13[0][0] \n__________________________________________________________________________________________________\nbn3b_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_14 (Activation) (None, 28, 28, 128) 0 bn3b_branch2a[0][0] \n__________________________________________________________________________________________________\nres3b_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_14[0][0] 
\n__________________________________________________________________________________________________\nbn3b_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_15 (Activation) (None, 28, 28, 128) 0 bn3b_branch2b[0][0] \n__________________________________________________________________________________________________\nres3b_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_15[0][0] \n__________________________________________________________________________________________________\nbn3b_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_5 (Add) (None, 28, 28, 512) 0 bn3b_branch2c[0][0] \n activation_13[0][0] \n__________________________________________________________________________________________________\nactivation_16 (Activation) (None, 28, 28, 512) 0 add_5[0][0] \n__________________________________________________________________________________________________\nres3c_branch2a (Conv2D) (None, 28, 28, 128) 65664 activation_16[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_17 (Activation) (None, 28, 28, 128) 0 bn3c_branch2a[0][0] \n__________________________________________________________________________________________________\nres3c_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_17[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3c_branch2b[0][0] 
\n__________________________________________________________________________________________________\nactivation_18 (Activation) (None, 28, 28, 128) 0 bn3c_branch2b[0][0] \n__________________________________________________________________________________________________\nres3c_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_18[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_6 (Add) (None, 28, 28, 512) 0 bn3c_branch2c[0][0] \n activation_16[0][0] \n__________________________________________________________________________________________________\nactivation_19 (Activation) (None, 28, 28, 512) 0 add_6[0][0] \n__________________________________________________________________________________________________\nres3d_branch2a (Conv2D) (None, 28, 28, 128) 65664 activation_19[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3d_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_20 (Activation) (None, 28, 28, 128) 0 bn3d_branch2a[0][0] \n__________________________________________________________________________________________________\nres3d_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_20[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3d_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_21 (Activation) (None, 28, 28, 128) 0 bn3d_branch2b[0][0] 
\n__________________________________________________________________________________________________\nres3d_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_21[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3d_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_7 (Add) (None, 28, 28, 512) 0 bn3d_branch2c[0][0] \n activation_19[0][0] \n__________________________________________________________________________________________________\nactivation_22 (Activation) (None, 28, 28, 512) 0 add_7[0][0] \n__________________________________________________________________________________________________\nres4a_branch2a (Conv2D) (None, 14, 14, 256) 131328 activation_22[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_23 (Activation) (None, 14, 14, 256) 0 bn4a_branch2a[0][0] \n__________________________________________________________________________________________________\nres4a_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_23[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_24 (Activation) (None, 14, 14, 256) 0 bn4a_branch2b[0][0] \n__________________________________________________________________________________________________\nres4a_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_24[0][0] 
\n__________________________________________________________________________________________________\nres4a_branch1 (Conv2D) (None, 14, 14, 1024) 525312 activation_22[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn4a_branch1 (BatchNormalizatio (None, 14, 14, 1024) 4096 res4a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_8 (Add) (None, 14, 14, 1024) 0 bn4a_branch2c[0][0] \n bn4a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_25 (Activation) (None, 14, 14, 1024) 0 add_8[0][0] \n__________________________________________________________________________________________________\nres4b_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_25[0][0] \n__________________________________________________________________________________________________\nbn4b_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_26 (Activation) (None, 14, 14, 256) 0 bn4b_branch2a[0][0] \n__________________________________________________________________________________________________\nres4b_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_26[0][0] \n__________________________________________________________________________________________________\nbn4b_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_27 (Activation) (None, 14, 14, 256) 0 bn4b_branch2b[0][0] 
\n__________________________________________________________________________________________________\nres4b_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_27[0][0] \n__________________________________________________________________________________________________\nbn4b_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_9 (Add) (None, 14, 14, 1024) 0 bn4b_branch2c[0][0] \n activation_25[0][0] \n__________________________________________________________________________________________________\nactivation_28 (Activation) (None, 14, 14, 1024) 0 add_9[0][0] \n__________________________________________________________________________________________________\nres4c_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_28[0][0] \n__________________________________________________________________________________________________\nbn4c_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_29 (Activation) (None, 14, 14, 256) 0 bn4c_branch2a[0][0] \n__________________________________________________________________________________________________\nres4c_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_29[0][0] \n__________________________________________________________________________________________________\nbn4c_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_30 (Activation) (None, 14, 14, 256) 0 bn4c_branch2b[0][0] \n__________________________________________________________________________________________________\nres4c_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_30[0][0] 
\n__________________________________________________________________________________________________\nbn4c_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_10 (Add) (None, 14, 14, 1024) 0 bn4c_branch2c[0][0] \n activation_28[0][0] \n__________________________________________________________________________________________________\nactivation_31 (Activation) (None, 14, 14, 1024) 0 add_10[0][0] \n__________________________________________________________________________________________________\nres4d_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_31[0][0] \n__________________________________________________________________________________________________\nbn4d_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4d_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_32 (Activation) (None, 14, 14, 256) 0 bn4d_branch2a[0][0] \n__________________________________________________________________________________________________\nres4d_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_32[0][0] \n__________________________________________________________________________________________________\nbn4d_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4d_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_33 (Activation) (None, 14, 14, 256) 0 bn4d_branch2b[0][0] \n__________________________________________________________________________________________________\nres4d_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_33[0][0] \n__________________________________________________________________________________________________\nbn4d_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4d_branch2c[0][0] 
\n__________________________________________________________________________________________________\nadd_11 (Add) (None, 14, 14, 1024) 0 bn4d_branch2c[0][0] \n activation_31[0][0] \n__________________________________________________________________________________________________\nactivation_34 (Activation) (None, 14, 14, 1024) 0 add_11[0][0] \n__________________________________________________________________________________________________\nres4e_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_34[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4e_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_35 (Activation) (None, 14, 14, 256) 0 bn4e_branch2a[0][0] \n__________________________________________________________________________________________________\nres4e_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_35[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4e_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_36 (Activation) (None, 14, 14, 256) 0 bn4e_branch2b[0][0] \n__________________________________________________________________________________________________\nres4e_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_36[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4e_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_12 (Add) (None, 14, 14, 1024) 0 bn4e_branch2c[0][0] \n activation_34[0][0] 
\n__________________________________________________________________________________________________\nactivation_37 (Activation) (None, 14, 14, 1024) 0 add_12[0][0] \n__________________________________________________________________________________________________\nres4f_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_37[0][0] \n__________________________________________________________________________________________________\nbn4f_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4f_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_38 (Activation) (None, 14, 14, 256) 0 bn4f_branch2a[0][0] \n__________________________________________________________________________________________________\nres4f_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_38[0][0] \n__________________________________________________________________________________________________\nbn4f_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4f_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_39 (Activation) (None, 14, 14, 256) 0 bn4f_branch2b[0][0] \n__________________________________________________________________________________________________\nres4f_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_39[0][0] \n__________________________________________________________________________________________________\nbn4f_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4f_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_13 (Add) (None, 14, 14, 1024) 0 bn4f_branch2c[0][0] \n activation_37[0][0] \n__________________________________________________________________________________________________\nactivation_40 (Activation) (None, 14, 14, 1024) 0 add_13[0][0] 
\n__________________________________________________________________________________________________\nres5a_branch2a (Conv2D) (None, 7, 7, 512) 524800 activation_40[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2a (BatchNormalizati (None, 7, 7, 512) 2048 res5a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_41 (Activation) (None, 7, 7, 512) 0 bn5a_branch2a[0][0] \n__________________________________________________________________________________________________\nres5a_branch2b (Conv2D) (None, 7, 7, 512) 2359808 activation_41[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2b (BatchNormalizati (None, 7, 7, 512) 2048 res5a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_42 (Activation) (None, 7, 7, 512) 0 bn5a_branch2b[0][0] \n__________________________________________________________________________________________________\nres5a_branch2c (Conv2D) (None, 7, 7, 2048) 1050624 activation_42[0][0] \n__________________________________________________________________________________________________\nres5a_branch1 (Conv2D) (None, 7, 7, 2048) 2099200 activation_40[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2c (BatchNormalizati (None, 7, 7, 2048) 8192 res5a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn5a_branch1 (BatchNormalizatio (None, 7, 7, 2048) 8192 res5a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_14 (Add) (None, 7, 7, 2048) 0 bn5a_branch2c[0][0] \n bn5a_branch1[0][0] 
\n__________________________________________________________________________________________________\nactivation_43 (Activation) (None, 7, 7, 2048) 0 add_14[0][0] \n__________________________________________________________________________________________________\nres5b_branch2a (Conv2D) (None, 7, 7, 512) 1049088 activation_43[0][0] \n__________________________________________________________________________________________________\nbn5b_branch2a (BatchNormalizati (None, 7, 7, 512) 2048 res5b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_44 (Activation) (None, 7, 7, 512) 0 bn5b_branch2a[0][0] \n__________________________________________________________________________________________________\nres5b_branch2b (Conv2D) (None, 7, 7, 512) 2359808 activation_44[0][0] \n__________________________________________________________________________________________________\nbn5b_branch2b (BatchNormalizati (None, 7, 7, 512) 2048 res5b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_45 (Activation) (None, 7, 7, 512) 0 bn5b_branch2b[0][0] \n__________________________________________________________________________________________________\nres5b_branch2c (Conv2D) (None, 7, 7, 2048) 1050624 activation_45[0][0] \n__________________________________________________________________________________________________\nbn5b_branch2c (BatchNormalizati (None, 7, 7, 2048) 8192 res5b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_15 (Add) (None, 7, 7, 2048) 0 bn5b_branch2c[0][0] \n activation_43[0][0] \n__________________________________________________________________________________________________\nactivation_46 (Activation) (None, 7, 7, 2048) 0 add_15[0][0] 
\n__________________________________________________________________________________________________\nres5c_branch2a (Conv2D) (None, 7, 7, 512) 1049088 activation_46[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2a (BatchNormalizati (None, 7, 7, 512) 2048 res5c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_47 (Activation) (None, 7, 7, 512) 0 bn5c_branch2a[0][0] \n__________________________________________________________________________________________________\nres5c_branch2b (Conv2D) (None, 7, 7, 512) 2359808 activation_47[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2b (BatchNormalizati (None, 7, 7, 512) 2048 res5c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_48 (Activation) (None, 7, 7, 512) 0 bn5c_branch2b[0][0] \n__________________________________________________________________________________________________\nres5c_branch2c (Conv2D) (None, 7, 7, 2048) 1050624 activation_48[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2c (BatchNormalizati (None, 7, 7, 2048) 8192 res5c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_16 (Add) (None, 7, 7, 2048) 0 bn5c_branch2c[0][0] \n activation_46[0][0] \n__________________________________________________________________________________________________\nactivation_49 (Activation) (None, 7, 7, 2048) 0 add_16[0][0] \n==================================================================================================\nTotal params: 23,587,712\nTrainable params: 23,534,592\nNon-trainable params: 
53,120\n__________________________________________________________________________________________________\n"
],
[
"av1=GlobalAveragePooling2D()(model.output)\nfc1=Dense(256,activation='relu')(av1)\nd1=Dropout(0.5)(fc1)\nfc2=Dense(5,activation='softmax')(d1)\n\n# Keras 2 functional API uses inputs=/outputs= (input=/output= is deprecated)\nmodel_new=Model(inputs=model.input,outputs=fc2)\nmodel_new.summary()\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4432: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3733: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.\nModel: \"model_1\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 224, 224, 3) 0 \n__________________________________________________________________________________________________\nconv1_pad (ZeroPadding2D) (None, 230, 230, 3) 0 input_1[0][0] \n__________________________________________________________________________________________________\nconv1 (Conv2D) (None, 112, 112, 64) 9472 conv1_pad[0][0] \n__________________________________________________________________________________________________\nbn_conv1 (BatchNormalization) (None, 112, 112, 64) 256 conv1[0][0] \n__________________________________________________________________________________________________\nactivation_1 (Activation) (None, 112, 112, 64) 0 bn_conv1[0][0] \n__________________________________________________________________________________________________\npool1_pad (ZeroPadding2D) (None, 114, 114, 64) 0 activation_1[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_1 (MaxPooling2D) (None, 56, 56, 64) 0 pool1_pad[0][0] \n__________________________________________________________________________________________________\nres2a_branch2a (Conv2D) (None, 56, 56, 64) 4160 
max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2a (BatchNormalizati (None, 56, 56, 64) 256 res2a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_2 (Activation) (None, 56, 56, 64) 0 bn2a_branch2a[0][0] \n__________________________________________________________________________________________________\nres2a_branch2b (Conv2D) (None, 56, 56, 64) 36928 activation_2[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2b (BatchNormalizati (None, 56, 56, 64) 256 res2a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_3 (Activation) (None, 56, 56, 64) 0 bn2a_branch2b[0][0] \n__________________________________________________________________________________________________\nres2a_branch2c (Conv2D) (None, 56, 56, 256) 16640 activation_3[0][0] \n__________________________________________________________________________________________________\nres2a_branch1 (Conv2D) (None, 56, 56, 256) 16640 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2c (BatchNormalizati (None, 56, 56, 256) 1024 res2a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn2a_branch1 (BatchNormalizatio (None, 56, 56, 256) 1024 res2a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_1 (Add) (None, 56, 56, 256) 0 bn2a_branch2c[0][0] \n bn2a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_4 (Activation) (None, 56, 56, 256) 0 add_1[0][0] 
\n__________________________________________________________________________________________________\nres2b_branch2a (Conv2D) (None, 56, 56, 64) 16448 activation_4[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2a (BatchNormalizati (None, 56, 56, 64) 256 res2b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_5 (Activation) (None, 56, 56, 64) 0 bn2b_branch2a[0][0] \n__________________________________________________________________________________________________\nres2b_branch2b (Conv2D) (None, 56, 56, 64) 36928 activation_5[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2b (BatchNormalizati (None, 56, 56, 64) 256 res2b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_6 (Activation) (None, 56, 56, 64) 0 bn2b_branch2b[0][0] \n__________________________________________________________________________________________________\nres2b_branch2c (Conv2D) (None, 56, 56, 256) 16640 activation_6[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2c (BatchNormalizati (None, 56, 56, 256) 1024 res2b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_2 (Add) (None, 56, 56, 256) 0 bn2b_branch2c[0][0] \n activation_4[0][0] \n__________________________________________________________________________________________________\nactivation_7 (Activation) (None, 56, 56, 256) 0 add_2[0][0] \n__________________________________________________________________________________________________\nres2c_branch2a (Conv2D) (None, 56, 56, 64) 16448 activation_7[0][0] 
\n__________________________________________________________________________________________________\nbn2c_branch2a (BatchNormalizati (None, 56, 56, 64) 256 res2c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_8 (Activation) (None, 56, 56, 64) 0 bn2c_branch2a[0][0] \n__________________________________________________________________________________________________\nres2c_branch2b (Conv2D) (None, 56, 56, 64) 36928 activation_8[0][0] \n__________________________________________________________________________________________________\nbn2c_branch2b (BatchNormalizati (None, 56, 56, 64) 256 res2c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_9 (Activation) (None, 56, 56, 64) 0 bn2c_branch2b[0][0] \n__________________________________________________________________________________________________\nres2c_branch2c (Conv2D) (None, 56, 56, 256) 16640 activation_9[0][0] \n__________________________________________________________________________________________________\nbn2c_branch2c (BatchNormalizati (None, 56, 56, 256) 1024 res2c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_3 (Add) (None, 56, 56, 256) 0 bn2c_branch2c[0][0] \n activation_7[0][0] \n__________________________________________________________________________________________________\nactivation_10 (Activation) (None, 56, 56, 256) 0 add_3[0][0] \n__________________________________________________________________________________________________\nres3a_branch2a (Conv2D) (None, 28, 28, 128) 32896 activation_10[0][0] \n__________________________________________________________________________________________________\nbn3a_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3a_branch2a[0][0] 
\n__________________________________________________________________________________________________\nactivation_11 (Activation) (None, 28, 28, 128) 0 bn3a_branch2a[0][0] \n__________________________________________________________________________________________________\nres3a_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_11[0][0] \n__________________________________________________________________________________________________\nbn3a_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_12 (Activation) (None, 28, 28, 128) 0 bn3a_branch2b[0][0] \n__________________________________________________________________________________________________\nres3a_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_12[0][0] \n__________________________________________________________________________________________________\nres3a_branch1 (Conv2D) (None, 28, 28, 512) 131584 activation_10[0][0] \n__________________________________________________________________________________________________\nbn3a_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn3a_branch1 (BatchNormalizatio (None, 28, 28, 512) 2048 res3a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_4 (Add) (None, 28, 28, 512) 0 bn3a_branch2c[0][0] \n bn3a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_13 (Activation) (None, 28, 28, 512) 0 add_4[0][0] \n__________________________________________________________________________________________________\nres3b_branch2a (Conv2D) (None, 28, 28, 128) 65664 activation_13[0][0] 
\n__________________________________________________________________________________________________\nbn3b_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_14 (Activation) (None, 28, 28, 128) 0 bn3b_branch2a[0][0] \n__________________________________________________________________________________________________\nres3b_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_14[0][0] \n__________________________________________________________________________________________________\nbn3b_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_15 (Activation) (None, 28, 28, 128) 0 bn3b_branch2b[0][0] \n__________________________________________________________________________________________________\nres3b_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_15[0][0] \n__________________________________________________________________________________________________\nbn3b_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_5 (Add) (None, 28, 28, 512) 0 bn3b_branch2c[0][0] \n activation_13[0][0] \n__________________________________________________________________________________________________\nactivation_16 (Activation) (None, 28, 28, 512) 0 add_5[0][0] \n__________________________________________________________________________________________________\nres3c_branch2a (Conv2D) (None, 28, 28, 128) 65664 activation_16[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3c_branch2a[0][0] 
\n__________________________________________________________________________________________________\nactivation_17 (Activation) (None, 28, 28, 128) 0 bn3c_branch2a[0][0] \n__________________________________________________________________________________________________\nres3c_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_17[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_18 (Activation) (None, 28, 28, 128) 0 bn3c_branch2b[0][0] \n__________________________________________________________________________________________________\nres3c_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_18[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_6 (Add) (None, 28, 28, 512) 0 bn3c_branch2c[0][0] \n activation_16[0][0] \n__________________________________________________________________________________________________\nactivation_19 (Activation) (None, 28, 28, 512) 0 add_6[0][0] \n__________________________________________________________________________________________________\nres3d_branch2a (Conv2D) (None, 28, 28, 128) 65664 activation_19[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3d_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_20 (Activation) (None, 28, 28, 128) 0 bn3d_branch2a[0][0] 
\n__________________________________________________________________________________________________\nres3d_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_20[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3d_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_21 (Activation) (None, 28, 28, 128) 0 bn3d_branch2b[0][0] \n__________________________________________________________________________________________________\nres3d_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_21[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3d_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_7 (Add) (None, 28, 28, 512) 0 bn3d_branch2c[0][0] \n activation_19[0][0] \n__________________________________________________________________________________________________\nactivation_22 (Activation) (None, 28, 28, 512) 0 add_7[0][0] \n__________________________________________________________________________________________________\nres4a_branch2a (Conv2D) (None, 14, 14, 256) 131328 activation_22[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_23 (Activation) (None, 14, 14, 256) 0 bn4a_branch2a[0][0] \n__________________________________________________________________________________________________\nres4a_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_23[0][0] 
\n__________________________________________________________________________________________________\nbn4a_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_24 (Activation) (None, 14, 14, 256) 0 bn4a_branch2b[0][0] \n__________________________________________________________________________________________________\nres4a_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_24[0][0] \n__________________________________________________________________________________________________\nres4a_branch1 (Conv2D) (None, 14, 14, 1024) 525312 activation_22[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn4a_branch1 (BatchNormalizatio (None, 14, 14, 1024) 4096 res4a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_8 (Add) (None, 14, 14, 1024) 0 bn4a_branch2c[0][0] \n bn4a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_25 (Activation) (None, 14, 14, 1024) 0 add_8[0][0] \n__________________________________________________________________________________________________\nres4b_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_25[0][0] \n__________________________________________________________________________________________________\nbn4b_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_26 (Activation) (None, 14, 14, 256) 0 bn4b_branch2a[0][0] 
\n__________________________________________________________________________________________________\nres4b_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_26[0][0] \n__________________________________________________________________________________________________\nbn4b_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_27 (Activation) (None, 14, 14, 256) 0 bn4b_branch2b[0][0] \n__________________________________________________________________________________________________\nres4b_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_27[0][0] \n__________________________________________________________________________________________________\nbn4b_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_9 (Add) (None, 14, 14, 1024) 0 bn4b_branch2c[0][0] \n activation_25[0][0] \n__________________________________________________________________________________________________\nactivation_28 (Activation) (None, 14, 14, 1024) 0 add_9[0][0] \n__________________________________________________________________________________________________\nres4c_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_28[0][0] \n__________________________________________________________________________________________________\nbn4c_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_29 (Activation) (None, 14, 14, 256) 0 bn4c_branch2a[0][0] \n__________________________________________________________________________________________________\nres4c_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_29[0][0] 
\n__________________________________________________________________________________________________\nbn4c_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_30 (Activation) (None, 14, 14, 256) 0 bn4c_branch2b[0][0] \n__________________________________________________________________________________________________\nres4c_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_30[0][0] \n__________________________________________________________________________________________________\nbn4c_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_10 (Add) (None, 14, 14, 1024) 0 bn4c_branch2c[0][0] \n activation_28[0][0] \n__________________________________________________________________________________________________\nactivation_31 (Activation) (None, 14, 14, 1024) 0 add_10[0][0] \n__________________________________________________________________________________________________\nres4d_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_31[0][0] \n__________________________________________________________________________________________________\nbn4d_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4d_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_32 (Activation) (None, 14, 14, 256) 0 bn4d_branch2a[0][0] \n__________________________________________________________________________________________________\nres4d_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_32[0][0] \n__________________________________________________________________________________________________\nbn4d_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4d_branch2b[0][0] 
\n__________________________________________________________________________________________________\nactivation_33 (Activation) (None, 14, 14, 256) 0 bn4d_branch2b[0][0] \n__________________________________________________________________________________________________\nres4d_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_33[0][0] \n__________________________________________________________________________________________________\nbn4d_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4d_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_11 (Add) (None, 14, 14, 1024) 0 bn4d_branch2c[0][0] \n activation_31[0][0] \n__________________________________________________________________________________________________\nactivation_34 (Activation) (None, 14, 14, 1024) 0 add_11[0][0] \n__________________________________________________________________________________________________\nres4e_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_34[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4e_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_35 (Activation) (None, 14, 14, 256) 0 bn4e_branch2a[0][0] \n__________________________________________________________________________________________________\nres4e_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_35[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4e_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_36 (Activation) (None, 14, 14, 256) 0 bn4e_branch2b[0][0] 
\n__________________________________________________________________________________________________\nres4e_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_36[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4e_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_12 (Add) (None, 14, 14, 1024) 0 bn4e_branch2c[0][0] \n activation_34[0][0] \n__________________________________________________________________________________________________\nactivation_37 (Activation) (None, 14, 14, 1024) 0 add_12[0][0] \n__________________________________________________________________________________________________\nres4f_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_37[0][0] \n__________________________________________________________________________________________________\nbn4f_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4f_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_38 (Activation) (None, 14, 14, 256) 0 bn4f_branch2a[0][0] \n__________________________________________________________________________________________________\nres4f_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_38[0][0] \n__________________________________________________________________________________________________\nbn4f_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4f_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_39 (Activation) (None, 14, 14, 256) 0 bn4f_branch2b[0][0] \n__________________________________________________________________________________________________\nres4f_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_39[0][0] 
\n__________________________________________________________________________________________________\nbn4f_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4f_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_13 (Add) (None, 14, 14, 1024) 0 bn4f_branch2c[0][0] \n activation_37[0][0] \n__________________________________________________________________________________________________\nactivation_40 (Activation) (None, 14, 14, 1024) 0 add_13[0][0] \n__________________________________________________________________________________________________\nres5a_branch2a (Conv2D) (None, 7, 7, 512) 524800 activation_40[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2a (BatchNormalizati (None, 7, 7, 512) 2048 res5a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_41 (Activation) (None, 7, 7, 512) 0 bn5a_branch2a[0][0] \n__________________________________________________________________________________________________\nres5a_branch2b (Conv2D) (None, 7, 7, 512) 2359808 activation_41[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2b (BatchNormalizati (None, 7, 7, 512) 2048 res5a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_42 (Activation) (None, 7, 7, 512) 0 bn5a_branch2b[0][0] \n__________________________________________________________________________________________________\nres5a_branch2c (Conv2D) (None, 7, 7, 2048) 1050624 activation_42[0][0] \n__________________________________________________________________________________________________\nres5a_branch1 (Conv2D) (None, 7, 7, 2048) 2099200 activation_40[0][0] 
\n__________________________________________________________________________________________________\nbn5a_branch2c (BatchNormalizati (None, 7, 7, 2048) 8192 res5a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn5a_branch1 (BatchNormalizatio (None, 7, 7, 2048) 8192 res5a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_14 (Add) (None, 7, 7, 2048) 0 bn5a_branch2c[0][0] \n bn5a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_43 (Activation) (None, 7, 7, 2048) 0 add_14[0][0] \n__________________________________________________________________________________________________\nres5b_branch2a (Conv2D) (None, 7, 7, 512) 1049088 activation_43[0][0] \n__________________________________________________________________________________________________\nbn5b_branch2a (BatchNormalizati (None, 7, 7, 512) 2048 res5b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_44 (Activation) (None, 7, 7, 512) 0 bn5b_branch2a[0][0] \n__________________________________________________________________________________________________\nres5b_branch2b (Conv2D) (None, 7, 7, 512) 2359808 activation_44[0][0] \n__________________________________________________________________________________________________\nbn5b_branch2b (BatchNormalizati (None, 7, 7, 512) 2048 res5b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_45 (Activation) (None, 7, 7, 512) 0 bn5b_branch2b[0][0] \n__________________________________________________________________________________________________\nres5b_branch2c (Conv2D) (None, 7, 7, 2048) 1050624 activation_45[0][0] 
\n__________________________________________________________________________________________________\nbn5b_branch2c (BatchNormalizati (None, 7, 7, 2048) 8192 res5b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_15 (Add) (None, 7, 7, 2048) 0 bn5b_branch2c[0][0] \n activation_43[0][0] \n__________________________________________________________________________________________________\nactivation_46 (Activation) (None, 7, 7, 2048) 0 add_15[0][0] \n__________________________________________________________________________________________________\nres5c_branch2a (Conv2D) (None, 7, 7, 512) 1049088 activation_46[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2a (BatchNormalizati (None, 7, 7, 512) 2048 res5c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_47 (Activation) (None, 7, 7, 512) 0 bn5c_branch2a[0][0] \n__________________________________________________________________________________________________\nres5c_branch2b (Conv2D) (None, 7, 7, 512) 2359808 activation_47[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2b (BatchNormalizati (None, 7, 7, 512) 2048 res5c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_48 (Activation) (None, 7, 7, 512) 0 bn5c_branch2b[0][0] \n__________________________________________________________________________________________________\nres5c_branch2c (Conv2D) (None, 7, 7, 2048) 1050624 activation_48[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2c (BatchNormalizati (None, 7, 7, 2048) 8192 res5c_branch2c[0][0] 
\n__________________________________________________________________________________________________\nadd_16 (Add) (None, 7, 7, 2048) 0 bn5c_branch2c[0][0] \n activation_46[0][0] \n__________________________________________________________________________________________________\nactivation_49 (Activation) (None, 7, 7, 2048) 0 add_16[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling2d_1 (Glo (None, 2048) 0 activation_49[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 256) 524544 global_average_pooling2d_1[0][0] \n__________________________________________________________________________________________________\ndropout_1 (Dropout) (None, 256) 0 dense_1[0][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 5) 1285 dropout_1[0][0] \n==================================================================================================\nTotal params: 24,113,541\nTrainable params: 24,060,421\nNon-trainable params: 53,120\n__________________________________________________________________________________________________\n"
],
[
"adam=Adam(lr=0.00003)\nmodel_new.compile(loss='categorical_crossentropy',optimizer=adam,metrics=['accuracy'])",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:793: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3576: The name tf.log is deprecated. Please use tf.math.log instead.\n\n"
],
[
"hist=model_new.fit(x_train,y_train,shuffle=True,batch_size=16,epochs=5,validation_split=0.20)",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/math_grad.py:1424: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1033: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1020: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.\n\nTrain on 6716 samples, validate on 1679 samples\nEpoch 1/5\n6716/6716 [==============================] - 169s 25ms/step - loss: 0.9132 - acc: 0.7055 - val_loss: 0.7990 - val_acc: 0.7415\nEpoch 2/5\n6716/6716 [==============================] - 146s 22ms/step - loss: 0.7297 - acc: 0.7457 - val_loss: 0.7891 - val_acc: 0.7510\nEpoch 3/5\n6716/6716 [==============================] - 146s 22ms/step - loss: 0.5764 - acc: 0.7874 - val_loss: 0.8612 - val_acc: 0.7516\nEpoch 4/5\n6716/6716 [==============================] - 145s 22ms/step - loss: 0.4151 - acc: 0.8463 - val_loss: 1.1007 - val_acc: 0.7451\nEpoch 5/5\n6716/6716 [==============================] - 146s 22ms/step - loss: 0.2701 - acc: 0.9071 - val_loss: 1.1693 - val_acc: 0.7248\n"
],
[
"hist=model_new.fit(x_train,y_train,shuffle=True,batch_size=16,epochs=10,validation_split=0.20)\n",
"Train on 6716 samples, validate on 1679 samples\nEpoch 1/10\n6716/6716 [==============================] - 145s 22ms/step - loss: 0.1844 - acc: 0.9390 - val_loss: 1.2167 - val_acc: 0.6980\nEpoch 2/10\n6716/6716 [==============================] - 145s 22ms/step - loss: 0.1132 - acc: 0.9647 - val_loss: 1.5471 - val_acc: 0.7439\nEpoch 3/10\n6716/6716 [==============================] - 145s 22ms/step - loss: 0.0957 - acc: 0.9689 - val_loss: 1.7236 - val_acc: 0.7362\nEpoch 4/10\n6716/6716 [==============================] - 145s 22ms/step - loss: 0.0879 - acc: 0.9728 - val_loss: 1.5425 - val_acc: 0.7367\nEpoch 5/10\n6716/6716 [==============================] - 146s 22ms/step - loss: 0.0787 - acc: 0.9736 - val_loss: 1.6019 - val_acc: 0.6968\nEpoch 6/10\n6716/6716 [==============================] - 145s 22ms/step - loss: 0.0647 - acc: 0.9793 - val_loss: 1.8084 - val_acc: 0.7284\nEpoch 7/10\n6716/6716 [==============================] - 145s 22ms/step - loss: 0.0684 - acc: 0.9763 - val_loss: 1.7498 - val_acc: 0.7230\nEpoch 8/10\n6716/6716 [==============================] - 145s 22ms/step - loss: 0.0557 - acc: 0.9821 - val_loss: 1.9079 - val_acc: 0.7338\nEpoch 9/10\n6716/6716 [==============================] - 145s 22ms/step - loss: 0.0388 - acc: 0.9876 - val_loss: 2.0572 - val_acc: 0.7415\nEpoch 10/10\n6716/6716 [==============================] - 145s 22ms/step - loss: 0.0489 - acc: 0.9844 - val_loss: 1.7192 - val_acc: 0.6921\n"
],
[
"dict=image_data\ndf=pd.DataFrame(dict)\n",
"_____no_output_____"
],
[
"df.head(n=21)",
"_____no_output_____"
],
[
"# saving the data \ndf.to_csv('image_data.csv') ",
"_____no_output_____"
],
[
"\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e756152ecfd02a43a5c20c6f6757cbb1f095a1e8 | 20,909 | ipynb | Jupyter Notebook | Word_Embeddings/01_IMDB_movie_reviews_class_PosNeg.ipynb | bartosz-paternoga/ML_algos_py | c6610e8f1d705c4c3a43aa99bdcbcd5108d70889 | [
"MIT"
] | null | null | null | Word_Embeddings/01_IMDB_movie_reviews_class_PosNeg.ipynb | bartosz-paternoga/ML_algos_py | c6610e8f1d705c4c3a43aa99bdcbcd5108d70889 | [
"MIT"
] | null | null | null | Word_Embeddings/01_IMDB_movie_reviews_class_PosNeg.ipynb | bartosz-paternoga/ML_algos_py | c6610e8f1d705c4c3a43aa99bdcbcd5108d70889 | [
"MIT"
] | null | null | null | 48.625581 | 3,404 | 0.542255 | [
[
[
"from google.colab import drive\ndrive.mount('/content/gdrive')",
"Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code\n\nEnter your authorization code:\n··········\nMounted at /content/gdrive\n"
],
[
"import sys\nf = open('gdrive/My Drive/Grokking/reviews.txt')\nraw_reviews = f.readlines()\nf.close()\nf = open('gdrive/My Drive/Grokking/labels.txt')\nraw_labels = f.readlines()\nf.close()\n\ntokens = list(map(lambda x:set(x.split(\" \")),raw_reviews))\n\nvocab = set()\nfor sent in tokens:\n    for word in sent:\n        if(len(word)>0):\n            vocab.add(word)\nvocab = list(vocab)\n\nword2index = {}\nfor i,word in enumerate(vocab):\n    word2index[word]=i\n    \ninput_dataset = list()\nfor sent in tokens:\n    sent_indices = list()\n    for word in sent:\n        try:\n            sent_indices.append(word2index[word])\n        except KeyError:\n            pass  # skip words not in the vocabulary\n    input_dataset.append(list(set(sent_indices)))\n    \ntarget_dataset = list()\nfor label in raw_labels:\n    if label == 'positive\\n':\n        target_dataset.append(1)\n    else:\n        target_dataset.append(0)",
"_____no_output_____"
],
[
"print(len(raw_reviews))",
"25000\n"
],
[
"print(len(raw_labels))",
"25000\n"
],
[
"print(\"tokens len\", len(tokens),tokens[0])",
"tokens len 25000 {'', 'expect', 'comedy', 'isn', 'i', 'inspector', 'financially', 'think', 'ran', 'in', 'teaching', 'and', 'believe', 'to', 'at', 'remind', 'when', 'life', 't', 'other', 'what', 'that', 'high', 'fetched', 'schools', 'burn', 'situation', 'closer', 'tried', 'profession', 'welcome', 'scramble', 'reality', 'than', 'programs', 'immediately', 'same', 'is', 'far', 'me', 'age', 'students', 'repeatedly', 'my', 'cartoon', 'many', 'such', 'sack', 'as', 'classic', 'lead', 'here', 'their', 'pomp', 'your', 'right', 'pathetic', 'years', 'some', '\\n', 'pity', 'satire', 'bromwell', 'm', 'much', 's', 'pettiness', 'down', 'about', 'survive', 'time', 'see', 'adults', 'who', 'recalled', 'insightful', 'line', 'school', 'which', 'knew', 'all', 'saw', 'the', 'whole', 'of', 'one', '.', 'through', 'student', 'a', 'teachers', 'episode', 'can', 'it'}\n"
],
[
"print(\"len(vocab)\", len(vocab),\"\\n\",vocab[:10])",
"len(vocab) 74074 \n ['miser', 'mellifluous', 'challengers', 'opportunistic', 'melted', 'toenails', 'seaquest', 'supermarkets', 'phillips', 'uninspriring']\n"
],
[
"#print(word2index)",
"_____no_output_____"
],
[
"print(len(input_dataset),input_dataset[0:2])",
"25000 [[70658, 8713, 16406, 36388, 41514, 43056, 13369, 13882, 39488, 63555, 38467, 1091, 19014, 69193, 66124, 41554, 4692, 6256, 32370, 41082, 60026, 21127, 11402, 38541, 59551, 2724, 43176, 50856, 69290, 71851, 2224, 177, 65202, 9907, 6839, 70329, 66234, 50368, 64196, 53451, 72399, 29405, 15588, 53993, 44787, 63225, 48382, 68864, 41736, 27403, 33036, 20238, 49429, 30490, 42795, 8504, 64831, 72523, 30541, 40787, 40276, 3418, 16739, 29540, 28517, 38774, 15229, 61314, 52612, 71045, 31121, 5528, 70558, 4016, 63408, 947, 67520, 73666, 62406, 58825, 41423, 33234, 11221, 62422, 31707, 30181, 42470, 18922, 7664, 42995, 73205, 61944, 56318], [8713, 50186, 39440, 45588, 5655, 69145, 34330, 11805, 42528, 59937, 36901, 39973, 21029, 11817, 13369, 53820, 70722, 55895, 46695, 32370, 73855, 19593, 40074, 11402, 56971, 38541, 46739, 36503, 50856, 61611, 6832, 2739, 63156, 70329, 50368, 63171, 42184, 48334, 72399, 33999, 26318, 18642, 56025, 20698, 1755, 72925, 63225, 37117, 68864, 70915, 35591, 41736, 24855, 36122, 15132, 16670, 53056, 326, 30541, 60238, 23899, 28517, 54119, 39784, 67943, 48496, 38774, 21369, 30080, 71045, 43911, 32656, 52121, 2460, 57246, 70561, 22440, 14271, 67009, 36810, 44492, 41423, 16337, 60883, 5079, 6103, 46551, 30181, 49640, 73205, 24055, 3576]]\n"
],
[
"print(len(target_dataset),target_dataset[0:2])",
"25000 [1, 0]\n"
],
[
"import numpy as np\nnp.random.seed(1)\ndef sigmoid(x):\n return 1/(1 + np.exp(-x))\nalpha, iterations = (0.01, 2)\nhidden_size = 100\nweights_0_1 = 0.2*np.random.random((len(vocab),hidden_size)) - 0.1\nweights_1_2 = 0.2*np.random.random((hidden_size,1)) - 0.1\n\ncorrect,total = (0,0)\nfor iter in range(iterations):\n \n for i in range(len(input_dataset)-1000):\n x,y = (input_dataset[i],target_dataset[i])\n layer_1 = sigmoid(np.sum(weights_0_1[x],axis=0))\n layer_2 = sigmoid(np.dot(layer_1,weights_1_2))\n \n layer_2_delta = layer_2 - y\n layer_1_delta = layer_2_delta.dot(weights_1_2.T)\n \n weights_0_1[x] -= layer_1_delta * alpha\n weights_1_2 -= np.outer(layer_1,layer_2_delta) * alpha\n\n if(np.abs(layer_2_delta) < 0.5):\n correct += 1\n total += 1\n if(i % 10 == 9):\n progress = str(i/float(len(input_dataset)))\n sys.stdout.write('\\rIter:'+str(iter)\\\n +' Progress:'+progress[2:4]\\\n +'.'+progress[4:6]\\\n +'% Training Accuracy:'\\\n + str(correct/float(total)) + '%')\n print()\n \n \n\ncorrect,total = (0,0)\nfor i in range(len(input_dataset)-1000,len(input_dataset)):\n \n x = input_dataset[i]\n y = target_dataset[i]\n\n layer_1 = sigmoid(np.sum(weights_0_1[x],axis=0))\n layer_2 = sigmoid(np.dot(layer_1,weights_1_2))\n\n if(np.abs(layer_2 - y) < 0.5):\n correct += 1\n total += 1\nprint(\"Test Accuracy:\" + str(correct / float(total)))\n\n ",
"Iter:0 Progress:95.99% Training Accuracy:0.8312916666666667%\nIter:1 Progress:95.99% Training Accuracy:0.865625%\nTest Accuracy:0.846\n"
],
[
"\nfrom collections import Counter\nimport math \n\ndef similar(target='beautiful'):\n target_index = word2index[target]\n scores = Counter()\n for word,index in word2index.items():\n raw_difference = weights_0_1[index] - (weights_0_1[target_index])\n squared_difference = raw_difference * raw_difference\n scores[word] = -math.sqrt(sum(squared_difference))\n\n return scores.most_common(100)\n\nprint(\"beautiful\",word2index['beautiful'])",
"beautiful 7306\n"
],
[
"print(similar('beautiful'))",
"[('beautiful', -0.0), ('steals', -0.712444895381058), ('outstanding', -0.7228375182789654), ('touching', -0.7228499134982306), ('impact', -0.7262514738480319), ('heart', -0.729110280682466), ('liked', -0.7368892254256248), ('friendship', -0.7437019792713794), ('will', -0.746504773680074), ('fun', -0.761538132428488), ('hilarious', -0.762384794339062), ('surprisingly', -0.7629685158180615), ('finest', -0.7631667045548164), ('joan', -0.7643111387510793), ('effective', -0.7650664166273712), ('genre', -0.7685490288344919), ('entertaining', -0.769033454882479), ('subtle', -0.7707830135511047), ('tight', -0.7722575531744053), ('remember', -0.774484272891571), ('different', -0.7766618607593905), ('thank', -0.777231051583139), ('easy', -0.7773736252773664), ('superior', -0.778629744987593), ('favorites', -0.7787002885343809), ('surprised', -0.7790415108474518), ('hooked', -0.7804828088700786), ('simple', -0.7813481347116527), ('true', -0.7814343144962572), ('charlie', -0.7817568659736203), ('brilliant', -0.7838240956931073), ('superbly', -0.7858673287575206), ('powerful', -0.7863057525550153), ('unusual', -0.7903437440634934), ('extraordinary', -0.7910902063089108), ('now', -0.7922763584996265), ('pleasantly', -0.7924417359695978), ('definitely', -0.7940242053671638), ('sent', -0.7952842825247617), ('recommended', -0.7953782949925232), ('knowing', -0.7971907630589894), ('incredible', -0.7979759416420673), ('enjoy', -0.798472535297134), ('realistic', -0.7990067664950649), ('nice', -0.7990633381481539), ('funniest', -0.7991492418896865), ('awesome', -0.800863380763476), ('magic', -0.8009507137660424), ('both', -0.8017354374329521), ('sweet', -0.8028997769333308), ('bit', -0.8055637843338526), ('portrayal', -0.8079653438215842), ('solid', -0.8106457108074814), ('believable', -0.8108587961713648), ('driven', -0.8113958646234282), ('parents', -0.8114711792641327), ('brian', -0.8118360861536534), ('tragic', -0.8119890908291407), ('appropriate', -0.8123285560430725), ('certain', 
-0.8130675321909708), ('ride', -0.8132828958088422), ('cry', -0.8133926043805189), ('appreciate', -0.8134100828969201), ('negative', -0.8136346379348455), ('ways', -0.8166488871326898), ('appreciated', -0.8167284340929412), ('strong', -0.8177446130705249), ('beautifully', -0.8190927983256426), ('great', -0.8194151344219924), ('moving', -0.8202771531071721), ('check', -0.8205791797672706), ('emotions', -0.8206948708703246), ('victor', -0.8216060213260046), ('spot', -0.8234222774239689), ('refreshing', -0.8235836264400498), ('against', -0.8238276975635328), ('best', -0.8246532815460755), ('intense', -0.8254012250062959), ('paced', -0.8254140194923218), ('masterpiece', -0.8255800017148084), ('enjoyed', -0.8259428143010012), ('delightful', -0.8295900234717567), ('beauty', -0.8297755278687564), ('plenty', -0.8298799613888964), ('performances', -0.8299573698954983), ('england', -0.8305308990833786), ('moved', -0.8305979176992121), ('jack', -0.8313046283878963), ('compelling', -0.831368812506787), ('deserves', -0.8320115750479169), ('fascinating', -0.8330410757504632), ('manages', -0.8340751822690585), ('highly', -0.8353821572655116), ('worth', -0.8359198217473819), ('unexpected', -0.8360665227474792), ('change', -0.8379208634879857), ('brilliantly', -0.8379959362346742), ('soundtrack', -0.8384225991991963), ('themes', -0.8389647275359834), ('shocked', -0.8401200647540105)]\n"
],
[
"print(similar('terrible'))",
"[('terrible', -0.0), ('fails', -0.7241382840924647), ('worse', -0.7450760109639525), ('annoying', -0.7497968840051117), ('dull', -0.773634748997314), ('badly', -0.7801180652553023), ('poor', -0.7932070015624804), ('avoid', -0.8045140718423265), ('save', -0.8047983596970169), ('lacks', -0.8058310063766573), ('disappointment', -0.8195627162833875), ('bad', -0.8195731797244422), ('mess', -0.8271075180089617), ('wooden', -0.837303652287524), ('supposed', -0.8518478247786274), ('ridiculous', -0.8597581909122247), ('wasted', -0.8631804466293277), ('boring', -0.869622899568372), ('horrible', -0.8770076920756217), ('laughable', -0.8773361067977283), ('basically', -0.8929949680879719), ('redeeming', -0.8935991283513064), ('disappointed', -0.9146252854740384), ('disappointing', -0.9173211084949977), ('pointless', -0.9176431208592295), ('unfortunately', -0.9178218910196023), ('unless', -0.9232065966975099), ('lame', -0.9280689560544291), ('oh', -0.9317055500201129), ('stupid', -0.943236084091082), ('pathetic', -0.9503240150501617), ('forgettable', -0.9512469368594064), ('effort', -0.9603475893467487), ('predictable', -0.9668873959829288), ('skip', -0.9794483249913518), ('cheap', -0.9833330357737772), ('nothing', -0.9956079300140738), ('mediocre', -0.9980503569566063), ('wonder', -1.012596732723008), ('poorly', -1.0232988636391502), ('garbage', -1.0392066409433913), ('crap', -1.0401689889904049), ('hoping', -1.0506422982396904), ('script', -1.0526434989873954), ('looks', -1.0617587999116085), ('obnoxious', -1.0731204825180785), ('silly', -1.0736928484813508), ('insult', -1.0749448128626067), ('minutes', -1.0796169758984064), ('failed', -1.0817250128618334), ('miscast', -1.091666430523692), ('ugly', -1.0918748453576697), ('mildly', -1.1009621082464618), ('unfunny', -1.1060156876445966), ('couldn', -1.1088796163009738), ('lousy', -1.1133650560690846), ('slow', -1.1148571476584153), ('painful', -1.115885709598364), ('ludicrous', -1.1280903053001088), ('dreadful', 
-1.1386255711380988), ('bland', -1.143694169401774), ('weak', -1.1448587059010595), ('neither', -1.145468235031243), ('sadly', -1.1464579344856745), ('accent', -1.1516315992520425), ('supposedly', -1.157716130398514), ('embarrassing', -1.1672113721244648), ('incoherent', -1.168946646838015), ('positive', -1.1712509567440645), ('endless', -1.1717196958659752), ('turkey', -1.1732337988831405), ('victims', -1.173545883171082), ('uninteresting', -1.1751916534958926), ('attempt', -1.1760636206204969), ('tedious', -1.1761096156154303), ('confusing', -1.1788387841742214), ('reason', -1.181904665130601), ('awful', -1.186387300709272), ('obvious', -1.1868825296117005), ('unwatchable', -1.1971618093232899), ('okay', -1.1983314248263746), ('pass', -1.2007914998919489), ('interest', -1.2013406300536427), ('talents', -1.213847901628752), ('scott', -1.2170817675259062), ('nowhere', -1.2180725761138134), ('alright', -1.219739417021904), ('unrealistic', -1.2214064786008798), ('guess', -1.2216232615378753), ('walked', -1.2218399904206527), ('bizarre', -1.2220905657135877), ('paid', -1.2226227070326336), ('none', -1.2229483326638741), ('wasn', -1.2230129716150149), ('premise', -1.2236103717252838), ('christmas', -1.2239893597303837), ('seconds', -1.224048047500012), ('unconvincing', -1.224241820969355), ('sucks', -1.2251801295678415), ('embarrassed', -1.2263168108130267)]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7562d7ea107061217b84fbfd6eb31c7db8bd17e | 20,024 | ipynb | Jupyter Notebook | lite/codelabs/digit_classifier/ml/step2_train_ml_model.ipynb | WilliamHYZhang/examples | f32f2b3c0cfa866b3c0b0c00b02e325c9c0e8d31 | [
"Apache-2.0"
] | 2 | 2020-09-27T16:51:58.000Z | 2020-10-22T06:16:30.000Z | lite/codelabs/digit_classifier/ml/step2_train_ml_model.ipynb | WilliamHYZhang/examples | f32f2b3c0cfa866b3c0b0c00b02e325c9c0e8d31 | [
"Apache-2.0"
] | 7 | 2021-03-19T15:39:52.000Z | 2022-03-12T00:52:01.000Z | lite/codelabs/digit_classifier/ml/step2_train_ml_model.ipynb | WilliamHYZhang/examples | f32f2b3c0cfa866b3c0b0c00b02e325c9c0e8d31 | [
"Apache-2.0"
] | 3 | 2019-12-11T18:56:32.000Z | 2019-12-12T15:39:07.000Z | 34.885017 | 376 | 0.544247 | [
[
[
"##### Copyright 2019 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Step 2: Train a machine learning model",
"_____no_output_____"
],
[
"This is the notebook for step 2 of the codelab [**Build a handwritten digit classifier app with TensorFlow Lite**](https://codelabs.developers.google.com/codelabs/digit-classifier-tflite/).",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/examples/blob/master/lite/codelabs/digit_classifier/ml/step2_train_ml_model.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/examples/blob/master/lite/codelabs/digit_classifier/ml/step2_train_ml_model.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"## Import dependencies",
"_____no_output_____"
],
[
"We start by importing TensorFlow and other supporting libraries that are used for data processing and visualization.",
"_____no_output_____"
]
],
[
[
"# Enable TensorFlow 2\ntry:\n # %tensorflow_version only exists in Colab.\n %tensorflow_version 2.x\nexcept Exception:\n pass\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\n# TensorFlow and tf.keras\nimport tensorflow as tf\nfrom tensorflow import keras\n\n# Helper libraries\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport random\n\nprint(tf.__version__)",
"_____no_output_____"
]
],
[
[
"## Download and explore the MNIST dataset\nThe MNIST database contains 60,000 training images and 10,000 testing images of handwritten digits. We will use the dataset to train our digit classification model.\n\nEach image in the MNIST dataset is a 28x28 grayscale image containing a digit from 0 to 9, and a label identifying which digit is in the image.\n",
"_____no_output_____"
]
],
[
[
"# Keras provides a handy API to download the MNIST dataset, and split them into\n# \"train\" dataset and \"test\" dataset.\nmnist = keras.datasets.mnist\n(train_images, train_labels), (test_images, test_labels) = mnist.load_data()",
"_____no_output_____"
],
[
"# Normalize the input image so that each pixel value is between 0 to 1.\ntrain_images = train_images / 255.0\ntest_images = test_images / 255.0\nprint('Pixels are normalized')",
"_____no_output_____"
],
[
"# Show the first 25 images in the training dataset.\nplt.figure(figsize=(10,10))\nfor i in range(25):\n plt.subplot(5,5,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(train_images[i], cmap=plt.cm.gray)\n plt.xlabel(train_labels[i])\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Train a TensorFlow model to classify digit images\n\nNext, we use Keras API to build a TensorFlow model and train it on the MNIST \"train\" dataset. After training, our model will be able to classify the digit images.\n\nOur model takes **a 28px x 28px grayscale image** as an input, and outputs **a float array of length 10** representing the probability of the image being a digit from 0 to 9.\n\nHere we use a simple convolutional neural network, which is a common technique in computer vision. We will not go into details about model architecture in this codelab. If you want have a deeper understanding about different ML model architectures, please consider taking our free [TensorFlow training course](https://www.coursera.org/learn/introduction-tensorflow).",
"_____no_output_____"
]
],
[
[
"# Define the model architecture\nmodel = keras.Sequential([\n keras.layers.InputLayer(input_shape=(28, 28)),\n keras.layers.Reshape(target_shape=(28, 28, 1)),\n keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),\n keras.layers.MaxPooling2D(pool_size=(2, 2)),\n keras.layers.Flatten(),\n keras.layers.Dense(10, activation=tf.nn.softmax)\n])\n\n# Define how to train the model\nmodel.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\n# Train the digit classification model\nmodel.fit(train_images, train_labels, epochs=5)",
"_____no_output_____"
]
],
[
[
"Let's take a closer look at our model structure.",
"_____no_output_____"
]
],
[
[
"model.summary()",
"_____no_output_____"
]
],
[
[
"There is an extra dimension with **None** shape in every layer in our model, which is called the **batch dimension**. In machine learning, we usually process data in batches to improve throughput, so TensorFlow automatically add the dimension for you.",
"_____no_output_____"
],
[
"## Evaluate our model\nWe run our digit classification model against our \"test\" dataset that the model has not seen during its training process to confirm that the model did not just remember the digits it saw but also generalize well to new images.",
"_____no_output_____"
]
],
[
[
"# Evaluate the model using all images in the test dataset.\ntest_loss, test_acc = model.evaluate(test_images, test_labels)\n\nprint('Test accuracy:', test_acc)",
"_____no_output_____"
]
],
[
[
"Although our model is relatively simple, we were able to achieve good accuracy around 98% on new images that the model has never seen before. Let's visualize the result.",
"_____no_output_____"
]
],
[
[
"# A helper function that returns 'red'/'black' depending on if its two input\n# parameter matches or not.\ndef get_label_color(val1, val2):\n if val1 == val2:\n return 'black'\n else:\n return 'red'\n\n# Predict the labels of digit images in our test dataset.\npredictions = model.predict(test_images)\n\n# As the model output 10 float representing the probability of the input image\n# being a digit from 0 to 9, we need to find the largest probability value\n# to find out which digit the model predicts to be most likely in the image.\nprediction_digits = np.argmax(predictions, axis=1)\n\n# Then plot 100 random test images and their predicted labels.\n# If a prediction result is different from the label provided label in \"test\"\n# dataset, we will highlight it in red color.\nplt.figure(figsize=(18, 18))\nfor i in range(100):\n ax = plt.subplot(10, 10, i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n image_index = random.randint(0, len(prediction_digits))\n plt.imshow(test_images[image_index], cmap=plt.cm.gray)\n ax.xaxis.label.set_color(get_label_color(prediction_digits[image_index],\\\n test_labels[image_index]))\n plt.xlabel('Predicted: %d' % prediction_digits[image_index])\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Convert the Keras model to TensorFlow Lite",
"_____no_output_____"
],
[
"Now as we have trained the digit classifer model, we will convert it to TensorFlow Lite format for mobile deployment.",
"_____no_output_____"
]
],
[
[
"# Convert Keras model to TF Lite format.\nconverter = tf.lite.TFLiteConverter.from_keras_model(model)\ntflite_float_model = converter.convert()\n\n# Show model size in KBs.\nfloat_model_size = len(tflite_float_model) / 1024\nprint('Float model size = %dKBs.' % float_model_size)",
"_____no_output_____"
]
],
[
[
"As we will deploy our model to a mobile device, we want our model to be as small and as fast as possible. **Quantization** is a common technique often used in on-device machine learning to shrink ML models. Here we will use 8-bit number to approximate our 32-bit weights, which in turn shrinks the model size by a factor of 4.\n\nSee [TensorFlow documentation](https://www.tensorflow.org/lite/performance/post_training_quantization) to learn more about other quantization techniques.",
"_____no_output_____"
]
],
[
[
"# Re-convert the model to TF Lite using quantization.\nconverter.optimizations = [tf.lite.Optimize.DEFAULT]\ntflite_quantized_model = converter.convert()\n\n# Show model size in KBs.\nquantized_model_size = len(tflite_quantized_model) / 1024\nprint('Quantized model size = %dKBs,' % quantized_model_size)\nprint('which is about %d%% of the float model size.'\\\n % (quantized_model_size * 100 / float_model_size))",
"_____no_output_____"
]
],
[
[
"## Evaluate the TensorFlow Lite model\n\nBy using quantization, we often traded off a bit of accuracy for the benefit of having a significantly smaller model. Let's calculate the accuracy drop of our quantized model.",
"_____no_output_____"
]
],
[
[
"# A helper function to evaluate the TF Lite model using \"test\" dataset.\ndef evaluate_tflite_model(tflite_model):\n # Initialize TFLite interpreter using the model.\n interpreter = tf.lite.Interpreter(model_content=tflite_model)\n interpreter.allocate_tensors()\n input_tensor_index = interpreter.get_input_details()[0][\"index\"]\n output = interpreter.tensor(interpreter.get_output_details()[0][\"index\"])\n\n # Run predictions on every image in the \"test\" dataset.\n prediction_digits = []\n for test_image in test_images:\n # Pre-processing: add batch dimension and convert to float32 to match with\n # the model's input data format.\n test_image = np.expand_dims(test_image, axis=0).astype(np.float32)\n interpreter.set_tensor(input_tensor_index, test_image)\n\n # Run inference.\n interpreter.invoke()\n\n # Post-processing: remove batch dimension and find the digit with highest\n # probability.\n digit = np.argmax(output()[0])\n prediction_digits.append(digit)\n\n # Compare prediction results with ground truth labels to calculate accuracy.\n accurate_count = 0\n for index in range(len(prediction_digits)):\n if prediction_digits[index] == test_labels[index]:\n accurate_count += 1\n accuracy = accurate_count * 1.0 / len(prediction_digits)\n\n return accuracy\n\n# Evaluate the TF Lite float model. You'll find that its accurary is identical\n# to the original TF (Keras) model because they are essentially the same model\n# stored in different format.\nfloat_accuracy = evaluate_tflite_model(tflite_float_model)\nprint('Float model accuracy = %.4f' % float_accuracy)\n\n# Evalualte the TF Lite quantized model.\n# Don't be surprised if you see quantized model accuracy is higher than\n# the original float model. It happens sometimes :)\nquantized_accuracy = evaluate_tflite_model(tflite_quantized_model)\nprint('Quantized model accuracy = %.4f' % quantized_accuracy)\nprint('Accuracy drop = %.4f' % (float_accuracy - quantized_accuracy))\n",
"_____no_output_____"
]
],
[
[
"## Download the TensorFlow Lite model\n\nLet's get our model and integrate it into an Android app.\n\nIf you see an error when downloading mnist.tflite from Colab, try running this cell again.",
"_____no_output_____"
]
],
[
[
"# Save the quantized model to file to the Downloads directory\nf = open('mnist.tflite', \"wb\")\nf.write(tflite_quantized_model)\nf.close()\n\n# Download the digit classification model\nfrom google.colab import files\nfiles.download('mnist.tflite')\n\nprint('`mnist.tflite` has been downloaded')",
"_____no_output_____"
]
],
[
[
"## Good job!\nThis is the end of *Step 2: Train a machine learning model* in the codelab **Build a handwritten digit classifier app with TensorFlow Lite**. Let's go back to our codelab and proceed to the [next step](https://codelabs.developers.google.com/codelabs/digit-classifier-tflite/#2).",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e75631bdbfce7befaa55338330e9c47017fddc93 | 6,357 | ipynb | Jupyter Notebook | CaamanitoNoteBook.ipynb | sergiojim96/JupyterRepo | 3fe7408ee586eac6b646f41638875ec497bece54 | [
"Apache-2.0"
] | null | null | null | CaamanitoNoteBook.ipynb | sergiojim96/JupyterRepo | 3fe7408ee586eac6b646f41638875ec497bece54 | [
"Apache-2.0"
] | null | null | null | CaamanitoNoteBook.ipynb | sergiojim96/JupyterRepo | 3fe7408ee586eac6b646f41638875ec497bece54 | [
"Apache-2.0"
] | null | null | null | 31.626866 | 1,079 | 0.57826 | [
[
[
"import requests\nfrom bs4 import BeautifulSoup",
"_____no_output_____"
],
[
"req = requests.get(\"https://www.lateja.cr/deportes/orlando-sinclair-seguira-vestido-de-morado-por-dos/GXGL6WZWDJHDVJ2JDRPI2INS6I/story/\").content\nsoup = BeautifulSoup(req, 'lxml')\narticle = soup.find('article') ## not a real thing",
"_____no_output_____"
],
[
"title = soup.find('div', class_='headline-hed-last').text\ntitle",
"_____no_output_____"
]
],
[
[
"author = soup.find('span', class_='author').text\nauthor",
"_____no_output_____"
],
[
"## Ejercicio discutido en la reunion anterior ",
"_____no_output_____"
]
],
[
[
    "# Objetivo del ejercicio 2, limpiar el corpus del articulo lo mas posible, \n# por fa Utilicen otro articulo de pagina web/servidor web, es decir la misma de la teja.\n\nfrom nltk import tokenize\nfrom nltk import word_tokenize\n\nCorpus = soup.find('div', id=\"article-content\").get_text()\ntok_corp = word_tokenize(Corpus)\ntok_corp",
"_____no_output_____"
]
],
[
[
"## Codigo Extra\n",
"_____no_output_____"
]
],
[
[
"from nltk.tokenize import sent_tokenize, word_tokenize #basicas\nimport pickle #Importante\nimport pandas as pd #Importante\nfrom textblob import TextBlob #Importante lo vemos luego\nimport gensim #Importante lo vemos luego\nimport nltk #basicas\nimport re #basicas\nimport string #basicas\n\n#Otras\nfrom sqlalchemy import create_engine\nfrom collections import OrderedDict \nfrom spellchecker import SpellChecker\nfrom autocorrect import Speller\nimport pixiedust\n\n#Las bibliotecas \"ModuleNotFoundError\" hay que instalarlas.",
"_____no_output_____"
],
[
"#Codigo no correo son solo sentencias de ejemplo y todas estaba dentro de un for\n#de algunas de las articulos y otras de los tokens de las paalabra\n\ni=i.replace('\\n', ' ').replace('\\r', '') #for de articluos\ni=sent_tokenize(i.lower()) #for de articulos\nif re.search(\"[\\u4e00-\\u9FFF]\", w) #for de tokens\ntxt+=\" \"+w.lower().encode().decode('utf-8').replace(\".com\", '').strip() #for de tokens\n\n",
"_____no_output_____"
],
[
    "def isEnglish(s): # yo diria en realidad ifisLatin\n    try:\n        if '’' in s or \"“\" in s or \"-\" in s or \"‘\" in s:\n            return True\n        s.encode(encoding='utf-8').decode('ascii')\n    except UnicodeDecodeError:\n        return False\n    else:\n        return True",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e756401cc3812c698db1dc846a760f7722701dd2 | 54,721 | ipynb | Jupyter Notebook | NeuralNetworksFromScratch/MNIST-DigitClassification.ipynb | GauthamPrabhuM/hacktoberfest2020 | 01662e344d4a19795e0cfa7b5b69cf5c7aa69a20 | [
"MIT"
] | null | null | null | NeuralNetworksFromScratch/MNIST-DigitClassification.ipynb | GauthamPrabhuM/hacktoberfest2020 | 01662e344d4a19795e0cfa7b5b69cf5c7aa69a20 | [
"MIT"
] | null | null | null | NeuralNetworksFromScratch/MNIST-DigitClassification.ipynb | GauthamPrabhuM/hacktoberfest2020 | 01662e344d4a19795e0cfa7b5b69cf5c7aa69a20 | [
"MIT"
] | null | null | null | 108.144269 | 20,728 | 0.834744 | [
[
[
"import numpy as np\nimport pandas as pd \nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"train_data = pd.read_csv('mnist_train.csv')\ntest_data = pd.read_csv(\"mnist_test.csv\")",
"_____no_output_____"
],
[
"y_train = train_data['label'].values\nX_train = train_data.drop(columns=['label']).values/255\nX_test = test_data.values/255",
"_____no_output_____"
],
[
"fig, axes = plt.subplots(2,5, figsize=(12,5))\naxes = axes.flatten()\nidx = np.random.randint(0,42000,size=10)\nfor i in range(10):\n axes[i].imshow(X_train[idx[i],:].reshape(28,28), cmap='gray')\n axes[i].set_title(str(int(y_train[idx[i]])), color= 'black', fontsize=25)\nplt.show()",
"_____no_output_____"
],
[
"def relu(x):\n x[x<0]=0\n return x\n",
"_____no_output_____"
],
[
"def h(X,W,b):\n '''\n Hypothesis function: simple FNN with 1 hidden layer\n Layer 1: input\n Layer 2: hidden layer, with a size implied by the arguments W[0], b\n Layer 3: output layer, with a size implied by the arguments W[1]\n '''\n # layer 1 = input layer\n a1 = X\n # layer 1 (input layer) -> layer 2 (hidden layer)\n z1 = np.matmul(X, W[0]) + b[0]\n \n # add one more layer\n \n # layer 2 activation\n a2 = relu(z1)\n # layer 2 (hidden layer) -> layer 3 (output layer)\n z2 = np.matmul(a2, W[1])\n s = np.exp(z2)\n total = np.sum(s, axis=1).reshape(-1,1)\n sigma = s/total\n # the output is a probability for each sample\n return sigma",
"_____no_output_____"
],
[
"def softmax(X_in,weights):\n '''\n Un-used cell for demo\n activation function for the last FC layer: softmax function \n Output: K probabilities represent an estimate of P(y=k|X_in;weights) for k=1,...,K\n the weights has shape (n, K)\n n: the number of features X_in has\n n = X_in.shape[1]\n K: the number of classes\n K = 10\n '''\n \n s = np.exp(np.matmul(X_in,weights))\n total = np.sum(s, axis=1).reshape(-1,1)\n return s / total",
"_____no_output_____"
],
[
"def loss(y_pred,y_true):\n '''\n Loss function: cross entropy with an L^2 regularization\n y_true: ground truth, of shape (N, )\n y_pred: prediction made by the model, of shape (N, K) \n N: number of samples in the batch\n K: global variable, number of classes\n '''\n global K \n K = 10\n N = len(y_true)\n # loss_sample stores the cross entropy for each sample in X\n # convert y_true from labels to one-hot-vector encoding\n y_true_one_hot_vec = (y_true[:,np.newaxis] == np.arange(K))\n loss_sample = (np.log(y_pred) * y_true_one_hot_vec).sum(axis=1)\n # loss_sample is a dimension (N,) array\n # for the final loss, we need take the average\n return -np.mean(loss_sample)",
"_____no_output_____"
],
[
"def backprop(W,b,X,y,alpha=1e-4):\n '''\n Step 1: explicit forward pass h(X;W,b)\n Step 2: backpropagation for dW and db\n '''\n K = 10\n N = X.shape[0]\n \n ### Step 1:\n # layer 1 = input layer\n a1 = X\n # layer 1 (input layer) -> layer 2 (hidden layer)\n z1 = np.matmul(X, W[0]) + b[0]\n # layer 2 activation\n a2 = relu(z1)\n \n # one more layer\n \n # layer 2 (hidden layer) -> layer 3 (output layer)\n z2 = np.matmul(a2, W[1])\n s = np.exp(z2)\n total = np.sum(s, axis=1).reshape(-1,1)\n sigma = s/total\n \n ### Step 2:\n \n # layer 2->layer 3 weights' derivative\n # delta2 is \\partial L/partial z2, of shape (N,K)\n y_one_hot_vec = (y[:,np.newaxis] == np.arange(K))\n delta2 = (sigma - y_one_hot_vec)\n grad_W1 = np.matmul(a2.T, delta2)\n \n # layer 1->layer 2 weights' derivative\n # delta1 is \\partial a2/partial z1\n # layer 2 activation's (weak) derivative is 1*(z1>0)\n delta1 = np.matmul(delta2, W[1].T)*(z1>0)\n grad_W0 = np.matmul(X.T, delta1)\n \n # Student project: extra layer of derivative\n \n # no derivative for layer 1\n \n # the alpha part is the derivative for the regularization\n # regularization = 0.5*alpha*(np.sum(W[1]**2) + np.sum(W[0]**2))\n \n \n dW = [grad_W0/N + alpha*W[0], grad_W1/N + alpha*W[1]]\n db = [np.mean(delta1, axis=0)]\n # dW[0] is W[0]'s derivative, and dW[1] is W[1]'s derivative; similar for db\n return dW, db",
"_____no_output_____"
],
[
"eta = 5e-1\nalpha = 1e-6 \ngamma = 0.99 # RMSprop\neps = 1e-3 # RMSprop\nnum_iter = 1500 # number of iterations of gradient descent\nn_H = 256 # number of neurons in the hidden layer\nn = X_train.shape[1] # number of pixels in an image\nK = 10",
"_____no_output_____"
],
[
"np.random.seed(1127)\nW = [1e-1*np.random.randn(n, n_H), 1e-1*np.random.randn(n_H, K)]\nb = [np.random.randn(n_H)]",
"_____no_output_____"
],
[
"gW0 = gW1 = gb0 = 1\ntraining_loss=[]\n\nfor i in range(num_iter):\n dW, db = backprop(W,b,X_train,y_train,alpha)\n \n gW0 = gamma*gW0 + (1-gamma)*np.sum(dW[0]**2)\n etaW0 = eta/np.sqrt(gW0 + eps)\n W[0] -= etaW0 * dW[0]\n \n gW1 = gamma*gW1 + (1-gamma)*np.sum(dW[1]**2)\n etaW1 = eta/np.sqrt(gW1 + eps)\n W[1] -= etaW1 * dW[1]\n \n gb0 = gamma*gb0 + (1-gamma)*np.sum(db[0]**2)\n etab0 = eta/np.sqrt(gb0 + eps)\n b[0] -= etab0 * db[0]\n \n if i % 500 == 0:\n y_pred = h(X_train,W,b)\n print(\"Cross-entropy loss after\", i+1, \"iterations is {:.8}\".format(\n loss(y_pred,y_train)))\n print(\"Training accuracy after\", i+1, \"iterations is {:.4%}\".format( \n np.mean(np.argmax(y_pred, axis=1)== y_train)))\n \n training_loss.append(loss(y_pred,y_train))\n gW0 = gW1 = gb0 = 1\n\ny_pred_final = h(X_train,W,b)\nprint(\"Final cross-entropy loss is {:.8}\".format(loss(y_pred_final,y_train)))\nprint(\"Final training accuracy is {:.4%}\".format(np.mean(np.argmax(y_pred_final, axis=1)== y_train)))",
"Cross-entropy loss after 1 iterations is 7.3206936\nTraining accuracy after 1 iterations is 36.9600%\nCross-entropy loss after 501 iterations is 0.080828945\nTraining accuracy after 501 iterations is 97.6933%\nCross-entropy loss after 1001 iterations is 0.064808351\nTraining accuracy after 1001 iterations is 98.1533%\nFinal cross-entropy loss is 0.031102296\nFinal training accuracy is 99.1817%\n"
],
[
"y_pred_final",
"_____no_output_____"
],
[
"y_train",
"_____no_output_____"
],
[
"X_train",
"_____no_output_____"
],
[
"X_test",
"_____no_output_____"
],
[
"y_train",
"_____no_output_____"
],
[
"plt.figure(figsize=(15,5))\nplt.plot(np.arange(0, len(training_loss)), training_loss)\nplt.title('Cross-Entropy Loss After Each Batch')\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75648c3519834012220a68fa57e743656d2e932 | 514,558 | ipynb | Jupyter Notebook | python3.ipynb | mat-esp-2016/python-3-amanda_joaovictor_rian | e92208bc6c1d69071829c48b4e75af6fdee367ab | [
"CC-BY-4.0"
] | null | null | null | python3.ipynb | mat-esp-2016/python-3-amanda_joaovictor_rian | e92208bc6c1d69071829c48b4e75af6fdee367ab | [
"CC-BY-4.0"
] | null | null | null | python3.ipynb | mat-esp-2016/python-3-amanda_joaovictor_rian | e92208bc6c1d69071829c48b4e75af6fdee367ab | [
"CC-BY-4.0"
] | null | null | null | 1,151.136465 | 44,474 | 0.946113 | [
[
[
"# Bibliotecas:",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"import matplotlib.patches as mpatches",
"_____no_output_____"
],
[
"%matplotlib inline",
"_____no_output_____"
],
[
"import glob",
"_____no_output_____"
]
],
[
[
"# Fazer e testar uma função que recebe como entrada um array de anos e um de meses e retorna um array de anos decimais.",
"_____no_output_____"
]
],
[
[
"dados = np.loadtxt (arquivo, comments = '%')",
"_____no_output_____"
],
[
"anos = dados [:, 0]\nmeses = dados[:, 1]",
"_____no_output_____"
],
[
    "def anos_para_ano_decimal(anos , meses):\n    assert type(anos) == np.ndarray, \"Anos precisa ser um array\"\n    assert type(meses) == np.ndarray, \"Meses precisa ser um array\"\n    ano_em_decimal = ((meses-1)/12 + anos)\n    return (ano_em_decimal)",
"_____no_output_____"
]
],
[
[
"### Incorrect test:",
"_____no_output_____"
]
],
[
[
"anos_para_ano_decimal(1,3) ",
"_____no_output_____"
]
],
[
[
"### Correct test:",
"_____no_output_____"
]
],
[
[
"anos_para_ano_decimal(anos, meses)",
"_____no_output_____"
]
],
[
[
"# Write and test a function that takes a matrix (2d array) of temperature data as input and returns the decimal years, the annual anomaly, the 10-year anomaly, and its uncertainty.",
"_____no_output_____"
]
],
[
[
"def temp_para_outros(dados):\n # use the matrix's own year and month columns instead of relying on globals\n ano_em_decimal_1 = anos_para_ano_decimal (dados [:, 0], dados [:, 1])\n anomalia_anual_1 = dados [:, 4]\n anomalia_10_anos_1 = dados [:, 8] \n unc_a10 = dados [:, 9]\n return (ano_em_decimal_1 , anomalia_anual_1 , anomalia_10_anos_1 , unc_a10)\n ",
"_____no_output_____"
],
[
"temp_para_outros (dados)",
"_____no_output_____"
]
],
[
[
"# Using the functions created above to repeat the task from the Python 2 practice.",
"_____no_output_____"
]
],
[
[
"arquivos = glob.glob(\"dados/*W-TAVG-Trend.txt\")",
"_____no_output_____"
],
[
"for arquivo in arquivos:\n print(arquivo)",
"dados\\0.80S-49.02W-TAVG-Trend.txt\ndados\\10.45S-48.27W-TAVG-Trend.txt\ndados\\13.66S-38.81W-TAVG-Trend.txt\ndados\\15.27S-47.50W-TAVG-Trend.txt\ndados\\2.41S-60.27W-TAVG-Trend.txt\ndados\\20.09S-44.36W-TAVG-Trend.txt\ndados\\20.09S-54.60W-TAVG-Trend.txt\ndados\\23.31S-42.82W-TAVG-Trend.txt\ndados\\23.31S-46.31W-TAVG-Trend.txt\ndados\\24.92S-49.66W-TAVG-Trend.txt\ndados\\29.74S-51.69W-TAVG-Trend.txt\ndados\\4.02S-40.98W-TAVG-Trend.txt\n"
],
[
"for arquivo in arquivos:\n dados = np.loadtxt (arquivo, comments='%')\n ano_em_decimal = anos_para_ano_decimal (anos = dados [:, 0], meses = dados [:, 1])\n anomalia_anual = dados [:, 4] \n anomalia_10_anos = dados[:, 8]\n anos = dados [:, 0]\n \n plt.figure()\n plt.plot (ano_em_decimal, anomalia_anual, c = \"#000000\")\n plt.plot (ano_em_decimal, anomalia_10_anos, c = '#ff0000')\n plt.xlabel (\"years\")\n plt.ylabel (\"Temperature in °C\")\n plt.title(\"City\")\n plt.fill_between (ano_em_decimal, anomalia_anual + 1, anomalia_anual - 1, color = \"#C0C0C0\")\n plt.savefig(\"Grafico_das_cidades/\" + arquivo [7:-3] + 'png', format = 'png')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
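The python3.ipynb record above converts (year, month) pairs to decimal years with `anos_para_ano_decimal`. As a minimal standalone sketch of that conversion (plain Python outside the notebook dump; the English function name is mine, only the formula mirrors the notebook):

```python
import numpy as np

def years_to_decimal(years, months):
    """Convert arrays of years and months (1-12) to decimal years.

    January maps to the start of the year, so month m contributes
    (m - 1) / 12 of a year.
    """
    years = np.asarray(years, dtype=float)
    months = np.asarray(months, dtype=float)
    return years + (months - 1) / 12.0

# July 2006 lands exactly half-way through the year.
print(years_to_decimal([2005, 2006], [1, 7]).tolist())  # [2005.0, 2006.5]
```

Dividing by 12 (number of months) rather than 10 keeps consecutive Decembers and Januaries adjacent on the time axis, which matters when these values are used as x-coordinates for plotting.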
e756513636a0804933a433f9ac3e8f2fcdbbfef8 | 279,646 | ipynb | Jupyter Notebook | notebooks/welter_issue035-03_postage_stamps_of_spectral_lines_VI.ipynb | BrownDwarf/welter | 261c12268d76b64c3b1085c65db749ed987ef25f | [
"MIT"
] | 5 | 2017-12-19T17:01:17.000Z | 2020-12-11T04:33:20.000Z | notebooks/welter_issue035-03_postage_stamps_of_spectral_lines_VI.ipynb | BrownDwarf/welter | 261c12268d76b64c3b1085c65db749ed987ef25f | [
"MIT"
] | null | null | null | notebooks/welter_issue035-03_postage_stamps_of_spectral_lines_VI.ipynb | BrownDwarf/welter | 261c12268d76b64c3b1085c65db749ed987ef25f | [
"MIT"
] | null | null | null | 659.542453 | 92,162 | 0.938376 | [
[
[
"# welter\n## Issue 35: Figure of postage stamps of spectral features\n### Part I: Try it out",
"_____no_output_____"
]
],
[
[
"import os\nimport json\nimport numpy as np",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\n%config InlineBackend.figure_format = 'retina'",
"_____no_output_____"
]
],
[
[
"#### Need to re-run these before making each plot.",
"_____no_output_____"
]
],
[
[
"ws = np.load(\"../sf/m087/output/mix_emcee/run01/emcee_chain.npy\")\n\nburned = ws[:, -200:,:]\nxs, ys, zs = burned.shape\nfc = burned.reshape(xs*ys, zs)\n\nff = 10**fc[:, 7]/(10**fc[:, 7]+10**fc[:, 5])\n\ninds_sorted = np.argsort(ff)\nff_sorted = ff[inds_sorted]\nfc_sorted = fc[inds_sorted]",
"_____no_output_____"
]
],
[
[
"#### Double check correlation plots as a sanity check for trends in $T_{\\mathrm{eff}}$ and $f_\\Omega$",
"_____no_output_____"
]
],
[
[
"sns.distplot(ff)",
"//anaconda/lib/python3.4/site-packages/statsmodels/nonparametric/kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future\n y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j\n"
],
[
"plt.plot(fc_sorted[:,0])\nplt.plot(fc_sorted[:,6])",
"_____no_output_____"
],
[
"plt.plot(fc_sorted[:,5])\nplt.plot(fc_sorted[:,7])",
"_____no_output_____"
],
[
"#ax = sns.kdeplot(ff_sorted, fc_sorted[:,0], shade=True)\n#ax.plot(ff_sorted[400], fc_sorted[400,0], 'b*', ms=13)\n#ax.plot(ff_sorted[4000], fc_sorted[4000,0], 'k*', ms=13)\n#ax.plot(ff_sorted[7600], fc_sorted[7600,0], 'r*', ms=13)",
"_____no_output_____"
]
],
[
[
"### Generate the data using the new `plot_specific_mix_model.py`",
"_____no_output_____"
],
[
"This custom Starfish python script generates model spectra at 5, 50, and 95 percentiles of fill factor, and then saves them to a csv file named `models_ff-05_50_95.csv`.",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"models = pd.read_csv('/Users/gully/GitHub/welter/sf/m087/output/mix_emcee/run01/models_ff-05_50_95.csv')",
"_____no_output_____"
],
[
"#models.head()",
"_____no_output_____"
]
],
[
[
"### This is a complex Matplotlib layout",
"_____no_output_____"
]
],
[
[
"lw =1.0",
"_____no_output_____"
],
[
"from matplotlib import gridspec\nfrom matplotlib.ticker import FuncFormatter\nfrom matplotlib.ticker import ScalarFormatter",
"_____no_output_____"
],
[
"sns.set_context('paper')\nsns.set_style('ticks')\nsns.set_color_codes()",
"_____no_output_____"
]
],
[
[
"### New version has no right panel",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(6.0, 3.0))\n\n#fig.subplots_adjust(left=0.05, right=0.95, wspace=0.05)\n\nlc = 20430\nwlb = 20430\nwlr = 20470\nfig = plt.figure(figsize=(6.0, 3.0))\n\ngs = gridspec.GridSpec(1, 3)\n\nax1 = fig.add_subplot(gs[0,0])\nax1.step(models.wl-lc, models.data.values, '-k', alpha=0.3)\nax1.plot(models.wl-lc, models.model_comp05.values, color='#AA00AA', linewidth=lw, alpha=0.7)\nax1.plot(models.wl-lc, models.model_cool05, color='r', linewidth=lw*2, label='Cool')\nax1.plot(models.wl-lc, models.model_hot05, color='b', linewidth=lw, label='Hot')\nax1.set_title('$-2\\sigma$ \\n $f_\\mathrm{cool}$'+' = {:0.1%} \\n'.format(ff_sorted[400])+\n '$T_\\mathrm{hot}$ = '+'{} \\n'.format(np.int(fc_sorted[400,0]))+\n '$T_\\mathrm{cool}$ = '+'{}'.format(np.int(fc_sorted[400,6])))\n\nax1.set_ylabel('flux density')\nax1.set_xlim(wlb-lc,wlr-lc)\nax1.set_ylim(0, 0.6)\nplt.legend(loc='lower left')\n\nax2 = fig.add_subplot(gs[0,1])\nax2.step(models.wl-lc, models.data, '-k', alpha=0.3)\nax2.plot(models.wl-lc, models.model_comp50, color='#AA00AA', linewidth=lw, alpha=0.7)\nax2.plot(models.wl-lc, models.model_cool50, color='r', linewidth=lw*2)\nax2.plot(models.wl-lc, models.model_hot50, color='b', linewidth=lw)\nax2.yaxis.set_major_formatter(plt.NullFormatter())\nax2.set_title('median fit: \\n $f_\\mathrm{cool}$'+' = {:0.1%} \\n'.format(ff_sorted[4000])+\n '$T_\\mathrm{hot}$ = '+'{} \\n'.format(np.int(fc_sorted[4000,0]))+\n '$T_\\mathrm{cool}$ = '+'{}'.format(np.int(fc_sorted[4000,6])))\n\n\nax2.set_xlim(wlb-lc,wlr-lc)\nax2.set_xlabel('$\\lambda - {} \\;\\AA$ '.format(lc))\nax2.set_ylim(0, 0.6)\n\nax3 = fig.add_subplot(gs[0,2])\nax3.step(models.wl-lc, models.data, '-k', alpha=0.3)\nax3.plot(models.wl-lc, models.model_comp95, color='#AA00AA', linewidth=lw, alpha=0.7)\nax3.plot(models.wl-lc, models.model_cool95, color='r', linewidth=lw*2)\nax3.plot(models.wl-lc, models.model_hot95, color='b', 
linewidth=lw)\nax3.yaxis.set_major_formatter(plt.NullFormatter())\nax3.set_title('$+2\\sigma$ \\n $f_\\mathrm{cool}$'+' = {:0.1%} \\n'.format(ff_sorted[7600])+\n '$T_\\mathrm{hot}$ = '+'{} \\n'.format(np.int(fc_sorted[7600,0]))+\n '$T_\\mathrm{cool}$ = '+'{}'.format(np.int(fc_sorted[7600,6])))\nax3.set_ylim(0, 0.6)\nax3.set_xlim(wlb-lc,wlr-lc)\n\nplt.savefig('../document/figures/spectral_postage_stamp_06.pdf', bbox_inches='tight')",
"_____no_output_____"
]
],
[
[
"## The end",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
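The welter notebook above sorts a flattened emcee chain by the cool-spot fill factor and reads off samples at fixed indices (400, 4000, 7600 out of 8000, i.e. roughly the 5th, 50th, and 95th percentiles). A self-contained sketch of that selection on synthetic data (the real chain comes from `emcee_chain.npy`; the Gaussian draws here are a stand-in, and the column indices 5 and 7 simply mirror the notebook):

```python
import numpy as np

# Synthetic stand-in for the flattened chain: 8000 samples of 8 parameters.
rng = np.random.default_rng(0)
fc = rng.normal(size=(8000, 8))

# Fill factor of the cool component, as in the notebook
# (columns 5 and 7 are treated as log10 solid angles).
ff = 10**fc[:, 7] / (10**fc[:, 7] + 10**fc[:, 5])

# Sort every sample by fill factor, then pick samples near the
# 5th, 50th, and 95th percentiles by index.
order = np.argsort(ff)
ff_sorted = ff[order]
fc_sorted = fc[order]

n = len(ff_sorted)
idx = [int(0.05 * n), int(0.50 * n), int(0.95 * n)]  # 400, 4000, 7600
percentile_samples = fc_sorted[idx]
print(percentile_samples.shape)  # (3, 8)
```

Sorting the whole parameter matrix by one derived quantity keeps each selected row a genuine joint posterior sample, rather than mixing percentiles of each parameter independently.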
e7565645f78cbfe9fbe8ffb9f50b618708abebf8 | 115,368 | ipynb | Jupyter Notebook | APIs/pbi.ipynb | abnercasallo/Casos-Aplicados-de-Python | dd6316c47577dc721a4b2e365b9fa31c05b70371 | [
"MIT"
] | 1 | 2021-09-01T17:47:29.000Z | 2021-09-01T17:47:29.000Z | APIs/pbi.ipynb | yesin25/Casos-Aplicados-de-Python | 3ba23aa5ea98a005c0ba3d60031a8e86f71f798b | [
"MIT"
] | null | null | null | APIs/pbi.ipynb | yesin25/Casos-Aplicados-de-Python | 3ba23aa5ea98a005c0ba3d60031a8e86f71f798b | [
"MIT"
] | 1 | 2021-09-01T17:13:01.000Z | 2021-09-01T17:13:01.000Z | 168.174927 | 43,716 | 0.812496 | [
[
[
"# ANALYSIS OF PERU'S GDP (USING THE BCRP API)\n",
"_____no_output_____"
],
[
"## I. Introduction",
"_____no_output_____"
],
[
"<p style=\"text-align: justify;\"> Practically every economy in the world seeks to grow, and this is measured through GDP; Peru is no exception. In this sense, its behavior matters. From the outset it is easy to anticipate some features, such as the presence of a trend in the series (and therefore non-stationarity), which will raise some observations for us. We will start by working with the BCRP API to obtain the dataset. </p>\n\n<p style=\"text-align: justify;\"> We recommend having some prior notion of how dictionaries and lists are used in Python </p>\n",
"_____no_output_____"
],
[
"## II. The BCRP API",
"_____no_output_____"
]
],
[
[
"#https://estadisticas.bcrp.gob.pe/estadisticas/series/mensuales/resultados/PN01288PM/html\n#PN01770AM : GDP (PBI)\n#PN01142MM : stock market index\n#PN01288PM: PM index excluding food\nurl_base=\"https://estadisticas.bcrp.gob.pe/estadisticas/series/api/\"\ncod_ser=\"PN01770AM\" # [monthly series codes]\nformato=\"/json\"\nper=\"/2005-1/2019-12\"\nurl=url_base+cod_ser+formato+per\nprint(url)\n#args=\"address=1600+Amphitheatre+Parkway,+Mountain+View,+CA&\"+key_api\n# We don't need to build the url as a query dictionary: there is no \"?\", so we aren't passing key/value arguments",
"https://estadisticas.bcrp.gob.pe/estadisticas/series/api/PN01770AM/json/2005-1/2019-12\n"
],
[
"import requests\nresponse=requests.get(url)\nprint(response)\n\n'''200: Everything went well and the result (if any) was returned.\n301: The server redirects you to a different endpoint. This can happen when a company changes domain names or when an endpoint is renamed.\n400: The server thinks you made a bad request. This can happen when you don't send the right data, among other things.\n401: The server thinks you are not authenticated. Many APIs require login credentials, so this happens when you don't send the correct credentials to access an API.\n403: The resource you are trying to access is forbidden: you don't have the right permissions to view it.\n404: The resource you tried to access was not found on the server.\n503: The server is not ready to handle the request'''",
"<Response [200]>\n"
],
[
"\n#print(response.url)\n# Extract the response in json format\nresponse_json=response.json()\nprint(response_json) \n\n\n'''if response.status_code==200:\n content=response.content '''",
"{'config': {'title': 'Producto bruto interno y demanda interna (índice 2007=100)', 'series': [{'name': 'Producto bruto interno y demanda interna (índice 2007=100) - PBI', 'dec': '1'}]}, 'periods': [{'name': 'Ene.2005', 'values': ['79.9791613436588']}, {'name': 'Feb.2005', 'values': ['80.134460157334']}, {'name': 'Mar.2005', 'values': ['81.3964132411787']}, {'name': 'Abr.2005', 'values': ['87.0734472979407']}, {'name': 'May.2005', 'values': ['92.1414553918693']}, {'name': 'Jun.2005', 'values': ['88.4557732759908']}, {'name': 'Jul.2005', 'values': ['87.276600902297']}, {'name': 'Ago.2005', 'values': ['82.9894833725657']}, {'name': 'Sep.2005', 'values': ['82.0878667862381']}, {'name': 'Oct.2005', 'values': ['84.8027241979274']}, {'name': 'Nov.2005', 'values': ['90.4951179010569']}, {'name': 'Dic.2005', 'values': ['91.5459269781851']}, {'name': 'Ene.2006', 'values': ['85.6590427598994']}, {'name': 'Feb.2006', 'values': ['84.6285223502132']}, {'name': 'Mar.2006', 'values': ['91.2286931254313']}, {'name': 'Abr.2006', 'values': ['91.7780026508836']}, {'name': 'May.2006', 'values': ['97.7619066472792']}, {'name': 'Jun.2006', 'values': ['95.0729001070201']}, {'name': 'Jul.2006', 'values': ['92.9616004138741']}, {'name': 'Ago.2006', 'values': ['91.5547630160115']}, {'name': 'Sep.2006', 'values': ['88.7693298777215']}, {'name': 'Oct.2006', 'values': ['92.1822069581638']}, {'name': 'Nov.2006', 'values': ['94.7874215284444']}, {'name': 'Dic.2006', 'values': ['99.4183590716473']}, {'name': 'Ene.2007', 'values': ['89.948891875024']}, {'name': 'Feb.2007', 'values': ['88.6754172656189']}, {'name': 'Mar.2007', 'values': ['96.7166807859656']}, {'name': 'Abr.2007', 'values': ['96.6358439215487']}, {'name': 'May.2007', 'values': ['104.744753291331']}, {'name': 'Jun.2007', 'values': ['101.255860662521']}, {'name': 'Jul.2007', 'values': ['102.534731811414']}, {'name': 'Ago.2007', 'values': ['100.113950462611']}, {'name': 'Sep.2007', 'values': ['100.225943716493']}, {'name': 'Oct.2007', 
'values': ['103.154829161475']}, {'name': 'Nov.2007', 'values': ['104.110657226655']}, {'name': 'Dic.2007', 'values': ['111.882439819342']}, {'name': 'Ene.2008', 'values': ['98.5149773291643']}, {'name': 'Feb.2008', 'values': ['100.709860257349']}, {'name': 'Mar.2008', 'values': ['104.115307520329']}, {'name': 'Abr.2008', 'values': ['110.287664370675']}, {'name': 'May.2008', 'values': ['112.120913726693']}, {'name': 'Jun.2008', 'values': ['112.211609963734']}, {'name': 'Jul.2008', 'values': ['112.320516016464']}, {'name': 'Ago.2008', 'values': ['108.922292629548']}, {'name': 'Sep.2008', 'values': ['110.725068248671']}, {'name': 'Oct.2008', 'values': ['111.641043578987']}, {'name': 'Nov.2008', 'values': ['110.655001246582']}, {'name': 'Dic.2008', 'values': ['117.493523482104']}, {'name': 'Ene.2009', 'values': ['103.013101977079']}, {'name': 'Feb.2009', 'values': ['101.019865758232']}, {'name': 'Mar.2009', 'values': ['107.121546842635']}, {'name': 'Abr.2009', 'values': ['108.808082918569']}, {'name': 'May.2009', 'values': ['114.22464786173']}, {'name': 'Jun.2009', 'values': ['108.887627134331']}, {'name': 'Jul.2009', 'values': ['110.717741226661']}, {'name': 'Ago.2009', 'values': ['109.795538603335']}, {'name': 'Sep.2009', 'values': ['110.86582898004']}, {'name': 'Oct.2009', 'values': ['112.955997594939']}, {'name': 'Nov.2009', 'values': ['113.624205299902']}, {'name': 'Dic.2009', 'values': ['122.425577213106']}, {'name': 'Ene.2010', 'values': ['106.153025639292']}, {'name': 'Feb.2010', 'values': ['106.146322827933']}, {'name': 'Mar.2010', 'values': ['115.833731729949']}, {'name': 'Abr.2010', 'values': ['117.484525272731']}, {'name': 'May.2010', 'values': ['123.028902592226']}, {'name': 'Jun.2010', 'values': ['123.162705739863']}, {'name': 'Jul.2010', 'values': ['121.894425888244']}, {'name': 'Ago.2010', 'values': ['119.607860483274']}, {'name': 'Sep.2010', 'values': ['122.292317240048']}, {'name': 'Oct.2010', 'values': ['123.835093598882']}, {'name': 'Nov.2010', 
'values': ['123.761084689684']}, {'name': 'Dic.2010', 'values': ['132.102000141137']}, {'name': 'Ene.2011', 'values': ['116.607351030303']}, {'name': 'Feb.2011', 'values': ['114.949281605472']}, {'name': 'Mar.2011', 'values': ['125.021518391095']}, {'name': 'Abr.2011', 'values': ['126.5570445669']}, {'name': 'May.2011', 'values': ['130.030006298954']}, {'name': 'Jun.2011', 'values': ['126.941054549273']}, {'name': 'Jul.2011', 'values': ['129.392947204169']}, {'name': 'Ago.2011', 'values': ['127.436349292877']}, {'name': 'Sep.2011', 'values': ['128.310944143643']}, {'name': 'Oct.2011', 'values': ['129.418112541031']}, {'name': 'Nov.2011', 'values': ['129.64481294438']}, {'name': 'Dic.2011', 'values': ['143.601358332839']}, {'name': 'Ene.2012', 'values': ['122.822590088739']}, {'name': 'Feb.2012', 'values': ['122.917522307054']}, {'name': 'Mar.2012', 'values': ['132.130558417313']}, {'name': 'Abr.2012', 'values': ['130.158216802128']}, {'name': 'May.2012', 'values': ['138.807669844065']}, {'name': 'Jun.2012', 'values': ['136.27618845403']}, {'name': 'Jul.2012', 'values': ['138.550598740221']}, {'name': 'Ago.2012', 'values': ['136.186033978302']}, {'name': 'Sep.2012', 'values': ['136.751000660028']}, {'name': 'Oct.2012', 'values': ['138.733908716083']}, {'name': 'Nov.2012', 'values': ['137.252468378653']}, {'name': 'Dic.2012', 'values': ['148.24000775099']}, {'name': 'Ene.2013', 'values': ['130.273708234207']}, {'name': 'Feb.2013', 'values': ['128.857322355164']}, {'name': 'Mar.2013', 'values': ['136.602263236801']}, {'name': 'Abr.2013', 'values': ['141.48154612876']}, {'name': 'May.2013', 'values': ['144.684999949733']}, {'name': 'Jun.2013', 'values': ['144.335248367247']}, {'name': 'Jul.2013', 'values': ['145.942897859897']}, {'name': 'Ago.2013', 'values': ['143.792408742956']}, {'name': 'Sep.2013', 'values': ['143.547137549196']}, {'name': 'Oct.2013', 'values': ['147.503866716747']}, {'name': 'Nov.2013', 'values': ['147.500876185344']}, {'name': 'Dic.2013', 
'values': ['158.804144814758']}, {'name': 'Ene.2014', 'values': ['135.792849582942']}, {'name': 'Feb.2014', 'values': ['135.617893985249']}, {'name': 'Mar.2014', 'values': ['143.900087140736']}, {'name': 'Abr.2014', 'values': ['145.609545711116']}, {'name': 'May.2014', 'values': ['148.421357252142']}, {'name': 'Jun.2014', 'values': ['144.912448443967']}, {'name': 'Jul.2014', 'values': ['148.187537983575']}, {'name': 'Ago.2014', 'values': ['145.764306213892']}, {'name': 'Sep.2014', 'values': ['147.442006383865']}, {'name': 'Oct.2014', 'values': ['150.850391268266']}, {'name': 'Nov.2014', 'values': ['147.702289120392']}, {'name': 'Dic.2014', 'values': ['160.144049895315']}, {'name': 'Ene.2015', 'values': ['137.921077233007']}, {'name': 'Feb.2015', 'values': ['137.266874959034']}, {'name': 'Mar.2015', 'values': ['148.174327103234']}, {'name': 'Abr.2015', 'values': ['151.686831170872']}, {'name': 'May.2015', 'values': ['150.420693358248']}, {'name': 'Jun.2015', 'values': ['150.802301648105']}, {'name': 'Jul.2015', 'values': ['153.471028181483']}, {'name': 'Ago.2015', 'values': ['149.687757193712']}, {'name': 'Sep.2015', 'values': ['152.207695357998']}, {'name': 'Oct.2015', 'values': ['155.919340841107']}, {'name': 'Nov.2015', 'values': ['153.604235210718']}, {'name': 'Dic.2015', 'values': ['170.612432978585']}, {'name': 'Ene.2016', 'values': ['142.950655593597']}, {'name': 'Feb.2016', 'values': ['146.157758408323']}, {'name': 'Mar.2016', 'values': ['153.676002191926']}, {'name': 'Abr.2016', 'values': ['156.01029575947']}, {'name': 'May.2016', 'values': ['158.020470230666']}, {'name': 'Jun.2016', 'values': ['156.441146410923']}, {'name': 'Jul.2016', 'values': ['159.303182636246']}, {'name': 'Ago.2016', 'values': ['158.54740988931']}, {'name': 'Sep.2016', 'values': ['159.198133719473']}, {'name': 'Oct.2016', 'values': ['159.453175861771']}, {'name': 'Nov.2016', 'values': ['159.008032205431']}, {'name': 'Dic.2016', 'values': ['176.385084478553']}, {'name': 'Ene.2017', 
'values': ['150.217864452394']}, {'name': 'Feb.2017', 'values': ['147.342977277516']}, {'name': 'Mar.2017', 'values': ['155.2289887445']}, {'name': 'Abr.2017', 'values': ['156.514755332027']}, {'name': 'May.2017', 'values': ['163.602696665248']}, {'name': 'Jun.2017', 'values': ['162.536529072277']}, {'name': 'Jul.2017', 'values': ['162.718683017657']}, {'name': 'Ago.2017', 'values': ['162.944325629995']}, {'name': 'Sep.2017', 'values': ['164.440198382914']}, {'name': 'Oct.2017', 'values': ['165.157200750526']}, {'name': 'Nov.2017', 'values': ['162.217614216323']}, {'name': 'Dic.2017', 'values': ['178.887887960154']}, {'name': 'Ene.2018', 'values': ['154.454008229951']}, {'name': 'Feb.2018', 'values': ['151.232831877643']}, {'name': 'Mar.2018', 'values': ['161.236350608912']}, {'name': 'Abr.2018', 'values': ['168.94202690539']}, {'name': 'May.2018', 'values': ['174.56407734182']}, {'name': 'Jun.2018', 'values': ['165.966049835702']}, {'name': 'Jul.2018', 'values': ['166.98191251272']}, {'name': 'Ago.2018', 'values': ['166.822400579989']}, {'name': 'Sep.2018', 'values': ['168.51831530281']}, {'name': 'Oct.2018', 'values': ['171.895614541147']}, {'name': 'Nov.2018', 'values': ['170.506934302777']}, {'name': 'Dic.2018', 'values': ['187.367173849466']}, {'name': 'Ene.2019', 'values': ['157.187844175622']}, {'name': 'Feb.2019', 'values': ['154.589821648405']}, {'name': 'Mar.2019', 'values': ['166.721328164303']}, {'name': 'Abr.2019', 'values': ['169.144757337677']}, {'name': 'May.2019', 'values': ['175.859353849945']}, {'name': 'Jun.2019', 'values': ['170.566569664831']}, {'name': 'Jul.2019', 'values': ['173.258853667138']}, {'name': 'Ago.2019', 'values': ['172.844689240926']}, {'name': 'Sep.2019', 'values': ['172.464650840243']}, {'name': 'Oct.2019', 'values': ['176.009008472759']}, {'name': 'Nov.2019', 'values': ['174.036427842845']}, {'name': 'Dic.2019', 'values': ['189.5238889357']}]}\n"
],
[
"for key in response_json.keys():\n print(key)",
"config\nperiods\n"
],
[
"print(response_json['config'])\n",
"{'title': 'Producto bruto interno y demanda interna (índice 2007=100)', 'series': [{'name': 'Producto bruto interno y demanda interna (índice 2007=100) - PBI', 'dec': '1'}]}\n"
],
[
"print(response_json['periods'])",
"[{'name': 'Ene.2005', 'values': ['79.9791613436588']}, {'name': 'Feb.2005', 'values': ['80.134460157334']}, {'name': 'Mar.2005', 'values': ['81.3964132411787']}, {'name': 'Abr.2005', 'values': ['87.0734472979407']}, {'name': 'May.2005', 'values': ['92.1414553918693']}, {'name': 'Jun.2005', 'values': ['88.4557732759908']}, {'name': 'Jul.2005', 'values': ['87.276600902297']}, {'name': 'Ago.2005', 'values': ['82.9894833725657']}, {'name': 'Sep.2005', 'values': ['82.0878667862381']}, {'name': 'Oct.2005', 'values': ['84.8027241979274']}, {'name': 'Nov.2005', 'values': ['90.4951179010569']}, {'name': 'Dic.2005', 'values': ['91.5459269781851']}, {'name': 'Ene.2006', 'values': ['85.6590427598994']}, {'name': 'Feb.2006', 'values': ['84.6285223502132']}, {'name': 'Mar.2006', 'values': ['91.2286931254313']}, {'name': 'Abr.2006', 'values': ['91.7780026508836']}, {'name': 'May.2006', 'values': ['97.7619066472792']}, {'name': 'Jun.2006', 'values': ['95.0729001070201']}, {'name': 'Jul.2006', 'values': ['92.9616004138741']}, {'name': 'Ago.2006', 'values': ['91.5547630160115']}, {'name': 'Sep.2006', 'values': ['88.7693298777215']}, {'name': 'Oct.2006', 'values': ['92.1822069581638']}, {'name': 'Nov.2006', 'values': ['94.7874215284444']}, {'name': 'Dic.2006', 'values': ['99.4183590716473']}, {'name': 'Ene.2007', 'values': ['89.948891875024']}, {'name': 'Feb.2007', 'values': ['88.6754172656189']}, {'name': 'Mar.2007', 'values': ['96.7166807859656']}, {'name': 'Abr.2007', 'values': ['96.6358439215487']}, {'name': 'May.2007', 'values': ['104.744753291331']}, {'name': 'Jun.2007', 'values': ['101.255860662521']}, {'name': 'Jul.2007', 'values': ['102.534731811414']}, {'name': 'Ago.2007', 'values': ['100.113950462611']}, {'name': 'Sep.2007', 'values': ['100.225943716493']}, {'name': 'Oct.2007', 'values': ['103.154829161475']}, {'name': 'Nov.2007', 'values': ['104.110657226655']}, {'name': 'Dic.2007', 'values': ['111.882439819342']}, {'name': 'Ene.2008', 'values': ['98.5149773291643']}, 
{'name': 'Feb.2008', 'values': ['100.709860257349']}, {'name': 'Mar.2008', 'values': ['104.115307520329']}, {'name': 'Abr.2008', 'values': ['110.287664370675']}, {'name': 'May.2008', 'values': ['112.120913726693']}, {'name': 'Jun.2008', 'values': ['112.211609963734']}, {'name': 'Jul.2008', 'values': ['112.320516016464']}, {'name': 'Ago.2008', 'values': ['108.922292629548']}, {'name': 'Sep.2008', 'values': ['110.725068248671']}, {'name': 'Oct.2008', 'values': ['111.641043578987']}, {'name': 'Nov.2008', 'values': ['110.655001246582']}, {'name': 'Dic.2008', 'values': ['117.493523482104']}, {'name': 'Ene.2009', 'values': ['103.013101977079']}, {'name': 'Feb.2009', 'values': ['101.019865758232']}, {'name': 'Mar.2009', 'values': ['107.121546842635']}, {'name': 'Abr.2009', 'values': ['108.808082918569']}, {'name': 'May.2009', 'values': ['114.22464786173']}, {'name': 'Jun.2009', 'values': ['108.887627134331']}, {'name': 'Jul.2009', 'values': ['110.717741226661']}, {'name': 'Ago.2009', 'values': ['109.795538603335']}, {'name': 'Sep.2009', 'values': ['110.86582898004']}, {'name': 'Oct.2009', 'values': ['112.955997594939']}, {'name': 'Nov.2009', 'values': ['113.624205299902']}, {'name': 'Dic.2009', 'values': ['122.425577213106']}, {'name': 'Ene.2010', 'values': ['106.153025639292']}, {'name': 'Feb.2010', 'values': ['106.146322827933']}, {'name': 'Mar.2010', 'values': ['115.833731729949']}, {'name': 'Abr.2010', 'values': ['117.484525272731']}, {'name': 'May.2010', 'values': ['123.028902592226']}, {'name': 'Jun.2010', 'values': ['123.162705739863']}, {'name': 'Jul.2010', 'values': ['121.894425888244']}, {'name': 'Ago.2010', 'values': ['119.607860483274']}, {'name': 'Sep.2010', 'values': ['122.292317240048']}, {'name': 'Oct.2010', 'values': ['123.835093598882']}, {'name': 'Nov.2010', 'values': ['123.761084689684']}, {'name': 'Dic.2010', 'values': ['132.102000141137']}, {'name': 'Ene.2011', 'values': ['116.607351030303']}, {'name': 'Feb.2011', 'values': ['114.949281605472']}, 
{'name': 'Mar.2011', 'values': ['125.021518391095']}, {'name': 'Abr.2011', 'values': ['126.5570445669']}, {'name': 'May.2011', 'values': ['130.030006298954']}, {'name': 'Jun.2011', 'values': ['126.941054549273']}, {'name': 'Jul.2011', 'values': ['129.392947204169']}, {'name': 'Ago.2011', 'values': ['127.436349292877']}, {'name': 'Sep.2011', 'values': ['128.310944143643']}, {'name': 'Oct.2011', 'values': ['129.418112541031']}, {'name': 'Nov.2011', 'values': ['129.64481294438']}, {'name': 'Dic.2011', 'values': ['143.601358332839']}, {'name': 'Ene.2012', 'values': ['122.822590088739']}, {'name': 'Feb.2012', 'values': ['122.917522307054']}, {'name': 'Mar.2012', 'values': ['132.130558417313']}, {'name': 'Abr.2012', 'values': ['130.158216802128']}, {'name': 'May.2012', 'values': ['138.807669844065']}, {'name': 'Jun.2012', 'values': ['136.27618845403']}, {'name': 'Jul.2012', 'values': ['138.550598740221']}, {'name': 'Ago.2012', 'values': ['136.186033978302']}, {'name': 'Sep.2012', 'values': ['136.751000660028']}, {'name': 'Oct.2012', 'values': ['138.733908716083']}, {'name': 'Nov.2012', 'values': ['137.252468378653']}, {'name': 'Dic.2012', 'values': ['148.24000775099']}, {'name': 'Ene.2013', 'values': ['130.273708234207']}, {'name': 'Feb.2013', 'values': ['128.857322355164']}, {'name': 'Mar.2013', 'values': ['136.602263236801']}, {'name': 'Abr.2013', 'values': ['141.48154612876']}, {'name': 'May.2013', 'values': ['144.684999949733']}, {'name': 'Jun.2013', 'values': ['144.335248367247']}, {'name': 'Jul.2013', 'values': ['145.942897859897']}, {'name': 'Ago.2013', 'values': ['143.792408742956']}, {'name': 'Sep.2013', 'values': ['143.547137549196']}, {'name': 'Oct.2013', 'values': ['147.503866716747']}, {'name': 'Nov.2013', 'values': ['147.500876185344']}, {'name': 'Dic.2013', 'values': ['158.804144814758']}, {'name': 'Ene.2014', 'values': ['135.792849582942']}, {'name': 'Feb.2014', 'values': ['135.617893985249']}, {'name': 'Mar.2014', 'values': ['143.900087140736']}, 
{'name': 'Abr.2014', 'values': ['145.609545711116']}, {'name': 'May.2014', 'values': ['148.421357252142']}, {'name': 'Jun.2014', 'values': ['144.912448443967']}, {'name': 'Jul.2014', 'values': ['148.187537983575']}, {'name': 'Ago.2014', 'values': ['145.764306213892']}, {'name': 'Sep.2014', 'values': ['147.442006383865']}, {'name': 'Oct.2014', 'values': ['150.850391268266']}, {'name': 'Nov.2014', 'values': ['147.702289120392']}, {'name': 'Dic.2014', 'values': ['160.144049895315']}, {'name': 'Ene.2015', 'values': ['137.921077233007']}, {'name': 'Feb.2015', 'values': ['137.266874959034']}, {'name': 'Mar.2015', 'values': ['148.174327103234']}, {'name': 'Abr.2015', 'values': ['151.686831170872']}, {'name': 'May.2015', 'values': ['150.420693358248']}, {'name': 'Jun.2015', 'values': ['150.802301648105']}, {'name': 'Jul.2015', 'values': ['153.471028181483']}, {'name': 'Ago.2015', 'values': ['149.687757193712']}, {'name': 'Sep.2015', 'values': ['152.207695357998']}, {'name': 'Oct.2015', 'values': ['155.919340841107']}, {'name': 'Nov.2015', 'values': ['153.604235210718']}, {'name': 'Dic.2015', 'values': ['170.612432978585']}, {'name': 'Ene.2016', 'values': ['142.950655593597']}, {'name': 'Feb.2016', 'values': ['146.157758408323']}, {'name': 'Mar.2016', 'values': ['153.676002191926']}, {'name': 'Abr.2016', 'values': ['156.01029575947']}, {'name': 'May.2016', 'values': ['158.020470230666']}, {'name': 'Jun.2016', 'values': ['156.441146410923']}, {'name': 'Jul.2016', 'values': ['159.303182636246']}, {'name': 'Ago.2016', 'values': ['158.54740988931']}, {'name': 'Sep.2016', 'values': ['159.198133719473']}, {'name': 'Oct.2016', 'values': ['159.453175861771']}, {'name': 'Nov.2016', 'values': ['159.008032205431']}, {'name': 'Dic.2016', 'values': ['176.385084478553']}, {'name': 'Ene.2017', 'values': ['150.217864452394']}, {'name': 'Feb.2017', 'values': ['147.342977277516']}, {'name': 'Mar.2017', 'values': ['155.2289887445']}, {'name': 'Abr.2017', 'values': ['156.514755332027']}, 
{'name': 'May.2017', 'values': ['163.602696665248']}, {'name': 'Jun.2017', 'values': ['162.536529072277']}, {'name': 'Jul.2017', 'values': ['162.718683017657']}, {'name': 'Ago.2017', 'values': ['162.944325629995']}, {'name': 'Sep.2017', 'values': ['164.440198382914']}, {'name': 'Oct.2017', 'values': ['165.157200750526']}, {'name': 'Nov.2017', 'values': ['162.217614216323']}, {'name': 'Dic.2017', 'values': ['178.887887960154']}, {'name': 'Ene.2018', 'values': ['154.454008229951']}, {'name': 'Feb.2018', 'values': ['151.232831877643']}, {'name': 'Mar.2018', 'values': ['161.236350608912']}, {'name': 'Abr.2018', 'values': ['168.94202690539']}, {'name': 'May.2018', 'values': ['174.56407734182']}, {'name': 'Jun.2018', 'values': ['165.966049835702']}, {'name': 'Jul.2018', 'values': ['166.98191251272']}, {'name': 'Ago.2018', 'values': ['166.822400579989']}, {'name': 'Sep.2018', 'values': ['168.51831530281']}, {'name': 'Oct.2018', 'values': ['171.895614541147']}, {'name': 'Nov.2018', 'values': ['170.506934302777']}, {'name': 'Dic.2018', 'values': ['187.367173849466']}, {'name': 'Ene.2019', 'values': ['157.187844175622']}, {'name': 'Feb.2019', 'values': ['154.589821648405']}, {'name': 'Mar.2019', 'values': ['166.721328164303']}, {'name': 'Abr.2019', 'values': ['169.144757337677']}, {'name': 'May.2019', 'values': ['175.859353849945']}, {'name': 'Jun.2019', 'values': ['170.566569664831']}, {'name': 'Jul.2019', 'values': ['173.258853667138']}, {'name': 'Ago.2019', 'values': ['172.844689240926']}, {'name': 'Sep.2019', 'values': ['172.464650840243']}, {'name': 'Oct.2019', 'values': ['176.009008472759']}, {'name': 'Nov.2019', 'values': ['174.036427842845']}, {'name': 'Dic.2019', 'values': ['189.5238889357']}]\n"
],
[
"print(response_json['periods'][0])",
"{'name': 'Ene.2005', 'values': ['79.9791613436588']}\n"
],
[
"print(response_json['periods'][0][\"values\"])\n\n# MORAL: IT IS IMPORTANT TO DISTINGUISH LISTS AND DICTIONARIES IN PYTHON",
"['79.9791613436588']\n"
],
[
"#print(response_json.get(\"periods\")) # the get method gives a result similar to response_json['periods']\nperiodos=response_json.get(\"periods\")\nprice_index=[]\nfor i in periodos:\n valores_list=i['values']\n for w in valores_list:\n w=float(w)\n price_index.append(w)\n \n#print(type(price_index[0]))\nprint(price_index)",
"[79.9791613436588, 80.134460157334, 81.3964132411787, 87.0734472979407, 92.1414553918693, 88.4557732759908, 87.276600902297, 82.9894833725657, 82.0878667862381, 84.8027241979274, 90.4951179010569, 91.5459269781851, 85.6590427598994, 84.6285223502132, 91.2286931254313, 91.7780026508836, 97.7619066472792, 95.0729001070201, 92.9616004138741, 91.5547630160115, 88.7693298777215, 92.1822069581638, 94.7874215284444, 99.4183590716473, 89.948891875024, 88.6754172656189, 96.7166807859656, 96.6358439215487, 104.744753291331, 101.255860662521, 102.534731811414, 100.113950462611, 100.225943716493, 103.154829161475, 104.110657226655, 111.882439819342, 98.5149773291643, 100.709860257349, 104.115307520329, 110.287664370675, 112.120913726693, 112.211609963734, 112.320516016464, 108.922292629548, 110.725068248671, 111.641043578987, 110.655001246582, 117.493523482104, 103.013101977079, 101.019865758232, 107.121546842635, 108.808082918569, 114.22464786173, 108.887627134331, 110.717741226661, 109.795538603335, 110.86582898004, 112.955997594939, 113.624205299902, 122.425577213106, 106.153025639292, 106.146322827933, 115.833731729949, 117.484525272731, 123.028902592226, 123.162705739863, 121.894425888244, 119.607860483274, 122.292317240048, 123.835093598882, 123.761084689684, 132.102000141137, 116.607351030303, 114.949281605472, 125.021518391095, 126.5570445669, 130.030006298954, 126.941054549273, 129.392947204169, 127.436349292877, 128.310944143643, 129.418112541031, 129.64481294438, 143.601358332839, 122.822590088739, 122.917522307054, 132.130558417313, 130.158216802128, 138.807669844065, 136.27618845403, 138.550598740221, 136.186033978302, 136.751000660028, 138.733908716083, 137.252468378653, 148.24000775099, 130.273708234207, 128.857322355164, 136.602263236801, 141.48154612876, 144.684999949733, 144.335248367247, 145.942897859897, 143.792408742956, 143.547137549196, 147.503866716747, 147.500876185344, 158.804144814758, 135.792849582942, 135.617893985249, 143.900087140736, 
145.609545711116, 148.421357252142, 144.912448443967, 148.187537983575, 145.764306213892, 147.442006383865, 150.850391268266, 147.702289120392, 160.144049895315, 137.921077233007, 137.266874959034, 148.174327103234, 151.686831170872, 150.420693358248, 150.802301648105, 153.471028181483, 149.687757193712, 152.207695357998, 155.919340841107, 153.604235210718, 170.612432978585, 142.950655593597, 146.157758408323, 153.676002191926, 156.01029575947, 158.020470230666, 156.441146410923, 159.303182636246, 158.54740988931, 159.198133719473, 159.453175861771, 159.008032205431, 176.385084478553, 150.217864452394, 147.342977277516, 155.2289887445, 156.514755332027, 163.602696665248, 162.536529072277, 162.718683017657, 162.944325629995, 164.440198382914, 165.157200750526, 162.217614216323, 178.887887960154, 154.454008229951, 151.232831877643, 161.236350608912, 168.94202690539, 174.56407734182, 165.966049835702, 166.98191251272, 166.822400579989, 168.51831530281, 171.895614541147, 170.506934302777, 187.367173849466, 157.187844175622, 154.589821648405, 166.721328164303, 169.144757337677, 175.859353849945, 170.566569664831, 173.258853667138, 172.844689240926, 172.464650840243, 176.009008472759, 174.036427842845, 189.5238889357]\n"
],
[
"fechas=[]\nfor i in periodos:\n nombres=i['name']\n fechas.append(nombres)\n \nprint(fechas)",
"['Ene.2005', 'Feb.2005', 'Mar.2005', 'Abr.2005', 'May.2005', 'Jun.2005', 'Jul.2005', 'Ago.2005', 'Sep.2005', 'Oct.2005', 'Nov.2005', 'Dic.2005', 'Ene.2006', 'Feb.2006', 'Mar.2006', 'Abr.2006', 'May.2006', 'Jun.2006', 'Jul.2006', 'Ago.2006', 'Sep.2006', 'Oct.2006', 'Nov.2006', 'Dic.2006', 'Ene.2007', 'Feb.2007', 'Mar.2007', 'Abr.2007', 'May.2007', 'Jun.2007', 'Jul.2007', 'Ago.2007', 'Sep.2007', 'Oct.2007', 'Nov.2007', 'Dic.2007', 'Ene.2008', 'Feb.2008', 'Mar.2008', 'Abr.2008', 'May.2008', 'Jun.2008', 'Jul.2008', 'Ago.2008', 'Sep.2008', 'Oct.2008', 'Nov.2008', 'Dic.2008', 'Ene.2009', 'Feb.2009', 'Mar.2009', 'Abr.2009', 'May.2009', 'Jun.2009', 'Jul.2009', 'Ago.2009', 'Sep.2009', 'Oct.2009', 'Nov.2009', 'Dic.2009', 'Ene.2010', 'Feb.2010', 'Mar.2010', 'Abr.2010', 'May.2010', 'Jun.2010', 'Jul.2010', 'Ago.2010', 'Sep.2010', 'Oct.2010', 'Nov.2010', 'Dic.2010', 'Ene.2011', 'Feb.2011', 'Mar.2011', 'Abr.2011', 'May.2011', 'Jun.2011', 'Jul.2011', 'Ago.2011', 'Sep.2011', 'Oct.2011', 'Nov.2011', 'Dic.2011', 'Ene.2012', 'Feb.2012', 'Mar.2012', 'Abr.2012', 'May.2012', 'Jun.2012', 'Jul.2012', 'Ago.2012', 'Sep.2012', 'Oct.2012', 'Nov.2012', 'Dic.2012', 'Ene.2013', 'Feb.2013', 'Mar.2013', 'Abr.2013', 'May.2013', 'Jun.2013', 'Jul.2013', 'Ago.2013', 'Sep.2013', 'Oct.2013', 'Nov.2013', 'Dic.2013', 'Ene.2014', 'Feb.2014', 'Mar.2014', 'Abr.2014', 'May.2014', 'Jun.2014', 'Jul.2014', 'Ago.2014', 'Sep.2014', 'Oct.2014', 'Nov.2014', 'Dic.2014', 'Ene.2015', 'Feb.2015', 'Mar.2015', 'Abr.2015', 'May.2015', 'Jun.2015', 'Jul.2015', 'Ago.2015', 'Sep.2015', 'Oct.2015', 'Nov.2015', 'Dic.2015', 'Ene.2016', 'Feb.2016', 'Mar.2016', 'Abr.2016', 'May.2016', 'Jun.2016', 'Jul.2016', 'Ago.2016', 'Sep.2016', 'Oct.2016', 'Nov.2016', 'Dic.2016', 'Ene.2017', 'Feb.2017', 'Mar.2017', 'Abr.2017', 'May.2017', 'Jun.2017', 'Jul.2017', 'Ago.2017', 'Sep.2017', 'Oct.2017', 'Nov.2017', 'Dic.2017', 'Ene.2018', 'Feb.2018', 'Mar.2018', 'Abr.2018', 'May.2018', 'Jun.2018', 'Jul.2018', 'Ago.2018', 'Sep.2018', 'Oct.2018', 
'Nov.2018', 'Dic.2018', 'Ene.2019', 'Feb.2019', 'Mar.2019', 'Abr.2019', 'May.2019', 'Jun.2019', 'Jul.2019', 'Ago.2019', 'Sep.2019', 'Oct.2019', 'Nov.2019', 'Dic.2019']\n"
]
],
[
[
"## III. Data cleaning and DataFrame creation",
"_____no_output_____"
]
],
[
[
"\nimport pandas as pd",
"_____no_output_____"
],
[
"diccionario= {\"Fechas\":fechas, \"Valores\":price_index}\nprint(diccionario)",
"{'Fechas': ['Ene.2005', 'Feb.2005', 'Mar.2005', 'Abr.2005', 'May.2005', 'Jun.2005', 'Jul.2005', 'Ago.2005', 'Sep.2005', 'Oct.2005', 'Nov.2005', 'Dic.2005', 'Ene.2006', 'Feb.2006', 'Mar.2006', 'Abr.2006', 'May.2006', 'Jun.2006', 'Jul.2006', 'Ago.2006', 'Sep.2006', 'Oct.2006', 'Nov.2006', 'Dic.2006', 'Ene.2007', 'Feb.2007', 'Mar.2007', 'Abr.2007', 'May.2007', 'Jun.2007', 'Jul.2007', 'Ago.2007', 'Sep.2007', 'Oct.2007', 'Nov.2007', 'Dic.2007', 'Ene.2008', 'Feb.2008', 'Mar.2008', 'Abr.2008', 'May.2008', 'Jun.2008', 'Jul.2008', 'Ago.2008', 'Sep.2008', 'Oct.2008', 'Nov.2008', 'Dic.2008', 'Ene.2009', 'Feb.2009', 'Mar.2009', 'Abr.2009', 'May.2009', 'Jun.2009', 'Jul.2009', 'Ago.2009', 'Sep.2009', 'Oct.2009', 'Nov.2009', 'Dic.2009', 'Ene.2010', 'Feb.2010', 'Mar.2010', 'Abr.2010', 'May.2010', 'Jun.2010', 'Jul.2010', 'Ago.2010', 'Sep.2010', 'Oct.2010', 'Nov.2010', 'Dic.2010', 'Ene.2011', 'Feb.2011', 'Mar.2011', 'Abr.2011', 'May.2011', 'Jun.2011', 'Jul.2011', 'Ago.2011', 'Sep.2011', 'Oct.2011', 'Nov.2011', 'Dic.2011', 'Ene.2012', 'Feb.2012', 'Mar.2012', 'Abr.2012', 'May.2012', 'Jun.2012', 'Jul.2012', 'Ago.2012', 'Sep.2012', 'Oct.2012', 'Nov.2012', 'Dic.2012', 'Ene.2013', 'Feb.2013', 'Mar.2013', 'Abr.2013', 'May.2013', 'Jun.2013', 'Jul.2013', 'Ago.2013', 'Sep.2013', 'Oct.2013', 'Nov.2013', 'Dic.2013', 'Ene.2014', 'Feb.2014', 'Mar.2014', 'Abr.2014', 'May.2014', 'Jun.2014', 'Jul.2014', 'Ago.2014', 'Sep.2014', 'Oct.2014', 'Nov.2014', 'Dic.2014', 'Ene.2015', 'Feb.2015', 'Mar.2015', 'Abr.2015', 'May.2015', 'Jun.2015', 'Jul.2015', 'Ago.2015', 'Sep.2015', 'Oct.2015', 'Nov.2015', 'Dic.2015', 'Ene.2016', 'Feb.2016', 'Mar.2016', 'Abr.2016', 'May.2016', 'Jun.2016', 'Jul.2016', 'Ago.2016', 'Sep.2016', 'Oct.2016', 'Nov.2016', 'Dic.2016', 'Ene.2017', 'Feb.2017', 'Mar.2017', 'Abr.2017', 'May.2017', 'Jun.2017', 'Jul.2017', 'Ago.2017', 'Sep.2017', 'Oct.2017', 'Nov.2017', 'Dic.2017', 'Ene.2018', 'Feb.2018', 'Mar.2018', 'Abr.2018', 'May.2018', 'Jun.2018', 'Jul.2018', 'Ago.2018', 'Sep.2018', 
'Oct.2018', 'Nov.2018', 'Dic.2018', 'Ene.2019', 'Feb.2019', 'Mar.2019', 'Abr.2019', 'May.2019', 'Jun.2019', 'Jul.2019', 'Ago.2019', 'Sep.2019', 'Oct.2019', 'Nov.2019', 'Dic.2019'], 'Valores': [79.9791613436588, 80.134460157334, 81.3964132411787, 87.0734472979407, 92.1414553918693, 88.4557732759908, 87.276600902297, 82.9894833725657, 82.0878667862381, 84.8027241979274, 90.4951179010569, 91.5459269781851, 85.6590427598994, 84.6285223502132, 91.2286931254313, 91.7780026508836, 97.7619066472792, 95.0729001070201, 92.9616004138741, 91.5547630160115, 88.7693298777215, 92.1822069581638, 94.7874215284444, 99.4183590716473, 89.948891875024, 88.6754172656189, 96.7166807859656, 96.6358439215487, 104.744753291331, 101.255860662521, 102.534731811414, 100.113950462611, 100.225943716493, 103.154829161475, 104.110657226655, 111.882439819342, 98.5149773291643, 100.709860257349, 104.115307520329, 110.287664370675, 112.120913726693, 112.211609963734, 112.320516016464, 108.922292629548, 110.725068248671, 111.641043578987, 110.655001246582, 117.493523482104, 103.013101977079, 101.019865758232, 107.121546842635, 108.808082918569, 114.22464786173, 108.887627134331, 110.717741226661, 109.795538603335, 110.86582898004, 112.955997594939, 113.624205299902, 122.425577213106, 106.153025639292, 106.146322827933, 115.833731729949, 117.484525272731, 123.028902592226, 123.162705739863, 121.894425888244, 119.607860483274, 122.292317240048, 123.835093598882, 123.761084689684, 132.102000141137, 116.607351030303, 114.949281605472, 125.021518391095, 126.5570445669, 130.030006298954, 126.941054549273, 129.392947204169, 127.436349292877, 128.310944143643, 129.418112541031, 129.64481294438, 143.601358332839, 122.822590088739, 122.917522307054, 132.130558417313, 130.158216802128, 138.807669844065, 136.27618845403, 138.550598740221, 136.186033978302, 136.751000660028, 138.733908716083, 137.252468378653, 148.24000775099, 130.273708234207, 128.857322355164, 136.602263236801, 141.48154612876, 144.684999949733, 
144.335248367247, 145.942897859897, 143.792408742956, 143.547137549196, 147.503866716747, 147.500876185344, 158.804144814758, 135.792849582942, 135.617893985249, 143.900087140736, 145.609545711116, 148.421357252142, 144.912448443967, 148.187537983575, 145.764306213892, 147.442006383865, 150.850391268266, 147.702289120392, 160.144049895315, 137.921077233007, 137.266874959034, 148.174327103234, 151.686831170872, 150.420693358248, 150.802301648105, 153.471028181483, 149.687757193712, 152.207695357998, 155.919340841107, 153.604235210718, 170.612432978585, 142.950655593597, 146.157758408323, 153.676002191926, 156.01029575947, 158.020470230666, 156.441146410923, 159.303182636246, 158.54740988931, 159.198133719473, 159.453175861771, 159.008032205431, 176.385084478553, 150.217864452394, 147.342977277516, 155.2289887445, 156.514755332027, 163.602696665248, 162.536529072277, 162.718683017657, 162.944325629995, 164.440198382914, 165.157200750526, 162.217614216323, 178.887887960154, 154.454008229951, 151.232831877643, 161.236350608912, 168.94202690539, 174.56407734182, 165.966049835702, 166.98191251272, 166.822400579989, 168.51831530281, 171.895614541147, 170.506934302777, 187.367173849466, 157.187844175622, 154.589821648405, 166.721328164303, 169.144757337677, 175.859353849945, 170.566569664831, 173.258853667138, 172.844689240926, 172.464650840243, 176.009008472759, 174.036427842845, 189.5238889357]}\n"
],
[
"df = pd.DataFrame(diccionario)\n#df.set_index(df['date'], inplace=True)\n#df=df.drop(columns=['date'])\n#df[\"Fechas\"]=pd.to_datetime(df[\"Fechas\"], infer_datetime_format=True)\n#Are there any nulls?\n#df.isnull().sum() #there are none\ndf",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\nsns.distplot(df['Valores'])\nplt.title('Distribución del PBI')",
"C:\\Users\\abner\\anaconda3\\lib\\site-packages\\seaborn\\distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n"
],
[
"df.boxplot('Valores')\n",
"_____no_output_____"
],
[
"\ndf.plot(x ='Fechas', y='Valores', figsize=(15, 5), kind = 'line')\nplt.xlabel(\"Fechas\")\nplt.ylabel(\"PBI\")\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e75663a5dc577c3e80b67bd93a8dcb012e25c1d7 | 120,997 | ipynb | Jupyter Notebook | Datasets/jigsaw-dataset-split-pb-roberta-large-192.ipynb | dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification | 44422e6aeeff227e22dbb5c05101322e9d4aabbe | [
"MIT"
] | 4 | 2020-06-23T02:31:07.000Z | 2020-07-04T11:50:08.000Z | Datasets/jigsaw-dataset-split-pb-roberta-large-192.ipynb | dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification | 44422e6aeeff227e22dbb5c05101322e9d4aabbe | [
"MIT"
] | null | null | null | Datasets/jigsaw-dataset-split-pb-roberta-large-192.ipynb | dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification | 44422e6aeeff227e22dbb5c05101322e9d4aabbe | [
"MIT"
] | null | null | null | 87.173631 | 15,460 | 0.79478 | [
[
[
"# Dependencies",
"_____no_output_____"
]
],
[
[
"import os, warnings, shutil\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom transformers import AutoTokenizer\nfrom sklearn.model_selection import StratifiedKFold\n\n\nSEED = 0\nwarnings.filterwarnings(\"ignore\")",
"_____no_output_____"
]
],
[
[
"# Parameters",
"_____no_output_____"
]
],
[
[
"MAX_LEN = 192\ntokenizer_path = 'jplu/tf-xlm-roberta-large'",
"_____no_output_____"
]
],
[
[
"# Load data",
"_____no_output_____"
]
],
[
[
"train1 = pd.read_csv(\"/kaggle/input/jigsaw-multilingual-toxic-comment-classification/jigsaw-toxic-comment-train.csv\")\ntrain2 = pd.read_csv(\"/kaggle/input/jigsaw-multilingual-toxic-comment-classification/jigsaw-unintended-bias-train.csv\")\ntrain2.toxic = train2.toxic.round().astype(int)\n\ntrain_df = pd.concat([train1[['comment_text', 'toxic']],\n train2[['comment_text', 'toxic']].query('toxic==1'),\n train2[['comment_text', 'toxic']].query('toxic==0').sample(n=100000, random_state=SEED)\n ]).reset_index()\n\nprint('Train samples %d' % len(train_df))\ndisplay(train_df.head())",
"Train samples 435775\n"
]
],
[
[
"# Tokenizer",
"_____no_output_____"
]
],
[
[
"tokenizer = AutoTokenizer.from_pretrained(tokenizer_path)",
"_____no_output_____"
]
],
[
[
"# Data generation sanity check",
"_____no_output_____"
]
],
[
[
"for idx in range(5):\n print('\\nRow %d' % idx)\n max_seq_len = 22\n comment_text = train_df['comment_text'].loc[idx]\n \n enc = tokenizer.encode_plus(comment_text, return_token_type_ids=False, pad_to_max_length=True, max_length=max_seq_len)\n \n print('comment_text : \"%s\"' % comment_text)\n print('input_ids : \"%s\"' % enc['input_ids'])\n print('attention_mask: \"%s\"' % enc['attention_mask'])\n \n assert len(enc['input_ids']) == len(enc['attention_mask']) == max_seq_len",
"\nRow 0\ncomment_text : \"Explanation\nWhy the edits made under my username Hardcore Metallica Fan were reverted? They weren't vandalisms, just closure on some GAs after I voted at New York Dolls FAC. And please don't remove the template from the talk page since I'm retired now.89.205.38.27\"\ninput_ids : \"[0, 5443, 5868, 2320, 44084, 70, 27211, 7, 7228, 1379, 759, 38937, 11627, 151402, 94492, 2063, 11213, 3542, 39531, 3674, 32, 2]\"\nattention_mask: \"[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]\"\n\nRow 1\ncomment_text : \"D'aww! He matches this background colour I'm seemingly stuck with. Thanks. (talk) 21:51, January 11, 2016 (UTC)\"\ninput_ids : \"[0, 391, 25, 11, 98251, 38, 1529, 14858, 90, 903, 76615, 134855, 87, 25, 39, 48903, 214, 538, 179933, 678, 5, 2]\"\nattention_mask: \"[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]\"\n\nRow 2\ncomment_text : \"Hey man, I'm really not trying to edit war. It's just that this guy is constantly removing relevant information and talking to me through edits instead of my talk page. He seems to care more about the formatting than the actual info.\"\ninput_ids : \"[0, 28240, 332, 4, 87, 25, 39, 6183, 959, 31577, 47, 27211, 1631, 5, 1650, 25, 7, 1660, 450, 903, 48948, 2]\"\nattention_mask: \"[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]\"\n\nRow 3\ncomment_text : \"\"\nMore\nI can't make any real suggestions on improvement - I wondered if the section statistics should be later on, or a subsection of \"\"types of accidents\"\" -I think the references may need tidying so that they are all in the exact same format ie date format etc. I can do that later on, if no-one else does first - if you have any preferences for formatting style on references or want to do it yourself please let me know.\n\nThere appears to be a backlog on articles for review so I guess there may be a delay until a reviewer turns up. 
It's listed in the relevant form eg Wikipedia:Good_article_nominations#Transport \"\"\ninput_ids : \"[0, 44, 5455, 87, 831, 25, 18, 3249, 2499, 2773, 157666, 98, 136912, 20, 87, 32195, 297, 2174, 70, 40059, 80835, 2]\"\nattention_mask: \"[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]\"\n\nRow 4\ncomment_text : \"You, sir, are my hero. Any chance you remember what page that's on?\"\ninput_ids : \"[0, 2583, 4, 14095, 4, 621, 759, 40814, 5, 28541, 18227, 398, 37629, 2367, 9191, 450, 25, 7, 98, 32, 2, 1]\"\nattention_mask: \"[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]\"\n"
]
],
[
[
"# 5-Fold split",
"_____no_output_____"
]
],
[
[
"folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=SEED)\n\nfor fold_n, (train_idx, val_idx) in enumerate(folds.split(train_df, train_df['toxic'])):\n print('Fold: %s, Train size: %s, Validation size %s' % (fold_n+1, len(train_idx), len(val_idx)))\n train_df[('fold_%s' % str(fold_n+1))] = 0\n train_df[('fold_%s' % str(fold_n+1))].loc[train_idx] = 'train'\n train_df[('fold_%s' % str(fold_n+1))].loc[val_idx] = 'validation'",
"Fold: 1, Train size: 348620, Validation size 87155\nFold: 2, Train size: 348620, Validation size 87155\nFold: 3, Train size: 348620, Validation size 87155\nFold: 4, Train size: 348620, Validation size 87155\nFold: 5, Train size: 348620, Validation size 87155\n"
]
],
[
[
"# Label distribution",
"_____no_output_____"
]
],
[
[
"for fold_n in range(folds.n_splits):\n fold_n += 1\n fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 6))\n fig.suptitle('Fold %s' % fold_n, fontsize=22) \n sns.countplot(x=\"toxic\", data=train_df[train_df[('fold_%s' % fold_n)] == 'train'], palette=\"GnBu_d\", ax=ax1).set_title('Train')\n sns.countplot(x=\"toxic\", data=train_df[train_df[('fold_%s' % fold_n)] == 'validation'], palette=\"GnBu_d\", ax=ax2).set_title('Validation')\n sns.despine()\n plt.show()",
"_____no_output_____"
]
],
[
[
"# Output 5-fold set",
"_____no_output_____"
]
],
[
[
"train_df.to_csv('5-fold.csv', index=False)\ndisplay(train_df.head())\n\nfor fold_n in range(folds.n_splits):\n if fold_n < 3:\n fold_n += 1\n base_path = 'fold_%d/' % fold_n\n\n # Create dir\n os.makedirs(base_path)\n\n x_train = tokenizer.batch_encode_plus(train_df[train_df[('fold_%s' % fold_n)] == 'train']['comment_text'].values, \n return_token_type_ids=False, \n return_attention_masks=False, \n pad_to_max_length=True, \n max_length=MAX_LEN)\n\n# x_train = np.array([np.array(x_train['input_ids']), \n# np.array(x_train['attention_mask'])])\n x_train = np.array(np.array(x_train['input_ids']))\n\n x_valid = tokenizer.batch_encode_plus(train_df[train_df[('fold_%s' % fold_n)] == 'validation']['comment_text'].values, \n return_token_type_ids=False, \n return_attention_masks=False, \n pad_to_max_length=True, \n max_length=MAX_LEN)\n\n# x_valid = np.array([np.array(x_valid['input_ids']), \n# np.array(x_valid['attention_mask'])])\n x_valid = np.array(np.array(x_valid['input_ids']))\n\n y_train = train_df[train_df[('fold_%s' % fold_n)] == 'train']['toxic'].values\n y_valid = train_df[train_df[('fold_%s' % fold_n)] == 'validation']['toxic'].values\n\n np.save(base_path + 'x_train', np.asarray(x_train))\n np.save(base_path + 'y_train', y_train)\n np.save(base_path + 'x_valid', np.asarray(x_valid))\n np.save(base_path + 'y_valid', y_valid)\n\n print('\\nFOLD: %d' % (fold_n))\n print('x_train shape:', x_train.shape)\n print('y_train shape:', y_train.shape)\n print('x_valid shape:', x_valid.shape)\n print('y_valid shape:', y_valid.shape)\n \n# Compress logs dir\n!tar -cvzf fold_1.tar.gz fold_1\n!tar -cvzf fold_2.tar.gz fold_2\n!tar -cvzf fold_3.tar.gz fold_3\n# !tar -cvzf fold_4.tar.gz fold_4\n# !tar -cvzf fold_5.tar.gz fold_5\n\n# Delete logs dir\nshutil.rmtree('fold_1')\nshutil.rmtree('fold_2')\nshutil.rmtree('fold_3')\n# shutil.rmtree('fold_4')\n# shutil.rmtree('fold_5')",
"_____no_output_____"
]
],
[
[
"# Validation set",
"_____no_output_____"
]
],
[
[
"valid_df = pd.read_csv(\"/kaggle/input/jigsaw-multilingual-toxic-comment-classification/validation.csv\", usecols=['comment_text', 'toxic', 'lang'])\ndisplay(valid_df.head())\n\nx_valid = tokenizer.batch_encode_plus(valid_df['comment_text'].values, \n return_token_type_ids=False, \n return_attention_masks=False, \n pad_to_max_length=True, \n max_length=MAX_LEN)\n\n# x_valid = np.array([np.array(x_valid['input_ids']), \n# np.array(x_valid['attention_mask'])])\nx_valid = np.array(np.array(x_valid['input_ids']))\n\ny_valid = valid_df['toxic'].values\n\nnp.save('x_valid', np.asarray(x_valid))\nnp.save('y_valid', y_valid)\nprint('x_valid shape:', x_valid.shape)\nprint('y_valid shape:', y_valid.shape)",
"_____no_output_____"
]
],
[
[
"# Test set",
"_____no_output_____"
]
],
[
[
"test_df = pd.read_csv(\"/kaggle/input/jigsaw-multilingual-toxic-comment-classification/test.csv\", usecols=['content'])\ndisplay(test_df.head())\n\nx_test = tokenizer.batch_encode_plus(test_df['content'].values, \n return_token_type_ids=False, \n return_attention_masks=False, \n pad_to_max_length=True, \n max_length=MAX_LEN)\n\n# x_test = np.array([np.array(x_test['input_ids']), \n# np.array(x_test['attention_mask'])])\nx_test = np.array(np.array(x_test['input_ids']))\n\n\nnp.save('x_test', np.asarray(x_test))\nprint('x_test shape:', x_test.shape)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e756692b2770e0cc2c17ff8aaedc1e0337734779 | 13,284 | ipynb | Jupyter Notebook | 16_PDEs/.ipynb_checkpoints/16_PDEs-Students-checkpoint.ipynb | nachrisman/PHY494 | bac0dd5a7fe6f59f9e2ccaee56ebafcb7d97e2e7 | [
"CC-BY-4.0"
] | null | null | null | 16_PDEs/.ipynb_checkpoints/16_PDEs-Students-checkpoint.ipynb | nachrisman/PHY494 | bac0dd5a7fe6f59f9e2ccaee56ebafcb7d97e2e7 | [
"CC-BY-4.0"
] | null | null | null | 16_PDEs/.ipynb_checkpoints/16_PDEs-Students-checkpoint.ipynb | nachrisman/PHY494 | bac0dd5a7fe6f59f9e2ccaee56ebafcb7d97e2e7 | [
"CC-BY-4.0"
] | null | null | null | 28.691145 | 266 | 0.503689 | [
[
[
"# 16 PDEs: Solution with Time Stepping (Students)\n\n## Heat Equation\nThe **heat equation** can be derived from Fourier's law and energy conservation (see the [lecture notes on the heat equation (PDF)](https://github.com/ASU-CompMethodsPhysics-PHY494/PHY494-resources/blob/master/16_PDEs/16_PDEs_LectureNotes_HeatEquation.pdf))\n\n$$\n\\frac{\\partial T(\\mathbf{x}, t)}{\\partial t} = \\frac{K}{C\\rho} \\nabla^2 T(\\mathbf{x}, t),\n$$",
"_____no_output_____"
],
[
"## Problem: insulated metal bar (1D heat equation)\nA metal bar of length $L$ is insulated along it lengths and held at 0ºC at its ends. Initially, the whole bar is at 100ºC. Calculate $T(x, t)$ for $t>0$.",
"_____no_output_____"
],
[
"### Analytic solution\nSolve by separation of variables and power series: The general solution that obeys the boundary conditions $T(0, t) = T(L, t) = 0$ is\n\n$$\nT(x, t) = \\sum_{n=1}^{+\\infty} A_n \\sin(k_n x)\\, \\exp\\left(-\\frac{k_n^2 K t}{C\\rho}\\right), \\quad k_n = \\frac{n\\pi}{L}\n$$",
"_____no_output_____"
],
[
"The specific solution that satisfies $T(x, 0) = T_0 = 100^\\circ\\text{C}$ leads to $A_n = 4 T_0/n\\pi$ for $n$ odd:\n\n$$\nT(x, t) = \\sum_{n=1,3,5,\\dots}^{+\\infty} \\frac{4 T_0}{n \\pi} \\sin(k_n x)\\, \\exp\\left(-\\frac{k_n^2 K t}{C\\rho}\\right)\n$$",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.style.use('ggplot')",
"_____no_output_____"
],
[
"def T_bar(x, t, T0, L, K=237, C=900, rho=2700, nmax=1000):\n T = np.zeros_like(x)\n eta = K / (C*rho)\n for n in range(1, nmax, 2):\n kn = n*np.pi/L\n T += 4*T0/(np.pi * n) * np.sin(kn*x) * np.exp(-kn*kn * eta * t)\n return T",
"_____no_output_____"
],
[
"T0 = 100.\nL = 1.0\nX = np.linspace(0, L, 100)\nfor t in np.linspace(0, 3000, 50):\n plt.plot(X, T_bar(X, t, T0, L))\nplt.xlabel(r\"$x$ (m)\")\nplt.ylabel(r\"$T$ ($^\\circ$C)\");",
"_____no_output_____"
]
],
[
[
"### Numerical solution: Leap frog\nDiscretize (finite difference):\n\nFor the time domain we only have the initial values so we use a simple forward difference for the time derivative:\n\n$$\n\\frac{\\partial T(x,t)}{\\partial t} \\approx \\frac{T(x, t+\\Delta t) - T(x, t)}{\\Delta t}\n$$",
"_____no_output_____"
],
[
"For the spatial derivative we have initially all values so we can use the more accurate central difference approximation:\n\n$$\n\\frac{\\partial^2 T(x, t)}{\\partial x^2} \\approx \\frac{T(x+\\Delta x, t) + T(x-\\Delta x, t) - 2 T(x, t)}{\\Delta x^2}\n$$",
"_____no_output_____"
],
[
"Thus, the heat equation can be written as the finite difference equation\n\n$$\n\\frac{T(x, t+\\Delta t) - T(x, t)}{\\Delta t} = \\frac{K}{C\\rho} \\frac{T(x+\\Delta x, t) + T(x-\\Delta x, t) - 2 T(x, t)}{\\Delta x^2}\n$$",
"_____no_output_____"
],
[
"which can be reordered so that the RHS contains only known terms and the LHS future terms. Index $i$ is the spatial index, and $j$ the time index: $x = x_0 + i \\Delta x$, $t = t_0 + j \\Delta t$.\n\n$$\nT_{i, j+1} = (1 - 2\\eta) T_{i,j} + \\eta(T_{i+1,j} + T_{i-1, j}), \\quad \\eta := \\frac{K \\Delta t}{C \\rho \\Delta x^2}\n$$\n\nThus we can step forward in time (\"leap frog\"), using only known values.",
"_____no_output_____"
],
[
"### Activity: Solve the 1D heat equation numerically for an iron bar\n* $K = 237$ W/mK\n* $C = 900$ J/K\n* $\\rho = 2700$ kg/m<sup>3</sup>\n* $L = 1$ m\n* $T_0 = 373$ K and $T_b = 273$ K\n* $T(x, 0) = T_0$ and $T(0, t) = T(L, t) = T_b$\n\nImplement the Leapfrog time-stepping algorithm and visualize the results.",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n%matplotlib notebook",
"_____no_output_____"
],
[
"L_rod = 1. # m\nt_max = 3000. # s\n\nDx = 0.02 # m\nDt = 2 # s\n\nNx = int(L_rod // Dx)\nNt = int(t_max // Dt)\n\nKappa = 237 # W/(m K)\nCHeat = 900 # J/K\nrho = 2700 # kg/m^3\n\nT0 = 373 # K\nTb = 273 # K\n\nraise NotImplementedError\n# eta = \n\n\nstep = 20 # plot solution every n steps\n\nprint(\"Nx = {0}, Nt = {1}\".format(Nx, Nt))\nprint(\"eta = {0}\".format(eta))\n\nT = np.zeros(Nx)\nT_new = np.zeros_like(T)\nT_plot = np.zeros((Nt//step + 1, Nx))\n\nraise NotImplementedError\n# initial conditions\n# ...\n\n# boundary conditions\n# ...\n\nt_index = 0\nT_plot[t_index, :] = T\nfor jt in range(1, Nt):\n \n raise NotImplementedError\n \n if jt % step == 0 or jt == Nt-1:\n t_index += 1\n # save the new solution for later plotting\n # T_plot[t_index, :] = \n print(\"Iteration {0:5d}\".format(jt), end=\"\\r\")\nelse:\n print(\"Completed {0:5d} iterations: t={1} s\".format(jt, jt*Dt))",
"_____no_output_____"
]
],
[
[
"#### Visualization\nVisualize (you can use the code as is). \n\nNote how we are making the plot use proper units by mutiplying with `Dt * step` and `Dx`.",
"_____no_output_____"
]
],
[
[
"X, Y = np.meshgrid(range(T_plot.shape[0]), range(T_plot.shape[1]))\nZ = T_plot[X, Y]\nfig = plt.figure()\nax = fig.add_subplot(111, projection=\"3d\")\nax.plot_wireframe(X*Dt*step, Y*Dx, Z)\nax.set_xlabel(r\"time $t$ (s)\")\nax.set_ylabel(r\"position $x$ (m)\")\nax.set_zlabel(r\"temperature $T$ (K)\")\nfig.tight_layout()",
"_____no_output_____"
]
],
[
[
"## Stability of the solution\n\n### Empirical investigation of the stability\nInvestigate the solution for different values of `Dt` and `Dx`. Can you discern patters for stable/unstable solutions?\n\nReport `Dt`, `Dx`, and `eta`\n* for 3 stable solutions \n* for 3 unstable solutions\n",
"_____no_output_____"
],
[
"Wrap your heat diffusion solver in a function so that it becomes easier to run it:",
"_____no_output_____"
]
],
[
[
"def calculate_T(L_rod=1, t_max=3000, Dx=0.02, Dt=2, T0=373, Tb=273,\n step=20):\n Nx = int(L_rod // Dx)\n Nt = int(t_max // Dt)\n\n Kappa = 237 # W/(m K)\n CHeat = 900 # J/K\n rho = 2700 # kg/m^3\n\n raise NotImplementedError\n \n return T_plot\n\ndef plot_T(T_plot, Dx, Dt, step):\n X, Y = np.meshgrid(range(T_plot.shape[0]), range(T_plot.shape[1]))\n Z = T_plot[X, Y]\n fig = plt.figure()\n ax = fig.add_subplot(111, projection=\"3d\")\n ax.plot_wireframe(X*Dt*step, Y*Dx, Z)\n ax.set_xlabel(r\"time $t$ (s)\")\n ax.set_ylabel(r\"position $x$ (m)\")\n ax.set_zlabel(r\"temperature $T$ (K)\")\n fig.tight_layout()\n return ax",
"_____no_output_____"
],
[
"T_plot = calculate_T(Dx=0.02, Dt=2, step=20)\nplot_T(T_plot, 0.02, 2, 20)",
"_____no_output_____"
]
],
[
[
"For which values of $\\Delta t$ and $\\Delta x$ does the solution become unstable?",
"_____no_output_____"
],
[
"### Von Neumann stability analysis ",
"_____no_output_____"
],
[
"If the difference equation solution diverges then we *know* that we have a bad approximation to the original PDE. ",
"_____no_output_____"
],
[
"Von Neumann stability analysis starts from the assumption that *eigenmodes* of the difference equation can be written as\n\n$$\nT_{m,j} = \\xi(k)^j e^{ikm\\Delta x}, \\quad t=j\\Delta t,\\ x=m\\Delta x \n$$\n\nwith the unknown wave vectors $k=2\\pi/\\lambda$ and unknown complex functions $\\xi(k)$.",
"_____no_output_____"
],
[
"Solutions of the difference equation can be written as linear superpositions of these basis functions. But they are only stable if the eigenmodes are stable, i.e., will not grow in time (with $j$). This is the case when \n$$\n|\\xi(k)| < 1\n$$\nfor all $k$.",
"_____no_output_____"
],
[
"Insert the eigenmodes into the finite difference equation\n\n$$\nT_{m, j+1} = (1 - 2\\eta) T_{m,j} + \\eta(T_{m+1,j} + T_{m-1, j})\n$$\n\nto obtain \n\n\\begin{align}\n\\xi(k)^{j+1} e^{ikm\\Delta x} &= (1 - 2\\eta) \\xi(k)^{j} e^{ikm\\Delta x} \n + \\eta(\\xi(k)^{j} e^{ik(m+1)\\Delta x} + \\xi(k)^{j} e^{ik(m-1)\\Delta x})\\\\\n\\xi(k) &= (1 - 2\\eta) + \\eta(e^{ik\\Delta x} + e^{-ik\\Delta x})\\\\\n\\xi(k) &= 1 - 2\\eta + 2\\eta \\cos k\\Delta x\\\\\n\\xi(k) &= 1 + 2\\eta\\big(\\cos k\\Delta x - 1\\big)\n\\end{align}",
"_____no_output_____"
],
[
"For $|\\xi(k)| < 1$ (and all possible $k$):\n\n\\begin{align}\n|\\xi(k)| < 1 \\quad &\\Leftrightarrow \\quad \\xi^2(k) < 1\\\\\n(1 + 2y)^2 = 1 + 4y + 4y^2 &< 1 \\quad \\text{with}\\ \\ y = \\eta(\\cos k\\Delta x - 1)\\\\\ny(1 + y) &< 0 \\quad \\Leftrightarrow \\quad -1 < y < 0\\\\\n\\eta(\\cos k\\Delta x - 1) &\\leq 0 \\quad \\forall k \\quad (\\eta > 0, -1 \\leq \\cos x \\leq 1)\\\\\n\\eta(\\cos k\\Delta x - 1) &> -1\\\\\n\\eta &< \\frac{1}{1 - \\cos k\\Delta x}\\\\\n\\eta &= \\frac{K \\Delta t}{C \\rho \\Delta x^2} < \\frac{1}{2}\n\\end{align}",
"_____no_output_____"
],
[
"Thus, solutions are only stable for $\\eta < 1/2$. In particular, decreasing $\\Delta t$ will always improve stability, But decreasing $\\Delta x$ requires an quadratic *increase* in $\\Delta t$!",
"_____no_output_____"
],
[
"Note\n* Perform von Neumann stability analysis when possible (depends on the PDE and the specific discretization).\n* Test different combinations of $\\Delta t$ and $\\Delta x$.\n* There is no guarantee that decreasing both will lead to more stable solutions!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e7568e66bd92d1924574e7876e011f459623ddb3 | 17,869 | ipynb | Jupyter Notebook | notebook/HSC perturbations.ipynb | jmeyers314/jtrace | 9149a5af766fb9a9cd7ebfe6f3f18de0eb8b2e89 | [
"BSD-2-Clause"
] | 13 | 2018-12-24T03:55:04.000Z | 2021-11-09T11:40:40.000Z | notebook/HSC perturbations.ipynb | jmeyers314/batoid | 85cbd13a9573ddca158c9c21ced2ef0c5ad5cd25 | [
"BSD-2-Clause"
] | 65 | 2017-08-15T07:19:05.000Z | 2021-09-08T17:44:57.000Z | notebook/HSC perturbations.ipynb | jmeyers314/jtrace | 9149a5af766fb9a9cd7ebfe6f3f18de0eb8b2e89 | [
"BSD-2-Clause"
] | 10 | 2019-02-19T07:02:31.000Z | 2021-12-10T22:19:40.000Z | 38.345494 | 121 | 0.494152 | [
[
[
"import batoid\nimport galsim\nimport numpy as np\nfrom IPython.display import display\nfrom ipywidgets import interact, interactive_output, interact_manual\nimport ipywidgets as widgets\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"fiducial_telescope = batoid.Optic.fromYaml(\"HSC.yaml\")",
"_____no_output_____"
],
[
"def spotPlot(telescope, wavelength, theta_x, theta_y, logscale, ax):\n rays = batoid.RayVector.asPolar(\n optic=telescope, \n inner=telescope.pupilObscuration*telescope.pupilSize/2,\n theta_x=np.deg2rad(theta_x), theta_y=np.deg2rad(theta_y),\n nrad=48, naz=192, wavelength=wavelength*1e-9\n )\n\n telescope.trace(rays)\n w = ~rays.vignetted\n spots = np.vstack([rays.x[w], rays.y[w]])\n spots -= np.mean(spots, axis=1)[:,None]\n spots *= 1e6 # meters -> microns\n\n ax.scatter(spots[0], spots[1], s=1, alpha=0.5)\n ax.set_xlim(-1.5*10**logscale, 1.5*10**logscale)\n ax.set_ylim(-1.5*10**logscale, 1.5*10**logscale)\n ax.set_title(r\"$\\theta_x = {:4.2f}\\,,\\theta_y = {:4.2f}$\".format(theta_x, theta_y))\n ax.set_xlabel(\"microns\")\n ax.set_ylabel(\"microns\")",
"_____no_output_____"
],
[
"def wavefrontPlot(telescope, wavelength, theta_x, theta_y, ax):\n wf = batoid.wavefront(\n telescope, \n np.deg2rad(theta_x), np.deg2rad(theta_y), \n wavelength*1e-9, nx=128\n ) \n wfplot = ax.imshow(\n wf.array,\n extent=np.r_[-1,1,-1,1]*telescope.pupilSize/2\n )\n ax.set_xlabel(\"meters\")\n ax.set_ylabel(\"meters\")\n plt.colorbar(wfplot, ax=ax)",
"_____no_output_____"
],
[
"def fftPSFPlot(telescope, wavelength, theta_x, theta_y, ax):\n fft = batoid.fftPSF(\n telescope, \n np.deg2rad(theta_x), np.deg2rad(theta_y), \n wavelength*1e-9, nx=32\n )\n # We should be very close to primitive vectors that are a multiple of\n # [1,0] and [0,1]. If the multiplier is negative though, then this will\n # make it look like our PSF is upside-down. So we check for this here and \n # invert if necessary. This will make it easier to compare with the spot \n # diagram, for instance\n if fft.primitiveVectors[0,0] < 0:\n fft.array = fft.array[::-1,::-1]\n\n scale = np.sqrt(np.abs(np.linalg.det(fft.primitiveVectors)))\n nxout = fft.array.shape[0]\n fft.array /= np.sum(fft.array)\n fftplot = ax.imshow(\n fft.array,\n extent=np.r_[-1,1,-1,1]*scale*nxout/2*1e6\n )\n ax.set_title(\"FFT PSF\")\n ax.set_xlabel(\"micron\")\n ax.set_ylabel(\"micron\") \n plt.colorbar(fftplot, ax=ax)",
"_____no_output_____"
],
[
"def huygensPSFPlot(telescope, wavelength, theta_x, theta_y, ax):\n    huygensPSF = batoid.huygensPSF(\n        telescope, \n        np.deg2rad(theta_x), np.deg2rad(theta_y),\n        wavelength*1e-9, nx=32\n    )\n    # We should be very close to primitive vectors that are a multiple of\n    # [1,0] and [0,1]. If the multiplier is negative though, then this will\n    # make it look like our PSF is upside-down. So we check for this here and\n    # invert if necessary. This will make it easier to compare with the spot\n    # diagram, for instance\n    if huygensPSF.primitiveVectors[0,0] < 0:\n        huygensPSF.array = huygensPSF.array[::-1,::-1]\n\n    huygensPSF.array /= np.sum(huygensPSF.array)\n    scale = np.sqrt(np.abs(np.linalg.det(huygensPSF.primitiveVectors)))\n    nxout = huygensPSF.array.shape[0]\n\n    # draw on the Axes passed in, not the current pyplot figure\n    huygensplot = ax.imshow(\n        huygensPSF.array,\n        extent=np.r_[-1,1,-1,1]*scale*nxout/2*1e6\n    )\n    ax.set_title(\"Huygens PSF\")\n    ax.set_xlabel(\"micron\")\n    ax.set_ylabel(\"micron\")\n    plt.colorbar(huygensplot, ax=ax)",
"_____no_output_____"
],
[
"what = dict(\n do_spot = widgets.Checkbox(value=True, description='Spot'),\n do_wavefront = widgets.Checkbox(value=True, description='Wavefront'),\n do_fftPSF = widgets.Checkbox(value=True, description='FFT PSF'),\n do_huygensPSF = widgets.Checkbox(value=True, description='Huygens PSF')\n)\nwhere = dict(\n wavelength=widgets.FloatSlider(min=300.0,max=1100.0,step=25.0,value=625.0, description=\"$\\lambda$ (nm)\"),\n theta_x=widgets.FloatSlider(min=-0.9,max=0.9,step=0.05,value=-0.5, description=\"$\\\\theta_x (deg)$\"),\n theta_y=widgets.FloatSlider(min=-0.9,max=0.9,step=0.05,value=0.0, description=\"$\\\\theta_y (deg)$\"),\n logscale=widgets.FloatSlider(min=1, max=3, step=0.1, value=1, description=\"scale\")\n)\nperturb = dict(\n optic=widgets.Dropdown(\n options=fiducial_telescope.itemDict.keys(), \n value='SubaruHSC.HSC'\n ),\n dx=widgets.FloatSlider(min=-0.2, max=0.2, step=0.05, value=0.0, description=\"dx ($mm$)\"),\n dy=widgets.FloatSlider(min=-0.2, max=0.2, step=0.05, value=0.0, description=\"dy ($mm$)\"),\n dz=widgets.FloatSlider(min=-100, max=100, step=1, value=0.0, description=\"dz ($\\mu m$)\"),\n dthx=widgets.FloatSlider(min=-1, max=1, step=0.1, value=0.0, description=\"d$\\phi_x$ (arcmin)\"),\n dthy=widgets.FloatSlider(min=-1, max=1, step=0.1, value=0.0, description=\"d$\\phi_y$ (arcmin)\"),\n)\n\ndef f(do_spot, do_wavefront, do_fftPSF, do_huygensPSF,\n wavelength, theta_x, theta_y, optic, dx, dy, dz, dthx, dthy, logscale, **kwargs):\n\n telescope = (fiducial_telescope\n .withGloballyShiftedOptic(optic, [dx*1e-3, dy*1e-3, dz*1e-6])\n .withLocallyRotatedOptic(optic, batoid.RotX(dthx*np.pi/180/60).dot(batoid.RotY(dthy*np.pi/180/60)))\n )\n nplot = sum([do_spot, do_wavefront, do_fftPSF, do_huygensPSF])\n \n if nplot > 0:\n fig, axes = plt.subplots(ncols=nplot, figsize=(4*nplot, 4), squeeze=False)\n\n iax = 0\n if do_spot:\n ax = axes.ravel()[iax]\n spotPlot(telescope, wavelength, theta_x, theta_y, logscale, ax)\n iax += 1\n\n if do_wavefront:\n ax = 
axes.ravel()[iax]\n wavefrontPlot(telescope, wavelength, theta_x, theta_y, ax)\n iax += 1\n\n if do_fftPSF:\n ax = axes.ravel()[iax]\n fftPSFPlot(telescope, wavelength, theta_x, theta_y, ax)\n iax += 1\n\n if do_huygensPSF:\n ax = axes.ravel()[iax]\n huygensPSFPlot(telescope, wavelength, theta_x, theta_y, ax)\n\n fig.tight_layout()\n plt.show()\n\nall_widgets = {}\nfor d in [what, where, perturb]:\n for k in d:\n all_widgets[k] = d[k]\n\noutput = interactive_output(f, all_widgets)\ndisplay(\n widgets.VBox([\n widgets.HBox([\n widgets.VBox([v for v in what.values()]), \n widgets.VBox([v for v in where.values()]), \n widgets.VBox([v for v in perturb.values()])\n ]),\n output\n ]),\n)",
"_____no_output_____"
],
[
"@interact(wavelen=widgets.FloatSlider(min=300.0,max=1100.0,step=25.0,value=625.0,\n description=\"$\\lambda$ (nm)\"),\n theta_x=widgets.FloatSlider(min=-0.90,max=0.90,step=0.05,value=-0.5,\n description=\"$\\\\theta_x (deg)$\"),\n theta_y=widgets.FloatSlider(min=-0.90,max=0.90,step=0.05,value=0.0,\n description=\"$\\\\theta_y (deg)$\"),\n optic=widgets.Dropdown(\n options=fiducial_telescope.itemDict.keys(), \n value='SubaruHSC.HSC'\n ),\n dx=widgets.FloatSlider(min=-0.2, max=0.2, step=0.05, value=0.0,\n description=\"dx ($mm$)\"),\n dy=widgets.FloatSlider(min=-0.2, max=0.2, step=0.05, value=0.0,\n description=\"dy ($mm$)\"),\n dz=widgets.FloatSlider(min=-100, max=100, step=1, value=0.0,\n description=\"dz ($\\mu m$)\"),\n dthx=widgets.FloatSlider(min=-1, max=1, step=0.1, value=0.0,\n description=\"d$\\phi_x$ (arcmin)\"),\n dthy=widgets.FloatSlider(min=-1, max=1, step=0.1, value=0.0,\n description=\"d$\\phi_y$ (arcmin)\"))\ndef zernike(wavelen, theta_x, theta_y, optic, dx, dy, dz, dthx, dthy):\n telescope = (fiducial_telescope\n .withGloballyShiftedOptic(optic, [dx*1e-3, dy*1e-3, dz*1e-6])\n .withLocallyRotatedOptic(\n optic,\n batoid.RotX(dthx*np.pi/180/60).dot(batoid.RotY(dthy*np.pi/180/60))\n )\n )\n z = batoid.zernike(\n telescope, np.deg2rad(theta_x), np.deg2rad(theta_y), wavelen*1e-9,\n jmax=22, eps=0.1, nx=128\n )\n for i in range(1, len(z)//2+1):\n print(\"{:6d} {:7.3f} {:6d} {:7.3f}\".format(i, z[i], i+11, z[i+11]))",
"_____no_output_____"
],
[
"@interact_manual(\n wavelen=widgets.FloatSlider(min=300.0,max=1100.0,step=25.0,value=625.0,\n description=\"$\\lambda$ (nm)\"),\n optic=widgets.Dropdown(\n options=fiducial_telescope.itemDict.keys(), \n value='SubaruHSC.HSC'\n ),\n z_coef=widgets.Dropdown(\n options=list(range(1, 56)), value=1,\n description=\"Zernike coefficient\"\n ),\n z_amp=widgets.FloatSlider(min=-0.1, max=0.1, step=0.01, value=0.0,\n description=\"Zernike amplitude\"),\n dx=widgets.FloatSlider(min=-0.2, max=0.2, step=0.05, value=0.0,\n description=\"dx ($mm$)\"),\n dy=widgets.FloatSlider(min=-0.2, max=0.2, step=0.05, value=0.0,\n description=\"dy ($mm$)\"),\n dz=widgets.FloatSlider(min=-500, max=500, step=10, value=0.0,\n description=\"dz ($\\mu m$)\"),\n dthx=widgets.FloatSlider(min=-1, max=1, step=0.1, value=0.0,\n description=\"d$\\phi_x$ (arcmin)\"),\n dthy=widgets.FloatSlider(min=-1, max=1, step=0.1, value=0.0,\n description=\"d$\\phi_y$ (arcmin)\"),\n do_resid=widgets.Checkbox(value=False, description=\"residual?\"))\ndef zFoV(wavelen, optic, z_coef, z_amp, dx, dy, dz, dthx, dthy, do_resid):\n telescope = (fiducial_telescope\n .withGloballyShiftedOptic(optic, [dx*1e-3, dy*1e-3, dz*1e-6])\n .withLocallyRotatedOptic(\n optic,\n batoid.RotX(dthx*np.pi/180/60).dot(batoid.RotY(dthy*np.pi/180/60))\n )\n )\n if z_amp != 0:\n try:\n interface = telescope[optic]\n s0 = interface.surface\n except:\n pass\n else:\n s1 = batoid.Sum([\n s0,\n batoid.Zernike(\n [0]*z_coef+[z_amp*wavelen*1e-9], \n R_outer=interface.outRadius,\n R_inner=interface.inRadius,\n )\n ])\n telescope = telescope.withSurface(optic, s1)\n\n thxs = np.linspace(-0.75, 0.75, 15)\n thys = np.linspace(-0.75, 0.75, 15)\n\n img = np.zeros((15, 15), dtype=float)\n vmin = -1\n vmax = 1\n zs = []\n thxplot = []\n thyplot = []\n for ix, thx in enumerate(thxs):\n for iy, thy in enumerate(thys):\n if np.hypot(thx, thy) > 0.74: \n continue\n z = batoid.zernike(\n telescope, np.deg2rad(thx), np.deg2rad(thy), wavelen*1e-9,\n jmax=21, 
eps=0.231, nx=16\n )\n thxplot.append(thx)\n thyplot.append(thy)\n if do_resid:\n vmin = -0.05\n vmax = 0.05\n z -= batoid.zernike(\n fiducial_telescope, np.deg2rad(thx), np.deg2rad(thy), 625e-9,\n jmax=21, eps=0.231, nx=16\n )\n zs.append(z)\n zs = np.array(zs).T\n thxplot = np.array(thxplot)\n thyplot = np.array(thyplot)\n fig = plt.figure(figsize=(13, 8))\n batoid.plotUtils.zernikePyramid(thxplot, thyplot, zs[4:], vmin=vmin, vmax=vmax, fig=fig)\n plt.show()\n\n # Compute double Zernike \n fBasis = galsim.zernike.zernikeBasis(22, thxplot, thyplot, 0.75)\n dzs, _, _, _ = np.linalg.lstsq(fBasis.T, zs.T, rcond=None)\n dzs = dzs[:,4:]\n asort = np.argsort(np.abs(dzs).ravel())[::-1]\n focal_idx, pupil_idx = np.unravel_index(asort[:10], dzs.shape)\n cumsum = 0.0\n for fid, pid in zip(focal_idx, pupil_idx):\n val = dzs[fid, pid]\n cumsum += val**2\n print(\"{:3d} {:3d} {:8.4f} {:8.4f}\".format(fid, pid+4, val, np.sqrt(cumsum)))\n print(\"sum sqr dz {:8.4f}\".format(np.sqrt(np.sum(dzs**2))))",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e756a77c8118c5deffdb2a7d13a7413d38eff5e8 | 146,519 | ipynb | Jupyter Notebook | notebooks/Anatoly's Mafia prediction.ipynb | Nikishul/Kaggle-NMA-Competition | e27b3d04e4cb08536b50ebd027189d11e1d4169e | [
"MIT"
] | 4 | 2019-02-11T11:52:04.000Z | 2019-03-07T17:57:19.000Z | notebooks/Anatoly's Mafia prediction.ipynb | Nikishul/Kaggle-NMA-Competition | e27b3d04e4cb08536b50ebd027189d11e1d4169e | [
"MIT"
] | null | null | null | notebooks/Anatoly's Mafia prediction.ipynb | Nikishul/Kaggle-NMA-Competition | e27b3d04e4cb08536b50ebd027189d11e1d4169e | [
"MIT"
] | null | null | null | 159.606754 | 42,068 | 0.881613 | [
[
[
"# Initial notebook for classifier analysis and comparison",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nimport pandas as pd\nimport numpy as np\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer\nfrom sklearn.model_selection import train_test_split\nfrom nltk.corpus import stopwords\nfrom sklearn.svm import LinearSVC\nimport matplotlib\nfrom sklearn.metrics import accuracy_score\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n\nfrom src import data_prepare\nfrom src import feature_extraction\n\n\nimport seaborn as sns\nfrom sklearn.metrics import confusion_matrix\n\n\nfrom sklearn.ensemble import RandomForestClassifier\nfrom skmultilearn.problem_transform import ClassifierChain\nfrom sklearn.linear_model import LogisticRegression\nfrom skmultilearn.problem_transform import BinaryRelevance\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.naive_bayes import MultinomialNB\n\nimport warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
]
],
[
[
"## Load data",
"_____no_output_____"
]
],
[
[
"post, thread=data_prepare.load_train_data()\npost_test, thread_test=data_prepare.load_test_data()\nlabel_map=data_prepare.load_label_map()",
"_____no_output_____"
],
[
"label_map",
"_____no_output_____"
],
[
"num=len(thread)\ntrain_data_to_clean=[]\ntest_data_to_clean=[]",
"_____no_output_____"
],
[
"post.head(3)",
"_____no_output_____"
],
[
"thread.head(3)",
"_____no_output_____"
]
],
[
[
"## Basic cleaning and transforming to the bag of words representation",
"_____no_output_____"
]
],
[
[
"train_data_to_clean=data_prepare.get_all_text_data_from_posts(post, thread)\n \ntest_data_to_clean=data_prepare.get_all_text_data_from_posts(post_test, thread_test)",
"_____no_output_____"
],
[
"clean_train_data = [data_prepare.clean(s) for s in train_data_to_clean]\n \nclean_test_data = [data_prepare.clean(s) for s in test_data_to_clean]",
"_____no_output_____"
],
[
"vectorizer = TfidfVectorizer(min_df=3,sublinear_tf=True, norm='l2', encoding='latin-1', ngram_range=(1, 2))\n\ntrain_data_features = vectorizer.fit_transform(clean_train_data).toarray()\n\ntest_data_features = vectorizer.transform(clean_test_data).toarray()",
"_____no_output_____"
]
],
[
[
"## Get additional features",
"_____no_output_____"
]
],
[
[
"train_data_features=feature_extraction.get_features(post, thread, train_data_features)\ntest_data_features=feature_extraction.get_features(post_test, thread_test, test_data_features)",
"_____no_output_____"
],
[
"X_test = test_data_features\n# keep a held-out validation split; the cells below report accuracy on X_val/y_val\nX_train, X_val, y_train, y_val = train_test_split(\n    train_data_features, thread[\"thread_label_id\"], test_size=0.15, random_state=87\n)",
"_____no_output_____"
]
],
[
[
"### Model efficiency comparison; eventually it came down to RandomForest vs LinearSVC",
"_____no_output_____"
]
],
[
[
"models = [\n RandomForestClassifier(n_estimators=120),\n #ClassifierChain(LogisticRegression()),\n #BinaryRelevance(GaussianNB()),\n LinearSVC(),\n #MultinomialNB(),\n #LogisticRegression()\n]",
"_____no_output_____"
],
[
"CV=4\ncv_df = pd.DataFrame(index=range(CV * len(models)))\nentries = []\n\nfor model in models:\n model_name = model.__class__.__name__\n accuracies = cross_val_score(model, X_train, y_train, scoring='accuracy', cv=CV)\n for fold_idx, accuracy in enumerate(accuracies):\n entries.append((model_name, fold_idx, accuracy))\ncv_df = pd.DataFrame(entries, columns=['model_name', 'fold_idx', 'accuracy'])\n\n\nsns.boxplot(x='model_name', y='accuracy', data=cv_df)\nsns.stripplot(x='model_name', y='accuracy', data=cv_df, size=10, jitter=True, edgecolor=\"gray\", linewidth=2)\n\nplt.show()",
"_____no_output_____"
],
[
"cv_df.groupby('model_name').accuracy.mean()",
"_____no_output_____"
]
],
[
[
"#### Looks like Random Forest is more stable. I have a better comparison between these two models' predictions in the submission statistics notebook",
"_____no_output_____"
]
],
[
[
"forest = RandomForestClassifier(n_estimators = 110,max_depth=5)\n\nforest = forest.fit(X_train, y_train)\n\n\ntrain_predict = forest.predict(X_train)\ntest_predict = forest.predict(X_val)\n\nfrom sklearn.metrics import accuracy_score\nacc = accuracy_score(y_val, test_predict)\n\nprint(\"Accuracy on the training dataset: {:.2f}\".format(accuracy_score(y_train, train_predict)*100))\nprint(\"Accuracy on the val dataset: {:.2f}\".format(accuracy_score(y_val, test_predict)*100))",
"Accuracy on the training dataset: 82.57\nAccuracy on the val dataset: 75.93\n"
],
[
"conf_mat = confusion_matrix(y_val, test_predict,labels=label_map[\"type_id\"].values)\nfig, ax = plt.subplots(figsize=(13,13))\nsns.heatmap(conf_mat, annot=True, fmt='d',xticklabels=label_map.index, yticklabels=label_map.index)\nplt.ylabel('Actual')\nplt.xlabel('Predicted')\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(accuracy,n_estimators)",
"_____no_output_____"
],
[
"model = LinearSVC()\nmodel.fit(X_train, y_train)\ny_pred = model.predict(X_val)\ntr_pred=model.predict(X_train)\nprint(\"Accuracy on val dataset: {:.2f}\".format((accuracy_score(y_pred, y_val))*100))\nprint(\"Accuracy on train dataset: {:.2f}\".format((accuracy_score(tr_pred, y_train))*100))",
"Accuracy on val dataset: 79.63\nAccuracy on train dataset: 99.67\n"
]
],
[
[
"**Overfitting is for sure the biggest issue with this dataset**",
"_____no_output_____"
]
],
[
[
"conf_mat = confusion_matrix(y_val, y_pred,labels=label_map[\"type_id\"].values)\nfig, ax = plt.subplots(figsize=(13,13))\nsns.heatmap(conf_mat, annot=True, fmt='d', xticklabels=label_map.index, yticklabels=label_map.index)\nplt.ylabel('Actual')\nplt.xlabel('Predicted')\nplt.show()",
"_____no_output_____"
]
],
[
[
"### It's pretty clearly seen from the confusion matrices above that closed-setup is the toughest class to predict",
"_____no_output_____"
]
],
[
[
"from sklearn.naive_bayes import MultinomialNB\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.feature_extraction.text import TfidfTransformer\n\nnb = Pipeline([('vect', CountVectorizer()),\n               ('tfidf', TfidfTransformer()),\n               ('clf', MultinomialNB()),\n              ])\nnb.fit(X_train, y_train)\n\nfrom sklearn.metrics import classification_report\ny_pred = nb.predict(X_test)\n\nprint('accuracy %s' % accuracy_score(y_pred, y_test))\nprint(classification_report(y_test, y_pred, target_names=my_tags))",
"_____no_output_____"
],
[
"from sklearn.tree import DecisionTreeClassifier\n\ncls = DecisionTreeClassifier(random_state=1000)\ncls.fit(train_data_features, y_train)\npred = cls.predict(test_data_features)",
"_____no_output_____"
],
[
"lab=pd.Series(pred)\nans = pd.concat([thread_test[\"thread_num\"],lab], axis=1, keys=['thread_num', 'thread_label_id'])\nans=ans.set_index(\"thread_num\")\nans.head()",
"_____no_output_____"
],
[
"path=os.path.join(module_path,\"submissions\")\nans.to_csv(os.path.join(path,\"solxx.csv\"))",
"_____no_output_____"
],
[
"test_predict = forest.predict(X_train)\n\nfrom sklearn.metrics import accuracy_score\nacc = accuracy_score(y_train, test_predict)\nprint(\"Accuracy on val dataset: {:.2f}\".format(acc))\ny_pred = model.predict(X_train)\nprint(\"Accuracy on val dataset: {:.2f}\".format((accuracy_score(y_pred, y_train))*100))",
"Accuracy on val dataset: 1.00\nAccuracy on val dataset: 96.89\n"
],
[
"z=(y_pred==pred)\nkey_words=pd.Series(label_map.index)",
"_____no_output_____"
],
[
"for index,item in enumerate(z):\n if not item:\n print(index, key_words[y_pred[index]],key_words[pred[index]])",
"9 closed-setup byor\n16 other vengeful\n27 byor bastard\n42 closed-setup byor\n46 closed-setup byor\n49 other cybrid\n51 supernatural bastard\n55 closed-setup byor\n68 other bastard\n70 paranormal bastard\n75 other bastard\n93 paranormal byor\n106 paranormal supernatural\n111 other cybrid\n112 other cybrid\n113 other cybrid\n116 paranormal supernatural\n121 closed-setup byor\n123 closed-setup byor\n134 other vengeful\n135 closed-setup byor\n142 byor bastard\n150 closed-setup byor\n153 closed-setup bastard\n176 other byor\n207 other byor\n227 closed-setup vengeful\n230 other vengeful\n231 closed-setup vengeful\n234 paranormal supernatural\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e756cdbd95d61782e71b0ca6b6df9104fbaaa0ee | 5,608 | ipynb | Jupyter Notebook | examples/ipython/second_derivative.ipynb | waldyrious/galgebra | b5eb070340434d030dd737a5656fbf709538b0b1 | [
"BSD-3-Clause"
] | null | null | null | examples/ipython/second_derivative.ipynb | waldyrious/galgebra | b5eb070340434d030dd737a5656fbf709538b0b1 | [
"BSD-3-Clause"
] | null | null | null | examples/ipython/second_derivative.ipynb | waldyrious/galgebra | b5eb070340434d030dd737a5656fbf709538b0b1 | [
"BSD-3-Clause"
] | null | null | null | 36.415584 | 743 | 0.496434 | [
[
[
"from galgebra.ga import Ga\nfrom sympy import symbols\nfrom galgebra.printer import Format\n\nFormat()\ncoords = (et,ex,ey,ez) = symbols('t,x,y,z',real=True)\nbase=Ga('e*t|x|y|z',g=[1,-1,-1,-1],coords=symbols('t,x,y,z',real=True),wedge=False)\n\npotential=base.mv('phi','vector',f=True)\npotential",
"_____no_output_____"
],
[
"field=base.grad*potential\nfield",
"_____no_output_____"
],
[
"grad_field = base.grad*field\ngrad_field",
"_____no_output_____"
],
[
"part=field.proj([base.mv()[0]^base.mv()[1]])\npart",
"_____no_output_____"
],
[
"dpart = base.grad*part\ndpart",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e756df9d715c6c3bb2150f0481bb931905854fe6 | 2,918 | ipynb | Jupyter Notebook | examples/jupyter/Modin_Taxi.ipynb | burstable-ai/modin | ee2440c53a1e3bd47736776e7c643f05c4a0db70 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | examples/jupyter/Modin_Taxi.ipynb | burstable-ai/modin | ee2440c53a1e3bd47736776e7c643f05c4a0db70 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | examples/jupyter/Modin_Taxi.ipynb | burstable-ai/modin | ee2440c53a1e3bd47736776e7c643f05c4a0db70 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | 20.549296 | 120 | 0.520562 | [
[
[
"# To run this notebook as done in the README GIFs, you must first locally download the 2015 NYC Taxi Trip Data.\nimport urllib.request\nurl_path = \"https://s3.amazonaws.com/nyc-tlc/trip+data/yellow_tripdata_2015-01.csv\"\nurllib.request.urlretrieve(url_path, \"taxi.csv\")\n\nfrom modin.config import Engine\nEngine.put(\"dask\")\nfrom dask.distributed import Client\nclient = Client(n_workers=12)\n\nfrom modin.config import BenchmarkMode\nBenchmarkMode.put(True)",
"_____no_output_____"
],
[
"import modin.pandas as pd",
"_____no_output_____"
],
[
"%time df = pd.read_csv(\"taxi.csv\", parse_dates=[\"tpep_pickup_datetime\", \"tpep_dropoff_datetime\"], quoting=3)",
"CPU times: user 1.57 s, sys: 683 ms, total: 2.26 s\nWall time: 14.2 s\n"
],
[
"%time isnull = df.isnull()",
"CPU times: user 138 ms, sys: 27.3 ms, total: 166 ms\nWall time: 404 ms\n"
],
[
"%time rounded_trip_distance = df[[\"pickup_longitude\"]].applymap(round)",
"CPU times: user 175 ms, sys: 28.4 ms, total: 203 ms\nWall time: 663 ms\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e756e2c2ae486ca39b35e18c02316b0819e40539 | 24,476 | ipynb | Jupyter Notebook | notebooks/predictiveModel.ipynb | dzwietering/watson-dojo-pm-tester | 44ee3f742d6dae71284ce5f46811b48e5aa3b212 | [
"Apache-2.0"
] | null | null | null | notebooks/predictiveModel.ipynb | dzwietering/watson-dojo-pm-tester | 44ee3f742d6dae71284ce5f46811b48e5aa3b212 | [
"Apache-2.0"
] | null | null | null | notebooks/predictiveModel.ipynb | dzwietering/watson-dojo-pm-tester | 44ee3f742d6dae71284ce5f46811b48e5aa3b212 | [
"Apache-2.0"
] | 1 | 2018-11-13T19:51:04.000Z | 2018-11-13T19:51:04.000Z | 30.556804 | 268 | 0.607656 | [
[
[
"<center><h1> Predict heart failure with Watson Machine Learning</h1></center>\n\n<p>This notebook contains steps and code to create a predictive model to predict heart failure and then deploy that model to Watson Machine Learning so it can be used in an application.</p>\n\n## Learning Goals\n\nThe learning goals of this notebook are:\n* Load a CSV file into the Object Storage Service linked to your Data Science Experience project\n* Create an Apache® Spark machine learning model\n* Train and evaluate a model\n* Persist a model in a Watson Machine Learning repository\n\n## 1. Setup\n\nBefore you use the sample code in this notebook, you must perform the following setup tasks:\n* Create a Watson Machine Learning Service instance (a free plan is offered) and associate it with your project\n* Upload heart failure data to the Object Store service that is part of your Data Science Experience trial\n",
"_____no_output_____"
],
[
"## 2. Load and explore data\n<p>In this section you will load the data as an Apache® Spark DataFrame and perform a basic exploration.</p>\n\n<p>Load the data to the Spark DataFrame from your associated Object Storage instance.</p>",
"_____no_output_____"
]
],
[
[
"# IMPORTANT Follow the lab instructions to insert Spark Session Data Frame to get access to the data used in this notebook\n# Ensure the Spark Session Data Frame is named df_data\n# Add the .option('inferSchema','True')\\ line after the option line from the inserted code.\n\n .option('inferSchema','True')\\",
"_____no_output_____"
]
],
[
[
"Explore the loaded data by using the following Apache® Spark DataFrame methods:\n* print schema\n* print top ten records\n* count all records",
"_____no_output_____"
]
],
[
[
"df_data.printSchema()",
"_____no_output_____"
]
],
[
[
"As you can see, the data contains ten fields. The HEARTFAILURE field is the one we would like to predict (label).",
"_____no_output_____"
]
],
[
[
"df_data.show()",
"_____no_output_____"
],
[
"df_data.describe().show()",
"_____no_output_____"
],
[
"df_data.count()",
"_____no_output_____"
]
],
[
[
"As you can see, the data set contains 10800 records.",
"_____no_output_____"
],
[
"## 3 Interactive Visualizations w/PixieDust",
"_____no_output_____"
]
],
[
[
"# To confirm you have the latest version of PixieDust on your system, run this cell\n!pip install pixiedust==1.1.2",
"_____no_output_____"
]
],
[
[
"If indicated by the installer, restart the kernel and rerun the notebook until here and continue with the workshop.",
"_____no_output_____"
]
],
[
[
"import pixiedust",
"_____no_output_____"
]
],
[
[
"### Simple visualization using bar charts\nWith PixieDust display(), you can visually explore the loaded data using built-in charts, such as, bar charts, line charts, scatter plots, or maps.\nTo explore a data set: choose the desired chart type from the drop down, configure chart options, configure display options.",
"_____no_output_____"
]
],
[
[
"display(df_data)",
"_____no_output_____"
]
],
[
[
"## 4. Create an Apache® Spark machine learning model\nIn this section you will learn how to prepare data, then create and train an Apache® Spark machine learning model.\n\n### 4.1: Prepare data\nIn this subsection you will split your data into train and test data sets.",
"_____no_output_____"
]
],
[
[
"split_data = df_data.randomSplit([0.8, 0.20], 24)\ntrain_data = split_data[0]\ntest_data = split_data[1]\n\n\nprint(\"Number of training records: \" + str(train_data.count()))\nprint(\"Number of testing records : \" + str(test_data.count()))",
"_____no_output_____"
]
],
[
[
"As you can see, our data has been successfully split into two data sets:\n* The train data set, which is the largest group, is used for training.\n* The test data set will be used for model evaluation and is used to test the assumptions of the model.\n\n### 4.2: Create pipeline and train a model\nIn this section you will create an Apache® Spark machine learning pipeline and then train the model.\nIn the first step you need to import the Apache® Spark machine learning packages that will be needed in the subsequent steps.\n\nA sequence of data processing steps is called a _data pipeline_. Each step in the pipeline processes the data and passes the result to the next step; this allows you to transform and fit your model starting from the raw input data.",
"_____no_output_____"
]
],
[
[
"from pyspark.ml.feature import StringIndexer, IndexToString, VectorAssembler\nfrom pyspark.ml.classification import RandomForestClassifier\nfrom pyspark.ml.evaluation import MulticlassClassificationEvaluator\nfrom pyspark.ml import Pipeline, Model",
"_____no_output_____"
]
],
[
[
"In the following step, convert all the string fields to numeric ones by using the StringIndexer transformer.",
"_____no_output_____"
]
],
[
[
"stringIndexer_label = StringIndexer(inputCol=\"HEARTFAILURE\", outputCol=\"label\").fit(df_data)\nstringIndexer_sex = StringIndexer(inputCol=\"SEX\", outputCol=\"SEX_IX\")\nstringIndexer_famhist = StringIndexer(inputCol=\"FAMILYHISTORY\", outputCol=\"FAMILYHISTORY_IX\")\nstringIndexer_smoker = StringIndexer(inputCol=\"SMOKERLAST5YRS\", outputCol=\"SMOKERLAST5YRS_IX\")",
"_____no_output_____"
]
],
[
[
"\nIn the following step, create a feature vector by combining all features together.",
"_____no_output_____"
]
],
[
[
"vectorAssembler_features = VectorAssembler(inputCols=[\"AVGHEARTBEATSPERMIN\",\"PALPITATIONSPERDAY\",\"CHOLESTEROL\",\"BMI\",\"AGE\",\"SEX_IX\",\"FAMILYHISTORY_IX\",\"SMOKERLAST5YRS_IX\",\"EXERCISEMINPERWEEK\"], outputCol=\"features\")",
"_____no_output_____"
]
],
[
[
"Next, define estimators you want to use for classification. Random Forest is used in the following example.",
"_____no_output_____"
]
],
[
[
"rf = RandomForestClassifier(labelCol=\"label\", featuresCol=\"features\")",
"_____no_output_____"
]
],
[
[
"Finally, convert the indexed labels back to the original labels.",
"_____no_output_____"
]
],
[
[
"labelConverter = IndexToString(inputCol=\"prediction\", outputCol=\"predictedLabel\", labels=stringIndexer_label.labels)",
"_____no_output_____"
],
[
"transform_df_pipeline = Pipeline(stages=[stringIndexer_label, stringIndexer_sex, stringIndexer_famhist, stringIndexer_smoker, vectorAssembler_features])\ntransformed_df = transform_df_pipeline.fit(df_data).transform(df_data)\ntransformed_df.show()",
"_____no_output_____"
]
],
[
[
"Let's build the pipeline now. A pipeline consists of transformers and an estimator.",
"_____no_output_____"
]
],
[
[
"pipeline_rf = Pipeline(stages=[stringIndexer_label, stringIndexer_sex, stringIndexer_famhist, stringIndexer_smoker, vectorAssembler_features, rf, labelConverter])",
"_____no_output_____"
]
],
[
[
"Now, you can train your Random Forest model by using the previously defined **pipeline** and **training data**.",
"_____no_output_____"
]
],
[
[
"model_rf = pipeline_rf.fit(train_data)",
"_____no_output_____"
]
],
[
[
"You can check your **model accuracy** now. To evaluate the model, use **test data**.",
"_____no_output_____"
]
],
[
[
"predictions = model_rf.transform(test_data)\nevaluatorRF = MulticlassClassificationEvaluator(labelCol=\"label\", predictionCol=\"prediction\", metricName=\"accuracy\")\naccuracy = evaluatorRF.evaluate(predictions)\nprint(\"Accuracy = %g\" % accuracy)\nprint(\"Test Error = %g\" % (1.0 - accuracy))",
"_____no_output_____"
]
],
[
[
"You can tune your model now to achieve better accuracy. For simplicity, the tuning section is omitted from this example.\n## 5. Persist model\nIn this section you will learn how to store your pipeline and model in the Watson Machine Learning repository by using Python client libraries.\nFirst, you must import the client libraries.",
"_____no_output_____"
]
],
[
[
"from repository.mlrepositoryclient import MLRepositoryClient\nfrom repository.mlrepositoryartifact import MLRepositoryArtifact",
"_____no_output_____"
]
],
[
[
"Authenticate to Watson Machine Learning service on IBM Cloud.\n\n## **STOP here !!!!:** \nPut authentication information (username, password, and instance_id) from your instance of Watson Machine Learning service here.",
"_____no_output_____"
]
],
[
[
"#Specify your username, password, and instance_id credentials for Watson ML\nservice_path = 'https://ibm-watson-ml.mybluemix.net'\nusername = 'xxxxx'\npassword = 'xxxxx'\ninstance_id = 'xxxxx'",
"_____no_output_____"
]
],
[
[
"**Tip:** service_path, username, password, and instance_id can be found on Service Credentials tab of the Watson Machine Learning service instance created on the IBM Cloud.",
"_____no_output_____"
]
],
[
[
"ml_repository_client = MLRepositoryClient(service_path)\nml_repository_client.authorize(username, password)",
"_____no_output_____"
]
],
[
[
"Create model artifact (abstraction layer).",
"_____no_output_____"
]
],
[
[
"pipeline_artifact = MLRepositoryArtifact(pipeline_rf, name=\"pipeline\")\n\nmodel_artifact = MLRepositoryArtifact(model_rf, training_data=train_data, name=\"Heart Failure Prediction Model\", pipeline_artifact=pipeline_artifact)",
"_____no_output_____"
]
],
[
[
"**Tip:** The MLRepositoryArtifact method expects a trained model object, training data, and a model name. (It is this model name that is displayed by the Watson Machine Learning service).\n## 5.1: Save pipeline and model\nIn this subsection you will learn how to save pipeline and model artifacts to your Watson Machine Learning instance.",
"_____no_output_____"
]
],
[
[
"saved_model = ml_repository_client.models.save(model_artifact)",
"_____no_output_____"
]
],
[
[
"Get saved model metadata from Watson Machine Learning.\n**Tip:** Use *meta.available_props()* to get the list of available props.",
"_____no_output_____"
]
],
[
[
"saved_model.meta.available_props()",
"_____no_output_____"
],
[
"print(\"modelType: \" + saved_model.meta.prop(\"modelType\"))\nprint(\"trainingDataSchema: \" + str(saved_model.meta.prop(\"trainingDataSchema\")))\nprint(\"creationTime: \" + str(saved_model.meta.prop(\"creationTime\")))\nprint(\"modelVersionHref: \" + saved_model.meta.prop(\"modelVersionHref\"))\nprint(\"label: \" + saved_model.meta.prop(\"label\"))",
"_____no_output_____"
]
],
[
[
"\n## 5.2 Load model to verify that it was saved correctly\nYou can load your model to make sure that it was saved correctly.",
"_____no_output_____"
]
],
[
[
"loadedModelArtifact = ml_repository_client.models.get(saved_model.uid)",
"_____no_output_____"
]
],
[
[
"Print the model name to make sure that the model artifact has been loaded correctly.",
"_____no_output_____"
]
],
[
[
"print(str(loadedModelArtifact.name))",
"_____no_output_____"
]
],
[
[
"## <font color=green>Congratulations</font>, you've successfully created a predictive model and saved it in the Watson Machine Learning service. \nYou can now switch to the Watson Machine Learning console to deploy the model and then test it in an application, or continue within the notebook to deploy the model using the APIs.\n\n\n\n\n***\n***",
"_____no_output_____"
],
[
"## 6.0 Accessing Watson ML Models and Deployments through API\nInstead of jumping from your notebook into a web browser, manage your model and deployment through a set of APIs.\n",
"_____no_output_____"
],
[
"Recap of saving an existing ML model using the Watson Machine Learning Python SDK\n\n\n`pip install watson-machine-learning-client`\n\n[SDK Documentation](https://watson-ml-libs.mybluemix.net/repository-python/index.html)",
"_____no_output_____"
],
[
"### Save model to WML Service",
"_____no_output_____"
]
],
[
[
"#Import Python WatsonML Repository SDK\nfrom repository.mlrepositoryclient import MLRepositoryClient\nfrom repository.mlrepositoryartifact import MLRepositoryArtifact\n\n#Authenticate\nml_repository_client = MLRepositoryClient(service_path)\nml_repository_client.authorize(username, password)\n\n#Deploy a new model. I renamed the existing model as it has already been created above\npipeline_artifact = MLRepositoryArtifact(pipeline_rf, name=\"pipeline\")\n\nmodel_artifact = MLRepositoryArtifact(model_rf, training_data=train_data, name=\"Heart Failure Prediction Model\", pipeline_artifact=pipeline_artifact)",
"_____no_output_____"
]
],
[
[
"### 6.1 Get the Watson ML API Token\nThe Watson ML API authenticates all requests through a token; start by requesting the token from your Watson ML service.",
"_____no_output_____"
]
],
[
[
"import json\nimport requests\nfrom base64 import b64encode\n\ntoken_url = service_path + \"/v3/identity/token\"\n\n# NOTE: for python 2.x, uncomment below, and comment out the next line of code:\n#userAndPass = b64encode(bytes(username + ':' + password)).decode(\"ascii\")\n# Use below for python 3.x, comment below out for python 2.x\nuserAndPass = b64encode(bytes(username + ':' + password, \"utf-8\")).decode(\"ascii\")\nheaders = { 'Authorization' : 'Basic %s' % userAndPass }\n\nresponse = requests.request(\"GET\", token_url, headers=headers)\n\nwatson_ml_token = json.loads(response.text)['token']\nprint(watson_ml_token)",
"_____no_output_____"
]
],
[
[
"### 6.2 Preview currently published models",
"_____no_output_____"
]
],
[
[
"model_url = service_path + \"/v3/wml_instances/\" + instance_id + \"/published_models\"\n\nheaders = {'authorization': 'Bearer ' + watson_ml_token }\nresponse = requests.request(\"GET\", model_url, headers=headers)\n\npublished_models = json.loads(response.text)\nprint(json.dumps(published_models, indent=2))",
"_____no_output_____"
]
],
[
[
"Read the details of any returned models",
"_____no_output_____"
]
],
[
[
"print('{} model(s) are available in your Watson ML Service'.format(len(published_models['resources'])))\nfor model in published_models['resources']:\n print('\\t- name: {}'.format(model['entity']['name']))\n print('\\t model_id: {}'.format(model['metadata']['guid']))\n print('\\t deployments: {}'.format(model['entity']['deployments']['count']))",
"_____no_output_____"
]
],
[
[
"Create a new deployment of the Model",
"_____no_output_____"
]
],
[
[
"\n# Update this `model_id` with the model_id from model that you wish to deploy listed above.\nmodel_id = 'xxxx'\n\ndeployment_url = service_path + \"/v3/wml_instances/\" + instance_id + \"/published_models/\" + model_id + \"/deployments\"\n\npayload = \"{\\\"name\\\": \\\"Heart Failure Prediction Model Deployment\\\", \\\"description\\\": \\\"First deployment of Heart Failure Prediction Model\\\", \\\"type\\\": \\\"online\\\"}\"\nheaders = {'authorization': 'Bearer ' + watson_ml_token, 'content-type': \"application/json\" }\n\nresponse = requests.request(\"POST\", deployment_url, data=payload, headers=headers)\n\nprint(response.text)",
"_____no_output_____"
],
[
"deployment = json.loads(response.text)\n\nprint('Model {} deployed.'.format(model_id))\nprint('\\tname: {}'.format(deployment['entity']['name']))\nprint('\\tdeployment_id: {}'.format(deployment['metadata']['guid']))\nprint('\\tstatus: {}'.format(deployment['entity']['status']))\nprint('\\tscoring_url: {}'.format(deployment['entity']['scoring_url']))",
"_____no_output_____"
]
],
[
[
"### Monitor the status of deployment",
"_____no_output_____"
]
],
[
[
"\n# Update this `deployment_id` with the deployment_id of the newly deployed model from above.\ndeployment_id = \"xxxx\"\ndeployment_details_url = service_path + \"/v3/wml_instances/\" + instance_id + \"/published_models/\" + model_id + \"/deployments/\" + deployment_id\n\nheaders = {'authorization': 'Bearer ' + watson_ml_token, 'content-type': \"application/json\" }\n\n# request the details of this specific deployment\nresponse = requests.request(\"GET\", deployment_details_url, headers=headers)\nprint(response.text)",
"_____no_output_____"
],
[
"deployment_details = json.loads(response.text)\n\nprint('name: {}'.format(deployment_details['entity']['name']))\nprint('status: {}'.format(deployment_details['entity']['status']))\nprint('scoring url: {}'.format(deployment_details['entity']['scoring_url']))",
"_____no_output_____"
]
],
[
[
"## 6.3 Invoke prediction model deployment\nDefine a method to call scoring url. Replace the **scoring_url** in the method below with the scoring_url returned from above.",
"_____no_output_____"
]
],
[
[
"def get_prediction_ml(ahb, ppd, chol, bmi, age, sex, fh, smoker, exercise_minutes ):\n scoring_url = 'xxxx'\n scoring_payload = { \"fields\":[\"AVGHEARTBEATSPERMIN\",\"PALPITATIONSPERDAY\",\"CHOLESTEROL\",\"BMI\",\"AGE\",\"SEX\",\"FAMILYHISTORY\",\"SMOKERLAST5YRS\",\"EXERCISEMINPERWEEK\"],\"values\":[[ahb, ppd, chol, bmi, age, sex, fh, smoker, exercise_minutes]]}\n header = {'authorization': 'Bearer ' + watson_ml_token, 'content-type': \"application/json\" }\n scoring_response = requests.post(scoring_url, json=scoring_payload, headers=header)\n return (json.loads(scoring_response.text).get(\"values\")[0][16])",
"_____no_output_____"
]
],
[
[
"### Call the get_prediction_ml method to exercise our prediction model",
"_____no_output_____"
]
],
[
[
"print('Is a 44 year old female that smokes with a low BMI at risk of Heart Failure?: {}'.format(get_prediction_ml(100,85,242,24,44,\"F\",\"Y\",\"Y\",125)))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e756fcfc13bb28df5cea286a737d90d91e744c8a | 19,323 | ipynb | Jupyter Notebook | Chapters/Old/06.MinimumSpanningTrees/Chapter4.ipynb | MichielStock/SelectedTopicsOptimization | 20f6b37566d23cdde0ac6b765ffcc5ed72a11172 | [
"MIT"
] | 22 | 2017-03-21T14:01:10.000Z | 2022-03-02T18:51:40.000Z | Chapters/Old/06.MinimumSpanningTrees/Chapter4.ipynb | MichielStock/SelectedTopicsOptimization | 20f6b37566d23cdde0ac6b765ffcc5ed72a11172 | [
"MIT"
] | 2 | 2018-03-22T09:54:01.000Z | 2018-05-30T16:16:53.000Z | Chapters/Old/06.MinimumSpanningTrees/Chapter4.ipynb | MichielStock/SelectedTopicsOptimization | 20f6b37566d23cdde0ac6b765ffcc5ed72a11172 | [
"MIT"
] | 18 | 2018-01-21T15:23:51.000Z | 2022-02-05T20:12:03.000Z | 29.23298 | 597 | 0.570512 | [
[
[
"# Chapter 4: Minimum spanning trees",
"_____no_output_____"
],
[
"In this chapter we will continue to study algorithms that process graphs. We will implement Kruskal's algorithm to construct the **minimum spanning tree** of a graph, a subgraph that efficiently connects all nodes.",
"_____no_output_____"
],
[
"## Trees in python\n\nA tree is an undirected graph in which any two nodes are connected by **exactly one path**. For example, consider the tree below.\n\n\n\nWe can represent it in python using dictionaries, as we did in the last chapter.",
"_____no_output_____"
]
],
[
[
"tree_dict = {'A' : set(['D']), 'B' : set(['D']), 'C' : set(['D']),\n 'D' : set(['A', 'B', 'C', 'E']), 'E' : set(['D', 'F']), 'F' : set(['E'])}",
"_____no_output_____"
]
],
[
[
"Though in this chapter, we prefer to represent the tree as a list (set) of links:",
"_____no_output_____"
]
],
[
[
"tree_links = [(node, neighbor) for node in tree_dict.keys() for neighbor in tree_dict[node]]\ntree_links",
"_____no_output_____"
]
],
[
[
"If we choose one node as the **root** of the tree, we have exactly one path from this root to each of the other terminal nodes. This idea can be applied recursively as follows: from this root, each neighboring node is itself the root of a subtree. Each of these subtrees also consists of a root and possibly one or more subtrees. Hence we can also represent the tree as a nested sublist:\n\n```\ntree = [root, [subtree1], [subtree2],...]\n```\n\nFor our example, taking node D as the root, we obtain: (see [here](http://interactivepython.org/courselib/static/pythonds/Trees/ListofListsRepresentation.html))",
"_____no_output_____"
]
],
[
[
"tree_list = ['D', ['A'], ['B'], ['C'], ['E', ['F']]]",
"_____no_output_____"
]
],
[
[
"## Minimum spanning tree",
"_____no_output_____"
],
[
"Suppose we have an undirected connected weighted graph $G$ as depicted below.\n\n",
"_____no_output_____"
],
[
"Weighted graphs can be implemented either as a set of weighted edges or as a dictionary.",
"_____no_output_____"
]
],
[
[
"vertices = ['A', 'B', 'C', 'D', 'E', 'F', 'G']\n\nedges = set([(5, 'A', 'D'), (7, 'A', 'B'), (8, 'B', 'C'), (9, 'B', 'D'),\n (7, 'B', 'E'), (5, 'C', 'E'), (15, 'D', 'E'), (6, 'F', 'D'), \n (8, 'F', 'E'), (9, 'E', 'G'), (11, 'F', 'G')])",
"_____no_output_____"
],
[
"weighted_adj_list = {v : set([]) for v in vertices}\n\nfor weight, vertex1, vertex2 in edges:\n weighted_adj_list[vertex1].add((weight, vertex2))\n weighted_adj_list[vertex2].add((weight, vertex1)) # undirected graph, in=outgoing edge\n\nweighted_adj_list",
"_____no_output_____"
]
],
[
[
"For example, the nodes may represent cities and the weight of an edge may represent the cost of implementing a communication line between two cities. If we want to make communication possible between all cities, there should be a path between any two cities. A **spanning tree** is a subgraph of $G$ that is a tree which contains all nodes of $G$. The cost of the spanning tree is simply the sum of the weights of the edges in this tree. Often, multiple spanning trees can be chosen from a connected graph. The **minimum spanning tree** is simply the spanning tree with the lowest cost.\n\nThe figure below shows the minimum spanning tree for $G$ in green.\n\n",
"_____no_output_____"
],
[
"Minimum spanning trees have many applications:\n- design of computer-, telecommunication-, transportation- and other networks\n- hierarchical clustering\n- image segmentation and feature extraction\n- phylogenetic analysis\n- construction of mazes",
"_____no_output_____"
],
[
"## Disjoint-set data structure",
"_____no_output_____"
],
[
"Implementing an algorithm for finding the minimum spanning tree is fairly straightforward. The only bottleneck is that the algorithm requires a **disjoint-set data structure** to keep track of a set partitioned into a number of disjoint subsets.\n\nFor example, consider the following initial set of eight elements.\n\n\n\nWe decide to group elements A, B and C together in a subset and F and G in another subset.\n\n\n\nThe disjoint-set data structure supports the following operations:\n- **Find**: check which subset an element is in. It is typically used to check whether two objects are in the same subset.\n- **Union**: merge two subsets into a single subset.\n\nA python implementation of a disjoint-set is available using a union-set forest. A simple example will make everything clear!",
"_____no_output_____"
]
],
[
[
"from union_set_forest import USF\n\nanimals = ['mouse', 'bat', 'robin', 'trout', 'seagull', 'hummingbird',\n 'salmon', 'goldfish', 'hippopotamus', 'whale', 'sparrow']\nunion_set_forest = USF(animals)\n\n# group mammals together\nunion_set_forest.union('mouse', 'bat')\nunion_set_forest.union('mouse', 'hippopotamus')\nunion_set_forest.union('whale', 'bat')\n\n# group birds together\nunion_set_forest.union('robin', 'seagull')\nunion_set_forest.union('seagull', 'sparrow')\nunion_set_forest.union('seagull', 'hummingbird')\nunion_set_forest.union('robin', 'hummingbird')\n\n# group fishes together\nunion_set_forest.union('goldfish', 'salmon')\nunion_set_forest.union('trout', 'salmon')",
"_____no_output_____"
],
[
"# mouse and whale in same subset?\nprint(union_set_forest.find('mouse') == union_set_forest.find('whale'))",
"_____no_output_____"
],
[
"# robin and salmon in the same subset?\nprint(union_set_forest.find('robin') == union_set_forest.find('salmon'))",
"_____no_output_____"
]
],
[
[
"## Kruskal's algorithm",
"_____no_output_____"
],
[
"Kruskal's algorithm is a very simple algorithm to find the minimum spanning tree. The main idea is to start with an initial 'forest' of the individual nodes of the graph. In each step of the algorithm we add the edge with the smallest possible weight that connects two disjoint trees in the forest. This process is continued until we have a single tree, which is a minimum spanning tree, or until all edges have been considered. In the latter case the algorithm returns a minimum spanning forest. ",
"_____no_output_____"
],
[
"### Example run of Kruskal's algorithm\n\nConsider the weighted graph again.\n\n",
"_____no_output_____"
],
[
"In the first step, the algorithm selects the edge with the lowest weight, here connecting nodes A and D. This edge has a weight of 5.\n\n\n\n",
"_____no_output_____"
],
[
"The next edge that is selected connects nodes C and E. This edge also has a weight of 5.\n\n",
"_____no_output_____"
],
[
"The edge between D and F is subsequently selected.\n\n\n\n",
"_____no_output_____"
],
[
"In the current forest, the edge between B and D becomes inaccessible. Taking this edge would result in a cycle in our graph (B and D are already connected through A in our forest), so it is forbidden.\n\n\n",
"_____no_output_____"
],
[
"The next allowed edge with the lowest weight is between nodes B and E. Taking this edge connects two independent components in our forest and makes other edges forbidden.\n\n\n",
"_____no_output_____"
],
[
"Finally, edge EG connects the last node G to our tree with the lowest cost.\n\n",
"_____no_output_____"
],
[
"### Pseudocode of Kruskal's algorithm\n\n```\nfunction Kruskal(G):\n1 A := empty list\n2 for each node v in G\n3 MAKE-SET(v)\n4 for each edge (u, v) ordered by weight(u, v), increasing:\n5 if FIND-SET(u) ≠ FIND-SET(v):\n6 add (u, v) to A\n7 UNION(u, v)\n8 return A\n```",
"_____no_output_____"
],
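The pseudocode above can be turned into a short, self-contained Python sketch. Note that this version uses a minimal inline union-find instead of the `USF` class from the course module, an assumption made purely to keep the example standalone:

```python
def kruskal_sketch(vertices, edges):
    """Minimum spanning tree via Kruskal's algorithm.

    edges is a list of (weight, u, v) tuples; returns the list of chosen edges.
    """
    # minimal union-find: every vertex starts as its own root
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    tree = []
    for weight, u, v in sorted(edges):   # consider edges by increasing weight
        root_u, root_v = find(u), find(v)
        if root_u != root_v:             # u and v lie in different trees
            tree.append((weight, u, v))
            parent[root_u] = root_v      # union the two trees
    return tree
```

Running it on the example graph from this chapter selects six edges with a total weight of 39, matching the minimum spanning tree shown in green above.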
[
"### Time complexity of Kruskal's algorithm\n\nWe assume that, by using a disjoint-set data structure, ```FIND``` and ```UNION``` can be performed in (amortized) near-constant time, effectively $\\mathcal{O}(1)$. The dominant cost is then sorting the edges by their weight, which can be done with a time complexity of $\\mathcal{O}(|E| \\log(|E|))$; this is therefore the time complexity of generating the minimum spanning tree with this algorithm.",
"_____no_output_____"
],
[
"**Assignment 1: completing Kruskal's algorithm**\n\n1. Complete the code for Kruskal's algorithm below. Test the code on the example network given above.\n2. Ticket to Ride is a fun boardgame in which you have to connect trains to several important cities in the United States. Load the vertices (cities) and edges (roads) from the file `ticket_to_ride.py`. Compute a minimum spanning tree.\n\n",
"_____no_output_____"
]
],
[
[
"def kruskal(vertices, edges):\n    \"\"\"\n    Kruskal's algorithm for finding a minimum spanning tree\n    Input :\n        - vertices : a set of the vertices of the graph\n        - edges : a list of weighted edges (e.g. (0.7, 'A', 'B')) for an\n                    edge from node A to node B with weight 0.7\n    Output:\n        a minimum spanning tree represented as a list of edges\n    \"\"\"\n    # complete this\n    return forest",
"_____no_output_____"
],
[
"vertices = ['A', 'B', 'C', 'D', 'E', 'F', 'G']\nedges = [(5, 'A', 'D'), (7, 'A', 'B'), (8, 'B', 'C'), (9, 'B', 'D'),\n (7, 'B', 'E'), (5, 'C', 'E'), (15, 'D', 'E'), (6, 'F', 'D'), \n (8, 'F', 'E'), (9, 'E', 'G'), (11, 'F', 'G')]\n\nprint(kruskal(vertices, edges))",
"_____no_output_____"
],
[
"from ticket_to_ride import vertices as cities\nfrom ticket_to_ride import edges as roads",
"_____no_output_____"
],
[
"for city in cities:\n print(city)",
"_____no_output_____"
],
[
"# compute here the MST for Ticket to Ride\n",
"_____no_output_____"
]
],
[
[
"## The travelling salesman problem\n\nThe travelling salesman problem is a well-known problem in computer science. The goal is to find a tour through a graph with minimal cost. This problem is NP-hard: no efficient algorithm is known that solves it exactly for large graphs.",
"_____no_output_____"
],
[
"The tour is represented as a dictionary in which each key-value pair maps a vertex to the next vertex in the tour.",
"_____no_output_____"
],
[
"Below are two heuristic algorithms to find a good tour.",
"_____no_output_____"
],
[
"### Nearest Neighbour\n\nThe simplest algorithm; it can be done with a time complexity of $\\mathcal{O}(|V|^2)$.\n\n1. Select a random vertex.\n2. Find the nearest unvisited vertex and add it to the path.\n3. Are there any unvisited vertices left? If yes, repeat step 2.\n4. Return to the first vertex.",
"_____no_output_____"
],
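As an illustration of the four steps above, here is a small standalone sketch. It stores the graph as a plain dictionary of pairwise distances, a simpler representation assumed for the example; the actual assignment below uses the course's adjacency lists:

```python
def nearest_neighbour_sketch(distances, start):
    """Greedy nearest-neighbour tour; distances maps (u, v) pairs to lengths."""
    def dist(u, v):
        # each unordered pair is stored only once in the dictionary
        return distances[(u, v)] if (u, v) in distances else distances[(v, u)]

    cities = {city for pair in distances for city in pair}
    unvisited = cities - {start}
    tour, cost, current = [start], 0, start
    while unvisited:
        # step 2: jump to the closest city we have not visited yet
        nxt = min(unvisited, key=lambda v: dist(current, v))
        cost += dist(current, nxt)
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    # step 4: close the tour by returning to the first vertex
    cost += dist(current, start)
    tour.append(start)
    return tour, cost
```

For example, with `distances = {('A', 'B'): 1, ('A', 'C'): 4, ('A', 'D'): 3, ('B', 'C'): 2, ('B', 'D'): 5, ('C', 'D'): 6}`, starting from `'A'` gives the tour `['A', 'B', 'C', 'D', 'A']` with cost 12.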
[
"### Greedy\n\nA greedy algorithm that gives a solution in $\\mathcal{O}(|V|^2\\log(|V|))$ time.\n\n1. Sort all edges.\n2. Select the shortest edge and add it to the tour if it does not:\n    - create a cycle with fewer than $|V|$ vertices\n    - increase the degree of any of the vertices in the tour to more than two.\n3. Repeat step 2 until the tour has $|V|$ vertices.",
"_____no_output_____"
],
[
"**Assignment 2**\n\n1. Complete the functions `nearest_neighbour_tsa` and `greedy_tsa`. \n2. We have two benchmark problems, one with 29 and one with 225 cities. For each problem the graph and the coordinates of the cities are given. Report the **best tour cost found** and the **running time** for the two algorithms.\n3. Make a plot of the best tour for each of the two benchmarks.\n4. Discuss how you can tell from such a plot whether the tour is optimal.",
"_____no_output_____"
]
],
[
[
"def nearest_neighbour_tsa(graph, start):\n \"\"\"\n Nearest Neighbour heuristic for the travelling salesman problem\n \n Inputs:\n - graph: the graph as an adjacency list\n - start: the vertex to start\n \n Outputs:\n - tour: the tour as a dictionary\n - tour_cost: the cost of the tour\n \n \"\"\"\n # complete this\n return tour, tour_cost",
"_____no_output_____"
],
[
"def greedy_tsa(graph):\n \"\"\"\n Greedy heuristic for the travelling salesman problem\n \n Inputs:\n - graph: the graph as an adjacency list\n \n Outputs:\n - tour: the tour as a dictionary\n - tour_cost: the cost of the tour\n \n \"\"\"\n # complete this\n return tour, tour_cost",
"_____no_output_____"
],
[
"# load coordinates and graph for the two benchmark algorithms\nfrom load_tsa import coordinates29, coordinates225, graph29, graph225",
"_____no_output_____"
],
[
"# complete the assignments",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e756fdae5d3e62bcd88d83ac948b05bf9c06eb0d | 19,015 | ipynb | Jupyter Notebook | Python_DS_course/intro.ipynb | yaoyu-e-wang/teaching | 92712b8c16830216d5f42d8b6cc8aef7c000e4b1 | [
"MIT"
] | null | null | null | Python_DS_course/intro.ipynb | yaoyu-e-wang/teaching | 92712b8c16830216d5f42d8b6cc8aef7c000e4b1 | [
"MIT"
] | null | null | null | Python_DS_course/intro.ipynb | yaoyu-e-wang/teaching | 92712b8c16830216d5f42d8b6cc8aef7c000e4b1 | [
"MIT"
] | null | null | null | 47.656642 | 1,230 | 0.610991 | [
[
[
"## This notebook serves as both an introduction to Jupyter notebooks *and* a brief introduction to Python.\n\nNote that this portion is not a comprehensive discussion of the Python language. There are many books (with many 100's of pages) on the subject, and the goal here is to introduce you to some basic concepts that will be used in this workshop.",
"_____no_output_____"
],
[
"### With jupyter notebooks, you can describe what you are doing right next to your code. \n\nThis is **very** helpful for yourself and your collaborators. You WILL forget what you were doing if you come back to the code in 2 weeks (or even tomorrow...). Typically, the reading of code can be helped by including \"comments\"-- these are remarks or notes that go directly next to the code. However, it is much easier to read formatted comments and equations, possibly including figures. \n\nAs we work through this tutorial, I will show pieces of code and attempt to explain the rationale behind each decision.\n\nIn the cell below, I will show the most basic introduction to Python:",
"_____no_output_____"
]
],
[
[
"# You can also describe what you are doing in code-- just start the line with \"#\"\n# These \"comments\" tend to be short, so more general descriptions or motivation should probably go in the \"markdown\" cells.\n\n# Below, I declare a variable x and set it equal to 5\nx = 5\n\n# Now, I can perform operations on x:\ny1 = x*4\ny2 = x+5\ny3 = x**3 # x \"cubed\".\nprint(y1,y2,y3)",
"20 10 125\n"
]
],
[
[
"### Above, we declared an integer `x` and performed some operations. It appears trivial, but it covers a number of basic, but important, items.\n\nAny time we use the `=` sign, we are performing an \"assignment\" of a value to a variable. In our usual mathematical language, we might read `x=5` as \"x equals 5\". While that interpretation is OK here, it becomes a little more confusing if we write,\n\n```\nx = 0\nx = x + 1\n```\n\nThis is perfectly valid Python, and the second line is commonly used to increment the value of x; you might use such a line if you were counting sequencing reads by scrolling through a large file.\nHowever, if we read `x=x+1` as a typical algebraic \"equation\", it does not make much sense. It is better to read it as \"x is assigned the value of x plus 1\". Specifically, we see that the value of `x` starts at zero on the first line. In the second line, we take the existing value of `x` (which is zero), add 1 to it, and re-assign it to `x`. Thus, in the end, `x` holds the value 1. \n\n### In other languages, you often have to initially decide what \"type\" a variable is. Not with Python.\n\nAbove, we made `x` equal to an integer. However, we did not explicitly declare that `x` MUST be an integer. For example, in C++, you would have to write:\n```\nint x = 5;\ndouble y = 3.2;\n```\n(note that `double` is essentially a non-integer number).\n\nIn C++ (and other languages), once variables are declared, you *cannot* change the type. If you later tried to set `x=4.6`, your program would generate an error, explaining that you tried to assign a \"double\" to a variable that only accepts integers.\n\nPython is very relaxed about that, trading some speed for convenience. By strictly enforcing \"types\", software written in languages like C++ can be further optimized for high performance.\nHowever, the loss in performance for Python is often negligible for most applications.\n\n### Note that *for the most part* white space does not matter, EXCEPT at the start of a line:",
"_____no_output_____"
]
],
[
[
"# These are all the same and valid:\nx=5\nx = 5\nx =5",
"_____no_output_____"
]
],
[
[
"All of the above are valid and equivalent. The space **after** `x` does not matter.\n\n### The reason white-space matters at the start of the line is that Python uses the \"leading\" white space to create \"code blocks\". This whitespace can be created either by spaces or with the \"Tab\" key. As long as you are consistent, it is OK. \n\nThat said, it is typically recommended to use 4 spaces. This makes the indents large enough to be obvious when reading the code.\n\nFor example, we show indentation in the `for` loop below:",
"_____no_output_____"
]
],
[
[
"# This \"import\" gives us access to \"out of the box\" code that lets us generate random numbers\nimport random\n\n# Generate 5 random integers between zero and 100:\nfor i in range(5):\n x = random.randint(0,100)\n print(x)\nprint('Done')",
"88\n46\n93\n49\n97\n"
]
],
[
[
"Above, the `for` loop lets us do something repeatedly-- for the same amount of typing we can do this for 5 random integers, or 5 million. Each time, the indented code (only two lines) is executed; Python analyzes the \"leading\" space to determine the code blocks. \n\nIf it helps, you can imagine a small arrow going line-by-line through the `for` loop. The arrow executes a code statement and goes to the next line. When it reaches the bottom of the indented block, it jumps back to the start of the loop and goes again. \n\nNote that `range(n)` is a simple way to generate the numbers 0,1,2,3,..., n-2, n-1. **Important**: note that `range(5)` starts at zero and ends at 4. Most people might expect 1,2,3,4,5.\n\n### We can indent as many times as we like to create the logic we need. However, it is uncommon to indent more than a few times.\n\nBelow, we show nested code blocks. As we loop through a list with a `for` loop, we check each number to determine if it is even or odd. This introduces \"conditional statements\":",
"_____no_output_____"
]
],
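Since the endpoint behaviour of `range` surprises many newcomers, it is worth checking directly by converting a range to a list:

```python
# range(5) produces five numbers, starting at 0 -- the 5 itself is excluded
print(list(range(5)))     # [0, 1, 2, 3, 4]

# range also accepts an explicit starting value: range(start, stop)
print(list(range(1, 6)))  # [1, 2, 3, 4, 5]
```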
[
[
"a_list = [0,5,7,4,1,2]\nfor x in a_list:\n print('Look at ' + str(x))\n if (x % 2) == 0:\n print(str(x) + ' is even.')\n else:\n print(str(x) + ' is odd.')\n print('...')\nprint('Done with loop.')",
"Look at 0\n0 is even.\n...\nLook at 5\n5 is odd.\n...\nLook at 7\n7 is odd.\n...\nLook at 4\n4 is even.\n...\nLook at 1\n1 is odd.\n...\nLook at 2\n2 is even.\n...\nDone with loop.\n"
]
],
[
[
"### Some notes about the code above:\n- We declare a **list** by putting items inside the square brackets `[...]`. Lists can be anything, even mixing \"types\". For instance, \n```\na_list = [1, 'a', 2.3, 'b']\n```\nis valid Python. This list mixed integers, \"strings\" (letters/words), and \"floats\" (non-integer numbers).\n\n- The `for` loop allows us to go through the list (`a_list`) one item at a time. The indented code (lines 3-8) is run *each time* through the loop.\n Since our list contained six items, the loop is run six times.\n \n- We also showed a \"conditional\" statement (the `if...else`). This allows us to take different actions depending on whether a condition is met. For instance, if a gene is upregulated, we might take an action. Otherwise, we might do something else. Note that the code to be executed is indented further. These conditional statements can be as complex as you need.\n\n- We used the \"modulo\" operator (`%`) to get a division \"remainder\". For example, `8 % 3` evaluates to 2. This is because the *quotient* of 8 divided by 3 is 2, with a remainder of 2. Similarly, `7 % 2` evaluates to 1 since the quotient is 3 with a remainder of 1. The pattern of `x % 2 == 0` is a very common programming idiom for testing whether an integer is even or odd. \n\n- We used a \"comparison operator\" to test whether the current integer was even.\n - A single equals (`=`) means \"assignment\". i.e. `x=2` can be read as, \"set the variable x equal to 2\"\n - The double equals `==` tests whether the items are equivlant. i.e. x==y can be read as \"is x equal to y?\"\n - Similarly, we can test if things are not equal (`x != y`), less than (`x > y`), and so on.\n \n \n- Note that to make the `print` statements, we had to \"wrap\" the variable `x` (which was an integer) by writing: `str(x)`. By using `str(x)` we were able to express the integer (e.g. 5) as a \"string\" (e.g. \"5\"). This allowed us to then \"sum/add\" it to another string/word. 
Otherwise, Python gets confused...how can it \"sum\" an integer and a word? It knows how to \"sum\" two strings just by putting them next to each other. For example, `y=\"ABC\" + \"def\"` gives `y=\"ABCdef\"`.\n\n- Note that when you write strings/words, you can use either single (`'`) or double quotes (`\"`). These are equivalent:\n```\nx = 'abc'\ny = \"abc\"\n```",
"_____no_output_____"
],
[
"### One additional VERY useful item is the Python \"dictionary\". These are essentially \"lookup tables\" (or \"mappings\") and best demonstrated with a couple examples:",
"_____no_output_____"
]
],
[
[
"ensg_to_genes = {\n 'ENSG00000141510': 'TP53',\n 'ENSG00000134323': 'MYCN',\n 'ENSG00000171094': 'ALK'\n}\n\n# the 'key' can reference anything-- below it points at a list of genes in a hypothetical pathway\npathways = {\n 'pathway_A': ['TP53', 'BCL2L12', 'MTOR'],\n 'pathway_X': ['MYCN', 'PPARG', 'EGFR']\n}",
"_____no_output_____"
]
],
[
[
"Each \"key\" (which is unique!) points at a \"value\"; you will also see these called \"key-value pairs\". In the first dictionary (`ensg_to_genes`), the unique \"ENSG\" IDs map to the common gene names (strings). In the second dictionary (`pathways`), the unique pathway names point at a list of strings. \n\nThe \"keys\" can be anything unique and the values can be any valid Python \"thing\". \n\nTo demonstrate their use, imagine you have a list of Ensembl gene IDs and you want the common gene symbol. Given the `ensg_to_genes` dictionary above, you can simply \"address\" the dictionary:",
"_____no_output_____"
]
],
[
[
"ensg_to_genes['ENSG00000134323']",
"_____no_output_____"
]
],
[
[
"You can imagine that if you had a long list of ENSG IDs and you created a `for` loop, you could quickly convert all the ENSG IDs to their common gene names.\n\n### Finally, we note that it is advisable to structure your code into reusable \"chunks\". This is useful for both simple organization of code and for cases where you can re-use the code multiple times. One way to create re-usable components is to declare \"functions\".\n\nBreaking code into small functions makes it easier to understand and test. If each small piece does its job correctly, then you can \"guarantee\" it all works. \n\nYou can define custom functions, which are just like functions in mathematics-- they take an input and produce an output. For example, we can write a function that takes an integer as input and tells us whether the number is even or odd:",
"_____no_output_____"
]
],
[
[
"def is_even(x):\n # need to check if it's an integer. If not, raise an error\n if type(x) == int:\n if x % 2 == 0:\n return True\n else:\n return False\n else:\n print('This only works on integers')\n raise Exception('is_even only accepts integers')",
"_____no_output_____"
]
],
[
[
"The function takes a single input variable which we call `x`. It produces an output that is either `True` or `False` (both of which are special Boolean values in Python). The value that is produced by the function is often called its \"return value\" and is made explicit when we write something like `return True`. \n\nOne drawback of Python not declaring \"types\" (e.g. that `x` is guaranteed to be an integer) is that we cannot guarantee `x` will always be an integer. Therefore, we *should* explicitly check this. Here, we raise an \"exception\" which flags the error. Depending on your needs, you may choose not to do that and you may decide to handle those \"edge cases\" (unexpected inputs) in another way. There is no correct or incorrect way-- just different!\n\nUsing that function, we can re-run the loop we had earlier. Note that since the last item of our list is a string (\"a\"), our function raises the exception, which causes an error, as expected.",
"_____no_output_____"
]
],
[
[
"a_list = [0,5,6,7, 'a']\nfor x in a_list:\n if is_even(x):\n print('Even!')\n else:\n print('Odd')",
"Even!\nOdd\nEven!\nOdd\nThis only works on integers\n"
]
],
[
[
"### This was a very trivial example, but one can imagine how this pattern is useful. \n\n#### By using functions, we can \"package\" code that is ready to use and is appropriately general. For instance, someone could write a parser for arbitrary BAM files (a special alignment format for sequence reads) and distribute that to the community. Then, assuming it was done correctly, anyone using Python can now use that code without having to know all the details about BAM files, their compression, storage, etc. The `pysam` library is a popular Python package for doing exactly this.\n\n### The VERY brief introduction above is not meant to be comprehensive and we will encounter new syntax and situations as we progress through this course. PLEASE stop me if you have any questions.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e756fe1e641325942e7d67e7837d4fde7abc8fbe | 630,448 | ipynb | Jupyter Notebook | fe588/FE588 Fall 2018.ipynb | atcemgil/notes | 380d310a87767d9b1fe88229588dfe00a61d2353 | [
"MIT"
] | 191 | 2016-01-21T19:44:23.000Z | 2022-03-25T20:50:50.000Z | fe588/FE588 Fall 2018.ipynb | ShakirSofi/notes | d6388ab38c734c341f5916b2d03189dfe4962edb | [
"MIT"
] | 2 | 2018-02-18T03:41:04.000Z | 2018-11-21T11:08:49.000Z | fe588/FE588 Fall 2018.ipynb | atcemgil/notes | 380d310a87767d9b1fe88229588dfe00a61d2353 | [
"MIT"
] | 138 | 2015-10-04T21:57:21.000Z | 2021-06-15T19:35:55.000Z | 96.947255 | 36,506 | 0.777898 | [
[
[
"3+5",
"_____no_output_____"
],
[
"3*6",
"_____no_output_____"
],
[
"7 // 4",
"_____no_output_____"
],
[
"x = 5",
"_____no_output_____"
],
[
"x*6",
"_____no_output_____"
],
[
"y = x",
"_____no_output_____"
],
[
"x = x*2",
"_____no_output_____"
],
[
"m = 12\nv = 3.8\n\nE = 1/2*m*v**2\nprint('Energy {}*{}^2/2 = {} '.format(m, v, E))",
"Energy 12*3.8^2/2 = 86.64 \n"
],
[
"import math\n\nmath.sqrt(2)\nmath.pow(3, 2.3)",
"_____no_output_____"
],
[
"print(1, math.sqrt(1))\nprint(2, math.sqrt(2))\nprint(3, math.sqrt(3))\nprint(4, math.sqrt(4))",
"1 1.0\n2 1.4142135623730951\n3 1.7320508075688772\n4 2.0\n"
]
],
[
[
"Computes $y = \\sqrt{x}$",
"_____no_output_____"
]
],
[
[
"# My comment\nfor i in range(1,11):\n print(i, math.sqrt(i))",
"1 1.0\n2 1.4142135623730951\n3 1.7320508075688772\n4 2.0\n5 2.23606797749979\n6 2.449489742783178\n7 2.6457513110645907\n8 2.8284271247461903\n9 3.0\n10 3.1622776601683795\n"
],
[
"for i in range(1,11):\n for j in range(1,11):\n print(i*j, end=' ')\n print()",
"1 2 3 4 5 6 7 8 9 10 \n2 4 6 8 10 12 14 16 18 20 \n3 6 9 12 15 18 21 24 27 30 \n4 8 12 16 20 24 28 32 36 40 \n5 10 15 20 25 30 35 40 45 50 \n6 12 18 24 30 36 42 48 54 60 \n7 14 21 28 35 42 49 56 63 70 \n8 16 24 32 40 48 56 64 72 80 \n9 18 27 36 45 54 63 72 81 90 \n10 20 30 40 50 60 70 80 90 100 \n"
],
[
"for i in range(2,41):\n print(i/2, (i/2)**2, (i/2)**3, end=\"\")\n\n \n \n",
"_____no_output_____"
],
[
"x_2 = 0\nx_1 = 1\nprint(x_1, end=\"\\n\")\nfor i in range(3):\n y = x_1 + x_2\n print(x_2, x_1, y, end=\"\\n\")\n x_2 = x_1\n x_1 = y\n",
"1\n0 1 1\n1 1 2\n1 2 3\n"
],
[
"import math\n\nr = 0.24\nT = 13\nS0 = 100\nfor i in range(T):\n print(i, S0*math.exp(r*i/12))\n\n ",
"0 100.0\n1 102.02013400267558\n2 104.08107741923882\n3 106.18365465453596\n4 108.32870676749586\n5 110.51709180756477\n6 112.74968515793758\n7 115.02737988572274\n8 117.35108709918103\n9 119.72173631218101\n10 122.14027581601698\n11 124.60767305873807\n12 127.12491503214048\n"
],
[
"import matplotlib.pylab as plt\n\nr = 0.24\nT = 130\nS0 = 100\nS = [S0*math.exp(r*i/12) for i in range(T)]\n\nplt.plot(range(T), S, 'o-')\nplt.xlabel('month')\nplt.show()\n",
"_____no_output_____"
],
[
"plt.plot([1,2,5],'ro-')\nplt.show()",
"_____no_output_____"
],
[
"[2*i for i in range(6)]",
"_____no_output_____"
],
[
"[i*i for i in [j*j for j in [0,1,2]]]",
"_____no_output_____"
],
[
"[[j for j in range(i)] for i in range(5)]",
"_____no_output_____"
],
[
"[[j,j**2,j**3] for j in range(10)]",
"_____no_output_____"
]
],
[
[
"$\\pi\\alpha \\frac{3}{\\eta}$",
"_____no_output_____"
]
],
[
[
"plt.rcParams['text.usetex'] = True\nplt.rcParams['text.latex.unicode'] = True\n\nTH = [th/360*2*math.pi for th in range(-360,361)]\ns = [math.sin(th/360*2*math.pi) for th in range(-360,361)]\nc = [math.cos(th/360*2*math.pi) for th in range(-360,361)]\n\nplt.plot(TH, s, 'r')\nplt.plot(TH, c, 'b')\nplt.xticks([-2*math.pi, 0, 2*math.pi],[r'$2\\pi$','$0$','$2\\pi$'])\nplt.show()",
"_____no_output_____"
],
[
"import numpy as np\n\n[np.random.randn() for i in range(100)]",
"_____no_output_____"
],
[
"import numpy as np\nimport matplotlib.pylab as plt\n\nplt.plot(np.random.randn(100))\nplt.show()",
"_____no_output_____"
],
[
"x = np.random.randn(10000)\nplt.hist(x, bins=200)\nplt.show()",
"_____no_output_____"
],
[
"x = [2,3,7,9]\n\nsum = 0.\nfor i in range(len(x)):\n sum+= x[i]\n\nprint(sum/len(x))",
"5.25\n"
],
[
"Sum = 0.\nfor e in x:\n Sum += e\n\nprint(Sum/len(x))",
"_____no_output_____"
],
[
"sig = 0.5\nN = 100\nL = 10\ne = [sig*np.random.randn() for i in range(N)]\n\nx = [0]*N\nfor i in range(N-1):\n x[i+1] = x[i] + e[i]\n\ny = [0]*N\n\nfor i in range(N):\n y[i] = np.mean(x[max(i-L,0):(i+1)])\n \n\nplt.plot(x)\n\nplt.plot(y, 'r')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Algorithmic trading",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nplt.figure(figsize=(12,5))\n\nL = 30 # Moving average window\nN = 500 # Number of timesteps\n\n# Generate Gaussian noise\ne = np.random.randn(N)\n\n# Brownian walk\ny = np.zeros_like(e)\n\ny[0] = e[0]\nfor t in range(1,N):\n y[t] = y[t-1] + e[t]\n\nmav = np.zeros_like(e)\n\nfor t in range(1,N):\n idx0 = max(0, t-L)\n mav[t] = np.sum(y[idx0:t])/L\n \n #print(len(y[idx0:t]))\n\nbuy = []\nsell = []\n\nfor t in range(1,N):\n if y[t-1]<mav[t-1] and y[t]>mav[t]:\n buy.append(t)\n \n if y[t-1]>mav[t-1] and y[t]<mav[t]:\n sell.append(t)\n \n\nplt.plot(y)\nplt.plot(mav)\nplt.plot(sell, mav[sell],'vr')\nplt.plot(buy,mav[buy], '^b')\nplt.show()\n",
"_____no_output_____"
],
[
"e[1:8]",
"_____no_output_____"
],
[
"v = [0,1,2,3,4,5]\n\nv[-1:-6:-2]",
"_____no_output_____"
],
[
"a = 5\nb = 6\nmu = 1\nsig = 3\n\nN = 100000\n\nz = mu + sig*np.random.randn(N)\n\n#plt.hist(z, bins=50)\n#plt.show()\n\ncount = 0\nfor i in range(N):\n if z[i]>=a and z[i]<=b:\n count+=1\n\nprint(count/N)\n",
"0.04329\n"
],
[
"import numpy as np\n\nnp.log(5)",
"_____no_output_____"
]
],
[
[
"European",
"_____no_output_____"
]
],
[
[
"S0 = 100\nr = 0.1\nT = 1\nsigma = 0.5\nK = 120\n\nNum = 10000\n\nopt = 'Put'\n#opt = 'Call'\n\nC_T = 0.0\nfor i in range(Num):\n S_T = S0*np.exp(T*(r - 0.5*sigma**2) + sigma*np.sqrt(T)*np.random.randn())\n if opt=='Call':\n C_T += np.max([S_T-K,0])\n else:\n C_T += np.max([K-S_T,0])\n \n \nC_T = C_T/Num\n\nprint('{}:'.format(opt), np.exp(-r*T)*C_T)\n\n",
"Put: 25.3531139008\n"
]
],
[
[
"Asian",
"_____no_output_____"
]
],
[
[
"S0 = 100\nr = 0.1\nT = 1\nsigma = 0.05\nK = 90\nN = 100\n\nNum = 10000\n\nC_T = 0.0\nfor i in range(Num):\n S = [0]*N\n S[0] = S0\n for n in range(1,N):\n S[n] = S[n-1]*np.exp(T/N*(r - 0.5*sigma**2) + sigma*np.sqrt(T/N)*np.random.randn())\n \n C_T += np.max([np.mean(S)-K,0])\n\nC_T = C_T/Num\nprint('Asian Call:', np.exp(-r*T)*C_T)",
"Asian Call: 13.6679815687\n"
]
],
[
[
"Lookback",
"_____no_output_____"
]
],
[
[
"S0 = 100\nr = 0.1\nT = 1\nsigma = 0.05\nK = 90\nN = 10\n\nNum = 10000\n\nC_T = 0.0\nfor i in range(Num):\n S = [0]*N\n S[0] = S0\n for n in range(1,N):\n S[n] = S[n-1]*np.exp(T/N*(r - 0.5*sigma**2) + sigma*np.sqrt(T/N)*np.random.randn())\n \n C_T += np.max([np.max(S)-K,0])\n\nC_T = C_T/Num\nprint('Lookback Call:', np.exp(-r*T)*C_T)",
"Lookback Call: 18.0746854795\n"
]
],
[
[
"Floating Lookback",
"_____no_output_____"
]
],
[
[
"S0 = 100\nr = 0.1\nT = 1\nsigma = 0.05\nK = 90\nN = 10\n\nNum = 10000\n\nC_T = 0.0\nfor i in range(Num):\n S = [0]*N\n S[0] = S0\n for n in range(1,N):\n S[n] = S[n-1]*np.exp(T/N*(r - 0.5*sigma**2) + sigma*np.sqrt(T/N)*np.random.randn())\n \n C_T += np.max([S[-1]-np.min(S),0])\n\nC_T = C_T/Num\nprint('Floating Lookback Call:', np.exp(-r*T)*C_T)",
"Floating Lookback Call: 9.03133666784\n"
],
[
"x = -3.7\n\nif x<0:\n z = -1\nelse:\n z = 1\n \nz = -1 if x<0 else 1",
"_____no_output_____"
],
[
"z",
"_____no_output_____"
],
[
"[i*2 for i in range(5)]\n\ni = 0\nwhile i<10:\n i+=2\n print(i)",
"2\n4\n6\n8\n10\n"
],
[
"import cmath\n\ncmath.phase(1+2j)",
"_____no_output_____"
],
[
"1+3j + 2-5j",
"_____no_output_____"
],
[
"L = [['c',1], ['z',2], ['a',2]]\n\nL.sort(key=lambda x: x[1])\nprint(L)\n#L.sort(key=lambda x: x[0])\n#print(L)",
"[['c', 1], ['z', 2], ['a', 2]]\n"
],
[
"L",
"_____no_output_____"
],
[
"f = open('grades.txt')\nN = int(f.readline())\n\nL = []\nfor i in range(N):\n name = str(f.readline()).rstrip()\n grade = float(f.readline())\n #print(name, grade)\n L.append([name, grade])\n\nL.sort(key=lambda x:x[0]) \nL.sort(key=lambda x:x[1])\n#print(L)\n\nif len(L)<1:\n print('')\nelse:\n i = 0\n for z in L:\n if i==0:\n mn = z[1]\n\n if mn<z[1]:\n break\n else:\n i+=1\n\n temp = L[i][1]\n for j in range(i, len(L)):\n if temp<L[j][1]:\n break\n else:\n print(L[j][0])\n\n \n \nf.close()",
"Ali\nAyse\nZeynep\n"
],
[
"L",
"_____no_output_____"
],
[
"USD 5.48\nGBP 6.99\nEUR 6.22\n",
"_____no_output_____"
],
[
"import pandas as pd",
"_____no_output_____"
],
[
"aapl = pd.read_csv(\"aapl.csv\", index_col=0, parse_dates=True)",
"_____no_output_____"
],
[
"aapl",
"_____no_output_____"
],
[
"import pandas as pd\nimport pandas_datareader as web\n\nimport datetime\n\nstart = datetime.datetime(2015, 1, 1)\nend = datetime.datetime(2018, 11, 5)\nmsft = web.DataReader(\"MSFT\", 'yahoo', start, end)\naapl = web.DataReader(\"AAPL\", 'yahoo', start, end)\n\n",
"_____no_output_____"
],
[
"intl = web.DataReader(\"INTL\", 'yahoo', start, end)",
"_____no_output_____"
],
[
"bist = web.DataReader(\"XU100.IS\", 'yahoo', start, end)",
"_____no_output_____"
],
[
"bist",
"_____no_output_____"
],
[
"intl",
"_____no_output_____"
],
[
"aapl",
"_____no_output_____"
],
[
"import matplotlib.pylab as plt\n\n\nplt.figure(figsize=(12,3))\naapl['Close'].plot()\nplt.show()",
"_____no_output_____"
],
[
"cols = ['Open', 'Close', 'Low', 'High', 'Volume']\n\nfor c in cols:\n plt.figure(figsize=(12,1))\n aapl['2015-08'][c].plot()\n plt.show()",
"_____no_output_____"
],
[
"aapl['2015-8-21':'2015-10-11']",
"_____no_output_____"
],
[
"dates = ['2015-08','2016-08','2017-08','2018-08']\nfor d in dates:\n plt.figure(figsize=(12,1))\n aapl[d]['Open'].plot()\n plt.show()",
"_____no_output_____"
],
[
"%matplotlib inline\naapl['2017-08'][['High','Low','Close']].plot();\n\n",
"_____no_output_____"
],
[
"aapl.plot(x='Volume', y='Close', kind='scatter')",
"_____no_output_____"
],
[
"df = pd.DataFrame({'a':[1,2,5],'b':[3, 9, 16], 'c':[10,11,12]},index=[1,2,3])",
"_____no_output_____"
],
[
"df.plot.bar();",
"_____no_output_____"
],
[
"df.plot.pie(subplots=True);",
"_____no_output_____"
],
[
"aapl.plot.hexbin(x='Volume',y='Open')",
"_____no_output_____"
],
[
"aapl['Open'].plot.hist(bins=200)\naapl['Open'].plot.density()",
"_____no_output_____"
],
[
"df[df.b % 2 == 0]",
"_____no_output_____"
],
[
"aapl.iloc[1:100:2]",
"_____no_output_____"
],
[
"df.loc[:,'b':'c']",
"_____no_output_____"
],
[
"df2 = pd.DataFrame({'a':[7,-1,3,5],'b':[1, 1, 1,1], 'c':[10,11,12,18]},index=[7,9,15,18])",
"_____no_output_____"
],
[
"df3 = pd.concat([df, df2])",
"_____no_output_____"
],
[
"df3[df3.a>=0]",
"_____no_output_____"
],
[
"aapl['Volume'][aapl.High - aapl.Low > 3].plot(style='.');",
"_____no_output_____"
],
[
"import math\n\ndef circle_area(radius):\n return radius**2*math.pi\n\ncircle_area(radius=1)",
"_____no_output_____"
],
[
"def rect_area(height, width):\n return height*width",
"_____no_output_____"
],
[
"rect_area(width=3, height=2)",
"_____no_output_____"
]
],
[
[
"Finding the root of a function",
"_____no_output_____"
]
],
[
[
"def f(x):\n return x**2 - 2\n\ndef sgn(x):\n return 1 if x>0 else -1\n\ndef find_root(a, b, epsilon=0.0000001):\n left = f(a)\n right = f(b)\n \n sgn_l = sgn(left)\n sgn_r = sgn(right)\n \n if sgn_l == sgn_r:\n error('No root in the interval')\n \n while (right-left>epsilon):\n mid = (right+left)/2\n f_mid = f(mid)\n sgn_mid = sgn(f_mid)\n if sgn_l==sgn_mid:\n left = mid\n else:\n right = mid\n \n print(left, right)\n \n return (right+left)/2\n\n\nprint(find_root(1,3))\nprint(math.sqrt(2))",
"-1 3.0\n1.0 3.0\n1.0 2.0\n1.0 1.5\n1.25 1.5\n1.375 1.5\n1.375 1.4375\n1.40625 1.4375\n1.40625 1.421875\n1.4140625 1.421875\n1.4140625 1.41796875\n1.4140625 1.416015625\n1.4140625 1.4150390625\n1.4140625 1.41455078125\n1.4140625 1.414306640625\n1.4141845703125 1.414306640625\n1.4141845703125 1.41424560546875\n1.4141845703125 1.414215087890625\n1.4141998291015625 1.414215087890625\n1.4142074584960938 1.414215087890625\n1.4142112731933594 1.414215087890625\n1.4142131805419922 1.414215087890625\n1.4142131805419922 1.4142141342163086\n1.4142131805419922 1.4142136573791504\n1.4142134189605713 1.4142136573791504\n1.4142135381698608 1.4142136573791504\n1.4142135381698608 1.4142135977745056\n1.4142135679721832\n1.4142135623730951\n"
],
[
"import matplotlib.pylab as plt\nimport numpy as np\n\ndef f(x):\n return x**2 - 2\n\ndef sgn(x):\n return 1 if x>0 else -1\n\ndef find_root(a, b, epsilon=0.0000001):\n left = a\n right = b\n \n A = [left]\n B = [right]\n \n sgn_l = sgn(left)\n sgn_r = sgn(right)\n \n if sgn_l == sgn_r:\n error('No root in the interval')\n \n while (right-left>epsilon):\n mid = (right+left)/2\n f_mid = f(mid)\n sgn_mid = sgn(f_mid)\n if sgn_l==sgn_mid:\n left = mid\n A.append(left)\n else:\n right = mid\n B.append(right)\n \n #print(left, right)\n \n return A, B\n\nx = np.linspace(-3,3)\nplt.plot(x, f(x), 'k-')\n\nA, B = find_root(0, 3)\nplt.plot(A, np.zeros_like(A), 'ro')\nplt.plot(B, np.zeros_like(B), 'bo')\n\nfor a in A:\n plt.plot([a, a], [0, f(a)], ':r')\n \nfor b in B:\n plt.plot([b, b], [0, f(b)], ':b')\n\nplt.show()\n",
"_____no_output_____"
],
[
"B",
"_____no_output_____"
],
[
"%matplotlib inline\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n\n",
"_____no_output_____"
],
[
"x = [1,2,5,8,1,3,5]\n\nfor i in range(len(x)):\n x[i] = x[i]*3\n\nx",
"_____no_output_____"
],
[
"x = [z*3 for z in x]",
"_____no_output_____"
],
[
"x = np.array([1,2,5,8,1,3,5])\nx*3",
"_____no_output_____"
],
[
"1 2\n2 5\n5 8\n8 1\n1 3\n3 5\n",
"_____no_output_____"
],
[
"for i in range(len(x)-1):\n for j in range(2):\n print(x[i+j], end=' ')\n print('')",
"1 2 \n2 5 \n5 8 \n8 1 \n1 3 \n3 5 \n"
]
],
[
[
"1 2 \n2 5 \n5 8 \n8 1 \n1 3 \n3 5 ",
"_____no_output_____"
]
],
[
[
"N = 4\nfor i in range(len(x)-N+1):\n for j in range(N):\n print(x[i+N-1-j], end=' ')\n print('')",
"8 5 2 1 \n1 8 5 2 \n3 1 8 5 \n5 3 1 8 \n"
]
],
[
[
"Binary Search",
"_____no_output_____"
]
],
[
[
"L = [1,3,4,7,8,9,12]\n\ni = 0\nj = len(L)-1\nx = 12\n\n\nfound = False\nwhile (i<=j):\n mid = (i+j)//2\n if L[mid]==x:\n found = True\n break\n elif L[mid]<x:\n i = mid+1\n elif L[mid]>x:\n j = mid-1\n\nif found:\n print('Found')\nelse:\n print('Not Found')",
"Found\n"
]
],
[
[
"1. Bisection\n2. Pi area\n3. Tail probability\n4. Pairwise distance\n\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7570feb6df4fa7c140f0416b10cda0d307803d4 | 13,094 | ipynb | Jupyter Notebook | PO_class/assignment_1/gurobi/instace_examples.ipynb | ItamarRocha/Operations-Research | 55c4d54959555c3b9d54641e76eb6cfb2c011a2c | [
"MIT"
] | 7 | 2020-07-04T01:50:12.000Z | 2021-06-03T21:54:52.000Z | PO_class/assignment_1/gurobi/instace_examples.ipynb | ItamarRocha/Operations-Research | 55c4d54959555c3b9d54641e76eb6cfb2c011a2c | [
"MIT"
] | null | null | null | PO_class/assignment_1/gurobi/instace_examples.ipynb | ItamarRocha/Operations-Research | 55c4d54959555c3b9d54641e76eb6cfb2c011a2c | [
"MIT"
] | null | null | null | 26.887064 | 74 | 0.471285 | [
[
[
"#import data\n#from data import Data\nimport data\nfrom dataQuestion import Data",
"_____no_output_____"
]
],
[
[
"## Instance 1\n\n<p align=\"center\">\n<img src=\"imgs/i1.png\" >\n</p>",
"_____no_output_____"
]
],
[
[
"instance1 = Data('instances/instance1.txt')\ninstance1.gurobi_solver()",
"Using license file /home/jpvt/gurobi.lic\nAcademic license - for non-commercial use only\nGurobi Optimizer version 9.0.2 build v9.0.2rc0 (linux64)\nOptimize a model with 20 rows, 13 columns and 39 nonzeros\nModel fingerprint: 0x83656b1a\nCoefficient statistics:\n Matrix range [1e+00, 1e+00]\n Objective range [1e+00, 1e+00]\n Bounds range [0e+00, 0e+00]\n RHS range [2e+00, 2e+01]\nPresolve removed 16 rows and 4 columns\nPresolve time: 0.00s\nPresolved: 4 rows, 9 columns, 16 nonzeros\n\nIteration Objective Primal Inf. Dual Inf. Time\n 0 1.0000000e+00 8.000000e+00 0.000000e+00 0s\n 4 6.0000000e+00 0.000000e+00 0.000000e+00 0s\n\nSolved in 4 iterations and 0.00 seconds\nOptimal objective 6.000000000e+00\n\nOptimal flows for Commodity:\n1 -> 2: 4\n1 -> 3: 8\n1 -> 4: 3\n2 -> 5: 4\n3 -> 5: 6\n3 -> 6: 2\n4 -> 6: 3\n5 -> 7: 10\n6 -> 7: 5\n1 -> 7: 6\n\nMaximum Flow for the instace: 15\nExecution Time: 6.185 ms\n"
]
],
[
[
"## Instance 2\n\n<p align=\"center\">\n<img src=\"imgs/i2.png\" >\n</p>",
"_____no_output_____"
]
],
[
[
"instance2 = Data('instances/instance2.txt')\ninstance2.gurobi_solver()",
"Gurobi Optimizer version 9.0.2 build v9.0.2rc0 (linux64)\nOptimize a model with 22 rows, 14 columns and 42 nonzeros\nModel fingerprint: 0xcd6ccbdb\nCoefficient statistics:\n Matrix range [1e+00, 1e+00]\n Objective range [1e+00, 1e+00]\n Bounds range [0e+00, 0e+00]\n RHS range [1e+00, 4e+01]\nPresolve removed 18 rows and 7 columns\nPresolve time: 0.00s\nPresolved: 4 rows, 7 columns, 12 nonzeros\n\nIteration Objective Primal Inf. Dual Inf. Time\n 0 5.0000000e+00 2.700000e+01 0.000000e+00 0s\n 6 5.0000000e+00 0.000000e+00 0.000000e+00 0s\n\nSolved in 6 iterations and 0.01 seconds\nOptimal objective 5.000000000e+00\n\nOptimal flows for Commodity:\n1 -> 2: 20\n1 -> 5: 1\n1 -> 6: 12\n2 -> 3: 8\n2 -> 6: 1\n2 -> 7: 11\n3 -> 4: 8\n4 -> 8: 8\n5 -> 6: 1\n6 -> 7: 14\n7 -> 8: 25\n1 -> 8: 5\n\nMaximum Flow for the instace: 33\nExecution Time: 8.968 ms\n"
]
],
[
[
"## Instance 3\n\n<p align=\"center\">\n<img src=\"imgs/i3.png\" >\n</p>",
"_____no_output_____"
]
],
[
[
"instance3 = Data('instances/instance3.txt')\ninstance3.gurobi_solver()",
"Gurobi Optimizer version 9.0.2 build v9.0.2rc0 (linux64)\nOptimize a model with 24 rows, 16 columns and 48 nonzeros\nModel fingerprint: 0xbe752641\nCoefficient statistics:\n Matrix range [1e+00, 1e+00]\n Objective range [1e+00, 1e+00]\n Bounds range [0e+00, 0e+00]\n RHS range [4e+00, 3e+01]\nPresolve removed 19 rows and 5 columns\nPresolve time: 0.00s\nPresolved: 5 rows, 11 columns, 19 nonzeros\n\nIteration Objective Primal Inf. Dual Inf. Time\n 0 0.0000000e+00 1.950000e+01 0.000000e+00 0s\n 5 2.0000000e+00 0.000000e+00 0.000000e+00 0s\n\nSolved in 5 iterations and 0.01 seconds\nOptimal objective 2.000000000e+00\n\nOptimal flows for Commodity:\n1 -> 2: 10\n1 -> 3: 5\n1 -> 4: 13\n2 -> 5: 9\n2 -> 6: 1\n3 -> 6: 8\n4 -> 7: 13\n5 -> 8: 9\n6 -> 8: 9\n7 -> 3: 3\n7 -> 8: 10\n1 -> 8: 2\n\nMaximum Flow for the instace: 28\nExecution Time: 10.231 ms\n"
]
],
[
[
"## Instance 4\n\n<p align=\"center\">\n<img src=\"imgs/i4.png\" >\n</p>",
"_____no_output_____"
]
],
[
[
"instance4 = Data('instances/instance4.txt')\ninstance4.gurobi_solver()",
"Gurobi Optimizer version 9.0.2 build v9.0.2rc0 (linux64)\nOptimize a model with 13 rows, 9 columns and 21 nonzeros\nModel fingerprint: 0xec335147\nCoefficient statistics:\n Matrix range [1e+00, 1e+00]\n Objective range [1e+00, 1e+00]\n Bounds range [0e+00, 0e+00]\n RHS range [2e+00, 2e+01]\nPresolve removed 11 rows and 5 columns\nPresolve time: 0.00s\nPresolved: 2 rows, 4 columns, 6 nonzeros\n\nIteration Objective Primal Inf. Dual Inf. Time\n 0 1.0000000e+00 3.500000e+00 0.000000e+00 0s\n 1 3.0000000e+00 0.000000e+00 0.000000e+00 0s\n\nSolved in 1 iterations and 0.00 seconds\nOptimal objective 3.000000000e+00\n\nOptimal flows for Commodity:\n1 -> 2: 7\n1 -> 6: 8\n2 -> 3: 5\n2 -> 6: 2\n3 -> 4: 7\n5 -> 3: 2\n5 -> 4: 8\n1 -> 4: 3\n\nMaximum Flow for the instace: 15\nExecution Time: 5.807 ms\n"
]
],
[
[
"## Instance 5\n\n<p align=\"center\">\n<img src=\"imgs/i5.png\" >\n</p>",
"_____no_output_____"
]
],
[
[
"instance5 = Data('instances/instance5.txt')\ninstance5.gurobi_solver()",
"Gurobi Optimizer version 9.0.2 build v9.0.2rc0 (linux64)\nOptimize a model with 17 rows, 11 columns and 33 nonzeros\nModel fingerprint: 0x52b28128\nCoefficient statistics:\n Matrix range [1e+00, 1e+00]\n Objective range [1e+00, 1e+00]\n Bounds range [0e+00, 0e+00]\n RHS range [4e+00, 3e+01]\nPresolve removed 14 rows and 5 columns\nPresolve time: 0.00s\nPresolved: 3 rows, 6 columns, 10 nonzeros\n\nIteration Objective Primal Inf. Dual Inf. Time\n 0 0.0000000e+00 9.500000e+00 0.000000e+00 0s\n 4 6.0000000e+00 0.000000e+00 0.000000e+00 0s\n\nSolved in 4 iterations and 0.00 seconds\nOptimal objective 6.000000000e+00\n\nOptimal flows for Commodity:\n1 -> 2: 10\n1 -> 3: 13\n2 -> 3: 2\n2 -> 4: 12\n3 -> 2: 4\n3 -> 5: 11\n4 -> 6: 19\n5 -> 4: 7\n5 -> 6: 4\n1 -> 6: 6\n\nMaximum Flow for the instace: 23\nExecution Time: 7.206 ms\n"
]
],
[
[
"## Instance 6\n\n<p align=\"center\">\n<img src=\"imgs/i6.png\" >\n</p>",
"_____no_output_____"
]
],
[
[
"instance6 = Data('instances/instance6.txt')\ninstance6.gurobi_solver()",
"Gurobi Optimizer version 9.0.2 build v9.0.2rc0 (linux64)\nOptimize a model with 37 rows, 22 columns and 66 nonzeros\nModel fingerprint: 0xa62c8fc4\nCoefficient statistics:\n Matrix range [1e+00, 1e+00]\n Objective range [1e+00, 1e+00]\n Bounds range [0e+00, 0e+00]\n RHS range [1e+00, 5e+00]\nPresolve removed 37 rows and 22 columns\nPresolve time: 0.00s\nPresolve: All rows and columns removed\nIteration Objective Primal Inf. Dual Inf. Time\n 0 2.0000000e+00 0.000000e+00 0.000000e+00 0s\n\nSolved in 0 iterations and 0.00 seconds\nOptimal objective 2.000000000e+00\n\nOptimal flows for Commodity:\n1 -> 3: 1\n1 -> 4: 1\n1 -> 5: 1\n3 -> 7: 1\n4 -> 9: 1\n5 -> 11: 1\n7 -> 12: 1\n9 -> 13: 1\n11 -> 14: 1\n12 -> 15: 1\n13 -> 15: 1\n14 -> 15: 1\n1 -> 15: 2\n\nMaximum Flow for the instace: 3\nExecution Time: 5.969 ms\n"
]
],
[
[
"## Instance 7\n\n<p align=\"center\">\n<img src=\"imgs/i7.png\" >\n</p>",
"_____no_output_____"
]
],
[
[
"instance7 = Data('instances/instance7.txt')\ninstance7.gurobi_solver()",
"Gurobi Optimizer version 9.0.2 build v9.0.2rc0 (linux64)\nOptimize a model with 36 rows, 25 columns and 75 nonzeros\nModel fingerprint: 0x7a1ac2be\nCoefficient statistics:\n Matrix range [1e+00, 1e+00]\n Objective range [1e+00, 1e+00]\n Bounds range [0e+00, 0e+00]\n RHS range [5e+00, 7e+01]\nPresolve removed 27 rows and 2 columns\nPresolve time: 0.00s\nPresolved: 9 rows, 23 columns, 42 nonzeros\n\nIteration Objective Primal Inf. Dual Inf. Time\n 0 4.9740000e+00 5.752600e+01 0.000000e+00 0s\n 10 5.0000000e+00 0.000000e+00 0.000000e+00 0s\n\nSolved in 10 iterations and 0.01 seconds\nOptimal objective 5.000000000e+00\n\nOptimal flows for Commodity:\n1 -> 2: 20\n1 -> 3: 25\n1 -> 4: 20\n2 -> 5: 20\n2 -> 7: 5\n3 -> 2: 5\n3 -> 6: 20\n4 -> 7: 20\n5 -> 8: 16\n5 -> 10: 4\n6 -> 8: 5\n6 -> 9: 15\n7 -> 9: 20\n7 -> 10: 5\n8 -> 9: 1\n8 -> 11: 20\n9 -> 11: 30\n9 -> 10: 6\n10 -> 11: 15\n1 -> 11: 5\n\nMaximum Flow for the instace: 65\nExecution Time: 11.142 ms\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e757339b62b1a148678ab1e9d20e09fabfd5bd31 | 72,804 | ipynb | Jupyter Notebook | 4-assets/BOOKS/Jupyter-Notebooks/Overflow/Projection.ipynb | impastasyndrome/Lambda-Resource-Static-Assets | 7070672038620d29844991250f2476d0f1a60b0a | [
"MIT"
] | null | null | null | 4-assets/BOOKS/Jupyter-Notebooks/Overflow/Projection.ipynb | impastasyndrome/Lambda-Resource-Static-Assets | 7070672038620d29844991250f2476d0f1a60b0a | [
"MIT"
] | null | null | null | 4-assets/BOOKS/Jupyter-Notebooks/Overflow/Projection.ipynb | impastasyndrome/Lambda-Resource-Static-Assets | 7070672038620d29844991250f2476d0f1a60b0a | [
"MIT"
] | 1 | 2021-11-05T07:48:26.000Z | 2021-11-05T07:48:26.000Z | 260.946237 | 26,267 | 0.888193 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7573413595b6bacbe6aeac0a68c1aa5dd3435c5 | 260,229 | ipynb | Jupyter Notebook | docs/source/examples/expansion-filters.ipynb | gridley/openmc | 19705bbe7f1034d7c62b72a69a530b1e7ba88659 | [
"MIT"
] | 2 | 2016-01-10T13:14:35.000Z | 2019-05-05T10:18:12.000Z | docs/source/examples/expansion-filters.ipynb | xiaopingguo165/openmc | 0963a3dce03c2aee02411532424d179b9acbb4ad | [
"MIT"
] | 9 | 2015-03-14T12:18:06.000Z | 2021-04-01T15:23:23.000Z | docs/source/examples/expansion-filters.ipynb | xiaopingguo165/openmc | 0963a3dce03c2aee02411532424d179b9acbb4ad | [
"MIT"
] | 4 | 2017-07-31T21:03:25.000Z | 2020-03-22T20:54:48.000Z | 167.998063 | 56,300 | 0.881132 | [
[
[
"# Functional Expansions\nOpenMC's general tally system accommodates a wide range of tally *filters*. While most filters are meant to identify regions of phase space that contribute to a tally, there is a special set of functional expansion filters that will multiply the tally by a set of orthogonal functions, e.g. Legendre polynomials, so that continuous functions of space or angle can be reconstructed from the tallied moments.\n\nIn this example, we will determine the spatial dependence of the flux along the $z$ axis by making a Legendre polynomial expansion. Let us represent the flux along the $z$ axis, $\\phi(z)$, by the function\n\n$$ \\phi(z') = \\sum\\limits_{n=0}^N a_n P_n(z') $$\n\nwhere $z'$ is the position normalized to the range [-1, 1]. Since $P_n(z')$ are known functions, our only task is to determine the expansion coefficients, $a_n$. By the orthogonality properties of the Legendre polynomials, one can deduce that the coefficients, $a_n$, are given by\n\n$$ a_n = \\frac{2n + 1}{2} \\int_{-1}^1 dz' P_n(z') \\phi(z').$$\n\nThus, the problem reduces to finding the integral of the flux times each Legendre polynomial -- a problem which can be solved by using a Monte Carlo tally. By using a Legendre polynomial filter, we obtain stochastic estimates of these integrals for each polynomial order.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport openmc\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"To begin, let us first create a simple model. The model will be a slab of fuel material with reflective boundaries conditions in the x- and y-directions and vacuum boundaries in the z-direction. However, to make the distribution slightly more interesting, we'll put some B<sub>4</sub>C in the middle of the slab.",
"_____no_output_____"
]
],
[
[
"# Define fuel and B4C materials\nfuel = openmc.Material()\nfuel.add_element('U', 1.0, enrichment=4.5)\nfuel.add_nuclide('O16', 2.0)\nfuel.set_density('g/cm3', 10.0)\n\nb4c = openmc.Material()\nb4c.add_element('B', 4.0)\nb4c.add_element('C', 1.0)\nb4c.set_density('g/cm3', 2.5)",
"_____no_output_____"
],
[
"# Define surfaces used to construct regions\nzmin, zmax = -10., 10.\nbox = openmc.model.rectangular_prism(10., 10., boundary_type='reflective')\nbottom = openmc.ZPlane(z0=zmin, boundary_type='vacuum')\nboron_lower = openmc.ZPlane(z0=-0.5)\nboron_upper = openmc.ZPlane(z0=0.5)\ntop = openmc.ZPlane(z0=zmax, boundary_type='vacuum')\n\n# Create three cells and add them to geometry\nfuel1 = openmc.Cell(fill=fuel, region=box & +bottom & -boron_lower)\nabsorber = openmc.Cell(fill=b4c, region=box & +boron_lower & -boron_upper)\nfuel2 = openmc.Cell(fill=fuel, region=box & +boron_upper & -top)\ngeom = openmc.Geometry([fuel1, absorber, fuel2])",
"_____no_output_____"
]
],
[
[
"For the starting source, we'll use a uniform distribution over the entire box geometry.",
"_____no_output_____"
]
],
[
[
"settings = openmc.Settings()\nspatial_dist = openmc.stats.Box(*geom.bounding_box)\nsettings.source = openmc.Source(space=spatial_dist)\nsettings.batches = 210\nsettings.inactive = 10\nsettings.particles = 1000",
"_____no_output_____"
]
],
[
[
"Defining the tally is relatively straightforward. One simply needs to list 'flux' as a score and then add an expansion filter. For this case, we will want to use the `SpatialLegendreFilter` class which multiplies tally scores by Legendre polynomials evaluated on normalized spatial positions along an axis.",
"_____no_output_____"
]
],
[
[
"# Create a flux tally\nflux_tally = openmc.Tally()\nflux_tally.scores = ['flux']\n\n# Create a Legendre polynomial expansion filter and add to tally\norder = 8\nexpand_filter = openmc.SpatialLegendreFilter(order, 'z', zmin, zmax)\nflux_tally.filters.append(expand_filter)",
"_____no_output_____"
]
],
[
[
"The last thing we need to do is create a `Tallies` collection and export the entire model, which we'll do using the `Model` convenience class.",
"_____no_output_____"
]
],
[
[
"tallies = openmc.Tallies([flux_tally])\nmodel = openmc.model.Model(geometry=geom, settings=settings, tallies=tallies)",
"_____no_output_____"
]
],
[
[
"Running a simulation is now as simple as calling the `run()` method of `Model`.",
"_____no_output_____"
]
],
[
[
"sp_file = model.run(output=False)",
"_____no_output_____"
]
],
[
[
"Now that the run is finished, we need to load the results from the statepoint file.",
"_____no_output_____"
]
],
[
[
"with openmc.StatePoint(sp_file) as sp:\n df = sp.tallies[flux_tally.id].get_pandas_dataframe()",
"_____no_output_____"
]
],
[
[
"We've used the `get_pandas_dataframe()` method that returns tally data as a Pandas dataframe. Let's see what the raw data looks like.",
"_____no_output_____"
]
],
[
[
"df",
"_____no_output_____"
]
],
[
[
"Since the expansion coefficients are given as\n\n$$ a_n = \\frac{2n + 1}{2} \\int_{-1}^1 dz' P_n(z') \\phi(z')$$\n\nwe just need to multiply the Legendre moments by $(2n + 1)/2$.",
"_____no_output_____"
]
],
[
[
"n = np.arange(order + 1)\na_n = (2*n + 1)/2 * df['mean']",
"_____no_output_____"
]
],
[
[
"To plot the flux distribution, we can use the `numpy.polynomial.Legendre` class which represents a truncated Legendre polynomial series. Since we really want to plot $\\phi(z)$ and not $\\phi(z')$ we first need to perform a change of variables. Since\n\n$$ \\lvert \\phi(z) dz \\rvert = \\lvert \\phi(z') dz' \\rvert $$\n\nand, for this case, $z = 10z'$, it follows that\n\n$$ \\phi(z) = \\frac{\\phi(z')}{10} = \\sum_{n=0}^N \\frac{a_n}{10} P_n(z'). $$",
"_____no_output_____"
]
],
[
[
"phi = np.polynomial.Legendre(a_n/10, domain=(zmin, zmax))",
"_____no_output_____"
]
],
[
[
"Let's plot it and see how our flux looks!",
"_____no_output_____"
]
],
[
[
"z = np.linspace(zmin, zmax, 1000)\nplt.plot(z, phi(z))\nplt.xlabel('Z position [cm]')\nplt.ylabel('Flux [n/src]')",
"_____no_output_____"
]
],
[
[
"As you might expect, we get a rough cosine shape but with a flux depression in the middle due to the boron slab that we introduced. To get a more accurate distribution, we'd likely need to use a higher order expansion.\n\nOne more thing we can do is confirm that integrating the distribution gives us the same value as the first moment (since $P_0(z') = 1$). This can easily be done by numerically integrating using the trapezoidal rule:",
"_____no_output_____"
]
],
[
[
"np.trapz(phi(z), z)",
"_____no_output_____"
]
],
[
[
"In addition to being able to tally Legendre moments, there are also functional expansion filters available for spherical harmonics (`SphericalHarmonicsFilter`) and Zernike polynomials over a unit disk (`ZernikeFilter`). A separate `LegendreFilter` class can also be used for determining Legendre scattering moments (i.e., an expansion of the scattering cosine, $\\mu$).",
"_____no_output_____"
],
[
"## Zernike polynomials\n\nNow let's look at an example of functional expansion tallies using Zernike polynomials as the basis functions.\n\nIn this example, we will determine the spatial dependence of the flux along the radial direction $r'$ and $/$ or azimuthal angle $\\theta$ by making a Zernike polynomial expansion. Let us represent the flux along the radial and azimuthal direction, $\\phi(r', \\theta)$, by the function\n\n$$ \\phi(r', \\theta) = \\sum\\limits_{n=0}^N \\sum\\limits_{m=-n}^n a_n^m Z_n^m(r', \n\\theta) $$\n\nwhere $r'$ is the position normalized to the range [0, r] (r is the radius of cylindrical geometry), and the azimuthal lies within the range [0, $ 2\\pi$]. \n\nSince $Z_n^m(r', \\theta)$ are known functions, we need to determine the expansion coefficients, $a_n^m$. By the orthogonality properties of the Zernike polynomials, one can deduce that the coefficients, $a_n^m$, are given by\n\n$$ a_n^m = k_n^m \\int_{0}^r dr' \\int_{0}^{2\\pi} d\\theta Z_n^m(r',\\theta) \\phi(r', \\theta).$$\n$$ k_n^m = \\frac{2n + 2}{\\pi}, m \\ne 0. $$\n$$ k_n^m = \\frac{n+1}{\\pi}, m = 0.$$\n\nSimilarly, the problem reduces to finding the integral of the flux times each Zernike polynomial.",
"_____no_output_____"
],
[
"To begin with, let us first create a simple model. The model will be a pin-cell fuel material with vacuum boundary condition in both radial direction and axial direction.",
"_____no_output_____"
]
],
[
[
"# Define fuel \nfuel = openmc.Material()\nfuel.add_element('U', 1.0, enrichment=5.0)\nfuel.add_nuclide('O16', 2.0)\nfuel.set_density('g/cm3', 10.0)",
"_____no_output_____"
],
[
"# Define surfaces used to construct regions\nzmin, zmax, radius = -1., 1., 0.5 \npin = openmc.ZCylinder(x0=0.0, y0=0.0, r=radius, boundary_type='vacuum')\nbottom = openmc.ZPlane(z0=zmin, boundary_type='vacuum')\ntop = openmc.ZPlane(z0=zmax, boundary_type='vacuum')\n\n# Create three cells and add them to geometry\nfuel = openmc.Cell(fill=fuel, region= -pin & +bottom & -top)\ngeom = openmc.Geometry([fuel])",
"_____no_output_____"
]
],
[
[
"For the starting source, we'll use a uniform distribution over the entire box geometry.",
"_____no_output_____"
]
],
[
[
"settings = openmc.Settings()\nspatial_dist = openmc.stats.Box(*geom.bounding_box)\nsettings.source = openmc.Source(space=spatial_dist)\nsettings.batches = 100\nsettings.inactive = 20\nsettings.particles = 100000",
"_____no_output_____"
]
],
[
[
"Defining the tally is relatively straightforward. One simply needs to list 'flux' as a score and then add an expansion filter. For this case, we will want to use the `SpatialLegendreFilter`, `ZernikeFilter`, `ZernikeRadialFilter` classes which multiplies tally scores by Legendre, azimuthal Zernike and radial-only Zernike polynomials evaluated on normalized spatial positions along radial and axial directions.",
"_____no_output_____"
]
],
[
[
"# Create a flux tally\nflux_tally_legendre = openmc.Tally()\nflux_tally_legendre.scores = ['flux']\n\n# Create a Legendre polynomial expansion filter and add to tally\norder = 10\ncell_filter = openmc.CellFilter(fuel)\nlegendre_filter = openmc.SpatialLegendreFilter(order, 'z', zmin, zmax)\nflux_tally_legendre.filters = [cell_filter, legendre_filter]\n\n# Create a Zernike azimuthal polynomial expansion filter and add to tally \nflux_tally_zernike = openmc.Tally()\nflux_tally_zernike.scores = ['flux']\nzernike_filter = openmc.ZernikeFilter(order=order, x=0.0, y=0.0, r=radius)\nflux_tally_zernike.filters = [cell_filter, zernike_filter]\n\n# Create a Zernike radial polynomial expansion filter and add to tally \nflux_tally_zernike1d = openmc.Tally()\nflux_tally_zernike1d.scores = ['flux']\nzernike1d_filter = openmc.ZernikeRadialFilter(order=order, x=0.0, y=0.0, r=radius)\nflux_tally_zernike1d.filters = [cell_filter, zernike1d_filter]\n",
"_____no_output_____"
]
],
[
[
"The last thing we need to do is create a `Tallies` collection and export the entire model, which we'll do using the `Model` convenience class.",
"_____no_output_____"
]
],
[
[
"tallies = openmc.Tallies([flux_tally_legendre, flux_tally_zernike, flux_tally_zernike1d])\nmodel = openmc.model.Model(geometry=geom, settings=settings, tallies=tallies)",
"_____no_output_____"
]
],
[
[
"Running a simulation is now as simple as calling the `run()` method of `Model`.",
"_____no_output_____"
]
],
[
[
"sp_file = model.run(output=False)",
"_____no_output_____"
]
],
[
[
"Now that the run is finished, we need to load the results from the statepoint file.",
"_____no_output_____"
]
],
[
[
"with openmc.StatePoint(sp_file) as sp:\n df1 = sp.tallies[flux_tally_legendre.id].get_pandas_dataframe()",
"_____no_output_____"
]
],
[
[
"We've used the `get_pandas_dataframe()` method that returns tally data as a Pandas dataframe. Let's see what the raw data looks like.",
"_____no_output_____"
]
],
[
[
"df1",
"_____no_output_____"
]
],
[
[
"Since the scaling factors for expansion coefficients will be provided by the Python API, thus, we do not need to multiply the moments by scaling factors.",
"_____no_output_____"
]
],
[
[
"a_n = df1['mean']",
"_____no_output_____"
]
],
[
[
"Loading the coefficients is realized via calling the OpenMC Python API as follows:",
"_____no_output_____"
]
],
[
[
"phi = openmc.legendre_from_expcoef(a_n, domain=(zmin, zmax))",
"_____no_output_____"
]
],
[
[
"Let's plot it and see how our flux looks!",
"_____no_output_____"
]
],
[
[
"z = np.linspace(zmin, zmax, 1000)\nplt.plot(z, phi(z))\nplt.xlabel('Z position [cm]')\nplt.ylabel('Flux [n/src]')",
"_____no_output_____"
]
],
[
[
"A rough cosine shape is obtained. \nOne can also numerically integrate the function using the trapezoidal rule.",
"_____no_output_____"
]
],
[
[
"np.trapz(phi(z), z)",
"_____no_output_____"
]
],
[
[
"The following cases show how to reconstruct the flux distribution Zernike polynomials tallied results.",
"_____no_output_____"
]
],
[
[
"with openmc.StatePoint(sp_file) as sp:\n df2 = sp.tallies[flux_tally_zernike.id].get_pandas_dataframe()",
"_____no_output_____"
],
[
"df2",
"_____no_output_____"
]
],
[
[
"Let's plot the flux in radial direction with specific azimuthal angle ($\\theta = 0.0$).",
"_____no_output_____"
]
],
[
[
"z_n = df2['mean'] \nzz = openmc.Zernike(z_n, radius)\nrr = np.linspace(0, radius, 100)\nplt.plot(rr, zz(rr, 0.0)) \nplt.xlabel('Radial position [cm]')\nplt.ylabel('Flux')",
"_____no_output_____"
]
],
[
[
"A polar figure with all azimuthal can be plotted like this:",
"_____no_output_____"
]
],
[
[
"z_n = df2['mean']\nzz = openmc.Zernike(z_n, radius=radius) \n#\n# Using linspace so that the endpoint of 360 is included...\nazimuths = np.radians(np.linspace(0, 360, 50))\nzeniths = np.linspace(0, radius, 100)\nr, theta = np.meshgrid(zeniths, azimuths)\nvalues = zz(zeniths, azimuths)\nfig, ax = plt.subplots(subplot_kw=dict(projection='polar'))\nax.contourf(theta, r, values, cmap='jet')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Sometimes, we just need the radial-only Zernike polynomial tallied flux distribution. \nLet us extract the tallied coefficients first.",
"_____no_output_____"
]
],
[
[
"with openmc.StatePoint(sp_file) as sp:\n df3 = sp.tallies[flux_tally_zernike1d.id].get_pandas_dataframe()",
"_____no_output_____"
],
[
"df3",
"_____no_output_____"
]
],
[
[
"A plot along with r-axis is also done.",
"_____no_output_____"
]
],
[
[
"z_n = df3['mean'] \nzz = openmc.ZernikeRadial(z_n, radius=radius)\nrr = np.linspace(0, radius, 50)\nplt.plot(rr, zz(rr)) \nplt.xlabel('Radial position [cm]')\nplt.ylabel('Flux')",
"_____no_output_____"
]
],
[
[
"Similarly, we can also re-construct the polar figure based on radial-only Zernike polinomial coefficients. ",
"_____no_output_____"
]
],
[
[
"z_n = df3['mean'] \nzz = openmc.ZernikeRadial(z_n, radius=radius)\nazimuths = np.radians(np.linspace(0, 360, 50))\nzeniths = np.linspace(0, radius, 100)\nr, theta = np.meshgrid(zeniths, azimuths)\nvalues = [[i for i in zz(zeniths)] for j in range(len(azimuths))]\nfig, ax = plt.subplots(subplot_kw=dict(projection='polar'), figsize=(6,6))\nax.contourf(theta, r, values, cmap='jet')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Based on Legendre polynomial coefficients and the azimuthal or radial-only Zernike coefficient, it's possible to reconstruct the flux both on radial and axial directions. ",
"_____no_output_____"
]
],
[
[
"# Reconstruct 3-D flux based on radial only Zernike and Legendre polynomials\nz_n = df3['mean'] \nzz = openmc.ZernikeRadial(z_n, radius=radius)\nazimuths = np.radians(np.linspace(0, 360, 100)) # azimuthal mesh \nzeniths = np.linspace(0, radius, 100) # radial mesh \nzmin, zmax = -1.0, 1.0 \nz = np.linspace(zmin, zmax, 100) # axial mesh \n# \n# flux = np.matmul(np.matrix(phi(z)).transpose(), np.matrix(zz(zeniths))) \n# flux = np.array(flux) # change np.matrix to np.array\n# np.matrix is not recommended for use anymore\nflux = np.array([phi(z)]).T @ np.array([zz(zeniths)])\n#\nplt.figure(figsize=(5,10))\nplt.title('Flux distribution')\nplt.xlabel('Radial Position [cm]')\nplt.ylabel('Axial Height [cm]')\nplt.pcolor(zeniths, z, flux, cmap='jet')\nplt.colorbar()",
"_____no_output_____"
]
],
[
[
"One can also reconstruct the 3D flux distribution based on Legendre and Zernike polynomial tallied coefficients.",
"_____no_output_____"
]
],
[
[
"# Define needed function first \ndef cart2pol(x, y):\n rho = np.sqrt(x**2 + y**2)\n phi = np.arctan2(y, x)\n return(rho, phi)\n\n# Reconstruct 3-D flux based on azimuthal Zernike and Legendre polynomials\nz_n = df2['mean']\nzz = openmc.Zernike(z_n, radius=radius) \n#\nxstep = 2.0*radius/20\nhstep = (zmax - zmin)/20\nx = np.linspace(-radius, radius, 50)\nx = np.array(x)\n[X,Y] = np.meshgrid(x,x)\nh = np.linspace(zmin, zmax, 50)\nh = np.array(h)\n[r, theta] = cart2pol(X,Y)\nflux3d = np.zeros((len(x), len(x), len(h)))\nflux3d.fill(np.nan)\n#\nfor i in range(len(x)):\n for j in range(len(x)):\n if r[i][j]<=radius:\n for k in range(len(h)):\n flux3d[i][j][k] = phi(h[k]) * zz(r[i][j], theta[i][j])",
"_____no_output_____"
]
],
[
[
"Let us print out with VTK format.",
"_____no_output_____"
]
],
[
[
"# You'll need to install pyevtk as a prerequisite\nfrom pyevtk.hl import gridToVTK\nimport numpy as np\n#\n# Dimensions\nnx, ny, nz = len(x), len(x), len(h)\nlx, ly, lz = 2.0*radius, 2.0*radius, (zmax-zmin)\ndx, dy, dz = lx/nx, ly/ny, lz/nz\n#\nncells = nx * ny * nz\nnpoints = (nx + 1) * (ny + 1) * (nz + 1)\n#\n# Coordinates\nx = np.arange(0, lx + 0.1*dx, dx, dtype='float64')\ny = np.arange(0, ly + 0.1*dy, dy, dtype='float64')\nz = np.arange(0, lz + 0.1*dz, dz, dtype='float64')\n# Print out \npath = gridToVTK(\"./rectilinear\", x, y, z, cellData = {\"flux3d\" : flux3d})",
"_____no_output_____"
]
],
[
[
"Use VisIt or ParaView to plot it as you want. Then, the plot can be loaded and shown as follows.",
"_____no_output_____"
]
],
[
[
"f1 = plt.imread('./images/flux3d.png')\nplt.imshow(f1, cmap='jet')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e75737c50ad7e071b42953d2d3d65a08e4c079b9 | 27,101 | ipynb | Jupyter Notebook | training_notebooks/training_Doors.ipynb | bnMikheili/Car-Feature-Detection | 274bf1f83a23d4ab21a7c0d643c6848598d5f4f0 | [
"MIT"
] | null | null | null | training_notebooks/training_Doors.ipynb | bnMikheili/Car-Feature-Detection | 274bf1f83a23d4ab21a7c0d643c6848598d5f4f0 | [
"MIT"
] | null | null | null | training_notebooks/training_Doors.ipynb | bnMikheili/Car-Feature-Detection | 274bf1f83a23d4ab21a7c0d643c6848598d5f4f0 | [
"MIT"
] | null | null | null | 29.425624 | 202 | 0.38017 | [
[
[
"import os\nimport matplotlib.pyplot as plt\nimport copy\nimport time\n\n\nimport torch\nfrom torch import nn\nfrom torch.utils.data import Dataset, DataLoader\nimport torch.optim as optim\nfrom torch.optim import lr_scheduler\nfrom torchvision import transforms, utils\nfrom torchvision import datasets, models, transforms\ntorch.__version__\n\nimport pandas as pd\nimport numpy as np\nfrom skimage import io, transform\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\ndevice",
"_____no_output_____"
],
[
"!unzip '/content/drive/My Drive/Vision_training/door_train_26K.zip'\ndata = pd.read_csv('/content/drive/My Drive/Vision_training/door_train_26K.csv')\ndata = data[data.Doors.notna()]\ndata[:22000].to_csv('./train_data.csv', index=False)\ndata[22000:24000].to_csv('./val_data.csv', index=False)\ndata[24000:].to_csv('./test_data.csv', index=False)",
"_____no_output_____"
],
[
"Doors = np.array(data.Doors.unique())\n\nclass MyautoDataset_doors(Dataset):\n def __init__(self, csv_file, transform=None):\n \"\"\"\n Args:\n csv_file (string): Path to the csv file with annotations.\n root_dir (string): Directory with all the images.\n transform (callable, optional): Optional transform to be applied\n on a sample.\n \"\"\"\n self.df = pd.read_csv(csv_file)\n self.transform = transform\n self.doors = Doors\n\n def __len__(self):\n return len(self.df)\n\n def __getitem__(self, idx):\n if torch.is_tensor(idx):\n idx = idx.tolist()\n path = '/content/door_train_26K/{}_{}.jpg'.format(self.df.iloc[idx].ID, self.df.iloc[idx].img_index)\n image = transform.resize(io.imread(path), (224, 224))/255\n # y = [0]*len(self.colors)\n # y[np.where(self.colors == self.df.iloc[idx].Color)[0][0]] = 1\n y = np.where(self.doors == self.df.iloc[idx].Doors)[0]\n sample = (torch.Tensor(np.einsum('ijk->kij',image)), torch.Tensor(y).long())\n\n # if self.transform:\n # sample = self.transform(sample)\n\n return sample",
"_____no_output_____"
],
[
"Doors",
"_____no_output_____"
],
[
"train = MyautoDataset_doors('./train_data.csv', '/content/training_data')\nval = MyautoDataset_doors('./val_data.csv', '/content/training_data')\ntest = MyautoDataset_doors('./test_data.csv', '/content/training_data')",
"_____no_output_____"
],
[
"train_loader = DataLoader(train, batch_size=4,shuffle=True, num_workers=4)\nval_loader = DataLoader(val, batch_size=4,shuffle=True, num_workers=4)\ntest_loader = DataLoader(test, batch_size=4,shuffle=True, num_workers=4)",
"_____no_output_____"
],
[
"dataloaders = {'train': train_loader, 'val': val_loader, 'test': test_loader}\ndataset_sizes = {'train': len(train_loader), 'val': len(val_loader), 'test': len(test_loader)}",
"_____no_output_____"
],
[
"class model_inc(nn.Module):\n def __init__(self):\n super(model_inc, self).__init__()\n self.layers = nn.ModuleList()\n self.layers.append(models.resnet18(pretrained=True))\n self.layers.append(nn.Linear(1000, 256)) \n self.layers.append(nn.Dropout(0.1))\n self.layers.append(nn.Linear(256, 32))\n self.layers.append(nn.Sigmoid())\n self.layers.append(nn.Dropout(0.1))\n self.layers.append(nn.Linear(32, len(data.Doors.unique())))\n self.layers.append(nn.Softmax())\n \n def forward(self, x):\n for layer in self.layers:\n x = layer(x)\n return x",
"_____no_output_____"
],
[
"def train_model(model, criterion, optimizer, scheduler, num_epochs=25):\n since = time.time()\n\n best_model_wts = copy.deepcopy(model.state_dict())\n best_acc = 0.0\n\n for epoch in range(num_epochs):\n print('Epoch {}/{}'.format(epoch, num_epochs - 1))\n print('-' * 10)\n\n # Each epoch has a training and validation phase\n for phase in ['train', 'val']:\n if phase == 'train':\n model.train() # Set model to training mode\n else:\n model.eval() # Set model to evaluate mode\n\n running_loss = 0.0\n running_corrects = 0\n counter = 0\n # Iterate over data.\n for inputs, labels in dataloaders[phase]:\n counter += 1\n if counter % 100 == 0:\n print(counter)\n if counter % 1000 == 0:\n print(running_corrects.double()/counter/4)\n inputs = inputs.to(device)\n labels = labels.to(device)\n labels = labels.reshape((labels.shape[0]))\n # zero the parameter gradients\n optimizer.zero_grad()\n\n # forward\n # track history if only in train\n with torch.set_grad_enabled(phase == 'train'):\n outputs = model(inputs)\n # torch.max(outputs, dim=1)\n preds = torch.argmax(outputs, 1)\n # _, preds = torch.max(outputs, 1)\n loss = criterion(outputs, labels.reshape((-1,)))\n\n # backward + optimize only if in training phase\n if phase == 'train':\n loss.backward()\n optimizer.step()\n\n # statistics\n running_loss += loss.item() * inputs.size(0)\n running_corrects += torch.sum(preds == labels.data)\n if phase == 'train':\n scheduler.step()\n\n epoch_loss = running_loss / dataset_sizes[phase]\n epoch_acc = running_corrects.double() / dataset_sizes[phase]\n\n print('{} Loss: {:.4f} Acc: {:.4f}'.format(\n phase, epoch_loss/4, epoch_acc/4))\n\n # deep copy the model\n if phase == 'val' and epoch_acc > best_acc:\n best_acc = epoch_acc\n best_model_wts = copy.deepcopy(model.state_dict())\n\n print()\n\n time_elapsed = time.time() - since\n print('Training complete in {:.0f}m {:.0f}s'.format(\n time_elapsed // 60, time_elapsed % 60))\n print('Best val Acc: {:4f}'.format(best_acc/4))\n\n # 
load best model weights\n model.load_state_dict(best_model_wts)\n return model",
"_____no_output_____"
],
[
"criterion = nn.CrossEntropyLoss()\nmodel = model_inc()\nmodel.to(device)\n# Observe that all parameters are being optimized\noptimizer = optim.SGD(model.parameters(), lr=0.002, momentum=0.9)\n\n# Decay LR by a factor of 0.1 every 7 epochs\nexp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.8)",
"_____no_output_____"
],
[
"model = train_model(model, criterion, optimizer, exp_lr_scheduler, num_epochs=5)",
"Epoch 0/4\n----------\n"
],
[
"model = train_model(model, criterion, optimizer, exp_lr_scheduler, num_epochs=2)",
"Epoch 0/1\n----------\n"
],
[
"torch.save(model.state_dict(), '/content/drive/My Drive/Myauto_vision/model_doors_88.pt')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7573846daa3011b6ac1fcdbdc6977ac7123ef8a | 170,068 | ipynb | Jupyter Notebook | solutions/Practical_8.ipynb | loftytopping/DEES_programming_course | ea429e8e1201b1da6c03d0a2a847563526f6903b | [
"CC0-1.0"
] | 6 | 2020-01-18T20:28:40.000Z | 2022-02-24T12:01:46.000Z | solutions/Practical_8.ipynb | loftytopping/DEES_programming_course | ea429e8e1201b1da6c03d0a2a847563526f6903b | [
"CC0-1.0"
] | null | null | null | solutions/Practical_8.ipynb | loftytopping/DEES_programming_course | ea429e8e1201b1da6c03d0a2a847563526f6903b | [
"CC0-1.0"
] | 7 | 2020-01-31T14:34:09.000Z | 2022-02-17T21:35:27.000Z | 174.25 | 21,744 | 0.871657 | [
[
[
"# Practical 8: Pandas to Cluster Analysis\n\n<div class=\"alert alert-block alert-success\">\n<b>Objectives:</b> In this practical we keep moving with applied demonstrations of modules you can use in Python. Today we continue to practice using Pandas, but also start applying some common machine learning techniques. Specifically, we will use Cluster Analysis [also known as unsupervised machine learning] to study distinct groupings on two very different datasets.\n \nFor the first challenge, we are going to be working with a dataset from the UC Irvine Machine Learning repository on forest fires. This dataset, saved as a <code> .csv </code> file, is taken from the study:\n[Cortez and Morais, 2007] P. Cortez and A. Morais. A Data Mining Approach to Predict Forest Fires using Meteorological Data. In J. Neves, M. F. Santos and J. Machado Eds., New Trends in Artificial Intelligence, Proceedings of the 13th EPIA 2007 - Portuguese Conference on Artificial Intelligence, December, Guimarães, Portugal, pp. 512-523, 2007. APPIA, ISBN-13 978-989-95618-0-9.*\n\nFor the second dataset will be looking at categorical data from listings in New York through Air BnB data extracted from the [Kaggle platform](https://www.kaggle.com/dgomonov/new-york-city-airbnb-open-data).\n\nThe notebook is split according to the following activities.\n \n - 1) [Introduction to Cluster Analysis](#Part1)\n * [Exercise 1: Plot a histogram of meteorological variables and fire extent](#sExercise1)\n * [Exercise 2: Produce a correlation coefficient matrix](#Exercise2)\n * [Exercise 3: Create new dataframe with only positive values of fire area and repeat cluster analysis](#Exercise3)\n - 3) [Working with 'other' data ](#Part2)\n * [Exercise 4: Clustering AirBnB data from New York](#Exercise4)\n * [Exercise 5: Visualise cluster data by room type](#Exercise5)\n \nAs with our other notebooks, we will provide you with a template for plotting the results. 
Also please note that you should not feel pressured to complete every exercise in class. These practicals are designed for you to take outside of class and continue working on them. Proposed solutions to all exercises can be found in the 'Solutions' folder.\n</div>",
"_____no_output_____"
],
[
"### Introduction to Cluster Analysis <a name=\"Part1\"></a>\n\nMachine learning is all the rage these days. One branch of machine learning are a family of algorithms known as unsupervised methods. These attempt to extract patterns from a dataset according to a number of assumptions. Cluster Analysis is a subset of such methods, and used across the sciences. An excellent overview of some of the challenges is given in the documentation of a method known as [HDBSCAN](https://hdbscan.readthedocs.io/en/latest/comparing_clustering_algorithms.html)\n\n>> There are a lot of clustering algorithms to choose from. The standard sklearn clustering suite has thirteen different clustering classes alone. So what clustering algorithms should you be using? As with every question in data science and machine learning it depends on your data. A number of those thirteen classes in sklearn are specialised for certain tasks (such as co-clustering and bi-clustering, or clustering features instead data points). Obviously an algorithm specializing in text clustering is going to be the right choice for clustering text data, and other algorithms specialize in other specific kinds of data. Thus, if you know enough about your data, you can narrow down on the clustering algorithm that best suits that kind of data, or the sorts of important properties your data has, or the sorts of clustering you need done. \n\nWe are going to use the K-means method for clustering. K-means is perhaps one of the most simplest methods for clustering and, whilst fast and also a distance based method, has limitations when dealing with complex datasets. 
If you are interested you can find some excellent tutorials and examples on the official [Scikit-learn webpage](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html) !\n\n<img src=\"images/sphx_glr_plot_kmeans_assumptions_001.png\" alt=\"Numpy array indexing\" style=\"width: 600px;\"/>\n\n\nBefore we jump into using K-means, we need to try and understand our, data as per the above discussion. \n",
"_____no_output_____"
]
],
[
[
"import pandas as pd #Im using pd here as its easier to keep writing! You can use whatever you want, but it might help you to use 'pd' for now.\nimport matplotlib.pyplot as plt\nfrom sklearn.cluster import KMeans\nimport seaborn as sns\n# Read data from file \n# We are going to use the function 'read_csv' within the Pandas package:\n\nif 'google.colab' in str(get_ipython()):\n data = pd.read_csv('https://raw.githubusercontent.com/loftytopping/DEES_programming_course/master/data/forestfires.csv')\n data.head()\nelse:\n data = pd.read_csv(\"data/forestfires.csv\") \n data.head()\n\n# Notice how we call that function using the '.' operator?\n# (Note the data file needs to be in the same directory that your jupyter notebook is based) You can control delimiters, rows, column names with read_csv (see later) \n\n# How do we preview the data file.\n# Preview the first 5 lines of the loaded data \ndata.head()\n#data.columns.values",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-block alert-success\">\n<b> Exercise 1: Plot a histogram of meteorological variables and fire extent. <a name=\"Exercise1\"></a> </b> The purpose of this exercise is to understand our dataset a little before we start to apply any cluster analysis. We will discuss the reason for this as we apply cluster analysis. For the meteorological variables you need to produce a histogram for:\n \n - 'temp': Temperature\n - 'RH' : Relative Humidity\n - 'wind': Wind speed\n - 'Rain': Rainfall \n \nRather than produce one 'big' plot for each I have provided you with the code for creating a tile of subplots. This looks like the following:\n\n```python\n# Import Matplotlib for plotting\nimport matplotlib.pyplot as plt\n\n# This command assigns variables for the entire figure and axes that are distributed over the figure space according to the number of rows and columns in the parentheses. I also specify a figure size and tell Matplotlib I dont want each plot to share the same y-axes scale [sharey=False]\nfig, axs = plt.subplots(2, 3, figsize=(12, 8), sharey=False)\n# Create a histogram for the varible 'temp'. The command ax=axs[0,0] tells Matplotlib to focus on the axes 'ax' using the index assigned earlier.\ndata.hist(column='temp',ax=axs[0,0])\n```\n\nWhen you you have finished the code for the other variables, you figure should look like the following figure:\n\n \n\n<div class=\"alert alert-block alert-warning\">\n<b>Please note:</b> For each histogram you will need to change the values within the command:\n\n```python\nax=axs[0,0]\n```\nwhere the first value indicates the row, and the second the column value.\n\n</div>\n\n</div>\n",
"_____no_output_____"
]
],
[
[
"# Make a boxplot for each column. We could group them into one figure but this is beyond the scope of this practical. \n# In the template below I have given you a template to include a boxplot in each subplot\nimport matplotlib.pyplot as plt\n\nfig, axs = plt.subplots(2, 3, figsize=(12, 8), sharey=False)\n# Temperature\ndata.hist(column='temp',ax=axs[0,0])\n#------'INSERT CODE HERE'------\n# RH\ndata.hist(column='RH',ax=axs[0,1])\n# Wind\ndata.hist(column='wind',ax=axs[0,2])\n# Rain\ndata.hist(column='rain',ax=axs[1,0])\n# Fire area\ndata.hist(column='area',ax=axs[1,1])\n#------------------------------",
"_____no_output_____"
]
],
[
[
"For the first three variables, we can easily infer a distribution of values. For the final two variables, however, the distribution is much harder to interpret due to a very high number of small values. Given that we are interested in forest fires, we need to consider whether this might influence our clustering. Why is that? If we are using the values of each variable to calculate a 'distance' between each observation, a variable that has a very large range relative to others might dominate the clustering. \n\n<div class=\"alert alert-block alert-success\">\n<b> Exercise 2: Produce a correlation coefficient matrix <a name=\"Exercise2\"></a> </b> Now you are tasked with producing a heatmap of correlation coefficients between the meteorological variables and fire extent. In the code snippet below I have imported the seaborn library used to produce the heatmap. For the rest of the code, you might want to revisit the example in Practical 7.\n\nWhen you you have finished the code for the other variables, you figure should look like the following:\n\n \n\n</div>\n\n",
"_____no_output_____"
]
],
[
[
"import seaborn as sns\n# calculate the correlation matrix\n#------'INSERT CODE HERE'------\ncorr = data[['temp','RH','wind','rain','area']].corr()\n# Now use an internal function within Seaborn called '.heatmap'\nsns.heatmap(corr, xticklabels=corr.columns, yticklabels=corr.columns)\n#------------------------------\n# And we now need to show the plot.\nplt.show()",
"_____no_output_____"
]
],
[
[
"### K-means cluster analysis\n\nK-means cluster analysis is perhaps the simplest of all, but allows us to practice turning a dataset into one that contains a different number of clusters, members of which should have 'similar' properties. How we define the similarity between members can vary widely. Take the following [figure](https://jakevdp.github.io/PythonDataScienceHandbook/05.11-k-means.html):\n\n \n\nFor this hypothetical 2D dataset, we can perhaps confidently calculate the correct number of clusters as 4. But, what if we simply dont know how many clusters we need or can't easily visualise all of the dimensions in our dataset? For our dataset, we can at least specify a number of clusrers and then visualise the properties of said clusters. \n\nIn the following code snippet we perform a number of steps to label each observation [row in our dataset] as belonging to a particular cluster. The label is an integer value, and the distinction between clusters will be performed on the values of temperature, humidity and fire area. These steps are as follows:\n\n<div class=\"alert alert-block alert-info\">\n \n - Extract our variables of interest from the dataframe into a new Numpy matrix\n \n - Specify how many clusters we want the Kmeans algorithm to find\n \n - Fit the clustering algorithm to our Numpy matrix\n \n - Extract the labels to which each row in our matrix has been assigned.\n</div>\n\n\n\n",
"_____no_output_____"
]
],
[
[
"from sklearn.cluster import KMeans\n# Extract our variables of interest from the dataframe into a new Numpy matrix\nnumpy_matrix = data[['temp','RH','area']].values\n# Specify how many clusters we want the Kmeans algorithm to find\nclusterer=KMeans(n_clusters=4)\n# Fit the clustering algorithm to our Numpy matrix\nclusterer.fit(numpy_matrix)\n# Extract the labels to which each row in our matrix has been assigned.\nlabels = clusterer.labels_\n\n# In the dataframe 'data' we can store the labels from using K-means:\ndata['K-means label'] = labels\n# For example the following simply prints the new dataframe column to the screen\ndata['K-means label']",
"_____no_output_____"
]
],
[
[
"Now let us look at the properties of these clusters by generating box-plots of values from our dataframe. We have already met multiple functions that can be applied to our dataframe. In Practical 7 we briefly produced box-plots of our dataframe using the:\n\n```python\n<<name of dataframe>>.boxplot(column=[<<names of columns>>])\n```\n\ncommand. We can also select a subset of values in the columns by asking the <code> boxplot </code> function to distinguish by a value found in a specific column. In the example below we expand the boxplot function to select values by K-means label through, for example:\n\n```python\ndata.boxplot(column=['temp'], by=['K-means label'], ax=ax[0])\n```\nIn this case we are asking Python to produce a boxplot of values in the column <code> temp </code> but also produce a number of seperate boxplots according the number of unique values given by the column <code> K-means label </code>.\n\nPlease check the following code snippet and then the box-plots. Do the collected properties of the different clusters allow you to distinguish between them? Do they 'look' different?",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(1, 3, figsize=(10, 5))\ndata.boxplot(column=['temp'], by=['K-means label'], ax=ax[0])\ndata.boxplot(column=['RH'], by=['K-means label'], ax=ax[1])\ndata.boxplot(column=['area'], by=['K-means label'], ax=ax[2]).set_yscale('log')\n",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-block alert-success\">\n<b> Exercise 3: Create new dataframe with only positive values of fire area and repeat the above cluster analysis <a name=\"Exercise3\"></a> </b> \n \nIn this exercise you can copy the above code example, but you need to ensure operations are performed on a new set of datapoints from a new dataframe. Can you remember how we select a new dataframe according to some criteria on the values we want to work with? For example, if we wanted to create a new dataframe based on all values of <code> area </code> greater than 10, we might write:\n\n```python\nnew_dataframe = data[data[\"area\"] > 10.0]\n```\n\nIn this exercise, you are asked to specify that the fire area should be positive. Once you have completed the code, you should arrive at the following figure:\n\n \n\nPlease note the ordering may be different, but this is normal. \n \n</div>\n",
"_____no_output_____"
]
],
[
[
"#-------'INSERT CODE HERE'-------\ndata_new = data[data[\"area\"] > 0]\nnumpy_matrix_new = data_new[['temp','RH','area']].values\nclusterer=KMeans(n_clusters=4)\nclusterer.fit(numpy_matrix_new)\nlabels = clusterer.labels_\ndata_new['K-means label'] = labels\n#--------------------------------\n\n\ndata_new['K-means label']\n#data['Operator'].value_counts().plot(kind='bar')\nfig, ax = plt.subplots(1, 3, figsize=(10, 5))\ndata_new.boxplot(column=['temp'], by=['K-means label'], ax=ax[0])\ndata_new.boxplot(column=['RH'], by=['K-means label'], ax=ax[1])\ndata_new.boxplot(column=['area'], by=['K-means label'], ax=ax[2]).set_yscale('log')\n",
"C:\\Users\\Dave\\anaconda3\\lib\\site-packages\\ipykernel_launcher.py:7: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n import sys\n"
]
],
[
[
"## Working with 'other' data <a name=\"Part2\"></a>\n\nIn the following code box we load some freely available data on Air BnB listings from New York in 2019. By previewing our column names, you can see we have a collection of both numeric and non-numerical data. Why might cluster analysis be of use here? Let's see if we can assign each available entry into distinct groups, again using K-means cluster analysis.\n",
"_____no_output_____"
]
],
[
[
"# Load the Air BnB data\n\nif 'google.colab' in str(get_ipython()):\n data_NYC = pd.read_csv('https://raw.githubusercontent.com/loftytopping/DEES_programming_course/master/data/AB_NYC_2019.csv')\n data_NYC.head()\nelse:\n data_NYC = pd.read_csv(\"data/AB_NYC_2019.csv\") \n data_NYC.head()\n\n# Preview the first 5 lines of the loaded data \ndata_NYC.head()\n#data_NYC.columns.values",
"_____no_output_____"
]
],
[
[
"For example, we can see there is a variable that reflects the neighborhood group of the listings. Let us say we wish to see how many unique entries there are. Rather than repeating the calculation we have done a number of times, we can produce a bar plot that automatically places each unique entry on the <code> x </code> axis, and the number of times this occurs on the <code> y </code> axis.\n\nThe command for this is:\n\n```python\ndataframe['column name'].values_counts().plot(kind='bar')\n```\n\nIn English, this command is asking Python to focus on data from the column named <code> column name </code>, extract the unique entries and calculate their frequency, then show me this information as a bar plot.",
"_____no_output_____"
]
],
[
[
"data_NYC['neighbourhood_group'].value_counts().plot(kind='bar')",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-block alert-success\">\n<b> Exercise 4: Clustering AirBnB data from New York by lattitude, longitude and price <a name=\"Exercise4\"></a> </b> \n \nWe want to cluster this dataset in order to determine the properties of similar listings. For this exercise, as this is a different dataset, let us repeat the procedure of using the K-means algorithm to produce 4 clusters by focusing on the variables <code> latitude </code>, <code> longitude </code> and <code> price </code>. Produce a boxplot of the results, grouped by cluster label, for the price only.\n\nWhen you you have finished the code for the other variables, you figure should look like the following:\n\n \n\n</div>\n\n",
"_____no_output_____"
]
],
[
[
"#-------'INSERT CODE HERE'-------\nnumpy_matrix_NYC = data_NYC[['latitude', 'longitude', 'price']].values\nmodel_NYC = KMeans(n_clusters=4)\nmodel_NYC.fit(numpy_matrix_NYC)\nlabels = model_NYC.labels_\n#------------------------------\ndata_NYC['K-means label'] = labels\ndata_NYC.boxplot(column=['price'], by=['K-means label'])",
"_____no_output_____"
]
],
[
[
"Ideally we would also like to get a feel for the ratio of each neighborhood in each cluster. We can certainly do that by producing 4 seperate barcharts as per the code box below. In this code snippet I'm expanding on the previous example of a barchart by selecting a subset according to the value of the K-means cluster label:",
"_____no_output_____"
]
],
[
[
"fig, axes = plt.subplots(1, 4, figsize=(10, 5))\n\n# Produce 4 seperate plots\ndata_NYC[data_NYC['K-means label']==0]['neighbourhood_group'].value_counts().plot(kind='bar', title='Cluster 0', ax=axes[0])\ndata_NYC[data_NYC['K-means label']==1]['neighbourhood_group'].value_counts().plot(kind='bar', title='Cluster 1', ax=axes[1])\ndata_NYC[data_NYC['K-means label']==2]['neighbourhood_group'].value_counts().plot(kind='bar', title='Cluster 2', ax=axes[2])\ndata_NYC[data_NYC['K-means label']==3]['neighbourhood_group'].value_counts().plot(kind='bar', title='Cluster 3', ax=axes[3])",
"_____no_output_____"
]
],
[
[
"What does this graph tell us? The median price of Cluster '1' is high, and these results confirm those listings are dominated by properties in Manhattan. However, there appears to be a very similar profile in Cluster '2' which has a much lower median price range. \n\n<div class=\"alert alert-block alert-success\">\n<b> Exercise 5: Visualise cluster data by room type <a name=\"Exercise5\"></a> </b> \n \nIn the following exercise, reproduce the above plot for the variable <code> room_type </code>. Your results should look like the following figure:\n\n \n\n</div>\n",
"_____no_output_____"
]
],
[
[
"fig, axes = plt.subplots(1, 4, figsize=(10, 5))\n\n#-------'INSERT CODE HERE'-------\n# Produce 4 seperate plots\ndata_NYC[data_NYC['K-means label']==0]['room_type'].value_counts().plot(kind='bar', title='Cluster 0', ax=axes[0])\ndata_NYC[data_NYC['K-means label']==1]['room_type'].value_counts().plot(kind='bar', title='Cluster 1', ax=axes[1])\ndata_NYC[data_NYC['K-means label']==2]['room_type'].value_counts().plot(kind='bar', title='Cluster 2', ax=axes[2])\ndata_NYC[data_NYC['K-means label']==3]['room_type'].value_counts().plot(kind='bar', title='Cluster 3', ax=axes[3])\n#------------------------------",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7574603e4fd927915e3bc62fcbbb632da8e06ff | 162,537 | ipynb | Jupyter Notebook | test.ipynb | kmottu/nri_risk_index | 14a3af92bafeadeac4d049b0fd36d6faca96c3e5 | [
"Apache-2.0"
] | null | null | null | test.ipynb | kmottu/nri_risk_index | 14a3af92bafeadeac4d049b0fd36d6faca96c3e5 | [
"Apache-2.0"
] | null | null | null | test.ipynb | kmottu/nri_risk_index | 14a3af92bafeadeac4d049b0fd36d6faca96c3e5 | [
"Apache-2.0"
] | null | null | null | 67.080891 | 23,590 | 0.659093 | [
[
[
"import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport datetime\npd.options.display.max_columns = 99",
"_____no_output_____"
],
[
"df = pd.concat([\n pd.read_csv(\n \"/Users/kranthimottu/files/Fiverr/fajayi/data_cleaning/data_cleaned_2018_2019.csv\"),\n pd.read_csv(\n \"/Users/kranthimottu/files/Fiverr/fajayi/data_cleaning/data_cleaned_2020.csv\")\n])",
"_____no_output_____"
],
[
"sns.lineplot(\n data=data,\n x='dateKey',\n y='death_casualty_counts'\n)",
"_____no_output_____"
],
[
"data = df.groupby('dateKey').agg({\n 'death_casualty_counts': 'sum'\n}).reset_index()\n# plt.figure(figsize=(30, 5))\nfig = sns.lineplot(\n data=data,\n x='dateKey',\n y='death_casualty_counts'\n)\nfig.set_xlabel(\"Date\")\nfig.set_xticklabels(fig.get_xticklabels(), rotation=70)",
"/var/folders/yn/8118clf12gg0th0grshlx58w0000gn/T/ipykernel_60863/4085994115.py:11: UserWarning: FixedFormatter should only be used together with FixedLocator\n fig.set_xticklabels(fig.get_xticklabels(), rotation=70)\n"
],
[
"\"2019-11-09\" < \"2019-09-10\"",
"_____no_output_____"
],
[
"datetime.date(2019, 9, 9)",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"temp = df.groupby('zone').agg({\n 'death_casualty_counts': 'sum',\n 'injury_counts': 'sum',\n 'dateKey': 'count' \n}).rename(columns={\n 'death_casualty_counts': 'Casualties',\n 'injury_counts': 'Injuries',\n 'dateKey': 'Incidents' \n}).reset_index()\ntemp",
"_____no_output_____"
],
[
"pd.melt(temp, id_vars=['zone'], value_vars=['Casualties', 'Injuries', 'Incidents'])",
"_____no_output_____"
],
[
"fig = sns.barplot(\n data=pd.melt(temp, id_vars=['zone'], value_vars=['Casualties', 'Injuries', 'Incidents']),\n x='zone',\n y='value',\n hue='variable'\n)\nfig.set_xlabel('Region')\nfig.set_xticklabels(fig.get_xticklabels(), rotation=70)",
"_____no_output_____"
],
[
"sns.barplot(data=df.groupby('zone').agg({\n 'death_casualty_counts': 'sum',\n 'injury_counts': 'sum',\n 'dateKey': 'count' \n}).rename(columns={\n 'death_casualty_counts': 'Casualties',\n 'injury_counts': 'Injuries',\n 'dateKey': 'Incidents' \n}).reset_index().T,\n)",
"_____no_output_____"
],
[
"sns.barplot(\n data=df,\n x=\"zone\",\n y=\"death_casualty_counts\",\n # hue=\"zone\"\n)",
"_____no_output_____"
],
[
"pd.DataFrame(\n np.random.randn(1000, 2) / [50, 50] + [37.76, -122.4],\n columns=['lat', 'lon']\n ).dtypes",
"_____no_output_____"
],
[
"fil_df = df[['longitude', 'latitude']].dropna()\nfil_df = fil_df[fil_df.latitude.apply(lambda x: ':' not in x)]\nfil_df = fil_df[fil_df.longitude.apply(lambda x: ':' not in x)]\nfil_df['latitude'] = fil_df.latitude.apply(lambda x: '.'.join(x.replace(',', '.').split('.')[:2])).astype(float)\nfil_df['longitude'] = fil_df.longitude.apply(lambda x: '.'.join(x.replace(',', '.').split('.')[:2]).replace('q', '').replace('\\\\', '')).astype(float)",
"_____no_output_____"
],
[
"fil_df.latitude.min()",
"_____no_output_____"
],
[
"fil_df.longitude.max()",
"_____no_output_____"
],
[
"df.provstate.unique()",
"_____no_output_____"
],
[
"data = df.groupby('provstate')['death_casualty_counts'].sum().reset_index().rename(\n columns={\n \"provstate\": \"state\", \n \"death_casualty_counts\": \"casualties\"\n }\n)\ndata['state'] = data['state'].apply(str.title).apply(lambda x: x.replace('-', ' '))\ndata",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7574b4977d5f268ed8b93358af1df65cd45f3ae | 8,457 | ipynb | Jupyter Notebook | tensorflow/test121_RNN&LSTM.ipynb | kingzone/kaggle | 661b30fd1ce7481e20ad855e68a662019e7ce736 | [
"Apache-2.0"
] | 1 | 2017-04-04T15:49:58.000Z | 2017-04-04T15:49:58.000Z | tensorflow/test121_RNN&LSTM.ipynb | kingzone/kaggle | 661b30fd1ce7481e20ad855e68a662019e7ce736 | [
"Apache-2.0"
] | null | null | null | tensorflow/test121_RNN&LSTM.ipynb | kingzone/kaggle | 661b30fd1ce7481e20ad855e68a662019e7ce736 | [
"Apache-2.0"
] | null | null | null | 41.866337 | 243 | 0.607662 | [
[
[
"import tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nimport os\nimport numpy as np",
"_____no_output_____"
],
[
"tf.random.set_seed(22)\nnp.random.seed(22)\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'\nassert tf.__version__.startswith('2.')",
"_____no_output_____"
],
[
"total_words = 10000 # 常见1w单词,其他用mask\nmax_review_len = 80\nbatchsz = 128\n(x_train, y_train), (x_test,y_test) = keras.datasets.imdb.load_data(num_words=total_words)\nx_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_review_len)\nx_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_review_len)\n\ndb_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))\ndb_train = db_train.shuffle(1000).batch(batch_size=batchsz, drop_remainder=True)\ndb_test = tf.data.Dataset.from_tensor_slices((x_test, y_test))\ndb_test = db_test.batch(batch_size=batchsz, drop_remainder=True) # 最后一个batch去掉\nprint('x_train shape:', x_train.shape, tf.reduce_max(y_train), tf.reduce_min(y_train))\nprint('x_test shape:', x_test.shape)",
"x_train shape: (25000, 80) tf.Tensor(1, shape=(), dtype=int64) tf.Tensor(0, shape=(), dtype=int64)\nx_test shape: (25000, 80)\n"
],
[
"embedding_len =100\n\nclass MyRNN(keras.Model):\n def __init__(self, units):\n super(MyRNN, self).__init__()\n\n # [b, 64]\n self.state0 = [tf.zeros([batchsz, units])]\n self.state1 = [tf.zeros([batchsz, units])]\n\n self.embedding = layers.Embedding(total_words, embedding_len, input_length=max_review_len)\n\n self.rnn_cell0 = layers.SimpleRNNCell(units, dropout=0.2)\n self.rnn_cell1 = layers.SimpleRNNCell(units, dropout=0.2)\n\n self.outlayer = layers.Dense(1)\n\n def call(self, inputs, training=None):\n x = inputs\n x = self.embedding(x)\n\n # [b, 80, 100] => [b, 64]\n state0 = self.state0\n state1 = self.state1\n for word in tf.unstack(x, axis=1):\n # x*Wxh + h*Whh\n out0, state0 = self.rnn_cell0(word, state0, training)\n out1, state1 = self.rnn_cell1(out0, state1, training)\n\n # out: [b, 64]\n x = self.outlayer(out1)\n prob = tf.sigmoid(x)\n return prob",
"_____no_output_____"
],
[
"units = 64\nepochs = 4\nmodel = MyRNN(units)\nmodel.compile(\n optimizer=keras.optimizers.Adam(0.001),\n loss = keras.losses.BinaryCrossentropy(),\n metrics=['accuracy']\n)\nmodel.fit(db_train, epochs=epochs, validation_data=db_test)\n\nmodel.evaluate(db_test)",
"Epoch 1/4\nWARNING:tensorflow:AutoGraph could not transform <function Model.make_train_function.<locals>.train_function at 0x7fd07bc96dd0> and will run it as-is.\nPlease report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\nCause: 'arguments' object has no attribute 'posonlyargs'\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\nWARNING: AutoGraph could not transform <function Model.make_train_function.<locals>.train_function at 0x7fd07bc96dd0> and will run it as-is.\nPlease report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\nCause: 'arguments' object has no attribute 'posonlyargs'\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\nWARNING:tensorflow:AutoGraph could not transform <bound method MyRNN.call of <__main__.MyRNN object at 0x7fd06be1a350>> and will run it as-is.\nPlease report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\nCause: 'arguments' object has no attribute 'posonlyargs'\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\nWARNING: AutoGraph could not transform <bound method MyRNN.call of <__main__.MyRNN object at 0x7fd06be1a350>> and will run it as-is.\nPlease report this to the TensorFlow team. 
When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\nCause: 'arguments' object has no attribute 'posonlyargs'\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n195/195 [==============================] - ETA: 0s - loss: 0.6464 - accuracy: 0.5934WARNING:tensorflow:AutoGraph could not transform <function Model.make_test_function.<locals>.test_function at 0x7fd07bc96950> and will run it as-is.\nPlease report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\nCause: 'arguments' object has no attribute 'posonlyargs'\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\nWARNING: AutoGraph could not transform <function Model.make_test_function.<locals>.test_function at 0x7fd07bc96950> and will run it as-is.\nPlease report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\nCause: 'arguments' object has no attribute 'posonlyargs'\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n195/195 [==============================] - 19s 64ms/step - loss: 0.6464 - accuracy: 0.5934 - val_loss: 0.4332 - val_accuracy: 0.8031\nEpoch 2/4\n195/195 [==============================] - 13s 65ms/step - loss: 0.3900 - accuracy: 0.8284 - val_loss: 0.3862 - val_accuracy: 0.8355\nEpoch 3/4\n195/195 [==============================] - 11s 55ms/step - loss: 0.2728 - accuracy: 0.8901 - val_loss: 0.4267 - val_accuracy: 0.8282\nEpoch 4/4\n195/195 [==============================] - 11s 57ms/step - loss: 0.1656 - accuracy: 0.9389 - val_loss: 0.5560 - val_accuracy: 0.7930\n195/195 [==============================] - 3s 16ms/step - loss: 0.5560 - accuracy: 0.7930\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e75750388d5c581c81f5d053edc4702bc8fe262b | 641,530 | ipynb | Jupyter Notebook | Weather and Vaction Notebooks/WeatherPy/WeatherPy.ipynb | uchenna23/Python-API-Challenge | 6ba036a123db6b2b49f9b7d629c2ddfafcf39ba5 | [
"ADSL"
] | null | null | null | Weather and Vaction Notebooks/WeatherPy/WeatherPy.ipynb | uchenna23/Python-API-Challenge | 6ba036a123db6b2b49f9b7d629c2ddfafcf39ba5 | [
"ADSL"
] | null | null | null | Weather and Vaction Notebooks/WeatherPy/WeatherPy.ipynb | uchenna23/Python-API-Challenge | 6ba036a123db6b2b49f9b7d629c2ddfafcf39ba5 | [
"ADSL"
] | null | null | null | 366.588571 | 58,924 | 0.927294 | [
[
[
"# WeatherPy\n----\n\n#### Note\n* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.",
"_____no_output_____"
],
[
"## Generate Cities List",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport requests\nimport time\nimport json\nimport scipy.stats as st\nfrom scipy.stats import linregress\n\nfrom api_keys import weather_api_key\n\nfrom citipy import citipy\n\noutput_data_file = \"output_data/cities.csv\"\n\nlat_range = (-90, 90)\nlng_range = (-180, 180)",
"_____no_output_____"
],
[
"\nlat_lngs = []\ncities = []\n\nlats = np.random.uniform(low=-90.000, high=90.000, size=1500)\nlngs = np.random.uniform(low=-180.000, high=180.000, size=1500)\nlat_lngs = zip(lats, lngs)\n\n\nfor lat_lng in lat_lngs:\n city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name\n \n\n if city not in cities:\n cities.append(city)\n\n\nlen(cities)",
"_____no_output_____"
]
],
[
[
"### Perform API Calls\n* Perform a weather check on each city using a series of successive API calls.\n* Include a print log of each city as it'sbeing processed (with the city number and city name).\n",
"_____no_output_____"
]
],
[
[
"url = \"http://api.openweathermap.org/data/2.5/weather?\"\nunits = \"imperial\"\nquery_url = f\"{url}appid={weather_api_key}&units={units}&q=\"\n\n\ncity_id_list = []\ncity_name_list = []\ncountry_list = []\nlng_list = []\nlat_list = []\ntemp_list = []\nhumidity_list = []\nclouds_list = []\nwind_speed_list = []\n\nprint('Beginning Data Retrieval')\nprint('----------------------------')\nfor city in cities:\n \n\n response_json = requests.get(query_url + city).json()\n \n\n try:\n \n city_id = response_json['id']\n city_id_list.append(city_id)\n \n city_name = response_json['name']\n city_name_list.append(city_name)\n \n country_name = response_json['sys']['country']\n country_list.append(country_name)\n\n lng = response_json['coord']['lon']\n lng_list.append(lng)\n\n lat = response_json['coord']['lat']\n lat_list.append(lat)\n\n temp = response_json['main']['temp']\n temp_list.append(temp)\n\n humidity = response_json['main']['humidity']\n humidity_list.append(humidity)\n\n clouds = response_json['clouds']['all']\n clouds_list.append(clouds)\n\n wind_speed = response_json['wind']['speed']\n wind_speed_list.append(wind_speed)\n \n print(f\"City Name: {city}, City ID: {city_id}\")\n \n\n except:\n \n print(\"City not found. Skipping...\")",
"Beginning Data Retrieval\n----------------------------\nCity Name: rikitea, City ID: 4030556\nCity Name: palmer, City ID: 4946620\nCity Name: saldanha, City ID: 3361934\nCity Name: castro, City ID: 3466704\nCity Name: cabo san lucas, City ID: 3985710\nCity Name: nikolskoye, City ID: 546105\nCity Name: anda, City ID: 2038650\nCity Name: luderitz, City ID: 3355672\nCity Name: togur, City ID: 1489499\nCity Name: vaini, City ID: 4032243\nCity Name: port lincoln, City ID: 2063036\nCity Name: kaitangata, City ID: 2208248\nCity not found. Skipping...\nCity not found. Skipping...\nCity Name: dudinka, City ID: 1507116\nCity Name: korla, City ID: 1529376\nCity Name: litovko, City ID: 2020738\nCity Name: leningradskiy, City ID: 2123814\nCity Name: jamestown, City ID: 5122534\nCity Name: ketchikan, City ID: 5554428\nCity Name: sola, City ID: 2134814\nCity Name: albany, City ID: 5106841\nCity Name: hermanus, City ID: 3366880\nCity Name: carnarvon, City ID: 2074865\nCity Name: nisia floresta, City ID: 3393922\nCity Name: road town, City ID: 3577430\nCity Name: joutseno, City ID: 655563\nCity Name: bluff, City ID: 2206939\nCity Name: hasaki, City ID: 2112802\nCity Name: cape town, City ID: 3369157\nCity Name: torbay, City ID: 6167817\nCity Name: tigil, City ID: 2120612\nCity Name: sorochinsk, City ID: 490554\nCity not found. Skipping...\nCity Name: sanbu, City ID: 1797873\nCity Name: leshukonskoye, City ID: 535839\nCity Name: tuktoyaktuk, City ID: 6170031\nCity Name: saint-philippe, City ID: 935215\nCity Name: goya, City ID: 3433715\nCity Name: mahebourg, City ID: 934322\nCity Name: thompson, City ID: 6165406\nCity Name: north platte, City ID: 5697939\nCity Name: balkanabat, City ID: 161616\nCity Name: punta arenas, City ID: 3874787\nCity not found. Skipping...\nCity not found. 
Skipping...\nCity Name: bredasdorp, City ID: 1015776\nCity Name: huarmey, City ID: 3939168\nCity Name: beyneu, City ID: 610298\nCity Name: bhadrachalam, City ID: 1276328\nCity Name: nome, City ID: 5870133\nCity Name: oranjemund, City ID: 3354071\nCity Name: whitehaven, City ID: 2634096\nCity not found. Skipping...\nCity Name: yellowknife, City ID: 6185377\nCity Name: mar del plata, City ID: 3430863\nCity Name: dajal, City ID: 1180752\nCity Name: codrington, City ID: 2171099\nCity Name: gasan, City ID: 1713154\nCity Name: mataura, City ID: 6201424\nCity Name: bandarbeyla, City ID: 64814\nCity Name: kumukh, City ID: 539233\nCity Name: camabatela, City ID: 2242885\nCity Name: tsabong, City ID: 932987\nCity Name: mineral wells, City ID: 4711647\nCity Name: butaritari, City ID: 2110227\nCity Name: richards bay, City ID: 962367\nCity Name: hithadhoo, City ID: 1282256\nCity Name: east london, City ID: 1006984\nCity Name: pisco, City ID: 3932145\nCity Name: dikson, City ID: 1507390\nCity not found. Skipping...\nCity Name: bethel, City ID: 5282297\nCity Name: san cristobal, City ID: 3628473\nCity Name: namatanai, City ID: 2090021\nCity Name: provideniya, City ID: 4031574\nCity Name: bonthe, City ID: 2409914\nCity Name: sungai besar, City ID: 1735199\nCity Name: gravdal, City ID: 3155152\nCity Name: ratnagiri, City ID: 1258338\nCity Name: busselton, City ID: 2075265\nCity Name: faanui, City ID: 4034551\nCity Name: brae, City ID: 2654970\nCity Name: high level, City ID: 5975004\nCity Name: kathu, City ID: 1153035\nCity Name: kolpashevo, City ID: 1502862\nCity Name: yining, City ID: 1786538\nCity Name: belle fourche, City ID: 5762718\nCity Name: san quintin, City ID: 3984997\nCity Name: khatanga, City ID: 2022572\nCity Name: ushuaia, City ID: 3833367\nCity Name: beringovskiy, City ID: 2126710\nCity Name: belaya gora, City ID: 2126785\nCity Name: salalah, City ID: 286621\nCity Name: chuy, City ID: 3443061\nCity not found. 
Skipping...\nCity Name: port alfred, City ID: 964432\nCity Name: coihaique, City ID: 3894426\nCity Name: rocha, City ID: 3440777\nCity Name: puerto suarez, City ID: 3444199\nCity Name: kapaa, City ID: 5848280\nCity Name: avarua, City ID: 4035715\nCity Name: kuala terengganu, City ID: 1734705\nCity Name: yumen, City ID: 1528998\nCity not found. Skipping...\nCity Name: samarai, City ID: 2132606\nCity Name: poum, City ID: 2138555\nCity Name: omsukchan, City ID: 2122493\nCity Name: manali, City ID: 1263968\nCity Name: aklavik, City ID: 5882953\nCity Name: barrow, City ID: 5880054\nCity not found. Skipping...\nCity Name: tessalit, City ID: 2449893\nCity Name: louth, City ID: 2643553\nCity Name: carutapera, City ID: 3402648\nCity not found. Skipping...\nCity Name: norman wells, City ID: 6089245\nCity Name: katsuura, City ID: 2112309\nCity Name: port hardy, City ID: 6111862\nCity Name: sabang, City ID: 1214026\nCity Name: tura, City ID: 1254046\nCity Name: puerto ayora, City ID: 3652764\nCity Name: galveston, City ID: 4692856\nCity Name: ribeira grande, City ID: 3372707\nCity Name: raudeberg, City ID: 3146487\nCity Name: new norfolk, City ID: 2155415\nCity not found. Skipping...\nCity Name: santa cruz cabralia, City ID: 3450288\nCity Name: pevek, City ID: 2122090\nCity Name: okhotsk, City ID: 2122605\nCity Name: mehamn, City ID: 778707\nCity Name: dryden, City ID: 5942913\nCity Name: rio grande, City ID: 3451138\nCity Name: sarangani, City ID: 1687186\nCity Name: westport, City ID: 4845585\nCity Name: karratha, City ID: 6620339\nCity Name: kuandian, City ID: 2036283\nCity Name: havoysund, City ID: 779622\nCity Name: emba, City ID: 609924\nCity Name: sioux lookout, City ID: 6148373\nCity Name: komsomolskiy, City ID: 1513491\nCity not found. Skipping...\nCity Name: yenagoa, City ID: 2318123\nCity Name: wencheng, City ID: 1791539\nCity not found. 
Skipping...\nCity Name: gull lake, City ID: 5967988\nCity Name: yar-sale, City ID: 1486321\nCity Name: murray bridge, City ID: 2065176\nCity Name: chicama, City ID: 3698359\nCity Name: swellendam, City ID: 950709\nCity Name: nantucket, City ID: 4944903\nCity not found. Skipping...\nCity Name: breytovo, City ID: 571634\nCity Name: san patricio, City ID: 4726521\nCity Name: upernavik, City ID: 3418910\nCity Name: trat, City ID: 1605277\nCity Name: itoman, City ID: 1861280\nCity Name: novikovo, City ID: 487928\nCity Name: nakhon phanom, City ID: 1608530\nCity Name: jena, City ID: 2895044\nCity Name: esperance, City ID: 2071860\nCity not found. Skipping...\nCity Name: brielle, City ID: 2758325\nCity not found. Skipping...\nCity Name: sitka, City ID: 5557293\nCity Name: viedma, City ID: 3832899\nCity Name: nishihara, City ID: 1855342\nCity Name: hualmay, City ID: 3939761\nCity not found. Skipping...\nCity Name: atuona, City ID: 4020109\nCity Name: tommot, City ID: 2015179\nCity Name: iskateley, City ID: 866062\nCity not found. Skipping...\nCity Name: anadyr, City ID: 2127202\nCity Name: tornio, City ID: 634093\nCity Name: margherita, City ID: 1263532\nCity Name: qaanaaq, City ID: 3831208\nCity Name: kirakira, City ID: 2178753\nCity not found. 
Skipping...\nCity Name: san carlos de bariloche, City ID: 7647007\nCity Name: naron, City ID: 3115739\nCity Name: marawi, City ID: 1701054\nCity Name: hobart, City ID: 2163355\nCity Name: gizo, City ID: 2108857\nCity Name: cidreira, City ID: 3466165\nCity Name: vila franca do campo, City ID: 3372472\nCity Name: souillac, City ID: 933995\nCity Name: hilo, City ID: 5855927\nCity Name: pacific grove, City ID: 5380437\nCity Name: mitsamiouli, City ID: 921786\nCity Name: wasilla, City ID: 5877641\nCity Name: khorramshahr, City ID: 127319\nCity Name: port shepstone, City ID: 964406\nCity Name: batagay-alyta, City ID: 2027042\nCity Name: colquechaca, City ID: 3919720\nCity Name: airai, City ID: 1651810\nCity Name: ystad, City ID: 2662149\nCity Name: xiuyan, City ID: 2033602\nCity Name: gao, City ID: 2457161\nCity Name: grand gaube, City ID: 934479\nCity Name: goksun, City ID: 314188\nCity Name: mount gambier, City ID: 2156643\nCity Name: alto araguaia, City ID: 3472473\nCity Name: bahar, City ID: 142000\nCity Name: bom sucesso, City ID: 3469374\nCity Name: elizabeth city, City ID: 4465088\nCity Name: seoul, City ID: 1835848\nCity Name: baherden, City ID: 162158\nCity Name: nanortalik, City ID: 3421765\nCity Name: mareeba, City ID: 2158767\nCity Name: asau, City ID: 686090\nCity Name: saint-joseph, City ID: 6690296\nCity Name: doha, City ID: 290030\nCity not found. Skipping...\nCity Name: isla mujeres, City ID: 3526756\nCity Name: bafia, City ID: 2235194\nCity Name: sorland, City ID: 3137469\nCity Name: srednekolymsk, City ID: 2121025\nCity Name: visby, City ID: 2662689\n"
]
],
[
[
"### Convert Raw Data to DataFrame\n* Export the city data into a .csv.\n* Display the DataFrame",
"_____no_output_____"
]
],
[
[
"cities_df = pd.DataFrame({\"City ID\": city_id_list, \"City\": city_name_list, \"Country\": country_list, \"Lat\": lat_list, \"Lng\": lng_list,\n \"Temperature\": temp_list, \"Humidity\": humidity_list, \"Clouds\": clouds_list,\n \"Wind Speed\": wind_speed_list})\ncities_df.head()",
"_____no_output_____"
],
[
"cities_df.to_csv(\"cities.csv\", index=False, header=True)",
"_____no_output_____"
]
],
[
[
"## Plotting the Data\n* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.\n* Save the plotted figures as .pngs.",
"_____no_output_____"
],
[
"## Latitude vs. Temperature Plot",
"_____no_output_____"
]
],
[
[
"x_values = cities_df[\"Lat\"]\ny_values = cities_df[\"Temperature\"]\n\nplt.scatter(x_values,y_values)\nplt.title('City Latitude vs. Max Temperature (04/01/20)')\nplt.xlabel('Latitude')\nplt.ylabel('Max Temperature (F)')\nplt.ylim(0, 100)\nplt.xlim(-60, 80)\nplt.minorticks_on()\nplt.grid(which='major', linestyle='-')\nplt.grid(which='minor', linestyle=':')\nplt.tight_layout()\nplt.savefig(\"Fig1.png\")  # save before plt.show(), which closes the figure\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Latitude vs. Humidity Plot",
"_____no_output_____"
]
],
[
[
"x_values = cities_df[\"Lat\"]\ny_values = cities_df[\"Humidity\"]\n\nplt.scatter(x_values,y_values)\nplt.title('City Latitude vs. Humidity (04/01/20)')\nplt.xlabel('Latitude')\nplt.ylabel('Humidity (%)')\nplt.ylim(0, 105)\nplt.xlim(-60, 80)\nplt.minorticks_on()\nplt.grid(which='major', linestyle='-')\nplt.grid(which='minor', linestyle=':')\nplt.tight_layout()\nplt.savefig(\"Fig2.png\")  # save before plt.show(), which closes the figure\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Latitude vs. Cloudiness Plot",
"_____no_output_____"
]
],
[
[
"x_values = cities_df[\"Lat\"]\ny_values = cities_df[\"Clouds\"]\n\nplt.scatter(x_values,y_values)\nplt.title('City Latitude vs. Cloudiness (04/01/20)')\nplt.xlabel('Latitude')\nplt.ylabel('Cloudiness (%)')\nplt.ylim(-5, 105)\nplt.xlim(-60, 80)\nplt.minorticks_on()\nplt.grid(which='major', linestyle='-')\nplt.grid(which='minor', linestyle=':')\nplt.tight_layout()\nplt.savefig(\"Fig3.png\")  # save before plt.show(), which closes the figure\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Latitude vs. Wind Speed Plot",
"_____no_output_____"
]
],
[
[
"x_values = cities_df[\"Lat\"]\ny_values = cities_df[\"Wind Speed\"]\n\nplt.scatter(x_values,y_values)\nplt.title('City Latitude vs. Wind Speed (04/01/20)')\nplt.xlabel('Latitude')\nplt.ylabel('Wind Speed (mph)')\nplt.ylim(0, 40)\nplt.xlim(-60, 80)\nplt.minorticks_on()\nplt.grid(which='major', linestyle='-')\nplt.grid(which='minor', linestyle=':')\nplt.tight_layout()\nplt.savefig(\"Fig4.png\")  # save before plt.show(), which closes the figure\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Linear Regression",
"_____no_output_____"
]
],
[
[
"mask = cities_df['Lat'] > 0\nnorthern_hemisphere = cities_df[mask]\nsouthern_hemisphere = cities_df[~mask]",
"_____no_output_____"
]
],
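[
[
"# Hedged sketch (illustrative only; `demo` is an assumed toy frame, not the real city data):\n# the boolean mask used above keeps latitudes > 0 for the northern hemisphere and the\n# inverted mask (~) keeps everything else for the southern hemisphere.\nimport pandas as pd\n\ndemo = pd.DataFrame({\"City\": [\"a\", \"b\", \"c\"], \"Lat\": [45.0, -33.9, 10.5]})\nm = demo[\"Lat\"] > 0\nnorth, south = demo[m], demo[~m]\nassert len(north) == 2 and len(south) == 1",
"_____no_output_____"
]
],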
[
[
"#### Northern Hemisphere - Max Temp vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"x_values = northern_hemisphere[\"Lat\"]\ny_values = northern_hemisphere[\"Temperature\"]\n\n(slope, intercept, rvalue, pvalue, stderr) = st.linregress(x_values, y_values)\nregress_values = x_values * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\ncorrelation = st.pearsonr(x_values, y_values)\n\nplt.scatter(x_values,y_values)\nplt.title('Northern Hemisphere - Max Temp vs. Latitude Linear Regression')\nplt.plot(x_values,regress_values,\"r-\")\nplt.annotate(line_eq,(10,20),fontsize=12,color=\"red\")\nplt.xlabel('Latitude')\nplt.ylabel('Temperature (F)')\nplt.ylim(-5, 100)\nplt.xlim(0, 80)\nplt.minorticks_on()\nplt.grid(which='major', linestyle='-')\nplt.grid(which='minor', linestyle=':')\nplt.tight_layout()\nplt.savefig(\"Fig5.png\")  # save before plt.show(), which closes the figure\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Southern Hemisphere - Max Temp vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"x_values = southern_hemisphere[\"Lat\"]\ny_values = southern_hemisphere[\"Temperature\"]\n\n(slope, intercept, rvalue, pvalue, stderr) = st.linregress(x_values, y_values)\nregress_values = x_values * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\ncorrelation = st.pearsonr(x_values, y_values)\n\nplt.scatter(x_values,y_values)\nplt.title('Southern Hemisphere - Max Temp vs. Latitude Linear Regression')\nplt.plot(x_values,regress_values,\"r-\")\nplt.annotate(line_eq,(-35,80),fontsize=12,color=\"red\")\nplt.xlabel('Latitude')\nplt.ylabel('Temperature (F)')\nplt.ylim(40, 100)\nplt.xlim(0, -60)\nplt.minorticks_on()\nplt.grid(which='major', linestyle='-')\nplt.grid(which='minor', linestyle=':')\nplt.tight_layout()\nplt.savefig(\"Fig6.png\")  # save before plt.show(), which closes the figure\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"x_values = northern_hemisphere[\"Lat\"]\ny_values = northern_hemisphere[\"Humidity\"]\n\n(slope, intercept, rvalue, pvalue, stderr) = st.linregress(x_values, y_values)\nregress_values = x_values * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\ncorrelation = st.pearsonr(x_values, y_values)\n\nplt.scatter(x_values,y_values)\nplt.title('Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression')\nplt.plot(x_values,regress_values,\"r-\")\nplt.annotate(line_eq,(50,20),fontsize=12,color=\"red\")\nplt.xlabel('Latitude')\nplt.ylabel('Humidity (%)')\nplt.ylim(0, 105)\nplt.xlim(0, 80)\nplt.minorticks_on()\nplt.grid(which='major', linestyle='-')\nplt.grid(which='minor', linestyle=':')\nplt.tight_layout()\nplt.savefig(\"Fig7.png\")  # save before plt.show(), which closes the figure\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"x_values = southern_hemisphere[\"Lat\"]\ny_values = southern_hemisphere[\"Humidity\"]\n\n(slope, intercept, rvalue, pvalue, stderr) = st.linregress(x_values, y_values)\nregress_values = x_values * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\ncorrelation = st.pearsonr(x_values, y_values)\n\nplt.scatter(x_values,y_values)\nplt.title('Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression')\nplt.plot(x_values,regress_values,\"r-\")\nplt.annotate(line_eq,(-35,80),fontsize=12,color=\"red\")\nplt.xlabel('Latitude')\nplt.ylabel('Humidity (%)')\nplt.ylim(0, 105)\nplt.xlim(0, -60)\nplt.minorticks_on()\nplt.grid(which='major', linestyle='-')\nplt.grid(which='minor', linestyle=':')\nplt.tight_layout()\nplt.savefig(\"Fig8.png\")  # save before plt.show(), which closes the figure\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"x_values = northern_hemisphere[\"Lat\"]\ny_values = northern_hemisphere[\"Clouds\"]\n\n(slope, intercept, rvalue, pvalue, stderr) = st.linregress(x_values, y_values)\nregress_values = x_values * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\ncorrelation = st.pearsonr(x_values, y_values)\n\nplt.scatter(x_values,y_values)\nplt.title('Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression')\nplt.plot(x_values,regress_values,\"r-\")\nplt.annotate(line_eq,(50,20),fontsize=12,color=\"red\")\nplt.xlabel('Latitude')\nplt.ylabel('Cloudiness (%)')\nplt.ylim(-5, 105)\nplt.xlim(0, 80)\nplt.minorticks_on()\nplt.grid(which='major', linestyle='-')\nplt.grid(which='minor', linestyle=':')\nplt.tight_layout()\nplt.savefig(\"Fig9.png\")  # save before plt.show(), which closes the figure\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"x_values = southern_hemisphere[\"Lat\"]\ny_values = southern_hemisphere[\"Clouds\"]\n\n(slope, intercept, rvalue, pvalue, stderr) = st.linregress(x_values, y_values)\nregress_values = x_values * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\ncorrelation = st.pearsonr(x_values, y_values)\n\nplt.scatter(x_values,y_values)\nplt.title('Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression')\nplt.plot(x_values,regress_values,\"r-\")\nplt.annotate(line_eq,(-30,60),fontsize=12,color=\"red\")\nplt.xlabel('Latitude')\nplt.ylabel('Cloudiness (%)')\nplt.ylim(-5, 105)\nplt.xlim(0, -60)\nplt.minorticks_on()\nplt.grid(which='major', linestyle='-')\nplt.grid(which='minor', linestyle=':')\nplt.tight_layout()\nplt.savefig(\"Fig10.png\")  # save before plt.show(), which closes the figure\nplt.show()\n",
"_____no_output_____"
]
],
[
[
"#### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"x_values = northern_hemisphere[\"Lat\"]\ny_values = northern_hemisphere[\"Wind Speed\"]\n\n(slope, intercept, rvalue, pvalue, stderr) = st.linregress(x_values, y_values)\nregress_values = x_values * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\ncorrelation = st.pearsonr(x_values, y_values)\n\nplt.scatter(x_values,y_values)\nplt.title('Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression')\nplt.plot(x_values,regress_values,\"r-\")\nplt.annotate(line_eq,(50,20),fontsize=12,color=\"red\")\nplt.xlabel('Latitude')\nplt.ylabel('Wind Speed (mph)')\nplt.ylim(0, 40)\nplt.xlim(0, 80)\nplt.minorticks_on()\nplt.grid(which='major', linestyle='-')\nplt.grid(which='minor', linestyle=':')\nplt.tight_layout()\nplt.savefig(\"Fig11.png\")  # save before plt.show(), which closes the figure\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"x_values = southern_hemisphere[\"Lat\"]\ny_values = southern_hemisphere[\"Wind Speed\"]\n\n(slope, intercept, rvalue, pvalue, stderr) = st.linregress(x_values, y_values)\nregress_values = x_values * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\ncorrelation = st.pearsonr(x_values, y_values)\n\nplt.scatter(x_values,y_values)\nplt.title('Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression')\nplt.plot(x_values,regress_values,\"r-\")\nplt.annotate(line_eq,(-30,20),fontsize=12,color=\"red\")\nplt.xlabel('Latitude')\nplt.ylabel('Wind Speed (mph)')\nplt.ylim(0, 35)\nplt.xlim(0, -60)\nplt.minorticks_on()\nplt.grid(which='major', linestyle='-')\nplt.grid(which='minor', linestyle=':')\nplt.tight_layout()\nplt.savefig(\"Fig12.png\")  # save before plt.show(), which closes the figure\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e757524c19bc66db85f7e17e881bddb0e21041eb | 38,545 | ipynb | Jupyter Notebook | cohesion_test/[paraphrase_xlm_r_multilingual_v1]sentence_transformer.ipynb | cateto/python4NLP | 1d2d5086f907bf75be01762bf0b384c76d8f704e | [
"MIT"
] | 2 | 2021-12-16T22:38:27.000Z | 2021-12-17T13:09:49.000Z | cohesion_test/[paraphrase_xlm_r_multilingual_v1]sentence_transformer.ipynb | cateto/python4NLP | 1d2d5086f907bf75be01762bf0b384c76d8f704e | [
"MIT"
] | null | null | null | cohesion_test/[paraphrase_xlm_r_multilingual_v1]sentence_transformer.ipynb | cateto/python4NLP | 1d2d5086f907bf75be01762bf0b384c76d8f704e | [
"MIT"
] | null | null | null | 66.456897 | 466 | 0.585809 | [
[
[
"<a href=\"https://colab.research.google.com/github/cateto/python4NLP/blob/main/cohesion_test/%5Bparaphrase_xlm_r_multilingual_v1%5Dsentence_transformer.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Loading sentence transformers",
"_____no_output_____"
]
],
[
[
"pip install -U sentence-transformers",
"Requirement already satisfied: sentence-transformers in /usr/local/lib/python3.7/dist-packages (2.1.0)\nRequirement already satisfied: sentencepiece in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (0.1.96)\nRequirement already satisfied: tokenizers>=0.10.3 in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (0.10.3)\nRequirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (0.22.2.post1)\nRequirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (4.62.3)\nRequirement already satisfied: torchvision in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (0.10.0+cu111)\nRequirement already satisfied: nltk in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (3.2.5)\nRequirement already satisfied: torch>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (1.9.0+cu111)\nRequirement already satisfied: transformers<5.0.0,>=4.6.0 in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (4.12.0)\nRequirement already satisfied: huggingface-hub in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (0.0.19)\nRequirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (1.4.1)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from sentence-transformers) (1.19.5)\nRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=1.6.0->sentence-transformers) (3.7.4.3)\nRequirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (4.8.1)\nRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (3.3.0)\nRequirement already satisfied: regex!=2019.12.17 in 
/usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (2019.12.20)\nRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (2.23.0)\nRequirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (6.0)\nRequirement already satisfied: sacremoses in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (0.0.46)\nRequirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers) (21.0)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->transformers<5.0.0,>=4.6.0->sentence-transformers) (2.4.7)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers<5.0.0,>=4.6.0->sentence-transformers) (3.6.0)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from nltk->sentence-transformers) (1.15.0)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers<5.0.0,>=4.6.0->sentence-transformers) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers<5.0.0,>=4.6.0->sentence-transformers) (2021.5.30)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers<5.0.0,>=4.6.0->sentence-transformers) (1.24.3)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers<5.0.0,>=4.6.0->sentence-transformers) (2.10)\nRequirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from 
sacremoses->transformers<5.0.0,>=4.6.0->sentence-transformers) (7.1.2)\nRequirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers<5.0.0,>=4.6.0->sentence-transformers) (1.0.1)\nRequirement already satisfied: pillow>=5.3.0 in /usr/local/lib/python3.7/dist-packages (from torchvision->sentence-transformers) (7.1.2)\n"
],
[
"import torch\nfrom sentence_transformers import SentenceTransformer, models, util\n\nmodel_name = 'sentence-transformers/paraphrase-xlm-r-multilingual-v1'\n\nembedding_model = models.Transformer(model_name)\n\n# Mean pooling over token embeddings to get one sentence vector\npooler = models.Pooling(\n    embedding_model.get_word_embedding_dimension(),\n    pooling_mode_mean_tokens=True,\n    pooling_mode_cls_token=False,\n    pooling_mode_max_tokens=False,\n)\n\nmodel = SentenceTransformer(modules=[embedding_model, pooler])\n",
"_____no_output_____"
]
],
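[
[
"# Hedged sketch (assumed toy vectors, not from this notebook): util.pytorch_cos_sim\n# returns pairwise cosine similarities, so an identical pair scores ~1.0 and an\n# orthogonal pair scores ~0.0 -- the same scoring used for the queries below.\nimport torch\nfrom sentence_transformers import util\n\na = torch.tensor([[1.0, 0.0]])\nb = torch.tensor([[1.0, 0.0], [0.0, 1.0]])\nscores = util.pytorch_cos_sim(a, b)[0]\nprint(scores)",
"_____no_output_____"
]
],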
[
[
"# Filtering nouns, verbs, and adjectives",
"_____no_output_____"
]
],
[
[
"# Install Mecab (Korean morphological analyzer) on Colab\n!git clone https://github.com/SOMJANG/Mecab-ko-for-Google-Colab.git\n%cd Mecab-ko-for-Google-Colab\n!bash install_mecab-ko_on_colab190912.sh",
"Cloning into 'Mecab-ko-for-Google-Colab'...\nremote: Enumerating objects: 91, done.\u001b[K\nremote: Counting objects: 100% (91/91), done.\u001b[K\nremote: Compressing objects: 100% (85/85), done.\u001b[K\nremote: Total 91 (delta 43), reused 22 (delta 6), pack-reused 0\u001b[K\nUnpacking objects: 100% (91/91), done.\n/content/Mecab-ko-for-Google-Colab\nInstalling konlpy.....\nCollecting konlpy\n Downloading konlpy-0.5.2-py2.py3-none-any.whl (19.4 MB)\n\u001b[K |████████████████████████████████| 19.4 MB 1.2 MB/s \n\u001b[?25hRequirement already satisfied: lxml>=4.1.0 in /usr/local/lib/python3.7/dist-packages (from konlpy) (4.2.6)\nRequirement already satisfied: tweepy>=3.7.0 in /usr/local/lib/python3.7/dist-packages (from konlpy) (3.10.0)\nCollecting JPype1>=0.7.0\n Downloading JPype1-1.3.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl (448 kB)\n\u001b[K |████████████████████████████████| 448 kB 68.4 MB/s \n\u001b[?25hCollecting colorama\n Downloading colorama-0.4.4-py2.py3-none-any.whl (16 kB)\nRequirement already satisfied: numpy>=1.6 in /usr/local/lib/python3.7/dist-packages (from konlpy) (1.19.5)\nCollecting beautifulsoup4==4.6.0\n Downloading beautifulsoup4-4.6.0-py3-none-any.whl (86 kB)\n\u001b[K |████████████████████████████████| 86 kB 5.8 MB/s \n\u001b[?25hRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from JPype1>=0.7.0->konlpy) (3.7.4.3)\nRequirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from tweepy>=3.7.0->konlpy) (1.3.0)\nRequirement already satisfied: requests[socks]>=2.11.1 in /usr/local/lib/python3.7/dist-packages (from tweepy>=3.7.0->konlpy) (2.23.0)\nRequirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.7/dist-packages (from tweepy>=3.7.0->konlpy) (1.15.0)\nRequirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->tweepy>=3.7.0->konlpy) (3.1.1)\nRequirement already 
satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests[socks]>=2.11.1->tweepy>=3.7.0->konlpy) (2.10)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests[socks]>=2.11.1->tweepy>=3.7.0->konlpy) (3.0.4)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests[socks]>=2.11.1->tweepy>=3.7.0->konlpy) (1.24.3)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests[socks]>=2.11.1->tweepy>=3.7.0->konlpy) (2021.5.30)\nRequirement already satisfied: PySocks!=1.5.7,>=1.5.6 in /usr/local/lib/python3.7/dist-packages (from requests[socks]>=2.11.1->tweepy>=3.7.0->konlpy) (1.7.1)\nInstalling collected packages: JPype1, colorama, beautifulsoup4, konlpy\n Attempting uninstall: beautifulsoup4\n Found existing installation: beautifulsoup4 4.6.3\n Uninstalling beautifulsoup4-4.6.3:\n Successfully uninstalled beautifulsoup4-4.6.3\nSuccessfully installed JPype1-1.3.0 beautifulsoup4-4.6.0 colorama-0.4.4 konlpy-0.5.2\nDone\nInstalling mecab-0.996-ko-0.9.2.tar.gz.....\nDownloading mecab-0.996-ko-0.9.2.tar.gz.......\nfrom https://bitbucket.org/eunjeon/mecab-ko/downloads/mecab-0.996-ko-0.9.2.tar.gz\n--2021-10-29 07:37:19-- https://bitbucket.org/eunjeon/mecab-ko/downloads/mecab-0.996-ko-0.9.2.tar.gz\nResolving bitbucket.org (bitbucket.org)... 104.192.141.1, 2406:da00:ff00::22c5:2ef4, 2406:da00:ff00::22c0:3470, ...\nConnecting to bitbucket.org (bitbucket.org)|104.192.141.1|:443... connected.\nHTTP request sent, awaiting response... 
302 Found\nLocation: https://bbuseruploads.s3.amazonaws.com/eunjeon/mecab-ko/downloads/mecab-0.996-ko-0.9.2.tar.gz?Signature=Qpzvjw3b4rW%2BOh5Dls4959BOmTY%3D&Expires=1635494282&AWSAccessKeyId=AKIA6KOSE3BNJRRFUUX6&versionId=null&response-content-disposition=attachment%3B%20filename%3D%22mecab-0.996-ko-0.9.2.tar.gz%22&response-content-encoding=None [following]\n--2021-10-29 07:37:19-- https://bbuseruploads.s3.amazonaws.com/eunjeon/mecab-ko/downloads/mecab-0.996-ko-0.9.2.tar.gz?Signature=Qpzvjw3b4rW%2BOh5Dls4959BOmTY%3D&Expires=1635494282&AWSAccessKeyId=AKIA6KOSE3BNJRRFUUX6&versionId=null&response-content-disposition=attachment%3B%20filename%3D%22mecab-0.996-ko-0.9.2.tar.gz%22&response-content-encoding=None\nResolving bbuseruploads.s3.amazonaws.com (bbuseruploads.s3.amazonaws.com)... 52.216.232.43\nConnecting to bbuseruploads.s3.amazonaws.com (bbuseruploads.s3.amazonaws.com)|52.216.232.43|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 1414979 (1.3M) [application/x-tar]\nSaving to: ‘mecab-0.996-ko-0.9.2.tar.gz’\n\nmecab-0.996-ko-0.9. 100%[===================>] 1.35M --.-KB/s in 0.07s \n\n2021-10-29 07:37:20 (19.5 MB/s) - ‘mecab-0.996-ko-0.9.2.tar.gz’ saved [1414979/1414979]\n\nDone\nUnpacking mecab-0.996-ko-0.9.2.tar.gz.......\nDone\nChange Directory to mecab-0.996-ko-0.9.2.......\ninstalling mecab-0.996-ko-0.9.2.tar.gz........\nconfigure\nmake\nmake check\nmake install\nldconfig\nDone\nChange Directory to /content\nDownloading mecab-ko-dic-2.1.1-20180720.tar.gz.......\nfrom https://bitbucket.org/eunjeon/mecab-ko-dic/downloads/mecab-ko-dic-2.1.1-20180720.tar.gz\n--2021-10-29 07:38:52-- https://bitbucket.org/eunjeon/mecab-ko-dic/downloads/mecab-ko-dic-2.1.1-20180720.tar.gz\nResolving bitbucket.org (bitbucket.org)... 104.192.141.1, 2406:da00:ff00::22cd:e0db, 2406:da00:ff00::6b17:d1f5, ...\nConnecting to bitbucket.org (bitbucket.org)|104.192.141.1|:443... connected.\nHTTP request sent, awaiting response... 
302 Found\nLocation: https://bbuseruploads.s3.amazonaws.com/a4fcd83e-34f1-454e-a6ac-c242c7d434d3/downloads/b5a0c703-7b64-45ed-a2d7-180e962710b6/mecab-ko-dic-2.1.1-20180720.tar.gz?Signature=sHkwpriS4tr7V07MDdyUVgNKt1A%3D&Expires=1635494932&AWSAccessKeyId=AKIA6KOSE3BNJRRFUUX6&versionId=tzyxc1TtnZU_zEuaaQDGN4F76hPDpyFq&response-content-disposition=attachment%3B%20filename%3D%22mecab-ko-dic-2.1.1-20180720.tar.gz%22&response-content-encoding=None [following]\n--2021-10-29 07:38:52-- https://bbuseruploads.s3.amazonaws.com/a4fcd83e-34f1-454e-a6ac-c242c7d434d3/downloads/b5a0c703-7b64-45ed-a2d7-180e962710b6/mecab-ko-dic-2.1.1-20180720.tar.gz?Signature=sHkwpriS4tr7V07MDdyUVgNKt1A%3D&Expires=1635494932&AWSAccessKeyId=AKIA6KOSE3BNJRRFUUX6&versionId=tzyxc1TtnZU_zEuaaQDGN4F76hPDpyFq&response-content-disposition=attachment%3B%20filename%3D%22mecab-ko-dic-2.1.1-20180720.tar.gz%22&response-content-encoding=None\nResolving bbuseruploads.s3.amazonaws.com (bbuseruploads.s3.amazonaws.com)... 52.217.166.249\nConnecting to bbuseruploads.s3.amazonaws.com (bbuseruploads.s3.amazonaws.com)|52.217.166.249|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 49775061 (47M) [application/x-tar]\nSaving to: ‘mecab-ko-dic-2.1.1-20180720.tar.gz’\n\nmecab-ko-dic-2.1.1- 100%[===================>] 47.47M 109MB/s in 0.4s \n\n2021-10-29 07:38:52 (109 MB/s) - ‘mecab-ko-dic-2.1.1-20180720.tar.gz’ saved [49775061/49775061]\n\nDone\nUnpacking mecab-ko-dic-2.1.1-20180720.tar.gz.......\nDone\nChange Directory to mecab-ko-dic-2.1.1-20180720\nDone\ninstalling........\nconfigure\nmake\nmake install\napt-get update\napt-get upgrade\napt install curl\napt install git\nbash <(curl -s https://raw.githubusercontent.com/konlpy/konlpy/master/scripts/mecab.sh)\nDone\nSuccessfully Installed\nNow you can use Mecab\nfrom konlpy.tag import Mecab\nmecab = Mecab()\n사용자 사전 추가 방법 : https://bit.ly/3k0ZH53\nNameError: name 'Tagger' is not defined 오류 발생 시 런타임을 재실행 해주세요\n블로그에 해결 방법을 남겨주신 tana님 감사합니다.\n"
],
[
"def cleaning(sentence):\n    clean_words = []\n    for word in okt.pos(sentence, stem=True):\n        if word[1] in ['Noun', 'Verb', 'Adjective']:  # keep only nouns, verbs, and adjectives\n            clean_words.append(word[0])\n    print(clean_words)\n    temp_sentence = ' '.join(clean_words)\n    return temp_sentence, clean_words",
"_____no_output_____"
],
[
"from konlpy.tag import *\nokt = Okt()\n\ndocs = [\n \"1992년 7월 8일 손흥민은 강원도 춘천시 후평동에서 아버지 손웅정과 어머니 길은자의 차남으로 태어나 그곳에서 자랐다.\",\n \"형은 손흥윤이다.\",\n \"춘천 부안초등학교를 졸업했고, 춘천 후평중학교에 입학한 후 2학년때 원주 육민관중학교 축구부에 들어가기 위해 전학하여 졸업하였으며, 2008년 당시 FC 서울의 U-18팀이었던 동북고등학교 축구부에서 선수 활동 중 대한축구협회 우수선수 해외유학 프로젝트에 선발되어 2008년 8월 독일 분데스리가의 함부르크 유소년팀에 입단하였다.\",\n \"함부르크 유스팀 주전 공격수로 2008년 6월 네덜란드에서 열린 4개국 경기에서 4게임에 출전, 3골을 터뜨렸다.\",\n \"1년간의 유학 후 2009년 8월 한국으로 돌아온 후 10월에 개막한 FIFA U-17 월드컵에 출전하여 3골을 터트리며 한국을 8강으로 이끌었다.\",\n \"그해 11월 함부르크의 정식 유소년팀 선수 계약을 체결하였으며 독일 U-19 리그 4경기 2골을 넣고 2군 리그에 출전을 시작했다.\",\n \"독일 U-19 리그에서 손흥민은 11경기 6골, 2부 리그에서는 6경기 1골을 넣으며 재능을 인정받아 2010년 6월 17세의 나이로 함부르크의 1군 팀 훈련에 참가, 프리시즌 활약으로 함부르크와 정식 계약을 한 후 10월 18세에 함부르크 1군 소속으로 독일 분데스리가에 데뷔하였다.\",\n]\n\nresult_arr = []\nresult = []\nfor sentence in docs:\n temp_sentence, clean_words = cleaning(sentence)\n result_arr.append(clean_words)\n result.append(temp_sentence)",
"['손흥민', '강원도', '춘천시', '후평동', '아버지', '손웅정', '어머니', '길다', '자의', '차남', '태어나다', '곳', '자르다']\n['형', '손흥윤']\n['춘천', '부안', '초등학교', '졸업', '하다', '춘천', '후평', '중학교', '입학', '후', '학년', '때', '원주', '민', '관중', '학교', '축구', '부', '들어가다', '위해', '전학', '하다', '졸업', '하다', '당시', '서울', '팀', '이다', '동북', '고등학교', '축구', '부', '선수', '활동', '중', '축구', '협회', '우수', '선수', '해외', '유학', '프로젝트', '선발', '되어다', '독일', '분데스리가', '함부르크', '유', '소년', '팀', '입단', '하다']\n['함부르크', '유', '스팀', '주전', '공격수', '네덜란드', '열리다', '개국', '경기', '게임', '출전', '골', '터뜨리다']\n['유학', '후', '한국', '돌아오다', '후', '개막', '월드컵', '출전', '하다', '골', '터', '트리', '한국', '강', '이끌다']\n['함부르크', '정식', '유', '소년', '팀', '선수', '계약', '체결', '하다', '독일', '리그', '경기', '골', '넣다', '군', '리그', '출전', '시작', '하다']\n['독일', '리그', '손흥민', '경기', '골', '부', '리그', '경기', '골', '넣다', '재능', '인정받다', '세', '나이', '함부르크', '군', '팀', '훈련', '참가', '프리', '시즌', '활약', '함부르크', '정식', '계약', '하다', '후', '세', '함부르크', '군', '소속', '독일', '분데스리가', '데뷔', '하다']\n"
],
[
"# Embed the cleaned corpus (the query is cleaned the same way, so compare like with like)\ndocument_embeddings = model.encode(result)\n\nquery = \"손흥민은 어린 나이에 유럽에 진출하였다.\"\ntemp_sentence, _ = cleaning(query)\nquery_embedding = model.encode(temp_sentence)\nprint(temp_sentence)\nprint(query_embedding)\n\ntop_k = min(5, len(docs))\ncos_scores = util.pytorch_cos_sim(query_embedding, document_embeddings)[0]\ntop_results = torch.topk(cos_scores, k=top_k)\n\nprint(f\"입력 문장: {query}\")\nprint(f\"<입력 문장과 유사한 {top_k} 개의 문장>\")\n\nfor i, (score, idx) in enumerate(zip(top_results[0], top_results[1])):\n    print(f\"{i+1}: {docs[idx]} {'(유사도: {:.4f})'.format(score)}\")",
Output (the printed query embedding, several hundred floats, is elided after its first values; the ranked results are the five corpus sentences most similar to the query, all describing Son Heung-min's youth career in Germany):

```text
['손흥민', '어리다', '나이', '유럽', '진출', '하다']
손흥민 어리다 나이 유럽 진출 하다
[ 3.84183377e-02  2.66998932e-02  1.96081847e-01  7.44904950e-02 ... ]
입력 문장: 손흥민은 어린 나이에 유럽에 진출하였다.
<입력 문장과 유사한 5 개의 문장>
1: 독일 U-19 리그에서 손흥민은 11경기 6골, 2부 리그에서는 6경기 1골을 넣으며 재능을 인정받아 2010년 6월 17세의 나이로 함부르크의 1군 팀 훈련에 참가, 프리시즌 활약으로 함부르크와 정식 계약을 한 후 10월 18세에 함부르크 1군 소속으로 독일 분데스리가에 데뷔하였다. (유사도: 0.3712)
2: 춘천 부안초등학교를 졸업했고, 춘천 후평중학교에 입학한 후 2학년때 원주 육민관중학교 축구부에 들어가기 위해 전학하여 졸업하였으며, 2008년 당시 FC 서울의 U-18팀이었던 동북고등학교 축구부에서 선수 활동 중 대한축구협회 우수선수 해외유학 프로젝트에 선발되어 2008년 8월 독일 분데스리가의 함부르크 유소년팀에 입단하였다. (유사도: 0.3175)
3: 그해 11월 함부르크의 정식 유소년팀 선수 계약을 체결하였으며 독일 U-19 리그 4경기 2골을 넣고 2군 리그에 출전을 시작했다. (유사도: 0.2992)
4: 형은 손흥윤이다. (유사도: 0.2897)
5: 함부르크 유스팀 주전 공격수로 2008년 6월 네덜란드에서 열린 4개국 경기에서 4게임에 출전, 3골을 터뜨렸다. (유사도: 0.2740)
```