hexsha (stringlengths 40-40) | size (int64 6-14.9M) | ext (stringclasses 1 value) | lang (stringclasses 1 value) | max_stars_repo_path (stringlengths 6-260) | max_stars_repo_name (stringlengths 6-119) | max_stars_repo_head_hexsha (stringlengths 40-41) | max_stars_repo_licenses (sequence) | max_stars_count (int64 1-191k ⌀) | max_stars_repo_stars_event_min_datetime (stringlengths 24-24 ⌀) | max_stars_repo_stars_event_max_datetime (stringlengths 24-24 ⌀) | max_issues_repo_path (stringlengths 6-260) | max_issues_repo_name (stringlengths 6-119) | max_issues_repo_head_hexsha (stringlengths 40-41) | max_issues_repo_licenses (sequence) | max_issues_count (int64 1-67k ⌀) | max_issues_repo_issues_event_min_datetime (stringlengths 24-24 ⌀) | max_issues_repo_issues_event_max_datetime (stringlengths 24-24 ⌀) | max_forks_repo_path (stringlengths 6-260) | max_forks_repo_name (stringlengths 6-119) | max_forks_repo_head_hexsha (stringlengths 40-41) | max_forks_repo_licenses (sequence) | max_forks_count (int64 1-105k ⌀) | max_forks_repo_forks_event_min_datetime (stringlengths 24-24 ⌀) | max_forks_repo_forks_event_max_datetime (stringlengths 24-24 ⌀) | avg_line_length (float64 2-1.04M) | max_line_length (int64 2-11.2M) | alphanum_fraction (float64 0-1) | cells (sequence) | cell_types (sequence) | cell_type_groups (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e76cb89ea56dbf9bd2249698b86815be039bf560 | 20,458 | ipynb | Jupyter Notebook | docs/clean-prep/voluptuous.ipynb | veit/jupyter-tutorial-de | abe8cd576125d4c860bf17e9d4dc1e19614f79f1 | [
"BSD-3-Clause"
] | null | null | null | docs/clean-prep/voluptuous.ipynb | veit/jupyter-tutorial-de | abe8cd576125d4c860bf17e9d4dc1e19614f79f1 | [
"BSD-3-Clause"
] | 3 | 2021-01-01T16:13:59.000Z | 2021-07-27T15:41:39.000Z | docs/clean-prep/voluptuous.ipynb | veit/jupyter-tutorial-de | abe8cd576125d4c860bf17e9d4dc1e19614f79f1 | [
"BSD-3-Clause"
] | null | null | null | 36.728905 | 435 | 0.548783 | [
[
[
"# Datenvalidierung mit Voluptuous (Schemadefinitionen)\n\nIn diesem Notebook verwenden wir [Voluptuous](https://github.com/alecthomas/voluptuous), um Schemata für unsere Daten zu definieren. Wir können dann die Schemaprüfung an verschiedenen Stellen unserer Bereinigung verwenden, um sicherzustellen, dass wir die Kriterien erfüllen. Schließllich können wir Ausnahmen für die Schemaüberprüfung verwenden, um unreine oder ungültige Daten zu markieren, beiseite zu legen oder zu entfernen.",
"_____no_output_____"
],
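A minimal, self-contained sketch of the validation pattern described above: define a schema, validate each record, and set aside the ones that fail. The field name `amount` and its bounds are made-up illustration values, not taken from the sales data used below.

```python
from voluptuous import Schema, Required, All, Range, ALLOW_EXTRA
from voluptuous.error import MultipleInvalid

# Hypothetical schema: 'amount' must be a float between 0.0 and 100.0;
# extra keys are allowed so partial records can still be checked.
schema = Schema({Required('amount'): All(float, Range(min=0.0, max=100.0))}, extra=ALLOW_EXTRA)

records = [{'amount': 12.5, 'note': 'ok'}, {'amount': -3.0}, {'note': 'missing amount'}]

valid, rejected = [], []
for record in records:
    try:
        valid.append(schema(record))
    except MultipleInvalid as err:
        # keep the record together with the reason so it can be inspected or cleaned later
        rejected.append((record, str(err)))

print(len(valid), 'valid /', len(rejected), 'set aside')
```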
[
"## 1. Importe",
"_____no_output_____"
]
],
[
[
"import logging\nimport pandas as pd\nfrom datetime import datetime\nfrom voluptuous import Schema, Required, Range, All, ALLOW_EXTRA\nfrom voluptuous.error import MultipleInvalid, Invalid",
"_____no_output_____"
]
],
[
[
"## 2. Logger",
"_____no_output_____"
]
],
[
[
"logger = logging.getLogger(0)\nlogger.setLevel(logging.WARNING)",
"_____no_output_____"
]
],
[
[
"## 3. Beispieldaten lesen",
"_____no_output_____"
]
],
[
[
"sales = pd.read_csv('https://raw.githubusercontent.com/kjam/data-cleaning-101/master/data/sales_data.csv')",
"_____no_output_____"
]
],
[
[
"## 4. Daten untersuchen",
"_____no_output_____"
]
],
[
[
"sales.head()",
"_____no_output_____"
],
[
"sales.dtypes",
"_____no_output_____"
]
],
[
[
"## 5. Schema definieren",
"_____no_output_____"
]
],
[
[
"schema = Schema({\n Required('sale_amount'): All(float, \n Range(min=2.50, max=1450.99)),\n}, extra=ALLOW_EXTRA)",
"_____no_output_____"
],
[
"error_count = 0\nfor s_id, sale in sales.T.to_dict().items():\n try:\n schema(sale)\n except MultipleInvalid as e:\n logging.warning('issue with sale: %s (%s) - %s', \n s_id, sale['sale_amount'], e)\n error_count += 1",
"WARNING:root:issue with sale: 3 (-108.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 4 (-372.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 5 (-399.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 6 (-304.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 7 (-295.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 10 (-89.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 13 (-303.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 15 (-432.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 19 (-177.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 20 (-154.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 22 (-130.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 23 (1487.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 25 (-145.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 28 (1471.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 31 (-259.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 38 (-241.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 40 (-4.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 41 (1581.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 45 (1529.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 46 (-238.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 48 (-284.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 51 (-164.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 55 (-184.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 56 (-304.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 59 (1579.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 60 (-455.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 63 (1551.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 65 (-397.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 69 (-400.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 70 (1482.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 71 (-321.0) - value must be at least 2.5 for dictionary 
value @ data['sale_amount']\nWARNING:root:issue with sale: 74 (-47.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 76 (-68.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 86 (1454.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 101 (-213.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 103 (-144.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 104 (-265.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 107 (-349.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 111 (-78.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 112 (-310.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 116 (1570.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 120 (1490.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 123 (-179.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 124 (-391.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 129 (1504.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 130 (-91.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 132 (-372.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 141 (1512.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 142 (-449.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 149 (1494.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 152 (-405.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 155 (1599.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 156 (1527.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 157 (-462.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 162 (-358.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 164 (-78.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 167 (-358.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 171 (-391.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 178 (-304.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 180 (-9.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 187 (1475.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with 
sale: 194 (-433.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 195 (-329.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 196 (-147.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 203 (-319.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 206 (-132.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 207 (-20.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 209 (1539.0) - value must be at most 1450.99 for dictionary value @ data['sale_amount']\nWARNING:root:issue with sale: 211 (-167.0) - value must be at least 2.5 for dictionary value @ data['sale_amount']\n"
],
[
"error_count",
"_____no_output_____"
],
[
"sales.shape",
"_____no_output_____"
]
],
[
[
"Aktuell wissen wir jedoch noch nicht, ob\n\n* wir ein falsch definiertes Schema haben\n* möglicherweise negative Werte zurückgegeben oder falsch markiert werden\n* höhere Werte kombinierte Einkäufe oder Sonderverkäufe sind",
"_____no_output_____"
],
[
"## 6. Hinzufügen einer benutzerdefinierten Validierung",
"_____no_output_____"
]
],
[
[
"def ValidDate(fmt='%Y-%m-%d %H:%M:%S'):\n return lambda v: datetime.strptime(v, fmt)",
"_____no_output_____"
],
[
"schema = Schema({\n Required('timestamp'): All(ValidDate()),\n}, extra=ALLOW_EXTRA)",
"_____no_output_____"
],
[
"error_count = 0\nfor s_id, sale in sales.T.to_dict().items():\n try:\n schema(sale)\n except MultipleInvalid as e:\n logging.warning('issue with sale: %s (%s) - %s', \n s_id, sale['timestamp'], e)\n error_count += 1",
"_____no_output_____"
],
[
"error_count",
"_____no_output_____"
]
],
[
[
"## 7. Gültige Datumsstrukturen sind noch keine gültigen Daten",
"_____no_output_____"
]
],
[
[
"def ValidDate(fmt='%Y-%m-%d %H:%M:%S'):\n def validation_func(v):\n try:\n assert datetime.strptime(v, fmt) <= datetime.now()\n except AssertionError:\n raise Invalid('date is in the future! %s' % v)\n return validation_func",
"_____no_output_____"
],
[
"schema = Schema({\n Required('timestamp'): All(ValidDate()),\n}, extra=ALLOW_EXTRA)",
"_____no_output_____"
],
[
"error_count = 0\nfor s_id, sale in sales.T.to_dict().items():\n try:\n schema(sale)\n except MultipleInvalid as e:\n logging.warning('issue with sale: %s (%s) - %s', \n s_id, sale['timestamp'], e)\n error_count += 1",
"_____no_output_____"
],
[
"error_count",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e76cb978766b05ead8339d19f88f87514eb10117 | 24,376 | ipynb | Jupyter Notebook | example/1D-Heisenberg.ipynb | liwt31/Renormalizer | 123a9d53f4f5f32c0088c255475f0ee60d02c745 | [
"Apache-2.0"
] | null | null | null | example/1D-Heisenberg.ipynb | liwt31/Renormalizer | 123a9d53f4f5f32c0088c255475f0ee60d02c745 | [
"Apache-2.0"
] | null | null | null | example/1D-Heisenberg.ipynb | liwt31/Renormalizer | 123a9d53f4f5f32c0088c255475f0ee60d02c745 | [
"Apache-2.0"
] | null | null | null | 125.005128 | 18,368 | 0.85859 | [
[
[
"open boundary spin 1/2 1-D Heisenberg model\n\n$H = J \\sum_i [S_i^z S_{i+1}^z + \\frac{1}{2}(S_i^+ S_{i+1}^- + S_i^- S_{i+1}^+)]$\n\nexact result (Bethe Anstatz):\n\nL\t E/J\n\n16\t-6.9117371455749\n\n24\t-10.4537857604096\n\n32\t-13.9973156182243\n\n48\t-21.0859563143863\n\n64\t-28.1754248597421\n",
"_____no_output_____"
]
],
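As a quick cross-check of the Bethe Ansatz energies tabulated above, the same open-chain Hamiltonian can be diagonalized exactly for a short chain with plain NumPy/SciPy. This sketch is independent of the Renormalizer/DMRG code used below; the chain length N=10 is an assumption chosen only to keep the 2^N Hilbert space small.

```python
import numpy as np
from scipy.sparse import csr_matrix, kron
from scipy.sparse.linalg import eigsh

# Spin-1/2 operators (S^z, S^+, S^-) in the {up, down} basis
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])
sm = sp.T

def two_site(op1, op2, i, n):
    """Tensor product placing op1 on site i and op2 on site i+1 of an n-site chain."""
    mats = [np.eye(2)] * n
    mats[i], mats[i + 1] = op1, op2
    out = mats[0]
    for m in mats[1:]:
        out = kron(out, m, format='csr')
    return out

def heisenberg_open(n, J=1.0):
    """H = J * sum_i [Sz_i Sz_{i+1} + 1/2 (S+_i S-_{i+1} + S-_i S+_{i+1})], open boundaries."""
    H = csr_matrix((2 ** n, 2 ** n))
    for i in range(n - 1):
        H = H + J * (two_site(sz, sz, i, n)
                     + 0.5 * (two_site(sp, sm, i, n) + two_site(sm, sp, i, n)))
    return H

n = 10  # assumption: small chain so exact diagonalization stays cheap
e0 = eigsh(heisenberg_open(n), k=1, which='SA', return_eigenvectors=False)[0]
print(f'N={n} open-chain ground-state energy E/J = {e0:.10f}')
```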
[
[
"from renormalizer.mps import Mps, Mpo, solver\nfrom renormalizer.model import MolList2, ModelTranslator\nfrom renormalizer.utils import basis as ba\nfrom renormalizer.utils import Op\nimport numpy as np\n\n# define the # of spins\nnspin = 16\n\n# define the model\n# sigma^+ = S^+\n# sigma^- = S^-\n# 1/2 sigma^x,y,z = S^x,y,z\n\nmodel = dict()\nfor ispin in range(nspin-1):\n model[(f\"e_{ispin}\", f\"e_{ispin+1}\")] = [(Op(\"sigma_z\",0),\n Op(\"sigma_z\",0), 1.0/4), (Op(\"sigma_+\",0), Op(\"sigma_-\",0), 1.0/2),\n (Op(\"sigma_-\",0), Op(\"sigma_+\", 0), 1.0/2)]\n\n# set the spin order and local basis\norder = {}\nbasis = []\nfor ispin in range(nspin):\n order[f\"e_{ispin}\"] = ispin\n basis.append(ba.BasisHalfSpin(sigmaqn=[0,0]))\n\n# construct MPO\nmol_list2 = MolList2(order, basis, model, ModelTranslator.general_model)\nmpo = Mpo(mol_list2)\nprint(f\"mpo_bond_dims:{mpo.bond_dims}\")\n\n# set the sweep paramter\nM=30\nprocedure = [[M, 0.2], [M, 0], [M, 0], [M,0], [M,0]]\n\n# initialize a random MPS\nqntot = 0\nmps = Mps.random(mol_list2, qntot, M)\n\nmps.optimize_config.procedure = procedure\nmps.optimize_config.method = \"2site\"\n\n# optimize MPS\nenergies = solver.optimize_mps_dmrg(mps.copy(), mpo)\nprint(\"gs energy:\", energies.min())\n",
"2020-04-15 22:42:12,231[DEBUG] # of operator terms: 45\n2020-04-15 22:42:12,232[DEBUG] symbolic mpo algorithm: Hopcroft-Karp\n2020-04-15 22:42:12,285[DEBUG] mmax, percent: 30, 0.2\n2020-04-15 22:42:12,290[DEBUG] energy: -0.1085174478945915\n2020-04-15 22:42:12,290[DEBUG] current size: 151.2KiB, Matrix product bond dim:[1, 2, 4, 8, 16, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 1]\n"
],
[
"# plot the microiterations vs energy error\nimport matplotlib.pyplot as plt\nimport logging \n\nmpl_logger = logging.getLogger('matplotlib') \nmpl_logger.setLevel(logging.WARNING) \n\nplt.rc('font', family='Times New Roman', size=16)\nplt.rc('axes', linewidth=1.5)\nplt.rcParams['lines.linewidth'] = 2\nstd = -6.9117371455749\nfig, ax = plt.subplots(figsize=(8,6))\nplt.plot(np.arange(len(energies)), np.array(energies)-std,\"o-\",ms=5)\n\nplt.yscale('log')\nplt.xlabel(\"microiterations\")\nplt.ylabel(\"$\\Delta$ E\")\nplt.ylim(1e-9, 10)\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
e76cba7119228b0ca93f5e101c0e632db1c57d68 | 32,556 | ipynb | Jupyter Notebook | striplog_def_Matt.ipynb | ThomasMGeo/CSV2Strip | dfd5ffb4037147d5073ff4b54666ddd1d01b20ad | [
"Apache-2.0"
] | null | null | null | striplog_def_Matt.ipynb | ThomasMGeo/CSV2Strip | dfd5ffb4037147d5073ff4b54666ddd1d01b20ad | [
"Apache-2.0"
] | null | null | null | striplog_def_Matt.ipynb | ThomasMGeo/CSV2Strip | dfd5ffb4037147d5073ff4b54666ddd1d01b20ad | [
"Apache-2.0"
] | null | null | null | 85.673684 | 10,872 | 0.804214 | [
[
[
"# Striplog from CSV",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport striplog\nstriplog.__version__",
"_____no_output_____"
],
[
"from striplog import Lexicon, Decor, Component, Legend, Interval, Striplog",
"_____no_output_____"
]
],
[
[
"## Make legend\n\nMost of the stuff in the dicts you made were about **display** — so they are going to make `Decor` objects. A collection of `Decor`s makes a `Legend`. A `Legend` determines how a striplog is displayed.\n\nFirst I'll make the components, since those are easy. I'll move `'train'` into there too, since it is to do with the rocks, not the display. If it seems weird having `'train'` in the `Component` (which is really supposed to be about direct descriptions of the rock, but the idea is that it's always the same for all specimens of that rock so it does fit here) then you could put it in `data` instead. ",
"_____no_output_____"
]
],
[
[
"facies = {\n 's': Component({'lithology': 'sandstone', 'train':'y'}),\n 'i': Component({'lithology': 'interbedded', 'train':'y'}),\n 'sh': Component({'lithology': 'shale', 'train':'y'}),\n 'bs': Component({'lithology': 'sandstone', 'train': 'n'}),\n 't': Component({'lithology': 'turbidite', 'train':'y'}),\n 'nc': Component({'lithology': 'none', 'train':'n'}),\n}",
"_____no_output_____"
],
[
"sandstone = Decor({\n 'component': facies['s'],\n 'colour': 'yellow',\n 'hatch': '.',\n 'width': '3',\n})\n\ninterbedded = Decor({\n 'component': facies['i'],\n 'colour': 'darkseagreen',\n 'hatch': '--',\n 'width': '2',\n})\n\nshale = Decor({\n 'component': facies['sh'],\n 'colour': 'darkgray',\n 'hatch': '-',\n 'width': '1',\n})\n\nbadsand = Decor({\n 'component': facies['bs'],\n 'colour': 'orange',\n 'hatch': '.',\n 'width': '3',\n})\n\n\n# Not sure about the best way to do this, probably better\n# just to omit those intervals completely.\nnocore = Decor({\n 'component': facies['nc'],\n 'colour': 'white',\n 'hatch': '/',\n 'width': '5',\n})\n\nturbidite = Decor({\n 'component': facies['t'],\n 'colour': 'green',\n 'hatch': 'xxx',\n 'width': '3',\n})",
"_____no_output_____"
],
[
"legend = Legend([sandstone, badsand, interbedded, shale, turbidite, nocore])",
"_____no_output_____"
],
[
"legend",
"_____no_output_____"
]
],
[
[
"## Read CSV into striplog",
"_____no_output_____"
]
],
[
[
"strip = Striplog.from_csv('test.csv')",
"_____no_output_____"
],
[
"strip[0]",
"_____no_output_____"
]
],
[
[
"## Deal with lithology\n\nThe lithology has been turned into a component, but it's using the abbreviation... I can't figure out an elegant way to deal with this so, for now, we'll just loop over the striplog and fix it. We read the `data` item's lithology (`'s'` in the top layer), then look up the correct lithology name in our abbreviation dictionary, then add the new component in the proper place. Finally, we delete the `data` we had.",
"_____no_output_____"
]
],
[
[
"for s in strip:\n lith = s.data['lithology']\n s.components = [facies[lith]]\n s.data = {}",
"_____no_output_____"
],
[
"strip[0]",
"_____no_output_____"
]
],
[
[
"That's better!",
"_____no_output_____"
]
],
[
[
"strip.plot(legend)",
"_____no_output_____"
]
],
[
[
"## Remove non-training layers",
"_____no_output_____"
]
],
[
[
"strip",
"_____no_output_____"
],
[
"strip_train = Striplog([s for s in strip if s.primary['train'] == 'y'])",
"_____no_output_____"
],
[
"strip_train",
"_____no_output_____"
],
[
"strip_train.plot(legend)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e76cefd6c87449a2f7d438037cac1cd8c7e20068 | 136,218 | ipynb | Jupyter Notebook | ROC_classification.ipynb | yuewu57/mental_health_AMoSS | 3ca8c5031ade67332cdc5b4cdd0054f363fd0402 | [
"Apache-2.0"
] | null | null | null | ROC_classification.ipynb | yuewu57/mental_health_AMoSS | 3ca8c5031ade67332cdc5b4cdd0054f363fd0402 | [
"Apache-2.0"
] | null | null | null | ROC_classification.ipynb | yuewu57/mental_health_AMoSS | 3ca8c5031ade67332cdc5b4cdd0054f363fd0402 | [
"Apache-2.0"
] | null | null | null | 523.915385 | 43,532 | 0.944721 | [
[
[
"This notebook aims to produce Figure S2 in \n\n Supplementary Material for the preprint:\n <Deriving information from missing data: implications for mood prediction>",
"_____no_output_____"
]
],
[
[
"import os\nimport random\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport matplotlib.dates as mdates\nimport datetime\nimport time\nimport csv\nimport math\nimport scipy\nimport seaborn as sns\nfrom scipy.stats import iqr\nimport h5py\nimport pickle\nfrom tqdm import tqdm\nimport copy\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.preprocessing import LabelEncoder\nimport iisignature\nfrom datetime import date",
"_____no_output_____"
],
[
"from itertools import cycle\n\nfrom sklearn import svm, datasets\nfrom sklearn.metrics import roc_curve, auc\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import label_binarize\nfrom sklearn.multiclass import OneVsRestClassifier\nfrom scipy import interp",
"_____no_output_____"
],
[
"from classifiers import *\nfrom data_cleaning import *\nfrom data_transforms import *\nfrom ROC_functions import *",
"_____no_output_____"
]
],
[
[
"Load data",
"_____no_output_____"
]
],
[
[
"test_path='./all-true-colours-matlab-2017-02-27-18-38-12-nick/'\nparticipants_list, participants_data_list,\\\n participants_time_list=loadParticipants(test_path)\n \n\nParticipants=make_classes(participants_data_list,\\\n participants_time_list,\\\n participants_list)\ncohort=cleaning_sameweek_data(cleaning_same_data(Participants))",
"14050\n"
]
],
[
[
"**Length=20w**",
"_____no_output_____"
],
[
"* Missing-response-incorporated signature-based classification model (MRSCM, level2)",
"_____no_output_____"
]
],
[
[
"y_tests_mc, y_scores_mc=model_roc(cohort, minlen=20, training=0.7,order=2, sample_size=100) )\nplot_roc(y_tests_mc, y_scores_mc, \"signature_method_missingcount_2ndtrial\", n_classes=3,lw=2)",
"The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\n"
]
],
[
[
"* Missing-response-incorporated signature-based classification model (MRSCM, level3)",
"_____no_output_____"
]
],
[
[
"y_tests_mc3, y_scores_mc3=model_roc(cohort, minlen=20, training=0.7,order=3, sample_size=100)\nplot_roc(y_tests_mc3, y_scores_mc3, \"signature_method_missingcount3\", n_classes=3,lw=2)",
"The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\n"
]
],
[
[
"* Naive classification model",
"_____no_output_____"
]
],
[
[
"if __name__ == \"__main__\":\n\n\n \n y_tests_naive, y_scores_naive=model_roc(cohort, minlen=20,\\\n training=0.7,\\\n order=None,\\\n sample_size=100,\\\n standardise=False,\\\n count=False,\\\n feedforward=False,\\\n naive=True,\\\n time=False)\n \n plot_roc(y_tests_naive, y_scores_naive, \"naive_method\", n_classes=3,lw=2)",
"The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e76cfb94a4cb0122f23da3fd8354cafcb2c50a93 | 11,115 | ipynb | Jupyter Notebook | GOSTNets/Notebooks/Borders_and_MarketAccess.ipynb | ramarty/GOST_PublicGoods | de9cf36e37208eaf69253e784833990ceeb1058a | [
"MIT"
] | 40 | 2018-03-13T13:47:36.000Z | 2022-02-17T13:17:32.000Z | GOSTNets/Notebooks/Borders_and_MarketAccess.ipynb | ramarty/GOST_PublicGoods | de9cf36e37208eaf69253e784833990ceeb1058a | [
"MIT"
] | 11 | 2018-07-30T20:17:13.000Z | 2020-08-13T20:30:20.000Z | GOSTNets/Notebooks/Borders_and_MarketAccess.ipynb | ramarty/GOST_PublicGoods | de9cf36e37208eaf69253e784833990ceeb1058a | [
"MIT"
] | 14 | 2018-08-14T07:47:56.000Z | 2021-11-23T11:07:56.000Z | 30.368852 | 95 | 0.478183 | [
[
[
"import importlib\nimport sys,os,math,time\n\ngostNetsFolder = os.path.dirname(os.getcwd())\nsys.path.insert(0, gostNetsFolder)\nfrom GOSTNets import GOSTnet as gn\nfrom GOSTNets import OSMParser\nimportlib.reload(gn)\n\nimport networkx as nx\nimport geopandas as gpd\nimport numpy as np\nimport osmnx as ox\nimport pandas as pd\nimport rasterio\n\nfrom shapely.geometry import Point\nfrom shapely.ops import nearest_points",
"peartree version: 0.6.0 \nnetworkx version: 2.2 \nmatplotlib version: 2.2.2 \nosmnx version: 0.8.2 \n"
],
[
"inputPBF = r\"Q:\\AFRICA\\MRT\\INFRA\\mauritania-latest_20190103_OSMLR12.osm.xml\"\ninputBorders = r\"Q:\\AFRICA\\MRT\\INFRA\\MRT_Fake_Border.shp\"\noriginFile = r\"Q:\\AFRICA\\MRT\\INFRA\\Origins.shp\"\ndestFile = r\"Q:\\AFRICA\\MRT\\INFRA\\Destinations.shp\"",
"_____no_output_____"
],
[
"#Read in the network object and convert to GDF\nimportlib.reload(OSMParser)\nimportlib.reload(gn)\n\nG = OSMParser.read_osm(inputPBF)\nG = gn.convert_network_to_time(G, distance_tag = 'length', speed_dict = gn.speed_dict)\nedgeDF = gn.edge_gdf_from_graph(G, xCol='lon', yCol='lat')\nnodeDF = gn.node_gdf_from_graph(G, xCol='lon', yCol='lat')\n\ninB = gpd.read_file(inputBorders)\ninO = gpd.read_file(originFile)\ninD = gpd.read_file(destFile)",
"peartree version: 0.6.0 \nnetworkx version: 2.2 \nmatplotlib version: 2.2.2 \nosmnx version: 0.8.2 \n"
],
[
"#Identify all network edges that intersect the borders file\nadjustments = {}\nfor idx, row in inB.iterrows():\n #Select the roads that intersect this border\n ### TODO - use a spatial index here, it is too slow\n intersections = edgeDF[edgeDF.intersects(row['geometry'])]\n for selIdx, selRow in intersections.iterrows():\n adjustments[selRow['id']] = row['id']",
"_____no_output_____"
],
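One possible way to address the `### TODO - use a spatial index` note in the cell above is to pre-filter candidate edges with the GeoDataFrame's R-tree index before running the exact intersection test. This is an untested sketch against the same `edgeDF` and `inB` objects and column names used above; it builds the same `adjustments` dictionary.

```python
# Same adjustments dict as above, but query the R-tree spatial index first so the
# exact intersects() test only runs on edges whose bounding boxes touch the border.
adjustments = {}
edge_sindex = edgeDF.sindex
for idx, row in inB.iterrows():
    # positional indices of bounding-box candidates
    candidate_pos = list(edge_sindex.intersection(row['geometry'].bounds))
    candidates = edgeDF.iloc[candidate_pos]
    # exact geometric test only on the candidates
    hits = candidates[candidates.intersects(row['geometry'])]
    for _, edge in hits.iterrows():
        adjustments[edge['id']] = row['id']
```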
[
"#Loop through edges in network and add time\nG_adj = G.copy()\nfor u,v,data in G_adj.edges(data=True):\n if data['id'] in adjustments.keys():\n data['time'] = data['time'] + adjustments[data['id']]",
"_____no_output_____"
],
[
"# identify intersecting nodes for input and output nodes\nimportlib.reload(gn)\n\ninNodes = []\ndestNodes = []\nfor idx, row in inO.iterrows():\n nPt = nodeDF.distance(row['geometry'])\n inNodes.append(nodeDF['node_ID'][nPt.idxmin()]) \n\ndestNodes = []\nfor idx, row in inD.iterrows():\n nPt = nodeDF.distance(row['geometry'])\n destNodes.append(nodeDF['node_ID'][nPt.idxmin()]) \n",
"peartree version: 0.6.0 \nnetworkx version: 2.2 \nmatplotlib version: 2.2.2 \nosmnx version: 0.8.2 \n"
],
[
"#Run OD matrix\nundistrubed = gn.calculate_OD(G, inNodes, destNodes, -1)\ndistrubed = gn.calculate_OD(G_adj, inNodes, destNodes, -1)",
"_____no_output_____"
],
[
"distrubed",
"_____no_output_____"
],
[
"undistrubed",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76d0a27bf0468fe07c163f717e9eb7c650fde6c | 28,976 | ipynb | Jupyter Notebook | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n | 583c00a412cefd168a19975bf2f4cef2e2fc9dca | [
"Apache-2.0"
] | null | null | null | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n | 583c00a412cefd168a19975bf2f4cef2e2fc9dca | [
"Apache-2.0"
] | null | null | null | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n | 583c00a412cefd168a19975bf2f4cef2e2fc9dca | [
"Apache-2.0"
] | null | null | null | 27.969112 | 429 | 0.490682 | [
[
[
"##### Copyright 2018 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
],
[
"#@title MIT License\n#\n# Copyright (c) 2017 François Chollet\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.",
"_____no_output_____"
]
],
[
[
"# はじめてのニューラルネットワーク:分類問題の初歩",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/keras/classification\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/classification.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/classification.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"Note: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳は**ベストエフォート**であるため、この翻訳が正確であることや[英語の公式ドキュメント](https://www.tensorflow.org/?hl=en)の 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリ[tensorflow/docs](https://github.com/tensorflow/docs)にプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [[email protected] メーリングリスト](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ja)にご連絡ください。",
"_____no_output_____"
],
[
"このガイドでは、スニーカーやシャツなど、身に着けるものの写真を分類するニューラルネットワークのモデルを訓練します。すべての詳細を理解できなくても問題ありません。TensorFlowの全体を早足で掴むためのもので、詳細についてはあとから見ていくことになります。\n\nこのガイドでは、TensorFlowのモデルを構築し訓練するためのハイレベルのAPIである [tf.keras](https://www.tensorflow.org/guide/keras)を使用します。",
"_____no_output_____"
]
],
[
[
"try:\n # Colab only\n %tensorflow_version 2.x\nexcept Exception:\n pass\n",
"_____no_output_____"
],
[
"from __future__ import absolute_import, division, print_function, unicode_literals\n\n# TensorFlow と tf.keras のインポート\nimport tensorflow as tf\nfrom tensorflow import keras\n\n# ヘルパーライブラリのインポート\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nprint(tf.__version__)",
"_____no_output_____"
]
],
[
[
"## ファッションMNISTデータセットのロード",
"_____no_output_____"
],
[
"このガイドでは、[Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist)を使用します。Fashion MNISTには10カテゴリーの白黒画像70,000枚が含まれています。それぞれは下図のような1枚に付き1種類の衣料品が写っている低解像度(28×28ピクセル)の画像です。\n\n<table>\n <tr><td>\n <img src=\"https://tensorflow.org/images/fashion-mnist-sprite.png\"\n alt=\"Fashion MNIST sprite\" width=\"600\">\n </td></tr>\n <tr><td align=\"center\">\n <b>Figure 1.</b> <a href=\"https://github.com/zalandoresearch/fashion-mnist\">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/> \n </td></tr>\n</table>\n\nFashion MNISTは、画像処理のための機械学習での\"Hello, World\"としてしばしば登場する[MNIST](http://yann.lecun.com/exdb/mnist/) データセットの代替として開発されたものです。MNISTデータセットは手書きの数字(0, 1, 2 など)から構成されており、そのフォーマットはこれから使うFashion MNISTと全く同じです。\n\nFashion MNISTを使うのは、目先を変える意味もありますが、普通のMNISTよりも少しだけ手応えがあるからでもあります。どちらのデータセットも比較的小さく、アルゴリズムが期待したとおりに機能するかどうかを確かめるために使われます。プログラムのテストやデバッグのためには、よい出発点になります。\n\nここでは、60,000枚の画像を訓練に、10,000枚の画像を、ネットワークが学習した画像分類の正確性を評価するのに使います。TensorFlowを使うと、下記のようにFashion MNISTのデータを簡単にインポートし、ロードすることが出来ます。",
"_____no_output_____"
]
],
[
[
"fashion_mnist = keras.datasets.fashion_mnist\n\n(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()",
"_____no_output_____"
]
],
[
[
"ロードしたデータセットは、NumPy配列になります。\n\n* `train_images` と `train_labels` の2つの配列は、モデルの訓練に使用される**訓練用データセット**です。\n* 訓練されたモデルは、 `test_images` と `test_labels` 配列からなる**テスト用データセット**を使ってテストします。\n\n画像は28×28のNumPy配列から構成されています。それぞれのピクセルの値は0から255の間の整数です。**ラベル**(label)は、0から9までの整数の配列です。それぞれの数字が下表のように、衣料品の**クラス**(class)に対応しています。\n\n<table>\n <tr>\n <th>Label</th>\n <th>Class</th> \n </tr>\n <tr>\n <td>0</td>\n <td>T-shirt/top</td> \n </tr>\n <tr>\n <td>1</td>\n <td>Trouser</td> \n </tr>\n <tr>\n <td>2</td>\n <td>Pullover</td> \n </tr>\n <tr>\n <td>3</td>\n <td>Dress</td> \n </tr>\n <tr>\n <td>4</td>\n <td>Coat</td> \n </tr>\n <tr>\n <td>5</td>\n <td>Sandal</td> \n </tr>\n <tr>\n <td>6</td>\n <td>Shirt</td> \n </tr>\n <tr>\n <td>7</td>\n <td>Sneaker</td> \n </tr>\n <tr>\n <td>8</td>\n <td>Bag</td> \n </tr>\n <tr>\n <td>9</td>\n <td>Ankle boot</td> \n </tr>\n</table>\n\n画像はそれぞれ単一のラベルに分類されます。データセットには上記の**クラス名**が含まれていないため、後ほど画像を出力するときのために、クラス名を保存しておきます。",
"_____no_output_____"
]
],
[
[
"class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', \n 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']",
"_____no_output_____"
]
],
[
[
"## データの観察\n\nモデルの訓練を行う前に、データセットのフォーマットを見てみましょう。下記のように、訓練用データセットには28×28ピクセルの画像が60,000枚含まれています。",
"_____no_output_____"
]
],
[
[
"train_images.shape",
"_____no_output_____"
]
],
[
[
"同様に、訓練用データセットには60,000個のラベルが含まれます。",
"_____no_output_____"
]
],
[
[
"len(train_labels)",
"_____no_output_____"
]
],
[
[
"ラベルはそれぞれ、0から9までの間の整数です。",
"_____no_output_____"
]
],
[
[
"train_labels",
"_____no_output_____"
]
],
[
[
"テスト用データセットには、10,000枚の画像が含まれます。画像は28×28ピクセルで構成されています。",
"_____no_output_____"
]
],
[
[
"test_images.shape",
"_____no_output_____"
]
],
[
[
"テスト用データセットには10,000個のラベルが含まれます。",
"_____no_output_____"
]
],
[
[
"len(test_labels)",
"_____no_output_____"
]
],
[
[
"## データの前処理\n\nネットワークを訓練する前に、データを前処理する必要があります。最初の画像を調べてみればわかるように、ピクセルの値は0から255の間の数値です。",
"_____no_output_____"
]
],
[
[
"plt.figure()\nplt.imshow(train_images[0])\nplt.colorbar()\nplt.grid(False)\nplt.show()",
"_____no_output_____"
]
],
[
[
"ニューラルネットワークにデータを投入する前に、これらの値を0から1までの範囲にスケールします。そのためには、画素の値を255で割ります。\n\n**訓練用データセット**と**テスト用データセット**は、同じように前処理することが重要です。",
"_____no_output_____"
]
],
[
[
"train_images = train_images / 255.0\n\ntest_images = test_images / 255.0",
"_____no_output_____"
]
],
[
[
"**訓練用データセット**の最初の25枚の画像を、クラス名付きで表示してみましょう。ネットワークを構築・訓練する前に、データが正しいフォーマットになっていることを確認します。",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(10,10))\nfor i in range(25):\n plt.subplot(5,5,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(train_images[i], cmap=plt.cm.binary)\n plt.xlabel(class_names[train_labels[i]])\nplt.show()",
"_____no_output_____"
]
],
[
[
"## モデルの構築\n\nニューラルネットワークを構築するには、まずモデルの階層を定義し、その後モデルをコンパイルします。",
"_____no_output_____"
],
[
"### 層の設定\n\nニューラルネットワークを形作る基本的な構成要素は**層**(layer)です。層は、入力されたデータから「表現」を抽出します。それらの「表現」は、今取り組もうとしている問題に対して、より「意味のある」ものであることが期待されます。\n\nディープラーニングモデルのほとんどは、単純な層の積み重ねで構成されています。`tf.keras.layers.Dense` のような層のほとんどには、訓練中に学習されるパラメータが存在します。",
"_____no_output_____"
]
],
[
[
"model = keras.Sequential([\n keras.layers.Flatten(input_shape=(28, 28)),\n keras.layers.Dense(128, activation='relu'),\n keras.layers.Dense(10, activation='softmax')\n])",
"_____no_output_____"
]
],
[
[
"このネットワークの最初の層は、`tf.keras.layers.Flatten` です。この層は、画像を(28×28ピクセルの)2次元配列から、28×28=784ピクセルの、1次元配列に変換します。この層が、画像の中に積まれているピクセルの行を取り崩し、横に並べると考えてください。この層には学習すべきパラメータはなく、ただデータのフォーマット変換を行うだけです。\n\nピクセルが1次元化されたあと、ネットワークは2つの `tf.keras.layers.Dense` 層となります。これらの層は、密結合あるいは全結合されたニューロンの層となります。最初の `Dense` 層には、128個のノード(あるはニューロン)があります。最後の層でもある2番めの層は、10ノードの**softmax**層です。この層は、合計が1になる10個の確率の配列を返します。それぞれのノードは、今見ている画像が10個のクラスのひとつひとつに属する確率を出力します。\n\n### モデルのコンパイル\n\nモデルが訓練できるようになるには、いくつかの設定を追加する必要があります。それらの設定は、モデルの**コンパイル**(compile)時に追加されます。\n\n* **損失関数**(loss function) —訓練中にモデルがどれくらい正確かを測定します。この関数の値を最小化することにより、訓練中のモデルを正しい方向に向かわせようというわけです。\n* **オプティマイザ**(optimizer)—モデルが見ているデータと、損失関数の値から、どのようにモデルを更新するかを決定します。\n* **メトリクス**(metrics) —訓練とテストのステップを監視するのに使用します。下記の例では*accuracy* (正解率)、つまり、画像が正しく分類された比率を使用しています。",
"_____no_output_____"
]
],
[
[
"model.compile(optimizer='adam', \n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"## モデルの訓練\n\nニューラルネットワークの訓練には次のようなステップが必要です。\n\n1. モデルに訓練用データを投入します—この例では `train_images` と `train_labels` の2つの配列です。\n2. モデルは、画像とラベルの対応関係を学習します。\n3. モデルにテスト用データセットの予測(分類)を行わせます—この例では `test_images` 配列です。その後、予測結果と `test_labels` 配列を照合します。 \n\n訓練を開始するには、`model.fit` メソッドを呼び出します。モデルを訓練用データに \"fit\"(適合)させるという意味です。",
"_____no_output_____"
]
],
[
[
"model.fit(train_images, train_labels, epochs=5)",
"_____no_output_____"
]
],
[
[
"モデルの訓練の進行とともに、損失値と正解率が表示されます。このモデルの場合、訓練用データでは0.88(すなわち88%)の正解率に達します。",
"_____no_output_____"
],
[
"## 正解率の評価\n\n次に、テスト用データセットに対するモデルの性能を比較します。",
"_____no_output_____"
]
],
[
[
"test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)\n\nprint('\\nTest accuracy:', test_acc)",
"_____no_output_____"
]
],
[
[
"ご覧の通り、テスト用データセットでの正解率は、訓練用データセットでの正解率よりも少し低くなります。この訓練時の正解率とテスト時の正解率の差は、**過学習**(over fitting)の一例です。過学習とは、新しいデータに対する機械学習モデルの性能が、訓練時と比較して低下する現象です。",
"_____no_output_____"
],
[
"## 予測する\n\nモデルの訓練が終わったら、そのモデルを使って画像の分類予測を行うことが出来ます。",
"_____no_output_____"
]
],
[
[
"predictions = model.predict(test_images)",
"_____no_output_____"
]
],
[
[
"これは、モデルがテスト用データセットの画像のひとつひとつを分類予測した結果です。最初の予測を見てみましょう。",
"_____no_output_____"
]
],
[
[
"predictions[0]",
"_____no_output_____"
]
],
[
[
"予測結果は、10個の数字の配列です。これは、その画像が10の衣料品の種類のそれぞれに該当するかの「確信度」を表しています。どのラベルが一番確信度が高いかを見てみましょう。",
"_____no_output_____"
]
],
[
[
"np.argmax(predictions[0])",
"_____no_output_____"
]
],
[
[
"というわけで、このモデルは、この画像が、アンクルブーツ、`class_names[9]` である可能性が最も高いと判断したことになります。これが正しいかどうか、テスト用ラベルを見てみましょう。",
"_____no_output_____"
]
],
[
[
"test_labels[0]",
"_____no_output_____"
]
],
[
[
"10チャンネルすべてをグラフ化してみることができます。",
"_____no_output_____"
]
],
[
[
"def plot_image(i, predictions_array, true_label, img):\n predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]\n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n\n plt.imshow(img, cmap=plt.cm.binary)\n\n predicted_label = np.argmax(predictions_array)\n if predicted_label == true_label:\n color = 'blue'\n else:\n color = 'red'\n\n plt.xlabel(\"{} {:2.0f}% ({})\".format(class_names[predicted_label],\n 100*np.max(predictions_array),\n class_names[true_label]),\n color=color)\n\ndef plot_value_array(i, predictions_array, true_label):\n predictions_array, true_label = predictions_array[i], true_label[i]\n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n thisplot = plt.bar(range(10), predictions_array, color=\"#777777\")\n plt.ylim([0, 1]) \n predicted_label = np.argmax(predictions_array)\n\n thisplot[predicted_label].set_color('red')\n thisplot[true_label].set_color('blue')",
"_____no_output_____"
]
],
[
[
"0番目の画像と、予測、予測配列を見てみましょう。",
"_____no_output_____"
]
],
[
[
"i = 0\nplt.figure(figsize=(6,3))\nplt.subplot(1,2,1)\nplot_image(i, predictions, test_labels, test_images)\nplt.subplot(1,2,2)\nplot_value_array(i, predictions, test_labels)\nplt.show()",
"_____no_output_____"
],
[
"i = 12\nplt.figure(figsize=(6,3))\nplt.subplot(1,2,1)\nplot_image(i, predictions, test_labels, test_images)\nplt.subplot(1,2,2)\nplot_value_array(i, predictions, test_labels)\nplt.show()",
"_____no_output_____"
]
],
[
[
"予測の中のいくつかの画像を、予測値とともに表示してみましょう。正しい予測は青で、誤っている予測は赤でラベルを表示します。数字は予測したラベルのパーセント(100分率)を示します。自信があるように見えても間違っていることがあることに注意してください。",
"_____no_output_____"
]
],
[
[
"# X個のテスト画像、予測されたラベル、正解ラベルを表示します。\n# 正しい予測は青で、間違った予測は赤で表示しています。\nnum_rows = 5\nnum_cols = 3\nnum_images = num_rows*num_cols\nplt.figure(figsize=(2*2*num_cols, 2*num_rows))\nfor i in range(num_images):\n plt.subplot(num_rows, 2*num_cols, 2*i+1)\n plot_image(i, predictions, test_labels, test_images)\n plt.subplot(num_rows, 2*num_cols, 2*i+2)\n plot_value_array(i, predictions, test_labels)\nplt.show()",
"_____no_output_____"
]
],
[
[
"最後に、訓練済みモデルを使って1枚の画像に対する予測を行います。",
"_____no_output_____"
]
],
[
[
"# テスト用データセットから画像を1枚取り出す\nimg = test_images[0]\n\nprint(img.shape)",
"_____no_output_____"
]
],
[
[
"`tf.keras` モデルは、サンプルの中の**バッチ**(batch)あるいは「集まり」について予測を行うように作られています。そのため、1枚の画像を使う場合でも、リスト化する必要があります。",
"_____no_output_____"
]
],
[
[
"# 画像を1枚だけのバッチのメンバーにする\nimg = (np.expand_dims(img,0))\n\nprint(img.shape)",
"_____no_output_____"
]
],
[
[
"そして、予測を行います。",
"_____no_output_____"
]
],
[
[
"predictions_single = model.predict(img)\n\nprint(predictions_single)",
"_____no_output_____"
],
[
"plot_value_array(0, predictions_single, test_labels)\n_ = plt.xticks(range(10), class_names, rotation=45)",
"_____no_output_____"
]
],
[
[
"`model.predict` メソッドの戻り値は、リストのリストです。リストの要素のそれぞれが、バッチの中の画像に対応します。バッチの中から、(といってもバッチの中身は1つだけですが)予測を取り出します。",
"_____no_output_____"
]
],
[
[
"np.argmax(predictions_single[0])",
"_____no_output_____"
]
],
[
[
"というわけで、モデルは9というラベルを予測しました。",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e76d0b370d18c81608a04392df0858a15a6374c4 | 362,029 | ipynb | Jupyter Notebook | notebooks/code3.ipynb | heitorfe/insiders_clustering | b2919167b87864948800cc74348c9940113c7eb7 | [
"MIT"
] | null | null | null | notebooks/code3.ipynb | heitorfe/insiders_clustering | b2919167b87864948800cc74348c9940113c7eb7 | [
"MIT"
] | null | null | null | notebooks/code3.ipynb | heitorfe/insiders_clustering | b2919167b87864948800cc74348c9940113c7eb7 | [
"MIT"
] | null | null | null | 168.620866 | 143,904 | 0.896715 | [
[
[
"# PA005: High Value Customer Identification (Insiders)",
"_____no_output_____"
],
[
"# Ciclo 0 - Planejamento da solução (IOT)",
"_____no_output_____"
],
[
"# Ciclo 1 - Métricas de Validação de Clusters",
"_____no_output_____"
],
[
"1. Feature Engineering\n* Recency\n* Frequency\n* Monetary\n\n2. Métricas de Validação de clusters\n* WSS - (Within-Cluster Sum of Squares)\n* SS - (Silhouette Score)\n\n3. Cluster Analisys\n* Plot 3d\n* Cluster profile",
"_____no_output_____"
],
[
"# Ciclo 2 - Análise de Silhouette",
"_____no_output_____"
],
[
"1. Feature Engineering\n* Average Ticket\n\n\n2. Análise de Silhouette\n* Silhouette Analysis\n\n\n3. Clustering visualization\n* UMAP\n\n\n4. Cluster análise de perfil\n* Descrição dos centroides dos clusters",
"_____no_output_____"
],
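A minimal sketch of the UMAP visualization step planned for this cycle: project the scaled feature matrix to 2-D and color the points by cluster. It assumes a feature matrix `X` and cluster `labels` like the ones produced later in the notebook; the UMAP parameters here are illustrative defaults, not tuned values.

```python
# Project the customer feature space to 2-D with UMAP and color by cluster label.
# Assumes X (scaled features) and labels (KMeans assignments) built later in the notebook.
import umap.umap_ as umap
import seaborn as sns
from matplotlib import pyplot as plt

reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=42)
embedding = reducer.fit_transform(X)   # shape: (n_customers, 2)

plt.figure(figsize=(8, 6))
sns.scatterplot(x=embedding[:, 0], y=embedding[:, 1], hue=labels, palette='deep', s=20)
plt.title('UMAP projection of the customer feature space')
plt.show()
```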
[
"# Ciclo 3 - Statistical Descriptive",
"_____no_output_____"
],
[
"1. Análise Descritiva\n* Atributos Numéricos\n* Atributos Categóricos\n\n2. Data Filtering\n* Retornos e Compras\n\n3. Feature Engineering\n* Utilização dos dados de compra\n* Average Recency\n* Number of returns\n\n",
"_____no_output_____"
],
[
"## Input - Entrada",
"_____no_output_____"
],
[
"1. Problema de negócio\n\n --Selecionar os clientes mais valiosos para integrar um programa de fidelização\n \n2. Conjunto de dados\n\n --Vendas de um e-commerce, durante um período de um ano",
"_____no_output_____"
],
[
"## Output - Saída",
"_____no_output_____"
],
[
"1. A idicação de clientes que farão parte do programa de Insiders\n\n --Lista: client_id | is_insider\n 455534 | yes\n 433524 | no\n \n2. Relatório com as respostas das perguntas de negócios\n\n 2.0. Quem são as pessoas elegíveis para fazer parte do grupo de Insiders?\n\n 2.1. Quantos clientes farão parte do grupo?\n \n 2.2. Quais as principais características desses clientes?\n \n 2.3. Qual a porcentagem de contribuição do faturamento vindo dos Insiders?\n \n 2.4. Qual a expectativa de faturamento desse grupo para os próximos meses?\n \n 2.5. Quais as condições para umma pessoa ser elegível ao Insiders?\n \n 2.6. Quais as condições para umma pessoa ser removida do Insiders?\n \n 2.7. Qual a garantia que o programa Insiders é melhor que o restante da base?\n \n 2.8. Quais ações o time de marketing pode realizar para aumentar o faturamento?\n \n ",
"_____no_output_____"
],
[
"## Tasks - Tarefas",
"_____no_output_____"
],
[
" 0. Quem são as pessoas elegíveis para fazer parte do grupo de Insiders? \n - O que são clientes de maior \"valor\"?\n - Faturamento\n - Alto ticket médio\n - Alto LTV\n - Baixa recência\n - Baixa probabilidade de churn\n - Alto basket size\n - Alta previsão LTV\n - Alta propensão de compra\n \n - Custo\n - Baixa taxa de devolução\n \n - Experiência de compra\n - Média alta das avaliações\n \n 1. Quantos clientes farão parte do grupo?\n - Número total de clientes\n - % do grupo de Insiders\n \n 2. Quais as principais características desses clientes?\n -Características dos clientes:\n - Idade\n - Localização\n \n - Características de consumo:\n - Atributos da clusterização\n \n \n 3. Qual a porcentagem de contribuição do faturamento vindo dos Insiders?\n - Faturamento do ano\n - Faturamento dos Insiders\n \n 4. Qual a expectativa de faturamento desse grupo para os próximos meses?\n - LTV do grupo Insiders\n - Análise de Cohort\n \n 5. Quais as condições para umma pessoa ser elegível ao Insiders?\n - Definir periodicidade\n - A pessoa precisa ser similar ou parecido com uma pessoa do grupo\n \n 6. Quais as condições para umma pessoa ser removida do Insiders?\n - Definir periodicidade\n - A pessoa precisa ser dissimilar ou parecido com uma pessoa do grupo\n \n 7. Qual a garantia que o programa Insiders é melhor que o restante da base?\n - Teste A/B\n - Teste A/B Bayesiano\n - Teste de hipóteses\n \n 8. Quais ações o time de marketing pode realizar para aumentar o faturamento?\n - Desconto\n - Preferência de compra\n - Frente\n - Visita a empresa",
"_____no_output_____"
],
[
"## Benchmark de soluções",
"_____no_output_____"
],
[
"### 1. Desk research",
"_____no_output_____"
],
[
"Modelo RFM\n1. Recency\n a) Tempo desde a última compra\n b) Responsividade\n\n2. Frequency\n a) Tempo médio entre as transações\n b) Engajamento\n \n3. Monetary \n a) Total gasto, faturamento\n b) 'High-value purchases'",
"_____no_output_____"
],
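A compact sketch of how the RFM features described above map onto the columns this notebook works with (`customer_id`, `invoice_no`, `invoice_date`, `quantity`, `unit_price`). It mirrors what the feature-engineering section below builds step by step; the function name is just for illustration.

```python
import pandas as pd

def rfm_table(purchases: pd.DataFrame) -> pd.DataFrame:
    """Classic RFM aggregation from a purchases dataframe."""
    purchases = purchases.assign(gross_revenue=purchases['quantity'] * purchases['unit_price'])
    snapshot = purchases['invoice_date'].max()
    return (purchases.groupby('customer_id')
                     .agg(recency_days=('invoice_date', lambda d: (snapshot - d.max()).days),  # Recency
                          frequency=('invoice_no', 'nunique'),                                 # Frequency
                          monetary=('gross_revenue', 'sum'))                                   # Monetary
                     .reset_index())

# e.g. rfm = rfm_table(df_purchase)
```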
[
"# 0.0. Imports",
"_____no_output_____"
]
],
[
[
"\nfrom yellowbrick.cluster import KElbowVisualizer\nfrom yellowbrick.cluster import SilhouetteVisualizer\n\nfrom matplotlib import pyplot as plt\nfrom sklearn import cluster as c\nfrom sklearn import metrics as m\nfrom sklearn import preprocessing as pp\nfrom plotly import express as px\n\nimport pandas as pd\nimport seaborn as sns\nimport numpy as np\n\nimport re",
"_____no_output_____"
],
[
"import umap.umap_ as umap\n",
"/home/heitor/repos/insiders_clustering/venv/lib/python3.8/site-packages/tqdm/auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n from .autonotebook import tqdm as notebook_tqdm\n"
]
],
[
[
"## 0.2. Helper Functions",
"_____no_output_____"
]
],
[
[
"pd.set_option('display.float_format', lambda x: '%.2f' % x)\n\ndef num_attributes(df1):\n \n num_attributes = df1.select_dtypes(['int64', 'float64'])\n\n #central tendency\n ct1 = pd.DataFrame(num_attributes.apply(np.mean)).T\n ct2 = pd.DataFrame(num_attributes.apply(np.median)).T\n\n #dispersion\n d1 = pd.DataFrame(num_attributes.apply(np.min)).T\n d2 = pd.DataFrame(num_attributes.apply(np.max)).T\n d3 = pd.DataFrame(num_attributes.apply(lambda x: x.max() - x.min())).T\n d4 = pd.DataFrame(num_attributes.apply(np.std)).T\n d5 = pd.DataFrame(num_attributes.apply(lambda x: x.skew())).T\n d6 = pd.DataFrame(num_attributes.apply(lambda x: x.kurtosis())).T\n\n m = pd.concat( [d1, d2, d3, ct1, ct2, d4, d5, d6] ).T.reset_index()\n m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std','skew', 'kurtosis']\n return m",
"_____no_output_____"
]
],
[
[
"## 0.3. Load Data",
"_____no_output_____"
]
],
[
[
"df_raw = pd.read_csv(r'../data/Ecommerce.csv')\n",
"_____no_output_____"
]
],
[
[
"# 1.0. Data Description",
"_____no_output_____"
]
],
[
[
"df1 = df_raw.copy()",
"_____no_output_____"
]
],
[
[
"## 1.1. Rename Columns",
"_____no_output_____"
]
],
[
[
"df1.columns",
"_____no_output_____"
],
[
"df1.columns = ['invoice_no', 'stock_code', 'description', 'quantity', 'invoice_date',\n 'unit_price', 'customer_id', 'country']",
"_____no_output_____"
]
],
[
[
"## 1.2. Data Shape",
"_____no_output_____"
]
],
[
[
"print(f'Number of rows: {df1.shape[0]}')\nprint(f'Number of columns: {df1.shape[1]}')",
"Number of rows: 541909\nNumber of columns: 8\n"
]
],
[
[
"## 1.3. Data Types",
"_____no_output_____"
]
],
[
[
"df1.dtypes",
"_____no_output_____"
]
],
[
[
"## 1.4. Check NAs\n",
"_____no_output_____"
]
],
[
[
"df1.isna().sum()",
"_____no_output_____"
]
],
[
[
"## 1.5. Fill NAs",
"_____no_output_____"
]
],
[
[
"#remove na\ndf1 = df1.dropna(axis=0)\n\nprint('Data removed: {:.0f}%'.format((1-(len(df1)/len(df_raw)))*100))",
"Data removed: 25%\n"
]
],
[
[
"## 1.6. Change dtypes",
"_____no_output_____"
]
],
[
[
"df1.dtypes",
"_____no_output_____"
],
[
"#invoice_no \n# df1['invoice_no'] = df1['invoice_no'].astype(int)\n\n\n#stock_code \n# df1['stock_code'] = df1['stock_code'].astype(int)\n\n\n#invoice_date --> Month --> b\ndf1['invoice_date'] = pd.to_datetime(df1['invoice_date'], format=('%d-%b-%y'))\n\n\n#customer_id\ndf1['customer_id'] = df1['customer_id'].astype(int)\ndf1.dtypes",
"_____no_output_____"
]
],
[
[
"## 1.7. Descriptive statistics",
"_____no_output_____"
]
],
[
[
"num_attributes = df1.select_dtypes(['int64', 'float64'])\ncat_attributes = df1.select_dtypes(exclude = ['int64', 'float64', 'datetime64[ns]'])",
"_____no_output_____"
]
],
[
[
"### 1.7.1. Numerical Attributes",
"_____no_output_____"
]
],
[
[
"m = num_attributes(df1)\nm",
"_____no_output_____"
]
],
[
[
"#### 1.7.1.1 Investigating",
"_____no_output_____"
],
[
"1. Negative quantity (devolution?)\n2. Price = 0 (Promo?)",
"_____no_output_____"
],
[
"## 1.7.2. Categorical Attributes",
"_____no_output_____"
]
],
[
[
"cat_attributes.head()",
"_____no_output_____"
]
],
[
[
"#### Invoice no",
"_____no_output_____"
]
],
[
[
"\n#invoice_no -- some of them has one char\ndf_invoice_char = df1.loc[df1['invoice_no'].apply(lambda x: bool(re.search('[^0-9]+', x))), :] \n\nlen(df_invoice_char[df_invoice_char['quantity']<0])\n\n\nprint('Total of invoices with letter: {}'.format(len(df_invoice_char)))\nprint('Total of negative quantaty: {}'.format(len(df1[df1['quantity']<0])))\nprint('Letter means negative quantity')",
"_____no_output_____"
]
],
[
[
"#### Stock Code",
"_____no_output_____"
]
],
[
[
"#all stock codes with char\ndf1.loc[df1['stock_code'].apply(lambda x: bool(re.search('^[a-zA-Z]+$', x))), 'stock_code'].unique()\n\n#remove stock code in ['POST', 'D', 'M', 'PADS', 'DOT', 'CRUK']\n# df1 = df1[-df1.isin(['POST', 'D', 'M', 'PADS', 'DOT', 'CRUK'])]",
"_____no_output_____"
]
],
[
[
"#### Description",
"_____no_output_____"
]
],
[
[
"#remove description\n# df1 = df1.drop('description', axis=1)",
"_____no_output_____"
]
],
[
[
"#### Country\n",
"_____no_output_____"
]
],
[
[
"df1['country'].value_counts(normalize='True').head()",
"_____no_output_____"
],
[
"df1[['country', 'customer_id']].drop_duplicates().groupby('country').count().reset_index().sort_values('customer_id', ascending=False).head()",
"_____no_output_____"
]
],
[
[
"# 2.0. Data Filtering",
"_____no_output_____"
]
],
[
[
"df2 = df1.copy()",
"_____no_output_____"
],
[
"df2['country'].unique()",
"_____no_output_____"
],
[
"# ====================== Numerical Attributes ====================== \n\n#unit price != 0\n# df2.sort_values('unit_price').head()\ndf2 = df2[df2['unit_price']>0.004]\n\n# ====================== Categorical Attributes ====================== \n\n#stock code\ndf2 = df2[~df2.isin(['POST', 'D', 'M', 'PADS', 'DOT', 'CRUK'])]\n\n#description\ndf2 = df2.drop('description', axis=1)\n\n\n#map\ndf2 = df2[~df2.isin(['European Community', 'Unspecified'])]\n\n# #quantity - negative numbers mean return\ndf_return = df2[df2['quantity']<0]\ndf_purchase = df2[df2['quantity']>0]\n\n\n",
"_____no_output_____"
]
],
[
[
"# 3.0. Feature Engineering",
"_____no_output_____"
]
],
[
[
"df3 = df2.copy()",
"_____no_output_____"
]
],
[
[
"## 3.1. Feature Creation",
"_____no_output_____"
]
],
[
[
"# df_purchase.loc[:, 'gross_revenue'] = df_purchase.loc[:, 'quantity'] * df_purchase.loc[:, 'unit_price'] \ndf_purchase.loc[:,'gross_revenue'] = df_purchase.loc[:, 'quantity'] * df_purchase.loc[:, 'unit_price'] \n\n#data reference\ndf_ref = df3.drop(['invoice_no', 'stock_code', 'quantity', 'invoice_date',\n 'unit_price', 'country'], axis=1).drop_duplicates(ignore_index=True)\n\n\ndf_monetary = df_purchase.loc[:,['customer_id', 'gross_revenue']].groupby('customer_id').sum().reset_index()\ndf_ref = pd.merge(df_ref, df_monetary, on='customer_id', how= 'left')\ndf_ref.isna().sum()",
"/tmp/ipykernel_19796/959218557.py:6: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n df_purchase.loc[:,'gross_revenue'] = df_purchase.loc[:, 'quantity'] * df_purchase.loc[:, 'unit_price']\n"
],
[
"#recency --> max date\n\ndf_recency = df_purchase.loc[:,['customer_id', 'invoice_date']].groupby('customer_id').max().reset_index()\ndf_recency['recency_days'] = (df3['invoice_date'].max() - df_recency['invoice_date']).dt.days\ndf_recency = df_recency[['customer_id', 'recency_days']].copy()\ndf_ref = pd.merge(df_ref, df_recency, on='customer_id', how='left')\n\n#frequency\ndf_freq = df_purchase.loc[:,['customer_id', 'invoice_no']].drop_duplicates().groupby('customer_id').count().reset_index()\ndf_ref = pd.merge(df_ref, df_freq, on='customer_id', how='left')\n\n\n#avg ticket\ndf_avgticket = df_purchase.loc[:,['customer_id', 'gross_revenue']].groupby('customer_id').mean().rename(columns={'gross_revenue':'avg_ticket'}).reset_index()\ndf_ref = pd.merge(df_ref, df_avgticket, on='customer_id', how='left')\n\n\n\n\n\n",
"_____no_output_____"
],
[
"df_ref.head()",
"_____no_output_____"
]
],
[
[
"# 4.0. Exploratory Data Analisys",
"_____no_output_____"
]
],
[
[
"df4 = df_ref.dropna().copy()",
"_____no_output_____"
]
],
[
[
"# 5.0. Data Preparation",
"_____no_output_____"
]
],
[
[
"df5 = df4.copy()\n\nss = pp.StandardScaler()\nmms = pp.MinMaxScaler()",
"_____no_output_____"
],
[
"df5['gross_revenue'] = ss.fit_transform(df5[['gross_revenue']])\ndf5['recency_days'] = ss.fit_transform(df5[['recency_days']])\ndf5['invoice_no'] = ss.fit_transform(df5[['invoice_no']])\ndf5['avg_ticket'] = ss.fit_transform(df5[['avg_ticket']])",
"_____no_output_____"
],
[
"df5.head()",
"_____no_output_____"
]
],
[
[
"# 6.0. Feature Selection",
"_____no_output_____"
]
],
[
[
"df6 = df5.copy()",
"_____no_output_____"
]
],
[
[
"# 7.0. Hyperparameter Fine-Tunning",
"_____no_output_____"
]
],
[
[
"X = df6.drop('customer_id', axis=1)\nclusters = [2,3,4,5,6, 7]",
"_____no_output_____"
]
],
[
[
"## 7.1. Within-Cluster Sum of Squares (WSS)",
"_____no_output_____"
]
],
[
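[
"# Added sketch (not in the original notebook): compute the within-cluster sum of squares (WSS)\n# directly from KMeans.inertia_ for each candidate k, assuming `c` is sklearn.cluster and that\n# `X` and `clusters` come from the cells above. The next cell draws the same elbow curve with\n# yellowbrick's KElbowVisualizer.\nwss = []\nfor k in clusters:\n    km = c.KMeans(n_clusters=k, init='random', n_init=10, max_iter=300, random_state=3)\n    km.fit(X)\n    wss.append(km.inertia_)\nplt.plot(clusters, wss, marker='o')\nplt.xlabel('k')\nplt.ylabel('WSS');",
"_____no_output_____"
],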
[
"#Easy way\n\nkmeans = KElbowVisualizer(c.KMeans(), k=clusters, timings=False)\nkmeans.fit(X)\nkmeans.show();\n\n",
"_____no_output_____"
]
],
[
[
"## 7.2. Silhouette Score",
"_____no_output_____"
]
],
[
[
"#Easy way\nkmeans = KElbowVisualizer(c.KMeans(), k=clusters, metric = 'silhouette', timings = False)\nkmeans.fit(X)\nkmeans.show();",
"_____no_output_____"
]
],
[
[
"## 7.3. Silhouette Analysis",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(3, 2, figsize=(25,18))\n\nfor k in clusters:\n km = c.KMeans(n_clusters=k, init='random', n_init=10, max_iter=100, random_state=3)\n q, mod = divmod(k, 2)\n \n visualizer = SilhouetteVisualizer(estimator = km ,colors='yellowbrick', ax=ax[q-1][mod])\n visualizer.fit(X)\n visualizer.finalize()",
"_____no_output_____"
]
],
[
[
"# 8.0. Model Training",
"_____no_output_____"
],
[
"## 8.1. KMeans",
"_____no_output_____"
]
],
[
[
"#model definition\nk = 3\nkmeans = c.KMeans(init='random', n_clusters=k, n_init=10, max_iter=300, random_state=3)\n\n#model training\nkmeans.fit(X)\n\n#clustering\nlabels = kmeans.labels_",
"_____no_output_____"
]
],
[
[
"# 9.0. Cluster Analisys",
"_____no_output_____"
]
],
[
[
"df9 = df6.copy()\ndf9['cluster'] = labels\n",
"_____no_output_____"
]
],
[
[
"## 9.1. Visualization Inspection",
"_____no_output_____"
]
],
[
[
"\n\nvisualizer = SilhouetteVisualizer(kmeans, colors='yellowbrick')\nvisualizer.fit(X)\nvisualizer.finalize()",
"/home/heitor/repos/insiders_clustering/venv/lib/python3.8/site-packages/sklearn/base.py:450: UserWarning: X does not have valid feature names, but KMeans was fitted with feature names\n warnings.warn(\n"
]
],
[
[
"## 9.2. 2d Plot",
"_____no_output_____"
]
],
[
[
"# df_vis = df9.drop('customer_id', axis=1)\n# sns.pairplot(df_vis, hue='cluster')\n\n",
"_____no_output_____"
]
],
[
[
"## 9.3. UMAP t-SNE",
"_____no_output_____"
]
],
[
[
"reducer = umap.UMAP(n_neighbors=30, random_state=3)\nembedding = reducer.fit_transform(X)\n\ndf_vis['embedding_x'] = embedding[:,0]\ndf_vis['embedding_y'] = embedding[:,1]\nsns.scatterplot(x='embedding_x', y='embedding_y', hue='cluster',\n palette = sns.color_palette('hls', n_colors = len(df_vis['cluster'].unique())),\n data = df_vis)",
"_____no_output_____"
]
],
[
[
"## 9.1. Visualization Inspection",
"_____no_output_____"
]
],
[
[
"#WSS\nprint('WSS: {}'.format(kmeans.inertia_))\n\n#SS\nprint('SS: {}'.format(m.silhouette_score(X,labels, metric='euclidean')))",
"WSS: 9429.573769591378\nSS: 0.5879176205734721\n"
],
[
"px.scatter_3d(df9, x='recency_days', y='invoice_no', z='gross_revenue', color='cluster')",
"_____no_output_____"
]
],
[
[
"## 9.2. Cluster Profile",
"_____no_output_____"
]
],
[
[
"aux1 = df9.groupby('cluster').mean().reset_index()\n\naux1 = aux1.drop('customer_id', axis=1)\naux2 = df9[['customer_id', 'cluster']].groupby('cluster').count().reset_index()\n\ndf_cluster = pd.merge(aux1, aux2, on='cluster', how='left')\n",
"_____no_output_____"
],
[
"df_cluster['perc'] = 100*df_cluster['customer_id']/df_cluster['customer_id'].sum()\n\ndf_cluster",
"_____no_output_____"
]
],
[
[
"## Cluster 0 (insiders)\n\n* Número de clientes: 6\n* Percentual de clientes: 0,01%\n* Faturamento médio: $182182\n* Recência: 7 dias\n* Frequência: 89 compras\n\n## Cluster 1 (Hibernating)\n\n* Número de clientes: 4335\n* Percentual de clientes: 99%\n* Faturamento: $1372\n* Recência: 92 dias\n* Frequência: 4 compras\n\n## Cluster 2 (potenciais clientes fieis)\n\n* Número de clientes: 31\n* Percentual de clientes: 0,07%\n* Faturamento: $40543\n* Recência: 13 dias\n* Frequência: 53 compras\n\n\n",
"_____no_output_____"
],
[
"# 10.0. Deploy to Production",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
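"code",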
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e76d147a2050a569254af96c7fc39576febdb7f6 | 4,503 | ipynb | Jupyter Notebook | Demot/Hiukkasfysiikkaa/Invariantti-massa-histogrammi.ipynb | cms-opendata-education/cms-jupyter-materials-swedish | c9eec10b01f3fe51aaef743cc14b855ec866a7b8 | [
"CC-BY-4.0"
] | 4 | 2017-10-13T18:39:47.000Z | 2019-11-13T05:59:33.000Z | Demot/Hiukkasfysiikkaa/Invariantti-massa-histogrammi.ipynb | cms-opendata-education/cms-jupyter-materials-swedish | c9eec10b01f3fe51aaef743cc14b855ec866a7b8 | [
"CC-BY-4.0"
] | 11 | 2017-08-22T13:35:44.000Z | 2020-11-03T12:54:09.000Z | Demot/Hiukkasfysiikkaa/Invariantti-massa-histogrammi.ipynb | cms-opendata-education/cms-jupyter-materials-swedish | c9eec10b01f3fe51aaef743cc14b855ec866a7b8 | [
"CC-BY-4.0"
] | 2 | 2017-10-05T16:08:57.000Z | 2017-10-14T09:48:35.000Z | 31.270833 | 361 | 0.642905 | [
[
[
"# Invariantin massan histogrammin piirtäminen",
"_____no_output_____"
],
[
"Tässä harjoituksessa opetellaan piirtämään invariantin massan histogrammi Pythonilla. Käytetään datana CMS-kokeen vuonna 2011 keräämää dataa kahden myonin törmäyksistä [1]. Tässä harjoituksessa käytettävään CSV-tiedostoon on karsittu edellä mainitusta datasta kiinnostavia tapahtumia, joissa myonille laskettu invariantti massa on välillä 8–12 GeV [2].\n\nTutustu alla oleviin koodisoluihin ja niissä #-merkillä erotettuihin kommenttiriveihin sekä aja koodia. Huomaa, että normaalisti koodia ei kommentoitaisi näin runsaasti, nyt kommenteissa kerrotaan lisätietoa käytetyistä komennoista.\n<br>\n<br>\n<br>\n[1] CMS collaboration (2016). DoubleMu primary dataset in AOD format from RunA of 2011 (/DoubleMu/Run2011A-12Oct2013-v1/AOD). CERN Open Data Portal. DOI: [10.7483/OPENDATA.CMS.RZ34.QR6N](http://doi.org/10.7483/OPENDATA.CMS.RZ34.QR6N).\n<br>\n[2] Thomas McCauley (2016). Ymumu. Jupyter Notebook file. https://github.com/tpmccauley/cmsopendata-jupyter/blob/hst-0.1/Ymumu.ipynb.",
"_____no_output_____"
],
[
"### 1) Alustus ",
"_____no_output_____"
]
],
[
[
"# Haetaan tarvittavat moduulit\nimport pandas\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"### 2) Datan hakeminen",
"_____no_output_____"
],
[
"Alkuvalmisteluiden jälkeen siirrytään hakemaan CMS:n dataa käyttöömme notebookiin.",
"_____no_output_____"
]
],
[
[
"# Luodaan DataFrame-rakenne (periaatteessa taulukko), johon kirjataan kaikki CSV-tiedostossa oleva data.\n# Annetaan luomallemme DataFramelle nimi 'datasetti'.\ndatasetti = pandas.read_csv('https://raw.githubusercontent.com/cms-opendata-education/cms-jupyter-materials-finnish/master/Data/Ymumu_Run2011A.csv')\n\n# Luodaan muuttuja 'invariantti_massa', johon tallennetaan 'datasetin' sarakkeella 'M' olevat arvot, eli\n# kahden myonin invariantille massalle valmiiksi tiedostoon lasketut arvot.\ninvariantti_massa = datasetti['M']",
"_____no_output_____"
]
],
[
[
"### 3) Histogrammin piirtäminen",
"_____no_output_____"
],
[
"Nyt jäljellä on enää vaihe, jossa luomme histogrammin hakemistamme invariantin massan arvoista. Histogrammi on pylväskaavio, joka kuvaa kuinka monta törmäystapahtumaa on osunut kunkin invariantin massan arvon kohdalle. Huomaa, että alla käytämme yhteensä 500 pylvästä.",
"_____no_output_____"
]
],
[
[
"# Suoritetaan histogrammin piirtäminen pyplot-moduulin avulla:\n# (http://matplotlib.org/api/pyplot_api.html?highlight=matplotlib.pyplot.hist#matplotlib.pyplot.hist).\n# 'Bins' määrittelee histogrammin pylväiden lukumäärän.\nplt.hist(invariantti_massa, bins=500)\n\n# Näillä riveillä ainoastaan määritellään otsikko sekä akseleiden tekstit.\nplt.xlabel('Invariantti massa [GeV]')\nplt.ylabel('Tapahtumien lukumäärä')\nplt.title('Kahden myonin invariantin massan histogrammi \\n') # \\n luo uuden rivin otsikon muotoilua varten\n\n# Tehdään kuvaaja näkyväksi.\nplt.show()",
"_____no_output_____"
]
],
[
[
"### 4) Analyysi",
"_____no_output_____"
],
[
"- Mitä histogrammi kertoo?\n- Mitä tapahtuu noin 9,45 GeV:n kohdalla?",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e76d147f61e31d4842e3cdced869b9038bdc8aad | 242,633 | ipynb | Jupyter Notebook | burgers_equation/burgerseq_mod_fe_no_mean.ipynb | ratnania/mlhiphy | c75b5c4b5fbc557f77d234df001fe11b10681d7d | [
"MIT"
] | 6 | 2018-07-12T09:03:43.000Z | 2019-10-29T09:50:34.000Z | burgers_equation/burgerseq_mod_fe_no_mean.ipynb | ratnania/mlhiphy | c75b5c4b5fbc557f77d234df001fe11b10681d7d | [
"MIT"
] | null | null | null | burgers_equation/burgerseq_mod_fe_no_mean.ipynb | ratnania/mlhiphy | c75b5c4b5fbc557f77d234df001fe11b10681d7d | [
"MIT"
] | 4 | 2018-04-25T06:33:03.000Z | 2020-03-13T02:25:07.000Z | 368.183612 | 33,268 | 0.932099 | [
[
[
"## Burgers Equation - Forward Euler/0-estimation\n#### Parameter estimation for Burgers' Equation using Gaussian processes (Forward Euler scheme)\n\n\n#### Problem Setup\n\n$u_t + u u_{x} = \\nu u_{x}$\n\n$u(x,t) = \\frac{x}{1+t}$ => We'd expect $\\nu = 0$\n\n$u_0(x) := u(x,0) = x$\n\n$x \\in [0, 1], t \\in \\{0, \\tau \\}$\n\nUsing the forward Euler scheme, the equation can be re-written as:\n\n$\\frac{u_n - u_{n-1}}{\\tau} + u_{n-1} \\frac{d}{dx}u_{n-1} = \\nu \\frac{d^2}{dx}u_{n-1}$\n\nand setting the factor $u_{n-1}(x) = u_0(x) = x$ (no mean used! Should give a better result) to deal with the non-linearity:\n\n$\\tau \\nu \\frac{d^2}{dx}u_{n-1} - \\tau x \\frac{d}{dx}u_{n-1} + u_{n-1} = u_{n}$\n\n\nConsider $u_{n-1}$ to be a Gaussian process.\n\n$u_{n-1} \\sim \\mathcal{GP}(0, k_{uu}(x_i, x_j, \\theta))$\n\nAnd the linear operator:\n\n$\\mathcal{L}_x^\\nu = \\cdot + \\tau \\nu \\frac{d}{dx}\\cdot - \\tau x \\frac{d}{dx} \\cdot$\n\nso that\n\n$\\mathcal{L}_x^\\nu u_{n-1} = u_n$\n\nProblem at hand: estimate $\\nu$ (should be $\\nu = 0$ in the end).\n\nFor the sake of simplicity, take $u := u_{n-1}$ and $f := u_n$.\n\n\n#### step 1: Simulate data\n\nTake data points at $t = 0$ for $(u_{n-1})$ and $t = \\tau$ for $(u_n)$, where $\\tau$ is the time step.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport sympy as sp\nfrom scipy.optimize import minimize\nimport matplotlib.pyplot as plt\nimport warnings\nimport time",
"_____no_output_____"
],
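[
"# Added sanity check (an illustration, not part of the original analysis): verify symbolically\n# that u(x,t) = x/(1+t) satisfies u_t + u*u_x = 0, which is why we expect the estimated nu\n# to come out close to 0.\nx_s, t_s = sp.symbols('x t')\nu_s = x_s/(1 + t_s)\nresidual = sp.simplify(sp.diff(u_s, t_s) + u_s*sp.diff(u_s, x_s))\nprint('PDE residual for u = x/(1+t):', residual)",
"_____no_output_____"
],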
[
"tau = 0.001\ndef get_simulated_data(tau, n=20):\n x = np.random.rand(n)\n y_u = x\n y_f = x/(1+tau)\n return (x, y_u, y_f)\n\n(x, y_u, y_f) = get_simulated_data(tau)",
"_____no_output_____"
],
[
"plt.plot(x, y_u, 'ro')\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(x, y_f, 'bo')\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Step 2:Evaluate kernels\n\n$k_{nn}(x_i, x_j; \\theta) = \\theta exp(-\\frac{1}{2l}(x_i-x_j)^2)$",
"_____no_output_____"
]
],
[
[
"x_i, x_j, theta, l, nu = sp.symbols('x_i x_j theta l nu')\nkuu_sym = theta*sp.exp(-1/(2*l)*((x_i - x_j)**2))\nkuu_fn = sp.lambdify((x_i, x_j, theta, l), kuu_sym, \"numpy\")\ndef kuu(x, theta, l):\n k = np.zeros((x.size, x.size))\n for i in range(x.size):\n for j in range(x.size):\n k[i,j] = kuu_fn(x[i], x[j], theta, l)\n return k",
"_____no_output_____"
]
],
[
[
"$k_{ff}(x_i,x_j;\\theta,\\phi) \\\\\n= \\mathcal{L}_{x_i}^\\nu \\mathcal{L}_{x_j}^\\nu k_{uu}(x_i, x_j; \\theta) \\\\\n= k_{uu} + \\tau \\nu \\frac{d}{dx_i}k_{uu} - \\tau x_i \\frac{d}{dx_i}k_{uu} + \\tau \\nu \\frac{d}{dx_j}k_{uu} + \\tau^2 \\nu^2 \\frac{d}{dx_i} \\frac{d}{dx_j}k_{uu} - \\tau^2 \\nu x_i\\frac{d^2}{dx_i dx_j} k_{uu} - \\tau x_j \\frac{d}{dx_j}k_{uu} - \\tau^2 \\nu x_j \\frac{d^2}{dx_i dx_j} k_{uu} + \\tau^2 x_i x_j \\frac{d^2}{dx_i dx_j}k_{uu}$",
"_____no_output_____"
]
],
[
[
"kff_sym = kuu_sym \\\n + tau*nu*sp.diff(kuu_sym, x_i) \\\n - tau*x_i*sp.diff(kuu_sym, x_i) \\\n + tau*nu*sp.diff(kuu_sym, x_j) \\\n + tau**2*nu**2*sp.diff(kuu_sym, x_j, x_i) \\\n - tau**2*nu*x_i*sp.diff(kuu_sym, x_j, x_i) \\\n - tau*x_j*sp.diff(kuu_sym, x_j) \\\n - tau**2*nu*x_j*sp.diff(kuu_sym, x_j, x_i) \\\n + tau**2*x_i*x_j*sp.diff(kuu_sym, x_j, x_i)\nkff_fn = sp.lambdify((x_i, x_j, theta, l, nu), kff_sym, \"numpy\")\ndef kff(x, theta, l, nu):\n k = np.zeros((x.size, x.size))\n for i in range(x.size):\n for j in range(x.size):\n k[i,j] = kff_fn(x[i], x[j], theta, l, nu)\n return k",
"_____no_output_____"
]
],
[
[
"$k_{fu}(x_i,x_j;\\theta,\\phi) \\\\\n= \\mathcal{L}_{x_i}^\\nu k_{uu}(x_i, x_j; \\theta) \\\\\n= k_{uu} + \\tau \\nu \\frac{d}{dx_i}k_{uu} - \\tau x_i\\frac{d}{dx_i}k_{uu}$",
"_____no_output_____"
]
],
[
[
"kfu_sym = kuu_sym + tau*nu*sp.diff(kuu_sym, x_i) - tau*x_i*sp.diff(kuu_sym, x_i)\nkfu_fn = sp.lambdify((x_i, x_j, theta, l, nu), kfu_sym, \"numpy\")\ndef kfu(x, theta, l, nu):\n k = np.zeros((x.size, x.size))\n for i in range(x.size):\n for j in range(x.size):\n k[i,j] = kfu_fn(x[i], x[j], theta, l, nu)\n return k",
"_____no_output_____"
],
[
"def kuf(x, theta, l, nu):\n return kfu(x,theta, l, nu).T",
"_____no_output_____"
]
],
[
[
"#### Step 3: Compute NLML",
"_____no_output_____"
]
],
[
[
"def nlml(params, x, y1, y2, s):\n theta_exp = np.exp(params[0]) \n l_exp = np.exp(params[1])\n K = np.block([\n [kuu(x, theta_exp, l_exp) + s*np.identity(x.size), kuf(x, theta_exp, l_exp, params[2])],\n [kfu(x, theta_exp, l_exp, params[2]), kff(x, theta_exp, l_exp, params[2]) + s*np.identity(x.size)]\n ])\n y = np.concatenate((y1, y2))\n val = 0.5*(np.log(abs(np.linalg.det(K))) + np.mat(y) * np.linalg.inv(K) * np.mat(y).T)\n return val.item(0)",
"_____no_output_____"
]
],
[
[
"#### Step 4: Optimise hyperparameters",
"_____no_output_____"
]
],
[
[
"m = minimize(nlml, np.random.rand(3), args=(x, y_u, y_f, 1e-7), method=\"Nelder-Mead\", options = {'maxiter' : 1000})",
"_____no_output_____"
],
[
"m.x[2]",
"_____no_output_____"
],
[
"m",
"_____no_output_____"
]
],
[
[
"#### Step 5: Analysis w.r.t. the number of data points (up to 25):",
"_____no_output_____"
],
[
"In this section we want to analyze the error of our algorithm using two different ways and its time complexity.",
"_____no_output_____"
]
],
[
[
"res = np.zeros((5,25))\ntiming = np.zeros((5,25))\nwarnings.filterwarnings(\"ignore\")\nfor k in range(5):\n for n in range(25):\n start_time = time.time()\n (x, y_u, y_f) = get_simulated_data(tau, n)\n m = minimize(nlml, np.random.rand(3), args=(x, y_u, y_f, 1e-7), method=\"Nelder-Mead\")\n res[k][n] = m.x[2]\n timing[k][n] = time.time() - start_time",
"_____no_output_____"
]
],
[
[
"###### Plotting the error in our estimate for $\\nu$ (Error = $| \\nu_{estimate} - \\nu_{true} |$):",
"_____no_output_____"
]
],
[
[
"lin = np.linspace(1, res.shape[1], res.shape[1])\nest = np.repeat(0.01, len(lin))\n\nf, (ax1, ax2) = plt.subplots(ncols=2, nrows=2, figsize=(13,7))\nax1[0].plot(lin, np.abs(res[0,:]), color = 'green')\nax1[0].plot(lin, est, color='blue', linestyle='dashed')\nax1[0].set(xlabel= r\"Number of data points\", ylabel= \"Error\")\nax1[1].plot(lin, np.abs(res[1,:]), color = 'green')\nax1[1].plot(lin, est, color='blue', linestyle='dashed')\nax1[1].set(xlabel= r\"Number of data points\", ylabel= \"Error\")\nax2[0].plot(lin, np.abs(res[2,:]), color = 'green')\nax2[0].plot(lin, est, color='blue', linestyle='dashed')\nax2[0].set(xlabel= r\"Number of data points\", ylabel= \"Error\")\nax2[1].plot(lin, np.abs(res[3,:]), color = 'green')\nax2[1].plot(lin, est, color='blue', linestyle='dashed')\nax2[1].set(xlabel= r\"Number of data points\", ylabel= \"Error\");",
"_____no_output_____"
],
[
"lin = np.linspace(1, res.shape[1], res.shape[1])\n\nfor i in range(res.shape[0]):\n plt.plot(lin, np.abs(res[i,:]))\n plt.ylabel('Error')\n plt.xlabel('Number of data points')\n plt.show()",
"_____no_output_____"
]
],
[
[
"All in one plot:",
"_____no_output_____"
]
],
[
[
"lin = np.linspace(1, res.shape[1], res.shape[1])\n\nfor i in range(res.shape[0]):\n plt.plot(lin, np.abs(res[i,:]))\n plt.ylabel('Error')\n plt.xlabel('Number of data points')\n\nest = np.repeat(0.01, len(lin))\nplt.plot(lin, est, color='blue', linestyle='dashed')\nplt.show()",
"_____no_output_____"
]
],
[
[
"We see that for n sufficiently large (in this case $n \\geq 3$), we can assume the error to be bounded by 0.01. <br>",
"_____no_output_____"
],
[
"###### Plotting the error between the solution and the approximative solution:",
"_____no_output_____"
]
],
[
[
"Another approach of plotting the error is by calculating the difference between the approximative solution and the true solution. <br>\nThat is: Let $\\tilde{\\nu}$ be the parameter, resulting from our algorithm. Set $\\Omega := ([0,1] \\times {0}) \\cup ([0,1] \\times {\\tau})$\nThen we can calculate the solution of the PDE \n\n\\begin{align}\n \\frac{d}{dt}\\tilde{u}(x,t) + \\tilde{\\nu}\\tilde{u}(x,t)\\frac{d}{dx}\\tilde{u}(x,t) = 0. \n\\end{align}\n\nand set the error to $\\lVert \\tilde{u}(x,t) - u(x,t) \\rVert_{\\Omega}$. The norm can be chosen freely. <br>\nIn our case, finding the solution to a given $\\tilde{\\nu}$ is very simple. It is given by $\\tilde{u}(x,t) = u(x,t) + \\tilde{\\nu} = \\frac{x}{1+t} + \\tilde{\\nu}$. <br>\nWe thus get:\n\n\\begin{align}\n\\lVert \\tilde{u}(x,t) - u(x,t) \\rVert_{\\Omega} = \\lVert u(x,t) + \\tilde{\\nu} - u(x,t) \\rVert_{\\Omega} \\propto \\vert \\tilde{\\nu} \\vert\n\\end{align}\n\nHere, the two error terms coincide.",
"_____no_output_____"
]
],
[
[
"###### Plotting the execution time:",
"_____no_output_____"
]
],
[
[
"lin = np.linspace(1, timing.shape[1], timing.shape[1])\n\nfor i in range(timing.shape[0]):\n plt.plot(lin, timing[i,:])\n plt.ylabel('Execution time in seconds')\n plt.xlabel('Number of data points')\n plt.show()",
"_____no_output_____"
],
[
"lin = np.linspace(1, timing.shape[1], timing.shape[1])\n\nfor i in range(timing.shape[0]):\n plt.plot(lin, timing[i,:])\n plt.ylabel('Execution time in seconds')\n plt.xlabel('Number of data points')\n\nest = lin**(1.25)\nplt.plot(lin, est, color='blue', linestyle='dashed')\nplt.show()",
"_____no_output_____"
]
],
[
[
"#Curiously, the time complexity seems to be around $\\mathcal{O}(n^{5/4})$ (blue-dashed line). <br>\n#Assuming an equal amount of function evaluations in the Nelder-Mead algorithm for different values of n,\n#we would expect a time complexity of $\\mathcal{O}(n^3)$, due to the computation of the inverse of an $n\\times n$-matrix in every evaluation of $\\textit{nlml}$.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"raw",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"raw"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
e76d14abe6f5b6327db77ae2712f5d76265b967d | 65,233 | ipynb | Jupyter Notebook | Tennis.ipynb | yanlinglin/drl_p3 | 5edd8810233f8ca7e29fcb7922915824fb7b2894 | [
"MIT"
] | 1 | 2022-01-19T18:24:46.000Z | 2022-01-19T18:24:46.000Z | Tennis.ipynb | yanlinglin/drl_p3 | 5edd8810233f8ca7e29fcb7922915824fb7b2894 | [
"MIT"
] | null | null | null | Tennis.ipynb | yanlinglin/drl_p3 | 5edd8810233f8ca7e29fcb7922915824fb7b2894 | [
"MIT"
] | null | null | null | 115.25265 | 30,592 | 0.821363 | [
[
[
"# Collaboration and Competition\n\n---\n\nIn this notebook, you will learn how to use the Unity ML-Agents environment for the third project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program.\n\n### 1. Start the Environment\n\nWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).",
"_____no_output_____"
]
],
[
[
"from unityagents import UnityEnvironment\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.\n\n- **Mac**: `\"path/to/Tennis.app\"`\n- **Windows** (x86): `\"path/to/Tennis_Windows_x86/Tennis.exe\"`\n- **Windows** (x86_64): `\"path/to/Tennis_Windows_x86_64/Tennis.exe\"`\n- **Linux** (x86): `\"path/to/Tennis_Linux/Tennis.x86\"`\n- **Linux** (x86_64): `\"path/to/Tennis_Linux/Tennis.x86_64\"`\n- **Linux** (x86, headless): `\"path/to/Tennis_Linux_NoVis/Tennis.x86\"`\n- **Linux** (x86_64, headless): `\"path/to/Tennis_Linux_NoVis/Tennis.x86_64\"`\n\nFor instance, if you are using a Mac, then you downloaded `Tennis.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:\n```\nenv = UnityEnvironment(file_name=\"Tennis.app\")\n```",
"_____no_output_____"
]
],
[
[
"env = UnityEnvironment(file_name=\"Tennis.app\")",
"INFO:unityagents:\n'Academy' started successfully!\nUnity Academy name: Academy\n Number of Brains: 1\n Number of External Brains : 1\n Lesson number : 0\n Reset Parameters :\n\t\t\nUnity brain name: TennisBrain\n Number of Visual Observations (per agent): 0\n Vector Observation space type: continuous\n Vector Observation space size (per agent): 8\n Number of stacked Vector Observation: 3\n Vector Action space type: continuous\n Vector Action space size (per agent): 2\n Vector Action descriptions: , \n"
]
],
[
[
"Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.",
"_____no_output_____"
]
],
[
[
"# get the default brain\nbrain_name = env.brain_names[0]\nbrain = env.brains[brain_name]",
"_____no_output_____"
]
],
[
[
"### 2. Examine the State and Action Spaces\n\nIn this environment, two agents control rackets to bounce a ball over a net. If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets a ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play.\n\nThe observation space consists of 8 variables corresponding to the position and velocity of the ball and racket. Two continuous actions are available, corresponding to movement toward (or away from) the net, and jumping. \n\nRun the code cell below to print some information about the environment.",
"_____no_output_____"
]
],
[
[
"# reset the environment\nenv_info = env.reset(train_mode=True)[brain_name]\n\n# number of agents \nnum_agents = len(env_info.agents)\nprint('Number of agents:', num_agents)\n\n# size of each action\naction_size = brain.vector_action_space_size\nprint('Size of each action:', action_size)\n\n# examine the state space \nstates = env_info.vector_observations\nstate_size = states.shape[1]\nprint('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))\nprint('The state for the first agent looks like:', states[0])",
"Number of agents: 2\nSize of each action: 2\nThere are 2 agents. Each observes a state with length: 24\nThe state for the first agent looks like: [ 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. -6.65278625 -1.5\n -0. 0. 6.83172083 6. -0. 0. ]\n"
]
],
[
[
"### 3. Take Random Actions in the Environment\n\nIn the next code cell, you will learn how to use the Python API to control the agents and receive feedback from the environment.\n\nOnce this cell is executed, you will watch the agents' performance, if they select actions at random with each time step. A window should pop up that allows you to observe the agents.\n\nOf course, as part of the project, you'll have to change the code so that the agents are able to use their experiences to gradually choose better actions when interacting with the environment!",
"_____no_output_____"
]
],
[
[
"states[1,np.newaxis]",
"_____no_output_____"
],
[
"for i in range(1, 6): # play game for 5 episodes\n env_info = env.reset(train_mode=False)[brain_name] # reset the environment \n states = env_info.vector_observations # get the current state (for each agent)\n scores = np.zeros(num_agents) # initialize the score (for each agent)\n while True:\n actions = np.random.randn(num_agents, action_size) # select an action (for each agent)\n actions = np.clip(actions, -1, 1) # all actions between -1 and 1\n env_info = env.step(actions)[brain_name] # send all actions to tne environment\n next_states = env_info.vector_observations # get next state (for each agent)\n rewards = env_info.rewards # get reward (for each agent)\n dones = env_info.local_done # see if episode finished\n scores += env_info.rewards # update the score (for each agent)\n states = next_states # roll over states to next time step\n if np.any(dones): # exit loop if episode finished\n break\n print('Score (max over agents) from episode {}: {}'.format(i, np.max(scores)))",
"Score (max over agents) from episode 1: 0.0\nScore (max over agents) from episode 2: 0.0\nScore (max over agents) from episode 3: 0.0\nScore (max over agents) from episode 4: 0.0\nScore (max over agents) from episode 5: 0.0\n"
]
],
[
[
"When finished, you can close the environment.",
"_____no_output_____"
],
[
"### 4. It's Your Turn!\n\nNow it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:\n```python\nenv_info = env.reset(train_mode=True)[brain_name]\n```",
"_____no_output_____"
]
],
[
[
"import random\nimport torch\nimport numpy as np\nfrom collections import deque\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom maddpg_agent import MAagent",
"_____no_output_____"
],
[
"def MA_ddpg(n_episodes=10000, max_t=1000, print_every=10,random_seed=2, noise_scalar_init=2.0, noise_reduction_factor=0.99, num_agents=num_agents,update_every=1, \\\n actor_fc1_units=400, actor_fc2_units=300,\\\n critic_fcs1_units=400, critic_fc2_units=300,\\\n gamma=0.99, tau=1e-3,lr_actor=1e-4, lr_critic=1e-3, weight_decay=0,\\\n mu=0., theta=0.15, sigma=0.2):\n maddpg_agents = MAagent(state_size=state_size, action_size=action_size, random_seed=random_seed,\\\n noise_scalar_init=noise_scalar_init,noise_reduction_factor=noise_reduction_factor, num_agents=num_agents,\\\n update_every=update_every, actor_fc1_units=actor_fc1_units, actor_fc2_units=actor_fc2_units,\\\n critic_fcs1_units=critic_fcs1_units, critic_fc2_units=critic_fc2_units,gamma=gamma, tau=tau, \\\n lr_actor=lr_actor, lr_critic=lr_critic,\\\n weight_decay=weight_decay,mu=mu, theta=theta, sigma=sigma)\n scores_deque = deque(maxlen=100)\n scores = []\n time_stamp = 0\n for i_episode in range(1, n_episodes+1):\n env_info = env.reset(train_mode=True)[brain_name]\n states = env_info.vector_observations\n for i in range(num_agents):\n maddpg_agents.agents[i].reset()\n score = np.zeros(num_agents)\n for t in range(max_t):\n actions = maddpg_agents.act(states)\n env_info = env.step(actions)[brain_name] # send all actions to tne environment\n next_states = env_info.vector_observations # get next state (for each agent)\n rewards = env_info.rewards # get reward (for each agent)\n dones = env_info.local_done # see if episode finished\n \n maddpg_agents.step(states, actions, rewards, next_states, dones)\n score += rewards # update the score (for each agent)\n states = next_states # roll over states to next time step\n time_stamp+=1\n if np.any(dones):\n break \n scores_deque.append(np.max(score))\n scores.append(np.max(score))\n print('\\rEpisode {}\\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)), end=\"\")\n for i in range(num_agents):\n torch.save(maddpg_agents.agents[i].actor_local.state_dict(), str(i)+'checkpoint_actor.pth')\n torch.save(maddpg_agents.agents[i].critic_local.state_dict(),str(i)+'checkpoint_critic.pth')\n if i_episode % print_every == 0:\n print('\\rEpisode {}\\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))\n if np.mean(scores_deque)>=0.5:\n print('\\nEnvironment solved in {:d} episodes!\\tAverage100 Score: {:.2f}'.format(i_episode-100, np.mean(scores_deque)))\n for i in range(num_agents):\n torch.save(maddpg_agents.agents[i].actor_local.state_dict(), str(i)+'actor_checkpoint.pth')\n torch.save(maddpg_agents.agents[i].critic_local.state_dict(),str(i)+'critic_checkpoint.pth')\n break \n return scores",
"_____no_output_____"
],
[
"def plot_score(scores):\n fig = plt.figure()\n ax = fig.add_subplot(111)\n plt.plot(np.arange(1, len(scores)+1), scores)\n plt.ylabel('Score')\n plt.xlabel('Episode #')\n plt.show()",
"_____no_output_____"
],
[
"%%time\nprint(\"\\n\\nTest try #1: base line\")\nscores = MA_ddpg(n_episodes=10000, max_t=1000, print_every=100,random_seed=2, num_agents=num_agents,update_every=2,\\\n actor_fc1_units=200, actor_fc2_units=150,\\\n critic_fcs1_units=400, critic_fc2_units=300,\\\n gamma=0.99, tau=1e-2,lr_actor=1e-4, lr_critic=1e-3, weight_decay=0,\\\n mu=0., theta=0.15, sigma=0.2)\nplot_score(scores)",
"\n\nTest try #1: base line\nEpisode 100\tAverage Score: 0.00\nEpisode 200\tAverage Score: 0.00\nEpisode 300\tAverage Score: 0.00\nEpisode 400\tAverage Score: 0.01\nEpisode 500\tAverage Score: 0.00\nEpisode 600\tAverage Score: 0.02\nEpisode 700\tAverage Score: 0.00\nEpisode 800\tAverage Score: 0.02\nEpisode 900\tAverage Score: 0.04\nEpisode 1000\tAverage Score: 0.05\nEpisode 1100\tAverage Score: 0.05\nEpisode 1200\tAverage Score: 0.06\nEpisode 1300\tAverage Score: 0.05\nEpisode 1400\tAverage Score: 0.06\nEpisode 1500\tAverage Score: 0.05\nEpisode 1600\tAverage Score: 0.04\nEpisode 1700\tAverage Score: 0.05\nEpisode 1800\tAverage Score: 0.05\nEpisode 1900\tAverage Score: 0.05\nEpisode 2000\tAverage Score: 0.06\nEpisode 2100\tAverage Score: 0.05\nEpisode 2200\tAverage Score: 0.06\nEpisode 2300\tAverage Score: 0.07\nEpisode 2400\tAverage Score: 0.07\nEpisode 2500\tAverage Score: 0.07\nEpisode 2600\tAverage Score: 0.09\nEpisode 2700\tAverage Score: 0.10\nEpisode 2800\tAverage Score: 0.12\nEpisode 2900\tAverage Score: 0.13\nEpisode 3000\tAverage Score: 0.26\nEpisode 3100\tAverage Score: 0.41\nEpisode 3126\tAverage Score: 0.51\nEnvironment solved in 3026 episodes!\tAverage100 Score: 0.51\n"
],
[
"# load the weights from file\ndef let_agent_play():\n for i in range(num_agents):\n maddpg_agents.agents[i].actor_local.load_state_dict(torch.load(str(i)+'checkpoint_actor.pth'))\n maddpg_agents.agents[i].critic_local.load_state_dict(torch.load(str(i)+'checkpoint_critic.pth'))\n max_t=1000\n play_episodes=100\n play_scores = [] # list containing scores from each episode\n play_scores_window = deque(maxlen=100) # last 100 scores\n for i_episode in range(1, play_episodes+1):\n env_info = env.reset(train_mode=True)[brain_name]\n states = env_info.vector_observations \n for agent in maddpg_agents.agents:\n agent.reset()\n score = np.zeros(num_agents)\n for t in range(max_t):\n actions = maddpg_agents.act(states)\n env_info = env.step(actions)[brain_name] # send all actions to tne environment\n next_states = env_info.vector_observations # get next state (for each agent)\n rewards = env_info.rewards # get reward (for each agent)\n dones = env_info.local_done # see if episode finished\n score += rewards # update the score (for each agent)\n states = next_states # roll over states to next time step\n if np.any(dones):\n break \n play_scores_window.append(np.max(score)) # save most recent score\n play_scores.append(np.max(score)) # save most recent score\n print('\\rEpisode {}\\tAverage Score: {:.2f}'.format(i_episode, np.mean(play_scores_window)), end=\"\")\n if i_episode % 100 == 0:\n print('\\rEpisode {}\\tAverage Score: {:.2f}'.format(i_episode, np.mean(play_scores_window)))\n return play_scores",
"_____no_output_____"
],
[
"maddpg_agents = MAagent(state_size=state_size, action_size=action_size, \\\n noise_scalar_init=1.0, noise_reduction_factor=0.999,\\\n random_seed=2, num_agents=num_agents,update_every=2,\\\n actor_fc1_units=200, actor_fc2_units=150,\\\n critic_fcs1_units=400, critic_fc2_units=300,\\\n gamma=0.99, tau=1e-2,lr_actor=1e-4, lr_critic=1e-3, weight_decay=0,\\\n mu=0., theta=0.15, sigma=0.2)\nplay_scores=let_agent_play()\nplot_score(play_scores)",
"Episode 100\tAverage Score: 0.58\n"
],
[
"env.close()",
"_____no_output_____"
],
[
"# code for debugging. \n\n# maddpg_agents = MAagent(state_size=state_size, action_size=action_size, random_seed=2,num_agents=num_agents)\n\n# actions=[maddpg_agents.maddpg_agents[i].act(states[i, np.newaxis]).flatten() for i in range(maddpg_agents.num_agents)]\n# np.array(actions)\n\n# for i in range(1, 129): # play game for 5 episodes\n# env_info = env.reset(train_mode=True)[brain_name]\n# states = env_info.vector_observations \n# for i in range(num_agents):\n# maddpg_agents.agents[i].reset()\n# score = np.zeros(num_agents)\n# actions = maddpg_agents.act(states)\n# env_info = env.step(actions)[brain_name] # send all actions to tne environment\n# next_states = env_info.vector_observations # get next state (for each agent)\n# rewards = env_info.rewards # get reward (for each agent)\n# rewards\n# dones = env_info.local_done # see if episode finished\n# maddpg_agents.step(states, actions, rewards, next_states, dones)\n\n# experiences = random.sample(maddpg_agents.memory.memory, k=2)\n# print(experiences)\n\n# obs = [torch.from_numpy(np.vstack([e.state[i] for e in experiences if e is not None])).float() for i in range(num_agents)]\n# states = torch.from_numpy(np.vstack([e.state.flatten() for e in experiences if e is not None])).float() \n# actions = torch.from_numpy(np.vstack([e.action.flatten() for e in experiences if e is not None])).float()\n# rewards = [torch.from_numpy(np.vstack([e.reward[i] for e in experiences if e is not None])).float() \\\n# for i in range(num_agents)]\n# next_obs = [torch.from_numpy(np.vstack([e.next_state[i] for e in experiences if e is not None])).float() \\\n# for i in range(num_agents)]\n# next_states = torch.from_numpy(np.vstack([e.next_state.flatten() for e in experiences if e is not None])).float()\n# dones = [torch.from_numpy(np.vstack([e.done[i] for e in experiences if e is not None]).astype(np.uint8)).float() for i in range(num_agents)]\n# experiences[0].reward\n\n# (obs, states, actions, rewards, next_obs, next_states, dones)=maddpg_agents.memory.sample()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76d2139e69a184d3b81b1c3ef0add11cdce94ad | 5,792 | ipynb | Jupyter Notebook | Sem4/Maths3_Monty_Hall.ipynb | sban2009/STCET | 08c1bd5a926dfd47f0a11ceb85ac491138e074a6 | [
"MIT"
] | null | null | null | Sem4/Maths3_Monty_Hall.ipynb | sban2009/STCET | 08c1bd5a926dfd47f0a11ceb85ac491138e074a6 | [
"MIT"
] | null | null | null | Sem4/Maths3_Monty_Hall.ipynb | sban2009/STCET | 08c1bd5a926dfd47f0a11ceb85ac491138e074a6 | [
"MIT"
] | null | null | null | 38.613333 | 113 | 0.409012 | [
[
[
"'''\nMONTY HALL PROBLEM\nCSE 2ND YEAR\nGROUP 1\nM401 MICRO PROJECT\n'''\nimport random\ndoors=[\"\",\"\",\"\"] #LIST OF DOORS\ngoatdoor=[] #LIST OF DOORS NOT CONTAINING PRIZE\nswap=0 #NO. OF SWAPPED ATTEMPTS\nXswap=0 #NO. OF RETAINED ATTEMPTS\nswapWins=0 #WINS AFTER SWAPPING\nXswapWins=0 #WINS AFTER RETAINING (NOT SWAPPING)\nexit=False\nwhile(not exit):\n x=random.randint(0,2) #DOOR CONTAINING PRIZE\n doors[x]=\"PRIZE\" #INDEX x OF doors HAS PRIZE\n for i in range(0,3): #FORMING THE LIST goatdoor\n if i!=x:\n doors[i]=\"GOAT\" #VALUES IN doors NOT EQUAL TO \"PRIZE\"\n goatdoor.append(i) #ARE MADE \"GOAT\"\n ch1=int(input(\"Enter choice of door (1/2/3): \")) #USER INPUT OF DOOR CHOICE\n ch1-=1 #SINCE WE ARE WORKING WITH START INDEX 0 (SO 1/2/3 => 0/1/2)\n opendoor=random.choice(goatdoor) #GIVES US RANDOM INDEX FROM goatdoor\n while opendoor==ch1:\n opendoor=random.choice(goatdoor) #opendoor NOT EQUAL TO CHOICE ch1 OF USER\n print(\"Door%2d has GOAT\"%(opendoor+1)) #USER IS SHOWN WHAT'S BEHIND ONE OF THE REMAINING DOORS\n ch2=input(\"Do you want to swap? (Y/N): \") #PROMPT TO SWAP\n W=\"PLAYER WINS\" #STRINGS TO STORE\n L=\"PLAYER LOSES\" #WIN/LOSS MESSAGES\n if str.upper(ch2)==\"Y\": #SWAPS\n swap+=1\n if doors[ch1]==\"GOAT\": #INITIAL CHOICE GOAT: USER WINS\n print(W)\n swapWins+=1 ##WIN BY SWAPPING\n else: #OR ELSE USER LOSES\n print(L)\n else: #RETAINS\n Xswap+=1\n if doors[ch1]==\"GOAT\": #INITIAL CHOICE GOAT: USER LOSES\n print(L)\n else: #OR ELSE USER WINS\n print(W)\n XswapWins+=1 ##WIN BY RETAINING\n if str.upper(input(\"Exit? (Y/N): \"))==\"Y\":\n exit=True #EXIT CONDITION\nprint(\"\\n\")\nprint(\"Swap attempts:\",swap)\nprint(\"Wins after swapping: \",swapWins)\nprint(\"\\n\")\nprint(\"Retain attempts:\",Xswap)\nprint(\"Wins after retaining: \",XswapWins)",
"Enter choice of door (1/2/3): 2\nDoor 3 has GOAT\nDo you want to swap? (Y/N): n\nPLAYER LOSES\nExit? (Y/N): N\nEnter choice of door (1/2/3): 1\nDoor 3 has GOAT\nDo you want to swap? (Y/N): Y\nPLAYER WINS\nExit? (Y/N): n\nEnter choice of door (1/2/3): 2\nDoor 1 has GOAT\nDo you want to swap? (Y/N): n\nPLAYER LOSES\nExit? (Y/N): N\nEnter choice of door (1/2/3): 3\nDoor 2 has GOAT\nDo you want to swap? (Y/N): N\nPLAYER LOSES\nExit? (Y/N): N\nEnter choice of door (1/2/3): 1\nDoor 2 has GOAT\nDo you want to swap? (Y/N): Y\nPLAYER LOSES\nExit? (Y/N): N\nEnter choice of door (1/2/3): 2\nDoor 1 has GOAT\nDo you want to swap? (Y/N): N\nPLAYER LOSES\nExit? (Y/N): N\nEnter choice of door (1/2/3): 3\nDoor 2 has GOAT\nDo you want to swap? (Y/N): Y\nPLAYER WINS\nExit? (Y/N): N\nEnter choice of door (1/2/3): 1\nDoor 3 has GOAT\nDo you want to swap? (Y/N): N\nPLAYER WINS\nExit? (Y/N): N\nEnter choice of door (1/2/3): 1\nDoor 2 has GOAT\nDo you want to swap? (Y/N): Y\nPLAYER LOSES\nExit? (Y/N): N\nEnter choice of door (1/2/3): 2\nDoor 1 has GOAT\nDo you want to swap? (Y/N): Y\nPLAYER WINS\nExit? (Y/N): Y\n\n\nSwap attempts: 5\nWins after swapping: 3\n\n\nRetain attempts: 5\nWins after retaining: 1\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
e76d24e98725be9de1010306c0a4266cb1322ea8 | 337,867 | ipynb | Jupyter Notebook | scrapbook/osrm_example.ipynb | alan-turing-institute/dsg-CityMaaS | 85ef01d1d86daa2c15831e1a88127c2a3f4273b1 | [
"MIT"
] | 2 | 2021-04-21T15:07:53.000Z | 2021-04-22T08:15:43.000Z | scrapbook/osrm_example.ipynb | alan-turing-institute/dsg-CityMaaS | 85ef01d1d86daa2c15831e1a88127c2a3f4273b1 | [
"MIT"
] | null | null | null | scrapbook/osrm_example.ipynb | alan-turing-institute/dsg-CityMaaS | 85ef01d1d86daa2c15831e1a88127c2a3f4273b1 | [
"MIT"
] | 2 | 2021-04-22T12:42:22.000Z | 2021-05-05T22:41:54.000Z | 1,304.505792 | 331,199 | 0.767704 | [
[
[
"import requests\nimport pandas as pd\nimport folium",
"_____no_output_____"
],
[
"BASE_REQUEST = 'http://router.project-osrm.org/route/v1/foot/'",
"_____no_output_____"
],
[
"def getRoute(*args,points_to_include=[],points_to_avoid=[]):\n k = ';'.join(args)\n req = BASE_REQUEST + k + '?overview=full&steps=true&alternatives=3'\n _r = requests.get(req)\n _ = []\n for _route in range(len(_r.json()['routes'])):\n _tmp = pd.DataFrame()\n for _leg in range(len(_r.json()['routes'][_route]['legs'])):\n _tmp = _tmp.append(pd.json_normalize(_r.json()['routes'][_route]['legs'][_leg]['steps']))\n _.append(_tmp.reset_index(drop=True))\n \n return _\n \n\ndef getLocations(_df):\n loc = []\n for row,col in _df.iterrows():\n _ = pd.json_normalize(col['intersections'])\n for _row,_col in _.iterrows():\n loc.append((_col['location'][1],_col['location'][0]))\n \n return loc",
"_____no_output_____"
],
[
"\ncoord1 = '-0.076303,51.50815' # Tower of London\ncoord2 = '-0.108711,51.50457' # London Waterloo East Tube Station\ncoord3 = '-0.124613,51.50106' # Big Ben\ncoord4 = '-0.60604413,51.48402' # Windsor castle\n\nres = getRoute(coord1,coord4)\n\nprint(f'Amount of routes found: {len(res)}')\nfor i in range(len(res)):\n print(f'Length of route 1: {sum(res[i][\"distance\"])} meters')",
"Amount of routes found: 2\nLength of route 1: 39587.2 meters\nLength of route 1: 41887.6 meters\n"
],
[
"map = folium.Map(location=[51.5, -0.1], zoom_start=13)\ncolors=['#000000','#FF0000','#00FF00','#0000FF']\nfor routes in range(len(res)):\n l = getLocations(res[routes])\n for point in l:\n folium.CircleMarker(point, radius=5, color=colors[routes]).add_to(map)\nmap",
"_____no_output_____"
],
[
"df2 = pd.json_normalize(df.iloc[1]['intersections'])\ndf2",
"_____no_output_____"
],
[
"sum(df['distance'])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76d2be464ccb9fec5ee5c959e4c9ac22423bc11 | 18,303 | ipynb | Jupyter Notebook | RECONOCIMIENTO/256/256_3_VGG from scratch-Copy2.ipynb | mercator-upm/tfg-victor-fernandez | 62d97171d39a4ae0f781f5c7b759def8891974ee | [
"Apache-2.0"
] | null | null | null | RECONOCIMIENTO/256/256_3_VGG from scratch-Copy2.ipynb | mercator-upm/tfg-victor-fernandez | 62d97171d39a4ae0f781f5c7b759def8891974ee | [
"Apache-2.0"
] | null | null | null | RECONOCIMIENTO/256/256_3_VGG from scratch-Copy2.ipynb | mercator-upm/tfg-victor-fernandez | 62d97171d39a4ae0f781f5c7b759def8891974ee | [
"Apache-2.0"
] | null | null | null | 50.008197 | 292 | 0.598263 | [
[
[
"import os, shutil\nimport numpy as np\nimport tensorflow as tf\nbase_dir = 'tiles'\n\ntrain_dir = os.path.join(base_dir, 'train')\nvalidation_dir = os.path.join(base_dir, 'validation')",
"/home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n/home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it 
will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n"
],
[
"config = tf.ConfigProto()\nconfig.gpu_options.allow_growth = True\nsess = tf.Session(config=config)",
"_____no_output_____"
],
[
"import keras\nfrom keras import layers\nfrom keras import models\nfrom keras.applications import VGG16\nfrom keras.preprocessing import image\nfrom keras.preprocessing.image import ImageDataGenerator\n\nimg_width, img_height = 256, 256\nconv_base= VGG16(weights=None, include_top=False, input_shape=(img_width,img_height,3))\n\nmodel = models.Sequential()\nmodel.add(conv_base)\nmodel.add(layers.MaxPooling2D(pool_size=(2,2),strides=(2,2)))\nmodel.add(layers.Flatten())\nmodel.add(layers.Dense(units=4096,activation=\"relu\")) #si no entrena bien: init = 'he_normal' #he_uniform\nmodel.add(layers.Dense(units=4096,activation=\"relu\"))\nmodel.add(layers.Dense(units=1, activation=\"sigmoid\"))\n\nmodel.summary()",
"WARNING:tensorflow:From /home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n\nWARNING:tensorflow:From /home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n\nWARNING:tensorflow:From /home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n\nWARNING:tensorflow:From /home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3976: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.\n\n"
],
[
"from keras import optimizers\n\nmodel.compile(loss='binary_crossentropy', optimizer=optimizers.Adam(lr=1e-4), metrics=['acc']) #si es demasiado lento, Adam(lr=0.001)",
"WARNING:tensorflow:From /home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n\nWARNING:tensorflow:From /home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3376: The name tf.log is deprecated. Please use tf.math.log instead.\n\nWARNING:tensorflow:From /home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/tensorflow/python/ops/nn_impl.py:180: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\n"
],
[
"from keras.preprocessing.image import ImageDataGenerator\ndatagen = ImageDataGenerator(rescale=1. /255)\ntrain_datagen = ImageDataGenerator(rescale=1. / 255, rotation_range=25, \n width_shift_range=0.1, height_shift_range=0.1, \n zoom_range=0.1, horizontal_flip=True, vertical_flip=True, \n fill_mode='nearest')\n\ntrain_generator = train_datagen.flow_from_directory(train_dir, target_size=(256, 256),\n batch_size=50, class_mode='binary')\nvalidation_datagen = ImageDataGenerator(rescale=1. / 255)\nvalidation_generator = validation_datagen.flow_from_directory(validation_dir, target_size=(256, 256), \n batch_size=50,class_mode='binary')",
"Found 484048 images belonging to 2 classes.\nFound 26893 images belonging to 2 classes.\n"
],
[
"train_steps_per_epoch = np.math.ceil(train_generator.samples / train_generator.batch_size)\nvalidation_steps_per_epoch = np.math.ceil(validation_generator.samples / validation_generator.batch_size)\n\n#teniendo en cuenta el desbalanceo de clases\nfrom sklearn.utils import class_weight\nclass_weights = class_weight.compute_class_weight('balanced',np.unique(train_generator.classes), train_generator.classes)\nclass_weights",
"/home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/sklearn/utils/validation.py:70: FutureWarning: Pass classes=[0 1], y=[0 0 0 ... 1 1 1] as keyword args. From version 0.25 passing these as positional arguments will result in an error\n FutureWarning)\n"
],
[
"history = model.fit_generator(train_generator, steps_per_epoch= train_steps_per_epoch, \n epochs=50, validation_data=validation_generator,\n validation_steps=validation_steps_per_epoch, class_weight=class_weights) #callbacks = [tensorboard]\n",
"WARNING:tensorflow:From /home/miguelmmanso/anaconda3/envs/pruebas/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:986: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.\n\nEpoch 1/50\n9681/9681 [==============================] - 4988s 515ms/step - loss: 0.5138 - acc: 0.7290 - val_loss: 0.4430 - val_acc: 0.7851\nEpoch 2/50\n9681/9681 [==============================] - 4960s 512ms/step - loss: 0.4188 - acc: 0.8000 - val_loss: 0.3853 - val_acc: 0.8152\nEpoch 3/50\n9681/9681 [==============================] - 4958s 512ms/step - loss: 0.3933 - acc: 0.8117 - val_loss: 0.3899 - val_acc: 0.8208\nEpoch 4/50\n9681/9681 [==============================] - 4959s 512ms/step - loss: 0.3778 - acc: 0.8184 - val_loss: 0.3661 - val_acc: 0.8282\nEpoch 5/50\n9681/9681 [==============================] - 4959s 512ms/step - loss: 0.3674 - acc: 0.8223 - val_loss: 0.3650 - val_acc: 0.8269\nEpoch 6/50\n9681/9681 [==============================] - 4965s 513ms/step - loss: 0.3584 - acc: 0.8251 - val_loss: 0.3695 - val_acc: 0.8236\nEpoch 7/50\n9681/9681 [==============================] - 4968s 513ms/step - loss: 0.3522 - acc: 0.8267 - val_loss: 0.3729 - val_acc: 0.8257\nEpoch 8/50\n9681/9681 [==============================] - 4970s 513ms/step - loss: 0.3465 - acc: 0.8291 - val_loss: 0.3492 - val_acc: 0.8298\nEpoch 9/50\n9681/9681 [==============================] - 4968s 513ms/step - loss: 0.3427 - acc: 0.8308 - val_loss: 0.3519 - val_acc: 0.8301\nEpoch 10/50\n9681/9681 [==============================] - 4970s 513ms/step - loss: 0.3395 - acc: 0.8319 - val_loss: 0.3887 - val_acc: 0.8235\nEpoch 11/50\n9681/9681 [==============================] - 4969s 513ms/step - loss: 0.3364 - acc: 0.8332 - val_loss: 0.3598 - val_acc: 0.8321\nEpoch 12/50\n9681/9681 [==============================] - 4967s 513ms/step - loss: 0.3340 - acc: 0.8343 - val_loss: 0.3414 - val_acc: 0.8326\nEpoch 13/50\n9681/9681 [==============================] - 4960s 512ms/step - loss: 0.3314 - acc: 0.8353 - val_loss: 0.3396 - val_acc: 0.8351\nEpoch 14/50\n9681/9681 [==============================] - 5019s 518ms/step - loss: 0.3299 - acc: 0.8357 - val_loss: 0.3442 - val_acc: 0.8341\nEpoch 15/50\n9681/9681 [==============================] - 4969s 513ms/step - loss: 0.3272 - acc: 0.8374 - val_loss: 0.3436 - val_acc: 0.8340\nEpoch 16/50\n 624/9681 [>.............................] - ETA: 1:15:31 - loss: 0.3224 - acc: 0.8376"
],
[
"import matplotlib.pyplot as plt\nacc = history.history['acc']\nval_acc = history.history['val_acc']\nplt.figure(figsize=(20,10))\nloss = history.history['loss']\nval_loss = history.history['val_loss']\nepochs = range(1, len(acc) +1) \nplt.rcParams.update({'font.size':18})\nplt.plot(epochs, acc, 'bo--', color='r', label='Training acc')\nplt.plot(epochs, val_acc, 'bo--', color='b', label='Validation acc')\nplt.legend()\nplt.plot(epochs, loss, color= 'r', label='Training loss')\nplt.plot(epochs, val_loss, color=\"b\", label='Validation loss')\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"model.save(\"models1/vgg16_noweights-3.h5\")",
"_____no_output_____"
],
[
"#EARLY STOPPING\n#from keras.callbacks import ModelCheckpoint, EarlyStopping\n#checkpoint = ModelCheckpoint(\"vgg16_1.h5\", monitor='val_acc', verbose=1,\n# save_best_only=True, save_weights_only=False, mode='auto', \n# period=5)\n#early = EarlyStopping(monitor='val_acc', min_delta=0, patience=50, verbose=1, mode='auto')\n\n#hist = model.fit_generator(steps_per_epoch=100,generator=traindata, \n# validation_data= testdata, validation_steps=10,epochs=100,\n# callbacks=[checkpoint,early])",
"_____no_output_____"
],
[
"#from keras.preprocessing import imageimg = image.load_img(\"image.jpeg\",target_size=(224,224))\n#img = np.asarray(img)\n#plt.imshow(img)\n#img = np.expand_dims(img, axis=0)from keras.models import load_model\n#saved_model = load_model(\"vgg16_1.h5\")output = saved_model.predict(img)\n#if output[0][0] > output[0][1]:\n# print(\"Road\")\n#else:\n# print('No Road')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76d33d2b6b86f813975745cc1f844fc9b4373a8 | 165,874 | ipynb | Jupyter Notebook | V8/4_unix.stackexchange.com.ipynb | fred1234/BDLC_FS22 | 9db012026bfa1b8672bb4c62558b3beac4baff98 | [
"MIT"
] | 1 | 2022-02-24T12:47:23.000Z | 2022-02-24T12:47:23.000Z | V8/4_unix.stackexchange.com.ipynb | fred1234/BDLC_FS22 | 9db012026bfa1b8672bb4c62558b3beac4baff98 | [
"MIT"
] | null | null | null | V8/4_unix.stackexchange.com.ipynb | fred1234/BDLC_FS22 | 9db012026bfa1b8672bb4c62558b3beac4baff98 | [
"MIT"
] | null | null | null | 97.630371 | 708 | 0.324409 | [
[
[
"## Stackexchange Dataset",
"_____no_output_____"
]
],
[
[
"! wget https://drive.switch.ch/index.php/s/I6hiqbHZRCFwZGj/download -O /data/dataset/stackexchange.zip",
"_____no_output_____"
],
[
"!ls -lisah /data/dataset/stackexchange.zip",
"_____no_output_____"
],
[
"!unzip /data/dataset/stackexchange.zip -d /data/dataset/",
"_____no_output_____"
],
[
"!ls -lisah /data/dataset/stackexchange.com/unix.stackexchange.com/json/",
"_____no_output_____"
]
],
[
[
"## Init Spark",
"_____no_output_____"
]
],
[
[
"import findspark\nfindspark.init()",
"_____no_output_____"
],
[
"from pyspark.sql import SparkSession\n\nspark = SparkSession \\\n .builder \\\n .appName(\"unix.stackexchange.com\") \\\n .getOrCreate()",
"_____no_output_____"
]
],
[
[
"## Badges",
"_____no_output_____"
]
],
[
[
"!head /data/dataset/stackexchange.com/unix.stackexchange.com/json/Badges.json",
"_____no_output_____"
],
[
"path = \"file:///data/dataset/stackexchange.com/unix.stackexchange.com/json/Badges.json\"",
"_____no_output_____"
],
[
"badges = spark.read.json(path)",
"_____no_output_____"
],
[
"badges.show(3, truncate=False)",
"_____no_output_____"
],
[
"badges.printSchema()",
"_____no_output_____"
]
],
[
[
"# Inspect all Files",
"_____no_output_____"
]
],
[
[
"!hdfs dfs -rm -r \"/dataset/unix.stackexchange.com\"",
"_____no_output_____"
],
[
"def get_info(name):\n print(f\"info for {name}\")\n print(\"------------------------------------\")\n path = f\"file:///data/dataset/stackexchange.com/unix.stackexchange.com/json/{name}.json\"\n df = spark.read.json(path)\n df.show(3, truncate=False)\n df.printSchema()\n return df",
"_____no_output_____"
],
[
"all_names = !ls /data/dataset/stackexchange.com/unix.stackexchange.com/json/",
"_____no_output_____"
],
[
"all_names = [name[:-5] for name in all_names]",
"_____no_output_____"
],
[
"all_names",
"_____no_output_____"
]
],
[
[
"# Save as Parquet",
"_____no_output_____"
]
],
[
[
"def save_as_parquet(name, df):\n print(f\"saving {name}\")\n print(\"------------------------------------\")\n \n df.show(3, truncate=False)\n df.printSchema()\n \n lower_name = name.lower()\n df.repartition(15).write.parquet(f\"/dataset/unix.stackexchange.com/{lower_name}.parquet\")",
"_____no_output_____"
]
],
[
[
"## Badges",
"_____no_output_____"
]
],
[
[
"df = get_info('Badges')",
"_____no_output_____"
],
[
"# https://spark.apache.org/docs/latest/api/python/_modules/pyspark/sql/functions.html\n# https://sparkbyexamples.com/spark/spark-sql-functions/\nfrom pyspark.sql import functions as f\n\ndf.select(f.min(\"Class\"), f.max(\"Class\")).collect()",
"_____no_output_____"
],
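[
"# A minimal sketch of the alternative hinted at in the next cell's comments: casting with\n# explicit DataType objects from pyspark.sql.types instead of selectExpr strings.\n# It keeps the original column names and only changes the types of a few columns of the\n# Badges frame loaded above; the notebook itself uses the selectExpr variant below.\nfrom pyspark.sql import functions as F\nfrom pyspark.sql.types import IntegerType, ByteType, BooleanType, TimestampType\n\nbadges_typed = (\n    df.withColumn('Id', F.col('Id').cast(IntegerType()))\n      .withColumn('UserId', F.col('UserId').cast(IntegerType()))\n      .withColumn('Class', F.col('Class').cast(ByteType()))\n      .withColumn('TagBased', F.col('TagBased').cast(BooleanType()))\n      .withColumn('Date', F.col('Date').cast(TimestampType()))\n)\nbadges_typed.printSchema()",
"_____no_output_____"
],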
[
"# https://sparkbyexamples.com/pyspark/pyspark-cast-column-type/#:~:text=In%20PySpark%2C%20you%20can%20cast,Boolean%20e.t.c%20using%20PySpark%20examples.\n\n\n# Another way would be via Types\n# https://sparkbyexamples.com/pyspark/pyspark-sql-types-datatype-with-examples/\n# https://spark.apache.org/docs/latest/sql-ref-datatypes.html\n# from pyspark.sql.types import *\n\n\nsave_as_parquet(\"Badges\", df.selectExpr(\\\n \"cast(Id as int) id\", \\\n \"cast(UserId as int) user_id\", \\\n \"cast(Class as byte) class\", \\\n \"cast(Name as string) name\", \\\n \"cast(TagBased as boolean) tag_based\", \\\n \"cast(Date as timestamp) date\" \\\n ))\n\n",
"_____no_output_____"
]
],
[
[
"## Comments",
"_____no_output_____"
]
],
[
[
"df = get_info('Comments')",
"_____no_output_____"
],
[
"df.filter(\"UserDisplayName is not null\").show(2)",
"_____no_output_____"
],
[
"save_as_parquet(\"Comments\", df.selectExpr(\\\n \"cast(Id as int) id\", \\\n \"cast(PostId as int) post_id\", \\\n \"cast(UserId as int) user_id\", \\\n \"cast(Score as byte) score\", \\\n \"cast(ContentLicense as string) content_license\", \\\n \"cast(UserDisplayName as string) user_display_name\", \\\n \"cast(Text as String) text\", \\\n \"cast(CreationDate as timestamp) creation_date\" \\\n ))",
"_____no_output_____"
]
],
[
[
"## PostHistory",
"_____no_output_____"
]
],
[
[
"df = get_info('PostHistory')",
"_____no_output_____"
],
[
"save_as_parquet(\"PostHistory\", df.selectExpr(\\\n \"cast(Id as int) id\", \\\n \"cast(PostId as int) post_id\", \\\n \"cast(UserId as int) user_id\", \\\n \"cast(PostHistoryTypeId as byte) post_history_type_id\", \\\n \"cast(UserDisplayName as string) user_display_name\", \\\n \"cast(ContentLicense as string) content_license\", \\\n \"cast(RevisionGUID as string) revision_guid\", \\\n \"cast(Text as String) text\", \\\n \"cast(Comment as String) comment\", \\\n \"cast(CreationDate as timestamp) creation_date\" \\\n ))",
"_____no_output_____"
]
],
[
[
"## PostLinks",
"_____no_output_____"
]
],
[
[
"df = get_info('PostLinks')",
"_____no_output_____"
],
[
"save_as_parquet(\"PostLinks\", df.selectExpr(\\\n \"cast(Id as int) id\", \\\n \"cast(RelatedPostId as int) related_post_id\", \\\n \"cast(PostId as int) post_id\", \\\n \"cast(LinkTypeId as byte) link_type_id\", \\\n \"cast(CreationDate as timestamp) creation_date\" \\\n ))",
"_____no_output_____"
]
],
[
[
"## Posts",
"_____no_output_____"
]
],
[
[
"df = get_info('Posts')",
"_____no_output_____"
],
[
"save_as_parquet(\"Posts\", df.selectExpr(\\\n \"cast(Id as int) id\", \\\n \"cast(OwnerUserId as int) owner_user_id\", \\\n \"cast(LastEditorUserId as int) last_editor_user_id\", \\\n \"cast(PostTypeId as short) post_type_id\", \\\n \"cast(AcceptedAnswerId as int) accepted_answer_id\", \\\n \"cast(Score as int) score\", \\\n \"cast(ParentId as int) parent_id\", \\\n \"cast(ViewCount as int) view_count\", \\\n \"cast(AnswerCount as int) answer_count\", \\\n \"cast(CommentCount as int) comment_count\", \\\n \"cast(OwnerDisplayName as string) owner_display_name\", \\\n \"cast(LastEditorDisplayName as string) last_editor_display_name\", \\\n \"cast(Title as String) title\", \\\n \"cast(Tags as String) tags\", \\\n \"cast(ContentLicense as string) content_license\", \\\n \"cast(Body as string) body\", \\\n \"cast(FavoriteCount as int) favorite_count\", \\\n \"cast(CreationDate as timestamp) creation_date\", \\\n \"cast(CommunityOwnedDate as timestamp) community_owned_date\", \\\n \"cast(ClosedDate as timestamp) closed_date\", \\\n \"cast(LastEditDate as timestamp) last_edit_date\", \\\n \"cast(LastActivityDate as timestamp) last_activity_date\" \\\n ))",
"_____no_output_____"
]
],
[
[
"## Tags",
"_____no_output_____"
]
],
[
[
"df = get_info('Tags')",
"_____no_output_____"
],
[
"save_as_parquet(\"Tags\", df.selectExpr(\\\n \"cast(Id as int) id\", \\\n \"cast(ExcerptPostId as int) excerpt_post_id\", \\\n \"cast(WikiPostId as int) wiki_post_id\", \\\n \"cast(TagName as string) tag_name\", \\\n \"cast(Count as int) count\" \\\n ))",
"_____no_output_____"
]
],
[
[
"## Users",
"_____no_output_____"
]
],
[
[
"df = get_info('Users')",
"_____no_output_____"
],
[
"save_as_parquet(\"Users\", df.selectExpr(\\\n \"cast(Id as int) id\", \\\n \"cast(AccountId as int) account_id\", \\\n \"cast(Reputation as int) reputation\", \\\n \"cast(Views as int) views\", \\\n \"cast(DownVotes as int) down_votes\", \\\n \"cast(UpVotes as int) up_votes\", \\\n \"cast(DisplayName as string) display_name\", \\\n \"cast(Location as string) location\", \\\n \"cast(ProfileImageUrl as string) profile_image_url\", \\\n \"cast(WebsiteUrl as string) website_url\", \\\n \"cast(AboutMe as string) about_me\", \\\n \"cast(CreationDate as timestamp) creation_date\", \\\n \"cast(LastAccessDate as timestamp) last_access_date\" \\\n ))",
"_____no_output_____"
]
],
[
[
"## Votes",
"_____no_output_____"
]
],
[
[
"df = get_info('Votes')",
"_____no_output_____"
],
[
"save_as_parquet(\"Votes\", df.selectExpr(\\\n \"cast(Id as int) id\", \\\n \"cast(UserId as int) user_id\", \\\n \"cast(PostId as int) post_id\", \\\n \"cast(VoteTypeId as byte) vote_type_id\", \\\n \"cast(BountyAmount as byte) bounty_amount\", \\\n \"cast(CreationDate as timestamp) creation_date\" \\\n ))",
"_____no_output_____"
]
],
[
[
"# Analysing",
"_____no_output_____"
]
],
[
[
"def get_path(name):\n return f\"/dataset/unix.stackexchange.com/{name}.parquet\"",
"_____no_output_____"
]
],
[
[
"## Read all Parquets",
"_____no_output_____"
]
],
[
[
"badges = spark.read.parquet(get_path(\"badges\"))\ncomments = spark.read.parquet(get_path(\"comments\"))\nposthistory = spark.read.parquet(get_path(\"posthistory\"))\npostlinks = spark.read.parquet(get_path(\"postlinks\"))\nposts = spark.read.parquet(get_path(\"posts\"))\ntags = spark.read.parquet(get_path(\"tags\"))\nusers = spark.read.parquet(get_path(\"users\"))\nvotes = spark.read.parquet(get_path(\"votes\"))",
"_____no_output_____"
],
[
"print(\"badges\")\nbadges.show(3)\nprint(\"comments\")\ncomments.show(3)\nprint(\"posthistory\")\nposthistory.show(3)\nprint(\"postlinks\")\npostlinks.show(3)\nprint(\"posts\")\nposts.show(3)\nprint(\"tags\")\ntags.show(3)\nprint(\"users\")\nusers.show(3)\nprint(\"votes\")\nvotes.show(3)",
"_____no_output_____"
]
],
[
[
"## Tags",
"_____no_output_____"
]
],
[
[
"# tags = spark.read.parquet(get_path(\"tags\"))",
"_____no_output_____"
],
[
"tags.show()",
"_____no_output_____"
],
[
"from pyspark.sql import functions as f\n\ntags.filter(f.col(\"tag_name\") == \"async\").show()",
"_____no_output_____"
],
[
"tags.filter(\"tag_name = 'async'\").show()",
"_____no_output_____"
],
[
"tags.filter(\"tag_name like '%async%'\").show()",
"_____no_output_____"
],
[
"tags.filter(f.col(\"tag_name\").like('%async%')).show()",
"_____no_output_____"
],
[
"tags.select(\"tag_name\", \"count\").orderBy(f.col(\"count\").desc()).show(20)",
"_____no_output_____"
]
],
[
[
"### Wordcloud",
"_____no_output_____"
],
[
"Needs the `wordcloud` (and `matplotlib` which comes as a dependency) python package\n\n```\npip install wordcloud\n```\n\nsee [documentation](https://github.com/amueller/word_cloud)",
"_____no_output_____"
]
],
[
[
"filtered_tags = tags.select(\"tag_name\", \"count\").orderBy(f.col(\"count\").desc()).filter(\"count > 100\")",
"_____no_output_____"
],
[
"filtered_tags.show(2)\nfiltered_tags.count()",
"_____no_output_____"
],
[
"frequencies = filtered_tags.toPandas().set_index('tag_name').T.to_dict('records')[0]",
"_____no_output_____"
],
[
"frequencies['linux']",
"_____no_output_____"
],
[
"from wordcloud import WordCloud\nimport matplotlib.pyplot as plt\n\n\nwordcloud = WordCloud(width=2000, height=1000)\nwordcloud.generate_from_frequencies(frequencies)\n\n\nplt.figure(figsize=(20,30))\nplt.imshow(wordcloud, interpolation='bilinear')\nplt.axis('off')\n\n\nplt.savefig(\"./wordcloud.png\")\n",
"_____no_output_____"
]
],
[
[
"## Users ",
"_____no_output_____"
]
],
[
[
"users.printSchema()",
"_____no_output_____"
],
[
"print(users.count())\nprint(users.filter(\"id is not null\").count())\nprint(users.filter(\"id is not null\").distinct().count())",
"_____no_output_____"
],
[
"users. \\\n select(\"account_id\", \"display_name\", \"views\", \"down_votes\", \"up_votes\", \"reputation\"). \\\n show(2)\n",
"_____no_output_____"
],
[
"# most reputation\n# https://stackexchange.com/users/{account_id}/\nusers. \\\n select(\"account_id\", \"display_name\", \"views\", \"down_votes\", \"up_votes\", \"reputation\"). \\\n orderBy(f.col(\"reputation\").desc()). \\\n show(10, False)\n",
"_____no_output_____"
],
[
"# most viewed\nusers. \\\n select(\"account_id\", \"display_name\", \"views\", \"down_votes\", \"up_votes\", \"reputation\"). \\\n orderBy(f.col(\"views\").desc()). \\\n show(10, False)",
"_____no_output_____"
],
[
"# downvoters\nusers. \\\n select(\"account_id\", \"display_name\", \"views\", \"down_votes\", \"up_votes\", \"reputation\"). \\\n orderBy(f.col(\"down_votes\").desc()). \\\n show(10, False)",
"_____no_output_____"
]
],
[
[
"## Analysing a Question\n\n- [83577](https://unix.stackexchange.com/questions/83577/how-to-invoke-vim-with-line-numbers-shown)",
"_____no_output_____"
]
],
[
[
"posts.filter(\"id = 83577\").toPandas().T",
"_____no_output_____"
],
[
"posts.filter(\"id = 648583\").toPandas().T",
"_____no_output_____"
],
[
"posts.filter(\"id = 648608\").toPandas().T",
"_____no_output_____"
],
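[
"# A minimal sketch of how the ids relate: in the Stack Exchange dump, answers point to their\n# question via parent_id (the column the next cell aggregates over), so the answers to\n# question 83577 can be listed directly.\nposts.filter('parent_id = 83577').select('id', 'score', 'creation_date').orderBy('score', ascending=False).show()",
"_____no_output_____"
],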
[
"posts.select(\"parent_id\").groupBy(\"parent_id\").count().sort(f.desc(\"count\")).show(20)",
"_____no_output_____"
]
],
[
[
"## Counts\n\n- inspired from [davidvrba](https://github.com/davidvrba/Stackoverflow-Data-Analysis)",
"_____no_output_____"
]
],
[
[
"posts.count()",
"_____no_output_____"
],
[
"# 1 = Question\n# 2 = Answer\n# 3 = Orphaned tag wiki\n# 4 = Tag wiki excerpt\n# 5 = Tag wiki\n# 6 = Moderator nomination\n# 7 = \"Wiki placeholder\" (seems to only be the election description)\n# 8 = Privilege wiki\n\nquestions = posts.filter(f.col('post_type_id') == 1)\nanswers = posts.filter(f.col('post_type_id') == 2)",
"_____no_output_____"
],
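[
"# A quick sanity check of the post_type_id mapping listed above: count how many posts of\n# each type exist in the dump (everything other than 1 and 2 should be comparatively rare).\nposts.groupBy('post_type_id').count().orderBy('post_type_id').show()",
"_____no_output_____"
],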
[
"print(questions.count())\nprint(answers.count())",
"_____no_output_____"
],
[
"# questions with accepted answer\n\nquestions.filter(f.col('accepted_answer_id').isNotNull()).count()",
"_____no_output_____"
],
[
"# count users\n\nprint(posts.filter(f.col('owner_user_id').isNotNull()).select('owner_user_id').distinct().count())\nprint(users.filter(\"id is not null\").select(\"id\").distinct().count())",
"_____no_output_____"
]
],
[
[
"## Response Time",
"_____no_output_____"
]
],
[
[
"response_time = (\n questions.alias('questions')\n .join(answers.alias('answers'), f.col('questions.accepted_answer_id') == f.col('answers.id'))\n .select(\n f.col('questions.id'),\n f.col('questions.creation_date').alias('question_time'),\n f.col('answers.creation_date').alias('answer_time')\n )\n .withColumn('response_time', f.unix_timestamp('answer_time') - f.unix_timestamp('question_time'))\n .filter('response_time > 0')\n .orderBy('response_time')\n)",
"_____no_output_____"
],
[
"response_time.show(2, False)",
"_____no_output_____"
],
[
"response_time = (\n questions.alias('questions')\n .join(answers.alias('answers'), f.col('questions.accepted_answer_id') == f.col('answers.id'))\n .filter(f.col(\"questions.owner_user_id\") != f.col(\"answers.owner_user_id\"))\n .select(\n f.col('questions.id'),\n f.col('questions.creation_date').alias('question_time'),\n f.col('answers.creation_date').alias('answer_time')\n )\n .withColumn('response_time', f.unix_timestamp('answer_time') - f.unix_timestamp('question_time'))\n .filter('response_time > 0')\n .orderBy('response_time')\n)\n \n",
"_____no_output_____"
],
[
"response_time.show(5, False)",
"_____no_output_____"
]
],
[
[
"## Hourly Data",
"_____no_output_____"
]
],
[
[
"hourly_data = (\n response_time\n .withColumn('hours', f.hour(\"answer_time\"))\n).show(2)",
"_____no_output_____"
],
[
"hourly_data = (\n response_time\n .withColumn('hours', f.hour(\"answer_time\"))\n .groupBy('hours')\n .count()\n .orderBy('hours')\n .limit(24)\n).toPandas()",
"_____no_output_____"
],
[
"hourly_data.plot(\n x='hours', y='count', figsize=(12, 6), \n title='Answer Hour',\n legend=False,\n kind='bar',\n xlabel='Hour',\n ylabel='Number of answered questions'\n)",
"_____no_output_____"
],
[
"year_data = (\n response_time\n .withColumn('years', f.year(\"answer_time\"))\n .groupBy('years')\n .count()\n .orderBy('years')\n).toPandas()",
"_____no_output_____"
],
[
"year_data.plot(\n x='years', y='count', figsize=(12, 6), \n title='Answer Year',\n legend=False,\n kind='bar',\n xlabel='Year',\n ylabel='Number of answered questions'\n)",
"_____no_output_____"
],
[
"response_hours = (\n response_time\n .withColumn('hours', f.ceil(f.col('response_time') / 3600))\n .groupBy('hours')\n .count()\n .orderBy('hours')\n .limit(48)\n).toPandas()",
"_____no_output_____"
],
[
"response_hours.plot(\n x='hours', y='count', figsize=(12, 6), \n title='Response time of questions',\n legend=False,\n kind='bar',\n xlabel='Hour',\n ylabel='Number of answered questions'\n)",
"_____no_output_____"
]
],
[
[
"## See the time evolution of the number of questions and answers",
"_____no_output_____"
]
],
[
[
"posts_grouped = (\n posts\n .filter('owner_user_id is not null')\n .groupBy(\n f.window('creation_date', '1 week')\n )\n .agg(\n f.sum(f.when(f.col('post_type_id') == 1, f.lit(1)).otherwise(f.lit(0))).alias('questions'),\n f.sum(f.when(f.col('post_type_id') == 2, f.lit(1)).otherwise(f.lit(0))).alias('answers')\n )\n .withColumn('date', f.col('window.start').cast('date'))\n .orderBy('date')\n).toPandas()",
"_____no_output_____"
],
[
"posts_grouped",
"_____no_output_____"
],
[
"posts_grouped.plot(\n x='date', \n figsize=(12, 6), \n title='Number of questions/answers per week',\n legend=True,\n xlabel='Date',\n ylabel='Number of answers',\n kind='line'\n)",
"_____no_output_____"
],
[
"posts_grouped_month = (\n posts\n .filter('owner_user_id is not null')\n .groupBy(\n f.window('creation_date', '4 weeks')\n )\n .agg(\n f.sum(f.when(f.col('post_type_id') == 1, f.lit(1)).otherwise(f.lit(0))).alias('questions'),\n f.sum(f.when(f.col('post_type_id') == 2, f.lit(1)).otherwise(f.lit(0))).alias('answers')\n )\n .withColumn('date', f.col('window.start').cast('date'))\n .orderBy('date')\n).toPandas()",
"_____no_output_____"
],
[
"posts_grouped_month.plot(\n x='date', \n figsize=(12, 6), \n title='Number of questions/answers per week',\n legend=True,\n xlabel='Date',\n ylabel='Number of answers',\n kind='line'\n)",
"_____no_output_____"
]
],
[
[
"# Tags",
"_____no_output_____"
]
],
[
[
"vi_sudo_tag = (\n questions\n .select('id', 'creation_date', 'tags')\n .groupBy(\n f.window('creation_date', \"4 weeks\")\n )\n .agg(\n f.sum(f.when(questions.tags.contains(\"nano\"), f.lit(1)).otherwise(f.lit(0))).alias('nano'),\n f.sum(f.when(questions.tags.contains(\"vim\"), f.lit(1)).otherwise(f.lit(0))).alias('vim')\n )\n .withColumn('date', f.col('window.start').cast('date'))\n .orderBy('date')\n).toPandas()",
"_____no_output_____"
],
[
"vi_sudo_tag",
"_____no_output_____"
],
[
"vi_sudo_tag.plot(\n x='date', \n figsize=(12, 6), \n legend=True,\n xlabel='Date',\n ylabel='Number of questions',\n kind='line'\n)",
"_____no_output_____"
]
],
[
[
"# Questions ",
"_____no_output_____"
],
[
"- Who asked the most questions?\n- How many people replied with an accepted answer by themselves?\n - what is the fraction of \"self-answerers\" against all users?\n- How many people never asked a question?\n- Which question took the longest to get an accepted answer?\n- Generate a plot where we can see at which month the most questions were raised.\n- Which post has the highest score? --> check it out https://unix.stackexchange.com/questions/{id}\n- Which question has the lowest score? --> https://unix.stackexchange.com/questions/{id}\n- Which post had the most comments?\n- Can you find an answer why \"When should I not kill -9 a process?\"",
"_____no_output_____"
]
],
[
[
"posts.select(\"id\", \"title\").filter(f.col(\"title\").contains(\"kill -9\")).show(20, False)",
"+------+---------------------------------------------------------------------------------------------------------+\n|id |title |\n+------+---------------------------------------------------------------------------------------------------------+\n|62258 |What is the difference between exiting a process via Ctrl+C vs issuing a kill -9 command? |\n|674073|process not killed even we used kill -9 and process actually was belong to container that already removed|\n|612990|kill -9 $PPID kills the process? |\n|585409|Processes with states R and Rs not killable with kill -9 |\n|67166 |Why does Firefox refuse to die despite killing it with pkill -9? |\n|220175|Why kill -9 -1 doesn't work? |\n|212918|Still alive, still alive after kill -9 / SIGKILL |\n|456588|MacOS: su pkill -9 “process_name” = sorry |\n|558403|What if 'kill -9' still does not work? |\n|394298|Hanging process cannot be killed even via kill -9 -1 |\n|157133|How to get the pid of a process and invoke kill -9 on it in the shell script? |\n|8916 |When should I not kill -9 a process? |\n|38972 |Debian: Must pkill -9 twm and then login with twm |\n|281439|Why should I not use 'kill -9' / SIGKILL |\n|590984|How to alias `kj` to `kill -9 %?`(kill job) |\n|144918|Idiomatic way to kill -9 only if \"graceful\" way doesn't work? |\n|386380|wget can't be killed with 'kill -9' |\n|559910|How to terminate a process when kill -9 doesn't work? |\n|399438|Difference between `kill -9 <pid>` and `kill -INT <pid>`? |\n|660303|All processes using a device got hung and even `kill -9` does nothing |\n+------+---------------------------------------------------------------------------------------------------------+\nonly showing top 20 rows\n\n"
],
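[
"# A minimal sketch for the first exercise question above ('Who asked the most questions?').\n# One possible approach, not an official solution; it reuses the questions, users and f\n# names defined earlier in this notebook.\ntop_askers = (\n    questions\n    .filter(f.col('owner_user_id').isNotNull())\n    .groupBy('owner_user_id')\n    .count()\n    .join(users, f.col('owner_user_id') == users.id)\n    .orderBy(f.col('count').desc())\n    .select('display_name', 'owner_user_id', 'count')\n)\ntop_askers.show(10, False)",
"_____no_output_____"
],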
[
"comments.filter(f.col(\"text\").contains(\"LOL\")).show(200, False)",
"+-------+-------+-------+-----+---------------+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+\n|id |post_id|user_id|score|content_license|user_display_name|text |creation_date |\n+-------+-------+-------+-----+---------------+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+\n|357062 |210899 |47121 |0 |CC BY-SA 3.0 |null |LOL, true a stupid mistake:) |2015-06-19 20:03:08.107|\n|377732 |122731 |79008 |0 |CC BY-SA 3.0 |null |In my case I have the network manager along with lightdm. I wouldn't call those packages easily disposable expecially since I use them all the time. LOL |2015-08-10 18:46:40.583|\n|1302938|65564 |8965 |0 |CC BY-SA 4.0 |null |@Iguananaut LOL yeah about every two to three months I get a notification that somebody else stumbled upon this giant \"WTF?\" 9 years and counting... |2022-02-02 23:46:11.363|\n|133704 |89189 |7453 |0 |CC BY-SA 3.0 |null |@DravSloan - LOL, thanks for the laugh. It's something 8-) |2013-09-02 22:33:24.323|\n|141011 |93254 |39486 |0 |CC BY-SA 3.0 |null |try with `scp /Volumes/FX4\\ HDD/Users/matthewdavies/Downloads/NCIS.S11E01.HDTV.x264-LOL.mp4 [email protected]:\"/media/3TB/TV Shows/NCIS\"` |2013-10-02 05:59:04.82 |\n|270447 |164729 |38906 |0 |CC BY-SA 3.0 |null |@LOLKAT: Maybe `grep` saw your input as one line instead of many lines. Remember `-m` stop after the first line match, not the first match. |2014-10-29 02:49:02.747|\n|1184726|632729 |414777 |0 |CC BY-SA 4.0 |null |And no, not even the `kill $(jobs -p)` that you were probably meaning will not kill the background *jobs*, but only the *job leaders*. `sleep 1000 | sleep 1000 &`, then `kill -KILL $(jobs -p)`, `jobs -p` => LOL, still running. And you won't even be able to kill it again. |2021-02-05 19:18:24.403|\n|1206065|644007 |465598 |0 |CC BY-SA 4.0 |null |@KamilMaciorowski omg it was just that I don't get it LOL. I did command: ['sh', '-c', 'cp -rL /CustomTheme/custom_theme.tar /shared && cd /shared/ && tar -xvf custom_theme.tar &> file.txt']\\n and it works lol |2021-04-07 20:21:35.677|\n|1219537|499485 |411256 |0 |CC BY-SA 4.0 |null |OMG, people that have no direct experience with the problem described in the question should not be even commenting, WTF... The \"are you sure\" guys, or \"have you tried\" Andy LOL |2021-05-17 19:19:22.217|\n|1204814|643369 |200436 |0 |CC BY-SA 4.0 |null |LOL :-), thanks for the laugh! 
|2021-04-04 12:38:42.497|\n|984535 |505981 |39030 |0 |CC BY-SA 4.0 |null |@MyrddinEmrys: Ha! Yeah. FWIW, the package maintainer mentioned on one of the forums that they added a link to my answer on their wiki. I can't be the only one running Gentoo wanting more than one version of PHP, can I? Well, at least future me will benefit from this when I need to do it again in a few years! LOL |2019-07-24 18:47:35.947|\n|533158 |303595 |48650 |0 |CC BY-SA 3.0 |null |LOL. So, math is overrated, but first class functions are necessary for those complex object models frequently represented in scripts? :D I've written perl for around 21 years, and I led the conversion from ksh to perl at the company referenced. I still stand by David Korn when he says they have similar capabilities. https://www.usenix.org/legacy/publications/library/proceedings/vhll/full_papers/korn.ksh.a It really is worth learning more about; most people have no idea the power hiding right there under the covers in ksh93 (or even bash). KSH regex support is worth learning if nothing else.|2016-08-16 01:01:15.157|\n|309484 |185653 |103712 |0 |CC BY-SA 3.0 |null |Yep, it works. but looks like I reboot myself . LOL. how to do it in KRDC? |2015-02-19 09:18:20.323|\n|270448 |164729 |80886 |0 |CC BY-SA 3.0 |null |LOLKAT do you want the output to be exactly `ID: 4234235325` or just to print the line with the value of this id? My `grep (GNU grep) 2.16` prints only matched part of the first line with your expression, so believe it works as expected. |2014-10-29 02:50:13.4 |\n|35808 |26516 |977 |1 |CC BY-SA 3.0 |null |\"hopefully the SD cards aren't so stupid…\" <--- ROFLOL. Not likely! |2011-12-10 04:06:40.497|\n|25900 |18929 |9610 |1 |CC BY-SA 3.0 |null |When you were referring to /dev/disk/by-id I thought \"by-id\" was to be replaced by something and should be issued as a command. It didn't strike my mind that it could actually be a directory. Thanks for being patient with me. LOL, I was ridiculous :) |2011-08-17 22:22:32.187|\n|843169 |463276 |306119 |0 |CC BY-SA 4.0 |null |There is no option to register. It has been disabled and replaced with a message to send an email enclosing your required username & password. LOL security is obviously not high on agenda, but I have done so anyway and will see if I get any response. |2018-08-17 22:58:54.063|\n|576388 |327548 |138261 |3 |CC BY-SA 3.0 |null |OMG!!! Ponies!!! LOL!!! |2016-12-02 13:35:33.767|\n|1016322|547178 |8337 |0 |CC BY-SA 4.0 |null |Is it wifi? Is there a switch to turn wifi \"on/off\" on the keyboard (if it's a laptop LOL) |2019-10-19 01:32:41.77 |\n|1010783|544784 |29678 |0 |CC BY-SA 4.0 |null |En réalité: LOL. |2019-10-03 16:13:52.91 |\n|1061955|570795 |null |1 |CC BY-SA 4.0 |user314968 |Group share is a recurring problem with variations, as you point out. In this instance, software developers have access to the test release directory. It is theirs to muck with. They can work in that directory, push, pull, install in the virtual environment etc. The problem comes when they copy things in from another area they have been working in, and their umasks are not set with group permissions. It is confusing to them because file creation and copy by tar works fine. It is also funny that I have to give them a copy script that just does what cp does. 
'acls' they are magic I say LOL |2020-03-03 10:28:13.783|\n|1063440|571293 |308316 |1 |CC BY-SA 4.0 |null |@bobsburner That's why the local client now checks the files sent by the remote against the glob (with an adhoc pattern matching implementation LOL) -- but that is very kludgy, because the shell on the remote machine may use a different glob syntax, you may want to quote or escape parts of the file name, etc. |2020-03-06 16:29:52.447|\n|293947 |177598 |37688 |16 |CC BY-SA 3.0 |null |Thanks for a link to the discussions. I guess the answer to this was NO. A funny quote from one of the links: \"modern users do not use terminals\". LOL. |2015-01-05 19:12:51.187|\n|799610 |439741 |null |0 |CC BY-SA 4.0 |user34720 |Yes, it is in Brazil. I can't convince my boss to spend 199R$(our currency) x 15 remote offices. LOL. |2018-05-04 18:44:55.117|\n|434347 |251322 |null |0 |CC BY-SA 3.0 |user86969 |LOL that is indeed the first time you mention it is a laptop :-D . |2015-12-28 16:55:14.203|\n|939420 |508458 |308316 |0 |CC BY-SA 4.0 |null |LOL what else is forbidden by the IT department's policy? looking askew at the monitor or typing with your thumbs? In `csh` the prompt is stored in the variable named ... `prompt` (`set prompt = 'foo% '`) not `PS1`. Your answer is completely bogus. |2019-03-25 12:20:35.26 |\n|838429 |460917 |209133 |0 |CC BY-SA 4.0 |null |LOL. Never thought of this. Nice. I’ll try it when I get home. |2018-08-06 23:09:14.807|\n|1295588|48555 |6622 |0 |CC BY-SA 4.0 |null |\"Finally, using ssh for a transport mechanism is both secure and carries very little overhead. \" LOL |2022-01-08 18:05:45.133|\n|248447 |152316 |7453 |0 |CC BY-SA 3.0 |null |@mikeserv- LOL, that's why I started testing all these A'ers, since they all appeared to be returning different results when I was reviewing them for you guys. We do have some Q's with like 10 A's and it's usually Q's like these, but they generally all return the same results 8-). |2014-08-27 01:57:26.147|\n|628396 |354645 |223413 |0 |CC BY-SA 3.0 |null |LOL. Noob indeed. I guess collation matters :) Edited. |2017-03-29 20:56:07.427|\n|1141065|611073 |4667 |0 |CC BY-SA 4.0 |null |\"John Jimmy Falloncamp\" LOL |2020-09-24 16:39:31.783|\n|131200 |87862 |7453 |0 |CC BY-SA 3.0 |null |@mattdm - LOL, thanks most certainly did not learn anything useful wrt FAA 8-). |2013-08-23 14:51:04.563|\n|532568 |303322 |181206 |0 |CC BY-SA 3.0 |null |Have you run LOL on a virtual machine? I was going to try that but someone said to try Play on Linux. |2016-08-14 12:49:51.647|\n|1029829|554422 |87673 |0 |CC BY-SA 4.0 |null |DOH! Thanks for pointing out the obvious that I missed LOL. Thanks again, was clear, concise and worked first time. |2019-11-27 20:13:55.87 |\n|1057818|568524 |null |0 |CC BY-SA 4.0 |user373503 |@UncleBilly 's link has this: \"This scheme rendered subdirectories sufficiently hard to use as to make them unused in practice.\" I think that explains Pike's story well (he is just not very precise). **sufficiently hard to use as to...** LOL, so there _is_ a limit to user-unfriendliness? |2020-02-21 07:23:38.697|\n|761189 |422759 |17208 |0 |CC BY-SA 3.0 |null |LOL, I mean Vim \"windows\", not MS Windows. Each window (file view) within a Vim session can have it's own value for each setting. |2018-02-08 16:15:47.543|\n|11555 |9120 |2343 |0 |CC BY-SA 2.5 |null |LOL :) .... My current status (still working on it): It seems that there is a 6 levels hierarchy of \"shell expansion\" (from `brace expansion` thru to `pathname expansion`). 
I can therefore understand that the example of `files=(*)` does not go through `word splitting`. However, it seems to me that the `pathname expansion` somehow internally(?) delimits the filenames by some other means than a space, because to my understanding the array asignment () would produce 4 items.. so is there some *special* interaction happening with these file-expansion words? This may be at the core of my puzzle. |2011-03-12 04:23:18.567|\n|47101 |34725 |16897 |0 |CC BY-SA 3.0 |null |LOL Unfortunately this is my current workplace where I have this problem. My ISP works fine! |2012-03-22 09:49:13.903|\n|363094 |212891 |116972 |0 |CC BY-SA 3.0 |null |@hjkefl LOL your input is outsized! Ok we need yet another approach for that large input, please be patient until next update. |2015-07-05 18:44:15.603|\n|853442 |468224 |308316 |0 |CC BY-SA 4.0 |null |LOL. I'm using gnu screen, which does NOT start login shells by default. Neither does the terminal emulator (mlterm). Maybe I should check my head ;-) |2018-09-11 14:01:15.323|\n|165831 |101824 |26026 |0 |CC BY-SA 3.0 |null |In fact I am just about to test it, belatedly. I plain forgot about it and just asked a similar question which caused this one to show as the top related question, LOL |2014-01-04 13:11:28.177|\n|298742 |179573 |99625 |0 |CC BY-SA 3.0 |null |LOL - Ubuntu 14.04 ***, fix it. |2015-01-17 18:33:09.067|\n|100361 |69226 |34963 |0 |CC BY-SA 3.0 |null |Mmm ok, that's worse than I imagined it. I hoped I could use some syntax like `\\d{1,2}` which I really find useful. Anyway, on OS X the command you wrote returns the whole output of `uptime`... LOL. Anyway, thank you for the answer, I'm gonna study `sed` in a deeper way. |2013-03-26 17:43:03.923|\n|579150 |329096 |113238 |0 |CC BY-SA 3.0 |null |LOL, my bad, I thought that /usr/bin/env was a directory, but of course env is program/executable, now I really don't why it doesn't work, but some progress is being made |2016-12-09 03:38:00.567|\n|11651 |9183 |2671 |0 |CC BY-SA 2.5 |null |LOL, I have no idea how that got there. |2011-03-13 12:04:04.74 |\n|42102 |31055 |15047 |1 |CC BY-SA 3.0 |null |@StéphaneGimenez done, bash is not the only thing to use. LOL |2012-02-07 13:30:23.157|\n|435241 |252380 |null |3 |CC BY-SA 3.0 |user34720 |Then, this should make you access the first and last array items at the same time, breaking all the logic of scanning an array ;) Could even create a blackhole. LOL |2015-12-30 18:13:19.72 |\n|881338 |298747 |257718 |1 |CC BY-SA 4.0 |null |LOL! This screenshot made me pretty happy. Thanks. |2018-11-13 19:17:29.78 |\n|209689 |130973 |4252 |4 |CC BY-SA 3.0 |null |LOL, \"cat abuse\" |2014-05-19 16:04:10.637|\n|639365 |360867 |204024 |0 |CC BY-SA 3.0 |null |Thanks again! LOL, the whole point of switching to a MAC was to not require a virtual machine anymore. |2017-04-24 03:49:33.507|\n|1143840|500438 |278933 |0 |CC BY-SA 4.0 |null |What happened here? I have a very similar situation where my specified DNS seems to be out of line with what's reported as current. I've also been experiencing random drops of my SSH on just the system I'm working on, which sounds a lot like an interface reset, where I am using a standard ens3. Thanks for the well-crafted question. An authoritative answer would be nice. (See what I did there? LOL) |2020-10-02 16:12:16.41 |\n|1127564|603624 |423323 |1 |CC BY-SA 4.0 |null |Hey i fix the problem, i run the recovery mode and after 40 min it got restored to default chrome. But it was the black screnn all the time... LOL ! 
|2020-08-13 09:02:03.5 |\n|404786 |236049 |11639 |0 |CC BY-SA 3.0 |null |There is a difference between \"nothing wrong\"and \"optimal setup.\" Personally the pool servers I admin all have local refclocks. I doubt you do because you assume pool clients must be stratum=3. Massively varying error? Thanks for the LOLz. I did'nt realize we were using a version of ntpd that does'nt groom the time sources. ntpd will smooth out the variance. Furthermore if you do add it to the pool list the pool daemon removes clocks with too much error. Adding capacity to the pool infrastructure is not negative. Run a server with `tos minsane 7 minclock 5` and tell me about your error |2015-10-15 23:43:05.21 |\n|1263871|670252 |373946 |0 |CC BY-SA 4.0 |null |LOL I am in full derp mode, it was not. is now but I had already started it just pointing it to the nfs mount. I think since I used the -Pv I am safe to ctrl-break it when I need to at any time? It seems to be working well a few files (that are on the remote not the host) give a strange error when the host gets back to them again: rsync: chown \"/mnt/archive_copy/archive/images/dcam/UT201006/.proc-d0355.fits.IGw8LT\" failed: Operation not permitted (1)\\n but other than that its all good. |2021-09-23 18:14:46.04 |\n|1308789|692058 |492642 |0 |CC BY-SA 4.0 |null |LOL! So UNIX has standards, but there's no enforcement. I just built svd2ada on MacOS and neither the build script nor the instructions advise on where and what files should be deployed to become part of the Ada toolchain. It's incredibly frustrating. MacOS outside UNIX has conventions that you dare not disobey, or you'll have users standing outside with pitchforks. |2022-02-25 00:45:37.72 |\n|120999 |81131 |19370 |0 |CC BY-SA 3.0 |null |@AlberteRomero LOL |2013-07-05 17:39:16.687|\n|142155 |10226 |19869 |1 |CC BY-SA 3.0 |null |don't ask like \"is it possible to use **sed** to do ....\" you can use sed to do anything within the area of text processing. LOL |2013-10-06 03:13:09.647|\n|609391 |344367 |113238 |0 |CC BY-SA 3.0 |null |LOL, ok I see, but it's not super clear, let me update your answer, thanks :) |2017-02-12 09:43:42.85 |\n|541013 |307757 |181999 |0 |CC BY-SA 3.0 |null |Dude you Data wont be on the disk if you boot linux. Your data is on a filesystem and linux got installed on a different filesystem that overwrote the current one. a filesystem is a type of agreement how a file should be stored what combination of ones and zeros indicate the beginning and the end of the file. bevor installing a so looking like \"hacker distro\" that gets previewed with he gnome desktop and 90% of the preview is the gnome desktop LOL. read something for yourself about how things work. yes I know reading sucks |2016-09-04 05:57:28.11 |\n|562011 |319749 |197700 |0 |CC BY-SA 3.0 |null |@don_crissti LOL. I like your comments buddy. ;) |2016-10-29 16:53:02.55 |\n|554456 |284033 |57439 |0 |CC BY-SA 3.0 |null |LOL. You were right: the `.sh` part disables the script from running. I don't know why this happens, but thanks you. |2016-10-10 14:13:25.58 |\n|255553 |155994 |84441 |4 |CC BY-SA 3.0 |null |well, while trying change from Frodo1 to Bilbo2 I got message that: \"You must wait longer to change your password\" LOL |2014-09-18 08:29:56.167|\n|1182523|631839 |325065 |0 |CC BY-SA 4.0 |null |Well, maybe not on Hurd. Thence \"unspecified\", LOL. |2021-01-31 00:41:44.283|\n|441329 |255483 |141443 |0 |CC BY-SA 3.0 |null |What's the exact difference between `[` and `[[`? 
(Searching on Google for `[ vs [[` gives me results for `vs` only, LOL) |2016-01-15 09:04:26.267|\n|496720 |284968 |83246 |0 |CC BY-SA 3.0 |null |Such evilness as files with spaces in their names!! How dare you bring that up LOL :) |2016-05-24 13:22:29.593|\n|665817 |373806 |113238 |0 |CC BY-SA 3.0 |null |The man pages for bash do not sound short LOL, combing for this information is not ideal. Now the next person can do a google search and hopefully find the information faster than reading the bash man pages, right? E.g.: https://tiswww.case.edu/php/chet/bash/bashref.html LOL |2017-06-29 23:17:30.24 |\n|1095911|588233 |413896 |0 |CC BY-SA 4.0 |null |LOL Also did you read the file in the git describing the assignment? One of the requirements is controlling ICMP traffic. Ping is an ICMP function. |2020-05-22 01:11:36.093|\n|1159964|619948 |435718 |0 |CC BY-SA 4.0 |null |@cutrightjm LOL |2020-11-18 07:33:54.437|\n|943226 |510315 |null |0 |CC BY-SA 4.0 |user34720 |@0xSheepdog - \"Question was moved to homework.stackexchange.com\"...LOL |2019-04-03 16:59:18.647|\n|1066433|573030 |308316 |0 |CC BY-SA 4.0 |null |@Slavistan: I think I already know your next __Q:__ Why does `export x=1; echo \"$0\" | entr -s 'x=2; echo set x to 2'; echo \"$x\"` NOT set `x` to `2`? __A:__ because Unix is not plan9, LOL ;-) |2020-03-15 17:06:35.173|\n|1000366|539210 |308316 |2 |CC BY-SA 4.0 |null |@markgraf LOL `chsh -s /bin/bash 7</dev/random` and Bob's your uncle (but only if the OP fixes the quoting in their awk command ;-)) |2019-09-05 15:20:17.907|\n|780172 |432100 |272806 |0 |CC BY-SA 3.0 |null |That variable index increment for a statically declared array, LOL. |2018-03-20 02:58:07.217|\n|841258 |462337 |305251 |0 |CC BY-SA 4.0 |null |Two downgrades! LOL |2018-08-13 17:15:16.68 |\n|1289303|681425 |491750 |0 |CC BY-SA 4.0 |null |hahaha LOL ... you just read my mind and the hidden agony, as that's exactly how i feel when i try to disconnect my laptop with my laptop behaving all erratic.\\n\\nI am hoping a powerful 'Linux Oracle' can intercede between my laptop and docking station and if things don't work out then 'exorcism' performed by a Linux priest of High Order may the last option. |2021-12-16 13:06:36.343|\n|1262110|669559 |90878 |0 |CC BY-SA 4.0 |null |LOL I actually installed FreeBSD a few days ago, but haven't even gotten a window manager running yet... :-D So can't help you there, unfortunately. I just did a quick test with a Win VM tho. I actually can kill the remote server, on the server itself; not by shutting down my client locally. And the local client still shows the server online. I can also close my local client's connection on the remote end, via my client. So... are you sure the users are actually disconnecting by closing their local client, not on the remote end? |2021-09-18 15:19:11.09 |\n|619621 |350046 |219835 |0 |CC BY-SA 3.0 |null |LOL, ok, I'm not a system guy, but I don't believe the whole server has only 200GB free space. I think that's free space of my afs, yet I can only use 50% of my allocated space? That's strange. |2017-03-08 18:54:04.467|\n|1133762|218839 |43781 |0 |CC BY-SA 4.0 |null |@MadsSkjern LOL.. I really did not expect that you will answer that :D but it's good, i think they all deserve the acceptance of their answers :)\\nBut better choose any answer instead of forgetting it 5 years - at least if it's correct and more than one sentence... 
Also your question was \"does dd copy everything\" -> this was already answered completely and sufficiently with the first post by Fiximan... Better answers will get their points anyway, while getting more upvotes than the accepted :D |2020-09-02 08:45:27.21 |\n|375255 |220633 |64157 |0 |CC BY-SA 3.0 |null |@Celada HA/LOL, but that isn't exactly the problem... |2015-08-06 20:53:09.58 |\n|371981 |15881 |32558 |0 |CC BY-SA 3.0 |null |LOL, this was migrated and the SO dupe that was not got more upvotes: http://stackoverflow.com/questions/11238457/disable-and-re-enable-address-space-layout-randomization-only-for-mysef |2015-07-28 13:27:57.107|\n|1261096|668138 |433561 |0 |CC BY-SA 4.0 |null |Yes, sorry, I was not clear at all it turns out! My system FS is EXT4. And I think that is where the problem lies. I don't really want to run my system on BTRFS, as I've read about instability issues compared to EXT4. I was thinking it could only do snaps if the STORAGE was BTRFS, but that the system didn't matter. I really want those snaps, so maybe I have to just bite the bullet. LOL\\nThank you for your response! I tried to mark your answer as the solution but even though it's my question, I don't have enough rep to vote. LOL So annoying. |2021-09-15 23:40:12.207|\n|115421 |78569 |40680 |0 |CC BY-SA 3.0 |null |LOL yeah I guess that's not my responsibilty, I just have to make the server secure. I think I'll hash the email address as well. |2013-06-07 11:30:08.517|\n|122730 |82831 |30048 |0 |CC BY-SA 3.0 |null |Ahhhh, it looks for a number! LOL. I tried words like \"high\". I'll give that a shot! |2013-07-12 21:24:19.37 |\n|532572 |303322 |184611 |0 |CC BY-SA 3.0 |null |LOL doesn't work on Wine, Probably It will doesn't on Play On Linux. You can try on Play On Linux, but I suggest you to use a virtual box with 2/4GB ram, It works good. |2016-08-14 12:56:16.397|\n|520746 |296553 |47085 |0 |CC BY-SA 3.0 |null |I must use them. I use my VM for gaming and LOL's launcher/client runs notoriously slow without hugepages. I enabled hugepages after I once almost accidentally dodged a ranked on promotion series because the launcher lagged during champion selection. |2016-07-18 08:50:10.223|\n|1050091|564029 |325065 |0 |CC BY-SA 4.0 |null |The entire premise of this question is wrong: `/dev/pts/10` is not \"connected to\" stdin, `/dev/pts/10` IS stdin. This question looks _purposely_ obtuse and a bit of text analysis \"connects it\" (LOL) to other similar questions on different matters. |2020-01-29 13:20:06.69 |\n|1016537|547651 |null |0 |CC BY-SA 4.0 |user373503 |@IliaGilmijarow LOL! btw I just answered, addressing that browser tab thing. Estimated time has three dots and is I think not so clear for all of us. Estimated time of arrival i only know. |2019-10-19 21:04:29.43 |\n|252877 |154486 |83488 |0 |CC BY-SA 3.0 |null |I realize I should've just used CLI PHP or Ruby, LOL, but I was too far into the project to turn back from shell scripting. Lesson learned. |2014-09-10 05:01:36.03 |\n|283575 |148551 |22858 |0 |CC BY-SA 3.0 |null |TY, cool stuff! Got them bookmarked. Gonna delve myself into those the next days for sure. BTW, please do no longer feel addressed when I say \"`sed` learners\". You definitely are no learner (LOL), more like a pro asking other uber-pros like Stéphane to squeeze out the very last quirks that remain. 
;) *grin* |2014-12-05 17:15:22.163|\n|731255 |408364 |227199 |1 |CC BY-SA 3.0 |null |@RomanPerekhrest Greetings Ukraine PYTHON GENIUS LOL |2017-12-02 13:22:16.867|\n|444594 |257166 |79008 |0 |CC BY-SA 3.0 |null |Thanks for the tip @meuh however it seem that neither of the three options for freeing memory (global garbage collection, cyclic and sending \"heap-minimize\" notifications) results in something. Maybe these are just dummies? LOL Will have to investigate further on that. |2016-01-23 11:06:45.077|\n|504507 |288533 |121401 |1 |CC BY-SA 3.0 |null |Normally, I'd slap an IT person for doing something like this, but in this case, it's pretty humourous so I'd probably let it slide LOL |2016-06-08 20:51:23.93 |\n|954132 |492495 |202420 |0 |CC BY-SA 4.0 |null |@MartinOtto Indeed. I think unlocking the TDP would help a ton lot. We can cool it since the crazy fans. Also you can run \"Nitro Boost\" even on Linux with nbfc: https://forum.manjaro.org/t/control-fans-on-acer-nitro-5-an515-42-and-possibly-other-laptops-with-nbfc/80480 and that helps a lot in games.\\nAcer you should really unlock that TDP on Nitro 5. Like what ? Other OEMs limit it to 25W and they dont have as good cooling as we do and we are running 15W LOL.\\nI may post a bug on bugzilla. But do you have tearing on laptop display when playing some games on top of the screen ? |2019-04-30 19:20:31.92 |\n|1157219|618419 |243949 |0 |CC BY-SA 4.0 |null |512GB RAM. Have you tried adding more? LOL. Two more things to check: disk space and the logs. Has CentOS switched to systemd? |2020-11-08 19:32:59.007|\n|1130424|605378 |335415 |0 |CC BY-SA 4.0 |null |@AdminBee I thought all distribution is the same. LOL |2020-08-21 13:39:36.433|\n|121047 |81131 |10292 |10 |CC BY-SA 3.0 |null |@AlberteRomero That's it, more or less, at least most-ly,. LOL I really like the horizonital scroll in most. |2013-07-05 20:39:54.467|\n|82824 |59909 |27121 |0 |CC BY-SA 3.0 |null |It is new year, why you spending so much time on the server? LOL. HAPPY NEW YEAR!! |2012-12-31 04:02:31.72 |\n|598658 |338860 |null |1 |CC BY-SA 3.0 |user34720 |Mentally health people avoid newline character on filenames. LOL. |2017-01-20 12:35:01.67 |\n|993989 |536220 |252501 |0 |CC BY-SA 4.0 |null |@cas Oh man you saved my life LOL, you should put is as Answer instead of comment. Oh ya, another thing is use single `[` instead of double `[[` |2019-08-19 08:20:47.353|\n|48995 |36138 |null |0 |CC BY-SA 3.0 |user14517 |LOL. I re-read my question and I think I misled the audience, I am actually designing this as a productivity suite. |2012-04-10 13:28:14.18 |\n|906495 |493475 |117549 |0 |CC BY-SA 4.0 |null |LOL @StephenKitt; now the gears are turning |2019-01-09 14:47:18.36 |\n|822147 |451976 |null |0 |CC BY-SA 4.0 |user34720 |No problem at all. Do you have any hardware attached to PCI1 ? I'm searching for more ACPI+BIOS+Fedora+Your_hardware info on different sites and the best answers i could find is \"use Ubuntu 18.04 that is better supported by Dell\"... LOL |2018-06-29 14:14:57.853|\n|1265818|671167 |492628 |0 |CC BY-SA 4.0 |null |LOL, the contents of my ~/.profile said \"echo foo\". Nevertheless, the question is still a valid one I think. |2021-09-29 20:25:38.243|\n|1108544|594014 |11417 |0 |CC BY-SA 4.0 |null |@G-ManSays'ReinstateMonica' LOL. The original version just `rm -f`'ed the file which lead to a comment by another user about using the nuke option unneccessarily. But I agree that creating it before the loop is a bad idea. 
OTOH so is using `done > list-of-headers.csv` if you run the script in the same directory several times. |2020-06-21 07:03:31.22 |\n|1152828|226980 |39593 |1 |CC BY-SA 4.0 |null |Hey I've come here a SECOND time to relearn this LOL. A French accent (diacritic) in the text was causing grep to barf |2020-10-26 09:43:09.343|\n|238053 |146920 |78842 |0 |CC BY-SA 3.0 |null |That is a big 'LOL' :-) |2014-07-28 15:45:11.513|\n|1079886|579949 |286615 |0 |CC BY-SA 4.0 |null |ANSWER: in `bash` use `$ echo \"$RESPONSE\" | grep \"% packet loss\"`. Interesting logic... spend a quarter to save a dime LOL |2020-04-14 20:09:05.347|\n|1094481|4799 |333273 |2 |CC BY-SA 4.0 |null |Hey @frederix who let you in here? I see they will let just anyone in here these days! LOLOLOLOL |2020-05-18 20:53:03.27 |\n|1149630|614747 |431002 |1 |CC BY-SA 4.0 |null |I find it highly doubtful that that ever were the case. Maybe some \"Unix-like\" environment running on MSDOS (after all, `command.com` did support \"pipes\" using temporary files, LOL). Until you remember it, I'll consider that claim baseless. |2020-10-16 13:02:45.24 |\n|141055 |93254 |47009 |0 |CC BY-SA 3.0 |null |try narrowing the problem down; e.g. can you do `cd /Volumes/FX4\\ HDD/Users/matthewdavies/Downloads/; scp NCIS.S11E01.HDTV.x264-LOL.mp4 [email protected]:/tmp`? can you do `scp /Volumes/FX4\\ HDD/Users/matthewdavies/Downloads/NCIS.S11E01.HDTV.x264-LOL.mp4 [email protected]:/tmp`? |2013-10-02 10:40:06.21 |\n|122740 |82831 |30048 |0 |CC BY-SA 3.0 |null |Hah...you're right! LOL. That's funny. If it comes down to that being my one and only clue, I might just let this slide and abandon this part of the project as it's not vital for what I'm doing, just a \"would be nice\" kinda thing. |2013-07-12 21:37:18.3 |\n|979489 |529066 |124211 |0 |CC BY-SA 4.0 |null |LOL good point, I will edit with something more useful |2019-07-10 08:53:17.743|\n|1259780|668528 |90878 |0 |CC BY-SA 4.0 |null |LOL - @Lizardx ain't one to mince his words :-D Have to say I wholeheartedly agree. |2021-09-11 10:29:31.85 |\n|312263 |187142 |104700 |0 |CC BY-SA 3.0 |null |LOL - no, not dedication... it's a custom kernel/init where I simply haven't updated the repository in quite some time! :) HOWEVER, I just tinkered with block quotes and the likes and believe I have things down-pat manually (ie: 4 spaces after an empty line starts a code block, etc.) |2015-02-27 08:06:30.07 |\n|1193068|636791 |286615 |0 |CC BY-SA 4.0 |null |@FET: Good for you :) All rationality ever got anybody was boredom LOL. You don't want GRUB - there should be some instructions to that effect in [this procedure](https://askubuntu.com/a/1245535/831935). **Pay attention to the `ubiquity -b` step!!!** You can install `rEFInd` without installing Ubuntu. And note in my rev. answer you will need an external drive to install Linux. |2021-02-28 09:27:43.8 |\n|141013 |93254 |39486 |0 |CC BY-SA 3.0 |null |you don't have to escape the whitespace ` ` in double-quotes. actually, `I think it had to be scp /Volumes/FX4\\ HDD/Users/matthewdavies/Downloads/NCIS.S11E01.HDTV.x264-LOL.mp4 \"[email protected]:/media/3TB/TV Shows/NCIS\"` because the last part(destination) is counted as a whole argument. |2013-10-02 06:12:59.753|\n|80396 |57593 |19178 |2 |CC BY-SA 3.0 |null |That is a much simpler solution than the one I came up with, LOL! |2012-12-12 18:57:29.76 |\n|552243 |314191 |193279 |0 |CC BY-SA 3.0 |null |@maulinglawns - I think you've mostly covered the bases, here. 
But, I'm just saying that \"not all greps are built alike\" and \"OP hasn't given clear enough information\" for a definitive answer... that is all. There are some systems that are \"strange\" out there (eg. Solaris). (And honestly, I'm still not convinced this isn't a homework assignment... LOL) |2016-10-04 10:21:33.847|\n|10905 |8678 |3860 |7 |CC BY-SA 2.5 |null |LOL \"Experty friendly, user antagonistic\". It made my day. +1! |2011-03-06 09:21:17.33 |\n|500351 |287029 |172514 |0 |CC BY-SA 3.0 |null |well I'm glad you solved. I kinda figure that out when I finally saw that beautiful string that states that this question were asked **4 months ago**, I _really_ should pay more attention to the site UI, LOL!. and yup, let's leave this Q/A here in case you or anyone else trample into the same problem. |2016-06-01 23:14:10.953|\n|451434 |260694 |150530 |5 |CC BY-SA 3.0 |null |That sounds like that rick roll kernel module LOL |2016-02-08 05:11:54.437|\n|932117 |504937 |278319 |0 |CC BY-SA 4.0 |null |Thank you very much @JdeBP for useful links, I found the answer from your given links. LOL where were they when I was searching, I was not using proper keywords perhaps. |2019-03-07 15:33:31.75 |\n|419424 |244229 |11639 |0 |CC BY-SA 3.0 |null |Epic? LOL. I have no idea what the purpose of the bell is but it took me longer to read your perl script than it did to double check two man pages and type `$ nice -n 1 watch -b -p -n1 \"date +%s.%N;false\"` |2015-11-20 03:19:25.133|\n|358571 |211679 |22858 |0 |CC BY-SA 3.0 |null |@ikrabbe It definitely doesn't. :) So I now get it, you were the initiator of inverting the logic, and user179... just jumped on the bandwagon. LOL. |2015-06-23 20:57:22.007|\n|183480 |117550 |60618 |0 |CC BY-SA 3.0 |null |Thank you! I knew I was missing something. I even tried '$someday' earlier. LOL |2014-02-28 21:41:59.577|\n|682299 |383262 |105631 |0 |CC BY-SA 3.0 |null |LOL, I'm sorry. The code is anything but \"self evident\". There are three different layers and I have no idea which layer I should be looking for, nor in what kind of file to look for them. Is Java using some kind of reflection class to determine the properties? If so, no hope. If it's some kind of enumerated class, like you would do in C, I have been unable to find it.\\n\\nFurther, I cannot connect to our infastructure via the SDK because our admins have restricted it for some reason. I just need a list. |2017-08-03 18:37:30.887|\n|1117098|402489 |315425 |0 |CC BY-SA 4.0 |null |LOL just ran into this now ... must be way too tired, but thanks for the solution :) |2020-07-14 14:05:58.55 |\n|28019 |20464 |8965 |0 |CC BY-SA 3.0 |null |LOL my bad. I was trying to confirm the Cygwin part for ya (IIRC your original post had a possible command). It does specifically say \"ctime (time of last modification of file status information)\" if I had paid attention. |2011-09-13 02:02:12.223|\n|463430 |267357 |9503 |0 |CC BY-SA 3.0 |null |I managed to get a full list of files and they were around 6500. This is very less number, I thought there were millions LOL. I would assume this is nothing for `ext4` file system. [Any thoughts?](http://serverfault.com/questions/761381/why-would-a-folder-with-around-6500-in-a-ext4-partition-become-inaccesible?noredirect=1#comment957762_761381) |2016-03-04 07:27:00.52 |\n|962900 |520955 |281844 |0 |CC BY-SA 4.0 |null |`FNR==NR` Yuck!!! (btw, \"Yuck!\" is probably the proper way to pronounce `awk`.) LOL |2019-05-25 13:32:22.06 |\n|949204 |513271 |254900 |0 |CC BY-SA 4.0 |null |............. 
LOL |2019-04-18 19:15:28.28 |\n|139768 |92475 |22858 |0 |CC BY-SA 3.0 |null |LOL. YMMD with that great comparison.:) But for real, I think it might make a difference to a system that is *not yet* out of memory. That is, if you try adjusting swappiness *before* copying the huge-size stuff, you might have a slight chance of getting swap utilized *way earlier* (ideally preventing vital processes from being killed). Or couldn't you? |2013-09-26 19:13:38.927|\n|602160 |340593 |16792 |0 |CC BY-SA 3.0 |null |Is this another meaning of LOL? |2017-01-27 17:53:59.427|\n|275964 |167457 |91207 |0 |CC BY-SA 3.0 |null |Haha sorry about that. I fixed it! What happened was I changed the whole export PATH= TextEdit file. I opened by .bash_history and went through every detail. I found my original PATH and copied that into the TextEdit and it works. Basically, I deleted PATH..LOL. At least learned some new things about terminal! Thank you all for your help. Really appreciate it :) |2014-11-13 00:45:50 |\n|723744 |404680 |79008 |0 |CC BY-SA 3.0 |null |On Ubuntu it does...And Ubuntu is from the same Debian family... LOL Talking about diversity in the Linux world. -_- Also @Kusalananda is right - I cannot guarantee that SMP will be present in the output. |2017-11-15 09:26:44.047|\n|1180261|630636 |138197 |0 |CC BY-SA 4.0 |null |Of course I'll do as you suggest, but I can't believe it doesn't exist a proper way to make it work LOL |2021-01-24 09:04:33 |\n|398355 |16640 |32558 |0 |CC BY-SA 3.0 |null |http://stackoverflow.com/questions/5920333/how-to-check-size-of-a-file LOL for the migrate :-) |2015-10-01 08:06:07.693|\n|816943 |3506 |79008 |0 |CC BY-SA 4.0 |null |I no longer use it...from approx 1 year. Just saying. LOL |2018-06-15 13:06:00.147|\n|179095 |115266 |16369 |0 |CC BY-SA 3.0 |null |What DNS is for? LOL! Will DNS re-start your disrupted TCP streams? |2014-02-15 02:01:14.77 |\n|656964 |369330 |null |0 |CC BY-SA 3.0 |user34720 |This request made me remember this - https://xkcd.com/1172/ - LOL |2017-06-06 19:38:02.817|\n|33252 |24740 |9382 |0 |CC BY-SA 3.0 |null |I just read what I'd written... LOL. Actually, I'm not sure about *reinvoke* (*re-invoke* maybe?), but thanks for telling :) I always appreciate corrections. |2011-11-16 19:38:54.293|\n|20059 |14944 |1974 |0 |CC BY-SA 3.0 |null |@alex: Nevermind. I knew Cairo sounded familiar... it's familiar because that's what Inkscape uses to display SVG and convert them to PDFs. LOL. |2011-06-13 20:40:03.9 |\n|496786 |284968 |83246 |0 |CC BY-SA 3.0 |null |I will do no such horrible thing... :) ok I did and it does work.. Someone should be flogged for this!! LOL. |2016-05-24 15:37:42.207|\n|89816 |62886 |19617 |0 |CC BY-SA 3.0 |null |LOL- well, that was easy (read: obvious)! thx |2013-01-28 21:25:18.07 |\n|1018658|548771 |8337 |0 |CC BY-SA 4.0 |null |What if you set it to start \"last\" that help? https://superuser.com/a/573761/39364 This is like debugging via chat LOL |2019-10-26 00:31:17.227|\n|761653 |41647 |153769 |0 |CC BY-SA 3.0 |null |@mat - I'm a relative newcomer here. I see that you made an edit but left in the phrase \"You see, I'm a programmer ...\" which had me LOL-ing. Is there guidance on this kind of humor? Maybe it's a question for meta ... |2018-02-09 12:30:12.14 |\n|1201040|640662 |392770 |3 |CC BY-SA 4.0 |null |@GeoRie But this will also match some **unrelated** things. Check it with `echo \"AZAZA LOL 23 WTF\" | grep -oP '\\b(?<!\\.)\\d+(?!\\.)\\b'` - it returns `23`. So this is a bit **inaccurate solution**. 
|2021-03-23 11:52:18.47 |\n|821136 |452127 |null |0 |CC BY-SA 4.0 |user34720 |Old broadcom wifi user here so, this wiki was almost my homepage years ago. LOL. @EvanCarroll, added some probing with `dmesg` |2018-06-27 01:55:19.837|\n|1266364|671406 |90878 |0 |CC BY-SA 4.0 |null |LOL :-D Didn't check as AdminBee had just commented on missing code highlighting :-) Thx for letting me know, I can remove my redundant comment :-) |2021-10-01 11:16:51.17 |\n|1078175|579207 |308316 |0 |CC BY-SA 4.0 |null |LOL, does ssh supports reCAPTCHAs? That would certainly shut me out for good -- Google's artificial stupidity always treats me (a human) as a bot. Seriously, if you want an `expect(1)` solution for multiple ssh password auth, then say so explicitly. That would be much more useful for people coming here via searches. |2020-04-10 18:24:20.977|\n|683627 |382543 |245310 |0 |CC BY-SA 3.0 |null |If you knew someone who did this, you would be obliged to type in a LOLCAT message in the terminal. CAN HAS CHEEZBURGER? |2017-08-07 02:21:09.06 |\n|686815 |386095 |114317 |0 |CC BY-SA 3.0 |null |LOL, yes, you are right I suppose. ansible, docker and restore from backup every time something fails. I am getting too old for this stuff! :-) I did ask the question at Superuser, but that got me a Tumbleweed badge. Serverfault seems more about networking stuff. The question seems to be too arcane for other forums than this one. |2017-08-14 20:03:19.38 |\n|1146029|613618 |414777 |0 |CC BY-SA 4.0 |null |Weren't all plan9 fonts variable-width? I clearly remember that being a strong esthetical and technical point made by the designers of plan9. The interface to their command windows (\"xterms\" LOL) in the rio window manager was very deliberately rejecting any column alignment and any curses-like interface. |2020-10-09 02:34:55.38 |\n|142248 |93811 |48637 |0 |CC BY-SA 3.0 |null |May I add, that there are periods where it seems to post what I would say is acceptable in speed response, if I do a 'refresh' page (whether in Chrome or Firefox) it will take a perceived 200-300ms, then maybe some time later a mind boggling 1.5 mins. You can imagine trying to make multiple edits to a View setup, or Config change, I'm sitting here wondering if I should go make (another!) brew LOL |2013-10-06 15:54:54.14 |\n|81568 |58947 |27121 |0 |CC BY-SA 3.0 |null |LOL, Thx! Yes, I enjoy X'mas! |2012-12-21 14:56:42.067|\n|1040102|559688 |308316 |0 |CC BY-SA 4.0 |null |what \"console window\" is that? (`gnome-terminal`, `konsole`, `putty`, linux virtual terminal, etc). Try `tput smcup; tput cup \"$((LINES/2))\" \"$((COLUMNS/2-2))\"; echo -n LOL; read var; tput rmcup` |2019-12-31 13:02:01.49 |\n|1019071|548771 |8337 |0 |CC BY-SA 4.0 |null |I didn't think it would be useful, but in retrospect it might be: add an \"after=network-online.target\" though a handful of people have found even that isn't enough so you have to add a restart https://github.com/google/cloud-print-connector/issues/140 (or add a \"sleep 10\" into the bash side of things LOL). GL! |2019-10-28 04:04:20.697|\n|1057214|531173 |308316 |0 |CC BY-SA 4.0 |null |@ilkkachu thread ids and process ids live in the same namespace, At least in Linux. But, yes, it's probably a good idea to not create children in the first place, I will not argue against that, LOL. As to practical scenarios, ANY scenario where you want to do ANYTHING with the PID returned by waipid() other than check if it's != -1 is problematical. 
|2020-02-19 20:44:16.453|\n|1013360|230451 |8337 |1 |CC BY-SA 4.0 |null |Sadly, SO_KEEPALIVE seem to only be sent every couple of hours (at least by default) so a better name for them would be \"check if dead's\" LOL |2019-10-10 20:14:27.15 |\n|761556 |419361 |142735 |0 |CC BY-SA 3.0 |null |I'd have to do some research to find out how to enter the commands, so I'll just consider my itch scratched. LOL. I'll just have to hope that running a desktop session on an Internet-connected computer with user tom in the wheel group doesn't hose me. |2018-02-09 08:03:01.43 |\n|1221737|651007 |174665 |0 |CC BY-SA 4.0 |null |This `find` does lack `-print0`, unfortunately. That would've been pretty convenient. And the `xargs` lacks `-I`. It's as if this stuff was disabled with the specific intent of preventing what I'm trying to do. [sigh] \\n\\nThe `while` version DID work. I didn't try that approach before asking here, because people vent all the time, and even write big tutorials, about how allegedly terrible it is to use `while` loops to process files and directories in the shell. LOL.\\n\\nI didn't test the fancy script, not having names to deal with containing newlines. Thanks for the thorough answer! |2021-05-24 00:39:44.603|\n|56230 |40988 |15732 |0 |CC BY-SA 3.0 |null |So you trust more some perl script based portknocking than openssh which is developted by OpenBSD guys with security in mind? :D Is not great portknocking l33t tool running under root? LOL. |2012-06-19 09:02:47.813|\n|941650 |410050 |205857 |0 |CC BY-SA 4.0 |null |@dreua Yeah, the typo is quite funny. LOL |2019-03-29 20:02:26.697|\n|951871 |265646 |138261 |0 |CC BY-SA 4.0 |null |LOL 3 years after the answer after someone upvoted it, and I just noticed this answer is telling me about a typo in the question; corrected to /proc/interrupts |2019-04-24 10:53:59.417|\n|937506 |507379 |141494 |0 |CC BY-SA 4.0 |null |LOL. TBH it really didn't -- this question is three years old. But since it didn't have an answer yet I went ahead and tested your solution to confirm it. ¯\\_(ツ)_/¯ |2019-03-20 22:23:22.417|\n|900188 |490726 |325065 |0 |CC BY-SA 4.0 |null |So not being able to exchange accurate diff(1)s is a feature now? LOL. |2018-12-24 10:45:02.14 |\n|937342 |507361 |342648 |0 |CC BY-SA 4.0 |null |That means my -x \"..*\" strategy looks into lot of unnecessary places... I would need to update this everywhere, LOL....... |2019-03-20 17:20:14.933|\n|1262154|669603 |90878 |0 |CC BY-SA 4.0 |null |LOL :-D Once you get it working, please do write an answer to your own question with step-by-step instruction. I've been using KeePass for almost a decade and never found out how to use it with a terminal SSH client :-D |2021-09-18 18:31:10.653|\n|150146 |98443 |9491 |0 |CC BY-SA 3.0 |null |@JosephR. I'm not sure of how could I interleave the songs \"in such a way as to minimize adjacency of songs by the same artist\". Thanks for the info, though! Maybe because yesterday and today are holidays in my country, I'm a bit slow to think, LOL! |2013-11-02 17:36:45.137|\n|441864 |255744 |80389 |0 |CC BY-SA 3.0 |null |LOL was only an error syntax i use - instead of . because usually tar.gz are archived as name-version.tar.gz ,sendmail use name.version.tar.gz add your comment as answer so i can vote |2016-01-16 19:45:36.237|\n|440175 |254907 |83246 |1 |CC BY-SA 3.0 |null |You beat me to it :) LOL. |2016-01-12 16:18:19.323|\n|398863 |233466 |23692 |0 |CC BY-SA 3.0 |null |LOL. I see you solved it that way a couple of seconds before I suggested it. Nice. 
|2015-10-02 11:15:44.963|\n|225093 |139434 |60539 |0 |CC BY-SA 3.0 |null |I should do that, and will when I get around to it, may not be right this instant or today, but in the next few days at the latest, I will. LOL, \"Please rerun the make command\" for each \"module\" if you will, does seem rather ridiculous. The error was always, something along the lines of \"... make file is out of date with respect to Makefile.PL\" Maybe a timestamp issue with respect to file creation? I did notice that `date` always returned UTC instead of CDT, although the UTC time was correct +/- a minute. |2014-06-27 21:48:55.157|\n+-------+-------+-------+-----+---------------+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+\n\n"
]
],
[
[
"# Stopping Spark",
"_____no_output_____"
]
],
[
[
"spark.stop()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e76d3523274cd0ef69da85c885d24f6e29c8c098 | 3,662 | ipynb | Jupyter Notebook | fastfinger-anticheat/.ipynb_checkpoints/anticheatTypingScript-checkpoint.ipynb | LucasSantiag/fastfinger-tensorflow | d307d38d09560b3945aad48ad7aaa6b200563456 | [
"MIT"
] | 6 | 2019-04-24T16:55:42.000Z | 2019-04-25T12:39:46.000Z | fastfinger-anticheat/.ipynb_checkpoints/anticheatTypingScript-checkpoint.ipynb | LucasSantiag/fastfinger-tensorflow | d307d38d09560b3945aad48ad7aaa6b200563456 | [
"MIT"
] | null | null | null | fastfinger-anticheat/.ipynb_checkpoints/anticheatTypingScript-checkpoint.ipynb | LucasSantiag/fastfinger-tensorflow | d307d38d09560b3945aad48ad7aaa6b200563456 | [
"MIT"
] | null | null | null | 24.251656 | 113 | 0.555434 | [
[
[
"from selenium import webdriver\nfrom selenium.webdriver.common.keys import Keys\nfrom PIL import Image\nimport time\nimport numpy as np\nimport pytesseract as ocr\nimport cv2",
"_____no_output_____"
],
[
"driver = webdriver.Chrome('/home/lucas.cardoso/fastfinger-tensorflow/chromedriver_linux64/chromedriver')\ndriver.get('https://10fastfingers.com/login')",
"_____no_output_____"
],
[
"login = input(\"Email: \")\npassword = input (\"Password: \")",
"_____no_output_____"
],
[
"driver.find_element('id', 'UserEmail').send_keys(login)\ndriver.find_element('id', 'UserPassword').send_keys(password)\ndriver.find_element('id', 'login-form-submit').click()",
"_____no_output_____"
],
[
"while True:\n try:\n highlightWord = driver.find_elements_by_class_name(\"highlight\")[0] \n except:\n break\n driver.find_element('id','inputfield').send_keys(highlightWord.text + \" \")\n time.sleep(0.075)",
"_____no_output_____"
],
[
"driver.get(\"https://10fastfingers.com/anticheat/view/1/1\")\ndriver.find_element('id', 'start-btn').click()\ndriver.set_window_size(1050, 1000)\nprint (driver.get_window_size())",
"_____no_output_____"
],
[
"driver.save_screenshot(\"screenshot.png\")",
"_____no_output_____"
],
[
"from PIL import Image\nimageObject = Image.open(\"screenshot.png\")\ncropped = imageObject.crop((300,190,880,320))\ncropped.save(\"screenshot.png\")",
"_____no_output_____"
],
[
"imagem = Image.open('screenshot.png').convert('RGB')\nnpimagem = np.asarray(imagem).astype(np.uint8) \nnpimagem[:, :, 0] = 0\nnpimagem[:, :, 2] = 0\nim = cv2.cvtColor(npimagem, cv2.COLOR_RGB2GRAY) \n\nret, thresh = cv2.threshold(im, 127, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU) \n\nbinimagem = Image.fromarray(thresh) \n\nphrase = ocr.image_to_string(binimagem, lang='por')\nprint(phrase) \n",
"_____no_output_____"
],
[
"textarea = driver.find_element('id', 'word-input')\ntextarea.send_keys (phrase)\ndriver.find_element('id','submit-anticheat').click()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76d3c811b9bdd7a6fe639fe5408328c8cc07130 | 15,510 | ipynb | Jupyter Notebook | MLCourse/FinalProjectAssignment.ipynb | jashburn8020/ml-course | c360f84ea43fcd550120e98189076f30e85084cb | [
"Apache-2.0"
] | 3 | 2021-02-25T17:33:56.000Z | 2021-09-02T07:11:31.000Z | MLCourse/FinalProjectAssignment.ipynb | jashburn8020/ml-course | c360f84ea43fcd550120e98189076f30e85084cb | [
"Apache-2.0"
] | null | null | null | MLCourse/FinalProjectAssignment.ipynb | jashburn8020/ml-course | c360f84ea43fcd550120e98189076f30e85084cb | [
"Apache-2.0"
] | null | null | null | 24.776358 | 337 | 0.586009 | [
[
[
"# Final Project\n\n## Predict whether a mammogram mass is benign or malignant\n\nWe'll be using the \"mammographic masses\" public dataset from the UCI repository (source: https://archive.ics.uci.edu/ml/datasets/Mammographic+Mass)\n\nThis data contains 961 instances of masses detected in mammograms, and contains the following attributes:\n\n\n 1. BI-RADS assessment: 1 to 5 (ordinal) \n 2. Age: patient's age in years (integer)\n 3. Shape: mass shape: round=1 oval=2 lobular=3 irregular=4 (nominal)\n 4. Margin: mass margin: circumscribed=1 microlobulated=2 obscured=3 ill-defined=4 spiculated=5 (nominal)\n 5. Density: mass density high=1 iso=2 low=3 fat-containing=4 (ordinal)\n 6. Severity: benign=0 or malignant=1 (binominal)\n \nBI-RADS is an assesment of how confident the severity classification is; it is not a \"predictive\" attribute and so we will discard it. The age, shape, margin, and density attributes are the features that we will build our model with, and \"severity\" is the classification we will attempt to predict based on those attributes.\n\nAlthough \"shape\" and \"margin\" are nominal data types, which sklearn typically doesn't deal with well, they are close enough to ordinal that we shouldn't just discard them. The \"shape\" for example is ordered increasingly from round to irregular.\n\nA lot of unnecessary anguish and surgery arises from false positives arising from mammogram results. If we can build a better way to interpret them through supervised machine learning, it could improve a lot of lives.\n\n## Your assignment\n\nApply several different supervised machine learning techniques to this data set, and see which one yields the highest accuracy as measured with K-Fold cross validation (K=10). Apply:\n\n* Decision tree\n* Random forest\n* KNN\n* Naive Bayes\n* SVM\n* Logistic Regression\n* And, as a bonus challenge, a neural network using Keras.\n\nThe data needs to be cleaned; many rows contain missing data, and there may be erroneous data identifiable as outliers as well.\n\nRemember some techniques such as SVM also require the input data to be normalized first.\n\nMany techniques also have \"hyperparameters\" that need to be tuned. Once you identify a promising approach, see if you can make it even better by tuning its hyperparameters.\n\nI was able to achieve over 80% accuracy - can you beat that?\n\nBelow I've set up an outline of a notebook for this project, with some guidance and hints. If you're up for a real challenge, try doing this project from scratch in a new, clean notebook!\n",
"_____no_output_____"
],
[
"## Let's begin: prepare your data\n\nStart by importing the mammographic_masses.data.txt file into a Pandas dataframe (hint: use read_csv) and take a look at it.",
"_____no_output_____"
],
[
"Make sure you use the optional parmaters in read_csv to convert missing data (indicated by a ?) into NaN, and to add the appropriate column names (BI_RADS, age, shape, margin, density, and severity):",
"_____no_output_____"
],
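[
"A possible sketch (not the official solution; the file name and column order are taken from the assignment text above, and the variable name `masses_data` is only illustrative):\n\n```python\nimport pandas as pd\n\ncol_names = ['BI_RADS', 'age', 'shape', 'margin', 'density', 'severity']\n\n# '?' marks missing values in this file, so map it to NaN while reading\nmasses_data = pd.read_csv('mammographic_masses.data.txt',\n                          na_values=['?'], names=col_names)\nmasses_data.head()\n```",
"_____no_output_____"
],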
[
"Evaluate whether the data needs cleaning; your model is only as good as the data it's given. Hint: use describe() on the dataframe.",
"_____no_output_____"
],
[
"There are quite a few missing values in the data set. Before we just drop every row that's missing data, let's make sure we don't bias our data in doing so. Does there appear to be any sort of correlation to what sort of data has missing fields? If there were, we'd have to try and go back and fill that data in.",
"_____no_output_____"
],
[
"If the missing data seems randomly distributed, go ahead and drop rows with missing data. Hint: use dropna().",
"_____no_output_____"
],
[
"Next you'll need to convert the Pandas dataframes into numpy arrays that can be used by scikit_learn. Create an array that extracts only the feature data we want to work with (age, shape, margin, and density) and another array that contains the classes (severity). You'll also need an array of the feature name labels.",
"_____no_output_____"
],
[
"Some of our models require the input data to be normalized, so go ahead and normalize the attribute data. Hint: use preprocessing.StandardScaler().",
"_____no_output_____"
],
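[
"One way to pull out the arrays and normalize them (a sketch that assumes the `masses_data` DataFrame from the earlier example, with dropna() already applied):\n\n```python\nfrom sklearn import preprocessing\n\nfeature_names = ['age', 'shape', 'margin', 'density']\nall_features = masses_data[feature_names].values\nall_classes = masses_data['severity'].values\n\n# standardize the features so distance- and margin-based models behave well\nscaler = preprocessing.StandardScaler()\nall_features_scaled = scaler.fit_transform(all_features)\n```",
"_____no_output_____"
],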
[
"## Decision Trees\n\nBefore moving to K-Fold cross validation and random forests, start by creating a single train/test split of our data. Set aside 75% for training, and 25% for testing.",
"_____no_output_____"
],
[
"Now create a DecisionTreeClassifier and fit it to your training data.",
"_____no_output_____"
],
[
"Display the resulting decision tree.",
"_____no_output_____"
],
[
"Measure the accuracy of the resulting decision tree model using your test data.",
"_____no_output_____"
],
[
"Now instead of a single train/test split, use K-Fold cross validation to get a better measure of your model's accuracy (K=10). Hint: use model_selection.cross_val_score",
"_____no_output_____"
],
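[
"A minimal sketch of K-Fold cross validation with K=10 (variable names follow the earlier sketches and are only illustrative):\n\n```python\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.tree import DecisionTreeClassifier\n\nclf = DecisionTreeClassifier(random_state=1)\n\n# cv=10 gives the K=10 folds asked for above; the result is one score per fold\ncv_scores = cross_val_score(clf, all_features_scaled, all_classes, cv=10)\ncv_scores.mean()\n```",
"_____no_output_____"
],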
[
"Now try a RandomForestClassifier instead. Does it perform better?",
"_____no_output_____"
],
[
"## SVM\n\nNext try using svm.SVC with a linear kernel. How does it compare to the decision tree?",
"_____no_output_____"
],
[
"## KNN\nHow about K-Nearest-Neighbors? Hint: use neighbors.KNeighborsClassifier - it's a lot easier than implementing KNN from scratch like we did earlier in the course. Start with a K of 10. K is an example of a hyperparameter - a parameter on the model itself which may need to be tuned for best results on your particular data set.",
"_____no_output_____"
],
[
"Choosing K is tricky, so we can't discard KNN until we've tried different values of K. Write a for loop to run KNN with K values ranging from 1 to 50 and see if K makes a substantial difference. Make a note of the best performance you could get out of KNN.",
"_____no_output_____"
],
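[
"One way to sweep K (illustrative only; it reuses the scaled features and cross-validation setup from the earlier sketches):\n\n```python\nfrom sklearn import neighbors\nfrom sklearn.model_selection import cross_val_score\n\n# try K from 1 to 50 and print the mean cross-validated accuracy for each\nfor k in range(1, 51):\n    clf = neighbors.KNeighborsClassifier(n_neighbors=k)\n    scores = cross_val_score(clf, all_features_scaled, all_classes, cv=10)\n    print(k, scores.mean())\n```",
"_____no_output_____"
],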
[
"## Naive Bayes\n\nNow try naive_bayes.MultinomialNB. How does its accuracy stack up? Hint: you'll need to use MinMaxScaler to get the features in the range MultinomialNB requires.",
"_____no_output_____"
],
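[
"A sketch of the MinMaxScaler + MultinomialNB combination mentioned above (MultinomialNB rejects negative inputs, which is why the standardized features cannot be fed to it directly):\n\n```python\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.model_selection import cross_val_score\n\n# rescale the raw features into [0, 1] so they are non-negative\nall_features_minmax = MinMaxScaler().fit_transform(all_features)\n\nclf = MultinomialNB()\ncross_val_score(clf, all_features_minmax, all_classes, cv=10).mean()\n```",
"_____no_output_____"
],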
[
"## Revisiting SVM\n\nsvm.SVC may perform differently with different kernels. The choice of kernel is an example of a \"hyperparamter.\" Try the rbf, sigmoid, and poly kernels and see what the best-performing kernel is. Do we have a new winner?",
"_____no_output_____"
],
[
"## Logistic Regression\n\nWe've tried all these fancy techniques, but fundamentally this is just a binary classification problem. Try Logisitic Regression, which is a simple way to tackling this sort of thing.",
"_____no_output_____"
],
[
"## Neural Networks\n\nAs a bonus challenge, let's see if an artificial neural network can do even better. You can use Keras to set up a neural network with 1 binary output neuron and see how it performs. Don't be afraid to run a large number of epochs to train the model if necessary.",
"_____no_output_____"
],
[
"## Do we have a winner?\n\nWhich model, and which choice of hyperparameters, performed the best? Feel free to share your results!",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e76d49879b9e0b612ef0198631cfc5891e4b5037 | 26,581 | ipynb | Jupyter Notebook | .ipynb_checkpoints/Bimodel Test_CCtroubleshoot-checkpoint.ipynb | Kovacs-Lab/Geneva_Ionomics | 12d5586889125639de5f75be0eadeb6ff4c55402 | [
"CC0-1.0"
] | null | null | null | .ipynb_checkpoints/Bimodel Test_CCtroubleshoot-checkpoint.ipynb | Kovacs-Lab/Geneva_Ionomics | 12d5586889125639de5f75be0eadeb6ff4c55402 | [
"CC0-1.0"
] | null | null | null | .ipynb_checkpoints/Bimodel Test_CCtroubleshoot-checkpoint.ipynb | Kovacs-Lab/Geneva_Ionomics | 12d5586889125639de5f75be0eadeb6ff4c55402 | [
"CC0-1.0"
] | null | null | null | 51.2158 | 476 | 0.54001 | [
[
[
"#Installing packages and loading them into the enviroment\ninstall.packages(\"MASS\")\nlibrary(\"MASS\")\ninstall.packages(\"missMDA\")\nlibrary(\"missMDA\")\ninstall.packages(\"tidyverse\")\nlibrary(\"tidyverse\")\ninstall.packages(\"caret\")\nlibrary(\"caret\")\ninstall.packages(\"mice\")\nlibrary(\"mice\")",
"Installing package into 'C:/Users/daeda/OneDrive/Documents/R/win-library/3.6'\n(as 'lib' is unspecified)\n\nWarning message:\n\"package 'MASS' is in use and will not be installed\"\nInstalling package into 'C:/Users/daeda/OneDrive/Documents/R/win-library/3.6'\n(as 'lib' is unspecified)\n\nWarning message:\n\"package 'missMDA' is in use and will not be installed\"\nInstalling package into 'C:/Users/daeda/OneDrive/Documents/R/win-library/3.6'\n(as 'lib' is unspecified)\n\nWarning message:\n\"package 'tidyverse' is in use and will not be installed\"\nInstalling package into 'C:/Users/daeda/OneDrive/Documents/R/win-library/3.6'\n(as 'lib' is unspecified)\n\nWarning message:\n\"package 'caret' is in use and will not be installed\"\nInstalling package into 'C:/Users/daeda/OneDrive/Documents/R/win-library/3.6'\n(as 'lib' is unspecified)\n\nWarning message:\n\"package 'mice' is in use and will not be installed\"\n"
],
[
"#Loading all needed files,dropping first two coloumns, which are not needed for analysis (ID, species)\nionomics <- read.csv('spec_shoot_xyz_combined.csv', colClasses =c(\"NULL\",\"NULL\",NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,\n NA,NA,NA,NA,NA,NA,NA))\n#Imputation via missMDA\nimputeIonomics <- imputePCA(ionomics, method = \"Regularized\")",
"_____no_output_____"
],
[
"#Imputation via Mice\ntemp <- mice(ionomics, m = 5, maxit = 5, method = \"norm.boot\", seed = 123)\nionomics <- complete(temp,1)",
"\n iter imp variable\n 1 1 spec_as_int S34 As75\n 1 2 spec_as_int S34 As75\n 1 3 spec_as_int S34 As75\n 1 4 spec_as_int S34 As75\n 1 5 spec_as_int S34 As75\n 2 1 spec_as_int S34 As75\n 2 2 spec_as_int S34 As75\n 2 3 spec_as_int S34 As75\n 2 4 spec_as_int S34 As75\n 2 5 spec_as_int S34 As75\n 3 1 spec_as_int S34 As75\n 3 2 spec_as_int S34 As75\n 3 3 spec_as_int S34 As75\n 3 4 spec_as_int S34 As75\n 3 5 spec_as_int S34 As75\n 4 1 spec_as_int S34 As75\n 4 2 spec_as_int S34 As75\n 4 3 spec_as_int S34 As75\n 4 4 spec_as_int S34 As75\n 4 5 spec_as_int S34 As75\n 5 1 spec_as_int S34 As75\n 5 2 spec_as_int S34 As75\n 5 3 spec_as_int S34 As75\n 5 4 spec_as_int S34 As75\n 5 5 spec_as_int S34 As75\n"
],
[
"# Split the data into training (80%) and test set (20%)\nset.seed(123)\ntraining.samples <- ionomics[,2] %>%\n createDataPartition(p = 0.8, list = FALSE)\ntrain.data <- ionomics[training.samples, ]\ntest.data <- ionomics[-training.samples, ]\n\ntraining.samples1 <- imputeIonomics$completeObs[,2] %>%\n createDataPartition(p = 0.8, list = FALSE)\ntrain.data1 <- imputeIonomics$completeObs[training.samples1, ]\ntest.data1 <- imputeIonomics$completeObs[-training.samples1, ]\ntrain.data1 <- unlist(train.data1)\ntest.data1 <- unlist(test.data1)\ntrain.data1 <- as.data.frame(train.data1)\ntest.data1 <- as.data.frame(test.data1)",
"_____no_output_____"
],
[
"# Fit the model\nmodel <- lda(spec_as_int~., data = train.data)\n# Make predictions\npredictions <- model %>% predict(test.data)\n# Model accuracy\nmean(predictions$class==test.data$spec_as_int)\n#model\n\nmodel1 <- lda(spec_as_int~., data = train.data1)\n# Make predictions\npredictions1 <- model1 %>% predict(test.data1)\n# Model accuracy\nmean(predictions1$class == test.data1$spec_as_int)\n#model1",
"_____no_output_____"
],
[
"# Predicted classes\nhead(predictions$class, 6)\n# Predicted probabilities of class memebership.\nhead(predictions$posterior, 6) \n# Linear discriminants\nhead(predictions$x, 3) ",
"_____no_output_____"
]
],
[
[
"|species|spec_as_int|\n|---|---|\n|acerifolia_x|1|\n|aestivalis_x|2|\n|cinerea_x|3|\n|labrusca_x|4|\n|palmata_x|5|\n|riparia_x|6|\n|rupestris_x|7|\n|vulpina_x|8|\n|acerifolia_y|9|\n|aestivalis_y|10|\n|cinerea_y|11|\n|labrusca_y|12|\n|palmata_y|13|\n|riparia_y|14|\n|rupestris_y|15|\n|vulpina_y|16|\nacerifolia_z|17|\n|aestivalis_z|18|\n|cinerea_z|19|\n|labrusca_z|20|\n|palmata_z|21|\n|riparia_z|22|\n|rupestris_z|23|\n|vulpina_z|24|",
"_____no_output_____"
]
],
[
[
"table <- table(Predicted=predictions$class, Species=test.data$spec_as_int)\nprint(confusionMatrix(table))",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e76d54102b99f6d96207b9f4bff25aa9267a2370 | 2,594 | ipynb | Jupyter Notebook | docs/dev/keras/Activations.ipynb | pringlized/tensorflow2 | 887c825af424d78919effdee9493a622829c9755 | [
"MIT"
] | null | null | null | docs/dev/keras/Activations.ipynb | pringlized/tensorflow2 | 887c825af424d78919effdee9493a622829c9755 | [
"MIT"
] | null | null | null | docs/dev/keras/Activations.ipynb | pringlized/tensorflow2 | 887c825af424d78919effdee9493a622829c9755 | [
"MIT"
] | null | null | null | 18.013889 | 134 | 0.512336 | [
[
[
"# TensorFlow 2 - Keras Activations\nActivations can either be an **Activation** layer, or used through the **activation** arguement supported by all forward layers.",
"_____no_output_____"
],
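[
"A minimal illustration of the two equivalent ways to attach an activation (layer sizes and variable names are arbitrary):\n\n```python\nfrom tensorflow.keras import layers\n\n# 1. through the `activation` argument of a forward layer\ndense_a = layers.Dense(64, activation='relu')\n\n# 2. as a standalone Activation layer placed after the forward layer\ndense_b = layers.Dense(64)\nrelu_layer = layers.Activation('relu')\n```",
"_____no_output_____"
],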
[
"### softmax\nsoftmax activation function",
"_____no_output_____"
],
[
"### elu\nExponential Linear unit",
"_____no_output_____"
],
[
"### selu\nScaled Eponential Linear unit",
"_____no_output_____"
],
[
"### softplus\n\nSoftplus activation function\n\n",
"_____no_output_____"
],
[
"### softsign\nSoftsign activation function",
"_____no_output_____"
],
[
"### relu\nRectified Linear unit",
"_____no_output_____"
],
[
"### tahn\nHyperbolic tangent activation function",
"_____no_output_____"
],
[
"### sigmoid\nSigmoid activation function",
"_____no_output_____"
],
[
"### hard_sigmoid\nHard Sigmoid activation function",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e76d5ab9edbfefdbbcdd836e67c2aba7f9095e97 | 8,167 | ipynb | Jupyter Notebook | 01 introduction - 02 matrix reusability.ipynb | Mechachleopteryx/fall-in-love-with-julia | ed3fa54c1df1debbf400352a608340edc526b811 | [
"MIT"
] | 72 | 2020-09-02T16:49:51.000Z | 2022-03-03T22:34:15.000Z | 01 introduction - 02 matrix reusability.ipynb | Mechachleopteryx/fall-in-love-with-julia | ed3fa54c1df1debbf400352a608340edc526b811 | [
"MIT"
] | null | null | null | 01 introduction - 02 matrix reusability.ipynb | Mechachleopteryx/fall-in-love-with-julia | ed3fa54c1df1debbf400352a608340edc526b811 | [
"MIT"
] | 8 | 2021-01-07T12:20:36.000Z | 2022-01-11T15:37:46.000Z | 23.334286 | 225 | 0.525897 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e76d6d95e2417a9b629ffa927d3bfd0fb7b15fbf | 23,354 | ipynb | Jupyter Notebook | jupyter/d2l-java/chapter_convolutional-modern/densenet.ipynb | michaellavelle/djl | 468b29490b94f8b8dc38f7f7237a119884c25ff6 | [
"Apache-2.0"
] | 1 | 2020-09-18T04:29:36.000Z | 2020-09-18T04:29:36.000Z | jupyter/d2l-java/chapter_convolutional-modern/densenet.ipynb | michaellavelle/djl | 468b29490b94f8b8dc38f7f7237a119884c25ff6 | [
"Apache-2.0"
] | null | null | null | jupyter/d2l-java/chapter_convolutional-modern/densenet.ipynb | michaellavelle/djl | 468b29490b94f8b8dc38f7f7237a119884c25ff6 | [
"Apache-2.0"
] | null | null | null | 38.474465 | 665 | 0.577546 | [
[
[
"# Densely Connected Networks (DenseNet)\n\nResNet significantly changed the view of how to parametrize the functions in deep networks. DenseNet is to some extent the logical extension of this. To understand how to arrive at it, let us take a small detour to theory. Recall the Taylor expansion for functions. For scalars it can be written as\n\n$$f(x) = f(0) + f'(x) x + \\frac{1}{2} f''(x) x^2 + \\frac{1}{6} f'''(x) x^3 + o(x^3).$$\n\n## Function Decomposition\n\nThe key point is that it decomposes the function into increasingly higher order terms. In a similar vein, ResNet decomposes functions into\n\n$$f(\\mathbf{x}) = \\mathbf{x} + g(\\mathbf{x}).$$\n\nThat is, ResNet decomposes $f$ into a simple linear term and a more complex\nnonlinear one. What if we want to go beyond two terms? A solution was proposed\nby :cite:`Huang.Liu.Van-Der-Maaten.ea.2017` in the form of\nDenseNet, an architecture that reported record performance on the ImageNet\ndataset.\n\n\n\n:label:`fig_densenet_block`\n\n\nAs shown in :numref:`fig_densenet_block`, the key difference between ResNet and DenseNet is that in the latter case outputs are *concatenated* rather than added. As a result we perform a mapping from $\\mathbf{x}$ to its values after applying an increasingly complex sequence of functions.\n\n$$\\mathbf{x} \\to \\left[\\mathbf{x}, f_1(\\mathbf{x}), f_2(\\mathbf{x}, f_1(\\mathbf{x})), f_3(\\mathbf{x}, f_1(\\mathbf{x}), f_2(\\mathbf{x}, f_1(\\mathbf{x})), \\ldots\\right].$$\n\nIn the end, all these functions are combined in an MLP to reduce the number of features again. In terms of implementation this is quite simple---rather than adding terms, we concatenate them. The name DenseNet arises from the fact that the dependency graph between variables becomes quite dense. The last layer of such a chain is densely connected to all previous layers. The main components that compose a DenseNet are dense blocks and transition layers. The former defines how the inputs and outputs are concatenated, while the latter controls the number of channels so that it is not too large. The dense connections are shown in :numref:`fig_densenet`.\n\n\n\n:label:`fig_densenet`\n\n\n\n## Dense Blocks\n\nDenseNet uses the modified \"batch normalization, activation, and convolution\"\narchitecture of ResNet (see the exercise in :numref:`sec_resnet`).\nFirst, we implement this architecture in the\n`conv_block` function.",
"_____no_output_____"
]
],
[
[
"%mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/\n\n%maven ai.djl:api:0.7.0-SNAPSHOT\n%maven ai.djl:model-zoo:0.7.0-SNAPSHOT\n%maven ai.djl:basicdataset:0.7.0-SNAPSHOT\n%maven org.slf4j:slf4j-api:1.7.26\n%maven org.slf4j:slf4j-simple:1.7.26\n \n%maven ai.djl.mxnet:mxnet-engine:0.7.0-SNAPSHOT\n%maven ai.djl.mxnet:mxnet-native-auto:1.7.0-a",
"_____no_output_____"
],
[
"%%loadFromPOM\n<dependency>\n <groupId>tech.tablesaw</groupId>\n <artifactId>tablesaw-jsplot</artifactId>\n <version>0.30.4</version>\n</dependency>",
"_____no_output_____"
],
[
"%load ../utils/plot-utils.ipynb\n%load ../utils/Training.java",
"_____no_output_____"
],
[
"import ai.djl.Model;\nimport ai.djl.basicdataset.FashionMnist;\nimport ai.djl.metric.Metrics;\nimport ai.djl.modality.cv.transform.Resize;\nimport ai.djl.modality.cv.transform.ToTensor;\nimport ai.djl.ndarray.NDArray;\nimport ai.djl.ndarray.NDArrays;\nimport ai.djl.ndarray.NDList;\nimport ai.djl.ndarray.NDManager;\nimport ai.djl.ndarray.types.DataType;\nimport ai.djl.ndarray.types.Shape;\nimport ai.djl.nn.*;\nimport ai.djl.nn.convolutional.Conv2d;\nimport ai.djl.nn.core.Linear;\nimport ai.djl.nn.norm.BatchNorm;\nimport ai.djl.nn.pooling.Pool;\nimport ai.djl.training.DefaultTrainingConfig;\nimport ai.djl.training.EasyTrain;\nimport ai.djl.training.ParameterStore;\nimport ai.djl.training.Trainer;\nimport ai.djl.training.dataset.ArrayDataset;\nimport ai.djl.training.dataset.Dataset;\nimport ai.djl.training.evaluator.Accuracy;\nimport ai.djl.training.initializer.XavierInitializer;\nimport ai.djl.training.listener.TrainingListener;\nimport ai.djl.training.loss.Loss;\nimport ai.djl.training.optimizer.Optimizer;\nimport ai.djl.training.optimizer.learningrate.LearningRateTracker;\nimport ai.djl.translate.Pipeline;\nimport ai.djl.util.PairList;\n\nimport java.io.IOException;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.Map;\n\nimport tech.tablesaw.api.*;\nimport tech.tablesaw.plotly.api.*;\nimport tech.tablesaw.plotly.components.*;\nimport tech.tablesaw.plotly.Plot;\nimport tech.tablesaw.plotly.components.Figure;\nimport org.apache.commons.lang3.ArrayUtils;",
"_____no_output_____"
],
[
"public SequentialBlock convBlock(int numChannels) {\n\n SequentialBlock block = new SequentialBlock()\n .add(BatchNorm.builder().build())\n .add(Activation::relu)\n .add(Conv2d.builder()\n .setFilters(numChannels)\n .setKernelShape(new Shape(3, 3))\n .optPadding(new Shape(1, 1))\n .optStride(new Shape(1, 1))\n .build()\n );\n\n return block;\n}",
"_____no_output_____"
]
],
[
[
"A dense block consists of multiple `convBlock` units, each using the same number of output channels. In the forward computation, however, we concatenate the input and output of each block on the channel dimension.",
"_____no_output_____"
]
],
[
[
"class DenseBlock extends AbstractBlock {\n\n private static final byte VERSION = 2;\n public SequentialBlock net = new SequentialBlock();\n\n public DenseBlock(int numConvs, int numChannels) {\n super(VERSION);\n for (int i = 0; i < numConvs; i++) {\n this.net.add(\n addChildBlock(\"denseBlock\" + i, convBlock(numChannels))\n );\n }\n }\n\n @Override\n public String toString() {\n return \"DenseBlock()\";\n }\n\n @Override\n public NDList forward(ParameterStore parameterStore, NDList X, boolean training, PairList<String, Object> pairList) {\n\n NDArray Y;\n for (Block block : this.net.getChildren().values()) {\n Y = block.forward(parameterStore, X, training).singletonOrThrow();\n X = new NDList(NDArrays.concat(new NDList(X.singletonOrThrow(), Y), 1));\n }\n return X;\n }\n\n @Override\n public Shape[] getOutputShapes(NDManager ndManager, Shape[] inputs) {\n Shape[] shapesX = inputs;\n for (Block block : this.net.getChildren().values()) {\n Shape[] shapesY = block.getOutputShapes(ndManager, shapesX);\n shapesX[0] = new Shape(\n shapesX[0].get(0),\n shapesY[0].get(1) + shapesX[0].get(1),\n shapesX[0].get(2),\n shapesX[0].get(3)\n );\n }\n return shapesX;\n }\n\n @Override\n public void initializeChildBlocks(NDManager manager, DataType dataType, Shape... inputShapes) {\n Shape shapesX = inputShapes[0];\n\n for (Block block : this.net.getChildren().values()) {\n Shape[] shapesY = block.initialize(manager, DataType.FLOAT32, shapesX);\n shapesX = new Shape(\n shapesX.get(0),\n shapesY[0].get(1) + shapesX.get(1),\n shapesX.get(2),\n shapesX.get(3)\n );\n }\n }\n}",
"_____no_output_____"
]
],
[
[
"In the following example, we define a convolution block (`DenseBlock`) with two blocks of 10 output channels. When using an input with 3 channels, we will get an output with the $3+2\\times 10=23$ channels. The number of convolution block channels controls the increase in the number of output channels relative to the number of input channels. This is also referred to as the growth rate.",
"_____no_output_____"
]
],
[
[
"NDManager manager = NDManager.newBaseManager();\nSequentialBlock block = new SequentialBlock()\n .add(new DenseBlock(2, 10));\n\nNDArray X = manager.randomUniform(0f, 1.0f, new Shape(4, 3, 8, 8));\n\nblock.setInitializer(new XavierInitializer());\nblock.initialize(manager, DataType.FLOAT32, X.getShape());\n\nParameterStore parameterStore = new ParameterStore(manager, true);\n\nShape currentShape = X.getShape();\n\nfor (int i = 0; i < block.getChildren().size(); i++) {\n\n Shape[] newShape = block.getChildren().get(i).getValue().getOutputShapes(manager, new Shape[]{X.getShape()});\n currentShape = newShape[0]; \n}\n\ncurrentShape",
"_____no_output_____"
]
],
[
[
"## Transition Layers\n\nSince each dense block will increase the number of channels, adding too many of them will lead to an excessively complex model. A transition layer is used to control the complexity of the model. It reduces the number of channels by using the $1\\times 1$ convolutional layer and halves the height and width of the average pooling layer with a stride of 2, further reducing the complexity of the model.",
"_____no_output_____"
]
],
[
[
"public SequentialBlock transitionBlock(int numChannels) {\n SequentialBlock blk = new SequentialBlock()\n .add(BatchNorm.builder().build())\n .add(Activation::relu)\n .add(\n Conv2d.builder()\n .setFilters(numChannels)\n .setKernelShape(new Shape(1, 1))\n .optStride(new Shape(1, 1))\n .build()\n )\n .add(Pool.avgPool2dBlock(new Shape(2, 2), new Shape(2, 2)));\n\n return blk;\n}",
"_____no_output_____"
]
],
[
[
"Apply a transition layer with 10 channels to the output of the dense block in the previous example. This reduces the number of output channels to 10, and halves the height and width.",
"_____no_output_____"
]
],
[
[
"block = transitionBlock(10);\n\nblock.setInitializer(new XavierInitializer());\nblock.initialize(manager, DataType.FLOAT32, currentShape);\n\nfor (int i = 0; i < block.getChildren().size(); i++) {\n\n Shape[] newShape = block.getChildren().get(i).getValue().getOutputShapes(manager, new Shape[]{currentShape});\n currentShape = newShape[0];\n}\n\ncurrentShape",
"_____no_output_____"
]
],
[
[
"## DenseNet Model\n\nNext, we will construct a DenseNet model. DenseNet first uses the same single convolutional layer and maximum pooling layer as ResNet.",
"_____no_output_____"
]
],
[
[
"SequentialBlock net = new SequentialBlock()\n .add(Conv2d.builder()\n .setFilters(64)\n .setKernelShape(new Shape(7, 7))\n .optStride(new Shape(2, 2))\n .optPadding(new Shape(3, 3))\n .build())\n .add(BatchNorm.builder().build())\n .add(Activation::relu)\n .add(Pool.maxPool2dBlock(new Shape(3, 3), new Shape(2, 2), new Shape(1, 1)));",
"_____no_output_____"
]
],
[
[
"Then, similar to the four residual blocks that ResNet uses, DenseNet uses four dense blocks. Similar to ResNet, we can set the number of convolutional layers used in each dense block. Here, we set it to 4, consistent with the ResNet-18 in the previous section. Furthermore, we set the number of channels (i.e., growth rate) for the convolutional layers in the dense block to 32, so 128 channels will be added to each dense block.\n\nIn ResNet, the height and width are reduced between each module by a residual block with a stride of 2. Here, we use the transition layer to halve the height and width and halve the number of channels.",
"_____no_output_____"
]
],
[
[
"int numChannels = 64;\nint growthRate = 32;\n\nint[] numConvsInDenseBlocks = new int[]{4, 4, 4, 4};\n\nfor (int index = 0; index < numConvsInDenseBlocks.length; index++) {\n\n int numConvs = numConvsInDenseBlocks[index];\n net.add(new DenseBlock(numConvs, growthRate));\n\n numChannels += (numConvs * growthRate);\n\n if (index != (numConvsInDenseBlocks.length - 1)) {\n numChannels = (numChannels / 2);\n net.add(transitionBlock(numChannels));\n }\n}",
"_____no_output_____"
]
],
[
[
"Similar to ResNet, a global pooling layer and fully connected layer are connected at the end to produce the output.",
"_____no_output_____"
]
],
[
[
"net\n .add(BatchNorm.builder().build())\n .add(Activation::relu)\n .add(Pool.globalAvgPool2dBlock())\n .add(Linear.builder().setUnits(10).build());",
"_____no_output_____"
]
],
[
[
"## Data Acquisition and Training\n\nSince we are using a deeper network here, in this section, we will reduce the input height and width from 224 to 96 to simplify the computation.",
"_____no_output_____"
]
],
[
[
"int batchSize = 256;\nfloat lr = 0.1f;\nint numEpochs = 10;\n\ndouble[] trainLoss;\ndouble[] testAccuracy;\ndouble[] epochCount;\ndouble[] trainAccuracy;\n\nepochCount = new double[numEpochs];\n\nfor (int i = 0; i < epochCount.length; i++) {\n epochCount[i] = (i + 1);\n}\n\nFashionMnist trainIter =\n FashionMnist.builder()\n .optPipeline(new Pipeline().add(new Resize(96)).add(new ToTensor()))\n .optUsage(Dataset.Usage.TRAIN)\n .setSampling(batchSize, true)\n .build();\n\nFashionMnist testIter =\n FashionMnist.builder()\n .optPipeline(new Pipeline().add(new Resize(96)).add(new ToTensor()))\n .optUsage(Dataset.Usage.TEST)\n .setSampling(batchSize, true)\n .build();\n\ntrainIter.prepare();\ntestIter.prepare();\n\nModel model = Model.newInstance(\"cnn\");\nmodel.setBlock(net);\n\nLoss loss = Loss.softmaxCrossEntropyLoss();\n\nLearningRateTracker lrt = LearningRateTracker.fixedLearningRate(lr);\nOptimizer sgd = Optimizer.sgd().setLearningRateTracker(lrt).build();\n\nDefaultTrainingConfig config = new DefaultTrainingConfig(loss).optOptimizer(sgd) // Optimizer (loss function)\n .addEvaluator(new Accuracy()) // Model Accuracy\n .addTrainingListeners(TrainingListener.Defaults.logging()); // Logging\n\nTrainer trainer = model.newTrainer(config);\ntrainer.initialize(new Shape(1, 1, 96, 96));\n\nMap<String, double[]> evaluatorMetrics = new HashMap<>();\ndouble avgTrainTimePerEpoch = 0;",
"_____no_output_____"
],
[
"Training.trainingChapter6(trainIter, testIter, numEpochs, trainer, evaluatorMetrics, avgTrainTimePerEpoch);",
"_____no_output_____"
],
[
"trainLoss = evaluatorMetrics.get(\"train_epoch_SoftmaxCrossEntropyLoss\");\ntrainAccuracy = evaluatorMetrics.get(\"train_epoch_Accuracy\");\ntestAccuracy = evaluatorMetrics.get(\"validate_epoch_Accuracy\");\n\nSystem.out.printf(\"loss %.3f,\", trainLoss[numEpochs - 1]);\nSystem.out.printf(\" train acc %.3f,\", trainAccuracy[numEpochs - 1]);\nSystem.out.printf(\" test acc %.3f\\n\", testAccuracy[numEpochs - 1]);\nSystem.out.printf(\"%.1f examples/sec\", trainIter.size() / (avgTrainTimePerEpoch / Math.pow(10, 9)));\nSystem.out.println();",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"String[] lossLabel = new String[trainLoss.length + testAccuracy.length + trainAccuracy.length];\n\nArrays.fill(lossLabel, 0, trainLoss.length, \"train loss\");\nArrays.fill(lossLabel, trainAccuracy.length, trainLoss.length + trainAccuracy.length, \"train acc\");\nArrays.fill(lossLabel, trainLoss.length + trainAccuracy.length,\n trainLoss.length + testAccuracy.length + trainAccuracy.length, \"test acc\");\n\nTable data = Table.create(\"Data\").addColumns(\n DoubleColumn.create(\"epoch\", ArrayUtils.addAll(epochCount, ArrayUtils.addAll(epochCount, epochCount))),\n DoubleColumn.create(\"metrics\", ArrayUtils.addAll(trainLoss, ArrayUtils.addAll(trainAccuracy, testAccuracy))),\n StringColumn.create(\"lossLabel\", lossLabel)\n);\n\nrender(LinePlot.create(\"\", data, \"epoch\", \"metrics\", \"lossLabel\"),\"text/html\");",
"_____no_output_____"
]
],
[
[
"## Summary\n\n* In terms of cross-layer connections, unlike ResNet, where inputs and outputs are added together, DenseNet concatenates inputs and outputs on the channel dimension.\n* The main units that compose DenseNet are dense blocks and transition layers.\n* We need to keep the dimensionality under control when composing the network by adding transition layers that shrink the number of channels again.\n\n## Exercises\n\n1. Why do we use average pooling rather than max-pooling in the transition layer?\n1. One of the advantages mentioned in the DenseNet paper is that its model parameters are smaller than those of ResNet. Why is this the case?\n1. One problem for which DenseNet has been criticized is its high memory consumption.\n * Is this really the case? Try to change the input shape to $224\\times 224$ to see the actual (GPU) memory consumption.\n * Can you think of an alternative means of reducing the memory consumption? How would you need to change the framework?\n1. Implement the various DenseNet versions presented in Table 1 of :cite:`Huang.Liu.Van-Der-Maaten.ea.2017`.\n1. Why do we not need to concatenate terms if we are just interested in $\\mathbf{x}$ and $f(\\mathbf{x})$ for ResNet? Why do we need this for more than two layers in DenseNet?\n1. Design a DenseNet for fully connected networks and apply it to the Housing Price prediction task.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e76d7689ab98ec5750691ea0e799e48369a41211 | 2,759 | ipynb | Jupyter Notebook | notebooks/NLP_task_hierarchy_json_export/NLP class hierarchy export.ipynb | OpenBioLink/ITM | 5fcb4d34f1bf8a457447c9e9e291e7904dca742d | [
"MIT"
] | null | null | null | notebooks/NLP_task_hierarchy_json_export/NLP class hierarchy export.ipynb | OpenBioLink/ITM | 5fcb4d34f1bf8a457447c9e9e291e7904dca742d | [
"MIT"
] | null | null | null | notebooks/NLP_task_hierarchy_json_export/NLP class hierarchy export.ipynb | OpenBioLink/ITM | 5fcb4d34f1bf8a457447c9e9e291e7904dca742d | [
"MIT"
] | null | null | null | 28.153061 | 86 | 0.555274 | [
[
[
"from owlready2 import *\nimport json\n\nonto = owlready2.get_ontology(\"ontology.owl\")\nonto.load()\nITO = onto.get_namespace(\"https://ai-strategies.org/ontology/\")",
"_____no_output_____"
],
[
"def class_to_dict(myclass):\n mydict = dict()\n if myclass.label[0]:\n mydict['name'] = myclass.label[0]\n if myclass.hasDefinition:\n mydict['description'] = myclass.hasDefinition\n if myclass.seeAlso:\n mydict['seeAlso'] = myclass.seeAlso\n if myclass.comment:\n mydict['comment'] = myclass.comment\n if myclass.has_input:\n mydict['has_input'] = myclass.has_input[0].label\n if myclass.has_output:\n mydict['has_output'] = myclass.has_output[0].label\n if myclass.hasExactSynonym:\n mydict['hasExactSynonym'] = myclass.hasExactSynonym\n if myclass.hasNarrowSynonym:\n mydict['hasNarrowSynonym'] = myclass.hasNarrowSynonym\n if myclass.hasBroadSynonym:\n mydict['hasBroadSynonym'] = myclass.hasBroadSynonym\n if myclass.refactor_comment:\n mydict['refactor_comment'] = myclass.refactor_comment\n if myclass.papers_with_code_id:\n mydict['papers_with_code_id'] = myclass.papers_with_code_id\n children = list()\n for subclass in myclass.subclasses():\n children.append(class_to_dict(subclass))\n if len(children) > 0:\n mydict['children'] = children\n return mydict",
"_____no_output_____"
],
[
"mydict = class_to_dict(ITO.ITO_00141)\nmydict",
"_____no_output_____"
],
[
"with open('NLP_class_hierarchy_21_12_2020.json', 'w', encoding='utf-8') as f:\n json.dump(mydict, f, indent=2)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
e76d787c64203629e0623b9f439b491efe3bc2f3 | 256,850 | ipynb | Jupyter Notebook | DataCamp/Risk and Returns: The Sharpe Ratio/notebook.ipynb | lukzmu/data-courses | 9d49bd6d0bb01bcee966fc52833b8e3aa9241432 | [
"MIT"
] | null | null | null | DataCamp/Risk and Returns: The Sharpe Ratio/notebook.ipynb | lukzmu/data-courses | 9d49bd6d0bb01bcee966fc52833b8e3aa9241432 | [
"MIT"
] | 52 | 2021-04-06T10:57:31.000Z | 2022-01-18T13:21:57.000Z | DataCamp/Risk and Returns: The Sharpe Ratio/notebook.ipynb | lukzmu/data-science | 806ae9caa635b486a81fc835218c04e340f1f186 | [
"MIT"
] | null | null | null | 256,850 | 256,850 | 0.939101 | [
[
[
"## 1. Meet Professor William Sharpe\n<p>An investment may make sense if we expect it to return more money than it costs. But returns are only part of the story because they are risky - there may be a range of possible outcomes. How does one compare different investments that may deliver similar results on average, but exhibit different levels of risks?</p>\n<p><img style=\"float: left ; margin: 5px 20px 5px 1px;\" width=\"200\" src=\"https://assets.datacamp.com/production/project_66/img/sharpe.jpeg\"></p>\n<p>Enter William Sharpe. He introduced the <a href=\"https://web.stanford.edu/~wfsharpe/art/sr/sr.htm\"><em>reward-to-variability ratio</em></a> in 1966 that soon came to be called the Sharpe Ratio. It compares the expected returns for two investment opportunities and calculates the additional return per unit of risk an investor could obtain by choosing one over the other. In particular, it looks at the difference in returns for two investments and compares the average difference to the standard deviation (as a measure of risk) of this difference. A higher Sharpe ratio means that the reward will be higher for a given amount of risk. It is common to compare a specific opportunity against a benchmark that represents an entire category of investments.</p>\n<p>The Sharpe ratio has been one of the most popular risk/return measures in finance, not least because it's so simple to use. It also helped that Professor Sharpe won a Nobel Memorial Prize in Economics in 1990 for his work on the capital asset pricing model (CAPM).</p>\n<p>The Sharpe ratio is usually calculated for a portfolio and uses the risk-free interest rate as benchmark. We will simplify our example and use stocks instead of a portfolio. We will also use a stock index as benchmark rather than the risk-free interest rate because both are readily available at daily frequencies and we do not have to get into converting interest rates from annual to daily frequency. Just keep in mind that you would run the same calculation with portfolio returns and your risk-free rate of choice, e.g, the <a href=\"https://fred.stlouisfed.org/series/TB3MS\">3-month Treasury Bill Rate</a>. </p>\n<p>So let's learn about the Sharpe ratio by calculating it for the stocks of the two tech giants Facebook and Amazon. As benchmark we'll use the S&P 500 that measures the performance of the 500 largest stocks in the US. When we use a stock index instead of the risk-free rate, the result is called the Information Ratio and is used to benchmark the return on active portfolio management because it tells you how much more return for a given unit of risk your portfolio manager earned relative to just putting your money into a low-cost index fund.</p>",
"_____no_output_____"
]
],
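[
[
"As a compact reference, the quantity computed in the remaining steps (using the S&P 500 as the benchmark and 252 trading days for annualization) is\n\n$$\\text{Sharpe ratio} = \\sqrt{252}\\,\\frac{\\text{mean}(r_{\\text{stock}} - r_{\\text{benchmark}})}{\\text{std}(r_{\\text{stock}} - r_{\\text{benchmark}})}$$",
"_____no_output_____"
]
],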
[
[
"# Importing required modules\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Settings to produce nice plots in a Jupyter notebook\nplt.style.use('fivethirtyeight')\n%matplotlib inline\n\n# Reading in the data\nstock_data = pd.read_csv('datasets/stock_data.csv', parse_dates=['Date'], index_col='Date').dropna()\nbenchmark_data = pd.read_csv('datasets/benchmark_data.csv', parse_dates=['Date'], index_col='Date').dropna()",
"_____no_output_____"
]
],
[
[
"## 2. A first glance at the data\n<p>Let's take a look the data to find out how many observations and variables we have at our disposal.</p>",
"_____no_output_____"
]
],
[
[
"# Display summary for stock_data\nprint('Stocks\\n')\ndisplay(stock_data.info())\ndisplay(stock_data.head())\n\n# Display summary for benchmark_data\nprint('\\nBenchmarks\\n')\ndisplay(benchmark_data.info())\ndisplay(benchmark_data.head())",
"Stocks\n\n<class 'pandas.core.frame.DataFrame'>\nDatetimeIndex: 252 entries, 2016-01-04 to 2016-12-30\nData columns (total 2 columns):\nAmazon 252 non-null float64\nFacebook 252 non-null float64\ndtypes: float64(2)\nmemory usage: 5.9 KB\n"
]
],
[
[
"## 3. Plot & summarize daily prices for Amazon and Facebook\n<p>Before we compare an investment in either Facebook or Amazon with the index of the 500 largest companies in the US, let's visualize the data, so we better understand what we're dealing with.</p>",
"_____no_output_____"
]
],
[
[
"# visualize the stock_data\nstock_data.plot(subplots=True, title='Stock Data')\n\n# summarize the stock_data\nstock_data.describe()\n",
"_____no_output_____"
]
],
[
[
"## 4. Visualize & summarize daily values for the S&P 500\n<p>Let's also take a closer look at the value of the S&P 500, our benchmark.</p>",
"_____no_output_____"
]
],
[
[
"# plot the benchmark_data\nbenchmark_data.plot(title='S&P 500')\n\n\n# summarize the benchmark_data\nbenchmark_data.describe()\n",
"_____no_output_____"
]
],
[
[
"## 5. The inputs for the Sharpe Ratio: Starting with Daily Stock Returns\n<p>The Sharpe Ratio uses the difference in returns between the two investment opportunities under consideration.</p>\n<p>However, our data show the historical value of each investment, not the return. To calculate the return, we need to calculate the percentage change in value from one day to the next. We'll also take a look at the summary statistics because these will become our inputs as we calculate the Sharpe Ratio. Can you already guess the result?</p>",
"_____no_output_____"
]
],
[
[
"# calculate daily stock_data returns\nstock_returns = stock_data.pct_change()\n\n# plot the daily returns\nstock_returns.plot()\n\n# summarize the daily returns\nstock_returns.describe()",
"_____no_output_____"
]
],
[
[
"## 6. Daily S&P 500 returns\n<p>For the S&P 500, calculating daily returns works just the same way, we just need to make sure we select it as a <code>Series</code> using single brackets <code>[]</code> and not as a <code>DataFrame</code> to facilitate the calculations in the next step.</p>",
"_____no_output_____"
]
],
[
[
"# calculate daily benchmark_data returns\nsp_returns = benchmark_data['S&P 500'].pct_change()\n\n# plot the daily returns\nsp_returns.plot()\n\n# summarize the daily returns\nsp_returns.describe()",
"_____no_output_____"
]
],
[
[
"## 7. Calculating Excess Returns for Amazon and Facebook vs. S&P 500\n<p>Next, we need to calculate the relative performance of stocks vs. the S&P 500 benchmark. This is calculated as the difference in returns between <code>stock_returns</code> and <code>sp_returns</code> for each day.</p>",
"_____no_output_____"
]
],
[
[
"# calculate the difference in daily returns\nexcess_returns = stock_returns.sub(sp_returns, axis=0)\n\n# plot the excess_returns\nexcess_returns.plot()\n\n# summarize the excess_returns\nexcess_returns.describe()",
"_____no_output_____"
]
],
[
[
"## 8. The Sharpe Ratio, Step 1: The Average Difference in Daily Returns Stocks vs S&P 500\n<p>Now we can finally start computing the Sharpe Ratio. First we need to calculate the average of the <code>excess_returns</code>. This tells us how much more or less the investment yields per day compared to the benchmark.</p>",
"_____no_output_____"
]
],
[
[
"# calculate the mean of excess_returns \n\navg_excess_return = excess_returns.mean()\n\n# plot avg_excess_returns\navg_excess_return.plot.bar()",
"_____no_output_____"
]
],
[
[
"## 9. The Sharpe Ratio, Step 2: Standard Deviation of the Return Difference\n<p>It looks like there was quite a bit of a difference between average daily returns for Amazon and Facebook.</p>\n<p>Next, we calculate the standard deviation of the <code>excess_returns</code>. This shows us the amount of risk an investment in the stocks implies as compared to an investment in the S&P 500.</p>",
"_____no_output_____"
]
],
[
[
"# calculate the standard deviations\nsd_excess_return = excess_returns.std()\n\n# plot the standard deviations\nsd_excess_return.plot.bar()",
"_____no_output_____"
]
],
[
[
"## 10. Putting it all together\n<p>Now we just need to compute the ratio of <code>avg_excess_returns</code> and <code>sd_excess_returns</code>. The result is now finally the <em>Sharpe ratio</em> and indicates how much more (or less) return the investment opportunity under consideration yields per unit of risk.</p>\n<p>The Sharpe Ratio is often <em>annualized</em> by multiplying it by the square root of the number of periods. We have used daily data as input, so we'll use the square root of the number of trading days (5 days, 52 weeks, minus a few holidays): √252</p>",
"_____no_output_____"
]
],
[
[
"# calculate the daily sharpe ratio\ndaily_sharpe_ratio = avg_excess_return.div(sd_excess_return)\n\n# annualize the sharpe ratio\nannual_factor = np.sqrt(252)\nannual_sharpe_ratio = daily_sharpe_ratio.mul(annual_factor)\n\n# plot the annualized sharpe ratio\nannual_sharpe_ratio.plot.bar(title='Annualized Sharpe Ration: Stocks vs S&P 500')\n",
"_____no_output_____"
]
],
[
[
"## 11. Conclusion\n<p>Given the two Sharpe ratios, which investment should we go for? In 2016, Amazon had a Sharpe ratio twice as high as Facebook. This means that an investment in Amazon returned twice as much compared to the S&P 500 for each unit of risk an investor would have assumed. In other words, in risk-adjusted terms, the investment in Amazon would have been more attractive.</p>\n<p>This difference was mostly driven by differences in return rather than risk between Amazon and Facebook. The risk of choosing Amazon over FB (as measured by the standard deviation) was only slightly higher so that the higher Sharpe ratio for Amazon ends up higher mainly due to the higher average daily returns for Amazon. </p>\n<p>When faced with investment alternatives that offer both different returns and risks, the Sharpe Ratio helps to make a decision by adjusting the returns by the differences in risk and allows an investor to compare investment opportunities on equal terms, that is, on an 'apples-to-apples' basis.</p>",
"_____no_output_____"
]
],
[
[
"# Uncomment your choice.\nbuy_amazon = True\n# buy_facebook = True",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e76d7c2130676f73de1c6c17d1f55e8c747cb581 | 681,906 | ipynb | Jupyter Notebook | notebooks/downscaling_pipeline/global_validation.ipynb | brews/downscaleCMIP6 | 7ce377f50c5a1b9d554668efeb30e969dd6ede18 | [
"MIT"
] | 19 | 2020-10-31T11:37:10.000Z | 2022-03-30T22:44:43.000Z | notebooks/downscaling_pipeline/global_validation.ipynb | brews/downscaleCMIP6 | 7ce377f50c5a1b9d554668efeb30e969dd6ede18 | [
"MIT"
] | 413 | 2020-09-18T00:11:42.000Z | 2022-03-30T22:42:49.000Z | notebooks/downscaling_pipeline/global_validation.ipynb | brews/downscaleCMIP6 | 7ce377f50c5a1b9d554668efeb30e969dd6ede18 | [
"MIT"
] | 11 | 2021-01-28T01:05:10.000Z | 2022-03-31T02:57:20.000Z | 1,224.247756 | 621,392 | 0.955542 | [
[
[
"### Global Validation ###\n\nThis notebook combines several validation notebooks: `global_validation_tasmax_v2.ipynb` and `global_validation_dtr_v2.ipynb` along with `check_aiqpd_downscaled_data.ipynb` to create a \"master\" global validation notebook. It also borrows validation code from the ERA-5 workflow, `validate_era5_hourlyORdaily_files.ipynb`. Parts of this will be incorporated into `dodola` such that minimal global validation is done automatically with every pipeline run. \n\n### Data Sources ###\n\nCoarse Resolution: \n- CMIP6 \n- Bias corrected data \n- ERA-5\n\nFine Resolution: \n- Bias corrected data \n- Downscaled data \n- ERA-5 (fine resolution)\n- ERA-5 (coarse resolution resampled to fine resolution) \n\n### Types of Validation ### \n\nBasic: \n- maxes, means, mins \n - CMIP6, bias corrected and downscaled (data)\n - historical, 2020-2040, 2040-2060, 2060-2080, 2080-2100 \n - 3 x 5 (in case we plot each data source separately in the pipeline) \n- NaN check, ranges, number of timesteps, file metadata attributes, variable names \n- spread between SSPs (this will not be a plot because it involves multiple runs)\n- differences between historical and future time periods for bias corrected and downscaled\n- differences between bias corrected and downscaled data \n\nVariable-specific: \n- GMST\n- days over 95 (TO-DO)\n- max values (precip) \n- negative values (DTR, precip) \n- number of days/gridcells that have wet day frequency correction applied (TO-DO)\n- max # of consecutive dry days, highest precip amt over 5-day rolling window (TO-DO) ",
"_____no_output_____"
]
],
[
[
"%matplotlib inline \nimport xarray as xr\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cartopy import config\nimport cartopy.crs as ccrs\nimport cartopy.feature as cfeature\nimport os \nfrom matplotlib import cm\n\nfrom matplotlib.backends.backend_pdf import PdfPages\n\nfrom validation import *",
"_____no_output_____"
]
],
[
[
"### Azure (or GCS) authentication ### ",
"_____no_output_____"
]
],
[
[
"# ! pip install adlfs",
"_____no_output_____"
],
[
"from adlfs import AzureBlobFileSystem\nfs_az = AzureBlobFileSystem(\n account_name='dc6',\n account_key='', \n client_id=os.environ.get(\"AZURE_CLIENT_ID\", None),\n client_secret=os.environ.get(\"AZURE_CLIENT_SECRET\", None),\n tenant_id=os.environ.get(\"AZURE_TENANT_ID\", None))",
"_____no_output_____"
]
],
[
[
"### Filepaths ###\n\n#### Output data ####",
"_____no_output_____"
]
],
[
[
"data_dict = {'coarse': {'cmip6': {'historical': 'scratch/biascorrectdownscale-bk6n8/biascorrectdownscale-bk6n8-858077599/out.zarr', \n 'ssp370': 'scratch/biascorrectdownscale-bk6n8/biascorrectdownscale-bk6n8-269778292/out.zarr'}, \n 'bias_corrected': {'historical': 'az://biascorrected-stage/CMIP/NOAA-GFDL/GFDL-ESM4/historical/r1i1p1f1/day/tasmax/gr1/v20210920214427.zarr', \n 'ssp370': 'az://biascorrected-stage/ScenarioMIP/NOAA-GFDL/GFDL-ESM4/ssp370/r1i1p1f1/day/tasmax/gr1/v20210920214427.zarr'}, \n 'ERA-5':'az://scratch/biascorrectdownscale-bk6n8/biascorrectdownscale-bk6n8-131793962/out.zarr'}, \n 'fine': {'bias_corrected': {'historical': 'az://scratch/biascorrectdownscale-bk6n8/biascorrectdownscale-bk6n8-1362934973/regridded.zarr', \n 'ssp370': 'az://scratch/biascorrectdownscale-bk6n8/biascorrectdownscale-bk6n8-377595554/regridded.zarr'}, \n 'downscaled': {'historical': 'az://downscaled-stage/CMIP/NOAA-GFDL/GFDL-ESM4/historical/r1i1p1f1/day/tasmax/gr1/v20210920214427.zarr', \n 'ssp370': 'az://downscaled-stage/ScenarioMIP/NOAA-GFDL/GFDL-ESM4/ssp370/r1i1p1f1/day/tasmax/gr1/v20210920214427.zarr'}, \n 'ERA-5_fine': 'az://scratch/biascorrectdownscale-bk6n8/biascorrectdownscale-bk6n8-491178896/rechunked.zarr', \n 'ERA-5_coarse': 'az://scratch/biascorrectdownscale-bk6n8/biascorrectdownscale-bk6n8-1213790070/rechunked.zarr'}}",
"_____no_output_____"
]
],
[
[
"### Variables ###\n\nPossible variables: `tasmax`, `tasmin`, `pr`, `dtr`. Default is `tasmax`. ",
"_____no_output_____"
]
],
[
[
"variable = 'tasmax'",
"_____no_output_____"
]
],
[
[
"### other data inputs ### ",
"_____no_output_____"
]
],
[
[
"units = {'tasmax': 'K', 'tasmin': 'K', 'dtr': 'K', 'pr': 'mm'}\nyears = {'hist': {'start_yr': '1995', 'end_yr': '2014'}, \n '2020_2040': {'start_yr': '2020', 'end_yr': '2040'}, \n '2040_2060': {'start_yr': '2040', 'end_yr': '2060'}, \n '2060_2080': {'start_yr': '2060', 'end_yr': '2080'}, \n '2080_2100': {'start_yr': '2080', 'end_yr': '2100'}}\nyears_test = {'hist': {'start_yr': '1995', 'end_yr': '2014'}, \n '2020_2040': {'start_yr': '2020', 'end_yr': '2040'}, \n '2040_2060': {'start_yr': '2040', 'end_yr': '2060'}}\npdf_location = '/home/jovyan'",
"_____no_output_____"
]
],
[
[
"### Validation ### ",
"_____no_output_____"
]
],
[
[
"pdf_list = []",
"_____no_output_____"
]
],
[
[
"### basic diagnostic plots: means and maxes ### ",
"_____no_output_____"
],
[
"bias corrected",
"_____no_output_____"
]
],
[
[
"# plot bias corrected \nstore_hist = fs_az.get_mapper(data_dict['coarse']['bias_corrected']['historical'], check=False)\nds_hist = xr.open_zarr(store_hist)\nstore_future = fs_az.get_mapper(data_dict['coarse']['bias_corrected']['ssp370'], check=False)\nds_future = xr.open_zarr(store_future)\n\nnew_pdf = plot_diagnostic_climo_periods(ds_hist, \n ds_future, \n 'ssp370', \n years, 'tasmax', 'max', 'bias_corrected', units, pdf_location, vmin=280, vmax=320)\npdf_list.append(new_pdf)",
"_____no_output_____"
],
[
"new_pdf = plot_diagnostic_climo_periods(ds_hist, \n ds_future, \n 'ssp370', \n years, 'tasmax', 'mean', 'bias_corrected', units, pdf_location, vmin=280, vmax=320)\npdf_list.append(new_pdf)",
"_____no_output_____"
]
],
[
[
"cmip6",
"_____no_output_____"
]
],
[
[
"# plot bias corrected \nstore_hist = fs_az.get_mapper(data_dict['coarse']['cmip6']['historical'], check=False)\nds_hist = xr.open_zarr(store_hist)\nstore_future = fs_az.get_mapper(data_dict['coarse']['cmip6']['ssp370'], check=False)\nds_future = xr.open_zarr(store_future)\n\nnew_pdf = plot_diagnostic_climo_periods(ds_hist, \n ds_future, \n 'ssp370', \n years, 'tasmax', 'max', 'cmip6', units, pdf_location, vmin=280, vmax=320)\npdf_list.append(new_pdf)",
"_____no_output_____"
],
[
"new_pdf = plot_diagnostic_climo_periods(ds_hist, \n ds_future, \n 'ssp370', \n years, 'tasmax', 'mean', 'cmip6', units, pdf_location, vmin=280, vmax=320)\npdf_list.append(new_pdf)",
"_____no_output_____"
]
],
[
[
"downscaled",
"_____no_output_____"
]
],
[
[
"# plot bias corrected \nstore_hist = fs_az.get_mapper(data_dict['fine']['downscaled']['historical'], check=False)\nds_hist = xr.open_zarr(store_hist)\nstore_future = fs_az.get_mapper(data_dict['fine']['downscaled']['ssp370'], check=False)\nds_future = xr.open_zarr(store_future)\n\nnew_pdf = plot_diagnostic_climo_periods(ds_hist, \n ds_future, \n 'ssp370', \n years, 'tasmax', 'max', 'downscaled', units, pdf_location, vmin=280, vmax=320)\npdf_list.append(new_pdf)",
"_____no_output_____"
],
[
"new_pdf = plot_diagnostic_climo_periods(ds_hist, \n ds_future, \n 'ssp370', \n years, 'tasmax', 'mean', 'downscaled', units, pdf_location, vmin=280, vmax=320)\npdf_list.append(new_pdf)",
"_____no_output_____"
]
],
[
[
"GMST",
"_____no_output_____"
]
],
[
[
"store_hist_cmip6 = fs_az.get_mapper(data_dict['coarse']['cmip6']['historical'], check=False)\nds_hist_cmip6 = xr.open_zarr(store_hist_cmip6)\nstore_future_cmip6 = fs_az.get_mapper(data_dict['coarse']['cmip6']['ssp370'], check=False)\nds_future_cmip6 = xr.open_zarr(store_future_cmip6)\n\nstore_hist_bc = fs_az.get_mapper(data_dict['coarse']['bias_corrected']['historical'], check=False)\nds_hist_bc = xr.open_zarr(store_hist_bc)\nstore_future_bc = fs_az.get_mapper(data_dict['coarse']['bias_corrected']['ssp370'], check=False)\nds_future_bc = xr.open_zarr(store_future_bc)",
"_____no_output_____"
],
[
"gmst_pdf = plot_gmst_diagnostic(ds_hist_cmip6, \n ds_future_cmip6, \n ds_hist_bc, \n ds_future_bc, \n pdf_location, \n variable='tasmax', ssp='370', ds_hist_downscaled=None, ds_fut_downscaled=None)",
"_____no_output_____"
]
],
[
[
"Basic validation of zarr stores (general and variable-specific) ",
"_____no_output_____"
]
],
[
[
"test_dataset_allvars(ds_future_bc, 'tasmax', 'bias_corrected', time_period=\"future\")",
"_____no_output_____"
],
[
"test_dataset_allvars(ds_future_cmip6, 'tasmax', 'cmip6', time_period=\"future\")",
"_____no_output_____"
],
[
"test_dataset_allvars(ds_hist_cmip6, 'tasmax', 'cmip6', time_period=\"hist\")",
"_____no_output_____"
],
[
"test_temp_range(ds_hist_cmip6, 'tasmax')",
"_____no_output_____"
]
],
[
[
"create difference plots bw bias corrected and downscaled as well as historical/future bias corrected and downscaled",
"_____no_output_____"
]
],
[
[
"store_hist_bc = fs_az.get_mapper(data_dict['fine']['bias_corrected']['historical'], check=False)\nds_hist_bc = xr.open_zarr(store_hist_bc)\nstore_hist_ds = fs_az.get_mapper(data_dict['fine']['downscaled']['historical'], check=False)\nds_hist_ds = xr.open_zarr(store_hist_ds)\n\nstore_fut_bc = fs_az.get_mapper(data_dict['fine']['bias_corrected']['ssp370'], check=False)\nds_fut_bc = xr.open_zarr(store_fut_bc)\nstore_fut_ds = fs_az.get_mapper(data_dict['fine']['downscaled']['ssp370'], check=False)\nds_fut_ds = xr.open_zarr(store_fut_ds)",
"_____no_output_____"
],
[
"time_period = '2080_2100'\nda_hist_bc = ds_hist_bc[variable].sel(time=slice(years['hist']['start_yr'], years['hist']['end_yr'])).mean('time').load()\nda_hist_ds = ds_hist_ds[variable].sel(time=slice(years['hist']['start_yr'], years['hist']['end_yr'])).mean('time').load()\n\nda_future_bc = ds_fut_bc[variable].sel(time=slice(years[time_period]['start_yr'], years[time_period]['end_yr'])).mean('time').load()\nda_future_ds = ds_fut_ds[variable].sel(time=slice(years[time_period]['start_yr'], years[time_period]['end_yr'])).mean('time').load()",
"_____no_output_____"
],
[
"pdf_fname = plot_bias_correction_downscale_differences(da_hist_bc, da_hist_ds, da_future_bc, \n da_future_ds, 'downscaled_minus_biascorrected', 'downscaled', pdf_location,'tasmax',\n ssp='ssp370', time_period='2080_2100')",
"_____no_output_____"
]
],
[
[
"merge validation pdfs created so far ",
"_____no_output_____"
]
],
[
[
"# ! pip install PyPDF2",
"_____no_output_____"
],
[
"# ! ls -lh /home/jovyan/*.pdf",
"_____no_output_____"
],
[
"pdf_list = ['/home/jovyan/global_mean_tasmax_370.pdf', \n '/home/jovyan/tasmax_max_bias_corrected.pdf', \n '/home/jovyan/tasmax_max_cmip6.pdf', \n '/home/jovyan/tasmax_max_downscaled.pdf', \n '/home/jovyan/tasmax_mean_bias_corrected.pdf',\n '/home/jovyan/tasmax_mean_cmip6.pdf',\n '/home/jovyan/tasmax_mean_downscaled.pdf']",
"_____no_output_____"
],
[
"merge_validation_pdfs(pdf_list, '/home/jovyan/test_validation.pdf')",
"_____no_output_____"
]
],
[
[
"Days over 95 degrees F/extreme precip metrics will be added later. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e76d7d8da6409c574fd507e03cb02ea153ef085e | 33,308 | ipynb | Jupyter Notebook | class19_MCODE_python3_template.ipynb | curiositymap/Networks-in-Computational-Biology | c7734cf2c03c7a794ab6990d433b1614c1837b58 | [
"Apache-2.0"
] | 11 | 2020-09-17T14:59:30.000Z | 2022-03-29T16:35:39.000Z | class19_MCODE_python3_template.ipynb | curiositymap/Networks-in-Computational-Biology | c7734cf2c03c7a794ab6990d433b1614c1837b58 | [
"Apache-2.0"
] | null | null | null | class19_MCODE_python3_template.ipynb | curiositymap/Networks-in-Computational-Biology | c7734cf2c03c7a794ab6990d433b1614c1837b58 | [
"Apache-2.0"
] | 5 | 2020-03-12T19:21:56.000Z | 2022-03-28T08:23:58.000Z | 82.039409 | 1,880 | 0.643809 | [
[
[
"# CSX46 - Class 19 - MCODE\n\nIn this notebook, we will analyze a simple graph (`test.dot`) and then the Krogran network using the MCODE community detection algorithm.",
"_____no_output_____"
]
],
[
[
"import pygraphviz\nimport igraph\nimport numpy\nimport pandas\nimport sys\nfrom collections import defaultdict",
"_____no_output_____"
],
[
"test_graph = FILL IN HERE\nnodes = test_graph.nodes()\nedges = FILL IN HERE\ntest_igraph = FILL IN HERE\ntest_igraph.summary()",
"_____no_output_____"
],
[
"igraph.drawing.plot(FILL IN HERE)",
"_____no_output_____"
]
],
[
[
"Function `mcode` takes a graph adjacency list `adj_list` and a float parameter `vwp` (vertex weight probability), and returns a list of cluster assignments (of length equal to the number of clusters). Original code from True Price at UNC Chapel Hill [link to original code](https://github.com/trueprice/python-graph-clustering/blob/master/src/mcode.py).",
"_____no_output_____"
]
],
[
[
"def mcode(adj_list, vwp):\n \n # Stage 1: Vertex Weighting\n N = len(adj_list)\n edges = [[]]*N\n weights = dict((v, 1.) for v in range(0,N))\n\n edges=defaultdict(set)\n for i in range(0,N):\n edges[i] = # MAKE A SET FROM adj_list[i]\n \n res_clusters = []\n\n for i,v in enumerate(edges):\n neighborhood = # union of set((v,)) and edges[v]\n # if node has only one neighbor, we know everything we need to know\n if len(neighborhood) <= 2: continue\n\n k = 1 \n while neighborhood:\n k_core = # copy neighborhood object\n invalid_nodes = True\n while invalid_nodes and neighborhood:\n invalid_nodes = set(\n n for n in neighborhood if len(edges[n] & neighborhood) <= k)\n # remove invalid_nodes from neighborhood\n #increment k by one\n # vertex weight = k-core number * density of k-core\n weights[v] = (k-1) * (sum(len(edges[n] & k_core) for n in k_core) / \n (2. * len(k_core)**2))\n\n # Stage 2: Molecular Complex Prediction\n unvisited = set(edges)\n num_clusters = 0\n for seed in sorted(weights, key=weights.get, reverse=True):\n if seed not in unvisited: continue\n\n cluster, frontier = set((seed,)), set((seed,))\n w = weights[seed] * vwp\n while frontier:\n cluster.update(frontier)\n # remove frontier from unvisited\n frontier_plus_neighbors = set.union(*(edges[n] for n in frontier))\n frontier = set( \n n for n in frontier_plus_neighbors & unvisited if weights[n] > w)\n\n # haircut: only keep 2-core complexes\n invalid_nodes = True\n while invalid_nodes and cluster:\n invalid_nodes = set(n for n in cluster if len(edges[n] & cluster) < 2)\n # remove invalid_nodes from cluster\n\n if cluster:\n # make a list from `cluster` and add that list to `res_clusters`\n num_clusters += 1\n\n return(res_clusters)",
"_____no_output_____"
]
],
[
[
"Run mcode on the adjacency list for your toy graph, with vwp=0.8. How many clusters did it find? Do the cluster memberships make sense?",
"_____no_output_____"
],
[
"Load the Krogan et al. network edge-list data as a Pandas data frame",
"_____no_output_____"
]
],
[
[
"edge_list = pandas.read_csv(\"shared/krogan.sif\",\n sep=\"\\t\", \n names=[\"protein1\",\"protein2\"])",
"_____no_output_____"
]
],
[
[
"Make an igraph graph and print its summary",
"_____no_output_____"
]
],
[
[
"krogan_graph = FILL IN HERE\nkrogan_graph.summary()",
"_____no_output_____"
]
],
[
[
"Run mcode on your graph with vwp=0.1",
"_____no_output_____"
]
],
[
[
"res = FILL IN HERE",
"_____no_output_____"
]
],
[
[
"Get the cluster sizes",
"_____no_output_____"
]
],
[
[
"FILL IN HERE",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e76d8f4207ec239eb52bc36d51a8e74af71a2c23 | 8,458 | ipynb | Jupyter Notebook | minilabs/test-hypothesis-by-simulating-statistics/m7_l1.ipynb | garath/inferentialthinking | fd27c1aa0b813de273160cbe682e28cc9da0dae3 | [
"MIT"
] | null | null | null | minilabs/test-hypothesis-by-simulating-statistics/m7_l1.ipynb | garath/inferentialthinking | fd27c1aa0b813de273160cbe682e28cc9da0dae3 | [
"MIT"
] | null | null | null | minilabs/test-hypothesis-by-simulating-statistics/m7_l1.ipynb | garath/inferentialthinking | fd27c1aa0b813de273160cbe682e28cc9da0dae3 | [
"MIT"
] | null | null | null | 36.456897 | 495 | 0.640577 | [
[
[
"# Test Hypothesis by Simulating Statistics\n## Mini-Lab 1: Hypothesis Testing",
"_____no_output_____"
],
[
"Welcome to your next mini-lab! Go ahead an run the following cell to get started. You can do that by clicking on the cell and then clickcing `Run` on the top bar. You can also just press `Shift` + `Enter` to run the cell.",
"_____no_output_____"
]
],
[
[
"from datascience import *\nimport numpy as np\nimport otter\n\nimport matplotlib\n%matplotlib inline\nimport matplotlib.pyplot as plots\nplots.style.use('fivethirtyeight')\n\ngrader = otter.Notebook(\"m7_l1_tests\")",
"_____no_output_____"
]
],
[
[
"In the previous two labs we've analyzed some data regarding COVID-19 test cases. Let's continue to analyze this data, specifically _claims_ about this data. Once again, we'll be be using ficitious statistics from Blockeley University.\n\nLet's say that Blockeley data science faculty are looking at the spread of COVID-19 across the realm of Minecraft. We have very specific data about Blockeley and the rest of Cubefornia but other realms' data isn't as clear cut or detailed. Let's say that a neighboring village has been reporting a COVID-19 infection rate of 26%. Should we trust these numbers?\n\nRegardless of whether or not you believe these claims, the job of a data scientist is to definitively substantiate or disprove such claims with data. You have access to the test results of similar sized village nearby and come up with the brilliant idea of running a hypothesis test with this data. Let's go ahead and load it! Run te cell below to import this data. If you want to explore this data further, go ahead and group by both columns! An empty cell is provided for you to do this.",
"_____no_output_____"
]
],
[
[
"test_results = Table.read_table(\"../datasets/covid19_village_tests.csv\")\ntest_results.show(5)",
"_____no_output_____"
],
[
"...",
"_____no_output_____"
]
],
[
[
"From here we can formulate our **Null Hypothesis** and **Alternate Hypothesis** Our *null hypothesis* is that this village truly has a 26% infection rate amongst the populations. Our *alternate hypothesis* is that this village does not in actuality have a 26% infection rate - it's way too low. Now we need our test statistic. Since we're looking at the infection rate inthe population, our test statistic should be:\n\n$$\\text{Test Statistic} = \\frac{\\text{Number of Positive Cases}}{\\text{Total Number of Cases}}$$\n\nWe've started the function declaration for you. Go ahead and complete `percent_positive` to calculate this test statistic.\n\n*Note*: Check out `np.count_nonzero` and built-in `len` function! These should be helpful for you.",
"_____no_output_____"
]
],
[
[
"def proportion_positive(test_results):\n numerator = ...\n denominator = ...\n return numerator / denominator",
"_____no_output_____"
],
[
"grader.check(\"q1\")",
"_____no_output_____"
]
],
[
[
"If you grouped by `Village Number` before, you would realize that there are roughly 3000 tests per village. Let's now create functions that will randomly take 3000 tests from the `test_results` table and to apply our test statistic. Complete the `sample_population` and `apply_statistic` functions below!\n\nThe `sample_population` function will take a `population_table` that is a table with all the data we want and will return a new table that has been sampled from this `population_table`. Please note that `with_replacement` should be `False`.\n\nThe `apply_statistic` function will take in a `sample_table` which is the table full of samples taken from a population table, a `column_name` which is the name of the column containing the data of interest, and a `statistic_function` which will be the test statistic that we will use. This function will return the result of using the `statistic_function` on the `sample_table`.",
"_____no_output_____"
]
],
[
[
"def sample_population(population_table):\n sampled_population = ...\n return sampled_population\n\n\ndef apply_statistic(sample_table, column_name, statistic_function):\n return statistic_function(...)",
"_____no_output_____"
],
[
"grader.check(\"q2\")",
"_____no_output_____"
]
],
[
[
"Now for the simulation portion! Complete the for loop below and fill in a reasonable number for the `iterations` variable. The `iterations` variable will determine just how many random samples that we will take in order to test our hypotheses. There is also code that will visualize your simulation and give you data regarding your simulation vs. the null hypothesis.",
"_____no_output_____"
]
],
[
[
"# Simulation code below. Fill out this portion!\n\niterations = ...\nsimulations = make_array()\n\nfor iteration in np.arange(iterations):\n sample_table = ...\n test_statistic = ...\n simulations = np.append(simulations, test_statistic)\n \n\n# This code is to tell you what percentage of our simulations are at or below the null hypothesis\n# There's no need to fill anything out but it is good to understand what's going on!\n\nnull_hypothesis = 0.26\nnum_below = np.count_nonzero(simulations <= null_hypothesis) / iterations\nprint(f\"Out of the {iterations} simulations, roughly {round(num_below * 100, 2)}% of test statistics \" +\n f\"are less than our null hypothesis of a {null_hypothesis * 100}% infection rate.\")\n\n\n# This code is to graph your simulation data and where our null hypothesis lies\n# There's no need to fill anything out but it is good to understand what's going on!\n\n\nsimulation_table = Table().with_column(\"Simulated Test Statistics\", simulations)\nsimulation_table.hist(bins=20)\nplots.scatter(null_hypothesis, 0, color='red', s=30);",
"_____no_output_____"
],
[
"grader.check(\"q3\")",
"_____no_output_____"
]
],
[
[
"Given our hypothesis test, what can you conclude about the village that reports having a 26% COVID-19 infection rate? Has your hypothesis changed before? Do you now trust or distrust these numbers? And if you do distrust these numbers, what do you think went wrong in the reporting?",
"_____no_output_____"
],
[
"Congratulations on finishing! Run the next cell to make sure that you passed all of the test cases.",
"_____no_output_____"
]
],
[
[
"grader.check_all()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
e76db8ca64afbc84ecff84c507fdb167a767600f | 145,407 | ipynb | Jupyter Notebook | Model-based-OC-shooting.ipynb | mgb45/OC-notebooks | 67b1899d1fb3455ab3caab58f94429b9f432164b | [
"MIT"
] | 1 | 2021-05-03T14:47:27.000Z | 2021-05-03T14:47:27.000Z | Model-based-OC-shooting.ipynb | mgb45/OC-notebooks | 67b1899d1fb3455ab3caab58f94429b9f432164b | [
"MIT"
] | null | null | null | Model-based-OC-shooting.ipynb | mgb45/OC-notebooks | 67b1899d1fb3455ab3caab58f94429b9f432164b | [
"MIT"
] | null | null | null | 496.269625 | 42,400 | 0.941392 | [
[
[
"import numpy as np\nfrom matplotlib import pyplot as plt\nfrom IPython import display\n\nimport torch\nfrom torch.utils.data import Dataset, DataLoader\n\nfrom data import H5Dataset\nfrom models import FCN\nfrom oc import OptControl\nfrom torch_pendulum import Pendulum",
"_____no_output_____"
]
],
[
[
"Given running cost $g(x_t,u_t)$ and terminal cost $h(x_T)$ the finite horizon $(t=0 \\ldots T)$ optimal control problem seeks to find the optimal control, \n$$u^*_{1:T} = \\text{argmin}_{u_{1:T}} L(x_{1:T},u_{1:T})$$ \n$$u^*_{1:T} = \\text{argmin}_{u_{1:T}} h(x_T) + \\sum_{t=0}^T g(x_t,u_t)$$\nsubject to the dynamics constraint: $x_{t+1} = f(x_t,u_t)$.\n\nThis notebook provides a dirty, brute forcing solution to problems of this form, using the inverted pendulum as an example, and assuming dynamics are not know a-priori. First, we gather state, actions, next state pairs, and use these to train a surrogate neural network dynamics model, $x_{t+1} \\sim \\hat{f}(x_t,u_t)$, approximating the true dynamics $f$.\n\nWe'll then set up a shooting-based trajectory optimisation problem, rolling out using the surrgoate dynamics $\\hat{f}$ for a sequence of controls $u^*_{1:T}$, evaluate the cost, then take gradient steps to minimise this, adjusting the values of the control. We'll use pytorch and Adam to accomplish this. We'll do this in a continuous control setting, but note that this is a practically infeasible control strategy, because solving this sort of optimisation online (and with any convergence guarantees) within the bandwidth of an inverted pendulum is a stretch. ",
"_____no_output_____"
]
],
[
[
"# NN parameters\nNsamples = 10000\nepochs = 500\n\nlatent_dim = 1024\nbatch_size = 8\nlr = 3e-4\n\n# Torch environment wrapping gym pendulum\ntorch_env = Pendulum()\n\n# Test parameters\nNsteps = 100",
"_____no_output_____"
],
[
"# Set up model (fully connected neural network)\n\nmodel = FCN(latent_dim=latent_dim,d=torch_env.d,ud=torch_env.ud)\noptimizer = torch.optim.Adam(model.parameters(), lr=lr)",
"_____no_output_____"
],
[
"# Load previously trained model\nmodel.load_state_dict(torch.load('./fcn.npy'))",
"_____no_output_____"
],
[
"# Or gather some training data\nstates_, actions, states = torch_env.get_data(Nsamples)\n\ndset = H5Dataset(np.array(states_),np.array(actions),np.array(states))\nsampler = DataLoader(dset, batch_size=batch_size, shuffle=True)",
"_____no_output_____"
],
[
"# and train model\n\nlosses = []\nfor epoch in range(epochs):\n \n batch_losses = []\n for states_,actions,states in sampler:\n \n recon_x = model(states_,actions)\n loss = model.loss_fn(recon_x,states)\n\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n \n batch_losses.append(loss.item())\n \n losses.append(np.mean(batch_losses))\n plt.cla()\n plt.semilogy(losses)\n \n display.clear_output(wait=True)\n display.display(plt.gcf())",
"_____no_output_____"
],
[
"torch.save(model.state_dict(),'./fcn.npy')",
"_____no_output_____"
],
[
"# Test model rollouts - looks reasonable\n\nstates = []\n_states = []\n\ns = torch_env.env.reset()\nstates.append(s)\n_states.append(s.copy())\nfor i in range(30):\n a = torch_env.env.action_space.sample()\n s,r,_,_ = torch_env.env.step(a) # take a random action\n states.append(s)\n \n # roll-out with model\n _s = model(torch.from_numpy(_states[-1]).float().reshape(1,-1),torch.from_numpy(a).float().reshape(1,-1))\n _states.append(_s.detach().numpy())\n \n plt.cla()\n plt.plot(np.array(states),'--')\n plt.plot(np.vstack(_states))\n \n display.clear_output(wait=True)\n display.display(plt.gcf())",
"_____no_output_____"
],
[
"# Set up optimal controller\ncontroller = OptControl(model.dynamics, torch_env.running_cost, torch_env.term_cost, u_dim=torch_env.ud, umax=torch_env.umax, horizon=30,lr=1e-1)\n\n# Uncomment to use true dynamics\n# controller = OptControl(torch_env.dynamics, torch_env.running_cost, torch_env.term_cost, u_dim=torch_env.ud, umax=torch_env.umax, horizon=30,lr=1e-1)\n\n# Test controller\nplt.figure(figsize=(15,5))\ns = torch_env.env.reset()\nfor i in range(Nsteps):\n \n u,states,cost,costs = controller.minimize(torch.from_numpy(s).reshape(1,-1).float(),Nsteps=5) #OC\n \n s,r,_,_ = torch_env.env.step(u[:,0].detach().numpy()) # take a random action\n \n torch_env.env.render()\n \n plt.clf()\n plt.subplot(1,3,1)\n plt.plot(u.detach().numpy().T,'--')\n plt.ylim(-2,2)\n plt.ylabel('Controls')\n plt.subplot(1,3,2)\n plt.plot(np.squeeze(torch.stack(states).detach().numpy()))\n plt.legend({'Tip x','Tip y','Velocity'})\n plt.ylim(-8,8)\n plt.subplot(1,3,3)\n plt.plot(np.squeeze(torch.stack(costs).detach().numpy()))\n plt.ylabel('Cost')\n plt.ylim(0,15)\n \n display.clear_output(wait=True)\n display.display(plt.gcf())",
"_____no_output_____"
],
[
"torch_env.env.close()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76dbde9394349e01f83ccc3e8a4c0e4b4692c60 | 33,338 | ipynb | Jupyter Notebook | task11_kaggle/lstm_baseline.ipynb | yupopov/stepik-dl-nlp | ff93743e4948fe5b5a2fbd2a57e6ce26f2da7fa6 | [
"MIT"
] | 120 | 2019-10-29T20:55:45.000Z | 2022-03-30T05:47:29.000Z | task11_kaggle/lstm_baseline.ipynb | AlexKay28/stepik-dl-nlp | 6ca8058863ccfbc087af85a994e80ffec3f8adaa | [
"MIT"
] | 2 | 2019-11-20T15:00:15.000Z | 2019-11-20T15:34:05.000Z | task11_kaggle/lstm_baseline.ipynb | AlexKay28/stepik-dl-nlp | 6ca8058863ccfbc087af85a994e80ffec3f8adaa | [
"MIT"
] | 122 | 2019-10-24T08:36:25.000Z | 2022-03-09T12:39:56.000Z | 31.098881 | 140 | 0.509539 | [
[
[
"# Генерация заголовков научных статей: слабый baseline",
"_____no_output_____"
],
[
"Источник: https://github.com/bentrevett/pytorch-seq2seq",
"_____no_output_____"
]
],
[
[
"# Если Вы запускаете ноутбук на colab,\n# выполните следующие строчки, чтобы подгрузить библиотеку dlnlputils:\n\n# !git clone https://github.com/Samsung-IT-Academy/stepik-dl-nlp.git\n# import sys; sys.path.append('/content/stepik-dl-nlp')",
"_____no_output_____"
],
[
"import torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport torch.nn.functional as F\nfrom torchtext.data import Field, BucketIterator\n\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as ticker\n\nimport spacy\n\nimport random\nimport math\nimport time",
"_____no_output_____"
],
[
"SEED = 1234\n\nrandom.seed(SEED)\ntorch.manual_seed(SEED)\ntorch.backends.cudnn.deterministic = True",
"_____no_output_____"
],
[
"# возможно, Вам потребуется предварительно загрузить модели SpaCy для английского языка\n# !python -m spacy download en\n\nspacy_en = spacy.load('en')",
"_____no_output_____"
],
[
"def tokenize(text):\n \"\"\"\n Tokenizes English text from a string into a list of strings (tokens)\n \"\"\"\n return [tok.text for tok in spacy_en.tokenizer(text) if not tok.text.isspace()]",
"_____no_output_____"
],
[
"from torchtext import data, vocab\n\ntokenizer = data.get_tokenizer('spacy')\nTEXT = Field(tokenize=tokenize,\n init_token = '<sos>', \n eos_token = '<eos>', \n include_lengths = True,\n lower = True)\n\n",
"_____no_output_____"
],
[
"%%time\ntrn_data_fields = [(\"src\", TEXT),\n (\"trg\", TEXT)]\n\ndataset = data.TabularDataset(\n path='datasets/train.csv',\n format='csv',\n skip_header=True,\n fields=trn_data_fields\n)\n\ntrain_data, valid_data, test_data = dataset.split(split_ratio=[0.98, 0.01, 0.01])",
"_____no_output_____"
],
[
"TEXT.build_vocab(train_data, min_freq = 7)\nprint(f\"Unique tokens in vocabulary: {len(TEXT.vocab)}\")",
"_____no_output_____"
],
[
"BATCH_SIZE = 32\n\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n\ntrain_iterator, valid_iterator, test_iterator = BucketIterator.splits(\n (train_data, valid_data, test_data), \n batch_size = BATCH_SIZE,\n sort_within_batch = True,\n sort_key = lambda x : len(x.src),\n device = device)",
"_____no_output_____"
],
[
"class Encoder(nn.Module):\n def __init__(self, input_dim, emb_dim, enc_hid_dim, dec_hid_dim, dropout):\n super().__init__()\n \n self.embedding = nn.Embedding(input_dim, emb_dim)\n \n self.rnn = nn.GRU(emb_dim, enc_hid_dim, bidirectional = True)\n \n self.fc = nn.Linear(enc_hid_dim * 2, dec_hid_dim)\n \n self.dropout = nn.Dropout(dropout)\n \n def forward(self, src, src_len):\n \n #src = [src sent len, batch size]\n #src_len = [src sent len]\n \n embedded = self.dropout(self.embedding(src))\n \n #embedded = [src sent len, batch size, emb dim]\n \n packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, src_len)\n \n packed_outputs, hidden = self.rnn(packed_embedded)\n \n #packed_outputs is a packed sequence containing all hidden states\n #hidden is now from the final non-padded element in the batch\n \n outputs, _ = nn.utils.rnn.pad_packed_sequence(packed_outputs) \n \n #outputs is now a non-packed sequence, all hidden states obtained\n # when the input is a pad token are all zeros\n \n #outputs = [sent len, batch size, hid dim * num directions]\n #hidden = [n layers * num directions, batch size, hid dim]\n \n #hidden is stacked [forward_1, backward_1, forward_2, backward_2, ...]\n #outputs are always from the last layer\n \n #hidden [-2, :, : ] is the last of the forwards RNN \n #hidden [-1, :, : ] is the last of the backwards RNN\n \n #initial decoder hidden is final hidden state of the forwards and backwards \n # encoder RNNs fed through a linear layer\n hidden = torch.tanh(self.fc(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1)))\n \n #outputs = [sent len, batch size, enc hid dim * 2]\n #hidden = [batch size, dec hid dim]\n \n return outputs, hidden",
"_____no_output_____"
],
[
"class Attention(nn.Module):\n def __init__(self, enc_hid_dim, dec_hid_dim):\n super().__init__()\n \n self.attn = nn.Linear((enc_hid_dim * 2) + dec_hid_dim, dec_hid_dim)\n self.v = nn.Parameter(torch.rand(dec_hid_dim))\n \n def forward(self, hidden, encoder_outputs, mask):\n \n #hidden = [batch size, dec hid dim]\n #encoder_outputs = [src sent len, batch size, enc hid dim * 2]\n #mask = [batch size, src sent len]\n \n batch_size = encoder_outputs.shape[1]\n src_len = encoder_outputs.shape[0]\n \n #repeat encoder hidden state src_len times\n hidden = hidden.unsqueeze(1).repeat(1, src_len, 1)\n \n encoder_outputs = encoder_outputs.permute(1, 0, 2)\n \n #hidden = [batch size, src sent len, dec hid dim]\n #encoder_outputs = [batch size, src sent len, enc hid dim * 2]\n \n energy = torch.tanh(self.attn(torch.cat((hidden, encoder_outputs), dim = 2))) \n \n #energy = [batch size, src sent len, dec hid dim]\n \n energy = energy.permute(0, 2, 1)\n \n #energy = [batch size, dec hid dim, src sent len]\n \n #v = [dec hid dim]\n \n v = self.v.repeat(batch_size, 1).unsqueeze(1)\n \n #v = [batch size, 1, dec hid dim]\n \n attention = torch.bmm(v, energy).squeeze(1)\n \n #attention = [batch size, src sent len]\n \n attention = attention.masked_fill(mask == 0, -1e10)\n \n return F.softmax(attention, dim = 1)",
"_____no_output_____"
],
[
"class Decoder(nn.Module):\n def __init__(self, output_dim, emb_dim, enc_hid_dim, dec_hid_dim, dropout, attention):\n super().__init__()\n\n self.output_dim = output_dim\n self.attention = attention\n \n self.embedding = nn.Embedding(output_dim, emb_dim)\n \n self.rnn = nn.GRU((enc_hid_dim * 2) + emb_dim, dec_hid_dim)\n \n self.out = nn.Linear((enc_hid_dim * 2) + dec_hid_dim + emb_dim, output_dim)\n \n self.dropout = nn.Dropout(dropout)\n \n def forward(self, input, hidden, encoder_outputs, mask):\n \n #input = [batch size]\n #hidden = [batch size, dec hid dim]\n #encoder_outputs = [src sent len, batch size, enc hid dim * 2]\n #mask = [batch size, src sent len]\n \n input = input.unsqueeze(0)\n \n #input = [1, batch size]\n \n embedded = self.dropout(self.embedding(input))\n \n #embedded = [1, batch size, emb dim]\n \n a = self.attention(hidden, encoder_outputs, mask)\n \n #a = [batch size, src sent len]\n \n a = a.unsqueeze(1)\n \n #a = [batch size, 1, src sent len]\n \n encoder_outputs = encoder_outputs.permute(1, 0, 2)\n \n #encoder_outputs = [batch size, src sent len, enc hid dim * 2]\n \n weighted = torch.bmm(a, encoder_outputs)\n \n #weighted = [batch size, 1, enc hid dim * 2]\n \n weighted = weighted.permute(1, 0, 2)\n \n #weighted = [1, batch size, enc hid dim * 2]\n \n rnn_input = torch.cat((embedded, weighted), dim = 2)\n \n #rnn_input = [1, batch size, (enc hid dim * 2) + emb dim]\n \n output, hidden = self.rnn(rnn_input, hidden.unsqueeze(0))\n \n #output = [sent len, batch size, dec hid dim * n directions]\n #hidden = [n layers * n directions, batch size, dec hid dim]\n \n #sent len, n layers and n directions will always be 1 in this decoder, therefore:\n #output = [1, batch size, dec hid dim]\n #hidden = [1, batch size, dec hid dim]\n #this also means that output == hidden\n assert (output == hidden).all()\n \n embedded = embedded.squeeze(0)\n output = output.squeeze(0)\n weighted = weighted.squeeze(0)\n \n output = self.out(torch.cat((output, weighted, embedded), dim = 1))\n \n #output = [bsz, output dim]\n \n return output, hidden.squeeze(0), a.squeeze(1)",
"_____no_output_____"
],
[
"class Seq2Seq(nn.Module):\n def __init__(self, encoder, decoder, pad_idx, sos_idx, eos_idx, device):\n super().__init__()\n \n self.encoder = encoder\n self.decoder = decoder\n self.pad_idx = pad_idx\n self.sos_idx = sos_idx\n self.eos_idx = eos_idx\n self.device = device\n \n def create_mask(self, src):\n mask = (src != self.pad_idx).permute(1, 0)\n return mask\n \n def forward(self, src, src_len, trg, teacher_forcing_ratio = 0.5):\n \n #src = [src sent len, batch size]\n #src_len = [batch size]\n #trg = [trg sent len, batch size]\n #teacher_forcing_ratio is probability to use teacher forcing\n #e.g. if teacher_forcing_ratio is 0.75 we use teacher forcing 75% of the time\n \n if trg is None:\n assert teacher_forcing_ratio == 0, \"Must be zero during inference\"\n inference = True\n trg = torch.zeros((100, src.shape[1])).long().fill_(self.sos_idx).to(src.device)\n else:\n inference = False\n \n batch_size = src.shape[1]\n max_len = trg.shape[0]\n trg_vocab_size = self.decoder.output_dim\n \n #tensor to store decoder outputs\n outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device)\n \n #tensor to store attention\n attentions = torch.zeros(max_len, batch_size, src.shape[0]).to(self.device)\n \n #encoder_outputs is all hidden states of the input sequence, back and forwards\n #hidden is the final forward and backward hidden states, passed through a linear layer\n encoder_outputs, hidden = self.encoder(src, src_len)\n \n #first input to the decoder is the <sos> tokens\n input = trg[0,:]\n \n mask = self.create_mask(src)\n \n #mask = [batch size, src sent len]\n \n for t in range(1, max_len):\n \n #insert input token embedding, previous hidden state, all encoder hidden states \n # and mask\n #receive output tensor (predictions), new hidden state and attention tensor\n output, hidden, attention = self.decoder(input, hidden, encoder_outputs, mask)\n \n #place predictions in a tensor holding predictions for each token\n outputs[t] = output\n \n #place attentions in a tensor holding attention value for each input token\n attentions[t] = attention\n \n #decide if we are going to use teacher forcing or not\n teacher_force = random.random() < teacher_forcing_ratio\n \n #get the highest predicted token from our predictions\n top1 = output.argmax(1) \n \n #if teacher forcing, use actual next token as next input\n #if not, use predicted token\n input = trg[t] if teacher_force else top1\n \n #if doing inference and next token/prediction is an eos token then stop\n if inference and input.item() == self.eos_idx:\n return outputs[:t], attentions[:t]\n \n return outputs, attentions",
"_____no_output_____"
],
[
"INPUT_DIM = len(TEXT.vocab)\nOUTPUT_DIM = len(TEXT.vocab)\nENC_EMB_DIM = 128\nDEC_EMB_DIM = 128\nENC_HID_DIM = 64\nDEC_HID_DIM = 64\nENC_DROPOUT = 0.8\nDEC_DROPOUT = 0.8\nPAD_IDX = TEXT.vocab.stoi['<pad>']\nSOS_IDX = TEXT.vocab.stoi['<sos>']\nEOS_IDX = TEXT.vocab.stoi['<eos>']\n\nattn = Attention(ENC_HID_DIM, DEC_HID_DIM)\nenc = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, ENC_DROPOUT)\ndec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, DEC_DROPOUT, attn)\n\nmodel = Seq2Seq(enc, dec, PAD_IDX, SOS_IDX, EOS_IDX, device).to(device)",
"_____no_output_____"
],
[
"def init_weights(m):\n for name, param in m.named_parameters():\n if 'weight' in name:\n nn.init.normal_(param.data, mean=0, std=0.01)\n else:\n nn.init.constant_(param.data, 0)\n \nmodel.apply(init_weights)",
"_____no_output_____"
],
[
"def count_parameters(model):\n return sum(p.numel() for p in model.parameters() if p.requires_grad)\n\nprint(f'The model has {count_parameters(model):,} trainable parameters')",
"_____no_output_____"
],
[
"optimizer = optim.Adam(model.parameters())",
"_____no_output_____"
],
[
"criterion = nn.CrossEntropyLoss(ignore_index = PAD_IDX)",
"_____no_output_____"
]
],
[
[
"### Обучение модели",
"_____no_output_____"
]
],
[
[
"import matplotlib\nmatplotlib.rcParams.update({'figure.figsize': (16, 12), 'font.size': 14})\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom IPython.display import clear_output\n\n\ndef train(model, iterator, optimizer, criterion, clip, train_history=None, valid_history=None):\n \n model.train()\n \n epoch_loss = 0\n history = []\n for i, batch in enumerate(iterator):\n \n src, src_len = batch.src\n trg, trg_len = batch.trg\n \n optimizer.zero_grad()\n \n output, attetion = model(src, src_len, trg)\n \n #trg = [trg sent len, batch size]\n #output = [trg sent len, batch size, output dim]\n \n output = output[1:].view(-1, output.shape[-1])\n trg = trg[1:].view(-1)\n \n #trg = [(trg sent len - 1) * batch size]\n #output = [(trg sent len - 1) * batch size, output dim]\n \n loss = criterion(output, trg)\n \n loss.backward()\n \n torch.nn.utils.clip_grad_norm_(model.parameters(), clip)\n \n optimizer.step()\n \n epoch_loss += loss.item()\n \n history.append(loss.cpu().data.numpy())\n if (i+1)%10==0:\n fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 8))\n\n clear_output(True)\n ax[0].plot(history, label='train loss')\n ax[0].set_xlabel('Batch')\n ax[0].set_title('Train loss')\n if train_history is not None:\n ax[1].plot(train_history, label='general train history')\n ax[1].set_xlabel('Epoch')\n if valid_history is not None:\n ax[1].plot(valid_history, label='general valid history')\n plt.legend()\n \n plt.show()\n \n return epoch_loss / len(iterator)",
"_____no_output_____"
],
[
"def evaluate(model, iterator, criterion):\n \n model.eval()\n \n epoch_loss = 0\n \n with torch.no_grad():\n \n for i, batch in enumerate(iterator):\n\n src, src_len = batch.src\n trg, trg_len = batch.trg\n\n output, attention = model(src, src_len, trg, 0) #turn off teacher forcing\n\n #trg = [trg sent len, batch size]\n #output = [trg sent len, batch size, output dim]\n\n output = output[1:].view(-1, output.shape[-1])\n trg = trg[1:].view(-1)\n\n #trg = [(trg sent len - 1) * batch size]\n #output = [(trg sent len - 1) * batch size, output dim]\n\n loss = criterion(output, trg)\n\n epoch_loss += loss.item()\n \n return epoch_loss / len(iterator)",
"_____no_output_____"
],
[
"def epoch_time(start_time, end_time):\n elapsed_time = end_time - start_time\n elapsed_mins = int(elapsed_time / 60)\n elapsed_secs = int(elapsed_time - (elapsed_mins * 60))\n return elapsed_mins, elapsed_secs",
"_____no_output_____"
],
[
"MODEL_NAME = 'models/lstm_baseline.pt'\nN_EPOCHS = 5\nCLIP = 1\n\ntrain_history = []\nvalid_history = []\n\nbest_valid_loss = float('inf')\n\nfor epoch in range(N_EPOCHS):\n \n start_time = time.time()\n \n train_loss = train(model, train_iterator, optimizer, criterion, CLIP, train_history, valid_history)\n valid_loss = evaluate(model, valid_iterator, criterion)\n \n end_time = time.time()\n \n epoch_mins, epoch_secs = epoch_time(start_time, end_time)\n \n if valid_loss < best_valid_loss:\n best_valid_loss = valid_loss\n torch.save(model.state_dict(), MODEL_NAME)\n \n \n train_history.append(train_loss)\n valid_history.append(valid_loss)\n \n print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')\n print(f'\\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')\n print(f'\\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')",
"_____no_output_____"
]
],
[
[
"Finally, we load the parameters from our best validation loss and get our results on the test set.",
"_____no_output_____"
]
],
[
[
"# for cpu usage\nmodel.load_state_dict(torch.load(MODEL_NAME, map_location=torch.device('cpu')))\n\n# for gpu usage\n# model.load_state_dict(torch.load(MODEL_NAME), map_location=torch.device('cpu'))\n\n\ntest_loss = evaluate(model, test_iterator, criterion)\n\nprint(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')",
"_____no_output_____"
]
],
[
[
"### Генерация заголовков",
"_____no_output_____"
]
],
[
[
"def translate_sentence(model, tokenized_sentence):\n model.eval()\n tokenized_sentence = ['<sos>'] + [t.lower() for t in tokenized_sentence] + ['<eos>']\n numericalized = [TEXT.vocab.stoi[t] for t in tokenized_sentence] \n sentence_length = torch.LongTensor([len(numericalized)]).to(device) \n tensor = torch.LongTensor(numericalized).unsqueeze(1).to(device) \n translation_tensor_logits, attention = model(tensor, sentence_length, None, 0) \n translation_tensor = torch.argmax(translation_tensor_logits.squeeze(1), 1)\n translation = [TEXT.vocab.itos[t] for t in translation_tensor]\n translation, attention = translation[1:], attention[1:]\n return translation, attention",
"_____no_output_____"
],
[
"def display_attention(sentence, translation, attention):\n \n fig = plt.figure(figsize=(30,50))\n ax = fig.add_subplot(111)\n \n attention = attention.squeeze(1).cpu().detach().numpy().T\n \n cax = ax.matshow(attention, cmap='bone')\n \n ax.tick_params(labelsize=12)\n ax.set_yticklabels(['']+['<sos>']+[t.lower() for t in sentence]+['<eos>'])\n ax.set_xticklabels(['']+translation, rotation=80)\n\n ax.xaxis.set_major_locator(ticker.MultipleLocator(1))\n ax.yaxis.set_major_locator(ticker.MultipleLocator(1))\n\n plt.show()\n plt.close()",
"_____no_output_____"
],
[
"example_idx = 100\n\nsrc = vars(train_data.examples[example_idx])['src']\ntrg = vars(train_data.examples[example_idx])['trg']\n\nprint(f'src = {src}')\nprint(f'trg = {trg}')",
"_____no_output_____"
],
[
"translation, attention = translate_sentence(model, src)\n\nprint(f'predicted trg = {translation}')",
"_____no_output_____"
],
[
"display_attention(src, translation, attention)",
"_____no_output_____"
],
[
"for example_idx in range(100):\n src = vars(test_data.examples[example_idx])['src']\n trg = vars(test_data.examples[example_idx])['trg']\n translation, attention = translate_sentence(model, src)\n\n print('Оригинальный заголовок: ', ' '.join(trg))\n print('Предсказанный заголовок: ', ' '.join(translation))\n print('-----------------------------------')",
"_____no_output_____"
],
[
"example_idx = 0\n\nsrc = vars(valid_data.examples[example_idx])['src']\ntrg = vars(valid_data.examples[example_idx])['trg']\n\nprint(f'src = {src}')\nprint(f'trg = {trg}')",
"_____no_output_____"
],
[
"translation, attention = translate_sentence(model, src)\n\nprint(f'predicted trg = {translation}')\n\ndisplay_attention(src, translation, attention)",
"_____no_output_____"
],
[
"example_idx = 510\n\nsrc = vars(test_data.examples[example_idx])['src']\ntrg = vars(test_data.examples[example_idx])['trg']\n\nprint(f'src = {src}')\nprint(f'trg = {trg}')",
"_____no_output_____"
],
[
"translation, attention = translate_sentence(model, src)\n\nprint(f'predicted trg = {translation}')\n\ndisplay_attention(src, translation, attention)",
"_____no_output_____"
]
],
[
[
"### Считаем BLEU на train.csv",
"_____no_output_____"
]
],
[
[
"import nltk\n\nn_gram_weights = [0.3334, 0.3333, 0.3333]",
"_____no_output_____"
],
[
"test_len = len(test_data)",
"_____no_output_____"
],
[
"original_texts = []\ngenerated_texts = []\nmacro_bleu = 0\n\nfor example_idx in range(test_len):\n src = vars(test_data.examples[example_idx])['src']\n trg = vars(test_data.examples[example_idx])['trg']\n translation, _ = translate_sentence(model, src)\n\n original_texts.append(trg)\n generated_texts.append(translation)\n\n bleu_score = nltk.translate.bleu_score.sentence_bleu(\n [trg],\n translation,\n weights = n_gram_weights\n ) \n macro_bleu += bleu_score\n\nmacro_bleu /= test_len",
"_____no_output_____"
],
[
"# averaging sentence-level BLEU (i.e. macro-average precision)\nprint('Macro-average BLEU (LSTM): {0:.5f}'.format(macro_bleu))",
"_____no_output_____"
]
],
[
[
"### Делаем submission в Kaggle",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nsubmission_data = pd.read_csv('datasets/test.csv')\nabstracts = submission_data['abstract'].values",
"_____no_output_____"
]
],
[
[
"Генерация заголовков для тестовых данных:",
"_____no_output_____"
]
],
[
[
"titles = []\nfor abstract in abstracts:\n title, _ = translate_sentence(model, abstract.split())\n titles.append(' '.join(title).replace('<unk>', ''))",
"_____no_output_____"
]
],
[
[
"Записываем полученные заголовки в файл формата `<abstract>,<title>`:",
"_____no_output_____"
]
],
[
[
"submission_df = pd.DataFrame({'abstract': abstracts, 'title': titles})\nsubmission_df.to_csv('datasets/predicted_titles.csv', index=False)",
"_____no_output_____"
]
],
[
[
"С помощью скрипта `generate_csv` приводим файл `submission_prediction.csv` в формат, необходимый для посылки в соревнование на Kaggle:",
"_____no_output_____"
]
],
[
[
"from create_submission import generate_csv\n\ngenerate_csv('datasets/predicted_titles.csv', 'datasets/kaggle_pred.csv', 'datasets/vocs.pkl')",
"_____no_output_____"
],
[
"!wc -l datasets/kaggle_pred.csv",
"_____no_output_____"
],
[
"!head datasets/kaggle_pred.csv",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e76dc44068791fd9b83c87e58aceee68a433e281 | 20,964 | ipynb | Jupyter Notebook | data/Untitled.ipynb | AI-confused/Tianchi_Similarity | 9c3e76b7ac19f07e948d68270b0b747de92a413f | [
"Apache-2.0"
] | 4 | 2020-04-02T13:14:30.000Z | 2021-07-05T05:57:11.000Z | data/Untitled.ipynb | AI-confused/Tianchi_Similarity | 9c3e76b7ac19f07e948d68270b0b747de92a413f | [
"Apache-2.0"
] | null | null | null | data/Untitled.ipynb | AI-confused/Tianchi_Similarity | 9c3e76b7ac19f07e948d68270b0b747de92a413f | [
"Apache-2.0"
] | null | null | null | 40.471042 | 108 | 0.436176 | [
[
[
"import pandas as pd\nimport numpy as np\ntrain = pd.read_csv('train.csv')\ndev = pd.read_csv('dev.csv')\n# print(train+dev)\ntrain = train.iloc[:,:].values\ndev = dev.iloc[:,:].values\n# print(train)\n# print(dev)\ntrain = np.concatenate((train, dev),axis=0)\nprint(len(train))\ntrain = pd.DataFrame(train)\nprint(train)\ntrain.to_csv('totol_data.csv', index=False,header=True)",
"10749\n 0 1 2 3 4\n0 0 咳血 剧烈运动后咯血,是怎么了? 剧烈运动后咯血是什么原因? 1\n1 1 咳血 剧烈运动后咯血,是怎么了? 剧烈运动后为什么会咯血? 1\n2 2 咳血 剧烈运动后咯血,是怎么了? 剧烈运动后咯血,应该怎么处理? 0\n3 3 咳血 剧烈运动后咯血,是怎么了? 剧烈运动后咯血,需要就医吗? 0\n4 4 咳血 剧烈运动后咯血,是怎么了? 剧烈运动后咯血,是否很严重? 0\n... ... .. ... ... ..\n10744 1997 哮喘 变应性哮喘就是过敏性哮喘吗? 变应性哮喘与过敏性哮喘一样吗? 1\n10745 1998 哮喘 变应性哮喘就是过敏性哮喘吗? 变应性哮喘是否就是过敏性哮喘? 1\n10746 1999 哮喘 变应性哮喘就是过敏性哮喘吗? 变应性哮喘的饮食禁忌有哪些? 0\n10747 2000 哮喘 变应性哮喘就是过敏性哮喘吗? 变应性哮喘怎么治疗? 0\n10748 2001 哮喘 变应性哮喘就是过敏性哮喘吗? 变应性哮喘能跑步吗? 0\n\n[10749 rows x 5 columns]\n"
],
[
"data = pd.read_csv('totol_data.csv')\nm,n=0,0\nfor i in data.index:\n if data.iloc[i,-1]==1:\n m+=1\n else:\n n+=1\nprint(m/n)",
"0.6670285359801489\n"
],
[
"data = pd.read_csv('total_data3.csv')\nimport os\nprint(data)\nfrom sklearn.model_selection import KFold\nkfold = KFold(n_splits=5, shuffle=True)\ni = 0\nos.system('cd fold5')\nfor train, test in kfold.split(data):\n print(\"%s %s\" % (train, test))\n print(len(train),len(test))\n train_data = data.loc[train]\n dev_data = data.loc[test]\n# print(train_data)\n try:\n os.system('mkdir data3_%d'%i)\n except:\n pass\n train_data.to_csv('data3_%d/train.csv'%i, index=False, header=True)\n dev_data.to_csv('data3_%d/dev.csv'%i, index=False, header=True)\n i += 1",
" id query1 query2 label\n0 7114 干咳无痰吃什么水果恢复的快 干咳无痰并伴有哮鸣音怎么回事 0\n1 1375 肺气肿病人怎么治疗的? 肺气肿病人最明显的表现有哪些呢? 0\n2 1946 肺炎会传染人吗 大叶性肺炎传染吗? 0\n3 2012 感冒肺炎,住院三天医生说出院需要注意饮食 感冒肺炎,住院三天医生说出院后不要去人群密集的地方 0\n4 2211 严重的慢性肺炎传染严重吗? 严重的慢性肺炎传染人吗? 0\n... ... ... ... ...\n16134 3005 哮喘能否吃洋参? 哮喘可不可以吃洋参? 1\n16135 3006 过敏性哮喘患者的注意事项是什么? 过敏性哮喘患者要注意的是什么? 1\n16136 3007 过敏性哮喘平时应注意什么? 患了过敏性哮喘平时需要注意哪些事? 1\n16137 3008 如何治疗过敏性哮喘? 过敏性哮喘要怎么治疗? 1\n16138 3009 变应性哮喘与过敏性哮喘一样吗? 变应性哮喘是否就是过敏性哮喘? 1\n\n[16139 rows x 4 columns]\n[ 2 4 7 ... 16136 16137 16138] [ 0 1 3 ... 16127 16129 16130]\n12911 3228\n[ 0 1 3 ... 16135 16136 16138] [ 2 7 8 ... 16122 16126 16137]\n12911 3228\n[ 0 1 2 ... 16134 16136 16137] [ 10 11 20 ... 16123 16135 16138]\n12911 3228\n[ 0 1 2 ... 16135 16137 16138] [ 9 37 55 ... 16120 16124 16136]\n12911 3228\n[ 0 1 2 ... 16136 16137 16138] [ 4 12 17 ... 16132 16133 16134]\n12912 3227\n"
],
[
"import pandas as pd\ndata = pd.read_csv('totol_data.csv')\ndata = data.drop(['category'],axis=1)\nsimilary = data[data['label']==1]\nunsimilary = data[data['label']==0]\nprint(similary)\nprint(unsimilary)\na = []\nfor i, inde in enumerate(similary.index[:-1]):\n# print(i,inde)\n# tmp = []\n if similary.loc[inde]['query1']==similary.loc[similary.index[i+1]]['query1']:\n tmp = [similary.loc[inde]['query2'], similary.loc[similary.index[i+1]]['query2']]\n a.append(tmp)\n try:\n if similary.loc[inde]['query1']==similary.loc[similary.index[i+2]]['query1']:\n tmp = [similary.loc[inde]['query2'], similary.loc[similary.index[i+2]]['query2']]\n a.append(tmp)\n except:\n print('index error')\nprint(len(a))\n\nb=[]\nfor i, inde in enumerate(unsimilary.index[:-1]):\n# print(i,inde)\n# tmp = []\n if unsimilary.loc[inde]['query1']==unsimilary.loc[unsimilary.index[i+1]]['query1']:\n tmp = [unsimilary.loc[inde]['query2'], unsimilary.loc[unsimilary.index[i+1]]['query2']]\n b.append(tmp)\n try:\n if unsimilary.loc[inde]['query1']==unsimilary.loc[unsimilary.index[i+2]]['query1']:\n tmp = [unsimilary.loc[inde]['query2'], unsimilary.loc[unsimilary.index[i+2]]['query2']]\n b.append(tmp)\n except:\n print('index error')\nprint(len(b))",
" id query1 query2 label\n0 0 剧烈运动后咯血,是怎么了? 剧烈运动后咯血是什么原因? 1\n1 1 剧烈运动后咯血,是怎么了? 剧烈运动后为什么会咯血? 1\n5 5 百令胶囊需要注意什么? 百令胶囊有什么注意事项? 1\n6 6 百令胶囊需要注意什么? 服用百令胶囊有什么需要特别注意的吗? 1\n10 10 肝癌兼肺癌晚期能活多久? 肝癌兼肺癌晚期还有多少寿命? 1\n... ... ... ... ...\n10735 1988 过敏性哮喘平时应注意哪些问题? 患了过敏性哮喘平时需要注意哪些事? 1\n10739 1992 过敏性哮喘究竟怎样治疗? 如何治疗过敏性哮喘? 1\n10740 1993 过敏性哮喘究竟怎样治疗? 过敏性哮喘要怎么治疗? 1\n10744 1997 变应性哮喘就是过敏性哮喘吗? 变应性哮喘与过敏性哮喘一样吗? 1\n10745 1998 变应性哮喘就是过敏性哮喘吗? 变应性哮喘是否就是过敏性哮喘? 1\n\n[4301 rows x 4 columns]\n id query1 query2 label\n2 2 剧烈运动后咯血,是怎么了? 剧烈运动后咯血,应该怎么处理? 0\n3 3 剧烈运动后咯血,是怎么了? 剧烈运动后咯血,需要就医吗? 0\n4 4 剧烈运动后咯血,是怎么了? 剧烈运动后咯血,是否很严重? 0\n7 7 百令胶囊需要注意什么? 百令胶囊如何服用? 0\n8 8 百令胶囊需要注意什么? 百令胶囊效果好吗? 0\n... ... ... ... ...\n10742 1995 过敏性哮喘究竟怎样治疗? 过敏性哮喘有哪些症状? 0\n10743 1996 过敏性哮喘究竟怎样治疗? 过敏性哮喘治好会不会复发? 0\n10746 1999 变应性哮喘就是过敏性哮喘吗? 变应性哮喘的饮食禁忌有哪些? 0\n10747 2000 变应性哮喘就是过敏性哮喘吗? 变应性哮喘怎么治疗? 0\n10748 2001 变应性哮喘就是过敏性哮喘吗? 变应性哮喘能跑步吗? 0\n\n[6448 rows x 4 columns]\nindex error\n3010\nindex error\n7295\n"
],
[
"df = pd.DataFrame(data=a, columns=['query1', 'query2'])\ndf['label']=1\ndf['id']=df.index\nprint(df)\ndf[['id','query1','query2','label']].to_csv('similary_generate.csv',header=True,index=False)",
" query1 query2 label id\n0 剧烈运动后咯血是什么原因? 剧烈运动后为什么会咯血? 1 0\n1 百令胶囊有什么注意事项? 服用百令胶囊有什么需要特别注意的吗? 1 1\n2 肝癌兼肺癌晚期还有多少寿命? 肝癌兼肺癌晚期还有多少时间? 1 2\n3 咳嗽咯血半年是怎么回事? 咳嗽咯血半年是什么情况? 1 3\n4 百令胶囊是否可以长时间服用? 百令胶囊长时间服用会有问题吗? 1 4\n... ... ... ... ...\n3005 哮喘能否吃洋参? 哮喘可不可以吃洋参? 1 3005\n3006 过敏性哮喘患者的注意事项是什么? 过敏性哮喘患者要注意的是什么? 1 3006\n3007 过敏性哮喘平时应注意什么? 患了过敏性哮喘平时需要注意哪些事? 1 3007\n3008 如何治疗过敏性哮喘? 过敏性哮喘要怎么治疗? 1 3008\n3009 变应性哮喘与过敏性哮喘一样吗? 变应性哮喘是否就是过敏性哮喘? 1 3009\n\n[3010 rows x 4 columns]\n"
],
[
"df = pd.DataFrame(data=b, columns=['query1', 'query2'])\n# df['lable']=1\ndf['id']=df.index\nprint(df)\ndf[['id','query1','query2']].to_csv('unknow_generate.csv',header=True,index=False)",
" query1 query2 id\n0 剧烈运动后咯血,应该怎么处理? 剧烈运动后咯血,需要就医吗? 0\n1 剧烈运动后咯血,应该怎么处理? 剧烈运动后咯血,是否很严重? 1\n2 剧烈运动后咯血,需要就医吗? 剧烈运动后咯血,是否很严重? 2\n3 百令胶囊如何服用? 百令胶囊效果好吗? 3\n4 百令胶囊如何服用? 百令胶囊需要如何服用? 4\n... ... ... ...\n7290 过敏性哮喘病因是什么? 过敏性哮喘治好会不会复发? 7290\n7291 过敏性哮喘有哪些症状? 过敏性哮喘治好会不会复发? 7291\n7292 变应性哮喘的饮食禁忌有哪些? 变应性哮喘怎么治疗? 7292\n7293 变应性哮喘的饮食禁忌有哪些? 变应性哮喘能跑步吗? 7293\n7294 变应性哮喘怎么治疗? 变应性哮喘能跑步吗? 7294\n\n[7295 rows x 3 columns]\n"
],
[
"df = pd.read_csv('../result.csv')\n# print(df)\nprint(df[df['label']==1])\ndf1 = pd.read_csv('unknow_generate.csv')\ndf1['label'] = df.iloc[:,-1].values\nprint(df1)\ndf2 = df1[df1['label']==1]\ndf2_ = df1[df1['label']==0]\nprint(len(df2_))\ndf3 = pd.read_csv('total_data1.csv')\nprint(df3)\ndf4 = pd.concat([df2,df3],axis=0)\nprint(df4)\ndf5 = df2_.sample(n=2000, random_state=1)\nprint(df5)\ndf6=pd.concat([df5,df4],axis=0)\nprint(df6)\ndf6.to_csv('total_data3.csv',header=True,index=False)",
" id label\n4 4 1\n7 7 1\n14 14 1\n15 15 1\n16 16 1\n... ... ...\n7014 7014 1\n7116 7116 1\n7170 7170 1\n7202 7202 1\n7257 7257 1\n\n[380 rows x 2 columns]\n id query1 query2 label\n0 0 剧烈运动后咯血,应该怎么处理? 剧烈运动后咯血,需要就医吗? 0\n1 1 剧烈运动后咯血,应该怎么处理? 剧烈运动后咯血,是否很严重? 0\n2 2 剧烈运动后咯血,需要就医吗? 剧烈运动后咯血,是否很严重? 0\n3 3 百令胶囊如何服用? 百令胶囊效果好吗? 0\n4 4 百令胶囊如何服用? 百令胶囊需要如何服用? 1\n... ... ... ... ...\n7290 7290 过敏性哮喘病因是什么? 过敏性哮喘治好会不会复发? 0\n7291 7291 过敏性哮喘有哪些症状? 过敏性哮喘治好会不会复发? 0\n7292 7292 变应性哮喘的饮食禁忌有哪些? 变应性哮喘怎么治疗? 0\n7293 7293 变应性哮喘的饮食禁忌有哪些? 变应性哮喘能跑步吗? 0\n7294 7294 变应性哮喘怎么治疗? 变应性哮喘能跑步吗? 0\n\n[7295 rows x 4 columns]\n6915\n id query1 query2 label\n0 0 剧烈运动后咯血,是怎么了? 剧烈运动后咯血是什么原因? 1\n1 1 剧烈运动后咯血,是怎么了? 剧烈运动后为什么会咯血? 1\n2 2 剧烈运动后咯血,是怎么了? 剧烈运动后咯血,应该怎么处理? 0\n3 3 剧烈运动后咯血,是怎么了? 剧烈运动后咯血,需要就医吗? 0\n4 4 剧烈运动后咯血,是怎么了? 剧烈运动后咯血,是否很严重? 0\n... ... ... ... ...\n13754 3005 哮喘能否吃洋参? 哮喘可不可以吃洋参? 1\n13755 3006 过敏性哮喘患者的注意事项是什么? 过敏性哮喘患者要注意的是什么? 1\n13756 3007 过敏性哮喘平时应注意什么? 患了过敏性哮喘平时需要注意哪些事? 1\n13757 3008 如何治疗过敏性哮喘? 过敏性哮喘要怎么治疗? 1\n13758 3009 变应性哮喘与过敏性哮喘一样吗? 变应性哮喘是否就是过敏性哮喘? 1\n\n[13759 rows x 4 columns]\n id query1 query2 label\n4 4 百令胶囊如何服用? 百令胶囊需要如何服用? 1\n7 7 肝癌兼肺癌晚期能治疗吗? 肝癌兼肺癌晚期能治好吗? 1\n14 14 百令胶囊怎么服用? 百令胶囊的服用方法是什么? 1\n15 15 我国最常见的咯血有哪些? 我国最常见的咯血的种类有多少? 1\n16 16 我国最常见的咯血有哪些? 我国最常见的咯血是什么? 1\n... ... ... ... ...\n13754 3005 哮喘能否吃洋参? 哮喘可不可以吃洋参? 1\n13755 3006 过敏性哮喘患者的注意事项是什么? 过敏性哮喘患者要注意的是什么? 1\n13756 3007 过敏性哮喘平时应注意什么? 患了过敏性哮喘平时需要注意哪些事? 1\n13757 3008 如何治疗过敏性哮喘? 过敏性哮喘要怎么治疗? 1\n13758 3009 变应性哮喘与过敏性哮喘一样吗? 变应性哮喘是否就是过敏性哮喘? 1\n\n[14139 rows x 4 columns]\n id query1 query2 label\n7114 7114 干咳无痰吃什么水果恢复的快 干咳无痰并伴有哮鸣音怎么回事 0\n1375 1375 肺气肿病人怎么治疗的? 肺气肿病人最明显的表现有哪些呢? 0\n1946 1946 肺炎会传染人吗 大叶性肺炎传染吗? 0\n2012 2012 感冒肺炎,住院三天医生说出院需要注意饮食 感冒肺炎,住院三天医生说出院后不要去人群密集的地方 0\n2211 2211 严重的慢性肺炎传染严重吗? 严重的慢性肺炎传染人吗? 0\n... ... ... ... ...\n3129 3129 宝宝吃过东西就吐是扁桃体发炎了吗? 宝宝吃过东西就吐可以吃健胃消食片吗? 0\n64 64 康尔佳益肺止咳胶囊有什么作用? 康尔佳益肺止咳胶囊怎么服用? 0\n365 365 怎么诊断小儿支原体肺炎? 小儿支原体肺炎有什么症状? 0\n106 106 达肺草能治疗肺纤维化吗 达肺草属于中草药吗 0\n7235 7235 怎样判断是过敏性哮喘 过敏跟季节有关系吗 0\n\n[2000 rows x 4 columns]\n id query1 query2 label\n7114 7114 干咳无痰吃什么水果恢复的快 干咳无痰并伴有哮鸣音怎么回事 0\n1375 1375 肺气肿病人怎么治疗的? 肺气肿病人最明显的表现有哪些呢? 0\n1946 1946 肺炎会传染人吗 大叶性肺炎传染吗? 0\n2012 2012 感冒肺炎,住院三天医生说出院需要注意饮食 感冒肺炎,住院三天医生说出院后不要去人群密集的地方 0\n2211 2211 严重的慢性肺炎传染严重吗? 严重的慢性肺炎传染人吗? 0\n... ... ... ... ...\n13754 3005 哮喘能否吃洋参? 哮喘可不可以吃洋参? 1\n13755 3006 过敏性哮喘患者的注意事项是什么? 过敏性哮喘患者要注意的是什么? 1\n13756 3007 过敏性哮喘平时应注意什么? 患了过敏性哮喘平时需要注意哪些事? 1\n13757 3008 如何治疗过敏性哮喘? 过敏性哮喘要怎么治疗? 1\n13758 3009 变应性哮喘与过敏性哮喘一样吗? 变应性哮喘是否就是过敏性哮喘? 1\n\n[16139 rows x 4 columns]\n"
],
[
"df = pd.read_csv('total_data3.csv')\nprint(df[df['label']==0])",
" id query1 query2 label\n0 7114 干咳无痰吃什么水果恢复的快 干咳无痰并伴有哮鸣音怎么回事 0\n1 1375 肺气肿病人怎么治疗的? 肺气肿病人最明显的表现有哪些呢? 0\n2 1946 肺炎会传染人吗 大叶性肺炎传染吗? 0\n3 2012 感冒肺炎,住院三天医生说出院需要注意饮食 感冒肺炎,住院三天医生说出院后不要去人群密集的地方 0\n4 2211 严重的慢性肺炎传染严重吗? 严重的慢性肺炎传染人吗? 0\n... ... ... ... ...\n13122 1995 过敏性哮喘究竟怎样治疗? 过敏性哮喘有哪些症状? 0\n13123 1996 过敏性哮喘究竟怎样治疗? 过敏性哮喘治好会不会复发? 0\n13126 1999 变应性哮喘就是过敏性哮喘吗? 变应性哮喘的饮食禁忌有哪些? 0\n13127 2000 变应性哮喘就是过敏性哮喘吗? 变应性哮喘怎么治疗? 0\n13128 2001 变应性哮喘就是过敏性哮喘吗? 变应性哮喘能跑步吗? 0\n\n[8448 rows x 4 columns]\n"
],
[
"df1 = pd.read_csv('totol_data.csv')\ndf1 = df1.drop(['category'],axis=1)\nprint(df1)\ndf2 = pd.read_csv('similary_generate.csv')\nprint(df2)\ndf = pd.concat([df1,df2],axis=0)\nprint(df)\ndf.to_csv('total_data1.csv',header=True,index=False)",
" id query1 query2 label\n0 0 剧烈运动后咯血,是怎么了? 剧烈运动后咯血是什么原因? 1\n1 1 剧烈运动后咯血,是怎么了? 剧烈运动后为什么会咯血? 1\n2 2 剧烈运动后咯血,是怎么了? 剧烈运动后咯血,应该怎么处理? 0\n3 3 剧烈运动后咯血,是怎么了? 剧烈运动后咯血,需要就医吗? 0\n4 4 剧烈运动后咯血,是怎么了? 剧烈运动后咯血,是否很严重? 0\n... ... ... ... ...\n10744 1997 变应性哮喘就是过敏性哮喘吗? 变应性哮喘与过敏性哮喘一样吗? 1\n10745 1998 变应性哮喘就是过敏性哮喘吗? 变应性哮喘是否就是过敏性哮喘? 1\n10746 1999 变应性哮喘就是过敏性哮喘吗? 变应性哮喘的饮食禁忌有哪些? 0\n10747 2000 变应性哮喘就是过敏性哮喘吗? 变应性哮喘怎么治疗? 0\n10748 2001 变应性哮喘就是过敏性哮喘吗? 变应性哮喘能跑步吗? 0\n\n[10749 rows x 4 columns]\n id query1 query2 label\n0 0 剧烈运动后咯血是什么原因? 剧烈运动后为什么会咯血? 1\n1 1 百令胶囊有什么注意事项? 服用百令胶囊有什么需要特别注意的吗? 1\n2 2 肝癌兼肺癌晚期还有多少寿命? 肝癌兼肺癌晚期还有多少时间? 1\n3 3 咳嗽咯血半年是怎么回事? 咳嗽咯血半年是什么情况? 1\n4 4 百令胶囊是否可以长时间服用? 百令胶囊长时间服用会有问题吗? 1\n... ... ... ... ...\n3005 3005 哮喘能否吃洋参? 哮喘可不可以吃洋参? 1\n3006 3006 过敏性哮喘患者的注意事项是什么? 过敏性哮喘患者要注意的是什么? 1\n3007 3007 过敏性哮喘平时应注意什么? 患了过敏性哮喘平时需要注意哪些事? 1\n3008 3008 如何治疗过敏性哮喘? 过敏性哮喘要怎么治疗? 1\n3009 3009 变应性哮喘与过敏性哮喘一样吗? 变应性哮喘是否就是过敏性哮喘? 1\n\n[3010 rows x 4 columns]\n id query1 query2 label\n0 0 剧烈运动后咯血,是怎么了? 剧烈运动后咯血是什么原因? 1\n1 1 剧烈运动后咯血,是怎么了? 剧烈运动后为什么会咯血? 1\n2 2 剧烈运动后咯血,是怎么了? 剧烈运动后咯血,应该怎么处理? 0\n3 3 剧烈运动后咯血,是怎么了? 剧烈运动后咯血,需要就医吗? 0\n4 4 剧烈运动后咯血,是怎么了? 剧烈运动后咯血,是否很严重? 0\n... ... ... ... ...\n3005 3005 哮喘能否吃洋参? 哮喘可不可以吃洋参? 1\n3006 3006 过敏性哮喘患者的注意事项是什么? 过敏性哮喘患者要注意的是什么? 1\n3007 3007 过敏性哮喘平时应注意什么? 患了过敏性哮喘平时需要注意哪些事? 1\n3008 3008 如何治疗过敏性哮喘? 过敏性哮喘要怎么治疗? 1\n3009 3009 变应性哮喘与过敏性哮喘一样吗? 变应性哮喘是否就是过敏性哮喘? 1\n\n[13759 rows x 4 columns]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76dd932d0c2d1f3740378297f44ab18063d3fd4 | 2,051 | ipynb | Jupyter Notebook | examples/ypr_to_opk_example.ipynb | scarani/neilpy | 5b2bf0210f8565aa8a0d07c9c069373fd572ff5f | [
"MIT"
] | 7 | 2019-04-23T18:38:48.000Z | 2021-11-02T09:52:13.000Z | examples/ypr_to_opk_example.ipynb | scarani/neilpy | 5b2bf0210f8565aa8a0d07c9c069373fd572ff5f | [
"MIT"
] | null | null | null | examples/ypr_to_opk_example.ipynb | scarani/neilpy | 5b2bf0210f8565aa8a0d07c9c069373fd572ff5f | [
"MIT"
] | 8 | 2019-09-30T22:06:10.000Z | 2021-02-27T18:41:34.000Z | 27.716216 | 115 | 0.574842 | [
[
[
"import neilpy\nimport pandas as pd\nimport glob\nimport os\n\n# Load image names into a list\nimages_dir = 'POAS/*.jpg'\nfns = glob.glob(images_dir)\n\n# Read the geotags into a dataframe\nphotos_df = neilpy.read_geotags_into_df(fns,return_datetimes=False)\n\n# Fix names, as we don't want to include the path to the image, just the basename\nphotos_df['fn'] = photos_df['fn'].apply(os.path.basename)\n\n# Calculate the azimuths based on the tracks\nphotos_df['azimuth'] = neilpy.track2azimuth(photos_df['lat'].values,photos_df['lon'].values)\n\n# Based on a specified pitch estimated during the mounting, calculate Omega, Phi, and Kappa angles\npitch = -70\nphotos_df['omega'],photos_df['phi'],photos_df['kappa'] = neilpy.ypr2opk(photos_df['azimuth'].values,pitch)\n\n# Correct for GEOID\nphotos_df['alt'] = photos_df['alt'] + 35.356 \n\n# Define accuracy of measurements:\nphotos_df['xy_acc'] = 2.\nphotos_df['z_acc'] = 2.\n\n# Write out the values\noutfile = 'sept_poas_opk.csv'\ncols = ['fn','lat','lon','alt','omega','phi','kappa','xy_acc','z_acc']\nphotos_df.to_csv(outfile,index=False,header=False,columns=cols)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
e76dde1eb4f6aa5540e2759d4d129d85dea97008 | 498,675 | ipynb | Jupyter Notebook | Filters 04 Ideal High-Pass Time Domain.ipynb | aerdos/notes | e4c529ba795e8d70ef5eefc1e71ccf1141278317 | [
"MIT"
] | null | null | null | Filters 04 Ideal High-Pass Time Domain.ipynb | aerdos/notes | e4c529ba795e8d70ef5eefc1e71ccf1141278317 | [
"MIT"
] | null | null | null | Filters 04 Ideal High-Pass Time Domain.ipynb | aerdos/notes | e4c529ba795e8d70ef5eefc1e71ccf1141278317 | [
"MIT"
] | null | null | null | 98.183698 | 120,991 | 0.746767 | [
[
[
"from scipy import signal\nimport numpy as np\n\n%matplotlib notebook\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"sig_len_sec = 2\nFs = 100\nT = 1/Fs\nN = Fs * sig_len_sec\ntt = np.arange(0, N) * T - (N/2 * T)",
"_____no_output_____"
],
[
"ff = np.linspace(-1.0/(2.0*T), 1.0/(2.0*T), N, endpoint=False)",
"_____no_output_____"
],
[
"# Create a Frequency Mask, extending into negative frequency domain\nmask = np.ones(len(tt))\nmask[130::] = 0\nmask[0:70] = 0\nmask = 1.0 - mask",
"_____no_output_____"
],
[
"f = plt.figure(figsize=(5, 4))\nax = f.add_subplot(111)\n\nax.plot(ff, mask)\n\nax.set_xlim([0, Fs/2]) # zoom in on positive frequencies only\nax.set_xlabel('Frequency [Hz]')",
"_____no_output_____"
],
[
"y_mask = np.real(np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(mask))))",
"_____no_output_____"
],
[
"f = plt.figure(figsize=(5, 4))\nax = f.add_subplot(111)\n\nax.plot(tt, y_mask, label='y_mask')\n\n# ax.legend()\nax.set_xlabel('Time [s]')",
"_____no_output_____"
],
[
"f1 = 2\ny1 = np.sin(2 * np.pi * f1 * tt)",
"_____no_output_____"
],
[
"f2 = 20\ny2 = 0.25 * np.sin(2 * np.pi * f2 * tt)",
"_____no_output_____"
],
[
"y3 = y1 + y2",
"_____no_output_____"
],
[
"y3_padded = np.hstack([y3, np.zeros(len(y3))])",
"_____no_output_____"
],
[
"f = plt.figure(figsize=(9, 4))\nax = f.add_subplot(111)\n\nax.plot(y3_padded, label='y3_padded')\n\nax.legend()\nax.set_xlabel('Time [s]')",
"_____no_output_____"
],
[
"def fir_filter(b, xx):\n yy = np.zeros_like(xx) # create buffer for output values\n delay = np.zeros_like(b) # create delay line\n \n for ii, x in enumerate(xx):\n delay[1:] = delay[:-1] # right-shift values in 'delay'\n delay[:1] = x # place new value into the delay line\n yy[ii] = np.sum(delay * b)\n \n return yy",
"_____no_output_____"
],
[
"y3_filt = fir_filter(y_mask, y3_padded)",
"_____no_output_____"
],
[
"f = plt.figure(figsize=(9, 4))\nax = f.add_subplot(111)\n\nax.plot(T * np.arange(len(y3_filt)), y3_filt, label='y_mask')\n\nax.legend()\nax.set_xlabel('Time [s]')",
"_____no_output_____"
],
[
"from scipy.signal import freqz\n\n# Calculate the frequency response 'h' at the complex frequencies 'w'\n# Note that 'w' is returned in the same units as 'Fs'\nw, h = freqz(y_mask, [1], worN=8192)\n\nw_hz = w * (Fs/(2*np.pi)) # 'convert 'w' from radians to Hz\nh_db = 20 * np.log10(np.abs(h)) # convert 'h' from complex magitude to dB\nangles = np.unwrap(np.angle(h)) * (180/np.pi)",
"_____no_output_____"
],
[
"f = plt.figure(figsize=(9, 4))\nax1 = f.add_subplot(111)\n\nax1.plot(w_hz, 20*np.log(np.abs(h)), color='xkcd:blue')\n\n# ax1.set_xscale('log')\nax1.set_xlim([1, Fs/2])\nax1.grid(which='both', axis='both')\nax1.set_ylabel('Amplitude [dB]', color='xkcd:blue')\nax1.set_title('Filer Frequency and Phase Response')\n\nax2 = ax1.twinx()\nax2.plot(w_hz, angles, color='xkcd:green')\nax2.set_ylabel('angle [deg]', color='xkcd:green')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76de33af6d349f52577f9a17f431bcc6e59e101 | 462,867 | ipynb | Jupyter Notebook | dbdp_instances/Meta-Leaning.ipynb | brunodferrari/bdp | d320add1e451c85b6777ae34901bbd6fd3797114 | [
"Unlicense"
] | null | null | null | dbdp_instances/Meta-Leaning.ipynb | brunodferrari/bdp | d320add1e451c85b6777ae34901bbd6fd3797114 | [
"Unlicense"
] | null | null | null | dbdp_instances/Meta-Leaning.ipynb | brunodferrari/bdp | d320add1e451c85b6777ae34901bbd6fd3797114 | [
"Unlicense"
] | null | null | null | 300.953836 | 223,720 | 0.908058 | [
[
[
"%cd C:\\Users\\Bruno Ferrari\\Documents\\Bruno\\2019\\2s\\MC\\artigos revisão\\Artigos Mes\\GD\\bdp\\dbdp_instances",
"C:\\Users\\Bruno Ferrari\\Documents\\Bruno\\2019\\2s\\MC\\artigos revisão\\Artigos Mes\\GD\\bdp\\dbdp_instances\n"
],
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n#from lazypredict.Supervised import LazyClassifier\n\nfrom sklearn.preprocessing import scale\n\nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"dados_org = pd.read_excel('final_results.xlsx')",
"_____no_output_____"
],
[
"dados_org.head()",
"_____no_output_____"
],
[
"dados_org.tail()",
"_____no_output_____"
],
[
"dados_org.loc[:, dados_org.columns.str.contains('Crossing', na=False)].plot(logy=True, figsize=(16,12))\ndados_org.loc[:, dados_org.columns.str.contains('Crossing', na=False)]",
"_____no_output_____"
],
[
"dados_org.loc[:, dados_org.columns.str.contains('Time', na=False)].plot(logy=True, figsize=(16,12))\ndados_org.loc[:, dados_org.columns.str.contains('Time', na=False)]",
"_____no_output_____"
],
[
"dados_org.loc[:, dados_org.columns.str.contains('Crossing', na=False)].drop('Crossing_gs', axis=1).fillna(np.inf).idxmin(axis=1).value_counts(normalize=0)",
"_____no_output_____"
],
[
"np.unique(np.argmin(dados_org.loc[:, dados_org.columns.str.contains('Crossing', na=False)].drop('Crossing_gs', axis=1).values,axis=1), return_counts=True)",
"_____no_output_____"
],
[
"aux_dm = pd.concat(\n [\n #pd.Series(np.argmin(dados_org.loc[:, dados_org.columns.str.contains('Crossing', na=False)].drop('Crossing_gs', axis=1).fillna(np.inf).values,axis=1)),\n dados_org.loc[:, dados_org.columns.str.contains('Crossing', na=False)].drop('Crossing_gs', axis=1).fillna(np.inf).idxmin(axis=1) \n ],\n axis=1\n)",
"_____no_output_____"
],
[
"display(aux_dm)",
"_____no_output_____"
],
[
"from sklearn.preprocessing import LabelBinarizer",
"_____no_output_____"
],
[
"Y_labeled = (aux_dm.astype('category')[0].cat.codes)",
"_____no_output_____"
],
[
"Y_gs=pd.get_dummies(aux_dm, prefix='Best')['Best_Crossing_gs_vns']\nY_ts=pd.get_dummies(aux_dm, prefix='Best')['Best_Crossing_ts']\nY_vns=pd.get_dummies(aux_dm, prefix='Best')['Best_Crossing_vns']",
"_____no_output_____"
],
[
"display(Y_ts)\nprint(Y_ts.sum())\ndisplay(Y_vns)\nprint(Y_vns.sum())\ndisplay(Y_gs)\nprint(Y_gs.sum())",
"_____no_output_____"
],
[
"#Y_gs[Y_gs==1]\nY_gs[205:211] = 1\n\nY_labeled = Y_labeled.copy()\nY_labeled[205:211] = 0\nY_labeled.value_counts()",
"_____no_output_____"
],
[
"X = dados_org.iloc[:, 1:11]\nX.drop(['V1', 'V2'],axis=1)",
"_____no_output_____"
],
[
"from sklearn.model_selection import cross_val_score, cross_val_predict\nfrom sklearn.svm import SVC\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.neighbors import KNeighborsClassifier",
"_____no_output_____"
],
[
"svc = SVC(decision_function_shape='ovo', kernel=\"linear\")\nrf = RandomForestClassifier(random_state=42, n)\ndt = DecisionTreeClassifier(random_state=42, min_samples_leaf=10)\nknn = KNeighborsClassifier() ",
"_____no_output_____"
],
[
"#score_svc = cross_val_score(svc, X.drop(['V1', 'V2'],axis=1), Y_labeled, scoring='accuracy')\nscore_rf = cross_val_score(rf, X.drop(['V1', 'V2'],axis=1), Y_labeled, scoring='accuracy')\n#score_dt = cross_val_score(dt, X.drop(['V1', 'V2'],axis=1), Y_labeled, scoring='accuracy')\n#score_knn = cross_val_score(knn, X.drop(['V1', 'V2'],axis=1), Y_labeled, scoring='accuracy')",
"_____no_output_____"
],
[
"#score_svc.mean()\nscore_rf.mean()\n#score_dt.mean()\n#score_knn.mean()",
"_____no_output_____"
],
[
"score = cross_val_score(rf, X.drop(['V1', 'V2'],axis=1), Y_vns, scoring='accuracy')\nscore.mean()",
"_____no_output_____"
],
[
"score = cross_val_score(rf, X.drop(['V1', 'V2'],axis=1), Y_ts, scoring='accuracy')\nscore.mean()",
"_____no_output_____"
],
[
"score = cross_val_score(rf, X.drop(['V1', 'V2'],axis=1), Y_gs, scoring='accuracy')\nscore.mean()",
"_____no_output_____"
],
[
"test_vns = cross_val_predict(svc, X.drop(['V1', 'V2'],axis=1), Y_vns, method=\"decision_function\")\ntest_gs = cross_val_predict(rf, X.drop(['V1', 'V2'],axis=1), Y_gs)\ntest_ts = cross_val_predict(svc, X.drop(['V1', 'V2'],axis=1), Y_ts, method=\"decision_function\")\ntest_labeled = cross_val_predict(svc, X.drop(['V1', 'V2'],axis=1), Y_labeled, method=\"decision_function\")",
"_____no_output_____"
],
[
"pd.DataFrame(test_labeled).plot()",
"_____no_output_____"
],
[
"rf = RandomForestClassifier(n_estimators=10000, random_state=42, n_jobs=-1)\nprint(cross_val_score(rf, X, Y_labeled, scoring='accuracy', n_jobs=-1))\ny_hat=cross_val_predict(rf, X, Y_labeled, verbose=1, n_jobs=-1)\ny_hat",
"[0.35164835 0.55555556 0.52222222 0.61111111 0.67777778]\n"
],
[
"sum(Y_labeled.values==y_hat)/len(y_hat)",
"_____no_output_____"
],
[
"from sklearn.metrics import roc_curve",
"_____no_output_____"
],
[
"fpr,tpr,thresholds=roc_curve(y_train_5, y_scores)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76de75db805ecd761111a52a6432963792f93aa | 847,046 | ipynb | Jupyter Notebook | demo-resources/Spark-Pool-Notebook.ipynb | vijrqrr9/dp203 | c202a6e595f4f8b4c21996a6db4b11e6244c1ce4 | [
"MIT"
] | 18 | 2021-08-09T16:14:22.000Z | 2022-01-18T13:33:02.000Z | demo-resources/Spark-Pool-Notebook.ipynb | eandbsoftware/dp203 | bb84413707c71fc91c5933a961e79d4f54399181 | [
"MIT"
] | null | null | null | demo-resources/Spark-Pool-Notebook.ipynb | eandbsoftware/dp203 | bb84413707c71fc91c5933a961e79d4f54399181 | [
"MIT"
] | 16 | 2021-08-09T16:14:27.000Z | 2022-03-19T13:49:46.000Z | 32.607537 | 1,140 | 0.202263 | [
[
[
"# Basic Apach Spark Analysis\r\n\r\n- Ref: https://timw.info/ply\r\n- Notebook tutorial: https://timw.info/ekt\r\n\r\n\r\n\r\n\r\n\r\n",
"_____no_output_____"
]
],
[
[
"# Load NYC Taxi data\r\ndf = spark.read.load('abfss://[email protected]/NYCTripSmall.parquet', format='parquet')\r\ndisplay(df.limit(10))",
"_____no_output_____"
],
[
"# View the dataframe schema\r\ndf.printSchema()",
"_____no_output_____"
],
[
" # Load the NYC Taxi data into the Spark nyctaxi database\r\nspark.sql(\"CREATE DATABASE IF NOT EXISTS nyctaxi\")\r\ndf.write.mode(\"overwrite\").saveAsTable(\"nyctaxi.trip\")",
"_____no_output_____"
],
[
"# Display the taxi data\r\ndf = spark.sql(\"SELECT * FROM nyctaxi.trip\") \r\ndisplay(df)",
"_____no_output_____"
],
[
"# Analyze the data and save results to nyctaxi.passengercountstats table (select CHART)\r\ndf = spark.sql(\"\"\"\r\n SELECT PassengerCount,\r\n SUM(TripDistanceMiles) as SumTripDistance,\r\n AVG(TripDistanceMiles) as AvgTripDistance\r\n FROM nyctaxi.trip\r\n WHERE TripDistanceMiles > 0 AND PassengerCount > 0\r\n GROUP BY PassengerCount\r\n ORDER BY PassengerCount\r\n\"\"\") \r\ndisplay(df)\r\ndf.write.saveAsTable(\"nyctaxi.passengercountstats\")",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76e0443984af42f78200e9c7e4fbb85e6f18cca | 4,191 | ipynb | Jupyter Notebook | ipynb/Germany-Nordrhein-Westfalen-LK-Kleve.ipynb | RobertRosca/oscovida.github.io | d609949076e3f881e38ec674ecbf0887e9a2ec25 | [
"CC-BY-4.0"
] | null | null | null | ipynb/Germany-Nordrhein-Westfalen-LK-Kleve.ipynb | RobertRosca/oscovida.github.io | d609949076e3f881e38ec674ecbf0887e9a2ec25 | [
"CC-BY-4.0"
] | null | null | null | ipynb/Germany-Nordrhein-Westfalen-LK-Kleve.ipynb | RobertRosca/oscovida.github.io | d609949076e3f881e38ec674ecbf0887e9a2ec25 | [
"CC-BY-4.0"
] | null | null | null | 29.307692 | 190 | 0.517776 | [
[
[
"# Germany: LK Kleve (Nordrhein-Westfalen)\n\n* Homepage of project: https://oscovida.github.io\n* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Nordrhein-Westfalen-LK-Kleve.ipynb)",
"_____no_output_____"
]
],
[
[
"import datetime\nimport time\n\nstart = datetime.datetime.now()\nprint(f\"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}\")",
"_____no_output_____"
],
[
"%config InlineBackend.figure_formats = ['svg']\nfrom oscovida import *",
"_____no_output_____"
],
[
"overview(country=\"Germany\", subregion=\"LK Kleve\");",
"_____no_output_____"
],
[
"# load the data\ncases, deaths, region_label = germany_get_region(landkreis=\"LK Kleve\")\n\n# compose into one table\ntable = compose_dataframe_summary(cases, deaths)\n\n# show tables with up to 500 rows\npd.set_option(\"max_rows\", 500)\n\n# display the table\ntable",
"_____no_output_____"
]
],
[
[
"# Explore the data in your web browser\n\n- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Nordrhein-Westfalen-LK-Kleve.ipynb)\n- and wait (~1 to 2 minutes)\n- Then press SHIFT+RETURN to advance code cell to code cell\n- See http://jupyter.org for more details on how to use Jupyter Notebook",
"_____no_output_____"
],
[
"# Acknowledgements:\n\n- Johns Hopkins University provides data for countries\n- Robert Koch Institute provides data for within Germany\n- Open source and scientific computing community for the data tools\n- Github for hosting repository and html files\n- Project Jupyter for the Notebook and binder service\n- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))\n\n--------------------",
"_____no_output_____"
]
],
[
[
"print(f\"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and \"\n f\"deaths at {fetch_deaths_last_execution()}.\")",
"_____no_output_____"
],
[
"# to force a fresh download of data, run \"clear_cache()\"",
"_____no_output_____"
],
[
"print(f\"Notebook execution took: {datetime.datetime.now()-start}\")\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
e76e0768c15c29da2bac2785bb23f1e773cd2ef9 | 695 | ipynb | Jupyter Notebook | 02-Machine_Learning_Basics/00-BasicsNumpy.ipynb | spendyala/deeplearning-docker | 1dcd03e2e65cb1897daf9dfffeab018f97fc0780 | [
"MIT"
] | null | null | null | 02-Machine_Learning_Basics/00-BasicsNumpy.ipynb | spendyala/deeplearning-docker | 1dcd03e2e65cb1897daf9dfffeab018f97fc0780 | [
"MIT"
] | 1 | 2021-02-02T22:47:32.000Z | 2021-02-02T22:47:32.000Z | 02-Machine_Learning_Basics/00-BasicsNumpy.ipynb | spendyala/deeplearning-docker | 1dcd03e2e65cb1897daf9dfffeab018f97fc0780 | [
"MIT"
] | null | null | null | 16.547619 | 34 | 0.513669 | [
[
[
"import numpy as np",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
e76e19ee0593cca052ee80902c47489346aac310 | 207,825 | ipynb | Jupyter Notebook | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects | a37dfa1a25b594d1ade1d4d94a7a6c5dd8c065ec | [
"MIT"
] | null | null | null | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects | a37dfa1a25b594d1ade1d4d94a7a6c5dd8c065ec | [
"MIT"
] | null | null | null | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects | a37dfa1a25b594d1ade1d4d94a7a6c5dd8c065ec | [
"MIT"
] | null | null | null | 158.282559 | 56,335 | 0.805514 | [
[
[
"<a href=\"https://colab.research.google.com/github/nk555/AI-Projects/blob/master/GAN/MNIST_Style_GAN_v2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# MNIST learns your handwriting\n\nThis is a small project on using a GAN to generate numbers that look as someone else's handwriting when not trained on all numbers written by this person. For example say we had someone write the number 273 and we now want to write 481 in their own handwriting.\n\nThe main inspiration for this project is a paper I read recently called STAR GAN v2. In this paper they try to recognize diferent styles and features in images and transfer those into a different image. For example they are able to use image of different animals like dogs or tigers and making them look like a cat. Furthermore at the time of writing this it is currently a state-of-the-art method for this style translation tasks.\n\nSome of the results can be seen at the end of this notebook. Unfortunately it seems not that many features were captured and mostly it was only the thickness of the numbers that was preserved. A reason this happens might be that the size of the images is small being 28x28. However, some ways to allow for more variation might be by exteding the number of layers being used, by having higher dimensional spaces for the latent and style spaces, or by giving a higher weight to the style diversification loss (look at section loss functions to see more about this).\n\nThe main purpose of this notebook is to make a small showcase of the architecture used in a simple design so that the ideas are simple to follow. This notebook will also contain some explanations and comments on the architecture of the neural network so that it might be easier to follow.\n\nNote: another small thing I did in this project is to 'translate' STAR GAN code from pytorch to tensorflow. Redoing all of the work was useful to understand everything done on their code and having an option in tensorflow might be useful for some people.\n\nFor a small tutorial on how to write a simple GAN architecture: https://machinelearningmastery.com/how-to-develop-a-generative-adversarial-network-for-an-mnist-handwritten-digits-from-scratch-in-keras/\n\nLink to STAR GAN v2: https://app.wandb.ai/stacey/stargan/reports/Cute-Animals-and-Post-Modern-Style-Transfer%3A-StarGAN-v2-for-Multi-Domain-Image-Synthesis---VmlldzoxNzcwODQ\n\nFurther Reading on style domain techniques for image generation:\n\nLink to STAR GAN paper: https://arxiv.org/pdf/1912.01865.pdf\n\nLink to Multimodal Unsupervised Image-to-Image Translation: https://arxiv.org/pdf/1804.04732.pdf\n\nLink to Improving Style-Content Disentanglement Paper: https://arxiv.org/pdf/2007.04964.pdf",
"_____no_output_____"
],
[
"# Intitializing",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nfrom tensorflow_addons.layers import InstanceNormalization\nimport numpy as np\nimport tensorflow.keras.layers as layers\nimport time\nfrom tensorflow.keras.datasets.mnist import load_data\nimport sys\nimport os\nimport datetime",
"_____no_output_____"
]
],
[
[
"# Layers\n\nThere are a few layers that were custom made. More importantly it is udeful to make this custom layers for the layers that try to incorporate style. This is as the inputs themselves are custom as you are inputing an image and a vector representing the style.\n\nResBlk is short for Residual Block, where it is predicting the residual (the difference between the original and the prediction).",
"_____no_output_____"
]
],
[
[
"class ResBlk(tf.keras.Model):\n def __init__(self, dim_in, dim_out, actv=layers.LeakyReLU(),\n normalize=False, downsample=False):\n super(ResBlk, self).__init__()\n self.actv = actv\n self.normalize = normalize\n self.downsample = downsample\n self.learned_sc = dim_in != dim_out\n self._build_weights(dim_in, dim_out)\n\n def _build_weights(self, dim_in, dim_out):\n self.conv1 = layers.Conv2D(dim_in, 3, padding='same')\n self.conv2 = layers.Conv2D(dim_out, 3, padding='same')\n if self.normalize:\n self.norm1 = InstanceNormalization()\n self.norm2 = InstanceNormalization()\n if self.learned_sc:\n self.conv1x1 = layers.Conv2D(dim_out, 1)\n\n def _shortcut(self, x):\n if self.learned_sc:\n x = self.conv1x1(x)\n if self.downsample:\n x = layers.AveragePooling2D(pool_size=(2,2), padding='same')(x)\n return x\n \n def _residual(self, x):\n if len(tf.shape(x))>4:\n x=tf.reshape(x,tf.shape(x)[1:])\n if self.normalize:\n x = self.norm1(x)\n x = self.actv(x)\n x = self.conv1(x)\n if self.downsample:\n x = layers.AveragePooling2D(pool_size=(2,2), padding='same')(x)\n if self.normalize:\n x = self.norm2(x)\n x = self.actv(x)\n x = self.conv2(x)\n return x\n\n def call(self, x):\n x = self._shortcut(x) + self._residual(x)\n return x / 2**(1/2) # unit variance",
"_____no_output_____"
]
],
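[
[
"# Hedged usage sketch (not part of the original notebook): a quick shape check for the ResBlk defined above.\n# It assumes the import cell and the ResBlk cell have already been run, and uses a made-up batch of 8\n# feature maps of size 28x28 with 8 channels. With downsample=True the spatial size halves and the\n# channel count grows to dim_out.\n_x = tf.random.normal((8, 28, 28, 8))\n_blk = ResBlk(dim_in=8, dim_out=16, normalize=True, downsample=True)\nprint(_blk(_x).shape)  # expected: (8, 14, 14, 16)",
"_____no_output_____"
]
],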
[
[
"AdaIN stands for Adaptive Instance Normalization. It is a type of normalization that allows to 'mix' two inputs. In this case we use the style vector to mix with our input x which is the image or part of the process of constructing this image.",
"_____no_output_____"
]
],
[
[
"class AdaIn(tf.keras.Model):\n def __init__(self, style_dim, num_features):\n super(AdaIn,self).__init__()\n self.norm = InstanceNormalization()\n self.lin = layers.Dense(num_features*2)\n\n def call(self, x, s):\n h=self.lin(s)\n h=tf.reshape(h, [1, tf.shape(h)[0], 1, tf.shape(h)[1]])\n gamma,beta=tf.split(h, 2, axis=3)\n return (1+gamma)*self.norm(x)+beta",
"_____no_output_____"
],
[
"class AdainResBlk(tf.keras.Model):\n def __init__(self, dim_in, dim_out, style_dim=16,\n actv=layers.LeakyReLU(), upsample=False):\n super(AdainResBlk, self).__init__()\n self.actv = actv\n self.upsample = upsample\n self.learned_sc = dim_in != dim_out\n self._build_weights(dim_in, dim_out, style_dim)\n\n def _build_weights(self, dim_in, dim_out, style_dim=16):\n self.conv1 = layers.Conv2D(dim_out, 3, padding='same')\n self.conv2 = layers.Conv2D(dim_out, 3, padding='same')\n self.norm1 = AdaIn(style_dim, dim_in)\n self.norm2 = AdaIn(style_dim, dim_out)\n if self.learned_sc:\n self.conv1x1 = layers.Conv2D(dim_out, 1)\n\n def _shortcut(self, x):\n if self.upsample:\n x = layers.UpSampling2D(size=(2,2), interpolation='nearest')(x)\n if self.learned_sc:\n x = self.conv1x1(x)\n return x\n\n def _residual(self, x, s):\n x = self.norm1(x, s)\n x = self.actv(x)\n if self.upsample:\n x = layers.UpSampling2D(size=(2,2), interpolation='nearest')(x)\n x = self.conv1(x)\n x = self.norm2(x, s)\n x = self.actv(x)\n x = self.conv2(x)\n return x\n\n def call(self, x, s):\n x = self._shortcut(x) + self._residual(x,s)\n return x / 2**(1/2) # unit variance",
"_____no_output_____"
]
],
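[
[
"# Hedged sketch (not part of the original notebook): how the AdaIn layer above mixes a feature map with a\n# style vector. The shapes are assumptions chosen to match how this notebook uses it (batch size 1,\n# style_dim=24); the layer rescales and shifts the normalized feature map with gamma/beta predicted from s.\n_x = tf.random.normal((1, 7, 7, 64))   # a decoder feature map\n_s = tf.random.normal((1, 24))         # a style vector\n_ada = AdaIn(style_dim=24, num_features=64)\nprint(_ada(_x, _s).shape)  # expected: (1, 7, 7, 64), same shape as the input feature map",
"_____no_output_____"
]
],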
[
[
"# Generator Class\n\nIn the generator we have two steps one for encoding the image into lower level information and one to decode back to the image. In this particular architecture the decoding uses the style to build back the image as it is an important part of the process. The decoding does not do this as we have the style encoder as an architecture that deals with this issue of generating a style vector for a particular image.",
"_____no_output_____"
]
],
[
[
"class Generator(tf.keras.Model):\n def __init__(self, img_size=28, style_dim=24, dim_in=8, max_conv_dim=128, repeat_num=2):\n super(Generator, self).__init__()\n self.img_size=img_size\n self.from_bw=layers.Conv2D(dim_in, 3, padding='same', input_shape=(1,img_size,img_size,1))\n self.encode=[]\n self.decode=[]\n self.to_bw=tf.keras.Sequential([InstanceNormalization(), layers.LeakyReLU(), layers.Conv2D(1, 1, padding='same')])\n\n for _ in range(repeat_num):\n dim_out = min(dim_in*2, max_conv_dim)\n self.encode.append(ResBlk(dim_in, dim_out, normalize=True, downsample=True))\n self.decode.insert(0, AdainResBlk(dim_out, dim_in, style_dim, upsample=True))\n dim_in = dim_out\n\n # bottleneck blocks\n for _ in range(2):\n self.encode.append(ResBlk(dim_out, dim_out, normalize=True))\n self.decode.insert(0, AdainResBlk(dim_out, dim_out, style_dim))\n\n def call(self, x, s):\n x = self.from_bw(x)\n cache = {}\n for block in self.encode:\n x = block(x)\n for block in self.decode:\n x = block(x, s)\n return self.to_bw(x)",
"_____no_output_____"
]
],
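[
[
"# Hedged sketch (not part of the original notebook): the generator maps an image plus a style vector to an\n# image of the same size. The shapes are assumptions matching this notebook's usage (batch of 1, 28x28\n# grayscale images, style_dim=24).\n_img = tf.random.normal((1, 28, 28, 1))\n_style = tf.random.normal((1, 24))\n_gen = Generator(img_size=28, style_dim=24)\nprint(_gen(_img, _style).shape)  # expected: (1, 28, 28, 1)",
"_____no_output_____"
]
],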
[
[
"# Mapping Network\n\nThe Mapping Network and the Style encoder are the parts of this architecture that make a difference in allowing style to be analyzed and put into our images. The mapping network will take as an input a latent code (represents images as a vector in a high dimensional space) and the domain in this case the domain is the number we are representing. And the style encoder will take as inputs an image and a domain.",
"_____no_output_____"
]
],
[
[
"class MappingNetwork(tf.keras.Model):\n def __init__(self, latent_dim=16, style_dim=24, num_domains=10):\n super(MappingNetwork,self).__init__()\n map_layers = [layers.Dense(128)]\n map_layers += [layers.ReLU()]\n for _ in range(2):\n map_layers += [layers.Dense(128)]\n map_layers += [layers.ReLU()]\n self.shared = tf.keras.Sequential(layers=map_layers)\n\n self.unshared = []\n for _ in range(num_domains):\n self.unshared += [tf.keras.Sequential(layers=[layers.Dense(128),\n layers.ReLU(),\n layers.Dense(128),\n layers.ReLU(),\n layers.Dense(128),\n layers.ReLU(),\n layers.Dense(style_dim)])]\n\n def call(self, z, y):\n h = self.shared(z)\n out = []\n for layer in self.unshared:\n out += [layer(h)]\n out = tf.stack(out, axis=1) # (batch, num_domains, style_dim)\n s = tf.gather(out, y, axis=1) # (batch, style_dim)\n return s",
"_____no_output_____"
]
],
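[
[
"# Hedged sketch (not part of the original notebook): the mapping network turns a random latent code plus a\n# target digit (the domain) into a style vector. latent_dim=16 and style_dim=24 follow the class defaults.\n_z = tf.random.normal((1, 16))\n_map = MappingNetwork(latent_dim=16, style_dim=24, num_domains=10)\nprint(_map(_z, 3).shape)  # style vector for the domain of digit 3, expected: (1, 24)",
"_____no_output_____"
]
],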
[
[
"# Style Encoder\n\nAn important thing to notice from the style encoder is that it takes as an input an image and outputs a style vector. Looking at the dimensions of these we notice we need to flatten out the image through the layers. This can usually be done in two ways. By flattening a 2 dimensional input to a 1 dimensional output a flatten layer, or as it was done hear by using enough pooling layers so that we downsample the size of our 2 dimensional input until it is one dimensional.",
"_____no_output_____"
]
],
[
[
"class StyleEncoder(tf.keras.Model):\n def __init__(self, img_size=28, style_dim=24, dim_in=16, num_domains=10, max_conv_dim=128, repeat_num=5):\n super(StyleEncoder,self).__init__()\n blocks = [layers.Conv2D(dim_in, 3, padding='same')]\n\n for _ in range(repeat_num): #repetition 1 sends to (b,14,14,d) 2 to (b,7,7,d) 3 to (b,4,4,d) 4 to (b,2,2,d) 5 to (b,1,1,d)\n dim_out = min(dim_in*2, max_conv_dim)\n blocks += [ResBlk(dim_in, dim_out, downsample=True)]\n dim_in = dim_out\n\n blocks += [layers.LeakyReLU()]\n blocks += [layers.Conv2D(dim_out, 4, padding='same')]\n blocks += [layers.LeakyReLU()]\n self.shared = tf.keras.Sequential(layers=blocks)\n\n self.unshared = []\n for _ in range(num_domains):\n self.unshared += [layers.Dense(style_dim)]\n\n def call(self, x, y):\n h = self.shared(x)\n h = tf.reshape(h,[tf.shape(h)[0], tf.shape(h)[3]])\n out = []\n for layer in self.unshared:\n out += [layer(h)]\n out = tf.stack(out, axis=1) # (batch, num_domains, style_dim)\n s = tf.gather(out, y, axis=1) # (batch, style_dim)\n return s",
"_____no_output_____"
]
],
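[
[
"# Hedged sketch (not part of the original notebook): the style encoder extracts a style vector from an image\n# for a given domain (digit), so its output has the same shape as the mapping network's output.\n_img = tf.random.normal((1, 28, 28, 1))\n_enc = StyleEncoder(img_size=28, style_dim=24, num_domains=10)\nprint(_enc(_img, 7).shape)  # style of this image seen as a 7, expected: (1, 24)",
"_____no_output_____"
]
],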
[
[
"# Discriminator Class\n\nSimilarly to the Style encoder the input of the discriminator is an image and we need to downsample it until it is one dimensional.",
"_____no_output_____"
]
],
[
[
"class Discriminator(tf.keras.Model):\n def __init__(self, img_size=28, dim_in=16, num_domains=10, max_conv_dim=128, repeat_num=5):\n super(Discriminator, self).__init__()\n blocks = [layers.Conv2D(dim_in, 3, padding='same')]\n\n for _ in range(repeat_num): #repetition 1 sends to (b,14,14,d) 2 to (b,7,7,d) 3 to (b,4,4,d) 4 to (b,2,2,d) 5 to (b,1,1,d)\n dim_out = min(dim_in*2, max_conv_dim)\n blocks += [ResBlk(dim_in, dim_out, downsample=True)]\n dim_in = dim_out\n\n blocks += [layers.LeakyReLU()]\n blocks += [layers.Conv2D(dim_out, 4, padding='same')]\n blocks += [layers.LeakyReLU()]\n blocks += [layers.Conv2D(num_domains, 1, padding='same')]\n self.main = tf.keras.Sequential(layers=blocks)\n\n\n def call(self, x, y):\n out = self.main(x)\n out = tf.reshape(out, (tf.shape(out)[0], tf.shape(out)[3])) # (batch, num_domains)\n out = tf.gather(out, y, axis=1) # (batch)\n out = tf.reshape(out, [1])\n return out",
"_____no_output_____"
]
],
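[
[
"# Hedged sketch (not part of the original notebook): the discriminator scores an image as real or fake for\n# one particular domain (digit) and returns a single logit.\n_img = tf.random.normal((1, 28, 28, 1))\n_disc = Discriminator(img_size=28, num_domains=10)\nprint(_disc(_img, 7).shape)  # expected: (1,), the real/fake logit for domain 7",
"_____no_output_____"
]
],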
[
[
"# Loss Functions\n\nThe loss functions used are an important part of this model as it describes our goal when training and how to perform gradient descent. The discriminator loss function is the regular adversarial loss L_adv used in a GAN architecture. But furthermore we have three loss functions added.\n\nFor this loss functions if you want to see the mathematical formula I recommend looking at STAR GAN 2's paper. However I will explain what the loss tries to measure and a quick description of how it does so.\n\nL_sty is a style reconstruction loss. This tries to capture how well the style was captured on our output. It is computed as an expected value of the distance between the target style vector and the style vector that our style encoder predicts for the generated image.\n\nL_ds is a style diversification loss. It tries to capture that the images produced are different to promote a variety of images produced. It is computed as the expected value of the distance between the images (l_1 norm) generated when using two different styles and the same sources. \n\nL_cyc is a characteristic preserving loss. The cyc comes from cyclic as we measusre the distance between the original image and the image generated by using an image generated by this image and the style our style encoder provides as an input. (Notice we use the image generated by the image generated, so that we use the generator two times.)\n\nIn the end the total loss function is expressed as\n\nL_adv + lambda_sty * L_sty + lambda_ds * L_ds + lambda_cyc * L_cyc",
"_____no_output_____"
]
],
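[
[
"To make the description above concrete, a hedged reconstruction of the formulas (domain arguments suppressed for readability; see the STAR GAN v2 paper for the exact definitions): $\\mathcal{L}_{sty}=\\mathbb{E}\\big[\\lVert s-E(G(x,s))\\rVert_1\\big]$, $\\mathcal{L}_{ds}=\\mathbb{E}\\big[\\lVert G(x,s_1)-G(x,s_2)\\rVert_1\\big]$ and $\\mathcal{L}_{cyc}=\\mathbb{E}\\big[\\lVert x-G(G(x,s),\\hat{s})\\rVert_1\\big]$, where $E$ is the style encoder, $s, s_1, s_2$ are target style vectors and $\\hat{s}$ is the style the encoder extracts from the original image $x$. The generator objective implemented in the code below is $\\mathcal{L}_{adv}+\\lambda_{sty}\\mathcal{L}_{sty}-\\lambda_{ds}\\mathcal{L}_{ds}+\\lambda_{cyc}\\mathcal{L}_{cyc}$.",
"_____no_output_____"
]
],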
[
[
"def moving_average(model, model_test, beta=0.999):\n for i in range(len(model.weights)):\n model_test.weights[i] = (1-beta)*model.weights[i] + beta*model_test.weights[i]",
"_____no_output_____"
],
[
"def adv_loss(logits, target):\n assert target in [1, 0]\n targets = tf.fill(tf.shape(logits), target)\n loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)(targets, logits)\n return loss",
"_____no_output_____"
],
[
"def r1_reg(d_out, x_in, g):\n # zero-centered gradient penalty for real images\n batch_size = tf.shape(x_in)[0]\n grad_dout=g.gradient(d_out, x_in)\n #grad_dout = tf.gradients(ys=d_out, xs=x_in)\n grad_dout2 = tf.square(grad_dout)\n grad_dout2 = tf.reshape(grad_dout2,[batch_size, tf.shape(grad_dout2)[1]*tf.shape(grad_dout2)[2]])\n reg = 0.5 * tf.math.reduce_mean(tf.math.reduce_sum(grad_dout2, axis=1))\n return reg",
"_____no_output_____"
],
[
"def compute_d_loss(nets, args, x_real, y_org, y_trg, z_trg=None, x_ref=None):\n assert (z_trg is None) != (x_ref is None)\n # with real images\n with tf.GradientTape() as g:\n g.watch(x_real)\n out = nets['discriminator'](x_real, y_org)\n loss_real = adv_loss(out, 1)\n loss_reg = r1_reg(out, x_real, g)\n\n # with fake images\n if z_trg is not None:\n s_trg = nets['mapping_network'](z_trg, y_trg)\n else: # x_ref is not None\n s_trg = nets['style_encoder'](x_ref, y_trg)\n\n x_fake = nets['generator'](x_real, s_trg)\n out = nets['discriminator'](x_fake, y_trg)\n loss_fake = adv_loss(out, 0)\n\n loss = loss_real + loss_fake + args['lambda_reg'] * loss_reg\n return loss, {'real': loss_real, 'fake':loss_fake, 'reg':loss_reg}",
"_____no_output_____"
],
[
"def compute_g_loss(nets, args, x_real, y_org, y_trg, z_trgs=None, x_refs=None):\n assert (z_trgs is None) != (x_refs is None)\n if z_trgs is not None:\n z_trg, z_trg2 = z_trgs\n if x_refs is not None:\n x_ref, x_ref2 = x_refs\n\n # adversarial loss\n if z_trgs is not None:\n s_trg = nets['mapping_network'](z_trg, y_trg)\n else:\n s_trg = nets['style_encoder'](x_ref, y_trg)\n\n x_fake = nets['generator'](x_real, s_trg)\n out = nets['discriminator'](x_fake, y_trg)\n loss_adv = adv_loss(out, 1)\n\n # style reconstruction loss\n s_pred = nets['style_encoder'](x_fake, y_trg)\n loss_sty = tf.math.reduce_mean(tf.abs(s_pred - s_trg))\n\n # diversity sensitive loss\n if z_trgs is not None:\n s_trg2 = nets['mapping_network'](z_trg2, y_trg)\n else:\n s_trg2 = nets['style_encoder'](x_ref2, y_trg)\n x_fake2 = nets['generator'](x_real, s_trg2)\n loss_ds = tf.math.reduce_mean(tf.abs(x_fake - x_fake2))\n\n # cycle-consistency loss\n s_org = nets['style_encoder'](x_real, y_org)\n x_rec = nets['generator'](x_fake, s_org)\n loss_cyc = tf.math.reduce_mean(tf.abs(x_rec - x_real))\n\n loss = loss_adv + args['lambda_sty'] * loss_sty \\\n - args['lambda_ds'] * loss_ds + args['lambda_cyc'] * loss_cyc\n return loss, {'adv':loss_adv, 'sty':loss_sty, 'ds':loss_ds, 'cyc':loss_cyc}",
"_____no_output_____"
]
],
[
[
"# The Model\n\nHere we introduce the class Solver which is the most important class as this will represent our whole model. It will initiate all of our neural networks as well as train our network.",
"_____no_output_____"
]
],
[
[
"class Solver(tf.keras.Model):\n def __init__(self, args):\n super(Solver, self).__init__()\n self.args = args\n self.step=0\n\n self.nets, self.nets_ema = self.build_model(self.args)\n # below setattrs are to make networks be children of Solver, e.g., for self.to(self.device)\n for name in self.nets.keys():\n setattr(self, name, self.nets[name])\n for name in self.nets_ema.keys():\n setattr(self, name + '_ema', self.nets_ema[name])\n\n if args['mode'] == 'train':\n self.optims = {}\n for net in self.nets.keys():\n self.optims[net] = tf.keras.optimizers.Adam(learning_rate= args['f_lr'] if net == 'mapping_network' else args['lr'], \n beta_1=args['beta1'], beta_2=args['beta2'], \n epsilon=args['weight_decay'])\n\n self.ckptios = [tf.train.Checkpoint(model=net) for net in self.nets.values()]\n self.ckptios += [tf.train.Checkpoint(model=net_ema) for net_ema in self.nets_ema.values()]\n self.ckptios += [tf.train.Checkpoint(optimizer=optim) for optim in self.optims.values()]\n else:\n self.ckptios = [tf.train.Checkpoint(model=net_ema) for net_ema in self.nets_ema.values()]\n\n #for name in self.nets.keys():\n # Do not initialize the FAN parameters\n # print('Initializing %s...' % name)\n #self.nets[name].apply(initializer=tf.keras.initializers.HeNormal)\n\n def build_model(self, args):\n generator = Generator(args['img_size'], args['style_dim'])\n mapping_network = MappingNetwork(args['latent_dim'], args['style_dim'], args['num_domains'])\n style_encoder = StyleEncoder(args['img_size'], args['style_dim'], args['num_domains'])\n discriminator = Discriminator(args['img_size'], args['num_domains'])\n generator_ema = Generator(args['img_size'], args['style_dim'])\n mapping_network_ema = MappingNetwork(args['latent_dim'], args['style_dim'], args['num_domains'])\n style_encoder_ema = StyleEncoder(args['img_size'], args['style_dim'], args['num_domains'])\n\n nets = {'generator':generator, 'mapping_network':mapping_network,\n 'style_encoder':style_encoder, 'discriminator':discriminator}\n nets_ema = {'generator':generator_ema, 'mapping_network':mapping_network_ema,\n 'style_encoder':style_encoder_ema}\n\n nets['discriminator'](inputs[0]['x_src'],inputs[0]['y_src'])\n s_trg = nets['mapping_network'](inputs[0]['z_trg'],inputs[0]['y_src'])\n nets['generator'](inputs[0]['x_src'],s_trg)\n nets['style_encoder'](inputs[0]['x_src'], inputs[0]['y_src'])\n s_trg = nets_ema['mapping_network'](inputs[0]['z_trg'],inputs[0]['y_src'])\n nets_ema['generator'](inputs[0]['x_src'],s_trg)\n nets_ema['style_encoder'](inputs[0]['x_src'], inputs[0]['y_src'])\n\n return nets, nets_ema\n\n def save(self):\n for net in solv.nets.keys():\n solv.nets[net].save_weights('MNIST_GAN_2/saved_model/'+net+'step'+str(self.step)+'.h5')\n for net in solv.nets_ema.keys():\n solv.nets[net].save_weights('MNIST_GAN_2/saved_model/'+net+'step'+str(self.step)+'_ema.h5')\n \n \n #for ckptio in self.ckptios:\n # ckptio.save(step)\n\n def load(self, step):\n self.step= step\n for net in solv.nets.keys():\n solv.nets[net].load_weights('MNIST_GAN_2/saved_model/'+net+'step'+str(step)+'.h5')\n for net in solv.nets_ema.keys():\n solv.nets[net].load_weights('MNIST_GAN_2/saved_model/'+net+'step'+str(step)+'_ema.h5')\n \n #for ckptio in self.ckptios:\n # ckptio.load(step)\n\n# def _reset_grad(self):\n# for optim in self.optims.values():\n# optim.zero_grad()\n\n def train(self, inputs, validations):\n \"\"\"\n inputs is a list of dictionaries that contains a source image, a reference image, domain and latent code information used to train the network\n 
validation is a list that contains validation images\n \"\"\"\n\n args = self.args\n nets = self.nets\n nets_ema = self.nets_ema\n optims = self.optims\n\n inputs_val=validations[0]\n\n # resume training if necessary\n if args['resume_iter'] > 0:\n self.load(args['resume_iter'])\n\n # remember the initial value of ds weight\n initial_lambda_ds = args['lambda_ds']\n\n print('Start training...')\n start_time = time.time()\n for i in range(args['resume_iter'], args['total_iters']):\n self.step+=1\n # fetch images and labels\n input= inputs[i-args['resume_iter']]\n\n x_real, y_org = input['x_src'], input['y_src']\n x_ref, x_ref2, y_trg = input['x_ref'], input['x_ref2'], input['y_ref']\n z_trg, z_trg2 = input['z_trg'], input['z_trg2']\n\n #print(1.5)\n\n # train the discriminator\n with tf.GradientTape() as g:\n g.watch(nets['discriminator'].weights)\n\n d_loss, d_losses_latent = compute_d_loss(\n nets, args, x_real, y_org, y_trg, z_trg=z_trg)\n #self._reset_grad()\n #d_loss.backward()\n grad=g.gradient(d_loss, nets['discriminator'].weights)\n #optims['discriminator'].get_gradients(d_loss, nets['discriminator'].weights)\n optims['discriminator'].apply_gradients(zip(grad, nets['discriminator'].weights))\n\n #print(2)\n\n with tf.GradientTape() as g:\n g.watch(nets['discriminator'].weights)\n d_loss, d_losses_ref = compute_d_loss(\n nets, args, x_real, y_org, y_trg, x_ref=x_ref)\n #self._reset_grad()\n #d_loss.backward()\n grad=g.gradient(d_loss, nets['discriminator'].weights)\n optims['discriminator'].apply_gradients(zip(grad, nets['discriminator'].weights))\n\n #print(3)\n\n # train the generator\n with tf.GradientTape(persistent=True) as g:\n g.watch(nets['generator'].weights)\n g.watch(nets['mapping_network'].weights)\n g.watch(nets['style_encoder'].weights)\n g_loss, g_losses_latent = compute_g_loss(\n nets, args, x_real, y_org, y_trg, z_trgs=[z_trg, z_trg2])\n #self._reset_grad()\n #g_loss.backward()\n grad=g.gradient(g_loss, nets['generator'].weights)\n optims['generator'].apply_gradients(zip(grad, nets['generator'].weights))\n grad=g.gradient(g_loss, nets['mapping_network'].weights)\n optims['mapping_network'].apply_gradients(zip(grad, nets['mapping_network'].weights))\n grad=g.gradient(g_loss, nets['style_encoder'].weights)\n optims['style_encoder'].apply_gradients(zip(grad, nets['style_encoder'].weights))\n del g\n\n #print(4)\n with tf.GradientTape(persistent=True) as g:\n g.watch(nets['generator'].weights)\n g_loss, g_losses_ref = compute_g_loss(\n nets, args, x_real, y_org, y_trg, x_refs=[x_ref, x_ref2])\n #self._reset_grad()\n #g_loss.backward()\n grad=g.gradient(g_loss, nets['generator'].weights)\n optims['generator'].apply_gradients(zip(grad, nets['generator'].weights))\n\n #print(5)\n\n # compute moving average of network parameters\n moving_average(nets['generator'], nets_ema['generator'], beta=0.999)\n moving_average(nets['mapping_network'], nets_ema['mapping_network'], beta=0.999)\n moving_average(nets['style_encoder'], nets_ema['style_encoder'], beta=0.999)\n\n #print(6)\n\n # decay weight for diversity sensitive loss\n if args['lambda_ds'] > 0:\n args['lambda_ds'] -= (initial_lambda_ds / args['ds_iter'])\n\n # print out log info\n if (i+1) % args['print_every'] == 0:\n elapsed = time.time() - start_time\n elapsed = str(datetime.timedelta(seconds=elapsed))[:-7]\n log = \"Elapsed time [%s], Iteration [%i/%i], \" % (elapsed, i+1, args['total_iters'])\n all_losses = {}\n for loss, prefix in [(d_losses_latent,'D/latent_'), (d_losses_ref,'D/ref_'), \n 
(g_losses_latent,'G/latent_'), (g_losses_ref,'G/ref_')]:\n for key, value in loss.items():\n all_losses[prefix + key] = value\n all_losses['G/lambda_ds'] = args['lambda_ds']\n for key, value in all_losses.items():\n if key!= 'G/lambda_ds':\n print(log+key, value.numpy())\n else:\n print(log+key, value)\n\n # generate images for debugging\n #if (i+1) % args['sample_every'] == 0:\n # os.makedirs(args['sample_dir'], exist_ok=True)\n # debug_image(nets_ema, args, inputs=inputs_val, step=i+1)\n\n # save model checkpoints\n if (i+1) % args['save_every'] == 0:\n for net in solv.nets.keys():\n solv.nets[net].save_weights('MNIST_GAN_2/saved_model/'+net+'step'+str(self.step)+'.h5')\n for net in solv.nets_ema.keys():\n solv.nets[net].save_weights('MNIST_GAN_2/saved_model/'+net+'step'+str(self.step)+'_ema.h5')\n \n \n # self._save_checkpoint(step=i+1)\n\n def sample(self, src, ref):\n \"\"\"\n src source image that we want to modify\n ref pair of reference image and domain\n\n generates an image that changes source image into the style of the reference image \n \"\"\"\n args = self.args\n nets_ema = self.nets_ema\n os.makedirs(args['result_dir'], exist_ok=True)\n self._load_checkpoint(args['resume_iter'])\n\n fname = ospj(args['result_dir'], 'reference.jpg')\n print('Working on {}...'.format(fname))\n translate_using_reference(nets_ema, args, src, ref[0], ref[1], fname)",
"_____no_output_____"
]
],
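[
[
"# Hedged note (not from the original notebook): build_model above runs each network once on inputs[0] to\n# create its weights, so the global `inputs` list from the Data Loading section below (and the `args`\n# dictionary from the Parameters section) must exist before a Solver can be constructed. A minimal usage\n# sketch, kept commented out for that reason:\n# _solv = Solver(args)\n# print(_solv.nets.keys())      # generator, mapping_network, style_encoder, discriminator\n# print(_solv.nets_ema.keys())  # generator, mapping_network, style_encoder",
"_____no_output_____"
]
],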
[
[
"# Data Loading and Preprocessing",
"_____no_output_____"
]
],
[
[
"(trainX, trainy), (valX, valy) = load_data()\n\ntrainX=tf.reshape(trainX, (60000,1,28,28,1))\nvalX=tf.reshape(valX, (10000,1,28,28,1))",
"_____no_output_____"
],
[
"\ninputs=[]\nlatent_dim=8\nfor i in range(6000):\n i=i+36000\n if i % 2000==1999:\n print(i+1)\n input={}\n input['x_src']=tf.cast(trainX[i],tf.float32)\n input['y_src']=int(trainy[i])\n n=np.random.randint(0,60000)\n input['x_ref']=tf.cast(trainX[n],tf.float32)\n input['x_ref2']=tf.cast(trainX[np.random.randint(0,60000)],tf.float32)\n input['y_ref']=int(trainy[n])\n input['z_trg']=tf.random.normal((1,latent_dim))\n input['z_trg2']=tf.random.normal((1,latent_dim))\n inputs.append(input)",
"38000\n40000\n42000\n"
]
],
[
[
"# Parameters\n\nThis dictionary contains the different parameters we use to run the model.",
"_____no_output_____"
]
],
[
[
"args={'img_size':28,\n 'style_dim':24,\n 'latent_dim':16,\n 'num_domains':10,\n 'lambda_reg':1, \n 'lambda_ds':1,\n 'lambda_sty':10,\n 'lambda_cyc':10,\n 'hidden_dim':128,\n 'resume_iter':0,\n 'ds_iter':6000, \n 'total_iters':6000,\n 'batch_size':8,\n 'val_batch_size':32, \n 'lr':1e-4,\n 'f_lr':1e-6,\n 'beta1':0,\n 'beta2':0.99,\n 'weight_decay':1e-4,\n 'num_outs_per_domain':4,\n 'mode': 'train', #train,sample,eval\n 'seed':0,\n 'train_img_dir':'GAN/data/train',\n 'val_img_dir': 'GAN/data/val',\n 'sample_dir':'GAN/res/samples',\n 'checkpoint_dir':'GAN/res/checkpoints',\n 'eval_dir':'GAN/res/eval',\n 'result_dir':'GAN/res/results',\n 'src_dir':'GAN/data/src', \n 'ref_dir':'GAN/data/ref',\n 'print_every': 500,\n 'sample_every':200,\n 'save_every':1000,\n 'eval_every':1000 }",
"_____no_output_____"
]
],
[
[
"# Load Model",
"_____no_output_____"
]
],
[
[
"solv=Solver(args)\nsolv.build_model(args)\nsolv.load(96000)",
"_____no_output_____"
]
],
[
[
"# Training",
"_____no_output_____"
]
],
[
[
"with tf.device('/device:GPU:0'):\n solv.train(inputs, inputs)",
"Start training...\nWARNING:tensorflow:Calling GradientTape.gradient on a persistent tape inside its context is significantly less efficient than calling it outside the context (it causes the gradient ops to be recorded on the tape, leading to increased CPU and memory usage). Only call GradientTape.gradient inside the context if you actually want to trace the gradient in order to compute higher order derivatives.\nWARNING:tensorflow:Calling GradientTape.gradient on a persistent tape inside its context is significantly less efficient than calling it outside the context (it causes the gradient ops to be recorded on the tape, leading to increased CPU and memory usage). Only call GradientTape.gradient inside the context if you actually want to trace the gradient in order to compute higher order derivatives.\nWARNING:tensorflow:Calling GradientTape.gradient on a persistent tape inside its context is significantly less efficient than calling it outside the context (it causes the gradient ops to be recorded on the tape, leading to increased CPU and memory usage). Only call GradientTape.gradient inside the context if you actually want to trace the gradient in order to compute higher order derivatives.\nWARNING:tensorflow:Calling GradientTape.gradient on a persistent tape inside its context is significantly less efficient than calling it outside the context (it causes the gradient ops to be recorded on the tape, leading to increased CPU and memory usage). Only call GradientTape.gradient inside the context if you actually want to trace the gradient in order to compute higher order derivatives.\nElapsed time [0:18:08], Iteration [500/6000], D/latent_real 5.933643e-06\nElapsed time [0:18:08], Iteration [500/6000], D/latent_fake 1.3552595e-08\nElapsed time [0:18:08], Iteration [500/6000], D/latent_reg 5.7575995e-05\nElapsed time [0:18:08], Iteration [500/6000], D/ref_real 5.965053e-06\nElapsed time [0:18:08], Iteration [500/6000], D/ref_fake 1.6118185e-16\nElapsed time [0:18:08], Iteration [500/6000], D/ref_reg 5.745954e-05\nElapsed time [0:18:08], Iteration [500/6000], G/latent_adv 18.11265\nElapsed time [0:18:08], Iteration [500/6000], G/latent_sty 0.03147651\nElapsed time [0:18:08], Iteration [500/6000], G/latent_ds 35.073742\nElapsed time [0:18:08], Iteration [500/6000], G/latent_cyc 35.35375\nElapsed time [0:18:08], Iteration [500/6000], G/ref_adv 37.213753\nElapsed time [0:18:08], Iteration [500/6000], G/ref_sty 941.0832\nElapsed time [0:18:08], Iteration [500/6000], G/ref_ds 0.6424262\nElapsed time [0:18:08], Iteration [500/6000], G/ref_cyc 35.773685\nElapsed time [0:18:08], Iteration [500/6000], G/lambda_ds 0.9166666666666758\nElapsed time [0:34:51], Iteration [1000/6000], D/latent_real 3.2877884e-05\nElapsed time [0:34:51], Iteration [1000/6000], D/latent_fake 5.9242324e-05\nElapsed time [0:34:51], Iteration [1000/6000], D/latent_reg 1.7900673e-05\nElapsed time [0:34:51], Iteration [1000/6000], D/ref_real 3.2785334e-05\nElapsed time [0:34:51], Iteration [1000/6000], D/ref_fake 3.3252056e-07\nElapsed time [0:34:51], Iteration [1000/6000], D/ref_reg 1.7910741e-05\nElapsed time [0:34:51], Iteration [1000/6000], G/latent_adv 9.766323\nElapsed time [0:34:51], Iteration [1000/6000], G/latent_sty 0.04155442\nElapsed time [0:34:51], Iteration [1000/6000], G/latent_ds 33.543137\nElapsed time [0:34:51], Iteration [1000/6000], G/latent_cyc 30.618359\nElapsed time [0:34:51], Iteration [1000/6000], G/ref_adv 14.91778\nElapsed time [0:34:51], Iteration [1000/6000], G/ref_sty 4931.98\nElapsed time [0:34:51], 
Iteration [1000/6000], G/ref_ds 3.2844734\nElapsed time [0:34:51], Iteration [1000/6000], G/ref_cyc 30.256683\nElapsed time [0:34:51], Iteration [1000/6000], G/lambda_ds 0.8333333333333517\nElapsed time [0:51:21], Iteration [1500/6000], D/latent_real 3.895065e-06\nElapsed time [0:51:21], Iteration [1500/6000], D/latent_fake 1.8259102e-09\nElapsed time [0:51:21], Iteration [1500/6000], D/latent_reg 2.7209067e-05\nElapsed time [0:51:21], Iteration [1500/6000], D/ref_real 3.889516e-06\nElapsed time [0:51:21], Iteration [1500/6000], D/ref_fake 3.044195e-13\nElapsed time [0:51:21], Iteration [1500/6000], D/ref_reg 2.7181919e-05\nElapsed time [0:51:21], Iteration [1500/6000], G/latent_adv 20.124987\nElapsed time [0:51:21], Iteration [1500/6000], G/latent_sty 0.042978738\nElapsed time [0:51:21], Iteration [1500/6000], G/latent_ds 6.781559\nElapsed time [0:51:21], Iteration [1500/6000], G/latent_cyc 24.195248\nElapsed time [0:51:21], Iteration [1500/6000], G/ref_adv 28.627546\nElapsed time [0:51:21], Iteration [1500/6000], G/ref_sty 2035.4679\nElapsed time [0:51:21], Iteration [1500/6000], G/ref_ds 1.2784641\nElapsed time [0:51:21], Iteration [1500/6000], G/ref_cyc 21.998203\nElapsed time [0:51:21], Iteration [1500/6000], G/lambda_ds 0.7500000000000275\nElapsed time [1:07:58], Iteration [2000/6000], D/latent_real 2.5237994e-06\nElapsed time [1:07:58], Iteration [2000/6000], D/latent_fake 1.6464643e-12\nElapsed time [1:07:58], Iteration [2000/6000], D/latent_reg 2.9553012e-05\nElapsed time [1:07:58], Iteration [2000/6000], D/ref_real 2.5142215e-06\nElapsed time [1:07:58], Iteration [2000/6000], D/ref_fake 1.6905084e-09\nElapsed time [1:07:58], Iteration [2000/6000], D/ref_reg 2.9505836e-05\nElapsed time [1:07:58], Iteration [2000/6000], G/latent_adv 27.145195\nElapsed time [1:07:58], Iteration [2000/6000], G/latent_sty 0.083152466\nElapsed time [1:07:58], Iteration [2000/6000], G/latent_ds 32.264793\nElapsed time [1:07:58], Iteration [2000/6000], G/latent_cyc 35.44804\nElapsed time [1:07:58], Iteration [2000/6000], G/ref_adv 20.16832\nElapsed time [1:07:58], Iteration [2000/6000], G/ref_sty 2833.9543\nElapsed time [1:07:58], Iteration [2000/6000], G/ref_ds 5.3448505\nElapsed time [1:07:58], Iteration [2000/6000], G/ref_cyc 35.21791\nElapsed time [1:07:58], Iteration [2000/6000], G/lambda_ds 0.6666666666667034\nElapsed time [1:24:39], Iteration [2500/6000], D/latent_real 0.00011105127\nElapsed time [1:24:39], Iteration [2500/6000], D/latent_fake 1.5896394e-14\nElapsed time [1:24:39], Iteration [2500/6000], D/latent_reg 0.002541282\nElapsed time [1:24:39], Iteration [2500/6000], D/ref_real 0.000100846126\nElapsed time [1:24:39], Iteration [2500/6000], D/ref_fake 7.2359145e-16\nElapsed time [1:24:39], Iteration [2500/6000], D/ref_reg 0.0024638632\nElapsed time [1:24:39], Iteration [2500/6000], G/latent_adv 31.634012\nElapsed time [1:24:39], Iteration [2500/6000], G/latent_sty 0.03502376\nElapsed time [1:24:39], Iteration [2500/6000], G/latent_ds 17.42342\nElapsed time [1:24:39], Iteration [2500/6000], G/latent_cyc 18.593584\nElapsed time [1:24:39], Iteration [2500/6000], G/ref_adv 34.839787\nElapsed time [1:24:39], Iteration [2500/6000], G/ref_sty 3970.8281\nElapsed time [1:24:39], Iteration [2500/6000], G/ref_ds 0.8345002\nElapsed time [1:24:39], Iteration [2500/6000], G/ref_cyc 18.072935\nElapsed time [1:24:39], Iteration [2500/6000], G/lambda_ds 0.5833333333333792\nElapsed time [1:41:22], Iteration [3000/6000], D/latent_real 5.3445833e-06\nElapsed time [1:41:22], Iteration [3000/6000], 
D/latent_fake 8.820166e-10\nElapsed time [1:41:22], Iteration [3000/6000], D/latent_reg 6.4297914e-05\nElapsed time [1:41:22], Iteration [3000/6000], D/ref_real 5.36975e-06\nElapsed time [1:41:22], Iteration [3000/6000], D/ref_fake 3.0929835e-13\nElapsed time [1:41:22], Iteration [3000/6000], D/ref_reg 6.230037e-05\nElapsed time [1:41:22], Iteration [3000/6000], G/latent_adv 20.843857\nElapsed time [1:41:22], Iteration [3000/6000], G/latent_sty 0.03444063\nElapsed time [1:41:22], Iteration [3000/6000], G/latent_ds 9.778823\nElapsed time [1:41:22], Iteration [3000/6000], G/latent_cyc 28.097446\nElapsed time [1:41:22], Iteration [3000/6000], G/ref_adv 28.774145\nElapsed time [1:41:22], Iteration [3000/6000], G/ref_sty 2043.1276\nElapsed time [1:41:22], Iteration [3000/6000], G/ref_ds 0.5735893\nElapsed time [1:41:22], Iteration [3000/6000], G/ref_cyc 28.21649\nElapsed time [1:41:22], Iteration [3000/6000], G/lambda_ds 0.5000000000000551\nElapsed time [1:58:17], Iteration [3500/6000], D/latent_real 4.4489816e-06\nElapsed time [1:58:17], Iteration [3500/6000], D/latent_fake 3.4955981e-12\nElapsed time [1:58:17], Iteration [3500/6000], D/latent_reg 1.8419665e-05\nElapsed time [1:58:17], Iteration [3500/6000], D/ref_real 4.4123353e-06\nElapsed time [1:58:17], Iteration [3500/6000], D/ref_fake 4.5671822e-18\nElapsed time [1:58:17], Iteration [3500/6000], D/ref_reg 1.8420711e-05\nElapsed time [1:58:17], Iteration [3500/6000], G/latent_adv 26.39033\nElapsed time [1:58:17], Iteration [3500/6000], G/latent_sty 0.024623169\nElapsed time [1:58:17], Iteration [3500/6000], G/latent_ds 2.0222964\nElapsed time [1:58:17], Iteration [3500/6000], G/latent_cyc 28.361814\nElapsed time [1:58:17], Iteration [3500/6000], G/ref_adv 40.008335\nElapsed time [1:58:17], Iteration [3500/6000], G/ref_sty 825.08575\nElapsed time [1:58:17], Iteration [3500/6000], G/ref_ds 0.13139339\nElapsed time [1:58:17], Iteration [3500/6000], G/ref_cyc 27.434856\nElapsed time [1:58:17], Iteration [3500/6000], G/lambda_ds 0.4166666666667309\nElapsed time [2:14:58], Iteration [4000/6000], D/latent_real 3.6162882e-07\nElapsed time [2:14:58], Iteration [4000/6000], D/latent_fake 5.1923527e-10\nElapsed time [2:14:58], Iteration [4000/6000], D/latent_reg 3.846259e-05\nElapsed time [2:14:58], Iteration [4000/6000], D/ref_real 3.6288532e-07\nElapsed time [2:14:58], Iteration [4000/6000], D/ref_fake 5.942721e-08\nElapsed time [2:14:58], Iteration [4000/6000], D/ref_reg 3.843496e-05\nElapsed time [2:14:58], Iteration [4000/6000], G/latent_adv 21.378086\nElapsed time [2:14:58], Iteration [4000/6000], G/latent_sty 0.106728025\nElapsed time [2:14:58], Iteration [4000/6000], G/latent_ds 20.904701\nElapsed time [2:14:58], Iteration [4000/6000], G/latent_cyc 42.642372\nElapsed time [2:14:58], Iteration [4000/6000], G/ref_adv 17.064861\nElapsed time [2:14:58], Iteration [4000/6000], G/ref_sty 51.672703\nElapsed time [2:14:58], Iteration [4000/6000], G/ref_ds 2.3570015\nElapsed time [2:14:58], Iteration [4000/6000], G/ref_cyc 42.118973\nElapsed time [2:14:58], Iteration [4000/6000], G/lambda_ds 0.33333333333340676\nElapsed time [2:31:26], Iteration [4500/6000], D/latent_real 1.2024437e-06\nElapsed time [2:31:26], Iteration [4500/6000], D/latent_fake 9.023747e-11\nElapsed time [2:31:26], Iteration [4500/6000], D/latent_reg 2.4809478e-05\nElapsed time [2:31:26], Iteration [4500/6000], D/ref_real 1.2103014e-06\nElapsed time [2:31:26], Iteration [4500/6000], D/ref_fake 1.2240717e-15\nElapsed time [2:31:26], Iteration [4500/6000], D/ref_reg 
2.4771667e-05\nElapsed time [2:31:26], Iteration [4500/6000], G/latent_adv 23.126303\nElapsed time [2:31:26], Iteration [4500/6000], G/latent_sty 0.05421421\nElapsed time [2:31:26], Iteration [4500/6000], G/latent_ds 22.202696\nElapsed time [2:31:26], Iteration [4500/6000], G/latent_cyc 26.787922\nElapsed time [2:31:26], Iteration [4500/6000], G/ref_adv 34.32032\nElapsed time [2:31:26], Iteration [4500/6000], G/ref_sty 1543.9764\nElapsed time [2:31:26], Iteration [4500/6000], G/ref_ds 0.5546016\nElapsed time [2:31:26], Iteration [4500/6000], G/ref_cyc 26.969646\nElapsed time [2:31:26], Iteration [4500/6000], G/lambda_ds 0.2500000000000826\nElapsed time [2:47:51], Iteration [5000/6000], D/latent_real 3.7615814e-06\nElapsed time [2:47:51], Iteration [5000/6000], D/latent_fake 6.1679136e-19\nElapsed time [2:47:51], Iteration [5000/6000], D/latent_reg 1.4114717e-05\nElapsed time [2:47:51], Iteration [5000/6000], D/ref_real 3.7170662e-06\nElapsed time [2:47:51], Iteration [5000/6000], D/ref_fake 1.3349664e-15\nElapsed time [2:47:51], Iteration [5000/6000], D/ref_reg 1.4105939e-05\nElapsed time [2:47:51], Iteration [5000/6000], G/latent_adv 42.000294\nElapsed time [2:47:51], Iteration [5000/6000], G/latent_sty 0.03513376\nElapsed time [2:47:51], Iteration [5000/6000], G/latent_ds 2.6810305\nElapsed time [2:47:51], Iteration [5000/6000], G/latent_cyc 16.486334\nElapsed time [2:47:51], Iteration [5000/6000], G/ref_adv 34.27865\nElapsed time [2:47:51], Iteration [5000/6000], G/ref_sty 1753.359\nElapsed time [2:47:51], Iteration [5000/6000], G/ref_ds 0.62547046\nElapsed time [2:47:51], Iteration [5000/6000], G/ref_cyc 17.005342\nElapsed time [2:47:51], Iteration [5000/6000], G/lambda_ds 0.16666666666674457\nElapsed time [3:04:31], Iteration [5500/6000], D/latent_real 9.165446e-07\nElapsed time [3:04:31], Iteration [5500/6000], D/latent_fake 9.630188e-10\nElapsed time [3:04:31], Iteration [5500/6000], D/latent_reg 2.2647702e-05\nElapsed time [3:04:31], Iteration [5500/6000], D/ref_real 9.2337206e-07\nElapsed time [3:04:31], Iteration [5500/6000], D/ref_fake 1.4190508e-07\nElapsed time [3:04:31], Iteration [5500/6000], D/ref_reg 2.254527e-05\nElapsed time [3:04:31], Iteration [5500/6000], G/latent_adv 20.79223\nElapsed time [3:04:31], Iteration [5500/6000], G/latent_sty 0.026682168\nElapsed time [3:04:31], Iteration [5500/6000], G/latent_ds 11.481898\nElapsed time [3:04:31], Iteration [5500/6000], G/latent_cyc 28.555185\nElapsed time [3:04:31], Iteration [5500/6000], G/ref_adv 15.918713\nElapsed time [3:04:31], Iteration [5500/6000], G/ref_sty 1588.65\nElapsed time [3:04:31], Iteration [5500/6000], G/ref_ds 0.9953184\nElapsed time [3:04:31], Iteration [5500/6000], G/ref_cyc 29.72444\nElapsed time [3:04:31], Iteration [5500/6000], G/lambda_ds 0.08333333333341\nElapsed time [3:21:09], Iteration [6000/6000], D/latent_real 3.292037e-05\nElapsed time [3:21:09], Iteration [6000/6000], D/latent_fake 2.396092e-10\nElapsed time [3:21:09], Iteration [6000/6000], D/latent_reg 3.2086697e-05\nElapsed time [3:21:09], Iteration [6000/6000], D/ref_real 3.259703e-05\nElapsed time [3:21:09], Iteration [6000/6000], D/ref_fake 7.805121e-13\nElapsed time [3:21:09], Iteration [6000/6000], D/ref_reg 3.2122865e-05\nElapsed time [3:21:09], Iteration [6000/6000], G/latent_adv 22.151058\nElapsed time [3:21:09], Iteration [6000/6000], G/latent_sty 0.03772318\nElapsed time [3:21:09], Iteration [6000/6000], G/latent_ds 4.0022397\nElapsed time [3:21:09], Iteration [6000/6000], G/latent_cyc 45.717335\nElapsed time [3:21:09], 
Iteration [6000/6000], G/ref_adv 27.884796\nElapsed time [3:21:09], Iteration [6000/6000], G/ref_sty 1092.5536\nElapsed time [3:21:09], Iteration [6000/6000], G/ref_ds 1.0627509\nElapsed time [3:21:09], Iteration [6000/6000], G/ref_cyc 45.289036\nElapsed time [3:21:09], Iteration [6000/6000], G/lambda_ds 7.683496850915961e-14\n"
]
],
[
[
"# Results\n\nIn this first cell we show an image where the rows represent a source image and the columns the style they are trying to mimic. We can see in this case that that the image still highly resembles the source image but has obtained some characteristics depending on the style of our reference. In most cases this style is mostly about the thickness of the lines, but it does vary slightly in other ways.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as pyplot\nfor i in range(4):\n\tpyplot.subplot(5,5,2+i)\n\tpyplot.axis('off')\n\tpyplot.imshow(np.reshape(inputs[i]['x_ref'],[28,28]), cmap='gray_r')\nfor i in range(4):\n\tpyplot.subplot(5, 5, 5*(i+1) + 1)\n\tpyplot.axis('off')\n\tpyplot.imshow(np.reshape(inputs[i]['x_src'], [28,28]), cmap='gray_r')\n\tfor j in range(4):\n\t\tpyplot.subplot(5, 5, 5*(i+1) + j +2)\n\t\tpyplot.axis('off')\n\t\tpyplot.imshow(np.reshape(solv.nets['generator'](inputs[i]['x_src'],solv.nets['style_encoder'](inputs[j]['x_ref'],inputs[j]['y_ref'])).numpy(), [28,28]), cmap='gray_r')\npyplot.show()\n\n#left is source and top is the target trying to mimic its font",
"_____no_output_____"
]
],
[
[
"Below we generate random styles and see the output it generates. We notice that it is quite likely the images are distorted in this case, compared to when using the style of an already existing image it seems it would usually have a good quality.",
"_____no_output_____"
]
],
[
[
"for i in range(5):\n\tpyplot.subplot(5,5,1+i)\n\tpyplot.axis('off')\n\tpyplot.imshow(np.reshape(solv.nets['generator'](inputs[0]['x_src'],tf.random.normal((1,24))).numpy(), [28,28]), cmap='gray_r')",
"_____no_output_____"
]
],
[
[
"Here we can see the process of how the image transforms into the target. In these small images there is not too much that is changing but we can still appreciate the process.",
"_____no_output_____"
]
],
[
[
"s1=solv.nets['style_encoder'](inputs[3]['x_src'],inputs[3]['y_src'])\ns2=solv.nets['style_encoder'](inputs[3]['x_ref'],inputs[3]['y_ref'])\nfor i in range(5):\n pyplot.subplot(5,5,1+i)\n pyplot.axis('off')\n s=(1-i/5)*s1+i/5*s2\n pyplot.imshow(np.reshape(solv.nets['generator'](inputs[3]['x_src'],s).numpy(), [28,28]), cmap='gray_r')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e76e267c2e5d133a8b5a12361322df347471764a | 2,634 | ipynb | Jupyter Notebook | Attic/repo/git-bakup.ipynb | tonybutzer/etscrum | e36fb68c01e7ce8581a6d16cccc71cad369eca24 | [
"MIT"
] | null | null | null | Attic/repo/git-bakup.ipynb | tonybutzer/etscrum | e36fb68c01e7ce8581a6d16cccc71cad369eca24 | [
"MIT"
] | null | null | null | Attic/repo/git-bakup.ipynb | tonybutzer/etscrum | e36fb68c01e7ce8581a6d16cccc71cad369eca24 | [
"MIT"
] | null | null | null | 21.414634 | 140 | 0.465452 | [
[
[
"# git-bakup",
"_____no_output_____"
]
],
[
[
"USER='tonybutzer'\nAPI_TOKEN='ATOKEN'\nGIT_API_URL='https://api.github.com'\n\ndef get_api(url):\n try:\n request = urllib2.Request(GIT_API_URL + url)\n base64string = base64.encodestring('%s/token:%s' % (USER, API_TOKEN)).replace('\\n', '')\n request.add_header(\"Authorization\", \"Basic %s\" % base64string)\n result = urllib2.urlopen(request)\n result.close()\n except:\n print ('Failed to get api request from %s' % url)",
"_____no_output_____"
],
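A side note (not part of the stored notebook): the list of clone URLs that the curl/grep cell below produces could also be gathered in Python. This is a minimal sketch, assuming the public unauthenticated GitHub REST endpoint is sufficient; the helper name `list_clone_urls` is my own.

```python
import json
import urllib.request

def list_clone_urls(user):
    # Fetch the user's public repositories and return their clone URLs
    url = 'https://api.github.com/users/%s/repos?per_page=100' % user
    with urllib.request.urlopen(url) as response:
        return [repo['clone_url'] for repo in json.load(response)]

# Mirror the curl | grep cell below by writing the URLs to myrepos.txt
with open('myrepos.txt', 'w') as handle:
    handle.write('\n'.join(list_clone_urls('tonybutzer')) + '\n')
```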
[
"!curl \"https://api.github.com/users/tonybutzer/repos?per_page=1000\" | grep -w clone_url | grep -o '[^\"]\\+://.\\+.git' >myrepos.txt",
" % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n100 170k 0 170k 0 0 224k 0 --:--:-- --:--:-- --:--:-- 223k\n"
],
[
"%%bash\n\nmkdir -p ~/repo\n\n\nfor i in `cat myrepos.txt` ; do\n{\necho $i\n(cd ~/repo; git clone $i)\n}; done",
"Process is interrupted.\n"
],
[
"! ls ~/repo",
"active-fire\r\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e76e3a7409b2e91c40602df0b78cf253bebd8de4 | 1,867 | ipynb | Jupyter Notebook | Euler 094 - Almost equilateral triangles.ipynb | Radcliffe/project-euler | 5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38 | [
"MIT"
] | 6 | 2016-05-11T18:55:35.000Z | 2019-12-27T21:38:43.000Z | Euler 094 - Almost equilateral triangles.ipynb | Radcliffe/project-euler | 5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38 | [
"MIT"
] | null | null | null | Euler 094 - Almost equilateral triangles.ipynb | Radcliffe/project-euler | 5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38 | [
"MIT"
] | null | null | null | 22.493976 | 189 | 0.510445 | [
[
[
"Euler Problem 94\n================\n\nIt is easily proved that no equilateral triangle exists with integral length sides and integral area. However, the almost equilateral triangle 5-5-6 has an area of 12 square units.\n\nWe shall define an almost equilateral triangle to be a triangle for which two sides are equal and the third differs by no more than one unit.\n\nFind the sum of the perimeters of all almost equilateral triangles with integral side lengths and area and whose perimeters do not exceed one billion (1,000,000,000).",
"_____no_output_____"
]
],
[
[
"a, b, p, s = 1, 0, 0, 0\nwhile p <= 10**9:\n s += p\n a, b = 2*a + 3*b, a + 2*b\n p = 4*a*a\n\na, b, p = 1, 1, 0\nwhile p <= 10**9:\n s += p\n a, b = 2*a + 3*b, a + 2*b\n p = 2*a*a\n\nprint(s)\n",
"518408346\n"
]
],
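A brief aside (not part of the stored notebook): the loop above appears to rely on the fact that almost equilateral triangles with integral area correspond to solutions of the Pell-like equations x² − 3y² = 1 and x² − 3y² = −2, which the recurrence (x, y) → (2x + 3y, x + 2y) generates; each solution x yields a perimeter 4x² for the (c, c, c+1) family or 2x² for the (c, c, c−1) family. The sketch below, with my own helper names, reproduces the same total and double-checks every triangle with Heron's formula.

```python
from math import isqrt

def almost_equilateral_triangles(limit=10**9):
    # Re-generate the perimeters summed above via the same recurrence.
    # scale = 4 goes with sides (c, c, c+1); scale = 2 with sides (c, c, c-1).
    for (x, y), scale, delta in (((1, 0), 4, 1), ((1, 1), 2, -1)):
        while True:
            x, y = 2 * x + 3 * y, x + 2 * y
            p = scale * x * x
            if p > limit:
                break
            c = (p - delta) // 3            # length of the two equal sides
            yield p, c, c + delta           # perimeter, equal side, third side

def area_is_integral(a, b, c):
    # Heron's formula kept in integers: (4 * area)^2 equals this product
    squared = (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c)
    root = isqrt(squared)
    return root * root == squared and root % 4 == 0

total = 0
for p, c, third in almost_equilateral_triangles():
    assert p == 2 * c + third and area_is_integral(c, c, third)
    total += p
print(total)  # 518408346, matching the cell output above
```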
[
[
"**Discussion:** ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e76e5156c47c9095b63c473a73c7e4c55eab4283 | 130,361 | ipynb | Jupyter Notebook | notebooks/IMDB_vocab_size.ipynb | platycristate/ptah | 15369382fc48860cc5bcd6a201a8b250ae8cb516 | [
"MIT"
] | null | null | null | notebooks/IMDB_vocab_size.ipynb | platycristate/ptah | 15369382fc48860cc5bcd6a201a8b250ae8cb516 | [
"MIT"
] | 1 | 2021-06-11T12:01:33.000Z | 2021-06-11T12:01:33.000Z | notebooks/IMDB_vocab_size.ipynb | platycristate/ptah | 15369382fc48860cc5bcd6a201a8b250ae8cb516 | [
"MIT"
] | 1 | 2021-06-11T11:57:06.000Z | 2021-06-11T11:57:06.000Z | 314.122892 | 116,872 | 0.920605 | [
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport re\nimport spacy\nimport pickle\nimport time\nfrom collections import defaultdict\nimport sys\npath = \"../data/\"\nplt.style.use(\"seaborn-whitegrid\")\nplt.rcParams['figure.figsize'] = [8.0, 6.0]\nplt.rcParams['figure.dpi'] = 140\nplt.rcParams[\"axes.labelsize\"] = 14\n\n#np.random.seed(250)\nnlp = spacy.load(\"en_core_sci_lg\", disable=['ner', 'parser'])",
"_____no_output_____"
],
[
"import numpy as np\nimport spacy\nfrom tqdm import tqdm\nfrom collections import defaultdict\n\npath = \"../data/\"\n\ndef tokenize(string):\n doc = nlp.make_doc(string)\n words = [doc[i].text.lower() for i in range(len(doc)) if \n doc[i].is_alpha and not doc[i].is_stop]\n mn = 2\n ngrams= [' '.join(words[i:i+n]) for n in range(1, mn+1) for i in range(len(words)-n+1)]\n #words = [token.text.lower() for token in doc if token.is_alpha and not token.is_stop and len(token.text) > 1 ]\n return ngrams\n\ndef tokenization(train_data):\n tokenized_texts = []\n #print(\"Tokenization....\")\n for _, row in train_data.iterrows():\n #text = str(row['Abstract']) + str(row[\"Title\"])\n text = str(row['review'])\n words = tokenize(text)\n tokenized_texts.append(words)\n return tokenized_texts\n\n# TFIDF (Term frequency and inverse document frequency)\ndef get_word_stat(tokenized_texts):\n '''Words counts in documents\n finds in how many documents this word\n is present\n '''\n texts_number = len(tokenized_texts)\n #print(\"Word Stat....\")\n word2text_count = defaultdict(int)\n for text in tokenized_texts:\n uniquewords = set(text)\n for word in uniquewords:\n word2text_count[word] +=1\n return word2text_count\n\ndef get_doc_tfidf(words, word2text_count, N):\n num_words = len(words)\n word2tfidf = defaultdict(int)\n for word in words:\n if word2text_count[word] > 0:\n idf = np.log(N/(word2text_count[word]))\n word2tfidf[word] += (1/num_words) * idf\n else:\n word2tfidf[word] = 1\n return word2tfidf\n\ndef create_pmi_dict(tokenized_texts, targets, min_count=5):\n #print(\"PMI dictionary ....\")\n np.seterr(divide = 'ignore')\n # words count\n d = {0:defaultdict(int), 1:defaultdict(int), 'tot':defaultdict(int)}\n for idx, words in enumerate(tokenized_texts):\n target = targets[idx]\n for w in words:\n d[ target ][w] += 1\n Dictionary = set(list(d[0].keys()) + list(d[1].keys()))\n d['tot'] = {w:d[0][w] + d[1][w] for w in Dictionary}\n # pmi calculation\n N_0 = sum(d[0].values())\n N_1 = sum(d[1].values())\n d[0] = {w: -np.log((v/N_0 + 10**(-15)) / (0.5 * d['tot'][w]/(N_0 + N_1))) / np.log(v/N_0 + 10**(-15))\n for w, v in d[0].items() if d['tot'][w] > min_count}\n d[1] = {w: -np.log((v/N_1+ 10**(-15)) / (0.5 * d['tot'][w]/(N_0 + N_1))) / np.log(v/N_1 + 10**(-15))\n for w, v in d[1].items() if d['tot'][w] > min_count}\n del d['tot']\n return d\n\n\ndef calc_collinearity(word, words_dict, n=10):\n new_word_emb = nlp(word).vector\n pmi_new = 0\n max_pmis_words = sorted(list(words_dict.items()), key=lambda x: x[1], reverse=True)[:n]\n for w, pmi in max_pmis_words:\n w_emb = nlp(w).vector\n cos_similarity = \\\n np.dot(w_emb, new_word_emb)/(np.linalg.norm(w_emb) * np.linalg.norm(new_word_emb) + 1e-12)\n pmi_new += cos_similarity * pmi\n return pmi_new / n\n\n\ndef create_tot_pmitfidf(words, words_pmis, word2tfidf):\n tot_pmitfidf0 = []\n tot_pmitfidf1 = []\n for word in words:\n if word in words_pmis[0]:\n tot_pmitfidf0.append( words_pmis[0][word] * word2tfidf[word] )\n else:\n pmi0idf = pmiidf_net.forward( nlp(word).vector )\n #pmi0 = calc_collinearity(word, words_pmis[0])\n tot_pmitfidf0.append( pmi0 )\n if word in words_pmis[1]:\n tot_pmitfidf1.append( words_pmis[1][word] * word2tfidf[word] )\n else:\n pmi1 = calc_collinearity(word, words_pmis[1])\n tot_pmitfidf1.append( pmi1 )\n\n return tot_pmitfidf0, tot_pmitfidf1\n\n\ndef classify_pmi_based(words_pmis, word2text_count, tokenized_test_texts, N):\n results = np.zeros(len(tokenized_test_texts))\n for idx, words in enumerate(tokenized_test_texts):\n 
word2tfidf = get_doc_tfidf(words, word2text_count, N)\n # PMI - determines significance of the word for the class\n # TFIDF - determines significance of the word for the document\n #tot_pmi0, tot_pmi1 = create_tot_pmitfidf(words, words_pmis, word2tfidf)\n tot_pmi0 = [ words_pmis[0][w] * word2tfidf[w] for w in set(words) if w in words_pmis[0] ]\n tot_pmi1 = [ words_pmis[1][w] * word2tfidf[w] for w in set(words) if w in words_pmis[1] ]\n pmi0 = np.sum(tot_pmi0)\n pmi1 = np.sum(tot_pmi1)\n diff = pmi1 - pmi0\n if diff > 0.001:\n results[idx] = 1\n return results",
"_____no_output_____"
],
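A quick aside (not taken from the notebook's data): the decision rule inside classify_pmi_based reduces to comparing two weighted sums, one per class. The toy illustration below, with made-up numbers, shows that rule in isolation.

```python
from collections import defaultdict

# Toy PMI scores per class and toy TF-IDF weights; every number here is made up.
words_pmis = {
    0: {'boring': 0.9, 'bad': 0.8, 'film': 0.1},
    1: {'great': 0.9, 'fun': 0.7, 'film': 0.1},
}
word2tfidf = defaultdict(float, {'great': 0.5, 'film': 0.2, 'fun': 0.4})
doc_words = ['great', 'film', 'fun']

# Same rule as in classify_pmi_based: per-class sum of PMI * TF-IDF over unique
# words, then predict class 1 when the difference clears a small threshold.
score0 = sum(words_pmis[0][w] * word2tfidf[w] for w in set(doc_words) if w in words_pmis[0])
score1 = sum(words_pmis[1][w] * word2tfidf[w] for w in set(doc_words) if w in words_pmis[1])
prediction = 1 if (score1 - score0) > 0.001 else 0
print(round(score0, 2), round(score1, 2), prediction)  # roughly 0.02 0.75 1
```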
[
"data_raw = pd.read_csv(path + 'IMDB_Dataset.csv')\nindices = np.random.permutation(data_raw.index)\ndata = data_raw.loc[indices]\ndata = data_raw.sample(frac=1)\ndata = data.replace(to_replace=['negative', 'positive'], value=[0, 1])",
"_____no_output_____"
],
[
"idx = int(data.shape[0] * 0.1)\ntest_data = data.iloc[:idx]\ntrain_data = data.iloc[idx:]\ntargets_train = train_data[\"sentiment\"].values\ntargets_test = test_data[\"sentiment\"].values",
"_____no_output_____"
],
[
"tokenized_texts = tokenization(train_data)\ntokenized_test_texts = tokenization(test_data)\nN = len(tokenized_texts)",
"_____no_output_____"
],
[
"word2text_count = get_word_stat(tokenized_texts)\nwords_pmis = create_pmi_dict(tokenized_texts, targets_train, min_count=5)\nresults = classify_pmi_based(words_pmis, word2text_count, tokenized_test_texts, N)\nprecision = np.sum( np.logical_and(results, targets_test) ) / np.sum(results)\nrecall = np.sum( np.logical_and(results, targets_test) ) / np.sum(targets_test)\nF1 = 2 * (recall * precision)/(recall + precision)\naccuracy = (results == targets_test).mean()",
"_____no_output_____"
],
[
"print(\"Accuracy: \", accuracy)\nprint(\"Precision: \", precision)\nprint(\"Recall: \", recall)\nprint(\"F1: \", F1)",
"Accuracy: 0.9183098591549296\nPrecision: 0.8944591029023746\nRecall: 0.9495798319327731\nF1: 0.921195652173913\n"
],
[
"print(\"Accuracy: \", accuracy)\nprint(\"Precision: \", precision)\nprint(\"Recall: \", recall)\nprint(\"F1: \", F1)",
"Accuracy: 0.9211267605633803\nPrecision: 0.8981481481481481\nRecall: 0.9509803921568627\nF1: 0.9238095238095237\n"
],
[
"print(\"Accuracy: \", accuracy)\nprint(\"Precision: \", precision)\nprint(\"Recall: \", recall)\nprint(\"F1: \", F1)",
"Accuracy: 0.9274647887323944\nPrecision: 0.9190672153635117\nRecall: 0.938375350140056\nF1: 0.9286209286209285\n"
],
[
"scores = {\"accuracies\":[], \"precisions\":[], \"recalls\":[], \"F1s\":[], \"size\":[]}\ndict_size = [i for i in np.arange(0.02, 1, 0.01)]\nfor i in dict_size:\n part = tokenized_texts[:int(N * i)]\n scores[\"size\"].append(len(part))\n word2text_count = get_word_stat(part)\n words_pmis = create_pmi_dict(part, targets_train, min_count=5)\n\n results = classify_pmi_based(words_pmis, word2text_count, tokenized_test_texts, N)\n\n precision = np.sum( np.logical_and(results, targets_test) ) / np.sum(results)\n recall = np.sum( np.logical_and(results, targets_test) ) / np.sum(targets_test)\n F1 = 2 * (recall * precision)/(recall + precision)\n\n accuracy = (results == targets_test).mean()\n scores[\"accuracies\"].append( accuracy )\n scores[\"precisions\"].append( precision )\n scores[\"recalls\"].append( recall )\n scores[\"F1s\"].append( F1 )",
"_____no_output_____"
],
[
"fig, axs = plt.subplots(1, 1)\naxs.plot(scores[\"size\"], scores[\"accuracies\"], label=\"Accuracy\")\naxs.plot(scores[\"size\"], scores[\"precisions\"], label=\"Precision\")\naxs.plot(scores[\"size\"], scores[\"recalls\"], label=\"Recall\")\naxs.plot(scores[\"size\"], scores[\"F1s\"], label=\"F1\")\naxs.legend(title=\"Scores\");\naxs.set(xlabel=\"Dictionary size\");\nplt.savefig(\"score_dict_size_IMDB.png\")",
"_____no_output_____"
],
[
"with open(\"scores_IMDB_ngrams.p\", \"wb\") as file:\n pickle.dump(scores, file)",
"_____no_output_____"
],
[
"def create_dict(tokenized_texts, targets, min_count=5):\n np.seterr(divide = 'ignore')\n # words count\n d = {0:defaultdict(int), 1:defaultdict(int), 'tot':defaultdict(int)}\n for idx, words in enumerate(tokenized_texts):\n target = targets[idx]\n for w in words:\n d[ target ][w] += 1\n return d",
"_____no_output_____"
],
[
"dictionary_imdb = create_dict(tokenized_texts, targets_train, min_count=5)",
"_____no_output_____"
],
[
"with open(\"dict_IMDB.p\", \"wb\") as file:\n pickle.dump(dictionary_imdb, file)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76e5e407eb4057ccc6aa2452010825296f1ec29 | 3,243 | ipynb | Jupyter Notebook | content/lessons/03/Now-You-Code/NYC4-Temperature-Conversion.ipynb | MahopacHS/spring2019-Christian64Aguilar | 2e9ac4a5245d459f6d086c61ad3c0540db39b981 | [
"MIT"
] | null | null | null | content/lessons/03/Now-You-Code/NYC4-Temperature-Conversion.ipynb | MahopacHS/spring2019-Christian64Aguilar | 2e9ac4a5245d459f6d086c61ad3c0540db39b981 | [
"MIT"
] | null | null | null | content/lessons/03/Now-You-Code/NYC4-Temperature-Conversion.ipynb | MahopacHS/spring2019-Christian64Aguilar | 2e9ac4a5245d459f6d086c61ad3c0540db39b981 | [
"MIT"
] | null | null | null | 45.676056 | 701 | 0.613629 | [
[
[
"# Now You Code 4: Temperature Conversion\n\nWrite a python program which will convert temperatures from Celcius to Fahrenheight.\nThe program should take a temperature in degrees Celcius as input and output a temperature in degrees Fahrenheight.\n\nExample:\n\n```\nEnter the temperature in Celcius: 100\n100 Celcius is 212 Fahrenheight\n```\n\nHINT: Use the web to find the formula to convert from Celcius to Fahrenheight.\n",
"_____no_output_____"
],
[
"## Step 1: Problem Analysis\n\nInputs: celcius and fahrenhieght\n\nOutputs: celcius to farenhieght \n\nAlgorithm (Steps in Program):\n\n\n",
"_____no_output_____"
]
],
[
[
"celcius = float(input(\"enter the temperature in celcius: \")) \nfahrenhieght=(celcius*9/5)+32\nprint(\"fahrenhieght equals \" \"%.2f\" %fahrenhieght) ",
"enter the temperature in celcius: 100\nfahrenhieght equals 212.00\n"
]
],
[
[
"## Step 3: Questions\n\n1. Why does the program still run when you enter a negative number for temperature? Is this an error? because we didn't put in a string that wont accept negative numbers, no its not unless we programmed it to.\n\n2. Would it be difficult to write a program which did the opposite (conversion F to C)? Explain. no because you would right the same thing you did for C to F but just a different formula \n\n3. Did you store the conversion in a variable before printing it on the last line? I argue this makes your program easier to understand. Why? because checking each line makes sure that your code is running smoothly unless changes are needed to be made.\n",
"_____no_output_____"
],
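As the answer to question 2 suggests, the reverse conversion is nearly identical. A minimal sketch (not part of the original assignment) using the inverse formula:

```python
# Hypothetical reverse conversion: Fahrenheit to Celsius
fahrenheit = float(input("enter the temperature in fahrenheit: "))
celsius = (fahrenheit - 32) * 5 / 9
print("celsius equals %.2f" % celsius)
```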
[
"## Reminder of Evaluation Criteria\n\n1. What the problem attempted (analysis, code, and answered questions) ?\n2. What the problem analysis thought out? (does the program match the plan?)\n3. Does the code execute without syntax error?\n4. Does the code solve the intended problem?\n5. Is the code well written? (easy to understand, modular, and self-documenting, handles errors)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e76e5e46d1abf4287cecfd34b8c2b871607f3c54 | 51,255 | ipynb | Jupyter Notebook | keras/170605-cross-validation.ipynb | aidiary/notebooks | 1bb9338441e12ee52e287ea40179a5f271a5a2be | [
"MIT"
] | 3 | 2018-02-03T09:33:51.000Z | 2020-11-23T08:46:43.000Z | keras/170605-cross-validation.ipynb | aidiary/notebooks | 1bb9338441e12ee52e287ea40179a5f271a5a2be | [
"MIT"
] | null | null | null | keras/170605-cross-validation.ipynb | aidiary/notebooks | 1bb9338441e12ee52e287ea40179a5f271a5a2be | [
"MIT"
] | null | null | null | 54.295551 | 122 | 0.434845 | [
[
[
"from keras.models import Sequential\nfrom keras.layers import Dense\nimport numpy as np",
"Using TensorFlow backend.\n"
],
[
"np.random.seed(7)",
"_____no_output_____"
],
[
"dataset = np.loadtxt('pima-indians-diabetes.data', delimiter=',')\nX = dataset[:, 0:8]\nY = dataset[:, 8]",
"_____no_output_____"
],
[
"model = Sequential()\nmodel.add(Dense(12, input_dim=8, activation='relu'))\nmodel.add(Dense(8, activation='relu'))\nmodel.add(Dense(1, activation='sigmoid'))\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])",
"_____no_output_____"
],
[
"model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10)",
"Train on 514 samples, validate on 254 samples\nEpoch 1/150\n514/514 [==============================] - 0s - loss: 6.0377 - acc: 0.3852 - val_loss: 4.3065 - val_acc: 0.5118\nEpoch 2/150\n514/514 [==============================] - 0s - loss: 2.4940 - acc: 0.5409 - val_loss: 1.9934 - val_acc: 0.5118\nEpoch 3/150\n514/514 [==============================] - 0s - loss: 1.5921 - acc: 0.5603 - val_loss: 1.7380 - val_acc: 0.5197\nEpoch 4/150\n514/514 [==============================] - 0s - loss: 1.3562 - acc: 0.5720 - val_loss: 1.4103 - val_acc: 0.5354\nEpoch 5/150\n514/514 [==============================] - 0s - loss: 1.1636 - acc: 0.5895 - val_loss: 1.3189 - val_acc: 0.5039\nEpoch 6/150\n514/514 [==============================] - 0s - loss: 1.0265 - acc: 0.5642 - val_loss: 1.1053 - val_acc: 0.5433\nEpoch 7/150\n514/514 [==============================] - 0s - loss: 0.9333 - acc: 0.5856 - val_loss: 1.0059 - val_acc: 0.5827\nEpoch 8/150\n514/514 [==============================] - 0s - loss: 0.8754 - acc: 0.6109 - val_loss: 0.9774 - val_acc: 0.5669\nEpoch 9/150\n514/514 [==============================] - 0s - loss: 0.8073 - acc: 0.6187 - val_loss: 0.9216 - val_acc: 0.6142\nEpoch 10/150\n514/514 [==============================] - 0s - loss: 0.7870 - acc: 0.6167 - val_loss: 0.8556 - val_acc: 0.6063\nEpoch 11/150\n514/514 [==============================] - 0s - loss: 0.7494 - acc: 0.6323 - val_loss: 0.8728 - val_acc: 0.6181\nEpoch 12/150\n514/514 [==============================] - 0s - loss: 0.7397 - acc: 0.6498 - val_loss: 0.8721 - val_acc: 0.6496\nEpoch 13/150\n514/514 [==============================] - 0s - loss: 0.6973 - acc: 0.6576 - val_loss: 0.8084 - val_acc: 0.6339\nEpoch 14/150\n514/514 [==============================] - 0s - loss: 0.7233 - acc: 0.6459 - val_loss: 0.8645 - val_acc: 0.6102\nEpoch 15/150\n514/514 [==============================] - 0s - loss: 0.6903 - acc: 0.6712 - val_loss: 0.7862 - val_acc: 0.6417\nEpoch 16/150\n514/514 [==============================] - 0s - loss: 0.6798 - acc: 0.6712 - val_loss: 0.7577 - val_acc: 0.6220\nEpoch 17/150\n514/514 [==============================] - 0s - loss: 0.6971 - acc: 0.6595 - val_loss: 0.7647 - val_acc: 0.6417\nEpoch 18/150\n514/514 [==============================] - 0s - loss: 0.6533 - acc: 0.6712 - val_loss: 0.7759 - val_acc: 0.6181\nEpoch 19/150\n514/514 [==============================] - 0s - loss: 0.6643 - acc: 0.6907 - val_loss: 0.7405 - val_acc: 0.6417\nEpoch 20/150\n514/514 [==============================] - 0s - loss: 0.6687 - acc: 0.6751 - val_loss: 0.7224 - val_acc: 0.6535\nEpoch 21/150\n514/514 [==============================] - 0s - loss: 0.6264 - acc: 0.6829 - val_loss: 0.7075 - val_acc: 0.6535\nEpoch 22/150\n514/514 [==============================] - 0s - loss: 0.6632 - acc: 0.6907 - val_loss: 0.7352 - val_acc: 0.6614\nEpoch 23/150\n514/514 [==============================] - 0s - loss: 0.6391 - acc: 0.6770 - val_loss: 0.7112 - val_acc: 0.6732\nEpoch 24/150\n514/514 [==============================] - 0s - loss: 0.6227 - acc: 0.6907 - val_loss: 0.7241 - val_acc: 0.6457\nEpoch 25/150\n514/514 [==============================] - 0s - loss: 0.6485 - acc: 0.6693 - val_loss: 0.6954 - val_acc: 0.6496\nEpoch 26/150\n514/514 [==============================] - 0s - loss: 0.6084 - acc: 0.7023 - val_loss: 0.7477 - val_acc: 0.6457\nEpoch 27/150\n514/514 [==============================] - 0s - loss: 0.6227 - acc: 0.6946 - val_loss: 0.7741 - val_acc: 0.6535\nEpoch 28/150\n514/514 [==============================] - 0s - loss: 0.6088 - acc: 
0.7004 - val_loss: 0.6983 - val_acc: 0.6850\nEpoch 29/150\n514/514 [==============================] - 0s - loss: 0.6352 - acc: 0.6712 - val_loss: 0.6995 - val_acc: 0.6732\nEpoch 30/150\n514/514 [==============================] - 0s - loss: 0.6027 - acc: 0.6946 - val_loss: 0.6734 - val_acc: 0.6457\nEpoch 31/150\n514/514 [==============================] - 0s - loss: 0.6113 - acc: 0.6984 - val_loss: 0.7503 - val_acc: 0.6535\nEpoch 32/150\n514/514 [==============================] - 0s - loss: 0.5939 - acc: 0.6984 - val_loss: 0.6500 - val_acc: 0.6969\nEpoch 33/150\n514/514 [==============================] - 0s - loss: 0.5920 - acc: 0.6887 - val_loss: 0.7098 - val_acc: 0.6732\nEpoch 34/150\n514/514 [==============================] - 0s - loss: 0.6464 - acc: 0.6790 - val_loss: 0.6538 - val_acc: 0.6850\nEpoch 35/150\n514/514 [==============================] - 0s - loss: 0.5878 - acc: 0.7276 - val_loss: 0.6414 - val_acc: 0.6929\nEpoch 36/150\n514/514 [==============================] - 0s - loss: 0.5998 - acc: 0.7160 - val_loss: 0.6339 - val_acc: 0.6929\nEpoch 37/150\n514/514 [==============================] - 0s - loss: 0.5907 - acc: 0.7023 - val_loss: 0.6194 - val_acc: 0.7087\nEpoch 38/150\n514/514 [==============================] - 0s - loss: 0.5708 - acc: 0.7276 - val_loss: 0.6212 - val_acc: 0.7047\nEpoch 39/150\n514/514 [==============================] - 0s - loss: 0.6052 - acc: 0.6984 - val_loss: 0.6567 - val_acc: 0.6969\nEpoch 40/150\n514/514 [==============================] - 0s - loss: 0.6039 - acc: 0.6965 - val_loss: 0.6598 - val_acc: 0.6732\nEpoch 41/150\n514/514 [==============================] - 0s - loss: 0.5963 - acc: 0.6984 - val_loss: 0.6576 - val_acc: 0.6850\nEpoch 42/150\n514/514 [==============================] - 0s - loss: 0.6217 - acc: 0.7257 - val_loss: 0.6581 - val_acc: 0.6850\nEpoch 43/150\n514/514 [==============================] - 0s - loss: 0.5799 - acc: 0.7062 - val_loss: 0.6287 - val_acc: 0.7008\nEpoch 44/150\n514/514 [==============================] - 0s - loss: 0.5630 - acc: 0.7198 - val_loss: 0.6621 - val_acc: 0.6693\nEpoch 45/150\n514/514 [==============================] - 0s - loss: 0.5967 - acc: 0.7062 - val_loss: 0.6510 - val_acc: 0.6929\nEpoch 46/150\n514/514 [==============================] - 0s - loss: 0.6639 - acc: 0.7062 - val_loss: 0.6187 - val_acc: 0.7165\nEpoch 47/150\n514/514 [==============================] - 0s - loss: 0.5891 - acc: 0.7004 - val_loss: 0.6079 - val_acc: 0.7362\nEpoch 48/150\n514/514 [==============================] - 0s - loss: 0.5949 - acc: 0.7121 - val_loss: 0.6315 - val_acc: 0.7087\nEpoch 49/150\n514/514 [==============================] - 0s - loss: 0.5747 - acc: 0.7179 - val_loss: 0.6226 - val_acc: 0.6929\nEpoch 50/150\n514/514 [==============================] - 0s - loss: 0.5634 - acc: 0.7179 - val_loss: 0.6289 - val_acc: 0.7008\nEpoch 51/150\n514/514 [==============================] - 0s - loss: 0.5804 - acc: 0.7101 - val_loss: 0.6843 - val_acc: 0.6850\nEpoch 52/150\n514/514 [==============================] - 0s - loss: 0.5792 - acc: 0.7101 - val_loss: 0.6947 - val_acc: 0.6693\nEpoch 53/150\n514/514 [==============================] - 0s - loss: 0.6082 - acc: 0.7062 - val_loss: 0.6342 - val_acc: 0.6969\nEpoch 54/150\n514/514 [==============================] - 0s - loss: 0.5952 - acc: 0.7160 - val_loss: 0.6449 - val_acc: 0.7047\nEpoch 55/150\n514/514 [==============================] - 0s - loss: 0.5592 - acc: 0.7335 - val_loss: 0.6223 - val_acc: 0.7362\nEpoch 56/150\n514/514 [==============================] - 0s - loss: 0.5631 - 
acc: 0.7374 - val_loss: 0.6191 - val_acc: 0.7008\nEpoch 57/150\n514/514 [==============================] - 0s - loss: 0.5892 - acc: 0.6946 - val_loss: 0.6368 - val_acc: 0.7441\nEpoch 58/150\n514/514 [==============================] - 0s - loss: 0.6010 - acc: 0.7121 - val_loss: 0.5870 - val_acc: 0.7441\nEpoch 59/150\n514/514 [==============================] - 0s - loss: 0.5910 - acc: 0.6946 - val_loss: 0.6003 - val_acc: 0.7047\nEpoch 60/150\n514/514 [==============================] - 0s - loss: 0.5902 - acc: 0.6965 - val_loss: 0.5918 - val_acc: 0.7283\nEpoch 61/150\n514/514 [==============================] - 0s - loss: 0.5518 - acc: 0.7451 - val_loss: 0.6086 - val_acc: 0.7165\nEpoch 62/150\n514/514 [==============================] - 0s - loss: 0.5456 - acc: 0.7432 - val_loss: 0.6439 - val_acc: 0.6850\nEpoch 63/150\n514/514 [==============================] - 0s - loss: 0.5788 - acc: 0.7257 - val_loss: 0.6009 - val_acc: 0.7402\nEpoch 64/150\n514/514 [==============================] - 0s - loss: 0.5712 - acc: 0.7315 - val_loss: 0.6342 - val_acc: 0.7008\nEpoch 65/150\n514/514 [==============================] - 0s - loss: 0.5477 - acc: 0.7393 - val_loss: 0.6095 - val_acc: 0.7244\nEpoch 66/150\n514/514 [==============================] - 0s - loss: 0.5357 - acc: 0.7529 - val_loss: 0.6311 - val_acc: 0.7165\nEpoch 67/150\n514/514 [==============================] - 0s - loss: 0.5707 - acc: 0.7179 - val_loss: 0.6251 - val_acc: 0.7126\nEpoch 68/150\n514/514 [==============================] - 0s - loss: 0.5493 - acc: 0.7198 - val_loss: 0.6012 - val_acc: 0.7165\nEpoch 69/150\n514/514 [==============================] - 0s - loss: 0.5646 - acc: 0.7140 - val_loss: 0.5979 - val_acc: 0.7165\nEpoch 70/150\n514/514 [==============================] - 0s - loss: 0.5684 - acc: 0.7140 - val_loss: 0.6257 - val_acc: 0.6929\nEpoch 71/150\n514/514 [==============================] - 0s - loss: 0.5675 - acc: 0.7354 - val_loss: 0.6413 - val_acc: 0.6850\nEpoch 72/150\n514/514 [==============================] - 0s - loss: 0.5641 - acc: 0.7198 - val_loss: 0.6093 - val_acc: 0.6969\nEpoch 73/150\n514/514 [==============================] - 0s - loss: 0.5663 - acc: 0.7374 - val_loss: 0.5954 - val_acc: 0.7126\nEpoch 74/150\n514/514 [==============================] - 0s - loss: 0.5756 - acc: 0.7510 - val_loss: 0.6067 - val_acc: 0.7402\nEpoch 75/150\n514/514 [==============================] - 0s - loss: 0.5591 - acc: 0.7179 - val_loss: 0.5778 - val_acc: 0.7283\nEpoch 76/150\n514/514 [==============================] - 0s - loss: 0.5493 - acc: 0.7568 - val_loss: 0.5987 - val_acc: 0.7244\nEpoch 77/150\n514/514 [==============================] - 0s - loss: 0.5427 - acc: 0.7315 - val_loss: 0.6136 - val_acc: 0.7087\nEpoch 78/150\n514/514 [==============================] - 0s - loss: 0.5495 - acc: 0.7471 - val_loss: 0.6016 - val_acc: 0.7165\nEpoch 79/150\n514/514 [==============================] - 0s - loss: 0.5748 - acc: 0.7237 - val_loss: 0.5972 - val_acc: 0.7087\nEpoch 80/150\n514/514 [==============================] - 0s - loss: 0.5570 - acc: 0.7121 - val_loss: 0.5873 - val_acc: 0.7520\nEpoch 81/150\n514/514 [==============================] - 0s - loss: 0.5633 - acc: 0.7335 - val_loss: 0.6893 - val_acc: 0.6811\nEpoch 82/150\n514/514 [==============================] - 0s - loss: 0.5575 - acc: 0.7451 - val_loss: 0.6777 - val_acc: 0.6417\nEpoch 83/150\n514/514 [==============================] - 0s - loss: 0.5510 - acc: 0.7529 - val_loss: 0.6004 - val_acc: 0.7205\nEpoch 84/150\n514/514 [==============================] - 0s - loss: 0.5339 
- acc: 0.7549 - val_loss: 0.6571 - val_acc: 0.6929\nEpoch 85/150\n514/514 [==============================] - 0s - loss: 0.5481 - acc: 0.7374 - val_loss: 0.6087 - val_acc: 0.7047\nEpoch 86/150\n514/514 [==============================] - 0s - loss: 0.5507 - acc: 0.7335 - val_loss: 0.5764 - val_acc: 0.7441\nEpoch 87/150\n514/514 [==============================] - 0s - loss: 0.5370 - acc: 0.7315 - val_loss: 0.5848 - val_acc: 0.7205\nEpoch 88/150\n514/514 [==============================] - 0s - loss: 0.5514 - acc: 0.7276 - val_loss: 0.6322 - val_acc: 0.7126\nEpoch 89/150\n514/514 [==============================] - 0s - loss: 0.5583 - acc: 0.7374 - val_loss: 0.6930 - val_acc: 0.6929\nEpoch 90/150\n514/514 [==============================] - 0s - loss: 0.5517 - acc: 0.7432 - val_loss: 0.6209 - val_acc: 0.6929\nEpoch 91/150\n514/514 [==============================] - 0s - loss: 0.5502 - acc: 0.7276 - val_loss: 0.5909 - val_acc: 0.7283\nEpoch 92/150\n514/514 [==============================] - 0s - loss: 0.5587 - acc: 0.7490 - val_loss: 0.6103 - val_acc: 0.7165\nEpoch 93/150\n514/514 [==============================] - 0s - loss: 0.5625 - acc: 0.7257 - val_loss: 0.7228 - val_acc: 0.6850\nEpoch 94/150\n514/514 [==============================] - 0s - loss: 0.5532 - acc: 0.7296 - val_loss: 0.6420 - val_acc: 0.6732\nEpoch 95/150\n514/514 [==============================] - 0s - loss: 0.6471 - acc: 0.7179 - val_loss: 0.7324 - val_acc: 0.6811\nEpoch 96/150\n514/514 [==============================] - 0s - loss: 0.5711 - acc: 0.7471 - val_loss: 0.5692 - val_acc: 0.7205\nEpoch 97/150\n514/514 [==============================] - 0s - loss: 0.6085 - acc: 0.7198 - val_loss: 0.6056 - val_acc: 0.6969\nEpoch 98/150\n514/514 [==============================] - 0s - loss: 0.5433 - acc: 0.7393 - val_loss: 0.5737 - val_acc: 0.7205\nEpoch 99/150\n514/514 [==============================] - 0s - loss: 0.5472 - acc: 0.7432 - val_loss: 0.5880 - val_acc: 0.7441\nEpoch 100/150\n514/514 [==============================] - 0s - loss: 0.5965 - acc: 0.7374 - val_loss: 0.6206 - val_acc: 0.7008\nEpoch 101/150\n514/514 [==============================] - 0s - loss: 0.5803 - acc: 0.7237 - val_loss: 0.5956 - val_acc: 0.7047\nEpoch 102/150\n514/514 [==============================] - 0s - loss: 0.5510 - acc: 0.7374 - val_loss: 0.6116 - val_acc: 0.7087\nEpoch 103/150\n514/514 [==============================] - 0s - loss: 0.5517 - acc: 0.7121 - val_loss: 0.5846 - val_acc: 0.7244\nEpoch 104/150\n514/514 [==============================] - 0s - loss: 0.5510 - acc: 0.7354 - val_loss: 0.5874 - val_acc: 0.7283\nEpoch 105/150\n514/514 [==============================] - 0s - loss: 0.5433 - acc: 0.7276 - val_loss: 0.6032 - val_acc: 0.6969\nEpoch 106/150\n514/514 [==============================] - 0s - loss: 0.5386 - acc: 0.7451 - val_loss: 0.5885 - val_acc: 0.7205\nEpoch 107/150\n514/514 [==============================] - 0s - loss: 0.5253 - acc: 0.7451 - val_loss: 0.6288 - val_acc: 0.7047\nEpoch 108/150\n514/514 [==============================] - 0s - loss: 0.5470 - acc: 0.7237 - val_loss: 0.5788 - val_acc: 0.7283\nEpoch 109/150\n514/514 [==============================] - 0s - loss: 0.5319 - acc: 0.7529 - val_loss: 0.6064 - val_acc: 0.7244\nEpoch 110/150\n514/514 [==============================] - 0s - loss: 0.5320 - acc: 0.7568 - val_loss: 0.5848 - val_acc: 0.7283\nEpoch 111/150\n514/514 [==============================] - 0s - loss: 0.5412 - acc: 0.7354 - val_loss: 0.7940 - val_acc: 0.6457\nEpoch 112/150\n514/514 [==============================] - 0s 
- loss: 0.5516 - acc: 0.7043 - val_loss: 0.5837 - val_acc: 0.7165\nEpoch 113/150\n514/514 [==============================] - 0s - loss: 0.5393 - acc: 0.7257 - val_loss: 0.5716 - val_acc: 0.7205\nEpoch 114/150\n514/514 [==============================] - 0s - loss: 0.5412 - acc: 0.7296 - val_loss: 0.5985 - val_acc: 0.7087\nEpoch 115/150\n514/514 [==============================] - 0s - loss: 0.5306 - acc: 0.7607 - val_loss: 0.5860 - val_acc: 0.7087\nEpoch 116/150\n514/514 [==============================] - 0s - loss: 0.5840 - acc: 0.7335 - val_loss: 0.6217 - val_acc: 0.7205\nEpoch 117/150\n514/514 [==============================] - 0s - loss: 0.5462 - acc: 0.7549 - val_loss: 0.5881 - val_acc: 0.7087\nEpoch 118/150\n514/514 [==============================] - 0s - loss: 0.5441 - acc: 0.7140 - val_loss: 0.6008 - val_acc: 0.7126\nEpoch 119/150\n514/514 [==============================] - 0s - loss: 0.5695 - acc: 0.7432 - val_loss: 0.6690 - val_acc: 0.6890\nEpoch 120/150\n514/514 [==============================] - 0s - loss: 0.5343 - acc: 0.7568 - val_loss: 0.5713 - val_acc: 0.7244\nEpoch 121/150\n514/514 [==============================] - 0s - loss: 0.5569 - acc: 0.7179 - val_loss: 0.5848 - val_acc: 0.7244\nEpoch 122/150\n514/514 [==============================] - 0s - loss: 0.5825 - acc: 0.7257 - val_loss: 0.6380 - val_acc: 0.6969\nEpoch 123/150\n514/514 [==============================] - 0s - loss: 0.5548 - acc: 0.7451 - val_loss: 0.5774 - val_acc: 0.7283\nEpoch 124/150\n514/514 [==============================] - 0s - loss: 0.5418 - acc: 0.7374 - val_loss: 0.5629 - val_acc: 0.7402\nEpoch 125/150\n514/514 [==============================] - 0s - loss: 0.5259 - acc: 0.7432 - val_loss: 0.5900 - val_acc: 0.7047\nEpoch 126/150\n514/514 [==============================] - 0s - loss: 0.5528 - acc: 0.7257 - val_loss: 0.6274 - val_acc: 0.7205\nEpoch 127/150\n514/514 [==============================] - 0s - loss: 0.5297 - acc: 0.7412 - val_loss: 0.6106 - val_acc: 0.7441\nEpoch 128/150\n514/514 [==============================] - 0s - loss: 0.5692 - acc: 0.7218 - val_loss: 0.7843 - val_acc: 0.6417\nEpoch 129/150\n514/514 [==============================] - 0s - loss: 0.5344 - acc: 0.7412 - val_loss: 0.5721 - val_acc: 0.7205\nEpoch 130/150\n514/514 [==============================] - 0s - loss: 0.5273 - acc: 0.7568 - val_loss: 0.5664 - val_acc: 0.7126\nEpoch 131/150\n514/514 [==============================] - 0s - loss: 0.5289 - acc: 0.7296 - val_loss: 0.5726 - val_acc: 0.7087\nEpoch 132/150\n514/514 [==============================] - 0s - loss: 0.5304 - acc: 0.7490 - val_loss: 0.5856 - val_acc: 0.7323\nEpoch 133/150\n514/514 [==============================] - 0s - loss: 0.5126 - acc: 0.7451 - val_loss: 0.5728 - val_acc: 0.7087\nEpoch 134/150\n514/514 [==============================] - 0s - loss: 0.5151 - acc: 0.7607 - val_loss: 0.5663 - val_acc: 0.7283\nEpoch 135/150\n514/514 [==============================] - 0s - loss: 0.5099 - acc: 0.7588 - val_loss: 0.5630 - val_acc: 0.7283\nEpoch 136/150\n514/514 [==============================] - 0s - loss: 0.5170 - acc: 0.7432 - val_loss: 0.5635 - val_acc: 0.7165\nEpoch 137/150\n514/514 [==============================] - 0s - loss: 0.5151 - acc: 0.7374 - val_loss: 0.5601 - val_acc: 0.7205\nEpoch 138/150\n514/514 [==============================] - 0s - loss: 0.5089 - acc: 0.7529 - val_loss: 0.5590 - val_acc: 0.7283\nEpoch 139/150\n514/514 [==============================] - 0s - loss: 0.5137 - acc: 0.7451 - val_loss: 0.5604 - val_acc: 0.7165\nEpoch 140/150\n514/514 
[==============================] - 0s - loss: 0.5300 - acc: 0.7257 - val_loss: 0.5730 - val_acc: 0.7402\nEpoch 141/150\n514/514 [==============================] - 0s - loss: 0.5072 - acc: 0.7646 - val_loss: 0.5581 - val_acc: 0.7598\nEpoch 142/150\n514/514 [==============================] - 0s - loss: 0.5404 - acc: 0.7607 - val_loss: 0.6411 - val_acc: 0.7008\nEpoch 143/150\n514/514 [==============================] - 0s - loss: 0.5328 - acc: 0.7296 - val_loss: 0.5693 - val_acc: 0.7402\nEpoch 144/150\n514/514 [==============================] - 0s - loss: 0.5143 - acc: 0.7626 - val_loss: 0.5983 - val_acc: 0.7362\nEpoch 145/150\n514/514 [==============================] - 0s - loss: 0.5388 - acc: 0.7451 - val_loss: 0.5664 - val_acc: 0.7283\nEpoch 146/150\n514/514 [==============================] - 0s - loss: 0.5115 - acc: 0.7374 - val_loss: 0.5625 - val_acc: 0.7283\nEpoch 147/150\n514/514 [==============================] - 0s - loss: 0.5020 - acc: 0.7549 - val_loss: 0.5858 - val_acc: 0.7126\nEpoch 148/150\n514/514 [==============================] - 0s - loss: 0.5241 - acc: 0.7374 - val_loss: 0.6635 - val_acc: 0.6890\nEpoch 149/150\n514/514 [==============================] - 0s - loss: 0.5241 - acc: 0.7490 - val_loss: 0.5511 - val_acc: 0.7480\nEpoch 150/150\n514/514 [==============================] - 0s - loss: 0.5221 - acc: 0.7607 - val_loss: 0.5785 - val_acc: 0.7087\n"
],
[
"from sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"seed = 7\ndataset = np.loadtxt('pima-indians-diabetes.data', delimiter=',')\nX = dataset[:, 0:8]\nY = dataset[:, 8]\nX_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=seed)",
"_____no_output_____"
],
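Since the notebook is named cross-validation, it may help to note how a k-fold evaluation could look beyond the single hold-out split above. This is only a sketch reusing X, Y, seed and the Keras imports from the earlier cells; the fold count and silent training are my own choices, not taken from the notebook.

```python
from sklearn.model_selection import StratifiedKFold
import numpy as np

def build_model():
    # Rebuild the same architecture as above so every fold starts from fresh weights
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
accuracies = []
for train_idx, test_idx in kfold.split(X, Y):
    model = build_model()
    model.fit(X[train_idx], Y[train_idx], epochs=150, batch_size=10, verbose=0)
    _, acc = model.evaluate(X[test_idx], Y[test_idx], verbose=0)
    accuracies.append(acc)
print("%.2f%% (+/- %.2f%%)" % (np.mean(accuracies) * 100, np.std(accuracies) * 100))
```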
[
"model = Sequential()\nmodel.add(Dense(12, input_dim=8, activation='relu'))\nmodel.add(Dense(8, activation='relu'))\nmodel.add(Dense(1, activation='sigmoid'))\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])",
"_____no_output_____"
],
[
"model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=150, batch_size=10)",
"Train on 514 samples, validate on 254 samples\nEpoch 1/150\n514/514 [==============================] - 0s - loss: 5.4229 - acc: 0.6304 - val_loss: 5.6229 - val_acc: 0.6378\nEpoch 2/150\n514/514 [==============================] - 0s - loss: 5.3085 - acc: 0.6401 - val_loss: 5.5007 - val_acc: 0.6417\nEpoch 3/150\n514/514 [==============================] - 0s - loss: 5.2346 - acc: 0.6459 - val_loss: 5.4103 - val_acc: 0.6378\nEpoch 4/150\n514/514 [==============================] - 0s - loss: 5.1298 - acc: 0.6323 - val_loss: 5.3088 - val_acc: 0.6378\nEpoch 5/150\n514/514 [==============================] - 0s - loss: 5.0421 - acc: 0.6401 - val_loss: 5.2040 - val_acc: 0.6378\nEpoch 6/150\n514/514 [==============================] - 0s - loss: 4.9210 - acc: 0.6323 - val_loss: 4.8707 - val_acc: 0.6378\nEpoch 7/150\n514/514 [==============================] - 0s - loss: 4.5375 - acc: 0.6498 - val_loss: 4.4086 - val_acc: 0.6220\nEpoch 8/150\n514/514 [==============================] - 0s - loss: 4.0227 - acc: 0.6128 - val_loss: 3.7692 - val_acc: 0.6063\nEpoch 9/150\n514/514 [==============================] - 0s - loss: 3.6753 - acc: 0.5992 - val_loss: 3.4825 - val_acc: 0.6260\nEpoch 10/150\n514/514 [==============================] - 0s - loss: 3.4646 - acc: 0.6420 - val_loss: 3.5302 - val_acc: 0.6378\nEpoch 11/150\n514/514 [==============================] - 0s - loss: 3.4321 - acc: 0.6245 - val_loss: 3.2797 - val_acc: 0.6496\nEpoch 12/150\n514/514 [==============================] - 0s - loss: 3.1151 - acc: 0.6342 - val_loss: 2.3772 - val_acc: 0.6378\nEpoch 13/150\n514/514 [==============================] - 0s - loss: 1.6956 - acc: 0.5486 - val_loss: 1.0138 - val_acc: 0.5866\nEpoch 14/150\n514/514 [==============================] - 0s - loss: 1.0465 - acc: 0.6051 - val_loss: 0.8784 - val_acc: 0.6220\nEpoch 15/150\n514/514 [==============================] - 0s - loss: 0.9950 - acc: 0.5953 - val_loss: 0.8158 - val_acc: 0.6417\nEpoch 16/150\n514/514 [==============================] - 0s - loss: 0.8745 - acc: 0.6498 - val_loss: 0.8259 - val_acc: 0.6339\nEpoch 17/150\n514/514 [==============================] - 0s - loss: 0.8333 - acc: 0.6770 - val_loss: 0.7215 - val_acc: 0.6890\nEpoch 18/150\n514/514 [==============================] - 0s - loss: 0.7670 - acc: 0.6673 - val_loss: 0.8371 - val_acc: 0.6772\nEpoch 19/150\n514/514 [==============================] - 0s - loss: 0.7134 - acc: 0.6887 - val_loss: 0.7544 - val_acc: 0.6693\nEpoch 20/150\n514/514 [==============================] - 0s - loss: 0.6996 - acc: 0.6770 - val_loss: 0.6842 - val_acc: 0.6693\nEpoch 21/150\n514/514 [==============================] - 0s - loss: 0.6600 - acc: 0.7043 - val_loss: 0.7152 - val_acc: 0.6772\nEpoch 22/150\n514/514 [==============================] - 0s - loss: 0.6240 - acc: 0.6907 - val_loss: 0.8918 - val_acc: 0.6535\nEpoch 23/150\n514/514 [==============================] - 0s - loss: 0.6177 - acc: 0.7121 - val_loss: 0.6894 - val_acc: 0.6693\nEpoch 24/150\n514/514 [==============================] - 0s - loss: 0.6216 - acc: 0.7043 - val_loss: 0.7710 - val_acc: 0.6535\nEpoch 25/150\n514/514 [==============================] - 0s - loss: 0.6004 - acc: 0.7179 - val_loss: 0.7013 - val_acc: 0.6654\nEpoch 26/150\n514/514 [==============================] - 0s - loss: 0.6341 - acc: 0.7004 - val_loss: 0.7162 - val_acc: 0.6575\nEpoch 27/150\n514/514 [==============================] - 0s - loss: 0.6112 - acc: 0.7062 - val_loss: 0.6918 - val_acc: 0.6693\nEpoch 28/150\n514/514 [==============================] - 0s - loss: 0.6237 - acc: 
0.7004 - val_loss: 0.6774 - val_acc: 0.6811\nEpoch 29/150\n514/514 [==============================] - 0s - loss: 0.5800 - acc: 0.7140 - val_loss: 0.7153 - val_acc: 0.6614\nEpoch 30/150\n514/514 [==============================] - 0s - loss: 0.5761 - acc: 0.7335 - val_loss: 0.6456 - val_acc: 0.6929\nEpoch 31/150\n514/514 [==============================] - 0s - loss: 0.6061 - acc: 0.6965 - val_loss: 0.7567 - val_acc: 0.6457\nEpoch 32/150\n514/514 [==============================] - 0s - loss: 0.6030 - acc: 0.7082 - val_loss: 0.6793 - val_acc: 0.6654\nEpoch 33/150\n514/514 [==============================] - 0s - loss: 0.5941 - acc: 0.7354 - val_loss: 0.7651 - val_acc: 0.6575\nEpoch 34/150\n514/514 [==============================] - 0s - loss: 0.5608 - acc: 0.7257 - val_loss: 0.6485 - val_acc: 0.6969\nEpoch 35/150\n514/514 [==============================] - 0s - loss: 0.5502 - acc: 0.7374 - val_loss: 0.7316 - val_acc: 0.6575\nEpoch 36/150\n514/514 [==============================] - 0s - loss: 0.5767 - acc: 0.7510 - val_loss: 0.6455 - val_acc: 0.6890\nEpoch 37/150\n514/514 [==============================] - 0s - loss: 0.5594 - acc: 0.7160 - val_loss: 0.6402 - val_acc: 0.6929\nEpoch 38/150\n514/514 [==============================] - 0s - loss: 0.5439 - acc: 0.7276 - val_loss: 0.6381 - val_acc: 0.7047\nEpoch 39/150\n514/514 [==============================] - 0s - loss: 0.5635 - acc: 0.7218 - val_loss: 0.7396 - val_acc: 0.6732\nEpoch 40/150\n514/514 [==============================] - 0s - loss: 0.5515 - acc: 0.7393 - val_loss: 0.7263 - val_acc: 0.6614\nEpoch 41/150\n514/514 [==============================] - 0s - loss: 0.5921 - acc: 0.7082 - val_loss: 0.6538 - val_acc: 0.6850\nEpoch 42/150\n514/514 [==============================] - 0s - loss: 0.5501 - acc: 0.7315 - val_loss: 0.6771 - val_acc: 0.6772\nEpoch 43/150\n514/514 [==============================] - 0s - loss: 0.5532 - acc: 0.7529 - val_loss: 0.6349 - val_acc: 0.7165\nEpoch 44/150\n514/514 [==============================] - 0s - loss: 0.5652 - acc: 0.7374 - val_loss: 0.6378 - val_acc: 0.7283\nEpoch 45/150\n514/514 [==============================] - 0s - loss: 0.5577 - acc: 0.7257 - val_loss: 0.6409 - val_acc: 0.6969\nEpoch 46/150\n514/514 [==============================] - 0s - loss: 0.5257 - acc: 0.7626 - val_loss: 0.7328 - val_acc: 0.6772\nEpoch 47/150\n514/514 [==============================] - 0s - loss: 0.5445 - acc: 0.7374 - val_loss: 0.6554 - val_acc: 0.6890\nEpoch 48/150\n514/514 [==============================] - 0s - loss: 0.5227 - acc: 0.7335 - val_loss: 0.6995 - val_acc: 0.6693\nEpoch 49/150\n514/514 [==============================] - 0s - loss: 0.5438 - acc: 0.7451 - val_loss: 0.6356 - val_acc: 0.7008\nEpoch 50/150\n514/514 [==============================] - 0s - loss: 0.5326 - acc: 0.7393 - val_loss: 0.6192 - val_acc: 0.7362\nEpoch 51/150\n514/514 [==============================] - 0s - loss: 0.5323 - acc: 0.7374 - val_loss: 0.6228 - val_acc: 0.7244\nEpoch 52/150\n514/514 [==============================] - 0s - loss: 0.5473 - acc: 0.7335 - val_loss: 0.6297 - val_acc: 0.7008\nEpoch 53/150\n514/514 [==============================] - 0s - loss: 0.5417 - acc: 0.7393 - val_loss: 0.7364 - val_acc: 0.6772\nEpoch 54/150\n514/514 [==============================] - 0s - loss: 0.5593 - acc: 0.7354 - val_loss: 0.6551 - val_acc: 0.7087\nEpoch 55/150\n514/514 [==============================] - 0s - loss: 0.5665 - acc: 0.7101 - val_loss: 0.7917 - val_acc: 0.6732\nEpoch 56/150\n514/514 [==============================] - 0s - loss: 0.5598 - 
acc: 0.7276 - val_loss: 0.7145 - val_acc: 0.6732\nEpoch 57/150\n514/514 [==============================] - 0s - loss: 0.5683 - acc: 0.7121 - val_loss: 0.6474 - val_acc: 0.6929\nEpoch 58/150\n514/514 [==============================] - 0s - loss: 0.5130 - acc: 0.7490 - val_loss: 0.6193 - val_acc: 0.7205\nEpoch 59/150\n514/514 [==============================] - 0s - loss: 0.5131 - acc: 0.7549 - val_loss: 0.6713 - val_acc: 0.6850\nEpoch 60/150\n514/514 [==============================] - 0s - loss: 0.5356 - acc: 0.7412 - val_loss: 0.6392 - val_acc: 0.6654\nEpoch 61/150\n514/514 [==============================] - 0s - loss: 0.5237 - acc: 0.7510 - val_loss: 0.6218 - val_acc: 0.7205\nEpoch 62/150\n514/514 [==============================] - 0s - loss: 0.5149 - acc: 0.7529 - val_loss: 0.6698 - val_acc: 0.6890\nEpoch 63/150\n514/514 [==============================] - 0s - loss: 0.5303 - acc: 0.7490 - val_loss: 0.6416 - val_acc: 0.7087\nEpoch 64/150\n514/514 [==============================] - 0s - loss: 0.5160 - acc: 0.7510 - val_loss: 0.6206 - val_acc: 0.7126\nEpoch 65/150\n514/514 [==============================] - 0s - loss: 0.5177 - acc: 0.7354 - val_loss: 0.6191 - val_acc: 0.7244\nEpoch 66/150\n514/514 [==============================] - 0s - loss: 0.5431 - acc: 0.7412 - val_loss: 0.6633 - val_acc: 0.6614\nEpoch 67/150\n514/514 [==============================] - 0s - loss: 0.5224 - acc: 0.7451 - val_loss: 0.6463 - val_acc: 0.6890\nEpoch 68/150\n514/514 [==============================] - 0s - loss: 0.5173 - acc: 0.7607 - val_loss: 0.6123 - val_acc: 0.7323\nEpoch 69/150\n514/514 [==============================] - 0s - loss: 0.5175 - acc: 0.7529 - val_loss: 0.6173 - val_acc: 0.7047\nEpoch 70/150\n514/514 [==============================] - 0s - loss: 0.5393 - acc: 0.7529 - val_loss: 0.6327 - val_acc: 0.6890\nEpoch 71/150\n514/514 [==============================] - 0s - loss: 0.5392 - acc: 0.7588 - val_loss: 0.6250 - val_acc: 0.7283\nEpoch 72/150\n514/514 [==============================] - 0s - loss: 0.5603 - acc: 0.7393 - val_loss: 0.6474 - val_acc: 0.6969\nEpoch 73/150\n514/514 [==============================] - 0s - loss: 0.5235 - acc: 0.7490 - val_loss: 0.6080 - val_acc: 0.7244\nEpoch 74/150\n514/514 [==============================] - 0s - loss: 0.5251 - acc: 0.7549 - val_loss: 0.6376 - val_acc: 0.6969\nEpoch 75/150\n514/514 [==============================] - 0s - loss: 0.5034 - acc: 0.7568 - val_loss: 0.6201 - val_acc: 0.7323\nEpoch 76/150\n514/514 [==============================] - 0s - loss: 0.5240 - acc: 0.7412 - val_loss: 0.6070 - val_acc: 0.7402\nEpoch 77/150\n514/514 [==============================] - 0s - loss: 0.5021 - acc: 0.7665 - val_loss: 0.6797 - val_acc: 0.6732\nEpoch 78/150\n514/514 [==============================] - 0s - loss: 0.4974 - acc: 0.7782 - val_loss: 0.6169 - val_acc: 0.7283\nEpoch 79/150\n514/514 [==============================] - 0s - loss: 0.5128 - acc: 0.7490 - val_loss: 0.6092 - val_acc: 0.7362\nEpoch 80/150\n514/514 [==============================] - 0s - loss: 0.5172 - acc: 0.7490 - val_loss: 0.6262 - val_acc: 0.7047\nEpoch 81/150\n514/514 [==============================] - 0s - loss: 0.4978 - acc: 0.7471 - val_loss: 0.6047 - val_acc: 0.7362\nEpoch 82/150\n514/514 [==============================] - 0s - loss: 0.5027 - acc: 0.7704 - val_loss: 0.6434 - val_acc: 0.6772\nEpoch 83/150\n514/514 [==============================] - 0s - loss: 0.5026 - acc: 0.7529 - val_loss: 0.6077 - val_acc: 0.7244\nEpoch 84/150\n514/514 [==============================] - 0s - loss: 0.5383 
- acc: 0.7529 - val_loss: 0.6169 - val_acc: 0.7205\nEpoch 85/150\n514/514 [==============================] - 0s - loss: 0.4882 - acc: 0.7685 - val_loss: 0.6228 - val_acc: 0.7165\nEpoch 86/150\n514/514 [==============================] - 0s - loss: 0.4995 - acc: 0.7529 - val_loss: 0.6546 - val_acc: 0.6811\nEpoch 87/150\n514/514 [==============================] - 0s - loss: 0.5166 - acc: 0.7646 - val_loss: 0.6051 - val_acc: 0.7283\nEpoch 88/150\n514/514 [==============================] - 0s - loss: 0.5017 - acc: 0.7510 - val_loss: 0.6108 - val_acc: 0.7126\nEpoch 89/150\n514/514 [==============================] - 0s - loss: 0.5304 - acc: 0.7626 - val_loss: 0.6625 - val_acc: 0.6850\nEpoch 90/150\n514/514 [==============================] - 0s - loss: 0.5193 - acc: 0.7490 - val_loss: 0.6013 - val_acc: 0.7283\nEpoch 91/150\n514/514 [==============================] - 0s - loss: 0.4943 - acc: 0.7743 - val_loss: 0.6079 - val_acc: 0.7283\nEpoch 92/150\n514/514 [==============================] - 0s - loss: 0.5184 - acc: 0.7529 - val_loss: 0.6213 - val_acc: 0.7126\nEpoch 93/150\n514/514 [==============================] - 0s - loss: 0.5042 - acc: 0.7451 - val_loss: 0.6311 - val_acc: 0.6811\nEpoch 94/150\n514/514 [==============================] - 0s - loss: 0.4954 - acc: 0.7607 - val_loss: 0.6025 - val_acc: 0.7323\nEpoch 95/150\n514/514 [==============================] - 0s - loss: 0.4917 - acc: 0.7529 - val_loss: 0.6167 - val_acc: 0.7205\nEpoch 96/150\n514/514 [==============================] - 0s - loss: 0.4894 - acc: 0.7490 - val_loss: 0.6030 - val_acc: 0.7441\nEpoch 97/150\n514/514 [==============================] - 0s - loss: 0.5054 - acc: 0.7471 - val_loss: 0.6028 - val_acc: 0.7283\nEpoch 98/150\n514/514 [==============================] - 0s - loss: 0.4863 - acc: 0.7840 - val_loss: 0.6134 - val_acc: 0.7244\nEpoch 99/150\n514/514 [==============================] - 0s - loss: 0.4870 - acc: 0.7471 - val_loss: 0.6442 - val_acc: 0.7087\nEpoch 100/150\n514/514 [==============================] - 0s - loss: 0.5013 - acc: 0.7704 - val_loss: 0.6560 - val_acc: 0.7008\nEpoch 101/150\n514/514 [==============================] - 0s - loss: 0.5459 - acc: 0.7374 - val_loss: 0.6787 - val_acc: 0.6890\nEpoch 102/150\n514/514 [==============================] - 0s - loss: 0.5184 - acc: 0.7626 - val_loss: 0.6015 - val_acc: 0.7244\nEpoch 103/150\n514/514 [==============================] - 0s - loss: 0.4902 - acc: 0.7724 - val_loss: 0.6074 - val_acc: 0.7244\nEpoch 104/150\n514/514 [==============================] - 0s - loss: 0.4989 - acc: 0.7763 - val_loss: 0.6312 - val_acc: 0.7126\nEpoch 105/150\n514/514 [==============================] - 0s - loss: 0.5064 - acc: 0.7549 - val_loss: 0.6232 - val_acc: 0.7205\nEpoch 106/150\n514/514 [==============================] - 0s - loss: 0.4992 - acc: 0.7588 - val_loss: 0.6248 - val_acc: 0.7283\nEpoch 107/150\n514/514 [==============================] - 0s - loss: 0.5326 - acc: 0.7451 - val_loss: 0.6476 - val_acc: 0.6890\nEpoch 108/150\n514/514 [==============================] - 0s - loss: 0.4882 - acc: 0.7802 - val_loss: 0.6395 - val_acc: 0.6654\nEpoch 109/150\n514/514 [==============================] - 0s - loss: 0.5056 - acc: 0.7743 - val_loss: 0.5995 - val_acc: 0.7362\nEpoch 110/150\n514/514 [==============================] - 0s - loss: 0.4828 - acc: 0.7782 - val_loss: 0.6349 - val_acc: 0.7087\nEpoch 111/150\n514/514 [==============================] - 0s - loss: 0.4801 - acc: 0.7821 - val_loss: 0.6462 - val_acc: 0.6732\nEpoch 112/150\n514/514 [==============================] - 0s 
- loss: 0.4938 - acc: 0.7549 - val_loss: 0.6039 - val_acc: 0.7244\nEpoch 113/150\n514/514 [==============================] - 0s - loss: 0.5066 - acc: 0.7802 - val_loss: 0.6126 - val_acc: 0.7283\nEpoch 114/150\n514/514 [==============================] - 0s - loss: 0.5020 - acc: 0.7490 - val_loss: 0.6032 - val_acc: 0.7283\nEpoch 115/150\n514/514 [==============================] - 0s - loss: 0.4987 - acc: 0.7549 - val_loss: 0.6063 - val_acc: 0.7165\nEpoch 116/150\n514/514 [==============================] - 0s - loss: 0.5207 - acc: 0.7646 - val_loss: 0.6163 - val_acc: 0.7087\nEpoch 117/150\n514/514 [==============================] - 0s - loss: 0.4993 - acc: 0.7471 - val_loss: 0.6008 - val_acc: 0.7283\nEpoch 118/150\n514/514 [==============================] - 0s - loss: 0.4954 - acc: 0.7685 - val_loss: 0.5984 - val_acc: 0.7362\nEpoch 119/150\n514/514 [==============================] - 0s - loss: 0.4874 - acc: 0.7743 - val_loss: 0.6070 - val_acc: 0.7362\nEpoch 120/150\n514/514 [==============================] - 0s - loss: 0.4909 - acc: 0.7568 - val_loss: 0.5990 - val_acc: 0.7402\nEpoch 121/150\n514/514 [==============================] - 0s - loss: 0.4781 - acc: 0.7588 - val_loss: 0.6695 - val_acc: 0.6890\nEpoch 122/150\n514/514 [==============================] - 0s - loss: 0.4861 - acc: 0.7665 - val_loss: 0.6198 - val_acc: 0.7402\nEpoch 123/150\n514/514 [==============================] - 0s - loss: 0.4786 - acc: 0.7626 - val_loss: 0.6440 - val_acc: 0.6811\nEpoch 124/150\n514/514 [==============================] - 0s - loss: 0.4830 - acc: 0.7665 - val_loss: 0.6391 - val_acc: 0.6811\nEpoch 125/150\n514/514 [==============================] - 0s - loss: 0.4707 - acc: 0.7685 - val_loss: 0.6373 - val_acc: 0.7205\nEpoch 126/150\n514/514 [==============================] - 0s - loss: 0.4953 - acc: 0.7802 - val_loss: 0.5981 - val_acc: 0.7441\nEpoch 127/150\n514/514 [==============================] - 0s - loss: 0.4689 - acc: 0.7957 - val_loss: 0.5971 - val_acc: 0.7362\nEpoch 128/150\n514/514 [==============================] - 0s - loss: 0.4938 - acc: 0.7685 - val_loss: 0.8264 - val_acc: 0.6850\nEpoch 129/150\n514/514 [==============================] - 0s - loss: 0.5091 - acc: 0.7626 - val_loss: 0.6702 - val_acc: 0.6772\nEpoch 130/150\n514/514 [==============================] - 0s - loss: 0.4939 - acc: 0.7665 - val_loss: 0.6371 - val_acc: 0.7205\nEpoch 131/150\n514/514 [==============================] - 0s - loss: 0.4974 - acc: 0.7607 - val_loss: 0.6433 - val_acc: 0.6890\nEpoch 132/150\n514/514 [==============================] - 0s - loss: 0.4957 - acc: 0.7743 - val_loss: 0.7286 - val_acc: 0.6772\nEpoch 133/150\n514/514 [==============================] - 0s - loss: 0.5178 - acc: 0.7743 - val_loss: 0.6052 - val_acc: 0.7323\nEpoch 134/150\n514/514 [==============================] - 0s - loss: 0.4986 - acc: 0.7626 - val_loss: 0.5918 - val_acc: 0.7520\nEpoch 135/150\n514/514 [==============================] - 0s - loss: 0.4705 - acc: 0.7704 - val_loss: 0.5899 - val_acc: 0.7441\nEpoch 136/150\n514/514 [==============================] - 0s - loss: 0.5097 - acc: 0.7724 - val_loss: 0.6217 - val_acc: 0.7126\nEpoch 137/150\n514/514 [==============================] - 0s - loss: 0.4939 - acc: 0.7704 - val_loss: 0.6085 - val_acc: 0.7402\nEpoch 138/150\n514/514 [==============================] - 0s - loss: 0.4722 - acc: 0.7626 - val_loss: 0.6014 - val_acc: 0.7402\nEpoch 139/150\n514/514 [==============================] - 0s - loss: 0.4921 - acc: 0.7626 - val_loss: 0.5945 - val_acc: 0.7362\nEpoch 140/150\n514/514 
[==============================] - 0s - loss: 0.4802 - acc: 0.7802 - val_loss: 0.6175 - val_acc: 0.7047\nEpoch 141/150\n514/514 [==============================] - 0s - loss: 0.4786 - acc: 0.7685 - val_loss: 0.6166 - val_acc: 0.7244\nEpoch 142/150\n514/514 [==============================] - 0s - loss: 0.4773 - acc: 0.7529 - val_loss: 0.5986 - val_acc: 0.7283\nEpoch 143/150\n514/514 [==============================] - 0s - loss: 0.4803 - acc: 0.7626 - val_loss: 0.6455 - val_acc: 0.7205\nEpoch 144/150\n514/514 [==============================] - 0s - loss: 0.4776 - acc: 0.7607 - val_loss: 0.5955 - val_acc: 0.7362\nEpoch 145/150\n514/514 [==============================] - 0s - loss: 0.4808 - acc: 0.7626 - val_loss: 0.6175 - val_acc: 0.7323\nEpoch 146/150\n514/514 [==============================] - 0s - loss: 0.4659 - acc: 0.7743 - val_loss: 0.6024 - val_acc: 0.7362\nEpoch 147/150\n514/514 [==============================] - 0s - loss: 0.4818 - acc: 0.7840 - val_loss: 0.6829 - val_acc: 0.7008\nEpoch 148/150\n514/514 [==============================] - 0s - loss: 0.5261 - acc: 0.7451 - val_loss: 0.6686 - val_acc: 0.7126\nEpoch 149/150\n514/514 [==============================] - 0s - loss: 0.4856 - acc: 0.7763 - val_loss: 0.6530 - val_acc: 0.7244\nEpoch 150/150\n514/514 [==============================] - 0s - loss: 0.4944 - acc: 0.7918 - val_loss: 0.6109 - val_acc: 0.7402\n"
]
],
[
[
"## Cross Validation",
"_____no_output_____"
]
],
[
[
"from keras.models import Sequential\nfrom keras.layers import Dense\nfrom sklearn.model_selection import StratifiedKFold\nimport numpy as np",
"_____no_output_____"
],
[
"seed = 7\nnp.random.seed(seed)",
"_____no_output_____"
],
[
"dataset = np.loadtxt('pima-indians-diabetes.data', delimiter=',')\nX = dataset[:, 0:8]\nY = dataset[:, 8]\nX.shape",
"_____no_output_____"
],
[
"kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)",
"_____no_output_____"
],
[
"cvscores = []\nfor train, test in kfold.split(X, Y):\n model = Sequential()\n model.add(Dense(12, input_dim=8, activation='relu'))\n model.add(Dense(8, activation='relu'))\n model.add(Dense(1, activation='sigmoid'))\n model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n model.fit(X[train], Y[train], epochs=150, batch_size=10, verbose=0)\n scores = model.evaluate(X[test], Y[test], verbose=0)\n print('%s: %.2f%%' % (model.metrics_names[1], scores[1] * 100))\n cvscores.append(scores[1] * 100)\nprint('%.2f%% (+/- %.2f%%' % (np.mean(cvscores), np.std(cvscores)))",
"acc: 72.73%\nacc: 74.03%\nacc: 75.97%\nacc: 65.36%\nacc: 68.63%\n71.34% (+/- 3.84%\n"
],
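[
"The manual loop above can also be driven by scikit-learn directly. The next cell is an editorial sketch, not part of the original notebook: it assumes the classic `keras.wrappers.scikit_learn.KerasClassifier` wrapper that matches the plain `keras` imports used here, and it reuses `X`, `Y` and `kfold` defined above.",
"_____no_output_____"
],
[
"# alternative sketch: let scikit-learn drive the cross-validation via the Keras wrapper\n# (assumes the classic keras.wrappers.scikit_learn API; reuses X, Y, kfold from above)\nfrom keras.wrappers.scikit_learn import KerasClassifier\nfrom sklearn.model_selection import cross_val_score\n\ndef create_model():\n    model = Sequential()\n    model.add(Dense(12, input_dim=8, activation='relu'))\n    model.add(Dense(8, activation='relu'))\n    model.add(Dense(1, activation='sigmoid'))\n    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n    return model\n\nestimator = KerasClassifier(build_fn=create_model, epochs=150, batch_size=10, verbose=0)\nresults = cross_val_score(estimator, X, Y, cv=kfold)\nprint('%.2f%% (+/- %.2f%%)' % (results.mean() * 100, results.std() * 100))",
"_____no_output_____"
],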
[
"X.shape",
"_____no_output_____"
],
[
"Y.shape",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76e6e071f1eab4ef64de85f71da4ff05c52271a | 8,100 | ipynb | Jupyter Notebook | notebooks/download_missing_wikipedia_photos.ipynb | adipasquale/green-ferries-admin | 12344992677b03d8139fe71f8710b5d118d073bf | [
"MIT"
] | 5 | 2020-02-18T00:29:20.000Z | 2020-12-16T12:35:07.000Z | notebooks/download_missing_wikipedia_photos.ipynb | adipasquale/green-ferries-admin | 12344992677b03d8139fe71f8710b5d118d073bf | [
"MIT"
] | 38 | 2020-02-15T11:11:47.000Z | 2020-12-16T12:06:02.000Z | notebooks/download_missing_wikipedia_photos.ipynb | greenferries/greenferries-admin | 12344992677b03d8139fe71f8710b5d118d073bf | [
"MIT"
] | null | null | null | 50 | 160 | 0.650123 | [
[
[
"import frontmatter\nimport re\nimport urllib\nimport requests\n\nDIRNAME = os.path.abspath('')\nWWW_SHIPS_DATA_PATH = os.path.join(DIRNAME, \"../../www/views/ships\")\nWWW_IMG_PATH = os.path.join(DIRNAME, \"../../www/assets/img\")",
"_____no_output_____"
],
[
"for ship_filename in os.listdir(WWW_SHIPS_DATA_PATH):\n md_match = re.match(r\"(.*)\\.md\", ship_filename)\n if not md_match:\n continue\n ship_slug = md_match.groups()[0]\n ship_frontmatter = frontmatter.load(os.path.join(WWW_SHIPS_DATA_PATH, ship_filename))\n wikipedia_url = ship_frontmatter.metadata.get(\"wikipediaUrl\")\n photo_path = ship_frontmatter.metadata.get(\"photo\", \"\")\n if photo_path != \"\" or not wikipedia_url:\n continue\n parsed = urllib.parse.urlparse(wikipedia_url)\n country, title = re.match(r\".*([a-z]{2,3})\\.wikipedia\\.org\\/wiki\\/(.*)\", wikipedia_url).groups()\n json_url = f\"http://{country}.wikipedia.org/w/api.php?action=query&titles={title}&prop=pageimages&format=json&pithumbsize=500\"\n data = requests.get(json_url).json()\n keys = list(data[\"query\"][\"pages\"])\n image_url = data[\"query\"][\"pages\"][keys[0]].get(\"thumbnail\", {}).get(\"source\")\n if not image_url:\n print(f\"could not find image url in {json_url}\")\n continue\n img_ext = image_url[-10:].split(\".\")[-1]\n img_filename = f\"{ship_slug}.{img_ext}\"\n with open(os.path.join(f\"{WWW_IMG_PATH}\", img_filename), \"wb\") as f:\n f.write(requests.get(image_url).content)\n print(f\"wrote image www/assets/js/{img_filename}\")\n ship_frontmatter.metadata[\"photo\"] = f\"/img/{img_filename}\"\n with open(os.path.join(WWW_SHIPS_DATA_PATH, ship_filename), \"w\") as f:\n f.write(frontmatter.dumps(ship_frontmatter))\n print(f\"wrote back md to www/views/ships/{ship_filename}\")\n ",
"wrote image www/assets/js/caribbean-princess-9215490.jpg\nwrote back md to www/views/ships/caribbean-princess-9215490.md\nwrote image www/assets/js/star-breeze-8807997.jpg\nwrote back md to www/views/ships/star-breeze-8807997.md\nwrote image www/assets/js/veendam-9102992.jpg\nwrote back md to www/views/ships/veendam-9102992.md\nwrote image www/assets/js/zuiderdam-9221279.JPG\nwrote back md to www/views/ships/zuiderdam-9221279.md\ncould not find image url in http://en.wikipedia.org/w/api.php?action=query&titles=MV_Nova_Star&prop=pageimages&format=json&pithumbsize=500\nwrote image www/assets/js/amsterdam-9188037.JPG\nwrote back md to www/views/ships/amsterdam-9188037.md\nwrote image www/assets/js/star-pride-8707343.JPG\nwrote back md to www/views/ships/star-pride-8707343.md\nwrote image www/assets/js/pride-of-york-8501957.jpg\nwrote back md to www/views/ships/pride-of-york-8501957.md\nwrote image www/assets/js/spirit-of-france-9533816.JPEG\nwrote back md to www/views/ships/spirit-of-france-9533816.md\nwrote image www/assets/js/wind-surf-8700785.jpg\nwrote back md to www/views/ships/wind-surf-8700785.md\ncould not find image url in http://en.wikipedia.org/w/api.php?action=query&titles=Le_Champlain&prop=pageimages&format=json&pithumbsize=500\nwrote image www/assets/js/coral-princess-9229659.jpg\nwrote back md to www/views/ships/coral-princess-9229659.md\nwrote image www/assets/js/sky-princess-9802396.jpg\nwrote back md to www/views/ships/sky-princess-9802396.md\nwrote image www/assets/js/emerald-princess-9333151.jpg\nwrote back md to www/views/ships/emerald-princess-9333151.md\nwrote image www/assets/js/pride-of-bruges-8503797.jpg\nwrote back md to www/views/ships/pride-of-bruges-8503797.md\nwrote image www/assets/js/crown-princess-9293399.jpg\nwrote back md to www/views/ships/crown-princess-9293399.md\nwrote image www/assets/js/spirit-of-britain-9524231.JPG\nwrote back md to www/views/ships/spirit-of-britain-9524231.md\nwrote image www/assets/js/norbay-9056595.jpg\nwrote back md to www/views/ships/norbay-9056595.md\ncould not find image url in http://en.wikipedia.org/w/api.php?action=query&titles=Le_Bougainville&prop=pageimages&format=json&pithumbsize=500\nwrote image www/assets/js/seabourn-sojourn-9417098.jpg\nwrote back md to www/views/ships/seabourn-sojourn-9417098.md\nwrote image www/assets/js/pride-of-hull-9208629.jpg\nwrote back md to www/views/ships/pride-of-hull-9208629.md\ncould not find image url in http://fr.wikipedia.org/w/api.php?action=query&titles=Baltic_Princess_(ferry)&prop=pageimages&format=json&pithumbsize=500\nwrote image www/assets/js/pride-of-kent-9015266.jpg\nwrote back md to www/views/ships/pride-of-kent-9015266.md\nwrote image www/assets/js/pride-of-burgundy-9015254.JPG\nwrote back md to www/views/ships/pride-of-burgundy-9015254.md\nwrote image www/assets/js/european-highlander-9244116.jpg\nwrote back md to www/views/ships/european-highlander-9244116.md\nwrote image www/assets/js/pride-of-canterbury-9007295.jpg\nwrote back md to www/views/ships/pride-of-canterbury-9007295.md\nwrote image www/assets/js/norbank-9056583.JPEG\nwrote back md to www/views/ships/norbank-9056583.md\nwrote image www/assets/js/visborg-9763655.jpg\nwrote back md to www/views/ships/visborg-9763655.md\ncould not find image url in http://en.wikipedia.org/w/api.php?action=query&titles=AIDAmira&prop=pageimages&format=json&pithumbsize=500\nwrote image www/assets/js/finbo-cargo-9181106.jpg\nwrote back md to www/views/ships/finbo-cargo-9181106.md\nwrote image 
www/assets/js/national-geographic-explorer-8019356.jpg\nwrote back md to www/views/ships/national-geographic-explorer-8019356.md\nwrote image www/assets/js/pride-of-rotterdam-9208617.jpg\nwrote back md to www/views/ships/pride-of-rotterdam-9208617.md\ncould not find image url in http://fr.wikipedia.org/w/api.php?action=query&titles=Viking_ADCC&prop=pageimages&format=json&pithumbsize=500\ncould not find image url in http://de.wikipedia.org/w/api.php?action=query&titles=Victoria_Seaways&prop=pageimages&format=json&pithumbsize=500\nwrote image www/assets/js/european-seaway-9007283.JPG\nwrote back md to www/views/ships/european-seaway-9007283.md\nwrote image www/assets/js/wind-star-8420878.jpg\nwrote back md to www/views/ships/wind-star-8420878.md\nwrote image www/assets/js/sun-princess-9000259.jpg\nwrote back md to www/views/ships/sun-princess-9000259.md\n"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
e76e6f805fc09b50d6911a6c8736e78cc61865dd | 183,903 | ipynb | Jupyter Notebook | testing_of_model.ipynb | Rishbah-76/indian_licenseplate_recognition | 9c66474f725432cfe08636e14d2fbbf054ac5227 | [
"OLDAP-2.3"
] | 3 | 2021-06-29T07:33:39.000Z | 2021-11-10T12:26:50.000Z | testing_of_model.ipynb | Rishbah-76/indian_licenseplate_recognition | 9c66474f725432cfe08636e14d2fbbf054ac5227 | [
"OLDAP-2.3"
] | null | null | null | testing_of_model.ipynb | Rishbah-76/indian_licenseplate_recognition | 9c66474f725432cfe08636e14d2fbbf054ac5227 | [
"OLDAP-2.3"
] | 4 | 2021-06-30T10:00:22.000Z | 2021-07-05T05:35:26.000Z | 445.285714 | 132,732 | 0.93726 | [
[
[
"import numpy as np\nimport cv2\nimport matplotlib.pyplot as plt\nfrom keras import models\nimport keras.backend as K\nimport tensorflow as tf\nfrom sklearn.metrics import f1_score\nimport requests\nimport xmltodict\nimport json",
"Using TensorFlow backend.\n"
],
[
"plateCascade = cv2.CascadeClassifier('indian_license_plate.xml')",
"_____no_output_____"
],
[
"#detect the plate and return car + plate image\ndef plate_detect(img):\n plateImg = img.copy()\n roi = img.copy()\n plateRect = plateCascade.detectMultiScale(plateImg,scaleFactor = 1.2, minNeighbors = 7)\n for (x,y,w,h) in plateRect:\n roi_ = roi[y:y+h, x:x+w, :]\n plate_part = roi[y:y+h, x:x+w, :]\n cv2.rectangle(plateImg,(x+2,y),(x+w-3, y+h-5),(0,255,0),3)\n return plateImg, plate_part",
"_____no_output_____"
],
[
"#normal function to display \ndef display_img(img):\n img_ = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)\n plt.imshow(img_)\n plt.show()",
"_____no_output_____"
],
[
"#test image is used for detecting plate\ninputImg = cv2.imread('test.jpeg')\ninpImg, plate = plate_detect(inputImg)\ndisplay_img(inpImg)",
"_____no_output_____"
],
[
"def find_contours(dimensions, img) :\n\n #finding all contours in the image using \n #retrieval mode: RETR_TREE\n #contour approximation method: CHAIN_APPROX_SIMPLE\n cntrs, _ = cv2.findContours(img.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)\n\n #Approx dimensions of the contours\n lower_width = dimensions[0]\n upper_width = dimensions[1]\n lower_height = dimensions[2]\n upper_height = dimensions[3]\n \n #Check largest 15 contours for license plate character respectively\n cntrs = sorted(cntrs, key=cv2.contourArea, reverse=True)[:15]\n \n ci = cv2.imread('contour.jpg')\n \n x_cntr_list = []\n target_contours = []\n img_res = []\n for cntr in cntrs :\n #detecting contour in binary image and returns the coordinates of rectangle enclosing it\n intX, intY, intWidth, intHeight = cv2.boundingRect(cntr)\n \n #checking the dimensions of the contour to filter out the characters by contour's size\n if intWidth > lower_width and intWidth < upper_width and intHeight > lower_height and intHeight < upper_height :\n x_cntr_list.append(intX) \n char_copy = np.zeros((44,24))\n #extracting each character using the enclosing rectangle's coordinates.\n char = img[intY:intY+intHeight, intX:intX+intWidth]\n char = cv2.resize(char, (20, 40))\n cv2.rectangle(ci, (intX,intY), (intWidth+intX, intY+intHeight), (50,21,200), 2)\n plt.imshow(ci, cmap='gray')\n char = cv2.subtract(255, char)\n char_copy[2:42, 2:22] = char\n char_copy[0:2, :] = 0\n char_copy[:, 0:2] = 0\n char_copy[42:44, :] = 0\n char_copy[:, 22:24] = 0\n img_res.append(char_copy) # List that stores the character's binary image (unsorted)\n \n #return characters on ascending order with respect to the x-coordinate\n \n plt.show()\n #arbitrary function that stores sorted list of character indeces\n indices = sorted(range(len(x_cntr_list)), key=lambda k: x_cntr_list[k])\n img_res_copy = []\n for idx in indices:\n img_res_copy.append(img_res[idx])# stores character images according to their index\n img_res = np.array(img_res_copy)\n\n return img_res",
"_____no_output_____"
],
[
"def segment_characters(image) :\n\n #pre-processing cropped image of plate\n #threshold: convert to pure b&w with sharpe edges\n #erod: increasing the backgroung black\n #dilate: increasing the char white\n img_lp = cv2.resize(image, (333, 75))\n img_gray_lp = cv2.cvtColor(img_lp, cv2.COLOR_BGR2GRAY)\n _, img_binary_lp = cv2.threshold(img_gray_lp, 200, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)\n img_binary_lp = cv2.erode(img_binary_lp, (3,3))\n img_binary_lp = cv2.dilate(img_binary_lp, (3,3))\n\n LP_WIDTH = img_binary_lp.shape[0]\n LP_HEIGHT = img_binary_lp.shape[1]\n img_binary_lp[0:3,:] = 255\n img_binary_lp[:,0:3] = 255\n img_binary_lp[72:75,:] = 255\n img_binary_lp[:,330:333] = 255\n\n #estimations of character contours sizes of cropped license plates\n dimensions = [LP_WIDTH/6,\n LP_WIDTH/2,\n LP_HEIGHT/10,\n 2*LP_HEIGHT/3]\n plt.imshow(img_binary_lp, cmap='gray')\n plt.show()\n cv2.imwrite('contour.jpg',img_binary_lp)\n\n #getting contours\n char_list = find_contours(dimensions, img_binary_lp)\n\n return char_list",
"_____no_output_____"
],
[
"char = segment_characters(plate)",
"_____no_output_____"
],
[
"for i in range(10):\n plt.subplot(1, 10, i+1)\n plt.imshow(char[i], cmap='gray')\n plt.axis('off')",
"_____no_output_____"
],
[
"#It is the harmonic mean of precision and recall\n#Output range is [0, 1]\n#Works for both multi-class and multi-label classification\n\ndef f1score(y, y_pred):\n return f1_score(y, tf.math.argmax(y_pred, axis=1), average='micro') \n\ndef custom_f1score(y, y_pred):\n return tf.py_function(f1score, (y, y_pred), tf.double)",
"_____no_output_____"
],
[
" model = models.load_model('license_plate_character.pkl', custom_objects= {'custom_f1score': custom_f1score})",
"_____no_output_____"
],
[
"def fix_dimension(img):\n new_img = np.zeros((28,28,3))\n for i in range(3):\n new_img[:,:,i] = img\n return new_img\n \ndef show_results():\n dic = {}\n characters = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'\n for i,c in enumerate(characters):\n dic[i] = c\n\n output = []\n for i,ch in enumerate(char): \n img_ = cv2.resize(ch, (28,28), interpolation=cv2.INTER_AREA)\n img = fix_dimension(img_)\n img = img.reshape(1,28,28,3)\n y_ = model.predict_classes(img)[0]\n character = dic[y_] #\n output.append(character) \n \n plate_number = ''.join(output)\n \n return plate_number\n\nfinal_plate = show_results()\nprint(final_plate)",
"IMH20EE7598\n"
],
[
"def get_vehicle_info(plate_number):\n r = requests.get(\"http://www.regcheck.org.uk/api/reg.asmx/CheckIndia?RegistrationNumber={0}&username=licenseguy\".format(str(plate_number)))\n data = xmltodict.parse(r.content)\n jdata = json.dumps(data)\n df = json.loads(jdata)\n df1 = json.loads(df['Vehicle']['vehicleJson'])\n return df1\n",
"_____no_output_____"
],
[
"if len(final_plate) > 10:\n final_plate = final_plate[-10:]\n print(final_plate)",
"_____no_output_____"
],
[
"get_vehicle_info(final_plate)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76e770f5b5c593238f773839791d90b1029d235 | 15,588 | ipynb | Jupyter Notebook | 2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb | aqafridi/AI-Engineering-Specialization | 83c9526d5dfdb13248fcfddc76e3d095fc43c258 | [
"MIT"
] | 1 | 2022-02-21T09:24:55.000Z | 2022-02-21T09:24:55.000Z | 2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb | aqafridi/AI-Engineering-Specialization | 83c9526d5dfdb13248fcfddc76e3d095fc43c258 | [
"MIT"
] | null | null | null | 2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb | aqafridi/AI-Engineering-Specialization | 83c9526d5dfdb13248fcfddc76e3d095fc43c258 | [
"MIT"
] | null | null | null | 36.851064 | 1,754 | 0.612907 | [
[
[
"<a href=\"https://cognitiveclass.ai\"><img src = \"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/Logos/organization_logo/organization_logo.png\" width = 400> </a>\n\n<h1 align=center><font size = 5>Convolutional Neural Networks with Keras</font></h1>\n",
"_____no_output_____"
],
[
"In this lab, we will learn how to use the Keras library to build convolutional neural networks. We will also use the popular MNIST dataset and we will compare our results to using a conventional neural network.\n",
"_____no_output_____"
],
[
"<h2>Convolutional Neural Networks with Keras</h2>\n\n<h3>Objective for this Notebook<h3> \n<h5> 1. How to use the Keras library to build convolutional neural networks.</h5>\n<h5> 2. Convolutional Neural Network with One Convolutional and Pooling Layers.</h5>\n<h5> 3. Convolutional Neural Network with Two Convolutional and Pooling Layers.</h5>\n",
"_____no_output_____"
],
[
"## Table of Contents\n\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n\n<font size = 3>\n \n1. <a href=\"#item41\">Import Keras and Packages</a> \n2. <a href=\"#item42\">Convolutional Neural Network with One Convolutional and Pooling Layers</a> \n3. <a href=\"#item43\">Convolutional Neural Network with Two Convolutional and Pooling Layers</a> \n\n</font>\n</div>\n",
"_____no_output_____"
],
[
"<a id='item41'></a>\n",
"_____no_output_____"
],
[
"## Import Keras and Packages\n",
"_____no_output_____"
],
[
"Let's start by importing the keras libraries and the packages that we would need to build a neural network.\n",
"_____no_output_____"
]
],
[
[
"import tensorflow.keras\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.keras.utils import to_categorical",
"_____no_output_____"
]
],
[
[
"When working with convolutional neural networks in particular, we will need additional packages.\n",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.layers import Conv2D # to add convolutional layers\nfrom tensorflow.keras.layers import MaxPooling2D # to add pooling layers\nfrom tensorflow.keras.layers import Flatten # to flatten data for fully connected layers",
"_____no_output_____"
]
],
[
[
"<a id='item42'></a>\n",
"_____no_output_____"
],
[
"## Convolutional Layer with One set of convolutional and pooling layers\n",
"_____no_output_____"
]
],
[
[
"# import data\nfrom tensorflow.keras.datasets import mnist\n\n# load data\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\n\n# reshape to be [samples][pixels][width][height]\nX_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32')\nX_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32')",
"_____no_output_____"
]
],
[
[
"Let's normalize the pixel values to be between 0 and 1\n",
"_____no_output_____"
]
],
[
[
"X_train = X_train / 255 # normalize training data\nX_test = X_test / 255 # normalize test data",
"_____no_output_____"
]
],
[
[
"Next, let's convert the target variable into binary categories\n",
"_____no_output_____"
]
],
[
[
"y_train = to_categorical(y_train)\ny_test = to_categorical(y_test)\n\nnum_classes = y_test.shape[1] # number of categories",
"_____no_output_____"
]
],
[
[
"Next, let's define a function that creates our model. Let's start with one set of convolutional and pooling layers.\n",
"_____no_output_____"
]
],
[
[
"def convolutional_model():\n \n # create model\n model = Sequential()\n model.add(Conv2D(16, (5, 5), strides=(1, 1), activation='relu', input_shape=(28, 28, 1)))\n model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))\n \n model.add(Flatten())\n model.add(Dense(100, activation='relu'))\n model.add(Dense(num_classes, activation='softmax'))\n \n # compile model\n model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\n return model",
"_____no_output_____"
]
],
[
[
"Finally, let's call the function to create the model, and then let's train it and evaluate it.\n",
"_____no_output_____"
]
],
[
[
"# build the model\nmodel = convolutional_model()\n\n# fit the model\nmodel.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)\n\n# evaluate the model\nscores = model.evaluate(X_test, y_test, verbose=0)\nprint(\"Accuracy: {} \\n Error: {}\".format(scores[1], 100-scores[1]*100))",
"WARNING:tensorflow:From /home/jupyterlab/conda/envs/python/lib/python3.7/site-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\nInstructions for updating:\nCall initializer instance with the dtype argument instead of passing it to the constructor\nTrain on 60000 samples, validate on 10000 samples\nEpoch 1/10\n60000/60000 - 43s - loss: 0.2902 - acc: 0.9203 - val_loss: 0.1027 - val_acc: 0.9695\nEpoch 2/10\n60000/60000 - 43s - loss: 0.0866 - acc: 0.9751 - val_loss: 0.0647 - val_acc: 0.9785\nEpoch 3/10\n60000/60000 - 43s - loss: 0.0591 - acc: 0.9827 - val_loss: 0.0489 - val_acc: 0.9847\nEpoch 4/10\n60000/60000 - 43s - loss: 0.0458 - acc: 0.9862 - val_loss: 0.0415 - val_acc: 0.9867\nEpoch 5/10\n60000/60000 - 43s - loss: 0.0355 - acc: 0.9892 - val_loss: 0.0371 - val_acc: 0.9876\nEpoch 6/10\n60000/60000 - 44s - loss: 0.0295 - acc: 0.9911 - val_loss: 0.0378 - val_acc: 0.9870\nEpoch 7/10\n60000/60000 - 43s - loss: 0.0235 - acc: 0.9926 - val_loss: 0.0358 - val_acc: 0.9877\nEpoch 8/10\n60000/60000 - 43s - loss: 0.0195 - acc: 0.9942 - val_loss: 0.0363 - val_acc: 0.9882\nEpoch 9/10\n60000/60000 - 43s - loss: 0.0163 - acc: 0.9953 - val_loss: 0.0353 - val_acc: 0.9880\nEpoch 10/10\n60000/60000 - 43s - loss: 0.0133 - acc: 0.9962 - val_loss: 0.0331 - val_acc: 0.9888\nAccuracy: 0.9887999892234802 \n Error: 1.1200010776519775\n"
]
],
[
[
"* * *\n",
"_____no_output_____"
],
[
"<a id='item43'></a>\n",
"_____no_output_____"
],
[
"## Convolutional Layer with two sets of convolutional and pooling layers\n",
"_____no_output_____"
],
[
"Let's redefine our convolutional model so that it has two convolutional and pooling layers instead of just one layer of each.\n",
"_____no_output_____"
]
],
[
[
"def convolutional_model():\n \n # create model\n model = Sequential()\n model.add(Conv2D(16, (5, 5), activation='relu', input_shape=(28, 28, 1)))\n model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))\n \n model.add(Conv2D(8, (2, 2), activation='relu'))\n model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))\n \n model.add(Flatten())\n model.add(Dense(100, activation='relu'))\n model.add(Dense(num_classes, activation='softmax'))\n \n # Compile model\n model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\n return model",
"_____no_output_____"
]
],
[
[
"Now, let's call the function to create our new convolutional neural network, and then let's train it and evaluate it.\n",
"_____no_output_____"
]
],
[
[
"# build the model\nmodel = convolutional_model()\n\n# fit the model\nmodel.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)\n\n# evaluate the model\nscores = model.evaluate(X_test, y_test, verbose=0)\nprint(\"Accuracy: {} \\n Error: {}\".format(scores[1], 100-scores[1]*100))",
"Train on 60000 samples, validate on 10000 samples\nEpoch 1/10\n60000/60000 - 47s - loss: 0.4901 - acc: 0.8633 - val_loss: 0.1385 - val_acc: 0.9570\nEpoch 2/10\n60000/60000 - 47s - loss: 0.1185 - acc: 0.9642 - val_loss: 0.0848 - val_acc: 0.9728\nEpoch 3/10\n60000/60000 - 47s - loss: 0.0831 - acc: 0.9740 - val_loss: 0.0633 - val_acc: 0.9813\nEpoch 4/10\n60000/60000 - 47s - loss: 0.0657 - acc: 0.9795 - val_loss: 0.0661 - val_acc: 0.9783\nEpoch 5/10\n60000/60000 - 47s - loss: 0.0566 - acc: 0.9830 - val_loss: 0.0514 - val_acc: 0.9843\nEpoch 6/10\n60000/60000 - 47s - loss: 0.0496 - acc: 0.9845 - val_loss: 0.0476 - val_acc: 0.9868\nEpoch 7/10\n60000/60000 - 47s - loss: 0.0432 - acc: 0.9869 - val_loss: 0.0478 - val_acc: 0.9857\nEpoch 8/10\n60000/60000 - 47s - loss: 0.0400 - acc: 0.9873 - val_loss: 0.0497 - val_acc: 0.9848\nEpoch 9/10\n60000/60000 - 47s - loss: 0.0364 - acc: 0.9887 - val_loss: 0.0406 - val_acc: 0.9873\nEpoch 10/10\n60000/60000 - 47s - loss: 0.0325 - acc: 0.9899 - val_loss: 0.0373 - val_acc: 0.9883\nAccuracy: 0.9883000254631042 \n Error: 1.1699974536895752\n"
]
],
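[
[
"The introduction promises a comparison with a conventional (fully connected) network, but the lab does not include one at this point. The following cell is an editorial sketch only: it reuses the data and imports defined above, and the layer sizes are arbitrary illustrative choices rather than values taken from the course.",
"_____no_output_____"
],
[
"# illustrative fully connected baseline for comparison (not part of the original lab)\ndef dense_model():\n    model = Sequential()\n    model.add(Flatten(input_shape=(28, 28, 1)))  # flatten the 28x28x1 images\n    model.add(Dense(100, activation='relu'))\n    model.add(Dense(num_classes, activation='softmax'))\n    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\n    return model\n\nbaseline = dense_model()\nbaseline.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)\nbaseline_scores = baseline.evaluate(X_test, y_test, verbose=0)\nprint('Baseline accuracy: {}'.format(baseline_scores[1]))",
"_____no_output_____"
]
],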
[
[
"### Thank you for completing this lab!\n\nThis notebook was created by [Alex Aklson](https://www.linkedin.com/in/aklson?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-DL0101EN-SkillsNetwork-20718188&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-DL0101EN-SkillsNetwork-20718188&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ). I hope you found this lab interesting and educational. Feel free to contact me if you have any questions!\n",
"_____no_output_____"
],
[
"## Change Log\n\n| Date (YYYY-MM-DD) | Version | Changed By | Change Description |\n| ----------------- | ------- | ---------- | ----------------------------------------------------------- |\n| 2020-09-21 | 2.0 | Srishti | Migrated Lab to Markdown and added to course repo in GitLab |\n\n<hr>\n\n## <h3 align=\"center\"> © IBM Corporation 2020. All rights reserved. <h3/>\n",
"_____no_output_____"
],
[
"This notebook is part of a course on **Coursera** called _Introduction to Deep Learning & Neural Networks with Keras_. If you accessed this notebook outside the course, you can take this course online by clicking [here](https://cocl.us/DL0101EN_Coursera_Week4_LAB1).\n",
"_____no_output_____"
],
[
"<hr>\n\nCopyright © 2019 [IBM Developer Skills Network](https://cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-DL0101EN-SkillsNetwork-20718188&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-DL0101EN-SkillsNetwork-20718188&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-DL0101EN-SkillsNetwork-20718188&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-DL0101EN-SkillsNetwork-20718188&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-DL0101EN-SkillsNetwork-20718188&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-DL0101EN-SkillsNetwork-20718188&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ).\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e76e7c575f1c4e6552034416b7811745d7d5dc6d | 267,897 | ipynb | Jupyter Notebook | Multivariable Differential Calculus and its Application.ipynb | reata/Calculus | 4256303d22bafa787d73b9dbaa61346804b215c9 | [
"MIT"
] | null | null | null | Multivariable Differential Calculus and its Application.ipynb | reata/Calculus | 4256303d22bafa787d73b9dbaa61346804b215c9 | [
"MIT"
] | null | null | null | Multivariable Differential Calculus and its Application.ipynb | reata/Calculus | 4256303d22bafa787d73b9dbaa61346804b215c9 | [
"MIT"
] | null | null | null | 315.916274 | 116,650 | 0.893694 | [
[
[
"# 多元函数微分法及其应用\n\n只有一个自变量的函数叫做一元函数。在很多实际问题中往往牵涉到多方面的因素,反映到数学上,就是一个变量依赖于多个变量的情形。这就提出了多元函数以及多元函数的微分和积分问题。本章将在一元函数微分学的基础上,讨论多元函数的微分法及其应用。讨论中我们以二元函数为主,因为从一元函数到二元函数会产生新的问题,而从二元函数到二元以上的多元函数则可以类推。\n\n本节包括以下内容:\n1. 多元函数的基本概念\n2. 偏导数\n3. 全微分\n4. 多元复合函数的求导法则\n5. 隐函数的求导公式\n6. 多元函数微分学的几何应用\n7. 方向导数和梯度\n8. 多元函数的极值及其求法\n9. 二元函数的泰勒公式\n10. 最小二乘法",
"_____no_output_____"
],
[
"### 1. 多元函数的基本概念\n\n#### 1.1 平面点集 n维空间\n\n在讨论一元函数时,一些概念、理论和方法,都是基于 $\\mathbb{R}^1$ 中的点集、两点间的距离、区间和邻域等概念。为了将一元函数微积分推广到多元的情形,首先需要将上述概念加以推广,同时还需涉及一些其他概念。为此先引入平面点集的一些基本概念,将有关概念从 $\\mathbb{R}^1$ 中的情形推广到 $\\mathbb{R}^2$ 中;然后引入 $n$ 维空间,以便推广到一般的 $\\mathbb{R}^n$ 中。\n\n**平面点集**:由平面解析几何知道,当在平面上引入一个直角坐标系后,平面上的点 $P$ 与有序二元实数组 $(x,y)$ 之间就建立了一一对应。于是,我们常把有序实数组 $(x,y)$ 与平面上的点 $P$ 视作是等同的。这种建立了坐标系的平面称为坐标平面。二元有序实数组 $(x,y)$ 的全体,即 $\\mathbb{R}^2=\\mathbb{R} \\times \\mathbb{R}=\\{(x,y)|x,y\\in\\mathbb{R}\\}$ 就表示坐标平面。\n\n坐标平面上具有某种性质 $P$ 的点的集合,称为平面点集,记作 $E=\\{(x,y)|(x,y)具有性质P\\}$。\n\n现在我们来引入 $\\mathbb{R}^2$ 中邻域的概念。\n\n设 $P_0(x_0,y_0)$ 是 $xOy$ 平面上的一个点,$\\delta$ 是某一正数。与点 $P_0(x_0,y_0)$ 距离小于 $\\delta$ 的点 $P(x,y)$ 的全体,称为点 $P_0$ 的 $\\delta$ 邻域,记作 $U(P_0,\\delta)$,即\n$$ U(P_0,\\delta)=\\{P||PP_0|<\\delta\\} $$\n也就是\n$$ U(P_0,\\delta)=\\{(x,y)|\\sqrt{(x-x_0)^2+(y-y_0)^2}<\\delta\\} $$\n\n点 $P_0$ 的去心 $\\delta$ 邻域,记作 $\\mathring{U}(P_0, \\delta)$,即\n$$\\mathring{U}(P_0, \\delta)=\\{P|0<|PP_0|<\\delta\\}$$\n\n在几何上,$U(P_0,\\delta)$ 就是 $xOy$ 平面上以点 $P_0(x_0,y_0)$ 为中心,$\\delta>0$ 为半径的圆内部的点 $P(x,y)$ 的全体。\n\n如果不需要强调邻域的半径 $\\delta$,则用 $U(P_0)$ 表示点 $P_0$ 的某个邻域,点 $P_0$ 的去心邻域记作 $\\mathring{U}(P_0)$。\n\n下面利用邻域来描述点和点集之间的关系。\n\n任意一点 $P \\in \\mathbb{R}^2$ 与任意一个点集 $E \\subset \\mathbb{R}^2$ 之间必有以下三种关系中的一种:\n\n1. **内点**:如果存在点 $P$ 的某个邻域 $U(P)$,使得 $U(P) \\subset E$,则称 $P$ 为 $E$ 的内点。\n2. **外点**:如果存在点 $P$ 的某个邻域 $U(P)$,使得 $U(P) \\cap E = \\emptyset$,则称 $P$ 为 $E$ 的外点。\n3. **边界点**:如果点 $P$ 的任一邻域内既含有属于 $E$ 的点,又含有不属于 $E$ 的点,则称 $P$ 为 $E$ 的边界点。\n\n$E$ 的所有边界点的全体,称为 $E$ 的**边界**,记作 $\\partial E$。\n\n$E$ 的内点必属于 $E$;$E$ 的外点不定不属于 $E$;而 $E$ 的边界点可能属于 $E$,也可能不属于 $E$。\n\n根据点集所属点的特征,再来定义一些重要的平面点集。\n\n1. **开集**:如果点集 $E$ 的点都是 $E$ 的内点,则称 $E$ 为开集。\n2. **闭集**:如果点集 $E$ 的边界 $\\partial E \\subset E$,则称 $E$ 为闭集。\n3. **连通集**:如果点集 $E$ 内的任何两点,都可用折线联结起来,且该折线上的点都属于 $E$,则称 $E$ 为连通集。\n4. **区域(开区域)**:连通的开集称为区域或开区域。\n5. **闭区域**:开区域连通它的边界一起所构成的点集称为闭区域。\n6. **有界集**:对于平面点集 $E$,如果存在某一正数 $r$,使得 $E \\subset U(O,r)$,其中 $O$ 是坐标原点,则称 $E$ 为有界集。\n7. 
**无界集**:一个集合如果不是有界集,就称这集合为无界集。\n\n**n维空间**:设 $n$ 为取定的一个正整数,我们用 $\\mathbb{R}^n$ 表示 $n$ 元有序实数组 $(x_1, x_2,...,x_n)$ 的全体构成的集合,即\n$$ \\mathbb{R}^n = \\mathbb{R} \\times \\mathbb{R} \\times \\cdots \\times \\mathbb{R} = \\{(x_1, x_2,..., x_n)|x_i \\in \\mathbb{R}, i=1,2,...,n\\} $$\n$\\mathbb{R}^n$ 中的元素 $(x_1, x_2,...,x_n)$ 有时也用单个字母 $x$ 来表示,即 $x=(x_1,x_2,...,x_n)$。当所有的 $x_i(i=1,2,...,n)$ 都为零时,称这样的元素为 $\\mathbb{R}^n$ 中的零元,即为 $0$ 或 $O$。在解析几何中,通过直角坐标系,$\\mathbb{R}^2$(或 $\\mathbb{R}^3$)中的元素分别与平面(或空间)中的点或向量建立一一对应,因而 $\\mathbb{R}^n$ 中的元素 $x=(x_1,x_2,...,x_n)$ 也称为 $\\mathbb{R}^n$ 中的一个点或一个 $n$ 维向量,$x_i$ 称为点 $x$ 的第 $i$ 个坐标或 $n$ 维向量 $x$ 的第 $i$ 个分量。特别地,$\\mathbb{R}^n$ 中的零元 $0$ 称为 $\\mathbb{R}^n$ 中的坐标原点或 $n$ 维零向量。\n\n为了在集合 $\\mathbb{R}^n$ 中的元素之间建立联系,在 $\\mathbb{R}^n$ 中定义线性运算如下:\n\n设 $x=(x_1,x_2,...,x_n), y=(y_1,y_2,...,y_n)$ 为 $\\mathbb{R}^n$ 中任意两个元素,$\\lambda \\in \\mathbb{R}$,规定:\n$$\n\\begin{split}\n& x+y=(x_1+y_1, x_2+y_2,...,x_n+y_n) \\\\\n& \\lambda x = (\\lambda x_1, \\lambda x_2, ..., \\lambda x_n)\n\\end{split}\n$$\n这样定义了线性运算的集合 $\\mathbb{R}^n$ 称为 $n$ 维空间。\n\n$\\mathbb{R}^n$ 中点 $x=(x_1,x_2,...,x_n)$ 和点 $y=(y_1,y_2,...,y_n)$ 间的距离,记作 $\\rho(x,y)$,规定\n$$ \\rho(x,y) = \\sqrt{(x_1-y_1)^2+(x_2-y_2)^2+\\cdots+(x_n-y_n)^2} $$\n显然,$n=1,2,3$ 时,上述规定与数轴上、直角坐标系下平面及空间中两点间的距离一致。\n\n$\\mathbb{R}^n$ 中元素 $x=(x_1,x_2,...,x_n)$ 与零元 $0$ 之间的距离 $\\rho(x, 0)$ 记作 $||x||$(在 $\\mathbb{R}^1, \\mathbb{R}^2, \\mathbb{R}^3$ 中,通常将 $||x||$ 记作 $|x|$),即\n$$ ||x|| = \\sqrt{x_1^2+x_2^2+\\cdots+x_n^2} $$\n采用这一记号,结合向量的线性运算,便得\n$$ ||x-y|| = \\sqrt{(x_1-y_1)^2+(x_2-y_2)^2+\\cdots+(x_n-y_n)^2} = \\rho(x,y) $$\n\n在 $n$ 维空间 $\\mathbb{R}^n$ 中定义了距离以后,就可以定义 $\\mathbb{R}^n$ 中变元的极限:\n\n设 $x=(x_1,x_2,...,x_n), a=(a_1,a_2,...,a_n) \\in \\mathbb{R}^n$。 如果\n$$ ||x-a|| \\rightarrow 0 $$\n则称变元 $x$ 在 $\\mathbb{R}^n$ 中趋于固定元 $a$,记作 $x \\rightarrow a$,显然\n$$ x \\rightarrow a \\Leftrightarrow x_1 \\rightarrow a_1, x_2 \\rightarrow a_2, \\cdots, x_n \\rightarrow a_n $$\n\n在 $\\mathbb{R}^n$ 中线性运算和距离的引入,使得前面讨论过的有关平面点集的一系列概念,可以方便地引入到 $n(n \\geq 3)$ 维平面中来,例如:\n\n设 $a=(a_1,a_2,...,a_n) \\in \\mathbb{R}^n$,$\\delta$ 是某一正数,则 $n$ 维空间内的点集\n$$ U(a, \\delta) = \\{x|x \\in \\mathbb{R}^n, \\rho(x,a) < \\delta \\} $$\n就定义为 $\\mathbb{R}^n$ 中点 $a$ 的邻域。以邻域为基础,可以定义点集的内点、外电、边界点以及开集、闭集、区域等一系列概念,这里不再赘述。",
"_____no_output_____"
],
[
"#### 1.2 多元函数概念\n\n**定义1 设 $D$ 是 $\\mathbb{R}^2$ 的一个非空子集,称映射 $f:D \\rightarrow \\mathbb{R}$ 为定义在 $D$ 上的二元函数,通常记为\n$$ z=f(x,y), (x,y) \\in D $$\n或\n$$ z=f(P), P \\in D $$\n其中点集 $D$ 称为该函数的定义域,$x,y$ 称为自变量,$z$ 称为因变量。\n**\n\n上述定义中,与自变量 $x,y$ 的一对值(即二元有序实数组)$(x,y)$ 相对应的因变量 $z$ 的值,也称为 $f$ 在点 $(x,y)$ 处的函数值,记作 $f(x,y)$,即 $z=f(x,y)$。函数值 $f(x,y)$ 全体所构成的集合称为函数 $f$ 的值域,记作 $f(D)$,即\n$$ f(D)=\\{z|z=f(x,y),(x,y) \\in D\\} $$\n\n与一元函数的情形相仿,记号 $f$ 和 $f(x,y)$ 的意义是有区别的,但习惯上常用记号 $f(x,y),(x,y) \\in D$ 或 $z=f(x,y),(x,y) \\in D$ 来表示 $D$ 上的二元函数 $f$。表示二元函数的记号 $f$ 也是可以任意选取的,例如也可以记为 $z=\\phi(x,y), z=z(x,y)$ 等。\n\n类似地可以定义三元函数 $u=f(x,y,z),(x,y,z) \\in D$ 以及三元以上的函数。一般的,把定义1中的平面点集 $D$ 换成 $n$ 维空间 $\\mathbb{R}^n$ 内的点集 $D$,映射 $f:D \\rightarrow \\mathbb{R}$ 就称为定义在 $D$ 上的** $n$ 元函数**,通常记为\n$$ u=f(x_1,x_2,\\cdots,x_n),(x_1,x_2,\\cdots,x_n) \\in D $$\n或简记为\n$$ u=f(x),x=(x_1,x_2,\\cdots,x_n) \\in D $$\n也可记为\n$$ u=f(P),P(x_1,x_2,\\cdots,x_n) \\in D $$\n\n在 $n=2$ 或 $n=3$ 时,习惯上将点 $(x_1, x_2)$ 与点 $(x_1, x_2, x_3)$ 分别写成 $(x,y)$ 与 $(x,y,z)$。这时,若用字母表示 $\\mathbb{R}^2$ 或 $\\mathbb{R}^3$ 中的点,即写成 $P(x,y)$ 或 $M(x,y,z)$,则相应的二元函数及三元函数也可简记为 $z=f(P)$ 及 $u=f(M)$。\n\n当 $n=1$ 时,$n$ 元函数就是一元函数。当 $n \\geq 2$ 时,$n$ 元函数统称为**多元函数**。\n\n关于多元函数的定义域,与一元函数相类似,我们作如下约定:在一般地讨论用算式表达的多元函数 $u=f(x)$ 时,就以使这个算式有意义的变元 $x$ 的值所组成的点集为这个**多元函数的自然定义域**。因而,对这类函数,它的定义域不再特别标出。\n\n设函数 $z=f(x,y)$ 的定义域为 $D$。对于任意取定的点 $P(x,y) \\in D$,对应的函数值为 $z=f(x,y)$。这样,以 $x$ 为横坐标,$y$ 为纵坐标,$z=f(x,y)$ 为竖坐标在空间就确定一点 $M(x,y,z)$。当 $(x,y)$ 遍取 $D$ 上的一切点时,得到一个空间点集\n$$ \\{(x,y,z)|z=f(x,y), (x,y) \\in D\\} $$\n这个点集称为**二元函数 $z=f(x,y)$ 的图形**。通常我们也说二元函数的图形是一张曲面。",
"_____no_output_____"
],
[
"#### 1.3 多元函数的极限\n\n先讨论二元函数 $z=f(x,y)$ 当 $(x,y) \\rightarrow (x_0,y_0)$,即 $P(x,y) \\rightarrow P_0(x_0,y_0)$ 时的极限。\n\n这里 $P \\rightarrow P_0$ 表示点 $P$ 以任何方式趋于点 $P_0$,也就是点 $P$ 与点 $P_0$ 间的距离趋于零,即\n$$ |PP_0| = \\sqrt{(x-x_0)^2 + (y-y_0)^2} \\rightarrow 0 $$\n\n与一元函数的极限概念类似,如果在 $P(x,y) \\rightarrow P_0(x_0, y_0)$ 的过程中,对应的函数值 $f(x,y)$ 无限接近于一个确定的常数 $A$,就说 $A$ 是函数 $f(x,y)$ 当 $(x,y) \\rightarrow (x_0,y_0)$ 时的极限。下面用 $\\epsilon - \\delta$ 语言描述这个极限概念。\n\n**定义2 设二元函数 $f(P)=f(x,y)$ 的定义域为 $D$,$P_0(x_0,y_0)$ 是 $D$ 的聚点。如果存在常数 $A$,对于任意给定的正数 $\\epsilon$,总存在正数 $\\delta$,使得当点 $P(x,y) \\in D \\cap \\mathring{U}(P_0,\\delta) $ 时,都有\n$$ |f(P)-A| = |f(x,y)-A| < \\epsilon $$\n成立,那么就称常数 $A$ 为函数 $f(x,y)$ 当 $(x,y) \\rightarrow (x_0,y_0) $ 时的极限,记作\n$$ \\lim_{(x,y) \\rightarrow (x_0, y_0)}f(x,y)=A 或 f(x,y) \\rightarrow A((x,y) \\rightarrow (x_0,y_0)) $$\n也记作\n$$ \\lim_{P \\rightarrow P_0}f(P)=A 或 f(P) \\rightarrow A(P \\rightarrow P_0) $$\n**\n\n为了区别于一元函数的极限,我们把二元函数的极限叫做**二重极限**。\n\n必须注意,所谓二重极限存在,是指 $P(x,y)$ 以任何方式趋于 $P_0(x_0,y_0)$ 时,$f(x,y)$ 都无限接近于 $A$。因此,如果 $P(x,y)$ 以某一种特殊方式,例如沿着一条定直线或定曲线趋于 $P_0(x_0,y_0)$ 时,即使 $f(x,y)$ 无限接近于某一确定值,我们还不能由此断定函数的极限存在。但是反过来,如果当 $P(x,y)$ 以不同的方式趋于 $P_0(x_0,y_0)$ 时,$f(x,y)$ 趋于不同的值,那么就可以判定这函数的极限不存在。\n\n以上关于二元函数的极限概念,可相应地推广到 $n$ 元函数 $u=f(P)$ 上去。\n\n关于多元函数的极限运算,有与一元函数类似的运算法则。",
"_____no_output_____"
],
[
"#### 1.4 多元函数的连续性\n\n**定义3 设二元函数 $f(P)=f(x,y)$ 的定义域为 $D$,$P_0(x_0,y_0)$ 为 $D$ 的聚点,且 $P_0 \\in D$,如果\n$$ \\lim_{(x,y) \\rightarrow (x_0, y_0)}f(x,y) = f(x_0,y_0) $$\n则称函数 $f(x,y)$ 在点 $P_0(x_0,y_0)$ 连续。\n设函数 $f(x,y)$ 在 $D$ 上有定义,$D$ 内的每一点都是函数定义域的聚点。如果函数 $f(x,y)$ 在 $D$ 的每一点都连续,那么就称函数 $f(x,y)$ 在 $D$ 上连续,或者称 $f(x,y)$ 是 $D$ 上的连续函数。\n**\n\n以上关于二元函数的连续性概念,可相应地推广到 $n$ 元函数 $f(P)$ 上去。\n\n**定义4 设函数 $f(x,y)$ 的定义域为 $D$,$P_0(x_0,y_0)$ 是 $D$ 的聚点。如果函数 $f(x,y)$ 在点 $P_0(x_0,y_0)$ 不连续,则称 $P_0(x_0,y_0)$ 为函数 $f(x,y)$ 的间断点。**\n\n前面已经指出:一元函数中关于极限的运算法则,对于多元函数仍然适用。根据多元函数的极限运算法则,可以证明多元连续函数的和、差、积仍为连续函数;连续函数的商在分母不为零处仍连续;多元连续函数的复合函数也是连续函数。\n\n与一元初等函数相类似。多元初等函数是指可用一个式子表示的多元函数,这个式子是由常数及具有不同自变量的一元基本初等函数经过有限次的四则运算和符合运算而得到的。\n\n一切多元初等函数在其定义区域内是连续的。所谓定义区域是指包含在定义域内的区域或闭区域。\n\n由多元初等函数的连续性,如果要求它在点 $P_0$ 处的极限,而该点又在此函数的定义区域内,则极限值就是函数在该点的函数值,即\n$$ \\lim_{P \\rightarrow P_0}f(P) = f(P_0) $$\n\n与闭区间上一元连续函数的性质相类似,在有界闭区域上连续的多元函数具有如下性质:\n\n**性质1(有界性与最大值最小值定理)**:在有界闭区域 $D$ 上的多元连续函数,必定在 $D$ 上有界,且能取得它的最大值和最小值。\n\n**性质2(介值定理)**:在有界闭区域 $D$ 上的多元连续函数必取得介于最大值和最小值之间的任何值。\n\n**性质3(一致连续性定理)**:在有界闭区域 $D$ 上的多元连续函数必定在 $D$ 上一致连续。",
"_____no_output_____"
],
[
"### 2. 偏导数\n\n#### 2.1 偏导数的定义及其计算方法\n\n在研究一元函数时,我们从研究函数的变化率引入了导数概念。对于多元函数同样需要讨论它的变化率。但多元函数的自变量不止一个,因变量和自变量的关系要比一元函数复杂得多。在这一节里,我们首先考虑多元函数关于其中一个自变量的变化率。以二元函数 $f(x,y)$ 为例,如果只有自变量 $x$ 变化,而自变量 $y$ 固定(即看做常量),这时它就是 $x$ 的一元函数,这函数对 $x$ 的导数,就称为二元函数 $z=f(x,y)$ 对于 $x$ 的**偏导数**,即有如下定义:\n\n**定义 设函数 $z=f(x,y)$ 在点 $(x_0,y_0)$ 的某一邻域内有定义,当 $y$ 固定在 $y_0$ 而 $x$ 在 $x_0$ 处有增量 $\\Delta x$ 时,相应的函数有增量\n$$ f(x_0+\\Delta x,y_0) - f(x_0,y_0) $$\n如果\n$$ \\lim_{\\Delta x \\rightarrow 0} \\frac{f(x_0+\\Delta x,y_0) - f(x_0,y_0)}{\\Delta x} $$\n存在,则称此极限为函数 $z=f(x,y)$ 在点 $(x_0,y_0)$ 处对 $x$ 的偏导数,记作\n$$ \\frac{\\partial z}{\\partial x}|_{\\begin{split}x=x_0\\\\y=y_0\\end{split}}, \\frac{\\partial f}{\\partial x}|_{\\begin{split}x=x_0\\\\y=y_0\\end{split}}, z_x|_{\\begin{split}x=x_0\\\\y=y_0\\end{split}} 或 f_x(x_0,y_0) $$\n**\n\n如果函数 $z=f(x,y)$ 在区域 $D$ 内每一点 $(x,y)$ 处对 $x$ 的偏导数都存在,那么这个偏导数就是 $x,y$ 的函数,它就称为函数 $z=f(x,y)$ 对自变量 $x$ 的偏导函数,记作\n$$ \\frac{\\partial z}{\\partial x},\\frac{\\partial f}{\\partial x},z_x 或 f_x(x,y) $$\n\n类似地,可以定义函数 $z=f(x,y)$ 对自变量 $y$ 的偏导函数,记作\n$$ \\frac{\\partial z}{\\partial y},\\frac{\\partial f}{\\partial y},z_y 或 f_y(x,y) $$\n\n就像一元函数的导函数一样,在不至于混淆的地方也把偏导函数简称为偏导数。至于实际求 $z=f(x,y)$ 的偏导数,并不需要新的方法,因为这里只有一个自变量在变动,另一个自变量是看做固定的,所以仍旧是一元函数的微分法问题。偏导数的概念还可推广到二元以上的函数。\n\n二元函数 $z=f(x,y)$ 在点 $(x_0,y_0)$ 的偏导数有下述几何意义:\n\n设 $M_0(x_0,y_0,f(x_0,y_0))$ 为曲面 $z=f(x,y)$ 上的一点,过 $M_0$ 作平面 $y=y_0$,截此平面得一曲线,此曲线在平面 $y=y_0$ 上的方程为 $z=f(x, y_0)$,则导数 $\\frac{d}{dx}f(x,y_0)|_{x=x_0}$,即偏导数 $f_x(x_0,y_0)$,就是这曲线在点 $M_0$ 处的切线对 $x$ 轴的斜率。同样,偏导数 $f_y(x_0,y_0)$ 的几何意义是曲面被平面 $x=x_0$ 所截得的曲线在点 $M_0$ 处的切线对 $y$ 轴的斜率。\n\n我们已经知道,如果一元函数在某点具有导数,则它在该点必定连续。但对于多元函数来说,即使各偏导数在某点都存在,也不能保证函数在该点连续。这是因为各偏导数存在只能保证点 $P$ 沿着平行于坐标轴的方向趋于 $P_0$ 时,函数值 $f(P)$ 趋于 $f(P_0)$,但不能保证点 $P$ 按任何方式趋于 $P_0$ 时,函数值 $f(P)$ 都趋于 $f(P_0)$。",
"_____no_output_____"
],
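[
"An editorial sketch, not part of the original notes: the partial derivatives defined above can be obtained symbolically with `sympy` and compared with a one-sided difference quotient. The sample function `z = x**2 + 3*x*y + y**2` and the point `(1, 2)` are arbitrary choices for illustration.",
"_____no_output_____"
],
[
"import sympy as sp\n\nx, y = sp.symbols('x y')\nz = x**2 + 3*x*y + y**2          # arbitrary sample function\n\n# symbolic partial derivatives\nzx = sp.diff(z, x)               # 2*x + 3*y\nzy = sp.diff(z, y)               # 3*x + 2*y\nprint(zx, zy)\n\n# difference quotient from the definition approximates f_x(1, 2)\nh = 1e-6\nf = sp.lambdify((x, y), z)\nprint((f(1 + h, 2) - f(1, 2)) / h, zx.subs({x: 1, y: 2}))",
"_____no_output_____"
],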
[
"#### 2.2 高阶偏导数\n\n设函数 $z=f(x,y)$ 在区域 $D$ 内具有偏导数\n$$ \\frac{\\partial z}{\\partial x}=f_x(x,y), \\frac{\\partial z}{\\partial y}=f_y(x,y) $$\n那么在 $D$ 内 $f_x(x,y),f_y(x,y)$ 都是 $x,y$ 的函数。如果这两个函数的偏导数也存在,则称它们是函数 $z=f(x,y)$ 的二阶偏导数。按照对变量求导次序的不同有下列四个二阶偏导数:\n$$\n\\begin{split}\n\\frac{\\partial}{\\partial x}(\\frac{\\partial z}{\\partial x})=\\frac{\\partial^2 z}{\\partial x^2}=f_{xx}(x,y) \\\\\n\\frac{\\partial}{\\partial y}(\\frac{\\partial z}{\\partial x})=\\frac{\\partial^2 z}{\\partial x \\partial y}=f_{xy}(x,y) \\\\\n\\frac{\\partial}{\\partial x}(\\frac{\\partial z}{\\partial y})=\\frac{\\partial^2 z}{\\partial y \\partial x}=f_{yx}(x,y) \\\\\n\\frac{\\partial}{\\partial y}(\\frac{\\partial z}{\\partial y})=\\frac{\\partial^2 z}{\\partial y^2}=f_{yy}(x,y) \\\\\n\\end{split}\n$$\n\n其中第二、三两个偏导数称为**混合偏导数**。同样可得三阶、四阶、...以及 $n$ 阶偏导数。二阶及二阶以上的偏导数统称为**高阶偏导数**。\n\n**定理 如果函数 $z=f(x,y)$ 的两个二阶混合偏导数 $\\frac{\\partial^2 z}{\\partial y \\partial x}$ 及 $\\frac{\\partial^2 z}{\\partial x \\partial y}$ 在区域 $D$ 内连续,那么在该区域内这两个二阶混合偏导数必相等。**\n\n换句话说,二阶混合偏导数在连续的条件下与求导的次序无关。\n\n对于二元以上的函数,也可以类似地定义高阶偏导数。而且高阶混合偏导数在偏导数连续的条件下也与求导的次序无关。",
"_____no_output_____"
],
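[
"Another editorial sketch: for a smooth sample function the two mixed second-order partial derivatives coincide, as the theorem above states. The function below is an arbitrary choice.",
"_____no_output_____"
],
[
"import sympy as sp\n\nx, y = sp.symbols('x y')\nz = sp.exp(x*y) + x**3*y**2       # arbitrary smooth sample function\n\nzxy = sp.diff(z, x, y)            # differentiate first in x, then in y\nzyx = sp.diff(z, y, x)            # differentiate first in y, then in x\nprint(sp.simplify(zxy - zyx))     # 0, the mixed partials agree",
"_____no_output_____"
],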
[
"### 3. 全微分\n\n#### 3.1 全微分的定义\n\n由偏导数的定义知道,二元函数对某个自变量的偏导数表示当另一个自变量固定时,因变量相对于该自变量的变化率。根据一元函数微分学中增量与微分的关系,可得\n$$\n\\begin{split}\nf(x+\\Delta x, y) - f(x,y) \\approx f_x(x,y)\\Delta x \\\\\nf(x, y+\\Delta y) - f(x,y) \\approx f_y(x,y)\\Delta y\n\\end{split}\n$$\n上面两式的左端分别叫做二元函数对 $x$ 和对 $y$ 的**偏增量**,而右端分别叫做二元函数对 $x$ 和对 $y$ 的**偏微分**。\n\n在实际问题中,有时需要研究多元函数中各个自变量都取得增量时因变量所获得的增量,即所谓全增量的问题。下面以二元函数为例进行讨论。\n\n设函数 $z=f(x,y)$ 在点 $P(x,y)$ 的某邻域内有定义,$P'(x+\\Delta x,y+\\Delta y)$ 为这邻域内的任意一点,则称这两点的函数值之差 $f(x+\\Delta x, y+\\Delta y)-f(x,y)$ 为函数在点 $P$ 对应于自变量增量 $\\Delta x, \\Delta y$ 的**全增量**,记作 $\\Delta z$,即\n$$ \\Delta z = f(x+\\Delta x, y+\\Delta y)-f(x,y) $$\n\n一般说来,计算全增量 $\\Delta z$ 比较复杂。与一元函数的情形一样,我们希望用自变量的增量 $\\Delta x, \\Delta y$ 的线性函数来近似地代替函数的全增量 $\\Delta z$,从而引入如下定义。\n\n**定义 设函数 $z=f(x,y)$ 在点 $(x,y)$ 的某邻域内有定义,如果函数在点 $(x,y)$ 的全增量\n$$ \\Delta z = f(x+\\Delta x, y+\\Delta y) - f(x,y) $$\n可表示为\n$$ \\Delta z = A\\Delta x + B\\Delta y + o(\\rho) $$\n其中 $A, B$ 不依赖于 $\\Delta x, \\Delta y$ 而仅与 $x,y$ 有关,$\\rho=\\sqrt{(\\Delta x)^2+(\\Delta y)^2}$,则称函数 $z=f(x,y)$ 在点 $(x,y)$ 可微分,而 $A\\Delta x + B\\Delta y$ 称为函数 $z=f(x,y)$ 在点 $(x,y)$ 的全微分,记作 $dz$,即\n$$ dz=A\\Delta x + B\\Delta y $$\n**\n\n如果函数在区域 $D$ 内各点处都可微分,那么称这函数**在 $D$ 内可微分**。\n\n在第二节中曾指出,多元函数在某点的偏导数存在,并不能保证函数在该点连续。但是,由上述定义可知,如果函数 $z=f(x,y)$ 在点 $(x,y)$ 可微分,那么这函数在该点必定连续。事实上,由\n$$ \\lim_{\\rho \\rightarrow 0}\\Delta z=0 $$\n从而\n$$ \\lim_{(\\Delta x, \\Delta y) \\rightarrow (0,0)}f(x+\\Delta x, y+\\Delta y) = \\lim_{\\rho \\rightarrow 0}[f(x,y)+\\Delta z] = f(x,y) $$\n因此函数 $z=f(x,y)$ 在点 $(x,y)$ 处连续。\n\n下面讨论函数 $z=f(x,y)$ 在点 $(x,y)$ 可微分的条件。\n\n**定理1(必要条件) 如果函数 $z=f(x,y)$ 在点 $(x,y)$ 可微分,则该函数在点 $(x,y)$ 的偏导数 $\\frac{\\partial z}{\\partial x}, \\frac{\\partial z}{\\partial y}$ 必定存在,且函数 $z=f(x,y)$ 在点 $(x,y)$ 的全微分为\n$$ dz=\\frac{\\partial z}{\\partial x}\\Delta x + \\frac{\\partial z}{\\partial y}\\Delta y $$\n**\n\n一元函数在某点的导数存在是微分存在的充分必要条件。但对于多元函数来说,情形就不同了。当函数的各偏导数都存在时,虽然能形式地写出 $\\frac{\\partial z}{\\partial x}\\Delta x + \\frac{\\partial z}{\\partial y}\\Delta y$,但它与 $\\Delta z$ 之差并不一定是较 $\\rho$ 高阶的无穷小,因此它不一定是函数的全微分。换句话说,各偏导数的存在只是全微分存在的必要条件而不是充分条件。\n\n**定理2(充分条件) 如果函数 $z=f(x,y)$ 的偏导数 $\\frac{\\partial z}{\\partial x}, \\frac{\\partial z}{\\partial y}$ 在点 $(x,y)$ 连续,则函数在该点可微分。**\n\n以上关于二元函数全微分的定义及可微分的必要条件和充分条件,可以完全类似地推广到三元和三元以上的多元函数。\n\n习惯上,我们将自变量的增量 $\\Delta x, \\Delta y$ 分别记作 $dx, dy$,并分别称为自变量 $x,y$ 的微分。这样,函数 $z=f(x,y)$ 的全微分就可写成\n$$ dz=\\frac{\\partial z}{\\partial x}dx + \\frac{\\partial z}{\\partial y}dy $$\n\n通常把二元函数的全微分等于它的两个偏微分之和这件事称为二元函数的微分符合**叠加原理**。叠加原理也适用于二元以上的函数的情形。",
"_____no_output_____"
],
[
"#### 3.2 全微分在近似计算中的应用\n\n由二元函数的全微分的定义及关于全微分存在的充分条件可知,当二元函数 $z=f(x,y)$ 在点 $P(x,y)$ 的两个偏导数 $f_x(x,y), f_y(x,y)$ 连续,并且 $|\\Delta x|, |\\Delta y|$ 都较小时,就有近似等式\n$$ \\Delta z \\approx dz = f_x(x,y)\\Delta x + f_y(x,y)\\Delta y $$\n上式也可以写成\n$$ f(x+\\Delta x, y+\\Delta y) \\approx f(x,y) + f_x(x,y)\\Delta x + f_y(x,y)\\Delta y $$\n\n与一元函数的情形相类似,可以利用上述两个式子对二元函数作近似计算和误差估计。",
"_____no_output_____"
],
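[
"An editorial numerical sketch of the approximation formula above: it estimates `1.04**2.02` by linearizing `f(x, y) = x**y` at `(1, 2)`. The sample point and the increments are chosen only for illustration.",
"_____no_output_____"
],
[
"import numpy as np\n\n# approximate f(x+dx, y+dy) by f + f_x*dx + f_y*dy for f(x, y) = x**y\nx0, y0 = 1.0, 2.0\ndx, dy = 0.04, 0.02\n\nf = lambda x, y: x**y\nfx = y0 * x0**(y0 - 1)            # partial derivative w.r.t. x at (x0, y0)\nfy = x0**y0 * np.log(x0)          # partial derivative w.r.t. y at (x0, y0)\n\napprox = f(x0, y0) + fx*dx + fy*dy\nexact = f(x0 + dx, y0 + dy)\nprint(approx, exact)              # 1.08 versus roughly 1.0824",
"_____no_output_____"
],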
[
"#### 3.3 二元函数实例\n\n考虑函数\n$$f(x,y) = \\left\\{\n\\begin{aligned}\n& \\frac{xy}{x^2+y^2}, & x^2+y^2 \\neq 0 \\\\\n& 0, & x^2+y^2 = 0\n\\end{aligned}\n\\right.\n$$\n\n其曲面图像如下所示:",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\nfrom mpl_toolkits.mplot3d import Axes3D\n\n%matplotlib inline\n\[email protected]\ndef f(x, y):\n return x * y / (x ** 2 + y ** 2)\n\nstep = 0.05\nx_min, x_max = -1, 1\ny_min, y_max = -1, 1\nx_range, y_range = np.arange(x_min, x_max + step, step), np.arange(y_min, y_max + step, step)\nx_mat, y_mat = np.meshgrid(x_range, y_range)\nz = f(x_mat.reshape(-1), y_mat.reshape(-1)).reshape(x_mat.shape)\nfig = plt.figure(figsize=(12, 6))\nax1 = fig.add_subplot(1, 2, 1, projection='3d', elev=50, azim=-50)\nax1.plot_surface(x_mat, y_mat, z, cmap=cm.jet, rstride=1, cstride=1, edgecolor='none',alpha=.8)\nax1.set_xlabel('$x$')\nax1.set_ylabel('$y$')\nax1.set_zlabel('$z$')\nplt.show()",
"_____no_output_____"
]
],
[
[
"重点考察 $(0, 0)$ 这个点:\n\n**极限**\n\n显然当点 $P(x, y)$ 沿 $x$ 轴趋于点 $(0,0)$ 时\n$$ \\lim_{\\begin{split}(x,y)\\rightarrow (0,0) \\\\ y=0 \\end{split}}f(x,y) = \\lim_{x \\rightarrow 0}f(x,0) =\\lim_{x \\rightarrow 0}0 = 0$$\n又当点 $P(x, y)$ 沿 $y$ 轴趋于点 $(0,0)$ 时\n$$ \\lim_{\\begin{split}(x,y)\\rightarrow (0,0) \\\\ x=0 \\end{split}}f(x,y) = \\lim_{y \\rightarrow 0}f(0,y) =\\lim_{y \\rightarrow 0}0 = 0$$\n虽然点 $P(x,y)$ 以上述两种特殊方式(沿 $x$ 轴或沿 $y$ 轴)趋于原点时函数的极限存在并且相等,但是**极限 $\\lim_{(x,y) \\rightarrow (0,0)}f(x,y)$ 并不存在**。这是因为当点 $P(x,y)$ 沿着直线 $y=kx$ 趋于点 $(0,0)$ 时,有\n$$ \\lim_{\\begin{split}(x,y)\\rightarrow (0,0) \\\\ y=kx \\end{split}}\\frac{xy}{x^2+y^2} = \\lim_{x \\rightarrow 0}\\frac{kx^2}{x^2+k^2x^2} = \\frac{k}{1+k^2}$$\n显然它是随着 $k$ 的值的不同而改变的。\n\n**连续性**\n\n由于极限不存在,所以点 $(0,0)$ 是该函数的一个间断点。故**该函数在点 $(0, 0)$ 不连续**。\n\n**偏导数**\n$$\n\\begin{split}\nf_x(0,0) = \\lim_{\\Delta x \\rightarrow 0} \\frac{f(0+\\Delta x,0) - f(0,0)}{\\Delta x} = \\lim_{\\Delta x \\rightarrow 0}0 = 0 \\\\\nf_y(0,0) = \\lim_{\\Delta y \\rightarrow 0} \\frac{f(0,0+\\Delta y) - f(0,0)}{\\Delta y} = \\lim_{\\Delta y \\rightarrow 0}0 = 0 \\\\\n\\end{split}\n$$\n对一元函数,可导则一定连续(连续不一定可导)。在这里可以发现,**对多元函数,可偏导无法推出连续**。",
"_____no_output_____"
],
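[
"A small editorial check that complements the analysis above: evaluating `f(x, y) = x*y/(x**2 + y**2)` along the lines `y = k*x` shows numerically that the value near the origin depends on the slope `k`, so the two-variable limit cannot exist. The slopes below are arbitrary.",
"_____no_output_____"
],
[
"import numpy as np\n\ndef f(x, y):\n    return x * y / (x**2 + y**2)\n\n# approach the origin along y = k*x for several slopes k\nfor k in [0.0, 0.5, 1.0, 2.0]:\n    xs = 10.0 ** -np.arange(1, 6)          # x -> 0\n    vals = f(xs, k * xs)\n    print(k, vals[-1], k / (1 + k**2))     # the value along this line is k/(1+k^2)",
"_____no_output_____"
],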
[
"为了讨论全微分的情况,修改函数为\n$$f(x,y) = \\left\\{\n\\begin{aligned}\n& \\frac{xy}{\\sqrt{x^2+y^2}}, & x^2+y^2 \\neq 0 \\\\\n& 0, & x^2+y^2 = 0\n\\end{aligned}\n\\right.\n$$\n\n其曲面图像如下所示:",
"_____no_output_____"
]
],
[
[
"@np.vectorize\ndef f(x, y):\n return x * y / np.sqrt(x ** 2 + y ** 2)\n\nstep = 0.05\nx_min, x_max = -1, 1\ny_min, y_max = -1, 1\nx_range, y_range = np.arange(x_min, x_max + step, step), np.arange(y_min, y_max + step, step)\nx_mat, y_mat = np.meshgrid(x_range, y_range)\nz = f(x_mat.reshape(-1), y_mat.reshape(-1)).reshape(x_mat.shape)\nfig = plt.figure(figsize=(12, 6))\nax1 = fig.add_subplot(1, 2, 1, projection='3d', elev=50, azim=-50)\nax1.plot_surface(x_mat, y_mat, z, cmap=cm.jet, rstride=1, cstride=1, edgecolor='none',alpha=.8)\nax1.set_xlabel('$x$')\nax1.set_ylabel('$y$')\nax1.set_zlabel('$z$')\nplt.show()",
"_____no_output_____"
]
],
[
[
"**微分**\n\n易证上述函数在点 $(0,0)$ 处极限存在且等于函数值 $0$,从而在该点连续。并且该点两个偏导也都等于 $0$。\n$$ \\Delta z - [f_x(0,0) + f_y(0,0)] = f(0+\\Delta x, 0+\\Delta y) - f(0,0) - [0+0] = \\frac{\\Delta x \\cdot \\Delta y}{\\sqrt{(\\Delta x)^+(\\Delta y)^2}} $$\n根据定义,如果在 $(0,0)$ 可微,则上式必须是 $\\rho=\\sqrt{(\\Delta x)^+(\\Delta y)^2}$ 的高阶无穷小。考虑点 $P'(\\Delta x, \\Delta y)$ 沿着直线 $y=kx$ 趋于 $(0,0)$\n$$ \\frac{\\frac{\\Delta x \\cdot \\Delta y}{\\sqrt{(\\Delta x)^2+(\\Delta y)^2}}}{\\rho} = \\frac{\\Delta x \\cdot \\Delta y}{(\\Delta x)^2+(\\Delta y)^2} = \\frac{\\Delta x \\cdot k \\Delta x}{(\\Delta x)^2+(k \\Delta x)^2} = \\frac{k}{1+k^2}$$\n它不能随 $\\rho \\rightarrow 0$ 而趋于 $0$,所以上式不是 $\\rho$ 的高阶无穷小,因此函数在点 $(0,0)$ 处的全微分并不存在,即函数在点 $(0,0)$ 处是不可微分的(尽管在该点连续,偏导也都存在)。",
"_____no_output_____"
],
[
"### 4. 多元复合函数的求导法则\n\n#### 4.1 一元函数与多元函数复合的情形\n\n**定理1 如果函数 $u=\\phi(t)$ 及 $v=\\psi(t)$ 都在点 $t$ 可导,函数 $z=f(u,v)$ 在对应点 $(u, v)$ 具有连续偏导数,则复合函数 $z=f[\\phi(t), \\psi(t)]$ 在点 $t$ 可导,且有\n$$ \\frac{dz}{dt}=\\frac{\\partial z}{\\partial u}\\frac{du}{dt}+\\frac{\\partial z}{\\partial v}\\frac{dv}{dt} $$\n**\n\n用同样的方法,可把定理推广到复合函数的中间变量多于两个的情形。上述公式中的导数 $\\frac{dz}{dt}$ 称为 **全导数**。\n\n#### 4.2 多元函数与多元函数复合的情形\n\n**定理2 如果函数 $u=\\phi(x,y)$ 及 $v=\\psi(x,y)$ 都在点 $(x,y)$ 具有对 $x$ 及对 $y$ 的偏导数,函数 $z=f(u,v)$ 在对应点 $(u,v)$ 具有连续偏导数,则复合函数 $z=f[\\phi(x,y),\\psi(x,y)]$ 在点 $(x,y)$ 的两个偏导数都存在,且有\n$$\n\\begin{split}\n\\frac{\\partial z}{\\partial x} = \\frac{\\partial z}{\\partial u}\\frac{\\partial u}{\\partial x} + \\frac{\\partial z}{\\partial v}\\frac{\\partial v}{\\partial x} \\\\\n\\frac{\\partial z}{\\partial y} = \\frac{\\partial z}{\\partial u}\\frac{\\partial u}{\\partial y} + \\frac{\\partial z}{\\partial v}\\frac{\\partial v}{\\partial y}\n\\end{split}\n$$\n**\n\n#### 4.3 其他情形\n\n**定理3 如果函数 $u=\\phi(x,y)$ 在点 $(x,y)$ 具有对 $x$ 及对 $y$ 的偏导数,函数 $v=\\psi(y)$ 在点 $y$ 可导,函数 $z=f(u,v)$ 在对应点 $(u,v)$ 具有连续偏导数,则复合函数 $z=f[\\phi(x,y), \\psi(y)]$ 在点 $(x,y)$ 的两个偏导数都存在,且有\n$$\n\\begin{split}\n\\frac{\\partial z}{\\partial x} &= \\frac{\\partial z}{\\partial u}\\frac{\\partial u}{\\partial x} \\\\\n\\frac{\\partial z}{\\partial y} &= \\frac{\\partial z}{\\partial u}\\frac{\\partial u}{\\partial y} + \\frac{\\partial z}{\\partial v}\\frac{dv}{dy}\n\\end{split}\n$$\n**\n\n上述情形实际上是情形 $2$ 的一种特例,即在情形 $2$ 中,如变量 $v$ 与 $x$ 无关,从而 $\\frac{\\partial v}{\\partial x}=0$;在 $v$ 对 $y$ 求导时,由于 $v=\\psi(y)$ 是一元函数,故 $\\frac{\\partial v}{\\partial y}$ 换成了 $\\frac{dv}{dy}$,这就得上述结果。\n\n**全微分形式不变性** 设函数 $z=f(u,v)$ 具有连续偏导数,则有全微分\n$$ dz = \\frac{\\partial z}{\\partial u}du + \\frac{\\partial z}{\\partial v}dv $$\n如果 $u,v$ 又是中间变量,即 $u=\\phi(x,y), v=\\psi(x,y)$,且这两个函数也具有连续偏导数,则复合函数\n$$ z = f[\\phi(x,y), \\psi(x,y)] $$\n的全微分为\n$$\n\\begin{split}\ndz &= \\frac{\\partial z}{\\partial x}dx + \\frac{\\partial z}{\\partial y}dy \\\\\n&= (\\frac{\\partial z}{\\partial u}\\frac{\\partial u}{\\partial x} + \\frac{\\partial z}{\\partial v}\\frac{\\partial v}{\\partial x})dx + (\\frac{\\partial z}{\\partial u}\\frac{\\partial u}{\\partial y} + \\frac{\\partial z}{\\partial v}\\frac{\\partial v}{\\partial y})dy \\\\\n&= \\frac{\\partial z}{\\partial u}(\\frac{\\partial u}{\\partial x}dx + \\frac{\\partial u}{\\partial y}dy) + \\frac{\\partial z}{\\partial v}(\\frac{\\partial v}{\\partial x}dx + \\frac{\\partial v}{\\partial y}dy) \\\\\n&= \\frac{\\partial z}{\\partial u}du + \\frac{\\partial z}{\\partial v}dv\n\\end{split}\n$$\n由此可见,无论 $u,v$ 是自变量还是中间变量,函数 $z=f(u,v)$ 的全微分形式是一样的。这个性质叫做**全微分形式不变性**。",
"_____no_output_____"
],
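[
"An illustrative check of Theorem 1 (a minimal sketch, assuming SymPy is available; the example $z=u^2v$, $u=\\sin t$, $v=e^t$ is an arbitrary choice): the total derivative computed with the chain rule agrees with direct differentiation after substitution.\n\n```python\nimport sympy as sp\n\nt, u, v = sp.symbols('t u v')\nz = u**2 * v          # z = f(u, v)\nphi = sp.sin(t)       # u = phi(t)\npsi = sp.exp(t)       # v = psi(t)\n\n# chain rule: dz/dt = z_u * du/dt + z_v * dv/dt\nchain = (sp.diff(z, u) * sp.diff(phi, t) + sp.diff(z, v) * sp.diff(psi, t)).subs({u: phi, v: psi})\ndirect = sp.diff(z.subs({u: phi, v: psi}), t)   # differentiate after substituting\nprint(sp.simplify(chain - direct))              # 0\n```",
"_____no_output_____"
],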
[
"### 5. 隐函数的求导公式\n\n#### 5.1 一个方程的情形\n\n**隐函数存在定理1 设函数 $F(x,y)$ 在点 $P(x_0, y_0)$ 的某一邻域内具有连续偏导数,且 $F(x_0, y_0)=0, F_y(x_0, y_0) \\neq 0$,则方程 $F(x,y)=0$ 在点 $(x_0, y_0)$ 的某一邻域内恒能唯一确定一个连续且有连续导数的函数 $y=f(x)$,它满足条件 $y_0=f(x_0)$,并有\n$$ \\frac{dy}{dx} = -\\frac{F_x}{F_y} $$\n**\n\n**隐函数存在定理2 设函数 $F(x,y,z)$ 在点 $P(x_0, y_0, z_0)$ 的某一邻域内具有连续偏导数,且 $F(x_0, y_0, z_0)=0, F_z(x_0, y_0) \\neq 0$,则方程 $F(x,y,z)=0$ 在点 $(x_0, y_0, z_0)$ 的某一邻域内恒能唯一确定一个连续且有连续导数的函数 $z=f(x,y)$,它满足条件 $z_0=f(x_0, y_0)$,并有\n$$ \\frac{\\partial z}{\\partial x} = -\\frac{F_x}{F_z}, \\frac{\\partial z}{\\partial y} = -\\frac{F_y}{F_z} $$\n**\n\n上述公式就是隐函数的求导公式。\n\n#### 5.2 方程组的情形\n\n略。",
"_____no_output_____"
],
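[
"A small example of Implicit Function Theorem 1 (a minimal sketch, assuming SymPy; the unit circle $F(x,y)=x^2+y^2-1$ is just a convenient choice):\n\n```python\nimport sympy as sp\n\nx, y = sp.symbols('x y')\nF = x**2 + y**2 - 1                     # constraint F(x, y) = 0\n\ndydx = -sp.diff(F, x) / sp.diff(F, y)   # dy/dx = -F_x / F_y\nprint(sp.simplify(dydx))                # -x/y\n```",
"_____no_output_____"
],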
[
"### 6. 多元函数微分学的几何应用\n\n略",
"_____no_output_____"
],
[
"### 7. 方向导数与梯度\n\n#### 7.1 方向导数\n\n偏导数反映的是函数沿坐标轴方向的变化率,但有时我们需要考虑函数沿任一指定方向的变化率问题。\n\n设 $l$ 是 $xOy$ 平面上以 $P_0(x_0,y_0)$ 为始点的一条射线,$e_l=(cos \\alpha, cos \\beta)$ 是与 $l$ 同方向的单位向量。射线 $l$ 的参数方程为\n$$\n\\begin{split}\nx = x_0 + tcos\\alpha \\\\\ny = y_0 + tcos\\beta \\\\\n(t \\geq 0)\n\\end{split}\n$$\n\n设函数 $z=f(x,y)$ 在点 $P_0(x_0,y_0)$ 为某个邻域 $U(P_0)$ 内有定义,$P(x_0+tcos\\alpha, y_0+tcos\\beta)$ 为 $l$ 上的另一点,且 $P \\in U(P_0)$。如果函数增量 $f(x_0+tcos\\alpha, y_0+cos\\beta) - f(x_0, y_0)$ 与 $P$ 到 $P_0$ 的距离 $|PP_0|=t$ 的比值\n$$ \\frac{f(x_0+tcos\\alpha, y_0+cos\\beta) - f(x_0, y_0)}{t} $$\n当 $P$ 沿着 $l$ 趋于 $P_0$(即 $t \\rightarrow 0^+$)时的极限存在,则称此极限为函数 $f(x,y)$ 在点 $P_0$ 沿方向 $l$ 的**方向导数**,记作 $\\frac{\\partial f}{\\partial l}|_{(x_0,y_0)}$,即\n$$ \\frac{\\partial f}{\\partial l}|_{(x_0,y_0)} = \\lim_{t \\rightarrow 0^+}\\frac{f(x_0+tcos\\alpha, y_0+cos\\beta) - f(x_0, y_0)}{t} $$\n\n从方向导数的定义可知,方向导数 $\\frac{\\partial f}{\\partial l}|_{(x_0,y_0)}$ 就是函数 $f(x,y)$ 在点 $P_0(x_0,y_0)$ 处沿方向 $l$ 的变化率。若函数 $f(x,y)$ 在点 $P_0(x_0,y_0)$ 的偏导数存在,$e_l = i = (1,0)$,则\n$$ \\frac{\\partial f}{\\partial l}|_{(x_0,y_0)} = \\lim_{t \\rightarrow 0^+}\\frac{f(x_0+tcos\\alpha, y_0+cos\\beta) - f(x_0, y_0)}{t} = f_x(x,y) $$\n又若 $e_l = j = (0,1)$,则\n$$ \\frac{\\partial f}{\\partial l}|_{(x_0,y_0)} = \\lim_{t \\rightarrow 0^+}\\frac{f(x_0+tcos\\alpha, y_0+cos\\beta) - f(x_0, y_0)}{t} = f_y(x,y) $$\n但反之,若 $e_l = i$,$\\frac{\\partial z}{\\partial l}|_{(x_0,y_0)}$ 存在,则 $\\frac{\\partial z}{\\partial x}|_{(x_0,y_0)}$ 未必存在。例如,$z=\\sqrt{x^2+y^2}$ 在点 $O(0,0)$ 处沿 $l=i$ 方向的方向导数 $\\frac{\\partial z}{\\partial l}|_{(0,0)}=1$,而偏导数 $\\frac{\\partial z}{\\partial x}|_{(0,0)}$ 不存在(对比极限的条件,$t \\rightarrow 0^+$ 和 $t \\rightarrow 0$)。\n\n**定理 如果函数 $f(x,y)$ 在点 $P_0(x_0,y_0)$ 可微分,那么函数在该点沿任一方向 $l$ 的方向导数存在,且有\n$$ \\frac{\\partial f}{\\partial l}|_{x_0,y_0}=f_x(x_0,y_0)cos\\alpha + f_y(x_0,y_0)cos\\beta $$\n其中 $cos\\alpha,cos\\beta$ 是方向 $l$ 的方向余弦。\n**",
"_____no_output_____"
],
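[
"A numerical check of the theorem above (a minimal sketch, assuming NumPy; the function $f(x,y)=x^2+xy$ and the point are arbitrary choices): the one-sided difference quotient from the definition matches $f_x(x_0,y_0)cos\\alpha + f_y(x_0,y_0)cos\\beta$.\n\n```python\nimport numpy as np\n\ndef f(x, y):\n    return x**2 + x * y\n\ndef grad_f(x, y):\n    return np.array([2 * x + y, x])\n\np = np.array([1.0, 2.0])\nalpha = np.deg2rad(30.0)\ne = np.array([np.cos(alpha), np.sin(alpha)])   # unit direction vector\n\nt = 1e-6\nnumeric = (f(*(p + t * e)) - f(*p)) / t        # definition (one-sided quotient)\nformula = grad_f(*p) @ e                       # f_x cos(alpha) + f_y cos(beta)\nprint(numeric, formula)\n```",
"_____no_output_____"
],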
[
"#### 7.2 梯度\n\n与方向导数有关联的一个概念是函数的梯度。在二元函数的情形,设函数 $f(x,y)$ 在平面区域 $D$ 内具有一阶连续偏导数,则对于每一点 $P_0(x_0,y_0) \\in D$,都可定出一个向量\n$$ f_x(x_0,y_0)i + f_y(x_0,y_0)j $$\n这向量称为函数 $f(x,y)$ 在点 $P_0(x_0,y_0)$ 的**梯度**,记作 $grad\\,f(x_0,y_0)$ 或 $\\nabla f(x_0,y_0)$,即\n$$ grad\\,f(x_0,y_0) = \\nabla f(x_0,y_0) = f_x(x_0,y_0)i + f_y(x_0,y_0)j $$\n其中 $\\nabla = \\frac{\\partial}{\\partial x}i + \\frac{\\partial}{\\partial y}j$ 称为(二维的)**向量微分算子**或**Nabla算子**,$\\nabla f = \\frac{\\partial f}{\\partial x}i + \\frac{\\partial f}{\\partial y}j$。\n\n如果函数 $f(x,y)$ 在点 $P_0(x_0,y_0)$ 可微分,$e_l=(cos\\alpha,cos\\beta)$ 是与方向 $l$ 同向的单位向量,则\n$$\n\\begin{split}\n\\frac{\\partial f}{\\partial l}|_{(x_0,y_0)} &= f_x(x_0,y_0)cos\\alpha + f_y(x_0,y_0)cos\\beta \\\\\n&= grad \\, f(x_0,y_0) \\cdot e_l \\\\\n&= |grad \\, f(x_0,y_0)|cos \\theta\n\\end{split}\n$$\n其中 $\\theta$ 是梯度和方向 $e_l$ 的夹角。\n\n这一关系式表明了函数在一点的梯度与函数在这点的方向导数间的关系。特别,由这关系可知:\n\n(1) 当 $\\theta=0$,即方向 $e_l$ 与梯度 $grad \\, f(x_0,y_0)$ 的方向相同时,函数 $f(x,y)$ 增加最快。此时,函数在这个方向的方向导数达到最大值,这个最大值就是梯度 $grad \\, f(x_0,y_0)$ 的模,即\n$$ \\frac{\\partial f}{\\partial l}|_{(x_0,y_0)} = |grad \\, f(x_0,y_0)| $$\n这个结果也表示:函数 $f(x,y)$ 在一点的梯度 $grad \\, f$ 是这样一个向量,它的方向是函数在这点的方向导数取得最大值的方向,它的模就等于方向导数的最大值。\n\n(2) 当 $\\theta=\\pi$,即方向 $e_l$ 与梯度 $grad \\, f(x_0,y_0)$ 的方向相反时,函数 $f(x,y)$ 减少最快。函数在这个方向的方向导数达到最小值,即\n$$ \\frac{\\partial f}{\\partial l}|_{(x_0,y_0)} = -|grad \\, f(x_0,y_0)| $$\n\n(3) 当 $\\theta=\\frac{\\pi}{2}$,即方向 $e_l$ 与梯度 $grad \\, f(x_0,y_0)$ 的方向正交时,函数的变化率为零,即\n$$ \\frac{\\partial f}{\\partial l}|_{(x_0,y_0)} = |grad \\, f(x_0,y_0)|cos\\theta = 0 $$\n\n我们知道,一般说来二元函数 $z=f(x,y)$ 在几何上表示一个曲面,这曲面被平面 $z=c(c是常数)$ 所截得的曲线 $L$ 的方程为\n$$\\left\\{\n\\begin{aligned}\n& z=f(x,y) \\\\\n& z=c\n\\end{aligned}\n\\right.\n$$\n这条曲线 $L$ 在 $xOy$ 面上的投影是一条平面曲线 $L^*$,它在 $xOy$ 平面直角坐标系中的方程为\n$$ f(x,y) = c $$\n对于曲线 $L^*$ 上的一切点,已给函数的函数值都是 $c$,所以我们称平面曲线 $L^*$ 为函数 $z=f(x,y)$ 的**等值线**。\n\n若 $f_x,f_y$ 不同时为零,则等值线 $f(x,y)=c$ 上任一点 $P_0(x_0,y_0)$ 处的一个单位法向量为\n$$\n\\begin{split}\nn &= \\frac{1}{\\sqrt{f^2_x(x_0,y_0)+f^2_y(x_0,y_0)}}(f_x(x_0,y_0),f_y(x_0,y_0)) \\\\\n&= \\frac{\\nabla f(x_0,y_0)}{|\\nabla f(x_0,y_0)|}\n\\end{split}\n$$\n这表明函数 $f(x,y)$ 在这一点 $(x_0,y_0)$ 的梯度 $\\nabla f(x_0,y_0)$ 的方向就是等值线 $f(x,y)=c$ 在这点的法线方向 $n$,而梯度的模 $|\\nabla f(x_0,y_0)|$ 就是沿这个法线方向的方向导数 $\\frac{\\partial f}{\\partial n}$,于是有\n$$ \\nabla f(x_0,y_0) = \\frac{\\partial f}{\\partial n}n $$",
"_____no_output_____"
],
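[
"A numerical illustration of points (1)-(3) above (a minimal sketch, assuming NumPy; the function $f(x,y)=x^2+xy$ is an arbitrary smooth example): sampling the directional derivative over many directions, the maximum equals the modulus of the gradient and is attained, up to grid resolution, in the gradient direction.\n\n```python\nimport numpy as np\n\ndef grad_f(x, y):\n    return np.array([2 * x + y, x])    # gradient of f(x, y) = x**2 + x*y\n\ng = grad_f(1.0, 2.0)\nangles = np.linspace(0, 2 * np.pi, 3601)\ndirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)\ndd = dirs @ g                          # directional derivative in every sampled direction\n\nprint(dd.max(), np.linalg.norm(g))                     # max equals |grad f|\nprint(angles[np.argmax(dd)], np.arctan2(g[1], g[0]))   # attained along the gradient direction\nprint(dd.min(), -np.linalg.norm(g))                    # min equals -|grad f|\n```",
"_____no_output_____"
],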
[
"### 8. 多元函数的极值及其求法\n\n#### 8.1 多元函数的极值及最大值、最小值\n\n**定义 设函数 $z=f(x,y)$ 的定义域为 $D$,$P_0(x_0,y_0)$ 为 $D$ 的内点。若存在 $P_0$ 的某个邻域 $U(P_0) \\subset D$,使得对于该邻域内异于 $P_0$ 的任何点 $(x,y)$,都有\n$$ f(x,y) < f(x_0,y_0) $$\n则称函数 $f(x,y)$ 在点 $(x_0,y_0)$ 有极大值 $f(x_0,y_0)$,点 $(x_0,y_0)$ 称为函数 $f(x,y)$ 的极大值点;若对于该邻域内异于 $P_0$ 的任何点 $(x,y)$,都有\n$$ f(x,y) > f(x_0,y_0) $$\n则称函数 $f(x,y)$ 在点 $(x_0,y_0)$ 有极小值 $f(x_0,y_0)$,点 $(x_0,y_0)$ 称为函数 $f(x,y)$ 的极小值点。极大值、极小值统称为极值。使得函数取得极值的点称为极值点。\n**\n\n**定理1(必要条件) 设函数 $z=f(x,y)$ 在点 $(x_0,y_0)$ 具有偏导数,且在点 $(x_0,y_0)$ 处有极值,则有\n$$ f_x(x_0,y_0)=0, f_y(x_0,y_0)=0 $$\n**\n\n仿照一元函数,凡是能使 $ f_x(x_0,y_0)=0, f_y(x_0,y_0)=0 $ 同时成立的点 $(x_0,y_0)$ 称为函数 $ z=f(x,y) $ 的**驻点**。从定理1可知,具有偏导数的函数的极值点必定是驻点。但函数的驻点不一定是极值点。例如,点 $(0,0)$ 是函数 $z=xy$ 的驻点,但函数在该点并无极值。\n\n**定理2(充分条件) 设函数 $z=f(x,y)$ 在点 $(x_0,y_0)$ 的某邻域内连续且有一阶及二阶连续偏导数,又 $f_x(x_0,y_0),f_y(x_0,y_0)$,令\n$$ f_{xx}(x_0,y_0)=A, f_{xy}(x_0,y_0)=B, f_{yy}(x_0,y_0)=c $$\n则 $f(x,y)$ 在 $(x_0,y_0)$ 处是否取得极值的条件如下**:\n\n(1) $AC-B^2>0$ 时具有极值,且当 $A<0$ 时有极大值,当 $A>0$ 时有极小值;\n\n(2) $AC-B^2<0$ 时没有极值;\n\n(3) $AC-B^2=0$ 时可能有极值,也可能没有极值,还需另作讨论;",
"_____no_output_____"
],
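[
"A worked example of Theorems 1 and 2 (a minimal sketch, assuming SymPy; the function $f(x,y)=x^3-y^3+3x^2+3y^2-9x$ is a standard textbook-style example): find the stationary points and apply the $AC-B^2$ test.\n\n```python\nimport sympy as sp\n\nx, y = sp.symbols('x y', real=True)\nf = x**3 - y**3 + 3*x**2 + 3*y**2 - 9*x\n\nfx, fy = sp.diff(f, x), sp.diff(f, y)\nstationary = sp.solve([fx, fy], [x, y], dict=True)   # candidate extreme points\n\nAxx, Bxy, Cyy = sp.diff(f, x, 2), sp.diff(f, x, y), sp.diff(f, y, 2)\nfor pt in stationary:\n    A = float(Axx.subs(pt)); B = float(Bxy.subs(pt)); C = float(Cyy.subs(pt))\n    disc = A * C - B**2\n    if disc > 0:\n        kind = 'local max' if A < 0 else 'local min'\n    elif disc < 0:\n        kind = 'no extremum (saddle)'\n    else:\n        kind = 'inconclusive'\n    print(pt, disc, kind)\n```",
"_____no_output_____"
],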
[
"#### 8.2 条件极值 拉格朗日乘数法\n\n上面所讨论的极值问题,对于函数的自变量,除了限制在函数的定义域内以外,并无其他条件,所以有时候称为**无条件极值**。但在实际问题中,有时会遇到对函数的自变量还有附加条件的极值问题,这种极值称为**条件极值**。对于有些实际问题,可以把条件极值化为无条件极值求解;但在很多情形下,将条件极值化为无条件极值并不这样简单。另有一种直接寻求条件极值的方法,可以不必先把问题化到无条件极值的问题。这就是下面要介绍的**拉格朗日乘数法**。\n\n现在先来寻求函数\n$$ z=f(x,y) $$\n在条件\n$$ \\varphi(x,y)=0 $$\n下取得极值的必要条件。\n\n如果函数在 $(x_0,y_0)$ 取得所求的极值,那么首先有\n$$ \\varphi(x_0,y_0)=0 $$\n我们假定在 $(x_0,y_0)$ 的某一邻域内 $f(x,y)$ 与 $\\varphi(x,y)$ 均有连续的一阶偏导数,且 $\\varphi(x_0,y_0) \\neq 0$。由隐函数存在定理可知,约束条件确定一个连续且有连续导数的函数 $y=\\psi(x)$,将其带入原函数,结果得到一个变量 $x$ 的函数\n$$ z=f[x,\\psi(x)] $$\n于是原函数在 $(x_0,y_0)$ 取得所求的极值,也就是相当于上述函数在 $x=x_0$ 取得极值。由一元可导函数取得极值的必要条件知道\n$$ \\frac{dz}{dx}|_{x=x_0}=f_x(x_0,y_0)+f_y(x_0,y_0)\\frac{dy}{dx}|_{x=x_0}=0 $$\n而对约束条件用隐函数求导公式,有\n$$ \\frac{dy}{dx}|_{x=x_0} = -\\frac{\\varphi_x(x_0,y_0)}{\\varphi_y(x_0,y_0)} $$\n结合上面两式,可得\n$$ f_x(x_0,y_0) - f_y(x_0,y_0)\\frac{\\varphi_x(x_0,y_0)}{\\varphi_y(x_0,y_0)} = 0 $$\n\n设 $ \\frac{f_y(x_0,y_0)}{\\varphi_y(x_0,y_0)} = \\lambda$,上述必要条件就变为\n$$\\left\\{\n\\begin{aligned}\n& f_x(x_0,y_0) + \\lambda \\varphi_x(x_0,y_0) = 0 \\\\\n& f_y(x_0,y_0) + \\lambda \\varphi_y(x_0,y_0) = 0 \\\\\n& \\varphi(x_0,y_0) = 0\n\\end{aligned}\n\\right.\n$$\n\n若引进辅助函数\n$$ L(x,y) = f(x,y)+\\lambda \\varphi(x,y) $$\n则不难看出,方程组的前两式就是\n$$ L_x(x_0,y_0)=0, L_y(x_0,y_0)=0 $$\n函数 $L(x,y)$ 称为**拉格朗日函数**,参数 $\\lambda$ 称为**拉格朗日乘子**。\n\n由以上讨论,我们得到一下结论。\n\n**拉格朗日乘数法** 要找函数 $z=f(x,y)$ 在附加条件 $\\varphi(x,y)=0$ 下的可能极值点,可以先作拉格朗日函数\n$$ L(x,y) = f(x,y) + \\lambda \\varphi(x,y) $$\n其中 $\\lambda$ 为参数,求其对 $x$ 与 $y$ 的一阶偏导数,并使之为零,然后与约束条件联立起来\n$$\\left\\{\n\\begin{aligned}\n& f_x(x,y) + \\lambda \\varphi_x(x,y) = 0 \\\\\n& f_y(x,y) + \\lambda \\varphi_y(x,y) = 0 \\\\\n& \\varphi(x,y) = 0\n\\end{aligned}\n\\right.\n$$\n由这方程组解出 $x,y,\\lambda$,这样得到的 $(x,y)$ 就是函数 $f(x,y)$ 在附加条件 $\\varphi(x,y)=0$ 下的可能极值点。\n\n这方法还可以推广到自变量多于两个而条件多于一个的情形。例如,要求函数\n$$ u=f(x,y,z,t) $$\n在附加条件\n$$ \\varphi(x,y,z,t)=0, \\psi(x,y,z,t)=0 $$\n下的极值,可以先作拉格朗日函数\n$$ L(x,y,z,t) = f(x,y,z,t) + \\lambda \\varphi(x,y,z,t) + \\mu \\psi(x,y,z,t) $$\n其中 $\\lambda, \\mu$ 均为参数,求其一阶偏导数,并使之为零,然后与两个附加条件联立起来求解,这样得出的 $(x,y,z,t)$ 就是函数 $f(x,y,z,t)$ 在附加条件下的可能极值点。\n\n至于如何确定所求得的点是否极值点,在实际问题中往往可根据问题本身的性质来判定。",
"_____no_output_____"
],
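[
"A minimal example of the method (a sketch, assuming SymPy; maximizing $f(x,y)=xy$ subject to $x+y-1=0$ is just a simple illustration):\n\n```python\nimport sympy as sp\n\nx, y, lam = sp.symbols('x y lam', real=True)\nf = x * y                 # objective\ng = x + y - 1             # constraint: varphi(x, y) = 0\n\nL = f + lam * g           # Lagrangian\neqs = [sp.diff(L, x), sp.diff(L, y), g]\nprint(sp.solve(eqs, [x, y, lam], dict=True))   # candidate: x = y = 1/2\n```",
"_____no_output_____"
],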
[
"### 9. 二元函数的泰勒公式\n\n略。",
"_____no_output_____"
],
[
"### 10. 最小二乘法\n\n略。",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e76e8fc210067db25bae3bcbe93e7cbbd91596c0 | 8,644 | ipynb | Jupyter Notebook | 04-lista-duplamente-encadeada.ipynb | alexandre77/estruturas-de-dados | 63af4db6a5cbe0708ce0c2132a85127152248792 | [
"MIT"
] | null | null | null | 04-lista-duplamente-encadeada.ipynb | alexandre77/estruturas-de-dados | 63af4db6a5cbe0708ce0c2132a85127152248792 | [
"MIT"
] | null | null | null | 04-lista-duplamente-encadeada.ipynb | alexandre77/estruturas-de-dados | 63af4db6a5cbe0708ce0c2132a85127152248792 | [
"MIT"
] | 1 | 2021-03-05T07:46:37.000Z | 2021-03-05T07:46:37.000Z | 24.83908 | 97 | 0.429431 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e76e925e7639787618442434004c53423791b0cd | 350,544 | ipynb | Jupyter Notebook | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge | f65432afe55d3f90dfb1918b66744c006aafd32f | [
"MIT"
] | null | null | null | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge | f65432afe55d3f90dfb1918b66744c006aafd32f | [
"MIT"
] | null | null | null | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge | f65432afe55d3f90dfb1918b66744c006aafd32f | [
"MIT"
] | null | null | null | 193.24366 | 193,240 | 0.886568 | [
[
[
"## Data Import and Check",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np \nimport matplotlib.pyplot as plt\nimport seaborn as sns; sns.set()\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.metrics import roc_curve\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import classification_report\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score\nfrom scipy import stats\nimport statsmodels.api as sm\nfrom scipy.stats import mannwhitneyu\nimport matplotlib.gridspec as gridspec",
"_____no_output_____"
]
],
[
[
"* I import data and drop duplicates\n* I had tried to set the user id as index. Expectedly, it did not work as a user can have multiple trips. However the user - trip combination did not work either which revealed the entire rows duplicated\n* Once the duplicates are removed, the count of use -trip combinations reveal they constitute a unique key",
"_____no_output_____"
]
],
[
[
"hoppi = pd.read_csv('C:/Users/gurkaali/Documents/Info/Ben/Hop/WatchesTable.csv', sep=\",\")\nhoppi.drop_duplicates(inplace = True)\nhoppi.groupby(['user_id', 'trip_id'])['user_id']\\\n .count() \\\n .reset_index(name='count')\\\n .sort_values(['count'], ascending = False)\\\n .head(5)",
"_____no_output_____"
]
],
[
[
"Now that I am sure, I can set the index:",
"_____no_output_____"
]
],
[
[
"hoppi.set_index(['user_id', 'trip_id'], inplace = True)",
"_____no_output_____"
]
],
[
[
"Pandas has great features for date calculations. I set the related field types as datetime in case I need those features",
"_____no_output_____"
]
],
[
[
"hoppi['departure_date'] = pd.to_datetime(hoppi['departure_date'], format = '%m/%d/%y')\nhoppi['return_date'] = pd.to_datetime(hoppi['return_date'], format = '%m/%d/%y')\nhoppi['first_search_dt'] = pd.to_datetime(hoppi['first_search_dt'], format = '%m/%d/%y %H:%M')\nhoppi['watch_added_dt'] = pd.to_datetime(hoppi['watch_added_dt'], format = '%m/%d/%y %H:%M')\nhoppi['latest_status_change_dt'] = pd.to_datetime(hoppi['latest_status_change_dt'], format = '%m/%d/%y %H:%M')\nhoppi['first_buy_dt'] = pd.to_datetime(hoppi['first_buy_dt'], format = '%m/%d/%y %H:%M')\nhoppi['last_notif_dt'] = pd.to_datetime(hoppi['last_notif_dt'], format = '%m/%d/%y %H:%M')\nhoppi['forecast_last_warning_date'] = pd.to_datetime(hoppi['forecast_last_warning_date'], format = '%m/%d/%y')\nhoppi['forecast_last_danger_date'] = pd.to_datetime(hoppi['forecast_last_danger_date'], format = '%m/%d/%y')",
"_____no_output_____"
]
],
[
[
"The explanations in the assignment do not cover all fields but field names and the content enable further data verification\n* Stay should be the difference between departure and return dates. Based on that assumption, the query below should return no records i.e. the 1st item in the tuple returned by shape should be 0:",
"_____no_output_____"
]
],
[
[
"hoppi['stay2'] = pd.to_timedelta(hoppi['stay'], unit = 'D')\nhoppi['stay_check'] = hoppi['return_date'] - hoppi['departure_date']\nhoppi.loc[(hoppi['stay_check'] != hoppi['stay2']) & (hoppi['return_date'].isnull() == False), \\\n ['stay2', 'stay_check', 'return_date', 'departure_date']].shape",
"_____no_output_____"
]
],
[
[
"The following date fields must not be before the first search date. Therefore the queries below should reveal no records\n* watch_added_dt\n* latest_status_change_dt\n* first_buy_dt\n* last_notif_dt\n* forecast_last_warning_date\n* forecast_last_danger_date",
"_____no_output_____"
]
],
[
[
"hoppi.loc[(hoppi['watch_added_dt'] < hoppi['first_search_dt']), ['first_search_dt', 'watch_added_dt']].shape",
"_____no_output_____"
],
[
"hoppi.loc[(hoppi['latest_status_change_dt'] < hoppi['first_search_dt']), ['first_search_dt', 'latest_status_change_dt']].shape",
"_____no_output_____"
]
],
[
[
"33 records have a first buy suggestion datetime earlier than the user's first search.",
"_____no_output_____"
]
],
[
[
"hoppi.loc[(hoppi['first_buy_dt'] < hoppi['first_search_dt']), ['first_search_dt', 'first_buy_dt']].shape",
"_____no_output_____"
]
],
[
[
"While the difference is just minutes in most cases, I don't have an explanation to justify it. Given the limited number of cases, I prefer removing them",
"_____no_output_____"
]
],
[
[
"hoppi.loc[(hoppi['first_buy_dt'] < hoppi['first_search_dt']), ['first_search_dt', 'first_buy_dt']].head()",
"_____no_output_____"
],
[
"hoppi = hoppi.loc[~(hoppi['first_buy_dt'] < hoppi['first_search_dt'])]",
"_____no_output_____"
]
],
[
[
"There are also 2 records where the last notification is done before the user's first search. I remove those as well",
"_____no_output_____"
]
],
[
[
"hoppi.loc[(hoppi['last_notif_dt'] < hoppi['first_search_dt']), ['first_search_dt', 'last_notif_dt']]",
"_____no_output_____"
],
[
"hoppi = hoppi.loc[~(hoppi['last_notif_dt'] < hoppi['first_search_dt'])]",
"_____no_output_____"
]
],
[
[
"Same checks on last warning and last danger dates show 362K + and 98K + suspicious records. As the quantitiy is large and descriptions sent with the assignment do not contain details on these 2 fields, I prefer to keep them while taking a note here in case something provides with additional argument to delete them during analyses.",
"_____no_output_____"
]
],
[
[
"hoppi.loc[(hoppi['forecast_last_warning_date'] < hoppi['first_search_dt']), \\\n ['first_search_dt', 'forecast_last_warning_date']].shape",
"_____no_output_____"
],
[
"hoppi.loc[(hoppi['forecast_last_danger_date'] < hoppi['first_search_dt']), \\\n ['first_search_dt', 'forecast_last_danger_date']].shape",
"_____no_output_____"
]
],
[
[
"### Check outliers",
"_____no_output_____"
],
[
"I reshape the columns in a way that will make working with seaborn easier:",
"_____no_output_____"
]
],
[
[
"hoppi_box_components = [hoppi[['first_advance']].assign(measurement_type = 'first_advance').reset_index(). \\\n rename(columns = {'first_advance': 'measurement'}),\n hoppi[['watch_advance']].assign(measurement_type = 'watch_advance').reset_index(). \\\n rename(columns = {'watch_advance': 'measurement'}),\n hoppi[['current_advance']].assign(measurement_type = 'current_advance').reset_index(). \\\n rename(columns = {'current_advance': 'measurement'})]\nhoppi_box = pd.concat(hoppi_box_components)",
"_____no_output_____"
],
[
"sns.set(font = 'DejaVu Sans', style = 'white')\nax = sns.boxplot(x=\"measurement_type\", y=\"measurement\",\n data=hoppi_box, palette=[\"#FA6866\", \"#01AAE4\", \"#505050\"], #Hopper colors\n linewidth = 0.5)",
"_____no_output_____"
]
],
[
[
"While several observations look like outliers on the boxplots, the histograms below show that the data is highly skewed. Therefore I do not consider them as outliers",
"_____no_output_____"
]
],
[
[
"f, axes = plt.subplots(1, 3, figsize=(15, 5), sharex=True)\nsns.distplot(hoppi['first_advance'], kde=False, color=\"#FA6866\", ax=axes[0])\nsns.distplot(hoppi.loc[hoppi['watch_advance'].isnull() == False, 'watch_advance'], kde=False, color=\"#01AAE4\", ax=axes[1])\nsns.distplot(hoppi.loc[hoppi['current_advance'].isnull() == False, 'current_advance'], kde=False, color=\"#505050\", ax=axes[2])",
"_____no_output_____"
]
],
[
[
"### Question 1",
"_____no_output_____"
],
[
"Given the business model of Hopper, we should understand who is more likely to buy a ticket eventually. Logistic Regression constitutes a convenient way of conducting such analysis. It runs faster than SVN and is easier to interpret, making it ideal for a task like this one:",
"_____no_output_____"
],
[
"I prepare categorical variables for trip types:",
"_____no_output_____"
]
],
[
[
"one_hot_trip_type = pd.get_dummies(hoppi['trip_type'])\nhoppi2 = hoppi.join(one_hot_trip_type)",
"_____no_output_____"
]
],
[
[
"I believe the city / airport distinction in origin and destination fields refer to the fact that some airports are more central such as the difference between Toronto Billy Bishop and Pearson airports. I also checked some airport codes, they do corresponds to cities where there are multiple airports with one or more being city airports",
"_____no_output_____"
]
],
[
[
"origin_cols = hoppi2['origin'].str.split(\"/\", n = 1, expand = True) \nhoppi2['origin_code'] = origin_cols[1]\nhoppi2['origin_type'] = origin_cols[0]\n\ndestination_cols = hoppi2['destination'].str.split(\"/\", n = 1, expand = True) \nhoppi2['destination_code'] = destination_cols[1]\nhoppi2['destination_type'] = destination_cols[0]",
"_____no_output_____"
],
[
"one_hot_destination_type = pd.get_dummies(hoppi2['destination_type'])\nhoppi3 = hoppi2.join(one_hot_destination_type)\nhoppi3.rename(columns={\"airport\": \"destination_airport\", \"city\": \"destination_city\"}, inplace = True)",
"_____no_output_____"
],
[
"one_hot_origin_type = pd.get_dummies(hoppi3['origin_type'])\nhoppi4 = hoppi3.join(one_hot_origin_type)\nhoppi4.rename(columns={\"airport\": \"origin_airport\", \"city\": \"origin_city\"}, inplace = True)",
"_____no_output_____"
]
],
[
[
"I prepare categorical variables for whether a watch is placed or not:",
"_____no_output_____"
]
],
[
[
"hoppi4.loc[hoppi3['watch_added_dt'].isnull() == True, 'watch_bin'] = 0\nhoppi4.loc[hoppi3['watch_added_dt'].isnull() == False, 'watch_bin'] = 1",
"_____no_output_____"
]
],
[
[
"Given the user - trip combination being unique across the data file, we do not have information on the changes for a user who has updated his trip status. As the data looks like covering the last status of a trip, I prefer to focus analyses on concluded queries i.e. trips either expired or booked. I exclude:\n* actives: because their result is yet to be seen. The user can end up booking before departure\n* shopped: because a user can make several searches on the same itinerary with alternative options each ending up as a new record in the database. I consider a search once the suer starts following the trip price\n* inactive: because some have departure in the future so their result cannot be concluded. I also exclude those with departure in the past as it falls in the same category as the shopped trips as the user stopped following the trip.\nI assign a new column for records I take into account in my analyses further below:",
"_____no_output_____"
]
],
[
[
"hoppi4.loc[hoppi3['status_latest'] == 'expired', 'result'] = 0\nhoppi4.loc[hoppi3['status_latest'] == 'booked', 'result'] = 1",
"_____no_output_____"
]
],
[
[
"A person might be prompted to buy once the price falls because it makes sense or maybe he buys as soon as it starts increasing to avoid further increase. Whatever the case, it makes sense to compare the price at different time points with respect to the original price at first search. For that, I create columns to measure price difference between the last price, the first time a buy recommended, the lowest price vs the the very first price:",
"_____no_output_____"
]
],
[
[
"hoppi4['dif_last_first'] = hoppi4['last_total'] - hoppi4['first_total']\nhoppi4['dif_buy_first'] = hoppi4['first_buy_total'] - hoppi4['first_total']\nhoppi4['dif_lowest_first'] = hoppi4['lowest_total'] - hoppi4['first_total']",
"_____no_output_____"
]
],
[
[
"I create a categorical variable for the last recommendation as well to check whether a buy recommendation makes user to book:",
"_____no_output_____"
]
],
[
[
"one_hot_last_rec = pd.get_dummies(hoppi4['last_rec']) # this create s 2 columns: buy and wait\nhoppi5 = hoppi4.join(one_hot_last_rec)\nhoppi5.loc[hoppi5['last_rec'].isnull(), 'buy'] = np.nan # originally null values are given 0. I undo that manipulation here",
"_____no_output_____"
]
],
[
[
"I make a table with rows containing certain results that I want to focus on i.e. expired and booked",
"_____no_output_____"
]
],
[
[
"hoppi6 = hoppi5.loc[hoppi5['result'].isnull() == False, \n ['round_trip', \n 'destination_city', 'origin_city',\n 'weekend',\n 'filter_no_lcc', 'filter_non_stop', 'filter_short_layover', 'status_updates', \n 'watch_bin', 'total_notifs', 'total_buy_notifs', 'buy',\n 'dif_last_first', 'dif_buy_first', 'dif_lowest_first', 'first_advance', 'result']]\nhoppi6.info()",
"<class 'pandas.core.frame.DataFrame'>\nMultiIndex: 45237 entries, (e42e7c15cde08c19905ee12200fad7cb5af36d1fe3a3310b5f94f95c47ae51cd, 05d59806e67fa9a5b2747bc1b24842189bba0c45e49d3714549fc5df9838ed20) to (d414b1c72a16512dbd7b3859c9c9f574633578acef74d120490625d9010103c7, 3a363a2456b6b7605347e06d2879162b3008004370f73a68f52523330ccd38a6)\nData columns (total 17 columns):\nround_trip 45237 non-null uint8\ndestination_city 45237 non-null uint8\norigin_city 45237 non-null uint8\nweekend 45237 non-null int64\nfilter_no_lcc 45237 non-null int64\nfilter_non_stop 45237 non-null int64\nfilter_short_layover 45237 non-null int64\nstatus_updates 45237 non-null int64\nwatch_bin 45237 non-null float64\ntotal_notifs 44800 non-null float64\ntotal_buy_notifs 44800 non-null float64\nbuy 44800 non-null float64\ndif_last_first 44800 non-null float64\ndif_buy_first 44133 non-null float64\ndif_lowest_first 44800 non-null float64\nfirst_advance 45237 non-null int64\nresult 45237 non-null float64\ndtypes: float64(8), int64(6), uint8(3)\nmemory usage: 12.8+ MB\n"
]
],
[
[
"Some rows have null values such as the price difference between the buy moment and the first price as some users may not have gt the buy recommendation yet. To cover these features, I get only non-null rows:",
"_____no_output_____"
]
],
[
[
"df = hoppi6.dropna()\ndf.info()",
"<class 'pandas.core.frame.DataFrame'>\nMultiIndex: 44133 entries, (e42e7c15cde08c19905ee12200fad7cb5af36d1fe3a3310b5f94f95c47ae51cd, 05d59806e67fa9a5b2747bc1b24842189bba0c45e49d3714549fc5df9838ed20) to (d414b1c72a16512dbd7b3859c9c9f574633578acef74d120490625d9010103c7, 3a363a2456b6b7605347e06d2879162b3008004370f73a68f52523330ccd38a6)\nData columns (total 17 columns):\nround_trip 44133 non-null uint8\ndestination_city 44133 non-null uint8\norigin_city 44133 non-null uint8\nweekend 44133 non-null int64\nfilter_no_lcc 44133 non-null int64\nfilter_non_stop 44133 non-null int64\nfilter_short_layover 44133 non-null int64\nstatus_updates 44133 non-null int64\nwatch_bin 44133 non-null float64\ntotal_notifs 44133 non-null float64\ntotal_buy_notifs 44133 non-null float64\nbuy 44133 non-null float64\ndif_last_first 44133 non-null float64\ndif_buy_first 44133 non-null float64\ndif_lowest_first 44133 non-null float64\nfirst_advance 44133 non-null int64\nresult 44133 non-null float64\ndtypes: float64(8), int64(6), uint8(3)\nmemory usage: 12.6+ MB\n"
],
[
"X = df[['round_trip', \n 'destination_city', 'origin_city',\n 'weekend',\n 'filter_non_stop', 'filter_short_layover', 'status_updates', 'filter_no_lcc', \n 'watch_bin', 'total_notifs', 'buy', 'total_buy_notifs', \n 'dif_lowest_first',\n 'dif_last_first', \n 'dif_buy_first', \n 'first_advance']] \ny = df['result']\nprint(X.shape, y.shape)",
"(44133, 16) (44133,)\n"
],
[
"X_train, X_test , y_train, y_test = train_test_split(X, y, test_size=0.8, random_state=1)\nlogit_model=sm.Logit(y_train, X_train)\nresult=logit_model.fit(maxiter = 1000)\nprint(result.summary2())",
"Optimization terminated successfully.\n Current function value: 0.041569\n Iterations 12\n Results: Logit\n=====================================================================\nModel: Logit Pseudo R-squared: 0.852 \nDependent Variable: result AIC: 765.7809\nDate: 2019-06-20 12:20 BIC: 879.1482\nNo. Observations: 8826 Log-Likelihood: -366.89 \nDf Model: 15 LL-Null: -2484.6 \nDf Residuals: 8810 LLR p-value: 0.0000 \nConverged: 1.0000 Scale: 1.0000 \nNo. Iterations: 12.0000 \n---------------------------------------------------------------------\n Coef. Std.Err. z P>|z| [0.025 0.975]\n---------------------------------------------------------------------\nround_trip -0.7625 0.2318 -3.2898 0.0010 -1.2167 -0.3082\ndestination_city 0.4061 0.2259 1.7976 0.0722 -0.0367 0.8489\norigin_city 0.3157 0.2161 1.4605 0.1441 -0.1079 0.7393\nweekend 0.8184 0.2740 2.9873 0.0028 0.2815 1.3554\nfilter_non_stop 0.1179 0.2647 0.4455 0.6559 -0.4009 0.6368\nfilter_short_layover 0.7755 0.4312 1.7983 0.0721 -0.0697 1.6207\nstatus_updates 0.1079 0.0666 1.6196 0.1053 -0.0227 0.2386\nfilter_no_lcc -0.2027 0.8046 -0.2519 0.8011 -1.7796 1.3743\nwatch_bin -5.3110 0.4588 -11.5750 0.0000 -6.2103 -4.4117\ntotal_notifs -1.0820 0.1657 -6.5284 0.0000 -1.4069 -0.7572\nbuy 2.6103 0.4266 6.1188 0.0000 1.7742 3.4464\ntotal_buy_notifs -0.3924 0.1985 -1.9773 0.0480 -0.7814 -0.0034\ndif_lowest_first 0.0125 0.0035 3.6156 0.0003 0.0057 0.0194\ndif_last_first -0.0067 0.0014 -4.7947 0.0000 -0.0095 -0.0040\ndif_buy_first -0.0135 0.0042 -3.1968 0.0014 -0.0217 -0.0052\nfirst_advance 0.1581 0.0095 16.6355 0.0000 0.1395 0.1768\n=====================================================================\n\n"
],
[
"lr = LogisticRegression()\nlr.fit(X_train, y_train)\ny_pred = lr.predict(X_test)",
"C:\\ProgramData\\Anaconda3\\envs\\operational\\lib\\site-packages\\sklearn\\linear_model\\logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.\n FutureWarning)\n"
],
[
"accuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"sum(y_train)/len(y_train)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred))",
" precision recall f1-score support\n\n 0.0 0.99 1.00 0.99 32564\n 1.0 0.97 0.87 0.92 2743\n\n micro avg 0.99 0.99 0.99 35307\n macro avg 0.98 0.93 0.96 35307\nweighted avg 0.99 0.99 0.99 35307\n\n"
]
],
[
[
"#### Data driven insights:\nThe model shows a good level of accuracy. However given the imbalance of data (only 8% of data corresponds to an actual booking) it is crucial to check recall which also shows a high value i.e. false negatives are limited.\nNow that we know the model looks robust, we can make the following data-driven insights:\n1. City travelers, regardless their origin and destination, are not necessarily more likely to end up booking. These are the people likely to be business travelers. When we look at the weekend travelers which I use as a substitute as pleasure travelers, people are significantly more likely to end up booking. It looks like people are more sensitive to buy recommendations when it is a personal travel\n2. Among filters, only those who filter for short layover are more likely to book whereas the significance is not as powerfull (at a p = 0.10 level)\n3. Buy recommendations significantly impact booking behavior which indicates that the algorithm makes sense to the customer\n4. Price fluctuations have significant impact on users' booking behavior:\n * The lowest and first price difference is significant with a positive relationship with the booking behavior showing that people are more likely to buy when there is a price drop after their first query\n * When the last price or the price of the buy recommendation are higher than the first price, users are less likely to book.\n * The above 2 points show that the algorithm leads to the expected user behavior.\n5. Those who sign up for a price watch are more likely to book. This feature might be an indicator that the user is seriously making a plan. To be concrete, someone who is looking at options for a dream vacation in case he wins the lottery would not be setting the watch on whereas someone who took days off at work next month would do so.",
"_____no_output_____"
],
[
"### Question 2",
"_____no_output_____"
],
[
"#### Most \"watched\" itineraries",
"_____no_output_____"
],
[
"I would like to see the watch selected cases w.r.t. the itinerary i.e. NY to MTL would be considered the same as MTL to NY",
"_____no_output_____"
]
],
[
[
"hoppi5.loc[(hoppi5['watch_bin'] == 1.0) & (hoppi5['result'] == 0)].info()",
"<class 'pandas.core.frame.DataFrame'>\nMultiIndex: 40873 entries, (e42e7c15cde08c19905ee12200fad7cb5af36d1fe3a3310b5f94f95c47ae51cd, 05d59806e67fa9a5b2747bc1b24842189bba0c45e49d3714549fc5df9838ed20) to (d414b1c72a16512dbd7b3859c9c9f574633578acef74d120490625d9010103c7, 3a363a2456b6b7605347e06d2879162b3008004370f73a68f52523330ccd38a6)\nData columns (total 58 columns):\ntrip_type 40873 non-null object\norigin 40873 non-null object\ndestination 40873 non-null object\ndeparture_date 40873 non-null datetime64[ns]\nreturn_date 34064 non-null datetime64[ns]\nstay 34064 non-null float64\nweekend 40873 non-null int64\nfilter_no_lcc 40873 non-null int64\nfilter_non_stop 40873 non-null int64\nfilter_short_layover 40873 non-null int64\nfilter_name 40873 non-null object\nstatus_updates 40873 non-null int64\nfirst_search_dt 40873 non-null datetime64[ns]\nwatch_added_dt 40873 non-null datetime64[ns]\nlatest_status_change_dt 40873 non-null datetime64[ns]\nstatus_latest 40873 non-null object\ntotal_notifs 40713 non-null float64\ntotal_buy_notifs 40713 non-null float64\nfirst_rec 40713 non-null object\nfirst_total 40713 non-null float64\nlast_rec 40713 non-null object\nlast_total 40713 non-null float64\nfirst_buy_dt 40674 non-null datetime64[ns]\nfirst_buy_total 40674 non-null float64\nlowest_total 40713 non-null float64\nlast_notif_dt 38897 non-null datetime64[ns]\nforecast_first_target_price 40187 non-null float64\nforecast_first_good_price 40187 non-null float64\nforecast_last_target_price 40687 non-null float64\nforecast_last_good_price 40687 non-null float64\nforecast_last_warning_date 40450 non-null datetime64[ns]\nforecast_last_danger_date 37664 non-null datetime64[ns]\nforecast_min_target_price 40687 non-null float64\nforecast_max_target_price 40687 non-null float64\nforecast_min_good_price 40687 non-null float64\nforecast_max_good_price 40687 non-null float64\nwatch_advance 40873 non-null float64\nfirst_advance 40873 non-null int64\ncurrent_advance 0 non-null float64\nstay2 34064 non-null timedelta64[ns]\nstay_check 34064 non-null timedelta64[ns]\none_way 40873 non-null uint8\nround_trip 40873 non-null uint8\norigin_code 40873 non-null object\norigin_type 40873 non-null object\ndestination_code 40873 non-null object\ndestination_type 40873 non-null object\ndestination_airport 40873 non-null uint8\ndestination_city 40873 non-null uint8\norigin_airport 40873 non-null uint8\norigin_city 40873 non-null uint8\nwatch_bin 40873 non-null float64\nresult 40873 non-null float64\ndif_last_first 40713 non-null float64\ndif_buy_first 40674 non-null float64\ndif_lowest_first 40713 non-null float64\nbuy 40713 non-null float64\nwait 40873 non-null uint8\ndtypes: datetime64[ns](9), float64(23), int64(6), object(11), timedelta64[ns](2), uint8(7)\nmemory usage: 23.9+ MB\n"
],
[
"pareto_watch_0 = hoppi5.loc[(hoppi5['watch_bin'] == 1.0) & (hoppi5['result'] == 0.0), ['origin_code', 'destination_code']]\npareto_watch_0.loc[pareto_watch_0['origin_code'] < pareto_watch_0['destination_code'], \\\n 'itinerary'] = \\\n pareto_watch_0['origin_code'] + pareto_watch_0['destination_code']\npareto_watch_0.loc[pareto_watch_0['origin_code'] > pareto_watch_0['destination_code'], \\\n 'itinerary'] = \\\n pareto_watch_0['destination_code'] + pareto_watch_0['origin_code']",
"_____no_output_____"
],
[
"pareto_watch_0.info()",
"<class 'pandas.core.frame.DataFrame'>\nMultiIndex: 40873 entries, (e42e7c15cde08c19905ee12200fad7cb5af36d1fe3a3310b5f94f95c47ae51cd, 05d59806e67fa9a5b2747bc1b24842189bba0c45e49d3714549fc5df9838ed20) to (d414b1c72a16512dbd7b3859c9c9f574633578acef74d120490625d9010103c7, 3a363a2456b6b7605347e06d2879162b3008004370f73a68f52523330ccd38a6)\nData columns (total 3 columns):\norigin_code 40873 non-null object\ndestination_code 40873 non-null object\nitinerary 40873 non-null object\ndtypes: object(3)\nmemory usage: 39.9+ MB\n"
],
[
"pareto_watch = pareto_watch_0 \\\n .groupby(['itinerary']) \\\n .size().reset_index() \\\n .rename(columns = {0: 'count'}) \\\n .sort_values(['count'], ascending = False)",
"_____no_output_____"
],
[
"pareto_watch.set_index('itinerary', inplace = True)\npareto_watch['cumulative_sum'] = pareto_watch['count'].cumsum()\npareto_watch['cumulative_perc'] = 100 * pareto_watch['cumulative_sum'] / pareto_watch['count'].sum()",
"_____no_output_____"
],
[
"pareto_watch.loc[pareto_watch['cumulative_perc'] <= 80].shape[0]\npareto_watch.shape[0]\nprint('All observations where the user watched the price but did not book, cover ',\n pareto_watch.shape[0],\n 'itineraries. Out of these, ',\n pareto_watch.loc[pareto_watch['cumulative_perc'] <= 80].shape[0],\n ' constitute 80% of the whole observation set. That is around ',\n round(100 * pareto_watch.loc[pareto_watch['cumulative_perc'] <= 80].shape[0] / pareto_watch.shape[0], 1),\n '% of the whole set.')",
"All observations where the user watched the price but did not book, cover 11697 itineraries. Out of these, 4236 constitute 80% of the whole observation set. That is around 36.2 % of the whole set.\n"
]
],
[
[
"The list gives the biggest airports. This result reassures that it is additionally critical to make reliable estimations for these itineraries. The top 10 itineraries consist only of US destinations showing the importance of the US market. As we have seen in the previous question, a user setting the watch on is a good estimator of an actual booking. Therefore accuracy of price estimations is extra important for the US market. This information could be handy for the data scientists developing algorithms e.g. they can give extra weight to the accuracy of US flights",
"_____no_output_____"
],
[
"#### Watch vs the Moment the First Search is Done",
"_____no_output_____"
],
[
"* Here I am looking whether there is significant difference between users with a watch and without in terms of the following two:\n * the first price found \n * the number of days left to departure as of first search",
"_____no_output_____"
]
],
[
[
"dfw = hoppi5.loc[hoppi5['result'].isnull() == False, ['first_advance', 'first_total', 'watch_bin', 'result']]\ndfw = dfw.dropna()",
"_____no_output_____"
],
[
"dfw.groupby('watch_bin').agg({'first_advance': np.mean, 'first_total': np.mean})",
"_____no_output_____"
]
],
[
[
"* As the data is skewed using non-parametrical tests makes more sense. I use the Mann Whitney test for that purpose\n* The test reveal significant difference between watched and non-watched itineraries at 0.1 level in terms of the number of days between the departure and the first search. Those who place a watch have a week less time left to their departure compared to the rest. Users may be using hopper as an assistant when they feel like they missed the time window where they could shop for different offers. For those users more frequent notifications can be planned",
"_____no_output_____"
]
],
[
[
"stat, p = mannwhitneyu(dfw.loc[dfw['watch_bin'] == 1, 'first_advance'], \n dfw.loc[dfw['watch_bin'] == 0, 'first_advance'])\nprint('Statistics=%.3f, p=%.3f' % (stat, p))",
"Statistics=41274130.000, p=0.074\n"
]
],
[
[
"* The test on same user groups (those watching vs those who don't) show that they differ in terms of the price they get at their first search. The difference is highly significant given the p-value. \n* Those who watch have a trip cost of USD125 more on average.\n* There might be a growth opportunity in budget passengers. When the user makes a first search which reveals a relatively cheap price, Hopper can suggest watching for the same trip with additional services such as business class. If that suggesiton can be supported with a statement like \"business flights for this flight can get as close as $X to the economy fares, why don't you watch?\" the user can be convinced to shop for more.",
"_____no_output_____"
]
],
[
[
"stat, p = mannwhitneyu(dfw.loc[dfw['watch_bin'] == 1, 'first_total'], \n dfw.loc[dfw['watch_bin'] == 0, 'first_total'])\nprint('Statistics=%.3f, p=%.3f' % (stat, p))",
"Statistics=29810391.000, p=0.000\n"
]
],
[
[
"### Question 3",
"_____no_output_____"
],
[
"Chart 1: What is the situation as of now compared to PY?\n* Note that from the current advance field in the data, I see that we are on April 10th 2018\n* Expired: Watch is on + Current Date > Departure Date\n* Inactive: Watch is off + Current Date can be before or after Current Date\n* Active: Watch is on + Current Date <= Departure Date\n* Shopped: Watch is on or off + Current Date can be before or after Current Date; the latter of first search and watch date added is equal to the latest_status_change\n* Booked: ",
"_____no_output_____"
],
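[
"A rough sketch of how these status rules could be expressed (a minimal illustration, assuming the column names used elsewhere in this notebook; the dataset already provides status_latest, and the 'shopped' case is lumped together with 'inactive' here):\n\n```python\nimport pandas as pd\n\ncurrent_date = pd.Timestamp('2018-04-10')\n\ndef classify(row):\n    watch_on = pd.notnull(row['watch_added_dt'])\n    departed = row['departure_date'] < current_date\n    if row['status_latest'] == 'booked':\n        return 'booked'\n    if watch_on and departed:\n        return 'expired'\n    if watch_on:\n        return 'active'\n    return 'inactive or shopped'\n\n# example usage: hoppi5.apply(classify, axis=1).value_counts()\n```",
"_____no_output_____"
],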
[
"#### Chart 1: Number of Incoming / Outgoing / Converted Searches Through Time",
"_____no_output_____"
],
[
"On a daily basis, I'd like to see the number of \n* new searches of trips (incoming), \n* end of validity trips i.e. trips with departure date passing by\n* converted searches i.e. booked trips\n\n<br>\nIdeally I would like to see the number for these for a given day / time window as well as for the same period prior year (more on this in Q4). howeveer the data covers first searches over a period from 2018 start to April 10th. For illustrative purposes I show the count of the 2 KPIs above throughout the year",
"_____no_output_____"
],
[
"It is good practive to create date range and join data onto that as the data source may not have data for every day:",
"_____no_output_____"
]
],
[
[
"date_range = pd.date_range(start='1/1/2018', end='04/10/2018', freq='D')\ndf_date = pd.DataFrame(date_range, columns = ['date_range'])\ndf_date.set_index('date_range', inplace = True)",
"_____no_output_____"
]
],
[
[
"incoming traffic counts the number of first time searches each day:",
"_____no_output_____"
]
],
[
[
"hoppi5['first_search_dt_dateonly'] = hoppi5['first_search_dt'].dt.date\nincoming_traffic = hoppi5.groupby(['first_search_dt_dateonly']) \\\n .size().reset_index() \\\n .rename(columns = {0: 'count'}) \nincoming_traffic.set_index('first_search_dt_dateonly', inplace = True)",
"_____no_output_____"
]
],
[
[
"outgoing traffic counts the number of trips with departure within the same day, each day. Until a trip is considered 'outgoing' there is a chance that it can be converted to booking:",
"_____no_output_____"
]
],
[
[
"outgoing_traffic = hoppi5.groupby(['departure_date']) \\\n .size().reset_index() \\\n .rename(columns = {0: 'count'}) \noutgoing_traffic.set_index('departure_date', inplace = True)",
"_____no_output_____"
]
],
[
[
"converted traffic is the numbe rof bookings that took place each day i.e. conversions:",
"_____no_output_____"
]
],
[
[
"hoppi5['latest_status_change_dt_dateonly'] = hoppi5['first_search_dt'].dt.date\nconverted_traffic = hoppi5.loc[hoppi5['status_latest'] == 'booked'].groupby(['latest_status_change_dt_dateonly']) \\\n .size().reset_index() \\\n .rename(columns = {0: 'count'}) \nconverted_traffic.set_index('latest_status_change_dt_dateonly', inplace = True)",
"_____no_output_____"
]
],
[
[
"I join counts on the date range index created above:",
"_____no_output_____"
]
],
[
[
"df_chart1 = pd.merge(df_date, incoming_traffic, left_index = True, right_index = True, how='left')\ndf_chart1.rename(columns = {'count': 'incoming_count'}, inplace = True)\ndf_chart2 = pd.merge(df_chart1, outgoing_traffic, left_index = True, right_index = True, how='left')\ndf_chart2.rename(columns = {'count': 'outgoing_count'}, inplace = True)\ndf_chart3 = pd.merge(df_chart2, converted_traffic, left_index = True, right_index = True, how='left')\ndf_chart3.rename(columns = {'count': 'converted_count'}, inplace = True)\ndf_chart3['day'] = df_chart3.index.dayofyear",
"_____no_output_____"
],
[
"df_chart3_components = [df_chart3[['incoming_count', 'day']].assign(count_type = 'incoming').reset_index(). \\\n rename(columns = {'incoming_count': 'count'}),\n df_chart3[['outgoing_count', 'day']].assign(count_type = 'outgoing').reset_index(). \\\n rename(columns = {'outgoing_count': 'count'}),\n df_chart3[['converted_count', 'day']].assign(count_type = 'converted').reset_index(). \\\n rename(columns = {'converted_count': 'count'})]\ndf_chart4 = pd.concat(df_chart3_components)",
"_____no_output_____"
]
],
[
[
"I plot the chart here below. Note that the data collection seems to have started as of 2018 start. Therefore the outgoing count do not reflect the reality in the early periods of the chart. Also the number of trips whose departure is in the future at a given time could be shown as well. That would show the pool of trips that could be converted at a given time.",
"_____no_output_____"
]
],
[
[
"sns.set_style('dark')\nfig, ax1 = plt.subplots(figsize=(15,10))\nax2 = ax1.twinx()\nsns.lineplot(x=df_chart3['day'], \n y=df_chart3['incoming_count'],\n color='#6FC28B',\n marker = \"X\",\n ax=ax1)\nsns.lineplot(x=df_chart3['day'], \n y=df_chart3['outgoing_count'],\n color='#FA6866',\n marker=\"v\",\n ax=ax1)\nsns.lineplot(x=df_chart3['day'], \n y=df_chart3['converted_count'],\n color='#F0A02A',\n marker=\"o\",\n ax=ax2)\nfig.legend(['Incoming #', 'Expiring #', 'Converted #'])\nax1.set(xlabel='Day of Year', ylabel='Incoming and Expiring Search Count')\nax2.set(ylabel='Converted Search Count')\nplt.title('Number of Incoming / Expiring / Converted Searches by Day', fontsize = 14)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Chart 2: KPIs Affecting Conversion - Categorical KPIs",
"_____no_output_____"
],
[
"Categorical variables that turned out to have an impact on conversion are worth following daily. As I sugegsted for the 1st chart, it makes more sense to compare these with prior year same period figures.\nIn this chart we follow the % of people who\n* look for a round trip\n* look for a weekend trip\n* look for a short layover\n* have an ongoing watch\n* have an received a buy suggestion\n\n<br>\nNote that these are all categories that help estimate conversion",
"_____no_output_____"
]
],
[
[
"df_chart_perc1 = hoppi5.loc[hoppi5['departure_date'] >= '04-10-2018'].describe() # describe() gives the mean per vcategory. \n# As they were binary, it gives the %\ndf_chart_perc2 = df_chart_perc1.loc[['mean'], ['round_trip', 'weekend', 'filter_short_layover', 'watch_bin', 'buy']]\ndf_chart_perc2 = df_chart_perc2.transpose().reset_index() # transpose to make it convenient for seaborn notation\ndf_chart_perc2['mean'] = df_chart_perc2['mean'] * 100 # percentages in absolute numbers\ndf_chart_perc2.rename(columns = {'mean':'percentage'}, inplace=True)",
"_____no_output_____"
],
[
"sns.set_style('white')\nfig, ax = plt.subplots(figsize=(15,8))\nsns.barplot(x=\"index\",\n y=\"percentage\", \n palette=[\"#FA6866\", \"#01AAE4\", \"#505050\", \"#AAAAAA\", \"#F67096\"],\n data=df_chart_perc2,\n ax=ax)\nax.set(xlabel='Trip Categories', ylabel='% of Qualified Trips')\nplt.title(\"Percentage of Trips\",fontsize=14)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Chart 3: KPIs Affecting Conversion - Ordinal KPIs",
"_____no_output_____"
],
[
"In a similar vein to the Chart2, I look at KPIs having an impact on conversion here as well. This time I check ordinal variables. Again, it would make more sense to compare with prior year same period figures.\nIn this chart we follow \n* average difference between the lowest price found and the first price\n* average difference between the last price and the first price\n* average difference between the price when a buy recommendation was made and the first price\n* average number of days between the first search and the departure date\n\n<br>\nNote that these are all categories that help estimate conversion as well.",
"_____no_output_____"
]
],
[
[
"df_chart_abs1 = hoppi5.loc[hoppi5['departure_date'] >= '04-10-2018'].describe()\ndf_chart_abs2 = df_chart_abs1.loc[['mean'], ['dif_lowest_first',\n 'dif_last_first', 'dif_buy_first',\n 'first_advance']]\ndf_chart_abs3 = df_chart_abs2.transpose().reset_index()\ndf_chart_abs3.rename(columns = {'mean':'average'}, inplace=True)",
"_____no_output_____"
],
[
"fig, (ax1, ax2) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [3, 1]}, figsize=(15,8))\n\nsns.barplot(x=df_chart_abs3.loc[df_chart_abs3['index'] != 'first_advance']['index'], \n y=df_chart_abs3.loc[df_chart_abs3['index'] != 'first_advance']['average'],\n palette=[\"#FA6866\", \"#01AAE4\", \"#505050\"],\n ax=ax1)\nsns.barplot(x=df_chart_abs3.loc[df_chart_abs3['index'] == 'first_advance']['index'], \n y=df_chart_abs3.loc[df_chart_abs3['index'] == 'first_advance']['average'],\n color='#AAAAAA',\n ax=ax2)\nax1.set(xlabel='KPIs', ylabel='Average Difference in Prices ($)')\nax2.set(ylabel='Average Number of Days')\nax1.set_title('Trips by Absolute Numbers', fontsize = 14)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Question 4",
"_____no_output_____"
],
[
"* I should have the history on every trip - user. That would enable insight in 2 main fields:\n * Communication Strategy: Recording of every update related to a user - trip allows:\n * to understand after how many notifications...\n * a booking takes place\n * a watch is canceled\n * Year on Year (YoY) comparisons: A record with active status now could have been inactive at a certain point of time as the user can enable / disable watch at different times. As a result, a YoY comparison of records in terms of their statuses is not possible with the information in hand\n* I would also like to have this data for a longer period to identify returning customers, frequent flyers for whom I could develop new features",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e76eb4c36fbf10ee1cd7ccbf4f2c1c4bd8ed65f8 | 347,766 | ipynb | Jupyter Notebook | Mini capstone projects/Ad click prediction_Logistic Regression.ipynb | sungsujaing/DataScience_MachineLearning_Portfolio | f8c19d3166c65a0ef87fcceb4c53862310a02901 | [
"MIT"
] | 6 | 2019-09-30T22:57:48.000Z | 2020-11-30T18:04:19.000Z | Mini capstone projects/Ad click prediction_Logistic Regression.ipynb | sungsujaing/DataScience_MachineLearning_Portfolio | f8c19d3166c65a0ef87fcceb4c53862310a02901 | [
"MIT"
] | 25 | 2019-11-03T20:28:28.000Z | 2022-03-11T23:50:14.000Z | Mini capstone projects/Ad click prediction_Logistic Regression.ipynb | sungsujaing/Artificial_Intelligence_Data_Science_Portfolio | f8c19d3166c65a0ef87fcceb4c53862310a02901 | [
"MIT"
] | 1 | 2019-05-10T21:44:37.000Z | 2019-05-10T21:44:37.000Z | 414.995227 | 314,408 | 0.918799 | [
[
[
"# Background \n\nThis project deals with artificial advertising data set, indicating whether or not a particular internet user clicked on an Advertisement. This dataset can be explored to train a model that can predict whether or not the new users will click on an ad based on their various low-level features.\n\nThis data set contains the following features:\n\n* 'Daily Time Spent on Site': consumer time on site in minutes\n* 'Age': cutomer age in years\n* 'Area Income': Avg. Income of geographical area of consumer\n* 'Daily Internet Usage': Avg. minutes a day consumer is on the internet\n* 'Ad Topic Line': Headline of the advertisement\n* 'City': City of consumer\n* 'Male': Whether or not consumer was male\n* 'Country': Country of consumer\n* 'Timestamp': Time at which consumer clicked on Ad or closed window\n* 'Clicked on Ad': 0 or 1 indicated clicking on Ad\n\n# Dataset overview",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\nsns.set_style('white')",
"_____no_output_____"
],
[
"df_ad = pd.read_csv('Data/advertising.csv')",
"_____no_output_____"
],
[
"df_ad.head(3)",
"_____no_output_____"
],
[
"df_ad.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1000 entries, 0 to 999\nData columns (total 10 columns):\nDaily Time Spent on Site 1000 non-null float64\nAge 1000 non-null int64\nArea Income 1000 non-null float64\nDaily Internet Usage 1000 non-null float64\nAd Topic Line 1000 non-null object\nCity 1000 non-null object\nMale 1000 non-null int64\nCountry 1000 non-null object\nTimestamp 1000 non-null object\nClicked on Ad 1000 non-null int64\ndtypes: float64(3), int64(3), object(4)\nmemory usage: 78.2+ KB\n"
],
[
"df_ad.isnull().any()",
"_____no_output_____"
],
[
"df_ad.describe()",
"_____no_output_____"
]
],
[
[
"# EDA\n#### Age distribution of the dataset",
"_____no_output_____"
]
],
[
[
"sns.set_context('notebook',font_scale=1.5)\nsns.distplot(df_ad.Age,bins=30,kde=False,color='red')\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### pairplot of dataset defined by `Clicked on Ad`",
"_____no_output_____"
]
],
[
[
"import warnings\nwarnings.filterwarnings('ignore') #### since the target variable is numeric, the joint plot by the target variable generates the warning.",
"_____no_output_____"
],
[
"sns.pairplot(df_ad,hue='Clicked on Ad')",
"_____no_output_____"
]
],
[
[
"# Model training: Basic Logistic Regression",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"X = df_ad[['Daily Time Spent on Site', 'Age', 'Area Income',\n 'Daily Internet Usage', 'Male']]\ny = df_ad['Clicked on Ad']\nX_train, X_test, y_train, y_test = train_test_split(X,y,random_state=100)",
"_____no_output_____"
]
],
[
[
"#### training",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression",
"_____no_output_____"
],
[
"lr = LogisticRegression().fit(X_train,y_train)",
"_____no_output_____"
]
],
[
[
"#### Predictions and Evaluations",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import classification_report,confusion_matrix",
"_____no_output_____"
],
[
"y_predict = lr.predict(X_test)",
"_____no_output_____"
],
[
"pd.DataFrame(confusion_matrix(y_test,y_predict),index=['True 0','True 1'],\n columns=['Predicted 0','Predicted 1'])",
"_____no_output_____"
],
[
"print(classification_report(y_test,y_predict))",
" precision recall f1-score support\n\n 0 0.86 0.92 0.89 119\n 1 0.93 0.86 0.89 131\n\n micro avg 0.89 0.89 0.89 250\n macro avg 0.89 0.89 0.89 250\nweighted avg 0.89 0.89 0.89 250\n\n"
]
],
[
[
"# Model training: Optimized Logistic Regression",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import StandardScaler\nfrom sklearn.model_selection import GridSearchCV\n\nscaler = StandardScaler().fit(X_train)\nX_train_scaled = scaler.transform(X_train)\nx_test_scaled = scaler.transform(X_test)",
"_____no_output_____"
]
],
[
[
"#### 3-fold CV grid search",
"_____no_output_____"
]
],
[
[
"grid_param = {'C':[0.01,0.03,0.1,0.3,1,3,10]}\ngrid_lr = GridSearchCV(LogisticRegression(),grid_param,cv=3).fit(X_train_scaled,y_train)",
"_____no_output_____"
],
[
"print('best regularization parameter: {}'.format(grid_lr.best_params_))\nprint('best CV score: {}'.format(grid_lr.best_score_.round(3)))",
"best regularization parameter: {'C': 0.3}\nbest CV score: 0.971\n"
]
],
[
[
"#### Predictions and Evaluations",
"_____no_output_____"
]
],
[
[
"y_predict_2 = grid_lr.predict(x_test_scaled)",
"_____no_output_____"
],
[
"pd.DataFrame(confusion_matrix(y_test,y_predict_2),index=['True 0','True 1'],\n columns=['Predicted 0','Predicted 1'])",
"_____no_output_____"
],
[
"print(classification_report(y_test,y_predict_2))",
" precision recall f1-score support\n\n 0 0.94 1.00 0.97 119\n 1 1.00 0.94 0.97 131\n\n micro avg 0.97 0.97 0.97 250\n macro avg 0.97 0.97 0.97 250\nweighted avg 0.97 0.97 0.97 250\n\n"
]
],
[
[
"A simple logistic regression without any tuning effort shows high classification performance. With standarization and the tuned 'C' parameter, the performance of the same logistic regression model could be improved by much to ~0.97 f1-score. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
e76ebb31809b646d25b65e2c7e3349c3c9d9b142 | 344,600 | ipynb | Jupyter Notebook | example_theory.ipynb | abbyw24/Corrfunc | d9c821bcebd7225cf43ec9e09dfe817387c73f62 | [
"MIT"
] | null | null | null | example_theory.ipynb | abbyw24/Corrfunc | d9c821bcebd7225cf43ec9e09dfe817387c73f62 | [
"MIT"
] | null | null | null | example_theory.ipynb | abbyw24/Corrfunc | d9c821bcebd7225cf43ec9e09dfe817387c73f62 | [
"MIT"
] | null | null | null | 403.985932 | 97,012 | 0.938543 | [
[
[
"import os\nimport time\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom matplotlib import pylab\n%config InlineBackend.figure_format = 'retina'\nmatplotlib.rcParams['figure.dpi'] = 80\ntextsize = 'x-large'\nparams = {'legend.fontsize': 'x-large',\n 'figure.figsize': (10, 8),\n 'axes.labelsize': textsize,\n 'axes.titlesize': textsize,\n 'xtick.labelsize': textsize,\n 'ytick.labelsize': textsize}\npylab.rcParams.update(params)\nplt.ion()\n\nimport Corrfunc\nfrom Corrfunc.io import read_lognormal_catalog\nfrom Corrfunc.theory.DDsmu import DDsmu\nfrom Corrfunc.theory.DD import DD\nfrom Corrfunc.theory.xi import xi\nfrom Corrfunc.utils import compute_amps\nfrom Corrfunc.utils import evaluate_xi\nfrom Corrfunc.utils import qq_analytic\nfrom Corrfunc.bases import spline\n\n%load_ext autoreload\n%autoreload 2",
"/Users/ksf/code/nyu/research/Corrfunc/Corrfunc/__init__.py\n"
]
],
[
[
"# Demo Notebook: The Continuous-Function Estimator\n## Tophat and Spline bases on a periodic box\n\nHello! In this notebook we'll show you how to use the continuous-function estimator to estimate the 2-point correlation function (2pcf) with a method that produces, well, continuous correlation functions.",
"_____no_output_____"
],
[
"## Load in data",
"_____no_output_____"
],
[
"We'll demonstrate with a low-density lognormal simulation box, which we've included with the code. We'll show here the box with 3e-4 ($h^{-1}$Mpc)$^{-3}$, but if you're only running with a single thread, you will want to run this notebook with the 1e-4 ($h^{-1}$Mpc)$^{-3}$ box for speed. (The code is extremely parallel, so when you're running for real, you'll definitely want to bump up the number of threads.)",
"_____no_output_____"
]
],
[
[
"x, y, z = read_lognormal_catalog(n='3e-4')\nboxsize = 750.0\nnd = len(x)\nprint(\"Number of data points:\",nd)",
"Number of data points: 125342\n"
]
],
[
[
"We'll also want a random catalog, that's a bit bigger than our data:",
"_____no_output_____"
]
],
[
[
"nr = 3*nd\nx_rand = np.random.uniform(0, boxsize, nr)\ny_rand = np.random.uniform(0, boxsize, nr)\nz_rand = np.random.uniform(0, boxsize, nr)\nprint(\"Number of random points:\",nr)",
"Number of random points: 376026\n"
],
[
"print(x)\nprint(x_rand)",
"[1.13136184e+00 4.30035293e-01 2.08324015e-01 ... 7.49666077e+02\n 7.49922791e+02 7.49938477e+02]\n[567.62600303 166.85340522 461.79238824 ... 577.65066275 9.85155819\n 581.1525008 ]\n"
]
],
[
[
"Let's first compute the regular correlation function. We'll need some radial bins. We'll also need to tell Corrfunc that we're working with a periodic box, and the number of parallel threads. Then we can go ahead and compute the real-space correlation function xi(r) from the pair counts, DD(r) (documentation here: https://corrfunc.readthedocs.io/en/master/api/Corrfunc.theory.html)",
"_____no_output_____"
]
],
[
[
"rmin = 40.0\nrmax = 150.0\nnbins = 22\nr_edges = np.linspace(rmin, rmax, nbins+1)\nr_avg = 0.5*(r_edges[1:]+r_edges[:-1])\n\nperiodic = True\nnthreads = 1",
"_____no_output_____"
],
[
"dd_res = DD(1, nthreads, r_edges, x, y, z, boxsize=boxsize, periodic=periodic)\ndr_res = DD(0, nthreads, r_edges, x, y, z, X2=x_rand, Y2=y_rand, Z2=z_rand, boxsize=boxsize, periodic=periodic)\nrr_res = DD(1, nthreads, r_edges, x_rand, y_rand, z_rand, boxsize=boxsize, periodic=periodic)",
"_____no_output_____"
]
],
[
[
"We can use these pair counts to compute the Landy-Szalay 2pcf estimator (Landy & Szalay 1993). Let's define a function, as we'll want to reuse this:",
"_____no_output_____"
]
],
[
[
"def landy_szalay(nd, nr, dd, dr, rr):\n # Normalize the pair counts\n dd = dd/(nd*nd)\n dr = dr/(nd*nr)\n rr = rr/(nr*nr)\n xi_ls = (dd-2*dr+rr)/rr\n return xi_ls",
"_____no_output_____"
]
],
[
[
"Let's unpack the pair counts from the Corrfunc results object, and plot the resulting correlation function: \n\n(Note that if you use weights, you need to multiply by the 'weightavg' column.)",
"_____no_output_____"
]
],
[
[
"dd = np.array([x['npairs'] for x in dd_res], dtype=float)\ndr = np.array([x['npairs'] for x in dr_res], dtype=float)\nrr = np.array([x['npairs'] for x in rr_res], dtype=float)\nxi_ls = landy_szalay(nd, nr, dd, dr, rr)",
"_____no_output_____"
],
[
"plt.figure(figsize=(8,5))\nplt.plot(r_avg, xi_ls, marker='o', ls='None', color='grey', label='Standard estimator')\nplt.xlabel(r'r ($h^{-1}$Mpc)')\nplt.ylabel(r'$\\xi$(r)')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"Great, we can even see the baryon acoustic feauture at ~100 $h^{-1}$Mpc!",
"_____no_output_____"
],
[
"## Continuous-function estimator: Tophat basis",
"_____no_output_____"
],
[
"Now we'll use the continuous-function estimator to compute the same correlation function, but in a continuous representation. First we'll use a tophat basis, to achieve the equivalent (but more correct!) result.\n\nWe need to give the name of the basis as 'proj_type'. We also need to choose the number of components, 'nprojbins'. In this case, we want the components to be a tophat for each bin, so this will just be 'nbins'. ",
"_____no_output_____"
]
],
[
[
"proj_type = 'tophat'\nnprojbins = nbins",
"_____no_output_____"
]
],
[
[
"Currently the continuous-function estimator is only implemented in DD(s,mu) ('DDsmu'), the redshift-space correlation function which divides the transverse direction s from the line-of-sight direction mu. But we can simply set the number of mu bins to 1, and mumax to 1 (the max of cosine), to achieve the equivalent of DD in real space.",
"_____no_output_____"
]
],
[
[
"nmubins = 1\nmumax = 1.0",
"_____no_output_____"
]
],
[
[
"Then we just need to give Corrfunc all this info, and unpack the continuous results! The first returned object is still the regular Corrfunc results object (we could have just used this in our above demo of the standard result).",
"_____no_output_____"
]
],
[
[
"dd_res, dd_proj, _ = DDsmu(1, nthreads, r_edges, mumax, nmubins, x, y, z, \n boxsize=boxsize, periodic=periodic, proj_type=proj_type, nprojbins=nprojbins)",
"_____no_output_____"
],
[
"dr_res, dr_proj, _ = DDsmu(0, nthreads, r_edges, mumax, nmubins, x, y, z, X2=x_rand, Y2=y_rand, Z2=z_rand,\n boxsize=boxsize, periodic=periodic, proj_type=proj_type, nprojbins=nprojbins)",
"_____no_output_____"
],
[
"rr_res, rr_proj, qq_proj = DDsmu(1, nthreads, r_edges, mumax, nmubins, x_rand, y_rand, z_rand,\n boxsize=boxsize, periodic=periodic, proj_type=proj_type, nprojbins=nprojbins)",
"_____no_output_____"
]
],
[
[
"We can now compute the amplitudes of the correlation function from these continuous pair counts. The compute_amps function uses the Landy-Szalay formulation of the estimator, but adapted for continuous bases. (Note that you have to pass some values twice, as this is flexible enough to translate to cross-correlations between two datasets and two random catalogs.)",
"_____no_output_____"
]
],
[
[
"amps = compute_amps(nprojbins, nd, nd, nr, nr, dd_proj, dr_proj, dr_proj, rr_proj, qq_proj)",
"Computing amplitudes (Corrfunc/utils.py)\n"
]
],
[
[
"With these amplitudes, we can evaluate our correlation function at any set of radial separations! Let's make a fine-grained array and evaluate. We need to pass 'nprojbins' and 'proj_type'. Because we will be evaluating our tophat function at the new separations, we also need to give it the original bins.",
"_____no_output_____"
]
],
[
[
"r_fine = np.linspace(rmin, rmax, 2000)",
"_____no_output_____"
],
[
"xi_proj = evaluate_xi(nprojbins, amps, len(r_fine), r_fine, nbins, r_edges, proj_type)",
"Evaluating xi (Corrfunc/utils.py)\n"
]
],
[
[
"Let's check out the results, compared with the standard estimator!",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(8,5))\nplt.plot(r_fine, xi_proj, color='steelblue', label='Tophat estimator')\nplt.plot(r_avg, xi_ls, marker='o', ls='None', color='grey', label='Standard estimator')\nplt.xlabel(r'r ($h^{-1}$Mpc)')\nplt.ylabel(r'$\\xi$(r)')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"We can see that we're getting \"the same\" result, but continously, with the hard bin edges made clear.",
"_____no_output_____"
],
[
"### Analytically computing the random term",
"_____no_output_____"
],
[
"Because we're working with a periodic box, we don't actually need a random catalog. We can analytically compute the RR term, as well as the QQ matrix.\n\nWe'll need the volume of the box, and the same info about our basis function as before:",
"_____no_output_____"
]
],
[
[
"volume = boxsize**3\nrr_ana, qq_ana = qq_analytic(rmin, rmax, nd, volume, nprojbins, nbins, r_edges, proj_type)",
"Evaluating qq_analytic (Corrfunc/utils.py)\n"
]
],
[
[
"We also don't need to use the Landy-Szalay estimator (we don't have a DR term!). To get the amplitudes we can just use the naive estimator, $\\frac{\\text{DD}}{\\text{RR}}-1$. In our formulation, the RR term in the demoninator becomes the inverse QQ term, so we have QQ$^{-1}$ $\\cdot$ (DD-RR). ",
"_____no_output_____"
]
],
[
[
"numerator = dd_proj - rr_ana\namps_ana, *_ = np.linalg.lstsq(qq_ana, numerator, rcond=None) # Use linalg.lstsq instead of actually computing inverse!",
"_____no_output_____"
]
],
[
[
"Now we can go ahead and evaluate the correlation function at our fine separations.",
"_____no_output_____"
]
],
[
[
"xi_ana = evaluate_xi(nbins, amps_ana, len(r_fine), r_fine, nbins, r_edges, proj_type)",
"Evaluating xi (Corrfunc/utils.py)\n"
]
],
[
[
"We'll compare this to computing the analytic correlation function with standard Corrfunc:",
"_____no_output_____"
]
],
[
[
"xi_res = Corrfunc.theory.xi(boxsize, nthreads, r_edges, x, y, z)\nxi_theory = np.array([x['xi'] for x in xi_res], dtype=float)",
"_____no_output_____"
],
[
"plt.figure(figsize=(8,5))\nplt.plot(r_fine, xi_ana, color='blue', label='Tophat basis')\nplt.plot(r_avg, xi_theory, marker='o', ls='None', color='grey', label='Standard Estimator')\nplt.xlabel(r'r ($h^{-1}$Mpc)')\nplt.ylabel(r'$\\xi$(r)')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"Once again, the standard and continuous correlation functions line up exactly. The correlation function looks smoother, as we didn't have to deal with a non-exact random catalog to estimate the window function.",
"_____no_output_____"
],
[
"## Continuous-function estimator: Cubic spline basis",
"_____no_output_____"
],
[
"Now we can make things more interesting! Let's choose a cubic spline basis. Luckily, this capability comes with the continuous-function version of Corrfunc!\n\nWe need to choose the parameters for our spline. Here, we'll choose a cubic spline. If we used a linear spline, we'd get a piecewise function; if we used a zeroth-order spline, we'd recover our tophat bases from above.\n\nWe'll take the min and max of the same separation values we used, and choose half the number of components as our previous bins (these will be related to the 'knots' in the spline). We'll also need the number of radial bins at which to evaluate our functions; the code will interpolate between these.\n\nThen we'll write our basis to a file. For any set of basis functions that is read from a file, 'proj_type' must be set to 'general_r'.",
"_____no_output_____"
]
],
[
[
"proj_type = 'generalr'\nkwargs = {'order': 3} # 3: cubic spline\nprojfn = 'quadratic_spline.dat'\nnprojbins = int(nbins/2)\nspline.write_bases(rmin, rmax, nprojbins, projfn, ncont=1000, **kwargs)",
"_____no_output_____"
]
],
[
[
"Let's check out the basis functions:",
"_____no_output_____"
]
],
[
[
"bases = np.loadtxt(projfn)\nbases.shape\nr = bases[:,0]\nplt.figure(figsize=(8,5))\nfor i in range(1, len(bases[0])):\n plt.plot(r, bases[:,i], color='red', alpha=0.5)\nplt.xlabel(r'r ($h^{-1}$Mpc)')",
"_____no_output_____"
]
],
[
[
"The bases on the ends are different so that they have the same normalization.",
"_____no_output_____"
],
[
"We'll use the analytic version of the estimator, making sure to pass the basis file:",
"_____no_output_____"
]
],
[
[
"dd_res_spline, dd_spline, _ = DDsmu(1, nthreads, r_edges, mumax, nmubins, x, y, z, \n boxsize=boxsize, periodic=periodic, proj_type=proj_type, nprojbins=nprojbins, projfn=projfn)",
"_____no_output_____"
],
[
"volume = boxsize**3\n# nbins and r_edges won't be used here because we passed projfn, but they're needed for compatibility. (TODO: fix!)\nrr_ana_spline, qq_ana_spline = qq_analytic(rmin, rmax, nd, volume, nprojbins, nbins, r_edges, proj_type, projfn=projfn)\n\nnumerator = dd_spline - rr_ana_spline\namps_ana_spline, *_ = np.linalg.lstsq(qq_ana_spline, numerator, rcond=None) # Use linalg.lstsq instead of actually computing inverse!\n\nxi_ana_spline = evaluate_xi(nprojbins, amps_ana_spline, len(r_fine), r_fine, nbins, r_edges, proj_type, projfn=projfn)",
"Evaluating qq_analytic (Corrfunc/utils.py)\nEvaluating xi (Corrfunc/utils.py)\n"
]
],
[
[
"Let's compare the results:",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(8,5))\nplt.plot(r_fine, xi_ana_spline, color='red', label='Cubic spline basis')\nplt.plot(r_fine, xi_ana, color='blue', label='Tophat basis')\nplt.plot(r_avg, xi_theory, marker='o', ls='None', color='grey', label='Standard estimator')\nplt.xlabel(r'r ($h^{-1}$Mpc)')\nplt.ylabel(r'$\\xi$(r)')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"We can see that the spline basis function produced a completely smooth correlation function; no hard-edged bins! It also captured that baryon acoustic feature (which we expect to be a smooth peak).\n\nThis basis function is a bit noisy and likely has some non-physical features - but so does the tophat / standard basis! In the next notebook, we'll use a physically motivated basis function.",
"_____no_output_____"
],
[
"Finally, Remember to clean up the basis function file:",
"_____no_output_____"
]
],
[
[
"os.remove(projfn)",
"_____no_output_____"
]
],
[
[
"The below ipython magic line will convert this notebook to a regular old python script.",
"_____no_output_____"
]
],
[
[
"#!jupyter nbconvert --to script example_theory.ipynb",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e76ebce42324c3e9ded4728c5d568ba459394028 | 460,720 | ipynb | Jupyter Notebook | data_inference_3.ipynb | eufmike/fibsem_seg_dl | 92be67027704e55cf5383fa0b7d5f65b0dd2f302 | [
"CC-BY-4.0"
] | null | null | null | data_inference_3.ipynb | eufmike/fibsem_seg_dl | 92be67027704e55cf5383fa0b7d5f65b0dd2f302 | [
"CC-BY-4.0"
] | null | null | null | data_inference_3.ipynb | eufmike/fibsem_seg_dl | 92be67027704e55cf5383fa0b7d5f65b0dd2f302 | [
"CC-BY-4.0"
] | null | null | null | 135.148137 | 30,096 | 0.827911 | [
[
[
"# Data Analysis - FIB-SEM Datasets\n* Goal: identify changes occurred across different time points",
"_____no_output_____"
]
],
[
[
"import os, sys, glob\nimport re\nimport numpy as np\nimport pandas as pd\nfrom scipy.stats import ttest_ind\nimport matplotlib.pyplot as plt\nimport pprint",
"_____no_output_____"
]
],
[
[
"## 01 Compile data into single .csv file for each label",
"_____no_output_____"
]
],
[
[
"mainpath = 'D:\\PerlmutterData'\nfolder = 'segmentation_compiled_export'\ndata_folder = 'data'\n\npath = os.path.join(mainpath, folder, data_folder)\nprint(path)\n\nfolders = ['cell_membrane', 'nucleus', 'mito', 'cristae', 'inclusion', 'ER']",
"D:\\PerlmutterData\\segmentation_compiled_export\\data\n"
],
[
"target_list = glob.glob(os.path.join(path, 'compile', '*.csv'))\ntarget_list = [os.path.basename(x) for x in target_list]\ntarget_list = [os.path.splitext(x)[0] for x in target_list]\nprint(target_list)",
"['cell_membrane', 'cristae', 'ER', 'inclusion', 'mito', 'nodes', 'nucleus', 'points', 'segments_s']\n"
],
[
"file_meta = {\n 'data_d00_batch01_loc01': 0,\n 'data_d00_batch02_loc02': 0,\n 'data_d00_batch02_loc03': 0,\n 'data_d07_batch01_loc01': 7, \n 'data_d07_batch02_loc01': 7,\n 'data_d07_batch02_loc02': 7, \n 'data_d14_batch01_loc01': 14, \n 'data_d17_batch01_loc01': 17,\n 'data_d21_batch01_loc01': 21,\n}\n",
"_____no_output_____"
],
[
"for i in folders:\n file_list = glob.glob(os.path.join(path, 'raw', i, '*.csv'))\n \n if not i in target_list:\n df = pd.DataFrame()\n\n for j in file_list: \n data_temp = pd.read_csv(j, header = 1)\n \n filename_tmp = os.path.basename(j)\n \n # add filename\n data_temp['filename'] = filename_tmp\n \n # add day\n filename_noext = os.path.splitext(filename_tmp)[0]\n pattern = re.compile(\"data_d[0-9][0-9]_batch[0-9][0-9]_loc[0-9][0-9]\")\n original_filename = pattern.search(filename_noext).group(0)\n day_tmp = file_meta[original_filename]\n \n data_temp['day'] = day_tmp\n \n df = df.append(data_temp, ignore_index = True)\n \n display(df)\n df.to_csv(os.path.join(path, 'compile', i + '.csv'))\n ",
"_____no_output_____"
]
],
[
[
"## 02 Load data",
"_____no_output_____"
],
[
"### 02-01 Calculate mean and tota|l volumn for mito, cristate, ER and inclusion",
"_____no_output_____"
]
],
[
[
"df_mito = pd.read_csv(os.path.join(path, 'compile', 'mito' + '.csv'))\ndf_mito['Volume3d_µm^3'] = df_mito['Volume3d']/1e9\ndf_mito['Area3d_µm^2'] = df_mito['Area3d']/1e6\n\ndf_mito_sum_grouped = df_mito.groupby(['day', 'filename']).sum().reset_index()\ndf_mito_mean_grouped = df_mito.groupby(['day', 'filename']).mean().reset_index()\n\ndf_cristae = pd.read_csv(os.path.join(path, 'compile', 'cristae' + '.csv'))\ndf_cristae['Volume3d_µm^3'] = df_cristae['Volume3d']/1e9\ndf_cristae['Area3d_µm^2'] = df_cristae['Area3d']/1e6\n\ndf_cristae_sum_grouped = df_cristae.groupby(['day', 'filename']).sum().reset_index()\ndf_cristae_mean_grouped = df_cristae.groupby(['day', 'filename']).mean().reset_index()\n\ndf_ER = pd.read_csv(os.path.join(path, 'compile', 'ER' + '.csv'))\ndf_ER['Volume3d_µm^3'] = df_ER['Volume3d']/1e9\ndf_ER['Area3d_µm^2'] = df_ER['Area3d']/1e6\n\ndf_ER_sum_grouped = df_ER.groupby(['day', 'filename']).sum().reset_index()\ndf_ER_mean_grouped = df_ER.groupby(['day', 'filename']).mean().reset_index()\n\ndf_inclusion = pd.read_csv(os.path.join(path, 'compile', 'inclusion' + '.csv'))\ndf_inclusion['Volume3d_µm^3'] = df_inclusion['Volume3d']/1e9\ndf_inclusion['Area3d_µm^2'] = df_inclusion['Area3d']/1e6\n\ndf_inclusion_sum_grouped = df_inclusion.groupby(['day', 'filename']).sum().reset_index()\ndf_inclusion_mean_grouped = df_inclusion.groupby(['day', 'filename']).mean().reset_index()",
"_____no_output_____"
]
],
[
[
"### 02-02 Calculate the total volume for cell membrane and nucleus",
"_____no_output_____"
]
],
[
[
"df_nucleus = pd.read_csv(os.path.join(path, 'compile', 'nucleus' + '.csv'))\ndf_nucleus['Volume3d_µm^3'] = df_nucleus['Volume3d']/1e9\ndf_nucleus['Area3d_µm^2'] = df_nucleus['Area3d']/1e6\ndf_nucleus_sum_grouped = df_nucleus.groupby(['day', 'filename']).sum().reset_index()",
"_____no_output_____"
],
[
"df_cell_membrane = pd.read_csv(os.path.join(path, 'compile', 'cell_membrane' + '.csv'))\ndf_cell_membrane['Volume3d_µm^3'] = df_cell_membrane['Volume3d']/1e9\ndf_cell_membrane['Area3d_µm^2'] = df_cell_membrane['Area3d']/1e6\ndf_cell_membrane_sum_grouped = df_cell_membrane.groupby(['day', 'filename']).sum().reset_index()",
"_____no_output_____"
],
[
"df_cell_membrane_sum_grouped",
"_____no_output_____"
]
],
[
[
"### 02-03 Calculate the volume of cytoplasm",
"_____no_output_____"
]
],
[
[
"df_cyto = pd.DataFrame()\ndf_cyto['filename'] = df_cell_membrane_sum_grouped['filename']\ndf_cyto['Volume3d_µm^3'] = df_cell_membrane_sum_grouped['Volume3d_µm^3'] - df_nucleus_sum_grouped['Volume3d_µm^3']\ndisplay(df_cyto)",
"_____no_output_____"
]
],
[
[
"### 02-03 Omit unhealthy data or data with poor quality ",
"_____no_output_____"
]
],
[
[
"omit_data = ['data_d00_batch02_loc02', \n 'data_d17_batch01_loc01_01', \n 'data_d17_batch01_loc01_02']\nfor omit in omit_data: \n df_mito = df_mito.loc[df_mito['filename']!= omit+ '_mito.csv']\n df_mito_sum_grouped = df_mito_sum_grouped.loc[df_mito_sum_grouped['filename']!=omit+ '_mito.csv']\n df_mito_mean_grouped = df_mito_mean_grouped.loc[df_mito_mean_grouped['filename']!=omit+ '_mito.csv']\n df_cristae = df_cristae.loc[df_cristae['filename']!=omit+ '_cristae.csv']\n df_cristae_sum_grouped = df_cristae_sum_grouped.loc[df_cristae_sum_grouped['filename']!=omit+ '_cristae.csv']\n df_cristae_mean_grouped = df_cristae_mean_grouped.loc[df_cristae_mean_grouped['filename']!=omit+ '_cristae.csv']\n df_ER = df_ER.loc[df_ER['filename']!=omit+ '_ER.csv']\n df_ER_sum_grouped = df_ER_sum_grouped.loc[df_ER_sum_grouped['filename']!=omit+ '_ER.csv']\n df_ER_mean_grouped = df_ER_mean_grouped.loc[df_ER_mean_grouped['filename']!=omit+ '_ER.csv']\n df_inclusion = df_inclusion.loc[df_inclusion['filename']!=omit+'_inclusion.csv']\n df_inclusion_sum_grouped = df_inclusion_sum_grouped.loc[df_inclusion_sum_grouped['filename']!=omit+'_inclusion.csv']\n df_inclusion_mean_grouped = df_inclusion_mean_grouped.loc[df_inclusion_mean_grouped['filename']!=omit+'_inclusion.csv']\n df_nucleus = df_nucleus.loc[df_nucleus['filename']!=omit+'_nucleus.csv']\n df_nucleus_sum_grouped = df_nucleus_sum_grouped.loc[df_nucleus_sum_grouped['filename']!=omit+'_nucleus.csv']\n df_cell_membrane = df_cell_membrane.loc[df_cell_membrane['filename']!=omit+'_cell_membrane.csv']\n df_cell_membrane_sum_grouped = df_cell_membrane_sum_grouped.loc[df_cell_membrane_sum_grouped['filename']!=omit+'_cell_membrane.csv']\n df_cyto = df_cyto.loc[df_cyto['filename']!=omit+'_cell_membrane.csv']\n",
"_____no_output_____"
],
[
"df_mito = df_mito.reset_index(drop=True)\ndf_mito_sum_grouped = df_mito_sum_grouped.reset_index(drop=True)\ndf_mito_mean_grouped = df_mito_mean_grouped.reset_index(drop=True)\ndf_cristae = df_cristae.reset_index(drop=True)\ndf_cristae_sum_grouped = df_cristae_sum_grouped.reset_index(drop=True)\ndf_cristae_mean_grouped = df_cristae_mean_grouped.reset_index(drop=True)\ndf_ER = df_ER.reset_index(drop=True)\ndf_ER_sum_grouped = df_ER_sum_grouped.reset_index(drop=True)\ndf_ER_mean_grouped = df_ER_mean_grouped.reset_index(drop=True)\ndf_inclusion = df_inclusion.reset_index(drop=True)\ndf_inclusion_sum_grouped = df_inclusion_sum_grouped.reset_index(drop=True)\ndf_inclusion_mean_grouped = df_inclusion_mean_grouped.reset_index(drop=True)\ndf_nucleus = df_nucleus.reset_index(drop=True)\ndf_nucleus_sum_grouped = df_nucleus_sum_grouped.reset_index(drop=True)\ndf_cell_membrane = df_cell_membrane.reset_index(drop=True)\ndf_cell_membrane_sum_grouped = df_cell_membrane_sum_grouped.reset_index(drop=True)\ndf_cyto = df_cyto.reset_index(drop=True)",
"_____no_output_____"
],
[
"df_mito.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'mito.csv'))\ndf_mito_sum_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'mito_sum_volume.csv'))\ndf_mito_mean_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'mito_mean_volume.csv'))\n\ndf_cristae.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'cristae.csv'))\ndf_cristae_sum_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'cristae_sum_volume.csv'))\ndf_cristae_mean_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'cristae_mean_volume.csv'))\n\ndf_ER.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'ER.csv'))\ndf_ER_sum_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'ER_sum_volume.csv'))\ndf_ER_mean_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'ER_mean_volume.csv'))\n\ndf_inclusion.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'inclusion.csv'))\ndf_inclusion_sum_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'inclusion_sum_volume.csv'))\ndf_inclusion_mean_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'inclusion_mean_volume.csv'))\n\ndf_nucleus.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'nucleus.csv'))\ndf_nucleus_sum_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'nucleus_sum_volume.csv'))\n\ndf_cell_membrane.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'cell_membrane_volume.csv'))\ndf_cell_membrane_sum_grouped.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'cell_membrane_sum_volume.csv'))\ndf_cyto.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'cytoplasm_sum_volume.csv'))",
"_____no_output_____"
],
[
"df_mito_sum_grouped",
"_____no_output_____"
]
],
[
[
"### 02-04 Compile total volume of mito, cristate, ER and inclusion into one table\n1. raw value\n2. normalized by the total volume of cytoplasm",
"_____no_output_____"
]
],
[
[
"df_sum_compiled = pd.DataFrame()\ndf_sum_compiled[['filename', 'day']] = df_cell_membrane_sum_grouped[['filename', 'day']]\ndf_sum_compiled['day'] = df_sum_compiled['day'].astype('int8')\ndf_sum_compiled[['mito_Volume3d_µm^3', 'mito_Area3d_µm^2']] = df_mito_sum_grouped[['Volume3d_µm^3', 'Area3d_µm^2']]\ndf_sum_compiled[['cristae_Volume3d_µm^3', 'cristae_Area3d_µm^2']] = df_cristae_sum_grouped[['Volume3d_µm^3', 'Area3d_µm^2']]\ndf_sum_compiled[['ER_Volume3d_µm^3', 'ER_Area3d_µm^2']] = df_ER_sum_grouped[['Volume3d_µm^3', 'Area3d_µm^2']]\n\ndf_inclusion_sum_tmp = df_inclusion_sum_grouped[['Volume3d_µm^3', 'Area3d_µm^2']]\ndf_inclusion_sum_fill = pd.DataFrame([[0, 0]], columns = ['Volume3d_µm^3', 'Area3d_µm^2'])\ndf_inclusion_sum_tmp = df_inclusion_sum_fill.append(df_inclusion_sum_tmp, ignore_index = True)\n\ndf_sum_compiled[['inclusion_Volume3d_µm^3', 'inclusion_Area3d_µm^2']] = df_inclusion_sum_tmp\ndf_sum_compiled",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))\nidx = 0\nfor i in range(2):\n for j in range(2):\n ax[i, j].bar(df_sum_compiled.index, \n df_sum_compiled.iloc[:, idx +2], \n tick_label=['0', '0', '7', '7', '7', '14', '21'])\n ax[i, j].set_title(df_sum_compiled.columns[idx+2])\n ax[i, j].set_xlabel('Day')\n idx += 1\n # ax[i].set_ylabel('Total Volume ($µm^3$)')\n \nfig.tight_layout(pad=3.0)\n\n\nmainpath = 'D:\\PerlmutterData'\nfolder = 'segmentation_compiled_export'\ndata_folder = 'data'\n\nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'mito_cristae_totoal_volume_area.png'))\nplt.show()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))\nidx = 4\nfor i in range(2):\n for j in range(2):\n ax[i, j].bar(df_sum_compiled.index, \n df_sum_compiled.iloc[:, idx +2], \n tick_label=['0', '0', '7', '7', '7', '14', '21'])\n ax[i, j].set_title(df_sum_compiled.columns[idx+2])\n ax[i, j].set_xlabel('Day')\n idx += 1\n # ax[i].set_ylabel('Total Volume ($µm^3$)')\n \nfig.tight_layout(pad=3.0)\n\nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'ER_inclusion_totoal_volume_area.png'))\nplt.show()",
"_____no_output_____"
],
[
"df_sum_compiled_normalized = pd.DataFrame()\ndf_sum_compiled_normalized[['filename', 'day']] = df_cell_membrane_sum_grouped[['filename', 'day']]\ncal_tmp = df_sum_compiled.iloc[:, 2:].div(df_cyto['Volume3d_µm^3'], axis=0)\ndf_sum_compiled_normalized = pd.concat([df_sum_compiled_normalized, cal_tmp], axis=1)\ndf_sum_compiled_normalized",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))\nidx = 0\nfor i in range(2):\n for j in range(2):\n ax[i, j].bar(df_sum_compiled_normalized.index, \n df_sum_compiled_normalized.iloc[:, idx +2], \n tick_label=['0', '0', '7', '7', '7', '14', '21'])\n ax[i, j].set_title(df_sum_compiled_normalized.columns[idx+2])\n ax[i, j].set_xlabel('Day')\n idx += 1\n # ax[i].set_ylabel('Total Volume ($µm^3$)')\n \nfig.tight_layout(pad=3.0)\n\nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'mito_cristae_normalized_totoal_volume_area.png'))\nplt.show()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))\nidx = 4\nfor i in range(2):\n for j in range(2):\n ax[i, j].bar(df_sum_compiled_normalized.index, \n df_sum_compiled_normalized.iloc[:, idx +2], \n tick_label=['0', '0', '7', '7', '7', '14', '21'])\n ax[i, j].set_title(df_sum_compiled_normalized.columns[idx+2])\n ax[i, j].set_xlabel('Day')\n idx += 1\n # ax[i].set_ylabel('Total Volume ($µm^3$)')\n \nfig.tight_layout(pad=3.0)\n\nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'ER_inclusion_normalized_totoal_volume_area.png'))\nplt.show()",
"_____no_output_____"
]
],
[
[
"### 02-05 Compile mean volume of mito, cristate, ER and inclusion into one table\n1. raw value\n2. normalized by the total volume of cytoplasm",
"_____no_output_____"
]
],
[
[
"df_mean_compiled = pd.DataFrame()\ndf_mean_compiled[['filename', 'day']] = df_cell_membrane_sum_grouped[['filename', 'day']]\ndf_mean_compiled['day'] = df_mean_compiled['day'].astype('int8')\ndf_mean_compiled[['mito_Volume3d_µm^3', 'mito_Area3d_µm^2']] = df_mito_mean_grouped[['Volume3d_µm^3', 'Area3d_µm^2']]\ndf_mean_compiled[['cristae_Volume3d_µm^3', 'cristae_Area3d_µm^2']] = df_cristae_mean_grouped[['Volume3d_µm^3', 'Area3d_µm^2']]\ndf_mean_compiled[['ER_Volume3d_µm^3', 'ER_Area3d_µm^2']] = df_ER_mean_grouped[['Volume3d_µm^3', 'Area3d_µm^2']]\n\ndf_inclusion_mean_tmp = df_inclusion_mean_grouped[['Volume3d_µm^3', 'Area3d_µm^2']]\ndf_inclusion_mean_fill = pd.DataFrame([[0, 0]], columns = ['Volume3d_µm^3', 'Area3d_µm^2'])\ndf_inclusion_mean_tmp = df_inclusion_mean_fill.append(df_inclusion_mean_tmp, ignore_index = True)\n\ndf_mean_compiled[['inclusion_Volume3d_µm^3', 'inclusion_Area3d_µm^2']] = df_inclusion_mean_tmp\ndf_mean_compiled",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))\nidx = 0\nfor i in range(2):\n for j in range(2):\n ax[i, j].bar(df_mean_compiled.index, \n df_mean_compiled.iloc[:, idx +2], \n tick_label=['0', '0', '7', '7', '7', '14', '21'])\n ax[i, j].set_title(df_mean_compiled.columns[idx+2])\n ax[i, j].set_xlabel('Day')\n idx += 1\n # ax[i].set_ylabel('Total Volume ($µm^3$)')\n \nfig.tight_layout(pad=3.0)\n\nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'mito_cristae_mean_volume_area.png'))\nplt.show()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))\nidx = 4\nfor i in range(2):\n for j in range(2):\n ax[i, j].bar(df_mean_compiled.index, \n df_mean_compiled.iloc[:, idx +2], \n tick_label=['0', '0', '7', '7', '7', '14', '21'])\n ax[i, j].set_title(df_mean_compiled.columns[idx+2])\n ax[i, j].set_xlabel('Day')\n idx += 1\n # ax[i].set_ylabel('Total Volume ($µm^3$)')\n \nfig.tight_layout(pad=3.0)\n\nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'ER_inclusion_mean_volume_area.png'))\nplt.show()",
"_____no_output_____"
],
[
"'''\ndf_mean_compiled_normalized = pd.DataFrame()\ndf_mean_compiled_normalized[['filename', 'day']] = df_cell_membrane_sum_grouped[['filename', 'day']]\ncal_tmp = df_mean_compiled.iloc[:, 2:].div(df_cyto['Volume3d_µm^3'], axis=0)\ndf_mean_compiled_normalized = pd.concat([df_mean_compiled_normalized, cal_tmp], axis=1)\ndf_mean_compiled_normalized\n'''",
"_____no_output_____"
],
[
"'''\nfig, ax = plt.subplots(nrows=8, ncols=1, figsize=(5, 30))\n\nfor i in range(8):\n ax[i].bar(df_mean_compiled_normalized.index, \n df_mean_compiled_normalized.iloc[:, i +2], \n tick_label=['0', '0', '0', '7', '7', '7', '14', '17', '17', '21'])\n ax[i].set_title(df_mean_compiled_normalized.columns[i+2])\n ax[i].set_xlabel('Day')\n \nfig.tight_layout(pad=3.0)\nplt.show()\n'''",
"_____no_output_____"
]
],
[
[
"### 02-06 Distribution",
"_____no_output_____"
]
],
[
[
"# mito\nmaxval = df_mito['Volume3d_µm^3'].max()\nminval = df_mito['Volume3d_µm^3'].min()\nprint(maxval)\nprint(minval)\nbins = np.linspace(minval + (maxval - minval)* 0, minval + (maxval - minval)* 1, num = 100)\n# bins = np.linspace(500000000, minval + (maxval - minval)* 1, num = 50)\ndays = [0, 7, 14, 21]\n\nnrows = 4\nncols = 1\nfig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 15))\n\nfor i, day in enumerate(days):\n df_tmp = df_mito.loc[df_mito['day'] == day, :]\n axes[i%nrows].hist(df_tmp['Volume3d_µm^3'], bins= bins, log=True, density = True)\n axes[i%nrows].set_xlim([0, maxval])\n axes[i%nrows].set_ylim([0, 1])\n axes[i%nrows].set_title('Day ' + str(day))\n \nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'distribution_mito_volume.png'))\nplt.show()",
"46.7896\n0.005008\n"
],
[
"# cristae\nmaxval = df_cristae['Area3d_µm^2'].max()\nminval = df_cristae['Area3d_µm^2'].min()\nprint(maxval)\nprint(minval)\nbins = np.linspace(minval + (maxval - minval)* 0, minval + (maxval - minval)* 1, num = 100)\n# bins = np.linspace(500000000, minval + (maxval - minval)* 1, num = 50)\ndays = [0, 7, 14, 21]\n\nnrows = 4\nncols = 1\nfig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 15))\n\nfor i, day in enumerate(days):\n df_tmp = df_cristae.loc[df_cristae['day'] == day, :]\n axes[i%nrows].hist(df_tmp['Area3d_µm^2'], bins= bins, log=True, density = True)\n axes[i%nrows].set_xlim([0, maxval])\n axes[i%nrows].set_ylim([0, 0.1])\n axes[i%nrows].set_title('Day ' + str(day))\n\nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'distribution_cristae_volume.png'))\nplt.show()",
"1311.66\n0.0003004190000000001\n"
],
[
"# ER\nmaxval = df_ER['Volume3d_µm^3'].max()\nminval = df_ER['Volume3d_µm^3'].min()\nprint(maxval)\nprint(minval)\nfactor = 0.03\n\nbins = np.linspace(minval + (maxval - minval)* 0, minval + (maxval - minval)* factor, num = 100)\ndays = [0, 7, 14, 21]\n\n\nnrows = 4\nncols = 1\nfig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 15))\n\nfor i, day in enumerate(days):\n df_tmp = df_ER.loc[df_ER['day'] == day, :]\n axes[i%nrows].hist(df_tmp['Volume3d_µm^3'], bins= bins, log=True, density = True)\n axes[i%nrows].set_xlim([0, minval + (maxval - minval)* factor])\n axes[i%nrows].set_ylim([0, 100])\n axes[i%nrows].set_title('Day ' + str(day))\n \nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'distribution_ER_volume.png'))\nplt.show()",
"86.4886\n1e-06\n"
],
[
"# inclusion\nmaxval = df_inclusion['Volume3d_µm^3'].max()\nminval = df_inclusion['Volume3d_µm^3'].min()\nprint(maxval)\nprint(minval)\nfactor = 1\n\nbins = np.linspace(minval + (maxval - minval)* 0, minval + (maxval - minval)* factor, num = 100)\ndays = [0, 7, 14, 21]\n\nnrows = 4\nncols = 1\nfig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 15))\n\nfor i, day in enumerate(days):\n df_tmp = df_inclusion.loc[df_inclusion['day'] == day, :]\n axes[i%nrows].hist(df_tmp['Volume3d_µm^3'], bins= bins, log=True, density = True)\n axes[i%nrows].set_xlim([0, minval + (maxval - minval)* factor])\n axes[i%nrows].set_ylim([0, 1])\n axes[i%nrows].set_title('Day ' + str(day))\n\nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'distribution_inclusion_volume.png'))\nplt.show()",
"261.036\n1e-06\n"
]
],
[
[
"# 03 Load Data from Auto Skeletonization of Mitocondria",
"_____no_output_____"
],
[
"## 03-01",
"_____no_output_____"
]
],
[
[
"mainpath = 'D:\\PerlmutterData'\nfolder = 'segmentation_compiled_export'\ndata_folder = 'data'\n\npath = os.path.join(mainpath, folder, data_folder)\nprint(path)\n\nfolders = ['skeleton_output']\nsubcat = ['nodes', 'points', 'segments_s']",
"D:\\PerlmutterData\\segmentation_compiled_export\\data\n"
],
[
"target_list = glob.glob(os.path.join(path, 'compile', '*.csv'))\ntarget_list = [os.path.basename(x) for x in target_list]\ntarget_list = [os.path.splitext(x)[0] for x in target_list]\nprint(target_list)",
"['cell_membrane', 'cristae', 'ER', 'inclusion', 'mito', 'nodes', 'nucleus', 'points', 'segments_s']\n"
],
[
"file_meta = {\n 'data_d00_batch01_loc01': 0,\n 'data_d00_batch02_loc02': 0,\n 'data_d00_batch02_loc03': 0,\n 'data_d07_batch01_loc01': 7, \n 'data_d07_batch02_loc01': 7,\n 'data_d07_batch02_loc02': 7, \n 'data_d14_batch01_loc01': 14, \n 'data_d17_batch01_loc01': 17,\n 'data_d21_batch01_loc01': 21,\n}",
"_____no_output_____"
],
[
"for i in subcat:\n file_list = glob.glob(os.path.join(path, 'raw', 'skeleton_output', '*', i + '.csv'))\n # print(file_list)\n \n if not i in target_list:\n df = pd.DataFrame()\n\n for j in file_list: \n data_temp = pd.read_csv(j, header = 0)\n \n foldername_tmp = os.path.dirname(j)\n foldername_tmp = os.path.basename(foldername_tmp)\n \n \n # add day\n pattern = re.compile(\"data_d[0-9][0-9]_batch[0-9][0-9]_loc[0-9][0-9]\")\n original_foldername = pattern.search(foldername_tmp).group(0)\n day_tmp = file_meta[original_foldername]\n data_temp['day'] = day_tmp\n # add filename\n data_temp['filename'] = original_foldername\n \n df = df.append(data_temp, ignore_index = True)\n \n display(df)\n df.to_csv(os.path.join(path, 'compile', i + '.csv'))",
"_____no_output_____"
],
[
"df_points = pd.read_csv(os.path.join(path, 'compile', 'points' + '.csv'))\ndf_segments = pd.read_csv(os.path.join(path, 'compile', 'segments_s' + '.csv'))\ndf_nodes = pd.read_csv(os.path.join(path, 'compile', 'nodes' + '.csv'))",
"_____no_output_____"
],
[
"df_points",
"_____no_output_____"
],
[
"for omit in omit_data: \n df_points = df_points.loc[df_points['filename']!= omit]\n df_segments = df_segments.loc[df_segments['filename']!=omit]\n df_nodes = df_nodes.loc[df_nodes['filename']!=omit]",
"_____no_output_____"
],
[
"# points\nmaxval = df_points['thickness'].max()\nminval = df_points['thickness'].min()\ndays = [0, 7, 14, 21]\nbins = np.linspace(minval + (maxval - minval)* 0, minval + (maxval - minval)* 1, num = 20)\n\nnrows = 4\nncols = 1\nfig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 15))\n\nfor i, day in enumerate(days):\n df_tmp = df_points.loc[df_points['day'] == day, :]\n axes[i%nrows].hist(df_tmp['thickness'], bins= bins, log=False, density = True)\n axes[i%nrows].set_ylim([0, 0.008])\n axes[i%nrows].set_title('Day ' + str(day))\n \nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'distribution_points_thickness.png'))\nplt.show()",
"_____no_output_____"
],
[
"# segments\nmaxval = df_segments['thickness'].max()\nminval = df_segments['thickness'].min()\ndays = [0, 7, 14, 21]\nbins = np.linspace(minval + (maxval - minval)* 0, minval + (maxval - minval)* 1, num = 20)\n\nnrows = 4\nncols = 1\nfig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 15))\n\nfor i, day in enumerate(days):\n df_tmp = df_segments.loc[df_segments['day'] == day, :]\n axes[i%nrows].hist(df_tmp['thickness'], bins= bins, log=False, density = True)\n axes[i%nrows].set_ylim([0, 0.008])\n axes[i%nrows].set_title('Day ' + str(day))\n \nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'distribution_segments_thickness.png'))\nplt.show()",
"_____no_output_____"
],
[
"df_segments_count_grouped = df_nodes.groupby(['day', 'filename', 'Coordination Number']).count().reset_index()\ndf_segments_count_grouped",
"_____no_output_____"
],
[
"filename = df_segments_count_grouped['filename'].unique()\nprint(filename)",
"['data_d00_batch01_loc01' 'data_d00_batch02_loc03'\n 'data_d07_batch01_loc01' 'data_d07_batch02_loc01'\n 'data_d07_batch02_loc02' 'data_d14_batch01_loc01'\n 'data_d17_batch01_loc01' 'data_d21_batch01_loc01']\n"
],
[
"days = [0, 7, 14, 21]\n\nnrows = 4\nncols = 1\nfig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 15))\n\nfor i, day in enumerate(days):\n df_tmp = df_segments_count_grouped.loc[df_segments_count_grouped['day'] == day]\n \n x = df_tmp['Coordination Number']\n y = df_tmp['Node ID']\n x_pos = [str(i) for i in x]\n \n axes[i%nrows].bar(x_pos, y)\n axes[i%nrows].set_title('Day ' + str(day))\n \nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'coordination_number.png'))\nplt.show()",
"_____no_output_____"
]
],
[
[
"# 04 Average of total size",
"_____no_output_____"
]
],
[
[
"mito_mean = df_mito_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).mean().reset_index()\nmito_sem = df_mito_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).sem().reset_index()\nmito_sem = mito_sem.fillna(0)\n\nmito_mean.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'mito_mean.csv'))\nmito_sem.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'mito_sem.csv'))\n\nfig = plt.figure(figsize=(5, 5))\nx = ['Day 0', 'Day 7', 'Day 14', 'Day 21']\nplt.bar(x, mito_mean['Volume3d_µm^3'], yerr= mito_sem['Volume3d_µm^3'], error_kw=dict(capsize=10))\n\nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'mito_mean_barplot.png'))\nplt.show()",
"_____no_output_____"
],
[
"cristae_mean = df_cristae_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).mean().reset_index()\ncristae_sem = df_cristae_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).sem().reset_index()\ncristae_sem = cristae_sem.fillna(0)\n\ncristae_mean.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'cristae_mean.csv'))\ncristae_sem.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'cristae_sem.csv'))\n\nfig = plt.figure(figsize=(5, 5))\nx = ['Day 0', 'Day 7', 'Day 14', 'Day 21']\nplt.bar(x, cristae_mean['Volume3d_µm^3'], yerr= cristae_sem['Volume3d_µm^3'], error_kw=dict(capsize=10))\n\nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'cristae_mean_barplot.png'))\nplt.show()",
"_____no_output_____"
],
[
"ER_mean = df_ER_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).mean().reset_index()\nER_sem = df_ER_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).sem().reset_index()\nER_sem = ER_sem.fillna(0)\n\nER_mean.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'ER_mean.csv'))\nER_sem.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'ER_sem.csv'))\n\nfig = plt.figure(figsize=(5, 5))\nx = ['Day 0', 'Day 7', 'Day 14', 'Day 21']\nplt.bar(x, ER_mean['Volume3d_µm^3'], yerr= ER_sem['Volume3d_µm^3'], error_kw=dict(capsize=10))\n\nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'ER_mean_barplot.png'))\nplt.show()",
"_____no_output_____"
],
[
"inclusion_mean = df_inclusion_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).mean().reset_index()\ninclusion_sem = df_inclusion_sum_grouped[['day', 'Volume3d_µm^3']].groupby(['day']).sem().reset_index()\ninclusion_sem = inclusion_sem.fillna(0)\n\ninclusion_mean.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'inclusion_mean.csv'))\ninclusion_sem.to_csv(os.path.join(mainpath, folder, data_folder, 'spreadsheet', 'inclusion_sem.csv'))\n\nfig = plt.figure(figsize=(5, 5))\nx = ['Day 0', 'Day 7', 'Day 14', 'Day 21']\nplt.bar(x, inclusion_mean['Volume3d_µm^3'], yerr= inclusion_sem['Volume3d_µm^3'], error_kw=dict(capsize=10))\n\nplt.savefig(os.path.join(mainpath, folder, data_folder, 'plots', 'inclusion_mean_barplot.png'))\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e76ec7024350c8ad8252eeb0aba76e4a92fb48db | 11,765 | ipynb | Jupyter Notebook | Trove/Cookbook/Harvesting-data-from-the-Home.ipynb | wragge/ozglam-workbench | 81402e4c9cf65d8f921d43d4047bc93ce2377227 | [
"MIT"
] | 8 | 2018-04-16T06:48:24.000Z | 2018-07-04T23:45:44.000Z | Trove/Cookbook/Harvesting-data-from-the-Home.ipynb | GLAM-Workbench/ozglam-workbench | 3406d098f74e941a0533d860a98492ffe9bc5476 | [
"MIT"
] | 4 | 2018-04-26T05:49:13.000Z | 2018-08-17T10:12:46.000Z | Trove/Cookbook/Harvesting-data-from-the-Home.ipynb | GLAM-Workbench/ozglam-workbench | 3406d098f74e941a0533d860a98492ffe9bc5476 | [
"MIT"
] | 3 | 2018-10-18T09:35:14.000Z | 2019-11-20T01:50:34.000Z | 36.996855 | 383 | 0.567531 | [
[
[
"# Harvesting data from Home\n\nThis is an example of how my original recipe for [harvesting data from The Bulletin](Harvesting-data-from-the-Bulletin.ipynb) can be modified for other journals.\n\nIf you'd like a pre-harvested dataset of all the Home covers (229 images in a 3.3gb zip file), open this link using your preferred BitTorrent client: <magnet:?xt=urn:btih:7888BCEA44E5FF5670931A3394369E5018BFC32B&dn=home-quarterly.zip>",
"_____no_output_____"
]
],
[
[
"# Let's import the libraries we need.\nimport requests\nfrom bs4 import BeautifulSoup\nimport time\nimport json\nimport os\nimport re",
"_____no_output_____"
],
[
"# Create a directory for this journal\n# Edit as necessary for a new journal\ndata_dir = '../../data/Trove/Home'\nos.makedirs(data_dir, exist_ok=True)",
"_____no_output_____"
]
],
[
[
"## Getting the issue data\n\nEach issue of a digitised journal like has it's own unique identifier. You've probably noticed them in the urls of Trove resources. They look something like this `nla.obj-362409353`. Once we have the identifier for an issue we can easily download the contents, but how do we get a complete list of identifiers?\n\nThe [harvesting data from the Bulletin](Harvesting-data-from-the-Bulletin.ipynb) notebook explains how we can find a url that lists all the available issues of a journal.\n\nThis is the url we need to start harvesting issue metadata about *Home*. You could easily modify this to get metadata from another journal by changing the identifier.\n\n```\nhttps://nla.gov.au/nla.obj-362409353/browse?startIdx=0&rows=20&op=c\n```",
"_____no_output_____"
]
],
[
[
"# This is just the url we found above, with a slot into which we can insert the startIdx value\n# If you want to download data from another journal, just change the nla.obj identifier to point to the journal.\nstart_url = 'https://nla.gov.au/nla.obj-362409353/browse?startIdx={}&rows=20&op=c'",
"_____no_output_____"
],
[
"# The initial startIdx value\nstart = 0\n# Number of results per page\nn = 20\nissues = []\n# If there aren't 20 results on the page then we've reached the end, so continue harvesting until that happens.\nwhile n == 20:\n # Get the browse page\n response = requests.get(start_url.format(start))\n # Beautifulsoup turns the HTML into an easily navigable structure\n soup = BeautifulSoup(response.text, 'lxml')\n # Find all the divs containing issue details and loop through them\n details = soup.find_all(class_='l-item-info')\n for detail in details:\n issue = {}\n # Get the issue id\n issue['id'] = detail.dt.a.string\n rows = detail.find_all('dd')\n # Get the issue details\n issue['details'] = rows[2].p.string\n # Get the number of pages\n issue['pages'] = re.search(r'^(\\d+)', detail.find('a', class_=\"browse-child\").text, flags=re.MULTILINE).group(1)\n issues.append(issue)\n print(issue)\n time.sleep(0.2)\n # Increment the startIdx\n start += n\n # Set n to the number of results on the current page\n n = len(details)\n \n ",
"_____no_output_____"
],
[
"len(issues)",
"_____no_output_____"
],
[
"# Save the harvested results as a JSON file in case we need them later on\nwith open('{}/home_issues.json'.format(data_dir), 'w') as outfile:\n json.dump(issues, outfile)",
"_____no_output_____"
],
[
"# Open the saved JSON file\nwith open('{}/home_issues.json'.format(data_dir), 'r') as infile:\n issues = json.load(infile)",
"_____no_output_____"
]
],
[
[
"## Cleaning up the metadata\n\nSo far we've just grabbed the complete issue details as a single string. It would be good to parse this string so that we have the dates, volume and issue numbers in separate fields. As is always the case, there's a bit of variation in the way this information is recorded. The code below tries out different combinations and then saves the structured data in a Python list.\n\nI had to modify the code I used with the *Bulletin* due to slight variations in the way the issue data was recorded. For example, issue dates for *Home* use the full names of months, while the *Bulletin* records used abbreviations. It's likely that there will be other variations between journals, so you might have to adjust this code. ",
"_____no_output_____"
]
],
[
[
"\nimport arrow\nfrom arrow.parser import ParserError\nissues_data = []\n# Loop through the issues\nfor issue in issues:\n issue_data = {}\n issue_data['id'] = issue['id']\n issue_data['pages'] = int(issue['pages'])\n print(issue['details'])\n try:\n # This pattern looks for details in the form: Vol. 2 No. 3 (2 Jul 1878)\n details = re.search(r'(.*)Vol. (\\d+) No\\.* (\\d+) \\((.+)\\)', issue['details'].strip())\n issue_data['label'] = details.group(1).strip()\n issue_data['volume'] = details.group(2)\n issue_data['number'] = details.group(3)\n date = details.group(4)\n except AttributeError:\n try:\n # This pattern looks for details in the form: No. 3 (2 Jul 1878)\n details = re.search(r'No. (\\d+) \\((.+)\\)', issue['details'].strip())\n issue_data['label'] = ''\n issue_data['volume'] = ''\n issue_data['number'] = details.group(1)\n date = details.group(2)\n except AttributeError:\n try:\n # This pattern looks for details in the form: Bulletin Christmas Edition (2 Jul 1878)\n details = re.search(r'(.*) \\((.+)\\)', issue['details'].strip())\n issue_data['label'] = details.group(1)\n issue_data['volume'] = ''\n issue_data['number'] = ''\n date = details.group(2)\n except AttributeError:\n # This pattern looks for details in the form: Bulletin 1878 Jul 3\n details = re.search(r'Bulletin (.+)', issue['details'].strip())\n date_str = details.group(1)\n # Date is wrong way round, split and reverse\n date = ' '.join(reversed(date_str.split()))\n issue_data['label'] = ''\n issue_data['volume'] = ''\n issue_data['number'] = ''\n # Normalise months\n date = date.replace('Sept', 'Sep').replace('Sepember', 'September').replace('July August', 'July').replace('September October', 'September').replace(' ', ' ')\n # Convert date to ISO format\n try:\n issue_data['date'] = arrow.get(date, 'D MMMM YYYY').isoformat()[:-15]\n except ParserError:\n issue_data['date'] = arrow.get(date, 'D MMM YYYY').isoformat()[:-15]\n issues_data.append(issue_data)\n ",
"_____no_output_____"
]
],
[
[
"## Save as CSV\n\nNow the issues data is in a nice, structured form, we can load it into a Pandas dataframe. This allows us to do things like find the total number of pages digitised.\n\nWe can also save the metadata as a CSV.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n# Convert issues metadata into a dataframe\ndf = pd.DataFrame(issues_data, columns=['id', 'label', 'volume', 'number', 'date', 'pages'])",
"_____no_output_____"
],
[
"# Find the total number of pages\ndf['pages'].sum()",
"_____no_output_____"
],
[
"# Save metadata as a CSV.\ndf.to_csv('{}/home_issues.csv'.format(data_dir), index=False)",
"_____no_output_____"
]
],
[
[
"## Download front covers\n\nOptions for downloading images, PDFs and text are described in the [harvesting data from the Bulletin](Harvesting-data-from-the-Bulletin.ipynb) notebook. In this recipe we'll just download the fromt covers (because they're awesome).\n\nThe code below checks to see if an image has already been saved before downloading it, so if the process is interrupted you can just run it again to pick up where it stopped. If more issues are added to Trove you could run it again to pick up any new images.",
"_____no_output_____"
]
],
[
[
"import zipfile\nimport io\n# Prepare a directory to save the images into\noutput_dir = data_dir + '/images'\nos.makedirs(output_dir, exist_ok=True)\n# Loop through the issue metadata\nfor issue in issues_data:\n print(issue['id'])\n id = issue['id']\n # Check to see if the first page of this issue has already been downloaded\n if not os.path.exists('{}/{}-1.jpg'.format(output_dir, id)):\n url = 'https://nla.gov.au/{}/download?downloadOption=zip&firstPage=0&lastPage=0'.format(id)\n # Get the file\n r = requests.get(url)\n # The image is in a zip, so we need to extract the contents into the output directory\n z = zipfile.ZipFile(io.BytesIO(r.content))\n z.extractall(output_dir)\n time.sleep(1)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e76ed34fd12d9a105c2662d031b404b6cf2b06c1 | 17,262 | ipynb | Jupyter Notebook | docs/azure_aks/04_register_business_logic.ipynb | Sheile/core | 5c094cdd33caf5bbb6e67f70dd6ee344a2604382 | [
"Apache-2.0"
] | null | null | null | docs/azure_aks/04_register_business_logic.ipynb | Sheile/core | 5c094cdd33caf5bbb6e67f70dd6ee344a2604382 | [
"Apache-2.0"
] | 1 | 2019-03-18T11:13:23.000Z | 2019-03-18T11:13:23.000Z | docs/azure_aks/04_register_business_logic.ipynb | Sheile/core | 5c094cdd33caf5bbb6e67f70dd6ee344a2604382 | [
"Apache-2.0"
] | null | null | null | 27.270142 | 253 | 0.422894 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e76ede12fadb99abb01acf14b231755748660a8d | 80,152 | ipynb | Jupyter Notebook | _notebooks/2020-06-30-Classical.ipynb | S-B-Iqbal/Reflexione | 244c2d93e0cf0f0e9250cf713652451176449f98 | [
"Apache-2.0"
] | null | null | null | _notebooks/2020-06-30-Classical.ipynb | S-B-Iqbal/Reflexione | 244c2d93e0cf0f0e9250cf713652451176449f98 | [
"Apache-2.0"
] | null | null | null | _notebooks/2020-06-30-Classical.ipynb | S-B-Iqbal/Reflexione | 244c2d93e0cf0f0e9250cf713652451176449f98 | [
"Apache-2.0"
] | null | null | null | 70.308772 | 23,118 | 0.740331 | [
[
[
"# \"Old Skool Image Classification\"\n> \"A blog on how to manuallly create features from an Image for classification task.\"\n\n- toc: true\n- branch: master\n- badges: true\n- comments: false\n- categories: [CV, image classification, feature engineering, pyTorch, CIFAR10]\n- image: images/blog1.png\n- hide: false\n- search_exclude: true",
"_____no_output_____"
],
[
"## Introduction\n\nThe objective of the current notebook is to give a glimpse of some of the methods for feature extraction that were prevelent before the advent of Deep Neural Networks in the Computer Vision domain.\n\nIn the current Notebook we shall see how to do the same using the Python language with CIFAR10 dataset. The goal is to extract several features from the provided images and finally perform Image Classification using a Multi Layer Perceptron.\n\n## Texture Analysis\n\nOn the preface of the book _Image Processing: Dealing with Textures_ {% fn 1 %}\n, the authors provided a very captivating definition of Texture: __Texture is what makes life beautiful; texture is what makes life interesting and texture is what makes life possible. Texture is what makes Mozart’s music beautiful, the masterpieces of the art of the Renaissance classical and the facades of Barcelona’s buildings attractive.\"__.(Not so helpful eh!) \n\nSo, what is Texture? Technically, its the 'variation of Data on a smaller scale than the scale of interest'.\n\nClassical Visual Computing comprises of two main branches when it comes to analyzing Texture of an Image [[1]](#1):\n1. Structural \n - Local Binary Pattern\n - Gabor Wavelets\n - Fourier Co-efficients\n2. Statistical \n - Co-Occurance Matrix\n - Orientation Histogram\n\n\n## Workflow\n\n- We shall start by downloading the [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset.\n\n- Next, we manually craft the features to obtain Texture Metrics.\n\n- After creating functions to obtain textual features from an image, we create a loop to extract the same from all the images in Training and Test dataset.\n\n- Next, we save the Training and Test set extracted features as serialized file. \n\n- Then we shall use the created features as co-variates against the label for each image and train a Softmax classifier on the Training Set. \n\n- Eventually, we evaluate the classifier on the Test set. ",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nfrom torchvision import datasets\nimport PIL\nfrom skimage.feature import local_binary_pattern, greycomatrix, greycoprops\nfrom skimage.filters import gabor\n\nimport torch\nfrom torch import nn \nfrom torch.utils.data import TensorDataset, DataLoader\nimport torch.nn.functional as F\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport tqdm\nfrom tqdm import notebook\n\nfrom pathlib import Path\nimport pickle\nimport time",
"_____no_output_____"
]
],
[
[
"### Data Loading",
"_____no_output_____"
]
],
[
[
"#collapse\n#collapse-output\ntrainDset = datasets.CIFAR10(root=\"./cifar10/\", train=True, download=True)\ntestDset = datasets.CIFAR10(root = \"./cifar10/\", train=False, download=True)",
"Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./cifar10/cifar-10-python.tar.gz\n"
],
[
"# Looking at a single image",
"_____no_output_____"
],
[
"#collapse\nimg = trainDset[0][0] # PIL Image\nimg_grey = img.convert('L') # convert to Grey-scale\nimg_arr = np.array(img_grey) # convert to numpy array\nplt.imshow(img)",
"_____no_output_____"
]
],
[
[
"## Local Binary Patterns(LBP)\n\nLBP is helpful in extracting \"local\" structure of the image. It does so by encoding the local neighbourhood after they have been maximally simplified, i.e. binarized. In case, we want to perform LBP on a coloured image, we need to do so individually on each channel(Red/Blue/Green).",
"_____no_output_____"
]
],
[
[
"#collapse\nfeat_lbp = local_binary_pattern(img_arr, 8,1,'uniform')\nfeat_lbp = np.uint8( (feat_lbp/feat_lbp.max())*255) # converting to unit 8\nlbp_img = PIL.Image.fromarray(feat_lbp) # Convert from array\n\nplt.imshow(lbp_img, cmap = 'gray')",
"_____no_output_____"
],
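[
"As mentioned above, LBP on a coloured image can be applied to each channel separately. The next cell is a minimal optional sketch of that idea (not part of the original pipeline); it assumes `img` is the PIL image loaded in an earlier cell.",
"_____no_output_____"
],
[
"# Optional sketch (not part of the original pipeline): LBP applied to each colour channel.\n# Assumes `img` is the PIL image loaded in an earlier cell.\nimport numpy as np\nfrom skimage.feature import local_binary_pattern\n\nrgb = np.array(img)  # H x W x 3 uint8 array\nchannel_lbps = [local_binary_pattern(rgb[:, :, c], 8, 1, 'uniform') for c in range(3)]\nprint([lbp.shape for lbp in channel_lbps])",
"_____no_output_____"
],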
[
"# Energy, Entropy\n\ndef get_lbp(img):\n \"\"\"Function to implement Local Binary Pattern\"\"\"\n lbp_hist, _ = np.histogram(img, 8)\n lbp_hist = np.array(lbp_hist, dtype = float)\n lbp_prob = np.divide(lbp_hist, np.sum(lbp_hist))\n lbp_prob = np.where(np.isclose(0, lbp_prob), 0.0000000001, lbp_prob) # to avoid log(0)\n lbp_energy = np.sum(lbp_prob**2)\n lbp_entropy = -np.sum(np.multiply(lbp_prob, np.log2(lbp_prob)))\n return lbp_energy, lbp_entropy",
"\n"
]
],
[
[
"## Co Occurence Matrix\n\nIntiutively, if we were to extract information of a pixel in an image and also record its neighbouring pixels and their intensities, we will be able to capture both spatial and relative information. This is where Co-Occurance matrix are useful. They extract the representation of joint probability of chosen set of pixels having certain values. \n\nOnce we have the co-occurance matrix, we can start calculating the feature matrics such as:\n- $\\textbf{Energy} = \\sum_{m=0}^{G-1}\\sum_{n=0}^{G-1}p^2\\left(m,n\\right)$\n\n- $\\textbf{Entropy} = \\sum_{m=0}^{G-1}\\sum_{n=0}^{G-1}p\\left(m,n\\right)\\cdot log \\left(p\\left(m,n\\right)\\right)$\n\n- $\\textbf{Contrast} = \\frac{1}{(G-1)^2}\\sum_{m=0}^{G-1}\\sum_{n=0}^{G-1}(m-n)^2\\cdot p(m,n)$\n- $\\textbf{Homogeneity} = \\sum_{m=0}^{G-1}\\sum_{n=0}^{G-1} \\frac{p(m,n)}{1+|m-n|}$\n\nWhere, $m,n$ are the neighbouring pixels and $G$ is the total number of grey levels we use. $G=256$ for an 8-bit gray-scale image",
"_____no_output_____"
]
],
[
[
"def creat_cooccur(img_arr, *args, **kwargs):\n \"\"\"Implements extraction of features from Co-Occurance Matrix\"\"\"\n gCoMat = greycomatrix(img_arr, [2], [0], 256, symmetric=True, normed=True)\n contrast = greycoprops(gCoMat, prop='contrast')\n dissimilarity = greycoprops(gCoMat, prop='dissimilarity')\n homogeneity = greycoprops(gCoMat, prop='homogeneity')\n energy = greycoprops(gCoMat, prop='energy')\n correlation = greycoprops(gCoMat, prop = 'correlation')\n return contrast[0][0], dissimilarity[0][0], homogeneity[0][0], energy[0][0], correlation[0][0]",
"_____no_output_____"
],
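[
"To connect the formulas above with code, here is a small optional sketch (not part of the original notebook) that computes the metrics directly from the normalized co-occurrence matrix returned by `greycomatrix`. It follows the formulas above, so its contrast value keeps the $\\frac{1}{(G-1)^2}$ factor that scikit-image's `greycoprops` omits; it assumes `img_arr` (the grey-scale array) from the earlier cells.",
"_____no_output_____"
],
[
"# Optional sketch: texture metrics computed directly from the normalized GLCM, mirroring the formulas above.\n# Assumes `img_arr` (grey-scale uint8 array) from the earlier cells.\nimport numpy as np\nfrom skimage.feature import greycomatrix\n\nG = 256\np = greycomatrix(img_arr, [2], [0], G, symmetric=True, normed=True)[:, :, 0, 0]\nm, n = np.meshgrid(np.arange(G), np.arange(G), indexing='ij')\nenergy = np.sum(p ** 2)\nentropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))\ncontrast = np.sum(((m - n) ** 2) * p) / (G - 1) ** 2  # formula above; skimage's 'contrast' omits the 1/(G-1)^2 factor\nhomogeneity = np.sum(p / (1 + np.abs(m - n)))\nprint(energy, entropy, contrast, homogeneity)",
"_____no_output_____"
],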
[
"#collapse\ngCoMat = greycomatrix(img_arr, [2], [0], 256, symmetric=True, normed=True)\ncontrast = greycoprops(gCoMat, prop='contrast')\ndissimilarity = greycoprops(gCoMat, prop='dissimilarity')\nhomogeneity = greycoprops(gCoMat, prop='homogeneity')\nenergy = greycoprops(gCoMat, prop='energy')\ncorrelation = greycoprops(gCoMat, prop = 'correlation')\nprint(energy[0][0])",
"_____no_output_____"
]
],
[
[
"## [Gabor Filter](https://en.wikipedia.org/wiki/Gabor_filter#Applications_of_2-D_Gabor_filters_in_image_processing)\n\n",
"_____no_output_____"
]
],
[
[
"gf_real, gf_img = gabor(img_arr, frequency=0.6)\ngf =(gf_real**2 + gf_img**2)//2\n# Displaying the filter response\nfig, ax = plt.subplots(1,3) \nax[0].imshow(gf_real,cmap='gray')\nax[1].imshow(gf_img,cmap='gray')\nax[2].imshow(gf,cmap='gray')",
"_____no_output_____"
],
[
"def get_gabor(img, N, *args, **kwargs):\n \"\"\"Gabor Feature extraction\"\"\"\n gf_real, gf_img = gabor(img_arr, frequency=0.6)\n gf =(gf_real**2 + gf_img**2)//2\n gabor_hist, _ = np.histogram(gf, N)\n gabor_hist = np.array(gabor_hist, dtype = float)\n gabor_prob = np.divide(gabor_hist, np.sum(gabor_hist))\n # To discard pixels resulting in 0 probability\n gabor_prob = np.where(np.isclose(0, gabor_prob), 0.0000000001, gabor_prob)\n gabor_energy = np.sum(gabor_prob**2)\n gabor_entropy = np.sum(np.multiply(gabor_prob, np.log2(gabor_prob)))\n return gabor_energy, gabor_entropy",
"_____no_output_____"
]
],
[
[
"## Feature Extraction",
"_____no_output_____"
]
],
[
[
"# Generate Training Data\n# Extract features from all images\n\nlabel = []\n\nfeatLength = 2+5+2 # LBP, Co-occurance, Gabor\ntrainFeats = np.zeros((len(trainDset), featLength))\ntestFeats = np.zeros((len(testDset), featLength))",
"_____no_output_____"
],
[
"label = [trainDset[tr][1] for tr in tqdm.tqdm_notebook(range(len(trainFeats)))]\n\ntrainLabel = np.array(label)",
"_____no_output_____"
],
[
"for tr in tqdm.tqdm_notebook(range(len(trainFeats))):\n img = trainDset[tr][0]\n img_grey = img.convert('L')\n img_arr = np.array(img_grey.getdata()).reshape(img.size[1], img.size[0])\n # LBP \n feat_lbp = local_binary_pattern(img_arr, 5,2,'uniform').reshape(img.size[0]*img.size[1])\n feat_lbp = np.uint8((feat_lbp/feat_lbp.max())*255) # converting to unit 8\n lbp_energy, lbp_entropy = get_lbp(feat_lbp)\n # Co-Occurance\n gCoMat = greycomatrix(img_arr, [2], [0], 256, True,True)\n featglcm = np.array(creat_cooccur(img_arr))\n # Gabor\n gabor_energy, gabor_entropy = get_gabor(img_arr, 8)\n\n # Concat features\n concat_feat = np.concatenate(([lbp_energy, lbp_entropy], featglcm, [gabor_energy, gabor_entropy]), axis=0)\n trainFeats[tr,:] = concat_feat\n label.append(trainDset[tr][1])\ntrainLabel = np.array(label)",
"_____no_output_____"
],
[
"label = []\n\nfor ts in tqdm.tqdm_notebook(range(len(testDset))):\n img = testDset[ts][0]\n img_grey = img.convert('L')\n img_arr = np.array(img_grey.getdata()).reshape(img.size[1], img.size[0])\n # LBP \n feat_lbp = local_binary_pattern(img_arr, 5,2,'uniform').reshape(img.size[0]*img.size[1])\n lbp_energy, lbp_entropy = get_lbp(feat_lbp)\n # Co-Occurance\n gCoMat = greycomatrix(img_arr, [2], [0], 256, True,True)\n featglcm = np.array(creat_cooccur(img_arr))\n # Gabor\n gabor_energy, gabor_entropy = get_gabor(img_arr, 8)\n\n # Concat features\n concat_feat = np.concatenate(([lbp_energy, lbp_entropy], featglcm, [gabor_energy, gabor_entropy]), axis=0)\n\n testFeats[ts,:] = concat_feat\n label.append(testDset[ts][1]) \n\ntestLabel = np.array(label)",
"_____no_output_____"
]
],
[
[
"### Normalize Features",
"_____no_output_____"
]
],
[
[
"# Normalizing the train features to the range [0,1]\ntrMaxs = np.amax(trainFeats,axis=0) #Finding maximum along each column\ntrMins = np.amin(trainFeats,axis=0) #Finding maximum along each column\ntrMaxs_rep = np.tile(trMaxs,(50000,1)) #Repeating the maximum value along the rows\ntrMins_rep = np.tile(trMins,(50000,1)) #Repeating the minimum value along the rows\ntrainFeatsNorm = np.divide(trainFeats-trMins_rep,trMaxs_rep) #Element-wise division\n# Normalizing the test features\ntsMaxs_rep = np.tile(trMaxs,(10000,1)) #Repeating the maximum value along the rows\ntsMins_rep = np.tile(trMins,(10000,1)) #Repeating the maximum value along the rows\ntestFeatsNorm = np.divide(testFeats-tsMins_rep,tsMaxs_rep) #Element-wise division",
"_____no_output_____"
]
],
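[
[
"As an optional aside (not in the original notebook), the same train-based min-max scaling can be written with scikit-learn, which is assumed to be installed; `MinMaxScaler` is fitted on the training features only and then reused for the test features.",
"_____no_output_____"
],
[
"# Optional alternative (not in the original notebook): min-max scaling with scikit-learn.\nfrom sklearn.preprocessing import MinMaxScaler\n\nscaler = MinMaxScaler()\ntrainFeatsNorm = scaler.fit_transform(trainFeats)  # per-column (x - min) / (max - min)\ntestFeatsNorm = scaler.transform(testFeats)        # reuses the training min/max",
"_____no_output_____"
]
],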
[
[
"### Save Data",
"_____no_output_____"
]
],
[
[
"with open(\"TrainFeats.pckl\", \"wb\") as f:\n pickle.dump(trainFeatsNorm, f)\nwith open(\"TrainLabel.pckl\", \"wb\") as f:\n pickle.dump(trainLabel, f)\n\nwith open(\"TestFeats.pckl\", \"wb\") as f:\n pickle.dump(testFeatsNorm, f)\nwith open(\"TestLabel.pckl\", \"wb\") as f:\n pickle.dump(testLabel, f)\nprint(\"files Saved!\")",
"_____no_output_____"
]
],
[
[
"## Classification with SoftMax Regression",
"_____no_output_____"
],
[
"### Data Preparation\n",
"_____no_output_____"
]
],
[
[
"##########################\n### SETTINGS\n##########################\n\n# Device\nDEVICE = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n\n# Hyperparameters\nrandom_seed = 123\nlearning_rate = 0.01\nnum_epochs = 100\nbatch_size = 64\n\n# Architecture\nnum_features = 9\nnum_classes = 10\n\n##########################\n### CIFAR10 DATASET\n##########################\n## Converting Numpy array to Torch-Tensor\n\ntrainLabels = torch.from_numpy(trainLabel)\ntrainDataset = TensorDataset(torch.from_numpy(trainFeats),trainLabels)\n\ntestLabels = torch.from_numpy(testLabel)\ntestDataset = TensorDataset(torch.from_numpy(testFeats), testLabels)\n\n## Creating DataLoader\n\ntrain_loader = DataLoader(trainDataset, batch_size=batch_size, shuffle=True)\n\ntest_loader = DataLoader(testDataset,batch_size=batch_size,shuffle=False)",
"_____no_output_____"
]
],
[
[
"### Define Model",
"_____no_output_____"
]
],
[
[
"##########################\n### MODEL\n##########################\n\nclass SoftmaxRegression(torch.nn.Module):\n\n def __init__(self, num_features, num_classes):\n super(SoftmaxRegression, self).__init__()\n self.linear = torch.nn.Linear(num_features, num_classes)\n \n # self.linear.weight.detach().zero_()\n # self.linear.bias.detach().zero_()\n \n def forward(self, x):\n logits = self.linear(x)\n probas = F.softmax(logits, dim=1)\n return logits, probas\n\nmodel = SoftmaxRegression(num_features=num_features,\n num_classes=num_classes)\n\nmodel.to(DEVICE)\n\n##########################\n### COST AND OPTIMIZER\n##########################\n\noptimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)",
"_____no_output_____"
]
],
[
[
"### Define Training Route",
"_____no_output_____"
]
],
[
[
"# Manual seed for deterministic data loader\ntorch.manual_seed(random_seed)\n\n\ndef compute_accuracy(model, data_loader):\n correct_pred, num_examples = 0, 0\n \n for features, targets in data_loader:\n features = features.float().view(-1, 9).to(DEVICE)\n targets = targets.to(DEVICE)\n logits, probas = model(features)\n _, predicted_labels = torch.max(probas, 1)\n num_examples += targets.size(0)\n correct_pred += (predicted_labels == targets).sum()\n \n return correct_pred.float() / num_examples * 100\n \n\nstart_time = time.time()\nepoch_costs = []\nfor epoch in range(num_epochs):\n avg_cost = 0.\n for batch_idx, (features, targets) in enumerate(train_loader):\n \n features = features.float().view(-1, 9).to(DEVICE)\n targets = targets.to(DEVICE)\n \n ### FORWARD AND BACK PROP\n logits, probas = model(features)\n \n # note that the PyTorch implementation of\n # CrossEntropyLoss works with logits, not\n # probabilities\n cost = F.cross_entropy(logits, targets)\n optimizer.zero_grad()\n cost.backward()\n avg_cost += cost\n \n ### UPDATE MODEL PARAMETERS\n optimizer.step()\n \n ### LOGGING\n if not batch_idx % 50:\n print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f' \n %(epoch+1, num_epochs, batch_idx, \n len(trainDataset)//batch_size, cost))\n \n with torch.set_grad_enabled(False):\n avg_cost = avg_cost/len(trainDataset)\n epoch_costs.append(avg_cost)\n print('Epoch: %03d/%03d training accuracy: %.2f%%' % (\n epoch+1, num_epochs, \n compute_accuracy(model, train_loader)))\n print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))",
"_____no_output_____"
]
],
[
[
"### Model Performance",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\n\n\nplt.plot(epoch_costs)\nplt.ylabel('Avg Cross Entropy Loss\\n(approximated by averaging over minibatches)')\nplt.xlabel('Epoch')\nplt.show()",
"_____no_output_____"
],
[
"print(f'Train accuracy: {(compute_accuracy(model, train_loader)): .2f}%')\nprint(f'Train accuracy: {(compute_accuracy(model, test_loader)): .2f}%')",
"Train accuracy: 24.92%\nTrain accuracy: 25.29%\n"
]
],
[
[
"## Comments\n\n- This was a demonstration of how we can use manually crafted features in Image Classification tasks.\n- The model can be improved in several ways:\n - Tweaking the parameters to modify features generated for **_LBP, Co-Occurance Matrix and Gabor Filter_**\n - Extending the parameters for Red,Blue and Green channels.\n - Modifying the Learning rate, Epochs.\n - Trying a different Algorithm such as Multi Layer Perceptron.\n- The results aren't great but offer a glimpse of manually creating features from images.",
"_____no_output_____"
],
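[
"Below is a hypothetical sketch of the Multi Layer Perceptron mentioned above (not part of the original pipeline). It keeps the same 9 input features and 10 classes and could be trained with the same loop and `F.cross_entropy` loss used earlier; the hidden size of 64 is an arbitrary choice.",
"_____no_output_____"
],
[
"# Hypothetical sketch of the MLP mentioned above (not part of the original pipeline).\nfrom torch import nn\n\nclass MLP(nn.Module):\n    def __init__(self, num_features=9, num_classes=10, hidden=64):\n        super().__init__()\n        self.net = nn.Sequential(\n            nn.Linear(num_features, hidden), nn.ReLU(),\n            nn.Linear(hidden, hidden), nn.ReLU(),\n            nn.Linear(hidden, num_classes),\n        )\n\n    def forward(self, x):\n        return self.net(x)  # raw logits, suitable for F.cross_entropy\n\nprint(MLP())",
"_____no_output_____"
],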
[
"{{\"Maria Petrou, Pedro Garcia Sevilla. _Image Processing: Dealing with Texture_. John Wiley & Sons, Ltd (2006)\" | fndetail: 1 }}",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
e76edf76660ade32bd6bda76271af18c34eaea9f | 11,821 | ipynb | Jupyter Notebook | examples/neural_network_inference/tensorflow_converter/Tensorflow_2/tf_low_level_apis.ipynb | DreamChaserMXF/coremltools | f340dde1eb4510a6fd72e765ae1c422d4cd7266b | [
"BSD-3-Clause"
] | 2 | 2021-03-20T17:53:52.000Z | 2021-09-17T13:42:36.000Z | examples/neural_network_inference/tensorflow_converter/Tensorflow_2/tf_low_level_apis.ipynb | Farazahmed90/coremltools | 550a740f74d1dbd6871d65106c1612bff5a706d8 | [
"BSD-3-Clause"
] | null | null | null | examples/neural_network_inference/tensorflow_converter/Tensorflow_2/tf_low_level_apis.ipynb | Farazahmed90/coremltools | 550a740f74d1dbd6871d65106c1612bff5a706d8 | [
"BSD-3-Clause"
] | 1 | 2019-04-02T09:20:23.000Z | 2019-04-02T09:20:23.000Z | 37.887821 | 1,601 | 0.630996 | [
[
[
"# TensorFlow 2.0+ Low Level APIs Convert Example\n\nThis example demonstrates the workflow to build a model using\nTensorFlow 2.0+ low-level APIs and convert it to Core ML \n`.mlmodel` format using the `coremltools.converters.tensorflow` converter.\nFor more example, refer `test_tf_2x.py` file.\n\nNote: \n\n- This notebook was tested with following dependencies:\n\n```\ntensorflow==2.0.0\ncoremltools==3.1\n```\n\n- Models from TensorFlow 2.0+ is supported only for `minimum_ios_deployment_target>=13`.\nYou can also use `tfcoreml.convert()` instead of \n`coremltools.converters.tensorflow.convert()` to convert your model.",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nimport numpy as np\nimport coremltools\n\nprint(tf.__version__)\nprint(coremltools.__version__)",
"WARNING: Logging before flag parsing goes to stderr.\nW1101 14:02:33.174557 4762860864 __init__.py:74] TensorFlow version 2.0.0 detected. Last version known to be fully compatible is 1.14.0 .\n"
]
],
[
[
"## Using Low-Level APIs",
"_____no_output_____"
]
],
[
[
"# construct a toy model with low level APIs\nroot = tf.train.Checkpoint()\nroot.v1 = tf.Variable(3.)\nroot.v2 = tf.Variable(2.)\nroot.f = tf.function(lambda x: root.v1 * root.v2 * x)\n\n# save the model\nsaved_model_dir = './tf_model'\ninput_data = tf.constant(1., shape=[1, 1])\nto_save = root.f.get_concrete_function(input_data)\ntf.saved_model.save(root, saved_model_dir, to_save)\n\ntf_model = tf.saved_model.load(saved_model_dir)\nconcrete_func = tf_model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]",
"W1101 14:02:33.537978 4762860864 deprecation.py:506] From /Volumes/data/venv-py36/lib/python3.6/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1781: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\nInstructions for updating:\nIf using Keras pass *_constraint arguments to layers.\n"
],
[
"# convert model into Core ML format\nmodel = coremltools.converters.tensorflow.convert(\n [concrete_func],\n inputs={'x': (1, 1)},\n outputs=['Identity']\n)\n\nassert isinstance(model, coremltools.models.MLModel)",
"0 assert nodes deleted\n['Func/StatefulPartitionedCall/input/_2:0', 'StatefulPartitionedCall/mul/ReadVariableOp:0', 'statefulpartitionedcall_args_1:0', 'Func/StatefulPartitionedCall/input/_3:0', 'StatefulPartitionedCall/mul:0', 'StatefulPartitionedCall/ReadVariableOp:0', 'statefulpartitionedcall_args_2:0']\n6 nodes deleted\n0 nodes deleted\n0 nodes deleted\n2 identity nodes deleted\n0 disconnected nodes deleted\n[SSAConverter] Converting function main ...\n[SSAConverter] [1/3] Converting op type: 'Placeholder', name: 'x', output_shape: (1, 1).\n[SSAConverter] [2/3] Converting op type: 'Const', name: 'StatefulPartitionedCall/mul'.\n[SSAConverter] [3/3] Converting op type: 'Mul', name: 'Identity', output_shape: (1, 1).\n"
]
],
[
[
"## Using Control Flow",
"_____no_output_____"
]
],
[
[
"# construct a TensorFlow 2.0+ model with tf.function()\n\[email protected](input_signature=[tf.TensorSpec([], tf.float32)])\ndef control_flow(x):\n if x <= 0:\n return 0.\n else:\n return x * 3.\n\nto_save = tf.Module()\nto_save.control_flow = control_flow\n\nsaved_model_dir = './tf_model'\ntf.saved_model.save(to_save, saved_model_dir)\ntf_model = tf.saved_model.load(saved_model_dir)\nconcrete_func = tf_model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]",
"_____no_output_____"
],
[
"# convert model into Core ML format\nmodel = coremltools.converters.tensorflow.convert(\n [concrete_func],\n inputs={'x': (1,)},\n outputs=['Identity']\n)\n\nassert isinstance(model, coremltools.models.MLModel)",
"0 assert nodes deleted\n['PartitionedCall/cond/then/_2/Identity_1:0', 'PartitionedCall/LessEqual/y:0', 'PartitionedCall/cond/else/_3/mul/y:0', 'Func/PartitionedCall/cond/then/_2/output/_14:0', 'PartitionedCall/cond/then/_2/Const_1:0']\n2 nodes deleted\nFixing cond at merge location: PartitionedCall/cond/output/_9\nIn an IFF node fp32 != tensor[fp32,1]\n0 nodes deleted\n0 nodes deleted\n2 identity nodes deleted\n0 disconnected nodes deleted\n[SSAConverter] Converting function main ...\n[SSAConverter] [1/7] Converting op type: 'Placeholder', name: 'x', output_shape: (1,).\n[SSAConverter] [2/7] Converting op type: 'Const', name: 'PartitionedCall/LessEqual/y'.\n[SSAConverter] [3/7] Converting op type: 'Const', name: 'Func/PartitionedCall/cond/then/_2/output/_14'.\n[SSAConverter] [4/7] Converting op type: 'Const', name: 'PartitionedCall/cond/else/_3/mul/y'.\n[SSAConverter] [5/7] Converting op type: 'LessEqual', name: 'PartitionedCall/LessEqual', output_shape: (1,).\n[SSAConverter] [6/7] Converting op type: 'Mul', name: 'PartitionedCall/cond/else/_3/mul', output_shape: (1,).\n[SSAConverter] [7/7] Converting op type: 'iff', name: 'Identity'.\n"
],
[
"# try with some sample inputs\n\ninputs = [-3.7, 6.17, 0.0, 1984., -5.]\nfor data in inputs:\n out1 = to_save.control_flow(data).numpy()\n out2 = model.predict({'x': np.array([data])})['Identity']\n np.testing.assert_array_almost_equal(out1, out2)",
"_____no_output_____"
]
],
[
[
"## Using `tf.keras` Subclassing APIs",
"_____no_output_____"
]
],
[
[
"class MyModel(tf.keras.Model):\n def __init__(self):\n super(MyModel, self).__init__()\n self.dense1 = tf.keras.layers.Dense(4)\n self.dense2 = tf.keras.layers.Dense(5)\n\n @tf.function\n def call(self, input_data):\n return self.dense2(self.dense1(input_data))\n\nkeras_model = MyModel()",
"_____no_output_____"
],
[
"inputs = np.random.rand(4, 4)\n\n# subclassed model can only be saved as SavedModel format\nkeras_model._set_inputs(inputs)\nsaved_model_dir = './tf_model_subclassing'\nkeras_model.save(saved_model_dir, save_format='tf')\n# convert and validate\nmodel = coremltools.converters.tensorflow.convert(\n saved_model_dir,\n inputs={'input_1': (4, 4)},\n outputs=['Identity']\n)\nassert isinstance(model, coremltools.models.MLModel)\n# verify the prediction matches\nkeras_prediction = keras_model.predict(inputs)\nprediction = model.predict({'input_1': inputs})['Identity']\nnp.testing.assert_array_equal(keras_prediction.shape, prediction.shape)\nnp.testing.assert_almost_equal(keras_prediction.flatten(), prediction.flatten(), decimal=4)",
"0 assert nodes deleted\n['my_model/StatefulPartitionedCall/args_3:0', 'Func/my_model/StatefulPartitionedCall/input/_2:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/input/_11:0', 'my_model/StatefulPartitionedCall/args_4:0', 'Func/my_model/StatefulPartitionedCall/input/_4:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/input/_12:0', 'my_model/StatefulPartitionedCall/args_2:0', 'my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense_1/StatefulPartitionedCall/MatMul/ReadVariableOp:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense_1/StatefulPartitionedCall/input/_25:0', 'Func/my_model/StatefulPartitionedCall/input/_3:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/input/_13:0', 'Func/my_model/StatefulPartitionedCall/input/_5:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/input/_10:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense_1/StatefulPartitionedCall/input/_24:0', 'my_model/StatefulPartitionedCall/args_1:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense/StatefulPartitionedCall/input/_18:0', 'my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense/StatefulPartitionedCall/BiasAdd/ReadVariableOp:0', 'my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense/StatefulPartitionedCall/MatMul/ReadVariableOp:0', 'my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense_1/StatefulPartitionedCall/BiasAdd/ReadVariableOp:0', 'Func/my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense/StatefulPartitionedCall/input/_19:0']\n16 nodes deleted\n0 nodes deleted\n0 nodes deleted\n[Op Fusion] fuse_bias_add() deleted 4 nodes.\n2 identity nodes deleted\n2 disconnected nodes deleted\n[SSAConverter] Converting function main ...\n[SSAConverter] [1/3] Converting op type: 'Placeholder', name: 'input_1', output_shape: (4, 4).\n[SSAConverter] [2/3] Converting op type: 'MatMul', name: 'my_model/StatefulPartitionedCall/StatefulPartitionedCall/dense/StatefulPartitionedCall/MatMul', output_shape: (4, 4).\n[SSAConverter] [3/3] Converting op type: 'MatMul', name: 'Identity', output_shape: (4, 5).\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e76ee907212502daa32b3e65db064c49ec0ed19d | 14,968 | ipynb | Jupyter Notebook | math/Math20_Vectors.ipynb | QPoland/basics-of-quantum-computing-pl | 543ada7311c0146f41d2e68da6784bb4e635a8e9 | [
"Apache-2.0",
"CC-BY-4.0"
] | 1 | 2021-04-08T16:12:21.000Z | 2021-04-08T16:12:21.000Z | math/Math20_Vectors.ipynb | QPoland/basics-of-quantum-computing-pl | 543ada7311c0146f41d2e68da6784bb4e635a8e9 | [
"Apache-2.0",
"CC-BY-4.0"
] | null | null | null | math/Math20_Vectors.ipynb | QPoland/basics-of-quantum-computing-pl | 543ada7311c0146f41d2e68da6784bb4e635a8e9 | [
"Apache-2.0",
"CC-BY-4.0"
] | 3 | 2021-02-05T14:13:48.000Z | 2021-09-14T09:13:51.000Z | 32.468547 | 309 | 0.50314 | [
[
[
"<table> <tr>\n <td style=\"background-color:#ffffff;\">\n <a href=\"http://qworld.lu.lv\" target=\"_blank\"><img src=\"../images/qworld.jpg\" width=\"25%\" align=\"left\"> </a></td>\n <td style=\"background-color:#ffffff;vertical-align:bottom;text-align:right;\">\n prepared by <a href=\"http://abu.lu.lv\" target=\"_blank\">Abuzer Yakaryilmaz</a> (<a href=\"http://qworld.lu.lv/index.php/qlatvia/\" target=\"_blank\">QLatvia</a>)\n </td> \n</tr></table>",
"_____no_output_____"
],
[
"<table width=\"100%\"><tr><td style=\"color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;\">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>\n$ \\newcommand{\\bra}[1]{\\langle #1|} $\n$ \\newcommand{\\ket}[1]{|#1\\rangle} $\n$ \\newcommand{\\braket}[2]{\\langle #1|#2\\rangle} $\n$ \\newcommand{\\dot}[2]{ #1 \\cdot #2} $\n$ \\newcommand{\\biginner}[2]{\\left\\langle #1,#2\\right\\rangle} $\n$ \\newcommand{\\mymatrix}[2]{\\left( \\begin{array}{#1} #2\\end{array} \\right)} $\n$ \\newcommand{\\myvector}[1]{\\mymatrix{c}{#1}} $\n$ \\newcommand{\\myrvector}[1]{\\mymatrix{r}{#1}} $\n$ \\newcommand{\\mypar}[1]{\\left( #1 \\right)} $\n$ \\newcommand{\\mybigpar}[1]{ \\Big( #1 \\Big)} $\n$ \\newcommand{\\sqrttwo}{\\frac{1}{\\sqrt{2}}} $\n$ \\newcommand{\\dsqrttwo}{\\dfrac{1}{\\sqrt{2}}} $\n$ \\newcommand{\\onehalf}{\\frac{1}{2}} $\n$ \\newcommand{\\donehalf}{\\dfrac{1}{2}} $\n$ \\newcommand{\\hadamard}{ \\mymatrix{rr}{ \\sqrttwo & \\sqrttwo \\\\ \\sqrttwo & -\\sqrttwo }} $\n$ \\newcommand{\\vzero}{\\myvector{1\\\\0}} $\n$ \\newcommand{\\vone}{\\myvector{0\\\\1}} $\n$ \\newcommand{\\vhadamardzero}{\\myvector{ \\sqrttwo \\\\ \\sqrttwo } } $\n$ \\newcommand{\\vhadamardone}{ \\myrvector{ \\sqrttwo \\\\ -\\sqrttwo } } $\n$ \\newcommand{\\myarray}[2]{ \\begin{array}{#1}#2\\end{array}} $\n$ \\newcommand{\\X}{ \\mymatrix{cc}{0 & 1 \\\\ 1 & 0} } $\n$ \\newcommand{\\Z}{ \\mymatrix{rr}{1 & 0 \\\\ 0 & -1} } $\n$ \\newcommand{\\Htwo}{ \\mymatrix{rrrr}{ \\frac{1}{2} & \\frac{1}{2} & \\frac{1}{2} & \\frac{1}{2} \\\\ \\frac{1}{2} & -\\frac{1}{2} & \\frac{1}{2} & -\\frac{1}{2} \\\\ \\frac{1}{2} & \\frac{1}{2} & -\\frac{1}{2} & -\\frac{1}{2} \\\\ \\frac{1}{2} & -\\frac{1}{2} & -\\frac{1}{2} & \\frac{1}{2} } } $\n$ \\newcommand{\\CNOT}{ \\mymatrix{cccc}{1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & 0 & 1 \\\\ 0 & 0 & 1 & 0} } $\n$ \\newcommand{\\norm}[1]{ \\left\\lVert #1 \\right\\rVert } $",
"_____no_output_____"
],
[
"<h2>Vectors: One Dimensional Lists</h2>\n\nA <b>vector</b> is a list of numbers. \n\nVectors are very useful to describe the state of a system, as we will see in the main tutorial. \n\nA list is a single object in python.\n\nSimilarly, a vector is a single mathematical object. \n\nThe number of elements in a list is its size or length.\n\nSimilarly, the number of entries in a vector is called as the <b>size</b> or <b>dimension</b> of the vector.",
"_____no_output_____"
]
],
[
[
"# consider the following list with 4 elements \nL = [1,-2,0,5]\nprint(L)",
"_____no_output_____"
]
],
[
[
"Vectors can be in horizontal or vertical shape.\n\nWe show this list as a <i><u>four dimensional</u></i> <b>row vector</b> (horizontal) or a <b>column vector</b> (vertical):\n\n$$\n u = \\mypar{1~~-2~~0~~-5} ~~~\\mbox{ or }~~~ v =\\mymatrix{r}{1 \\\\ -2 \\\\ 0 \\\\ 5}, ~~~\\mbox{ respectively.}\n$$\n\nRemark that we do not need to use any comma in vector representation.",
"_____no_output_____"
],
[
"<h3> Multiplying a vector with a number</h3>\n\nA vector can be multiplied by a number.\n\nMultiplication of a vector with a number is also a vector: each entry is multiplied by this number.\n\n$$\n 3 \\cdot v = 3 \\cdot \\mymatrix{r}{1 \\\\ -2 \\\\ 0 \\\\ 5} = \\mymatrix{r}{3 \\\\ -6 \\\\ 0 \\\\ 15}\n ~~~~~~\\mbox{ or }~~~~~~\n (-0.6) \\cdot v = (-0.6) \\cdot \\mymatrix{r}{1 \\\\ -2 \\\\ 0 \\\\ 5} = \\mymatrix{r}{-0.6 \\\\ 1.2 \\\\ 0 \\\\ -3}.\n$$\n\nWe may consider this as enlarging or making smaller the entries of a vector.\n\nWe verify our calculations in python.",
"_____no_output_____"
]
],
[
[
"# 3 * v\nv = [1,-2,0,5]\nprint(\"v is\",v)\n# we use the same list for the result\nfor i in range(len(v)):\n v[i] = 3 * v[i]\nprint(\"3v is\",v)\n\n# -0.6 * u\n# reinitialize the list v\nv = [1,-2,0,5]\nfor i in range(len(v)):\n v[i] = -0.6 * v[i]\nprint(\"0.6v is\",v)",
"_____no_output_____"
]
],
[
[
"<h3> Summation of vectors</h3>\n\nTwo vectors (with same dimension) can be summed up.\n\nThe summation of two vectors is a vector: the numbers on the same entries are added up.\n\n$$\n u = \\myrvector{-3 \\\\ -2 \\\\ 0 \\\\ -1 \\\\ 4} \\mbox{ and } v = \\myrvector{-1\\\\ -1 \\\\2 \\\\ -3 \\\\ 5}.\n ~~~~~~~ \\mbox{Then, }~~\n u+v = \\myrvector{-3 \\\\ -2 \\\\ 0 \\\\ -1 \\\\ 4} + \\myrvector{-1\\\\ -1 \\\\2 \\\\ -3 \\\\ 5} =\n \\myrvector{-3+(-1)\\\\ -2+(-1) \\\\0+2 \\\\ -1+(-3) \\\\ 4+5} = \\myrvector{-4\\\\ -3 \\\\2 \\\\ -4 \\\\ 9}.\n$$\n\nWe do the same calculations in Python.",
"_____no_output_____"
]
],
[
[
"u = [-3,-2,0,-1,4]\nv = [-1,-1,2,-3,5]\nresult=[]\nfor i in range(len(u)):\n result.append(u[i]+v[i])\n\nprint(\"u+v is\",result)\n\n# print the result vector similarly to a column vector\nprint() # print an empty line\nprint(\"the elements of u+v are\")\nfor j in range(len(result)):\n print(result[j])",
"_____no_output_____"
]
],
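[
[
"As an optional aside (not part of the original notebook), the same operations can be written with numpy arrays, which add element-wise and scale by a number directly:",
"_____no_output_____"
],
[
"# Optional aside: the same vector operations with numpy arrays.\nimport numpy as np\n\nu = np.array([-3, -2, 0, -1, 4])\nv = np.array([-1, -1, 2, -3, 5])\nprint('u+v is', u + v)  # element-wise addition\nprint('3v is', 3 * v)   # multiplying a vector by a number",
"_____no_output_____"
]
],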
[
[
"<h3> Task 1 </h3>\n\nCreate two 7-dimensional vectors $u$ and $ v $ as two different lists in Python having entries randomly picked between $-10$ and $10$. \n\nPrint their entries.",
"_____no_output_____"
]
],
[
[
"from random import randrange\n#\n# your solution is here\n#\n\n#r=randrange(-10,11) # randomly pick a number from the list {-10,-9,...,-1,0,1,...,9,10}\n",
"_____no_output_____"
]
],
[
[
"<a href=\"Math20_Vectors_Solutions.ipynb#task1\">click for our solution</a>",
"_____no_output_____"
],
[
"<h3> Task 2 </h3>\n\nBy using the same vectors, find the vector $ (3 u-2 v) $ and print its entries. Here $ 3u $ and $ 2v $ means $u$ and $v$ are multiplied by $3$ and $2$, respectively.",
"_____no_output_____"
]
],
[
[
"#\n# your solution is here\n#\n",
"_____no_output_____"
]
],
[
[
"<a href=\"Math20_Vectors_Solutions.ipynb#task2\">click for our solution</a>",
"_____no_output_____"
],
[
"<h3> Visualization of vectors </h3>\n\nWe can visualize the vectors with dimension at most 3. \n\nFor simplicity, we give examples of 2-dimensional vectors. \n\nConsider the vector $ v = \\myvector{1 \\\\ 2} $. \n\nA 2-dimensional vector can be represented on the two-dimensional plane by an arrow starting from the origin $ (0,0) $ to the point $ (1,2) $.",
"_____no_output_____"
],
[
"<img src=\"../images/vector_1_2-small.jpg\" width=\"40%\">",
"_____no_output_____"
],
[
"We represent the vectors $ 2v = \\myvector{2 \\\\ 4} $ and $ -v = \\myvector{-1 \\\\ -2} $ below.\n\n<img src=\"../images/vectors_2_4_-1_-2.jpg\" width=\"40%\">\n\nAs we can observe, after multiplying by 2, the vector is enlarged, and, after multiplying by $(-1)$, the vector is the same but its direction is opposite.",
"_____no_output_____"
],
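[
"If you would like to reproduce such figures yourself, the following optional cell (not part of the original notebook) draws $ v $, $ 2v $ and $ -v $ as arrows with matplotlib.",
"_____no_output_____"
],
[
"# Optional sketch (not part of the original notebook): drawing v, 2v and -v as arrows.\nimport matplotlib.pyplot as plt\n\nvectors = {'v': ([1, 2], 'b'), '2v': ([2, 4], 'g'), '-v': ([-1, -2], 'r')}\nplt.figure(figsize=(5, 5))\nfor name, (vec, color) in vectors.items():\n    plt.quiver(0, 0, vec[0], vec[1], angles='xy', scale_units='xy', scale=1, color=color)\n    plt.text(vec[0], vec[1], name, color=color)\nplt.xlim(-3, 3)\nplt.ylim(-5, 5)\nplt.grid(True)\nplt.show()",
"_____no_output_____"
],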
[
"<h3> The length of a vector </h3>\n\nThe length of a vector is the (shortest) distance from the points represented by the entries of vector to the origin point $(0,0)$.\n\nThe length of a vector can be calculated by using Pythagoras Theorem. \n\nWe visualize a vector, its length, and the contributions of each entry to the length. \n\nConsider the vector $ u = \\myrvector{-3 \\\\ 4} $.",
"_____no_output_____"
],
[
"<img src=\"../images/length_-3_4-small.jpg\" width=\"80%\">",
"_____no_output_____"
],
[
"The length of $ u $ is denoted as $ \\norm{u} $, and it is calculated as $ \\norm{u} =\\sqrt{(-3)^2+4^2} = 5 $. \n\nHere each entry contributes with its square value. All contributions are summed up. Then, we obtain the square of the length. \n\nThis formula is generalized to any dimension. \n\nWe find the length of the following vector by using Python:\n \n$$\n v = \\myrvector{-1 \\\\ -3 \\\\ 5 \\\\ 3 \\\\ 1 \\\\ 2}\n ~~~~~~~~~~\n \\mbox{and}\n ~~~~~~~~~~\n \\norm{v} = \\sqrt{(-1)^2+(-3)^2+5^2+3^2+1^2+2^2} .\n$$",
"_____no_output_____"
],
[
"<div style=\"font-style:italic;background-color:#fafafa;font-size:10pt;\"> Remember: There is a short way of writing power operation in Python. \n <ul>\n <li> In its generic form: $ a^x $ can be denoted by $ a ** x $ in Python. </li>\n <li> The square of a number $a$: $ a^2 $ can be denoted by $ a ** 2 $ in Python. </li>\n <li> The square root of a number $ a $: $ \\sqrt{a} = a^{\\frac{1}{2}} = a^{0.5} $ can be denoted by $ a ** 0.5 $ in Python.</li>\n </ul>\n</div>",
"_____no_output_____"
]
],
[
[
"v = [-1,-3,5,3,1,2]\n\nlength_square=0\nfor i in range(len(v)):\n print(v[i],\":square ->\",v[i]**2) # print each entry and its square value\n length_square = length_square + v[i]**2 # sum up the square of each entry\n\nlength = length_square ** 0.5 # take the square root of the summation of the squares of all entries\nprint(\"the summation is\",length_square)\nprint(\"then the length is\",length)\n\n# for square root, we can also use built-in function math.sqrt\nprint() # print an empty line\nfrom math import sqrt\nprint(\"the square root of\",length_square,\"is\",sqrt(length_square))",
"_____no_output_____"
]
],
[
[
"<h3> Task 3 </h3>\n\nLet $ u = \\myrvector{1 \\\\ -2 \\\\ -4 \\\\ 2} $ be a four dimensional vector.\n\nVerify that $ \\norm{4 u} = 4 \\cdot \\norm{u} $ in Python. \n\nRemark that $ 4u $ is another vector obtained from $ u $ by multiplying it with 4. ",
"_____no_output_____"
]
],
[
[
"#\n# your solution is here\n#\n",
"_____no_output_____"
]
],
[
[
"<a href=\"Math20_Vectors_Solutions.ipynb#task3\">click for our solution</a>",
"_____no_output_____"
],
[
"<h3> Notes:</h3>\n\nWhen a vector is multiplied by a number, then its length is also multiplied with the same number.\n\nBut, we should be careful with the sign.\n\nConsider the vector $ -3 v $. It has the same length of $ 3v $, but its direction is opposite.\n\nSo, when calculating the length of $ -3 v $, we use absolute value of the number:\n\n$ \\norm{-3 v} = |-3| \\norm{v} = 3 \\norm{v} $.\n\nHere $ |-3| $ is the absolute value of $ -3 $. \n\nThe absolute value of a number is its distance to 0. So, $ |-3| = 3 $.",
"_____no_output_____"
],
[
"<h3> Task 4 </h3>\n\nLet $ u = \\myrvector{1 \\\\ -2 \\\\ -4 \\\\ 2} $ be a four dimensional vector.\n\nRandomly pick a number $r$ from $ \\left\\{ \\dfrac{1}{10}, \\dfrac{2}{10}, \\cdots, \\dfrac{9}{10} \\right\\} $.\n\nFind the vector $(-r)\\cdot u$ and then its length.",
"_____no_output_____"
]
],
[
[
"#\n# your solution is here\n#\n",
"_____no_output_____"
]
],
[
[
"<a href=\"Math20_Vectors_Solutions.ipynb#task4\">click for our solution</a>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e76eea3b3593dc62f88cb7de55f277c21328cc7c | 171,076 | ipynb | Jupyter Notebook | nbs/012_data.external.ipynb | clancy0614/tsai | 51dd8fbfc813e877536ab41ceb2da1b3ac84d9ca | [
"Apache-2.0"
] | 1 | 2021-11-15T07:44:29.000Z | 2021-11-15T07:44:29.000Z | nbs/012_data.external.ipynb | nimingyonghuLiu/tsai | 577cb73373681a8dac46bcee0f23a24f0178b639 | [
"Apache-2.0"
] | null | null | null | nbs/012_data.external.ipynb | nimingyonghuLiu/tsai | 577cb73373681a8dac46bcee0f23a24f0178b639 | [
"Apache-2.0"
] | 1 | 2021-11-14T02:58:25.000Z | 2021-11-14T02:58:25.000Z | 72.860307 | 8,796 | 0.65022 | [
[
[
"# default_exp data.external",
"_____no_output_____"
]
],
[
[
"# External data\n\n> Helper functions used to download and extract common time series datasets.",
"_____no_output_____"
]
],
[
[
"#export\nfrom tsai.imports import *\nfrom tsai.utils import * \nfrom tsai.data.validation import *",
"_____no_output_____"
],
[
"#export\nfrom sktime.utils.data_io import load_from_tsfile_to_dataframe as ts2df\nfrom sktime.utils.validation.panel import check_X\nfrom sktime.utils.data_io import TsFileParseException",
"_____no_output_____"
],
[
"#export\nfrom fastai.data.external import *\nfrom tqdm import tqdm\nimport zipfile\nimport tempfile\ntry: from urllib import urlretrieve\nexcept ImportError: from urllib.request import urlretrieve\nimport shutil\nfrom numpy import distutils\nimport distutils",
"_____no_output_____"
],
[
"#export\ndef decompress_from_url(url, target_dir=None, verbose=False):\n # Download\n try:\n pv(\"downloading data...\", verbose)\n fname = os.path.basename(url)\n tmpdir = tempfile.mkdtemp()\n tmpfile = os.path.join(tmpdir, fname)\n urlretrieve(url, tmpfile)\n pv(\"...data downloaded\", verbose)\n\n # Decompress\n try:\n pv(\"decompressing data...\", verbose)\n if not os.path.exists(target_dir): os.makedirs(target_dir)\n shutil.unpack_archive(tmpfile, target_dir)\n shutil.rmtree(tmpdir)\n pv(\"...data decompressed\", verbose)\n return target_dir\n \n except:\n shutil.rmtree(tmpdir)\n if verbose: sys.stderr.write(\"Could not decompress file, aborting.\\n\")\n\n except:\n shutil.rmtree(tmpdir)\n if verbose:\n sys.stderr.write(\"Could not download url. Please, check url.\\n\")",
"_____no_output_____"
],
[
"#export\nfrom fastdownload import download_url\ndef download_data(url, fname=None, c_key='archive', force_download=False, timeout=4, verbose=False):\n \"Download `url` to `fname`.\"\n fname = Path(fname or URLs.path(url, c_key=c_key))\n fname.parent.mkdir(parents=True, exist_ok=True)\n if not fname.exists() or force_download: download_url(url, dest=fname, timeout=timeout, show_progress=verbose)\n return fname",
"_____no_output_____"
],
[
"# export\ndef get_UCR_univariate_list():\n return [\n 'ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY',\n 'AllGestureWiimoteZ', 'ArrowHead', 'Beef', 'BeetleFly', 'BirdChicken',\n 'BME', 'Car', 'CBF', 'Chinatown', 'ChlorineConcentration',\n 'CinCECGTorso', 'Coffee', 'Computers', 'CricketX', 'CricketY',\n 'CricketZ', 'Crop', 'DiatomSizeReduction',\n 'DistalPhalanxOutlineAgeGroup', 'DistalPhalanxOutlineCorrect',\n 'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame',\n 'DodgerLoopWeekend', 'Earthquakes', 'ECG200', 'ECG5000', 'ECGFiveDays',\n 'ElectricDevices', 'EOGHorizontalSignal', 'EOGVerticalSignal',\n 'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords',\n 'Fish', 'FordA', 'FordB', 'FreezerRegularTrain', 'FreezerSmallTrain',\n 'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3',\n 'GesturePebbleZ1', 'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan',\n 'GunPointMaleVersusFemale', 'GunPointOldVersusYoung', 'Ham',\n 'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate',\n 'InsectEPGRegularTrain', 'InsectEPGSmallTrain', 'InsectWingbeatSound',\n 'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2',\n 'Lightning7', 'Mallat', 'Meat', 'MedicalImages', 'MelbournePedestrian',\n 'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect',\n 'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain',\n 'MoteStrain', 'NonInvasiveFetalECGThorax1',\n 'NonInvasiveFetalECGThorax2', 'OliveOil', 'OSULeaf',\n 'PhalangesOutlinesCorrect', 'Phoneme', 'PickupGestureWiimoteZ',\n 'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'PLAID', 'Plane',\n 'PowerCons', 'ProximalPhalanxOutlineAgeGroup',\n 'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW',\n 'RefrigerationDevices', 'Rock', 'ScreenType', 'SemgHandGenderCh2',\n 'SemgHandMovementCh2', 'SemgHandSubjectCh2', 'ShakeGestureWiimoteZ',\n 'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace',\n 'SonyAIBORobotSurface1', 'SonyAIBORobotSurface2', 'StarLightCurves',\n 'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl',\n 'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG',\n 'TwoPatterns', 'UMD', 'UWaveGestureLibraryAll', 'UWaveGestureLibraryX',\n 'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine',\n 'WordSynonyms', 'Worms', 'WormsTwoClass', 'Yoga'\n ]\n\n\ntest_eq(len(get_UCR_univariate_list()), 128)\nUTSC_datasets = get_UCR_univariate_list()\nUCR_univariate_list = get_UCR_univariate_list()",
"_____no_output_____"
],
[
"#export\ndef get_UCR_multivariate_list():\n return [\n 'ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions',\n 'CharacterTrajectories', 'Cricket', 'DuckDuckGeese', 'EigenWorms',\n 'Epilepsy', 'ERing', 'EthanolConcentration', 'FaceDetection',\n 'FingerMovements', 'HandMovementDirection', 'Handwriting', 'Heartbeat',\n 'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery',\n 'NATOPS', 'PEMS-SF', 'PenDigits', 'PhonemeSpectra', 'RacketSports',\n 'SelfRegulationSCP1', 'SelfRegulationSCP2', 'SpokenArabicDigits',\n 'StandWalkJump', 'UWaveGestureLibrary'\n ]\n\ntest_eq(len(get_UCR_multivariate_list()), 30)\nMTSC_datasets = get_UCR_multivariate_list()\nUCR_multivariate_list = get_UCR_multivariate_list()\n\nUCR_list = sorted(UCR_univariate_list + UCR_multivariate_list)\nclassification_list = UCR_list\nTSC_datasets = classification_datasets = UCR_list\nlen(UCR_list)",
"_____no_output_____"
],
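[
"A quick optional peek at the dataset lists defined above (illustrative only, not exported with the library):",
"_____no_output_____"
],
[
"# Optional illustration (not exported): a quick look at the dataset lists defined above.\nprint(UCR_univariate_list[:3])\nprint(UCR_multivariate_list[:3])\nprint(len(UCR_list), 'datasets in total')",
"_____no_output_____"
],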
[
"#export\ndef get_UCR_data(dsid, path='.', parent_dir='data/UCR', on_disk=True, mode='c', Xdtype='float32', ydtype=None, return_split=True, split_data=True, \n force_download=False, verbose=False):\n dsid_list = [ds for ds in UCR_list if ds.lower() == dsid.lower()]\n assert len(dsid_list) > 0, f'{dsid} is not a UCR dataset'\n dsid = dsid_list[0]\n return_split = return_split and split_data # keep return_split for compatibility. It will be replaced by split_data\n if dsid in ['InsectWingbeat']:\n warnings.warn(f'Be aware that download of the {dsid} dataset is very slow!')\n pv(f'Dataset: {dsid}', verbose)\n full_parent_dir = Path(path)/parent_dir\n full_tgt_dir = full_parent_dir/dsid\n# if not os.path.exists(full_tgt_dir): os.makedirs(full_tgt_dir)\n full_tgt_dir.parent.mkdir(parents=True, exist_ok=True)\n if force_download or not all([os.path.isfile(f'{full_tgt_dir}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):\n # Option A\n src_website = 'http://www.timeseriesclassification.com/Downloads'\n decompress_from_url(f'{src_website}/{dsid}.zip', target_dir=full_tgt_dir, verbose=verbose)\n if dsid == 'DuckDuckGeese':\n with zipfile.ZipFile(Path(f'{full_parent_dir}/DuckDuckGeese/DuckDuckGeese_ts.zip'), 'r') as zip_ref:\n zip_ref.extractall(Path(parent_dir))\n if not os.path.exists(full_tgt_dir/f'{dsid}_TRAIN.ts') or not os.path.exists(full_tgt_dir/f'{dsid}_TRAIN.ts') or \\\n Path(full_tgt_dir/f'{dsid}_TRAIN.ts').stat().st_size == 0 or Path(full_tgt_dir/f'{dsid}_TEST.ts').stat().st_size == 0: \n print('It has not been possible to download the required files')\n if return_split:\n return None, None, None, None\n else:\n return None, None, None\n \n pv('loading ts files to dataframe...', verbose)\n X_train_df, y_train = ts2df(full_tgt_dir/f'{dsid}_TRAIN.ts')\n X_valid_df, y_valid = ts2df(full_tgt_dir/f'{dsid}_TEST.ts')\n pv('...ts files loaded', verbose)\n pv('preparing numpy arrays...', verbose)\n X_train_ = []\n X_valid_ = []\n for i in progress_bar(range(X_train_df.shape[-1]), display=verbose, leave=False):\n X_train_.append(stack_pad(X_train_df[f'dim_{i}'])) # stack arrays even if they have different lengths\n X_valid_.append(stack_pad(X_valid_df[f'dim_{i}'])) # stack arrays even if they have different lengths\n X_train = np.transpose(np.stack(X_train_, axis=-1), (0, 2, 1))\n X_valid = np.transpose(np.stack(X_valid_, axis=-1), (0, 2, 1))\n X_train, X_valid = match_seq_len(X_train, X_valid) \n \n np.save(f'{full_tgt_dir}/X_train.npy', X_train)\n np.save(f'{full_tgt_dir}/y_train.npy', y_train)\n np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)\n np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)\n np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))\n np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))\n del X_train, X_valid, y_train, y_valid\n delete_all_in_dir(full_tgt_dir, exception='.npy')\n pv('...numpy arrays correctly saved', verbose)\n\n mmap_mode = mode if on_disk else None\n X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)\n y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)\n X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)\n y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)\n\n if return_split:\n if Xdtype is not None: \n X_train = X_train.astype(Xdtype)\n X_valid = X_valid.astype(Xdtype)\n if ydtype is not None: \n y_train = y_train.astype(ydtype)\n y_valid = y_valid.astype(ydtype)\n if verbose:\n print('X_train:', X_train.shape)\n print('y_train:', 
y_train.shape)\n print('X_valid:', X_valid.shape)\n print('y_valid:', y_valid.shape, '\\n')\n return X_train, y_train, X_valid, y_valid\n else:\n X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)\n y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)\n splits = get_predefined_splits(X_train, X_valid)\n if Xdtype is not None: \n X = X.astype(Xdtype)\n if verbose:\n print('X :', X .shape)\n print('y :', y .shape)\n print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\\n')\n return X, y, splits\n \n \nget_classification_data = get_UCR_data",
"_____no_output_____"
],
[
"#hide\nPATH = Path('.')\ndsids = ['ECGFiveDays', 'AtrialFibrillation'] # univariate and multivariate\nfor dsid in dsids:\n print(dsid)\n tgt_dir = PATH/f'data/UCR/{dsid}'\n if os.path.isdir(tgt_dir): shutil.rmtree(tgt_dir)\n test_eq(len(get_files(tgt_dir)), 0) # no file left\n X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)\n test_eq(len(get_files(tgt_dir, '.npy')), 6)\n test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test no left file/ dir\n del X_train, y_train, X_valid, y_valid\n start = time.time()\n X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)\n elapsed = time.time() - start\n test_eq(elapsed < 1, True)\n test_eq(X_train.ndim, 3)\n test_eq(y_train.ndim, 1)\n test_eq(X_valid.ndim, 3)\n test_eq(y_valid.ndim, 1)\n test_eq(len(get_files(tgt_dir, '.npy')), 6)\n test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test no left file/ dir\n test_eq(X_train.ndim, 3)\n test_eq(y_train.ndim, 1)\n test_eq(X_valid.ndim, 3)\n test_eq(y_valid.ndim, 1)\n test_eq(X_train.dtype, np.float32)\n test_eq(X_train.__class__.__name__, 'memmap')\n del X_train, y_train, X_valid, y_valid\n X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, on_disk=False)\n test_eq(X_train.__class__.__name__, 'ndarray')\n del X_train, y_train, X_valid, y_valid",
"ECGFiveDays\nAtrialFibrillation\n"
],
[
"X_train, y_train, X_valid, y_valid = get_UCR_data('natops')",
"_____no_output_____"
],
[
"dsid = 'natops' \nX_train, y_train, X_valid, y_valid = get_UCR_data(dsid, verbose=True)\nX, y, splits = get_UCR_data(dsid, split_data=False)\ntest_eq(X[splits[0]], X_train)\ntest_eq(y[splits[1]], y_valid)\ntest_eq(X[splits[0]], X_train)\ntest_eq(y[splits[1]], y_valid)\ntest_type(X, X_train)\ntest_type(y, y_train)",
"Dataset: NATOPS\nX_train: (180, 24, 51)\ny_train: (180,)\nX_valid: (180, 24, 51)\ny_valid: (180,) \n\n"
],
[
"#export\ndef check_data(X, y=None, splits=None, show_plot=True):\n try: X_is_nan = np.isnan(X).sum()\n except: X_is_nan = 'couldn not be checked'\n if X.ndim == 3:\n shape = f'[{X.shape[0]} samples x {X.shape[1]} features x {X.shape[-1]} timesteps]'\n print(f'X - shape: {shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')\n else:\n print(f'X - shape: {X.shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')\n if not isinstance(X, np.ndarray): warnings.warn('X must be a np.ndarray')\n if X_is_nan: \n warnings.warn('X must not contain nan values')\n if y is not None:\n y_shape = y.shape\n y = y.ravel()\n if isinstance(y[0], str):\n n_classes = f'{len(np.unique(y))} ({len(y)//len(np.unique(y))} samples per class) {L(np.unique(y).tolist())}'\n y_is_nan = 'nan' in [c.lower() for c in np.unique(y)]\n print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} n_classes: {n_classes} isnan: {y_is_nan}')\n else:\n y_is_nan = np.isnan(y).sum()\n print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} isnan: {y_is_nan}')\n if not isinstance(y, np.ndarray): warnings.warn('y must be a np.ndarray')\n if y_is_nan: \n warnings.warn('y must not contain nan values')\n if splits is not None:\n _splits = get_splits_len(splits)\n overlap = check_splits_overlap(splits)\n print(f'splits - n_splits: {len(_splits)} shape: {_splits} overlap: {overlap}')\n if show_plot: plot_splits(splits)",
"_____no_output_____"
],
[
"dsid = 'ECGFiveDays'\nX, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=True)\ncheck_data(X, y, splits)\ncheck_data(X[:, 0], y, splits)\ny = y.astype(np.float32)\ncheck_data(X, y, splits)\ny[:10] = np.nan\ncheck_data(X[:, 0], y, splits)\nX, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=True)\nsplits = get_splits(y, 3)\ncheck_data(X, y, splits)\ncheck_data(X[:, 0], y, splits)\ny[:5]= np.nan\ncheck_data(X[:, 0], y, splits)\nX, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=True)",
"X - shape: [884 samples x 1 features x 136 timesteps] type: ndarray dtype:float32 isnan: 0\ny - shape: (884,) type: ndarray dtype:<U1 n_classes: 2 (442 samples per class) ['1', '2'] isnan: False\nsplits - n_splits: 2 shape: [23, 861] overlap: [False]\n"
],
[
"#export\n# This code comes from https://github.com/ChangWeiTan/TSRegression. As of Jan 16th, 2021 there's no pip install available.\n\n# The following code is adapted from the python package sktime to read .ts file.\nclass _TsFileParseException(Exception):\n \"\"\"\n Should be raised when parsing a .ts file and the format is incorrect.\n \"\"\"\n pass\n\ndef _load_from_tsfile_to_dataframe2(full_file_path_and_name, return_separate_X_and_y=True, replace_missing_vals_with='NaN'):\n \"\"\"Loads data from a .ts file into a Pandas DataFrame.\n Parameters\n ----------\n full_file_path_and_name: str\n The full pathname of the .ts file to read.\n return_separate_X_and_y: bool\n true if X and Y values should be returned as separate Data Frames (X) and a numpy array (y), false otherwise.\n This is only relevant for data that\n replace_missing_vals_with: str\n The value that missing values in the text file should be replaced with prior to parsing.\n Returns\n -------\n DataFrame, ndarray\n If return_separate_X_and_y then a tuple containing a DataFrame and a numpy array containing the relevant time-series and corresponding class values.\n DataFrame\n If not return_separate_X_and_y then a single DataFrame containing all time-series and (if relevant) a column \"class_vals\" the associated class values.\n \"\"\"\n\n # Initialize flags and variables used when parsing the file\n metadata_started = False\n data_started = False\n\n has_problem_name_tag = False\n has_timestamps_tag = False\n has_univariate_tag = False\n has_class_labels_tag = False\n has_target_labels_tag = False\n has_data_tag = False\n\n previous_timestamp_was_float = None\n previous_timestamp_was_int = None\n previous_timestamp_was_timestamp = None\n num_dimensions = None\n is_first_case = True\n instance_list = []\n class_val_list = []\n line_num = 0\n\n # Parse the file\n # print(full_file_path_and_name)\n with open(full_file_path_and_name, 'r', encoding='utf-8') as file:\n for line in tqdm(file):\n # print(\".\", end='')\n # Strip white space from start/end of line and change to lowercase for use below\n line = line.strip().lower()\n # Empty lines are valid at any point in a file\n if line:\n # Check if this line contains metadata\n # Please note that even though metadata is stored in this function it is not currently published externally\n if line.startswith(\"@problemname\"):\n # Check that the data has not started\n if data_started:\n raise _TsFileParseException(\"metadata must come before data\")\n # Check that the associated value is valid\n tokens = line.split(' ')\n token_len = len(tokens)\n\n if token_len == 1:\n raise _TsFileParseException(\"problemname tag requires an associated value\")\n\n problem_name = line[len(\"@problemname\") + 1:]\n has_problem_name_tag = True\n metadata_started = True\n elif line.startswith(\"@timestamps\"):\n # Check that the data has not started\n if data_started:\n raise _TsFileParseException(\"metadata must come before data\")\n\n # Check that the associated value is valid\n tokens = line.split(' ')\n token_len = len(tokens)\n\n if token_len != 2:\n raise _TsFileParseException(\"timestamps tag requires an associated Boolean value\")\n elif tokens[1] == \"true\":\n timestamps = True\n elif tokens[1] == \"false\":\n timestamps = False\n else:\n raise _TsFileParseException(\"invalid timestamps value\")\n has_timestamps_tag = True\n metadata_started = True\n elif line.startswith(\"@univariate\"):\n # Check that the data has not started\n if data_started:\n raise _TsFileParseException(\"metadata must 
come before data\")\n\n # Check that the associated value is valid\n tokens = line.split(' ')\n token_len = len(tokens)\n if token_len != 2:\n raise _TsFileParseException(\"univariate tag requires an associated Boolean value\")\n elif tokens[1] == \"true\":\n univariate = True\n elif tokens[1] == \"false\":\n univariate = False\n else:\n raise _TsFileParseException(\"invalid univariate value\")\n\n has_univariate_tag = True\n metadata_started = True\n elif line.startswith(\"@classlabel\"):\n # Check that the data has not started\n if data_started:\n raise _TsFileParseException(\"metadata must come before data\")\n\n # Check that the associated value is valid\n tokens = line.split(' ')\n token_len = len(tokens)\n\n if token_len == 1:\n raise _TsFileParseException(\"classlabel tag requires an associated Boolean value\")\n\n if tokens[1] == \"true\":\n class_labels = True\n elif tokens[1] == \"false\":\n class_labels = False\n else:\n raise _TsFileParseException(\"invalid classLabel value\")\n\n # Check if we have any associated class values\n if token_len == 2 and class_labels:\n raise _TsFileParseException(\"if the classlabel tag is true then class values must be supplied\")\n\n has_class_labels_tag = True\n class_label_list = [token.strip() for token in tokens[2:]]\n metadata_started = True\n elif line.startswith(\"@targetlabel\"):\n # Check that the data has not started\n if data_started:\n raise _TsFileParseException(\"metadata must come before data\")\n\n # Check that the associated value is valid\n tokens = line.split(' ')\n token_len = len(tokens)\n\n if token_len == 1:\n raise _TsFileParseException(\"targetlabel tag requires an associated Boolean value\")\n\n if tokens[1] == \"true\":\n target_labels = True\n elif tokens[1] == \"false\":\n target_labels = False\n else:\n raise _TsFileParseException(\"invalid targetLabel value\")\n\n has_target_labels_tag = True\n class_val_list = []\n metadata_started = True\n # Check if this line contains the start of data\n elif line.startswith(\"@data\"):\n if line != \"@data\":\n raise _TsFileParseException(\"data tag should not have an associated value\")\n\n if data_started and not metadata_started:\n raise _TsFileParseException(\"metadata must come before data\")\n else:\n has_data_tag = True\n data_started = True\n # If the 'data tag has been found then metadata has been parsed and data can be loaded\n elif data_started:\n # Check that a full set of metadata has been provided\n incomplete_regression_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_target_labels_tag or not has_data_tag\n incomplete_classification_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_class_labels_tag or not has_data_tag\n if incomplete_regression_meta_data and incomplete_classification_meta_data:\n raise _TsFileParseException(\"a full set of metadata has not been provided before the data\")\n\n # Replace any missing values with the value specified\n line = line.replace(\"?\", replace_missing_vals_with)\n\n # Check if we dealing with data that has timestamps\n if timestamps:\n # We're dealing with timestamps so cannot just split line on ':' as timestamps may contain one\n has_another_value = False\n has_another_dimension = False\n\n timestamps_for_dimension = []\n values_for_dimension = []\n\n this_line_num_dimensions = 0\n line_len = len(line)\n char_num = 0\n\n while char_num < line_len:\n # Move through any spaces\n while char_num < line_len and 
str.isspace(line[char_num]):\n char_num += 1\n\n # See if there is any more data to read in or if we should validate that read thus far\n\n if char_num < line_len:\n\n # See if we have an empty dimension (i.e. no values)\n if line[char_num] == \":\":\n if len(instance_list) < (this_line_num_dimensions + 1):\n instance_list.append([])\n\n instance_list[this_line_num_dimensions].append(pd.Series())\n this_line_num_dimensions += 1\n\n has_another_value = False\n has_another_dimension = True\n\n timestamps_for_dimension = []\n values_for_dimension = []\n\n char_num += 1\n else:\n # Check if we have reached a class label\n if line[char_num] != \"(\" and target_labels:\n class_val = line[char_num:].strip()\n\n # if class_val not in class_val_list:\n # raise _TsFileParseException(\n # \"the class value '\" + class_val + \"' on line \" + str(\n # line_num + 1) + \" is not valid\")\n\n class_val_list.append(float(class_val))\n char_num = line_len\n\n has_another_value = False\n has_another_dimension = False\n\n timestamps_for_dimension = []\n values_for_dimension = []\n\n else:\n\n # Read in the data contained within the next tuple\n\n if line[char_num] != \"(\" and not target_labels:\n raise _TsFileParseException(\n \"dimension \" + str(this_line_num_dimensions + 1) + \" on line \" + str(\n line_num + 1) + \" does not start with a '('\")\n\n char_num += 1\n tuple_data = \"\"\n\n while char_num < line_len and line[char_num] != \")\":\n tuple_data += line[char_num]\n char_num += 1\n\n if char_num >= line_len or line[char_num] != \")\":\n raise _TsFileParseException(\n \"dimension \" + str(this_line_num_dimensions + 1) + \" on line \" + str(\n line_num + 1) + \" does not end with a ')'\")\n\n # Read in any spaces immediately after the current tuple\n\n char_num += 1\n\n while char_num < line_len and str.isspace(line[char_num]):\n char_num += 1\n\n # Check if there is another value or dimension to process after this tuple\n\n if char_num >= line_len:\n has_another_value = False\n has_another_dimension = False\n\n elif line[char_num] == \",\":\n has_another_value = True\n has_another_dimension = False\n\n elif line[char_num] == \":\":\n has_another_value = False\n has_another_dimension = True\n\n char_num += 1\n\n # Get the numeric value for the tuple by reading from the end of the tuple data backwards to the last comma\n\n last_comma_index = tuple_data.rfind(',')\n\n if last_comma_index == -1:\n raise _TsFileParseException(\n \"dimension \" + str(this_line_num_dimensions + 1) + \" on line \" + str(\n line_num + 1) + \" contains a tuple that has no comma inside of it\")\n\n try:\n value = tuple_data[last_comma_index + 1:]\n value = float(value)\n\n except ValueError:\n raise _TsFileParseException(\n \"dimension \" + str(this_line_num_dimensions + 1) + \" on line \" + str(\n line_num + 1) + \" contains a tuple that does not have a valid numeric value\")\n\n # Check the type of timestamp that we have\n\n timestamp = tuple_data[0: last_comma_index]\n\n try:\n timestamp = int(timestamp)\n timestamp_is_int = True\n timestamp_is_timestamp = False\n except ValueError:\n timestamp_is_int = False\n\n if not timestamp_is_int:\n try:\n timestamp = float(timestamp)\n timestamp_is_float = True\n timestamp_is_timestamp = False\n except ValueError:\n timestamp_is_float = False\n\n if not timestamp_is_int and not timestamp_is_float:\n try:\n timestamp = timestamp.strip()\n timestamp_is_timestamp = True\n except ValueError:\n timestamp_is_timestamp = False\n\n # Make sure that the timestamps in the file (not just this 
dimension or case) are consistent\n\n if not timestamp_is_timestamp and not timestamp_is_int and not timestamp_is_float:\n raise _TsFileParseException(\n \"dimension \" + str(this_line_num_dimensions + 1) + \" on line \" + str(\n line_num + 1) + \" contains a tuple that has an invalid timestamp '\" + timestamp + \"'\")\n\n if previous_timestamp_was_float is not None and previous_timestamp_was_float and not timestamp_is_float:\n raise _TsFileParseException(\n \"dimension \" + str(this_line_num_dimensions + 1) + \" on line \" + str(\n line_num + 1) + \" contains tuples where the timestamp format is inconsistent\")\n\n if previous_timestamp_was_int is not None and previous_timestamp_was_int and not timestamp_is_int:\n raise _TsFileParseException(\n \"dimension \" + str(this_line_num_dimensions + 1) + \" on line \" + str(\n line_num + 1) + \" contains tuples where the timestamp format is inconsistent\")\n\n if previous_timestamp_was_timestamp is not None and previous_timestamp_was_timestamp and not timestamp_is_timestamp:\n raise _TsFileParseException(\n \"dimension \" + str(this_line_num_dimensions + 1) + \" on line \" + str(\n line_num + 1) + \" contains tuples where the timestamp format is inconsistent\")\n\n # Store the values\n\n timestamps_for_dimension += [timestamp]\n values_for_dimension += [value]\n\n # If this was our first tuple then we store the type of timestamp we had\n\n if previous_timestamp_was_timestamp is None and timestamp_is_timestamp:\n previous_timestamp_was_timestamp = True\n previous_timestamp_was_int = False\n previous_timestamp_was_float = False\n\n if previous_timestamp_was_int is None and timestamp_is_int:\n previous_timestamp_was_timestamp = False\n previous_timestamp_was_int = True\n previous_timestamp_was_float = False\n\n if previous_timestamp_was_float is None and timestamp_is_float:\n previous_timestamp_was_timestamp = False\n previous_timestamp_was_int = False\n previous_timestamp_was_float = True\n\n # See if we should add the data for this dimension\n\n if not has_another_value:\n if len(instance_list) < (this_line_num_dimensions + 1):\n instance_list.append([])\n\n if timestamp_is_timestamp:\n timestamps_for_dimension = pd.DatetimeIndex(timestamps_for_dimension)\n\n instance_list[this_line_num_dimensions].append(\n pd.Series(index=timestamps_for_dimension, data=values_for_dimension))\n this_line_num_dimensions += 1\n\n timestamps_for_dimension = []\n values_for_dimension = []\n\n elif has_another_value:\n raise _TsFileParseException(\n \"dimension \" + str(this_line_num_dimensions + 1) + \" on line \" + str(\n line_num + 1) + \" ends with a ',' that is not followed by another tuple\")\n\n elif has_another_dimension and target_labels:\n raise _TsFileParseException(\n \"dimension \" + str(this_line_num_dimensions + 1) + \" on line \" + str(\n line_num + 1) + \" ends with a ':' while it should list a class value\")\n\n elif has_another_dimension and not target_labels:\n if len(instance_list) < (this_line_num_dimensions + 1):\n instance_list.append([])\n\n instance_list[this_line_num_dimensions].append(pd.Series(dtype=np.float32))\n this_line_num_dimensions += 1\n num_dimensions = this_line_num_dimensions\n\n # If this is the 1st line of data we have seen then note the dimensions\n\n if not has_another_value and not has_another_dimension:\n if num_dimensions is None:\n num_dimensions = this_line_num_dimensions\n\n if num_dimensions != this_line_num_dimensions:\n raise _TsFileParseException(\"line \" + str(\n line_num + 1) + \" does not have the same number 
of dimensions as the previous line of data\")\n\n # Check that we are not expecting some more data, and if not, store that processed above\n\n if has_another_value:\n raise _TsFileParseException(\n \"dimension \" + str(this_line_num_dimensions + 1) + \" on line \" + str(\n line_num + 1) + \" ends with a ',' that is not followed by another tuple\")\n\n elif has_another_dimension and target_labels:\n raise _TsFileParseException(\n \"dimension \" + str(this_line_num_dimensions + 1) + \" on line \" + str(\n line_num + 1) + \" ends with a ':' while it should list a class value\")\n\n elif has_another_dimension and not target_labels:\n if len(instance_list) < (this_line_num_dimensions + 1):\n instance_list.append([])\n\n instance_list[this_line_num_dimensions].append(pd.Series())\n this_line_num_dimensions += 1\n num_dimensions = this_line_num_dimensions\n\n # If this is the 1st line of data we have seen then note the dimensions\n\n if not has_another_value and num_dimensions != this_line_num_dimensions:\n raise _TsFileParseException(\"line \" + str(\n line_num + 1) + \" does not have the same number of dimensions as the previous line of data\")\n\n # Check if we should have class values, and if so that they are contained in those listed in the metadata\n\n if target_labels and len(class_val_list) == 0:\n raise _TsFileParseException(\"the cases have no associated class values\")\n else:\n dimensions = line.split(\":\")\n # If first row then note the number of dimensions (that must be the same for all cases)\n if is_first_case:\n num_dimensions = len(dimensions)\n\n if target_labels:\n num_dimensions -= 1\n\n for dim in range(0, num_dimensions):\n instance_list.append([])\n is_first_case = False\n\n # See how many dimensions that the case whose data in represented in this line has\n this_line_num_dimensions = len(dimensions)\n\n if target_labels:\n this_line_num_dimensions -= 1\n\n # All dimensions should be included for all series, even if they are empty\n if this_line_num_dimensions != num_dimensions:\n raise _TsFileParseException(\"inconsistent number of dimensions. 
Expecting \" + str(\n num_dimensions) + \" but have read \" + str(this_line_num_dimensions))\n\n # Process the data for each dimension\n for dim in range(0, num_dimensions):\n dimension = dimensions[dim].strip()\n\n if dimension:\n data_series = dimension.split(\",\")\n data_series = [float(i) for i in data_series]\n instance_list[dim].append(pd.Series(data_series))\n else:\n instance_list[dim].append(pd.Series())\n\n if target_labels:\n class_val_list.append(float(dimensions[num_dimensions].strip()))\n\n line_num += 1\n\n # Check that the file was not empty\n if line_num:\n # Check that the file contained both metadata and data\n complete_regression_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_target_labels_tag and has_data_tag\n complete_classification_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_class_labels_tag and has_data_tag\n\n if metadata_started and not complete_regression_meta_data and not complete_classification_meta_data:\n raise _TsFileParseException(\"metadata incomplete\")\n elif metadata_started and not data_started:\n raise _TsFileParseException(\"file contained metadata but no data\")\n elif metadata_started and data_started and len(instance_list) == 0:\n raise _TsFileParseException(\"file contained metadata but no data\")\n\n # Create a DataFrame from the data parsed above\n data = pd.DataFrame(dtype=np.float32)\n\n for dim in range(0, num_dimensions):\n data['dim_' + str(dim)] = instance_list[dim]\n\n # Check if we should return any associated class labels separately\n\n if target_labels:\n if return_separate_X_and_y:\n return data, np.asarray(class_val_list)\n else:\n data['class_vals'] = pd.Series(class_val_list)\n return data\n else:\n return data\n else:\n raise _TsFileParseException(\"empty file\")",
"_____no_output_____"
],
[
"#export\ndef get_Monash_regression_list():\n return sorted([\n \"AustraliaRainfall\", \"HouseholdPowerConsumption1\",\n \"HouseholdPowerConsumption2\", \"BeijingPM25Quality\",\n \"BeijingPM10Quality\", \"Covid3Month\", \"LiveFuelMoistureContent\",\n \"FloodModeling1\", \"FloodModeling2\", \"FloodModeling3\",\n \"AppliancesEnergy\", \"BenzeneConcentration\", \"NewsHeadlineSentiment\",\n \"NewsTitleSentiment\", \"IEEEPPG\", \n #\"BIDMC32RR\", \"BIDMC32HR\", \"BIDMC32SpO2\", \"PPGDalia\" # Cannot be downloaded\n ])\n\nMonash_regression_list = get_Monash_regression_list()\nregression_list = Monash_regression_list\nTSR_datasets = regression_datasets = regression_list\nlen(Monash_regression_list)",
"_____no_output_____"
],
[
"#export\ndef get_Monash_regression_data(dsid, path='./data/Monash', on_disk=True, mode='c', Xdtype='float32', ydtype=None, split_data=True, force_download=False, \n verbose=False):\n\n dsid_list = [rd for rd in Monash_regression_list if rd.lower() == dsid.lower()]\n assert len(dsid_list) > 0, f'{dsid} is not a Monash dataset'\n dsid = dsid_list[0]\n full_tgt_dir = Path(path)/dsid\n pv(f'Dataset: {dsid}', verbose)\n\n if force_download or not all([os.path.isfile(f'{path}/{dsid}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):\n if dsid == 'AppliancesEnergy': id = 3902637\n elif dsid == 'HouseholdPowerConsumption1': id = 3902704\n elif dsid == 'HouseholdPowerConsumption2': id = 3902706\n elif dsid == 'BenzeneConcentration': id = 3902673\n elif dsid == 'BeijingPM25Quality': id = 3902671\n elif dsid == 'BeijingPM10Quality': id = 3902667\n elif dsid == 'LiveFuelMoistureContent': id = 3902716\n elif dsid == 'FloodModeling1': id = 3902694\n elif dsid == 'FloodModeling2': id = 3902696\n elif dsid == 'FloodModeling3': id = 3902698\n elif dsid == 'AustraliaRainfall': id = 3902654\n elif dsid == 'PPGDalia': id = 3902728\n elif dsid == 'IEEEPPG': id = 3902710\n elif dsid == 'BIDMCRR' or dsid == 'BIDM32CRR': id = 3902685\n elif dsid == 'BIDMCHR' or dsid == 'BIDM32CHR': id = 3902676\n elif dsid == 'BIDMCSpO2' or dsid == 'BIDM32CSpO2': id = 3902688\n elif dsid == 'NewsHeadlineSentiment': id = 3902718\n elif dsid == 'NewsTitleSentiment': id = 3902726\n elif dsid == 'Covid3Month': id = 3902690\n\n for split in ['TRAIN', 'TEST']:\n url = f\"https://zenodo.org/record/{id}/files/{dsid}_{split}.ts\"\n fname = Path(path)/f'{dsid}/{dsid}_{split}.ts'\n pv('downloading data...', verbose)\n try:\n download_data(url, fname, c_key='archive', force_download=force_download, timeout=4)\n except:\n warnings.warn(f'Cannot download {dsid} dataset')\n if split_data: return None, None, None, None\n else: return None, None, None\n pv('...download complete', verbose)\n if split == 'TRAIN':\n X_train, y_train = _load_from_tsfile_to_dataframe2(fname)\n X_train = check_X(X_train, coerce_to_numpy=True)\n else:\n X_valid, y_valid = _load_from_tsfile_to_dataframe2(fname)\n X_valid = check_X(X_valid, coerce_to_numpy=True)\n np.save(f'{full_tgt_dir}/X_train.npy', X_train)\n np.save(f'{full_tgt_dir}/y_train.npy', y_train)\n np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)\n np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)\n np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))\n np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))\n del X_train, X_valid, y_train, y_valid\n delete_all_in_dir(full_tgt_dir, exception='.npy')\n pv('...numpy arrays correctly saved', verbose)\n\n mmap_mode = mode if on_disk else None\n X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)\n y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)\n X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)\n y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)\n if Xdtype is not None: \n X_train = X_train.astype(Xdtype)\n X_valid = X_valid.astype(Xdtype)\n if ydtype is not None: \n y_train = y_train.astype(ydtype)\n y_valid = y_valid.astype(ydtype)\n\n if split_data:\n if verbose:\n print('X_train:', X_train.shape)\n print('y_train:', y_train.shape)\n print('X_valid:', X_valid.shape)\n print('y_valid:', y_valid.shape, '\\n')\n return X_train, y_train, X_valid, y_valid\n else:\n X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)\n y = 
np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)\n splits = get_predefined_splits(X_train, X_valid)\n if verbose:\n print('X :', X .shape)\n print('y :', y .shape)\n print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\\n')\n return X, y, splits\n\n\nget_regression_data = get_Monash_regression_data",
"_____no_output_____"
],
[
"dsid = \"Covid3Month\"\nX_train, y_train, X_valid, y_valid = get_Monash_regression_data(dsid, on_disk=False, split_data=True, force_download=True)\nX, y, splits = get_Monash_regression_data(dsid, on_disk=True, split_data=False, force_download=True, verbose=True)\nif X_train is not None: \n test_eq(X_train.shape, (140, 1, 84))\nif X is not None: \n test_eq(X.shape, (201, 1, 84))",
"153it [00:00, 1786.53it/s]\n74it [00:00, 1694.84it/s]\n"
],
[
"#export\ndef get_forecasting_list():\n return sorted([\n \"Sunspots\", \"Weather\"\n ])\n\nforecasting_time_series = get_forecasting_list()",
"_____no_output_____"
],
[
"#export\ndef get_forecasting_time_series(dsid, path='./data/forecasting/', force_download=False, verbose=True, **kwargs):\n \n dsid_list = [fd for fd in forecasting_time_series if fd.lower() == dsid.lower()]\n assert len(dsid_list) > 0, f'{dsid} is not a forecasting dataset'\n dsid = dsid_list[0]\n if dsid == 'Weather': full_tgt_dir = Path(path)/f'{dsid}.csv.zip'\n else: full_tgt_dir = Path(path)/f'{dsid}.csv'\n pv(f'Dataset: {dsid}', verbose)\n if dsid == 'Sunspots': url = \"https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Sunspots.csv\"\n elif dsid == 'Weather': url = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip'\n\n try: \n pv(\"downloading data...\", verbose)\n if force_download: \n try: os.remove(full_tgt_dir)\n except OSError: pass\n download_data(url, full_tgt_dir, force_download=force_download, **kwargs)\n pv(f\"...data downloaded. Path = {full_tgt_dir}\", verbose)\n\n if dsid == 'Sunspots': \n df = pd.read_csv(full_tgt_dir, parse_dates=['Date'], index_col=['Date'])\n return df['Monthly Mean Total Sunspot Number'].asfreq('1M').to_frame()\n\n elif dsid == 'Weather':\n # This code comes from a great Keras time-series tutorial notebook (https://www.tensorflow.org/tutorials/structured_data/time_series)\n df = pd.read_csv(full_tgt_dir)\n df = df[5::6] # slice [start:stop:step], starting from index 5 take every 6th record.\n\n date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')\n\n # remove error (negative wind)\n wv = df['wv (m/s)']\n bad_wv = wv == -9999.0\n wv[bad_wv] = 0.0\n\n max_wv = df['max. wv (m/s)']\n bad_max_wv = max_wv == -9999.0\n max_wv[bad_max_wv] = 0.0\n\n wv = df.pop('wv (m/s)')\n max_wv = df.pop('max. wv (m/s)')\n\n # Convert to radians.\n wd_rad = df.pop('wd (deg)')*np.pi / 180\n\n # Calculate the wind x and y components.\n df['Wx'] = wv*np.cos(wd_rad)\n df['Wy'] = wv*np.sin(wd_rad)\n\n # Calculate the max wind x and y components.\n df['max Wx'] = max_wv*np.cos(wd_rad)\n df['max Wy'] = max_wv*np.sin(wd_rad)\n\n timestamp_s = date_time.map(datetime.timestamp)\n day = 24*60*60\n year = (365.2425)*day\n\n df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))\n df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))\n df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))\n df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))\n df.reset_index(drop=True, inplace=True)\n return df\n else: \n return full_tgt_dir\n except: \n warnings.warn(f\"Cannot download {dsid} dataset\")\n return",
"_____no_output_____"
],
[
"ts = get_forecasting_time_series(\"sunspots\", force_download=True)\ntest_eq(len(ts), 3235)\nts",
"Dataset: Sunspots\ndownloading data...\n...data downloaded. Path = data/forecasting/Sunspots.csv\n"
],
[
"ts = get_forecasting_time_series(\"weather\", force_download=True)\ntest_eq(len(ts), 70091)\nts",
"Dataset: Weather\ndownloading data...\n...data downloaded. Path = data/forecasting/Weather.csv.zip\n"
],
[
"# export\n\nMonash_forecasting_list = ['m1_yearly_dataset',\n 'm1_quarterly_dataset',\n 'm1_monthly_dataset',\n 'm3_yearly_dataset',\n 'm3_quarterly_dataset',\n 'm3_monthly_dataset',\n 'm3_other_dataset',\n 'm4_yearly_dataset',\n 'm4_quarterly_dataset',\n 'm4_monthly_dataset',\n 'm4_weekly_dataset',\n 'm4_daily_dataset',\n 'm4_hourly_dataset',\n 'tourism_yearly_dataset',\n 'tourism_quarterly_dataset',\n 'tourism_monthly_dataset',\n 'nn5_daily_dataset_with_missing_values',\n 'nn5_daily_dataset_without_missing_values',\n 'nn5_weekly_dataset',\n 'cif_2016_dataset',\n 'kaggle_web_traffic_dataset_with_missing_values',\n 'kaggle_web_traffic_dataset_without_missing_values',\n 'kaggle_web_traffic_weekly_dataset',\n 'solar_10_minutes_dataset',\n 'solar_weekly_dataset',\n 'electricity_hourly_dataset',\n 'electricity_weekly_dataset',\n 'london_smart_meters_dataset_with_missing_values',\n 'london_smart_meters_dataset_without_missing_values',\n 'wind_farms_minutely_dataset_with_missing_values',\n 'wind_farms_minutely_dataset_without_missing_values',\n 'car_parts_dataset_with_missing_values',\n 'car_parts_dataset_without_missing_values',\n 'dominick_dataset',\n 'fred_md_dataset',\n 'traffic_hourly_dataset',\n 'traffic_weekly_dataset',\n 'pedestrian_counts_dataset',\n 'hospital_dataset',\n 'covid_deaths_dataset',\n 'kdd_cup_2018_dataset_with_missing_values',\n 'kdd_cup_2018_dataset_without_missing_values',\n 'weather_dataset',\n 'sunspot_dataset_with_missing_values',\n 'sunspot_dataset_without_missing_values',\n 'saugeenday_dataset',\n 'us_births_dataset',\n 'elecdemand_dataset',\n 'solar_4_seconds_dataset',\n 'wind_4_seconds_dataset',\n 'Sunspots', 'Weather']\n\n\nforecasting_list = Monash_forecasting_list",
"_____no_output_____"
],
[
"# export\n\n## Original code available at: https://github.com/rakshitha123/TSForecasting\n# This repository contains the implementations related to the experiments of a set of publicly available datasets that are used in \n# the time series forecasting research space.\n\n# The benchmark datasets are available at: https://zenodo.org/communities/forecasting. For more details, please refer to our website: \n# https://forecastingdata.org/ and paper: https://arxiv.org/abs/2105.06643.\n\n# Citation: \n# @misc{godahewa2021monash,\n# author=\"Godahewa, Rakshitha and Bergmeir, Christoph and Webb, Geoffrey I. and Hyndman, Rob J. and Montero-Manso, Pablo\",\n# title=\"Monash Time Series Forecasting Archive\",\n# howpublished =\"\\url{https://arxiv.org/abs/2105.06643}\",\n# year=\"2021\"\n# }\n\n\n# Converts the contents in a .tsf file into a dataframe and returns it along with other meta-data of the dataset: frequency, horizon, whether the dataset contains missing values and whether the series have equal lengths\n#\n# Parameters\n# full_file_path_and_name - complete .tsf file path\n# replace_missing_vals_with - a term to indicate the missing values in series in the returning dataframe\n# value_column_name - Any name that is preferred to have as the name of the column containing series values in the returning dataframe\ndef convert_tsf_to_dataframe(full_file_path_and_name, replace_missing_vals_with = 'NaN', value_column_name = \"series_value\"):\n col_names = []\n col_types = []\n all_data = {}\n line_count = 0\n frequency = None\n forecast_horizon = None\n contain_missing_values = None\n contain_equal_length = None\n found_data_tag = False\n found_data_section = False\n started_reading_data_section = False\n\n with open(full_file_path_and_name, 'r', encoding='cp1252') as file:\n for line in file:\n # Strip white space from start/end of line\n line = line.strip()\n\n if line:\n if line.startswith(\"@\"): # Read meta-data\n if not line.startswith(\"@data\"):\n line_content = line.split(\" \")\n if line.startswith(\"@attribute\"):\n if (len(line_content) != 3): # Attributes have both name and type\n raise TsFileParseException(\"Invalid meta-data specification.\")\n\n col_names.append(line_content[1])\n col_types.append(line_content[2])\n else:\n if len(line_content) != 2: # Other meta-data have only values\n raise TsFileParseException(\"Invalid meta-data specification.\")\n\n if line.startswith(\"@frequency\"):\n frequency = line_content[1]\n elif line.startswith(\"@horizon\"):\n forecast_horizon = int(line_content[1])\n elif line.startswith(\"@missing\"):\n contain_missing_values = bool(distutils.util.strtobool(line_content[1]))\n elif line.startswith(\"@equallength\"):\n contain_equal_length = bool(distutils.util.strtobool(line_content[1]))\n\n else:\n if len(col_names) == 0:\n raise TsFileParseException(\"Missing attribute section. Attribute section must come before data.\")\n\n found_data_tag = True\n elif not line.startswith(\"#\"):\n if len(col_names) == 0:\n raise TsFileParseException(\"Missing attribute section. 
Attribute section must come before data.\")\n elif not found_data_tag:\n raise TsFileParseException(\"Missing @data tag.\")\n else:\n if not started_reading_data_section:\n started_reading_data_section = True\n found_data_section = True\n all_series = []\n\n for col in col_names:\n all_data[col] = []\n\n full_info = line.split(\":\")\n\n if len(full_info) != (len(col_names) + 1):\n raise TsFileParseException(\"Missing attributes/values in series.\")\n\n series = full_info[len(full_info) - 1]\n series = series.split(\",\")\n\n if(len(series) == 0):\n raise TsFileParseException(\"A given series should contains a set of comma separated numeric values. At least one numeric value should be there in a series. Missing values should be indicated with ? symbol\")\n\n numeric_series = []\n\n for val in series:\n if val == \"?\":\n numeric_series.append(replace_missing_vals_with)\n else:\n numeric_series.append(float(val))\n\n if (numeric_series.count(replace_missing_vals_with) == len(numeric_series)):\n raise TsFileParseException(\"All series values are missing. A given series should contains a set of comma separated numeric values. At least one numeric value should be there in a series.\")\n\n all_series.append(pd.Series(numeric_series).array)\n\n for i in range(len(col_names)):\n att_val = None\n if col_types[i] == \"numeric\":\n att_val = int(full_info[i])\n elif col_types[i] == \"string\":\n att_val = str(full_info[i])\n elif col_types[i] == \"date\":\n att_val = datetime.strptime(full_info[i], '%Y-%m-%d %H-%M-%S')\n else:\n raise TsFileParseException(\"Invalid attribute type.\") # Currently, the code supports only numeric, string and date types. Extend this as required.\n\n if(att_val == None):\n raise TsFileParseException(\"Invalid attribute value.\")\n else:\n all_data[col_names[i]].append(att_val)\n\n line_count = line_count + 1\n\n if line_count == 0:\n raise TsFileParseException(\"Empty file.\")\n if len(col_names) == 0:\n raise TsFileParseException(\"Missing attribute section.\")\n if not found_data_section:\n raise TsFileParseException(\"Missing series information under data section.\")\n\n all_data[value_column_name] = all_series\n loaded_data = pd.DataFrame(all_data)\n\n return loaded_data, frequency, forecast_horizon, contain_missing_values, contain_equal_length",
"_____no_output_____"
],
[
"# export\n\ndef get_Monash_forecasting_data(dsid, path='./data/forecasting/', force_download=False, remove_from_disk=False, verbose=True):\n\n pv(f'Dataset: {dsid}', verbose)\n dsid = dsid.lower()\n assert dsid in Monash_forecasting_list, f'{dsid} not available in Monash_forecasting_list'\n\n if dsid == 'm1_yearly_dataset': url = 'https://zenodo.org/record/4656193/files/m1_yearly_dataset.zip'\n elif dsid == 'm1_quarterly_dataset': url = 'https://zenodo.org/record/4656154/files/m1_quarterly_dataset.zip'\n elif dsid == 'm1_monthly_dataset': url = 'https://zenodo.org/record/4656159/files/m1_monthly_dataset.zip'\n elif dsid == 'm3_yearly_dataset': url = 'https://zenodo.org/record/4656222/files/m3_yearly_dataset.zip'\n elif dsid == 'm3_quarterly_dataset': url = 'https://zenodo.org/record/4656262/files/m3_quarterly_dataset.zip'\n elif dsid == 'm3_monthly_dataset': url = 'https://zenodo.org/record/4656298/files/m3_monthly_dataset.zip'\n elif dsid == 'm3_other_dataset': url = 'https://zenodo.org/record/4656335/files/m3_other_dataset.zip'\n elif dsid == 'm4_yearly_dataset': url = 'https://zenodo.org/record/4656379/files/m4_yearly_dataset.zip'\n elif dsid == 'm4_quarterly_dataset': url = 'https://zenodo.org/record/4656410/files/m4_quarterly_dataset.zip'\n elif dsid == 'm4_monthly_dataset': url = 'https://zenodo.org/record/4656480/files/m4_monthly_dataset.zip'\n elif dsid == 'm4_weekly_dataset': url = 'https://zenodo.org/record/4656522/files/m4_weekly_dataset.zip'\n elif dsid == 'm4_daily_dataset': url = 'https://zenodo.org/record/4656548/files/m4_daily_dataset.zip'\n elif dsid == 'm4_hourly_dataset': url = 'https://zenodo.org/record/4656589/files/m4_hourly_dataset.zip'\n elif dsid == 'tourism_yearly_dataset': url = 'https://zenodo.org/record/4656103/files/tourism_yearly_dataset.zip'\n elif dsid == 'tourism_quarterly_dataset': url = 'https://zenodo.org/record/4656093/files/tourism_quarterly_dataset.zip'\n elif dsid == 'tourism_monthly_dataset': url = 'https://zenodo.org/record/4656096/files/tourism_monthly_dataset.zip'\n elif dsid == 'nn5_daily_dataset_with_missing_values': url = 'https://zenodo.org/record/4656110/files/nn5_daily_dataset_with_missing_values.zip'\n elif dsid == 'nn5_daily_dataset_without_missing_values': url = 'https://zenodo.org/record/4656117/files/nn5_daily_dataset_without_missing_values.zip'\n elif dsid == 'nn5_weekly_dataset': url = 'https://zenodo.org/record/4656125/files/nn5_weekly_dataset.zip'\n elif dsid == 'cif_2016_dataset': url = 'https://zenodo.org/record/4656042/files/cif_2016_dataset.zip'\n elif dsid == 'kaggle_web_traffic_dataset_with_missing_values': url = 'https://zenodo.org/record/4656080/files/kaggle_web_traffic_dataset_with_missing_values.zip'\n elif dsid == 'kaggle_web_traffic_dataset_without_missing_values': url = 'https://zenodo.org/record/4656075/files/kaggle_web_traffic_dataset_without_missing_values.zip'\n elif dsid == 'kaggle_web_traffic_weekly': url = 'https://zenodo.org/record/4656664/files/kaggle_web_traffic_weekly_dataset.zip'\n elif dsid == 'solar_10_minutes_dataset': url = 'https://zenodo.org/record/4656144/files/solar_10_minutes_dataset.zip'\n elif dsid == 'solar_weekly_dataset': url = 'https://zenodo.org/record/4656151/files/solar_weekly_dataset.zip'\n elif dsid == 'electricity_hourly_dataset': url = 'https://zenodo.org/record/4656140/files/electricity_hourly_dataset.zip'\n elif dsid == 'electricity_weekly_dataset': url = 'https://zenodo.org/record/4656141/files/electricity_weekly_dataset.zip'\n elif dsid == 
'london_smart_meters_dataset_with_missing_values': url = 'https://zenodo.org/record/4656072/files/london_smart_meters_dataset_with_missing_values.zip'\n elif dsid == 'london_smart_meters_dataset_without_missing_values': url = 'https://zenodo.org/record/4656091/files/london_smart_meters_dataset_without_missing_values.zip'\n elif dsid == 'wind_farms_minutely_dataset_with_missing_values': url = 'https://zenodo.org/record/4654909/files/wind_farms_minutely_dataset_with_missing_values.zip'\n elif dsid == 'wind_farms_minutely_dataset_without_missing_values': url = 'https://zenodo.org/record/4654858/files/wind_farms_minutely_dataset_without_missing_values.zip'\n elif dsid == 'car_parts_dataset_with_missing_values': url = 'https://zenodo.org/record/4656022/files/car_parts_dataset_with_missing_values.zip'\n elif dsid == 'car_parts_dataset_without_missing_values': url = 'https://zenodo.org/record/4656021/files/car_parts_dataset_without_missing_values.zip'\n elif dsid == 'dominick_dataset': url = 'https://zenodo.org/record/4654802/files/dominick_dataset.zip'\n elif dsid == 'fred_md_dataset': url = 'https://zenodo.org/record/4654833/files/fred_md_dataset.zip'\n elif dsid == 'traffic_hourly_dataset': url = 'https://zenodo.org/record/4656132/files/traffic_hourly_dataset.zip'\n elif dsid == 'traffic_weekly_dataset': url = 'https://zenodo.org/record/4656135/files/traffic_weekly_dataset.zip'\n elif dsid == 'pedestrian_counts_dataset': url = 'https://zenodo.org/record/4656626/files/pedestrian_counts_dataset.zip'\n elif dsid == 'hospital_dataset': url = 'https://zenodo.org/record/4656014/files/hospital_dataset.zip'\n elif dsid == 'covid_deaths_dataset': url = 'https://zenodo.org/record/4656009/files/covid_deaths_dataset.zip'\n elif dsid == 'kdd_cup_2018_dataset_with_missing_values': url = 'https://zenodo.org/record/4656719/files/kdd_cup_2018_dataset_with_missing_values.zip'\n elif dsid == 'kdd_cup_2018_dataset_without_missing_values': url = 'https://zenodo.org/record/4656756/files/kdd_cup_2018_dataset_without_missing_values.zip'\n elif dsid == 'weather_dataset': url = 'https://zenodo.org/record/4654822/files/weather_dataset.zip'\n elif dsid == 'sunspot_dataset_with_missing_values': url = 'https://zenodo.org/record/4654773/files/sunspot_dataset_with_missing_values.zip'\n elif dsid == 'sunspot_dataset_without_missing_values': url = 'https://zenodo.org/record/4654722/files/sunspot_dataset_without_missing_values.zip'\n elif dsid == 'saugeenday_dataset': url = 'https://zenodo.org/record/4656058/files/saugeenday_dataset.zip'\n elif dsid == 'us_births_dataset': url = 'https://zenodo.org/record/4656049/files/us_births_dataset.zip'\n elif dsid == 'elecdemand_dataset': url = 'https://zenodo.org/record/4656069/files/elecdemand_dataset.zip'\n elif dsid == 'solar_4_seconds_dataset': url = 'https://zenodo.org/record/4656027/files/solar_4_seconds_dataset.zip'\n elif dsid == 'wind_4_seconds_dataset': url = 'https://zenodo.org/record/4656032/files/wind_4_seconds_dataset.zip'\n\n path = Path(path)\n full_path = path/f'{dsid}.tsf'\n if not full_path.exists() or force_download: \n decompress_from_url(url, target_dir=path, verbose=verbose)\n pv(\"converting dataframe to numpy array...\", verbose)\n data, frequency, forecast_horizon, contain_missing_values, contain_equal_length = convert_tsf_to_dataframe(full_path)\n X = to3d(stack_pad(data['series_value']))\n pv(\"...dataframe converted to numpy array\", verbose)\n pv(f'\\nX.shape: {X.shape}', verbose) \n pv(f'freq: {frequency}', verbose) \n pv(f'forecast_horizon: 
{forecast_horizon}', verbose) \n pv(f'contain_missing_values: {contain_missing_values}', verbose) \n pv(f'contain_equal_length: {contain_equal_length}', verbose=verbose)\n if remove_from_disk: os.remove(full_path)\n return X\n\nget_forecasting_data = get_Monash_forecasting_data",
"_____no_output_____"
],
[
"dsid = 'm1_yearly_dataset'\nX = get_Monash_forecasting_data(dsid, force_download=True, remove_from_disk=True)\ntest_eq(X.shape, (181, 1, 58))",
"Dataset: m1_yearly_dataset\ndownloading data...\n...data downloaded\ndecompressing data...\n...data decompressed\nconverting dataframe to numpy array...\n...dataframe converted to numpy array\n\nX.shape: (181, 1, 58)\nfreq: yearly\nforecast_horizon: 6\ncontain_missing_values: False\ncontain_equal_length: False\n"
],
[
"#hide\nfrom tsai.imports import create_scripts\nfrom tsai.export import get_nb_name\nnb_name = get_nb_name()\ncreate_scripts(nb_name);",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76eec125f829f16c24646e0cf42ff38752230b8 | 32,251 | ipynb | Jupyter Notebook | 06_models-resnet_3d.ipynb | andreamunafo/actions-in-videos | 9a3883e650218dec046c0842f81f4108f78c8236 | [
"Apache-2.0"
] | null | null | null | 06_models-resnet_3d.ipynb | andreamunafo/actions-in-videos | 9a3883e650218dec046c0842f81f4108f78c8236 | [
"Apache-2.0"
] | 2 | 2021-05-20T12:50:07.000Z | 2021-09-28T00:37:29.000Z | 06_models-resnet_3d.ipynb | andreamunafo/actions_in_videos | 9a3883e650218dec046c0842f81f4108f78c8236 | [
"Apache-2.0"
] | null | null | null | 46.53824 | 177 | 0.505256 | [
[
[
"# default_exp models_resnet_3d",
"_____no_output_____"
]
],
[
[
"# Models using 3D convolutions\n\n> This module defines ResNet models built from 3D convolutions for action recognition in videos (e.g. on the UCF101 dataset).\n\nRefs.\n[understanding-1d-and-3d-convolution](https://towardsdatascience.com/understanding-1d-and-3d-convolution-neural-network-keras-9d8f76e29610)",
"_____no_output_____"
]
],
[
[
"#export\nimport torch\nimport torch.nn as nn\nimport torchvision # used to download the model\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nimport math",
"_____no_output_____"
],
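[
"# Added sketch (not from the original notebook): a 3D convolution expects input of shape\n# (batch, channels, frames, height, width); this is why ResNet.forward below permutes the\n# fastai-style batch (bs, frames, channels, h, w) with x.permute(0, 2, 1, 3, 4).\n_x = torch.randn(2, 3, 16, 112, 112)\n_conv3d = nn.Conv3d(3, 8, kernel_size=3, padding=1)\nprint(_conv3d(_x).shape)  # expected: torch.Size([2, 8, 16, 112, 112])",
"_____no_output_____"
],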
[
"#export\ndef conv3x3x3(in_channels, out_channels, stride=1):\n # 3x3x3 convolution with padding\n return nn.Conv3d(\n in_channels,\n out_channels,\n kernel_size=3,\n stride=stride,\n padding=1,\n bias=False)\n\n\ndef downsample_basic_block(x, planes, stride):\n out = F.avg_pool3d(x, kernel_size=1, stride=stride)\n zero_pads = torch.Tensor(\n out.size(0), planes - out.size(1), out.size(2), out.size(3),\n out.size(4)).zero_()\n \n if isinstance(out.data, torch.cuda.FloatTensor):\n zero_pads = zero_pads.cuda()\n\n out = Variable(torch.cat([out.data, zero_pads], dim=1))\n\n return out",
"_____no_output_____"
],
[
"#export\nclass BasicBlock(nn.Module):\n expansion = 1\n\n def __init__(self, in_channels, channels, stride=1, downsample=None):\n super(BasicBlock, self).__init__()\n self.conv1 = conv3x3x3(in_channels, channels, stride)\n self.bn1 = nn.BatchNorm3d(channels)\n self.relu = nn.ReLU(inplace=True)\n self.conv2 = conv3x3x3(channels, channels)\n self.bn2 = nn.BatchNorm3d(channels)\n self.downsample = downsample\n self.stride = stride\n\n def forward(self, x):\n residual = x\n\n out = self.conv1(x)\n out = self.bn1(out)\n out = self.relu(out)\n\n out = self.conv2(out)\n out = self.bn2(out)\n\n if self.downsample is not None:\n residual = self.downsample(x)\n\n out += residual\n out = self.relu(out)\n\n return out",
"_____no_output_____"
],
[
"#export \nclass Bottleneck(nn.Module):\n expansion = 4\n\n def __init__(self, inplanes, planes, stride=1, downsample=None):\n super(Bottleneck, self).__init__()\n self.conv1 = nn.Conv3d(inplanes, planes, kernel_size=1, bias=False)\n self.bn1 = nn.BatchNorm3d(planes)\n self.conv2 = nn.Conv3d(planes, planes, kernel_size=3, stride=stride,\n padding=1, bias=False)\n self.bn2 = nn.BatchNorm3d(planes)\n self.conv3 = nn.Conv3d(planes, planes * 4, kernel_size=1, bias=False)\n self.bn3 = nn.BatchNorm3d(planes * 4)\n self.relu = nn.ReLU(inplace=True)\n self.downsample = downsample\n self.stride = stride\n\n def forward(self, x):\n residual = x\n\n out = self.conv1(x)\n out = self.bn1(out)\n out = self.relu(out)\n\n out = self.conv2(out)\n out = self.bn2(out)\n out = self.relu(out)\n\n out = self.conv3(out)\n out = self.bn3(out)\n\n if self.downsample is not None:\n residual = self.downsample(x)\n\n out += residual\n out = self.relu(out)\n\n return out",
"_____no_output_____"
],
[
"#export\nclass ResNet(nn.Module):\n\n def __init__(self, block, layers, sample_size,\n sample_duration, shortcut_type='B',\n num_classes=400):\n self.inplanes = 64\n super(ResNet, self).__init__()\n \n self.conv1 = nn.Conv3d(3,\n 64,\n kernel_size=7,\n stride=(1, 2, 2),\n padding=(3, 3, 3),\n bias=False)\n self.bn1 = nn.BatchNorm3d(64)\n self.relu = nn.ReLU(inplace=True)\n self.maxpool = nn.MaxPool3d(kernel_size=(3, 3, 3), stride=2, padding=1)\n self.layer1 = self._make_layer(block, 64, layers[0], shortcut_type)\n self.layer2 = self._make_layer(block, 128, layers[1], shortcut_type, stride=2)\n self.layer3 = self._make_layer(block, 256, layers[2], shortcut_type, stride=2)\n self.layer4 = self._make_layer(block, 512, layers[3], shortcut_type, stride=2)\n last_duration = int(math.ceil(sample_duration / 16))\n last_size = int(math.ceil(sample_size / 32))\n self.avgpool = nn.AvgPool3d((last_duration, last_size, last_size), stride=1)\n self.fc = nn.Linear(512 * block.expansion, num_classes)\n\n for m in self.modules():\n if isinstance(m, nn.Conv3d):\n m.weight = nn.init.kaiming_normal_(m.weight, mode='fan_out')\n elif isinstance(m, nn.BatchNorm3d):\n m.weight.data.fill_(1)\n m.bias.data.zero_()\n\n def _make_layer(self, block, planes, blocks, shortcut_type, stride=1):\n downsample = None\n if stride != 1 or self.inplanes != planes * block.expansion:\n if shortcut_type == 'A':\n downsample = partial(\n downsample_basic_block,\n planes=planes * block.expansion,\n stride=stride)\n else:\n downsample = nn.Sequential(\n nn.Conv3d(self.inplanes,\n planes * block.expansion,\n kernel_size=1,\n stride=stride,\n bias=False), \n nn.BatchNorm3d(planes * block.expansion))\n\n layers = []\n layers.append(block(self.inplanes, planes, stride, downsample))\n self.inplanes = planes * block.expansion\n for i in range(1, blocks):\n layers.append(block(self.inplanes, planes))\n\n return nn.Sequential(*layers)\n\n def forward(self, x): \n # only when using fastai\n x = x.permute(0,2,1,3,4)\n \n with torch.no_grad():\n h = self.conv1(x)\n h = self.bn1(h)\n h = self.relu(h)\n h = self.maxpool(h)\n\n h = self.layer1(h)\n h = self.layer2(h)\n h = self.layer3(h)\n h = self.layer4[0](h)\n# h = self.layer4(h)\n\n h = self.avgpool(h)\n\n h = h.view(h.size(0), -1)\n h = self.fc(h)\n\n return h",
"_____no_output_____"
],
[
"#export\nclass ResNet50_3D(nn.Module):\n def __init__(self, num_classes, **kwargs):\n super(ResNet50_3D, self).__init__()\n \n if 'model_pretrained' in kwargs.keys():\n print(f\"ResNet50_3D is loading pretrained ResNet50 from {kwargs['model_pretrained']}\")\n pretrained_resnet50 = torch.load('./model-pretrained/resnet-50-kinetics.pth', map_location=torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\"))\n kwargs.pop('model_pretrained', None)\n resnet = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs) \n \n keys = [k for k,v in pretrained_resnet50['state_dict'].items()]\n pretrained_state_dict = {k[7:]: v.cpu() for k, v in pretrained_resnet50['state_dict'].items()}\n resnet.load_state_dict(pretrained_state_dict) \n \n else:\n resnet = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)\n \n # chenage the last layer to match number of classes\n resnet.fc = nn.Linear(resnet.fc.weight.shape[1], num_classes)\n \n# self.feature_extractor = nn.Sequential(*list(resnet.children())[:-1])\n self.feature_extractor = resnet\n# self.final = nn.Sequential(\n# nn.Linear(resnet.fc.in_features, num_classes),\n# )\n \n\n def forward(self, x):\n # The input x will now be size [batch_size, c, seq_len, h, w]. \n # This is what I might get..Sequence (bs, 4, 3, 224, 224)\n #batch_size, c, h, w = x.shape\n #x = x.view(batch_size, c, h, w)\n x = self.feature_extractor(x)\n #x = x.view(batch_size, -1)\n# x = self.final(x)\n #x = x.view(batch_size, -1)\n return x\n ",
"_____no_output_____"
],
[
"#export\ndef resnet10(**kwargs):\n \"\"\"Constructs a ResNet-18 model.\n \"\"\"\n model = ResNet(BasicBlock, [1, 1, 1, 1], **kwargs)\n return model\n\n\ndef resnet18(**kwargs):\n \"\"\"Constructs a ResNet-18 model.\n \"\"\"\n model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)\n return model\n\n\ndef resnet34(**kwargs):\n \"\"\"Constructs a ResNet-34 model.\n \"\"\"\n model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)\n return model\n\n\ndef resnet50(**kwargs):\n \"\"\"Constructs a ResNet-50 model.\n \"\"\"\n print('function resnet50')\n model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)\n return model\n\ndef resnet101(**kwargs):\n \"\"\"Constructs a ResNet-101 model.\n \"\"\"\n model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)\n return model\n\n\ndef resnet152(**kwargs):\n \"\"\"Constructs a ResNet-101 model.\n \"\"\"\n model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)\n return model\n\n\ndef resnet200(**kwargs):\n \"\"\"Constructs a ResNet-101 model.\n \"\"\"\n model = ResNet(Bottleneck, [3, 24, 36, 3], **kwargs)\n return model",
"_____no_output_____"
],
[
"model = ResNet50_3D(num_classes=101, sample_size=224, sample_duration=16, model_pretrained='./model-pretrained/resnet-50-kinetics.pth')",
"ResNet50_3D is loading pretrained ResNet50 from ./model-pretrained/resnet-50-kinetics.pth\n"
],
[
"model",
"_____no_output_____"
],
[
"model = resnet10(sample_size=224, sample_duration=16)",
"_____no_output_____"
],
[
"model",
"_____no_output_____"
],
[
"#hide\nfrom nbdev.export import *\nnotebook2script()",
"Converted 01_dataset_ucf101.ipynb.\nConverted 02_avi.ipynb.\nConverted 04_data_augmentation.ipynb.\nConverted 05_models.ipynb.\nConverted 06_models-resnet_3d.ipynb.\nConverted 07_utils.ipynb.\nConverted 10_run-baseline.ipynb.\nConverted 11_run-sequence-convlstm.ipynb.\nConverted 12_run-sequence-3d.ipynb.\nConverted 14_fastai_sequence.ipynb.\nConverted index.ipynb.\n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
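"code",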
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76ef20b7406669be3a163b1929ea7674bcdc28b | 259,765 | ipynb | Jupyter Notebook | 1c_3D_triaxial_SimPEG.ipynb | empymod/emg3d-examples | f3cf6a25061d5c81debfdc028e2f8cfb08bba075 | [
"Apache-2.0"
] | 3 | 2019-05-24T14:12:10.000Z | 2020-01-17T11:03:51.000Z | 1c_3D_triaxial_SimPEG.ipynb | empymod/emg3d-examples | f3cf6a25061d5c81debfdc028e2f8cfb08bba075 | [
"Apache-2.0"
] | null | null | null | 1c_3D_triaxial_SimPEG.ipynb | empymod/emg3d-examples | f3cf6a25061d5c81debfdc028e2f8cfb08bba075 | [
"Apache-2.0"
] | 2 | 2019-05-22T20:20:32.000Z | 2019-05-25T21:30:12.000Z | 114.889429 | 131,687 | 0.786569 | [
[
[
"# OUTDATED, the examples moved to the gallery\n## See https://empymod.github.io/emg3d-gallery\n\n----\n\n# 3D with tri-axial anisotropy comparison between `emg3d` and `SimPEG`\n\n`SimPEG` is an open source python package for simulation and gradient based parameter estimation in geophysical applications, see https://simpeg.xyz. We can use `emg3d` as a solver for `SimPEG`, and compare it with the forward solver `Pardiso`.\n\n#### Requires\n- **emg3d >= 0.9.0**\n- ``discretize``, ``SimPEG``, ``pymatsolver``\n- ``numpy``, ``scipy``, ``numba``, ``matplotlib``\n\nNote, in order to use the `Pardiso`-solver `pymatsolver` has to be installed via `conda`, not via `pip`!",
"_____no_output_____"
]
],
[
[
"import time\nimport emg3d\nimport discretize\nimport numpy as np\nimport SimPEG, pymatsolver\nfrom SimPEG.EM import FDEM\nfrom SimPEG import Mesh, Maps\nfrom SimPEG.Survey import Data\nimport matplotlib.pyplot as plt\nfrom timeit import default_timer\nfrom contextlib import contextmanager\nfrom datetime import datetime, timedelta\nfrom pymatsolver import Pardiso as Solver\nfrom matplotlib.colors import LogNorm, SymLogNorm\n\n%load_ext memory_profiler",
"_____no_output_____"
],
[
"# Style adjustments\n%matplotlib notebook\nplt.style.use('ggplot')",
"_____no_output_____"
]
],
[
[
"## Model and survey parameters",
"_____no_output_____"
]
],
[
[
"# Depths (0 is sea-surface)\nwater_depth = 1000\ntarget_x = np.r_[-500, 500]\ntarget_y = target_x\ntarget_z = -water_depth + np.r_[-400, -100]\n\n# Resistivities\nres_air = 2e8\nres_sea = 0.33\nres_back = [1., 2., 3.] # Background in x-, y-, and z-directions\nres_target = 100.\n\nfreq = 1.0\n\nsrc = [-100, 100, 0, 0, -900, -900]",
"_____no_output_____"
]
],
[
[
"## Mesh and source-field",
"_____no_output_____"
]
],
[
[
"# skin depth\nskin_depth = 503/np.sqrt(res_back[0]/freq)\nprint(f\"\\nThe skin_depth is {skin_depth} m.\\n\")\n\ncs = 100 # 100 m min_width of cells\n\npf = 1.15 # Padding factor x- and y-directions\npfz = 1.35 # z-direction\nnpadx = 12 # Nr of padding in x- and y-directions\nnpadz = 9 # z-direction\n\ndomain_x = 4000 # x- and y-domain\ndomain_z = - target_z[0] # z-domain\n\n# Create mesh\nmesh = Mesh.TensorMesh(\n [[(cs, npadx, -pf), (cs, int(domain_x/cs)), (cs, npadx, pf)], \n [(cs, npadx, -pf), (cs, int(domain_x/cs)), (cs, npadx, pf)], \n [(cs, npadz, -pfz), (cs, int(domain_z/cs)), (cs, npadz, pfz)]]\n)\n\n# Center mesh\nmesh.x0 = np.r_[-mesh.hx.sum()/2, -mesh.hy.sum()/2, -mesh.hz[:-npadz].sum()]\n\n# Create the source field for this mesh and given frequency\nsfield = emg3d.utils.get_source_field(mesh, src, freq, strength=0)\n\n# We take the receiver locations at the actual CCx-locations\nrec_x = mesh.vectorCCx[12:-12]\nprint(f\"Receiver locations:\\n{rec_x}\\n\")\n\nmesh",
"\nThe skin_depth is 503.0 m.\n\nReceiver locations:\n[-1950. -1850. -1750. -1650. -1550. -1450. -1350. -1250. -1150. -1050.\n -950. -850. -750. -650. -550. -450. -350. -250. -150. -50.\n 50. 150. 250. 350. 450. 550. 650. 750. 850. 950.\n 1050. 1150. 1250. 1350. 1450. 1550. 1650. 1750. 1850. 1950.]\n\n"
]
],
[
[
"## Create model",
"_____no_output_____"
]
],
[
[
"# Layered_background\nres_x = res_air*np.ones(mesh.nC)\nres_x[mesh.gridCC[:, 2] <= 0] = res_sea\n\nres_y = res_x.copy()\nres_z = res_x.copy()\n\nres_x[mesh.gridCC[:, 2] <= -water_depth] = res_back[0]\nres_y[mesh.gridCC[:, 2] <= -water_depth] = res_back[1]\nres_z[mesh.gridCC[:, 2] <= -water_depth] = res_back[2]\n\nres_x_bg = res_x.copy()\nres_y_bg = res_y.copy()\nres_z_bg = res_z.copy()\n\n# Include the target\ntarget_inds = (\n (mesh.gridCC[:, 0] >= target_x[0]) & (mesh.gridCC[:, 0] <= target_x[1]) &\n (mesh.gridCC[:, 1] >= target_y[0]) & (mesh.gridCC[:, 1] <= target_y[1]) &\n (mesh.gridCC[:, 2] >= target_z[0]) & (mesh.gridCC[:, 2] <= target_z[1])\n)\nres_x[target_inds] = res_target\nres_y[target_inds] = res_target\nres_z[target_inds] = res_target\n\n# Create emg3d-models for given frequency\npmodel = emg3d.utils.Model(mesh, res_x, res_y, res_z)\npmodel_bg = emg3d.utils.Model(mesh, res_x_bg, res_y_bg, res_z_bg)\n\n# Plot a slice\nmesh.plot_3d_slicer(pmodel.res_x, zslice=-1100, clim=[0, 2],\n xlim=(-4000, 4000), ylim=(-4000, 4000), zlim=(-2000, 500))",
"_____no_output_____"
]
],
[
[
"## Calculate `emg3d`",
"_____no_output_____"
]
],
[
[
"%memit em3_tg = emg3d.solver.solver(mesh, pmodel, sfield, verb=3, nu_pre=0, semicoarsening=True)",
"\n:: emg3d START :: 21:03:39 ::\n\n MG-cycle : 'F' sslsolver : False\n semicoarsening : True [1 2 3] tol : 1e-06\n linerelaxation : False [0] maxit : 50\n nu_{i,1,c,2} : 0, 0, 1, 2 verb : 3\n Original grid : 64 x 64 x 32 => 131,072 cells\n Coarsest grid : 2 x 2 x 2 => 8 cells\n Coarsest level : 5 ; 5 ; 4 \n\n [hh:mm:ss] rel. error [abs. error, last/prev] l s\n\n h_\n 2h_ \\ /\n 4h_ \\ /\\ / \n 8h_ \\ /\\ / \\ / \n 16h_ \\ /\\ / \\ / \\ / \n 32h_ \\/\\/ \\/ \\/ \\/ \n\n [21:03:40] 5.227e-02 after 1 F-cycles [2.918e-07, 0.052] 0 1\n [21:03:40] 6.108e-03 after 2 F-cycles [3.410e-08, 0.117] 0 2\n [21:03:41] 7.462e-04 after 3 F-cycles [4.166e-09, 0.122] 0 3\n [21:03:41] 1.133e-04 after 4 F-cycles [6.328e-10, 0.152] 0 1\n [21:03:42] 3.089e-05 after 5 F-cycles [1.724e-10, 0.272] 0 2\n [21:03:42] 4.369e-06 after 6 F-cycles [2.439e-11, 0.141] 0 3\n [21:03:43] 1.529e-06 after 7 F-cycles [8.538e-12, 0.350] 0 1\n [21:03:43] 9.542e-07 after 8 F-cycles [5.327e-12, 0.624] 0 2\n\n > CONVERGED\n > MG cycles : 8\n > Final rel. error : 9.542e-07\n\n:: emg3d END :: 21:03:43 :: runtime = 0:00:04\n\npeak memory: 329.74 MiB, increment: 62.88 MiB\n"
],
[
"%memit em3_bg = emg3d.solver.solver(mesh, pmodel_bg, sfield, verb=3, nu_pre=0, semicoarsening=True)",
"\n:: emg3d START :: 21:03:43 ::\n\n MG-cycle : 'F' sslsolver : False\n semicoarsening : True [1 2 3] tol : 1e-06\n linerelaxation : False [0] maxit : 50\n nu_{i,1,c,2} : 0, 0, 1, 2 verb : 3\n Original grid : 64 x 64 x 32 => 131,072 cells\n Coarsest grid : 2 x 2 x 2 => 8 cells\n Coarsest level : 5 ; 5 ; 4 \n\n [hh:mm:ss] rel. error [abs. error, last/prev] l s\n\n h_\n 2h_ \\ /\n 4h_ \\ /\\ / \n 8h_ \\ /\\ / \\ / \n 16h_ \\ /\\ / \\ / \\ / \n 32h_ \\/\\/ \\/ \\/ \\/ \n\n [21:03:44] 5.250e-02 after 1 F-cycles [2.931e-07, 0.052] 0 1\n [21:03:45] 6.468e-03 after 2 F-cycles [3.611e-08, 0.123] 0 2\n [21:03:45] 8.049e-04 after 3 F-cycles [4.494e-09, 0.124] 0 3\n [21:03:45] 1.435e-04 after 4 F-cycles [8.012e-10, 0.178] 0 1\n [21:03:46] 4.756e-05 after 5 F-cycles [2.655e-10, 0.331] 0 2\n [21:03:46] 5.863e-06 after 6 F-cycles [3.274e-11, 0.123] 0 3\n [21:03:47] 1.947e-06 after 7 F-cycles [1.087e-11, 0.332] 0 1\n [21:03:47] 1.068e-06 after 8 F-cycles [5.965e-12, 0.549] 0 2\n [21:03:48] 3.441e-07 after 9 F-cycles [1.921e-12, 0.322] 0 3\n\n > CONVERGED\n > MG cycles : 9\n > Final rel. error : 3.441e-07\n\n:: emg3d END :: 21:03:48 :: runtime = 0:00:04\n\npeak memory: 337.88 MiB, increment: 26.70 MiB\n"
]
],
[
[
"## Calculate `SimPEG`",
"_____no_output_____"
]
],
[
[
"# Set up the PDE\nprob = FDEM.Problem3D_e(mesh, sigmaMap=Maps.IdentityMap(mesh), Solver=Solver)\n\n# Set up the receivers\nrx_locs = Mesh.utils.ndgrid([rec_x, np.r_[0], np.r_[-water_depth]])\nrx_list = [\n FDEM.Rx.Point_e(orientation='x', component=\"real\", locs=rx_locs), \n FDEM.Rx.Point_e(orientation='x', component=\"imag\", locs=rx_locs)\n]\n\n# We use the emg3d-source-vector, to ensure we use the same in both cases\nsrc_sp = FDEM.Src.RawVec_e(rx_list, s_e=sfield.vector, freq=freq)\nsrc_list = [src_sp]\nsurvey = FDEM.Survey(src_list)\n\n# Create the simulation\nprob.pair(survey)",
"_____no_output_____"
],
[
"@contextmanager\ndef ctimeit(before=''):\n \"\"\"Print time used by commands run within the context manager.\"\"\"\n t0 = default_timer()\n yield\n t1 = default_timer() - t0\n print(f\"{before}{timedelta(seconds=np.round(t1))}\")",
"_____no_output_____"
],
[
"with ctimeit(\"SimPEG runtime: \"):\n %memit spg_tg_dobs = survey.dpred(np.vstack([1./res_x, 1./res_y, 1./res_z]).T)\nspg_tg = Data(survey, dobs=spg_tg_dobs)",
"peak memory: 10468.39 MiB, increment: 10138.32 MiB\nSimPEG runtime: 0:03:52\n"
],
[
"with ctimeit(\"SimPEG runtime: \"):\n %memit spg_bg_dobs = survey.dpred(np.vstack([1./res_x_bg, 1./res_y_bg, 1./res_z_bg]).T)\nspg_bg = Data(survey, dobs=spg_bg_dobs)",
"peak memory: 10460.16 MiB, increment: 9731.63 MiB\nSimPEG runtime: 0:03:53\n"
]
],
[
[
"## Plot result",
"_____no_output_____"
]
],
[
[
"ix1, ix2 = 12, 12\niy = 32\niz = 13\n\nmesh.vectorCCx[ix1], mesh.vectorCCx[-ix2-1], mesh.vectorNy[iy], mesh.vectorNz[iz]",
"_____no_output_____"
],
[
"plt.figure(figsize=(9, 6))\n\nplt.subplot(221)\nplt.title('|Real(response)|')\nplt.semilogy(rec_x/1e3, np.abs(em3_bg.fx[ix1:-ix2, iy, iz].real))\nplt.semilogy(rec_x/1e3, np.abs(em3_tg.fx[ix1:-ix2, iy, iz].real))\nplt.semilogy(rec_x/1e3, np.abs(spg_bg[src_sp, rx_list[0]]), 'C4--')\nplt.semilogy(rec_x/1e3, np.abs(spg_tg[src_sp, rx_list[0]]), 'C5--')\nplt.xlabel('Offset (km)')\nplt.ylabel('$E_x$ (V/m)')\n\nplt.subplot(223)\nplt.title('|Imag(response)|')\nplt.semilogy(rec_x/1e3, np.abs(em3_bg.fx[ix1:-ix2, iy, iz].imag), label='emg3d BG')\nplt.semilogy(rec_x/1e3, np.abs(em3_tg.fx[ix1:-ix2, iy, iz].imag), label='emg3d target')\nplt.semilogy(rec_x/1e3, np.abs(spg_bg[src_sp, rx_list[1]]), 'C4--', label='SimPEG BG')\nplt.semilogy(rec_x/1e3, np.abs(spg_tg[src_sp, rx_list[1]]), 'C5--', label='SimPEG target')\nplt.xlabel('Offset (km)')\nplt.ylabel('$E_x$ (V/m)')\nplt.legend()\n\nplt.subplot(222)\nplt.title('Relative error Real')\nplt.semilogy(rec_x/1e3, 100*np.abs((spg_bg[src_sp, rx_list[0]]-em3_bg.fx[ix1:-ix2, iy, iz].real)/\n em3_bg.fx[ix1:-ix2, iy, iz].real), label='BG')\nplt.semilogy(rec_x/1e3, 100*np.abs((spg_tg[src_sp, rx_list[0]]-em3_tg.fx[ix1:-ix2, iy, iz].real)/\n em3_tg.fx[ix1:-ix2, iy, iz].real), label='target')\n\nplt.xlabel('Offset (km)')\nplt.ylabel('Rel. Error (%)')\nplt.legend()\n\nplt.subplot(224)\nplt.title('Relative error (%) Imag')\nplt.semilogy(rec_x/1e3, 100*np.abs((spg_bg[src_sp, rx_list[1]]-em3_bg.fx[ix1:-ix2, iy, iz].imag)/\n em3_bg.fx[ix1:-ix2, iy, iz].imag), label='BG')\nplt.semilogy(rec_x/1e3, 100*np.abs((spg_tg[src_sp, rx_list[1]]-em3_tg.fx[ix1:-ix2, iy, iz].imag)/\n em3_tg.fx[ix1:-ix2, iy, iz].imag), label='target')\n\nplt.xlabel('Offset (km)')\nplt.ylabel('Rel. Error (%)')\nplt.legend()\n\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
],
[
"emg3d.Report([discretize, SimPEG, pymatsolver])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e76ef37dff2ca2e21ed0e21a3fc683e8715d6a2f | 520,985 | ipynb | Jupyter Notebook | tutorial/318_portfolio.ipynb | Blueqat/blueqat-tutorials | be863d1a6834ce6aa8a7cec0c886d7e3b4caabd1 | [
"Apache-2.0"
] | 7 | 2021-11-22T19:18:09.000Z | 2022-01-30T22:38:03.000Z | tutorial/318_portfolio.ipynb | Blueqat/blueqat-tutorials | be863d1a6834ce6aa8a7cec0c886d7e3b4caabd1 | [
"Apache-2.0"
] | 20 | 2021-11-23T22:41:58.000Z | 2022-01-30T17:46:46.000Z | tutorial/318_portfolio.ipynb | Blueqat/blueqat-tutorials | be863d1a6834ce6aa8a7cec0c886d7e3b4caabd1 | [
"Apache-2.0"
] | 3 | 2022-01-04T22:29:13.000Z | 2022-01-30T08:38:20.000Z | 2,245.625 | 515,060 | 0.956116 | [
[
[
"# Portfolio Optimization\nThe portfolio optimization problem is a combinatorial optimization problem that seeks the optimal combination of assets based on the balance between risk and return.",
"_____no_output_____"
],
[
"## Cost Function\nThe cost function for solving the portfolio optimization problem is",
"_____no_output_____"
],
[
"$$\nE = -\\sum \\mu_i q_i + \\gamma \\sum \\delta_{i,j}q_i q_j\n$$",
"_____no_output_____"
],
[
"The first term represents the expected return of the assets, and the second term represents the estimated risk.",
"_____no_output_____"
],
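[
"In matrix form this is a QUBO: the diagonal of the QUBO matrix carries the return terms $-\\mu_i$ and the off-diagonal entries carry the scaled risk terms $\\gamma \\delta_{i,j}$. The code below builds exactly this matrix as `asset_return + np.asarray(asset_risk)*0.5`, i.e. with $\\gamma = 0.5$.",
"_____no_output_____"
],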
[
"## Example\nNow, let's choose two of the six assets and find the optimal combination.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom blueqat import vqe\nfrom blueqat.pauli import I, X, Y, Z\nfrom blueqat.pauli import from_qubo\nfrom blueqat.pauli import qubo_bit as q\nfrom blueqat import Circuit",
"_____no_output_____"
]
],
[
[
"Use the following as return data",
"_____no_output_____"
]
],
[
[
"asset_return = np.diag([-0.026,-0.031,-0.007,-0.022,-0.010,-0.055])\nprint(asset_return)",
"[[-0.026 0. 0. 0. 0. 0. ]\n [ 0. -0.031 0. 0. 0. 0. ]\n [ 0. 0. -0.007 0. 0. 0. ]\n [ 0. 0. 0. -0.022 0. 0. ]\n [ 0. 0. 0. 0. -0.01 0. ]\n [ 0. 0. 0. 0. 0. -0.055]]\n"
]
],
[
[
"Use the following as risk data",
"_____no_output_____"
]
],
[
[
"asset_risk = [[0,0.0015,0.0012,0.0018,0.0022,0.0012],[0,0,0.0017,0.0022,0.0005,0.0019],[0,0,0,0.0040,0.0032,0.0024],[0,0,0,0,0.0012,0.0076],[0,0,0,0,0,0.0021],[0,0,0,0,0,0]]\nnp.asarray(asset_risk)",
"_____no_output_____"
]
],
[
[
"The QUBO is then converted to a Hamiltonian and solved. In addition, this time there is a constraint of selecting exactly two of the six assets, which is implemented with an XY mixer (the XX+YY mixing terms preserve the number of selected assets).",
"_____no_output_____"
]
],
[
[
"#convert qubo to pauli\nqubo = asset_return + np.asarray(asset_risk)*0.5\nhamiltonian = from_qubo(qubo)\n\ninit = Circuit(6).x[0,1]\nmixer = I()*0\nfor i in range(5):\n for j in range(i+1, 6):\n mixer += (X[i]*X[j] + Y[i]*Y[j])*0.5\n\nstep = 1\n\nresult = vqe.Vqe(vqe.QaoaAnsatz(hamiltonian, step, init, mixer)).run()\nprint(result.most_common(12))",
"(((0, 0, 0, 0, 1, 1), 0.9999994909753838), ((0, 0, 1, 0, 0, 1), 5.082984897154482e-07), ((0, 0, 0, 1, 0, 1), 4.5366922587264707e-10), ((0, 1, 0, 0, 0, 1), 1.8161653052486483e-10), ((0, 0, 1, 0, 1, 0), 9.063023975859213e-11), ((0, 0, 0, 1, 1, 0), 1.618082643262117e-13), ((1, 0, 0, 0, 0, 1), 1.6228830532381364e-14), ((0, 1, 0, 0, 1, 0), 1.618262335727315e-14), ((0, 1, 0, 1, 0, 0), 9.246381676299576e-17), ((0, 0, 1, 1, 0, 0), 8.260881354525854e-20), ((1, 0, 0, 1, 0, 0), 1.650702183251039e-20), ((0, 1, 1, 0, 0, 0), 8.25793442702573e-21))\n"
],
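[
"# Added sanity check (illustration, not part of the original tutorial): the XX+YY mixer only swaps\n# excitations between qubits, so it preserves the Hamming weight of the initial state Circuit(6).x[0,1].\n# The most probable bitstring should therefore select exactly two assets.\nbest_bits, best_prob = result.most_common(1)[0]\nprint(best_bits, best_prob)\nassert sum(best_bits) == 2",
"_____no_output_____"
],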
[
"result.circuit.run(backend=\"draw\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
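"markdown",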
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
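"code",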
"code",
"code"
]
] |
e76ef62bc05498185702fd067e9e1218af1f68bf | 12,943 | ipynb | Jupyter Notebook | StatsForDataAnalysis/stat.bootstrap_intervals.ipynb | alexsubota/PythonScripts | 9b3eaad1cb182bbc66424be1901514bd48690320 | [
"MIT"
] | null | null | null | StatsForDataAnalysis/stat.bootstrap_intervals.ipynb | alexsubota/PythonScripts | 9b3eaad1cb182bbc66424be1901514bd48690320 | [
"MIT"
] | null | null | null | StatsForDataAnalysis/stat.bootstrap_intervals.ipynb | alexsubota/PythonScripts | 9b3eaad1cb182bbc66424be1901514bd48690320 | [
"MIT"
] | null | null | null | 49.590038 | 1,463 | 0.651549 | [
[
[
"# Confidence intervals based on the bootstrap",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd",
"_____no_output_____"
],
[
"%pylab inline",
"Populating the interactive namespace from numpy and matplotlib\n"
]
],
[
[
"## Loading the data",
"_____no_output_____"
],
[
"### Telecommunications repair times",
"_____no_output_____"
],
[
"Verizon is the primary regional telecommunications company (Incumbent Local Exchange Carrier, ILEC) in the western \npart of the United States. Because of this, the company is required to provide repair service for telecommunications equipment \nnot only to its own customers, but also to customers of other local telecommunications companies (Competing Local Exchange Carriers, CLEC). In cases where the repair time for other companies' customers is substantially longer than for its own, Verizon can be fined. ",
"_____no_output_____"
]
],
[
[
"data = pd.read_csv('verizon.txt', sep='\\t')\ndata.shape",
"_____no_output_____"
],
[
"data.head()",
"_____no_output_____"
],
[
"data.Group.value_counts()",
"_____no_output_____"
],
[
"pylab.figure(figsize(12, 5))\npylab.subplot(1,2,1)\npylab.hist(data[data.Group == 'ILEC'].Time, bins = 20, color = 'b', range = (0, 100), label = 'ILEC')\npylab.legend()\n\npylab.subplot(1,2,2)\npylab.hist(data[data.Group == 'CLEC'].Time, bins = 20, color = 'r', range = (0, 100), label = 'CLEC')\npylab.legend()\n\npylab.show()",
"_____no_output_____"
]
],
[
[
"## Bootstrap",
"_____no_output_____"
]
],
[
[
"def get_bootstrap_samples(data, n_samples):\n indices = np.random.randint(0, len(data), (n_samples, len(data)))\n samples = data[indices]\n return samples",
"_____no_output_____"
],
[
"def stat_intervals(stat, alpha):\n boundaries = np.percentile(stat, [100 * alpha / 2., 100 * (1 - alpha / 2.)])\n return boundaries",
"_____no_output_____"
]
],
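[
[
"# Illustrative check of the helpers on synthetic data (the toy_* names below are made up for this\n# example and are not part of the Verizon analysis): bootstrap the median of an exponential sample.\nnp.random.seed(123)\ntoy_sample = np.random.exponential(scale=10., size=200)\ntoy_medians = list(map(np.median, get_bootstrap_samples(toy_sample, 1000)))\nprint(\"95% interval for the median of the synthetic sample:\", stat_intervals(toy_medians, 0.05))",
"_____no_output_____"
]
],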
[
[
"### Интервальная оценка медианы",
"_____no_output_____"
]
],
[
[
"ilec_time = data[data.Group == 'ILEC'].Time.values\nclec_time = data[data.Group == 'CLEC'].Time.values",
"_____no_output_____"
],
[
"np.random.seed(0)\n\nilec_median_scores = map(np.median, get_bootstrap_samples(ilec_time, 1000))\nclec_median_scores = map(np.median, get_bootstrap_samples(clec_time, 1000))\n\nprint \"95% confidence interval for the ILEC median repair time:\", stat_intervals(ilec_median_scores, 0.05)\nprint \"95% confidence interval for the CLEC median repair time:\", stat_intervals(clec_median_scores, 0.05)",
"_____no_output_____"
]
],
[
[
"### Точечная оценка разности медиан",
"_____no_output_____"
]
],
[
[
"print \"difference between medians:\", np.median(clec_time) - np.median(ilec_time)",
"_____no_output_____"
]
],
[
[
"### Интервальная оценка разности медиан",
"_____no_output_____"
]
],
[
[
"delta_median_scores = map(lambda x: x[1] - x[0], zip(ilec_median_scores, clec_median_scores))",
"_____no_output_____"
],
[
"print \"95% confidence interval for the difference between medians\", stat_intervals(delta_median_scores, 0.05)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e76ef7c47730982cd93dbbc59770dc43cace2e0f | 6,241 | ipynb | Jupyter Notebook | Facial Recognition with PCA.ipynb | DavidBrear/sklearn-notebooks | 64d93720ab73a841184ee3bd508723b511dabb6f | [
"MIT"
] | 1 | 2017-11-10T04:50:52.000Z | 2017-11-10T04:50:52.000Z | Facial Recognition with PCA.ipynb | DavidBrear/sklearn-notebooks | 64d93720ab73a841184ee3bd508723b511dabb6f | [
"MIT"
] | null | null | null | Facial Recognition with PCA.ipynb | DavidBrear/sklearn-notebooks | 64d93720ab73a841184ee3bd508723b511dabb6f | [
"MIT"
] | 1 | 2019-05-20T18:58:20.000Z | 2019-05-20T18:58:20.000Z | 45.554745 | 1,721 | 0.628585 | [
[
[
"%matplotlib inline",
"_____no_output_____"
],
[
"from os import walk, path\nimport numpy as np\nimport mahotas as mh\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.cross_validation import cross_val_score\nfrom sklearn.preprocessing import scale\nfrom sklearn.decomposition import PCA\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import classification_report",
"_____no_output_____"
],
[
"X = []\ny = []",
"_____no_output_____"
],
[
"for dir_path, dir_name, file_names in walk('./data/att_faces/'):\n for fn in file_names:\n if fn[-3:] == 'pgm':\n image_filename = path.join(dir_path, fn)\n print image_filename\n X.append(scale(mh.imread(image_filename, as_grey=True).reshape(10304).astype('float32')))\n y.append(dir_path)\nX = np.array(X)",
"./data/att_faces/s1/1.pgm\n"
],
[
"X",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e76f00e8798f37b0e5a75369d57b214356d3b0a9 | 362,989 | ipynb | Jupyter Notebook | a_star.ipynb | Google-Developer-Student-Club-KPI/a-star | db51d38f6af4e5220622b80fc8f30c2a1f095969 | [
"MIT"
] | null | null | null | a_star.ipynb | Google-Developer-Student-Club-KPI/a-star | db51d38f6af4e5220622b80fc8f30c2a1f095969 | [
"MIT"
] | null | null | null | a_star.ipynb | Google-Developer-Student-Club-KPI/a-star | db51d38f6af4e5220622b80fc8f30c2a1f095969 | [
"MIT"
] | null | null | null | 295.593648 | 161,230 | 0.901008 | [
[
[
"# Imports",
"_____no_output_____"
]
],
[
[
"from geopy.geocoders import Nominatim\nfrom geopy.distance import distance\nfrom pprint import pprint\nimport pandas as pd\nimport random\nfrom typing import List, Tuple\nfrom dotenv import dotenv_values\nrandom.seed(123)\nconfig = dotenv_values(\".env\")",
"_____no_output_____"
]
],
[
[
"# Cities",
"_____no_output_____"
]
],
[
[
"country = \"Ukraine\"\ncities = [\"Lviv\", \"Chernihiv\", \"Dnipropetrovs'k\", \"Uzhgorod\", \"Kharkiv\", \"Odesa\", \n \"Poltava\", \"Kiev\", \"Zhytomyr\", \"Khmelnytskyi\", \"Vinnytsia\",\"Cherkasy\",\n \"Zaporizhia\", \"Ternopil\", \"Sumy\"]",
"_____no_output_____"
]
],
[
[
"# 1) Get Distinct Distance using geopy API\n",
"_____no_output_____"
]
],
[
[
"def get_distinct_distances(list:cities, str: country) -> pd.DataFrame:\n df = pd.DataFrame(index = cities, columns= cities)\n geolocator = Nominatim(user_agent=config[\"USER_AGENT\"], timeout = 10000)\n coordinates = dict()\n for city in cities:\n location = geolocator.geocode(city + \" \" + country)\n coordinates[city] = (location.latitude, location.longitude)\n for origin in range(len(cities)):\n for destination in range(origin, len(cities)):\n dist = distance(coordinates[cities[origin]], coordinates[cities[destination]]).km\n df[cities[origin]][cities[destination]] = dist\n df[cities[destination]][cities[origin]] = dist\n return df, coordinates",
"_____no_output_____"
],
[
"df_distinct, coordinates = get_distinct_distances(cities, country)\ndf_distinct.head(15)",
"_____no_output_____"
]
],
[
[
"# Download file to local",
"_____no_output_____"
]
],
[
[
"df_distinct.to_csv(\"data/direct_distances.csv\")",
"_____no_output_____"
]
],
[
[
"# 2) Get route distance using Openrouteservice API",
"_____no_output_____"
]
],
[
[
"import openrouteservice \nfrom pprint import pprint\ndef get_route_dataframe(coordinates: dict)->pd.DataFrame:\n client = openrouteservice.Client(key=config['API_KEY'])\n cities = list(coordinates.keys())\n df = pd.DataFrame(index = cities, columns= cities)\n for origin in range(len(coordinates.keys())):\n for destination in range(origin, len(coordinates.keys())):\n if origin != destination:\n l2 = ((coordinates[cities[origin]][1], coordinates[cities[origin]][0]),\n (coordinates[cities[destination]][1], coordinates[cities[destination]][0]))\n distance = client.directions(l2, units=\"km\", radiuses=-1)['routes'][0]['segments'][0]['distance']\n df[cities[origin]][cities[destination]] = df[cities[destination]][cities[origin]] = distance\n else:\n df[cities[origin]][cities[destination]] = df[cities[destination]][cities[origin]] = 0\n return df",
"_____no_output_____"
],
[
"import warnings\nwarnings.filterwarnings('ignore')\ndf_routes = get_route_dataframe(coordinates)\ndf_routes.head(15)",
"_____no_output_____"
]
],
[
[
"# Download file to local",
"_____no_output_____"
]
],
[
[
"df_routes.to_csv(\"data/route_distances.csv\")",
"_____no_output_____"
]
],
[
[
"# A* Algorithm",
"_____no_output_____"
]
],
[
[
"class AStar():\n def __init__(self, cities: list[str], country: str, distances: pd.DataFrame, heuristics: pd.DataFrame):\n self.cities = cities\n self.country = country\n self.distances = distances\n self.heuristics = heuristics\n \n def generate_map(self, low, high) -> dict[str, list[str]]:\n from networkx.generators.degree_seq import random_degree_sequence_graph\n import numpy as np\n import networkx as nx\n degrees = np.random.randint(low, high, len(self.cities))\n while not nx.is_graphical(degrees):\n degrees = np.random.randint(low, high, len(self.cities))\n graph = random_degree_sequence_graph(degrees)\n graph = nx.relabel.relabel_nodes(graph, mapping=dict(zip(range(15), self.cities)))\n graph = nx.to_dict_of_lists(graph)\n return graph\n \n def restore_path(self, current, camefrom: dict) -> list[str]:\n path = [current]\n while current in camefrom.keys():\n current = camefrom[current]\n path.insert(0, current)\n return path\n \n def run(self, origin:str, destination:str, country:str) -> Tuple[List[str], float]:\n gscore = dict().fromkeys(cities, float(\"inf\"))\n gscore[origin] = 0\n fscore = dict().fromkeys(cities, float(\"inf\"))\n fscore[origin] = self.heuristics[origin][destination]\n camefrom = dict()\n openset = []\n openset.append(origin)\n openset = list(sorted(openset, key = lambda x: fscore[x]))\n closed = []\n while openset:\n current_city = openset.pop(0)\n closed.append(current_city) \n if current_city == destination:\n return self.restore_path(current_city, camefrom), gscore[current_city]\n for neighbour in country[current_city]:\n if neighbour not in closed:\n tentative_gScore = gscore[current_city] + self.distances[current_city][neighbour]\n if tentative_gScore < gscore[neighbour]:\n camefrom[neighbour] = current_city\n gscore[neighbour] = tentative_gScore\n fscore[neighbour] = gscore[neighbour] + self.heuristics[neighbour][destination]\n if neighbour not in openset:\n openset.append(neighbour)\n openset = list(sorted(openset, key = lambda x: fscore[x]))\n return (None, 0)",
"_____no_output_____"
],
[
"low = 3\nhigh = 5\ndistances = pd.read_csv(\"data/route_distances.csv\", index_col = 0)\nheuristic = pd.read_csv(\"data/direct_distances.csv\",index_col = 0)\na_star = AStar(cities, country, distances, heuristic)",
"_____no_output_____"
],
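[
"# Illustrative sanity check (added sketch): A* relies on an admissible heuristic, i.e. the straight-line\n# distance between two cities should never exceed the driving-route distance.\nadmissible = all(heuristic[a][b] <= distances[a][b] + 1e-9 for a in cities for b in cities)\nprint(\"Heuristic admissible for all city pairs:\", admissible)",
"_____no_output_____"
],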
[
"pprint(a_star.generate_map(low,high))",
"{'Cherkasy': ['Chernihiv', 'Odesa', 'Zhytomyr', 'Ternopil'],\n 'Chernihiv': [\"Dnipropetrovs'k\", 'Poltava', 'Cherkasy', 'Kharkiv'],\n \"Dnipropetrovs'k\": ['Lviv', 'Chernihiv', 'Poltava'],\n 'Kharkiv': ['Chernihiv', 'Sumy', 'Vinnytsia'],\n 'Khmelnytskyi': ['Uzhgorod', 'Odesa', 'Ternopil'],\n 'Kiev': ['Lviv', 'Poltava', 'Sumy', 'Zaporizhia'],\n 'Lviv': ['Odesa', 'Zhytomyr', 'Kiev', \"Dnipropetrovs'k\"],\n 'Odesa': ['Lviv', 'Uzhgorod', 'Cherkasy', 'Khmelnytskyi'],\n 'Poltava': ['Chernihiv', \"Dnipropetrovs'k\", 'Vinnytsia', 'Kiev'],\n 'Sumy': ['Kharkiv', 'Kiev', 'Zhytomyr'],\n 'Ternopil': ['Khmelnytskyi', 'Vinnytsia', 'Cherkasy'],\n 'Uzhgorod': ['Odesa', 'Vinnytsia', 'Khmelnytskyi', 'Zaporizhia'],\n 'Vinnytsia': ['Uzhgorod', 'Kharkiv', 'Poltava', 'Ternopil'],\n 'Zaporizhia': ['Uzhgorod', 'Kiev', 'Zhytomyr'],\n 'Zhytomyr': ['Lviv', 'Sumy', 'Cherkasy', 'Zaporizhia']}\n"
]
],
[
[
"# Display the Graph",
"_____no_output_____"
]
],
[
[
"import networkx as nx\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"Map = a_star.generate_map(low, high)\ngraph = nx.Graph()\ngraph.add_nodes_from(Map.keys())\nfor origin, destinations in Map.items():\n graph.add_weighted_edges_from(([(origin, destination, weight) for destination, weight in zip(destinations, [distances[origin][dest] for dest in destinations])]))\npos = nx.fruchterman_reingold_layout(graph, seed = 321)\nplt.figure(figsize = (30, 18))\nnx.draw_networkx_nodes(graph, pos, node_color=\"yellow\",label=\"blue\", node_size = 1500)\nnx.draw_networkx_labels(graph, pos, font_color=\"blue\")\nnx.draw_networkx_edges(graph, pos, edge_color='blue')\nnx.draw_networkx_edge_labels(graph,pos, edge_labels=nx.get_edge_attributes(graph,'weight'), font_color = \"brown\")\nplt.show()",
"_____no_output_____"
],
[
"path, distance = a_star.run(\"Vinnytsia\",'Poltava', Map)\nprint(\"Solution:{}\\nDistance:{}\".format(\",\".join(path), distance))",
"Solution:Vinnytsia,Dnipropetrovs'k,Poltava\nDistance:705.274\n"
]
],
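[
[
"# Cross-check (illustrative addition): the total distance reported by A* should equal the sum of the\n# individual route segments along the returned path.\nsegment_sum = sum(distances[path[i]][path[i + 1]] for i in range(len(path) - 1))\nprint(\"Sum of the segments along the path:\", segment_sum)",
"_____no_output_____"
]
],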
[
[
"# Draw the solution",
"_____no_output_____"
]
],
[
[
"edges = [(path[i-1], path[i]) for i in range(1, len(path))]\nplt.figure(figsize = (30, 18))\nnx.draw_networkx_nodes(graph, pos, node_color=\"yellow\",label=\"blue\", node_size = 1500)\nnx.draw_networkx_labels(graph, pos, font_color=\"blue\")\nnx.draw_networkx_edges(graph, pos, edge_color='blue', arrows=False)\nnx.draw_networkx_edges(graph, pos, edgelist=edges ,edge_color='red', width = 5, alpha = 0.7, arrows=True)\nnx.draw_networkx_edge_labels(graph,pos, edge_labels=nx.get_edge_attributes(graph,'weight'), font_color = \"brown\")\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e76f0b7cd219b2616c27700b4a8acea5021b36f0 | 2,530 | ipynb | Jupyter Notebook | notebooks/encounter_test.ipynb | osmanylc/deep-rl-collision-avoidance | e7f0831b683f78d741dd488c9c8b0403bf17abd5 | [
"MIT"
] | null | null | null | notebooks/encounter_test.ipynb | osmanylc/deep-rl-collision-avoidance | e7f0831b683f78d741dd488c9c8b0403bf17abd5 | [
"MIT"
] | null | null | null | notebooks/encounter_test.ipynb | osmanylc/deep-rl-collision-avoidance | e7f0831b683f78d741dd488c9c8b0403bf17abd5 | [
"MIT"
] | null | null | null | 21.260504 | 65 | 0.503953 | [
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"from mdp.encounter import mc_encounter, action_generator\nfrom mdp.transition import advance_ac\nfrom mdp.action import a_int, NUM_A\nfrom mdp.state import state_to_obs, State\n\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"## Check we get right proportion of NMACs",
"_____no_output_____"
]
],
[
[
"num_enc = 10000\nnum_nmac = 0\nNMAC_R = 500\n\nfor _ in range(num_enc):\n tca = 50\n st, int_act_gen = mc_encounter(tca)\n ac0, ac1, prev_a = st\n\n for _ in range(tca):\n ac0 = advance_ac(ac0, a_int('NOOP'))\n ac1 = advance_ac(ac1, next(int_act_gen))\n \n obs = state_to_obs(State(ac0, ac1, a_int('NOOP')))\n \n if obs.r <= NMAC_R:\n num_nmac += 1\n\nnum_nmac / num_enc",
"_____no_output_____"
],
[
"avg_maneuver_len = 15\nNUM_A = 3\n\np_self = (avg_maneuver_len - 1) / avg_maneuver_len\np_trans = (1 - p_self) / (NUM_A - 1)\n\np_t = ((p_self - p_trans) * np.identity(NUM_A)\n + p_trans * np.ones((NUM_A, NUM_A)))",
"_____no_output_____"
],
[
"acts = [10, 10, 12, 123]\ng = action_generator(p_t, acts)",
"_____no_output_____"
],
[
"1 / NUM_A * np.ones((NUM_A, NUM_A))",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e76f277c8f069b165e7c366ab4a40ba8c12533f7 | 11,860 | ipynb | Jupyter Notebook | notebooks/sklearn-mnist-nn.ipynb | CSCfi/machine-learning-scripts | 005f9343fb703ca2b6b11b5c2369e19efcaa5f62 | [
"MIT"
] | 59 | 2018-04-27T04:34:41.000Z | 2022-03-16T02:43:50.000Z | machine-learning-scripts/notebooks/sklearn-mnist-nn.ipynb | exajobs/machine-learning-collection | 84444f0bfe351efea6e3b2813e47723bd8d769cc | [
"MIT"
] | 1 | 2020-10-10T05:04:00.000Z | 2020-10-12T08:19:38.000Z | notebooks/sklearn-mnist-nn.ipynb | CSCfi/machine-learning-scripts | 005f9343fb703ca2b6b11b5c2369e19efcaa5f62 | [
"MIT"
] | 53 | 2017-04-14T09:35:04.000Z | 2022-02-28T19:19:36.000Z | 24.554865 | 446 | 0.571332 | [
[
[
"# MNIST handwritten digits classification with nearest neighbors \n\nIn this notebook, we'll use [nearest-neighbor classifiers](http://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbors-classification) to classify MNIST digits using scikit-learn (version 0.20 or later required).\n\nFirst, the needed imports. ",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nfrom pml_utils import get_mnist, show_failures\n\nimport numpy as np\nfrom sklearn import neighbors, __version__\nfrom sklearn.metrics import accuracy_score, confusion_matrix, classification_report\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set()\n\nfrom distutils.version import LooseVersion as LV\nassert(LV(__version__) >= LV(\"0.20\")), \"Version >= 0.20 of sklearn is required.\"",
"_____no_output_____"
]
],
[
[
"Then we load the MNIST data. First time we need to download the data, which can take a while.",
"_____no_output_____"
]
],
[
[
"X_train, y_train, X_test, y_test = get_mnist('MNIST')\n\nprint('MNIST data loaded: train:',len(X_train),'test:',len(X_test))\nprint('X_train:', X_train.shape)\nprint('y_train:', y_train.shape)\nprint('X_test', X_test.shape)\nprint('y_test', y_test.shape)",
"_____no_output_____"
]
],
[
[
"The training data (`X_train`) is a matrix of size (60000, 784), i.e. it consists of 60000 digits expressed as 784 sized vectors (28x28 images flattened to 1D). `y_train` is a 60000-dimensional vector containing the correct classes (\"0\", \"1\", ..., \"9\") for each training digit.\n\nLet's take a closer look. Here are the first 10 training digits:",
"_____no_output_____"
]
],
[
[
"pltsize=1\nplt.figure(figsize=(10*pltsize, pltsize))\n\nfor i in range(10):\n plt.subplot(1,10,i+1)\n plt.axis('off')\n plt.imshow(X_train[i,:].reshape(28, 28), cmap=\"gray\")\n plt.title('Class: '+y_train[i])",
"_____no_output_____"
]
],
[
[
"## 1-NN classifier\n\n### Initialization\n\nLet's create first a 1-NN classifier. Note that with nearest-neighbor classifiers there is no internal (parameterized) model and therefore no learning required. Instead, calling the `fit()` function simply stores the samples of the training data in a suitable data structure.",
"_____no_output_____"
]
],
[
[
"%%time\n\nn_neighbors = 1\nclf_nn = neighbors.KNeighborsClassifier(n_neighbors)\nclf_nn.fit(X_train, y_train)",
"_____no_output_____"
]
],
[
[
"### Inference\n\nAnd try to classify some test samples with it.",
"_____no_output_____"
]
],
[
[
"%%time\n\npred_nn = clf_nn.predict(X_test[:200,:])",
"_____no_output_____"
]
],
[
[
"We observe that the classifier is rather slow, and classifying the whole test set would take quite some time. What is the reason for this?\n\nThe accuracy of the classifier:",
"_____no_output_____"
]
],
[
[
"print('Predicted', len(pred_nn), 'digits with accuracy:',\n accuracy_score(y_test[:len(pred_nn)], pred_nn))",
"_____no_output_____"
]
],
[
[
"## Faster 1-NN classifier\n\n### Initialization\n\nOne way to make our 1-NN classifier faster is to use less training data:",
"_____no_output_____"
]
],
[
[
"%%time\n\nn_neighbors = 1\nn_data = 1024\nclf_nn_fast = neighbors.KNeighborsClassifier(n_neighbors)\nclf_nn_fast.fit(X_train[:n_data,:], y_train[:n_data])",
"_____no_output_____"
]
],
[
[
"### Inference\n\nNow we can use the classifier created with reduced data to classify our whole test set in a reasonable amount of time.",
"_____no_output_____"
]
],
[
[
"%%time\n\npred_nn_fast = clf_nn_fast.predict(X_test)",
"_____no_output_____"
]
],
[
[
"The classification accuracy is however now not as good:",
"_____no_output_____"
]
],
[
[
"print('Predicted', len(pred_nn_fast), 'digits with accuracy:',\n accuracy_score(y_test, pred_nn_fast))",
"_____no_output_____"
]
],
[
[
"#### Confusion matrix\n\nWe can compute the confusion matrix to see which digits get mixed the most:",
"_____no_output_____"
]
],
[
[
"labels=[str(i) for i in range(10)]\nprint('Confusion matrix (rows: true classes; columns: predicted classes):'); print()\ncm=confusion_matrix(y_test, pred_nn_fast, labels=labels)\nprint(cm); print()",
"_____no_output_____"
]
],
[
[
"Plotted as an image:",
"_____no_output_____"
]
],
[
[
"plt.matshow(cm, cmap=plt.cm.gray)\nplt.xticks(range(10))\nplt.yticks(range(10))\nplt.grid(None)\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Accuracy, precision and recall\n\nClassification accuracy for each class:",
"_____no_output_____"
]
],
[
[
"for i,j in enumerate(cm.diagonal()/cm.sum(axis=1)): print(\"%d: %.4f\" % (i,j))",
"_____no_output_____"
]
],
[
[
"Precision and recall for each class:",
"_____no_output_____"
]
],
[
[
"print(classification_report(y_test, pred_nn_fast, labels=labels))",
"_____no_output_____"
]
],
[
[
"#### Failure analysis\n\nWe can also inspect the results in more detail. Let's use the `show_failures()` helper function (defined in `pml_utils.py`) to show the wrongly classified test digits.\n\nThe helper function is defined as:\n\n```\nshow_failures(predictions, y_test, X_test, trueclass=None, predictedclass=None, maxtoshow=10)\n```\n\nwhere:\n- `predictions` is a vector with the predicted classes for each test set image\n- `y_test` the _correct_ classes for the test set images\n- `X_test` the test set images\n- `trueclass` can be set to show only images for a given correct (true) class\n- `predictedclass` can be set to show only images which were predicted as a given class\n- `maxtoshow` specifies how many items to show\n",
"_____no_output_____"
]
],
[
[
"show_failures(pred_nn_fast, y_test, X_test)",
"_____no_output_____"
]
],
[
[
"We can use `show_failures()` to inspect failures in more detail. For example:\n\n* show failures in which the true class was \"5\":",
"_____no_output_____"
]
],
[
[
"show_failures(pred_nn_fast, y_test, X_test, trueclass='5')",
"_____no_output_____"
]
],
[
[
"* show failures in which the prediction was \"0\":",
"_____no_output_____"
]
],
[
[
"show_failures(pred_nn_fast, y_test, X_test, predictedclass='0')",
"_____no_output_____"
]
],
[
[
"* show failures in which the true class was \"0\" and the prediction was \"2\":",
"_____no_output_____"
]
],
[
[
"show_failures(pred_nn_fast, y_test, X_test, trueclass='0', predictedclass='2')",
"_____no_output_____"
]
],
[
[
"We can observe that the classifier makes rather \"easy\" mistakes, and there might thus be room for improvement.",
"_____no_output_____"
],
[
"## Model tuning\n\nTry to improve the accuracy of the nearest-neighbor classifier while preserving a reasonable runtime to classify the whole test set. Things to try include using more than one neighbor (with or without weights) or increasing the amount of training data. See the documentation for [KNeighborsClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html#sklearn-neighbors-kneighborsclassifier).\n\nSee also http://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbors-classification for more information.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e76f2974d3c4025b6a5953967540f14a994d34e7 | 198,984 | ipynb | Jupyter Notebook | _notebooks/2021-01-07-seq-classification.ipynb | aizardar/blogs | 2ac4801a8cc95b5ecce6e3e070b6ca59b0e21027 | [
"Apache-2.0"
] | null | null | null | _notebooks/2021-01-07-seq-classification.ipynb | aizardar/blogs | 2ac4801a8cc95b5ecce6e3e070b6ca59b0e21027 | [
"Apache-2.0"
] | 3 | 2021-05-20T21:28:09.000Z | 2022-02-26T09:55:14.000Z | _notebooks/2021-01-07-seq-classification.ipynb | aizardar/blogs | 2ac4801a8cc95b5ecce6e3e070b6ca59b0e21027 | [
"Apache-2.0"
] | null | null | null | 328.356436 | 91,950 | 0.909219 | [
[
[
"# \"Sequence classification using Recurrent Neural Networks\"\n> \"PyTorch implementation for sequence classification using RNNs\"\n- toc: false\n- branch: master\n- badges: true\n- comments: true\n- categories: [PyTorch, classification, RNN]\n- image: images/\n- hide: false\n- search_exclude: true\n- metadata_key1: metadata_value1\n- metadata_key2: metadata_value2\n- use_math: true",
"_____no_output_____"
],
[
"This notebook is copied/adapted from [here](https://github.com/Atcold/pytorch-Deep-Learning/blob/master/08-seq_classification.ipynb). For a detailed working of RNNs, please follow this [link](https://atcold.github.io/pytorch-Deep-Learning/en/week06/06-3/). This notebook also serves as a template for PyTorch implementation for any model architecture (simply replace the model section with your own model architecture) \n\n",
"_____no_output_____"
],
[
"# An example of many-to-one (sequence classification)\n\nOriginal experiment from [Hochreiter & Schmidhuber (1997)](www.bioinf.jku.at/publications/older/2604.pdf).\n\nThe goal here is to classify sequences.\nElements and targets are represented locally (input vectors with only one non-zero bit).\nThe sequence starts with a `B`, ends with a `E` (the “trigger symbol”), and otherwise consists of randomly chosen symbols from the set `{a, b, c, d}` except for two elements at positions `t1` and `t2` that are either `X` or `Y`.\nFor the `DifficultyLevel.HARD` case, the sequence length is randomly chosen between `100` and `110`, `t1` is randomly chosen between `10` and `20`, and `t2` is randomly chosen between `50` and `60`.\nThere are `4` sequence classes `Q`, `R`, `S`, and `U`, which depend on the temporal order of `X` and `Y`.\n\nThe rules are:\n\n```\nX, X -> Q,\nX, Y -> R,\nY, X -> S,\nY, Y -> U.\n```",
"_____no_output_____"
],
[
"## 1. Dataset Exploration\n\n#### Let's explore our dataset. ",
"_____no_output_____"
]
],
[
[
"from sequential_tasks import TemporalOrderExp6aSequence as QRSU ",
"_____no_output_____"
],
[
"# Create a data generator. Predefined generator is implemented in file sequential_tasks. \nexample_generator = QRSU.get_predefined_generator(\n difficulty_level=QRSU.DifficultyLevel.EASY, \n batch_size=32,\n) \n\nexample_batch = example_generator[1]\nprint(f'The return type is a {type(example_batch)} with length {len(example_batch)}.')\nprint(f'The first item in the tuple is the batch of sequences with shape {example_batch[0].shape}.')\nprint(f'The first element in the batch of sequences is:\\n {example_batch[0][0, :, :]}')\nprint(f'The second item in the tuple is the corresponding batch of class labels with shape {example_batch[1].shape}.')\nprint(f'The first element in the batch of class labels is:\\n {example_batch[1][0, :]}')",
"The return type is a <class 'tuple'> with length 2.\nThe first item in the tuple is the batch of sequences with shape (32, 9, 8).\nThe first element in the batch of sequences is:\n [[0 0 0 0 0 0 0 0]\n [0 0 0 0 0 0 1 0]\n [0 0 0 1 0 0 0 0]\n [1 0 0 0 0 0 0 0]\n [0 0 0 0 1 0 0 0]\n [1 0 0 0 0 0 0 0]\n [0 0 0 0 1 0 0 0]\n [0 0 0 1 0 0 0 0]\n [0 0 0 0 0 0 0 1]]\nThe second item in the tuple is the corresponding batch of class labels with shape (32, 4).\nThe first element in the batch of class labels is:\n [1. 0. 0. 0.]\n"
],
[
"# Decoding the first sequence\nsequence_decoded = example_generator.decode_x(example_batch[0][0, :, :])\nprint(f'The sequence is: {sequence_decoded}')\n\n# Decoding the class label of the first sequence\nclass_label_decoded = example_generator.decode_y(example_batch[1][0])\nprint(f'The class label is: {class_label_decoded}')",
"The sequence is: BbXcXcbE\nThe class label is: Q\n"
]
],
[
[
"We can see that our sequence contain 8 elements starting with B and ending with E. This sequence belong to class Q as per the rule defined earlier. Each element is one-hot encoded. Thus, we can represent our first sequence (BbXcXcbE) with a sequence of rows of one-hot encoded vectors (as shown above). . Similarly, class `Q` can be decoded as [1,0,0,0]. ",
"_____no_output_____"
],
[
"## 2. Defining the Model\n\nLet's now define our simple recurrent neural network. ",
"_____no_output_____"
]
],
[
[
"import torch\nimport torch.nn as nn\n\n# Set the random seed for reproducible results\ntorch.manual_seed(1)\n\nclass SimpleRNN(nn.Module):\n def __init__(self, input_size, hidden_size, output_size):\n # This just calls the base class constructor\n super().__init__()\n # Neural network layers assigned as attributes of a Module subclass\n # have their parameters registered for training automatically.\n self.rnn = torch.nn.RNN(input_size, hidden_size, nonlinearity='relu', batch_first=True)\n self.linear = torch.nn.Linear(hidden_size, output_size)\n\n def forward(self, x):\n # The RNN also returns its hidden state but we don't use it.\n # While the RNN can also take a hidden state as input, the RNN\n # gets passed a hidden state initialized with zeros by default.\n h = self.rnn(x)[0]\n x = self.linear(h)\n return x",
"_____no_output_____"
]
],
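[
[
"# Quick shape check (illustrative addition): pushing one batch from the generator through an untrained\n# model should give one score per class for every element of every sequence, i.e. shape (32, 9, 4).\ntmp_model = SimpleRNN(input_size=8, hidden_size=4, output_size=4)\ntmp_batch = torch.from_numpy(example_batch[0]).float()\nprint(tmp_model(tmp_batch).shape)",
"_____no_output_____"
]
],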
[
[
"## 3. Defining the Training Loop",
"_____no_output_____"
]
],
[
[
"def train(model, train_data_gen, criterion, optimizer, device):\n # Set the model to training mode. This will turn on layers that would\n # otherwise behave differently during evaluation, such as dropout.\n model.train()\n\n # Store the number of sequences that were classified correctly\n num_correct = 0\n\n # Iterate over every batch of sequences. Note that the length of a data generator\n # is defined as the number of batches required to produce a total of roughly 1000\n # sequences given a batch size.\n for batch_idx in range(len(train_data_gen)):\n\n # Request a batch of sequences and class labels, convert them into tensors\n # of the correct type, and then send them to the appropriate device.\n data, target = train_data_gen[batch_idx]\n data, target = torch.from_numpy(data).float().to(device), torch.from_numpy(target).long().to(device)\n\n # Perform the forward pass of the model\n output = model(data) # Step ①\n\n # Pick only the output corresponding to last sequence element (input is pre padded)\n output = output[:, -1, :] # For many-to-one RNN architecture, we need output from last RNN cell only.\n\n # Compute the value of the loss for this batch. For loss functions like CrossEntropyLoss,\n # the second argument is actually expected to be a tensor of class indices rather than\n # one-hot encoded class labels. One approach is to take advantage of the one-hot encoding\n # of the target and call argmax along its second dimension to create a tensor of shape\n # (batch_size) containing the index of the class label that was hot for each sequence.\n target = target.argmax(dim=1) # For example, [0,1,0,0] will correspond to 1 (index start from 0)\n\n loss = criterion(output, target) # Step ②\n\n # Clear the gradient buffers of the optimized parameters.\n # Otherwise, gradients from the previous batch would be accumulated.\n optimizer.zero_grad() # Step ③\n\n loss.backward() # Step ④\n\n optimizer.step() # Step ⑤\n\n y_pred = output.argmax(dim=1)\n num_correct += (y_pred == target).sum().item()\n\n return num_correct, loss.item()",
"_____no_output_____"
]
],
[
[
"## 4. Defining the Testing Loop",
"_____no_output_____"
]
],
[
[
"def test(model, test_data_gen, criterion, device):\n # Set the model to evaluation mode. This will turn off layers that would\n # otherwise behave differently during training, such as dropout.\n model.eval()\n\n # Store the number of sequences that were classified correctly\n num_correct = 0\n\n # A context manager is used to disable gradient calculations during inference\n # to reduce memory usage, as we typically don't need the gradients at this point.\n with torch.no_grad():\n for batch_idx in range(len(test_data_gen)):\n data, target = test_data_gen[batch_idx]\n data, target = torch.from_numpy(data).float().to(device), torch.from_numpy(target).long().to(device)\n\n output = model(data)\n # Pick only the output corresponding to last sequence element (input is pre padded)\n output = output[:, -1, :]\n\n target = target.argmax(dim=1)\n loss = criterion(output, target)\n\n y_pred = output.argmax(dim=1)\n num_correct += (y_pred == target).sum().item()\n\n return num_correct, loss.item()",
"_____no_output_____"
]
],
[
[
"## 5. Putting it All Together",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nfrom plot_lib import set_default, plot_state, print_colourbar",
"_____no_output_____"
],
[
"set_default()",
"_____no_output_____"
],
[
"def train_and_test(model, train_data_gen, test_data_gen, criterion, optimizer, max_epochs, verbose=True):\n # Automatically determine the device that PyTorch should use for computation\n device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')\n\n # Move model to the device which will be used for train and test\n model.to(device)\n\n # Track the value of the loss function and model accuracy across epochs\n history_train = {'loss': [], 'acc': []}\n history_test = {'loss': [], 'acc': []}\n\n for epoch in range(max_epochs):\n # Run the training loop and calculate the accuracy.\n # Remember that the length of a data generator is the number of batches,\n # so we multiply it by the batch size to recover the total number of sequences.\n num_correct, loss = train(model, train_data_gen, criterion, optimizer, device)\n accuracy = float(num_correct) / (len(train_data_gen) * train_data_gen.batch_size) * 100\n history_train['loss'].append(loss)\n history_train['acc'].append(accuracy)\n\n # Do the same for the testing loop\n num_correct, loss = test(model, test_data_gen, criterion, device)\n accuracy = float(num_correct) / (len(test_data_gen) * test_data_gen.batch_size) * 100\n history_test['loss'].append(loss)\n history_test['acc'].append(accuracy)\n\n if verbose or epoch + 1 == max_epochs:\n print(f'[Epoch {epoch + 1}/{max_epochs}]'\n f\" loss: {history_train['loss'][-1]:.4f}, acc: {history_train['acc'][-1]:2.2f}%\"\n f\" - test_loss: {history_test['loss'][-1]:.4f}, test_acc: {history_test['acc'][-1]:2.2f}%\")\n\n # Generate diagnostic plots for the loss and accuracy\n fig, axes = plt.subplots(ncols=2, figsize=(9, 4.5))\n for ax, metric in zip(axes, ['loss', 'acc']):\n ax.plot(history_train[metric])\n ax.plot(history_test[metric])\n ax.set_xlabel('epoch', fontsize=12)\n ax.set_ylabel(metric, fontsize=12)\n ax.legend(['Train', 'Test'], loc='best')\n plt.show()\n\n return model",
"_____no_output_____"
]
],
[
[
"## 5. Simple RNN: 10 Epochs\n\nLet's create a simple recurrent network and train for 10 epochs. ",
"_____no_output_____"
]
],
[
[
"# Setup the training and test data generators\ndifficulty = QRSU.DifficultyLevel.EASY\nbatch_size = 32\ntrain_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)\ntest_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)\n\n# Setup the RNN and training settings\ninput_size = train_data_gen.n_symbols\nhidden_size = 4\noutput_size = train_data_gen.n_classes\nmodel = SimpleRNN(input_size, hidden_size, output_size)\ncriterion = torch.nn.CrossEntropyLoss()\noptimizer = torch.optim.RMSprop(model.parameters(), lr=0.001)\nmax_epochs = 10\n\n# Train the model\nmodel = train_and_test(model, train_data_gen, test_data_gen, criterion, optimizer, max_epochs)",
"[Epoch 1/10] loss: 1.4213, acc: 24.29% - test_loss: 1.3710, test_acc: 31.35%\n[Epoch 2/10] loss: 1.3603, acc: 30.75% - test_loss: 1.3846, test_acc: 37.50%\n[Epoch 3/10] loss: 1.3673, acc: 40.22% - test_loss: 1.3910, test_acc: 40.12%\n[Epoch 4/10] loss: 1.3636, acc: 39.82% - test_loss: 1.3435, test_acc: 43.85%\n[Epoch 5/10] loss: 1.2752, acc: 44.35% - test_loss: 1.3282, test_acc: 36.59%\n[Epoch 6/10] loss: 1.2631, acc: 40.22% - test_loss: 1.2412, test_acc: 41.03%\n[Epoch 7/10] loss: 1.2340, acc: 44.35% - test_loss: 1.2670, test_acc: 47.58%\n[Epoch 8/10] loss: 1.1193, acc: 50.40% - test_loss: 1.2446, test_acc: 48.69%\n[Epoch 9/10] loss: 1.2194, acc: 48.89% - test_loss: 1.1098, test_acc: 50.81%\n[Epoch 10/10] loss: 1.0805, acc: 48.69% - test_loss: 1.0649, test_acc: 52.72%\n"
],
[
"for parameter_group in list(model.parameters()):\n print(parameter_group.size())",
"torch.Size([4, 8])\ntorch.Size([4, 4])\ntorch.Size([4])\ntorch.Size([4])\ntorch.Size([4, 4])\ntorch.Size([4])\n"
]
],
[
[
"## 6. RNN: Increasing Epoch to 100",
"_____no_output_____"
]
],
[
[
"# Setup the training and test data generators\ndifficulty = QRSU.DifficultyLevel.EASY\nbatch_size = 32\ntrain_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)\ntest_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)\n\n# Setup the RNN and training settings\ninput_size = train_data_gen.n_symbols\nhidden_size = 4\noutput_size = train_data_gen.n_classes\nmodel = SimpleRNN(input_size, hidden_size, output_size)\ncriterion = torch.nn.CrossEntropyLoss()\noptimizer = torch.optim.RMSprop(model.parameters(), lr=0.001)\nmax_epochs = 100\n\n# Train the model\nmodel = train_and_test(model, train_data_gen, test_data_gen, criterion, optimizer, max_epochs, verbose=False)",
"[Epoch 100/100] loss: 0.0081, acc: 100.00% - test_loss: 0.0069, test_acc: 100.00%\n"
]
],
[
[
"We see that with short 8-element sequences, RNN gets about 50% accuracy. On further increasing epochs to 100, RNN gets 100% accuracy, though taking longer time to train. For a longer sequence, RNNs fail to memorize the information. Long Short-Term Memory(LSTM) solves long term memory loss by building up memory cells to preserve past information. For a very detailed explanation on the working of LSTMs, please follow this [link](https://colah.github.io/posts/2015-08-Understanding-LSTMs/). In my other notebook, we will see how LSTMs perform with even longer sequence classification. ",
"_____no_output_____"
]
]
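,
[
[
"As a quick sketch of that swap (keeping the data generators and the training loop above unchanged), an LSTM-based variant only needs a different recurrent layer. The class name `SimpleLSTM` below is illustrative and not part of the original experiment.",
"_____no_output_____"
]
],
[
[
"class SimpleLSTM(torch.nn.Module):\n def __init__(self, input_size, hidden_size, output_size):\n super().__init__()\n # The LSTM keeps an additional cell state, which helps preserve information across long sequences;\n # batch_first=True matches the (batch, sequence, feature) layout used by the data generator.\n self.lstm = torch.nn.LSTM(input_size, hidden_size, batch_first=True)\n self.linear = torch.nn.Linear(hidden_size, output_size)\n\n def forward(self, x):\n h = self.lstm(x)[0]\n return self.linear(h)",
"_____no_output_____"
]
]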
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e76f3c02549095613c6ff4ac3c5d6c732adac3e3 | 4,904 | ipynb | Jupyter Notebook | make_racs.ipynb | craigerboi/oer_active_learning | a20232d8201c58cede893c1032ee76c05da40f03 | [
"MIT"
] | 3 | 2021-12-08T09:57:30.000Z | 2022-02-25T06:45:48.000Z | make_racs.ipynb | craigerboi/oer_active_learning | a20232d8201c58cede893c1032ee76c05da40f03 | [
"MIT"
] | null | null | null | make_racs.ipynb | craigerboi/oer_active_learning | a20232d8201c58cede893c1032ee76c05da40f03 | [
"MIT"
] | 2 | 2021-12-08T03:31:03.000Z | 2021-12-08T09:57:31.000Z | 35.280576 | 125 | 0.558524 | [
[
[
"# Make RACs from initial structure",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport pickle\n\nfrom collections import defaultdict\nfrom molSimplify.Informatics.autocorrelation import*\n",
"_____no_output_____"
],
[
"def make_rac(xyz_file, m_depth, l_depth, is_oct):\n properties = ['electronegativity', 'size', 'polarizability', 'nuclear_charge']\n this_mol = mol3D() # mol3D instance\n this_mol.readfromxyz(xyz_file)\n feature_names = []\n mc_corrs = np.zeros(shape=(len(properties), (m_depth+1)))\n metal_idx = this_mol.findMetal()[0]\n mc_delta_metricz = np.zeros(shape=(len(properties), m_depth))\n for idx, p in enumerate(properties):\n delta_list = list(np.asarray(atom_only_deltametric(this_mol, p, m_depth, metal_idx, oct=is_oct)).flatten())\n del delta_list[0]\n mc_corrs[idx] = np.asarray(atom_only_autocorrelation(this_mol, p, m_depth, metal_idx, oct=is_oct)).flatten()\n name_of_idx = [\"MC-mult-{}-{}\".format(p, x) for x in range(0, m_depth+1)]\n mc_delta_metricz[idx] = delta_list\n feature_names.extend(name_of_idx)\n name_of_idx_diff = [\"MC-diff-{}-{}\".format(p, x) for x in range(1, m_depth+1)]\n feature_names.extend(name_of_idx_diff)\n \n if is_oct:\n num_connectors = 6\n else:\n num_connectors = 5\n distances = []\n origin = this_mol.coordsvect()[metal_idx]\n for xyz in this_mol.coordsvect():\n distances.append(np.sqrt((xyz[0]-origin[0])**2+(xyz[1]-origin[1])**2+(xyz[2]-origin[2])**2))\n\n nearest_neighbours = np.argpartition(distances, num_connectors)\n nn = [x for x in nearest_neighbours[:num_connectors+1] if x != 0]\n rest_of_autoz = np.zeros(shape=(len(properties), l_depth+1))\n rest_of_deltas = np.zeros(shape=(len(properties), l_depth))\n for idx, p in enumerate(properties):\n rest_of_autoz[idx] = atom_only_autocorrelation(this_mol, p, l_depth, nn, oct=is_oct)\n rest_of_deltas[idx] = atom_only_deltametric(this_mol, p, l_depth, nn)[1:]\n name_of_idx = [\"LC-mult-{}-{}\".format(p, x) for x in range(0, l_depth+1)]\n name_of_idx_diff = [\"LC-diff-{}-{}\".format(p, x) for x in range(1, l_depth+1)]\n feature_names.extend(name_of_idx)\n \n \n rac_res = np.concatenate((mc_corrs, mc_delta_metricz, rest_of_autoz, rest_of_deltas),\n axis=None)\n\n return rac_res, feature_names",
"_____no_output_____"
]
],
[
[
"Now we define different racs with differing feature depths so we can perform the gridsearch in rac_depth_search.ipynb",
"_____no_output_____"
]
],
[
[
"mc_depths = [2, 3, 4]\nlc_depths = [0, 1]\n\noer_desc_data = pickle.load(open(\"racs_and_desc/oer_desc_data.p\", \"rb\"),)\nname2oer_desc_and_rac = defaultdict()\nfor mc_d in mc_depths:\n for lc_d in lc_depths:\n racs = []\n oer_desc_for_ml = []\n cat_names_for_ml = []\n for name in oer_desc_data:\n oer_desc = oer_desc_data[name][0]\n rac = np.asarray(make_rac(oer_desc_data[name][1], mc_d, lc_d, is_oct=True)[0])\n name2oer_desc_and_rac[name] = (oer_desc, rac)\n pickle.dump(name2oer_desc_and_rac, open(\"racs_and_desc/data_mc{}_lc{}.p\".format(mc_d, lc_d), \"wb\"))\n # overwrite for the next iteration\n name2oer_desc_and_rac = defaultdict()\n ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e76f3d2804ad0a2320d3c59d0f6266860d9a073f | 253,438 | ipynb | Jupyter Notebook | docs/practices/linear_regression/linear_regression.ipynb | Liu-xiandong/docs | 6a39d6f18b80b73dd0ac8c3cbb13e2cd66645d2f | [
"Apache-2.0"
] | null | null | null | docs/practices/linear_regression/linear_regression.ipynb | Liu-xiandong/docs | 6a39d6f18b80b73dd0ac8c3cbb13e2cd66645d2f | [
"Apache-2.0"
] | null | null | null | docs/practices/linear_regression/linear_regression.ipynb | Liu-xiandong/docs | 6a39d6f18b80b73dd0ac8c3cbb13e2cd66645d2f | [
"Apache-2.0"
] | null | null | null | 341.100942 | 152,796 | 0.932646 | [
[
[
"# 使用线性回归预测波士顿房价\n\n**作者:** [PaddlePaddle](https://github.com/PaddlePaddle) <br>\n**日期:** 2021.05 <br>\n**摘要:** 本示例教程将会演示如何使用线性回归完成波士顿房价预测。",
"_____no_output_____"
],
[
"## 一、简要介绍\n经典的线性回归模型主要用来预测一些存在着线性关系的数据集。回归模型可以理解为:存在一个点集,用一条曲线去拟合它分布的过程。如果拟合曲线是一条直线,则称为线性回归。如果是一条二次曲线,则被称为二次回归。线性回归是回归模型中最简单的一种。 \n本示例简要介绍如何用飞桨开源框架,实现波士顿房价预测。其思路是,假设uci-housing数据集中的房子属性和房价之间的关系可以被属性间的线性组合描述。在模型训练阶段,让假设的预测结果和真实值之间的误差越来越小。在模型预测阶段,预测器会读取训练好的模型,对从未遇见过的房子属性进行房价预测。",
"_____no_output_____"
],
[
"## 二、环境配置\n\n本教程基于Paddle 2.1 编写,如果你的环境不是本版本,请先参考官网[安装](https://www.paddlepaddle.org.cn/install/quick) Paddle 2.1 。",
"_____no_output_____"
]
],
[
[
"import paddle\nimport numpy as np\nimport os\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nprint(paddle.__version__)",
"2.1.0\n"
]
],
[
[
"## 三、数据集介绍\n本示例采用uci-housing数据集,这是经典线性回归的数据集。数据集共7084条数据,可以拆分成506行,每行14列。前13列用来描述房屋的各种信息,最后一列为该类房屋价格中位数。",
"_____no_output_____"
],
[
"前13列用来描述房屋的各种信息\n\n\n",
"_____no_output_____"
],
[
"### 3.1 数据处理",
"_____no_output_____"
]
],
[
[
"#下载数据\n!wget https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data -O housing.data ",
"--2021-05-18 16:20:29-- https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data\nResolving archive.ics.uci.edu (archive.ics.uci.edu)... 128.195.10.252\nConnecting to archive.ics.uci.edu (archive.ics.uci.edu)|128.195.10.252|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 49082 (48K) [application/x-httpd-php]\nSaving to: ‘housing.data’\n\nhousing.data 100%[===================>] 47.93K 94.2KB/s in 0.5s \n\n2021-05-18 16:20:31 (94.2 KB/s) - ‘housing.data’ saved [49082/49082]\n\n"
],
[
"# 从文件导入数据\ndatafile = './housing.data'\nhousing_data = np.fromfile(datafile, sep=' ')\nfeature_names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE','DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']\nfeature_num = len(feature_names)\n# 将原始数据进行Reshape,变成[N, 14]这样的形状\nhousing_data = housing_data.reshape([housing_data.shape[0] // feature_num, feature_num])",
"_____no_output_____"
],
[
"# 画图看特征间的关系,主要是变量两两之间的关系(线性或非线性,有无明显较为相关关系)\nfeatures_np = np.array([x[:13] for x in housing_data], np.float32)\nlabels_np = np.array([x[-1] for x in housing_data], np.float32)\n# data_np = np.c_[features_np, labels_np]\ndf = pd.DataFrame(housing_data, columns=feature_names)\nmatplotlib.use('TkAgg')\n%matplotlib inline\nsns.pairplot(df.dropna(), y_vars=feature_names[-1], x_vars=feature_names[::-1], diag_kind='kde')\nplt.show()",
"_____no_output_____"
],
[
"# 相关性分析\nfig, ax = plt.subplots(figsize=(15, 1)) \ncorr_data = df.corr().iloc[-1]\ncorr_data = np.asarray(corr_data).reshape(1, 14)\nax = sns.heatmap(corr_data, cbar=True, annot=True)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### 3.2 数据归一化处理\n\n下图展示各属性的取值范围分布:",
"_____no_output_____"
]
],
[
[
"sns.boxplot(data=df.iloc[:, 0:13])",
"_____no_output_____"
]
],
[
[
"从上图看出,各属性的数值范围差异太大,甚至不能够在一个画布上充分的展示各属性具体的最大、最小值以及异常值等。下面进行归一化。",
"_____no_output_____"
],
[
"做归一化(或 Feature scaling)至少有以下2个理由:\n\n* 过大或过小的数值范围会导致计算时的浮点上溢或下溢。\n* 不同的数值范围会导致不同属性对模型的重要性不同(至少在训练的初始阶段如此),而这个隐含的假设常常是不合理的。这会对优化的过程造成困难,使训练时间大大的加长.\n\n",
"_____no_output_____"
]
],
[
[
"features_max = housing_data.max(axis=0)\nfeatures_min = housing_data.min(axis=0)\nfeatures_avg = housing_data.sum(axis=0) / housing_data.shape[0]",
"_____no_output_____"
],
[
"BATCH_SIZE = 20\ndef feature_norm(input):\n f_size = input.shape\n output_features = np.zeros(f_size, np.float32)\n for batch_id in range(f_size[0]):\n for index in range(13):\n output_features[batch_id][index] = (input[batch_id][index] - features_avg[index]) / (features_max[index] - features_min[index])\n return output_features ",
"_____no_output_____"
],
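[
"# Equivalent vectorized form (illustrative sketch, not used below): the same scaling can be written as a\n# single numpy expression over the first 13 columns, which avoids the per-element Python loops above.\ndef feature_norm_vectorized(input):\n return ((input - features_avg[:13]) / (features_max[:13] - features_min[:13])).astype(np.float32)",
"_____no_output_____"
],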
[
"# 只对属性进行归一化\nhousing_features = feature_norm(housing_data[:, :13])\n# print(feature_trian.shape)\nhousing_data = np.c_[housing_features, housing_data[:, -1]].astype(np.float32)\n# print(training_data[0])",
"_____no_output_____"
],
[
"# 归一化后的train_data, 看下各属性的情况\nfeatures_np = np.array([x[:13] for x in housing_data],np.float32)\nlabels_np = np.array([x[-1] for x in housing_data],np.float32)\ndata_np = np.c_[features_np, labels_np]\ndf = pd.DataFrame(data_np, columns=feature_names)\nsns.boxplot(data=df.iloc[:, 0:13])",
"_____no_output_____"
],
[
"# 将训练数据集和测试数据集按照8:2的比例分开\nratio = 0.8\noffset = int(housing_data.shape[0] * ratio)\ntrain_data = housing_data[:offset]\ntest_data = housing_data[offset:]",
"_____no_output_____"
]
],
[
[
"## 四、模型组网\n线性回归就是一个从输入到输出的简单的全连接层。\n\n对于波士顿房价数据集,假设属性和房价之间的关系可以被属性间的线性组合描述。",
"_____no_output_____"
]
],
[
[
"class Regressor(paddle.nn.Layer):\n def __init__(self):\n super(Regressor, self).__init__()\n self.fc = paddle.nn.Linear(13, 1,)\n\n def forward(self, inputs):\n pred = self.fc(inputs)\n return pred",
"_____no_output_____"
]
],
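[
[
"# Quick check (illustrative addition): an untrained Regressor maps a batch of 13 features to one\n# predicted value per sample, so two rows of features give an output of shape [2, 1].\ntmp_model = Regressor()\ntmp_input = paddle.to_tensor(housing_features[:2])\nprint(tmp_model(tmp_input).shape)",
"_____no_output_____"
]
],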
[
[
"定义绘制训练过程的损失值变化趋势的方法 `draw_train_process` .",
"_____no_output_____"
]
],
[
[
"train_nums = []\ntrain_costs = []\n\ndef draw_train_process(iters, train_costs):\n plt.title(\"training cost\", fontsize=24)\n plt.xlabel(\"iter\", fontsize=14)\n plt.ylabel(\"cost\", fontsize=14)\n plt.plot(iters, train_costs, color='red', label='training cost')\n plt.show()",
"_____no_output_____"
]
],
[
[
"## 五、方式1:使用基础API完成模型训练&预测\n### 5.1 模型训练\n下面展示模型训练的代码。\n\n这里用到的是线性回归模型最常用的损失函数--均方误差(MSE),用来衡量模型预测的房价和真实房价的差异。\n\n对损失函数进行优化所采用的方法是梯度下降法.",
"_____no_output_____"
]
],
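[
[
"# Tiny illustrative example of the loss (made-up numbers): mean squared error is just the average of the\n# squared differences between predictions and labels, here ((1.0-0.5)^2 + (2.0-2.5)^2) / 2 = 0.25.\nimport paddle.nn.functional as F\ntmp_pred = paddle.to_tensor([[1.0], [2.0]])\ntmp_label = paddle.to_tensor([[0.5], [2.5]])\nprint(F.mse_loss(tmp_pred, tmp_label))",
"_____no_output_____"
]
],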
[
[
"import paddle.nn.functional as F \ny_preds = []\nlabels_list = []\n\ndef train(model):\n print('start training ... ')\n # 开启模型训练模式\n model.train()\n EPOCH_NUM = 500\n train_num = 0\n optimizer = paddle.optimizer.SGD(learning_rate=0.001, parameters=model.parameters())\n for epoch_id in range(EPOCH_NUM):\n # 在每轮迭代开始之前,将训练数据的顺序随机的打乱\n np.random.shuffle(train_data)\n # 将训练数据进行拆分,每个batch包含20条数据\n mini_batches = [train_data[k: k+BATCH_SIZE] for k in range(0, len(train_data), BATCH_SIZE)]\n for batch_id, data in enumerate(mini_batches):\n features_np = np.array(data[:, :13], np.float32)\n labels_np = np.array(data[:, -1:], np.float32)\n features = paddle.to_tensor(features_np)\n labels = paddle.to_tensor(labels_np)\n # 前向计算\n y_pred = model(features)\n cost = F.mse_loss(y_pred, label=labels)\n train_cost = cost.numpy()[0]\n # 反向传播\n cost.backward()\n # 最小化loss,更新参数\n optimizer.step()\n # 清除梯度\n optimizer.clear_grad()\n \n if batch_id%30 == 0 and epoch_id%50 == 0:\n print(\"Pass:%d,Cost:%0.5f\"%(epoch_id, train_cost))\n\n train_num = train_num + BATCH_SIZE\n train_nums.append(train_num)\n train_costs.append(train_cost)\n \nmodel = Regressor()\ntrain(model)",
"start training ... \nPass:0,Cost:731.48828\nPass:50,Cost:77.37501\nPass:100,Cost:21.86424\nPass:150,Cost:23.56446\nPass:200,Cost:68.49669\nPass:250,Cost:13.10599\nPass:300,Cost:20.35128\nPass:350,Cost:34.87028\nPass:400,Cost:24.54537\nPass:450,Cost:20.29261\n"
],
[
"matplotlib.use('TkAgg')\n%matplotlib inline\ndraw_train_process(train_nums, train_costs)",
"_____no_output_____"
]
],
[
[
"可以从上图看出,随着训练轮次的增加,损失在呈降低趋势。但由于每次仅基于少量样本更新参数和计算损失,所以损失下降曲线会出现震荡。",
"_____no_output_____"
],
[
"### 5.2 模型预测",
"_____no_output_____"
]
],
[
[
"# 获取预测数据\nINFER_BATCH_SIZE = 100\n\ninfer_features_np = np.array([data[:13] for data in test_data]).astype(\"float32\")\ninfer_labels_np = np.array([data[-1] for data in test_data]).astype(\"float32\")\n\ninfer_features = paddle.to_tensor(infer_features_np)\ninfer_labels = paddle.to_tensor(infer_labels_np)\nfetch_list = model(infer_features)\n\nsum_cost = 0\nfor i in range(INFER_BATCH_SIZE):\n infer_result = fetch_list[i][0]\n ground_truth = infer_labels[i]\n if i % 10 == 0:\n print(\"No.%d: infer result is %.2f,ground truth is %.2f\" % (i, infer_result, ground_truth))\n cost = paddle.pow(infer_result - ground_truth, 2)\n sum_cost += cost\nmean_loss = sum_cost / INFER_BATCH_SIZE\nprint(\"Mean loss is:\", mean_loss.numpy())",
"No.0: infer result is 12.17,ground truth is 8.50\nNo.10: infer result is 5.70,ground truth is 7.00\nNo.20: infer result is 14.81,ground truth is 11.70\nNo.30: infer result is 16.45,ground truth is 11.70\nNo.40: infer result is 13.50,ground truth is 10.80\nNo.50: infer result is 15.98,ground truth is 14.90\nNo.60: infer result is 18.55,ground truth is 21.40\nNo.70: infer result is 15.36,ground truth is 13.80\nNo.80: infer result is 17.89,ground truth is 20.60\nNo.90: infer result is 21.31,ground truth is 24.50\nMean loss is: [12.873257]\n"
],
[
"def plot_pred_ground(pred, ground):\n plt.figure() \n plt.title(\"Predication v.s. Ground truth\", fontsize=24)\n plt.xlabel(\"ground truth price(unit:$1000)\", fontsize=14)\n plt.ylabel(\"predict price\", fontsize=14)\n plt.scatter(ground, pred, alpha=0.5) # scatter:散点图,alpha:\"透明度\"\n plt.plot(ground, ground, c='red')\n plt.show()",
"_____no_output_____"
],
[
"plot_pred_ground(fetch_list, infer_labels_np)",
"_____no_output_____"
]
],
[
[
"上图可以看出,训练出来的模型的预测结果与真实结果是较为接近的。",
"_____no_output_____"
],
[
"## 六、方式2:使用高层API完成模型训练&预测\n也可以用飞桨的高层API来做线性回归训练,高层API相较于底层API更加的简洁方便。",
"_____no_output_____"
]
],
[
[
"import paddle\npaddle.set_default_dtype(\"float64\")\n\n# step1:用高层API定义数据集,无需进行数据处理等,高层API为你一条龙搞定\ntrain_dataset = paddle.text.datasets.UCIHousing(mode='train')\neval_dataset = paddle.text.datasets.UCIHousing(mode='test')\n\n# step2:定义模型\nclass UCIHousing(paddle.nn.Layer):\n def __init__(self):\n super(UCIHousing, self).__init__()\n self.fc = paddle.nn.Linear(13, 1, None)\n\n def forward(self, input):\n pred = self.fc(input)\n return pred\n\n# step3:训练模型\nmodel = paddle.Model(UCIHousing())\nmodel.prepare(paddle.optimizer.Adam(parameters=model.parameters()),\n paddle.nn.MSELoss())\nmodel.fit(train_dataset, eval_dataset, epochs=5, batch_size=8, verbose=1)",
"The loss value printed in the log is the current step, and the metric is the average value of previous steps.\nEpoch 1/5\nstep 51/51 [==============================] - loss: 624.0728 - 2ms/step \nEval begin...\nstep 13/13 [==============================] - loss: 397.2567 - 878us/step \nEval samples: 102\nEpoch 2/5\nstep 51/51 [==============================] - loss: 422.2296 - 1ms/step \nEval begin...\nstep 13/13 [==============================] - loss: 394.6901 - 750us/step \nEval samples: 102\nEpoch 3/5\nstep 51/51 [==============================] - loss: 417.4614 - 1ms/step \nEval begin...\nstep 13/13 [==============================] - loss: 392.1667 - 810us/step \nEval samples: 102\nEpoch 4/5\nstep 51/51 [==============================] - loss: 423.6764 - 1ms/step \nEval begin...\nstep 13/13 [==============================] - loss: 389.6587 - 772us/step \nEval samples: 102\nEpoch 5/5\nstep 51/51 [==============================] - loss: 461.0751 - 1ms/step \nEval begin...\nstep 13/13 [==============================] - loss: 387.1344 - 828us/step \nEval samples: 102\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
e76f400e4b7cf56e0b394e8d7d4f026aa4a13201 | 11,911 | ipynb | Jupyter Notebook | Exercicio02.ipynb | thainamariianr/LingProg | 3ec053d47fe329ac53c13c5a7a9fc03613ba61a7 | [
"MIT"
] | null | null | null | Exercicio02.ipynb | thainamariianr/LingProg | 3ec053d47fe329ac53c13c5a7a9fc03613ba61a7 | [
"MIT"
] | null | null | null | Exercicio02.ipynb | thainamariianr/LingProg | 3ec053d47fe329ac53c13c5a7a9fc03613ba61a7 | [
"MIT"
] | null | null | null | 38.924837 | 918 | 0.581815 | [
[
[
"Exercícios Aula 02 - Thainá Mariane Souza Silva 816118386",
"_____no_output_____"
],
[
"Exercicios de Lista\n\n1 Crie um programa que recebe uma lista de números e \n - retorne o maior elemento \n - retorne a soma dos elementos \n - retorne o número de ocorrências do primeiro elemento da lista \n - retorne a média dos elementos\n - retorne o valor mais próximo da média dos elementos\n - retorne a soma dos elementos com valor negativo\n - retorne a quantidade de vizinhos iguais",
"_____no_output_____"
]
],
[
[
"import math\n\nlista = [input(\"Digite uma lista\") for i in range(5)]\n\nmaior = max(lista)\nsoma = sum(lista)\nocorrencia = lista.count(lista[0])\nnegativo = sum(i for i in list if i < 0)\nmedia = sum(lista)/ len(lista)\n\n\nprint(\"O maior elemento da lista é: {} \" .formart(maior))\nprint(\"A soma dos elementos da lista é: {} \" .format(soma))\nprint(\"O número de ocorrencias do primeiro elemento é: {} \" .format(lista))\nprint(\"A media dos elementos é: {} \" .format(media))\nprint(\"Valor mais próximo da média dos elementos {}\" for x in lista: media - x )\nprint(\"A soma dos valores negativos é: {}\" .format(negativo))\n\nvizinho = 0\nfor i in range(len(list)):\n if(i < (len(list)-1)): \n if(list[i] == list[i+1]):\n vizinho += 1 \n\nprint(\"A quantidade de vizinhos iguais é: {}\" .format(vizinho))\n",
"_____no_output_____"
]
],
[
[
"2 Faça um programa que receba duas listas e retorne True se são iguais ou False caso contrario. Duas listas são iguais se possuem os mesmos valores e na mesma ordem.",
"_____no_output_____"
]
],
[
[
"\n#Para cada x no input ..... split quebrar a string de acordo com o que foi definido\nlista = [input(\"Digite um valor para incluir na lista \") for i in range(3)]\nlista2 = [input(\"Digite um valor para incluir na lista 2 \") for i in range(3)]\n\n\nif lista == lista2:\n print(\"True\")\nelse:\n print(\"False\")\n\n\n",
"_____no_output_____"
]
],
[
[
"3 Faça um programa que receba duas listas e retorne True se têm os mesmos elementos ou False caso contrário Duas listas possuem os mesmos elementos quando são compostas pelos mesmos valores, mas não obrigatoriamente na mesma ordem",
"_____no_output_____"
]
],
[
[
"\n#Para cada x no input ..... split quebrar a string de acordo com o que foi definido\nlista = [input(\"Digite um valor para incluir na lista: \") for i in range(3)]\nlista = [input(\"Digite um valor para incluir na lista: \") for i in range(3)]\n\nresult = lista\nif lista == lista2:\n print(\"true\")\nelse:\n i=0\n for i in lista[:]:\n if i in lista2:\n result.remove(i)\n print(\"Resultado: \", result)\n",
"_____no_output_____"
]
],
[
[
" 4 Faça um programa que percorre uma lista com o seguinte formato: [['Brasil', 'Italia', [10, 9]], ['Brasil', 'Espanha', [5, 7]], ['Italia', 'Espanha', [7,8]]]. Essa lista indica o número de faltas que cada time fez em cada jogo. Na lista acima, no jogo entre Brasil e Itália, o Brasil fez 10 faltas e a Itália fez 9. O programa deve imprimir na tela: - o total de faltas do campeonato - o time que fez mais faltas - o time que fez menos faltas\n",
"_____no_output_____"
]
],
[
[
"import operator\n\n\nlista = [['Brasil', 'Italia', [10, 9]], ['Brasil', 'Espanha', [5, 7]], ['Italia', 'Espanha', [7,8]]]\n\ndicionario = {\"Brasil\": 0, \"Italia\": 0, \"Espanha\": 0}\n\ntotal_faltas = 0\n\nfor item in lista:\n total_faltas += sum(item[2])\n dicionario[item[0]] += item[2][0]\n dicionario[item[1]] += item[2][1]\n print(dicionario)\n\ntime_mais_falta = max(dicionario.items(), key=operator.itemgetter(1))[0]\ntime_menos_falta = min(dicionario.items(), key=operator.itemgetter(1))[0]\n\nprint(f\"Total de faltas {total_faltas}\")\nprint(f\"Total de faltas {time_mais_falta}\")\nprint(f\"Total de faltas {time_menos_falta}\")\n\n\n \n\n\n",
"_____no_output_____"
]
],
[
[
"Exercicios Dicionario \n\n5 Escreva um programa que conta a quantidade de vogais em uma string e armazena tal quantidade em um dicionário, onde a chave é a vogal considerada.\n",
"_____no_output_____"
]
],
[
[
"import string \n\npalavra = input(\"Digite uma palavra: \")\nvogal = ['a', 'e', 'i', 'o', 'u']\n\ndicionario = {'a': 0, 'e': 0, 'i':0, 'o': 0, 'u': 0}\n\nfor letra in palavra:\n if letra in vogal:\n dicionario[letra] = dicionario[letra] + 1\nprint(dicionario)\n \n \n \n \n\n",
"_____no_output_____"
]
],
[
[
"6 Escreva um programa que lê̂ duas notas de vários alunos e armazena tais notas em um dicionário, onde a chave é o nome do aluno. A entrada de dados deve terminar quando for lida uma string vazia como nome. Escreva uma função que retorna a média do aluno, dado seu nome. ",
"_____no_output_____"
]
],
[
[
"texto = input(\"Digite o nome do aluno é duas notas, Nome, nota1, nota2 separando por ponto e virgula\")\ntexto = texto.split(\";\")\n\nnotas = {}\ncount = 0 \n\nfor n in texto:\n nota = n.split(\",\")\n notas[nota[0]] = {\"nota1\": nota[1], \"nota2\": nota[2]}\n \nfor n in notas: \n media = (int(notas[n]['nota1']) + int(notas[n]['nota2']))/2\n print(\"A media do aluno(a) {} é {} \" .format(n, media))\n \n",
"_____no_output_____"
]
],
[
[
" 7 Uma pista de Kart permite 10 voltas para cada um de 6 corredores. Escreva um programa que leia todos os tempos em segundos e os guarde em um dicionário, onde a chave é o nome do corredor. Ao fnal diga de quem foi a melhor volta da prova e em que volta; e ainda a classifcação fnal em ordem (1o o campeão). O campeão é o que tem a menor média de tempos. ",
"_____no_output_____"
]
],
[
[
"i = 0\nwhile(i <= 6):\n voltas = input(\"Digite os valores de voltas: 'Piloto':[2,6,8,1] \")\n dic = dict(x.split() for x in voltas.splitlines())\n\nprint(dic)",
"_____no_output_____"
]
],
[
[
"8 Escreva um programa para armazenar uma agenda de telefones em um dicionário. Cada pessoa pode ter um ou mais telefones e a chave do dicionário é o nome da pessoa. Seu programa deve ter as seguintes funções: incluirNovoNome – essa função acrescenta um novo nome na agenda, com um ou mais telefones. Ela deve receber como argumentos o nome e os telefones. incluirTelefone – essa função acrescenta um telefone em um nome existente na agenda. Caso o nome não exista na agenda, você̂ deve\nperguntar se a pessoa deseja inclui-lo. Caso a resposta seja afrmativa, use a função anterior para incluir o novo nome. excluirTelefone – essa função exclui um telefone de uma pessoa que já está na agenda. Se a pessoa tiver apenas um telefone, ela deve ser excluída da agenda. excluirNome – essa função exclui uma pessoa da agenda. consultarTelefone – essa função retorna os telefones de uma pessoa na agenda. ",
"_____no_output_____"
],
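[
"A minimal sketch of the dictionary and the functions asked for in exercise 8 — the function names come from the statement; the confirmation prompt wording is an assumption:\n\n```python\nagenda = {}\n\ndef incluirNovoNome(nome, telefones):\n    agenda[nome] = list(telefones)\n\ndef incluirTelefone(nome, telefone):\n    if nome in agenda:\n        agenda[nome].append(telefone)\n    elif input(\"Nome não existe. Incluir? (s/n) \") == \"s\":\n        incluirNovoNome(nome, [telefone])\n\ndef excluirTelefone(nome, telefone):\n    agenda[nome].remove(telefone)\n    if not agenda[nome]:\n        excluirNome(nome)\n\ndef excluirNome(nome):\n    del agenda[nome]\n\ndef consultarTelefone(nome):\n    return agenda.get(nome, [])\n```",
"_____no_output_____"
],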
[
"Arquivos 9 Faça um programa que leia um arquivo texto contendo uma lista de endereços IP e gere um outro arquivo, contendo um relatório dos endereços IP válidos e inválidos. O arquivo de entrada possui o seguinte formato: 200.135.80.9 192.168.1.1 8.35.67.74 257.32.4.5 85.345.1.2 1.2.3.4 9.8.234.5 192.168.0.256 O arquivo de saída possui o seguinte formato: [Endereços válidos:] 200.135.80.9 192.168.1.1 8.35.67.74 1.2.3.4 [Endereços inválidos:] 257.32.4.5 85.345.1.2 9.8.234.5 192.168.0.256 ",
"_____no_output_____"
],
[
"10 A ACME Inc., uma empresa de 500 funcionários, está tendo problemas de espaço em disco no seu servidor de arquivos. Para tentar resolver este problema, o Administrador de Rede precisa saber qual o espaço ocupado pelos usuários, e identifcar os usuários com maior espaço ocupado. Através de um programa, baixado da Internet, ele conseguiu gerar o seguinte arquivo, chamado \"usuarios.txt\": alexandre 456123789 anderson 1245698456 antonio 123456456 carlos 91257581 cesar 987458 rosemary 789456125 Neste arquivo, o nome do usuário possui 15 caracteres. A partir deste arquivo, você deve criar um programa que gere um relatório, chamado \"relatório.txt\", no seguinte formato: ACME Inc. Uso do espaço em disco pelos usuários\n----------------------------------------------------------------------Nr. Usuário Espaço utilizado % do uso 1 alexandre 434,99 MB 16,85% 2 anderson 1187,99 MB 46,02% 3 antonio 117,73 MB 4,56% 4 carlos 87,03 MB 3,37% 5 cesar 0,94 MB 0,04% 6 rosemary 752,88 MB 29,16% Espaço total ocupado: 2581,57 MB Espaço médio ocupado: 430,26 MB O arquivo de entrada deve ser lido uma única vez, e os dados armazenados em memória, caso sejam necessários, de forma a agilizar a execução do programa. A conversão da espaço ocupado em disco, de bytes para megabytes deverá ser feita através de uma função separada, que será chamada pelo programa principal. O cálculo do percentual de uso também deverá ser feito através de uma função, que será chamada pelo programa principal.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
e76f44c3df698a6dd03b53d0be50af81a3e11873 | 34,137 | ipynb | Jupyter Notebook | ML_selfstudy3/MLSelfStudy3-F20.ipynb | Jbarata98/ML_AAU1920 | 090b8cbae9d6adba4ab30e7d4fd68eb24e04c5f4 | [
"MIT"
] | 2 | 2021-05-16T11:21:23.000Z | 2021-05-16T11:21:23.000Z | ML_selfstudy3/MLSelfStudy3-F20.ipynb | Jbarata98/ML_AAU1920 | 090b8cbae9d6adba4ab30e7d4fd68eb24e04c5f4 | [
"MIT"
] | null | null | null | ML_selfstudy3/MLSelfStudy3-F20.ipynb | Jbarata98/ML_AAU1920 | 090b8cbae9d6adba4ab30e7d4fd68eb24e04c5f4 | [
"MIT"
] | null | null | null | 53.339063 | 5,128 | 0.576794 | [
[
[
"# Self Study 3",
"_____no_output_____"
],
[
"In this self study we perform character recognition using SVM classifiers. We use the MNIST dataset, which consists of 70000 handwritten digits 0..9 at a resolution of 28x28 pixels. \n\nStuff we need:",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport time\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import confusion_matrix,accuracy_score\nfrom sklearn.datasets import fetch_openml ##couldn't run with the previous code\n",
"_____no_output_____"
]
],
[
[
"Now we get the MNIST data. Using the fetch_mldata function, this will be downloaded from the web, and stored in the directory you specify as data_home (replace my path in the following cell):",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import fetch_openml\nmnist = fetch_openml(name='mnist_784', data_home='/home/starksultana/Documentos/Mestrado_4o ano/2o sem AAU/ML/ML_selfstudy3')\n",
"_____no_output_____"
]
],
[
[
"The data has .data and .target attributes. The following gives us some basic information on the data:",
"_____no_output_____"
]
],
[
[
"print(\"Number of datapoints: {}\\n\".format(mnist.data.shape[0]))\nprint(\"Number of features: {}\\n\".format(mnist.data.shape[1]))\nprint(\"features: \", mnist.data[0].reshape(196,4))\nprint(\"List of labels: {}\\n\".format(np.unique(mnist.target)))",
"Number of datapoints: 70000\n\nNumber of features: 784\n\nfeatures: [[ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 3. 18. 18. 18.]\n [126. 136. 175. 26.]\n [166. 255. 247. 127.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 30. 36. 94. 154.]\n [170. 253. 253. 253.]\n [253. 253. 225. 172.]\n [253. 242. 195. 64.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 49.]\n [238. 253. 253. 253.]\n [253. 253. 253. 253.]\n [253. 251. 93. 82.]\n [ 82. 56. 39. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 18.]\n [219. 253. 253. 253.]\n [253. 253. 198. 182.]\n [247. 241. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 80. 156. 107. 253.]\n [253. 205. 11. 0.]\n [ 43. 154. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 14. 1. 154.]\n [253. 90. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 139.]\n [253. 190. 2. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 11.]\n [190. 253. 70. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 35. 241. 225. 160.]\n [108. 1. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 81. 240. 253.]\n [253. 119. 25. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 45. 186.]\n [253. 253. 150. 27.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 16.]\n [ 93. 252. 253. 187.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 249. 253. 249.]\n [ 64. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 46. 130.]\n [183. 253. 253. 207.]\n [ 2. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 39. 148. 229. 253.]\n [253. 253. 250. 182.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 24. 114.]\n [221. 253. 253. 253.]\n [253. 201. 78. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 23. 66. 213. 253.]\n [253. 253. 253. 198.]\n [ 81. 2. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 18. 171.]\n [219. 253. 253. 253.]\n [253. 195. 80. 9.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 55. 172. 226. 253.]\n [253. 253. 253. 244.]\n [133. 11. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [136. 253. 253. 253.]\n [212. 135. 132. 16.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 
0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]]\nList of labels: ['0' '1' '2' '3' '4' '5' '6' '7' '8' '9']\n\n"
]
],
[
[
"We can plot individual datapoints as follows:",
"_____no_output_____"
]
],
[
[
"index = 9\nprint(\"Value of datapoint no. {}:\\n{}\\n\".format(index,mnist.data[index]))\nprint(\"As image:\\n\")\nplt.imshow(mnist.data[index].reshape(28,28),cmap=plt.cm.gray_r)\n#plt.show()",
"Value of datapoint no. 9:\n[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 189. 190. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 143. 247. 153. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 136. 247. 242. 86. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 192. 252. 187. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 62. 185.\n 18. 0. 0. 0. 0. 89. 236. 217. 47. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 216. 253.\n 60. 0. 0. 0. 0. 212. 255. 81. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 206. 252.\n 68. 0. 0. 0. 48. 242. 253. 89. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 131. 251. 212.\n 21. 0. 0. 11. 167. 252. 197. 5. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 29. 232. 247. 63.\n 0. 0. 0. 153. 252. 226. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 45. 219. 252. 143. 0.\n 0. 0. 116. 249. 252. 103. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 4. 96. 253. 255. 253. 200. 122.\n 7. 25. 201. 250. 158. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 92. 252. 252. 253. 217. 252. 252.\n 200. 227. 252. 231. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 87. 251. 247. 231. 65. 48. 189. 252.\n 252. 253. 252. 251. 227. 35. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 190. 221. 98. 0. 0. 0. 42. 196.\n 252. 253. 252. 252. 162. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 111. 29. 0. 0. 0. 0. 62. 239.\n 252. 86. 42. 42. 14. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 15. 148. 253.\n 218. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 121. 252. 231.\n 28. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 31. 221. 251. 129.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 218. 252. 160. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 122. 252. 82. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n\nAs image:\n\n"
]
],
[
[
"To make things a little bit simpler (and faster!), we can extract from the data binary subsets, that only contain the data for two selected digits:",
"_____no_output_____"
]
],
[
[
"digit0='4'\ndigit1='5'\nmnist_bin_data=mnist.data[np.logical_or(mnist.target==digit0,mnist.target==digit1)]\nmnist_bin_target=mnist.target[np.logical_or(mnist.target==digit0,mnist.target==digit1)]\nprint(\"The first datapoint now is: \\n\")\nplt.imshow(mnist_bin_data[0].reshape(28,28),cmap=plt.cm.gray_r)\nplt.show()\nprint(mnist_bin_target)",
"The first datapoint now is: \n\n"
]
],
[
[
"**Exercise 1 [SVM]:** Split the mnist_bin data into training and test set. Learn different SVM models by varying the kernel functions (SVM). For each configuration, determine the time it takes to learn the model, and the accuracy on the test data. \n\nYou can get the current time using:\n\n`import time` <br>\n`now = time.time()`\n\n*Caution*: for some configurations, learning here can take a little while (several minutes).\n\nUsing the numpy where() function, one can extract the indices of the test cases that were misclassified: <br>\n`misclass = np.where(test != predictions)` <br>\nInspect some misclassified cases. Do they correspond to hard to recognize digits (also for the human reader)? \n\nHow do results (time and accuracy) change, depending on whether you consider an 'easy' binary task (e.g., distinguishing '1' and '0'), or a more difficult one (e.g., '4' vs. '5'). \n\nIdentify one or several good configurations that give a reasonable combination of accuracy and runtime. Use these configurations to perform a full classification of the 10 classes in the original dataset (after split into train/test). Using `sklearn.metrics.confusion_matrix` you can get an overview of all combinations of true and predicted labels. What does this tell you about which digits are easy, and which ones are difficult to recognize, and which ones are most easily confused?",
"_____no_output_____"
],
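[
"A minimal sketch of the timing / accuracy / confusion-matrix bookkeeping for one configuration (linear kernel on the binary subset; the kernel, test size and random_state are placeholders rather than a recommended setup):\n\n```python\nimport time\nimport numpy as np\nfrom sklearn.svm import SVC\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score, confusion_matrix\n\nX_tr, X_te, y_tr, y_te = train_test_split(mnist_bin_data, mnist_bin_target, test_size=0.3, random_state=0)\n\nstart = time.time()\nclf = SVC(kernel='linear')\nclf.fit(X_tr, y_tr)\npred = clf.predict(X_te)\nprint('time (s):', time.time() - start)\nprint('accuracy:', accuracy_score(y_te, pred))\nprint(confusion_matrix(y_te, pred))\n\nmisclass = np.where(y_te != pred)[0]  # indices of misclassified test cases, for inspection\n```",
"_____no_output_____"
],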
[
"**Exercise 2 [SVM]:** Consider how the current data representation \"presents\" the digits to the classifiers, and try to improve this:<br>\n\n**a)** Manually design feature functions for which you expect that based on your new features SVM classifiers can achieve a better accuracy than with the original features. Transform the data into your new feature space, and learn new classifiers. What accuracies do you get?\n\n\n**b)** Instead of designing an explicit feature mapping as in **a)**, define a suitable measure of similarity for the digits, and implement that measure as a kernel function. (Optional: verify that the function you have defined actually satisfies the positive-semidefiniteness property.) Use your kernel function as a custom kernel for the SVC classifier. See http://scikit-learn.org/stable/auto_examples/svm/plot_custom_kernel.html#sphx-glr-auto-examples-svm-plot-custom-kernel-py for an example.",
"_____no_output_____"
]
],
[
[
"###Exercise 1\n''' Completely dies with 7 and 9, cant make it work :(\nIn the rest of the tasks it performed quite well with really high accuracies, for example 1 and 0 it ran with 99 % accuracy with a test size of 30 %,\nand 7 misclassifications, ran in 1,72 secs. 6 and 3 it only has 23 misclassification but runs in 4 times the time( 4 secs), for 4 and 5 it runs in 6 secs but 47 misclassifcations \nwih sigmoid it took 181secs and had aclassification of 53% on test data and 51 on training data when comparing 4 and 5\nrbf was taking so long I had to shut it down. UPDATE** TOOK 180 secs and 53 accuracy...\nand poly took 12 secs , so basically 4x the time. So Im sticking with linear kernel\n'''\n\n##you have to choose different types of digits in the upward cell\nimport time\nnow = time.time()\nprint(\"alive\")\n#x: np.ndarray = mnist_bin_data\n#print(mnist_bin_data.shape)\ny: np.ndarray = mnist_bin_target\n\ntrnX, tstX, trnY, tstY = train_test_split(mnist.data, mnist.target, test_size=0.2,random_state=20)\n\n\nprint( \"I'm doing stuff no worries\")\nclassifier = SVC(kernel='polynomial')\nclassifier.fit(trnX,trnY)\n\npred_labels_train=classifier.predict(trnX)\nprint(\"Don't worry im training!\")\npred_labels=classifier.predict(tstX)\n\nmisclassified = np.where(tstY != pred_labels)\n\n\n##accuracy\nprint(\"Accuracy test: {}\".format(accuracy_score(tstY,pred_labels)))\nprint(\"Accuracy train: {}\".format(accuracy_score(trnY,pred_labels_train)))\nprint(\"Time required: {}\" .format(time.time()-now))\nprint(metrics.confusion_matrix(y_true=y_test, y_pred=y_pred)\n\nprint(confusion_matrix(tstY, pred_labels, labels=np.unique(mnist.target)))\n\nprint(\"misclassified nr:\" , len(misclassified[0]))\n#print(\"image misclassified\",plt.imshow(mnist_bin_data[misclassified[0][0]].reshape(28,28),cmap=plt.cm.gray_r)) \n#### RESULTS ####\n\n",
"_____no_output_____"
]
],
[
[
"---Exercise 2\n#1st approach try to reshape the data?\n#normalize the pixels?",
"_____no_output_____"
]
],
[
[
"##exercise 2\n\nfrom sklearn.preprocessing import StandardScaler\n\nnow = time.time()\n\n#x: np.ndarray = mnist_bin_data\n#y: np.ndarray = mnist_bin_target\nprint(\"don't worry i've just started\")\n\nscaler = StandardScaler() \n\n\ntrnX, tstX, trnY, tstY = train_test_split(mnist.data, mnist.target, test_size=0.3,random_state=20)\nprint(\"don't worry i'm alive\")\nscaler.fit_transform(trnX)\nscaler.fit_transform(tstX)\n\n\nmodel = SVC(kernel='linear')\nmodel.fit(trnX, trnY)\n\npred_labels=model.predict(tstX)\n\nprint(\"Accuracy test: {}\".format(accuracy_score(tstY,pred_labels)))\nprint(\"Accuracy train: {}\".format(accuracy_score(trnY,pred_labels_train)))\nprint(\"Time required: {}\" .format(time.time()-now))\n\n#already getting really high accuracy so not really sure how to increase with the same classifier\n#couldn't run the entire dataset it gets stuck, waited for a long time...\n\n",
"_____no_output_____"
],
[
"from sklearn import svm\nnow = time.time()\n\ndef good_kernel(trnX,trnY):\n return (np.dot(trnX.T, trnY)/255)\n\n\nclf = svm.SVC(kernel=good_kernel)\nclf.fit(trnX, trnY)\n\npred_labels=model.predict(tstX)\n\nprint(\"Accuracy test: {}\".format(accuracy_score(tstY,pred_labels)))\nprint(\"Accuracy train: {}\".format(accuracy_score(trnY,pred_labels_train)))\nprint(\"Time required: {}\" .format(time.time()-now))\n\n###inear SVM is less prone to overfitting than non-linear.\n#And you need to decide which kernel to choose based on your situation: if your number of features is really\n#large compared to the training sample, just use linear kernel; if your number of features\n#is small, but the training sample is large, you may also need linear kernel but try to add more features;\n\n#rbf should work better since it the features are highly non linear\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e76f5988a7969f719a221157c9e7f8638dbbcbd3 | 28,834 | ipynb | Jupyter Notebook | data_filtering/filter_data_MEL.ipynb | NeugebauerLab/MEL_LRS | 6b88e3c93b0369d064c1c2969384f7f09e5de69e | [
"MIT"
] | 1 | 2021-05-02T16:23:10.000Z | 2021-05-02T16:23:10.000Z | data_filtering/filter_data_MEL.ipynb | NeugebauerLab/MEL_LRS | 6b88e3c93b0369d064c1c2969384f7f09e5de69e | [
"MIT"
] | null | null | null | data_filtering/filter_data_MEL.ipynb | NeugebauerLab/MEL_LRS | 6b88e3c93b0369d064c1c2969384f7f09e5de69e | [
"MIT"
] | 1 | 2020-12-14T05:30:25.000Z | 2020-12-14T05:30:25.000Z | 39.336971 | 241 | 0.506867 | [
[
[
"'''\nThis notebook filters mapped PacBio LRS data to remove:\n polyadenylated transcripts,\n 7SK transcripts,\n non-unique reads,\n and splicing intermeditates\n'''",
"_____no_output_____"
],
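[
"# Illustration only (not part of the original pipeline): the poly(A) rule applied by filter_polyA below.\n# A read is flagged when its soft-clipped tail has at least 4 A's and the A content is >= 90% of the clip\n# (poly(T) at the read start is handled symmetrically).\nclipped_bases = 'AAAAAAAAAA'  # hypothetical soft-clipped 3' tail\nis_polyA = clipped_bases.count('A') >= 4 and clipped_bases.count('A') / len(clipped_bases) >= 0.9\nprint(is_polyA)",
"_____no_output_____"
],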
[
"import os\nimport sys\nimport re\nimport glob\n\nimport pysam\nimport pybedtools\nfrom pybedtools import BedTool\n\nimport numpy as np\nimport pandas as pd\n\nfrom plotnine import *\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport matplotlib\nmatplotlib.rcParams['pdf.fonttype'] = 42 # export pdfs with editable font types in Illustrator",
"_____no_output_____"
],
[
"# Read in data filenames and annotations used for filtering\n\nsamFiles = ['../0_mapped_data/1_untreated_RSII.sam',\n '../0_mapped_data/1_untreated_SQ.sam',\n '../0_mapped_data/2_untreated_RSII.sam',\n '../0_mapped_data/2_untreated_SQ.sam',\n '../0_mapped_data/3_DMSO_RSII.sam',\n '../0_mapped_data/3_DMSO_SQ.sam',\n '../0_mapped_data/4_DMSO_RSII.sam',\n '../0_mapped_data/4_DMSO_SQ.sam']\n\nRn7sk = '../annotation/files/7sk.bed'\n\nintrons = pd.read_csv('../annotation_files/mm10_VM20_introns.bed', delimiter = '\\t', names = ['chr', 'start', 'end', 'name', 'score', 'strand']) # annotation of all mm10 introns in BED6 format, downloaded from USCS table browser",
"_____no_output_____"
],
[
"# Create a bed file from introns bed file that contains just the nucleotide -1 to the 5SS\nintrons.loc[introns['strand'] == '+', 'fiveSS'] = introns['start']\nintrons.loc[introns['strand'] == '-', 'fiveSS'] = introns['end']\nintrons.loc[introns['strand'] == '+', 'newStart'] = introns['fiveSS'] - 1\nintrons.loc[introns['strand'] == '-', 'newStart'] = introns['fiveSS']\nintrons.loc[introns['strand'] == '+', 'newEnd'] = introns['fiveSS']\nintrons.loc[introns['strand'] == '-', 'newEnd'] = introns['fiveSS'] + 1\n\n# convert coordinates back to integer values\nintrons['newStart'] = introns['newStart'].astype(np.int64)\nintrons['newEnd'] = introns['newEnd'].astype(np.int64)\n\n# save as BED6 file\nintrons.to_csv('introns_5SS.bed', \n sep = '\\t', \n index = False, \n columns = ['chr', 'newStart', 'newEnd', 'name', 'score', 'strand'], \n header = False)\n\n# save as bedtool object for intersect\nintrons_5SS_bedtool = pybedtools.BedTool('introns_5SS.bed')",
"_____no_output_____"
],
[
"# Define a function to filter out polyadenylated reads from PacBio nascent RNA LRS data\n\ndef filter_polyA(mapped_reads_file):\n \n def append_id(mapped_reads_file):\n name, ext = os.path.splitext(mapped_reads_file)\n return \"{name}_{id}{ext}\".format(name=name, id='polyAfiltered', ext=ext)\n \n output = open(append_id(mapped_reads_file), 'w')\n #keep = [] # make an empty list of reads to keep\n \n with open(mapped_reads_file, 'r') as f:\n\n for line in f:\n line = line.strip('\\n')\n col = line.split('\\t')\n if col[0][0] == '@': # write header lines into output file\n output.write(line + '\\n')\n continue\n cigar = col[5] # gets cigar string from SAM file\n\n if cigar.count('S') >= 1: # searches for S in cigar string indicating soft-clipped\n index = cigar.find('S') # finds position of S in cigar string\n if index <= 3: # if index is small then clipped bases are on the front of the read...should be polyT\n length_clipped = int(cigar.split(\"S\")[0]) # get position of first S in cigar string\n clipped_bases = col[9][:length_clipped] # get sequence of clipped bases from start to first S\n if clipped_bases.count('T') >= 4 and \\\n clipped_bases.count('T')/len(clipped_bases) >=0.9: # if minimum 4 T's and T content is >90% of soft-clipped bases\n continue # skip lines that have polyT at beginning of read\n else:\n output.write(line + '\\n') # print lines that do not have polyT\n\n if (index + 1) == len(cigar): # if the S is at the end of the cigar string\n m = re.search('[0-9]{0,3}(?=S)' , cigar) # find number before last S in cigar string\n length_clipped = int(m.group(0)) #get length of clipped bases at end of read...should be polyA\n clipped_bases = col[9][:(-1 * (length_clipped + 1)):-1] # get sequence of clipped bases before last S\n if clipped_bases.count('A') >= 4 and \\\n clipped_bases.count('A')/len(clipped_bases) >=0.9: # if minimum 4 A's and A content is >90% of soft-clipped bases\n\n continue # do not write lines that have polyA at end of read\n else:\n output.write(line + '\\n') # write lines that do not have polyA at the end\n else:\n output.write(line + '\\n')\n\n# reads_bedtool = BedTool(keep)\n# reads_bedtool.saveas('test_filtered.bed')\n# return reads_bedtool\n f.close()\n output.close()",
"_____no_output_____"
],
[
"# Define a function for filtering splicing intermediates from each data file\ndef filter_splicing_intermediates(bed_file):\n \n def name_file_no_splicing_int(bed_file):\n name, ext = os.path.splitext(bed_file)\n return \"{name}_{id}{ext}\".format(name=name, id='no_splicing_int', ext=ext)\n \n def name_file_splicing_int(bed_file):\n name, ext = os.path.splitext(bed_file)\n return \"{name}_{id}{ext}\".format(name=name, id='splicing_int', ext=ext)\n \n # first open and reorder coordinates of bed file to put 3'end in position for intersection\n data = pd.read_csv(bed_file, delimiter = '\\t', names = ['chr', 'start', 'end', 'name', 'score', 'strand', 'thickStart', 'thickEnd', 'itemRgb', 'blockCount', 'blockSizes', 'blockStarts'])\n data.loc[data['strand'] == '+', 'threeEnd'] = data['end']\n data.loc[data['strand'] == '-', 'threeEnd'] = data['start']\n data.loc[data['strand'] == '+', 'fiveEnd'] = data['start']\n data.loc[data['strand'] == '-', 'fiveEnd'] = data['end']\n data.loc[data['strand'] == '+', 'newStart'] = data['threeEnd'] - 1\n data.loc[data['strand'] == '-', 'newStart'] = data['threeEnd']\n data.loc[data['strand'] == '+', 'newEnd'] = data['threeEnd']\n data.loc[data['strand'] == '-', 'newEnd'] = data['threeEnd'] + 1\n\n # convert coordinates back to integer values\n data['newStart'] = data['newStart'].astype(np.int64)\n data['newEnd'] = data['newEnd'].astype(np.int64)\n data['fiveEnd'] = data['fiveEnd'].astype(np.int64)\n data['threeEnd'] = data['threeEnd'].astype(np.int64)\n \n # save a temporary bed file with data 3'end coordinates\n data.to_csv('tmp.bed', \n sep = '\\t', \n index = False, \n columns = ['chr', 'newStart', 'newEnd', 'name', 'score', 'strand', 'thickStart', 'thickEnd', 'itemRgb', 'blockCount', 'blockSizes', 'blockStarts', 'start', 'end'], \n header = False)\n \n # intersect data 3'end with intron 5'SS coordinates to get splicing intermediates and non-intermediates\n tmp_bedfile = open('tmp.bed')\n data_bedtool = pybedtools.BedTool(tmp_bedfile)\n intersect1 = data_bedtool.intersect(introns_5SS_bedtool, u = True).saveas('tmp_splicing_int.bed')\n \n tmp_bedfile = open('tmp.bed')\n data_bedtool = pybedtools.BedTool(tmp_bedfile)\n intersect2 = data_bedtool.intersect(introns_5SS_bedtool, v = True).saveas('tmp_no_splicing_int.bed')\n\n # reorder coordinates of data files\n data1 = pd.read_csv('tmp_splicing_int.bed', delimiter = '\\t', names = ['chr', 'newStart', 'newEnd', 'name', 'score', 'strand', 'thickStart', 'thickEnd', 'itemRgb', 'blockCount', 'blockSizes', 'blockStarts', 'start', 'end'])\n data1.to_csv(name_file_splicing_int(bed_file), \n sep = '\\t', \n index = False, \n columns = ['chr', 'start', 'end', 'name', 'score', 'strand', 'thickStart', 'thickEnd', 'itemRgb', 'blockCount', 'blockSizes', 'blockStarts'], \n header = False)\n \n data2 = pd.read_csv('tmp_no_splicing_int.bed', delimiter = '\\t', names = ['chr', 'newStart', 'newEnd', 'name', 'score', 'strand', 'thickStart', 'thickEnd', 'itemRgb', 'blockCount', 'blockSizes', 'blockStarts', 'start', 'end'])\n data2.to_csv(name_file_no_splicing_int(bed_file), \n sep = '\\t', \n index = False, \n columns = ['chr', 'start', 'end', 'name', 'score', 'strand', 'thickStart', 'thickEnd', 'itemRgb', 'blockCount', 'blockSizes', 'blockStarts'], \n header = False)\n \n # clean up temp files\n os.remove('tmp.bed')\n os.remove('tmp_no_splicing_int.bed')\n os.remove('tmp_splicing_int.bed')",
"_____no_output_____"
],
[
"# Define a function for filtering non-unique readnames from each data file\ndef filter_nonunique_reads(bed_file):\n \n def name_unique_reads(bed_file):\n name, ext = os.path.splitext(bed_file)\n return \"{name}_{id}{ext}\".format(name=name, id='unique', ext=ext)\n \n # first open and reorder coordinates of bed file to put 3'end in position for intersection\n all_data = pd.read_csv(bed_file, delimiter = '\\t', names = ['chr', 'start', 'end', 'name', 'score', 'strand', 'thickStart', 'thickEnd', 'itemRgb', 'blockCount', 'blockSizes', 'blockStarts'])\n grouped = all_data.groupby(['name']).size().to_frame(name = 'count').reset_index()\n\n # get read names that are unique and filter to keep only reads which have name count == 1\n is_unique = grouped['count'] == 1\n unique = grouped[is_unique]\n unique_names = pd.Series(unique['name'].values) # create a series of readnames that have occur only once\n\n data_is_unique = all_data['name'].isin(unique_names)\n data_unique = all_data[data_is_unique] # filter data for readnames that are unique\n \n # save unique reads to a new file\n data_unique.to_csv(name_unique_reads(bed_file), \n sep = '\\t', \n index = False, \n columns = ['chr', 'start', 'end', 'name', 'score', 'strand', 'thickStart', 'thickEnd', 'itemRgb', 'blockCount', 'blockSizes', 'blockStarts'], \n header = False)",
"_____no_output_____"
],
[
"# Filter polyadenylated reads\nfor file in samFiles:\n filter_polyA(file)",
"_____no_output_____"
],
[
"# Convert SAM files to BAM files for further filtering\nSAM = []\nfor file in glob.glob('./*_polyAfiltered.sam'):\n SAM.append(file)\n\nfor samfile in SAM:\n name, ext = os.path.splitext(samfile)\n bamfile = \"{name}{ext}\".format(name=name, ext='.bam')\n pysam.view('-S', '-b', '-o', bamfile, samfile, catch_stdout=False)",
"_____no_output_____"
],
[
"# Filter 7SK reads from BAM files\nBAM = []\nfor file in glob.glob('./*_polyAfiltered.bam'):\n BAM.append(file)\n\nfor bamfile in BAM:\n name, ext = os.path.splitext(bamfile)\n name_7sk = \"{name}_{id}{ext}\".format(name=name, id='7SK_only', ext=ext)\n name_no_7sk = \"{name}_{id}{ext}\".format(name=name, id='no_7SK', ext=ext)\n pysam.view('-b', '-L', Rn7sk, '-U', name_no_7sk, bamfile, '-o', name_7sk, catch_stdout=False)",
"_____no_output_____"
],
[
"# Sort and index BAM files\nBAM = []\nfor file in glob.glob('./*_polyAfiltered_no_7SK.bam'):\n BAM.append(file)\n \nfor bamfile in BAM:\n name, ext = os.path.splitext(bamfile)\n bamfileSorted = \"{name}_{id}{ext}\".format(name=name, id='sorted', ext=ext)\n pysam.sort('-o', bamfileSorted, bamfile, catch_stdout=False)\n pysam.index(bamfileSorted, catch_stdout=False)",
"_____no_output_____"
],
[
"# Convert BAM files to BED12 \nsortedBAM = []\nfor file in glob.glob('./*_polyAfiltered_no_7SK_sorted.bam'):\n sortedBAM.append(file)\n\nfor file in sortedBAM:\n name, ext = os.path.splitext(file)\n bedfile = \"{name}{ext}\".format(name=name, ext='.bed')\n \n bam_file = pybedtools.BedTool(file)\n bedFile = bam_file.bam_to_bed(bed12 = True).saveas(bedfile)",
"_____no_output_____"
],
[
"# Filter non-unique intermediates from BED12 files\nBED = []\nfor file in glob.glob('./*_polyAfiltered_no_7SK_sorted.bed'):\n BED.append(file)\n \nfor file in BED:\n filter_nonunique_reads(file)",
"_____no_output_____"
],
[
"# Filter splicing intermediates from BED12 files\nintrons_5SS_bedtool = pybedtools.BedTool('introns_5SS.bed')\n\nBED = []\nfor file in glob.glob('./*_polyAfiltered_no_7SK_sorted_unique.bed'):\n BED.append(file)\n\nfor file in BED:\n filter_splicing_intermediates(file)",
"_____no_output_____"
],
[
"# Optional: combine replicates into untreated and DMSO-treated files\n\nuntreated = []\ndmso = []\nfor file in glob.glob('./*untreated*_polyAfiltered_no_7SK_sorted_unique_no_splicing_int.bed'):\n untreated.append(file)\nfor file in glob.glob('./*DMSO*_polyAfiltered_no_7SK_sorted_unique_no_splicing_int.bed'):\n dmso.append(file)\n \nwith open('untreated_combined.bed', 'w') as outfile:\n for fname in untreated:\n with open(fname) as infile:\n for line in infile:\n outfile.write(line)\n \nwith open('dmso_combined.bed', 'w') as outfile:\n for fname in dmso:\n with open(fname) as infile:\n for line in infile:\n outfile.write(line)",
"_____no_output_____"
],
[
"# Count the number of reads in each file along the way\n\ninput_count = []\nfor file in samFiles:\n samfile = pysam.AlignmentFile(file, \"rb\")\n count = samfile.count()\n input_count.append(count)\n \npolyA_filtered_count = []\nfor file in glob.glob('./*_polyAfiltered.sam'):\n samfile = pysam.AlignmentFile(file, \"rb\")\n count = samfile.count()\n polyA_filtered_count.append(count)\n \n \nno_7sk_count = []\nfor file in glob.glob('./*_polyAfiltered_no_7SK_sorted.bam'):\n bamfile = pysam.AlignmentFile(file, \"rb\")\n count = bamfile.count()\n no_7sk_count.append(count)\n \nunique_reads_count = []\nfor file in glob.glob('./*_polyAfiltered_no_7SK_sorted_unique.bed'):\n count = len(open(file).readlines())\n unique_reads_count.append(count)\n \nno_splicing_int_count = [] \nfor file in glob.glob('./*_polyAfiltered_no_7SK_sorted_unique_no_splicing_int.bed'):\n count = len(open(file).readlines())\n no_splicing_int_count.append(count)",
"_____no_output_____"
],
[
"# Make a table of read counts that are filtered at each step\n\ncounts_df = pd.DataFrame(list(zip(samFiles, input_count, polyA_filtered_count, no_7sk_count, unique_reads_count, no_splicing_int_count)), \n columns =['Sample', 'Mapped', 'PolyA', '7SK', 'Non-unique Reads', 'Splicing Intermediates'])\n\ncounts_df.to_csv('filtering_stats.csv', \n sep = '\\t', \n index = True, \n header = True)",
"_____no_output_____"
],
[
"# Add a row with column totals\n\n# counts_df = pd.read_csv('filtering_stats.csv', delimiter = '\\t', index_col = 0)\ncounts_df.loc['Total']= counts_df.sum()\ncounts_df['Sample']['Total'] = 'Total'\ncounts_df\n\n# Print a report on the number of reads filtered at each step\n\nmapped = counts_df['Mapped']['Total']\npolyA = counts_df['PolyA']['Total']\nsevenSK = counts_df['7SK']['Total']\nnon_unique = counts_df['Non-unique Reads']['Total']\nspl_int = counts_df['Splicing Intermediates']['Total']\n\nprint('Number of mapped reads is: ' + str(mapped))\nprint('Percent of polyA reads filtered is: ' + str(((mapped-polyA)/mapped)*100))\nprint('Percent of 7SK reads filtered is: ' + str(((polyA-sevenSK)/mapped)*100))\nprint('Percent of non-unique reads filtered is: ' + str(((sevenSK-non_unique)/mapped)*100))\nprint('Percent of splicing intermediates reads filtered is: ' + str(((non_unique-spl_int)/mapped)*100))\nprint('Percent of total reads filtered is: ' + str(((mapped-non_unique)/mapped)*100))",
"Number of mapped reads is: 1155629\nPercent of polyA reads filtered is: 1.67043229271678\nPercent of 7SK reads filtered is: 24.880216747762475\nPercent of non-unique reads filtered is: 14.23648939235689\nPercent of splicing intermediates reads filtered is: 4.155572419868314\nPercent of total reads filtered is: 40.787138432836144\n"
],
[
"counts_df",
"_____no_output_____"
],
[
"# Melt counts table from wide to long format for plotting\ndf = pd.melt(counts_df, id_vars=['Sample'], value_vars=['Mapped', 'PolyA', '7SK', 'Splicing Intermediates', 'Non-unique Reads'])\n\n# add categorial variable to control the order of plotting\nvariable_cat = pd.Categorical(df['variable'], categories = ['Mapped',\n 'PolyA', \n '7SK',\n 'Non-unique Reads', \n 'Splicing Intermediates'])\n\ndf = df.assign(variable_cat = variable_cat)",
"_____no_output_____"
],
[
"# plot count values across all samples\nplt = (\n ggplot(aes(x = 'variable_cat', y = 'value', fill = 'variable'), df) + \n geom_bar(stat = 'identity', position = 'dodge') + \n facet_wrap('Sample', scales = 'free_y') +\n theme_classic() +\n theme(subplots_adjust={'wspace':0.8}) +\n theme(axis_text_x=element_text(rotation=45, hjust=1))\n)\n# plt\nplt.save(filename = 'filtering_counts.pdf')",
"/Users/Kirsten/Applications/anaconda3/envs/nanoCOP/lib/python3.7/site-packages/plotnine/ggplot.py:729: PlotnineWarning: Saving 6.4 x 4.8 in image.\n from_inches(height, units), units), PlotnineWarning)\n/Users/Kirsten/Applications/anaconda3/envs/nanoCOP/lib/python3.7/site-packages/plotnine/ggplot.py:730: PlotnineWarning: Filename: filtering_counts.pdf\n warn('Filename: {}'.format(filename), PlotnineWarning)\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76f6e61b8a9a85aebb785e5be6fb654fac7fe4e | 39,632 | ipynb | Jupyter Notebook | Test/Object_classification/Object_classification_Random_Forest.ipynb | marcolamartina/LamIra | 1eef96db3facf4a16d18db1b5f8e480d22c3d78a | [
"MIT"
] | null | null | null | Test/Object_classification/Object_classification_Random_Forest.ipynb | marcolamartina/LamIra | 1eef96db3facf4a16d18db1b5f8e480d22c3d78a | [
"MIT"
] | null | null | null | Test/Object_classification/Object_classification_Random_Forest.ipynb | marcolamartina/LamIra | 1eef96db3facf4a16d18db1b5f8e480d22c3d78a | [
"MIT"
] | null | null | null | 39,632 | 39,632 | 0.482741 | [
[
[
"# Object classification",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import SGDClassifier \nfrom sklearn.model_selection import train_test_split\nimport numpy as np\nimport os\nimport ast\nfrom glob import glob\nimport random\nimport traceback\nfrom tabulate import tabulate\nimport pickle\n\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.svm import SVC\nfrom sklearn.ensemble import RandomForestClassifier\nfrom matplotlib import pyplot as plt",
"_____no_output_____"
]
],
[
[
"## Parameters",
"_____no_output_____"
]
],
[
[
"new_data=True\nload_old_params=True\nsave_params=False\nselected_space=True",
"_____no_output_____"
],
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"_____no_output_____"
]
],
[
[
"## Utils functions",
"_____no_output_____"
]
],
[
[
"def translate(name):\n translate_dict={\"apple\":\"mela\",\n \"ball\":\"palla\",\n \"bell pepper\":\"peperone\",\n \"binder\":\"raccoglitore\",\n \"bowl\":\"ciotola\",\n \"calculator\":\"calcolatrice\",\n \"camera\":\"fotocamera\",\n \"cell phone\":\"telefono\",\n \"cereal box\":\"scatola\",\n \"coffee mug\":\"tazza\",\n \"comb\":\"spazzola\",\n \"dry battery\":\"batteria\",\n \"flashlight\":\"torcia\",\n \"food box\":\"scatola\",\n \"food can\":\"lattina\",\n \"food cup\":\"barattolo\",\n \"food jar\":\"barattolo\",\n \"garlic\":\"aglio\",\n \"lemon\":\"limone\",\n \"lime\":\"lime\",\n \"onion\":\"cipolla\",\n \"orange\":\"arancia\",\n \"peach\":\"pesca\",\n \"pear\":\"pera\",\n \"potato\":\"patata\",\n \"tomato\":\"pomodoro\",\n \"soda can\":\"lattina\",\n \"marker\":\"pennarello\",\n \"plate\":\"piatto\",\n \"notebook\":\"quaderno\",\n \"keyboard\":\"tastiera\",\n \"glue stick\":\"colla\",\n \"sponge\":\"spugna\",\n \"toothpaste\":\"dentifricio\",\n \"toothbrush\":\"spazzolino\"\n }\n try:\n return translate_dict[name]\n except:\n return name\n\ndef normalize_color(color):\n return color\n color_normalized=[]\n for i,f in enumerate(color):\n if i%3==0:\n color_normalized.append(f/256)\n else:\n color_normalized.append((f+128)/256)\n return color_normalized\n\n\ndef sort_and_cut_dict(dictionary,limit=3):\n iterator=sorted(dictionary.items(), key=lambda item: item[1], reverse=True)[:limit]\n coef=sum([i[1] for i in iterator])\n return {k: v/coef for k, v in iterator} \n",
"_____no_output_____"
]
],
[
[
"## Data",
"_____no_output_____"
]
],
[
[
"obj_dir = \"/content/drive/My Drive/Tesi/Code/Object_classification\"\n#obj_dir = \"/Users/marco/Google Drive/Tesi/Code/Object_classification\"\ndata_dir = obj_dir+\"/Data\"\nmodel_filename = obj_dir+\"/model.pkl\"\nexclusion_list=[\"binder\",\"camera\",\"cell phone\",\"dry battery\"]\ntest_folder=[\"apple_3\",\n \"bell_pepper_1\",\n \"bowl_3\",\n \"cereal_box_1\",\n \"coffe_mug_5\",\n \"comb_5\",\n \"flashlight_4\",\n \"food_box_6\",\n \"food_can_2\",\n \"garlic_1\",\n \"glue_stick_3\",\n \"keyboard_2\",\n \"lemon_1\",\n \"lime_1\",\n \"onion_1\",\n \"orange_1\",\n \"pear_4\",\n \"plate_5\",\n \"potato_5\",\n \"soda_can_2\",\n \"sponge_8\",\n \"tomato_1\",\n \"toothbrush_2\"\n ]\nif new_data:\n color_train=[]\n shape_train=[]\n texture_train=[]\n color_test=[]\n shape_test=[]\n texture_test=[]\n y_train=[]\n y_test=[]\n file_list=glob(data_dir+'/**', recursive=True)\n number_of_files=len(file_list)\n with open(obj_dir+\"/dictionary.pickle\",\"rb\") as f:\n dictionary=pickle.load(f)\n for j,filename in enumerate(file_list):\n if os.path.isfile(filename) and filename.endswith(\".txt\"):\n print(\"{:.2f}%\".format(j*100/number_of_files))\n name=\" \".join(filename.split(\"_\")[:-3]).rsplit(\"/\", 1)[1]\n if name in exclusion_list:\n continue\n name=translate(name)\n folder=filename.split(\"/\")[-2]\n\n if folder not in dictionary.keys():\n continue\n with open(filename, \"r\") as f:\n features=[]\n try:\n lines=f.readlines()\n for line in lines:\n features.append(ast.literal_eval(line))\n if len(features)==3: \n color,shape,texture=features\n color=normalize_color(color)\n if folder in test_folder:\n color_test.append(color)\n shape_test.append(shape)\n texture_test.append(texture)\n if selected_space:\n y_test.append(folder)\n else: \n y_test.append(name)\n else:\n color_train.append(color)\n shape_train.append(shape)\n texture_train.append(texture) \n if selected_space:\n y_train.append(folder)\n else: \n y_train.append(name)\n except:\n print(\"Error in {}\".format(filename))\n continue \n y_train=np.array(y_train)\n y_test=np.array(y_test)\n X_train=np.array([np.concatenate((c, s, t), axis=None) for c,s,t in zip(color_train,shape_train,texture_train)])\n X_test=np.array([np.concatenate((c, s, t), axis=None) for c,s,t in zip(color_test,shape_test,texture_test)]) \n\n color_train=np.array(color_train)\n shape_train=np.array(shape_train)\n texture_train=np.array(texture_train)\n color_test=np.array(color_test)\n shape_test=np.array(shape_test)\n texture_test=np.array(texture_test)\n X_train=color_train\n X_test=color_test\n \n \nelse:\n X_train=np.load(obj_dir+\"/input_train.npy\")\n X_test=np.load(obj_dir+\"/input_test.npy\")\n color_train=np.load(obj_dir+\"/color_train.npy\")\n shape_train=np.load(obj_dir+\"/shape_train.npy\")\n texture_train=np.load(obj_dir+\"/texture_train.npy\")\n color_test=np.load(obj_dir+\"/color_test.npy\")\n shape_test=np.load(obj_dir+\"/shape_test.npy\")\n texture_test=np.load(obj_dir+\"/texture_test.npy\")\n y_train=np.load(obj_dir+\"/output_train.npy\") \n y_test=np.load(obj_dir+\"/output_test.npy\") ",
"0.07%\n0.07%\n0.08%\n0.09%\n0.10%\n0.10%\n0.11%\n0.12%\n0.13%\n0.13%\n0.14%\n0.15%\n0.16%\n0.16%\n0.17%\n0.18%\n0.19%\n0.19%\n0.20%\n0.21%\n0.22%\n0.22%\n0.23%\n0.24%\n0.25%\n0.25%\n0.26%\n0.27%\n0.27%\n0.28%\n0.29%\n0.30%\n0.30%\n0.31%\n0.32%\n0.33%\n0.33%\n0.34%\n0.35%\n0.36%\n0.36%\n0.37%\n0.38%\n0.39%\n0.39%\n0.40%\n0.41%\n0.42%\n0.42%\n0.43%\n0.44%\n0.45%\n0.45%\n0.46%\n0.47%\n0.48%\n0.48%\n0.49%\n0.51%\n0.51%\n0.52%\n0.53%\n0.53%\n0.54%\n0.55%\n0.56%\n0.56%\n0.57%\n0.58%\n0.59%\n0.59%\n0.60%\n0.61%\n0.62%\n0.62%\n0.63%\n0.64%\n0.65%\n0.65%\n0.66%\n0.67%\n0.68%\n0.68%\n0.69%\n0.70%\n0.71%\n0.71%\n0.72%\n0.73%\n0.74%\n0.74%\n0.75%\n0.76%\n0.77%\n0.77%\n0.78%\n0.79%\n0.79%\n0.80%\n0.81%\n0.82%\n0.82%\n0.83%\n0.84%\n0.85%\n0.85%\n0.86%\n0.87%\n0.88%\n0.88%\n0.89%\n0.90%\n0.91%\n0.91%\n0.92%\n0.94%\n0.94%\n0.95%\n0.96%\n0.97%\n0.97%\n0.98%\n0.99%\n1.00%\n1.00%\n1.01%\n1.02%\n1.03%\n1.03%\n1.04%\n1.05%\n1.05%\n1.06%\n1.07%\n1.08%\n1.08%\n1.09%\n1.10%\n1.11%\n1.11%\n1.12%\n1.13%\n1.14%\n1.14%\n1.15%\n1.16%\n1.17%\n1.17%\n1.18%\n1.19%\n1.20%\n1.20%\n1.21%\n1.22%\n1.23%\n1.23%\n1.24%\n1.25%\n1.26%\n1.26%\n1.27%\n1.28%\n1.29%\n1.29%\n1.30%\n1.31%\n1.31%\n1.32%\n1.33%\n1.34%\n1.34%\n1.35%\n1.36%\n1.37%\n1.37%\n1.39%\n1.40%\n1.40%\n1.41%\n1.42%\n1.43%\n1.43%\n1.44%\n1.45%\n1.46%\n1.46%\n1.47%\n1.48%\n1.49%\n1.49%\n1.50%\n1.51%\n1.52%\n1.52%\n1.53%\n1.54%\n1.55%\n1.55%\n1.56%\n1.57%\n1.57%\n1.58%\n1.59%\n1.60%\n1.60%\n1.61%\n1.62%\n1.63%\n1.63%\n1.64%\n1.65%\n1.66%\n1.66%\n1.67%\n1.68%\n1.69%\n1.69%\n1.70%\n1.71%\n1.72%\n1.72%\n1.73%\n1.74%\n1.75%\n1.75%\n1.76%\n1.77%\n1.78%\n1.78%\n1.79%\n1.80%\n1.81%\n1.81%\n1.82%\n1.83%\n1.83%\n1.84%\n1.85%\n1.86%\n1.86%\n1.87%\n1.88%\n1.89%\n1.89%\n1.90%\n1.91%\n1.92%\n1.92%\n2.70%\n2.71%\n2.72%\n2.73%\n2.73%\n2.74%\n2.75%\n2.76%\n2.76%\n2.77%\n2.78%\n2.79%\n2.79%\n2.80%\n2.81%\n2.82%\n2.82%\n2.83%\n2.84%\n2.85%\n2.85%\n2.86%\n2.87%\n2.87%\n2.88%\n2.89%\n2.90%\n2.90%\n2.91%\n2.92%\n2.93%\n2.93%\n2.94%\n2.95%\n2.96%\n2.96%\n2.97%\n2.98%\n2.99%\n2.99%\n3.00%\n3.01%\n3.02%\n3.02%\n3.03%\n3.04%\n3.05%\n3.05%\n3.06%\n3.07%\n3.08%\n3.08%\n3.09%\n3.10%\n3.11%\n3.11%\n3.12%\n3.13%\n3.13%\n3.14%\n3.15%\n3.16%\n3.16%\n3.17%\n3.19%\n3.19%\n3.20%\n3.21%\n3.22%\n3.22%\n3.23%\n3.24%\n3.25%\n3.25%\n3.26%\n3.27%\n3.28%\n3.28%\n3.29%\n3.30%\n3.31%\n3.31%\n3.32%\n3.33%\n3.34%\n3.34%\n3.35%\n3.36%\n3.37%\n3.37%\n3.38%\n3.39%\n3.39%\n3.40%\n3.41%\n3.42%\n3.42%\n3.43%\n3.44%\n3.45%\n3.45%\n3.46%\n3.47%\n3.48%\n3.48%\n3.49%\n3.50%\n3.51%\n3.51%\n3.52%\n3.53%\n3.54%\n3.54%\n3.55%\n3.56%\n3.57%\n3.57%\n3.58%\n3.59%\n3.60%\n3.60%\n3.61%\n3.62%\n3.63%\n3.63%\n3.64%\n3.65%\n3.66%\n3.67%\n3.68%\n3.68%\n3.69%\n3.70%\n3.71%\n3.71%\n3.72%\n3.73%\n3.74%\n3.74%\n3.75%\n3.76%\n3.77%\n3.77%\n3.78%\n3.79%\n3.80%\n3.80%\n3.81%\n3.82%\n3.83%\n3.83%\n3.84%\n3.85%\n3.86%\n3.86%\n3.87%\n3.88%\n3.89%\n3.89%\n3.90%\n3.91%\n3.91%\n3.92%\n3.93%\n3.94%\n3.94%\n3.95%\n3.96%\n3.97%\n3.97%\n3.98%\n3.99%\n4.00%\n4.00%\n4.01%\n4.02%\n4.03%\n4.03%\n4.04%\n4.05%\n4.06%\n4.06%\n4.07%\n4.08%\n4.09%\n4.09%\n4.10%\n4.11%\n4.12%\n4.13%\n4.14%\n4.15%\n4.15%\n4.16%\n4.17%\n4.17%\n4.18%\n4.19%\n4.20%\n4.20%\n4.21%\n4.22%\n4.23%\n4.23%\n4.24%\n4.25%\n4.26%\n4.26%\n4.27%\n4.28%\n4.29%\n4.29%\n4.30%\n4.31%\n4.32%\n4.32%\n4.33%\n4.34%\n4.35%\n4.35%\n4.36%\n4.37%\n4.38%\n4.38%\n4.39%\n4.40%\n4.40%\n4.41%\n4.42%\n4.43%\n4.43%\n4.44%\n4.45%\n4.46%\n4.46%\n4.47%\n4.48%\n4.49%\n4.49%\n4.50%\n4.51%\n4.52%\n4.52%\n4.53%\n4.54%\n4.55%\n4.55%\n4.56%\n4.57%\n4.58%\n4.59%\n4.60%\n4.61%\n4.61%\n4.62%\n4.63%\n4.64%\n4.64%\n4.65%\n4.66%
\n4.66%\n4.67%\n4.68%\n4.69%\n4.69%\n4.70%\n4.71%\n4.72%\n4.72%\n4.73%\n4.74%\n4.75%\n4.75%\n4.76%\n4.77%\n4.78%\n4.78%\n4.79%\n4.80%\n4.81%\n4.81%\n4.82%\n4.83%\n4.84%\n4.84%\n4.85%\n4.86%\n4.87%\n4.87%\n4.88%\n4.89%\n4.90%\n4.90%\n4.91%\n4.92%\n4.92%\n4.93%\n4.94%\n4.95%\n4.95%\n4.96%\n4.97%\n4.98%\n4.98%\n4.99%\n5.00%\n5.01%\n5.01%\n5.02%\n5.03%\n5.04%\n5.04%\n5.05%\n5.06%\n5.07%\n5.07%\n5.08%\n5.10%\n5.11%\n5.12%\n5.13%\n5.13%\n5.14%\n5.15%\n5.16%\n5.16%\n5.17%\n5.18%\n5.18%\n5.19%\n5.20%\n5.21%\n5.21%\n5.22%\n5.23%\n5.24%\n5.24%\n5.25%\n5.26%\n5.27%\n5.27%\n5.28%\n5.29%\n5.30%\n5.30%\n5.31%\n5.32%\n5.33%\n5.33%\n5.34%\n5.35%\n5.36%\n5.36%\n5.37%\n5.38%\n5.39%\n5.39%\n5.40%\n5.41%\n5.42%\n5.42%\n5.43%\n5.44%\n5.44%\n5.45%\n5.46%\n5.47%\n5.47%\n5.48%\n5.50%\n5.50%\n5.51%\n5.52%\n5.53%\n5.53%\n5.54%\n5.55%\n5.56%\n5.56%\n5.57%\n5.58%\n5.59%\n5.59%\n5.60%\n5.61%\n5.62%\n5.62%\n5.63%\n5.64%\n5.65%\n5.65%\n5.66%\n5.67%\n5.68%\n5.68%\n5.69%\n5.70%\n5.70%\n5.71%\n5.72%\n5.73%\n5.73%\n5.74%\n5.75%\n5.76%\n5.76%\n5.77%\n5.78%\n5.79%\n5.79%\n5.80%\n5.81%\n5.82%\n5.82%\n5.83%\n5.84%\n5.85%\n5.85%\n5.86%\n5.87%\n5.88%\n5.88%\n5.89%\n5.90%\n5.91%\n5.91%\n5.92%\n5.93%\n5.94%\n5.94%\n5.95%\n5.96%\n5.96%\n5.97%\n5.98%\n5.99%\n5.99%\n6.00%\n6.01%\n6.02%\n6.02%\n6.03%\n6.04%\n6.05%\n6.05%\n6.06%\n6.07%\n6.08%\n6.09%\n6.10%\n6.11%\n6.11%\n6.12%\n6.13%\n6.14%\n6.14%\n6.15%\n6.16%\n6.17%\n6.17%\n6.18%\n6.19%\n6.20%\n6.20%\n6.21%\n6.22%\n6.22%\n6.23%\n6.24%\n6.25%\n6.25%\n6.26%\n6.27%\n6.28%\n6.28%\n6.29%\n6.30%\n6.31%\n6.31%\n6.32%\n6.33%\n6.34%\n6.34%\n6.35%\n6.36%\n6.37%\n6.37%\n6.38%\n6.39%\n6.40%\n6.40%\n6.41%\n6.42%\n6.43%\n6.43%\n6.44%\n6.45%\n6.46%\n6.46%\n6.47%\n6.48%\n6.48%\n6.49%\n6.50%\n6.51%\n6.52%\n6.53%\n6.54%\n6.54%\n6.55%\n6.56%\n6.57%\n6.57%\n6.58%\n6.59%\n6.60%\n6.60%\n6.61%\n6.62%\n6.63%\n6.63%\n6.64%\n6.65%\n6.66%\n6.66%\n6.67%\n6.68%\n6.69%\n6.69%\n6.70%\n6.71%\n6.72%\n6.72%\n6.73%\n6.74%\n6.74%\n6.75%\n6.76%\n6.77%\n6.77%\n6.78%\n6.79%\n6.80%\n6.80%\n6.81%\n6.82%\n6.83%\n6.83%\n6.84%\n6.85%\n6.86%\n6.86%\n6.87%\n6.88%\n6.89%\n6.89%\n6.90%\n6.91%\n6.92%\n6.92%\n6.94%\n6.95%\n6.95%\n6.96%\n6.97%\n6.98%\n6.98%\n6.99%\n7.00%\n7.00%\n7.01%\n7.02%\n7.03%\n7.03%\n7.04%\n7.05%\n7.06%\n7.06%\n7.07%\n7.08%\n7.09%\n7.09%\n7.10%\n7.11%\n7.12%\n7.12%\n7.13%\n7.14%\n7.15%\n7.15%\n7.16%\n7.17%\n7.18%\n7.18%\n7.19%\n7.20%\n7.21%\n7.21%\n7.22%\n7.23%\n7.24%\n7.24%\n7.25%\n7.26%\n7.26%\n7.27%\n7.28%\n7.29%\n7.29%\n7.30%\n7.31%\n7.32%\n7.33%\n7.34%\n7.35%\n7.35%\n7.36%\n7.37%\n7.38%\n7.38%\n7.39%\n7.40%\n7.41%\n7.41%\n7.42%\n7.43%\n7.44%\n7.44%\n7.45%\n7.46%\n7.47%\n7.47%\n7.48%\n7.49%\n7.50%\n7.50%\n7.51%\n7.52%\n7.52%\n7.53%\n7.54%\n7.55%\n7.55%\n7.56%\n7.57%\n7.58%\n7.58%\n7.59%\n7.60%\n7.61%\n7.61%\n7.62%\n7.63%\n7.64%\n7.64%\n7.65%\n7.66%\n7.67%\n7.67%\n7.68%\n7.69%\n7.70%\n7.70%\n7.71%\n7.72%\n7.73%\n7.73%\n7.74%\n7.75%\n7.76%\n7.76%\n7.77%\n7.78%\n7.78%\n7.79%\n7.80%\n7.81%\n7.81%\n7.82%\n7.83%\n7.84%\n7.84%\n7.85%\n7.86%\n7.87%\n7.87%\n7.88%\n7.89%\n7.90%\n7.90%\n7.91%\n7.92%\n7.93%\n7.93%\n7.94%\n7.95%\n7.96%\n7.96%\n7.97%\n7.98%\n7.99%\n7.99%\n8.00%\n8.01%\n8.02%\n8.02%\n8.03%\n8.04%\n8.04%\n8.05%\n8.06%\n8.07%\n8.07%\n8.08%\n8.10%\n8.11%\n8.12%\n8.13%\n8.13%\n8.14%\n8.15%\n8.16%\n8.16%\n8.17%\n8.18%\n8.19%\n8.19%\n8.20%\n8.21%\n8.22%\n8.22%\n8.23%\n8.24%\n8.25%\n8.25%\n8.26%\n8.27%\n8.28%\n8.28%\n8.29%\n8.30%\n8.30%\n8.31%\n8.32%\n8.33%\n8.33%\n8.34%\n8.35%\n8.36%\n8.36%\n8.37%\n8.38%\n8.39%\n8.39%\n8.40%\n8.41%\n8.42%\n8.42%\n8.43%\n8.44%\n8.45%\n8.46%\n8.47%\n8.48%\n8.48%\n8.49%\n8.50%\n8.51
%\n8.51%\n8.52%\n8.53%\n8.54%\n8.54%\n8.55%\n8.56%\n8.56%\n8.57%\n8.58%\n8.59%\n8.59%\n8.60%\n8.61%\n8.62%\n8.62%\n8.63%\n8.64%\n8.65%\n8.65%\n8.66%\n8.67%\n8.68%\n8.68%\n8.69%\n8.70%\n8.71%\n8.71%\n8.72%\n8.73%\n8.74%\n8.74%\n8.75%\n8.76%\n8.77%\n8.77%\n8.78%\n8.79%\n8.80%\n8.80%\n8.81%\n8.82%\n8.82%\n8.83%\n8.84%\n8.85%\n8.85%\n8.86%\n8.87%\n8.88%\n8.88%\n8.89%\n8.90%\n8.91%\n8.91%\n8.92%\n8.93%\n8.94%\n8.94%\n8.95%\n8.96%\n8.97%\n8.98%\n8.99%\n9.00%\n9.00%\n9.01%\n9.02%\n9.03%\n9.03%\n9.04%\n9.05%\n9.06%\n9.06%\n9.07%\n9.08%\n9.08%\n9.09%\n9.10%\n9.11%\n9.11%\n9.12%\n9.13%\n9.14%\n9.14%\n9.15%\n9.16%\n9.17%\n9.17%\n9.18%\n9.19%\n9.20%\n9.20%\n9.21%\n9.22%\n9.23%\n9.23%\n9.24%\n9.25%\n9.26%\n9.26%\n9.27%\n9.28%\n9.29%\n9.29%\n9.30%\n9.31%\n9.32%\n9.32%\n9.33%\n9.34%\n9.34%\n9.35%\n9.36%\n9.37%\n9.37%\n9.38%\n9.40%\n9.40%\n9.41%\n9.42%\n9.43%\n9.43%\n9.44%\n9.45%\n9.46%\n9.46%\n9.47%\n9.48%\n9.49%\n9.49%\n9.50%\n9.51%\n9.52%\n9.52%\n9.53%\n9.54%\n9.55%\n9.55%\n9.56%\n9.57%\n9.58%\n9.58%\n9.59%\n9.60%\n9.60%\n9.61%\n9.62%\n9.63%\n9.63%\n9.64%\n9.65%\n9.66%\n9.66%\n9.67%\n9.68%\n9.69%\n9.69%\n9.70%\n9.71%\n9.72%\n9.72%\n9.73%\n9.74%\n9.75%\n9.75%\n9.77%\n9.78%\n9.78%\n9.79%\n9.80%\n9.81%\n9.81%\n9.82%\n9.83%\n9.84%\n9.84%\n9.85%\n9.86%\n9.86%\n9.87%\n9.88%\n9.89%\n9.89%\n9.90%\n9.91%\n9.92%\n9.92%\n9.93%\n9.94%\n9.95%\n9.95%\n9.96%\n9.97%\n9.98%\n9.98%\n9.99%\n10.00%\n10.01%\n10.01%\n10.02%\n10.03%\n10.04%\n10.04%\n10.05%\n10.06%\n10.07%\n10.07%\n10.08%\n10.09%\n10.10%\n10.10%\n10.11%\n10.12%\n10.12%\n10.13%\n10.14%\n10.15%\n10.15%\n10.16%\n10.17%\n10.18%\n10.18%\n10.19%\n10.21%\n10.22%\n10.23%\n10.24%\n10.24%\n10.25%\n10.26%\n10.27%\n10.27%\n10.28%\n10.29%\n10.30%\n10.30%\n10.31%\n10.32%\n10.33%\n10.33%\n10.34%\n10.35%\n10.36%\n10.36%\n10.37%\n10.38%\n10.38%\n10.39%\n10.40%\n10.41%\n10.41%\n10.42%\n10.43%\n10.44%\n10.44%\n10.45%\n10.46%\n10.47%\n10.47%\n10.48%\n10.49%\n10.50%\n10.50%\n10.51%\n10.52%\n10.53%\n10.53%\n10.54%\n10.55%\n10.56%\n10.56%\n10.57%\n10.58%\n10.59%\n10.59%\n10.60%\n10.61%\n10.62%\n10.62%\n10.63%\n10.64%\n10.64%\n10.65%\n10.66%\n10.67%\n10.67%\n10.68%\n10.70%\n10.70%\n10.71%\n10.72%\n10.73%\n10.73%\n10.74%\n10.75%\n10.76%\n10.76%\n10.77%\n10.78%\n10.79%\n10.79%\n10.80%\n10.81%\n10.82%\n10.82%\n10.83%\n10.84%\n10.85%\n10.85%\n10.86%\n10.87%\n10.88%\n10.88%\n10.89%\n10.90%\n10.90%\n10.91%\n10.92%\n10.93%\n10.93%\n10.94%\n10.95%\n10.96%\n10.96%\n10.97%\n10.98%\n10.99%\n10.99%\n11.00%\n11.01%\n11.02%\n11.02%\n11.04%\n11.05%\n11.05%\n11.06%\n11.07%\n11.08%\n11.08%\n11.09%\n11.10%\n11.11%\n11.11%\n11.12%\n11.13%\n11.14%\n11.14%\n11.15%\n11.16%\n11.16%\n11.17%\n11.18%\n11.19%\n11.19%\n11.20%\n11.21%\n11.22%\n11.22%\n11.23%\n11.24%\n11.25%\n11.25%\n11.26%\n11.27%\n11.28%\n11.28%\n11.29%\n11.30%\n11.31%\n11.31%\n11.32%\n11.33%\n11.34%\n11.34%\n11.35%\n11.36%\n11.37%\n11.37%\n11.38%\n11.39%\n11.40%\n11.40%\n11.41%\n11.42%\n11.42%\n11.43%\n11.44%\n11.45%\n11.45%\n11.46%\n11.47%\n11.48%\n11.49%\n11.50%\n11.51%\n11.51%\n11.52%\n11.53%\n11.54%\n11.54%\n11.55%\n11.56%\n11.57%\n11.57%\n11.58%\n11.59%\n11.60%\n11.60%\n11.61%\n11.62%\n11.63%\n11.63%\n11.64%\n11.65%\n11.66%\n11.66%\n11.67%\n11.68%\n11.68%\n11.69%\n11.70%\n11.71%\n11.71%\n11.72%\n11.73%\n11.74%\n11.74%\n11.75%\n11.76%\n11.77%\n11.77%\n11.78%\n11.79%\n11.80%\n11.80%\n11.81%\n11.82%\n11.83%\n11.83%\n11.84%\n11.85%\n11.86%\n11.86%\n11.87%\n11.88%\n11.89%\n11.89%\n11.90%\n11.91%\n11.92%\n11.92%\n11.93%\n11.94%\n11.94%\n11.95%\n11.96%\n11.97%\n11.97%\n11.98%\n11.99%\n12.00%\n12.00%\n12.01%\n12.02%\n12.03%\n12.03%\n12.04%\n12.05
%\n[progress output truncated: 12.06% ... 16.51%]\nError in /content/drive/My Drive/Tesi/Code/Object_classification/Data/cereal_box/cereal_box_4/cereal_box_4_1_181.txt\nError in /content/drive/My Drive/Tesi/Code/Object_classification/Data/cereal_box/cereal_box_5/cereal_box_5_4_25.txt\n"
]
],
[
[
"## Save input data",
"_____no_output_____"
]
],
[
[
"if selected_space:\n new_y_train=[] \n for i in y_train:\n new_label=dictionary[i][1]\n #new_label=new_label.split(\"-\")[0]\n new_y_train.append(new_label)\n new_y_test=[] \n for i in y_test:\n new_label=dictionary[i][1]\n #new_label=new_label.split(\"-\")[0]\n new_y_test.append(new_label)\n y_train=np.array(new_y_train)\n y_test=np.array(new_y_test)",
"_____no_output_____"
],
[
"if new_data and save_params:\n np.save(obj_dir+\"/input_train.npy\",X_train)\n np.save(obj_dir+\"/input_test.npy\",X_test)\n np.save(obj_dir+\"/color_train.npy\",color_train)\n np.save(obj_dir+\"/shape_train.npy\",shape_train)\n np.save(obj_dir+\"/texture_train.npy\",texture_train)\n np.save(obj_dir+\"/color_test.npy\",color_test)\n np.save(obj_dir+\"/shape_test.npy\",shape_test)\n np.save(obj_dir+\"/texture_test.npy\",texture_test)\n np.save(obj_dir+\"/output_test.npy\",y_test)\n np.save(obj_dir+\"/output_train.npy\",y_train)",
"_____no_output_____"
]
],
[
[
"## Classifier fitting",
"_____no_output_____"
]
],
[
[
"if load_old_params and False:\n with open(model_filename, 'rb') as file:\n clf = pickle.load(file)\nelse:\n clf = RandomForestClassifier(n_jobs=-1, n_estimators=30)\n clf.fit(X_train,y_train)\n print(clf.score(X_test,y_test))",
"_____no_output_____"
]
],
[
[
"## Saving parameters",
"_____no_output_____"
]
],
[
[
"if save_params:\n with open(model_filename, 'wb') as file:\n pickle.dump(clf, file)",
"_____no_output_____"
]
],
[
[
"## Score",
"_____no_output_____"
]
],
[
[
"def classify_prediction(prediction):\n sure=[]\n unsure=[]\n dubious=[]\n cannot_answer=[]\n for pred in prediction:\n o,p=pred\n values=list(p.values())\n keys=list(p.keys())\n # sure\n if values[0]>0.8: \n sure.append(pred)\n # unsure \n elif values[0]>0.6:\n unsure.append(pred)\n # dubious \n elif values[0]>0.4:\n dubious.append(pred)\n # cannot_answer\n else:\n cannot_answer.append(pred)\n return {\"sure\":sure, \"unsure\":unsure, \"dubious\":dubious, \"cannot_answer\":cannot_answer} \n\ndef calculate_accuracy(category,prediction):\n counter=0\n if category==\"dubious\":\n for o,p in pred:\n if o in list(p.keys())[0:2]:\n counter+=1\n elif category==\"cannot_answer\":\n for o,p in pred:\n if o not in list(p.keys())[0:2]:\n counter+=1\n else:\n for o,p in pred:\n if o.split(\"-\")[0] in list(p.keys())[0]:\n counter+=1 \n return counter/len(pred) \n\n",
"_____no_output_____"
],
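[
"# sort_and_cut_dict is used in the next cell but is not defined in the cells shown here;\n# the notebook presumably defines it earlier. The fallback below is only an assumed, minimal\n# version consistent with how it is used (the top-k classes ordered by predicted probability).\nif 'sort_and_cut_dict' not in globals():\n    def sort_and_cut_dict(prob_dict, k=5):\n        # keep the k most probable classes, best first (dicts preserve insertion order)\n        return dict(sorted(prob_dict.items(), key=lambda kv: kv[1], reverse=True)[:k])",
"_____no_output_____"
],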
[
"label_prob=clf.predict_proba(X_test)\npred=[[y_test[j],sort_and_cut_dict({clf.classes_[i]:v for i,v in enumerate(row)})] for j,row in enumerate(label_prob)]\npred_classified=classify_prediction(pred)\nprint(\"TOTAL TEST: {}\".format(len(pred)))\nfor l,pred in pred_classified.items():\n print(l.upper())\n print(40*\"-\")\n selected=[]\n for o,p in pred:\n if l==\"dubious\" and o not in list(p.keys())[0:2]:\n selected.append([o,\", \".join([str(a)+\":\"+str(round(b,2)) for a,b in list(p.items())])])\n elif l==\"cannot_answer\" and o in list(p.keys())[0:2]:\n selected.append([o,\", \".join([str(a)+\":\"+str(round(b,2)) for a,b in list(p.items())])])\n\n elif l==\"unsure\" and o.split(\"-\")[0] not in list(p.keys())[0]:\n selected.append([o,\", \".join([str(a)+\":\"+str(round(b,2)) for a,b in list(p.items())])])\n\n elif (l==\"sure\") and o != list(p.keys())[0]:\n selected.append([o,\", \".join([str(a)+\":\"+str(round(b,2)) for a,b in list(p.items())])])\n print(tabulate(selected, headers=['Original','Predicted']))\n print(\"Not correct: {}/{} - {:.2f}%\".format(len(selected),len(pred),len(selected)*100/len(pred)))\n accuracy=calculate_accuracy(l,pred)\n print(\"Accuracy: {:.2f}\".format(accuracy)) ",
"_____no_output_____"
]
],
[
[
"## Test",
"_____no_output_____"
]
],
[
[
"clf.score(X_test,y_test)",
"_____no_output_____"
],
[
"plt.plot(clf.feature_importances_)",
"_____no_output_____"
],
[
"clf.feature_importances_",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix\nimport pandas as pd\ndef classification_report(y_true, y_pred):\n print(f\"Accuracy: {accuracy_score(y_true, y_pred)}.\")\n print(f\"Precision: {precision_score(y_true, y_pred, average='weighted', zero_division=True)}.\")\n print(f\"Recall: {recall_score(y_true, y_pred, average='weighted')}.\")\n print(f\"F1-Score: {f1_score(y_true, y_pred, average='weighted')}.\")\n\n print(\"\\nSuddivisione per Classe\")\n\n matrix = confusion_matrix(y_true, y_pred)\n # i falsi positivi si trovano sommando le colonne ed eliminando l'elemento diagonale (che rappresenta i veri positivi)\n FP = matrix.sum(axis=0) - np.diag(matrix) \n # i falsi negativi invece si individuano sommando le righe\n FN = matrix.sum(axis=1) - np.diag(matrix)\n TP = np.diag(matrix)\n TN = matrix.sum() - (FP + FN + TP)\n class_names = np.unique(y_true)\n metrics_per_class = {}\n class_accuracies = (TP+TN)/(TP+TN+FP+FN)\n class_precisions = TP/(TP+FP)\n class_recalls = TP/(TP+FN)\n class_f1_scores = (2 * class_precisions * class_recalls) / (class_precisions + class_recalls)\n i=0\n\n for name in class_names:\n metrics_per_class[name] = [class_accuracies.tolist().pop(i), class_precisions.tolist().pop(i), class_recalls.tolist().pop(i), class_f1_scores.tolist().pop(i), FP.tolist().pop(i), FN.tolist().pop(i)]\n i += 1\n\n result = pd.DataFrame(metrics_per_class, index=[\"Accuracy\", \"Precision\", \"Recall\", \"F1 Score\", \"FP\", \"FN\"]).transpose() \n\n print(result, end=\"\\n\\n\")\n return metrics_per_class",
"_____no_output_____"
],
[
"#from sklearn.metrics import classification_report\ny_true=y_test\ny_pred=clf.predict(X_test)\nd=classification_report(y_true, y_pred)\nexclusion_list=[\"batteria\",\"ciotola\",\"piatto\",\"cipolla\",\"pomodoro\"]\nfor k in exclusion_list:\n del d[k]",
"_____no_output_____"
],
[
"data=[]\nlabels = []\nfor k,v in d.items():\n data.append([k]+v[:4])\n labels.append(k)\ndata=np.array(data)\ncolors = ['red','yellow','blue','green']\ndf = pd.DataFrame(data.T, index=[\"Label\",\"Accuracy\", \"Precision\", \"Recall\", \"F1 Score\"]).transpose()\n#df=df.set_index(\"Label\")\ndf[[\"Accuracy\", \"Precision\", \"Recall\", \"F1 Score\"]]=df[[\"Accuracy\", \"Precision\", \"Recall\", \"F1 Score\"]].apply(pd.to_numeric) \nax = df.plot(x=\"Label\", y=[\"Accuracy\", \"Precision\", \"Recall\", \"F1 Score\"], kind=\"barh\",figsize=(15,15))\n\n\nplt.show()\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76f78e3b290355aee880abd7bd8445e7f4056b1 | 870,289 | ipynb | Jupyter Notebook | nlp-in-tensorflow/Course_3_Week_2_Lesson_1.ipynb | macio-matheus/tensorflow-specialization | 3f21d410299436f1b0922b3bf7c54c29a858f8c1 | [
"MIT"
] | 1 | 2020-01-30T14:03:36.000Z | 2020-01-30T14:03:36.000Z | nlp-in-tensorflow/Course_3_Week_2_Lesson_1.ipynb | macio-matheus/tensorflow-specialization | 3f21d410299436f1b0922b3bf7c54c29a858f8c1 | [
"MIT"
] | null | null | null | nlp-in-tensorflow/Course_3_Week_2_Lesson_1.ipynb | macio-matheus/tensorflow-specialization | 3f21d410299436f1b0922b3bf7c54c29a858f8c1 | [
"MIT"
] | null | null | null | 36.014442 | 785 | 0.49265 | [
[
[
"# NOTE: PLEASE MAKE SURE YOU ARE RUNNING THIS IN A PYTHON3 ENVIRONMENT\n\nimport tensorflow as tf\nprint(tf.__version__)\n\n# This is needed for the iterator over the data\n# But not necessary if you have TF 2.0 installed\n#!pip install tensorflow==2.0.0-beta0\n\n\ntf.enable_eager_execution()\n\n# !pip install -q tensorflow-datasets",
"_____no_output_____"
],
[
"import tensorflow_datasets as tfds\nimdb, info = tfds.load(\"imdb_reviews\", with_info=True, as_supervised=True)\n",
"\u001b[1mDownloading and preparing dataset imdb_reviews (80.23 MiB) to /root/tensorflow_datasets/imdb_reviews/plain_text/0.1.0...\u001b[0m\n"
],
[
"import numpy as np\n\ntrain_data, test_data = imdb['train'], imdb['test']\n\ntraining_sentences = []\ntraining_labels = []\n\ntesting_sentences = []\ntesting_labels = []\n\n# str(s.tonumpy()) is needed in Python3 instead of just s.numpy()\nfor s,l in train_data:\n training_sentences.append(str(s.numpy()))\n training_labels.append(l.numpy())\n \nfor s,l in test_data:\n testing_sentences.append(str(s.numpy()))\n testing_labels.append(l.numpy())\n \ntraining_labels_final = np.array(training_labels)\ntesting_labels_final = np.array(testing_labels)\n",
"_____no_output_____"
],
[
"vocab_size = 10000\nembedding_dim = 16\nmax_length = 120\ntrunc_type='post'\noov_tok = \"<OOV>\"\n\n\nfrom tensorflow.keras.preprocessing.text import Tokenizer\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences\n\ntokenizer = Tokenizer(num_words = vocab_size, oov_token=oov_tok)\ntokenizer.fit_on_texts(training_sentences)\nword_index = tokenizer.word_index\nsequences = tokenizer.texts_to_sequences(training_sentences)\npadded = pad_sequences(sequences,maxlen=max_length, truncating=trunc_type)\n\ntesting_sequences = tokenizer.texts_to_sequences(testing_sentences)\ntesting_padded = pad_sequences(testing_sequences,maxlen=max_length)\n\n",
"_____no_output_____"
],
[
"reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])\n\ndef decode_review(text):\n return ' '.join([reverse_word_index.get(i, '?') for i in text])\n\nprint(decode_review(padded[1]))\nprint(training_sentences[1])",
"b oh yeah jenna jameson did it again yeah baby this movie rocks it was one of the 1st movies i saw of her and i have to say i feel in love with her she was great in this move br br her performance was outstanding and what i liked the most was the scenery and the wardrobe it was amazing you can tell that they put a lot into the movie the girls cloth were amazing br br i hope this comment helps and u can buy the movie the storyline is awesome is very unique and i'm sure u are going to like it jenna amazed us once more and no wonder the movie won so many\nb\"Oh yeah! Jenna Jameson did it again! Yeah Baby! This movie rocks. It was one of the 1st movies i saw of her. And i have to say i feel in love with her, she was great in this move.<br /><br />Her performance was outstanding and what i liked the most was the scenery and the wardrobe it was amazing you can tell that they put a lot into the movie the girls cloth were amazing.<br /><br />I hope this comment helps and u can buy the movie, the storyline is awesome is very unique and i'm sure u are going to like it. Jenna amazed us once more and no wonder the movie won so many awards. Her make-up and wardrobe is very very sexy and the girls on girls scene is amazing. specially the one where she looks like an angel. It's a must see and i hope u share my interests\"\n"
],
[
"model = tf.keras.Sequential([\n tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(6, activation='relu'),\n tf.keras.layers.Dense(1, activation='sigmoid')\n])\nmodel.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])\nmodel.summary()\n",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nembedding (Embedding) (None, 120, 16) 160000 \n_________________________________________________________________\nflatten (Flatten) (None, 1920) 0 \n_________________________________________________________________\ndense (Dense) (None, 6) 11526 \n_________________________________________________________________\ndense_1 (Dense) (None, 1) 7 \n=================================================================\nTotal params: 171,533\nTrainable params: 171,533\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"num_epochs = 10\nmodel.fit(padded, training_labels_final, epochs=num_epochs, validation_data=(testing_padded, testing_labels_final))",
"Train on 25000 samples, validate on 25000 samples\nEpoch 1/10\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/nn_impl.py:183: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\n"
],
[
"e = model.layers[0]\nweights = e.get_weights()[0]\nprint(weights.shape) # shape: (vocab_size, embedding_dim)",
"(10000, 16)\n"
],
[
"import io\n\nout_v = io.open('vecs.tsv', 'w', encoding='utf-8')\nout_m = io.open('meta.tsv', 'w', encoding='utf-8')\nfor word_num in range(1, vocab_size):\n word = reverse_word_index[word_num]\n embeddings = weights[word_num]\n out_m.write(word + \"\\n\")\n out_v.write('\\t'.join([str(x) for x in embeddings]) + \"\\n\")\nout_v.close()\nout_m.close()",
"_____no_output_____"
],
[
"\n\ntry:\n from google.colab import files\nexcept ImportError:\n pass\nelse:\n files.download('vecs.tsv')\n files.download('meta.tsv')",
"_____no_output_____"
],
[
"sentence = \"I really think this is amazing. honest.\"\nsequence = tokenizer.texts_to_sequences(sentence)\nprint(sequence)",
"[[11], [], [1430], [968], [4], [1537], [1537], [4738], [], [790], [2015], [11], [2922], [2191], [], [790], [2015], [11], [579], [], [11], [579], [], [4], [1783], [4], [4508], [11], [2922], [1277], [], [], [2015], [1005], [2922], [968], [579], [790], []]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76f7fd9e2b4a56569262dbd77850269ee0b3537 | 30,743 | ipynb | Jupyter Notebook | Competition-Solutions/Text/AI4D Malawi News Classification Challenge/Solution 1/transformers_baseline.ipynb | ZindiAfrica/Natural-Language-Processing-NLP- | 41763b83677f1a4853af397a34d8a82fa9ac45fc | [
"MIT"
] | null | null | null | Competition-Solutions/Text/AI4D Malawi News Classification Challenge/Solution 1/transformers_baseline.ipynb | ZindiAfrica/Natural-Language-Processing-NLP- | 41763b83677f1a4853af397a34d8a82fa9ac45fc | [
"MIT"
] | null | null | null | Competition-Solutions/Text/AI4D Malawi News Classification Challenge/Solution 1/transformers_baseline.ipynb | ZindiAfrica/Natural-Language-Processing-NLP- | 41763b83677f1a4853af397a34d8a82fa9ac45fc | [
"MIT"
] | null | null | null | 39.21301 | 175 | 0.420681 | [
[
[
"import numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\nimport os\nfrom sklearn.metrics import accuracy_score, roc_auc_score\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nimport tensorflow_addons as tfa\nfrom sklearn.model_selection import StratifiedKFold\nimport tensorflow_addons as tfa\nfrom sklearn.preprocessing import LabelEncoder",
"_____no_output_____"
],
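[
"# seed_everything and SEED are called repeatedly below but are not defined in the cells shown\n# here; the notebook presumably defines them earlier. The fallback below is only an assumed,\n# minimal implementation so the excerpt stays runnable.\nimport os\nimport random\n\nif 'SEED' not in globals():\n    SEED = 42  # assumed default, not necessarily the author's value\n\nif 'seed_everything' not in globals():\n    def seed_everything(seed=42):\n        # fix the Python, NumPy and TensorFlow RNGs for reproducibility\n        os.environ['PYTHONHASHSEED'] = str(seed)\n        random.seed(seed)\n        np.random.seed(seed)\n        tf.random.set_seed(seed)",
"_____no_output_____"
],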
[
"# Detect hardware, return appropriate distribution strategy\ntry:\n ############################################################################################################\n\n ########################################### \" SEED HERE \" #################################################\n \n # TPU detection. No parameters necessary if TPU_NAME environment variable is\n # set: this is always the case on Kaggle.\n seed_everything(seed=SEED)\n tpu = tf.distribute.cluster_resolver.TPUClusterResolver()\n print('Running on TPU ', tpu.master())\n ############################################################################################################\n\nexcept ValueError:\n tpu = None\n\nif tpu:\n ############################################################################################################\n\n ########################################### \" SEED HERE \" #################################################\n \n seed_everything(seed=SEED)\n tf.config.experimental_connect_to_cluster(tpu)\n tf.tpu.experimental.initialize_tpu_system(tpu)\n strategy = tf.distribute.experimental.TPUStrategy(tpu)\n ############################################################################################################\nelse:\n ############################################################################################################\n\n ########################################### \" SEED HERE \" #################################################\n\n # Default distribution strategy in Tensorflow. Works on CPU and single GPU.\n seed_everything(seed=SEED)\n strategy = tf.distribute.get_strategy()\n\n ############################################################################################################\nprint(\"REPLICAS: \", strategy.num_replicas_in_sync)",
"REPLICAS: 1\n"
],
[
"train = pd.read_csv('Train.csv')\ntrain.head()",
"_____no_output_____"
],
[
"test = pd.read_csv('Test.csv')\ntest.head()",
"_____no_output_____"
],
[
"LB = LabelEncoder()\ntrain['Label'] = LB.fit_transform(train['Label'])",
"_____no_output_____"
],
[
"############################################################################################################\n\n########################################### \" SEED HERE \" #################################################\n\nseed_everything(seed=SEED)\nAUTO = tf.data.experimental.AUTOTUNE\n# Configuration\nEPOCHS = 30\nN_LABELS = train['Label'].unique().shape[0]\nBATCH_SIZE = 32",
"_____no_output_____"
],
[
"############################################################################################################\n\n########################################### \" SEED HERE \" #################################################\n\nseed_everything(seed=SEED)\ndf = pd.concat((train, test))\ndataset = tf.data.Dataset.from_tensor_slices(df['Text'].values)",
"_____no_output_____"
],
[
"############################################################################################################\n\n########################################### \" SEED HERE \" ##################################################\n\nseed_everything(seed=SEED)\nvocab_size = 100000\nmaxlen = 200\nencoder = tf.keras.layers.experimental.preprocessing.TextVectorization(\n max_tokens=vocab_size, output_sequence_length=maxlen)\nencoder.adapt(dataset)",
"_____no_output_____"
],
[
"%%time \ndef reformat(x, y):\n return x, tf.cast(y, tf.float32)\n\ndef df_to_dataset(data, labels, data_type='Train'):\n x_token = data['Text'].values\n if data_type=='Train':\n y_label = labels.values\n dataset = (tf.data.Dataset\n .from_tensor_slices((x_token, y_label))\n .repeat()\n .shuffle(2048)\n .batch(BATCH_SIZE)\n .prefetch(AUTO))\n dataset = dataset.map(reformat)\n elif data_type=='Val':\n y_label = labels.values\n dataset = ( tf.data.Dataset\n .from_tensor_slices((x_token, y_label))\n .batch(BATCH_SIZE)\n .cache()\n .prefetch(AUTO)\n ) \n dataset =dataset.map(reformat)\n else:\n dataset = (tf.data.Dataset\n .from_tensor_slices(x_token)\n .batch(BATCH_SIZE)\n )\n return dataset",
"CPU times: user 3 µs, sys: 1 µs, total: 4 µs\nWall time: 7.39 µs\n"
],
[
"\nclass TokenAndPositionEmbedding(layers.Layer):\n def __init__(self, maxlen, vocab_size, embed_dim):\n super(TokenAndPositionEmbedding, self).__init__()\n self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)\n self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim)\n\n def call(self, x):\n maxlen = tf.shape(x)[-1]\n positions = tf.range(start=0, limit=maxlen, delta=1)\n positions = self.pos_emb(positions)\n x = self.token_emb(x)\n return x + positions\n \n\nclass TransformerBlock(layers.Layer):\n def __init__(self, embed_dim, num_heads, ff_dim, rate=0.6):\n super(TransformerBlock, self).__init__()\n self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)\n self.ffn = keras.Sequential(\n [layers.Dense(ff_dim, activation=\"relu\"), layers.Dense(embed_dim),]\n )\n self.layernorm1 = layers.LayerNormalization()\n self.layernorm2 = layers.LayerNormalization()\n self.dropout1 = layers.Dropout(rate)\n self.dropout2 = layers.Dropout(rate)\n\n def call(self, inputs, training):\n attn_output = self.att(inputs, inputs)\n attn_output = self.dropout1(attn_output, training=training)\n out1 = self.layernorm1(inputs + attn_output)\n ffn_output = self.ffn(out1)\n ffn_output = self.dropout2(ffn_output, training=training)\n return self.layernorm2(out1 + ffn_output)\n\ndef BERTModel(embed_dim = 130, num_heads = 6, ff_dim = 32):\n ############################################################################################################\n\n ########################################### \" SEED HERE \" ###########################################\n seed_everything(seed=SEED)\n inputs = layers.Input(shape=(),dtype=tf.string)\n x = encoder(inputs)\n embedding_layer = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim)\n x = embedding_layer(x)\n transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim)\n x = transformer_block(x)\n x = layers.GlobalAveragePooling1D()(x)\n x = tf.keras.layers.BatchNormalization()(x)\n x = tf.keras.layers.Dense(256, activation = \"relu\")(x) \n x = tf.keras.layers.Dropout(0.2)(x)\n x = tf.keras.layers.BatchNormalization()(x)\n x = tf.keras.layers.Dense(128, activation = \"sigmoid\")(x) \n x = tf.keras.layers.Dropout(0.4)(x) \n outputs = layers.Dense(N_LABELS, activation=\"sigmoid\")(x)\n model = keras.Model(inputs=inputs, outputs=outputs) \n return model \n\n\ndef build_classifier():\n ############################################################################################################\n\n ########################################### \" SEED HERE \" ###########################################\n seed_everything(seed=SEED)\n model = BERTModel() \n # Define Loss\n losses = tf.keras.losses.CategoricalCrossentropy( from_logits=True)\n # compile all\n model.compile(tf.keras.optimizers.Adam(), loss=losses, metrics=[\"accuracy\"])\n return model\n\ndef get_model():\n with strategy.scope():\n ############################################################################################################\n\n ########################################### \" SEED HERE \" ###########################################\n seed_everything(seed=SEED)\n model = build_classifier()\n return model",
"_____no_output_____"
],
[
"LABEL = 'Label'\nN_LABELS = 20",
"_____no_output_____"
],
[
"n_splits = 5\nkf = StratifiedKFold(n_splits=n_splits, random_state=47, shuffle=True)\ny_oof = np.zeros([train.shape[0], N_LABELS])\ny_test = np.zeros([test.shape[0], N_LABELS])\n############################################################################################################\n########################################### \" SEED HERE \" ###########################################\nseed_everything(seed=SEED)\ntest_ds = df_to_dataset(test,labels=None,data_type='Test')\ni = 0\nmetrics = list()\ny_train = pd.get_dummies(train['Label'])\nfor tr_idx, val_idx in kf.split(train[['Text']], train['Label']):\n ############################################################################################################\n\n ########################################### \" SEED HERE \" ###########################################\n seed_everything(seed=SEED)\n df_tr = train.iloc[tr_idx, :]\n df_vl = train.iloc[val_idx, :]\n tr_ds = df_to_dataset(df_tr,y_train.iloc[tr_idx, :], data_type='Train')\n vl_ds = df_to_dataset(df_vl, y_train.iloc[val_idx, :],data_type='Val')\n \n model = get_model()\n checkpoint_path = f\"training/training_folds_{i}.ckpt\"\n checkpoint_dir = os.path.dirname(checkpoint_path)\n model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(\n filepath=checkpoint_path,\n save_weights_only=True,\n monitor='val_accuracy',\n mode='max',\n save_best_only=True)\n \n # Train the model \n n_steps = df_tr.shape[0] // BATCH_SIZE\n train_history = model.fit(\n tr_ds,\n steps_per_epoch=n_steps,\n validation_data = vl_ds,\n epochs=15, callbacks=[model_checkpoint_callback]\n )\n model.load_weights( checkpoint_path)\n y_pred = model.predict(vl_ds.map(lambda x,y:x))\n y_oof[val_idx, :] = y_pred\n y_vl = train['Label'].iloc[val_idx] \n metric = accuracy_score(y_vl, np.argmax(y_pred, 1))\n print(\"fold #{} val_loss: {}\".format(i, metric))\n\n\n \n i += 1\n y_test += model.predict(test_ds) / n_splits\n metrics.append(metric)\n\n\nmetrics = np.array(metrics).mean()\nprint(f'Full accuracy {metrics}') # ",
"_____no_output_____"
]
],
[
[
"# **Save Model Weights** : \n\n\n---\n\n* It will take lot of time to train, so i've uploaded the weights into my drive . \n* here is the link for the Model Weights : **[transformer weights fold1](https://drive.google.com/drive/folders/1aeHvLmBgvwe5igIMwgbjLmqopkNpadc_?usp=sharing)**\n* Those Weights will used in the Notebook ***transformers-baseline-ckpt***\n\n\n",
"_____no_output_____"
]
]
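,
[
[
"A minimal sketch of how the saved fold checkpoints could be reloaded for inference, assuming the `training/training_folds_{fold}.ckpt` files written by the training loop above (or the Drive copies linked in the previous cell) are available and that `get_model()`, `n_splits` and `test_ds` are still defined in the session; the companion notebook ***transformers-baseline-ckpt*** may load them differently.",
"_____no_output_____"
],
[
"# Rebuild the architecture for each fold, restore its weights and average the fold predictions.\n# Paths and helper names follow the training loop above; adjust them if the checkpoints live elsewhere.\nimport numpy as np\n\nfold_models = []\nfor fold in range(n_splits):\n    m = get_model()\n    m.load_weights(f'training/training_folds_{fold}.ckpt')\n    fold_models.append(m)\n\n# simple ensemble: mean of the per-fold predicted probabilities\ny_test_loaded = np.mean([m.predict(test_ds) for m in fold_models], axis=0)",
"_____no_output_____"
]
]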
] | [
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e76f8196b0261648c95d8e893298cedf58103267 | 6,679 | ipynb | Jupyter Notebook | newsrec/notebooks/.ipynb_checkpoints/data_preprocess_parse_behaviors-checkpoint.ipynb | limhj159/NewsRecommendation | 5d19566b63b6cf35b5be0c2b175c5050e51f57b8 | [
"MIT"
] | null | null | null | newsrec/notebooks/.ipynb_checkpoints/data_preprocess_parse_behaviors-checkpoint.ipynb | limhj159/NewsRecommendation | 5d19566b63b6cf35b5be0c2b175c5050e51f57b8 | [
"MIT"
] | null | null | null | newsrec/notebooks/.ipynb_checkpoints/data_preprocess_parse_behaviors-checkpoint.ipynb | limhj159/NewsRecommendation | 5d19566b63b6cf35b5be0c2b175c5050e51f57b8 | [
"MIT"
] | null | null | null | 31.21028 | 86 | 0.452912 | [
[
[
"# from newsrec.config import model_name\nimport pandas as pd\nimport swifter\nimport json\nimport math\nfrom tqdm import tqdm\nfrom os import path\nfrom pathlib import Path\nimport random\nfrom nltk.tokenize import word_tokenize\nimport numpy as np\nimport csv\nimport importlib\nfrom transformers import RobertaTokenizer, RobertaModel\nimport torch",
"_____no_output_____"
],
[
"train_dir = '../../data/train'\nsource = path.join(train_dir, 'behaviors.tsv')\ntarget = path.join(train_dir, 'behaviors_parsed.tsv')\nuser2int_path = path.join(train_dir, 'user2int.tsv')",
"_____no_output_____"
],
[
"behaviors = pd.read_table(\n source,\n header=None,\n names=['impression_id', 'user', 'time', 'clicked_news', 'impressions'])\nbehaviors.clicked_news.fillna(' ', inplace=True)\nbehaviors.impressions = behaviors.impressions.str.split()",
"_____no_output_____"
],
[
"behaviors.head()",
"_____no_output_____"
],
[
"user2int = {}\nfor row in behaviors.itertuples(index=False):\n if row.user not in user2int:\n user2int[row.user] = len(user2int) + 1\n\npd.DataFrame(user2int.items(), columns=['user', 'int']).to_csv(user2int_path,\n sep='\\t',\n index=False)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
e76f87d946f012473225453e46be2205ffad3e41 | 250,033 | ipynb | Jupyter Notebook | Notebooks/VenueClassifier.ipynb | smfai200/ContextAware-VenueRecommendation-Using-Machine-Learning- | 4475a28a7b4dd093f540280d1410ac6255698566 | [
"MIT"
] | 1 | 2020-07-02T12:38:09.000Z | 2020-07-02T12:38:09.000Z | Notebooks/VenueClassifier.ipynb | smfai200/ContextAware-VenueRecommendation-Using-Machine-Learning- | 4475a28a7b4dd093f540280d1410ac6255698566 | [
"MIT"
] | null | null | null | Notebooks/VenueClassifier.ipynb | smfai200/ContextAware-VenueRecommendation-Using-Machine-Learning- | 4475a28a7b4dd093f540280d1410ac6255698566 | [
"MIT"
] | null | null | null | 59.716503 | 209 | 0.452452 | [
[
[
"# This Python 3 environment comes with many helpful analytics libraries installed\n# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python\n# For example, here's several helpful packages to load in \n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n# Input data files are available in the \"../input/\" directory.\n# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory\n\nimport os\nprint(os.listdir(\"../input\"))\n\n# Any results you write to the current directory are saved as output.",
"['dataset_TSMC2014_TKY.csv', 'dataset_TSMC2014_NYC.csv']\n"
],
[
"ny_data = pd.read_csv(\"../input/dataset_TSMC2014_NYC.csv\")\nny_data.head()",
"_____no_output_____"
],
[
"ny_data.describe()",
"_____no_output_____"
],
[
"import datetime\ndata = []\nfor utcoffset_index, utcoffset in enumerate(ny_data['utcTimestamp']): \n year = datetime.datetime.strptime(utcoffset, '%a %b %d %X %z %Y').strftime('%Y')\n month = datetime.datetime.strptime(utcoffset, '%a %b %d %X %z %Y').strftime('%m')\n day = datetime.datetime.strptime(utcoffset, '%a %b %d %X %z %Y').strftime('%d')\n weekday = datetime.datetime.strptime(utcoffset, '%a %b %d %X %z %Y').strftime('%a')\n time = datetime.datetime.strptime(utcoffset, '%a %b %d %X %z %Y').strftime('%X')\n data.append([year, month, day, weekday, time])\nny_data_fe = pd.DataFrame(data, columns = ['year', 'month', 'day', 'weekday', 'time'])\n\nfor col in ny_data_fe.columns:\n if col not in ['weekday', 'time']:\n ny_data[col] = pd.to_numeric(ny_data_fe[col])\n else:\n ny_data[col] = ny_data_fe[col]",
"_____no_output_____"
],
[
"ny_data.describe()",
"_____no_output_____"
],
[
"# ny_data.drop('utcTimestamp', 1, inplace=True)\nny_data['year'] = np.where(ny_data['year'] == 2012, 1, 0)\nny_data.head()",
"_____no_output_____"
],
[
"grouped = ny_data.groupby(by='venueCategoryId')\n\nfor _, group in grouped:\n print(group.head())",
" userId venueId ... weekday time\n323 47 4a914773f964a520dd1920e3 ... Tue 21:29:21\n377 381 4e405e39483b04e17abb677f ... Tue 21:51:16\n403 637 4a914773f964a520dd1920e3 ... Tue 21:57:54\n1250 84 4ee73945e30005f8ba684170 ... Wed 10:16:04\n2046 381 4e405e39483b04e17abb677f ... Wed 17:43:23\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n713 824 4e2d913514955dbf7aefc95a ... Wed 00:04:09\n1954 389 4a902d50f964a5205c1620e3 ... Wed 17:00:08\n2128 1027 4b4bc857f964a52085a726e3 ... Wed 18:23:06\n2301 374 4d98ca65b188721ea17d3037 ... Wed 21:31:30\n2365 328 4afb40d6f964a520661c22e3 ... Wed 22:54:35\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n271 557 4ae23a21f964a5200a8c21e3 ... Tue 21:00:07\n725 529 4c2cff8075579521c6555d83 ... Wed 00:09:14\n863 180 4f5df924e4b008b1589a3ef7 ... Wed 01:10:57\n1844 529 4c2cff8075579521c6555d83 ... Wed 16:08:30\n2521 121 4bba8edf3db7b713597e239a ... Wed 23:56:31\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n2279 666 4a9daf8ff964a520ed3820e3 ... Wed 19:44:47\n3624 515 4bae3634f964a520e6923be3 ... Sat 17:24:09\n3820 56 4db9a5285da3b5fa68d9b44d ... Sat 19:03:12\n4294 918 4ae4a546f964a520b19c21e3 ... Sat 22:42:51\n5641 169 49e36433f964a52074621fe3 ... Sun 18:38:44\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n509 149 4a935020f964a5208a1f20e3 ... Tue 22:44:10\n695 170 4b37b853f964a520014525e3 ... Tue 23:58:23\n754 710 4e48357da809fb2fa403137b ... Wed 00:16:01\n1971 283 45312392f964a5206e3b1fe3 ... Wed 17:06:56\n2429 608 4a46542ff964a520afa81fe3 ... Wed 23:22:30\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n3926 669 4b390efcf964a520485525e3 ... Sat 19:48:38\n4569 119 4c7aa76e97028cfa36d0defe ... Sun 00:59:47\n4571 119 4c7aa76e97028cfa36d0defe ... Sun 00:59:53\n7133 612 4a9ff907f964a520cd3d20e3 ... Mon 19:50:24\n10269 449 4e1858fec65b6bfb590d5f00 ... Wed 19:41:55\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n2 69 4c5cc7b485a1e21e00d35711 ... Tue 18:02:24\n8 428 4ce1863bc4f6a35d8bd2db6c ... Tue 18:06:18\n36 525 4f5684de771657f331e5ca01 ... Tue 18:19:07\n37 525 4f5684de771657f331e5ca01 ... Tue 18:19:07\n56 768 4e79392ce4cdb158f1cf4b75 ... Tue 18:34:00\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n47 1047 4ca66eec5a1e952129d98ace ... Tue 18:24:58\n250 318 4bd5c40b5631c9b61ea4a430 ... Tue 20:51:27\n280 318 4af6e841f964a5200a0422e3 ... Tue 21:06:31\n321 653 4b71aa5ff964a520a8542de3 ... Tue 21:28:54\n326 574 4ad2615bf964a52045e120e3 ... Tue 21:30:44\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n3 395 4bc7086715a7ef3bef9878da ... Tue 18:02:41\n12 691 4cb50d599c7ba35de0ef8706 ... Tue 18:09:06\n1060 44 4ba257e3f964a52038ef37e3 ... Wed 03:53:26\n1418 826 4f551b37e4b0bf6b602135a8 ... Wed 12:19:34\n1584 844 4c6d5992e13db60c0b72d8b1 ... Wed 13:20:01\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n2332 839 4a74973cf964a520c1de1fe3 ... Wed 22:45:45\n2350 169 4ace5782f964a52000d020e3 ... Wed 22:50:50\n2946 122 4c614b3e3986e21ec42b964f ... Fri 14:16:14\n3855 781 4c98aeeed799a1cddc24b452 ... Sat 19:16:52\n5233 1055 4c1137ab8559ef3b5db06a53 ... Sun 14:35:21\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n684 599 4a60a6fef964a520f6c01fe3 ... Tue 23:54:51\n10599 599 4a60a6fef964a520f6c01fe3 ... Wed 23:21:02\n17683 348 4a60a6fef964a520f6c01fe3 ... Sun 14:24:57\n33257 599 4a60a6fef964a520f6c01fe3 ... Mon 23:04:49\n36885 599 4a60a6fef964a520f6c01fe3 ... Wed 23:13:58\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n3702 905 4a61e652f964a520a2c21fe3 ... 
Sat 18:01:58\n4405 768 4c0829c2a1b32d7f169795f0 ... Sat 23:35:07\n5511 864 4c474126417b20a12501dda9 ... Sun 17:46:06\n7171 864 4c474126417b20a12501dda9 ... Mon 20:15:20\n12179 864 4c474126417b20a12501dda9 ... Thu 20:47:01\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n493 389 4b76ed15f964a520366b2ee3 ... Tue 22:36:44\n714 797 4cfb90ee2d80a143302e4cd8 ... Wed 00:04:13\n1657 298 4ba93c43f964a520e5163ae3 ... Wed 13:50:19\n3375 927 4ba93c43f964a520e5163ae3 ... Sat 14:41:24\n3708 346 4bb73c3f941ad13a6d6520e3 ... Sat 18:05:20\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n438 806 4c5b03f904f9be9a0a38f360 ... Tue 22:14:31\n2313 338 4b571c16f964a520cb2628e3 ... Wed 21:37:19\n3552 78 4ef3bdafe5fa4c3505092a55 ... Sat 16:52:01\n3580 806 4c5b03f904f9be9a0a38f360 ... Sat 17:04:53\n8693 594 4eb04e7bf9f463d3c3cf3d07 ... Tue 22:04:45\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n42299 388 49e0bd41f964a52065611fe3 ... Sun 02:23:28\n61482 511 4a01c477f964a520f9701fe3 ... Fri 23:29:41\n65203 553 4b80497ef964a520726430e3 ... Sun 19:47:17\n67945 162 4ee17c11490152e4bc508413 ... Tue 00:18:58\n69830 1015 49cd02b2f964a520a9591fe3 ... Tue 22:52:03\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n260 318 4bf6c8bfabdaef3b6b5aa184 ... Tue 20:56:27\n432 805 4a50f871f964a52059b01fe3 ... Tue 22:12:28\n2198 505 4bf6c8bfabdaef3b6b5aa184 ... Wed 18:59:36\n5787 987 4ac52653f964a520a8b020e3 ... Sun 20:07:42\n6708 87 4f27112ae4b0d10db310bf14 ... Mon 16:24:45\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n203 615 41575800f964a5202f1d1fe3 ... Tue 20:16:59\n7335 578 49eeaf08f964a52078681fe3 ... Mon 21:57:22\n7721 828 4b1c4e40f964a520cb0524e3 ... Tue 01:04:43\n8293 107 4afd96d6f964a520da2822e3 ... Tue 16:11:45\n16059 87 49eeaf08f964a52078681fe3 ... Sat 18:29:21\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n2984 836 4cc080619ca8548179d7bc16 ... Fri 14:37:47\n3367 836 4cc080619ca8548179d7bc16 ... Sat 14:37:31\n3681 1066 4b520ad4f964a5207a6327e3 ... Sat 17:51:23\n4035 889 4e0e6b6fe4cd27fc7d2fea43 ... Sat 20:34:14\n4206 570 4b3a8a75f964a520956925e3 ... Sat 21:51:47\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n2516 518 3fd66200f964a520b8ea1ee3 ... Wed 23:54:21\n2902 372 4b031d37f964a520bf4c22e3 ... Fri 13:53:06\n4441 610 4b142286f964a520cb9d23e3 ... Sat 23:51:37\n14608 760 4ebc475c754a8144ab8b62f2 ... Fri 23:21:34\n14642 402 3fd66200f964a520b8ea1ee3 ... Fri 23:33:43\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n7705 184 4aeb53f3f964a520fcc021e3 ... Tue 00:51:54\n10403 183 4e5bda1bbd41d105e00af76c ... Wed 22:03:35\n13342 304 4b6f0ccff964a520a0d92ce3 ... Fri 12:20:59\n17819 304 4b6f0ccff964a520a0d92ce3 ... Sun 15:52:35\n20085 304 4b6f0ccff964a520a0d92ce3 ... Mon 21:44:34\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n39151 662 4a6731c3f964a520fdc81fe3 ... Fri 18:12:35\n51626 326 4c0ad4b37e3fc9280a5af482 ... Sun 00:00:02\n61476 148 4a997b7ef964a520922e20e3 ... Fri 23:27:42\n94020 424 4fbc1488e4b04e551e0c3a57 ... Sun 22:39:57\n104895 438 4a7e02faf964a5209ef01fe3 ... Mon 01:59:49\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n9 877 4be319b321d5a59352311811 ... Tue 18:06:19\n40 1047 4ad607d3f964a520a10421e3 ... Tue 18:19:28\n58 877 4bcd8da3511f95219fbbb4c7 ... Tue 18:34:55\n462 593 4bc4abdf4cdfc9b6f5f69821 ... Tue 22:23:32\n1065 837 4c041246187ec928218bb67b ... Wed 04:00:00\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n2486 868 3fd66200f964a5202ee61ee3 ... 
Wed 23:38:42\n4154 337 4a443994f964a52059a71fe3 ... Sat 21:21:43\n5007 363 49bab2f2f964a52099531fe3 ... Sun 07:46:31\n7867 943 49ff7bcef964a5202a701fe3 ... Tue 03:01:37\n9317 384 4b4f6edcf964a5206d0627e3 ... Wed 03:02:18\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n272 318 4a6f3c21f964a520b0d51fe3 ... Tue 21:00:19\n3524 105 4d2750a0849f3704e6dd6741 ... Sat 16:39:12\n3672 667 4c2f8c68213c2d7f2010315d ... Sat 17:47:44\n3920 469 4b637936f964a5209d7c2ae3 ... Sat 19:45:16\n4556 378 4c2f8c68213c2d7f2010315d ... Sun 00:51:53\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n229 313 4ce84192d99f721e0067af73 ... Tue 20:37:25\n444 116 49dcb7e4f964a5209c5f1fe3 ... Tue 22:16:54\n906 158 3fd66200f964a52006e61ee3 ... Wed 01:33:03\n1156 629 3fd66200f964a5205de41ee3 ... Wed 05:30:37\n1359 238 4ebbe3538b81c444e94e0f73 ... Wed 11:54:24\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n384 669 4b940a3df964a520686334e3 ... Tue 21:53:44\n1764 386 4c3f7c58d691c9b65fcd890a ... Wed 14:56:45\n1781 495 4c83b0dad4e2370498cb7288 ... Wed 15:10:18\n1934 102 4b8da68df964a520700633e3 ... Wed 16:52:16\n1997 503 4f1dce43e4b0e2eeedc8e9b2 ... Wed 17:17:14\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n715 84 4d5b3dd522ad2d430a63e695 ... Wed 00:04:48\n2281 84 4d5b3dd522ad2d430a63e695 ... Wed 19:45:14\n2475 1014 4d5b3dd522ad2d430a63e695 ... Wed 23:36:10\n2489 933 45633154f964a520a53d1fe3 ... Wed 23:41:02\n3197 424 4ebf131261af06192af35189 ... Sat 00:22:52\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n1391 983 3fd66200f964a52011e91ee3 ... Wed 12:08:55\n1437 983 4cc8a5dfe7926dcbc4575777 ... Wed 12:25:37\n1486 983 4c7814c697028cfa8f93d6fe ... Wed 12:38:59\n1957 238 4d7a7efd25c5cbff42275533 ... Wed 17:01:12\n11455 1034 4b105fc1f964a520f76e23e3 ... Thu 12:21:40\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n2373 925 4a7caf50f964a5203bed1fe3 ... Wed 22:58:18\n4921 322 4a73754af964a52082dc1fe3 ... Sun 05:27:10\n8342 218 4a8f5049f964a520df1420e3 ... Tue 16:42:20\n8854 1066 4b081efdf964a5203f0423e3 ... Tue 22:59:56\n10804 704 4a635aeef964a520d8c41fe3 ... Thu 00:40:52\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n14123 852 49b6e8d2f964a52016531fe3 ... Fri 20:14:23\n16804 880 4d69fb79a9702c0f1cbe27be ... Sat 23:27:21\n25309 582 4e734483d16472c03712aa4e ... Thu 22:10:40\n39083 972 49b6e8d2f964a52016531fe3 ... Fri 17:44:27\n40936 462 49b6e8d2f964a52016531fe3 ... Sat 15:31:55\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n716 623 3fd66200f964a52052eb1ee3 ... Wed 00:05:52\n895 524 4c472be3972c0f4711ec2221 ... Wed 01:26:33\n2037 276 4dfba587814d9c07902a5b13 ... Wed 17:34:29\n2410 710 4bbf743cba9776b09428ffc8 ... Wed 23:16:57\n2569 623 3fd66200f964a52052eb1ee3 ... Thu 00:12:18\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n102 706 4bb50ba546d4a593de78c4c0 ... Tue 19:12:08\n122 816 4bd383cba8b3a5935b386a5f ... Tue 19:24:52\n178 445 4c34e81009a99c7406490c2a ... Tue 20:03:59\n306 1010 4ba4d8aaf964a520baba38e3 ... Tue 21:20:56\n398 246 4f280555e4b04e256a110c94 ... Tue 21:56:30\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n126 950 4bddec8a0ee3a59391312fb0 ... Tue 19:26:31\n427 940 4a27289cf964a52062911fe3 ... Tue 22:08:48\n592 367 49d7fd41f964a5208c5d1fe3 ... Tue 23:18:22\n621 370 3fd66200f964a52088e41ee3 ... Tue 23:31:29\n859 725 4a707c6af964a52096d71fe3 ... Wed 01:07:52\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n76 639 4c671f759cb82d7fd1d192d2 ... Tue 18:48:57\n137 69 4bddaa53e75c0f47d219c503 ... 
Tue 19:34:56\n215 246 4ca0e01c27d3bfb78dfc3a67 ... Tue 20:25:40\n320 758 4dce988fb0fb25f6e3471de5 ... Tue 21:28:09\n380 510 4bdaf62863c5c9b61ea02568 ... Tue 21:52:30\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n674 470 4c9d89017ada199cf0a493bc ... Tue 23:51:33\n700 470 4c9d89017ada199cf0a493bc ... Wed 00:00:22\n701 971 4bd77ac235aad13a2b0b8ff3 ... Wed 00:01:05\n755 372 4a96e0a7f964a520592720e3 ... Wed 00:16:09\n2209 854 4b79eec8f964a520161b2fe3 ... Wed 19:05:42\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n44 839 4cd03669c5b39eb043fb5a16 ... Tue 18:24:02\n3070 173 4f7f0e6ce4b08315168ea1de ... Fri 15:40:30\n3205 304 4b6ef3c9f964a520dfd22ce3 ... Sat 12:32:48\n4263 768 4f80be5be4b0bf6cc9c9d843 ... Sat 22:23:33\n7226 304 4b6ef3c9f964a520dfd22ce3 ... Mon 20:56:43\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n149 272 4f318c84e5e8657f88d830ac ... Tue 19:42:24\n207 445 4bae5fb6f964a520b4a93be3 ... Tue 20:21:06\n364 987 4a578bb6f964a5205bb61fe3 ... Tue 21:44:29\n507 120 4a83505ef964a520bffa1fe3 ... Tue 22:42:27\n1690 272 4f318c84e5e8657f88d830ac ... Wed 14:09:41\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n217 246 4a34038ef964a520929b1fe3 ... Tue 20:26:32\n3562 169 4a21834ff964a520077d1fe3 ... Sat 16:57:59\n4122 26 4b25439df964a520ee6e24e3 ... Sat 21:09:30\n4129 894 4c707f04b3ce224bd33e74c6 ... Sat 21:12:23\n4197 133 4b254300f964a520e26e24e3 ... Sat 21:46:42\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n2534 514 49ebc888f964a5202d671fe3 ... Wed 23:59:38\n3433 516 4ee3f0432c5b5fc6cbee9263 ... Sat 15:34:13\n3853 133 4a909d71f964a520c51820e3 ... Sat 19:16:22\n4486 349 4b912a24f964a52089a733e3 ... Sun 00:15:31\n4512 255 40bbc700f964a520a2001fe3 ... Sun 00:29:43\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n253 553 4bf993688d30d13a6c770218 ... Tue 20:52:56\n843 673 4bed859b3372c9280ac11114 ... Wed 00:57:41\n1089 281 4b9efd61f964a520b00e37e3 ... Wed 04:21:26\n1191 458 4c37a2be2c8020a1d6658900 ... Wed 06:41:57\n1694 225 4bddfc06ffdec928ce3de7a1 ... Wed 14:13:09\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n240 425 4b77007bf964a5204b732ee3 ... Tue 20:45:56\n465 324 40d38200f964a52041011fe3 ... Tue 22:24:41\n502 337 4c08fbf7340720a1b8ed8393 ... Tue 22:39:56\n567 362 4adf7446f964a520b57a21e3 ... Tue 23:09:26\n1425 453 40d38200f964a52041011fe3 ... Wed 12:22:18\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n536 434 4aa7ef5df964a520144e20e3 ... Tue 22:54:49\n1860 521 4b4cb455f964a520fbba26e3 ... Wed 16:15:51\n2376 173 4bd1fad4a8b3a5936fda665f ... Wed 22:59:26\n2530 465 4a044315f964a520ea711fe3 ... Wed 23:58:39\n4365 58 4d6702c89792b1f7c9ee381f ... Sat 23:15:46\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n245 100 4b8359e0f964a520830331e3 ... Tue 20:49:02\n625 387 4ad604d0f964a520840421e3 ... Tue 23:33:35\n3543 242 4bbd163b078095215431da91 ... Sat 16:46:47\n7069 264 4c2a6760b34ad13a6936e8ce ... Mon 19:26:05\n8880 10 4dfbcf55b61ce5af0e8d43d4 ... Tue 23:11:15\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n208 516 4f297e04a17c6fd5208ea108 ... Tue 20:21:12\n406 129 49c65aa8f964a52038571fe3 ... Tue 21:58:55\n569 418 3fd66200f964a520a7e31ee3 ... Tue 23:10:17\n573 17 4bdb9e292a3a0f472d20b0b6 ... Tue 23:11:54\n577 500 47f39422f964a5209b4e1fe3 ... Tue 23:13:38\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n2118 52 4b71ce45f964a5207f5d2de3 ... Wed 18:18:30\n27438 884 4d39e1d081258cfa22fd9f5f ... Fri 20:13:16\n36406 464 42dee580f964a5205c261fe3 ... 
Wed 18:54:11\n39665 1024 4b37a784f964a520b54325e3 ... Fri 22:33:17\n48921 879 4c9ec49b6c4795213c72820c ... Fri 20:26:22\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n927 360 4da60c5d432dd03d359b8729 ... Wed 01:49:54\n2309 937 4bcb54340687ef3b057cddcc ... Wed 21:34:54\n3285 912 4f6dc61a754a51dc646f6f53 ... Sat 13:37:01\n3561 297 4f6dc61a754a51dc646f6f53 ... Sat 16:57:45\n3567 110 4a452221f964a520d8a71fe3 ... Sat 17:01:27\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n360 987 4b4668f1f964a520042026e3 ... Tue 21:42:47\n474 371 4e275a6d1f6e88a154577bf0 ... Tue 22:29:31\n499 296 3fd66200f964a520f3e71ee3 ... Tue 22:38:52\n1798 987 4b4668f1f964a520042026e3 ... Wed 15:28:42\n3343 121 4ec1a15d5c5c3d470d8affa8 ... Sat 14:15:50\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n344 987 40df5f80f964a52098011fe3 ... Tue 21:36:14\n399 246 49f35e1ef964a520946a1fe3 ... Tue 21:56:56\n877 1079 40df5f80f964a52098011fe3 ... Wed 01:16:59\n946 565 49f7bc04f964a520d56c1fe3 ... Wed 02:06:00\n1092 868 3fd66200f964a5201be41ee3 ... Wed 04:23:42\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n5 484 4b5b981bf964a520900929e3 ... Tue 18:04:00\n16 445 4b9830a5f964a520c73235e3 ... Tue 18:10:39\n28 689 4c5ef77bfff99c74eda954d3 ... Tue 18:15:05\n59 280 4bdb4bf54b1f952169ac670b ... Tue 18:35:31\n90 628 4af12cb7f964a520d3e021e3 ... Tue 19:01:40\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n1073 318 4d69f050de28224b66164abe ... Wed 04:08:37\n1111 230 4d69f050de28224b66164abe ... Wed 04:38:21\n2546 10 4d69f050de28224b66164abe ... Thu 00:02:14\n4845 389 3fd66200f964a52067e41ee3 ... Sun 04:12:36\n4939 881 4e34c95b7d8b0c62b2ce3b2e ... Sun 05:51:31\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n316 645 4b48c4a8f964a520195626e3 ... Tue 21:24:48\n1145 983 4c87e78a6f14a093feedab10 ... Wed 05:15:15\n1157 983 4d4cad0ea9086ea8d5537485 ... Wed 05:33:28\n1165 983 4ae21adef964a520c48a21e3 ... Wed 05:57:27\n2059 503 4d8e21d81d06b1f7863b413b ... Wed 17:52:27\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n108 912 4ee6e1970e61681b98844283 ... Tue 19:15:06\n851 185 4d3260b2511760fc57f4e2bd ... Wed 01:01:25\n1244 84 4eaa32b2e3000d1f055b6935 ... Wed 10:13:52\n1879 185 4d3260b2511760fc57f4e2bd ... Wed 16:24:35\n2177 912 4ee6e1970e61681b98844283 ... Wed 18:49:12\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n1695 802 4b6ef312f964a520b3d22ce3 ... Wed 14:13:34\n3350 802 4b6ef312f964a520b3d22ce3 ... Sat 14:19:07\n3496 1080 4a96f983f964a520c82720e3 ... Sat 16:18:32\n3512 1066 4bb630b5941ad13aea061fe3 ... Sat 16:29:32\n5218 802 4b6ef312f964a520b3d22ce3 ... Sun 14:23:24\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n339 84 4a859ca3f964a520a5fe1fe3 ... Tue 21:35:09\n375 586 41366280f964a520ce1a1fe3 ... Tue 21:50:29\n631 418 49ccb947f964a5208c591fe3 ... Tue 23:37:52\n740 427 3fd66200f964a52051eb1ee3 ... Wed 00:13:23\n850 468 4b538ce9f964a520c5a127e3 ... Wed 01:01:20\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n3615 454 4b801cfef964a520615330e3 ... Sat 17:20:50\n9015 707 4c55b8714623be9a7abef2f4 ... Wed 00:00:55\n14298 690 4bfdaf0bf61dc9b6b0699fde ... Fri 21:38:19\n15910 84 4e1e446ed4c0fc6e3438156d ... Sat 17:30:23\n16097 533 4ba66ffef964a5205d5239e3 ... Sat 18:42:32\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n5966 644 407f2200f964a5207df21ee3 ... Sun 21:51:52\n44414 424 407f2200f964a5207df21ee3 ... Tue 22:11:49\n49190 222 460d4bdcf964a52007451fe3 ... Fri 22:31:48\n61759 55 3fd66200f964a5203de41ee3 ... 
Sat 01:21:01\n68193 999 4f9f126c7b0c9c997abc4fff ... Tue 03:31:53\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n233 798 3fd66200f964a520e1f01ee3 ... Tue 20:39:38\n550 279 3fd66200f964a52001e81ee3 ... Tue 23:02:22\n553 615 49a0a00ff964a52091521fe3 ... Tue 23:03:11\n702 69 4c3935bb93db0f47fa262392 ... Wed 00:01:07\n842 887 4cc59facb2beb1f7d1c6234c ... Wed 00:57:20\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n9318 384 4e4e8ef61fc7e04d29e227f3 ... Wed 03:02:42\n13448 474 4f1022a5e4b047204346ad31 ... Fri 12:59:36\n13506 495 4f1022a5e4b047204346ad31 ... Fri 13:18:44\n13560 19 4f1022a5e4b047204346ad31 ... Fri 13:40:31\n13743 431 4f1022a5e4b047204346ad31 ... Fri 15:46:28\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n415 973 4c6985928d22c9284776b745 ... Tue 22:04:54\n633 348 4cbcafab035d236aebebe64e ... Tue 23:39:12\n978 1059 4cf31e3d88de370466657d2b ... Wed 02:21:54\n1098 839 4bc741afaf07a59399cb7e2d ... Wed 04:27:46\n2496 1040 49d95652f964a520235e1fe3 ... Wed 23:44:07\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n2215 52 4e028dacb0fb88a1209ac864 ... Wed 19:08:51\n2864 372 3fd66200f964a520b6e41ee3 ... Fri 13:34:21\n4358 906 4e028dacb0fb88a1209ac864 ... Sat 23:12:22\n10105 568 4b0c4c99f964a520c63a23e3 ... Wed 17:57:21\n21261 888 4e028dacb0fb88a1209ac864 ... Tue 19:01:15\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n613 645 4bb6aca66edc76b07b5d311c ... Tue 23:28:52\n1155 398 4dd88b69c65bee535acf48ba ... Wed 05:26:05\n2197 398 4dd88b69c65bee535acf48ba ... Wed 18:59:12\n2849 540 4dd27f5c1f6ef5f6ccd5df9c ... Fri 13:28:38\n4543 689 43920486f964a520662b1fe3 ... Sun 00:45:06\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n680 882 4a85a71bf964a520c7fe1fe3 ... Tue 23:54:00\n682 882 4aebbcfcf964a520a4c421e3 ... Tue 23:54:12\n683 882 4b28f330f964a5207c9624e3 ... Tue 23:54:31\n865 882 4b65202bf964a5205ce52ae3 ... Wed 01:11:47\n868 882 4c831b9adc018cfa9a53d66c ... Wed 01:12:19\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n734 768 4ca38cf55720b1f71fc632ef ... Wed 00:11:12\n944 139 4bfd7c4bb68d0f4785dbe857 ... Wed 02:05:42\n1517 203 4a6dff78f964a5208dd31fe3 ... Wed 12:53:22\n1686 310 4bfd7c4bb68d0f4785dbe857 ... Wed 14:08:00\n2258 440 4a6dff78f964a5208dd31fe3 ... Wed 19:34:21\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n706 463 4e3cbae97d8b0e961060f423 ... Wed 00:03:04\n791 361 40a16900f964a520f9f21ee3 ... Wed 00:34:53\n1050 623 4f73bc94e4b0e33c0d2401bc ... Wed 03:46:07\n1135 248 4da7cf1981541df437af6cf7 ... Wed 05:09:52\n1329 953 40a16900f964a520f9f21ee3 ... Wed 11:41:35\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n357 354 4abbdc65f964a5203f8520e3 ... Tue 21:41:59\n450 337 4bddc548e75c0f472b89c503 ... Tue 22:18:46\n1730 58 4ade0171f964a520476721e3 ... Wed 14:43:15\n2117 293 4b5764b8f964a5207a3528e3 ... Wed 18:18:19\n2951 280 4a49370af964a52019ab1fe3 ... Fri 14:20:17\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n4417 598 4ccb3742ee23a143ddcb15a8 ... Sat 23:40:43\n4463 1040 4ccb3742ee23a143ddcb15a8 ... Sun 00:05:05\n5975 858 4effdf5a5c5c51dd2e764728 ... Sun 21:59:10\n9487 534 46fe3f5ef964a520184b1fe3 ... Wed 07:20:34\n10698 79 46fe3f5ef964a520184b1fe3 ... Wed 23:57:43\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n11 625 4ab5320cf964a5202b7320e3 ... Tue 18:08:57\n49 612 447bf8f1f964a520ec331fe3 ... Tue 18:27:43\n190 734 4b983023f964a520a13235e3 ... Tue 20:07:51\n261 620 4b6f2ac9f964a52072e12ce3 ... Tue 20:56:38\n581 337 4b4b3de3f964a520519526e3 ... 
[Output truncated: repeated per-group DataFrame previews (columns: userId, venueId, …, weekday, time; "[5 rows x 13 columns]" each)]
Tue 21:14:08\n1454 145 49c27575f964a520f1551fe3 ... Wed 12:29:20\n1876 465 49c27575f964a520f1551fe3 ... Wed 16:22:27\n3175 159 41390580f964a520dc1a1fe3 ... Sat 00:12:25\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n4978 990 4bc3758ddce4eee13f9d719d ... Sun 06:42:05\n44641 757 4b4a0711f964a520547826e3 ... Tue 23:47:29\n61991 990 4bc3758ddce4eee13f9d719d ... Sat 03:49:43\n62236 990 4bc3758ddce4eee13f9d719d ... Sat 10:26:51\n64054 990 4bc3758ddce4eee13f9d719d ... Sun 03:36:15\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n3692 491 3fd66200f964a52038e41ee3 ... Sat 17:55:32\n15525 85 4c8b68c73dc2a1cd77d5b532 ... Sat 13:16:44\n15860 745 3fd66200f964a52038e41ee3 ... Sat 17:04:32\n18319 190 3fd66200f964a52038e41ee3 ... Sun 19:55:27\n22057 93 3fd66200f964a52038e41ee3 ... Wed 01:27:25\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n75 916 4b7c9fe1f964a520559e2fe3 ... Tue 18:47:23\n85 916 4b8d2387f964a520dbe932e3 ... Tue 19:00:04\n534 188 4bbe675182a2ef3bf7542bd2 ... Tue 22:54:24\n560 188 4b86926ef964a520209031e3 ... Tue 23:06:27\n917 287 4b8d2387f964a520dbe932e3 ... Wed 01:44:02\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n97 118 4c72dbbf13228cfa0c362c65 ... Tue 19:08:27\n322 611 4c77089c59a3236aa9aec018 ... Tue 21:29:09\n576 311 4d8777eb9324236abfb7f60e ... Tue 23:13:38\n782 890 4b32d72ff964a520dd1425e3 ... Wed 00:32:51\n933 1014 4b54eebdf964a520d9d327e3 ... Wed 01:57:07\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n32 445 4b3a8e5df964a520d86925e3 ... Tue 18:16:10\n39 318 4b4fafebf964a520ee1027e3 ... Tue 18:19:25\n60 318 4b093cd0f964a520db1423e3 ... Tue 18:36:56\n95 445 4e9bb3850cd6de03c41c1963 ... Tue 19:05:22\n172 562 4b8fbf80f964a5202a6033e3 ... Tue 20:00:37\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n64 366 4528a216f964a520173b1fe3 ... Tue 18:39:36\n199 267 4b344fb5f964a520eb2625e3 ... Tue 20:11:54\n259 768 4a991c59f964a520662d20e3 ... Tue 20:56:14\n296 420 4a07ac6af964a5208f731fe3 ... Tue 21:14:12\n429 191 4528a216f964a520173b1fe3 ... Tue 22:11:16\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n53 718 4d98393cb7bf2c0f9ff5ea2d ... Tue 18:33:26\n161 718 4d983945b7bf2c0febf5ea2d ... Tue 19:51:02\n222 768 4c1c47d601379521e50a47f3 ... Tue 20:29:37\n312 217 4e4df1ec81308c328c677507 ... Tue 21:22:44\n325 455 4e0d074622711665f60746d2 ... Tue 21:30:24\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n74 337 4b8c4735f964a5201dc832e3 ... Tue 18:47:11\n409 300 4bf1a9eed39ad13a7382aa0e ... Tue 22:00:42\n4015 65 4c4e05751b8e1b8df8808426 ... Sat 20:24:12\n5791 40 4b5b6323f964a520eaf928e3 ... Sun 20:10:18\n6719 70 4b37d391f964a520ec4625e3 ... Mon 16:31:29\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n1246 84 4af1fd6ff964a520fbe421e3 ... Wed 10:14:29\n1247 84 4ed7c726f5b915cfe2a874f5 ... Wed 10:14:54\n1749 84 4ed7c726f5b915cfe2a874f5 ... Wed 14:51:26\n2010 375 4283ee00f964a5209d221fe3 ... Wed 17:23:32\n3999 539 4283ee00f964a5209d221fe3 ... Sat 20:17:59\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n147 225 4baa4741f964a520705a3ae3 ... Tue 19:40:13\n150 1063 4bc33420920eb713d3121d2c ... Tue 19:42:40\n478 1044 4b82fa73f964a52090f030e3 ... Tue 22:30:32\n489 509 4394c386f964a520832b1fe3 ... Tue 22:35:54\n726 371 4c0ab61d340720a1eff28693 ... Wed 00:09:24\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n41707 827 4c6707209cb82d7fd17792d2 ... Sat 21:56:41\n102678 729 4fd3a5a7e4b06a2d8d387b40 ... Sat 19:36:34\n174144 246 4cbde136f50e224b302b07fc ... 
Sat 22:00:00\n180088 57 4df957aad164bbe54613d236 ... Wed 00:10:20\n188577 779 4df957aad164bbe54613d236 ... Tue 19:37:11\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n257 768 4f49abc9e4b07165fa77eb39 ... Tue 20:55:33\n611 957 4e989938f79022d7ec22f1f9 ... Tue 23:28:05\n1057 584 4bbe2867eeca9521b8174df1 ... Wed 03:50:40\n1188 768 4ea237b0f7909c68de07935c ... Wed 06:36:48\n1436 391 4e3009bf1838f1c552cb0d4b ... Wed 12:25:36\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n14041 838 4c1fcb8cd38ec9b6d5334d83 ... Fri 19:21:41\n34103 202 4e6393f318a8ce02fd560d0a ... Tue 12:32:50\n39455 93 4c1fcb8cd38ec9b6d5334d83 ... Fri 20:59:39\n88678 838 4c1fcb8cd38ec9b6d5334d83 ... Tue 18:25:16\n148268 838 4c1fcb8cd38ec9b6d5334d83 ... Mon 18:08:39\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n802 783 4e3c9300c65b4ec275d7808f ... Wed 00:39:13\n28509 922 4dade650ffcb58f1a9079485 ... Sat 11:32:52\n28722 3 4bff2f9b369476b0ba8a8d1f ... Sat 14:28:31\n29433 922 4dade650ffcb58f1a9079485 ... Sat 20:33:37\n39638 462 4dbd755043a1d8504ba572ac ... Fri 22:20:50\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n144 436 4bc7afbc14d7952160ee67e9 ... Tue 19:38:16\n317 258 4c4b6a2ff7cc1b8d5565d83f ... Tue 21:26:35\n2152 436 4bc7afbc14d7952160ee67e9 ... Wed 18:41:19\n3390 436 4bc7afbc14d7952160ee67e9 ... Sat 14:59:26\n6015 183 4d0041711fcef04de4cdc9b9 ... Sun 22:24:05\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n419 597 4ed43ac630f803fae6f8b92c ... Tue 22:06:16\n823 669 4b342e93f964a520dd2525e3 ... Wed 00:48:48\n866 530 44522a64f964a520a8321fe3 ... Wed 01:11:49\n1082 706 4c59a8bfb05c1b8d1f50d6b1 ... Wed 04:15:27\n1944 912 4c8ac04852a98cfa84992fe9 ... Wed 16:55:39\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n689 158 4e5d39f1483b185859713bb8 ... Tue 23:56:17\n14532 772 4e1f0bafaeb7fe88ace23f8d ... Fri 22:54:31\n17354 759 4deb1ad118386283a3f7a316 ... Sun 04:23:19\n33511 238 4e93c3c393addf55cb4a9396 ... Tue 01:31:19\n34782 966 4db83fdcfa8c377d83ba8dc5 ... Tue 18:33:31\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n1385 645 4e4d2eb262e1e5f467205997 ... Wed 12:07:32\n1551 448 4d21fb8429292d436d6f5274 ... Wed 13:05:19\n16990 352 4e6501961f6ef7d07bcbfdbe ... Sun 00:38:35\n19954 264 4d21fb8429292d436d6f5274 ... Mon 20:38:38\n22546 176 4e6501961f6ef7d07bcbfdbe ... Wed 14:39:39\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n846 265 4e0d21dbae603a50b5366dbd ... Wed 00:59:44\n1996 630 4dea6c2752b11677f03b0c0a ... Wed 17:16:52\n4432 750 4dace74a8154e1a040e27c76 ... Sat 23:46:23\n9263 827 4e22342252b1f82ffba73c1c ... Wed 02:20:21\n9957 307 4f3210e219833175d60d1f87 ... Wed 16:40:07\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n1778 452 4e4beacc14958c3abfadaba4 ... Wed 15:04:49\n2724 443 4e6934d2b993038735685814 ... Thu 11:50:30\n3260 829 4e2f4ee3aeb7e1b8afa44e02 ... Sat 13:19:26\n3307 662 4e351de014952ff6cbc4bb7d ... Sat 13:57:28\n3600 121 4ddbb875d22d22fb7c83911e ... Sat 17:12:13\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n45312 84 4e1baf46e4cd1f7c5d79fa6d ... Wed 11:50:39\n50032 272 4dd31e3bc65b47384cf1d24c ... Sat 05:27:51\n50595 84 4e1baf46e4cd1f7c5d79fa6d ... Sat 15:55:27\n56361 837 4dd31e3bc65b47384cf1d24c ... Wed 00:22:39\n62662 84 4e1baf46e4cd1f7c5d79fa6d ... Sat 16:46:04\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n989 620 4e7e3f818b814bc8801990bd ... Wed 02:28:02\n2421 620 4e7e3f818b814bc8801990bd ... Wed 23:20:19\n3890 319 4e8f9bfee5fa2c09eb0383e2 ... 
Sat 19:31:08\n3960 1003 4ddad40bd22d4dbc8c0d4f91 ... Sat 20:02:57\n4085 319 4e8f9bfee5fa2c09eb0383e2 ... Sat 20:53:32\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n4467 704 4e40596862e19d610993ae37 ... Sun 00:06:17\n19096 1001 4c22489f7e85c92875c5bb21 ... Mon 13:24:52\n36876 23 4df806a618385456c7f3a828 ... Wed 23:11:39\n41280 941 4de08356d4c040523ea769f4 ... Sat 18:31:46\n42469 662 4e8f863c5c5c4562efa0d0fc ... Sun 03:59:11\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n5222 798 4e4e2716c65bb313ba734730 ... Sun 14:27:13\n13787 173 4e863b32f790e3cd5056eff1 ... Fri 16:15:48\n14618 1047 4e863b32f790e3cd5056eff1 ... Fri 23:25:50\n17854 844 4e863b32f790e3cd5056eff1 ... Sun 16:07:30\n25069 716 4df7d09d18a801cd9f110b89 ... Thu 19:37:30\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n15464 693 4bc0ba9074a9a5939488d0f6 ... Sat 12:22:19\n17735 307 4c6bf8baa437224b3cff28b1 ... Sun 14:57:21\n28726 608 4a7c9b27f964a520e9ec1fe3 ... Sat 14:30:09\n36939 920 4bbba4d6e4529521e7f654a4 ... Wed 23:40:30\n39846 920 4bbba4d6e4529521e7f654a4 ... Fri 23:54:50\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n4487 492 4c2f883716adc9282eb9bd9c ... Sun 00:15:33\n19840 354 4e04b7d5b0fb3e6ecac53597 ... Mon 19:41:07\n24987 354 4e04b7d5b0fb3e6ecac53597 ... Thu 18:48:14\n27547 574 49e3cd2df964a520d4621fe3 ... Fri 21:06:47\n27640 519 4c06838b517d0f470a16f615 ... Fri 21:36:08\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n98 1054 4e5590b5814d6ce5cc849483 ... Tue 19:08:42\n171 769 4edac72c61af8a14b6d7f7a2 ... Tue 19:59:57\n183 539 4c1836421436a593271a8d75 ... Tue 20:05:46\n276 864 4f23799ee5e87114ad3af2ad ... Tue 21:02:39\n279 1055 4f7b65c8e4b0c69aa470ae67 ... Tue 21:04:39\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n227 1063 4e05fada6284d9ee92cbd39c ... Tue 20:35:02\n390 458 4f69fed7e4b0316cb9aba961 ... Tue 21:55:05\n591 1055 4b3d4ee9f964a520189225e3 ... Tue 23:18:17\n1582 853 4ecbcb4bd3e32939cdc1bc81 ... Wed 13:19:40\n2894 372 4a723953f964a520a2da1fe3 ... Fri 13:45:06\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n1528 458 4ee5295c9adf398200319470 ... Wed 12:57:25\n2617 458 4ee5295c9adf398200319470 ... Thu 08:57:06\n3962 445 4ad63d83f964a520f90521e3 ... Sat 20:03:52\n5101 458 4ee5295c9adf398200319470 ... Sun 11:47:35\n6209 458 4ee5295c9adf398200319470 ... Mon 12:09:24\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n1470 354 4e6b50e61fc7f7282e57408a ... Wed 12:34:52\n3749 443 4d867ea499b78cfa77fce51f ... Sat 18:26:42\n5394 645 4b6b5abff964a520fb022ce3 ... Sun 16:27:02\n6664 1047 4ce6d1330f196dcb0a0f3aae ... Mon 15:59:29\n9883 354 4e70a108922e8e01baad9cdf ... Wed 15:59:51\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n3453 281 4bca200b68f976b0c1b55e83 ... Sat 15:57:37\n5432 281 4bca200b68f976b0c1b55e83 ... Sun 16:55:07\n23758 407 3fd66200f964a52008e71ee3 ... Thu 00:44:59\n104809 924 4bf19ca578cec928d99cba86 ... Mon 00:32:09\n126541 924 4bf19ca578cec928d99cba86 ... Sun 20:33:44\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n41227 656 4f8c5b9de4b0ab0c1dbcffb9 ... Sat 18:05:28\n57829 698 4e23a053ae6015b212bc6a3e ... Wed 18:10:04\n66156 698 4e23a053ae6015b212bc6a3e ... Mon 06:17:10\n69721 698 4e23a053ae6015b212bc6a3e ... Tue 22:10:41\n71253 698 4e23a053ae6015b212bc6a3e ... Wed 17:04:58\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n68189 974 4bbe1ecf9474c9b66a1bd9b6 ... Tue 03:29:48\n119246 567 4bbe1ecf9474c9b66a1bd9b6 ... Tue 10:54:09\n119255 567 4e4574fc2fb6c364c31cf0df ... 
Tue 10:56:55\n119256 567 4dac664d5da3ba8a47b55f3e ... Tue 10:57:56\n119258 567 4e4574fc2fb6c364c31cf0fa ... Tue 10:59:10\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n30543 723 4f93ffbae4b02d470391019d ... Sun 15:12:55\n33447 893 4b100e28f964a520316823e3 ... Tue 00:42:37\n46954 9 4ab9f883f964a520548020e3 ... Thu 23:09:07\n55097 755 4da0eb2cb1c937046a99c2a1 ... Tue 14:22:16\n58199 893 4b19b5e6f964a5207fe223e3 ... Thu 00:02:22\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n15024 121 4f827ba1e4b0a6d0b804218f ... Sat 02:25:40\n30191 121 4f827ba1e4b0a6d0b804218f ... Sun 07:41:16\n30748 830 4f805bf7e4b09a1476af7426 ... Sun 17:16:18\n109004 1025 4f775918e4b040208c20d392 ... Wed 20:41:15\n150030 106 4e3ac3863151a4ae7ec5a3dd ... Thu 22:07:43\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n242 592 4c1a94bcb4e62d7f0c08d893 ... Tue 20:46:49\n2149 482 4c248c01a852c928e23ae36c ... Wed 18:38:55\n4700 274 4c1b572a8b3aa59345fa965f ... Sun 02:26:29\n7670 274 4c1b572a8b3aa59345fa965f ... Tue 00:25:21\n24748 482 4bceec19cc8cd13a2001c5cf ... Thu 14:02:54\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n65651 993 4f5063f9e4b020de96d2d70b ... Sun 23:23:31\n83180 901 4eb2e19e7ee5a4e5f542ba31 ... Sat 00:09:01\n102978 304 4fd70643e4b050a3e5509488 ... Sat 22:32:05\n103481 646 4fcead32e4b0bdc2b6f273e8 ... Sun 03:50:36\n122945 646 4fcead32e4b0bdc2b6f273e8 ... Fri 00:18:45\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n84 749 4f51526fe4b002a741157c16 ... Tue 18:58:56\n3074 758 4b8bf752f964a52038b532e3 ... Fri 15:44:51\n3493 749 4f51526fe4b002a741157c16 ... Sat 16:17:49\n6785 758 4b8bf752f964a52038b532e3 ... Mon 17:04:47\n7767 749 4f51526fe4b002a741157c16 ... Tue 01:33:28\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n212 592 4165d880f964a5207e1d1fe3 ... Tue 20:23:14\n278 592 4a478d19f964a520d2a91fe3 ... Tue 21:03:26\n4851 623 4a478d19f964a520d2a91fe3 ... Sun 04:16:32\n4858 623 4165d880f964a5207e1d1fe3 ... Sun 04:22:22\n5425 603 4a478d19f964a520d2a91fe3 ... Sun 16:50:58\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n131960 23 4f886abfe4b0e6f347945677 ... Fri 11:26:09\n\n[1 rows x 13 columns]\n userId venueId ... weekday time\n1230 990 4af560b7f964a520faf821e3 ... Wed 09:09:10\n1243 80 49ecee93f964a520af671fe3 ... Wed 10:13:51\n11465 680 4a4bc974f964a520b9ac1fe3 ... Thu 12:24:31\n15881 81 4abbe0b6f964a520528520e3 ... Sat 17:18:22\n18283 372 4ada8cf5f964a520712321e3 ... Sun 19:40:35\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n16599 189 4a7f6308f964a520dff31fe3 ... Sat 22:07:34\n16747 40 49e16850f964a520c6611fe3 ... Sat 23:03:33\n17789 811 4a7f6308f964a520dff31fe3 ... Sun 15:38:14\n17843 40 49e16850f964a520c6611fe3 ... Sun 16:02:09\n47742 50 4ad0b0eff964a52027d920e3 ... Fri 11:13:24\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n102807 821 4eee1751d5fba9d69669e961 ... Sat 20:48:33\n198253 980 4a19d25cf964a5205c7a1fe3 ... Mon 22:09:55\n\n[2 rows x 13 columns]\n userId venueId ... weekday time\n367 78 49ca8170f964a520b1581fe3 ... Tue 21:45:33\n10352 698 4adb6119f964a520a72621e3 ... Wed 20:38:40\n18455 184 49d92230f964a520035e1fe3 ... Sun 20:59:18\n21410 287 4adb6119f964a520a72621e3 ... Tue 20:37:09\n46301 176 4adb6119f964a520a72621e3 ... Thu 17:38:11\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n164 774 4f3ceffde4b0811223a089bb ... Tue 19:54:00\n1239 774 4f3ceffde4b0811223a089bb ... Wed 09:54:25\n4508 314 4c550d5afd2ea593c95a112b ... Sun 00:27:48\n4778 774 4f3ceffde4b0811223a089bb ... 
Sun 03:13:14\n6001 774 4f3ceffde4b0811223a089bb ... Sun 22:16:05\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n12793 1024 4e4a90d38877bebe848f2d2a ... Fri 01:22:02\n25009 909 4f10994fe4b0b827c662ad69 ... Thu 19:01:28\n54489 199 4a11788bf964a52019771fe3 ... Mon 22:30:14\n59466 417 4e4a90d38877bebe848f2d2a ... Fri 01:41:09\n60699 377 4e4a90d38877bebe848f2d2a ... Fri 17:19:24\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n110352 312 4fbb9410e4b04df08f2be550 ... Sat 19:31:36\n139270 976 4fd0a768e4b04552ed4b3cfa ... Tue 18:58:07\n167718 84 4fae6373e4b021a2c005d916 ... Sat 17:31:27\n169618 84 4fae6373e4b021a2c005d916 ... Wed 13:54:19\n169984 84 4fae6373e4b021a2c005d916 ... Thu 01:06:17\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n186074 529 4f1a82a5e4b0f67a95c2423e ... Sat 09:12:56\n\n[1 rows x 13 columns]\n userId venueId ... weekday time\n132096 795 4f0ed8fde4b0c6e4676c4254 ... Fri 12:37:39\n\n[1 rows x 13 columns]\n userId venueId ... weekday time\n3331 651 4c87c62614c4b60ccd7b2235 ... Sat 14:11:37\n4586 742 4da3388500a92d4365d0b17d ... Sun 01:13:38\n16656 402 4da3388500a92d4365d0b17d ... Sat 22:27:42\n16830 1034 4aad52caf964a520b05f20e3 ... Sat 23:36:33\n24817 350 4b8d3f78f964a520bbef32e3 ... Thu 14:36:38\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n3078 643 4f2a87a5e4b07e41d94ae809 ... Fri 15:46:24\n3605 643 4f2a87a5e4b07e41d94ae809 ... Sat 17:13:22\n20888 643 4f2a87a5e4b07e41d94ae809 ... Tue 16:02:14\n55194 643 4f2a87a5e4b07e41d94ae809 ... Tue 15:16:34\n67182 643 4f2a87a5e4b07e41d94ae809 ... Mon 17:54:11\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n998 246 4c0c2b61bbc676b0a8b44cd5 ... Wed 02:32:50\n1458 521 4c0c2b61bbc676b0a8b44cd5 ... Wed 12:30:51\n3065 313 4f6125d6e4b08711dad5d68b ... Fri 15:35:48\n3486 537 4f6125d6e4b08711dad5d68b ... Sat 16:13:55\n4055 313 4f6125d6e4b08711dad5d68b ... Sat 20:42:33\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n11125 645 4f1120fce4b0725febba8c2b ... Thu 04:50:20\n11285 645 4f1120fce4b0725febba8c2b ... Thu 10:55:38\n18997 645 4f1120fce4b0725febba8c2b ... Mon 12:49:27\n26703 645 4f1120fce4b0725febba8c2b ... Fri 13:38:19\n31885 645 4f1120fce4b0725febba8c2b ... Mon 11:34:34\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n28947 307 4f92ced4e4b03330a9d4d664 ... Sat 16:38:16\n80341 974 4f1b1f01e4b0c7ea67b1dd8c ... Wed 18:15:45\n82136 349 4ebda00d5c5c3d470bb88802 ... Fri 16:04:05\n111634 728 4fdb2b43e4b0d1f5f8972de2 ... Sun 15:14:22\n111924 298 4fdb2b43e4b0d1f5f8972de2 ... Sun 18:26:38\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n22853 653 4efa541a7bebf5ab6c3f0ff0 ... Wed 17:35:52\n32796 406 3fd66200f964a52056e31ee3 ... Mon 19:02:23\n49135 1002 3fd66200f964a52056e31ee3 ... Fri 22:06:46\n110963 530 3fd66200f964a52056e31ee3 ... Sun 01:24:18\n119986 654 4ff345a6e4b0f1c616e1ff30 ... Tue 19:19:46\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n1739 465 4b156188f964a520f5ab23e3 ... Wed 14:46:45\n4793 1038 4c2cde868ef52d7fce9833ba ... Sun 03:23:51\n10160 929 4a987006f964a520032c20e3 ... Wed 18:34:33\n11809 531 4f870efbe4b0e5ed725076f9 ... Thu 17:21:11\n11987 210 4c40e37ada3dc928c512c8b9 ... Thu 18:51:01\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n82885 332 4fb2ce43c2eea43adbb4b82c ... Fri 22:11:53\n83420 708 4f6b4540e4b0a6e623284617 ... Sat 02:08:15\n94662 136 4b59e118f964a520d79d28e3 ... Mon 12:15:12\n116228 176 4b9edf34f964a520900737e3 ... Wed 02:56:56\n131209 454 4b9edf34f964a520900737e3 ... 
Thu 21:10:36\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n3745 553 4b5238d2f964a520c27027e3 ... Sat 18:24:19\n7038 458 4f1f8dcae4b089ad6522caf6 ... Mon 19:03:22\n18279 724 4c4b45ff42b4d13a6770727e ... Sun 19:37:00\n19203 468 4c9634f36b35a14345a22bdc ... Mon 14:13:22\n19393 531 4c5c0e5694fd0f47c77ac745 ... Mon 16:19:05\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n17130 613 49eb9b35f964a52001671fe3 ... Sun 01:51:29\n25523 857 49eb9b35f964a52001671fe3 ... Thu 23:25:48\n38258 983 4d5afdd8cc656dcbdda2cacb ... Fri 11:02:25\n49634 384 41842b00f964a520e01d1fe3 ... Sat 01:19:35\n49781 384 41842b00f964a520e01d1fe3 ... Sat 02:33:20\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n13 116 4c0ab56f7e3fc9288c1df482 ... Tue 18:09:29\n6787 1068 4c0ab56f7e3fc9288c1df482 ... Mon 17:04:50\n12033 1068 4c0ab56f7e3fc9288c1df482 ... Thu 19:24:22\n15869 1068 4c0ab56f7e3fc9288c1df482 ... Sat 17:13:02\n17782 1068 4c0ab56f7e3fc9288c1df482 ... Sun 15:34:28\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n183978 190 4c73009b13228cfaefa72c65 ... Mon 22:03:23\n\n[1 rows x 13 columns]\n userId venueId ... weekday time\n139848 679 4f9eecc70cd6b5e52b6d64ae ... Sat 00:34:09\n\n[1 rows x 13 columns]\n userId venueId ... weekday time\n48208 506 4e32a66bd1643f0b04ae4f91 ... Fri 14:31:32\n138429 646 4fe0acee4fc639d598b894e9 ... Sun 00:10:22\n156125 506 4e32a66bd1643f0b04ae4f91 ... Fri 14:31:29\n158667 359 4c016a414f1ea593b17d6b7d ... Mon 11:51:36\n170769 330 4ba23693f964a5203de437e3 ... Fri 01:30:30\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n7483 94 4e5fce33b0fb754192cca549 ... Mon 22:56:23\n65555 992 4e5fce33b0fb754192cca549 ... Sun 22:39:13\n76949 291 4e5fce33b0fb754192cca549 ... Sat 18:50:15\n103906 94 4e5fce33b0fb754192cca549 ... Sun 14:31:32\n104763 94 4e5fce33b0fb754192cca549 ... Sun 23:53:38\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n610 1047 4f429948e4b015f1bcc998e3 ... Tue 23:27:55\n2240 528 4f494b3ce4b03377e80867a3 ... Wed 19:21:28\n3268 194 4f7254f9e4b076a57bbed086 ... Sat 13:27:01\n6597 238 4e1a07c61f6eb95598897212 ... Mon 15:05:52\n6977 1047 4f429948e4b015f1bcc998e3 ... Mon 18:30:13\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n160 667 4e555a38b0fb1e5686bd6646 ... Tue 19:50:54\n1132 225 4f53fd86e4b0ef96732f7351 ... Wed 05:07:27\n1725 829 4f799f28e4b0488e30e494c9 ... Wed 14:39:07\n2490 667 4f627c62e4b04b2441d70d29 ... Wed 23:41:22\n3157 995 4f7f171ce4b06838144f311c ... Fri 16:31:56\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n48 889 4c79b79381bca093e689fe14 ... Tue 18:26:46\n83 1012 4c62bfd0e1621b8dfe232453 ... Tue 18:57:46\n112 1055 4c0b32d37e3fc928650bf582 ... Tue 19:18:05\n167 1055 4b2da0ddf964a520c9d924e3 ... Tue 19:54:11\n189 562 4ec36fc4754a58e0f97f1766 ... Tue 20:07:48\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n2220 194 4b3613a9f964a520013025e3 ... Wed 19:11:26\n2401 483 4b3613a9f964a520013025e3 ... Wed 23:07:41\n3067 194 4b3613a9f964a520013025e3 ... Fri 15:36:37\n3618 194 4b3613a9f964a520013025e3 ... Sat 17:21:20\n3862 1042 4f763455e4b015efa412ad28 ... Sat 19:18:41\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n94869 742 4fc8066ce4b0896b9f15b797 ... Mon 13:57:44\n97149 742 4fc8066ce4b0896b9f15b797 ... Wed 17:38:51\n99122 742 4fc8066ce4b0896b9f15b797 ... Thu 17:40:22\n215254 779 50f4983ce4b0ba731506fb61 ... Tue 04:51:21\n215695 831 50f4983ce4b0ba731506fb61 ... Thu 23:17:20\n\n[5 rows x 13 columns]\n userId venueId ... 
weekday time\n34365 428 4f965a77e4b011ff900585c3 ... Tue 14:30:00\n105032 355 4fd57406e4b0f0b2d5884458 ... Mon 05:23:53\n200672 322 5001a6dbe4b0d51a40fe5803 ... Fri 20:52:03\n208934 322 5001a6dbe4b0d51a40fe5803 ... Tue 22:40:45\n211441 322 5001a6dbe4b0d51a40fe5803 ... Sun 00:36:57\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n1612 171 4b15385cf964a5207da923e3 ... Wed 13:32:01\n6491 171 4b15385cf964a5207da923e3 ... Mon 13:55:54\n8217 171 4b15385cf964a5207da923e3 ... Tue 13:44:08\n11450 171 4b15385cf964a5207da923e3 ... Thu 12:20:32\n13500 171 4b15385cf964a5207da923e3 ... Fri 13:15:04\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n648 1082 4c2a849357a9c9b693a1f467 ... Tue 23:45:50\n7812 1058 4bd8d6dc0115c9b687917580 ... Tue 02:07:58\n9796 380 4e5137ca62e14b77e3b4923c ... Wed 15:04:02\n11105 974 4c3cfebb980320a1107a8be4 ... Thu 04:25:39\n25186 70 4f54e0ace4b0c41607188619 ... Thu 21:12:31\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n88421 1025 4fc4f540e4b0500aa59f84fc ... Tue 16:11:58\n116240 106 4fea7859e4b0a368db1d2fe2 ... Wed 03:05:02\n161514 585 507eaccce4b0b83c3867c15e ... Wed 13:13:40\n\n[3 rows x 13 columns]\n userId venueId ... weekday time\n90401 221 4c0fdee2f1b6a5934afb7a27 ... Thu 18:12:41\n123517 311 4ff6e5c7e4b09751a75d1cae ... Fri 13:19:16\n144727 220 50241ed7e4b088dba7bc0550 ... Thu 20:36:14\n168053 220 50241ed7e4b088dba7bc0550 ... Sat 23:42:30\n\n[4 rows x 13 columns]\n userId venueId ... weekday time\n5863 51 4bdd7d7a645e0f4757956b19 ... Sun 21:06:04\n19159 1006 4f60c8d5e4b06b5513b364a8 ... Mon 13:54:17\n21439 768 4c97a98538dd8cfa7ea0e562 ... Tue 20:53:33\n22115 768 4c97a98538dd8cfa7ea0e562 ... Wed 01:58:24\n33839 347 4c8b73a2770fb60c10dddbc3 ... Tue 09:43:02\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n20808 385 4f53a53be4b0ddd8f8610868 ... Tue 14:53:49\n22564 385 4f53a53be4b0ddd8f8610868 ... Wed 14:50:34\n38110 385 4f53a53be4b0ddd8f8610868 ... Thu 14:39:13\n38750 385 4f53a53be4b0ddd8f8610868 ... Fri 14:46:37\n66838 385 4f53a53be4b0ddd8f8610868 ... Mon 14:38:00\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n1864 467 4f7c7404e4b0726c394e1284 ... Wed 16:18:05\n1865 467 4f7c7404e4b0726c394e1284 ... Wed 16:18:05\n26987 467 4f7c7404e4b0726c394e1284 ... Fri 16:06:58\n32496 467 4f7c7404e4b0726c394e1284 ... Mon 16:25:35\n35625 928 4f5f99dfe4b0e5574bf9c3d1 ... Wed 12:37:59\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n10899 1054 4f6f6335e4b0d4a5b24d839e ... Thu 01:29:28\n14083 1054 4f6f6335e4b0d4a5b24d839e ... Fri 19:44:39\n22883 1054 4f6f6335e4b0d4a5b24d839e ... Wed 17:51:22\n26554 1054 4f6f6335e4b0d4a5b24d839e ... Fri 12:48:57\n27492 1054 4f6f6335e4b0d4a5b24d839e ... Fri 20:39:02\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n19848 354 49eba029f964a52004671fe3 ... Mon 19:44:22\n30819 625 49eba029f964a52004671fe3 ... Sun 17:56:58\n85151 428 49eba029f964a52004671fe3 ... Sun 18:24:25\n86726 620 49eba029f964a52004671fe3 ... Mon 18:03:27\n90115 844 49eba029f964a52004671fe3 ... Thu 15:42:17\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n512 650 3fd66200f964a520b6e71ee3 ... Tue 22:45:12\n830 804 3fd66200f964a52020e81ee3 ... Wed 00:51:43\n2436 535 470f3ff0f964a5208e4b1fe3 ... Wed 23:24:01\n2451 74 4ecaefc99adfd1f5b4666db5 ... Wed 23:30:51\n3782 349 43237380f964a520a6271fe3 ... Sat 18:42:28\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n4119 90 3fd66200f964a5205deb1ee3 ... Sat 21:08:56\n4717 401 3fd66200f964a5205deb1ee3 ... 
Sun 02:32:35\n11365 820 4bc655c451b376b04b711b6f ... Thu 11:49:50\n12666 231 49d13d20f964a520695b1fe3 ... Fri 00:11:34\n14558 652 3fd66200f964a5205deb1ee3 ... Fri 23:06:18\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n28195 567 4b2fb49ff964a52055ee24e3 ... Sat 03:58:43\n28270 983 4b2fb49ff964a52055ee24e3 ... Sat 04:57:08\n\n[2 rows x 13 columns]\n userId venueId ... weekday time\n200430 196 50d3f361e4b00ce56b169f30 ... Fri 14:36:37\n200462 197 50d3f361e4b00ce56b169f30 ... Fri 15:35:45\n200488 285 50d40dfce4b0621a67038adf ... Fri 17:06:39\n200662 542 50d3f361e4b00ce56b169f30 ... Fri 20:40:41\n219372 635 5109f9e4e4b0d125443b4708 ... Thu 08:19:38\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n2284 951 4bcddbae8920b7131a69a0dc ... Wed 19:45:46\n3790 1009 4ca6477876d3a0937af2ff6a ... Sat 18:46:17\n9703 1009 4ca6477876d3a0937af2ff6a ... Wed 14:03:34\n44206 1009 4ca6477876d3a0937af2ff6a ... Tue 20:25:30\n82691 1009 4ca6477876d3a0937af2ff6a ... Fri 20:38:23\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n193080 432 50c53c67e4b02085d8786e54 ... Mon 01:35:47\n225080 432 50c53c67e4b02085d8786e54 ... Mon 02:01:15\n\n[2 rows x 13 columns]\n userId venueId ... weekday time\n21171 80 4b71af9cf964a5201b562de3 ... Tue 18:19:03\n21404 114 4b903af5f964a520b77d33e3 ... Tue 20:32:56\n34892 335 4a92bc57f964a520801d20e3 ... Tue 19:55:46\n37429 723 4b0c9a09f964a520f83f23e3 ... Thu 04:50:25\n44162 474 4b66020ff964a5201b0e2be3 ... Tue 19:56:01\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n27688 793 3fd66200f964a520c6e91ee3 ... Fri 23:58:20\n27837 373 4a73934ff964a520dcdc1fe3 ... Sat 00:58:24\n129384 351 3fd66200f964a5202dea1ee3 ... Wed 17:09:30\n154323 150 4a73934ff964a520dcdc1fe3 ... Wed 23:06:55\n\n[4 rows x 13 columns]\n userId venueId ... weekday time\n82369 758 4b8bf752f964a52038b532e3 ... Fri 18:09:07\n88287 758 4b8bf752f964a52038b532e3 ... Tue 14:35:25\n100820 705 4ed80b6d775bcc53f7b5bf76 ... Fri 19:51:39\n103746 758 4b8bf752f964a52038b532e3 ... Sun 10:37:25\n115554 758 4b8bf752f964a52038b532e3 ... Tue 16:59:18\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n167530 326 4a889b61f964a520180720e3 ... Sat 13:31:42\n184141 1022 4bc0f1deabf495219638c093 ... Tue 01:34:53\n191973 827 50c38f9ae4b070f4c245b110 ... Sat 19:07:22\n\n[3 rows x 13 columns]\n userId venueId ... weekday time\n104756 949 4a553eadf964a520ebb31fe3 ... Sun 23:47:39\n110915 535 4a553eadf964a520ebb31fe3 ... Sun 00:53:31\n115848 458 4a553eadf964a520ebb31fe3 ... Tue 22:49:10\n117478 771 4a553eadf964a520ebb31fe3 ... Sun 02:00:55\n118765 771 4a553eadf964a520ebb31fe3 ... Mon 22:23:31\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n25776 951 4a70a76cf964a52017d81fe3 ... Fri 01:12:31\n27756 951 4a70a76cf964a52017d81fe3 ... Sat 00:20:51\n30854 951 4a70a76cf964a52017d81fe3 ... Sun 18:18:03\n31119 1028 4a70a76cf964a52017d81fe3 ... Sun 21:33:34\n31149 951 4a70a76cf964a52017d81fe3 ... Sun 22:01:11\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n90561 459 4bb64fc7f562ef3bf2f52f97 ... Thu 19:30:31\n91891 459 4bb64fc7f562ef3bf2f52f97 ... Fri 19:07:45\n91979 1018 4b8e8bcdf964a520f52733e3 ... Sat 21:42:46\n95259 332 4beae3a7a9900f4700131740 ... Mon 23:26:20\n97084 459 4bb64fc7f562ef3bf2f52f97 ... Wed 17:05:13\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n2493 895 4d4de33cd6f3224bca73a1a6 ... Wed 23:41:35\n54168 371 4acf58baf964a52028d320e3 ... Mon 19:30:55\n55581 84 4acf58baf964a52028d320e3 ... Tue 18:28:15\n56130 84 4acf58baf964a52028d320e3 ... 
Tue 22:42:26\n60552 238 4acf58baf964a52028d320e3 ... Fri 16:13:40\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n106249 501 4b040d2af964a5201f5122e3 ... Tue 00:22:36\n108647 172 4b6c7fe6f964a520043f2ce3 ... Wed 15:54:16\n140191 236 4af0a7d1f964a52002de21e3 ... Sat 08:15:52\n140845 862 4ba7ef65f964a520fabf39e3 ... Sat 22:05:32\n144584 172 4b6c7fe6f964a520043f2ce3 ... Thu 17:41:04\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n62416 797 4a58f861f964a52058b81fe3 ... Sat 13:09:30\n72410 1047 4a58f861f964a52058b81fe3 ... Thu 02:15:58\n77910 993 44b7acf4f964a52073351fe3 ... Mon 13:48:45\n84875 319 44b7acf4f964a52073351fe3 ... Sun 15:46:47\n85036 475 44b7acf4f964a52073351fe3 ... Sun 17:27:07\n\n[5 rows x 13 columns]\n userId venueId ... weekday time\n158923 61 4a50f0a7f964a52047b01fe3 ... Mon 19:39:37\n178078 834 4a50f0a7f964a52047b01fe3 ... Sat 19:23:56\n\n[2 rows x 13 columns]\n userId venueId ... weekday time\n97223 689 4c5ef77bfff99c74eda954d3 ... Wed 18:15:09\n99194 689 4c5ef77bfff99c74eda954d3 ... Thu 18:15:10\n99496 188 4c5ef77bfff99c74eda954d3 ... Thu 21:51:00\n100693 689 4c5ef77bfff99c74eda954d3 ... Fri 18:15:08\n102827 1070 4c5ef77bfff99c74eda954d3 ... Sat 21:01:12\n\n[5 rows x 13 columns]\n"
],
[
"selected_features = ['latitude', 'longitude', 'timezoneOffset']",
"_____no_output_____"
],
[
"data, labels = ny_data[selected_features], ny_data['venueCategory']\ndata.shape",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(data, labels, test_size=0.30, random_state=42)\n\nX_train.shape, X_test.shape, y_test.shape, y_train.shape",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestClassifier\n\nclf = RandomForestClassifier()\n\nclf.fit(X_train, y_train)\nclf.score(X_train, y_train)",
"/opt/conda/lib/python3.6/site-packages/sklearn/ensemble/forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.\n \"10 in version 0.20 to 100 in 0.22.\", FutureWarning)\n"
],
[
"from sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\n\npredictions = clf.predict(X_test)\n\nprint(classification_report(y_test, predictions))",
"/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1143: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples.\n 'precision', 'predicted', average, warn_for)\n/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1143: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples.\n 'precision', 'predicted', average, warn_for)\n"
],
[
"clf.score(X_test, y_test)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e76f8e492059d7b0b7cfffd47461a8853aa4b735 | 3,612 | ipynb | Jupyter Notebook | notebooks/ch1_arrays_and_strings/1.5 One Away.ipynb | Julyzzzzzz/Practice-on-data-structures-and-algorithms | e3f28c753f745e296abc238a853214083344a47c | [
"MIT"
] | null | null | null | notebooks/ch1_arrays_and_strings/1.5 One Away.ipynb | Julyzzzzzz/Practice-on-data-structures-and-algorithms | e3f28c753f745e296abc238a853214083344a47c | [
"MIT"
] | null | null | null | notebooks/ch1_arrays_and_strings/1.5 One Away.ipynb | Julyzzzzzz/Practice-on-data-structures-and-algorithms | e3f28c753f745e296abc238a853214083344a47c | [
"MIT"
] | null | null | null | 22.02439 | 137 | 0.453212 | [
[
[
"You are given two strings as input. You want to find out if these **two strings** are **at most one edit away** from each other.\n\nAn edit is defined as either\n\n- **inserting a character**: length increased by 1\n- **removing a character**: length decreased by 1\n- **replacing a character**: length doesn't change\n\n*this edit distance is also called Levenshtein distance!*",
"_____no_output_____"
]
],
[
[
"# method 1: brutal force\n# O(N)\n# N is the length of the **shorter** string\n\ndef oneEdit(s1, s2):\n l1 = len(s1)\n l2 = len(s2)\n if (l1 == l2):\n return checkReplace(s1, s2)\n elif abs(l1-l2) == 1:\n return checkInsRem(s1, s2)\n else:\n return False\n\ndef checkReplace(s1, s2):\n foundDiff = 0\n \n for i in range(len(s1)):\n if s1[i] != s2[i]:\n foundDiff += 1\n \n if foundDiff > 1:\n return False\n else:\n return True \n\n# checking if i can insert to the shorter string to make it the longer string\ndef checkInsRem(s1, s2):\n if len(s1) < len(s2):\n short = s1\n long = s2\n else:\n short = s2\n long = s1\n \n index_s = 0\n index_l = 0\n \n while (index_s<len(short)) and (index_l<len(long)):\n if (short[index_s] != long[index_l]):\n if index_s != index_l: # found the second different letter\n return False\n index_l += 1\n else:\n index_s += 1\n index_l += 1\n \n return True",
"_____no_output_____"
],
[
"s1 = 'pale'\ns2 = 'phhle'",
"_____no_output_____"
],
[
"oneEdit(s1, s2)",
"_____no_output_____"
],
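[
"# a single replacement ('pale' -> 'bale') should also count as one edit away\n# (extra sanity check using the helpers defined above)\noneEdit('pale', 'bale')",
"_____no_output_____"
],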
[
"s3 = 'pale'\ns4 = 'ple'",
"_____no_output_____"
],
[
"oneEdit(s3, s4)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e76f9085c79657a422600d8959f65ea0ed0ab7af | 39,423 | ipynb | Jupyter Notebook | debugging/cvpr_plots.ipynb | jutanke/mvpose | 33eb107490e17d51301d6b22e4f5b93ab6a11cf3 | [
"MIT"
] | 4 | 2020-03-06T15:05:09.000Z | 2020-04-14T17:57:33.000Z | debugging/cvpr_plots.ipynb | jutanke/mvpose | 33eb107490e17d51301d6b22e4f5b93ab6a11cf3 | [
"MIT"
] | null | null | null | debugging/cvpr_plots.ipynb | jutanke/mvpose | 33eb107490e17d51301d6b22e4f5b93ab6a11cf3 | [
"MIT"
] | null | null | null | 428.51087 | 36,896 | 0.937549 | [
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom matplotlib import rc\nrc('font', **{'family': 'serif', 'serif': ['Computer Modern']})\nrc('text', usetex=True)\nplt.rcParams.update({'font.size': 16})\n\n# kth different sigmas\n\nsigma = [0.5, 1, 1.5, 1.8, 2, 2.3, 2.5, 2.8, 3, 5, 6]\nvalues_kth = [.93, .957, .9667, .96904, .9690, .9684, .96728, .96495, .9626, .9345, .9077]\n\n\nprint(len(values_kth))\n\nsigma_camp = [0.5, 1, 1.5, 1.8, 2, 2.3, 2.5, 2.8, 3, 3.2, 3.5, 3.8, 4, 4.2, 4.5, 5, 6]\nvalues_camp = [.8863,.89154, .8914, .8955, .89739, .8985, .897909, .90106, .90299, .9047, .90522,\n .906847, .9057, .90693, .9078, .9057, .901]\n\nsigma_shelf = [0.5, 1, 1.5, 1.7, 1.8, 2, 3, 5, 6]\nvalues_shelf = [.9247, .9327, .933165, .933056, .9308, .92768, .9288, .92201,.9196]\n\nassert len(values_kth) == len(sigma)\nassert len(values_camp) == len(sigma_camp)\nassert len(values_shelf) == len(sigma_shelf)\n\nfig = plt.figure(figsize=(12, 6))\nax = fig.add_subplot(111)\nax.plot(sigma, values_kth, label='KTH Football II', linewidth=3)\nax.plot(sigma_camp, values_camp, label='Campus', linewidth=3)\nax.plot(sigma_shelf, values_shelf, label='Shelf', linewidth=3)\nax.set_xlabel(r\"$\\sigma$\", fontsize=30)\nax.set_ylabel(\"PCP\", fontsize=20)\n\nplt.legend(fontsize=16)\n\n# plt.show()\nplt.savefig('sigma.eps', format='eps', dpi=1000)",
"11\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
e76fa1b0fa84e31138e454fe3c7a0a64079fc08c | 118,309 | ipynb | Jupyter Notebook | notebooks/Replicating Qi 2006.ipynb | gngdb/opencast-bio | 9fb110076295aafa696a9f8b5070b8d93c6400ce | [
"MIT"
] | 2 | 2016-02-24T20:44:39.000Z | 2020-07-06T02:44:38.000Z | notebooks/Replicating Qi 2006.ipynb | gngdb/opencast-bio | 9fb110076295aafa696a9f8b5070b8d93c6400ce | [
"MIT"
] | null | null | null | notebooks/Replicating Qi 2006.ipynb | gngdb/opencast-bio | 9fb110076295aafa696a9f8b5070b8d93c6400ce | [
"MIT"
] | null | null | null | 98.755426 | 20,663 | 0.799728 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e76fb1bd9d0e7a4802cbbee03003f5ce00be464c | 41,953 | ipynb | Jupyter Notebook | docs/allcools/cell_level/dmg/04-PairwiseDMG.ipynb | mukamel-lab/ALLCools | 756ef790665c6ce40633873211929ea92bcccc21 | [
"MIT"
] | 5 | 2019-07-16T17:27:15.000Z | 2022-01-14T19:12:27.000Z | docs/allcools/cell_level/dmg/04-PairwiseDMG.ipynb | mukamel-lab/ALLCools | 756ef790665c6ce40633873211929ea92bcccc21 | [
"MIT"
] | 12 | 2019-10-17T19:34:43.000Z | 2022-03-23T16:04:18.000Z | docs/allcools/cell_level/dmg/04-PairwiseDMG.ipynb | mukamel-lab/ALLCools | 756ef790665c6ce40633873211929ea92bcccc21 | [
"MIT"
] | 4 | 2019-10-18T23:43:48.000Z | 2022-02-12T04:12:26.000Z | 44.726013 | 3,151 | 0.515887 | [
[
[
"# Differential Methylated Genes - Pairwise",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport anndata\nimport xarray as xr\nfrom ALLCools.plot import *\nfrom ALLCools.mcds import MCDS\nfrom ALLCools.clustering import PairwiseDMG, cluster_enriched_features\nimport pathlib",
"_____no_output_____"
]
],
[
[
"## Parameters",
"_____no_output_____"
]
],
[
[
"adata_path = '../step_by_step/100kb/adata.with_coords.h5ad'\ncluster_col = 'L1'\n\n# change this to the paths to your MCDS files\ngene_fraction_dir = 'gene_frac/'\nobs_dim = 'cell'\nvar_dim = 'gene'\n\n# DMG\nmc_type = 'CHN'\ntop_n = 1000\nadj_p_cutoff = 1e-3\ndelta_rate_cutoff = 0.3\nauroc_cutoff = 0.9\nrandom_state = 0\nn_jobs = 30",
"_____no_output_____"
]
],
[
[
"## Load",
"_____no_output_____"
]
],
[
[
"adata = anndata.read_h5ad(adata_path)\n\ncell_meta = adata.obs.copy()\ncell_meta.index.name = obs_dim\n\ngene_meta = pd.read_csv(f'{gene_fraction_dir}/GeneMetadata.csv.gz', index_col=0)\n\ngene_mcds = MCDS.open(f'{gene_fraction_dir}/*_da_frac.mcds', use_obs=cell_meta.index)\ngene_mcds",
"_____no_output_____"
]
],
[
[
"## Pairwise DMG",
"_____no_output_____"
]
],
[
[
"pwdmg = PairwiseDMG(max_cell_per_group=1000,\n top_n=top_n,\n adj_p_cutoff=adj_p_cutoff,\n delta_rate_cutoff=delta_rate_cutoff,\n auroc_cutoff=auroc_cutoff,\n random_state=random_state,\n n_jobs=n_jobs)",
"_____no_output_____"
],
[
"pwdmg.fit_predict(x=gene_mcds[f'{var_dim}_da_frac'].sel(mc_type=mc_type), \n groups=cell_meta[cluster_col])",
"Generating cluster AnnData files\nComputing pairwise DMG\n406 pairwise DMGs\n1/406 finished\n41/406 finished\n81/406 finished\n121/406 finished\n161/406 finished\n201/406 finished\n241/406 finished\n281/406 finished\n321/406 finished\n361/406 finished\n401/406 finished\n"
],
[
"pwdmg.dmg_table.to_hdf(f'{cluster_col}.PairwiseDMG.{mc_type}.hdf', key='data')\npwdmg.dmg_table.head()",
"_____no_output_____"
]
],
[
[
"## Aggregating Cluster DMG\n\nWeighted total AUROC aggregated from the pairwise comparisons",
"_____no_output_____"
],
[
"### Aggregate Pairwise Comparisons",
"_____no_output_____"
]
],
[
[
"cluster_dmgs = pwdmg.aggregate_pairwise_dmg(adata, groupby=cluster_col)",
"_____no_output_____"
],
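[
"# quick look at the aggregated scores of one cluster\n# (sketch: assumes aggregate_pairwise_dmg returns a dict of per-cluster pandas Series,\n# which is how the saving cell below treats cluster_dmgs)\nexample_cluster = list(cluster_dmgs.keys())[0]\ncluster_dmgs[example_cluster].sort_values(ascending=False).head()",
"_____no_output_____"
],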
[
"# save all the DMGs\nwith pd.HDFStore(f'{cluster_col}.ClusterRankedPWDMG.{mc_type}.hdf') as hdf:\n for cluster, dmgs in cluster_dmgs.items():\n hdf[cluster] = dmgs[dmgs > 0.0001]",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
e76fd7de99fa543ea627a5c695bfc8951e7cb8f6 | 43,380 | ipynb | Jupyter Notebook | notebook/introduction/an_introduction_to_julia.ipynb | mthelm85/JuMPTutorials.jl | b2285a74d05dc0c0e99df2ac999277c34e670afd | [
"MIT"
] | null | null | null | notebook/introduction/an_introduction_to_julia.ipynb | mthelm85/JuMPTutorials.jl | b2285a74d05dc0c0e99df2ac999277c34e670afd | [
"MIT"
] | 1 | 2020-08-04T18:36:58.000Z | 2020-08-04T18:36:58.000Z | notebook/introduction/an_introduction_to_julia.ipynb | mthelm85/JuMPTutorials.jl | b2285a74d05dc0c0e99df2ac999277c34e670afd | [
"MIT"
] | null | null | null | 25.427902 | 3,789 | 0.542716 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e76fdc8501498ca7d334ffa503e6882ab479653a | 13,014 | ipynb | Jupyter Notebook | Credit Card Fraud Detection.ipynb | mouhamadibrahim/Credit-Card-Fraud-Detection | 1e3c91a79e8a5a38b43e9a8e3c013ac92d2a0f0e | [
"MIT"
] | null | null | null | Credit Card Fraud Detection.ipynb | mouhamadibrahim/Credit-Card-Fraud-Detection | 1e3c91a79e8a5a38b43e9a8e3c013ac92d2a0f0e | [
"MIT"
] | null | null | null | Credit Card Fraud Detection.ipynb | mouhamadibrahim/Credit-Card-Fraud-Detection | 1e3c91a79e8a5a38b43e9a8e3c013ac92d2a0f0e | [
"MIT"
] | null | null | null | 24.370787 | 263 | 0.575841 | [
[
[
"# Import libraries",
"_____no_output_____"
]
],
[
[
"import os \nimport warnings\nwarnings.filterwarnings('ignore')\n#Packages related to data importing, manipulation, exploratory data #analysis, data understanding\nimport numpy as np\nimport pandas as pd\nfrom pandas import Series, DataFrame\nfrom termcolor import colored as cl # text customization\n#Packages related to data visualizaiton\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\n#Setting plot sizes and type of plot\nplt.rc(\"font\", size=14)\nplt.rcParams['axes.grid'] = True\nplt.figure(figsize=(6,3))\nplt.gray()\nfrom matplotlib.backends.backend_pdf import PdfPages\nfrom sklearn.model_selection import train_test_split, GridSearchCV\nfrom sklearn import metrics\nfrom sklearn.impute import MissingIndicator, SimpleImputer\nfrom sklearn.preprocessing import PolynomialFeatures, KBinsDiscretizer, FunctionTransformer\nfrom sklearn.preprocessing import StandardScaler, MinMaxScaler, MaxAbsScaler\nfrom sklearn.preprocessing import LabelEncoder, OneHotEncoder, LabelBinarizer, OrdinalEncoder\nimport statsmodels.formula.api as smf\nimport statsmodels.tsa as tsa\nfrom sklearn.linear_model import LogisticRegression, LinearRegression, ElasticNet, Lasso, Ridge\nfrom sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor\nfrom sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor, export_graphviz, export\nfrom sklearn.ensemble import BaggingClassifier, BaggingRegressor,RandomForestClassifier,RandomForestRegressor\nfrom sklearn.ensemble import GradientBoostingClassifier,GradientBoostingRegressor, AdaBoostClassifier, AdaBoostRegressor \nfrom sklearn.svm import LinearSVC, LinearSVR, SVC, SVR\nfrom xgboost import XGBClassifier\nfrom sklearn.metrics import f1_score\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.metrics import confusion_matrix",
"_____no_output_____"
]
],
[
[
"# Importing data",
"_____no_output_____"
],
[
"This dataset contains the real bank transactions made by European cardholders in the year 2013, the dataset can be downlaoded here: https://www.kaggle.com/mlg-ulb/creditcardfraud",
"_____no_output_____"
]
],
[
[
"data=pd.read_csv(\"creditcard.csv\")",
"_____no_output_____"
]
],
[
[
"# Checking transactions",
"_____no_output_____"
],
[
"we can see that only 17% are fraud transactions",
"_____no_output_____"
]
],
[
[
"Total_transactions = len(data)\nnormal = len(data[data.Class == 0])\nfraudulent = len(data[data.Class == 1])\nfraud_percentage = round(fraudulent/normal*100, 2)\nprint(cl('Total number of Trnsactions are {}'.format(Total_transactions), attrs = ['bold']))\nprint(cl('Number of Normal Transactions are {}'.format(normal), attrs = ['bold']))\nprint(cl('Number of fraudulent Transactions are {}'.format(fraudulent), attrs = ['bold']))\nprint(cl('Percentage of fraud Transactions is {}'.format(fraud_percentage), attrs = ['bold']))",
"\u001b[1mTotal number of Trnsactions are 284807\u001b[0m\n\u001b[1mNumber of Normal Transactions are 284315\u001b[0m\n\u001b[1mNumber of fraudulent Transactions are 492\u001b[0m\n\u001b[1mPercentage of fraud Transactions is 0.17\u001b[0m\n"
]
],
[
[
"# Feature Scaling",
"_____no_output_____"
]
],
[
[
"sc = StandardScaler()\namount = data['Amount'].values\ndata['Amount'] = sc.fit_transform(amount.reshape(-1, 1))",
"_____no_output_____"
]
],
[
[
"# Dropping columns and other features",
"_____no_output_____"
]
],
[
[
"data.drop(['Time'], axis=1, inplace=True)",
"_____no_output_____"
],
[
"data.drop_duplicates(inplace=True)",
"_____no_output_____"
],
[
"X = data.drop('Class', axis = 1).values\ny = data['Class'].values",
"_____no_output_____"
]
],
[
[
"# Training the model",
"_____no_output_____"
]
],
[
[
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 1)",
"_____no_output_____"
]
],
[
[
"# Decision Trees ",
"_____no_output_____"
]
],
[
[
"DT = DecisionTreeClassifier(max_depth = 4, criterion = 'entropy')\nDT.fit(X_train, y_train)\ndt_yhat = DT.predict(X_test)",
"_____no_output_____"
],
[
"print('Accuracy score of the Decision Tree model is {}'.format(accuracy_score(y_test, dt_yhat)))",
"Accuracy score of the Decision Tree model is 0.9991583957281328\n"
],
[
"print('F1 score of the Decision Tree model is {}'.format(f1_score(y_test, dt_yhat)))",
"F1 score of the Decision Tree model is 0.7521367521367521\n"
]
],
[
[
"# K nearest neighbor",
"_____no_output_____"
]
],
[
[
"n = 7\nKNN = KNeighborsClassifier(n_neighbors = n)\nKNN.fit(X_train, y_train)\nknn_yhat = KNN.predict(X_test)\n\nprint('Accuracy score of the K-Nearest Neighbors model is {}'.format(accuracy_score(y_test, knn_yhat)))\n\nprint('F1 score of the K-Nearest Neighbors model is {}'.format(f1_score(y_test, knn_yhat)))",
"Accuracy score of the K-Nearest Neighbors model is 0.999288989494457\nF1 score of the K-Nearest Neighbors model is 0.7949790794979079\n"
]
],
[
[
"# Logistic Regression",
"_____no_output_____"
]
],
[
[
"lr = LogisticRegression()\nlr.fit(X_train, y_train)\nlr_yhat = lr.predict(X_test)",
"_____no_output_____"
],
[
"print('Accuracy score of the Logistic Regression model is {}'.format(accuracy_score(y_test, lr_yhat)))",
"Accuracy score of the Logistic Regression model is 0.9989552498694062\n"
],
[
"print('F1 score of the Logistic Regression model is {}'.format(f1_score(y_test, lr_yhat)))",
"F1 score of the Logistic Regression model is 0.6666666666666666\n"
]
],
[
[
"# SVM classifier",
"_____no_output_____"
]
],
[
[
"svm = SVC()\nsvm.fit(X_train, y_train)\nsvm_yhat = svm.predict(X_test)",
"_____no_output_____"
],
[
"print('Accuracy score of the Support Vector Machines model is {}'.format(accuracy_score(y_test, svm_yhat)))",
"Accuracy score of the Support Vector Machines model is 0.999318010331418\n"
],
[
"print('F1 score of the Support Vector Machines model is {}'.format(f1_score(y_test, svm_yhat)))",
"F1 score of the Support Vector Machines model is 0.7813953488372093\n"
]
],
[
[
"# Random Forest",
"_____no_output_____"
]
],
[
[
"rf = RandomForestClassifier(max_depth = 4)\nrf.fit(X_train, y_train)\nrf_yhat = rf.predict(X_test)",
"_____no_output_____"
],
[
"print('Accuracy score of the Random Forest model is {}'.format(accuracy_score(y_test, rf_yhat)))",
"Accuracy score of the Random Forest model is 0.9991729061466132\n"
],
[
"print('F1 score of the Random Forest model is {}'.format(f1_score(y_test, rf_yhat)))",
"F1 score of the Random Forest model is 0.7397260273972602\n"
]
],
[
[
"# XGBClassifier",
"_____no_output_____"
]
],
[
[
"xgb = XGBClassifier(max_depth = 4)\nxgb.fit(X_train, y_train)\nxgb_yhat = xgb.predict(X_test)",
"[13:29:31] WARNING: ..\\src\\learner.cc:1115: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.\n"
],
[
"print('Accuracy score of the XGBoost model is {}'.format(accuracy_score(y_test, xgb_yhat)))",
"Accuracy score of the XGBoost model is 0.999506645771664\n"
],
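[
"# with only ~0.17% of transactions fraudulent, accuracy alone says little,\n# so also inspect the raw confusion matrix of the XGBoost predictions\n# (illustrative check reusing y_test and xgb_yhat from the cells above)\ncm_xgb = confusion_matrix(y_test, xgb_yhat)\ntn, fp, fn, tp = cm_xgb.ravel()",
"_____no_output_____"
],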
[
"print('F1 score of the XGBoost model is {}'.format(f1_score(y_test, xgb_yhat)))",
"F1 score of the XGBoost model is 0.8495575221238937\n"
]
],
[
[
"# Conclusion",
"_____no_output_____"
],
[
"we can conclude that the XGBClassifier is the perfect algorithm to use here with an accuracy of 99.95% and an F1 score of 84.95% ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e76fe15a983886001acb4dc09c29dc99b5476d6f | 18,596 | ipynb | Jupyter Notebook | tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb | iamvarol/spark-nlp-workshop | 73a9064bd47d4dc0692f0297748eb43cd094aabd | [
"Apache-2.0"
] | null | null | null | tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb | iamvarol/spark-nlp-workshop | 73a9064bd47d4dc0692f0297748eb43cd094aabd | [
"Apache-2.0"
] | null | null | null | tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb | iamvarol/spark-nlp-workshop | 73a9064bd47d4dc0692f0297748eb43cd094aabd | [
"Apache-2.0"
] | null | null | null | 18,596 | 18,596 | 0.684717 | [
[
[
"",
"_____no_output_____"
],
[
"# Serving Spark NLP with API: Synapse ML",
"_____no_output_____"
],
[
"# SynapseML",
"_____no_output_____"
],
[
"## Installation",
"_____no_output_____"
]
],
[
[
"import json\nimport os\nfrom google.colab import files\n\nlicense_keys = files.upload()\n\nwith open(list(license_keys.keys())[0]) as f:\n license_keys = json.load(f)\n\n# Defining license key-value pairs as local variables\nlocals().update(license_keys)\n\n# Adding license key-value pairs to environment variables\nos.environ.update(license_keys)",
"_____no_output_____"
],
[
"# Installing pyspark and spark-nlp\n! pip install --upgrade -q pyspark==3.2.0 spark-nlp==$PUBLIC_VERSION\n\n# Installing Spark NLP Healthcare\n! pip install --upgrade -q spark-nlp-jsl==$JSL_VERSION --extra-index-url https://pypi.johnsnowlabs.com/$SECRET\n\n! pip -q install requests",
"_____no_output_____"
]
],
[
[
"\n## Imports and Spark Session",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport pyspark\nimport sparknlp\nimport sparknlp_jsl\nfrom pyspark.sql import SparkSession\nfrom pyspark.ml import Pipeline, PipelineModel\nimport pyspark.sql.functions as F\nfrom pyspark.sql.types import *\nfrom sparknlp.base import *\nfrom sparknlp.annotator import *\nfrom sparknlp_jsl.annotator import *\nfrom sparknlp.training import *\nfrom sparknlp.training import CoNLL\nimport time\nimport requests\nimport uuid\nimport json\nimport requests\nfrom concurrent.futures import ThreadPoolExecutor\n",
"_____no_output_____"
],
[
"spark = SparkSession.builder \\\n .appName(\"Spark\") \\\n .master(\"local[*]\") \\\n .config(\"spark.driver.memory\", \"16G\") \\\n .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\") \\\n .config(\"spark.kryoserializer.buffer.max\", \"2000M\") \\\n .config(\"spark.jars.packages\", \"com.microsoft.azure:synapseml_2.12:0.9.5,com.johnsnowlabs.nlp:spark-nlp-spark32_2.12:\"+PUBLIC_VERSION)\\\n .config(\"spark.jars\", \"https://pypi.johnsnowlabs.com/\"+SECRET+\"/spark-nlp-jsl-\"+JSL_VERSION+\"-spark32.jar\")\\\n .config(\"spark.jars.repositories\", \"https://mmlspark.azureedge.net/maven\")\\\n .getOrCreate()",
"_____no_output_____"
],
[
"print(sparknlp.version())\nprint(sparknlp_jsl.version())",
"3.4.2\n3.5.0\n"
],
[
"spark",
"_____no_output_____"
],
[
"import synapse.ml\nfrom synapse.ml.io import *",
"_____no_output_____"
]
],
[
[
"## Preparing a pipeline with Entity Resolution",
"_____no_output_____"
]
],
[
[
"# Annotator that transforms a text column from dataframe into an Annotation ready for NLP\ndocument_assembler = DocumentAssembler()\\\n .setInputCol(\"text\")\\\n .setOutputCol(\"document\")\n\n# Sentence Detector DL annotator, processes various sentences per line\nsentenceDetectorDL = SentenceDetectorDLModel.pretrained(\"sentence_detector_dl_healthcare\", \"en\", 'clinical/models') \\\n .setInputCols([\"document\"]) \\\n .setOutputCol(\"sentence\")\n\n# Tokenizer splits words in a relevant format for NLP\ntokenizer = Tokenizer()\\\n .setInputCols([\"sentence\"])\\\n .setOutputCol(\"token\")\n\n# WordEmbeddingsModel pretrained \"embeddings_clinical\" includes a model of 1.7Gb that needs to be downloaded\nword_embeddings = WordEmbeddingsModel.pretrained(\"embeddings_clinical\", \"en\", \"clinical/models\")\\\n .setInputCols([\"sentence\", \"token\"])\\\n .setOutputCol(\"word_embeddings\")\n\n# Named Entity Recognition for clinical concepts.\nclinical_ner = MedicalNerModel.pretrained(\"ner_clinical\", \"en\", \"clinical/models\") \\\n .setInputCols([\"sentence\", \"token\", \"word_embeddings\"]) \\\n .setOutputCol(\"ner\")\n\nner_converter_icd = NerConverterInternal() \\\n .setInputCols([\"sentence\", \"token\", \"ner\"]) \\\n .setOutputCol(\"ner_chunk\")\\\n .setWhiteList(['PROBLEM'])\\\n .setPreservePosition(False)\n\nc2doc = Chunk2Doc()\\\n .setInputCols(\"ner_chunk\")\\\n .setOutputCol(\"ner_chunk_doc\") \n\nsbert_embedder = BertSentenceEmbeddings.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\\\n .setInputCols([\"ner_chunk_doc\"])\\\n .setOutputCol(\"sentence_embeddings\")\\\n .setCaseSensitive(False)\n \nicd_resolver = SentenceEntityResolverModel.pretrained(\"sbiobertresolve_icd10cm_augmented_billable_hcc\",\"en\", \"clinical/models\") \\\n .setInputCols([\"ner_chunk\", \"sentence_embeddings\"]) \\\n .setOutputCol(\"icd10cm_code\")\\\n .setDistanceFunction(\"EUCLIDEAN\")\n \n\n# Build up the pipeline\nresolver_pipeline = Pipeline(\n stages = [\n document_assembler,\n sentenceDetectorDL,\n tokenizer,\n word_embeddings,\n clinical_ner,\n ner_converter_icd,\n c2doc,\n sbert_embedder,\n icd_resolver\n ])\n\n\nempty_data = spark.createDataFrame([['']]).toDF(\"text\")\n\nresolver_p_model = resolver_pipeline.fit(empty_data)",
"sentence_detector_dl_healthcare download started this may take some time.\nApproximate size to download 367.3 KB\n[OK!]\nembeddings_clinical download started this may take some time.\nApproximate size to download 1.6 GB\n[OK!]\nner_clinical download started this may take some time.\nApproximate size to download 13.9 MB\n[OK!]\nsbiobert_base_cased_mli download started this may take some time.\nApproximate size to download 384.3 MB\n[OK!]\nsbiobertresolve_icd10cm_augmented_billable_hcc download started this may take some time.\nApproximate size to download 1.1 GB\n[OK!]\n"
]
],
[
[
"## Adding a clinical note as a text example",
"_____no_output_____"
]
],
[
[
"clinical_note = \"\"\"A 28-year-old female with a history of gestational diabetes mellitus diagnosed eight years \n prior to presentation and subsequent type two diabetes mellitus (T2DM), one prior \n episode of HTG-induced pancreatitis three years prior to presentation, associated \n with an acute hepatitis, and obesity with a body mass index (BMI) of 33.5 kg/m2, \n presented with a one-week history of polyuria, polydipsia, poor appetite, and vomiting. \n Two weeks prior to presentation, she was treated with a five-day course of amoxicillin \n for a respiratory tract infection. She was on metformin, glipizide, and dapagliflozin \n for T2DM and atorvastatin and gemfibrozil for HTG. She had been on dapagliflozin for six months \n at the time of presentation. Physical examination on presentation was significant for dry oral mucosa; \n significantly, her abdominal examination was benign with no tenderness, guarding, or rigidity. Pertinent \n laboratory findings on admission were: serum glucose 111 mg/dl, bicarbonate 18 mmol/l, anion gap 20, \n creatinine 0.4 mg/dL, triglycerides 508 mg/dL, total cholesterol 122 mg/dL, glycated hemoglobin (HbA1c) \n 10%, and venous pH 7.27. Serum lipase was normal at 43 U/L. Serum acetone levels could not be assessed \n as blood samples kept hemolyzing due to significant lipemia. The patient was initially admitted for \n starvation ketosis, as she reported poor oral intake for three days prior to admission. However, \n serum chemistry obtained six hours after presentation revealed her glucose was 186 mg/dL, the anion gap \n was still elevated at 21, serum bicarbonate was 16 mmol/L, triglyceride level peaked at 2050 mg/dL, and \n lipase was 52 U/L. The β-hydroxybutyrate level was obtained and found to be elevated at 5.29 mmol/L - \n the original sample was centrifuged and the chylomicron layer removed prior to analysis due to \n interference from turbidity caused by lipemia again. The patient was treated with an insulin drip \n for euDKA and HTG with a reduction in the anion gap to 13 and triglycerides to 1400 mg/dL, within \n 24 hours. Her euDKA was thought to be precipitated by her respiratory tract infection in the setting \n of SGLT2 inhibitor use. The patient was seen by the endocrinology service and she was discharged on \n 40 units of insulin glargine at night, 12 units of insulin lispro with meals, and metformin 1000 mg \n two times a day. It was determined that all SGLT2 inhibitors should be discontinued indefinitely. She \n had close follow-up with endocrinology post discharge.\"\"\"\n\n\ndata = spark.createDataFrame([[clinical_note]]).toDF(\"text\")",
"_____no_output_____"
]
],
[
[
"## Creating a JSON file with the clinical note\nSince SynapseML runs a webservice that accepts HTTP calls with json format",
"_____no_output_____"
]
],
[
[
"data_json = {\"text\": clinical_note }",
"_____no_output_____"
]
],
[
[
"## Running a Synapse server",
"_____no_output_____"
]
],
[
[
"serving_input = spark.readStream.server() \\\n .address(\"localhost\", 9999, \"benchmark_api\") \\\n .option(\"name\", \"benchmark_api\") \\\n .load() \\\n .parseRequest(\"benchmark_api\", data.schema)\n\nserving_output = resolver_p_model.transform(serving_input) \\\n .makeReply(\"icd10cm_code\")\n\nserver = serving_output.writeStream \\\n .server() \\\n .replyTo(\"benchmark_api\") \\\n .queryName(\"benchmark_query\") \\\n .option(\"checkpointLocation\", \"file:///tmp/checkpoints-{}\".format(uuid.uuid1())) \\\n .start()",
"/usr/local/lib/python3.7/dist-packages/pyspark/sql/context.py:127: FutureWarning: Deprecated in 3.0.0. Use SparkSession.builder.getOrCreate() instead.\n FutureWarning\n"
],
[
"def post_url(args):\n print(f\"- Request {str(args[2])} launched!\")\n res = requests.post(args[0], data=args[1]) \n print(f\"**Response {str(args[2])} received**\")\n return res\n\n# If you want to send parallel calls, just add more tuples to list_of_urls array\n# tuple: (URL from above, json, number_of_call)\nlist_of_urls = [(\"http://localhost:9999/benchmark_api\",json.dumps(data_json), 0)]\n\nwith ThreadPoolExecutor() as pool:\n response_list = list(pool.map(post_url,list_of_urls))",
"_____no_output_____"
]
],
[
[
"## Checking Results",
"_____no_output_____"
]
],
[
[
"for i in range (0, len(response_list[0].json())):\n print(response_list[0].json()[i]['result'])",
"O2441\nO2411\nE11\nK8520\nB15\nE669\nZ6841\nR35\nR631\nR630\nR111\nJ988\nE11\nG600\nK130\nR52\nM6283\nR4689\nO046\nE785\nE872\nE639\nH5330\nR799\nR829\nE785\nA832\nG600\nJ988\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e76ffa585ebedf94852388494d12e76e8657a9a7 | 11,221 | ipynb | Jupyter Notebook | .ipynb_checkpoints/5.1-introduction-to-convnets-checkpoint.ipynb | zhangdongwl/deep-learning-with-python-notebooks | 4bbdc674abe23a85b6d6610289c37b5f00bfd8ad | [
"MIT"
] | null | null | null | .ipynb_checkpoints/5.1-introduction-to-convnets-checkpoint.ipynb | zhangdongwl/deep-learning-with-python-notebooks | 4bbdc674abe23a85b6d6610289c37b5f00bfd8ad | [
"MIT"
] | null | null | null | .ipynb_checkpoints/5.1-introduction-to-convnets-checkpoint.ipynb | zhangdongwl/deep-learning-with-python-notebooks | 4bbdc674abe23a85b6d6610289c37b5f00bfd8ad | [
"MIT"
] | null | null | null | 33.900302 | 367 | 0.560645 | [
[
[
"import keras\nkeras.__version__",
"Using TensorFlow backend.\n"
]
],
[
[
"# 5.1 - Introduction to convnets\n\nThis notebook contains the code sample found in Chapter 5, Section 1 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.\n\n----\n\nFirst, let's take a practical look at a very simple convnet example. We will use our convnet to classify MNIST digits, a task that you've already been \nthrough in Chapter 2, using a densely-connected network (our test accuracy then was 97.8%). Even though our convnet will be very basic, its \naccuracy will still blow out of the water that of the densely-connected model from Chapter 2.\n\nThe 6 lines of code below show you what a basic convnet looks like. It's a stack of `Conv2D` and `MaxPooling2D` layers. We'll see in a \nminute what they do concretely.\nImportantly, a convnet takes as input tensors of shape `(image_height, image_width, image_channels)` (not including the batch dimension). \nIn our case, we will configure our convnet to process inputs of size `(28, 28, 1)`, which is the format of MNIST images. We do this via \npassing the argument `input_shape=(28, 28, 1)` to our first layer.",
"_____no_output_____"
]
],
[
[
"from keras import layers\nfrom keras import models\n\nmodel = models.Sequential()\nmodel.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))\nmodel.add(layers.MaxPooling2D((2, 2)))\nmodel.add(layers.Conv2D(64, (3, 3), activation='relu'))\nmodel.add(layers.MaxPooling2D((2, 2)))\nmodel.add(layers.Conv2D(64, (3, 3), activation='relu'))",
"_____no_output_____"
]
],
[
[
"Let's display the architecture of our convnet so far:",
"_____no_output_____"
]
],
[
[
"model.summary()",
"Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_1 (Conv2D) (None, 26, 26, 32) 320 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 13, 13, 32) 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 11, 11, 64) 18496 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 5, 5, 64) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 3, 3, 64) 36928 \n=================================================================\nTotal params: 55,744\nTrainable params: 55,744\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"You can see above that the output of every `Conv2D` and `MaxPooling2D` layer is a 3D tensor of shape `(height, width, channels)`. The width \nand height dimensions tend to shrink as we go deeper in the network. The number of channels is controlled by the first argument passed to \nthe `Conv2D` layers (e.g. 32 or 64).\n\nThe next step would be to feed our last output tensor (of shape `(3, 3, 64)`) into a densely-connected classifier network like those you are \nalready familiar with: a stack of `Dense` layers. These classifiers process vectors, which are 1D, whereas our current output is a 3D tensor. \nSo first, we will have to flatten our 3D outputs to 1D, and then add a few `Dense` layers on top:",
"_____no_output_____"
]
],
[
[
"model.add(layers.Flatten())\nmodel.add(layers.Dense(64, activation='relu'))\nmodel.add(layers.Dense(10, activation='softmax'))",
"_____no_output_____"
]
],
[
[
"We are going to do 10-way classification, so we use a final layer with 10 outputs and a softmax activation. Now here's what our network \nlooks like:",
"_____no_output_____"
]
],
[
[
"model.summary()",
"Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_1 (Conv2D) (None, 26, 26, 32) 320 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 13, 13, 32) 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 11, 11, 64) 18496 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 5, 5, 64) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 3, 3, 64) 36928 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 576) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 64) 36928 \n_________________________________________________________________\ndense_2 (Dense) (None, 10) 650 \n=================================================================\nTotal params: 93,322\nTrainable params: 93,322\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"As you can see, our `(3, 3, 64)` outputs were flattened into vectors of shape `(576,)`, before going through two `Dense` layers.\n\nNow, let's train our convnet on the MNIST digits. We will reuse a lot of the code we have already covered in the MNIST example from Chapter \n2.",
"_____no_output_____"
]
],
[
[
"from keras.datasets import mnist\nfrom keras.utils import to_categorical\n\n(train_images, train_labels), (test_images, test_labels) = mnist.load_data()\n\ntrain_images = train_images.reshape((60000, 28, 28, 1))\ntrain_images = train_images.astype('float32') / 255\n\ntest_images = test_images.reshape((10000, 28, 28, 1))\ntest_images = test_images.astype('float32') / 255\n\ntrain_labels = to_categorical(train_labels)\ntest_labels = to_categorical(test_labels)",
"_____no_output_____"
],
[
"model.compile(optimizer='rmsprop',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\nmodel.fit(train_images, train_labels, epochs=5, batch_size=64)",
"Epoch 1/5\n60000/60000 [==============================] - 10s 173us/step - loss: 0.1661 - accuracy: 0.9479\nEpoch 2/5\n60000/60000 [==============================] - 10s 165us/step - loss: 0.0454 - accuracy: 0.9857\nEpoch 3/5\n60000/60000 [==============================] - 10s 164us/step - loss: 0.0314 - accuracy: 0.9900\nEpoch 4/5\n60000/60000 [==============================] - 10s 161us/step - loss: 0.0237 - accuracy: 0.9927\nEpoch 5/5\n60000/60000 [==============================] - 10s 163us/step - loss: 0.0189 - accuracy: 0.9940\n"
]
],
[
[
"Let's evaluate the model on the test data:",
"_____no_output_____"
]
],
[
[
"test_loss, test_acc = model.evaluate(test_images, test_labels)",
"10000/10000 [==============================] - 1s 103us/step\n"
],
[
"test_acc",
"_____no_output_____"
]
],
[
[
"While our densely-connected network from Chapter 2 had a test accuracy of 97.8%, our basic convnet has a test accuracy of 99.3%: we \ndecreased our error rate by 68% (relative). Not bad! ",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
e76ffd08fbb17050610a0ced415a3ae8a3586f0e | 735,912 | ipynb | Jupyter Notebook | examples/CNN Bagging.ipynb | sarahalamdari/DIRECT_capstone | 3963730da50f30ef274622833948b6f84dd27942 | [
"MIT"
] | 1 | 2018-06-21T16:59:01.000Z | 2018-06-21T16:59:01.000Z | examples/CNN Bagging.ipynb | sarahalamdari/DIRECT_capstone | 3963730da50f30ef274622833948b6f84dd27942 | [
"MIT"
] | 10 | 2018-06-21T16:57:27.000Z | 2022-02-09T23:41:53.000Z | examples/CNN Bagging.ipynb | sarahalamdari/DIRECT_capstone | 3963730da50f30ef274622833948b6f84dd27942 | [
"MIT"
] | 3 | 2018-05-10T01:00:31.000Z | 2018-05-12T19:43:49.000Z | 71.929626 | 35,992 | 0.501483 | [
[
[
"import collections\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport gensim\nfrom gensim.models import Word2Vec\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import LabelEncoder\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import Flatten\nfrom keras.layers.convolutional import Convolution1D\nfrom keras.layers.convolutional import MaxPooling1D\nfrom keras.layers import Dropout, Convolution2D, MaxPooling2D\nfrom keras.layers.embeddings import Embedding\nfrom keras.preprocessing import sequence\nfrom keras.preprocessing.text import Tokenizer\nfrom keras.utils import np_utils\nfrom keras.wrappers.scikit_learn import KerasClassifier\nfrom sklearn.model_selection import KFold\nfrom sklearn.model_selection import cross_val_score\nfrom keras.preprocessing.sequence import skipgrams",
"C:\\Users\\liu0563\\Miniconda3\\envs\\nlp\\lib\\site-packages\\gensim\\utils.py:1197: UserWarning: detected Windows; aliasing chunkize to chunkize_serial\n warnings.warn(\"detected Windows; aliasing chunkize to chunkize_serial\")\nUsing TensorFlow backend.\n"
],
[
"# Upload Data \n\ndata = pd.read_csv(\"../wyns/data/tweet_global_warming.csv\", encoding=\"latin\")\nprint(\"Full dataset: {}\".format(data.shape[0]))\ndata['existence'].fillna(value='Ambiguous', inplace = True) #replace NA's in existence with \"ambiguous\"\ndata['existence'].replace(('Y', 'N'), ('Yes', 'No'), inplace=True) #rename so encoder doesnt get confused\ndata = data.dropna() #now drop NA values\nprint(\"dataset without NaN: {}\".format(data.shape[0]))\n\ndata['existence'][10:20]",
"Full dataset: 6090\ndataset without NaN: 6087\n"
],
[
"def read_data(data_file):\n for i, line in enumerate (data_file): \n # do some pre-processing and return a list of words for each review text\n yield gensim.utils.simple_preprocess (line)\n\ntweet_vocab = list(read_data(data['tweet']))",
"_____no_output_____"
],
[
"X = data.iloc[:,0] #store tweets in X \n\nlabels = data.iloc[:,1]\nconfidence_interval = data.iloc[:,2]\n\n# encode class as integers \nencoder = LabelEncoder()\nencoder.fit(labels)\nencoded_Y = encoder.transform(labels) \n\n# convert integers to one hot encoded\nY_one_hot = np_utils.to_categorical(encoded_Y)\n\n\n# multiply one-hot by confidence intervals\nY=[]\nfor i, row in enumerate(confidence_interval):\n Y.append(row*Y_one_hot[i])\nY[0:5]\nY = np.array(Y)",
"_____no_output_____"
],
[
"Y[10:20]",
"_____no_output_____"
],
[
"test_split = 0.8\ntrain_size = int(len(X)*test_split)\ntest_size = len(X) - train_size\nvector_size = 300\nwindow_size = 10\nmax_tweet_length=28\n\nindexes = set(np.random.choice(len(tweet_vocab), train_size + test_size, replace=False))\n\nX_train = np.zeros((train_size, max_tweet_length, vector_size))\nY_train = np.zeros((train_size, 3), dtype=np.float32)\nX_test = np.zeros((test_size, max_tweet_length, vector_size))\nY_test = np.zeros((test_size, 3), dtype=np.float32)",
"_____no_output_____"
],
[
"X.shape[0]",
"_____no_output_____"
],
[
"list(inds[:10])",
"_____no_output_____"
],
[
"# create a single array of processed data\nXX = np.zeros((len(X),max_tweet_length, vector_size))\nfor i in range(XX.shape[0]):\n for j, twit in enumerate(tweet_vocab[i]):\n if twit not in X_vecs:\n continue\n XX[i,j,:] = X_vecs[twit]\n# print(XX[:-10,:,:])\n \n# print(XX.shape)\n\n\n\n\ninds = np.arange(XX.shape[0])\nnp.random.shuffle(inds)\n# print(inds)\ntrain = list(inds[:X.shape[0]*3//4])\ntest = list(inds[X.shape[0]*3//4:])\nX_train = XX[train]\nX_train = X_train.reshape(*X_train.shape,1)\nX_test = XX[test]\nX_test = X_test.reshape(*X_test.shape,1)\nY_train = Y[train]\n# Y_train = Y_train.reshape(*Y_train.shape,1)\nY_test = Y[test]\n# Y_test = Y_test.reshape(*Y_test.shape,1)",
"_____no_output_____"
],
[
"google = gensim.models.KeyedVectors.load_word2vec_format('../wyns/data/GoogleNews-vectors-negative300.bin.gz',binary=True)\nvocab = google.vocab.keys()\ntotal_vocab = len(vocab)\nprint (\"Set includes\", total_vocab, \"words\")\nX_vecs = google.wv\ndel google",
"Set includes 3000000 words\n"
],
[
"np.floor(np.random.rand(3)*100)",
"_____no_output_____"
],
[
"X_resample, Y_resample = bootstrap(X_train,Y_train,10)",
"_____no_output_____"
]
],
[
[
"### A short video on how bagging works https://www.youtube.com/watch?v=2Mg8QD0F1dQ ",
"_____no_output_____"
]
],
[
[
"def bootstrap(X,Y, n=None):\n#Bootstrap function\n if n == None:\n n = len(X) \n resample_i = np.floor(np.random.rand(n)*len(X)).astype(int)\n X_resample = X[resample_i]\n Y_resample = Y[resample_i]\n return X_resample, Y_resample",
"_____no_output_____"
],
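[
"# Added illustration (not part of the original notebook): a bootstrap sample is drawn\n# *with replacement*, so a bagged training set repeats some tweets and leaves roughly a\n# third of the originals out (the so-called out-of-bag rows). A quick sketch on toy indices:\ntoy_n = 10\nresample_i = np.floor(np.random.rand(toy_n)*toy_n).astype(int)  # same index trick as bootstrap() above\nprint('resampled indices:', resample_i)\nprint('fraction of originals included: {:.0%}'.format(len(np.unique(resample_i))/toy_n))",
"_____no_output_____"
],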
[
"def bagging(n_sample,n_bag):\n#Perform bagging procedure. Bootstrap and obtain an ensamble of models\n X_resample, Y_resample = bootstrap(X_train,Y_train, n_sample)\n bagModels = {}\n for i in range(n_bag):\n print(\"Model fitting on the {}th bootstrapped set\".format(i+1))\n model = model_fit(X_resample,Y_resample)\n name = \"model%s\" % (i+1)\n bagModels[name] = model\n return bagModels",
"_____no_output_____"
],
[
"def model_fit(X_train,Y_train):\n filters = 32 #filter = 1 x KERNEL \n inpurt_shape = (X_train.shape[1:])\n # create the model \n model = Sequential()\n\n model.add(Convolution2D(16, kernel_size=3, activation='elu', padding='same',\n input_shape=inpurt_shape))\n model.add(MaxPooling2D(pool_size=5))\n model.add(Convolution2D(filters=filters, kernel_size=3, padding='same', activation='relu'))\n model.add(MaxPooling2D(pool_size=5))\n model.add(Flatten())\n model.add(Dense(250, activation='relu'))\n model.add(Dense(250, activation='relu'))\n model.add(Dropout(0.5))\n model.add(Dense(3, activation='linear')) #change from logistic \n model.compile(loss='mse', optimizer='adam', metrics=['accuracy','mse']) \n\n # Fit the model\n model.fit(X_train, \n Y_train, \n epochs=20, \n batch_size=128,\n verbose=1)\n return model",
"_____no_output_____"
],
[
"def predict(bagModels):\n# Model prediction for each bagged model before averaging \n prediction = {}\n for i in bagModels:\n prediction[i] = bagModels[i].predict(X_test)\n return prediction",
"_____no_output_____"
],
[
"def conversion(prediction):\n# Convert confidence values into prediction\n pred_list=[]\n for i in range(len(prediction)):\n index = np.argmax(prediction[i])\n if index == 0:\n pred = 'Ambiguous'\n elif index == 1:\n pred = 'No'\n else:\n pred = 'Yes'\n pred_list.append(pred)\n return pred_list",
"_____no_output_____"
],
[
"def baggedAccuracy(prediction,Y_test):\n#Bagged accuracy calculation based on average confidences \n sum_pred = 0\n for i in prediction:\n sum_pred += prediction[i]\n bagged_prediction = sum_pred/30\n bagged_list = conversion(bagged_prediction)\n Ytest_list = conversion(Y_test)\n correct_pred = sum(1 for i in range(len(bagged_list)) if bagged_list[i] == Ytest_list[i])\n baggedAccuracy = correct_pred/len(bagged_list) * 100\n return baggedAccuracy",
"_____no_output_____"
]
],
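[
[
"# Added illustration (not part of the original notebook): baggedAccuracy() above combines\n# the ensemble by averaging the per-model confidence vectors and then taking the argmax\n# (soft voting). A toy example with two hypothetical models scoring a single tweet:\nmodel_a = np.array([0.2, 0.1, 0.7])  # assumed confidences for Ambiguous / No / Yes\nmodel_b = np.array([0.1, 0.5, 0.4])\navg = (model_a + model_b)/2\nprint('averaged confidences:', avg, '-> predicted class index:', np.argmax(avg))",
"_____no_output_____"
]
],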
[
[
"First, we need to perform the splitting procedure as we did in the CNN notebook to get train and test sets <br/>\nNow lets perform bagging of an ensamble of 50 models with each model containing 3800 bootstrapped samples from X_train",
"_____no_output_____"
]
],
[
[
"bagModel = bagging(3800,50)",
"Model fitting on the 1th bootstrapped set\n(28, 300, 1)\nEpoch 1/20\n3800/3800 [==============================] - 201s 53ms/step - loss: 0.1491 - acc: 0.4853 - mean_squared_error: 0.1491\nEpoch 2/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1416 - acc: 0.5047 - mean_squared_error: 0.1416\nEpoch 3/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1409 - acc: 0.5047 - mean_squared_error: 0.1409\nEpoch 4/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1396 - acc: 0.5053 - mean_squared_error: 0.1396\nEpoch 5/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1378 - acc: 0.5074 - mean_squared_error: 0.1378\nEpoch 6/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1356 - acc: 0.5129 - mean_squared_error: 0.1356\nEpoch 7/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1311 - acc: 0.5321 - mean_squared_error: 0.1311\nEpoch 8/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1246 - acc: 0.5576 - mean_squared_error: 0.1246\nEpoch 9/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1193 - acc: 0.5858 - mean_squared_error: 0.1193\nEpoch 10/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1119 - acc: 0.6232 - mean_squared_error: 0.1119\nEpoch 11/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1054 - acc: 0.6542 - mean_squared_error: 0.1054\nEpoch 12/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0986 - acc: 0.6847 - mean_squared_error: 0.0986\nEpoch 13/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0933 - acc: 0.7100 - mean_squared_error: 0.0933\nEpoch 14/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0880 - acc: 0.7334 - mean_squared_error: 0.0880\nEpoch 15/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0823 - acc: 0.7487 - mean_squared_error: 0.0823\nEpoch 16/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0751 - acc: 0.7805 - mean_squared_error: 0.0751\nEpoch 17/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0728 - acc: 0.7884 - mean_squared_error: 0.0728\nEpoch 18/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0651 - acc: 0.8226 - mean_squared_error: 0.0651\nEpoch 19/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0620 - acc: 0.8303 - mean_squared_error: 0.0620\nEpoch 20/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0601 - acc: 0.8324 - mean_squared_error: 0.0601\nModel fitting on the 2th bootstrapped set\n(28, 300, 1)\nEpoch 1/20\n3800/3800 [==============================] - 25s 7ms/step - loss: 0.1517 - acc: 0.4818 - mean_squared_error: 0.1517\nEpoch 2/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1423 - acc: 0.5061 - mean_squared_error: 0.1423\nEpoch 3/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1407 - acc: 0.5066 - mean_squared_error: 0.1407\nEpoch 4/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1392 - acc: 0.5055 - mean_squared_error: 0.1392\nEpoch 5/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1372 - acc: 0.5055 - mean_squared_error: 0.1372\nEpoch 6/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1339 - acc: 0.5268 - mean_squared_error: 0.1339\nEpoch 7/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1292 - acc: 
0.5432 - mean_squared_error: 0.1292\nEpoch 8/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1193 - acc: 0.5868 - mean_squared_error: 0.1193\nEpoch 9/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1117 - acc: 0.6300 - mean_squared_error: 0.1117\nEpoch 10/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1064 - acc: 0.6466 - mean_squared_error: 0.1064\nEpoch 11/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0987 - acc: 0.6879 - mean_squared_error: 0.0987\nEpoch 12/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0918 - acc: 0.7079 - mean_squared_error: 0.0918\nEpoch 13/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0828 - acc: 0.7437 - mean_squared_error: 0.0828\nEpoch 14/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0808 - acc: 0.7574 - mean_squared_error: 0.0808\nEpoch 15/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0724 - acc: 0.7929 - mean_squared_error: 0.0724\nEpoch 16/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0687 - acc: 0.8105 - mean_squared_error: 0.0687\nEpoch 17/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0638 - acc: 0.8203 - mean_squared_error: 0.0638\nEpoch 18/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0607 - acc: 0.8408 - mean_squared_error: 0.0607\nEpoch 19/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0546 - acc: 0.8571 - mean_squared_error: 0.0546\nEpoch 20/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0513 - acc: 0.8700 - mean_squared_error: 0.0513\nModel fitting on the 3th bootstrapped set\n(28, 300, 1)\nEpoch 1/20\n3800/3800 [==============================] - 13s 3ms/step - loss: 0.1494 - acc: 0.4926 - mean_squared_error: 0.1494\nEpoch 2/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1415 - acc: 0.5016 - mean_squared_error: 0.1415\nEpoch 3/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1410 - acc: 0.5053 - mean_squared_error: 0.1410\nEpoch 4/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1398 - acc: 0.5053 - mean_squared_error: 0.1398\nEpoch 5/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1370 - acc: 0.5087 - mean_squared_error: 0.1370\nEpoch 6/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1329 - acc: 0.5200 - mean_squared_error: 0.1329\nEpoch 7/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1268 - acc: 0.5476 - mean_squared_error: 0.1268\nEpoch 8/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1181 - acc: 0.5987 - mean_squared_error: 0.1181\nEpoch 9/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1091 - acc: 0.6458 - mean_squared_error: 0.1091\nEpoch 10/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1000 - acc: 0.6874 - mean_squared_error: 0.1000\nEpoch 11/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0919 - acc: 0.7197 - mean_squared_error: 0.0919\nEpoch 12/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0849 - acc: 0.7408 - mean_squared_error: 0.0849\nEpoch 13/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0792 - acc: 0.7687 - mean_squared_error: 0.0792\nEpoch 14/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0742 - acc: 0.7842 - 
mean_squared_error: 0.0742\nEpoch 15/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0671 - acc: 0.8084 - mean_squared_error: 0.0671\nEpoch 16/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0623 - acc: 0.8200 - mean_squared_error: 0.0623\nEpoch 17/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0566 - acc: 0.8471 - mean_squared_error: 0.0566\nEpoch 18/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0554 - acc: 0.8553 - mean_squared_error: 0.0554\nEpoch 19/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0494 - acc: 0.8729 - mean_squared_error: 0.0494\nEpoch 20/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0464 - acc: 0.8808 - mean_squared_error: 0.0464\nModel fitting on the 4th bootstrapped set\n(28, 300, 1)\nEpoch 1/20\n3800/3800 [==============================] - 14s 4ms/step - loss: 0.1487 - acc: 0.4908 - mean_squared_error: 0.1487\nEpoch 2/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1418 - acc: 0.5050 - mean_squared_error: 0.1418\nEpoch 3/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1407 - acc: 0.5058 - mean_squared_error: 0.1407\n"
]
],
[
[
"### Bagging different numbers of models in an ensamble to test accuracy change",
"_____no_output_____"
]
],
[
[
"bagModel2 = bagging(3800,10)",
"Model fitting on the 1th bootstrapped set\n(28, 300, 1)\nEpoch 1/20\n3800/3800 [==============================] - 19s 5ms/step - loss: 0.1500 - acc: 0.4782 - mean_squared_error: 0.1500\nEpoch 2/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1414 - acc: 0.4976 - mean_squared_error: 0.1414\nEpoch 3/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1398 - acc: 0.4984 - mean_squared_error: 0.1398\nEpoch 4/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1378 - acc: 0.4989 - mean_squared_error: 0.1378\nEpoch 5/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1353 - acc: 0.5034 - mean_squared_error: 0.1353\nEpoch 6/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1308 - acc: 0.5176 - mean_squared_error: 0.1308\nEpoch 7/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1233 - acc: 0.5561 - mean_squared_error: 0.1233\nEpoch 8/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1155 - acc: 0.6061 - mean_squared_error: 0.1155\nEpoch 9/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1075 - acc: 0.6371 - mean_squared_error: 0.1075\nEpoch 10/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0992 - acc: 0.6739 - mean_squared_error: 0.0992\nEpoch 11/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0904 - acc: 0.7139 - mean_squared_error: 0.0904\nEpoch 12/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0854 - acc: 0.7339 - mean_squared_error: 0.0854\nEpoch 13/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0781 - acc: 0.7584 - mean_squared_error: 0.0781\nEpoch 14/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0706 - acc: 0.7918 - mean_squared_error: 0.0706\nEpoch 15/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0638 - acc: 0.8161 - mean_squared_error: 0.0638\nEpoch 16/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0616 - acc: 0.8234 - mean_squared_error: 0.0616\nEpoch 17/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0569 - acc: 0.8437 - mean_squared_error: 0.0569\nEpoch 18/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0549 - acc: 0.8495 - mean_squared_error: 0.0549\nEpoch 19/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0491 - acc: 0.8713 - mean_squared_error: 0.0491\nEpoch 20/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0463 - acc: 0.8782 - mean_squared_error: 0.0463\nModel fitting on the 2th bootstrapped set\n(28, 300, 1)\nEpoch 1/20\n3800/3800 [==============================] - 18s 5ms/step - loss: 0.1502 - acc: 0.4834 - mean_squared_error: 0.1502\nEpoch 2/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1420 - acc: 0.4966 - mean_squared_error: 0.1420\nEpoch 3/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1410 - acc: 0.4976 - mean_squared_error: 0.1410\nEpoch 4/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1394 - acc: 0.4979 - mean_squared_error: 0.1394\nEpoch 5/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1386 - acc: 0.5000 - mean_squared_error: 0.1386\nEpoch 6/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1382 - acc: 0.4971 - mean_squared_error: 0.1382\nEpoch 7/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1380 - acc: 0.5005 
- mean_squared_error: 0.1380\nEpoch 8/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1358 - acc: 0.5055 - mean_squared_error: 0.1358\nEpoch 9/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1323 - acc: 0.5100 - mean_squared_error: 0.1323\nEpoch 10/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1281 - acc: 0.5371 - mean_squared_error: 0.1281\nEpoch 11/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1208 - acc: 0.5734 - mean_squared_error: 0.1208\nEpoch 12/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1122 - acc: 0.6163 - mean_squared_error: 0.1122\nEpoch 13/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1044 - acc: 0.6442 - mean_squared_error: 0.1044\nEpoch 14/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0972 - acc: 0.6800 - mean_squared_error: 0.0972\nEpoch 15/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0880 - acc: 0.7161 - mean_squared_error: 0.0880\nEpoch 16/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0820 - acc: 0.7505 - mean_squared_error: 0.0820\nEpoch 17/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0745 - acc: 0.7737 - mean_squared_error: 0.0745\nEpoch 18/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0719 - acc: 0.7837 - mean_squared_error: 0.0719\nEpoch 19/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0656 - acc: 0.8142 - mean_squared_error: 0.0656\nEpoch 20/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0607 - acc: 0.8239 - mean_squared_error: 0.0607\nModel fitting on the 3th bootstrapped set\n(28, 300, 1)\nEpoch 1/20\n3800/3800 [==============================] - 18s 5ms/step - loss: 0.1475 - acc: 0.4853 - mean_squared_error: 0.1475\nEpoch 2/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1411 - acc: 0.4974 - mean_squared_error: 0.1411\nEpoch 3/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1398 - acc: 0.5000 - mean_squared_error: 0.1398\nEpoch 4/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1386 - acc: 0.4997 - mean_squared_error: 0.1386\nEpoch 5/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1367 - acc: 0.5039 - mean_squared_error: 0.1367\nEpoch 6/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1343 - acc: 0.5082 - mean_squared_error: 0.1343\nEpoch 7/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1307 - acc: 0.5205 - mean_squared_error: 0.1307\nEpoch 8/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1227 - acc: 0.5642 - mean_squared_error: 0.1227\nEpoch 9/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1151 - acc: 0.6061 - mean_squared_error: 0.1151\nEpoch 10/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1053 - acc: 0.6558 - mean_squared_error: 0.1053\nEpoch 11/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0966 - acc: 0.6921 - mean_squared_error: 0.0966\nEpoch 12/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0936 - acc: 0.6937 - mean_squared_error: 0.0936\nEpoch 13/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0881 - acc: 0.7211 - mean_squared_error: 0.0881\nEpoch 14/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0777 - acc: 0.7616 - mean_squared_error: 
0.0777\nEpoch 15/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0727 - acc: 0.7779 - mean_squared_error: 0.0727\nEpoch 16/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0669 - acc: 0.7995 - mean_squared_error: 0.0669\nEpoch 17/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0631 - acc: 0.8234 - mean_squared_error: 0.0631\nEpoch 18/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0607 - acc: 0.8308 - mean_squared_error: 0.0607\nEpoch 19/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0548 - acc: 0.8553 - mean_squared_error: 0.0548\nEpoch 20/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0524 - acc: 0.8679 - mean_squared_error: 0.0524\nModel fitting on the 4th bootstrapped set\n(28, 300, 1)\nEpoch 1/20\n3800/3800 [==============================] - 17s 5ms/step - loss: 0.1496 - acc: 0.4800 - mean_squared_error: 0.1496\nEpoch 2/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1416 - acc: 0.4984 - mean_squared_error: 0.1416\nEpoch 3/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1404 - acc: 0.4961 - mean_squared_error: 0.1404\n"
],
[
"bagged_predict.keys()",
"_____no_output_____"
],
[
"bagged_predict = predict(bagModel)",
"_____no_output_____"
],
[
"Accuracy= baggedAccuracy(bagged_predict,Y_test)\nprint(\"Bagged Accuracy(50 models): %.2f%% \"%Accuracy)",
"Bagged Accuracy(50 models): 62.48% \n"
]
],
[
[
"### Improved accuracy! Variance reduction helps!",
"_____no_output_____"
]
],
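[
[
"A side note added for clarity (not part of the original write-up): if each of the $B$ bagged models has prediction variance $\\sigma^2$ and average pairwise correlation $\\rho$, the variance of the averaged prediction is roughly $\\rho\\sigma^2 + \\frac{1-\\rho}{B}\\sigma^2$. Averaging bootstrapped models therefore mainly reduces variance, while the bias of the individual models stays the same.",
"_____no_output_____"
]
],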
[
[
"bagModel2.keys()",
"_____no_output_____"
],
[
"bag_pred2 = predict(bagModel2)",
"_____no_output_____"
],
[
"Accuracy2= baggedAccuracy(bag_pred2,Y_test)\nprint(\"Bagged Accuracy(10 models): %.2f%% \"%Accuracy2)",
"Bagged Accuracy(10 models): 62.29% \n"
]
],
[
[
"### The results for 10-model and 50-model ensamble are only slightly different",
"_____no_output_____"
]
],
[
[
"n_sample = int(len(X_train)*0.6)\nn_sample",
"_____no_output_____"
],
[
"model_num_list = [10,20,30,40,50]",
"_____no_output_____"
],
[
"def accuracy_bag(n_sample,model_num_list):\n model_bags = []\n accuracy_bags = []\n for i in model_num_list:\n print('Bagging {} models'.format(i))\n bagmodel = bagging(n_sample,i)\n bag_pred = predict(bagmodel)\n Accuracy = baggedAccuracy(bag_pred,Y_test)\n accuracy_bags.append(Accuracy)\n return accuracy_bags",
"_____no_output_____"
]
],
[
[
"I tried to get the accuracy of an ensamble from 1 to 50 models but my machine broke down overnight.. <br>I guess this is where GCP becomes handy. Next up: perform bagging for LSTM model. <br> You can see from the accuracy plot: 3/5 of the bagging accuracy is better than a single model accuracy without bagging(Single model accuracy 60.97 in this case). <br> Another point is as you randomly select observations for your training set and your bag size (the previous bagging accuracy was obtained by performing with another randomly selected training set and a bag size of 3800), the resulting accuracy can be different by several percentage. <br> I think bagging would help our model accuracy but not in a tremendous way. The results proved that the variance of our model was not signicant comparing to bias. Tune the hyperparameters!",
"_____no_output_____"
]
],
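[
[
"# Added note (not part of the original notebook): part of the run-to-run spread discussed\n# above comes from the random bootstrap indices. Fixing NumPy's seed before bagging makes\n# the resampling reproducible (Keras weight initialisation inside model_fit stays random).\nnp.random.seed(42)  # 42 is an arbitrary choice; any fixed value works",
"_____no_output_____"
]
],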
[
[
"accuracybags = accuracy_bag(n_sample,model_num_list)",
"_____no_output_____"
],
[
"accuracybags",
"_____no_output_____"
],
[
"accuracybags_array = np.asarray(accuracybags)",
"_____no_output_____"
],
[
"from matplotlib import pyplot as plt",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,10),dpi=80)\nplt.scatter(model_num_list,accuracybags)\nplt.xlabel('Ensamble model quantity',fontsize=20)\nplt.ylabel('Bagging accuracy',fontsize=16)",
"_____no_output_____"
]
],
[
[
"The accuracy doesn't show an ascending trend as the ensamble contains more models, which is weird. ",
"_____no_output_____"
],
[
"Then I ran the model model for 3800 samples out of 4522 observations in the train set instead of 2639 samples with the same test train split.",
"_____no_output_____"
]
],
[
[
"n_sample = 3800",
"_____no_output_____"
],
[
"accuracybag2 = accuracy_bag(n_sample,[20,30])",
"Bagging 20 models\nModel fitting on the 1th bootstrapped set\nEpoch 1/20\n3800/3800 [==============================] - 27s 7ms/step - loss: 0.1513 - acc: 0.4847 - mean_squared_error: 0.1513\nEpoch 2/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1416 - acc: 0.5074 - mean_squared_error: 0.1416\nEpoch 3/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1403 - acc: 0.5076 - mean_squared_error: 0.1403\nEpoch 4/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1390 - acc: 0.5068 - mean_squared_error: 0.1390\nEpoch 5/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1378 - acc: 0.5066 - mean_squared_error: 0.1378\nEpoch 6/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1367 - acc: 0.5068 - mean_squared_error: 0.1367\nEpoch 7/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1339 - acc: 0.5137 - mean_squared_error: 0.1339\nEpoch 8/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1271 - acc: 0.5442 - mean_squared_error: 0.1271\nEpoch 9/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1185 - acc: 0.5934 - mean_squared_error: 0.1185\nEpoch 10/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1153 - acc: 0.6045 - mean_squared_error: 0.1153\nEpoch 11/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1029 - acc: 0.6608 - mean_squared_error: 0.1029\nEpoch 12/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0946 - acc: 0.7042 - mean_squared_error: 0.0946\nEpoch 13/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0870 - acc: 0.7274 - mean_squared_error: 0.0870\nEpoch 14/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0824 - acc: 0.7484 - mean_squared_error: 0.0824\nEpoch 15/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0753 - acc: 0.7803 - mean_squared_error: 0.0753\nEpoch 16/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0712 - acc: 0.7895 - mean_squared_error: 0.0712\nEpoch 17/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0646 - acc: 0.8129 - mean_squared_error: 0.0646\nEpoch 18/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0588 - acc: 0.8374 - mean_squared_error: 0.0588\nEpoch 19/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0544 - acc: 0.8513 - mean_squared_error: 0.0544\nEpoch 20/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0506 - acc: 0.8716 - mean_squared_error: 0.0506\nModel fitting on the 2th bootstrapped set\nEpoch 1/20\n3800/3800 [==============================] - 22s 6ms/step - loss: 0.1510 - acc: 0.4861 - mean_squared_error: 0.1510\nEpoch 2/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1414 - acc: 0.5053 - mean_squared_error: 0.1414\nEpoch 3/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1406 - acc: 0.5061 - mean_squared_error: 0.1406\nEpoch 4/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1401 - acc: 0.5074 - mean_squared_error: 0.1401\nEpoch 5/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1392 - acc: 0.5061 - mean_squared_error: 0.1392\nEpoch 6/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1376 - acc: 0.5063 - mean_squared_error: 0.1376\nEpoch 7/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1343 - acc: 0.5134 - 
mean_squared_error: 0.1343\nEpoch 8/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1308 - acc: 0.5279 - mean_squared_error: 0.1308\nEpoch 9/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1233 - acc: 0.5695 - mean_squared_error: 0.1233\nEpoch 10/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1137 - acc: 0.6184 - mean_squared_error: 0.1137\nEpoch 11/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1033 - acc: 0.6642 - mean_squared_error: 0.1033\nEpoch 12/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0965 - acc: 0.6861 - mean_squared_error: 0.0965\nEpoch 13/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0881 - acc: 0.7271 - mean_squared_error: 0.0881\nEpoch 14/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0795 - acc: 0.7634 - mean_squared_error: 0.0795\nEpoch 15/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0741 - acc: 0.7858 - mean_squared_error: 0.0741\nEpoch 16/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0699 - acc: 0.7982 - mean_squared_error: 0.0699\nEpoch 17/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0637 - acc: 0.8189 - mean_squared_error: 0.0637\nEpoch 18/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0590 - acc: 0.8416 - mean_squared_error: 0.0590\nEpoch 19/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0539 - acc: 0.8584 - mean_squared_error: 0.0539\nEpoch 20/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0519 - acc: 0.8613 - mean_squared_error: 0.0519\nModel fitting on the 3th bootstrapped set\nEpoch 1/20\n3800/3800 [==============================] - 18s 5ms/step - loss: 0.1479 - acc: 0.4900 - mean_squared_error: 0.1479\nEpoch 2/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1419 - acc: 0.5047 - mean_squared_error: 0.1419\nEpoch 3/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1407 - acc: 0.5061 - mean_squared_error: 0.1407\nEpoch 4/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1400 - acc: 0.5045 - mean_squared_error: 0.1400\nEpoch 5/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1386 - acc: 0.5066 - mean_squared_error: 0.1386\nEpoch 6/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1378 - acc: 0.5058 - mean_squared_error: 0.1378\nEpoch 7/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1346 - acc: 0.5105 - mean_squared_error: 0.1346\nEpoch 8/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1315 - acc: 0.5234 - mean_squared_error: 0.1315\nEpoch 9/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1243 - acc: 0.5563 - mean_squared_error: 0.1243\nEpoch 10/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1202 - acc: 0.5861 - mean_squared_error: 0.1202\nEpoch 11/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1091 - acc: 0.6429 - mean_squared_error: 0.1091\nEpoch 12/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0996 - acc: 0.6858 - mean_squared_error: 0.0996\nEpoch 13/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0954 - acc: 0.6966 - mean_squared_error: 0.0954\nEpoch 14/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0884 - acc: 0.7168 - mean_squared_error: 0.0884\nEpoch 
15/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0802 - acc: 0.7492 - mean_squared_error: 0.0802\nEpoch 16/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0732 - acc: 0.7845 - mean_squared_error: 0.0732\nEpoch 17/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0665 - acc: 0.8111 - mean_squared_error: 0.0665\nEpoch 18/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0631 - acc: 0.8166 - mean_squared_error: 0.0631\nEpoch 19/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0582 - acc: 0.8371 - mean_squared_error: 0.0582\nEpoch 20/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0542 - acc: 0.8553 - mean_squared_error: 0.0542\nModel fitting on the 4th bootstrapped set\nEpoch 1/20\n3800/3800 [==============================] - 24s 6ms/step - loss: 0.1496 - acc: 0.4876 - mean_squared_error: 0.1496\nEpoch 2/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1416 - acc: 0.5074 - mean_squared_error: 0.1416\nEpoch 3/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1399 - acc: 0.5068 - mean_squared_error: 0.1399\nEpoch 4/20\n"
],
[
"accuracybag3 = accuracy_bag(n_sample,[40,50])",
"Bagging 40 models\nModel fitting on the 1th bootstrapped set\nEpoch 1/20\n3800/3800 [==============================] - 117s 31ms/step - loss: 0.1507 - acc: 0.4966 - mean_squared_error: 0.1507\nEpoch 2/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1412 - acc: 0.5087 - mean_squared_error: 0.1412\nEpoch 3/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1398 - acc: 0.5079 - mean_squared_error: 0.1398\nEpoch 4/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1393 - acc: 0.5084 - mean_squared_error: 0.1393\nEpoch 5/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1375 - acc: 0.5087 - mean_squared_error: 0.1375\nEpoch 6/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1356 - acc: 0.5105 - mean_squared_error: 0.1356\nEpoch 7/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1330 - acc: 0.5137 - mean_squared_error: 0.1330\nEpoch 8/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1279 - acc: 0.5429 - mean_squared_error: 0.1279\nEpoch 9/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1169 - acc: 0.5992 - mean_squared_error: 0.1169\nEpoch 10/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1097 - acc: 0.6334 - mean_squared_error: 0.1097\nEpoch 11/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0988 - acc: 0.6768 - mean_squared_error: 0.0988\nEpoch 12/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0937 - acc: 0.6979 - mean_squared_error: 0.0937\nEpoch 13/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0860 - acc: 0.7316 - mean_squared_error: 0.0860\nEpoch 14/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0807 - acc: 0.7563 - mean_squared_error: 0.0807\nEpoch 15/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0756 - acc: 0.7758 - mean_squared_error: 0.0756\nEpoch 16/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0705 - acc: 0.7892 - mean_squared_error: 0.0705\nEpoch 17/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0626 - acc: 0.8187 - mean_squared_error: 0.0626\nEpoch 18/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0617 - acc: 0.8289 - mean_squared_error: 0.0617\nEpoch 19/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0554 - acc: 0.8508 - mean_squared_error: 0.0554\nEpoch 20/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0543 - acc: 0.8571 - mean_squared_error: 0.0543\nModel fitting on the 2th bootstrapped set\nEpoch 1/20\n3800/3800 [==============================] - 26s 7ms/step - loss: 0.1517 - acc: 0.4879 - mean_squared_error: 0.1517\nEpoch 2/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1417 - acc: 0.5082 - mean_squared_error: 0.1417\nEpoch 3/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1397 - acc: 0.5071 - mean_squared_error: 0.1397\nEpoch 4/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1389 - acc: 0.5087 - mean_squared_error: 0.1389\nEpoch 5/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1370 - acc: 0.5129 - mean_squared_error: 0.1370\nEpoch 6/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1359 - acc: 0.5171 - mean_squared_error: 0.1359\nEpoch 7/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1323 - acc: 0.5266 - 
mean_squared_error: 0.1323\nEpoch 8/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1268 - acc: 0.5411 - mean_squared_error: 0.1268\nEpoch 9/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1202 - acc: 0.5824 - mean_squared_error: 0.1202\nEpoch 10/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1106 - acc: 0.6353 - mean_squared_error: 0.1106\nEpoch 11/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1013 - acc: 0.6711 - mean_squared_error: 0.1013\nEpoch 12/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0948 - acc: 0.6995 - mean_squared_error: 0.0948\nEpoch 13/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0892 - acc: 0.7287 - mean_squared_error: 0.0892\nEpoch 14/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0803 - acc: 0.7553 - mean_squared_error: 0.0803\nEpoch 15/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0738 - acc: 0.7845 - mean_squared_error: 0.0738\nEpoch 16/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0696 - acc: 0.8005 - mean_squared_error: 0.0696\nEpoch 17/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0618 - acc: 0.8289 - mean_squared_error: 0.0618\nEpoch 18/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0604 - acc: 0.8295 - mean_squared_error: 0.0604\nEpoch 19/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0557 - acc: 0.8471 - mean_squared_error: 0.0557\nEpoch 20/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0535 - acc: 0.8547 - mean_squared_error: 0.0535\nModel fitting on the 3th bootstrapped set\nEpoch 1/20\n3800/3800 [==============================] - 28s 7ms/step - loss: 0.1506 - acc: 0.4868 - mean_squared_error: 0.1506\nEpoch 2/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1414 - acc: 0.5089 - mean_squared_error: 0.1414\nEpoch 3/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1404 - acc: 0.5092 - mean_squared_error: 0.1404\nEpoch 4/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1389 - acc: 0.5097 - mean_squared_error: 0.1389\nEpoch 5/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1370 - acc: 0.5097 - mean_squared_error: 0.1370\nEpoch 6/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1367 - acc: 0.5095 - mean_squared_error: 0.1367\nEpoch 7/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1341 - acc: 0.5147 - mean_squared_error: 0.1341\nEpoch 8/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1291 - acc: 0.5345 - mean_squared_error: 0.1291\nEpoch 9/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1219 - acc: 0.5653 - mean_squared_error: 0.1219\nEpoch 10/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1154 - acc: 0.6121 - mean_squared_error: 0.1154\nEpoch 11/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1074 - acc: 0.6503 - mean_squared_error: 0.1074\nEpoch 12/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0979 - acc: 0.6932 - mean_squared_error: 0.0979\nEpoch 13/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0909 - acc: 0.7174 - mean_squared_error: 0.0909\nEpoch 14/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0856 - acc: 0.7353 - mean_squared_error: 0.0856\nEpoch 
15/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0817 - acc: 0.7513 - mean_squared_error: 0.0817\nEpoch 16/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0714 - acc: 0.7895 - mean_squared_error: 0.0714\nEpoch 17/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0655 - acc: 0.8145 - mean_squared_error: 0.0655\nEpoch 18/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0625 - acc: 0.8253 - mean_squared_error: 0.0625\nEpoch 19/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0591 - acc: 0.8366 - mean_squared_error: 0.0591\nEpoch 20/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.0536 - acc: 0.8550 - mean_squared_error: 0.0536\nModel fitting on the 4th bootstrapped set\nEpoch 1/20\n3800/3800 [==============================] - 23s 6ms/step - loss: 0.1496 - acc: 0.4908 - mean_squared_error: 0.1496\nEpoch 2/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1415 - acc: 0.5084 - mean_squared_error: 0.1415\nEpoch 3/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1398 - acc: 0.5089 - mean_squared_error: 0.1398\nEpoch 4/20\n"
],
[
"accuracybag4 = accuracy_bag(n_sample,[10])",
"Bagging 10 models\nModel fitting on the 1th bootstrapped set\nEpoch 1/20\n3800/3800 [==============================] - 299s 79ms/step - loss: 0.1503 - acc: 0.4837 - mean_squared_error: 0.1503\nEpoch 2/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1414 - acc: 0.5063 - mean_squared_error: 0.1414\nEpoch 3/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1403 - acc: 0.5063 - mean_squared_error: 0.1403\nEpoch 4/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1382 - acc: 0.5087 - mean_squared_error: 0.1382\nEpoch 5/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1372 - acc: 0.5063 - mean_squared_error: 0.1372\nEpoch 6/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1323 - acc: 0.5179 - mean_squared_error: 0.1323\nEpoch 7/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1262 - acc: 0.5442 - mean_squared_error: 0.1262\nEpoch 8/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1180 - acc: 0.5984 - mean_squared_error: 0.1180\nEpoch 9/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1086 - acc: 0.6455 - mean_squared_error: 0.1086\nEpoch 10/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0998 - acc: 0.6826 - mean_squared_error: 0.0998\nEpoch 11/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0939 - acc: 0.6989 - mean_squared_error: 0.0939\nEpoch 12/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0871 - acc: 0.7318 - mean_squared_error: 0.0871\nEpoch 13/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0840 - acc: 0.7434 - mean_squared_error: 0.0840\nEpoch 14/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0759 - acc: 0.7703 - mean_squared_error: 0.0759\nEpoch 15/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0695 - acc: 0.7958 - mean_squared_error: 0.0695\nEpoch 16/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0671 - acc: 0.8068 - mean_squared_error: 0.0671\nEpoch 17/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0625 - acc: 0.8253 - mean_squared_error: 0.0625\nEpoch 18/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0580 - acc: 0.8424 - mean_squared_error: 0.0580\nEpoch 19/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0533 - acc: 0.8587 - mean_squared_error: 0.0533\nEpoch 20/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0500 - acc: 0.8705 - mean_squared_error: 0.0500\nModel fitting on the 2th bootstrapped set\nEpoch 1/20\n3800/3800 [==============================] - 41s 11ms/step - loss: 0.1510 - acc: 0.4871 - mean_squared_error: 0.1510\nEpoch 2/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1420 - acc: 0.5055 - mean_squared_error: 0.1420\nEpoch 3/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1405 - acc: 0.5055 - mean_squared_error: 0.1405\nEpoch 4/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1397 - acc: 0.5079 - mean_squared_error: 0.1397\nEpoch 5/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1371 - acc: 0.5082 - mean_squared_error: 0.1371\nEpoch 6/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1351 - acc: 0.5153 - mean_squared_error: 0.1351\nEpoch 7/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1313 - acc: 0.5276 - 
mean_squared_error: 0.1313\nEpoch 8/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1246 - acc: 0.5608 - mean_squared_error: 0.1246\nEpoch 9/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1137 - acc: 0.6261 - mean_squared_error: 0.1137\nEpoch 10/20\n3800/3800 [==============================] - 7s 2ms/step - loss: 0.1038 - acc: 0.6695 - mean_squared_error: 0.1038\nEpoch 11/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0946 - acc: 0.7003 - mean_squared_error: 0.0946\nEpoch 12/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0846 - acc: 0.7392 - mean_squared_error: 0.0846\nEpoch 13/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0786 - acc: 0.7689 - mean_squared_error: 0.0786\nEpoch 14/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0725 - acc: 0.7892 - mean_squared_error: 0.0725\nEpoch 15/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0671 - acc: 0.8000 - mean_squared_error: 0.0671\nEpoch 16/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0644 - acc: 0.8168 - mean_squared_error: 0.0644\nEpoch 17/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0597 - acc: 0.8321 - mean_squared_error: 0.0597\nEpoch 18/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0547 - acc: 0.8539 - mean_squared_error: 0.0547\nEpoch 19/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0522 - acc: 0.8624 - mean_squared_error: 0.0522\nEpoch 20/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0502 - acc: 0.8653 - mean_squared_error: 0.0502\nModel fitting on the 3th bootstrapped set\nEpoch 1/20\n3800/3800 [==============================] - 46s 12ms/step - loss: 0.1531 - acc: 0.4821 - mean_squared_error: 0.1531\nEpoch 2/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1429 - acc: 0.5058 - mean_squared_error: 0.1429\nEpoch 3/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1416 - acc: 0.5087 - mean_squared_error: 0.1416\nEpoch 4/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1405 - acc: 0.5074 - mean_squared_error: 0.1405\nEpoch 5/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1390 - acc: 0.5087 - mean_squared_error: 0.1390\nEpoch 6/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1385 - acc: 0.5111 - mean_squared_error: 0.1385\nEpoch 7/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1362 - acc: 0.5097 - mean_squared_error: 0.1362\nEpoch 8/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1339 - acc: 0.5200 - mean_squared_error: 0.1339\nEpoch 9/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1278 - acc: 0.5482 - mean_squared_error: 0.1278\nEpoch 10/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1221 - acc: 0.5842 - mean_squared_error: 0.1221\nEpoch 11/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1217 - acc: 0.5829 - mean_squared_error: 0.1217\nEpoch 12/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1107 - acc: 0.6387 - mean_squared_error: 0.1107\nEpoch 13/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1013 - acc: 0.6708 - mean_squared_error: 0.1013\nEpoch 14/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0947 - acc: 0.6979 - mean_squared_error: 0.0947\nEpoch 
15/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0865 - acc: 0.7297 - mean_squared_error: 0.0865\nEpoch 16/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0816 - acc: 0.7479 - mean_squared_error: 0.0816\nEpoch 17/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0757 - acc: 0.7824 - mean_squared_error: 0.0757\nEpoch 18/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0710 - acc: 0.7963 - mean_squared_error: 0.0710\nEpoch 19/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0671 - acc: 0.8061 - mean_squared_error: 0.0671\nEpoch 20/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.0613 - acc: 0.8284 - mean_squared_error: 0.0613\nModel fitting on the 4th bootstrapped set\nEpoch 1/20\n3800/3800 [==============================] - 41s 11ms/step - loss: 0.1517 - acc: 0.4950 - mean_squared_error: 0.1517\nEpoch 2/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1428 - acc: 0.5066 - mean_squared_error: 0.1428\nEpoch 3/20\n3800/3800 [==============================] - 8s 2ms/step - loss: 0.1413 - acc: 0.5095 - mean_squared_error: 0.1413\nEpoch 4/20\n"
],
[
"accuracybag2",
"_____no_output_____"
],
[
"accuracybag3",
"_____no_output_____"
]
],
[
[
"I did the bagging separately because I was afraid of my machine breaking down..",
"_____no_output_____"
]
],
[
[
"accuracy_3800sample = accuracybag4 +accuracybag2 + accuracybag3",
"_____no_output_____"
],
[
"accuracy_2640sample = accuracybags",
"_____no_output_____"
],
[
"accuracy_3800sample ",
"_____no_output_____"
]
],
[
[
"### Combine the accuracy results for 2640 and 3800 samples for (10,20,30,40,50) models bagging",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(10,10),dpi=80)\nscatter_3800sample = plt.scatter([10,20,30,40,50],accuracy_3800sample,color = 'Blue')\nscatter_2640sample = plt.scatter([10,20,30,40,50],accuracy_2640sample,color = 'Green')\nplt.xlabel('Bagging Ensamble Model Quantity',fontsize=20)\nplt.ylabel('Bagging Accuracy(%)',fontsize=16)\nCNN_accuracy = plt.axhline(61, color=\"red\",lw =3)\nplt.legend((scatter_3800sample,scatter_2640sample,CNN_accuracy), ('3800 bagging sample', '2640 bagging sample','CNN accuracy without bagging'),loc = 'upper left',fontsize = 12)",
"_____no_output_____"
]
],
[
[
"I tried to get the accuracy bags from 1 to 50 models but my machine broke down overnight.. <br>I guess this is where GCP becomes handy. Next up: perform bagging for LSTM model. <br> You can see from the accuracy plot: 3/5 of the bagging accuracy is better than a single model accuracy without bagging(Single model accuracy 60.97 in this case). <br> Another point is as you randomly select observations for your training set and your bag size (the previous bagging accuracy was obtained by performing with another randomly selected training set and a bag size of 3800), the resulting accuracy can be different by several percentage. <br> I think bagging would help our model accuracy but not in a tremendous way. The results proved that the variance of our model was not signicant comparing to bias. Tune the hyperparameters!",
"_____no_output_____"
],
[
"### Please correct me if there is any problem in the code!!!",
"_____no_output_____"
],
[
"The following cells are for trying to see if finding the most common prediction among models (voting) in the ensamble gives better results than averging the confidence values then make predictions ",
"_____no_output_____"
]
],
[
[
"from collections import Counter",
"_____no_output_____"
],
[
"bagModel.keys()",
"_____no_output_____"
],
[
"lists = []\nfor i in range(30):\n model_number = \"model%s\" % (i+1)\n pred_list = conversion(prediction[model_number])\n lists.append(pred_list)",
"_____no_output_____"
],
[
"Ytest_list=conversion(Y_test)",
"_____no_output_____"
],
[
"pred_list = []\nfor i in range(1522):\n for j in range(30):\n new_list = []\n new_list.append(lists[j][i])\n pred = Counter(new_list).most_common(1)[0][0]\n pred_list.append(pred)",
"_____no_output_____"
],
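[
"# Hedged comparison sketch (an addition, not from the original notebook): majority voting vs.\n# averaging the models' confidence values before predicting. Assumptions: `prediction` maps\n# 'model1'..'model30' to per-sample confidence arrays (as used above), `conversion` turns such\n# an array into a list of class labels, and `lists` / `Ytest_list` are the per-model label\n# lists and the true labels built in the previous cells.\nimport numpy as np\n\nstacked = np.stack([prediction[\"model%s\" % (k + 1)] for k in range(30)])  # (30, 1522, n_classes)\navg_pred = conversion(stacked.mean(axis=0))        # labels from averaged confidences\nvote_pred = [Counter(col).most_common(1)[0][0]     # majority vote per test sample\n             for col in zip(*lists)]\nper_model_acc = [np.mean(np.array(lists[j]) == np.array(Ytest_list)) for j in range(30)]\nprint(\"per-model accuracy mean/std:\", np.mean(per_model_acc), np.std(per_model_acc))\nprint(\"averaging accuracy:\", np.mean(np.array(avg_pred) == np.array(Ytest_list)))\nprint(\"voting accuracy   :\", np.mean(np.array(vote_pred) == np.array(Ytest_list)))",
"_____no_output_____"
],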
[
"sum(1 for i in range(len(pred_list)) if pred_list[i] == Ytest_list[i])",
"_____no_output_____"
]
],
[
[
"# The result is worse!",
"_____no_output_____"
]
],
[
[
"from keras.layers import Dropout, Convolution2D, MaxPooling2D\n\ntop_words = 1000\nmax_words = 150\nfilters = 32 #filter = 1 x KERNEL \n\ninpurt_shape = (X_train.shape[1:])\nprint(inpurt_shape)\n# create the model \nmodel = Sequential()\n\nmodel.add(Convolution2D(16, kernel_size=3, activation='elu', padding='same',\n input_shape=inpurt_shape))\nmodel.add(MaxPooling2D(pool_size=5))\nmodel.add(Convolution2D(filters=filters, kernel_size=3, padding='same', activation='relu'))\nmodel.add(MaxPooling2D(pool_size=5))\nmodel.add(Flatten())\nmodel.add(Dense(250, activation='relu'))\nmodel.add(Dense(250, activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(3, activation='linear')) #change from logistic \nmodel.compile(loss='mse', optimizer='adam', metrics=['accuracy','mse']) \nprint(model.summary())\n\n# Fit the model\nmodel.fit(X_train, \n Y_train, \n validation_data=(X_test, Y_test), \n epochs=20, \n batch_size=128,\n verbose=1)\n\n# Final evaluation of the model\nscores = model.evaluate(X_test, Y_test, verbose=0)\nprint(\"Accuracy: %.2f%%\" % (scores[1]*100))",
"(28, 300, 1)\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_305 (Conv2D) (None, 28, 300, 16) 160 \n_________________________________________________________________\nmax_pooling2d_305 (MaxPoolin (None, 5, 60, 16) 0 \n_________________________________________________________________\nconv2d_306 (Conv2D) (None, 5, 60, 32) 4640 \n_________________________________________________________________\nmax_pooling2d_306 (MaxPoolin (None, 1, 12, 32) 0 \n_________________________________________________________________\nflatten_153 (Flatten) (None, 384) 0 \n_________________________________________________________________\ndense_457 (Dense) (None, 250) 96250 \n_________________________________________________________________\ndense_458 (Dense) (None, 250) 62750 \n_________________________________________________________________\ndropout_153 (Dropout) (None, 250) 0 \n_________________________________________________________________\ndense_459 (Dense) (None, 3) 753 \n=================================================================\nTotal params: 164,553\nTrainable params: 164,553\nNon-trainable params: 0\n_________________________________________________________________\nNone\nTrain on 4565 samples, validate on 1522 samples\nEpoch 1/20\n4565/4565 [==============================] - 92s 20ms/step - loss: 0.1480 - acc: 0.5001 - mean_squared_error: 0.1480 - val_loss: 0.1352 - val_acc: 0.5237 - val_mean_squared_error: 0.1352\nEpoch 2/20\n4565/4565 [==============================] - 10s 2ms/step - loss: 0.1409 - acc: 0.5062 - mean_squared_error: 0.1409 - val_loss: 0.1357 - val_acc: 0.5237 - val_mean_squared_error: 0.1357\nEpoch 3/20\n4565/4565 [==============================] - 10s 2ms/step - loss: 0.1394 - acc: 0.5071 - mean_squared_error: 0.1394 - val_loss: 0.1348 - val_acc: 0.5250 - val_mean_squared_error: 0.1348\nEpoch 4/20\n4565/4565 [==============================] - 9s 2ms/step - loss: 0.1395 - acc: 0.5071 - mean_squared_error: 0.1395 - val_loss: 0.1342 - val_acc: 0.5250 - val_mean_squared_error: 0.1342\nEpoch 5/20\n4565/4565 [==============================] - 9s 2ms/step - loss: 0.1381 - acc: 0.5082 - mean_squared_error: 0.1381 - val_loss: 0.1330 - val_acc: 0.5250 - val_mean_squared_error: 0.1330\nEpoch 6/20\n4565/4565 [==============================] - 9s 2ms/step - loss: 0.1350 - acc: 0.5146 - mean_squared_error: 0.1350 - val_loss: 0.1298 - val_acc: 0.5388 - val_mean_squared_error: 0.1298\nEpoch 7/20\n4565/4565 [==============================] - 10s 2ms/step - loss: 0.1326 - acc: 0.5087 - mean_squared_error: 0.1326 - val_loss: 0.1274 - val_acc: 0.5618 - val_mean_squared_error: 0.1274\nEpoch 8/20\n4565/4565 [==============================] - 9s 2ms/step - loss: 0.1260 - acc: 0.5566 - mean_squared_error: 0.1260 - val_loss: 0.1211 - val_acc: 0.5802 - val_mean_squared_error: 0.1211\nEpoch 9/20\n4565/4565 [==============================] - 10s 2ms/step - loss: 0.1177 - acc: 0.5917 - mean_squared_error: 0.1177 - val_loss: 0.1172 - val_acc: 0.5926 - val_mean_squared_error: 0.1172\nEpoch 10/20\n4565/4565 [==============================] - 10s 2ms/step - loss: 0.1138 - acc: 0.6136 - mean_squared_error: 0.1138 - val_loss: 0.1158 - val_acc: 0.5900 - val_mean_squared_error: 0.1158\nEpoch 11/20\n4565/4565 [==============================] - 10s 2ms/step - loss: 0.1109 - acc: 0.6248 - mean_squared_error: 0.1109 - val_loss: 0.1147 - val_acc: 0.5966 - val_mean_squared_error: 
0.1147\nEpoch 12/20\n4565/4565 [==============================] - 10s 2ms/step - loss: 0.1053 - acc: 0.6442 - mean_squared_error: 0.1053 - val_loss: 0.1150 - val_acc: 0.5946 - val_mean_squared_error: 0.1150\nEpoch 13/20\n4565/4565 [==============================] - 10s 2ms/step - loss: 0.1013 - acc: 0.6589 - mean_squared_error: 0.1013 - val_loss: 0.1126 - val_acc: 0.5959 - val_mean_squared_error: 0.1126\nEpoch 14/20\n4565/4565 [==============================] - 9s 2ms/step - loss: 0.0986 - acc: 0.6627 - mean_squared_error: 0.0986 - val_loss: 0.1142 - val_acc: 0.5894 - val_mean_squared_error: 0.1142\nEpoch 15/20\n4565/4565 [==============================] - 10s 2ms/step - loss: 0.0930 - acc: 0.6911 - mean_squared_error: 0.0930 - val_loss: 0.1139 - val_acc: 0.6012 - val_mean_squared_error: 0.1139\nEpoch 16/20\n4565/4565 [==============================] - 9s 2ms/step - loss: 0.0923 - acc: 0.6977 - mean_squared_error: 0.0923 - val_loss: 0.1188 - val_acc: 0.5966 - val_mean_squared_error: 0.1188\nEpoch 17/20\n4565/4565 [==============================] - 9s 2ms/step - loss: 0.0878 - acc: 0.7076 - mean_squared_error: 0.0878 - val_loss: 0.1143 - val_acc: 0.5953 - val_mean_squared_error: 0.1143\nEpoch 18/20\n4565/4565 [==============================] - 9s 2ms/step - loss: 0.0848 - acc: 0.7249 - mean_squared_error: 0.0848 - val_loss: 0.1166 - val_acc: 0.5953 - val_mean_squared_error: 0.1166\nEpoch 19/20\n4565/4565 [==============================] - 9s 2ms/step - loss: 0.0817 - acc: 0.7352 - mean_squared_error: 0.0817 - val_loss: 0.1146 - val_acc: 0.5999 - val_mean_squared_error: 0.1146\nEpoch 20/20\n4565/4565 [==============================] - 9s 2ms/step - loss: 0.0794 - acc: 0.7518 - mean_squared_error: 0.0794 - val_loss: 0.1143 - val_acc: 0.6097 - val_mean_squared_error: 0.1143\nAccuracy: 60.97%\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e76ffeae00303ecb8056e287d6627b5310d468a0 | 6,409 | ipynb | Jupyter Notebook | ch01/Untitled.ipynb | liuzhengqi1996/math452 | 635b6ce53cb792e316abf4f47396f2e4f0686815 | [
"MIT"
] | null | null | null | ch01/Untitled.ipynb | liuzhengqi1996/math452 | 635b6ce53cb792e316abf4f47396f2e4f0686815 | [
"MIT"
] | null | null | null | ch01/Untitled.ipynb | liuzhengqi1996/math452 | 635b6ce53cb792e316abf4f47396f2e4f0686815 | [
"MIT"
] | null | null | null | 47.828358 | 1,243 | 0.648151 | [
[
[
"# A basic machine learning problem: image classification .\n\n## A basic machine learning problem: image classification\n```{admonition} Can a machine (function) tell the difference ?\n Mathematically, gray-scale image can be just taken as matrix in $R^{n_0\\times n_0}$.\n\n The next figure shows different result from: human vision and computer representation: (pic not found)\n \n An image is just a big grid of numbers between [0,255]\n e.g. $800 \\times 600 \\times 3$ (3 channels RGB)\n\n Futhermore, color image can be taken as 3D tensor (matrix with 3 channel(RGB) ) in $R^{n_0\\times n_0 \\times 3}$. \n\n Then, let us think about the general supervised learning case.\n\n Each image = a big vector of pixel values\n\n - $d = 1280\\times 720 \\times 3$(width $\\times$ height $\\times$ RGB channel) \n \n ```\n\n ```{admonition} 3 different sets of points in $\\mathbb{R}^d$, are they separable?\n (cannot find three pictures here)\n```\n\n ```{admonition} Convert into mathematical problem\nFind $f(\\cdot; \\theta): \\mathbb{R}^d \\to \\mathbb{R}^3$ such that: (no picture)\n- Function interpolation\n- Data fitting\n ```\n\n ```{admonition} How to formulate “learning”?\n- Data: $\\{x_j, y_j\\}_{j=1}^N$\n- Find $f^*$ in some function class s.t. $f^*(x_j) \\approx y_j$.\n- Mathematically, solve the optimization problem by parameterizing the abstract function class\n$\n\t\\min_{\\theta} \\mathcal L(\\theta)\n$\n- where\n$\n\t\t\\mathcal L( \\theta):=\n\t\t{\\mathbb E}_{(x,y)\\sim \\mathcal D}[\\ell(f(x; \\theta), y)]\\approx L( \\theta) :=\n\t\t\\frac{1}{N} \\sum_{j=1}^N\\ell(y_j, f(x_j; \\theta))\n$\n- Here\n$\n\\ell(y_j,f(x_j; \\theta))\n$ \nis a general distance between real label $y_j$ and predicted label $f(x_j;\\theta)$\n\nTwo commonly used distances are \n- $l^2$ distance: \n$\n\t\t\\ell(y_j,f(x_j; \\theta)) = \\|y_j - f(x_j; \\theta)\\|^2.\n$\t\t\n- KL-divergence distance:\n$\n\\ell(y_j, f(x_j; \\theta)) = \\sum_{i=1}^k [y_j]_i \\log\\frac{[y_j]_i }{[f(x_j; \\theta)]_i}.\n$\n```\n ```{admonition} Application: image classification\nTBD (cannot find pictures)\n ```",
"_____no_output_____"
]
],
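[
[
"# Hedged illustration (an addition, not from the original notes): a tiny numerical example of\n# the empirical loss L(theta) = (1/N) * sum_j l(y_j, f(x_j; theta)) for the two distances above.\n# The arrays below are made-up stand-ins for one-hot labels y_j and predicted probabilities f(x_j; theta).\nimport numpy as np\n\ny = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # labels y_j\nf = np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]])   # predictions f(x_j; theta)\n\nl2_loss = np.mean(np.sum((y - f) ** 2, axis=1))     # l2 distance, averaged over the N samples\neps = 1e-12                                         # guard against log(0) in the KL term\nkl_loss = np.mean(np.sum(y * (np.log(y + eps) - np.log(f + eps)), axis=1))\nprint(\"empirical l2 loss:\", l2_loss)\nprint(\"empirical KL loss:\", kl_loss)",
"_____no_output_____"
]
],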
[
[
"from IPython.display import HTML\nHTML('<iframe id=\"kaltura_player\" src=\"https://cdnapisec.kaltura.com/p/2356971/sp/235697100/embedIframeJs/uiconf_id/41416911/partner_id/2356971?iframeembed=true&playerId=kaltura_player&entry_id=1_b5pq3bnx&flashvars[streamerType]=auto&flashvars[localizationCode]=en&flashvars[leadWithHTML5]=true&flashvars[sideBarContainer.plugin]=true&flashvars[sideBarContainer.position]=left&flashvars[sideBarContainer.clickToClose]=true&flashvars[chapters.plugin]=true&flashvars[chapters.layout]=vertical&flashvars[chapters.thumbnailRotator]=false&flashvars[streamSelector.plugin]=true&flashvars[EmbedPlayer.SpinnerTarget]=videoHolder&flashvars[dualScreen.plugin]=true&flashvars[hotspots.plugin]=1&flashvars[Kaltura.addCrossoriginToIframe]=true&&wid=1_qcnp6cit\" width=\"560\" height=\"590\" allowfullscreen webkitallowfullscreen mozAllowFullScreen allow=\"autoplay *; fullscreen *; encrypted-media *\" sandbox=\"allow-forms allow-same-origin allow-scripts allow-top-navigation allow-pointer-lock allow-popups allow-modals allow-orientation-lock allow-popups-to-escape-sandbox allow-presentation allow-top-navigation-by-user-activation\" frameborder=\"0\" title=\"Kaltura Player\"></iframe>')",
"/anaconda3/lib/python3.7/site-packages/IPython/core/display.py:689: UserWarning: Consider using IPython.display.IFrame instead\n warnings.warn(\"Consider using IPython.display.IFrame instead\")\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |
e77007909328fe9257f3e4788c746f3421745da5 | 16,156 | ipynb | Jupyter Notebook | examples/Example4 - Wikidata Pagerank.ipynb | robuso/kgtk | 95024bfc61c12282c75d53d256115cdf41837f04 | [
"MIT"
] | 222 | 2020-03-31T17:45:04.000Z | 2022-03-30T22:48:08.000Z | examples/Example4 - Wikidata Pagerank.ipynb | robuso/kgtk | 95024bfc61c12282c75d53d256115cdf41837f04 | [
"MIT"
] | 510 | 2020-04-02T00:32:44.000Z | 2022-03-29T01:20:22.000Z | examples/Example4 - Wikidata Pagerank.ipynb | robuso/kgtk | 95024bfc61c12282c75d53d256115cdf41837f04 | [
"MIT"
] | 41 | 2020-03-31T17:45:07.000Z | 2022-03-22T02:49:44.000Z | 34.301486 | 324 | 0.637782 | [
[
[
"# Calculating Pagerank on Wikidata",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport os",
"_____no_output_____"
],
[
"%env MY=/Users/pedroszekely/data/wikidata-20200504\n%env WD=/Volumes/GoogleDrive/Shared drives/KGTK/datasets/wikidata-20200504",
"env: MY=/Users/pedroszekely/data/wikidata-20200504\nenv: WD=/Volumes/GoogleDrive/Shared drives/KGTK/datasets/wikidata-20200504\n"
]
],
[
[
"We need to filter the wikidata edge file to remove all edges where `node2` is a literal. \nWe can do this by running `ifexists` to keep edges where `node2` also appears in `node1`.\nThis takes 2-3 hours on a laptop.",
"_____no_output_____"
]
],
[
[
"!time gzcat \"$WD/wikidata_edges_20200504.tsv.gz\" \\\n | kgtk ifexists --filter-on \"$WD/wikidata_edges_20200504.tsv.gz\" --input-keys node2 --filter-keys node1 \\\n | gzip > \"$MY/wikidata-item-edges.tsv.gz\"",
"\nreal\t121m58.689s\nuser\t129m53.195s\nsys\t6m21.092s\n"
],
[
"!gzcat $MY/wikidata-item-edges.tsv.gz | wc",
" 460763981 3225347876 32869769062\n"
]
],
[
[
"We have 460 million edges that connect items to other items, let's make sure this is what we want before spending a lot of time computing pagerank",
"_____no_output_____"
]
],
[
[
"!gzcat $MY/wikidata-item-edges.tsv.gz | head",
"id\tnode1\tlabel\tnode2\trank\tnode2;magnitude\tnode2;unit\tnode2;date\tnode2;item\tnode2;lower\tnode2;upper\tnode2;latitude\tnode2;longitude\tnode2;precision\tnode2;calendar\tnode2;entity-type\nQ8-P31-1\tQ8\tP31\tQ331769\tnormal\t\t\t\tQ331769\t\t\t\t\t\t\titem\nQ8-P31-2\tQ8\tP31\tQ60539479\tnormal\t\t\t\tQ60539479\t\t\t\t\t\t\titem\nQ8-P31-3\tQ8\tP31\tQ9415\tnormal\t\t\t\tQ9415\t\t\t\t\t\t\titem\nQ8-P1343-1\tQ8\tP1343\tQ20743760\tnormal\t\t\t\tQ20743760\t\t\t\t\t\t\titem\nQ8-P1343-2\tQ8\tP1343\tQ1970746\tnormal\t\t\t\tQ1970746\t\t\t\t\t\t\titem\nQ8-P1343-3\tQ8\tP1343\tQ19180675\tnormal\t\t\t\tQ19180675\t\t\t\t\t\t\titem\nQ8-P461-1\tQ8\tP461\tQ169251\tnormal\t\t\t\tQ169251\t\t\t\t\t\t\titem\nQ8-P279-1\tQ8\tP279\tQ16748867\tnormal\t\t\t\tQ16748867\t\t\t\t\t\t\titem\nQ8-P460-1\tQ8\tP460\tQ935526\tnormal\t\t\t\tQ935526\t\t\t\t\t\t\titem\ngzcat: error writing to output: Broken pipe\ngzcat: /Users/pedroszekely/data/wikidata-20200504/wikidata-item-edges.tsv.gz: uncompress failed\n"
]
],
[
[
"Let's do a sanity check to make sure that we have the edges that we want.\nWe can do this by counting how many edges of each `entity-type`. \nGood news, we only have items and properties.",
"_____no_output_____"
]
],
[
[
"!time gzcat $MY/wikidata-item-edges.tsv.gz | kgtk unique $MY/wikidata-item-edges.tsv.gz --column 'node2;entity-type'",
"node1\tlabel\tnode2\nitem\tcount\t460737401\nproperty\tcount\t26579\ngzcat: error writing to output: Broken pipe\ngzcat: /Users/pedroszekely/data/wikidata-20200504/wikidata-item-edges.tsv.gz: uncompress failed\n\nreal\t21m44.450s\nuser\t21m29.078s\nsys\t0m7.958s\n"
]
],
[
[
"We only needd `node`, `label` and `node2`, so let's remove the other columns",
"_____no_output_____"
]
],
[
[
"!time gzcat $MY/wikidata-item-edges.tsv.gz | kgtk remove-columns -c 'id,rank,node2;magnitude,node2;unit,node2;date,node2;item,node2;lower,node2;upper,node2;latitude,node2;longitude,node2;precision,node2;calendar,node2;entity-type' \\\n | gzip > $MY/wikidata-item-edges-only.tsv.gz",
"\nreal\t35m11.023s\nuser\t56m9.951s\nsys\t2m37.521s\n"
],
[
"!gzcat $MY/wikidata-item-edges-only.tsv.gz | head",
"node1\tlabel\tnode2\nQ8\tP31\tQ331769\nQ8\tP31\tQ60539479\nQ8\tP31\tQ9415\nQ8\tP1343\tQ20743760\nQ8\tP1343\tQ1970746\nQ8\tP1343\tQ19180675\nQ8\tP461\tQ169251\nQ8\tP279\tQ16748867\nQ8\tP460\tQ935526\ngzcat: error writing to output: Broken pipe\ngzcat: /Users/pedroszekely/data/wikidata-20200504/wikidata-item-edges-only.tsv.gz: uncompress failed\n"
],
[
"!gunzip $MY/wikidata-item-edges-only.tsv.gz",
"_____no_output_____"
]
],
[
[
"The `kgtk graph-statistics` command will compute pagerank. It will run out of memory on a laptop with 16GB of memory.",
"_____no_output_____"
]
],
[
[
"!time kgtk graph_statistics --directed --degrees --pagerank --log $MY/log.txt -i $MY/wikidata-item-edges-only.tsv > $MY/wikidata-pagerank-degrees.tsv",
"/bin/sh: line 1: 89795 Killed: 9 kgtk graph-statistics --directed --degrees --pagerank --log $MY/log.txt -i $MY/wikidata-item-edges-only.tsv > $MY/wikidata-pagerank-degrees.tsv\n\nreal\t32m57.832s\nuser\t19m47.624s\nsys\t8m58.352s\n"
]
],
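[
[
"# Hedged alternative sketch (an addition, not part of the original workflow): for a smaller\n# sample of the edge file, pagerank can also be computed directly with networkx. The file name\n# is hypothetical; the full 460M-edge file would still not fit in 16GB of RAM.\nimport gzip\nimport networkx as nx\n\nG = nx.DiGraph()\nwith gzip.open(\"wikidata-item-edges-sample.tsv.gz\", \"rt\") as f:   # hypothetical sampled file\n    next(f)                                   # skip the node1/label/node2 header\n    for line in f:\n        node1, _, node2 = line.rstrip(\"\\n\").split(\"\\t\")\n        G.add_edge(node1, node2)\n\npr = nx.pagerank(G, alpha=0.85)               # standard damping factor\nprint(sorted(pr.items(), key=lambda kv: -kv[1])[:10])",
"_____no_output_____"
]
],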
[
[
"We ran it on a server with 256GM of memory. It used 50GB and produced the following files:",
"_____no_output_____"
]
],
[
[
"!exa -l \"$WD\"/*sorted*",
".\u001b[1;33mr\u001b[31mw\u001b[0m\u001b[38;5;244m-------\u001b[0m \u001b[1;32m735\u001b[0m\u001b[32mM\u001b[0m \u001b[1;33mpedroszekely\u001b[0m \u001b[34m 4 Jun 16:21\u001b[0m \u001b[36m/Volumes/GoogleDrive/Shared drives/KGTK/datasets/wikidata-20200504/\u001b[31mwikidata-in-degree-only-sorted.tsv.gz\u001b[0m\n.\u001b[1;33mr\u001b[31mw\u001b[0m\u001b[38;5;244m-------\u001b[0m \u001b[1;32m764\u001b[0m\u001b[32mM\u001b[0m \u001b[1;33mpedroszekely\u001b[0m \u001b[34m 4 Jun 16:19\u001b[0m \u001b[36m/Volumes/GoogleDrive/Shared drives/KGTK/datasets/wikidata-20200504/\u001b[31mwikidata-out-degree-only-sorted.tsv.gz\u001b[0m\n.\u001b[1;33mr\u001b[31mw\u001b[0m\u001b[38;5;244m-------\u001b[0m@ \u001b[1;32m928\u001b[0m\u001b[32mM\u001b[0m \u001b[1;33mpedroszekely\u001b[0m \u001b[34m 5 Jun 0:21\u001b[0m \u001b[36m/Volumes/GoogleDrive/Shared drives/KGTK/datasets/wikidata-20200504/\u001b[31mwikidata-pagerank-only-sorted.tsv.gz\u001b[0m\n"
],
[
"!gzcat \"$WD/wikidata-pagerank-only-sorted.tsv.gz\" | head",
"node1\tproperty\tnode2\tid\nQ13442814\tvertex_pagerank\t0.02422254325848587\tQ13442814-vertex_pagerank-881612\nQ1860\tvertex_pagerank\t0.00842243515354162\tQ1860-vertex_pagerank-140\nQ5\tvertex_pagerank\t0.0073505352600377934\tQ5-vertex_pagerank-188\nQ5633421\tvertex_pagerank\t0.005898322426631837\tQ5633421-vertex_pagerank-101732\nQ21502402\tvertex_pagerank\t0.005796874633668408\tQ21502402-vertex_pagerank-4838249\nQ54812269\tvertex_pagerank\t0.005117345954282296\tQ54812269-vertex_pagerank-4838258\nQ1264450\tvertex_pagerank\t0.004881314896960181\tQ1264450-vertex_pagerank-18326\nQ602358\tvertex_pagerank\t0.004546331287981006\tQ602358-vertex_pagerank-587\nQ53869507\tvertex_pagerank\t0.0038679964665001417\tQ53869507-vertex_pagerank-3160055\ngzcat: error writing to output: Broken pipe\ngzcat: /Volumes/GoogleDrive/Shared drives/KGTK/datasets/wikidata-20200504/wikidata-pagerank-only-sorted.tsv.gz: uncompress failed\n"
]
],
[
[
"Oh, the `graph_statistics` command is not using standard column naming, using `property` instead of `label`.\nThis will be fixed, for now, let's rename the columns.",
"_____no_output_____"
]
],
[
[
"!kgtk rename-col -i \"$WD/wikidata-pagerank-only-sorted.tsv.gz\" --mode NONE --output-columns node1 label node2 id | gzip > $MY/wikidata-pagerank-only-sorted.tsv.gz",
"_____no_output_____"
],
[
"!gzcat $MY/wikidata-pagerank-only-sorted.tsv.gz | head",
"node1\tlabel\tnode2\tid\nQ13442814\tvertex_pagerank\t0.02422254325848587\tQ13442814-vertex_pagerank-881612\nQ1860\tvertex_pagerank\t0.00842243515354162\tQ1860-vertex_pagerank-140\nQ5\tvertex_pagerank\t0.0073505352600377934\tQ5-vertex_pagerank-188\nQ5633421\tvertex_pagerank\t0.005898322426631837\tQ5633421-vertex_pagerank-101732\nQ21502402\tvertex_pagerank\t0.005796874633668408\tQ21502402-vertex_pagerank-4838249\nQ54812269\tvertex_pagerank\t0.005117345954282296\tQ54812269-vertex_pagerank-4838258\nQ1264450\tvertex_pagerank\t0.004881314896960181\tQ1264450-vertex_pagerank-18326\nQ602358\tvertex_pagerank\t0.004546331287981006\tQ602358-vertex_pagerank-587\nQ53869507\tvertex_pagerank\t0.0038679964665001417\tQ53869507-vertex_pagerank-3160055\ngzcat: error writing to output: Broken pipe\ngzcat: /Users/pedroszekely/data/wikidata-20200504/wikidata-pagerank-only-sorted.tsv.gz: uncompress failed\n"
]
],
[
[
"Let's put the labels on the entity labels as columns so that we can read what is what. To do that, we concatenate the pagerank file with the labels file, and then ask kgtk to lift the labels into new columns.",
"_____no_output_____"
]
],
[
[
"!time kgtk cat -i \"$MY/wikidata_labels.tsv\" $MY/pagerank.tsv | gzip > $MY/pagerank-and-labels.tsv.gz",
"\nreal\t10m55.396s\nuser\t16m15.752s\nsys\t0m17.351s\n"
],
[
"!time kgtk lift -i $MY/pagerank-and-labels.tsv.gz | gzip > \"$WD/wikidata-pagerank-en.tsv.gz\"",
"\nreal\t32m37.811s\nuser\t11m5.594s\nsys\t10m30.283s\n"
]
],
[
[
"Now we can look at the labels. Here are the top 20 pagerank items in Wikidata:",
"_____no_output_____"
]
],
[
[
"!gzcat \"$WD/wikidata-pagerank-en.tsv.gz\" | head -20",
"node1\tlabel\tnode2\tid\tnode1;label\tlabel;label\tnode2;label\nQ13442814\tvertex_pagerank\t0.02422254325848587\tQ13442814-vertex_pagerank-881612\t'scholarly article'@en\t\t\nQ1860\tvertex_pagerank\t0.00842243515354162\tQ1860-vertex_pagerank-140\t'English'@en\t\t\nQ5\tvertex_pagerank\t0.0073505352600377934\tQ5-vertex_pagerank-188\t'human'@en\t\t\nQ5633421\tvertex_pagerank\t0.005898322426631837\tQ5633421-vertex_pagerank-101732\t'scientific journal'@en\t\t\nQ21502402\tvertex_pagerank\t0.005796874633668408\tQ21502402-vertex_pagerank-4838249\t'property constraint'@en\t\t\nQ54812269\tvertex_pagerank\t0.005117345954282296\tQ54812269-vertex_pagerank-4838258\t'WikibaseQualityConstraints'@en\t\t\nQ1264450\tvertex_pagerank\t0.004881314896960181\tQ1264450-vertex_pagerank-18326\t'J2000.0'@en\t\t\nQ602358\tvertex_pagerank\t0.004546331287981006\tQ602358-vertex_pagerank-587\t'Brockhaus and Efron Encyclopedic Dictionary'@en\t\t\nQ53869507\tvertex_pagerank\t0.0038679964665001417\tQ53869507-vertex_pagerank-3160055\t'property scope constraint'@en\t\t\nQ30\tvertex_pagerank\t0.003722615192558219\tQ30-vertex_pagerank-53\t'United States of America'@en\t\t\nQ2657718\tvertex_pagerank\t0.0036754039394037105\tQ2657718-vertex_pagerank-2969\t'Armenian Soviet Encyclopedia'@en\t\t\nQ21503250\tvertex_pagerank\t0.0036258228083834655\tQ21503250-vertex_pagerank-1652825\t'type constraint'@en\t\t\nQ19902884\tvertex_pagerank\t0.003403993346207395\tQ19902884-vertex_pagerank-4843313\t'Wikidata property definition'@en\t\t\nQ6581097\tvertex_pagerank\t0.0030890199307556172\tQ6581097-vertex_pagerank-128\t'male'@en\t\t\nQ21510865\tvertex_pagerank\t0.0029815432838705648\tQ21510865-vertex_pagerank-1652828\t'value type constraint'@en\t\t\nP2302\tvertex_pagerank\t0.0028243647567065384\tP2302-vertex_pagerank-20767739\t'property constraint'@en\t\t\nQ16521\tvertex_pagerank\t0.0028099172909745035\tQ16521-vertex_pagerank-794\t'taxon'@en\t\t\nQ21502838\tvertex_pagerank\t0.0027485333861137183\tQ21502838-vertex_pagerank-1652816\t'conflicts-with constraint'@en\t\t\nQ19652\tvertex_pagerank\t0.0026895742122130316\tQ19652-vertex_pagerank-3428\t'public domain'@en\t\t\ngzcat: error writing to output: Broken pipe\ngzcat: /Volumes/GoogleDrive/Shared drives/KGTK/datasets/wikidata-20200504/wikidata-pagerank-en.tsv.gz: uncompress failed\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7701206dc3c85d46884addd7304395e0b0a9ab5 | 650,962 | ipynb | Jupyter Notebook | 0025/HMC revised.ipynb | genkuroki/public | 339ea5dfd424492a6b21d1df299e52d48902de18 | [
"MIT"
] | 10 | 2021-06-06T00:33:49.000Z | 2022-01-24T06:56:08.000Z | 0025/HMC revised.ipynb | genkuroki/public | 339ea5dfd424492a6b21d1df299e52d48902de18 | [
"MIT"
] | null | null | null | 0025/HMC revised.ipynb | genkuroki/public | 339ea5dfd424492a6b21d1df299e52d48902de18 | [
"MIT"
] | 3 | 2021-08-02T11:58:34.000Z | 2021-12-11T11:46:05.000Z | 117.184878 | 8,888 | 0.630594 | [
[
[
"https://twitter.com/iitenki_moruten/status/1467683477474930688",
"_____no_output_____"
]
],
[
[
"# Original: https://github.com/moruten/julia-code/blob/1dafca3e1a4e3b36445c2263e440f6e4056b90aa/2021-12-6-test-no1.ipynb\n\nusing Plots\nusing Random\nusing Distributions\nusing QuadGK\n\n#============関数定義============================================#\nfunction action(x)\n 0.5*x*x\nend\n\nfunction deriv_action(x)\n x\nend\n\nfunction hamiltonian(x,p)\n action(x) + 0.5*p*p\nend\n\nfunction HMC_update(x,Nt,dt)\n #backup\n x_old = x\n p = rand(Normal(0,1))\n \n #check\n H_ini = hamiltonian(x,p)\n x = molecular_dynamics!(x,p,Nt,dt)\n H_fin = hamiltonian(x,p)\n \n r = rand()\n ΔH = H_fin-H_ini\n if r < exp(-ΔH)\n return x,1 #accept\n else \n return x_old,0\n end \nend\n\nfunction molecular_dynamics!(x,p,Nt,dt) \n force = 0.0\n #1/2step\n x = x + p * 0.5*dt\n #1~Nt-1 step\n for j=1:(Nt-1)\n force = deriv_action(x)\n p = p - dt * force\n x = x + dt * p\n end\n #Nt step\n p = p - dt * deriv_action(x)\n x = x + 0.5 * dt * p\n return x\nend\n#============関数終わり============================================#\n\n#============計算=========================================================================#\n #セットアップ\n Ntest = 300000\n Nt = 20\n dt = 1.0/Nt\n conf_vec = zeros(Ntest)\n accept_count = 0\n ret = 0\n x = 0.0\n\n sumxx = 0.0\n sumx = 0.0\n\n #計算\n for i=1:Ntest\n x,ret = HMC_update(x,Nt,dt)\n accept_count += ret\n conf_vec[i] = x\n \n sumx += x\n sumxx += x*x\n end\n\n println(\"P(accept) = $(accept_count/Ntest)\")\n println(\"<x> = $(sumx/Ntest)\")\n println(\"<xx> = $(sumxx/Ntest)\")\n\n#=======確認=============================================================================#\n xr = range(-5,5,length=1000)\n f1(x) = exp(-0.5*x^2)\n f2(x) = exp(-x^2)\n Z1,error1 = quadgk(f1,-Inf,Inf)\n Z2,error2 = quadgk(f2,-Inf,Inf)\n g1(x) = f1(x)/Z1\n g2(x) = f2(x)/Z2\n histogram(conf_vec,norm=:true,label=\"data\")\n plot!(xr,[g1.(xr) g2.(xr)],lw=3,label=[\"exp(-0.5x^2)/Z1\" \"exp(-x^2)/Z2\"])\n#========================================================================================#",
"P(accept) = 0.81538\n<x> = -0.0014051679592651703\n<xx> = 0.5005307878714936\n"
],
[
"# Revised 1\n\nusing Plots\nusing Random\nusing Distributions\nusing QuadGK\n\n#============関数定義============================================#\nfunction action(x)\n 0.5*x*x\nend\n\nfunction deriv_action(x)\n x\nend\n\nfunction hamiltonian(x,p)\n action(x) + 0.5*p*p\nend\n\nfunction HMC_update(x,Nt,dt)\n #backup\n x_old = x\n p = rand(Normal(0,1))\n \n #check\n H_ini = hamiltonian(x,p)\n x, p = molecular_dynamics!(x,p,Nt,dt) # <========== ここ\n H_fin = hamiltonian(x,p)\n \n r = rand()\n ΔH = H_fin-H_ini\n if r < exp(-ΔH)\n return x,1 #accept\n else \n return x_old,0\n end \nend\n\nfunction molecular_dynamics!(x,p,Nt,dt) \n force = 0.0\n #1/2step\n x = x + p * 0.5*dt\n #1~Nt-1 step\n for j=1:(Nt-1)\n force = deriv_action(x)\n p = p - dt * force\n x = x + dt * p\n end\n #Nt step\n p = p - dt * deriv_action(x)\n x = x + 0.5 * dt * p\n return x, p # <========== ここ\nend\n#============関数終わり============================================#\n\n#============計算=========================================================================#\n #セットアップ\n Ntest = 300000\n Nt = 20\n dt = 1.0/Nt\n conf_vec = zeros(Ntest)\n accept_count = 0\n ret = 0\n x = 0.0\n\n sumxx = 0.0\n sumx = 0.0\n\n #計算\n for i=1:Ntest\n x,ret = HMC_update(x,Nt,dt)\n accept_count += ret\n conf_vec[i] = x\n \n sumx += x\n sumxx += x*x\n end\n\n println(\"P(accept) = $(accept_count/Ntest)\")\n println(\"<x> = $(sumx/Ntest)\")\n println(\"<xx> = $(sumxx/Ntest)\")\n\n#=======確認=============================================================================#\n xr = range(-5,5,length=1000)\n f1(x) = exp(-0.5*x^2)\n f2(x) = exp(-x^2)\n Z1,error1 = quadgk(f1,-Inf,Inf)\n Z2,error2 = quadgk(f2,-Inf,Inf)\n g1(x) = f1(x)/Z1\n g2(x) = f2(x)/Z2\n histogram(conf_vec,norm=:true,label=\"data\")\n plot!(xr,[g1.(xr) g2.(xr)],lw=3,label=[\"exp(-0.5x^2)/Z1\" \"exp(-x^2)/Z2\"])\n#========================================================================================#",
"P(accept) = 0.99989\n<x> = -0.007191097966076688\n<xx> = 0.9978090923326245\n"
],
[
"# Revised 1 - second test\n\nusing Plots\nusing Random\nusing Distributions\nusing QuadGK\n\n#============関数定義============================================#\nfunction action(x)\n 3(x^2 - 1)^2\nend\n\nfunction deriv_action(x)\n 6x*(x^2 - 1)\nend\n\nfunction hamiltonian(x,p)\n action(x) + 0.5*p*p\nend\n\nfunction HMC_update(x,Nt,dt)\n #backup\n x_old = x\n p = rand(Normal(0,1))\n \n #check\n H_ini = hamiltonian(x,p)\n x, p = molecular_dynamics!(x,p,Nt,dt) # <========== ここ\n H_fin = hamiltonian(x,p)\n \n r = rand()\n ΔH = H_fin-H_ini\n if r < exp(-ΔH)\n return x,1 #accept\n else \n return x_old,0\n end \nend\n\nfunction molecular_dynamics!(x,p,Nt,dt) \n force = 0.0\n #1/2step\n x = x + p * 0.5*dt\n #1~Nt-1 step\n for j=1:(Nt-1)\n force = deriv_action(x)\n p = p - dt * force\n x = x + dt * p\n end\n #Nt step\n p = p - dt * deriv_action(x)\n x = x + 0.5 * dt * p\n return x, p # <========== ここ\nend\n#============関数終わり============================================#\n\n#============計算=========================================================================#\n #セットアップ\n Ntest = 300000\n Nt = 20\n dt = 1.0/Nt\n conf_vec = zeros(Ntest)\n accept_count = 0\n ret = 0\n x = 0.0\n\n sumxx = 0.0\n sumx = 0.0\n\n #計算\n for i=1:Ntest\n x,ret = HMC_update(x,Nt,dt)\n accept_count += ret\n conf_vec[i] = x\n \n sumx += x\n sumxx += x*x\n end\n\n println(\"P(accept) = $(accept_count/Ntest)\")\n println(\"<x> = $(sumx/Ntest)\")\n println(\"<xx> = $(sumxx/Ntest)\")\n\n#=======確認=============================================================================#\n xr = range(-2,2,length=1000)\n f(x) = exp(-action(x))\n Z,error1 = quadgk(f,-Inf,Inf)\n g(x) = f(x)/Z\n histogram(conf_vec,norm=:true,label=\"data\")\n plot!(xr,g.(xr),lw=3,label=\"exp(-action(x))/Z\", legend=:outertop)\n#========================================================================================#",
"P(accept) = 0.8684833333333334\n<x> = 0.0023294366262112307\n<xx> = 0.8902164318390459\n"
],
[
"# Revised 2\n\nusing Plots\nusing Random\nusing Distributions\nusing QuadGK\n\n#============関数定義============================================#\nfunction action(x)\n x^2/2\nend\n\nfunction deriv_action(x)\n x\nend\n\nfunction hamiltonian(x,p)\n action(x) + 0.5*p*p\nend\n\nfunction HMC_update(x,Nt,dt)\n #backup\n x_old = x\n p = rand(Normal(0,1))\n \n #check\n H_ini = hamiltonian(x,p)\n x, p = molecular_dynamics!(x,p,Nt,dt) # <========== ここ\n H_fin = hamiltonian(x,p)\n \n r = rand()\n ΔH = H_fin-H_ini\n if r < exp(-ΔH)\n return x,1 #accept\n else \n return x_old,0\n end \nend\n\nfunction molecular_dynamics!(x,p,Nt,dt)\n p -= deriv_action(x) * dt/2\n x += p * dt\n for j in 2:Nt\n p -= deriv_action(x) * dt\n x += p * dt\n end\n p -= deriv_action(x) * dt/2\n return x, p # <========== ここ\nend\n#============関数終わり============================================#\n\n#============計算=========================================================================#\n #セットアップ\n Ntest = 300000\n Nt = 20\n dt = 1.0/Nt\n conf_vec = zeros(Ntest)\n accept_count = 0\n ret = 0\n x = 0.0\n\n sumxx = 0.0\n sumx = 0.0\n\n #計算\n for i=1:Ntest\n x,ret = HMC_update(x,Nt,dt)\n accept_count += ret\n conf_vec[i] = x\n \n sumx += x\n sumxx += x*x\n end\n\n println(\"P(accept) = $(accept_count/Ntest)\")\n println(\"<x> = $(sumx/Ntest)\")\n println(\"<xx> = $(sumxx/Ntest)\")\n\n#=======確認=============================================================================#\n xr = range(-5,5,length=1000)\n f(x) = exp(-action(x))\n Z,error1 = quadgk(f,-Inf,Inf)\n g(x) = f(x)/Z\n histogram(conf_vec,norm=:true,label=\"data\",alpha=0.3,bin=100)\n plot!(xr,g.(xr),lw=3,label=\"exp(-action(x))/Z\", legend=:outertop)\n#========================================================================================#",
"P(accept) = 0.9998066666666666\n<x> = 0.0028861140234913893\n<xx> = 1.0045290660804647\n"
],
[
"# Revised 2 - second test\n\nusing Plots\nusing Random\nusing Distributions\nusing QuadGK\n\n#============関数定義============================================#\nfunction action(x)\n 3(x^2 - 1)^2\nend\n\nfunction deriv_action(x)\n 6x*(x^2 - 1)\nend\n\nfunction hamiltonian(x,p)\n action(x) + 0.5*p*p\nend\n\nfunction HMC_update(x,Nt,dt)\n #backup\n x_old = x\n p = rand(Normal(0,1))\n \n #check\n H_ini = hamiltonian(x,p)\n x, p = molecular_dynamics!(x,p,Nt,dt) # <========== ここ\n H_fin = hamiltonian(x,p)\n \n r = rand()\n ΔH = H_fin-H_ini\n if r < exp(-ΔH)\n return x,1 #accept\n else \n return x_old,0\n end \nend\n\nfunction molecular_dynamics!(x,p,Nt,dt)\n p -= deriv_action(x) * dt/2\n x += p * dt\n for j in 2:Nt\n p -= deriv_action(x) * dt\n x += p * dt\n end\n p -= deriv_action(x) * dt/2\n return x, p # <========== ここ\nend\n#============関数終わり============================================#\n\n#============計算=========================================================================#\n #セットアップ\n Ntest = 300000\n Nt = 20\n dt = 1.0/Nt\n conf_vec = zeros(Ntest)\n accept_count = 0\n ret = 0\n x = 0.0\n\n sumxx = 0.0\n sumx = 0.0\n\n #計算\n for i=1:Ntest\n x,ret = HMC_update(x,Nt,dt)\n accept_count += ret\n conf_vec[i] = x\n \n sumx += x\n sumxx += x*x\n end\n\n println(\"P(accept) = $(accept_count/Ntest)\")\n println(\"<x> = $(sumx/Ntest)\")\n println(\"<xx> = $(sumxx/Ntest)\")\n\n#=======確認=============================================================================#\n xr = range(-2,2,length=1000)\n f(x) = exp(-action(x))\n Z,error1 = quadgk(f,-Inf,Inf)\n g(x) = f(x)/Z\n histogram(conf_vec,norm=:true,label=\"data\",alpha=0.3,bin=100)\n plot!(xr,g.(xr),lw=3,label=\"exp(-action(x))/Z\", legend=:outertop)\n#========================================================================================#",
"P(accept) = 0.8680966666666666\n<x> = -0.004413428035654437\n<xx> = 0.8885940396252682\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7701411d71301bde072c19ded49f8b32d0a3f6b | 109,789 | ipynb | Jupyter Notebook | notebooks/wythoff_exp51.ipynb | CoAxLab/azad | d1498069dd8856e93ae077b34dd7c9f1c7ce80e6 | [
"MIT"
] | 6 | 2018-09-11T21:06:12.000Z | 2022-01-28T17:36:52.000Z | notebooks/wythoff_exp51.ipynb | CoAxLab/azad | d1498069dd8856e93ae077b34dd7c9f1c7ce80e6 | [
"MIT"
] | null | null | null | notebooks/wythoff_exp51.ipynb | CoAxLab/azad | d1498069dd8856e93ae077b34dd7c9f1c7ce80e6 | [
"MIT"
] | 2 | 2018-09-12T00:40:52.000Z | 2018-10-29T15:45:54.000Z | 369.659933 | 62,560 | 0.939775 | [
[
[
"# Analysis - exp51\n\n- DQN with a conv net. First tuning attempt.",
"_____no_output_____"
]
],
[
[
"import os\nimport csv\nimport numpy as np\nimport torch as th\n\nfrom glob import glob\nfrom pprint import pprint\n\nimport matplotlib\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport seaborn as sns\nsns.set(font_scale=1.5)\nsns.set_style('ticks')\n\nmatplotlib.rcParams.update({'font.size': 16})\nmatplotlib.rc('axes', titlesize=16)\n\nfrom notebook_helpers import load_params\nfrom notebook_helpers import load_monitored\nfrom notebook_helpers import join_monitored\nfrom notebook_helpers import score_summary\n\ndef load_data(path, run_index=(0, 20)):\n runs = range(run_index[0], run_index[1]+1)\n exps = []\n for r in runs:\n file = os.path.join(path, \"run_{}_monitor.csv\".format(int(r)))\n try:\n mon = load_monitored(file)\n except FileNotFoundError:\n mon = None\n exps.append(mon)\n return exps",
"_____no_output_____"
]
],
[
[
"# Load data",
"_____no_output_____"
]
],
[
[
"path = \"/Users/qualia/Code/azad/data/wythoff/exp51/\"\nexp_51 = load_data(path, run_index=(0, 99))",
"_____no_output_____"
],
[
"print(len(exp_51))",
"100\n"
],
[
"pprint(exp_51[1].keys())\npprint(exp_51[1]['score'][:20])",
"dict_keys(['file', 'episode', 'loss', 'score'])\n[0.0851063829787234,\n 0.07476635514018691,\n 0.07692307692307693,\n 0.06923076923076923,\n 0.06896551724137931,\n 0.06535947712418301,\n 0.07317073170731707,\n 0.06976744186046512,\n 0.06593406593406594,\n 0.06030150753768844,\n 0.06944444444444445,\n 0.07234042553191489,\n 0.06938775510204082,\n 0.06640625,\n 0.06439393939393939,\n 0.06593406593406594,\n 0.06382978723404255,\n 0.06688963210702341,\n 0.0664451827242525,\n 0.0673076923076923]\n"
]
],
[
[
"# Plots\n## All parameter summary\n\nHow's it look overall.\n\n### Timecourse",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(6, 3))\nfor r, mon in enumerate(exp_51):\n if mon is not None:\n _ = plt.plot(mon['episode'], mon['score'], color='black')\n _ = plt.ylim(0, 1)\n_ = plt.ylabel(\"Optimal score\")\n_ = plt.tight_layout() \n_ = plt.xlabel(\"Episode\")",
"_____no_output_____"
]
],
[
[
"### Histograms of final values",
"_____no_output_____"
]
],
[
[
"data = []\nplt.figure(figsize=(6, 3))\nfor r, mon in enumerate(exp_51):\n if mon is not None:\n data.append(np.max(mon['score'])) \n\n_ = plt.hist(data, bins=5, range=(0,1), color='black')\n_ = plt.xlabel(\"Max score\")\n_ = plt.ylabel(\"Count\")\n_ = plt.tight_layout() ",
"_____no_output_____"
],
[
"data = []\nplt.figure(figsize=(6, 3))\nfor r, mon in enumerate(exp_51):\n if mon is not None:\n data.append(np.mean(mon['score'])) \n\n_ = plt.hist(data, bins=5, range=(0,1), color='black')\n_ = plt.xlabel(\"Mean score\")\n_ = plt.ylabel(\"Count\")\n_ = plt.tight_layout() ",
"_____no_output_____"
]
],
[
[
"# Conclusion",
"_____no_output_____"
],
[
"- Terrible. No reason to do any more analysis",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e7701e6bf1de93d6ec19850db9cc577d15f934da | 21,796 | ipynb | Jupyter Notebook | model_notebooks/RPE/model.ipynb | indralab/adeft_indra | 6f039b58b6dea5eefa529cf15afaffff2d485513 | [
"BSD-2-Clause"
] | null | null | null | model_notebooks/RPE/model.ipynb | indralab/adeft_indra | 6f039b58b6dea5eefa529cf15afaffff2d485513 | [
"BSD-2-Clause"
] | null | null | null | model_notebooks/RPE/model.ipynb | indralab/adeft_indra | 6f039b58b6dea5eefa529cf15afaffff2d485513 | [
"BSD-2-Clause"
] | null | null | null | 34.432859 | 147 | 0.549092 | [
[
[
"import os\nimport json\nimport pickle\nimport random\nfrom collections import defaultdict, Counter\n\nfrom indra.literature.adeft_tools import universal_extract_text\nfrom indra.databases.hgnc_client import get_hgnc_name, get_hgnc_id\n\nfrom adeft.discover import AdeftMiner\nfrom adeft.gui import ground_with_gui\nfrom adeft.modeling.label import AdeftLabeler\nfrom adeft.modeling.classify import AdeftClassifier\nfrom adeft.disambiguate import AdeftDisambiguator\n\n\nfrom adeft_indra.ground.ground import AdeftGrounder\nfrom adeft_indra.model_building.s3 import model_to_s3\nfrom adeft_indra.model_building.escape import escape_filename\nfrom adeft_indra.db.content import get_pmids_for_agent_text, get_pmids_for_entity, \\\n get_plaintexts_for_pmids",
"_____no_output_____"
],
[
"adeft_grounder = AdeftGrounder()",
"_____no_output_____"
],
[
"shortforms = ['RPE']\nmodel_name = ':'.join(sorted(escape_filename(shortform) for shortform in shortforms))\nresults_path = os.path.abspath(os.path.join('../..', 'results', model_name))",
"_____no_output_____"
],
[
"miners = dict()\nall_texts = {}\nfor shortform in shortforms:\n pmids = get_pmids_for_agent_text(shortform)\n if len(pmids) > 10000:\n pmids = random.choices(pmids, k=10000)\n text_dict = get_plaintexts_for_pmids(pmids, contains=shortforms)\n text_dict = {pmid: text for pmid, text in text_dict.items() if len(text) > 5}\n miners[shortform] = AdeftMiner(shortform)\n miners[shortform].process_texts(text_dict.values())\n all_texts.update(text_dict)\n\nlongform_dict = {}\nfor shortform in shortforms:\n longforms = miners[shortform].get_longforms()\n longforms = [(longform, count, score) for longform, count, score in longforms\n if count*score > 1]\n longform_dict[shortform] = longforms\n \ncombined_longforms = Counter()\nfor longform_rows in longform_dict.values():\n combined_longforms.update({longform: count for longform, count, score\n in longform_rows})\ngrounding_map = {}\nnames = {}\nfor longform in combined_longforms:\n groundings = adeft_grounder.ground(longform)\n if groundings:\n grounding = groundings[0]['grounding']\n grounding_map[longform] = grounding\n names[grounding] = groundings[0]['name']\nlongforms, counts = zip(*combined_longforms.most_common())\npos_labels = []",
"_____no_output_____"
],
[
"list(zip(longforms, counts))",
"_____no_output_____"
],
[
"grounding_map, names, pos_labels = ground_with_gui(longforms, counts, \n grounding_map=grounding_map,\n names=names, pos_labels=pos_labels, no_browser=True, port=8891)",
"_____no_output_____"
],
[
"result = [grounding_map, names, pos_labels]",
"_____no_output_____"
],
[
"result",
"_____no_output_____"
],
[
"grounding_map, names, pos_labels = [{'positive prediction error': 'ungrounded',\n 'r phycoerythrin': 'MESH:D010799',\n 'radical prostatectomy': 'ungrounded',\n 'radix puerariae extract': 'ungrounded',\n 'rapid palatal expansion': 'ungrounded',\n 'rat pancreatic extract': 'ungrounded',\n 'rat placenta extract': 'ungrounded',\n 'rat prostatic extract': 'ungrounded',\n 'rating and perceptual': 'ungrounded',\n 'rating of perceived exertion': 'NCIT:C122028',\n 'rating of perceived exertion scale': 'NCIT:C122028',\n 're expansion pulmonary edema': 'MESH:D011654',\n 'refractory partial epilepsy': 'ungrounded',\n 'related packaging efficiently': 'ungrounded',\n 'related predispositional effects': 'ungrounded',\n 'related proliferative effects': 'ungrounded',\n 'respiratory protective equipment': 'MESH:D012134',\n 'retina pigment epithelial': 'MESH:D055213',\n 'retina pigment epithelium': 'MESH:D055213',\n 'retinal pigment': 'MESH:D055213',\n 'retinal pigment endothelial': 'MESH:D055213',\n 'retinal pigment epithelia': 'MESH:D055213',\n 'retinal pigment epithelial': 'MESH:D055213',\n 'retinal pigment epithelial cells': 'MESH:D055213',\n 'retinal pigment epithelial layer': 'MESH:D055213',\n 'retinal pigment epithelium': 'MESH:D055213',\n 'retinal pigment epithelium 1': 'MESH:D055213',\n 'retinal pigment epithelium cells': 'MESH:D055213',\n 'retinal pigmentary epithelium': 'MESH:D055213',\n 'reward prediction error': 'reward_prediction_error',\n 'ribulose 5 phosphate 3 epimerase': 'HGNC:10293',\n 'ribulose phosphate 3 epimerase': 'HGNC:10293',\n 'rice prolamin extract': 'ungrounded',\n 'subretinal pigment epithelium': 'MESH:D055213'},\n {'MESH:D010799': 'Phycoerythrin',\n 'NCIT:C122028': 'Rating of Perceived Exertion',\n 'MESH:D011654': 'Pulmonary Edema',\n 'MESH:D012134': 'Respiratory Protective Devices',\n 'MESH:D055213': 'Retinal Pigment Epithelium',\n 'reward_prediction_error': 'reward_prediction_error',\n 'HGNC:10293': 'RPE'},\n ['HGNC:10293', 'MESH:D055213', 'NCIT:C122028']]",
"_____no_output_____"
],
[
"excluded_longforms = []",
"_____no_output_____"
],
[
"grounding_dict = {shortform: {longform: grounding_map[longform] \n for longform, _, _ in longforms if longform in grounding_map\n and longform not in excluded_longforms}\n for shortform, longforms in longform_dict.items()}\nresult = [grounding_dict, names, pos_labels]\n\nif not os.path.exists(results_path):\n os.mkdir(results_path)\nwith open(os.path.join(results_path, f'{model_name}_preliminary_grounding_info.json'), 'w') as f:\n json.dump(result, f)",
"_____no_output_____"
],
[
"additional_entities = {'HGNC:10293': ['RPE', ['RPE', 'ribulose-5-phosphate-3-epimerase']]}",
"_____no_output_____"
],
[
"unambiguous_agent_texts = {}",
"_____no_output_____"
],
[
"labeler = AdeftLabeler(grounding_dict)\ncorpus = labeler.build_from_texts((text, pmid) for pmid, text in all_texts.items())\nagent_text_pmid_map = defaultdict(list)\nfor text, label, id_ in corpus:\n agent_text_pmid_map[label].append(id_)\n\nentity_pmid_map = {entity: set(get_pmids_for_entity(*entity.split(':', maxsplit=1),\n major_topic=True))for entity in additional_entities}",
"_____no_output_____"
],
[
"intersection1 = []\nfor entity1, pmids1 in entity_pmid_map.items():\n for entity2, pmids2 in entity_pmid_map.items():\n intersection1.append((entity1, entity2, len(pmids1 & pmids2)))",
"_____no_output_____"
],
[
"intersection2 = []\nfor entity1, pmids1 in agent_text_pmid_map.items():\n for entity2, pmids2 in entity_pmid_map.items():\n intersection2.append((entity1, entity2, len(set(pmids1) & pmids2)))",
"_____no_output_____"
],
[
"intersection1",
"_____no_output_____"
],
[
"intersection2",
"_____no_output_____"
],
[
"all_used_pmids = set()\nfor entity, agent_texts in unambiguous_agent_texts.items():\n used_pmids = set()\n for agent_text in agent_texts[1]:\n pmids = set(get_pmids_for_agent_text(agent_text))\n new_pmids = list(pmids - all_texts.keys() - used_pmids)\n text_dict = get_plaintexts_for_pmids(new_pmids, contains=agent_texts)\n corpus.extend([(text, entity, pmid) for pmid, text in text_dict.items() if len(text) >= 5])\n used_pmids.update(new_pmids)\n all_used_pmids.update(used_pmids)\n \nfor entity, pmids in entity_pmid_map.items():\n new_pmids = list(set(pmids) - all_texts.keys() - all_used_pmids)\n if len(new_pmids) > 10000:\n new_pmids = random.choices(new_pmids, k=10000)\n _, contains = additional_entities[entity]\n text_dict = get_plaintexts_for_pmids(new_pmids, contains=contains)\n corpus.extend([(text, entity, pmid) for pmid, text in text_dict.items() if len(text) >= 5])",
"_____no_output_____"
],
[
"names.update({key: value[0] for key, value in additional_entities.items()})\nnames.update({key: value[0] for key, value in unambiguous_agent_texts.items()})\npos_labels = list(set(pos_labels) | additional_entities.keys() |\n unambiguous_agent_texts.keys())",
"_____no_output_____"
],
[
"%%capture\n\nclassifier = AdeftClassifier(shortforms, pos_labels=pos_labels, random_state=1729)\nparam_grid = {'C': [100.0], 'max_features': [10000]}\ntexts, labels, pmids = zip(*corpus)\nclassifier.cv(texts, labels, param_grid, cv=5, n_jobs=5)",
"INFO: [2020-10-30 03:48:48] /adeft/PP/adeft/adeft/modeling/classify.py - Beginning grid search in parameter space:\n{'C': [100.0], 'max_features': [10000]}\nINFO: [2020-10-30 03:49:37] /adeft/PP/adeft/adeft/modeling/classify.py - Best f1 score of 0.9908412380735351 found for parameter values:\n{'logit__C': 100.0, 'tfidf__max_features': 10000}\n"
],
[
"classifier.stats",
"_____no_output_____"
],
[
"disamb = AdeftDisambiguator(classifier, grounding_dict, names)",
"_____no_output_____"
],
[
"disamb.dump(model_name, results_path)",
"_____no_output_____"
],
[
"print(disamb.info())",
"Disambiguation model for RPE\n\nProduces the disambiguations:\n\tPhycoerythrin\tMESH:D010799\n\tPulmonary Edema\tMESH:D011654\n\tRPE*\tHGNC:10293\n\tRating of Perceived Exertion*\tNCIT:C122028\n\tRespiratory Protective Devices\tMESH:D012134\n\tRetinal Pigment Epithelium*\tMESH:D055213\n\treward_prediction_error\treward_prediction_error\n\nClass level metrics:\n--------------------\nGrounding \tCount\tF1 \n Retinal Pigment Epithelium*\t2581\t0.99346\n Rating of Perceived Exertion*\t 107\t0.97064\n reward_prediction_error\t 43\t0.96471\n Ungrounded\t 22\t0.21333\n RPE*\t 7\t0.33333\n Phycoerythrin\t 3\t 0.0\n Pulmonary Edema\t 2\t 0.2\nRespiratory Protective Devices\t 2\t 0.0\n\nWeighted Metrics:\n-----------------\n\tF1 score:\t0.99084\n\tPrecision:\t0.98572\n\tRecall:\t\t0.99629\n\n* Positive labels\nSee Docstring for explanation\n\n"
],
[
"model_to_s3(disamb)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7702683ddac5bb6d3dc4fd01a91ea89d42e0d40 | 75,654 | ipynb | Jupyter Notebook | ner-food-ingredients/03_Generate_counts.ipynb | hertelm/projects | 5c36739855e928af810fe0577edd0adc71919d81 | [
"MIT"
] | 823 | 2019-11-22T17:08:39.000Z | 2022-03-31T03:03:23.000Z | ner-food-ingredients/03_Generate_counts.ipynb | hertelm/projects | 5c36739855e928af810fe0577edd0adc71919d81 | [
"MIT"
] | 46 | 2019-11-25T15:14:05.000Z | 2022-03-31T12:59:45.000Z | ner-food-ingredients/03_Generate_counts.ipynb | hertelm/projects | 5c36739855e928af810fe0577edd0adc71919d81 | [
"MIT"
] | 326 | 2019-11-24T01:31:27.000Z | 2022-03-27T19:48:04.000Z | 36.407122 | 464 | 0.294565 | [
[
[
"## 03: Generate counts\n\nThis script takes a directory of `.csv` files containing entity counts by month in the following format:\n\n```csv\n,2012-01,2012-02\nmeat,1011.0,873.0\nsalt,805.0,897.0\nchicken,694.0,713.0\n```\n\nIt sums the counts from all files, only keeps the `N` most common records and calculates the variance, scaled by the average. This helps select a more \"interesting\" subset of entities with the most variance over time. The result are the most variant entities (minus the most frequent, which tend to be less interesting). The result can be used to create an interactive [bar chart race visualization](https://public.flourish.studio/visualisation/1532208/). ",
"_____no_output_____"
]
],
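[
[
"A minimal sketch (added for illustration, not part of the original script) of the variance-scaled selection on a toy frame. The toy values mirror the format above; the real pipeline uses the helper functions defined in the cells below.\n\n```python\nimport pandas as pd\n\ntoy = pd.DataFrame({'2012-01': [1011.0, 805.0, 694.0],\n                    '2012-02': [873.0, 897.0, 713.0]},\n                   index=['meat', 'salt', 'chicken'])\n# variance scaled by the mean (a coefficient-of-variation-like score)\nscore = toy.var(axis=1) / toy.mean(axis=1)\nprint(score.sort_values(ascending=False))\n```",
"_____no_output_____"
]
],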
[
[
"INPUT_DIR = \"./counts\" # directory of counts file(s) created in the previous step\nOUTPUT_FILE = \"./output_counts.csv\" # path to output file\nMOST_COMMON = 10_000 # number of most common entities to keep\nDROP_MOST_FREQUENT = 10 # number of most frequent entities to drop\nN_TOTAL = 50 # number of results to export",
"_____no_output_____"
],
[
"!pip install pandas",
"_____no_output_____"
],
[
"import csv\nfrom collections import Counter, defaultdict\nfrom pathlib import Path\nimport pandas as pd",
"_____no_output_____"
],
[
"def read_csv(file_):\n counts = Counter()\n for row in csv.DictReader(file_):\n term = row[\"\"]\n for year, freq in row.items():\n if year != \"\" and freq:\n counts[(term, year)] = int(float(freq))\n return counts\n\n\ndef prune_rows(counts_by_term, n):\n totals = Counter()\n for term, counts in counts_by_term.items():\n if \"Total\" in counts:\n total = counts[\"Total\"]\n else:\n total = sum(counts.values())\n totals[term] = total\n pruned = defaultdict(dict)\n for term, _ in totals.most_common(n):\n pruned[term] = counts_by_term[term]\n return pruned\n\n\ndef sum_counts(directory, n=10000):\n directory = Path(directory)\n counts = Counter()\n for path in directory.glob(\"**/*.csv\"):\n with path.open(\"r\", encoding=\"utf8\") as file_:\n counts.update(read_csv(file_))\n by_term = defaultdict(Counter)\n for (term, month), freq in counts.items():\n by_term[term][month] = freq\n records = prune_rows(by_term, n)\n months = set()\n for term, counts in records.items():\n months.update(counts.keys())\n fields = [\"Term\"] + list(sorted(months))\n rows = []\n for term, month_freqs in records.items():\n month_freqs[\"Term\"] = term\n for month in months:\n month_freqs.setdefault(month, 0.0)\n rows.append(month_freqs)\n return pd.DataFrame.from_records(rows, index=\"Term\", columns=fields)\n\n\ndef sort_by_frequency(df):\n most_common = df.sum(axis=1)\n most_common.sort_values(ascending=False, inplace=True)\n return df.loc[most_common.index]\n\n\ndef drop_most_frequent(df, n):\n return sort_by_frequency(df)[n:]\n\n\ndef get_most_variant(df, n, mean_weight=False):\n cvars = df.var(axis=1)\n if mean_weight:\n cvars = cvars / df.mean(axis=1)\n cvars = cvars.sort_values(ascending=False)\n return df.loc[cvars.index][:n]",
"_____no_output_____"
],
[
"DF = sum_counts(INPUT_DIR, MOST_COMMON)\nDF",
"_____no_output_____"
],
[
"SUBSET = drop_most_frequent(DF, DROP_MOST_FREQUENT)\nSUBSET = get_most_variant(SUBSET, N_TOTAL, mean_weight=True)[:200]\nSUBSET = sort_by_frequency(SUBSET)\nSUBSET = SUBSET.cumsum(axis=1)\nSUBSET",
"_____no_output_____"
],
[
"SUBSET.to_csv(OUTPUT_FILE)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7702c950eb1f204ecb49f085ed2d973d0ccbb85 | 57,318 | ipynb | Jupyter Notebook | sveske/sources/11_ScientificProgrammingI/SciPyAnswers_1.v2.ipynb | milica325/unibl_radionica | 34a753a663f9ad6a3f1031cd6755748526046376 | [
"BSD-3-Clause"
] | 2 | 2019-09-18T19:21:44.000Z | 2019-09-19T00:00:25.000Z | sveske/sources/11_ScientificProgrammingI/SciPyAnswers_1.v2.ipynb | milica325/unibl_radionica | 34a753a663f9ad6a3f1031cd6755748526046376 | [
"BSD-3-Clause"
] | null | null | null | sveske/sources/11_ScientificProgrammingI/SciPyAnswers_1.v2.ipynb | milica325/unibl_radionica | 34a753a663f9ad6a3f1031cd6755748526046376 | [
"BSD-3-Clause"
] | 34 | 2019-09-18T14:39:38.000Z | 2019-09-20T06:45:07.000Z | 165.65896 | 15,194 | 0.877682 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e77056d661b5fc766ee5d0c8e1825e4ad4ec191c | 498,059 | ipynb | Jupyter Notebook | notebooks/02.09.Colab_Mission_Coincidences.ipynb | gamedaygeorge/odc-colab | 82e1aea92be3ccaa1e9c83f32bbf0f9ff5183a98 | [
"Apache-2.0"
] | 1 | 2021-07-21T12:35:37.000Z | 2021-07-21T12:35:37.000Z | notebooks/02.09.Colab_Mission_Coincidences.ipynb | gamedaygeorge/odc-colab | 82e1aea92be3ccaa1e9c83f32bbf0f9ff5183a98 | [
"Apache-2.0"
] | null | null | null | notebooks/02.09.Colab_Mission_Coincidences.ipynb | gamedaygeorge/odc-colab | 82e1aea92be3ccaa1e9c83f32bbf0f9ff5183a98 | [
"Apache-2.0"
] | null | null | null | 498,059 | 498,059 | 0.93909 | [
[
[
"<a href=\"https://colab.research.google.com/github/ceos-seo/odc-colab/blob/master/notebooks/02.09.Colab_Mission_Coincidences.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Downloads the odc-colab Python module and runs it to setup ODC.",
"_____no_output_____"
]
],
[
[
"!wget -nc https://raw.githubusercontent.com/ceos-seo/odc-colab/master/odc_colab.py\nfrom odc_colab import odc_colab_init\nodc_colab_init(install_odc_gee=True)",
"_____no_output_____"
]
],
[
[
"Downloads an existing index and populates the new ODC environment with it.",
"_____no_output_____"
]
],
[
[
"from odc_colab import populate_db\npopulate_db()",
"_____no_output_____"
]
],
[
[
"# Mission Coincidences\nThis notebook finds concident acquisition regions for three missions: Landsat-8, Sentinel-2 and Sentinel-1. Each of these missions has a different orbit and revisit rates, so coincident pairs (two missions at the same location and day) are not that common and coincident triplets (all 3 missions at the same location and day) are extremely rare. These coincidences are quite valuable for comparing datasets for calibration and validation purposes or for providing viable locations for a combined product analysis.",
"_____no_output_____"
],
[
"## Load Data Cube Configuration and Import Utilities",
"_____no_output_____"
]
],
[
[
"# Load Data Cube Configuration\nfrom odc_gee import earthengine\ndc = earthengine.Datacube(app='Mission_Coincidences')\n\n# Import Utilities\nfrom IPython.display import display_html\nfrom utils.data_cube_utilities.clean_mask import landsat_qa_clean_mask\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport sys\nimport xarray as xr",
"_____no_output_____"
]
],
[
[
"## Create new functions to display data output and find coincidences",
"_____no_output_____"
],
[
"* `display_side_by_side`: A method [found here](https://stackoverflow.com/a/44923103) for displaying Pandas DataFrames next to each other in one output.\n* `find_coincidences`: A helper method to return the various intersection of dates for the three products.\n* `s1_rgb`: Generates an RGB image from a Sentinel-1 dataset.",
"_____no_output_____"
]
],
[
[
"def display_side_by_side(*args, index=True):\n html_str=''\n for df in args:\n if index:\n html_str+=df.to_html()\n else:\n html_str+=df.to_html(index=False)\n display_html(html_str.replace('table','table style=\"display:inline\"'),raw=True)\n \ndef find_coincidence(ls_index, s1_index, s2_index): \n return {'LS⋂S2': ls_index.intersection(s2_index).values,\n 'LS⋂S1': ls_index.intersection(s1_index).values,\n 'S2⋂S1': s2_index.intersection(s1_index).values,\n 'LS⋂S2⋂S1': ls_index.intersection(s2_index).intersection(s1_index).values}\n\ndef s1_rgb(ds, rrange=(-25, 0), grange=(-30,-5), brange=(0,15)):\n r = ds.vv\n g = ds.vh\n b = r - g\n # Clip the data to remove extreme outliers\n r = np.clip(r, rrange[0], rrange[1])\n g = np.clip(g, grange[0], grange[1])\n b = np.clip(b, brange[0], brange[1])\n # Normalize the data to improve colors\n r = (r-r.min())/(r.max()-r.min())\n g = (g-g.min())/(g.max()-g.min())\n b = (b-b.min())/(b.max()-b.min())\n # Name the bands\n r.name = 'vv'\n g.name = 'vh'\n b.name = 'vv/vh'\n return xr.merge((r,g,b))",
"_____no_output_____"
]
],
[
[
"### Analysis parameters\n\n* `latitude`: The latitude extents for the analysis area.\n* `longitude`: The longitude extents for the analysis area.\n* `time`: The time window for the analysis (Year-Month)",
"_____no_output_____"
]
],
[
[
"# MODIFY HERE\n\n# Select the center of an analysis region (lat_long) \n# Adjust the surrounding box size (box_size) around the center (in degrees)\n# Remove the comment tags (#) below to change the sample location\n\n# Barekese Dam, Ghana, Africa\nlat_long = (6.846, -1.709)\nbox_size_deg = 0.05\n\n# Calculate the latitude and longitude bounds of the analysis box\nlatitude = (lat_long[0]-box_size_deg/2, lat_long[0]+box_size_deg/2)\nlongitude = (lat_long[1]-box_size_deg/2, lat_long[1]+box_size_deg/2)\n\ntime = ('2019-1', '2019-12')",
"_____no_output_____"
],
[
"# The code below renders a map that can be used to view the region.\nfrom utils.data_cube_utilities.dc_display_map import display_map\ndisplay_map(latitude,longitude)",
"_____no_output_____"
]
],
[
[
"## Load partial datasets\nLoad only the dates, coordinate, and scene classification values if available for determining cloud coverage.",
"_____no_output_____"
]
],
[
[
"# Define the product details to load in the next code block\nplatforms = {'LANDSAT_8': dict(product=f'ls8_google',latitude=latitude,longitude=longitude),\n 'SENTINEL-1': dict(product=f's1_google',group_by='solar_day'),\n 'SENTINEL-2': dict(product=f's2_google',group_by='solar_day')}",
"_____no_output_____"
],
[
"# Load Landsat 8 data including times and pixel_qa (cloud cover)\nls_dataset = dc.load(measurements=['pixel_qa'], time=time, **platforms['LANDSAT_8'])\n\n# Load Sentinel-2 data including times and scl (cloud cover)\ns2_dataset = dc.load(like=ls_dataset, measurements=['scl'], time=time, **platforms['SENTINEL-2'])\n\n# Load Basic Sentinel-1 data with only time slice details\ns1_dataset = dc.load(like=ls_dataset, measurements=[], time=time, **platforms['SENTINEL-1'])",
"_____no_output_____"
]
],
[
[
"## Cloud Masking\nCreate cloud masks for the optical data (Landsat-8 and Sentinel-2)",
"_____no_output_____"
]
],
[
[
"ls_clean_mask = landsat_qa_clean_mask(ls_dataset, platform='LANDSAT_8')\n\ns2_clean_mask = (s2_dataset.scl != 0) & (s2_dataset.scl != 1) & \\\n (s2_dataset.scl != 3) & (s2_dataset.scl != 8) & \\\n (s2_dataset.scl != 9) & (s2_dataset.scl != 10)",
"/content/utils/data_cube_utilities/clean_mask.py:278: UserWarning: Please specify a value for `collection`. Assuming data is collection 1.\n warnings.warn('Please specify a value for `collection`. Assuming data is collection 1.')\n/content/utils/data_cube_utilities/clean_mask.py:283: UserWarning: Please specify a value for `level`. Assuming data is level 2.\n warnings.warn('Please specify a value for `level`. Assuming data is level 2.')\n"
]
],
[
[
"## Display a table of scenes\nFilter optical data by cloud cover",
"_____no_output_____"
]
],
[
[
"# MODIFY HERE\n\n# Percent of clean pixels in the optical images.\n# The default is 80% which will yield mostly clear scenes\n\npercent_clean = 80",
"_____no_output_____"
],
[
"# Display the dates and cloud information for the available scenes\n\nls_df = pd.DataFrame(list(zip(ls_dataset.time.values.astype('datetime64[D]'),\n [round(mask.mean().item()*100, 2) for mask in ls_clean_mask],\n [mask.sum().item() for mask in ls_clean_mask])),\n columns=['Landsat 8 Date', 'clean_pixel_percent', 'clean_pixel_count'])\\\n .query(f'clean_pixel_percent >= {percent_clean}')\ns2_df = pd.DataFrame(list(zip(s2_dataset.time.values.astype('datetime64[D]'),\n [round(mask.mean().item()*100, 2) for mask in s2_clean_mask],\n [mask.sum().item() for mask in s2_clean_mask])),\n columns=['Sentinel-2 Date', 'clean_pixel_percent', 'clean_pixel_count'])\\\n .query(f'clean_pixel_percent >= {percent_clean}')\ns1_df = pd.DataFrame(list(s1_dataset.time.values.astype('datetime64[D]')),\n columns=['Sentinel-1 Date'])\n\ndisplay_side_by_side(ls_df, s2_df, s1_df)",
"_____no_output_____"
]
],
[
[
"## Coincidences\nFind the coincidence dates for the datasets using the filtered data from the previous section.",
"_____no_output_____"
]
],
[
[
"ls_index = pd.Index(ls_df['Landsat 8 Date'].values)\ns2_index = pd.Index(s2_df['Sentinel-2 Date'].values)\ns1_index = pd.Index(s1_df['Sentinel-1 Date'].values)",
"_____no_output_____"
],
[
"# List the double and triple coincidences\nargs = [pd.DataFrame(val, columns=[key]) for key, val in find_coincidence(ls_index, s1_index, s2_index).items()]\ndisplay_side_by_side(*args, index=False)",
"_____no_output_____"
]
],
[
[
"## Plot a single time selection to view the scene details\nSelect and plot a time from the coincidence results listed above.",
"_____no_output_____"
]
],
[
[
"# MODIFY HERE\n\n# Select a time from the table above.\ntime_selection = '2019-01-22'",
"_____no_output_____"
],
[
"# Define the plotting bands for each image on the specified date\n\ns1 = s2 = ls = None\nif ls_dataset.time.dt.floor('D').isin(np.datetime64(time_selection)).sum():\n ls = dc.load(measurements=['red', 'green', 'blue'],\n time=time_selection, **platforms['LANDSAT_8'])\nif s2_dataset.time.dt.floor('D').isin(np.datetime64(time_selection)).sum():\n s2 = dc.load(like=ls_dataset, measurements=['red', 'green', 'blue'],\n time=time_selection, **platforms['SENTINEL-2'])\nif s1_dataset.time.dt.floor('D').isin(np.datetime64(time_selection)).sum():\n s1 = dc.load(like=ls_dataset, measurements=['vv', 'vh'],\n time=time_selection, **platforms['SENTINEL-1'])",
"_____no_output_____"
],
[
"# Plot sample images for the specified date. \n# Based on the selected date, there will be either 2 or 3 images shown below. \n\nfig, ax = plt.subplots(2, 2, figsize=(ls_dataset.longitude.size/ls_dataset.latitude.size*16,16))\n\nif ls:\n ls.isel(time=0).to_array().plot.imshow(ax=ax[0][0], vmin=0, vmax=2000)\n ax[0][0].set_title('Landsat 8')\n ax[0][0].xaxis.set_visible(False), ax[0][0].yaxis.set_visible(False)\nif s2:\n s2.isel(time=0).to_array().plot.imshow(ax=ax[0][1], vmin=0, vmax=2000)\n ax[0][1].set_title('Sentinel-2')\n ax[0][1].xaxis.set_visible(False), ax[0][1].yaxis.set_visible(False)\nif s1:\n s1_rgb(s1.isel(time=0)).to_array().plot.imshow(ax=ax[1][0])\n ax[1][0].set_title('Sentinel-1')\n ax[1][0].xaxis.set_visible(False), ax[1][0].yaxis.set_visible(False)\nax[1][1].axis('off');",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7705caad27c3ae979e830c3a6185e733920078b | 73,346 | ipynb | Jupyter Notebook | discretization/Discretization_Solution.ipynb | Jeromeschmidt/Udacity-Deep-Reinforcement-Nanodegree | 10161771bd4ff9bd17b6f4d73f2c7208567f01ec | [
"MIT"
] | null | null | null | discretization/Discretization_Solution.ipynb | Jeromeschmidt/Udacity-Deep-Reinforcement-Nanodegree | 10161771bd4ff9bd17b6f4d73f2c7208567f01ec | [
"MIT"
] | 4 | 2020-09-26T00:52:10.000Z | 2022-02-10T01:18:21.000Z | discretization/Discretization_Solution.ipynb | Jeromeschmidt/Udacity-Deep-Reinforcement-Nanodegree | 10161771bd4ff9bd17b6f4d73f2c7208567f01ec | [
"MIT"
] | null | null | null | 86.595041 | 25,940 | 0.792109 | [
[
[
"# Discretization\n\n---\n\nIn this notebook, you will deal with continuous state and action spaces by discretizing them. This will enable you to apply reinforcement learning algorithms that are only designed to work with discrete spaces.\n\n### 1. Import the Necessary Packages",
"_____no_output_____"
]
],
[
[
"import sys\nimport gym\nimport numpy as np\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# Set plotting options\n%matplotlib inline\nplt.style.use('ggplot')\nnp.set_printoptions(precision=3, linewidth=120)",
"_____no_output_____"
]
],
[
[
"### 2. Specify the Environment, and Explore the State and Action Spaces\n\nWe'll use [OpenAI Gym](https://gym.openai.com/) environments to test and develop our algorithms. These simulate a variety of classic as well as contemporary reinforcement learning tasks. Let's use an environment that has a continuous state space, but a discrete action space.",
"_____no_output_____"
]
],
[
[
"# Create an environment and set random seed\nenv = gym.make('MountainCar-v0')\nenv.seed(505);",
"_____no_output_____"
]
],
[
[
"Run the next code cell to watch a random agent.",
"_____no_output_____"
]
],
[
[
"state = env.reset()\nscore = 0\nfor t in range(200):\n action = env.action_space.sample()\n env.render()\n state, reward, done, _ = env.step(action)\n score += reward\n if done:\n break \nprint('Final score:', score)\nenv.close()",
"Final score: -200.0\n"
]
],
[
[
"In this notebook, you will train an agent to perform much better! For now, we can explore the state and action spaces, as well as sample them.",
"_____no_output_____"
]
],
[
[
"# Explore state (observation) space\nprint(\"State space:\", env.observation_space)\nprint(\"- low:\", env.observation_space.low)\nprint(\"- high:\", env.observation_space.high)",
"State space: Box(2,)\n- low: [-1.2 -0.07]\n- high: [0.6 0.07]\n"
],
[
"# Generate some samples from the state space \nprint(\"State space samples:\")\nprint(np.array([env.observation_space.sample() for i in range(10)]))",
"State space samples:\n[[-1.12 0.037]\n [-0.914 -0.065]\n [-0.803 -0.043]\n [-0.234 -0.063]\n [ 0.08 0.004]\n [-0.988 0.06 ]\n [-0.155 -0.021]\n [-0.054 0.034]\n [ 0.048 0.03 ]\n [-1.098 0.026]]\n"
],
[
"# Explore the action space\nprint(\"Action space:\", env.action_space)\n\n# Generate some samples from the action space\nprint(\"Action space samples:\")\nprint(np.array([env.action_space.sample() for i in range(10)]))",
"Action space: Discrete(3)\nAction space samples:\n[1 0 1 2 0 2 0 1 1 2]\n"
]
],
[
[
"### 3. Discretize the State Space with a Uniform Grid\n\nWe will discretize the space using a uniformly-spaced grid. Implement the following function to create such a grid, given the lower bounds (`low`), upper bounds (`high`), and number of desired `bins` along each dimension. It should return the split points for each dimension, which will be 1 less than the number of bins.\n\nFor instance, if `low = [-1.0, -5.0]`, `high = [1.0, 5.0]`, and `bins = (10, 10)`, then your function should return the following list of 2 NumPy arrays:\n\n```\n[array([-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8]),\n array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])]\n```\n\nNote that the ends of `low` and `high` are **not** included in these split points. It is assumed that any value below the lowest split point maps to index `0` and any value above the highest split point maps to index `n-1`, where `n` is the number of bins along that dimension.",
"_____no_output_____"
]
],
[
[
"def create_uniform_grid(low, high, bins=(10, 10)):\n \"\"\"Define a uniformly-spaced grid that can be used to discretize a space.\n \n Parameters\n ----------\n low : array_like\n Lower bounds for each dimension of the continuous space.\n high : array_like\n Upper bounds for each dimension of the continuous space.\n bins : tuple\n Number of bins along each corresponding dimension.\n \n Returns\n -------\n grid : list of array_like\n A list of arrays containing split points for each dimension.\n \"\"\"\n # TODO: Implement this\n grid = [np.linspace(low[dim], high[dim], bins[dim] + 1)[1:-1] for dim in range(len(bins))]\n print(\"Uniform grid: [<low>, <high>] / <bins> => <splits>\")\n for l, h, b, splits in zip(low, high, bins, grid):\n print(\" [{}, {}] / {} => {}\".format(l, h, b, splits))\n return grid\n\n\nlow = [-1.0, -5.0]\nhigh = [1.0, 5.0]\ncreate_uniform_grid(low, high) # [test]",
"Uniform grid: [<low>, <high>] / <bins> => <splits>\n [-1.0, 1.0] / 10 => [-0.8 -0.6 -0.4 -0.2 0. 0.2 0.4 0.6 0.8]\n [-5.0, 5.0] / 10 => [-4. -3. -2. -1. 0. 1. 2. 3. 4.]\n"
]
],
[
[
"Now write a function that can convert samples from a continuous space into its equivalent discretized representation, given a grid like the one you created above. You can use the [`numpy.digitize()`](https://docs.scipy.org/doc/numpy-1.9.3/reference/generated/numpy.digitize.html) function for this purpose.\n\nAssume the grid is a list of NumPy arrays containing the following split points:\n```\n[array([-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8]),\n array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])]\n```\n\nHere are some potential samples and their corresponding discretized representations:\n```\n[-1.0 , -5.0] => [0, 0]\n[-0.81, -4.1] => [0, 0]\n[-0.8 , -4.0] => [1, 1]\n[-0.5 , 0.0] => [2, 5]\n[ 0.2 , -1.9] => [6, 3]\n[ 0.8 , 4.0] => [9, 9]\n[ 0.81, 4.1] => [9, 9]\n[ 1.0 , 5.0] => [9, 9]\n```\n\n**Note**: There may be one-off differences in binning due to floating-point inaccuracies when samples are close to grid boundaries, but that is alright.",
"_____no_output_____"
]
],
[
[
"def discretize(sample, grid):\n \"\"\"Discretize a sample as per given grid.\n \n Parameters\n ----------\n sample : array_like\n A single sample from the (original) continuous space.\n grid : list of array_like\n A list of arrays containing split points for each dimension.\n \n Returns\n -------\n discretized_sample : array_like\n A sequence of integers with the same number of dimensions as sample.\n \"\"\"\n # TODO: Implement this\n return list(int(np.digitize(s, g)) for s, g in zip(sample, grid)) # apply along each dimension\n\n\n# Test with a simple grid and some samples\ngrid = create_uniform_grid([-1.0, -5.0], [1.0, 5.0])\nsamples = np.array(\n [[-1.0 , -5.0],\n [-0.81, -4.1],\n [-0.8 , -4.0],\n [-0.5 , 0.0],\n [ 0.2 , -1.9],\n [ 0.8 , 4.0],\n [ 0.81, 4.1],\n [ 1.0 , 5.0]])\ndiscretized_samples = np.array([discretize(sample, grid) for sample in samples])\nprint(\"\\nSamples:\", repr(samples), sep=\"\\n\")\nprint(\"\\nDiscretized samples:\", repr(discretized_samples), sep=\"\\n\")",
"Uniform grid: [<low>, <high>] / <bins> => <splits>\n [-1.0, 1.0] / 10 => [-0.8 -0.6 -0.4 -0.2 0. 0.2 0.4 0.6 0.8]\n [-5.0, 5.0] / 10 => [-4. -3. -2. -1. 0. 1. 2. 3. 4.]\n\nSamples:\narray([[-1. , -5. ],\n [-0.81, -4.1 ],\n [-0.8 , -4. ],\n [-0.5 , 0. ],\n [ 0.2 , -1.9 ],\n [ 0.8 , 4. ],\n [ 0.81, 4.1 ],\n [ 1. , 5. ]])\n\nDiscretized samples:\narray([[0, 0],\n [0, 0],\n [1, 1],\n [2, 5],\n [5, 3],\n [9, 9],\n [9, 9],\n [9, 9]])\n"
]
],
[
[
"### 4. Visualization\n\nIt might be helpful to visualize the original and discretized samples to get a sense of how much error you are introducing.",
"_____no_output_____"
]
],
[
[
"import matplotlib.collections as mc\n\ndef visualize_samples(samples, discretized_samples, grid, low=None, high=None):\n \"\"\"Visualize original and discretized samples on a given 2-dimensional grid.\"\"\"\n\n fig, ax = plt.subplots(figsize=(10, 10))\n \n # Show grid\n ax.xaxis.set_major_locator(plt.FixedLocator(grid[0]))\n ax.yaxis.set_major_locator(plt.FixedLocator(grid[1]))\n ax.grid(True)\n \n # If bounds (low, high) are specified, use them to set axis limits\n if low is not None and high is not None:\n ax.set_xlim(low[0], high[0])\n ax.set_ylim(low[1], high[1])\n else:\n # Otherwise use first, last grid locations as low, high (for further mapping discretized samples)\n low = [splits[0] for splits in grid]\n high = [splits[-1] for splits in grid]\n\n # Map each discretized sample (which is really an index) to the center of corresponding grid cell\n grid_extended = np.hstack((np.array([low]).T, grid, np.array([high]).T)) # add low and high ends\n grid_centers = (grid_extended[:, 1:] + grid_extended[:, :-1]) / 2 # compute center of each grid cell\n locs = np.stack(grid_centers[i, discretized_samples[:, i]] for i in range(len(grid))).T # map discretized samples\n\n ax.plot(samples[:, 0], samples[:, 1], 'o') # plot original samples\n ax.plot(locs[:, 0], locs[:, 1], 's') # plot discretized samples in mapped locations\n ax.add_collection(mc.LineCollection(list(zip(samples, locs)), colors='orange')) # add a line connecting each original-discretized sample\n ax.legend(['original', 'discretized'])\n\n \nvisualize_samples(samples, discretized_samples, grid, low, high)",
"/usr/local/Cellar/jupyterlab/1.2.4/libexec/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3319: FutureWarning: arrays to stack must be passed as a \"sequence\" type such as list or tuple. Support for non-sequence iterables such as generators is deprecated as of NumPy 1.16 and will raise an error in the future.\n exec(code_obj, self.user_global_ns, self.user_ns)\n"
]
],
[
[
"Now that we have a way to discretize a state space, let's apply it to our reinforcement learning environment.",
"_____no_output_____"
]
],
[
[
"# Create a grid to discretize the state space\nstate_grid = create_uniform_grid(env.observation_space.low, env.observation_space.high, bins=(10, 10))\nstate_grid",
"Uniform grid: [<low>, <high>] / <bins> => <splits>\n [-1.2000000476837158, 0.6000000238418579] / 10 => [-1.02 -0.84 -0.66 -0.48 -0.3 -0.12 0.06 0.24 0.42]\n [-0.07000000029802322, 0.07000000029802322] / 10 => [-0.056 -0.042 -0.028 -0.014 0. 0.014 0.028 0.042 0.056]\n"
],
[
"# Obtain some samples from the space, discretize them, and then visualize them\nstate_samples = np.array([env.observation_space.sample() for i in range(10)])\ndiscretized_state_samples = np.array([discretize(sample, state_grid) for sample in state_samples])\nvisualize_samples(state_samples, discretized_state_samples, state_grid,\n env.observation_space.low, env.observation_space.high)\nplt.xlabel('position'); plt.ylabel('velocity'); # axis labels for MountainCar-v0 state space",
"/usr/local/Cellar/jupyterlab/1.2.4/libexec/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3319: FutureWarning: arrays to stack must be passed as a \"sequence\" type such as list or tuple. Support for non-sequence iterables such as generators is deprecated as of NumPy 1.16 and will raise an error in the future.\n exec(code_obj, self.user_global_ns, self.user_ns)\n"
]
],
[
[
"You might notice that if you have enough bins, the discretization doesn't introduce too much error into your representation. So we may be able to now apply a reinforcement learning algorithm (like Q-Learning) that operates on discrete spaces. Give it a shot to see how well it works!\n\n### 5. Q-Learning\n\nProvided below is a simple Q-Learning agent. Implement the `preprocess_state()` method to convert each continuous state sample to its corresponding discretized representation.",
"_____no_output_____"
]
],
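[
[
"For reference (added note), the table update performed in `act()` below is the standard Q-learning update with learning rate $\\alpha$ and discount factor $\\gamma$:\n\n$$Q(s_t, a_t) \\leftarrow Q(s_t, a_t) + \\alpha \\big(r_{t+1} + \\gamma \\max_a Q(s_{t+1}, a) - Q(s_t, a_t)\\big)$$",
"_____no_output_____"
]
],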
[
[
"class QLearningAgent:\n \"\"\"Q-Learning agent that can act on a continuous state space by discretizing it.\"\"\"\n\n def __init__(self, env, state_grid, alpha=0.02, gamma=0.99,\n epsilon=1.0, epsilon_decay_rate=0.9995, min_epsilon=.01, seed=505):\n \"\"\"Initialize variables, create grid for discretization.\"\"\"\n # Environment info\n self.env = env\n self.state_grid = state_grid\n self.state_size = tuple(len(splits) + 1 for splits in self.state_grid) # n-dimensional state space\n self.action_size = self.env.action_space.n # 1-dimensional discrete action space\n self.seed = np.random.seed(seed)\n print(\"Environment:\", self.env)\n print(\"State space size:\", self.state_size)\n print(\"Action space size:\", self.action_size)\n \n # Learning parameters\n self.alpha = alpha # learning rate\n self.gamma = gamma # discount factor\n self.epsilon = self.initial_epsilon = epsilon # initial exploration rate\n self.epsilon_decay_rate = epsilon_decay_rate # how quickly should we decrease epsilon\n self.min_epsilon = min_epsilon\n \n # Create Q-table\n self.q_table = np.zeros(shape=(self.state_size + (self.action_size,)))\n print(\"Q table size:\", self.q_table.shape)\n\n def preprocess_state(self, state):\n \"\"\"Map a continuous state to its discretized representation.\"\"\"\n # TODO: Implement this\n return tuple(discretize(state, self.state_grid))\n\n def reset_episode(self, state):\n \"\"\"Reset variables for a new episode.\"\"\"\n # Gradually decrease exploration rate\n self.epsilon *= self.epsilon_decay_rate\n self.epsilon = max(self.epsilon, self.min_epsilon)\n\n # Decide initial action\n self.last_state = self.preprocess_state(state)\n self.last_action = np.argmax(self.q_table[self.last_state])\n return self.last_action\n \n def reset_exploration(self, epsilon=None):\n \"\"\"Reset exploration rate used when training.\"\"\"\n self.epsilon = epsilon if epsilon is not None else self.initial_epsilon\n\n def act(self, state, reward=None, done=None, mode='train'):\n \"\"\"Pick next action and update internal Q table (when mode != 'test').\"\"\"\n state = self.preprocess_state(state)\n if mode == 'test':\n # Test mode: Simply produce an action\n action = np.argmax(self.q_table[state])\n else:\n # Train mode (default): Update Q table, pick next action\n # Note: We update the Q table entry for the *last* (state, action) pair with current state, reward\n self.q_table[self.last_state + (self.last_action,)] += self.alpha * \\\n (reward + self.gamma * max(self.q_table[state]) - self.q_table[self.last_state + (self.last_action,)])\n\n # Exploration vs. exploitation\n do_exploration = np.random.uniform(0, 1) < self.epsilon\n if do_exploration:\n # Pick a random action\n action = np.random.randint(0, self.action_size)\n else:\n # Pick the best action from Q table\n action = np.argmax(self.q_table[state])\n\n # Roll over current state, action for next step\n self.last_state = state\n self.last_action = action\n return action\n\n \nq_agent = QLearningAgent(env, state_grid)",
"Environment: <TimeLimit<MountainCarEnv<MountainCar-v0>>>\nState space size: (10, 10)\nAction space size: 3\nQ table size: (10, 10, 3)\n"
]
],
[
[
"Let's also define a convenience function to run an agent on a given environment. When calling this function, you can pass in `mode='test'` to tell the agent not to learn.",
"_____no_output_____"
]
],
[
[
"def run(agent, env, num_episodes=20000, mode='train'):\n \"\"\"Run agent in given reinforcement learning environment and return scores.\"\"\"\n scores = []\n max_avg_score = -np.inf\n for i_episode in range(1, num_episodes+1):\n # Initialize episode\n state = env.reset()\n action = agent.reset_episode(state)\n total_reward = 0\n done = False\n\n # Roll out steps until done\n while not done:\n state, reward, done, info = env.step(action)\n total_reward += reward\n action = agent.act(state, reward, done, mode)\n\n # Save final score\n scores.append(total_reward)\n \n # Print episode stats\n if mode == 'train':\n if len(scores) > 100:\n avg_score = np.mean(scores[-100:])\n if avg_score > max_avg_score:\n max_avg_score = avg_score\n if i_episode % 100 == 0:\n print(\"\\rEpisode {}/{} | Max Average Score: {}\".format(i_episode, num_episodes, max_avg_score), end=\"\")\n sys.stdout.flush()\n\n return scores\n\nscores = run(q_agent, env)",
"Episode 13900/20000 | Max Average Score: -137.36"
]
],
[
[
"The best way to analyze if your agent was learning the task is to plot the scores. It should generally increase as the agent goes through more episodes.",
"_____no_output_____"
]
],
[
[
"# Plot scores obtained per episode\nplt.plot(scores); plt.title(\"Scores\");",
"_____no_output_____"
]
],
[
[
"If the scores are noisy, it might be difficult to tell whether your agent is actually learning. To find the underlying trend, you may want to plot a rolling mean of the scores. Let's write a convenience function to plot both raw scores as well as a rolling mean.",
"_____no_output_____"
]
],
[
[
"def plot_scores(scores, rolling_window=100):\n \"\"\"Plot scores and optional rolling mean using specified window.\"\"\"\n plt.plot(scores); plt.title(\"Scores\");\n rolling_mean = pd.Series(scores).rolling(rolling_window).mean()\n plt.plot(rolling_mean);\n return rolling_mean\n\nrolling_mean = plot_scores(scores)",
"_____no_output_____"
]
],
[
[
"You should observe the mean episode scores go up over time. Next, you can freeze learning and run the agent in test mode to see how well it performs.",
"_____no_output_____"
]
],
[
[
"# Run in test mode and analyze scores obtained\ntest_scores = run(q_agent, env, num_episodes=100, mode='test')\nprint(\"[TEST] Completed {} episodes with avg. score = {}\".format(len(test_scores), np.mean(test_scores)))\n_ = plot_scores(test_scores)",
"_____no_output_____"
]
],
[
[
"It's also interesting to look at the final Q-table that is learned by the agent. Note that the Q-table is of size MxNxA, where (M, N) is the size of the state space, and A is the size of the action space. We are interested in the maximum Q-value for each state, and the corresponding (best) action associated with that value.",
"_____no_output_____"
]
],
[
[
"def plot_q_table(q_table):\n \"\"\"Visualize max Q-value for each state and corresponding action.\"\"\"\n q_image = np.max(q_table, axis=2) # max Q-value for each state\n q_actions = np.argmax(q_table, axis=2) # best action for each state\n\n fig, ax = plt.subplots(figsize=(10, 10))\n cax = ax.imshow(q_image, cmap='jet');\n cbar = fig.colorbar(cax)\n for x in range(q_image.shape[0]):\n for y in range(q_image.shape[1]):\n ax.text(x, y, q_actions[x, y], color='white',\n horizontalalignment='center', verticalalignment='center')\n ax.grid(False)\n ax.set_title(\"Q-table, size: {}\".format(q_table.shape))\n ax.set_xlabel('position')\n ax.set_ylabel('velocity')\n\n\nplot_q_table(q_agent.q_table)",
"_____no_output_____"
]
],
[
[
"### 6. Modify the Grid\n\nNow it's your turn to play with the grid definition and see what gives you optimal results. Your agent's final performance is likely to get better if you use a finer grid, with more bins per dimension, at the cost of higher model complexity (more parameters to learn).",
"_____no_output_____"
]
],
[
[
"# TODO: Create a new agent with a different state space grid\nstate_grid_new = create_uniform_grid(env.observation_space.low, env.observation_space.high, bins=(20, 20))\nq_agent_new = QLearningAgent(env, state_grid_new)\nq_agent_new.scores = [] # initialize a list to store scores for this agent",
"_____no_output_____"
],
[
"# Train it over a desired number of episodes and analyze scores\n# Note: This cell can be run multiple times, and scores will get accumulated\nq_agent_new.scores += run(q_agent_new, env, num_episodes=50000) # accumulate scores\nrolling_mean_new = plot_scores(q_agent_new.scores)",
"_____no_output_____"
],
[
"# Run in test mode and analyze scores obtained\ntest_scores = run(q_agent_new, env, num_episodes=100, mode='test')\nprint(\"[TEST] Completed {} episodes with avg. score = {}\".format(len(test_scores), np.mean(test_scores)))\n_ = plot_scores(test_scores)",
"_____no_output_____"
],
[
"# Visualize the learned Q-table\nplot_q_table(q_agent_new.q_table)",
"_____no_output_____"
]
],
[
[
"### 7. Watch a Smart Agent",
"_____no_output_____"
]
],
[
[
"state = env.reset()\nscore = 0\nfor t in range(200):\n action = q_agent_new.act(state, mode='test')\n env.render()\n state, reward, done, _ = env.step(action)\n score += reward\n if done:\n break \nprint('Final score:', score)\nenv.close()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e77061d8d609bc6247b37b3d0e2b86c254137c5e | 227,249 | ipynb | Jupyter Notebook | web/08/Basic_data_analysis.ipynb | Jovansam/lectures-2021 | 32b0992a58191723ef660e1de629193862b19f52 | [
"MIT"
] | null | null | null | web/08/Basic_data_analysis.ipynb | Jovansam/lectures-2021 | 32b0992a58191723ef660e1de629193862b19f52 | [
"MIT"
] | null | null | null | web/08/Basic_data_analysis.ipynb | Jovansam/lectures-2021 | 32b0992a58191723ef660e1de629193862b19f52 | [
"MIT"
] | 2 | 2021-06-26T01:52:28.000Z | 2021-08-10T14:42:46.000Z | 65.945734 | 28,688 | 0.723229 | [
[
[
"# Lecture 08: Basic data analysis",
"_____no_output_____"
],
[
"[Download on GitHub](https://github.com/NumEconCopenhagen/lectures-2021)\n\n[<img src=\"https://mybinder.org/badge_logo.svg\">](https://mybinder.org/v2/gh/NumEconCopenhagen/lectures-2021/master?urlpath=lab/tree/08/Basic_data_analysis.ipynb)",
"_____no_output_____"
],
[
"1. [Combining datasets (merging and concatenating)](#Combining-datasets-(merging-and-concatenating))\n2. [Fetching data using an API](#Fetching-data-using-an-API)\n3. [Split-apply-combine](#Split-apply-combine)\n4. [Summary](#Summary)\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport datetime\n\nimport pandas_datareader # install with `pip install pandas-datareader`\nimport pydst # install with `pip install git+https://github.com/elben10/pydst`\n\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn-whitegrid')\nfrom matplotlib_venn import venn2 # `pip install matplotlib-venn`",
"C:\\Users\\gmf123\\Anaconda3\\envs\\new\\lib\\site-packages\\pandas_datareader\\compat\\__init__.py:7: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.\n from pandas.util.testing import assert_frame_equal\n"
]
],
[
[
"<a id=\"Combining-datasets-(merging-and-concatenating)\"></a>\n\n# 1. Combining datasets (merging and concatenating)",
"_____no_output_____"
],
[
"When **combining datasets** there are a few crucial concepts: \n\n1. **Concatenate (append)**: \"stack\" rows (observations) on top of each other. This works if the datasets have the same columns (variables).\n2. **Merge**: the two datasets have different variables, but may or may not have the same observations. ",
"_____no_output_____"
],
[
"There are **different kinds of merges** depending on which observations you want to keep:\n\n1. **Outer join (one-to-one)** Keep observations which are in *either* or in *both* datasets.\n2. **Inner join (one-to-one)** Keep observations which are in *both* datasets. \n3. **Left join (many-to-one)** Keep observations which are in the *left* dataset or in *both* datasets. \n\nKeeping observations which are not in both datasets will result in **missing values** for the variables comming from the dataset, where the observation does not exist.",
"_____no_output_____"
],
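[
"The sketch below (added illustration; `df_a` and `df_b` are made-up toy frames, not variables used later) shows how the three merge types differ:\n\n```python\nimport pandas as pd\n\ndf_a = pd.DataFrame({'key': [1, 2, 3], 'x': ['a', 'b', 'c']})\ndf_b = pd.DataFrame({'key': [2, 3, 4], 'y': [0.2, 0.3, 0.4]})\n\nouter = pd.merge(df_a, df_b, on='key', how='outer')  # keys 1, 2, 3, 4 (NaN where a key is missing)\ninner = pd.merge(df_a, df_b, on='key', how='inner')  # keys 2, 3 only\nleft = pd.merge(df_a, df_b, on='key', how='left')    # keys 1, 2, 3 (NaN in y for key 1)\n```",
"_____no_output_____"
],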
[
"**Read data:**",
"_____no_output_____"
]
],
[
[
"empl = pd.read_csv('../07/data/RAS200_long.csv') # .. -> means one folder up\ninc = pd.read_csv('../07/data/INDKP107_long.csv')\narea = pd.read_csv('../07/data/area.csv')",
"_____no_output_____"
]
],
[
[
"## 1.1 Concatenating datasets\n\nSuppose we have two datasets that have the same variables and we just want to concatenate them. ",
"_____no_output_____"
]
],
[
[
"empl.head(5)",
"_____no_output_____"
],
[
"N = empl.shape[0]\nA = empl.loc[empl.index < N/2,:] # first half of observations\nB = empl.loc[empl.index >= N/2,:] # second half of observations\n\nprint(f'A has shape {A.shape} ')\nprint(f'B has shape {B.shape} ')",
"A has shape (495, 3) \nB has shape (495, 3) \n"
]
],
[
[
"**Concatenation** is done using the command `pd.concat([df1, df2])`. ",
"_____no_output_____"
]
],
[
[
"C = pd.concat([A,B])\nprint(f'C has shape {C.shape} (same as the original empl, {empl.shape})')",
"C has shape (990, 3) (same as the original empl, (990, 3))\n"
]
],
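[
[
"Note (added remark): `pd.concat` keeps the original row labels, so `C` contains duplicate index values. If you prefer a fresh 0, 1, 2, ... index, you can pass `ignore_index=True`:\n\n```python\nC = pd.concat([A, B], ignore_index=True)\n```",
"_____no_output_____"
]
],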
[
[
"## 1.2 Merging datasets",
"_____no_output_____"
],
[
"Two datasets with **different variables**: `empl` and `inc`. \n\n**Central command:** `pd.merge(empl, inc, on=[municipalitiy, year], how=METHOD)`. \n\n1. The keyword `on` specifies the **merge key(s)**. They uniquely identify observations in both datasets (for sure in at least one of them). \n\n2. The keyword `how` specifies the **merge method** (taking values such as `'outer'`, `'inner'`, or `'left'`).",
"_____no_output_____"
],
[
"**Look at datasets:**",
"_____no_output_____"
]
],
[
[
"print(f'Years in empl: {empl.year.unique()}')\nprint(f'Municipalities in empl = {len(empl.municipality.unique())}')\nprint(f'Years in inc: {inc.year.unique()}')\nprint(f'Municipalities in inc = {len(inc.municipality.unique())}')",
"Years in empl: [2008 2009 2010 2011 2012 2013 2014 2015 2016 2017]\nMunicipalities in empl = 99\nYears in inc: [2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017]\nMunicipalities in inc = 98\n"
]
],
[
[
"**Find differences:**",
"_____no_output_____"
]
],
[
[
"diff_y = [y for y in inc.year.unique() if y not in empl.year.unique()] \nprint(f'years in inc data, but not in empl data: {diff_y}')\n\ndiff_m = [m for m in empl.municipality.unique() if m not in inc.municipality.unique()] \nprint(f'municipalities in empl data, but not in inc data: {diff_m}')",
"years in inc data, but not in empl data: [2004, 2005, 2006, 2007]\nmunicipalities in empl data, but not in inc data: ['Christiansø']\n"
]
],
[
[
"**Conclusion:** `inc` has more years than `empl`, but `empl` has one municipality that is not in `inc`. ",
"_____no_output_____"
]
],
[
[
"plt.figure()\nv = venn2(subsets = (4, 4, 10), set_labels = ('empl', 'inc'))\nv.get_label_by_id('100').set_text('Cristiansø')\nv.get_label_by_id('010').set_text('2004-07' )\nv.get_label_by_id('110').set_text('common observations')\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Outer join: union",
"_____no_output_____"
]
],
[
[
"plt.figure()\nv = venn2(subsets = (4, 4, 10), set_labels = ('empl', 'inc'))\nv.get_label_by_id('100').set_text('included')\nv.get_label_by_id('010').set_text('included')\nv.get_label_by_id('110').set_text('included')\nplt.title('outer join')\nplt.show()",
"_____no_output_____"
],
[
"outer = pd.merge(empl,inc,on=['municipality','year'],how='outer')\n\nprint(f'Number of municipalities = {len(outer.municipality.unique())}')\nprint(f'Number of years = {len(outer.year.unique())}')",
"Number of municipalities = 99\nNumber of years = 14\n"
]
],
[
[
"We see that the **outer join** includes rows that exist in either dataframe and therefore includes missing values:",
"_____no_output_____"
]
],
[
[
"I = (outer.year.isin(diff_y)) | (outer.municipality.isin(diff_m))\nouter.loc[I, :].head(15)",
"_____no_output_____"
]
],
[
[
"### Inner join",
"_____no_output_____"
]
],
[
[
"plt.figure()\nv = venn2(subsets = (4, 4, 10), set_labels = ('empl', 'inc'))\nv.get_label_by_id('100').set_text('dropped'); v.get_patch_by_id('100').set_alpha(0.05)\nv.get_label_by_id('010').set_text('dropped'); v.get_patch_by_id('010').set_alpha(0.05)\nv.get_label_by_id('110').set_text('included')\nplt.title('inner join')\nplt.show()",
"_____no_output_____"
],
[
"inner = pd.merge(empl,inc,how='inner',on=['municipality','year'])\n\nprint(f'Number of municipalities = {len(inner.municipality.unique())}')\nprint(f'Number of years = {len(inner.year.unique())}')",
"Number of municipalities = 98\nNumber of years = 10\n"
]
],
[
[
"We see that the **inner join** does not contain any rows that are not in both datasets. ",
"_____no_output_____"
]
],
[
[
"I = (inner.year.isin(diff_y)) | (inner.municipality.isin(diff_m))\ninner.loc[I, :].head(15)",
"_____no_output_____"
]
],
[
[
"### Left join",
"_____no_output_____"
],
[
"In my work, I most frequently use the **left join**. It is also known as a *many-to-one* join. \n\n* **Left dataset:** `inner` many observations of a given municipality (one per year),\n* **Right dataset:** `area` at most one observation per municipality and new variable (km2). ",
"_____no_output_____"
]
],
[
[
"inner_with_area = pd.merge(inner, area, on='municipality', how='left')\ninner_with_area.head(5)",
"_____no_output_____"
],
[
"print(f'inner has shape {inner.shape}')\nprint(f'area has shape {area.shape}')\nprint(f'merge result has shape {inner_with_area.shape}')",
"inner has shape (980, 4)\narea has shape (99, 2)\nmerge result has shape (980, 5)\n"
],
[
"plt.figure()\nv = venn2(subsets = (4, 4, 10), set_labels = ('inner', 'area'))\nv.get_label_by_id('100').set_text('included:\\n no km2'); \nv.get_label_by_id('010').set_text('dropped'); v.get_patch_by_id('010').set_alpha(0.05)\nv.get_label_by_id('110').set_text('included:\\n with km2')\nplt.title('left join')\nplt.show()",
"_____no_output_____"
]
],
[
[
"**Intermezzo:** Finding the non-overlapping observations",
"_____no_output_____"
]
],
[
[
"not_in_area = [m for m in inner.municipality.unique() if m not in area.municipality.unique()]\nnot_in_inner = [m for m in area.municipality.unique() if m not in inner.municipality.unique()]\n\nprint(f'There are {len(not_in_area)} municipalities in inner that are not in area. They are:')\nprint(not_in_area)\nprint('')\n\nprint(f'There is {len(not_in_inner)} municipalities in area that are not in inner. They are:')\nprint(not_in_inner)\nprint('')",
"There are 0 municipalities in inner that are not in area. They are:\n[]\n\nThere is 1 municipalities in area that are not in inner. They are:\n['Christiansø']\n\n"
]
],
[
[
"**Check that km2 is never missing:**",
"_____no_output_____"
]
],
[
[
"inner_with_area.km2.isnull().mean()",
"_____no_output_____"
]
],
[
[
"### Alternative function for left joins: `df.join()`",
"_____no_output_____"
],
[
"To use a left join function `df.join()`, we must first set the index. Technically, we do not need this, but if you ever need to join on more than one variable, `df.join()` requires you to work with indices so we might as well learn it now. ",
"_____no_output_____"
]
],
[
[
"inner.set_index('municipality', inplace=True)\narea.set_index('municipality', inplace=True)\nfinal = inner.join(area)\nprint(f'final has shape: {final.shape}')\nfinal.head(5)",
"final has shape: (980, 4)\n"
]
],
[
[
"## 1.3 Other programming languages ",
"_____no_output_____"
],
[
"**SQL** (including SAS *proc sql*)",
"_____no_output_____"
],
[
"SQL is one of the most powerful database languages and many other programming languages embed a version of it. For example, SAS has the `proc SQL`, where you can use SQL syntax. ",
"_____no_output_____"
],
[
"SQL is written in statements such as \n\n* **left join** `select * from empl left join inc on empl.municipality = inc.municipality and empl.year = inc.year`\n* **outer join** `select * from empl full outer join inc on empl.municipality = inc.municipality and empl.year = inc.year`",
"_____no_output_____"
],
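[
"If you want to try the SQL syntax from Python, one option (added illustration, not used in the rest of this lecture, and assuming the long-format `empl` and `inc` frames as loaded at the top of this notebook) is an in-memory SQLite database:\n\n```python\nimport sqlite3\n\ncon = sqlite3.connect(':memory:')\nempl.to_sql('empl', con, index=False)\ninc.to_sql('inc', con, index=False)\n\nquery = 'select * from empl left join inc on empl.municipality = inc.municipality and empl.year = inc.year'\nleft_sql = pd.read_sql_query(query, con)\n```",
"_____no_output_____"
],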
[
"**STATA**",
"_____no_output_____"
],
[
"In Stata, the command `merge` nests many of the commands mentioned above. You specify `merge 1:1` for a one-to-one merge or `merge m:1` or `merge 1:m` for many-to-one or one-to-many merges, and you do not use `merge m:m` (until you are quite advanced). ",
"_____no_output_____"
],
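[
"In pandas, a similar safeguard (added remark) is the `validate` argument of `pd.merge`, which raises an error if the merge is not of the kind you expected (the frame and key names below are placeholders):\n\n```python\n# 'm:1' = many-to-one, mirroring Stata's merge m:1\nmerged = pd.merge(left_df, right_df, on='key', how='left', validate='m:1')\n```",
"_____no_output_____"
],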
[
"<a id=\"Fetching-data-using-an-API\"></a>\n\n# 2. Fetching data using an API",
"_____no_output_____"
],
[
"API stands for **Application Programming Interface**. An API is an interface through which we can directly ask for and **receive data from an online source**. We will be using packages for this and will not look at what is going on underneath. \n\n1. We use `pandas_datareader` to access many common **international online data** sources (install with `pip install pandas-datareader`)\n2. For **Statistics Denmark**, Jakob Elben has written the `pydst` package (install with `pip install git+https://github.com/elben10/pydst`)",
"_____no_output_____"
],
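[
"As a small illustration of `pandas_datareader` (added sketch; the choice of series is arbitrary and not used below), you can fetch e.g. quarterly US GDP from FRED:\n\n```python\nimport datetime\nimport pandas_datareader.data as web\n\nstart = datetime.datetime(2005, 1, 1)\nend = datetime.datetime(2017, 12, 31)\ngdp = web.DataReader('GDP', 'fred', start, end)\n```",
"_____no_output_____"
],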
[
"Fetching data from an API requires an **internet connection** and works directly without saving data to your hard disc (unless you ask Python to do so afterwards). You can use it to automate tasks such as fetching the most recent data, doing some calculations and outputting it in the same manner. This can be useful e.g. for quarterly reports. ",
"_____no_output_____"
],
[
"**Pros:** Automatic; smart; everything is done from Python (so no need to remember steps in between). \n\n**Cons:** The connection can be slow or drop out, which may lead to errors. If e.g. 100 students simultaneously fetch data (during, say, a lecture), the host server may not be able to service all the requests and may drop out. ",
"_____no_output_____"
],
[
"> The raw output data from an API could look like this: https://stats.oecd.org/SDMX-JSON/data/NAAG. It is a log list of non-human-readable gobledygook in the so-called \"JSON\" format. ",
"_____no_output_____"
],
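[
"If you want to peek at that raw JSON yourself, a minimal sketch using the `requests` package (assumes `requests` is installed and the server is reachable):\n\n```python\nimport requests\n\nresponse = requests.get('https://stats.oecd.org/SDMX-JSON/data/NAAG')\nraw = response.json()        # nested dictionaries and lists\nprint(list(raw.keys()))      # inspect the top-level structure\n```",
"_____no_output_____"
],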
[
"## 2.1 Import data from Denmark Statistics",
"_____no_output_____"
],
[
"**Setup:**",
"_____no_output_____"
]
],
[
[
"Dst = pydst.Dst(lang='en') # setup data loader with the langauge 'english'",
"_____no_output_____"
]
],
[
[
"Data from DST are organized into: \n\n1. **Subjects:** indexed by numbers. Use `Dst.get_subjects()` to see the list. \n2. **Tables:** with names like \"INDKP107\". Use `Dst.get_tables(subjects=['X'])` to see all tables in a subject. \n\n**Data is extracted** with `Dst.get_data(table_id = 'NAME', variables = DICT)`. ",
"_____no_output_____"
],
[
"**Subjects:** With `Dst.get_subjects()` we can list all subjects.",
"_____no_output_____"
]
],
[
[
"Dst.get_subjects()",
"_____no_output_____"
]
],
[
[
"**Tables:** With `get_tables()`, we can list all tables under a subject.",
"_____no_output_____"
]
],
[
[
"tables = Dst.get_tables(subjects=['04'])\ntables",
"_____no_output_____"
]
],
[
[
"**Variable in a dataset:**",
"_____no_output_____"
]
],
[
[
"tables[tables.id == 'INDKP107']",
"_____no_output_____"
],
[
"indk_vars = Dst.get_variables(table_id='INDKP107')\nindk_vars",
"_____no_output_____"
]
],
[
[
"**Values of variable in a dataset:**",
"_____no_output_____"
]
],
[
[
"indk_vars = Dst.get_variables(table_id='INDKP107')\nfor id in ['ENHED','KOEN','UDDNIV','INDKOMSTTYPE']:\n print(id)\n values = indk_vars.loc[indk_vars.id == id,['values']].values[0,0]\n for value in values: \n print(f' id = {value[\"id\"]}, text = {value[\"text\"]}')",
"ENHED\n id = 101, text = People with type of income (number)\n id = 110, text = Amount of income (DKK 1.000)\n id = 116, text = Average income for all people (DKK)\n id = 121, text = Average income for people with type of income (DKK)\nKOEN\n id = MOK, text = Men and women, total\n id = M, text = Men\n id = K, text = Women\nUDDNIV\n id = 10, text = 10 BASIC SCHOOL 8-10 grade\n id = 26, text = 20+25 UPPER SECONDARY SCHOOL\n id = 35, text = 35 VOCATIONAL EDUCATION\n id = 40, text = 40 SHORT-CYCLE HIGHER EDUCATION\n id = 61, text = 50+60 MEDIUM-CYCLE HIGHER EDUCATION, BACHLEOR\n id = 65, text = 65 LONG-CYCLE HIGHER EDUCATION\n id = 9, text = Not stated\nINDKOMSTTYPE\n id = 100, text = 1 Disposable income (2+30-31-32-35)\n id = 105, text = 2 Pre-tax Income, total (3+7+22+26+29)\n id = 110, text = 3 Primary income (4+5+6)\n id = 115, text = 4 Wages and salaries etc., total\n id = 120, text = 5 Entrepreneurial income, total\n id = 125, text = 6 Received fees subject to labour market contributions\n id = 130, text = 7 Public transfer incomes(8+14+19)\n id = 135, text = 8 Unemployment and cash benefits (9+10+11+12+13)\n id = 140, text = 9 Unemployment benefits\n id = 145, text = 10 Other benefits from unemployment funds\n id = 150, text = 11 Cash benefits\n id = 155, text = 12 Job training & Limited employment benefits\n id = 160, text = 13 Sickness- & parental leave\n id = 165, text = 14 Other transfers(15+16+17+18)\n id = 170, text = 15 Public educational grants\n id = 175, text = 16 Housing benefits\n id = 180, text = 17 Child benefits\n id = 185, text = 18 Green check\n id = 190, text = 19 Public pensions(20+21)\n id = 195, text = 20 Early retirement pay\n id = 200, text = 21 Disability and old age pensions\n id = 205, text = 22 Private pensions(23+24+25)\n id = 210, text = 23 Public servants pension\n id = 215, text = 24 Pension from the ATP (Labour Market Supplementary Pension Scheme)\n id = 220, text = 25 Labour market and private pensions (Annuities only)\n id = 225, text = 26 Capital income, gross (27+28)\n id = 230, text = 27 Interest received\n id = 235, text = 28 Other property income (From stocks etc.)\n id = 240, text = 29 Other personal income\n id = 245, text = 30 Imputed rent\n id = 250, text = 31 Interest expenses\n id = 255, text = 32 Tax, total (33+34)\n id = 260, text = 33 Income taxes\n id = 265, text = 34 Labour market contributions etc.\n id = 270, text = 35 Paid alimonies\n id = 275, text = Equivalised Disposable income\n id = 280, text = Land tax home owners\n id = 285, text = Land tax, tenants\n id = 290, text = Taxable income\n"
]
],
[
[
"**Get data:**",
"_____no_output_____"
]
],
[
[
"variables = {'OMRÅDE':['*'],'ENHED':['110'],'KOEN':['M','K'],'TID':['*'],'UDDNIV':['65'],'INDKOMSTTYPE':['100']}\ninc_api = Dst.get_data(table_id = 'INDKP107', variables=variables)\ninc_api.head(5)",
"_____no_output_____"
]
],
[
[
"## 2.2 FRED (Federal Reserve Economic Data)",
"_____no_output_____"
],
[
"**GDP data** for the US",
"_____no_output_____"
]
],
[
[
"start = datetime.datetime(2005,1,1)\nend = datetime.datetime(2017,1,1)\ngdp = pandas_datareader.data.DataReader('GDP', 'fred', start, end)",
"_____no_output_____"
],
[
"gdp.head(10)",
"_____no_output_____"
]
],
[
[
"**Finding data:**\n\n1. go to https://fred.stlouisfed.org \n2. search for employment\n3. click first link\n4. table name is next to header ",
"_____no_output_____"
],
[
"**Fetch:**",
"_____no_output_____"
]
],
[
[
"empl_us = pandas_datareader.data.DataReader('PAYEMS', 'fred', datetime.datetime(1939,1,1), datetime.datetime(2018,12,1))",
"_____no_output_____"
]
],
[
[
"**Plot:**",
"_____no_output_____"
]
],
[
[
"fig = plt.figure()\nax = fig.add_subplot(1,1,1)\n\nempl_us.plot(ax=ax)\n\nax.legend(frameon=True)\nax.set_xlabel('')\nax.set_ylabel('employment (US)');",
"_____no_output_____"
]
],
[
[
"## 2.3 World Bank indicators: `wb`",
"_____no_output_____"
],
[
"**Finding data:**\n\n1. go to https://data.worldbank.org/indicator/\n2. search for GDP \n3. variable name (\"NY.GDP.PCAP.KD\") is in the URL",
"_____no_output_____"
],
[
"**Fetch GDP:**",
"_____no_output_____"
]
],
[
[
"from pandas_datareader import wb",
"_____no_output_____"
],
[
"wb_gdp = wb.download(indicator='NY.GDP.PCAP.KD', country=['SE','DK','NO'], start=1990, end=2017)\nwb_gdp = wb_gdp.rename(columns = {'NY.GDP.PCAP.KD':'GDP'})\nwb_gdp = wb_gdp.reset_index()\nwb_gdp.head(5)",
"_____no_output_____"
],
[
"wb_gdp.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 84 entries, 0 to 83\nData columns (total 3 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 country 84 non-null object \n 1 year 84 non-null object \n 2 GDP 84 non-null float64\ndtypes: float64(1), object(2)\nmemory usage: 2.1+ KB\n"
]
],
[
[
"**Problem:** Unfortunately, it turns out that the dataframe has stored the variable year as an \"object\", meaning in practice that it is a string. Country is an object because it is a string, but that cannot be helped. Fortunately, GDP is a float (i.e. a number). Let's convert year to make it an integer:",
"_____no_output_____"
]
],
[
[
"wb_gdp.year = wb_gdp.year.astype(int)\nwb_gdp.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 84 entries, 0 to 83\nData columns (total 3 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 country 84 non-null object \n 1 year 84 non-null int32 \n 2 GDP 84 non-null float64\ndtypes: float64(1), int32(1), object(1)\nmemory usage: 1.8+ KB\n"
]
],
[
[
"**Fetch employment-to-population ratio:**",
"_____no_output_____"
]
],
[
[
"wb_empl = wb.download(indicator='SL.EMP.TOTL.SP.ZS', country=['SE','DK','NO'], start=1990, end=2017)\nwb_empl.rename(columns = {'SL.EMP.TOTL.SP.ZS':'employment_to_pop'}, inplace=True)\nwb_empl.reset_index(inplace = True)\nwb_empl.year = wb_empl.year.astype(int)\nwb_empl.head(3)",
"_____no_output_____"
]
],
[
[
"**Merge:**",
"_____no_output_____"
]
],
[
[
"wb = pd.merge(wb_gdp, wb_empl, how='outer', on = ['country','year']);\nwb.head(5)",
"_____no_output_____"
]
],
[
[
"<a id=\"Split-apply-combine\"></a>\n\n# 3. Split-apply-combine",
"_____no_output_____"
],
[
"One of the most useful skills to learn is **the split-apply-combine process**. For example, we may want to compute the average employment rate within a municipality over time and calculate whether the employment rate in each year is above or below the average. We calculate this variable using a split-apply-combine procedure: \n\n1. **split**: divide the dataset into units (one for each municipality)\n2. **apply**: compute the average employment rate for each unit\n3. **combine**: merge this new variable back onto the original dataset",
"_____no_output_____"
],
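[
"As a toy illustration of the three steps before applying them to the employment data (a minimal sketch with made-up numbers):\n\n```python\nimport pandas as pd\n\ntoy = pd.DataFrame({'municipality': ['A', 'A', 'B', 'B'], 'e': [0.70, 0.74, 0.60, 0.58]})\ngroup_mean = toy.groupby('municipality')['e'].transform('mean')  # split + apply\ntoy['above_mean'] = toy['e'] > group_mean                         # combine back onto the rows\nprint(toy)\n```",
"_____no_output_____"
],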
[
"## 3.1 Groupby",
"_____no_output_____"
],
[
"**Example data:**",
"_____no_output_____"
]
],
[
[
"empl = empl.sort_values(['municipality','year']) # sort by first municipality then year\nempl.head(5)",
"_____no_output_____"
]
],
[
[
"Use **groupby** to calculate **within means**:",
"_____no_output_____"
]
],
[
[
"empl.groupby('municipality')['e'].mean().head(5)",
"_____no_output_____"
]
],
[
[
"**Custom functions** can be specified by using the `lambda` notation. E.g., average change:",
"_____no_output_____"
]
],
[
[
"empl.groupby('municipality')['e'].apply(lambda x: x.diff(1).mean()).head(5)",
"_____no_output_____"
]
],
[
[
"Or:",
"_____no_output_____"
]
],
[
[
"myfun = lambda x: np.mean(x[1:]-x[:-1])\nempl.groupby('municipality')['e'].apply(lambda x: myfun(x.values)).head(5)",
"_____no_output_____"
]
],
[
[
"**Plot statistics**: Dispersion in employment rate across Danish municipalities over time.",
"_____no_output_____"
]
],
[
[
"fig = plt.figure()\nax = fig.add_subplot(1,1,1)\n\nempl.groupby('year')['e'].std().plot(ax=ax,style='-o')\n\nax.set_ylabel('std. dev.')\nax.set_title('std. dev. across municipalities in the employment rate');",
"_____no_output_____"
]
],
[
[
"## 3.2 Split-Apply-Combine",
"_____no_output_____"
],
[
"**Goal:** Calculate within municipality difference to mean employment rate.",
"_____no_output_____"
],
[
"**1. Split**:",
"_____no_output_____"
]
],
[
[
"e_grouped = empl.groupby('municipality')['e']",
"_____no_output_____"
]
],
[
[
"**2. Apply:**",
"_____no_output_____"
]
],
[
[
"e_mean = e_grouped.mean() # mean employment rate\ne_mean.head(10)",
"_____no_output_____"
]
],
[
[
"Change name of series:",
"_____no_output_____"
]
],
[
[
"e_mean.name = 'e_mean' # necessary for join",
"_____no_output_____"
]
],
[
[
"**3. Combine:**",
"_____no_output_____"
]
],
[
[
"empl_ = empl.set_index('municipality').join(e_mean, how='left')\nempl_['diff'] = empl_.e - empl_.e_mean\nempl_.xs('Copenhagen')",
"_____no_output_____"
]
],
[
[
"**Plot:**",
"_____no_output_____"
]
],
[
[
"municipalities = ['Copenhagen','Roskilde','Lejre']\n\nfig = plt.figure()\nax = fig.add_subplot(1,1,1)\n\nfor m in municipalities:\n empl_.xs(m).plot(x='year',y='diff',ax=ax,label=m)\n\nax.legend(frameon=True)\nax.set_ylabel('difference to mean')",
"_____no_output_____"
]
],
[
[
"### with `agg()`",
"_____no_output_____"
],
[
"**Agg:** The same value for all observations in a group.",
"_____no_output_____"
]
],
[
[
"empl_ = empl.copy()\n\n# a. split-apply\ne_mean = empl_.groupby('municipality')['e'].agg(lambda x: x.mean())\ne_mean.name = 'e_mean'\n\n# b. combine\nempl_ = empl_.set_index('municipality').join(e_mean, how='left')\nempl_['diff'] = empl_.e - empl_.e_mean\nempl_.xs('Copenhagen')",
"_____no_output_____"
]
],
[
[
"**Note:** Same result!!",
"_____no_output_____"
],
[
"### with - `transform()`",
"_____no_output_____"
],
[
"**Transform:** Different values across observations in a group.",
"_____no_output_____"
]
],
[
[
"empl_ = empl.copy()\nempl_['diff'] = empl_.groupby('municipality')['e'].transform(lambda x: x - x.mean())\nempl_.set_index('municipality').xs('Copenhagen')",
"_____no_output_____"
]
],
[
[
"**Note:** Same result!!",
"_____no_output_____"
],
[
"### Need more complex stuff? ",
"_____no_output_____"
],
[
"Look [here](https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html).",
"_____no_output_____"
],
[
"<a id=\"Summary\"></a>\n\n# 4. Summary",
"_____no_output_____"
],
[
"**This lecture:** We have discussed\n\n1. Combining datasets (**merging** and concatenating)\n2. Fatching data using an **API** (DST, FRED, World Bank, etc.)\n3. **Split-apply-combine** (groupby, agg, transform)",
"_____no_output_____"
],
[
"**Your work:** Before solving Problem Set 4 read through this notebook and play around with the code.\n\n**Project 1:** See the details under *Project 1: Data analysis* [here](https://numeconcopenhagen.netlify.com/exercises/).<br>\n**Deadline:** 6th of April.",
"_____no_output_____"
],
[
"**Next lecture:** Algorithms: Searching and sorting algorithms.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e770687fddab7499e25af61c7c9347a4122ea744 | 5,690 | ipynb | Jupyter Notebook | notebook/ws-1-discover_data.ipynb | jeremykid/692_project_webtable | 4cbfec0bad04a01cd6f5b6cf9f5de133eb93eb92 | [
"MIT"
] | null | null | null | notebook/ws-1-discover_data.ipynb | jeremykid/692_project_webtable | 4cbfec0bad04a01cd6f5b6cf9f5de133eb93eb92 | [
"MIT"
] | null | null | null | notebook/ws-1-discover_data.ipynb | jeremykid/692_project_webtable | 4cbfec0bad04a01cd6f5b6cf9f5de133eb93eb92 | [
"MIT"
] | null | null | null | 24.212766 | 368 | 0.527241 | [
[
[
"from tqdm.notebook import tqdm\nimport pandas as pd\nimport pickle",
"_____no_output_____"
],
[
"f = open(\"./source/dmoz_domain_category.tab\")\n",
"_____no_output_____"
],
[
"dmoz_map = {\n \"domain\": [],\n \"category\": []\n}\n\nfor l in f:\n read_list = l.strip().split(\"\\t\")\n if read_list[0][1:-1] == 'domain':\n continue\n dmoz_map['domain'].append(read_list[0][1:-1])\n dmoz_map['category'].append(read_list[1][1:-1])",
"_____no_output_____"
],
[
"dmoz_df = pd.DataFrame.from_dict(dmoz_map)",
"_____no_output_____"
],
[
"dmoz_df.shape",
"_____no_output_____"
],
[
"dmoz_df.head()",
"_____no_output_____"
],
[
"with open('./source/dmoz_df.pickle', 'wb') as handle:\n pickle.dump(dmoz_df, handle)",
"_____no_output_____"
],
[
"new_f = open(\"./source/parsed-new.csv\")\n\nparsed_new_map = {\n \"domain\": [],\n \"category\": []\n}\n\nfor l in new_f:\n read_list = l.strip().split(\",\")\n \n parsed_new_map['domain'].append(read_list[0])\n parsed_new_map['category'].append(read_list[1])\n \nparsed_new_df = pd.DataFrame.from_dict(parsed_new_map)",
"_____no_output_____"
],
[
"with open('./source/parsed_new_df.pickle', 'wb') as handle:\n pickle.dump(parsed_new_df, handle)",
"_____no_output_____"
],
[
"with open('./source/parsed_new_df.pickle', 'rb') as handle:\n parsed_new_df = pickle.load(handle)\nparsed_new_df.shape",
"_____no_output_____"
],
[
"parsed_new_df.head()",
"_____no_output_____"
],
[
"subdomain_f = open(\"./source/parsed-subdomain.csv\")\n\nparsed_subdomain_map = {\n \"domain\": [],\n \"category\": []\n}\n\nfor l in subdomain_f:\n read_list = l.strip().split(\",\")\n \n parsed_subdomain_map['domain'].append(read_list[0])\n parsed_subdomain_map['category'].append(read_list[1])\n \nparsed_subdomain_df = pd.DataFrame.from_dict(parsed_subdomain_map)",
"_____no_output_____"
],
[
"with open('./source/parsed_subdomain_df.pickle', 'wb') as handle:\n pickle.dump(parsed_subdomain_df, handle)",
"_____no_output_____"
],
[
"with open('./source/parsed_subdomain_df.pickle', 'rb') as handle:\n parsed_subdomain_df = pickle.load(handle)\nparsed_subdomain_df.shape",
"_____no_output_____"
],
[
"parsed_subdomain_df.head()",
"_____no_output_____"
],
[
"parsed_subdomain_df['category'].unique()",
"_____no_output_____"
]
],
[
[
"Suggestions remove work/other country\nExtract the second term after Top",
"_____no_output_____"
]
]
] | [
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e7706aee9906f194b0c303c69ea1d359cd489938 | 650,771 | ipynb | Jupyter Notebook | 06_advanced_agromanagement_with_PCSE.ipynb | albertdaniell/wofost_kalro | 0d1eff238e4e3eac41245e9858467bfbd07b796e | [
"MIT"
] | 1 | 2021-09-07T06:45:05.000Z | 2021-09-07T06:45:05.000Z | 06_advanced_agromanagement_with_PCSE.ipynb | albertdaniell/wofost_kalro | 0d1eff238e4e3eac41245e9858467bfbd07b796e | [
"MIT"
] | 1 | 2020-04-04T09:58:23.000Z | 2020-04-04T09:58:23.000Z | 06_advanced_agromanagement_with_PCSE.ipynb | albertdaniell/wofost_kalro | 0d1eff238e4e3eac41245e9858467bfbd07b796e | [
"MIT"
] | 1 | 2020-06-24T13:58:21.000Z | 2020-06-24T13:58:21.000Z | 820.644388 | 321,700 | 0.948981 | [
[
[
"<img style=\"float: right;\" src=\"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAOIAAAAjCAYAAACJpNbGAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAABR0RVh0Q3JlYXRpb24gVGltZQAzLzcvMTNND4u/AAAAHHRFWHRTb2Z0d2FyZQBBZG9iZSBGaXJld29ya3MgQ1M26LyyjAAACMFJREFUeJztnD1y20gWgD+6nJtzAsPhRqKL3AwqwQdYDpXDZfoEppNNTaWbmD7BUEXmI3EPMFCR2YI1UDQpdAPqBNzgvRZA/BGUZEnk9FeFIgj0z2ugX7/XP+jGer2mLv/8b6d+4Efgf/8KG0+Zn8XyXLx+bgEslqegcfzxSY3Irrx6bgEsFssBWsRGowGufwHAYtq7u+H6fUCOxTTWax4wBAbr+SRqNDKesOv3gN/133sW0yh927j1mucIaFWINl7PJ+OcvMcfW8Bol3iN44+mLIOsTCp3UJFfAETr+WRQcG8EOJpunEnTyDlYzycbeWr5xxq3jOF6PglK8ix9buv5xCsrAzBkMV1l5OwD/aJ4BXzV3+8F9z4gz/hTSbz8cxc84FuNvDc4VIsYA7+qohmGwAnycA194G22YqUYlZxv4vpN4AuwBv4oON5m8k3TVLnK4sYFcRyN86dWvCwnlCvFCeUVvwX8CkSZZ5eWs5mLJWE/VZThBMgpfirPk5J4f1SU4QsQ6LNP4+j9OkSUKdRiGlD87CWe3PcyR5PFdAhc1cz/joOziMoIeVF95GX1EGVY6bWhvsAeZQrm+kON80PDneD6PRbTi4LQpmJfsZieFaR1qXlXURh3y2BaBPyG63sspv0t6e+CKJTrf2YxHe8Qr6z8AXBdGbMoHgCTshgr4AiItfxljenPJGv5roCi+rGVw1TExTTWl99ThRsglfYHUnF7SMv+Bhjn4idxbhFLGiAu6gjXD3LuUBF5VzWi3CoAfMP1kxe7mNYZMT5DLFgf13eAXi3ZtvMOsUb3V3J5/mmqy+/66RbnTC1LFdfIu/kd8Qx2bTQeg2GBTPfiUF1TgHNE0QaIq/JDX9RKr/WBy/V8EhfEHWncWMO2EKV8S7UypYnYdE2r+o8gyj5MHXVYsZh+JnG7A+3LPQxR5g9II/UJ148ockmrybqm2+Qapo6gppwB8J7EM6jqaz8u0lhfkXgB58BKPam6rvEdh2kRARbTMa7/HXEfVqnW8hxxWwE+5+JJRTYd9CM90gxw/XFuMKMo/yTNDzUkLnbr6rCYnuH6N8igQ3CvNPJproDPuH6MKMd4Z5kMUjnrh98tn1if72/Ie729Vzq708L0YV3/HGmgB4iHsjOProhhd1lrEr4zaz/FvM4lolTnqWum/6jKmeuDmFb1jHylNg96hPQbhcU0wPVBXESvQI4W5aNshsK4jeOPhSOcOaThMVb48dhU8m2UlR+29ZHzrqyhLL0EaTROteGt67EYIsT6F1HXC/ikcvS00dl51PRwLaIwQtzCxGWRFnRMkT8v/SyAy8I+iliHJtDUsHHq7imipE42GtJanxdcB6mgQcm9MmKNs1m5F9MI13+n+cXZSEpAeV8mQgZqNkmU/HsuT7kf4PrGhXcK0h1SXv7iPKsJKCrDYvoV17+meMqhiDFlll7GEb4U3iseAf+k7mqksmU9qUoaj73E7TEtol3iZnks7Moai8WylUN3TS0WANbzyYv2rqxFtFheANYi7iGNRoPOrO2QGTQIu8vhU8vSmbWNDAHQD7vLYWfWbgFx2F3ee3FBZ9ZuIgMpTWAQdpeRXm9pPoPOrD3UMCtkQM4BRmF3ubG6ZZdxkOfCWsT9pU96CuX56KfOjeIFVC8Ar8NI0xuyOQJsVkWl8xzptQGPNY/6xFiLuL+0gIu0FVTrNESmbK7C7tLrzNpmPW0EeGF32UyFN19UnCAT4ZHGWWnYqDNrB4jViZBK/kbD9sLuMiBZSD8AVp1Z+0LD/NmZta+BIzOS3pm1xwBhd9kvkeEGUbQeqSmIdHhkXnGs5fIQRUxPV1x0Zm2zMuoq7C69rU/yBWAt4v7iAd86s/ZaDweZP+wBvwBOZ9b2SCrrmPzk+AWizA09j1QxMK4gZumcWKUWMvkdA56mfxN2l7GmHWk6V2F32Qi7yxaIsmnYHvkJ9zEQqAwBotQXwK2m0c+EN/Kk8zPTZiOkIWrp/xNTnpeOtYh7iFauN+k5W+0vXab6UsbyecAw229SxWiG3aVZ7NBCKrGHuneazy2iyBeIuxkjk9UDE1bzOtJ4IzbdwysNN0D6dnf9Rk3/iKSBWOnhUbASSWW+DbvLWM+HKreZ3O/r77gza5u842w6LxFrEfcTj+Jv3mK4q7Co63hE+fI6E94hUaT0cry+XushSuvoNZO2CdsCrlXJHDYVMUIUJso2BmhfL+wuV6rMvVR6AXnS1428XupaE7Hwnrqkg4cMGD0lr3NfpVegrUw1m2sN0+crNirEX1uTqiPbPoyI/QSKKmqA9I9aer+fcR2zxIj7GiMV+EYVIkZc3r5eH2rYI+0vnpBYIE/vGwUCdYM7s3agbqXJu58VIOwug86sfd2ZtSPNKwi7S9PHy4UnscCmXKuUZQRdsqbPwCHp2754pKYnW0akcZBO/x2df29XnvA//6iV8T3TSluBmOQlR+v5JNvaHixlDZRalRZifbZaAg3vIIrkmP6YVu6owI1M9x2r0vVIFCBGXNLS96Ph45IGY2ey6e1DY20UMaLGItUXoIhVvCv5tvDg2MWLqYNaoKBKWe6Z7gBR8OwAzZOyD4poBmtidlwt/gIxw/QHz0+oWKIoj19fRz8p3YOjoV8195F5l31ltZ5PfnluISyW+/IK6SPstRIiH/FaLHvLa2R+6F6f978AVsD7v0vf0HK4vNK9VfbVojSBceP4o/PcglgsD8GMmjaRbRCc1PEQIrbv45nlIfleIrs778XkrcWSZXMcXPZyqbvfxy7ckuyqHJPslJzH9c3We2ZRbx1O/07ziJbDI1FE2Qwp4n4DNzHJhkZF16+3bnwrCmi40U2eWoj7KZvobn7+YtKO1vPJVyyWPSZrER1kNU0TqfienpvlaWZR7oX+3tba6lxcX7MK3tNfo2RlpNc8tthsIFbAKYtpsA+TtRbLNp5/H4/EFXX0MOfbOGUxvbCKaDkEnl8Rq0jc1ayFjhFFjKwiWg6B/wNk+JCXXNBIXQAAAABJRU5ErkJggg==\">\n\n",
"_____no_output_____"
],
[
"# Advanced agromanagement with PCSE/WOFOST\n\nThis notebook will demonstrate how to implement advanced agromanagement options with PCSE/WOFOST.\n\nAllard de Wit, April 2018\n\nFor the example we will assume that data files are in the data directory within the directory where this notebook is located. This will be the case if you downloaded the notebooks from github.\n\n**Prerequisites for running this notebook**\n\nSeveral packages need to be installed for running PCSE/WOFOST:\n\n 1. PCSE and its dependencies. See the [PCSE user guide](http://pcse.readthedocs.io/en/stable/installing.html) for more information;\n 2. The `pandas` module for processing and storing WOFOST output;\n 3. The `matplotlib` module for plotting results\n\nFinally, you need a working internet connection.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport os, sys\n\nimport matplotlib\nmatplotlib.style.use(\"ggplot\")\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport yaml\n\nimport pcse\nfrom pcse.models import Wofost71_WLP_FD\nfrom pcse.fileinput import CABOFileReader, YAMLCropDataProvider\nfrom pcse.db import NASAPowerWeatherDataProvider\nfrom pcse.util import WOFOST71SiteDataProvider\nfrom pcse.base import ParameterProvider\ndata_dir = os.path.join(os.getcwd(), \"data\")\n\nprint(\"This notebook was built with:\")\nprint(\"python version: %s \" % sys.version)\nprint(\"PCSE version: %s\" % pcse.__version__)",
"This notebook was built with:\npython version: 3.7.5 (default, Oct 31 2019, 15:18:51) [MSC v.1916 64 bit (AMD64)] \nPCSE version: 5.4.2\n"
]
],
[
[
"## Input requirements\nFor running the PCSE/WOFOST (and PCSE models in general), you need three types of inputs:\n1. Model parameters that parameterize the different model components. These parameters usually\n consist of a set of crop parameters (or multiple sets in case of crop rotations), a set of soil parameters\n and a set of site parameters. The latter provide ancillary parameters that are specific for a location.\n2. Driving variables represented by weather data which can be derived from various sources.\n3. Agromanagement actions which specify the farm activities that will take place on the field that is simulated\n by PCSE.\n\n## Reading model parameters\nIn this example, we will derive the model parameters from different sources. First of all, the crop parameters will be read from my [github repository](https://github.com/ajwdewit/WOFOST_crop_parameters) using the `YAMLCropDataProvider`. Next, the soil parameters will be read from a classical CABO input file using the `CABOFileReader`. Finally, the site parameters can be defined directly using the `WOFOST71SiteDataProvider` which provides sensible defaults for site parameters. \n\nHowever, PCSE models expect a single set of parameters and therefore they need to be combined using the `ParameterProvider`:",
"_____no_output_____"
]
],
[
[
"crop = YAMLCropDataProvider()\nsoil = CABOFileReader(os.path.join(data_dir, \"soil\", \"ec3.soil\"))\nsite = WOFOST71SiteDataProvider(WAV=100,CO2=360)\nparameterprovider = ParameterProvider(soildata=soil, cropdata=crop, sitedata=site)",
"_____no_output_____"
],
[
"crop = YAMLCropDataProvider()\n",
"_____no_output_____"
]
],
[
[
"## Reading weather data\nFor reading weather data we will use the NASAPowerWeatherDataProvider. ",
"_____no_output_____"
]
],
[
[
"from pcse.fileinput import ExcelWeatherDataProvider\nweatherfile = os.path.join(data_dir, 'meteo', 'nl1.xlsx')\nweatherdataprovider = ExcelWeatherDataProvider(weatherfile)\n",
"_____no_output_____"
]
],
[
[
"## Defining agromanagement with timed events\n\nDefining agromanagement needs a bit more explanation because agromanagement is a relatively\ncomplex piece of PCSE. The agromanagement definition for PCSE is written in a format called `YAML` and for a thorough discusion have a look at the [Section on Agromanagement](https://pcse.readthedocs.io/en/stable/reference_guide.html#the-agromanager) in the PCSE manual.\nFor the current example the agromanagement definition looks like this:\n\n Version: 1.0\n AgroManagement:\n - 2006-01-01:\n CropCalendar:\n crop_name: sugarbeet\n variety_name: Sugarbeet_603\n crop_start_date: 2006-03-31\n crop_start_type: emergence\n crop_end_date: 2006-10-20\n crop_end_type: harvest\n max_duration: 300\n TimedEvents:\n - event_signal: irrigate\n name: Irrigation application table\n comment: All irrigation amounts in cm\n events_table:\n - 2006-07-10: {amount: 10, efficiency: 0.7}\n - 2006-08-05: {amount: 5, efficiency: 0.7}\n StateEvents: null\n\nThe agromanagement definition starts with `Version:` indicating the version number of the agromanagement file\nwhile the actual definition starts after the label `AgroManagement:`. Next a date must be provide which sets the\nstart date of the campaign (and the start date of the simulation). Each campaign is defined by zero or one\nCropCalendars and zero or more TimedEvents and/or StateEvents. The CropCalendar defines the crop type, date of sowing,\ndate of harvesting, etc. while the Timed/StateEvents define actions that are either connected to a date or\nto a model state.\n\nIn the current example, the campaign starts on 2006-01-01, there is a crop calendar for sugar beet starting on\n2006-03-31 with a harvest date of 2006-10-20. Next there are timed events defined for applying irrigation at 2006-07-10 and 2006-08-05. The current example has no state events. For a thorough description of all possibilities see the section on AgroManagement in the Reference Guide.\n\nLoading the agromanagement definition from a file can be done with the `YAMLAgroManagementReader`. However for this example, we can just as easily define it here and parse it directly with the YAML parser. In this case we can directly use the section after the `Agromanagement:` label.",
"_____no_output_____"
]
],
[
[
"yaml_agro = \"\"\"\n- 2006-01-01:\n CropCalendar:\n crop_name: sugarbeet\n variety_name: Sugarbeet_603\n crop_start_date: 2006-03-31\n crop_start_type: emergence\n crop_end_date: 2006-10-20\n crop_end_type: harvest\n max_duration: 300\n TimedEvents:\n - event_signal: irrigate\n name: Irrigation application table\n comment: All irrigation amounts in cm\n events_table:\n - 2006-07-10: {amount: 10, efficiency: 0.7}\n - 2006-08-05: {amount: 5, efficiency: 0.7}\n StateEvents: null\n\"\"\"\nagromanagement = yaml.load(yaml_agro)",
"_____no_output_____"
]
],
[
[
"## Starting and running the WOFOST\nWe have now all parameters, weather data and agromanagement information available to start WOFOST and make a simulation.",
"_____no_output_____"
]
],
[
[
"wofost = Wofost71_WLP_FD(parameterprovider, weatherdataprovider, agromanagement)\nwofost.run_till_terminate()",
"_____no_output_____"
]
],
[
[
"## Getting and visualizing results\n\nNext, we can easily get the output from the model using the get_output() method and turn it into a pandas DataFrame:",
"_____no_output_____"
]
],
[
[
"output = wofost.get_output()\ndf = pd.DataFrame(output).set_index(\"day\")\ndf.tail()",
"_____no_output_____"
]
],
[
[
"Finally, we can visualize the results from the pandas DataFrame with a few commands:",
"_____no_output_____"
]
],
[
[
"fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(16,8))\ndf['LAI'].plot(ax=axes[0], title=\"Leaf Area Index\")\ndf['SM'].plot(ax=axes[1], title=\"Root zone soil moisture\")\nfig.autofmt_xdate()",
"_____no_output_____"
]
],
[
[
"# Defining agromanagement with state events\n\n## Connecting events to development stages\nIt is also possible to connect irrigation events to state variables instead of dates. A logical approach is to connect an irrigation even to a development stage instead of a date, in this way changes in the sowing date will be automatically reflected in changes in irrigation events.\n\nFor this we need to change the definition of the agromanagement as below:\n\n Version: 1.0\n AgroManagement:\n - 2006-01-01:\n CropCalendar:\n crop_name: sugarbeet\n variety_name: Sugarbeet_603\n crop_start_type: emergence\n crop_end_date: 2006-10-20\n crop_end_type: harvest\n max_duration: 300\n TimedEvents: null\n StateEvents:\n - event_signal: irrigate\n event_state: DVS\n zero_condition: rising\n name: Irrigation application table\n comment: All irrigation amounts in cm\n events_table:\n - 0.9: {amount: 10, efficiency: 0.7}\n - 1.5: {amount: 5, efficiency: 0.7}\n - 2006-11-20: null\n \nIn this case the irrigation events are connected to the state DVS and are occurring when the simulated DVS crosses the values 0.9 and 1.5. Note that there two additional parameters: `event_state` which defines the state to which the event is connected and `zero_condition` which specifies the condition under which the state event fires, see for an explanation [here](http://pcse.readthedocs.org/en/latest/code.html#agromanagement). Finally, also note that there must be an \"empty trailing campaign\" defined which defines that the campaign that starts at 2006-01-01 ends at 2006-11-20. Otherwise PCSE cannot determine the end of the simulation period, see also the link above for an explanation.\n\nAgain, we will define the agromanagement directly on the command line and parse it with YAML.",
"_____no_output_____"
]
],
[
[
"yaml_agro = \"\"\"\n- 2006-01-01:\n CropCalendar:\n crop_name: sugarbeet\n variety_name: Sugarbeet_603\n crop_start_date: 2006-03-31\n crop_start_type: emergence\n crop_end_date: 2006-10-20\n crop_end_type: harvest\n max_duration: 300\n TimedEvents: null\n StateEvents:\n - event_signal: irrigate\n event_state: DVS\n zero_condition: rising\n name: Irrigation application table\n comment: All irrigation amounts in cm\n events_table:\n - 0.9: {amount: 10, efficiency: 0.7}\n - 1.5: {amount: 5, efficiency: 0.7}\n- 2006-11-20: null\n\"\"\"\nagromanagement = yaml.load(yaml_agro)",
"_____no_output_____"
]
],
[
[
"Again we run the model with all inputs but a changed agromanagement and plot the results",
"_____no_output_____"
]
],
[
[
"wofost2 = Wofost71_WLP_FD(parameterprovider, weatherdataprovider, agromanagement)\nwofost2.run_till_terminate()\noutput2 = wofost2.get_output()\ndf2 = pd.DataFrame(output2).set_index(\"day\")\nfig2, axes2 = plt.subplots(nrows=1, ncols=2, figsize=(16,8))\ndf2['LAI'].plot(ax=axes2[0], title=\"Leaf Area Index\")\ndf2['SM'].plot(ax=axes2[1], title=\"Root zone soil moisture\")\nfig2.autofmt_xdate()",
"_____no_output_____"
]
],
[
[
"## Connecting events to soil moisture levels\n\nThe logical approach is to connect irrigation events to stress levels that are experiences by the crop. In this case we connect the irrigation event to the state variables soil moisture (SM) and define the agromanagement like this:\n\n Version: 1.0\n AgroManagement:\n - 2006-01-01:\n CropCalendar:\n crop_name: sugarbeet\n variety_name: Sugarbeet_603\n crop_start_date: 2006-03-31\n crop_start_type: emergence\n crop_end_date: 2006-10-20\n crop_end_type: harvest\n max_duration: 300\n TimedEvents: null\n StateEvents:\n - event_signal: irrigate\n event_state: SM\n zero_condition: falling\n name: Irrigation application table\n comment: All irrigation amounts in cm\n events_table:\n - 0.2: {amount: 10, efficiency: 0.7}\n - 2006-11-20:\n \nNote that in this case the `zero_condition` is `falling` because we only want the event to trigger when the SM goes below the specified level (0.2). If we had set `zero_condition` to `either` it would trigger twice, the first time when the soil moisture gets exhausted and the second time because of the irrigation water added.",
"_____no_output_____"
]
],
[
[
"yaml_agro = \"\"\"\n- 2006-01-01:\n CropCalendar:\n crop_name: sugarbeet\n variety_name: Sugarbeet_603\n crop_start_date: 2006-03-31\n crop_start_type: emergence\n crop_end_date: 2006-10-20\n crop_end_type: harvest\n max_duration: 300\n TimedEvents: null\n StateEvents:\n - event_signal: irrigate\n event_state: SM\n zero_condition: falling\n name: Irrigation application table\n comment: All irrigation amounts in cm\n events_table:\n - 0.2: {amount: 10, efficiency: 0.7}\n- 2006-11-20: null\n\"\"\"\nagromanagement = yaml.load(yaml_agro)",
"_____no_output_____"
],
[
"wofost3 = Wofost71_WLP_FD(parameterprovider, weatherdataprovider, agromanagement)\nwofost3.run_till_terminate()\noutput3 = wofost3.get_output()\ndf3 = pd.DataFrame(output3).set_index(\"day\")\n\nfig3, axes3 = plt.subplots(nrows=1, ncols=2, figsize=(16,8))\ndf3['LAI'].plot(ax=axes3[0], title=\"Leaf Area Index\")\ndf3['SM'].plot(ax=axes3[1], title=\"Volumetric soil moisture\")\nfig3.autofmt_xdate()",
"_____no_output_____"
]
],
[
[
"Showing the differences in irrigation events\n============================================\n\nWe combine the `SM` column from the different data frames in a new dataframe and plot the results to see the effect of the differences in agromanagement.",
"_____no_output_____"
]
],
[
[
"df_all = pd.DataFrame({\"by_date\": df.SM, \n \"by_DVS\": df2.SM, \n \"by_SM\": df3.SM}, index=df.index)\nfig4, axes4 = plt.subplots(nrows=1, ncols=1, figsize=(14,12))\ndf_all.plot(ax=axes4, title=\"differences in irrigation approach.\")\naxes4.set_ylabel(\"irrigation amount [cm]\")\nfig4.autofmt_xdate()",
"_____no_output_____"
]
],
[
[
"Adjusting the sowing date with the AgroManager and making multiple runs\n==============================================\n\nThe most straightforward way of adjusting the sowing date is by editing the crop management definition in YAML format directly. Here we put a placeholder `{crop_start_date}` at the point where the crop start date is defined in the YAML format. We can then use string formatting operations to insert a new data and use `yaml.load` to load the definition in yaml directly. Note that we need double curly brackets (`{{` and `}}`) at the events table to avoid that python sees them as a placeholder.",
"_____no_output_____"
]
],
[
[
"agromanagement_yaml = \"\"\"\n- 2006-01-01:\n CropCalendar:\n crop_name: sugarbeet\n variety_name: Sugarbeet_603\n crop_start_date: {crop_start_date}\n crop_start_type: emergence\n crop_end_date: 2006-10-20\n crop_end_type: harvest\n max_duration: 300\n TimedEvents: null\n StateEvents:\n - event_signal: irrigate\n event_state: SM\n zero_condition: falling\n name: Irrigation application table\n comment: All irrigation amounts in cm\n events_table:\n - 0.2: {{amount: 10, efficiency: 0.7}}\n- 2006-11-20:\n\"\"\"",
"_____no_output_____"
]
],
[
[
"## The main loop for making several WOFOST runs",
"_____no_output_____"
]
],
[
[
"import datetime as dt\nsdate = dt.date(2006,3,1)\nstep = 10\n# Loop over six different start dates \nresults = []\nfor i in range(6):\n # get new start date\n csdate = sdate + dt.timedelta(days=i*step)\n # update agromanagement with new start date and load it with yaml.load\n tmp = agromanagement_yaml.format(crop_start_date=csdate)\n agromanagement = yaml.load(tmp)\n # run wofost and collect output\n wofost = Wofost71_WLP_FD(parameterprovider, weatherdataprovider, agromanagement)\n wofost.run_till_terminate()\n output = wofost.get_output()\n df = pd.DataFrame(output).set_index(\"day\")\n results.append(df)",
"_____no_output_____"
]
],
[
[
"## Plot the results for the different runs and variables",
"_____no_output_____"
]
],
[
[
"colors = ['k','r','g','b','m','y']\nfig5, axes5 = plt.subplots(nrows=6, ncols=2, figsize=(16,30))\nfor c, df in zip(colors, results):\n for key, axis in zip(df.columns, axes5.flatten()):\n df[key].plot(ax=axis, title=key, color=c)\nfig5.autofmt_xdate()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e77084a6a321362fc1a4fa6d44d907d41a1d1156 | 56,869 | ipynb | Jupyter Notebook | notebooks/shjlat_ec.ipynb | afahadabdullah/cmip6hack-emergentconstraints | 287044e7bfb76bc318856d9b25b9e5e2b52ed477 | [
"MIT"
] | 1 | 2019-11-21T02:36:59.000Z | 2019-11-21T02:36:59.000Z | notebooks/shjlat_ec.ipynb | afahadabdullah/cmip6hack-emergentconstraints | 287044e7bfb76bc318856d9b25b9e5e2b52ed477 | [
"MIT"
] | null | null | null | notebooks/shjlat_ec.ipynb | afahadabdullah/cmip6hack-emergentconstraints | 287044e7bfb76bc318856d9b25b9e5e2b52ed477 | [
"MIT"
] | 6 | 2019-10-15T20:17:21.000Z | 2019-10-18T22:20:12.000Z | 181.111465 | 47,256 | 0.897888 | [
[
[
"# Loading libraries\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport xarray as xr\nimport xesmf as xe\nimport dask\nimport season_util as su\nimport jetlatcalcs as jlat",
"you have successfully imported the jet latitude calculation subroutines\n"
],
[
"from ncar_jobqueue import NCARCluster\nfrom distributed import Client\ncluster = NCARCluster(project='P04010022')\ncluster.adapt(minimum_jobs=1, maximum_jobs=10)\nclient = Client(cluster)\ncluster",
"/ncar/usr/jupyterhub/envs/cmip6-201910/lib/python3.7/site-packages/distributed/dashboard/core.py:72: UserWarning: \nPort 8787 is already in use. \nPerhaps you already have a cluster running?\nHosting the diagnostics dashboard on a random port instead.\n warnings.warn(\"\\n\" + msg)\n"
],
[
"#historical simulations\nimport intake\nvar= [\"ua\"]\nmodel=['AWI-CM-1-MR','BCC-CSM2-MR','CAMS-CSM1-0','CanESM5','MIROC-ES2L','MIROC6','UKESM1-0-LL','MRI-ESM2-0']\ncol = intake.open_esm_datastore(\"/glade/collections/cmip/catalog/intake-esm-datastore/catalogs/glade-cmip6.json\")\ncat = col.search(activity_id=\"CMIP\",experiment_id=\"historical\", variable_id=var,\ntable_id=\"Amon\", grid_label=\"gn\", source_id=model )\ndset_dict_hist = cat.to_dataset_dict(cdf_kwargs={'chunks': {\"time\": 36}, \n 'decode_times': True})",
"--> The keys in the returned dictionary of datasets are constructed as follows:\n\t'activity_id.institution_id.source_id.experiment_id.table_id.grid_label'\n\n--> There will be 7 group(s)\n"
],
[
"#ssp simulations\ncol = intake.open_esm_datastore(\"/glade/collections/cmip/catalog/intake-esm-datastore/catalogs/glade-cmip6.json\")\ncat = col.search(activity_id=\"ScenarioMIP\",experiment_id=\"ssp370\", variable_id=var,\ntable_id=\"Amon\", grid_label=\"gn\", source_id=model )\ndset_dict_ssp370 = cat.to_dataset_dict(cdf_kwargs={'chunks': {\"time\": 36}, \n 'decode_times': True})",
"--> The keys in the returned dictionary of datasets are constructed as follows:\n\t'activity_id.institution_id.source_id.experiment_id.table_id.grid_label'\n\n--> There will be 7 group(s)\n"
],
[
"#calculate the mean over longitude, ensemble members and desired time period and pick out 700hPa\nhist_mean = {}\nfor key, ds in dset_dict_hist.items():\n hist_mean[key] = ds.sel(time = slice('1979-01-01','2006-01-01'), plev=70000.).mean(dim=[\"member_id\",\"lon\"])\n \nssp370_mean = {}\nfor key, ds in dset_dict_ssp370.items():\n ssp370_mean[key] = ds.sel(time = slice('2070-01-01','2099-01-01'), plev=70000.).mean(dim=[\"member_id\",\"lon\"])",
"_____no_output_____"
],
[
"hist_mean = dask.compute(hist_mean)\nssp370_mean = dask.compute(ssp370_mean)",
"_____no_output_____"
],
[
"# calculate JJA season average\nhist_jja = {}\nfor key, x in dset_dict_hist.items():\n hist_jja[key] = su.season_mean(hist_mean[0][key],\"ua\",season=\"JJA\")\n \nssp370_jja = {}\nfor key, x in dset_dict_ssp370.items():\n ssp370_jja[key] = su.season_mean(ssp370_mean[0][key],\"ua\",season=\"JJA\")",
"_____no_output_____"
],
[
"#calculate jet latitude\njlathist = {}\njspeedhist = {}\nfor key, x in dset_dict_hist.items():\n jlatv, jspeedv = jlat.calcjetlat( hist_jja[key], -80, -20)\n jlathist[key] = jlatv\n jspeedhist[key] = jspeedv\n \njlatssp370 = {}\njspeedssp370 = {}\nfor key, x in dset_dict_ssp370.items():\n jlatv, jspeedv = jlat.calcjetlat( ssp370_jja[key], -80, -20)\n jlatssp370[key] = jlatv\n jspeedssp370[key] = jspeedv",
"_____no_output_____"
],
[
"#read in cmip5 data\ncmip5path = \"../data/cmip5_jetlatitudes.nc\"\ncmip5 = xr.open_dataset(cmip5path)\n#calculate linear regression line\ncoefs = np.polyfit(cmip5.jlatpast, cmip5.jlatfuture - cmip5.jlatpast, 1)\nacmip5=coefs[1]\nbcmip5=coefs[0]\nxvalues = [i for i in range(-55,-35)]\nyvalues = [acmip5 + bcmip5*i for i in xvalues]",
"_____no_output_____"
],
[
"#predict cmip6 values from cmip5\njlathistvalues = list(jlathist.values())\nycmip6 = [acmip5 + bcmip5*i for i in jlathistvalues]\nycmip6",
"_____no_output_____"
],
[
"# plotting\njlathistvalues = list(jlathist.values())\njlatssp370values = list(jlatssp370.values())\njlatdif = [a - b for a, b in zip(jlatssp370values, jlathistvalues)]\njlatdifpredict = [acmip5 + bcmip5*i for i in jlathistvalues]\nfig = plt.figure(figsize=(12, 6),facecolor='w')\nax = fig.add_subplot(1, 2, 1)\n#ax.plot(jlathistvalues,jlatdif, color='red', label='CMIP6', marker='o', linestyle='none')\nax.plot(cmip5.jlatpast, cmip5.jlatfuture-cmip5.jlatpast, color='red', label='CMIP5', marker='o', linestyle='none')\nax.plot(xvalues,yvalues,color='green', label='CMIP5 Regression', linestyle='solid')\nax.plot(jlathistvalues,jlatdif, color='blue', label='CMIP6', marker='o', linestyle='none')\nax.set_xlabel('Past jet latitude ($\\phi_{o}$)', fontdict={'size':12});\nax.set_ylabel('Future - Past jet latitude ($\\Delta\\phi$)', fontdict={'size':12}); \nax.set_title('$\\Delta\\phi$ vs $\\phi_{o}$', fontdict={'size':14})\nax.legend(loc='upper right', fontsize=14)\n\nax2=fig.add_subplot(1,2,2)\nax2.plot(jlatdif, jlatdifpredict, color='red', label='CMIP5', marker='o', linestyle='none')\nax2.set_xlabel('$\\Delta\\phi$', fontdict={'size':12})\nax2.set_ylabel('Predicted $\\Delta\\phi$', fontdict={'size':12})\nax2.plot([-10,0],[-10,0], color='black', label='1:1', linestyle='solid')\nax2.set_title('$\\Delta\\phi$ prediction vs $\\Delta\\phi$', fontdict={'size':14})\n#fig = plt.figure(figsize=(6, 6),facecolor='w')\n#ax = fig.add_subplot(1, 1, 1)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7708b5ec6cfda4cb4abdbf026a7d76c6593d258 | 3,513 | ipynb | Jupyter Notebook | examples/notebooks/power-to-heat-water-tank.ipynb | martacki/PyPSA | 4fc9abcd32c5c7c8b22014661856885301a841bf | [
"MIT"
] | 594 | 2017-10-20T19:02:15.000Z | 2022-03-31T10:16:23.000Z | examples/notebooks/power-to-heat-water-tank.ipynb | martacki/PyPSA | 4fc9abcd32c5c7c8b22014661856885301a841bf | [
"MIT"
] | 271 | 2017-10-23T15:12:03.000Z | 2022-03-29T10:20:36.000Z | examples/notebooks/power-to-heat-water-tank.ipynb | martacki/PyPSA | 4fc9abcd32c5c7c8b22014661856885301a841bf | [
"MIT"
] | 286 | 2017-10-23T09:45:15.000Z | 2022-03-28T15:23:40.000Z | 24.914894 | 160 | 0.487333 | [
[
[
"# Wind Turbine combined with Heat Pump and Water Tank\n\nIn this example the heat demand is supplied by a wind turbine in combination with a heat pump and a water tank that stores hot water with a standing loss.",
"_____no_output_____"
]
],
[
[
"import pypsa\nimport pandas as pd\nfrom pyomo.environ import Constraint",
"_____no_output_____"
],
[
"network = pypsa.Network()\nnetwork.set_snapshots(pd.date_range(\"2016-01-01 00:00\",\"2016-01-01 03:00\", freq=\"H\"))\n\nnetwork.add(\"Bus\", \"0\", carrier=\"AC\")\nnetwork.add(\"Bus\", \"0 heat\", carrier=\"heat\")\n\nnetwork.add(\"Carrier\", \"wind\")\nnetwork.add(\"Carrier\", \"heat\")\n\nnetwork.add(\"Generator\",\n \"wind turbine\",\n bus=\"0\",\n carrier=\"wind\",\n p_nom_extendable=True,\n p_max_pu=[0.,0.2,0.7,0.4],\n capital_cost=500)\n\nnetwork.add(\"Load\",\n \"heat demand\",\n bus=\"0 heat\",\n p_set=20.)\n\n#NB: Heat pump has changing efficiency (properly the Coefficient of Performance, COP)\n#due to changing ambient temperature\nnetwork.add(\"Link\",\n \"heat pump\",\n bus0=\"0\",\n bus1=\"0 heat\",\n efficiency=[2.5,3.,3.2,3.],\n capital_cost=1000,\n p_nom_extendable=True)\n \nnetwork.add(\"Store\",\n \"water tank\",\n bus=\"0 heat\",\n e_cyclic=True,\n e_nom_extendable=True,\n standing_loss=0.01) ",
"_____no_output_____"
],
[
"network.lopf(network.snapshots)",
"_____no_output_____"
],
[
"pd.DataFrame({attr: network.stores_t[attr][\"water tank\"] for attr in [\"p\",\"e\"]})",
"_____no_output_____"
],
[
"pd.DataFrame({attr: network.links_t[attr][\"heat pump\"] for attr in [\"p0\",\"p1\"]})",
"_____no_output_____"
],
[
"network.stores.loc[[\"water tank\"]].T",
"_____no_output_____"
],
[
"network.generators.loc[[\"wind turbine\"]].T",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e77094fffd1197d90d7d1f27e9d19e26e02171b0 | 118,076 | ipynb | Jupyter Notebook | promotion/grmpy_tutorial_notebook/tutorial_semipar_notebook.ipynb | OpenSourceEconomics/grmpy | 13a262fb615c79829eb4869cbb6693c9c51fb101 | [
"MIT"
] | 18 | 2018-04-10T01:08:22.000Z | 2022-02-23T02:37:24.000Z | promotion/grmpy_tutorial_notebook/tutorial_semipar_notebook.ipynb | grmToolbox/grmpy | 13a262fb615c79829eb4869cbb6693c9c51fb101 | [
"MIT"
] | 127 | 2017-08-02T13:29:26.000Z | 2018-03-27T19:42:07.000Z | promotion/grmpy_tutorial_notebook/tutorial_semipar_notebook.ipynb | OpenSourceEconomics/grmpy | 13a262fb615c79829eb4869cbb6693c9c51fb101 | [
"MIT"
] | 13 | 2018-04-28T09:46:22.000Z | 2020-11-06T09:32:27.000Z | 225.76673 | 82,068 | 0.885819 | [
[
[
"# Replication of Carneiro, Heckman, & Vytlacil's (2011) *Local Instrumental Variables* approach",
"_____no_output_____"
],
[
"In this notebook, I reproduce the semiparametric results from\n\n> Carneiro, P., Heckman, J. J., & Vytlacil, E. J. (2011). [Estimating marginal returns to education.](https://pubs.aeaweb.org/doi/pdfplus/10.1257/aer.101.6.2754) *American Economic Review, 101*(6), 2754-81. \n\nThe authors analyze the returns to college for white males born between 1957 and 1963 using data from the National Longitudinal Survey of Youth 1979. The authors provide some [replication material]((https://pubs.aeaweb.org/doi/pdfplus/10.1257/aer.101.6.2754)) on their website but do not include geographic identifiers. Therefore, we make use of a mock data merging background characteristics and local data randomly. \n\n\nIn a future update, the semiparametric estimation method will be included in the open-source package *grmpy* for the simulation and estimation of the generalized Roy model in Python. Currently, *grmpy* is limited to the estimation of a parametric normal version of the generalized Roy model. <br> For more, see the [online documentation](https://grmpy.readthedocs.io/en/develop/). ",
"_____no_output_____"
],
[
"## 0) Imports",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\nfrom tutorial_semipar_auxiliary import plot_semipar_mte\n\nfrom grmpy.estimate.estimate import fit\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
]
],
[
[
"## 1) The LIV Framework",
"_____no_output_____"
],
[
"The method of Local Instrumental Variables (LIV) is based on the generalized Roy model, which is characterized by the following equations:",
"_____no_output_____"
],
[
"\\begin{align*}\n &\\textbf{Potential Outcomes} & & \\textbf{Choice} &\\\\\n & Y_1 = \\beta_1 X + U_{1} & & I = Z \\gamma - V &\\\\\n & Y_0 = \\beta_0 X + U_{0} & & D_i = \\left\\{\n\\begin{array}{ll}\n1 & if \\ I > 0 \\\\\n0 & if \\ I \\leq 0\\\\\n\\end{array}\n\\right. \n&&&&\\\\\n& \\textbf{Observed Outcome} &&&\\\\\n& Y = D Y_1 + (1-D) Y_0 &&&\n\\end{align*}",
"_____no_output_____"
],
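[
"A small simulation sketch of these equations (hypothetical parameter values, purely to make the notation concrete):\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn = 1000\nx = rng.normal(size=n)      # observed covariate\nz = rng.normal(size=n)      # instrument\n# correlated unobservables (U_1, U_0, V); the covariance matrix is made up\nu1, u0, v = rng.multivariate_normal([0, 0, 0], [[1, 0.5, 0.3], [0.5, 1, -0.3], [0.3, -0.3, 1]], size=n).T\n\ny1 = 0.8 * x + u1                   # potential outcome with treatment\ny0 = 0.5 * x + u0                   # potential outcome without treatment\nd = (0.7 * z - v > 0).astype(int)   # choice: D = 1 if I = Z*gamma - V > 0\ny = d * y1 + (1 - d) * y0           # observed outcome\n```",
"_____no_output_____"
],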
[
"We work with the linear-in-the-parameters version of the generalized Roy model:\n\n\\begin{align}\nE(Y|X = \\overline{x}, P(Z) = p) = \\overline{x} \\beta_0 + p \\overline{x} (\\beta_1 - \\beta_0) + K(p),\n\\end{align}\n\n\nwhere $K(p) = E(U_1 - U_0 | D = 1, P(Z) = p)$ is a nonlinear function of $p$ that captures heterogeneity along the unobservable resistance to treatment $u_D$. ",
"_____no_output_____"
],
[
"In addition, assume that $(X, Z)$ is independent of $\\{U_1, U_0, V\\}$. Then, the MTE is\n\n1) additively separable in $X$ and $U_D$, which means that the shape of the MTE is independent of $X$, and\n\n2) identified over the common support of $P(Z)$, unconditional on $X$. \n\n\nThe common support, $P(Z)$, plays a crucial role for the identification of the MTE. \nIt denotes the probability of going to university ($D=1$). Common support is defined as the intersection of the support of $P(Z)$ given $D = 1$ and the support of $P(Z)$ given $D = 0$. i.e., those evaluations of $P(Z)$ for which we obtain positive frequencies in both subsamples. We will plot it below. The larger the common support, the larger the region over which the MTE is identified.",
"_____no_output_____"
],
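[
"As a crude illustration of that definition (a sketch with hypothetical arrays `ps` for the estimated propensity scores and `d` for the treatment indicator; grmpy computes and reports this region itself during estimation):\n\n```python\nimport numpy as np\n\n# ps, d: hypothetical numpy arrays of propensity scores and 0/1 treatment indicators\nps_treated, ps_untreated = ps[d == 1], ps[d == 0]\nlower = max(ps_treated.min(), ps_untreated.min())\nupper = min(ps_treated.max(), ps_untreated.max())\nprint(f'common support roughly between {lower:.3f} and {upper:.3f}')\n```",
"_____no_output_____"
],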
[
"The LIV estimator, $\\Delta^{LIV}$, is derived as follows (Heckman and Vytlacil [2001](https://www.aeaweb.org/articles?id=10.1257/aer.91.2.107), [2005](https://www.jstor.org/stable/3598865?seq=1#page_scan_tab_contents)):\n\n\\begin{equation}\n\\begin{split}\n\\Delta^{LIV} (\\overline{x}, u_D) &= \\frac{\\partial E(Y|X = \\overline{x}, P(Z) = p)}{\\partial p} \\bigg\\rvert_{p = u_D} \\\\\n& \\\\\n&= \\overline{x}(\\beta_1 - \\beta_0) + E(U_1 - U_0 | U_D = u_D) \\\\\n&\\\\\n& = \\underbrace{\\overline{x}(\\beta_1 - \\beta_0)}_{\\substack{observable \\\\ component}} + \\underbrace{\\frac{\\partial K}{\\partial p} \\bigg\\rvert_{p = u_D}}_{\\substack{k(p): \\ unobservable \\\\ component}} = MTE(\\overline{x}, u_D)\n\\end{split}\n%\\frac{[E(U_1 - U_0 | U_D \\leq p] p}{\\partial p} \\bigg\\rvert_{p = u_D}\n%E(U_1 - U_0 | U_D = u_D)\n\\end{equation}\n\n\nSince we do not make any assumption about the functional form of the unobservables, we estimate $k(p)$ non-parametrically. In particualr, $k(p)$ is the first derivative of a locally quadratic kernel regression.",
"_____no_output_____"
],
[
"## 3) The Initialization File",
"_____no_output_____"
],
[
"For the semiparametric estimation, we need information on the following sections:\n\n* __ESTIMATION__: Specify the dependent (wage) and indicator variable (treatment dummy) of the input data frame.\nFor the estimation of the propensity score $P(Z)$, we choose a probability model, here logit. Furthermore, we select 30 bins to determine the common support in the treated and untreated subsamples. For the locally quadratic regression, we follow the specification of [Carneiro et al. (2011)](https://pubs.aeaweb.org/doi/pdfplus/10.1257/aer.101.6.2754) and choose a bandwidth of 0.322. The respective gridsize for the locally quadratic regression is set to 500. [Fan and Marron (1994)](https://www.tandfonline.com/doi/abs/10.1080/10618600.1994.10474629) find that a gridsize of 400 is a good default for graphical analysis. Since the data set is large (1785 observations) and the kernel regression function has a very low runtime, we increase the gridsize to 500. Setting it to the default or increasing it even more does not affect the final MTE. <br>\nNote that the MTE identified by LIV consists of two components: $\\overline{x}(\\beta_1 - \\beta_0)$ (which does not depend on $P(Z) = p$) and $k(p)$ (which does depend on $p$). The latter is estimated nonparametrically. The key \"p_range\" in the initialization file specifies the interval over which $k(p)$ is estimated. After the data outside the overlapping support are trimmed, the locally quadratic kernel estimator uses the remaining data to predict $k(p)$ over the entire \"p_range\" specified by the user. If \"p_range\" is larger than the common support, *grmpy* extrapolates the values for the MTE outside this region. Technically speaking, interpretations of the MTE are only valid within the common support. Here, we set \"p_range\" to [0.005, 0.995]. <br>\nThe other parameters in this section are set by default and, normally, do not need to be changed.\n\n\n* __TREATED, UNTREATED, CHOICE__: In this section, the variables of the outcome equations (treated, untreated) and the college decision (choice) are specified.\n\n\n* __DIST__: The distribution of the unobservables is not of relevance in the semiparametric apporach and can be ignored.",
"_____no_output_____"
]
],
[
[
"%%file files/tutorial_semipar.yml\n---\nESTIMATION:\n file: data/aer-replication-mock.pkl\n dependent: wage\n indicator: state\n semipar: True\n show_output: True\n logit: True\n nbins: 30\n bandwidth: 0.322\n gridsize: 500\n trim_support: True\n reestimate_p: False\n rbandwidth: 0.05\n derivative: 1\n degree: 2\n ps_range: [0.005, 0.995]\nTREATED:\n order:\n - exp\n - expsq\n - lwage5\n - lurate\n - cafqt\n - cafqtsq\n - mhgc\n - mhgcsq\n - numsibs\n - numsibssq\n - urban14\n - lavlocwage17\n - lavlocwage17sq\n - avurate\n - avuratesq\n - d57\n - d58\n - d59\n - d60\n - d61\n - d62\n - d63\nUNTREATED:\n order:\n - exp\n - expsq\n - lwage5\n - lurate\n - cafqt\n - cafqtsq\n - mhgc\n - mhgcsq\n - numsibs\n - numsibssq\n - urban14\n - lavlocwage17\n - lavlocwage17sq\n - avurate\n - avuratesq\n - d57\n - d58\n - d59\n - d60\n - d61\n - d62\n - d63\nCHOICE:\n params:\n - 1.0\n order:\n - const\n - cafqt\n - cafqtsq\n - mhgc\n - mhgcsq\n - numsibs\n - numsibssq\n - urban14\n - lavlocwage17\n - lavlocwage17sq\n - avurate\n - avuratesq\n - d57\n - d58\n - d59\n - d60\n - d61\n - d62\n - d63\n - lwage5_17numsibs\n - lwage5_17mhgc\n - lwage5_17cafqt\n - lwage5_17\n - lurate_17\n - lurate_17numsibs\n - lurate_17mhgc\n - lurate_17cafqt\n - tuit4c\n - tuit4cnumsibs\n - tuit4cmhgc\n - tuit4ccafqt\n - pub4\n - pub4numsibs\n - pub4mhgc\n - pub4cafqt\nDIST:\n params:\n - 0.1\n - 0.0\n - 0.0\n - 0.1\n - 0.0\n - 1.0",
"Overwriting files/tutorial_semipar.yml\n"
]
],
[
[
"Note that I do not include a constant in the __TREATED, UNTREATED__ section. The reason for this is that in the semiparametric setup, $\\beta_1$ and $\\beta_0$ are determined by running a Double Residual Regression without an intercept:\n\t$$ e_Y =e_X \\beta_0 \\ + \\ e_{X \\ \\times \\ p} (\\beta_1 - \\beta_0) \\ + \\ \\epsilon $$\n \nwhere $e_X$, $e_{X \\ \\times \\ p}$, and $e_Y$ are the residuals of a local linear regression of $X$, $X$ x $p$, and $Y$ on $\\widehat{P}(Z)$.",
"_____no_output_____"
],
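[
"A sketch of that no-intercept step (hypothetical residual arrays `e_y`, `e_x`, `e_xp`, each obtained beforehand from local linear regressions on the estimated propensity score; grmpy performs this step internally):\n\n```python\nimport numpy as np\nimport statsmodels.api as sm\n\n# e_x and e_xp are (n, k) arrays of residualised X and X*p, e_y is the residualised outcome\nexog = np.column_stack((e_x, e_xp))\nres = sm.OLS(e_y, exog).fit()                    # sm.OLS adds no constant unless asked to\nbeta0 = res.params[:e_x.shape[1]]                # coefficients on e_X\nbeta1_minus_beta0 = res.params[e_x.shape[1]:]    # coefficients on e_{X x p}\n```",
"_____no_output_____"
],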
[
"We now proceed to our replication.",
"_____no_output_____"
],
[
"## 3) Estimation",
"_____no_output_____"
],
[
"Conduct the estimation based on the initialization file.",
"_____no_output_____"
]
],
[
[
"rslt = fit('files/tutorial_semipar.yml', semipar=True)",
" Logit Regression Results \n==============================================================================\nDep. Variable: y No. Observations: 1747\nModel: Logit Df Residuals: 1717\nMethod: MLE Df Model: 29\nDate: Mon, 12 Oct 2020 Pseudo R-squ.: 0.2858\nTime: 21:29:55 Log-Likelihood: -864.74\nconverged: True LL-Null: -1210.8\nCovariance Type: nonrobust LLR p-value: 4.178e-127\n====================================================================================\n coef std err z P>|z| [0.025 0.975]\n------------------------------------------------------------------------------------\nconst 288.3699 151.012 1.910 0.056 -7.609 584.349\ncafqt -6.4256 5.019 -1.280 0.200 -16.263 3.411\ncafqtsq 0.3348 0.072 4.665 0.000 0.194 0.476\nmhgc -0.2733 2.146 -0.127 0.899 -4.480 3.933\nmhgcsq 0.0180 0.007 2.624 0.009 0.005 0.031\nnumsibs -0.4059 2.403 -0.169 0.866 -5.116 4.304\nnumsibssq 0.0012 0.011 0.104 0.917 -0.021 0.024\nurban14 0.3387 0.140 2.418 0.016 0.064 0.613\nlavlocwage17 -54.1077 29.111 -1.859 0.063 -111.164 2.948\nlavlocwage17sq 2.6770 1.420 1.885 0.059 -0.107 5.461\navurate -0.0936 0.633 -0.148 0.882 -1.334 1.146\navuratesq 0.0139 0.049 0.286 0.775 -0.081 0.109\nd57 0.3166 0.251 1.262 0.207 -0.175 0.808\nd58 0.3065 0.253 1.214 0.225 -0.189 0.802\nd59 -0.2110 0.251 -0.840 0.401 -0.703 0.281\nd60 0.0341 0.237 0.144 0.886 -0.430 0.498\nd61 0.0863 0.238 0.362 0.717 -0.381 0.553\nd62 0.2900 0.224 1.293 0.196 -0.150 0.730\nd63 -0.0237 0.239 -0.099 0.921 -0.492 0.444\nlwage5_17numsibs 0.0170 0.237 0.072 0.943 -0.448 0.482\nlwage5_17mhgc 0.0050 0.214 0.023 0.981 -0.414 0.424\nlwage5_17cafqt 0.7582 0.498 1.521 0.128 -0.219 1.735\nlwage5_17 -1.5203 2.738 -0.555 0.579 -6.887 3.846\nlurate_17 -0.1394 0.248 -0.563 0.573 -0.625 0.346\nlurate_17numsibs -0.0028 0.020 -0.140 0.888 -0.042 0.037\nlurate_17mhgc 0.0074 0.019 0.386 0.700 -0.030 0.045\nlurate_17cafqt -0.0174 0.044 -0.394 0.693 -0.104 0.069\ntuit4c 0.0114 0.060 0.191 0.849 -0.105 0.128\ntuit4cnumsibs 0.0039 0.005 0.806 0.420 -0.006 0.013\ntuit4cmhgc -0.0008 0.005 -0.167 0.867 -0.010 0.008\ntuit4ccafqt -0.0041 0.010 -0.398 0.690 -0.024 0.016\npub4 0.4641 0.873 0.532 0.595 -1.247 2.175\npub4numsibs 0.0451 0.074 0.611 0.541 -0.100 0.190\npub4mhgc -0.0408 0.069 -0.594 0.553 -0.176 0.094\npub4cafqt -0.0164 0.164 -0.100 0.920 -0.338 0.305\n====================================================================================\n\n Common support lies beteen:\n\n 0.05361584898356705 and\n 0.9670786072336018\n"
]
],
[
[
"The rslt dictionary contains information on the estimated parameters and the final MTE. ",
"_____no_output_____"
]
],
[
[
"list(rslt)",
"_____no_output_____"
]
],
[
[
"Before plotting the MTE, let's see what else we can learn.\nFor instance, we can account for the variation in $X$. <br>\nNote that we divide the MTE by 4 to investigate the effect of one additional year of college education.",
"_____no_output_____"
]
],
[
[
"np.min(rslt['mte_min']) / 4, np.max(rslt['mte_max']) / 4",
"_____no_output_____"
]
],
[
[
"Next we plot the MTE based on the estimation results. As shown in the figure below, the replicated MTE gets very close to the original, but its 90 percent confidence bands are wider. This is due to the use of a mock data set which merges basic and local variables randomly. The bootsrap method, which is used to estimate the confidence bands, is sensitive to the discrepancies in the data.",
"_____no_output_____"
]
],
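As a side note, the percentile bootstrap that typically underlies such confidence bands can be sketched in a few lines. The snippet below uses purely synthetic numbers and a trivial statistic (the mean) only to illustrate the resample-and-recompute idea; it is not what `plot_semipar_mte` does internally.

```python
import numpy as np

rng = np.random.default_rng(123)
sample = rng.normal(loc=0.2, scale=1.0, size=300)       # synthetic stand-in data

n_bootstraps = 250
boot_stats = np.empty(n_bootstraps)
for b in range(n_bootstraps):
    resample = rng.choice(sample, size=sample.size, replace=True)  # draw rows with replacement
    boot_stats[b] = resample.mean()                                # recompute the statistic

lower, upper = np.percentile(boot_stats, [5, 95])        # 90 percent confidence band
print(lower, upper)
```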
[
[
"mte, quantiles = plot_semipar_mte(rslt, 'files/tutorial_semipar.yml', nbootstraps=250)",
"_____no_output_____"
]
],
[
[
"People with the highest returns to education (those who have low unobserved resistance $u_D$\n) are more likely to go to college. Note that the returns vary considerably with $u_D$\n. Low $u_D$ students have returns of up to 40% per year of college, whereas high $u_D$\n people, who would loose from attending college, have returns of approximately -18%.\n \n \nThe magnitude of total heterogeneity is probably even higher, as the MTE depicts the average gain of \ncollege attendance at the mean values of X, i.e. $\\bar{x} (\\beta_1 - \\beta_0)$. \nAccounting for variation in $X$, we observe returns as high as 64% and as low as -57%.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7709a565d671a3e69dc0ed7f95a05233955bb0e | 201,085 | ipynb | Jupyter Notebook | notebooks/logistic_regression/multivariate_logistic_regression_demo.ipynb | pugnator-12/homemade-machine-learning | 9eb673b005f731ec1388ea03914e333453a8ff5d | [
"MIT"
] | null | null | null | notebooks/logistic_regression/multivariate_logistic_regression_demo.ipynb | pugnator-12/homemade-machine-learning | 9eb673b005f731ec1388ea03914e333453a8ff5d | [
"MIT"
] | null | null | null | notebooks/logistic_regression/multivariate_logistic_regression_demo.ipynb | pugnator-12/homemade-machine-learning | 9eb673b005f731ec1388ea03914e333453a8ff5d | [
"MIT"
] | null | null | null | 181.977376 | 79,848 | 0.858851 | [
[
[
"# Multivariate Logistic Regression Demo\n\n_Source: 🤖[Homemade Machine Learning](https://github.com/trekhleb/homemade-machine-learning) repository_\n\n> ☝Before moving on with this demo you might want to take a look at:\n> - 📗[Math behind the Logistic Regression](https://github.com/trekhleb/homemade-machine-learning/tree/master/homemade/logistic_regression)\n> - ⚙️[Logistic Regression Source Code](https://github.com/trekhleb/homemade-machine-learning/blob/master/homemade/logistic_regression/logistic_regression.py)\n\n**Logistic regression** is the appropriate regression analysis to conduct when the dependent variable is dichotomous (binary). Like all regression analyses, the logistic regression is a predictive analysis. Logistic regression is used to describe data and to explain the relationship between one dependent binary variable and one or more nominal, ordinal, interval or ratio-level independent variables.\n\nLogistic Regression is used when the dependent variable (target) is categorical.\n\nFor example:\n\n- To predict whether an email is spam (`1`) or (`0`).\n- Whether online transaction is fraudulent (`1`) or not (`0`).\n- Whether the tumor is malignant (`1`) or not (`0`).\n\n> **Demo Project:** In this example we will train handwritten digits (0-9) classifier.",
"_____no_output_____"
]
],
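As a quick illustration of the binary case described above, the sketch below applies the logistic (sigmoid) hypothesis to a single toy example. The feature values, parameters, and the 0.5 threshold are arbitrary assumptions for illustration, not part of the homemade library.

```python
import numpy as np

def sigmoid(z):
    """Map any real-valued score to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_binary(features, thetas, threshold=0.5):
    """Return 1 (e.g. 'spam') when the predicted probability exceeds the threshold."""
    probability = sigmoid(features @ thetas)
    return (probability >= threshold).astype(int), probability

x = np.array([[1.0, 2.5, -0.3]])     # one example, with a bias term of 1.0 up front
theta = np.array([0.2, 1.1, -0.7])   # toy parameters
label, probability = predict_binary(x, theta)
print(label, probability)
```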
[
[
"# To make debugging of logistic_regression module easier we enable imported modules autoreloading feature.\n# By doing this you may change the code of logistic_regression library and all these changes will be available here.\n%load_ext autoreload\n%autoreload 2\n\n# Add project root folder to module loading paths.\nimport sys\nsys.path.append('../..')",
"_____no_output_____"
]
],
[
[
"### Import Dependencies\n\n- [pandas](https://pandas.pydata.org/) - library that we will use for loading and displaying the data in a table\n- [numpy](http://www.numpy.org/) - library that we will use for linear algebra operations\n- [matplotlib](https://matplotlib.org/) - library that we will use for plotting the data\n- [math](https://docs.python.org/3/library/math.html) - math library that we will use to calculate sqaure roots etc.\n- [logistic_regression](https://github.com/trekhleb/homemade-machine-learning/blob/master/homemade/logistic_regression/logistic_regression.py) - custom implementation of logistic regression",
"_____no_output_____"
]
],
[
[
"# Import 3rd party dependencies.\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport math\n\n# Import custom logistic regression implementation.\nfrom homemade.logistic_regression import LogisticRegression",
"_____no_output_____"
]
],
[
[
"### Load the Data\n\nIn this demo we will be using a sample of [MNIST dataset in a CSV format](https://www.kaggle.com/oddrationale/mnist-in-csv/home). Instead of using full dataset with 60000 training examples we will use cut dataset of just 10000 examples that we will also split into training and testing sets.\n\nEach row in the dataset consists of 785 values: the first value is the label (a number from 0 to 9) and the remaining 784 values (28x28 pixels image) are the pixel values (a number from 0 to 255).",
"_____no_output_____"
]
],
[
[
"# Load the data.\ndata = pd.read_csv('../../data/mnist-demo.csv')\n\n# Print the data table.\ndata.head(10)",
"_____no_output_____"
]
],
[
[
"### Plot the Data\n\nLet's peek first 25 rows of the dataset and display them as an images to have an example of digits we will be working with.",
"_____no_output_____"
]
],
[
[
"# How many numbers to display.\nnumbers_to_display = 25\n\n# Calculate the number of cells that will hold all the numbers.\nnum_cells = math.ceil(math.sqrt(numbers_to_display))\n\n# Make the plot a little bit bigger than default one.\nplt.figure(figsize=(10, 10))\n\n# Go through the first numbers in a training set and plot them.\nfor plot_index in range(numbers_to_display):\n # Extrace digit data.\n digit = data[plot_index:plot_index + 1].values\n digit_label = digit[0][0]\n digit_pixels = digit[0][1:]\n\n # Calculate image size (remember that each picture has square proportions).\n image_size = int(math.sqrt(digit_pixels.shape[0]))\n \n # Convert image vector into the matrix of pixels.\n frame = digit_pixels.reshape((image_size, image_size))\n \n # Plot the number matrix.\n plt.subplot(num_cells, num_cells, plot_index + 1)\n plt.imshow(frame, cmap='Greys')\n plt.title(digit_label)\n plt.tick_params(axis='both', which='both', bottom=False, left=False, labelbottom=False, labelleft=False)\n\n# Plot all subplots.\nplt.subplots_adjust(hspace=0.5, wspace=0.5)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Split the Data Into Training and Test Sets\n\nIn this step we will split our dataset into _training_ and _testing_ subsets (in proportion 80/20%).\n\nTraining data set will be used for training of our model. Testing dataset will be used for validating of the model. All data from testing dataset will be new to model and we may check how accurate are model predictions.",
"_____no_output_____"
]
],
[
[
"# Split data set on training and test sets with proportions 80/20.\n# Function sample() returns a random sample of items.\npd_train_data = data.sample(frac=0.8)\npd_test_data = data.drop(pd_train_data.index)\n\n# Convert training and testing data from Pandas to NumPy format.\ntrain_data = pd_train_data.values\ntest_data = pd_test_data.values\n\n# Extract training/test labels and features.\nnum_training_examples = 6000\nx_train = train_data[:num_training_examples, 1:]\ny_train = train_data[:num_training_examples, [0]]\n\nx_test = test_data[:, 1:]\ny_test = test_data[:, [0]]",
"_____no_output_____"
]
],
[
[
"### Init and Train Logistic Regression Model\n\n> ☝🏻This is the place where you might want to play with model configuration.\n\n- `polynomial_degree` - this parameter will allow you to add additional polynomial features of certain degree. More features - more curved the line will be.\n- `max_iterations` - this is the maximum number of iterations that gradient descent algorithm will use to find the minimum of a cost function. Low numbers may prevent gradient descent from reaching the minimum. High numbers will make the algorithm work longer without improving its accuracy.\n- `regularization_param` - parameter that will fight overfitting. The higher the parameter, the simplier is the model will be.\n- `polynomial_degree` - the degree of additional polynomial features (`x1^2 * x2, x1^2 * x2^2, ...`). This will allow you to curve the predictions.\n- `sinusoid_degree` - the degree of sinusoid parameter multipliers of additional features (`sin(x), sin(2*x), ...`). This will allow you to curve the predictions by adding sinusoidal component to the prediction curve.\n- `normalize_data` - boolean flag that indicates whether data normalization is needed or not.",
"_____no_output_____"
]
],
[
[
"# Set up linear regression parameters.\nmax_iterations = 10000 # Max number of gradient descent iterations.\nregularization_param = 10 # Helps to fight model overfitting.\npolynomial_degree = 0 # The degree of additional polynomial features.\nsinusoid_degree = 0 # The degree of sinusoid parameter multipliers of additional features.\nnormalize_data = True # Whether we need to normalize data to make it more uniform or not. \n\n# Init logistic regression instance.\nlogistic_regression = LogisticRegression(x_train, y_train, polynomial_degree, sinusoid_degree, normalize_data)\n\n# Train logistic regression.\n(thetas, costs) = logistic_regression.train(regularization_param, max_iterations)",
"_____no_output_____"
]
],
[
[
"### Print Training Results\n\nLet's see how model parameters (thetas) look like. For each digit class (from 0 to 9) we've just trained a set of 784 parameters (one theta for each image pixel). These parameters represents the importance of every pixel for specific digit recognition. ",
"_____no_output_____"
]
],
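The sketch below shows how one-vs-all classification would turn such per-class thetas into a prediction: score every class with the sigmoid and pick the most probable one. The random inputs and array shapes are assumptions for illustration; the library's own `predict()` may differ in detail.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def one_vs_all_predict(features, all_thetas):
    """features: (m, n) including a bias column, all_thetas: (num_classes, n)."""
    class_probabilities = sigmoid(features @ all_thetas.T)   # (m, num_classes)
    return np.argmax(class_probabilities, axis=1)            # most probable digit per row

rng = np.random.default_rng(1)
demo_features = np.hstack([np.ones((3, 1)), rng.normal(size=(3, 784))])  # 3 fake images
demo_thetas = rng.normal(size=(10, 785))                                  # 10 digit classes
print(one_vs_all_predict(demo_features, demo_thetas))
```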
[
[
"# Print thetas table.\npd.DataFrame(thetas)",
"_____no_output_____"
],
[
"# How many numbers to display.\nnumbers_to_display = 9\n\n# Calculate the number of cells that will hold all the numbers.\nnum_cells = math.ceil(math.sqrt(numbers_to_display))\n\n# Make the plot a little bit bigger than default one.\nplt.figure(figsize=(10, 10))\n\n# Go through the thetas and print them.\nfor plot_index in range(numbers_to_display):\n # Extrace digit data.\n digit_pixels = thetas[plot_index][1:]\n\n # Calculate image size (remember that each picture has square proportions).\n image_size = int(math.sqrt(digit_pixels.shape[0]))\n \n # Convert image vector into the matrix of pixels.\n frame = digit_pixels.reshape((image_size, image_size))\n \n # Plot the number matrix.\n plt.subplot(num_cells, num_cells, plot_index + 1)\n plt.imshow(frame, cmap='Greys')\n plt.title(plot_index)\n plt.tick_params(axis='both', which='both', bottom=False, left=False, labelbottom=False, labelleft=False)\n\n# Plot all subplots.\nplt.subplots_adjust(hspace=0.5, wspace=0.5)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Analyze Gradient Descent Progress\n\nThe plot below illustrates how the cost function value changes over each iteration. You should see it decreasing. \n\nIn case if cost function value increases it may mean that gradient descent missed the cost function minimum and with each step it goes further away from it.\n\nFrom this plot you may also get an understanding of how many iterations you need to get an optimal value of the cost function.",
"_____no_output_____"
]
],
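The same idea can be checked programmatically. The sketch below reuses the `costs` list and `logistic_regression` instance produced by the training cell above and flags any class whose cost history is not monotonically decreasing:

```python
import numpy as np

for label, history in zip(logistic_regression.unique_labels, costs):
    increases = np.sum(np.diff(history) > 0)                    # steps where the cost went up
    status = 'monotonically decreasing' if increases == 0 else '{} increasing steps'.format(increases)
    print('class {}: {}'.format(label, status))
```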
[
[
"# Draw gradient descent progress for each label.\nlabels = logistic_regression.unique_labels\nfor index, label in enumerate(labels):\n plt.plot(range(len(costs[index])), costs[index], label=labels[index])\n\nplt.xlabel('Gradient Steps')\nplt.ylabel('Cost')\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Calculate Model Training Precision\n\nCalculate how many of training and test examples have been classified correctly. Normally we need test precission to be as high as possible. In case if training precision is high and test precission is low it may mean that our model is overfitted (it works really well with the training data set but it is not good at classifying new unknown data from the test dataset). In this case you may want to play with `regularization_param` parameter to fighth the overfitting.",
"_____no_output_____"
]
],
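If the gap between training and test precision turns out to be large, one way to act on the advice above is a small sweep over `regularization_param`. This is only a sketch: it reuses the variables and the `LogisticRegression` API from the cells above, and the parameter grid and reduced iteration count are arbitrary assumptions chosen to keep the sweep cheap.

```python
import numpy as np

for reg_param in (0, 1, 10, 100):
    candidate = LogisticRegression(x_train, y_train, polynomial_degree, sinusoid_degree, normalize_data)
    candidate.train(reg_param, 1000)                             # fewer iterations than the main run
    train_acc = np.mean(candidate.predict(x_train) == y_train) * 100
    test_acc = np.mean(candidate.predict(x_test) == y_test) * 100
    print('lambda={:>3}: train {:5.2f}% / test {:5.2f}%'.format(reg_param, train_acc, test_acc))
```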
[
[
"# Make training set predictions.\ny_train_predictions = logistic_regression.predict(x_train)\ny_test_predictions = logistic_regression.predict(x_test)\n\n# Check what percentage of them are actually correct.\ntrain_precision = np.sum(y_train_predictions == y_train) / y_train.shape[0] * 100\ntest_precision = np.sum(y_test_predictions == y_test) / y_test.shape[0] * 100\n\nprint('Training Precision: {:5.4f}%'.format(train_precision))\nprint('Test Precision: {:5.4f}%'.format(test_precision))",
"Training Precision: 96.6833%\nTest Precision: 90.4500%\n"
]
],
[
[
"### Plot Test Dataset Predictions\n\nIn order to illustrate how our model classifies unknown examples let's plot first 64 predictions for testing dataset. All green digits on the plot below have been recognized corrctly but all the red digits have not been recognized correctly by our classifier. On top of each digit image you may see the class (the number) that has been recognized on the image.",
"_____no_output_____"
]
],
[
[
"# How many numbers to display.\nnumbers_to_display = 64\n\n# Calculate the number of cells that will hold all the numbers.\nnum_cells = math.ceil(math.sqrt(numbers_to_display))\n\n# Make the plot a little bit bigger than default one.\nplt.figure(figsize=(15, 15))\n\n# Go through the first numbers in a test set and plot them.\nfor plot_index in range(numbers_to_display):\n # Extrace digit data.\n digit_label = y_test[plot_index, 0]\n digit_pixels = x_test[plot_index, :]\n \n # Predicted label.\n predicted_label = y_test_predictions[plot_index][0]\n\n # Calculate image size (remember that each picture has square proportions).\n image_size = int(math.sqrt(digit_pixels.shape[0]))\n \n # Convert image vector into the matrix of pixels.\n frame = digit_pixels.reshape((image_size, image_size))\n \n # Plot the number matrix.\n color_map = 'Greens' if predicted_label == digit_label else 'Reds'\n plt.subplot(num_cells, num_cells, plot_index + 1)\n plt.imshow(frame, cmap=color_map)\n plt.title(predicted_label)\n plt.tick_params(axis='both', which='both', bottom=False, left=False, labelbottom=False, labelleft=False)\n\n# Plot all subplots.\nplt.subplots_adjust(hspace=0.5, wspace=0.5)\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e770a626a1cc75bb3db048c1cc00cda24018ed89 | 38,464 | ipynb | Jupyter Notebook | Which intersection has the highest number of accidents.ipynb | danpeczek/france-accidents | 72f5ba8b85254ca213157fc61714cc6994ea0d52 | [
"CC0-1.0"
] | 2 | 2020-02-27T09:53:32.000Z | 2020-04-08T12:35:23.000Z | Which intersection has the highest number of accidents.ipynb | danpeczek/france-accidents | 72f5ba8b85254ca213157fc61714cc6994ea0d52 | [
"CC0-1.0"
] | 1 | 2020-04-10T12:36:09.000Z | 2020-04-10T12:36:09.000Z | Which intersection has the highest number of accidents.ipynb | danpeczek/france-accidents | 72f5ba8b85254ca213157fc61714cc6994ea0d52 | [
"CC0-1.0"
] | null | null | null | 77.236948 | 22,908 | 0.730423 | [
[
[
"### Question: What is the safest type of intersection?\n\nLet's see how accidents are splitted based on the place of the event and see where we can feel to be the safest.\n\nFirst step before any data analysis is to import required libraries and data. Any information required to understand columns is available here: https://www.kaggle.com/ahmedlahlou/accidents-in-france-from-2005-to-2016.\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\n\ncaracteristics = pd.read_csv('data/caracteristics.csv', encoding='latin1')",
"_____no_output_____"
],
[
"caracteristics.head()",
"_____no_output_____"
]
],
[
[
"Let's change the values in intersection column from numbers to categorical values, below the look-up for development.\n* 1 - Out of intersection\n* 2 - Intersection in X\n* 3 - Intersection in T\n* 4 - Intersection in Y\n* 5 - Intersection with more than 4 branches\n* 6 - Giratory\n* 7 - Place\n* 8 - Level crossing\n* 9 - Other intersection\n\nBut first let's check for missing values in 'int' (intersection) column.",
"_____no_output_____"
]
],
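As an extra sanity check before applying the look-up (a sketch, not part of the original analysis), we can also confirm that the 'int' column only contains the documented codes 1-9. It reuses the `caracteristics` frame loaded above:

```python
expected_codes = set(range(1, 10))
observed_codes = set(caracteristics['int'].unique())
unexpected = observed_codes - expected_codes          # codes outside the documented 1-9 range
print('Unexpected intersection codes:', unexpected if unexpected else 'none')
```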
[
[
"caracteristics.columns[caracteristics.isna().sum() != 0]",
"_____no_output_____"
]
],
[
[
"So it looks like 'int' column is not having missing values, which is super for us in this case. Let's go with renaming values in 'int' column.",
"_____no_output_____"
]
],
[
[
"int_dict = {\n '1': 'Out of intersection',\n '2': 'X intersection',\n '3': 'T intersection',\n '4': 'Y intersection',\n '5': 'More than 4 branches intersection',\n '6': 'Giratory',\n '7': 'Place',\n '8': 'Level crossing',\n '9': 'Other'\n\n}\ncaracteristics['int'] = caracteristics['int'].astype(str) \ncaracteristics['int'] = caracteristics['int'].replace(int_dict)\ncaracteristics['int'] = pd.Categorical(caracteristics['int'], list(int_dict.values()))\ncaracteristics.head()",
"_____no_output_____"
],
[
"plt.clf()\nplt.figure(figsize=(10,10))\nax = sns.countplot(y = 'int', data=caracteristics)\nax.set_title('Number of accidents based on the intersection type')\nax.set_xlabel('Number of accidents')\nax.set_ylabel('Intersection')\nplt.show()",
"_____no_output_____"
]
],
[
[
"So it looks like the biggest number of accidents is not on the intersection, but out of it. Looks like out of the intersections we are less carefull and more tempted to make a dangerous maneuvers.\n\nFrom all of the intersections the 'X' intersection had the highest number of accidents. The 'Y' intersection and intersection with more than 4 branches had the smallest number of accidents. For the second intersection I can tell why - these type of intersection is not so frequent to be seen and based on my own experience people are tempted on such intersection to be super cautious as an example go and see what's behind these coordinates: 51.793863, 19.589690.\n\nThe valuable feature in this dataset would be information about exact number of each type of the intersection in France, but it's right now out of scope of this notebook.",
"_____no_output_____"
]
]
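If counts of each intersection type in France were available, the normalization hinted at above could look like the sketch below. The file name `data/intersection_counts.csv` and its columns are hypothetical placeholders — no such table ships with this dataset.

```python
# Hypothetical: a table with one row per intersection type and its total count in France.
intersection_counts = pd.read_csv('data/intersection_counts.csv')  # columns: int, total_count

accidents_per_type = caracteristics.groupby('int').size().rename('accidents').reset_index()
rates = accidents_per_type.merge(intersection_counts, on='int')
rates['accidents_per_intersection'] = rates['accidents'] / rates['total_count']
print(rates.sort_values('accidents_per_intersection', ascending=False))
```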
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |