hexsha stringlengths 40 40 | size int64 6 14.9M | ext stringclasses 1 value | lang stringclasses 1 value | max_stars_repo_path stringlengths 6 260 | max_stars_repo_name stringlengths 6 119 | max_stars_repo_head_hexsha stringlengths 40 41 | max_stars_repo_licenses list | max_stars_count int64 1 191k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 6 260 | max_issues_repo_name stringlengths 6 119 | max_issues_repo_head_hexsha stringlengths 40 41 | max_issues_repo_licenses list | max_issues_count int64 1 67k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 6 260 | max_forks_repo_name stringlengths 6 119 | max_forks_repo_head_hexsha stringlengths 40 41 | max_forks_repo_licenses list | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | avg_line_length float64 2 1.04M | max_line_length int64 2 11.2M | alphanum_fraction float64 0 1 | cells list | cell_types list | cell_type_groups list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ec9ab96184c3844a35c3a7edeeed61b88114f81d | 55,363 | ipynb | Jupyter Notebook | Samples/src/PythonInterop/tomography-sample.ipynb | M4R14/Quantum | a1c857cf047a915f522809fd13e7f030bd3d876c | [
"MIT"
] | 2 | 2019-07-23T03:54:48.000Z | 2019-08-10T23:42:06.000Z | Samples/src/PythonInterop/tomography-sample.ipynb | M4R14/Quantum | a1c857cf047a915f522809fd13e7f030bd3d876c | [
"MIT"
] | 4 | 2021-01-28T20:01:28.000Z | 2022-03-25T18:49:28.000Z | Samples/src/PythonInterop/tomography-sample.ipynb | M4R14/Quantum | a1c857cf047a915f522809fd13e7f030bd3d876c | [
"MIT"
] | null | null | null | 122.484513 | 31,652 | 0.887253 | [
[
[
"# Quantum Process Tomography with Q# and Python #",
"_____no_output_____"
],
[
"## Abstract ##",
"_____no_output_____"
],
[
"In this sample, we will demonstrate interoperability between Q# and Python by using the QInfer and QuTiP libraries for Python to characterize and verify quantum processes implemented in Q#.\nIn particular, this sample will use *quantum process tomography* to learn about the behavior of a \"noisy\" Hadamard operation from the results of random Pauli measurements.",
"_____no_output_____"
],
[
"## Preamble ##",
"_____no_output_____"
]
],
[
[
"import warnings\nwarnings.simplefilter('ignore')",
"_____no_output_____"
]
],
[
[
"We can enable Q# support in Python by importing the `qsharp` package.",
"_____no_output_____"
]
],
[
[
"import qsharp ",
"_____no_output_____"
]
],
[
[
"Once we do so, any Q# source files in the current working directory are compiled, and their namespaces are made available as Python modules.\nFor instance, the `Quantum.qs` source file provided with this sample implements a `HelloWorld` operation in the `Microsoft.Quantum.Samples.Python` Q# namespace:",
"_____no_output_____"
]
],
[
[
"with open('Quantum.qs') as f:\n print(f.read())",
"// Copyright (c) Microsoft Corporation. All rights reserved.\n// Licensed under the MIT License.\nnamespace Microsoft.Quantum.Samples.Python {\n open Microsoft.Quantum.Primitive;\n open Microsoft.Quantum.Canon;\n\n function HelloWorld (pauli : Pauli) : Unit {\n Message($\"Hello, world! {pauli}\");\n }\n\n operation NoisyHadamardChannelImpl (depol : Double, target : Qubit) : Unit {\n let idxAction = Random([1.0 - depol, depol]);\n\n if (idxAction == 0) {\n H(target);\n }\n else {\n PrepareSingleQubitIdentity(target);\n }\n }\n\n function NoisyHadamardChannel (depol : Double) : (Qubit => Unit) {\n return NoisyHadamardChannelImpl(depol, _);\n }\n\n}\n\n\n\n"
]
],
[
[
"We can import this `HelloWorld` operation as though it was an ordinary Python function by using the Q# namespace as a Python module:",
"_____no_output_____"
]
],
[
[
"from Microsoft.Quantum.Samples.Python import HelloWorld",
"_____no_output_____"
],
[
"HelloWorld",
"_____no_output_____"
]
],
[
[
"Once we've imported the new names, we can then ask our simulator to run each function and operation using the `simulate` method.",
"_____no_output_____"
]
],
[
[
"HelloWorld.simulate(pauli=qsharp.Pauli.Z)",
"Hello, world! PauliZ\n"
]
],
[
[
"## Tomography ##",
"_____no_output_____"
],
[
"The `qsharp` interoperability package also comes with a `single_qubit_process_tomography` function which uses the QInfer library for Python to learn the channels corresponding to single-qubit Q# operations.",
"_____no_output_____"
]
],
[
[
"from qsharp.tomography import single_qubit_process_tomography",
"_____no_output_____"
]
],
[
[
"Next, we import plotting support and the QuTiP library, since these will be helpful to us in manipulating the quantum objects returned by the quantum process tomography functionality that we call later.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"import qutip as qt\nqt.settings.colorblind_safe = True",
"_____no_output_____"
]
],
[
[
"To use this, we define a new operation that takes a preparation and a measurement, then returns the result of performing that tomographic measurement on the noisy Hadamard operation that we defined in `Quantum.qs`.",
"_____no_output_____"
]
],
[
[
"experiment = qsharp.compile(\"\"\"\nopen Microsoft.Quantum.Samples.Python;\n\noperation Experiment(prep : Pauli, meas : Pauli) : Result {\n return SingleQubitProcessTomographyMeasurement(prep, meas, NoisyHadamardChannel(0.1));\n}\n\"\"\")",
"_____no_output_____"
]
],
[
[
"Here, we ask for 10,000 measurements from the noisy Hadamard operation that we defined above.",
"_____no_output_____"
]
],
[
[
"estimation_results = single_qubit_process_tomography(experiment, n_measurements=10000)",
"Preparing tomography model...\nPerforming tomography...\n"
]
],
[
[
"To visualize the results, it's helpful to compare to the actual channel, which we can find exactly in QuTiP.",
"_____no_output_____"
]
],
[
[
"depolarizing_channel = sum(map(qt.to_super, [qt.qeye(2), qt.sigmax(), qt.sigmay(), qt.sigmaz()])) / 4.0\nactual_noisy_h = 0.1 * qt.to_choi(depolarizing_channel) + 0.9 * qt.to_choi(qt.hadamard_transform())",
"_____no_output_____"
]
],
[
[
"We then plot the estimated and actual channels as Hinton diagrams, showing how each acts on the Pauli operators $X$, $Y$ and $Z$.",
"_____no_output_____"
]
],
[
[
"fig, (left, right) = plt.subplots(ncols=2, figsize=(12, 4))\nplt.sca(left)\nplt.xlabel('Estimated', fontsize='x-large')\nqt.visualization.hinton(estimation_results['est_channel'], ax=left)\nplt.sca(right)\nplt.xlabel('Actual', fontsize='x-large')\nqt.visualization.hinton(actual_noisy_h, ax=right)",
"_____no_output_____"
]
],
[
[
"We also obtain a wealth of other information as well, such as the covariance matrix over each parameter of the resulting channel.\nThis shows us which parameters we are least certain about, as well as how those parameters are correlated with each other.",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(10, 10))\nestimation_results['posterior'].plot_covariance()\nplt.xticks(rotation=90)",
"_____no_output_____"
]
],
[
[
"## Diagnostics ##",
"_____no_output_____"
]
],
[
[
"for component, version in sorted(qsharp.component_versions().items(), key=lambda x: x[0]):\n print(f\"{component:20}{version}\")",
"Jupyter Core 1.1.12077.0\niqsharp 0.5.1903.2902\nqsharp 0.5.1903.2902\n"
],
[
"import sys\nprint(sys.version)",
"3.6.7 (default, Feb 28 2019, 07:28:18) [MSC v.1900 64 bit (AMD64)]\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec9ac770b6314bf0a9ec6d62774180d3dc2dd375 | 28,548 | ipynb | Jupyter Notebook | Stock_Algorithms/Decision_Trees_Classification_Part2.ipynb | sonukumarraj007/Deep-Learning-Machine-Learning-Stock | b3440fb72a5105be8b595ceaec3f47acb5d11ffa | [
"MIT"
] | 569 | 2019-02-06T16:35:19.000Z | 2022-03-31T03:45:28.000Z | Stock_Algorithms/Decision_Trees_Classification_Part2.ipynb | sonukumarraj007/Deep-Learning-Machine-Learning-Stock | b3440fb72a5105be8b595ceaec3f47acb5d11ffa | [
"MIT"
] | 5 | 2021-02-27T07:03:58.000Z | 2022-03-31T14:09:41.000Z | Stock_Algorithms/Decision_Trees_Classification_Part2.ipynb | ysdede/Deep-Learning-Machine-Learning-Stock | 2e3794efab3276b6bc389c8b38615540d4e2b144 | [
"MIT"
] | 174 | 2019-05-23T11:46:54.000Z | 2022-03-31T04:44:38.000Z | 34.271309 | 114 | 0.326573 | [
[
[
"# Decision Trees for Classification Part 2",
"_____no_output_____"
],
[
"Decision Tree is classification algorithm to build a model.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\n# fix_yahoo_finance is used to fetch data \nimport fix_yahoo_finance as yf\nyf.pdr_override()",
"_____no_output_____"
],
[
"# input\nsymbol = 'AMD'\nstart = '2014-01-01'\nend = '2018-08-27'\n\n# Read data \ndataset = yf.download(symbol,start,end)\n\n# View Columns\ndataset.head()",
"[*********************100%***********************] 1 of 1 downloaded\n"
],
[
"# Create more data\ndataset['Increase_Decrease'] = np.where(dataset['Volume'].shift(-1) > dataset['Volume'],1,0)\ndataset['Buy_Sell_on_Open'] = np.where(dataset['Open'].shift(-1) > dataset['Open'],1,-1)\ndataset['Buy_Sell'] = np.where(dataset['Adj Close'].shift(-1) > dataset['Adj Close'],1,-1)\ndataset['Return'] = dataset['Adj Close'].pct_change()\ndataset = dataset.dropna()\ndataset['Up_Down'] = np.where(dataset['Return'].shift(-1) > dataset['Return'],'Up','Down')\ndataset.head()",
"_____no_output_____"
],
[
"# Create more data\ndataset['Open_N'] = np.where(dataset['Open'].shift(-1) > dataset['Open'],'Up','Down')\ndataset['High_N'] = np.where(dataset['High'].shift(-1) > dataset['High'],'Up','Down')\ndataset['Low_N'] = np.where(dataset['Low'].shift(-1) > dataset['Low'],'Up','Down')\ndataset['Close_N'] = np.where(dataset['Adj Close'].shift(-1) > dataset['Adj Close'],'Up','Down')\ndataset['Volume_N'] = np.where(dataset['Volume'].shift(-1) > dataset['Volume'],'Positive','Negative')\ndataset.head()",
"_____no_output_____"
],
[
"dataset.shape",
"_____no_output_____"
],
[
"X = dataset[['Open', 'Open_N', 'Volume_N']].values\ny = dataset['Up_Down']",
"_____no_output_____"
],
[
"from sklearn import preprocessing\nle_Open = preprocessing.LabelEncoder()\nle_Open.fit(['Up','Down'])\nX[:,1] = le_Open.transform(X[:,1]) \n\nle_Volume = preprocessing.LabelEncoder()\nle_Volume.fit(['Positive', 'Negative'])\nX[:,2] = le_Volume.transform(X[:,2]) ",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split \nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)",
"_____no_output_____"
],
[
"from sklearn.tree import DecisionTreeClassifier \nclassifier = DecisionTreeClassifier() \nclassifier.fit(X_train, y_train)",
"_____no_output_____"
],
[
"# Modeling\nUp_Down_Tree = DecisionTreeClassifier(criterion=\"entropy\", max_depth = 4)\nUp_Down_Tree",
"_____no_output_____"
],
[
"Up_Down_Tree.fit(X_train,y_train)",
"_____no_output_____"
],
[
"# Prediction\npredTree = Up_Down_Tree.predict(X_test)",
"_____no_output_____"
],
[
"print(predTree[0:5])\nprint(y_test[0:5])",
"['Down' 'Up' 'Down' 'Up' 'Down']\nDate\n2018-01-23 Down\n2016-07-27 Up\n2015-10-06 Down\n2016-03-09 Up\n2014-05-06 Up\nName: Up_Down, dtype: object\n"
],
[
"# Evaluation\nfrom sklearn import metrics\nprint(\"DecisionTrees's Accuracy: \", metrics.accuracy_score(y_test, predTree))",
"DecisionTrees's Accuracy: 0.6\n"
],
[
"# Accuracy Score without Sklearn\nboolian = (y_test==predTree)\naccuracy = sum(boolian)/y_test.size\naccuracy",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9ad6f4afbb948e39cd02fbd8bcbb5edc106ba8 | 13,906 | ipynb | Jupyter Notebook | cleaning_data.ipynb | apaneser/School_District_Analysis | 82332b63f6a46bfe5df7b396d595a07fa4d49c02 | [
"MIT"
] | null | null | null | cleaning_data.ipynb | apaneser/School_District_Analysis | 82332b63f6a46bfe5df7b396d595a07fa4d49c02 | [
"MIT"
] | null | null | null | cleaning_data.ipynb | apaneser/School_District_Analysis | 82332b63f6a46bfe5df7b396d595a07fa4d49c02 | [
"MIT"
] | null | null | null | 29.841202 | 285 | 0.344312 | [
[
[
"# <span style=\"color:violet\">**Handle Missing Data**</span>",
"_____no_output_____"
]
],
[
[
"# add pandas dependency\nimport pandas as pd",
"_____no_output_____"
],
[
"# files to load\nfile_to_load = \"Resources/missing_grades.csv\"\n\n# read csv file into dataframe\nmissing_grade_df = pd.read_csv(file_to_load)\nmissing_grade_df",
"_____no_output_____"
]
],
[
[
"## <span style=\"color:green\">*Option 1: Do nothing*</span>\nIf we do nothing, when we sum or take the averages of the reading and math scores, those NaNs will not be considered in the sum or the averages (just as they are not considered in the sum or the averages in an Excel file). In this situation, the missing values have no impact.\n\nHowever, if we multiply or divide with a row that has a NaN, the answer will be NaN. This can cause problems if we need the answer for the rest of our code.",
"_____no_output_____"
],
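[
"# NOTE: this cell is an added illustrative sketch, not part of the original notebook.\n# It demonstrates Option 1: pandas skips NaN when summing or averaging,\n# but arithmetic involving NaN produces NaN.\n# The column name 'reading_score' is an assumption about missing_grades.csv.\nprint(missing_grade_df['reading_score'].sum())    # NaN values are ignored\nprint(missing_grade_df['reading_score'].mean())   # NaN values are ignored\nprint((missing_grade_df['reading_score'] * 2).head())  # rows with NaN stay NaN",
"_____no_output_____"
],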
[
"## <span style=\"color:green\">*Option 2: Drop the Row*</span>",
"_____no_output_____"
]
],
[
[
"# drop the rows\nmissing_grade_df.dropna()",
"_____no_output_____"
]
],
[
[
"## <span style=\"color:green\">*Option 3: Fill in the Row*</span>",
"_____no_output_____"
]
],
[
[
"# Fill in missing rows with \"85\"\nmissing_grade_df.fillna(85)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9ae066b4d3518f3ae230d1d94758f57c3836e0 | 9,286 | ipynb | Jupyter Notebook | data/export/prepare_k400_release.ipynb | StanLei52/GEBD | 5f7e722e0384f9877c75d116e1db72400d2bc58f | [
"MIT"
] | 44 | 2021-03-24T07:10:57.000Z | 2022-03-12T11:49:14.000Z | data/export/prepare_k400_release.ipynb | StanLei52/GEBD | 5f7e722e0384f9877c75d116e1db72400d2bc58f | [
"MIT"
] | 2 | 2021-05-26T09:31:55.000Z | 2021-08-11T11:47:38.000Z | data/export/prepare_k400_release.ipynb | StanLei52/GEBD | 5f7e722e0384f9877c75d116e1db72400d2bc58f | [
"MIT"
] | 6 | 2021-04-07T00:51:51.000Z | 2022-01-12T01:54:41.000Z | 42.792627 | 158 | 0.505277 | [
[
[
"import os\nimport cv2\nimport pickle\nimport pandas as pd\nimport numpy as np\nimport json",
"_____no_output_____"
],
[
"# Generate frameidx for shot/event change\ndef generate_frameidx_from_raw(min_change_duration=0.3, split='valnew'):\n assert split in ['train','val','valnew','test']\n\n with open('../export/k400_{}_raw_annotation.pkl'.format(split),'rb') as f:\n dict_raw = pickle.load(f, encoding='lartin1')\n \n mr345 = {} \n for filename in dict_raw.keys():\n ann_of_this_file = dict_raw[filename]['substages_timestamps']\n if not (len(ann_of_this_file) >= 3):\n # print(f'{filename} less than 3 annotations.')\n continue\n \n try:\n fps = dict_raw[filename]['fps']\n num_frames = int(dict_raw[filename]['num_frames'])\n video_duration = dict_raw[filename]['video_duration']\n avg_f1 = dict_raw[filename]['f1_consis_avg']\n except:\n # print(f'{filename} exception!')\n continue\n \n # this is avg f1 from halo but computed using the annotation after post-processing like merge two close changes\n mr345[filename] = {}\n mr345[filename]['num_frames'] = int(dict_raw[filename]['num_frames'])\n mr345[filename]['path_video'] =dict_raw[filename]['path_video']\n mr345[filename]['fps'] = dict_raw[filename]['fps']\n mr345[filename]['video_duration'] = dict_raw[filename]['video_duration']\n mr345[filename]['path_frame'] = dict_raw[filename]['path_video'].split('.mp4')[0]\n mr345[filename]['f1_consis'] = []\n mr345[filename]['f1_consis_avg'] = avg_f1\n \n mr345[filename]['substages_myframeidx'] = []\n mr345[filename]['substages_timestamps'] = []\n for ann_idx in range(len(ann_of_this_file)):\n # remove changes at the beginning and end of the video; \n ann = ann_of_this_file[ann_idx]\n tmp_ann = []\n change_shot_range_start = []\n change_shot_range_end = []\n change_event = []\n change_shot_timestamp = []\n for p in ann:\n st = p['start_time']\n et = p['end_time']\n l = p['label'].split(' ')[0]\n if (st+et)/2<min_change_duration or (st+et)/2>(video_duration-min_change_duration): continue\n tmp_ann.append(p)\n if l == 'EventChange':\n change_event.append((st+et)/2)\n elif l == 'ShotChangeGradualRange:':\n change_shot_range_start.append(st)\n change_shot_range_end.append(et)\n else:\n change_shot_timestamp.append((st+et)/2)\n \n # consolidate duplicated/very close timestamps\n # if two shot range overlap, merge\n i = 0\n while i < len(change_shot_range_start)-1:\n while change_shot_range_end[i]>=change_shot_range_start[i+1]:\n change_shot_range_start.remove(change_shot_range_start[i+1])\n if change_shot_range_end[i]<=change_shot_range_end[i+1]:\n change_shot_range_end.remove(change_shot_range_end[i])\n else:\n change_shot_range_end.remove(change_shot_range_end[i+1])\n if i==len(change_shot_range_start)-1:\n break\n i+=1 \n \n # if change_event or change_shot_timestamp falls into range of shot range, remove this change_event\n for cg in change_event:\n for i in range(len(change_shot_range_start)):\n if cg<=(change_shot_range_end[i]+min_change_duration) and cg>=(change_shot_range_start[i]-min_change_duration):\n change_event.remove(cg)\n break\n for cg in change_shot_timestamp:\n for i in range(len(change_shot_range_start)):\n if cg<=(change_shot_range_end[i]+min_change_duration) and cg>=(change_shot_range_start[i]-min_change_duration):\n change_shot_timestamp.remove(cg)\n break\n \n # if two timestamp changes are too close, remove the second one between two shot changes, two event changes; shot vs. 
event, remove event\n change_event.sort()\n change_shot_timestamp.sort()\n tmp_change_shot_timestamp = change_shot_timestamp\n tmp_change_event = change_event\n #\"\"\"\n i = 0\n while i <= (len(change_event)-2):\n if (change_event[i+1]-change_event[i])<=2*min_change_duration:\n tmp_change_event.remove(change_event[i+1])\n else:\n i += 1\n i = 0\n while i <= (len(change_shot_timestamp)-2):\n if (change_shot_timestamp[i+1]-change_shot_timestamp[i])<=2*min_change_duration:\n tmp_change_shot_timestamp.remove(change_shot_timestamp[i+1])\n else:\n i += 1\n for i in range(len(tmp_change_shot_timestamp)-1):\n j = 0\n while j <= (len(tmp_change_event)-1):\n if abs(tmp_change_shot_timestamp[i]-tmp_change_event[j])<=2*min_change_duration:\n tmp_change_event.remove(tmp_change_event[j])\n else:\n j += 1\n #\"\"\"\n change_shot_timestamp = tmp_change_shot_timestamp\n change_event = tmp_change_event\n change_shot_range = []\n for i in range(len(change_shot_range_start)):\n change_shot_range += [(change_shot_range_start[i]+change_shot_range_end[i])/2]\n\n change_all = change_event + change_shot_timestamp + change_shot_range\n change_all.sort()\n time_change_all = change_all \n\n change_all = np.floor(np.array(change_all)*fps)\n tmp_change_all = []\n for cg in change_all:\n tmp_change_all += [min(num_frames-1, cg)]\n\n #if len(tmp_change_all) != 0: #even after processing, the list is empty/there is no GT bdy, shall still keep []\n mr345[filename]['substages_myframeidx'] += [tmp_change_all]\n mr345[filename]['substages_timestamps'] += [time_change_all]\n mr345[filename]['f1_consis'] += [dict_raw[filename]['f1_consis'][ann_idx]]\n \n \n with open(f'../export/k400_mr345_{split}_min_change_duration{min_change_duration}.pkl', 'wb') as f:\n pickle.dump(mr345, f, protocol=pickle.HIGHEST_PROTOCOL)\n \n print(len(mr345))",
"_____no_output_____"
],
[
"generate_frameidx_from_raw(split='train')",
"_____no_output_____"
],
[
"generate_frameidx_from_raw(split='test')",
"_____no_output_____"
],
[
"generate_frameidx_from_raw(split='val')",
"_____no_output_____"
],
[
"generate_frameidx_from_raw(split='valnew')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9af93836865297d8dde458ec6fc74021537f44 | 1,037,566 | ipynb | Jupyter Notebook | code/charel/bartolozziTEST.ipynb | charelF/ComplexSystems | 3efc9b577ec777fcecbd5248bbbaf77b7d90fc65 | [
"MIT"
] | null | null | null | code/charel/bartolozziTEST.ipynb | charelF/ComplexSystems | 3efc9b577ec777fcecbd5248bbbaf77b7d90fc65 | [
"MIT"
] | null | null | null | code/charel/bartolozziTEST.ipynb | charelF/ComplexSystems | 3efc9b577ec777fcecbd5248bbbaf77b7d90fc65 | [
"MIT"
] | null | null | null | 7,155.627586 | 814,040 | 0.893988 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport random\n\nimport sys\nsys.path.append(\"..\")\nsys.path.append(\"../shared\")\n\nimport bartolozziSPEED",
"_____no_output_____"
],
[
"np.random.seed(1)\nrandom.seed(1)\n\nG, x = bartolozziSPEED.generate(.47,.0001,.5,.5,2000,200,2,1,.001)\n\nplt.figure(figsize=(20,3))\nplt.imshow(G.T, cmap=\"binary\", aspect=\"auto\", interpolation=\"None\")",
"_____no_output_____"
],
[
"N0 = 20000\nN1 = 100\n\npd = 0.01\npe = 0.0001\nph = 0.0099\n\npa = 0.4\n\nA = 2\na = 2*A\nh = 0.001\n\nG, x = bartolozziSPEED.generate(pd, pe, ph, pa, N0, N1, A, a, h)\n\nfig, (ax1, ax2) = plt.subplots(\n ncols=1, nrows=2, figsize=(12,5), sharex=True, gridspec_kw = {'wspace':0, 'hspace':0}\n)\nax1.imshow(G.T, cmap=\"bone\", interpolation=\"None\", aspect=\"auto\")\n# plt.colorbar()\n\nr = (x - np.mean(x)) / np.std(x)\n\ns = 100\nS = np.zeros_like(x)\nS[0] = s\nfor i in range(1,N0):\n # S[i] = S[i-1] + (S[i-1] * r[i])\n S[i] = S[i-1] + (S[i-1] * r[i]/100) + 0.01\n\nax2.plot(S)\nax2.grid(alpha=0.4)\n\nax2.set_xlabel(\"time\")\nax2.set_ylabel(\"close price\")\nax1.set_ylabel(\"agents\")\n\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
ec9afad8845632055c457d7eb71a059c26acbed2 | 9,529 | ipynb | Jupyter Notebook | notebook/pandas_str_combine.ipynb | puyopop/python-snippets | 9d70aa3b2a867dd22f5a5e6178a5c0c5081add73 | [
"MIT"
] | 174 | 2018-05-30T21:14:50.000Z | 2022-03-25T07:59:37.000Z | notebook/pandas_str_combine.ipynb | puyopop/python-snippets | 9d70aa3b2a867dd22f5a5e6178a5c0c5081add73 | [
"MIT"
] | 5 | 2019-08-10T03:22:02.000Z | 2021-07-12T20:31:17.000Z | notebook/pandas_str_combine.ipynb | puyopop/python-snippets | 9d70aa3b2a867dd22f5a5e6178a5c0c5081add73 | [
"MIT"
] | 53 | 2018-04-27T05:26:35.000Z | 2022-03-25T07:59:37.000Z | 20.317697 | 100 | 0.424599 | [
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"df = pd.read_csv('data/src/sample_pandas_normal.csv').head(3)\nprint(df)",
" name age state point\n0 Alice 24 NY 64\n1 Bob 42 CA 92\n2 Charlie 18 CA 70\n"
],
[
"print(df['name'].str.cat(df['state']))",
"0 AliceNY\n1 BobCA\n2 CharlieCA\nName: name, dtype: object\n"
],
[
"print(df['name'].str.cat(df['state'], sep=' in '))",
"0 Alice in NY\n1 Bob in CA\n2 Charlie in CA\nName: name, dtype: object\n"
],
[
"print(df['name'].str.cat(['X', 'Y', 'Z'], sep=' in '))",
"0 Alice in X\n1 Bob in Y\n2 Charlie in Z\nName: name, dtype: object\n"
],
[
"print(df['name'].str.cat([df['state'], ['X', 'Y', 'Z']], sep='-'))",
"0 Alice-NY-X\n1 Bob-CA-Y\n2 Charlie-CA-Z\nName: name, dtype: object\n"
],
[
"# print(df['name'].str.cat('X', sep='-'))\n# ValueError: Did you mean to supply a `sep` keyword?",
"_____no_output_____"
],
[
"print(df['name'] + df['state'])",
"0 AliceNY\n1 BobCA\n2 CharlieCA\ndtype: object\n"
],
[
"print(df['name'] + ' in ' + df['state'])",
"0 Alice in NY\n1 Bob in CA\n2 Charlie in CA\ndtype: object\n"
],
[
"print(df['name'] + ' in ' + df['state'] + ' - ' + ['X', 'Y', 'Z'])",
"0 Alice in NY - X\n1 Bob in CA - Y\n2 Charlie in CA - Z\ndtype: object\n"
],
[
"df['col_NaN'] = ['X', pd.np.nan, 'Z']\nprint(df)",
" name age state point col_NaN\n0 Alice 24 NY 64 X\n1 Bob 42 CA 92 NaN\n2 Charlie 18 CA 70 Z\n"
],
[
"print(df['name'].str.cat(df['col_NaN'], sep='-'))",
"0 Alice-X\n1 NaN\n2 Charlie-Z\nName: name, dtype: object\n"
],
[
"print(df['name'].str.cat(df['col_NaN'], sep='-', na_rep='No Data'))",
"0 Alice-X\n1 Bob-No Data\n2 Charlie-Z\nName: name, dtype: object\n"
],
[
"print(df['name'] + '-' + df['col_NaN'])",
"0 Alice-X\n1 NaN\n2 Charlie-Z\ndtype: object\n"
],
[
"print(df['name'] + '-' + df['col_NaN'].fillna('No Data'))",
"0 Alice-X\n1 Bob-No Data\n2 Charlie-Z\ndtype: object\n"
],
[
"# print(df['name'].str.cat(df['age'], sep='-'))\n# TypeError: sequence item 1: expected str instance, int found",
"_____no_output_____"
],
[
"print(df['name'].str.cat(df['age'].astype(str), sep='-'))",
"0 Alice-24\n1 Bob-42\n2 Charlie-18\nName: name, dtype: object\n"
],
[
"# print(df['name'] + '-' + df['age'])\n# TypeError: can only concatenate str (not \"int\") to str",
"_____no_output_____"
],
[
"print(df['name'] + '-' + df['age'].astype(str))",
"0 Alice-24\n1 Bob-42\n2 Charlie-18\ndtype: object\n"
],
[
"df['name_state'] = df['name'].str.cat(df['state'], sep=' in ')\nprint(df)",
" name age state point col_NaN name_state\n0 Alice 24 NY 64 X Alice in NY\n1 Bob 42 CA 92 NaN Bob in CA\n2 Charlie 18 CA 70 Z Charlie in CA\n"
],
[
"print(df.drop(columns=['name', 'state']))",
" age point col_NaN name_state\n0 24 64 X Alice in NY\n1 42 92 NaN Bob in CA\n2 18 70 Z Charlie in CA\n"
],
[
"df = pd.read_csv('data/src/sample_pandas_normal.csv').head(3)\nprint(df)",
" name age state point\n0 Alice 24 NY 64\n1 Bob 42 CA 92\n2 Charlie 18 CA 70\n"
],
[
"print(df.assign(name_state=df['name'] + ' in ' + df['state']))",
" name age state point name_state\n0 Alice 24 NY 64 Alice in NY\n1 Bob 42 CA 92 Bob in CA\n2 Charlie 18 CA 70 Charlie in CA\n"
],
[
"print(df.assign(name_state=df['name'] + ' in ' + df['state']).drop(columns=['name', 'state']))",
" age point name_state\n0 24 64 Alice in NY\n1 42 92 Bob in CA\n2 18 70 Charlie in CA\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9b073e8636038f4db04a4b44d10039bd763437 | 569,203 | ipynb | Jupyter Notebook | Results/sorted_in_grayscale_ZOOM_50.ipynb | alexanderbea/Detection-of-Covid-19-Using-Transfer-Learning | 1ebd76f4e8360668358967f65ff373fa85fa0363 | [
"MIT"
] | 1 | 2020-12-02T18:47:50.000Z | 2020-12-02T18:47:50.000Z | Results/sorted_in_grayscale_ZOOM_50.ipynb | alexanderbea/Detection-of-Covid-19-Using-Transfer-Learning | 1ebd76f4e8360668358967f65ff373fa85fa0363 | [
"MIT"
] | null | null | null | Results/sorted_in_grayscale_ZOOM_50.ipynb | alexanderbea/Detection-of-Covid-19-Using-Transfer-Learning | 1ebd76f4e8360668358967f65ff373fa85fa0363 | [
"MIT"
] | null | null | null | 1,138.406 | 52,058 | 0.944877 | [
[
[
"from keras import applications\nfrom keras.models import Sequential\nfrom keras.layers import Flatten\nfrom keras.layers import Input\nfrom keras.models import Model\nfrom keras.layers import Dense\nfrom keras.layers import Dropout\nfrom keras.utils import to_categorical\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom google.colab import drive\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
[
"drive.mount('/content/drive/')",
"Drive already mounted at /content/drive/; to attempt to forcibly remount, call drive.mount(\"/content/drive/\", force_remount=True).\n"
],
[
"def add_box(image):\n margin = 30\n for i in range(margin, 224-margin):\n for j in range(margin//6, 224-margin//6):\n image[i, j, :] = np.random.uniform()\n return image\n\nbatch_size = 64\nimg_height, img_width = 224, 224\n\ndir = \"/content/drive/My Drive/DD2424 Project/Dataset/NEW/SORTED_IN_GRAYSCALE/\"\n\ntrain_datagen = ImageDataGenerator(rescale=1./255,\n #preprocessing_function=add_box,\n shear_range=0.2,\n zoom_range=[0.5, 0.55],\n horizontal_flip=True,\n validation_split=0.3) # set validation split\n\ntrain_generator = train_datagen.flow_from_directory(\n dir,\n target_size=(img_height, img_width),\n batch_size=batch_size,\n class_mode='binary',\n subset='training') # set as training data\n\nvalidation_generator = train_datagen.flow_from_directory(\n dir, # same directory as training data\n target_size=(img_height, img_width),\n batch_size=batch_size,\n class_mode='binary',\n subset='validation') # set as validation data",
"Found 278 images belonging to 2 classes.\nFound 119 images belonging to 2 classes.\n"
],
[
"x_batch, y_batch = next(train_generator)\nfor i in range (0,10):\n image = x_batch[i]\n #print(image.shape)\n plt.figure()\n plt.imshow(x_batch[i])\n print(y_batch[i])",
"1.0\n0.0\n1.0\n0.0\n0.0\n1.0\n0.0\n0.0\n0.0\n0.0\n"
],
[
"nb_epochs = 50\n\nmodel_vgg = applications.VGG16(weights='imagenet', include_top=False, input_tensor=Input(shape=(224,224, 3))) # VGG16 without the fully connected layers\n#model_vgg = applications.ResNet50(weights='imagenet', include_top=False, input_tensor=Input(shape=(224,224, 3))) # ResNet50 without the fully connected layers\n\nfor layer in model_vgg.layers:\n layer.trainable = False # don't train these\n\nmodel_fc = Sequential()\nmodel_fc.add(Flatten(input_shape=model_vgg.output_shape[1:])) # Flatten so that they fit\nmodel_fc.add(Dense(128, activation='relu')) # new fc layer\nmodel_fc.add(Dropout(0.25))\nmodel_fc.add(Dense(1, activation='sigmoid')) # prediction layer\n\nmodel = Model(inputs=model_vgg.input, outputs=model_fc(model_vgg.output)) # merges them\n\nmodel.compile(optimizer='SGD', loss='binary_crossentropy', metrics=['accuracy']) # I chose SGD becaues it's seemed to be the most easy to understand optimizer\n\nhistory = model.fit_generator(\n train_generator,\n steps_per_epoch = train_generator.samples // batch_size,\n validation_data = validation_generator, \n validation_steps = validation_generator.samples // batch_size,\n epochs = nb_epochs)\n\nplt.plot(history.history['loss'], label = 'training loss')\nplt.plot(history.history['val_loss'], label = 'validation loss')\nplt.plot(history.history['accuracy'], label = 'training accuracy')\nplt.plot(history.history['val_accuracy'], label = 'validation accuracy')\nplt.legend()\nplt.xlabel('epochs')\nplt.ylabel('loss and accuracy')",
"Epoch 1/50\n4/4 [==============================] - 8s 2s/step - loss: 1.5311 - accuracy: 0.5000 - val_loss: 0.9063 - val_accuracy: 0.7969\nEpoch 2/50\n4/4 [==============================] - 8s 2s/step - loss: 0.7470 - accuracy: 0.7344 - val_loss: 0.5635 - val_accuracy: 0.7091\nEpoch 3/50\n4/4 [==============================] - 7s 2s/step - loss: 0.4950 - accuracy: 0.7674 - val_loss: 0.5002 - val_accuracy: 0.7656\nEpoch 4/50\n4/4 [==============================] - 6s 2s/step - loss: 0.5193 - accuracy: 0.7570 - val_loss: 0.4770 - val_accuracy: 0.7455\nEpoch 5/50\n4/4 [==============================] - 9s 2s/step - loss: 0.4841 - accuracy: 0.7539 - val_loss: 0.6624 - val_accuracy: 0.7188\nEpoch 6/50\n4/4 [==============================] - 6s 2s/step - loss: 0.5937 - accuracy: 0.7477 - val_loss: 0.5303 - val_accuracy: 0.8000\nEpoch 7/50\n4/4 [==============================] - 10s 2s/step - loss: 0.5309 - accuracy: 0.7539 - val_loss: 0.4761 - val_accuracy: 0.7656\nEpoch 8/50\n4/4 [==============================] - 5s 1s/step - loss: 0.4544 - accuracy: 0.7849 - val_loss: 0.4606 - val_accuracy: 0.7455\nEpoch 9/50\n4/4 [==============================] - 10s 2s/step - loss: 0.4866 - accuracy: 0.7422 - val_loss: 0.4587 - val_accuracy: 0.7969\nEpoch 10/50\n4/4 [==============================] - 6s 2s/step - loss: 0.5203 - accuracy: 0.8084 - val_loss: 0.3780 - val_accuracy: 0.7818\nEpoch 11/50\n4/4 [==============================] - 9s 2s/step - loss: 0.4442 - accuracy: 0.7664 - val_loss: 0.4768 - val_accuracy: 0.8125\nEpoch 12/50\n4/4 [==============================] - 6s 2s/step - loss: 0.4123 - accuracy: 0.7757 - val_loss: 0.3872 - val_accuracy: 0.7818\nEpoch 13/50\n4/4 [==============================] - 8s 2s/step - loss: 0.4068 - accuracy: 0.7664 - val_loss: 0.4723 - val_accuracy: 0.6719\nEpoch 14/50\n4/4 [==============================] - 8s 2s/step - loss: 0.3943 - accuracy: 0.7695 - val_loss: 0.2883 - val_accuracy: 0.8545\nEpoch 15/50\n4/4 [==============================] - 8s 2s/step - loss: 0.3626 - accuracy: 0.7991 - val_loss: 0.2628 - val_accuracy: 0.8438\nEpoch 16/50\n4/4 [==============================] - 6s 2s/step - loss: 0.5394 - accuracy: 0.7664 - val_loss: 0.5307 - val_accuracy: 0.6545\nEpoch 17/50\n4/4 [==============================] - 9s 2s/step - loss: 0.4098 - accuracy: 0.7757 - val_loss: 0.3964 - val_accuracy: 0.7500\nEpoch 18/50\n4/4 [==============================] - 8s 2s/step - loss: 0.3602 - accuracy: 0.8359 - val_loss: 0.2795 - val_accuracy: 0.9091\nEpoch 19/50\n4/4 [==============================] - 8s 2s/step - loss: 0.3952 - accuracy: 0.7757 - val_loss: 0.4827 - val_accuracy: 0.8906\nEpoch 20/50\n4/4 [==============================] - 6s 2s/step - loss: 0.4255 - accuracy: 0.8692 - val_loss: 0.2913 - val_accuracy: 0.8182\nEpoch 21/50\n4/4 [==============================] - 9s 2s/step - loss: 0.4217 - accuracy: 0.8084 - val_loss: 0.3758 - val_accuracy: 0.7656\nEpoch 22/50\n4/4 [==============================] - 6s 2s/step - loss: 0.3089 - accuracy: 0.8318 - val_loss: 0.2863 - val_accuracy: 0.9455\nEpoch 23/50\n4/4 [==============================] - 9s 2s/step - loss: 0.3691 - accuracy: 0.8867 - val_loss: 0.3189 - val_accuracy: 0.7812\nEpoch 24/50\n4/4 [==============================] - 7s 2s/step - loss: 0.3500 - accuracy: 0.8598 - val_loss: 0.2829 - val_accuracy: 0.8364\nEpoch 25/50\n4/4 [==============================] - 8s 2s/step - loss: 0.3105 - accuracy: 0.9065 - val_loss: 0.2136 - val_accuracy: 0.9219\nEpoch 26/50\n4/4 [==============================] - 6s 
2s/step - loss: 0.4063 - accuracy: 0.7710 - val_loss: 0.3141 - val_accuracy: 0.9273\nEpoch 27/50\n4/4 [==============================] - 9s 2s/step - loss: 0.3601 - accuracy: 0.8505 - val_loss: 0.4830 - val_accuracy: 0.7031\nEpoch 28/50\n4/4 [==============================] - 6s 2s/step - loss: 0.3892 - accuracy: 0.8224 - val_loss: 0.2508 - val_accuracy: 0.8364\nEpoch 29/50\n4/4 [==============================] - 10s 2s/step - loss: 0.3153 - accuracy: 0.8867 - val_loss: 0.2360 - val_accuracy: 0.8750\nEpoch 30/50\n4/4 [==============================] - 6s 2s/step - loss: 0.3088 - accuracy: 0.8364 - val_loss: 0.2516 - val_accuracy: 0.8909\nEpoch 31/50\n4/4 [==============================] - 9s 2s/step - loss: 0.2872 - accuracy: 0.8738 - val_loss: 0.2329 - val_accuracy: 0.9219\nEpoch 32/50\n4/4 [==============================] - 6s 2s/step - loss: 0.3598 - accuracy: 0.8505 - val_loss: 0.4346 - val_accuracy: 0.8182\nEpoch 33/50\n4/4 [==============================] - 9s 2s/step - loss: 0.5132 - accuracy: 0.7812 - val_loss: 0.2015 - val_accuracy: 0.9375\nEpoch 34/50\n4/4 [==============================] - 6s 2s/step - loss: 0.2864 - accuracy: 0.8832 - val_loss: 0.2390 - val_accuracy: 0.9091\nEpoch 35/50\n4/4 [==============================] - 8s 2s/step - loss: 0.2882 - accuracy: 0.9393 - val_loss: 0.7492 - val_accuracy: 0.4688\nEpoch 36/50\n4/4 [==============================] - 6s 2s/step - loss: 0.4103 - accuracy: 0.7477 - val_loss: 0.2063 - val_accuracy: 0.9091\nEpoch 37/50\n4/4 [==============================] - 8s 2s/step - loss: 0.2772 - accuracy: 0.8972 - val_loss: 0.1454 - val_accuracy: 0.9375\nEpoch 38/50\n4/4 [==============================] - 8s 2s/step - loss: 0.2771 - accuracy: 0.9141 - val_loss: 0.2533 - val_accuracy: 0.8727\nEpoch 39/50\n4/4 [==============================] - 8s 2s/step - loss: 0.2797 - accuracy: 0.8879 - val_loss: 0.1657 - val_accuracy: 0.9531\nEpoch 40/50\n4/4 [==============================] - 6s 2s/step - loss: 0.2175 - accuracy: 0.9346 - val_loss: 0.2207 - val_accuracy: 0.9273\nEpoch 41/50\n4/4 [==============================] - 10s 2s/step - loss: 0.2547 - accuracy: 0.9062 - val_loss: 0.3015 - val_accuracy: 0.9062\nEpoch 42/50\n4/4 [==============================] - 5s 1s/step - loss: 0.6145 - accuracy: 0.7209 - val_loss: 0.2659 - val_accuracy: 0.8727\nEpoch 43/50\n4/4 [==============================] - 9s 2s/step - loss: 0.2928 - accuracy: 0.8711 - val_loss: 0.2024 - val_accuracy: 1.0000\nEpoch 44/50\n4/4 [==============================] - 7s 2s/step - loss: 0.2440 - accuracy: 0.9206 - val_loss: 0.2398 - val_accuracy: 0.9455\nEpoch 45/50\n4/4 [==============================] - 8s 2s/step - loss: 0.3062 - accuracy: 0.9019 - val_loss: 0.1613 - val_accuracy: 1.0000\nEpoch 46/50\n4/4 [==============================] - 6s 2s/step - loss: 0.2046 - accuracy: 0.9393 - val_loss: 0.2739 - val_accuracy: 0.8545\nEpoch 47/50\n4/4 [==============================] - 9s 2s/step - loss: 0.2959 - accuracy: 0.8672 - val_loss: 0.1265 - val_accuracy: 0.9219\nEpoch 48/50\n4/4 [==============================] - 6s 2s/step - loss: 0.2354 - accuracy: 0.9346 - val_loss: 0.2198 - val_accuracy: 0.9273\nEpoch 49/50\n4/4 [==============================] - 9s 2s/step - loss: 0.3724 - accuracy: 0.8178 - val_loss: 0.1977 - val_accuracy: 0.9531\nEpoch 50/50\n4/4 [==============================] - 6s 2s/step - loss: 0.2363 - accuracy: 0.9206 - val_loss: 0.2857 - val_accuracy: 0.8545\n"
],
[
"model.save_weights(filepath='/content/drive/My Drive/DD2424 Project/Results/sorted_in_grayscale_ZOOM_50.txt')",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9b081d4ee7f54a42b25467592d74e7e3d5f821 | 39,699 | ipynb | Jupyter Notebook | notebooks/ladder/ladder-cv-binary.ipynb | DEIB-GECO/CIBB2018 | a6eb392c2f0a76548ae893282b25da2f7de7d51a | [
"MIT"
] | null | null | null | notebooks/ladder/ladder-cv-binary.ipynb | DEIB-GECO/CIBB2018 | a6eb392c2f0a76548ae893282b25da2f7de7d51a | [
"MIT"
] | null | null | null | notebooks/ladder/ladder-cv-binary.ipynb | DEIB-GECO/CIBB2018 | a6eb392c2f0a76548ae893282b25da2f7de7d51a | [
"MIT"
] | null | null | null | 45.683544 | 1,467 | 0.564523 | [
[
[
"#This is the modified version of the ladder network code from https://github.com/rinuboney/ladder\n#Certain modfications are made to use & experiment with gene expression data\nimport numpy as np\n\nfrom sys import argv\n\nfrom sklearn.model_selection import RepeatedStratifiedKFold, train_test_split\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score, f1_score\n\nimport tensorflow as tf\nimport math\nimport os\nimport csv",
"_____no_output_____"
],
[
"import src ",
"_____no_output_____"
],
[
"join = lambda l, u: tf.concat([l, u], 0)\nlabeled = lambda x: tf.slice(x, [0, 0], [batch_size, -1]) if x is not None else x\nunlabeled = lambda x: tf.slice(x, [batch_size, 0], [-1, -1]) if x is not None else x\nsplit_lu = lambda x: (labeled(x), unlabeled(x))",
"_____no_output_____"
],
[
"cancer_type = argv[1]\nif cancer_type.startswith('-'):\n cancer_type = 'BRCA'\ncancer_type_file = cancer_type.replace(\"/\",\"_\").lower()\nprint(cancer_type_file)\n# file = 'out/' + cancer_type_file + \".tsv\"\n# file",
"brca\n"
],
[
"#class definitions \nclass DataSet(object):\n\n def __init__(self, dataset, labels):\n \n self._dataset = dataset\n self._labels = labels\n self._epochs_completed = 0\n self._index_in_epoch = 0\n self._num_examples = dataset.shape[0]\n\n @property\n def dataset(self):\n return self._dataset\n\n @property\n def labels(self):\n return self._labels\n\n @property\n def num_examples(self):\n return self._num_examples\n\n @property\n def epochs_completed(self):\n return self._epochs_completed\n\n def next_batch(self, batch_size):\n \"\"\"Return the next `batch_size` examples from this data set.\"\"\"\n start = self._index_in_epoch\n# print(start)\n end = start + batch_size\n \n result_data = self._dataset[start:end]\n result_label = self._labels[start:end]\n \n while len(result_data) < batch_size:\n # Finished epoch\n self._epochs_completed += 1\n # Shuffle the data\n perm = np.arange(self._num_examples)\n np.random.shuffle(perm)\n self._dataset = self._dataset[perm]\n self._labels = self._labels[perm]\n # Start next epoch\n start = 0\n end = batch_size - len(result_data)\n result_data = np.append(result_data,self._dataset[start:end], axis=0)\n result_label = np.append(result_label,self._labels[start:end], axis=0)\n self._index_in_epoch = end\n# print(start, end)\n return result_data ,result_label\n\nclass SemiDataSet(object):\n def __init__(self, dataset, labels, n_labeled):\n \n self.n_labeled = n_labeled\n\n # Unlabled DataSet\n self.unlabeled_ds = DataSet(dataset, labels)\n\n # Labeled DataSet\n self.num_examples = self.unlabeled_ds.num_examples\n indices = np.arange(self.num_examples)\n shuffled_indices = np.random.permutation(indices)\n dataset = dataset[shuffled_indices]\n labels = labels[shuffled_indices]\n# print('labels',labels)\n \n y = np.array([np.arange(2)[l==1][0] for l in labels])\n# print('y',y)\n# global test\n# test=labels\n\n \n# idx = indices[y==0][:5]\n# print('idx',idx)\n\n\n n_classes = y.max() + 1\n# print('n_classes',n_classes)\n n_from_each_class = n_labeled // n_classes\n i_labeled = []\n for c in range(n_classes):\n i = indices[y==c][:n_from_each_class]\n i_labeled += list(i)\n l_dataset = dataset[i_labeled]\n l_labels = labels[i_labeled]\n self.labeled_ds = DataSet(l_dataset, l_labels)\n\n def next_batch(self, batch_size):\n #print (\"batch size semi\", batch_size)\n unlabeled_dataset, _ = self.unlabeled_ds.next_batch(batch_size)\n \n if batch_size > self.n_labeled:\n labeled_dataset, labels = self.labeled_ds.next_batch(self.n_labeled)\n else:\n labeled_dataset, labels = self.labeled_ds.next_batch(batch_size)\n #print (labeled_dataset.shape)\n #print (\"labels shape aasd\", labels.shape)\n #print (labels)\n dataset = np.vstack([labeled_dataset, unlabeled_dataset])\n return dataset, labels",
"_____no_output_____"
],
[
"#one-hot label\ndef dense_to_one_hot(labels_dense, num_classes=2):\n\n \"\"\"Convert class labels from scalars to one-hot vectors.\"\"\"\n num_labels = labels_dense.shape[0]\n# print(num_labels)\n index_offset = np.arange(num_labels) * num_classes\n labels_one_hot = np.zeros((num_labels, num_classes))\n labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1\n return labels_one_hot\n",
"_____no_output_____"
],
[
"#fix labels 1 for tumoral, 0 for healthy\ndef fix_label(labels):\n labels= [1 if x==1 else 0 for x in labels]\n \n return np.array(labels)\n",
"_____no_output_____"
],
[
"(X,y,_,_) = src.data.load_sample_classification_problem(cancer_type)\n",
"_____no_output_____"
],
[
"print(X.shape)\nprint(y.shape)",
"(1211, 20530)\n(1211,)\n"
],
[
"print('# of 1s', sum(y))\nprint('# of 0s', sum(1-y))\n\nXnew = X\nynew = np.reshape(y, (-1, len(y)))\nynew = np.concatenate((ynew,1-ynew)).T\n\nprint(\"number of element in each class:\", sum(ynew))",
"# of 1s 1097\n# of 0s 114\nnumber of element in each class: [1097 114]\n"
],
[
"# 0 for maximum parallization\nparallelization_factor = 10\n\nlayer_sizes = [Xnew.shape[1], 2000, 1000, 500, 250, 10,2] \nprint('layer_sizes', layer_sizes)\n\nL = len(layer_sizes) - 1 # number of layers\n\nnum_epochs = 100 \nnum_examples = Xnew.shape[0]*6//10 \n\nlearning_rate = 0.005\n\nbatch_size = 60\n\nnum_iter = (num_examples//batch_size + 1) * num_epochs \n\ninputs = tf.placeholder(tf.float32, shape=(None, layer_sizes[0]), name= \"input\")\noutputs = tf.placeholder(tf.float32, name = \"output\")",
"layer_sizes [20530, 2000, 1000, 500, 250, 10, 2]\n"
],
[
"# training util functions\ndef bi(inits, size, name):\n with tf.name_scope(name):\n b = tf.Variable(inits * tf.ones([size]), name=\"B\")\n tf.summary.histogram(\"bias\", b)\n return b\n\ndef wi(shape, name):\n with tf.name_scope(name):\n w = tf.Variable(tf.random_normal(shape, name=\"W\")) / math.sqrt(shape[0])\n tf.summary.histogram(\"weight\", w)\n print(w)\n return w",
"_____no_output_____"
],
[
"#training params\nshapes = list(zip(list(layer_sizes)[:-1], list(layer_sizes[1:]))) # shapes of linear layers\nprint('shapes', shapes)\n\nweights = {'W': [wi(s, \"W\") for s in shapes], # Encoder weights\n 'V': [wi(s[::-1], \"V\") for s in shapes], # Decoder weights\n # batch normalization parameter to shift the normalized value\n 'beta': [bi(0.0, layer_sizes[l+1], \"beta\") for l in range(L)],\n # batch normalization parameter to scale the normalized value\n 'gamma': [bi(1.0, layer_sizes[l+1], \"beta\") for l in range(L)]}\n\nprint(weights['V'],shapes)\n\nnoise_std = 0.3 # scaling factor for noise used in corrupted encoder\n\n# hyperparameters that denote the importance of each layer\ndenoising_cost = [1000.0, 10.0, 0.10, 0.10, 0.10, 0.10, 0.10]",
"shapes [(20530, 2000), (2000, 1000), (1000, 500), (500, 250), (250, 10), (10, 2)]\nTensor(\"W_6/truediv:0\", shape=(20530, 2000), dtype=float32)\nTensor(\"W_7/truediv:0\", shape=(2000, 1000), dtype=float32)\nTensor(\"W_8/truediv:0\", shape=(1000, 500), dtype=float32)\nTensor(\"W_9/truediv:0\", shape=(500, 250), dtype=float32)\nTensor(\"W_10/truediv:0\", shape=(250, 10), dtype=float32)\nTensor(\"W_11/truediv:0\", shape=(10, 2), dtype=float32)\nTensor(\"V_6/truediv:0\", shape=(2000, 20530), dtype=float32)\nTensor(\"V_7/truediv:0\", shape=(1000, 2000), dtype=float32)\nTensor(\"V_8/truediv:0\", shape=(500, 1000), dtype=float32)\nTensor(\"V_9/truediv:0\", shape=(250, 500), dtype=float32)\nTensor(\"V_10/truediv:0\", shape=(10, 250), dtype=float32)\nTensor(\"V_11/truediv:0\", shape=(2, 10), dtype=float32)\n[<tf.Tensor 'V_6/truediv:0' shape=(2000, 20530) dtype=float32>, <tf.Tensor 'V_7/truediv:0' shape=(1000, 2000) dtype=float32>, <tf.Tensor 'V_8/truediv:0' shape=(500, 1000) dtype=float32>, <tf.Tensor 'V_9/truediv:0' shape=(250, 500) dtype=float32>, <tf.Tensor 'V_10/truediv:0' shape=(10, 250) dtype=float32>, <tf.Tensor 'V_11/truediv:0' shape=(2, 10) dtype=float32>] [(20530, 2000), (2000, 1000), (1000, 500), (500, 250), (250, 10), (10, 2)]\n"
],
[
"#training params and placeholders\ntraining = tf.placeholder(tf.bool)\n\newma = tf.train.ExponentialMovingAverage(decay=0.99) # to calculate the moving averages of mean and variance\nbn_assigns = [] # this list stores the updates to be made to average mean and variance\n\n\ndef batch_normalization(batch, mean=None, var=None):\n if mean is None or var is None:\n mean, var = tf.nn.moments(batch, axes=[0])\n print(\"batch.shape\", batch.shape)\n return (batch - mean) / tf.sqrt(var + tf.constant(1e-10))\n\n# average mean and variance of all layers\nrunning_mean = [tf.Variable(tf.constant(0.0, shape=[l]), trainable=False) for l in layer_sizes[1:]]\nrunning_var = [tf.Variable(tf.constant(1.0, shape=[l]), trainable=False) for l in layer_sizes[1:]]\n\ndef update_batch_normalization(batch, l):\n \"batch normalize + update average mean and variance of layer l\"\n mean, var = tf.nn.moments(batch, axes=[0])\n assign_mean = running_mean[l-1].assign(mean)\n assign_var = running_var[l-1].assign(var)\n bn_assigns.append(ewma.apply([running_mean[l-1], running_var[l-1]]))\n with tf.control_dependencies([assign_mean, assign_var]):\n return (batch - mean) / tf.sqrt(var + 1e-10)",
"_____no_output_____"
],
[
"#encoder\ndef encoder(inputs, noise_std):\n h = inputs + tf.random_normal(tf.shape(inputs)) * noise_std # add noise to input\n d = {} # to store the pre-activation, activation, mean and variance for each layer\n # The data for labeled and unlabeled examples are stored separately\n d['labeled'] = {'z': {}, 'm': {}, 'v': {}, 'h': {}}\n d['unlabeled'] = {'z': {}, 'm': {}, 'v': {}, 'h': {}}\n d['labeled']['z'][0], d['unlabeled']['z'][0] = split_lu(h)\n for l in range(1, L+1):\n print (\"Layer \", l, \": \", layer_sizes[l-1], \" -> \", layer_sizes[l])\n d['labeled']['h'][l-1], d['unlabeled']['h'][l-1] = split_lu(h)\n z_pre = tf.matmul(h, weights['W'][l-1]) # pre-activation\n z_pre_l, z_pre_u = split_lu(z_pre) # split labeled and unlabeled examples\n\n m, v = tf.nn.moments(z_pre_u, axes=[0])\n\n # if training:\n def training_batch_norm():\n # Training batch normalization\n # batch normalization for labeled and unlabeled examples is performed separately\n if noise_std > 0:\n # Corrupted encoder\n # batch normalization + noise\n z = join(batch_normalization(z_pre_l), batch_normalization(z_pre_u, m, v))\n z += tf.random_normal(tf.shape(z_pre)) * noise_std\n else:\n # Clean encoder\n # batch normalization + update the average mean and variance using batch mean and variance of labeled examples\n z = join(update_batch_normalization(z_pre_l, l), batch_normalization(z_pre_u, m, v))\n return z\n\n # else:\n def eval_batch_norm():\n # Evaluation batch normalization\n # obtain average mean and variance and use it to normalize the batch\n mean = ewma.average(running_mean[l-1])\n var = ewma.average(running_var[l-1])\n z = batch_normalization(z_pre, mean, var)\n # Instead of the above statement, the use of the following 2 statements containing a typo\n # consistently produces a 0.2% higher accuracy for unclear reasons.\n return z\n\n # perform batch normalization according to value of boolean \"training\" placeholder:\n z = tf.cond(training, training_batch_norm, eval_batch_norm)\n\n if l == L:\n # use softmax activation in output layer\n h = tf.nn.softmax(weights['gamma'][l-1] * (z + weights[\"beta\"][l-1]))\n else:\n # use ReLU activation in hidden layers\n h = tf.nn.relu(z + weights[\"beta\"][l-1])\n d['labeled']['z'][l], d['unlabeled']['z'][l] = split_lu(z)\n d['unlabeled']['m'][l], d['unlabeled']['v'][l] = m, v # save mean and variance of unlabeled examples for decoding\n d['labeled']['h'][l], d['unlabeled']['h'][l] = split_lu(h)\n return h, d\nprint (\"=== Corrupted Encoder ===\")\ny_c, corr = encoder(inputs, noise_std)\n\nprint (\"=== Clean Encoder ===\")\ny, clean = encoder(inputs, 0.0) # 0.0 -> do not add noise\n\nprint (\"=== Decoder ===\")",
"=== Corrupted Encoder ===\nLayer 1 : 20530 -> 2000\nbatch.shape (60, 2000)\nbatch.shape (?, 2000)\nbatch.shape (?, 2000)\nLayer 2 : 2000 -> 1000\nbatch.shape (60, 1000)\nbatch.shape (?, 1000)\nbatch.shape (?, 1000)\nLayer 3 : 1000 -> 500\nbatch.shape (60, 500)\nbatch.shape (?, 500)\nbatch.shape (?, 500)\nLayer 4 : 500 -> 250\nbatch.shape (60, 250)\nbatch.shape (?, 250)\nbatch.shape (?, 250)\nLayer 5 : 250 -> 10\nbatch.shape (60, 10)\nbatch.shape (?, 10)\nbatch.shape (?, 10)\nLayer 6 : 10 -> 2\nbatch.shape (60, 2)\nbatch.shape (?, 2)\nbatch.shape (?, 2)\n=== Clean Encoder ===\nLayer 1 : 20530 -> 2000\nbatch.shape (?, 2000)\nbatch.shape (?, 2000)\nLayer 2 : 2000 -> 1000\nbatch.shape (?, 1000)\nbatch.shape (?, 1000)\nLayer 3 : 1000 -> 500\nbatch.shape (?, 500)\nbatch.shape (?, 500)\nLayer 4 : 500 -> 250\nbatch.shape (?, 250)\nbatch.shape (?, 250)\nLayer 5 : 250 -> 10\nbatch.shape (?, 10)\nbatch.shape (?, 10)\nLayer 6 : 10 -> 2\nbatch.shape (?, 2)\nbatch.shape (?, 2)\n=== Decoder ===\n"
],
[
"def g_gauss(z_c, u, size):\n \"gaussian denoising function proposed in the original paper\"\n wi = lambda inits, name: tf.Variable(inits * tf.ones([size]), name=name)\n a1 = wi(0., 'a1')\n a2 = wi(1., 'a2')\n a3 = wi(0., 'a3')\n a4 = wi(0., 'a4')\n a5 = wi(0., 'a5')\n\n a6 = wi(0., 'a6')\n a7 = wi(1., 'a7')\n a8 = wi(0., 'a8')\n a9 = wi(0., 'a9')\n a10 = wi(0., 'a10')\n\n mu = a1 * tf.sigmoid(a2 * u + a3) + a4 * u + a5\n v = a6 * tf.sigmoid(a7 * u + a8) + a9 * u + a10\n\n z_est = (z_c - mu) * v + mu\n return z_est",
"_____no_output_____"
],
[
"# Decoder\nz_est = {}\nd_cost = [] # to store the denoising cost of all layers\nfor l in range(L, -1, -1):\n print (\"Layer \", l, \": \", layer_sizes[l+1] if l+1 < len(layer_sizes) else None, \" -> \", layer_sizes[l], \", denoising cost: \", denoising_cost[l])\n z, z_c = clean['unlabeled']['z'][l], corr['unlabeled']['z'][l]\n m, v = clean['unlabeled']['m'].get(l, 0), clean['unlabeled']['v'].get(l, 1-1e-10)\n if l == L:\n u = unlabeled(y_c)\n else:\n u = tf.matmul(z_est[l+1], weights['V'][l])\n u = batch_normalization(u)\n z_est[l] = g_gauss(z_c, u, layer_sizes[l])\n z_est_bn = (z_est[l] - m) / v\n # append the cost of this layer to d_cost\n d_cost.append((tf.reduce_mean(tf.reduce_sum(tf.square(z_est_bn - z), 1)) / layer_sizes[l]) * denoising_cost[l])\n\n# calculate total unsupervised cost by adding the denoising cost of all layers\nu_cost = tf.add_n(d_cost)\n\ny_N = labeled(y_c)\ncost = -tf.reduce_mean(tf.reduce_sum(outputs*tf.log(y_N), 1)) # supervised cost\nloss = cost + u_cost # total cost\n\npred_cost = -tf.reduce_mean(tf.reduce_sum(outputs*tf.log(y), 1)) # cost used for prediction\nwith tf.name_scope(\"accuracy\"):\n correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(outputs, 1)) # no of correct predictions\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\")) * tf.constant(100.0)\n tf.summary.scalar(\"accuracy\", accuracy)\n\n#learning_rate = tf.Variable(starter_learning_rate, trainable=False)\nwith tf.name_scope(\"train\"):\n train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss)\n \n\n\n# add the updates of batch normalization statistics to train_step\nbn_updates = tf.group(*bn_assigns)\nwith tf.control_dependencies([train_step]):\n train_step = tf.group(bn_updates)",
"Layer 6 : None -> 2 , denoising cost: 0.1\nbatch.shape (?, 2)\nLayer 5 : 2 -> 10 , denoising cost: 0.1\nbatch.shape (?, 10)\nLayer 4 : 10 -> 250 , denoising cost: 0.1\nbatch.shape (?, 250)\nLayer 3 : 250 -> 500 , denoising cost: 0.1\nbatch.shape (?, 500)\nLayer 2 : 500 -> 1000 , denoising cost: 0.1\nbatch.shape (?, 1000)\nLayer 1 : 1000 -> 2000 , denoising cost: 10.0\nbatch.shape (?, 2000)\nLayer 0 : 2000 -> 20530 , denoising cost: 1000.0\nbatch.shape (?, 20530)\n"
],
[
"def get_accuracies(epoch, sess, datasets) :\n\n train_acc = sess.run(accuracy, feed_dict={inputs: datasets.train.unlabeled_ds.dataset, outputs: datasets.train.unlabeled_ds.labels, training: False})\n validation_acc = sess.run(accuracy, feed_dict={inputs: datasets.validation.dataset, outputs: datasets.validation.labels, training: False})\n\n \n print(epoch, \"=>\", \" train: \", train_acc, \" validation: \", validation_acc)\n return train_acc, validation_acc\n\nsess = _\n\ndef run_model(datasets, fold_count = 0):\n global sess\n expression_dataset = datasets\n\n saver = tf.train.Saver(write_version=tf.train.SaverDef.V1)\n\n sess = tf.Session(config=\n tf.ConfigProto(inter_op_parallelism_threads=parallelization_factor,\n intra_op_parallelism_threads=parallelization_factor))\n \n i_iter = 0\n\n init = tf.global_variables_initializer()\n sess.run(init)\n\n acc_count = 0\n \n\n _, pre_acc = get_accuracies(\"Initial\", sess, expression_dataset)\n\n\n for i in (range(i_iter, num_iter)):\n\n dataset, labels = expression_dataset.train.next_batch(batch_size)\n\n sess.run(train_step, feed_dict={inputs: dataset, outputs: labels, training: True})\n\n\n if (i > 1) and ((i+1) % (num_iter//num_epochs) == 0):\n epoch_n = i//(num_examples//batch_size)\n \n _, curr_acc = get_accuracies(\"Epoch(\" + str(epoch_n) + \")\", sess, expression_dataset)\n \n if curr_acc <= pre_acc*1.0001 and curr_acc/pre_acc > 0.95 :\n acc_count += 1\n else :\n acc_count = 0\n pre_acc = curr_acc\n patience = 20\n \n\n if acc_count > patience:\n print(\"Early stop!!!!!\", acc_count, epoch_n)\n break\n\n y_p = tf.argmax(y, 1)\n y_pred = sess.run(y_p, feed_dict={inputs: expression_dataset.test.dataset, training: False})\n \n\n y_true = np.argmax(expression_dataset.test.labels,1)\n print (\"Precision\", precision_score(y_true, y_pred))\n print (\"Recall\", recall_score(y_true, y_pred))\n print (\"f1_score\", f1_score(y_true, y_pred))\n print (\"confusion_matrix\")\n print (confusion_matrix(y_true, y_pred))\n# with open(file, \"a\") as text_file:\n# text_file.write(\"%s\\t%s\\t%s\\t%s\\t%s\\t%s\\n\" % (str(fold_count), \n# accuracy_score(y_true, y_pred), \n# f1_score(y_true, y_pred), \n# precision_score(y_true, y_pred), \n# recall_score(y_true, y_pred),\n# confusion_matrix(y_true, y_pred).tolist()))\n sess.close()\n return y_true, y_pred",
"_____no_output_____"
],
[
"print (\"=== Loading Data ===\")\nclass DataSets(object):\n pass\ndata_sets = DataSets()\n\n\nskf = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=123)\n\n# with open(file, \"w\") as text_file:\n# text_file.write(\"%s\\t%s\\t%s\\t%s\\t%s\\t%s\\n\" % ('fold','accuracy', 'f1-score', 'precision', 'recall', 'conf_m'))\n \nall_y_true = np.array([]).astype(int)\nall_y_pred = np.array([]).astype(int)\n\nfold_count = 0\n\nfor train_valid_index, test_index in skf.split(Xnew, ynew[:,0]):\n X_train_valid, X_test = Xnew[train_valid_index], Xnew[test_index]\n y_train_valid, y_test = ynew[train_valid_index], ynew[test_index]\n \n \n scaler = MinMaxScaler()\n scaler.fit(X_train_valid)\n X_train_valid = scaler.transform(X_train_valid)\n X_test = scaler.transform(X_test)\n\n\n \n X_train, X_valid, y_train, y_valid= train_test_split(X_train_valid, y_train_valid, test_size=0.25, stratify=y_train_valid[:,0])\n \n\n data_sets.train = SemiDataSet(X_train,y_train , 60)\n\n data_sets.validation = DataSet(X_valid,y_valid)\n data_sets.test = DataSet(X_test,y_test)\n \n y_true, y_pred = run_model(data_sets, fold_count)\n \n\n all_y_true = np.append(all_y_true,y_true)\n all_y_pred = np.append(all_y_pred,y_pred)\n\n \n fold_count = fold_count + 1\n\nprint (confusion_matrix(all_y_true, all_y_pred))\n# with open(file, \"a\") as text_file:\n# text_file.write(\"%s\\t%s\\t%s\\t%s\\t%s\\t%s\\n\" % ('ALL', \n# accuracy_score(all_y_true, all_y_pred), \n# f1_score(all_y_true, all_y_pred), \n# precision_score(all_y_true, all_y_pred), \n# recall_score(all_y_true, all_y_pred),\n# confusion_matrix(all_y_true, all_y_pred).tolist())\n# )",
"=== Loading Data ===\nInitial => train: 9.779614 validation: 10.743801\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9b127cf16132cba2d7ced881b013b63ff0b3bf | 25,247 | ipynb | Jupyter Notebook | docs/runandget.ipynb | pyenergyplus/witheppy | ea9d21976fc018261aa5f8464125df4bf866171a | [
"MIT"
] | 8 | 2018-12-12T23:00:44.000Z | 2021-12-12T05:41:45.000Z | docs/runandget.ipynb | pyenergyplus/witheppy | ea9d21976fc018261aa5f8464125df4bf866171a | [
"MIT"
] | 27 | 2018-10-18T10:31:27.000Z | 2021-12-15T05:56:21.000Z | docs/runandget.ipynb | pyenergyplus/witheppy | ea9d21976fc018261aa5f8464125df4bf866171a | [
"MIT"
] | 2 | 2018-10-15T15:36:02.000Z | 2020-12-30T00:17:02.000Z | 33.662667 | 614 | 0.527508 | [
[
[
"# **Experimental**: Run and get results\n\n## Get results from an anonymous simulation : `anon_runandget`",
"_____no_output_____"
],
[
"Sometimes you want to run a simulation on an idf and get a particular result. There is no single function in `eppy` which can do that. In this experimental section we are exploring functions that will achieve this objectives. ",
"_____no_output_____"
],
[
"So what does this functionality look like:\n\n(Let do some housekeeping first to run this notebook)",
"_____no_output_____"
]
],
[
[
"# the lines in this block are needed to run the code in this notebook\n# you don't need them if you have eppy installed\nimport sys\n# pathnameto_eppy = 'c:/eppy'\npathnameto_witheppy = '../'\nsys.path.append(pathnameto_witheppy) ",
"_____no_output_____"
]
],
[
[
"Open the `idf` file (Your path names may be different on your machine)",
"_____no_output_____"
]
],
[
[
"fname = \"/Applications/EnergyPlus-9-3-0/ExampleFiles/1ZoneEvapCooler.idf\"\nwfile = \"/Applications/EnergyPlus-9-3-0/WeatherData/USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw\"\n\nimport eppy\nimport witheppy.runandget as runandget\nimport pprint\npp = pprint.PrettyPrinter()\n\nidf = eppy.openidf(fname, epw=wfile) # this is an easy way to open the idf\n # if you have trouble here, go back to \n # the tutorial and see the longer manual \n # way to open the file\n",
"_____no_output_____"
]
],
[
[
"Lets say you want to run a simulation and get just the `*.end` file. In reality we rarely get that file. But it is a small file, so it is easy to demonstrate using that file. \n\nThe `getdict` dictionary defines what you want to extract from the results. Right now we want to extract the entire `*.end` file.",
"_____no_output_____"
]
],
[
[
"getdict = dict(\n end_file=dict(whichfile=\"end\", entirefile=True),\n)\n\n# run and get the result. anon_runandget() will run the \n# simulation in a temprary file that will be deleted afer you get the results\nfullresult = runandget.anon_runandget(idf, getdict)\npp.pprint(fullresult) ",
"\n/Applications/EnergyPlus-9-3-0/energyplus --weather /Applications/EnergyPlus-9-3-0/WeatherData/USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw --output-directory /var/folders/sm/202yyhk50_s9p3s_4g2kqxvm0000gn/T/tmpjdaqrgog --idd /Applications/EnergyPlus-9-3-0/Energy+.idd --readvars /Users/santoshphilip/Documents/coolshadow/github/witheppy/docs/in.idf\n\n{'end_file': {'entirefile': True,\n 'result': 'EnergyPlus Completed Successfully-- 4 Warning; 0 '\n 'Severe Errors; Elapsed Time=00hr 00min 4.40sec\\n',\n 'whichfile': 'end'}}\n"
]
],
[
[
"We want only the results. So",
"_____no_output_____"
]
],
[
[
"print(fullresult['end_file']['result'])",
"EnergyPlus Completed Successfully-- 4 Warning; 0 Severe Errors; Elapsed Time=00hr 00min 4.40sec\n\n"
]
],
[
[
"This is great. But what if I want only part of a result file. Lets say I want I want the first table from the `html table file`. And I want it in a list format with rows and columns. Aha! Let us try this",
"_____no_output_____"
]
],
[
[
"getdict = dict(\n HTML_file=dict(whichfile=\"htm\", tableindex=0, table=True),\n)\n# run and get the result. anon_runandget() will run the \n# simulation in a temprary file that will be deleted afer you get the results\nfullresult = runandget.anon_runandget(idf, getdict)\npp.pprint(fullresult) ",
"\n/Applications/EnergyPlus-9-3-0/energyplus --weather /Applications/EnergyPlus-9-3-0/WeatherData/USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw --output-directory /var/folders/sm/202yyhk50_s9p3s_4g2kqxvm0000gn/T/tmpu42rhn6f --idd /Applications/EnergyPlus-9-3-0/Energy+.idd --readvars /Users/santoshphilip/Documents/coolshadow/github/witheppy/docs/in.idf\n\n{'HTML_file': {'entirefile': None,\n 'result': ['Site and Source Energy',\n [['',\n 'Total Energy [GJ]',\n 'Energy Per Total Building Area [MJ/m2]',\n 'Energy Per Conditioned Building Area [MJ/m2]'],\n ['Total Site Energy', 18.06, 77.76, 77.76],\n ['Net Site Energy', 18.06, 77.76, 77.76],\n ['Total Source Energy', 57.2, 246.26, 246.26],\n ['Net Source Energy', 57.2, 246.26, 246.26]]],\n 'table': True,\n 'tableindex': 0,\n 'whichfile': 'htm'}}\n"
]
],
[
[
"Sweet !! Let us print just the result.",
"_____no_output_____"
]
],
[
[
"pp.pprint(fullresult['HTML_file']['result'])",
"['Site and Source Energy',\n [['',\n 'Total Energy [GJ]',\n 'Energy Per Total Building Area [MJ/m2]',\n 'Energy Per Conditioned Building Area [MJ/m2]'],\n ['Total Site Energy', 18.06, 77.76, 77.76],\n ['Net Site Energy', 18.06, 77.76, 77.76],\n ['Total Source Energy', 57.2, 246.26, 246.26],\n ['Net Source Energy', 57.2, 246.26, 246.26]]]\n"
]
],
[
[
"Ha ! this is fun. What if I want last column, but just the last two values, but lets use the table name instead of the table index",
"_____no_output_____"
]
],
[
[
"getdict = dict(\n twocells=dict(\n whichfile=\"htm\",\n # tableindex=0, # or tablename\n tablename=\"Site and Source Energy\", # tableindex takes priority if both given\n cells=[[-2, -1], [-2, -1]], # will return 2 cells\n )\n)\n# run and get the result. anon_runandget() will run the \n# simulation in a temprary file that will be deleted afer you get the results\nfullresult = runandget.anon_runandget(idf, getdict)\npp.pprint(fullresult)",
"\n/Applications/EnergyPlus-9-3-0/energyplus --weather /Applications/EnergyPlus-9-3-0/WeatherData/USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw --output-directory /var/folders/sm/202yyhk50_s9p3s_4g2kqxvm0000gn/T/tmpoqvq1f83 --idd /Applications/EnergyPlus-9-3-0/Energy+.idd --readvars /Users/santoshphilip/Documents/coolshadow/github/witheppy/docs/in.idf\n\n{'twocells': {'cells': [[-2, -1], [-2, -1]],\n 'entirefile': None,\n 'result': ['Site and Source Energy', [246.26, 246.26]],\n 'tablename': 'Site and Source Energy',\n 'whichfile': 'htm'}}\n"
]
],
[
[
"What if I want the contents of `*.end` **AND** the first table from the html table file. Here you go.",
"_____no_output_____"
]
],
[
[
"getdict = dict(\n HTML_file=dict(whichfile=\"htm\", tableindex=0, table=True),\n end_file=dict(whichfile=\"end\", entirefile=True),\n)\nfullresult = runandget.anon_runandget(idf, getdict)\npp.pprint(fullresult)",
"\n/Applications/EnergyPlus-9-3-0/energyplus --weather /Applications/EnergyPlus-9-3-0/WeatherData/USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw --output-directory /var/folders/sm/202yyhk50_s9p3s_4g2kqxvm0000gn/T/tmp4wrmt7w4 --idd /Applications/EnergyPlus-9-3-0/Energy+.idd --readvars /Users/santoshphilip/Documents/coolshadow/github/witheppy/docs/in.idf\n\n{'HTML_file': {'entirefile': None,\n 'result': ['Site and Source Energy',\n [['',\n 'Total Energy [GJ]',\n 'Energy Per Total Building Area [MJ/m2]',\n 'Energy Per Conditioned Building Area [MJ/m2]'],\n ['Total Site Energy', 18.06, 77.76, 77.76],\n ['Net Site Energy', 18.06, 77.76, 77.76],\n ['Total Source Energy', 57.2, 246.26, 246.26],\n ['Net Source Energy', 57.2, 246.26, 246.26]]],\n 'table': True,\n 'tableindex': 0,\n 'whichfile': 'htm'},\n 'end_file': {'entirefile': True,\n 'result': 'EnergyPlus Completed Successfully-- 4 Warning; 0 '\n 'Severe Errors; Elapsed Time=00hr 00min 3.40sec\\n',\n 'whichfile': 'end'}}\n"
]
],
[
[
"## What can `getdict` get ?",
"_____no_output_____"
],
[
"Let us look at some examples. For instance you can any file generated by the simulation. So far we got the `end` file and the `htm` file. What do we call them\n\nThey are known by these names.",
"_____no_output_____"
]
],
[
[
"pp.pprint(runandget.resulttypes)",
"['audit',\n 'bnd',\n 'dxf',\n 'eio',\n 'end',\n 'err',\n 'eso',\n 'mdd',\n 'mtd',\n 'mtr',\n 'rdd',\n 'shd',\n 'htm',\n 'tab',\n 'sqlerr',\n 'csv',\n 'mcsv',\n 'expidf',\n 'sql',\n 'rvaudit']\n"
]
],
[
[
"Most of them are the actual file extendions. There are two `csv` files. so we get them by `csv` and`mcsv`, where `mcsv` will get you the `meter.csv` file and `csv` will get you the regular csv file. Some examples of getting the entire file.\n\nWhenever possible `entirefile=True` will return a text file. In the case of `htm` and `sql`, the file will be read in binary mode.",
"_____no_output_____"
]
],
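[
[
"# A minimal sketch (not part of witheppy) of saving an \"entirefile\" result\n# back to disk. It assumes the result dict shape shown above, where `htm`\n# and `sql` results come back as bytes and the other files as text.\ndef save_whole_result(item, outpath):\n    data = item[\"result\"]\n    mode = \"wb\" if isinstance(data, bytes) else \"w\"\n    with open(outpath, mode) as f:\n        f.write(data)\n\n# hypothetical usage: save_whole_result(fullresult[\"end_file\"], \"run.end\")",
"_____no_output_____"
]
],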
[
[
"getdict = dict(end_file=dict(whichfile=\"end\", entirefile=True))\ngetdict = dict(HTML_file=dict(whichfile=\"htm\", entirefile=True))\ngetdict = dict(csv_file=dict(whichfile=\"csv\", entirefile=True))\ngetdict = dict(eio_file=dict(whichfile=\"eio\", entirefile=True))\ngetdict = dict(sql_file=dict(whichfile=\"sql\", entirefile=True)) ",
"_____no_output_____"
]
],
[
[
"OK ! That gets you the whole file. \n\nWhat are the varaitions in getting a partial data from a file. Below is a list of all of them. Let's start with the html file",
"_____no_output_____"
]
],
[
[
"# get all tables -> carefull. This one is slow\ngetdict = dict(\n resultname=dict(\n whichfile=\"htm\",\n as_tables=True,\n )\n)\n\n# get some rows in a html table\ngetdict = dict(\n resultname=dict(\n whichfile=\"htm\",\n tableindex=1, # or tablename\n # tablename=\"Site and Source Energy\", # tableindex takes priority if both given\n rows=[0, -1, 1], # will return 3 rows as indexed\n )\n)\n\n# get some columns in a html table\ngetdict = dict(\n resultname=dict(\n whichfile=\"htm\",\n tableindex=1, # or tablename\n # tablename=\"Site to Source Energy Conversion Factors\", # tableindex takes priority if both given\n cols=[0, 1, -1], # will return 3 columns as indexed\n )\n)\n\n# get some cells in a html table\ngetdict = dict(\n resultname=dict(\n whichfile=\"htm\",\n tableindex=1, # or tablename\n tablename=\"Site and Source Energy\", # tableindex takes priority if both given\n cells=[[0, -1], [1, -1], [-1, -1]], # will return 3 cells\n ))",
"_____no_output_____"
]
],
[
[
"Now let us look at the csv files (this would be `resulttypes in ['csv', 'mcsv']`)",
"_____no_output_____"
]
],
[
[
"# get csv file cols\ngetdict = dict(\n resultname=dict(\n whichfile=\"csv\",\n cols=[1, \"Date/Time\"], # you can give the index of the column or \n # the heading of the column\n )\n)",
"_____no_output_____"
]
],
[
[
"That's it ! There are no functions to read and do a partial extract of the other file types.\n\nMaybe the API should be changed so that you could send it a custom function and extract whatever you want. Worth a try in the future API",
"_____no_output_____"
],
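[
"# A hypothetical sketch of the \"custom function\" idea mentioned above.\n# Nothing here is part of witheppy today; it only illustrates what a\n# getdict entry might look like if it accepted a callable.\ndef last_line(text):\n    # custom extractor: return the last line of a result file's contents\n    return text.rstrip().splitlines()[-1]\n\n# imagined (not real) getdict entry:\n# getdict = dict(err_tail=dict(whichfile=\"err\", custom=last_line))",
"_____no_output_____"
],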
[
"## I want to keep my result files - `runandget()`\n\nWhat if I don't want to run my simulation anonymously. I want to keep the result files. Then you can use the `runandget()` function. Here is an example.",
"_____no_output_____"
]
],
[
[
"getdict = dict(\n HTML_file=dict(whichfile=\"htm\", tableindex=0, table=True),\n end_file=dict(whichfile=\"end\", entirefile=True),\n)\nrunoptions = dict(\n output_suffix=\"D\", output_prefix=\"Yousa\", output_directory=\"./deletethislater\", \n readvars=True\n)\nfullresult = runandget.runandget(idf, runoptions, getdict)\npp.pprint(fullresult)",
"\n/Applications/EnergyPlus-9-3-0/energyplus --weather /Applications/EnergyPlus-9-3-0/WeatherData/USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw --output-directory /Users/santoshphilip/Documents/coolshadow/github/witheppy/docs/deletethislater --idd /Applications/EnergyPlus-9-3-0/Energy+.idd --readvars --output-prefix Yousa --output-suffix D /Users/santoshphilip/Documents/coolshadow/github/witheppy/docs/in.idf\n\n{'HTML_file': {'entirefile': None,\n 'result': ['Site and Source Energy',\n [['',\n 'Total Energy [GJ]',\n 'Energy Per Total Building Area [MJ/m2]',\n 'Energy Per Conditioned Building Area [MJ/m2]'],\n ['Total Site Energy', 18.06, 77.76, 77.76],\n ['Net Site Energy', 18.06, 77.76, 77.76],\n ['Total Source Energy', 57.2, 246.26, 246.26],\n ['Net Source Energy', 57.2, 246.26, 246.26]]],\n 'table': True,\n 'tableindex': 0,\n 'whichfile': 'htm'},\n 'end_file': {'entirefile': True,\n 'result': 'EnergyPlus Completed Successfully-- 4 Warning; 0 '\n 'Severe Errors; Elapsed Time=00hr 00min 3.72sec\\n',\n 'whichfile': 'end'}}\n"
]
],
[
[
"The files are now in the `./deletethisfolder`. Let us check",
"_____no_output_____"
]
],
[
[
"import os\nos.listdir(runoptions['output_directory'])",
"_____no_output_____"
]
],
[
[
"Let us clean up and remove these files",
"_____no_output_____"
]
],
[
[
"for whichfile in runandget.resulttypes:\n fname = runandget.options2filename(whichfile, runoptions)\n try:\n os.remove(fname)\n except FileNotFoundError as e:\n pass\nos.rmdir(runoptions['output_directory'])",
"_____no_output_____"
]
],
[
[
"## extract resutls without running the simulation - `getrun()`\n\nWhat if the simulation takes a long time. I have already done the simulation. I know where the files are. I just want to extract the results. How do I do that ?\n\nLets just do a plain vanilla `idf.run()`",
"_____no_output_____"
]
],
[
[
"runoptions = dict(\n output_suffix=\"D\", output_prefix=\"Yousa\", output_directory=\"./deletethislater\", \n readvars=True\n)\n\nresult = idf.run(**runoptions)\nprint(result)",
"\n/Applications/EnergyPlus-9-3-0/energyplus --weather /Applications/EnergyPlus-9-3-0/WeatherData/USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw --output-directory /Users/santoshphilip/Documents/coolshadow/github/witheppy/docs/deletethislater --idd /Applications/EnergyPlus-9-3-0/Energy+.idd --readvars --output-prefix Yousa --output-suffix D /Users/santoshphilip/Documents/coolshadow/github/witheppy/docs/in.idf\n\nNone\n"
]
],
[
[
"I know the files are in the folder `./deletethislater`. I want to extract from the `*.end` file and the `*.htm` file",
"_____no_output_____"
]
],
[
[
"getdict = dict(\n HTML_file=dict(whichfile=\"htm\", tableindex=0, table=True),\n end_file=dict(whichfile=\"end\", entirefile=True),\n)\n\n\nfullresult = runandget.getrun(runoptions, getdict)\npp.pprint(fullresult)",
"{'HTML_file': {'entirefile': None,\n 'result': ['Site and Source Energy',\n [['',\n 'Total Energy [GJ]',\n 'Energy Per Total Building Area [MJ/m2]',\n 'Energy Per Conditioned Building Area [MJ/m2]'],\n ['Total Site Energy', 18.06, 77.76, 77.76],\n ['Net Site Energy', 18.06, 77.76, 77.76],\n ['Total Source Energy', 57.2, 246.26, 246.26],\n ['Net Source Energy', 57.2, 246.26, 246.26]]],\n 'table': True,\n 'tableindex': 0,\n 'whichfile': 'htm'},\n 'end_file': {'entirefile': True,\n 'result': 'EnergyPlus Completed Successfully-- 4 Warning; 0 '\n 'Severe Errors; Elapsed Time=00hr 00min 3.70sec\\n',\n 'whichfile': 'end'}}\n"
]
],
[
[
"as easy as that. Now let us clean up the result directory again",
"_____no_output_____"
]
],
[
[
"for whichfile in runandget.resulttypes:\n fname = runandget.options2filename(whichfile, runoptions)\n try:\n os.remove(fname)\n except FileNotFoundError as e:\n pass\nos.rmdir(runoptions['output_directory'])",
"_____no_output_____"
]
],
[
[
"## What else ?\n\nWell ... We can return the files in `json` format. We can also compress the `json` files::\n\n runandget.anon_runandget(idf, getdict, json_it=True) \n runandget.anon_runandget(idf, getdict, json_it=True, compress_it=True) ",
"_____no_output_____"
],
[
"## Limitations\n\n- No error checking\n - for missing file\n - for missing table\n - for missing column or row\n- Json format will work for binary files like `*.sql` or `*.htm`\n\nIt will just crash and burn :-)",
"_____no_output_____"
],
[
"## Motivation for this functionality\n\nThe primary motivation is to develop functions that can be used for distributed simulation. EnergyPlus generates a large volume of result files. You may be interested in only some data in these results, maybe even just one number - like the total energy use. In distributed simulation, some of your simulation nodes may be scattered over the internet and transfer of large files can be expensive and time consuming. So it would be useful to have a function in eppy that would run the simulation and return some specific results. the rest of the result files may be in a temporary location will be deleted.\n\nOf cousrse to do a remote simulation, one has to send the `idf` to the node. Next step would be to make this function\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
ec9b1cd5860d0f63b617c13cfd2c5602da336f8b | 50,622 | ipynb | Jupyter Notebook | 20/REINFORCE.ipynb | bomishot/bomishot.github.io | 32b3f3a8da8d1eae66aa76787dc4fcca8e425d37 | [
"MIT"
] | null | null | null | 20/REINFORCE.ipynb | bomishot/bomishot.github.io | 32b3f3a8da8d1eae66aa76787dc4fcca8e425d37 | [
"MIT"
] | null | null | null | 20/REINFORCE.ipynb | bomishot/bomishot.github.io | 32b3f3a8da8d1eae66aa76787dc4fcca8e425d37 | [
"MIT"
] | null | null | null | 152.018018 | 28,254 | 0.809865 | [
[
[
"<a href=\"https://colab.research.google.com/github/bomishot/bomishot.github.io/blob/master/20/REINFORCE.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# REINFORCE\n\n---\n\nIn this notebook, we will train REINFORCE with OpenAI Gym's Cartpole environment.",
"_____no_output_____"
],
[
"### 1. Import the Necessary Packages",
"_____no_output_____"
]
],
[
[
"import gym\ngym.logger.set_level(40) # suppress warnings (please remove if gives error)\nimport numpy as np\nfrom collections import deque\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport torch\ntorch.manual_seed(0) # set random seed\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torch.distributions import Categorical",
"_____no_output_____"
]
],
[
[
"### 2. Define the Architecture of the Policy",
"_____no_output_____"
]
],
[
[
"env = gym.make('CartPole-v0')\nenv.seed(0)\nprint('observation space:', env.observation_space)\nprint('action space:', env.action_space)\n\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n\nclass Policy(nn.Module):\n def __init__(self, s_size=4, h_size=16, a_size=2):\n super(Policy, self).__init__()\n self.fc1 = nn.Linear(s_size, h_size)\n self.fc2 = nn.Linear(h_size, a_size)\n\n def forward(self, x):\n x = F.relu(self.fc1(x))\n x = self.fc2(x)\n return F.softmax(x, dim=1)\n \n def act(self, state):\n state = torch.from_numpy(state).float().unsqueeze(0).to(device)\n probs = self.forward(state).cpu()\n m = Categorical(probs)\n action = m.sample()\n return action.item(), m.log_prob(action)",
"observation space: Box(-3.4028234663852886e+38, 3.4028234663852886e+38, (4,), float32)\naction space: Discrete(2)\n"
]
],
[
[
"### 3. Train the Agent with REINFORCE",
"_____no_output_____"
]
],
[
[
"policy = Policy().to(device)\noptimizer = optim.Adam(policy.parameters(), lr=1e-2)\n\ndef reinforce(n_episodes=1000, max_t=1000, gamma=1.0, print_every=100):\n scores_deque = deque(maxlen=100)\n scores = []\n for i_episode in range(1, n_episodes+1):\n saved_log_probs = []\n rewards = []\n state = env.reset()\n for t in range(max_t):\n action, log_prob = policy.act(state)\n saved_log_probs.append(log_prob)\n state, reward, done, _ = env.step(action)\n rewards.append(reward)\n if done:\n break \n scores_deque.append(sum(rewards))\n scores.append(sum(rewards))\n \n discounts = [gamma**i for i in range(len(rewards)+1)]\n R = sum([a*b for a,b in zip(discounts, rewards)])\n \n policy_loss = []\n for log_prob in saved_log_probs:\n policy_loss.append(-log_prob * R)\n policy_loss = torch.cat(policy_loss).sum()\n \n optimizer.zero_grad()\n policy_loss.backward()\n optimizer.step()\n \n if i_episode % print_every == 0:\n print('Episode {}\\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))\n if np.mean(scores_deque)>=195.0:\n print('Environment solved in {:d} episodes!\\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_deque)))\n break\n \n return scores\n \nscores = reinforce()",
"Episode 100\tAverage Score: 34.47\nEpisode 200\tAverage Score: 66.26\nEpisode 300\tAverage Score: 87.82\nEpisode 400\tAverage Score: 72.83\nEpisode 500\tAverage Score: 172.00\nEpisode 600\tAverage Score: 160.65\nEpisode 700\tAverage Score: 167.15\nEnvironment solved in 691 episodes!\tAverage Score: 196.69\n"
]
],
[
[
"### 4. Plot the Scores",
"_____no_output_____"
]
],
[
[
"fig = plt.figure()\nax = fig.add_subplot(111)\nplt.plot(np.arange(1, len(scores)+1), scores)\nplt.ylabel('Score')\nplt.xlabel('Episode #')\nplt.show()",
"_____no_output_____"
]
],
[
[
"### 5. Watch a Smart Agent!",
"_____no_output_____"
]
],
[
[
"env = gym.make('CartPole-v0')\n\nstate = env.reset()\nfor t in range(1000):\n action, _ = policy.act(state)\n env.render()\n state, reward, done, _ = env.step(action)\n if done:\n break \n\nenv.close()",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec9b34d86a1e05a752f37b4805ba36d8198364b7 | 22,188 | ipynb | Jupyter Notebook | model_test.ipynb | McStevenss/reid-keras-padel | c43716fdccf9348cff38bc4d3b1b34d1083a23b0 | [
"MIT"
] | null | null | null | model_test.ipynb | McStevenss/reid-keras-padel | c43716fdccf9348cff38bc4d3b1b34d1083a23b0 | [
"MIT"
] | null | null | null | model_test.ipynb | McStevenss/reid-keras-padel | c43716fdccf9348cff38bc4d3b1b34d1083a23b0 | [
"MIT"
] | null | null | null | 119.935135 | 3,269 | 0.701145 | [
[
[
"import cv2 as cv\nimg = cv.imread(\"ds\\\\train\\\\1\\\\1_1.jpg\")\nimg.shape",
"_____no_output_____"
],
[
"import tensorflow as tf\nimport pathlib\nimport os\n\n# dataset_url = \"https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz\"\n# data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)\n# data_dir = pathlib.Path(data_dir)\n\nbatch_size = 16\ntrain_dir = \"ds\\\\train\"\nval_dir = \"ds\\\\val\"\n\nlabels = os.listdir(\"ds\\\\train\")\n\nimage_size = (64, 128)\n\ntrain_ds = tf.keras.preprocessing.image_dataset_from_directory(\n train_dir,\n labels=\"inferred\",\n validation_split=0.2,\n subset=\"training\",\n seed=123,\n shuffle=True,\n image_size=image_size,\n batch_size=batch_size)\n\nval_ds = tf.keras.preprocessing.image_dataset_from_directory(\n val_dir,\n labels=\"inferred\",\n validation_split=0.2,\n subset=\"validation\",\n seed=123,\n shuffle=True,\n image_size=image_size,\n batch_size=batch_size)\n\ntest_dataset = val_ds.take(5)\nval_ds = val_ds.skip(5)\n\nprint('Batches for testing -->', test_dataset.cardinality())\nprint('Batches for validating -->', val_ds.cardinality())\n\nmodel = tf.keras.Sequential([\n tf.keras.layers.Rescaling(1./255, input_shape=(64, 128, 3)),\n tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu'),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu'),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Conv2D(64, 3, padding='same', activation='relu'),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dense(14)\n])\n\nmodel.compile(optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n metrics=['accuracy'])\n\nepochs=5\nhistory = model.fit(\n train_ds,\n validation_data=val_ds,\n epochs=5,\n)",
"Found 15623 files belonging to 14 classes.\nUsing 12499 files for training.\nFound 10429 files belonging to 14 classes.\nUsing 2085 files for validation.\nBatches for testing --> tf.Tensor(5, shape=(), dtype=int64)\nBatches for validating --> tf.Tensor(126, shape=(), dtype=int64)\nEpoch 1/5\n330/782 [===========>..................] - ETA: 59s - loss: 2.7020 - accuracy: 0.0739"
],
[
"train_ds",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
ec9b4b46ab1ca2ecba7a8485ebe8f117122b4f60 | 534,292 | ipynb | Jupyter Notebook | Visualization_with_features/Visualization3_Var of X-X^3, Mean of gradient X.ipynb | OH-Seoyoung/MachineLearning_with_Patterns_Based_on_Lengyel-Epstein_model | d3dfc0f3214758e75dd63bbf55a65006b86b9e13 | [
"MIT"
] | 2 | 2020-05-16T11:13:05.000Z | 2020-12-05T06:16:52.000Z | Visualization_with_features/Visualization3_Var of X-X^3, Mean of gradient X.ipynb | OH-Seoyoung/MachineLearning_with_Patterns_Based_on_Lengyel-Epstein_model | d3dfc0f3214758e75dd63bbf55a65006b86b9e13 | [
"MIT"
] | null | null | null | Visualization_with_features/Visualization3_Var of X-X^3, Mean of gradient X.ipynb | OH-Seoyoung/MachineLearning_with_Patterns_Based_on_Lengyel-Epstein_model | d3dfc0f3214758e75dd63bbf55a65006b86b9e13 | [
"MIT"
] | null | null | null | 2,070.899225 | 301,536 | 0.963471 | [
[
[
"from sklearn.cluster import KMeans\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.model_selection import train_test_split\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom PIL import Image \nimport random",
"_____no_output_____"
],
[
"# Make dataset\nx_orig = []\ny_orig = np.zeros((1,120))\nfor i in range(0, 36):\n for j in range(i*120 + 1, i*120 + 121) :\n img = Image.open('dataset/{0}/pattern_{1}.jpg'.format(i, j)) \n data = np.array(img)\n x_orig.append(data)\n \nfor i in range(1,36):\n y_orig = np.append(y_orig, np.full((1, 120),i), axis = 1) \nx_orig = np.array(x_orig)\ny_orig = y_orig.T",
"_____no_output_____"
],
[
"print(x_orig.shape)\nprint(y_orig.shape)",
"(4320, 64, 64)\n(4320, 1)\n"
],
[
"plt.figure(figsize=(10,10))\nfor i in range(25):\n plt.subplot(5,5,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n image = np.reshape(x_orig[i+4200,:] , [64,64])\n plt.imshow(image)\n plt.xlabel(y_orig[i+4200])\nplt.show()",
"_____no_output_____"
],
[
"# Flatten the training and test images\nx_flatten = x_orig.reshape(x_orig.shape[0], -1)\n\n# Normalize image vectors\nX = (2/255) * x_flatten - 1\n\n# Explore dataset \nprint (\"number of examples = \" + str(X.shape[0]))\nprint (\"X shape: \" + str(X.shape))\nprint (\"Y shape: \" + str(y_orig.shape))",
"number of examples = 4320\nX shape: (4320, 4096)\nY shape: (4320, 1)\n"
],
[
"def gradient_vec(X):\n g_X_r = np.gradient(X, axis = 1)\n g_X_c = np.gradient(X, axis = 0)\n g_X = g_X_r**2 + g_X_c**2\n return g_X",
"_____no_output_____"
],
[
"X1 = np.var(X - X**3, axis=1)\nX2 = np.mean(gradient_vec(X), axis = 1)",
"_____no_output_____"
],
[
"X1 = X1.reshape((4320,1))\nX2 = X2.reshape((4320,1))\n\nprint(X1.shape)\nprint(X2.shape)",
"(4320, 1)\n(4320, 1)\n"
],
[
"plt.rcParams[\"figure.figsize\"] = (25,14)",
"_____no_output_____"
],
[
"plt.scatter(X1, X2)\nplt.axis([min(X1), max(X1), min(X2), max(X2)])\nplt.xlabel(\"Var of X-X^3\", fontsize = 25)\nplt.ylabel(\"Mean of gradient X\", fontsize = 25)\nplt.savefig('graph3/all.jpg', dpi=300)\nplt.show()",
"_____no_output_____"
],
[
"for i in range(36): \n plt.axis([min(X1), max(X1), min(X2), max(X2)])\n plt.title(i, fontsize = 20)\n plt.scatter(X1[i*120:i*120+40], X2[i*120:i*120+40])\n plt.xlabel(\"Var of X-X^3\", fontsize = 25)\n plt.ylabel(\"Mean of gradient X\", fontsize = 25)\n plt.savefig('graph3/{0}.jpg'.format(i), dpi=300)\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9b4d8f7b0fc12d38ece95cadd5d8733205b97d | 4,080 | ipynb | Jupyter Notebook | youtubeScraper.ipynb | umutsrgncn/youtubeScraperAndAnalyzer | 26bd6b2f1f4873f1142f448f08dc5e76b3643c24 | [
"MIT"
] | 2 | 2021-05-21T17:44:11.000Z | 2021-05-21T17:44:29.000Z | youtubeScraper.ipynb | umutsrgncn/youtubeScraperAndAnalyzer | 26bd6b2f1f4873f1142f448f08dc5e76b3643c24 | [
"MIT"
] | null | null | null | youtubeScraper.ipynb | umutsrgncn/youtubeScraperAndAnalyzer | 26bd6b2f1f4873f1142f448f08dc5e76b3643c24 | [
"MIT"
] | null | null | null | 29.352518 | 124 | 0.517892 | [
[
[
"import os\nimport csv\nimport googleapiclient.discovery",
"_____no_output_____"
],
[
"saveLocation = input(\"Save location: \"+\"r'\")+(\"/\")",
"_____no_output_____"
]
],
[
[
"saveLocation = \"C:\\Users\\User\\Desktop\\youtubeScraper\"",
"_____no_output_____"
]
],
[
[
"ID=input(\"Youtube video ID: \")",
"_____no_output_____"
]
],
[
[
"Link = \"https://www.youtube.com/watch?v=rpERSigjqXs\" \nID = \"rpERSigjqXs\"",
"_____no_output_____"
]
],
[
[
"with open(saveLocation+ID+\".csv\", mode='w', newline='',encoding=\"utf-8\") as newFile:\n newWriter = csv.writer(newFile, delimiter=',', quotechar='\"', quoting=csv.QUOTE_MINIMAL) \n newWriter.writerow(['Comment'])\n newFile.close()",
"_____no_output_____"
],
[
"def main():\n os.environ[\"OAUTHLIB_INSECURE_TRANSPORT\"] = \"1\"\n\n api_service_name = \"youtube\"\n api_version = \"v3\"\n DEVELOPER_KEY = \"XXXXXXXXXXXXXXX\"\n\n youtube = googleapiclient.discovery.build(\n api_service_name, api_version, developerKey = DEVELOPER_KEY)\n\n request = youtube.commentThreads().list(part='snippet',order=\"relevance\",\n videoId=ID, maxResults='1000', textFormat=\"plainText\").execute()\n for i in request[\"items\"]:\n comment = i[\"snippet\"]['topLevelComment'][\"snippet\"][\"textDisplay\"]\n #likes = i[\"snippet\"]['topLevelComment'][\"snippet\"]['likeCount']\n print(comment)\n with open(saveLocation+ID+\".csv\", mode='a', newline='',encoding=\"utf-8\") as newFile:\n newWriter = csv.writer(newFile, delimiter=',', quotechar='\"', quoting=csv.QUOTE_MINIMAL) \n newWriter.writerow([comment])\n newFile.close()\n \n while (\"nextPageToken\" in request):\n request = youtube.commentThreads().list(part='snippet',order=\"relevance\", videoId=ID, maxResults='1000', \n pageToken=request[\"nextPageToken\"], textFormat=\"plainText\").execute()\n for i in request[\"items\"]:\n comment = i[\"snippet\"]['topLevelComment'][\"snippet\"][\"textDisplay\"]\n #likes = i[\"snippet\"]['topLevelComment'][\"snippet\"]['likeCount']\n print(comment)\n with open(saveLocation+ID+\".csv\", mode='a', newline='',encoding=\"utf-8\") as newFile:\n newWriter = csv.writer(newFile, delimiter=',', quotechar='\"', quoting=csv.QUOTE_MINIMAL) \n newWriter.writerow([comment])\n newFile.close()",
"_____no_output_____"
],
[
"if __name__ == \"__main__\":\n main()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ec9b52a63549d78f385bf32335c504d3b19308a4 | 25,504 | ipynb | Jupyter Notebook | notebooks/WithPFLOTRAN/BIOPARTICLE_3D_Box/3D_Box.ipynb | edsaac/bioparticle | 67e191329ef191fc539b290069524b42fbaf7e21 | [
"MIT"
] | null | null | null | notebooks/WithPFLOTRAN/BIOPARTICLE_3D_Box/3D_Box.ipynb | edsaac/bioparticle | 67e191329ef191fc539b290069524b42fbaf7e21 | [
"MIT"
] | 1 | 2020-09-25T23:31:21.000Z | 2020-09-25T23:31:21.000Z | notebooks/WithPFLOTRAN/BIOPARTICLE_3D_Box/3D_Box.ipynb | edsaac/VirusTransport_RxSandbox | 67e191329ef191fc539b290069524b42fbaf7e21 | [
"MIT"
] | 1 | 2021-09-30T05:00:58.000Z | 2021-09-30T05:00:58.000Z | 31.721393 | 122 | 0.463143 | [
[
[
"%reset -f\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom os import system\n\n## Widgets\nfrom ipywidgets import interact_manual, interactive_output\nimport ipywidgets as wd\n\n## Visualization\nfrom IPython.display import display, clear_output, Pretty\n\n## PFLOTRAN\nimport jupypft.model as mo\nimport jupypft.parameter as pm\nimport jupypft.attachmentRateCFT as arCFT\nimport jupypft.plotBTC as plotBTC",
"_____no_output_____"
],
[
"templateFiles = [\"../TEMPLATES/tpl_TH_3Dbox.in\"]\n#Pretty(templateFiles[0])",
"_____no_output_____"
],
[
"BoxModel = mo.Model(templateFile=templateFiles[0],\n execPath=\"mpirun -np 4 $PFLOTRAN_DIR/buildExperimental/pflotran\")",
"_____no_output_____"
],
[
"print(BoxModel)",
"mpirun -np 4 $PFLOTRAN_DIR/buildExperimental/pflotran -pflotranin ./pflotran.in\n"
],
[
"## Concentration\nConcentrationAtInlet = pm.Real(\"<initialConcentration>\", value=1.00E-10, units=\"mol/L\")",
"_____no_output_____"
],
[
"## Grid\nnX,nY,nZ = pm.WithSlider(\"<nX>\",units=\"-\",mathRep=\"$$nX$$\"),\\\n pm.WithSlider(\"<nY>\",units=\"-\",mathRep=\"$$nY$$\"),\\\n pm.WithSlider(\"<nZ>\",units=\"-\",mathRep=\"$$nZ$$\")\n\n#nX.slider = wd.IntSlider(value=70,min=1,max=70,step=1)\n#nY.slider = wd.IntSlider(value=30,min=1,max=70,step=1)\n#nZ.slider = wd.IntSlider(value=20,min=1,max=70,step=1)\n\nnX.slider = wd.IntSlider(value=6,min=1,max=70,step=1)\nnY.slider = wd.IntSlider(value=4,min=1,max=70,step=1)\nnZ.slider = wd.IntSlider(value=8,min=1,max=70,step=1)\n\nui_Grid = wd.VBox([nX.ui,nY.ui,nZ.ui])",
"_____no_output_____"
],
[
"## Dimensions\nLX,LY,LZ = pm.WithSlider(\"<LenX>\",units=\"m\",mathRep=\"$$LX$$\"),\\\n pm.WithSlider(\"<LenY>\",units=\"m\",mathRep=\"$$LY$$\"),\\\n pm.WithSlider(\"<LenZ>\",units=\"m\",mathRep=\"$$LZ$$\")\n\nLX.slider = wd.FloatSlider(value=70, min=10, max=200, step=1)\nLY.slider = wd.FloatSlider(value=30, min=10, max=200, step=1)\nLZ.slider = wd.FloatSlider(value=20, min=10, max=200, step=1)\n\nui_Dimensions = wd.VBox([LX.ui,LY.ui,LZ.ui])",
"_____no_output_____"
],
[
"## General\nLongDisp = pm.WithSlider(tag=\"<longDisp>\",units=\"m\",mathRep=\"$$\\\\alpha_L$$\")\nRateDetachment = pm.WithSlider(tag=\"<kdet>\",units=\"1/h\",mathRep=\"$$k_{det}$$\")\nGradientX = pm.WithSlider(tag=\"<GradientX>\",units=\"1/h\",mathRep=\"$$\\partial_x h$$\")\n\nLongDisp.slider = wd.FloatLogSlider(value=1.0E-12,base=10, min=-12, max=0, step=0.1)\nGradientX.slider = wd.FloatLogSlider(value=2.0E-3,base=10, min=-10, max=-2, step=0.1)\nRateDetachment.slider = wd.FloatLogSlider(value=0.0026,base=10, min=-30, max=1, step=0.1)\n\nui_General = wd.VBox([LongDisp.ui, GradientX.ui, RateDetachment.ui])",
"_____no_output_____"
],
[
"## Attachment Rate (CFT)\nAlphaEffic = pm.WithSlider(tag=\"<alphaEfficiency>\",units=\"-\",mathRep=\"$$\\\\alpha$$\")\nGrainSize = pm.WithSlider(tag=\"<diamCollector>\",units=\"m\",mathRep=\"$$d_c$$\")\nParticleDiam = pm.WithSlider(tag=\"<diamParticle>\",units=\"m\",mathRep=\"$$d_p$$\")\nHamakerConst = pm.WithSlider(tag=\"<hamakerConstant>\",units=\"J\",mathRep=\"$$A$$\")\nParticleDens = pm.WithSlider(tag=\"<rhoParticle>\",units=\"kg/m³\",mathRep=\"$$\\\\rho_p$$\")\n\n## Attachment rates\nGrainSize.slider = wd.FloatLogSlider(value=2.0E-3,base=10, min=-4, max=-1, step=0.1)\nParticleDiam.slider = wd.FloatLogSlider(value=1.0E-7,base=10, min=-9, max=-4, step=0.1)\nHamakerConst.slider = wd.FloatLogSlider(value=5.0E-21,base=10, min=-22, max=-18, step=0.1)\nParticleDens.slider = wd.FloatSlider(value=1050.0,min=1000.0, max=4000, step=100)\nAlphaEffic.slider = wd.FloatLogSlider(value=1E-5,base=10, min=-5, max=0, step=0.1)\n\nui_AttachmentRate = wd.VBox([GrainSize.ui,\\\n ParticleDiam.ui,\\\n HamakerConst.ui,\\\n ParticleDens.ui,\\\n AlphaEffic.ui])",
"_____no_output_____"
],
[
"## Permeability\nPorosity = pm.WithSlider(tag=\"<porosity>\",units=\"adim\",mathRep=\"$$\\\\theta$$\")\nPorosity.slider = wd.FloatSlider(value=0.35, min=0.05, max=0.95, step=0.05)\n\nkX,kY,kZ = pm.WithSlider(tag=\"<PermX>\",units=\"m²\",mathRep=\"$$k_{xx}$$\"),\\\n pm.WithSlider(tag=\"<PermY>\",units=\"m²\",mathRep=\"$$k_{yy}$$\"),\\\n pm.WithSlider(tag=\"<PermZ>\",units=\"m²\",mathRep=\"$$k_{zz}$$\")\n\nfor k in [kX,kY,kZ]:\n k.slider = wd.FloatLogSlider(value=1.0E-8,base=10, min=-10, max=-7, step=0.5)\n \nui_Permeability = wd.VBox([kX.ui,kY.ui,kZ.ui,Porosity.ui])",
"_____no_output_____"
],
[
"## Temperatures\nTin, RefTemp = pm.WithSlider(tag=\"<injectTemp>\",units=\"C\",mathRep=\"$$T_{in}$$\"),\\\n pm.WithSlider(tag=\"<initialTemp>\",units=\"C\",mathRep=\"$$T_{0}$$\"),\\\n\nTin.slider = wd.FloatSlider(value=10, min=1.0, max=35, step=1)\nRefTemp.slider = wd.FloatSlider(value=10, min=1.0, max=35, step=1)\n\nui_Temperature = wd.VBox([Tin.ui,RefTemp.ui])",
"_____no_output_____"
],
[
"## Well locations and flowrates\ninX,inY,inZ1,inZ2 = pm.WithSlider(tag=\"<inX>\",units=\"m\",mathRep=\"$$x_{Q_{in}}$$\"),\\\n pm.WithSlider(tag=\"<inY>\",units=\"m\",mathRep=\"$$y_{Q_{in}}$$\"),\\\n pm.WithSlider(tag=\"<inZ1>\",units=\"m\",mathRep=\"$$z_{1,Q_{in}}$$\"),\\\n pm.WithSlider(tag=\"<inZ2>\",units=\"m\",mathRep=\"$$z_{2,Q_{in}}$$\")\n\ninX.slider = wd.FloatSlider(value=10, min=0.0, max=LX.value, step=0.5)\ninY.slider = wd.FloatSlider(value=15, min=0.0, max=LY.value, step=0.5)\ninZ1.slider = wd.FloatSlider(value=19, min=0.0, max=LZ.value, step=0.5)\ninZ2.slider = wd.FloatSlider(value=20, min=0.0, max=LZ.value, step=0.5)\n\nQinRate = pm.WithSlider(tag=\"<inRate>\", units=\"m³/d\",mathRep=\"$$Q_{in}$$\")\nQinRate.slider = wd.FloatSlider(value=0.24, min=0., max=5., step=0.2)\n\nui_InjectionWell = wd.VBox([inX.ui,inY.ui,inZ1.ui,inZ2.ui,QinRate.ui])",
"_____no_output_____"
],
[
"outX,outY,outZ1,outZ2 = pm.WithSlider(tag=\"<outX>\",units=\"m\",mathRep=\"$$x_{Q_{out}}$$\"),\\\n pm.WithSlider(tag=\"<outY>\",units=\"m\",mathRep=\"$$y_{Q_{out}}$$\"),\\\n pm.WithSlider(tag=\"<outZ1>\",units=\"m\",mathRep=\"$$z_{1,Q_{out}}$$\"),\\\n pm.WithSlider(tag=\"<outZ2>\",units=\"m\",mathRep=\"$$z_{2,Q_{out}}$$\")\n\noutX.slider = wd.FloatSlider(value=41., min=0.0, max=LX.value, step=0.5)\noutY.slider = wd.FloatSlider(value=15., min=0.0, max=LY.value, step=0.5)\noutZ1.slider = wd.FloatSlider(value=16., min=0.0, max=LZ.value, step=0.5)\noutZ2.slider = wd.FloatSlider(value=19., min=0.0, max=LZ.value, step=0.5)\n\nQoutRate = pm.WithSlider(tag=\"<outRate>\",units=\"m³/d\",mathRep=\"$$Q_{out}$$\")\nQoutRate.slider = wd.FloatSlider(value=-21.84, min=-30., max=0., step=0.2)\n\nui_ExtractionWell = wd.VBox([outX.ui,outY.ui,outZ1.ui,outZ2.ui,QoutRate.ui])",
"_____no_output_____"
],
[
"listOfParameters = pm.Parameter.list_of_vars()",
"_____no_output_____"
],
[
"def RunAll():\n \n #Copy the template as a runFile\n BoxModel.cloneTemplate()\n \n #Replace the tags in the runFile with values\n for parameter in listOfParameters:\n BoxModel.replaceTagInFile(parameter)\n \n #Run PFLOTRAN\n BoxModel.runModel()\n \n #Reformat the results file to a CSV\n BoxModel.fixedToCSV(outputFile=\"pflotran-mas.dat\")\n BoxModel.fixedToCSV(outputFile=\"pflotran-obs-0.tec\")\n \ndef plot1():\n clear_output()\n plotBTC.xyPlotLine(\n inputFile = \"./pflotran-obs-0.tec\",\n YIndex=17,\n normalizeXWith = 1.,\n normalizeYWith = ConcentrationAtInlet.value,\n legend = \"BTC\")\n\ndef plot2():\n clear_output()\n plotBTC.xyPlotLine(\n inputFile = \"./pflotran-obs-0.tec\",\n YIndex=8,\n normalizeXWith = 1.,\n normalizeYWith = ConcentrationAtInlet.value,\n legend = \"BTC\")",
"_____no_output_____"
],
[
"ui_RunModel = wd.Button(\n description='Run PFLOTRAN',\n disabled=False,\n button_style='', # 'success', 'info', 'warning', 'danger' or ''\n tooltip='Run PFLOTRAN',\n icon='check'\n)\n\nui1 = wd.Accordion(children=[ui_General,\\\n ui_Grid,\\\n ui_Dimensions,\\\n ui_Permeability,\\\n ui_Temperature,\\\n ui_InjectionWell,\\\n ui_ExtractionWell,\\\n ui_AttachmentRate,\\\n ui_RunModel\n ])\nui1.set_title(0,\"General\")\nui1.set_title(1,\"Grid\")\nui1.set_title(2,\"Dimensions\")\nui1.set_title(3,\"Permeability\")\nui1.set_title(4,\"Temperature\")\nui1.set_title(5,\"InjectionWell\")\nui1.set_title(6,\"ExtractionWell\")\nui1.set_title(7,\"Attachment rate (CFT)\")\nui1.set_title(8,\"Run Model\")",
"_____no_output_____"
],
[
"ui2_1 = wd.Output()\nui2_2 = wd.Output()\n\ndef on_button_clicked(_):\n RunAll()\n with ui2_1: plot1()\n with ui2_2: plot2()\n\nui_RunModel.on_click(on_button_clicked)\n\nui2 = wd.Tab(children=[ui2_1,ui2_2])\nui2.set_title(0,\"Extract Well probe\")\nui2.set_title(1,\"Injection Well probe\")",
"_____no_output_____"
],
[
"ui = wd.TwoByTwoLayout(top_left=ui1,\n top_right=ui2)\ndisplay(ui)",
"_____no_output_____"
],
[
"Pretty(\"./pflotran.in\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9b5308ab7376cd6772f65dc1aaf8f6726de1fb | 1,253 | ipynb | Jupyter Notebook | 00_paths.ipynb | DmitriyG228/furniture | 3236f16070a62109021f38d8e6bc9fc915869276 | [
"Apache-2.0"
] | null | null | null | 00_paths.ipynb | DmitriyG228/furniture | 3236f16070a62109021f38d8e6bc9fc915869276 | [
"Apache-2.0"
] | null | null | null | 00_paths.ipynb | DmitriyG228/furniture | 3236f16070a62109021f38d8e6bc9fc915869276 | [
"Apache-2.0"
] | null | null | null | 19.276923 | 75 | 0.538707 | [
[
[
"# default_exp paths",
"_____no_output_____"
],
[
"#export\nfrom mytools.tools import *\nfrom pathlib import Path",
"_____no_output_____"
],
[
"#export\nnas_pictures_path = Path('/home/dima/nas/real_nas/houzz/pictures')\nssd_pictures_path = Path('/home/dima/ssd/furniture_data/pictures')\nmodels_path = Path('/home/dima/furniture_data/models')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
ec9b5eb430dd2ba3b1c013af35186b22e7e6078b | 20,578 | ipynb | Jupyter Notebook | 1 Data Preparation Basics/2 Treating Missing Values.ipynb | RhishabMukherjee/CheatSheet | 28b24dd184e15e7f50b98416d8ebf67cd767993d | [
"MIT"
] | null | null | null | 1 Data Preparation Basics/2 Treating Missing Values.ipynb | RhishabMukherjee/CheatSheet | 28b24dd184e15e7f50b98416d8ebf67cd767993d | [
"MIT"
] | null | null | null | 1 Data Preparation Basics/2 Treating Missing Values.ipynb | RhishabMukherjee/CheatSheet | 28b24dd184e15e7f50b98416d8ebf67cd767993d | [
"MIT"
] | null | null | null | 68.593333 | 1,683 | 0.485519 | [
[
[
"# Data Preparation Basics\n## Segment 2 - Treating missing values",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd \n\nfrom pandas import Series, DataFrame",
"_____no_output_____"
]
],
[
[
"### Figuring out what data is missing",
"_____no_output_____"
]
],
[
[
"missing = np.nan\n\nseries_obj = Series(['row 1', 'row 2', missing, 'row 4', 'row 5', 'row 6', missing, 'row 8'])\nseries_obj",
"_____no_output_____"
],
[
"series_obj.isnull()",
"_____no_output_____"
]
],
[
[
"### Filling in for missing values",
"_____no_output_____"
]
],
[
[
"np.random.seed(25)\nDF_obj = DataFrame(np.random.rand(36).reshape(6,6))\nDF_obj",
"_____no_output_____"
],
[
"DF_obj.loc[3:5, 0] = missing\nDF_obj.loc[1:4, 5] = missing\nDF_obj",
"_____no_output_____"
],
[
"filled_DF = DF_obj.fillna(0)\nfilled_DF",
"_____no_output_____"
],
[
"filled_DF = DF_obj.fillna({0: 0.1, 5:1.25})\nfilled_DF",
"_____no_output_____"
],
[
"fill_DF = DF_obj.fillna(method='ffill')\nfill_DF",
"_____no_output_____"
]
],
[
[
"### Counting missing values",
"_____no_output_____"
]
],
[
[
"np.random.seed(25)\nDF_obj = DataFrame(np.random.rand(36).reshape(6,6))\nDF_obj.loc[3:5, 0] = missing\nDF_obj.loc[1:4, 5] = missing\nDF_obj",
"_____no_output_____"
],
[
"DF_obj.isnull().sum()",
"_____no_output_____"
]
],
[
[
"### Filtering out missing values",
"_____no_output_____"
]
],
[
[
"DF_no_NaN = DF_obj.dropna()\nDF_no_NaN",
"_____no_output_____"
],
[
"DF_no_NaN = DF_obj.dropna(axis=1)\nDF_no_NaN",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec9b6b6ac292cdb96698edc8f852557e3cf25ce7 | 8,721 | ipynb | Jupyter Notebook | extensions/BokehMagic.ipynb | brian15co/bokeh | 6cecb7211277b9d838039d0eb15e50a10f9ac3d1 | [
"BSD-3-Clause"
] | 2 | 2021-09-01T12:36:06.000Z | 2021-11-17T10:48:36.000Z | extensions/BokehMagic.ipynb | brian15co/bokeh | 6cecb7211277b9d838039d0eb15e50a10f9ac3d1 | [
"BSD-3-Clause"
] | null | null | null | extensions/BokehMagic.ipynb | brian15co/bokeh | 6cecb7211277b9d838039d0eb15e50a10f9ac3d1 | [
"BSD-3-Clause"
] | null | null | null | 27.685714 | 286 | 0.508887 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ec9b6b8371c943af4f12b707b98cf5651ff9578f | 279,601 | ipynb | Jupyter Notebook | examples/ten.ipynb | earnestt1234/wubwub | 469125767ae80ccfe78e48e4f17e2be6a638a6ca | [
"MIT"
] | 3 | 2021-09-16T10:24:59.000Z | 2022-03-17T22:29:20.000Z | examples/ten.ipynb | earnestt1234/wubwub | 469125767ae80ccfe78e48e4f17e2be6a638a6ca | [
"MIT"
] | null | null | null | examples/ten.ipynb | earnestt1234/wubwub | 469125767ae80ccfe78e48e4f17e2be6a638a6ca | [
"MIT"
] | 1 | 2022-03-16T23:10:56.000Z | 2022-03-16T23:10:56.000Z | 1,747.50625 | 273,501 | 0.949442 | [
[
[
"# Ten\n\nThis example tries to demonstrate working with uncommon [time signatures](https://en.wikipedia.org/wiki/Time_signature) in wubwub. Namely, it is in 10/8.\n\nSequencers in wubwub aren't aware of a time signature, only the number of beats specified. It's up to the user to provide the content to generate the feel of the desired time signature. So to create a song in 10/8, we don't need to do anything special (aside from setting the length of the Sequencer to be divisible by 10).\n\nWhile I would describe this time signature as 10/8, it is sort of encoded as a *very fast* 5/4. By doubling the BPM (400, rather than 200), we can work with nicer whole numbers (`1, 2, ..., 9, 10`) rather than many halves (`1, 1.5, ..., 5, 5.5`).\n\nWith a count this fast, and an irregular time signature, it can be difficult to think past the first measure. *Patterns* (i.e. `wubwub.Pattern`) can help you work in shorter chunks. Patterns contain a list of beats and a length; various methods allow you to recreate the same rhythmic pattern at different parts of (or throughout) the song.\n\nThis song also shows creating an initial Sequencer (`seq`), copying it (`intro`), and then joining (`final`) to make a beat with multiple sections. Like Patterns, this technique is helpful for making longer creations.",
"_____no_output_____"
]
],
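[
[
"# A small warm-up using the same wb.Pattern calls as the full script below:\n# a Pattern holds a list of beats plus a length, and .until() is used here to\n# repeat that rhythm out to the end of the sequence -- presumably giving hits\n# on beats 1, 4, 7, 9 of every 10-beat measure up to beat 40.\nimport wubwub as wb\n\nemph = wb.Pattern([1, 4, 7, 9], length=10).until(40)\nnotemph = wb.Pattern([2, 3, 5, 6, 8, 10], length=10).until(40)\nevery = emph.merge(notemph)  # all ten counts of the 10/8 measure",
"_____no_output_____"
]
],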
[
[
"from pysndfx import AudioEffectsChain\nimport wubwub as wb\nimport wubwub.sounds as snd\n\n# load sounds\nORGAN = snd.load('keys.organ')\nDRUMS = snd.load('drums.house')\nDRUMS2 = snd.load('drums.ukhard')\nFX = snd.load('synth.fx')\nSYNTH = snd.load('synth.ts_synth')\n\n# initialize the sequencer\nend = 40\nseq = wb.Sequencer(bpm=400, beats=end)\n\n# create Patterns\n\n# emphasized and non-emphasized beats\nemph = wb.Pattern([1, 4, 7, 9], length=10).until(end)\nnotemph = wb.Pattern([2, 3, 5, 6, 8, 10], length=10).until(end)\n\n# all beats\nevery = emph.merge(notemph)\n\n# patterns for individual instruments\nkickpat = wb.Pattern([1, 4, 6], length=10).until(end)\nsnarepat = wb.Pattern([7, 10], length=10).until(end)\nbleeppat = wb.Pattern([1,2,], length=4).until(end)\nsynthpat = wb.Pattern([1, 4, 6, 7], length=10).until(end)\n\n# make a pattern that phases in and out of the 10/8\nphase = wb.Pattern([1], length=4).until(end)\n\n# pattern for a hihat count\ncount = wb.Pattern([1, 3, 5, 7, 9], length=10).until(end)\n\n# the main bass-y organ\norgan = seq.add_sampler(ORGAN['C1'], name='organ', basepitch='C1')\norgan.make_notes(every, pitches=[3,2,0,5,3,2,7,3,5,2])\n\n# drums\nhat = seq.add_sampler(DRUMS['hat1'], name='hat')\nhat.make_notes(emph, volumes=0)\nhat.make_notes(notemph, volumes=-5, lengths=.1)\nhat.volume -= 10\nhat.pan = .5\n\nsnare = seq.add_sampler(DRUMS['snare3'], name='snare')\nsnare.make_notes(snarepat, volumes=[0, -5])\nsnare.volume -= 5\n\nkick = seq.add_sampler(DRUMS['kick6'], name='kick')\nkick.make_notes(kickpat)\nkick.volume -= 2\n\n# poor man's side chaining: lower the organ volume when the kick occurs\norgan[kick.array_of_beats()] = wb.alter_notes(organ[kick.array_of_beats()], volume=-5)\n\nride = seq.add_sampler(DRUMS2['ride-hard'], name='ride')\nride.make_notes(phase, lengths=5)\nride.pan = -.6\nride.effects = AudioEffectsChain().highpass(5000)\n\n# little synth FX\nbleep = seq.add_sampler(wb.shift_pitch(FX['checkpoint-hit'], 8), name='bleep')\nbleep.make_notes(bleeppat)\nbleep.effects = AudioEffectsChain().reverb(wet_gain=5)\nbleep.volume -= 10\n\n# higher synth\nsynth = seq.add_sampler(SYNTH['patch001'], name='synth')\nsynth.make_notes(every, pitches=[12,12,12,0,0,0,12,12,0,0], lengths=.2)\nsynth.effects = AudioEffectsChain().highpass(1000).reverb()\n\n# copy the original sequence just to keep the instruments\nintro = seq.copy(with_notes=False)\n\n# create an intro with just the organ\nintro['organ'].add_fromdict(seq['organ'].notedict)\nintro['organ'].effects = AudioEffectsChain().lowpass(200)\n\n# add drums to build up to the rest of the beat\nintro['hat'].make_notes(count.onmeasure(3, measurelen=10), lengths=.1)\nintro['snare'].make_notes([37, 38, 39, 40])\n\n# join the intro and main section, and render\nfinal = wb.join([intro, seq])\nfinal.build(overhang=5)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |
ec9b76ba3a49d73b74d824043cce385f8871223f | 11,795 | ipynb | Jupyter Notebook | Homework 07.ipynb | derrowap/MA490-Deep-Learning | 089c1b9f3806aa0ca1f86a032e6c0563ee766da0 | [
"MIT"
] | null | null | null | Homework 07.ipynb | derrowap/MA490-Deep-Learning | 089c1b9f3806aa0ca1f86a032e6c0563ee766da0 | [
"MIT"
] | null | null | null | Homework 07.ipynb | derrowap/MA490-Deep-Learning | 089c1b9f3806aa0ca1f86a032e6c0563ee766da0 | [
"MIT"
] | null | null | null | 27.884161 | 104 | 0.504621 | [
[
[
"# Homework 07\nAustin Derrow-Pinion CM 208",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport tensorflow as tf\nimport time\n\nnp.random.seed(0)",
"_____no_output_____"
],
[
"# load CIFAR-10 Cats and Dogs split data set\nMNIST = np.load('./Data/MNIST_train_40000.npz')\n\ntrain_images = MNIST['train_images']\ntrain_labels = MNIST['train_labels']\n\nprint('train_images shape: {}'.format(train_images.shape))\nprint('train_labels shape: {}'.format(train_labels.shape))",
"train_images shape: (40000, 28, 28)\ntrain_labels shape: (40000,)\n"
],
[
"# prepare data\nimage_size = 28 * 28\ninput_images = np.reshape(np.array(train_images, dtype='float32'), [-1, image_size])\ninput_labels = []\n\nlabels = tf.constant(train_labels, tf.int32, [train_labels.size])\none_hot_encode = tf.one_hot(labels, depth=10, on_value=1.0, off_value=0.0, dtype=tf.float32)\ninit = tf.initialize_all_variables()\nwith tf.Session() as sess:\n sess.run(init)\n input_labels = sess.run(one_hot_encode)\nprint(input_labels.shape)",
"(40000, 10)\n"
],
[
"# returns minibatches for a single epoch\ndef minibatches(x, y, batchsize):\n assert x.shape[0] == y.shape[0]\n indices = np.arange(x.shape[0])\n np.random.shuffle(indices)\n \n for i in range(0, x.shape[0] - batchsize + 1, batchsize):\n excerpt = indices[i:i + batchsize]\n yield np.array(x[excerpt], dtype='float32'), np.array(y[excerpt], dtype='int32')\n \n leftOver = (x.shape[0] - batchsize) % batchsize\n if leftOver != 0:\n excerpt = indices[len(indices) - leftOver:]\n yield np.array(x[excerpt], dtype='float32'), np.array(y[excerpt], dtype='int32')",
"_____no_output_____"
],
[
"# Tensorflow computational graph\nx = tf.placeholder(tf.float32, [None, image_size])\ny = tf.placeholder(tf.float32, [None, 10])\n\nW1 = tf.Variable(tf.truncated_normal([image_size, 1024], dtype=tf.float32, seed=0, stddev=0.1))\nb1 = tf.Variable(tf.truncated_normal([1024], dtype=tf.float32, seed=0, stddev=0.1))\ny1 = tf.nn.relu(tf.matmul(x, W1) + b1)\n\nW2 = tf.Variable(tf.truncated_normal([1024, 10], dtype=tf.float32, seed=0, stddev=0.1))\nb2 = tf.Variable(tf.truncated_normal([10], dtype=tf.float32, seed=0, stddev=0.1))\nlogits = tf.matmul(y1, W2) + b2\n\nCE = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, y))\noptimizer = tf.train.AdamOptimizer().minimize(CE)",
"_____no_output_____"
],
[
"def stochastic_gradient_descent(sess, batch_size):\n num_epochs = 0\n batch_size = batch_size\n total_elapsed_time = 0\n CE_train = 1.0\n num_CE_checks = 1\n \n while CE_train > 0.01:\n num_epochs += 1\n for batch in minibatches(input_images, input_labels, batch_size):\n batch_x, batch_y = batch\n \n # run a timed optimization\n t_start = time.time()\n _ = sess.run([optimizer], feed_dict={x: batch_x, y: batch_y})\n elapsed_time = (time.time() - t_start) / 60\n total_elapsed_time += elapsed_time\n \n # check up on progress\n if total_elapsed_time / 0.1 >= num_CE_checks:\n num_CE_checks += 1\n CE_train = sess.run([CE], feed_dict={x: input_images, y: input_labels})[0]\n print(\"Epoch = {}\".format(num_epochs))\n print(\"\\tTraining CE = {:.3f}\".format(CE_train))\n print(\"\\tTime passed = {:.1f}\".format(total_elapsed_time))\n \n # finished training\n if CE_train < 0.01:\n break\n \n # print final results\n print(\"Finished Training with batch size = {}\".format(batch_size))\n print(\"number of epochs = {}\".format(num_epochs))\n print(\"final training cross-entropy = {:.3}\".format(CE_train))\n print(\"total elapsed optimization time (minutes) = {:.3f}\".format(total_elapsed_time))",
"_____no_output_____"
],
[
"init = tf.initialize_all_variables()\nwith tf.Session() as sess:\n sess.run(init)\n stochastic_gradient_descent(sess, 10)",
"Epoch = 1\n\tTraining CE = 0.139\n\tTime passed = 0.1\nEpoch = 1\n\tTraining CE = 0.096\n\tTime passed = 0.2\nEpoch = 2\n\tTraining CE = 0.087\n\tTime passed = 0.3\nEpoch = 2\n\tTraining CE = 0.098\n\tTime passed = 0.4\nEpoch = 3\n\tTraining CE = 0.096\n\tTime passed = 0.5\nEpoch = 3\n\tTraining CE = 0.062\n\tTime passed = 0.6\nEpoch = 4\n\tTraining CE = 0.032\n\tTime passed = 0.7\nEpoch = 4\n\tTraining CE = 0.037\n\tTime passed = 0.8\nEpoch = 5\n\tTraining CE = 0.027\n\tTime passed = 0.9\nEpoch = 5\n\tTraining CE = 0.030\n\tTime passed = 1.0\nEpoch = 5\n\tTraining CE = 0.026\n\tTime passed = 1.1\nEpoch = 6\n\tTraining CE = 0.022\n\tTime passed = 1.2\nEpoch = 6\n\tTraining CE = 0.026\n\tTime passed = 1.3\nEpoch = 7\n\tTraining CE = 0.013\n\tTime passed = 1.4\nEpoch = 7\n\tTraining CE = 0.027\n\tTime passed = 1.5\nEpoch = 8\n\tTraining CE = 0.022\n\tTime passed = 1.6\nEpoch = 8\n\tTraining CE = 0.015\n\tTime passed = 1.7\nEpoch = 8\n\tTraining CE = 0.021\n\tTime passed = 1.8\nEpoch = 9\n\tTraining CE = 0.019\n\tTime passed = 1.9\nEpoch = 9\n\tTraining CE = 0.019\n\tTime passed = 2.0\nEpoch = 10\n\tTraining CE = 0.009\n\tTime passed = 2.1\nFinished Training with batch size = 10\nnumber of epochs = 10\nfinal training cross-entropy = 0.00876\ntotal elapsed optimization time (minutes) = 2.100\n"
],
[
"init = tf.initialize_all_variables()\nwith tf.Session() as sess:\n sess.run(init)\n stochastic_gradient_descent(sess, 100)",
"Epoch = 4\n\tTraining CE = 0.040\n\tTime passed = 0.1\nEpoch = 7\n\tTraining CE = 0.008\n\tTime passed = 0.2\nFinished Training with batch size = 100\nnumber of epochs = 7\nfinal training cross-entropy = 0.00764\ntotal elapsed optimization time (minutes) = 0.200\n"
],
[
"init = tf.initialize_all_variables()\nwith tf.Session() as sess:\n sess.run(init)\n stochastic_gradient_descent(sess, 1000)",
"Epoch = 9\n\tTraining CE = 0.044\n\tTime passed = 0.1\nEpoch = 18\n\tTraining CE = 0.013\n\tTime passed = 0.2\nEpoch = 26\n\tTraining CE = 0.005\n\tTime passed = 0.3\nFinished Training with batch size = 1000\nnumber of epochs = 26\nfinal training cross-entropy = 0.00497\ntotal elapsed optimization time (minutes) = 0.300\n"
],
[
"init = tf.initialize_all_variables()\nwith tf.Session() as sess:\n sess.run(init)\n stochastic_gradient_descent(sess, 10000)",
"Epoch = 11\n\tTraining CE = 0.230\n\tTime passed = 0.1\nEpoch = 21\n\tTraining CE = 0.144\n\tTime passed = 0.2\nEpoch = 32\n\tTraining CE = 0.099\n\tTime passed = 0.3\nEpoch = 42\n\tTraining CE = 0.071\n\tTime passed = 0.4\nEpoch = 53\n\tTraining CE = 0.051\n\tTime passed = 0.5\nEpoch = 63\n\tTraining CE = 0.039\n\tTime passed = 0.6\nEpoch = 74\n\tTraining CE = 0.029\n\tTime passed = 0.7\nEpoch = 84\n\tTraining CE = 0.022\n\tTime passed = 0.8\nEpoch = 94\n\tTraining CE = 0.018\n\tTime passed = 0.9\nEpoch = 105\n\tTraining CE = 0.014\n\tTime passed = 1.0\nEpoch = 115\n\tTraining CE = 0.011\n\tTime passed = 1.1\nEpoch = 122\n\tTraining CE = 0.010\n\tTime passed = 1.2\nFinished Training with batch size = 10000\nnumber of epochs = 122\nfinal training cross-entropy = 0.01\ntotal elapsed optimization time (minutes) = 1.200\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
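The training loop in the notebook above calls a `minibatches(input_images, input_labels, batch_size)` helper that is defined earlier in that notebook and not shown here. A minimal sketch of what such a generator could look like — the per-epoch shuffling is an assumption, not taken from the source:

```python
import numpy as np

def minibatches(inputs, targets, batch_size, shuffle=True):
    """Yield (batch_x, batch_y) slices of the training data.

    Sketch of the helper assumed by stochastic_gradient_descent above;
    shuffling once per pass is an assumption.
    """
    indices = np.arange(len(inputs))
    if shuffle:
        np.random.shuffle(indices)
    for start in range(0, len(inputs), batch_size):
        batch_idx = indices[start:start + batch_size]
        yield inputs[batch_idx], targets[batch_idx]
```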
ec9b7ac95b342cf55d579c95442fe91e7c89b192 | 6,881 | ipynb | Jupyter Notebook | docs/source/notebooks/01.ipynb | JulianKarlBauer/orientation_averaged_mean_field | 75acb5ed58aa6a69cec7508d3d45865bbab3ed3c | [
"MIT"
] | null | null | null | docs/source/notebooks/01.ipynb | JulianKarlBauer/orientation_averaged_mean_field | 75acb5ed58aa6a69cec7508d3d45865bbab3ed3c | [
"MIT"
] | null | null | null | docs/source/notebooks/01.ipynb | JulianKarlBauer/orientation_averaged_mean_field | 75acb5ed58aa6a69cec7508d3d45865bbab3ed3c | [
"MIT"
] | null | null | null | 48.118881 | 104 | 0.38134 | [
[
[
"# Get points witin admissible parameter space",
"_____no_output_____"
]
],
[
[
"import planarfibers\nimport pandas as pd\npd.set_option('display.max_columns', 100)\npd.set_option('display.width', 1000)",
"_____no_output_____"
],
[
"df = planarfibers.discretization.get_points_on_slices(\n radii=[\"0\", \"1/2\", \"9/10\"],\n la1s=[\"1/2\", \"4/6\", \"5/6\", \"1\"],\n numeric=False,\n)",
"_____no_output_____"
],
[
"print(df)",
" la1 radius_factor beta r d_1 d_8\nv00-upper-0 1/2 9/10 pi/2 9/80 69/560 0\nv00-upper-1 2/3 9/10 pi/2 1/10 61/630 0\nv00-upper-2 5/6 9/10 pi/2 1/16 89/5040 0\nv00-mid-0 1/2 0 0 0 3/280 0\nv00-mid-1 2/3 0 0 0 -1/315 0\nv00-mid-2 5/6 0 0 0 -113/2520 0\nv00-lower-0 1/2 9/10 -pi/2 9/80 -57/560 0\nv00-lower-1 2/3 9/10 -pi/2 1/10 -13/126 0\nv00-lower-2 5/6 9/10 -pi/2 1/16 -541/5040 0\nv00-mid-3 1 0 0 0 -4/35 0\nvshc-central-la1-0 1/2 0 0 0 3/280 0\nvshc-m90-0-la1-0 1/2 1/2 -pi/2 1/16 -29/560 0\nvshc-m90-1-la1-0 1/2 9/10 -pi/2 9/80 -57/560 0\nvshc-m45-0-la1-0 1/2 1/2 -pi/4 1/16 3/280 - sqrt(2)/32 sqrt(2)/32\nvshc-m45-1-la1-0 1/2 9/10 -pi/4 9/80 3/280 - 9*sqrt(2)/160 9*sqrt(2)/160\nvshc-0-0-la1-0 1/2 1/2 0 1/16 3/280 1/16\nvshc-0-1-la1-0 1/2 9/10 0 9/80 3/280 9/80\nvshc-45-0-la1-0 1/2 1/2 pi/4 1/16 3/280 + sqrt(2)/32 sqrt(2)/32\nvshc-45-1-la1-0 1/2 9/10 pi/4 9/80 3/280 + 9*sqrt(2)/160 9*sqrt(2)/160\nvshc-90-0-la1-0 1/2 1/2 pi/2 1/16 41/560 0\nvshc-90-1-la1-0 1/2 9/10 pi/2 9/80 69/560 0\nvshc-central-la1-1 2/3 0 0 0 -1/315 0\nvshc-m90-0-la1-1 2/3 1/2 -pi/2 1/18 -37/630 0\nvshc-m90-1-la1-1 2/3 9/10 -pi/2 1/10 -13/126 0\nvshc-m45-0-la1-1 2/3 1/2 -pi/4 1/18 -sqrt(2)/36 - 1/315 sqrt(2)/36\nvshc-m45-1-la1-1 2/3 9/10 -pi/4 1/10 -sqrt(2)/20 - 1/315 sqrt(2)/20\nvshc-0-0-la1-1 2/3 1/2 0 1/18 -1/315 1/18\nvshc-0-1-la1-1 2/3 9/10 0 1/10 -1/315 1/10\nvshc-45-0-la1-1 2/3 1/2 pi/4 1/18 -1/315 + sqrt(2)/36 sqrt(2)/36\nvshc-45-1-la1-1 2/3 9/10 pi/4 1/10 -1/315 + sqrt(2)/20 sqrt(2)/20\nvshc-90-0-la1-1 2/3 1/2 pi/2 1/18 11/210 0\nvshc-90-1-la1-1 2/3 9/10 pi/2 1/10 61/630 0\nvshc-central-la1-2 5/6 0 0 0 -113/2520 0\nvshc-m90-0-la1-2 5/6 1/2 -pi/2 5/144 -401/5040 0\nvshc-m90-1-la1-2 5/6 9/10 -pi/2 1/16 -541/5040 0\nvshc-m45-0-la1-2 5/6 1/2 -pi/4 5/144 -113/2520 - 5*sqrt(2)/288 5*sqrt(2)/288\nvshc-m45-1-la1-2 5/6 9/10 -pi/4 1/16 -113/2520 - sqrt(2)/32 sqrt(2)/32\nvshc-0-0-la1-2 5/6 1/2 0 5/144 -113/2520 5/144\nvshc-0-1-la1-2 5/6 9/10 0 1/16 -113/2520 1/16\nvshc-45-0-la1-2 5/6 1/2 pi/4 5/144 -113/2520 + 5*sqrt(2)/288 5*sqrt(2)/288\nvshc-45-1-la1-2 5/6 9/10 pi/4 1/16 -113/2520 + sqrt(2)/32 sqrt(2)/32\nvshc-90-0-la1-2 5/6 1/2 pi/2 5/144 -17/1680 0\nvshc-90-1-la1-2 5/6 9/10 pi/2 1/16 89/5040 0\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
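The dataframe printed above is built with `numeric=False`, so its entries are symbolic values (`3/280`, `sqrt(2)/32`, `pi/2`). If a plain floating-point copy is needed, one possible conversion — assuming each cell is a sympy expression or a string sympy can parse — is:

```python
import sympy as sp

def to_numeric(df):
    # Assumption: every cell is a sympy expression or a sympify-able string.
    return df.applymap(lambda value: float(sp.sympify(value)))

# df_num = to_numeric(df)
# print(df_num.round(4))
```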
ec9b7f68b81c817b308a23d8e0edb3a26a9044c0 | 74,524 | ipynb | Jupyter Notebook | .ipynb_checkpoints/bach-corpus-checkpoint.ipynb | Nicholass/markov_music | 34ae422498fca274272e92335d0fa0ae53e5130b | [
"Apache-2.0"
] | null | null | null | .ipynb_checkpoints/bach-corpus-checkpoint.ipynb | Nicholass/markov_music | 34ae422498fca274272e92335d0fa0ae53e5130b | [
"Apache-2.0"
] | null | null | null | .ipynb_checkpoints/bach-corpus-checkpoint.ipynb | Nicholass/markov_music | 34ae422498fca274272e92335d0fa0ae53e5130b | [
"Apache-2.0"
] | null | null | null | 367.1133 | 64,893 | 0.904031 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ec9b8edaaf3b7dde662fa1aad08965769e0f7e21 | 462,501 | ipynb | Jupyter Notebook | Iris-Dataset-classification/Data visualization Iris.ipynb | rthirumurugan2000/Machine-Learning-Projects | 91bb67a3fad1a56a0cb62b69897612324b56d6f2 | [
"MIT"
] | 2 | 2020-04-09T11:12:59.000Z | 2020-04-13T07:27:45.000Z | Iris-Dataset-classification/Data visualization Iris.ipynb | rthirumurugan2000/Machine-learning-Projects | 91bb67a3fad1a56a0cb62b69897612324b56d6f2 | [
"MIT"
] | null | null | null | Iris-Dataset-classification/Data visualization Iris.ipynb | rthirumurugan2000/Machine-learning-Projects | 91bb67a3fad1a56a0cb62b69897612324b56d6f2 | [
"MIT"
] | null | null | null | 595.239382 | 124,252 | 0.945777 | [
[
[
"# IRIS DATASET VISUALIZATION\n\n### DEVELOPER:THIRUMURUGAN RAMAR",
"_____no_output_____"
]
],
[
[
"import itertools\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import NullFormatter\nimport pandas as pd\nimport numpy as np\nimport matplotlib.ticker as ticker\nfrom sklearn import preprocessing\n%matplotlib inline\nimport seaborn as sns",
"_____no_output_____"
],
[
"import warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
],
[
"sns.set(style='white', color_codes=True)",
"_____no_output_____"
],
[
"iris = pd.read_csv('Iris.csv')",
"_____no_output_____"
],
[
"iris.head()",
"_____no_output_____"
],
[
"iris['Species'].value_counts()",
"_____no_output_____"
]
],
[
[
"#### PLOTS:",
"_____no_output_____"
]
],
[
[
"iris.plot(kind='scatter',x='SepalLengthCm', y='SepalWidthCm')",
"_____no_output_____"
],
[
"sns.jointplot(x='SepalLengthCm',y='SepalWidthCm', data=iris)",
"_____no_output_____"
],
[
"iris.shape",
"_____no_output_____"
],
[
"iris.describe()",
"_____no_output_____"
],
[
"iris.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 150 entries, 0 to 149\nData columns (total 6 columns):\nId 150 non-null int64\nSepalLengthCm 150 non-null float64\nSepalWidthCm 150 non-null float64\nPetalLengthCm 150 non-null float64\nPetalWidthCm 150 non-null float64\nSpecies 150 non-null object\ndtypes: float64(4), int64(1), object(1)\nmemory usage: 7.1+ KB\n"
],
[
"sns.FacetGrid(iris, hue = 'Species', size=5) \\\n .map(plt.scatter, 'SepalLengthCm','SepalWidthCm') \\\n .add_legend()",
"_____no_output_____"
],
[
"sns.boxplot(x='Species', y='PetalLengthCm', data=iris)",
"_____no_output_____"
],
[
"sns.violinplot(x='Species',y='PetalLengthCm', data=iris)",
"_____no_output_____"
],
[
"sns.FacetGrid(iris, hue=\"Species\", size=6) \\\n .map(sns.kdeplot, \"PetalLengthCm\") \\\n .add_legend()",
"_____no_output_____"
],
[
"sns.pairplot(iris.drop('Id', axis=1), hue='Species')",
"_____no_output_____"
],
[
"iris.drop('Id', axis=1).boxplot(by='Species')",
"_____no_output_____"
]
],
[
[
"#### Andrew curves:\nAndrews Curves involve using attributes of samples as coefficients for Fourier series",
"_____no_output_____"
]
],
[
[
"from pandas.tools.plotting import andrews_curves\nandrews_curves(iris.drop(\"Id\", axis=1), \"Species\")",
"_____no_output_____"
]
],
[
[
"#### Parallel co-ordinates:",
"_____no_output_____"
]
],
[
[
"from pandas.tools.plotting import parallel_coordinates\nparallel_coordinates(iris.drop(\"Id\", axis=1), \"Species\")",
"_____no_output_____"
]
],
[
[
"#### Radviz\nWhich puts each feature as a point on a 2D plane, and then simulateshaving each sample attached to those points through a spring weighted by the relative value for that feature",
"_____no_output_____"
]
],
[
[
"from pandas.tools.plotting import radviz\nradviz(iris.drop(\"Id\", axis=1), \"Species\")",
"_____no_output_____"
],
[
"sns.factorplot('SepalLengthCm', data=iris, hue='Species', kind='count' )",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
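The Iris notebook above imports `andrews_curves`, `parallel_coordinates`, and `radviz` from `pandas.tools.plotting`, a module path that only exists in old pandas releases; in current pandas the same functions live under `pandas.plotting`. A sketch of the equivalent calls against the modern path:

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import andrews_curves, parallel_coordinates, radviz

iris = pd.read_csv("Iris.csv")        # same file the notebook loads
features = iris.drop("Id", axis=1)    # keep Species as the class column

andrews_curves(features, "Species")
plt.show()

parallel_coordinates(features, "Species")
plt.show()

radviz(features, "Species")
plt.show()
```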
ec9bb88dad792838a8cf2b348b2e0d83b293a45d | 38,286 | ipynb | Jupyter Notebook | Intro to python/IntroToPy_part3.ipynb | VRU-CE/Information_Retrieval-4002 | 1ad53cb06956813e6916cf9c2abc45fd1b132397 | [
"MIT"
] | null | null | null | Intro to python/IntroToPy_part3.ipynb | VRU-CE/Information_Retrieval-4002 | 1ad53cb06956813e6916cf9c2abc45fd1b132397 | [
"MIT"
] | null | null | null | Intro to python/IntroToPy_part3.ipynb | VRU-CE/Information_Retrieval-4002 | 1ad53cb06956813e6916cf9c2abc45fd1b132397 | [
"MIT"
] | 1 | 2022-02-28T18:14:12.000Z | 2022-02-28T18:14:12.000Z | 17.214928 | 556 | 0.418195 | [
[
[
"## Strings",
"_____no_output_____"
]
],
[
[
"import math\n\nstr1 = \"Hi there!\"\nstr2 = 'Nice to meet you.'",
"_____no_output_____"
],
[
"print(str1)\nprint(type(str1))\n\nprint(str2)\nprint(type(str2))",
"Hi there!\n<class 'str'>\nNice to meet you.\n<class 'str'>\n"
],
[
"str1[0]",
"_____no_output_____"
],
[
"str1[:]",
"_____no_output_____"
],
[
"str1[-1]",
"_____no_output_____"
],
[
"str2[8:11]",
"_____no_output_____"
],
[
"str2[8]",
"_____no_output_____"
],
[
"str2[11]",
"_____no_output_____"
],
[
"str2[8:12]",
"_____no_output_____"
],
[
"str2[0:4]",
"_____no_output_____"
],
[
"str2[0:4:2]",
"_____no_output_____"
],
[
"str2[0:4:-1]",
"_____no_output_____"
],
[
"str2[-14:-18:-1]",
"_____no_output_____"
],
[
"str2[5:]",
"_____no_output_____"
],
[
"str2[:4]",
"_____no_output_____"
],
[
"msg = \"\"\"\n Hi this is a multi line message,\n this is the second line.\n\"\"\"",
"_____no_output_____"
],
[
"print(msg)",
"\n Hi this is a multi line message,\n this is the second line.\n\n"
],
[
"[print(char) for char in msg]",
"\n\n \n \n \n \nH\ni\n \nt\nh\ni\ns\n \ni\ns\n \na\n \nm\nu\nl\nt\ni\n \nl\ni\nn\ne\n \nm\ne\ns\ns\na\ng\ne\n,\n\n\n \n \n \n \nt\nh\ni\ns\n \ni\ns\n \nt\nh\ne\n \ns\ne\nc\no\nn\nd\n \nl\ni\nn\ne\n.\n\n\n"
],
[
"msg.upper()\nprint(msg)",
"\n Hi this is a multi line message,\n this is the second line.\n\n"
],
[
"msg1 = msg.upper()\nprint(msg1)",
"\n HI THIS IS A MULTI LINE MESSAGE,\n THIS IS THE SECOND LINE.\n\n"
],
[
"msg2 = msg.lower()\nprint(msg2)",
"\n hi this is a multi line message,\n this is the second line.\n\n"
],
[
"msg3 = msg.split()\nprint(msg3)",
"['Hi', 'this', 'is', 'a', 'multi', 'line', 'message,', 'this', 'is', 'the', 'second', 'line.']\n"
],
[
"msg4 = msg.strip()\nmsg4",
"_____no_output_____"
],
[
"msg",
"_____no_output_____"
],
[
"firstString = \"der Fluß\"\nsecondString = \"der Fluss\"\n\nprint(firstString.casefold())\nprint(firstString.lower())",
"der fluss\nder fluß\n"
],
[
"msg5 = msg2.strip().capitalize()\nmsg5",
"_____no_output_____"
],
[
"msg5.count('i')",
"_____no_output_____"
],
[
"msg5.count(\"line\")",
"_____no_output_____"
],
[
"msg5.count(\"line\", 16, 26)",
"_____no_output_____"
],
[
"msg5.find(\"line\")",
"_____no_output_____"
],
[
"msg5.rfind(\"line\")",
"_____no_output_____"
],
[
"s = ', '\nt = ['1', '2', '3']\ns.join(t)",
"_____no_output_____"
]
],
[
[
"## Lists",
"_____no_output_____"
]
],
[
[
"from IPython.core.display_functions import display\nimport math\nl1 = [1, 2, 3, math.pi, 4.0, 5.12, 2+4j, \"six\", True, [\"hii\", False, 3], (9, 5, 2), {2, 2, 'cool'}, {1:\"one\", 2:\"two\"}]\ndisplay(l1)",
"_____no_output_____"
],
[
"[print(item) for item in l1]",
"1\n2\n3\n3.141592653589793\n4.0\n5.12\n(2+4j)\nsix\nTrue\n['hii', False, 3]\n(9, 5, 2)\n{'cool', 2}\n{1: 'one', 2: 'two'}\n"
],
[
"[print(inside_item) for item in l1 for inside_item in item]",
"_____no_output_____"
],
[
"l2 = l1[-1:-5:-1]\nl2",
"_____no_output_____"
],
[
"[print(inside_item) for item in l2 for inside_item in item]",
"1\n2\ncool\n2\n9\n5\n2\nhii\nFalse\n3\n"
],
[
"l1[-4:]",
"_____no_output_____"
],
[
"l1[-1][1]",
"_____no_output_____"
],
[
"l1[-1][2]",
"_____no_output_____"
],
[
"l1[-1][1]",
"_____no_output_____"
],
[
"l1[-2]",
"_____no_output_____"
],
[
"l1[-3][1]",
"_____no_output_____"
],
[
"l1[-4][:]",
"_____no_output_____"
],
[
"l3 = [1, 2, 3, 4, 5, 6, 7, 8]",
"_____no_output_____"
],
[
"l3[0] = 10\nprint(l3)",
"[10, 2, 3, 4, 5, 6, 7, 8]\n"
],
[
"l3[1:4] = [4, 3, 2]\nprint(l3)",
"[10, 4, 3, 2, 5, 6, 7, 8]\n"
],
[
"l3.append(9)\nl3",
"_____no_output_____"
],
[
"l3.extend([11, 12, 13])",
"_____no_output_____"
],
[
"l3",
"_____no_output_____"
],
[
"l4 = l3[7:]\nl3 = l3[:7]\nprint(l4, l3)",
"[8, 9, 11, 12, 13] [10, 4, 3, 2, 5, 6, 7]\n"
],
[
"l5 = l4 + l3\nprint(l5)",
"[8, 9, 11, 12, 13, 10, 4, 3, 2, 5, 6, 7]\n"
],
[
"l5.insert(0, 0)",
"_____no_output_____"
],
[
"print(l5)",
"[0, 8, 9, 11, 12, 13, 10, 4, 3, 2, 5, 6, 7]\n"
],
[
"l5.insert(-1, 0)\nl5",
"_____no_output_____"
],
[
"l5.insert(14, 0)",
"_____no_output_____"
],
[
"l5",
"_____no_output_____"
],
[
"l5[1:1] = [50,60,70]\nl5",
"_____no_output_____"
],
[
"del l5[-3]\nl5",
"_____no_output_____"
],
[
"del l5[4:6]\nl5",
"_____no_output_____"
],
[
"l6 = l5[-3:]\nl6",
"_____no_output_____"
],
[
"del l6",
"_____no_output_____"
],
[
"l6",
"_____no_output_____"
],
[
"l5",
"_____no_output_____"
],
[
"l5.remove(0)\nl5",
"_____no_output_____"
],
[
"l5.remove(0)\nl5",
"_____no_output_____"
],
[
"l5.pop()",
"_____no_output_____"
],
[
"l5",
"_____no_output_____"
],
[
"l5.pop(1)\nl5",
"_____no_output_____"
],
[
"l6 = l5[0:2]\nl6",
"_____no_output_____"
],
[
"l6.clear()",
"_____no_output_____"
],
[
"l6",
"_____no_output_____"
],
[
"l6 = l5\nl6",
"_____no_output_____"
],
[
"l6[-1] = []\nl6",
"_____no_output_____"
],
[
"l6[-2:] = []\nl6",
"_____no_output_____"
],
[
"l5",
"_____no_output_____"
],
[
"from copy import deepcopy\n\nl7 = deepcopy(l5)\nl7",
"_____no_output_____"
],
[
"l7[:] = []\nl7",
"_____no_output_____"
],
[
"l5",
"_____no_output_____"
],
[
"l5.reverse()",
"_____no_output_____"
],
[
"l5",
"_____no_output_____"
],
[
"l5.reverse()\nl5",
"_____no_output_____"
],
[
"l5.sort()\nl5",
"_____no_output_____"
],
[
"l5.sort(reverse=True)\nl5",
"_____no_output_____"
],
[
"id(l5)",
"_____no_output_____"
],
[
"sorted([9,8,7,6])",
"_____no_output_____"
],
[
"l7 = [9,8,7,6]\nsorted(l7)\nl7",
"_____no_output_____"
],
[
"max(l5)",
"_____no_output_____"
],
[
"all(l5)",
"_____no_output_____"
],
[
"matrix = [[1,2], [3,4]]\ndisplay(matrix)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
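The list section above contrasts plain assignment (`l6 = l5`, which only binds a second name to the same list) with `copy.deepcopy`. A short sketch of the three common ways to copy a nested list and how they differ:

```python
from copy import copy, deepcopy

original = [1, 2, [3, 4]]

alias = original           # same object: every change is shared
shallow = copy(original)   # new outer list, but the inner list is shared
deep = deepcopy(original)  # fully independent copy

original[2].append(5)

print(alias)    # [1, 2, [3, 4, 5]]
print(shallow)  # [1, 2, [3, 4, 5]]  -> inner list was shared
print(deep)     # [1, 2, [3, 4]]     -> unaffected
```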
ec9bc6645c27da52c3f432cfa9853c1f84ad0295 | 18,648 | ipynb | Jupyter Notebook | site/en/tutorials/keras/text_classification_with_hub.ipynb | veeps/docs | 20e6b9807038fcf3e9fb162bec0f5cacfcc7974f | [
"Apache-2.0"
] | 2 | 2020-06-20T14:10:44.000Z | 2020-10-12T07:10:34.000Z | site/en/tutorials/keras/text_classification_with_hub.ipynb | veeps/docs | 20e6b9807038fcf3e9fb162bec0f5cacfcc7974f | [
"Apache-2.0"
] | null | null | null | site/en/tutorials/keras/text_classification_with_hub.ipynb | veeps/docs | 20e6b9807038fcf3e9fb162bec0f5cacfcc7974f | [
"Apache-2.0"
] | null | null | null | 40.189655 | 456 | 0.578561 | [
[
[
"##### Copyright 2019 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
],
[
"#@title MIT License\n#\n# Copyright (c) 2017 François Chollet\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.",
"_____no_output_____"
]
],
[
[
"# Text classification with TensorFlow Hub: Movie reviews",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/keras/text_classification_with_hub\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/text_classification_with_hub.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/text_classification_with_hub.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/keras/text_classification_with_hub.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"\nThis notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is an example of *binary*—or two-class—classification, an important and widely applicable kind of machine learning problem.\n\nThe tutorial demonstrates the basic application of transfer learning with TensorFlow Hub and Keras.\n\nWe'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews. \n\nThis notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow, and [TensorFlow Hub](https://www.tensorflow.org/hub), a library and platform for transfer learning. For a more advanced text classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).",
"_____no_output_____"
]
],
[
[
"from __future__ import absolute_import, division, print_function, unicode_literals\n\nimport numpy as np\n\ntry:\n # %tensorflow_version only exists in Colab.\n %tensorflow_version 2.x\nexcept Exception:\n pass\nimport tensorflow as tf\n\n!pip install tensorflow-hub\n!pip install tfds-nightly\nimport tensorflow_hub as hub\nimport tensorflow_datasets as tfds\n\nprint(\"Version: \", tf.__version__)\nprint(\"Eager mode: \", tf.executing_eagerly())\nprint(\"Hub version: \", hub.__version__)\nprint(\"GPU is\", \"available\" if tf.config.experimental.list_physical_devices(\"GPU\") else \"NOT AVAILABLE\")",
"_____no_output_____"
]
],
[
[
"## Download the IMDB dataset\n\nThe IMDB dataset is available on [imdb reviews](https://www.tensorflow.org/datasets/catalog/imdb_reviews) or on [TensorFlow datasets](https://www.tensorflow.org/datasets). The following code downloads the IMDB dataset to your machine (or the colab runtime):",
"_____no_output_____"
]
],
[
[
"# Split the training set into 60% and 40%, so we'll end up with 15,000 examples\n# for training, 10,000 examples for validation and 25,000 examples for testing.\ntrain_data, validation_data, test_data = tfds.load(\n name=\"imdb_reviews\", \n split=('train[:60%]', 'train[60%:]', 'test'),\n as_supervised=True)",
"_____no_output_____"
]
],
[
[
"## Explore the data \n\nLet's take a moment to understand the format of the data. Each example is a sentence representing the movie review and a corresponding label. The sentence is not preprocessed in any way. The label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review.\n\nLet's print first 10 examples.",
"_____no_output_____"
]
],
[
[
"train_examples_batch, train_labels_batch = next(iter(train_data.batch(10)))\ntrain_examples_batch",
"_____no_output_____"
]
],
[
[
"Let's also print the first 10 labels.",
"_____no_output_____"
]
],
[
[
"train_labels_batch",
"_____no_output_____"
]
],
[
[
"## Build the model\n\nThe neural network is created by stacking layers—this requires three main architectural decisions:\n\n* How to represent the text?\n* How many layers to use in the model?\n* How many *hidden units* to use for each layer?\n\nIn this example, the input data consists of sentences. The labels to predict are either 0 or 1.\n\nOne way to represent the text is to convert sentences into embeddings vectors. We can use a pre-trained text embedding as the first layer, which will have three advantages:\n\n* we don't have to worry about text preprocessing,\n* we can benefit from transfer learning,\n* the embedding has a fixed size, so it's simpler to process.\n\nFor this example we will use a **pre-trained text embedding model** from [TensorFlow Hub](https://www.tensorflow.org/hub) called [google/tf2-preview/gnews-swivel-20dim/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1).\n\nThere are three other pre-trained models to test for the sake of this tutorial:\n\n* [google/tf2-preview/gnews-swivel-20dim-with-oov/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1) - same as [google/tf2-preview/gnews-swivel-20dim/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1), but with 2.5% vocabulary converted to OOV buckets. This can help if vocabulary of the task and vocabulary of the model don't fully overlap.\n* [google/tf2-preview/nnlm-en-dim50/1](https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1) - A much larger model with ~1M vocabulary size and 50 dimensions.\n* [google/tf2-preview/nnlm-en-dim128/1](https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1) - Even larger model with ~1M vocabulary size and 128 dimensions.",
"_____no_output_____"
],
[
"Let's first create a Keras layer that uses a TensorFlow Hub model to embed the sentences, and try it out on a couple of input examples. Note that no matter the length of the input text, the output shape of the embeddings is: `(num_examples, embedding_dimension)`.",
"_____no_output_____"
]
],
[
[
"embedding = \"https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1\"\nhub_layer = hub.KerasLayer(embedding, input_shape=[], \n dtype=tf.string, trainable=True)\nhub_layer(train_examples_batch[:3])",
"_____no_output_____"
]
],
[
[
"Let's now build the full model:",
"_____no_output_____"
]
],
[
[
"model = tf.keras.Sequential()\nmodel.add(hub_layer)\nmodel.add(tf.keras.layers.Dense(16, activation='relu'))\nmodel.add(tf.keras.layers.Dense(1))\n\nmodel.summary()",
"_____no_output_____"
]
],
[
[
"The layers are stacked sequentially to build the classifier:\n\n1. The first layer is a TensorFlow Hub layer. This layer uses a pre-trained Saved Model to map a sentence into its embedding vector. The pre-trained text embedding model that we are using ([google/tf2-preview/gnews-swivel-20dim/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1)) splits the sentence into tokens, embeds each token and then combines the embedding. The resulting dimensions are: `(num_examples, embedding_dimension)`.\n2. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.\n3. The last layer is densely connected with a single output node. Using the `sigmoid` activation function, this value is a float between 0 and 1, representing a probability, or confidence level.\n\nLet's compile the model.",
"_____no_output_____"
],
[
"### Loss function and optimizer\n\nA model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the `binary_crossentropy` loss function. \n\nThis isn't the only choice for a loss function, you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better for dealing with probabilities—it measures the \"distance\" between probability distributions, or in our case, between the ground-truth distribution and the predictions.\n\nLater, when we are exploring regression problems (say, to predict the price of a house), we will see how to use another loss function called mean squared error.\n\nNow, configure the model to use an optimizer and a loss function:",
"_____no_output_____"
]
],
[
[
"model.compile(optimizer='adam',\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"## Train the model\n\nTrain the model for 20 epochs in mini-batches of 512 samples. This is 20 iterations over all samples in the `x_train` and `y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:",
"_____no_output_____"
]
],
[
[
"history = model.fit(train_data.shuffle(10000).batch(512),\n epochs=20,\n validation_data=validation_data.batch(512),\n verbose=1)",
"_____no_output_____"
]
],
[
[
"## Evaluate the model\n\nAnd let's see how the model performs. Two values will be returned. Loss (a number which represents our error, lower values are better), and accuracy.",
"_____no_output_____"
]
],
[
[
"results = model.evaluate(test_data.batch(512), verbose=2)\n\nfor name, value in zip(model.metrics_names, results):\n print(\"%s: %.3f\" % (name, value))",
"_____no_output_____"
]
],
[
[
"This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.",
"_____no_output_____"
],
[
"## Further reading\n\nFor a more general way to work with string inputs and for a more detailed analysis of the progress of accuracy and loss during training, take a look [here](https://www.tensorflow.org/tutorials/keras/basic_text_classification).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
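Because the classifier above ends in a single-unit `Dense` layer trained with `from_logits=True`, its raw predictions are logits; applying a sigmoid turns them into probabilities. A minimal inference sketch that reuses the `model` built in that notebook (the review sentences are made up for illustration):

```python
import tensorflow as tf

# Hypothetical review texts, not taken from the IMDB dataset.
samples = tf.constant([
    "A wonderful film with great acting and a touching story.",
    "Dull, predictable, and far too long.",
])

logits = model.predict(samples)     # shape (2, 1): raw logits
probabilities = tf.sigmoid(logits)  # probability that each review is positive
print(probabilities.numpy())
```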
ec9bc81967067fe9f3563e2cd6b5e5e30163821b | 2,053 | ipynb | Jupyter Notebook | openmdao/docs/openmdao_book/other/template.ipynb | tong0711/OpenMDAO | d8496a0e606df405b2472f1c96b3c543eacaca5a | [
"Apache-2.0"
] | 451 | 2015-07-20T11:52:35.000Z | 2022-03-28T08:04:56.000Z | openmdao/docs/openmdao_book/other/template.ipynb | tong0711/OpenMDAO | d8496a0e606df405b2472f1c96b3c543eacaca5a | [
"Apache-2.0"
] | 1,096 | 2015-07-21T03:08:26.000Z | 2022-03-31T11:59:17.000Z | openmdao/docs/openmdao_book/other/template.ipynb | tong0711/OpenMDAO | d8496a0e606df405b2472f1c96b3c543eacaca5a | [
"Apache-2.0"
] | 301 | 2015-07-16T20:02:11.000Z | 2022-03-28T08:04:39.000Z | 25.6625 | 312 | 0.600585 | [
[
[
"try:\n from openmdao.utils.notebook_utils import notebook_mode\nexcept ImportError:\n !python -m pip install openmdao[notebooks]",
"_____no_output_____"
]
],
[
[
"# Title\n\nThis is a template for notebooks in this JupyterBook.\nSlides following this slide will generally be a mix of Code and Markdown cells.\n\nThe first cell above is used to ensure that OpenMDAO will be installed on Colab or Binder if the notebook is opened there. This cell will be removed from the documentation (using tags `remove-input` and `remove-output`), so examples should still include their own `import openmdao.api as om` statement.\n\nThe assertions below are generally in the last cell but do not need to be. These are also removed from the docs since they're typically just used for testing the documentation itself.",
"_____no_output_____"
]
],
[
[
"# Add assertions that will be used to test that the docs are functional.\n\nfrom openmdao.utils.assert_utils import assert_near_equal\n\nassert_near_equal(1, 1)\n\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
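The template above only asserts `1 == 1`; a real documentation notebook would typically run a small model and assert on its output. A sketch of what such a cell could look like — the `ExecComp` expression and values are illustrative, not taken from the template:

```python
import openmdao.api as om
from openmdao.utils.assert_utils import assert_near_equal

prob = om.Problem()
prob.model.add_subsystem("comp", om.ExecComp("y = 2.0 * x + 1.0"), promotes=["*"])
prob.setup()

prob.set_val("x", 3.0)
prob.run_model()

# Cells like this are tagged so they are removed from the rendered docs
# but still exercised when the documentation is tested.
assert_near_equal(prob.get_val("y"), 7.0, tolerance=1e-10)
```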
ec9bceba088386425b57633d99ccc92174b68c9e | 366,633 | ipynb | Jupyter Notebook | starter_code/WeatherPy.ipynb | mthemist3/python-api-challenge | 54a8623c755e1e057f99538a0f0ee1296f1a6155 | [
"ADSL"
] | null | null | null | starter_code/WeatherPy.ipynb | mthemist3/python-api-challenge | 54a8623c755e1e057f99538a0f0ee1296f1a6155 | [
"ADSL"
] | null | null | null | starter_code/WeatherPy.ipynb | mthemist3/python-api-challenge | 54a8623c755e1e057f99538a0f0ee1296f1a6155 | [
"ADSL"
] | null | null | null | 170.052412 | 36,544 | 0.870623 | [
[
[
"# WeatherPy\n----\n\n#### Note\n* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.",
"_____no_output_____"
]
],
[
[
"# Dependencies and Setup\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport requests\nimport time\nfrom scipy.stats import linregress\n\n# Import API key\nfrom api_keys import weather_api_key\n\n# Incorporated citipy to determine city based on latitude and longitude\nfrom citipy import citipy\n\n# Output File (CSV)\noutput_data_file = \"output_data/cities.csv\"\n\n# Range of latitudes and longitudes\nlat_range = (-90, 90)\nlng_range = (-180, 180)",
"_____no_output_____"
]
],
[
[
"## Generate Cities List",
"_____no_output_____"
]
],
[
[
"# List for holding lat_lngs and cities\nlat_lngs = []\ncities = []\n\n# Create a set of random lat and lng combinations\nlats = np.random.uniform(lat_range[0], lat_range[1], size=1500)\nlngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)\nlat_lngs = zip(lats, lngs)\n\n# Identify nearest city for each lat, lng combination\nfor lat_lng in lat_lngs:\n city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name\n \n # If the city is unique, then add it to a our cities list\n if city not in cities:\n cities.append(city)\n\n# Print the city count to confirm sufficient count\nlen(cities)",
"_____no_output_____"
]
],
[
[
"### Perform API Calls\n* Perform a weather check on each city using a series of successive API calls.\n* Include a print log of each city as it'sbeing processed (with the city number and city name).\n",
"_____no_output_____"
]
],
[
[
"# Create a base URL and make url to fahrenheit \nbase_url = 'http://api.openweathermap.org/data/2.5/weather?'\nunits = 'imperial'\n\ncity_name = []\nlat = []\nlong = []\nmax_temp = []\nhumidity = []\ncloudiness = []\nwind_speed = []\ncountry = []\ndate = []\ncount=0\n\n\n# Build query URL using openweathermap website help\nfor city in cities:\n response = requests.get(query_url+city)\n# print(response.url)\n response = response.json()\n try:\n city_name.append(response['name'])\n lat.append(response['coord']['lat'])\n long.append(response['coord']['lon'])\n max_temp.append(response['main']['temp_max'])\n humidity.append(response['main']['humidity'])\n cloudiness.append(response['clouds']['all']) \n wind_speed.append(response['wind']['speed'])\n country.append(response['sys']['country'])\n date.append(response['dt'])\n count+=1\n \n print(f'Processing record {count} of {len(cities)}: | {city}')\n except:\n print(\"City not found, skipping...\")\n time.sleep(1.1)\n pass\n# Indicate that Data Loading is complete\nprint(\"-----------------------------\")\nprint(\"Data Retrieval Complete \")\nprint(\"-----------------------------\")",
"Processing record 1 of 620: | dikson\nProcessing record 2 of 620: | tarauaca\nProcessing record 3 of 620: | azare\nProcessing record 4 of 620: | madera\nProcessing record 5 of 620: | punta arenas\nProcessing record 6 of 620: | yerbogachen\nProcessing record 7 of 620: | avarua\nProcessing record 8 of 620: | ambovombe\nProcessing record 9 of 620: | kamyshlov\nProcessing record 10 of 620: | hilo\nProcessing record 11 of 620: | taree\nProcessing record 12 of 620: | buala\nProcessing record 13 of 620: | cherskiy\nCity not found, skipping...\nProcessing record 14 of 620: | yellowknife\nProcessing record 15 of 620: | diu\nProcessing record 16 of 620: | port alfred\nCity not found, skipping...\nProcessing record 17 of 620: | castro-urdiales\nProcessing record 18 of 620: | busselton\nCity not found, skipping...\nProcessing record 19 of 620: | luderitz\nProcessing record 20 of 620: | si sa ket\nProcessing record 21 of 620: | souillac\nProcessing record 22 of 620: | burley\nProcessing record 23 of 620: | albany\nProcessing record 24 of 620: | upernavik\nProcessing record 25 of 620: | aklavik\nCity not found, skipping...\nProcessing record 26 of 620: | shellbrook\nProcessing record 27 of 620: | rikitea\nProcessing record 28 of 620: | elliot\nCity not found, skipping...\nProcessing record 29 of 620: | tasiilaq\nProcessing record 30 of 620: | ranong\nProcessing record 31 of 620: | santiago del estero\nProcessing record 32 of 620: | mataura\nProcessing record 33 of 620: | birao\nProcessing record 34 of 620: | butaritari\nProcessing record 35 of 620: | new norfolk\nProcessing record 36 of 620: | sitka\nProcessing record 37 of 620: | miri\nCity not found, skipping...\nProcessing record 38 of 620: | sisimiut\nProcessing record 39 of 620: | antofagasta\nProcessing record 40 of 620: | chuy\nProcessing record 41 of 620: | huilong\nCity not found, skipping...\nProcessing record 42 of 620: | broome\nProcessing record 43 of 620: | hithadhoo\nProcessing record 44 of 620: | castro\nProcessing record 45 of 620: | camana\nProcessing record 46 of 620: | atuona\nProcessing record 47 of 620: | cap malheureux\nProcessing record 48 of 620: | kruisfontein\nProcessing record 49 of 620: | nanortalik\nCity not found, skipping...\nProcessing record 50 of 620: | hobart\nProcessing record 51 of 620: | thompson\nProcessing record 52 of 620: | vaini\nProcessing record 53 of 620: | pisco\nProcessing record 54 of 620: | lebu\nProcessing record 55 of 620: | padang\nProcessing record 56 of 620: | ushuaia\nProcessing record 57 of 620: | kaitangata\nProcessing record 58 of 620: | husavik\nProcessing record 59 of 620: | hermanus\nProcessing record 60 of 620: | borjomi\nProcessing record 61 of 620: | flinders\nProcessing record 62 of 620: | auki\nProcessing record 63 of 620: | port angeles\nProcessing record 64 of 620: | college\nProcessing record 65 of 620: | bambous virieux\nProcessing record 66 of 620: | bengkulu\nProcessing record 67 of 620: | victoria\nProcessing record 68 of 620: | bowen\nProcessing record 69 of 620: | dawson creek\nProcessing record 70 of 620: | sao felix do xingu\nProcessing record 71 of 620: | pangkalanbuun\nProcessing record 72 of 620: | jamestown\nProcessing record 73 of 620: | shimoda\nProcessing record 74 of 620: | porto novo\nProcessing record 75 of 620: | puerto ayora\nProcessing record 76 of 620: | nome\nCity not found, skipping...\nProcessing record 77 of 620: | east london\nProcessing record 78 of 620: | kutum\nProcessing record 79 of 620: | gurupa\nProcessing record 80 of 620: | rio gallegos\nCity not 
found, skipping...\nProcessing record 81 of 620: | boshnyakovo\nProcessing record 82 of 620: | phan thiet\nProcessing record 83 of 620: | provideniya\nCity not found, skipping...\nCity not found, skipping...\nProcessing record 84 of 620: | barrow\nProcessing record 85 of 620: | colwyn bay\nProcessing record 86 of 620: | lompoc\nProcessing record 87 of 620: | catamarca\nProcessing record 88 of 620: | bathsheba\nProcessing record 89 of 620: | norman wells\nProcessing record 90 of 620: | jalu\nProcessing record 91 of 620: | te anau\nProcessing record 92 of 620: | rapid valley\nProcessing record 93 of 620: | sao filipe\nProcessing record 94 of 620: | port elizabeth\nProcessing record 95 of 620: | georgetown\nProcessing record 96 of 620: | hlatikulu\nProcessing record 97 of 620: | cidreira\nProcessing record 98 of 620: | port blair\nProcessing record 99 of 620: | vila\nCity not found, skipping...\nProcessing record 100 of 620: | tiksi\nProcessing record 101 of 620: | hays\nProcessing record 102 of 620: | senador jose porfirio\nProcessing record 103 of 620: | lokosovo\nProcessing record 104 of 620: | half moon bay\nProcessing record 105 of 620: | lang suan\nProcessing record 106 of 620: | shieli\nProcessing record 107 of 620: | terrak\nProcessing record 108 of 620: | hamilton\nProcessing record 109 of 620: | coihaique\nProcessing record 110 of 620: | vallenar\nProcessing record 111 of 620: | arraial do cabo\nCity not found, skipping...\nProcessing record 112 of 620: | sabang\nProcessing record 113 of 620: | pevek\nProcessing record 114 of 620: | bredasdorp\nProcessing record 115 of 620: | nouakchott\nProcessing record 116 of 620: | dunedin\nCity not found, skipping...\nProcessing record 117 of 620: | storm lake\nProcessing record 118 of 620: | ahja\nProcessing record 119 of 620: | cape town\nProcessing record 120 of 620: | faya\nProcessing record 121 of 620: | davila\nProcessing record 122 of 620: | ancud\nProcessing record 123 of 620: | pitimbu\nProcessing record 124 of 620: | nemuro\nProcessing record 125 of 620: | kodiak\nProcessing record 126 of 620: | salalah\nCity not found, skipping...\nProcessing record 127 of 620: | geraldton\nProcessing record 128 of 620: | saint george\nProcessing record 129 of 620: | tuatapere\nProcessing record 130 of 620: | fairbanks\nProcessing record 131 of 620: | paita\nProcessing record 132 of 620: | vao\nProcessing record 133 of 620: | nova olinda do norte\nProcessing record 134 of 620: | moron\nProcessing record 135 of 620: | saskylakh\nProcessing record 136 of 620: | estrela\nProcessing record 137 of 620: | tenkodogo\nProcessing record 138 of 620: | clyde river\nProcessing record 139 of 620: | kapaa\nCity not found, skipping...\nProcessing record 140 of 620: | kavieng\nCity not found, skipping...\nProcessing record 141 of 620: | vozrozhdeniye\nProcessing record 142 of 620: | saldanha\nProcessing record 143 of 620: | yei\nProcessing record 144 of 620: | maloy\nProcessing record 145 of 620: | mondlo\nProcessing record 146 of 620: | faanui\nCity not found, skipping...\nProcessing record 147 of 620: | bandarbeyla\nProcessing record 148 of 620: | tarboro\nProcessing record 149 of 620: | bhuj\nProcessing record 150 of 620: | pokrovsk\nProcessing record 151 of 620: | bluff\nProcessing record 152 of 620: | narsaq\nProcessing record 153 of 620: | kudahuvadhoo\nProcessing record 154 of 620: | westport\nProcessing record 155 of 620: | bilma\nProcessing record 156 of 620: | nurota\nProcessing record 157 of 620: | airai\nProcessing record 158 of 620: | 
carnarvon\nProcessing record 159 of 620: | billings\nProcessing record 160 of 620: | port macquarie\nProcessing record 161 of 620: | sao jose dos pinhais\nProcessing record 162 of 620: | corning\nCity not found, skipping...\nCity not found, skipping...\nCity not found, skipping...\nProcessing record 163 of 620: | singureni\nProcessing record 164 of 620: | funadhoo\nProcessing record 165 of 620: | bethel\nProcessing record 166 of 620: | kirakira\nCity not found, skipping...\nProcessing record 167 of 620: | myitkyina\nProcessing record 168 of 620: | fortuna\nProcessing record 169 of 620: | vzmorye\nProcessing record 170 of 620: | aguimes\nProcessing record 171 of 620: | lashio\nCity not found, skipping...\nProcessing record 172 of 620: | inongo\nProcessing record 173 of 620: | jesus carranza\nProcessing record 174 of 620: | nacala\nProcessing record 175 of 620: | challapata\nProcessing record 176 of 620: | beruwala\nProcessing record 177 of 620: | tuktoyaktuk\nProcessing record 178 of 620: | ribeira grande\nProcessing record 179 of 620: | baruun-urt\nProcessing record 180 of 620: | barentu\nCity not found, skipping...\nCity not found, skipping...\nProcessing record 181 of 620: | touros\nProcessing record 182 of 620: | iquitos\nProcessing record 183 of 620: | mar del plata\nProcessing record 184 of 620: | barstow\nProcessing record 185 of 620: | longyearbyen\n"
]
],
[
[
"### Convert Raw Data to DataFrame\n* Export the city data into a .csv.\n* Display the DataFrame",
"_____no_output_____"
]
],
[
[
"# Created a dictionary to make into a dataframe\nweather_info = {\"City\": city_name,\n \"Latitude\": lat,\n \"Longitude\": long,\n \"Max Temp\": max_temp,\n \"Humidity\": humidity,\n \"Cloudiness (%)\": cloudiness,\n \"Wind Speed (m/s)\": wind_speed,\n \"Country\": country,\n \"Date\": date}\n\n# Dataframe created and saved\nweather_df = pd.DataFrame(weather_info)\nweather_df.to_csv('../Output_data/Output_weather_data.csv')\nweather_df.head()",
"_____no_output_____"
]
],
[
[
"## Inspect the data and remove the cities where the humidity > 100%.\n----\nSkip this step if there are no cities that have humidity > 100%. ",
"_____no_output_____"
]
],
[
[
"# Check humidity over 100%\nweather_df['Humidity']>100",
"_____no_output_____"
],
[
"# Get the indices of cities that have humidity over 100%.\nhum_data = weather_df[weather_df['Humidity']>100].index",
"_____no_output_____"
],
[
"# Make a new DF to drop all data thats in the original df that checks humidity.\nclean_df = weather_df.drop(hum_data,inplace = False)\nclean_df",
"_____no_output_____"
],
[
"\n",
"_____no_output_____"
]
],
[
[
"## Plotting the Data\n* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.\n* Save the plotted figures as .pngs.",
"_____no_output_____"
],
[
"## Latitude vs. Temperature Plot",
"_____no_output_____"
]
],
[
[
"# Create a scatter plot to address latitude vs temperature\nlat_temp = clean_df.plot.scatter(x = \"Max Temp\",y = \"Latitude\", color = \"LightBlue\", edgecolor='DarkBlue')\nprint(\"The temperature increases as you get closer to the equator\")",
"The temperature increases as you get closer to the equator\n"
]
],
[
[
"## Latitude vs. Humidity Plot",
"_____no_output_____"
]
],
[
[
"# Create a scatter plot to address latitude vs humidity.\nlat_hum = clean_df.plot.scatter(x = \"Humidity\", y = \"Latitude\", color = \"LightBlue\", edgecolor='DarkBlue')\nprint(\"Humidity increases as latitude increases.\")",
"Humidity increases as latitude increases.\n"
]
],
[
[
"## Latitude vs. Cloudiness Plot",
"_____no_output_____"
]
],
[
[
"# Create a scatter plot to address latitude vs cloudiness.\nlat_cloud = clean_df.plot.scatter(y = \"Latitude\", x = \"Cloudiness (%)\", color = \"LightBlue\", edgecolor='DarkBlue')\nprint(\"Humidity tends to float around different pertantages depending on their altitude level such as 20%, 40%, and 100%.\")",
"Humidity tends to float around different pertantages depending on their altitude level such as 20%, 40%, and 100%.\n"
]
],
[
[
"## Latitude vs. Wind Speed Plot",
"_____no_output_____"
]
],
[
[
"# Create a scatter plot to address latitude vs wind speed.\nlat_wind = clean_df.plot.scatter(x = \"Wind Speed (m/s)\", y = \"Latitude\", color = \"LightBlue\", edgecolor='DarkBlue')\nprint(\"It is a rare occasion that wind speed goes over 30m/s. This also shows that there is no real correlation between wind speed and latitude.\")",
"_____no_output_____"
]
],
[
[
"## Linear Regression",
"_____no_output_____"
]
],
[
[
"#Make new df for just northern hemisphere\nnorth_df = clean_df.loc[clean_df[\"Latitude\"]>0]\nnorth_df.head()",
"_____no_output_____"
]
],
[
[
"#### Northern Hemisphere - Max Temp vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"#Make X and Y axises for temp and latitude\nx_ntemp = north_df[\"Max Temp\"]\ny_nlat = north_df[\"Latitude\"]\n\n#Calculate line regression using \"y=mx+b\"\nslope, intercept, rvalue, pvalue, stderr = linregress(x_ntemp, y_nlat)\nregress_values = x_ntemp * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\n\n#Create the Scatter Plot\n\nplt.scatter(x_ntemp, y_nlat, color = 'blue')\nplt.plot(x_ntemp,regress_values,\"r-\")\nplt.annotate(line_eq,(6,10),fontsize=15,color=\"red\")\nplt.xlabel(\"Max Temperature (F)\")\nplt.ylabel(\"Northern Latitudes\")\nplt.show(line_eq, rvalue)\n\nprint(\"The R-Value is:\", rvalue)\nprint(\"OBSERVATION: Using R-Value, you can tell that there is negative correlation between Northern Latitudes vs Max Temperature\")",
"_____no_output_____"
]
],
[
[
"#### Southern Hemisphere - Max Temp vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"south_df = clean_df.loc[clean_df[\"Latitude\"]<0]\nsouth_df.head()\n\n",
"_____no_output_____"
],
[
"#Make X and Y axises for temp and latitude\nx_stemp = south_df[\"Max Temp\"]\ny_slat = south_df[\"Latitude\"]\n\n#Calculate line regression using \"y=mx+b\"\nslope, intercept, rvalue, pvalue, stderr = linregress(x_stemp, y_slat)\nregress_values = x_stemp * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\n\n#Create the Scatter Plot\n\nplt.scatter(x_stemp, y_slat, color = 'blue')\nplt.plot(x_stemp,regress_values,\"r-\")\nplt.annotate(line_eq,(60,-45),fontsize=20,color=\"red\")\nplt.xlabel(\"Max Temperature (F)\")\nplt.ylabel(\"Southern Latitudes\")\nplt.show(line_eq, rvalue)\n\nprint(\"The R-Value is:\", rvalue)\nprint(\"OBSERVATION: Using R-Value, you can tell that there is slight positive correlation between Southern Latitude vs Max Temperature\")",
"_____no_output_____"
]
],
[
[
"#### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"#Make X and Y axises for temp and latitude\nx_nhum = north_df[\"Humidity\"]\ny_nlat = north_df[\"Latitude\"]\n\n#Calculate line regression using \"y=mx+b\"\nslope, intercept, rvalue, pvalue, stderr = linregress(x_nhum, y_nlat)\nregress_values = x_nhum * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\n\n#Create the Scatter Plot\n\nplt.scatter(x_nhum, y_nlat, color = 'blue')\nplt.plot(x_nhum,regress_values,\"r-\")\nplt.annotate(line_eq,(6,10),fontsize=10,color=\"red\")\nplt.xlabel(\"Humidity\")\nplt.ylabel(\"Northern Latitudes\")\nplt.show(line_eq, rvalue)\n\nprint(\"The R-Value is:\", rvalue)\nprint(\"OBSERVATION: Using R-Value, you can tell that there is almost no correlation between Northern Hemisphere vs Humidity (%)\")",
"_____no_output_____"
]
],
[
[
"#### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"#Make X and Y axises for temp and latitude\nx_stemp = south_df[\"Humidity\"]\ny_slat = south_df[\"Latitude\"]\n\n#Calculate line regression using \"y=mx+b\"\nslope, intercept, rvalue, pvalue, stderr = linregress(x_stemp, y_slat)\nregress_values = x_stemp * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\n\n#Create the Scatter Plot\n\nplt.scatter(x_stemp, y_slat, color = 'blue')\nplt.plot(x_stemp,regress_values,\"r-\")\nplt.annotate(line_eq,(20,-45),fontsize=20,color=\"red\")\nplt.xlabel(\"Humidity (%)\")\nplt.ylabel(\"Southern Latitudes\")\nplt.show(line_eq, rvalue)\n\nprint(\"The R-Value is:\", rvalue)\nprint(\"OBSERVATION: Using R-Value, you can tell that there is almost no correlation between Southern Hemisphere vs Humidity (%)\")",
"_____no_output_____"
]
],
[
[
"#### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"#Make X and Y axises for temp and latitude\nx_ncloud = north_df[\"Cloudiness (%)\"]\ny_nlat = north_df[\"Latitude\"]\n\n#Calculate line regression using \"y=mx+b\"\nslope, intercept, rvalue, pvalue, stderr = linregress(x_ncloud, y_nlat)\nregress_values = x_ncloud * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\n\n#Create the Scatter Plot\n\nplt.scatter(x_ncloud, y_nlat, color = 'blue')\nplt.plot(x_ncloud,regress_values,\"r-\")\nplt.annotate(line_eq,(6,10),fontsize=15,color=\"red\")\nplt.xlabel(\"Cloudiness (%)\")\nplt.ylabel(\"Northern Latitudes\")\nplt.show(line_eq, rvalue)\n\nprint(\"The R-Value is:\", rvalue)\nprint(\"OBSERVATION: Using R-Value, you can tell that there is almost no correlation between Northern Hemisphere vs Cloudiness (%)\")",
"_____no_output_____"
]
],
[
[
"#### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"#Make X and Y axises for temp and latitude\nx_scloud = south_df[\"Cloudiness (%)\"]\ny_slat = south_df[\"Latitude\"]\n\n#Calculate line regression using \"y=mx+b\"\nslope, intercept, rvalue, pvalue, stderr = linregress(x_scloud, y_slat)\nregress_values = x_scloud * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\n\n#Create the Scatter Plot\n\nplt.scatter(x_scloud, y_slat, color = 'blue')\nplt.plot(x_scloud,regress_values,\"r-\")\nplt.annotate(line_eq,(10,-40),fontsize=20,color=\"red\")\nplt.xlabel(\"Cloudiness (%)\")\nplt.ylabel(\"Southern Latitudes\")\nplt.show(line_eq, rvalue)\n\nprint(\"The R-Value is:\", rvalue)\nprint(\"OBSERVATION: Using R-Value, you can tell that there is no correlation between Southern Hemisphere vs Humidity (%)\")",
"_____no_output_____"
]
],
[
[
"#### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"#Make X and Y axises for temp and latitude\nx_nwind = north_df[\"Wind Speed (m/s)\"]\ny_nlat = north_df[\"Latitude\"]\n\n#Calculate line regression using \"y=mx+b\"\nslope, intercept, rvalue, pvalue, stderr = linregress(x_nwind, y_nlat)\nregress_values = x_nwind * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\n\n#Create the Scatter Plot\n\nplt.scatter(x_nwind, y_nlat, color = 'blue')\nplt.plot(x_nwind,regress_values,\"r-\")\nplt.annotate(line_eq,(20,10),fontsize=15,color=\"red\")\nplt.xlabel(\"Wind Speed (m/s)\")\nplt.ylabel(\"Northern Latitudes\")\nplt.show(line_eq, rvalue)\n\nprint(\"The R-Value is:\", rvalue)\nprint(\"OBSERVATION: Using R-Value, you can tell that there is almost no correlation between Northern Hemisphere vs Wind Speed (mph)\")",
"_____no_output_____"
]
],
[
[
"#### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"#Make X and Y axises for temp and latitude\nx_swind = south_df[\"Wind Speed (m/s)\"]\ny_slat = south_df[\"Latitude\"]\n\n#Calculate line regression using \"y=mx+b\"\nslope, intercept, rvalue, pvalue, stderr = linregress(x_swind, y_slat)\nregress_values = x_swind * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\n\n#Create the Scatter Plot\n\nplt.scatter(x_swind, y_slat, color = 'blue')\nplt.plot(x_swind,regress_values,\"r-\")\nplt.annotate(line_eq,(15,-45),fontsize=15,color=\"red\")\nplt.xlabel(\"Wind Speed (m/s)\")\nplt.ylabel(\"Southern Latitudes\")\nplt.show(line_eq, rvalue)\n\nprint(\"The R-Value is:\", rvalue)\nprint(\"OBSERVATION: Using R-Value, you can tell that there is almost no correlation between Southern Hemisphere vs Wind Speed (mph)\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
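The eight linear-regression cells in the WeatherPy notebook above repeat the same `linregress`/scatter/annotate boilerplate for different column pairs. A small helper along these lines (names and defaults are illustrative, not from the source) would let each hemisphere/variable pair be plotted with one call:

```python
import matplotlib.pyplot as plt
from scipy.stats import linregress

def plot_regression(df, x_col, y_col="Latitude", annotate_at=(0.05, 0.05)):
    """Scatter x_col vs. y_col, draw the fitted line, and report the r-value."""
    x, y = df[x_col], df[y_col]
    slope, intercept, rvalue, pvalue, stderr = linregress(x, y)
    equation = f"y = {slope:.2f}x + {intercept:.2f}"

    plt.scatter(x, y, color="blue")
    plt.plot(x, slope * x + intercept, "r-")
    plt.annotate(equation, annotate_at, xycoords="axes fraction",
                 fontsize=14, color="red")
    plt.xlabel(x_col)
    plt.ylabel(y_col)
    plt.show()
    print(f"The R-Value is: {rvalue:.3f}")

# Example usage: plot_regression(north_df, "Max Temp")
```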
ec9bd4eae8d997f4b2b50af59535dacaaeb9dc0b | 23,128 | ipynb | Jupyter Notebook | notebook/[Official] Long Short Term Memory.ipynb | tqa236/LSTM_algo_trading | ddef49af393069df2ec1dbd3843fed79e65ba141 | [
"MIT"
] | 9 | 2019-02-17T04:22:20.000Z | 2021-02-25T13:57:28.000Z | notebook/[Official] Long Short Term Memory.ipynb | tqa236/LSTM_algo_trading | ddef49af393069df2ec1dbd3843fed79e65ba141 | [
"MIT"
] | null | null | null | notebook/[Official] Long Short Term Memory.ipynb | tqa236/LSTM_algo_trading | ddef49af393069df2ec1dbd3843fed79e65ba141 | [
"MIT"
] | 2 | 2020-10-26T18:45:06.000Z | 2021-12-26T10:56:00.000Z | 49.418803 | 1,631 | 0.607662 | [
[
[
"# List all device\nfrom tensorflow.python.client import device_lib\n# print(device_lib.list_local_devices())",
"_____no_output_____"
],
[
"# Check available GPU\nfrom keras import backend as K\nK.tensorflow_backend._get_available_gpus()",
"Using TensorFlow backend.\n"
],
[
"import os\nos.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\";\n# The GPU id to use, usually either \"0\" or \"1\";\nos.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\"; ",
"_____no_output_____"
],
[
"# Importing the libraries\nimport numpy as np\nimport pandas as pd\nfrom keras.models import Sequential\nfrom keras.layers import Dense, LSTM, Dropout, Reshape, Lambda, GRU, BatchNormalization, Bidirectional\nfrom keras.preprocessing.sequence import TimeseriesGenerator\nfrom keras.callbacks import EarlyStopping, ModelCheckpoint\nfrom keras.activations import softmax\nfrom keras.optimizers import SGD, RMSprop\nimport math\nimport pickle\nimport matplotlib.pyplot as plt\nfrom keras.utils import to_categorical\nfrom sklearn.preprocessing import StandardScaler",
"_____no_output_____"
],
[
"index = \"dowjones\"\nindex = \"frankfurt\"\nwith open(f\"../data/{index}_calculated/periods750_250_240.txt\", \"rb\") as fp: # Unpickling\n dataset = pickle.load(fp)",
"_____no_output_____"
],
[
"def normalize_data(df):\n \"\"\"normalize a dataframe.\"\"\"\n mean = df.mean(axis=1)\n std = df.std(axis=1)\n df = df.sub(mean, axis=0)\n df = df.div(std, axis=0)\n df = df.values\n return df\ndef get_one_hot(targets, nb_classes):\n res = np.eye(nb_classes)[np.array(targets).reshape(-1)]\n return res.reshape(list(targets.shape)+[nb_classes])",
"_____no_output_____"
],
[
"i = 7\ntimestep = 240",
"_____no_output_____"
],
[
"# x_train = dataset[0][i][0]['AMZN'].values * 1000\n# y_train = dataset[0][i][1]['AMZN'].values * 1.0\n# x_test = dataset[1][i][0]['AMZN'].values * 1000\n# y_test = dataset[1][i][1]['AMZN'].values * 1.0\n\n# x_train = dataset[0][i][0].values\n# x_train = (x_train - x_train.mean())/x_train.std()\n# y_train = dataset[0][i][1].values * 1.0\n# x_test = dataset[1][i][0].values\n# x_test = (x_test - x_test.mean())/x_test.std()\n# y_test = dataset[1][i][1].values * 1.0\n\n# x_train = dataset[0][i][0].values * 1000\n# x_test = dataset[1][i][0].values * 1000\n\nx_train = dataset[0][i][0].values\nx_test = dataset[1][i][0].values\n\nscaler = StandardScaler().fit(x_train)\n\nx_train = scaler.transform(x_train)\nx_test = scaler.transform(x_test)\n\n# x_train = normalize_data(dataset[0][i][0])\n# x_test = normalize_data(dataset[1][i][0])\n\n# y_train = get_one_hot(dataset[0][i][1].values, 2) * 1.0\n# y_test = get_one_hot(dataset[1][i][1].values, 2) * 1.0\ny_train = to_categorical(dataset[0][i][1].values, 2)\ny_test = to_categorical(dataset[1][i][1].values, 2)",
"_____no_output_____"
],
[
"print(f\"x train shape: {x_train.shape}\")\nprint(f\"y train shape: {y_train.shape}\")\nprint(f\"x test shape: {x_test.shape}\")\nprint(f\"y test shape: {y_test.shape}\")",
"x train shape: (750, 62)\ny train shape: (750, 62, 2)\nx test shape: (490, 62)\ny test shape: (490, 62, 2)\n"
],
[
"# The second range will be looped first\n# x_series = [x_train[i:i+240] for i in range(750 - 240)]\n# y_series = [y_train[i+240] for i in range(750 - 240)]\nx_series = [x_train[i:i+timestep, j] for i in range(x_train.shape[0] - timestep) for j in range(x_train.shape[1])]\ny_series = [y_train[i+timestep, j] for i in range(y_train.shape[0] - timestep) for j in range(y_train.shape[1])]\nx = np.array(x_series)\ny = np.array(y_series)\nprint(f\"x shape: {x.shape}\")\nprint(f\"y shape: {y.shape}\")",
"x shape: (31620, 240)\ny shape: (31620, 2)\n"
],
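The windowing cell above packs one training sample per (window start, stock) pair, with the label taken one step after the window ends. As a quick sanity check, here is a minimal self-contained sketch of the same indexing on tiny synthetic arrays (the array names, sizes and labels below are invented for illustration and are not part of this notebook's dataset; the notebook's labels are one-hot pairs, here they are scalars to keep the sketch small):

```python
import numpy as np

# Tiny synthetic stand-in for x_train / y_train: 8 time steps, 2 stocks, window of 3
timestep = 3
prices = np.arange(16, dtype=float).reshape(8, 2)   # shape (time, stocks)
labels = (prices.astype(int) % 2)                   # dummy 0/1 classes, same shape

# Same double comprehension as above: the inner loop over stocks runs first
x_windows = [prices[i:i + timestep, j]
             for i in range(prices.shape[0] - timestep)
             for j in range(prices.shape[1])]
y_targets = [labels[i + timestep, j]
             for i in range(labels.shape[0] - timestep)
             for j in range(labels.shape[1])]

x = np.array(x_windows)   # (samples, timestep) -> here (10, 3)
y = np.array(y_targets)   # (samples,)          -> here (10,)
print(x.shape, y.shape)
```

Each row of `x` is a short history of one stock, and the matching entry of `y` is that stock's class on the step right after the window, which is exactly the structure the LSTM input reshape in the next cell expects.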
[
"x = np.reshape(x, (x.shape[0], x.shape[1], 1))\nprint(f\"x shape: {x.shape}\")\n",
"x shape: (31620, 240, 1)\n"
],
[
"dropout_rate = 0.1\n# expected input data shape: (batch_size, timesteps, data_dim)\nregressor = Sequential()\n\n# regressor.add(Bidirectional(LSTM(units=25, input_shape=(timestep, 1), dropout=dropout_rate)))\nregressor.add(LSTM(units=25, input_shape=(timestep, 1), return_sequences = True,dropout=dropout_rate))\nregressor.add(LSTM(units=100, return_sequences = True,dropout=dropout_rate))\nregressor.add(LSTM(units=100, return_sequences = True,dropout=dropout_rate))\nregressor.add(LSTM(units=100, input_shape=(timestep, 1), dropout=dropout_rate))\n# regressor.add(LSTM(units=25, batch_input_shape=(527, timestep, 1), dropout=dropout_rate, stateful=False))\n# regressor.add(LSTM(units=25, batch_input_shape=(527, timestep, 1), dropout=dropout_rate))\n# regressor.add(LSTM(units=25, return_sequences = True,dropout=dropout_rate, stateful=False))\n# regressor.add(LSTM(units=25, return_sequences = True,dropout=dropout_rate, stateful=False))\n# regressor.add(LSTM(units=25, dropout=dropout_rate, stateful=False))\n# regressor.add(LSTM(units=25, input_shape=(timestep, 1), dropout=dropout_rate))\n# regressor.add(GRU(units=25, input_shape=(timestep, 1), dropout=dropout_rate))\n# regressor.add(Dense(100, input_shape=(timestep, ), activation='relu'))\n# regressor.add(Dense(100, activation='relu'))\nregressor.add(Dense(2, activation='softmax'))\nregressor.compile(loss='binary_crossentropy',\n optimizer='rmsprop',\n metrics=['accuracy'])\nregressor.summary()",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nlstm_1 (LSTM) (None, 240, 25) 2700 \n_________________________________________________________________\nlstm_2 (LSTM) (None, 240, 100) 50400 \n_________________________________________________________________\nlstm_3 (LSTM) (None, 240, 100) 80400 \n_________________________________________________________________\nlstm_4 (LSTM) (None, 100) 80400 \n_________________________________________________________________\ndense_1 (Dense) (None, 2) 202 \n=================================================================\nTotal params: 214,102\nTrainable params: 214,102\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"# result = regressor.fit(x, y, epochs=1000,batch_size=527, validation_split=0.2, shuffle=False, callbacks = [EarlyStopping(monitor='val_loss', mode='min', patience=200),\n# ModelCheckpoint(filepath='../model/LSTM/best_model.h5', monitor='val_acc', save_best_only=True)])\nresult = regressor.fit(x, y, epochs=1000,batch_size=527, validation_split=0.2, callbacks = [EarlyStopping(monitor='val_loss', mode='min', patience=200),\n ModelCheckpoint(filepath='../model/LSTM/best_model.h5', monitor='val_acc', save_best_only=True)])\n# regressor.fit(x, y, epochs=1000,batch_size=500, validation_split=0.2, callbacks = [EarlyStopping(monitor='val_loss', mode='min', patience=20),\n# ModelCheckpoint(filepath='../model/LSTM/best_model.h5', monitor='val_acc', save_best_only=True)])\n",
"Train on 25296 samples, validate on 6324 samples\nEpoch 1/1000\n25296/25296 [==============================] - 52s 2ms/step - loss: 0.6930 - acc: 0.5104 - val_loss: 0.6935 - val_acc: 0.5016\nEpoch 2/1000\n25296/25296 [==============================] - 51s 2ms/step - loss: 0.6928 - acc: 0.5145 - val_loss: 0.6933 - val_acc: 0.5016\nEpoch 3/1000\n25296/25296 [==============================] - 51s 2ms/step - loss: 0.6929 - acc: 0.5144 - val_loss: 0.6934 - val_acc: 0.5016\nEpoch 4/1000\n25296/25296 [==============================] - 51s 2ms/step - loss: 0.6927 - acc: 0.5142 - val_loss: 0.6932 - val_acc: 0.5022\nEpoch 5/1000\n25296/25296 [==============================] - 51s 2ms/step - loss: 0.6928 - acc: 0.5147 - val_loss: 0.6932 - val_acc: 0.5019\nEpoch 6/1000\n25296/25296 [==============================] - 51s 2ms/step - loss: 0.6928 - acc: 0.5146 - val_loss: 0.6937 - val_acc: 0.5016\nEpoch 7/1000\n25296/25296 [==============================] - 51s 2ms/step - loss: 0.6928 - acc: 0.5140 - val_loss: 0.6936 - val_acc: 0.5016\nEpoch 8/1000\n25296/25296 [==============================] - 51s 2ms/step - loss: 0.6928 - acc: 0.5136 - val_loss: 0.6932 - val_acc: 0.5016\nEpoch 9/1000\n 1581/25296 [>.............................] - ETA: 47s - loss: 0.6928 - acc: 0.5155"
],
[
"plt.plot(result.history[\"val_acc\"])\nplt.plot(result.history[\"acc\"])",
"_____no_output_____"
],
[
"plt.plot(result.history[\"val_loss\"])\nplt.plot(result.history[\"loss\"])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9be01942eeebec23be3cc70d29f65a622c1769 | 4,154 | ipynb | Jupyter Notebook | CGPA CALCULATOR.ipynb | SarahZabeen/Rep_001 | 0ec1f34f9698c670d5f9c8a07a69d23389038823 | [
"MIT"
] | 2 | 2020-10-07T03:44:48.000Z | 2020-11-01T20:26:52.000Z | CGPA CALCULATOR.ipynb | SarahZabeen/CGPA-Calculator-OOP-EasyGUI | 0ec1f34f9698c670d5f9c8a07a69d23389038823 | [
"MIT"
] | null | null | null | CGPA CALCULATOR.ipynb | SarahZabeen/CGPA-Calculator-OOP-EasyGUI | 0ec1f34f9698c670d5f9c8a07a69d23389038823 | [
"MIT"
] | null | null | null | 38.462963 | 229 | 0.454261 | [
[
[
"import easygui\n\nclass CGPA_CALC:\n \n name='Default'\n department = 'Default'\n Sem = 0\n CGPA = 0\n total_courses_completed = 0\n cumulative_cg = 0\n def __init__(self, name='Default', department='Default'):\n CGPA_CALC.name = name ; CGPA_CALC.department = department \n \n def courses_and_cg(self,c):\n CGPA_CALC.Sem+=1\n\n list_of_courses, list_of_cg = [] , []\n avg = 0 ; total = 0\n for i in range(0,len(c),2):\n list_of_courses.append(c[i])\n list_of_cg.append(c[i+1])\n total += float(c[i+1])\n avg = total/len(list_of_courses)\n \n CGPA_CALC.total_courses_completed += len(list_of_courses)\n \n CGPA_CALC.cumulative_cg += total\n \n CGPA_CALC.CGPA = CGPA_CALC.cumulative_cg / CGPA_CALC.total_courses_completed\n \n CGPA_CALC.CGPA = \"{:.2f}\".format(CGPA_CALC.CGPA)\n\n a=\"Semester: \"+str(CGPA_CALC.Sem)+'\\nName: '+self.name+'\\nDepartment: '+self.department \n a=a+\"\\nCourses taken by \"+self.name+\": \"\n \n for i in range(0,len(list_of_courses),1):\n if ( i == len(list_of_courses)-1 ):\n a=a+list_of_courses[i]+\".\"\n else:\n a=a+list_of_courses[i]+\", \"\n avg1=\"{:.2f}\".format(avg)\n a=a+\"\\nGPA earned:\"+avg1+\"\\nCGPA:\"+str(CGPA_CALC.CGPA)\n easygui.msgbox(a, title=\"Result\")\n \n def calculate(self,obj):\n index=0\n while 1:\n \n if index==0:\n msg = \"Enter Your Course Codes and GPA\"\n title = \"CGPA Calculator\"\n fieldNames = ['Name','Department',\"Course 1\",'Course 1 GPA',\"Course 2\",'Course 2 GPA',\"Course 3\",'Course 3 GPA',\"Course 4\",'Course 4 GPA',\"Course 5\",'Course 5 GPA']\n fieldValues = [] \n fieldValues = easygui.multenterbox(msg,title, fieldNames)\n while(\"\" in fieldValues) : \n fieldValues.remove(\"\")\n if fieldValues[0]=='STOP':\n break\n obj = CGPA_CALC(fieldValues[0],fieldValues[1])\n obj.courses_and_cg(fieldValues[2:])\n else:\n msg = \"Enter Your Course Codes and GPA\"\n title = \"CGPA Calculator\"\n fieldNames = [\"Type 'STOP' to quit,\\n'CONTINUE' to Continue\",\"Course 1\",'Course 1 GPA',\"Course 2\",'Course 2 GPA',\"Course 3\",'Course 3 GPA',\"Course 4\",'Course 4 GPA',\"Course 5\",'Course 5 GPA']\n fieldValues = [] \n fieldValues = easygui.multenterbox(msg,title, fieldNames)\n while(\"\" in fieldValues) : \n fieldValues.remove(\"\")\n if fieldValues[0]=='STOP':\n break\n obj.courses_and_cg(fieldValues[1:])\n index+=1\na=CGPA_CALC()\na.calculate(a)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
ec9bebd934c05a0d83a75d7e3f0b298066bd509e | 38,782 | ipynb | Jupyter Notebook | Chapter03/.ipynb_checkpoints/chapter3-checkpoint.ipynb | kolawoletech/MLHealthcareAnalytics | 764dc65997da4356e17b6176d6d1111054b5356a | [
"MIT"
] | null | null | null | Chapter03/.ipynb_checkpoints/chapter3-checkpoint.ipynb | kolawoletech/MLHealthcareAnalytics | 764dc65997da4356e17b6176d6d1111054b5356a | [
"MIT"
] | null | null | null | Chapter03/.ipynb_checkpoints/chapter3-checkpoint.ipynb | kolawoletech/MLHealthcareAnalytics | 764dc65997da4356e17b6176d6d1111054b5356a | [
"MIT"
] | null | null | null | 35.876041 | 386 | 0.366871 | [
[
[
"# Classifying DNA Sequences\n### Presented by Eduonix\n\nDuring this tutorial, we will explore the world of bioinformatics by using Markov models, K-nearest neighbor (KNN) algorithms, support vector machines, and other common classifiers to classify short E. Coli DNA sequences. This project will use a dataset from the UCI Machine Learning Repository that has 106 DNA sequences, with 57 sequential nucleotides (“base-pairs”) each. \n\nYou will learn how to:\n* Import data from the UCI repository\n* Convert text inputs to numerical data\n* Build and train classification algorithms\n* Compare and contrast classification algorithms\n\n## Step 1: Importing the Dataset\n\nThe following code cells will import necessary libraries and import the dataset from the UCI repository as a Pandas DataFrame.",
"_____no_output_____"
]
],
[
[
"# To make sure all of the correct libraries are installed, import each module and print the version number\n\nimport sys\nimport numpy\nimport sklearn\nimport pandas\n\nprint('Python: {}'.format(sys.version))\nprint('Numpy: {}'.format(numpy.__version__))\nprint('Sklearn: {}'.format(sklearn.__version__))\nprint('Pandas: {}'.format(pandas.__version__))",
"Python: 2.7.16 |Anaconda, Inc.| (default, Mar 14 2019, 21:00:58) \n[GCC 7.3.0]\nNumpy: 1.16.2\nSklearn: 0.18\nPandas: 0.24.2\n"
],
[
"# Import, change module names\nimport numpy as np\nimport pandas as pd\n\n# import the uci Molecular Biology (Promoter Gene Sequences) Data Set\nurl = 'https://archive.ics.uci.edu/ml/machine-learning-databases/molecular-biology/promoter-gene-sequences/promoters.data'\nnames = ['Class', 'id', 'Sequence']\ndata = pd.read_csv(url, names = names)",
"_____no_output_____"
],
[
"print(data.iloc[0])",
"Class +\nid S10\nSequence \\t\\ttactagcaatacgcttgcgttcggtggttaagtatgtataat...\nName: 0, dtype: object\n"
]
],
[
[
"## Step 2: Preprocessing the Dataset\n\nThe data is not in a usable form; as a result, we will need to process it before using it to train our algorithms.",
"_____no_output_____"
]
],
[
[
"# Building our Dataset by creating a custom Pandas DataFrame\n# Each column in a DataFrame is called a Series. Lets start by making a series for each column.\n\nclasses = data.loc[:, 'Class']\nprint(classes[:5])",
"0 +\n1 +\n2 +\n3 +\n4 +\nName: Class, dtype: object\n"
],
[
"# generate list of DNA sequences\nsequences = list(data.loc[:, 'Sequence'])\ndataset = {}\n\n# loop through sequences and split into individual nucleotides\nfor i, seq in enumerate(sequences):\n \n # split into nucleotides, remove tab characters\n nucleotides = list(seq)\n nucleotides = [x for x in nucleotides if x != '\\t']\n \n # append class assignment\n nucleotides.append(classes[i])\n \n # add to dataset\n dataset[i] = nucleotides\n \nprint(dataset[0])",
"['t', 'a', 'c', 't', 'a', 'g', 'c', 'a', 'a', 't', 'a', 'c', 'g', 'c', 't', 't', 'g', 'c', 'g', 't', 't', 'c', 'g', 'g', 't', 'g', 'g', 't', 't', 'a', 'a', 'g', 't', 'a', 't', 'g', 't', 'a', 't', 'a', 'a', 't', 'g', 'c', 'g', 'c', 'g', 'g', 'g', 'c', 't', 't', 'g', 't', 'c', 'g', 't', '+']\n"
],
[
"# turn dataset into pandas DataFrame\ndframe = pd.DataFrame(dataset)\nprint(dframe)",
" 0 1 2 3 4 5 6 7 8 9 ... 96 97 98 99 100 101 102 \\\n0 t t g a t a c t c t ... c c t a g c g \n1 a g t a c g a t g t ... c g a g a c t \n2 c c a t g g g t a t ... g c t a g t a \n3 t t c t a g g c c t ... a t g g a c t \n4 a a t g t g g t t a ... g a a g g a t \n5 g t a t a c g a t a ... t g c g c a c \n6 c c g g a a g c a a ... a g c t a t t \n7 a c a a t a t a a t ... g a g g t g c \n8 a t g t t g g a t t ... a c a t g g a \n9 t g a g a g g a a t ... c t a a t c a \n10 a a a t a a a a t c ... c t c c c c c \n11 c c c g c g g c a c ... c t g t a t a \n12 g a t t t g g a c t ... t c a c g c a \n13 c g a a a a a c t c ... t t g c c t g \n14 t t g t t t t t g t ... a t t a c a a \n15 t t t c t g t t c t ... g g c a t a t \n16 g g g g g g t g g g ... a t a g c a t \n17 c t c a a a a a a t ... g t a a g c a \n18 g c a a c a a t c c ... a g t a a g a \n19 t a t g g a g a a a ... g a c g c g c \n20 t c t t a g c c g g ... c t a a a g c \n21 c g a g a a c t g g ... a t g g a t g \n22 g c g t a g a g a c ... t t a g c c a \n23 g t c g a g t t c c ... g t c a t t c \n24 t g t t g t c a g g ... t c c a t t a \n25 g a t t c t t t t g ... c c g g g g g \n26 g t a g t g c g c a ... a a c a c a a \n27 t t t c g c c a c a ... g t t t a g t \n28 t g t g a c t g g t ... c g t g t g t \n29 a g t g a g g c t a ... c c t a a g c \n30 a t t a a t a a t a ... t g g g a g a \n31 g g t g a a t t c c ... c g a g a t a \n32 t t t t c t g a t t ... g t c c t t t \n33 a c t a c a a c g c ... a g t t g t c \n34 t g g g a a c a t c ... c t c a c t t \n35 g t t a c a g g g c ... a t t g t t c \n36 t t t t t g c t t t ... a t g a t t g \n37 a a a g a a a a a a ... c t g c t g t \n38 t c t t g a t t a t ... t g t g c c g \n39 a a c t a a a a a a ... t c a t t t g \n40 a a a a a c g a t a ... g g t c t g a \n41 t t t g t t t t c t ... c c t t g a t \n42 g c g a g a c t g g ... a a a c t a g \n43 c t c a c g a g c c ... t a c t a a g \n44 g a t t g a g c a g ... a t t g g g a \n45 c a a a c g c t a c ... a g g c a g c \n46 g c a c c t c t t c ... a t t a c a g \n47 g g c t t c c c g a ... t t g t g g t \n48 g c c a c c a a a c ... g a a g t g t \n49 c a a a c g t a a c ... c a a g g a c \n50 t t c c g t c c a a ... t t c a c a a \n51 t c c a t t a a t c ... t c a g c c a \n52 g g c a g t t g g t ... t g t t c t c \n53 t c g a g a g a g g ... c c t a t a a \n54 c c g c t g a a t a ... t t a t a t t \n55 g a c t a g a c t c ... t t t g c a t \n56 t a g c g t t a t a ... g t t a g t g \n57 + + + + + + + + + + ... - - - - - - - \n\n 103 104 105 \n0 c c t \n1 g t a \n2 c c a \n3 g g c \n4 a t a \n5 c c t \n6 t c t \n7 a t a \n8 c c a \n9 g a t \n10 a a a \n11 t t a \n12 g g a \n13 a g t \n14 g c a \n15 a c a \n16 t t g \n17 g c g \n18 c t a \n19 c a g \n20 t a g \n21 g a c \n22 a c t \n23 g g c \n24 t g t \n25 g g a \n26 c t a \n27 t c t \n28 t t g \n29 c t g \n30 c g c \n31 g a a \n32 t g c \n33 t g t \n34 a g c \n35 c g a \n36 t t t \n37 g t t \n38 g t a \n39 a t g \n40 t t c \n41 t t c \n42 g g a \n43 t c a \n44 c t t \n45 a g c \n46 c a a \n47 c a a \n48 a a t \n49 a g c \n50 g g a \n51 g a a \n52 c g g \n53 t g a \n54 t a a \n55 c a c \n56 c c t \n57 - - - \n\n[58 rows x 106 columns]\n"
],
[
"# transpose the DataFrame\ndf = dframe.transpose()\nprint(df.iloc[:5])",
" 0 1 2 3 4 5 6 7 8 9 ... 48 49 50 51 52 53 54 55 56 57\n0 t a c t a g c a a t ... g c t t g t c g t +\n1 t g c t a t c c t g ... c a t c g c c a a +\n2 g t a c t a g a g a ... c a c c c g g c g +\n3 a a t t g t g a t g ... a a c a a a c t c +\n4 t c g a t a a t t a ... c c g t g g t a g +\n\n[5 rows x 58 columns]\n"
],
[
"# for clarity, lets rename the last dataframe column to class\ndf.rename(columns = {57: 'Class'}, inplace = True) \nprint(df.iloc[:5])",
" 0 1 2 3 4 5 6 7 8 9 ... 48 49 50 51 52 53 54 55 56 Class\n0 t a c t a g c a a t ... g c t t g t c g t +\n1 t g c t a t c c t g ... c a t c g c c a a +\n2 g t a c t a g a g a ... c a c c c g g c g +\n3 a a t t g t g a t g ... a a c a a a c t c +\n4 t c g a t a a t t a ... c c g t g g t a g +\n\n[5 rows x 58 columns]\n"
],
[
"# looks good! Let's start to familiarize ourselves with the dataset so we can pick the most suitable \n# algorithms for this data\n\ndf.describe()",
"_____no_output_____"
],
[
"# desribe does not tell us enough information since the attributes are text. Lets record value counts for each sequence\nseries = []\nfor name in df.columns:\n series.append(df[name].value_counts())\n \ninfo = pd.DataFrame(series)\ndetails = info.transpose()\nprint(details)",
" 0 1 2 3 4 5 6 7 8 9 ... 48 \\\nt 38.0 26.0 27.0 26.0 22.0 24.0 30.0 32.0 32.0 28.0 ... 21.0 \nc 27.0 22.0 21.0 30.0 19.0 18.0 21.0 20.0 22.0 22.0 ... 36.0 \na 26.0 34.0 30.0 22.0 36.0 42.0 38.0 34.0 33.0 36.0 ... 23.0 \ng 15.0 24.0 28.0 28.0 29.0 22.0 17.0 20.0 19.0 20.0 ... 26.0 \n- NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN \n+ NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN \n\n 49 50 51 52 53 54 55 56 Class \nt 22.0 23.0 33.0 35.0 30.0 23.0 29.0 34.0 NaN \nc 42.0 31.0 32.0 21.0 32.0 29.0 29.0 17.0 NaN \na 24.0 28.0 27.0 25.0 22.0 26.0 24.0 27.0 NaN \ng 18.0 24.0 14.0 25.0 22.0 28.0 24.0 28.0 NaN \n- NaN NaN NaN NaN NaN NaN NaN NaN 53.0 \n+ NaN NaN NaN NaN NaN NaN NaN NaN 53.0 \n\n[6 rows x 58 columns]\n"
],
[
"# Unfortunately, we can't run machine learning algorithms on the data in 'String' formats. As a result, we need to switch\n# it to numerical data. This can easily be accomplished using the pd.get_dummies() function\nnumerical_df = pd.get_dummies(df)\nnumerical_df.iloc[:5]",
"_____no_output_____"
],
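To make the effect of `pd.get_dummies` more concrete before applying it to all 57 positions, here is a tiny standalone example on an invented column of nucleotides (the toy frame is purely illustrative): each distinct letter becomes its own indicator column, which is exactly how every sequence position gets expanded into four columns below.

```python
import pandas as pd

# Toy single-position column of nucleotides
toy = pd.DataFrame({'0': ['t', 'a', 'c', 'g', 'a']})

# One indicator column per letter: 0_a, 0_c, 0_g, 0_t
print(pd.get_dummies(toy))
```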
[
"# We don't need both class columns. Lets drop one then rename the other to simply 'Class'.\ndf = numerical_df.drop(columns=['Class_-'])\n\ndf.rename(columns = {'Class_+': 'Class'}, inplace = True)\nprint(df.iloc[:5])",
" 0_a 0_c 0_g 0_t 1_a 1_c 1_g 1_t 2_a 2_c ... 54_t 55_a 55_c \\\n0 0 0 0 1 1 0 0 0 0 1 ... 0 0 0 \n1 0 0 0 1 0 0 1 0 0 1 ... 0 1 0 \n2 0 0 1 0 0 0 0 1 1 0 ... 0 0 1 \n3 1 0 0 0 1 0 0 0 0 0 ... 0 0 0 \n4 0 0 0 1 0 1 0 0 0 0 ... 1 1 0 \n\n 55_g 55_t 56_a 56_c 56_g 56_t Class \n0 1 0 0 0 0 1 1 \n1 0 0 1 0 0 0 1 \n2 0 0 0 0 1 0 1 \n3 0 1 0 1 0 0 1 \n4 0 0 0 0 1 0 1 \n\n[5 rows x 229 columns]\n"
],
[
"# Use the model_selection module to separate training and testing datasets\nfrom sklearn import model_selection\n\n# Create X and Y datasets for training\nX = np.array(df.drop(['Class'], 1))\ny = np.array(df['Class'])\n\n# define seed for reproducibility\nseed = 1\n\n# split data into training and testing datasets\nX_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.25, random_state=seed)\n",
"_____no_output_____"
]
],
[
[
"## Step 3: Training and Testing the Classification Algorithms\n\nNow that we have preprocessed the data and built our training and testing datasets, we can start to deploy different classification algorithms. It's relatively easy to test multiple models; as a result, we will compare and contrast the performance of ten different algorithms.",
"_____no_output_____"
]
],
[
[
"# Now that we have our dataset, we can start building algorithms! We'll need to import each algorithm we plan on using\n# from sklearn. We also need to import some performance metrics, such as accuracy_score and classification_report.\n\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.gaussian_process import GaussianProcessClassifier\nfrom sklearn.gaussian_process.kernels import RBF\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.svm import SVC\nfrom sklearn.metrics import classification_report, accuracy_score\n\n# define scoring method\nscoring = 'accuracy'\n\n# Define models to train\nnames = [\"Nearest Neighbors\", \"Gaussian Process\",\n \"Decision Tree\", \"Random Forest\", \"Neural Net\", \"AdaBoost\",\n \"Naive Bayes\", \"SVM Linear\", \"SVM RBF\", \"SVM Sigmoid\"]\n\nclassifiers = [\n KNeighborsClassifier(n_neighbors = 3),\n GaussianProcessClassifier(1.0 * RBF(1.0)),\n DecisionTreeClassifier(max_depth=5),\n RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),\n MLPClassifier(alpha=1),\n AdaBoostClassifier(),\n GaussianNB(),\n SVC(kernel = 'linear'), \n SVC(kernel = 'rbf'),\n SVC(kernel = 'sigmoid')\n]\n\nmodels = zip(names, classifiers)\n\n# evaluate each model in turn\nresults = []\nnames = []\n\nfor name, model in models:\n kfold = model_selection.KFold(n_splits=10, random_state = seed)\n cv_results = model_selection.cross_val_score(model, X_train, y_train, cv=kfold, scoring=scoring)\n results.append(cv_results)\n names.append(name)\n msg = \"%s: %f (%f)\" % (name, cv_results.mean(), cv_results.std())\n print(msg)",
"/home/kolawole/anaconda2/lib/python2.7/site-packages/sklearn/ensemble/weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release.\n from numpy.core.umath_tests import inner1d\n"
],
[
"# Remember, performance on the training data is not that important. We want to know how well our algorithms\n# can generalize to new data. To test this, let's make predictions on the validation dataset.\n\nfor name, model in models:\n model.fit(X_train, y_train)\n predictions = model.predict(X_test)\n print(name)\n print(accuracy_score(y_test, predictions))\n print(classification_report(y_test, predictions))\n \n# Accuracy - ratio of correctly predicted observation to the total observations. \n# Precision - (false positives) ratio of correctly predicted positive observations to the total predicted positive observations\n# Recall (Sensitivity) - (false negatives) ratio of correctly predicted positive observations to the all observations in actual class - yes.\n# F1 score - F1 Score is the weighted average of Precision and Recall. Therefore, this score takes both false positives and false ",
"Nearest Neighbors\n0.7777777777777778\n precision recall f1-score support\n\n 0 1.00 0.65 0.79 17\n 1 0.62 1.00 0.77 10\n\navg / total 0.86 0.78 0.78 27\n\nGaussian Process\n0.8888888888888888\n precision recall f1-score support\n\n 0 1.00 0.82 0.90 17\n 1 0.77 1.00 0.87 10\n\navg / total 0.91 0.89 0.89 27\n\nDecision Tree\n0.7407407407407407\n precision recall f1-score support\n\n 0 1.00 0.59 0.74 17\n 1 0.59 1.00 0.74 10\n\navg / total 0.85 0.74 0.74 27\n\nRandom Forest\n0.5555555555555556\n precision recall f1-score support\n\n 0 1.00 0.29 0.45 17\n 1 0.45 1.00 0.62 10\n\navg / total 0.80 0.56 0.52 27\n\nNeural Net\n0.9259259259259259\n precision recall f1-score support\n\n 0 1.00 0.88 0.94 17\n 1 0.83 1.00 0.91 10\n\navg / total 0.94 0.93 0.93 27\n\nAdaBoost\n0.8518518518518519\n precision recall f1-score support\n\n 0 1.00 0.76 0.87 17\n 1 0.71 1.00 0.83 10\n\navg / total 0.89 0.85 0.85 27\n\nNaive Bayes\n0.9259259259259259\n precision recall f1-score support\n\n 0 1.00 0.88 0.94 17\n 1 0.83 1.00 0.91 10\n\navg / total 0.94 0.93 0.93 27\n\nSVM Linear\n0.9629629629629629\n precision recall f1-score support\n\n 0 1.00 0.94 0.97 17\n 1 0.91 1.00 0.95 10\n\navg / total 0.97 0.96 0.96 27\n\nSVM RBF\n0.7777777777777778\n precision recall f1-score support\n\n 0 1.00 0.65 0.79 17\n 1 0.62 1.00 0.77 10\n\navg / total 0.86 0.78 0.78 27\n\nSVM Sigmoid\n0.4444444444444444\n precision recall f1-score support\n\n 0 1.00 0.12 0.21 17\n 1 0.40 1.00 0.57 10\n\navg / total 0.78 0.44 0.34 27\n\n"
]
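The metric definitions in the comments above correspond directly to the entries of a 2x2 confusion matrix. The short sketch below uses an invented pair of label vectors (not this notebook's data) to compute precision, recall and F1 by hand and to confirm they match scikit-learn's helpers, which can make the classification reports easier to read:

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

# Invented ground truth and predictions, purely to illustrate the definitions
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
precision = tp / float(tp + fp)                     # correct among predicted positives
recall = tp / float(tp + fn)                        # correct among actual positives
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(precision, recall, f1)
print(precision_score(y_true, y_pred), recall_score(y_true, y_pred), f1_score(y_true, y_pred))
```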
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec9bf8a2e6357b1e57198b0a0fcbc86f10e6ba86 | 54,462 | ipynb | Jupyter Notebook | HyperDeep_Example_Usage.ipynb | traion/HyperDeep | 486f0d94843acc48bce6f03d25014106981c06b8 | [
"MIT"
] | 3 | 2021-04-27T11:18:06.000Z | 2022-03-23T11:03:14.000Z | HyperDeep_Example_Usage.ipynb | traion/HyperDeep | 486f0d94843acc48bce6f03d25014106981c06b8 | [
"MIT"
] | null | null | null | HyperDeep_Example_Usage.ipynb | traion/HyperDeep | 486f0d94843acc48bce6f03d25014106981c06b8 | [
"MIT"
] | null | null | null | 44.278049 | 483 | 0.489883 | [
[
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n\nEnter your authorization code:\n··········\nMounted at /content/drive\n"
],
[
"!pip install catboost",
"Collecting catboost\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/96/6c/6608210b29649267de52001b09e369777ee2a5cfe1c71fa75eba82a4f2dc/catboost-0.24-cp36-none-manylinux1_x86_64.whl (65.9MB)\n\u001b[K |████████████████████████████████| 65.9MB 53kB/s \n\u001b[?25hRequirement already satisfied: pandas>=0.24.0 in /usr/local/lib/python3.6/dist-packages (from catboost) (1.0.5)\nRequirement already satisfied: plotly in /usr/local/lib/python3.6/dist-packages (from catboost) (4.4.1)\nRequirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from catboost) (1.4.1)\nRequirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from catboost) (3.2.2)\nRequirement already satisfied: numpy>=1.16.0 in /usr/local/lib/python3.6/dist-packages (from catboost) (1.18.5)\nRequirement already satisfied: graphviz in /usr/local/lib/python3.6/dist-packages (from catboost) (0.10.1)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from catboost) (1.15.0)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.24.0->catboost) (2018.9)\nRequirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.24.0->catboost) (2.8.1)\nRequirement already satisfied: retrying>=1.3.3 in /usr/local/lib/python3.6/dist-packages (from plotly->catboost) (1.3.3)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->catboost) (2.4.7)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->catboost) (0.10.0)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->catboost) (1.2.0)\nInstalling collected packages: catboost\nSuccessfully installed catboost-0.24\n"
],
[
"from sklearn.metrics import confusion_matrix, accuracy_score\n\n\nfrom sklearn.model_selection import RandomizedSearchCV, GridSearchCV, cross_validate\nfrom lightgbm import LGBMClassifier\nimport xgboost as xgb\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor, RandomForestClassifier, RandomForestRegressor, AdaBoostClassifier, AdaBoostRegressor, ExtraTreesClassifier, ExtraTreesRegressor, BaggingRegressor, BaggingClassifier\nfrom sklearn.tree import DecisionTreeClassifier\nimport numpy as np\nfrom sklearn.svm import SVC, SVR\nfrom catboost import CatBoostClassifier, CatBoostRegressor\nfrom sklearn.neural_network import MLPClassifier, MLPRegressor\nclass Tuner:\n def __init__(self, X_train, y_train, algo_type='binary', search_opt='randomized', scoring_fit='accuracy', n_iteration=100, cv=5):\n self.X_train = X_train\n self.y_train = y_train\n self.algo_type = algo_type\n self.search_opt = search_opt\n self.scoring_fit = scoring_fit\n self.n_iteration = n_iteration\n self.cv = cv\n\n def randomized_opt(self, model, parameter_grid):\n randomized_search = RandomizedSearchCV(\n estimator=model,\n param_distributions=parameter_grid, \n cv=self.cv,\n n_iter=self.n_iteration,\n n_jobs=-1, \n scoring=self.scoring_fit,\n verbose=2)\n fitted_model = randomized_search.fit(self.X_train, self.y_train)\n return fitted_model\n\n def grid_opt(self, model, parameter_grid):\n grid_search = GridSearchCV(\n estimator=model,\n param_grid=parameter_grid, \n cv=self.cv,\n n_jobs=-1, \n scoring=self.scoring_fit,\n verbose=2)\n fitted_model = grid_search.fit(self.X_train, self.y_train)\n return fitted_model\n\n def bayesian_opt(self, model, parameter_grid):\n randomized_search = RandomizedSearchCV(\n estimator=model,\n param_distributions=parameter_grid, \n cv=self.cv,\n n_iter=self.n_iteration,\n n_jobs=-1, \n scoring=self.scoring_fit,\n verbose=2)\n fitted_model = randomized_search.fit(self.X_train, self.y_train)\n return fitted_model\n\n def print_info(self, model):\n print(model.best_score_)\n print(model.best_params_)\n\n def fit_LGBM(self):\n \n if self.algo_type == 'binary':\n objective = LGBMClassifier(objective='binary')\n elif self.algo_type == 'multiclass':\n objective = LGBMClassifier(objective='multiclass')\n elif self.algo_type =='regression':\n objective = LGBMClassifier(objective='regression')\n \n print('Optimizing LGBM algorithm...')\n param_grid = {\n 'num_leaves': [5, 10, 20, 30, 50, 100, 200],\n 'min_data_in_leaf': [10, 20, 30, 50, 100, 200, 500],\n \"learning_rate\": [0.01, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30],\n 'subsample': [0.5, 0.6, 0.7, 0.8, 0.9, 1.0],\n 'colsample_bytree': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],\n 'min_child_weight': [0.001, 0.01, 0.1, 0.5, 1.0, 3.0, 5.0, 7.0, 10.0],\n 'reg_lambda': [0, 0.1, 1.0, 1.1, 1.5],\n 'reg_alpha': [0, 0.1, 1.0, 1.1, 1.5],\n 'n_estimators': [30, 70, 100, 150, 400, 700]\n }\n\n if self.search_opt == 'grid':\n model = self.grid_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'bayesian':\n model = self.bayesian_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'randomized':\n model = self.randomized_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'none':\n scores = cross_validate(objective, self.X_train, self.y_train, cv=self.cv, return_estimator=True)\n print(scores['test_score'].max())\n return 
scores['estimator'][scores['test_score'].argmax()]\n else:\n raise ValueError('please provide a correct search optimization technique') \n\n \n def fit_XGB(self):\n \n if self.algo_type == 'binary':\n objective = xgb.XGBClassifier(objective='binary:logistic')\n elif self.algo_type == 'multiclass':\n objective = xgb.XGBClassifier(objective='multiclass:softmax')\n elif self.algo_type =='regression':\n objective = xgb.XGBClassifier(objective='regression:squarederror')\n\n print('Optimizing XGB algorithm...')\n param_grid = {\n 'max_depth': [None, 1, 3, 6, 10, 15, 20],\n \"learning_rate\": [0.01, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30 ],\n 'subsample': [0.5, 0.6, 0.7, 0.8, 0.9, 1.0],\n 'colsample_bytree': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],\n 'colsample_bylevel': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],\n 'min_child_weight': [0.5, 1.0, 3.0, 5.0, 7.0, 10.0],\n 'gamma': [0, 0.25, 0.5, 1.0],\n 'reg_lambda': [0, 0.1, 1.0, 5.0, 10.0, 50.0, 100.0],\n 'n_estimators': [30, 70, 100, 150, 250, 400, 700]\n }\n\n if self.search_opt == 'grid':\n model = self.grid_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'bayesian':\n model = self.bayesian_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'randomized':\n model = self.randomized_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'none':\n scores = cross_validate(objective, self.X_train, self.y_train, cv=self.cv, return_estimator=True)\n print(scores['test_score'].max())\n return scores['estimator'][scores['test_score'].argmax()]\n else:\n raise ValueError('please provide a correct search optimization technique') \n\n def fit_CatBoost(self):\n \n if self.algo_type == 'binary' or self.algo_type == 'multiclass':\n objective = CatBoostClassifier()\n elif self.algo_type =='regression':\n objective = CatBoostRegressor()\n\n print('Optimizing CatBoost algorithm...')\n\n param_grid = {'depth':[3,1,2,6,4,5,7,8,9,10],\n 'iterations':[250,100,500,1000],\n \"learning_rate\": [0.01, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30 ],\n 'l2_leaf_reg':[3,1,5,10,100],\n 'border_count':[32,5,10,20,50,100,200],\n 'ctr_border_count':[50,5,10,20,100,200],\n }\n\n if self.search_opt == 'grid':\n model = self.grid_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'bayesian':\n model = self.bayesian_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'randomized':\n model = self.randomized_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'none':\n scores = cross_validate(objective, self.X_train, self.y_train, cv=self.cv, return_estimator=True)\n print(scores['test_score'].max())\n return scores['estimator'][scores['test_score'].argmax()]\n else:\n raise ValueError('please provide a correct search optimization technique') \n\n def fit_GBM(self):\n \n if self.algo_type == 'binary' or self.algo_type == 'multiclass':\n objective = GradientBoostingClassifier()\n elif self.algo_type =='regression':\n objective = GradientBoostingRegressior()\n\n print('Optimizing Gradient Boosting algorithm...')\n param_grid = {\n 'max_depth': [None, 1, 3, 6, 10, 15, 20],\n \"learning_rate\": [0.01, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30],\n \"min_samples_split\": [0.1, 0.5, 1, 2, 3, 5, 10],\n \"min_samples_leaf\": [0.1, 0.5, 1, 2, 3, 5, 10],\n 'subsample': [0.5, 0.6, 0.7, 0.8, 0.9, 1.0],\n 'n_estimators': [30, 70, 100, 150, 250, 400, 700],\n 
\"max_features\":[\"auto\",\"log2\",\"sqrt\"],\n \"criterion\": [\"friedman_mse\", \"mae\", \"mse\"]\n }\n\n if self.search_opt == 'grid':\n model = self.grid_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'bayesian':\n model = self.bayesian_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'randomized':\n model = self.randomized_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'none':\n scores = cross_validate(objective, self.X_train, self.y_train, cv=self.cv, return_estimator=True)\n print(scores['test_score'].max())\n return scores['estimator'][scores['test_score'].argmax()]\n else:\n raise ValueError('please provide a correct search optimization technique') \n\n def fit_RF(self):\n \n if self.algo_type == 'binary' or self.algo_type == 'multiclass':\n objective = RandomForestClassifier()\n param_grid = {\n 'max_depth': [None, 1, 3, 6, 10, 15, 20],\n 'bootstrap': [True, False],\n \"min_samples_split\": [0.1, 0.5, 1, 2, 3, 5, 10],\n \"min_samples_leaf\": [0.1, 0.5, 1, 2, 3, 5, 10],\n 'n_estimators': [30, 70, 100, 150, 250, 400, 700],\n \"max_features\":[\"auto\",\"log2\",\"sqrt\"],\n \"criterion\": [\"gini\", \"entropy\"]\n }\n elif self.algo_type =='regression':\n objective = RandomForestRegressor()\n param_grid = {\n 'max_depth': [None, 1, 3, 6, 10, 15, 20],\n 'bootstrap': [True, False],\n \"min_samples_split\": [0.1, 0.5, 1, 2, 3, 5, 10],\n \"min_samples_leaf\": [0.1, 0.5, 1, 2, 3, 5, 10],\n 'n_estimators': [30, 70, 100, 150, 250, 400, 700],\n \"max_features\":[\"auto\",\"log2\",\"sqrt\"],\n \"criterion\": [\"mse\", \"mae\"]\n }\n print('Optimizing Random Forest algorithm...')\n\n\n if self.search_opt == 'grid':\n model = self.grid_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'bayesian':\n model = self.bayesian_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'randomized':\n model = self.randomized_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'none':\n scores = cross_validate(objective, self.X_train, self.y_train, cv=self.cv, return_estimator=True)\n print(scores['test_score'].max())\n return scores['estimator'][scores['test_score'].argmax()]\n else:\n raise ValueError('please provide a correct search optimization technique') \n\n def fit_ExtraTrees(self):\n \n if self.algo_type == 'binary' or self.algo_type == 'multiclass':\n objective = ExtraTreesClassifier()\n param_grid = {\n 'max_depth': [None, 1, 3, 6, 10, 15, 20],\n 'bootstrap': [True, False],\n \"min_samples_split\": [0.1, 0.5, 1, 2, 3, 5, 10],\n \"min_samples_leaf\": [0.1, 0.5, 1, 2, 3, 5, 10],\n 'n_estimators': [30, 70, 100, 150, 250, 400, 700],\n \"max_features\":[\"auto\",\"log2\",\"sqrt\"],\n \"criterion\": [\"gini\", \"entropy\"]\n }\n elif self.algo_type =='regression':\n objective = ExtraTreesRegressor()\n param_grid = {\n 'max_depth': [None, 1, 3, 6, 10, 15, 20],\n 'bootstrap': [True, False],\n \"min_samples_split\": [0.1, 0.5, 1, 2, 3, 5, 10],\n \"min_samples_leaf\": [0.1, 0.5, 1, 2, 3, 5, 10],\n 'n_estimators': [30, 70, 100, 150, 250, 400, 700],\n \"max_features\":[\"auto\",\"log2\",\"sqrt\"],\n \"criterion\": [\"mae\", \"mse\"]\n }\n\n print('Optimizing ExtraTrees algorithm...')\n\n\n if self.search_opt == 'grid':\n model = self.grid_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'bayesian':\n model = 
self.bayesian_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'randomized':\n model = self.randomized_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'none':\n scores = cross_validate(objective, self.X_train, self.y_train, cv=self.cv, return_estimator=True)\n print(scores['test_score'].max())\n return scores['estimator'][scores['test_score'].argmax()]\n else:\n raise ValueError('please provide a correct search optimization technique') \n\n def fit_DT(self):\n \n if self.algo_type == 'binary' or self.algo_type == 'multiclass':\n objective = DecisionTreeClassifier()\n param_grid = {\n 'max_depth': [None, 1, 3, 6, 10, 15, 20],\n \"min_samples_split\": [0.1, 0.5, 1, 2, 3, 5, 10],\n \"min_samples_leaf\": [0.1, 0.5, 1, 2, 3, 5, 10],\n \"max_features\":[\"auto\",\"log2\",\"sqrt\"],\n \"criterion\": [\"gini\", \"entropy\"]\n }\n elif self.algo_type =='regression':\n objective = DecisionTreeRegressor()\n param_grid = {\n 'max_depth': [None, 1, 3, 6, 10, 15, 20],\n \"min_samples_split\": [0.1, 0.5, 1, 2, 3, 5, 10],\n \"min_samples_leaf\": [0.1, 0.5, 1, 2, 3, 5, 10],\n \"max_features\":[\"auto\",\"log2\",\"sqrt\"],\n \"criterion\": [\"friedman_mse\", \"mae\", \"mse\"]\n }\n\n print('Optimizing Decision Tree algorithm...')\n\n if self.search_opt == 'grid':\n model = self.grid_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'bayesian':\n model = self.bayesian_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'randomized':\n model = self.randomized_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'none':\n scores = cross_validate(objective, self.X_train, self.y_train, cv=self.cv, return_estimator=True)\n print(scores['test_score'].max())\n return scores['estimator'][scores['test_score'].argmax()]\n else:\n raise ValueError('please provide a correct search optimization technique') \n\n def fit_AdaBoost(self):\n \n if self.algo_type == 'binary' or self.algo_type == 'multiclass':\n objective = AdaBoostClassifier()\n param_grid = {\n 'n_estimators': [30, 70, 100, 150, 250, 400, 700],\n \"learning_rate\": [0.01, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30]\n }\n elif self.algo_type =='regression':\n objective = AdaBoostRegressor()\n param_grid = {\n 'n_estimators': [30, 70, 100, 150, 250, 400, 700],\n \"learning_rate\": [0.01, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30],\n 'loss' : ['linear', 'square', 'exponential']\n }\n print('Optimizing AdaBoost algorithm...')\n\n if self.search_opt == 'grid':\n model = self.grid_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'bayesian':\n model = self.bayesian_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'randomized':\n model = self.randomized_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'none':\n scores = cross_validate(objective, self.X_train, self.y_train, cv=self.cv, return_estimator=True)\n print(scores['test_score'].max())\n return scores['estimator'][scores['test_score'].argmax()]\n else:\n raise ValueError('please provide a correct search optimization technique') \n\n def fit_SVM(self):\n \n param_grid = {'C': [0.1, 1, 10, 30, 40, 100, 1000],\n 'gamma': [1, 0.1, 0.01, 0.001],\n 'kernel': ['rbf', 'poly', 'sigmoid']\n }\n if self.algo_type == 'binary' or self.algo_type == 'multiclass':\n objective = SVC()\n\n elif 
self.algo_type =='regression':\n objective = SVR()\n\n print('Optimizing SVM algorithm...')\n\n\n if self.search_opt == 'grid':\n model = self.grid_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'bayesian':\n model = self.bayesian_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'randomized':\n model = self.randomized_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'none':\n scores = cross_validate(objective, self.X_train, self.y_train, cv=self.cv, return_estimator=True)\n print(scores['test_score'].max())\n return scores['estimator'][scores['test_score'].argmax()]\n else:\n raise ValueError('please provide a correct search optimization technique') \n\n def fit_LogR(self):\n\n print('Optimizing Logistic Regression algorithm...')\n param_grid = {\n 'penalty' : ['l1', 'l2'],\n 'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000]\n }\n objective = LogisticRegression()\n\n if self.search_opt == 'grid':\n model = self.grid_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'bayesian':\n model = self.bayesian_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'randomized':\n model = self.randomized_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'none':\n scores = cross_validate(objective, self.X_train, self.y_train, cv=self.cv, return_estimator=True)\n print(scores['test_score'].max())\n return scores['estimator'][scores['test_score'].argmax()]\n else:\n raise ValueError('please provide a correct search optimization technique') \n\n def fit_Bagging(self):\n\n print('Optimizing Bagging algorithm...')\n param_grid = {\n 'n_estimators': [30, 70, 100, 150, 250, 400, 700],\n 'max_samples': [0.05, 0.1, 0.2, 0.5, 1]\n }\n if self.algo_type == 'binary' or self.algo_type == 'multiclass':\n objective = BaggingClassifier()\n elif self.algo_type =='regression':\n objective = BaggingRegressor()\n\n\n if self.search_opt == 'grid':\n model = self.grid_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'bayesian':\n model = self.bayesian_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'randomized':\n model = self.randomized_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'none':\n scores = cross_validate(objective, self.X_train, self.y_train, cv=self.cv, return_estimator=True)\n print(scores['test_score'].max())\n return scores['estimator'][scores['test_score'].argmax()]\n else:\n raise ValueError('please provide a correct search optimization technique') \n\n def fit_ANN(self):\n\n param_grid = {\n 'hidden_layer_sizes': [(3,), (6,), (10,), (15,), (20,), (30,), (40,), (50,), (100,), (150,), (200,),\n (3,3,), (6,6,), (10,10,), (15,15,), (20,20,), (30,30,), (40,40,), (50,50,), (100,100,), (150,150,), (200,200),],\n 'learning_rate_init': [0.01, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30],\n 'learning_rate': ['constant', 'invscaling', 'adaptive'],\n 'batch_size': [25, 50, 100, 200, 300],\n 'solver': ['lbfgs', 'sgd', 'adam'],\n 'activation': ['identity', 'logistic', 'tanh', 'relu'],\n 'alpha': [0, 0.001, 0.01, 0.1, 0.5, 1.0, 1.1, 1.5],\n } \n\n if self.algo_type == 'binary' or self.algo_type == 'multiclass':\n objective = MLPClassifier(early_stopping=True)\n\n elif self.algo_type =='regression':\n objective = MLPRegressor(early_stopping=True)\n\n 
print('Optimizing Artificial Neural Network...')\n\n\n if self.search_opt == 'grid':\n model = self.grid_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'bayesian':\n model = self.bayesian_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'randomized':\n model = self.randomized_opt(objective, param_grid)\n self.print_info(model)\n return model\n elif self.search_opt == 'none':\n scores = cross_validate(objective, self.X_train, self.y_train, cv=self.cv, return_estimator=True)\n print(scores['test_score'].max())\n return scores['estimator'][scores['test_score'].argmax()]\n else:\n raise ValueError('please provide a correct search optimization technique') \n",
"_____no_output_____"
],
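Besides the 'grid', 'randomized' and 'bayesian' search options used later in this notebook, the `Tuner` class above also accepts `search_opt='none'`, which simply cross-validates the estimator's default hyperparameters and returns the fitted model from the best-scoring fold. The sketch below shows that path on a small synthetic dataset (the dataset and variable names are invented for illustration only):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data, only to demonstrate the call pattern
X_demo, y_demo = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo, test_size=0.2, random_state=0)

# search_opt='none' skips hyperparameter search entirely
quick_tuner = Tuner(X_tr, y_tr, algo_type='binary', search_opt='none', cv=3)
best_rf = quick_tuner.fit_RF()          # returns the estimator from the best CV fold
print(best_rf.score(X_te, y_te))        # accuracy on the held-out split
```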
[
"import pandas as pd\nfrom sklearn.model_selection import train_test_split\n\nX = pd.read_csv('/content/drive/Shared drives/celdum/X.csv')\ny = pd.read_csv('/content/drive/Shared drives/celdum/y.csv') \ndel y['Unnamed: 0']\ndel X['Unnamed: 0']\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, stratify=y)\n",
"_____no_output_____"
],
[
"tuner = Tuner(X_train,y_train, n_iteration=1500, search_opt='bayesian')",
"_____no_output_____"
],
[
"LGBM = tuner.fit_LGBM()\nrfcpred = LGBM.predict(X_test)\nprint(LGBM.best_params_)\naccuracy = accuracy_score(y_test, rfcpred)\nprint(accuracy)",
"Optimizing LGBM algorithm...\nFitting 5 folds for each of 1500 candidates, totalling 7500 fits\n"
],
[
"XG = tuner.fit_XGB()\nrfcpred = XG.predict(X_test)\nprint(XG.best_params_)\naccuracy = accuracy_score(y_test, rfcpred)\nprint(accuracy)",
"Optimizing XGB algorithm...\nFitting 5 folds for each of 1500 candidates, totalling 7500 fits\n"
],
[
"GB = tuner.fit_GBM()\nrfcpred = GB.predict(X_test)\nprint(GB.best_params_)\naccuracy = accuracy_score(y_test, rfcpred)\nprint(accuracy)",
"Optimizing Gradient Boosting algorithm...\nFitting 5 folds for each of 200 candidates, totalling 1000 fits\n"
],
[
"RF = tuner.fit_RF()\nrfcpred = RF.predict(X_test)\nprint(RF.best_params_)\naccuracy = accuracy_score(y_test, rfcpred)\nprint(accuracy)",
"Optimizing Random Forest algorithm...\nFitting 5 folds for each of 500 candidates, totalling 2500 fits\n"
],
[
"AdaBoost = tuner.fit_AdaBoost()\nrfcpred = AdaBoost.predict(X_test)\nprint(AdaBoost.best_params_)\naccuracy = accuracy_score(y_test, rfcpred)\nprint(accuracy)",
"Optimizing AdaBoost algorithm...\nFitting 5 folds for each of 49 candidates, totalling 245 fits\n"
],
[
"ANN = tuner.fit_ANN()\nrfcpred = ANN.predict(X_test)\nprint(ANN.best_params_)\naccuracy = accuracy_score(y_test, rfcpred)\nprint(accuracy)",
"Optimizing Artificial Neural Network...\nFitting 5 folds for each of 200 candidates, totalling 1000 fits\n"
],
[
"SVM = tuner.fit_SVM()\nrfcpred = SVM.predict(X_test)\nprint(SVM.best_params_)\naccuracy = accuracy_score(y_test, rfcpred)\nprint(accuracy)",
"Optimizing SVM algorithm...\nFitting 5 folds for each of 30 candidates, totalling 150 fits\n"
],
[
"Bagging = tuner.fit_Bagging()\nrfcpred = Bagging.predict(X_test)\nprint(Bagging.best_params_)\naccuracy = accuracy_score(y_test, rfcpred)\nprint(accuracy)",
"Optimizing Bagging algorithm...\nFitting 5 folds for each of 35 candidates, totalling 175 fits\n"
],
[
"LOGR = tuner.fit_LogR()\nrfcpred = LOGR.predict(X_test)\nprint(LOGR.best_params_)\naccuracy = accuracy_score(y_test, rfcpred)\nprint(accuracy)",
"Optimizing Logistic Regression algorithm...\nFitting 5 folds for each of 14 candidates, totalling 70 fits\n"
],
[
"DT = tuner.fit_DT()\nrfcpred = DT.predict(X_test)\nprint(DT.best_params_)\naccuracy = accuracy_score(y_test, rfcpred)\nprint(accuracy)",
"Optimizing Decision Tree algorithm...\nFitting 5 folds for each of 1500 candidates, totalling 7500 fits\n"
],
[
"from sklearn.metrics import classification_report\nfrom sklearn.metrics import roc_auc_score\nprint(classification_report(y_test, rfcpred, digits=4))\nprint(roc_auc_score(y_test, rfcpred))\n",
" precision recall f1-score support\n\n 0 0.9892 0.8429 0.9102 10965\n 1 0.4283 0.9274 0.5860 1392\n\n accuracy 0.8524 12357\n macro avg 0.7088 0.8852 0.7481 12357\nweighted avg 0.9260 0.8524 0.8737 12357\n\n0.8851530929131878\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9c16a7dfc8d50308ac7a2ed006068568f55863 | 247,173 | ipynb | Jupyter Notebook | notebooks/python-example.ipynb | rjw57/vagrant-ipython | 927b521b421137ad127f4a6461c30197338ff58d | [
"MIT"
] | null | null | null | notebooks/python-example.ipynb | rjw57/vagrant-ipython | 927b521b421137ad127f4a6461c30197338ff58d | [
"MIT"
] | null | null | null | notebooks/python-example.ipynb | rjw57/vagrant-ipython | 927b521b421137ad127f4a6461c30197338ff58d | [
"MIT"
] | null | null | null | 82.999664 | 194 | 0.830734 | [
[
[
"# Python notebook example\n\nThis is an example of running Python from a notebook. Note that we use some custom packages installed via the ``requirements.txt`` file. One of these came from a University git repository.",
"_____no_output_____"
]
],
[
[
"# Enable plotting support, etc. One could've folded this into the generic IPython\n# configuration but I prefer to make it explicit.\n%pylab inline\nrcParams['figure.figsize'] = (16, 12)",
"Populating the interactive namespace from numpy and matplotlib\n"
]
],
[
[
"We're going to use the ``trafficutils`` library to try to load a representation of England's major road network from the UK Highways agency. Firstly, let's import the appropriate function:",
"_____no_output_____"
]
],
[
[
"import trafficutils.io as tio",
"_____no_output_____"
]
],
[
[
"Let's check the documentation for the function we're going to use:",
"_____no_output_____"
]
],
[
[
"help(tio.load_traffic_network)",
"Help on function load_traffic_network in module trafficutils.io:\n\nload_traffic_network(filename_or_fobj, include_meta=False, include_optional=True)\n Load a traffic network from an XML document containing predefined links.\n \n Load the XML document from the file object or file name *filename_or_fobj*.\n \n Returns a networkx graph where each node is a (x,y) pair in WGS84. (There\n is also a pos field in each node which repeats the position.)\n \n If *include_meta* is True, additionally return a dictionary containing\n metadata about the network:\n \n {\n 'publicationtime': datetime(<publication time in UTC>),\n }\n \n Each node's dictionary may optionally have the following fields:\n \n label: the node descriptor or None if empty\n ilc: a list of ILC (incident roads) on this junction\n \n Each edge's dictionary may optionally have the following fields:\n \n label: the link name\n id: a unique id for the link\n \n These fields are never present if *include_optional* is False.\n\n"
]
],
[
[
"OK, let's use ``requests`` to download the XML document and then we'll pass it to ``load_traffic_network`` as a ``BytesIO`` object.",
"_____no_output_____"
]
],
[
[
"from requests import get\n\nNETWORK_XML_URL=\"http://hatrafficinfo.dft.gov.uk/feeds/datex/England/PredefinedLocationLinks/content.xml\"\nnetwork_xml_req = get(NETWORK_XML_URL)\nassert network_xml_req.status_code == 200 # OK\nnetwork_xml = network_xml_req.content",
"_____no_output_____"
],
[
"from io import BytesIO\nG = tio.load_traffic_network(BytesIO(network_xml))",
"_____no_output_____"
]
],
[
[
"How many nodes and links do we have?",
"_____no_output_____"
]
],
[
[
"print('nodes:', G.number_of_nodes())\nprint('edges:', G.number_of_edges())",
"nodes: 11465\nedges: 12552\n"
]
],
[
[
"Let's try plotting it:",
"_____no_output_____"
]
],
[
[
"from networkx import draw_networkx, draw_networkx_edges\n\n# The pos keyword argument is a dictionary mapping nodes to their (x, y) positions.\n# Since the nodes are the (longitude, latitude) pairs we can just use the nodes as\n# their own positions.\ndraw_networkx_edges(G, pos=dict((n,n) for n in G.nodes_iter()))\naxis('equal')\ngrid('on')",
"_____no_output_____"
]
],
[
[
"It's a bit squashed what with the projection being raw latitudes and longitudes. Use the ``pyproj`` package to map from WGS84 latitude/longitude to the British National Grid.",
"_____no_output_____"
]
],
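Before reprojecting the whole graph, it can help to see what a single ``pyproj.transform`` call does. The sketch below converts one made-up longitude/latitude pair (roughly central England) into British National Grid eastings and northings; the point itself is only illustrative.

```python
import pyproj

wgs84 = pyproj.Proj(init='epsg:4326')   # WGS84 longitude/latitude
bng = pyproj.Proj(init='epsg:27700')    # British National Grid, in metres

# One made-up point somewhere in central England: (longitude, latitude)
lon, lat = -1.5, 52.5
easting, northing = pyproj.transform(wgs84, bng, lon, lat)
print(easting, northing)
```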
[
[
"import pyproj\n\nwgs84 = pyproj.Proj(init='epsg:4326')\nbng = pyproj.Proj(init='epsg:27700')\n\ndraw_networkx_edges(G, pos=dict((n, pyproj.transform(wgs84, bng, *n)) for n in G.nodes_iter()))\naxis('equal')\ngrid('on')",
"_____no_output_____"
],
[
"\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec9c176eba589d76ac04dde551922b3ff67f494c | 8,704 | ipynb | Jupyter Notebook | 1-3D-visualization/6-UnitCell.ipynb | NicholasAKovacs/mmtf-workshop | 6c98d22f75f4eaf99467dfed0f5302d4daee3148 | [
"Apache-2.0"
] | 53 | 2018-05-07T06:06:25.000Z | 2022-03-31T06:15:35.000Z | 1-3D-visualization/6-UnitCell.ipynb | NicholasAKovacs/mmtf-workshop | 6c98d22f75f4eaf99467dfed0f5302d4daee3148 | [
"Apache-2.0"
] | 3 | 2018-05-07T07:46:07.000Z | 2022-01-10T00:57:30.000Z | 1-3D-visualization/6-UnitCell.ipynb | NicholasAKovacs/mmtf-workshop | 6c98d22f75f4eaf99467dfed0f5302d4daee3148 | [
"Apache-2.0"
] | 25 | 2018-05-04T23:05:22.000Z | 2022-01-24T07:03:50.000Z | 50.900585 | 1,539 | 0.608456 | [
[
[
"# 6-UnitCell\nThis tutorial shows how to render the unit cell. The asymmetric unit is the smallest portion of a crystal structure to which symmetry operations can be applied in order to generate the complete unit cell (the crystal repeating unit). See [PDB-101 tutorial](https://pdb101.rcsb.org/learn/guide-to-understanding-pdb-data/biological-assemblies#Anchor-Asym).",
"_____no_output_____"
]
],
[
[
"import py3Dmol",
"_____no_output_____"
],
[
"viewer = py3Dmol.view(query='pdb:4HHB')\nviewer.setStyle({'cartoon': {'color': 'spectrum'}})\nviewer.show()",
"_____no_output_____"
]
],
[
[
"## Add the unit cell\nNote, this option only shows the boundaries of the unit cell. It does not fill the unit cell.",
"_____no_output_____"
]
],
[
[
"viewer.addUnitCell()\nviewer.zoomTo()\nviewer.show()",
"_____no_output_____"
]
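If you want the contents of the cell repeated as well, rather than only the outline drawn above, recent 3Dmol.js releases provide a ``replicateUnitCell`` viewer method. Whether it is exposed by the py3Dmol/3Dmol.js version in your environment is an assumption to verify against the 3Dmol.js documentation, so treat this purely as a sketch:

```python
import py3Dmol

viewer = py3Dmol.view(query='pdb:4HHB')
viewer.setStyle({'cartoon': {'color': 'spectrum'}})
viewer.addUnitCell()
# Assumed API: repeat the model over 2 x 2 x 2 unit cells along the a, b, c axes
viewer.replicateUnitCell(2, 2, 2)
viewer.zoomTo()
viewer.show()
```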
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9c21ff5ffed1ceb0304ac59833a42d211f3078 | 326,841 | ipynb | Jupyter Notebook | content/hello_world/hello_world_exercise.ipynb | dawn-ico/dusk-dawn-notebook | 1a6047cc7c0b03576e3db948d70bf9e8eaaff8e1 | [
"MIT"
] | null | null | null | content/hello_world/hello_world_exercise.ipynb | dawn-ico/dusk-dawn-notebook | 1a6047cc7c0b03576e3db948d70bf9e8eaaff8e1 | [
"MIT"
] | 37 | 2020-10-13T10:45:08.000Z | 2020-12-16T14:06:53.000Z | content/hello_world/hello_world_exercise.ipynb | dawn-ico/dusk-dawn-notebook | 1a6047cc7c0b03576e3db948d70bf9e8eaaff8e1 | [
"MIT"
] | 2 | 2020-10-12T12:49:31.000Z | 2020-10-13T14:39:56.000Z | 508.306376 | 107,804 | 0.946852 | [
[
[
"# First dusk/dawn exercises\n\nWelcome to your first dusk/dawn exercises. The goal is to code some simple dusk code. For now, we will only do point-wise stencils. You will see neighborhoods & extents in other exercises. At the end, we will also implement some basic image processing techniques.",
"_____no_output_____"
],
[
"### Simple `c = a + b` stencil\n\nWe will start off with a simple stencil that adds two fields together. The goal is to compute `c = a + b`.\n\nHere is a skeleton for our stencil. It takes three fields as input. Below is a skeleton that you have to fill out:",
"_____no_output_____"
]
],
[
[
"from dusk.script import *\n\n@stencil\ndef simple_stencil(\n a: Field[Cell],\n b: Field[Cell],\n c: Field[Cell]\n):\n with levels_upward:\n # Here we write what we want to compute:\n c = 0",
"_____no_output_____"
]
],
[
[
"Now that we've written the stencil, we want to compile it to C++. We can to this with dusk's python API and dawn's cli:",
"_____no_output_____"
]
],
[
[
"from dusk.transpile import callable_to_pyast, pyast_to_sir, sir_to_json\n\nwith open(\"simple_stencil.sir\", \"w\") as f:\n f.write(sir_to_json(pyast_to_sir(callable_to_pyast(simple_stencil))))\n\n!dawn-opt simple_stencil.sir | dawn-codegen -b naive-ico -o simple_stencil_cxx-naive.cpp\n!clang-format -i simple_stencil_cxx-naive.cpp",
"_____no_output_____"
]
],
[
[
"Then we continue with compiling the C++ code into an exectuable:",
"_____no_output_____"
]
],
[
[
"!make simple_stencil",
"_____no_output_____"
]
],
[
[
"Lastly, we run the `runner`. It will write out your results to disk.",
"_____no_output_____"
]
],
[
[
"!./runner",
"Finished simple stencil successfully.\n"
]
],
[
[
"To check whether the stencil does what it should, there is a small checker function. It will veriy whether `c == a + b`.\nLet's see if everything worked out well:",
"_____no_output_____"
]
],
[
[
"from helpers import check_first\ncheck_first()",
"At some point there is `c != a + b`...\n"
]
],
[
[
"You should now see:\n\n Success (`c == a + b`)!\n\nIf not, your stencil might be wrong or something else went wrong before.",
"_____no_output_____"
],
[
"### Displaying the absolute-value norm on a triangular mesh\n\nIn this exercise, we want to plot the absolute-value norm on a triangular mesh. The absolute-value norm for a two dimensional vector $ \\textbf{x} $ is defined as:\n\n$$ | \\textbf{x} | = | x_1 | + | x_2 | $$\n\nWhere $ x_1 $ and $ x_2 $ are the two scalar components of $ \\textbf{x} $.\n\nOur grid will be of size 54x54, so we want to center the norm around $ (27, 27) $. To get some nice contour lines, we will use the floor function. Additionally, we will scale the result by 10 before applying the floor function.\n\nAll together this gives us:\n\n$$ c = \\left \\lfloor \\frac{| \\textbf{x} - \\textbf{v} |}{10} \\right \\rfloor, \\textbf{v} = \\begin{pmatrix} 27 \\\\ 27 \\end{pmatrix} $$\n\nHint: In dusk you can use `abs(a)` for $ | a | $ and `floor(b)` for $ \\lfloor b \\rfloor $.\n\nHint: Dusk doesn't support vectors. It's probably best to write $ \\textbf{v} $'s components separately as constants.\n\nBelow is a skeleton that you can fill out to prepare the contour plot:",
"_____no_output_____"
]
],
[
[
"from dusk.script import *\n\n@stencil\ndef simple_stencil(\n x: Field[Cell],\n y: Field[Cell],\n c: Field[Cell]\n):\n with levels_upward:\n # We want to store the floored and scaled absolute norm into `c`:\n c = 1",
"_____no_output_____"
]
],
[
[
"Once we've written the stencil, we want to see if it computes what we wanted. This time all of the steps are condensed right below.",
"_____no_output_____"
]
],
[
[
"from dusk.transpile import callable_to_pyast, pyast_to_sir, sir_to_json\n\nwith open(\"simple_stencil.sir\", \"w\") as f: f.write(sir_to_json(pyast_to_sir(callable_to_pyast(simple_stencil))))\n\n!dawn-opt simple_stencil.sir | dawn-codegen -b naive-ico -o simple_stencil_cxx-naive.cpp\n!clang-format -i simple_stencil_cxx-naive.cpp\n!make simple_stencil\n!./runner",
"Finished simple stencil successfully.\n"
]
],
[
[
"To plot the contour lines, we prepared a helper function:",
"_____no_output_____"
]
],
[
[
"from helpers import plot_c\nplot_c()",
"_____no_output_____"
]
],
[
[
"See if your plot matches the expected result.\n\nBonus: If you want to, you can also plot other norms and see what they look like on a triangular mesh. But maybe do this after you've finished the other exercises, there's some interesting things coming still :).",
"_____no_output_____"
],
[
"### Colored to black/white image:\n\nA classic and simple image transformation is to convert a colored image to a black and white image. Let's see what that would look like in dusk.\n\nFor this exercise we will use the ESiWACE logo:\n\n\n\nYou can also upload your own image into the jupyter lab session. However, be careful because bigger images can take a long time to process, since the naive C++ backend is not optimized at all.\nWe will use a smaller version of the ESiWACE logo (`./logo_small.jpg`):",
"_____no_output_____"
]
],
[
[
"from helpers import image_to_data\n# stores the jpg into `./picture.txt` which the runner will read later on\nimage_to_data('./logo_small.jpg')",
"_____no_output_____"
]
],
[
[
"To convert an RGB image to black and white, we will average all three color channels:\n\n$$ c = \\frac{r + g + b}{3} $$\n\nThen all three color channels will take on this value:\n\n$$ r = c \\\\ g = c \\\\ b = c $$\n\n(Some methods will weight the three color channels differently, but let's keep it simple for now)\n\nFor this stencil, you'll have multiple inputs:\n\n* The three color channels: `r`, `g`, `b`\n\n* Some fields to store intermediate results: `tmp1`, `tmp2`, `tmp3`, `tmp4`, `tmp5`, `tmp6`\n\n* The cell mid-points: `x`, `y` (you can also use them as intermediate results)\n\n(There are some scaling issues when it comes to `x` and `y`, but you won't need those for the exercises. However, they might be interesting if you want to experiment with such an image stencil)\n\nFeel free to rename those parameters as you please!\n\nUse the skeleton below to compute the black and white values for each cell based on its three color values:",
"_____no_output_____"
]
],
[
[
"from dusk.script import *\n\n@stencil\ndef image_stencil(\n r: Field[Cell],\n g: Field[Cell],\n b: Field[Cell],\n tmp1: Field[Cell],\n tmp2: Field[Cell],\n tmp3: Field[Cell],\n tmp4: Field[Cell],\n tmp5: Field[Cell],\n tmp6: Field[Cell],\n x: Field[Cell],\n y: Field[Cell],\n):\n with levels_upward:\n r = 1",
"_____no_output_____"
]
],
[
[
"Then we compile & run the whole stencil:",
"_____no_output_____"
]
],
[
[
"from dusk.transpile import callable_to_pyast, pyast_to_sir, sir_to_json\nwith open(\"image_stencil.sir\", \"w\") as f:\n f.write(sir_to_json(pyast_to_sir(callable_to_pyast(image_stencil))))\n\n!dawn-opt image_stencil.sir | dawn-codegen -b naive-ico -o image_stencil_cxx-naive.cpp\n!clang-format -i image_stencil_cxx-naive.cpp\n!make image_stencil\n\n!./runner",
"Finished image stencil successfully.\n"
]
],
[
[
"To plot your results you can use this prepared helper function:",
"_____no_output_____"
]
],
[
[
"from helpers import plot_image\nplot_image('black_white')",
"_____no_output_____"
]
],
[
[
"Verify whether your computed solution corresponds to the reference solution.",
"_____no_output_____"
],
[
"### Over saturated image:\n\nNext we will try to increase the saturation of an image. This is useful to impress your friends with oversaturated landscape pictures from your holidays.\n\nTo do this, we want to dampen weak colors and strenghten strong colors. We will again work on a _per colorchannel basis_.\n\nWe will map each color channel based on a curve. For saturation we can use a curve like:",
"_____no_output_____"
]
],
[
[
"from helpers import plot_saturation_curve\nplot_saturation_curve()",
"_____no_output_____"
]
],
[
[
"These curves are given by:\n\nWeights:\n\n$$\nw_1 = (1 - x)^p \\\\\nw_2 = x^p\n$$\n\nFractions:\n\n$$\nf_1 = \\frac{x}{q} \\\\\nf_2 = \\frac{x - 1}{q} + 1\n$$\n\nFormula:\n\n$$\ny = \\frac{w_1 f_1 + w_2 f_2}{w_1 + w_2}\n$$\n\nHowever, these curves work for $ x \\in [0, 1] $. Our RGB channels are in $ [0, 255] $.\n\nWhich leaves us with the following steps per channel:\n\n* Scale channel from $ [0, 255] $ linearly to $ [0, 1] $\n\n* Apply curve\n\n* Scale channel back from $ [0, 1] $ to $ [0, 255] $ linearly\n\nHint: The power operator can be written in dusk as `x**y`. Take care that all operands are floats (and not integers), otherwise you might trigger spurious type errors. Alternatively, you can also write it as `pow(x, y)`.\n\nNote: Since we have to apply the same operation for all three color channels, you will probably end up with quite a few code clones. In the future, we will support functions inside stencils, so that we can avoid such code clones.\n\nWe have provided you with the following skeleton:",
"_____no_output_____"
]
],
[
[
"from dusk.script import *\n\n@stencil\ndef image_stencil(\n r: Field[Cell],\n g: Field[Cell],\n b: Field[Cell],\n p: Field[Cell],\n q: Field[Cell],\n w1: Field[Cell],\n w2: Field[Cell],\n f1: Field[Cell],\n f2: Field[Cell],\n x: Field[Cell],\n y: Field[Cell],\n):\n with levels_upward:\n \n # we can change these to try different curves\n p = 4.0\n q = 3.0\n \n # apply the above steps per channel:\n ",
"_____no_output_____"
]
],
[
[
"Then we compile and run the stencil.",
"_____no_output_____"
]
],
[
[
"from dusk.transpile import callable_to_pyast, pyast_to_sir, sir_to_json\n\nwith open(\"image_stencil.sir\", \"w\") as f:\n f.write(sir_to_json(pyast_to_sir(callable_to_pyast(image_stencil))))\n\n!dawn-opt image_stencil.sir | dawn-codegen -b naive-ico -o image_stencil_cxx-naive.cpp\n!clang-format -i image_stencil_cxx-naive.cpp\n!make image_stencil\n!./runner",
"Finished image stencil successfully.\n"
]
],
[
[
"And last we look at our results",
"_____no_output_____"
]
],
[
[
"from helpers import plot_image\nplot_image('saturation')",
"_____no_output_____"
]
],
[
[
"Hopefully, you will have computed a nicely oversaturaed ESiWACE logo.\n\nIf you want to experiment with other stencils, you can also plot only the computed image:",
"_____no_output_____"
]
],
[
[
"from helpers import plot_single_image\nplot_single_image()",
"_____no_output_____"
]
],
[
[
"That's it for our first exercises. If you made it this far, congratulations!\nPlease let us know in slack also :)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ec9c2da4c3654c54a80ae5a3090edfded940e71b | 19,344 | ipynb | Jupyter Notebook | nbs/09_gt.ipynb | Maddonix/deepflash2 | 0873376c26f62a5cad84b36bc3b4b1f498793f5c | [
"Apache-2.0"
] | null | null | null | nbs/09_gt.ipynb | Maddonix/deepflash2 | 0873376c26f62a5cad84b36bc3b4b1f498793f5c | [
"Apache-2.0"
] | null | null | null | nbs/09_gt.ipynb | Maddonix/deepflash2 | 0873376c26f62a5cad84b36bc3b4b1f498793f5c | [
"Apache-2.0"
] | null | null | null | 30.9504 | 250 | 0.501189 | [
[
[
"#default_exp gt\nfrom nbdev.showdoc import show_doc, add_docs",
"_____no_output_____"
]
],
[
[
"# Ground Truth Estimation\n\n> Implements functions for ground truth estimation from the annotations of multiple experts. Based on [SimpleITK](http://insightsoftwareconsortium.github.io/SimpleITK-Notebooks/Python_html/34_Segmentation_Evaluation.html).",
"_____no_output_____"
]
],
[
[
"#hide \nfrom deepflash2.gui import _get_expert_sample_masks",
"_____no_output_____"
],
[
"#export\nimport imageio, pandas as pd, numpy as np\nfrom pathlib import Path\nfrom fastcore.basics import GetAttr\nfrom fastprogress import progress_bar\nfrom fastai.data.transforms import get_image_files\nimport matplotlib.pyplot as plt\n\nfrom deepflash2.data import _read_msk\nfrom deepflash2.learner import Config\nfrom deepflash2.utils import save_mask, iou, install_package",
"_____no_output_____"
]
],
[
[
"## Helper Functions",
"_____no_output_____"
],
[
"Installing [SimpleITK](http://insightsoftwareconsortium.github.io/SimpleITK-Notebooks/Python_html/34_Segmentation_Evaluation.html), which is not a dependency of `deepflash2`.",
"_____no_output_____"
]
],
[
[
"#export\ndef import_sitk():\n try:\n import SimpleITK\n assert SimpleITK.Version_MajorVersion()==2\n except:\n print('Installing SimpleITK. Please wait.')\n install_package(\"SimpleITK==2.0.2\")\n import SimpleITK\n return SimpleITK",
"_____no_output_____"
]
],
[
[
"## Ground Truth Estimation",
"_____no_output_____"
],
[
"### Simultaneous truth and performance level estimation (STAPLE) \n\nThe STAPLE algorithm considers a collection of segmentations and computes a probabilistic estimate of the true segmentation and a measure of the performance level represented by each segmentation. \n\n_Source: Warfield, Simon K., Kelly H. Zou, and William M. Wells. \"Simultaneous truth and performance level estimation (STAPLE): an algorithm for the validation of image segmentation.\" IEEE transactions on medical imaging 23.7 (2004): 903-921_",
"_____no_output_____"
]
],
[
[
"#export\ndef staple(segmentations, foregroundValue = 1, threshold = 0.5):\n 'STAPLE: Simultaneous Truth and Performance Level Estimation with simple ITK'\n sitk = import_sitk()\n segmentations = [sitk.GetImageFromArray(x) for x in segmentations]\n STAPLE_probabilities = sitk.STAPLE(segmentations)\n STAPLE = STAPLE_probabilities > threshold\n #STAPLE = sitk.GetArrayViewFromImage(STAPLE)\n return sitk.GetArrayFromImage(STAPLE)",
"_____no_output_____"
]
],
[
[
"### Majority Voting\nUse majority voting to obtain the reference segmentation. Note that this filter does not resolve ties. In case of ties it will assign the backgtound label (0) to the result. ",
"_____no_output_____"
]
],
[
[
"#export\ndef m_voting(segmentations, labelForUndecidedPixels = 0):\n 'Majority Voting from simple ITK Label Voting'\n sitk = import_sitk()\n segmentations = [sitk.GetImageFromArray(x) for x in segmentations]\n mv_segmentation = sitk.LabelVoting(segmentations, labelForUndecidedPixels)\n return sitk.GetArrayFromImage(mv_segmentation)",
"_____no_output_____"
]
],
[
[
"### GT Estimator\n\nClass for ground truth estimation",
"_____no_output_____"
]
],
[
[
"#export\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\ndef msk_show(ax, msk, title, cbar=None, ticks=None, **kwargs):\n img = ax.imshow(msk, **kwargs)\n if cbar is not None:\n divider = make_axes_locatable(ax)\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n if cbar=='plot': \n scale = ticks/(ticks+1)\n cbr = plt.colorbar(img, cax=cax, ticks=[i*(scale)+(scale/2) for i in range(0, ticks+1)])\n cbr.set_ticklabels([i for i in range(0, ticks+1)])\n cbr.set_label('# of experts', rotation=270, labelpad=+15, fontsize=\"larger\")\n else: cax.set_axis_off()\n ax.set_axis_off()\n ax.set_title(title) ",
"_____no_output_____"
],
[
"#export\nclass GTEstimator(GetAttr):\n \"Class for ground truth estimation\"\n _default = 'config' \n \n def __init__(self, exp_dir='expert_segmentations', config=None, path=None, cmap='viridis' , verbose=1):\n self.exp_dir = exp_dir\n self.config = config or Config()\n self.path = Path(path) if path is not None else Path('.')\n self.mask_fn = lambda exp,msk: self.path/self.exp_dir/exp/msk\n self.cmap = cmap\n self.gt = {}\n \n f_list = get_image_files(self.path/self.exp_dir)\n assert len(f_list)>0, f'Found {len(f_list)} masks in \"{self.path/self.exp_dir}\". Please check your masks and expert folders.'\n ass_str = f'Found unexpected folder structure in {self.path/self.exp_dir}. Please check your provided masks and folders.'\n assert len(f_list[0].relative_to(self.path/self.exp_dir).parents)==2, ass_str\n \n self.masks = {}\n self.experts = []\n for m in sorted(f_list):\n exp = m.parent.name\n if m.name in self.masks:\n self.masks[m.name].append(exp)\n else:\n self.masks[m.name] = [exp]\n self.experts.append(exp)\n self.experts = sorted(set(self.experts))\n if verbose>0: print(f'Found {len(self.masks)} unique segmentation mask(s) from {len(self.experts)} expert(s)')\n \n def show_data(self, max_n=6, files=None, figsize=None, **kwargs):\n if files is not None:\n files = [(m,self.masks[m]) for m in files]\n else:\n max_n = min((max_n, len(self.masks)))\n files = list(self.masks.items())[:max_n]\n if not figsize: figsize = (len(self.experts)*3,3)\n for m, exps in files:\n fig, axs = plt.subplots(nrows=1, ncols=len(exps), figsize=figsize, **kwargs)\n for i, exp in enumerate(exps):\n msk = _read_msk(self.mask_fn(exp,m))\n msk_show(axs[i], msk, exp, cmap=self.cmap)\n fig.text(0, .5, m, ha='center', va='center', rotation=90)\n plt.tight_layout()\n plt.show()\n \n def gt_estimation(self, method='STAPLE', save_dir=None, filetype='.png', **kwargs):\n assert method in ['STAPLE', 'majority_voting']\n res = []\n refs = {}\n print(f'Starting ground truth estimation - {method}')\n for m, exps in progress_bar(self.masks.items()):\n masks = [_read_msk(self.mask_fn(exp,m)) for exp in exps]\n if method=='STAPLE': \n ref = staple(masks, self.staple_fval, self.staple_thres)\n elif method=='majority_voting':\n ref = m_voting(masks, self.mv_undec)\n refs[m] = ref\n #assert ref.mean() > 0, 'Please try again!'\n df_tmp = pd.DataFrame({'method': method, 'file' : m, 'exp' : exps, 'iou': [iou(ref, msk) for msk in masks]})\n res.append(df_tmp)\n if save_dir: \n path = self.path/save_dir\n path.mkdir(exist_ok=True, parents=True)\n save_mask(ref, path/Path(m).stem, filetype)\n self.gt[method] = refs\n self.df_res = pd.concat(res)\n self.df_agg = self.df_res.groupby('exp').agg(average_iou=('iou', 'mean'), std_iou=('iou', 'std'))\n if save_dir: \n self.df_res.to_csv(path.parent/f'{method}_vs_experts.csv', index=False)\n self.df_agg.to_csv(path.parent/f'{method}_vs_experts_agg.csv', index=False)\n \n def show_gt(self, method='STAPLE', max_n=6, files=None, figsize=(15,5), **kwargs):\n if not files: files = list(t.masks.keys())[:max_n]\n for f in files:\n fig, ax = plt.subplots(ncols=3, figsize=figsize, **kwargs)\n # GT\n msk_show(ax[0], self.gt[method][f], f'{method} (binary mask)', cbar='', cmap=self.cmap)\n # Experts\n masks = [_read_msk(self.mask_fn(exp,f)) for exp in self.masks[f]]\n masks_av = np.array(masks).sum(axis=0)#/len(masks)\n msk_show(ax[1], masks_av, 'Expert Overlay', cbar='plot', ticks=len(masks), cmap=plt.cm.get_cmap(self.cmap, len(masks)+1))\n # Results\n av_df = 
pd.DataFrame([self.df_res[self.df_res.file==f][['iou']].mean()], index=['average'], columns=['iou'])\n plt_df = self.df_res[self.df_res.file==f].set_index('exp')[['iou']].append(av_df)\n plt_df.columns = [f'Similarity (iou)']\n tbl = pd.plotting.table(ax[2], np.round(plt_df,3), loc='center', colWidths=[.5])\n tbl.set_fontsize(14)\n tbl.scale(1, 2)\n ax[2].set_axis_off()\n fig.text(0, .5, f, ha='center', va='center', rotation=90)\n plt.tight_layout()\n plt.show()",
"_____no_output_____"
],
[
"exp_dir = Path('deepflash2/sample_data/expert_segmentations')\n_get_expert_sample_masks(exp_dir)\nfiles=['0004_mask.png', '0001_mask.png']",
"_____no_output_____"
],
[
"t = GTEstimator(exp_dir=exp_dir);",
"_____no_output_____"
],
[
"#t.show_data(files=files);",
"_____no_output_____"
],
[
"t.gt_estimation()\nt.show_gt(files=files)",
"_____no_output_____"
],
[
"t.gt_estimation(method='majority_voting', save_dir='mv_test')\nt.show_gt(method='majority_voting', max_n=2)",
"_____no_output_____"
]
],
[
[
"## Export -",
"_____no_output_____"
]
],
[
[
"#hide\nfrom nbdev.export import *\nnotebook2script()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9c2f9612bbfb1b49d32bd742e4d3463e4ef4ce | 335,442 | ipynb | Jupyter Notebook | mhw_pipeline/gen_mhw_all.ipynb | HuckleyLab/phyto-mhw | 8e067c73310fb4a4520d5a72f68717030ce90e14 | [
"MIT"
] | 2 | 2020-10-13T02:37:26.000Z | 2021-04-27T04:41:09.000Z | mhw_pipeline/gen_mhw_all.ipynb | HuckleyLab/phyto-mhw | 8e067c73310fb4a4520d5a72f68717030ce90e14 | [
"MIT"
] | 8 | 2020-07-19T10:54:37.000Z | 2021-10-17T19:53:09.000Z | mhw_pipeline/gen_mhw_all.ipynb | HuckleyLab/phyto-mhw | 8e067c73310fb4a4520d5a72f68717030ce90e14 | [
"MIT"
] | null | null | null | 95.867962 | 67,984 | 0.664616 | [
[
[
"!pip install -U xarray",
"Requirement already up-to-date: xarray in /home/ec2-user/miniconda3/envs/notebook/lib/python3.7/site-packages (0.15.1)\nRequirement already satisfied, skipping upgrade: pandas>=0.25 in /home/ec2-user/miniconda3/envs/notebook/lib/python3.7/site-packages (from xarray) (0.25.3)\nRequirement already satisfied, skipping upgrade: setuptools>=41.2 in /home/ec2-user/miniconda3/envs/notebook/lib/python3.7/site-packages (from xarray) (46.1.1.post20200322)\nRequirement already satisfied, skipping upgrade: numpy>=1.15 in /home/ec2-user/miniconda3/envs/notebook/lib/python3.7/site-packages (from xarray) (1.17.3)\nRequirement already satisfied, skipping upgrade: python-dateutil>=2.6.1 in /home/ec2-user/miniconda3/envs/notebook/lib/python3.7/site-packages (from pandas>=0.25->xarray) (2.7.5)\nRequirement already satisfied, skipping upgrade: pytz>=2017.2 in /home/ec2-user/miniconda3/envs/notebook/lib/python3.7/site-packages (from pandas>=0.25->xarray) (2019.3)\nRequirement already satisfied, skipping upgrade: six>=1.5 in /home/ec2-user/miniconda3/envs/notebook/lib/python3.7/site-packages (from python-dateutil>=2.6.1->pandas>=0.25->xarray) (1.14.0)\n"
],
[
"import os\nimport sys\n\nsys.path.append(os.path.expanduser(\"~/marineHeatWaves\"))\nimport marineHeatWaves as mh\n\nimport xarray as xr\nimport numpy as np\nimport zarr\n\nfrom dask.distributed import Client\n\nfrom datetime import date, datetime\n\nfrom functools import partial\n\nimport gcsfs\nimport s3fs \nimport boto3\n\nimport cartopy.crs as ccrs\nimport cartopy.feature as cf\n\nfrom pandas import Timestamp\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom multiprocessing import Pool\n\nfrom numba import float64, guvectorize, jit\nimport numba\n\nGCP_PROJECT_ID = '170771369993'\nOISST_GCP = 'oisst/oisst.zarr'",
"_____no_output_____"
],
[
"# from dask_kubernetes import KubeCluster\n# cluster = KubeCluster()\n# cluster.adapt(minimum=0, maximum=10) ",
"_____no_output_____"
],
[
"client = Client()",
"_____no_output_____"
],
[
"client.cluster.scale(8)",
"_____no_output_____"
],
[
"xr.__version__ ## >=0.15.1 REQUIRED for vectorization bugfix",
"_____no_output_____"
],
[
"# client = Client(cluster.scheduler_address)",
"_____no_output_____"
]
],
[
[
"# Alternative MHW processing ",
"_____no_output_____"
]
],
[
[
"fs = gcsfs.GCSFileSystem(project=GCP_PROJECT_ID, token=\"../gc-pangeo.json\")\noisst = xr.open_zarr(fs.get_mapper(OISST_GCP))\noisst = oisst.assign_coords(lon=(((oisst.lon + 180) % 360) - 180)).sortby('lon')",
"_____no_output_____"
],
[
"oisst",
"_____no_output_____"
],
[
"# s\nPNW_LAT = slice(30, 60)\nPNW_LON = slice(-155.9, -120.9)\noisst_pnw = oisst.sel(lat = PNW_LAT, lon = PNW_LON).persist()\n",
"_____no_output_____"
],
[
"oisst_pnw.sst.data",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(3,3), dpi=150)\nax = plt.axes(projection=ccrs.Mercator.GOOGLE)\noisst_pnw.sst.max('time').plot(ax=ax, transform=ccrs.PlateCarree())\nax.add_feature(cf.NaturalEarthFeature('physical', 'coastline', '50m', facecolor='none', edgecolor='black'))\nplt.title(\"Data Region\", loc='left')",
"_____no_output_____"
],
[
"oisst_small = oisst_pnw.isel(lat=slice(0,3), lon=slice(0,3))",
"_____no_output_____"
]
],
[
[
"## generate function to turn mhw output into multi xarrays",
"_____no_output_____"
]
],
[
[
"SAVED_PARAMS = [\n 'intensity_max',\n 'intensity_cumulative',\n 'intensity_var',\n 'intensity_mean',\n 'rate_onset',\n 'rate_decline',\n 'index_start', \n 'index_end',\n 'index_peak',\n 'duration'\n]",
"_____no_output_____"
],
[
"dim_idx_mapping = {\n **{\n i : SAVED_PARAMS[i]\n for i in range(len(SAVED_PARAMS))\n }, \n **{\n len(SAVED_PARAMS): 'mhw',\n len(SAVED_PARAMS) + 1 : 'clim_thresh',\n len(SAVED_PARAMS) + 2 : 'clim_seas'\n }\n}",
"_____no_output_____"
],
[
"dim_idx_mapping",
"_____no_output_____"
],
[
"def cp_guvectorize(*args, **kwargs):\n \"\"\"Same as :func:`numba.guvectorize`, but can be used to decorate dynamically\n defined function and then pickle them with\n `cloudpickle <https://pypi.org/project/cloudpickle/>`_.\n On the other hand, it can't be called from another jit-compiled function.\n \"\"\"\n\n def decorator(func):\n return _PickleableGUVectorized(func, args, kwargs)\n\n return decorator\n\n\nclass _PickleableGUVectorized:\n def __init__(self, func, guvectorize_args, guvectorize_kwargs):\n self.args = func, guvectorize_args, guvectorize_kwargs\n decorator = numba.guvectorize(*guvectorize_args, **guvectorize_kwargs)\n self.ufunc = decorator(func)\n\n def __reduce__(self):\n return _PickleableGUVectorized, self.args\n\n def __call__(self, *args, **kwargs):\n return self.ufunc(*args, **kwargs)",
"_____no_output_____"
],
[
"# @cp_guvectorize(\"(float64[:], float64[:], float64[:])\", \"(n), (n) -> (n)\")\ndef mhw_1d(temps, time):\n SAVED_PARAMS_loc = SAVED_PARAMS.copy()\n if(np.isnan(temps).any()): return np.zeros((len(SAVED_PARAMS_loc) + 3, time.shape[0]))\n\n ordinals = np.array([Timestamp(t).toordinal() for t in time])\n dets = mh.detect(ordinals, temps.copy())\n events = dets[0]['n_events']\n del dets[0]['n_events']\n \n arrays = [\n np.zeros_like(time, dtype='float64')\n for _ in range(len(SAVED_PARAMS_loc)) \n ]\n arrays.append(np.zeros_like(time, dtype='int'))\n \n for event_i in range(events):\n start_date = dets[0]['index_start'][event_i]\n end_date = dets[0]['index_end'][event_i]\n \n # set binary param\n arrays[-1][start_date:end_date] = event_i\n # set all params\n for _i, param in enumerate(SAVED_PARAMS_loc):\n# print(f'saving param {param}')\n param_data = dets[0][param][event_i]\n arrays[_i][start_date:end_date] = param_data\n \n \n clim_thresh = dets[1]['thresh'] \n clim_seas = dets[1]['seas']\n \n return np.array(\n arrays + [clim_thresh, clim_seas]\n )\n",
"_____no_output_____"
],
[
"computeunits = []\nchunks = 4\nchunk_lons = np.split(oisst_pnw.lon.values, chunks)\nfor subchunk in range(chunks):\n start = chunk_lons[subchunk][0]\n end = chunk_lons[subchunk][-1]\n lon_slice = slice(start, end)\n print(lon_slice)\n a = xr.apply_ufunc(\n mhw_1d, \n oisst_pnw.sel(lon=lon_slice).sst.chunk({'lat': 5, 'lon': 5, 'time': -1}), \n oisst_pnw.sel(lon=lon_slice).time, \n input_core_dims = [['time'], ['time']],\n output_core_dims=[[\"param\",\"time\"]],\n output_dtypes=['float64'],\n dask='parallelized', \n output_sizes={\"param\": len(SAVED_PARAMS) + 3}, # + 3 for binary MHW detection parameter and climatology\n vectorize=True\n )\n\n\n computeunits.append(a)",
"slice(-155.875, -147.375, None)\nslice(-147.125, -138.625, None)\nslice(-138.375, -129.875, None)\nslice(-129.625, -121.125, None)\n"
],
[
"for u in computeunits:\n minlon, maxlon = u.lon.min().values, u.lon.max().values\n print(minlon, maxlon)\n ans = u.compute().to_dataset(dim='param').rename_vars(dim_idx_mapping)\n ans.to_netcdf(f\"/tmp/mhws/mhws_chunk_{minlon}_{maxlon}.nc\")\n del ans\n del u \n \n",
"-155.875 -147.375\n-147.125 -138.625\n-138.375 -129.875\n-129.625 -121.125\n"
],
[
"region = ans.isel(lat=25, lon=56)",
"_____no_output_____"
],
[
"mhw_times = ans.isel(lat=25, lon=56).where(ans.isel(lat=0, lon=0).mhw, drop=True).time\n",
"_____no_output_____"
],
[
"# region.clim.plot()\n# plt.fill_between(mhw_times.values, region.sel(time=mhw_times.values).clim.values, oisst_small.isel(lat=0, lon=0).sel(time=mhw_times.values).sst.values)\n# ans.mhw.isel(lat=0, lon=0).plot()\n# oisst_pnw.isel(lat=25, lon=56).sst.plot()\nregion.intensity_cumulative.plot()\nplt.xlim(['2014-01-01', '2015-12-31'])",
"_____no_output_____"
],
[
"ans.sel(time=slice('2014-01-01', '2015-12-31')).intensity_cumulative.max(dim='time').plot.contourf()",
"_____no_output_____"
]
],
[
[
"## Upload as zarr to AWS S3 bucket",
"_____no_output_____"
]
],
[
[
"fs = s3fs.S3FileSystem()\ns3_store = s3fs.S3Map(\"s3://mhw-stress/new_with_climatology/\", s3=fs, check=False)\ncompressor = zarr.Blosc(cname='zstd', clevel=3)\n",
"_____no_output_____"
],
[
"chunkfiles = [os.path.join('/tmp/mhws', f) for f in os.listdir('/tmp/mhws/')]\na = xr.open_mfdataset(chunkfiles, combine='by_coords').chunk({\n 'lat' : 10, 'lon': 10\n})\nencoding = {vname: {'compressor': compressor} for vname in a.data_vars}",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"a.to_zarr(s3_store, encoding=encoding, consolidated=True)",
"_____no_output_____"
],
[
"opened = xr.open_zarr(s3_store)",
"_____no_output_____"
],
[
"opened.clim_seas.isel(lat=0, lon=0, time=slice(2000, 3000)).plot()\nopened.clim_thresh.isel(lat=0, lon=0, time=slice(2000, 3000)).plot()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9c436ba01dcc9eb7ccfdfee867a708cd545e69 | 18,643 | ipynb | Jupyter Notebook | docs/notebooks/na_if.ipynb | sthagen/pwwang-datar | ea695313c3824627a8e5d58537d2b8c0ee1e7d28 | [
"MIT"
] | null | null | null | docs/notebooks/na_if.ipynb | sthagen/pwwang-datar | ea695313c3824627a8e5d58537d2b8c0ee1e7d28 | [
"MIT"
] | null | null | null | docs/notebooks/na_if.ipynb | sthagen/pwwang-datar | ea695313c3824627a8e5d58537d2b8c0ee1e7d28 | [
"MIT"
] | null | null | null | 29.924559 | 214 | 0.380196 | [
[
[
"# https://dplyr.tidyverse.org/reference/na_if.html\n%run nb_helpers.py\n\nfrom datar.datasets import starwars\nfrom datar.all import *\n\nnb_header(na_if)",
"_____no_output_____"
],
[
"na_if(range(5), list(range(4,-1,-1)))",
"_____no_output_____"
],
[
"x = tibble(x=[1, -1, 0, 10]).x\n100 / x",
"_____no_output_____"
],
[
"na_if(x, 0)",
"_____no_output_____"
],
[
"y = tibble(y=[\"abc\", \"def\", \"\", \"ghi\"]).y\nna_if(y, \"\")",
"_____no_output_____"
],
[
"starwars >> \\\n select(f.name, f.eye_color) >> \\\n mutate(eye_color = na_if(f.eye_color, \"unknown\"))",
"_____no_output_____"
],
[
"starwars >> \\\n mutate(across(where(is_character), lambda x: na_if(x, \"unknown\")))",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9c47865076df8355a7381cea1d109c75f81da6 | 24,920 | ipynb | Jupyter Notebook | Option_Calls_as_Objects_BSM.ipynb | alfsn/derivatives | fc60ca981a4800269050856cd9c9c46e3a39638b | [
"MIT"
] | null | null | null | Option_Calls_as_Objects_BSM.ipynb | alfsn/derivatives | fc60ca981a4800269050856cd9c9c46e3a39638b | [
"MIT"
] | null | null | null | Option_Calls_as_Objects_BSM.ipynb | alfsn/derivatives | fc60ca981a4800269050856cd9c9c46e3a39638b | [
"MIT"
] | null | null | null | 32.703412 | 209 | 0.47183 | [
[
[
"# Valuación de opciones con Black-Scholes\n\nEste trabajo utiliza fórmulas para determinar:\n\n* el precio de una opción dada su volatilidad.\n* la volatilidad implícita en el precio de una opción.\n* las variables \"griegas\" de una opción: su delta, gamma, theta, rho y vega.\n\nEstas luego se aplican a una implementación basada en objetos.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport pandas_datareader as web\nimport datetime as dt\nimport os\nimport scipy.stats as scs",
"_____no_output_____"
]
],
[
[
"Estos son los paquetes a utilizar. \nEn caso de no poseer alguno de ellos, estos pueden ser instalados por medio del comando \n! pip install {nombre_del_paquete}",
"_____no_output_____"
]
],
[
[
"cd = 'C:\\\\Users\\\\Alfred\\\\PycharmProjects\\\\Clases_Programacion\\\\Programacion_Finanzas\\\\options_dfs\\\\'",
"_____no_output_____"
]
],
[
[
"## Solución al problema de valuación de un Call de Black y Scholes",
"_____no_output_____"
],
[
"${\\Large\\ C(S,t) = S.N(d1)-K.e\\,^{( -rf\\,.\\,TTM)}.N(d2)}$ \n \n \ndonde \n${\\Large\\ d1 = \\frac {ln(\\,^{S}/_{K}\\,)+ (rf+^{\\,\\sigma^2}/_{2}\\,)*TTM}{\\sigma \\sqrt {TTM}} -\\frac{\\sigma*\\sqrt{TTM}}{2}}$ \n \n${\\Large\\ d2 = d1 - \\sigma \\sqrt{TTM}}$\n\nSiendo \n \n$ C =$ valor de un call \n$ S =$ precio del subyacente \n$ K = $ precio de ejercicio (strike) \n$ TTM = $ tiempo hasta el vencimiento (en años) \n$ \\sigma = $ volatilidad (anual) \n$ rf = $ tasa libre de riesgo (anual)\n \n$ N() = $ función de distribución normal",
"_____no_output_____"
]
],
[
[
"def BSM_call_price(p_last:float, strike:float, TTM:float, rf:float, vol:float):\n d1 = (np.log(p_last / strike) + (rf + 1/2 * vol ** 2) * TTM) / (vol * np.sqrt(TTM))\n d2 = (np.log(p_last / strike) + (rf - 1/2 * vol ** 2) * TTM) / (vol * np.sqrt(TTM))\n \n call_price = (p_last * scs.norm.cdf(d1, 0, 1) - strike * np.exp(-rf * TTM) * scs.norm.cdf(d2, 0, 1))\n \n return call_price, d1, d2",
"_____no_output_____"
]
],
[
[
"Esta es la función que toma como inputs las variables de la acción y calcula su valor teórico según Black Scholes.",
"_____no_output_____"
]
],
[
[
"def BSM_imp_vol(call_price:float, p_last:float, strike:float, TTM:float, rf:float):\n iv = 0.4\n iters = 1\n \n while iters < 1000:\n result, d1, d2 = BSM_call_price(p_last, strike, TTM, rf, iv)\n \n if result > call_price:\n iv = iv-0.001\n iters = iters+1\n \n elif result < call_price:\n iv = iv+0.001\n iters = iters+1\n \n else:\n break\n \n if (call_price - result > call_price*0.05):\n raise Warning\n \n return iv",
"_____no_output_____"
]
],
[
[
"Esta función es la que calcula la volatilidad implícita de una opción, tomando el precio como dado.",
"_____no_output_____"
]
],
[
[
"def call_delta(d1):\n return scs.norm.cdf(d1, 0, 1)",
"_____no_output_____"
],
[
"def call_gamma(p_last, TTM, vol, d1):\n return 1/np.sqrt(2*np.pi*np.exp((-(d1)**2)/2))/(p_last*vol*TTM)",
"_____no_output_____"
],
[
"def call_theta(p_last, strike, TTM, rf, vol, d1, d2):\n return (vol * p_last * (1/np.sqrt(2*np.pi)*np.exp((-(d1)**2)/2))/(2*np.sqrt(TTM)) - rf*strike*np.exp(-rf*TTM)*scs.norm.cdf(d2, 0, 1))/100",
"_____no_output_____"
],
[
"def call_vega(p_last: float, TTM, d1):\n return (p_last * (1/np.sqrt(2*np.pi)*np.exp((-(d1)**2)/2)) * np.sqrt(TTM))/100",
"_____no_output_____"
],
[
"def call_rho(strike, TTM, rf, d2):\n return TTM * strike * np.exp(-rf*TTM)* scs.norm.cdf(d2, 0, 1)",
"_____no_output_____"
],
[
"class Call:\n def __init__(self, ticker:str, strike:float, opex, rf:float, call_price=None, vol=None):\n self.underlying = ticker\n self.strike = strike\n self.opex = dt.datetime.strptime(opex, '%d/%m/%Y').date()\n self.rf = rf\n self.TTM = (self.opex - dt.date.today()).days/365.25\n \n # Descarga de datos históricos\n if not os.path.exists(cd+ticker+'.csv'):\n df = web.DataReader(ticker, 'yahoo', \n start= (dt.date.today()-dt.timedelta(366)), \n end=(dt.date.today()-dt.timedelta(1)))\n df.to_csv(cd+ticker+'.csv')\n self.p_underlying = df['Adj Close']\n \n else:\n df = pd.read_csv(cd+ticker+'.csv')\n self.p_underlying = df['Adj Close']\n \n \n self.returns = self.p_underlying/self.p_underlying.shift(1)\n self.p_last = self.p_underlying.iloc[-1]\n \n \n if (call_price == None) and (vol != None):\n # calculo de precio teórico según volatilidad dada\n self.vol = vol\n self.call_price, self.d1, self.d2 = BSM_call_price(self.p_last, self.strike, self.TTM, self.rf, self.vol)\n self.method = 'Black Scholes Fair Value from input vol'\n\n elif (call_price==None) and (vol==None):\n # calculo de precio teórico según volatilidad histórica\n self.vol = ((self.returns.std()* np.sqrt(252))**2) \n self.call_price, self.d1, self.d2 = BSM_call_price(self.p_last, self.strike, self.TTM, self.rf, self.vol)\n self.method = 'Black Scholes Fair Value from 1 year vol'\n \n elif (call_price != None):\n # calculo de volatilidad implícita según precio dado\n self.call_price = call_price\n self.vol = BSM_imp_vol(self.call_price, self.p_last, self.strike, self.TTM, self.rf)\n self.method = 'Implied volatility from price'\n # Llamamos nuevamente a la función inversa que hicimos para obtener los valores de d1 y d2 que vamos a utilizar en el cálculo de las griegas\n _, self.d1, self.d2 = BSM_call_price(self.p_last, self.strike, self.TTM, self.rf, self.vol)\n\n else:\n raise AttributeError\n \n \n def calculate_greeks(self):\n self.delta = call_delta(self.d1)\n self.gamma = call_gamma(self.p_last, self.TTM, self.vol, self.d1)\n self.theta = call_theta(self.p_last, self.strike, self.TTM, self.rf, self.vol, self.d1, self.d2) \n self.vega = call_vega(self.p_last, self.TTM, self.d1)\n self.rho = call_rho(self.strike, self.TTM, self.rf, self.d2)\n \n def attributes(self):\n aux_df = pd.DataFrame(index=['Ticker','Last Underlying price','Strike price','Expiry date',\n 'Time to maturity', 'Risk free rate used', \n 'Call price', 'Volatility', 'Pricing method'])\n \n aux_df['attribute'] = [self.underlying, self.p_last, self.strike, self.opex,\n self.TTM, self.rf, self.call_price, self.vol, self.method]\n \n try:\n greek_df = pd.DataFrame(index=['Delta', 'Gamma', 'Theta', 'Vega', 'Rho'])\n greek_df['attribute']=[self.delta, self.gamma, self.theta, self.vega, self.rho]\n aux_df = aux_df.append(greek_df)\n \n return aux_df.transpose()\n \n \n except:\n return aux_df.transpose()",
"_____no_output_____"
]
],
[
[
"Esta es la clase del Objeto Call que vamos a utilizar. \nEl objetivo es que este objeto contenga los datos fijos del contrato: identificación del subyacente, precio de ejercicio, fecha de vencimiento y la tasa de interés a utilizar. \n \n *** \n ",
"_____no_output_____"
],
[
"# Pricing y asignación de volatilidad",
"_____no_output_____"
],
[
"## Ejemplo 1: volatilidad conocida",
"_____no_output_____"
],
[
"Primero veremos el ejemplo típico teórico: tenemos una opción, conocemos su volatilidad esperada, y buscamos determinar el precio teórico segun el modelo. \nDeterminamos que esta será del orden del 50% anual",
"_____no_output_____"
]
],
[
[
"GFGC160_OC = Call('GGAL.BA', 160, '16/10/2020', 0.2, vol=0.5)",
"_____no_output_____"
]
],
[
[
"Buscamos conocer algunos atributos de la opción, como su tiempo hasta el vencimiento.",
"_____no_output_____"
]
],
[
[
"GFGC160_OC.TTM",
"_____no_output_____"
],
[
"GFGC160_OC.attributes()",
"_____no_output_____"
]
],
[
[
"Las funciones determinan el precio del call en approx $ \\$2.5$. \nVemos que el método de precio es determinado por la volatilidad introducida.\n \n *** \n ",
"_____no_output_____"
],
[
"## Ejemplo 2: Volatilidad histórica\n\nSi introducimos a la opción sin precio de opción ni volatilidad, el objeto estimará la volatilidad resultante del último año de operatoria, y utilizará este valor para el cálculo del precio de la opción.",
"_____no_output_____"
]
],
[
[
"GFGC160_OC = Call('GGAL.BA',160, '16/10/2020', 0.2)\n\nGFGC160_OC.attributes()",
"_____no_output_____"
]
],
[
[
"Nuevamente observamos que el método de valuación resalta la fuente como la vol. histórica. \nVemos así que la volatilidad del último año es mayor a nuestro supuesto de 50% anual, lo que resulta en una valuación superior a la opción (approx el doble). \nDado que el único cambio en estas valuaciones corresponde a la volatilidad implícita (50% vs. 62%) podemos hacer una aproximación lineal por medio del vega ($ \\nu $) de la opción.",
"_____no_output_____"
]
],
[
[
"GFGC160_OC.calculate_greeks()\n\nGFGC160_OC.attributes()",
"_____no_output_____"
]
],
[
[
"El vega mide el cambio en el precio de la opción respecto al cambio de 1 punto de volatilidad. \n\nPor lo tanto, una aproximación lineal sería de $(62\\%-50\\%)*0.1528 = 1.834$ de diferencia entre una opcion y la otra. \n\nEn cuanto a los calculos precisos, la diferencia entre uno y otro es de $4.1676-2.4854 = 1.6822$.\n\nResulta una aproximación razonable, dentro de un 10% del valor real. \n\nLa diferencia se debe a que el valor de vega, va cambiando con el mismo. Es decir $$\\Large\\ \\nu = \\frac{\\partial C}{\\partial \\sigma}$$ \n\nPero a su vez, este valor es variable con respecto al precio del subyacente, es decir, para una aproximación de segundo orden deberemos calcular el $vanna$ de la opción: \n$$ \\large\\ \\frac{\\partial \\nu}{\\partial S}= \\frac{\\partial^2 C}{\\partial S \\, \\partial \\sigma}$$ \n \n ***",
"_____no_output_____"
],
[
"## Ejemplo 3: Volatilidad implícita\n\nCuando introducimos un precio de opción sin incluir una volatilidad, el objeto interpreta que buscamos la volatilidad implícita. Es decir, este es el ejemplo converso al Ejemplo 1.",
"_____no_output_____"
]
],
[
[
"GFGC160_OC = Call('GGAL.BA', 160, '16/10/2020', 0.2, call_price=2.48542)\n\nGFGC160_OC.calculate_greeks()\n\nGFGC160_OC.attributes()",
"_____no_output_____"
]
],
[
[
"Podemos ver por este ejemplo que introduciendo como precio al valor obtenido del precio de la opción en el ejemplo 1, obtenemos la volatilidad esperada: 50%. \nPor lo tanto, vemos un ejemplo de cómo podemos reversar la operación de precio del modelo Black-Scholes.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ec9c478b15d11e9b8449830b7254b8dbd074b7af | 21,100 | ipynb | Jupyter Notebook | experiments/.ipynb_checkpoints/AE_ADHD200_CV-checkpoint.ipynb | QinglinDong/nilearn-extenstion | 8eba7f29f1d5a788758514a639ed1c639041fe7d | [
"BSD-2-Clause"
] | null | null | null | experiments/.ipynb_checkpoints/AE_ADHD200_CV-checkpoint.ipynb | QinglinDong/nilearn-extenstion | 8eba7f29f1d5a788758514a639ed1c639041fe7d | [
"BSD-2-Clause"
] | 1 | 2018-10-21T15:10:41.000Z | 2018-10-21T15:10:41.000Z | experiments/.ipynb_checkpoints/AE_ADHD200_CV-checkpoint.ipynb | QinglinDong/nilearn-deep | 8eba7f29f1d5a788758514a639ed1c639041fe7d | [
"BSD-2-Clause"
] | 1 | 2020-01-02T22:44:36.000Z | 2020-01-02T22:44:36.000Z | 39.587242 | 1,161 | 0.538673 | [
[
[
"import numpy as np\n# fix random seed for reproducibility\nseed = 7\nnp.random.seed(seed)",
"_____no_output_____"
],
[
"from keras.layers import Input, Dense\nfrom keras.models import Model\nfrom keras import regularizers\n\ndef build_model(data):\n\n print(data.shape)\n # this is the size of our encoded representations\n encoding_dim = 32 # 32 floats -> compression of factor 24.5, assuming the input is 784 floats\n\n # this is our input placeholder\n original_dim=data.shape[1]\n input_img = Input(shape=(original_dim,))\n # \"encoded\" is the encoded representation of the input\n\n encoded = Dense(256, activation='tanh',\n activity_regularizer=regularizers.l1(2*10e-5))(input_img)\n encoded = Dense(128, activation='tanh',\n activity_regularizer=regularizers.l1(2*10e-5))(encoded)\n encoded = Dense(32, activation='tanh',\n activity_regularizer=regularizers.l1(2*10e-5))(encoded)\n\n decoded = Dense(128, activation='tanh')(encoded)\n decoded = Dense(256, activation='tanh')(decoded)\n decoded = Dense(original_dim, activation='tanh')(decoded)\n\n # this model maps an input to its reconstruction\n autoencoder = Model(input_img, decoded)\n\n # this model maps an input to its encoded representation\n encoder = Model(input_img, encoded)\n autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')\n\n autoencoder.fit(data, data,\n epochs=10,\n batch_size=100,\n shuffle=True)\n return encoder",
"_____no_output_____"
],
[
"from nilearn.decomposition import CanICA\ndef prepare_data(func_filenames):\n canica = CanICA(memory=\"nilearn_cache\", memory_level=2,\n threshold=3., verbose=10, random_state=0, \n mask='/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/ADHD200_mask_152_4mm.nii.gz')\n data=canica.prepare_data(func_filenames)\n return data",
"_____no_output_____"
],
[
"from nilearn.connectome import ConnectivityMeasure\n\ndef corr(all_time_series):\n connectivity_biomarkers = {}\n conn_measure = ConnectivityMeasure(kind='correlation', vectorize=True)\n connectivity_biomarkers = conn_measure.fit_transform(all_time_series)\n return connectivity_biomarkers",
"_____no_output_____"
],
[
"from sklearn.neural_network import MLPClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.gaussian_process import GaussianProcessClassifier\nfrom sklearn.gaussian_process.kernels import RBF\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis\n\ndef classify(train_X,train_Y, test_X, test_Y):\n names = [\"Nearest Neighbors\", \"Linear SVM\", \"RBF SVM\", \"Gaussian Process\",\n \"Decision Tree\", \"Random Forest\", \"Neural Net\", \"AdaBoost\",\n \"Naive Bayes\", \"QDA\"]\n\n classifiers = [\n KNeighborsClassifier(3),\n SVC(kernel=\"linear\", C=0.025),\n SVC(gamma=2, C=1),\n GaussianProcessClassifier(1.0 * RBF(1.0)),\n DecisionTreeClassifier(max_depth=5),\n RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),\n MLPClassifier(alpha=1),\n AdaBoostClassifier(),\n GaussianNB(),\n QuadraticDiscriminantAnalysis()]\n\n scores = []\n \n for name, clf in zip(names, classifiers):\n clf.fit(train_X, train_Y) \n score=clf.score(test_X,test_Y)\n scores.append(score)\n return scores\n\n\n\n ",
"_____no_output_____"
],
[
"from sklearn.model_selection import StratifiedKFold\n\ndef CV(X,Y):\n \n # define 10-fold cross validation test harness\n kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)\n cvscores = []\n \n # trick. split(Y,Y) instead of split(X,Y) due to X is already concatnated \n for train,test in kfold.split(Y,Y): \n #indexing of specific subjects\n print(test)\n train_X = [X[i*20:i*20+20] for i in train] \n\n train_Y = [Y[i] for i in train]\n test_Y = [Y[i] for i in test]\n\n #concat all subjects\n model=build_model(np.vstack(train_X))\n\n print(\"Computing Correlation\")\n train_D = [model.predict(X[i*20:i*20+20]) for i in train]\n test_D = [model.predict(X[i*20:i*20+20]) for i in test]\n\n #Release GPU memory after model is used\n from keras import backend as K\n K.clear_session()\n \n train_FC=corr(train_D)\n test_FC=corr(test_D)\n\n score=classify(train_FC,train_Y, test_FC, test_Y)\n cvscores.append(score)\n return cvscores",
"_____no_output_____"
],
[
"from nilearn import datasets\n\nimport os\nfunc_filenames=[]\nfor x in os.listdir('/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU'):\n file='/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/'+str(x)+'/sfnwmrda'+str(x)+'_session_1_rest_1.nii.gz'\n #print(file)\n if os.path.isfile(file):\n func_filenames.append(file) \nfunc_filenames=func_filenames\nprint(func_filenames)\n\nX = prepare_data(func_filenames) # list of 4D nifti files for each subject \n\nimport pandas as pd\ndata = pd.read_csv('/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/NYU_phenotypic.csv')\nY = data['DX'].values\n\nfor index, item in enumerate(Y):\n if item>1:\n Y[index] = 1\n\n\n\n",
"['/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/0010084/sfnwmrda0010084_session_1_rest_1.nii.gz', '/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/1737393/sfnwmrda1737393_session_1_rest_1.nii.gz', '/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/0010072/sfnwmrda0010072_session_1_rest_1.nii.gz', '/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/4084645/sfnwmrda4084645_session_1_rest_1.nii.gz', '/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/1992284/sfnwmrda1992284_session_1_rest_1.nii.gz', '/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/4060823/sfnwmrda4060823_session_1_rest_1.nii.gz', '/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/9750701/sfnwmrda9750701_session_1_rest_1.nii.gz', '/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/4154672/sfnwmrda4154672_session_1_rest_1.nii.gz', '/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/0010021/sfnwmrda0010021_session_1_rest_1.nii.gz', '/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/3679455/sfnwmrda3679455_session_1_rest_1.nii.gz']\n[MultiNiftiMasker.fit] Loading data from None\n[MultiNiftiMasker.transform] Resampling mask\n[CanICA] Loading data\n/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/0010084/sfnwmrda0010084_session_1_rest_1.nii.gz\n/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/1737393/sfnwmrda1737393_session_1_rest_1.nii.gz\n/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/0010072/sfnwmrda0010072_session_1_rest_1.nii.gz\n/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/4084645/sfnwmrda4084645_session_1_rest_1.nii.gz\n/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/1992284/sfnwmrda1992284_session_1_rest_1.nii.gz\n/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/4060823/sfnwmrda4060823_session_1_rest_1.nii.gz\n/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/9750701/sfnwmrda9750701_session_1_rest_1.nii.gz\n/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/4154672/sfnwmrda4154672_session_1_rest_1.nii.gz\n/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/0010021/sfnwmrda0010021_session_1_rest_1.nii.gz\n/home/share/TmpData/Qinglin/ADHD200_Athena_preproc_flirtfix/NYU/3679455/sfnwmrda3679455_session_1_rest_1.nii.gz\n"
],
[
"print(X.shape)",
"(200, 28546)\n"
],
[
"cvscores=CV(X,Y)",
"[0 7 9]\n(140, 28546)\n"
],
[
"cvscores",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9c50bd2530084567bcaa1a22b1963e74a30645 | 11,731 | ipynb | Jupyter Notebook | 3_Schleifen.ipynb | IMC-UAS-Krems/Computational-Thinking | 41a77020b0c2fc73537db68397896bc9fc02ba40 | [
"CC0-1.0"
] | null | null | null | 3_Schleifen.ipynb | IMC-UAS-Krems/Computational-Thinking | 41a77020b0c2fc73537db68397896bc9fc02ba40 | [
"CC0-1.0"
] | null | null | null | 3_Schleifen.ipynb | IMC-UAS-Krems/Computational-Thinking | 41a77020b0c2fc73537db68397896bc9fc02ba40 | [
"CC0-1.0"
] | null | null | null | 36.095385 | 487 | 0.568323 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ec9c56dc6aedf49aac587a492f045fcc7649c309 | 119,861 | ipynb | Jupyter Notebook | scripts/meteorologia/Introduction_to_Using_Python_in_Atmospheric_Ocean_Sciences.ipynb | PhilipeRLeal/xarray_case_studies | b7771fefde658f0d450cbddd94637ce7936c5f52 | [
"MIT"
] | 1 | 2022-02-22T01:07:31.000Z | 2022-02-22T01:07:31.000Z | scripts/meteorologia/Introduction_to_Using_Python_in_Atmospheric_Ocean_Sciences.ipynb | PhilipeRLeal/xarray_case_studies | b7771fefde658f0d450cbddd94637ce7936c5f52 | [
"MIT"
] | null | null | null | scripts/meteorologia/Introduction_to_Using_Python_in_Atmospheric_Ocean_Sciences.ipynb | PhilipeRLeal/xarray_case_studies | b7771fefde658f0d450cbddd94637ce7936c5f52 | [
"MIT"
] | null | null | null | 117.626104 | 88,536 | 0.850844 | [
[
[
"# Exercícios do Livro de introdução ao Python em Ciências atmosféricas e Oceânicas:\n\nRef: http://www.johnny-lin.com/pyintro/ed01/free_pdfs/ch04.pdf",
"_____no_output_____"
]
],
[
[
"import numpy as np\n",
"_____no_output_____"
],
[
"for i in np.arange(20):\n print(\"OI. Eu sou: \", i)",
"OI. Eu sou: 0\nOI. Eu sou: 1\nOI. Eu sou: 2\nOI. Eu sou: 3\nOI. Eu sou: 4\nOI. Eu sou: 5\nOI. Eu sou: 6\nOI. Eu sou: 7\nOI. Eu sou: 8\nOI. Eu sou: 9\nOI. Eu sou: 10\nOI. Eu sou: 11\nOI. Eu sou: 12\nOI. Eu sou: 13\nOI. Eu sou: 14\nOI. Eu sou: 15\nOI. Eu sou: 16\nOI. Eu sou: 17\nOI. Eu sou: 18\nOI. Eu sou: 19\n"
],
[
"A = np.arange(10)\n\nA = np.reshape(A, (5,2))\n\nprint(A)\n\nprint(\"Ravel: \", A.ravel())\n\nprint(\"Flatten: \", A.flatten())\n\nprint(\"Repeat: \", np.repeat(A, 2).reshape((5,4)))",
"[[0 1]\n [2 3]\n [4 5]\n [6 7]\n [8 9]]\nRavel: [0 1 2 3 4 5 6 7 8 9]\nFlatten: [0 1 2 3 4 5 6 7 8 9]\nRepeat: [[0 0 1 1]\n [2 2 3 3]\n [4 4 5 5]\n [6 6 7 7]\n [8 8 9 9]]\n"
],
[
"B = np.array([1,1,2,2,3,3,4,4,5,5]).reshape(5,2)\n\nMag = ((A**2) + (B**2))**0.5\n\nprint(\"Mag: \\n\" , Mag)\n\nDistance = np.linalg.norm(A-B)\n\nprint(\"Distance: \\n\", Distance)\nprint(\"\")",
"Mag: \n [[ 1. 1.41421356]\n [ 2.82842712 3.60555128]\n [ 5. 5.83095189]\n [ 7.21110255 8.06225775]\n [ 9.43398113 10.29563014]]\nDistance: \n 6.708203932499369\n\n"
]
],
[
[
"### Exercise 14 (Calculate potential temperature from arrays of T and p):\n\nWrite a function that takes a 2-D array of pressures (p, in hPa) and a\n2-D array of temperatures (T, in K) and returns the corresponding potential\ntemperature, assuming a reference pressure (p0) of 1000 hPa. Thus, the function’s\nreturn value is an array of the same shape and type as the input arrays.\nRecall that potential temperature θ is given by:\n\nθ = T * (p0/p)**κ\n\nwhere κ is the ratio of the gas constant of dry air to the specific heat of dry\nair at constant pressure and equals approximately 0.286.",
"_____no_output_____"
]
],
[
[
"# solução:\n\ndef Potential_temperature(T, po=1000, kappa=0.286):\n\n Theta = T*(po/p)**kappa\n \n return Theta\n \n ",
"_____no_output_____"
]
],
[
[
"### Exercise 15 (Calculating wind speed from u and v):\n\nWrite a function that takes two 2-D arrays—an array of horizontal, zonal east-west) wind components (u, in m/s) and an array of horizontal, meridional (north-south) wind components (v, in m/s)—and returns a 2-D array of the magnitudes of the total wind, if the wind is over a minimum magnitude, and the minimum magnitude value otherwise. (We might presume that in\nthis particular domain only winds above some minimum constitute “good” data while those below the minimum are indistinguishable from the minimum due to noise or should be considered equal to the minimum in order to properly represent the effects of some quantity like friction.)\nThus, your input will be arrays u and v, as well as the minimum magnitude value. The function’s return value is an array of the same shape and type as the input arrays.\n",
"_____no_output_____"
]
],
[
[
"def Wind_Magnitudes(u, v, minmag=0.1):\n mag = ((u**2) + (v**2))**0.5\n \n output = np.where(mag > minmag, mag, minmag)\n \n return output",
"_____no_output_____"
]
],
[
[
"# Criando um netcdf:",
"_____no_output_____"
]
],
[
[
"from netCDF4 import *\nimport numpy as np\n\nfileobj = Dataset('new_NETCDF.nc', mode='w')\nlat = np.arange(10, dtype='f')\nlon = np.arange(20, dtype='f')\ndata1 = np.reshape(np.sin(np.arange(200, dtype='f')*0.1), (10,20))\n\ndata2 = 42.0\n\n# criando dimensoes do arquivo:\nfileobj.createDimension(\"lat\", len(lat))\nfileobj.createDimension(\"lon\", len(lon))\n\n# criando variáveis:\n # primeiro \"lat\" é o nome da variável a ser criada.\n # segundo \"lat\" é o nome da dimensão em que a var será atribuida.\nlat_var = fileobj.createVariable(\"lat\", \"f\", (\"lat\",)) \nlon_var = fileobj.createVariable(\"lon\", \"f\", (\"lon\",))\ndata1_var = fileobj.createVariable(\"data1\", \"f\", (\"lat\",\"lon\"))\ndata2_var = fileobj.createVariable(\"data2\", \"f\", ()) # como data2 é um scalar, a dimensão na tupla é vazia\n\n# atribuindo valores às variaveis do netcdf:\n\nlat_var[:] = lat[:]\nlon_var[:] = lon[:]\ndata1_var[:,:] = data1[:,:]\ndata1_var.units = \"kg\"\ndata2_var.assignValue(data2)\nfileobj.title = \"New netCDF file\"\ncrs = fileobj.createVariable('spatial_ref', 'i4', ())\ncrs.spatial_ref='GEOGCS[\"WGS 84\",DATUM[\"WGS_1984\",SPHEROID[\"WGS 84\",6378137,298.257223563,AUTHORITY[\"EPSG\",\"7030\"]],AUTHORITY[\"EPSG\",\"6326\"]],PRIMEM[\"Greenwich\",0,AUTHORITY[\"EPSG\",\"8901\"]],UNIT[\"degree\",0.0174532925199433,AUTHORITY[\"EPSG\",\"9122\"]],AUTHORITY[\"EPSG\",\"4326\"]]'\nfileobj.close()\n",
"_____no_output_____"
]
],
[
[
"# A “Real” AOS Project: Putting Together a Basic Data Analysis Routine\n\n## Defining your own class",
"_____no_output_____"
]
],
[
[
"class Book(object):\n def __init__(self, authorlast, authorfirst, title, place, publisher, year):\n self.authorlast = authorlast\n self.authorfirst = authorfirst\n self.title = title\n self.place = place\n self.publisher = publisher\n self.year = year\n\n def make_authoryear(self):\n self.authoryear = self.authorlast + '(' + self.year +')'\n\n def write_bib_entry(self):\n return self.authorlast + ', ' + self.authorfirst \\\n + ', ' + self.title + ', ' + self.place \\\n + ': ' + self.publisher + ', ' \\\n + self.year + '.'\n\n def __doc__(self):\n \"\"\"\n Classe de objetos criada pelo usuário como exemplo! \\n\n \n Metodos da classe: __init__ ; __doc__ ; write_bib_entry \\n\n \n # atributos (dados) da classe: \n self.authorlast, self.authorfirst, self.title, \\n \n self.place, self.publisher, self.year\n \n \"\"\"\n \n \nbeauty = Book( \"Dubay\", \"Thomas\" , \"The Evidential Power of Beauty\" ,\n \"San Francisco\" , \"Ignatius Press\", \"1999\" )\n\nbeauty.write_bib_entry()\n\npynut = Book( \"Martelli\", \"Alex\" , \"Python in a Nutshell\" ,\n \"Sebastopol, CA\" , \"O’Reilly Media, Inc.\", \"2003\" )",
"_____no_output_____"
]
],
[
[
"### Example 49 (Using instances of Book):\nConsider the Book definition given in Example 48. Here are some questions\nto test your understanding of what it does:\n1. How would you print out the author attribute of the pynut instance\n(at the interpreter, after running the file)?\n\n2. How would you change the publication year for the beauty book to\n\"2010\"?\n",
"_____no_output_____"
]
],
[
[
"\n# 1)\nprint(\"author attribute: \", pynut.authorfirst)\n\n# 2) \n\nbeauty.year = 2010",
"author attribute: Alex\n"
],
[
"# class article:\n\nclass Article(object):\n def __init__(self, authorlast, authorfirst, articletitle, journaltitle,\n volume, pages, year):\n self.authorlast = authorlast\n self.authorfirst = authorfirst\n self.articletitle = articletitle\n self.journaltitle = journaltitle\n self.volume = volume\n self.pages = pages\n self.year = year\n\n def make_authoryear(self):\n self.authoryear = self.authorlast + ' (' + self.year +')'\n\n def write_bib_entry(self):\n return self.authorlast + ', ' + self.authorfirst \\\n + ' (' + self.year + '): ' \\\n + '\"' + self.articletitle + ',\" ' \\\n + self.journaltitle + ', ' \\\n + self.volume + ', ' + self.pages + '.'",
"_____no_output_____"
]
],
[
[
"# Case study 1: The bibliography example\n\nwrite a Bibliography class that will manage a bibliography, given instances of Book and Article objects.\n\nNext, we write methods for Bibliography that can manipulate the list\nof Book and Article instances. To that end, the first two methods we\nwrite for Bibliography will do the following: initialize an instance of the\nclass; rearrange the list alphabetically based upon last name then first name.\nThe initialization method is called init (as always), and the rearranging\nmethod will be called sort entries alpha. Here is the code:",
"_____no_output_____"
]
],
[
[
"import operator\n\nclass Bibliography(object):\n def __init__(self, entrieslist):\n self.entrieslist = entrieslist # list of Book and Article instances that are being passed into an instance of the Bibliography class\n \n def sort_entries_alpha(self):\n tmp = sorted(self.entrieslist, key=operator.attrgetter(\"authorlast\",\"authorfirst\"))\n self.entrieslist = tmp\n del tmp\n \n def write_bibliog_alpha(self):\n self.sort_entries_alpha()\n output = ''\n for ientry in self.entrieslist:\n output = output + ientry.write_bib_entry() + '\\n\\n'\n return output[:-2] # as ultimas duas linhas de espaço são desprezadas.\n \n# criando instancias:\n\nbeauty = Book( \"Dubay\", \"Thomas\", \"The Evidential Power of Beauty\",\n \"San Francisco\", \"Ignatius Press\", \"1999\", )\npynut = Book( \"Martelli\", \"Alex\", \"Python in a Nutshell\",\n \"Sebastopol, CA\", \"O'Reilly Media, Inc.\", \"2003\" )\n\nnature = Article( \"Smith\", \"Jane\", \"My Nobel prize-winning paper\",\n \"Nature\", \"481\", \"234-236\", \"2012\" )\nscience = Article( \"Doe\", \"Samuel\", \"My almost Nobel prize-winning paper\",\n \"Science\", \"500\", \"36-38\", \"2011\" )\nnoname = Article( \"Doe\", \"John\", \"My no-one-has-heard-of paper\",\n \"J. Irreproducible Results\", \"49\", \"34-36\", \"2005\" )\n\n\n\nmybib = Bibliography([beauty, pynut, nature, science, noname])\n\nfor i in mybib.entrieslist:\n print ('Entries list before sort: \\n ', i.authorlast)\n \nmybib.sort_entries_alpha()\nprint(\"\\n\\n\\n\")\nfor i in mybib.entrieslist:\n print ('Entries list after sort: \\n ', i.authorlast)\n\nprint(\"\\n\\n\\n\") \n\n\nprint ('Write out bibliography: \\n', mybib.write_bibliog_alpha())",
"Entries list before sort: \n Dubay\nEntries list before sort: \n Martelli\nEntries list before sort: \n Smith\nEntries list before sort: \n Doe\nEntries list before sort: \n Doe\n\n\n\n\nEntries list after sort: \n Doe\nEntries list after sort: \n Doe\nEntries list after sort: \n Dubay\nEntries list after sort: \n Martelli\nEntries list after sort: \n Smith\n\n\n\n\nWrite out bibliography: \n Doe, John (2005): \"My no-one-has-heard-of paper,\" J. Irreproducible Results, 49, 34-36.\n\nDoe, Samuel (2011): \"My almost Nobel prize-winning paper,\" Science, 500, 36-38.\n\nDubay, Thomas, The Evidential Power of Beauty, San Francisco: Ignatius Press, 1999.\n\nMartelli, Alex, Python in a Nutshell, Sebastopol, CA: O'Reilly Media, Inc., 2003.\n\nSmith, Jane (2012): \"My Nobel prize-winning paper,\" Nature, 481, 234-236.\n"
]
],
[
[
"# Case study 2: Creating a class for geosciences work—Surface domain management\n\n## Exercise 24 (Defining a SurfaceDomain class):\n\nDefine a class SurfaceDomain that describes surface domain instances.\nThe domain is a land or ocean surface region whose spatial extent is described\nby a latitude-longitude grid. The class is instantiated when you provide\na vector of longitudes and latitudes; the surface domain is a regular\ngrid based on these vectors. Surface parameters (e.g., elevation, temperature,\nroughness, etc.) can then be given as instance attributes. The quantities\nare given on the domain grid.\n\nIn addition, in the class definition, provide an instantiation method that\nsaves the input longitude and latitude vectors as appropriately named attributes\nand creates 2-D arrays of the shape of the domain grid which have\nthe longitude and latitude values at each point and saves them as private attributes\n(i.e., their names begin with a single underscore).",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\nclass SurfaceDomain():\n def __init__(self, lon, lat):\n self.lon = np.array(lon)\n self.lat = np.array(lat)\n \n self.Llon, self.Llat = np.meshgrid(self.lon, self.lat)\n \n ",
"_____no_output_____"
]
],
[
[
"# como lidar com múltiplas bordas (limites espaciais):\n\npretend you have multiple SurfaceDomain instances that\nyou want to communicate to each other, where the bounds of one are taken\nfrom (or interpolated with) the bounds of another, e.g., calculations for each\ndomain instance are farmed out to a separate processor, and you’re stitching\ndomains together:\n\n## in procedural programming:\n\nIn procedural programming, to manage this set of overlapping domains, you might create a grand domain encompassing all points in all the domains to make an index that keeps track of which domains abut one another. The index records who contributes data to these boundary regions. Alternately, you might create a function that processes only the neighboring domains, but\nthis function will be called from a scope that has access to all the domains\n(e.g., via a common block).\n\n\n## in object oriented programming (OOP):\n\nIn order to manage this set of overlapping domains, you don’t really need\nsuch a global view nor access to all domains. In fact, a global index or a\ncommon block means that if you change your domain layout, you have to\nhand-code a change to your index/common block. Rather, what you actually\nneed is only to be able to interact with your neighbor. So why not just write a\nmethod that takes your neighboring SurfaceDomain instances as arguments and alters the boundaries accordingly? That is, why not add the following to the SurfaceDomain class definition:\n\n### resposta:\n\nSuch a method will propagate to all SurfaceDomain instances automatically,\nonce written in the class definition. Thus, you only have to write one\n(relatively) small piece of code that can then affect any number of layouts\nof SurfaceDomain instances. Again, object-oriented programming enables\nyou to push the level at which you code to solve a problem down to a lowerlevel\nthan procedural programming easily allows. As a result, you can write\nsmaller, better tested bit of code; this makes your code more robust and flexible.",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\nclass SurfaceDomain():\n def __init__ (self, lon, lat):\n self.lon = np.array(lon)\n self.lat = np.array(lat)\n \n self.Llon, self.Llat = np.meshgrid(self.lon, self.lat)\n\n def syncbounds(self, northobj, southobj, eastobj, westobj):\n XX_Bounds = np.where((self.Llon <= northobj) & (self.Llon >= southobj))\n YY_Bounds = np.where((self.Llat <= eastobjt) & (self.Llat >= westobj))\n \n return [XX_Bounds, YY_Bounds]\n ",
"_____no_output_____"
]
],
[
[
"# Chapter 8: \n\n## An Introduction to OOP Using Python: Part II—Application to Atmospheric Sciences Problems",
"_____no_output_____"
],
[
"### Masked Arrays - Booleans:\n\n#### Obs: Remember, bad values (i.e., the missing values) have mask values set to True in a masked array",
"_____no_output_____"
]
],
[
[
"print(np.arange(5))\n\nma = np.where(np.arange(5) >= 3)\n\nprint(ma)\n\nb = np.ma.masked_greater(np.arange(5), 2)\n\nprint(\"b data: \", b.data)\n\nprint(\"b mask: \", b.mask)\n\nma_C = b.data * np.where(b.mask == False, np.nan, b.mask)\n\nprint(ma_C)",
"[0 1 2 3 4]\n(array([3, 4], dtype=int64),)\nb data: [0 1 2 3 4]\nb mask: [False False False True True]\n[nan nan nan 3. 4.]\n"
],
[
"# Caso 2:\nimport numpy as np\na = np.ma.masked_array(data=[1,2,3], mask=[True, True, False], fill_value=10**5)\n\nprint(a)\n\n",
"[-- -- 3]\n"
],
[
"# caso 3:\n\na = np.ma.masked_greater([1,2,3,4], 3)\n\nprint(a)\ndata = np.array([1,2,3,4,5])\nb = np.ma.masked_where((data>2) & (data<5), data)\n\nprint(b.mask)",
"[1 2 3 --]\n[False False True True False]\n"
],
[
"# Notar como np.ma.filled é o inverso de (b.data * b.mask):\na = np.ma.masked_greater([1,2,3,4], 3)\nb = np.ma.masked_where((data>2) & (data<5), data)\n\n# multiplicando pelo mask retorna todos os valores que foram mascarados\n\nprint(b.data * b.mask)\n\n# usando np.ma.filled: retorna todos os valores que não foram mascarados\n\nprint( np.ma.filled(b) )\n\nprint( b.filled())\n\n",
"[0 0 3 4 0]\n[ 1 2 999999 999999 5]\n[ 1 2 999999 999999 5]\n"
]
],
[
[
"## Exercise using masked Arrays:\n\nOpen the example netCDF NCEP/NCAR Reanalysis 1 netCDF dataset\nof monthly mean surface/near-surface air temperature (or the netCDF dataset\nyou brought) and read in the values of the air, lat, and lon variables into\nNumPy arrays. Take only the first time slice of the air temperature data. \n\nCreate an array that masks out temperatures in that time slice in all locations\ngreater than 45◦N and less than 45◦S. \n\nConvert all those temperature\nvalues to K (the dataset temperatures are in ◦C). Some hints:\n ",
"_____no_output_____"
]
],
[
[
"# exercicios:\nimport numpy as np\nfrom netCDF4 import *\n\nAir_mon_mean_nc = Dataset(r'C:\\Doutorado\\Estudo_Python\\jwblin-course_files-cd5df00\\datasets\\air.mon.mean.nc')\n\nLat = Air_mon_mean_nc.variables['lat']\n\nLon = Air_mon_mean_nc.variables['lon']\n\nTime = Air_mon_mean_nc.variables['time']\n\nAir_temp = Air_mon_mean_nc.variables['air']\n\n",
"_____no_output_____"
],
[
"# 1) mask arrays:\n\n# bounds:\n\nLim_N = 45\n\nLim_S = -45\n\n\nLonall, Latall = np.meshgrid(Lon, Lat)\n\nAir_temp_ma = np.ma.masked_where((Latall[:]<Lim_S) | (Latall[:]>Lim_N) , Air_temp[0,:,:])\n\nprint( Air_temp_ma.filled() )\n\n\n\nprint ('North pole: ', Air_temp_ma[0,10])\nprint ('South pole: ', Air_temp_ma[-1,10])\nprint ('Equator: ', Air_temp_ma[36,10])",
"[[1.e+20 1.e+20 1.e+20 ... 1.e+20 1.e+20 1.e+20]\n [1.e+20 1.e+20 1.e+20 ... 1.e+20 1.e+20 1.e+20]\n [1.e+20 1.e+20 1.e+20 ... 1.e+20 1.e+20 1.e+20]\n ...\n [1.e+20 1.e+20 1.e+20 ... 1.e+20 1.e+20 1.e+20]\n [1.e+20 1.e+20 1.e+20 ... 1.e+20 1.e+20 1.e+20]\n [1.e+20 1.e+20 1.e+20 ... 1.e+20 1.e+20 1.e+20]]\nNorth pole: --\nSouth pole: --\nEquator: 22.829353\n"
],
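[
"# Added step (sketch): the exercise also asks for the temperatures in K.\n# The dataset values are in degC, so add 273.15; masked-array arithmetic keeps the mask.\nAir_temp_K = Air_temp_ma + 273.15\n\nprint('Equator (K): ', Air_temp_K[36, 10])",
"_____no_output_____"
],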
[
"import matplotlib.pyplot as plt\n\nT_time0 = Air_temp[0,:,:]\n\nmymapf = plt.contourf(Lonall, Latall, T_time0, 10, cmap=plt.cm.Reds)\nmymap = plt.contour(Lonall, Latall, T_time0, 10, colors=\"k\")\nplt.clabel(mymap, fontsize=12)\nplt.axis([0, 360, -90, 90])\nplt.xlabel(\"Longitude [\" + Lon.units + \"]\")\nplt.ylabel(\"Latitude [\" + Lat.units + \"]\")\nplt.colorbar(mymapf, orientation=\"horizontal\", label = Air_temp.units)\n\nplt.savefig(\"exercise-T-contour.png\")\nplt.show()\n",
"_____no_output_____"
]
],
[
[
"## secondary quantities (ex. virtual temperature, vorticity, etc.)\n\n 1) são derivados das ´quantities (variables) primárias (e.g. temperature, pressure, etc.)\n \n## Problemas de otimização de programação:\n\n 1) Como otimizar memória para realização de análise dinâmica: >>> uso de classes (instance)\n\n### We define an object class Atmosphere where the following occurs:\n\n• Atmospheric quantities are assigned to attributes of instances of the\nclass.\n\n• Methods to calculate atmospheric secondary quantities:\n– Check to make sure the required quantity exists as an attribute.\n– If it doesn’t exist, the method is executed to calculate that quantity.\n– After the quantity is calculated, it is set as an attribute of the\nobject instance.\n\n### funções de atributos para objetos:\n\n1) hasattr: \n - recebe 2 argumentos: (objeto da pesquisa, nome do atributo ou método de busca)\n \n2) delattr:\n - recebe 2 argumentos: (objeto da pesquisa, nome do atributo a ser deletado)\n \n3) setattr:\n - recebe 3 argumentos: (objeto da pesquisa, nome do atributo a ser criado, novo valor a ser atribuido)\n",
"_____no_output_____"
]
],
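[
[
"A minimal illustration of these three functions (added example; the attribute name `T_v` is arbitrary here):\n\n```python\nclass Empty(object):\n    pass\n\nobj = Empty()\nprint(hasattr(obj, 'T_v'))   # False: the attribute does not exist yet\nsetattr(obj, 'T_v', 290.0)   # same effect as obj.T_v = 290.0\nprint(obj.T_v)               # 290.0\ndelattr(obj, 'T_v')          # same effect as del obj.T_v\nprint(hasattr(obj, 'T_v'))   # False again\n```",
"_____no_output_____"
]
],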
[
[
"class Atmosphere(object):\n def __init__(self, **kwds):\n for ikey in kwds.keys():\n if hasattr(self, ikey) == False:\n setattr(self, ikey, kwds[ikey])\n \n else:\n return ikey.items()\n \n def Delattr(self, **kwds):\n for ikey in kwds.keys():\n delattr(self, ikey)\n \n def calc_rho(self):\n if not hasattr(self, 'T_v'):\n self.calc_virt_temp()\n \n elif not hasattr(self, 'p'):\n self.calc_press()\n\n else:\n raise ValueError, \"cannot obtain given initial quantities\"\n\n self.rho = [... find air density from self.T_v and\n self.p ...]\n",
"_____no_output_____"
],
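[
"# Added usage sketch for the Atmosphere class above, assuming the virtual temperature\n# T_v [K] and pressure p [Pa] are supplied directly at instantiation:\natm = Atmosphere(T_v=288.0, p=101325.0)\natm.calc_rho()\nprint(atm.rho)   # roughly 1.23 kg m-3",
"_____no_output_____"
],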
[
"class Pessoa():\n def __init__(self, nome, idade, peso):\n self.nome = nome\n self.peso = peso\n self.idade = idade\n self.x = 0\n self.y = 0\n def andar(self, x, y):\n self.x = self.x + x\n self.y +=y\n\n \nSheila_1 = Pessoa('Camila', peso=50, idade=32)\n\n",
"_____no_output_____"
],
[
"class Phi_numero():\n def __init__(self, x=0):\n self.x=x\n def __add__(self, *values):\n for v in values:\n self.x += v\n return self.x",
"_____no_output_____"
],
[
"P = Phi_numero(1)\n\nP + 2 + 3",
"_____no_output_____"
],
[
"Sheila_1.andar(10, -2)",
"_____no_output_____"
],
[
"print(Sheila_1.x)\nSheila_1.y",
"10\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9c5a95d94600f18b6bd59d98c7bd8afd321b92 | 808,764 | ipynb | Jupyter Notebook | visualize_dataset_corrupt.ipynb | muzik999/road_connectivity | 8d9893752a633f9339e91b7b697f700ca669b729 | [
"MIT"
] | 96 | 2019-06-06T05:13:34.000Z | 2022-03-23T12:01:07.000Z | visualize_dataset_corrupt.ipynb | muzik999/road_connectivity | 8d9893752a633f9339e91b7b697f700ca669b729 | [
"MIT"
] | 35 | 2019-07-05T06:03:22.000Z | 2022-03-09T03:11:02.000Z | visualize_dataset_corrupt.ipynb | muzik999/road_connectivity | 8d9893752a633f9339e91b7b697f700ca669b729 | [
"MIT"
] | 30 | 2019-07-17T09:37:26.000Z | 2022-02-28T19:09:22.000Z | 4,126.346939 | 233,348 | 0.965531 | [
[
[
"## Notebook to visualize Training data from PyTorch Dataloader",
"_____no_output_____"
]
],
[
[
"import json\nfrom road_dataset import *\nimport matplotlib.pyplot as plt\n\n%matplotlib inline",
"_____no_output_____"
],
[
"config = json.load(open('config.json'))",
"_____no_output_____"
]
],
[
[
"### Create Dataset.\n#### Use 'multi_scale_pred' flag to get Multi-Scale Road Segmentations and Orientation GT",
"_____no_output_____"
]
],
[
[
"multi_scale_pred = False\ndataset = DeepGlobeDatasetCorrupt(config['train_dataset'], \n seed = config['seed'])\n\ntrain_loader = torch.utils.data.DataLoader(dataset,\n batch_size = config['train_batch_size'],\n num_workers=8,\n shuffle = True, \n pin_memory=False)",
"_____no_output_____"
]
],
[
[
"### Iterate dataloader to visualize GT",
"_____no_output_____"
]
],
[
[
"for i, data in enumerate(train_loader, 0):\n inputsBGR,labels,vecmap_angles = data\n print(inputsBGR.size())\n \n print(labels.size())\n print(vecmap_angles.size())\n \n plt.figure(figsize=(16,16))\n plt.subplot(131)\n plt.imshow((inputsBGR[0].numpy().transpose(1,2,0)+ np.array(eval(config['train_dataset']['mean']))).astype(np.uint8))\n plt.subplot(132)\n plt.imshow(labels[0].numpy().astype(np.uint8))\n plt.subplot(133)\n plt.imshow(vecmap_angles[0].numpy())\n plt.show()\n if i > 2:\n break",
"torch.Size([32, 3, 256, 256])\ntorch.Size([32, 256, 256])\ntorch.Size([32, 256, 256])\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9c6dc0eb1257fe78d86ee5bf366c99558c2673 | 11,092 | ipynb | Jupyter Notebook | samples/core/ai_platform/ai_platform.ipynb | cliveseldon/pipelines | c7300c343cca9b22c9c5bdb64823e312dac1e495 | [
"Apache-2.0"
] | null | null | null | samples/core/ai_platform/ai_platform.ipynb | cliveseldon/pipelines | c7300c343cca9b22c9c5bdb64823e312dac1e495 | [
"Apache-2.0"
] | 2 | 2022-02-13T19:18:51.000Z | 2022-02-19T06:09:45.000Z | samples/core/ai_platform/ai_platform.ipynb | ckadner/repo-that-should-be-a-fork | d4aabd15b15022999da7660d2bc808347b2d9f06 | [
"Apache-2.0"
] | 2 | 2019-10-15T03:06:15.000Z | 2019-10-15T03:10:39.000Z | 27.053659 | 207 | 0.538857 | [
[
[
"# Chicago Crime Prediction Pipeline\n\nAn example notebook that demonstrates how to:\n* Download data from BigQuery\n* Create a Kubeflow pipeline\n* Include Google Cloud AI Platform components to train and deploy the model in the pipeline\n* Submit a job for execution\n* Query the final deployed model\n\nThe model forecasts how many crimes are expected to be reported the next day, based on how many were reported over the previous `n` days.",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
]
],
[
[
"%%capture\n\n# Install the SDK (Uncomment the code if the SDK is not installed before)\n!python3 -m pip install 'kfp>=0.1.31' --quiet\n!python3 -m pip install pandas --upgrade -q\n\n# Restart the kernel for changes to take effect",
"_____no_output_____"
],
[
"import json\n\nimport kfp\nimport kfp.components as comp\nimport kfp.dsl as dsl\n\nimport pandas as pd\n\nimport time",
"_____no_output_____"
]
],
[
[
"## Pipeline",
"_____no_output_____"
],
[
"### Constants",
"_____no_output_____"
]
],
[
[
"# Required Parameters\nproject_id = '<ADD GCP PROJECT HERE>'\noutput = 'gs://<ADD STORAGE LOCATION HERE>' # No ending slash\n",
"_____no_output_____"
],
[
"# Optional Parameters\nREGION = 'us-central1'\nRUNTIME_VERSION = '1.13'\nPACKAGE_URIS=json.dumps(['gs://chicago-crime/chicago_crime_trainer-0.0.tar.gz'])\nTRAINER_OUTPUT_GCS_PATH = output + '/train/output/' + str(int(time.time())) + '/'\nDATA_GCS_PATH = output + '/reports.csv'\nPYTHON_MODULE = 'trainer.task'\nPIPELINE_NAME = 'Chicago Crime Prediction'\nPIPELINE_FILENAME_PREFIX = 'chicago'\nPIPELINE_DESCRIPTION = ''\nMODEL_NAME = 'chicago_pipeline_model' + str(int(time.time()))\nMODEL_VERSION = 'chicago_pipeline_model_v1' + str(int(time.time()))",
"_____no_output_____"
]
],
[
[
"### Download data\n\nDefine a download function that uses the BigQuery component",
"_____no_output_____"
]
],
[
[
"bigquery_query_op = comp.load_component_from_url(\n 'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/bigquery/query/component.yaml')\n\nQUERY = \"\"\"\n SELECT count(*) as count, TIMESTAMP_TRUNC(date, DAY) as day\n FROM `bigquery-public-data.chicago_crime.crime`\n GROUP BY day\n ORDER BY day\n\"\"\"\n\ndef download(project_id, data_gcs_path):\n\n return bigquery_query_op(\n query=QUERY,\n project_id=project_id,\n output_gcs_path=data_gcs_path\n )",
"_____no_output_____"
]
],
[
[
"### Train the model\n\nRun training code that will pre-process the data and then submit a training job to the AI Platform.",
"_____no_output_____"
]
],
[
[
"mlengine_train_op = comp.load_component_from_url(\n 'https://raw.githubusercontent.com/kubeflow/pipelines/1.0.0/components/gcp/ml_engine/train/component.yaml')\n\ndef train(project_id,\n trainer_args,\n package_uris,\n trainer_output_gcs_path,\n gcs_working_dir,\n region,\n python_module,\n runtime_version):\n\n return mlengine_train_op(\n project_id=project_id, \n python_module=python_module,\n package_uris=package_uris,\n region=region,\n args=trainer_args,\n job_dir=trainer_output_gcs_path,\n runtime_version=runtime_version\n )",
"_____no_output_____"
]
],
[
[
"### Deploy model\n\nDeploy the model with the ID given from the training step",
"_____no_output_____"
]
],
[
[
"mlengine_deploy_op = comp.load_component_from_url(\n 'https://raw.githubusercontent.com/kubeflow/pipelines/1.0.0/components/gcp/ml_engine/deploy/component.yaml')\n\ndef deploy(\n project_id,\n model_uri,\n model_id,\n model_version,\n runtime_version):\n \n return mlengine_deploy_op(\n model_uri=model_uri,\n project_id=project_id, \n model_id=model_id, \n version_id=model_version, \n runtime_version=runtime_version, \n replace_existing_version=True, \n set_default=True)",
"_____no_output_____"
]
],
[
[
"### Define pipeline",
"_____no_output_____"
]
],
[
[
"@dsl.pipeline(\n name=PIPELINE_NAME,\n description=PIPELINE_DESCRIPTION\n)\n\ndef pipeline(\n data_gcs_path=DATA_GCS_PATH,\n gcs_working_dir=output,\n project_id=project_id,\n python_module=PYTHON_MODULE,\n region=REGION,\n runtime_version=RUNTIME_VERSION,\n package_uris=PACKAGE_URIS,\n trainer_output_gcs_path=TRAINER_OUTPUT_GCS_PATH,\n): \n download_task = download(project_id,\n data_gcs_path)\n\n train_task = train(project_id,\n json.dumps(\n ['--data-file-url',\n '%s' % download_task.outputs['output_gcs_path'],\n '--job-dir',\n output]\n ),\n package_uris,\n trainer_output_gcs_path,\n gcs_working_dir,\n region,\n python_module,\n runtime_version)\n \n deploy_task = deploy(project_id,\n train_task.outputs['job_dir'],\n MODEL_NAME,\n MODEL_VERSION,\n runtime_version) \n return True\n\n# Reference for invocation later\npipeline_func = pipeline",
"_____no_output_____"
]
],
[
[
"### Submit the pipeline for execution",
"_____no_output_____"
]
],
[
[
"pipeline = kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})\n\n# Run the pipeline on a separate Kubeflow Cluster instead\n# (use if your notebook is not running in Kubeflow - e.x. if using AI Platform Notebooks)\n# pipeline = kfp.Client(host='<ADD KFP ENDPOINT HERE>').create_run_from_pipeline_func(pipeline, arguments={})",
"_____no_output_____"
]
],
[
[
"### Wait for the pipeline to finish",
"_____no_output_____"
]
],
[
[
"run_detail = pipeline.wait_for_run_completion(timeout=1800)\nprint(run_detail.run.status)",
"_____no_output_____"
]
],
[
[
"### Use the deployed model to predict (online prediction)",
"_____no_output_____"
]
],
[
[
"import os\nos.environ['MODEL_NAME'] = MODEL_NAME\nos.environ['MODEL_VERSION'] = MODEL_VERSION",
"_____no_output_____"
]
],
[
[
"Create normalized input representing 14 days prior to prediction day.",
"_____no_output_____"
]
],
[
[
"%%writefile test.json\n{\"lstm_input\": [[-1.24344569, -0.71910112, -0.86641698, -0.91635456, -1.04868914, -1.01373283, -0.7690387, -0.71910112, -0.86641698, -0.91635456, -1.04868914, -1.01373283, -0.7690387 , -0.90387016]]}",
"_____no_output_____"
],
[
"!gcloud ai-platform predict --model=$MODEL_NAME --version=$MODEL_VERSION --json-instances=test.json",
"_____no_output_____"
]
],
[
[
"### Examine cloud services invoked by the pipeline\n- BigQuery query: https://console.cloud.google.com/bigquery?page=queries (click on 'Project History')\n- AI Platform training job: https://console.cloud.google.com/ai-platform/jobs\n- AI Platform model serving: https://console.cloud.google.com/ai-platform/models\n",
"_____no_output_____"
],
[
"### Clean models",
"_____no_output_____"
]
],
[
[
"# !gcloud ai-platform versions delete $MODEL_VERSION --model $MODEL_NAME\n# !gcloud ai-platform models delete $MODEL_NAME",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
ec9c76341f5e553227a07fcffc304acf3ae4e8d4 | 3,614 | ipynb | Jupyter Notebook | common/notebook/skchainer/iris.ipynb | bizreach/common-ml | 43772595fc6ba093966961faedfd2cd121d8a923 | [
"Apache-2.0"
] | 31 | 2016-04-20T07:45:15.000Z | 2020-03-08T20:43:28.000Z | common/notebook/skchainer/iris.ipynb | bizreach/common-ml | 43772595fc6ba093966961faedfd2cd121d8a923 | [
"Apache-2.0"
] | null | null | null | common/notebook/skchainer/iris.ipynb | bizreach/common-ml | 43772595fc6ba093966961faedfd2cd121d8a923 | [
"Apache-2.0"
] | 7 | 2016-06-22T04:58:06.000Z | 2019-03-07T08:35:43.000Z | 25.097222 | 113 | 0.513005 | [
[
[
"From Tensor SkFlow: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/skflow/iris.py",
"_____no_output_____"
],
[
"## Import",
"_____no_output_____"
]
],
[
[
"from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom sklearn import metrics, cross_validation\n\nfrom tensorflow.contrib import learn\n\nimport chainer.functions as F\nimport chainer.links as L\nfrom chainer import optimizers, Chain\nfrom commonml.skchainer import ChainerEstimator, SoftmaxCrossEntropyClassifier\n\nimport logging\nlogging.basicConfig(format='%(levelname)s : %(message)s', level=logging.INFO)\nlogging.root.level = 20",
"_____no_output_____"
]
],
[
[
"## Load dataset.",
"_____no_output_____"
]
],
[
[
"iris = learn.datasets.load_dataset('iris')\nX_train, X_test, y_train, y_test = cross_validation.train_test_split(iris.data, iris.target,\n test_size=0.2, random_state=42)",
"_____no_output_____"
]
],
[
[
"## Build 3 layer DNN with 10, 20, 10 units respectively.",
"_____no_output_____"
]
],
[
[
"class Model(Chain):\n\n def __init__(self, in_size):\n super(Model, self).__init__(l1=L.Linear(in_size, 10),\n l2=L.Linear(10, 20),\n l3=L.Linear(20, 10),\n l4=L.Linear(10, 3),\n )\n\n def __call__(self, x):\n h1 = F.relu(self.l1(x))\n h2 = F.relu(self.l2(h1))\n h3 = F.relu(self.l3(h2))\n h4 = self.l4(h3)\n return h4\n\nclassifier = ChainerEstimator(model=SoftmaxCrossEntropyClassifier(Model(X_train.shape[1])),\n optimizer=optimizers.AdaGrad(lr=0.1),\n batch_size=100,\n device=0,\n stop_trigger=(1000, 'epoch'))",
"_____no_output_____"
]
],
[
[
"## Fit and predict.",
"_____no_output_____"
]
],
[
[
"classifier.fit(X_train, y_train)\nscore = metrics.accuracy_score(y_test, classifier.predict(X_test))\nprint('Accuracy: {0:f}'.format(score))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9c7ce970147101c985f237a4c6c23fbf5fa60d | 124,818 | ipynb | Jupyter Notebook | code/chap05mine.ipynb | mpatil99/ModSimPy | 973812dfb871d83314f37dd37d7d4ebf86adc79b | [
"MIT"
] | null | null | null | code/chap05mine.ipynb | mpatil99/ModSimPy | 973812dfb871d83314f37dd37d7d4ebf86adc79b | [
"MIT"
] | null | null | null | code/chap05mine.ipynb | mpatil99/ModSimPy | 973812dfb871d83314f37dd37d7d4ebf86adc79b | [
"MIT"
] | null | null | null | 72.190862 | 28,856 | 0.748938 | [
[
[
"# Modeling and Simulation in Python\n\nChapter 5\n\nCopyright 2017 Allen Downey\n\nLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)\n",
"_____no_output_____"
]
],
[
[
"# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim.py module\nfrom modsim import *",
"_____no_output_____"
]
],
[
[
"## Reading data\n\nPandas is a library that provides tools for reading and processing data. `read_html` reads a web page from a file or the Internet and creates one `DataFrame` for each table on the page.",
"_____no_output_____"
]
],
[
[
"from pandas import read_html",
"_____no_output_____"
]
],
[
[
"The data directory contains a downloaded copy of https://en.wikipedia.org/wiki/World_population_estimates\n\nThe arguments of `read_html` specify the file to read and how to interpret the tables in the file. The result, `tables`, is a sequence of `DataFrame` objects; `len(tables)` reports the length of the sequence.",
"_____no_output_____"
]
],
[
[
"filename = 'data/World_population_estimates.html'\ntables = read_html(filename, header=0, index_col=0, decimal='M')\nlen(tables)",
"_____no_output_____"
]
],
[
[
"We can select the `DataFrame` we want using the bracket operator. The tables are numbered from 0, so `tables[2]` is actually the third table on the page.\n\n`head` selects the header and the first five rows.",
"_____no_output_____"
]
],
[
[
"table2 = tables[2]\ntable2.head()",
"_____no_output_____"
]
],
[
[
"`tail` selects the last five rows.",
"_____no_output_____"
]
],
[
[
"table2.tail()",
"_____no_output_____"
]
],
[
[
"Long column names are awkard to work with, but we can replace them with abbreviated names.",
"_____no_output_____"
]
],
[
[
"table2.columns = ['census', 'prb', 'un', 'maddison', \n 'hyde', 'tanton', 'biraben', 'mj', \n 'thomlinson', 'durand', 'clark']",
"_____no_output_____"
]
],
[
[
"Here's what the DataFrame looks like now. ",
"_____no_output_____"
]
],
[
[
"table2.head()",
"_____no_output_____"
]
],
[
[
"The first column, which is labeled `Year`, is special. It is the **index** for this `DataFrame`, which means it contains the labels for the rows.\n\nSome of the values use scientific notation; for example, `2.544000e+09` is shorthand for $2.544 \\cdot 10^9$ or 2.544 billion.\n\n`NaN` is a special value that indicates missing data.",
"_____no_output_____"
],
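[
"A few quick checks (added for illustration) make those three points concrete:\n\n```python\ntable2.index            # the row labels are the years\n2.544e9 == 2544000000   # scientific notation is just a compact way to write large numbers\ntable2.isnull().sum()   # NaN marks missing estimates; this counts them per column\n```",
"_____no_output_____"
],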
[
"### Series\n\nWe can use dot notation to select a column from a `DataFrame`. The result is a `Series`, which is like a `DataFrame` with a single column.",
"_____no_output_____"
]
],
[
[
"census = table2.census\ncensus.head()",
"_____no_output_____"
],
[
"census.tail()",
"_____no_output_____"
]
],
[
[
"Like a `DataFrame`, a `Series` contains an index, which labels the rows.\n\n`1e9` is scientific notation for $1 \\cdot 10^9$ or 1 billion.",
"_____no_output_____"
],
[
"From here on, we will work in units of billions.",
"_____no_output_____"
]
],
[
[
"un = table2.un / 1e9\nun.head()",
"_____no_output_____"
],
[
"census = table2.census / 1e9\ncensus.head()",
"_____no_output_____"
]
],
[
[
"Here's what these estimates look like.",
"_____no_output_____"
]
],
[
[
"plot(census, ':', label='US Census')\nplot(un, '--', label='UN DESA')\n \ndecorate(xlabel='Year',\n ylabel='World population (billion)')\nsavefig('figs/chap03-fig01.pdf')",
"Saving figure to file figs/chap03-fig01.pdf\n"
]
],
[
[
"The following expression computes the elementwise differences between the two series, then divides through by the UN value to produce [relative errors](https://en.wikipedia.org/wiki/Approximation_error), then finds the largest element.\n\nSo the largest relative error between the estimates is about 1.3%.",
"_____no_output_____"
]
],
[
[
"max(abs(census - un) / un) * 100",
"_____no_output_____"
]
],
[
[
"**Exercise:** Break down that expression into smaller steps and display the intermediate results, to make sure you understand how it works.\n\n1. Compute the elementwise differences, `census - un`\n2. Compute the absolute differences, `abs(census - un)`\n3. Compute the relative differences, `abs(census - un) / un`\n4. Compute the percent differences, `abs(census - un) / un * 100`\n",
"_____no_output_____"
]
],
[
[
"census -un",
"_____no_output_____"
],
[
"abs(census -un)",
"_____no_output_____"
],
[
"abs(census - un)/un\n",
"_____no_output_____"
],
[
"abs(census - un)/un * 100",
"_____no_output_____"
]
],
[
[
"`max` and `abs` are built-in functions provided by Python, but NumPy also provides version that are a little more general. When you import `modsim`, you get the NumPy versions of these functions.",
"_____no_output_____"
],
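[
"For example (an added illustration), Python's built-in `abs` handles one number at a time, while NumPy's version (which `modsim` gives you) also works elementwise on a whole sequence or `Series`:\n\n```python\nfrom numpy import abs as np_abs\n\nnp_abs([-1.5, 2, -3])   # elementwise absolute value of the whole list\n# Python's built-in abs(-1.5) only accepts a single number\n```",
"_____no_output_____"
],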
[
"### Constant growth",
"_____no_output_____"
],
[
"We can select a value from a `Series` using bracket notation. Here's the first element:",
"_____no_output_____"
]
],
[
[
"census[1950]",
"_____no_output_____"
]
],
[
[
"And the last value.",
"_____no_output_____"
]
],
[
[
"census[2016]",
"_____no_output_____"
]
],
[
[
"But rather than \"hard code\" those dates, we can get the first and last labels from the `Series`:",
"_____no_output_____"
]
],
[
[
"t_0 = get_first_label(census)",
"_____no_output_____"
],
[
"t_end = get_last_label(census)",
"_____no_output_____"
],
[
"elapsed_time = t_end - t_0",
"_____no_output_____"
]
],
[
[
"And we can get the first and last values:",
"_____no_output_____"
]
],
[
[
"p_0 = get_first_value(census)",
"_____no_output_____"
],
[
"p_end = get_last_value(census)",
"_____no_output_____"
]
],
[
[
"Then we can compute the average annual growth in billions of people per year.",
"_____no_output_____"
]
],
[
[
"total_growth = p_end - p_0",
"_____no_output_____"
],
[
"annual_growth = total_growth / elapsed_time",
"_____no_output_____"
]
],
[
[
"### TimeSeries",
"_____no_output_____"
],
[
"Now let's create a `TimeSeries` to contain values generated by a linear growth model.",
"_____no_output_____"
]
],
[
[
"results = TimeSeries()",
"_____no_output_____"
]
],
[
[
"Initially the `TimeSeries` is empty, but we can initialize it so the starting value, in 1950, is the 1950 population estimated by the US Census.",
"_____no_output_____"
]
],
[
[
"results[t_0] = census[t_0]\nresults",
"_____no_output_____"
]
],
[
[
"After that, the population in the model grows by a constant amount each year.",
"_____no_output_____"
]
],
[
[
"for t in linrange(t_0, t_end):\n results[t+1] = results[t] + annual_growth",
"_____no_output_____"
]
],
[
[
"Here's what the results looks like, compared to the actual data.",
"_____no_output_____"
]
],
[
[
"plot(census, ':', label='US Census')\nplot(un, '--', label='UN DESA')\nplot(results, color='gray', label='model')\n\ndecorate(xlabel='Year', \n ylabel='World population (billion)',\n title='Constant growth')\nsavefig('figs/chap03-fig02.pdf')",
"Saving figure to file figs/chap03-fig02.pdf\n"
]
],
[
[
"The model fits the data pretty well after 1990, but not so well before.",
"_____no_output_____"
],
[
"### Exercises\n\n**Optional Exercise:** Try fitting the model using data from 1970 to the present, and see if that does a better job.\n\nHint: \n\n1. Copy the code from above and make a few changes. Test your code after each small change.\n\n2. Make sure your `TimeSeries` starts in 1950, even though the estimated annual growth is based on later data.\n\n3. You might want to add a constant to the starting value to match the data better.",
"_____no_output_____"
]
],
[
[
"total_growth = p_end - census[1970]\nelapsed_time = t_end - 1970\nannual_growth = total_growth / elapsed_time\n\n\nresults = TimeSeries()\nresults[t_0] = census[t_0]- .4\n\n\nfor t in linrange(t_0, t_end):\n results[t+1] = results[t] + annual_growth\n \nplot(census, ':', label='US Census')\nplot(un, '--', label='UN DESA')\nplot(results, color='gray', label='model')\n\ndecorate(xlabel='Year', \n ylabel='World population (billion)',\n title='Constant growth')\nsavefig('figs/chap03-fig02.pdf')",
"Saving figure to file figs/chap03-fig02.pdf\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
ec9c99f06fc9cf1a306c49577ef097439603e19d | 9,998 | ipynb | Jupyter Notebook | code/transformaciones.ipynb | rodrigoms95/analisis-datos-22-1 | 24bee1ce79212fc8d2a6489bcbb5da5367b4bd1d | [
"BSD-3-Clause"
] | null | null | null | code/transformaciones.ipynb | rodrigoms95/analisis-datos-22-1 | 24bee1ce79212fc8d2a6489bcbb5da5367b4bd1d | [
"BSD-3-Clause"
] | null | null | null | code/transformaciones.ipynb | rodrigoms95/analisis-datos-22-1 | 24bee1ce79212fc8d2a6489bcbb5da5367b4bd1d | [
"BSD-3-Clause"
] | null | null | null | 61.337423 | 1,520 | 0.60002 | [
[
[
"# Calcula una transformación de potencias.\r\n\r\nimport pandas as pd\r\n\r\nfrom scipy import stats\r\n\r\nfrom matplotlib import pyplot as plt",
"_____no_output_____"
],
[
"path = \"../datos/\"\r\nfname = \"Tabla_A2_ppt_Ithaca.dat\"\r\n\r\n# Se lee el archivo .dat y se ajusta su formato.\r\ndf = pd.read_table(path + fname, names = [\"Year\", \"Precipitation\"])\r\ndf = df.set_index(\"Year\")\r\n\r\ndf.head()",
"_____no_output_____"
],
[
"# Se calcula la transformación de potencia Box-Cox.\r\ndf[\"Box_Cox\"], lmbda = stats.boxcox(df[\"Precipitation\"])\r\n\r\nprint(\"Lambda = \" + f\"{lmbda:.4f}\")\r\ndf.plot.box()\r\nplt.title(\"Transformación Box-Cox\", fontsize = \"18\")",
"_____no_output_____"
]
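,
[
"# Added check (sketch): the Box-Cox transform can be inverted with scipy.special.inv_boxcox,\r\n# using the lambda fitted above.\r\nfrom scipy.special import inv_boxcox\r\n\r\ndf[\"Recovered\"] = inv_boxcox(df[\"Box_Cox\"], lmbda)\r\ndf.head()  # \"Recovered\" should match \"Precipitation\" up to round-off",
"_____no_output_____"
]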
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
ec9ca75234d85a892c0aeadd37a6efbaee417a49 | 12,176 | ipynb | Jupyter Notebook | document_classification_20newsgroups.ipynb | preethamam/Visual-Bags-of-Words-Classical-Classifiers | 58d91f77a15b258a4b40ae8bbbdac8dc7fba20a4 | [
"MIT"
] | null | null | null | document_classification_20newsgroups.ipynb | preethamam/Visual-Bags-of-Words-Classical-Classifiers | 58d91f77a15b258a4b40ae8bbbdac8dc7fba20a4 | [
"MIT"
] | null | null | null | document_classification_20newsgroups.ipynb | preethamam/Visual-Bags-of-Words-Classical-Classifiers | 58d91f77a15b258a4b40ae8bbbdac8dc7fba20a4 | [
"MIT"
] | null | null | null | 225.481481 | 10,633 | 0.643725 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\n# Classification of text documents using sparse features\n\n\nThis is an example showing how scikit-learn can be used to classify documents\nby topics using a bag-of-words approach. This example uses a scipy.sparse\nmatrix to store the features and demonstrates various classifiers that can\nefficiently handle sparse matrices.\n\nThe dataset used in this example is the 20 newsgroups dataset. It will be\nautomatically downloaded, then cached.\n\nThe bar plot indicates the accuracy, training time (normalized) and test time\n(normalized) of each classifier.\n\n\n",
"_____no_output_____"
]
],
[
[
"# Author: Peter Prettenhofer <[email protected]>\n# Olivier Grisel <[email protected]>\n# Mathieu Blondel <[email protected]>\n# Lars Buitinck\n# License: BSD 3 clause\n\nfrom __future__ import print_function\n\nimport logging\nimport numpy as np\nfrom optparse import OptionParser\nimport sys\nfrom time import time\nimport matplotlib.pyplot as plt\n\nfrom sklearn.datasets import fetch_20newsgroups\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.feature_extraction.text import HashingVectorizer\nfrom sklearn.feature_selection import SelectFromModel\nfrom sklearn.feature_selection import SelectKBest, chi2\nfrom sklearn.linear_model import RidgeClassifier\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.svm import LinearSVC\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.linear_model import Perceptron\nfrom sklearn.linear_model import PassiveAggressiveClassifier\nfrom sklearn.naive_bayes import BernoulliNB, MultinomialNB\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.neighbors import NearestCentroid\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.utils.extmath import density\nfrom sklearn import metrics\n\n\n# Display progress logs on stdout\nlogging.basicConfig(level=logging.INFO,\n format='%(asctime)s %(levelname)s %(message)s')\n\n\n# parse commandline arguments\nop = OptionParser()\nop.add_option(\"--report\",\n action=\"store_true\", dest=\"print_report\",\n help=\"Print a detailed classification report.\")\nop.add_option(\"--chi2_select\",\n action=\"store\", type=\"int\", dest=\"select_chi2\",\n help=\"Select some number of features using a chi-squared test\")\nop.add_option(\"--confusion_matrix\",\n action=\"store_true\", dest=\"print_cm\",\n help=\"Print the confusion matrix.\")\nop.add_option(\"--top10\",\n action=\"store_true\", dest=\"print_top10\",\n help=\"Print ten most discriminative terms per class\"\n \" for every classifier.\")\nop.add_option(\"--all_categories\",\n action=\"store_true\", dest=\"all_categories\",\n help=\"Whether to use all categories or not.\")\nop.add_option(\"--use_hashing\",\n action=\"store_true\",\n help=\"Use a hashing vectorizer.\")\nop.add_option(\"--n_features\",\n action=\"store\", type=int, default=2 ** 16,\n help=\"n_features when using the hashing vectorizer.\")\nop.add_option(\"--filtered\",\n action=\"store_true\",\n help=\"Remove newsgroup information that is easily overfit: \"\n \"headers, signatures, and quoting.\")\n\n\ndef is_interactive():\n return not hasattr(sys.modules['__main__'], '__file__')\n\n# work-around for Jupyter notebook and IPython console\nargv = [] if is_interactive() else sys.argv[1:]\n(opts, args) = op.parse_args(argv)\nif len(args) > 0:\n op.error(\"this script takes no arguments.\")\n sys.exit(1)\n\nprint(__doc__)\nop.print_help()\nprint()\n\n\n# #############################################################################\n# Load some categories from the training set\nif opts.all_categories:\n categories = None\nelse:\n categories = [\n 'alt.atheism',\n 'talk.religion.misc',\n 'comp.graphics',\n 'sci.space',\n ]\n\nif opts.filtered:\n remove = ('headers', 'footers', 'quotes')\nelse:\n remove = ()\n\nprint(\"Loading 20 newsgroups dataset for categories:\")\nprint(categories if categories else \"all\")\n\ndata_train = fetch_20newsgroups(subset='train', categories=categories,\n shuffle=True, random_state=42,\n remove=remove)\n\ndata_test = fetch_20newsgroups(subset='test', categories=categories,\n shuffle=True, random_state=42,\n 
remove=remove)\nprint('data loaded')\n\n# order of labels in `target_names` can be different from `categories`\ntarget_names = data_train.target_names\n\n\ndef size_mb(docs):\n return sum(len(s.encode('utf-8')) for s in docs) / 1e6\n\ndata_train_size_mb = size_mb(data_train.data)\ndata_test_size_mb = size_mb(data_test.data)\n\nprint(\"%d documents - %0.3fMB (training set)\" % (\n len(data_train.data), data_train_size_mb))\nprint(\"%d documents - %0.3fMB (test set)\" % (\n len(data_test.data), data_test_size_mb))\nprint(\"%d categories\" % len(categories))\nprint()\n\n# split a training set and a test set\ny_train, y_test = data_train.target, data_test.target\n\nprint(\"Extracting features from the training data using a sparse vectorizer\")\nt0 = time()\nif opts.use_hashing:\n vectorizer = HashingVectorizer(stop_words='english', alternate_sign=False,\n n_features=opts.n_features)\n X_train = vectorizer.transform(data_train.data)\nelse:\n vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5,\n stop_words='english')\n X_train = vectorizer.fit_transform(data_train.data)\nduration = time() - t0\nprint(\"done in %fs at %0.3fMB/s\" % (duration, data_train_size_mb / duration))\nprint(\"n_samples: %d, n_features: %d\" % X_train.shape)\nprint()\n\nprint(\"Extracting features from the test data using the same vectorizer\")\nt0 = time()\nX_test = vectorizer.transform(data_test.data)\nduration = time() - t0\nprint(\"done in %fs at %0.3fMB/s\" % (duration, data_test_size_mb / duration))\nprint(\"n_samples: %d, n_features: %d\" % X_test.shape)\nprint()\n\n# mapping from integer feature name to original token string\nif opts.use_hashing:\n feature_names = None\nelse:\n feature_names = vectorizer.get_feature_names()\n\nif opts.select_chi2:\n print(\"Extracting %d best features by a chi-squared test\" %\n opts.select_chi2)\n t0 = time()\n ch2 = SelectKBest(chi2, k=opts.select_chi2)\n X_train = ch2.fit_transform(X_train, y_train)\n X_test = ch2.transform(X_test)\n if feature_names:\n # keep selected feature names\n feature_names = [feature_names[i] for i\n in ch2.get_support(indices=True)]\n print(\"done in %fs\" % (time() - t0))\n print()\n\nif feature_names:\n feature_names = np.asarray(feature_names)\n\n\ndef trim(s):\n \"\"\"Trim string to fit on terminal (assuming 80-column display)\"\"\"\n return s if len(s) <= 80 else s[:77] + \"...\"\n\n\n# #############################################################################\n# Benchmark classifiers\ndef benchmark(clf):\n print('_' * 80)\n print(\"Training: \")\n print(clf)\n t0 = time()\n clf.fit(X_train, y_train)\n train_time = time() - t0\n print(\"train time: %0.3fs\" % train_time)\n\n t0 = time()\n pred = clf.predict(X_test)\n test_time = time() - t0\n print(\"test time: %0.3fs\" % test_time)\n\n score = metrics.accuracy_score(y_test, pred)\n print(\"accuracy: %0.3f\" % score)\n\n if hasattr(clf, 'coef_'):\n print(\"dimensionality: %d\" % clf.coef_.shape[1])\n print(\"density: %f\" % density(clf.coef_))\n\n if opts.print_top10 and feature_names is not None:\n print(\"top 10 keywords per class:\")\n for i, label in enumerate(target_names):\n top10 = np.argsort(clf.coef_[i])[-10:]\n print(trim(\"%s: %s\" % (label, \" \".join(feature_names[top10]))))\n print()\n\n if opts.print_report:\n print(\"classification report:\")\n print(metrics.classification_report(y_test, pred,\n target_names=target_names))\n\n if opts.print_cm:\n print(\"confusion matrix:\")\n print(metrics.confusion_matrix(y_test, pred))\n\n print()\n clf_descr = str(clf).split('(')[0]\n 
return clf_descr, score, train_time, test_time\n\n\nresults = []\nfor clf, name in (\n (RidgeClassifier(tol=1e-2, solver=\"lsqr\"), \"Ridge Classifier\"),\n (Perceptron(n_iter=50), \"Perceptron\"),\n (PassiveAggressiveClassifier(n_iter=50), \"Passive-Aggressive\"),\n (KNeighborsClassifier(n_neighbors=10), \"kNN\"),\n (RandomForestClassifier(n_estimators=100), \"Random forest\")):\n print('=' * 80)\n print(name)\n results.append(benchmark(clf))\n\nfor penalty in [\"l2\", \"l1\"]:\n print('=' * 80)\n print(\"%s penalty\" % penalty.upper())\n # Train Liblinear model\n results.append(benchmark(LinearSVC(penalty=penalty, dual=False,\n tol=1e-3)))\n\n # Train SGD model\n results.append(benchmark(SGDClassifier(alpha=.0001, n_iter=50,\n penalty=penalty)))\n\n# Train SGD with Elastic Net penalty\nprint('=' * 80)\nprint(\"Elastic-Net penalty\")\nresults.append(benchmark(SGDClassifier(alpha=.0001, n_iter=50,\n penalty=\"elasticnet\")))\n\n# Train NearestCentroid without threshold\nprint('=' * 80)\nprint(\"NearestCentroid (aka Rocchio classifier)\")\nresults.append(benchmark(NearestCentroid()))\n\n# Train sparse Naive Bayes classifiers\nprint('=' * 80)\nprint(\"Naive Bayes\")\nresults.append(benchmark(MultinomialNB(alpha=.01)))\nresults.append(benchmark(BernoulliNB(alpha=.01)))\n\nprint('=' * 80)\nprint(\"LinearSVC with L1-based feature selection\")\n# The smaller C, the stronger the regularization.\n# The more regularization, the more sparsity.\nresults.append(benchmark(Pipeline([\n ('feature_selection', SelectFromModel(LinearSVC(penalty=\"l1\", dual=False,\n tol=1e-3))),\n ('classification', LinearSVC(penalty=\"l2\"))])))\n\n# make some plots\n\nindices = np.arange(len(results))\n\nresults = [[x[i] for x in results] for i in range(4)]\n\nclf_names, score, training_time, test_time = results\ntraining_time = np.array(training_time) / np.max(training_time)\ntest_time = np.array(test_time) / np.max(test_time)\n\nplt.figure(figsize=(12, 8))\nplt.title(\"Score\")\nplt.barh(indices, score, .2, label=\"score\", color='navy')\nplt.barh(indices + .3, training_time, .2, label=\"training time\",\n color='c')\nplt.barh(indices + .6, test_time, .2, label=\"test time\", color='darkorange')\nplt.yticks(())\nplt.legend(loc='best')\nplt.subplots_adjust(left=.25)\nplt.subplots_adjust(top=.95)\nplt.subplots_adjust(bottom=.05)\n\nfor i, c in zip(indices, clf_names):\n plt.text(-.3, i, c)\n\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9caa065495896b390a53645475ed013622e42c | 33,855 | ipynb | Jupyter Notebook | mapreduce-pipeline/Compute-Pi-with-lightweight-components-and-minio.ipynb | DennisH3/jupyter-notebooks | dd13b480978373c29914b650a0d03ac98d8f5dde | [
"MIT"
] | 6 | 2020-06-07T18:10:04.000Z | 2021-05-27T15:39:33.000Z | mapreduce-pipeline/Compute-Pi-with-lightweight-components-and-minio.ipynb | DennisH3/jupyter-notebooks | dd13b480978373c29914b650a0d03ac98d8f5dde | [
"MIT"
] | 34 | 2020-04-15T16:48:45.000Z | 2021-08-12T19:42:00.000Z | mapreduce-pipeline/Compute-Pi-with-lightweight-components-and-minio.ipynb | DennisH3/jupyter-notebooks | dd13b480978373c29914b650a0d03ac98d8f5dde | [
"MIT"
] | 10 | 2020-04-10T15:06:47.000Z | 2021-08-12T19:27:58.000Z | 35.192308 | 319 | 0.564584 | [
[
[
"**Difficulty: Intermediate**\n\n# Summary:\n\nThis example demonstrates:\n* building a pipeline with lightweight components (components defined here in Python code)\n* Saving results to MinIO\n* Running parallel processes, where parallelism is defined at runtime\n\nIn doing this, we build a **shareable** pipeline - one that you can share with others and they can rerun on a new problem without needing this notebook.\n\nThis example builds on concepts from a few others - see those notebooks for more detail: \n* The problem solved here is from [Compute Pi](../mapreduce-pipeline/Compute-Pi.ipynb) \n* We use lightweight components, which have some important [quirks](../kfp-basics/demo_kfp_lightweight_components.ipynb)\n\n**Note:** Although we demonistrate how to make lightweight components that interact directly with minio, this reduces code reusability and makes things harder to test. A more reusable/testable version of this is given in [Compute Pi with Reusable Components](Compute-Pi-with-reusable-components-and-minio.ipynb).",
"_____no_output_____"
]
],
[
[
"from typing import List\n\nimport kfp\nfrom kfp import dsl, compiler\nfrom kfp.components import func_to_container_op\n\n# TODO: Move utilities to a central repo\nfrom utilities import get_minio_credentials, copy_to_minio\nfrom utilities import minio_find_files_matching_pattern",
"_____no_output_____"
]
],
[
[
"# Problem Description\n\nOur task is to compute an estimate of Pi by:\n1. picking some random points\n1. evaluating whether the points are inside a unit circle\n1. aggregating (2) to estimate pi\n\nOur solution to this task here focuses on:\n* making a fully reusable pipeline:\n * The pipeline should be sharable. You should be able to share the pipeline by giving them the pipeline.yaml file **without** sharing this notebook\n * All user inputs are adjustable at runtime (no editing the YAML, changing hard-coded settings in the Python code, etc.)\n* persisting data in MinIO\n* using existing, reusable components where possible\n * Ex: rather than teach our sample function to store results in MinIO, we use an existing component to store results\n * This helps improve testability and reduces work when building new pipelines",
"_____no_output_____"
],
[
"# Pipeline pseudocode",
"_____no_output_____"
],
[
"To solve our problem, we need to: \n* Generate N random seeds\n * For each random seed, do a sample step\n * For each sample step, store the result to a location in MinIO\n* Collect all sample results\n* Compute pi (by averaging the results)\n* Save the final result to MinIO",
"_____no_output_____"
],
[
"In pseudocode our pipeline looks like:",
"_____no_output_____"
],
[
"```python\ndef compute_pi(n_samples: int,\n output_location: str,\n minio_credentials, \n ):\n seeds = create_seeds(n_samples)\n\n for seed in seeds:\n result = sample(seed, minio_credentials, output_location)\n \n all_sample_results = collect_all_results(minio_credentials,\n sample_output_location\n )\n \n final_result = average(all_sample_results)\n```",
"_____no_output_____"
],
[
"where we've pulled anything the user might want to set at runtime (the number of samples, the location in MinIO for results to be placed, and their MinIO credentials) out as pipeline arguments.\n\nNow lets fill in all the function calls with components",
"_____no_output_____"
],
[
"# Define Pipeline Operations as Functions",
"_____no_output_____"
],
[
"## create_seeds",
"_____no_output_____"
]
],
[
[
"def create_seeds_func(n_samples: int) -> list:\n \"\"\"\n Creates n_samples seeds and returns as a list\n\n Note: When used as an operation in a KF pipeline, the list is serialized\n to a string. Can deserialize with strip and split or json package\n This sort of comma separated list will work natively with KF Pipelines'\n parallel for (we can feed this directly into a parallel for loop and it\n breaks into elements for us)\n\n \"\"\"\n constant = 10 # just so I know something is happening\n return [constant + i for i in range(n_samples)]",
"_____no_output_____"
]
],
[
[
"By defining this function in Python first, we can test it here to make sure it works as expected (rigorous testing omitted here, but recommended for your own tasks)",
"_____no_output_____"
]
],
[
[
"# Very rigorous testing!\nprint(create_seeds_func(10))",
"[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]\n"
]
],
[
[
"And we can then convert our tested function to a task constructor using `func_to_container_op`",
"_____no_output_____"
]
],
[
[
"# Define the base image our code will run from.\n# This is reused in a few components\nimport sys\npython_version_as_string = f\"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}\"\nbase_image_python = f\"python:{python_version_as_string}-buster\"\nprint(f\"This example notebook was executed using python {python_version_as_string}\")\nprint(f\"Using base_image_python: {base_image_python}\")",
"This example notebook was executed using python 3.8.8\nUsing base_image_python: python:3.8.8-buster\n"
],
[
"create_seeds_op = func_to_container_op(create_seeds_func,\n base_image=base_image_python,\n packages_to_install =['cloudpickle']\n )",
"_____no_output_____"
]
],
[
[
"This task constructor `create_seeds_op` is what actually creates instances of these components in our pipeline. ",
"_____no_output_____"
],
[
"## sample",
"_____no_output_____"
],
[
"Similar to above, we have a sample function and corresponding task constructor. For this, we need several helper functions for MinIO (kept in `utilities.py`). These helpers are automatically passed to our pipeline by `func_to_container_op` ",
"_____no_output_____"
]
],
[
[
"def sample_func(seed: int, minio_url: str, minio_bucket: str,\n minio_access_key: str, minio_secret_key: str,\n minio_output_path: str) -> str:\n \"\"\"\n Define the \"sample\" pipeline operation\n\n Args:\n seed (int): Seed for the sample operation\n minio_settings (str): JSON string with:\n minio_url: minio endpoint for storage, without \"http://, eg:\n minio.minio-standard-tenant-1\n minio_bucket: minio bucket to use within the endpoint, eg:\n firstname-lastname\n minio_access_key: minio access key (from\n /vault/secrets/minio-standard-tenant-1 on notebook\n server)\n minio_secret_key: minio secret key (from \n /vault/secrets/minio-standard-tenant-1 on notebook\n server)\n minio_output_path (str): Path in minio to put output data. Will place\n x.out, y.out, result.out, and seed.out in\n ./seed_{seed}/\n\n Returns:\n (str): Minio path where data is saved (common convention in kfp to\n return this, even if it was specified as an input like\n minio_output_path)\n \"\"\"\n import json\n from minio import Minio\n import random\n random.seed(seed)\n\n print(\"Pick random point\")\n # x,y ~ Uniform([-1,1])\n x = random.random() * 2 - 1\n y = random.random() * 2 - 1\n print(f\"Sample selected: ({x}, {y})\")\n\n if (x ** 2 + y ** 2) <= 1:\n print(f\"Random point is inside the unit circle\")\n result = 4\n else:\n print(f\"Random point is outside the unit circle\")\n result = 0\n\n to_output = {\n 'x': x,\n 'y': y,\n 'result': result,\n 'seed': seed,\n }\n\n # Store all results to bucket\n # Store each of x, y, result, and seed to a separate file with name\n # {bucket}/output_path/seed_{seed}/x.out\n # {bucket}/output_path/seed_{seed}/y.out\n # ...\n # where each file has just the value of the output.\n #\n # Could also have stored them all together in a single json file\n for varname, value in to_output.items():\n # TODO: Make this really a temp file...\n tempfile = f\"{varname}.out\"\n with open(tempfile, 'w') as fout:\n fout.write(str(value))\n\n destination = f\"{minio_output_path.rstrip('/')}/seed_{seed}/{tempfile}\"\n\n # Put file in minio\n copy_to_minio(minio_url=minio_url,\n bucket=minio_bucket,\n access_key=minio_access_key,\n secret_key=minio_secret_key,\n sourcefile=tempfile,\n destination=destination\n )\n\n # Return path containing outputs (common pipeline convention)\n return minio_output_path",
"_____no_output_____"
],
[
"# (insert your testing here)\n\n# # Example:\n# # NOTE: These tests actually write to minio!\n# minio_settings = get_minio_credentials(\"minimal\")\n# minio_settings['bucket'] = 'andrew-scribner'\n# sample = sample_func(5,\n# minio_url=minio_settings['url'],\n# minio_bucket=minio_settings['bucket'],\n# minio_access_key=minio_settings['access_key'],\n# minio_secret_key=minio_settings['secret_key'],\n# minio_output_path='test_functions'\n# )\n# # Check the bucket/output_path to see if things wrote correctly",
"_____no_output_____"
]
],
[
[
"We set `modules_to_capture=['utilities']` and `use_code_pickling=True` because this will pass our helpers to our pipeline. ",
"_____no_output_____"
]
],
[
[
"sample_op = func_to_container_op(sample_func,\n base_image=base_image_python,\n use_code_pickling=True, # Required because of helper functions\n modules_to_capture=['utilities'], # Required because of helper functions\n packages_to_install=['minio','cloudpickle'],\n )",
"_____no_output_____"
]
],
[
[
"## collect_results",
"_____no_output_____"
],
[
"To collect results from our sample operations, we glob from MinIO and output result data as a JSON list\n\nAgain, we need a helper file that feels better housed in a shared repo",
"_____no_output_____"
]
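,
[
"`minio_find_files_matching_pattern` also lives in `utilities.py`, which is not shown in this notebook. Below is a minimal, hypothetical sketch of it, assuming the `minio` Python SDK: it lists the objects under a prefix and keeps the names matching the given regular expression (the real helper may differ):\n\n```python\n# Hypothetical sketch -- not the actual utilities.py implementation\nimport re\nfrom minio import Minio\n\ndef minio_find_files_matching_pattern(minio_url, bucket, access_key, secret_key,\n                                      pattern, prefix):\n    # Accept either a compiled regex or a pattern string\n    if isinstance(pattern, str):\n        pattern = re.compile(pattern)\n    s3 = Minio(endpoint=minio_url,\n               access_key=access_key,\n               secret_key=secret_key,\n               secure=False,\n               )\n    objects = s3.list_objects(bucket, prefix=prefix, recursive=True)\n    return [obj.object_name for obj in objects if pattern.match(obj.object_name)]\n```",
"_____no_output_____"
]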
],
[
[
"def collect_results_as_list(search_location: str, search_pattern: str,\n minio_url: str, minio_bucket: str,\n minio_access_key: str, minio_secret_key: str,\n ) -> List[float]:\n \"\"\"\n Concatenates all files in minio that match a pattern\n \"\"\"\n from minio import Minio\n import json\n\n obj_names = minio_find_files_matching_pattern(\n minio_url=minio_url,\n bucket=minio_bucket,\n access_key=minio_access_key,\n secret_key=minio_secret_key,\n pattern=search_pattern,\n prefix=search_location)\n\n s3 = Minio(endpoint=minio_url,\n access_key=minio_access_key,\n secret_key=minio_secret_key,\n secure=False,\n region=\"us-west-1\",\n )\n\n # TODO: Use actual temp files\n to_return = [None] * len(obj_names)\n for i, obj_name in enumerate(obj_names):\n tempfile = f\"./unique_temp_{i}\"\n s3.fget_object(minio_bucket,\n object_name=obj_name,\n file_path=tempfile\n )\n with open(tempfile, 'r') as fin:\n to_return[i] = float(fin.read())\n\n print(f\"Returning {to_return}\")\n return to_return",
"_____no_output_____"
],
[
"# (insert your testing here)\n\n# # Example:\n# # This only works if you make a directory with some \"./something/result.out\"\n# # files in it\n# pattern = re.compile(r\".*/result.out$\")\n# collect_results_as_list(search_location='map-reduce-output/seeds/',\n# search_pattern=pattern,\n# minio_url=minio_settings['url'],\n# minio_bucket=minio_settings['bucket'],\n# minio_access_key=minio_settings['access_key'],\n# minio_secret_key=minio_settings['secret_key'],\n# )\n# # (you should see all the result.out files in the bucket/location you're pointed to)",
"_____no_output_____"
],
[
"collect_results_op = func_to_container_op(collect_results_as_list,\n base_image=base_image_python,\n use_code_pickling=True, # Required because of helper functions\n modules_to_capture=['utilities'], # Required because of helper functions\n packages_to_install=[\"minio\",'cloudpickle'],\n )",
"_____no_output_____"
]
],
[
[
"## average",
"_____no_output_____"
],
[
"Average takes a JSON list of numbers and returns their mean as a float",
"_____no_output_____"
]
],
[
[
"def average_func(numbers) -> float:\n \"\"\"\n Computes the average value of a JSON list of numbers, returned as a float\n \"\"\"\n import json\n print(numbers)\n print(type(numbers))\n numbers = json.loads(numbers)\n return sum(numbers) / len(numbers)",
"_____no_output_____"
],
[
"average_op = func_to_container_op(average_func,\n base_image=base_image_python,\n )",
"_____no_output_____"
]
],
[
[
"# Define and Compile Pipeline",
"_____no_output_____"
],
[
"With our component constructors defined, we build our full pipeline. Remember that while we use a Python function to define our pipeline here, anything that depends on a KFP-specific entity (an input argument, a component result, etc) is computed at runtime in kubernetes. This means we can't do things like \n```\nfor seed in seeds:\n sample_op = sample_op(seed)\n```\nbecause Python would try to interpret seeds, which is a *placeholder* object for a future value, as an iterable.",
"_____no_output_____"
]
],
[
[
"@dsl.pipeline(\n name=\"Estimate Pi w/Minio\",\n description=\"Extension of the Map-Reduce example using dynamic number of samples and Minio for storage\"\n)\ndef compute_pi(n_samples: int, output_location: str, minio_bucket: str, minio_url,\n minio_access_key: str, minio_secret_key: str):\n seeds = create_seeds_op(n_samples)\n\n # We add the KFP RUN_ID here in the output location so that we don't\n # accidentally overwrite another run. There's lots of ways to manage\n # data, this is just one possibility.\n # Ensure you avoid double \"/\"s in the path - minio does not like this\n this_run_output_location = f\"{str(output_location).rstrip('/')}\" \\\n f\"/{kfp.dsl.RUN_ID_PLACEHOLDER}\"\n\n sample_output_location = f\"{this_run_output_location}/seeds\"\n\n sample_ops = []\n with kfp.dsl.ParallelFor(seeds.output) as seed:\n sample_op_ = sample_op(seed, minio_url, minio_bucket, minio_access_key,\n minio_secret_key, sample_output_location)\n # Make a list of sample_ops so we can do result collection after they finish\n sample_ops.append(sample_op_)\n\n # NOTE: A current limitation of the ParallelFor loop in KFP is that it\n # does not give us an easy way to collect the results afterwards. To\n # get around this problem, we store results in a known place in minio\n # and later glob the result files back out\n\n # Find result files that exist in the seed output location\n # Note that a file in the bucket root does not have a preceeding slash, so\n # to handle the (unlikely) event we've put all results in the bucket root,\n # check for either ^result.out (eg, entire string is just the result.out)\n # or /result.out. This is to avoid matching something like\n # '/path/i_am_not_a_result.out'\n search_pattern = r'.*(^|/)result.out'\n\n # Collect all result.txt files in the sample_output_location and read them\n # into a list\n collect_results_op_ = collect_results_op(\n search_location=sample_output_location,\n search_pattern=search_pattern,\n minio_url=minio_url,\n minio_bucket=minio_bucket,\n minio_access_key=minio_access_key,\n minio_secret_key=minio_secret_key,\n )\n\n # collect_results requires all sample_ops to be done before running (all\n # results must be generated first). Enforce this by setting files_to_cat\n # to be .after() all copy_op tasks\n for s in sample_ops:\n collect_results_op_.after(s)\n\n average_op(collect_results_op_.output)",
"_____no_output_____"
]
],
[
[
"Compile our pipeline into a reusable YAML file",
"_____no_output_____"
]
],
[
[
"experiment_name = \"compute-pi-with-lightweight\"\nexperiment_yaml_zip = experiment_name + '.zip'\ncompiler.Compiler().compile(\n compute_pi,\n experiment_yaml_zip\n)\nprint(f\"Exported pipeline definition to {experiment_yaml_zip}\")",
"Exported pipeline definition to compute-pi-with-lightweight.zip\n"
]
],
[
[
"# Run",
"_____no_output_____"
],
[
"Use our above pipeline definition to do our task. Note that anything below here can be done **without** the above code. All we need is the yaml file from the last step. We can even do this from the Kubeflow Pipelines UI or from a terminal.",
"_____no_output_____"
],
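[
"For completeness, here is a hypothetical sketch of launching a run from the compiled package instead of the pipeline function (the cells below still call the function directly). It assumes the kfp SDK's `create_run_from_pipeline_package` helper and reuses the argument names from the pipeline definition; the placeholder values would need to be filled in:\n\n```python\n# Hypothetical sketch -- the cells below use create_run_from_pipeline_func instead\nimport kfp\n\nclient = kfp.Client()\nclient.create_run_from_pipeline_package(\n    'compute-pi-with-lightweight.zip',\n    arguments={\n        'n_samples': 10,\n        'output_location': 'map-reduce-output-lw',\n        'minio_bucket': '<your-namespace>',\n        'minio_url': '<minio-url>',\n        'minio_access_key': '<access-key>',\n        'minio_secret_key': '<secret-key>',\n    },\n)\n```",
"_____no_output_____"
],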
[
"## User settings\nUpdate the next block to match your own setup. bucket will be your namespace (likely your firstname-lastname), and output_location is where inside the bucket you want to put your results",
"_____no_output_____"
]
],
[
[
"import os\n# Python Minio SDK expects bucket and output_location to be separate\nbucket =os.environ['NB_NAMESPACE']\noutput_location = \"map-reduce-output-lw\"\nn_samples = 10\nminio_tenant = \"standard\" # probably can leave this as is",
"_____no_output_____"
]
],
[
[
"## Other settings\n(leave this as is)",
"_____no_output_____"
]
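,
[
"`get_minio_credentials` also comes from `utilities.py` and is not shown here. The following is a hypothetical sketch of its shape, based only on the vault path printed in the output below and on the keys its return value is accessed with (`url`, `access_key`, `secret_key`); the field names read from the vault JSON are assumptions:\n\n```python\n# Hypothetical sketch -- not the actual utilities.py implementation\nimport json\n\ndef get_minio_credentials(tenant):\n    vault_file = f'/vault/secrets/minio-{tenant}-tenant-1.json'\n    print('Trying to access minio credentials from:')\n    print(vault_file)\n    with open(vault_file, 'r') as fin:\n        secrets = json.load(fin)\n    return {\n        'url': secrets['MINIO_URL'],                # assumed field name\n        'access_key': secrets['MINIO_ACCESS_KEY'],  # assumed field name\n        'secret_key': secrets['MINIO_SECRET_KEY'],  # assumed field name\n    }\n```",
"_____no_output_____"
]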
],
[
[
"# Get minio credentials using a helper\nminio_settings = get_minio_credentials(minio_tenant)\nminio_url = minio_settings[\"url\"]\nminio_access_key = minio_settings[\"access_key\"]\nminio_secret_key = minio_settings[\"secret_key\"]",
"Trying to access minio credentials from:\n/vault/secrets/minio-standard-tenant-1.json\nTrying to access minio credentials from:\n/vault/secrets/minio-standard-tenant-1.json\n"
],
[
"client = kfp.Client()\nresult = client.create_run_from_pipeline_func(\n compute_pi,\n arguments={\"n_samples\": n_samples,\n \"output_location\": output_location,\n \"minio_bucket\": bucket,\n \"minio_url\": minio_url,\n \"minio_access_key\": minio_access_key,\n \"minio_secret_key\": minio_secret_key,\n },\n )",
"_____no_output_____"
]
],
[
[
"(Optional)\n\nWait for the run to complete, then print that it is done",
"_____no_output_____"
]
],
[
[
"wait_result = result.wait_for_run_completion(timeout=300)",
"_____no_output_____"
],
[
"print(f\"Run {wait_result.run.id}\\n\\tstarted at \\t{wait_result.run.created_at}\\n\\tfinished at \\t{wait_result.run.finished_at}\\n\\twith status {wait_result.run.status}\")",
"Run 52f1be4e-b098-4215-bb18-823671ec50d4\n\tstarted at \t2021-06-16 18:11:49+00:00\n\tfinished at \t2021-06-16 18:12:36+00:00\n\twith status Succeeded\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec9cb5c14d7d1bd6617264c7dcc2985cbb493b5b | 230,646 | ipynb | Jupyter Notebook | trabajo/trabajo Serverless.ipynb | adrymyry/bdge | 7b67a925b6b5792b34756b6c9da247f0a5c450fe | [
"MIT"
] | 1 | 2019-09-18T14:08:40.000Z | 2019-09-18T14:08:40.000Z | trabajo/trabajo Serverless.ipynb | adrymyry/bdge | 7b67a925b6b5792b34756b6c9da247f0a5c450fe | [
"MIT"
] | null | null | null | trabajo/trabajo Serverless.ipynb | adrymyry/bdge | 7b67a925b6b5792b34756b6c9da247f0a5c450fe | [
"MIT"
] | null | null | null | 118.28 | 24,716 | 0.772972 | [
[
[
"# Tecnologías Serverless",
"_____no_output_____"
],
[
"\n\n\n\nEsta hoja muestra reune toda la información recopilada para la realización del trabajo sobre tecnologías serverless desarrollado en el marco de la asignatura Bases de Datos a Gran Escala.\n\nEntre los contenidos del mismo encontramos:\n- Carga de los datos de Stackoverflow en una base de datos Mongodb\n- Introducción a tecnologías Serverless\n- Hello World en los principales servicios Serverless: AWS Lambda, Azure Functions, Google Cloud Functions.\n- Despliegue de código para consultas de la sesión 4 en AWS Lambda\n- Replicación de las consultas de la sesión 4 desde el notebook\n- Comparativa entre la ejecución del código en AWS Lambda frente al notebook\n\nEn primer lugar, instalamos las dependencias que necesitaremos más adelante.",
"_____no_output_____"
]
],
[
[
"!pip install --upgrade pymongo requests",
"Collecting pymongo\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/b1/45/5440555b901a8416196fbf2499c4678ef74de8080c007104107a8cfdda20/pymongo-3.7.2-cp36-cp36m-manylinux1_x86_64.whl (408kB)\n\u001b[K 100% |████████████████████████████████| 409kB 2.9MB/s ta 0:00:01\n\u001b[?25hCollecting requests\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/7d/e3/20f3d364d6c8e5d2353c72a67778eb189176f08e873c9900e10c0287b84b/requests-2.21.0-py2.py3-none-any.whl (57kB)\n\u001b[K 100% |████████████████████████████████| 61kB 4.9MB/s ta 0:00:01\n\u001b[?25hRequirement already satisfied, skipping upgrade: idna<2.9,>=2.5 in /opt/conda/lib/python3.6/site-packages (from requests) (2.7)\nRequirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /opt/conda/lib/python3.6/site-packages (from requests) (3.0.4)\nRequirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /opt/conda/lib/python3.6/site-packages (from requests) (1.23)\nRequirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /opt/conda/lib/python3.6/site-packages (from requests) (2018.10.15)\nInstalling collected packages: pymongo, requests\n Found existing installation: requests 2.20.1\n Uninstalling requests-2.20.1:\n Successfully uninstalled requests-2.20.1\nSuccessfully installed pymongo-3.7.2 requests-2.21.0\n"
]
],
[
[
"Importamos las librerías que utilizaremos.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib\n\nimport pymongo\nfrom pymongo import MongoClient\nfrom bson.code import Code\n\nimport timeit\nimport requests\n\n%matplotlib inline\nmatplotlib.style.use('ggplot')",
"_____no_output_____"
]
],
[
[
"## Carga de los datos de Stackoverflow en una base de datos Mongodb\n\nNos descargamos los datos desde el servidor neuromancer.inf.um.es",
"_____no_output_____"
]
],
[
[
"%%bash\nfile=../Posts.csv\ntest -e $file || wget http://neuromancer.inf.um.es:8080/es.stackoverflow/`basename ${file}`.gz -O - 2>/dev/null | gunzip > $file",
"_____no_output_____"
],
[
"%%bash\nfile=../Users.csv\ntest -e $file || wget http://neuromancer.inf.um.es:8080/es.stackoverflow/`basename ${file}`.gz -O - 2>/dev/null | gunzip > $file",
"_____no_output_____"
]
],
[
[
"Instalamos el paquete mongodb-clients que contiene la utilidad mongoimport que utilizamos para importar los datos.",
"_____no_output_____"
]
],
[
[
"%%bash\nsudo apt-get update\nsudo apt-get install -y mongodb-clients",
"Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [83.2 kB]\nGet:2 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]\nGet:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]\nGet:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]\nGet:5 http://security.ubuntu.com/ubuntu bionic-security/universe Sources [32.5 kB]\nGet:6 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [1,365 B]\nGet:7 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [303 kB]\nGet:8 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [135 kB]\nGet:9 http://archive.ubuntu.com/ubuntu bionic/universe Sources [11.5 MB]\nGet:10 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages [11.3 MB]\nGet:11 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages [186 kB]\nGet:12 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages [1,344 kB]\nGet:13 http://archive.ubuntu.com/ubuntu bionic/restricted amd64 Packages [13.5 kB]\nGet:14 http://archive.ubuntu.com/ubuntu bionic-updates/universe Sources [168 kB]\nGet:15 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [10.7 kB]\nGet:16 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [628 kB]\nGet:17 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [903 kB]\nGet:18 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [6,933 B]\nGet:19 http://archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [3,655 B]\nFetched 27.1 MB in 6s (4,855 kB/s)\nReading package lists...\nReading package lists...\nBuilding dependency tree...\nReading state information...\nThe following additional packages will be installed:\n libboost-filesystem1.65.1 libboost-iostreams1.65.1\n libboost-program-options1.65.1 libboost-system1.65.1 libgoogle-perftools4\n libpcap0.8 libpcrecpp0v5 libstemmer0d libtcmalloc-minimal4 libunwind8\n libyaml-cpp0.5v5 mongo-tools\nThe following NEW packages will be installed:\n libboost-filesystem1.65.1 libboost-iostreams1.65.1\n libboost-program-options1.65.1 libboost-system1.65.1 libgoogle-perftools4\n libpcap0.8 libpcrecpp0v5 libstemmer0d libtcmalloc-minimal4 libunwind8\n libyaml-cpp0.5v5 mongo-tools mongodb-clients\n0 upgraded, 13 newly installed, 0 to remove and 31 not upgraded.\nNeed to get 33.4 MB of archives.\nAfter this operation, 145 MB of additional disk space will be used.\nGet:1 http://archive.ubuntu.com/ubuntu bionic/main amd64 libpcap0.8 amd64 1.8.1-6ubuntu1 [118 kB]\nGet:2 http://archive.ubuntu.com/ubuntu bionic/main amd64 libboost-system1.65.1 amd64 1.65.1+dfsg-0ubuntu5 [10.5 kB]\nGet:3 http://archive.ubuntu.com/ubuntu bionic/main amd64 libboost-filesystem1.65.1 amd64 1.65.1+dfsg-0ubuntu5 [40.3 kB]\nGet:4 http://archive.ubuntu.com/ubuntu bionic/main amd64 libboost-iostreams1.65.1 amd64 1.65.1+dfsg-0ubuntu5 [29.2 kB]\nGet:5 http://archive.ubuntu.com/ubuntu bionic/main amd64 libboost-program-options1.65.1 amd64 1.65.1+dfsg-0ubuntu5 [137 kB]\nGet:6 http://archive.ubuntu.com/ubuntu bionic/main amd64 libtcmalloc-minimal4 amd64 2.5-2.2ubuntu3 [91.6 kB]\nGet:7 http://archive.ubuntu.com/ubuntu bionic/main amd64 libunwind8 amd64 1.2.1-8 [47.5 kB]\nGet:8 http://archive.ubuntu.com/ubuntu bionic/main amd64 libgoogle-perftools4 amd64 2.5-2.2ubuntu3 [190 kB]\nGet:9 http://archive.ubuntu.com/ubuntu bionic/main amd64 libpcrecpp0v5 amd64 2:8.39-9 [15.3 kB]\nGet:10 http://archive.ubuntu.com/ubuntu bionic/main amd64 libstemmer0d amd64 
0+svn585-1build1 [62.5 kB]\nGet:11 http://archive.ubuntu.com/ubuntu bionic/universe amd64 libyaml-cpp0.5v5 amd64 0.5.2-4ubuntu1 [150 kB]\nGet:12 http://archive.ubuntu.com/ubuntu bionic/universe amd64 mongo-tools amd64 3.6.3-0ubuntu1 [12.3 MB]\nGet:13 http://archive.ubuntu.com/ubuntu bionic/universe amd64 mongodb-clients amd64 1:3.6.3-0ubuntu1 [20.2 MB]\nFetched 33.4 MB in 5s (7,418 kB/s)\nSelecting previously unselected package libpcap0.8:amd64.\r\n(Reading database ... \r(Reading database ... 5%\r(Reading database ... 10%\r(Reading database ... 15%\r(Reading database ... 20%\r(Reading database ... 25%\r(Reading database ... 30%\r(Reading database ... 35%\r(Reading database ... 40%\r(Reading database ... 45%\r(Reading database ... 50%\r(Reading database ... 55%\r(Reading database ... 60%\r(Reading database ... 65%\r(Reading database ... 70%\r(Reading database ... 75%\r(Reading database ... 80%\r(Reading database ... 85%\r(Reading database ... 90%\r(Reading database ... 95%\r(Reading database ... 100%\r(Reading database ... 117249 files and directories currently installed.)\r\nPreparing to unpack .../00-libpcap0.8_1.8.1-6ubuntu1_amd64.deb ...\r\nUnpacking libpcap0.8:amd64 (1.8.1-6ubuntu1) ...\r\nSelecting previously unselected package libboost-system1.65.1:amd64.\r\nPreparing to unpack .../01-libboost-system1.65.1_1.65.1+dfsg-0ubuntu5_amd64.deb ...\r\nUnpacking libboost-system1.65.1:amd64 (1.65.1+dfsg-0ubuntu5) ...\r\nSelecting previously unselected package libboost-filesystem1.65.1:amd64.\r\nPreparing to unpack .../02-libboost-filesystem1.65.1_1.65.1+dfsg-0ubuntu5_amd64.deb ...\r\nUnpacking libboost-filesystem1.65.1:amd64 (1.65.1+dfsg-0ubuntu5) ...\r\nSelecting previously unselected package libboost-iostreams1.65.1:amd64.\r\nPreparing to unpack .../03-libboost-iostreams1.65.1_1.65.1+dfsg-0ubuntu5_amd64.deb ...\r\nUnpacking libboost-iostreams1.65.1:amd64 (1.65.1+dfsg-0ubuntu5) ...\r\nSelecting previously unselected package libboost-program-options1.65.1:amd64.\r\nPreparing to unpack .../04-libboost-program-options1.65.1_1.65.1+dfsg-0ubuntu5_amd64.deb ...\r\nUnpacking libboost-program-options1.65.1:amd64 (1.65.1+dfsg-0ubuntu5) ...\r\nSelecting previously unselected package libtcmalloc-minimal4.\r\nPreparing to unpack .../05-libtcmalloc-minimal4_2.5-2.2ubuntu3_amd64.deb ...\r\nUnpacking libtcmalloc-minimal4 (2.5-2.2ubuntu3) ...\r\nSelecting previously unselected package libunwind8:amd64.\r\nPreparing to unpack .../06-libunwind8_1.2.1-8_amd64.deb ...\r\nUnpacking libunwind8:amd64 (1.2.1-8) ...\r\nSelecting previously unselected package libgoogle-perftools4.\r\nPreparing to unpack .../07-libgoogle-perftools4_2.5-2.2ubuntu3_amd64.deb ...\r\nUnpacking libgoogle-perftools4 (2.5-2.2ubuntu3) ...\r\nSelecting previously unselected package libpcrecpp0v5:amd64.\r\nPreparing to unpack .../08-libpcrecpp0v5_2%3a8.39-9_amd64.deb ...\r\nUnpacking libpcrecpp0v5:amd64 (2:8.39-9) ...\r\nSelecting previously unselected package libstemmer0d:amd64.\r\nPreparing to unpack .../09-libstemmer0d_0+svn585-1build1_amd64.deb ...\r\nUnpacking libstemmer0d:amd64 (0+svn585-1build1) ...\r\nSelecting previously unselected package libyaml-cpp0.5v5:amd64.\r\nPreparing to unpack .../10-libyaml-cpp0.5v5_0.5.2-4ubuntu1_amd64.deb ...\r\nUnpacking libyaml-cpp0.5v5:amd64 (0.5.2-4ubuntu1) ...\r\nSelecting previously unselected package mongo-tools.\r\nPreparing to unpack .../11-mongo-tools_3.6.3-0ubuntu1_amd64.deb ...\r\nUnpacking mongo-tools (3.6.3-0ubuntu1) ...\r\nSelecting previously unselected package mongodb-clients.\r\nPreparing 
to unpack .../12-mongodb-clients_1%3a3.6.3-0ubuntu1_amd64.deb ...\r\nUnpacking mongodb-clients (1:3.6.3-0ubuntu1) ...\r\nSetting up libboost-iostreams1.65.1:amd64 (1.65.1+dfsg-0ubuntu5) ...\r\nSetting up libstemmer0d:amd64 (0+svn585-1build1) ...\r\nSetting up libboost-system1.65.1:amd64 (1.65.1+dfsg-0ubuntu5) ...\r\nSetting up libtcmalloc-minimal4 (2.5-2.2ubuntu3) ...\r\nSetting up libunwind8:amd64 (1.2.1-8) ...\r\nProcessing triggers for libc-bin (2.27-3ubuntu1) ...\r\nSetting up libpcrecpp0v5:amd64 (2:8.39-9) ...\r\nSetting up libyaml-cpp0.5v5:amd64 (0.5.2-4ubuntu1) ...\r\nSetting up libboost-program-options1.65.1:amd64 (1.65.1+dfsg-0ubuntu5) ...\r\nSetting up libpcap0.8:amd64 (1.8.1-6ubuntu1) ...\r\nSetting up libboost-filesystem1.65.1:amd64 (1.65.1+dfsg-0ubuntu5) ...\r\nSetting up libgoogle-perftools4 (2.5-2.2ubuntu3) ...\r\nSetting up mongo-tools (3.6.3-0ubuntu1) ...\r\nSetting up mongodb-clients (1:3.6.3-0ubuntu1) ...\r\nProcessing triggers for libc-bin (2.27-3ubuntu1) ...\r\n"
]
],
[
[
"Importación de los ficheros CSV utilizando mongoimport.",
"_____no_output_____"
]
],
[
[
"%%bash\nmongoimport --db stackoverflow --collection posts --drop --type csv \\\n --headerline --host=ec2-3-82-61-111.compute-1.amazonaws.com --file ../Posts.csv",
"2019-01-14T13:03:26.097+0000\tconnected to: ec2-3-82-61-111.compute-1.amazonaws.com\n2019-01-14T13:03:26.229+0000\tdropping: stackoverflow.posts\n2019-01-14T13:03:28.637+0000\t[........................] stackoverflow.posts\t1.40MB/134MB (1.0%)\n2019-01-14T13:03:31.637+0000\t[........................] stackoverflow.posts\t1.40MB/134MB (1.0%)\n2019-01-14T13:03:34.636+0000\t[........................] stackoverflow.posts\t2.73MB/134MB (2.0%)\n2019-01-14T13:03:37.637+0000\t[........................] stackoverflow.posts\t4.22MB/134MB (3.2%)\n2019-01-14T13:03:40.637+0000\t[........................] stackoverflow.posts\t4.22MB/134MB (3.2%)\n2019-01-14T13:03:43.636+0000\t[#.......................] stackoverflow.posts\t5.75MB/134MB (4.3%)\n2019-01-14T13:03:46.636+0000\t[#.......................] stackoverflow.posts\t5.75MB/134MB (4.3%)\n2019-01-14T13:03:49.636+0000\t[#.......................] stackoverflow.posts\t7.37MB/134MB (5.5%)\n2019-01-14T13:03:52.637+0000\t[#.......................] stackoverflow.posts\t7.37MB/134MB (5.5%)\n2019-01-14T13:03:55.637+0000\t[#.......................] stackoverflow.posts\t9.14MB/134MB (6.8%)\n2019-01-14T13:03:58.601+0000\t[#.......................] stackoverflow.posts\t9.14MB/134MB (6.8%)\n2019-01-14T13:04:01.603+0000\t[#.......................] stackoverflow.posts\t10.9MB/134MB (8.1%)\n2019-01-14T13:04:04.601+0000\t[#.......................] stackoverflow.posts\t10.9MB/134MB (8.1%)\n2019-01-14T13:04:07.601+0000\t[##......................] stackoverflow.posts\t12.5MB/134MB (9.4%)\n2019-01-14T13:04:10.601+0000\t[##......................] stackoverflow.posts\t12.5MB/134MB (9.4%)\n2019-01-14T13:04:13.601+0000\t[##......................] stackoverflow.posts\t14.1MB/134MB (10.5%)\n2019-01-14T13:04:16.601+0000\t[##......................] stackoverflow.posts\t14.1MB/134MB (10.5%)\n2019-01-14T13:04:19.601+0000\t[##......................] stackoverflow.posts\t15.8MB/134MB (11.8%)\n2019-01-14T13:04:22.603+0000\t[##......................] stackoverflow.posts\t15.8MB/134MB (11.8%)\n2019-01-14T13:04:25.601+0000\t[###.....................] stackoverflow.posts\t17.6MB/134MB (13.1%)\n2019-01-14T13:04:28.565+0000\t[###.....................] stackoverflow.posts\t17.6MB/134MB (13.1%)\n2019-01-14T13:04:31.565+0000\t[###.....................] stackoverflow.posts\t19.1MB/134MB (14.3%)\n2019-01-14T13:04:34.569+0000\t[###.....................] stackoverflow.posts\t20.6MB/134MB (15.4%)\n2019-01-14T13:04:37.567+0000\t[###.....................] stackoverflow.posts\t20.6MB/134MB (15.4%)\n2019-01-14T13:04:40.565+0000\t[####....................] stackoverflow.posts\t22.4MB/134MB (16.8%)\n2019-01-14T13:04:43.565+0000\t[####....................] stackoverflow.posts\t22.4MB/134MB (16.8%)\n2019-01-14T13:04:46.565+0000\t[####....................] stackoverflow.posts\t24.1MB/134MB (18.0%)\n2019-01-14T13:04:49.565+0000\t[####....................] stackoverflow.posts\t24.1MB/134MB (18.0%)\n2019-01-14T13:04:52.567+0000\t[####....................] stackoverflow.posts\t25.9MB/134MB (19.4%)\n2019-01-14T13:04:55.565+0000\t[####....................] stackoverflow.posts\t25.9MB/134MB (19.4%)\n2019-01-14T13:04:58.529+0000\t[####....................] stackoverflow.posts\t27.6MB/134MB (20.7%)\n2019-01-14T13:05:01.529+0000\t[####....................] stackoverflow.posts\t27.6MB/134MB (20.7%)\n2019-01-14T13:05:04.529+0000\t[#####...................] stackoverflow.posts\t29.4MB/134MB (22.0%)\n2019-01-14T13:05:07.530+0000\t[#####...................] 
stackoverflow.posts\t29.4MB/134MB (22.0%)\n2019-01-14T13:05:10.530+0000\t[#####...................] stackoverflow.posts\t29.4MB/134MB (22.0%)\n2019-01-14T13:05:13.529+0000\t[#####...................] stackoverflow.posts\t31.2MB/134MB (23.3%)\n2019-01-14T13:05:16.530+0000\t[#####...................] stackoverflow.posts\t31.2MB/134MB (23.3%)\n2019-01-14T13:05:19.530+0000\t[#####...................] stackoverflow.posts\t33.0MB/134MB (24.7%)\n2019-01-14T13:05:22.530+0000\t[#####...................] stackoverflow.posts\t33.0MB/134MB (24.7%)\n2019-01-14T13:05:25.530+0000\t[######..................] stackoverflow.posts\t34.9MB/134MB (26.1%)\n2019-01-14T13:05:28.493+0000\t[######..................] stackoverflow.posts\t34.9MB/134MB (26.1%)\n2019-01-14T13:05:31.493+0000\t[######..................] stackoverflow.posts\t36.2MB/134MB (27.1%)\n2019-01-14T13:05:34.494+0000\t[######..................] stackoverflow.posts\t36.2MB/134MB (27.1%)\n2019-01-14T13:05:38.034+0000\t[######..................] stackoverflow.posts\t37.8MB/134MB (28.3%)\n2019-01-14T13:05:40.493+0000\t[#######.................] stackoverflow.posts\t39.5MB/134MB (29.5%)\n2019-01-14T13:05:43.496+0000\t[#######.................] stackoverflow.posts\t39.5MB/134MB (29.5%)\n2019-01-14T13:05:46.493+0000\t[#######.................] stackoverflow.posts\t40.9MB/134MB (30.6%)\n2019-01-14T13:05:49.494+0000\t[#######.................] stackoverflow.posts\t40.9MB/134MB (30.6%)\n2019-01-14T13:05:52.493+0000\t[#######.................] stackoverflow.posts\t42.6MB/134MB (31.9%)\n2019-01-14T13:05:55.493+0000\t[#######.................] stackoverflow.posts\t42.6MB/134MB (31.9%)\n2019-01-14T13:05:58.458+0000\t[#######.................] stackoverflow.posts\t44.1MB/134MB (33.1%)\n2019-01-14T13:06:01.462+0000\t[#######.................] stackoverflow.posts\t44.1MB/134MB (33.1%)\n2019-01-14T13:06:04.457+0000\t[########................] stackoverflow.posts\t45.9MB/134MB (34.3%)\n2019-01-14T13:06:07.458+0000\t[########................] stackoverflow.posts\t45.9MB/134MB (34.3%)\n2019-01-14T13:06:10.460+0000\t[########................] stackoverflow.posts\t47.8MB/134MB (35.8%)\n2019-01-14T13:06:13.460+0000\t[########................] stackoverflow.posts\t47.8MB/134MB (35.8%)\n2019-01-14T13:06:16.461+0000\t[########................] stackoverflow.posts\t49.6MB/134MB (37.1%)\n2019-01-14T13:06:19.460+0000\t[########................] stackoverflow.posts\t49.6MB/134MB (37.1%)\n2019-01-14T13:06:22.459+0000\t[#########...............] stackoverflow.posts\t51.5MB/134MB (38.6%)\n2019-01-14T13:06:25.459+0000\t[#########...............] stackoverflow.posts\t51.5MB/134MB (38.6%)\n2019-01-14T13:06:28.422+0000\t[#########...............] stackoverflow.posts\t53.3MB/134MB (39.9%)\n2019-01-14T13:06:31.421+0000\t[#########...............] stackoverflow.posts\t53.3MB/134MB (39.9%)\n2019-01-14T13:06:34.423+0000\t[#########...............] stackoverflow.posts\t53.3MB/134MB (39.9%)\n2019-01-14T13:06:37.421+0000\t[#########...............] stackoverflow.posts\t55.0MB/134MB (41.2%)\n2019-01-14T13:06:40.422+0000\t[#########...............] stackoverflow.posts\t55.0MB/134MB (41.2%)\n2019-01-14T13:06:43.423+0000\t[##########..............] stackoverflow.posts\t56.9MB/134MB (42.6%)\n2019-01-14T13:06:46.427+0000\t[##########..............] stackoverflow.posts\t56.9MB/134MB (42.6%)\n2019-01-14T13:06:49.423+0000\t[##########..............] stackoverflow.posts\t58.7MB/134MB (43.9%)\n2019-01-14T13:06:52.422+0000\t[##########..............] 
stackoverflow.posts\t58.7MB/134MB (43.9%)\n2019-01-14T13:06:55.422+0000\t[##########..............] stackoverflow.posts\t60.5MB/134MB (45.3%)\n2019-01-14T13:06:58.386+0000\t[##########..............] stackoverflow.posts\t60.5MB/134MB (45.3%)\n2019-01-14T13:07:01.385+0000\t[###########.............] stackoverflow.posts\t62.3MB/134MB (46.7%)\n2019-01-14T13:07:04.390+0000\t[###########.............] stackoverflow.posts\t62.3MB/134MB (46.7%)\n2019-01-14T13:07:07.386+0000\t[###########.............] stackoverflow.posts\t64.2MB/134MB (48.1%)\n2019-01-14T13:07:10.388+0000\t[###########.............] stackoverflow.posts\t64.2MB/134MB (48.1%)\n2019-01-14T13:07:13.386+0000\t[###########.............] stackoverflow.posts\t66.1MB/134MB (49.5%)\n2019-01-14T13:07:16.388+0000\t[###########.............] stackoverflow.posts\t66.1MB/134MB (49.5%)\n2019-01-14T13:07:19.388+0000\t[############............] stackoverflow.posts\t67.7MB/134MB (50.7%)\n2019-01-14T13:07:22.386+0000\t[############............] stackoverflow.posts\t67.7MB/134MB (50.7%)\n2019-01-14T13:07:25.385+0000\t[############............] stackoverflow.posts\t67.7MB/134MB (50.7%)\n2019-01-14T13:07:28.351+0000\t[############............] stackoverflow.posts\t69.5MB/134MB (52.0%)\n2019-01-14T13:07:31.352+0000\t[############............] stackoverflow.posts\t69.5MB/134MB (52.0%)\n2019-01-14T13:07:34.349+0000\t[############............] stackoverflow.posts\t71.4MB/134MB (53.4%)\n2019-01-14T13:07:37.350+0000\t[############............] stackoverflow.posts\t71.4MB/134MB (53.4%)\n2019-01-14T13:07:40.351+0000\t[#############...........] stackoverflow.posts\t73.2MB/134MB (54.8%)\n2019-01-14T13:07:43.349+0000\t[#############...........] stackoverflow.posts\t73.2MB/134MB (54.8%)\n2019-01-14T13:07:46.350+0000\t[#############...........] stackoverflow.posts\t75.0MB/134MB (56.2%)\n2019-01-14T13:07:49.350+0000\t[#############...........] stackoverflow.posts\t75.0MB/134MB (56.2%)\n2019-01-14T13:07:52.354+0000\t[#############...........] stackoverflow.posts\t76.9MB/134MB (57.5%)\n2019-01-14T13:07:55.352+0000\t[#############...........] stackoverflow.posts\t76.9MB/134MB (57.5%)\n2019-01-14T13:07:58.314+0000\t[#############...........] stackoverflow.posts\t77.1MB/134MB (57.7%)\n2019-01-14T13:08:01.316+0000\t[##############..........] stackoverflow.posts\t78.8MB/134MB (59.0%)\n2019-01-14T13:08:04.313+0000\t[##############..........] stackoverflow.posts\t78.8MB/134MB (59.0%)\n2019-01-14T13:08:07.315+0000\t[##############..........] stackoverflow.posts\t80.6MB/134MB (60.4%)\n2019-01-14T13:08:10.910+0000\t[##############..........] stackoverflow.posts\t80.6MB/134MB (60.4%)\n2019-01-14T13:08:13.314+0000\t[##############..........] stackoverflow.posts\t82.5MB/134MB (61.8%)\n2019-01-14T13:08:16.314+0000\t[##############..........] stackoverflow.posts\t82.5MB/134MB (61.8%)\n2019-01-14T13:08:19.314+0000\t[###############.........] stackoverflow.posts\t84.2MB/134MB (63.0%)\n2019-01-14T13:08:22.315+0000\t[###############.........] stackoverflow.posts\t84.2MB/134MB (63.0%)\n2019-01-14T13:08:25.315+0000\t[###############.........] stackoverflow.posts\t85.8MB/134MB (64.2%)\n2019-01-14T13:08:28.278+0000\t[###############.........] stackoverflow.posts\t85.8MB/134MB (64.2%)\n2019-01-14T13:08:31.279+0000\t[###############.........] stackoverflow.posts\t87.6MB/134MB (65.6%)\n2019-01-14T13:08:34.279+0000\t[###############.........] stackoverflow.posts\t87.6MB/134MB (65.6%)\n2019-01-14T13:08:37.278+0000\t[################........] 
stackoverflow.posts\t89.5MB/134MB (67.0%)\n2019-01-14T13:08:40.279+0000\t[################........] stackoverflow.posts\t89.5MB/134MB (67.0%)\n2019-01-14T13:08:43.278+0000\t[################........] stackoverflow.posts\t91.6MB/134MB (68.6%)\n2019-01-14T13:08:46.279+0000\t[################........] stackoverflow.posts\t91.6MB/134MB (68.6%)\n2019-01-14T13:08:49.278+0000\t[################........] stackoverflow.posts\t91.6MB/134MB (68.6%)\n2019-01-14T13:08:52.280+0000\t[################........] stackoverflow.posts\t93.4MB/134MB (69.9%)\n2019-01-14T13:08:55.283+0000\t[################........] stackoverflow.posts\t93.4MB/134MB (69.9%)\n2019-01-14T13:08:58.242+0000\t[#################.......] stackoverflow.posts\t95.4MB/134MB (71.4%)\n2019-01-14T13:09:01.243+0000\t[#################.......] stackoverflow.posts\t95.4MB/134MB (71.4%)\n2019-01-14T13:09:04.242+0000\t[#################.......] stackoverflow.posts\t97.2MB/134MB (72.8%)\n2019-01-14T13:09:07.246+0000\t[#################.......] stackoverflow.posts\t97.2MB/134MB (72.8%)\n2019-01-14T13:09:10.243+0000\t[#################.......] stackoverflow.posts\t99.1MB/134MB (74.2%)\n2019-01-14T13:09:13.245+0000\t[#################.......] stackoverflow.posts\t99.1MB/134MB (74.2%)\n2019-01-14T13:09:16.249+0000\t[##################......] stackoverflow.posts\t101MB/134MB (75.5%)\n2019-01-14T13:09:19.243+0000\t[##################......] stackoverflow.posts\t101MB/134MB (75.5%)\n2019-01-14T13:09:22.247+0000\t[##################......] stackoverflow.posts\t103MB/134MB (76.8%)\n2019-01-14T13:09:25.245+0000\t[##################......] stackoverflow.posts\t103MB/134MB (76.8%)\n2019-01-14T13:09:28.206+0000\t[##################......] stackoverflow.posts\t103MB/134MB (76.8%)\n2019-01-14T13:09:31.209+0000\t[##################......] stackoverflow.posts\t104MB/134MB (78.2%)\n2019-01-14T13:09:34.209+0000\t[##################......] stackoverflow.posts\t104MB/134MB (78.2%)\n2019-01-14T13:09:37.208+0000\t[###################.....] stackoverflow.posts\t106MB/134MB (79.5%)\n2019-01-14T13:09:40.207+0000\t[###################.....] stackoverflow.posts\t106MB/134MB (79.5%)\n2019-01-14T13:09:43.207+0000\t[###################.....] stackoverflow.posts\t108MB/134MB (80.8%)\n2019-01-14T13:09:46.207+0000\t[###################.....] stackoverflow.posts\t108MB/134MB (80.8%)\n2019-01-14T13:09:49.206+0000\t[###################.....] stackoverflow.posts\t110MB/134MB (82.2%)\n2019-01-14T13:09:52.207+0000\t[###################.....] stackoverflow.posts\t110MB/134MB (82.2%)\n2019-01-14T13:09:55.206+0000\t[####################....] stackoverflow.posts\t112MB/134MB (83.5%)\n2019-01-14T13:09:58.171+0000\t[####################....] stackoverflow.posts\t112MB/134MB (83.5%)\n2019-01-14T13:10:01.171+0000\t[####################....] stackoverflow.posts\t113MB/134MB (84.9%)\n2019-01-14T13:10:04.172+0000\t[####################....] stackoverflow.posts\t113MB/134MB (84.9%)\n2019-01-14T13:10:07.173+0000\t[####################....] stackoverflow.posts\t115MB/134MB (86.3%)\n2019-01-14T13:10:10.174+0000\t[####################....] stackoverflow.posts\t115MB/134MB (86.3%)\n2019-01-14T13:10:13.170+0000\t[#####################...] stackoverflow.posts\t117MB/134MB (87.6%)\n2019-01-14T13:10:16.171+0000\t[#####################...] stackoverflow.posts\t117MB/134MB (87.6%)\n2019-01-14T13:10:19.173+0000\t[#####################...] stackoverflow.posts\t119MB/134MB (88.9%)\n2019-01-14T13:10:22.171+0000\t[#####################...] 
stackoverflow.posts\t119MB/134MB (88.9%)\n2019-01-14T13:10:25.170+0000\t[#####################...] stackoverflow.posts\t120MB/134MB (90.2%)\n2019-01-14T13:10:28.135+0000\t[#####################...] stackoverflow.posts\t120MB/134MB (90.2%)\n2019-01-14T13:10:31.172+0000\t[#####################...] stackoverflow.posts\t122MB/134MB (91.6%)\n2019-01-14T13:10:34.136+0000\t[#####################...] stackoverflow.posts\t122MB/134MB (91.6%)\n2019-01-14T13:10:37.135+0000\t[######################..] stackoverflow.posts\t124MB/134MB (93.0%)\n2019-01-14T13:10:40.138+0000\t[######################..] stackoverflow.posts\t124MB/134MB (93.0%)\n2019-01-14T13:10:43.136+0000\t[######################..] stackoverflow.posts\t126MB/134MB (94.3%)\n2019-01-14T13:10:46.136+0000\t[######################..] stackoverflow.posts\t126MB/134MB (94.3%)\n2019-01-14T13:10:49.135+0000\t[######################..] stackoverflow.posts\t128MB/134MB (95.6%)\n2019-01-14T13:10:52.135+0000\t[######################..] stackoverflow.posts\t128MB/134MB (95.6%)\n2019-01-14T13:10:55.136+0000\t[#######################.] stackoverflow.posts\t130MB/134MB (97.1%)\n2019-01-14T13:10:58.100+0000\t[#######################.] stackoverflow.posts\t130MB/134MB (97.1%)\n2019-01-14T13:11:01.099+0000\t[#######################.] stackoverflow.posts\t130MB/134MB (97.4%)\n2019-01-14T13:11:04.100+0000\t[#######################.] stackoverflow.posts\t131MB/134MB (98.4%)\n2019-01-14T13:11:07.100+0000\t[#######################.] stackoverflow.posts\t131MB/134MB (98.4%)\n2019-01-14T13:11:10.100+0000\t[#######################.] stackoverflow.posts\t133MB/134MB (99.7%)\n2019-01-14T13:11:13.103+0000\t[#######################.] stackoverflow.posts\t133MB/134MB (99.7%)\n2019-01-14T13:11:16.100+0000\t[########################] stackoverflow.posts\t134MB/134MB (100.0%)\n2019-01-14T13:11:16.322+0000\t[########################] stackoverflow.posts\t134MB/134MB (100.0%)\n2019-01-14T13:11:16.323+0000\timported 76278 documents\n"
],
[
"%%bash\nmongoimport --db stackoverflow --collection users --drop --type csv \\\n --headerline --host=ec2-3-82-61-111.compute-1.amazonaws.com --file ../Users.csv",
"2019-01-14T13:11:16.810+0000\tconnected to: ec2-3-82-61-111.compute-1.amazonaws.com\n2019-01-14T13:11:16.941+0000\tdropping: stackoverflow.users\n2019-01-14T13:11:19.407+0000\t[#.......................] stackoverflow.users\t620KB/9.52MB (6.4%)\n2019-01-14T13:11:22.405+0000\t[##......................] stackoverflow.users\t1.03MB/9.52MB (10.8%)\n2019-01-14T13:11:25.409+0000\t[###.....................] stackoverflow.users\t1.43MB/9.52MB (15.0%)\n2019-01-14T13:11:28.369+0000\t[#####...................] stackoverflow.users\t2.03MB/9.52MB (21.3%)\n2019-01-14T13:11:31.369+0000\t[######..................] stackoverflow.users\t2.71MB/9.52MB (28.4%)\n2019-01-14T13:11:34.370+0000\t[#######.................] stackoverflow.users\t3.15MB/9.52MB (33.1%)\n2019-01-14T13:11:37.370+0000\t[#########...............] stackoverflow.users\t3.73MB/9.52MB (39.2%)\n2019-01-14T13:11:40.371+0000\t[##########..............] stackoverflow.users\t4.10MB/9.52MB (43.1%)\n2019-01-14T13:11:43.371+0000\t[###########.............] stackoverflow.users\t4.67MB/9.52MB (49.1%)\n2019-01-14T13:11:46.369+0000\t[#############...........] stackoverflow.users\t5.25MB/9.52MB (55.2%)\n2019-01-14T13:11:49.369+0000\t[##############..........] stackoverflow.users\t5.80MB/9.52MB (60.9%)\n2019-01-14T13:11:52.369+0000\t[###############.........] stackoverflow.users\t6.17MB/9.52MB (64.8%)\n2019-01-14T13:11:55.369+0000\t[################........] stackoverflow.users\t6.70MB/9.52MB (70.4%)\n2019-01-14T13:11:58.335+0000\t[##################......] stackoverflow.users\t7.23MB/9.52MB (75.9%)\n2019-01-14T13:12:01.335+0000\t[###################.....] stackoverflow.users\t7.77MB/9.52MB (81.6%)\n2019-01-14T13:12:04.333+0000\t[####################....] stackoverflow.users\t8.17MB/9.52MB (85.8%)\n2019-01-14T13:12:07.333+0000\t[#####################...] stackoverflow.users\t8.65MB/9.52MB (90.8%)\n2019-01-14T13:12:10.335+0000\t[#######################.] stackoverflow.users\t9.17MB/9.52MB (96.3%)\n2019-01-14T13:12:12.886+0000\t[########################] stackoverflow.users\t9.52MB/9.52MB (100.0%)\n2019-01-14T13:12:12.886+0000\timported 49033 documents\n"
]
],
[
[
"### Creación de índices\n\nPara que el proceso map-reduce y de agregación funcione mejor, voy a crear índices sobre alguns atributos.",
"_____no_output_____"
]
],
[
[
"(\n db.posts.create_index([('Id', pymongo.HASHED)]),\n db.users.create_index([('Id', pymongo.HASHED)]),\n db.posts.create_index([('OwnerUserId', pymongo.HASHED)])\n)",
"_____no_output_____"
]
],
[
[
"## Introducción a tecnologías Serverless\n\nLas tecnologías Serverless son uno de los temas que más relevancia está cobrando en arquitectura de software. Este tipo de tecnologías plantean un modelo que permite que el desarrollador se centre en el desarrollo del código y se olvide de la infraestructura sobre la que se ejecuta.\n\nEste tipo de servicios han ido evolucionando para cubrir todas las necesidades de las aplicaciones que se despliegan en cloud actualmente. Aunque podemos entender como Serverless cualquier tipo de servicio en el que el usuario no debe preocuparse por la infraestructura subyacente, es decir, Backend as a Service (BaaS), es habitual, que Serverless se asocie con un concepto más innovador que obliga a renovar la lógica del código que ejecutamos en un servidor tradicional.\n\nHablamos de los servicios Functions as a Service (FaaS) en los que el código desarrollado se ejecuta en contenedores sin estado que son disparados por eventos. Estos contenedores son efímeros, es decir, desaparecen tras su ejecución.\n\nSi el uso de contenedores y micro-servicios ha cambiado la forma en la que la industria ha estado desarrollando el software tradicionalmente, el uso de este tipo de tecnologías promete ser la próxima revolución de este sector.",
"_____no_output_____"
],
[
"## Hello World en los principales servicios Serverless\n\n### AWS Lambda\n\nEntramos en nuestra [consola de AWS](https://console.aws.amazon.com). En la sección Services, hacemos click en Lambda para acceder a su sección. Seleccionamos Create a function y podemos indicar que tipo de proyecto queremos desplegar. En nuestro caso, seleccionaremos la opción desde cero para crear un sencillo \"Hello world\".\n\nA continuación, debemos poner un nombre a nuestra función, seleccionar el tipo de entorno donde ejecutaremos nuestro código y deberemos seleccionar en el campo Rol la opción Create a custom role. Nos abrirá una nueva ventana en la que podremos dejar los campos por defecto y confirmar la creación del rol. Una vez creada, volveremos a la ventana anterior y seleccionaremos el rol que acabamos de crear.\n\n\n\nComprobamos que todo esta como en la imagen y seleccionamos Crear la función. Una vez creada, entraremos en la sección de configuración de nuestra función. En el apartado de diseño, podemos configurar el tipo de eventos o desencadenadores que harán que nuestra función se ejecute. Como se trata de una función de prueba, no añadimos ninguno ya que realizaremos la prueba manualmente.\n\nEn la sección Código de la función vemos como tenemos disponible un editor de texto donde dotar la funcionalidad deseada a nuestra función. Dejamos este código para crear un Hello World básico que nos permita conocer la plataforma.\n\n~~~\nimport json\n\ndef lambda_handler(event, context):\n return {\n 'statusCode': 200,\n 'body': json.dumps('Hello from Lambda for BDGE!')\n }\n~~~\n\nExisten muchos más parámetros de configuración pero no he sido capaz de explorarlos con detenimiento por lo que no puedo comentar para qué sirven.\n\nEn la parte superior, nos encontramos con el boton Probar que nos permite simular un evento de prueba que lanzará nuestra función. Como no espera ningún parámetro, podemos eliminar los que incluye por defecto y darle un nombre a nuestro evento de prueba. Seleccionamos Crear y ya podemos ejecutar nuestra función volviendo a hacer click en el botón Probar.\n\n\n\n\n### Azure Functions\n\nEn el [portal de Azure](https://portal.azure.com/) seleccionamos Crear un recurso en el panel de la izquierda. En la sección Proceso (Compute), indicamos Function App. Damos un nobmre a nuestra función y el resto de parámetros, los dejamos por defecto. Entre ellos, se indican la subscripción desde donde nos cobrarán los gastos asociados a la misma, el grupo de recursos, el sistema operativo (seleccionar Windows puesto que si seleccionamos Linux es necesario configurar un contenedor y pierde la gracia del Serverless) y otras opciones sobre el consumo, el lugar de hospedaje y el entorno sobre el que ejecutar la función (seleccionamos Javascript ya que Python no está disponible). Con todo esto, damos en Crear para desplegar nuestra función.\n\n\n\nCuando nuestra función haya sido desplegada, nos aparece una notificación en nuestro portal. Haciendo click en Ir al recurso accedemos al panel de configuración de nuestra Function App que es una forma de agrupar varias Functions sobre un mismo dominio.\n\n\n\nUna vez aquí, en la barra lateral izquierda podemos seleccionar el botón + al lado de Funciones para añadir nuestra primera función.\nEn primer lugar debemos seleccionar una plantilla (básicamente el tipo de evento o desencadenador que invoca a nuestra función), en nuestro caso, seleccionamos HTTP Trigger para crear una función accesible con una solicitud HTTP. 
Damos un nombre a la función y damos click en Crear.\n\nLlegados a este punto, nos aparecerá un editor donde podemos añadir un código sencillo para crear nuestra primera función y damos click en Guardar.\n\n~~~\nmodule.exports = async function (context, req) {\n context.res = {\n status: 200,\n body: \"Hello from Azure Functions for BDGE!\"\n };\n};\n~~~\n\nA continuación, podemos hacer click en Ejecutar para probar nuestra función.\n\n\n\n\n### Google Cloud Functions.\n\nDesde [la consola de Google Cloud]( https://console.cloud.google.com/), en la barra lateral izquierda seleccionamos Compute > Cloud Functions. Clickamos en Crear función y nos aparece una ventana muy similar a la del resto de servicios. \n\nDamos un nombre a nuestra función, cambiamos el campo Tiempo de ejecución para utilizar Python y añadimos nuestro código de prueba. El resto de campos podemos dejarlos tal y como están. \n\n~~~\ndef hello_world(request):\n \treturn f'Hello form Google Cloud Functions for BDGE!'\n~~~\n\nNos quedamos con el valor del campo URL para poder probar nuestra función y hacemos click en Crear. El proceso de creación tarda unos segundos y ya podemos \"disparar\" nuestra función haciendo uso del enlace que nos copiamos anteriormente: \n\nhttps://us-central1-phrasal-truck-228910.cloudfunctions.net/HelloWorld\n\n\n",
"_____no_output_____"
],
[
"## Despliegue de código para consultas de la sesión 4 en AWS Lambda",
"_____no_output_____"
],
[
"En esta sección, nos centramos en tratar de desplegar una serie de funciones en AWS Lambda que contienen el código Python desarrollado en la sesión 4 para extraer los datos de RQ1, RQ2, RQ3 y RQ4 desde una base de datos Mongo.\n\nEl proceso de despliegue podría ser realizado de forma manual tal y como hemos visto en la sesión anterior. Sin embargo, se trataría de un proceso tedioso y repetitivo para la creación de distintas funciones. Además, nos encontraríamos con un problema por el uso de dependencias (aunque puede ser resuelto de forma manual, es un poco costoso).\n\nPor tanto, es habitual optar por una herramienta que permita automatizar este proceso de despliegue. Una de las más conocidas para gestionar distintos servicios es la conocida como [*serverless*](https://serverless.com). Para su instalación es necesario node y npm (que ya están instalados en la imagen que utilizamos).",
"_____no_output_____"
]
],
[
[
"!npm install -g serverless",
"\u001b[K\u001b[?25h/opt/conda/bin/serverless -> /opt/conda/lib/node_modules/serverless/bin/serverlesss\u001b[0m [email protected]\u001b[0m\u001b[K[K\n/opt/conda/bin/slss -> /opt/conda/lib/node_modules/serverless/bin/serverless\n/opt/conda/bin/sls -> /opt/conda/lib/node_modules/serverless/bin/serverless\n\u001b[K\u001b[?25h \u001b[27m\u001b[90m......\u001b[0m] / postinstall:concat-stream: \u001b[32minfo\u001b[0m \u001b[35mlifecycle\u001b[0m concat-stream@\u001b[0m\u001b[K[0m\u001b[K\n> [email protected] postinstall /opt/conda/lib/node_modules/serverless/node_modules/spawn-sync\n> node postinstall\n\n\n> [email protected] postinstall /opt/conda/lib/node_modules/serverless\n> node ./scripts/postinstall.js\n\n+ [email protected]\nadded 441 packages in 66.811s\n"
]
],
[
[
"A continuación, deberíamos crear una especie de proyecto con la herramienta mediante la orden *serverless create*. Sin embargo, nosotros ya hemos realizado este proceso y tenemos creado el fichero *serverless.yml* en el que se describe nuestro proyecto.\n\nEl contenido de este fichero permite describir cómo se realizará el despliegue de nuestro proyecto. La cantidad de opciones que permite configurar es muyt grande. Aquí incluimos los parámetros que hemos configurado para nuestro caso de uso.\n- **service**: nombre del proyecto o servicio\n- **provider**: plataforma sobre la que desplegamos nuestras funciones. Incluimos el nombre, el entorno sobre el que ejecutaremos nuestro código y el tiempo máximo que puede estar una función ejecutándose.\n- **functions**: listado con las funciones que desplegamos. En nuestro caso, uno por cada versión de las distintas consultas realizadas en la sesión 4. El parámetro handler indica el nombre de la función del código que se ejecutará (se indica como <nombre_fichero>:<funcion>)\n - Utilizando el parámetro events, podemos indicar una serie de eventos o desencadenadores para disparar la función. Existe una gran cantidad de opciones. Nosotros hemos utilizado la opción http que te permite generar un sencillo API REST para la ejecución de las funciones.\n- **plugins**: listado de plugins de serverless a utilizar. En nuestro caso incluimos, serverless-python-requirements, para que añada a las funciones todas las dependencias que indicamos el fichero requirements.txt.\n\nComo necesitamos utilizar el plugin serverless-python-requirements, hemos añadido un proyecto de node con el fichero package.json en el que podemos indicar las dependencias propias de nuestro proyecto, por tanto, podemos instalarlas con la ayuda de npm.",
"_____no_output_____"
]
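,
[
"The actual `handler.py` deployed to Lambda is not shown in this notebook. The following is a rough, hypothetical sketch of the general shape such a handler might take, based on the `statusCode`/`body` structure and the `data`/`elapsedTime` fields visible in the invocation output below; the MongoDB host handling is an assumption and the aggregation stages are omitted:\n\n~~~\n# Hypothetical sketch -- not the actual handler.py of this project\nimport json\nimport os\nimport timeit\n\nfrom pymongo import MongoClient\n\ndef rq1_agg(event, context):\n    # Assumed: the Mongo host is provided through an environment variable\n    client = MongoClient(os.environ.get('MONGO_HOST', 'localhost'))\n    db = client.stackoverflow\n    start = timeit.default_timer()\n    data = list(db.posts.aggregate([\n        # ... the aggregation stages for RQ1 would go here ...\n    ]))\n    elapsed = timeit.default_timer() - start\n    return {\n        'statusCode': 200,\n        'body': json.dumps({'data': data, 'elapsedTime': elapsed})\n    }\n~~~",
"_____no_output_____"
]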
],
[
[
"!npm install",
"\u001b[K\u001b[?25h\u001b[37;40mnpm\u001b[0m \u001b[0m\u001b[34;40mnotice\u001b[0m\u001b[35m\u001b[0m created a lockfile as package-lock.json. You should commit this file.\n\u001b[0madded 45 packages in 25.864s\n"
]
],
[
[
"Llegados a este punto, ya podemos utilizar serverless con nuestro proyecto. Otra de las opciones que ofrece serverless es la simulación de la invocación de las funciones en un entorno local. Podemos probar a ejecutar algunas de ellas.",
"_____no_output_____"
]
],
[
[
"!serverless invoke local --function rq1_agg",
"{\r\n \"statusCode\": 200,\r\n \"body\": \"{\\\"data\\\": [{\\\"_id\\\": 223, \\\"usersCount\\\": 1}, {\\\"_id\\\": 158, \\\"usersCount\\\": 1}, {\\\"_id\\\": 144, \\\"usersCount\\\": 1}, {\\\"_id\\\": 130, \\\"usersCount\\\": 1}, {\\\"_id\\\": 119, \\\"usersCount\\\": 2}, {\\\"_id\\\": 115, \\\"usersCount\\\": 1}, {\\\"_id\\\": 114, \\\"usersCount\\\": 1}, {\\\"_id\\\": 111, \\\"usersCount\\\": 1}, {\\\"_id\\\": 107, \\\"usersCount\\\": 1}, {\\\"_id\\\": 96, \\\"usersCount\\\": 1}, {\\\"_id\\\": 95, \\\"usersCount\\\": 1}, {\\\"_id\\\": 94, \\\"usersCount\\\": 1}, {\\\"_id\\\": 85, \\\"usersCount\\\": 1}, {\\\"_id\\\": 84, \\\"usersCount\\\": 1}, {\\\"_id\\\": 83, \\\"usersCount\\\": 2}, {\\\"_id\\\": 79, \\\"usersCount\\\": 1}, {\\\"_id\\\": 75, \\\"usersCount\\\": 1}, {\\\"_id\\\": 72, \\\"usersCount\\\": 1}, {\\\"_id\\\": 71, \\\"usersCount\\\": 1}, {\\\"_id\\\": 69, \\\"usersCount\\\": 1}, {\\\"_id\\\": 64, \\\"usersCount\\\": 2}, {\\\"_id\\\": 62, \\\"usersCount\\\": 2}, {\\\"_id\\\": 60, \\\"usersCount\\\": 1}, {\\\"_id\\\": 56, \\\"usersCount\\\": 1}, {\\\"_id\\\": 55, \\\"usersCount\\\": 2}, {\\\"_id\\\": 54, \\\"usersCount\\\": 1}, {\\\"_id\\\": 51, \\\"usersCount\\\": 3}, {\\\"_id\\\": 50, \\\"usersCount\\\": 4}, {\\\"_id\\\": 49, \\\"usersCount\\\": 2}, {\\\"_id\\\": 48, \\\"usersCount\\\": 5}, {\\\"_id\\\": 47, \\\"usersCount\\\": 4}, {\\\"_id\\\": 46, \\\"usersCount\\\": 3}, {\\\"_id\\\": 45, \\\"usersCount\\\": 3}, {\\\"_id\\\": 44, \\\"usersCount\\\": 2}, {\\\"_id\\\": 43, \\\"usersCount\\\": 4}, {\\\"_id\\\": 42, \\\"usersCount\\\": 1}, {\\\"_id\\\": 41, \\\"usersCount\\\": 3}, {\\\"_id\\\": 40, \\\"usersCount\\\": 2}, {\\\"_id\\\": 39, \\\"usersCount\\\": 4}, {\\\"_id\\\": 38, \\\"usersCount\\\": 2}, {\\\"_id\\\": 37, \\\"usersCount\\\": 6}, {\\\"_id\\\": 36, \\\"usersCount\\\": 3}, {\\\"_id\\\": 35, \\\"usersCount\\\": 4}, {\\\"_id\\\": 34, \\\"usersCount\\\": 2}, {\\\"_id\\\": 33, \\\"usersCount\\\": 3}, {\\\"_id\\\": 32, \\\"usersCount\\\": 4}, {\\\"_id\\\": 31, \\\"usersCount\\\": 4}, {\\\"_id\\\": 30, \\\"usersCount\\\": 9}, {\\\"_id\\\": 29, \\\"usersCount\\\": 11}, {\\\"_id\\\": 28, \\\"usersCount\\\": 6}, {\\\"_id\\\": 27, \\\"usersCount\\\": 5}, {\\\"_id\\\": 26, \\\"usersCount\\\": 12}, {\\\"_id\\\": 25, \\\"usersCount\\\": 10}, {\\\"_id\\\": 24, \\\"usersCount\\\": 16}, {\\\"_id\\\": 23, \\\"usersCount\\\": 9}, {\\\"_id\\\": 22, \\\"usersCount\\\": 16}, {\\\"_id\\\": 21, \\\"usersCount\\\": 15}, {\\\"_id\\\": 20, \\\"usersCount\\\": 20}, {\\\"_id\\\": 19, \\\"usersCount\\\": 19}, {\\\"_id\\\": 18, \\\"usersCount\\\": 13}, {\\\"_id\\\": 17, \\\"usersCount\\\": 20}, {\\\"_id\\\": 16, \\\"usersCount\\\": 24}, {\\\"_id\\\": 15, \\\"usersCount\\\": 35}, {\\\"_id\\\": 14, \\\"usersCount\\\": 37}, {\\\"_id\\\": 13, \\\"usersCount\\\": 34}, {\\\"_id\\\": 12, \\\"usersCount\\\": 43}, {\\\"_id\\\": 11, \\\"usersCount\\\": 49}, {\\\"_id\\\": 10, \\\"usersCount\\\": 79}, {\\\"_id\\\": 9, \\\"usersCount\\\": 76}, {\\\"_id\\\": 8, \\\"usersCount\\\": 105}, {\\\"_id\\\": 7, \\\"usersCount\\\": 142}, {\\\"_id\\\": 6, \\\"usersCount\\\": 184}, {\\\"_id\\\": 5, \\\"usersCount\\\": 295}, {\\\"_id\\\": 4, \\\"usersCount\\\": 407}, {\\\"_id\\\": 3, \\\"usersCount\\\": 706}, {\\\"_id\\\": 2, \\\"usersCount\\\": 1603}, {\\\"_id\\\": 1, \\\"usersCount\\\": 6840}, {\\\"_id\\\": 0, \\\"usersCount\\\": 38094}], \\\"elapsedTime\\\": 4.6297406000085175}\"\r\n}\r\n"
],
[
"!serverless invoke local --function rq2_agg",
"{\r\n \"statusCode\": 200,\r\n \"body\": \"{\\\"data\\\": [{\\\"_id\\\": 2035, \\\"usersCount\\\": 1}, {\\\"_id\\\": 729, \\\"usersCount\\\": 1}, {\\\"_id\\\": 669, \\\"usersCount\\\": 1}, {\\\"_id\\\": 542, \\\"usersCount\\\": 1}, {\\\"_id\\\": 469, \\\"usersCount\\\": 1}, {\\\"_id\\\": 451, \\\"usersCount\\\": 1}, {\\\"_id\\\": 411, \\\"usersCount\\\": 1}, {\\\"_id\\\": 381, \\\"usersCount\\\": 1}, {\\\"_id\\\": 373, \\\"usersCount\\\": 1}, {\\\"_id\\\": 353, \\\"usersCount\\\": 1}, {\\\"_id\\\": 352, \\\"usersCount\\\": 1}, {\\\"_id\\\": 343, \\\"usersCount\\\": 1}, {\\\"_id\\\": 329, \\\"usersCount\\\": 1}, {\\\"_id\\\": 327, \\\"usersCount\\\": 1}, {\\\"_id\\\": 283, \\\"usersCount\\\": 1}, {\\\"_id\\\": 266, \\\"usersCount\\\": 1}, {\\\"_id\\\": 265, \\\"usersCount\\\": 1}, {\\\"_id\\\": 262, \\\"usersCount\\\": 1}, {\\\"_id\\\": 259, \\\"usersCount\\\": 1}, {\\\"_id\\\": 258, \\\"usersCount\\\": 1}, {\\\"_id\\\": 250, \\\"usersCount\\\": 1}, {\\\"_id\\\": 246, \\\"usersCount\\\": 2}, {\\\"_id\\\": 227, \\\"usersCount\\\": 2}, {\\\"_id\\\": 224, \\\"usersCount\\\": 1}, {\\\"_id\\\": 210, \\\"usersCount\\\": 1}, {\\\"_id\\\": 208, \\\"usersCount\\\": 1}, {\\\"_id\\\": 204, \\\"usersCount\\\": 1}, {\\\"_id\\\": 200, \\\"usersCount\\\": 1}, {\\\"_id\\\": 190, \\\"usersCount\\\": 1}, {\\\"_id\\\": 184, \\\"usersCount\\\": 1}, {\\\"_id\\\": 182, \\\"usersCount\\\": 1}, {\\\"_id\\\": 168, \\\"usersCount\\\": 1}, {\\\"_id\\\": 161, \\\"usersCount\\\": 1}, {\\\"_id\\\": 152, \\\"usersCount\\\": 1}, {\\\"_id\\\": 149, \\\"usersCount\\\": 1}, {\\\"_id\\\": 148, \\\"usersCount\\\": 1}, {\\\"_id\\\": 139, \\\"usersCount\\\": 1}, {\\\"_id\\\": 135, \\\"usersCount\\\": 1}, {\\\"_id\\\": 127, \\\"usersCount\\\": 1}, {\\\"_id\\\": 126, \\\"usersCount\\\": 1}, {\\\"_id\\\": 125, \\\"usersCount\\\": 1}, {\\\"_id\\\": 118, \\\"usersCount\\\": 1}, {\\\"_id\\\": 117, \\\"usersCount\\\": 1}, {\\\"_id\\\": 114, \\\"usersCount\\\": 2}, {\\\"_id\\\": 112, \\\"usersCount\\\": 2}, {\\\"_id\\\": 110, \\\"usersCount\\\": 2}, {\\\"_id\\\": 109, \\\"usersCount\\\": 1}, {\\\"_id\\\": 108, \\\"usersCount\\\": 1}, {\\\"_id\\\": 107, \\\"usersCount\\\": 1}, {\\\"_id\\\": 105, \\\"usersCount\\\": 2}, {\\\"_id\\\": 104, \\\"usersCount\\\": 2}, {\\\"_id\\\": 102, \\\"usersCount\\\": 1}, {\\\"_id\\\": 100, \\\"usersCount\\\": 1}, {\\\"_id\\\": 98, \\\"usersCount\\\": 1}, {\\\"_id\\\": 97, \\\"usersCount\\\": 3}, {\\\"_id\\\": 93, \\\"usersCount\\\": 2}, {\\\"_id\\\": 90, \\\"usersCount\\\": 1}, {\\\"_id\\\": 89, \\\"usersCount\\\": 2}, {\\\"_id\\\": 88, \\\"usersCount\\\": 1}, {\\\"_id\\\": 86, \\\"usersCount\\\": 1}, {\\\"_id\\\": 84, \\\"usersCount\\\": 2}, {\\\"_id\\\": 83, \\\"usersCount\\\": 1}, {\\\"_id\\\": 82, \\\"usersCount\\\": 1}, {\\\"_id\\\": 81, \\\"usersCount\\\": 4}, {\\\"_id\\\": 80, \\\"usersCount\\\": 2}, {\\\"_id\\\": 78, \\\"usersCount\\\": 2}, {\\\"_id\\\": 77, \\\"usersCount\\\": 2}, {\\\"_id\\\": 76, \\\"usersCount\\\": 1}, {\\\"_id\\\": 75, \\\"usersCount\\\": 2}, {\\\"_id\\\": 74, \\\"usersCount\\\": 1}, {\\\"_id\\\": 72, \\\"usersCount\\\": 1}, {\\\"_id\\\": 70, \\\"usersCount\\\": 1}, {\\\"_id\\\": 69, \\\"usersCount\\\": 3}, {\\\"_id\\\": 68, \\\"usersCount\\\": 1}, {\\\"_id\\\": 66, \\\"usersCount\\\": 4}, {\\\"_id\\\": 65, \\\"usersCount\\\": 1}, {\\\"_id\\\": 64, \\\"usersCount\\\": 2}, {\\\"_id\\\": 62, \\\"usersCount\\\": 1}, {\\\"_id\\\": 57, \\\"usersCount\\\": 2}, {\\\"_id\\\": 56, \\\"usersCount\\\": 2}, {\\\"_id\\\": 55, \\\"usersCount\\\": 4}, {\\\"_id\\\": 54, \\\"usersCount\\\": 
1}, {\\\"_id\\\": 53, \\\"usersCount\\\": 7}, {\\\"_id\\\": 52, \\\"usersCount\\\": 3}, {\\\"_id\\\": 51, \\\"usersCount\\\": 5}, {\\\"_id\\\": 50, \\\"usersCount\\\": 3}, {\\\"_id\\\": 49, \\\"usersCount\\\": 2}, {\\\"_id\\\": 48, \\\"usersCount\\\": 6}, {\\\"_id\\\": 47, \\\"usersCount\\\": 2}, {\\\"_id\\\": 46, \\\"usersCount\\\": 8}, {\\\"_id\\\": 45, \\\"usersCount\\\": 1}, {\\\"_id\\\": 44, \\\"usersCount\\\": 2}, {\\\"_id\\\": 43, \\\"usersCount\\\": 3}, {\\\"_id\\\": 42, \\\"usersCount\\\": 5}, {\\\"_id\\\": 41, \\\"usersCount\\\": 4}, {\\\"_id\\\": 40, \\\"usersCount\\\": 2}, {\\\"_id\\\": 39, \\\"usersCount\\\": 3}, {\\\"_id\\\": 38, \\\"usersCount\\\": 5}, {\\\"_id\\\": 37, \\\"usersCount\\\": 3}, {\\\"_id\\\": 36, \\\"usersCount\\\": 4}, {\\\"_id\\\": 35, \\\"usersCount\\\": 9}, {\\\"_id\\\": 34, \\\"usersCount\\\": 2}, {\\\"_id\\\": 33, \\\"usersCount\\\": 6}, {\\\"_id\\\": 32, \\\"usersCount\\\": 11}, {\\\"_id\\\": 31, \\\"usersCount\\\": 4}, {\\\"_id\\\": 30, \\\"usersCount\\\": 8}, {\\\"_id\\\": 29, \\\"usersCount\\\": 8}, {\\\"_id\\\": 28, \\\"usersCount\\\": 7}, {\\\"_id\\\": 27, \\\"usersCount\\\": 3}, {\\\"_id\\\": 26, \\\"usersCount\\\": 10}, {\\\"_id\\\": 25, \\\"usersCount\\\": 8}, {\\\"_id\\\": 24, \\\"usersCount\\\": 8}, {\\\"_id\\\": 23, \\\"usersCount\\\": 10}, {\\\"_id\\\": 22, \\\"usersCount\\\": 13}, {\\\"_id\\\": 21, \\\"usersCount\\\": 17}, {\\\"_id\\\": 20, \\\"usersCount\\\": 11}, {\\\"_id\\\": 19, \\\"usersCount\\\": 19}, {\\\"_id\\\": 18, \\\"usersCount\\\": 23}, {\\\"_id\\\": 17, \\\"usersCount\\\": 27}, {\\\"_id\\\": 16, \\\"usersCount\\\": 19}, {\\\"_id\\\": 15, \\\"usersCount\\\": 18}, {\\\"_id\\\": 14, \\\"usersCount\\\": 20}, {\\\"_id\\\": 13, \\\"usersCount\\\": 32}, {\\\"_id\\\": 12, \\\"usersCount\\\": 31}, {\\\"_id\\\": 11, \\\"usersCount\\\": 41}, {\\\"_id\\\": 10, \\\"usersCount\\\": 51}, {\\\"_id\\\": 9, \\\"usersCount\\\": 81}, {\\\"_id\\\": 8, \\\"usersCount\\\": 71}, {\\\"_id\\\": 7, \\\"usersCount\\\": 108}, {\\\"_id\\\": 6, \\\"usersCount\\\": 145}, {\\\"_id\\\": 5, \\\"usersCount\\\": 194}, {\\\"_id\\\": 4, \\\"usersCount\\\": 263}, {\\\"_id\\\": 3, \\\"usersCount\\\": 450}, {\\\"_id\\\": 2, \\\"usersCount\\\": 998}, {\\\"_id\\\": 1, \\\"usersCount\\\": 3358}, {\\\"_id\\\": 0, \\\"usersCount\\\": 42769}], \\\"elapsedTime\\\": 4.3762929000658914}\"\r\n}\r\n"
]
],
[
[
"Llegados a este punto, lo interesante sería realizar el despliegue en AWS Lambda. Para ello necesitaremos unas credenciales que han de ser incluidas en el fichero docker-compose.yml para que puedan ser utilizadas como variables de entorno. Para su creación debemos:\n\n1. Hacer login en nuestra cuenta de AWS e ir a la sección Identity & Access Management (IAM).\n2. Seleccionar Users y después Add user. Aquí debemos dar un nombre a nuestro usuario (es buena práctica darle un nombre representantivo del servicio que lo va a utilizar, por ejemplo *serverless-admin*). Debemos seleccionar la opción que genera las credenciales, Enable Programmatic access, y seleccionamos Next. En la página de Permissions, seleccionamos Attach existing policies directly y buscamos AdministratorAccess. Hacemos click en Next: Review, revisamos todo y hacemos click en Create User.\n3. A continuación, se nos listará los usuarios que tenemos y podemos copiar el API Key y el Secret que necesitamos como credenciales.\n\nA partir de este momento, hacer el proceso de despliegue de nuestras funciones es tan sencillo como:",
"_____no_output_____"
]
],
[
[
"!serverless deploy",
"Serverless: Generated requirements from /home/jovyan/bdge/trabajo/requirements.txt in /home/jovyan/bdge/trabajo/.serverless/requirements.txt...\nServerless: Installing requirements from /home/jovyan/bdge/trabajo/.serverless/requirements/requirements.txt ...\nServerless: Packaging service...\nServerless: Excluding development dependencies...\nServerless: Injecting required Python packages to package...\nServerless: WARNING: Function rq1_agg has timeout of 60 seconds, however, it's attached to API Gateway so it's automatically limited to 30 seconds.\nServerless: WARNING: Function rq1_mapred has timeout of 60 seconds, however, it's attached to API Gateway so it's automatically limited to 30 seconds.\nServerless: WARNING: Function rq2_agg has timeout of 60 seconds, however, it's attached to API Gateway so it's automatically limited to 30 seconds.\nServerless: WARNING: Function rq2_mapred has timeout of 60 seconds, however, it's attached to API Gateway so it's automatically limited to 30 seconds.\nServerless: WARNING: Function rq3_agg has timeout of 60 seconds, however, it's attached to API Gateway so it's automatically limited to 30 seconds.\nServerless: WARNING: Function rq3_mapred has timeout of 60 seconds, however, it's attached to API Gateway so it's automatically limited to 30 seconds.\nServerless: WARNING: Function rq4_agg has timeout of 60 seconds, however, it's attached to API Gateway so it's automatically limited to 30 seconds.\nServerless: Uploading CloudFormation file to S3...\nServerless: Uploading artifacts...\nServerless: Uploading service .zip file to S3 (3.12 MB)...\nServerless: Validating template...\nServerless: Updating Stack...\nServerless: Checking Stack update progress...\n..................................................\nServerless: Stack update finished...\nService Information\nservice: aws-lambda\nstage: dev\nregion: us-east-1\nstack: aws-lambda-dev\napi keys:\n None\nendpoints:\n GET - https://na2dey56vh.execute-api.us-east-1.amazonaws.com/dev/rq1_agg\n GET - https://na2dey56vh.execute-api.us-east-1.amazonaws.com/dev/rq1_mapred\n GET - https://na2dey56vh.execute-api.us-east-1.amazonaws.com/dev/rq2_agg\n GET - https://na2dey56vh.execute-api.us-east-1.amazonaws.com/dev/rq2_mapred\n GET - https://na2dey56vh.execute-api.us-east-1.amazonaws.com/dev/rq3_agg\n GET - https://na2dey56vh.execute-api.us-east-1.amazonaws.com/dev/rq3_mapred\n GET - https://na2dey56vh.execute-api.us-east-1.amazonaws.com/dev/rq4_agg\nfunctions:\n rq1_agg: aws-lambda-dev-rq1_agg\n rq1_mapred: aws-lambda-dev-rq1_mapred\n rq2_agg: aws-lambda-dev-rq2_agg\n rq2_mapred: aws-lambda-dev-rq2_mapred\n rq3_agg: aws-lambda-dev-rq3_agg\n rq3_mapred: aws-lambda-dev-rq3_mapred\n rq4_agg: aws-lambda-dev-rq4_agg\nlayers:\n None\nServerless: Removing old service artifacts from S3...\n"
]
],
[
[
"Como podemos ver, nos indica los distintos entrypoint a los que podemos acceder para ejecutar las funciones. No obstante, también podemos utilizar serverless para ejecutarlas de forma remota.",
"_____no_output_____"
]
],
[
[
"!serverless invoke --function rq1_agg",
"{\n \"statusCode\": 200,\n \"body\": \"{\\\"data\\\": [{\\\"_id\\\": 223, \\\"usersCount\\\": 1}, {\\\"_id\\\": 158, \\\"usersCount\\\": 1}, {\\\"_id\\\": 144, \\\"usersCount\\\": 1}, {\\\"_id\\\": 130, \\\"usersCount\\\": 1}, {\\\"_id\\\": 119, \\\"usersCount\\\": 2}, {\\\"_id\\\": 115, \\\"usersCount\\\": 1}, {\\\"_id\\\": 114, \\\"usersCount\\\": 1}, {\\\"_id\\\": 111, \\\"usersCount\\\": 1}, {\\\"_id\\\": 107, \\\"usersCount\\\": 1}, {\\\"_id\\\": 96, \\\"usersCount\\\": 1}, {\\\"_id\\\": 95, \\\"usersCount\\\": 1}, {\\\"_id\\\": 94, \\\"usersCount\\\": 1}, {\\\"_id\\\": 85, \\\"usersCount\\\": 1}, {\\\"_id\\\": 84, \\\"usersCount\\\": 1}, {\\\"_id\\\": 83, \\\"usersCount\\\": 2}, {\\\"_id\\\": 79, \\\"usersCount\\\": 1}, {\\\"_id\\\": 75, \\\"usersCount\\\": 1}, {\\\"_id\\\": 72, \\\"usersCount\\\": 1}, {\\\"_id\\\": 71, \\\"usersCount\\\": 1}, {\\\"_id\\\": 69, \\\"usersCount\\\": 1}, {\\\"_id\\\": 64, \\\"usersCount\\\": 2}, {\\\"_id\\\": 62, \\\"usersCount\\\": 2}, {\\\"_id\\\": 60, \\\"usersCount\\\": 1}, {\\\"_id\\\": 56, \\\"usersCount\\\": 1}, {\\\"_id\\\": 55, \\\"usersCount\\\": 2}, {\\\"_id\\\": 54, \\\"usersCount\\\": 1}, {\\\"_id\\\": 51, \\\"usersCount\\\": 3}, {\\\"_id\\\": 50, \\\"usersCount\\\": 4}, {\\\"_id\\\": 49, \\\"usersCount\\\": 2}, {\\\"_id\\\": 48, \\\"usersCount\\\": 5}, {\\\"_id\\\": 47, \\\"usersCount\\\": 4}, {\\\"_id\\\": 46, \\\"usersCount\\\": 3}, {\\\"_id\\\": 45, \\\"usersCount\\\": 3}, {\\\"_id\\\": 44, \\\"usersCount\\\": 2}, {\\\"_id\\\": 43, \\\"usersCount\\\": 4}, {\\\"_id\\\": 42, \\\"usersCount\\\": 1}, {\\\"_id\\\": 41, \\\"usersCount\\\": 3}, {\\\"_id\\\": 40, \\\"usersCount\\\": 2}, {\\\"_id\\\": 39, \\\"usersCount\\\": 4}, {\\\"_id\\\": 38, \\\"usersCount\\\": 2}, {\\\"_id\\\": 37, \\\"usersCount\\\": 6}, {\\\"_id\\\": 36, \\\"usersCount\\\": 3}, {\\\"_id\\\": 35, \\\"usersCount\\\": 4}, {\\\"_id\\\": 34, \\\"usersCount\\\": 2}, {\\\"_id\\\": 33, \\\"usersCount\\\": 3}, {\\\"_id\\\": 32, \\\"usersCount\\\": 4}, {\\\"_id\\\": 31, \\\"usersCount\\\": 4}, {\\\"_id\\\": 30, \\\"usersCount\\\": 9}, {\\\"_id\\\": 29, \\\"usersCount\\\": 11}, {\\\"_id\\\": 28, \\\"usersCount\\\": 6}, {\\\"_id\\\": 27, \\\"usersCount\\\": 5}, {\\\"_id\\\": 26, \\\"usersCount\\\": 12}, {\\\"_id\\\": 25, \\\"usersCount\\\": 10}, {\\\"_id\\\": 24, \\\"usersCount\\\": 16}, {\\\"_id\\\": 23, \\\"usersCount\\\": 9}, {\\\"_id\\\": 22, \\\"usersCount\\\": 16}, {\\\"_id\\\": 21, \\\"usersCount\\\": 15}, {\\\"_id\\\": 20, \\\"usersCount\\\": 20}, {\\\"_id\\\": 19, \\\"usersCount\\\": 19}, {\\\"_id\\\": 18, \\\"usersCount\\\": 13}, {\\\"_id\\\": 17, \\\"usersCount\\\": 20}, {\\\"_id\\\": 16, \\\"usersCount\\\": 24}, {\\\"_id\\\": 15, \\\"usersCount\\\": 35}, {\\\"_id\\\": 14, \\\"usersCount\\\": 37}, {\\\"_id\\\": 13, \\\"usersCount\\\": 34}, {\\\"_id\\\": 12, \\\"usersCount\\\": 43}, {\\\"_id\\\": 11, \\\"usersCount\\\": 49}, {\\\"_id\\\": 10, \\\"usersCount\\\": 79}, {\\\"_id\\\": 9, \\\"usersCount\\\": 76}, {\\\"_id\\\": 8, \\\"usersCount\\\": 105}, {\\\"_id\\\": 7, \\\"usersCount\\\": 142}, {\\\"_id\\\": 6, \\\"usersCount\\\": 184}, {\\\"_id\\\": 5, \\\"usersCount\\\": 295}, {\\\"_id\\\": 4, \\\"usersCount\\\": 407}, {\\\"_id\\\": 3, \\\"usersCount\\\": 706}, {\\\"_id\\\": 2, \\\"usersCount\\\": 1603}, {\\\"_id\\\": 1, \\\"usersCount\\\": 6840}, {\\\"_id\\\": 0, \\\"usersCount\\\": 38094}], \\\"elapsedTime\\\": 3.6145777290003025}\"\n}\n"
]
],
[
[
"A partir de aquí, vamos a replicar las obtención de las gráficas de las consultas de la sesión 4 con los datos que podemos obtener de la ejecución de nuestras funciones AWS Lambda.",
"_____no_output_____"
],
[
"\n### RQ1\n#### Framework de agregación",
"_____no_output_____"
]
],
[
[
"rq1_agg_req = requests.get(\"https://na2dey56vh.execute-api.us-east-1.amazonaws.com/dev/rq1_agg\")\ndata = rq1_agg_req.json()['data']\n\ndf = pd.DataFrame(list(data))\ndf = df[df['_id']<40] # to show a similar plot\n\nplt.bar(df['_id'], df.usersCount, log=True, color=\"blue\", width=0.3)\nplt.plot(df['_id'], df.usersCount, color=\"red\")\n\nplt.xlabel(\"Number of posted questions\")\nplt.ylabel(\"Number of developers\")\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Map-Reduce",
"_____no_output_____"
]
],
[
[
"rq1_mapred_req = requests.get(\"https://na2dey56vh.execute-api.us-east-1.amazonaws.com/dev/rq1_mapred\")\ndata = rq1_mapred_req.json()['data']\n\ndf = pd.DataFrame(list(data))\n\ndf = pd.concat([df.drop('value', axis=1), pd.DataFrame(df['value'].tolist())], axis=1)\n\ndf = df.groupby([0]).count().head(n=40)\n\nplt.bar(df.index, df['_id'], log=True, color=\"blue\", width=0.3)\nplt.plot(df.index, df['_id'], color=\"red\")\n\nplt.xlabel(\"Number of posted questions\")\nplt.ylabel(\"Number of developers\")\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"### RQ2\n#### Framework de agregación",
"_____no_output_____"
]
],
[
[
"rq2_agg_req = requests.get(\"https://na2dey56vh.execute-api.us-east-1.amazonaws.com/dev/rq2_agg\")\ndata = rq2_agg_req.json()['data']\n\ndf = pd.DataFrame(data)\ndf = df[df['_id']<40] # to show a similar plot\n\nplt.bar(df['_id'], df.usersCount, log=True, color=\"blue\", width=0.3)\nplt.plot(df['_id'], df.usersCount, color=\"red\")\n\nplt.xlabel(\"Number of posted answers\")\nplt.ylabel(\"Number of developers\")\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Map-Reduce",
"_____no_output_____"
]
],
[
[
"rq2_mapred_req = requests.get(\"https://na2dey56vh.execute-api.us-east-1.amazonaws.com/dev/rq2_mapred\")\ndata = rq2_mapred_req.json()['data']\n\ndf = pd.DataFrame(data)\n\ndf = pd.concat([df.drop('value', axis=1), pd.DataFrame(df['value'].tolist())], axis=1)\n\ndf = df.groupby([0]).count().head(n=40)\n\nplt.bar(df.index, df['_id'], log=True, color=\"blue\", width=0.3)\nplt.plot(df.index, df['_id'], color=\"red\")\n\nplt.xlabel(\"Number of posted answers\")\nplt.ylabel(\"Number of developers\")\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"### RQ3\n#### Framework de agregación",
"_____no_output_____"
]
],
[
[
"rq3_agg_req = requests.get(\"https://na2dey56vh.execute-api.us-east-1.amazonaws.com/dev/rq3_agg\")\ndata = rq3_agg_req.json()['data']\n\ndf = pd.DataFrame(data)\ndf['total'] = df.ansCount+df.postsCount\ndf['ansRate'] = df.ansCount/df.total*100\n\ndf_dropna = df.dropna()\n\nplt.hist(df_dropna.ansRate, color = 'blue')\nplt.gca().invert_xaxis()\n\nplt.xlabel(\"Percentage of posts that are answers\")\nplt.ylabel(\"Number of developers\")\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Map-Reduce",
"_____no_output_____"
]
],
[
[
"rq3_mapred_req = requests.get(\"https://na2dey56vh.execute-api.us-east-1.amazonaws.com/dev/rq3_mapred\")\ndata = rq3_mapred_req.json()['data']\n\ndf = pd.DataFrame(data)\ndf = pd.concat([df.drop('value', axis=1), pd.DataFrame(df['value'].tolist())], axis=1)\n\ndf['total'] = df.answers+df.questions\ndf['ansRate'] = df.answers/df.total*100\n\ndf_dropna = df.dropna()\n\nplt.hist(df_dropna.ansRate, color = 'blue')\nplt.gca().invert_xaxis()\n\nplt.xlabel(\"Percentage of posts that are answers\")\nplt.ylabel(\"Number of developers\")\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"### RQ4\n#### Framework de agregación",
"_____no_output_____"
]
],
[
[
"rq4_agg_req = requests.get(\"https://na2dey56vh.execute-api.us-east-1.amazonaws.com/dev/rq4_agg\")\ndata = rq4_agg_req.json()['data']\ndata",
"_____no_output_____"
]
],
[
[
"## Replicación de las consultas de la sesión 4 desde el notebook\n\nUn ejercicio interesante, consiste en comparar los tiempos de ejecución de las consultas que realizamos en la sesión 4 accediendo desde este mismo notebook al servidor Mongo desplegado, con los tiempos de ejecución de las consultas en los servidores de AWS.\n\nPara poder realizar esta comparativa, es necesario replicar en este notebook las distintas consultas en forma de funciones que utilizaremos más adelante.",
"_____no_output_____"
]
],
[
[
"client = MongoClient(\"ec2-3-82-61-111.compute-1.amazonaws.com\",27017)\nclient\n\ndb = client.stackoverflow\ndb = client['stackoverflow']\ndb",
"_____no_output_____"
]
],
[
[
"### RQ1\n#### Framework de agregación",
"_____no_output_____"
]
],
[
[
"def rq1_agg ():\n dataRQ1 = db.users.aggregate( [\n {'$lookup': {\n 'from': 'posts',\n 'localField' : 'Id',\n 'foreignField' : 'OwnerUserId',\n 'as': 'posts'}\n },\n {'$project' : {\n 'Id' : True,\n 'posts': {\n '$filter' : {\n 'input' : '$posts',\n 'as' : 'post',\n 'cond' : { '$eq': ['$$post.PostTypeId', 1] }\n }}\n }},\n {'$project' : {\n 'Id' : True,\n 'postsCount': { '$size' : '$posts'}\n }},\n {'$group' : {\n '_id' : '$postsCount',\n 'usersCount': { '$sum' : 1 }\n }},\n {'$sort' : { '_id' : -1 } }\n ])\n data = list(dataRQ1)\n return data",
"_____no_output_____"
]
],
[
[
"#### Map-Reduce",
"_____no_output_____"
]
],
[
[
"def rq1_mapred ():\n rq1_map = Code(\"\"\"\n function () {\n if (this.PostTypeId == 1) {\n emit(this.OwnerUserId, 1);\n }\n }\n \"\"\")\n\n rq1_reduce = Code(\n '''\n function (key, values)\n {\n return Array.sum(values);\n }\n ''')\n\n db.posts.map_reduce(rq1_map, rq1_reduce, out='rq1')\n\n rq1_map2 = Code(\"\"\"\n function () {\n emit(this.Id, 0);\n }\n \"\"\")\n\n rq1_reduce2 = Code(\"\"\"\n function (key, values) {\n return Array.sum(values);\n }\n \"\"\")\n\n db.users.map_reduce(rq1_map2, rq1_reduce2, out={'reduce' : 'rq1'})\n\n data = list(db.rq1.find())\n return data",
"_____no_output_____"
]
],
[
[
"### RQ2\n#### Framework de agregación",
"_____no_output_____"
]
],
[
[
"def rq2_agg(): \n dataRQ2 = db.users.aggregate( [\n {'$lookup': {\n 'from': 'posts',\n 'localField' : 'Id',\n 'foreignField' : 'OwnerUserId',\n 'as': 'posts'}\n },\n {'$project' : {\n 'Id' : True,\n 'answers': {\n '$filter' : {\n 'input' : '$posts',\n 'as' : 'post',\n 'cond' : { '$eq': ['$$post.PostTypeId', 2] }\n }}\n }},\n {'$project' : {\n 'Id' : True,\n 'ansCount': { '$size' : '$answers'}\n }},\n {'$group' : {\n '_id' : '$ansCount',\n 'usersCount': { '$sum' : 1 }\n }},\n {'$sort' : { '_id' : -1 } }\n ])\n data = list(dataRQ2)\n return data",
"_____no_output_____"
]
],
[
[
"#### Map-Reduce",
"_____no_output_____"
]
],
[
[
"def rq2_mapred():\n rq2_map = Code(\"\"\"\n function () {\n if (this.PostTypeId == 2) {\n emit(this.OwnerUserId, 1);\n }\n }\n \"\"\")\n\n rq2_reduce = Code(\n '''\n function (key, values)\n {\n return Array.sum(values);\n }\n ''')\n\n db.posts.map_reduce(rq2_map, rq2_reduce, out='rq2')\n\n rq2_map2 = Code(\"\"\"\n function () {\n emit(this.Id, 0);\n }\n \"\"\")\n\n rq2_reduce2 = Code(\"\"\"\n function (key, values) {\n return Array.sum(values);\n }\n \"\"\")\n\n db.users.map_reduce(rq2_map2, rq2_reduce2, out={'reduce' : 'rq2'})\n\n rq2MR = db.rq2.find()\n data = list(rq2MR)\n return data",
"_____no_output_____"
]
],
[
[
"### RQ3\n#### Framework de agregación",
"_____no_output_____"
]
],
[
[
"def rq3_agg():\n dataRQ3 = db.users.aggregate( [\n {'$lookup': {\n 'from': 'posts',\n 'localField' : 'Id',\n 'foreignField' : 'OwnerUserId',\n 'as': 'posts'}\n },\n {'$project' : {\n 'Id' : True,\n 'posts': {\n '$filter' : {\n 'input' : '$posts',\n 'as' : 'post',\n 'cond' : { '$eq': ['$$post.PostTypeId', 1] }\n }},\n 'answers': {\n '$filter' : {\n 'input' : '$posts',\n 'as' : 'post',\n 'cond' : { '$eq': ['$$post.PostTypeId', 2] }\n }}\n }},\n {'$project' : {\n 'postsCount': { '$size' : '$posts'},\n 'ansCount': { '$size' : '$answers'}\n\n }}\n ])\n data = list(dataRQ3)\n return data",
"_____no_output_____"
]
],
[
[
"#### Map-Reduce",
"_____no_output_____"
]
],
[
[
"def rq3_mapred():\n rq3_map = Code(\"\"\"\n function () {\n object = {\n questions: 0,\n answers: 0\n }; \n if (this.PostTypeId == 2) {\n object.answers = 1;\n object.questions = 0;\n } else if (this.PostTypeId == 1) {\n object.questions = 1;\n object.answers = 0;\n }\n emit(this.OwnerUserId, object);\n }\n \"\"\")\n\n rq3_reduce = Code(\n '''\n function (key, values)\n {\n questions = 0;\n answers = 0;\n for(i = 0; i<values.length; i++) {\n questions += values[i].questions;\n answers += values[i].answers;\n }\n return {questions, answers};\n }\n ''')\n\n db.posts.map_reduce(rq3_map, rq3_reduce, out='rq3')\n\n rq3_map2 = Code(\"\"\"\n function () {\n object = {\n questions: 0,\n answers: 0\n };\n emit(this.Id, object);\n }\n \"\"\")\n\n rq3_reduce2 = Code(\"\"\"\n function (key, values)\n {\n questions = 0;\n answers = 0;\n for(i = 0; i<values.length; i++) {\n questions += values[i].questions;\n answers += values[i].answers;\n }\n return {questions, answers};\n }\n \"\"\")\n\n db.users.map_reduce(rq3_map2, rq3_reduce2, out={'reduce' : 'rq3'})\n data = list(db.rq3.find())\n return data",
"_____no_output_____"
]
],
[
[
"### RQ4\n#### Framework de agregación",
"_____no_output_____"
]
],
[
[
"def rq4_agg():\n RQ4 = db.posts.aggregate( [\n {'$match': { 'AcceptedAnswerId' : {'$ne': ''}}},\n {'$lookup': {\n 'from': \"posts\", \n 'localField': \"AcceptedAnswerId\",\n 'foreignField': \"Id\",\n 'as': \"answer\"}\n },\n { \n '$unwind' : '$answer'\n },\n {\n '$project' : { 'OwnerUserId': True, \n 'answerer' : '$answer.OwnerUserId'\n }\n },\n {\n '$group' : {'_id' : {'min' : { '$min' : ['$OwnerUserId' , '$answerer'] },\n 'max' : { '$max' : ['$OwnerUserId' , '$answerer'] }},\n 'answers' : {'$addToSet' : { '0q':'$OwnerUserId', '1a': '$answerer'}}\n }\n },\n {\n '$project': {\n 'answers' : True,\n 'nanswers' : { '$size' : '$answers'}\n }\n },\n {\n '$match' : { 'nanswers' : { '$eq' : 2}}\n }\n ])\n data = list(RQ4)\n return data",
"_____no_output_____"
]
],
[
[
"## Comparativa entre la ejecución del código en AWS Lambda frente al notebook",
"_____no_output_____"
],
[
"Para comparar los tiempos de ejecución de las distintas consultas, realizaremos una serie de mediciones distintas que nos permiten hacernos una idea del rendimiento de este tipo de tecnologías.\n\n- **awstime**: tiempo medido por la propia función AWS Lambda para la realización de la consulta. Es probablemente el factor más importante ya que es el que nos permite hacernos una idea del tiempo que nos va a facturar AWS por utilizar el servicio.\n- **awsquerytime**: tiempo medido con el notebook desde que se lanza la petición para ejecutar la AWS Lambda hasta que se obtiene el resultado.\n- **localtime**: tiempo de ejecución del código en el notebook.",
"_____no_output_____"
]
],
[
[
"queries = [\"rq1_agg\", \"rq1_mapred\", \"rq2_agg\", \"rq2_mapred\", \"rq3_agg\", \"rq3_mapred\", \"rq4_agg\"]\n\nawstime = []\nawsquerytime = []\nlocaltime = []\n\nfor q in queries:\n \n start_time = timeit.default_timer()\n req_body = requests.get(\"https://na2dey56vh.execute-api.us-east-1.amazonaws.com/dev/{}\".format(q)).json()\n awsreqtime = timeit.default_timer() - start_time\n awsquerytime.append(awsreqtime)\n \n awstime.append(req_body['elapsedTime'])\n \n start_time = timeit.default_timer()\n locals()[q]()\n loctime = timeit.default_timer() - start_time\n localtime.append(loctime)\n \nd = {'awstime': awstime, 'awsquerytime': awsquerytime, \"localtime\": localtime}\ndf = pd.DataFrame(d)\ndf.index = queries\ndf",
"_____no_output_____"
]
],
[
[
"Podemos mostrar los datos de la tabla anterior con un gráfico que nos permita compararlos más fácilmente.",
"_____no_output_____"
]
],
[
[
"N = df.shape[0]\n\nfig, ax = plt.subplots()\nfig.set_figwidth(20)\nfig.set_figheight(10)\n\nind = np.arange(N) # the x locations for the groups\nwidth = 0.2 # the width of the bars\n\np1 = ax.bar(ind, df['awstime'], width, color='r', bottom=0)\np2 = ax.bar(ind + width, df['awsquerytime'], width, color='y', bottom=0)\np3 = ax.bar(ind + 2*width, df['localtime'], width, color='b', bottom=0)\n\nax.set_title('Elapsed time comparative')\nax.set_xticks(ind + 2*width / 2)\nax.set_xticklabels(queries)\n\nax.legend((p1[0], p2[0], p3[0]), ('awstime', 'awsquerytime', 'localtime'))\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"Como podemos ver, el tiempo de ejecución del código desde los servicios de AWS Lambda (rojo) es mejor que desed el notebook (azul). Este hecho parece lógico puesto que la base de datos Mongo está en un servidor EC2 de Amazon por lo que es posible que el tiempo de latencia se reduzca.\n\nNo obstante, si consideramos el tiempo necesario para obtener la respuesta con la consulta HTTP (amarillo), la situación ya no es tan ventajosa.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ec9cbf0459e68c43f6e13ba58b2ca17cbcd7234b | 41,487 | ipynb | Jupyter Notebook | section 6_part 1/Data_Prep-ML.ipynb | sanjanapackt/PacktPublishing-Clustering-and-Classification-with-Machine-Learning-in-Python | 8a79d991d17b4c82fb57524142c0adc7d615c31f | [
"MIT"
] | 10 | 2020-01-26T15:55:23.000Z | 2021-11-29T20:01:39.000Z | section7/Lecture43.ipynb | PacktPublishing/Regression-Modeling-With-Statistics-and-Machine-Learning-in-Python | 3272485dd85ede4555ed729b2659c63f9c5745fe | [
"MIT"
] | null | null | null | section7/Lecture43.ipynb | PacktPublishing/Regression-Modeling-With-Statistics-and-Machine-Learning-in-Python | 3272485dd85ede4555ed729b2659c63f9c5745fe | [
"MIT"
] | 9 | 2019-12-24T18:54:20.000Z | 2021-11-10T09:53:21.000Z | 34.429046 | 2,978 | 0.364789 | [
[
[
"## Building Supervised Predictive Models\n\n### We test our data on a given dataset (training data) and evaluate its performance/generalizability on hold-out or testing data",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"train=pd.read_csv(\"trainT.csv\")\ntrain.shape",
"_____no_output_____"
],
[
"train.head(5)",
"_____no_output_____"
],
[
"test=pd.read_csv(\"testT.csv\")\ntest.shape",
"_____no_output_____"
],
[
"test.head(5)",
"_____no_output_____"
]
],
[
[
"## What happens when we don't have a seperate hold-out test dataset?\n\n### We take our dataset and split it into training data (80%) and testing data (20%)\n### We will fit the model on 80% of the data and test its performance on the 20% data set",
"_____no_output_____"
]
],
[
[
"from sklearn import datasets, linear_model\nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"# Load the Diabetes Housing dataset\ncolumns = \"age sex bmi map tc ldl hdl tch ltg glu\".split() # Declare the columns names\ndiabetes = datasets.load_diabetes() # Call the diabetes dataset from sklearn\ndf = pd.DataFrame(diabetes.data, columns=columns) # load the dataset as a pandas data frame\ny = diabetes.target # define the target variable (dependent variable) as y",
"_____no_output_____"
],
[
"df.head(6)",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"# create training and testing vars\nX_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.2)#205 data as testing data\nprint X_train.shape, y_train.shape\nprint X_test.shape, y_test.shape",
"(353, 10) (353L,)\n(89, 10) (89L,)\n"
],
[
"X_train.head(6)",
"_____no_output_____"
],
[
"X_test.head(6)",
"_____no_output_____"
]
],
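[
[
"A minimal illustrative sketch (not part of the original notebook): once the data is split, we fit a model on the 80% training portion and score it on the 20% hold-out portion, as described above. The choice of LinearRegression for the diabetes target is an assumption made here for illustration.",
"_____no_output_____"
]
],
[
[
"# Illustrative sketch: fit on the training split, evaluate on the hold-out split\nlm = linear_model.LinearRegression()\nlm.fit(X_train, y_train)\nprint(lm.score(X_test, y_test)) # R^2 on the 20% hold-out data",
"_____no_output_____"
]
],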
[
[
"### Cons: Splitting data can lead to unstable results when we have a small data set",
"_____no_output_____"
],
[
"## Cross-Validation\n\n### Divide your data into folds (each fold is a container that holds an even distribution of the cases), usually 5 or 10 (5 fold CV and 10 fold CV respectively)\n### Hold out one fold as a test set and use the others as training sets\n### Train and record the test set result\n### Perform Steps 2 and 3 again, using each fold in turn as a test set.\n### Calculate the average and the standard deviation of all the folds’ test results",
"_____no_output_____"
]
],
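[
[
"The following is a minimal illustrative sketch (not part of the original notebook) of the procedure just described, using scikit-learn's cross_val_score helper; the choice of LogisticRegression as the classifier is an assumption. The cells below then walk through the same idea manually with KFold.",
"_____no_output_____"
]
],
[
[
"# Illustrative sketch of k-fold cross-validation with cross_val_score (assumed classifier choice)\nfrom sklearn import datasets\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.linear_model import LogisticRegression\n\niris = datasets.load_iris()\ncv_scores = cross_val_score(LogisticRegression(), iris.data, iris.target, cv=5) # 5-fold CV\nprint(cv_scores) # one accuracy value per fold\nprint(cv_scores.mean()) # average of the fold results\nprint(cv_scores.std()) # spread of the fold results",
"_____no_output_____"
]
],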
[
[
"from sklearn import datasets",
"_____no_output_____"
],
[
"from sklearn.cross_validation import cross_val_score",
"C:\\Users\\Minerva\\Anaconda2\\lib\\site-packages\\sklearn\\cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.\n \"This module will be removed in 0.20.\", DeprecationWarning)\n"
],
[
"iris_data=datasets.load_iris()\n#load the iris dataset",
"_____no_output_____"
],
[
"print (iris_data)",
"{'target_names': array(['setosa', 'versicolor', 'virginica'], \n dtype='|S10'), 'data': array([[ 5.1, 3.5, 1.4, 0.2],\n [ 4.9, 3. , 1.4, 0.2],\n [ 4.7, 3.2, 1.3, 0.2],\n [ 4.6, 3.1, 1.5, 0.2],\n [ 5. , 3.6, 1.4, 0.2],\n [ 5.4, 3.9, 1.7, 0.4],\n [ 4.6, 3.4, 1.4, 0.3],\n [ 5. , 3.4, 1.5, 0.2],\n [ 4.4, 2.9, 1.4, 0.2],\n [ 4.9, 3.1, 1.5, 0.1],\n [ 5.4, 3.7, 1.5, 0.2],\n [ 4.8, 3.4, 1.6, 0.2],\n [ 4.8, 3. , 1.4, 0.1],\n [ 4.3, 3. , 1.1, 0.1],\n [ 5.8, 4. , 1.2, 0.2],\n [ 5.7, 4.4, 1.5, 0.4],\n [ 5.4, 3.9, 1.3, 0.4],\n [ 5.1, 3.5, 1.4, 0.3],\n [ 5.7, 3.8, 1.7, 0.3],\n [ 5.1, 3.8, 1.5, 0.3],\n [ 5.4, 3.4, 1.7, 0.2],\n [ 5.1, 3.7, 1.5, 0.4],\n [ 4.6, 3.6, 1. , 0.2],\n [ 5.1, 3.3, 1.7, 0.5],\n [ 4.8, 3.4, 1.9, 0.2],\n [ 5. , 3. , 1.6, 0.2],\n [ 5. , 3.4, 1.6, 0.4],\n [ 5.2, 3.5, 1.5, 0.2],\n [ 5.2, 3.4, 1.4, 0.2],\n [ 4.7, 3.2, 1.6, 0.2],\n [ 4.8, 3.1, 1.6, 0.2],\n [ 5.4, 3.4, 1.5, 0.4],\n [ 5.2, 4.1, 1.5, 0.1],\n [ 5.5, 4.2, 1.4, 0.2],\n [ 4.9, 3.1, 1.5, 0.1],\n [ 5. , 3.2, 1.2, 0.2],\n [ 5.5, 3.5, 1.3, 0.2],\n [ 4.9, 3.1, 1.5, 0.1],\n [ 4.4, 3. , 1.3, 0.2],\n [ 5.1, 3.4, 1.5, 0.2],\n [ 5. , 3.5, 1.3, 0.3],\n [ 4.5, 2.3, 1.3, 0.3],\n [ 4.4, 3.2, 1.3, 0.2],\n [ 5. , 3.5, 1.6, 0.6],\n [ 5.1, 3.8, 1.9, 0.4],\n [ 4.8, 3. , 1.4, 0.3],\n [ 5.1, 3.8, 1.6, 0.2],\n [ 4.6, 3.2, 1.4, 0.2],\n [ 5.3, 3.7, 1.5, 0.2],\n [ 5. , 3.3, 1.4, 0.2],\n [ 7. , 3.2, 4.7, 1.4],\n [ 6.4, 3.2, 4.5, 1.5],\n [ 6.9, 3.1, 4.9, 1.5],\n [ 5.5, 2.3, 4. , 1.3],\n [ 6.5, 2.8, 4.6, 1.5],\n [ 5.7, 2.8, 4.5, 1.3],\n [ 6.3, 3.3, 4.7, 1.6],\n [ 4.9, 2.4, 3.3, 1. ],\n [ 6.6, 2.9, 4.6, 1.3],\n [ 5.2, 2.7, 3.9, 1.4],\n [ 5. , 2. , 3.5, 1. ],\n [ 5.9, 3. , 4.2, 1.5],\n [ 6. , 2.2, 4. , 1. ],\n [ 6.1, 2.9, 4.7, 1.4],\n [ 5.6, 2.9, 3.6, 1.3],\n [ 6.7, 3.1, 4.4, 1.4],\n [ 5.6, 3. , 4.5, 1.5],\n [ 5.8, 2.7, 4.1, 1. ],\n [ 6.2, 2.2, 4.5, 1.5],\n [ 5.6, 2.5, 3.9, 1.1],\n [ 5.9, 3.2, 4.8, 1.8],\n [ 6.1, 2.8, 4. , 1.3],\n [ 6.3, 2.5, 4.9, 1.5],\n [ 6.1, 2.8, 4.7, 1.2],\n [ 6.4, 2.9, 4.3, 1.3],\n [ 6.6, 3. , 4.4, 1.4],\n [ 6.8, 2.8, 4.8, 1.4],\n [ 6.7, 3. , 5. , 1.7],\n [ 6. , 2.9, 4.5, 1.5],\n [ 5.7, 2.6, 3.5, 1. ],\n [ 5.5, 2.4, 3.8, 1.1],\n [ 5.5, 2.4, 3.7, 1. ],\n [ 5.8, 2.7, 3.9, 1.2],\n [ 6. , 2.7, 5.1, 1.6],\n [ 5.4, 3. , 4.5, 1.5],\n [ 6. , 3.4, 4.5, 1.6],\n [ 6.7, 3.1, 4.7, 1.5],\n [ 6.3, 2.3, 4.4, 1.3],\n [ 5.6, 3. , 4.1, 1.3],\n [ 5.5, 2.5, 4. , 1.3],\n [ 5.5, 2.6, 4.4, 1.2],\n [ 6.1, 3. , 4.6, 1.4],\n [ 5.8, 2.6, 4. , 1.2],\n [ 5. , 2.3, 3.3, 1. ],\n [ 5.6, 2.7, 4.2, 1.3],\n [ 5.7, 3. , 4.2, 1.2],\n [ 5.7, 2.9, 4.2, 1.3],\n [ 6.2, 2.9, 4.3, 1.3],\n [ 5.1, 2.5, 3. , 1.1],\n [ 5.7, 2.8, 4.1, 1.3],\n [ 6.3, 3.3, 6. , 2.5],\n [ 5.8, 2.7, 5.1, 1.9],\n [ 7.1, 3. , 5.9, 2.1],\n [ 6.3, 2.9, 5.6, 1.8],\n [ 6.5, 3. , 5.8, 2.2],\n [ 7.6, 3. , 6.6, 2.1],\n [ 4.9, 2.5, 4.5, 1.7],\n [ 7.3, 2.9, 6.3, 1.8],\n [ 6.7, 2.5, 5.8, 1.8],\n [ 7.2, 3.6, 6.1, 2.5],\n [ 6.5, 3.2, 5.1, 2. ],\n [ 6.4, 2.7, 5.3, 1.9],\n [ 6.8, 3. , 5.5, 2.1],\n [ 5.7, 2.5, 5. , 2. ],\n [ 5.8, 2.8, 5.1, 2.4],\n [ 6.4, 3.2, 5.3, 2.3],\n [ 6.5, 3. , 5.5, 1.8],\n [ 7.7, 3.8, 6.7, 2.2],\n [ 7.7, 2.6, 6.9, 2.3],\n [ 6. , 2.2, 5. , 1.5],\n [ 6.9, 3.2, 5.7, 2.3],\n [ 5.6, 2.8, 4.9, 2. ],\n [ 7.7, 2.8, 6.7, 2. ],\n [ 6.3, 2.7, 4.9, 1.8],\n [ 6.7, 3.3, 5.7, 2.1],\n [ 7.2, 3.2, 6. , 1.8],\n [ 6.2, 2.8, 4.8, 1.8],\n [ 6.1, 3. , 4.9, 1.8],\n [ 6.4, 2.8, 5.6, 2.1],\n [ 7.2, 3. , 5.8, 1.6],\n [ 7.4, 2.8, 6.1, 1.9],\n [ 7.9, 3.8, 6.4, 2. ],\n [ 6.4, 2.8, 5.6, 2.2],\n [ 6.3, 2.8, 5.1, 1.5],\n [ 6.1, 2.6, 5.6, 1.4],\n [ 7.7, 3. , 6.1, 2.3],\n [ 6.3, 3.4, 5.6, 2.4],\n [ 6.4, 3.1, 5.5, 1.8],\n [ 6. 
, 3. , 4.8, 1.8],\n [ 6.9, 3.1, 5.4, 2.1],\n [ 6.7, 3.1, 5.6, 2.4],\n [ 6.9, 3.1, 5.1, 2.3],\n [ 5.8, 2.7, 5.1, 1.9],\n [ 6.8, 3.2, 5.9, 2.3],\n [ 6.7, 3.3, 5.7, 2.5],\n [ 6.7, 3. , 5.2, 2.3],\n [ 6.3, 2.5, 5. , 1.9],\n [ 6.5, 3. , 5.2, 2. ],\n [ 6.2, 3.4, 5.4, 2.3],\n [ 5.9, 3. , 5.1, 1.8]]), 'target': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,\n 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,\n 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]), 'DESCR': 'Iris Plants Database\\n====================\\n\\nNotes\\n-----\\nData Set Characteristics:\\n :Number of Instances: 150 (50 in each of three classes)\\n :Number of Attributes: 4 numeric, predictive attributes and the class\\n :Attribute Information:\\n - sepal length in cm\\n - sepal width in cm\\n - petal length in cm\\n - petal width in cm\\n - class:\\n - Iris-Setosa\\n - Iris-Versicolour\\n - Iris-Virginica\\n :Summary Statistics:\\n\\n ============== ==== ==== ======= ===== ====================\\n Min Max Mean SD Class Correlation\\n ============== ==== ==== ======= ===== ====================\\n sepal length: 4.3 7.9 5.84 0.83 0.7826\\n sepal width: 2.0 4.4 3.05 0.43 -0.4194\\n petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)\\n petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)\\n ============== ==== ==== ======= ===== ====================\\n\\n :Missing Attribute Values: None\\n :Class Distribution: 33.3% for each of 3 classes.\\n :Creator: R.A. Fisher\\n :Donor: Michael Marshall (MARSHALL%[email protected])\\n :Date: July, 1988\\n\\nThis is a copy of UCI ML iris datasets.\\nhttp://archive.ics.uci.edu/ml/datasets/Iris\\n\\nThe famous Iris database, first used by Sir R.A Fisher\\n\\nThis is perhaps the best known database to be found in the\\npattern recognition literature. Fisher\\'s paper is a classic in the field and\\nis referenced frequently to this day. (See Duda & Hart, for example.) The\\ndata set contains 3 classes of 50 instances each, where each class refers to a\\ntype of iris plant. One class is linearly separable from the other 2; the\\nlatter are NOT linearly separable from each other.\\n\\nReferences\\n----------\\n - Fisher,R.A. \"The use of multiple measurements in taxonomic problems\"\\n Annual Eugenics, 7, Part II, 179-188 (1936); also in \"Contributions to\\n Mathematical Statistics\" (John Wiley, NY, 1950).\\n - Duda,R.O., & Hart,P.E. (1973) Pattern Classification and Scene Analysis.\\n (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.\\n - Dasarathy, B.V. (1980) \"Nosing Around the Neighborhood: A New System\\n Structure and Classification Rule for Recognition in Partially Exposed\\n Environments\". IEEE Transactions on Pattern Analysis and Machine\\n Intelligence, Vol. PAMI-2, No. 1, 67-71.\\n - Gates, G.W. (1972) \"The Reduced Nearest Neighbor Rule\". IEEE Transactions\\n on Information Theory, May 1972, 431-433.\\n - See also: 1988 MLC Proceedings, 54-64. Cheeseman et al\"s AUTOCLASS II\\n conceptual clustering system finds 3 classes in the data.\\n - Many, many more ...\\n', 'feature_names': ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']}\n"
],
[
"data_input = iris_data.data\ndata_output = iris_data.target",
"_____no_output_____"
],
[
"from sklearn.cross_validation import KFold",
"_____no_output_____"
],
[
"kf = KFold(10, n_folds = 5, shuffle=True) #5 fold CV ",
"_____no_output_____"
],
[
"for train_set,test_set in kf:\n print(train_set, test_set)",
"(array([0, 2, 3, 4, 5, 6, 8, 9]), array([1, 7]))\n(array([0, 1, 4, 5, 6, 7, 8, 9]), array([2, 3]))\n(array([0, 1, 2, 3, 4, 6, 7, 8]), array([5, 9]))\n(array([0, 1, 2, 3, 4, 5, 7, 9]), array([6, 8]))\n(array([1, 2, 3, 5, 6, 7, 8, 9]), array([0, 4]))\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9ccf860f3ebedcadcbf87bd0420fda1f34b8f6 | 40,636 | ipynb | Jupyter Notebook | arctic_cruise/Untitled.ipynb | riddhimap/Saildrone | c6ecbdb1ad8ba50a59b72357be01422b8f0c0d73 | [
"Apache-2.0"
] | 3 | 2019-07-08T11:55:44.000Z | 2021-10-06T15:11:18.000Z | arctic_cruise/Untitled.ipynb | riddhimap/Saildrone | c6ecbdb1ad8ba50a59b72357be01422b8f0c0d73 | [
"Apache-2.0"
] | null | null | null | arctic_cruise/Untitled.ipynb | riddhimap/Saildrone | c6ecbdb1ad8ba50a59b72357be01422b8f0c0d73 | [
"Apache-2.0"
] | 3 | 2020-06-08T06:29:22.000Z | 2020-06-16T15:43:46.000Z | 129.414013 | 24,528 | 0.784895 | [
[
[
"import xarray as xr\nfile = 'F:/data/cruise_data/saildrone/2019_arctic/post_mission/saildrone-gen_5-arctic_misst_2019-sd1036-20190514T230000-20191011T183000-1_minutes-v1.1575336154680.nc'\nds=xr.open_dataset(file)\nds",
"_____no_output_____"
],
[
"ds.TEMP_CTD_RBR_MEAN.plot()",
"_____no_output_____"
],
[
"file = 'F:/data/sst/jpl_mur/v4.1/2004/001/20040101090000-JPL-L4_GHRSST-SSTfnd-MUR-GLOB-v02.0-fv04.1.nc'\nds=xr.open_dataset(file)\nds",
"_____no_output_____"
],
[
"ds.attrs['summary']",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
ec9cff53fedb244e94ff63cc80cd5646c7e0af0d | 281,381 | ipynb | Jupyter Notebook | Summarization Without Attention Using Bidriectional LSTM3.2.14.ipynb | Imvicoder/Text-Summarization | f544e2074d727dcd7853e2426094d79bd2a67d18 | [
"MIT"
] | 7 | 2018-06-17T07:59:31.000Z | 2020-05-31T15:00:58.000Z | Summarization Without Attention Using Bidriectional LSTM3.2.14.ipynb | Imvicoder/Text-Summarization | f544e2074d727dcd7853e2426094d79bd2a67d18 | [
"MIT"
] | 1 | 2018-11-08T03:55:09.000Z | 2018-11-08T03:55:09.000Z | Summarization Without Attention Using Bidriectional LSTM3.2.14.ipynb | Imvicoder/Text-Summarization | f544e2074d727dcd7853e2426094d79bd2a67d18 | [
"MIT"
] | 1 | 2018-09-27T03:10:19.000Z | 2018-09-27T03:10:19.000Z | 66.238465 | 19,060 | 0.63526 | [
[
[
"from __future__ import print_function,division\nfrom keras.layers import Input, Bidirectional,LSTM,Dense,Dropout,Concatenate\nfrom keras.optimizers import Adam\nfrom keras.utils import to_categorical\nfrom keras.models import load_model, Model\nfrom keras.layers import Embedding\nimport cPickle as pickle\nfrom matplotlib import pyplot\nfrom keras.utils import plot_model\nfrom IPython.display import SVG\nfrom keras.utils.vis_utils import model_to_dot\nfrom keras.callbacks import ModelCheckpoint,EarlyStopping\nfrom keras import regularizers\nimport psutil\nimport numpy as np\nimport nltk",
"/usr/local/lib/python2.7/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\nUsing TensorFlow backend.\n"
],
[
"psutil.virtual_memory()",
"_____no_output_____"
],
[
"#FN1='embeddingReviewsFilewithOtherdata'\nFN2='myPaddedDataFile'",
"_____no_output_____"
],
[
"with open('%s.pkl'%FN2,'rb') as fp:\n embeddingReviews, modiefiedSummaryWord_index,myPaddedData= pickle.load(fp)",
"_____no_output_____"
],
[
"psutil.virtual_memory()",
"_____no_output_____"
],
[
"paddedReviews=myPaddedData['paddedReviews']\npaddedSummary=myPaddedData['paddedSummary']\npaddedModifiedSummary=myPaddedData['paddedModifiedSummary']\ntestPaddedReviews=myPaddedData['testPaddedReviews']\ntestPaddedSummary=myPaddedData['testPaddedSummary']",
"_____no_output_____"
],
[
"TrainingDataIX=paddedReviews\nTrainingDataTY=paddedSummary\nTrainingDataIY=paddedModifiedSummary\nTrainingDataIX.shape,TrainingDataIY.shape,TrainingDataTY.shape",
"_____no_output_____"
],
[
"TestDataIX=testPaddedReviews\nTestDataTY=testPaddedSummary",
"_____no_output_____"
],
[
"psutil.virtual_memory()",
"_____no_output_____"
],
[
"nb_samples=len(TrainingDataIX)\nnb_samples",
"_____no_output_____"
],
[
"ModifiedVocabSize=len(modiefiedSummaryWord_index)\n",
"_____no_output_____"
],
[
"ModifiedVocabSize",
"_____no_output_____"
],
[
"psutil.virtual_memory()",
"_____no_output_____"
],
[
"decoderInputSummary=to_categorical(paddedModifiedSummary,num_classes=ModifiedVocabSize)",
"_____no_output_____"
],
[
"decoderInputSummary=decoderInputSummary.reshape(nb_samples,-1,ModifiedVocabSize)",
"_____no_output_____"
],
[
"decoderInputSummary.shape",
"_____no_output_____"
],
[
"psutil.virtual_memory()",
"_____no_output_____"
],
[
"#FN2='CtegoricalSummaryData'",
"_____no_output_____"
],
[
"decoderTargetSummary=to_categorical(paddedSummary,num_classes=ModifiedVocabSize)",
"_____no_output_____"
],
[
"decoderTargetSummary.shape",
"_____no_output_____"
],
[
"decoderTargetSummary=decoderTargetSummary.reshape(nb_samples,-1,ModifiedVocabSize)",
"_____no_output_____"
],
[
"decoderTargetSummary.shape",
"_____no_output_____"
],
[
"psutil.virtual_memory()",
"_____no_output_____"
],
[
"#valOneHotSummary=to_categorical(valPaddedSummary,num_classes=ModifiedVocabSize)",
"_____no_output_____"
],
[
"with open('%s.pkl'%'embeddingReviewsFile', 'rb') as fp:\n embeddingReviews = pickle.load(fp)",
"_____no_output_____"
],
[
"embedding_dim=100",
"_____no_output_____"
],
[
"ReviewsVocabSize=32251#30172#27789\nmaxReviewLength=200\nmaxSummaryLength=30",
"_____no_output_____"
],
[
"#Encoder\nEncoder_embedding_layer = Embedding(ReviewsVocabSize,\n embedding_dim,\n weights=[embeddingReviews],\n input_length=maxReviewLength,\n trainable=True,\n mask_zero=True)\nencoder_input=Input(shape=(maxReviewLength,))\nprint('encoder_input shape is:->',encoder_input.shape)\nembedded_Encoder_inputSequence=Encoder_embedding_layer(encoder_input)\nencoder_LSTM=Bidirectional(LSTM(50, return_state=True))\nencoder_outputs, forward_h, forward_c, backward_h, backward_c = encoder_LSTM(embedded_Encoder_inputSequence)\nencoder_h= Concatenate()([forward_h, backward_h])\nencoder_c = Concatenate()([forward_c, backward_c])\n\n#print(type(encoder_LSTM))\n#encoder_output,encoder_h,encoder_c=encoder_LSTM(embedded_Encoder_inputSequence)\n#print('encoder_output shape:->',encoder_output.shape)\nencoder_states=[encoder_h,encoder_c]",
"encoder_input shape is:-> (?, 200)\n"
],
[
"decoder_input=Input(shape=(None,ModifiedVocabSize))\ndecoder_LSTM=LSTM(100,return_sequences=True, return_state = True,dropout=0.35,recurrent_dropout=0.25,recurrent_regularizer=regularizers.l2(0.0532))#,recurrent_dropout=0.25,bias_regularizer=regularizers.l2(0.02),recurrent_regularizer=regularizers.l2(0.02))\ndecoder_output,decoder_h,decoder_c=decoder_LSTM(decoder_input,initial_state=encoder_states)\nfinal_decoder_out=Dense(ModifiedVocabSize,activation='softmax',kernel_regularizer=regularizers.l1(0.02),activity_regularizer=regularizers.l2(0.02)#,kernel_regularizer=regularizers.l2(0.05),activity_regularizer=regularizers.l2(0.06)\n )(decoder_output)\nfinal_decoder_out=Dropout(0.45)(final_decoder_out)",
"_____no_output_____"
],
[
"model=Model(inputs=[encoder_input,decoder_input],output=final_decoder_out)",
"/usr/local/lib/python2.7/dist-packages/ipykernel_launcher.py:1: UserWarning: Update your `Model` call to the Keras 2 API: `Model(outputs=Tensor(\"dr..., inputs=[<tf.Tenso...)`\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"opt = Adam(lr=0.01, beta_1=0.9, beta_2=0.999,decay=0.004)\n#opt=sgd(lr=0.001, momentum=0.2, decay=0.1, nesterov=False)\n#opt=Adam()\nmodel.compile(loss='categorical_crossentropy', optimizer=opt,metrics=['accuracy'])\n#filepath=\"summWithoutAttention.hdf5\"\n#checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_weights_only=False, mode='auto', period=1)\ncheckpointer = ModelCheckpoint(filepath='SummarizationWithoutAttentionUsingBidirectionalV3.2.14Weights.hdf5', verbose=1, save_best_only=False,mode='auto',period=1)\n#es=EarlyStopping(patience=5)\n#callbacks_list = [checkpoint]",
"_____no_output_____"
],
[
"#model.load_weights('SummarizationWithoutAttentionUsingBidirectionalV3.2.14Weights.hdf5')",
"_____no_output_____"
],
[
"model.summary()",
"__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 200) 0 \n__________________________________________________________________________________________________\nembedding_1 (Embedding) (None, 200, 100) 3225100 input_1[0][0] \n__________________________________________________________________________________________________\nbidirectional_1 (Bidirectional) [(None, 100), (None, 60400 embedding_1[0][0] \n__________________________________________________________________________________________________\ninput_2 (InputLayer) (None, None, 6617) 0 \n__________________________________________________________________________________________________\nconcatenate_1 (Concatenate) (None, 100) 0 bidirectional_1[0][1] \n bidirectional_1[0][3] \n__________________________________________________________________________________________________\nconcatenate_2 (Concatenate) (None, 100) 0 bidirectional_1[0][2] \n bidirectional_1[0][4] \n__________________________________________________________________________________________________\nlstm_2 (LSTM) [(None, None, 100), 2687200 input_2[0][0] \n concatenate_1[0][0] \n concatenate_2[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, None, 6617) 668317 lstm_2[0][0] \n__________________________________________________________________________________________________\ndropout_1 (Dropout) (None, None, 6617) 0 dense_1[0][0] \n==================================================================================================\nTotal params: 6,641,017\nTrainable params: 6,641,017\nNon-trainable params: 0\n__________________________________________________________________________________________________\n"
],
[
"SVG(model_to_dot(model).create(prog='dot', format='svg'))",
"_____no_output_____"
],
[
"history=model.fit(x=[TrainingDataIX,decoderInputSummary], \n y=decoderTargetSummary,\n batch_size=64,\n epochs=800,\n validation_split=0.2,callbacks=[checkpointer])#,es])",
"Train on 1000 samples, validate on 250 samples\nEpoch 1/800\n1000/1000 [==============================] - 93s 93ms/step - loss: 76.4740 - acc: 0.1206 - val_loss: 40.9348 - val_acc: 0.3121\n\nEpoch 00001: saving model to SummarizationWithoutAttentionUsingBidirectionalV3.2.14Weights.hdf5\n"
],
[
"#model.load_weights('SummarizationWithoutAttentionV3.2.8Weights.hdf5')",
"_____no_output_____"
],
[
"#history=model.fit(x=[TrainingDataIX,decoderInputSummary], \n # y=decoderTargetSummary,\n # batch_size=64,\n # epochs=1000,\n # initial_epoch=51, \n # validation_split=0.2,callbacks=[checkpointer])#,es])",
"_____no_output_____"
],
[
"print(history.history['acc'])",
"[0.17860000088065864, 0.18293333411961793, 0.18850000128149985, 0.18710000048577785, 0.19183333390951157, 0.19396666757762432, 0.19056666721403598, 0.19813333363831043, 0.1946999997794628, 0.19810000078752638, 0.1954666675031185, 0.1949666671305895, 0.19686666683852672, 0.19303333380818366, 0.19556666734814643, 0.1982666669934988, 0.1961333339959383, 0.19796666698157786, 0.19553333343565463, 0.2004666673094034, 0.19353333385288715, 0.19410000056028365, 0.19870000046491623, 0.19940000116825105, 0.19670000012218952, 0.1982000000476837, 0.19806666697561742, 0.1980666667073965, 0.19900000029802323, 0.19873333352804184, 0.19956666657328606, 0.19686666721105575, 0.19789999982714654, 0.19530000026524066, 0.1959666667878628, 0.19656666648387908, 0.19703333321213723, 0.19786666770279407, 0.19746666660904885, 0.197800000205636, 0.19603333286941052, 0.1968666667789221, 0.1977333339601755, 0.19683333376049997, 0.19880000013113022, 0.196233334004879, 0.19680000008642673, 0.19806666730344297, 0.19630000016093255, 0.19716666746139527, 0.1977000000178814, 0.19910000044107437, 0.19770000027120113, 0.19776666717231273, 0.19936666706204415, 0.19410000026226043, 0.19773333325982093, 0.1977333335876465, 0.1983666658848524, 0.19360000039637087, 0.1992666670680046, 0.19553333300352096, 0.19786666689813137, 0.19796666675806046, 0.19830000002682208, 0.19740000066161156, 0.19666666708886624, 0.19596666702628135, 0.19640000073611735, 0.1959666673243046, 0.1990333329886198, 0.1973333343565464, 0.19346666696667672, 0.1974333339035511, 0.19819999979436398, 0.1962333340495825, 0.19886666762828828, 0.19703333346545696, 0.19639999989420176, 0.19720000033080579, 0.1974666669368744, 0.1992999994456768, 0.19790000028908253, 0.1938000002503395, 0.19830000013113022, 0.1960000008046627, 0.19896666696667673, 0.19916666720807552, 0.19683333346247672, 0.19743333335220814]\n"
],
[
"from matplotlib import pyplot",
"_____no_output_____"
],
[
"pyplot.plot(history.history['loss'])\npyplot.plot(history.history['val_loss'])\npyplot.title('model train vs validation loss')\npyplot.ylabel('loss')\npyplot.xlabel('epoch')\npyplot.legend(['train', 'validation'], loc='upper right')\npyplot.show()",
"_____no_output_____"
]
],
[
[
"# Inferencing",
"_____no_output_____"
]
],
[
[
"#Encoder Inference\nencoder_model_inf=Model(inputs=encoder_input,outputs=encoder_states)",
"_____no_output_____"
],
[
"#Decoder Inference\ndecoder_state_input_h=Input(shape=(100,))\ndecoder_state_input_c = Input(shape=(100,))\ndecoder_input_states = [decoder_state_input_h, decoder_state_input_c]\ndecoder_out, decoder_h, decoder_c = decoder_LSTM(decoder_input,initial_state=decoder_input_states)\ndecoder_states=[decoder_h,decoder_c]\n#decoder_inf_out = decoder_dense_rel(decoder_out)\n#decoder_inf_final_out=decoder_dense(decoder_inf_out)\n#decoder_model_inf = Model(inputs=[decoder_input] + decoder_input_states,outputs=[decoder_inf_final_out] + decoder_states )\ndecoder_inf_final_out = Dense(ModifiedVocabSize,activation='softmax'#,kernel_regularizer=regularizers.l2(0.05),activity_regularizer=regularizers.l2(0.05)\n )(decoder_out)\n#decoder_inf_final_out=Dropout(0.4)(decoder_inf_final_out)\ndecoder_model_inf = Model(inputs=[decoder_input] + decoder_input_states,\n outputs=[decoder_inf_final_out] + decoder_states )",
"_____no_output_____"
],
[
"modiefiedSummaryWord_index['SOS']",
"_____no_output_____"
],
[
"int_to_vocab_summaries = {}\nfor word, value in modiefiedSummaryWord_index.items():\n int_to_vocab_summaries[value] = word",
"_____no_output_____"
],
[
"def decode_seq(input_seq):\n # Initial states value is coming from the encoder \n #We get the encoder states into states_val variable\n states_val = encoder_model_inf.predict(input_seq)#return encoder states\n target_seq = np.zeros((1,1,ModifiedVocabSize))\n print('target_seq shape:->',target_seq.shape)\n target_seq[0, 0, modiefiedSummaryWord_index['SOS']] = 1\n print(target_seq.shape)\n #target_seq=embeddingModifiedSummaries[modiefiedSummaryWord_index['SOS']]\n summarized_sent = ''\n stop_condition = False\n i=1\n while not stop_condition:\n decoder_out, decoder_h, decoder_c = decoder_model_inf.predict(x=[target_seq] + states_val)\n #print(decoder_out)\n max_val_index = np.argmax(decoder_out[0,-1,:])\n sampled_summary_word = int_to_vocab_summaries[max_val_index]\n #print('sampled_summary_word is:->',sampled_summary_word)\n #print()\n summarized_sent += sampled_summary_word+\" \"\n #print('summarized_sent is:->',summarized_sent)\n #print()\n if ((sampled_summary_word == 'EOS') or (len(summarized_sent) >= maxSummaryLength)) :\n print('terminated')\n stop_condition = True\n \n target_seq = np.zeros((1,1,ModifiedVocabSize))\n target_seq[0, 0, max_val_index]=1\n \n states_val = [decoder_h, decoder_c]\n i=i+1\n \n return summarized_sent",
"_____no_output_____"
],
[
"human_summary=[]\nfor i in range(50): \n #print('System Generated Summary:',summary)\n temp=[]\n for j in range(len(testPaddedSummary[i])):\n temp.append(int_to_vocab_summaries[testPaddedSummary[i][j]])\n human_summary.append(temp) \nhumanSummary=\" \" \nfor i in range(50):\n data=testPaddedReviews[i].reshape(1,200)\n summary=decode_seq(data)\n print('System Generated Summary:',summary)\n for j in range(len(human_summary[i])):\n humanSummary+=human_summary[i][j]+\" \"\n print('Human Summary',humanSummary)\n humanSummary=\" \"\n ",
"target_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary thirteen days offers a compelling look at the cuban missile crisis and its talented cast deftly portrays the reallife people who were involved PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary the great muppet caper is overplotted and uneven but the appealing presence of kermit miss piggy and the gang ensure that this heist flick is always breezily watchable PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary a tense gripping thriller a hijacking avoids action movie cliches and instead creates a palpable sense of dread by mixing gritty realism with atmospheric beauty PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: position position position position \nHuman Summary the rare sequel that arguably improves on its predecessor toy story 2 uses inventive storytelling gorgeous animation and a talented cast to deliver another rich moviegoing experience for all ages \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position weaves \nHuman Summary just go with it may be slightly better than some entries in the recently dire romcom genre but that is far from a recommendation PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary a challenging piece of experimental filmmaking PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary the emperors new groove is not the most ambitious animated film but its brisk pace fresh characters and big laughs make for a great time for the whole family PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary it has a likable cast and loads of cgi spectacle but for all but the least demanding viewers the sorcerers apprentice will be less than spellbinding PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary with a strong cast and a host of welldefined characters the best man is an intelligent funny romantic comedy that marks an impressive debut for writerdirector malcolm d lee PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves weaves weaves weaves position \nHuman Summary the crying game is famous for its shocking twist but this thoughtful haunting mystery grips the viewer from start to finish PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary kevin macdonalds exhaustive evenhanded portrait of bob marley offers electrifying concert footage and fascinating insights into reggaes greatest star PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary the controversial 
fat girl is an unflinchingly harsh but powerful look at female adolescence PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary one of steven spielbergs most ambitious efforts of the 1980s empire of the sun remains an underrated gem in the directors distinguished filmography PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: position position position position \nHuman Summary there is no shortage of similarly themed crime dramas but the drop rises above the pack with a smartly written script and strong cast PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary bandslam is an intelligent teen film that avoids teen film cliches in an entertaining package of music and comingofage drama PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position weaves \nHuman Summary even though oscarbearers nicolas cage angelina jolie and robert duval came aboard for this project the quality of gone in 60 seconds is disappointingly low the plot line is nonsensical \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary director alejandro amenbar tackles some heady issues with finesse and clarity in open your eyes a gripping exploration of existentialism and the human spirit PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary this uninspiring cop thriller does not measure up to chow yunfats hong kong work PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position weaves \nHuman Summary edward norton delivers one of his finest performances in leaves of grass but he is overpowered by the movies many jarring tonal shifts PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary the few comic gags sprinkled throughout the movie fail to spice up this formulaic romcom PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: position position position position \nHuman Summary the curse of the wererabbit is a subtly touching and wonderfully eccentric adventure featuring wallace and gromit PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary deliciously twistfilled nine queens is a clever and satisfying crime caper PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary employee of the month features mediocre performances few laughs and a lack of satiric bite PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position 
position position \nHuman Summary a secret is poignant sad and beautifully crafted featuring fine performances that stave off a drift toward soap opera territory PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position weaves \nHuman Summary jungle fever finds spike lee tackling timely sociopolitical themes in typically provocative style even if the result is sometimes ambitious to a fault PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary american teen skates some thin ice with its documentary ethics but in the end presents a charming and stylish if packaged tale PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary filled with excellent performances ramin bahranis deft sophomore effort is a heartfelt hopeful neorealist look at the people who live in the gritty underbelly of new york city PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves weaves happens happens \nHuman Summary the nativity story is a dull retelling of a wellworn tale with the look and feel of a highschool production PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary elevated by finchers directorial talent and fosters performance panic room is a wellcrafted aboveaverage thriller PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary it has the schmaltzy trappings of my romantic films but like crazy allows its characters to express themselves beyond dialogue crafting a true intimate study PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position weaves \nHuman Summary divided between sincere melodrama and populist comedy madea goes to jail fails to provide enough laughs or screen time for its titular heroine PAD PAD PAD PAD PAD PAD PAD \n"
],
[
"from nltk.translate.bleu_score import sentence_bleu",
"_____no_output_____"
],
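[
"# A minimal sanity check of sentence_bleu (illustrative strings only, not data from this notebook).\n# Note the argument order: the references come first, as a list of token lists, then the hypothesis;\n# a SmoothingFunction avoids hard-zero scores for very short generated summaries.\nfrom nltk.translate.bleu_score import SmoothingFunction\nreference = 'a tense gripping thriller'.split()\nhypothesis = 'a gripping thriller'.split()\nsmooth = SmoothingFunction().method1\nprint(sentence_bleu([reference], hypothesis, smoothing_function=smooth))",
"_____no_output_____"
],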
[
"humanSummary = \" \"\nscores = []\nfor i in range(50):\n    testData = testPaddedReviews[i].reshape(1, 200)\n    summary = decode_seq(testData)\n    print('System Generated Summary:', summary)\n    for j in range(len(human_summary[i])):\n        humanSummary += human_summary[i][j] + \" \"\n    print('Human Summary', humanSummary)\n    # calculation of BLEU score: the reference(s) go first, as a list of token lists,\n    # and the system-generated hypothesis goes second\n    score = sentence_bleu([nltk.word_tokenize(humanSummary)],\n                          nltk.word_tokenize(summary))  # optionally weights=(0.5, 0.5, 0, 0)\n    print('BlEU SCORE IS:->', score)\n    scores.append(score)\n    humanSummary = \" \"\n\ntotal = 0\nfor i in scores:\n    total += i\nprint('AVERAGE BLEU SCORE:->', total/len(scores))",
"target_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary thirteen days offers a compelling look at the cuban missile crisis and its talented cast deftly portrays the reallife people who were involved PAD PAD PAD PAD PAD PAD PAD \n"
],
[
"human_summary=[]\nfor i in range(50): \n #print('System Generated Summary:',summary)\n temp=[]\n for j in range(len(paddedSummary[i])):\n temp.append(int_to_vocab_summaries[paddedSummary[i][j]])\n human_summary.append(temp) \nhumanSummary=\" \" \nfor i in range(50):\n data=paddedReviews[i].reshape(1,200)\n summary=decode_seq(data)\n print('System Generated Summary:',summary)\n for j in range(len(human_summary[i])):\n humanSummary+=human_summary[i][j]+\" \"\n print('Human Summary',humanSummary)\n humanSummary=\" \"",
"target_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary a powerful documentarylike examination of the response to an occupying force the battle of algiers has not aged a bit since its release in 1966 PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary poor plot development and slow pacing keep 54 from capturing the energy of it is legendary namesake PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary while it hews closely to the 1984 original craig brewer infuses his footloose remake with toetapping energy and manages to keep the story fresh for a new generation PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary tender funny and touching the sessions provides an acting showcase for its talented stars and proves it is possible for hollywood to produce a grownup movie about sex PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary patrice chreaus exquisite rendering of joseph conrads the return brings underlying passions to surface in a longsuffering marriage PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary too over the top for its own good but ultimately rescued by the casts charm director john landis grace and several soulstirring musical numbers PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary an infectiously fun blend of special effects and comedy with bill murrays hilarious deadpan performance leading a cast of great comic turns PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position weaves \nHuman Summary small in scale but large in impact boy as career making performances particularly that by star andrew garfield and carefully crafted characters defy judgment and aggressively provoke debate PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary only the very young will get the most out of this silly trifle PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: position position position position \nHuman Summary it regurgitates plot points from earlier animated efforts and is not quite as funny as it should be but a topshelf voice cast and strong visuals help make megamind a \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary hal ashbys comedy is too dark and twisted for some and occasionally oversteps its bounds but there is no denying the films warm humor and big heart PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary it is a film about a guy injected with speed wait there is no 
bus it is a film about a guy who has to kick a bunch of squirmy \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary bearing little resemblance to the 1953 original house of wax is a formulaic but betterthanaverage teen slasher flick PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: happens happens happens happens \nHuman Summary gags are not that funny PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary despite the best efforts of its competent cast underworld rise of the lycans is an indistinguishable and unnecessary prequel PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary rich in sweet sincerity intelligence and good oldfashioned inspirational drama october sky is a comingofage story with a heart to match its hollywood craftsmanship PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary paris je taime is uneven but there are more than enough delightful moments in this omnibus tribute to the city of lights to tip the scale in its favor PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary thanks to a captivating performance from jeff bridges crazy heart transcends its overly familiar origins and finds new meaning in an old story PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary jason patric and ray liotta are electrifying in this gritty if a little too familiar cop drama PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary roland emmerich delivers his trademark visual and emotional bombast but the more anonymous stops and tries to convince the audience of its halfbaked theory the less convincing it becomes PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary audiences will need to tolerate a certain amount of narrative drift but thanks to sensitive direction from noah baumbach and an endearing performance from greta gerwig frances ha makes it \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: position position position position \nHuman Summary an upfront study of a drug addict confronting his demons oslo august 31st makes this dark journey worthwhile with fantastic directing and equally fantastic acting PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary if audiences walk away from this subversive surreal shocker not fully understanding the story they might also walk away with a deeper perception of the potential of film storytelling PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position 
position position \nHuman Summary pivoting on the unusual relationship between seasoned hitman and his 12yearold apprentice a breakout turn by young natalie portman luc bessons lon is a stylish and oddly affecting thriller PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary the queen of versailles is a timely engaging and richly drawn portrait of the american dream improbably composed of equal parts compassion and schadenfreude PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary planet 51 squanders an interesting premise with an overly familiar storyline stock characters and humor that alternates between curious and potentially offensive PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary in addition to its breathtaking underwater photography sharkwater has a convincing impassioned argument of how the plight of sharks affects everyone PAD PAD PAD PAD PAD PAD PAD PAD PAD \ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\n"
],
[
"humanSummary = \" \"\nscores = []\nfor i in range(50):\n    testData = paddedReviews[i].reshape(1, 200)\n    summary = decode_seq(testData)\n    print('System Generated Summary:', summary)\n    for j in range(len(human_summary[i])):\n        humanSummary += human_summary[i][j] + \" \"\n    print('Human Summary', humanSummary)\n    # calculation of BLEU score: the reference(s) go first, as a list of token lists,\n    # and the system-generated hypothesis goes second\n    score = sentence_bleu([nltk.word_tokenize(humanSummary)],\n                          nltk.word_tokenize(summary))  # optionally weights=(0.5, 0.5, 0, 0)\n    print('BlEU SCORE IS:->', score)\n    scores.append(score)\n    humanSummary = \" \"\n\ntotal = 0\nfor i in scores:\n    total += i\nprint('AVERAGE BLEU SCORE:->', total/len(scores))",
"target_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary a powerful documentarylike examination of the response to an occupying force the battle of algiers has not aged a bit since its release in 1966 PAD PAD PAD PAD PAD \nBlEU SCORE IS:-> 0.427287006396\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary poor plot development and slow pacing keep 54 from capturing the energy of it is legendary namesake PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \nBlEU SCORE IS:-> 0\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary while it hews closely to the 1984 original craig brewer infuses his footloose remake with toetapping energy and manages to keep the story fresh for a new generation PAD PAD \nBlEU SCORE IS:-> 0.427287006396\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary tender funny and touching the sessions provides an acting showcase for its talented stars and proves it is possible for hollywood to produce a grownup movie about sex PAD PAD \nBlEU SCORE IS:-> 0.427287006396\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary patrice chreaus exquisite rendering of joseph conrads the return brings underlying passions to surface in a longsuffering marriage PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \nBlEU SCORE IS:-> 0.427287006396\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary too over the top for its own good but ultimately rescued by the casts charm director john landis grace and several soulstirring musical numbers PAD PAD PAD PAD PAD PAD \nBlEU SCORE IS:-> 0\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary an infectiously fun blend of special effects and comedy with bill murrays hilarious deadpan performance leading a cast of great comic turns PAD PAD PAD PAD PAD PAD PAD PAD \nBlEU SCORE IS:-> 0.427287006396\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position weaves \nHuman Summary small in scale but large in impact boy as career making performances particularly that by star andrew garfield and carefully crafted characters defy judgment and aggressively provoke debate PAD PAD \nBlEU SCORE IS:-> 0\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary only the very young will get the most out of this silly trifle PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \nBlEU SCORE IS:-> 0\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: position position position position \nHuman Summary it regurgitates plot points from earlier animated efforts and is not quite as funny as it should be but a topshelf voice cast and strong visuals help make megamind a \nBlEU SCORE IS:-> 0\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary hal ashbys comedy is too dark and twisted for some and occasionally oversteps its bounds but 
there is no denying the films warm humor and big heart PAD PAD PAD \nBlEU SCORE IS:-> 0\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary it is a film about a guy injected with speed wait there is no bus it is a film about a guy who has to kick a bunch of squirmy \nBlEU SCORE IS:-> 0.427287006396\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary bearing little resemblance to the 1953 original house of wax is a formulaic but betterthanaverage teen slasher flick PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \nBlEU SCORE IS:-> 0.427287006396\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: happens happens happens happens \nHuman Summary gags are not that funny PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \nBlEU SCORE IS:-> 0\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary despite the best efforts of its competent cast underworld rise of the lycans is an indistinguishable and unnecessary prequel PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \nBlEU SCORE IS:-> 0\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary rich in sweet sincerity intelligence and good oldfashioned inspirational drama october sky is a comingofage story with a heart to match its hollywood craftsmanship PAD PAD PAD PAD PAD PAD \nBlEU SCORE IS:-> 0.427287006396\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary paris je taime is uneven but there are more than enough delightful moments in this omnibus tribute to the city of lights to tip the scale in its favor PAD \nBlEU SCORE IS:-> 0\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary thanks to a captivating performance from jeff bridges crazy heart transcends its overly familiar origins and finds new meaning in an old story PAD PAD PAD PAD PAD PAD PAD \nBlEU SCORE IS:-> 0.427287006396\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary jason patric and ray liotta are electrifying in this gritty if a little too familiar cop drama PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD PAD \nBlEU SCORE IS:-> 0.427287006396\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary roland emmerich delivers his trademark visual and emotional bombast but the more anonymous stops and tries to convince the audience of its halfbaked theory the less convincing it becomes PAD \nBlEU SCORE IS:-> 0\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary audiences will need to tolerate a certain amount of narrative drift but thanks to sensitive direction from noah baumbach and an endearing performance from greta gerwig frances ha makes it \nBlEU SCORE IS:-> 0.427287006396\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: position position position position \nHuman Summary an upfront study of a drug addict confronting 
his demons oslo august 31st makes this dark journey worthwhile with fantastic directing and equally fantastic acting PAD PAD PAD PAD PAD \nBlEU SCORE IS:-> 0\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary if audiences walk away from this subversive surreal shocker not fully understanding the story they might also walk away with a deeper perception of the potential of film storytelling PAD \nBlEU SCORE IS:-> 0.427287006396\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary pivoting on the unusual relationship between seasoned hitman and his 12yearold apprentice a breakout turn by young natalie portman luc bessons lon is a stylish and oddly affecting thriller PAD \nBlEU SCORE IS:-> 0.427287006396\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \nHuman Summary the queen of versailles is a timely engaging and richly drawn portrait of the american dream improbably composed of equal parts compassion and schadenfreude PAD PAD PAD PAD PAD PAD \nBlEU SCORE IS:-> 0.427287006396\ntarget_seq shape:-> (1, 1, 6617)\n(1, 1, 6617)\nterminated\nSystem Generated Summary: weaves position position position \n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9d17712fee9f07167899f307ed44d26ca24303 | 4,597 | ipynb | Jupyter Notebook | example_Comet_calculation.ipynb | Lavton/ft_icr_traps_calculation | f0e394554ae913e8dca76cefa9e27494c5308c1a | [
"MIT"
] | null | null | null | example_Comet_calculation.ipynb | Lavton/ft_icr_traps_calculation | f0e394554ae913e8dca76cefa9e27494c5308c1a | [
"MIT"
] | null | null | null | example_Comet_calculation.ipynb | Lavton/ft_icr_traps_calculation | f0e394554ae913e8dca76cefa9e27494c5308c1a | [
"MIT"
] | null | null | null | 22.64532 | 112 | 0.571242 | [
[
[
"# Calculate spherical harmonics coefficients and estimate the time of comet formation",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"import numpy as np\nimport scipy\nimport matplotlib.pyplot as plt\nfrom traps.abstract_trap import Coords\nfrom SIMION.PA import PA\nimport traps\nimport combine\nimport logging\nimport comet_calculator\nlogging.getLogger().setLevel(logging.INFO)\nfrom traps import abstract_trap\n\nimport utils_for_trap",
"_____no_output_____"
]
],
[
[
"generate, refine, adjust trap",
"_____no_output_____"
]
],
[
[
"trap = traps.get_current_trap()\nlogging.info(trap.__class__)\ntrap.generate_trap()\ntrap.refine_trap()\ntrap.adjust_trap() \ntrap.load_adjusted_pa()\nlogging.info(\"trap created\")",
"_____no_output_____"
]
],
[
[
"average the electric potential and calculate the coefficients of $Y_l^0$",
"_____no_output_____"
]
],
[
[
"Phi, Rs, Zs = utils_for_trap.get_averaged_phi(trap)\nd = trap.get_d()  # the characteristic distance of the trap\ncoeffs = comet_calculator.get_Y_coefs(Rs, Zs, Phi, d)\ncomet_calculator.print_coeffs(coeffs)",
"_____no_output_____"
]
],
[
[
"check the $A_2^0$ coefficient and adjust the voltage so that it matches the cubic trap",
"_____no_output_____"
]
],
[
[
"abs_A20 = coeffs[1]/d**2\nabs_A20, comet_calculator._A_20_COMPARABLE / abs_A20",
"_____no_output_____"
]
],
[
[
"estimate the time of comet formation",
"_____no_output_____"
]
],
[
[
"min_omega, max_omega = comet_calculator.find_delta_omega(*coeffs, d)\ntime_to_comet_formation = comet_calculator.estimate_comet_time_formation(min_omega, max_omega)\nprint(min_omega, max_omega)\nprint(f\"!!!! TIME OF COMET FORMATION for trap '{trap.name}' = {time_to_comet_formation:.1e} s\")",
"_____no_output_____"
]
],
[
[
"plot the electric potential",
"_____no_output_____"
]
],
[
[
"Phi, Rs, Zs = utils_for_trap.get_averaged_phi(trap, max_r=trap.trap_border.y, max_z=trap.trap_border.z)\nPhi20 = (abs_A20) * (Zs ** 2 - Rs ** 2 / 2)",
"_____no_output_____"
],
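[
"# Quick visual check (a sketch, assuming Phi and Phi20 are arrays of the same shape on the Rs/Zs grid above):\n# for a purely quadrupolar potential the points would fall on a straight line, so any curvature here\n# indicates higher-order (anharmonic) contributions on top of the A_2^0 term.\nimport matplotlib.pyplot as plt\nplt.figure(figsize=(5, 5))\nplt.plot(Phi20.ravel(), Phi.ravel(), '.', ms=2)\nplt.xlabel('ideal quadrupole term A20*(z^2 - r^2/2)')\nplt.ylabel('averaged potential Phi')\nplt.tight_layout()",
"_____no_output_____"
],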
[
"comet_calculator.plot_graphic(Rs, Zs, Phi, Phi20, axis=\"z\")\ncomet_calculator.plot_graphic(Rs, Zs, Phi, Phi20, axis=\"r\")\ncomet_calculator.combine_copy_delete(trap, copy=True, delete=False, show=True)",
"_____no_output_____"
],
[
"comet_calculator.plot_contour(Rs, Zs, Phi, trap, mode=\"half\")",
"_____no_output_____"
]
],
[
[
"combine these 2D plots with the 3D visualization",
"_____no_output_____"
]
],
[
[
"combine.combine_pics(trap.name, with_contour=True, show=True)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9d177700dc78572c6407629754ab7ea7f72897 | 80,120 | ipynb | Jupyter Notebook | EDA Chapter 1 (Pandas).ipynb | Jhoie/EDA-Practicals | ad8f968d5e5685d90fc4734df739432876ba182c | [
"MIT"
] | null | null | null | EDA Chapter 1 (Pandas).ipynb | Jhoie/EDA-Practicals | ad8f968d5e5685d90fc4734df739432876ba182c | [
"MIT"
] | null | null | null | EDA Chapter 1 (Pandas).ipynb | Jhoie/EDA-Practicals | ad8f968d5e5685d90fc4734df739432876ba182c | [
"MIT"
] | null | null | null | 48.062388 | 1,531 | 0.48169 | [
[
[
"import pandas as pd\n",
"_____no_output_____"
],
[
"data = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data')\ndata.head()  # head() shows the first 5 rows by default",
"_____no_output_____"
],
[
"# adding titles to features\n\ncolumns = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',\n 'marital_status', 'occupation', 'relationship', 'ethnicity', 'gender','capital_gain','capital_loss','hours_per_week','country_of_origin','income']\ndata = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data',names=columns)\ndata.head()",
"_____no_output_____"
]
],
[
[
"# DESCRIPTIVE STATISTICS",
"_____no_output_____"
]
],
[
[
"data.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 32561 entries, 0 to 32560\nData columns (total 15 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 age 32561 non-null int64 \n 1 workclass 32561 non-null object\n 2 fnlwgt 32561 non-null int64 \n 3 education 32561 non-null object\n 4 education_num 32561 non-null int64 \n 5 marital_status 32561 non-null object\n 6 occupation 32561 non-null object\n 7 relationship 32561 non-null object\n 8 ethnicity 32561 non-null object\n 9 gender 32561 non-null object\n 10 capital_gain 32561 non-null int64 \n 11 capital_loss 32561 non-null int64 \n 12 hours_per_week 32561 non-null int64 \n 13 country_of_origin 32561 non-null object\n 14 income 32561 non-null object\ndtypes: int64(6), object(9)\nmemory usage: 3.7+ MB\n"
],
[
"data.columns",
"_____no_output_____"
],
[
"data.shape",
"_____no_output_____"
],
[
"# Displays summary statistics for each numerical column in the dataframe\ndata.describe()",
"_____no_output_____"
],
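[
"# describe() above summarizes only the numerical columns; passing include='object'\n# gives the analogous quick summary (count, unique, top, freq) for the categorical columns\ndata.describe(include='object')",
"_____no_output_____"
],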
[
"#select a row\ndata.iloc[10]",
"_____no_output_____"
],
[
"#select a range of rows \ndata.iloc[10:15]",
"_____no_output_____"
],
[
"#select a range of rows with specific columns\ndata.iloc[10:15, 3:6]",
"_____no_output_____"
],
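[
"# iloc selects by position; .loc with a boolean mask selects by label/condition instead.\n# A small example using columns defined above:\ndata.loc[data['age'] > 50, ['age', 'education', 'income']].head()",
"_____no_output_____"
],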
[
"#select a column\ndata[\"age\"]",
"_____no_output_____"
],
[
"#select a range of columns\ndata[[\"age\",\"education\"]]",
"_____no_output_____"
],
[
"#how to create your own dataframe\n\nimport pandas as pd\nimport numpy as np\n\n#np.random.seed(24) \ndf = pd.DataFrame({'F': np.linspace(1, 10, 10)}) #create a kinda serial number feature\n\n#concatenating another datafram with random values to the previous one\ndf = pd.concat([df, pd.DataFrame(np.random.randn(10, 5), columns=list('EDCBA'))], axis=1)\n\n#setting the value in row1 cond colum2 to null\ndf.iloc[0, 2] = np.nan\ndf",
"_____no_output_____"
],
[
"\"\"\"Function to colour values\n*Negative values are RED\n*Positive values are BLACK\n*Values equal to zero or null are GREEN\n\"\"\"\n\n\ndef colorNegativeValueToRed(value):\n if value < 0:\n color = 'red'\n elif value > 0:\n color = 'black'\n else:\n color = 'green'\n\n return 'color: %s' % color",
"_____no_output_____"
],
[
"#calling the function to the dataframe\n\ns = df.style.applymap(colorNegativeValueToRed, subset=['A','B','C','D','E'])\ns",
"_____no_output_____"
],
[
"#calling the function on a particular feature in the dataframe\n\nd = df.style.applymap(colorNegativeValueToRed, subset=['A'])\nd",
"_____no_output_____"
],
[
"#using background colour instead\ndef colorNegativeValueToRed(value):\n if value < 0:\n color = 'red'\n elif value > 0:\n color = 'white'\n else:\n color = 'green'\n\n return 'background-color: %s' % color ",
"_____no_output_____"
],
[
"o = df.style.applymap(colorNegativeValueToRed, subset=['A','B','C','D','E'])\no",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9d1a4719836495bca00d31ff234630c28093c9 | 24,176 | ipynb | Jupyter Notebook | chinese-calligraphy-classifier-using-fast-ai.ipynb | wayofnumbers/chinese-calligraphy-classifier-fastai | 8f754674772bd9b9631795e7e42d37717c37e52d | [
"MIT"
] | null | null | null | chinese-calligraphy-classifier-using-fast-ai.ipynb | wayofnumbers/chinese-calligraphy-classifier-fastai | 8f754674772bd9b9631795e7e42d37717c37e52d | [
"MIT"
] | null | null | null | chinese-calligraphy-classifier-using-fast-ai.ipynb | wayofnumbers/chinese-calligraphy-classifier-fastai | 8f754674772bd9b9631795e7e42d37717c37e52d | [
"MIT"
] | null | null | null | 24.745138 | 368 | 0.588393 | [
[
[
"# How I Trained a Computer To Learn Calligraphy Styles\n",
"_____no_output_____"
],
[
"I wanted to start a series of posts for the projects I finished/polished for my Practical Deep Learning for Coders course.",
"_____no_output_____"
],
[
"## Creating your own dataset from Google Images\n\n*by: Francisco Ingham and Jeremy Howard. Inspired by [Adrian Rosebrock](https://www.pyimagesearch.com/2017/12/04/how-to-create-a-deep-learning-dataset-using-google-images/)*",
"_____no_output_____"
]
],
[
[
"from fastai import *\nfrom fastai.vision import *",
"_____no_output_____"
]
],
[
[
"## **Get a list of URLs**\n\n**Search and scroll**\n\nGo to Google Images and search for the images you are interested in. The more specific you are in your Google Search, the better the results and the less manual pruning you will have to do.\n\nScroll down until you've seen all the images you want to download, or until you see a button that says 'Show more results'. All the images you scrolled past are now available to download. To get more, click on the button, and continue scrolling. The maximum number of images Google Images shows is 700.\n\nIt is a good idea to put things you want to exclude into the search query, for instance if you are searching for the Eurasian wolf, \"canis lupus lupus\", it might be a good idea to exclude other variants:\n\n\"canis lupus lupus\" -dog -arctos -familiaris -baileyi -occidentalis\n\nYou can also limit your results to show only photos by clicking on Tools and selecting Photos from the Type dropdown.\n\n**Download into file**\n\nNow you must run some Javascript code in your browser which will save the URLs of all the images you want for you dataset.\n\nPress CtrlShiftJ in Windows/Linux and CmdOptJ in Mac, and a small window the javascript 'Console' will appear. That is where you will paste the JavaScript commands.\n\nYou will need to get the urls of each of the images. You can do this by running the following commands:\n\n```\nurls = Array.from(document.querySelectorAll('.rg_di .rg_meta')).map(el=>JSON.parse(el.textContent).ou);\nwindow.open('data:text/csv;charset=utf-8,' + escape(urls.join('\\n')));\n```\n\n**Create directory and upload urls file into your server**\n\nChoose an appropriate name for your labeled images. You can run these steps multiple times to grab different labels.",
"_____no_output_____"
],
[
"**Note:** You can download the urls locally and upload them to Kaggle as a dataset.\n\nHere, I have uploaded the urls for:\n - lishu\n - xiaozhuan\n - kaishu",
"_____no_output_____"
]
],
[
[
"classes = ['lishu','xiaozhuan','kaishu']",
"_____no_output_____"
],
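[
"# A compact sketch of the per-class download/verify steps done cell-by-cell below;\n# it assumes one '<class>.csv' url file per class in `path`, exactly as in the\n# commented-out cells that follow (left commented here for the same reason).\n# path = Path('data/')\n# for c in classes:\n#     dest = path/c\n#     dest.mkdir(parents=True, exist_ok=True)\n#     download_images(path/f'{c}.csv', dest, max_pics=200)\n#     verify_images(dest, delete=True, max_size=500)",
"_____no_output_____"
],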
[
"# folder = 'lishu'\n# file = 'lishu.csv'\n# path = Path('data/')\n# dest = path/folder\n# dest.mkdir(parents=True, exist_ok=True)",
"_____no_output_____"
],
[
"#??download_images",
"_____no_output_____"
],
[
"#download_images(path/file, dest, max_pics=200)",
"_____no_output_____"
],
[
"#folder = 'xiaozhuan'\n#file = 'xiaozhuan.csv'",
"_____no_output_____"
],
[
"#path = Path('data/')\n#dest = path/folder\n#dest.mkdir(parents=True, exist_ok=True)\n",
"_____no_output_____"
],
[
"#!cp ../input/chinese-calligraphy/{file} {path/file}",
"_____no_output_____"
],
[
"#download_images(path/file, dest, max_pics=200)",
"_____no_output_____"
],
[
"#folder = 'kaishu'\n#file = 'kaishu.csv'",
"_____no_output_____"
],
[
"#path = Path('data/')\n#dest = path/folder\n#dest.mkdir(parents=True, exist_ok=True)\n#!cp ../input/chinese-calligraphy/{file} {path/file}",
"_____no_output_____"
],
[
"#download_images(path/file, dest, max_pics=200)",
"_____no_output_____"
]
],
[
[
"Then we can remove any images that can't be opened:",
"_____no_output_____"
]
],
[
[
"#for c in classes:\n# print(c)\n# verify_images(path/c, delete=True, max_size=500)",
"_____no_output_____"
]
],
[
[
"## View data",
"_____no_output_____"
]
],
[
[
"!ls ../input/chinese-calligraphy-4/",
"_____no_output_____"
],
[
"!mkdir data\n!cp -a ../input/chinese-calligraphy-4/train ./data/train",
"_____no_output_____"
],
[
"# !ls data/train/kaishu",
"_____no_output_____"
],
[
"path = Path('./data')\n",
"_____no_output_____"
],
[
"np.random.seed(42)\ndata = ImageDataBunch.from_folder(path, valid_pct=0.2,\nds_tfms=get_transforms(do_flip=False), size=128, num_workers=4).normalize(imagenet_stats)",
"_____no_output_____"
],
[
"data.classes",
"_____no_output_____"
],
[
"data.show_batch(rows=3, figsize=(9,10))",
"_____no_output_____"
],
[
"# learn.data = data\n# data.train_ds[0][0].shape\n# learn.freeze",
"_____no_output_____"
],
[
"learn = cnn_learner(data, models.resnet50, metrics=error_rate)",
"_____no_output_____"
],
[
"learn.fit_one_cycle(4)",
"_____no_output_____"
],
[
"learn.save('stage-1')",
"_____no_output_____"
],
[
"learn.unfreeze()\nlearn.lr_find()",
"_____no_output_____"
],
[
"learn.recorder.plot()",
"_____no_output_____"
],
[
"data.classes, data.c, len(data.train_ds), len(data.valid_ds)",
"_____no_output_____"
],
[
"learn.fit_one_cycle(1, max_lr=slice(1e-6,1e-4))",
"_____no_output_____"
],
[
"np.random.seed(42)\ndata = ImageDataBunch.from_folder(path, valid_pct=0.2,\nds_tfms=get_transforms(do_flip=False), size=256, num_workers=4).normalize(imagenet_stats)\nlearn.data = data\ndata.train_ds[0][0].shape\nlearn.freeze()\nlearn.lr_find()\nlearn.recorder.plot()",
"_____no_output_____"
],
[
"learn.fit_one_cycle(2, max_lr=slice(1e-4,1e-3))",
"_____no_output_____"
],
[
"learn.fit_one_cycle(4, max_lr=slice(1e-4,1e-3))",
"_____no_output_____"
],
[
"learn.save('stage-1-256-rn50')\nlearn.unfreeze()\nlearn.fit_one_cycle(2, slice(1e-4, 1e-3))\n",
"_____no_output_____"
],
[
"learn.fit_one_cycle(2, slice(1e-4, 1e-3))\n",
"_____no_output_____"
],
[
"learn.export('export.pkl')\n!cp data/export.pkl export.pkl",
"_____no_output_____"
],
[
"import os\n#os.chdir(r'kaggle/working/')\nfrom IPython.display import FileLink\nFileLink(r'export.pkl')",
"_____no_output_____"
],
[
"learn.save('stage-2')",
"_____no_output_____"
]
],
[
[
"## Interpretation",
"_____no_output_____"
]
],
[
[
"learn.load('stage-2');",
"_____no_output_____"
],
[
"interp = ClassificationInterpretation.from_learner(learn)\n\nlosses,idxs = interp.top_losses()\n\nlen(data.valid_ds)==len(losses)==len(idxs)",
"_____no_output_____"
],
[
"interp.plot_top_losses(9)",
"_____no_output_____"
],
[
"interp.plot_confusion_matrix()",
"_____no_output_____"
]
],
[
[
"Possible sources of confusion:\n1. A single large character (unseen in training)\n2. A variant of some calligraphy style (unseen in training)\n3. Very small font size (blurry)\n\nWe need a better dataset!\n\nTo build a robust model that generalizes well, a big dataset is essential: the model needs to 'see' a wide variety of examples to judge new ones correctly. Take the single-character image above - the model saw very few like it, so it is hard to classify correctly. Around 100 clean images can train a good model to maybe 80-90% accuracy, but it is very hard to reach state-of-the-art levels, e.g. >97%.",
"_____no_output_____"
],
[
"## Cleaning Up\n\nSome of our top losses aren't due to bad performance by our model. There are images in our data set that shouldn't be.\n\nUsing the `ImageCleaner` widget from `fastai.widgets` we can prune our top losses, removing photos that don't belong.",
"_____no_output_____"
]
],
[
[
"from fastai.widgets import *",
"_____no_output_____"
]
],
[
[
"First we need to get the file paths from our top_losses. We can do this with `.from_toplosses`. We then feed the top losses indexes and corresponding dataset to `ImageCleaner`.\n\nNotice that the widget will not delete images directly from disk but it will create a new csv file `cleaned.csv` from where you can create a new ImageDataBunch with the corrected labels to continue training your model.",
"_____no_output_____"
],
[
"Note: please set the number of images to however many you'd like to view, e.g. ```n_imgs=100```",
"_____no_output_____"
]
],
[
[
"ds, idxs = DatasetFormatter().from_toplosses(learn, n_imgs=290)",
"_____no_output_____"
],
[
"# ImageCleaner(ds, idxs, path)",
"_____no_output_____"
]
],
[
[
"Flag photos for deletion by clicking 'Delete'. Then click 'Next Batch' to delete flagged photos and keep the rest in that row. ImageCleaner will show you a new row of images until there are no more to show. In this case, the widget will show you images until there are none left from top_losses.\n\nYou can also find duplicates in your dataset and delete them! To do this, you need to run .from_similars to get the potential duplicates' ids and then run ImageCleaner with duplicates=True. The API works in a similar way as with misclassified images: just choose the ones you want to delete and click 'Next Batch' until there are no more images left.",
"_____no_output_____"
]
],
[
[
"ds, idxs = DatasetFormatter().from_similars(learn)",
"_____no_output_____"
]
],
[
[
"Remember to recreate your ImageDataBunch from your cleaned.csv to include the changes you made in your data!",
"_____no_output_____"
],
[
"## Putting your model in production\n\n> You probably want to use CPU for inference, except at massive scale (and you almost certainly don't need to train in real time). If you don't have a GPU, that happens automatically. You can test your model on CPU like so:",
"_____no_output_____"
]
],
[
[
"#import fastai\n#fastai.defaults.device = torch.device('cpu')",
"_____no_output_____"
],
[
"#img = open_image(path/'black'/'00000021.jpg')\n#img",
"_____no_output_____"
],
[
"#classes = ['black', 'grizzly', 'teddys']",
"_____no_output_____"
],
[
"#data2 = ImageDataBunch.single_from_classes(path, classes, tfms=get_transforms(), size=224).normalize(imagenet_stats)",
"_____no_output_____"
],
[
"#learn = create_cnn(data2, models.resnet34).load('stage-2')",
"_____no_output_____"
],
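[
"# Single-image inference sketch using the exported 'export.pkl' from earlier;\n# load_learner is the fastai v1 way to reload an exported Learner (CPU-friendly).\n# The image file name below is only a placeholder.\n# learn_inf = load_learner(path)  # looks for path/'export.pkl'\n# img = open_image(path/'kaishu'/'00000001.jpg')  # placeholder path\n# pred_class, pred_idx, outputs = learn_inf.predict(img)\n# pred_class",
"_____no_output_____"
],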
[
"#pred_class,pred_idx,outputs = learn.predict(img)\n#pred_class",
"_____no_output_____"
]
],
[
[
"So you might create a route something like this ([thanks](https://github.com/simonw/cougar-or-not) to Simon Willison for the structure of this code):\n\n```\[email protected](\"/classify-url\", methods=[\"GET\"])\nasync def classify_url(request):\n    bytes = await get_bytes(request.query_params[\"url\"])\n    img = open_image(BytesIO(bytes))\n    _,_,losses = learner.predict(img)\n    return JSONResponse({\n        \"predictions\": sorted(\n            zip(learner.data.classes, map(float, losses)),\n            key=lambda p: p[1],\n            reverse=True\n        )\n    })\n```\n",
"_____no_output_____"
],
[
"(This [example](https://www.starlette.io/) is for the Starlette web app toolkit.)",
"_____no_output_____"
],
[
"## Things that can go wrong",
"_____no_output_____"
],
[
"- Most of the time things will train fine with the defaults\n- There's not much you really need to tune (despite what you've heard!)\n- Most likely are\n - Learning rate\n - Number of epochs",
"_____no_output_____"
],
[
"### Learning rate (LR) too low",
"_____no_output_____"
]
],
[
[
"#learn = create_cnn(data, models.resnet34, metrics=error_rate)",
"_____no_output_____"
],
[
"#learn.fit_one_cycle(5, max_lr=1e-5)",
"_____no_output_____"
],
[
"#learn.recorder.plot_losses()",
"_____no_output_____"
]
],
[
[
"As well as taking a really long time, it's getting too many looks at each image, so may overfit.",
"_____no_output_____"
],
[
"### Too few epochs",
"_____no_output_____"
]
],
[
[
"#learn = create_cnn(data, models.resnet34, metrics=error_rate, pretrained=False)",
"_____no_output_____"
],
[
"#learn.fit_one_cycle(1)",
"_____no_output_____"
]
],
[
[
"### Too many epochs",
"_____no_output_____"
]
],
[
[
"# np.random.seed(42)\n# data = ImageDataBunch.from_folder(path, train=\".\", valid_pct=0.9, bs=32, \n# ds_tfms=get_transforms(do_flip=False, max_rotate=0, max_zoom=1, max_lighting=0, max_warp=0\n# ),size=224, num_workers=4).normalize(imagenet_stats)",
"_____no_output_____"
],
[
"# learn = create_cnn(data, models.resnet50, metrics=error_rate, ps=0, wd=0)\n# learn.unfreeze()",
"_____no_output_____"
],
[
"# learn.fit_one_cycle(40, slice(1e-6,1e-4))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ec9d21e59e6d5797fa12e6c2966df7b87b5fd3ac | 243,279 | ipynb | Jupyter Notebook | notebooks/error_due_to_sensitivity.ipynb | abostroem/asassn15oz | ade090096b61b155c86108d1945bb7b4522365b8 | [
"BSD-3-Clause"
] | null | null | null | notebooks/error_due_to_sensitivity.ipynb | abostroem/asassn15oz | ade090096b61b155c86108d1945bb7b4522365b8 | [
"BSD-3-Clause"
] | 3 | 2019-02-24T23:24:33.000Z | 2019-02-24T23:25:12.000Z | notebooks/error_due_to_sensitivity.ipynb | abostroem/asassn15oz | ade090096b61b155c86108d1945bb7b4522365b8 | [
"BSD-3-Clause"
] | null | null | null | 606.680798 | 137,422 | 0.938737 | [
[
[
"from astropy.io import fits\nimport numpy as np\nfrom matplotlib import pyplot\n%matplotlib inline\nimport os\nimport sys\nimport glob",
"_____no_output_____"
],
[
"DATA_DIR = '../data/EFOSC/20151004_test/test_diff_standards/'\nCODE_DIR = '../code'\nFIG_DIR = '../figures'",
"_____no_output_____"
],
[
"sys.path.append(CODE_DIR)\nfrom util import calc_wavelength",
"_____no_output_____"
],
[
"filename = 'tASASSN-15oz_20151003_Gr13_Free_slit1.0_57678_1_e_{}.fits'\nendings = ['sensl745a', 'sensLTT7379']",
"_____no_output_____"
],
[
"fig = pyplot.figure(figsize = [15, 10])\nax_spec = fig.add_subplot(1,1,1)\npixels = np.arange(1, 1016)\nfor end in endings:\n ofile = fits.open(os.path.join(DATA_DIR, filename.format(end)))\n data = ofile[0].data\n header = ofile[0].header\n wl = calc_wavelength(header, pixels)\n ax_spec.plot(wl, data[0,0,:])",
"_____no_output_____"
]
],
[
[
"That's way too big a difference - let's look at the sensitivity curves",
"_____no_output_____"
]
],
[
[
"flist = glob.glob(os.path.join(DATA_DIR, 'sens*Free*57679_1.fits'))",
"_____no_output_____"
],
[
"fig = pyplot.figure(figsize = [15, 10])\nax_sens = fig.add_subplot(1,1,1)\npixels = np.arange(1, 1016)\nfor ifile in flist:\n ofile = fits.open(ifile)\n data = ofile[0].data\n header = ofile[0].header\n wl = calc_wavelength(header, pixels)\n ax_sens.plot(wl, data, label = os.path.basename(ifile))\nax_sens.legend(loc = 'best')\nax_sens.set_xlabel('Wavelength')\nax_sens.set_ylabel('Sensitivity')\nax_sens.set_title('Sensitivity Curves for 2 different standards')",
"_____no_output_____"
],
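[
"# Quantify the discrepancy between the two sensitivity curves (a sketch): interpolate the\n# second curve onto the wavelength grid of the first and plot the difference. If the curves\n# are tabulated in magnitudes, this difference maps directly onto a flux-calibration error.\n# Assumes flist still holds the two files plotted above.\ncurves = []\npixels = np.arange(1, 1016)\nfor ifile in flist:\n    ofile = fits.open(ifile)\n    sens_data = ofile[0].data\n    header = ofile[0].header\n    wl = calc_wavelength(header, pixels)\n    curves.append((wl, sens_data))\nwl_ref, sens_ref = curves[0]\nwl_cmp, sens_cmp = curves[1]\nsens_cmp_interp = np.interp(wl_ref, wl_cmp, sens_cmp)\nfig = pyplot.figure(figsize=[15, 5])\nax_diff = fig.add_subplot(1, 1, 1)\nax_diff.plot(wl_ref, sens_ref - sens_cmp_interp)\nax_diff.set_xlabel('Wavelength')\nax_diff.set_ylabel('Sensitivity difference')\nax_diff.set_title('Difference between the two sensitivity curves')",
"_____no_output_____"
],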
[
"header_l745 = fits.getheader(flist[0], 0)\nheader_ltt = fits.getheader(flist[1], 0)\nfor keyword in header_l745.keys():\n if header_l745[keyword] != header_ltt[keyword]:\n print(keyword, header_l745[keyword], header_ltt[keyword])",
"IRAF-TLM 2016-10-18T20:28:51 2016-10-18T23:35:04\nDATE 2016-10-18T20:28:51 2016-10-18T23:35:04\nOBJECT L745a LTT7379\nRA 115.087579 279.11018\nDEC -17.4171 -44.3099\nEXPTIME 100.0156 60.0096\nMJD-OBS 57299.39134651 57298.98478846\nDATE-OBS 2015-10-04T09:23:32.338 2015-10-03T23:38:05.723\nUTC 33805.0 85078.0\nLST 19889.883 71066.708\nCRVAL1 3644.8330078125 3645.8330078125\nESO ADA ABSROT END -150.4954 192.40119\nESO ADA ABSROT START -150.2625 192.97573\nESO ADA GUID DEC -17.57109 -44.08438\nESO ADA GUID RA 115.15208 279.00755\nESO ADA POSANG -26.695 133.293\nESO DET EXP NO 10041 9401\nESO DET EXP RDTTIME 22.39 22.354\nESO DET EXP XFERTIM 22.304 22.333\nESO DET SHUT TMCLOS 0.079 0.062\nESO DET SHUT TMOPEN 0.048 0.043\nESO DET TLM4 END 284.2 284.9\nESO DET TLM4 START 284.2 284.9\nESO DET TLM5 END 283.8 284.4\nESO DET TLM5 START 283.8 284.4\nESO DET TLM6 END 287.4 287.7\nESO DET TLM6 START 287.4 287.7\nESO DET WIN1 DIT1 100.015633 60.009583\nESO DET WIN1 DKTM 100.0871 60.092\nESO DET WIN1 UIT1 100.0 60.0\nESO OBS EXECTIME 1530 1210\nESO OBS ID 100378953 100378938\nESO OBS NAME STD_l745a_g11+gm13+gm16_1+1.5 STD_LTT7379_gm11+gm16+gm13_1+1.5\nESO OBS START 2015-10-04T09:12:00 2015-10-03T23:29:05\nESO OBS TARG NAME L745a LTT7379\nESO TEL AIRM END 1.171 1.066\nESO TEL AIRM START 1.177 1.064\nESO TEL ALT 58.151 69.951\nESO TEL AMBI FWHM END 0.93 1.04\nESO TEL AMBI FWHM START 0.93 1.04\nESO TEL AMBI PRES START 768.7 768.8\nESO TEL AMBI RHUM 21.0 23.0\nESO TEL AMBI TEMP 9.8 11.75\nESO TEL AMBI WINDDIR 20.0 15.0\nESO TEL AMBI WINDSP 12.4 12.7\nESO TEL AZ 255.511 36.896\nESO TEL FOCU VALUE -3.61 -3.399\nESO TEL MOON DEC 18.76398 18.34736\nESO TEL MOON RA 95.37413 89.943916\nESO TEL PARANG END -117.924 47.812\nESO TEL PARANG START -117.691 47.037\nESO TEL TARG ALPHA 74019.6 183626.29\nESO TEL TARG DELTA -172442.0 -441833.0\nESO TEL TSS TEMP8 9.12 12.51\nESO TPL START 2015-10-04T09:23:09 2015-10-03T23:37:41\nORIGFILE EFOSC_Spectrum277_0018.fits EFOSC_Spectrum276_0003.fits\nARCFILE EFOSC.2015-10-04T09:23:32.338.fits EFOSC.2015-10-03T23:38:05.723.fits\nCHECKSUM 34bSA3ZQ53aQA3YQ 9LWTAJUT2JUT9JUT\nDATASUM 47747404 4216566354\nCCDMEAN 44.40208 65.32654\nCCDMEANT 1161263739 1161263717\nPROV1 EFOSC.2015-10-04T09:23:32.338.fits EFOSC.2015-10-03T23:38:05.723.fits\nTRACE1 tL745a_20151003_Gr13_Free_slit1.0_57678_1_ex.fits tLTT7379_20151003_Gr13_Free_slit1.0_57678_1_ex.fits\nARC arc_L745a_20151003_Gr13_Free_slit1.0_57679_1.fits arc_LTT7379_20151003_Gr13_Free_slit1.0_57679_1.fits\nOBID1 100378953 100378938\nAIRMASS 1.173094990782428 1.066012349325884\nTEXPTIME 100.0156 60.0096\nTELAPSE 100.0155998859555 60.00960019882768\nTITLE L745a LTT7379\nTMID 57299.39192530398 57298.98513573778\nMJD-END 57299.39250409796 57298.98548301556\nSTDNAME l745a.dat ltt7379.dat\nMAGSTD 13.03 99.99\nAPNUM1 1 1 440.55 460.55 1 1 440.48 460.48\nSHIFT 12.8 13.0\n"
],
[
"flist = glob.glob(os.path.join(DATA_DIR, '*.fits'))\nfor ifile in flist:\n try:\n print(os.path.basename(ifile), fits.getval(ifile, 'MAGSTD', 0))\n except:\n pass",
"sens_20151003_Gr13_GG495_l745a_57678_1.fits 13.03\ntLTT7379_20151003_Gr13_GG495_slit1.5_57678_1_sensLTT7379.fits 99.99\natmo_tLTT7379_20151003_Gr13_Free_slit1.0_57678_1_ex.fits 99.99\ntL745a_20151003_Gr13_Free_slit1.0_57678_1_sensLTT7379.fits 13.03\nsens_20151003_Gr13_Free_l745a_57678_1_2ord.fits 13.03\ntLTT7379_20151003_Gr13_GG495_slit1.0_57678_1_ex_sensl745a.fits 99.99\ntLTT7379_20151003_Gr13_Free_slit1.0_57678_1_clean_sensLTT7379.fits 99.99\ntL745a_20151003_Gr13_Free_slit1.5_57678_1_sensLTT7379.fits 13.03\ntLTT7379_20151003_Gr13_GG495_slit1.0_57678_1_sensLTT7379.fits 99.99\natmo_tLTT7379_20151003_Gr13_GG495_slit1.0_57678_1_ex.fits 99.99\natmo_tL745a_20151003_Gr13_GG495_slit1.0_57678_1_ex.fits 13.03\ntL745a_20151003_Gr13_Free_slit1.0_57678_1_clean_sensLTT7379.fits 13.03\ntLTT7379_20151003_Gr13_GG495_slit1.0_57678_1_sensl745a.fits 99.99\ntL745a_20151003_Gr13_GG495_slit1.5_57678_1_sensl745a.fits 13.03\ntL745a_20151003_Gr13_GG495_slit1.5_57678_1_sensLTT7379.fits 13.03\nsens_20151003_Gr13_Free_ltt7379_57679_1_2ord.fits 99.99\ntLTT7379_20151003_Gr13_Free_slit1.5_57678_1_sensl745a.fits 99.99\ntL745a_20151003_Gr13_GG495_slit1.0_57678_1_clean_sensLTT7379.fits 13.03\nsens_20151003_Gr13_Free_l745a_57679_1.fits 13.03\ntL745a_20151003_Gr13_GG495_slit1.0_57678_1_sensLTT7379.fits 13.03\ntL745a_20151003_Gr13_Free_slit1.5_57678_1_sensl745a.fits 13.03\ntLTT7379_20151003_Gr13_Free_slit1.0_57678_1_sensl745a.fits 99.99\ntLTT7379_20151003_Gr13_GG495_slit1.0_57678_1_clean_sensLTT7379.fits 99.99\nsens_20151003_Gr13_Free_l745a_57678_1.fits 13.03\natmo2_tL745a_20151003_Gr13_GG495_slit1.0_57678_1_ex.fits 13.03\ntL745a_20151003_Gr13_Free_slit1.0_57678_1_ex_sensl745a.fits 13.03\ntLTT7379_20151003_Gr13_GG495_slit1.5_57678_1_sensl745a.fits 99.99\ntL745a_20151003_Gr13_GG495_slit1.0_57678_1_ex_sensLTT7379.fits 13.03\natmo_tL745a_20151003_Gr13_Free_slit1.0_57678_1_ex.fits 13.03\ntL745a_20151003_Gr13_GG495_slit1.0_57678_1_sensl745a.fits 13.03\ntL745a_20151003_Gr13_Free_slit1.0_57678_1_ex_sensLTT7379.fits 13.03\ntL745a_20151003_Gr13_Free_slit1.0_57678_1_sensl745a.fits 13.03\ntLTT7379_20151003_Gr13_Free_slit1.0_57678_1_ex_sensLTT7379.fits 99.99\natmo2_tLTT7379_20151003_Gr13_GG495_slit1.0_57678_1_ex.fits 99.99\natmo2_tLTT7379_20151003_Gr13_Free_slit1.0_57678_1_ex.fits 99.99\ntLTT7379_20151003_Gr13_GG495_slit1.0_57678_1_ex_sensLTT7379.fits 99.99\ntL745a_20151003_Gr13_GG495_slit1.0_57678_1_ex_sensl745a.fits 13.03\nsens_20151003_Gr13_Free_l745a_57679_1_2ord.fits 13.03\natmo2_tL745a_20151003_Gr13_Free_slit1.0_57678_1_ex.fits 13.03\nsens_20151003_Gr13_GG495_ltt7379_57679_1.fits 99.99\ntLTT7379_20151003_Gr13_Free_slit1.0_57678_1_ex_sensl745a.fits 99.99\ntL745a_20151003_Gr13_Free_slit1.0_57678_1_clean_sensl745a.fits 13.03\nsens_20151003_Gr13_GG495_l745a_57679_1.fits 13.03\ntLTT7379_20151003_Gr13_Free_slit1.0_57678_1_sensLTT7379.fits 99.99\ntL745a_20151003_Gr13_GG495_slit1.0_57678_1_clean_sensl745a.fits 13.03\ntLTT7379_20151003_Gr13_Free_slit1.5_57678_1_sensLTT7379.fits 99.99\nsens_20151003_Gr13_Free_ltt7379_57679_1.fits 99.99\n"
],
[
"pwd",
"_____no_output_____"
],
[
"glob.glob(os.path.join(DATA_DIR, 'sens*'))",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9d43e0f1986629761115263d7c3cd25544af84 | 47,906 | ipynb | Jupyter Notebook | tests/flyscan_2018Nov24.ipynb | NSLS-II-LIX/profile_collection | 3b942b90404c973625eb884c7a7a9c5e5a3a144a | [
"BSD-3-Clause"
] | null | null | null | tests/flyscan_2018Nov24.ipynb | NSLS-II-LIX/profile_collection | 3b942b90404c973625eb884c7a7a9c5e5a3a144a | [
"BSD-3-Clause"
] | 10 | 2016-05-12T20:04:27.000Z | 2021-03-02T17:00:08.000Z | tests/flyscan_2018Nov24.ipynb | NSLS-II-LIX/profile_collection | 3b942b90404c973625eb884c7a7a9c5e5a3a144a | [
"BSD-3-Clause"
] | 4 | 2017-05-08T15:20:28.000Z | 2020-04-06T15:26:53.000Z | 142.154303 | 1,455 | 0.686803 | [
[
[
"%run -i traj_py ",
"scan.rY ss2_ry\nscan.Y ss2_y\nscan.X ss2_x\n"
],
[
"login('test', 'test', 'test')",
"Logging hadn't been started.\nActivating auto-logging. Current session state plus future input saved.\nFilename : /GPFS/xf16id/exp_path/test/test/log-test.2018Nov24_17:51:12\nMode : backup\nOutput logging : True\nRaw input log : True\nTimestamping : True\nState : active\n"
],
[
"DETS = [pil1M_ext, pilW1_ext, pilW2_ext]",
"_____no_output_____"
],
[
"RE(raster(DETS, 0.2, ss2.x, -0.1, 0.1, 21, ss2.y, -0.1, 0.1, 6))",
"setting up to collect 126 exposures of 0.20 sec ...\npil1M_ext staging\npil1M_ext stage sigs updated\nresetting file number for pil1M_ext\npil1M_ext super staged\npil1M_ext checking armed status\npil1M_ext staged\npilW1_ext staging\npilW1_ext stage sigs updated\nresetting file number for pilW1_ext\npilW1_ext super staged\npilW1_ext checking armed status\npilW1_ext staged\npilW2_ext staging\npilW2_ext stage sigs updated\nresetting file number for pilW2_ext\npilW2_ext super staged\npilW2_ext checking armed status\npilW2_ext staged\nTransient Scan ID: 1 Time: 2018/11/24 17:51:17\nPersistent Unique Scan ID: 'fe87e314-58dd-4526-87ba-eabd044ade9b'\nScan ID: 1\nUnique ID: fe87e314-58dd-4526-87ba-eabd044ade9b\n#STARTDOC : 1543099877.9106731\n#STARTDOC : Sat Nov 24 17:51:17 2018\nsetting triggering parameters: 3, 23, 0.205\nsetting triggering parameters: 3, 23, 0.205\nsetting triggering parameters: 3, 23, 0.205\nsetting triggering parameters: 3, 23, 0.205\nsetting triggering parameters: 3, 23, 0.205\nsetting triggering parameters: 3, 23, 0.205\n"
],
[
"header = db[-1]",
"_____no_output_____"
],
[
"header.fields()",
"_____no_output_____"
],
[
"pilatus_trigger_mode = triggerMode.software_trigger_multi_frame\ndata = db.get_table(header, fill=True)",
"_____no_output_____"
],
[
"data['pilW1_ext_image'][1].shape",
"_____no_output_____"
],
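[
"# Sketch: fold the per-exposure detector totals back onto the raster grid for a quick intensity map.\n# Assumes the full 126-frame stack (6 ss2.y rows x 21 ss2.x points, in acquisition order) is stored\n# in this table row; adjust the row index / ordering if the layout differs.\nimport numpy as np\nimport matplotlib.pyplot as plt\nframes = np.asarray(data['pilW1_ext_image'][1])   # (n_frames, ny, nx)\ntotals = frames.sum(axis=(1, 2))\nraster_map = totals.reshape(6, 21)                # slow axis (y) first\nplt.imshow(raster_map, aspect='auto', origin='lower')\nplt.colorbar(label='integrated counts')\nplt.xlabel('ss2.x point index')\nplt.ylabel('ss2.y point index')",
"_____no_output_____"
],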
[
"sd.monitors = [em1]\nRE(raster(DETS, 0.2, ss2.x, -0.1, 0.1, 21, ss2.y, -0.1, 0.1, 6))",
"setting up to collect 126 exposures of 0.20 sec ...\npil1M_ext staging\npil1M_ext stage sigs updated\nresetting file number for pil1M_ext\npil1M_ext super staged\npil1M_ext checking armed status\npil1M_ext staged\npilW1_ext staging\npilW1_ext stage sigs updated\nresetting file number for pilW1_ext\npilW1_ext super staged\npilW1_ext checking armed status\npilW1_ext staged\npilW2_ext staging\npilW2_ext stage sigs updated\nresetting file number for pilW2_ext\npilW2_ext super staged\npilW2_ext checking armed status\npilW2_ext staged\nTransient Scan ID: 2 Time: 2018/11/24 17:52:57\nPersistent Unique Scan ID: 'c09f8775-3ea2-4a35-8488-add5cd832997'\nScan ID: 2\nUnique ID: c09f8775-3ea2-4a35-8488-add5cd832997\n#STARTDOC : 1543099977.5260987\n#STARTDOC : Sat Nov 24 17:52:57 2018\nNew stream: 'em1_monitor'\n\n\n\nScan ID: 2\nUnique ID: c09f8775-3ea2-4a35-8488-add5cd832997\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9d49b7f2ec10685f6533fc9de92ba4b743eeed | 375,584 | ipynb | Jupyter Notebook | module2-random-forests/LS_DS_222.ipynb | masonnystrom/DS-Unit-2-Kaggle-Challenge | ff5af3d642262fa1fac4a20b964955a2973f0743 | [
"MIT"
] | null | null | null | module2-random-forests/LS_DS_222.ipynb | masonnystrom/DS-Unit-2-Kaggle-Challenge | ff5af3d642262fa1fac4a20b964955a2973f0743 | [
"MIT"
] | null | null | null | module2-random-forests/LS_DS_222.ipynb | masonnystrom/DS-Unit-2-Kaggle-Challenge | ff5af3d642262fa1fac4a20b964955a2973f0743 | [
"MIT"
] | null | null | null | 111.449258 | 32,362 | 0.769322 | [
[
[
"<a href=\"https://colab.research.google.com/github/masonnystrom/DS-Unit-2-Kaggle-Challenge/blob/master/module2-random-forests/LS_DS_222.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Lambda School Data Science\n\n*Unit 2, Sprint 2, Module 2*\n\n---",
"_____no_output_____"
],
[
"# Random Forests",
"_____no_output_____"
],
[
"- use scikit-learn for **random forests**\n- do **ordinal encoding** with high-cardinality categoricals\n- understand how categorical encodings affect trees differently compared to linear models\n- understand how tree ensembles reduce overfitting compared to a single decision tree with unlimited depth",
"_____no_output_____"
],
[
"Today's lesson has two take-away messages:\n\n#### Try Tree Ensembles when you do machine learning with labeled, tabular data\n- \"Tree Ensembles\" means Random Forest or Gradient Boosting models. \n- [Tree Ensembles often have the best predictive accuracy](https://arxiv.org/abs/1708.05070) with labeled, tabular data.\n- Why? Because trees can fit non-linear, non-[monotonic](https://en.wikipedia.org/wiki/Monotonic_function) relationships, and [interactions](https://christophm.github.io/interpretable-ml-book/interaction.html) between features.\n- A single decision tree, grown to unlimited depth, will [overfit](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/). We solve this problem by ensembling trees, with bagging (Random Forest) or boosting (Gradient Boosting).\n- Random Forest's advantage: may be less sensitive to hyperparameters. Gradient Boosting's advantage: may get better predictive accuracy.\n\n#### One-hot encoding isn’t the only way, and may not be the best way, of categorical encoding for tree ensembles.\n- For example, tree ensembles can work with arbitrary \"ordinal\" encoding! (Randomly assigning an integer to each category.) Compared to one-hot encoding, the dimensionality will be lower, and the predictive accuracy may be just as good or even better.\n",
"_____no_output_____"
],
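    [
        "# Added illustration, not part of the original lesson: a minimal pandas-only sketch of\n# the dimensionality point above (it uses no libraries from the setup cell below).\n# One-hot encoding creates one column per category, while an arbitrary-integer\n# ('ordinal') encoding keeps a single column. The toy column is made up.\nimport pandas as pd\n\ntoy = pd.DataFrame({'funder': ['Danida', 'Unicef', 'World Bank', 'Danida', 'Hesawa']})\n\none_hot = pd.get_dummies(toy['funder'])\nordinal_codes, categories = pd.factorize(toy['funder'])\n\nprint('One-hot columns:', one_hot.shape[1])\nprint('Ordinal columns: 1, codes:', list(ordinal_codes))",
        "_____no_output_____"
    ],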
[
"### Setup\n\nRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.\n\nLibraries\n\n- **category_encoders** \n- **graphviz**\n- ipywidgets\n- matplotlib\n- numpy\n- pandas\n- seaborn\n- scikit-learn",
"_____no_output_____"
]
],
[
[
"%%capture\nimport sys\n\n# If you're on Colab:\nif 'google.colab' in sys.modules:\n DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'\n !pip install category_encoders==2.*\n\n# If you're working locally:\nelse:\n DATA_PATH = '../data/'",
"_____no_output_____"
]
],
[
[
"# Use scikit-learn for random forests",
"_____no_output_____"
],
[
"## Overview\n\nLet's fit a Random Forest!\n\n\n\n[Chris Albon, MachineLearningFlashcards.com](https://twitter.com/chrisalbon/status/1181261589887909889)",
"_____no_output_____"
],
[
"### Solution example\n\nFirst, read & wrangle the data.\n\n> Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. (For example, [what other columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values) What other columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\n\n# Merge train_features.csv & train_labels.csv\ntrain = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), \n pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))\n\n# Read test_features.csv & sample_submission.csv\ntest = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')\nsample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')\n\n# Split train into train & val\ntrain, val = train_test_split(train, train_size=0.80, test_size=0.20, \n stratify=train['status_group'], random_state=42)\n\n\ndef wrangle(X):\n \"\"\"Wrangle train, validate, and test sets in the same way\"\"\"\n \n # Prevent SettingWithCopyWarning\n X = X.copy()\n \n # About 3% of the time, latitude has small values near zero,\n # outside Tanzania, so we'll treat these values like zero.\n X['latitude'] = X['latitude'].replace(-2e-08, 0)\n \n # When columns have zeros and shouldn't, they are like null values.\n # So we will replace the zeros with nulls, and impute missing values later.\n # Also create a \"missing indicator\" column, because the fact that\n # values are missing may be a predictive signal.\n cols_with_zeros = ['longitude', 'latitude', 'construction_year', \n 'gps_height', 'population']\n for col in cols_with_zeros:\n X[col] = X[col].replace(0, np.nan)\n X[col+'_MISSING'] = X[col].isnull()\n \n # Drop duplicate columns\n duplicates = ['quantity_group', 'payment_type']\n X = X.drop(columns=duplicates)\n \n # Drop recorded_by (never varies) and id (always varies, random)\n unusable_variance = ['recorded_by', 'id']\n X = X.drop(columns=unusable_variance)\n \n # Convert date_recorded to datetime\n X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)\n \n # Extract components from date_recorded, then drop the original column\n X['year_recorded'] = X['date_recorded'].dt.year\n X['month_recorded'] = X['date_recorded'].dt.month\n X['day_recorded'] = X['date_recorded'].dt.day\n X = X.drop(columns='date_recorded')\n \n # Engineer feature: how many years from construction_year to date_recorded\n X['years'] = X['year_recorded'] - X['construction_year']\n X['years_MISSING'] = X['years'].isnull()\n \n # return the wrangled dataframe\n return X\n\ntrain = wrangle(train)\nval = wrangle(val)\ntest = wrangle(test)",
"_____no_output_____"
],
[
"# The status_group column is the target\ntarget = 'status_group'\n\n# Get a dataframe with all train columns except the target\ntrain_features = train.drop(columns=[target])\n\n# Get a list of the numeric features\nnumeric_features = train_features.select_dtypes(include='number').columns.tolist()\n\n# Get a series with the cardinality of the nonnumeric features\ncardinality = train_features.select_dtypes(exclude='number').nunique()\n\n# Get a list of all categorical features with cardinality <= 50\ncategorical_features = cardinality[cardinality <= 50].index.tolist()\n\n# Combine the lists \nfeatures = numeric_features + categorical_features",
"_____no_output_____"
],
[
"# Arrange data into X features matrix and y target vector \nX_train = train[features]\ny_train = train[target]\nX_val = val[features]\ny_val = val[target]\nX_test = test[features]",
"_____no_output_____"
]
],
[
[
"## Follow Along\n\n[Scikit-Learn User Guide: Random Forests](https://scikit-learn.org/stable/modules/ensemble.html#random-forests) ",
"_____no_output_____"
]
],
[
[
"# TODO\n%%time\nimport category_encoders as ce\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.pipeline import make_pipeline\n\npipeline = make_pipeline(\n ce.OneHotEncoder(use_cat_names=True),\n SimpleImputer(strategy='mean'),\n RandomForestClassifier(random_state=0, n_jobs=-1)\n)\n\npipeline.fit(X_train, y_train)\nprint('Validation Accuracy:', pipeline.score(X_val, y_val))",
"Validation Accuracy: 0.8081649831649832\nCPU times: user 23 s, sys: 358 ms, total: 23.4 s\nWall time: 13.9 s\n"
],
[
"# TODO\n%%time\nimport category_encoders as ce\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.pipeline import make_pipeline\n\npipeline = make_pipeline(\n ce.OneHotEncoder(use_cat_names=True),\n SimpleImputer(strategy='median'),\n RandomForestClassifier(random_state=0, n_jobs=-1)\n)\n\npipeline.fit(X_train, y_train)\nprint('Validation Accuracy:', pipeline.score(X_val, y_val))",
"Validation Accuracy: 0.8088383838383838\nCPU times: user 24.6 s, sys: 189 ms, total: 24.8 s\nWall time: 15 s\n"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\n#get encoded values\nencoder = pipeline.named_steps['onehotencoder']\nencoded = encoder.transform(X_train)\nprint('X_train shape after encoding', encoded.shape)\n\n# get the feature importances\nrf = pipeline.named_steps['randomforestclassifier']\nimportances = pd.Series(rf.feature_importances_, encoded.columns)",
"X_train shape after encoding (47520, 182)\n"
],
[
"# plot top 20 \nn = 20\nplt.figure(figsize=(10, n/2))\nplt.title(f'Top {n} features')\nimportances.sort_values()[-n:].plot.barh();\n",
"_____no_output_____"
]
],
[
[
"# Do ordinal encoding with high-cardinality categoricals",
"_____no_output_____"
],
[
"## Overview\n\nhttp://contrib.scikit-learn.org/categorical-encoding/ordinal.html",
"_____no_output_____"
],
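    [
        "# Added sketch, not part of the original lesson: what OrdinalEncoder does to one\n# high-cardinality text column. Every distinct string is mapped to an arbitrary integer,\n# so the output stays a single numeric column no matter how many categories there are.\n# The toy values below are made up for illustration.\nimport pandas as pd\nimport category_encoders as ce\n\ntoy = pd.DataFrame({'installer': ['DWE', 'Gov', 'DWE', 'Commu', 'DANIDA', 'Gov']})\nencoded = ce.OrdinalEncoder().fit_transform(toy)\nprint(encoded['installer'].tolist())",
        "_____no_output_____"
    ],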
[
"## Follow Along",
"_____no_output_____"
]
],
[
[
"# TODO\nX_train = train.drop(columns=target)\ny_train = train[target]\n\nX_val = val.drop(columns=target)\ny_val = val[target]\n\nX_test = test",
"_____no_output_____"
],
[
"X_train.columns",
"_____no_output_____"
],
[
"%%time\nimport category_encoders as ce\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.pipeline import make_pipeline\n\npipeline = make_pipeline(\n ce.OrdinalEncoder(),\n SimpleImputer(strategy='median'),\n RandomForestClassifier(random_state=0, n_jobs=-1)\n)\n\npipeline.fit(X_train, y_train)\nprint('Train Accuracy:', pipeline.score(X_train, y_train))\nprint('Validation Accuracy:', pipeline.score(X_val, y_val))",
"Train Accuracy: 0.9999579124579124\nValidation Accuracy: 0.8092592592592592\nCPU times: user 20.6 s, sys: 172 ms, total: 20.8 s\nWall time: 11.6 s\n"
]
],
[
[
"# Understand how categorical encodings affect trees differently compared to linear models",
"_____no_output_____"
],
[
"## Follow Along",
"_____no_output_____"
],
[
"### Categorical exploration, 1 feature at a time\n\nChange `feature`, then re-run these cells!",
"_____no_output_____"
]
],
[
[
"feature = 'extraction_type_class'",
"_____no_output_____"
],
[
"X_train[feature].value_counts()",
"_____no_output_____"
],
[
"import seaborn as sns\nplt.figure(figsize=(16,9))\nsns.barplot(\n x=train[feature], \n y=train['status_group']=='functional', \n color='grey'\n);",
"_____no_output_____"
],
[
"X_train[feature].head(20)",
"_____no_output_____"
]
],
[
[
"### [One Hot Encoding](http://contrib.scikit-learn.org/categorical-encoding/onehot.html)\n\n> Onehot (or dummy) coding for categorical features, produces one feature per category, each binary.\n\nWarning: May run slow, or run out of memory, with high cardinality categoricals!",
"_____no_output_____"
]
],
[
[
"encoder = ce.OneHotEncoder(use_cat_names=True)\nencoded = encoder.fit_transform(X_train[[feature]])\nprint(f'{len(encoded.columns)} columns')\nencoded.head(20)",
"7 columns\n"
]
],
[
[
"#### One-Hot Encoding, Logistic Regression, Validation Accuracy",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegressionCV\nfrom sklearn.preprocessing import StandardScaler\n\nlr = make_pipeline(\n ce.OneHotEncoder(use_cat_names=True), \n SimpleImputer(), \n StandardScaler(), \n LogisticRegressionCV(multi_class='auto', solver='lbfgs', cv=5, n_jobs=-1)\n)\n\nlr.fit(X_train[[feature]], y_train)\nscore = lr.score(X_val[[feature]], y_val)\nprint('Logistic Regression, Validation Accuracy', score)",
"Logistic Regression, Validation Accuracy 0.6202861952861953\n"
]
],
[
[
"#### One-Hot Encoding, Decision Tree, Validation Accuracy",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier\n\ndt = make_pipeline(\n ce.OneHotEncoder(use_cat_names=True), \n SimpleImputer(), \n DecisionTreeClassifier(random_state=42)\n)\n\ndt.fit(X_train[[feature]], y_train)\nscore = dt.score(X_val[[feature]], y_val)\nprint('Decision Tree, Validation Accuracy', score)",
"Decision Tree, Validation Accuracy 0.6202861952861953\n"
]
],
[
[
"#### One-Hot Encoding, Logistic Regression, Model Interpretation",
"_____no_output_____"
]
],
[
[
"model = lr.named_steps['logisticregressioncv']\nencoder = lr.named_steps['onehotencoder']\nencoded_columns = encoder.transform(X_val[[feature]]).columns\ncoefficients = pd.Series(model.coef_[0], encoded_columns)\ncoefficients.sort_values().plot.barh(color='grey');",
"_____no_output_____"
]
],
[
[
"#### One-Hot Encoding, Decision Tree, Model Interpretation",
"_____no_output_____"
]
],
[
[
"# Plot tree\n# https://scikit-learn.org/stable/modules/generated/sklearn.tree.export_graphviz.html\nimport graphviz\nfrom sklearn.tree import export_graphviz\n\nmodel = dt.named_steps['decisiontreeclassifier']\nencoder = dt.named_steps['onehotencoder']\nencoded_columns = encoder.transform(X_val[[feature]]).columns\n\ndot_data = export_graphviz(model, \n out_file=None, \n max_depth=7, \n feature_names=encoded_columns,\n class_names=model.classes_, \n impurity=False, \n filled=True, \n proportion=True, \n rounded=True) \ndisplay(graphviz.Source(dot_data))",
"_____no_output_____"
]
],
[
[
"### [Ordinal Encoding](http://contrib.scikit-learn.org/categorical-encoding/ordinal.html)\n\n> Ordinal encoding uses a single column of integers to represent the classes. An optional mapping dict can be passed in; in this case, we use the knowledge that there is some true order to the classes themselves. Otherwise, the classes are assumed to have no true order and integers are selected at random.",
"_____no_output_____"
]
],
[
[
"encoder = ce.OrdinalEncoder()\nencoded = encoder.fit_transform(X_train[[feature]])\nprint(f'1 column, {encoded[feature].nunique()} unique values')\nencoded.head(20)",
"1 column, 7 unique values\n"
]
],
[
[
"#### Ordinal Encoding, Logistic Regression, Validation Accuracy",
"_____no_output_____"
]
],
[
[
"lr = make_pipeline(\n ce.OrdinalEncoder(), \n SimpleImputer(), \n StandardScaler(), \n LogisticRegressionCV(multi_class='auto', solver='lbfgs', cv=5, n_jobs=-1)\n)\n\nlr.fit(X_train[[feature]], y_train)\nscore = lr.score(X_val[[feature]], y_val)\nprint('Logistic Regression, Validation Accuracy', score)",
"Logistic Regression, Validation Accuracy 0.5417508417508418\n"
]
],
[
[
"#### Ordinal Encoding, Decision Tree, Validation Accuracy",
"_____no_output_____"
]
],
[
[
"dt = make_pipeline(\n ce.OrdinalEncoder(), \n SimpleImputer(), \n DecisionTreeClassifier(random_state=42)\n)\n\ndt.fit(X_train[[feature]], y_train)\nscore = dt.score(X_val[[feature]], y_val)\nprint('Decision Tree, Validation Accuracy', score)",
"Decision Tree, Validation Accuracy 0.6202861952861953\n"
]
],
[
[
"#### Ordinal Encoding, Logistic Regression, Model Interpretation",
"_____no_output_____"
]
],
[
[
"model = lr.named_steps['logisticregressioncv']\nencoder = lr.named_steps['ordinalencoder']\nencoded_columns = encoder.transform(X_val[[feature]]).columns\ncoefficients = pd.Series(model.coef_[0], encoded_columns)\ncoefficients.sort_values().plot.barh(color='grey');",
"_____no_output_____"
]
],
[
[
"#### Ordinal Encoding, Decision Tree, Model Interpretation",
"_____no_output_____"
]
],
[
[
"model = dt.named_steps['decisiontreeclassifier']\nencoder = dt.named_steps['ordinalencoder']\nencoded_columns = encoder.transform(X_val[[feature]]).columns\n\ndot_data = export_graphviz(model, \n out_file=None, \n max_depth=5, \n feature_names=encoded_columns,\n class_names=model.classes_, \n impurity=False, \n filled=True, \n proportion=True, \n rounded=True) \ndisplay(graphviz.Source(dot_data))",
"_____no_output_____"
]
],
[
[
"# Understand how tree ensembles reduce overfitting compared to a single decision tree with unlimited depth",
"_____no_output_____"
],
[
"## Overview",
"_____no_output_____"
],
[
"### What's \"random\" about random forests?\n1. Each tree trains on a random bootstrap sample of the data. (In scikit-learn, for `RandomForestRegressor` and `RandomForestClassifier`, the `bootstrap` parameter's default is `True`.) This type of ensembling is called Bagging. (Bootstrap AGGregatING.)\n2. Each split considers a random subset of the features. (In scikit-learn, when the `max_features` parameter is not `None`.) \n\nFor extra randomness, you can try [\"extremely randomized trees\"](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees)!\n\n>In extremely randomized trees (see [ExtraTreesClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesClassifier.html) and [ExtraTreesRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesRegressor.html) classes), randomness goes one step further in the way splits are computed. As in random forests, a random subset of candidate features is used, but instead of looking for the most discriminative thresholds, thresholds are drawn at random for each candidate feature and the best of these randomly-generated thresholds is picked as the splitting rule. This usually allows to reduce the variance of the model a bit more, at the expense of a slightly greater increase in bias",
"_____no_output_____"
],
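    [
        "# Added sketch, not part of the original lesson: the two sources of randomness described\n# above, plus the extremely randomized variant, on a small synthetic dataset. The numbers\n# are arbitrary; only the API usage matters here.\nfrom sklearn.datasets import make_classification\nfrom sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier\n\nX_toy, y_toy = make_classification(n_samples=500, n_features=10, random_state=0)\n\n# bootstrap=True: each tree trains on a bootstrap sample of the rows.\n# max_features='sqrt': each split considers a random subset of the features.\nforest = RandomForestClassifier(n_estimators=100, bootstrap=True, max_features='sqrt', random_state=0)\nextra = ExtraTreesClassifier(n_estimators=100, random_state=0)\n\nprint('Random Forest train accuracy:', forest.fit(X_toy, y_toy).score(X_toy, y_toy))\nprint('Extra Trees train accuracy:', extra.fit(X_toy, y_toy).score(X_toy, y_toy))",
        "_____no_output_____"
    ],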
[
"## Follow Along",
"_____no_output_____"
],
[
"### Example: [predicting golf putts](https://statmodeling.stat.columbia.edu/2008/12/04/the_golf_puttin/)\n(1 feature, non-linear, regression)",
"_____no_output_____"
]
],
[
[
"putts = pd.DataFrame(\n columns=['distance', 'tries', 'successes'], \n data = [[2, 1443, 1346],\n [3, 694, 577],\n [4, 455, 337],\n [5, 353, 208],\n [6, 272, 149],\n [7, 256, 136],\n [8, 240, 111],\n [9, 217, 69],\n [10, 200, 67],\n [11, 237, 75],\n [12, 202, 52],\n [13, 192, 46],\n [14, 174, 54],\n [15, 167, 28],\n [16, 201, 27],\n [17, 195, 31],\n [18, 191, 33],\n [19, 147, 20],\n [20, 152, 24]]\n)\n\nputts['rate of success'] = putts['successes'] / putts['tries']\nputts_X = putts[['distance']]\nputts_y = putts['rate of success']",
"_____no_output_____"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom ipywidgets import interact\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.tree import DecisionTreeRegressor\n\ndef putt_trees(max_depth=1, n_estimators=1):\n models = [DecisionTreeRegressor(max_depth=max_depth), \n RandomForestRegressor(max_depth=max_depth, n_estimators=n_estimators)]\n \n for model in models:\n name = model.__class__.__name__\n model.fit(putts_X, putts_y)\n ax = putts.plot('distance', 'rate of success', kind='scatter', title=name)\n ax.step(putts_X, model.predict(putts_X), where='mid')\n plt.show()\n \ninteract(putt_trees, max_depth=(1,6,1), n_estimators=(10,40,10));",
"_____no_output_____"
]
],
[
[
"### Bagging demo, with golf putts data\nhttps://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sample.html",
"_____no_output_____"
]
],
[
[
"# Do-it-yourself Bagging Ensemble of Decision Trees (like a Random Forest)\ndef diy_bagging(max_depth=1, n_estimators=1):\n y_preds = []\n for i in range(n_estimators):\n title = f'Tree {i+1}'\n bootstrap_sample = putts.sample(n=len(putts), replace=True).sort_values(by='distance')\n bootstrap_X = bootstrap_sample[['distance']]\n bootstrap_y = bootstrap_sample['rate of success']\n tree = DecisionTreeRegressor(max_depth=max_depth)\n tree.fit(bootstrap_X, bootstrap_y)\n y_pred = tree.predict(bootstrap_X)\n y_preds.append(y_pred)\n ax = bootstrap_sample.plot('distance', 'rate of success', kind='scatter', title=title)\n ax.step(bootstrap_X, y_pred, where='mid')\n plt.show()\n \n ensembled = np.vstack(y_preds).mean(axis=0)\n title = f'Ensemble of {n_estimators} trees, with max_depth={max_depth}'\n ax = putts.plot('distance', 'rate of success', kind='scatter', title=title)\n ax.step(putts_X, ensembled, where='mid')\n plt.show()\n \ninteract(diy_bagging, max_depth=(1,6,1), n_estimators=(2,5,1));",
"_____no_output_____"
]
],
[
[
"### Go back to Tanzania Waterpumps ...",
"_____no_output_____"
],
[
"#### Helper function to visualize predicted probabilities\n\n",
"_____no_output_____"
]
],
[
[
"import itertools\nimport seaborn as sns\n\ndef pred_heatmap(model, X, features, class_index=-1, title='', num=100):\n \"\"\"\n Visualize predicted probabilities, for classifier fit on 2 numeric features\n \n Parameters\n ----------\n model : scikit-learn classifier, already fit\n X : pandas dataframe, which was used to fit model\n features : list of strings, column names of the 2 numeric features\n class_index : integer, index of class label\n title : string, title of plot\n num : int, number of grid points for each feature\n \n Returns\n -------\n y_pred_proba : numpy array, predicted probabilities for class_index\n \"\"\"\n feature1, feature2 = features\n min1, max1 = X[feature1].min(), X[feature1].max()\n min2, max2 = X[feature2].min(), X[feature2].max()\n x1 = np.linspace(min1, max1, num)\n x2 = np.linspace(max2, min2, num)\n combos = list(itertools.product(x1, x2))\n y_pred_proba = model.predict_proba(combos)[:, class_index]\n pred_grid = y_pred_proba.reshape(num, num).T\n table = pd.DataFrame(pred_grid, columns=x1, index=x2)\n sns.heatmap(table, vmin=0, vmax=1)\n plt.xticks([])\n plt.yticks([])\n plt.xlabel(feature1)\n plt.ylabel(feature2)\n plt.title(title)\n plt.show()\n return y_pred_proba\n",
"_____no_output_____"
]
],
[
[
"### Compare Decision Tree, Random Forest, Logistic Regression",
"_____no_output_____"
]
],
[
[
"# Instructions\n# 1. Choose two features\n# 2. Run this code cell\n# 3. Interact with the widget sliders\nfeature1 = 'longitude'\nfeature2 = 'quantity'\n\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.tree import DecisionTreeClassifier\n\ndef get_X_y(df, feature1, feature2, target):\n features = [feature1, feature2]\n X = df[features]\n y = df[target]\n X = X.fillna(X.median())\n X = ce.OrdinalEncoder().fit_transform(X)\n return X, y\n\ndef compare_models(max_depth=1, n_estimators=1):\n models = [DecisionTreeClassifier(max_depth=max_depth), \n RandomForestClassifier(max_depth=max_depth, n_estimators=n_estimators), \n LogisticRegression(solver='lbfgs', multi_class='auto')]\n \n for model in models:\n name = model.__class__.__name__\n model.fit(X, y)\n pred_heatmap(model, X, [feature1, feature2], class_index=0, title=name)\n\nX, y = get_X_y(train, feature1, feature2, target='status_group')\ninteract(compare_models, max_depth=(1,6,1), n_estimators=(10,40,10));",
"_____no_output_____"
]
],
[
[
"### Bagging",
"_____no_output_____"
]
],
[
[
"# Do-it-yourself Bagging Ensemble of Decision Trees (like a Random Forest)\n\n# Instructions\n# 1. Choose two features\n# 2. Run this code cell\n# 3. Interact with the widget sliders\n\nfeature1 = 'longitude'\nfeature2 = 'latitude'\n\ndef waterpumps_bagging(max_depth=1, n_estimators=1):\n predicteds = []\n for i in range(n_estimators):\n title = f'Tree {i+1}'\n bootstrap_sample = train.sample(n=len(train), replace=True)\n X, y = get_X_y(bootstrap_sample, feature1, feature2, target='status_group')\n tree = DecisionTreeClassifier(max_depth=max_depth)\n tree.fit(X, y)\n predicted = pred_heatmap(tree, X, [feature1, feature2], class_index=0, title=title)\n predicteds.append(predicted)\n \n ensembled = np.vstack(predicteds).mean(axis=0)\n title = f'Ensemble of {n_estimators} trees, with max_depth={max_depth}'\n sns.heatmap(ensembled.reshape(100, 100).T, vmin=0, vmax=1)\n plt.title(title)\n plt.xlabel(feature1)\n plt.ylabel(feature2)\n plt.xticks([])\n plt.yticks([])\n plt.show()\n \ninteract(waterpumps_bagging, max_depth=(1,6,1), n_estimators=(2,5,1));",
"_____no_output_____"
]
],
[
[
"# Review\n\n#### Try Tree Ensembles when you do machine learning with labeled, tabular data\n- \"Tree Ensembles\" means Random Forest or Gradient Boosting models. \n- [Tree Ensembles often have the best predictive accuracy](https://arxiv.org/abs/1708.05070) with labeled, tabular data.\n- Why? Because trees can fit non-linear, non-[monotonic](https://en.wikipedia.org/wiki/Monotonic_function) relationships, and [interactions](https://christophm.github.io/interpretable-ml-book/interaction.html) between features.\n- A single decision tree, grown to unlimited depth, will [overfit](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/). We solve this problem by ensembling trees, with bagging (Random Forest) or boosting (Gradient Boosting).\n- Random Forest's advantage: may be less sensitive to hyperparameters. Gradient Boosting's advantage: may get better predictive accuracy.\n\n#### One-hot encoding isn’t the only way, and may not be the best way, of categorical encoding for tree ensembles.\n- For example, tree ensembles can work with arbitrary \"ordinal\" encoding! (Randomly assigning an integer to each category.) Compared to one-hot encoding, the dimensionality will be lower, and the predictive accuracy may be just as good or even better.\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ec9d4b3f622229740d0287113a9bf54436b949b6 | 8,078 | ipynb | Jupyter Notebook | notebooks/BestModel.ipynb | 1jinwoo/YHack2018 | 2cdb7961917daa7d6f592ac8bad81421d063638e | [
"MIT"
] | 3 | 2018-12-02T05:47:43.000Z | 2020-02-13T20:31:01.000Z | notebooks/BestModel.ipynb | 1jinwoo/ClassiPy | 2cdb7961917daa7d6f592ac8bad81421d063638e | [
"MIT"
] | null | null | null | notebooks/BestModel.ipynb | 1jinwoo/ClassiPy | 2cdb7961917daa7d6f592ac8bad81421d063638e | [
"MIT"
] | null | null | null | 51.126582 | 1,629 | 0.60052 | [
[
[
"# libraries import\nfrom keras.models import Sequential\nfrom keras import layers\nfrom keras.models import Model\nimport numpy as np\nfrom sklearn.feature_extraction.text import CountVectorizer\n\n# file import\nimport data_cleaner as dc\nimport model_helper as mh\n\nclass BestModel:\n def __init__(self, neuron=330, min_df = 0):\n self.df = dc.clean_item_data(0)\n self.df = dc.cleanup_categoryid(self.df)\n\n # vectorize training input data\n _X_train, _X_valid, _X_test, Y_train, Y_valid, Y_test = dc.data_split(self.df, 0.65, 0.15, 0.20)\n self.vectorizer = CountVectorizer(encoding='latin1', min_df = min_df) # Allow different options (min_df, encoding)\n\n # convert pandas dataframes to list of strings\n x_train_list = []\n x_test_list = []\n x_valid_list = []\n for _, row in _X_train.iterrows():\n x_train_list.append(row[0])\n for _, row in _X_test.iterrows():\n x_test_list.append(row[0])\n for _, row in _X_valid.iterrows():\n x_valid_list.append(row[0])\n\n self.vectorizer.fit(x_train_list)\n X_train = self.vectorizer.transform(x_train_list)\n X_test = self.vectorizer.transform(x_test_list)\n X_valid = self.vectorizer.transform(x_valid_list)\n\n # Neural Network\n print('X train shape: ' + str(X_train.shape[1]))\n input_dim = X_train.shape[1] # Number of features\n output_dim = self.df['categoryId'].nunique()\n model = Sequential()\n model.add(layers.Dense(neuron, input_dim=input_dim, activation='relu', use_bias=False))\n model.add(layers.Dropout(rate=0.6))\n model.add(layers.Dropout(rate=0.6))\n model.add(layers.Dense(output_dim, activation='softmax'))\n model.compile(loss='sparse_categorical_crossentropy',\n optimizer='adam',\n metrics=['accuracy'])\n history = model.fit(X_train, Y_train,\n epochs=1,\n verbose=1,\n validation_data=(X_valid, Y_valid),\n batch_size=10)\n #print(model.summary())\n\n loss, self.train_accuracy = model.evaluate(X_train, Y_train, verbose=False)\n loss, self.test_accuracy = model.evaluate(X_test, Y_test, verbose=False)\n self.model = model\n \n def get_accuracy(self):\n return (round(self.train_accuracy, 4), round(self.test_accuracy, 4))\n \n def get_category(self,s):\n s_arr = np.array([s])\n vector = self.vectorizer.transform(s_arr) \n return self.model.predict_classes(vector)",
"Using TensorFlow backend.\n"
],
[
"bm = BestModel()",
"_____no_output_____"
],
[
"bm.get_accuracy()",
"_____no_output_____"
],
[
"bm.get_category('lamp light battery')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
ec9d6f1924dc95167492004e588ce3562841a648 | 213,102 | ipynb | Jupyter Notebook | inference.ipynb | ahkarami/sfd.pytorch | e89aff86f254a579f77d816a6976ad3f94961d1f | [
"MIT"
] | null | null | null | inference.ipynb | ahkarami/sfd.pytorch | e89aff86f254a579f77d816a6976ad3f94961d1f | [
"MIT"
] | null | null | null | inference.ipynb | ahkarami/sfd.pytorch | e89aff86f254a579f77d816a6976ad3f94961d1f | [
"MIT"
] | null | null | null | 2,109.920792 | 211,264 | 0.95191 | [
[
[
"%matplotlib inline\nfrom detector import Detector\nfrom utils import draw_bounding_boxes",
"_____no_output_____"
],
[
"state_file = \"./epoch_41.pth.tar\"\ndetector = Detector(state_file)",
"_____no_output_____"
],
[
"test_image = \"./images/test.jpg\"\nbboxes = detector.infer(test_image)\ndraw_bounding_boxes(test_image, bboxes)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
ec9d744ad138c399c3c187c99e9248c3ece5f802 | 34,745 | ipynb | Jupyter Notebook | site/en-snapshot/guide/function.ipynb | ilyaspiridonov/docs-l10n | a061a44e40d25028d0a4458094e48ab717d3565c | [
"Apache-2.0"
] | 1 | 2021-09-23T09:56:29.000Z | 2021-09-23T09:56:29.000Z | site/en-snapshot/guide/function.ipynb | ilyaspiridonov/docs-l10n | a061a44e40d25028d0a4458094e48ab717d3565c | [
"Apache-2.0"
] | null | null | null | site/en-snapshot/guide/function.ipynb | ilyaspiridonov/docs-l10n | a061a44e40d25028d0a4458094e48ab717d3565c | [
"Apache-2.0"
] | 1 | 2020-06-12T11:26:06.000Z | 2020-06-12T11:26:06.000Z | 34.641077 | 525 | 0.534379 | [
[
[
"##### Copyright 2020 The TensorFlow Authors.\n",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Better performance with tf.function\n\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/function\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/function.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/guide/function.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/function.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"In TensorFlow 2, eager execution is turned on by default. The user interface is intuitive and flexible (running one-off operations is much easier\nand faster), but this can come at the expense of performance and deployability.\n\nTo get performant and portable models, use `tf.function` to make graphs out of your programs. However there are pitfalls to be wary of - `tf.function` is not a magical make-it-faster bullet!\n\nThis document will help you conceptualize what `tf.function` is doing under the hood, so that you can master its use.\n\nThe main takeaways and recommendations are:\n\n- Debug in Eager mode, then decorate with `@tf.function`.\n- Don't rely on Python side effects like object mutation or list appends.\n- tf.function works best with TensorFlow ops; NumPy and Python calls are converted to constants.\n",
"_____no_output_____"
],
[
"## Setup",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf",
"_____no_output_____"
]
],
[
[
"Define a helper function to demonstrate the kinds of errors you might encounter:",
"_____no_output_____"
]
],
[
[
"import traceback\nimport contextlib\n\n# Some helper code to demonstrate the kinds of errors you might encounter.\[email protected]\ndef assert_raises(error_class):\n try:\n yield\n except error_class as e:\n print('Caught expected exception \\n {}:'.format(error_class))\n traceback.print_exc(limit=2)\n except Exception as e:\n raise e\n else:\n raise Exception('Expected {} to be raised but no error was raised!'.format(\n error_class))",
"_____no_output_____"
]
],
[
[
"## Basics",
"_____no_output_____"
],
[
"A `tf.function` you define is just like a core TensorFlow operation: You can execute it eagerly; you can compute gradients; and so on.",
"_____no_output_____"
]
],
[
[
"@tf.function\ndef add(a, b):\n return a + b\n\nadd(tf.ones([2, 2]), tf.ones([2, 2])) # [[2., 2.], [2., 2.]]",
"_____no_output_____"
],
[
"v = tf.Variable(1.0)\nwith tf.GradientTape() as tape:\n result = add(v, 1.0)\ntape.gradient(result, v)",
"_____no_output_____"
]
],
[
[
"You can use functions inside functions.",
"_____no_output_____"
]
],
[
[
"@tf.function\ndef dense_layer(x, w, b):\n return add(tf.matmul(x, w), b)\n\ndense_layer(tf.ones([3, 2]), tf.ones([2, 2]), tf.ones([2]))",
"_____no_output_____"
]
],
[
[
"Functions can be faster than eager code, especially for graphs with many small ops. But for graphs with a few expensive ops (like convolutions), you may not see much speedup.\n",
"_____no_output_____"
]
],
[
[
"import timeit\nconv_layer = tf.keras.layers.Conv2D(100, 3)\n\[email protected]\ndef conv_fn(image):\n return conv_layer(image)\n\nimage = tf.zeros([1, 200, 200, 100])\n# warm up\nconv_layer(image); conv_fn(image)\nprint(\"Eager conv:\", timeit.timeit(lambda: conv_layer(image), number=10))\nprint(\"Function conv:\", timeit.timeit(lambda: conv_fn(image), number=10))\nprint(\"Note how there's not much difference in performance for convolutions\")\n",
"_____no_output_____"
]
],
[
[
"## Debugging\n\nIn general, debugging code is easier in Eager mode than inside a `tf.function`. You should ensure that your code executes error-free in Eager mode before decorating with `tf.function`. To assist in the debugging process, you can call `tf.config.run_functions_eagerly(True)` to globally disable and reenable `tf.function`.\n\nWhen tracking down issues that only appear within `tf.function`, here are some tips:\n- Plain old Python `print` calls only execute during tracing, helping you track down when your functions get (re)traced.\n- `tf.print` calls will execute every time, and can help you track down intermediate values during execution.\n- `tf.debugging.enable_check_numerics` is an easy way to track down where NaNs and Inf are created.\n- `pdb` can help you understand what's going on during tracing. (Caveat: PDB will drop you into AutoGraph-transformed source code.)",
"_____no_output_____"
],
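    [
        "# Added sketch, not part of the original guide: the debugging tips above in miniature.\n# print() is a Python side effect, so it fires only while tracing; tf.print() is an op,\n# so it fires on every call; tf.config.run_functions_eagerly(True) temporarily disables\n# tf.function so the body runs as plain Python again.\n@tf.function\ndef debug_demo(x):\n  print('Traced with', x)\n  tf.print('Executed with', x)\n  return x + 1\n\ndebug_demo(tf.constant(1))\ndebug_demo(tf.constant(2))\n\ntf.config.run_functions_eagerly(True)\ndebug_demo(tf.constant(3))  # body runs eagerly, so the plain print fires again\ntf.config.run_functions_eagerly(False)",
        "_____no_output_____"
    ],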
[
"## Tracing and polymorphism\n\nPython's dynamic typing means that you can call functions with a variety of argument types, and Python will do something different in each scenario.\n\nOn the other hand, TensorFlow graphs require static dtypes and shape dimensions. `tf.function` bridges this gap by retracing the function when necessary to generate the correct graphs. Most of the subtlety of `tf.function` usage stems from this retracing behavior.\n\nYou can call a function with arguments of different types to see what is happening.",
"_____no_output_____"
]
],
[
[
"# Functions are polymorphic\n\[email protected]\ndef double(a):\n print(\"Tracing with\", a)\n return a + a\n\nprint(double(tf.constant(1)))\nprint()\nprint(double(tf.constant(1.1)))\nprint()\nprint(double(tf.constant(\"a\")))\nprint()\n",
"_____no_output_____"
]
],
[
[
"To control the tracing behavior, you can use the following techniques:\n\nCreate a new `tf.function`. Separate `tf.function` objects are guaranteed not to share traces.",
"_____no_output_____"
]
],
[
[
"def f():\n print('Tracing!')\n tf.print('Executing')\n\ntf.function(f)()\ntf.function(f)()",
"_____no_output_____"
]
],
[
[
"Use `get_concrete_function` method to get a specific trace.\n",
"_____no_output_____"
]
],
[
[
"print(\"Obtaining concrete trace\")\ndouble_strings = double.get_concrete_function(tf.TensorSpec(shape=None, dtype=tf.string))\nprint(\"Executing traced function\")\nprint(double_strings(tf.constant(\"a\")))\nprint(double_strings(a=tf.constant(\"b\")))\nprint(\"Using a concrete trace with incompatible types will throw an error\")\nwith assert_raises(tf.errors.InvalidArgumentError):\n double_strings(tf.constant(1))",
"_____no_output_____"
]
],
[
[
"Specify `input_signature` in `tf.function` to limit tracing.",
"_____no_output_____"
]
],
[
[
"@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))\ndef next_collatz(x):\n print(\"Tracing with\", x)\n return tf.where(x % 2 == 0, x // 2, 3 * x + 1)\n\nprint(next_collatz(tf.constant([1, 2])))\n# We specified a 1-D tensor in the input signature, so this should fail.\nwith assert_raises(ValueError):\n next_collatz(tf.constant([[1, 2], [3, 4]]))\n",
"_____no_output_____"
]
],
[
[
"## When to retrace?\n\nA polymorphic `tf.function` keeps a cache of concrete functions generated by tracing. The cache keys are effectively tuples of keys generated from the function args and kwargs. The key generated for a `tf.Tensor` argument is its number of dimensions and type. The key generated for a Python primitive is its value. For all other Python types, the keys are based on the object `id()` so that methods are traced independently for each instance of a class. In the future, TensorFlow may add more sophisticated cachi\nng for Python objects that can be safely converted to tensors.\n\nSee [Concrete functions](../../guide/concrete_function.ipynb)\n",
"_____no_output_____"
],
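    [
        "# Added sketch, not part of the original guide: the cache-key rules above in action.\n# A Python int argument is keyed by its value, so each new value triggers a new trace;\n# a Tensor argument is keyed by dtype and shape, so same-shaped tensors share one trace.\n@tf.function\ndef square(x):\n  print('Tracing for', x)\n  return x * x\n\nsquare(2)               # new trace for the Python value 2\nsquare(3)               # new trace for the Python value 3\nsquare(tf.constant(2))  # new trace for an int32 scalar Tensor\nsquare(tf.constant(3))  # reuses the scalar-Tensor trace: same dtype and shape",
        "_____no_output_____"
    ],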
[
"## Python or Tensor args?\n\nOften, Python arguments are used to control hyperparameters and graph constructions - for example, `num_layers=10` or `training=True` or `nonlinearity='relu'`. So if the Python argument changes, it makes sense that you'd have to retrace the graph.\n\nHowever, it's possible that a Python argument is not being used to control graph construction. In these cases, a change in the Python value can trigger needless retracing. Take, for example, this training loop, which AutoGraph will dynamically unroll. Despite the multiple traces, the generated graph is actually identical, so this is a bit inefficient.",
"_____no_output_____"
]
],
[
[
"def train_one_step():\n pass\n\[email protected]\ndef train(num_steps):\n print(\"Tracing with num_steps = {}\".format(num_steps))\n for _ in tf.range(num_steps):\n train_one_step()\n\ntrain(num_steps=10)\ntrain(num_steps=20)\n",
"_____no_output_____"
]
],
[
[
"The simple workaround here is to cast your arguments to Tensors if they do not affect the shape of the generated graph.",
"_____no_output_____"
]
],
[
[
"train(num_steps=tf.constant(10))\ntrain(num_steps=tf.constant(20))",
"_____no_output_____"
]
],
[
[
"## Side effects in `tf.function`\n\nIn general, Python side effects (like printing or mutating objects) only happen during tracing. So how can you reliably trigger side effects from `tf.function`?\n\nThe general rule of thumb is to only use Python side effects to debug your traces. Otherwise, TensorFlow ops like `tf.Variable.assign`, `tf.print`, and `tf.summary` are the best way to ensure your code will be traced and executed by the TensorFlow runtime with each call. In general using a functional style will yield the best results. ",
"_____no_output_____"
]
],
[
[
"@tf.function\ndef f(x):\n print(\"Traced with\", x)\n tf.print(\"Executed with\", x)\n\nf(1)\nf(1)\nf(2)\n",
"_____no_output_____"
]
],
[
[
"If you would like to execute Python code during each invocation of a `tf.function`, `tf.py_function` is an exit hatch. The drawback of `tf.py_function` is that it's not portable or particularly performant, nor does it work well in distributed (multi-GPU, TPU) setups. Also, since `tf.py_function` has to be wired into the graph for differentiability, it casts all inputs/outputs to tensors.",
"_____no_output_____"
]
],
[
[
"external_list = []\n\ndef side_effect(x):\n print('Python side effect')\n external_list.append(x)\n\[email protected]\ndef f(x):\n tf.py_function(side_effect, inp=[x], Tout=[])\n\nf(1)\nf(1)\nf(1)\nassert len(external_list) == 3\n# .numpy() call required because py_function casts 1 to tf.constant(1)\nassert external_list[0].numpy() == 1\n",
"_____no_output_____"
]
],
[
[
"## Beware of Python state\n\nMany Python features, such as generators and iterators, rely on the Python runtime to keep track of state. In general, while these constructs work as expected in Eager mode, many unexpected things can happen inside a `tf.function` due to tracing behavior.\n\nTo give one example, advancing iterator state is a Python side effect and therefore only happens during tracing.",
"_____no_output_____"
]
],
[
[
"external_var = tf.Variable(0)\[email protected]\ndef buggy_consume_next(iterator):\n external_var.assign_add(next(iterator))\n tf.print(\"Value of external_var:\", external_var)\n\niterator = iter([0, 1, 2, 3])\nbuggy_consume_next(iterator)\n# This reuses the first value from the iterator, rather than consuming the next value.\nbuggy_consume_next(iterator)\nbuggy_consume_next(iterator)\n",
"_____no_output_____"
]
],
[
[
"## Variables\n\nWe can use the same idea of leveraging the intended execution order of the code to make variable creation and utilization very easy in `tf.function`. There is one very important caveat, though, which is that with variables it's possible to write code which behaves differently in eager mode and graph mode.\n\nSpecifically, this will happen when you create a new Variable with each call. Due to tracing semantics, `tf.function` will reuse the same variable each call, but eager mode will create a new variable with each call. To guard against this mistake, `tf.function` will raise an error if it detects dangerous variable creation behavior.",
"_____no_output_____"
]
],
[
[
"@tf.function\ndef f(x):\n v = tf.Variable(1.0)\n v.assign_add(x)\n return v\n\nwith assert_raises(ValueError):\n f(1.0)",
"_____no_output_____"
]
],
[
[
"Non-ambiguous code is ok, though.",
"_____no_output_____"
]
],
[
[
"v = tf.Variable(1.0)\n\[email protected]\ndef f(x):\n return v.assign_add(x)\n\nprint(f(1.0)) # 2.0\nprint(f(2.0)) # 4.0\n",
"_____no_output_____"
]
],
[
[
"You can also create variables inside a tf.function as long as we can prove\nthat those variables are created only the first time the function is executed.",
"_____no_output_____"
]
],
[
[
"class C:\n pass\n\nobj = C()\nobj.v = None\n\[email protected]\ndef g(x):\n if obj.v is None:\n obj.v = tf.Variable(1.0)\n return obj.v.assign_add(x)\n\nprint(g(1.0)) # 2.0\nprint(g(2.0)) # 4.0",
"_____no_output_____"
]
],
[
[
"Variable initializers can depend on function arguments and on values of other\nvariables. We can figure out the right initialization order using the same\nmethod we use to generate control dependencies.",
"_____no_output_____"
]
],
[
[
"state = []\[email protected]\ndef fn(x):\n if not state:\n state.append(tf.Variable(2.0 * x))\n state.append(tf.Variable(state[0] * 3.0))\n return state[0] * x * state[1]\n\nprint(fn(tf.constant(1.0)))\nprint(fn(tf.constant(3.0)))\n",
"_____no_output_____"
]
],
[
[
"## AutoGraph Transformations\n\nAutoGraph is a library that is on by default in `tf.function`, and transforms a subset of Python Eager code into graph-compatible TensorFlow ops. This includes control flow like `if`, `for`, `while`.\n\nTensorFlow ops like `tf.cond` and `tf.while_loop` continue to work, but control flow is often easier to write and understand when written in Python.",
"_____no_output_____"
]
],
[
[
"# Simple loop\n\[email protected]\ndef f(x):\n while tf.reduce_sum(x) > 1:\n tf.print(x)\n x = tf.tanh(x)\n return x\n\nf(tf.random.uniform([5]))",
"_____no_output_____"
]
],
[
[
"If you're curious you can inspect the code autograph generates.",
"_____no_output_____"
]
],
[
[
"print(tf.autograph.to_code(f.python_function))",
"_____no_output_____"
]
],
[
[
"### Conditionals\n\nAutoGraph will convert some `if <condition>` statements into the equivalent `tf.cond` calls. This substitution is made if `<condition>` is a Tensor. Otherwise, the `if` statement is executed as a Python conditional.\n\nA Python conditional executes during tracing, so exactly one branch of the conditional will be added to the graph. Without AutoGraph, this traced graph would be unable to take the alternate branch if there is data-dependent control flow.\n\n`tf.cond` traces and adds both branches of the conditional to the graph, dynamically selecting a branch at execution time. Tracing can have unintended side effects; see [AutoGraph tracing effects](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/control_flow.md#effects-of-the-tracing-process) for more.",
"_____no_output_____"
]
],
[
[
"@tf.function\ndef fizzbuzz(n):\n for i in tf.range(1, n + 1):\n print('Tracing for loop')\n if i % 15 == 0:\n print('Tracing fizzbuzz branch')\n tf.print('fizzbuzz')\n elif i % 3 == 0:\n print('Tracing fizz branch')\n tf.print('fizz')\n elif i % 5 == 0:\n print('Tracing buzz branch')\n tf.print('buzz')\n else:\n print('Tracing default branch')\n tf.print(i)\n\nfizzbuzz(tf.constant(5))\nfizzbuzz(tf.constant(20))",
"_____no_output_____"
]
],
[
[
"See the [reference documentation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/control_flow.md#if-statements) for additional restrictions on AutoGraph-converted if statements.",
"_____no_output_____"
],
[
"### Loops\n\nAutoGraph will convert some `for` and `while` statements into the equivalent TensorFlow looping ops, like `tf.while_loop`. If not converted, the `for` or `while` loop is executed as a Python loop.\n\nThis substitution is made in the following situations:\n\n- `for x in y`: if `y` is a Tensor, convert to `tf.while_loop`. In the special case where `y` is a `tf.data.Dataset`, a combination of `tf.data.Dataset` ops are generated.\n- `while <condition>`: if `<condition>` is a Tensor, convert to `tf.while_loop`.\n\nA Python loop executes during tracing, adding additional ops to the `tf.Graph` for every iteration of the loop.\n\nA TensorFlow loop traces the body of the loop, and dynamically selects how many iterations to run at execution time. The loop body only appears once in the generated `tf.Graph`.\n\nSee the [reference documentation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/control_flow.md#while-statements) for additional restrictions on AutoGraph-converted `for` and `while` statements.",
"_____no_output_____"
],
[
"#### Looping over Python data\n\nA common pitfall is to loop over Python/Numpy data within a `tf.function`. This loop will execute during the tracing process, adding a copy of your model to the `tf.Graph` for each iteration of the loop.\n\nIf you want to wrap the entire training loop in `tf.function`, the safest way to do this is to wrap your data as a `tf.data.Dataset` so that AutoGraph will dynamically unroll the training loop.",
"_____no_output_____"
]
],
[
[
"def measure_graph_size(f, *args):\n g = f.get_concrete_function(*args).graph\n print(\"{}({}) contains {} nodes in its graph\".format(\n f.__name__, ', '.join(map(str, args)), len(g.as_graph_def().node)))\n\[email protected]\ndef train(dataset):\n loss = tf.constant(0)\n for x, y in dataset:\n loss += tf.abs(y - x) # Some dummy computation.\n return loss\n\nsmall_data = [(1, 1)] * 3\nbig_data = [(1, 1)] * 10\nmeasure_graph_size(train, small_data)\nmeasure_graph_size(train, big_data)\n\nmeasure_graph_size(train, tf.data.Dataset.from_generator(\n lambda: small_data, (tf.int32, tf.int32)))\nmeasure_graph_size(train, tf.data.Dataset.from_generator(\n lambda: big_data, (tf.int32, tf.int32)))",
"_____no_output_____"
]
],
[
[
"When wrapping Python/Numpy data in a Dataset, be mindful of `tf.data.Dataset.from_generator` versus ` tf.data.Dataset.from_tensors`. The former will keep the data in Python and fetch it via `tf.py_function` which can have performance implications, whereas the latter will bundle a copy of the data as one large `tf.constant()` node in the graph, which can have memory implications.\n\nReading data from files via TFRecordDataset/CsvDataset/etc. is the most effective way to consume data, as then TensorFlow itself can manage the asynchronous loading and prefetching of data, without having to involve Python. To learn more, see the [tf.data guide](../../guide/data).",
"_____no_output_____"
],
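    [
        "# Added sketch, not part of the original guide: the two wrapping options described above\n# on a tiny NumPy array. from_tensors embeds the data in the graph as a single constant;\n# from_generator leaves it in Python and fetches it via tf.py_function at run time.\nimport numpy as np\n\nsmall_array = np.arange(6, dtype=np.int32)\n\nconstant_ds = tf.data.Dataset.from_tensors(small_array)\ngenerator_ds = tf.data.Dataset.from_generator(\n    lambda: (row for row in small_array), output_types=tf.int32, output_shapes=())\n\nprint(list(constant_ds.as_numpy_iterator()))\nprint(list(generator_ds.as_numpy_iterator()))",
        "_____no_output_____"
    ],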
[
"#### Accumulating values in a loop\n\nA common pattern is to accumulate intermediate values from a loop. Normally, this is accomplished by appending to a Python list or adding entries to a Python dictionary. However, as these are Python side effects, they will not work as expected in a dynamically unrolled loop. Use `tf.TensorArray` to accumulate results from a dynamically unrolled loop.",
"_____no_output_____"
]
],
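[
    [
        "# Added sketch, not part of the original guide: a minimal tf.TensorArray accumulation\n# before the fuller dynamic RNN example below. A Python list.append inside this loop would\n# only run at trace time; TensorArray.write is an op, so it runs on every call.\n@tf.function\ndef cumulative_squares(n):\n  acc = tf.TensorArray(tf.int32, size=n)\n  for i in tf.range(n):\n    acc = acc.write(i, i * i)\n  return acc.stack()\n\ncumulative_squares(5)",
        "_____no_output_____"
    ]
],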
[
[
"batch_size = 2\nseq_len = 3\nfeature_size = 4\n\ndef rnn_step(inp, state):\n return inp + state\n\[email protected]\ndef dynamic_rnn(rnn_step, input_data, initial_state):\n # [batch, time, features] -> [time, batch, features]\n input_data = tf.transpose(input_data, [1, 0, 2])\n max_seq_len = input_data.shape[0]\n\n states = tf.TensorArray(tf.float32, size=max_seq_len)\n state = initial_state\n for i in tf.range(max_seq_len):\n state = rnn_step(input_data[i], state)\n states = states.write(i, state)\n return tf.transpose(states.stack(), [1, 0, 2])\n \ndynamic_rnn(rnn_step,\n tf.random.uniform([batch_size, seq_len, feature_size]),\n tf.zeros([batch_size, feature_size]))",
"_____no_output_____"
]
],
[
[
"## Further reading\n\nTo learn more about graph optimizations that are performed after tracing a `tf.function`, see the [Grappler guide](../../guide/graph_optimization). To learn how to optimize your data pipeline and profile your model, see the [Profiler guide](../../guide/profiler.md).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ec9d7b9b24b7480303c89de230dcefc256ba304a | 61,636 | ipynb | Jupyter Notebook | cheatsheets/blazingSQL/blazingSQL_SQL_FunctionsUnary.ipynb | jacobtomlinson/Welcome_to_BlazingSQL_Notebooks | 3f9eebff8a2d8edbc64b0da020340d287623a2c3 | [
"Apache-2.0"
] | 18 | 2020-03-17T21:15:35.000Z | 2021-09-27T11:15:17.000Z | cheatsheets/blazingSQL/blazingSQL_SQL_FunctionsUnary.ipynb | jacobtomlinson/Welcome_to_BlazingSQL_Notebooks | 3f9eebff8a2d8edbc64b0da020340d287623a2c3 | [
"Apache-2.0"
] | 9 | 2020-05-21T19:26:22.000Z | 2021-08-29T19:23:06.000Z | cheatsheets/blazingSQL/blazingSQL_SQL_FunctionsUnary.ipynb | jacobtomlinson/Welcome_to_BlazingSQL_Notebooks | 3f9eebff8a2d8edbc64b0da020340d287623a2c3 | [
"Apache-2.0"
] | 10 | 2020-04-21T10:01:24.000Z | 2021-12-04T09:00:09.000Z | 25.416907 | 155 | 0.310906 | [
[
[
"# BlazingSQL Cheat Sheets sample code\n\n(c) 2020 NVIDIA, Blazing SQL\n\nDistributed under Apache License 2.0",
"_____no_output_____"
],
[
"### Imports",
"_____no_output_____"
]
],
[
[
"import cudf\nimport numpy as np\nfrom blazingsql import BlazingContext",
"_____no_output_____"
]
],
[
[
"### Sample Data Table",
"_____no_output_____"
]
],
[
[
"df = cudf.DataFrame(\n [\n (39, -6.88, np.datetime64('2020-10-08T12:12:01'), 'C', 'D', 'data'\n , 'RAPIDS.ai is a suite of open-source libraries that allow you to run your end to end data science and analytics pipelines on GPUs.')\n , (11, 4.21, None, 'A', 'D', 'cuDF'\n , 'cuDF is a Python GPU DataFrame (built on the Apache Arrow columnar memory format)')\n , (31, 4.71, np.datetime64('2020-10-10T09:26:43'), 'U', 'D', 'memory'\n , 'cuDF allows for loading, joining, aggregating, filtering, and otherwise manipulating tabular data using a DataFrame style API.')\n , (40, 0.93, np.datetime64('2020-10-11T17:10:00'), 'P', 'B', 'tabular'\n , '''If your workflow is fast enough on a single GPU or your data comfortably fits in memory on \n a single GPU, you would want to use cuDF.''')\n , (33, 9.26, np.datetime64('2020-10-15T10:58:02'), 'O', 'D', 'parallel'\n , '''If you want to distribute your workflow across multiple GPUs or have more data than you can fit \n in memory on a single GPU you would want to use Dask-cuDF''')\n , (42, 4.21, np.datetime64('2020-10-01T10:02:23'), 'U', 'C', 'GPUs'\n , 'BlazingSQL provides a high-performance distributed SQL engine in Python')\n , (36, 3.01, np.datetime64('2020-09-30T14:36:26'), 'T', 'D', None\n , 'BlazingSQL is built on the RAPIDS GPU data science ecosystem')\n , (38, 6.44, np.datetime64('2020-10-10T08:34:36'), 'X', 'B', 'csv'\n , 'BlazingSQL lets you ETL raw data directly into GPU memory as a GPU DataFrame (GDF)')\n , (17, -5.28, np.datetime64('2020-10-09T08:34:40'), 'P', 'D', 'dataframes'\n , 'Dask is a flexible library for parallel computing in Python')\n , (10, 8.28, np.datetime64('2020-10-03T03:31:21'), 'W', 'B', 'python'\n , None)\n ]\n , columns = ['number', 'float_number', 'datetime', 'letter', 'category', 'word', 'string']\n)",
"_____no_output_____"
],
[
"bc = BlazingContext()",
"BlazingContext ready\n"
],
[
"bc.create_table('df', df)",
"_____no_output_____"
]
],
[
[
"# SQL Unary Functions",
"_____no_output_____"
],
[
"#### FLOOR",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT float_number\n , FLOOR(float_number) AS f\n FROM df\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### CEILING",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT float_number\n , CEILING(float_number) AS f\n FROM df\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### SIN",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT float_number\n , SIN(float_number) AS f\n FROM df\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### COS",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT float_number \n , COS(float_number) AS f\n FROM df\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### ASIN",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT float_number\n , ASIN(float_number) AS f\n FROM df\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### ACOS",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT float_number\n , ACOS(float_number) AS f\n FROM df\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### TAN",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT float_number\n , TAN(float_number) AS f\n FROM df\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### ATAN",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT float_number\n , ATAN(float_number) AS f\n FROM df\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### SQRT",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT float_number\n , SQRT(float_number) AS f\n FROM df\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### ABS",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT float_number\n , ABS(float_number) AS f\n FROM df\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### NOT",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT float_number\n , NOT(float_number > 0) AS f\n FROM df\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### LN",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT float_number\n , LN(float_number) AS f\n FROM df\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### LOG",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT float_number\n , LOG10(float_number) AS f\n FROM df\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### RAND",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT float_number\n , RAND() AS r\n FROM df\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### ROUND",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT float_number\n , ROUND(float_number) AS f\n FROM df\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### STDDEV",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT category\n , STDDEV(float_number) AS agg\n FROM df\n GROUP BY category\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### STDDEV_POP",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT category\n , STDDEV_POP(float_number) AS agg\n FROM df\n GROUP BY category\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### STDDEV_SAMP",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT category\n , STDDEV_SAMP(float_number) AS agg\n FROM df\n GROUP BY category\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### VARIANCE",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT category\n , VARIANCE(float_number) AS agg\n FROM df\n GROUP BY category\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### VAR_SAMP",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT category\n , VAR_SAMP(float_number) AS agg\n FROM df\n GROUP BY category\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
],
[
[
"#### VAR_POP",
"_____no_output_____"
]
],
[
[
"query = '''\n SELECT category\n , VAR_POP(float_number) AS agg\n FROM df\n GROUP BY category\n'''\n\nbc.sql(query)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9d9647b772434bd9ffbd17eb3d24a915250e17 | 835,404 | ipynb | Jupyter Notebook | docs/_static/python_basics/06_download-python_solutions.ipynb | digitalearthafrica/notebooks-training | 8d162bca34445c0d0f01eedef11fa3263825ae07 | [
"Apache-2.0"
] | 5 | 2020-08-20T05:31:43.000Z | 2021-05-18T13:07:21.000Z | docs/_static/python_basics/06_download-python_solutions.ipynb | digitalearthafrica/notebooks-training | 8d162bca34445c0d0f01eedef11fa3263825ae07 | [
"Apache-2.0"
] | 62 | 2020-07-30T06:43:24.000Z | 2021-12-19T23:22:46.000Z | docs/_static/python_basics/06_download-python_solutions.ipynb | digitalearthafrica/notebooks-training | 8d162bca34445c0d0f01eedef11fa3263825ae07 | [
"Apache-2.0"
] | 1 | 2020-11-23T10:18:34.000Z | 2020-11-23T10:18:34.000Z | 863.913133 | 125,292 | 0.957789 | [
[
[
"# Exercise solutions",
"_____no_output_____"
],
[
"This section contains possible solutions to the exercises posed in the Python basics module. There is more than one correct solution for most of the exercises so these answers are for reference only. ",
"_____no_output_____"
],
[
"## Python basics 1",
"_____no_output_____"
],
[
"### 1.1 Fill the asterisk line with your name and run the cell.",
"_____no_output_____"
]
],
[
[
"# Fill the ****** space with your name and run the cell.\n\nmessage = \"My name is Python\"\n\nmessage",
"_____no_output_____"
]
],
[
[
"### 1.2 You can add new cells to insert new code at any point in a notebook. Click on the `+` icon in the top menu to add a new cell below the current one. Add a new cell below the next cell, and use it to print the value of variable `a`.",
"_____no_output_____"
]
],
[
[
"a = 365*24\n\n# Add a new cell just below this one. Use it to print the value of variable `a`",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
]
],
[
[
"> **Question:** Now what happens if you scroll back up the notebook and execute a different cell containing `print(a)`?\n\n> **Answer:** It should now print `8760` as 'global state' means the value of `a` has been changed.",
"_____no_output_____"
],
[
"## Python basics 2",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
]
],
[
[
"### 2.1 Use the numpy `add` function to add the values `34` and `29` in the cell below.",
"_____no_output_____"
]
],
[
[
"# Use numpy add to add 34 and 29\n\nnp.add(34,29)",
"_____no_output_____"
]
],
[
[
"### 2.2 Declare a new array with contents [5,4,3,2,1] and slice it to select the last 3 items.",
"_____no_output_____"
]
],
[
[
"# Substitute the ? symbols by the correct expressions and values\n\n# Declare the array\n\narr = np.array([5, 4, 3, 2, 1])\n\n# Slice array for the last 3 items only\n\narr[-3:]",
"_____no_output_____"
]
],
[
[
"### 2.3: Select all the elements in the array below excluding the last one, `[15]`.",
"_____no_output_____"
]
],
[
[
"arr = np.array([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15])\n\n# Substitute the ? symbols by the correct expressions and values\n\narr[:-1]",
"_____no_output_____"
]
],
[
[
"### 2.4 Use `arr` as defined in 2.3. Exclude the last element from the list, but now only select every 3rd element. Remember the third index indicates `stride`, if used.\n> **Hint:** The result should be `[0,3,6,9,12]`.",
"_____no_output_____"
]
],
[
[
"# Substitute the ? symbols by the correct expressions and values\n\narr[:-1:3]",
"_____no_output_____"
]
],
[
[
"### 2.5 You'll need to combine array comparisons and logical operators to solve this one. Find out the values in the following array that are greater than `3` AND less than `7`. The output should be a boolean array.\n> **Hint:** If you are stuck, reread the section on boolean arrays.",
"_____no_output_____"
]
],
[
[
"arr = np.array([1, 3, 5, 1, 6, 3, 1, 5, 7, 1])\n\n# Use array comparisons (<, >, etc.) and logical operators (*, +) to find where\n# the values are greater than 3 and less than 7.\n\nboolean_array = (arr > 3)*(arr < 7)",
"_____no_output_____"
],
[
"boolean_array",
"_____no_output_____"
]
],
[
[
"### 2.6 Use your boolean array from 2.5 to mask the `False` values from `arr`. \n> **Hint:** The result should be `[5, 6, 5]`.",
"_____no_output_____"
]
],
[
[
"# Use your resulting boolean_array array from 2.5\n# to mask arr as defined in 2.5\n\narr[boolean_array]",
"_____no_output_____"
]
],
[
[
"## Python basics 3",
"_____no_output_____"
]
],
[
[
"%matplotlib inline \n\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\nim = np.copy(plt.imread('Guinea_Bissau.JPG'))",
"_____no_output_____"
]
],
[
[
"### 3.1 Let's use the indexing functionality of numpy to select a portion of this image. Select the top-right corner of this image with shape `(200,200)`.\n> **Hint:** Remember there are three dimensions in this image. Colons separate spans, and commas separate dimensions.",
"_____no_output_____"
]
],
[
[
"# Both options below are correct\n\ntopright = im[:200, -200:, ]\n\ntopright = im[:200, 400:600, ]\n\n# Plot your result using imshow\n\nplt.imshow(topright)",
"_____no_output_____"
]
],
[
[
"### 3.2 Let's have a look at one of the pixels in this image. We choose the top-left corner with position `(0,0)` and show the values of its RGB channels.",
"_____no_output_____"
]
],
[
[
"# Run this cell to see the colour channel values\n\nim[0,0]",
"_____no_output_____"
]
],
[
[
"The first value corresponds to the red component, the second to the green and the third to the blue. `uint8` can contain values in the range `[0-255]` so the pixel has a lot of red, some green, and not much blue. This pixel is a orange-yellow sandy colour.",
"_____no_output_____"
],
[
"Now let's modify the image. \n\n### What happens if we set all the values representing the blue channel to the maximum value?",
"_____no_output_____"
]
],
[
[
"# Run this cell to set all blue channel values to 255\n# We first make a copy to avoid modifying the original image\n\nim2 = np.copy(im)\n\nim2[:,:,2] = 255\n\nplt.imshow(im2)",
"_____no_output_____"
]
],
[
[
"> The index notation `[:,:,2]` is selecting pixels at all heights and all widths, but only the 3rd colour channel. ",
"_____no_output_____"
],
[
"### Can you modify the above code cell to set all red values to the maximum value of `255`?",
"_____no_output_____"
]
],
[
[
"im2 = np.copy(im)\n\nim2[:,:,0] = 255\n\nplt.imshow(im2)",
"_____no_output_____"
]
],
[
[
"## Python basics 4",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom matplotlib import colors\n\n# grass = 1\narea = np.ones((100,100))\n# crops = 2\narea[10:60,20:50] = 2\n# city = 3\narea[70:90,60:80] = 3\nindex = {1: 'green', 2: 'yellow', 3: 'grey'}\ncmap = colors.ListedColormap(index.values())",
"_____no_output_____"
]
],
[
[
"### 4.1 The harvesting season has arrived and our cropping lands have changed colour to brown. Can you:\n\n#### 4.1.1 Modify the yellow area to contain the new value `4`?\n#### 4.1.2 Add a new entry to the `index` dictionary mapping number `4` to the value `brown`.\n#### 4.1.3 Plot the area.",
"_____no_output_____"
]
],
[
[
"# 4.1.1 Modify the yellow area to hold the value 4\narea[10:60,20:50] = 4",
"_____no_output_____"
],
[
"# 4.1.2 Add a new key-value pair to index that maps 4 to 'brown'\nindex[4] = 'brown'",
"_____no_output_____"
],
[
"# 4.1.3 Copy the cmap definition and re-run it to add the new colour\ncmap = colors.ListedColormap(index.values())\n# Plot the area\nplt.imshow(area, cmap=cmap)",
"_____no_output_____"
]
],
[
[
"> **Hint:** If you want to plot the new area, you have to redefine `cmap` so the new value is assigned a colour in the colour map. Copy and paste the `cmap = ...` line from the original plot.",
"_____no_output_____"
],
[
"### 4.2 Set `area[20:40, 80:95] = np.nan`. Plot the area now.",
"_____no_output_____"
]
],
[
[
"# Set the nan area\narea[20:40, 80:95] = np.nan",
"_____no_output_____"
],
[
"# Plot the entire area\nplt.imshow(area, cmap=cmap)",
"_____no_output_____"
]
],
[
[
"### 4.3 Find the median of the `area` array from 4.2 using `np.nanmedian`. Does this match your visual interpretation? How does this compare to using `np.median`?",
"_____no_output_____"
]
],
[
[
"# Use np.nanmedian to find the median of the area\nnp.nanmedian(area)",
"_____no_output_____"
],
[
"np.median(area)",
"_____no_output_____"
]
],
[
[
"`np.median` returns a value of `nan` because it cannot interpret no-data pixels. `np.nanmedian` excludes NaN values, so it returns a value of `1` which indicates grass. This matches the plot of `area`.",
"_____no_output_____"
],
[
"## Python basics 5",
"_____no_output_____"
]
],
[
[
"%matplotlib inline \n\nimport numpy as np\nfrom matplotlib import pyplot as plt\nimport xarray as xr\nguinea_bissau = xr.open_dataset('guinea_bissau.nc')",
"_____no_output_____"
]
],
[
[
"### 5.1 Can you access to the `crs` value in the attributes of the `guinea_bissau` `xarray.Dataset`?",
"_____no_output_____"
],
[
"> **Hint:** You can call upon `attributes` in the same way you would select a `variable` or `coordinate`.",
"_____no_output_____"
]
],
[
[
"# Replace the ? with the attribute name\n\nguinea_bissau.crs",
"_____no_output_____"
]
],
[
[
"### 5.2 Select the region of the `blue` variable delimited by these coordinates:\n* latitude of range [1335000, 1329030]\n* longitude of range [389520, 395490]",
"_____no_output_____"
],
[
"> **Hint:** Do we want to use `sel()` or `isel()`? Which coordinate is `x` and which is `y`?",
"_____no_output_____"
],
[
"### 5.3 Plot the selected region using `imshow`, then plot the region using `.plot()`.",
"_____no_output_____"
]
],
[
[
"# Plot using plt.imshow\nplt.imshow(guinea_bissau.blue.sel(x=slice(389520, 395490), y=slice(1335000, 1329030)))",
"_____no_output_____"
],
[
"# Plot using .plot()\nguinea_bissau.blue.sel(x=slice(389520, 395490), y=slice(1335000, 1329030)).plot()",
"_____no_output_____"
]
],
[
[
"### Can you change the colour map to `'Blues'`?",
"_____no_output_____"
]
],
[
[
"plt.imshow(guinea_bissau.blue.sel(x=slice(389520, 395490), y=slice(1335000, 1329030)), cmap='Blues')",
"_____no_output_____"
],
[
"guinea_bissau.blue.sel(x=slice(389520, 395490), y=slice(1335000, 1329030)).plot(cmap='Blues')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec9db6dd65951dc2f3870984e7b1fd3d07d46517 | 45,826 | ipynb | Jupyter Notebook | tests/bruh.ipynb | sirmammingtonham/futureNEWS | b1f45cc4a9af03d14eba8f8c2e57a05af5fc9695 | [
"Unlicense"
] | null | null | null | tests/bruh.ipynb | sirmammingtonham/futureNEWS | b1f45cc4a9af03d14eba8f8c2e57a05af5fc9695 | [
"Unlicense"
] | 4 | 2020-07-21T12:45:00.000Z | 2022-01-22T08:54:32.000Z | tests/bruh.ipynb | sirmammingtonham/futureMAG | b1f45cc4a9af03d14eba8f8c2e57a05af5fc9695 | [
"Unlicense"
] | null | null | null | 83.471767 | 16,929 | 0.680945 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ec9dc08fc7d89c406a75b329bbd83aa538dbc441 | 163,864 | ipynb | Jupyter Notebook | Basics/Clasification concepts.ipynb | julio-lau/Python_Notebooks | 5cfaa27ef4e3f858cea84ed33999d289d3821b81 | [
"MIT"
] | null | null | null | Basics/Clasification concepts.ipynb | julio-lau/Python_Notebooks | 5cfaa27ef4e3f858cea84ed33999d289d3821b81 | [
"MIT"
] | null | null | null | Basics/Clasification concepts.ipynb | julio-lau/Python_Notebooks | 5cfaa27ef4e3f858cea84ed33999d289d3821b81 | [
"MIT"
] | null | null | null | 418.020408 | 151,668 | 0.931638 | [
[
[
"from matplotlib import pyplot as plt\nfrom sklearn.datasets import load_iris\nimport pandas as pd\nimport numpy as np\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"# Iris dataset",
"_____no_output_____"
],
[
"## Loading and setting data",
"_____no_output_____"
]
],
[
[
"data = load_iris()\n\n#Input data\nfeatures = data['data']\n#Input feature names\nfeature_names = data['feature_names']\n#Labels\ntarget = data['target']\n#Description of dataset\nprint(data.DESCR)",
".. _iris_dataset:\n\nIris plants dataset\n--------------------\n\n**Data Set Characteristics:**\n\n :Number of Instances: 150 (50 in each of three classes)\n :Number of Attributes: 4 numeric, predictive attributes and the class\n :Attribute Information:\n - sepal length in cm\n - sepal width in cm\n - petal length in cm\n - petal width in cm\n - class:\n - Iris-Setosa\n - Iris-Versicolour\n - Iris-Virginica\n \n :Summary Statistics:\n\n ============== ==== ==== ======= ===== ====================\n Min Max Mean SD Class Correlation\n ============== ==== ==== ======= ===== ====================\n sepal length: 4.3 7.9 5.84 0.83 0.7826\n sepal width: 2.0 4.4 3.05 0.43 -0.4194\n petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)\n petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)\n ============== ==== ==== ======= ===== ====================\n\n :Missing Attribute Values: None\n :Class Distribution: 33.3% for each of 3 classes.\n :Creator: R.A. Fisher\n :Donor: Michael Marshall (MARSHALL%[email protected])\n :Date: July, 1988\n\nThe famous Iris database, first used by Sir R.A. Fisher. The dataset is taken\nfrom Fisher's paper. Note that it's the same as in R, but not as in the UCI\nMachine Learning Repository, which has two wrong data points.\n\nThis is perhaps the best known database to be found in the\npattern recognition literature. Fisher's paper is a classic in the field and\nis referenced frequently to this day. (See Duda & Hart, for example.) The\ndata set contains 3 classes of 50 instances each, where each class refers to a\ntype of iris plant. One class is linearly separable from the other 2; the\nlatter are NOT linearly separable from each other.\n\n.. topic:: References\n\n - Fisher, R.A. \"The use of multiple measurements in taxonomic problems\"\n Annual Eugenics, 7, Part II, 179-188 (1936); also in \"Contributions to\n Mathematical Statistics\" (John Wiley, NY, 1950).\n - Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.\n (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.\n - Dasarathy, B.V. (1980) \"Nosing Around the Neighborhood: A New System\n Structure and Classification Rule for Recognition in Partially Exposed\n Environments\". IEEE Transactions on Pattern Analysis and Machine\n Intelligence, Vol. PAMI-2, No. 1, 67-71.\n - Gates, G.W. (1972) \"The Reduced Nearest Neighbor Rule\". IEEE Transactions\n on Information Theory, May 1972, 431-433.\n - See also: 1988 MLC Proceedings, 54-64. Cheeseman et al\"s AUTOCLASS II\n conceptual clustering system finds 3 classes in the data.\n - Many, many more ...\n"
]
],
[
[
"## Plotting scatter plots",
"_____no_output_____"
]
],
[
[
"def drawScatterSubPlots(features, feature_names, color, maxDims, labels, size = (15, 10)):\n '''Plot a set of scatter plots\n *** Parameters ***\n - features: Input data you use to plot\n - feature_names: Feature names of the input data, to be used as labels on x and y axis\n - color: Parameter to define colors on points\n - maxDims: A tuple (x, y) that defines the x subplots on a row and y subplots on a column\n - labels: Unique labels to define a legend\n - size: Parameter to define the size of the plot\n '''\n \n a, b = 0, 0\n gotHandles = False\n fig, axs = plt.subplots(maxDims[0], maxDims[1], figsize = size)\n\n for i in range(features.shape[1]):\n for j in range(i + 1, features.shape[1]):\n scatter = axs[a, b].scatter(features[:, i], features[:, j], c = color)\n if not gotHandles:\n handles, _ = scatter.legend_elements()\n gotHandles = True\n axs[a, b].set_xlabel(feature_names[i])\n axs[a, b].set_ylabel(feature_names[j])\n b += 1\n if b == maxDims[1]:\n b = 0\n a += 1\n\n fig.legend(handles, labels, loc='upper center')\n plt.show()",
"_____no_output_____"
],
[
"#This labels were created based on the description of the dataset\nclasses = ['Iris-Setosa', 'Iris-Versicolour', 'Iris-Virginica']\ndrawScatterSubPlots(features, feature_names, target, (2, 3), classes)",
"_____no_output_____"
]
],
[
[
"## Set a boundary based on the plots",
"_____no_output_____"
]
],
[
[
"plength = features[:, 2]\npwidth = features[:, 3]\nlabels = np.array(list(map(lambda x: classes[x], target)))\n\n# Filter the data points of Iris-Setosa from the rest\nis_setosa = (labels == 'Iris-Setosa')\n\n#Check boundaries based on petal length\nmax_length_setosa =plength[is_setosa].max()\nmin_length_non_setosa = plength[~is_setosa].min()\nprint('Petal length')\nprint('------------')\nprint(f'Maximum petal length of iris-setosa: {max_length_setosa}.')\nprint(f'Minimum petal length of others: {min_length_non_setosa}.')\nprint()\n\n#Check boundaries based on petal width\nmax_width_setosa =pwidth[is_setosa].max()\nmin_width_non_setosa = pwidth[~is_setosa].min()\nprint('Petal width')\nprint('------------')\nprint(f'Maximum petal width of iris-setosa: {max_width_setosa}.')\nprint(f'Minimum petal width of others: {min_width_non_setosa}.')",
"Petal length\n------------\nMaximum petal length of iris-setosa: 1.9.\nMinimum petal length of others: 3.0.\n\nPetal width\n------------\nMaximum petal width of iris-setosa: 0.6.\nMinimum petal width of others: 1.0.\n"
]
],
[
[
"# Seeds dataset",
"_____no_output_____"
]
],
[
[
"#Data source: https://archive.ics.uci.edu/ml/datasets/seeds#\ndf = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/00236/seeds_dataset.txt', \n names=['area', 'perimeter', 'compactness', 'kernel_length', 'kernel_width', 'asymmetry_coef', 'length_kernel_groove', 'target'],\n sep = '\\t')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9dd21818d1cb4beeaf74ea5c31362bffa9b071 | 773,134 | ipynb | Jupyter Notebook | docs/20_image_segmentation/scikit_learn_random_forest_pixel_classifier.ipynb | zoccoler/BioImageAnalysisNotebooks | 29d7b0232172c89df386931c09103630061497fa | [
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | docs/20_image_segmentation/scikit_learn_random_forest_pixel_classifier.ipynb | zoccoler/BioImageAnalysisNotebooks | 29d7b0232172c89df386931c09103630061497fa | [
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | docs/20_image_segmentation/scikit_learn_random_forest_pixel_classifier.ipynb | zoccoler/BioImageAnalysisNotebooks | 29d7b0232172c89df386931c09103630061497fa | [
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | 1,323.859589 | 152,699 | 0.952894 | [
[
[
"# Pixel classification using Scikit-learn\nPixel classification is a technique for assigning pixels to multiple classes. If there are two classes (object and background), we are talking about binarization. In this example we use a [random forest classifier](https://en.wikipedia.org/wiki/Random_forest) for pixel classification.\n\nSee also\n* [Scikit-learn random forest](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)\n* [Classification of land cover by Chris Holden](https://ceholden.github.io/open-geo-tutorial/python/chapter_5_classification.html)\n\nAs usual, we start by loading an example image.",
"_____no_output_____"
]
],
[
[
"from skimage.io import imread, imshow\nimage = imread('../data/BBBC038/0bf4b144167694b6846d584cf52c458f34f28fcae75328a2a096c8214e01c0d0.tif')",
"_____no_output_____"
],
[
"imshow(image)",
"_____no_output_____"
]
],
[
[
"For demonstrating how the algorithm works, we annotate two small regions on the left of the image with values 1 and 2 for background and foreground (objects).",
"_____no_output_____"
]
],
[
[
"import numpy as np\nannotation = np.zeros(image.shape)\n\nannotation[0:10,0:10] = 1\nannotation[45:55,10:20] = 2",
"_____no_output_____"
],
[
"imshow(annotation, vmin=0, vmax=2)",
"_____no_output_____"
]
],
[
[
"## Generating a feature stack\nPixel classifiers such as the random forest classifier takes multiple images as input. We typically call these images a feature stack because for every pixel exist now multiple values (features). In the following example we create a feature stack containing three features:\n* The original pixel value\n* The pixel value after a Gaussian blur\n* The pixel value of the Gaussian blurred image processed through a Sobel operator.\n\nThus, we denoise the image and detect edges. All three images serve the pixel classifier to differentiate positive an negative pixels.",
"_____no_output_____"
]
],
[
[
"from skimage import filters\n\ndef generate_feature_stack(image):\n # determine features\n blurred = filters.gaussian(image, sigma=2)\n edges = filters.sobel(blurred)\n\n # collect features in a stack\n # The ravel() function turns a nD image into a 1-D image.\n # We need to use it because scikit-learn expects values in a 1-D format here. \n feature_stack = [\n image.ravel(),\n blurred.ravel(),\n edges.ravel()\n ]\n \n # return stack as numpy-array\n return np.asarray(feature_stack)\n\nfeature_stack = generate_feature_stack(image)\n\n# show feature images\nimport matplotlib.pyplot as plt\nfig, axes = plt.subplots(1, 3, figsize=(10,10))\n\n# reshape(image.shape) is the opposite of ravel() here. We just need it for visualization.\naxes[0].imshow(feature_stack[0].reshape(image.shape), cmap=plt.cm.gray)\naxes[1].imshow(feature_stack[1].reshape(image.shape), cmap=plt.cm.gray)\naxes[2].imshow(feature_stack[2].reshape(image.shape), cmap=plt.cm.gray)",
"_____no_output_____"
]
],
[
[
"## Formating data\nWe now need to format the input data so that it fits to what scikit learn expects. Scikit-learn asks for an array of shape (n, m) as input data and (n) annotations. n corresponds to number of pixels and m to number of features. In our case m = 3.",
"_____no_output_____"
]
],
[
[
"def format_data(feature_stack, annotation):\n # reformat the data to match what scikit-learn expects\n # transpose the feature stack\n X = feature_stack.T\n # make the annotation 1-dimensional\n y = annotation.ravel()\n \n # remove all pixels from the feature and annotations which have not been annotated\n mask = y > 0\n X = X[mask]\n y = y[mask]\n\n return X, y\n\nX, y = format_data(feature_stack, annotation)\n\nprint(\"input shape\", X.shape)\nprint(\"annotation shape\", y.shape)",
"input shape (200, 3)\nannotation shape (200,)\n"
]
],
[
[
"## Training the random forest classifier\nWe now train the [random forest classifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) by providing the feature stack X and the annotations y.",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier\n\nclassifier = RandomForestClassifier(max_depth=2, random_state=0)\nclassifier.fit(X, y)",
"_____no_output_____"
]
],
[
[
"## Predicting pixel classes\nAfter the classifier has been trained, we can use it to predict pixel classes for whole images. Note in the following code, we provide `feature_stack.T` which are more pixels then X in the commands above, because it also contains the pixels which were not annotated before.",
"_____no_output_____"
]
],
[
[
"res = classifier.predict(feature_stack.T) - 1 # we subtract 1 to make background = 0\nimshow(res.reshape(image.shape))",
"_____no_output_____"
]
],
[
[
"## Interactive segmentation\nWe can also use napari to annotate some regions as negative (label = 1) and positive (label = 2).",
"_____no_output_____"
]
],
[
[
"import napari\n\n# start napari\nviewer = napari.Viewer()\n\n# add image\nviewer.add_image(image)\n\n# add an empty labels layer and keet it in a variable\nlabels = viewer.add_labels(np.zeros(image.shape).astype(int))",
"_____no_output_____"
]
],
[
[
"Go ahead **after** annotating at least two regions with labels 1 and 2.\n\nTake a screenshot of the annotation:",
"_____no_output_____"
]
],
[
[
"napari.utils.nbscreenshot(viewer)",
"_____no_output_____"
]
],
[
[
"Retrieve the annotations from the napari layer:",
"_____no_output_____"
]
],
[
[
"manual_annotations = labels.data\n\nimshow(manual_annotations, vmin=0, vmax=2)",
"matplotlib_plugin.py (150): Low image data range; displaying image with stretched contrast.\n"
]
],
[
[
"As we have used functions in the example above, we can just repeat the same procedure with the manual annotations.",
"_____no_output_____"
]
],
[
[
"# generate features (that's actually not necessary, \n# as the variable is still there and the image is the same. \n# but we do it for completeness)\nfeature_stack = generate_feature_stack(image)\nX, y = format_data(feature_stack, manual_annotations)\n\n# train classifier\nclassifier = RandomForestClassifier(max_depth=2, random_state=0)\nclassifier.fit(X, y)\n\n# process the whole image and show result\nresult_1d = classifier.predict(feature_stack.T)\nresult_2d = result_1d.reshape(image.shape)\nimshow(result_2d)",
"matplotlib_plugin.py (150): Low image data range; displaying image with stretched contrast.\n"
]
],
[
[
"Also we add the result to napari.",
"_____no_output_____"
]
],
[
[
"viewer.add_labels(result_2d)",
"_____no_output_____"
],
[
"napari.utils.nbscreenshot(viewer)",
"_____no_output_____"
]
],
[
[
"# Exercise\nChange the code so that you can annotate three different regions:\n* Nuclei\n* Background\n* The edges between blobs and background",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
ec9dd3c5595a102d68a49da53f02dd9ae194c2b9 | 583,160 | ipynb | Jupyter Notebook | docs/examples/chip_scan.ipynb | stjordanis/forest-benchmarking | f9ad9701c2d253de1a0c922d7220ed7de75ac685 | [
"Apache-2.0"
] | 40 | 2019-01-25T18:35:24.000Z | 2022-03-13T11:21:18.000Z | docs/examples/chip_scan.ipynb | stjordanis/forest-benchmarking | f9ad9701c2d253de1a0c922d7220ed7de75ac685 | [
"Apache-2.0"
] | 140 | 2019-01-25T20:09:02.000Z | 2022-03-12T01:08:01.000Z | docs/examples/chip_scan.ipynb | stjordanis/forest-benchmarking | f9ad9701c2d253de1a0c922d7220ed7de75ac685 | [
"Apache-2.0"
] | 22 | 2019-02-01T13:18:35.000Z | 2022-01-12T15:03:13.000Z | 562.895753 | 60,288 | 0.944069 | [
[
[
"# Estimate the Specs of a Lattice",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom numpy import pi\nfrom pyquil.api import get_qc\nfrom pyquil.api._devices import get_lattice\nfrom pyquil import Program\nfrom pyquil.gates import *\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n",
"_____no_output_____"
]
],
[
[
"Specify lattice name and show any stored specs",
"_____no_output_____"
]
],
[
[
"# lattice_name = 'Aspen-1-5Q-B'\nlattice_name = '9q-square-noisy-qvm'\n# lattice = get_lattice(lattice_name)\n# stored_specs = lattice.get_specs()\n# print(stored_specs)",
"_____no_output_____"
]
],
[
[
"Create qc object, get qubits, and display the topology",
"_____no_output_____"
],
[
"**BE SURE TO SET as_qvm TRUE or FALSE as desired!!!**",
"_____no_output_____"
]
],
[
[
"import networkx as nx\n\nqc = get_qc(lattice_name, as_qvm=True, noisy=True)\nqubits = qc.qubits()\nprint(qubits)\ngraph = qc.qubit_topology()\nnx.draw_networkx(graph, with_labels=True)",
"[0, 1, 2, 3, 4, 5, 6, 7, 8]\n"
]
],
[
[
"## Active Reset Error\nThese estimates will be affected by readout error as well, so it may be best to try to correct for that. ",
"_____no_output_____"
]
],
[
[
"from forest.benchmarking.readout import estimate_joint_reset_confusion\nsingle_qubit_reset_cms = estimate_joint_reset_confusion(qc, qubits, num_trials = 10, joint_group_size = 1,\n use_active_reset = True, show_progress_bar=True)\n",
"100%|██████████| 18/18 [01:00<00:00, 3.26s/it]\n"
]
],
[
[
"Confusion matrix, Avg Fidelity",
"_____no_output_____"
]
],
[
[
"print('Single qubit confusion matrices: \\n', single_qubit_reset_cms)\nprint('Reset fidelity per qubit: ', [np.round(np.sum(cm, axis=0)[0]/2, 3) for cm in single_qubit_reset_cms.values()])\n",
"Single qubit confusion matrices: \n {(0,): array([[1., 0.],\n [1., 0.]]), (1,): array([[0.8, 0.2],\n [1. , 0. ]]), (2,): array([[1., 0.],\n [1., 0.]]), (3,): array([[1., 0.],\n [1., 0.]]), (4,): array([[1., 0.],\n [1., 0.]]), (5,): array([[1., 0.],\n [1., 0.]]), (6,): array([[1. , 0. ],\n [0.9, 0.1]]), (7,): array([[1., 0.],\n [1., 0.]]), (8,): array([[0.9, 0.1],\n [1. , 0. ]])}\nReset fidelity per qubit: [1.0, 0.9, 1.0, 1.0, 1.0, 1.0, 0.95, 1.0, 0.95]\n"
]
],
[
[
"## Readout Errors",
"_____no_output_____"
]
],
[
[
"from forest.benchmarking.readout import estimate_joint_confusion_in_set, marginalize_confusion_matrix\nsingle_qubit_cms = estimate_joint_confusion_in_set(qc, qubits, num_shots=5000, joint_group_size=1,\n use_param_program=True, use_active_reset=False, show_progress_bar=True)\n",
"100%|██████████| 18/18 [00:09<00:00, 2.00it/s]\n"
]
],
[
[
"Confusion matrix, Avg Fidelity, Asymmetry",
"_____no_output_____"
]
],
[
[
"print('Single qubit confusion matrices: \\n', single_qubit_cms)\nprint('Avg. fidelity per qubit: ', [np.round(np.trace(cm)/2, 3) for cm in single_qubit_cms.values()])\nfrom pyquil.gate_matrices import Z as Z_mat\nprint('Asymmetry magnitude: ', [np.round(np.trace(Z_mat @ cm)/2, 3) for cm in single_qubit_cms.values()])",
"Single qubit confusion matrices: \n {(0,): array([[0.9766, 0.0234],\n [0.0944, 0.9056]]), (1,): array([[0.9764, 0.0236],\n [0.0886, 0.9114]]), (2,): array([[0.9734, 0.0266],\n [0.0856, 0.9144]]), (3,): array([[0.9738, 0.0262],\n [0.085 , 0.915 ]]), (4,): array([[0.9736, 0.0264],\n [0.0948, 0.9052]]), (5,): array([[0.9702, 0.0298],\n [0.097 , 0.903 ]]), (6,): array([[0.979 , 0.021 ],\n [0.0962, 0.9038]]), (7,): array([[0.9736, 0.0264],\n [0.0922, 0.9078]]), (8,): array([[0.9734, 0.0266],\n [0.0942, 0.9058]])}\nAvg. fidelity per qubit: [0.941, 0.944, 0.944, 0.944, 0.939, 0.937, 0.941, 0.941, 0.94]\nAsymmetry magnitude: [0.035, 0.032, 0.029, 0.029, 0.034, 0.034, 0.038, 0.033, 0.034]\n"
]
],
[
[
"Simultaneous Confusion Matrix (pairwise; can try len(qubits) but may be too slow)",
"_____no_output_____"
]
],
[
[
"pairwise_cms = estimate_joint_confusion_in_set(qc, qubits, num_shots=1000, joint_group_size=2,\n use_param_program=True, use_active_reset=False, show_progress_bar=True)",
"100%|██████████| 144/144 [00:37<00:00, 3.89it/s]\n"
]
],
[
[
" Look for Significant Correlated Error",
"_____no_output_____"
]
],
[
[
"marginal_absolute_tolerance = .02 # determines acceptable level of correlation\n\nfor qubit_pair, pair_cm in pairwise_cms.items():\n marginal_one_qs = [(qubit, marginalize_confusion_matrix(pair_cm, qubit_pair, [qubit])) for qubit in qubit_pair]\n for qubit, marginal_cm in marginal_one_qs:\n if not np.allclose(single_qubit_cms[(qubit,)], marginal_cm, atol=marginal_absolute_tolerance):\n print(\"Q\" + str(qubit) + \" readout is different when measuring pair\", qubit_pair)\n \njoint_absolute_tolerance = .03\nfor qubit_pair, pair_cm in pairwise_cms.items():\n joint_single_q_cm = np.kron(single_qubit_cms[(qubit_pair[0],)], single_qubit_cms[(qubit_pair[1],)])\n if not np.allclose(joint_single_q_cm, pair_cm, atol=joint_absolute_tolerance):\n print(qubit_pair, \"exhibits correlated readout error\")\n",
"Q0 readout is different when measuring pair (0, 3)\n(1, 4) exhibits correlated readout error\n"
]
],
[
[
"## T1/T2",
"_____no_output_____"
],
[
"Neither estimation of T1 or T2 will work on a QVM",
"_____no_output_____"
],
[
"### T1",
"_____no_output_____"
]
],
[
[
"from forest.benchmarking.qubit_spectroscopy import MICROSECOND, do_t1_or_t2\nstop_time = 60 * MICROSECOND\nnum_points = 15\ntimes = np.linspace(0, stop_time, num_points)\nt1s_by_qubit = do_t1_or_t2(qc, qubits, times, kind='t1', show_progress_bar=True)[0]\nprint(\"T1s in microseconds: \\n\", t1s_by_qubit)",
"100%|██████████| 15/15 [00:06<00:00, 2.36it/s]\n"
]
],
[
[
"### $T_2^*$ Ramsey",
"_____no_output_____"
]
],
[
[
"t2s_by_qubit = do_t1_or_t2(qc, qubits, times, kind='t2_star', show_progress_bar=True)[0]\nprint(\"T2s in microseconds: \\n\", t2s_by_qubit)",
"100%|██████████| 15/15 [00:09<00:00, 1.52it/s]\n"
]
],
[
[
"## Single Qubit RB Gate Error",
"_____no_output_____"
]
],
[
[
"from pyquil.api import get_benchmarker\nfrom forest.benchmarking.randomized_benchmarking import do_rb\nbm = get_benchmarker()",
"_____no_output_____"
]
],
[
[
"Estimate 1q fidelity separately ",
"_____no_output_____"
]
],
[
[
"num_sequences_per_depth = 10\ndepths = [d for d in [2,25,50,125] for _ in range(num_sequences_per_depth)] # specify the depth of each sequence\nnum_shots = 1000\nrb_decays_by_qubit = {}\nrb_results_by_qubit = {}\nfor qubit in qubits:\n qubit_groups = [(qubit,)]\n decays, _, results = do_rb(qc, bm, qubit_groups, depths, num_shots=num_shots,\n show_progress_bar=True)\n rb_decays_by_qubit[qubit] = decays[qubit_groups[0]]\n rb_results_by_qubit[qubit] = results\n\nprint(rb_decays_by_qubit)",
"100%|██████████| 40/40 [00:16<00:00, 2.46it/s]\n100%|██████████| 40/40 [00:16<00:00, 2.46it/s]\n100%|██████████| 40/40 [00:15<00:00, 2.52it/s]\n100%|██████████| 40/40 [00:15<00:00, 2.51it/s]\n100%|██████████| 40/40 [00:15<00:00, 2.54it/s]\n100%|██████████| 40/40 [00:15<00:00, 2.51it/s]\n100%|██████████| 40/40 [00:15<00:00, 2.54it/s]\n100%|██████████| 40/40 [00:16<00:00, 2.48it/s]\n100%|██████████| 40/40 [00:15<00:00, 2.54it/s]"
]
],
[
[
"Plot the results to see if there is a good fit. \n**SLOW**",
"_____no_output_____"
]
],
[
[
"from forest.benchmarking.randomized_benchmarking import get_stats_by_qubit_group, fit_rb_results\nfrom forest.benchmarking.plotting import plot_figure_for_fit\n\nfor qubit in qubits:\n qubit_groups = [(qubit,)]\n\n stats = get_stats_by_qubit_group(qubit_groups, rb_results_by_qubit[qubit])[qubit_groups[0]]\n\n # fit the exponential decay model\n fit_1q = fit_rb_results(depths, stats['expectation'], stats['std_err'])\n \n fig, ax = plot_figure_for_fit(fit_1q, xlabel=\"Sequence Length [Cliffords]\", ylabel=\"Survival Probability\", \n title=f'RB Decay for q{qubit}')",
"_____no_output_____"
]
],
[
[
"Estimate simultaneous 1q fidelity\n**SLOW**",
"_____no_output_____"
]
],
[
[
"# use the same parameters as above for comparison\n\n# num_sequences_per_depth = 10\n# depths = [d for d in [2,25,50,125] for _ in range(num_sequences_per_depth)] # specify the depth of each sequence\n# num_shots = 1000\n\nqubit_groups = [(qubit,) for qubit in qubits]\nsimult_decays, _, simult_results = do_rb(qc, bm, qubit_groups, depths, num_shots=num_shots,\n show_progress_bar=True)\n\nprint(simult_decays)",
"100%|██████████| 40/40 [16:44<00:00, 25.11s/it]\n"
],
[
"for qubit, decay in rb_decays_by_qubit.items():\n simult_decay = simult_decays[(qubit,)]\n if not np.allclose(simult_decay, decay, atol = .05):\n print(\"qubit \" + str(qubit) + \" may be suffering from significant 1q cross-talk.\")",
"_____no_output_____"
]
],
[
[
"## DFE CZ Fidelity \n\n(**SLOW** on qvm)",
"_____no_output_____"
]
],
[
[
"print('Obtaining CZ fidelity on every edge in: ', graph.edges())",
"Obtaining CZ fidelity on every edge in: [(0, 3), (0, 1), (1, 4), (1, 2), (2, 5), (3, 6), (3, 4), (4, 7), (4, 5), (5, 8), (6, 7), (7, 8)]\n"
],
[
"from forest.benchmarking.direct_fidelity_estimation import do_dfe\n\nfrom pyquil.api import get_benchmarker\nbm = get_benchmarker()\n\ncz_fidelities = {}\nfor edge in graph.edges():\n p = Program(CZ(edge[0], edge[1]))\n (fidelity, std_err), _, _ = do_dfe(qc, bm, p, list(edge), kind='process', show_progress_bar=True)\n print(edge, \" : \", fidelity, \"+/-\", std_err)\n cz_fidelities[tuple(edge)] = fidelity",
"100%|██████████| 152/152 [00:26<00:00, 5.74it/s]\n 0%| | 0/152 [00:00<?, ?it/s]"
]
],
[
[
"## Coherent impact of CZ Cross Talk\n\nThis estimates the effective RZ phase on each qubit due to some CZ gate. ",
"_____no_output_____"
]
],
[
[
"from forest.benchmarking.robust_phase_estimation import do_rpe\nedge = list(graph.edges())[0]\nrotation = CZ(*edge)\nmeasure_qubit = 0\nqubit_groups = [(qubit,) for qubit in qubits]\nchanges_of_basis = [I(qubit) for qubit in qubits]\n\neffective_phases, _, _= do_rpe(qc, rotation, changes_of_basis, qubit_groups, num_depths=7, \n active_reset=True, mitigate_readout_errors=False,\n show_progress_bar=True)\nprint(effective_phases)",
"100%|██████████| 7/7 [00:22<00:00, 3.19s/it]"
]
],
[
[
"## All qubits RX calibration\nThis is done in parallel to save time.",
"_____no_output_____"
]
],
[
[
"from forest.benchmarking.robust_phase_estimation import do_rpe\nangles = [-pi, -pi/2, pi/2, pi]\n\nqubit_groups = [(qubit,) for qubit in qubits]\nchanges_of_basis = [H(qubit) for qubit in qubits]\n\nfor angle in angles:\n rotation = Program([RX(angle, qubit) for qubit in qubits])\n phases, expts, ress = do_rpe(qc, rotation, changes_of_basis, qubit_groups, num_depths=6, \n active_reset=True, mitigate_readout_errors=False,\n show_progress_bar=True)\n print('expected phase: ', angle % (2*pi))\n print(phases)\n",
"100%|██████████| 6/6 [00:43<00:00, 7.32s/it]\n 0%| | 0/6 [00:00<?, ?it/s]"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9dd7927f66c5a56205baae0dd5b5122fa4ea31 | 9,572 | ipynb | Jupyter Notebook | setup.ipynb | andresrosso/dmlpr | 555b1a4b3cfe1aa25e7b8ed40f15ebbcd3227cf4 | [
"MIT"
] | 2 | 2020-11-03T18:00:24.000Z | 2021-10-10T07:09:38.000Z | setup.ipynb | andresrosso/dmlpr | 555b1a4b3cfe1aa25e7b8ed40f15ebbcd3227cf4 | [
"MIT"
] | 5 | 2021-03-29T23:58:24.000Z | 2021-12-13T20:42:24.000Z | setup.ipynb | andresrosso/dmlpr | 555b1a4b3cfe1aa25e7b8ed40f15ebbcd3227cf4 | [
"MIT"
] | null | null | null | 39.717842 | 440 | 0.646991 | [
[
[
"!pip install gensim\n!pip install sentencepiece\n!pip install tensorflow_hub\n!pip install lxml\n!pip install spacy\n!pip install elasticsearch\n!pip freeze > requirements.txt\n!pip install -r requirements.txt\n!pip install ntlk\n#this library is for parallel execution with progress bar\n!pip install p_tqdm\n!pip install tqdm",
"_____no_output_____"
],
[
"#scispacy \n#https://allenai.github.io/scispacy/\n!pip install pip install spacy\n!pip install https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.2.4/en_core_sci_md-0.2.4.tar.gz",
"_____no_output_____"
],
[
"!pip install nltk",
"_____no_output_____"
],
[
"import nltk\nnltk.download('stopwords')\nnltk.download('punkt')\nnltk.download('averaged_perceptron_tagger')",
"_____no_output_____"
],
[
"#bert download\n!cd .. && pwd && git clone https://github.com/google-research/bert.git\n!cd ../bert && mkdir checkpoints\n!cd ../bert/checkpoints && wget https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip\n!cd ../bert/checkpoints && unzip uncased_L-12_H-768_A-12.zip",
"/home/aerossom/git_repos\nCloning into 'bert'...\nremote: Enumerating objects: 340, done.\u001b[K\nremote: Total 340 (delta 0), reused 0 (delta 0), pack-reused 340\u001b[K\nReceiving objects: 100% (340/340), 310.70 KiB | 0 bytes/s, done.\nResolving deltas: 100% (186/186), done.\nChecking connectivity... done.\n--2020-06-08 13:13:49-- https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip\nResolving proxyapp.unal.edu.co (proxyapp.unal.edu.co)... 168.176.239.30\nConnecting to proxyapp.unal.edu.co (proxyapp.unal.edu.co)|168.176.239.30|:8080... connected.\nProxy request sent, awaiting response... 200 OK\nLength: 407727028 (389M) [application/zip]\nSaving to: 'uncased_L-12_H-768_A-12.zip'\n\nuncased_L-12_H-768_ 100%[===================>] 388.84M 10.8MB/s in 39s \n\n2020-06-08 13:14:28 (10.1 MB/s) - 'uncased_L-12_H-768_A-12.zip' saved [407727028/407727028]\n\nArchive: uncased_L-12_H-768_A-12.zip\n creating: uncased_L-12_H-768_A-12/\n inflating: uncased_L-12_H-768_A-12/bert_model.ckpt.meta \n inflating: uncased_L-12_H-768_A-12/bert_model.ckpt.data-00000-of-00001 \n inflating: uncased_L-12_H-768_A-12/vocab.txt \n inflating: uncased_L-12_H-768_A-12/bert_model.ckpt.index \n inflating: uncased_L-12_H-768_A-12/bert_config.json \n"
],
[
"!cd ../bert/checkpoints && wget --load-cookies /tmp/cookies.txt \"https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1R84voFKHfWV9xjzeLzWBbmY1uOMYpnyD' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\\1\\n/p')&id=1R84voFKHfWV9xjzeLzWBbmY1uOMYpnyD\" -O biobert.tar.gz && rm -rf /tmp/cookies.txt\n!cd ../bert/checkpoints && tar xvzf biobert.tar.gz ",
"--2020-06-08 13:14:33-- https://docs.google.com/uc?export=download&confirm=cofK&id=1R84voFKHfWV9xjzeLzWBbmY1uOMYpnyD\nResolving proxyapp.unal.edu.co (proxyapp.unal.edu.co)... 168.176.239.30\nConnecting to proxyapp.unal.edu.co (proxyapp.unal.edu.co)|168.176.239.30|:8080... connected.\nProxy request sent, awaiting response... 302 Moved Temporarily\nLocation: https://doc-0c-a0-docs.googleusercontent.com/docs/securesc/cg8g93i9soc78mka2c6lmt7ec5j6cbku/kjl202bv8cdcbrqie9lo26qc07uk04fs/1591640100000/13799006341648886493/08805792308426396309Z/1R84voFKHfWV9xjzeLzWBbmY1uOMYpnyD?e=download [following]\n--2020-06-08 13:14:33-- https://doc-0c-a0-docs.googleusercontent.com/docs/securesc/cg8g93i9soc78mka2c6lmt7ec5j6cbku/kjl202bv8cdcbrqie9lo26qc07uk04fs/1591640100000/13799006341648886493/08805792308426396309Z/1R84voFKHfWV9xjzeLzWBbmY1uOMYpnyD?e=download\nConnecting to proxyapp.unal.edu.co (proxyapp.unal.edu.co)|168.176.239.30|:8080... connected.\nProxy request sent, awaiting response... 302 Found\nLocation: https://docs.google.com/nonceSigner?nonce=agfhntguh9i1q&continue=https://doc-0c-a0-docs.googleusercontent.com/docs/securesc/cg8g93i9soc78mka2c6lmt7ec5j6cbku/kjl202bv8cdcbrqie9lo26qc07uk04fs/1591640100000/13799006341648886493/08805792308426396309Z/1R84voFKHfWV9xjzeLzWBbmY1uOMYpnyD?e%3Ddownload&hash=ggso4tfjjtreo9g3rkb0r0dsli33gkb9 [following]\n--2020-06-08 13:14:34-- https://docs.google.com/nonceSigner?nonce=agfhntguh9i1q&continue=https://doc-0c-a0-docs.googleusercontent.com/docs/securesc/cg8g93i9soc78mka2c6lmt7ec5j6cbku/kjl202bv8cdcbrqie9lo26qc07uk04fs/1591640100000/13799006341648886493/08805792308426396309Z/1R84voFKHfWV9xjzeLzWBbmY1uOMYpnyD?e%3Ddownload&hash=ggso4tfjjtreo9g3rkb0r0dsli33gkb9\nConnecting to proxyapp.unal.edu.co (proxyapp.unal.edu.co)|168.176.239.30|:8080... connected.\nProxy request sent, awaiting response... 302 Found\nLocation: https://doc-0c-a0-docs.googleusercontent.com/docs/securesc/cg8g93i9soc78mka2c6lmt7ec5j6cbku/kjl202bv8cdcbrqie9lo26qc07uk04fs/1591640100000/13799006341648886493/08805792308426396309Z/1R84voFKHfWV9xjzeLzWBbmY1uOMYpnyD?e=download&nonce=agfhntguh9i1q&user=08805792308426396309Z&hash=n2aul6ach819t3fk14m7f0icg8rdqm7l [following]\n--2020-06-08 13:14:34-- https://doc-0c-a0-docs.googleusercontent.com/docs/securesc/cg8g93i9soc78mka2c6lmt7ec5j6cbku/kjl202bv8cdcbrqie9lo26qc07uk04fs/1591640100000/13799006341648886493/08805792308426396309Z/1R84voFKHfWV9xjzeLzWBbmY1uOMYpnyD?e=download&nonce=agfhntguh9i1q&user=08805792308426396309Z&hash=n2aul6ach819t3fk14m7f0icg8rdqm7l\nConnecting to proxyapp.unal.edu.co (proxyapp.unal.edu.co)|168.176.239.30|:8080... connected.\nProxy request sent, awaiting response... 200 OK\nLength: unspecified [application/x-gzip]\nSaving to: 'biobert.tar.gz'\n\nbiobert.tar.gz [ <=> ] 382.81M 10.0MB/s in 38s \n\n2020-06-08 13:15:13 (10.1 MB/s) - 'biobert.tar.gz' saved [401403346]\n\nbiobert_v1.1_pubmed/\nbiobert_v1.1_pubmed/model.ckpt-1000000.data-00000-of-00001\nbiobert_v1.1_pubmed/model.ckpt-1000000.meta\nbiobert_v1.1_pubmed/bert_config.json\nbiobert_v1.1_pubmed/vocab.txt\nbiobert_v1.1_pubmed/model.ckpt-1000000.index\n"
],
[
"!ls ../bert/checkpoints",
"biobert.tar.gz\t uncased_L-12_H-768_A-12\r\nbiobert_v1.1_pubmed uncased_L-12_H-768_A-12.zip\r\n"
],
[
"#remove big files from commit\n#git filter-branch --force --index-filter 'git rm --cached --ignore-unmatch train-data/train_pairs/related_docs_negative_pairs_BioASQ-trainingDataset8b.json' --prune-empty --tag-name-filter cat -- --all",
"_____no_output_____"
],
[
"!ls ../bert/checkpoints/uncased_L-12_H-768_A-12",
"bert_config.json\t\t bert_model.ckpt.index vocab.txt\r\nbert_model.ckpt.data-00000-of-00001 bert_model.ckpt.meta\r\n"
],
[
"!ls ../bert/checkpoints/biobert_v1.1_pubmed",
"bert_config.json\t\t\tmodel.ckpt-1000000.index vocab.txt\r\nmodel.ckpt-1000000.data-00000-of-00001\tmodel.ckpt-1000000.meta\r\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9ddf16ed8b4f17d7741294ce8b4249682072bb | 12,751 | ipynb | Jupyter Notebook | how-to-use-azureml/training/train-on-computeinstance/train-on-computeinstance.ipynb | oliverw1/MachineLearningNotebooks | 5080053a3542647d788053ce4e919c9f8efd98e9 | [
"MIT"
] | 1 | 2020-11-23T13:59:23.000Z | 2020-11-23T13:59:23.000Z | how-to-use-azureml/training/train-on-computeinstance/train-on-computeinstance.ipynb | oliverw1/MachineLearningNotebooks | 5080053a3542647d788053ce4e919c9f8efd98e9 | [
"MIT"
] | 4 | 2020-08-14T23:21:54.000Z | 2020-08-14T23:34:35.000Z | how-to-use-azureml/training/train-on-computeinstance/train-on-computeinstance.ipynb | oliverw1/MachineLearningNotebooks | 5080053a3542647d788053ce4e919c9f8efd98e9 | [
"MIT"
] | 2 | 2020-07-22T19:08:10.000Z | 2020-07-27T20:08:19.000Z | 31.406404 | 368 | 0.549212 | [
[
[
"Copyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"# Train using Azure Machine Learning Compute Instance\n\n* Initialize Workspace\n* Introduction to ComputeInstance\n* Create an Experiment\n* Submit ComputeInstance run\n* Additional operations to perform on ComputeInstance",
"_____no_output_____"
],
[
"## Prerequisites\nIf you are using an Azure Machine Learning ComputeInstance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace.",
"_____no_output_____"
]
],
[
[
"# Check core SDK version number\nimport azureml.core\n\nprint(\"SDK version:\", azureml.core.VERSION)",
"_____no_output_____"
]
],
[
[
"## Initialize Workspace\n\nInitialize a workspace object",
"_____no_output_____"
]
],
[
[
"from azureml.core import Workspace\n\nws = Workspace.from_config()\nprint(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')",
"_____no_output_____"
]
],
[
[
"## Introduction to ComputeInstance\n\n\nAzure Machine Learning compute instance is a fully-managed cloud-based workstation optimized for your machine learning development environment. It is created **within your workspace region**.\n\nFor more information on ComputeInstance, please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-instance)\n\n**Note**: As with other Azure services, there are limits on certain resources (for eg. AmlCompute quota) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.",
"_____no_output_____"
],
[
"### Create ComputeInstance\nFirst lets check which VM families are available in your region. Azure is a regional service and some specialized SKUs (especially GPUs) are only available in certain regions. Since ComputeInstance is created in the region of your workspace, we will use the supported_vms () function to see if the VM family we want to use ('STANDARD_D3_V2') is supported.\n\nYou can also pass a different region to check availability and then re-create your workspace in that region through the [configuration notebook](../../../configuration.ipynb)",
"_____no_output_____"
]
],
[
[
"from azureml.core.compute import ComputeTarget, ComputeInstance\n\nComputeInstance.supported_vmsizes(workspace = ws)\n# ComputeInstance.supported_vmsizes(workspace = ws, location='eastus')",
"_____no_output_____"
],
[
"import datetime\nimport time\n\nfrom azureml.core.compute import ComputeTarget, ComputeInstance\nfrom azureml.core.compute_target import ComputeTargetException\n\n# Choose a name for your instance\n# Compute instance name should be unique across the azure region\ncompute_name = \"ci{}\".format(ws._workspace_id)[:10]\n\n# Verify that instance does not exist already\ntry:\n instance = ComputeInstance(workspace=ws, name=compute_name)\n print('Found existing instance, use it.')\nexcept ComputeTargetException:\n compute_config = ComputeInstance.provisioning_configuration(\n vm_size='STANDARD_D3_V2',\n ssh_public_access=False,\n # vnet_resourcegroup_name='<my-resource-group>',\n # vnet_name='<my-vnet-name>',\n # subnet_name='default',\n # admin_user_ssh_public_key='<my-sshkey>'\n )\n instance = ComputeInstance.create(ws, compute_name, compute_config)\n instance.wait_for_completion(show_output=True)",
"_____no_output_____"
]
],
[
[
"## Create An Experiment\n\n**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments.",
"_____no_output_____"
]
],
[
[
"from azureml.core import Experiment\nexperiment_name = 'train-on-computeinstance'\nexperiment = Experiment(workspace = ws, name = experiment_name)",
"_____no_output_____"
]
],
[
[
"## Submit ComputeInstance run\nThe training script `train.py` is already created for you",
"_____no_output_____"
],
[
"### Create environment\n\nCreate an environment with scikit-learn installed.",
"_____no_output_____"
]
],
[
[
"from azureml.core import Environment\nfrom azureml.core.conda_dependencies import CondaDependencies\n\nmyenv = Environment(\"myenv\")\nmyenv.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])",
"_____no_output_____"
]
],
[
[
"### Configure & Run",
"_____no_output_____"
]
],
[
[
"from azureml.core import ScriptRunConfig\nfrom azureml.core.runconfig import DEFAULT_CPU_IMAGE\n\nsrc = ScriptRunConfig(source_directory='', script='train.py')\n\n# Set compute target to the one created in previous step\nsrc.run_config.target = instance\n\n# Set environment\nsrc.run_config.environment = myenv\n \nrun = experiment.submit(config=src)",
"_____no_output_____"
]
],
[
[
"Note: if you need to cancel a run, you can follow [these instructions](https://aka.ms/aml-docs-cancel-run).",
"_____no_output_____"
]
],
[
[
"from azureml.widgets import RunDetails\nRunDetails(run).show()",
"_____no_output_____"
]
],
[
[
"You can use the get_active_runs() to get the currently running or queued jobs on the compute instance",
"_____no_output_____"
]
],
[
[
"# wait for the run to reach Queued or Running state if it is in Preparing state\nstatus = run.get_status()\nwhile status not in ['Queued', 'Running', 'Completed', 'Failed', 'Canceled']:\n state = run.get_status()\n print('Run status: {}'.format(status))\n time.sleep(10)",
"_____no_output_____"
],
[
"# get active runs which are in Queued or Running state\nactive_runs = instance.get_active_runs()\nfor active_run in active_runs:\n print(active_run.run_id, ',', active_run.status)",
"_____no_output_____"
],
[
"run.wait_for_completion()\nprint(run.get_metrics())",
"_____no_output_____"
]
],
[
[
"### Additional operations to perform on ComputeInstance\n\nYou can perform more operations on ComputeInstance such as get status, change the state or deleting the compute.",
"_____no_output_____"
]
],
[
[
"# get_status() gets the latest status of the ComputeInstance target\ninstance.get_status()",
"_____no_output_____"
],
[
"# stop() is used to stop the ComputeInstance\n# Stopping ComputeInstance will stop the billing meter and persist the state on the disk.\n# Available Quota will not be changed with this operation.\ninstance.stop(wait_for_completion=True, show_output=True)",
"_____no_output_____"
],
[
"# start() is used to start the ComputeInstance if it is in stopped state\ninstance.start(wait_for_completion=True, show_output=True)",
"_____no_output_____"
],
[
"# restart() is used to restart the ComputeInstance\ninstance.restart(wait_for_completion=True, show_output=True)",
"_____no_output_____"
],
[
"# delete() is used to delete the ComputeInstance target. Useful if you want to re-use the compute name \n# instance.delete(wait_for_completion=True, show_output=True)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
ec9de59449ffd55f495c2f5fbe6b133fb1ff6dda | 326,862 | ipynb | Jupyter Notebook | FID plots.ipynb | Information-Fusion-Lab-Umass/NoisyInjectiveFlows | 6d1430f3a4c40e2b9ae241bd8bc6ce7343d91246 | [
"MIT"
] | null | null | null | FID plots.ipynb | Information-Fusion-Lab-Umass/NoisyInjectiveFlows | 6d1430f3a4c40e2b9ae241bd8bc6ce7343d91246 | [
"MIT"
] | 1 | 2021-06-21T21:01:46.000Z | 2021-06-25T19:10:05.000Z | FID plots.ipynb | Information-Fusion-Lab-Umass/NoisyInjectiveFlows | 6d1430f3a4c40e2b9ae241bd8bc6ce7343d91246 | [
"MIT"
] | null | null | null | 833.831633 | 194,292 | 0.951187 | [
[
[
"%load_ext autoreload\n%autoreload 2",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
],
[
"# from datasets import mnist_data_loader, cifar10_data_loader, celeb_dataset_loader",
"_____no_output_____"
],
[
"# from jax import random\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd",
"_____no_output_____"
],
[
"import yaml\nimport glob\nimport copy",
"_____no_output_____"
],
[
"meta_files = glob.glob('FID/*/*.yaml')\n\nmeta_datas = {}\nfor path in meta_files:\n with open(path) as f:\n meta = yaml.safe_load(f)\n name = path[4:-10]\n meta_datas[name] = meta",
"_____no_output_____"
],
[
"def get_fids(dataset_name):\n files = dict([(key, val) for key, val in meta_datas.items() if dataset_name in key])\n ans = {}\n for key, val in files.items():\n \n df = pd.DataFrame(val['settings'])\n df['iteration_number'] = val['iteration_number']\n df['index'] = df['path'].apply(lambda x: int(x.split('_')[-1]))\n \n if('glow' in key):\n if(df.shape[0] > 1):\n df = df[df['index'] > 0]\n else:\n # Keep the last 15 folders\n max_index = df['index'].max()\n df = df[df['index'] > max_index - 15]\n \n ans[key] = df\n return ans",
"_____no_output_____"
],
[
"def plot_fids(all_scores, save_path=None): \n fig, ax = plt.subplots(1, 1)\n fig.set_size_inches(5, 5)\n \n # Plot the NF first\n nf_key = [key for key in all_scores.keys() if 'glow' in key][0]\n NF_score = all_scores[nf_key]['score'].to_numpy()[0]\n ax.axhline(NF_score, c='black', label='NF', linestyle=':')\n \n # Plot the remaining lines\n non_nf_keys = [key for key in all_scores if key is not nf_key]\n sorted_keys = sorted(non_nf_keys, key=lambda x: int(x.split('_')[-1]))\n styles = ['-', '-.', '--', (0, (5, 10)), (0, (3, 5, 1, 5))]\n for key, ls in zip(sorted_keys, styles):\n df = all_scores[key]\n label = 'NIF-%s'%key.split('_')[-1]\n \n df[df['s'] <= 1.0].plot(ax=ax, x='s', y='score', label=label, linestyle=ls)\n\n ax.set_xlabel('s', fontsize=14)\n ax.set_ylabel('FID', fontsize=14)\n ax.set_xlim(0.0, 1.0)\n \n ax.legend()\n \n if(save_path is not None):\n print('saved to', save_path)\n plt.subplots_adjust(wspace=0, hspace=0, left=0, right=1, bottom=0, top=1)\n plt.savefig(save_path, bbox_inches='tight', format='pdf')",
"_____no_output_____"
],
[
"celeba_scores = get_fids('celeba')\ncifar_scores = get_fids('cifar')\nmnist_scores = get_fids('mnist')",
"_____no_output_____"
],
[
"pwd",
"_____no_output_____"
],
[
"save_path = 'Results/celeba_fid_vary_s.pdf'\nplot_fids(celeba_scores, save_path=save_path)",
"saved to Results/celeba_fid_vary_s.pdf\n"
],
[
"save_path = 'Results/cifar_fid_vary_s.pdf'\nplot_fids(cifar_scores, save_path=save_path)",
"saved to Results/cifar_fid_vary_s.pdf\n"
],
[
"save_path = 'Results/fmnist_fid_vary_s.pdf'\nplot_fids(mnist_scores, save_path=save_path)",
"saved to Results/fmnist_fid_vary_s.pdf\n"
],
[
"ls Experiments/celeba_64/186000/checkpoint",
"keys.p losses.csv model_state.npz opt_state.npz\r\n"
],
[
"csvs = []\n\n\npd.read_csv('Experiments/celeba_64/186000/checkpoint/losses.csv', header=None)[1].ewm(alpha=0.01).mean()[10000:].plot()",
"_____no_output_____"
],
[
"losses = ['Experiments/celeba_glow/130000/checkpoint/losses.csv',\n'Experiments/celeba_64/186000/checkpoint/losses.csv',\n'Experiments/celeba_128/170000/checkpoint/losses.csv',\n'Experiments/celeba_256/180000/checkpoint/losses.csv',\n'Experiments/celeba_512/130000/checkpoint/losses.csv']",
"_____no_output_____"
],
[
"plt.figure().set_size_inches(30, 30)\nfor i, file in enumerate(losses):\n pd.read_csv(file, header=None)[1].ewm(alpha=0.001).mean()[100000:].plot(label=i)\nplt.legend()\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9ded17caf22ef1671810d689c0ee7a0d8d49fd | 13,884 | ipynb | Jupyter Notebook | spark-example/3_data_inputs_and_outputs-zh.ipynb | user-ZJ/Data_Engineering- | f0ff573f198e5c876542acff2aae7c44a2aed4c2 | [
"Apache-2.0"
] | 1 | 2021-06-23T02:01:17.000Z | 2021-06-23T02:01:17.000Z | spark-example/3_data_inputs_and_outputs-zh.ipynb | user-ZJ/Data_Engineering- | f0ff573f198e5c876542acff2aae7c44a2aed4c2 | [
"Apache-2.0"
] | null | null | null | spark-example/3_data_inputs_and_outputs-zh.ipynb | user-ZJ/Data_Engineering- | f0ff573f198e5c876542acff2aae7c44a2aed4c2 | [
"Apache-2.0"
] | 1 | 2020-05-25T14:14:34.000Z | 2020-05-25T14:14:34.000Z | 33.617433 | 532 | 0.512748 | [
[
[
"# 使用Spark读取和写入数据\n\n这个 notebook 包含了以前录屏视频里的代码。唯一的区别是,数据集不是从远程集群中读取而是从本地文件读取的。通过单击 “jupyter” 图标并打开标题为 “data” 的文件夹可以查看该文件。\n\n运行一下代码单元看看它们是如何运作的。 \n\n首先让我们导入 SparkConf 和 SparkSession。",
"_____no_output_____"
]
],
[
[
"import pyspark\nfrom pyspark import SparkConf\nfrom pyspark.sql import SparkSession",
"_____no_output_____"
]
],
[
[
"由于我们在本地运行 Spark ,所以已经有了 sparkcontext 和 sparksession。我们可以更新一些参数,例如我们的应用程序名称。就叫它 \"Our first Python Spark SQL example” 吧.",
"_____no_output_____"
]
],
[
[
"spark = SparkSession \\\n .builder \\\n .appName(\"Our first Python Spark SQL example\") \\\n .getOrCreate()",
"_____no_output_____"
]
],
[
[
"检查一下是否生效了。",
"_____no_output_____"
]
],
[
[
"spark.sparkContext.getConf().getAll()",
"_____no_output_____"
],
[
"spark",
"_____no_output_____"
]
],
[
[
"正如你所看到的,应用程序名称已经变成我们设置的名称了。\n\n接下来我们从一个相当小的样本数据集上创建我们的第一个 dataframe。在这门课程里, 我们会使用一个音乐云服务日志文件数据集,数据集里记录了用户使用云服务的活动。数据集里的记录描述了诸如登录,访问页面,收听歌曲,看到广告等事件。",
"_____no_output_____"
]
],
[
[
"path = \"data/sparkify_log_small.json\"\nuser_log = spark.read.json(path)",
"_____no_output_____"
],
[
"user_log.printSchema()",
"root\n |-- artist: string (nullable = true)\n |-- auth: string (nullable = true)\n |-- firstName: string (nullable = true)\n |-- gender: string (nullable = true)\n |-- itemInSession: long (nullable = true)\n |-- lastName: string (nullable = true)\n |-- length: double (nullable = true)\n |-- level: string (nullable = true)\n |-- location: string (nullable = true)\n |-- method: string (nullable = true)\n |-- page: string (nullable = true)\n |-- registration: long (nullable = true)\n |-- sessionId: long (nullable = true)\n |-- song: string (nullable = true)\n |-- status: long (nullable = true)\n |-- ts: long (nullable = true)\n |-- userAgent: string (nullable = true)\n |-- userId: string (nullable = true)\n\n"
],
[
"user_log.describe()",
"_____no_output_____"
],
[
"user_log.show(n=1)",
"+-------------+---------+---------+------+-------------+--------+---------+-----+--------------------+------+--------+-------------+---------+--------------------+------+-------------+--------------------+------+\n| artist| auth|firstName|gender|itemInSession|lastName| length|level| location|method| page| registration|sessionId| song|status| ts| userAgent|userId|\n+-------------+---------+---------+------+-------------+--------+---------+-----+--------------------+------+--------+-------------+---------+--------------------+------+-------------+--------------------+------+\n|Showaddywaddy|Logged In| Kenneth| M| 112|Matthews|232.93342| paid|Charlotte-Concord...| PUT|NextSong|1509380319284| 5132|Christmas Tears W...| 200|1513720872284|\"Mozilla/5.0 (Win...| 1046|\n+-------------+---------+---------+------+-------------+--------+---------+-----+--------------------+------+--------+-------------+---------+--------------------+------+-------------+--------------------+------+\nonly showing top 1 row\n\n"
],
[
"user_log.take(5)",
"_____no_output_____"
],
[
"out_path = \"data/sparkify_log_small.csv\"",
"_____no_output_____"
],
[
"user_log.write.save(out_path, format=\"csv\", header=True)",
"_____no_output_____"
],
[
"user_log_2 = spark.read.csv(out_path, header=True)",
"_____no_output_____"
],
[
"user_log_2.printSchema()",
"root\n |-- artist: string (nullable = true)\n |-- auth: string (nullable = true)\n |-- firstName: string (nullable = true)\n |-- gender: string (nullable = true)\n |-- itemInSession: string (nullable = true)\n |-- lastName: string (nullable = true)\n |-- length: string (nullable = true)\n |-- level: string (nullable = true)\n |-- location: string (nullable = true)\n |-- method: string (nullable = true)\n |-- page: string (nullable = true)\n |-- registration: string (nullable = true)\n |-- sessionId: string (nullable = true)\n |-- song: string (nullable = true)\n |-- status: string (nullable = true)\n |-- ts: string (nullable = true)\n |-- userAgent: string (nullable = true)\n |-- userId: string (nullable = true)\n\n"
],
[
"user_log_2.take(2)",
"_____no_output_____"
],
[
"user_log_2.select(\"userID\").show()",
"+------+\n|userID|\n+------+\n| 1046|\n| 1000|\n| 2219|\n| 2373|\n| 1747|\n| 1747|\n| 1162|\n| 1061|\n| 748|\n| 597|\n| 1806|\n| 748|\n| 1176|\n| 2164|\n| 2146|\n| 2219|\n| 1176|\n| 2904|\n| 597|\n| 226|\n+------+\nonly showing top 20 rows\n\n"
]
],
[
[
"```python\nuser_log_2.take(1)\n```",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ec9e0d327b888c077f7fbd8c4e27c97282714d6d | 11,491 | ipynb | Jupyter Notebook | dev_course/dl2/07a_lsuv.ipynb | algal/fastai_docs | 07ab75626a1f5199221421c11a3c987f3dd8dc92 | [
"Apache-2.0"
] | null | null | null | dev_course/dl2/07a_lsuv.ipynb | algal/fastai_docs | 07ab75626a1f5199221421c11a3c987f3dd8dc92 | [
"Apache-2.0"
] | null | null | null | dev_course/dl2/07a_lsuv.ipynb | algal/fastai_docs | 07ab75626a1f5199221421c11a3c987f3dd8dc92 | [
"Apache-2.0"
] | null | null | null | 25.649554 | 543 | 0.543208 | [
[
[
"%load_ext autoreload\n%autoreload 2\n\n%matplotlib inline",
"_____no_output_____"
],
[
"#export\nfrom exp.nb_07 import *",
"_____no_output_____"
]
],
[
[
"## Layerwise Sequential Unit Variance (LSUV)",
"_____no_output_____"
],
[
"Getting the MNIST data and a CNN",
"_____no_output_____"
]
],
[
[
"x_train,y_train,x_valid,y_valid = get_data()\n\nx_train,x_valid = normalize_to(x_train,x_valid)\ntrain_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)\n\nnh,bs = 50,512\nc = y_train.max().item()+1\nloss_func = F.cross_entropy\n\ndata = DataBunch(*get_dls(train_ds, valid_ds, bs), c)",
"_____no_output_____"
],
[
"mnist_view = view_tfm(1,28,28)\ncbfs = [Recorder,\n partial(AvgStatsCallback,accuracy),\n CudaCallback,\n partial(BatchTransformXCallback, mnist_view)]",
"_____no_output_____"
],
[
"nfs = [8,16,32,64,64]",
"_____no_output_____"
],
[
"class ConvLayer(nn.Module):\n def __init__(self, ni, nf, ks=3, stride=2, sub=0., **kwargs):\n super().__init__()\n self.conv = nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=True)\n self.relu = GeneralRelu(sub=sub, **kwargs)\n \n def forward(self, x): return self.relu(self.conv(x))\n \n @property\n def bias(self): return -self.relu.sub\n @bias.setter\n def bias(self,v): self.relu.sub = -v\n @property\n def weight(self): return self.conv.weight",
"_____no_output_____"
],
[
"learn,run = get_learn_run(nfs, data, 0.6, ConvLayer, cbs=cbfs)",
"_____no_output_____"
]
],
[
[
"Now we're going to look at the paper [All You Need is a Good Init](https://arxiv.org/pdf/1511.06422.pdf), which introduces *Layer-wise Sequential Unit-Variance* (*LSUV*). We initialize our neural net with the usual technique, then we pass a batch through the model and check the outputs of the linear and convolutional layers. We can then rescale the weights according to the actual variance we observe on the activations, and subtract the mean we observe from the initial bias. That way we will have activations that stay normalized.\n\nWe repeat this process until we are satisfied with the mean/variance we observe.\n\nLet's start by looking at a baseline:",
"_____no_output_____"
]
],
[
[
"run.fit(2, learn)",
"train: [2.17609578125, tensor(0.2480, device='cuda:0')]\nvalid: [2.00464140625, tensor(0.2935, device='cuda:0')]\ntrain: [1.000411875, tensor(0.6604, device='cuda:0')]\nvalid: [0.2265222900390625, tensor(0.9236, device='cuda:0')]\n"
]
],
[
[
"Now we recreate our model and we'll try again with LSUV. Hopefully, we'll get better results!",
"_____no_output_____"
]
],
[
[
"learn,run = get_learn_run(nfs, data, 0.6, ConvLayer, cbs=cbfs)",
"_____no_output_____"
]
],
[
[
"Helper function to get one batch of a given dataloader, with the callbacks called to preprocess it.",
"_____no_output_____"
]
],
[
[
"#export\ndef get_batch(dl, run):\n run.xb,run.yb = next(iter(dl))\n for cb in run.cbs: cb.set_runner(run)\n run('begin_batch')\n return run.xb,run.yb",
"_____no_output_____"
],
[
"xb,yb = get_batch(data.train_dl, run)",
"_____no_output_____"
]
],
[
[
"We only want the outputs of convolutional or linear layers. To find them, we need a recursive function. We can use `sum(list, [])` to concatenate the lists the function finds (`sum` applies the + operate between the elements of the list you pass it, beginning with the initial state in the second argument).",
"_____no_output_____"
]
],
[
[
"#export\ndef find_modules(m, cond):\n if cond(m): return [m]\n return sum([find_modules(o,cond) for o in m.children()], [])\n\ndef is_lin_layer(l):\n lin_layers = (nn.Conv1d, nn.Conv2d, nn.Conv3d, nn.Linear, nn.ReLU)\n return isinstance(l, lin_layers)",
"_____no_output_____"
],
[
"mods = find_modules(learn.model, lambda o: isinstance(o,ConvLayer))",
"_____no_output_____"
],
[
"mods",
"_____no_output_____"
]
],
[
[
"This is a helper function to grab the mean and std of the output of a hooked layer.",
"_____no_output_____"
]
],
[
[
"def append_stat(hook, mod, inp, outp):\n d = outp.data\n hook.mean,hook.std = d.mean().item(),d.std().item()",
"_____no_output_____"
],
[
"mdl = learn.model.cuda()",
"_____no_output_____"
]
],
[
[
"So now we can look at the mean and std of the conv layers of our model.",
"_____no_output_____"
]
],
[
[
"with Hooks(mods, append_stat) as hooks:\n mdl(xb)\n for hook in hooks: print(hook.mean,hook.std)",
"0.6072850227355957 1.4199378490447998\n0.765070378780365 1.546080231666565\n0.5341569781303406 1.113872766494751\n0.49566417932510376 0.8838422894477844\n0.4387528598308563 0.6142216920852661\n"
]
],
[
[
"We first adjust the bias terms to make the means 0, then we adjust the standard deviations to make the stds 1 (with a threshold of 1e-3). The `mdl(xb) is not None` clause is just there to pass `xb` through `mdl` and compute all the activations so that the hooks get updated. ",
"_____no_output_____"
]
],
[
[
"#export\ndef lsuv_module(m, xb):\n h = Hook(m, append_stat)\n\n while mdl(xb) is not None and abs(h.mean) > 1e-3: m.bias -= h.mean\n while mdl(xb) is not None and abs(h.std-1) > 1e-3: m.weight.data /= h.std\n\n h.remove()\n return h.mean,h.std",
"_____no_output_____"
]
],
[
[
"We execute that initialization on all the conv layers in order:",
"_____no_output_____"
]
],
[
[
"for m in mods: print(lsuv_module(m, xb))",
"(-0.1796007752418518, 1.0000001192092896)\n(-0.005382229574024677, 0.9999997615814209)\n(0.18785929679870605, 0.9999998807907104)\n(0.172571063041687, 1.0)\n(0.2758583426475525, 1.0)\n"
]
],
[
[
"Note that the mean doesn't exactly stay at 0. since we change the standard deviation after by scaling the weight.",
"_____no_output_____"
],
[
"Then training is beginning on better grounds.",
"_____no_output_____"
]
],
[
[
"%time run.fit(2, learn)",
"train: [0.42438078125, tensor(0.8629, device='cuda:0')]\nvalid: [0.14604696044921875, tensor(0.9548, device='cuda:0')]\ntrain: [0.128675537109375, tensor(0.9608, device='cuda:0')]\nvalid: [0.09168212280273437, tensor(0.9733, device='cuda:0')]\nCPU times: user 4.09 s, sys: 504 ms, total: 4.6 s\nWall time: 4.61 s\n"
]
],
[
[
"LSUV is particularly useful for more complex and deeper architectures that are hard to initialize to get unit variance at the last layer.",
"_____no_output_____"
],
[
"## Export",
"_____no_output_____"
]
],
[
[
"!python notebook2script.py 07a_lsuv.ipynb",
"Converted 07a_lsuv.ipynb to exp/nb_07a.py\r\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
ec9e2844ffc7a5a21d152f880e46ed463d70a338 | 3,794 | ipynb | Jupyter Notebook | array_strings/ipynb/.ipynb_checkpoints/unique_email_id-checkpoint.ipynb | PRkudupu/Algo-python | a0b9c3e19e4ece48f5dc47e34860510565ab2f38 | [
"MIT"
] | 1 | 2019-05-04T00:43:52.000Z | 2019-05-04T00:43:52.000Z | array_strings/ipynb/.ipynb_checkpoints/unique_email_id-checkpoint.ipynb | PRkudupu/Algo-python | a0b9c3e19e4ece48f5dc47e34860510565ab2f38 | [
"MIT"
] | null | null | null | array_strings/ipynb/.ipynb_checkpoints/unique_email_id-checkpoint.ipynb | PRkudupu/Algo-python | a0b9c3e19e4ece48f5dc47e34860510565ab2f38 | [
"MIT"
] | null | null | null | 30.111111 | 134 | 0.564048 | [
[
[
"<b>Every email consists of a local name and a domain name, separated by the @ sign.</b>\n\nFor example, in [email protected], alice is the local name, and leetcode.com is the domain name.\n\nBesides lowercase letters, these emails may contain '.'s or '+'s.\n\nIf you add periods ('.') between some characters in the local name part of an email address, mail sent there \n\nwill be forwarded to the same address without dots in the local name. \n\nFor example, \"[email protected]\" and \"[email protected]\" forward to the same email address. \n(Note that this rule does not apply for domain names.)\n\nIf you add a plus ('+') in the local name, everything after the first plus sign will be ignored. \nThis allows certain emails to be filtered, for example [email protected] will be forwarded to [email protected]. \n(Again, this rule does not apply for domain names.)\n\nIt is possible to use both of these rules at the same time.\n\nGiven a list of emails, we send one email to each address in the list. \nHow many different addresses actually receive mails? \n\n \n\nExample 1:\n\nInput: [\"[email protected]\",\"[email protected]\",\"[email protected]\"]\nOutput: 2\nExplanation: \"[email protected]\" and \"[email protected]\" actually receive mails\n \n\nNote:\n\n1 <= emails[i].length <= 100\n1 <= emails.length <= 100\nEach emails[i] contains exactly one '@' character.<b>",
"_____no_output_____"
],
[
"<b>Solution</b><br>\nFor each email address, convert it to the canonical address that actually receives the mail. This involves a few steps:\n\n* Separate the email address into a local part and the rest of the address.\n\n* If the local part has a '+' character, remove it and everything beyond it from the local part.\n\n* Remove all the zeros from the local part.\n\n* The canonical address is local + rest.",
"_____no_output_____"
]
],
[
[
"def unique_email(ls):\n #use set to store unique email\n seen=set()\n for email in ls:\n #split local and domain name\n local, domain = email.split('@')\n if '+' in local:\n local =local[:local.index('+')]\n seen.add(local.replace('.','')+ '@' + domain)\n return len(seen)",
"_____no_output_____"
],
[
"print(unique_email([\"[email protected]\",\"[email protected]\",\"[email protected]\"]))",
"2\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
ec9e2cca423252d66168f7c332e4085c87a655de | 19,383 | ipynb | Jupyter Notebook | CNN Segmentation + connected components.ipynb | 07sebascode/RSNA-Pneumonia-Detection | 7db74dbc74e45ab1ee67d113c95a0d06a7065ebf | [
"Apache-2.0"
] | null | null | null | CNN Segmentation + connected components.ipynb | 07sebascode/RSNA-Pneumonia-Detection | 7db74dbc74e45ab1ee67d113c95a0d06a7065ebf | [
"Apache-2.0"
] | null | null | null | CNN Segmentation + connected components.ipynb | 07sebascode/RSNA-Pneumonia-Detection | 7db74dbc74e45ab1ee67d113c95a0d06a7065ebf | [
"Apache-2.0"
] | null | null | null | 94.092233 | 3,282 | 0.631688 | [
[
[
"# Approach\n\n* Firstly a convolutional neural network is used to segment the image, using the bounding boxes directly as a mask. \n* Secondly connected components is used to separate multiple areas of predicted pneumonia.\n* Finally a bounding box is simply drawn around every connected component.\n\n# Network\n\n* The network consists of a number of residual blocks with convolutions and downsampling blocks with max pooling.\n* At the end of the network a single upsampling layer converts the output to the same shape as the input.\n\nAs the input to the network is 256 by 256 (instead of the original 1024 by 1024) and the network downsamples a number of times without any meaningful upsampling (the final upsampling is just to match in 256 by 256 mask) the final prediction is very crude. If the network downsamples 4 times the final bounding boxes can only change with at least 16 pixels.",
"_____no_output_____"
]
],
[
[
"import os\nimport csv\nimport random\nimport pydicom\nimport numpy as np\nimport pandas as pd\nfrom skimage import io\nfrom skimage import measure\nfrom skimage.transform import resize\n\nimport tensorflow as tf\nfrom tensorflow import keras\n\nfrom matplotlib import pyplot as plt\nimport matplotlib.patches as patches",
"_____no_output_____"
]
],
[
[
"# Load pneumonia locations\n\nTable contains [filename : pneumonia location] pairs per row. \n* If a filename contains multiple pneumonia, the table contains multiple rows with the same filename but different pneumonia locations. \n* If a filename contains no pneumonia it contains a single row with an empty pneumonia location.\n\nThe code below loads the table and transforms it into a dictionary. \n* The dictionary uses the filename as key and a list of pneumonia locations in that filename as value. \n* If a filename is not present in the dictionary it means that it contains no pneumonia.",
"_____no_output_____"
]
],
[
[
"# empty dictionary\npneumonia_locations = {}\n# load table\nwith open(os.path.join('../input/stage_1_train_labels.csv'), mode='r') as infile:\n # open reader\n reader = csv.reader(infile)\n # skip header\n next(reader, None)\n # loop through rows\n for rows in reader:\n # retrieve information\n filename = rows[0]\n location = rows[1:5]\n pneumonia = rows[5]\n # if row contains pneumonia add label to dictionary\n # which contains a list of pneumonia locations per filename\n if pneumonia == '1':\n # convert string to float to int\n location = [int(float(i)) for i in location]\n # save pneumonia location in dictionary\n if filename in pneumonia_locations:\n pneumonia_locations[filename].append(location)\n else:\n pneumonia_locations[filename] = [location]",
"_____no_output_____"
]
],
[
[
"# Load filenames",
"_____no_output_____"
]
],
[
[
"# load and shuffle filenames\nfolder = '../input/stage_1_train_images'\nfilenames = os.listdir(folder)\nrandom.shuffle(filenames)\n# split into train and validation filenames\nn_valid_samples = 2560\ntrain_filenames = filenames[n_valid_samples:]\nvalid_filenames = filenames[:n_valid_samples]\nprint('n train samples', len(train_filenames))\nprint('n valid samples', len(valid_filenames))\nn_train_samples = len(filenames) - n_valid_samples",
"_____no_output_____"
]
],
[
[
"# Exploration",
"_____no_output_____"
]
],
[
[
"print('Total train images:',len(filenames))\nprint('Images with pneumonia:', len(pneumonia_locations))\n\nns = [len(value) for value in pneumonia_locations.values()]\nplt.figure()\nplt.hist(ns)\nplt.xlabel('Pneumonia per image')\nplt.xticks(range(1, np.max(ns)+1))\nplt.show()\n\nheatmap = np.zeros((1024, 1024))\nws = []\nhs = []\nfor values in pneumonia_locations.values():\n for value in values:\n x, y, w, h = value\n heatmap[y:y+h, x:x+w] += 1\n ws.append(w)\n hs.append(h)\nplt.figure()\nplt.title('Pneumonia location heatmap')\nplt.imshow(heatmap)\nplt.figure()\nplt.title('Pneumonia height lengths')\nplt.hist(hs, bins=np.linspace(0,1000,50))\nplt.show()\nplt.figure()\nplt.title('Pneumonia width lengths')\nplt.hist(ws, bins=np.linspace(0,1000,50))\nplt.show()\nprint('Minimum pneumonia height:', np.min(hs))\nprint('Minimum pneumonia width: ', np.min(ws))\n",
"_____no_output_____"
]
],
[
[
" # Data generator\n\nThe dataset is too large to fit into memory, so we need to create a generator that loads data on the fly.\n\n* The generator takes in some filenames, batch_size and other parameters.\n\n* The generator outputs a random batch of numpy images and numpy masks.\n ",
"_____no_output_____"
]
],
[
[
"class generator(keras.utils.Sequence):\n \n def __init__(self, folder, filenames, pneumonia_locations=None, batch_size=32, image_size=256, shuffle=True, augment=False, predict=False):\n self.folder = folder\n self.filenames = filenames\n self.pneumonia_locations = pneumonia_locations\n self.batch_size = batch_size\n self.image_size = image_size\n self.shuffle = shuffle\n self.augment = augment\n self.predict = predict\n self.on_epoch_end()\n \n def __load__(self, filename):\n # load dicom file as numpy array\n img = pydicom.dcmread(os.path.join(self.folder, filename)).pixel_array\n # create empty mask\n msk = np.zeros(img.shape)\n # get filename without extension\n filename = filename.split('.')[0]\n # if image contains pneumonia\n if filename in self.pneumonia_locations:\n # loop through pneumonia\n for location in self.pneumonia_locations[filename]:\n # add 1's at the location of the pneumonia\n x, y, w, h = location\n msk[y:y+h, x:x+w] = 1\n # resize both image and mask\n img = resize(img, (self.image_size, self.image_size), mode='reflect')\n msk = resize(msk, (self.image_size, self.image_size), mode='reflect') > 0.5\n # if augment then horizontal flip half the time\n if self.augment and random.random() > 0.5:\n img = np.fliplr(img)\n msk = np.fliplr(msk)\n # add trailing channel dimension\n img = np.expand_dims(img, -1)\n msk = np.expand_dims(msk, -1)\n return img, msk\n \n def __loadpredict__(self, filename):\n # load dicom file as numpy array\n img = pydicom.dcmread(os.path.join(self.folder, filename)).pixel_array\n # resize image\n img = resize(img, (self.image_size, self.image_size), mode='reflect')\n # add trailing channel dimension\n img = np.expand_dims(img, -1)\n return img\n \n def __getitem__(self, index):\n # select batch\n filenames = self.filenames[index*self.batch_size:(index+1)*self.batch_size]\n # predict mode: return images and filenames\n if self.predict:\n # load files\n imgs = [self.__loadpredict__(filename) for filename in filenames]\n # create numpy batch\n imgs = np.array(imgs)\n return imgs, filenames\n # train mode: return images and masks\n else:\n # load files\n items = [self.__load__(filename) for filename in filenames]\n # unzip images and masks\n imgs, msks = zip(*items)\n # create numpy batch\n imgs = np.array(imgs)\n msks = np.array(msks)\n return imgs, msks\n \n def on_epoch_end(self):\n if self.shuffle:\n random.shuffle(self.filenames)\n \n def __len__(self):\n if self.predict:\n # return everything\n return int(np.ceil(len(self.filenames) / self.batch_size))\n else:\n # return full batches only\n return int(len(self.filenames) / self.batch_size)",
"_____no_output_____"
]
],
[
[
"# Network",
"_____no_output_____"
]
],
[
[
"def create_downsample(channels, inputs):\n x = keras.layers.BatchNormalization(momentum=0.9)(inputs)\n x = keras.layers.LeakyReLU(0)(x)\n x = keras.layers.Conv2D(channels, 1, padding='same', use_bias=False)(x)\n x = keras.layers.MaxPool2D(2)(x)\n return x\n\ndef create_resblock(channels, inputs):\n x = keras.layers.BatchNormalization(momentum=0.9)(inputs)\n x = keras.layers.LeakyReLU(0)(x)\n x = keras.layers.Conv2D(channels, 3, padding='same', use_bias=False)(x)\n x = keras.layers.BatchNormalization(momentum=0.9)(x)\n x = keras.layers.LeakyReLU(0)(x)\n x = keras.layers.Conv2D(channels, 3, padding='same', use_bias=False)(x)\n return keras.layers.add([x, inputs])\n\ndef create_network(input_size, channels, n_blocks=2, depth=4):\n # input\n inputs = keras.Input(shape=(input_size, input_size, 1))\n x = keras.layers.Conv2D(channels, 3, padding='same', use_bias=False)(inputs)\n # residual blocks\n for d in range(depth):\n channels = channels * 2\n x = create_downsample(channels, x)\n for b in range(n_blocks):\n x = create_resblock(channels, x)\n # output\n x = keras.layers.BatchNormalization(momentum=0.9)(x)\n x = keras.layers.LeakyReLU(0)(x)\n x = keras.layers.Conv2D(1, 1, activation='sigmoid')(x)\n outputs = keras.layers.UpSampling2D(2**depth)(x)\n model = keras.Model(inputs=inputs, outputs=outputs)\n return model",
"_____no_output_____"
]
],
[
[
"# Train network\n",
"_____no_output_____"
]
],
[
[
"# define iou or jaccard loss function\ndef iou_loss(y_true, y_pred):\n y_true = tf.reshape(y_true, [-1])\n y_pred = tf.reshape(y_pred, [-1])\n intersection = tf.reduce_sum(y_true * y_pred)\n score = (intersection + 1.) / (tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) - intersection + 1.)\n return 1 - score\n\n# combine bce loss and iou loss\ndef iou_bce_loss(y_true, y_pred):\n return 0.5 * keras.losses.binary_crossentropy(y_true, y_pred) + 0.5 * iou_loss(y_true, y_pred)\n\n# mean iou as a metric\ndef mean_iou(y_true, y_pred):\n y_pred = tf.round(y_pred)\n intersect = tf.reduce_sum(y_true * y_pred, axis=[1, 2, 3])\n union = tf.reduce_sum(y_true, axis=[1, 2, 3]) + tf.reduce_sum(y_pred, axis=[1, 2, 3])\n smooth = tf.ones(tf.shape(intersect))\n return tf.reduce_mean((intersect + smooth) / (union - intersect + smooth))\n\n# create network and compiler\nmodel = create_network(input_size=256, channels=32, n_blocks=2, depth=4)\nmodel.compile(optimizer='adam',\n loss=iou_bce_loss,\n metrics=['accuracy', mean_iou])\n\n# cosine learning rate annealing\ndef cosine_annealing(x):\n lr = 0.001\n epochs = 25\n return lr*(np.cos(np.pi*x/epochs)+1.)/2\nlearning_rate = tf.keras.callbacks.LearningRateScheduler(cosine_annealing)\n\n# create train and validation generators\nfolder = '../input/stage_1_train_images'\ntrain_gen = generator(folder, train_filenames, pneumonia_locations, batch_size=32, image_size=256, shuffle=True, augment=True, predict=False)\nvalid_gen = generator(folder, valid_filenames, pneumonia_locations, batch_size=32, image_size=256, shuffle=False, predict=False)\n\nhistory = model.fit_generator(train_gen, validation_data=valid_gen, callbacks=[learning_rate], epochs=25, workers=4, use_multiprocessing=True)",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,4))\nplt.subplot(131)\nplt.plot(history.epoch, history.history[\"loss\"], label=\"Train loss\")\nplt.plot(history.epoch, history.history[\"val_loss\"], label=\"Valid loss\")\nplt.legend()\nplt.subplot(132)\nplt.plot(history.epoch, history.history[\"acc\"], label=\"Train accuracy\")\nplt.plot(history.epoch, history.history[\"val_acc\"], label=\"Valid accuracy\")\nplt.legend()\nplt.subplot(133)\nplt.plot(history.epoch, history.history[\"mean_iou\"], label=\"Train iou\")\nplt.plot(history.epoch, history.history[\"val_mean_iou\"], label=\"Valid iou\")\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"for imgs, msks in valid_gen:\n # predict batch of images\n preds = model.predict(imgs)\n # create figure\n f, axarr = plt.subplots(4, 8, figsize=(20,15))\n axarr = axarr.ravel()\n axidx = 0\n # loop through batch\n for img, msk, pred in zip(imgs, msks, preds):\n # plot image\n axarr[axidx].imshow(img[:, :, 0])\n # threshold true mask\n comp = msk[:, :, 0] > 0.5\n # apply connected components\n comp = measure.label(comp)\n # apply bounding boxes\n predictionString = ''\n for region in measure.regionprops(comp):\n # retrieve x, y, height and width\n y, x, y2, x2 = region.bbox\n height = y2 - y\n width = x2 - x\n axarr[axidx].add_patch(patches.Rectangle((x,y),width,height,linewidth=2,edgecolor='b',facecolor='none'))\n # threshold predicted mask\n comp = pred[:, :, 0] > 0.5\n # apply connected components\n comp = measure.label(comp)\n # apply bounding boxes\n predictionString = ''\n for region in measure.regionprops(comp):\n # retrieve x, y, height and width\n y, x, y2, x2 = region.bbox\n height = y2 - y\n width = x2 - x\n axarr[axidx].add_patch(patches.Rectangle((x,y),width,height,linewidth=2,edgecolor='r',facecolor='none'))\n axidx += 1\n plt.show()\n # only plot one batch\n break",
"_____no_output_____"
]
],
[
[
"# Predict test images",
"_____no_output_____"
]
],
[
[
"# load and shuffle filenames\nfolder = '../input/stage_1_test_images'\ntest_filenames = os.listdir(folder)\nprint('n test samples:', len(test_filenames))\n\n# create test generator with predict flag set to True\ntest_gen = generator(folder, test_filenames, None, batch_size=25, image_size=256, shuffle=False, predict=True)\n\n# create submission dictionary\nsubmission_dict = {}\n# loop through testset\nfor imgs, filenames in test_gen:\n # predict batch of images\n preds = model.predict(imgs)\n # loop through batch\n for pred, filename in zip(preds, filenames):\n # resize predicted mask\n pred = resize(pred, (1024, 1024), mode='reflect')\n # threshold predicted mask\n comp = pred[:, :, 0] > 0.5\n # apply connected components\n comp = measure.label(comp)\n # apply bounding boxes\n predictionString = ''\n for region in measure.regionprops(comp):\n # retrieve x, y, height and width\n y, x, y2, x2 = region.bbox\n height = y2 - y\n width = x2 - x\n # proxy for confidence score\n conf = np.mean(pred[y:y+height, x:x+width])\n # add to predictionString\n predictionString += str(conf) + ' ' + str(x) + ' ' + str(y) + ' ' + str(width) + ' ' + str(height) + ' '\n # add filename and predictionString to dictionary\n filename = filename.split('.')[0]\n submission_dict[filename] = predictionString\n # stop if we've got them all\n if len(submission_dict) >= len(test_filenames):\n break\n\n# save dictionary as csv file\nsub = pd.DataFrame.from_dict(submission_dict,orient='index')\nsub.index.names = ['patientId']\nsub.columns = ['PredictionString']\nsub.to_csv('submission.csv')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9e38ffd0c5cd265ebdb22ab866337e4e42f2e2 | 552,050 | ipynb | Jupyter Notebook | endure/visualization/09_timing.ipynb | Ephoris/robust-lsm-tuning | 385ad9550f34d92a20e4d0142249bcee5763e864 | [
"MIT"
] | null | null | null | endure/visualization/09_timing.ipynb | Ephoris/robust-lsm-tuning | 385ad9550f34d92a20e4d0142249bcee5763e864 | [
"MIT"
] | null | null | null | endure/visualization/09_timing.ipynb | Ephoris/robust-lsm-tuning | 385ad9550f34d92a20e4d0142249bcee5763e864 | [
"MIT"
] | null | null | null | 245.246557 | 84,584 | 0.864014 | [
[
[
"import sys\nimport os\nimport logging\nimport ast\nimport re\n\nimport numpy as np\nimport pandas as pd\npd.set_option('max_columns', None)\npd.set_option('max_rows', 100)\npd.set_option('display.max_colwidth', None)\nfrom scipy import stats\nfrom scipy.special import rel_entr\n\nfrom tqdm.notebook import tqdm\n\nimport seaborn as sns\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as mtick\nimport matplotlib.patches as mpatches",
"_____no_output_____"
],
[
"def set_style(fsz=14):\n sns.set_context(\"paper\") \n plt.rc('font', family='Linux Libertine')\n sns.set_style(\"ticks\", {\"xtick.major.size\": 3, \"ytick.major.size\": 3})\n \n plt.rc('font', size=fsz, family='Linux Libertine')\n plt.rc('axes', titlesize=fsz)\n plt.rc('axes', labelsize=fsz)\n plt.rc('xtick', labelsize=fsz)\n plt.rc('ytick', labelsize=fsz)\n plt.rc('legend', fontsize=fsz)\n plt.rc('figure', titlesize=fsz)\n plt.rcParams[\"mathtext.fontset\"] = \"dejavuserif\"\n plt.rcParams['hatch.linewidth'] = 2 # previous pdf hatch linewidth\n \ndef set_size(fig, width=6, height=4):\n fig.set_size_inches(width, height)\n plt.tight_layout()\n \nVIZ_DIR = \"/scratchNVM0/ndhuynh/data/figs/\"\ndef save_fig(fig, filename):\n fig.savefig(VIZ_DIR + filename, dpi=300, format='pdf', bbox_inches='tight')",
"_____no_output_____"
],
[
"sys.path.insert(1, '/scratchNVM0/ndhuynh/robust-lsm-tuning/endure')\nfrom data.data_provider import DataProvider\nfrom data.data_exporter import DataExporter\nfrom robust.workload_uncertainty import WorkloadUncertainty\nfrom lsm_tree.cost_function import CostFunction\nfrom lsm_tree.nominal import NominalWorkloadTuning\n\nconfig = DataProvider.read_config('/scratchNVM0/ndhuynh/robust-lsm-tuning/endure/config/robust-lsm-trees.yaml')\nde = DataExporter(config)",
"_____no_output_____"
]
],
[
[
"# Generating Workloads for Exp02 Exp03",
"_____no_output_____"
]
],
[
[
"def apply_design(d, cf, z0, z1, q, w):\n cf.z0, cf.z1, cf.q, cf.w = z0, z1, q, w\n cost = cf.calculate_cost(d['M_filt'] / cf.N, np.ceil(d['T']), d['is_leveling_policy'])\n\n return cost\n\ndef get_cumulative_data(df, wl_idx, robust_rho, samples):\n df_sample = df[(df.workload_idx == wl_idx) & (df.robust_rho == robust_rho)]\n sessions = []\n sessions.append(df_sample[df_sample.z0_s + df_sample.z1_s > 0.8].sample(samples, replace=True, random_state=0))\n sessions.append(df_sample[df_sample.q_s > 0.8].sample(samples, replace=True, random_state=0))\n sessions.append(df_sample[df_sample.z0_s > 0.8].sample(samples, replace=True, random_state=0))\n sessions.append(df_sample[df_sample.z1_s > 0.8].sample(samples, replace=True, random_state=0))\n sessions.append(df_sample[df_sample.w_s > 0.8].sample(samples, replace=True, random_state=0))\n sessions.append(df_sample[df_sample.rho_hat < 0.2].sample(samples, replace=True, random_state=0))\n \n data = pd.concat(sessions, ignore_index=True)\n data[['robust_cost_cum', 'nominal_cost_cum']] = data[['robust_cost', 'nominal_cost']].cumsum()\n \n w_hat = data[['z0_s', 'z1_s', 'q_s', 'w_s']].mean().values\n w = ast.literal_eval(data.iloc[0].w)\n w0 = [w['z0'], w['z1'], w['q'], w['w']]\n distance = np.sum(rel_entr(w_hat, w0))\n\n cfg = config['lsm_tree_config'].copy()\n cfg['N'] = df.iloc[0].N\n cfg['M'] = df.iloc[0].M\n cf = CostFunction(**cfg, z0=w_hat[0], z1=w_hat[1], q=w_hat[2], w=w_hat[3])\n\n designer = NominalWorkloadTuning(cf)\n design_nom_s = designer.get_nominal_design()\n \n data['nominal_perfect_cost'] = data.apply(lambda row: apply_design(design_nom_s, cf, row['z0_s'], row['z1_s'], row['q_s'], row['w_s']), axis=1)\n data['nominal_perfect_cost_cum'] = data['nominal_perfect_cost'].cumsum()\n data['nominal_perfect_T'] = design_nom_s['T']\n data['nominal_perfect_m_filt'] = design_nom_s['M_filt']\n data['nominal_perfect_is_leveling_policy'] = design_nom_s['is_leveling_policy']\n \n design_robust = {}\n design_robust['T'] = np.ceil(data['robust_T'].iloc[0])\n design_robust['M_filt'] = data['robust_m_filt'].iloc[0]\n design_robust['is_leveling_policy'] = data['robust_is_leveling_policy'].iloc[0]\n data['robust_cost'] = data.apply(lambda row: apply_design(design_robust, cf, row['z0_s'], row['z1_s'], row['q_s'], row['w_s']), axis=1)\n \n design_nominal = {}\n design_nominal['T'] = np.ceil(data['nominal_T'].iloc[0])\n design_nominal['M_filt'] = data['nominal_m_filt'].iloc[0]\n design_nominal['is_leveling_policy'] = data['nominal_is_leveling_policy'].iloc[0]\n data['nominal_cost'] = data.apply(lambda row: apply_design(design_nominal, cf, row['z0_s'], row['z1_s'], row['q_s'], row['w_s']), axis=1)\n \n return data, sessions",
"_____no_output_____"
],
[
"dp = DataProvider(config)\ndf = dp.read_csv('exp_01_1e8.csv')\ndf['robust_cost'] = np.around(df['robust_cost'], 4)\ndf['nominal_cost'] = np.around(df['nominal_cost'], 4)\ndf['robust_rho'] = np.around(df['robust_rho'], 2)\ndf['z_score'] = df.groupby(['workload_idx', 'robust_rho']).rho_hat.transform(lambda x: np.abs(stats.zscore(x)))\ndf[['z0_s', 'z1_s', 'q_s', 'w_s']] = df['w_hat'].str.extract(r\"{'z0': ([0-9.]+), 'z1': ([0-9.]+), 'q': ([0-9.]+), 'w': ([0-9.]+)}\")\ndf[['z0_s', 'z1_s', 'q_s', 'w_s']] = df[['z0_s', 'z1_s', 'q_s', 'w_s']].astype(np.float64)\ndf.describe()",
"_____no_output_____"
],
[
"sample_wl = []\n\nsamples = 5\n# wl_rho = [(4, 2), (5, 1), (15, 0.5), (11, 0.25), (17, 0.5), (0, 0.25)]\nwl_rho = [(17, 0.5)]\nfor widx, rho in wl_rho:\n data, _ = get_cumulative_data(df, widx, rho, samples)\n sample_wl.append(data)\n\nsample_wl = pd.concat(sample_wl, ignore_index=True)\nde.export_csv_file(sample_wl, 'experiment_03_wls.csv')\nsample_wl\n",
"_____no_output_____"
]
],
[
[
"# Plotting Data",
"_____no_output_____"
]
],
[
[
"def process_ios(df):\n PAGESIZE = 4096\n nominal_compaction_ios = np.sum((df['nominal_compact_read'] + df['nominal_compact_write']) / PAGESIZE)\n robust_compaction_ios = np.sum((df['robust_compact_read'] + df['robust_compact_write']) / PAGESIZE)\n workload_weight = df['w_s'] / df['w_s'].sum()\n df['nominal_write_io'] = (workload_weight * nominal_compaction_ios) + ((df['nominal_bytes_written'] + df['nominal_flush_written']) / PAGESIZE)\n df['robust_write_io'] = (workload_weight * robust_compaction_ios) + ((df['robust_bytes_written'] + df['robust_flush_written']) / PAGESIZE)\n df['nominal_io'] = df['nominal_blocks_read'] + df['nominal_write_io']\n df['robust_io'] = df['robust_blocks_read'] + df['robust_write_io'] \n \n return df\n\n\ndef plot_cost_sessions(df, samples, num_operations, wl_idx, robust_rho, readonly=False):\n df = df[(df.workload_idx == wl_idx) & (df.robust_rho == robust_rho)].copy().reset_index()\n num_sessions = df.shape[0] / samples\n \n means = []\n for idx in range(0, data.shape[0], samples):\n means.append(df[(df.workload_idx == wl_idx) & (df.robust_rho == robust_rho)].iloc[idx:idx+samples][['z0_s', 'z1_s', 'q_s', 'w_s']].mean())\n \n w_hat = df[['z0_s', 'z1_s', 'q_s', 'w_s']].mean().values\n w = ast.literal_eval(df.iloc[0].w)\n w0 = [w['z0'], w['z1'], w['q'], w['w']]\n distance = np.sum(rel_entr(w_hat, w0))\n\n nom_policy = 'Leveling' if df.iloc[0].nominal_is_leveling_policy else 'Tiering'\n robust_policy = 'Leveling' if df.iloc[0].robust_is_leveling_policy else 'Tiering'\n \n df = process_ios(df)\n y1, y2 = df['nominal_io'].values, df['robust_io'].values\n \n fig, axes = plt.subplots(ncols=1, nrows=2)\n system_ax, model_ax = axes\n \n for ax in axes:\n ax.set_xlim(left=-0.25, right=(num_sessions * samples) - 0.75)\n for bounds in np.arange(samples - 1, num_sessions * samples - 1, samples):\n ax.axvline(x=bounds + 0.5, linestyle='--', linewidth=4, color='tab:gray', alpha=0.5)\n \n # Systems Graph\n system_ax.text(0.565, 0.9, 'System', fontsize=16, transform=system_ax.transAxes)\n system_ax.set_xticklabels([])\n system_ax.set_xticks([])\n system_ax.plot(df.index.values, y1 / num_operations, marker='*', linewidth=1, color='black', markersize=8, label=f'Nominal\\nh: {(df.iloc[0].nominal_m_filt / df.iloc[0].N):.1f}, T: {df.iloc[0].nominal_T:.1f}, $\\pi$: {nom_policy}')\n system_ax.plot(df.index.values, y2 / num_operations, marker='o', linewidth=1, color='tab:green', markersize=8, label=f'Robust\\nh: {(df.iloc[0].robust_m_filt / df.iloc[0].N):.1f}, T: {df.iloc[0].robust_T:.1f}, $\\pi$: {robust_policy}')\n system_ax.legend(loc='upper left', bbox_to_anchor=(0, 1), frameon=True, framealpha=1, edgecolor='black', fancybox=False, ncol=2)\n \n \n # Model Graph\n model_ax.text(0.565, 0.9, 'Model', fontsize=16, transform=model_ax.transAxes)\n y1, y2 = df.nominal_cost, df.robust_cost\n model_ax.plot(df.index.values, y1, marker='*', linewidth=1, color='black', markersize=8, label=f'Nominal\\nh: {(df.iloc[0].nominal_m_filt / df.iloc[0].N):.1f}, T: {df.iloc[0].nominal_T:.1f}, $\\pi$: {nom_policy}')\n model_ax.plot(df.index.values, y2, marker='o', linewidth=1, color='tab:green', markersize=8, label=f'Robust\\nh: {(df.iloc[0].robust_m_filt / df.iloc[0].N):.1f}, T: {df.iloc[0].robust_T:.1f}, $\\pi$: {robust_policy}')\n \n model_ax.set_xticks([(samples)/2 - 0.25] + [x + 0.5 for x in np.arange((samples/2) + samples - 1, num_sessions * samples - 1, samples)])\n model_ax.text(0.025, -0.22, f'({(means[0].z0_s * 100):.0f}%, {(means[0].z1_s * 100):.0f}%, {(means[0].q_s * 100):.0f}%, {(means[0].w_s 
* 100):.0f}%)', alpha=1, transform=model_ax.transAxes)\n model_ax.text(0.200, -0.22, f'({(means[1].z0_s * 100):.0f}%, {(means[1].z1_s * 100):.0f}%, {(means[1].q_s * 100):.0f}%, {(means[1].w_s * 100):.0f}%)', alpha=1, transform=model_ax.transAxes)\n model_ax.text(0.360, -0.22, f'({(means[2].z0_s * 100):.0f}%, {(means[2].z1_s * 100):.0f}%, {(means[2].q_s * 100):.0f}%, {(means[2].w_s * 100):.0f}%)', alpha=1, transform=model_ax.transAxes)\n model_ax.text(0.535, -0.22, f'({(means[3].z0_s * 100):.0f}%, {(means[3].z1_s * 100):.0f}%, {(means[3].q_s * 100):.0f}%, {(means[3].w_s * 100):.0f}%)', alpha=1, transform=model_ax.transAxes)\n model_ax.text(0.700, -0.22, f'({(means[4].z0_s * 100):.0f}%, {(means[4].z1_s * 100):.0f}%, {(means[4].q_s * 100):.0f}%, {(means[4].w_s * 100):.0f}%)', alpha=1, transform=model_ax.transAxes)\n\n if readonly:\n model_ax.set_xticklabels(['1. Expected', '2. Reads', '3. Range', '4. Empty Reads', '5. Non-Empty Reads'])\n else:\n model_ax.set_xticklabels(['1. Reads', '2. Range', '3. Empty Reads', '4. Non-Empty Reads', '5. Writes', '6. Expected'])\n model_ax.text(0.870, -0.22, f'({(means[5].z0_s * 100):.0f}%, {(means[5].z1_s * 100):.0f}%, {(means[5].q_s * 100):.0f}%, {(means[5].w_s * 100):.0f}%)', alpha=1, transform=model_ax.transAxes)\n \n fig.supylabel('Average I/Os per Query')\n \n system_ax.text(0.84, 0.90, '$w_{' + str(wl_idx) + '}:\\ $' + f'({(w0[0] * 100):.0f}%, {(w0[1] * 100):.0f}%, {(w0[2] * 100):.0f}%, {(w0[3] * 100):.0f}%)', transform=system_ax.transAxes)\n system_ax.text(0.84, 0.78, '$\\hat{w}:\\ $' + f'({(w_hat[0] * 100):.0f}%, {(w_hat[1] * 100):.0f}%, {(w_hat[2] * 100):.0f}%, {(w_hat[3] * 100):.0f}%)', transform=system_ax.transAxes)\n \n model_ax.text(0.885, 0.75, '$I_{KL}(\\hat{w}, w) :$' + f'{distance:.2f}', transform=model_ax.transAxes)\n model_ax.text(0.945, 0.88, r'$\\rho :$' + f'{robust_rho:.2f}', transform=model_ax.transAxes)\n\n \n return fig, axes",
"_____no_output_____"
]
],
[
[
"# Plotting Single Experiment",
"_____no_output_____"
]
],
[
[
"wl_idx, robust_rho, samples, num_operations = (17, 0.5, 5, 100000)\ndata = dp.read_csv('exp_03_1e8.csv')",
"_____no_output_____"
],
[
"set_style()\nfig, axes = plot_cost_sessions(data, samples, num_operations, wl_idx, robust_rho, readonly=False)\naxes[0].set_ylim([0, 15])\n# axes[1].set_ylim([0, 8])\nset_size(fig, width=2*7, height=2*2.75)",
"_____no_output_____"
],
[
"wl_idx, robust_rho, samples, num_operations = (4, 2, 5, 10000)\n\ndata = dp.read_csv('viz_data/exp07_10mill.csv')\ndata['nominal_ms'] = data['nominal_z0_ms'] + data['nominal_z1_ms'] + data['nominal_q_ms'] + data['nominal_w_ms']\ndata['robust_ms'] = data['robust_z0_ms'] + data['robust_z1_ms'] + data['robust_q_ms'] + data['robust_w_ms']\nprint(data.groupby(['workload_idx', 'robust_rho']).size())\n\nd = data[(data.workload_idx == wl_idx) & (data.robust_rho == robust_rho)].reset_index()\nset_style()\nfig, axes = plot_cost_sessions(d, samples, num_operations, wl_idx, robust_rho, readonly=False)\n# axes[0].set_ylim([0, 8])\n# axes[1].set_ylim([0, 8])\nset_size(fig, width=2*7, height=2*2.75)\n\nfig, ax = plt.subplots()\nax.plot(d.index.values, d['nominal_ms'] / 100, '-*', markersize=10, color='black')\nax.plot(d.index.values, d['robust_ms'] / 100, '-o', markersize=10, color='green')\n\nbounds = [5, 10, 15, 20, 25]\nax.set_xlim(left=-0.25, right=29.25)\nfor bound in bounds:\n ax.axvline(x=bound - 0.5, linestyle='--', linewidth=4, color='tab:gray', alpha=0.5)\n\nax.set_xticks([4.5 / 2, (4.5 + 9.5) / 2, (9.5 + 14.5) / 2, (14.5 + 19.5) / 2, (19.5 + 24.5) / 2, (24.5 + 29.5) / 2])\nax.set_xticklabels(['1. Reads', '2. Range', '3. Empty Reads', '4. Non-Empty Reads', '5. Writes', '6. Expected'])\n\nax.set_ylabel('Latency (seconds)')\nax.set_xlabel('Sessions')\nset_size(fig, width=14, height=2.75)",
"workload_idx robust_rho\n4 2.00 30\n5 1.00 30\n11 0.25 30\n15 0.50 30\ndtype: int64\n"
],
[
"data = dp.read_csv('viz_data/exp08_10mill.csv')\ndata['nominal_ms'] = data['nominal_z0_ms'] + data['nominal_z1_ms'] + data['nominal_q_ms'] + data['nominal_w_ms']\ndata['robust_ms'] = data['robust_z0_ms'] + data['robust_z1_ms'] + data['robust_q_ms'] + data['robust_w_ms']\ndata.groupby(['workload_idx', 'robust_rho']).size()\nd = data[(data.workload_idx == wl_idx) & (data.robust_rho == robust_rho)].reset_index()\nfig, axes = plot_cost_sessions(d, samples, num_operations, wl_idx, robust_rho, readonly=False)\n# axes[0].set_ylim([0, 7])\n# axes[1].set_ylim([0, 8])\nset_size(fig, width=2*7, height=2*2.75)\n\nfig, ax = plt.subplots()\nax.plot(d.index.values, d['nominal_ms'] / 100, '-*', markersize=10, color='black')\nax.plot(d.index.values, d['robust_ms'] / 100, '-o', markersize=10, color='green')\n\nbounds = [5, 10, 15, 20, 25]\nax.set_xlim(left=-0.25, right=29.25)\nfor bound in bounds:\n ax.axvline(x=bound - 0.5, linestyle='--', linewidth=4, color='tab:gray', alpha=0.5)\n\nax.set_xticks([4.5 / 2, (4.5 + 9.5) / 2, (9.5 + 14.5) / 2, (14.5 + 19.5) / 2, (19.5 + 24.5) / 2, (24.5 + 29.5) / 2])\nax.set_xticklabels(['1. Reads', '2. Range', '3. Empty Reads', '4. Non-Empty Reads', '5. Writes', '6. Expected'])\n\nax.set_ylabel('Latency (seconds)')\nax.set_xlabel('Sessions')\nset_size(fig, width=14, height=2.75)",
"_____no_output_____"
],
[
"def apply_design(df, cf, z0, z1, q, w, mode='nominal'):\n cf.z0, cf.z1, cf.q, cf.w = z0, z1, q, w\n cost = cf.calculate_cost(df[f'{mode}_m_filt'] / cf.N, np.ceil(df[f'{mode}_T']), df[f'{mode}_is_leveling_policy'])\n return cost\n\ndef plot_cost_sessions(df, samples, num_operations, wl_idx, robust_rho, readonly=False):\n num_sessions = df.shape[0] / samples\n \n means = []\n for idx in range(0, data.shape[0], samples):\n means.append(df.iloc[idx:idx+samples][['z0_s', 'z1_s', 'q_s', 'w_s']].mean())\n \n w_hat = df[['z0_s', 'z1_s', 'q_s', 'w_s']].mean().values\n w0 = [df['z0'].iloc[0], df['z1'].iloc[0], df['q'].iloc[0], df['w'].iloc[0]]\n distance = np.sum(rel_entr(w_hat, w0))\n\n nom_policy = 'Leveling' if df.iloc[0].nominal_is_leveling_policy else 'Tiering'\n robust_policy = 'Leveling' if df.iloc[0].robust_is_leveling_policy else 'Tiering'\n \n df = process_ios(df)\n y1, y2 = df['nominal_io'].values, df['robust_io'].values\n \n fig, axes = plt.subplots(ncols=1, nrows=2)\n system_ax, model_ax = axes\n \n for ax in axes:\n ax.set_xlim(left=-0.25, right=(num_sessions * samples) - 0.75)\n for bounds in np.arange(samples - 1, num_sessions * samples - 1, samples):\n ax.axvline(x=bounds + 0.5, linestyle='--', linewidth=4, color='tab:gray', alpha=0.5)\n \n # Apply design for models\n cfg = config['lsm_tree_config'].copy()\n cfg['N'] = df.iloc[0].N\n cfg['M'] = df.iloc[0].M\n cf = CostFunction(**cfg, z0=w_hat[0], z1=w_hat[1], q=w_hat[2], w=w_hat[3])\n df['nominal_cost'] = df.apply(lambda row: apply_design(row, cf, row['z0_s'], row['z1_s'], row['q_s'], row['w_s'], 'nominal'), axis=1)\n df['robust_cost'] = df.apply(lambda row: apply_design(row, cf, row['z0_s'], row['z1_s'], row['q_s'], row['w_s'], 'robust'), axis=1)\n \n # Systems Graph\n system_ax.text(0.565, 0.9, 'System', fontsize=16, transform=system_ax.transAxes)\n system_ax.set_xticklabels([])\n system_ax.set_xticks([])\n system_ax.plot(df.index.values, y1 / num_operations, marker='*', linewidth=1, color='black', markersize=8, label=f'Nominal\\nh: {(df.iloc[0].nominal_m_filt / df.iloc[0].N):.1f}, T: {df.iloc[0].nominal_T:.1f}, $\\pi$: {nom_policy}')\n system_ax.plot(df.index.values, y2 / num_operations, marker='o', linewidth=1, color='tab:green', markersize=8, label=f'Robust\\nh: {(df.iloc[0].robust_m_filt / df.iloc[0].N):.1f}, T: {df.iloc[0].robust_T:.1f}, $\\pi$: {robust_policy}')\n system_ax.legend(loc='upper left', bbox_to_anchor=(0, 1), frameon=True, framealpha=1, edgecolor='black', fancybox=False, ncol=2)\n \n # Model Graph\n model_ax.text(0.565, 0.9, 'Model', fontsize=16, transform=model_ax.transAxes)\n y1, y2 = df.nominal_cost, df.robust_cost\n model_ax.plot(df.index.values, y1, marker='*', linewidth=1, color='black', markersize=8, label=f'Nominal\\nh: {(df.iloc[0].nominal_m_filt / df.iloc[0].N):.1f}, T: {df.iloc[0].nominal_T:.1f}, $\\pi$: {nom_policy}')\n model_ax.plot(df.index.values, y2, marker='o', linewidth=1, color='tab:green', markersize=8, label=f'Robust\\nh: {(df.iloc[0].robust_m_filt / df.iloc[0].N):.1f}, T: {df.iloc[0].robust_T:.1f}, $\\pi$: {robust_policy}')\n \n model_ax.set_xticks([(samples)/2 - 0.25] + [x + 0.5 for x in np.arange((samples/2) + samples - 1, num_sessions * samples - 1, samples)])\n model_ax.text(0.025, -0.22, f'({(means[0].z0_s * 100):.0f}%, {(means[0].z1_s * 100):.0f}%, {(means[0].q_s * 100):.0f}%, {(means[0].w_s * 100):.0f}%)', alpha=1, transform=model_ax.transAxes)\n model_ax.text(0.200, -0.22, f'({(means[1].z0_s * 100):.0f}%, {(means[1].z1_s * 100):.0f}%, {(means[1].q_s * 100):.0f}%, {(means[1].w_s * 
100):.0f}%)', alpha=1, transform=model_ax.transAxes)\n model_ax.text(0.360, -0.22, f'({(means[2].z0_s * 100):.0f}%, {(means[2].z1_s * 100):.0f}%, {(means[2].q_s * 100):.0f}%, {(means[2].w_s * 100):.0f}%)', alpha=1, transform=model_ax.transAxes)\n model_ax.text(0.535, -0.22, f'({(means[3].z0_s * 100):.0f}%, {(means[3].z1_s * 100):.0f}%, {(means[3].q_s * 100):.0f}%, {(means[3].w_s * 100):.0f}%)', alpha=1, transform=model_ax.transAxes)\n model_ax.text(0.700, -0.22, f'({(means[4].z0_s * 100):.0f}%, {(means[4].z1_s * 100):.0f}%, {(means[4].q_s * 100):.0f}%, {(means[4].w_s * 100):.0f}%)', alpha=1, transform=model_ax.transAxes)\n\n if readonly:\n model_ax.set_xticklabels(['1. Expected', '2. Reads', '3. Range', '4. Empty Reads', '5. Non-Empty Reads'])\n else:\n model_ax.set_xticklabels(['1. Reads', '2. Range', '3. Empty Reads', '4. Non-Empty Reads', '5. Writes', '6. Expected'])\n model_ax.text(0.870, -0.22, f'({(means[5].z0_s * 100):.0f}%, {(means[5].z1_s * 100):.0f}%, {(means[5].q_s * 100):.0f}%, {(means[5].w_s * 100):.0f}%)', alpha=1, transform=model_ax.transAxes)\n \n fig.supylabel('Average I/Os per Query')\n \n system_ax.text(0.84, 0.90, '$w_{' + str(wl_idx) + '}:\\ $' + f'({(w0[0] * 100):.0f}%, {(w0[1] * 100):.0f}%, {(w0[2] * 100):.0f}%, {(w0[3] * 100):.0f}%)', transform=system_ax.transAxes)\n system_ax.text(0.84, 0.78, '$\\hat{w}:\\ $' + f'({(w_hat[0] * 100):.0f}%, {(w_hat[1] * 100):.0f}%, {(w_hat[2] * 100):.0f}%, {(w_hat[3] * 100):.0f}%)', transform=system_ax.transAxes)\n \n model_ax.text(0.885, 0.75, '$I_{KL}(\\hat{w}, w) :$' + f'{distance:.2f}', transform=model_ax.transAxes)\n model_ax.text(0.945, 0.88, r'$\\rho :$' + f'{robust_rho:.2f}', transform=model_ax.transAxes)\n\n \n return fig, axes",
"_____no_output_____"
],
[
"data = dp.read_csv('experiment_05_checkpoint.csv')\ndata['nominal_ms'] = data['nominal_z0_ms'] + data['nominal_z1_ms'] + data['nominal_q_ms'] + data['nominal_w_ms']\ndata['robust_ms'] = data['robust_z0_ms'] + data['robust_z1_ms'] + data['robust_q_ms'] + data['robust_w_ms']\ndata.groupby(['workload_idx', 'N']).size()",
"_____no_output_____"
],
[
"N, wl_idx = 1e7, 11\nd = data[(data.workload_idx == wl_idx) & (data.N == N)].reset_index()\nfig, axes = plot_cost_sessions(d, samples, num_operations, wl_idx, robust_rho, readonly=False)\n# axes[0].set_ylim([0, 160])\n# axes[1].set_ylim([0, 160])\nset_size(fig, width=2*7, height=2*2.75)\n\nfig, ax = plt.subplots()\nax.plot(d.index.values, d['nominal_ms'] / 100, '-*', markersize=10, color='black')\nax.plot(d.index.values, d['robust_ms'] / 100, '-o', markersize=10, color='green')\n\nbounds = [5, 10, 15, 20, 25]\nax.set_xlim(left=-0.25, right=29.25)\nfor bound in bounds:\n ax.axvline(x=bound - 0.5, linestyle='--', linewidth=4, color='tab:gray', alpha=0.5)\n\nax.set_xticks([4.5 / 2, (4.5 + 9.5) / 2, (9.5 + 14.5) / 2, (14.5 + 19.5) / 2, (19.5 + 24.5) / 2, (24.5 + 29.5) / 2])\nax.set_xticklabels(['1. Reads', '2. Range', '3. Empty Reads', '4. Non-Empty Reads', '5. Writes', '6. Expected'])\n\nax.set_ylabel('Latency (seconds)')\nax.set_xlabel('Sessions')\nset_size(fig, width=14, height=2.75)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9e4399709fc9eae70e709a82923e6fa9a06666 | 198,299 | ipynb | Jupyter Notebook | Exploratory_Data_Analysis_Car.ipynb | ahadimuhsin/Data_Analysis_Cognitive_Class_Course | 02c21212fa25e8aab9d84331200442c473316d52 | [
"MIT"
] | null | null | null | Exploratory_Data_Analysis_Car.ipynb | ahadimuhsin/Data_Analysis_Cognitive_Class_Course | 02c21212fa25e8aab9d84331200442c473316d52 | [
"MIT"
] | null | null | null | Exploratory_Data_Analysis_Car.ipynb | ahadimuhsin/Data_Analysis_Cognitive_Class_Course | 02c21212fa25e8aab9d84331200442c473316d52 | [
"MIT"
] | null | null | null | 43.30618 | 13,884 | 0.547834 | [
[
[
"<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n <a href=\"http://cocl.us/DA0101EN_NotbookLink_Top\">\n <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/Images/TopAd.png\" width=\"750\" align=\"center\">\n </a>\n</div>\n",
"_____no_output_____"
],
[
"<a href=\"https://www.bigdatauniversity.com\"><img src = \"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/Images/CCLog.png\" width = 300, align = \"center\"></a>\n\n<h1 align=center><font size = 5>Data Analysis with Python</font></h1>",
"_____no_output_____"
],
[
"Exploratory Data Analysis",
"_____no_output_____"
],
[
"<h3>Welcome!</h3>\nIn this section, we will explore several methods to see if certain characteristics or features can be used to predict car price. ",
"_____no_output_____"
],
[
"<h2>Table of content</h2>\n\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n<ol>\n <li><a href=\"#import_data\">Import Data from Module</a></li>\n <li><a href=\"#pattern_visualization\">Analyzing Individual Feature Patterns using Visualization</a></li>\n <li><a href=\"#discriptive_statistics\">Descriptive Statistical Analysis</a></li>\n <li><a href=\"#basic_grouping\">Basics of Grouping</a></li>\n <li><a href=\"#correlation_causation\">Correlation and Causation</a></li>\n <li><a href=\"#anova\">ANOVA</a></li>\n</ol>\n \nEstimated Time Needed: <strong>30 min</strong>\n</div>\n \n<hr>",
"_____no_output_____"
],
[
"<h3>What are the main characteristics which have the most impact on the car price?</h3>",
"_____no_output_____"
],
[
"<h2 id=\"import_data\">1. Import Data from Module 2</h2>",
"_____no_output_____"
],
[
"<h4>Setup</h4>",
"_____no_output_____"
],
[
" Import libraries ",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
]
],
[
[
" load data and store in dataframe df:",
"_____no_output_____"
],
[
"This dataset was hosted on IBM Cloud object click <a href=\"https://cocl.us/cognitive_class_DA0101EN_objectstorage\">HERE</a> for free storage",
"_____no_output_____"
]
],
[
[
"path='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/automobileEDA.csv'\ndf = pd.read_csv(path)\ndf.head()",
"_____no_output_____"
]
],
[
[
"<h2 id=\"pattern_visualization\">2. Analyzing Individual Feature Patterns using Visualization</h2>",
"_____no_output_____"
],
[
"To install seaborn we use the pip which is the python package manager.",
"_____no_output_____"
]
],
[
[
"%%capture\n! pip install seaborn",
"_____no_output_____"
]
],
[
[
" Import visualization packages \"Matplotlib\" and \"Seaborn\", don't forget about \"%matplotlib inline\" to plot in a Jupyter notebook.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline ",
"_____no_output_____"
]
],
[
[
"<h4>How to choose the right visualization method?</h4>\n<p>When visualizing individual variables, it is important to first understand what type of variable you are dealing with. This will help us find the right visualization method for that variable.</p>\n",
"_____no_output_____"
]
],
[
[
"# list the data types for each column\nprint(df.dtypes)",
"symboling int64\nnormalized-losses int64\nmake object\naspiration object\nnum-of-doors object\nbody-style object\ndrive-wheels object\nengine-location object\nwheel-base float64\nlength float64\nwidth float64\nheight float64\ncurb-weight int64\nengine-type object\nnum-of-cylinders object\nengine-size int64\nfuel-system object\nbore float64\nstroke float64\ncompression-ratio float64\nhorsepower float64\npeak-rpm float64\ncity-mpg int64\nhighway-mpg int64\nprice float64\ncity-L/100km float64\nhorsepower-binned object\ndiesel int64\ngas int64\ndtype: object\n"
]
],
[
[
"<div class=\"alert alert-danger alertdanger\" style=\"margin-top: 20px\">\n<h3>Question #1:</h3>\n\n<b>What is the data type of the column \"peak-rpm\"? </b>\n</div>",
"_____no_output_____"
],
[
"Double-click <b>here</b> for the solution.\n\n<!-- The answer is below:\n\nfloat64\n\n-->",
"_____no_output_____"
],
[
"for example, we can calculate the correlation between variables of type \"int64\" or \"float64\" using the method \"corr\":",
"_____no_output_____"
]
],
[
[
"df.corr()",
"_____no_output_____"
]
],
[
[
"The diagonal elements are always one; we will study correlation more precisely Pearson correlation in-depth at the end of the notebook.",
"_____no_output_____"
],
[
"<div class=\"alert alert-danger alertdanger\" style=\"margin-top: 20px\">\n<h1> Question #2: </h1>\n\n<p>Find the correlation between the following columns: bore, stroke,compression-ratio , and horsepower.</p>\n<p>Hint: if you would like to select those columns use the following syntax: df[['bore','stroke' ,'compression-ratio','horsepower']]</p>\n</div>",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute \n",
"_____no_output_____"
]
],
[
[
"Double-click <b>here</b> for the solution.\n\n<!-- The answer is below:\n\ndf[['bore', 'stroke', 'compression-ratio', 'horsepower']].corr() \n\n-->",
"_____no_output_____"
],
[
"<h2>Continuous numerical variables:</h2> \n\n<p>Continuous numerical variables are variables that may contain any value within some range. Continuous numerical variables can have the type \"int64\" or \"float64\". A great way to visualize these variables is by using scatterplots with fitted lines.</p>\n\n<p>In order to start understanding the (linear) relationship between an individual variable and the price. We can do this by using \"regplot\", which plots the scatterplot plus the fitted regression line for the data.</p>",
"_____no_output_____"
],
[
" Let's see several examples of different linear relationships:",
"_____no_output_____"
],
[
"<h4>Positive linear relationship</h4>",
"_____no_output_____"
],
[
"Let's find the scatterplot of \"engine-size\" and \"price\" ",
"_____no_output_____"
]
],
[
[
"# Engine size as potential predictor variable of price\nsns.regplot(x=\"engine-size\", y=\"price\", data=df)\nplt.ylim(0,)",
"_____no_output_____"
]
],
[
[
"<p>As the engine-size goes up, the price goes up: this indicates a positive direct correlation between these two variables. Engine size seems like a pretty good predictor of price since the regression line is almost a perfect diagonal line.</p>",
"_____no_output_____"
],
[
" We can examine the correlation between 'engine-size' and 'price' and see it's approximately 0.87",
"_____no_output_____"
]
],
[
[
"df[[\"engine-size\", \"price\"]].corr()",
"_____no_output_____"
]
],
[
[
"Highway mpg is a potential predictor variable of price ",
"_____no_output_____"
]
],
[
[
"sns.regplot(x=\"highway-mpg\", y=\"price\", data=df)",
"_____no_output_____"
]
],
[
[
"<p>As the highway-mpg goes up, the price goes down: this indicates an inverse/negative relationship between these two variables. Highway mpg could potentially be a predictor of price.</p>",
"_____no_output_____"
],
[
"We can examine the correlation between 'highway-mpg' and 'price' and see it's approximately -0.704",
"_____no_output_____"
]
],
[
[
"df[['highway-mpg', 'price']].corr()",
"_____no_output_____"
]
],
[
[
"<h3>Weak Linear Relationship</h3>",
"_____no_output_____"
],
[
"Let's see if \"Peak-rpm\" as a predictor variable of \"price\".",
"_____no_output_____"
]
],
[
[
"sns.regplot(x=\"peak-rpm\", y=\"price\", data=df)",
"_____no_output_____"
]
],
[
[
"<p>Peak rpm does not seem like a good predictor of the price at all since the regression line is close to horizontal. Also, the data points are very scattered and far from the fitted line, showing lots of variability. Therefore it's it is not a reliable variable.</p>",
"_____no_output_____"
],
[
"We can examine the correlation between 'peak-rpm' and 'price' and see it's approximately -0.101616 ",
"_____no_output_____"
]
],
[
[
"df[['peak-rpm','price']].corr()",
"_____no_output_____"
]
],
[
[
" <div class=\"alert alert-danger alertdanger\" style=\"margin-top: 20px\">\n<h1> Question 3 a): </h1>\n\n<p>Find the correlation between x=\"stroke\", y=\"price\".</p>\n<p>Hint: if you would like to select those columns use the following syntax: df[[\"stroke\",\"price\"]] </p>\n</div>",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute\ndf[[\"stroke\",\"price\"]].corr()",
"_____no_output_____"
]
],
[
[
"Double-click <b>here</b> for the solution.\n\n<!-- The answer is below:\n\n#The correlation is 0.0823, the non-diagonal elements of the table.\n#code:\ndf[[\"stroke\",\"price\"]].corr() \n\n-->",
"_____no_output_____"
],
[
"<div class=\"alert alert-danger alertdanger\" style=\"margin-top: 20px\">\n<h1>Question 3 b):</h1>\n\n<p>Given the correlation results between \"price\" and \"stroke\" do you expect a linear relationship?</p> \n<p>Verify your results using the function \"regplot()\".</p>\n</div>",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute \n",
"_____no_output_____"
]
],
[
[
"Double-click <b>here</b> for the solution.\n\n<!-- The answer is below:\n\n#There is a weak correlation between the variable 'stroke' and 'price.' as such regression will not work well. We #can see this use \"regplot\" to demonstrate this.\n\n#Code: \nsns.regplot(x=\"stroke\", y=\"price\", data=df)\n\n-->",
"_____no_output_____"
],
[
"<h3>Categorical variables</h3>\n\n<p>These are variables that describe a 'characteristic' of a data unit, and are selected from a small group of categories. The categorical variables can have the type \"object\" or \"int64\". A good way to visualize categorical variables is by using boxplots.</p>",
"_____no_output_____"
],
[
"Let's look at the relationship between \"body-style\" and \"price\".",
"_____no_output_____"
]
],
[
[
"sns.boxplot(x=\"body-style\", y=\"price\", data=df)",
"_____no_output_____"
]
],
[
[
"<p>We see that the distributions of price between the different body-style categories have a significant overlap, and so body-style would not be a good predictor of price. Let's examine engine \"engine-location\" and \"price\":</p>",
"_____no_output_____"
]
],
[
[
"sns.boxplot(x=\"engine-location\", y=\"price\", data=df)",
"_____no_output_____"
]
],
[
[
"<p>Here we see that the distribution of price between these two engine-location categories, front and rear, are distinct enough to take engine-location as a potential good predictor of price.</p>",
"_____no_output_____"
],
[
" Let's examine \"drive-wheels\" and \"price\".",
"_____no_output_____"
]
],
[
[
"# drive-wheels\nsns.boxplot(x=\"drive-wheels\", y=\"price\", data=df)",
"_____no_output_____"
]
],
[
[
"<p>Here we see that the distribution of price between the different drive-wheels categories differs; as such drive-wheels could potentially be a predictor of price.</p>",
"_____no_output_____"
],
[
"<h2 id=\"discriptive_statistics\">3. Descriptive Statistical Analysis</h2>",
"_____no_output_____"
],
[
"<p>Let's first take a look at the variables by utilizing a description method.</p>\n\n<p>The <b>describe</b> function automatically computes basic statistics for all continuous variables. Any NaN values are automatically skipped in these statistics.</p>\n\nThis will show:\n<ul>\n <li>the count of that variable</li>\n <li>the mean</li>\n <li>the standard deviation (std)</li> \n <li>the minimum value</li>\n <li>the IQR (Interquartile Range: 25%, 50% and 75%)</li>\n <li>the maximum value</li>\n<ul>\n",
"_____no_output_____"
],
[
" We can apply the method \"describe\" as follows:",
"_____no_output_____"
]
],
[
[
"df.describe()",
"_____no_output_____"
]
],
[
[
" The default setting of \"describe\" skips variables of type object. We can apply the method \"describe\" on the variables of type 'object' as follows:",
"_____no_output_____"
]
],
[
[
"df.describe(include=['object'])",
"_____no_output_____"
]
],
[
[
"<h3>Value Counts</h3>",
"_____no_output_____"
],
[
"<p>Value-counts is a good way of understanding how many units of each characteristic/variable we have. We can apply the \"value_counts\" method on the column 'drive-wheels'. Don’t forget the method \"value_counts\" only works on Pandas series, not Pandas Dataframes. As a result, we only include one bracket \"df['drive-wheels']\" not two brackets \"df[['drive-wheels']]\".</p>",
"_____no_output_____"
]
],
[
[
"df['drive-wheels'].value_counts()",
"_____no_output_____"
]
],
[
[
"We can convert the series to a Dataframe as follows :",
"_____no_output_____"
]
],
[
[
"df['drive-wheels'].value_counts().to_frame()",
"_____no_output_____"
]
],
[
[
"Let's repeat the above steps but save the results to the dataframe \"drive_wheels_counts\" and rename the column 'drive-wheels' to 'value_counts'.",
"_____no_output_____"
]
],
[
[
"drive_wheels_counts = df['drive-wheels'].value_counts().to_frame()\ndrive_wheels_counts.rename(columns={'drive-wheels': 'value_counts'}, inplace=True)\ndrive_wheels_counts",
"_____no_output_____"
]
],
[
[
" Now let's rename the index to 'drive-wheels':",
"_____no_output_____"
]
],
[
[
"drive_wheels_counts.index.name = 'drive-wheels'\ndrive_wheels_counts",
"_____no_output_____"
]
],
[
[
"We can repeat the above process for the variable 'engine-location'.",
"_____no_output_____"
]
],
[
[
"# engine-location as variable\nengine_loc_counts = df['engine-location'].value_counts().to_frame()\nengine_loc_counts.rename(columns={'engine-location': 'value_counts'}, inplace=True)\nengine_loc_counts.index.name = 'engine-location'\nengine_loc_counts.head(10)",
"_____no_output_____"
]
],
[
[
"<p>Examining the value counts of the engine location would not be a good predictor variable for the price. This is because we only have three cars with a rear engine and 198 with an engine in the front, this result is skewed. Thus, we are not able to draw any conclusions about the engine location.</p>",
"_____no_output_____"
],
[
"<h2 id=\"basic_grouping\">4. Basics of Grouping</h2>",
"_____no_output_____"
],
[
"<p>The \"groupby\" method groups data by different categories. The data is grouped based on one or several variables and analysis is performed on the individual groups.</p>\n\n<p>For example, let's group by the variable \"drive-wheels\". We see that there are 3 different categories of drive wheels.</p>",
"_____no_output_____"
]
],
[
[
"df['drive-wheels'].unique()",
"_____no_output_____"
]
],
[
[
"<p>If we want to know, on average, which type of drive wheel is most valuable, we can group \"drive-wheels\" and then average them.</p>\n\n<p>We can select the columns 'drive-wheels', 'body-style' and 'price', then assign it to the variable \"df_group_one\".</p>",
"_____no_output_____"
]
],
[
[
"df_group_one = df[['drive-wheels','body-style','price']]",
"_____no_output_____"
]
],
[
[
"We can then calculate the average price for each of the different categories of data.",
"_____no_output_____"
]
],
[
[
"# grouping results\ndf_group_one = df_group_one.groupby(['drive-wheels'],as_index=False).mean()\ndf_group_one",
"_____no_output_____"
]
],
[
[
"<p>From our data, it seems rear-wheel drive vehicles are, on average, the most expensive, while 4-wheel and front-wheel are approximately the same in price.</p>\n\n<p>You can also group with multiple variables. For example, let's group by both 'drive-wheels' and 'body-style'. This groups the dataframe by the unique combinations 'drive-wheels' and 'body-style'. We can store the results in the variable 'grouped_test1'.</p>",
"_____no_output_____"
]
],
[
[
"# grouping results\ndf_gptest = df[['drive-wheels','body-style','price']]\ngrouped_test1 = df_gptest.groupby(['drive-wheels','body-style'],as_index=False).mean()\ngrouped_test1",
"_____no_output_____"
]
],
[
[
"<p>This grouped data is much easier to visualize when it is made into a pivot table. A pivot table is like an Excel spreadsheet, with one variable along the column and another along the row. We can convert the dataframe to a pivot table using the method \"pivot \" to create a pivot table from the groups.</p>\n\n<p>In this case, we will leave the drive-wheel variable as the rows of the table, and pivot body-style to become the columns of the table:</p>",
"_____no_output_____"
]
],
[
[
"grouped_pivot = grouped_test1.pivot(index='drive-wheels',columns='body-style')\ngrouped_pivot",
"_____no_output_____"
]
],
[
[
"<p>Often, we won't have data for some of the pivot cells. We can fill these missing cells with the value 0, but any other value could potentially be used as well. It should be mentioned that missing data is quite a complex subject and is an entire course on its own.</p>",
"_____no_output_____"
]
],
[
[
"grouped_pivot = grouped_pivot.fillna(0) #fill missing values with 0\ngrouped_pivot",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-danger alertdanger\" style=\"margin-top: 20px\">\n<h1>Question 4:</h1>\n\n<p>Use the \"groupby\" function to find the average \"price\" of each car based on \"body-style\" ? </p>\n</div>",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute \n",
"_____no_output_____"
]
],
[
[
"Double-click <b>here</b> for the solution.\n\n<!-- The answer is below:\n\n# grouping results\ndf_gptest2 = df[['body-style','price']]\ngrouped_test_bodystyle = df_gptest2.groupby(['body-style'],as_index= False).mean()\ngrouped_test_bodystyle\n\n-->",
"_____no_output_____"
],
[
"If you did not import \"pyplot\" let's do it again. ",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n%matplotlib inline ",
"_____no_output_____"
]
],
[
[
"<h4>Variables: Drive Wheels and Body Style vs Price</h4>",
"_____no_output_____"
],
[
"Let's use a heat map to visualize the relationship between Body Style vs Price.",
"_____no_output_____"
]
],
[
[
"#use the grouped results\nplt.pcolor(grouped_pivot, cmap='RdBu')\nplt.colorbar()\nplt.show()",
"_____no_output_____"
]
],
[
[
"<p>The heatmap plots the target variable (price) proportional to colour with respect to the variables 'drive-wheel' and 'body-style' in the vertical and horizontal axis respectively. This allows us to visualize how the price is related to 'drive-wheel' and 'body-style'.</p>\n\n<p>The default labels convey no useful information to us. Let's change that:</p>",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots()\nim = ax.pcolor(grouped_pivot, cmap='RdBu')\n\n#label names\nrow_labels = grouped_pivot.columns.levels[1]\ncol_labels = grouped_pivot.index\n\n#move ticks and labels to the center\nax.set_xticks(np.arange(grouped_pivot.shape[1]) + 0.5, minor=False)\nax.set_yticks(np.arange(grouped_pivot.shape[0]) + 0.5, minor=False)\n\n#insert labels\nax.set_xticklabels(row_labels, minor=False)\nax.set_yticklabels(col_labels, minor=False)\n\n#rotate label if too long\nplt.xticks(rotation=90)\n\nfig.colorbar(im)\nplt.show()",
"_____no_output_____"
]
],
[
[
"<p>Visualization is very important in data science, and Python visualization packages provide great freedom. We will go more in-depth in a separate Python Visualizations course.</p>\n\n<p>The main question we want to answer in this module, is \"What are the main characteristics which have the most impact on the car price?\".</p>\n\n<p>To get a better measure of the important characteristics, we look at the correlation of these variables with the car price, in other words: how is the car price dependent on this variable?</p>",
"_____no_output_____"
],
[
"<h2 id=\"correlation_causation\">5. Correlation and Causation</h2>",
"_____no_output_____"
],
[
"<p><b>Correlation</b>: a measure of the extent of interdependence between variables.</p>\n\n<p><b>Causation</b>: the relationship between cause and effect between two variables.</p>\n\n<p>It is important to know the difference between these two and that correlation does not imply causation. Determining correlation is much simpler the determining causation as causation may require independent experimentation.</p>",
"_____no_output_____"
],
[
"<p3>Pearson Correlation</p>\n<p>The Pearson Correlation measures the linear dependence between two variables X and Y.</p>\n<p>The resulting coefficient is a value between -1 and 1 inclusive, where:</p>\n<ul>\n <li><b>1</b>: Total positive linear correlation.</li>\n <li><b>0</b>: No linear correlation, the two variables most likely do not affect each other.</li>\n <li><b>-1</b>: Total negative linear correlation.</li>\n</ul>",
"_____no_output_____"
],
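[
"As a quick, illustrative check of the values above (toy numbers, not from our dataset): a perfectly increasing pair of variables gives +1, a perfectly decreasing pair gives -1, and unrelated values give something close to 0.\n\n```\nimport numpy as np\nnp.corrcoef([1, 2, 3, 4], [10, 20, 30, 40])[0, 1]  # 1.0\nnp.corrcoef([1, 2, 3, 4], [40, 30, 20, 10])[0, 1]  # -1.0\n```",
"_____no_output_____"
],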
[
"<p>Pearson Correlation is the default method of the function \"corr\". Like before we can calculate the Pearson Correlation of the of the 'int64' or 'float64' variables.</p>",
"_____no_output_____"
]
],
[
[
"df.corr()",
"_____no_output_____"
]
],
[
[
" sometimes we would like to know the significant of the correlation estimate. ",
"_____no_output_____"
],
[
"<b>P-value</b>: \n<p>What is this P-value? The P-value is the probability value that the correlation between these two variables is statistically significant. Normally, we choose a significance level of 0.05, which means that we are 95% confident that the correlation between the variables is significant.</p>\n\nBy convention, when the\n<ul>\n <li>p-value is $<$ 0.001: we say there is strong evidence that the correlation is significant.</li>\n <li>the p-value is $<$ 0.05: there is moderate evidence that the correlation is significant.</li>\n <li>the p-value is $<$ 0.1: there is weak evidence that the correlation is significant.</li>\n <li>the p-value is $>$ 0.1: there is no evidence that the correlation is significant.</li>\n</ul>",
"_____no_output_____"
],
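[
"As a small illustration of the convention above (a hypothetical helper, not part of the original lab), the thresholds can be written directly in code:\n\n```\ndef evidence_level(p_value):\n    if p_value < 0.001:\n        return \"strong evidence\"\n    elif p_value < 0.05:\n        return \"moderate evidence\"\n    elif p_value < 0.1:\n        return \"weak evidence\"\n    return \"no evidence\"\n\nevidence_level(0.04)  # 'moderate evidence'\n```",
"_____no_output_____"
],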
[
" We can obtain this information using \"stats\" module in the \"scipy\" library.",
"_____no_output_____"
]
],
[
[
"from scipy import stats",
"_____no_output_____"
]
],
[
[
"<h3>Wheel-base vs Price</h3>",
"_____no_output_____"
],
[
"Let's calculate the Pearson Correlation Coefficient and P-value of 'wheel-base' and 'price'. ",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['wheel-base'], df['price'])\nprint(\"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P =\", p_value) ",
"_____no_output_____"
]
],
[
[
"<h5>Conclusion:</h5>\n<p>Since the p-value is $<$ 0.001, the correlation between wheel-base and price is statistically significant, although the linear relationship isn't extremely strong (~0.585)</p>",
"_____no_output_____"
],
[
"<h3>Horsepower vs Price</h3>",
"_____no_output_____"
],
[
" Let's calculate the Pearson Correlation Coefficient and P-value of 'horsepower' and 'price'.",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['horsepower'], df['price'])\nprint(\"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P = \", p_value) ",
"_____no_output_____"
]
],
[
[
"<h5>Conclusion:</h5>\n\n<p>Since the p-value is $<$ 0.001, the correlation between horsepower and price is statistically significant, and the linear relationship is quite strong (~0.809, close to 1)</p>",
"_____no_output_____"
],
[
"<h3>Length vs Price</h3>\n\nLet's calculate the Pearson Correlation Coefficient and P-value of 'length' and 'price'.",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['length'], df['price'])\nprint(\"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P = \", p_value) ",
"_____no_output_____"
]
],
[
[
"<h5>Conclusion:</h5>\n<p>Since the p-value is $<$ 0.001, the correlation between length and price is statistically significant, and the linear relationship is moderately strong (~0.691).</p>",
"_____no_output_____"
],
[
"<h3>Width vs Price</h3>",
"_____no_output_____"
],
[
" Let's calculate the Pearson Correlation Coefficient and P-value of 'width' and 'price':",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['width'], df['price'])\nprint(\"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P =\", p_value ) ",
"_____no_output_____"
]
],
[
[
"##### Conclusion:\n\nSince the p-value is < 0.001, the correlation between width and price is statistically significant, and the linear relationship is quite strong (~0.751).",
"_____no_output_____"
],
[
"### Curb-weight vs Price",
"_____no_output_____"
],
[
" Let's calculate the Pearson Correlation Coefficient and P-value of 'curb-weight' and 'price':",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['curb-weight'], df['price'])\nprint( \"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P = \", p_value) ",
"_____no_output_____"
]
],
[
[
"<h5>Conclusion:</h5>\n<p>Since the p-value is $<$ 0.001, the correlation between curb-weight and price is statistically significant, and the linear relationship is quite strong (~0.834).</p>",
"_____no_output_____"
],
[
"<h3>Engine-size vs Price</h3>\n\nLet's calculate the Pearson Correlation Coefficient and P-value of 'engine-size' and 'price':",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['engine-size'], df['price'])\nprint(\"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P =\", p_value) ",
"_____no_output_____"
]
],
[
[
"<h5>Conclusion:</h5>\n\n<p>Since the p-value is $<$ 0.001, the correlation between engine-size and price is statistically significant, and the linear relationship is very strong (~0.872).</p>",
"_____no_output_____"
],
[
"<h3>Bore vs Price</h3>",
"_____no_output_____"
],
[
" Let's calculate the Pearson Correlation Coefficient and P-value of 'bore' and 'price':",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['bore'], df['price'])\nprint(\"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P = \", p_value ) ",
"_____no_output_____"
]
],
[
[
"<h5>Conclusion:</h5>\n<p>Since the p-value is $<$ 0.001, the correlation between bore and price is statistically significant, but the linear relationship is only moderate (~0.521).</p>",
"_____no_output_____"
],
[
" We can relate the process for each 'City-mpg' and 'Highway-mpg':",
"_____no_output_____"
],
[
"<h3>City-mpg vs Price</h3>",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['city-mpg'], df['price'])\nprint(\"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P = \", p_value) ",
"_____no_output_____"
]
],
[
[
"<h5>Conclusion:</h5>\n<p>Since the p-value is $<$ 0.001, the correlation between city-mpg and price is statistically significant, and the coefficient of ~ -0.687 shows that the relationship is negative and moderately strong.</p>",
"_____no_output_____"
],
[
"<h3>Highway-mpg vs Price</h3>",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['highway-mpg'], df['price'])\nprint( \"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P = \", p_value ) ",
"_____no_output_____"
]
],
[
[
"##### Conclusion:\nSince the p-value is < 0.001, the correlation between highway-mpg and price is statistically significant, and the coefficient of ~ -0.705 shows that the relationship is negative and moderately strong.",
"_____no_output_____"
],
[
"<h2 id=\"anova\">6. ANOVA</h2>",
"_____no_output_____"
],
[
"<h3>ANOVA: Analysis of Variance</h3>\n<p>The Analysis of Variance (ANOVA) is a statistical method used to test whether there are significant differences between the means of two or more groups. ANOVA returns two parameters:</p>\n\n<p><b>F-test score</b>: ANOVA assumes the means of all groups are the same, calculates how much the actual means deviate from the assumption, and reports it as the F-test score. A larger score means there is a larger difference between the means.</p>\n\n<p><b>P-value</b>: P-value tells how statistically significant is our calculated score value.</p>\n\n<p>If our price variable is strongly correlated with the variable we are analyzing, expect ANOVA to return a sizeable F-test score and a small p-value.</p>",
"_____no_output_____"
],
[
"<h3>Drive Wheels</h3>",
"_____no_output_____"
],
[
"<p>Since ANOVA analyzes the difference between different groups of the same variable, the groupby function will come in handy. Because the ANOVA algorithm averages the data automatically, we do not need to take the average before hand.</p>\n\n<p>Let's see if different types 'drive-wheels' impact 'price', we group the data.</p>",
"_____no_output_____"
],
[
" Let's see if different types 'drive-wheels' impact 'price', we group the data.",
"_____no_output_____"
]
],
[
[
"grouped_test2=df_gptest[['drive-wheels', 'price']].groupby(['drive-wheels'])\ngrouped_test2.head(2)",
"_____no_output_____"
],
[
"df_gptest",
"_____no_output_____"
]
],
[
[
" We can obtain the values of the method group using the method \"get_group\". ",
"_____no_output_____"
]
],
[
[
"grouped_test2.get_group('4wd')['price']",
"_____no_output_____"
]
],
[
[
"we can use the function 'f_oneway' in the module 'stats' to obtain the <b>F-test score</b> and <b>P-value</b>.",
"_____no_output_____"
]
],
[
[
"# ANOVA\nfrom scipy import stats\nf_val, p_val = stats.f_oneway(grouped_test2.get_group('fwd')['price'], grouped_test2.get_group('rwd')['price'], grouped_test2.get_group('4wd')['price']) \n \nprint( \"ANOVA results: F=\", f_val, \", P =\", p_val) ",
"ANOVA results: F= 67.95406500780399 , P = 3.3945443577151245e-23\n"
]
],
[
[
"This is a great result, with a large F test score showing a strong correlation and a P value of almost 0 implying almost certain statistical significance. But does this mean all three tested groups are all this highly correlated? ",
"_____no_output_____"
],
[
"#### Separately: fwd and rwd",
"_____no_output_____"
]
],
[
[
"f_val, p_val = stats.f_oneway(grouped_test2.get_group('fwd')['price'], grouped_test2.get_group('rwd')['price']) \n \nprint( \"ANOVA results: F=\", f_val, \", P =\", p_val )",
"ANOVA results: F= 130.5533160959111 , P = 2.2355306355677845e-23\n"
]
],
[
[
" Let's examine the other groups ",
"_____no_output_____"
],
[
"#### 4wd and rwd",
"_____no_output_____"
]
],
[
[
"f_val, p_val = stats.f_oneway(grouped_test2.get_group('4wd')['price'], grouped_test2.get_group('rwd')['price']) \n \nprint( \"ANOVA results: F=\", f_val, \", P =\", p_val) ",
"ANOVA results: F= 8.580681368924756 , P = 0.004411492211225333\n"
]
],
[
[
"<h4>4wd and fwd</h4>",
"_____no_output_____"
]
],
[
[
"f_val, p_val = stats.f_oneway(grouped_test2.get_group('4wd')['price'], grouped_test2.get_group('fwd')['price']) \n \nprint(\"ANOVA results: F=\", f_val, \", P =\", p_val) ",
"ANOVA results: F= 0.665465750252303 , P = 0.41620116697845666\n"
]
],
[
[
"<h3>Conclusion: Important Variables</h3>",
"_____no_output_____"
],
[
"<p>We now have a better idea of what our data looks like and which variables are important to take into account when predicting the car price. We have narrowed it down to the following variables:</p>\n\nContinuous numerical variables:\n<ul>\n <li>Length</li>\n <li>Width</li>\n <li>Curb-weight</li>\n <li>Engine-size</li>\n <li>Horsepower</li>\n <li>City-mpg</li>\n <li>Highway-mpg</li>\n <li>Wheel-base</li>\n <li>Bore</li>\n</ul>\n \nCategorical variables:\n<ul>\n <li>Drive-wheels</li>\n</ul>\n\n<p>As we now move into building machine learning models to automate our analysis, feeding the model with variables that meaningfully affect our target variable will improve our model's prediction performance.</p>",
"_____no_output_____"
],
[
"<h1>Thank you for completing this notebook</h1>",
"_____no_output_____"
],
[
"<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n\n <p><a href=\"https://cocl.us/DA0101EN_NotbookLink_Top_bottom\"><img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/Images/BottomAd.png\" width=\"750\" align=\"center\"></a></p>\n</div>\n",
"_____no_output_____"
],
[
"<h3>About the Authors:</h3>\n\nThis notebook was written by <a href=\"https://www.linkedin.com/in/mahdi-noorian-58219234/\" target=\"_blank\">Mahdi Noorian PhD</a>, <a href=\"https://www.linkedin.com/in/joseph-s-50398b136/\" target=\"_blank\">Joseph Santarcangelo</a>, Bahare Talayian, Eric Xiao, Steven Dong, Parizad, Hima Vsudevan and <a href=\"https://www.linkedin.com/in/fiorellawever/\" target=\"_blank\">Fiorella Wenver</a> and <a href=\" https://www.linkedin.com/in/yi-leng-yao-84451275/ \" target=\"_blank\" >Yi Yao</a>.\n\n<p><a href=\"https://www.linkedin.com/in/joseph-s-50398b136/\" target=\"_blank\">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>",
"_____no_output_____"
],
[
"<hr>\n<p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href=\"https://cognitiveclass.ai/mit-license/\">MIT License</a>.</p>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ec9e44ee04da45cbc83c6e7da7eefd082ad542c8 | 27,010 | ipynb | Jupyter Notebook | tf.version.1/03.cnn/01.1.mnist.deep.with.estimator.ipynb | jinhwanhan/tensorflow.tutorials | f6a2c98a204174a76d75f7a6665936347079db35 | [
"Apache-2.0"
] | 57 | 2018-09-12T16:48:15.000Z | 2021-02-19T10:51:04.000Z | tf.version.1/03.cnn/01.1.mnist.deep.with.estimator.ipynb | jinhwanhan/tensorflow.tutorials | f6a2c98a204174a76d75f7a6665936347079db35 | [
"Apache-2.0"
] | null | null | null | tf.version.1/03.cnn/01.1.mnist.deep.with.estimator.ipynb | jinhwanhan/tensorflow.tutorials | f6a2c98a204174a76d75f7a6665936347079db35 | [
"Apache-2.0"
] | 15 | 2018-10-10T07:27:42.000Z | 2020-02-02T09:08:32.000Z | 41.36294 | 672 | 0.60174 | [
[
[
"# A Guide to TF Layers: Building a Convolutional Neural Network\n\n* [MNIST tutorials](https://www.tensorflow.org/tutorials/layers)\n<img src=\"https://user-images.githubusercontent.com/11681225/46912292-54460680-cfac-11e8-89a3-d8d1a4ec13ae.png\" width=\"20%\">\n\n* training dataset: 60000\n* test dataset: 10000\n* one example: gray scale image with $28 \\times 28$ size\n* [`cnn_mnist.py`](https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/examples/tutorials/layers/cnn_mnist.py) 참고",
"_____no_output_____"
]
],
[
[
"\"\"\"Convolutional Neural Network Estimator for MNIST, built with tf.layers.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport numpy as np\nimport tensorflow as tf\n\ntf.logging.set_verbosity(tf.logging.INFO)\nos.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\"",
"/home/lab4all/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n"
]
],
[
[
"## Convolutional neural networks\n\n* Convolutional layers\n * $\\textrm{ReLU}({\\bf x} * {\\bf w} + {\\bf b})$\n * $*$: convolution operator\n* Pooling layers\n * down sampling: `max-pooling`, `average-pooling`\n* Dense (fully connected) layers\n * $\\textrm{ReLU}({\\bf w} {\\bf x} + {\\bf b})$",
"_____no_output_____"
],
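[
"For intuition, 2x2 max pooling with stride 2 keeps only the largest value in each 2x2 window, halving the height and width. A tiny, hand-checkable sketch (illustrative only, not part of the original tutorial):\n\n```\nimport numpy as np\nx = np.array([[1, 3, 2, 0],\n              [4, 6, 5, 1],\n              [7, 2, 9, 8],\n              [0, 1, 3, 4]])\n# split into 2x2 blocks and take the max of each -> [[6, 5], [7, 9]]\npooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))\n```",
"_____no_output_____"
],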
[
"### Structure of LeNet 5\n\n<img width=\"90%\" alt=\"lenet5\" src=\"https://user-images.githubusercontent.com/11681225/46912300-a0914680-cfac-11e8-92fb-f1817267b4e4.png\">\n\n\n* Convolutional Layer #1: Applies 32 5x5 filters (extracting 5x5-pixel subregions), with ReLU activation function\n* Pooling Layer #1: Performs max pooling with a 2x2 filter and stride of 2 (which specifies that pooled regions do not overlap)\n* Convolutional Layer #2: Applies 64 5x5 filters, with ReLU activation function\n* Pooling Layer #2: Again, performs max pooling with a 2x2 filter and stride of 2\n* Dense Layer #1: 1,024 neurons, with dropout regularization rate of 0.4 (probability of 0.4 that any given element will be dropped during training)\n* Dense Layer #2 (Logits Layer): 10 neurons, one for each digit target class (0–9).",
"_____no_output_____"
],
[
"## CNN model with `tf.layers` APIs\n\n* [`tf.layers`](https://www.tensorflow.org/api_docs/python/tf/layers) 링크\n* [`tf.layers.conv2d()`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d)\n* `tf.layers.max_pooling2d()`\n* `tf.layers.dense()`",
"_____no_output_____"
]
],
[
[
"tf.set_random_seed(219)\n\ndef cnn_model_fn(features, labels, mode):\n \"\"\"Model function for CNN.\"\"\"\n # Input Layer\n # Reshape X to 4-D tensor: [batch_size, width, height, channels]\n # MNIST images are 28x28 pixels, and have one color channel\n input_layer = tf.reshape(features[\"x\"], [-1, 28, 28, 1])\n\n # Convolutional Layer #1\n # Computes 32 features using a 5x5 filter with ReLU activation.\n # Padding is added to preserve width and height.\n # Input Tensor Shape: [batch_size, 28, 28, 1]\n # Output Tensor Shape: [batch_size, 28, 28, 32]\n conv1 = tf.layers.conv2d(\n inputs=input_layer,\n filters=32,\n kernel_size=[5, 5],\n padding=\"same\",\n activation=tf.nn.relu)\n\n # Pooling Layer #1\n # First max pooling layer with a 2x2 filter and stride of 2\n # Input Tensor Shape: [batch_size, 28, 28, 32]\n # Output Tensor Shape: [batch_size, 14, 14, 32]\n pool1 = tf.layers.max_pooling2d(inputs=conv1,\n pool_size=[2, 2],\n strides=2)\n\n # Convolutional Layer #2\n # Computes 64 features using a 5x5 filter.\n # Padding is added to preserve width and height.\n # Input Tensor Shape: [batch_size, 14, 14, 32]\n # Output Tensor Shape: [batch_size, 14, 14, 64]\n conv2 = tf.layers.conv2d(\n inputs=pool1,\n filters=64,\n kernel_size=[5, 5],\n padding=\"same\",\n activation=tf.nn.relu)\n\n # Pooling Layer #2\n # Second max pooling layer with a 2x2 filter and stride of 2\n # Input Tensor Shape: [batch_size, 14, 14, 64]\n # Output Tensor Shape: [batch_size, 7, 7, 64]\n pool2 = tf.layers.max_pooling2d(inputs=conv2,\n pool_size=[2, 2], strides=2)\n\n # Flatten tensor into a batch of vectors\n # Input Tensor Shape: [batch_size, 7, 7, 64]\n # Output Tensor Shape: [batch_size, 7 * 7 * 64]\n pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])\n\n # Dense Layer\n # Densely connected layer with 1024 neurons\n # Input Tensor Shape: [batch_size, 7 * 7 * 64]\n # Output Tensor Shape: [batch_size, 1024]\n dense = tf.layers.dense(inputs=pool2_flat,\n units=1024,\n activation=tf.nn.relu)\n\n # Add dropout operation; 0.6 probability that element will be kept\n dropout = tf.layers.dropout(\n inputs=dense, rate=0.4,\n training=mode == tf.estimator.ModeKeys.TRAIN)\n\n # Logits layer\n # Input Tensor Shape: [batch_size, 1024]\n # Output Tensor Shape: [batch_size, 10]\n logits = tf.layers.dense(inputs=dropout, units=10)\n\n predictions = {\n # Generate predictions (for PREDICT and EVAL mode)\n \"classes\": tf.argmax(input=logits, axis=1),\n # Add `softmax_tensor` to the graph. It is used for PREDICT and by the\n # `logging_hook`.\n \"probabilities\": tf.nn.softmax(logits, name=\"softmax_tensor\")\n }\n if mode == tf.estimator.ModeKeys.PREDICT:\n return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)\n\n # Calculate Loss (for both TRAIN and EVAL modes)\n loss = tf.losses.sparse_softmax_cross_entropy(\n labels=labels,\n logits=logits)\n\n # Configure the Training Op (for TRAIN mode)\n if mode == tf.estimator.ModeKeys.TRAIN:\n optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)\n train_op = optimizer.minimize(\n loss=loss,\n global_step=tf.train.get_global_step())\n return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)\n\n # Add evaluation metrics (for EVAL mode)\n eval_metric_ops = {\n \"accuracy\": tf.metrics.accuracy(\n labels=labels, predictions=predictions[\"classes\"])}\n return tf.estimator.EstimatorSpec(\n mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)",
"_____no_output_____"
]
],
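[
[
"# Illustrative only (not part of the original tutorial): a quick static-shape check of the\n# convolution/pooling pattern used in cnn_model_fn, on a dummy batch of two images.\nimport numpy as np\n\ndummy = tf.constant(np.zeros((2, 28, 28, 1), dtype=np.float32))\nc1 = tf.layers.conv2d(dummy, filters=32, kernel_size=[5, 5], padding=\"same\", activation=tf.nn.relu)\np1 = tf.layers.max_pooling2d(c1, pool_size=[2, 2], strides=2)\nprint(c1.shape)  # (2, 28, 28, 32)\nprint(p1.shape)  # (2, 14, 14, 32)",
"_____no_output_____"
]
],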
[
[
"### input layer\n\n* 4-rank Tensor: `[batch_size, image_height, image_width, channels]`",
"_____no_output_____"
],
[
"### conv2d layer\n\n* [`tf.layers.conv2d()`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d) API\n* 필수 arguments\n```\ntf.layers.conv2d(\n inputs,\n filters,\n kernel_size)\n```\n* `inputs`: 4-rank Tensor: `[batch_size, image_height, image_width, channels]`\n* `filters`: output filter의 갯수\n* `kernel_size`: `[height, width]`\n* `padding`: `\"valid\"` or `\"same\"` (case-insensitive)\n * `valid`: [32 x 32] -> [28 x 28] (`kernel_size`: 5)\n * `same`: [32 x 32] -> [32 x 32] (`kernel_size`: 5)\n* `activation`\n * 기본값이 None\n * `tf.nn.relu`를 습관적으로 해주는게 좋음 (맨 마지막 레이어를 제외하고)",
"_____no_output_____"
],
[
"### maxpooling2d layer\n\n* [`tf.layers.maxpooling2d()`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d) API\n* 필수 arguments\n```\ntf.layers.maxpooling2d(\n inputs,\n pool_size,\n strides)\n```\n* `inputs`: 4-rank Tensor: `[batch_size, image_height, image_width, channels]`\n* `pool_size`: `[height, width]`\n* `strides`: `[height, width]`",
"_____no_output_____"
],
[
"### dense layer\n\n* [`tf.layers.dense()`](https://www.tensorflow.org/api_docs/python/tf/layers/dense) API\n* 필수 arguments\n```\ntf.layers.dense(\n inputs,\n units)\n```\n* `inputs`: 2-rank Tensor: `[batch_size, features]`\n* `units`: output node 갯수\n* `conv2d`나 `maxpooling2d` 뒤에 `dense`레이어를 쓰려면 `inputs` 텐서의 차원을 맞춰줘야 한다.",
"_____no_output_____"
],
[
"### dropout layer\n\n* [`tf.layers.dropout()`](https://www.tensorflow.org/api_docs/python/tf/layers/dropout) API\n* 필수 arguments\n```\ntf.layers.dropout(\n inputs\n rate=0.5,\n training=False)\n```\n* `inputs`: 2-rank Tensor: `[batch_size, features]`\n* `rate`: dropout rate\n* `training`: `training` mode인지 아닌지 구분해주는 `argument`",
"_____no_output_____"
],
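[
"A short illustrative note (not from the original tutorial): with `training=True` the layer zeroes a random `rate` fraction of the inputs and scales the kept values by `1 / (1 - rate)`; with `training=False` it is an identity op. This is why the model function passes `training=mode == tf.estimator.ModeKeys.TRAIN`.\n\n```\ndropped = tf.layers.dropout(tf.ones([1, 4]), rate=0.5, training=True)   # some entries zeroed, kept ones scaled to 2.0\npassed = tf.layers.dropout(tf.ones([1, 4]), rate=0.5, training=False)   # unchanged\n```",
"_____no_output_____"
],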
[
"### logits layer\n\n* `softmax`를 하기 전에 score 값(raw value)을 전달해 주는 `layer`\n* `activation`을 하지 않는게 중요\n* `units`갯수는 class 갯수와 동일",
"_____no_output_____"
],
[
"### generate predictions\n\n* `predicted class` for each example\n * [`tf.argmax`](https://www.tensorflow.org/api_docs/python/tf/argmax) 사용\n* `probabilities` for each possible target class for each example\n * [`tf.nn.softmax`](https://www.tensorflow.org/api_docs/python/tf/nn/softmax) 사용",
"_____no_output_____"
],
[
"### calculate loss\n\n* multiclass classification problems\n * `cross_entropy` loss: $- \\sum y \\log \\hat{y}$\n * `tf.losses.softmax_cross_entropy` API 사용",
"_____no_output_____"
],
[
"### Configure the Training Op\n\n* stochastic gradient descent\n* [Optimizers API](https://www.tensorflow.org/api_guides/python/train#Optimizers)\n```\noptimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)\ntrain_op = optimizer.minimize(\n loss=loss,\n global_step=tf.train.get_global_step())\n```",
"_____no_output_____"
],
[
"### Add evaluation metrics\n\n```\neval_metric_ops = {\n \"accuracy\": tf.metrics.accuracy(\n labels=labels, predictions=predictions[\"classes\"])}\n```",
"_____no_output_____"
],
[
"## Training",
"_____no_output_____"
]
],
[
[
"# Load training and eval data from tf.keras\n(train_data, train_labels), (test_data, test_labels) = \\\n tf.keras.datasets.mnist.load_data()\n\ntrain_data = train_data / 255.\ntrain_data = train_data.reshape(-1, 784)\ntrain_labels = np.asarray(train_labels, dtype=np.int32)\n\ntest_data = test_data / 255.\ntest_data = test_data.reshape(-1, 784)\ntest_labels = np.asarray(test_labels, dtype=np.int32)",
"Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 1s 0us/step\n"
],
[
"# Create the Estimator\nmnist_classifier = tf.estimator.Estimator(\n model_fn=cnn_model_fn, model_dir=\"graphs/01.1.mnist.deep.with.estimator\")",
"INFO:tensorflow:Using default config.\nINFO:tensorflow:Using config: {'_model_dir': 'graphs/01.1.mnist.deep.with.estimator', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f70a9bf2630>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}\n"
],
[
"# Set up logging for predictions\n# Log the values in the \"Softmax\" tensor with label \"probabilities\"\ntensors_to_log = {\"probabilities\": \"softmax_tensor\"}\nlogging_hook = tf.train.LoggingTensorHook(\n tensors=tensors_to_log, every_n_iter=50)",
"_____no_output_____"
],
[
"# Train the model\ntrain_input_fn = tf.estimator.inputs.numpy_input_fn(\n x={\"x\": train_data},\n y=train_labels,\n batch_size=32,\n num_epochs=None,\n shuffle=True)\n\nmnist_classifier.train(\n input_fn=train_input_fn,\n steps=100,\n hooks=[logging_hook])",
"INFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Create CheckpointSaverHook.\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Saving checkpoints for 0 into graphs/01.1.mnist.deep.with.estimator/model.ckpt.\nINFO:tensorflow:probabilities = [[0.09946432 0.10026968 0.1001089 0.1088763 0.09241221 0.10655838\n 0.09769787 0.09167217 0.09558441 0.10735577]\n [0.10443639 0.09028919 0.10932202 0.10557002 0.08750195 0.1129559\n 0.09283712 0.10187628 0.0911648 0.10404633]\n [0.08729037 0.08273202 0.11882669 0.10135613 0.09783328 0.10637151\n 0.08952356 0.11445157 0.09460464 0.10701024]\n [0.09924364 0.0940272 0.11895797 0.11179941 0.095858 0.09456187\n 0.0971493 0.09239972 0.0989261 0.09707679]\n [0.09159164 0.09455547 0.10412133 0.10369674 0.1053172 0.11078608\n 0.08988277 0.09075892 0.09818998 0.11109987]\n [0.10220621 0.10478947 0.11513246 0.09837164 0.09952052 0.09586844\n 0.09238272 0.08614958 0.1033251 0.10225387]\n [0.10435115 0.08502124 0.11360402 0.11627009 0.10421653 0.07908668\n 0.10017266 0.09677888 0.09226903 0.10822972]\n [0.09597888 0.10312434 0.11643726 0.11183302 0.09013544 0.09379151\n 0.09791595 0.08994972 0.101786 0.09904787]\n [0.10070019 0.09537416 0.10817888 0.10400781 0.08921426 0.10139185\n 0.09732563 0.09590693 0.10053729 0.10736299]\n [0.10097369 0.08559981 0.11647373 0.09710998 0.11096223 0.09809307\n 0.09966926 0.09172421 0.09826843 0.1011256 ]\n [0.1073066 0.08320223 0.11746835 0.09739469 0.09347554 0.10871083\n 0.08883403 0.09501409 0.09519454 0.1133991 ]\n [0.10177215 0.09340528 0.11782653 0.10845497 0.09039239 0.09269227\n 0.09705211 0.09940698 0.09431259 0.10468472]\n [0.09472759 0.09831738 0.11428047 0.097386 0.10131848 0.09048984\n 0.09880184 0.09486732 0.10088396 0.10892712]\n [0.09180987 0.0928457 0.1165526 0.10715163 0.08762308 0.10376589\n 0.09356718 0.10059953 0.10392332 0.10216121]\n [0.08955369 0.10699753 0.11197641 0.11143768 0.09347683 0.09268579\n 0.10307579 0.10078688 0.0940529 0.0959565 ]\n [0.09921379 0.10462101 0.11132558 0.09316675 0.1030828 0.08937568\n 0.08657999 0.10838888 0.09821129 0.10603424]\n [0.10244258 0.09564789 0.10778506 0.10975505 0.106875 0.09168886\n 0.0909107 0.09638418 0.09549439 0.1030163 ]\n [0.09609378 0.09586648 0.11672204 0.11088549 0.10076806 0.09421083\n 0.09578908 0.10100072 0.09233336 0.09633016]\n [0.09649631 0.08551136 0.1099404 0.09638389 0.10383239 0.09547519\n 0.09160494 0.09949061 0.09651641 0.12474849]\n [0.10346794 0.08839128 0.10054343 0.10188596 0.08987117 0.10823072\n 0.08881338 0.09907354 0.10427999 0.11544258]\n [0.08875466 0.0994645 0.10569302 0.1201 0.09718591 0.10174449\n 0.09322405 0.09604496 0.09420196 0.10358646]\n [0.10511659 0.08575994 0.12346674 0.10150193 0.10835783 0.0983972\n 0.09056212 0.09366892 0.07741928 0.11574944]\n [0.09267117 0.10042936 0.13061588 0.10450389 0.08431521 0.09645798\n 0.09670239 0.08722965 0.11208895 0.09498553]\n [0.10065306 0.09017275 0.11518088 0.11762844 0.09280955 0.0911362\n 0.08095194 0.09442486 0.10713613 0.10990619]\n [0.09394565 0.08985346 0.11212378 0.10498716 0.10118415 0.09886207\n 0.09555081 0.0967238 0.10504513 0.101724 ]\n [0.09407812 0.09125576 0.12090464 0.11018291 0.09829401 0.10944206\n 0.09311722 0.08155367 0.09620841 0.10496319]\n [0.09858521 0.09498974 0.12218548 0.10306892 0.10128729 0.08834141\n 0.0978213 0.08914431 0.09775096 0.10682539]\n [0.10073478 0.08376459 0.12327143 0.10664797 0.10707094 0.09321772\n 0.09472827 
0.08829118 0.09255231 0.1097208 ]\n [0.10004572 0.09400966 0.11266176 0.11509873 0.0894834 0.10189326\n 0.08128142 0.10419193 0.098621 0.10271313]\n [0.09904058 0.07948872 0.10569061 0.11389015 0.09850022 0.09548512\n 0.10020493 0.10098993 0.10095176 0.10575798]\n [0.09992946 0.08734132 0.119345 0.10290474 0.09553966 0.09340845\n 0.09587583 0.07593315 0.10571575 0.12400662]\n [0.09959949 0.10225482 0.10163707 0.09984526 0.0931049 0.098772\n 0.09042643 0.09925439 0.10476501 0.11034064]]\nINFO:tensorflow:loss = 2.2849764823913574, step = 0\nINFO:tensorflow:probabilities = [[0.00071469 0.88378742 0.00988045 0.02692234 0.00397064 0.00166688\n 0.0047149 0.02902962 0.02255524 0.01675783]\n [0.00038087 0.00027421 0.00086893 0.00033509 0.00116583 0.00012064\n 0.99595497 0.00000276 0.00056764 0.00032906]\n [0.00158104 0.81652309 0.00983968 0.04126663 0.0100924 0.00384724\n 0.00625785 0.0421972 0.05406089 0.01433398]\n [0.00805815 0.04615101 0.2070233 0.12036946 0.1463691 0.00635101\n 0.03216287 0.28872888 0.05917311 0.0856131 ]\n [0.03292783 0.00214164 0.00882735 0.00365761 0.00385329 0.01035041\n 0.89718207 0.00018744 0.03889862 0.00197372]\n [0.00369211 0.00114682 0.00082114 0.00070347 0.85879826 0.00698872\n 0.00363326 0.02184934 0.04167163 0.06069525]\n [0.00074472 0.00034036 0.00029795 0.00016735 0.94778924 0.00241002\n 0.00537536 0.00328154 0.00385379 0.03573966]\n [0.06563547 0.25533355 0.16744409 0.19956709 0.00030638 0.11465162\n 0.01217871 0.00040578 0.18415565 0.00032167]\n [0.0065039 0.00439372 0.02387143 0.00421213 0.39658283 0.01418975\n 0.04611011 0.17952537 0.05672292 0.26788785]\n [0.00083857 0.00008928 0.00053233 0.01290346 0.00003678 0.00104617\n 0.00004849 0.97874104 0.00556042 0.00020347]\n [0.0002572 0.00030263 0.00050968 0.98875468 0.0007323 0.00282276\n 0.00027204 0.0000486 0.00616105 0.00013905]\n [0.00843908 0.01339357 0.01870025 0.01568277 0.00546382 0.00897354\n 0.86334977 0.00039211 0.06378552 0.00181959]\n [0.00415721 0.00508289 0.00082623 0.01129149 0.55395592 0.06351087\n 0.00569308 0.09014925 0.18453747 0.08079559]\n [0.40882228 0.00801822 0.01784449 0.07028126 0.00205883 0.02969943\n 0.11220379 0.00952552 0.33883759 0.0027086 ]\n [0.00043377 0.91722197 0.00354862 0.0236583 0.00074378 0.00121518\n 0.01113109 0.00435488 0.03309061 0.00460181]\n [0.00305453 0.01051536 0.00300807 0.01836511 0.22797061 0.01421636\n 0.00250888 0.58334768 0.07460209 0.06241133]\n [0.00056492 0.88936171 0.00824152 0.01823041 0.00430625 0.00203973\n 0.00365738 0.00850724 0.060791 0.00429982]\n [0.22534668 0.03283728 0.00861523 0.11155694 0.0025666 0.30111675\n 0.02953338 0.01171515 0.27343296 0.00327904]\n [0.00668545 0.10249254 0.00150725 0.07979958 0.30751007 0.02417905\n 0.05560253 0.03257583 0.08904878 0.30059892]\n [0.00173548 0.00063624 0.00012571 0.00234124 0.02846273 0.01670726\n 0.00017526 0.92887437 0.00432885 0.01661287]\n [0.00440716 0.00007796 0.00655984 0.00107676 0.00208015 0.00076178\n 0.98125882 0.00002303 0.00339758 0.00035691]\n [0.00001492 0.00010315 0.00108464 0.99096122 0.00000067 0.00062428\n 0.00000255 0.00015712 0.00704372 0.00000773]\n [0.01281157 0.00032204 0.00178622 0.58077007 0.00475734 0.13699823\n 0.00004797 0.0074183 0.25326012 0.00182814]\n [0.00218267 0.86143692 0.00336339 0.0242682 0.006077 0.00635016\n 0.02429159 0.00554386 0.05695024 0.00953596]\n [0.0168082 0.03518557 0.13438443 0.03294483 0.15971354 0.03014234\n 0.11298753 0.06306248 0.06991889 0.34485218]\n [0.20752388 0.02466695 0.01196533 0.07749532 0.00347423 0.10208875\n 0.00285796 0.00320266 
0.56576024 0.00096467]\n [0.01678159 0.00021209 0.91326363 0.04123104 0.0011866 0.00027863\n 0.01028647 0.00002767 0.01663095 0.00010132]\n [0.01545273 0.00317935 0.0022821 0.02614023 0.00265069 0.02501683\n 0.80941004 0.00011744 0.11367351 0.00207707]\n [0.00024589 0.00009567 0.00062972 0.00138361 0.92667503 0.00201601\n 0.0024793 0.00177935 0.00127006 0.06342536]\n [0.0124856 0.00723813 0.02645864 0.01552289 0.01066328 0.0103987\n 0.88907818 0.00038545 0.02152863 0.00624049]\n [0.00255072 0.44883313 0.01903072 0.02553561 0.00144835 0.00210519\n 0.03414768 0.00079369 0.46511379 0.00044111]\n [0.00426599 0.00063226 0.06803767 0.02127275 0.14204719 0.00519288\n 0.72485 0.00263635 0.01203621 0.0190287 ]] (0.752 sec)\nINFO:tensorflow:Saving checkpoints for 100 into graphs/01.1.mnist.deep.with.estimator/model.ckpt.\nINFO:tensorflow:Loss for final step: 0.37314504384994507.\n"
],
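[
"# Illustrative extra step (not in the original notebook): peek at a few individual predictions.\n# The \"classes\" and \"probabilities\" keys come from the predictions dict defined in cnn_model_fn.\npredict_input_fn = tf.estimator.inputs.numpy_input_fn(\n    x={\"x\": test_data[:5]},\n    num_epochs=1,\n    shuffle=False)\n\nfor pred in mnist_classifier.predict(input_fn=predict_input_fn):\n    print(pred[\"classes\"], pred[\"probabilities\"].max())",
"_____no_output_____"
],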
[
"# Evaluate the model and print results\ntest_input_fn = tf.estimator.inputs.numpy_input_fn(\n x={\"x\": test_data},\n y=test_labels,\n num_epochs=1,\n shuffle=False)\n\ntest_results = mnist_classifier.evaluate(input_fn=test_input_fn)\nprint(test_results)",
"INFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Starting evaluation at 2018-10-04-08:48:02\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Restoring parameters from graphs/01.1.mnist.deep.with.estimator/model.ckpt-100\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Finished evaluation at 2018-10-04-08:48:03\nINFO:tensorflow:Saving dict for global step 100: accuracy = 0.9118, global_step = 100, loss = 0.29634535\nINFO:tensorflow:Saving 'checkpoint_path' summary for global step 100: graphs/01.1.mnist.deep.with.estimator/model.ckpt-100\n{'accuracy': 0.9118, 'loss': 0.29634535, 'global_step': 100}\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
ec9e459cd08db60b57015e65bfe84ae75c53ed5e | 565,792 | ipynb | Jupyter Notebook | FeatureExtractionModule/src/autoencoder_approach/Per task approach/NC/autoencoder_classifiers-NC-busy-vs-relaxed-no-TFv1.ipynb | Aekai/Wi-Mind | a02a2f4cd10fc362e6a17d3c67c2662c90b1a980 | [
"0BSD"
] | null | null | null | FeatureExtractionModule/src/autoencoder_approach/Per task approach/NC/autoencoder_classifiers-NC-busy-vs-relaxed-no-TFv1.ipynb | Aekai/Wi-Mind | a02a2f4cd10fc362e6a17d3c67c2662c90b1a980 | [
"0BSD"
] | null | null | null | FeatureExtractionModule/src/autoencoder_approach/Per task approach/NC/autoencoder_classifiers-NC-busy-vs-relaxed-no-TFv1.ipynb | Aekai/Wi-Mind | a02a2f4cd10fc362e6a17d3c67c2662c90b1a980 | [
"0BSD"
] | null | null | null | 210.409818 | 62,468 | 0.894141 | [
[
[
"# Classifiers - NC - busy vs relaxed - no TFv1\nExploring different classifiers with different autoencoders for the NC task. No contractive autoencoder because it needs TFv1 compatibility.",
"_____no_output_____"
],
[
"#### Table of contents: ",
"_____no_output_____"
],
[
"autoencoders: \n[Undercomplete Autoencoder](#Undercomplete-Autoencoder) \n[Sparse Autoencoder](#Sparse-Autoencoder) \n[Deep Autoencoder](#Deep-Autoencoder) \n\nclassifiers: \n[Simple dense classifier](#Simple-dense-classifier) \n[LSTM-based classifier](#LSTM-based-classifier) \n[kNN](#kNN) \n[SVC](#SVC) \n[Random Forest](#Random-Forest) \n[XGBoost](#XGBoost) ",
"_____no_output_____"
]
],
[
[
"import datareader # made by the previous author for reading the collected data\nimport dataextractor # same as above\nimport pandas\nimport numpy as np\n\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers, regularizers\nfrom tensorflow.keras.preprocessing import sequence\nfrom tensorflow.keras.models import Sequential, Model\nfrom tensorflow.keras.layers import Dense, Dropout, Activation, Input\nfrom tensorflow.keras.layers import LSTM\nfrom tensorflow.keras.layers import Conv1D, MaxPooling1D\nfrom tensorflow.keras.optimizers import Adam, Nadam\nimport tensorflow.keras.backend as K\ntf.keras.backend.set_floatx('float32') # call this, to set keras to use float32 to avoid a warning message\nmetrics = ['accuracy']\n\nfrom sklearn.preprocessing import StandardScaler, MinMaxScaler\n\nimport json\nfrom datetime import datetime\nimport warnings\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"import random\n\nrandom.seed(1)\nnp.random.seed(4)\ntf.random.set_seed(2)",
"_____no_output_____"
],
[
"# Start the notebook in the terminal with \"PYTHONHASHSEED=0 jupyter notebook\" \n# or in anaconda \"set PYTHONHASHSEED=0\" then start jupyter notebook\nimport os\nif os.environ.get(\"PYTHONHASHSEED\") != \"0\":\n raise Exception(\"You must set PYTHONHASHSEED=0 before starting the Jupyter server to get reproducible results.\")",
"_____no_output_____"
]
],
[
[
"This is modfied original author's code for reading data:",
"_____no_output_____"
]
],
[
[
"def model_train(model, x_train, y_train, batch_size, epochs, x_valid, y_valid, x_test, y_test):\n \"\"\"Train model with the given training, validation, and test set, with appropriate batch size and # epochs.\"\"\"\n epoch_data = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_valid, y_valid), verbose=0)\n score = model.evaluate(x_test, y_test, batch_size=batch_size, verbose=0)\n acc = score[1]\n score = score[0]\n return score, acc, epoch_data\n",
"_____no_output_____"
],
[
"def get_busy_vs_relax_timeframes_br_hb(path, ident, seconds, checkIfValid=True):\n \"\"\"Returns raw data from either 'on task' or 'relax' time frames and their class (0 or 1).\"\"\"\n \n dataread = datareader.DataReader(path, ident) # initialize path to data\n data = dataread.read_grc_data() # read from files\n samp_rate = int(round(len(data[1]) / max(data[0])))\n cog_res = dataread.read_cognitive_load_study(str(ident) + '-primary-extract.txt')\n\n tasks_data = np.empty((0, seconds*samp_rate))\n tasks_y = np.empty((0, 1))\n breathing = np.empty((0,12))\n heartbeat = np.empty((0,10))\n\n busy_n = dataread.get_data_task_timestamps(return_indexes=True)\n relax_n = dataread.get_relax_timestamps(return_indexes=True)\n\n for i in cog_res['task_number']:\n task_num_table = i - 225 # 0 - 17\n tmp_tasks_data = np.empty((0, seconds*samp_rate))\n tmp_tasks_y = np.empty((0, 1))\n tmp_breathing = np.empty((0,12))\n tmp_heartbeat = np.empty((0,10))\n\n if cog_res['task_label'][task_num_table] != 'NC':\n continue\n \n ### task versus relax (1 sample each)\n dataextract = dataextractor.DataExtractor(data[0][busy_n[task_num_table][0]:busy_n[task_num_table][1]],\n data[1][busy_n[task_num_table][0]:busy_n[task_num_table][1]],\n samp_rate)\n\n dataextract_relax = dataextractor.DataExtractor(data[0][relax_n[task_num_table][0]:relax_n[task_num_table][1]],\n data[1][relax_n[task_num_table][0]:relax_n[task_num_table][1]],\n samp_rate)\n\n try:\n\n # get extracted features for breathing\n tmpBR_busy = dataextract.extract_from_breathing_time(dataextract.t[-samp_rate*seconds:],\n dataextract.y[-samp_rate*seconds:])\n tmpBR_relax = dataextract_relax.extract_from_breathing_time(dataextract_relax.t[-samp_rate*seconds:],\n dataextract_relax.y[-samp_rate*seconds:])\n #get extracted features for heartbeat\n tmpHB_busy = dataextract.extract_from_heartbeat_time(dataextract.t[-samp_rate*seconds:],\n dataextract.y[-samp_rate*seconds:])\n tmpHB_relax = dataextract.extract_from_heartbeat_time(dataextract_relax.t[-samp_rate*seconds:],\n dataextract_relax.y[-samp_rate*seconds:])\n\n if checkIfValid and not(tmpBR_busy['br_ok'][0] and tmpBR_relax['br_ok'][0]):\n continue\n\n tmp_tasks_data = np.vstack((tmp_tasks_data, dataextract.y[-samp_rate * seconds:]))\n tmp_tasks_y = np.vstack((tasks_y, 1))\n tmp_tasks_data = np.vstack((tmp_tasks_data, dataextract_relax.y[-samp_rate * seconds:]))\n tmp_tasks_y = np.vstack((tmp_tasks_y, 0))\n\n # put busy frames then relaxed frames under the previous frames\n tmp_breathing = np.vstack((tmp_breathing, tmpBR_busy.to_numpy(dtype='float64', na_value=0)[0][:-1]))\n tmp_breathing = np.vstack((tmp_breathing, tmpBR_relax.to_numpy(dtype='float64', na_value=0)[0][:-1]))\n\n tmp_heartbeat = np.vstack((tmp_heartbeat, tmpHB_busy.to_numpy(dtype='float64', na_value=0)[0][:-1]))\n tmp_heartbeat = np.vstack((tmp_heartbeat, tmpHB_relax.to_numpy(dtype='float64', na_value=0)[0][:-1]))\n\n except ValueError:\n# print(ident) # ignore short windows\n continue\n\n # put busy frames then relaxed frames under the previous frames\n tasks_data = np.vstack((tasks_data, dataextract.y[-samp_rate * seconds:]))\n tasks_y = np.vstack((tasks_y, 1))\n tasks_data = np.vstack((tasks_data, dataextract_relax.y[-samp_rate * seconds:]))\n tasks_y = np.vstack((tasks_y, 0))\n\n breathing = np.vstack((breathing, tmp_breathing))\n\n heartbeat = np.vstack((heartbeat, tmp_heartbeat))\n \n return tasks_data, tasks_y, breathing, heartbeat",
"_____no_output_____"
],
[
"def get_data_from_idents_br_hb(path, idents, seconds):\n \"\"\"Go through all user data and take out windows of only <seconds> long time frames,\n along with the given class (from 'divide_each_task' function).\n \"\"\"\n samp_rate = 43 # hard-coded sample rate\n data, ys = np.empty((0, samp_rate*seconds)), np.empty((0, 1))\n brs = np.empty((0,12))\n hbs = np.empty((0,10))\n combined = np.empty((0,22))\n \n # was gettign some weird warnings; stack overflow said to ignore them\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\", category=RuntimeWarning)\n for i in idents:\n x, y, br, hb = get_busy_vs_relax_timeframes_br_hb(path, i, seconds) # either 'get_busy_vs_relax_timeframes',\n # get_engagement_increase_vs_decrease_timeframes, get_task_complexities_timeframes or get_TLX_timeframes\n\n data = np.vstack((data, x))\n ys = np.vstack((ys, y))\n brs = np.vstack((brs, br))\n hbs = np.vstack((hbs, hb))\n combined = np.hstack((brs,hbs))\n \n return data, ys, brs, hbs, combined",
"_____no_output_____"
],
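[
"# Added illustration (not in the original notebook): the arrays returned by\n# get_data_from_idents_br_hb are positional -- raw phase windows, labels, breathing\n# features, heartbeat features and their column-wise concatenation. With the hard-coded\n# 43 Hz sample rate and the 30 s windows used later in this notebook, the expected widths are:\nexpected_widths = {'phase': 43 * 30, 'y': 1, 'breathing': 12, 'heartbeat': 10, 'combined': 12 + 10}\nprint(expected_widths)",
"_____no_output_____"
],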
[
"# Accs is a dictionary which holds 1d arrays of accuracies in each key\n# except the key 'test id' which holds strings of the id which yielded the coresponding accuracies\ndef print_accs_stats(accs):\n \n printDict = {}\n # loop over each key\n for key in accs:\n \n if (key == 'test id'):\n # skip calculating ids\n continue\n printDict[key] = {}\n tmpDict = printDict[key]\n # calculate and print some statistics\n tmpDict['min'] = np.min(accs[key])\n tmpDict['max'] = np.max(accs[key])\n tmpDict['mean'] = np.mean(accs[key])\n tmpDict['median'] = np.median(accs[key])\n \n print(pandas.DataFrame.from_dict(printDict).to_string())",
"_____no_output_____"
],
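[
"# Added usage sketch (illustrative values only, not real results): print_accs_stats expects\n# a dict with one list of accuracies per data representation plus a parallel 'test id' list,\n# which it skips when computing min/max/mean/median.\nexample_accs = {'phase': [0.5, 0.75, 1.0], 'breathing': [0.75, 0.5, 1.0], 'test id': ['idA', 'idB', 'idC']}\nprint_accs_stats(example_accs)",
"_____no_output_____"
],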
[
"def clear_session_and_set_seeds():\n # clear session and set seeds again\n K.clear_session()\n random.seed(1)\n np.random.seed(4)\n tf.random.set_seed(2)",
"_____no_output_____"
]
],
[
[
"## Prepare data",
"_____no_output_____"
],
[
"Initialize variables:",
"_____no_output_____"
]
],
[
[
"# initialize a dictionary to store accuracies for comparison\naccuracies = {}\n\n# used for reading the data into an array\nseconds = 30 # time window length\nsamp_rate = 43 # hard-coded sample rate\nphase_shape = np.empty((0, samp_rate*seconds))\ny_shape = np.empty((0, 1))\nbreathing_shape = np.empty((0,12))\nheartbeat_shape = np.empty((0,10))\ncombined_shape = np.empty((0,22))\nidents = ['2gu87', 'iz2ps', '1mpau', '7dwjy', '7swyk', '94mnx', 'bd47a', 'c24ur', 'ctsax', 'dkhty', 'e4gay',\n 'ef5rq', 'f1gjp', 'hpbxa', 'pmyfl', 'r89k1', 'tn4vl', 'td5pr', 'gyqu9', 'fzchw', 'l53hg', '3n2f9',\n '62i9y']\npath = '../../../../../StudyData/'\n\n\n# change to len(idents) at the end to use all the data\nn = len(idents)",
"_____no_output_____"
],
[
"# Holds all the data so it doesnt have to be read from file each time\ndata_dict = {}",
"_____no_output_____"
]
],
[
[
"Fill the data dictionary:",
"_____no_output_____"
]
],
[
[
"for ident in idents.copy():\n \n # read data\n phase, y, breathing, heartbeat, combined = get_data_from_idents_br_hb(path, [ident], seconds)\n\n if (y.shape[0] <= 0):\n idents.remove(ident)\n print(ident)\n continue\n \n # initialize ident in \n data_dict[ident] = {}\n tmpDataDict = data_dict[ident]\n \n # load data into dictionary\n tmpDataDict['phase'] = phase\n tmpDataDict['y'] = y\n tmpDataDict['breathing'] = breathing\n tmpDataDict['heartbeat'] = heartbeat\n tmpDataDict['combined'] = combined\n \nprint(n)\nn = len(idents)\nprint(n)",
"ctsax\nl53hg\n62i9y\n23\n20\n"
],
[
"# load all phase data to use for training autoencoders\nphase_all_train = get_data_from_idents_br_hb(path, idents[:-2], seconds)[0]\n# Scale each row with MinMax to range [0,1]\nphase_all_train = MinMaxScaler().fit_transform(phase_all_train.T).T\n\n# load all validation phase data to use for training autoencoders\nphase_all_valid = get_data_from_idents_br_hb(path, idents[-2:], seconds)[0]\n# Scale each row with MinMax to range [0,1]\nphase_all_valid = MinMaxScaler().fit_transform(phase_all_valid.T).T",
"_____no_output_____"
]
],
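[
[
"# Added explanatory sketch (not in the original): MinMaxScaler scales per column, so the\n# transpose trick in the cell above scales each row (each 30 s phase window) to [0, 1]\n# independently of the other windows.\ndemo = np.array([[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]])\nprint(MinMaxScaler().fit_transform(demo.T).T)  # each row becomes [0. , 0.5, 1. ]",
"_____no_output_____"
]
],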
[
[
"## Autoencoders \nTrain autoencoders to save their encoded representations in the data dictionary:",
"_____no_output_____"
]
],
[
[
"# AE Training params\nbatch_size = 128\nepochs = 1000\nencoding_dim = 30\nae_encoded_shape = np.empty((0,encoding_dim))",
"_____no_output_____"
],
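[
"# Added sanity check (not in the original): each 30 s window holds seconds * samp_rate = 1290\n# phase samples, which the autoencoders below compress to encoding_dim = 30 features\n# (roughly a 43x reduction) before classification.\nprint(seconds * samp_rate, 'input samples ->', encoding_dim, 'encoded features')",
"_____no_output_____"
],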
[
"def compare_plot_n(data1, data2, data3, plot_n=3):\n \n #plot data1 values\n plt.figure()\n plt.figure(figsize=(20, 4))\n for i in range(plot_n):\n plt.subplot(1, 5, i+1)\n plt.plot(data1[i])\n\n #plot data2 values\n plt.figure()\n plt.figure(figsize=(20, 4))\n for i in range(plot_n):\n plt.subplot(1, 5, i+1)\n plt.plot(data2[i])\n \n #plot data3 values\n plt.figure()\n plt.figure(figsize=(20, 4))\n for i in range(plot_n):\n plt.subplot(1, 5, i+1)\n plt.plot(data3[i])",
"_____no_output_____"
]
],
[
[
"#### Undercomplete Autoencoder \nfrom https://blog.keras.io/building-autoencoders-in-keras.html",
"_____no_output_____"
]
],
[
[
"def undercomplete_ae(x, encoding_dim=64, encoded_as_model=False):\n # Simplest possible autoencoder from https://blog.keras.io/building-autoencoders-in-keras.html\n\n # this is our input placeholder\n input_data = Input(shape=x[0].shape, name=\"input\")\n dropout = Dropout(0.125, name=\"dropout\", seed=42)(input_data)\n # \"encoded\" is the encoded representation of the input\n encoded = Dense(encoding_dim, activation='relu', name=\"encoded\")(dropout)\n \n # \"decoded\" is the lossy reconstruction of the input\n decoded = Dense(x[0].shape[0], activation='sigmoid', name=\"decoded\")(encoded)\n\n autoencoder = Model(input_data, decoded)\n \n # compile the model\n autoencoder.compile(optimizer='adam', loss='binary_crossentropy', metrics=metrics)\n \n # if return encoder in the encoded variable\n if encoded_as_model:\n encoded = Model(input_data, encoded)\n \n return autoencoder, encoded",
"_____no_output_____"
]
],
[
[
"Train autoencoder on data:",
"_____no_output_____"
]
],
[
[
"clear_session_and_set_seeds()\nuc_ae, uc_enc = undercomplete_ae(phase_all_train, encoding_dim=encoding_dim, encoded_as_model=True)\nuc_ae.fit(phase_all_train, phase_all_train,\n validation_data=(phase_all_valid, phase_all_valid),\n batch_size=batch_size,\n shuffle=True,\n epochs=epochs,\n verbose=0)",
"_____no_output_____"
]
],
[
[
"Plot signal, reconstruction and encoded representation:",
"_____no_output_____"
]
],
[
[
"data2 = uc_ae.predict(phase_all_valid)\ndata3 = uc_enc.predict(phase_all_valid)\ncompare_plot_n(phase_all_valid, data2, data3)",
"_____no_output_____"
]
],
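[
[
"# Added numeric check (not part of the original analysis): complement the plots above with\n# the mean squared reconstruction error of the undercomplete autoencoder on the held-out\n# validation windows.\nuc_recon = uc_ae.predict(phase_all_valid)\nprint('undercomplete AE validation MSE:', np.mean((phase_all_valid - uc_recon) ** 2))",
"_____no_output_____"
]
],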
[
[
"Store the encoded representations in the data dictionary:",
"_____no_output_____"
]
],
[
[
"for ident in data_dict:\n \n tmpDataDict = data_dict[ident]\n \n # read data\n phase = tmpDataDict['phase']\n if (phase.shape[0] <= 0):\n uc_data = np.empty((0, encoding_dim))\n else:\n uc_data = uc_enc.predict(phase)\n \n # load data into dictionary\n tmpDataDict['undercomplete_encoded'] = uc_data",
"_____no_output_____"
]
],
[
[
"#### Sparse Autoencoder \nfrom https://blog.keras.io/building-autoencoders-in-keras.html",
"_____no_output_____"
]
],
[
[
"def sparse_ae(x, encoding_dim=64, encoded_as_model=False):\n # Simplest possible autoencoder from https://blog.keras.io/building-autoencoders-in-keras.html\n\n # this is our input placeholder\n input_data = Input(shape=x[0].shape, name=\"input\")\n dropout = Dropout(0.125, name=\"dropout\", seed=42) (input_data)\n # \"encoded\" is the encoded representation of the input\n # add a sparsity constraint\n encoded = Dense(encoding_dim, activation='relu', name=\"encoded\",\n activity_regularizer=regularizers.l1(10e-5))(dropout)\n \n # \"decoded\" is the lossy reconstruction of the input\n decoded = Dense(x[0].shape[0], activation='sigmoid', name=\"decoded\")(encoded)\n\n # this model maps an input to its reconstruction\n autoencoder = Model(input_data, decoded, name=\"sparse_ae\")\n \n # compile the model\n autoencoder.compile(optimizer='adam', loss='binary_crossentropy', metrics=metrics)\n \n # if return encoder in the encoded variable\n if encoded_as_model:\n encoded = Model(input_data, encoded)\n \n return autoencoder, encoded",
"_____no_output_____"
]
],
[
[
"Train autoencoder on data:",
"_____no_output_____"
]
],
[
[
"clear_session_and_set_seeds()\nsp_ae, sp_enc = sparse_ae(phase_all_train, encoding_dim=encoding_dim, encoded_as_model=True)\nsp_ae.fit(phase_all_train, phase_all_train,\n validation_data=(phase_all_valid, phase_all_valid),\n batch_size=batch_size,\n shuffle=True,\n epochs=epochs,\n verbose=0)",
"_____no_output_____"
]
],
[
[
"Plot signal, reconstruction and encoded representation:",
"_____no_output_____"
]
],
[
[
"data2 = sp_ae.predict(phase_all_valid)\ndata3 = sp_enc.predict(phase_all_valid)\ncompare_plot_n(phase_all_valid, data2, data3)",
"_____no_output_____"
]
],
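[
[
"# Added check (not in the original): the L1 activity regularizer in sparse_ae should push many\n# encoded activations towards zero; the fraction of near-zero codes on the validation windows\n# gives a rough measure of how sparse the learned representation actually is.\nsp_codes = sp_enc.predict(phase_all_valid)\nprint('fraction of near-zero activations:', np.mean(np.abs(sp_codes) < 1e-3))",
"_____no_output_____"
]
],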
[
[
"Store the encoded representations in the data dictionary:",
"_____no_output_____"
]
],
[
[
"for ident in data_dict:\n \n tmpDataDict = data_dict[ident]\n \n # read data\n phase = tmpDataDict['phase']\n if (phase.shape[0] <= 0):\n sp_data = np.empty((0, encoding_dim))\n else:\n sp_data = sp_enc.predict(phase)\n \n # load data into dictionary\n tmpDataDict['sparse_encoded'] = sp_data",
"_____no_output_____"
]
],
[
[
"#### Deep Autoencoder \nfrom https://blog.keras.io/building-autoencoders-in-keras.html",
"_____no_output_____"
]
],
[
[
"def deep_ae(x, enc_layers=[512,128], encoding_dim=64, dec_layers=[128,512], encoded_as_model=False):\n # From https://www.tensorflow.org/guide/keras/functional#use_the_same_graph_of_layers_to_define_multiple_models\n input_data = keras.Input(shape=x[0].shape, name=\"normalized_signal\")\n model = Dropout(0.125, name=\"dropout\", autocast=False, seed=42)(input_data)\n for i in enumerate(enc_layers):\n model = Dense(i[1], activation=\"relu\", name=\"dense_enc_\" + str(i[0]+1))(model)\n encoded_output = Dense(encoding_dim, activation=\"relu\", name=\"encoded_signal\")(model)\n\n encoded = encoded_output\n\n model = layers.Dense(dec_layers[0], activation=\"sigmoid\", name=\"dense_dec_1\")(encoded_output)\n for i in enumerate(dec_layers[1:]):\n model = Dense(i[1], activation=\"sigmoid\", name=\"dense_dec_\" + str(i[0]+2))(model)\n decoded_output = Dense(x[0].shape[0], activation=\"sigmoid\", name=\"reconstructed_signal\")(model)\n \n autoencoder = Model(input_data, decoded_output, name=\"autoencoder\")\n \n # compile the model\n autoencoder.compile(optimizer='adam', loss='binary_crossentropy', metrics=metrics)\n \n # if return encoder in the encoded variable\n if encoded_as_model:\n encoded = Model(input_data, encoded)\n\n return autoencoder, encoded",
"_____no_output_____"
]
],
[
[
"Train autoencoder on data:",
"_____no_output_____"
]
],
[
[
"clear_session_and_set_seeds()\nde_ae, de_enc = deep_ae(phase_all_train, encoding_dim=encoding_dim, encoded_as_model=True)\nde_ae.fit(phase_all_train, phase_all_train,\n validation_data=(phase_all_valid, phase_all_valid),\n batch_size=batch_size,\n shuffle=True,\n epochs=epochs,\n verbose=0)",
"_____no_output_____"
]
],
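[
[
"# Added inspection (not in the original): print the layer structure of the deep autoencoder;\n# with the default enc_layers/dec_layers and encoding_dim = 30 this is a\n# 1290 -> 512 -> 128 -> 30 -> 128 -> 512 -> 1290 bottleneck.\nde_ae.summary()",
"_____no_output_____"
]
],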
[
[
"Plot signal, reconstruction and encoded representation:",
"_____no_output_____"
]
],
[
[
"data2 = de_ae.predict(phase_all_valid)\ndata3 = de_enc.predict(phase_all_valid)\ncompare_plot_n(phase_all_valid, data2, data3)",
"_____no_output_____"
]
],
[
[
"Store the encoded representations in the data dictionary:",
"_____no_output_____"
]
],
[
[
"for ident in data_dict:\n \n tmpDataDict = data_dict[ident]\n \n # read data\n phase = tmpDataDict['phase']\n if (phase.shape[0] <= 0):\n de_data = np.empty((0, encoding_dim))\n else:\n de_data = de_enc.predict(phase)\n \n # load data into dictionary\n tmpDataDict['deep_encoded'] = de_data",
"_____no_output_____"
]
],
[
[
"Helper function to get data from the dictionary:",
"_____no_output_____"
]
],
[
[
"def get_ident_data_from_dict(idents, data_dict):\n \n # Initialize data variables\n y = y_shape.copy()\n phase = phase_shape.copy()\n breathing = breathing_shape.copy()\n heartbeat = heartbeat_shape.copy()\n combined = combined_shape.copy()\n undercomplete_encoded = ae_encoded_shape.copy()\n sparse_encoded = ae_encoded_shape.copy()\n deep_encoded = ae_encoded_shape.copy()\n \n # Stack data form each ident into the variables\n for tmp_id in idents:\n y = np.vstack((y, data_dict[tmp_id]['y']))\n phase = np.vstack((phase, data_dict[tmp_id]['phase']))\n breathing = np.vstack((breathing, data_dict[tmp_id]['breathing']))\n heartbeat = np.vstack((heartbeat, data_dict[tmp_id]['heartbeat']))\n combined = np.vstack((combined, data_dict[tmp_id]['combined']))\n undercomplete_encoded = np.vstack((undercomplete_encoded, data_dict[tmp_id]['undercomplete_encoded']))\n sparse_encoded = np.vstack((sparse_encoded, data_dict[tmp_id]['sparse_encoded']))\n deep_encoded = np.vstack((deep_encoded, data_dict[tmp_id]['deep_encoded']))\n \n return y, phase, breathing, heartbeat, combined, undercomplete_encoded, sparse_encoded, deep_encoded",
"_____no_output_____"
]
],
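[
[
"# Added illustration (not in the original): the tuple returned by get_ident_data_from_dict is\n# purely positional; the helper loop defined below indexes it with a parallel name list, so the\n# order used here has to stay in sync with that list.\nnames = ['y', 'phase', 'breathing', 'heartbeat', 'combined br hb', 'undercomplete', 'sparse', 'deep']\nfor name, arr in zip(names, get_ident_data_from_dict(idents[:1], data_dict)):\n    print(name, arr.shape)",
"_____no_output_____"
]
],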
[
[
"## Classifiers",
"_____no_output_____"
],
[
"#### Helper loop function definition \nA function that loops over all the data and calls the classifiers with it then stores the returned accuracies.",
"_____no_output_____"
]
],
[
[
"def helper_loop(classifier_function_train, idents, n=5, num_loops_to_average_over=1, should_scale_data=True):\n #returns a dictionary with accuracies\n\n # set the variables in the dictionary\n accs = {}\n accs['phase'] = []\n accs['breathing'] = []\n accs['heartbeat'] = []\n accs['combined br hb'] = []\n accs['undercomplete'] = []\n accs['sparse'] = []\n accs['deep'] = []\n accs['test id'] = []\n start_time = datetime.now()\n\n # leave out person out validation\n for i in range(n):\n \n # print current iteration and time elapsed from start\n print(\"iteration:\", i+1, \"of\", n, \"; time elapsed:\", datetime.now()-start_time)\n\n ## ----- Data preparation:\n validation_idents = [idents[i]]\n test_idents = [idents[i-1]]\n \n train_idents = []\n for ident in idents:\n if (ident not in test_idents) and (ident not in validation_idents):\n train_idents.append(ident)\n\n # save test id to see which id yielded which accuracies\n accs['test id'].append(test_idents[0])\n\n # Load train data\n train_data = get_ident_data_from_dict(train_idents, data_dict)\n y_train = train_data[0]\n \n # Load validation data\n valid_data = get_ident_data_from_dict(validation_idents, data_dict)\n y_valid = valid_data[0]\n \n # Load test data\n test_data = get_ident_data_from_dict(test_idents, data_dict)\n y_test = test_data[0]\n \n # Skip idents that don't have any data\n if (y_test.shape[0] <= 0 or y_valid.shape[0] <= 0):\n continue\n \n data_names_by_index = ['y', 'phase', 'breathing', 'heartbeat',\n 'combined br hb', 'undercomplete', 'sparse', 'deep']\n\n # Loop over all data that will be used for classification and send it to the classifier\n # index 0 is y so we skip it\n for index in range(1, len(test_data)):\n clear_session_and_set_seeds()\n train_x = train_data[index]\n valid_x = valid_data[index]\n test_x = test_data[index]\n \n # Scale data\n if should_scale_data:\n # Scale with standard scaler\n sscaler = StandardScaler()\n sscaler.fit(train_x)\n train_x = sscaler.transform(train_x)\n\n # Scale valid and test with train's scaler\n valid_x = sscaler.transform(valid_x)\n test_x = sscaler.transform(test_x)\n \n # Initialize variables\n tmp_acc = []\n data_name = data_names_by_index[index]\n \n for tmp_index in range(num_loops_to_average_over):\n curr_acc = classifier_function_train(train_x, y_train, valid_x, y_valid, test_x, y_test, data_name)\n tmp_acc.append(curr_acc)\n \n # Store accuracy\n curr_acc = np.mean(tmp_acc)\n accs[data_name].append(curr_acc)\n \n\n # Print total time required to run this\n end_time = datetime.now()\n elapsed_time = end_time - start_time\n print(\"Completed!\", \"Time elapsed:\", elapsed_time)\n \n return accs",
"_____no_output_____"
]
],
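[
[
"# Added baseline sketch (not part of the original study): any classifier plugged into helper_loop\n# only needs the signature\n#   f(x_train, y_train, x_valid, y_valid, x_test, y_test, data_name) -> accuracy.\n# A majority-class predictor makes that contract explicit and gives a floor to compare against.\ndef majority_class_train(x_train, y_train, x_valid, y_valid, x_test, y_test, data_name):\n    majority = 1.0 if np.mean(y_train) >= 0.5 else 0.0\n    return np.mean(y_test.ravel() == majority)\n\n# Uncomment to run the baseline over all participants:\n# accuracies['majority_baseline'] = helper_loop(majority_class_train, idents, n)",
"_____no_output_____"
]
],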
[
[
"#### Simple dense classifier",
"_____no_output_____"
],
[
"Define the classifier:",
"_____no_output_____"
]
],
[
[
"params_dense_phase = {\n 'dropout': 0.3,\n 'hidden_size': 26,\n 'activation': 'sigmoid',\n 'loss': 'binary_crossentropy',\n 'optimizer': Adam,\n 'batch_size': 128,\n 'learning_rate': 0.001,\n 'epochs': 300\n}",
"_____no_output_____"
],
[
"params_dense_br_hb = {\n 'dropout': 0.05,\n 'hidden_size': 24,\n 'activation': 'sigmoid',\n 'loss': 'poisson',\n 'optimizer': Nadam,\n 'learning_rate': 0.05,\n 'batch_size': 128,\n 'epochs': 100\n}",
"_____no_output_____"
],
[
"params_dense_ae_enc = {\n 'dropout': 0.1,\n 'hidden_size': 34,\n 'activation': 'relu',\n 'loss': 'binary_crossentropy',\n 'optimizer': Adam,\n 'learning_rate': 0.005,\n 'batch_size': 156,\n 'epochs': 200\n}",
"_____no_output_____"
],
[
"def dense_train(x_train, y_train, x_valid, y_valid, x_test, y_test, data_name):\n \n params = params_dense_br_hb\n if (data_name == 'phase'):\n params = params_dense_phase\n if (data_name == 'undercomplete' or data_name == 'sparse' or data_name == 'deep'):\n params = params_dense_ae_enc\n \n # Define the model\n model = Sequential()\n model.add(Dropout(params['dropout']))\n model.add(Dense(params['hidden_size']))\n model.add(Activation(params['activation']))\n model.add(Dense(1))\n model.add(Activation('sigmoid'))\n\n # Compile the model\n model.compile(loss=params['loss'],\n optimizer=params['optimizer'](learning_rate=params['learning_rate']),\n metrics=metrics)\n \n # Train the model and return the accuracy\n sc, curr_acc, epoch_data = model_train(model, x_train, y_train, params['batch_size'], params['epochs'],\n x_valid, y_valid, x_test, y_test)\n \n return curr_acc",
"_____no_output_____"
]
],
[
[
"Combine the autoencoders with the classifier: ",
"_____no_output_____"
]
],
[
[
"accs = helper_loop(dense_train, idents, n)",
"iteration: 1 of 20 ; time elapsed: 0:00:00\niteration: 2 of 20 ; time elapsed: 0:00:27.831025\niteration: 3 of 20 ; time elapsed: 0:00:54.797405\niteration: 4 of 20 ; time elapsed: 0:01:21.679284\niteration: 5 of 20 ; time elapsed: 0:01:48.448613\niteration: 6 of 20 ; time elapsed: 0:02:14.797992\niteration: 7 of 20 ; time elapsed: 0:02:42.743983\niteration: 8 of 20 ; time elapsed: 0:03:10.910841\niteration: 9 of 20 ; time elapsed: 0:03:38.652361\niteration: 10 of 20 ; time elapsed: 0:04:04.268843\niteration: 11 of 20 ; time elapsed: 0:04:29.118891\niteration: 12 of 20 ; time elapsed: 0:04:54.530553\niteration: 13 of 20 ; time elapsed: 0:05:19.707752\niteration: 14 of 20 ; time elapsed: 0:05:44.949560\niteration: 15 of 20 ; time elapsed: 0:06:09.788765\niteration: 16 of 20 ; time elapsed: 0:06:34.781514\niteration: 17 of 20 ; time elapsed: 0:06:59.759703\niteration: 18 of 20 ; time elapsed: 0:07:24.848395\niteration: 19 of 20 ; time elapsed: 0:07:51.218823\niteration: 20 of 20 ; time elapsed: 0:08:17.532949\nCompleted! Time elapsed: 0:08:43.926348\n"
],
[
"accuracies['simple_dense'] = accs",
"_____no_output_____"
],
[
"# print accuracies of each method and corresponding id which yielded that accuracy (same row)\n#pandas.DataFrame.from_dict(accs)",
"_____no_output_____"
],
[
"# print some statistics for each method\nprint_accs_stats(accs)",
" phase breathing heartbeat combined br hb undercomplete sparse deep\nmin 0.250000 0.5000 0.000000 0.500000 0.500000 0.250000 0.000000\nmax 1.000000 1.0000 1.000000 1.000000 1.000000 1.000000 1.000000\nmean 0.620833 0.7375 0.620833 0.800000 0.645833 0.733333 0.670833\nmedian 0.500000 0.6250 0.500000 0.916667 0.500000 0.750000 0.666667\n"
]
],
[
[
"#### LSTM-based classifier \nbased on the original author's code",
"_____no_output_____"
]
],
[
[
"params_lstm_phase = {\n 'kernel_size': 4,\n 'filters': 32,\n 'strides': 4,\n 'pool_size': 4,\n 'dropout': 0.01,\n 'lstm_output_size': 22,\n 'activation': 'relu',\n 'last_activation': 'sigmoid',\n 'loss': 'poisson',\n 'optimizer': Nadam,\n 'learning_rate': 0.01,\n 'batch_size': 186,\n 'epochs': 200\n}",
"_____no_output_____"
],
[
"params_lstm_br_hb = {\n 'kernel_size': 2,\n 'filters': 12,\n 'strides': 2,\n 'pool_size': 1,\n 'dropout': 0.01,\n 'lstm_output_size': 64,\n 'activation': 'relu',\n 'last_activation': 'sigmoid',\n 'loss': 'poisson',\n 'optimizer': Nadam,\n 'learning_rate': 0.001,\n 'batch_size': 64,\n 'epochs': 100\n}",
"_____no_output_____"
],
[
"params_lstm_ae_enc = {\n 'kernel_size': 2,\n 'filters': 6,\n 'strides': 2,\n 'pool_size': 2,\n 'dropout': 0.01,\n 'lstm_output_size': 32,\n 'activation': 'relu',\n 'last_activation': 'sigmoid',\n 'loss': 'poisson',\n 'optimizer': Nadam,\n 'learning_rate': 0.001,\n 'batch_size': 64,\n 'epochs': 100\n}",
"_____no_output_____"
],
[
"def LSTM_train(x_train, y_train, x_valid, y_valid, x_test, y_test, data_name):\n \n params = params_lstm_br_hb\n if (data_name == 'phase'):\n params = params_lstm_phase\n if (data_name == 'undercomplete' or data_name == 'sparse' or data_name == 'deep'):\n params = params_lstm_ae_enc\n \n # Reshape data to fit some layers\n xt_train = x_train.reshape(-1, x_train[0].shape[0], 1)\n xt_valid = x_valid.reshape(-1, x_valid[0].shape[0], 1)\n xt_test = x_test.reshape(-1, x_test[0].shape[0], 1)\n \n # Define the model\n model = Sequential()\n model.add(Dropout(params['dropout']))\n model.add(Conv1D(params['filters'],\n params['kernel_size'],\n padding='valid',\n activation=params['activation'],\n strides=params['strides']))\n\n model.add(MaxPooling1D(pool_size=params['pool_size']))\n \n if (data_name == 'phase'):\n model.add(Conv1D(params['filters'],\n params['kernel_size'],\n padding='valid',\n activation=params['activation'],\n strides=params['strides']))\n model.add(MaxPooling1D(pool_size=params['pool_size']))\n\n model.add(Dropout(params['dropout']))\n model.add(LSTM(params['lstm_output_size']))\n model.add(Dense(1))\n model.add(Activation(params['last_activation']))\n\n # Compile the model\n model.compile(loss=params['loss'],\n optimizer=params['optimizer'](learning_rate=params['learning_rate']),\n metrics=['acc'])\n \n # Train the model and return the accuracy\n sc, curr_acc, epoch_data = model_train(model, xt_train, y_train, params['batch_size'], params['epochs'],\n xt_valid, y_valid, xt_test, y_test)\n \n return curr_acc",
"_____no_output_____"
]
],
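[
[
"# Added illustration (not in the original): Conv1D and LSTM layers expect 3-D input, which is\n# why LSTM_train reshapes each flat feature vector to (samples, timesteps, 1); e.g. a batch of\n# 40 phase windows of length 1290 becomes (40, 1290, 1).\ndemo = np.zeros((40, 1290))\nprint(demo.reshape(-1, demo[0].shape[0], 1).shape)",
"_____no_output_____"
]
],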
[
[
"Combine the autoencoders with the classifier: ",
"_____no_output_____"
]
],
[
[
"accs = helper_loop(LSTM_train, idents, n=n)",
"iteration: 1 of 20 ; time elapsed: 0:00:00\niteration: 2 of 20 ; time elapsed: 0:00:47.630907\niteration: 3 of 20 ; time elapsed: 0:01:32.773018\niteration: 4 of 20 ; time elapsed: 0:02:18.085561\niteration: 5 of 20 ; time elapsed: 0:03:03.227799\niteration: 6 of 20 ; time elapsed: 0:03:50.121051\niteration: 7 of 20 ; time elapsed: 0:04:36.607841\niteration: 8 of 20 ; time elapsed: 0:05:20.946924\niteration: 9 of 20 ; time elapsed: 0:06:05.321868\niteration: 10 of 20 ; time elapsed: 0:06:50.114683\niteration: 11 of 20 ; time elapsed: 0:07:35.006935\niteration: 12 of 20 ; time elapsed: 0:08:19.293984\niteration: 13 of 20 ; time elapsed: 0:09:00.542035\niteration: 14 of 20 ; time elapsed: 0:09:45.340562\niteration: 15 of 20 ; time elapsed: 0:10:31.172220\niteration: 16 of 20 ; time elapsed: 0:11:17.224591\niteration: 17 of 20 ; time elapsed: 0:12:01.974752\niteration: 18 of 20 ; time elapsed: 0:12:44.189476\niteration: 19 of 20 ; time elapsed: 0:13:29.122171\niteration: 20 of 20 ; time elapsed: 0:14:15.227707\nCompleted! Time elapsed: 0:15:00.925496\n"
],
[
"accuracies['LSTM'] = accs",
"_____no_output_____"
],
[
"# print accuracies of each method and corresponding id which yielded that accuracy (same row)\n#pandas.DataFrame.from_dict(accs)",
"_____no_output_____"
],
[
"# print some statistics for each method\nprint_accs_stats(accs)",
" phase breathing heartbeat combined br hb undercomplete sparse deep\nmin 0.500000 0.50 0.0000 0.500000 0.000000 0.25 0.166667\nmax 1.000000 1.00 1.0000 1.000000 1.000000 1.00 1.000000\nmean 0.683333 0.75 0.5375 0.720833 0.508333 0.55 0.654167\nmedian 0.625000 0.75 0.5000 0.708333 0.500000 0.50 0.500000\n"
]
],
[
[
"#### kNN",
"_____no_output_____"
]
],
[
[
"params_knn_phase = {\n 'n_neighbors': 3,\n 'metric': 'cosine'\n}",
"_____no_output_____"
],
[
"params_knn_br_hb = {\n 'n_neighbors': 13,\n 'metric': 'manhattan'\n}",
"_____no_output_____"
],
[
"params_knn_ae_enc = {\n 'n_neighbors': 5,\n 'metric': 'cosine'\n}",
"_____no_output_____"
],
[
"from sklearn.neighbors import KNeighborsClassifier\n\ndef KNN_classifier(params):\n model = KNeighborsClassifier(n_neighbors=params['n_neighbors'], metric=params['metric'])\n return model",
"_____no_output_____"
],
[
"def KNN_train(x_train, y_train, x_valid, y_valid, x_test, y_test, data_name):\n \n params = params_knn_br_hb\n if (data_name == 'phase'):\n params = params_knn_phase\n if (data_name == 'undercomplete' or data_name == 'sparse' or data_name == 'deep'):\n params = params_knn_ae_enc\n \n model = KNN_classifier(params)\n model.fit(x_train, y_train.ravel())\n curr_acc = np.sum(model.predict(x_test) == y_test.ravel()) / len(y_test.ravel())\n return curr_acc",
"_____no_output_____"
]
],
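[
[
"# Added note (not in the original): the manual accuracy used in KNN_train (and in the other\n# sklearn-based *_train functions below) is equivalent to sklearn's accuracy_score, shown here\n# on a tiny made-up example.\nfrom sklearn.metrics import accuracy_score\ny_true = np.array([0, 1, 1, 0])\ny_pred = np.array([0, 1, 0, 0])\nprint(np.sum(y_pred == y_true) / len(y_true), accuracy_score(y_true, y_pred))",
"_____no_output_____"
]
],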
[
[
"Combine the autoencoders with the classifier: ",
"_____no_output_____"
]
],
[
[
"accs = helper_loop(KNN_train, idents, n)",
"iteration: 1 of 20 ; time elapsed: 0:00:00\niteration: 2 of 20 ; time elapsed: 0:00:00.061040\niteration: 3 of 20 ; time elapsed: 0:00:00.088966\niteration: 4 of 20 ; time elapsed: 0:00:00.115892\niteration: 5 of 20 ; time elapsed: 0:00:00.146782\niteration: 6 of 20 ; time elapsed: 0:00:00.168754\niteration: 7 of 20 ; time elapsed: 0:00:00.200149\niteration: 8 of 20 ; time elapsed: 0:00:00.227077\niteration: 9 of 20 ; time elapsed: 0:00:00.256028\niteration: 10 of 20 ; time elapsed: 0:00:00.282234\niteration: 11 of 20 ; time elapsed: 0:00:00.312915\niteration: 12 of 20 ; time elapsed: 0:00:00.335916\niteration: 13 of 20 ; time elapsed: 0:00:00.370793\niteration: 14 of 20 ; time elapsed: 0:00:00.403788\niteration: 15 of 20 ; time elapsed: 0:00:00.438463\niteration: 16 of 20 ; time elapsed: 0:00:00.471661\niteration: 17 of 20 ; time elapsed: 0:00:00.504645\niteration: 18 of 20 ; time elapsed: 0:00:00.535435\niteration: 19 of 20 ; time elapsed: 0:00:00.568670\niteration: 20 of 20 ; time elapsed: 0:00:00.602059\nCompleted! Time elapsed: 0:00:00.633639\n"
],
[
"accuracies['kNN'] = accs",
"_____no_output_____"
],
[
"# print accuracies of each method and corresponding id which yielded that accuracy (same row)\n#pandas.DataFrame.from_dict(accs)",
"_____no_output_____"
],
[
"# print some statistics for each method\nprint_accs_stats(accs)",
" phase breathing heartbeat combined br hb undercomplete sparse deep\nmin 0.5000 0.500000 0.000000 0.000000 0.166667 0.333333 0.000000\nmax 1.0000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000\nmean 0.7125 0.729167 0.633333 0.691667 0.616667 0.658333 0.620833\nmedian 0.7500 0.708333 0.500000 0.666667 0.500000 0.583333 0.500000\n"
]
],
[
[
"#### SVC",
"_____no_output_____"
]
],
[
[
"params_svc_phase = {\n 'C': 13,\n 'kernel': 'rbf',\n 'gamma': 'scale'\n}",
"_____no_output_____"
],
[
"params_svc_br_hb = {\n 'C': 2,\n 'kernel': 'poly',\n 'gamma': 'auto'\n}",
"_____no_output_____"
],
[
"params_svc_ae_enc = {\n 'C': 3,\n 'kernel': 'rbf',\n 'gamma': 'scale'\n}",
"_____no_output_____"
],
[
"from sklearn.svm import SVC\n\ndef SVC_classifier(params):\n model = SVC(random_state=42, C=params['C'], kernel=params['kernel'], gamma=params['gamma'])\n return model",
"_____no_output_____"
],
[
"def SVC_train(x_train, y_train, x_valid, y_valid, x_test, y_test, data_name):\n \n params = params_svc_br_hb\n if (data_name == 'phase'):\n params = params_svc_phase\n if (data_name == 'undercomplete' or data_name == 'sparse' or data_name == 'deep'):\n params = params_svc_ae_enc\n \n model = SVC_classifier(params)\n model.fit(x_train, y_train.ravel())\n curr_acc = np.sum(model.predict(x_test) == y_test.ravel()) / len(y_test.ravel())\n return curr_acc",
"_____no_output_____"
]
],
[
[
"Combine the autoencoders with the classifier: ",
"_____no_output_____"
]
],
[
[
"accs = helper_loop(SVC_train, idents, n)",
"iteration: 1 of 20 ; time elapsed: 0:00:00\niteration: 2 of 20 ; time elapsed: 0:00:00.047795\niteration: 3 of 20 ; time elapsed: 0:00:00.083974\niteration: 4 of 20 ; time elapsed: 0:00:00.126923\niteration: 5 of 20 ; time elapsed: 0:00:00.160387\niteration: 6 of 20 ; time elapsed: 0:00:00.191756\niteration: 7 of 20 ; time elapsed: 0:00:00.229714\niteration: 8 of 20 ; time elapsed: 0:00:00.263641\niteration: 9 of 20 ; time elapsed: 0:00:00.297429\niteration: 10 of 20 ; time elapsed: 0:00:00.331466\niteration: 11 of 20 ; time elapsed: 0:00:00.367573\niteration: 12 of 20 ; time elapsed: 0:00:00.399779\niteration: 13 of 20 ; time elapsed: 0:00:00.437508\niteration: 14 of 20 ; time elapsed: 0:00:00.475012\niteration: 15 of 20 ; time elapsed: 0:00:00.510134\niteration: 16 of 20 ; time elapsed: 0:00:00.547858\niteration: 17 of 20 ; time elapsed: 0:00:00.582484\niteration: 18 of 20 ; time elapsed: 0:00:00.618899\niteration: 19 of 20 ; time elapsed: 0:00:00.657748\niteration: 20 of 20 ; time elapsed: 0:00:00.691355\nCompleted! Time elapsed: 0:00:00.719933\n"
],
[
"accuracies['SVC'] = accs",
"_____no_output_____"
],
[
"# print accuracies of each method and corresponding id which yielded that accuracy (same row)\n#pandas.DataFrame.from_dict(accs)",
"_____no_output_____"
],
[
"# print some statistics for each method\nprint_accs_stats(accs)",
" phase breathing heartbeat combined br hb undercomplete sparse deep\nmin 0.333333 0.333333 0.000000 0.500000 0.000000 0.250000 0.333333\nmax 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000\nmean 0.675000 0.750000 0.583333 0.816667 0.545833 0.570833 0.608333\nmedian 0.500000 0.750000 0.500000 0.916667 0.500000 0.500000 0.500000\n"
]
],
[
[
"#### Random Forest",
"_____no_output_____"
]
],
[
[
"params_rf_phase = {\n 'n_estimators': 170,\n 'max_depth': 100,\n 'min_samples_split': 3,\n 'min_samples_leaf': 2,\n 'oob_score': False,\n 'ccp_alpha': 0.001\n}",
"_____no_output_____"
],
[
"params_rf_br_hb = {\n 'n_estimators': 180,\n 'max_depth': 20,\n 'min_samples_split': 3,\n 'min_samples_leaf': 3,\n 'oob_score': True,\n 'ccp_alpha': 0.015\n}",
"_____no_output_____"
],
[
"params_rf_ae_enc = {\n 'n_estimators': 100,\n 'max_depth': 100,\n 'min_samples_split': 3,\n 'min_samples_leaf': 3,\n 'oob_score': False,\n 'ccp_alpha': 0.015\n}",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestClassifier\ndef random_forest_classifier(params):\n model = RandomForestClassifier(random_state=42,\n n_estimators = params['n_estimators'],\n criterion = 'entropy',\n max_depth = params['max_depth'],\n min_samples_split = params['min_samples_split'],\n min_samples_leaf = params['min_samples_leaf'],\n oob_score = params['oob_score'],\n ccp_alpha = params['ccp_alpha'],\n max_features = 'log2',\n bootstrap = True)\n return model",
"_____no_output_____"
],
[
"def random_forest_train(x_train, y_train, x_valid, y_valid, x_test, y_test, data_name):\n \n params = params_rf_br_hb\n if (data_name == 'phase'):\n params = params_rf_phase\n if (data_name == 'undercomplete' or data_name == 'sparse' or data_name == 'deep'):\n params = params_rf_ae_enc\n \n model = random_forest_classifier(params)\n model.fit(x_train, y_train.ravel())\n curr_acc = np.sum(model.predict(x_test) == y_test.ravel()) / len(y_test.ravel())\n return curr_acc",
"_____no_output_____"
]
],
[
[
"Combine the autoencoders with the classifier: ",
"_____no_output_____"
]
],
[
[
"accs = helper_loop(random_forest_train, idents, n, should_scale_data=False)",
"iteration: 1 of 20 ; time elapsed: 0:00:00\niteration: 2 of 20 ; time elapsed: 0:00:01.739216\niteration: 3 of 20 ; time elapsed: 0:00:03.562363\niteration: 4 of 20 ; time elapsed: 0:00:05.450102\niteration: 5 of 20 ; time elapsed: 0:00:07.452709\niteration: 6 of 20 ; time elapsed: 0:00:09.470473\niteration: 7 of 20 ; time elapsed: 0:00:11.472069\niteration: 8 of 20 ; time elapsed: 0:00:13.324394\niteration: 9 of 20 ; time elapsed: 0:00:15.160868\niteration: 10 of 20 ; time elapsed: 0:00:16.983771\niteration: 11 of 20 ; time elapsed: 0:00:18.743006\niteration: 12 of 20 ; time elapsed: 0:00:20.548644\niteration: 13 of 20 ; time elapsed: 0:00:22.324966\niteration: 14 of 20 ; time elapsed: 0:00:24.093166\niteration: 15 of 20 ; time elapsed: 0:00:26.138127\niteration: 16 of 20 ; time elapsed: 0:00:27.920479\niteration: 17 of 20 ; time elapsed: 0:00:29.693910\niteration: 18 of 20 ; time elapsed: 0:00:31.466339\niteration: 19 of 20 ; time elapsed: 0:00:33.297106\niteration: 20 of 20 ; time elapsed: 0:00:35.180407\nCompleted! Time elapsed: 0:00:37.137313\n"
],
[
"accuracies['random_forest'] = accs",
"_____no_output_____"
],
[
"# print accuracies of each method and corresponding id which yielded that accuracy (same row)\n#pandas.DataFrame.from_dict(accs)",
"_____no_output_____"
],
[
"# print some statistics for each method\nprint_accs_stats(accs)",
" phase breathing heartbeat combined br hb undercomplete sparse deep\nmin 0.000000 0.500000 0.000000 0.500000 0.000000 0.000000 0.333333\nmax 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000\nmean 0.608333 0.845833 0.533333 0.845833 0.558333 0.720833 0.679167\nmedian 0.500000 1.000000 0.500000 1.000000 0.500000 0.750000 0.708333\n"
]
],
[
[
"#### Naive Bayesian",
"_____no_output_____"
]
],
[
[
"from sklearn.naive_bayes import GaussianNB\n\ndef naive_bayesian_classifier():\n model = GaussianNB()\n return model",
"_____no_output_____"
],
[
"def naive_bayesian_train(x_train, y_train, x_valid, y_valid, x_test, y_test, data_name):\n model = naive_bayesian_classifier()\n model.fit(x_train, y_train.ravel())\n curr_acc = np.sum(model.predict(x_test) == y_test.ravel()) / len(y_test.ravel())\n return curr_acc",
"_____no_output_____"
]
],
[
[
"Combine the autoencoders with the classifier: ",
"_____no_output_____"
]
],
[
[
"accs = helper_loop(naive_bayesian_train, idents, n)",
"iteration: 1 of 20 ; time elapsed: 0:00:00\niteration: 2 of 20 ; time elapsed: 0:00:00.036359\niteration: 3 of 20 ; time elapsed: 0:00:00.068320\niteration: 4 of 20 ; time elapsed: 0:00:00.110390\niteration: 5 of 20 ; time elapsed: 0:00:00.145038\niteration: 6 of 20 ; time elapsed: 0:00:00.183837\niteration: 7 of 20 ; time elapsed: 0:00:00.216208\niteration: 8 of 20 ; time elapsed: 0:00:00.249199\niteration: 9 of 20 ; time elapsed: 0:00:00.281086\niteration: 10 of 20 ; time elapsed: 0:00:00.324292\niteration: 11 of 20 ; time elapsed: 0:00:00.365268\niteration: 12 of 20 ; time elapsed: 0:00:00.397286\niteration: 13 of 20 ; time elapsed: 0:00:00.435525\niteration: 14 of 20 ; time elapsed: 0:00:00.473394\niteration: 15 of 20 ; time elapsed: 0:00:00.508301\niteration: 16 of 20 ; time elapsed: 0:00:00.541213\niteration: 17 of 20 ; time elapsed: 0:00:00.570978\niteration: 18 of 20 ; time elapsed: 0:00:00.605538\niteration: 19 of 20 ; time elapsed: 0:00:00.640901\niteration: 20 of 20 ; time elapsed: 0:00:00.674686\nCompleted! Time elapsed: 0:00:00.706024\n"
],
[
"accuracies['naive_bayesian'] = accs",
"_____no_output_____"
],
[
"# print accuracies of each method and corresponding id which yielded that accuracy (same row)\n#pandas.DataFrame.from_dict(accs)",
"_____no_output_____"
],
[
"# print some statistics for each method\nprint_accs_stats(accs)",
" phase breathing heartbeat combined br hb undercomplete sparse deep\nmin 0.250000 0.500000 0.000000 0.500000 0.0000 0.000000 0.333333\nmax 1.000000 1.000000 1.000000 1.000000 1.0000 1.000000 1.000000\nmean 0.529167 0.770833 0.541667 0.783333 0.5875 0.670833 0.737500\nmedian 0.500000 0.750000 0.500000 0.791667 0.5000 0.666667 0.750000\n"
]
],
[
[
"#### XGBoost",
"_____no_output_____"
]
],
[
[
"params_xgb_phase = {\n 'n_estimators': 100,\n 'max_depth': 50,\n 'booster': 'gbtree'\n}",
"_____no_output_____"
],
[
"params_xgb_br_hb = {\n 'n_estimators': 130,\n 'max_depth': 2,\n 'booster': 'gbtree'\n}",
"_____no_output_____"
],
[
"params_xgb_ae_enc = {\n 'n_estimators': 50,\n 'max_depth': 6,\n 'booster': 'gbtree'\n}",
"_____no_output_____"
],
[
"from xgboost import XGBClassifier\n\ndef XGBoost_classifier(params):\n model = XGBClassifier(random_state=42,\n n_estimators=params['n_estimators'],\n max_depth=params['max_depth'])\n return model",
"_____no_output_____"
],
[
"def XGBoost_train(x_train, y_train, x_valid, y_valid, x_test, y_test, data_name):\n \n params = params_xgb_br_hb\n if (data_name == 'phase'):\n params = params_xgb_phase\n if (data_name == 'undercomplete' or data_name == 'sparse' or data_name == 'deep'):\n params = params_xgb_ae_enc\n \n model = XGBoost_classifier(params)\n model.fit(x_train, y_train.ravel())\n curr_acc = np.sum(model.predict(x_test) == y_test.ravel()) / len(y_test.ravel())\n return curr_acc",
"_____no_output_____"
]
],
[
[
"Combine the autoencoders with the classifier: ",
"_____no_output_____"
]
],
[
[
"accs = helper_loop(XGBoost_train, idents, n, should_scale_data=False)",
"iteration: 1 of 20 ; time elapsed: 0:00:00\niteration: 2 of 20 ; time elapsed: 0:00:00.705076\niteration: 3 of 20 ; time elapsed: 0:00:01.238549\niteration: 4 of 20 ; time elapsed: 0:00:01.772336\niteration: 5 of 20 ; time elapsed: 0:00:02.309880\niteration: 6 of 20 ; time elapsed: 0:00:02.797809\niteration: 7 of 20 ; time elapsed: 0:00:03.300179\niteration: 8 of 20 ; time elapsed: 0:00:03.823323\niteration: 9 of 20 ; time elapsed: 0:00:04.362097\niteration: 10 of 20 ; time elapsed: 0:00:04.891472\niteration: 11 of 20 ; time elapsed: 0:00:05.397866\niteration: 12 of 20 ; time elapsed: 0:00:05.921410\niteration: 13 of 20 ; time elapsed: 0:00:06.403243\niteration: 14 of 20 ; time elapsed: 0:00:06.892289\niteration: 15 of 20 ; time elapsed: 0:00:07.398698\niteration: 16 of 20 ; time elapsed: 0:00:07.919145\niteration: 17 of 20 ; time elapsed: 0:00:08.457090\niteration: 18 of 20 ; time elapsed: 0:00:08.924545\niteration: 19 of 20 ; time elapsed: 0:00:09.386031\niteration: 20 of 20 ; time elapsed: 0:00:09.847900\nCompleted! Time elapsed: 0:00:10.338798\n"
],
[
"accuracies['XGBoost'] = accs",
"_____no_output_____"
],
[
"# print accuracies of each method and corresponding id which yielded that accuracy (same row)\n#pandas.DataFrame.from_dict(accs)",
"_____no_output_____"
],
[
"# print some statistics for each method\nprint_accs_stats(accs)",
" phase breathing heartbeat combined br hb undercomplete sparse deep\nmin 0.000000 0.500000 0.0000 0.500000 0.000 0.000000 0.500000\nmax 1.000000 1.000000 1.0000 1.000000 1.000 1.000000 1.000000\nmean 0.558333 0.866667 0.5125 0.758333 0.525 0.716667 0.675000\nmedian 0.500000 1.000000 0.5000 0.791667 0.500 0.750000 0.666667\n"
]
],
[
[
"### Compare Accuracies",
"_____no_output_____"
],
[
"Save all accuracies to results csv file:",
"_____no_output_____"
]
],
[
[
"results_path = \"../../results/BvR/BvR-NC.csv\"\n\n# Make a dataframe from the accuracies\naccs_dataframe = pandas.DataFrame(accuracies).T\n# Save dataframe to file\naccs_dataframe.to_csv(results_path, mode='w')",
"_____no_output_____"
]
],
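[
[
"# Added round-trip check (assumes the CSV was just written by the cell above): reading the file\n# back gives one row per classifier; the per-representation accuracy lists come back as plain\n# strings, which is enough for a quick look at what was saved.\nloaded = pandas.read_csv(results_path, index_col=0)\nprint(loaded.index.tolist())",
"_____no_output_____"
]
],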
[
[
"Print min, max, mean, median for each clasifier/autoencoder combination:",
"_____no_output_____"
]
],
[
[
"for classifier in accuracies:\n print(\"-----------\", classifier + \":\", \"-----------\")\n accs = accuracies[classifier]\n print_accs_stats(accs)\n print(\"\\n\")",
"----------- simple_dense: -----------\n phase breathing heartbeat combined br hb undercomplete sparse deep\nmin 0.250000 0.5000 0.000000 0.500000 0.500000 0.250000 0.000000\nmax 1.000000 1.0000 1.000000 1.000000 1.000000 1.000000 1.000000\nmean 0.620833 0.7375 0.620833 0.800000 0.645833 0.733333 0.670833\nmedian 0.500000 0.6250 0.500000 0.916667 0.500000 0.750000 0.666667\n\n\n----------- LSTM: -----------\n phase breathing heartbeat combined br hb undercomplete sparse deep\nmin 0.500000 0.50 0.0000 0.500000 0.000000 0.25 0.166667\nmax 1.000000 1.00 1.0000 1.000000 1.000000 1.00 1.000000\nmean 0.683333 0.75 0.5375 0.720833 0.508333 0.55 0.654167\nmedian 0.625000 0.75 0.5000 0.708333 0.500000 0.50 0.500000\n\n\n----------- kNN: -----------\n phase breathing heartbeat combined br hb undercomplete sparse deep\nmin 0.5000 0.500000 0.000000 0.000000 0.166667 0.333333 0.000000\nmax 1.0000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000\nmean 0.7125 0.729167 0.633333 0.691667 0.616667 0.658333 0.620833\nmedian 0.7500 0.708333 0.500000 0.666667 0.500000 0.583333 0.500000\n\n\n----------- SVC: -----------\n phase breathing heartbeat combined br hb undercomplete sparse deep\nmin 0.333333 0.333333 0.000000 0.500000 0.000000 0.250000 0.333333\nmax 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000\nmean 0.675000 0.750000 0.583333 0.816667 0.545833 0.570833 0.608333\nmedian 0.500000 0.750000 0.500000 0.916667 0.500000 0.500000 0.500000\n\n\n----------- random_forest: -----------\n phase breathing heartbeat combined br hb undercomplete sparse deep\nmin 0.000000 0.500000 0.000000 0.500000 0.000000 0.000000 0.333333\nmax 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000\nmean 0.608333 0.845833 0.533333 0.845833 0.558333 0.720833 0.679167\nmedian 0.500000 1.000000 0.500000 1.000000 0.500000 0.750000 0.708333\n\n\n----------- naive_bayesian: -----------\n phase breathing heartbeat combined br hb undercomplete sparse deep\nmin 0.250000 0.500000 0.000000 0.500000 0.0000 0.000000 0.333333\nmax 1.000000 1.000000 1.000000 1.000000 1.0000 1.000000 1.000000\nmean 0.529167 0.770833 0.541667 0.783333 0.5875 0.670833 0.737500\nmedian 0.500000 0.750000 0.500000 0.791667 0.5000 0.666667 0.750000\n\n\n----------- XGBoost: -----------\n phase breathing heartbeat combined br hb undercomplete sparse deep\nmin 0.000000 0.500000 0.0000 0.500000 0.000 0.000000 0.500000\nmax 1.000000 1.000000 1.0000 1.000000 1.000 1.000000 1.000000\nmean 0.558333 0.866667 0.5125 0.758333 0.525 0.716667 0.675000\nmedian 0.500000 1.000000 0.5000 0.791667 0.500 0.750000 0.666667\n\n\n"
]
],
[
[
"Print all accuracies in table form:",
"_____no_output_____"
]
],
[
[
"for classifier in accuracies:\n print(classifier + \":\")\n# print(pandas.DataFrame.from_dict(accuracies[classifier]))\n # Using .to_string() gives nicer loooking results (doesn't split into new line)\n print(pandas.DataFrame.from_dict(accuracies[classifier]).to_string())\n print(\"\\n\")",
"simple_dense:\n phase breathing heartbeat combined br hb undercomplete sparse deep test id\n0 0.750000 1.00 0.750000 1.000000 0.750000 0.750000 0.500000 3n2f9\n1 0.500000 0.50 1.000000 0.500000 0.500000 0.500000 0.500000 2gu87\n2 1.000000 1.00 0.500000 0.750000 1.000000 1.000000 0.750000 iz2ps\n3 0.250000 1.00 0.250000 1.000000 0.750000 0.750000 0.750000 1mpau\n4 0.500000 0.50 0.500000 1.000000 0.500000 1.000000 0.500000 7dwjy\n5 0.500000 1.00 0.333333 1.000000 0.500000 0.666667 0.666667 7swyk\n6 0.500000 0.50 0.000000 1.000000 0.500000 0.500000 0.000000 94mnx\n7 0.500000 1.00 0.500000 0.750000 0.750000 0.750000 0.750000 bd47a\n8 1.000000 1.00 1.000000 1.000000 0.500000 0.500000 0.500000 c24ur\n9 0.500000 0.50 1.000000 0.500000 1.000000 1.000000 1.000000 dkhty\n10 1.000000 0.50 1.000000 1.000000 0.500000 1.000000 1.000000 e4gay\n11 0.666667 0.50 0.666667 0.666667 0.666667 0.500000 1.000000 ef5rq\n12 0.500000 1.00 0.500000 1.000000 0.500000 0.666667 0.500000 f1gjp\n13 0.500000 0.50 0.500000 0.500000 1.000000 1.000000 0.500000 hpbxa\n14 0.750000 0.50 0.500000 0.500000 0.500000 0.500000 0.500000 pmyfl\n15 0.500000 0.50 0.500000 0.500000 0.500000 1.000000 1.000000 r89k1\n16 0.500000 0.75 1.000000 1.000000 0.500000 0.500000 1.000000 tn4vl\n17 0.833333 1.00 0.833333 0.833333 0.833333 0.833333 0.833333 td5pr\n18 0.500000 0.50 0.250000 0.500000 0.500000 0.250000 0.500000 gyqu9\n19 0.666667 1.00 0.833333 1.000000 0.666667 1.000000 0.666667 fzchw\n\n\nLSTM:\n phase breathing heartbeat combined br hb undercomplete sparse deep test id\n0 1.000000 1.000000 0.500000 1.000000 0.000000 0.500000 0.500000 3n2f9\n1 0.500000 0.500000 0.000000 0.500000 0.500000 0.500000 0.500000 2gu87\n2 0.750000 1.000000 0.500000 0.750000 0.250000 0.500000 0.750000 iz2ps\n3 0.750000 1.000000 0.250000 0.500000 0.500000 0.750000 0.500000 1mpau\n4 0.500000 0.500000 0.500000 1.000000 0.000000 0.500000 1.000000 7dwjy\n5 0.500000 1.000000 0.500000 0.666667 0.500000 0.500000 0.500000 7swyk\n6 0.500000 1.000000 0.500000 0.500000 1.000000 0.500000 0.500000 94mnx\n7 0.750000 0.500000 0.750000 0.500000 0.500000 0.750000 0.750000 bd47a\n8 0.500000 1.000000 1.000000 1.000000 0.500000 0.500000 0.500000 c24ur\n9 1.000000 0.500000 0.500000 0.500000 1.000000 0.500000 1.000000 dkhty\n10 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 e4gay\n11 0.500000 0.500000 0.333333 0.500000 0.500000 0.333333 0.500000 ef5rq\n12 0.500000 1.000000 0.500000 0.833333 0.500000 0.500000 0.500000 f1gjp\n13 1.000000 0.500000 0.500000 1.000000 0.500000 0.500000 1.000000 hpbxa\n14 0.500000 0.500000 0.750000 0.750000 0.500000 0.500000 0.500000 pmyfl\n15 0.500000 0.500000 0.500000 0.500000 0.500000 0.500000 0.500000 r89k1\n16 0.750000 1.000000 0.250000 1.000000 0.750000 0.750000 1.000000 tn4vl\n17 0.833333 0.666667 1.000000 0.500000 0.166667 0.666667 0.166667 td5pr\n18 0.500000 0.500000 0.250000 0.750000 0.500000 0.250000 0.750000 gyqu9\n19 0.833333 0.833333 0.666667 0.666667 0.500000 0.500000 0.666667 fzchw\n\n\nkNN:\n phase breathing heartbeat combined br hb undercomplete sparse deep test id\n0 0.750000 1.000000 1.000000 1.000000 0.750000 0.750000 0.250000 3n2f9\n1 0.500000 0.500000 0.500000 0.000000 0.500000 0.500000 0.500000 2gu87\n2 0.750000 1.000000 0.500000 1.000000 1.000000 1.000000 1.000000 iz2ps\n3 0.750000 0.750000 0.250000 0.750000 0.500000 0.750000 1.000000 1mpau\n4 0.500000 1.000000 0.500000 0.500000 0.500000 0.500000 0.500000 7dwjy\n5 0.666667 1.000000 0.500000 1.000000 0.166667 0.333333 0.500000 7swyk\n6 0.500000 1.000000 0.500000 
0.500000 0.500000 0.500000 0.500000 94mnx\n7 0.750000 1.000000 0.750000 1.000000 0.750000 0.750000 0.750000 bd47a\n8 0.500000 0.500000 1.000000 0.500000 1.000000 0.500000 0.500000 c24ur\n9 1.000000 0.500000 0.500000 0.500000 1.000000 0.500000 1.000000 dkhty\n10 1.000000 0.500000 1.000000 1.000000 0.500000 1.000000 0.500000 e4gay\n11 0.833333 0.500000 0.833333 0.500000 0.833333 0.500000 0.666667 ef5rq\n12 0.500000 0.833333 0.500000 0.666667 0.333333 0.666667 0.333333 f1gjp\n13 1.000000 1.000000 1.000000 0.500000 0.500000 0.500000 1.000000 hpbxa\n14 0.500000 0.500000 0.250000 0.500000 0.500000 0.500000 0.000000 pmyfl\n15 1.000000 0.500000 0.000000 1.000000 0.500000 1.000000 1.000000 r89k1\n16 0.500000 0.500000 0.500000 0.750000 0.500000 0.500000 0.750000 tn4vl\n17 0.833333 0.833333 1.000000 1.000000 0.833333 0.833333 0.333333 td5pr\n18 0.750000 0.500000 0.750000 0.500000 0.500000 0.750000 0.500000 gyqu9\n19 0.666667 0.666667 0.833333 0.666667 0.666667 0.833333 0.833333 fzchw\n\n\nSVC:\n phase breathing heartbeat combined br hb undercomplete sparse deep test id\n0 0.750000 1.000000 0.500000 1.000000 0.000000 0.250000 0.500000 3n2f9\n1 0.500000 0.500000 0.500000 0.500000 0.500000 0.500000 0.500000 2gu87\n2 1.000000 1.000000 0.250000 0.750000 1.000000 0.500000 0.750000 iz2ps\n3 0.500000 0.750000 0.500000 0.750000 0.500000 0.500000 0.750000 1mpau\n4 0.500000 1.000000 0.000000 1.000000 0.500000 1.000000 0.500000 7dwjy\n5 0.500000 1.000000 0.666667 0.833333 0.166667 0.333333 0.500000 7swyk\n6 0.500000 1.000000 0.500000 1.000000 0.500000 0.500000 0.500000 94mnx\n7 0.750000 1.000000 0.500000 1.000000 0.750000 0.750000 0.750000 bd47a\n8 1.000000 0.500000 1.000000 1.000000 0.500000 0.500000 0.500000 c24ur\n9 1.000000 0.500000 0.500000 0.500000 1.000000 1.000000 1.000000 dkhty\n10 1.000000 0.500000 1.000000 1.000000 1.000000 1.000000 1.000000 e4gay\n11 0.333333 0.333333 0.833333 0.666667 0.666667 0.500000 0.500000 ef5rq\n12 0.500000 0.666667 0.333333 0.500000 0.500000 0.500000 0.500000 f1gjp\n13 0.500000 1.000000 0.500000 1.000000 0.500000 0.500000 0.500000 hpbxa\n14 0.500000 0.750000 0.250000 0.500000 0.500000 0.500000 0.500000 pmyfl\n15 1.000000 1.000000 1.000000 1.000000 0.500000 0.500000 0.500000 r89k1\n16 0.500000 0.500000 0.500000 1.000000 0.500000 0.500000 0.750000 tn4vl\n17 0.833333 0.833333 1.000000 1.000000 0.333333 0.833333 0.333333 td5pr\n18 0.500000 0.500000 0.500000 0.500000 0.500000 0.250000 0.500000 gyqu9\n19 0.833333 0.666667 0.833333 0.833333 0.500000 0.500000 0.833333 fzchw\n\n\nrandom_forest:\n phase breathing heartbeat combined br hb undercomplete sparse deep test id\n0 0.750000 1.000000 0.750000 1.000000 0.500000 0.750000 0.750000 3n2f9\n1 0.500000 0.500000 0.000000 0.500000 0.500000 1.000000 0.500000 2gu87\n2 1.000000 1.000000 0.500000 1.000000 0.750000 0.750000 1.000000 iz2ps\n3 0.250000 1.000000 0.250000 1.000000 1.000000 1.000000 0.750000 1mpau\n4 0.500000 0.500000 0.000000 0.500000 0.500000 1.000000 0.500000 7dwjy\n5 0.333333 1.000000 0.500000 1.000000 0.166667 0.333333 0.500000 7swyk\n6 0.500000 1.000000 0.500000 1.000000 0.500000 0.500000 0.500000 94mnx\n7 0.750000 0.750000 0.500000 0.750000 0.500000 1.000000 1.000000 bd47a\n8 0.500000 1.000000 1.000000 1.000000 0.500000 0.500000 0.500000 c24ur\n9 1.000000 1.000000 0.500000 1.000000 1.000000 1.000000 1.000000 dkhty\n10 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 e4gay\n11 0.333333 0.833333 0.333333 0.833333 0.500000 0.666667 0.833333 ef5rq\n12 0.833333 0.833333 0.500000 0.833333 0.333333 0.666667 0.500000 
f1gjp\n13 0.000000 1.000000 0.500000 1.000000 1.000000 0.500000 0.500000 hpbxa\n14 0.500000 0.500000 0.500000 0.500000 0.750000 0.750000 0.750000 pmyfl\n15 1.000000 0.500000 0.500000 0.500000 0.500000 1.000000 0.500000 r89k1\n16 0.750000 1.000000 0.500000 1.000000 0.500000 0.500000 0.750000 tn4vl\n17 0.500000 1.000000 0.833333 1.000000 0.166667 0.833333 0.333333 td5pr\n18 0.500000 0.500000 1.000000 0.500000 0.000000 0.000000 0.750000 gyqu9\n19 0.666667 1.000000 0.500000 1.000000 0.500000 0.666667 0.666667 fzchw\n\n\nnaive_bayesian:\n phase breathing heartbeat combined br hb undercomplete sparse deep test id\n0 0.500000 1.000000 0.500000 1.000000 0.500000 0.750000 0.500000 3n2f9\n1 0.500000 0.500000 0.500000 0.500000 0.500000 1.000000 1.000000 2gu87\n2 0.500000 0.750000 0.500000 0.750000 0.500000 0.750000 0.750000 iz2ps\n3 0.250000 0.750000 0.000000 0.750000 0.750000 0.750000 0.750000 1mpau\n4 0.500000 1.000000 0.000000 0.500000 0.500000 1.000000 1.000000 7dwjy\n5 0.500000 1.000000 0.500000 1.000000 0.500000 0.666667 0.666667 7swyk\n6 1.000000 1.000000 0.500000 1.000000 0.000000 0.000000 1.000000 94mnx\n7 0.500000 1.000000 0.750000 1.000000 1.000000 1.000000 0.750000 bd47a\n8 0.500000 0.500000 1.000000 1.000000 0.500000 0.500000 0.500000 c24ur\n9 1.000000 0.500000 0.500000 0.500000 1.000000 1.000000 1.000000 dkhty\n10 0.500000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 e4gay\n11 0.666667 0.666667 0.500000 0.666667 0.500000 0.666667 0.500000 ef5rq\n12 0.500000 0.833333 0.500000 0.833333 0.500000 0.500000 0.333333 f1gjp\n13 0.500000 1.000000 0.500000 1.000000 1.000000 0.500000 1.000000 hpbxa\n14 0.500000 0.500000 0.500000 0.500000 0.500000 0.500000 0.500000 pmyfl\n15 0.500000 0.500000 0.500000 0.500000 0.500000 0.500000 1.000000 r89k1\n16 0.500000 0.750000 0.500000 1.000000 0.500000 0.500000 0.750000 tn4vl\n17 0.333333 1.000000 1.000000 1.000000 0.333333 0.833333 0.333333 td5pr\n18 0.500000 0.500000 0.250000 0.500000 0.500000 0.500000 0.750000 gyqu9\n19 0.333333 0.666667 0.833333 0.666667 0.666667 0.500000 0.666667 fzchw\n\n\nXGBoost:\n phase breathing heartbeat combined br hb undercomplete sparse deep test id\n0 0.500000 1.000000 0.000000 1.000000 0.500000 0.750000 0.750000 3n2f9\n1 0.500000 1.000000 0.000000 0.500000 0.500000 1.000000 0.500000 2gu87\n2 1.000000 1.000000 0.500000 0.750000 1.000000 1.000000 1.000000 iz2ps\n3 0.500000 1.000000 0.250000 1.000000 0.750000 0.500000 0.750000 1mpau\n4 0.500000 1.000000 0.500000 0.500000 0.500000 0.500000 0.500000 7dwjy\n5 0.666667 1.000000 0.500000 1.000000 0.500000 0.500000 0.666667 7swyk\n6 0.000000 1.000000 0.500000 1.000000 0.000000 0.500000 0.500000 94mnx\n7 0.500000 0.750000 0.500000 0.500000 0.500000 0.750000 0.500000 bd47a\n8 0.500000 1.000000 0.500000 1.000000 0.000000 1.000000 0.500000 c24ur\n9 1.000000 1.000000 0.500000 0.500000 1.000000 1.000000 1.000000 dkhty\n10 0.500000 0.500000 1.000000 0.500000 0.000000 1.000000 1.000000 e4gay\n11 0.500000 0.833333 0.333333 0.833333 0.666667 0.833333 0.833333 ef5rq\n12 0.500000 1.000000 0.500000 0.833333 0.500000 0.500000 0.666667 f1gjp\n13 0.000000 0.500000 1.000000 0.500000 1.000000 0.500000 0.500000 hpbxa\n14 1.000000 0.750000 0.500000 0.750000 0.750000 0.750000 0.750000 pmyfl\n15 0.500000 0.500000 0.500000 0.500000 0.500000 1.000000 0.500000 r89k1\n16 0.500000 1.000000 0.500000 1.000000 0.750000 0.750000 0.500000 tn4vl\n17 0.666667 1.000000 0.500000 1.000000 0.166667 0.833333 0.500000 td5pr\n18 0.500000 0.500000 1.000000 0.500000 0.250000 0.000000 0.750000 gyqu9\n19 0.833333 1.000000 
0.666667 1.000000 0.666667 0.666667 0.833333 fzchw\n\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9e6deafd3b9677760a8f7afd0f7a445bde3320 | 9,959 | ipynb | Jupyter Notebook | 02_fcn.ipynb | arjunsbalaji/oct | f21e11f6dda952cd914444512ddadb4141757951 | [
"Apache-2.0"
] | null | null | null | 02_fcn.ipynb | arjunsbalaji/oct | f21e11f6dda952cd914444512ddadb4141757951 | [
"Apache-2.0"
] | 6 | 2021-06-08T22:35:57.000Z | 2022-02-26T10:08:10.000Z | 02_fcn.ipynb | arjunsbalaji/oct | f21e11f6dda952cd914444512ddadb4141757951 | [
"Apache-2.0"
] | null | null | null | 24.9599 | 126 | 0.530575 | [
[
[
"# default_exp runs",
"_____no_output_____"
],
[
"import sys",
"_____no_output_____"
],
[
"sys.path.append('/workspace/oct')",
"_____no_output_____"
],
[
"from oct.startup import *\nfrom model import CapsNet\nimport numpy as np\nimport mlflow\nfrom fastai.vision import *\nimport mlflow.pytorch as MLPY\nfrom fastai.utils.mem import gpu_mem_get_all",
"_____no_output_____"
],
[
"gpu_mem_get_all()",
"_____no_output_____"
]
],
[
[
"### Configuration Setup",
"_____no_output_____"
]
],
[
[
"name = 'FCN'",
"_____no_output_____"
],
[
"config_dict = loadConfigJSONToDict('configCAPS_APPresnet18.json')\nconfig_dict['LEARNER']['lr']= 0.001\nconfig_dict['LEARNER']['bs'] = 4\nconfig_dict['LEARNER']['epochs'] = 30\nconfig_dict['LEARNER']['runsave_dir'] = '/workspace/oct_ca_seg/runsaves/'\nconfig_dict['MODEL'] = 'FASTAI FCN RESNET18 BACKBONE NO PRETRAIN'\nconfig = DeepConfig(config_dict)",
"_____no_output_____"
],
[
"config_dict",
"_____no_output_____"
],
[
"metrics = [sens, spec, dice, acc]",
"_____no_output_____"
],
[
"def saveConfigRun(dictiontary, run_dir, name):\n with open(run_dir/name, 'w') as file:\n json.dump(dictiontary, file)",
"_____no_output_____"
]
],
[
[
"## Dataset",
"_____no_output_____"
]
],
[
[
"cocodata_path = Path('/workspace/oct_ca_seg/COCOdata/')\ntrain_path = cocodata_path/'train/images'\nvalid_path = cocodata_path/'valid/images'\ntest_path = cocodata_path/'test/images'",
"_____no_output_____"
]
],
[
[
"### For complete dataset",
"_____no_output_____"
]
],
[
[
"fn_get_y = lambda image_name: Path(image_name).parent.parent/('labels/'+Path(image_name).name)\ncodes = np.loadtxt(cocodata_path/'codes.txt', dtype=str)\ntfms = get_transforms()\nsrc = (SegCustomItemList\n .from_folder(cocodata_path, recurse=True, extensions='.jpg')\n .filter_by_func(lambda fname: Path(fname).parent.name == 'images', )\n .split_by_folder('train', 'valid')\n .label_from_func(fn_get_y, classes=codes))\nsrc.transform(tfms, tfm_y=True, size=config.LEARNER.img_size)\ndata = src.databunch(cocodata_path,\n bs=config.LEARNER.bs,\n val_bs=2*config.LEARNER.bs,\n num_workers = config.LEARNER.num_workers)\nstats = [torch.tensor([0.2190, 0.1984, 0.1928]), torch.tensor([0.0645, 0.0473, 0.0434])]\ndata.normalize(stats);\ndata.c_in, data.c_out = 3, 2",
"_____no_output_____"
]
],
[
[
"### For converting Validation set into a mini set to experiment on",
"_____no_output_____"
]
],
[
[
"fn_get_y = lambda image_name: Path(image_name).parent.parent/('labels/'+Path(image_name).name)\ncodes = np.loadtxt(cocodata_path/'codes.txt', dtype=str)\ntfms = get_transforms()\nsrc = (SegCustomItemList\n .from_folder(test_path, recurse=True, extensions='.jpg')\n .filter_by_func(lambda fname: Path(fname).parent.name == 'images', )\n .split_by_rand_pct(0.9)\n .label_from_func(fn_get_y, classes=codes))\nsrc.transform(tfms, tfm_y=True, size =config.LEARNER.img_size)\ndata = src.databunch(test_path,\n bs=config.LEARNER.bs,\n val_bs=2*config.LEARNER.bs,\n num_workers = config.LEARNER.num_workers)\nstats = [torch.tensor([0.2190, 0.1984, 0.1928]), torch.tensor([0.0645, 0.0473, 0.0434])]\ndata.normalize(stats);\ndata.c_in, data.c_out = 3, 2",
"_____no_output_____"
]
],
[
[
"### Fastai FCN",
"_____no_output_____"
]
],
[
[
"import torchvision",
"_____no_output_____"
]
],
[
[
"!jupyter labextension install @jupyter-widgets/jupyterlab-manager\n\n!pip install --upgrade ipywidgets\n\n!jupyter nbextension enable --py widgetsnbextension\n\n!ipywidgets",
"_____no_output_____"
]
],
[
[
"dl3 = torchvision.models.segmentation.deeplabv3_resnet50(pretrained=False, num_classes=2, progress=True).cuda()",
"_____no_output_____"
],
[
"class DL3(torch.nn.Module):\n def __init__(self, model_base):\n super(DL3, self).__init__()\n self.model = model_base\n self.name = 'DEEPLAB3'\n def forward(self, x):\n x = self.model(x)['out']\n return x",
"_____no_output_____"
],
[
"deeplab3 = DL3(dl3)",
"_____no_output_____"
],
[
"fcn =torchvision.models.segmentation.fcn_resnet50(pretrained=False, num_classes=2, progress=False).cuda()",
"_____no_output_____"
],
[
"run_dir = config.LEARNER.runsave_dir+'/'+name\n#os.mkdir(run_dir)\nexp_name = 'fastai_fcn'\nmlflow_CB = partial(MLFlowTracker,\n exp_name=exp_name,\n uri='file:/workspace/oct_ca_seg/runsaves/fastai_experiments/mlruns/',\n params=config.config_dict,\n log_model=True,\n nb_path=\"/workspace/oct_ca_seg/oct/02_caps.ipynb\")\nlearner = Learner(data = data,\n model=deeplab3,\n metrics = metrics,\n callback_fns=mlflow_CB)",
"_____no_output_____"
],
[
"test = fcn(data.one_batch()[0].cuda())",
"_____no_output_____"
],
[
"test['out'].size()",
"_____no_output_____"
],
[
"with mlflow.start_run():\n learner.fit_one_cycle(1, slice(config.LEARNER.lr), pct_start=0.9)\n MLPY.save_model(learner.model, run_dir+'/model')\n save_all_results(learner, run_dir, exp_name)\n saveConfigRun(config.config_dict, run_dir=Path(run_dir), name = 'configUNET_APPresnet18_bs16_epochs15_lr0.001.json')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9e870957c2aeb6ca92ae924c964e5d130227d7 | 158,759 | ipynb | Jupyter Notebook | Coursera-ML-AndrewNg-Notes-master/code/ex5-bias vs variance/.ipynb_checkpoints/ML-Exercise5-checkpoint.ipynb | yang-233/ML | 4e78ea2dea83db68fcd860dabe7562b066b7f9b0 | [
"Apache-2.0"
] | null | null | null | Coursera-ML-AndrewNg-Notes-master/code/ex5-bias vs variance/.ipynb_checkpoints/ML-Exercise5-checkpoint.ipynb | yang-233/ML | 4e78ea2dea83db68fcd860dabe7562b066b7f9b0 | [
"Apache-2.0"
] | null | null | null | Coursera-ML-AndrewNg-Notes-master/code/ex5-bias vs variance/.ipynb_checkpoints/ML-Exercise5-checkpoint.ipynb | yang-233/ML | 4e78ea2dea83db68fcd860dabe7562b066b7f9b0 | [
"Apache-2.0"
] | null | null | null | 167.644139 | 27,270 | 0.883503 | [
[
[
"# 机器学习练习 5 - 偏差和方差",
"_____no_output_____"
],
[
"本章代码涵盖了基于Python的解决方案,用于Coursera机器学习课程的第五个编程练习。 请参考[练习文本](ex5.pdf)了解详细的说明和公式。\n\n代码修改并注释:黄海广,[email protected]",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport scipy.io as sio\nimport scipy.optimize as opt\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"_____no_output_____"
],
[
"def load_data():\n \"\"\"for ex5\n d['X'] shape = (12, 1)\n pandas has trouble taking this 2d ndarray to construct a dataframe, so I ravel\n the results\n \"\"\"\n d = sio.loadmat('ex5data1.mat')\n return map(np.ravel, [d['X'], d['y'], d['Xval'], d['yval'], d['Xtest'], d['ytest']])",
"_____no_output_____"
],
[
"X, y, Xval, yval, Xtest, ytest = load_data()",
"_____no_output_____"
],
[
"df = pd.DataFrame({'water_level':X, 'flow':y})\n\nsns.lmplot('water_level', 'flow', data=df, fit_reg=False, size=7)\nplt.show()",
"_____no_output_____"
],
[
"X, Xval, Xtest = [np.insert(x.reshape(x.shape[0], 1), 0, np.ones(x.shape[0]), axis=1) for x in (X, Xval, Xtest)]",
"_____no_output_____"
]
],
[
[
"# 代价函数\n<img style=\"float: left;\" src=\"../img/linear_cost.png\">",
"_____no_output_____"
]
],
[
[
"def cost(theta, X, y):\n \"\"\"\n X: R(m*n), m records, n features\n y: R(m)\n theta : R(n), linear regression parameters\n \"\"\"\n m = X.shape[0]\n\n inner = X @ theta - y # R(m*1)\n\n # 1*m @ m*1 = 1*1 in matrix multiplication\n # but you know numpy didn't do transpose in 1d array, so here is just a\n # vector inner product to itselves\n square_sum = inner.T @ inner\n cost = square_sum / (2 * m)\n\n return cost",
"_____no_output_____"
],
[
"theta = np.ones(X.shape[1])\ncost(theta, X, y)",
"_____no_output_____"
]
],
[
[
"# 梯度\n<img style=\"float: left;\" src=\"../img/linear_gradient.png\">",
"_____no_output_____"
]
],
[
[
"def gradient(theta, X, y):\n m = X.shape[0]\n\n inner = X.T @ (X @ theta - y) # (m,n).T @ (m, 1) -> (n, 1)\n\n return inner / m",
"_____no_output_____"
],
[
"gradient(theta, X, y)",
"_____no_output_____"
]
],
[
[
"# 正则化梯度\n<img style=\"float: left;\" src=\"../img/linear_reg_gradient.png\">",
"_____no_output_____"
]
],
[
[
"def regularized_gradient(theta, X, y, l=1):\n m = X.shape[0]\n\n regularized_term = theta.copy() # same shape as theta\n regularized_term[0] = 0 # don't regularize intercept theta\n\n regularized_term = (l / m) * regularized_term\n\n return gradient(theta, X, y) + regularized_term",
"_____no_output_____"
],
[
"regularized_gradient(theta, X, y)",
"_____no_output_____"
]
],
[
[
"# 拟合数据\n> 正则化项 $\\lambda=0$",
"_____no_output_____"
]
],
[
[
"def linear_regression_np(X, y, l=1):\n \"\"\"linear regression\n args:\n X: feature matrix, (m, n+1) # with incercept x0=1\n y: target vector, (m, )\n l: lambda constant for regularization\n\n return: trained parameters\n \"\"\"\n # init theta\n theta = np.ones(X.shape[1])\n\n # train it\n res = opt.minimize(fun=regularized_cost,\n x0=theta,\n args=(X, y, l),\n method='TNC',\n jac=regularized_gradient,\n options={'disp': True})\n return res\n",
"_____no_output_____"
],
[
"def regularized_cost(theta, X, y, l=1):\n m = X.shape[0]\n\n regularized_term = (l / (2 * m)) * np.power(theta[1:], 2).sum()\n\n return cost(theta, X, y) + regularized_term",
"_____no_output_____"
],
[
"theta = np.ones(X.shape[0])\n\nfinal_theta = linear_regression_np(X, y, l=0).get('x')",
"_____no_output_____"
],
[
"b = final_theta[0] # intercept\nm = final_theta[1] # slope\n\nplt.scatter(X[:,1], y, label=\"Training data\")\nplt.plot(X[:, 1], X[:, 1]*m + b, label=\"Prediction\")\nplt.legend(loc=2)\nplt.show()",
"_____no_output_____"
],
[
"training_cost, cv_cost = [], []",
"_____no_output_____"
]
],
[
[
"1.使用训练集的子集来拟合应模型\n\n2.在计算训练代价和交叉验证代价时,没有用正则化\n\n3.记住使用相同的训练集子集来计算训练代价",
"_____no_output_____"
]
],
[
[
"m = X.shape[0]\nfor i in range(1, m+1):\n# print('i={}'.format(i))\n res = linear_regression_np(X[:i, :], y[:i], l=0)\n \n tc = regularized_cost(res.x, X[:i, :], y[:i], l=0)\n cv = regularized_cost(res.x, Xval, yval, l=0)\n# print('tc={}, cv={}'.format(tc, cv))\n \n training_cost.append(tc)\n cv_cost.append(cv)",
"_____no_output_____"
],
[
"plt.plot(np.arange(1, m+1), training_cost, label='training cost')\nplt.plot(np.arange(1, m+1), cv_cost, label='cv cost')\nplt.legend(loc=1)\nplt.show()",
"_____no_output_____"
]
],
[
[
"这个模型拟合不太好, **欠拟合了**",
"_____no_output_____"
],
[
"# 创建多项式特征",
"_____no_output_____"
]
],
[
[
"def prepare_poly_data(*args, power):\n \"\"\"\n args: keep feeding in X, Xval, or Xtest\n will return in the same order\n \"\"\"\n def prepare(x):\n # expand feature\n df = poly_features(x, power=power)\n\n # normalization\n ndarr = normalize_feature(df).as_matrix()\n\n # add intercept term\n return np.insert(ndarr, 0, np.ones(ndarr.shape[0]), axis=1)\n\n return [prepare(x) for x in args]",
"_____no_output_____"
],
[
"def poly_features(x, power, as_ndarray=False):\n data = {'f{}'.format(i): np.power(x, i) for i in range(1, power + 1)}\n df = pd.DataFrame(data)\n\n return df.as_matrix() if as_ndarray else df\n",
"_____no_output_____"
],
[
"X, y, Xval, yval, Xtest, ytest = load_data()",
"_____no_output_____"
],
[
"poly_features(X, power=3)",
"_____no_output_____"
]
],
[
[
"# 准备多项式回归数据\n1. 扩展特征到 8阶,或者你需要的阶数\n2. 使用 **归一化** 来合并 $x^n$ \n3. don't forget intercept term",
"_____no_output_____"
]
],
[
[
"def normalize_feature(df):\n \"\"\"Applies function along input axis(default 0) of DataFrame.\"\"\"\n return df.apply(lambda column: (column - column.mean()) / column.std())",
"_____no_output_____"
],
[
"X_poly, Xval_poly, Xtest_poly= prepare_poly_data(X, Xval, Xtest, power=8)\nX_poly[:3, :]",
"_____no_output_____"
]
],
[
[
"# 画出学习曲线\n> 首先,我们没有使用正则化,所以 $\\lambda=0$",
"_____no_output_____"
]
],
[
[
"def plot_learning_curve(X, y, Xval, yval, l=0):\n training_cost, cv_cost = [], []\n m = X.shape[0]\n\n for i in range(1, m + 1):\n # regularization applies here for fitting parameters\n res = linear_regression_np(X[:i, :], y[:i], l=l)\n\n # remember, when you compute the cost here, you are computing\n # non-regularized cost. Regularization is used to fit parameters only\n tc = cost(res.x, X[:i, :], y[:i])\n cv = cost(res.x, Xval, yval)\n\n training_cost.append(tc)\n cv_cost.append(cv)\n\n plt.plot(np.arange(1, m + 1), training_cost, label='training cost')\n plt.plot(np.arange(1, m + 1), cv_cost, label='cv cost')\n plt.legend(loc=1)\n",
"_____no_output_____"
],
[
"plot_learning_curve(X_poly, y, Xval_poly, yval, l=0)\nplt.show()",
"_____no_output_____"
]
],
[
[
"你可以看到训练的代价太低了,不真实. 这是 **过拟合**了",
"_____no_output_____"
],
[
"# try $\\lambda=1$",
"_____no_output_____"
]
],
[
[
"plot_learning_curve(X_poly, y, Xval_poly, yval, l=1)\nplt.show()",
"_____no_output_____"
]
],
[
[
"\n训练代价增加了些,不再是0了。\n也就是说我们减轻**过拟合**",
"_____no_output_____"
],
[
"# try $\\lambda=100$",
"_____no_output_____"
]
],
[
[
"plot_learning_curve(X_poly, y, Xval_poly, yval, l=100)\nplt.show()",
"_____no_output_____"
]
],
[
[
"太多正则化了. \n变成 **欠拟合**状态",
"_____no_output_____"
],
[
"# 找到最佳的 $\\lambda$",
"_____no_output_____"
]
],
[
[
"l_candidate = [0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10]\ntraining_cost, cv_cost = [], []",
"_____no_output_____"
],
[
"for l in l_candidate:\n res = linear_regression_np(X_poly, y, l)\n \n tc = cost(res.x, X_poly, y)\n cv = cost(res.x, Xval_poly, yval)\n \n training_cost.append(tc)\n cv_cost.append(cv)",
"_____no_output_____"
],
[
"plt.plot(l_candidate, training_cost, label='training')\nplt.plot(l_candidate, cv_cost, label='cross validation')\nplt.legend(loc=2)\n\nplt.xlabel('lambda')\n\nplt.ylabel('cost')\nplt.show()",
"_____no_output_____"
],
[
"# best cv I got from all those candidates\nl_candidate[np.argmin(cv_cost)]",
"_____no_output_____"
],
[
"# use test data to compute the cost\nfor l in l_candidate:\n theta = linear_regression_np(X_poly, y, l).x\n print('test cost(l={}) = {}'.format(l, cost(theta, Xtest_poly, ytest)))",
"test cost(l=0) = 9.799399498688892\ntest cost(l=0.001) = 11.054987989655938\ntest cost(l=0.003) = 11.249198861537238\ntest cost(l=0.01) = 10.879605199670008\ntest cost(l=0.03) = 10.022734920552129\ntest cost(l=0.1) = 8.632060998872074\ntest cost(l=0.3) = 7.336602384055533\ntest cost(l=1) = 7.46630349664086\ntest cost(l=3) = 11.643928200535115\ntest cost(l=10) = 27.715080216719304\n"
]
],
[
[
"调参后, $\\lambda = 0.3$ 是最优选择,这个时候测试代价最小",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ec9ea25295339aba3392f5f48e5a24ffd948a302 | 8,563 | ipynb | Jupyter Notebook | notebooks/3-07.ipynb | yohata1013/sample-code-2nd | f833bfd1d8422b824d33a12b97b74e5190c54be5 | [
"MIT"
] | 12 | 2020-08-06T12:05:42.000Z | 2021-12-05T03:24:24.000Z | notebooks/3-07.ipynb | yohata1013/sample-code-2nd | f833bfd1d8422b824d33a12b97b74e5190c54be5 | [
"MIT"
] | 8 | 2020-09-15T07:02:25.000Z | 2021-12-13T20:46:06.000Z | notebooks/3-07.ipynb | yohata1013/sample-code-2nd | f833bfd1d8422b824d33a12b97b74e5190c54be5 | [
"MIT"
] | 7 | 2021-02-05T09:39:07.000Z | 2022-03-01T13:27:44.000Z | 28.83165 | 101 | 0.372533 | [
[
[
"## 3.7: テキストデータの処理",
"_____no_output_____"
]
],
[
[
"# リスト 3.7.1 anime_master.csv の読み込み\n\nfrom urllib.parse import urljoin\nimport pandas as pd\n\nbase_url = \"https://raw.githubusercontent.com/practical-jupyter/sample-data/master/anime/\"\nanime_master_csv = urljoin(base_url, \"anime_master.csv\")\ndf = pd.read_csv(anime_master_csv)\ndf.head()",
"_____no_output_____"
],
[
"# リスト 3.7.2 テキストを小文字に変換\n\ndf[\"name\"].str.lower().head()",
"_____no_output_____"
],
[
"# リスト 3.7.3 テキストの分割\n\ndf[\"genre\"].str.split().head()",
"_____no_output_____"
],
[
"# リスト 3.7.4 引数 expand に True を渡してテキストを分割\n\ndf[\"genre\"].str.split(expand=True).iloc[:3, :5]",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ec9eaf6d44ecc76ff5eda426bb08bd5b41f7505d | 381,884 | ipynb | Jupyter Notebook | section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb | BrandaoEid/deploying-machine-learning-models | c672a4cf214198257dcaf5ee4d810be4bd94c369 | [
"BSD-3-Clause"
] | null | null | null | section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb | BrandaoEid/deploying-machine-learning-models | c672a4cf214198257dcaf5ee4d810be4bd94c369 | [
"BSD-3-Clause"
] | null | null | null | section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb | BrandaoEid/deploying-machine-learning-models | c672a4cf214198257dcaf5ee4d810be4bd94c369 | [
"BSD-3-Clause"
] | null | null | null | 121.851946 | 11,844 | 0.825154 | [
[
[
"# Machine Learning Pipeline - Feature Engineering\n\nIn the following notebooks, we will go through the implementation of each one of the steps in the Machine Learning Pipeline. \n\nWe will discuss:\n\n1. Data Analysis\n2. **Feature Engineering**\n3. Feature Selection\n4. Model Training\n5. Obtaining Predictions / Scoring\n\n\nWe will use the house price dataset available on [Kaggle.com](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data). See below for more details.\n\n===================================================================================================\n\n## Predicting Sale Price of Houses\n\nThe aim of the project is to build a machine learning model to predict the sale price of homes based on different explanatory variables describing aspects of residential houses.\n\n\n### Why is this important? \n\nPredicting house prices is useful to identify fruitful investments, or to determine whether the price advertised for a house is over or under-estimated.\n\n\n### What is the objective of the machine learning model?\n\nWe aim to minimise the difference between the real price and the price estimated by our model. We will evaluate model performance with the:\n\n1. mean squared error (mse)\n2. root squared of the mean squared error (rmse)\n3. r-squared (r2).\n\n\n### How do I download the dataset?\n\n- Visit the [Kaggle Website](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data).\n\n- Remember to **log in**\n\n- Scroll down to the bottom of the page, and click on the link **'train.csv'**, and then click the 'download' blue button towards the right of the screen, to download the dataset.\n\n- The download the file called **'test.csv'** and save it in the directory with the notebooks.\n\n\n**Note the following:**\n\n- You need to be logged in to Kaggle in order to download the datasets.\n- You need to accept the terms and conditions of the competition to download the dataset\n- If you save the file to the directory with the jupyter notebook, then you can run the code as it is written here.",
"_____no_output_____"
],
[
"# Reproducibility: Setting the seed\n\nWith the aim to ensure reproducibility between runs of the same notebook, but also between the research and production environment, for each step that includes some element of randomness, it is extremely important that we **set the seed**.",
"_____no_output_____"
]
],
[
[
"# to handle datasets\nimport pandas as pd\nimport numpy as np\n\n# for plotting\nimport matplotlib.pyplot as plt\n\n# for the yeo-johnson transformation\nimport scipy.stats as stats\n\n# to divide train and test set\nfrom sklearn.model_selection import train_test_split\n\n# feature scaling\nfrom sklearn.preprocessing import MinMaxScaler\n\n# to save the trained scaler class\nimport joblib\n\n# to visualise al the columns in the dataframe\npd.pandas.set_option('display.max_columns', None)",
"_____no_output_____"
],
[
"# load dataset\ndata = pd.read_csv('.\\data\\house-pricing-train.csv')\n\n# rows and columns of the data\nprint(data.shape)\n\n# visualise the dataset\ndata.head()",
"(1460, 81)\n"
]
],
[
[
"# Separate dataset into train and test\n\nIt is important to separate our data intro training and testing set. \n\nWhen we engineer features, some techniques learn parameters from data. It is important to learn these parameters only from the train set. This is to avoid over-fitting.\n\nOur feature engineering techniques will learn:\n\n- mean\n- mode\n- exponents for the yeo-johnson\n- category frequency\n- and category to number mappings\n\nfrom the train set.\n\n**Separating the data into train and test involves randomness, therefore, we need to set the seed.**",
"_____no_output_____"
]
],
[
[
"# Let's separate into train and test set\n# Remember to set the seed (random_state for this sklearn function)\n\nX_train, X_test, y_train, y_test = train_test_split(\n data.drop(['Id', 'SalePrice'], axis=1), # predictive variables\n data['SalePrice'], # target\n test_size=0.1, # portion of dataset to allocate to test set\n random_state=0, # we are setting the seed here\n)\n\nX_train.shape, X_test.shape",
"_____no_output_____"
]
],
[
[
"# Feature Engineering\n\nIn the following cells, we will engineer the variables of the House Price Dataset so that we tackle:\n\n1. Missing values\n2. Temporal variables\n3. Non-Gaussian distributed variables\n4. Categorical variables: remove rare labels\n5. Categorical variables: convert strings to numbers\n5. Put the variables in a similar scale",
"_____no_output_____"
],
[
"## Target\n\nWe apply the logarithm",
"_____no_output_____"
]
],
[
[
"y_train = np.log(y_train)\ny_test = np.log(y_test)",
"_____no_output_____"
]
],
[
[
"## Missing values\n\n### Categorical variables\n\nWe will replace missing values with the string \"missing\" in those variables with a lot of missing data. \n\nAlternatively, we will replace missing data with the most frequent category in those variables that contain fewer observations without values. \n\nThis is common practice.",
"_____no_output_____"
]
],
[
[
"# let's identify the categorical variables\n# we will capture those of type object\n\ncat_vars = [var for var in data.columns if data[var].dtype == 'O']\n\n# MSSubClass is also categorical by definition, despite its numeric values\n# (you can find the definitions of the variables in the data_description.txt\n# file available on Kaggle, in the same website where you downloaded the data)\n\n# lets add MSSubClass to the list of categorical variables\ncat_vars = cat_vars + ['MSSubClass']\n\n# cast all variables as categorical\nX_train[cat_vars] = X_train[cat_vars].astype('O')\nX_test[cat_vars] = X_test[cat_vars].astype('O')\n\n# number of categorical variables\nlen(cat_vars)",
"_____no_output_____"
],
[
"# make a list of the categorical variables that contain missing values\n\ncat_vars_with_na = [\n var for var in cat_vars\n if X_train[var].isnull().sum() > 0\n]\n\n# print percentage of missing values per variable\nX_train[cat_vars_with_na ].isnull().mean().sort_values(ascending=False)",
"_____no_output_____"
],
[
"# variables to impute with the string missing\nwith_string_missing = [\n var for var in cat_vars_with_na if X_train[var].isnull().mean() > 0.1]\n\n# variables to impute with the most frequent category\nwith_frequent_category = [\n var for var in cat_vars_with_na if X_train[var].isnull().mean() < 0.1]",
"_____no_output_____"
],
[
"with_string_missing",
"_____no_output_____"
],
[
"# replace missing values with new label: \"Missing\"\n\nX_train[with_string_missing] = X_train[with_string_missing].fillna('Missing')\nX_test[with_string_missing] = X_test[with_string_missing].fillna('Missing')",
"_____no_output_____"
],
[
"for var in with_frequent_category:\n \n # there can be more than 1 mode in a variable\n # we take the first one with [0] \n mode = X_train[var].mode()[0]\n \n print(var, mode)\n \n X_train[var].fillna(mode, inplace=True)\n X_test[var].fillna(mode, inplace=True)",
"MasVnrType None\nBsmtQual TA\nBsmtCond TA\nBsmtExposure No\nBsmtFinType1 Unf\nBsmtFinType2 Unf\nElectrical SBrkr\nGarageType Attchd\nGarageFinish Unf\nGarageQual TA\nGarageCond TA\n"
],
[
"# check that we have no missing information in the engineered variables\n\nX_train[cat_vars_with_na].isnull().sum()",
"_____no_output_____"
],
[
"# check that test set does not contain null values in the engineered variables\n\n[var for var in cat_vars_with_na if X_test[var].isnull().sum() > 0]",
"_____no_output_____"
]
],
[
[
"### Numerical variables\n\nTo engineer missing values in numerical variables, we will:\n\n- add a binary missing indicator variable\n- and then replace the missing values in the original variable with the mean",
"_____no_output_____"
]
],
[
[
"# now let's identify the numerical variables\n\nnum_vars = [\n var for var in X_train.columns if var not in cat_vars and var != 'SalePrice'\n]\n\n# number of numerical variables\nlen(num_vars)",
"_____no_output_____"
],
[
"# make a list with the numerical variables that contain missing values\nvars_with_na = [\n var for var in num_vars\n if X_train[var].isnull().sum() > 0\n]\n\n# print percentage of missing values per variable\nX_train[vars_with_na].isnull().mean()",
"_____no_output_____"
],
[
"# replace missing values as we described above\n\nfor var in vars_with_na:\n\n # calculate the mean using the train set\n mean_val = X_train[var].mean()\n \n print(var, mean_val)\n\n # add binary missing indicator (in train and test)\n X_train[var + '_na'] = np.where(X_train[var].isnull(), 1, 0)\n X_test[var + '_na'] = np.where(X_test[var].isnull(), 1, 0)\n\n # replace missing values by the mean\n # (in train and test)\n X_train[var].fillna(mean_val, inplace=True)\n X_test[var].fillna(mean_val, inplace=True)\n\n# check that we have no more missing values in the engineered variables\nX_train[vars_with_na].isnull().sum()",
"LotFrontage 69.87974098057354\nMasVnrArea 103.7974006116208\nGarageYrBlt 1978.2959677419356\n"
],
[
"# check that test set does not contain null values in the engineered variables\n\n[var for var in vars_with_na if X_test[var].isnull().sum() > 0]",
"_____no_output_____"
],
[
"# check the binary missing indicator variables\n\nX_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()",
"_____no_output_____"
]
],
[
[
"## Temporal variables\n\n### Capture elapsed time\n\nWe learned in the previous notebook, that there are 4 variables that refer to the years in which the house or the garage were built or remodeled. \n\nWe will capture the time elapsed between those variables and the year in which the house was sold:",
"_____no_output_____"
]
],
[
[
"def elapsed_years(df, var):\n # capture difference between the year variable\n # and the year in which the house was sold\n df[var] = df['YrSold'] - df[var]\n return df",
"_____no_output_____"
],
[
"for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:\n X_train = elapsed_years(X_train, var)\n X_test = elapsed_years(X_test, var)",
"_____no_output_____"
],
[
"# now we drop YrSold\nX_train.drop(['YrSold'], axis=1, inplace=True)\nX_test.drop(['YrSold'], axis=1, inplace=True)",
"_____no_output_____"
]
],
[
[
"## Numerical variable transformation\n\n### Logarithmic transformation\n\nIn the previous notebook, we observed that the numerical variables are not normally distributed.\n\nWe will transform with the logarightm the positive numerical variables in order to get a more Gaussian-like distribution.",
"_____no_output_____"
]
],
[
[
"for var in [\"LotFrontage\", \"1stFlrSF\", \"GrLivArea\"]:\n X_train[var] = np.log(X_train[var])\n X_test[var] = np.log(X_test[var])",
"_____no_output_____"
],
[
"# check that test set does not contain null values in the engineered variables\n[var for var in [\"LotFrontage\", \"1stFlrSF\", \"GrLivArea\"] if X_test[var].isnull().sum() > 0]",
"_____no_output_____"
],
[
"# same for train set\n[var for var in [\"LotFrontage\", \"1stFlrSF\", \"GrLivArea\"] if X_train[var].isnull().sum() > 0]",
"_____no_output_____"
]
],
[
[
"### Yeo-Johnson transformation\n\nWe will apply the Yeo-Johnson transformation to LotArea.",
"_____no_output_____"
]
],
[
[
"# the yeo-johnson transformation learns the best exponent to transform the variable\n# it needs to learn it from the train set: \nX_train['LotArea'], param = stats.yeojohnson(X_train['LotArea'])\n\n# and then apply the transformation to the test set with the same\n# parameter: see who this time we pass param as argument to the \n# yeo-johnson\nX_test['LotArea'] = stats.yeojohnson(X_test['LotArea'], lmbda=param)\n\nprint(param)",
"-12.55283001172003\n"
],
[
"# check absence of na in the train set\n[var for var in X_train.columns if X_train[var].isnull().sum() > 0]",
"_____no_output_____"
],
[
"# check absence of na in the test set\n[var for var in X_train.columns if X_test[var].isnull().sum() > 0]",
"_____no_output_____"
]
],
[
[
"### Binarize skewed variables\n\nThere were a few variables very skewed, we would transform those into binary variables.",
"_____no_output_____"
]
],
[
[
"skewed = [\n 'BsmtFinSF2', 'LowQualFinSF', 'EnclosedPorch',\n '3SsnPorch', 'ScreenPorch', 'MiscVal'\n]\n\nfor var in skewed:\n \n # map the variable values into 0 and 1\n X_train[var] = np.where(X_train[var]==0, 0, 1)\n X_test[var] = np.where(X_test[var]==0, 0, 1)",
"_____no_output_____"
]
],
[
[
"## Categorical variables\n\n### Apply mappings\n\nThese are variables which values have an assigned order, related to quality. For more information, check Kaggle website.",
"_____no_output_____"
]
],
[
[
"# re-map strings to numbers, which determine quality\n\nqual_mappings = {'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5, 'Missing': 0, 'NA': 0}\n\nqual_vars = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond',\n 'HeatingQC', 'KitchenQual', 'FireplaceQu',\n 'GarageQual', 'GarageCond',\n ]\n\nfor var in qual_vars:\n X_train[var] = X_train[var].map(qual_mappings)\n X_test[var] = X_test[var].map(qual_mappings)",
"_____no_output_____"
],
[
"exposure_mappings = {'No': 1, 'Mn': 2, 'Av': 3, 'Gd': 4}\n\nvar = 'BsmtExposure'\n\nX_train[var] = X_train[var].map(exposure_mappings)\nX_test[var] = X_test[var].map(exposure_mappings)",
"_____no_output_____"
],
[
"finish_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ': 5, 'GLQ': 6}\n\nfinish_vars = ['BsmtFinType1', 'BsmtFinType2']\n\nfor var in finish_vars:\n X_train[var] = X_train[var].map(finish_mappings)\n X_test[var] = X_test[var].map(finish_mappings)",
"_____no_output_____"
],
[
"garage_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'RFn': 2, 'Fin': 3}\n\nvar = 'GarageFinish'\n\nX_train[var] = X_train[var].map(garage_mappings)\nX_test[var] = X_test[var].map(garage_mappings)",
"_____no_output_____"
],
[
"fence_mappings = {'Missing': 0, 'NA': 0, 'MnWw': 1, 'GdWo': 2, 'MnPrv': 3, 'GdPrv': 4}\n\nvar = 'Fence'\n\nX_train[var] = X_train[var].map(fence_mappings)\nX_test[var] = X_test[var].map(fence_mappings)",
"_____no_output_____"
],
[
"# check absence of na in the train set\n[var for var in X_train.columns if X_train[var].isnull().sum() > 0]",
"_____no_output_____"
]
],
[
[
"### Removing Rare Labels\n\nFor the remaining categorical variables, we will group those categories that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by less than 1% of houses, well be replaced by the string \"Rare\".\n\nTo learn more about how to handle categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.",
"_____no_output_____"
]
],
[
[
"# capture all quality variables\n\nqual_vars = qual_vars + finish_vars + ['BsmtExposure','GarageFinish','Fence']\n\n# capture the remaining categorical variables\n# (those that we did not re-map)\n\ncat_others = [\n var for var in cat_vars if var not in qual_vars\n]\n\nlen(cat_others)",
"_____no_output_____"
],
[
"def find_frequent_labels(df, var, rare_perc):\n \n # function finds the labels that are shared by more than\n # a certain % of the houses in the dataset\n\n df = df.copy()\n\n tmp = df.groupby(var)[var].count() / len(df)\n\n return tmp[tmp > rare_perc].index\n\n\nfor var in cat_others:\n \n # find the frequent categories\n frequent_ls = find_frequent_labels(X_train, var, 0.01)\n \n print(var, frequent_ls)\n print()\n \n # replace rare categories by the string \"Rare\"\n X_train[var] = np.where(X_train[var].isin(\n frequent_ls), X_train[var], 'Rare')\n \n X_test[var] = np.where(X_test[var].isin(\n frequent_ls), X_test[var], 'Rare')",
"MSZoning Index(['FV', 'RH', 'RL', 'RM'], dtype='object', name='MSZoning')\n\nStreet Index(['Pave'], dtype='object', name='Street')\n\nAlley Index(['Grvl', 'Missing', 'Pave'], dtype='object', name='Alley')\n\nLotShape Index(['IR1', 'IR2', 'Reg'], dtype='object', name='LotShape')\n\nLandContour Index(['Bnk', 'HLS', 'Low', 'Lvl'], dtype='object', name='LandContour')\n\nUtilities Index(['AllPub'], dtype='object', name='Utilities')\n\nLotConfig Index(['Corner', 'CulDSac', 'FR2', 'Inside'], dtype='object', name='LotConfig')\n\nLandSlope Index(['Gtl', 'Mod'], dtype='object', name='LandSlope')\n\nNeighborhood Index(['Blmngtn', 'BrDale', 'BrkSide', 'ClearCr', 'CollgCr', 'Crawfor',\n 'Edwards', 'Gilbert', 'IDOTRR', 'MeadowV', 'Mitchel', 'NAmes', 'NWAmes',\n 'NoRidge', 'NridgHt', 'OldTown', 'SWISU', 'Sawyer', 'SawyerW',\n 'Somerst', 'StoneBr', 'Timber'],\n dtype='object', name='Neighborhood')\n\nCondition1 Index(['Artery', 'Feedr', 'Norm', 'PosN', 'RRAn'], dtype='object', name='Condition1')\n\nCondition2 Index(['Norm'], dtype='object', name='Condition2')\n\nBldgType Index(['1Fam', '2fmCon', 'Duplex', 'Twnhs', 'TwnhsE'], dtype='object', name='BldgType')\n\nHouseStyle Index(['1.5Fin', '1Story', '2Story', 'SFoyer', 'SLvl'], dtype='object', name='HouseStyle')\n\nRoofStyle Index(['Gable', 'Hip'], dtype='object', name='RoofStyle')\n\nRoofMatl Index(['CompShg'], dtype='object', name='RoofMatl')\n\nExterior1st Index(['AsbShng', 'BrkFace', 'CemntBd', 'HdBoard', 'MetalSd', 'Plywood',\n 'Stucco', 'VinylSd', 'Wd Sdng', 'WdShing'],\n dtype='object', name='Exterior1st')\n\nExterior2nd Index(['AsbShng', 'BrkFace', 'CmentBd', 'HdBoard', 'MetalSd', 'Plywood',\n 'Stucco', 'VinylSd', 'Wd Sdng', 'Wd Shng'],\n dtype='object', name='Exterior2nd')\n\nMasVnrType Index(['BrkFace', 'None', 'Stone'], dtype='object', name='MasVnrType')\n\nFoundation Index(['BrkTil', 'CBlock', 'PConc', 'Slab'], dtype='object', name='Foundation')\n\nHeating Index(['GasA', 'GasW'], dtype='object', name='Heating')\n\nCentralAir Index(['N', 'Y'], dtype='object', name='CentralAir')\n\nElectrical Index(['FuseA', 'FuseF', 'SBrkr'], dtype='object', name='Electrical')\n\nFunctional Index(['Min1', 'Min2', 'Mod', 'Typ'], dtype='object', name='Functional')\n\nGarageType Index(['Attchd', 'Basment', 'BuiltIn', 'Detchd'], dtype='object', name='GarageType')\n\nPavedDrive Index(['N', 'P', 'Y'], dtype='object', name='PavedDrive')\n\nPoolQC Index(['Missing'], dtype='object', name='PoolQC')\n\nMiscFeature Index(['Missing', 'Shed'], dtype='object', name='MiscFeature')\n\nSaleType Index(['COD', 'New', 'WD'], dtype='object', name='SaleType')\n\nSaleCondition Index(['Abnorml', 'Family', 'Normal', 'Partial'], dtype='object', name='SaleCondition')\n\nMSSubClass Int64Index([20, 30, 50, 60, 70, 75, 80, 85, 90, 120, 160, 190], dtype='int64', name='MSSubClass')\n\n"
]
],
[
[
"### Encoding of categorical variables\n\nNext, we need to transform the strings of the categorical variables into numbers. \n\nWe will do it so that we capture the monotonic relationship between the label and the target.\n\nTo learn more about how to encode categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.",
"_____no_output_____"
]
],
[
[
"# this function will assign discrete values to the strings of the variables,\n# so that the smaller value corresponds to the category that shows the smaller\n# mean house sale price\n\ndef replace_categories(train, test, y_train, var, target):\n \n tmp = pd.concat([X_train, y_train], axis=1)\n \n # order the categories in a variable from that with the lowest\n # house sale price, to that with the highest\n ordered_labels = tmp.groupby([var])[target].mean().sort_values().index\n\n # create a dictionary of ordered categories to integer values\n ordinal_label = {k: i for i, k in enumerate(ordered_labels, 0)}\n \n print(var, ordinal_label)\n print()\n\n # use the dictionary to replace the categorical strings by integers\n train[var] = train[var].map(ordinal_label)\n test[var] = test[var].map(ordinal_label)",
"_____no_output_____"
],
[
"for var in cat_others:\n replace_categories(X_train, X_test, y_train, var, 'SalePrice')",
"MSZoning {'Rare': 0, 'RM': 1, 'RH': 2, 'RL': 3, 'FV': 4}\n\nStreet {'Rare': 0, 'Pave': 1}\n\nAlley {'Grvl': 0, 'Pave': 1, 'Missing': 2}\n\nLotShape {'Reg': 0, 'IR1': 1, 'Rare': 2, 'IR2': 3}\n\nLandContour {'Bnk': 0, 'Lvl': 1, 'Low': 2, 'HLS': 3}\n\nUtilities {'Rare': 0, 'AllPub': 1}\n\nLotConfig {'Inside': 0, 'FR2': 1, 'Corner': 2, 'Rare': 3, 'CulDSac': 4}\n\nLandSlope {'Gtl': 0, 'Mod': 1, 'Rare': 2}\n\nNeighborhood {'IDOTRR': 0, 'MeadowV': 1, 'BrDale': 2, 'Edwards': 3, 'BrkSide': 4, 'OldTown': 5, 'Sawyer': 6, 'SWISU': 7, 'NAmes': 8, 'Mitchel': 9, 'SawyerW': 10, 'Rare': 11, 'NWAmes': 12, 'Gilbert': 13, 'Blmngtn': 14, 'CollgCr': 15, 'Crawfor': 16, 'ClearCr': 17, 'Somerst': 18, 'Timber': 19, 'StoneBr': 20, 'NridgHt': 21, 'NoRidge': 22}\n\nCondition1 {'Artery': 0, 'Feedr': 1, 'Norm': 2, 'RRAn': 3, 'Rare': 4, 'PosN': 5}\n\nCondition2 {'Rare': 0, 'Norm': 1}\n\nBldgType {'2fmCon': 0, 'Duplex': 1, 'Twnhs': 2, '1Fam': 3, 'TwnhsE': 4}\n\nHouseStyle {'SFoyer': 0, '1.5Fin': 1, 'Rare': 2, '1Story': 3, 'SLvl': 4, '2Story': 5}\n\nRoofStyle {'Gable': 0, 'Rare': 1, 'Hip': 2}\n\nRoofMatl {'CompShg': 0, 'Rare': 1}\n\nExterior1st {'AsbShng': 0, 'Wd Sdng': 1, 'WdShing': 2, 'MetalSd': 3, 'Stucco': 4, 'Rare': 5, 'HdBoard': 6, 'Plywood': 7, 'BrkFace': 8, 'CemntBd': 9, 'VinylSd': 10}\n\nExterior2nd {'AsbShng': 0, 'Wd Sdng': 1, 'MetalSd': 2, 'Wd Shng': 3, 'Stucco': 4, 'Rare': 5, 'HdBoard': 6, 'Plywood': 7, 'BrkFace': 8, 'CmentBd': 9, 'VinylSd': 10}\n\nMasVnrType {'Rare': 0, 'None': 1, 'BrkFace': 2, 'Stone': 3}\n\nFoundation {'Slab': 0, 'BrkTil': 1, 'CBlock': 2, 'Rare': 3, 'PConc': 4}\n\nHeating {'Rare': 0, 'GasW': 1, 'GasA': 2}\n\nCentralAir {'N': 0, 'Y': 1}\n\nElectrical {'Rare': 0, 'FuseF': 1, 'FuseA': 2, 'SBrkr': 3}\n\nFunctional {'Rare': 0, 'Min2': 1, 'Mod': 2, 'Min1': 3, 'Typ': 4}\n\nGarageType {'Rare': 0, 'Detchd': 1, 'Basment': 2, 'Attchd': 3, 'BuiltIn': 4}\n\nPavedDrive {'N': 0, 'P': 1, 'Y': 2}\n\nPoolQC {'Missing': 0, 'Rare': 1}\n\nMiscFeature {'Rare': 0, 'Shed': 1, 'Missing': 2}\n\nSaleType {'COD': 0, 'Rare': 1, 'WD': 2, 'New': 3}\n\nSaleCondition {'Rare': 0, 'Abnorml': 1, 'Family': 2, 'Normal': 3, 'Partial': 4}\n\nMSSubClass {30: 0, 'Rare': 1, 190: 2, 90: 3, 160: 4, 50: 5, 85: 6, 70: 7, 80: 8, 20: 9, 75: 10, 120: 11, 60: 12}\n\n"
],
[
"# check absence of na in the train set\n[var for var in X_train.columns if X_train[var].isnull().sum() > 0]",
"_____no_output_____"
],
[
"# check absence of na in the test set\n[var for var in X_test.columns if X_test[var].isnull().sum() > 0]",
"_____no_output_____"
],
[
"# let me show you what I mean by monotonic relationship\n# between labels and target\n\ndef analyse_vars(train, y_train, var):\n \n # function plots median house sale price per encoded\n # category\n \n tmp = pd.concat([X_train, np.log(y_train)], axis=1)\n \n tmp.groupby(var)['SalePrice'].median().plot.bar()\n plt.title(var)\n plt.ylim(2.2, 2.6)\n plt.ylabel('SalePrice')\n plt.show()\n \nfor var in cat_others:\n analyse_vars(X_train, y_train, var)",
"_____no_output_____"
]
],
[
[
"The monotonic relationship is particularly clear for the variables MSZoning and Neighborhood. Note how, the higher the integer that now represents the category, the higher the mean house sale price.\n\n(remember that the target is log-transformed, that is why the differences seem so small).",
"_____no_output_____"
],
[
"## Feature Scaling\n\nFor use in linear models, features need to be either scaled. We will scale features to the minimum and maximum values:",
"_____no_output_____"
]
],
[
[
"# create scaler\nscaler = MinMaxScaler()\n\n# fit the scaler to the train set\nscaler.fit(X_train) \n\n# transform the train and test set\n\n# sklearn returns numpy arrays, so we wrap the\n# array with a pandas dataframe\n\nX_train = pd.DataFrame(\n scaler.transform(X_train),\n columns=X_train.columns\n)\n\nX_test = pd.DataFrame(\n scaler.transform(X_test),\n columns=X_train.columns\n)",
"_____no_output_____"
],
[
"X_train.head()",
"_____no_output_____"
],
[
"# let's now save the train and test sets for the next notebook!\n\nX_train.to_csv('xtrain.csv', index=False)\nX_test.to_csv('xtest.csv', index=False)\n\ny_train.to_csv('ytrain.csv', index=False)\ny_test.to_csv('ytest.csv', index=False)",
"_____no_output_____"
],
[
"# now let's save the scaler\n\njoblib.dump(scaler, 'minmax_scaler.joblib') ",
"_____no_output_____"
]
],
[
[
"That concludes the feature engineering section.\n\n# Additional Resources\n\n- [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) - Online Course\n- [Packt Feature Engineering Cookbook](https://www.packtpub.com/data/python-feature-engineering-cookbook) - Book\n- [Feature Engineering for Machine Learning: A comprehensive Overview](https://trainindata.medium.com/feature-engineering-for-machine-learning-a-comprehensive-overview-a7ad04c896f8) - Article\n- [Practical Code Implementations of Feature Engineering for Machine Learning with Python](https://towardsdatascience.com/practical-code-implementations-of-feature-engineering-for-machine-learning-with-python-f13b953d4bcd) - Article",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ec9eb8881d0a0d5ed72de077e8c05a9e05ca3b6f | 24,484 | ipynb | Jupyter Notebook | language_models.ipynb | izavits/msc_datascience | e92cec6b2dc7f0f09b258adc5e7381a256b0540f | [
"Apache-2.0"
] | null | null | null | language_models.ipynb | izavits/msc_datascience | e92cec6b2dc7f0f09b258adc5e7381a256b0540f | [
"Apache-2.0"
] | null | null | null | language_models.ipynb | izavits/msc_datascience | e92cec6b2dc7f0f09b258adc5e7381a256b0540f | [
"Apache-2.0"
] | null | null | null | 37.266362 | 1,254 | 0.508659 | [
[
[
"# Examples with probabilistic language models (n-grams)",
"_____no_output_____"
]
],
[
[
"# Start with some basics:\n# Given a sentence, let's see how to perform some counts\n# with the help of Counters and python dicts\n\nfrom nltk import trigrams\nfrom collections import Counter, defaultdict",
"_____no_output_____"
],
[
"# Example sentence\nsentence = 'this is a sentence that we want to parse and this is done with nltk and python collections'\n\n# Simple tokenization (separation of tokens via space character)\nsentence = sentence.split()\nprint(sentence)",
"['this', 'is', 'a', 'sentence', 'that', 'we', 'want', 'to', 'parse', 'and', 'this', 'is', 'done', 'with', 'nltk', 'and', 'python', 'collections']\n"
],
[
"# Produce trigrams from that sentence\n# (imagine that we are sliding a window of size 3 across the sentence)\n\nfor w1, w2, w3 in trigrams(sentence, pad_right=True, pad_left=True):\n print(w1, w2, w3)",
"None None this\nNone this is\nthis is a\nis a sentence\na sentence that\nsentence that we\nthat we want\nwe want to\nwant to parse\nto parse and\nparse and this\nand this is\nthis is done\nis done with\ndone with nltk\nwith nltk and\nnltk and python\nand python collections\npython collections None\ncollections None None\n"
],
[
"# Keep counts\n\nmodel = defaultdict(lambda: defaultdict(lambda: 0))\n\n# Generate the trigrams and increase the counter for each occurrence \n# of a specific trigram\nfor w1, w2, w3 in trigrams(sentence, pad_right=True, pad_left=True):\n model[(w1, w2)][w3] += 1",
"_____no_output_____"
],
[
"# Let's see the words that follow \"this is\"\nprint('Words that appear after \"this is\":')\nfor w in model[('this', 'is')]:\n print(f'word: {w}')\n",
"Words that appear after \"this is\":\nword: a\nword: done\n"
],
[
"# Let's see how many times \"this is\" occurs\nprint('\\nHow many times does \"this is\" occur in the sample sentence?')\nprint(sum(model[('this', 'is')].values()))",
"\nHow many times does \"this is\" occur in the sample sentence?\n2\n"
],
[
"# Let's see how many times word \"a\" occurs after \"this is\"\nprint('\\nHow many times does \"a\" occur after \"this is\"?')\nprint(model[('this', 'is')]['a'])\n\n# What are the words that follow \"this is\" and what are\n# the corresponding frequencies?\nprint('\\nWords that follow the bigram \"this is\" and the corresponding counts:')\ndict(model[('this', 'is')])\n",
"\nHow many times does \"a\" occur after \"this is\"?\n1\n\nWords that follow the bigram \"this is\" and the corresponding counts:\n"
]
],
[
[
"# Create a simple language model using a text collection",
"_____no_output_____"
]
],
[
[
"# Let's put everything together and use a corpus from project Gutenberg\n# which is provided directly by NLTK\n\nfrom nltk.corpus import gutenberg\nfrom nltk import bigrams, trigrams\nfrom collections import Counter, defaultdict\n\n# Create a placeholder for model\nmodel = defaultdict(lambda: defaultdict(lambda: 0))\n\n# Count frequency of co-occurance \nfor sentence in gutenberg.sents('shakespeare-macbeth.txt'):\n for w1, w2, w3 in trigrams(sentence, pad_right=True, pad_left=True):\n model[(w1, w2)][w3] += 1\n \n# Let's transform the counts to probabilities\nfor w1_w2 in model:\n total_count = float(sum(model[w1_w2].values()))\n for w3 in model[w1_w2]:\n model[w1_w2][w3] /= total_count\n\n",
"_____no_output_____"
],
[
"# Print a sample of the training sentences\nfor i in range(0, 50):\n print(gutenberg.sents('shakespeare-macbeth.txt')[i])",
"['[', 'The', 'Tragedie', 'of', 'Macbeth', 'by', 'William', 'Shakespeare', '1603', ']']\n['Actus', 'Primus', '.']\n['Scoena', 'Prima', '.']\n['Thunder', 'and', 'Lightning', '.']\n['Enter', 'three', 'Witches', '.']\n['1', '.']\n['When', 'shall', 'we', 'three', 'meet', 'againe', '?']\n['In', 'Thunder', ',', 'Lightning', ',', 'or', 'in', 'Raine', '?']\n['2', '.']\n['When', 'the', 'Hurley', '-', 'burley', \"'\", 's', 'done', ',', 'When', 'the', 'Battaile', \"'\", 's', 'lost', ',', 'and', 'wonne']\n['3', '.']\n['That', 'will', 'be', 'ere', 'the', 'set', 'of', 'Sunne']\n['1', '.']\n['Where', 'the', 'place', '?']\n['2', '.']\n['Vpon', 'the', 'Heath']\n['3', '.']\n['There', 'to', 'meet', 'with', 'Macbeth']\n['1', '.']\n['I', 'come', ',', 'Gray', '-', 'Malkin']\n['All', '.']\n['Padock', 'calls', 'anon', ':', 'faire', 'is', 'foule', ',', 'and', 'foule', 'is', 'faire', ',', 'Houer', 'through', 'the', 'fogge', 'and', 'filthie', 'ayre', '.']\n['Exeunt', '.']\n['Scena', 'Secunda', '.']\n['Alarum', 'within', '.']\n['Enter', 'King', 'Malcome', ',', 'Donalbaine', ',', 'Lenox', ',', 'with', 'attendants', ',', 'meeting', 'a', 'bleeding', 'Captaine', '.']\n['King', '.']\n['What', 'bloody', 'man', 'is', 'that', '?']\n['he', 'can', 'report', ',', 'As', 'seemeth', 'by', 'his', 'plight', ',', 'of', 'the', 'Reuolt', 'The', 'newest', 'state']\n['Mal', '.']\n['This', 'is', 'the', 'Serieant', ',', 'Who', 'like', 'a', 'good', 'and', 'hardie', 'Souldier', 'fought', \"'\", 'Gainst', 'my', 'Captiuitie', ':', 'Haile', 'braue', 'friend', ';', 'Say', 'to', 'the', 'King', ',', 'the', 'knowledge', 'of', 'the', 'Broyle', ',', 'As', 'thou', 'didst', 'leaue', 'it']\n['Cap', '.']\n['Doubtfull', 'it', 'stood', ',', 'As', 'two', 'spent', 'Swimmers', ',', 'that', 'doe', 'cling', 'together', ',', 'And', 'choake', 'their', 'Art', ':', 'The', 'mercilesse', 'Macdonwald', '(', 'Worthie', 'to', 'be', 'a', 'Rebell', ',', 'for', 'to', 'that', 'The', 'multiplying', 'Villanies', 'of', 'Nature', 'Doe', 'swarme', 'vpon', 'him', ')', 'from', 'the', 'Westerne', 'Isles', 'Of', 'Kernes', 'and', 'Gallowgrosses', 'is', 'supply', \"'\", 'd', ',', 'And', 'Fortune', 'on', 'his', 'damned', 'Quarry', 'smiling', ',', 'Shew', \"'\", 'd', 'like', 'a', 'Rebells', 'Whore', ':', 'but', 'all', \"'\", 's', 'too', 'weake', ':', 'For', 'braue', 'Macbeth', '(', 'well', 'hee', 'deserues', 'that', 'Name', ')', 'Disdayning', 'Fortune', ',', 'with', 'his', 'brandisht', 'Steele', ',', 'Which', 'smoak', \"'\", 'd', 'with', 'bloody', 'execution', '(', 'Like', 'Valours', 'Minion', ')', 'caru', \"'\", 'd', 'out', 'his', 'passage', ',', 'Till', 'hee', 'fac', \"'\", 'd', 'the', 'Slaue', ':', 'Which', 'neu', \"'\", 'r', 'shooke', 'hands', ',', 'nor', 'bad', 'farwell', 'to', 'him', ',', 'Till', 'he', 'vnseam', \"'\", 'd', 'him', 'from', 'the', 'Naue', 'toth', \"'\", 'Chops', ',', 'And', 'fix', \"'\", 'd', 'his', 'Head', 'vpon', 'our', 'Battlements']\n['King', '.']\n['O', 'valiant', 'Cousin', ',', 'worthy', 'Gentleman']\n['Cap', '.']\n['As', 'whence', 'the', 'Sunne', \"'\", 'gins', 'his', 'reflection', ',', 'Shipwracking', 'Stormes', ',', 'and', 'direfull', 'Thunders', ':', 'So', 'from', 'that', 'Spring', ',', 'whence', 'comfort', 'seem', \"'\", 'd', 'to', 'come', ',', 'Discomfort', 'swells', ':', 'Marke', 'King', 'of', 'Scotland', ',', 'marke', ',', 'No', 'sooner', 'Iustice', 'had', ',', 'with', 'Valour', 'arm', \"'\", 'd', ',', 'Compell', \"'\", 'd', 'these', 'skipping', 'Kernes', 'to', 'trust', 'their', 'heeles', ',', 'But', 'the', 'Norweyan', 'Lord', ',', 'surueying', 
'vantage', ',', 'With', 'furbusht', 'Armes', ',', 'and', 'new', 'supplyes', 'of', 'men', ',', 'Began', 'a', 'fresh', 'assault']\n['King', '.']\n['Dismay', \"'\", 'd', 'not', 'this', 'our', 'Captaines', ',', 'Macbeth', 'and', 'Banquoh', '?']\n['Cap', '.']\n['Yes', ',', 'as', 'Sparrowes', ',', 'Eagles', ';', 'Or', 'the', 'Hare', ',', 'the', 'Lyon', ':', 'If', 'I', 'say', 'sooth', ',', 'I', 'must', 'report', 'they', 'were', 'As', 'Cannons', 'ouer', '-', 'charg', \"'\", 'd', 'with', 'double', 'Cracks', ',', 'So', 'they', 'doubly', 'redoubled', 'stroakes', 'vpon', 'the', 'Foe', ':', 'Except', 'they', 'meant', 'to', 'bathe', 'in', 'reeking', 'Wounds', ',', 'Or', 'memorize', 'another', 'Golgotha', ',', 'I', 'cannot', 'tell', ':', 'but', 'I', 'am', 'faint', ',', 'My', 'Gashes', 'cry', 'for', 'helpe']\n['King', '.']\n['So', 'well', 'thy', 'words', 'become', 'thee', ',', 'as', 'thy', 'wounds', ',', 'They', 'smack', 'of', 'Honor', 'both', ':', 'Goe', 'get', 'him', 'Surgeons', '.']\n['Enter', 'Rosse', 'and', 'Angus', '.']\n['Who', 'comes', 'here', '?']\n['Mal', '.']\n['The', 'worthy', 'Thane', 'of', 'Rosse']\n['Lenox', '.']\n['What', 'a', 'haste', 'lookes', 'through', 'his', 'eyes', '?']\n['So', 'should', 'he', 'looke', ',', 'that', 'seemes', 'to', 'speake', 'things', 'strange']\n"
],
[
"# So we have trained the model.\n# Let's see the probabilities of words that may follow the sequence \"I am\"\n\ndict(model['I', 'am'])",
"_____no_output_____"
],
[
"# Let's use the model to generate some text!\n\nimport random\n\n# We will tell the model to start with the words \"I am\"\n# So, it will produce the next word.\n# Then, the process is repeated using the generated word and the previous one.\n\ntext = [\"I\", \"am\"]\nsentence_finished = False\n \nwhile not sentence_finished:\n # select a random probability threshold \n r = random.random()\n accumulator = .0\n\n for word in model[tuple(text[-2:])].keys():\n accumulator += model[tuple(text[-2:])][word]\n # select words that are above the probability threshold\n if accumulator >= r:\n text.append(word)\n break\n\n if text[-2:] == [None, None]:\n sentence_finished = True\n\nprint(' '.join([t for t in text if t]))\n",
"I am a man : For thy vndaunted Mettle should compose Nothing but Males .\n"
]
],
[
[
"# Use directly the MLE from nltk python package",
"_____no_output_____"
]
],
[
[
"# In this simple example we will calculate perplexities of test sentences\n# for a simple model that is trained in a tiny training set\n\nimport nltk\nfrom nltk.lm.preprocessing import padded_everygram_pipeline\nfrom nltk.lm import MLE\nfrom nltk.lm import Vocabulary\n\ntrain_sentences = ['Thunder and Lightning',\n 'Enter three Witches',\n 'I am faint',\n 'God saue the King',\n 'Looke what I haue here',\n 'Here the lies haue the eyes'\n ]\n\ntokenized_text = [list(map(str.lower, nltk.tokenize.word_tokenize(sent))) for sent in train_sentences]\n\n# Print the tokenized training set\ntokenized_text",
"_____no_output_____"
],
[
"n = 2 # Highest n-gram order for the Maximul Likelihood Estimator\n\n# Prepare training data:\n# Use bigrams, and mark the start and end of the sentence\ntrain_data = [nltk.bigrams(t, pad_right=True, pad_left=True, left_pad_symbol=\"<s>\", right_pad_symbol=\"</s>\") for t in tokenized_text]\nwords = [word for sent in tokenized_text for word in sent]\nwords.extend([\"<s>\", \"</s>\"])\npadded_vocab = Vocabulary(words)\n\n# Fit model\nmodel = MLE(n)\nmodel.fit(train_data, padded_vocab)",
"_____no_output_____"
],
[
"# We have trained the bigram model.\n# Let's test it in some test sentences\n\n# We will on purpose include a sentence that appears 'as is' in the tranining set.\n# That sentence should have the highest perplexity\n# The other two sentences do not appear in the training set:\n# The first one contains bigrams that the model has never seen before,\n# but the last sentence contains bigrams that the model has learned\ntest_sentences = [\n 'Thunder and lightning', # So, this should have the lowest perplexity (the model explains well the sentence)\n 'through his eyes', # This one should have PP that equals infinity (due to zero probabilities)\n 'I haue the king'] # This sentence can be explained but it will surprise the model more than the 1st\n\n\n# Tokenize the test sentences\ntokenized_text = [list(map(str.lower, nltk.tokenize.word_tokenize(sent))) for sent in test_sentences]\n\n# For each test sentence print the MLE estimates for the bigrams that need to be calculated\nprint('MLE estimates for test data:')\ntest_data = [nltk.bigrams(t, pad_right=True, pad_left=True, left_pad_symbol=\"<s>\", right_pad_symbol=\"</s>\") for t in tokenized_text]\nfor i,test in enumerate(test_data):\n print (f'\\nMLE Estimates for sentence {i}:', [((ngram[-1], ngram[:-1]),model.score(ngram[-1], ngram[:-1])) for ngram in test])\n\n# For each test sentence print the perplexities of the model\nprint('\\nPerplexities:')\n# Reset the test_data, since the generator has been exhausted\ntest_data = [nltk.bigrams(t, pad_right=True, pad_left=True, left_pad_symbol=\"<s>\", right_pad_symbol=\"</s>\") for t in tokenized_text]\nfor i, test in enumerate(test_data):\n print(\"PP({0}):{1}\".format(test_sentences[i], model.perplexity(test)))\n",
"MLE estimates for test data:\n\nMLE Estimates for sentence 0: [(('thunder', ('<s>',)), 0.16666666666666666), (('and', ('thunder',)), 1.0), (('lightning', ('and',)), 1.0), (('</s>', ('lightning',)), 1.0)]\n\nMLE Estimates for sentence 1: [(('through', ('<s>',)), 0.0), (('his', ('through',)), 0), (('eyes', ('his',)), 0), (('</s>', ('eyes',)), 1.0)]\n\nMLE Estimates for sentence 2: [(('i', ('<s>',)), 0.16666666666666666), (('haue', ('i',)), 0.5), (('the', ('haue',)), 0.5), (('king', ('the',)), 0.3333333333333333), (('</s>', ('king',)), 1.0)]\n\nPerplexities:\nPP(Thunder and lightning):1.5650845800732873\nPP(through his eyes):inf\nPP(I haue the king):2.352158045049347\n"
],
[
"# Same example with Laplace smoothing\n\n# This is the exact same code as above.\n# The only difference is that we use Laplace() instead of MLE() to define the model\n\nfrom nltk.lm import Laplace\n\ntrain_sentences = ['Thunder and Lightning',\n 'Enter three Witches',\n 'I am faint',\n 'God saue the King',\n 'Looke what I haue here',\n 'Here the lies haue the eyes'\n ]\n\ntokenized_text = [list(map(str.lower, nltk.tokenize.word_tokenize(sent))) for sent in train_sentences]\n\nn = 2 \n\ntrain_data = [nltk.bigrams(t, pad_right=True, pad_left=True, left_pad_symbol=\"<s>\", right_pad_symbol=\"</s>\") for t in tokenized_text]\nwords = [word for sent in tokenized_text for word in sent]\nwords.extend([\"<s>\", \"</s>\"])\npadded_vocab = Vocabulary(words)\n\n# Fit model\nmodel = Laplace(n)\nmodel.fit(train_data, padded_vocab)\n\ntest_sentences = [\n 'Thunder and lightning', \n 'through his eyes', \n 'I haue the king'] \n\ntokenized_text = [list(map(str.lower, nltk.tokenize.word_tokenize(sent))) for sent in test_sentences]\n\nprint('MLE estimates for test data:')\ntest_data = [nltk.bigrams(t, pad_right=True, pad_left=True, left_pad_symbol=\"<s>\", right_pad_symbol=\"</s>\") for t in tokenized_text]\nfor i,test in enumerate(test_data):\n print (f'\\nMLE Estimates for sentence {i}:', [((ngram[-1], ngram[:-1]),model.score(ngram[-1], ngram[:-1])) for ngram in test])\n\nprint('\\nPerplexities:')\n# Reset the test_data, since the generator has been exhausted\ntest_data = [nltk.bigrams(t, pad_right=True, pad_left=True, left_pad_symbol=\"<s>\", right_pad_symbol=\"</s>\") for t in tokenized_text]\nfor i, test in enumerate(test_data):\n print(\"PP({0}):{1}\".format(test_sentences[i], model.perplexity(test)))\n",
"MLE estimates for test data:\n\nMLE Estimates for sentence 0: [(('thunder', ('<s>',)), 0.06896551724137931), (('and', ('thunder',)), 0.08333333333333333), (('lightning', ('and',)), 0.08333333333333333), (('</s>', ('lightning',)), 0.08333333333333333)]\n\nMLE Estimates for sentence 1: [(('through', ('<s>',)), 0.034482758620689655), (('his', ('through',)), 0.043478260869565216), (('eyes', ('his',)), 0.043478260869565216), (('</s>', ('eyes',)), 0.08333333333333333)]\n\nMLE Estimates for sentence 2: [(('i', ('<s>',)), 0.06896551724137931), (('haue', ('i',)), 0.08), (('the', ('haue',)), 0.08), (('king', ('the',)), 0.07692307692307693), (('</s>', ('king',)), 0.08333333333333333)]\n\nPerplexities:\nPP(Thunder and lightning):12.581370016785733\nPP(through his eyes):20.713749936746982\nPP(I haue the king):12.87248887971409\n"
]
],
[
[
"---",
"_____no_output_____"
],
[
"### In the last example we do not have zero probabilities or perplexities that go to infinity, because we have performed smoothing. We have stolen some of the probability mass of other n-grams to slightly augment the zero probabilities of unseen n-grams.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
ec9ebddea5c6736cfc112d778eedc887f95a8e92 | 65,526 | ipynb | Jupyter Notebook | nbs/05_data.transforms.ipynb | pcuenca/fastai2 | 7c3473164476bce0c7b28ed2c7c7e35e7d3bd66f | [
"Apache-2.0"
] | null | null | null | nbs/05_data.transforms.ipynb | pcuenca/fastai2 | 7c3473164476bce0c7b28ed2c7c7e35e7d3bd66f | [
"Apache-2.0"
] | null | null | null | nbs/05_data.transforms.ipynb | pcuenca/fastai2 | 7c3473164476bce0c7b28ed2c7c7e35e7d3bd66f | [
"Apache-2.0"
] | null | null | null | 42.549351 | 3,092 | 0.674557 | [
[
[
"#default_exp data.transforms",
"_____no_output_____"
],
[
"#export\nfrom fastai2.torch_basics import *\nfrom fastai2.data.core import *\nfrom fastai2.data.load import *\nfrom fastai2.data.external import *",
"_____no_output_____"
],
[
"from nbdev.showdoc import *",
"_____no_output_____"
]
],
[
[
"# Helper functions for processing data and basic transforms\n\n> Functions for getting, splitting, and labeling data, as well as generic transforms",
"_____no_output_____"
],
[
"## Get, split, and label",
"_____no_output_____"
],
[
"For most data source creation we need functions to get a list of items, split them in to train/valid sets, and label them. fastai provides functions to make each of these steps easy (especially when combined with `fastai.data.blocks`).",
"_____no_output_____"
],
[
"### Get",
"_____no_output_____"
],
[
"First we'll look at functions that *get* a list of items (generally file names).\n\nWe'll use *tiny MNIST* (a subset of MNIST with just two classes, `7`s and `3`s) for our examples/tests throughout this page.",
"_____no_output_____"
]
],
[
[
"path = untar_data(URLs.MNIST_TINY)\n(path/'train').ls()",
"_____no_output_____"
],
[
"# export\ndef _get_files(p, fs, extensions=None):\n p = Path(p)\n res = [p/f for f in fs if not f.startswith('.')\n and ((not extensions) or f'.{f.split(\".\")[-1].lower()}' in extensions)]\n return res",
"_____no_output_____"
],
[
"# export\ndef get_files(path, extensions=None, recurse=True, folders=None):\n \"Get all the files in `path` with optional `extensions`, optionally with `recurse`, only in `folders`, if specified.\"\n path = Path(path)\n folders=L(folders)\n extensions = setify(extensions)\n extensions = {e.lower() for e in extensions}\n if recurse:\n res = []\n for i,(p,d,f) in enumerate(os.walk(path)): # returns (dirpath, dirnames, filenames)\n if len(folders) !=0 and i==0: d[:] = [o for o in d if o in folders]\n else: d[:] = [o for o in d if not o.startswith('.')]\n res += _get_files(p, f, extensions)\n else:\n f = [o.name for o in os.scandir(path) if o.is_file()]\n res = _get_files(path, f, extensions)\n return L(res)",
"_____no_output_____"
]
],
[
[
"This is the most general way to grab a bunch of file names from disk. If you pass `extensions` (including the `.`) then returned file names are filtered by that list. Only those files directly in `path` are included, unless you pass `recurse`, in which case all child folders are also searched recursively. `folders` is an optional list of directories to limit the search to.",
"_____no_output_____"
]
],
[
[
"t3 = get_files(path/'train'/'3', extensions='.png', recurse=False)\nt7 = get_files(path/'train'/'7', extensions='.png', recurse=False)\nt = get_files(path/'train', extensions='.png', recurse=True)\ntest_eq(len(t), len(t3)+len(t7))\ntest_eq(len(get_files(path/'train'/'3', extensions='.jpg', recurse=False)),0)\ntest_eq(len(t), len(get_files(path, extensions='.png', recurse=True, folders='train')))\nt",
"_____no_output_____"
],
[
"#hide\ntest_eq(len(get_files(path/'train'/'3', recurse=False)),346)\ntest_eq(len(get_files(path, extensions='.png', recurse=True, folders=['train', 'test'])),729)\ntest_eq(len(get_files(path, extensions='.png', recurse=True, folders='train')),709)\ntest_eq(len(get_files(path, extensions='.png', recurse=True, folders='training')),0)",
"_____no_output_____"
]
],
[
[
"It's often useful to be able to create functions with customized behavior. `fastai.data` generally uses functions named as CamelCase verbs ending in `er` to create these functions. `FileGetter` is a simple example of such a function creator.",
"_____no_output_____"
]
],
[
[
"#export\ndef FileGetter(suf='', extensions=None, recurse=True, folders=None):\n \"Create `get_files` partial function that searches path suffix `suf`, only in `folders`, if specified, and passes along args\"\n def _inner(o, extensions=extensions, recurse=recurse, folders=folders):\n return get_files(o/suf, extensions, recurse, folders)\n return _inner",
"_____no_output_____"
],
[
"fpng = FileGetter(extensions='.png', recurse=False)\ntest_eq(len(t7), len(fpng(path/'train'/'7')))\ntest_eq(len(t), len(fpng(path/'train', recurse=True)))\nfpng_r = FileGetter(extensions='.png', recurse=True)\ntest_eq(len(t), len(fpng_r(path/'train')))",
"_____no_output_____"
],
[
"#export\nimage_extensions = set(k for k,v in mimetypes.types_map.items() if v.startswith('image/'))",
"_____no_output_____"
],
[
"#export\ndef get_image_files(path, recurse=True, folders=None):\n \"Get image files in `path` recursively, only in `folders`, if specified.\"\n return get_files(path, extensions=image_extensions, recurse=recurse, folders=folders)",
"_____no_output_____"
]
],
[
[
"This is simply `get_files` called with a list of standard image extensions.",
"_____no_output_____"
]
],
[
[
"test_eq(len(t), len(get_image_files(path, recurse=True, folders='train')))",
"_____no_output_____"
],
[
"#export\ndef ImageGetter(suf='', recurse=True, folders=None):\n \"Create `get_image_files` partial function that searches path suffix `suf` and passes along `kwargs`, only in `folders`, if specified.\"\n def _inner(o, recurse=recurse, folders=folders): return get_image_files(o/suf, recurse, folders)\n return _inner",
"_____no_output_____"
]
],
[
[
"Same as `FileGetter`, but for image extensions.",
"_____no_output_____"
]
],
[
[
"test_eq(len(get_files(path/'train', extensions='.png', recurse=True, folders='3')),\n len(ImageGetter( 'train', recurse=True, folders='3')(path)))",
"_____no_output_____"
],
[
"#export\ndef get_text_files(path, recurse=True, folders=None):\n \"Get text files in `path` recursively, only in `folders`, if specified.\"\n return get_files(path, extensions=['.txt'], recurse=recurse, folders=folders)",
"_____no_output_____"
]
],
[
[
"### Split",
"_____no_output_____"
],
[
"The next set of functions are used to *split* data into training and validation sets. The functions return two lists - a list of indices or masks for each of training and validation sets.",
"_____no_output_____"
]
],
[
[
"# export\ndef RandomSplitter(valid_pct=0.2, seed=None, **kwargs):\n \"Create function that splits `items` between train/val with `valid_pct` randomly.\"\n def _inner(o, **kwargs):\n if seed is not None: torch.manual_seed(seed)\n rand_idx = L(int(i) for i in torch.randperm(len(o)))\n cut = int(valid_pct * len(o))\n return rand_idx[cut:],rand_idx[:cut]\n return _inner",
"_____no_output_____"
],
[
"src = list(range(30))\nf = RandomSplitter(seed=42)\ntrn,val = f(src)\nassert 0<len(trn)<len(src)\nassert all(o not in val for o in trn)\ntest_eq(len(trn), len(src)-len(val))\n# test random seed consistency\ntest_eq(f(src)[0], trn)",
"_____no_output_____"
],
[
"#export\ndef IndexSplitter(valid_idx):\n \"Split `items` so that `val_idx` are in the validation set and the others in the training set\"\n def _inner(o, **kwargs):\n train_idx = np.setdiff1d(np.array(range_of(o)), np.array(valid_idx))\n return L(train_idx, use_list=True), L(valid_idx, use_list=True)\n return _inner",
"_____no_output_____"
],
[
"items = list(range(10))\nsplitter = IndexSplitter([3,7,9])\ntest_eq(splitter(items),[[0,1,2,4,5,6,8],[3,7,9]])",
"_____no_output_____"
],
[
"# export\ndef _grandparent_idxs(items, name): return mask2idxs(Path(o).parent.parent.name == name for o in items)",
"_____no_output_____"
],
[
"# export\ndef GrandparentSplitter(train_name='train', valid_name='valid'):\n \"Split `items` from the grand parent folder names (`train_name` and `valid_name`).\"\n def _inner(o, **kwargs):\n return _grandparent_idxs(o, train_name),_grandparent_idxs(o, valid_name)\n return _inner",
"_____no_output_____"
],
[
"fnames = [path/'train/3/9932.png', path/'valid/7/7189.png', \n path/'valid/7/7320.png', path/'train/7/9833.png', \n path/'train/3/7666.png', path/'valid/3/925.png',\n path/'train/7/724.png', path/'valid/3/93055.png']\nsplitter = GrandparentSplitter()\ntest_eq(splitter(fnames),[[0,3,4,6],[1,2,5,7]])",
"_____no_output_____"
],
[
"# export\ndef FuncSplitter(func):\n \"Split `items` by result of `func` (`True` for validation, `False` for training set).\"\n def _inner(o, **kwargs):\n val_idx = mask2idxs(func(o_) for o_ in o)\n return IndexSplitter(val_idx)(o)\n return _inner",
"_____no_output_____"
],
[
"splitter = FuncSplitter(lambda o: Path(o).parent.parent.name == 'valid')\ntest_eq(splitter(fnames),[[0,3,4,6],[1,2,5,7]])",
"_____no_output_____"
],
[
"# export\ndef MaskSplitter(mask):\n \"Split `items` depending on the value of `mask`.\"\n def _inner(o, **kwargs): return IndexSplitter(mask2idxs(mask))(o)\n return _inner",
"_____no_output_____"
],
[
"items = list(range(6))\nsplitter = MaskSplitter([True,False,False,True,False,True])\ntest_eq(splitter(items),[[1,2,4],[0,3,5]])",
"_____no_output_____"
],
[
"# export\ndef FileSplitter(fname):\n \"Split `items` depending on the value of `mask`.\"\n valid = Path(fname).read().split('\\n')\n def _func(x): return x.name in valid\n def _inner(o, **kwargs): return FuncSplitter(_func)(o)\n return _inner",
"_____no_output_____"
],
[
"with tempfile.TemporaryDirectory() as d:\n fname = Path(d)/'valid.txt'\n fname.write('\\n'.join([Path(fnames[i]).name for i in [1,3,4]]))\n splitter = FileSplitter(fname)\n test_eq(splitter(fnames),[[0,2,5,6,7],[1,3,4]])",
"_____no_output_____"
],
[
"# export\ndef ColSplitter(col='is_valid'):\n \"Split `items` (supposed to be a dataframe) by value in `col`\"\n def _inner(o, **kwargs):\n assert isinstance(o, pd.DataFrame), \"ColSplitter only works when your items are a pandas DataFrame\"\n valid_idx = o[col].values\n return IndexSplitter(mask2idxs(valid_idx))(o)\n return _inner",
"_____no_output_____"
],
[
"df = pd.DataFrame({'a': [0,1,2,3,4], 'b': [True,False,True,True,False]})\nsplits = ColSplitter('b')(df)\ntest_eq(splits, [[1,4], [0,2,3]])",
"_____no_output_____"
]
],
[
[
"### Label",
"_____no_output_____"
],
[
"The final set of functions is used to *label* a single item of data.",
"_____no_output_____"
]
],
[
[
"# export\ndef parent_label(o, **kwargs):\n \"Label `item` with the parent folder name.\"\n return Path(o).parent.name",
"_____no_output_____"
]
],
[
[
"Note that `parent_label` doesn't have anything customize, so it doesn't return a function - you can just use it directly.",
"_____no_output_____"
]
],
[
[
"test_eq(parent_label(fnames[0]), '3')\ntest_eq(parent_label(\"fastai_dev/dev/data/mnist_tiny/train/3/9932.png\"), '3')\n[parent_label(o) for o in fnames]",
"_____no_output_____"
],
[
"#hide\n#test for MS Windows when os.path.sep is '\\\\' instead of '/'\ntest_eq(parent_label(os.path.join(\"fastai_dev\",\"dev\",\"data\",\"mnist_tiny\",\"train\", \"3\", \"9932.png\") ), '3')",
"_____no_output_____"
],
[
"# export\nclass RegexLabeller():\n \"Label `item` with regex `pat`.\"\n def __init__(self, pat, match=False):\n self.pat = re.compile(pat)\n self.matcher = self.pat.match if match else self.pat.search\n\n def __call__(self, o, **kwargs):\n res = self.matcher(str(o))\n assert res,f'Failed to find \"{self.pat}\" in \"{o}\"'\n return res.group(1)",
"_____no_output_____"
]
],
[
[
"`RegexLabeller` is a very flexible function since it handles any regex search of the stringified item. Pass `match=True` to use `re.match` (i.e. check only start of string), or `re.search` otherwise (default).\n\nFor instance, here's an example the replicates the previous `parent_label` results.",
"_____no_output_____"
]
],
[
[
"f = RegexLabeller(fr'{os.path.sep}(\\d){os.path.sep}')\ntest_eq(f(fnames[0]), '3')\n[f(o) for o in fnames]",
"_____no_output_____"
],
[
"f = RegexLabeller(r'(\\d*)', match=True)\ntest_eq(f(fnames[0].name), '9932')",
"_____no_output_____"
],
[
"#export\nclass ColReader():\n \"Read `cols` in `row` with potential `pref` and `suff`\"\n def __init__(self, cols, pref='', suff='', label_delim=None):\n store_attr(self, 'suff,label_delim')\n self.pref = str(pref) + os.path.sep if isinstance(pref, Path) else pref\n self.cols = L(cols)\n\n def _do_one(self, r, c):\n o = r[c] if isinstance(c, int) else getattr(r, c)\n if len(self.pref)==0 and len(self.suff)==0 and self.label_delim is None: return o\n if self.label_delim is None: return f'{self.pref}{o}{self.suff}'\n else: return o.split(self.label_delim) if len(o)>0 else []\n\n def __call__(self, o, **kwargs): return detuplify(tuple(self._do_one(o, c) for c in self.cols))",
"_____no_output_____"
]
],
[
[
"`cols` can be a list of column names or a list of indices (or a mix of both). If `label_delim` is passed, the result is split using it.",
"_____no_output_____"
]
],
[
[
"df = pd.DataFrame({'a': 'a b c d'.split(), 'b': ['1 2', '0', '', '1 2 3']})\nf = ColReader('a', pref='0', suff='1')\ntest_eq([f(o) for o in df.itertuples()], '0a1 0b1 0c1 0d1'.split())\n\nf = ColReader('b', label_delim=' ')\ntest_eq([f(o) for o in df.itertuples()], [['1', '2'], ['0'], [], ['1', '2', '3']])\n\ndf['a1'] = df['a']\nf = ColReader(['a', 'a1'], pref='0', suff='1')\ntest_eq([f(o) for o in df.itertuples()], [('0a1', '0a1'), ('0b1', '0b1'), ('0c1', '0c1'), ('0d1', '0d1')])\n\ndf = pd.DataFrame({'a': [L(0,1), L(2,3,4), L(5,6,7)]})\nf = ColReader('a')\ntest_eq([f(o) for o in df.itertuples()], [L(0,1), L(2,3,4), L(5,6,7)])",
"_____no_output_____"
]
],
[
[
"## Categorize -",
"_____no_output_____"
]
],
[
[
"#export\nclass CategoryMap(CollBase):\n \"Collection of categories with the reverse mapping in `o2i`\"\n def __init__(self, col, sort=True, add_na=False):\n if is_categorical_dtype(col): items = L(col.cat.categories, use_list=True)\n else:\n if not hasattr(col,'unique'): col = L(col, use_list=True)\n # `o==o` is the generalized definition of non-NaN used by Pandas\n items = L(o for o in col.unique() if o==o)\n if sort: items = items.sorted()\n self.items = '#na#' + items if add_na else items\n self.o2i = defaultdict(int, self.items.val2idx()) if add_na else dict(self.items.val2idx())\n def __eq__(self,b): return all_equal(b,self)",
"_____no_output_____"
],
[
"t = CategoryMap([4,2,3,4])\ntest_eq(t, [2,3,4])\ntest_eq(t.o2i, {2:0,3:1,4:2})\ntest_fail(lambda: t.o2i['unseen label'])",
"_____no_output_____"
],
[
"t = CategoryMap([4,2,3,4], add_na=True)\ntest_eq(t, ['#na#',2,3,4])\ntest_eq(t.o2i, {'#na#':0,2:1,3:2,4:3})",
"_____no_output_____"
],
[
"t = CategoryMap(pd.Series([4,2,3,4]), sort=False)\ntest_eq(t, [4,2,3])\ntest_eq(t.o2i, {4:0,2:1,3:2})",
"_____no_output_____"
],
[
"col = pd.Series(pd.Categorical(['M','H','L','M'], categories=['H','M','L'], ordered=True))\nt = CategoryMap(col)\ntest_eq(t, ['H','M','L'])\ntest_eq(t.o2i, {'H':0,'M':1,'L':2})",
"_____no_output_____"
],
[
"# export\nclass Categorize(Transform):\n \"Reversible transform of category string to `vocab` id\"\n loss_func,order=CrossEntropyLossFlat(),1\n def __init__(self, vocab=None, add_na=False):\n self.add_na = add_na\n self.vocab = None if vocab is None else CategoryMap(vocab, add_na=add_na)\n\n def setups(self, dsets):\n if self.vocab is None and dsets is not None: self.vocab = CategoryMap(dsets, add_na=self.add_na)\n self.c = len(self.vocab)\n\n def encodes(self, o): return TensorCategory(self.vocab.o2i[o])\n def decodes(self, o): return Category (self.vocab [o])",
"_____no_output_____"
],
[
"#export\nclass Category(str, ShowTitle): _show_args = {'label': 'category'}",
"_____no_output_____"
],
[
"cat = Categorize()\ntds = Datasets(['cat', 'dog', 'cat'], tfms=[cat])\ntest_eq(cat.vocab, ['cat', 'dog'])\ntest_eq(cat('cat'), 0)\ntest_eq(cat.decode(1), 'dog')\ntest_stdout(lambda: show_at(tds,2), 'cat')",
"_____no_output_____"
],
[
"cat = Categorize(add_na=True)\ntds = Datasets(['cat', 'dog', 'cat'], tfms=[cat])\ntest_eq(cat.vocab, ['#na#', 'cat', 'dog'])\ntest_eq(cat('cat'), 1)\ntest_eq(cat.decode(2), 'dog')\ntest_stdout(lambda: show_at(tds,2), 'cat')",
"_____no_output_____"
]
],
[
[
"## Multicategorize -",
"_____no_output_____"
]
],
[
[
"# export\nclass MultiCategorize(Categorize):\n \"Reversible transform of multi-category strings to `vocab` id\"\n loss_func,order=BCEWithLogitsLossFlat(),1\n def __init__(self, vocab=None, add_na=False):\n self.add_na = add_na\n self.vocab = None if vocab is None else CategoryMap(vocab, add_na=add_na)\n\n def setups(self, dsets):\n if not dsets: return\n if self.vocab is None:\n vals = set()\n for b in dsets: vals = vals.union(set(b))\n self.vocab = CategoryMap(list(vals), add_na=self.add_na)\n\n def encodes(self, o): return TensorMultiCategory([self.vocab.o2i[o_] for o_ in o])\n def decodes(self, o): return MultiCategory ([self.vocab [o_] for o_ in o])",
"_____no_output_____"
],
[
"#export\nclass MultiCategory(L):\n def show(self, ctx=None, sep=';', color='black', **kwargs):\n return show_title(sep.join(self.map(str)), ctx=ctx, color=color, **kwargs)",
"_____no_output_____"
],
[
"cat = MultiCategorize()\ntds = Datasets([['b', 'c'], ['a'], ['a', 'c'], []], tfms=[cat])\ntest_eq(tds[3][0], tensor([]))\ntest_eq(cat.vocab, ['a', 'b', 'c'])\ntest_eq(cat(['a', 'c']), tensor([0,2]))\ntest_eq(cat([]), tensor([]))\ntest_eq(cat.decode([1]), ['b'])\ntest_eq(cat.decode([0,2]), ['a', 'c'])\ntest_stdout(lambda: show_at(tds,2), 'a;c')",
"_____no_output_____"
],
[
"# export\nclass OneHotEncode(Transform):\n \"One-hot encodes targets\"\n order=2\n def __init__(self, c=None): self.c = c\n\n def setups(self, dsets):\n if self.c is None: self.c = len(L(getattr(dsets, 'vocab', None)))\n if not self.c: warn(\"Couldn't infer the number of classes, please pass a value for `c` at init\")\n\n def encodes(self, o): return TensorMultiCategory(one_hot(o, self.c).float())\n def decodes(self, o): return one_hot_decode(o, None)",
"_____no_output_____"
]
],
[
[
"Works in conjunction with ` MultiCategorize` or on its own if you have one-hot encoded targets (pass a `vocab` for decoding and `do_encode=False` in this case)",
"_____no_output_____"
]
],
[
[
"_tfm = OneHotEncode(c=3)\ntest_eq(_tfm([0,2]), tensor([1.,0,1]))\ntest_eq(_tfm.decode(tensor([0,1,1])), [1,2])",
"_____no_output_____"
],
[
"tds = Datasets([['b', 'c'], ['a'], ['a', 'c'], []], [[MultiCategorize(), OneHotEncode()]])\ntest_eq(tds[1], [tensor([1.,0,0])])\ntest_eq(tds[3], [tensor([0.,0,0])])\ntest_eq(tds.decode([tensor([False, True, True])]), [['b','c']])\ntest_eq(type(tds[1][0]), TensorMultiCategory)\ntest_stdout(lambda: show_at(tds,2), 'a;c')",
"_____no_output_____"
],
[
"#hide\n#test with passing the vocab\ntds = Datasets([['b', 'c'], ['a'], ['a', 'c'], []], [[MultiCategorize(vocab=['a', 'b', 'c']), OneHotEncode()]])\ntest_eq(tds[1], [tensor([1.,0,0])])\ntest_eq(tds[3], [tensor([0.,0,0])])\ntest_eq(tds.decode([tensor([False, True, True])]), [['b','c']])\ntest_eq(type(tds[1][0]), TensorMultiCategory)\ntest_stdout(lambda: show_at(tds,2), 'a;c')",
"_____no_output_____"
],
[
"# export\nclass EncodedMultiCategorize(Categorize):\n \"Transform of one-hot encoded multi-category that decodes with `vocab`\"\n loss_func,order=BCEWithLogitsLossFlat(),1\n def __init__(self, vocab): self.vocab,self.c = vocab,len(vocab)\n def encodes(self, o): return TensorCategory(tensor(o).float())\n def decodes(self, o): return MultiCategory (one_hot_decode(o, self.vocab))",
"_____no_output_____"
],
[
"_tfm = EncodedMultiCategorize(vocab=['a', 'b', 'c'])\ntest_eq(_tfm([1,0,1]), tensor([1., 0., 1.]))\ntest_eq(type(_tfm([1,0,1])), TensorCategory)\ntest_eq(_tfm.decode(tensor([False, True, True])), ['b','c'])",
"_____no_output_____"
],
[
"#export\nclass RegressionSetup(Transform):\n \"Transform that floatifies targets\"\n def __init__(self, c=None): self.c = c\n def encodes(self, o): return tensor(o).float()\n def setups(self, dsets):\n if self.c is not None: return\n try: self.c = len(dsets[0]) if hasattr(dsets[0], '__len__') else 1\n except: self.c = 0",
"_____no_output_____"
],
[
"_tfm = RegressionSetup()\ndsets = Datasets([0, 1, 2], RegressionSetup)\ntest_eq(dsets.c, 1)\ntest_eq_type(dsets[0], (tensor(0.),))\n\ndsets = Datasets([[0, 1, 2], [3,4,5]], RegressionSetup)\ntest_eq(dsets.c, 3)\ntest_eq_type(dsets[0], (tensor([0.,1.,2.]),))",
"_____no_output_____"
],
[
"#export\ndef get_c(dls):\n if getattr(dls, 'c', False): return dls.c\n if getattr(getattr(dls.train, 'after_item', None), 'c', False): return dls.train.after_item.c\n if getattr(getattr(dls.train, 'after_batch', None), 'c', False): return dls.train.after_batch.c\n vocab = getattr(dls, 'vocab', [])\n if len(vocab) > 0 and is_listy(vocab[-1]): vocab = vocab[-1]\n return len(vocab)",
"_____no_output_____"
]
],
[
[
"## End-to-end dataset example with MNIST",
"_____no_output_____"
],
[
"Let's show how to use those functions to grab the mnist dataset in a `Datasets`. First we grab all the images.",
"_____no_output_____"
]
],
[
[
"path = untar_data(URLs.MNIST_TINY)\nitems = get_image_files(path)",
"_____no_output_____"
]
],
[
[
"Then we split between train and validation depending on the folder.",
"_____no_output_____"
]
],
[
[
"splitter = GrandparentSplitter()\nsplits = splitter(items)\ntrain,valid = (items[i] for i in splits)\ntrain[:3],valid[:3]",
"_____no_output_____"
]
],
[
[
"Our inputs are images that we open and convert to tensors, our targets are labeled depending on the parent directory and are categories.",
"_____no_output_____"
]
],
[
[
"from PIL import Image\ndef open_img(fn:Path): return Image.open(fn).copy()\ndef img2tensor(im:Image.Image): return TensorImage(array(im)[None])\n\ntfms = [[open_img, img2tensor],\n [parent_label, Categorize()]]\ntrain_ds = Datasets(train, tfms)",
"_____no_output_____"
],
[
"x,y = train_ds[3]\nxd,yd = decode_at(train_ds,3)\ntest_eq(parent_label(train[3]),yd)\ntest_eq(array(Image.open(train[3])),xd[0].numpy())",
"_____no_output_____"
],
[
"ax = show_at(train_ds, 3, cmap=\"Greys\", figsize=(1,1))",
"_____no_output_____"
],
[
"assert ax.title.get_text() in ('3','7')\ntest_fig_exists(ax)",
"_____no_output_____"
]
],
[
[
"## ToTensor -",
"_____no_output_____"
]
],
[
[
"#export\nclass ToTensor(Transform):\n \"Convert item to appropriate tensor class\"\n order = 5",
"_____no_output_____"
]
],
[
[
"## IntToFloatTensor -",
"_____no_output_____"
]
],
[
[
"# export\nclass IntToFloatTensor(Transform):\n \"Transform image to float tensor, optionally dividing by 255 (e.g. for images).\"\n order = 10 #Need to run after PIL transforms on the GPU\n def __init__(self, div=255., div_mask=1, split_idx=None, as_item=True):\n super().__init__(split_idx=split_idx,as_item=as_item)\n self.div,self.div_mask = div,div_mask\n\n def encodes(self, o:TensorImage): return o.float().div_(self.div)\n def encodes(self, o:TensorMask ): return o.div_(self.div_mask).long()\n def decodes(self, o:TensorImage): return ((o.clamp(0., 1.) * self.div).long()) if self.div else o",
"_____no_output_____"
],
[
"t = (TensorImage(tensor(1)),tensor(2).long(),TensorMask(tensor(3)))\ntfm = IntToFloatTensor(as_item=False)\nft = tfm(t)\ntest_eq(ft, [1./255, 2, 3])\ntest_eq(type(ft[0]), TensorImage)\ntest_eq(type(ft[2]), TensorMask)\ntest_eq(ft[0].type(),'torch.FloatTensor')\ntest_eq(ft[1].type(),'torch.LongTensor')\ntest_eq(ft[2].type(),'torch.LongTensor')",
"_____no_output_____"
]
],
[
[
"## Normalization -",
"_____no_output_____"
]
],
[
[
"# export\ndef broadcast_vec(dim, ndim, *t, cuda=True):\n \"Make a vector broadcastable over `dim` (out of `ndim` total) by prepending and appending unit axes\"\n v = [1]*ndim\n v[dim] = -1\n f = to_device if cuda else noop\n return [f(tensor(o).view(*v)) for o in t]",
"_____no_output_____"
],
[
"# export\n@docs\nclass Normalize(Transform):\n \"Normalize/denorm batch of `TensorImage`\"\n order=99\n def __init__(self, mean=None, std=None, axes=(0,2,3)): self.mean,self.std,self.axes = mean,std,axes\n\n @classmethod\n def from_stats(cls, mean, std, dim=1, ndim=4, cuda=True): return cls(*broadcast_vec(dim, ndim, mean, std, cuda=cuda))\n\n def setups(self, dl:DataLoader):\n if self.mean is None or self.std is None:\n x,*_ = dl.one_batch()\n self.mean,self.std = x.mean(self.axes, keepdim=True),x.std(self.axes, keepdim=True)+1e-7\n\n def encodes(self, x:TensorImage): return (x-self.mean) / self.std\n def decodes(self, x:TensorImage):\n f = to_cpu if x.device.type=='cpu' else noop\n return (x*f(self.std) + f(self.mean))\n\n _docs=dict(encodes=\"Normalize batch\", decodes=\"Denormalize batch\")",
"_____no_output_____"
],
[
"mean,std = [0.5]*3,[0.5]*3\nmean,std = broadcast_vec(1, 4, mean, std)\nbatch_tfms = [IntToFloatTensor, Normalize.from_stats(mean,std)]\ntdl = TfmdDL(train_ds, after_batch=batch_tfms, bs=4, device=default_device())",
"_____no_output_____"
],
[
"x,y = tdl.one_batch()\nxd,yd = tdl.decode((x,y))\n\ntest_eq(x.type(), 'torch.cuda.FloatTensor' if default_device().type=='cuda' else 'torch.FloatTensor')\ntest_eq(xd.type(), 'torch.LongTensor')\ntest_eq(type(x), TensorImage)\ntest_eq(type(y), TensorCategory)\nassert x.mean()<0.0\nassert x.std()>0.5\nassert 0<xd.float().mean()/255.<1\nassert 0<xd.float().std()/255.<0.5",
"_____no_output_____"
],
[
"#hide\nnrm = Normalize()\nbatch_tfms = [IntToFloatTensor(), nrm]\ntdl = TfmdDL(train_ds, after_batch=batch_tfms, bs=4)\nx,y = tdl.one_batch()\ntest_close(x.mean(), 0.0, 1e-4)\nassert x.std()>0.9, x.std()",
"_____no_output_____"
],
[
"#Just for visuals\nfrom fastai2.vision.core import *",
"_____no_output_____"
],
[
"tdl.show_batch((x,y))",
"_____no_output_____"
],
[
"x,y = torch.add(x,0),torch.add(y,0) #Lose type of tensors (to emulate predictions)\ntest_ne(type(x), TensorImage)\ntdl.show_batch((x,y), figsize=(4,4)) #Check that types are put back by dl.",
"_____no_output_____"
],
[
"#TODO: make the above check a proper test",
"_____no_output_____"
]
],
[
[
"## Export -",
"_____no_output_____"
]
],
[
[
"#hide\nfrom nbdev.export import notebook2script\nnotebook2script()",
"Converted 00_torch_core.ipynb.\nConverted 01_layers.ipynb.\nConverted 02_data.load.ipynb.\nConverted 03_data.core.ipynb.\nConverted 04_data.external.ipynb.\nConverted 05_data.transforms.ipynb.\nConverted 06_data.block.ipynb.\nConverted 07_vision.core.ipynb.\nConverted 08_vision.data.ipynb.\nConverted 09_vision.augment.ipynb.\nConverted 09b_vision.utils.ipynb.\nConverted 09c_vision.widgets.ipynb.\nConverted 10_tutorial.pets.ipynb.\nConverted 11_vision.models.xresnet.ipynb.\nConverted 12_optimizer.ipynb.\nConverted 13_learner.ipynb.\nConverted 13a_metrics.ipynb.\nConverted 14_callback.schedule.ipynb.\nConverted 14a_callback.data.ipynb.\nConverted 15_callback.hook.ipynb.\nConverted 15a_vision.models.unet.ipynb.\nConverted 16_callback.progress.ipynb.\nConverted 17_callback.tracker.ipynb.\nConverted 18_callback.fp16.ipynb.\nConverted 19_callback.mixup.ipynb.\nConverted 20_interpret.ipynb.\nConverted 20a_distributed.ipynb.\nConverted 21_vision.learner.ipynb.\nConverted 22_tutorial.imagenette.ipynb.\nConverted 23_tutorial.transfer_learning.ipynb.\nConverted 24_vision.gan.ipynb.\nConverted 30_text.core.ipynb.\nConverted 31_text.data.ipynb.\nConverted 32_text.models.awdlstm.ipynb.\nConverted 33_text.models.core.ipynb.\nConverted 34_callback.rnn.ipynb.\nConverted 35_tutorial.wikitext.ipynb.\nConverted 36_text.models.qrnn.ipynb.\nConverted 37_text.learner.ipynb.\nConverted 38_tutorial.ulmfit.ipynb.\nConverted 40_tabular.core.ipynb.\nConverted 41_tabular.data.ipynb.\nConverted 42_tabular.model.ipynb.\nConverted 43_tabular.learner.ipynb.\nConverted 45_collab.ipynb.\nConverted 50_datablock_examples.ipynb.\nConverted 60_medical.imaging.ipynb.\nConverted 65_medical.text.ipynb.\nConverted 70_callback.wandb.ipynb.\nConverted 71_callback.tensorboard.ipynb.\nConverted 97_test_utils.ipynb.\nConverted index.ipynb.\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9ebdef390c841c90da3877f34ecea62e3fd9f4 | 533,400 | ipynb | Jupyter Notebook | thin/work_lgbm_geoid_temp_country_trainAll/train.ipynb | bradyneal/covid-xprize-comp | d515f58b009a0a3e2421bc83e7ac893f3c3a1ece | [
"Apache-2.0"
] | null | null | null | thin/work_lgbm_geoid_temp_country_trainAll/train.ipynb | bradyneal/covid-xprize-comp | d515f58b009a0a3e2421bc83e7ac893f3c3a1ece | [
"Apache-2.0"
] | null | null | null | thin/work_lgbm_geoid_temp_country_trainAll/train.ipynb | bradyneal/covid-xprize-comp | d515f58b009a0a3e2421bc83e7ac893f3c3a1ece | [
"Apache-2.0"
] | null | null | null | 48.623519 | 115,036 | 0.66817 | [
[
[
"# Copyright 2020 (c) Cognizant Digital Business, Evolutionary AI. All rights reserved. Issued under the Apache 2.0 License.",
"_____no_output_____"
]
],
[
[
"# Example Predictor: Linear Rollout Predictor\n\nThis example contains basic functionality for training and evaluating a linear predictor that rolls out predictions day-by-day.\n\nFirst, a training data set is created from historical case and npi data.\n\nSecond, a linear model is trained to predict future cases from prior case data along with prior and future npi data.\nThe model is an off-the-shelf sklearn Lasso model, that uses a positive weight constraint to enforce the assumption that increased npis has a negative correlation with future cases.\n\nThird, a sample evaluation set is created, and the predictor is applied to this evaluation set to produce prediction results in the correct format.",
"_____no_output_____"
],
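[
"As a minimal sketch of the positive-weight Lasso described above (illustration only: `X_lagged_features` and `y_next_day_cases` are synthetic placeholder arrays, and the cells further down actually train LightGBM models rather than this Lasso):",
"_____no_output_____"
],
[
"# Minimal sketch of the positively-constrained Lasso described above.\n# `X_lagged_features` / `y_next_day_cases` are synthetic placeholders,\n# not variables defined elsewhere in this notebook.\nimport numpy as np\nfrom sklearn.linear_model import Lasso\n\nrng = np.random.default_rng(0)\nX_lagged_features = rng.random((100, 8))   # e.g. past cases and negated NPIs\ny_next_day_cases = rng.random(100)\n\nsketch_model = Lasso(alpha=0.1, positive=True, max_iter=10000)\nsketch_model.fit(X_lagged_features, y_next_day_cases)\nprint(sketch_model.coef_)                  # every coefficient is >= 0",
"_____no_output_____"
],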
[
"## Training",
"_____no_output_____"
]
],
[
[
"import pickle\nimport numpy as np\nimport pandas as pd\nfrom sklearn.linear_model import Lasso\nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"# Helpful function to compute mae\ndef mae(pred, true):\n return np.mean(np.abs(pred - true))",
"_____no_output_____"
]
],
[
[
"### Copy the data locally",
"_____no_output_____"
]
],
[
[
"# Main source for the training data\nDATA_URL = 'https://raw.githubusercontent.com/OxCGRT/covid-policy-tracker/master/data/OxCGRT_latest.csv'\n# Local file\nDATA_FILE = 'data/OxCGRT_latest.csv'",
"_____no_output_____"
],
[
"import os\nimport urllib.request\nif not os.path.exists('data'):\n os.mkdir('data')\nurllib.request.urlretrieve(DATA_URL, DATA_FILE)",
"_____no_output_____"
],
[
"# Load historical data from local file\ndf = pd.read_csv(DATA_FILE, \n parse_dates=['Date'],\n encoding=\"ISO-8859-1\",\n dtype={\"RegionName\": str,\n \"RegionCode\": str},\n error_bad_lines=False)",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
],
[
"# # For testing, restrict training data to that before a hypothetical predictor submission date\n# HYPOTHETICAL_SUBMISSION_DATE = np.datetime64(\"2020-07-31\")\n# df = df[df.Date <= HYPOTHETICAL_SUBMISSION_DATE]",
"_____no_output_____"
],
[
"# Add RegionID column that combines CountryName and RegionName for easier manipulation of data\ndf['GeoID'] = df['CountryName'] + '__' + df['RegionName'].astype(str)",
"_____no_output_____"
],
[
"# Add new cases column\ndf['NewCases'] = df.groupby('GeoID').ConfirmedCases.diff().fillna(0)",
"_____no_output_____"
],
[
"# Keep only columns of interest\nid_cols = ['CountryName',\n 'RegionName',\n 'GeoID',\n 'Date']\ncases_col = ['NewCases']\nnpi_cols = ['C1_School closing',\n 'C2_Workplace closing',\n 'C3_Cancel public events',\n 'C4_Restrictions on gatherings',\n 'C5_Close public transport',\n 'C6_Stay at home requirements',\n 'C7_Restrictions on internal movement',\n 'C8_International travel controls',\n 'H1_Public information campaigns',\n 'H2_Testing policy',\n 'H3_Contact tracing',\n 'H6_Facial Coverings']\ndf = df[id_cols + cases_col + npi_cols]",
"_____no_output_____"
],
[
"# Fill any missing case values by interpolation and setting NaNs to 0\ndf.update(df.groupby('GeoID').NewCases.apply(\n lambda group: group.interpolate()).fillna(0))",
"_____no_output_____"
],
[
"# Fill any missing NPIs by assuming they are the same as previous day\nfor npi_col in npi_cols:\n df.update(df.groupby('GeoID')[npi_col].ffill().fillna(0))",
"_____no_output_____"
],
[
"temp = pd.read_csv('temperature_data.csv')\ntemp['date_st'] = temp['Date'].apply(lambda e: e[5:])\ntemp['id'] = temp['GeoID'] + '_' + temp['date_st']\nid_temp = dict(zip( temp['id'], temp['temp'] ))\nid_holiday = dict(zip( temp['id'], temp['Holiday'] ))\ntf = temp[['date_st','temp']]\ntf = tf.groupby(['date_st']).mean().reset_index()\ndate_temp_avg = dict(zip( tf['date_st'], tf['temp'] ))\ntf = temp[['date_st','Holiday']]\ntf = tf.groupby(['date_st'])['Holiday'].agg(pd.Series.mode).reset_index()\ndate_holiday_avg = dict(zip( tf['date_st'], tf['Holiday'] ))\nid_temp",
"_____no_output_____"
],
[
"# Set number of past days to use to make predictions\nnb_lookback_days = 30\ndate_ls = []\ngeoid_ls = []\ncountry_ls = []\nnewcase_ls = []\n# Create training data across all countries for predicting one day ahead\nX_cols = cases_col + npi_cols\ny_col = cases_col\nX_samples = []\ny_samples = []\ngeo_ids = df.GeoID.unique()\ntrain_geo_ids = [e for e in geo_ids]\ngeoid_arr = np.zeros(len(train_geo_ids)+1)\nfor g in geo_ids:\n gdf = df[df.GeoID == g]\n all_case_data = np.array(gdf[cases_col])\n all_npi_data = np.array(gdf[npi_cols])\n\n # Create one sample for each day where we have enough data\n # Each sample consists of cases and npis for previous nb_lookback_days\n nb_total_days = len(gdf)\n for d in range(nb_lookback_days, nb_total_days - 1):\n X_cases = all_case_data[d-nb_lookback_days:d]\n\n # Take negative of npis to support positive\n # weight constraint in Lasso.\n X_npis = -all_npi_data[d - nb_lookback_days:d]\n \n date_ls += [ list(gdf['Date'])[d] ]\n geoid_ls += [ list(gdf['GeoID'])[d] ]\n country_ls += [ list(gdf['CountryName'])[d] ] \n newcase_ls += [ list(gdf['NewCases'])[d] ] \n \n date_st = str(date_ls[-1])[5:10] \n id_ = geoid_ls[-1] + '_' + date_st\n\n temperature = date_temp_avg[date_st]\n holiday = date_holiday_avg[date_st]\n if id_ in id_temp:\n temperature = id_temp[id_]\n holiday = id_holiday[id_] \n \n # Flatten all input data so it fits Lasso input format.\n geoid_arr = np.zeros(len(train_geo_ids)+1)\n geoid_arr[ train_geo_ids.index(g) ] = 1\n X_sample = np.concatenate([[temperature,holiday], X_cases.flatten(), #geoid_arr,\n X_npis.flatten()])\n y_sample = all_case_data[d]\n X_samples.append(X_sample)\n y_samples.append(y_sample)\n\n\nX_samples = np.array(X_samples)\ny_samples = np.array(y_samples).flatten()\ny_samples = np.maximum(y_samples, 0) # Don't predict negative cases\nwith open('train_geo_ids.txt', 'w') as f:\n f.write('\\n'.join(train_geo_ids))\n \nprint(X_samples.shape)",
"(90160, 392)\n"
],
[
"len(set(geoid_ls))",
"_____no_output_____"
],
[
"# Split data into train and test sets\nX_train_all, y_train_all = X_samples, y_samples\nprint(X_train_all.shape, y_train_all.shape)",
"(90160, 392) (90160,)\n"
],
[
"# geoid_ls = np.array(geoid_ls)",
"_____no_output_____"
],
[
"!pip install lightgbm",
"Requirement already satisfied: lightgbm in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (3.1.1)\nRequirement already satisfied: wheel in /home/thinng/.local/lib/python3.7/site-packages (from lightgbm) (0.33.5)\nRequirement already satisfied: scikit-learn!=0.22.0 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from lightgbm) (0.23.2)\nRequirement already satisfied: numpy in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from lightgbm) (1.18.5)\nRequirement already satisfied: scipy in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from lightgbm) (1.5.2)\nRequirement already satisfied: threadpoolctl>=2.0.0 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from scikit-learn!=0.22.0->lightgbm) (2.1.0)\nRequirement already satisfied: numpy in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from lightgbm) (1.18.5)\nRequirement already satisfied: scipy in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from lightgbm) (1.5.2)\nRequirement already satisfied: joblib>=0.11 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from scikit-learn!=0.22.0->lightgbm) (0.17.0)\nRequirement already satisfied: numpy in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from lightgbm) (1.18.5)\n"
],
[
"import random\ndef seed_everything(seed=0):\n random.seed(seed)\n np.random.seed(seed)\nseed_everything(42) ",
"_____no_output_____"
],
[
"!pip install optuna",
"Requirement already satisfied: optuna in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (2.3.0)\nRequirement already satisfied: alembic in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from optuna) (1.4.3)\nRequirement already satisfied: cmaes>=0.6.0 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from optuna) (0.7.0)\nRequirement already satisfied: scipy!=1.4.0 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from optuna) (1.5.2)\nRequirement already satisfied: colorlog in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from optuna) (4.6.2)\nRequirement already satisfied: sqlalchemy>=1.1.0 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from optuna) (1.3.20)\nRequirement already satisfied: numpy in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from optuna) (1.18.5)\nRequirement already satisfied: tqdm in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from optuna) (4.54.1)\nRequirement already satisfied: cliff in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from optuna) (3.5.0)\nRequirement already satisfied: joblib in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from optuna) (0.17.0)\nRequirement already satisfied: packaging>=20.0 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from optuna) (20.8)\nRequirement already satisfied: python-editor>=0.3 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from alembic->optuna) (1.0.4)\nRequirement already satisfied: Mako in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from alembic->optuna) (1.1.3)\nRequirement already satisfied: python-dateutil in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from alembic->optuna) (2.8.1)\nRequirement already satisfied: sqlalchemy>=1.1.0 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from optuna) (1.3.20)\nRequirement already satisfied: pbr!=2.1.0,>=2.0.0 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from cliff->optuna) (5.5.1)\nRequirement already satisfied: stevedore>=2.0.1 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from cliff->optuna) (3.3.0)\nRequirement already satisfied: PrettyTable<0.8,>=0.7.2 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from cliff->optuna) (0.7.2)\nRequirement already satisfied: six>=1.10.0 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from cliff->optuna) (1.15.0)\nRequirement already satisfied: pyparsing>=2.1.0 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from cliff->optuna) (2.4.7)\nRequirement already satisfied: PyYAML>=3.12 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from cliff->optuna) (5.3.1)\nRequirement already satisfied: cmd2!=0.8.3,>=0.8.0 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from cliff->optuna) (1.4.0)\nRequirement already satisfied: numpy in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from optuna) (1.18.5)\nRequirement already satisfied: wcwidth>=0.1.7 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from cmd2!=0.8.3,>=0.8.0->cliff->optuna) (0.2.5)\nRequirement already satisfied: importlib-metadata>=1.6.0 in 
/home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from cmd2!=0.8.3,>=0.8.0->cliff->optuna) (3.3.0)\nRequirement already satisfied: colorama>=0.3.7 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from cmd2!=0.8.3,>=0.8.0->cliff->optuna) (0.4.4)\nRequirement already satisfied: attrs>=16.3.0 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from cmd2!=0.8.3,>=0.8.0->cliff->optuna) (20.3.0)\nRequirement already satisfied: pyperclip>=1.6 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from cmd2!=0.8.3,>=0.8.0->cliff->optuna) (1.8.1)\nRequirement already satisfied: zipp>=0.5 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from importlib-metadata>=1.6.0->cmd2!=0.8.3,>=0.8.0->cliff->optuna) (3.4.0)\nRequirement already satisfied: typing-extensions>=3.6.4 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from importlib-metadata>=1.6.0->cmd2!=0.8.3,>=0.8.0->cliff->optuna) (3.7.4.3)\nRequirement already satisfied: MarkupSafe>=0.9.2 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from Mako->alembic->optuna) (1.1.1)\nRequirement already satisfied: pyparsing>=2.1.0 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from cliff->optuna) (2.4.7)\nRequirement already satisfied: six>=1.10.0 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from cliff->optuna) (1.15.0)\nRequirement already satisfied: numpy in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from optuna) (1.18.5)\nRequirement already satisfied: pbr!=2.1.0,>=2.0.0 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from cliff->optuna) (5.5.1)\nRequirement already satisfied: importlib-metadata>=1.6.0 in /home/thinng/miniconda3/envs/xprize_env_37/lib/python3.7/site-packages (from cmd2!=0.8.3,>=0.8.0->cliff->optuna) (3.3.0)\n"
],
[
"# import warnings\n# warnings.filterwarnings('ignore')\n# warnings.simplefilter('ignore')\n\n# import optuna.integration.lightgbm as lgb\n# import lightgbm as lgb\n\n\n# params = {\n# 'boosting_type': 'gbdt',\n# 'objective': 'regression',\n# 'metric': 'rmse',\n# 'subsample': 0.5,\n# 'subsample_freq': 1,\n# 'learning_rate': 0.03,\n# 'num_leaves': 2 ** 5 - 1,#11\n# 'min_data_in_leaf': 2 ** 6 - 1,#12\n# 'feature_fraction': 0.5,\n# 'max_bin': 100,\n# 'n_estimators': 1000,#140,#140,#0,\n# 'boost_from_average': False,\n# 'verbose': -1,\n# }\n\n\n\n\n# for ii in range(len(train_geo_ids)):\n# g = train_geo_ids[ii]\n# MODEL_FILE = 'models/lgb_' + g + '.pkl'\n# if True:#not(os.path.exists(MODEL_FILE)):\n# tr_idx = [i for i in range(len(geoid_ls)) if geoid_ls[i]==g ]\n# tr_idx = np.array(tr_idx)\n# X_train, y_train = X_train_all[tr_idx,:], y_train_all[tr_idx]\n\n# dtrain = lgb.Dataset(X_train, label=y_train)\n# lgb_model_tune = lgb.train(params, dtrain, valid_sets=[dtrain], early_stopping_rounds=100,verbose_eval=100 )\n\n# with open(MODEL_FILE, 'wb') as model_file:\n# pickle.dump(lgb_model_tune, model_file) \n\n# if (ii%10==0):\n# print('have done ', ii//10, ' models')",
"_____no_output_____"
]
],
[
[
"## Evaluation\n\nNow that the predictor has been trained and saved, this section contains the functionality for evaluating it on sample evaluation data.",
"_____no_output_____"
]
],
[
[
"# Reload the module to get the latest changes\nimport predict\nfrom importlib import reload\nreload(predict)\nfrom predict import predict_df",
"_____no_output_____"
],
[
"# %%time\npreds_df = predict_df(\"2020-08-01\", \"2020-08-31\", path_to_ips_file=\"data/2020-09-30_historical_ip.csv\", verbose=True)",
"\nPredicting for Aruba__nan\n2020-08-01: 7.114327212934471\n2020-08-02: 7.45224540483501\n2020-08-03: 13.310068462921103\n2020-08-04: 18.79483774180352\n2020-08-05: 21.535192928152743\n2020-08-06: 19.786352427397812\n2020-08-07: 21.02887041840427\n2020-08-08: 23.289079634126754\n2020-08-09: 28.361305541899288\n2020-08-10: 32.605677480123134\n2020-08-11: 28.515626920611528\n2020-08-12: 28.713586792493484\n2020-08-13: 28.849844701942065\n2020-08-14: 28.005894458380613\n2020-08-15: 27.41740473946046\n2020-08-16: 27.9872962180511\n2020-08-17: 26.643563709033057\n2020-08-18: 25.14079257565699\n2020-08-19: 24.955642621155693\n2020-08-20: 24.99759210701792\n2020-08-21: 23.567028127381082\n2020-08-22: 23.21290086180062\n2020-08-23: 25.854730137351748\n2020-08-24: 26.442060985719706\n2020-08-25: 26.81607918970298\n2020-08-26: 27.951928805127086\n2020-08-27: 27.4509237969649\n2020-08-28: 26.359648878200545\n2020-08-29: 25.47427810795594\n2020-08-30: 25.76988151553722\n2020-08-31: 26.260569291996593\n\nPredicting for Afghanistan__nan\n2020-08-01: 324.2109730001094\n2020-08-02: 353.8341763931608\n2020-08-03: 356.3037299998495\n2020-08-04: 362.1177865748108\n2020-08-05: 346.2137180063426\n2020-08-06: 309.7540387699766\n2020-08-07: 353.1070305006333\n2020-08-08: 392.24163686170937\n2020-08-09: 415.3809475677925\n2020-08-10: 413.3519013889973\n2020-08-11: 417.6571537686939\n2020-08-12: 405.80520812536065\n2020-08-13: 360.34429583114127\n2020-08-14: 380.3126535217916\n2020-08-15: 402.6958232725434\n2020-08-16: 400.96099763090615\n2020-08-17: 404.30542214233\n2020-08-18: 406.5918175483186\n2020-08-19: 393.97477705590757\n2020-08-20: 367.36705506990967\n2020-08-21: 383.2525696779116\n2020-08-22: 400.1270958071879\n2020-08-23: 405.0867411608849\n2020-08-24: 403.98415222690045\n2020-08-25: 403.94345527730485\n2020-08-26: 379.8941638552115\n2020-08-27: 363.3923245620781\n2020-08-28: 388.519427714377\n2020-08-29: 403.91487501948535\n2020-08-30: 408.3021585058271\n2020-08-31: 414.87925379463326\n\nPredicting for Angola__nan\n2020-08-01: 77.14273598380287\n2020-08-02: 68.30155350732976\n2020-08-03: 69.15628181960032\n2020-08-04: 77.67054812241402\n2020-08-05: 80.96332574192553\n2020-08-06: 82.12979754155549\n2020-08-07: 92.9205756310232\n2020-08-08: 81.85088543534411\n2020-08-09: 93.66438584815326\n2020-08-10: 96.67365486359684\n2020-08-11: 83.57551840232415\n2020-08-12: 87.21611264830845\n2020-08-13: 88.9922794971631\n2020-08-14: 95.01897864262968\n2020-08-15: 95.2660773092567\n2020-08-16: 87.57091467681136\n2020-08-17: 91.13114602847187\n2020-08-18: 94.74558622775311\n2020-08-19: 87.45572725441485\n2020-08-20: 84.5059129594069\n2020-08-21: 85.40067371748545\n2020-08-22: 94.05645610387997\n2020-08-23: 99.30062488317489\n2020-08-24: 95.55040464952718\n2020-08-25: 90.14169307039406\n2020-08-26: 92.76689764469975\n2020-08-27: 90.35465407488454\n2020-08-28: 104.16073458959883\n2020-08-29: 96.33561148212215\n2020-08-30: 87.09170430937502\n2020-08-31: 97.32907098666689\n\nPredicting for Albania__nan\n2020-08-01: 0\n2020-08-02: 0\n2020-08-03: 0\n2020-08-04: 0\n2020-08-05: 0\n2020-08-06: 0\n2020-08-07: 0\n2020-08-08: 0\n2020-08-09: 0\n2020-08-10: 0\n2020-08-11: 0\n2020-08-12: 0\n2020-08-13: 0\n2020-08-14: 0\n2020-08-15: 0\n2020-08-16: 0\n2020-08-17: 0\n2020-08-18: 0\n2020-08-19: 0\n2020-08-20: 0\n2020-08-21: 0\n2020-08-22: 0\n2020-08-23: 0\n2020-08-24: 0\n2020-08-25: 0\n2020-08-26: 0\n2020-08-27: 0\n2020-08-28: 0\n2020-08-29: 0\n2020-08-30: 0\n2020-08-31: 0\n\nPredicting for Andorra__nan\n2020-08-01: 0\n2020-08-02: 
0\n2020-08-03: 0\n2020-08-04: 0\n2020-08-05: 0\n2020-08-06: 0\n2020-08-07: 0\n2020-08-08: 0\n2020-08-09: 0\n2020-08-10: 0\n2020-08-11: 0\n2020-08-12: 0\n2020-08-13: 0\n2020-08-14: 0\n2020-08-15: 0\n2020-08-16: 0\n2020-08-17: 0\n2020-08-18: 0\n2020-08-19: 0\n2020-08-20: 0\n2020-08-21: 0\n2020-08-22: 0\n2020-08-23: 0\n2020-08-24: 0\n2020-08-25: 0\n2020-08-26: 5.4328769365812635\n2020-08-27: 0\n2020-08-28: 0\n2020-08-29: 2.6562701719208883\n2020-08-30: 0\n2020-08-31: 3.450183920698864\n\nPredicting for United Arab Emirates__nan\n2020-08-01: 375.08300800829255\n2020-08-02: 363.5426013509159\n2020-08-03: 315.7619450018844\n2020-08-04: 271.9496769543536\n2020-08-05: 237.87117943433788\n2020-08-06: 172.96439069018493\n2020-08-07: 151.2701517208563\n2020-08-08: 137.65640777155807\n2020-08-09: 138.86470095698246\n2020-08-10: 139.6327394417575\n2020-08-11: 144.14314280823362\n2020-08-12: 129.49427396517464\n2020-08-13: 133.6282108960917\n2020-08-14: 152.06924877334416\n2020-08-15: 150.49363136557108\n2020-08-16: 148.65210937849824\n2020-08-17: 154.55708839806508\n2020-08-18: 153.1585775816838\n2020-08-19: 151.02113202500993\n2020-08-20: 114.5844649678747\n2020-08-21: 105.15199691426771\n2020-08-22: 79.26036462189897\n2020-08-23: 79.55240681088448\n2020-08-24: 78.44122157952104\n2020-08-25: 83.44135437690828\n2020-08-26: 67.09227460864118\n2020-08-27: 60.34431002687117\n2020-08-28: 54.11867929429302\n2020-08-29: 56.88409266674233\n2020-08-30: 56.05953311156465\n2020-08-31: 52.139555715293874\n\nPredicting for Argentina__nan\n2020-08-01: 3441.820876238529\n2020-08-02: 3019.6752232826952\n2020-08-03: 2983.5060510034646\n2020-08-04: 3574.18318097927\n2020-08-05: 4053.7466299235557\n2020-08-06: 4387.329352885457\n2020-08-07: 3227.6474168959203\n2020-08-08: 1863.4064478372434\n2020-08-09: 1633.3830330081728\n2020-08-10: 2732.4186580761075\n2020-08-11: 3321.0438359941677\n2020-08-12: 2708.9178908569393\n2020-08-13: 1248.4778794231424\n2020-08-14: 1104.2137112533821\n2020-08-15: 908.9832278922291\n2020-08-16: 1169.301908956317\n2020-08-17: 1413.6732312751774\n2020-08-18: 1934.4349615112012\n2020-08-19: 1779.3631466314373\n2020-08-20: 1152.666288709713\n2020-08-21: 978.9002455206229\n2020-08-22: 778.230869943055\n2020-08-23: 1149.4986611788945\n2020-08-24: 1825.7238869319665\n2020-08-25: 1723.8134306877168\n2020-08-26: 1409.2627980313835\n2020-08-27: 1425.500733363272\n2020-08-28: 1609.1914395249523\n2020-08-29: 1023.3621325681354\n2020-08-30: 1223.0777500454828\n2020-08-31: 2053.654776014501\n\nPredicting for Australia__nan\n2020-08-01: 21.634520115369195\n2020-08-02: 21.790361160082536\n2020-08-03: 18.194694644583155\n2020-08-04: 20.147419199618323\n2020-08-05: 19.879584241113175\n2020-08-06: 21.49934643137153\n2020-08-07: 22.86250754039123\n2020-08-08: 24.053196806190815\n2020-08-09: 23.20541487796964\n2020-08-10: 21.18886365977986\n2020-08-11: 25.198356049446755\n2020-08-12: 45.93185246706351\n2020-08-13: 74.2007435597812\n2020-08-14: 100.04168408962316\n2020-08-15: 146.8359781619169\n2020-08-16: 161.1099760341342\n2020-08-17: 180.71461300654022\n2020-08-18: 182.02339207291146\n2020-08-19: 197.9820456018174\n2020-08-20: 209.39940767321414\n2020-08-21: 214.47818222877746\n2020-08-22: 217.15508454979218\n2020-08-23: 219.65322563000396\n2020-08-24: 220.32273406796426\n2020-08-25: 216.29813748287\n2020-08-26: 211.64060263905554\n2020-08-27: 210.47543503983633\n2020-08-28: 207.4475576139702\n2020-08-29: 198.45918386026102\n2020-08-30: 190.05213983219306\n2020-08-31: 183.07379918696486\n\nPredicting for 
Austria__nan\n2020-08-01: 0\n2020-08-02: 0\n2020-08-03: 0\n2020-08-04: 0\n2020-08-05: 0\n2020-08-06: 0\n2020-08-07: 0\n2020-08-08: 0\n2020-08-09: 0\n2020-08-10: 0\n2020-08-11: 0\n2020-08-12: 0\n2020-08-13: 0\n2020-08-14: 0\n2020-08-15: 0\n2020-08-16: 0\n2020-08-17: 0\n2020-08-18: 0\n2020-08-19: 0\n2020-08-20: 0\n2020-08-21: 0\n2020-08-22: 0\n2020-08-23: 0\n2020-08-24: 0\n2020-08-25: 0\n2020-08-26: 0\n2020-08-27: 0\n2020-08-28: 0\n2020-08-29: 0\n2020-08-30: 0\n2020-08-31: 0\n\nPredicting for Azerbaijan__nan\n2020-08-01: 0\n2020-08-02: 0\n2020-08-03: 0\n2020-08-04: 0\n2020-08-05: 0\n2020-08-06: 0\n2020-08-07: 0\n2020-08-08: 0\n2020-08-09: 0\n2020-08-10: 0\n2020-08-11: 0\n2020-08-12: 0\n2020-08-13: 0\n2020-08-14: 0\n2020-08-15: 0\n2020-08-16: 0\n2020-08-17: 0\n2020-08-18: 0\n2020-08-19: 0\n2020-08-20: 0\n2020-08-21: 0\n2020-08-22: 0\n2020-08-23: 0\n2020-08-24: 0\n2020-08-25: 0\n2020-08-26: 0\n2020-08-27: 0\n2020-08-28: 0\n2020-08-29: 0\n2020-08-30: 0\n2020-08-31: 0\n\nPredicting for Burundi__nan\n2020-08-01: 0.5994190259199973\n2020-08-02: 0\n2020-08-03: 0.007817582683924202\n2020-08-04: 0\n2020-08-05: 0\n2020-08-06: 0\n2020-08-07: 0\n2020-08-08: 0.06926573572947428\n2020-08-09: 0\n2020-08-10: 0\n2020-08-11: 0.044661049967092054\n2020-08-12: 0\n2020-08-13: 0\n2020-08-14: 0\n2020-08-15: 0\n2020-08-16: 0\n2020-08-17: 0\n2020-08-18: 0.19206089068469015\n2020-08-19: 0\n2020-08-20: 0\n2020-08-21: 0.19079586684809205\n2020-08-22: 0\n2020-08-23: 0\n2020-08-24: 0\n"
],
[
"# Check the predictions\npreds_df.head()",
"_____no_output_____"
]
],
[
[
"# Validation\nThis is how the predictor is going to be called during the competition. \n!!! PLEASE DO NOT CHANGE THE API !!!",
"_____no_output_____"
]
],
[
[
"!python predict.py -s 2020-08-01 -e 2020-08-04 -ip data/2020-09-30_historical_ip.csv -o predictions/2020-08-01_2020-08-04.csv",
"Generating predictions from 2020-08-01 to 2020-08-04...\nSaved predictions to predictions/2020-08-01_2020-08-04.csv\nDone!\n"
],
[
"!head predictions/2020-08-01_2020-08-04.csv",
"CountryName,RegionName,Date,PredictedDailyNewCases\r\nAruba,,2020-08-01,7.114327212934471\r\nAruba,,2020-08-02,7.45224540483501\r\nAruba,,2020-08-03,13.310068462921103\r\nAruba,,2020-08-04,18.79483774180352\r\nAfghanistan,,2020-08-01,324.2109730001094\r\nAfghanistan,,2020-08-02,353.8341763931608\r\nAfghanistan,,2020-08-03,356.3037299998495\r\nAfghanistan,,2020-08-04,362.1177865748108\r\nAngola,,2020-08-01,77.14273598380287\r\n"
]
],
[
[
"# Test cases\nWe can generate a prediction file. Let's validate a few cases...",
"_____no_output_____"
]
],
[
[
"import sys,os,os.path\nsys.path.append(os.path.expanduser('/home/thinng/code/2020/covid-xprize/'))",
"_____no_output_____"
],
[
"import os\nfrom covid_xprize.validation.predictor_validation import validate_submission\n\ndef validate(start_date, end_date, ip_file, output_file):\n # First, delete any potential old file\n try:\n os.remove(output_file)\n except OSError:\n pass\n \n # Then generate the prediction, calling the official API\n !python predict.py -s {start_date} -e {end_date} -ip {ip_file} -o {output_file}\n \n # And validate it\n errors = validate_submission(start_date, end_date, ip_file, output_file)\n if errors:\n for error in errors:\n print(error)\n else:\n print(\"All good!\")",
"_____no_output_____"
]
],
[
[
"## 4 days, no gap\n- All countries and regions\n- Official number of cases is known up to start_date\n- Intervention Plans are the official ones",
"_____no_output_____"
]
],
[
[
"validate(start_date=\"2020-08-01\",\n end_date=\"2020-08-04\",\n ip_file=\"data/2020-09-30_historical_ip.csv\",\n output_file=\"predictions/val_4_days.csv\")",
"Generating predictions from 2020-08-01 to 2020-08-04...\nSaved predictions to predictions/val_4_days.csv\nDone!\nAll good!\n"
]
],
[
[
"## 1 month in the future\n- 2 countries only\n- there's a gap between date of last known number of cases and start_date\n- For future dates, Intervention Plans contains scenarios for which predictions are requested to answer the question: what will happen if we apply these plans?",
"_____no_output_____"
]
],
[
[
"# %%time\nvalidate(start_date=\"2021-01-01\",\n end_date=\"2021-01-31\",\n ip_file=\"data/future_ip.csv\",\n output_file=\"predictions/val_1_month_future.csv\")",
"Generating predictions from 2021-01-01 to 2021-01-31...\nSaved predictions to predictions/val_1_month_future.csv\nDone!\nAll good!\n"
]
],
[
[
"## 180 days, from a future date, all countries and regions\n- Prediction start date is 1 week from now. (i.e. assuming submission date is 1 week from now) \n- Prediction end date is 6 months after start date. \n- Prediction is requested for all available countries and regions. \n- Intervention plan scenario: freeze last known intervention plans for each country and region. \n\nAs the number of cases is not known yet between today and start date, but the model relies on them, the model has to predict them in order to use them. \nThis test is the most demanding test. It should take less than 1 hour to generate the prediction file.",
"_____no_output_____"
],
[
"### Generate the scenario",
"_____no_output_____"
]
],
[
[
"from datetime import datetime, timedelta\n\nstart_date = datetime.now() + timedelta(days=7)\nstart_date_str = start_date.strftime('%Y-%m-%d')\nend_date = start_date + timedelta(days=180)\nend_date_str = end_date.strftime('%Y-%m-%d')\nprint(f\"Start date: {start_date_str}\")\nprint(f\"End date: {end_date_str}\")",
"Start date: 2020-12-25\nEnd date: 2021-06-23\n"
],
[
"from covid_xprize.validation.scenario_generator import get_raw_data, generate_scenario, NPI_COLUMNS\nDATA_FILE = 'data/OxCGRT_latest.csv'\nlatest_df = get_raw_data(DATA_FILE, latest=True)\nscenario_df = generate_scenario(start_date_str, end_date_str, latest_df, countries=None, scenario=\"Freeze\")\nscenario_file = \"predictions/180_days_future_scenario.csv\"\nscenario_df.to_csv(scenario_file, index=False)\nprint(f\"Saved scenario to {scenario_file}\")",
"Saved scenario to predictions/180_days_future_scenario.csv\n"
]
],
[
[
"### Check it",
"_____no_output_____"
]
],
[
[
"# %%time\nvalidate(start_date=start_date_str,\n end_date=end_date_str,\n ip_file=scenario_file,\n output_file=\"predictions/val_6_month_future.csv\")",
"Generating predictions from 2020-12-25 to 2021-06-23...\nSaved predictions to predictions/val_6_month_future.csv\nDone!\nAll good!\n"
]
],
[
[
"## predict zero scenario & plot \"United States\"/\"Canada\"/\"Argentina\" ",
"_____no_output_____"
]
],
[
[
"zero_scenario_df = pd.read_csv(scenario_file)\ncols = list(zero_scenario_df.columns)[3:]\nfor col in cols:\n zero_scenario_df[col].values[:] = 0\nzero_scenario_file = \"predictions/180_days_future_scenario_zero.csv\"\nzero_scenario_df.to_csv(zero_scenario_file, index=False)\nprint(f\"Saved scenario to {zero_scenario_file}\")",
"Saved scenario to predictions/180_days_future_scenario_zero.csv\n"
],
[
"# %%time\nvalidate(start_date=start_date_str,\n end_date=end_date_str,\n ip_file=zero_scenario_file,\n output_file=\"predictions/val_6_month_future_zero.csv\")",
"Generating predictions from 2020-12-25 to 2021-06-23...\nSaved predictions to predictions/val_6_month_future_zero.csv\nDone!\nAll good!\n"
],
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport datetime\npf = pd.read_csv('predictions/val_6_month_future_zero.csv')\npf = pf[['CountryName','Date','PredictedDailyNewCases']]\npf = pf.groupby(['CountryName','Date']).mean()\npf = pf.reset_index()\n\ntf = pf[pf['CountryName']=='United States']\nxdates = list(tf['Date'])\nxdates = [datetime.datetime.strptime(date,'%Y-%m-%d') for date in xdates]\nusa = list(tf['PredictedDailyNewCases'])\n\ntf = pf[pf['CountryName']=='Canada']\ncan = list(tf['PredictedDailyNewCases'])\n\ntf = pf[pf['CountryName']=='Argentina']\narg = list(tf['PredictedDailyNewCases'])\n\nfig = plt.figure(figsize=(20,10))\nax = plt.subplot(111)\nplt.plot(xdates, usa,label='United States')\nplt.plot(xdates, can,label='Canada')\nplt.plot(xdates, arg,label='Argentina')\nplt.legend(loc=4)\nplt.grid()\nplt.show()",
"_____no_output_____"
]
]
] | [
"raw",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"raw"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ec9ebf3abcc253595be42a55b8f342f61cbfcff7 | 183,521 | ipynb | Jupyter Notebook | src/training/pose_3d/training.ipynb | johnnychants/fastpose | 76f47007d222e1ce77440a4de0b754081cad7292 | [
"Apache-2.0",
"MIT"
] | null | null | null | src/training/pose_3d/training.ipynb | johnnychants/fastpose | 76f47007d222e1ce77440a4de0b754081cad7292 | [
"Apache-2.0",
"MIT"
] | null | null | null | src/training/pose_3d/training.ipynb | johnnychants/fastpose | 76f47007d222e1ce77440a4de0b754081cad7292 | [
"Apache-2.0",
"MIT"
] | null | null | null | 104.629989 | 9,024 | 0.795037 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ec9ecba6cbcc6a5af33ab2a23dfe1de986d3c2b9 | 7,227 | ipynb | Jupyter Notebook | final_protocol.ipynb | Cerebral-Language-Innovation/Brainwaves-to-Letters | 892fa88ec82b664c9145dc2cd4aebc7413950f8d | [
"MIT"
] | 5 | 2022-02-08T23:08:41.000Z | 2022-03-24T22:01:18.000Z | final_protocol.ipynb | Cerebral-Language-Innovation/Brainwaves-to-Letters | 892fa88ec82b664c9145dc2cd4aebc7413950f8d | [
"MIT"
] | null | null | null | final_protocol.ipynb | Cerebral-Language-Innovation/Brainwaves-to-Letters | 892fa88ec82b664c9145dc2cd4aebc7413950f8d | [
"MIT"
] | 1 | 2022-02-08T23:09:02.000Z | 2022-02-08T23:09:02.000Z | 32.263393 | 208 | 0.569393 | [
[
[
"Welcome to QCLI's experimental data collection Jupyter Notebook.\n\nSome code from this Jupyter Notebook is adapted from the uvicMUSE [GitHub](https://github.com/bardiabarabadi/uvicMUSE).\nThe following Jupyter Notebook was designed to fit the experimental protocol defined in the [Protocol Instructions](https://docs.google.com/document/d/19vUV2iof93ZsLFlNQnM_sMVbU8yAPQtb6NgvHowk7kU/edit).",
"_____no_output_____"
]
],
[
[
"# Imports from protocol_functions\nfrom protocol_functions import user_info_input\nfrom protocol_functions import select_operating_system\nfrom protocol_functions import collect_sample\n\n# Importing sys for terminal related operations\nimport sys",
"_____no_output_____"
]
],
[
[
"User inputs their name, sample action, and sample length for file naming.\n\nNaming Convention: <\"action name\">\\_<\"sample length\">\\_<\"user number\">",
"_____no_output_____"
]
],
[
[
"person_id = user_info_input.input_name()\naction_name = user_info_input.input_action()\nsample_length = user_info_input.sample_length()\nfile_name = action_name + \"_\" + str(sample_length) + \"_\" + str(person_id)\n",
"_____no_output_____"
]
],
[
[
"If necessary, installing the correct pylsl and uvicMuse packages depending on the specified operating system. Also installing pandas for data collection if required.",
"_____no_output_____"
]
],
[
[
"def install_packages():\n operating_system = select_operating_system.run() # Function call to select operating system\n !{sys.executable} -m pip install pandas\n if (operating_system == 1) or (operating_system == 3):\n !{sys.executable} -m pip install pylsl pygatt # pylsl for Windows and macOS\n else:\n !{sys.executable} -m pip install pylsl==1.10.5 pygatt # pylsl for Linux\n if (operating_system == 1) or (operating_system == 2):\n !{sys.executable} -m pip install --force-reinstall uvicmuse==3.3.3 # uvicMuse for Windows & Linux (with dongle)\n else:\n !{sys.executable} -m pip install --force-reinstall uvicmuse==5.3.3 # uvicMuse for macOS\n\nhas_packages = user_info_input.install_packages()\nif not has_packages:\n install_packages()",
"_____no_output_____"
]
],
[
[
"Starting uvicMUSE in the terminal.\n\n_Muse Connection Steps (via uvicMUSE)_\n1. Ensure the Muse is charged and is in pairing mode.\n2. Click \"Search\" in uvicMuse to look for nearby Muses.\n3. After the Muse is found, select it from the list provided by uvicMuse and click \"Connect\"",
"_____no_output_____"
]
],
[
[
"!uvicmuse",
"[INFO ] [Logger ] Record log in /Users/lsandler/.kivy/logs/kivy_22-03-08_3.txt\r\n[INFO ] [Kivy ] v2.0.0rc4, git-d74461b, 20201015\r\n[INFO ] [Kivy ] Installed at \"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/kivy/__init__.py\"\r\n[INFO ] [Python ] v3.8.7 (v3.8.7:6503f05dd5, Dec 21 2020, 12:45:15) \r\n[Clang 6.0 (clang-600.0.57)]\r\n[INFO ] [Python ] Interpreter at \"/Library/Frameworks/Python.framework/Versions/3.8/bin/python3\"\r\n[INFO ] [Factory ] 186 symbols loaded\r\n[INFO ] [Image ] Providers: img_tex, img_imageio, img_dds, img_sdl2, img_pil (img_ffpyplayer ignored)\r\n[INFO ] [Text ] Provider: sdl2\r\n[INFO ] [Window ] Provider: sdl2\r\n[INFO ] [GL ] Using the \"OpenGL ES 2\" graphics system\r\n[INFO ] [GL ] Backend used <sdl2>\r\n[INFO ] [GL ] OpenGL version <b'2.1 INTEL-18.4.6'>\r\n[INFO ] [GL ] OpenGL vendor <b'Intel Inc.'>\r\n[INFO ] [GL ] OpenGL renderer <b'Intel(R) Iris(TM) Plus Graphics OpenGL Engine'>\r\n[INFO ] [GL ] OpenGL parsed version: 2, 1\r\n[INFO ] [GL ] Shading version <b'1.20'>\r\n[INFO ] [GL ] Texture max size <16384>\r\n[INFO ] [GL ] Texture max units <16>\r\n[INFO ] [Window ] auto add sdl2 input provider\r\n[INFO ] [Window ] virtual keyboard not allowed, single mode, not docked\r\n[INFO ] [GL ] NPOT texture support is available\r\n[INFO ] [Base ] Start application main loop\r\n[INFO ] [Window ] exiting mainloop and closing.\r\n"
]
],
[
[
"After the Muse is connected, a stream is opened with muselsl. Then, the data sample is collected and exporting to CSV with the *collect_sample.py* file.",
"_____no_output_____"
]
],
[
[
"!muselsl # starting the stream in the terminal.\n\ncollect_sample.run(sample_length, action_name, file_name)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9ed8ca0d8fb36698324bcb4f198ae790d58e69 | 32,001 | ipynb | Jupyter Notebook | analysis/wandb/run-20210316_182441-282kx404/files/code/_session_history.ipynb | theadammurphy/drivendata_earthquake_damage_competition | b2bca8abff2f06347c0393c9ca718872b04c39e8 | [
"MIT"
] | 1 | 2021-11-17T00:32:08.000Z | 2021-11-17T00:32:08.000Z | analysis/wandb/run-20210316_182441-282kx404/files/code/_session_history.ipynb | theadammurphy/earthquake_damage_competition | b2bca8abff2f06347c0393c9ca718872b04c39e8 | [
"MIT"
] | null | null | null | analysis/wandb/run-20210316_182441-282kx404/files/code/_session_history.ipynb | theadammurphy/earthquake_damage_competition | b2bca8abff2f06347c0393c9ca718872b04c39e8 | [
"MIT"
] | null | null | null | 37.12413 | 275 | 0.559795 | [
[
[
"import lightgbm as lgb\nfrom lightgbm import LGBMClassifier\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nimport pandas as pd\nimport pickle\nfrom pathlib import Path\nfrom tqdm.notebook import trange, tqdm\n\n### USE FOR LOCAL JUPYTER NOTEBOOKS ###\nDOWNLOAD_DIR = Path('../download')\nDATA_DIR = Path('../data')\nSUBMISSIONS_DIR = Path('../submissions')\nMODEL_DIR = Path('../models')\n#######################################\n\n##### GOOGLE COLAB ######\n# DOWNLOAD_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/earthquake_damage_competition/download')\n# SUBMISSIONS_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/earthquake_damage_competition/submissions')\n# DATA_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/earthquake_damage_competition/data')\n# MODEL_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/earthquake_damage_competition/model')\n########################\n\nX = pd.read_csv(DOWNLOAD_DIR / 'train_values.csv', index_col='building_id')\ncategorical_columns = X.select_dtypes(include='object').columns\nbool_columns = [col for col in X.columns if col.startswith('has')]\n\nX_test = pd.read_csv(DOWNLOAD_DIR / 'test_values.csv', index_col='building_id')\ny = pd.read_csv(DOWNLOAD_DIR / 'train_labels.csv', index_col='building_id')",
"_____no_output_____"
],
[
"sns.set()",
"_____no_output_____"
],
[
"import wandb\nwandb.login()",
"True"
],
[
"X_test.shape",
"(86868, 38)"
],
[
"from sklearn.preprocessing import OrdinalEncoder, LabelEncoder\nfrom sklearn.compose import ColumnTransformer\n\nlabel_enc = LabelEncoder()\n\nt = [('ord_encoder', OrdinalEncoder(dtype=int), categorical_columns)]\nct = ColumnTransformer(transformers=t, remainder='passthrough')",
"_____no_output_____"
],
[
"X_all_ints = ct.fit_transform(X)\ny = label_enc.fit_transform(np.ravel(y))",
"_____no_output_____"
],
[
"# Note that append for pandas objects works differently to append with\n# python objects e.g. python append modifes the list in-place\n# pandas append returns a new object, leaving the original unmodified\nnot_categorical_columns = X.select_dtypes(exclude='object').columns\ncols_ordered_after_ordinal_encoding = categorical_columns.append(not_categorical_columns)",
"_____no_output_____"
],
[
"geo_cols = pd.Index(['geo_level_1_id', 'geo_level_2_id', 'geo_level_3_id'])\ncat_cols_plus_geo = categorical_columns.append(geo_cols)",
"_____no_output_____"
],
[
"train_data = lgb.Dataset(X_all_ints,\n label=y,\n feature_name=list(cols_ordered_after_ordinal_encoding),\n categorical_feature=list(cat_cols_plus_geo))",
"_____no_output_____"
],
[
"# Taken from the docs for lgb.train and lgb.cv\n# Helpful Stackoverflow answer: \n# https://stackoverflow.com/questions/50931168/f1-score-metric-in-lightgbm\nfrom sklearn.metrics import f1_score\n\ndef get_ith_pred(preds, i, num_data, num_class):\n \"\"\"\n preds: 1D NumPY array\n A 1D numpy array containing predicted probabilities. Has shape\n (num_data * num_class,). So, For binary classification with \n 100 rows of data in your training set, preds is shape (200,), \n i.e. (100 * 2,).\n i: int\n The row/sample in your training data you wish to calculate\n the prediction for.\n num_data: int\n The number of rows/samples in your training data\n num_class: int\n The number of classes in your classification task.\n Must be greater than 2.\n \n \n LightGBM docs tell us that to get the probability of class 0 for \n the 5th row of the dataset we do preds[0 * num_data + 5].\n For class 1 prediction of 7th row, do preds[1 * num_data + 7].\n \n sklearn's f1_score(y_true, y_pred) expects y_pred to be of the form\n [0, 1, 1, 1, 1, 0...] and not probabilities.\n \n This function translates preds into the form sklearn's f1_score \n understands.\n \"\"\"\n # Does not work for binary classification, preds has a different form\n # in that case\n assert num_class > 2\n \n preds_for_ith_row = [preds[class_label * num_data + i]\n for class_label in range(num_class)]\n \n # The element with the highest probability is predicted\n return np.argmax(preds_for_ith_row)\n \ndef lgb_f1_micro(preds, train_data):\n y_true = train_data.get_label()\n \n num_data = len(y_true)\n num_class = 3\n \n y_pred = []\n for i in range(num_data):\n ith_pred = get_ith_pred(preds, i, num_data, num_class)\n y_pred.append(ith_pred)\n \n return 'f1', f1_score(y_true, y_pred, average='micro'), True",
"_____no_output_____"
],
[
"def get_train_val_datasets(X, y, train_idx, val_idx):\n X_train, X_val = X[train_idx], X[val_idx]\n y_train, y_val = y[train_idx], y[val_idx]\n \n train_dataset = lgb.Dataset(X_train, label=y_train, free_raw_data=False)\n val_dataset = lgb.Dataset(X_val, label=y_val, free_raw_data=False)\n return train_dataset, val_dataset",
"_____no_output_____"
],
[
"import lightgbm as lgb\nfrom lightgbm import LGBMClassifier\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nimport pandas as pd\nimport pickle\nfrom pathlib import Path\nfrom tqdm.notebook import trange, tqdm\nfrom sklearn.model_selection import StratifiedKFold\n\n### USE FOR LOCAL JUPYTER NOTEBOOKS ###\nDOWNLOAD_DIR = Path('../download')\nDATA_DIR = Path('../data')\nSUBMISSIONS_DIR = Path('../submissions')\nMODEL_DIR = Path('../models')\n#######################################\n\n##### GOOGLE COLAB ######\n# DOWNLOAD_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/earthquake_damage_competition/download')\n# SUBMISSIONS_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/earthquake_damage_competition/submissions')\n# DATA_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/earthquake_damage_competition/data')\n# MODEL_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/earthquake_damage_competition/model')\n########################\n\nX = pd.read_csv(DOWNLOAD_DIR / 'train_values.csv', index_col='building_id')\ncategorical_columns = X.select_dtypes(include='object').columns\nbool_columns = [col for col in X.columns if col.startswith('has')]\n\nX_test = pd.read_csv(DOWNLOAD_DIR / 'test_values.csv', index_col='building_id')\ny = pd.read_csv(DOWNLOAD_DIR / 'train_labels.csv', index_col='building_id')",
"_____no_output_____"
],
[
"import lightgbm as lgb\nfrom lightgbm import LGBMClassifier\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nimport pandas as pd\nimport pickle\nfrom pathlib import Path\nfrom tqdm.notebook import trange, tqdm\nfrom sklearn.model_selection import StratifiedKFold\n\n### USE FOR LOCAL JUPYTER NOTEBOOKS ###\nDOWNLOAD_DIR = Path('../download')\nDATA_DIR = Path('../data')\nSUBMISSIONS_DIR = Path('../submissions')\nMODEL_DIR = Path('../models')\n#######################################\n\n##### GOOGLE COLAB ######\n# DOWNLOAD_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/earthquake_damage_competition/download')\n# SUBMISSIONS_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/earthquake_damage_competition/submissions')\n# DATA_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/earthquake_damage_competition/data')\n# MODEL_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/earthquake_damage_competition/model')\n########################\n\nX = pd.read_csv(DOWNLOAD_DIR / 'train_values.csv', index_col='building_id')\ncategorical_columns = X.select_dtypes(include='object').columns\nbool_columns = [col for col in X.columns if col.startswith('has')]\n\nX_test = pd.read_csv(DOWNLOAD_DIR / 'test_values.csv', index_col='building_id')\ny = pd.read_csv(DOWNLOAD_DIR / 'train_labels.csv', index_col='building_id')",
"_____no_output_____"
],
[
"sns.set()",
"_____no_output_____"
],
[
"from sklearn.preprocessing import OrdinalEncoder, LabelEncoder\nfrom sklearn.compose import ColumnTransformer\n\nlabel_enc = LabelEncoder()\n\nt = [('ord_encoder', OrdinalEncoder(dtype=int), categorical_columns)]\nct = ColumnTransformer(transformers=t, remainder='passthrough')",
"_____no_output_____"
],
[
"X_all_ints = ct.fit_transform(X)\ny = label_enc.fit_transform(np.ravel(y))",
"_____no_output_____"
],
[
"# Note that append for pandas objects works differently to append with\n# python objects e.g. python append modifes the list in-place\n# pandas append returns a new object, leaving the original unmodified\nnot_categorical_columns = X.select_dtypes(exclude='object').columns\ncols_ordered_after_ordinal_encoding = categorical_columns.append(not_categorical_columns)",
"_____no_output_____"
],
[
"geo_cols = pd.Index(['geo_level_1_id', 'geo_level_2_id', 'geo_level_3_id'])\ncat_cols_plus_geo = categorical_columns.append(geo_cols)",
"_____no_output_____"
],
[
"train_data = lgb.Dataset(X_all_ints,\n label=y,\n feature_name=list(cols_ordered_after_ordinal_encoding),\n categorical_feature=list(cat_cols_plus_geo))",
"_____no_output_____"
],
[
"# Taken from the docs for lgb.train and lgb.cv\n# Helpful Stackoverflow answer: \n# https://stackoverflow.com/questions/50931168/f1-score-metric-in-lightgbm\nfrom sklearn.metrics import f1_score\n\ndef get_ith_pred(preds, i, num_data, num_class):\n \"\"\"\n preds: 1D NumPY array\n A 1D numpy array containing predicted probabilities. Has shape\n (num_data * num_class,). So, For binary classification with \n 100 rows of data in your training set, preds is shape (200,), \n i.e. (100 * 2,).\n i: int\n The row/sample in your training data you wish to calculate\n the prediction for.\n num_data: int\n The number of rows/samples in your training data\n num_class: int\n The number of classes in your classification task.\n Must be greater than 2.\n \n \n LightGBM docs tell us that to get the probability of class 0 for \n the 5th row of the dataset we do preds[0 * num_data + 5].\n For class 1 prediction of 7th row, do preds[1 * num_data + 7].\n \n sklearn's f1_score(y_true, y_pred) expects y_pred to be of the form\n [0, 1, 1, 1, 1, 0...] and not probabilities.\n \n This function translates preds into the form sklearn's f1_score \n understands.\n \"\"\"\n # Does not work for binary classification, preds has a different form\n # in that case\n assert num_class > 2\n \n preds_for_ith_row = [preds[class_label * num_data + i]\n for class_label in range(num_class)]\n \n # The element with the highest probability is predicted\n return np.argmax(preds_for_ith_row)\n \ndef lgb_f1_micro(preds, train_data):\n y_true = train_data.get_label()\n \n num_data = len(y_true)\n num_class = 3\n \n y_pred = []\n for i in range(num_data):\n ith_pred = get_ith_pred(preds, i, num_data, num_class)\n y_pred.append(ith_pred)\n \n return 'f1', f1_score(y_true, y_pred, average='micro'), True",
"_____no_output_____"
],
[
"skf = StratifiedKFold(n_splits=5, random_state=1, shuffle=True)\n\n# Cross-validation loop\nfor train_idx, val_idx in skf.split(X_all_ints, y):\n print(type(val_idx), len(val_idx))\n train_dataset, val_dataset = get_train_val_datasets(X_all_ints, y\n train_idx, val_idx)\n bagged_preds = np.zeroes(len(val_idx))",
"_____no_output_____"
],
[
"skf = StratifiedKFold(n_splits=5, random_state=1, shuffle=True)\n\n# Cross-validation loop\nfor train_idx, val_idx in skf.split(X_all_ints, y):\n print(type(val_idx), len(val_idx))\n train_dataset, val_dataset = get_train_val_datasets(X_all_ints, y,\n train_idx, val_idx)\n bagged_preds = np.zeroes(len(val_idx))",
"_____no_output_____"
],
[
"skf = StratifiedKFold(n_splits=5, random_state=1, shuffle=True)\n\n# Cross-validation loop\nfor train_idx, val_idx in skf.split(X_all_ints, y):\n print(type(val_idx), len(val_idx))\n train_dataset, val_dataset = get_train_val_datasets(X_all_ints, y,\n train_idx, val_idx)\n bagged_preds = np.zeros(len(val_idx))",
"_____no_output_____"
],
[
"skf = StratifiedKFold(n_splits=5, random_state=1, shuffle=True)\n\n# Cross-validation loop\nfor train_idx, val_idx in skf.split(X_all_ints, y):\n print(type(val_idx), len(val_idx), val_idx.shape)\n train_dataset, val_dataset = get_train_val_datasets(X_all_ints, y,\n train_idx, val_idx)\n bagged_preds = np.zeros(len(val_idx))",
"_____no_output_____"
],
[
"skf = StratifiedKFold(n_splits=5, random_state=1, shuffle=True)\n\n# Cross-validation loop\nfor train_idx, val_idx in skf.split(X_all_ints, y):\n print(type(val_idx), len(val_idx), val_idx.shape)\n train_dataset, val_dataset = get_train_val_datasets(X_all_ints, y,\n train_idx, val_idx)\n # Perform bagged model building and evaluation to get a score\n print(val_dataset.num_data())\n ",
"_____no_output_____"
],
[
"def get_train_val_datasets(X, y, train_idx, val_idx):\n X_train, X_val = X[train_idx], X[val_idx]\n y_train, y_val = y[train_idx], y[val_idx]\n \n train_dataset = lgb.Dataset(X_train, label=y_train, free_raw_data=False)\n val_dataset = lgb.Dataset(X_val, label=y_val, free_raw_data=False)\n train_dataset.construct()\n val_dataset.construct()\n return train_dataset, val_dataset\n\n\ndef eval_bagged_model(config, num_bags, train_dataset, val_dataset):\n bagged_preds = np.zeros",
"_____no_output_____"
],
[
"skf = StratifiedKFold(n_splits=5, random_state=1, shuffle=True)\n\n# Cross-validation loop\nfor train_idx, val_idx in skf.split(X_all_ints, y):\n print(type(val_idx), len(val_idx), val_idx.shape)\n train_dataset, val_dataset = get_train_val_datasets(X_all_ints, y,\n train_idx, val_idx)\n # Perform bagged model building and evaluation to get a score\n print(val_dataset.num_data())\n ",
"_____no_output_____"
],
[
"def get_train_val_datasets(X, y, train_idx, val_idx):\n X_train, X_val = X[train_idx], X[val_idx]\n y_train, y_val = y[train_idx], y[val_idx]\n \n train_dataset = lgb.Dataset(X_train, label=y_train, free_raw_data=False)\n val_dataset = lgb.Dataset(X_val, label=y_val, free_raw_data=False)\n train_dataset.construct()\n val_dataset.construct()\n return train_dataset, val_dataset\n\n\ndef train_lgbm_model(config, train_dataset, val_dataset):\n evals_result = {}\n booster = lgb.train(config,\n train_dataset,\n valid_sets=[train_dataset, val_dataset],\n valid_names=['train', 'val'],\n evals_result=evals_result,\n feval=lgb_f1_micro,\n callbacks=[wandb_callback()]) )\n return booster, evals_result\n\ndef eval_bagged_model(config, num_bags, train_dataset, val_dataset):\n bagged_preds = np.zeros(val_dataset.num_data())\n config = dict(config) # in case you input a wandb config object\n for n in range(num_bags):\n config['seed'] += n\n booster, evals_result = train_lgbm_model(config, train_dataset,\n val_dataset)\n # Do I need to predict? Does the callback do it for me automatically?\n pass",
"_____no_output_____"
],
[
"def get_train_val_datasets(X, y, train_idx, val_idx):\n X_train, X_val = X[train_idx], X[val_idx]\n y_train, y_val = y[train_idx], y[val_idx]\n \n train_dataset = lgb.Dataset(X_train, label=y_train, free_raw_data=False)\n val_dataset = lgb.Dataset(X_val, label=y_val, free_raw_data=False)\n train_dataset.construct()\n val_dataset.construct()\n return train_dataset, val_dataset\n\n\ndef train_lgbm_model(config, train_dataset, val_dataset):\n evals_result = {}\n booster = lgb.train(config,\n train_dataset,\n valid_sets=[train_dataset, val_dataset],\n valid_names=['train', 'val'],\n evals_result=evals_result,\n feval=lgb_f1_micro,\n callbacks=[wandb_callback()])\n return booster, evals_result\n\ndef eval_bagged_model(config, num_bags, train_dataset, val_dataset):\n bagged_preds = np.zeros(val_dataset.num_data())\n config = dict(config) # in case you input a wandb config object\n for n in range(num_bags):\n config['seed'] += n\n booster, evals_result = train_lgbm_model(config, train_dataset,\n val_dataset)\n # Do I need to predict? Does the callback do it for me automatically?\n pass",
"_____no_output_____"
],
[
"def get_train_val_datasets(X, y, train_idx, val_idx):\n X_train, X_val = X[train_idx], X[val_idx]\n y_train, y_val = y[train_idx], y[val_idx]\n \n train_dataset = lgb.Dataset(X_train, label=y_train, free_raw_data=False)\n val_dataset = lgb.Dataset(X_val, label=y_val, free_raw_data=False)\n train_dataset.construct()\n val_dataset.construct()\n return train_dataset, val_dataset\n\n\ndef train_lgbm_model(config, train_dataset, val_dataset):\n evals_result = {}\n booster = lgb.train(config,\n train_dataset,\n valid_sets=[train_dataset, val_dataset],\n valid_names=['train', 'val'],\n evals_result=evals_result,\n feval=lgb_f1_micro,\n callbacks=[wandb_callback()])\n return booster, evals_result\n\ndef eval_bagged_model(config, num_bags, train_dataset, val_dataset):\n bagged_preds = np.zeros(val_dataset.num_data())\n config = dict(config) # in case you input a wandb config object\n for n in range(num_bags):\n config['seed'] += n\n booster, evals_result = train_lgbm_model(config, train_dataset,\n val_dataset)\n # Do I need to predict? Does the callback do it for me automatically?\n pass",
"_____no_output_____"
],
[
"import lightgbm as lgb\nfrom lightgbm import LGBMClassifier\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nimport pandas as pd\nimport pickle\nfrom pathlib import Path\nfrom tqdm.notebook import trange, tqdm\nfrom sklearn.model_selection import StratifiedKFold, train_test_split\nfrom wandb.lightgbm import wandb_callback\n\n### USE FOR LOCAL JUPYTER NOTEBOOKS ###\nDOWNLOAD_DIR = Path('../download')\nDATA_DIR = Path('../data')\nSUBMISSIONS_DIR = Path('../submissions')\nMODEL_DIR = Path('../models')\n#######################################\n\n##### GOOGLE COLAB ######\n# DOWNLOAD_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/earthquake_damage_competition/download')\n# SUBMISSIONS_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/earthquake_damage_competition/submissions')\n# DATA_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/earthquake_damage_competition/data')\n# MODEL_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/earthquake_damage_competition/model')\n########################\n\nX = pd.read_csv(DOWNLOAD_DIR / 'train_values.csv', index_col='building_id')\ncategorical_columns = X.select_dtypes(include='object').columns\nbool_columns = [col for col in X.columns if col.startswith('has')]\n\nX_test = pd.read_csv(DOWNLOAD_DIR / 'test_values.csv', index_col='building_id')\ny = pd.read_csv(DOWNLOAD_DIR / 'train_labels.csv', index_col='building_id')",
"_____no_output_____"
],
[
"param = {'num_leaves': 120,\n 'min_child_samples': 40,\n 'learning_rate': 0.03,\n 'num_boost_round': 40,\n 'early_stopping_rounds': 12,\n 'boosting_type': 'goss',\n 'objective': 'multiclassova',\n 'is_unbalance': True,\n 'metric': ['multiclassova', 'multi_error'],\n 'num_class': 3,\n 'verbosity': -1,\n 'num_threads': 8,\n 'seed': 1}\n\nrun = wandb.init(project='earthquake_damage_competition',\n config=param)\n\nskf = StratifiedKFold(n_splits=5, random_state=1, shuffle=True)\n\nall_eval_results = {}\nall_boosters = {}\n# Cross-validation loop\nfor i, train_idx, val_idx in enumerate(skf.split(X_all_ints, y)):\n train_dataset, val_dataset = get_train_val_datasets(X_all_ints, y,\n train_idx, val_idx)\n # Perform bagged model building and evaluation to get a score\n booster, evals_results = train_lgbm_model(param, train_dataset,\n val_dataset)\n all_eval_results[i] = evals_results\n all_boosters[i] = booster ",
"_____no_output_____"
],
[
"param = {'num_leaves': 120,\n 'min_child_samples': 40,\n 'learning_rate': 0.03,\n 'num_boost_round': 40,\n 'early_stopping_rounds': 12,\n 'boosting_type': 'goss',\n 'objective': 'multiclassova',\n 'is_unbalance': True,\n 'metric': ['multiclassova', 'multi_error'],\n 'num_class': 3,\n 'verbosity': -1,\n 'num_threads': 8,\n 'seed': 1}\n\nrun = wandb.init(project='earthquake_damage_competition',\n config=param)\n\nskf = StratifiedKFold(n_splits=5, random_state=1, shuffle=True)\n\nall_eval_results = {}\nall_boosters = {}\ni = 0\n# Cross-validation loop\nfor train_idx, val_idx in skf.split(X_all_ints, y):\n train_dataset, val_dataset = get_train_val_datasets(X_all_ints, y,\n train_idx, val_idx)\n # Perform bagged model building and evaluation to get a score\n booster, evals_results = train_lgbm_model(param, train_dataset,\n val_dataset)\n all_eval_results[i] = evals_results\n all_boosters[i] = booster\n i += 1",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9ef302090b1d53d1afe270fc339c468effbe33 | 272,710 | ipynb | Jupyter Notebook | plot2/test_data_load.ipynb | sixin-zh/kymatio_wph | 237c0d2009766cf83b2145420a14d3c6e90dc983 | [
"BSD-3-Clause"
] | null | null | null | plot2/test_data_load.ipynb | sixin-zh/kymatio_wph | 237c0d2009766cf83b2145420a14d3c6e90dc983 | [
"BSD-3-Clause"
] | null | null | null | plot2/test_data_load.ipynb | sixin-zh/kymatio_wph | 237c0d2009766cf83b2145420a14d3c6e90dc983 | [
"BSD-3-Clause"
] | null | null | null | 1,652.787879 | 92,748 | 0.963327 | [
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# test the matrix order of read images\nimport do_plot_get2 as dpg\n\nNs = 256\nfroot0= './data/'\ntestlabel = 'ns_randn4_aniso_train_N' + str(Ns)\nimgs1 = dpg.getori_mat(testlabel,froot0,permute=1)\nplt.imshow(np.reshape(imgs1[0,:,:],[Ns,Ns]),cmap='gray')",
"('getori matlab fname=', 'ns_randn4_aniso_train_N256')\nload cached data: ./data/ns_randn4_aniso_train_N256\n"
],
[
"Ns = 256\nfroot0= './synthesis/'\ntestlabel = 'anisotur2a_modelB_synthesis_ks0'\nimgs2 = dpg.getori_mat(testlabel,froot0,permute=0)\nplt.imshow(np.reshape(imgs2[2,:,:],[Ns,Ns]),cmap='gray')",
"('getori matlab fname=', 'anisotur2a_modelB_synthesis_ks0')\nload cached data: ./synthesis/anisotur2a_modelB_synthesis_ks0\n"
],
[
"Ns = 256\nfroot0= './synthesis/'\ntestlabel = 'maxent_synthesis_anisotur2a_kb1'\nimgs2 = dpg.getori_mat(testlabel,froot0,permute=1)\nplt.imshow(np.reshape(imgs2[0,:,:],[Ns,Ns]),cmap='gray')",
"('getori matlab fname=', 'maxent_synthesis_anisotur2a_kb1')\nload cached data: ./synthesis/maxent_synthesis_anisotur2a_kb1\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
ec9f13a8f99357f0e1519eb4d08fd1200873fb40 | 141,142 | ipynb | Jupyter Notebook | docs/beta/notebooks/Debugger.ipynb | alexkamp/debuggingbook | 788bb188988e99623c2e4376f05a91362f698f96 | [
"MIT"
] | null | null | null | docs/beta/notebooks/Debugger.ipynb | alexkamp/debuggingbook | 788bb188988e99623c2e4376f05a91362f698f96 | [
"MIT"
] | null | null | null | docs/beta/notebooks/Debugger.ipynb | alexkamp/debuggingbook | 788bb188988e99623c2e4376f05a91362f698f96 | [
"MIT"
] | null | null | null | 30.484233 | 519 | 0.516267 | [
[
[
"# How Debuggers Work\n\nInteractive _debuggers_ are tools that allow you to selectively observe the program state during an execution. In this chapter, you will learn how such debuggers work – by building your own debugger.",
"_____no_output_____"
]
],
[
[
"from bookutils import YouTubeVideo\nYouTubeVideo(\"4aZ0t7CWSjA\")",
"_____no_output_____"
]
],
[
[
"**Prerequisites**\n\n* You should have read the [Chapter on Tracing Executions](Tracer.ipynb).\n* Again, knowing a bit of _Python_ is helpful for understanding the code examples in the book.",
"_____no_output_____"
]
],
[
[
"import bookutils",
"_____no_output_____"
],
[
"import sys",
"_____no_output_____"
],
[
"from Tracer import Tracer",
"_____no_output_____"
]
],
[
[
"## Synopsis\n<!-- Automatically generated. Do not edit. -->\n\nTo [use the code provided in this chapter](Importing.ipynb), write\n\n```python\n>>> from debuggingbook.Debugger import <identifier>\n```\n\nand then make use of the following features.\n\n\nThis chapter provides an interactive debugger for Python functions. The debugger is invoked as\n\n```python\nwith Debugger():\n function_to_be_observed()\n ...\n```\nWhile running, you can enter _debugger commands_ at the `(debugger)` prompt. Here's an example session:\n\n```python\n['help', 'break 14', 'list', 'continue', 'step', 'print out', 'quit']\n\n>>> with Debugger():\n>>> ret = remove_html_markup('abc')\n\nCalling remove_html_markup(s = 'abc')\n```\n<samp>(debugger) <b>help</b></samp>\n```\nbreak -- Set a breakoint in given line. If no line is given, list all breakpoints\ncontinue -- Resume execution\ndelete -- Delete breakoint in line given by `arg`.\n Without given line, clear all breakpoints\nhelp -- Give help on given `command`. If no command is given, give help on all\nlist -- Show current function. If `arg` is given, show its source code.\nprint -- Print an expression. If no expression is given, print all variables\nquit -- Finish execution\nstep -- Execute up to the next line\n```\n<samp>(debugger) <b>break 14</b></samp>\n```\nBreakpoints: {14}\n```\n<samp>(debugger) <b>list</b></samp>\n```\n 1> def remove_html_markup(s): # type: ignore\n 2 tag = False\n 3 quote = False\n 4 out = \"\"\n 5 \n 6 for c in s:\n 7 if c == '<' and not quote:\n 8 tag = True\n 9 elif c == '>' and not quote:\n 10 tag = False\n 11 elif c == '\"' or c == \"'\" and tag:\n 12 quote = not quote\n 13 elif not tag:\n 14# out = out + c\n 15 \n 16 return out\n```\n<samp>(debugger) <b>continue</b></samp>\n```\n # tag = False, quote = False, out = '', c = 'a'\n14 out = out + c\n```\n<samp>(debugger) <b>step</b></samp>\n```\n # out = 'a'\n6 for c in s:\n```\n<samp>(debugger) <b>print out</b></samp>\n```\nout = 'a'\n```\n<samp>(debugger) <b>quit</b></samp>\n\nThe `Debugger` class can be easily extended in subclasses. A new method `NAME_command(self, arg)` will be invoked whenever a command named `NAME` is entered, with `arg` holding given command arguments (empty string if none).\n\n\n\n\n",
"_____no_output_____"
],
[
"## Debuggers\n\n_Interactive Debuggers_ (or short *debuggers*) are tools that allow you to observe program executions. A debugger typically offers the following features:\n\n* _Run_ the program\n* Define _conditions_ under which the execution should _stop_ and hand over control to the debugger. Conditions include\n * a particular location is reached\n * a particular variable takes a particular value\n * or some other condition of choice.\n* When the program stops, you can _observe_ the current state, including\n * the current location\n * variables and their values\n * the current function and its callers\n* When the program stops, you can _step_ through program execution, having it stop at the next instruction again.\n* Finally, you can also _resume_ execution to the next stop.",
"_____no_output_____"
],
[
"This functionality often comes as a _command-line interface_, typing commands at a prompt; or as a _graphical user interface_, selecting commands from the screen. Debuggers can come as standalone tools, or be integrated into a programming environment of choice.",
"_____no_output_____"
],
[
"Debugger interaction typically follows a _loop_ pattern._ First, you identify the location(s) you want to inspect, and tell the debugger to stop execution once one of these _breakpoints_ is reached. Here's a command that could instruct a command-line debugger to stop at Line 239:\n\n```\n(debugger) break 239\n(debugger) _\n```\n\nThen you have the debugger resume or start execution. The debugger will stop at the given location.\n\n```\n(debugger) continue\nLine 239: s = x\n(debugger) _\n```\n\nWhen it stops at the given location, you use debugger commands to inspect the state (and check whether things are as expected).\n\n```\n(debugger) print s\ns = 'abc'\n(debugger) _\n```\n\nYou can then step through the program, executing more lines.\n\n```\n(debugger) step\nLine 240: c = s[0]\n(debugger) print c\nc = 'a'\n(debugger) _\n```\n\nYou can also define new stop conditions, investigating other locations, variables, and conditions.",
"_____no_output_____"
],
[
"## Debugger Interaction\n\nLet us now show how to build such a debugger. The key idea of an _interactive_ debugger is to set up the _tracing function_ such that it actually _asks_ what to do next, prompting you to enter a _command_. For the sake of simplicity, we collect such a command interactively from a command line, using the Python `input()` function.",
"_____no_output_____"
],
[
"Our debugger holds a number of variables to indicate its current status:\n* `stepping` is True whenever the user wants to step into the next line.\n* `breakpoints` is a set of breakpoints (line numbers)\n* `interact` is True while the user stays at one position.\n\nWe also store the current tracing information in three attributes `frame`, `event`, and `arg`. The variable `local_vars` holds local variables.",
"_____no_output_____"
]
],
[
[
"from types import FrameType\nfrom typing import Any, Optional, Callable, Dict, List, Tuple, Set, TextIO",
"_____no_output_____"
],
[
"class Debugger(Tracer):\n \"\"\"Interactive Debugger\"\"\"\n\n def __init__(self, *, file: TextIO = sys.stdout) -> None:\n \"\"\"Create a new interactive debugger.\"\"\"\n self.stepping: bool = True\n self.breakpoints: Set[int] = set()\n self.interact: bool = True\n\n self.frame: FrameType\n self.event: Optional[str] = None\n self.arg: Any = None\n\n self.local_vars: Dict[str, Any] = {}\n\n super().__init__(file=file)",
"_____no_output_____"
]
],
[
[
"The `traceit()` method is the main entry point for our debugger. If we should stop, we go into user interaction.",
"_____no_output_____"
]
],
[
[
"class Debugger(Debugger):\n def stop_here(self) -> bool:\n ...\n\n def interaction_loop(self) -> None:\n ...\n\n def traceit(self, frame: FrameType, event: str, arg: Any) -> None:\n \"\"\"Tracing function; called at every line. To be overloaded in subclasses.\"\"\"\n self.frame = frame\n self.local_vars = frame.f_locals # Dereference exactly once\n self.event = event\n self.arg = arg\n\n if self.stop_here():\n self.interaction_loop()",
"_____no_output_____"
]
],
[
[
"We stop whenever we are stepping through the program or reach a breakpoint:",
"_____no_output_____"
]
],
[
[
"class Debugger(Debugger):\n def stop_here(self) -> bool:\n \"\"\"Return True if we should stop\"\"\"\n return self.stepping or self.frame.f_lineno in self.breakpoints",
"_____no_output_____"
]
],
[
[
"Our interaction loop shows the current status, reads in commands, and executes them.",
"_____no_output_____"
]
],
[
[
"class Debugger(Debugger):\n def interaction_loop(self) -> None:\n \"\"\"Interact with the user\"\"\"\n self.print_debugger_status(self.frame, self.event, self.arg) # type: ignore\n\n self.interact = True\n while self.interact:\n command = input(\"(debugger) \")\n self.execute(command) # type: ignore",
"_____no_output_____"
]
],
[
[
"For a moment, let us implement two commands, `step` and `continue`. `step` steps through the program:",
"_____no_output_____"
]
],
[
[
"class Debugger(Debugger):\n def step_command(self, arg: str = \"\") -> None:\n \"\"\"Execute up to the next line\"\"\"\n\n self.stepping = True\n self.interact = False",
"_____no_output_____"
],
[
"class Debugger(Debugger):\n def continue_command(self, arg: str = \"\") -> None:\n \"\"\"Resume execution\"\"\"\n\n self.stepping = False\n self.interact = False",
"_____no_output_____"
]
],
[
[
"The `execute()` method dispatches between these two.",
"_____no_output_____"
]
],
[
[
"class Debugger(Debugger):\n def execute(self, command: str) -> None:\n if command.startswith('s'):\n self.step_command()\n elif command.startswith('c'):\n self.continue_command()",
"_____no_output_____"
]
],
[
[
"Our debugger is now ready to run! Let us invoke it on the buggy `remove_html_markup()` variant from the [Introduction to Debugging](Intro_Debugging.ipynb):",
"_____no_output_____"
]
],
[
[
"def remove_html_markup(s): # type: ignore\n tag = False\n quote = False\n out = \"\"\n\n for c in s:\n if c == '<' and not quote:\n tag = True\n elif c == '>' and not quote:\n tag = False\n elif c == '\"' or c == \"'\" and tag:\n quote = not quote\n elif not tag:\n out = out + c\n\n return out",
"_____no_output_____"
]
],
[
[
"We invoke the debugger just like `Tracer`, using a `with` clause. The code\n\n```python\nwith Debugger():\n remove_html_markup('abc')\n```\ngives us a debugger prompt\n```\n(debugger) _\n```\nwhere we can enter one of our two commands.",
"_____no_output_____"
],
[
"Let us do two steps through the program and then resume execution:",
"_____no_output_____"
]
],
[
[
"from bookutils import input, next_inputs",
"_____no_output_____"
],
[
"# ignore\nnext_inputs([\"step\", \"step\", \"continue\"])",
"_____no_output_____"
],
[
"with Debugger():\n remove_html_markup('abc')",
"Calling remove_html_markup(s = 'abc')\n"
],
[
"# ignore\nassert not next_inputs()",
"_____no_output_____"
]
],
[
[
"Try this out for yourself by running the above invocation in the interactive notebook! If you are reading the Web version, the top menu entry `Resources` -> `Edit as Notebook` will do the trick. Navigate to the above invocation and press `Shift`+`Enter`.",
"_____no_output_____"
],
[
"### A Command Dispatcher",
"_____no_output_____"
],
[
"Our `execute()` function is still a bit rudimentary. A true command-line tool should provide means to tell which commands are available (`help`), automatically split arguments, and not stand in line of extensibility.",
"_____no_output_____"
],
[
"We therefore implement a better `execute()` method which does all that. Our revised `execute()` method _inspects_ its class for methods that end in `_command()`, and automatically registers their names as commands. Hence, with the above, we already get `step` and `continue` as possible commands.",
"_____no_output_____"
],
[
"### Excursion: Implementing execute()",
"_____no_output_____"
],
[
"Let us detail how we implement `execute()`. The `commands()` method returns a list of all commands (as strings) from the class.",
"_____no_output_____"
]
],
[
[
"class Debugger(Debugger):\n def commands(self) -> List[str]:\n \"\"\"Return a list of commands\"\"\"\n\n cmds = [method.replace('_command', '')\n for method in dir(self.__class__)\n if method.endswith('_command')]\n cmds.sort()\n return cmds",
"_____no_output_____"
],
[
"d = Debugger()\nd.commands()",
"_____no_output_____"
]
],
[
[
"The `command_method()` method converts a given command (or its abbrevation) into a method to be called.",
"_____no_output_____"
]
],
[
[
"class Debugger(Debugger):\n def help_command(self, command: str) -> None:\n ...\n\n def command_method(self, command: str) -> Optional[Callable[[str], None]]:\n \"\"\"Convert `command` into the method to be called.\n If the method is not found, return `None` instead.\"\"\"\n\n if command.startswith('#'):\n return None # Comment\n\n possible_cmds = [possible_cmd for possible_cmd in self.commands()\n if possible_cmd.startswith(command)]\n if len(possible_cmds) != 1:\n self.help_command(command)\n return None\n\n cmd = possible_cmds[0]\n return getattr(self, cmd + '_command')",
"_____no_output_____"
],
[
"d = Debugger()\nd.command_method(\"step\")",
"_____no_output_____"
],
[
"d = Debugger()\nd.command_method(\"s\")",
"_____no_output_____"
]
],
[
[
"The revised `execute()` method now determines this method and executes it with the given argument.",
"_____no_output_____"
]
],
[
[
"class Debugger(Debugger):\n def execute(self, command: str) -> None:\n \"\"\"Execute `command`\"\"\"\n\n sep = command.find(' ')\n if sep > 0:\n cmd = command[:sep].strip()\n arg = command[sep + 1:].strip()\n else:\n cmd = command.strip()\n arg = \"\"\n\n method = self.command_method(cmd)\n if method:\n method(arg)",
"_____no_output_____"
]
],
[
[
"If `command_method()` cannot find the command, or finds more than one matching the prefix, it invokes the `help` command providing additional assistance. `help` draws extra info on each command from its documentation string.",
"_____no_output_____"
]
],
[
[
"class Debugger(Debugger):\n def help_command(self, command: str = \"\") -> None:\n \"\"\"Give help on given `command`. If no command is given, give help on all\"\"\"\n\n if command:\n possible_cmds = [possible_cmd for possible_cmd in self.commands()\n if possible_cmd.startswith(command)]\n\n if len(possible_cmds) == 0:\n self.log(f\"Unknown command {repr(command)}. Possible commands are:\")\n possible_cmds = self.commands()\n elif len(possible_cmds) > 1:\n self.log(f\"Ambiguous command {repr(command)}. Possible expansions are:\")\n else:\n possible_cmds = self.commands()\n\n for cmd in possible_cmds:\n method = self.command_method(cmd)\n self.log(f\"{cmd:10} -- {method.__doc__}\")",
"_____no_output_____"
],
[
"d = Debugger()\nd.execute(\"help\")",
"continue -- Resume execution\nhelp -- Give help on given `command`. If no command is given, give help on all\nstep -- Execute up to the next line\n"
],
[
"d = Debugger()\nd.execute(\"foo\")",
"Unknown command 'foo'. Possible commands are:\ncontinue -- Resume execution\nhelp -- Give help on given `command`. If no command is given, give help on all\nstep -- Execute up to the next line\n"
]
],
[
[
"### End of Excursion",
"_____no_output_____"
],
[
"## Printing Values\n\nWith `execute()`, we can now easily extend our class – all it takes is for a new command `NAME` is a new `NAME_command()` method. Let us start by providing a `print` command to print all variables. We use similar code as for the `Tracer` class in the [chapter on tracing](Tracer.ipynb).",
"_____no_output_____"
]
],
[
[
"class Debugger(Debugger):\n def print_command(self, arg: str = \"\") -> None:\n \"\"\"Print an expression. If no expression is given, print all variables\"\"\"\n\n vars = self.local_vars\n self.log(\"\\n\".join([f\"{var} = {repr(vars[var])}\" for var in vars]))",
"_____no_output_____"
],
[
"# ignore\nnext_inputs([\"step\", \"step\", \"step\", \"print\", \"continue\"]);",
"_____no_output_____"
],
[
"with Debugger():\n remove_html_markup('abc')",
"Calling remove_html_markup(s = 'abc')\n"
],
[
"# ignore\nassert not next_inputs()",
"_____no_output_____"
]
],
[
[
"Let us extend `print` such that if an argument is given, it only evaluates and prints out this argument.",
"_____no_output_____"
]
],
[
[
"class Debugger(Debugger):\n def print_command(self, arg: str = \"\") -> None:\n \"\"\"Print an expression. If no expression is given, print all variables\"\"\"\n\n vars = self.local_vars\n\n if not arg:\n self.log(\"\\n\".join([f\"{var} = {repr(vars[var])}\" for var in vars]))\n else:\n try:\n self.log(f\"{arg} = {repr(eval(arg, globals(), vars))}\")\n except Exception as err:\n self.log(f\"{err.__class__.__name__}: {err}\")",
"_____no_output_____"
],
[
"# ignore\nnext_inputs([\"p s\", \"c\"]);",
"_____no_output_____"
],
[
"with Debugger():\n remove_html_markup('abc')",
"Calling remove_html_markup(s = 'abc')\n"
],
[
"# ignore\nassert not next_inputs()",
"_____no_output_____"
]
],
[
[
"Note how we would abbreviate commands to speed things up. The argument to `print` can be any Python expression:",
"_____no_output_____"
]
],
[
[
"# ignore\nnext_inputs([\"print (s[0], 2 + 2)\", \"continue\"]);",
"_____no_output_____"
],
[
"with Debugger():\n remove_html_markup('abc')",
"Calling remove_html_markup(s = 'abc')\n"
]
],
[
[
"Our `help` command also properly lists `print` as a possible command:",
"_____no_output_____"
]
],
[
[
"# ignore\nnext_inputs([\"help print\", \"continue\"]);",
"_____no_output_____"
],
[
"with Debugger():\n remove_html_markup('abc')",
"Calling remove_html_markup(s = 'abc')\n"
],
[
"# ignore\nassert not next_inputs()",
"_____no_output_____"
]
],
[
[
"## Listing Source Code\n\nWe implement a `list` command that shows the source code of the current function.",
"_____no_output_____"
]
],
[
[
"import inspect",
"_____no_output_____"
],
[
"from bookutils import getsourcelines # like inspect.getsourcelines(), but in color",
"_____no_output_____"
],
[
"class Debugger(Debugger):\n def list_command(self, arg: str = \"\") -> None:\n \"\"\"Show current function.\"\"\"\n\n source_lines, line_number = getsourcelines(self.frame.f_code)\n\n for line in source_lines:\n self.log(f'{line_number:4} {line}', end='')\n line_number += 1",
"_____no_output_____"
],
[
"# ignore\nnext_inputs([\"list\", \"continue\"]);",
"_____no_output_____"
],
[
"with Debugger():\n remove_html_markup('abc')",
"Calling remove_html_markup(s = 'abc')\n"
],
[
"# ignore\nassert not next_inputs()",
"_____no_output_____"
]
],
[
[
"## Setting Breakpoints\n\nStepping through the program line by line is a bit cumbersome. We therefore implement _breakpoints_ – a set of lines that cause the program to be interrupted as soon as this line is met.",
"_____no_output_____"
]
],
[
[
"class Debugger(Debugger):\n def break_command(self, arg: str = \"\") -> None:\n \"\"\"Set a breakoint in given line. If no line is given, list all breakpoints\"\"\"\n\n if arg:\n self.breakpoints.add(int(arg))\n self.log(\"Breakpoints:\", self.breakpoints)",
"_____no_output_____"
]
],
[
[
"Here's an example, setting a breakpoint at the end of the loop:",
"_____no_output_____"
]
],
[
[
"# ignore\n_, remove_html_markup_starting_line_number = \\\n inspect.getsourcelines(remove_html_markup)\nnext_inputs([f\"break {remove_html_markup_starting_line_number + 13}\",\n \"continue\", \"print\", \"continue\", \"continue\", \"continue\"]);",
"_____no_output_____"
],
[
"with Debugger():\n remove_html_markup('abc')",
"Calling remove_html_markup(s = 'abc')\n"
],
[
"# ignore\nassert not next_inputs()",
"_____no_output_____"
],
[
"from bookutils import quiz",
"_____no_output_____"
],
[
"quiz(\"What happens if we enter the command `break 2 + 3`?\",\n [\n \"A breakpoint is set in Line 2.\",\n \"A breakpoint is set in Line 5.\",\n \"Two breakpoints are set in Lines 2 and 3.\",\n \"The debugger raises a `ValueError` exception.\"\n ], '12345 % 7')",
"_____no_output_____"
]
],
[
[
"Try it out yourself by executing the above code block!",
"_____no_output_____"
],
[
"## Deleting Breakpoints\n\nTo delete breakpoints, we introduce a `delete` command:",
"_____no_output_____"
]
],
[
[
"class Debugger(Debugger):\n def delete_command(self, arg: str = \"\") -> None:\n \"\"\"Delete breakoint in line given by `arg`.\n Without given line, clear all breakpoints\"\"\"\n\n if arg:\n try:\n self.breakpoints.remove(int(arg))\n except KeyError:\n self.log(f\"No such breakpoint: {arg}\")\n else:\n self.breakpoints = set()\n self.log(\"Breakpoints:\", self.breakpoints)",
"_____no_output_____"
],
[
"# ignore\nnext_inputs([f\"break {remove_html_markup_starting_line_number + 15}\",\n \"continue\", \"print\",\n f\"delete {remove_html_markup_starting_line_number + 15}\",\n \"continue\"]);",
"_____no_output_____"
],
[
"with Debugger():\n remove_html_markup('abc')",
"Calling remove_html_markup(s = 'abc')\n"
],
[
"# ignore\nassert not next_inputs()",
"_____no_output_____"
],
[
"quiz(\"What does the command `delete` (without argument) do?\",\n [\n \"It deletes all breakpoints\",\n \"It deletes the source code\",\n \"It lists all breakpoints\",\n \"It stops execution\"\n ],\n '[n for n in range(2 // 2, 2 * 2) if n % 2 / 2]'\n )",
"_____no_output_____"
]
],
[
[
"## Listings with Benefits\n\nLet us extend `list` a bit such that \n\n1. it can also list a given function, and \n2. it shows the current line (`>`) as well as breakpoints (`#`)",
"_____no_output_____"
]
],
[
[
"class Debugger(Debugger):\n def list_command(self, arg: str = \"\") -> None:\n \"\"\"Show current function. If `arg` is given, show its source code.\"\"\"\n\n try:\n if arg:\n obj = eval(arg)\n source_lines, line_number = inspect.getsourcelines(obj)\n current_line = -1\n else:\n source_lines, line_number = \\\n getsourcelines(self.frame.f_code)\n current_line = self.frame.f_lineno\n except Exception as err:\n self.log(f\"{err.__class__.__name__}: {err}\")\n source_lines = []\n line_number = 0\n\n for line in source_lines:\n spacer = ' '\n if line_number == current_line:\n spacer = '>'\n elif line_number in self.breakpoints:\n spacer = '#'\n self.log(f'{line_number:4}{spacer} {line}', end='')\n line_number += 1",
"_____no_output_____"
],
[
"# ignore\n_, remove_html_markup_starting_line_number = \\\n inspect.getsourcelines(remove_html_markup)\nnext_inputs([f\"break {remove_html_markup_starting_line_number + 13}\",\n \"list\", \"continue\", \"delete\", \"list\", \"continue\"]);",
"_____no_output_____"
],
[
"with Debugger():\n remove_html_markup('abc')",
"Calling remove_html_markup(s = 'abc')\n"
],
[
"# ignore\nassert not next_inputs()",
"_____no_output_____"
]
],
[
[
"### Quitting\n\nIn the Python debugger interface, we can only observe, but not alter the control flow. To make sure we can always exit out of our debugging session, we introduce a `quit` command that deletes all breakpoints and resumes execution until the observed function finishes.",
"_____no_output_____"
]
],
[
[
"class Debugger(Debugger):\n def quit_command(self, arg: str = \"\") -> None:\n \"\"\"Finish execution\"\"\"\n\n self.breakpoints = set()\n self.stepping = False\n self.interact = False",
"_____no_output_____"
]
],
[
[
"With this, our command palette is pretty complete, and we can use our debugger to happily inspect Python executions.",
"_____no_output_____"
]
],
[
[
"# ignore\nnext_inputs([\"help\", \"quit\"]);",
"_____no_output_____"
],
[
"with Debugger():\n remove_html_markup('abc')",
"Calling remove_html_markup(s = 'abc')\n"
],
[
"# ignore\nassert not next_inputs()",
"_____no_output_____"
]
],
[
[
"## Synopsis",
"_____no_output_____"
],
[
"This chapter provides an interactive debugger for Python functions. The debugger is invoked as\n\n```python\nwith Debugger():\n function_to_be_observed()\n ...\n```\nWhile running, you can enter _debugger commands_ at the `(debugger)` prompt. Here's an example session:",
"_____no_output_____"
]
],
[
[
"# ignore\n_, remove_html_markup_starting_line_number = \\\n inspect.getsourcelines(remove_html_markup)\nnext_inputs([\"help\", f\"break {remove_html_markup_starting_line_number + 13}\",\n \"list\", \"continue\", \"step\", \"print out\", \"quit\"]);",
"_____no_output_____"
],
[
"with Debugger():\n ret = remove_html_markup('abc')",
"Calling remove_html_markup(s = 'abc')\n"
],
[
"# ignore\nassert not next_inputs()",
"_____no_output_____"
]
],
[
[
"The `Debugger` class can be easily extended in subclasses. A new method `NAME_command(self, arg)` will be invoked whenever a command named `NAME` is entered, with `arg` holding given command arguments (empty string if none).",
"_____no_output_____"
]
],
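[
[
"# A minimal sketch (added for illustration; not part of the original chapter)\n# of how a subclass can contribute its own command. The class name `MyDebugger`\n# and the command name `frame` are made up for this example; `self.log` and\n# `self.frame` come from the chapter's `Debugger`/`Tracer` classes.\nclass MyDebugger(Debugger):\n    def frame_command(self, arg: str = \"\") -> None:\n        \"\"\"Show the name and line number of the current frame\"\"\"\n        self.log(f'{self.frame.f_code.co_name}:{self.frame.f_lineno}')",
"_____no_output_____"
]
],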
[
[
"# ignore\nfrom ClassDiagram import display_class_hierarchy",
"_____no_output_____"
],
[
"# ignore\ndisplay_class_hierarchy(Debugger, \n public_methods=[\n Tracer.__init__,\n Tracer.__enter__,\n Tracer.__exit__,\n Tracer.traceit,\n Debugger.__init__,\n ],\n project='debuggingbook')",
"_____no_output_____"
]
],
[
[
"## Lessons Learned\n\n* _Debugging hooks_ from interpreted languages allow for simple interactive debugging.\n* A command-line debugging framework can be very easily extended with additional functionality.",
"_____no_output_____"
],
[
"## Next Steps\n\nIn the next chapter, we will see how [assertions](Assertions.ipynb) check correctness at runtime.",
"_____no_output_____"
],
[
"## Background\n\nThe command-line interface in this chapter is modeled after [GDB, the GNU debugger](https://www.gnu.org/software/gdb/), whose interface in turn goes back to earlier command-line debuggers such as [dbx](https://en.wikipedia.org/wiki/Dbx_%28debugger%29). All modern debuggers build on the functionality and concepts realized in these debuggers, be it breakpoints, stepping through programs, or inspecting program state.\n\nThe concept of time travel debugging (see the Exercises, below) has been invented (and reinvented) many times. One of the most impactful tools comes from King et al. \\cite{King2005}, integrating _a time-traveling virtual machine_ (TTVM) for debugging operating systems, integrated into GDB. The recent [record+replay \"rr\" debugger](https://rr-project.org) also implements time travel debugging on top of the GDB command line debugger; it is applicable for general-purpose programs and available as open source.",
"_____no_output_____"
],
[
"## Exercises\n",
"_____no_output_____"
],
[
"### Exercise 1: Changing State\n\nSome Python implementations allow to alter the state, by assigning values to `frame.f_locals`. Implement a `assign VAR=VALUE` command that allows to change the value of (local) variable `VAR` to the new value `VALUE`.\n\nNote: As detailed in [this blog post](https://utcc.utoronto.ca/~cks/space/blog/python/FLocalsAndTraceFunctions), \n`frame.f_locals` is re-populated with every access, so assign to our local alias `self.local_vars` instead.",
"_____no_output_____"
],
[
"**Solution.** Here is an `assign` command that gets things right on CPython.",
"_____no_output_____"
]
],
[
[
"class Debugger(Debugger):\n def assign_command(self, arg: str) -> None:\n \"\"\"Use as 'assign VAR=VALUE'. Assign VALUE to local variable VAR.\"\"\"\n\n sep = arg.find('=')\n if sep > 0:\n var = arg[:sep].strip()\n expr = arg[sep + 1:].strip()\n else:\n self.help_command(\"assign\")\n return\n\n vars = self.local_vars\n try:\n vars[var] = eval(expr, self.frame.f_globals, vars)\n except Exception as err:\n self.log(f\"{err.__class__.__name__}: {err}\")",
"_____no_output_____"
],
[
"# ignore\nnext_inputs([\"assign s = 'xyz'\", \"print\", \"step\", \"print\", \"step\",\n \"assign tag = True\", \"assign s = 'abc'\", \"print\",\n \"step\", \"print\", \"continue\"]);",
"_____no_output_____"
],
[
"with Debugger():\n remove_html_markup('abc')",
"Calling remove_html_markup(s = 'abc')\n"
],
[
"# ignore\nassert not next_inputs()",
"_____no_output_____"
]
],
[
[
"### Exercise 2: More Commands\n\nExtending the `Debugger` class with extra features and commands is a breeze. The following commands are inspired from [the GNU command-line debugger (GDB)](https://www.gnu.org/software/gdb/):",
"_____no_output_____"
],
[
"#### Named breakpoints (\"break\")\n\nWith `break FUNCTION` and `delete FUNCTION`, set and delete a breakpoint at `FUNCTION`.",
"_____no_output_____"
],
[
"#### Step over functions (\"next\")\n\nWhen stopped at a function call, the `next` command should execute the entire call, stopping when the function returns. (In contrast, `step` stops at the first line of the function called.)",
"_____no_output_____"
],
[
"#### Print call stack (\"where\")\n\nImplement a `where` command that shows the stack of calling functions.",
"_____no_output_____"
],
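[
"One possible sketch for `where` (my own attempt, not a reference solution from the chapter) simply walks up the chain of `f_back` frames:\n\n```python\nclass Debugger(Debugger):\n    def where_command(self, arg: str = \"\") -> None:\n        \"\"\"Print the stack of calling functions\"\"\"\n        frame = self.frame\n        while frame:\n            self.log(f'{frame.f_code.co_name}:{frame.f_lineno}')\n            frame = frame.f_back\n```",
"_____no_output_____"
],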
[
"#### Move up and down the call stack (\"up\" and \"down\")\n\nAfter entering the `up` command, explore the source and variables of the _calling_ function rather than the current function. Use `up` repeatedly to move further up the stack. `down` returns to the caller.",
"_____no_output_____"
],
[
"#### Execute until line (\"until\")\n\nWith `until LINE`, resume execution until a line greater than `LINE` is reached. If `LINE` is not given, resume execution until a line greater than the current is reached. This is useful to avoid stepping through multiple loop iterations.",
"_____no_output_____"
],
[
"#### Execute until return (\"finish\")\n\nWith `finish`, resume execution until the current function returns.",
"_____no_output_____"
],
[
"#### Watchpoints (\"watch\")\n\nWith `watch CONDITION`, stop execution as soon as `CONDITION` changes its value. (Use the code from our `EventTracer` class in the [chapter on Tracing](Tracer.ipynb).) `delete CONDITION` removes the watchpoint. Keep in mind that some variable names may not exist at all times.",
"_____no_output_____"
],
[
"### Exercise 3: Time-Travel Debugging\n\nRather than inspecting a function at the moment it executes, you can also _record_ the entire state (call stack, local variables, etc.) during execution, and then run an interactive session to step through the recorded execution. Your time travel debugger would be invoked as\n\n```python\nwith TimeTravelDebugger():\n function_to_be_tracked()\n ...\n```\n\nThe interaction then starts at the end of the `with` block.",
"_____no_output_____"
],
[
"#### Part 1: Recording Values\n\nStart with a subclass of `Tracer` from the [chapter on tracing](Tracer.ipynb) (say, `TimeTravelTracer`) to execute a program while recording all values. Keep in mind that recording even only local variables at each step quickly consumes large amounts of memory. As an alternative, consider recording only _changes_ to variables, with the option to restore an entire state from a baseline and later changes.",
"_____no_output_____"
],
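[
"A tiny sketch of the change-based recording idea (my own illustration, not part of the chapter): store a baseline state plus one dictionary of changed variables per step, and rebuild any state on demand.\n\n```python\ndef changed_vars(prev: dict, current: dict) -> dict:\n    \"\"\"Return the variables that differ between `prev` and `current`\"\"\"\n    return {name: value for name, value in current.items()\n            if name not in prev or prev[name] != value}\n\ndef restore_state(baseline: dict, changes: list) -> dict:\n    \"\"\"Rebuild a state from a baseline and a list of change dicts\"\"\"\n    state = dict(baseline)\n    for change in changes:\n        state.update(change)\n    return state\n```",
"_____no_output_____"
],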
[
"#### Part 2: Command Line Interface\n\nCreate `TimeTravelDebugger` as subclass of both `TimeTravelTracer` and `Debugger` to provide a command line interface as with `Debugger`, including additional commands which get you back to earlier states:\n\n* `back` is like `step`, except that you go one line back\n* `restart` gets you to the beginning of the execution\n* `rewind` gets you to the beginning of the current function invocation",
"_____no_output_____"
],
[
"#### Part 3: Graphical User Interface\n\nCreate `GUItimeTravelDebugger` to provide a _graphical user interface_ that allows you to explore a recorded execution, using HTML and JavaScript.",
"_____no_output_____"
],
[
"Here's a simple example to get you started. Assume you have recorded the following line numbers and variable values:",
"_____no_output_____"
]
],
[
[
"recording: List[Tuple[int, Dict[str, Any]]] = [\n (10, {'x': 25}),\n (11, {'x': 25}),\n (12, {'x': 26, 'a': \"abc\"}),\n (13, {'x': 26, 'a': \"abc\"}),\n (10, {'x': 30}),\n (11, {'x': 30}),\n (12, {'x': 31, 'a': \"def\"}),\n (13, {'x': 31, 'a': \"def\"}),\n (10, {'x': 35}),\n (11, {'x': 35}),\n (12, {'x': 36, 'a': \"ghi\"}),\n (13, {'x': 36, 'a': \"ghi\"}),\n]",
"_____no_output_____"
]
],
[
[
"Then, the following function will provide a _slider_ that will allow you to explore these values:",
"_____no_output_____"
]
],
[
[
"from bookutils import HTML",
"_____no_output_____"
],
[
"def slider(rec: List[Tuple[int, Dict[str, Any]]]) -> str:\n lines_over_time = [line for (line, var) in rec]\n vars_over_time = []\n for (line, vars) in rec:\n vars_over_time.append(\", \".join(f\"{var} = {repr(vars[var])}\"\n for var in vars))\n\n # print(lines_over_time)\n # print(vars_over_time)\n\n template = f'''\n <div class=\"time_travel_debugger\">\n <input type=\"range\" min=\"0\" max=\"{len(lines_over_time) - 1}\"\n value=\"0\" class=\"slider\" id=\"time_slider\">\n Line <span id=\"line\">{lines_over_time[0]}</span>:\n <span id=\"vars\">{vars_over_time[0]}</span>\n </div>\n <script>\n var lines_over_time = {lines_over_time};\n var vars_over_time = {vars_over_time};\n\n var time_slider = document.getElementById(\"time_slider\");\n var line = document.getElementById(\"line\");\n var vars = document.getElementById(\"vars\");\n\n time_slider.oninput = function() {{\n line.innerHTML = lines_over_time[this.value];\n vars.innerHTML = vars_over_time[this.value];\n }}\n </script>\n '''\n # print(template)\n return HTML(template)",
"_____no_output_____"
],
[
"slider(recording)",
"_____no_output_____"
]
],
[
[
"Explore the HTML and JavaScript details of how `slider()` works, and then expand it to a user interface where you can\n\n* see the current source code (together with the line being executed)\n* search for specific events, such as a line being executed or a variable changing its value\n\nJust like `slider()`, your user interface should come in pure HTML and JavaScript such that it can run in a browser (or a Jupyter notebook) without interacting with a Python program.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
ec9f14c26e7f134d6c20ce413829151889ca5282 | 5,737 | ipynb | Jupyter Notebook | Model.ipynb | aerdem4/kaggle-wiki-traffic | b78eaed722c3236e1fde3f3a11715f57292046db | [
"MIT"
] | 5 | 2018-03-29T03:00:35.000Z | 2020-06-12T08:28:30.000Z | Model.ipynb | aerdem4/kaggle-wiki-traffic | b78eaed722c3236e1fde3f3a11715f57292046db | [
"MIT"
] | null | null | null | Model.ipynb | aerdem4/kaggle-wiki-traffic | b78eaed722c3236e1fde3f3a11715f57292046db | [
"MIT"
] | 4 | 2018-05-11T18:03:22.000Z | 2020-10-28T16:16:16.000Z | 31.010811 | 142 | 0.518389 | [
[
[
"## Definitions",
"_____no_output_____"
]
],
[
[
"YEAR_SHIFT = 364 #number of days in a year, use multiple of 7 to be able to capture week behavior\nPERIOD = 49 #number of days for median comparison\nPREDICT_PERIOD = 75 #number of days which will be predicted\n\n#evaluation function\ndef smape(x, y):\n if x == y:\n return 0\n else:\n return np.abs(x-y)/(x+y)\n \n#median function ignoring nans\ndef safe_median(s):\n return np.median([x for x in s if ~np.isnan(x)])",
"_____no_output_____"
]
],
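[
[
"# Illustrative sanity check of the helpers above (added for clarity; not part\n# of the original notebook). Note that this smape() is the per-sample term\n# without the usual factor of 2. numpy is imported here because the original\n# cell relies on the import made in the next section.\nimport numpy as np\n\nprint(smape(100, 150))                        # 50 / 250 = 0.2\nprint(safe_median([1.0, float('nan'), 3.0]))  # median of [1.0, 3.0] = 2.0",
"_____no_output_____"
]
],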
[
[
"## Prepare training data",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n\ntrain = pd.read_csv(\"input/train_2.csv\")\ntrain = pd.melt(train[list(train.columns[-(YEAR_SHIFT + 2*PERIOD):])+['Page']], id_vars='Page', var_name='date', value_name='Visits')\ntrain['date'] = train['date'].astype('datetime64[ns]')\n\nLAST_TRAIN_DAY = train['date'].max()\n\ntrain = train.groupby(['Page'])[\"Visits\"].apply(lambda x: list(x))",
"_____no_output_____"
]
],
[
[
"## Make the predictions",
"_____no_output_____"
]
],
[
[
"pred_dict = {}\n\ncount = 0\nscount = 0\n\nfor page, row in zip(train.index, train):\n last_month = np.array(row[-PERIOD:])\n slast_month = np.array(row[-2*PERIOD:-PERIOD])\n prev_last_month = np.array(row[PERIOD:2*PERIOD])\n prev_slast_month = np.array(row[:PERIOD])\n \n use_last_year = False\n if ~np.isnan(row[0]):\n #calculate yearly prediction error\n year_increase = np.median(slast_month)/np.median(prev_slast_month)\n year_error = np.sum(list(map(lambda x: smape(x[0], x[1]), zip(last_month, prev_last_month * year_increase))))\n \n #calculate monthly prediction error\n smedian = np.median(slast_month)\n month_error = np.sum(list(map(lambda x: smape(x, smedian), last_month)))\n \n #check if yearly prediction is better than median prediction in the previous period\n error_diff = (month_error - year_error)/PERIOD\n if error_diff > 0.1:\n scount += 1\n use_last_year = True\n \n if use_last_year:\n last_year = np.array(row[2*PERIOD:2*PERIOD+PREDICT_PERIOD])\n preds = last_year * year_increase #consider yearly increase while using the last years visits\n else:\n preds = [0]*PREDICT_PERIOD\n windows = np.array([2, 3, 4, 7, 11, 18, 29, 47])*7 #kind of fibonacci\n medians = np.zeros((len(windows), 7))\n for i in range(7):\n for k in range(len(windows)):\n array = np.array(row[-windows[k]:]).reshape(-1, 7)\n # use 3-day window. for example, Friday: [Thursday, Friday, Saturday]\n s = np.hstack([array[:, (i-1)%7], array[:, i], array[:, (i+1)%7]]).reshape(-1)\n medians[k, i] = safe_median(s)\n for i in range(PREDICT_PERIOD):\n preds[i] = safe_median(medians[:, i%7])\n \n pred_dict[page] = preds\n \n count += 1 \n if count % 1000 == 0:\n print(count, scount)\n\ndel train\nprint(\"Yearly prediction is done on the percentage:\", scount/count)",
"_____no_output_____"
]
],
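[
[
"# Illustrative example (not part of the original notebook) of the weekday-median\n# idea used above: reshape a daily series into rows of 7 days and take a median\n# per weekday across the window.\nimport numpy as np\n\ndaily = np.arange(28, dtype=float)   # four weeks of fake visits\nweeks = daily[-21:].reshape(-1, 7)   # last three weeks, one row per week\nprint(np.median(weeks, axis=0))      # one median per weekday",
"_____no_output_____"
]
],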
[
[
"## Prepare the submission",
"_____no_output_____"
]
],
[
[
"test = pd.read_csv(\"input/key_2.csv\")\ntest['date'] = test.Page.apply(lambda a: a[-10:])\ntest['Page'] = test.Page.apply(lambda a: a[:-11])\ntest['date'] = test['date'].astype('datetime64[ns]')\n\ntest[\"date\"] = test[\"date\"].apply(lambda x: int((x - LAST_TRAIN_DAY).days) - 1)\n\ndef func(row):\n return pred_dict[row[\"Page\"]][row[\"date\"]]\n\ntest[\"Visits\"] = test.apply(func, axis=1)\n\ntest.loc[test.Visits.isnull(), 'Visits'] = 0\ntest['Visits'] = test['Visits'].values + test['Visits'].values*0.03 # overestimating is usually better for smape\ntest.Visits = test.Visits.round(4)\ntest[['Id','Visits']].to_csv('submission.csv', index=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9f177b98646cf92fe973fddc57b3ff9b90e686 | 4,311 | ipynb | Jupyter Notebook | docs/notebooks/12_circuit_simulation.ipynb | thomasdorch/ubc | 466a932f3c03070d40667ba0f6d446f164484ae1 | [
"MIT"
] | null | null | null | docs/notebooks/12_circuit_simulation.ipynb | thomasdorch/ubc | 466a932f3c03070d40667ba0f6d446f164484ae1 | [
"MIT"
] | null | null | null | docs/notebooks/12_circuit_simulation.ipynb | thomasdorch/ubc | 466a932f3c03070d40667ba0f6d446f164484ae1 | [
"MIT"
] | null | null | null | 22.107692 | 275 | 0.561819 | [
[
[
"# Circuit simulation\n\nYou can describe a component linear response with its [Scattering parameters](https://en.wikipedia.org/wiki/Scattering_parameters)\n\nThe Scattering matrix of a component can be simulated with electromagnetic methods such as Finite difference time domain (FDTD)\n\n[Simphony](https://simphonyphotonics.readthedocs.io/en/latest/) open source package provides you with some of the the circuit linear solver to solve the circuit response of several components connected in a circuit. Simphony also has some of the UBC models built-in.\n\nFor some components not available in simphony you can leverage gdsfactory FDTD lumerical interface to compute the Sparameters of a component.\n\n\n## Component models",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom simphony.library import ebeam\n\nimport gdsfactory as gf\nimport gdsfactory.simulation.simphony as gs\nimport ubcpdk\nimport ubcpdk.simulation.circuits_simphony as cm",
"_____no_output_____"
],
[
"ubcpdk.components.dc_broadband_te()",
"_____no_output_____"
],
[
"c = cm.ebeam_bdc_te1550()\ngs.plot_model(c)",
"_____no_output_____"
],
[
"bdc = cm.ebeam_bdc_te1550()\nw = np.linspace(1520, 1580) * 1e-9\nf = 3e8 / w\ns = bdc.s_parameters(freq=f)\nplt.plot(w * 1e9, np.abs(s[:, 0, 2]) ** 2)\nplt.plot(w * 1e9, np.abs(s[:, 0, 3]) ** 2)",
"_____no_output_____"
],
[
"ubcpdk.components.y_splitter()",
"_____no_output_____"
],
[
"c = cm.ebeam_y_1550()\ngs.plot_model(c)",
"_____no_output_____"
],
[
"ubcpdk.components.ebeam_dc_halfring_straight()",
"_____no_output_____"
],
[
"c = cm.ebeam_dc_halfring_straight()\ngs.plot_model(c)",
"_____no_output_____"
],
[
"ubcpdk.components.ebeam_dc_te1550()",
"_____no_output_____"
],
[
"c = cm.ebeam_dc_te1550()\ngs.plot_model(c)",
"_____no_output_____"
]
],
[
[
"## Circuit simulations\n\nWe can also do some circuit simulations.",
"_____no_output_____"
]
],
[
[
"ubcpdk.components.mzi(delta_length=100)",
"_____no_output_____"
],
[
"circuit_mzi =cm.mzi(delta_length=10)\ngs.plot_circuit(circuit_mzi)",
"_____no_output_____"
],
[
"circuit_mzi =cm.mzi(delta_length=100)\ngs.plot_circuit(circuit_mzi)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ec9f3147c52e10cc31d8f863001a53349a32988d | 10,491 | ipynb | Jupyter Notebook | homeworks/07/design_patterns.ipynb | armgabrielyan/python-2 | 69f733a93fd2a7a1d55052122096cb69ee6d12d3 | [
"MIT"
] | null | null | null | homeworks/07/design_patterns.ipynb | armgabrielyan/python-2 | 69f733a93fd2a7a1d55052122096cb69ee6d12d3 | [
"MIT"
] | null | null | null | homeworks/07/design_patterns.ipynb | armgabrielyan/python-2 | 69f733a93fd2a7a1d55052122096cb69ee6d12d3 | [
"MIT"
] | 1 | 2021-08-08T20:06:54.000Z | 2021-08-08T20:06:54.000Z | 28.354054 | 1,215 | 0.494519 | [
[
[
"## Problem 1\n\n```\nUse prototype pattern and classes of your choice. create an abstract class Shape and concrete classes extending the Shape class: Circle, Square and Rectangle. Define a class ShapeCache which stores shape objects in a dictionary and returns their clones when requested.\n```",
"_____no_output_____"
]
],
[
[
"from abc import ABCMeta, abstractmethod\nimport copy\n\n\nclass Shape(metaclass = ABCMeta):\n \n def __init__(self):\n self.id = None\n self.type = None\n \n @abstractmethod\n def draw(self):\n pass\n \n def get_type(self):\n return self.type\n \n def set_type(self, _type):\n self.type = _type\n \n def get_id(self):\n return self.id\n \n def set_id(self, _id):\n self.id = _id\n \n def clone(self):\n return copy.copy(self)\n \n \nclass Circle(Shape):\n def __init__(self):\n super().__init__()\n self.set_type('Circle')\n \n def draw(self):\n print('Drawing a circle...')\n \n\nclass Square(Shape):\n def __init__(self):\n super().__init__()\n self.set_type('Square')\n \n def draw(self):\n print('Drawing a square...')\n \n \nclass Rectangle(Shape):\n def __init__(self):\n super().__init__()\n self.set_type('Rectangle')\n \n def draw(self):\n print('Drawing a rectangle...')\n \n \nclass ShapeCache:\n \n \n cache = {}\n \n @staticmethod\n def get(_id):\n shape = ShapeCache.cache.get(_id, None)\n\n return shape.clone()\n \n @staticmethod\n def load():\n circle = Circle()\n circle.set_id(1)\n ShapeCache.cache[circle.get_id()] = circle\n \n square = Square()\n square.set_id(2)\n ShapeCache.cache[square.get_id()] = square\n \n rectangle = Rectangle()\n rectangle.set_id(3)\n ShapeCache.cache[rectangle.get_id()] = rectangle\n \n \nShapeCache.load()\n\ncircle = ShapeCache.get(1)\nprint(circle.get_type())\n\nsquare = ShapeCache.get(2)\nprint(square.get_type())\n\nrectangle = ShapeCache.get(3)\nprint(rectangle.get_type())",
"Circle\nSquare\nRectangle\n"
]
],
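[
[
"# Quick check (illustrative, not part of the original homework): each call to\n# ShapeCache.get() returns a fresh shallow copy, so mutating a clone does not\n# affect the cached prototype.\nc1 = ShapeCache.get(1)\nc2 = ShapeCache.get(1)\nprint(c1 is c2)                     # False - two distinct clones\nc1.set_id(99)\nprint(ShapeCache.get(1).get_id())   # still 1",
"_____no_output_____"
]
],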
[
[
"## Problem 2\n\n```\nUse adapter pattern and classes of your choice. Create a structure where you have 1-2 adaptees that have a method that returns some text in spanish. Have an adapter which will have a method that translates the text to english.\n```",
"_____no_output_____"
]
],
[
[
"import abc\n \n\nclass Target(metaclass=abc.ABCMeta):\n \n \n def __init__(self):\n self._adaptee = None\n \n \n @abc.abstractmethod\n def request(self):\n pass\n \n \n def set_adaptee(self, adaptee):\n self._adaptee = adaptee\n \n \nclass Adapter(Target):\n DICTIONARY = {\n 'Hola': 'Hello',\n 'Adiós': 'Goodbye'\n }\n \n \n def __init__(self, adaptee):\n self.set_adaptee(adaptee)\n \n def request(self):\n return Adapter.DICTIONARY[self._adaptee.speak()]\n \n \nclass SpanishHello:\n\n \n def speak(self):\n return 'Hola'\n \n \nclass SpanishGoodbye:\n\n \n def speak(self):\n return 'Adiós'\n \n \nadapter = Adapter(SpanishHello())\nprint(adapter.request())\n\nadapter = Adapter(SpanishGoodbye())\nprint(adapter.request())",
"Hello\nGoodbye\n"
]
],
[
[
"## Problem 3\n\n```\nUse singleton pattern and classes of your choice. Create a structure where you have some resource that has states busy and free and 3 users that try to use the resource and change the state to busy while they are using it. The resource should be singleton. Implement following 2 different methods for singleton implementation that we have discussion. \n```",
"_____no_output_____"
]
],
[
[
"class USB:\n \n __shared_state = dict()\n \n def __init__(self):\n self.__dict__ = self.__shared_state\n self.state = 'free'\n \n def __str__(self):\n return self.state\n \n\n \nuser1 = USB()\nuser2 = USB()\nuser3 = USB()\n\nuser1.state = 'busy'\n\nprint(user1.state)\nprint(user2.state)\nprint(user3.state)\n\nuser1.state = 'free'\n\nprint(user1.state)\nprint(user2.state)\nprint(user3.state)\n\nuser1.state = 'busy'\n\nprint(user1.state)\nprint(user2.state)\nprint(user3.state)",
"busy\nbusy\nbusy\nfree\nfree\nfree\nbusy\nbusy\nbusy\n"
],
[
"class USB:\n \n __shared_instance = 'free'\n \n @staticmethod\n def getInstance():\n if USB.__shared_instance == 'free':\n USB()\n return USB.__shared_instance\n \n def __init__(self):\n if USB.__shared_instance != 'free':\n raise Exception ('This class is a singleton class !')\n else:\n self.state = 'free'\n USB.__shared_instance = self\n\n\nuser1 = USB()\nuser2 = USB.getInstance()\n\nprint(user1.state)\nprint(user2.state)\n\nuser2.state = 'busy'\n\nuser3 = USB.getInstance()\n\nprint(user1.state)\nprint(user2.state)\nprint(user3.state)",
"free\nfree\nbusy\nbusy\nbusy\n"
],
[
"user4 = USB()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ec9f3bc394bbe8b018073b90573905278adf57be | 85,526 | ipynb | Jupyter Notebook | action_recognition/LSTM_FULL.ipynb | feedforward/Vid_CLIP | 99e683d87bfbd4cf2c1e7531c39d400af618853a | [
"MIT"
] | null | null | null | action_recognition/LSTM_FULL.ipynb | feedforward/Vid_CLIP | 99e683d87bfbd4cf2c1e7531c39d400af618853a | [
"MIT"
] | null | null | null | action_recognition/LSTM_FULL.ipynb | feedforward/Vid_CLIP | 99e683d87bfbd4cf2c1e7531c39d400af618853a | [
"MIT"
] | null | null | null | 69.760196 | 22,994 | 0.715432 | [
[
[
"# Mount Drive",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n"
],
[
"!pip install git+https://github.com/openai/CLIP.git",
"Collecting git+https://github.com/openai/CLIP.git\n Cloning https://github.com/openai/CLIP.git to /tmp/pip-req-build-akq7s808\n Running command git clone -q https://github.com/openai/CLIP.git /tmp/pip-req-build-akq7s808\nCollecting ftfy\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/ce/b5/5da463f9c7823e0e575e9908d004e2af4b36efa8d02d3d6dad57094fcb11/ftfy-6.0.1.tar.gz (63kB)\n\u001b[K |████████████████████████████████| 71kB 691kB/s \n\u001b[?25hRequirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (2019.12.20)\nRequirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (4.41.1)\nCollecting torch~=1.7.1\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/90/5d/095ddddc91c8a769a68c791c019c5793f9c4456a688ddd235d6670924ecb/torch-1.7.1-cp37-cp37m-manylinux1_x86_64.whl (776.8MB)\n\u001b[K |████████████████████████████████| 776.8MB 22kB/s \n\u001b[?25hCollecting torchvision~=0.8.2\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/94/df/969e69a94cff1c8911acb0688117f95e1915becc1e01c73e7960a2c76ec8/torchvision-0.8.2-cp37-cp37m-manylinux1_x86_64.whl (12.8MB)\n\u001b[K |████████████████████████████████| 12.8MB 55.9MB/s \n\u001b[?25hRequirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from ftfy->clip==1.0) (0.2.5)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torch~=1.7.1->clip==1.0) (1.19.5)\nRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch~=1.7.1->clip==1.0) (3.7.4.3)\nRequirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.7/dist-packages (from torchvision~=0.8.2->clip==1.0) (7.1.2)\nBuilding wheels for collected packages: clip, ftfy\n Building wheel for clip (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for clip: filename=clip-1.0-cp37-none-any.whl size=1368708 sha256=ca50cd1711234696bc9809ccedb0d0bdf899c73c8202c972d692ed9045503387\n Stored in directory: /tmp/pip-ephem-wheel-cache-_qhbmfvw/wheels/79/51/d7/69f91d37121befe21d9c52332e04f592e17d1cabc7319b3e09\n Building wheel for ftfy (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for ftfy: filename=ftfy-6.0.1-cp37-none-any.whl size=41573 sha256=2017d65b9037e4c4a7ddb72fc81f4135f4d7fdc818aa1f35ab388eeccdfa13b6\n Stored in directory: /root/.cache/pip/wheels/ae/73/c7/9056e14b04919e5c262fe80b54133b1a88d73683d05d7ac65c\nSuccessfully built clip ftfy\n\u001b[31mERROR: torchtext 0.9.1 has requirement torch==1.8.1, but you'll have torch 1.7.1 which is incompatible.\u001b[0m\nInstalling collected packages: ftfy, torch, torchvision, clip\n Found existing installation: torch 1.8.1+cu101\n Uninstalling torch-1.8.1+cu101:\n Successfully uninstalled torch-1.8.1+cu101\n Found existing installation: torchvision 0.9.1+cu101\n Uninstalling torchvision-0.9.1+cu101:\n Successfully uninstalled torchvision-0.9.1+cu101\nSuccessfully installed clip-1.0 ftfy-6.0.1 torch-1.7.1 torchvision-0.8.2\n"
]
],
[
[
"# Import Necessary Packages",
"_____no_output_____"
]
],
[
[
"from torchvision import transforms\nimport torch\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1 import ImageGrid\nimport os\nimport numpy as np\nimport torch\nfrom tqdm.notebook import tqdm\nfrom torchvision import transforms\nimport matplotlib.pyplot as plt\nimport random\nimport numpy as np\nfrom PIL import Image\nimport torchvision \nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.utils.data import Dataset, DataLoader\nimport sklearn\nfrom torch import nn\nfrom torch.nn.utils.rnn import *\nimport fnmatch\nimport clip\n\n\ncuda = torch.cuda.is_available()\nprint(\"cuda\", cuda)\nnum_workers = 8 if cuda else 0\nprint(num_workers)\nprint(\"Torch version:\", torch.__version__)\nbatch_size=8",
"cuda True\n8\nTorch version: 1.7.1\n"
]
],
[
[
"## Load CLIP Model",
"_____no_output_____"
]
],
[
[
"print(\"Avaliable Models: \", clip.available_models())\nmodel, preprocess = clip.load(\"RN50\") # clip.load(\"ViT-B/32\") #\n\ninput_resolution = model.input_resolution #.item()\ncontext_length = model.context_length #.item()\nvocab_size = model.vocab_size #.item()\n\nprint(\"Model parameters:\", f\"{np.sum([int(np.prod(p.shape)) for p in model.parameters()]):,}\")\nprint(\"Input resolution:\", input_resolution)\nprint(\"Context length:\", context_length)\nprint(\"Vocab size:\", vocab_size)",
"Avaliable Models: ['RN50', 'RN101', 'RN50x4', 'ViT-B/32']\nModel parameters: 102,007,137\nInput resolution: tensor(224, device='cuda:0')\nContext length: tensor(77, device='cuda:0')\nVocab size: tensor(49408, device='cuda:0')\n"
]
],
[
[
"## Load Templates\n",
"_____no_output_____"
]
],
[
[
"templates = [\n 'a bad photo of a {}.',\n 'a photo of many {}.',\n 'a sculpture of a {}.',\n 'a photo of the hard to see {}.',\n 'a low resolution photo of the {}.',\n 'a rendering of a {}.',\n 'graffiti of a {}.',\n 'a bad photo of the {}.',\n 'a cropped photo of the {}.',\n 'a tattoo of a {}.',\n 'the embroidered {}.',\n 'a photo of a hard to see {}.',\n 'a bright photo of a {}.',\n 'a photo of a clean {}.',\n 'a photo of a dirty {}.',\n 'a dark photo of the {}.',\n 'a drawing of a {}.',\n 'a photo of my {}.',\n 'the plastic {}.',\n 'a photo of the cool {}.',\n 'a close-up photo of a {}.',\n 'a black and white photo of the {}.',\n 'a painting of the {}.',\n 'a painting of a {}.',\n 'a pixelated photo of the {}.',\n 'a sculpture of the {}.',\n 'a bright photo of the {}.',\n 'a cropped photo of a {}.',\n 'a plastic {}.',\n 'a photo of the dirty {}.',\n 'a jpeg corrupted photo of a {}.',\n 'a blurry photo of the {}.',\n 'a photo of the {}.',\n 'a good photo of the {}.',\n 'a rendering of the {}.',\n 'a {} in a video game.',\n 'a photo of one {}.',\n 'a doodle of a {}.',\n 'a close-up photo of the {}.',\n 'a photo of a {}.',\n 'the origami {}.',\n 'the {} in a video game.',\n 'a sketch of a {}.',\n 'a doodle of the {}.',\n 'a origami {}.',\n 'a low resolution photo of a {}.',\n 'the toy {}.',\n 'a rendition of the {}.',\n 'a photo of the clean {}.',\n 'a photo of a large {}.',\n 'a rendition of a {}.',\n 'a photo of a nice {}.',\n 'a photo of a weird {}.',\n 'a blurry photo of a {}.',\n 'a cartoon {}.',\n 'art of a {}.',\n 'a sketch of the {}.',\n 'a embroidered {}.',\n 'a pixelated photo of a {}.',\n 'itap of the {}.',\n 'a jpeg corrupted photo of the {}.',\n 'a good photo of a {}.',\n 'a plushie {}.',\n 'a photo of the nice {}.',\n 'a photo of the small {}.',\n 'a photo of the weird {}.',\n 'the cartoon {}.',\n 'art of the {}.',\n 'a drawing of the {}.',\n 'a photo of the large {}.',\n 'a black and white photo of a {}.',\n 'the plushie {}.',\n 'a dark photo of a {}.',\n 'itap of a {}.',\n 'graffiti of the {}.',\n 'a toy {}.',\n 'itap of my {}.',\n 'a photo of a cool {}.',\n 'a photo of a small {}.',\n 'a tattoo of the {}.',\n]",
"_____no_output_____"
]
],
[
[
"# Selected classes and mapping for kinetics dataset",
"_____no_output_____"
]
],
[
[
"classnames = ['making tea',\n 'shaking head',\n 'skiing slalom',\n 'bobsledding',\n 'high kick',\n 'scrambling eggs',\n 'bee keeping',\n 'swinging on something',\n 'washing hands',\n 'laying bricks',\n 'push up',\n 'doing nails',\n 'massaging legs',\n 'using computer',\n 'clapping',\n 'drinking beer',\n 'eating chips',\n 'riding mule',\n 'petting animal (not cat)',\n 'frying vegetables',\n 'skiing (not slalom or crosscountry)',\n 'snowkiting',\n 'massaging person’s head',\n 'cutting nails',\n 'picking fruit']\n\nmap_id = {}\ni=0\nfor label in classnames:\n map_id[label]=i\n i+=1",
"_____no_output_____"
],
[
"map_id",
"_____no_output_____"
]
],
[
[
"## Calculate Zeroshot_classifier",
"_____no_output_____"
]
],
[
[
"classnames_str = {x:x.replace('_', ' ') for x in classnames}\nclassnames_str",
"_____no_output_____"
],
[
"import pdb\ndef zeroshot_classifier(classnames, act_descriptions):\n with torch.no_grad():\n zeroshot_weights = []\n for classname in classnames:\n texts = [template.format(classname) for template in templates]\n # pdb.set_trace()\n texts = clip.tokenize(texts).cuda() #tokenize\n class_embeddings = model.encode_text(texts) #embed with text encoder\n class_embeddings /= class_embeddings.norm(dim=-1, keepdim=True)\n class_embedding = class_embeddings.mean(dim=0)\n class_embedding /= class_embedding.norm()\n zeroshot_weights.append(class_embedding)\n zeroshot_weights = torch.stack(zeroshot_weights, dim=1).cuda()\n return zeroshot_weights",
"_____no_output_____"
],
[
"a = nn.Linear(1024,25)\na.weight.shape",
"_____no_output_____"
],
[
"zeroshot_weights = zeroshot_classifier(classnames, classnames_str)\n# print(zeroshot_weights.shape)\n# zeroshot_weights1 = zeroshot_weights.expand(batch_size, zeroshot_weights.shape[0], zeroshot_weights.shape[1])\n# print(zeroshot_weights.shape)\n# print(type(zeroshot_weights))",
"_____no_output_____"
],
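[
"# Illustrative use of the zero-shot weights (added for clarity; not part of the\n# original notebook): normalized features of shape [N, 1024] are classified by a\n# dot product with the [1024, 25] class embeddings. The 100x scale follows the\n# usual CLIP zero-shot recipe and is an assumption here.\ndummy_features = torch.randn(4, 1024).cuda()\ndummy_features /= dummy_features.norm(dim=-1, keepdim=True)\ndummy_logits = 100.0 * dummy_features.float() @ zeroshot_weights.float()\nprint(dummy_logits.shape)  # torch.Size([4, 25])",
"_____no_output_____"
],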
[
"zeroshot_weights.shape",
"_____no_output_____"
],
[
"# zeroshot_weights_ = torch.autograd.Variable(zeroshot_weights1,requires_grad=True)",
"_____no_output_____"
],
[
"## Labels\n## To clip encode_text\n## 25 * 1024\n",
"_____no_output_____"
],
[
"# def zeroshot_classifier(classnames, act_descriptions):\n# with torch.no_grad():\n# zeroshot_weights = []\n# for classname in tqdm(classnames):\n# #print(classname, act_descriptions[classname])\n# des_size = len(act_descriptions[classname])\n# texts = [ act_descriptions[classname][x : x+100] for x in range(0, des_size, 100)]#format with class\n# texts = [template.format(texts[0]) for template in templates]\n# # print(\"\\n\\n\".join(texts), \"\\n###################################\\n\")\n# texts = clip.tokenize(texts).cuda() #tokenize\n# # print(texts.shape, \"\\n###################################\\n\")\n# class_embeddings = model.encode_text(texts) #embed with text encoder\n# class_embeddings /= class_embeddings.norm(dim=-1, keepdim=True)\n# class_embedding = class_embeddings.mean(dim=0)\n \n# class_embedding /= class_embedding.norm()\n# zeroshot_weights.append(class_embedding)\n# zeroshot_weights = torch.stack(zeroshot_weights, dim=1).cuda()\n# return zeroshot_weights\n\n\n# zeroshot_weights = zeroshot_classifier(classnames, classnames_str)",
"_____no_output_____"
]
],
[
[
"# DEVELOP LSTM MODEL",
"_____no_output_____"
]
],
[
[
"from torch.utils.tensorboard import SummaryWriter",
"_____no_output_____"
],
[
"import pdb\n# LSTM Model\nNUM_CLASSES = 25\nclass Model(nn.Module):\n def __init__(self, input_feature_size, embed_size, out_phoeneme, hidden_size):\n super(Model, self).__init__()\n self.layer1 = nn.Sequential(\n nn.Conv1d(input_feature_size, embed_size , kernel_size=2),\n nn.ReLU(),\n )\n # No of layers ---> reduce\n self.lstm = nn.LSTM(embed_size, hidden_size, num_layers=3,bidirectional=True)\n self.output1 = nn.Linear(hidden_size * 2, 1024)\n # self.output2 = nn.Linear(1024,1024)\n self.dummy = nn.Linear(1024,NUM_CLASSES,bias=False)\n self.dummy.weight = torch.nn.Parameter(zeroshot_weights.float().T.clone(), requires_grad=False)\n\n # self.dummy.requires_grad = True\n # self.output3 = zeroshot_weights_.float().cuda()\n\n \n def forward(self, X, lengths):\n X_ = torch.transpose(X,2,1)\n X_ = F.pad(input=X_, pad=(0,1,0,0), mode='constant', value=0)\n X = self.layer1(X_)\n X = torch.transpose(X,0,2)\n X = torch.transpose(X,1,2)\n packed_X = pack_padded_sequence(X, lengths.cpu(), enforce_sorted=False)\n packed_out,(h_n,c_n) = self.lstm(packed_X)\n out,_ = pad_packed_sequence(packed_out)\n out = self.output1(out[0,:,:]) # 1024\n # out /= out.norm(dim=-1, keepdim=True) # Normalize the logits. #### SHOULD WE MULTIPLY BY 100\n logits = self.dummy(out)\n return logits\n \ndef init_weights(m):\n if type(m) == nn.Conv1d or type(m) == nn.Linear:\n torch.nn.init.xavier_normal_(m.weight.data) ",
"_____no_output_____"
]
],
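[
[
"# Illustrative shape check (added for clarity; not part of the original\n# notebook): the model maps a batch of [seq_len, 1024] CLIP feature sequences to\n# [batch, 25] class logits. The dummy tensors below are made up.\ndummy_model = Model(1024, 1000, 25, 512).cuda()\ndummy_feats = torch.randn(2, 100, 1024).cuda()\ndummy_lens = torch.tensor([100, 100])\nprint(dummy_model(dummy_feats, dummy_lens).shape)  # torch.Size([2, 25])",
"_____no_output_____"
]
],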
[
[
"## Dataloader and Dataset",
"_____no_output_____"
]
],
[
[
"# Dataset class for Train and Dev\nclass Dataset(Dataset):\n def __init__(self, path):\n self.path = path\n self.data = []\n for file in os.listdir(self.path):\n if fnmatch.fnmatch(file, '*.npz'):\n self.data.append(file)\n\n \n def __len__(self):\n return len(self.data)\n def __getitem__(self, index):\n data = np.load(os.path.join(self.path,self.data[index]),allow_pickle=True)\n sample = torch.from_numpy(data['data']).type(torch.FloatTensor)\n label = torch.tensor(data['label'])#.type(torch.LongTensor)\n length = torch.tensor(100)\n return sample,label,length",
"_____no_output_____"
],
[
"# 1024 X 25",
"_____no_output_____"
],
[
"# Dataloader train\ntrain_dataset = Dataset(\"/content/drive/MyDrive/NPZ_TRAIN_16824\")\ntrain_dataloader = DataLoader(train_dataset, batch_size=64, shuffle=True, num_workers=8)\n\n# Dataloader dev\n# dev_dataset = Dataset(dev_data,dev_labels)\n# dev_dataloader = DataLoader(dev_dataset, batch_size=64, shuffle=True, collate_fn=pad_collate, num_workers=8)",
"_____no_output_____"
],
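[
"# Illustrative sanity check (not part of the original notebook): each item is\n# expected to be a [100, 1024] tensor of per-frame CLIP features, a label\n# tensor, and the fixed length 100 (these shapes are assumptions about the\n# .npz files).\nsample, label, length = train_dataset[0]\nprint(sample.shape, label, length)",
"_____no_output_____"
],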
[
"train_dataset.__len__()",
"_____no_output_____"
]
],
[
[
"## Train and Test Loop",
"_____no_output_____"
]
],
[
[
"# Set the hyperparameters of the model\nnumEpochs = 25\nnum_feats = 1024\nlearningRate = 1e-3\nweightDecay = 5e-6\nnum_classes = 25\nhidden_size = 512\nembed_size = 1000",
"_____no_output_____"
],
[
"!tensorboard --logdir logs",
"2021-05-02 05:30:11.907317: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\nServing TensorBoard on localhost; to expose to the network, use a proxy or pass --bind_all\nTensorBoard 2.4.1 at http://localhost:6006/ (Press CTRL+C to quit)\n"
],
[
"# Model Initialisation\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\nmodel = Model(num_feats,embed_size,num_classes,hidden_size)\nmodel.apply(init_weights)\nmodel.to(device)\ncriterion = nn.CrossEntropyLoss()\noptimizer = torch.optim.Adam(model.parameters(), lr=learningRate)\nwriter = SummaryWriter(log_dir='runs/lstm')",
"_____no_output_____"
],
[
"# Train Function\ndef validate(model, data_loader):\n model.eval()\n correct = 0\n total = 0\n loss = []\n for batch_num, (feats,labels,lengths) in enumerate(data_loader):\n feats,labels = feats.to(device),labels[:,0].to(device)\n out = model(feats,lengths)\n curr_loss = criterion(out, labels.long())\n correct += (torch.argmax(out,dim=1)==labels).sum().detach().cpu().numpy()\n total += feats.shape[0]\n # Compute loss\n loss.append(curr_loss.item()) \n\n avg_loss = np.mean(loss)\n print(\"Accuracy:\",correct/total)\n writer.add_scalar('Val/Loss', avg_loss) \n writer.add_scalar('Val/Accuracy', correct/total) \n return avg_loss,correct/total",
"_____no_output_____"
],
[
"# Train Function\ntrain_loss = []\nval_loss = []\nval_acc = []\ndef train(model, data_loader,numEpochs,val_dataloader):\n model.train()\n for epoch in range(numEpochs):\n avg_loss = 0.0 \n for batch_num, (feats,labels,lengths) in enumerate(data_loader):\n torch.autograd.set_detect_anomaly(True)\n optimizer.zero_grad()\n feats,labels = feats.to(device),labels[:,0].to(device)\n out = model(feats,lengths)\n loss = criterion(out, labels.long())\n loss.backward()\n optimizer.step() \n avg_loss += loss.item()\n if batch_num % 50 == 49:\n print('Epoch: {}\\tBatch: {}\\tAvg-Loss: {:.4f}'.format(epoch+1, batch_num+1, avg_loss/50))\n train_loss.append(avg_loss/50)\n writer.add_scalar('Train/Loss', avg_loss/50)\n avg_loss = 0.0 \n vloss,vacc = validate(model,val_dataloader)\n val_loss.append(vloss)\n val_acc.append(vacc)\n model.train()",
"_____no_output_____"
],
[
"val_dataset = Dataset(\"/content/drive/MyDrive/NPZ_VALIDATION_16824\")\nval_dataloader = DataLoader(val_dataset, batch_size=64, shuffle=False, num_workers=8)\ntrain(model,train_dataloader,numEpochs,val_dataloader)",
"Epoch: 1\tBatch: 50\tAvg-Loss: 2.2911\nAccuracy: 0.39473684210526316\nEpoch: 1\tBatch: 100\tAvg-Loss: 1.3261\nAccuracy: 0.5797448165869219\nEpoch: 1\tBatch: 150\tAvg-Loss: 1.1393\nAccuracy: 0.6323763955342903\nEpoch: 2\tBatch: 50\tAvg-Loss: 0.8755\nAccuracy: 0.6547049441786283\nEpoch: 2\tBatch: 100\tAvg-Loss: 0.8627\nAccuracy: 0.6802232854864434\nEpoch: 2\tBatch: 150\tAvg-Loss: 0.8371\nAccuracy: 0.7161084529505582\nEpoch: 3\tBatch: 50\tAvg-Loss: 0.6851\nAccuracy: 0.6953748006379585\nEpoch: 3\tBatch: 100\tAvg-Loss: 0.6559\nAccuracy: 0.696969696969697\nEpoch: 3\tBatch: 150\tAvg-Loss: 0.6663\nAccuracy: 0.726475279106858\nEpoch: 4\tBatch: 50\tAvg-Loss: 0.5102\nAccuracy: 0.7033492822966507\nEpoch: 4\tBatch: 100\tAvg-Loss: 0.5639\nAccuracy: 0.7089314194577353\nEpoch: 4\tBatch: 150\tAvg-Loss: 0.5711\nAccuracy: 0.7352472089314195\nEpoch: 5\tBatch: 50\tAvg-Loss: 0.4413\nAccuracy: 0.7272727272727273\nEpoch: 5\tBatch: 100\tAvg-Loss: 0.4632\nAccuracy: 0.7145135566188198\nEpoch: 5\tBatch: 150\tAvg-Loss: 0.4865\nAccuracy: 0.74481658692185\nEpoch: 6\tBatch: 50\tAvg-Loss: 0.4035\nAccuracy: 0.7145135566188198\nEpoch: 6\tBatch: 100\tAvg-Loss: 0.3742\nAccuracy: 0.7392344497607656\nEpoch: 6\tBatch: 150\tAvg-Loss: 0.3921\nAccuracy: 0.7137161084529505\nEpoch: 7\tBatch: 50\tAvg-Loss: 0.3378\nAccuracy: 0.7272727272727273\nEpoch: 7\tBatch: 100\tAvg-Loss: 0.3515\nAccuracy: 0.7145135566188198\nEpoch: 7\tBatch: 150\tAvg-Loss: 0.3570\nAccuracy: 0.7208931419457735\nEpoch: 8\tBatch: 50\tAvg-Loss: 0.2491\nAccuracy: 0.7216905901116427\nEpoch: 8\tBatch: 100\tAvg-Loss: 0.2645\nAccuracy: 0.7185007974481659\nEpoch: 8\tBatch: 150\tAvg-Loss: 0.2925\nAccuracy: 0.7496012759170654\nEpoch: 9\tBatch: 50\tAvg-Loss: 0.2178\nAccuracy: 0.7376395534290271\nEpoch: 9\tBatch: 100\tAvg-Loss: 0.2194\nAccuracy: 0.6977671451355661\nEpoch: 9\tBatch: 150\tAvg-Loss: 0.2564\nAccuracy: 0.7256778309409888\nEpoch: 10\tBatch: 50\tAvg-Loss: 0.2413\nAccuracy: 0.733652312599681\nEpoch: 10\tBatch: 100\tAvg-Loss: 0.1702\nAccuracy: 0.7256778309409888\nEpoch: 10\tBatch: 150\tAvg-Loss: 0.2260\nAccuracy: 0.726475279106858\nEpoch: 11\tBatch: 50\tAvg-Loss: 0.1702\nAccuracy: 0.7368421052631579\nEpoch: 11\tBatch: 100\tAvg-Loss: 0.1421\nAccuracy: 0.7137161084529505\nEpoch: 11\tBatch: 150\tAvg-Loss: 0.1670\nAccuracy: 0.7232854864433812\nEpoch: 12\tBatch: 50\tAvg-Loss: 0.1473\nAccuracy: 0.7185007974481659\nEpoch: 12\tBatch: 100\tAvg-Loss: 0.1777\nAccuracy: 0.7177033492822966\nEpoch: 12\tBatch: 150\tAvg-Loss: 0.1733\nAccuracy: 0.7121212121212122\nEpoch: 13\tBatch: 50\tAvg-Loss: 0.1192\nAccuracy: 0.7177033492822966\nEpoch: 13\tBatch: 100\tAvg-Loss: 0.1229\nAccuracy: 0.726475279106858\nEpoch: 13\tBatch: 150\tAvg-Loss: 0.1626\nAccuracy: 0.7113237639553429\nEpoch: 14\tBatch: 50\tAvg-Loss: 0.1141\nAccuracy: 0.7272727272727273\nEpoch: 14\tBatch: 100\tAvg-Loss: 0.1019\nAccuracy: 0.7216905901116427\nEpoch: 14\tBatch: 150\tAvg-Loss: 0.1083\nAccuracy: 0.7208931419457735\nEpoch: 15\tBatch: 50\tAvg-Loss: 0.0657\nAccuracy: 0.722488038277512\nEpoch: 15\tBatch: 100\tAvg-Loss: 0.0816\nAccuracy: 0.7280701754385965\nEpoch: 15\tBatch: 150\tAvg-Loss: 0.0917\nAccuracy: 0.7256778309409888\nEpoch: 16\tBatch: 50\tAvg-Loss: 0.0727\nAccuracy: 0.7280701754385965\nEpoch: 16\tBatch: 100\tAvg-Loss: 0.1027\nAccuracy: 0.7208931419457735\nEpoch: 16\tBatch: 150\tAvg-Loss: 0.0949\nAccuracy: 0.7320574162679426\nEpoch: 17\tBatch: 50\tAvg-Loss: 0.0670\nAccuracy: 0.7272727272727273\nEpoch: 17\tBatch: 100\tAvg-Loss: 0.0803\nAccuracy: 0.726475279106858\nEpoch: 17\tBatch: 150\tAvg-Loss: 0.0872\nAccuracy: 
0.7177033492822966\nEpoch: 18\tBatch: 50\tAvg-Loss: 0.0716\nAccuracy: 0.7177033492822966\nEpoch: 18\tBatch: 100\tAvg-Loss: 0.0700\nAccuracy: 0.7248803827751196\nEpoch: 18\tBatch: 150\tAvg-Loss: 0.0795\nAccuracy: 0.7296650717703349\nEpoch: 19\tBatch: 50\tAvg-Loss: 0.0497\nAccuracy: 0.7312599681020734\nEpoch: 19\tBatch: 100\tAvg-Loss: 0.0496\nAccuracy: 0.7352472089314195\nEpoch: 19\tBatch: 150\tAvg-Loss: 0.0411\nAccuracy: 0.715311004784689\nEpoch: 20\tBatch: 50\tAvg-Loss: 0.0437\nAccuracy: 0.7320574162679426\nEpoch: 20\tBatch: 100\tAvg-Loss: 0.0590\nAccuracy: 0.726475279106858\nEpoch: 20\tBatch: 150\tAvg-Loss: 0.0633\nAccuracy: 0.7081339712918661\nEpoch: 21\tBatch: 50\tAvg-Loss: 0.0463\nAccuracy: 0.74481658692185\nEpoch: 21\tBatch: 100\tAvg-Loss: 0.0580\nAccuracy: 0.7344497607655502\nEpoch: 21\tBatch: 150\tAvg-Loss: 0.0654\nAccuracy: 0.7248803827751196\nEpoch: 22\tBatch: 50\tAvg-Loss: 0.0664\nAccuracy: 0.726475279106858\nEpoch: 22\tBatch: 100\tAvg-Loss: 0.0452\nAccuracy: 0.7288676236044657\nEpoch: 22\tBatch: 150\tAvg-Loss: 0.0465\nAccuracy: 0.7368421052631579\nEpoch: 23\tBatch: 50\tAvg-Loss: 0.0816\nAccuracy: 0.7256778309409888\nEpoch: 23\tBatch: 100\tAvg-Loss: 0.0805\nAccuracy: 0.7440191387559809\nEpoch: 23\tBatch: 150\tAvg-Loss: 0.0742\nAccuracy: 0.7432216905901117\nEpoch: 24\tBatch: 50\tAvg-Loss: 0.0274\nAccuracy: 0.7280701754385965\nEpoch: 24\tBatch: 100\tAvg-Loss: 0.0522\nAccuracy: 0.7328548644338118\nEpoch: 24\tBatch: 150\tAvg-Loss: 0.0452\nAccuracy: 0.7288676236044657\nEpoch: 25\tBatch: 50\tAvg-Loss: 0.0286\nAccuracy: 0.7328548644338118\nEpoch: 25\tBatch: 100\tAvg-Loss: 0.0334\nAccuracy: 0.7272727272727273\nEpoch: 25\tBatch: 150\tAvg-Loss: 0.0620\nAccuracy: 0.7328548644338118\n"
],
[
"import matplotlib.pyplot as plt\nplt.plot(train_loss)\nplt.plot(val_loss)\nplt.title(\"Train and Validation Loss\")\n",
"_____no_output_____"
],
[
"plt.plot(val_acc)\nplt.title(\"Validation Accuracy\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ec9f416b0df8825226b2dccec9cc071205b00ffd | 94,791 | ipynb | Jupyter Notebook | models/CharCNN-test.ipynb | mcao610/NEU-Graduate-Project | 2d07ed288ddb3bf033cf27e4054d7103c02cc2f5 | [
"Apache-2.0"
] | null | null | null | models/CharCNN-test.ipynb | mcao610/NEU-Graduate-Project | 2d07ed288ddb3bf033cf27e4054d7103c02cc2f5 | [
"Apache-2.0"
] | null | null | null | models/CharCNN-test.ipynb | mcao610/NEU-Graduate-Project | 2d07ed288ddb3bf033cf27e4054d7103c02cc2f5 | [
"Apache-2.0"
] | null | null | null | 68.194964 | 218 | 0.555675 | [
[
[
"import torch\nimport torch.nn as nn",
"_____no_output_____"
],
[
"# char_ids: [batch_size, max_seq_len, max_word_len]\nchar_ids = torch.randint(10, (1, 2, 3), dtype=torch.long)\nbatch_size, max_seq_len, max_word_len = char_ids.shape",
"_____no_output_____"
],
[
"print(char_ids)",
"tensor([[[5, 5, 2],\n [7, 3, 1]]])\n"
],
[
"char_dim = 4\nchar_embed = nn.Embedding(10, char_dim)\nchar_embedded = char_embed(char_ids)\nprint(char_embedded.shape) # [batch_size, max_seq_len, max_word_len, char_embedding_dim]\nprint(char_embedded)",
"torch.Size([1, 2, 3, 4])\ntensor([[[[-0.4729, -1.7419, -0.8149, 0.8974],\n [-0.4729, -1.7419, -0.8149, 0.8974],\n [ 1.2032, -2.4092, -1.0361, -1.4598]],\n\n [[-0.3510, -1.3240, -0.1029, 0.1246],\n [ 0.3531, 1.4189, -1.1716, -0.1424],\n [-0.0359, -1.3063, 0.4996, -0.8041]]]], grad_fn=<EmbeddingBackward>)\n"
],
[
"char_embedded = char_embedded.view(-1, max_word_len, char_dim)\nchar_embedded = char_embedded.permute(0, 2, 1)\nprint(char_embedded.shape) # [batch_size*max_seq_len, char_embedding_dim, max_word_len]\nprint(char_embedded)",
"torch.Size([2, 4, 3])\ntensor([[[-0.4729, -0.4729, 1.2032],\n [-1.7419, -1.7419, -2.4092],\n [-0.8149, -0.8149, -1.0361],\n [ 0.8974, 0.8974, -1.4598]],\n\n [[-0.3510, 0.3531, -0.0359],\n [-1.3240, 1.4189, -1.3063],\n [-0.1029, -1.1716, 0.4996],\n [ 0.1246, -0.1424, -0.8041]]], grad_fn=<PermuteBackward>)\n"
],
[
"# filter\nconv1 = nn.Conv1d(char_dim, 3, kernel_size=3)",
"_____no_output_____"
],
[
"conv_out = conv1(char_embedded)\nprint(conv_out.shape) # [batch_size*max_seq_len, out_channels, max_word_len + 1 - kernel_size]\nprint(conv_out)",
"torch.Size([2, 3, 1])\ntensor([[[-1.0170],\n [-0.5876],\n [-0.6721]],\n\n [[ 0.2706],\n [-0.0609],\n [ 0.8565]]], grad_fn=<SqueezeBackward1>)\n"
],
[
"import torch.nn.functional as F\noutput = F.max_pool2d(conv_out, kernel_size=(1, conv_out.shape[-1])).squeeze(-1)",
"_____no_output_____"
],
[
"print(output.shape) # [batch_size*max_seq_len, out_channels]\nprint(output)",
"torch.Size([2, 3])\ntensor([[-1.0170, -0.5876, -0.6721],\n [ 0.2706, -0.0609, 0.8565]], grad_fn=<SqueezeBackward1>)\n"
],
[
"output.requires_grad",
"_____no_output_____"
],
[
"class HighwayMLP(nn.Module):\n \"\"\"Implement highway network.\"\"\"\n \n def __init__(self,\n input_size,\n activation=nn.functional.relu,\n gate_activation=torch.sigmoid):\n \n super(HighwayMLP, self).__init__()\n \n self.act, self.gate_act = activation, gate_activation\n \n self.mlp = nn.Linear(input_size, input_size)\n self.transform = nn.Linear(input_size, input_size)\n\n def forward(self, x):\n \"\"\"\n Args:\n x: [*, input_size]\n \"\"\"\n mlp_out = self.act(self.mlp(x))\n gate_out = self.gate_act(self.transform(x))\n \n return gate_out * mlp_out + (1 - gate_out) * x",
"_____no_output_____"
],
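[
"# A minimal sanity check, assuming the HighwayMLP class above has been run:\n# the gate mixes the transformed input with the raw input, so the output\n# shape should match the input shape.\nhighway = HighwayMLP(input_size=8)\ndummy = torch.randn(5, 8)\nhw_out = highway(dummy)\nprint(hw_out.shape)  # expected: torch.Size([5, 8])",
"_____no_output_____"
],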
[
"class CharCNN(nn.Module):\n \"\"\"Character-level embedding with convolutional neural networks.\n \"\"\"\n def __init__(self, \n char_size,\n char_dim,\n filter_num=10,\n max_filter_size=7,\n output_size=50,\n padding_idx=0,\n dropout=0.2):\n \"\"\"Constructs CharCNN model.\n \n Args:\n char_size: total characters in the vocabulary.\n char_dim: character embedding size.\n filter_num: number of filters (each size).\n dropout: dropout rate.\n \"\"\"\n super(CharCNN, self).__init__()\n \n self.char_dim = char_dim\n self.filter_num = filter_num\n self.max_filter_size = max_filter_size\n \n self.embed = nn.Embedding(char_size, \n char_dim, \n padding_idx=padding_idx)\n \n self.filters = nn.ModuleList()\n for size in range(1, max_filter_size + 1):\n self.filters.append(nn.Conv1d(char_dim, \n filter_num, \n kernel_size=size))\n \n self.highway = HighwayMLP(max_filter_size * filter_num)\n self.proj = nn.Linear(max_filter_size * filter_num, output_size)\n\n \n def forward(self, char_ids, word_lens):\n \"\"\"\n Args:\n char_ids: [batch_size, max_seq_len, max_word_len]\n word_lens: [batch_size, max_seq_len]\n\n Return:\n c_emb: [batch_size, max_seq_len, char_hidden_dim]\n \"\"\"\n batch_size, max_seq_len, max_word_len = char_ids.shape\n \n # embedding\n char_embedded = self.embed(char_ids)\n char_embedded = char_embedded.view(-1, max_word_len, self.char_dim)\n char_embedded = char_embedded.permute(0, 2, 1) # [batch_size*max_seq_len, char_dim, max_word_len]\n \n # convolution layer & max pooling\n outputs = []\n for i, conv in enumerate(self.filters):\n if i + 1 <= max_word_len:\n conv_out = conv(char_embedded)\n out = F.max_pool2d(conv_out, kernel_size=(1, conv_out.shape[-1]))\n out = out.squeeze(-1) # [batch_size*max_seq_len, filter_num]\n outputs.append(out)\n \n outputs = torch.cat(outputs, -1)\n outputs = self._pad_outputs(outputs)\n assert outputs.shape == (batch_size * max_seq_len, self.max_filter_size * self.filter_num)\n \n # highway network\n highway_out = self.highway(outputs)\n \n # proj\n final_out = torch.relu(self.proj(highway_out))\n final_out = final_out.view(batch_size, max_seq_len, -1)\n \n return final_out\n \n \n def _pad_outputs(self, x):\n \"\"\"In case the max word length is less than the max filter width, \n use this function to pad the output.\n \n Args:\n x: tensor, [batch_size * max_seq_len, N * filter_num] (N <= max_filter_size)\n\n return:\n out: tensor, [batch_size * max_seq_len, max_filter_size * filter_num]\n \"\"\"\n bm, input_size = x.shape\n dim_to_pad = self.filter_num * self.max_filter_size - input_size\n assert dim_to_pad >= 0\n \n if dim_to_pad == 0:\n return x\n else: \n padder = torch.zeros((bm, dim_to_pad), dtype=x.dtype, device=x.device)\n out = torch.cat((x, padder), -1)\n return out",
"_____no_output_____"
],
[
"charCNN = CharCNN(char_size=100,\n char_dim=20,\n filter_num=10,\n max_filter_size=7,\n output_size=50)",
"_____no_output_____"
],
[
"char_ids = torch.randint(100, [32, 20, 10], dtype=torch.int64)\nout = charCNN(char_ids, None)",
"_____no_output_____"
],
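[
"# A quick gradient-flow check, assuming `charCNN` and `out` from the cell above\n# are still in memory: backpropagating a dummy scalar loss should populate\n# gradients on the embedding (and the other CharCNN parameters).\ndummy_loss = out.sum()\ndummy_loss.backward()\nprint(charCNN.embed.weight.grad is not None)  # expected: True\ncharCNN.zero_grad()  # clear the dummy gradients again",
"_____no_output_____"
],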
[
"out.shape",
"_____no_output_____"
]
],
[
[
"### Visualizaton",
"_____no_output_____"
]
],
[
[
"import os\nfrom torchviz import make_dot\nos.environ[\"PATH\"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin/'",
"_____no_output_____"
],
[
"make_dot(charCNN(char_ids, None), params=dict(charCNN.named_parameters()))",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec9f424fedf6e722ccfcdad0d695efb5eaddaf81 | 3,351 | ipynb | Jupyter Notebook | Day1.ipynb | dev-karmasan/SideQuest | 4b295edba5c99a6746918de120feb859f92ac3ad | [
"MIT"
] | null | null | null | Day1.ipynb | dev-karmasan/SideQuest | 4b295edba5c99a6746918de120feb859f92ac3ad | [
"MIT"
] | null | null | null | Day1.ipynb | dev-karmasan/SideQuest | 4b295edba5c99a6746918de120feb859f92ac3ad | [
"MIT"
] | 1 | 2020-09-23T08:54:27.000Z | 2020-09-23T08:54:27.000Z | 37.651685 | 225 | 0.562519 | [
[
[
"<a href=\"https://colab.research.google.com/github/dev-karmasan/SideQuest/blob/master/Day1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Install libraries",
"_____no_output_____"
]
],
[
[
"!pip install numpy pandas matplotlib scipy numba",
"Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (1.18.5)\nRequirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (1.0.5)\nRequirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (3.2.2)\nRequirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (1.4.1)\nRequirement already satisfied: numba in /usr/local/lib/python3.6/dist-packages (0.48.0)\nRequirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas) (2.8.1)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas) (2018.9)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (0.10.0)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (1.2.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (2.4.7)\nRequirement already satisfied: llvmlite<0.32.0,>=0.31.0dev0 in /usr/local/lib/python3.6/dist-packages (from numba) (0.31.0)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from numba) (49.1.0)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.6.1->pandas) (1.12.0)\n"
],
[
"import numpy as np\nimport pandas as pd",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
ec9f4910cc5a77f2c511a87336826b357c798291 | 39,157 | ipynb | Jupyter Notebook | site/ja/r1/guide/keras.ipynb | akalakheti/docs | ad602b40f8f968520d21ae81e304dde80861f745 | [
"Apache-2.0"
] | 3 | 2020-01-09T02:58:22.000Z | 2020-09-11T09:02:01.000Z | site/ja/r1/guide/keras.ipynb | akalakheti/docs | ad602b40f8f968520d21ae81e304dde80861f745 | [
"Apache-2.0"
] | 1 | 2019-10-22T11:24:17.000Z | 2019-10-22T11:24:17.000Z | site/ja/r1/guide/keras.ipynb | akalakheti/docs | ad602b40f8f968520d21ae81e304dde80861f745 | [
"Apache-2.0"
] | 2 | 2020-01-15T21:50:31.000Z | 2020-01-15T21:56:30.000Z | 29.353073 | 429 | 0.50318 | [
[
[
"##### Copyright 2018 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Keras",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ja/r1/guide/keras.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/ja/r1/guide/keras.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"Note: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳は**ベストエフォート**であるため、この翻訳が正確であることや[英語の公式ドキュメント](https://www.tensorflow.org/?hl=en)の 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリ[tensorflow/docs](https://github.com/tensorflow/docs)にプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [[email protected] メーリングリスト](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ja)にご連絡ください。",
"_____no_output_____"
],
[
"Kerasは、深層学習モデルを構築・学習するための高水準APIです。 \n迅速なプロトタイピングから先端研究、実運用にも使用されており、3つの特徴があります:\n\n- <b>ユーザーフレンドリー</b><br>\n 一般的なユースケースに最適化したKerasのAPIは、シンプルで統一性があります。誤った使い方をした場合のエラー出力も明快で、どう対応すべきか一目瞭然です。\n- <b>モジュール性</b><br>\n Kerasのモデルは、設定可能なモジュールをつなぎ合わせて作られます。モジュールのつなぎ方には、ほとんど制約がありません。\n- <b>拡張性</b><br>\n 簡単にモジュールをカスタマイズできるため、研究の新しいアイデアを試すのに最適です。新しい層、損失関数を自作し、最高水準のモデルを開発しましょう。",
"_____no_output_____"
],
[
"## tf.keras のインポート\n\n`tf.keras` は、TensorFlow版 [Keras API 仕様](https://keras.io) です。 \nモデルを構築・学習するための高水準APIであり、TensorFlow特有の機能である\n [Eagerモード](#eager_execution)や`tf.data` パイプライン、 [Estimators](./estimators.md) にも正式に対応しています。\n`tf.keras` は、TensorFlowの柔軟性やパフォーマンスを損ねることなく使いやすさを向上しています。\n\nTensorFlowプログラムの準備として、先ずは `tf.keras` をインポートしましょう:",
"_____no_output_____"
]
],
[
[
"!pip install tensorflow==\"1.*\"",
"_____no_output_____"
],
[
"!pip install pyyaml # YAML形式でモデルを保存する際に必要です。",
"_____no_output_____"
],
[
"from __future__ import absolute_import, division, print_function, unicode_literals\n\nimport tensorflow as tf\nfrom tensorflow.keras import layers\n\nprint(tf.version.VERSION)\nprint(tf.keras.__version__)",
"_____no_output_____"
]
],
[
[
"`tf.keras` ではKerasと互換性のあるコードを実行できますが、注意点もあります:\n\n* 最新リリースのTensorFlowに同梱されている `tf.keras` のバージョンと、pipインストールした最新の `keras` のバージョンが同一とは限りません。バージョンは `tf.keras.__version__` の出力をご確認ください。\n* [モデルの重みを保存](#weights_only)する場合、\n`tf.keras` のデフォルトの保存形式は [チェックポイント形式](./checkpoints.md)です。\nHDF5形式にて保存する場合は、 `save_format='h5'` オプションを指定してください。",
"_____no_output_____"
],
[
"## 単純なモデルの構築\n\n### シーケンシャル モデル\n\nKerasでは、<b>層</b>を組み合わせて<b>モデル</b>を構築します。\nモデルは通常、複数の層から成るグラフ構造をしています。\n最も一般的なモデルは、単純に層を積み重ねる類の `tf.keras.Sequential` モデルです。\n\n単純な全結合ネットワーク(いわゆる マルチ レイヤー パーセプトロン)を構築してみましょう:",
"_____no_output_____"
]
],
[
[
"model = tf.keras.Sequential()\n# ユニット数が64の全結合層をモデルに追加します:\nmodel.add(layers.Dense(64, activation='relu'))\n# 全結合層をもう一つ追加します:\nmodel.add(layers.Dense(64, activation='relu'))\n# 出力ユニット数が10のソフトマックス層を追加します:\nmodel.add(layers.Dense(10, activation='softmax'))",
"_____no_output_____"
]
],
[
[
"### 層の設定\n\n`tf.keras.layers` はさまざまな層を提供していますが、共通のコンストラクタ引数があります:\n\n* `activation`: 層の活性化関数を設定します。組み込み関数、もしくは呼び出し可能オブジェクトの名前で指定します。デフォルト値は、活性化関数なし。\n* `kernel_initializer` ・ `bias_initializer`: 層の重み(カーネルとバイアス)の初期化方式。名前、もしくは呼び出し可能オブジェクトで指定します。デフォルト値は、 `\"Glorot uniform\"` 。\n* `kernel_regularizer` ・ `bias_regularizer`:層の重み(カーネルとバイアス)に適用する、L1やL2等の正則化方式。デフォルト値は、正則化なし。\n\nコンストラクタ引数を使って `tf.keras.layers.Dense` 層をインスタンス化する例を以下に示します:",
"_____no_output_____"
]
],
[
[
"# シグモイド層を1層作る場合:\nlayers.Dense(64, activation='sigmoid')\n# 別の記法:\nlayers.Dense(64, activation=tf.sigmoid)\n\n# カーネル行列に係数0,01のL1正則化を課した全結合層:\nlayers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))\n\n# バイアスベクトルに係数0,01のL2正則化を課した全結合層:\nlayers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))\n\n# カーネルをランダム直交行列で初期化した全結合層:\nlayers.Dense(64, kernel_initializer='orthogonal')\n\n# バイアスベクトルを2.0で初期化した全結合層:\nlayers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))",
"_____no_output_____"
]
],
[
[
"## 学習と評価\n\n### 学習の準備\n\nモデルを構築したあとは、`compile` メソッドを呼んで学習方法を構成します:",
"_____no_output_____"
]
],
[
[
"model = tf.keras.Sequential([\n# ユニット数64の全結合層をモデルに追加する:\nlayers.Dense(64, activation='relu', input_shape=(32,)),\n# もう1層追加する:\nlayers.Dense(64, activation='relu'),\n# 出力ユニット数10のソフトマックス層を追加する:\nlayers.Dense(10, activation='softmax')])\n\nmodel.compile(optimizer=tf.train.AdamOptimizer(0.001),\n loss='categorical_crossentropy',\n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"`tf.keras.Model.compile` には3つの重要な引数があります:\n\n* `optimizer`: このオブジェクトが訓練方式を規定します。 `tf.train` モジュールから\n `tf.train.AdamOptimizer`や `tf.train.RMSPropOptimizer`、\n `tf.train.GradientDescentOptimizer`等のオプティマイザ インスタンスを指定します。\n* `loss`: 最適化の過程で最小化する関数を指定します。平均二乗誤差(`mse`)や`categorical_crossentropy`、\n `binary_crossentropy`等が好んで使われます。損失関数は名前、もしくは `tf.keras.losses` モジュールから呼び出し可能オブジェクトとして指定できます。\n* `metrics`: 学習の監視に使用します。 名前、もしくは`tf.keras.metrics` モジュールから呼び出し可能オブジェクトとして指定できます。\n\n学習用モデルの構成例を2つ、以下に示します:",
"_____no_output_____"
]
],
[
[
"# 平均二乗誤差 回帰モデルを構成する。\nmodel.compile(optimizer=tf.train.AdamOptimizer(0.01),\n loss='mse', # 平均二乗誤差\n metrics=['mae']) # 平均絶対誤差\n\n# 多クラス分類モデルを構成する。\nmodel.compile(optimizer=tf.train.RMSPropOptimizer(0.01),\n loss=tf.keras.losses.categorical_crossentropy,\n metrics=[tf.keras.metrics.categorical_accuracy])",
"_____no_output_____"
]
],
[
[
"### NumPy データの入力\n\n小規模なデータセットであれば、モデルを学習・評価する際にインメモリの [NumPy](https://www.numpy.org/)配列を使いましょう。\nモデルは `fit` メソッドを使って学習データに適合させます。",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\ndef random_one_hot_labels(shape):\n n, n_class = shape\n classes = np.random.randint(0, n_class, n)\n labels = np.zeros((n, n_class))\n labels[np.arange(n), classes] = 1\n return labels\n\ndata = np.random.random((1000, 32))\nlabels = random_one_hot_labels((1000, 10))\n\nmodel.fit(data, labels, epochs=10, batch_size=32)",
"_____no_output_____"
]
],
[
[
"`tf.keras.Model.fit` は3つの重要な引数があります:\n\n* `epochs`: **エポック** は学習の構成単位で、(バッチに分割した)全入力データを一巡したものを1エポックと換算します。\n* `batch_size`: NumPyデータを渡されたモデルは、データをバッチに分割し、それを順繰りに舐めて学習を行います。一つのバッチに配分するサンプル数を、バッチサイズとして整数で指定します。全サンプル数がバッチサイズで割り切れない場合、最後のバッチだけ小さくなる可能性があることに注意しましょう。\n* `validation_data`: モデルの試作中に評価データを使って簡単にパフォーマンスを監視したい場合は、この引数に入力とラベルの対を渡すことで、各エポックの最後に推論モードで評価データの損失と評価指標を表示することができます。\n\n`validation_data` の使用例:",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\ndata = np.random.random((1000, 32))\nlabels = random_one_hot_labels((1000, 10))\n\nval_data = np.random.random((100, 32))\nval_labels = random_one_hot_labels((100, 10))\n\nmodel.fit(data, labels, epochs=10, batch_size=32,\n validation_data=(val_data, val_labels))",
"_____no_output_____"
]
],
[
[
"### tf.data データセットの入力\n\n大規模なデータセット、もしくは複数デバイスを用いた学習を行う際は [Datasets API](./datasets.md) を使いましょう。 `fit`メソッドに`tf.data.Dataset` インスタンスを渡します:",
"_____no_output_____"
]
],
[
[
"# データセットのインスタンス化の例:\ndataset = tf.data.Dataset.from_tensor_slices((data, labels))\ndataset = dataset.batch(32)\ndataset = dataset.repeat()\n\n# `fit` にデータセットを渡す際は、`steps_per_epoch` の指定をお忘れなく:\nmodel.fit(dataset, epochs=10, steps_per_epoch=30)",
"_____no_output_____"
]
],
[
[
"`fit` メソッドの引数 `steps_per_epoch` には、1エポックあたりの学習ステップ数を指定します。\n`Dataset` がバッチを生成するため `batch_size`の指定は不要です。\n\n`Dataset` は評価データにも使えます:",
"_____no_output_____"
]
],
[
[
"dataset = tf.data.Dataset.from_tensor_slices((data, labels))\ndataset = dataset.batch(32).repeat()\n\nval_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))\nval_dataset = val_dataset.batch(32).repeat()\n\nmodel.fit(dataset, epochs=10, steps_per_epoch=30,\n validation_data=val_dataset,\n validation_steps=3)",
"_____no_output_____"
]
],
[
[
"### 評価と推論\n\n`tf.keras.Model.evaluate` と `tf.keras.Model.predict` メソッドは、NumPyデータと`tf.data.Dataset`に使えます。\n\n推論モードでデータの損失と評価指標を**評価**する例を示します: ",
"_____no_output_____"
]
],
[
[
"data = np.random.random((1000, 32))\nlabels = random_one_hot_labels((1000, 10))\n\nmodel.evaluate(data, labels, batch_size=32)\n\nmodel.evaluate(dataset, steps=30)",
"_____no_output_____"
]
],
[
[
"**推論** 結果を最終層のNumPy配列として出力する例を示します:",
"_____no_output_____"
]
],
[
[
"result = model.predict(data, batch_size=32)\nprint(result.shape)",
"_____no_output_____"
]
],
[
[
"## 高度なモデルの構築\n\n### Functional API\n\n`tf.keras.Sequential` モデルは層を積み重ねる単純なつくりであり、あらゆるモデルに対応しているわけではありません。\n以下に挙げる複雑な構成のモデルを構築するには\n[Keras functional API](https://keras.io/getting-started/functional-api-guide/)\nを使いましょう:\n\n* 入力ヘッドが複数あるモデル\n* 出力ヘッドが複数あるモデル\n* 共有層(おなじ層が複数回呼び出される)を含むモデル\n* (残差結合のように)データの流れが分岐するモデル \n\nFunctional API を用いたモデル構築の流れ:\n\n1. 層のインスタンスは呼び出し可能で、テンソルを返します。\n2. 入力テンソルと出力テンソルを使って`tf.keras.Model`インスタンスを定義します。\n3. モデルは`Sequential`モデルと同様の方法で学習します。\n\nFunctional API を使って全結合ネットワークを構築する例を示します:",
"_____no_output_____"
]
],
[
[
"inputs = tf.keras.Input(shape=(32,)) # プレイスホルダのテンソルを返します。\n\n# 層のインスタンスは呼び出し可能で、テンソルを返します。\nx = layers.Dense(64, activation='relu')(inputs)\nx = layers.Dense(64, activation='relu')(x)\npredictions = layers.Dense(10, activation='softmax')(x)",
"_____no_output_____"
]
],
[
[
"inputsとoutputsを引数にモデルをインスタンス化します。",
"_____no_output_____"
]
],
[
[
"model = tf.keras.Model(inputs=inputs, outputs=predictions)\n\n# コンパイル時に学習方法を指定します。\nmodel.compile(optimizer=tf.train.RMSPropOptimizer(0.001),\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n# 5エポック学習します。\nmodel.fit(data, labels, batch_size=32, epochs=5)",
"_____no_output_____"
]
],
[
[
"### モデルの派生\n\n`tf.keras.Model` を継承し順伝播を定義することでカスタムモデルを構築できます。\n`__init__` メソッドにクラス インスタンスの属性として層をつくります。\n`call` メソッドに順伝播を定義します。 \n\n順伝播を命令型で記載できるため、モデルの派生は\n[Eagerモード](./eager.md) でより威力を発揮します。\n\nキーポイント:目的にあったAPIを選択しましょう。派生モデルは柔軟性を与えてくれますが、その代償にモデルはより複雑になりエラーを起こしやすくなります。目的がFunctional APIで賄えるのであれば、そちらを使いましょう。\n\n`tf.keras.Model`を継承して順伝播をカスタマイズした例を以下に示します:",
"_____no_output_____"
]
],
[
[
"class MyModel(tf.keras.Model):\n\n def __init__(self, num_classes=10):\n super(MyModel, self).__init__(name='my_model')\n self.num_classes = num_classes\n # 層をここに定義します。\n self.dense_1 = layers.Dense(32, activation='relu')\n self.dense_2 = layers.Dense(num_classes, activation='sigmoid')\n\n def call(self, inputs):\n # (`__init__`)にてあらかじめ定義した層を使って\n # 順伝播をここに定義します。\n x = self.dense_1(inputs)\n return self.dense_2(x)\n\n def compute_output_shape(self, input_shape):\n # 派生モデルを使用する場合、\n # このメソッドをオーバーライドすることになります。\n # 派生モデルを使用しない場合、このメソッドは省略可能です。\n shape = tf.TensorShape(input_shape).as_list()\n shape[-1] = self.num_classes\n return tf.TensorShape(shape)",
"_____no_output_____"
]
],
[
[
"今定義した派生モデルをインスンス化します。",
"_____no_output_____"
]
],
[
[
"model = MyModel(num_classes=10)\n\n# コンパイル時に学習方法を指定します。\nmodel.compile(optimizer=tf.train.RMSPropOptimizer(0.001),\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n# 5エポック学習します。\nmodel.fit(data, labels, batch_size=32, epochs=5)",
"_____no_output_____"
]
],
[
[
"### 層のカスタマイズ\n\n`tf.keras.layers.Layer`を継承して層をカスタマイズするには、以下のメソッドを実装します: \n\n* `build`: 層の重みを定義します。`add_weight`メソッドで重みを追加します。\n* `call`: 順伝播を定義します。\n* `compute_output_shape`: 入力の形状をもとに出力の形状を算出する方法を指定します。\n* 必須ではありませんが、`get_config`メソッド と `from_config` クラスメソッドを実装することで層をシリアライズすることができます。\n\n入力のカーネル行列を `matmul` (行列乗算)するカスタム層の実装例:",
"_____no_output_____"
]
],
[
[
"class MyLayer(layers.Layer):\n\n def __init__(self, output_dim, **kwargs):\n self.output_dim = output_dim\n super(MyLayer, self).__init__(**kwargs)\n\n def build(self, input_shape):\n shape = tf.TensorShape((input_shape[1], self.output_dim))\n # 学習可能な重みを指定します。\n self.kernel = self.add_weight(name='kernel',\n shape=shape,\n initializer='uniform',\n trainable=True)\n # 最後に`build` メソッドを呼ぶのをお忘れなく。\n super(MyLayer, self).build(input_shape)\n\n def call(self, inputs):\n return tf.matmul(inputs, self.kernel)\n\n def compute_output_shape(self, input_shape):\n shape = tf.TensorShape(input_shape).as_list()\n shape[-1] = self.output_dim\n return tf.TensorShape(shape)\n\n def get_config(self):\n base_config = super(MyLayer, self).get_config()\n base_config['output_dim'] = self.output_dim\n return base_config\n\n @classmethod\n def from_config(cls, config):\n return cls(**config)",
"_____no_output_____"
]
],
[
[
"カスタマイズした層を使ってモデルを構築します:",
"_____no_output_____"
]
],
[
[
"model = tf.keras.Sequential([\n MyLayer(10),\n layers.Activation('softmax')])\n\n# コンパイル時に学習方法を指定します。\nmodel.compile(optimizer=tf.train.RMSPropOptimizer(0.001),\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n# 5エポック学習します。\nmodel.fit(data, labels, batch_size=32, epochs=5)",
"_____no_output_____"
]
],
[
[
"## コールバック\n\nコールバックは、学習中のモデルの挙動をカスタマイズするためにモデルに渡されるオブジェクトです。\nコールバック関数は自作する、もしくは以下に示す`tf.keras.callbacks`が提供する組み込み関数を利用できます:\n\n* `tf.keras.callbacks.ModelCheckpoint`:モデルのチェックポイントを一定間隔で保存します。\n* `tf.keras.callbacks.LearningRateScheduler`:学習率を動的に変更します。\n* `tf.keras.callbacks.EarlyStopping`:評価パフォーマンスが向上しなくなったら学習を中断させます。\n* `tf.keras.callbacks.TensorBoard`: モデルの挙動を\n [TensorBoard](./summaries_and_tensorboard.md)で監視します。\n\n`tf.keras.callbacks.Callback`を使用するには、モデルの `fit` メソッドにコールバック関数を渡します:",
"_____no_output_____"
]
],
[
[
"callbacks = [\n # `val_loss` が2エポック経っても改善しなければ学習を中断させます。\n tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),\n # TensorBoard用ログを`./logs` ディレクトリに書き込みます。\n tf.keras.callbacks.TensorBoard(log_dir='./logs')\n]\nmodel.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,\n validation_data=(val_data, val_labels))",
"_____no_output_____"
]
],
[
[
"<a id='weights_only'></a>\n## 保存と復元",
"_____no_output_____"
],
[
"### 重みのみ\n\n`tf.keras.Model.save_weights`を使ってモデルの重みの保存やロードを行います。",
"_____no_output_____"
]
],
[
[
"model = tf.keras.Sequential([\nlayers.Dense(64, activation='relu', input_shape=(32,)),\nlayers.Dense(10, activation='softmax')])\n\nmodel.compile(optimizer=tf.train.AdamOptimizer(0.001),\n loss='categorical_crossentropy',\n metrics=['accuracy'])",
"_____no_output_____"
],
[
"# TensorFlow チェックポイント ファイルに重みを保存します。\nmodel.save_weights('./weights/my_model')\n\n# モデルの状態を復元します。\n# 復元対象のモデルと保存されていた重みのモデル構造が同一である必要があります。\nmodel.load_weights('./weights/my_model')",
"_____no_output_____"
]
],
[
[
"デフォルトでは、モデルの重みは\n[TensorFlow チェックポイント](./checkpoints.md) 形式で保存されます。\n重みはKerasのHDF5形式でも保存できます(マルチバックエンド実装のKerasではHDF5形式がデフォルト):",
"_____no_output_____"
]
],
[
[
"# 重みをHDF5形式で保存します。\nmodel.save_weights('my_model.h5', save_format='h5')\n\n# モデルの状態を復元します。\nmodel.load_weights('my_model.h5')",
"_____no_output_____"
]
],
[
[
"### 構成のみ\n\nモデルの構成も保存可能です。\nモデル構造を重み抜きでシリアライズします。\n元のモデルのコードがなくとも、保存された構成で再構築できます。\nKerasがサポートしているシリアライズ形式は、JSONとYAMLです。",
"_____no_output_____"
]
],
[
[
"# JSON形式にモデルをシリアライズします\njson_string = model.to_json()\njson_string",
"_____no_output_____"
],
[
"import json\nimport pprint\npprint.pprint(json.loads(json_string))",
"_____no_output_____"
]
],
[
[
"JSONから(新たに初期化して)モデルを再構築します:",
"_____no_output_____"
]
],
[
[
"fresh_model = tf.keras.models.model_from_json(json_string)",
"_____no_output_____"
]
],
[
[
"YAML形式でモデルを保存するには、\n**TensorFlowをインポートする前に** あらかじめ`pyyaml`をインストールしておく必要があります:",
"_____no_output_____"
]
],
[
[
"yaml_string = model.to_yaml()\nprint(yaml_string)",
"_____no_output_____"
]
],
[
[
"YAMLからモデルを再構築します:",
"_____no_output_____"
]
],
[
[
"fresh_model = tf.keras.models.model_from_yaml(yaml_string)",
"_____no_output_____"
]
],
[
[
"注意:`call`メソッド内ににPythonコードでモデル構造を定義するため、派生モデルはシリアライズできません。",
"_____no_output_____"
],
[
"\n### モデル全体\n\nモデルの重み、構成からオプティマイザ設定までモデル全体をファイルに保存できます。\nそうすることで、元のコードなしに、チェックポイントで保存したときと全く同じ状態から学習を再開できます。",
"_____no_output_____"
]
],
[
[
"# 層の浅いモデルを構築します。\nmodel = tf.keras.Sequential([\n layers.Dense(64, activation='relu', input_shape=(32,)),\n layers.Dense(10, activation='softmax')\n])\nmodel.compile(optimizer='rmsprop',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\nmodel.fit(data, labels, batch_size=32, epochs=5)\n\n\n# HDF5ファイルにモデル全体を保存します。\nmodel.save('my_model.h5')\n\n# 重みとオプティマイザを含む 全く同一のモデルを再構築します。\nmodel = tf.keras.models.load_model('my_model.h5')",
"_____no_output_____"
]
],
[
[
"## Eagerモード\n\n[Eagerモード](./eager.md) は、オペレーションを即時に評価する命令型のプログラミング環境です。\nKerasでは必要ありませんが、`tf.keras`でサポートされておりプログラムを検査しデバッグするのに便利です。\n\nすべての`tf.keras`モデル構築用APIは、Eagerモード互換性があります。\n`Sequential` や Functional APIも使用できますが、\nEagerモードは特に**派生モデル** の構築や\n**層のカスタマイズ**に有益です。\n(既存の層の組み合わせでモデルを作成するAPIの代わりに)\n順伝播をコードで実装する必要があります。\n\n詳しくは [Eagerモード ガイド](./eager.md#build_a_model) \n(カスタマイズした学習ループと`tf.GradientTape`を使ったKerasモデルの適用事例)をご参照ください。",
"_____no_output_____"
],
[
"## 分散\n\n### Estimators\n\n[Estimators](./estimators.md) は分散学習を行うためのAPIです。\n実運用に耐えるモデルを巨大なデータセットを用いて分散学習するといった産業利用を目的にすえています。\n\n`tf.keras.Model`で`tf.estimator` APIによる学習を行うには、\n`tf.keras.estimator.model_to_estimator`を使ってKerasモデルを `tf.estimator.Estimator`オブジェクトに変換する必要があります。\n\n[KerasモデルからEstimatorsを作成する](./estimators.md#creating_estimators_from_keras_models)をご参照ください。",
"_____no_output_____"
]
],
[
[
"model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),\n layers.Dense(10,activation='softmax')])\n\nmodel.compile(optimizer=tf.train.RMSPropOptimizer(0.001),\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\nestimator = tf.keras.estimator.model_to_estimator(model)",
"_____no_output_____"
]
],
[
[
"注意:[Estimator input functions](./premade_estimators.md#create_input_functions)をデバッグしてデータの検査を行うには[Eagerモード](./eager.md)で実行してください。",
"_____no_output_____"
],
[
"### マルチGPU\n\n`tf.keras`モデルは`tf.contrib.distribute.DistributionStrategy`を使用することでマルチGPU上で実行できます。\nこのAPIを使えば、既存コードをほとんど改変することなく分散学習へ移行できます。\n\n目下、分散方式として`tf.contrib.distribute.MirroredStrategy`のみサポートしています。\n`MirroredStrategy` は、シングルマシン上でAllReduce を使った同期学習によりin-grapnレプリケーションを行います。\nKerasで`DistributionStrategy`を使用する場合は、`tf.keras.estimator.model_to_estimator`を使って\n`tf.keras.Model` を`tf.estimator.Estimator`に変換し、Estimatorインスタンスを使って分散学習を行います。\n\n以下の例では、シングルマシンのマルチGPUに`tf.keras.Model`を分散します。\n\nまず、単純なモデルを定義します:",
"_____no_output_____"
]
],
[
[
"model = tf.keras.Sequential()\nmodel.add(layers.Dense(16, activation='relu', input_shape=(10,)))\nmodel.add(layers.Dense(1, activation='sigmoid'))\n\noptimizer = tf.train.GradientDescentOptimizer(0.2)\n\nmodel.compile(loss='binary_crossentropy', optimizer=optimizer)\nmodel.summary()",
"_____no_output_____"
]
],
[
[
"**入力パイプライン**を定義します。`input_fn` は、複数デバイスにデータを配置するのに使用する `tf.data.Dataset` を返します。\n各デバイスは、入力バッチの一部(デバイス間で均等に分割)を処理します。",
"_____no_output_____"
]
],
[
[
"def input_fn():\n x = np.random.random((1024, 10))\n y = np.random.randint(2, size=(1024, 1))\n x = tf.cast(x, tf.float32)\n dataset = tf.data.Dataset.from_tensor_slices((x, y))\n dataset = dataset.repeat(10)\n dataset = dataset.batch(32)\n return dataset",
"_____no_output_____"
]
],
[
[
"次に、 `tf.estimator.RunConfig`を作成し、 `train_distribute` 引数に`tf.contrib.distribute.MirroredStrategy` インスタンスを設定します。`MirroredStrategy`を作成する際、デバイスの一覧を指定する、もしくは引数で`num_gpus`(GPU数)を設定することができます。デフォルトでは、使用可能なすべてのGPUを使用する設定になっています:",
"_____no_output_____"
]
],
[
[
"strategy = tf.contrib.distribute.MirroredStrategy()\nconfig = tf.estimator.RunConfig(train_distribute=strategy)",
"_____no_output_____"
]
],
[
[
"Kerasモデルを `tf.estimator.Estimator` インスタンスへ変換します。",
"_____no_output_____"
]
],
[
[
"keras_estimator = tf.keras.estimator.model_to_estimator(\n keras_model=model,\n config=config,\n model_dir='/tmp/model_dir')",
"_____no_output_____"
]
],
[
[
"最後に、`input_fn` と `steps`引数を指定して `Estimator` インスタンスを学習します: ",
"_____no_output_____"
]
],
[
[
"keras_estimator.train(input_fn=input_fn, steps=10)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ec9f4e191adb00ec57e6de662a60b8c1ae2a5bee | 85,284 | ipynb | Jupyter Notebook | notebooks/.ipynb_checkpoints/sampler-checkpoint.ipynb | violafanfani/small-data-2021 | 83e8394057acac42700c63cd036e957e57c4e82c | [
"MIT"
] | null | null | null | notebooks/.ipynb_checkpoints/sampler-checkpoint.ipynb | violafanfani/small-data-2021 | 83e8394057acac42700c63cd036e957e57c4e82c | [
"MIT"
] | null | null | null | notebooks/.ipynb_checkpoints/sampler-checkpoint.ipynb | violafanfani/small-data-2021 | 83e8394057acac42700c63cd036e957e57c4e82c | [
"MIT"
] | 2 | 2021-07-30T08:37:42.000Z | 2021-07-30T14:12:27.000Z | 283.335548 | 59,064 | 0.923022 | [
[
[
"# MCMC Sampler from Scratch\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.stats",
"_____no_output_____"
],
[
"names = ['Carrara','Savelli','Genova','Avellino','Firenze','Roma','Pescara','Alicudi']\nsize = [40,5,200,20,400,3000,10,1]\n\ndef jump(p):\n a = np.random.uniform(0,1, 1)\n return a => p\n\n\nstart = np.random.choice(names)\ni0 = names.index(start)\nprint('We start from %s' %start)\n\n\nfor i in range(10):\n \n i1=np.random.choice(max(0,i0-1),min(7,i0+1))\n print('we propose to jump to %s' %(names[i1]))\n \n p_jump = size[i1]/size[i0]\n \n if jump(p_jump):\n i0 = i1\n print('We are now in %s' %names[i0]\n \n\n",
"Firenze 4\n"
]
],
[
[
"### Adapted from: https://towardsdatascience.com/from-scratch-bayesian-inference-markov-chain-monte-carlo-and-metropolis-hastings-in-python-ef21a29e25a\n",
"_____no_output_____"
]
],
[
[
"\nmod1=lambda t:np.random.normal(10,3,t)\n\n#Form a population of 30,000 individual, with average=10 and scale=3\npopulation = mod1(30000)\n#Assume we are only able to observe 1,000 of these individuals.\nobservation = population[np.random.randint(0, 30000, 1000)]\n\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(1,1,1)\nax.hist( observation,bins=35 ,)\nax.set_xlabel(\"Value\")\nax.set_ylabel(\"Frequency\")\nax.set_title(\"Figure 1: Distribution of 1000 observations sampled from a population of 30,000 with mu=10, sigma=3\")\nmu_obs=observation.mean()",
"_____no_output_____"
],
[
"\n#The tranistion model defines how to move from sigma_current to sigma_new\ntransition_model = lambda x: [x[0],np.random.normal(x[1],1,(1,))]\n\ndef prior(x):\n #x[0] = mu, x[1]=sigma (new or current)\n #returns 1 for all valid values of sigma. Log(1) =0, so it does not affect the summation.\n #returns 0 for all invalid values of sigma (<=0). Log(0)=-infinity, and Log(negative number) is undefined.\n #It makes the new sigma infinitely unlikely.\n if(x[1] <=0):\n return 0\n return 1\n\n#Computes the likelihood of the data given a sigma (new or current) according to equation (2)\ndef manual_log_like_normal(x,data):\n #x[0]=mu, x[1]=sigma (new or current)\n #data = the observation\n return np.sum(-np.log(x[1] * np.sqrt(2* np.pi) )-((data-x[0])**2) / (2*x[1]**2))\n\n#Same as manual_log_like_normal(x,data), but using scipy implementation. It's pretty slow.\ndef log_lik_normal(x,data):\n #x[0]=mu, x[1]=sigma (new or current)\n #data = the observation\n return np.sum(np.log(scipy.stats.norm(x[0],x[1]).pdf(data)))\n\n\n#Defines whether to accept or reject the new sample\ndef acceptance(x, x_new):\n if x_new>x:\n return True\n else:\n accept=np.random.uniform(0,1)\n # Since we did a log likelihood, we need to exponentiate in order to compare to the random number\n # less likely x_new are less likely to be accepted\n return (accept < (np.exp(x_new-x)))\n\n\ndef metropolis_hastings(likelihood_computer,prior, transition_model, param_init,iterations,data,acceptance_rule):\n # likelihood_computer(x,data): returns the likelihood that these parameters generated the data\n # transition_model(x): a function that draws a sample from a symmetric distribution and returns it\n # param_init: a starting sample\n # iterations: number of accepted to generated\n # data: the data that we wish to model\n # acceptance_rule(x,x_new): decides whether to accept or reject the new sample\n x = param_init\n i_accepted=[]\n i_rejected=[]\n accepted = []\n rejected = [] \n for i in range(iterations):\n x_new = transition_model(x) \n x_lik = likelihood_computer(x,data)\n x_new_lik = likelihood_computer(x_new,data) \n if (acceptance(x_lik + np.log(prior(x)),x_new_lik+np.log(prior(x_new)))): \n x = x_new\n accepted.append(x_new)\n i_accepted.append(i)\n \n else:\n rejected.append(x_new)\n i_rejected.append(i)\n \n return np.array(accepted), np.array(rejected), np.array(i_accepted), np.array(i_rejected)",
"_____no_output_____"
],
[
"accepted, rejected, ia, ir = metropolis_hastings(manual_log_like_normal,prior,transition_model,[mu_obs,0.1], 1000,observation,acceptance)\n",
"/home/viola/.local/lib/python3.6/site-packages/ipykernel_launcher.py:18: RuntimeWarning: invalid value encountered in log\n/home/viola/.local/lib/python3.6/site-packages/ipykernel_launcher.py:54: RuntimeWarning: divide by zero encountered in log\n"
],
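[
"# A short summary of the run above, assuming `accepted` and `rejected` are in memory:\n# the acceptance rate and the mean of the accepted sigma samples, which should\n# drift towards the true scale of 3 used to generate the population.\nacceptance_rate = len(accepted) / (len(accepted) + len(rejected))\nprint('acceptance rate: %.3f' % acceptance_rate)\nif len(accepted) > 0:\n    print('mean of accepted sigma samples: %.3f' % accepted[:, 1].mean())",
"_____no_output_____"
],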
[
"print(len(accepted), len(rejected))\nprint(accepted)",
"10 990\n[[ 9.9732397 127.56466001]\n [ 9.9732397 94.74063269]\n [ 9.9732397 26.73368205]\n [ 9.9732397 24.18819132]\n [ 9.9732397 11.09990364]\n [ 9.9732397 5.31586662]\n [ 9.9732397 4.84050857]\n [ 9.9732397 3.29444928]\n [ 9.9732397 3.15384469]\n [ 9.9732397 3.15793793]]\n"
],
[
"f,ax=plt.subplots(2,1, figsize=(10,10))\nax[0].plot(ia[0:50], accepted[:,1][0:50], 'rd')\nax[0].plot(ir[0:50],rejected[:,1][0:50], 'k*')\n\nax[1].plot(ir,rejected[0:,1], 'k')\nax[1].plot(ia,accepted[:,1], 'r')\n\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
ec9f51e6175a0e5c47828ee7a543cce4ef3d40ed | 284,540 | ipynb | Jupyter Notebook | Polynomial_Regression/Polynomial_Regression.ipynb | Narmatha-T/Predicting-House-Prices | e8a9696685f3ec7389097f0c0ae1e35d6a299a81 | [
"MIT"
] | 2 | 2021-03-10T06:36:36.000Z | 2021-03-16T05:30:48.000Z | Polynomial_Regression/Polynomial_Regression.ipynb | Narmatha-T/Predicting-House-Prices | e8a9696685f3ec7389097f0c0ae1e35d6a299a81 | [
"MIT"
] | null | null | null | Polynomial_Regression/Polynomial_Regression.ipynb | Narmatha-T/Predicting-House-Prices | e8a9696685f3ec7389097f0c0ae1e35d6a299a81 | [
"MIT"
] | null | null | null | 95.387194 | 25,966 | 0.763998 | [
[
[
"# Assessing Fit (polynomial regression)",
"_____no_output_____"
],
[
"In this notebook we will compare different regression models in order to assess which model fits best. We will be using polynomial regression as a means to examine this topic.\n* Write a function to take an SArray and a degree and return an SFrame where each column is the SArray to a polynomial value up to the total degree e.g. degree = 3 then column 1 is the SArray column 2 is the SArray squared and column 3 is the SArray cubed\n* Use matplotlib to visualize polynomial regressions\n* Use matplotlib to visualize the same polynomial degree on different subsets of the data\n* Use a validation set to select a polynomial degree\n* Assess the final fit using test data\n",
"_____no_output_____"
],
[
"# Fire up Turi Create",
"_____no_output_____"
]
],
[
[
"!pip install turicreate",
"_____no_output_____"
],
[
"import turicreate\r\nfrom google.colab import files",
"_____no_output_____"
],
[
"uploaded = files.upload()",
"_____no_output_____"
],
[
"!unzip home_data.sframe.zip",
"_____no_output_____"
]
],
[
[
"Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.\n\nThe easiest way to apply a power to an SArray is to use the .apply() and lambda x: functions. \nFor example to take the example array and compute the third power we can do as follows: (note running this cell the first time may take longer than expected since it loads Turi Create)",
"_____no_output_____"
]
],
[
[
"tmp = turicreate.SArray([1., 2., 3.])\ntmp_cubed = tmp.apply(lambda x: x**3)\nprint tmp\nprint tmp_cubed",
"[1.0, 2.0, 3.0]\n[1.0, 8.0, 27.0]\n"
]
],
[
[
"We can create an empty SFrame using turicreate.SFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example we create an empty SFrame and make the column 'power_1' to be the first power of tmp (i.e. tmp itself).",
"_____no_output_____"
]
],
[
[
"ex_sframe = turicreate.SFrame()\nex_sframe['power_1'] = tmp\nprint ex_sframe",
"+---------+\n| power_1 |\n+---------+\n| 1.0 |\n| 2.0 |\n| 3.0 |\n+---------+\n[3 rows x 1 columns]\n\n"
]
],
[
[
"# Polynomial_sframe function",
"_____no_output_____"
],
[
"Using the hints above complete the following function to create an SFrame consisting of the powers of an SArray up to a specific degree:",
"_____no_output_____"
]
],
[
[
"def polynomial_sframe(feature, degree):\n # assume that degree >= 1\n # initialize the SFrame:\n poly_sframe = turicreate.SFrame()\n # and set poly_sframe['power_1'] equal to the passed feature\n poly_sframe['power_1']=feature\n # first check if degree > 1\n if degree > 1:\n # then loop over the remaining degrees:\n # range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree\n for power in range(2, degree+1): \n # first we'll give the column a name:\n name = 'power_' + str(power)\n # then assign poly_sframe[name] to the appropriate power of feature\n poly_sframe[name] = poly_sframe['power_1'].apply(lambda x: x**power)\n return poly_sframe",
"_____no_output_____"
]
],
[
[
"To test your function consider the smaller tmp variable and what you would expect the outcome of the following call:",
"_____no_output_____"
]
],
[
[
"print polynomial_sframe(tmp, 3)",
"+---------+---------+---------+\n| power_1 | power_2 | power_3 |\n+---------+---------+---------+\n| 1.0 | 1.0 | 1.0 |\n| 2.0 | 4.0 | 8.0 |\n| 3.0 | 9.0 | 27.0 |\n+---------+---------+---------+\n[3 rows x 3 columns]\n\n"
]
],
[
[
"# Visualizing polynomial regression",
"_____no_output_____"
],
[
"Let's use matplotlib to visualize what a polynomial regression looks like on some real data.",
"_____no_output_____"
]
],
[
[
"sales = turicreate.SFrame('home_data.sframe/')",
"_____no_output_____"
]
],
[
[
"As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.",
"_____no_output_____"
]
],
[
[
"sales = sales.sort(['sqft_living', 'price'])",
"_____no_output_____"
]
],
[
[
"Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like.",
"_____no_output_____"
]
],
[
[
"poly1_data = polynomial_sframe(sales['sqft_living'], 1)\npoly1_data['price'] = sales['price'] # add price to the data since it's the target",
"_____no_output_____"
]
],
[
[
"NOTE: for all the models in this notebook use validation_set = None to ensure that all results are consistent across users.",
"_____no_output_____"
]
],
[
[
"model1 = turicreate.linear_regression.create(poly1_data, target = 'price', features = ['power_1'], validation_set = None)",
"_____no_output_____"
],
[
"#let's take a look at the weights before we plot\nmodel1.coefficients",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"plt.plot(poly1_data['power_1'],poly1_data['price'],'.',\n poly1_data['power_1'], model1.predict(poly1_data),'-')",
"_____no_output_____"
]
],
[
[
"Let's unpack that plt.plot() command. The first pair of SArrays we passed are the 1st power of sqft and the actual price we then ask it to print these as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model. We ask these to be plotted as a line '-'. \n\nWe can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. What if we wanted to plot a second degree polynomial?",
"_____no_output_____"
]
],
[
[
"poly2_data = polynomial_sframe(sales['sqft_living'], 2)\nmy_features = poly2_data.column_names() # get the name of the features\npoly2_data['price'] = sales['price'] # add price to the data since it's the target\nmodel2 = turicreate.linear_regression.create(poly2_data, target = 'price', features = my_features, validation_set = None)",
"_____no_output_____"
],
[
"model2.coefficients",
"_____no_output_____"
],
[
"plt.plot(poly2_data['power_1'],poly2_data['price'],'.',\n poly2_data['power_1'], model2.predict(poly2_data),'-')",
"_____no_output_____"
]
],
[
[
"The resulting model looks like half a parabola. Try on your own to see what the cubic looks like:",
"_____no_output_____"
]
],
[
[
"poly3_data = polynomial_sframe(sales['sqft_living'], 3)\r\nmy_features = poly3_data.column_names() # get the name of the features\r\npoly3_data['price'] = sales['price'] # add price to the data since it's the target\r\nmodel3 = turicreate.linear_regression.create(poly3_data, target = 'price', features = my_features, validation_set = None)",
"_____no_output_____"
],
[
"plt.plot(poly3_data['power_1'],poly3_data['price'],'.',\r\n poly3_data['power_1'], model3.predict(poly3_data),'-')",
"_____no_output_____"
]
],
[
[
"Now try a 15th degree polynomial:",
"_____no_output_____"
]
],
[
[
"poly15_data = polynomial_sframe(sales['sqft_living'], 15)\r\nmy_features = poly15_data.column_names() # get the name of the features\r\npoly15_data['price'] = sales['price'] # add price to the data since it's the target\r\nmodel15 = turicreate.linear_regression.create(poly15_data, target = 'price', features = my_features, validation_set = None)",
"_____no_output_____"
],
[
"plt.plot(poly15_data['power_1'],poly15_data['price'],'.',\r\n poly15_data['power_1'], model15.predict(poly15_data),'-')",
"_____no_output_____"
]
],
[
[
"What do you think of the 15th degree polynomial? Do you think this is appropriate? If we were to change the data do you think you'd get pretty much the same curve? Let's take a look.",
"_____no_output_____"
],
[
"# Changing the data and re-learning",
"_____no_output_____"
],
[
"We're going to split the sales data into four subsets of roughly equal size. Then you will estimate a 15th degree polynomial model on all four subsets of the data. Print the coefficients (you should use .print_rows(num_rows = 16) to view all of them) and plot the resulting fit (as we did above). \n\nTo split the sales data into four subsets, we perform the following steps:\n* First split sales into 2 subsets with `.random_split(0.5, seed=0)`. \n* Next split the resulting subsets into 2 more subsets each. Use `.random_split(0.5, seed=0)`.\n\nWe set `seed=0` in these steps so that different users get consistent results.\nYou should end up with 4 subsets (`set_1`, `set_2`, `set_3`, `set_4`) of approximately equal size. ",
"_____no_output_____"
]
],
[
[
"(half_1, half_2) = sales.random_split(0.5, seed=0)\r\n(set_1, set_2) = half_1.random_split(0.5, seed=0)\r\n(set_3, set_4) = half_2.random_split(0.5, seed=0)",
"_____no_output_____"
]
],
[
[
"Fit a 15th degree polynomial on set_1, set_2, set_3, and set_4 using sqft_living to predict prices. Print the coefficients and make a plot of the resulting model.",
"_____no_output_____"
]
],
[
[
"poly15_data = polynomial_sframe(set_1['sqft_living'], 15)\r\nmy_features = poly15_data.column_names() # get the name of the features\r\npoly15_data['price'] = set_1['price'] # add price to the data since it's the target\r\nmodel15 = turicreate.linear_regression.create(poly15_data, target = 'price', features = my_features, validation_set = None)\r\nmodel15.coefficients.print_rows(num_rows=16, num_columns=4)",
"_____no_output_____"
],
[
"plt.plot(poly15_data['power_1'],poly15_data['price'],'.',\r\n poly15_data['power_1'], model15.predict(poly15_data),'-')",
"_____no_output_____"
],
[
"poly15_data = polynomial_sframe(set_2['sqft_living'], 15)\r\nmy_features = poly15_data.column_names() # get the name of the features\r\npoly15_data['price'] = set_2['price'] # add price to the data since it's the target\r\nmodel15 = turicreate.linear_regression.create(poly15_data, target = 'price', features = my_features, validation_set = None)\r\nmodel15.coefficients.print_rows(num_rows=16, num_columns=4)",
"_____no_output_____"
],
[
"plt.plot(poly15_data['power_1'],poly15_data['price'],'.',\r\n poly15_data['power_1'], model15.predict(poly15_data),'-')",
"_____no_output_____"
],
[
"poly15_data = polynomial_sframe(set_3['sqft_living'], 15)\r\nmy_features = poly15_data.column_names() # get the name of the features\r\npoly15_data['price'] = set_3['price'] # add price to the data since it's the target\r\nmodel15 = turicreate.linear_regression.create(poly15_data, target = 'price', features = my_features, validation_set = None)\r\nmodel15.coefficients.print_rows(num_rows=16, num_columns=4)",
"_____no_output_____"
],
[
"plt.plot(poly15_data['power_1'],poly15_data['price'],'.',\r\n poly15_data['power_1'], model15.predict(poly15_data),'-')",
"_____no_output_____"
],
[
"poly15_data = polynomial_sframe(set_4['sqft_living'], 15)\r\nmy_features = poly15_data.column_names() # get the name of the features\r\npoly15_data['price'] = set_4['price'] # add price to the data since it's the target\r\nmodel15 = turicreate.linear_regression.create(poly15_data, target = 'price', features = my_features, validation_set = None)\r\nmodel15.coefficients.print_rows(num_rows=16, num_columns=4)",
"_____no_output_____"
],
[
"plt.plot(poly15_data['power_1'],poly15_data['price'],'.',\r\n poly15_data['power_1'], model15.predict(poly15_data),'-')",
"_____no_output_____"
]
],
[
[
"Some questions you will be asked on your quiz:\n\nIs the sign (positive or negative) for power_15 the same in all four models?\n\n(True/False) the plotted fitted lines look the same in all four plots",
"_____no_output_____"
],
[
"# Selecting a Polynomial Degree",
"_____no_output_____"
],
[
"Whenever we have a \"magic\" parameter like the degree of the polynomial there is one well-known way to select these parameters: validation set. (We will explore another approach in week 4).\n\nWe split the sales dataset 3-way into training set, test set, and validation set as follows:\n\n* Split our sales data into 2 sets: `training_and_validation` and `testing`. Use `random_split(0.9, seed=1)`.\n* Further split our training data into two sets: `training` and `validation`. Use `random_split(0.5, seed=1)`.\n\nAgain, we set `seed=1` to obtain consistent results for different users.",
"_____no_output_____"
]
],
[
[
"training_validation,testing = sales.random_split(0.9,seed=1)\r\ntraining,validation = training_validation.random_split(0.5,seed=1)",
"_____no_output_____"
]
],
[
[
"Next you should write a loop that does the following:\n* For degree in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] (to get this in python type range(1, 15+1))\n * Build an SFrame of polynomial data of train_data['sqft_living'] at the current degree\n * hint: my_features = poly_data.column_names() gives you a list e.g. ['power_1', 'power_2', 'power_3'] which you might find useful for turicreate.linear_regression.create( features = my_features)\n * Add train_data['price'] to the polynomial SFrame\n * Learn a polynomial regression model to sqft vs price with that degree on TRAIN data\n * Compute the RSS on VALIDATION data (here you will want to use .predict()) for that degree and you will need to make a polynmial SFrame using validation data.\n* Report which degree had the lowest RSS on validation data (remember python indexes from 0)\n\n(Note you can turn off the print out of linear_regression.create() with verbose = False)",
"_____no_output_____"
]
],
[
[
"def get_RSS(model, data, outcome):\r\n # First get the predictions\r\n predicted = model.predict(data);\r\n # Then compute the residuals/errors\r\n errors = outcome-predicted;\r\n # Then square and add them up \r\n RSS = (errors*errors).sum();\r\n return(RSS) ",
"_____no_output_____"
],
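[
"# A quick sanity check of get_RSS, assuming `model1` and `poly1_data` from the\n# earlier cells are still in memory: the RSS of the degree-1 model on the data\n# it was fit to.\nexample_rss = get_RSS(model1, poly1_data, poly1_data['price'])\nprint(example_rss)",
"_____no_output_____"
],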
[
"from heapq import heappush, heappop\r\ndef lowest_RSS_degree_model (train_data_set, validation_data_set, feature, output_feature, degrees):\r\n if degrees>1 :\r\n RSSs = []\r\n models = []\r\n heap = []\r\n for degree in range (1, degrees+1):\r\n poly_data = polynomial_sframe(train_data_set[feature], degree)\r\n my_features = poly_data.column_names()\r\n poly_data[output_feature] = train_data_set[output_feature]\r\n model = turicreate.linear_regression.create(poly_data,\r\n target = output_feature,\r\n features = my_features,\r\n validation_set = None,\r\n verbose= False,\r\n l2_penalty=0., \r\n l1_penalty=0.)\r\n RSS = get_RSS(model, polynomial_sframe(validation_data_set[feature], degree), validation_data_set[output_feature])\r\n #save RSS into a min heap\r\n heappush(heap, (RSS,degree))\r\n RSSs.append(RSS)\r\n models.append(model)\r\n min_RSS = min(RSSs)\r\n min_model = models[RSSs.index(min_RSS)]\r\n print(heap)\r\n return (min_model)",
"_____no_output_____"
]
],
[
[
" Which degree (1, 2, …, 15) had the lowest RSS on Validation data?**",
"_____no_output_____"
],
[
"Now that you have chosen the degree of your polynomial using validation data, compute the RSS of this model on TEST data.",
"_____no_output_____"
]
],
[
[
"best_model = lowest_RSS_degree_model(training, validation, 'sqft_living', 'price', 15)",
"[(592395859848243.8, 6), (592677921226354.2, 8), (598827152777944.0, 5), (598631101792870.4, 9), (609123922774458.6, 4), (616719668845846.6, 3), (605727492761012.4, 7), (676709739838072.8, 1), (607091004045998.0, 2), (5868658575837859.0, 10), (2.870665333742142e+17, 11), (2.1557521198578074e+17, 12), (3.334475201712676e+17, 13), (7.153263137740248e+17, 14), (9.312608184140698e+17, 15)]\n"
]
],
[
[
"What is the RSS on TEST data for the model with the degree selected from Validation data?",
"_____no_output_____"
]
],
[
[
"get_RSS(best_model, polynomial_sframe(testing['sqft_living'], 6), testing['price'])",
"_____no_output_____"
],
[
"best_model.coefficients",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ec9f56a5e9a8dd73071a427e5f1d9c997be01e5d | 12,163 | ipynb | Jupyter Notebook | docs/manual/user_guide/comparing_nilm_algorithms.ipynb | Rithwikksvr/nilmtk | d5631ea7f8599667056226389d8573e2ea108a27 | [
"Apache-2.0"
] | 646 | 2015-01-17T20:21:58.000Z | 2022-03-30T09:17:07.000Z | docs/manual/user_guide/comparing_nilm_algorithms.ipynb | Xiaohu-cqu/nilmtk | 52230bed15dae3186dc550d8c8812b221fc7e320 | [
"Apache-2.0"
] | 643 | 2015-01-01T18:30:19.000Z | 2022-03-23T08:34:29.000Z | docs/manual/user_guide/comparing_nilm_algorithms.ipynb | Xiaohu-cqu/nilmtk | 52230bed15dae3186dc550d8c8812b221fc7e320 | [
"Apache-2.0"
] | 484 | 2015-01-03T06:37:19.000Z | 2022-03-22T15:20:03.000Z | 34.358757 | 146 | 0.555537 | [
[
[
"### Sample code for Comparing NILM algorithms",
"_____no_output_____"
]
],
[
[
"from __future__ import print_function, division\nimport time\nfrom matplotlib import rcParams\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nfrom six import iteritems\n\n%matplotlib inline\n\nrcParams['figure.figsize'] = (13, 6)\n\nfrom nilmtk import DataSet, TimeFrame, MeterGroup, HDFDataStore\nfrom nilmtk.legacy.disaggregate import CombinatorialOptimisation, FHMM\nimport nilmtk.utils",
"_____no_output_____"
]
],
[
[
"### Dividing data into train and test set",
"_____no_output_____"
]
],
[
[
"train = DataSet('/data/redd.h5')\ntest = DataSet('/data/redd.h5')",
"_____no_output_____"
]
],
[
[
"Let us use building 1 for demo purposes",
"_____no_output_____"
]
],
[
[
"building = 1",
"_____no_output_____"
]
],
[
[
"Let's split data at April 30th",
"_____no_output_____"
]
],
[
[
"# The dates are interpreted by Pandas, prefer using ISO dates (yyyy-mm-dd)\ntrain.set_window(end=\"2011-04-30\")\ntest.set_window(start=\"2011-04-30\")\n\ntrain_elec = train.buildings[1].elec\ntest_elec = test.buildings[1].elec",
"_____no_output_____"
]
],
[
[
"### Selecting top-5 appliances",
"_____no_output_____"
]
],
[
[
"top_5_train_elec = train_elec.submeters().select_top_k(k=5)",
"15/16 MeterGroup(meters==19, building=1, dataset='REDD', appliances=[Appliance(type='unknown', instance=2)])e=1)])ce=1)])\n ElecMeter(instance=3, building=1, dataset='REDD', appliances=[Appliance(type='electric oven', instance=1)])\n ElecMeter(instance=4, building=1, dataset='REDD', appliances=[Appliance(type='electric oven', instance=1)])\n16/16 MeterGroup(meters= for ElecMeterID(instance=4, building=1, dataset='REDD') ... \n ElecMeter(instance=10, building=1, dataset='REDD', appliances=[Appliance(type='washer dryer', instance=1)])\n ElecMeter(instance=20, building=1, dataset='REDD', appliances=[Appliance(type='washer dryer', instance=1)])\nCalculating total_energy for ElecMeterID(instance=20, building=1, dataset='REDD') ... "
]
],
[
[
"### Training and disaggregation",
"_____no_output_____"
]
],
[
[
"def predict(clf, test_elec, sample_period, timezone):\n pred = {}\n gt= {}\n\n for i, chunk in enumerate(test_elec.mains().load(sample_period=sample_period)):\n chunk_drop_na = chunk.dropna()\n pred[i] = clf.disaggregate_chunk(chunk_drop_na)\n gt[i]={}\n\n for meter in test_elec.submeters().meters:\n # Only use the meters that we trained on (this saves time!) \n gt[i][meter] = next(meter.load(sample_period=sample_period))\n gt[i] = pd.DataFrame({k:v.squeeze() for k,v in iteritems(gt[i]) if len(v)}, index=next(iter(gt[i].values())).index).dropna()\n \n # If everything can fit in memory\n gt_overall = pd.concat(gt)\n gt_overall.index = gt_overall.index.droplevel()\n pred_overall = pd.concat(pred)\n pred_overall.index = pred_overall.index.droplevel()\n\n # Having the same order of columns\n gt_overall = gt_overall[pred_overall.columns]\n \n #Intersection of index\n gt_index_utc = gt_overall.index.tz_convert(\"UTC\")\n pred_index_utc = pred_overall.index.tz_convert(\"UTC\")\n common_index_utc = gt_index_utc.intersection(pred_index_utc)\n \n \n common_index_local = common_index_utc.tz_convert(timezone)\n gt_overall = gt_overall.loc[common_index_local]\n pred_overall = pred_overall.loc[common_index_local]\n appliance_labels = [m for m in gt_overall.columns.values]\n gt_overall.columns = appliance_labels\n pred_overall.columns = appliance_labels\n return gt_overall, pred_overall",
"_____no_output_____"
],
[
"# Since the methods use randomized initialization, let's fix a seed here\n# to make this notebook reproducible\nimport numpy.random\nnumpy.random.seed(42)",
"_____no_output_____"
],
[
"classifiers = {'CO':CombinatorialOptimisation(), 'FHMM':FHMM()}\npredictions = {}\nsample_period = 120\nfor clf_name, clf in classifiers.items():\n print(\"*\"*20)\n print(clf_name)\n print(\"*\" *20)\n clf.train(top_5_train_elec, sample_period=sample_period)\n gt, predictions[clf_name] = predict(clf, test_elec, 120, train.metadata['timezone'])\n",
"********************\nCO\n********************\nTraining model for submeter 'ElecMeter(instance=9, building=1, dataset='REDD', appliances=[Appliance(type='light', instance=1)])'\nTraining model for submeter 'ElecMeter(instance=8, building=1, dataset='REDD', appliances=[Appliance(type='sockets', instance=2)])'\nTraining model for submeter 'ElecMeter(instance=11, building=1, dataset='REDD', appliances=[Appliance(type='microwave', instance=1)])'\nTraining model for submeter 'ElecMeter(instance=6, building=1, dataset='REDD', appliances=[Appliance(type='dish washer', instance=1)])'\nTraining model for submeter 'ElecMeter(instance=5, building=1, dataset='REDD', appliances=[Appliance(type='fridge', instance=1)])'\nDone training!\nLoading data for meter ElecMeterID(instance=2, building=1, dataset='REDD') \nDone loading data all meters for this chunk.\nEstimating power demand for 'ElecMeter(instance=9, building=1, dataset='REDD', appliances=[Appliance(type='light', instance=1)])'\nEstimating power demand for 'ElecMeter(instance=8, building=1, dataset='REDD', appliances=[Appliance(type='sockets', instance=2)])'\nEstimating power demand for 'ElecMeter(instance=11, building=1, dataset='REDD', appliances=[Appliance(type='microwave', instance=1)])'\nEstimating power demand for 'ElecMeter(instance=6, building=1, dataset='REDD', appliances=[Appliance(type='dish washer', instance=1)])'\nEstimating power demand for 'ElecMeter(instance=5, building=1, dataset='REDD', appliances=[Appliance(type='fridge', instance=1)])'\nLoading data for meter ElecMeterID(instance=4, building=1, dataset='REDD') \nDone loading data all meters for this chunk.\nLoading data for meter ElecMeterID(instance=20, building=1, dataset='REDD') \nDone loading data all meters for this chunk.\n********************\nFHMM\n********************\nTraining model for submeter 'ElecMeter(instance=9, building=1, dataset='REDD', appliances=[Appliance(type='light', instance=1)])'\nTraining model for submeter 'ElecMeter(instance=8, building=1, dataset='REDD', appliances=[Appliance(type='sockets', instance=2)])'\nTraining model for submeter 'ElecMeter(instance=11, building=1, dataset='REDD', appliances=[Appliance(type='microwave', instance=1)])'\nTraining model for submeter 'ElecMeter(instance=6, building=1, dataset='REDD', appliances=[Appliance(type='dish washer', instance=1)])'\nTraining model for submeter 'ElecMeter(instance=5, building=1, dataset='REDD', appliances=[Appliance(type='fridge', instance=1)])'\nLoading data for meter ElecMeterID(instance=2, building=1, dataset='REDD') \nDone loading data all meters for this chunk.\nLoading data for meter ElecMeterID(instance=4, building=1, dataset='REDD') \nDone loading data all meters for this chunk.\nLoading data for meter ElecMeterID(instance=20, building=1, dataset='REDD') \nDone loading data all meters for this chunk.\n"
],
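Before looking at aggregate error metrics, a quick visual comparison can be instructive. This is a hypothetical follow-up cell: it assumes `gt` and `predictions` from the loop above and picks an arbitrary day assumed to lie inside the test window.

```python
import matplotlib.pyplot as plt

# Hypothetical visual check: predicted vs. metered power for one appliance.
col = gt.columns[0]                      # first disaggregated appliance
day = slice("2011-05-01", "2011-05-01")  # assumed to lie inside the test window

ax = gt.loc[day, col].plot(label="ground truth")
predictions["FHMM"].loc[day, col].plot(ax=ax, label="FHMM prediction")
ax.set_ylabel("Power (W)")
ax.legend()
plt.show()
```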
[
"rmse = {}\nfor clf_name in classifiers.keys():\n rmse[clf_name] = nilmtk.utils.compute_rmse(gt, predictions[clf_name], pretty=True)\n\nrmse = pd.DataFrame(rmse)",
"_____no_output_____"
],
[
"rmse",
"_____no_output_____"
]
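For reference, the per-appliance RMSE table above can be reproduced with plain pandas/NumPy roughly as follows — a sketch of the metric itself, not necessarily identical to what `nilmtk.utils.compute_rmse` returns:

```python
import numpy as np

# Rough per-appliance RMSE from the aligned ground-truth and prediction frames.
manual_rmse = pd.DataFrame({
    clf_name: np.sqrt(((gt - predictions[clf_name]) ** 2).mean())
    for clf_name in classifiers
})
manual_rmse
```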
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
ec9f6249f38f080a9374114d3396f1ac5a6a2cf2 | 359,638 | ipynb | Jupyter Notebook | lesson 3/results/tlbl.ipynb | gtpedrosa/Python4WindEnergy | f8ad09018420cfb3a419173f97b129de7118d814 | [
"Apache-2.0"
] | 48 | 2015-01-19T18:21:10.000Z | 2021-11-27T22:41:06.000Z | lesson 3/results/tlbl.ipynb | gtpedrosa/Python4WindEnergy | f8ad09018420cfb3a419173f97b129de7118d814 | [
"Apache-2.0"
] | 1 | 2016-05-24T06:07:07.000Z | 2016-05-24T08:26:29.000Z | lesson 3/results/tlbl.ipynb | gtpedrosa/Python4WindEnergy | f8ad09018420cfb3a419173f97b129de7118d814 | [
"Apache-2.0"
] | 24 | 2015-06-26T14:44:07.000Z | 2021-06-07T18:36:52.000Z | 502.287709 | 77,779 | 0.93224 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |