| Column | Dtype | Min | Max |
|--------|-------|-----|-----|
| hexsha | stringlengths | 40 | 40 |
| size | int64 | 6 | 14.9M |
| ext | stringclasses (1 value) | | |
| lang | stringclasses (1 value) | | |
| max_stars_repo_path | stringlengths | 6 | 260 |
| max_stars_repo_name | stringlengths | 6 | 119 |
| max_stars_repo_head_hexsha | stringlengths | 40 | 41 |
| max_stars_repo_licenses | sequence | | |
| max_stars_count | int64 | 1 | 191k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 | 24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 | 24 |
| max_issues_repo_path | stringlengths | 6 | 260 |
| max_issues_repo_name | stringlengths | 6 | 119 |
| max_issues_repo_head_hexsha | stringlengths | 40 | 41 |
| max_issues_repo_licenses | sequence | | |
| max_issues_count | int64 | 1 | 67k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 | 24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 | 24 |
| max_forks_repo_path | stringlengths | 6 | 260 |
| max_forks_repo_name | stringlengths | 6 | 119 |
| max_forks_repo_head_hexsha | stringlengths | 40 | 41 |
| max_forks_repo_licenses | sequence | | |
| max_forks_count | int64 | 1 | 105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 | 24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 | 24 |
| avg_line_length | float64 | 2 | 1.04M |
| max_line_length | int64 | 2 | 11.2M |
| alphanum_fraction | float64 | 0 | 1 |
| cells | sequence | | |
| cell_types | sequence | | |
| cell_type_groups | sequence | | |
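Each column above maps to one field per row in the records that follow: content identifiers and sizes, then star/issue/fork metadata for the repository each notebook came from, then simple text statistics, then the notebook content itself. As a hedged illustration of how such a table could be queried, the sketch below assumes the rows have been exported to a local Parquet file; the file name and the filter thresholds are placeholders, not anything named in this dump.

```python
# Minimal sketch, assuming a local Parquet export of this table.
# "notebooks.parquet" and the thresholds are placeholders.
import pandas as pd

df = pd.read_parquet("notebooks.parquet")

# Keep notebooks from starred repositories with a modest maximum line
# length; in the records below, max_stars_count is null when a repo has
# no recorded star events.
subset = df[(df["max_stars_count"].fillna(0) >= 1) & (df["max_line_length"] <= 10_000)]

print(len(subset), "of", len(df), "rows kept")
print(subset[["max_stars_repo_name", "size", "avg_line_length"]].head())
```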
d032bd962006da696a27887b469ea9353eff7d3b
13,163
ipynb
Jupyter Notebook
5. OS with Python/Codes/3. Bulk Text-file Reading/Bulk Text-file Reading.ipynb
AshishJangra27/Data-Science-Live-Course-GeeksForGeeks
4fefa9c855dd515a974ee4c0d9a41886e3c0c1f8
[ "Apache-2.0" ]
1
2021-11-24T16:41:00.000Z
2021-11-24T16:41:00.000Z
5. OS with Python/Codes/3. Bulk Text-file Reading/Bulk Text-file Reading.ipynb
AshishJangra27/Data-Science-Live-Course-GeeksForGeeks
4fefa9c855dd515a974ee4c0d9a41886e3c0c1f8
[ "Apache-2.0" ]
null
null
null
5. OS with Python/Codes/3. Bulk Text-file Reading/Bulk Text-file Reading.ipynb
AshishJangra27/Data-Science-Live-Course-GeeksForGeeks
4fefa9c855dd515a974ee4c0d9a41886e3c0c1f8
[ "Apache-2.0" ]
null
null
null
29.121681
50
0.500646
[ [ [ "import os", "_____no_output_____" ], [ "files = os.listdir('Files')", "_____no_output_____" ], [ "for i in files:\n if i != '.DS_Store':\n \n fd = open('Files/'+i,'r')\n print('Files/'+i, fd.read())\n fd.close()", "Files/File5 copy 27.txt This is file 2.\n\nFiles/File2 copy 17.txt This is file 2.\n\nFiles/File2.txt This is file 2.\n\nFiles/File3 copy 2.txt This is file 3.\n\nFiles/File5 copy 9.txt This is file 2.\n\nFiles/File6 copy 7.txt This is file 3.\n\nFiles/File6 copy 6.txt This is file 3.\n\nFiles/File5 copy 8.txt This is file 2.\n\nFiles/File3 copy 3.txt This is file 3.\n\nFiles/File3.txt This is file 3.\n\nFiles/File2 copy 16.txt This is file 2.\n\nFiles/File5 copy 26.txt This is file 2.\n\nFiles/File5 copy 24.txt This is file 2.\n\nFiles/File5 copy 30.txt This is file 2.\n\nFiles/File5 copy 18.txt This is file 2.\n\nFiles/File2 copy 28.txt This is file 2.\n\nFiles/File2 copy 14.txt This is file 2.\n\nFiles/File1.txt This is file 1.\n\nFiles/File6 copy 4.txt This is file 3.\n\nFiles/File6 copy 5.txt This is file 3.\n\nFiles/File2 copy 15.txt This is file 2.\n\nFiles/File2 copy 29.txt This is file 2.\n\nFiles/File5 copy 19.txt This is file 2.\n\nFiles/File5 copy 31.txt This is file 2.\n\nFiles/File5 copy 25.txt This is file 2.\n\nFiles/File5 copy 21.txt This is file 2.\n\nFiles/File2 copy 11.txt This is file 2.\n\nFiles/File3 copy 4.txt This is file 3.\n\nFiles/File4.txt This is file 1.\n\nFiles/File5 copy.txt This is file 2.\n\nFiles/File4 copy.txt This is file 1.\n\nFiles/File5.txt This is file 2.\n\nFiles/File3 copy 5.txt This is file 3.\n\nFiles/File2 copy 10.txt This is file 2.\n\nFiles/File5 copy 20.txt This is file 2.\n\nFiles/File5 copy 22.txt This is file 2.\n\nFiles/File2 copy 12.txt This is file 2.\n\nFiles/File3 copy 7.txt This is file 3.\n\nFiles/File6 copy 2.txt This is file 3.\n\nFiles/File6 copy 3.txt This is file 3.\n\nFiles/File6.txt This is file 3.\n\nFiles/File3 copy 6.txt This is file 3.\n\nFiles/File2 copy 13.txt This is file 2.\n\nFiles/File5 copy 23.txt This is file 2.\n\nFiles/File4 copy 6.txt This is file 1.\n\nFiles/File2 copy.txt This is file 2.\n\nFiles/File3 copy.txt This is file 3.\n\nFiles/File6 copy 23.txt This is file 3.\n\nFiles/File1 copy 13.txt This is file 1.\n\nFiles/File4 copy 16.txt This is file 1.\n\nFiles/File3 copy 26.txt This is file 3.\n\nFiles/File1 copy 3.txt This is file 1.\n\nFiles/File1 copy 2.txt This is file 1.\n\nFiles/File3 copy 27.txt This is file 3.\n\nFiles/File4 copy 17.txt This is file 1.\n\nFiles/File1 copy 12.txt This is file 1.\n\nFiles/File6 copy 22.txt This is file 3.\n\nFiles/File4 copy 7.txt This is file 1.\n\nFiles/File4 copy 5.txt This is file 1.\n\nFiles/File6 copy 20.txt This is file 3.\n\nFiles/File1 copy 10.txt This is file 1.\n\nFiles/File4 copy 29.txt This is file 1.\n\nFiles/File4 copy 15.txt This is file 1.\n\nFiles/File3 copy 31.txt This is file 3.\n\nFiles/File3 copy 25.txt This is file 3.\n\nFiles/File3 copy 19.txt This is file 3.\n\nFiles/File3 copy 18.txt This is file 3.\n\nFiles/File3 copy 24.txt This is file 3.\n\nFiles/File3 copy 30.txt This is file 3.\n\nFiles/File4 copy 14.txt This is file 1.\n\nFiles/File4 copy 28.txt This is file 1.\n\nFiles/File1 copy 11.txt This is file 1.\n\nFiles/File6 copy 21.txt This is file 3.\n\nFiles/File4 copy 4.txt This is file 1.\n\nFiles/File6 copy 31.txt This is file 3.\n\nFiles/File6 copy 25.txt This is file 3.\n\nFiles/File6 copy 19.txt This is file 3.\n\nFiles/File1 copy 29.txt This is file 1.\n\nFiles/File1 copy 15.txt This is file 1.\n\nFiles/File4 
copy 10.txt This is file 1.\n\nFiles/File1 copy 5.txt This is file 1.\n\nFiles/File3 copy 20.txt This is file 3.\n\nFiles/File3 copy 21.txt This is file 3.\n\nFiles/File1 copy 4.txt This is file 1.\n\nFiles/File4 copy 11.txt This is file 1.\n\nFiles/File1 copy 14.txt This is file 1.\n\nFiles/File1 copy 28.txt This is file 1.\n\nFiles/File6 copy 18.txt This is file 3.\n\nFiles/File6 copy 24.txt This is file 3.\n\nFiles/File6 copy 30.txt This is file 3.\n\nFiles/File4 copy 3.txt This is file 1.\n\nFiles/File6 copy 26.txt This is file 3.\n\nFiles/File1 copy 16.txt This is file 1.\n\nFiles/File4 copy 13.txt This is file 1.\n\nFiles/File1 copy 6.txt This is file 1.\n\nFiles/File3 copy 23.txt This is file 3.\n\nFiles/File2 copy 8.txt This is file 2.\n\nFiles/File2 copy 9.txt This is file 2.\n\nFiles/File3 copy 22.txt This is file 3.\n\nFiles/File1 copy 7.txt This is file 1.\n\nFiles/File4 copy 12.txt This is file 1.\n\nFiles/File1 copy 17.txt This is file 1.\n\nFiles/File6 copy 27.txt This is file 3.\n\nFiles/File4 copy 2.txt This is file 1.\n\nFiles/File6 copy 16.txt This is file 3.\n\nFiles/File1 copy 26.txt This is file 1.\n\nFiles/File4 copy 23.txt This is file 1.\n\nFiles/File3 copy 13.txt This is file 3.\n\nFiles/File2 copy 4.txt This is file 2.\n\nFiles/File2 copy 5.txt This is file 2.\n\nFiles/File3 copy 12.txt This is file 3.\n\nFiles/File4 copy 22.txt This is file 1.\n\nFiles/File1 copy 27.txt This is file 1.\n\nFiles/File6 copy 17.txt This is file 3.\n\nFiles/File6 copy 29.txt This is file 3.\n\nFiles/File6 copy.txt This is file 3.\n\nFiles/File6 copy 15.txt This is file 3.\n\nFiles/File1 copy 25.txt This is file 1.\n\nFiles/File1 copy 31.txt This is file 1.\n\nFiles/File1 copy 19.txt This is file 1.\n\nFiles/File4 copy 20.txt This is file 1.\n\nFiles/File3 copy 10.txt This is file 3.\n\nFiles/File1 copy 9.txt This is file 1.\n\nFiles/File2 copy 7.txt This is file 2.\n\nFiles/File2 copy 6.txt This is file 2.\n\nFiles/File1 copy 8.txt This is file 1.\n\nFiles/File3 copy 11.txt This is file 3.\n\nFiles/File4 copy 21.txt This is file 1.\n\nFiles/File1 copy 18.txt This is file 1.\n\nFiles/File1 copy 30.txt This is file 1.\n\nFiles/File1 copy 24.txt This is file 1.\n\nFiles/File6 copy 14.txt This is file 3.\n\nFiles/File6 copy 28.txt This is file 3.\n\nFiles/File4 copy 9.txt This is file 1.\n\nFiles/File6 copy 10.txt This is file 3.\n\nFiles/File1 copy 20.txt This is file 1.\n\nFiles/File4 copy 25.txt This is file 1.\n\nFiles/File4 copy 31.txt This is file 1.\n\nFiles/File4 copy 19.txt This is file 1.\n\nFiles/File3 copy 29.txt This is file 3.\n\nFiles/File3 copy 15.txt This is file 3.\n\nFiles/File2 copy 2.txt This is file 2.\n\nFiles/File2 copy 3.txt This is file 2.\n\nFiles/File3 copy 14.txt This is file 3.\n\nFiles/File3 copy 28.txt This is file 3.\n\nFiles/File4 copy 18.txt This is file 1.\n\nFiles/File4 copy 30.txt This is file 1.\n\nFiles/File4 copy 24.txt This is file 1.\n\nFiles/File1 copy 21.txt This is file 1.\n\nFiles/File6 copy 11.txt This is file 3.\n\nFiles/File4 copy 8.txt This is file 1.\n\nFiles/File6 copy 13.txt This is file 3.\n\nFiles/File1 copy 23.txt This is file 1.\n\nFiles/File4 copy 26.txt This is file 1.\n\nFiles/File3 copy 16.txt This is file 3.\n\nFiles/File3 copy 17.txt This is file 3.\n\nFiles/File4 copy 27.txt This is file 1.\n\nFiles/File1 copy 22.txt This is file 1.\n\nFiles/File6 copy 12.txt This is file 3.\n\nFiles/File5 copy 12.txt This is file 2.\n\nFiles/File2 copy 22.txt This is file 2.\n\nFiles/File2 copy 23.txt This is file 2.\n\nFiles/File5 copy 
13.txt This is file 2.\n\nFiles/File5 copy 11.txt This is file 2.\n\nFiles/File2 copy 21.txt This is file 2.\n\nFiles/File3 copy 8.txt This is file 3.\n\nFiles/File5 copy 3.txt This is file 2.\n\nFiles/File5 copy 2.txt This is file 2.\n\nFiles/File3 copy 9.txt This is file 3.\n\nFiles/File2 copy 20.txt This is file 2.\n\nFiles/File5 copy 10.txt This is file 2.\n\nFiles/File5 copy 28.txt This is file 2.\n\nFiles/File5 copy 14.txt This is file 2.\n\nFiles/File2 copy 30.txt This is file 2.\n\nFiles/File2 copy 24.txt This is file 2.\n\nFiles/File2 copy 18.txt This is file 2.\n\nFiles/File5 copy 6.txt This is file 2.\n\nFiles/File6 copy 8.txt This is file 3.\n\nFiles/File6 copy 9.txt This is file 3.\n\nFiles/File5 copy 7.txt This is file 2.\n\nFiles/File2 copy 19.txt This is file 2.\n\nFiles/File2 copy 25.txt This is file 2.\n\nFiles/File2 copy 31.txt This is file 2.\n\nFiles/File5 copy 15.txt This is file 2.\n\nFiles/File5 copy 29.txt This is file 2.\n\nFiles/File1 copy.txt This is file 1.\n\nFiles/File5 copy 17.txt This is file 2.\n\nFiles/File2 copy 27.txt This is file 2.\n\nFiles/File5 copy 5.txt This is file 2.\n\nFiles/File5 copy 4.txt This is file 2.\n\nFiles/File2 copy 26.txt This is file 2.\n\nFiles/File5 copy 16.txt This is file 2.\n\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
d032c2d8731a9a3123c839bf07368239d0b925e5
946,532
ipynb
Jupyter Notebook
C_Data_resources/2_Open_datasets.ipynb
oercompbiomed/CBM101
20010dcb99fbf218c4789eb5918dcff8ceb94898
[ "MIT" ]
7
2019-07-03T07:41:55.000Z
2022-02-06T20:25:37.000Z
C_Data_resources/2_Open_datasets.ipynb
oercompbiomed/CBM101
20010dcb99fbf218c4789eb5918dcff8ceb94898
[ "MIT" ]
9
2019-03-14T15:15:09.000Z
2019-08-01T14:18:21.000Z
C_Data_resources/2_Open_datasets.ipynb
oercompbiomed/CBM101
20010dcb99fbf218c4789eb5918dcff8ceb94898
[ "MIT" ]
11
2019-03-12T10:43:11.000Z
2021-10-05T12:15:00.000Z
519.216676
280,896
0.941736
[ [ [ "# Acquiring Data from open repositories\n\nA crucial step in the work of a computational biologist is not only to analyse data, but acquiring datasets to analyse as well as toy datasets to test out computational methods and algorithms. The internet is full of such open datasets. Sometimes you have to sign up and make a user to get authentication, especially for medical data. This can sometimes be time consuming, so here we will deal with easy access resources, mostly of modest size. Multiple python libraries provide a `dataset` module which makes the effort to fetch online data extremely seamless, with little requirement for preprocessing.\n\n#### Goal of the notebook\nHere you will get familiar with some ways to fetch datasets from online. We do some data exploration on the data just for illustration, but the methods will be covered later.\n\n\n# Useful resources and links\n\nWhen playing around with algorithms, it can be practical to use relatively small datasets. A good example is the `datasets` submodule of `scikit-learn`. `Nilearn` (library for neuroimaging) also provides a collection of neuroimaging datasets. Many datasets can also be acquired through the competition website [Kaggle](https://www.kaggle.com), in which they describe how to access the data.\n\n\n### Links\n- [OpenML](https://www.openml.org/search?type=data)\n- [Nilearn datasets](https://nilearn.github.io/modules/reference.html#module-nilearn.datasets)\n- [Sklearn datasets](https://scikit-learn.org/stable/modules/classes.html?highlight=datasets#module-sklearn.datasets)\n- [Kaggle](https://www.kaggle.com/datasets)\n- [MEDNIST]\n\n- [**Awesomedata**](https://github.com/awesomedata/awesome-public-datasets)\n\n - We strongly recommend to check out the Awesomedata lists of public datasets, covering topics such as [biology/medicine](https://github.com/awesomedata/awesome-public-datasets#biology) and [neuroscience](https://github.com/awesomedata/awesome-public-datasets#neuroscience)\n\n- [Papers with code](https://paperswithcode.com)\n\n- [SNAP](https://snap.stanford.edu/data/)\n - Stanford Large Network Dataset Collection \n- [Open Graph Benchmark (OGB)](https://github.com/snap-stanford/ogb)\n - Network datasets\n- [Open Neuro](https://openneuro.org/)\n- [Open fMRI](https://openfmri.org/dataset/)", "_____no_output_____" ] ], [ [ "# import basic libraries\nimport numpy as np\nimport pandas as pd\nfrom matplotlib import pyplot as plt", "_____no_output_____" ] ], [ [ "We start with scikit-learn's datasets for testing out ML algorithms. Visit [here](https://scikit-learn.org/stable/modules/classes.html?highlight=datasets#module-sklearn.datasets) for an overview of the datasets.", "_____no_output_____" ] ], [ [ "from sklearn.datasets import fetch_olivetti_faces, fetch_20newsgroups, load_breast_cancer, load_diabetes, load_digits, load_iris", "C:\\Users\\Peder\\Anaconda3\\envs\\cbm101\\lib\\importlib\\_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject\n return f(*args, **kwds)\n" ] ], [ [ "Load the MNIST dataset (images of hand written digits)", "_____no_output_____" ] ], [ [ "X,y = load_digits(return_X_y=True)", "_____no_output_____" ], [ "y.shape", "_____no_output_____" ], [ "X.shape #1797 images, 64 pixels per image", "_____no_output_____" ] ], [ [ "#### exercise 1. Make a function `plot` taking an argument (k) to visualize the k'th sample. \nIt is currently flattened, you will need to reshape it. Use `plt.imshow` for plotting. 
", "_____no_output_____" ] ], [ [ "# %load solutions/ex2_1.py\ndef plot(k):\n plt.imshow(X[k].reshape(8,8), cmap='gray')\n plt.title(f\"Number = {y[k]}\")\n plt.show()", "_____no_output_____" ], [ "plot(15); plot(450)", "_____no_output_____" ], [ "faces = fetch_olivetti_faces()", "_____no_output_____" ] ], [ [ "#### Exercise 2. Inspect the dataset. How many classes are there? How many samples per class? Also, plot some examples. What do the classes represent? ", "_____no_output_____" ] ], [ [ "# %load solutions/ex2_2.py\n\n# example solution. \n# You are not expected to make a nice plotting function,\n# you can simply call plt.imshow a number of times and observe\n\nprint(faces.DESCR) # this shows there are 40 classes, 10 samples per class\nprint(faces.target) #the targets i.e. classes\nprint(np.unique(faces.target).shape) # another way to see n_classes\n\nX = faces.images\ny = faces.target\n\nfig = plt.figure(figsize=(16,5))\nidxs = [0,1,2, 11,12,13, 40,41]\nfor i,k in enumerate(idxs):\n ax=fig.add_subplot(2,4,i+1)\n ax.imshow(X[k])\n ax.set_title(f\"target={y[k]}\")\n \n# looking at a few plots shows that each target is a single person.", ".. _olivetti_faces_dataset:\n\nThe Olivetti faces dataset\n--------------------------\n\n`This dataset contains a set of face images`_ taken between April 1992 and \nApril 1994 at AT&T Laboratories Cambridge. The\n:func:`sklearn.datasets.fetch_olivetti_faces` function is the data\nfetching / caching function that downloads the data\narchive from AT&T.\n\n.. _This dataset contains a set of face images: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html\n\nAs described on the original website:\n\n There are ten different images of each of 40 distinct subjects. For some\n subjects, the images were taken at different times, varying the lighting,\n facial expressions (open / closed eyes, smiling / not smiling) and facial\n details (glasses / no glasses). 
All the images were taken against a dark\n homogeneous background with the subjects in an upright, frontal position \n (with tolerance for some side movement).\n\n**Data Set Characteristics:**\n\n ================= =====================\n Classes 40\n Samples total 400\n Dimensionality 4096\n Features real, between 0 and 1\n ================= =====================\n\nThe image is quantized to 256 grey levels and stored as unsigned 8-bit \nintegers; the loader will convert these to floating point values on the \ninterval [0, 1], which are easier to work with for many algorithms.\n\nThe \"target\" for this database is an integer from 0 to 39 indicating the\nidentity of the person pictured; however, with only 10 examples per class, this\nrelatively small dataset is more interesting from an unsupervised or\nsemi-supervised perspective.\n\nThe original dataset consisted of 92 x 112, while the version available here\nconsists of 64x64 images.\n\nWhen using these images, please give credit to AT&T Laboratories Cambridge.\n\n[ 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2\n 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 4 4 4 4 4 4 4 4\n 4 4 5 5 5 5 5 5 5 5 5 5 6 6 6 6 6 6 6 6 6 6 7 7\n 7 7 7 7 7 7 7 7 8 8 8 8 8 8 8 8 8 8 9 9 9 9 9 9\n 9 9 9 9 10 10 10 10 10 10 10 10 10 10 11 11 11 11 11 11 11 11 11 11\n 12 12 12 12 12 12 12 12 12 12 13 13 13 13 13 13 13 13 13 13 14 14 14 14\n 14 14 14 14 14 14 15 15 15 15 15 15 15 15 15 15 16 16 16 16 16 16 16 16\n 16 16 17 17 17 17 17 17 17 17 17 17 18 18 18 18 18 18 18 18 18 18 19 19\n 19 19 19 19 19 19 19 19 20 20 20 20 20 20 20 20 20 20 21 21 21 21 21 21\n 21 21 21 21 22 22 22 22 22 22 22 22 22 22 23 23 23 23 23 23 23 23 23 23\n 24 24 24 24 24 24 24 24 24 24 25 25 25 25 25 25 25 25 25 25 26 26 26 26\n 26 26 26 26 26 26 27 27 27 27 27 27 27 27 27 27 28 28 28 28 28 28 28 28\n 28 28 29 29 29 29 29 29 29 29 29 29 30 30 30 30 30 30 30 30 30 30 31 31\n 31 31 31 31 31 31 31 31 32 32 32 32 32 32 32 32 32 32 33 33 33 33 33 33\n 33 33 33 33 34 34 34 34 34 34 34 34 34 34 35 35 35 35 35 35 35 35 35 35\n 36 36 36 36 36 36 36 36 36 36 37 37 37 37 37 37 37 37 37 37 38 38 38 38\n 38 38 38 38 38 38 39 39 39 39 39 39 39 39 39 39]\n(40,)\n" ] ], [ [ "Once you have made yourself familiar with the dataset you can do some data exploration with unsupervised methods, like below. The next few lines of code are simply for illustration, don't worry about the code (we will cover unsupervised methods in submodule F).", "_____no_output_____" ] ], [ [ "from sklearn.decomposition import randomized_svd", "_____no_output_____" ], [ "X = faces.data", "_____no_output_____" ], [ "n_dim = 3\nu, s, v = randomized_svd(X, n_dim)", "_____no_output_____" ] ], [ [ "Now we have factorized the images into their constituent parts. The code below displays the various components isolated one by one.", "_____no_output_____" ] ], [ [ "def show_ims(ims):\n fig = plt.figure(figsize=(16,10))\n idxs = [0,1,2, 11,12,13, 40,41,42, 101,101,103]\n for i,k in enumerate(idxs):\n ax=fig.add_subplot(3,4,i+1)\n ax.imshow(ims[k])\n ax.set_title(f\"target={y[k]}\")", "_____no_output_____" ], [ "for i in range(n_dim):\n my_s = np.zeros(s.shape[0])\n my_s[i] = s[i]\n recon = [email protected](my_s)@v\n recon = recon.reshape(400,64,64)\n show_ims(recon)", "_____no_output_____" ] ], [ [ "Are you able to see what the components represent? 
It at least looks like the second component signifies the lightning (the light direction), the third highlights eyebrows and facial chin shape.", "_____no_output_____" ] ], [ [ "from sklearn.manifold import TSNE", "_____no_output_____" ], [ "tsne = TSNE(init='pca', random_state=0)\ntrans = tsne.fit_transform(X)", "_____no_output_____" ], [ "m = 8*10 # choose 4 people\n\nplt.figure(figsize=(16,10))\nxs, ys = trans[:m,0], trans[:m,1]\nplt.scatter(xs, ys, c=y[:m], cmap='rainbow')\n\nfor i,v in enumerate(zip(xs,ys, y[:m])):\n xx,yy,s = v \n #plt.text(xx,yy,s) #class\n plt.text(xx,yy,i) #index", "_____no_output_____" ] ], [ [ "Many people seem to have multiple subclusters. What is the difference between those clusters? (e.g. 68,62,65 versus the other 60's)", "_____no_output_____" ] ], [ [ "ims = faces.images\n\nidxs = [68,62,65,66,60,64,63]\n#idxs = [9,4,1, 5,3]\nfor k in idxs:\n plt.imshow(ims[k], cmap='gray')\n plt.show()", "_____no_output_____" ], [ "def show(im):\n return plt.imshow(im, cmap='gray')", "_____no_output_____" ], [ "import pandas as pd\ndf= pd.read_csv('data/archive/covid_impact_on_airport_traffic.csv')", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ], [ "df.describe()", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.Country.unique()", "_____no_output_____" ], [ "df.ISO_3166_2.unique()", "_____no_output_____" ], [ "df.AggregationMethod.unique()", "_____no_output_____" ] ], [ [ "Here we will look at [OpenML](https://www.openml.org/) - a repository of open datasets free to explore data and test methods.\n\n### Fetching an OpenML dataset\n\nWe need to pass in an ID to access, as follows:", "_____no_output_____" ] ], [ [ "from sklearn.datasets import fetch_openml", "_____no_output_____" ] ], [ [ "OpenML contains all sorts of datatypes. By browsing the website we found a electroencephalography (EEG) dataset to explore: ", "_____no_output_____" ] ], [ [ "data_id = 1471 #this was found by browsing OpenML\ndataset = fetch_openml(data_id=data_id, as_frame=True)", "_____no_output_____" ], [ "dir(dataset)", "_____no_output_____" ], [ "dataset.url", "_____no_output_____" ], [ "type(dataset)", "_____no_output_____" ], [ "print(dataset.DESCR)", "**Author**: Oliver Roesler \n**Source**: [UCI](https://archive.ics.uci.edu/ml/datasets/EEG+Eye+State), Baden-Wuerttemberg, Cooperative State University (DHBW), Stuttgart, Germany \n**Please cite**: [UCI](https://archive.ics.uci.edu/ml/citation_policy.html) \n\nAll data is from one continuous EEG measurement with the Emotiv EEG Neuroheadset. The duration of the measurement was 117 seconds. The eye state was detected via a camera during the EEG measurement and added later manually to the file after analyzing the video frames. '1' indicates the eye-closed and '0' the eye-open state. 
All values are in chronological order with the first measured value at the top of the data.\n\nThe features correspond to 14 EEG measurements from the headset, originally labeled AF3, F7, F3, FC5, T7, P, O1, O2, P8, T8, FC6, F4, F8, AF4, in that order.\n\nDownloaded from openml.org.\n" ], [ "original_names = ['AF3',\n 'F7',\n 'F3',\n 'FC5',\n 'T7',\n 'P',\n 'O1',\n 'O2',\n 'P8',\n 'T8',\n 'FC6',\n 'F4',\n 'F8',\n 'AF4']", "_____no_output_____" ], [ "dataset.feature_names", "_____no_output_____" ], [ "df = dataset.frame", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.shape[0] / 117\n# 128 frames per second", "_____no_output_____" ], [ "df = dataset.frame\ny = df.Class\n#df.drop(columns='Class', inplace=True)", "_____no_output_____" ], [ "df.dtypes", "_____no_output_____" ], [ "#def summary(s):\n# print(s.max(), s.min(), s.mean(), s.std())\n# print()\n# \n#for col in df.columns[:-1]:\n# column = df.loc[:,col]\n# summary(column)", "_____no_output_____" ], [ "df.plot()", "_____no_output_____" ] ], [ [ "From the plot we can quickly identify a bunch of huge outliers, making the plot look completely uselss. We assume these are artifacts, and remove them.", "_____no_output_____" ] ], [ [ "df2 = df.iloc[:,:-1].clip_upper(6000)\ndf2.plot()", "C:\\Users\\Peder\\Anaconda3\\envs\\cbm101\\lib\\site-packages\\ipykernel_launcher.py:1: FutureWarning: clip_upper(threshold) is deprecated, use clip(upper=threshold) instead\n \"\"\"Entry point for launching an IPython kernel.\n" ] ], [ [ "Now we see better what is going on. Lets just remove the frames corresponding to those outliers", "_____no_output_____" ] ], [ [ "frames = np.nonzero(np.any(df.iloc[:,:-1].values>5000, axis=1))[0]\nframes", "_____no_output_____" ], [ "df.drop(index=frames, inplace=True)", "_____no_output_____" ], [ "df.plot(figsize=(16,8))\nplt.legend(labels=original_names)", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ] ], [ [ "### Do some modelling of the data", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LogisticRegression", "_____no_output_____" ], [ "lasso = LogisticRegression(penalty='l2')", "_____no_output_____" ], [ "X = df.values[:,:-1]\ny = df.Class\ny = y.astype(np.int) - 1 # map to 0,1", "_____no_output_____" ], [ "print(X.shape)\nprint(y.shape)", "(14976, 14)\n(14976,)\n" ], [ "lasso.fit(X,y)", "C:\\Users\\Peder\\Anaconda3\\envs\\cbm101\\lib\\site-packages\\sklearn\\linear_model\\_logistic.py:939: ConvergenceWarning: lbfgs failed to converge (status=1):\nSTOP: TOTAL NO. of ITERATIONS REACHED LIMIT.\n\nIncrease the number of iterations (max_iter) or scale the data as shown in:\n https://scikit-learn.org/stable/modules/preprocessing.html.\nPlease also refer to the documentation for alternative solver options:\n https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)\n" ], [ "comp = (lasso.predict(X) == y).values", "_____no_output_____" ], [ "np.sum(comp.astype(np.int))/y.shape[0] # shitty accuracy", "_____no_output_____" ], [ "lasso.coef_[0].shape", "_____no_output_____" ], [ "names = dataset.feature_names", "_____no_output_____" ], [ "original_names", "_____no_output_____" ], [ "coef = lasso.coef_[0]\nplt.barh(range(coef.shape[0]), coef)\nplt.yticks(ticks=range(14),labels=original_names)\n\nplt.show()", "_____no_output_____" ] ], [ [ "Interpreting the coeficients: we naturally tend to read the magnitude of the coefficients as feature importance. 
That is a fair interpretation, but currently we did not scale our features to a comparable range prior to fitting the model, so we cannot draw that conclusion.", "_____no_output_____" ], [ "### Extra exercise. Go to [OpenML](https://openml.org) and use the search function (or just look around) to find any dataset that interests you. Load it using the above methodology, and try to do anything you can to understand the datatype, visualize it etc.", "_____no_output_____" ] ], [ [ "### YOUR CODE HERE", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ] ]
d032c73eb9480622db0c4f5e31a1bd7f342028d1
17,515
ipynb
Jupyter Notebook
getting_started/2.View_Campaign_And_Interactions.ipynb
lmorri/personalize-movielens-20m
a2456c784eade8e8678b6be5ada0cb93bd4b72b8
[ "MIT-0" ]
null
null
null
getting_started/2.View_Campaign_And_Interactions.ipynb
lmorri/personalize-movielens-20m
a2456c784eade8e8678b6be5ada0cb93bd4b72b8
[ "MIT-0" ]
null
null
null
getting_started/2.View_Campaign_And_Interactions.ipynb
lmorri/personalize-movielens-20m
a2456c784eade8e8678b6be5ada0cb93bd4b72b8
[ "MIT-0" ]
null
null
null
32.078755
507
0.495461
[ [ [ "# View Campaign and Interactions\n\nIn the first notebook `Personalize_BuildCampaign.ipynb` you successfully built and deployed a recommendation model using deep learning with Amazon Personalize.\n\nThis notebook will expand on that and will walk you through adding the ability to react to real time behavior of users. If their intent changes while browsing a movie, you will see revised recommendations based on that behavior.\n\nIt will also showcase demo code for simulating user behavior selecting movies before the recommendations are returned.", "_____no_output_____" ], [ "Below we start with just importing libraries that we need to interact with Personalize", "_____no_output_____" ] ], [ [ "# Imports\nimport boto3\nimport json\nimport numpy as np\nimport pandas as pd\nimport time\nimport uuid", "_____no_output_____" ] ], [ [ "Below you will paste in the campaign ARN that you used in your previous notebook. Also pick a random user ID from 50 - 300. \n\nLastly you will also need to find your Dataset Group ARN from the previous notebook.", "_____no_output_____" ] ], [ [ "# Setup and Config\n# Recommendations from Event data\npersonalize = boto3.client('personalize')\npersonalize_runtime = boto3.client('personalize-runtime')\nHRNN_Campaign_ARN = \"arn:aws:personalize:us-east-1:930444659029:campaign/DEMO-campaign\"\n\n# Define User \nUSER_ID = \"676\"\n\n# Dataset Group Arn:\ndatasetGroupArn = \"arn:aws:personalize:us-east-1:930444659029:dataset-group/DEMO-dataset-group\"\n\n# Establish a connection to Personalize's Event Streaming\npersonalize_events = boto3.client(service_name='personalize-events')", "_____no_output_____" ] ], [ [ "## Creating an Event Tracker\n\nBefore your recommendation system can respond to real time events you will need an event tracker, the code below will generate one and can be used going forward with this lab. Feel free to name it something more clever.", "_____no_output_____" ] ], [ [ "response = personalize.create_event_tracker(\n name='MovieClickTracker',\n datasetGroupArn=datasetGroupArn\n)\nprint(response['eventTrackerArn'])\nprint(response['trackingId'])\nTRACKING_ID = response['trackingId']", "arn:aws:personalize:us-east-1:930444659029:event-tracker/bbe80586\nb8a5944c-8095-40ff-a915-2a6af53b7f55\n" ] ], [ [ "## Configuring Source Data\n\nAbove you'll see your tracking ID and this has been assigned to a variable so no further action is needed by you. 
The lines below are going to setup the data used for recommendations so you can render the list of movies later.", "_____no_output_____" ] ], [ [ "data = pd.read_csv('./ml-20m/ratings.csv', sep=',', dtype={'userid': \"int64\", 'movieid': \"int64\", 'rating': \"float64\", 'timestamp': \"int64\"})\npd.set_option('display.max_rows', 5)\ndata.rename(columns = {'userId':'USER_ID','movieId':'ITEM_ID','rating':'RATING','timestamp':'TIMESTAMP'}, inplace = True)\ndata = data[data['RATING'] > 3] # keep only movies rated 3\ndata = data[['USER_ID', 'ITEM_ID', 'TIMESTAMP']] # select columns that match the columns in the schema below\ndata", "_____no_output_____" ], [ "items = pd.read_csv('./ml-20m/movies.csv', sep=',', usecols=[0,1], header=0)\nitems.columns = ['ITEM_ID', 'TITLE']\n\nuser_id, item_id, _ = data.sample().values[0]\nitem_title = items.loc[items['ITEM_ID'] == item_id].values[0][-1]\nprint(\"USER: {}\".format(user_id))\nprint(\"ITEM: {}\".format(item_title))\n\nitems", "USER: 40094\nITEM: Hotel Rwanda (2004)\n" ] ], [ [ "## Getting Recommendations\n\nJust like in the previous notebook it is a great idea to get a list of recommendatiosn first and then see how additional behavior by a user alters the recommendations.", "_____no_output_____" ] ], [ [ "# Get Recommendations as is\nget_recommendations_response = personalize_runtime.get_recommendations(\n campaignArn = HRNN_Campaign_ARN,\n userId = USER_ID,\n)\n\nitem_list = get_recommendations_response['itemList']\ntitle_list = [items.loc[items['ITEM_ID'] == np.int(item['itemId'])].values[0][-1] for item in item_list]\n\nprint(\"Recommendations: {}\".format(json.dumps(title_list, indent=2)))\nprint(item_list)", "Recommendations: [\n \"Signs (2002)\",\n \"Panic Room (2002)\",\n \"Vanilla Sky (2001)\",\n \"American Pie 2 (2001)\",\n \"Blade II (2002)\",\n \"Bourne Identity, The (2002)\",\n \"Star Wars: Episode II - Attack of the Clones (2002)\",\n \"Memento (2000)\",\n \"Fast and the Furious, The (2001)\",\n \"Unbreakable (2000)\",\n \"Snatch (2000)\",\n \"Austin Powers in Goldmember (2002)\",\n \"Resident Evil (2002)\",\n \"xXx (2002)\",\n \"Sum of All Fears, The (2002)\",\n \"Others, The (2001)\",\n \"American Beauty (1999)\",\n \"Pulp Fiction (1994)\",\n \"Spider-Man (2002)\",\n \"Minority Report (2002)\",\n \"Rock, The (1996)\",\n \"Ring, The (2002)\",\n \"Black Hawk Down (2001)\",\n \"Ocean's Eleven (2001)\",\n \"Schindler's List (1993)\"\n]\n[{'itemId': '5502'}, {'itemId': '5266'}, {'itemId': '4975'}, {'itemId': '4718'}, {'itemId': '5254'}, {'itemId': '5418'}, {'itemId': '5378'}, {'itemId': '4226'}, {'itemId': '4369'}, {'itemId': '3994'}, {'itemId': '4011'}, {'itemId': '5481'}, {'itemId': '5219'}, {'itemId': '5507'}, {'itemId': '5400'}, {'itemId': '4720'}, {'itemId': '2858'}, {'itemId': '296'}, {'itemId': '5349'}, {'itemId': '5445'}, {'itemId': '733'}, {'itemId': '5679'}, {'itemId': '5010'}, {'itemId': '4963'}, {'itemId': '527'}]\n" ] ], [ [ "## Simulating User Behavior\n\nThe lines below provide a code sample that simulates a user interacting with a particular item, you will then get recommendations that differ from those when you started.", "_____no_output_____" ] ], [ [ "session_dict = {}", "_____no_output_____" ], [ "def send_movie_click(USER_ID, ITEM_ID):\n \"\"\"\n Simulates a click as an envent\n to send an event to Amazon Personalize's Event Tracker\n \"\"\"\n # Configure Session\n try:\n session_ID = session_dict[USER_ID]\n except:\n session_dict[USER_ID] = str(uuid.uuid1())\n session_ID = session_dict[USER_ID]\n \n # Configure 
Properties:\n event = {\n \"itemId\": str(ITEM_ID),\n }\n event_json = json.dumps(event)\n \n # Make Call\n personalize_events.put_events(\n trackingId = TRACKING_ID,\n userId= USER_ID,\n sessionId = session_ID,\n eventList = [{\n 'sentAt': int(time.time()),\n 'eventType': 'EVENT_TYPE',\n 'properties': event_json\n }]\n)", "_____no_output_____" ] ], [ [ "Immediately below this line will update the tracker as if the user has clicked a particular title.", "_____no_output_____" ] ], [ [ "# Pick a movie, we will use ID 1653 or Gattica\nsend_movie_click(USER_ID=USER_ID, ITEM_ID=1653)", "_____no_output_____" ] ], [ [ "After executing this block you will see the alterations in the recommendations now that you have event tracking enabled and that you have sent the events to the service.", "_____no_output_____" ] ], [ [ "get_recommendations_response = personalize_runtime.get_recommendations(\n campaignArn = HRNN_Campaign_ARN,\n userId = str(USER_ID),\n)\n\nitem_list = get_recommendations_response['itemList']\ntitle_list = [items.loc[items['ITEM_ID'] == np.int(item['itemId'])].values[0][-1] for item in item_list]\n\nprint(\"Recommendations: {}\".format(json.dumps(title_list, indent=2)))\nprint(item_list)", "Recommendations: [\n \"Signs (2002)\",\n \"Fifth Element, The (1997)\",\n \"Gattaca (1997)\",\n \"Unbreakable (2000)\",\n \"Face/Off (1997)\",\n \"Predator (1987)\",\n \"Dark City (1998)\",\n \"Star Wars: Episode II - Attack of the Clones (2002)\",\n \"Cube (1997)\",\n \"Spider-Man (2002)\",\n \"Game, The (1997)\",\n \"Minority Report (2002)\",\n \"X-Files: Fight the Future, The (1998)\",\n \"Twelve Monkeys (a.k.a. 12 Monkeys) (1995)\",\n \"Rock, The (1996)\",\n \"Vanilla Sky (2001)\",\n \"Starship Troopers (1997)\",\n \"Bourne Identity, The (2002)\",\n \"Sneakers (1992)\",\n \"American Beauty (1999)\",\n \"Austin Powers in Goldmember (2002)\",\n \"Memento (2000)\",\n \"Pulp Fiction (1994)\",\n \"X-Men (2000)\",\n \"Star Wars: Episode I - The Phantom Menace (1999)\"\n]\n[{'itemId': '5502'}, {'itemId': '1527'}, {'itemId': '1653'}, {'itemId': '3994'}, {'itemId': '1573'}, {'itemId': '3527'}, {'itemId': '1748'}, {'itemId': '5378'}, {'itemId': '2232'}, {'itemId': '5349'}, {'itemId': '1625'}, {'itemId': '5445'}, {'itemId': '1909'}, {'itemId': '32'}, {'itemId': '733'}, {'itemId': '4975'}, {'itemId': '1676'}, {'itemId': '5418'}, {'itemId': '1396'}, {'itemId': '2858'}, {'itemId': '5481'}, {'itemId': '4226'}, {'itemId': '296'}, {'itemId': '3793'}, {'itemId': '2628'}]\n" ] ], [ [ "## Conclusion\n\nYou can see now that recommendations are altered by changing the movie that a user interacts with, this system can be modified to any application where users are interacting with a collection of items. These tools are available at any time to pull down and start exploring what is possible with the data you have.\n\nFinally when you are ready to remove the items from your account, open the `Cleanup.ipynb` notebook and execute the steps there.\n", "_____no_output_____" ] ], [ [ "eventTrackerArn = response['eventTrackerArn']\nprint(\"Tracker ARN is: \" + str(eventTrackerArn))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d032d3c4f949ff9c125a0b0af20b0da65a05a35c
8,150
ipynb
Jupyter Notebook
notebooks/music/music_mel.ipynb
andef4/deeplearning
62af069fa17c57f5b05c858d2114e5af8ade2383
[ "MIT" ]
null
null
null
notebooks/music/music_mel.ipynb
andef4/deeplearning
62af069fa17c57f5b05c858d2114e5af8ade2383
[ "MIT" ]
7
2020-03-24T16:33:55.000Z
2022-03-11T23:39:10.000Z
notebooks/music/music_mel.ipynb
andef4/deeplearning
62af069fa17c57f5b05c858d2114e5af8ade2383
[ "MIT" ]
1
2021-09-22T05:45:29.000Z
2021-09-22T05:45:29.000Z
33.677686
111
0.515706
[ [ [ "import os\nimport math\nimport torch\nfrom torch.autograd import Variable\nfrom torch.optim import Adam\nfrom torch import nn\nimport torch.nn.functional as F\nfrom torchvision import transforms\nfrom torch.utils.data import DataLoader, random_split, Dataset\nfrom scipy.io import wavfile\nimport scipy.signal\nimport numpy as np\nimport audio_transforms\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "NUM_CLASSES = 3\nBATCH_SIZE = 1\nSONG_LENGTH_SECONDS = 10", "_____no_output_____" ], [ "class MusicDataset(Dataset):\n def __init__(self, directory, genres, downsample=None, noise=False):\n self.directory = directory\n self.files = []\n self.downsample = downsample\n self.noise = noise\n for label, genre in enumerate(genres):\n genre_path = os.path.join(directory, genre)\n self.files.extend([(os.path.join(genre_path, f), label) for f in os.listdir(genre_path)])\n\n def __getitem__(self, index):\n song, label = self.files[index]\n rate, data = wavfile.read(f'{self.directory}/{song}')\n \n data = data[:44100*SONG_LENGTH_SECONDS]\n \n if self.downsample:\n data = scipy.signal.resample(data, self.downsample * SONG_LENGTH_SECONDS)\n\n if self.noise:\n gauss = np.random.normal(0.01, 0.001, (len(data),))\n data = data + gauss\n \n # todo: do we need this?\n #tensor = torch.Tensor(data)# / (2**15)\n \n tensor = torch.Tensor(data) / 1 << 31\n tensor.unsqueeze_(0)\n tensor = audio_transforms.MEL2(44100)(tensor)\n\n return tensor, torch.tensor(label, dtype=torch.long)\n \n def input_size(self):\n return len(self[0][0][0]) * len(self[0][0][0][0])\n \n def __len__(self):\n return len(self.files)\n\n\ndef load_dataset(downsample=None, noise=False):\n d = MusicDataset('.', ['rock', 'electro', 'classic'], downsample=downsample, noise=noise)\n train, validate = random_split(d, [900, 300])\n\n loader = DataLoader(train, batch_size=BATCH_SIZE)\n validation_loader = DataLoader(validate, batch_size=BATCH_SIZE)\n return d.input_size(), loader, validation_loader", "_____no_output_____" ], [ "class Model1Linear(nn.Module):\n def __init__(self, input_size, hidden_size):\n super().__init__()\n self.h1 = nn.Linear(input_size, hidden_size)\n self.h2 = nn.Linear(hidden_size, hidden_size)\n self.h3 = nn.Linear(hidden_size, hidden_size)\n self.h4 = nn.Linear(hidden_size, hidden_size)\n self.h5 = nn.Linear(hidden_size, hidden_size)\n self.h6 = nn.Linear(hidden_size, hidden_size)\n self.h7 = nn.Linear(hidden_size, hidden_size)\n self.h8 = nn.Linear(hidden_size, hidden_size)\n self.h9 = nn.Linear(hidden_size, NUM_CLASSES)\n \n def forward(self, x):\n x = x.data.view(-1, input_size)\n \n x = self.h1(x)\n x = F.relu(x)\n x = F.dropout(x, training=self.training)\n\n x = self.h2(x)\n x = F.relu(x)\n x = F.dropout(x, training=self.training)\n\n x = self.h3(x)\n x = F.relu(x)\n x = F.dropout(x, training=self.training)\n\n x = self.h4(x)\n x = F.relu(x)\n x = F.dropout(x, training=self.training)\n \n x = self.h5(x)\n x = F.relu(x)\n x = F.dropout(x, training=self.training)\n \n x = self.h6(x)\n x = F.relu(x)\n x = F.dropout(x, training=self.training)\n \n x = self.h7(x)\n x = F.relu(x)\n x = F.dropout(x, training=self.training)\n\n x = self.h8(x)\n x = F.relu(x)\n x = F.dropout(x, training=self.training)\n\n x = self.h9(x)\n x = F.softmax(x, dim=1)\n return x", "_____no_output_____" ], [ "from datetime import datetime\n\ndef evalulate(model, validation_loader):\n model.eval()\n loss = 0.0\n for data, labels in validation_loader:\n predictions_per_class = model(data.cuda())\n _, highest_prediction_class = 
predictions_per_class.max(1)\n loss += F.nll_loss(predictions_per_class, labels.cuda())\n return loss/len(validation_loader)\n\ndef learn(model, loader, validation_loader, epochs=30, learning_rate=0.001):\n torch.cuda.empty_cache()\n optimizer = Adam(params=model.parameters(), lr=learning_rate)\n\n f = open(f'{datetime.now().isoformat()}.txt', 'w', buffering=1)\n\n for epoch in range(epochs):\n model.train()\n total_loss = 0.0\n for data, labels in loader:\n predictions_per_class = model(data.cuda())\n highest_prediction, highest_prediction_class = predictions_per_class.max(1)\n\n # how good are we? compare output with the target classes\n loss = F.nll_loss(predictions_per_class, labels.cuda())\n total_loss += loss.item()\n\n model.zero_grad()\n loss.backward()\n optimizer.step()\n \n train_loss = total_loss/len(loader)\n validation_loss = evalulate(model, validation_loader)\n stats = f'Epoch: {epoch}, Train Loss: {train_loss}, Validation Loss: {validation_loss.item()}'\n print(stats)\n f.write(f'{stats}\\n')\n \n return model", "_____no_output_____" ], [ "input_size, loader, validation_loader = load_dataset()\nmodel = Model1Linear(input_size, 500).cuda()\nlearn(model, loader, validation_loader, 10000, learning_rate=0.0001)", "Epoch: 0, Train Loss: -0.32088325670641793, Validation Loss: -0.3267912268638611\nEpoch: 1, Train Loss: -0.33174163550192654, Validation Loss: -0.3266666829586029\nEpoch: 2, Train Loss: -0.33555555555555555, Validation Loss: -0.3266666829586029\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
d032dc2aa87106a8c222feead85a164fc13bd377
37,719
ipynb
Jupyter Notebook
Notebooks/data/12x12_puzzles.ipynb
Lambda-School-Labs/omega2020-ds
c3a1a1f425238f60bb11ffa8b41dfe86a1aca14b
[ "MIT" ]
4
2020-05-13T03:31:24.000Z
2021-08-30T03:58:32.000Z
Notebooks/data/12x12_puzzles.ipynb
Lambda-School-Labs/omega2020-ds
c3a1a1f425238f60bb11ffa8b41dfe86a1aca14b
[ "MIT" ]
8
2019-12-18T15:07:38.000Z
2020-08-27T02:13:34.000Z
Notebooks/data/12x12_puzzles.ipynb
Lambda-School-Labs/omega2020-ds
c3a1a1f425238f60bb11ffa8b41dfe86a1aca14b
[ "MIT" ]
10
2020-01-28T23:14:41.000Z
2021-03-06T02:12:12.000Z
46.45197
6,937
0.451974
[ [ [ "* [Sudoku Scraper](https://github.com/apauliuc/sudoku-scraper)\n* [Scraped Website](https://www.menneske.no/sudoku/3x4/eng/)\n", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom google.colab import files", "_____no_output_____" ], [ "# upload the uncleaned 12x12 data\nuploaded = files.upload()", "_____no_output_____" ], [ "# loading and looking at the data\ndf = pd.read_csv(\"uncleaned_sudokus_12x12.csv\")\nprint(df.shape)\ndf.head()", "(5000, 2)\n" ], [ "# cleaning dataset\nclean_df = df.copy()\nclean_df.head()", "_____no_output_____" ], [ "# replace\n# '0' with '.'\n# '10' with 'A'\n# '11' with 'B'\n# '12' with 'C'\nclean_df['puzzle'] = clean_df['puzzle'].replace(r'0', '.', regex = True)\nclean_df = clean_df.replace(r'10', 'A', regex = True)\nclean_df = clean_df.replace(r'11', 'B', regex = True)\nclean_df = clean_df.replace(r'12', 'C', regex = True)\nclean_df.head()", "_____no_output_____" ], [ "# remove spaces\nclean_df = clean_df.replace(r' ', '', regex = True)\nclean_df.head()", "_____no_output_____" ], [ "# making a 'level', 'gridLength', 'row', and 'col' column\nclean_df['level'] = 'Hard'\nclean_df['gridLength'] = 12\nclean_df['row'] = 3\nclean_df['col'] = 4\nclean_df.head()", "_____no_output_____" ], [ "# rename 'puzzle' to 'sudoku'\nclean_df.rename(columns = {'puzzle': 'sudoku'}, inplace = True)\nclean_df.head()", "_____no_output_____" ], [ "# download the clean csv\nclean_df.to_csv('12x12_puzzles.csv')", "_____no_output_____" ], [ "files.download('12x12_puzzles.csv')", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d032e59f6a50587d6927cc44b3f4e45da71b0767
85,234
ipynb
Jupyter Notebook
IndependenceDay_Hack_LightGBM.ipynb
Niranjankumar-c/IndiaML_Hiring_Hackathon_2019
27f8bfd6aaaf032da1348e1be8ccc2609982d73a
[ "MIT" ]
null
null
null
IndependenceDay_Hack_LightGBM.ipynb
Niranjankumar-c/IndiaML_Hiring_Hackathon_2019
27f8bfd6aaaf032da1348e1be8ccc2609982d73a
[ "MIT" ]
null
null
null
IndependenceDay_Hack_LightGBM.ipynb
Niranjankumar-c/IndiaML_Hiring_Hackathon_2019
27f8bfd6aaaf032da1348e1be8ccc2609982d73a
[ "MIT" ]
2
2019-11-13T13:25:13.000Z
2020-04-11T15:33:06.000Z
77.981702
46,136
0.685724
[ [ [ "## Import the Libraries", "_____no_output_____" ] ], [ [ "import os\nimport warnings \nwarnings.filterwarnings('ignore')\n\n# importing packages\nimport pandas as pd\nimport re\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# sklearn packages\nfrom sklearn import metrics\nfrom sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score, StratifiedKFold, RandomizedSearchCV\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier\nfrom lightgbm import LGBMClassifier\nfrom sklearn.metrics import roc_auc_score, roc_curve\nfrom sklearn.model_selection import KFold, StratifiedKFold\nimport gc\n\n\nfrom sklearn.model_selection import StratifiedKFold", "_____no_output_____" ], [ "plt.style.use(\"seaborn\")\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10,8)", "_____no_output_____" ] ], [ [ "# Loading the data", "_____no_output_____" ] ], [ [ "#load the train and test data\n\ntotaldf_onehot = pd.read_csv(\"totaldata_onehot.csv\")", "_____no_output_____" ], [ "#load the train data\n\ntotaldf_onehot.head()", "_____no_output_____" ], [ "#split the data into train and test\n\ntraindf_cleaned = totaldf_onehot[totaldf_onehot[\"source\"] == \"train\"].drop(\"source\", axis = 1)\ntestdf_cleaned = totaldf_onehot[totaldf_onehot[\"source\"] == \"test\"].drop([\"source\", \"m13\"], axis = 1)", "_____no_output_____" ], [ "traindf_cleaned.head()", "_____no_output_____" ], [ "testdf_cleaned.head()", "_____no_output_____" ], [ "submission_df = pd.read_csv(\"data/sample_submission.csv\")", "_____no_output_____" ], [ "submission_df.head()", "_____no_output_____" ], [ "def kfold_lightgbm(train_df, test_df, submission_df,num_folds=3, stratified = True):\n dt_preds = {}\n print(\"Starting LightGBM. 
Train shape: {}, test shape: {}\".format(train_df.shape, test_df.shape))\n # Cross validation model\n if stratified:\n folds = StratifiedKFold(n_splits= num_folds, shuffle=True, random_state=1001)\n else:\n folds = KFold(n_splits= num_folds, shuffle=True, random_state=1001)\n # Create arrays and dataframes to store results\n oof_preds = np.zeros(train_df.shape[0])\n sub_preds = np.zeros(test_df.shape[0])\n feature_importance_df = pd.DataFrame()\n feats = [f for f in train_df.columns if f not in [\"m13\"]]\n \n print(feats)\n \n for n_fold, (train_idx, valid_idx) in enumerate(folds.split(train_df, train_df['m13'])):\n train_x, train_y = train_df[feats].iloc[train_idx], train_df['m13'].iloc[train_idx]\n valid_x, valid_y = train_df[feats].iloc[valid_idx], train_df['m13'].iloc[valid_idx]\n\n # LightGBM parameters found by Bayesian optimization\n clf = LGBMClassifier(\n nthread=4,\n n_estimators=10000,\n learning_rate=0.02,\n num_leaves=34,\n colsample_bytree=0.9497036,\n subsample=0.8715623,\n max_depth=8,\n reg_alpha=0.041545473,\n reg_lambda=0.0735294,\n min_split_gain=0.0222415,\n min_child_weight=39.3259775,\n silent=-1,\n verbose=-1, )\n\n clf.fit(train_x, train_y, eval_set=[(train_x, train_y), (valid_x, valid_y)], \n eval_metric= 'f1', verbose= 200, early_stopping_rounds= 200)\n\n oof_preds[valid_idx] = clf.predict_proba(valid_x, num_iteration=clf.best_iteration_)[:, 1]\n sub_preds += clf.predict_proba(test_df[feats], num_iteration=clf.best_iteration_)[:, 1] / folds.n_splits\n dt_preds[n_fold + 1] = clf.predict(valid_x)\n \n fold_importance_df = pd.DataFrame()\n fold_importance_df[\"feature\"] = feats\n fold_importance_df[\"importance\"] = clf.feature_importances_\n fold_importance_df[\"fold\"] = n_fold + 1\n feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)\n print('Fold %2d AUC : %.6f' % (n_fold + 1, roc_auc_score(valid_y, oof_preds[valid_idx])))\n print('Fold %2d F1 : %.6f' % (n_fold + 1, metrics.f1_score(valid_y, dt_preds[n_fold + 1])))\n \n del clf, train_x, train_y, valid_x, valid_y\n gc.collect()\n\n print('Full AUC score %.6f' % roc_auc_score(train_df['m13'], oof_preds))\n # Write submission file and plot feature importance\n \n display_importances(feature_importance_df)\n return feature_importance_df, dt_preds", "_____no_output_____" ], [ "# Display/plot feature importance\ndef display_importances(feature_importance_df_):\n cols = feature_importance_df_[[\"feature\", \"importance\"]].groupby(\"feature\").mean().sort_values(by=\"importance\", ascending=False)[:40].index\n best_features = feature_importance_df_.loc[feature_importance_df_.feature.isin(cols)]\n plt.figure(figsize=(8, 10))\n sns.barplot(x=\"importance\", y=\"feature\", data=best_features.sort_values(by=\"importance\", ascending=False))\n plt.title('LightGBM Features (avg over folds)')\n plt.tight_layout()\n plt.show()\n #plt.savefig('lgbm_importances01.png')\n", "_____no_output_____" ], [ "feature_df, preds = kfold_lightgbm(traindf_cleaned, testdf_cleaned, submission_df, 3, True)", "Starting LightGBM. 
Train shape: (116058, 53), test shape: (35866, 52)\n['interest_rate', 'unpaid_principal_bal', 'loan_term', 'loan_to_value', 'number_of_borrowers', 'debt_to_income_ratio', 'borrower_credit_score', 'insurance_percent', 'co-borrower_credit_score', 'insurance_type', 'm1', 'm2', 'm3', 'm4', 'm5', 'm6', 'm7', 'm8', 'm9', 'm10', 'm11', 'm12', 'unpaid_balance_day', 'origination_month', 'origination_year', 'orignation_weekday', 'first_payment_month', 'first_payment_year', 'first_payment_weekday', 'gap_days', 'loan_A23', 'loan_B12', 'loan_C86', 'Anderson-Taylor', 'Browning-Hart', 'Chapman-Mcmahon', 'ColeBrooksandVincent', 'Edwards-Hoffman', 'MartinezDuffyandBird', 'MillerMcclureandAllen', 'NicholsonGroup', 'OTHER', 'Richards-Walters', 'RichardsonLtd', 'RomeroWoodsandJohnson', 'Sanchez-Robinson', 'SanchezHaysandWilkerson', 'SuarezInc', 'SwansonNewtonandMiller', 'TaylorHuntandRodriguez', 'Thornton-Davis', 'TurnerBaldwinandRhodes']\nTraining until validation scores don't improve for 200 rounds.\n[200]\ttraining's binary_logloss: 0.02855\tvalid_1's binary_logloss: 0.0290063\n[400]\ttraining's binary_logloss: 0.0279459\tvalid_1's binary_logloss: 0.0290657\nEarly stopping, best iteration is:\n[218]\ttraining's binary_logloss: 0.0284832\tvalid_1's binary_logloss: 0.029001\nFold 1 AUC : 0.825325\nFold 1 F1 : 0.000000\nTraining until validation scores don't improve for 200 rounds.\n[200]\ttraining's binary_logloss: 0.0282518\tvalid_1's binary_logloss: 0.0295906\n[400]\ttraining's binary_logloss: 0.0276037\tvalid_1's binary_logloss: 0.0296624\nEarly stopping, best iteration is:\n[219]\ttraining's binary_logloss: 0.0281839\tvalid_1's binary_logloss: 0.0295751\nFold 2 AUC : 0.824576\nFold 2 F1 : 0.000000\nTraining until validation scores don't improve for 200 rounds.\n[200]\ttraining's binary_logloss: 0.0280196\tvalid_1's binary_logloss: 0.0299522\nEarly stopping, best iteration is:\n[150]\ttraining's binary_logloss: 0.0282728\tvalid_1's binary_logloss: 0.0299202\nFold 3 AUC : 0.807008\nFold 3 F1 : 0.000000\nFull AUC score 0.818719\n" ], [ "pd.Series(preds).map( lambda x: 1 if x >= 0.2 else 0 ).value_counts()", "_____no_output_____" ], [ "preds.value_counts()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d032ff364c340ec5f4b56199704e385a4e7a8e5a
217,484
ipynb
Jupyter Notebook
lesson05_1.ipynb
gusdyd98/pyvisual_study
4bcc8732ee6e90851d15626e8db2886e9890e50b
[ "MIT" ]
null
null
null
lesson05_1.ipynb
gusdyd98/pyvisual_study
4bcc8732ee6e90851d15626e8db2886e9890e50b
[ "MIT" ]
null
null
null
lesson05_1.ipynb
gusdyd98/pyvisual_study
4bcc8732ee6e90851d15626e8db2886e9890e50b
[ "MIT" ]
null
null
null
597.483516
69,548
0.941003
[ [ [ "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\n\nrc={\n 'axes.unicode_minus':False,\n 'font.family':'Malgun Gothic',\n}\nsns.set_style(\"whitegrid\", rc) # tick 스타일 지정\nsns.set_palette('bright') # 색상 팔렛트 지정\n", "_____no_output_____" ], [ "import pandas as pd", "_____no_output_____" ], [ "df=pd.read_excel('data/sample-melon.xlsx', index_col='곡일련번호')\nprint(df.shape)\ndf.head()", "(100, 7)\n" ], [ "#df['가수']\nplt.figure(figsize=(15,5))\nsns.countplot(data=df, x='가수')\nplt.xticks(rotation=90)", "_____no_output_____" ], [ "df['가수'].value_counts().sort_values(ascending=False)[:10]", "_____no_output_____" ], [ "df.groupby('가수').size().sort_values(ascending=False)[:10]", "_____no_output_____" ], [ "팔렛트=sns.color_palette('husl')\nseries=df.groupby('가수').size()\nseries.plot(kind='bar', figsize=(15,5), color=팔렛트)", "_____no_output_____" ], [ "with sns.color_palette('husl') as 팔렛트:\n series=df.groupby('가수').size()\n series.plot(kind='bar', figsize=(15,5), color=팔렛트)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d033053cd53d5413641ab0d8a41f4d3c78b622cd
2,352
ipynb
Jupyter Notebook
20. Valid Parentheses.ipynb
oohyeah0331/UVa
58ddbddc6780507e3260cf99d4ddf1d98e5bbe2e
[ "MIT" ]
null
null
null
20. Valid Parentheses.ipynb
oohyeah0331/UVa
58ddbddc6780507e3260cf99d4ddf1d98e5bbe2e
[ "MIT" ]
null
null
null
20. Valid Parentheses.ipynb
oohyeah0331/UVa
58ddbddc6780507e3260cf99d4ddf1d98e5bbe2e
[ "MIT" ]
null
null
null
23.287129
127
0.429847
[ [ [ "'''\nGiven a string containing just the characters '(', ')', '{', '}', '[' and ']', determine if the input string is valid.\n\nThe brackets must close in the correct order, \"()\" and \"()[]{}\" are all valid but \"(]\" and \"([)]\" are not.\n\n'''", "_____no_output_____" ], [ "class Solution:\n def isValid(self, s):\n \"\"\"\n :type s: str\n :rtype: bool\n \"\"\"\n stack = []\n check = {\"(\": \")\", \"{\": \"}\", \"[\": \"]\"}\n for char in s:\n if(len(s) > 0 and char in check):\n stack.append(char)\n elif(len(stack) == 0 or char != check[stack.pop()]):\n return False\n if(len(stack) == 0):\n return True\n else:\n return False\n #可以寫成 return len(stack) == 0 更漂亮", "_____no_output_____" ], [ "if __name__ == \"__main__\":\n s = '{[][][]}[]'\n t = '[[[[]]]]]'\n print(\"s string is \" + str(Solution().isValid(s)))\n print(\"t string is \" + str(Solution().isValid(t)))", "s string is True\nt string is False\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
d03326b85bf0c588c40ee33bd4d474110b3f30d8
4,718
ipynb
Jupyter Notebook
examples/collab.ipynb
MartinGer/fastai
5a5de8b3a6c2a4ba04f1e5873083808172a6903a
[ "Apache-2.0" ]
2
2019-02-08T04:59:27.000Z
2020-05-15T21:17:23.000Z
examples/collab.ipynb
MartinGer/fastai
5a5de8b3a6c2a4ba04f1e5873083808172a6903a
[ "Apache-2.0" ]
2
2021-05-20T20:04:51.000Z
2022-02-26T09:14:00.000Z
examples/collab.ipynb
MartinGer/fastai
5a5de8b3a6c2a4ba04f1e5873083808172a6903a
[ "Apache-2.0" ]
1
2020-01-09T15:44:46.000Z
2020-01-09T15:44:46.000Z
23.59
84
0.414159
[ [ [ "from fastai import * # Quick access to most common functionality\nfrom fastai.collab import * # Quick access to collab filtering functionality", "_____no_output_____" ] ], [ [ "## Collaborative filtering example", "_____no_output_____" ], [ "`collab` models use data in a `DataFrame` of user, items, and ratings.", "_____no_output_____" ] ], [ [ "path = untar_data(URLs.ML_SAMPLE)\npath", "_____no_output_____" ], [ "ratings = pd.read_csv(path/'ratings.csv')\nseries2cat(ratings, 'userId', 'movieId')\nratings.head()", "_____no_output_____" ], [ "data = CollabDataBunch.from_df(ratings, seed=42)", "_____no_output_____" ], [ "y_range = [0, 5.5]", "_____no_output_____" ] ], [ [ "That's all we need to create and train a model:", "_____no_output_____" ] ], [ [ "learn = collab_learner(data, n_factors=50, y_range=y_range)\nlearn.fit_one_cycle(4, 5e-3)", "Total time: 00:02\nepoch train_loss valid_loss\n1 1.724424 1.277289 (00:00)\n2 0.893744 0.678392 (00:00)\n3 0.655527 0.651847 (00:00)\n4 0.562305 0.649613 (00:00)\n\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
d03332cbb5bdc7d7da9094e2feed6813a7f584b2
3,343
ipynb
Jupyter Notebook
notebooks/02_numerical_pipeline_ex_00.ipynb
khanfarhan10/scikit-learn-mooc
37f1e34eae0304a92f557a8194e068b4333ad418
[ "CC-BY-4.0" ]
null
null
null
notebooks/02_numerical_pipeline_ex_00.ipynb
khanfarhan10/scikit-learn-mooc
37f1e34eae0304a92f557a8194e068b4333ad418
[ "CC-BY-4.0" ]
null
null
null
notebooks/02_numerical_pipeline_ex_00.ipynb
khanfarhan10/scikit-learn-mooc
37f1e34eae0304a92f557a8194e068b4333ad418
[ "CC-BY-4.0" ]
null
null
null
23.542254
109
0.576727
[ [ [ "# 📝 Exercise 00\n\nThe goal of this exercise is to fit a similar model as in the previous\nnotebook to get familiar with manipulating scikit-learn objects and in\nparticular the `.fit/.predict/.score` API.", "_____no_output_____" ], [ "Let's load the adult census dataset with only numerical variables", "_____no_output_____" ] ], [ [ "import pandas as pd\nadult_census = pd.read_csv(\"../datasets/adult-census-numeric.csv\")\ndata = adult_census.drop(columns=\"class\")\ntarget = adult_census[\"class\"]", "_____no_output_____" ] ], [ [ "In the previous notebook we used `model = KNeighborsClassifier()`. All\nscikit-learn models can be created without arguments, which means that you\ndon't need to understand the details of the model to use it in scikit-learn.\n\nOne of the `KNeighborsClassifier` parameters is `n_neighbors`. It controls\nthe number of neighbors we are going to use to make a prediction for a new\ndata point.\n\nWhat is the default value of the `n_neighbors` parameter? Hint: Look at the\nhelp inside your notebook `KNeighborsClassifier?` or on the [scikit-learn\nwebsite](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html)", "_____no_output_____" ], [ "Create a `KNeighborsClassifier` model with `n_neighbors=50`", "_____no_output_____" ] ], [ [ "# Write your code here.", "_____no_output_____" ] ], [ [ "Fit this model on the data and target loaded above", "_____no_output_____" ] ], [ [ "# Write your code here.", "_____no_output_____" ] ], [ [ "Use your model to make predictions on the first 10 data points inside the\ndata. Do they match the actual target values?", "_____no_output_____" ] ], [ [ "# Write your code here.", "_____no_output_____" ] ], [ [ "Compute the accuracy on the training data.", "_____no_output_____" ] ], [ [ "# Write your code here.", "_____no_output_____" ] ], [ [ "Now load the test data from `\"../datasets/adult-census-numeric-test.csv\"` and\ncompute the accuracy on the test data.", "_____no_output_____" ] ], [ [ "# Write your code here.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d0333c66e31145b2ded674a66f9be975477bc16f
331,427
ipynb
Jupyter Notebook
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
bc781d1defb3633b560071a57b14d7cc0fcc6274
[ "BSD-3-Clause" ]
null
null
null
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
bc781d1defb3633b560071a57b14d7cc0fcc6274
[ "BSD-3-Clause" ]
null
null
null
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
bc781d1defb3633b560071a57b14d7cc0fcc6274
[ "BSD-3-Clause" ]
null
null
null
510.673344
46,132
0.945065
[ [ [ "## The next step in the gap analysis is to calculate the Turbine Ideal Energy (TIE) for the wind farm based on SCADA data", "_____no_output_____" ] ], [ [ "%load_ext autoreload\n%autoreload 2", "_____no_output_____" ] ], [ [ "This notebook provides an overview and walk-through of the turbine ideal energy (TIE) method in OpenOA. The TIE metric is defined as the amount of electricity generated by all turbines at a wind farm operating under normal conditions (i.e., not subject to downtime or significant underperformance, but subject to wake losses and moderate turbine performance losses). The approach to calculate TIE is to:\n\n1. Filter out underperforming data from the power curve for each turbine,\n2. Develop a statistical relationship between the remaining power data and key atmospheric variables from a long-term reanalysis product\n3. Long-term correct the period of record power data using the above statistical relationship\n4. Sum up the long-term corrected power data across all turbines to get TIE for the wind farm\n\nHere we use different reanalysis products to capture the uncertainty around the modeled wind resource. We also consider uncertainty due to power data accuracy and the power curve filtering choices for identifying normal turbine performance made by the analyst.\n\nIn this example, the process for estimating TIE is illustrated both with and without uncertainty quantification.", "_____no_output_____" ] ], [ [ "# Import required packages\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nfrom project_ENGIE import Project_Engie\nfrom operational_analysis.methods import turbine_long_term_gross_energy", "_____no_output_____" ] ], [ [ "In the call below, make sure the appropriate path to the CSV input files is specfied. In this example, the CSV files are located directly in the 'examples/data/la_haute_borne' folder", "_____no_output_____" ] ], [ [ "# Load plant object\nproject = Project_Engie('./data/la_haute_borne/')", "_____no_output_____" ], [ "# Load and prepare the wind farm data\nproject.prepare()", "INFO:project_ENGIE:Loading SCADA data\nINFO:operational_analysis.types.timeseries_table:Loading name:la-haute-borne-data-2014-2015\nINFO:project_ENGIE:SCADA data loaded\nINFO:project_ENGIE:Timestamp QC and conversion to UTC\nINFO:project_ENGIE:Correcting for out of range of temperature variables\nINFO:project_ENGIE:Flagging unresponsive sensors\nINFO:numexpr.utils:NumExpr defaulting to 8 threads.\nINFO:project_ENGIE:Converting field names to IEC 61400-25 standard\nINFO:operational_analysis.types.timeseries_table:Loading name:plant_data\nINFO:operational_analysis.types.timeseries_table:Loading name:plant_data\nINFO:operational_analysis.types.timeseries_table:Loading name:merra2_la_haute_borne\nINFO:operational_analysis.types.timeseries_table:Loading name:era5_wind_la_haute_borne\n" ], [ "# Let's take a look at the columns in the SCADA data frame\nproject._scada.df.columns", "_____no_output_____" ] ], [ [ "### TIE calculation without uncertainty quantification\n\nNext we create a TIE object which will contain the analysis to be performed. The method has the ability to calculate uncertainty in the TIE metric through a Monte Carlo sampling of filtering thresholds, power data, and reanalysis product choices. 
For now, we turn this option off and run the method a single time.", "_____no_output_____" ] ], [ [ "ta = turbine_long_term_gross_energy.TurbineLongTermGrossEnergy(project) ", "INFO:operational_analysis.methods.turbine_long_term_gross_energy:Initializing TurbineLongTermGrossEnergy Object\nINFO:operational_analysis.methods.turbine_long_term_gross_energy:Note: uncertainty quantification will NOT be performed in the calculation\nINFO:operational_analysis.methods.turbine_long_term_gross_energy:Processing SCADA data into dictionaries by turbine (this can take a while)\n" ] ], [ [ "All of the steps in the TI calculation process are pulled under a single run() function. These steps include:\n\n1. Processing reanalysis data to daily averages.\n2. Filtering the SCADA data\n3. Fitting the daily reanalysis data to daily SCADA data using a Generalized Additive Model (GAM)\n4. Apply GAM results to calculate long-term TIE for the wind farm\n\nBy setting UQ = False (the default argument value), we must manually specify key filtering thresholds that would otherwise be sampled from a range of values through Monte Carlo. Specifically, we must set thresholds applied to the bin_filter() function in the toolkits.filtering class of OpenOA. ", "_____no_output_____" ] ], [ [ "# Specify filter threshold values to be used\nwind_bin_thresh = 2.0 # Exclude data outside 2 m/s of the median for each power bin\nmax_power_filter = 0.90 # Don't apply bin filter above 0.9 of turbine capacity", "_____no_output_____" ] ], [ [ "We also must decide how to deal with missing data when computing daily sums of energy production from each turbine. Here we set the threshold at 0.9 (i.e., if greater than 90% of SCADA data are available for a given day, scale up the daily energy by the fraction of data missing. If less than 90% data recovery, exclude that day from analysis.", "_____no_output_____" ] ], [ [ "# Set the correction threshold to 90%\ncorrection_threshold = 0.90", "_____no_output_____" ] ], [ [ "Now we'll call the run() method to calculate TIE, choosing two reanalysis products to be used in the TIE calculation process.", "_____no_output_____" ] ], [ [ "# We can choose to save key plots to a file by setting enable_plotting = True and \n# specifying a directory to save the images. For now we turn off this feature. \nta.run(reanal_subset = ['era5', 'merra2'], enable_plotting = False, plot_dir = None,\n wind_bin_thresh = wind_bin_thresh, max_power_filter = max_power_filter,\n correction_threshold = correction_threshold)", " 0%| | 0/2 [00:00<?, ?it/s]INFO:operational_analysis.methods.turbine_long_term_gross_energy:Filtering turbine data\nINFO:operational_analysis.methods.turbine_long_term_gross_energy:Processing reanalysis data to daily averages\nINFO:operational_analysis.methods.turbine_long_term_gross_energy:Processing scada data to daily sums\n\n0it [00:00, ?it/s]\u001b[A\n4it [00:00, 27.11it/s]\u001b[A\nINFO:operational_analysis.methods.turbine_long_term_gross_energy:Setting up daily data for model fitting\nINFO:operational_analysis.methods.turbine_long_term_gross_energy:Fitting model data\n/Users/esimley/opt/anaconda3/lib/python3.7/site-packages/scipy/linalg/basic.py:1321: RuntimeWarning: internal gelsd driver lwork query error, required iwork dimension not returned. This is likely the result of LAPACK bug 0038, fixed in LAPACK 3.2.2 (released July 21, 2010). 
Falling back to 'gelss' driver.\n x, resids, rank, s = lstsq(a, b, cond=cond, check_finite=False)\nINFO:operational_analysis.methods.turbine_long_term_gross_energy:Applying fitting results to calculate long-term gross energy\n 50%|█████ | 1/2 [00:02<00:02, 2.02s/it]INFO:operational_analysis.methods.turbine_long_term_gross_energy:Filtering turbine data\nINFO:operational_analysis.methods.turbine_long_term_gross_energy:Processing reanalysis data to daily averages\nINFO:operational_analysis.methods.turbine_long_term_gross_energy:Processing scada data to daily sums\n\n0it [00:00, ?it/s]\u001b[A\n4it [00:00, 25.93it/s]\u001b[A\nINFO:operational_analysis.methods.turbine_long_term_gross_energy:Setting up daily data for model fitting\nINFO:operational_analysis.methods.turbine_long_term_gross_energy:Fitting model data\nINFO:operational_analysis.methods.turbine_long_term_gross_energy:Applying fitting results to calculate long-term gross energy\n100%|██████████| 2/2 [00:03<00:00, 1.93s/it]\nINFO:operational_analysis.methods.turbine_long_term_gross_energy:Run completed\n" ] ], [ [ "Now that we've finished the TIE calculation, let's examine results", "_____no_output_____" ] ], [ [ "ta._plant_gross", "_____no_output_____" ], [ "# What is the long-term annual TIE for whole plant\nprint('Long-term turbine ideal energy is %s GWh/year' %np.round(np.mean(ta._plant_gross/1e6),1))", "Long-term turbine ideal energy is 13.7 GWh/year\n" ] ], [ [ "The long-term TIE value of 13.7 GWh/year is based on the mean TIE resulting from the two reanalysis products considered.", "_____no_output_____" ], [ "Next, we can examine how well the filtering worked by examining the power curves for each turbine using the plot_filtered_power_curves() function.", "_____no_output_____" ] ], [ [ "# Currently saving figures in examples folder. The folder where figures are saved can be changed if desired.\nta.plot_filtered_power_curves(save_folder = \"./\", output_to_terminal = True)", "_____no_output_____" ] ], [ [ "Overall these are very clean power curves, and the filtering algorithms seem to have done a good job of catching the most egregious outliers.", "_____no_output_____" ], [ "Now let's look at the daily data and how well the power curve fit worked", "_____no_output_____" ] ], [ [ "# Currently saving figures in examples folder. The folder where figures are saved can be changed if desired.\nta.plot_daily_fitting_result(save_folder = \"./\", output_to_terminal = True)", "_____no_output_____" ] ], [ [ "Overall the fit looks good. The modeled data sometimes estimate higher energy at low wind speeds compared to the observed, but keep in mind the model fits to long term wind speed, wind direction, and air density, whereas we are only showing the relationship to wind speed here.\n\nNote that 'imputed' means daily power data that were missing for a specific turbine, but were calculated by establishing statistical relationships with that turbine and its neighbors. This is necessary since a wind farm often has one turbine down and, without imputation, very little daily data would be left if we excluded days when a turbine was down.", "_____no_output_____" ], [ "### TIE calculation including uncertainty quantification\n\nNow we will create a TIE object for calculating TIE and quantifying the uncertainty in our estimate. 
The method estimates uncertainty in the TIE metric through a Monte Carlo sampling of filtering thresholds, power data, and reanalysis product choices.\n\nNote that we set the number of Monte Carlo simulations to only 100 in this example because of the relatively high computational effort required to perform a single iteration. In practice, a larger number of simulations is recommended (the default value is 2000).", "_____no_output_____" ] ], [ [ "ta = turbine_long_term_gross_energy.TurbineLongTermGrossEnergy(project, \n UQ = True, # enable uncertainty quantification\n num_sim = 100 # number of Monte Carlo simulations to perform\n ) ", "INFO:operational_analysis.methods.turbine_long_term_gross_energy:Initializing TurbineLongTermGrossEnergy Object\nINFO:operational_analysis.methods.turbine_long_term_gross_energy:Note: uncertainty quantification will be performed in the calculation\nINFO:operational_analysis.methods.turbine_long_term_gross_energy:Processing SCADA data into dictionaries by turbine (this can take a while)\n" ] ], [ [ "With uncertainty quantification enabled (UQ = True), we can specify the assumed uncertainty of the SCADA power data (0.5% by default) and ranges of two key filtering thresholds from which the Monte Carlo simulations will sample. Specifically, these thresholds are applied to the bin_filter() function in the toolkits.filtering class of OpenOA.\n\nNote that the following parameters are the default values used in the run() method.", "_____no_output_____" ] ], [ [ "uncertainty_scada=0.005 # Assumed uncertainty of SCADA power data (0.5%)\n\n# Range of filter threshold values to be used by Monte Carlo simulations\n\n# Data outside of a range of wind speeds from 1 to 3 m/s of the median for each power bin are considered\nwind_bin_thresh=(1, 3) \n\n# The bin filter will be applied up to fractions of turbine capacity from 80% to 90%\nmax_power_filter=(0.8, 0.9) ", "_____no_output_____" ] ], [ [ "We will consider a range of availability thresholds for dealing with missing data when computing daily sums of energy production from each turbine (i.e., if greater than the given threshold of SCADA data are available for a given day, scale up the daily energy by the fraction of data missing. If less than the given threshold of data are available, exclude that day from analysis. Here we set the range of thresholds as 85% to 95%. ", "_____no_output_____" ] ], [ [ "correction_threshold=(0.85, 0.95)", "_____no_output_____" ] ], [ [ "Now we'll call the run() method to calculate TIE with uncertainty quantification, again choosing two reanalysis products to be used in the TIE calculation process.\n\nNote that without uncertainty quantification (UQ = False), a separate TIE value is calculated for each reanalysis product specified. However, when UQ = True, the reanalysis product is treated as another Monte Carlo sampling parameter. Thus, the impact of different reanlysis products is considered to be part of the overall uncertainty in TIE. ", "_____no_output_____" ] ], [ [ "# We can choose to save key plots to a file by setting enable_plotting = True and \n# specifying a directory to save the images. For now we turn off this feature. 
\nta.run(reanal_subset = ['era5', 'merra2'], enable_plotting = False, plot_dir = None,\n uncertainty_scada = uncertainty_scada, wind_bin_thresh = wind_bin_thresh, \n max_power_filter = max_power_filter, correction_threshold = correction_threshold)", "_____no_output_____" ] ], [ [ "Now that we've finished the Monte Carlo TIE calculation simulations, let's examine results", "_____no_output_____" ] ], [ [ "np.mean(ta._plant_gross)", "_____no_output_____" ], [ "np.std(ta._plant_gross)", "_____no_output_____" ], [ "# Mean long-term annual TIE for whole plant\nprint('Mean long-term turbine ideal energy is %s GWh/year' %np.round(np.mean(ta._plant_gross/1e6),1))\n\n# Uncertainty in long-term annual TIE for whole plant\nprint('Uncertainty in long-term turbine ideal energy is %s GWh/year, or %s percent' % (np.round(np.std(ta._plant_gross/1e6),1), np.round(100*np.std(ta._plant_gross)/np.mean(ta._plant_gross),1)))", "Mean long-term turbine ideal energy is 13.7 GWh/year\nUncertainty in long-term turbine ideal energy is 0.1 GWh/year, or 0.8 percent\n" ] ], [ [ "As expected, the mean long-term TIE is close to the earlier estimate without uncertainty quantification. A relatively low uncertainty has been estimated for the TIE calculations. This is a result of the relatively close agreement between the two reanalysis products and the clean power curves plotted earlier.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
d0334d3025edd7cf61a7d5c3070b5a1525fbfb55
10,986
ipynb
Jupyter Notebook
code_reviews/Code_review_1.ipynb
Nance-Lab/textile
ebf2dd728e7d91b4fcab9d6f2798513834a63eac
[ "MIT" ]
1
2021-10-29T20:10:12.000Z
2021-10-29T20:10:12.000Z
code_reviews/Code_review_1.ipynb
Nance-Lab/textile
ebf2dd728e7d91b4fcab9d6f2798513834a63eac
[ "MIT" ]
1
2020-07-21T21:45:13.000Z
2020-07-21T21:45:13.000Z
code_reviews/Code_review_1.ipynb
Nance-Lab/textile
ebf2dd728e7d91b4fcab9d6f2798513834a63eac
[ "MIT" ]
2
2020-08-13T22:23:37.000Z
2021-09-10T04:23:17.000Z
27.812658
724
0.531404
[ [ [ "# Code Review #1", "_____no_output_____" ], [ "Purpose: To introduce the group to looking at code analytically\n\nCreated By: Hawley Helmbrecht\n\n\nCreation Date: 10-12-21", "_____no_output_____" ], [ "# Introduction to Analyzing Code", "_____no_output_____" ], [ "All snipets within this section are taken from the Hitchhiker's Guide to Python (https://docs.python-guide.org/writing/style/)", "_____no_output_____" ], [ "### Example 1: Explicit Code", "_____no_output_____" ] ], [ [ "def make_complex(*args):\n x, y = args\n return dict(**locals())", "_____no_output_____" ], [ "def make_complex(x, y):\n return {'x': x, 'y': y}", "_____no_output_____" ] ], [ [ "### Example 2: One Statement per Line", "_____no_output_____" ] ], [ [ "print('one'); print('two')\n\nif x == 1: print('one')\n\nif <complex comparison> and <other complex comparison>:\n # do something", "_____no_output_____" ], [ "print('one')\nprint('two')\n\nif x == 1:\n print('one')\n\ncond1 = <complex comparison>\ncond2 = <other complex comparison>\nif cond1 and cond2:\n # do something", "_____no_output_____" ] ], [ [ "## Intro to Pep 8", "_____no_output_____" ], [ "Example 1: Limit all lines to a maximum of 79 characters.", "_____no_output_____" ] ], [ [ "#Wrong:\nincome = (gross_wages + taxable_interest + (dividends - qualified_dividends) - ira_deduction - student_loan_interest)", "_____no_output_____" ], [ "#Correct:\nincome = (gross_wages\n + taxable_interest\n + (dividends - qualified_dividends)\n - ira_deduction\n - student_loan_interest)", "_____no_output_____" ] ], [ [ "Example 2: Line breaks around binary operators", "_____no_output_____" ] ], [ [ "# Wrong:\n# operators sit far away from their operands\nincome = (gross_wages +\n taxable_interest +\n (dividends - qualified_dividends) -\n ira_deduction -\n student_loan_interest)", "_____no_output_____" ], [ "# Correct:\n# easy to match operators with operands\nincome = (gross_wages\n + taxable_interest\n + (dividends - qualified_dividends)\n - ira_deduction\n - student_loan_interest)", "_____no_output_____" ] ], [ [ "Example 3: Import formatting", "_____no_output_____" ] ], [ [ "# Correct:\nimport os\nimport sys", "_____no_output_____" ], [ "# Wrong:\nimport sys, os", "_____no_output_____" ] ], [ [ "## Let's look at some code!", "_____no_output_____" ], [ "Sci-kit images Otsu Threshold code! (https://github.com/scikit-image/scikit-image/blob/main/skimage/filters/thresholding.py)", "_____no_output_____" ] ], [ [ "def threshold_otsu(image=None, nbins=256, *, hist=None):\n \"\"\"Return threshold value based on Otsu's method.\n Either image or hist must be provided. If hist is provided, the actual\n histogram of the image is ignored.\n Parameters\n ----------\n image : (N, M[, ..., P]) ndarray, optional\n Grayscale input image.\n nbins : int, optional\n Number of bins used to calculate histogram. This value is ignored for\n integer arrays.\n hist : array, or 2-tuple of arrays, optional\n Histogram from which to determine the threshold, and optionally a\n corresponding array of bin center intensities. If no hist provided,\n this function will compute it from the image.\n Returns\n -------\n threshold : float\n Upper threshold value. All pixels with an intensity higher than\n this value are assumed to be foreground.\n References\n ----------\n .. 
[1] Wikipedia, https://en.wikipedia.org/wiki/Otsu's_Method\n Examples\n --------\n >>> from skimage.data import camera\n >>> image = camera()\n >>> thresh = threshold_otsu(image)\n >>> binary = image <= thresh\n Notes\n -----\n The input image must be grayscale.\n \"\"\"\n if image is not None and image.ndim > 2 and image.shape[-1] in (3, 4):\n warn(f'threshold_otsu is expected to work correctly only for '\n f'grayscale images; image shape {image.shape} looks like '\n f'that of an RGB image.')\n\n # Check if the image has more than one intensity value; if not, return that\n # value\n if image is not None:\n first_pixel = image.ravel()[0]\n if np.all(image == first_pixel):\n return first_pixel\n\n counts, bin_centers = _validate_image_histogram(image, hist, nbins)\n\n # class probabilities for all possible thresholds\n weight1 = np.cumsum(counts)\n weight2 = np.cumsum(counts[::-1])[::-1]\n # class means for all possible thresholds\n mean1 = np.cumsum(counts * bin_centers) / weight1\n mean2 = (np.cumsum((counts * bin_centers)[::-1]) / weight2[::-1])[::-1]\n\n # Clip ends to align class 1 and class 2 variables:\n # The last value of ``weight1``/``mean1`` should pair with zero values in\n # ``weight2``/``mean2``, which do not exist.\n variance12 = weight1[:-1] * weight2[1:] * (mean1[:-1] - mean2[1:]) ** 2\n\n idx = np.argmax(variance12)\n threshold = bin_centers[idx]\n\n return threshold", "_____no_output_____" ] ], [ [ "What do you observe about the code that makes it pythonic?", "_____no_output_____" ] ], [ [ "Do the pythonic conventions make it easier to understand?", "_____no_output_____" ] ], [ [ "How is the documentation on this function?", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d0335eef9190ddfbf31de89cc2b003259f468e98
9,862
ipynb
Jupyter Notebook
03-Python Crash Course Exercises - Solutions.ipynb
avinash-nahar/Learning
f1b0a64c686e993baa72d52757c63750ffd81ae1
[ "MIT" ]
null
null
null
03-Python Crash Course Exercises - Solutions.ipynb
avinash-nahar/Learning
f1b0a64c686e993baa72d52757c63750ffd81ae1
[ "MIT" ]
null
null
null
03-Python Crash Course Exercises - Solutions.ipynb
avinash-nahar/Learning
f1b0a64c686e993baa72d52757c63750ffd81ae1
[ "MIT" ]
null
null
null
19.803213
263
0.47972
[ [ [ "___\n\n<img src='logo.png' /></a>\n___\n# Python Crash Course Exercises - Solutions\n\n", "_____no_output_____" ], [ "## Exercises\n\nAnswer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.", "_____no_output_____" ], [ "** What is 7 to the power of 4?**", "_____no_output_____" ] ], [ [ "7**4", "_____no_output_____" ] ], [ [ "** Split this string:**\n\n s = \"Hi there Sam!\"\n \n**into a list. **", "_____no_output_____" ] ], [ [ "s = 'Hi there Sam!'", "_____no_output_____" ], [ "s.split()", "_____no_output_____" ] ], [ [ "** Given the variables:**\n\n planet = \"Earth\"\n diameter = 12742\n\n** Use .format() to print the following string: **\n\n The diameter of Earth is 12742 kilometers.", "_____no_output_____" ] ], [ [ "planet = \"Earth\"\ndiameter = 12742", "_____no_output_____" ], [ "print(\"The diameter of {} is {} kilometers.\".format(planet,diameter))", "The diameter of Earth is 12742 kilometers.\n" ] ], [ [ "** Given this nested list, use indexing to grab the word \"hello\" **", "_____no_output_____" ] ], [ [ "lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]", "_____no_output_____" ], [ "lst[-3][1][2][0]", "_____no_output_____" ] ], [ [ "** Given this nest dictionary grab the word \"hello\". Be prepared, this will be annoying/tricky **", "_____no_output_____" ] ], [ [ "d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}", "_____no_output_____" ], [ "d['k1'][3]['tricky'][3]['target'][3]", "_____no_output_____" ] ], [ [ "** What is the main difference between a tuple and a list? **", "_____no_output_____" ] ], [ [ "# Tuple is immutable", "_____no_output_____" ] ], [ [ "** Create a function that grabs the email website domain from a string in the form: **\n\n [email protected]\n \n**So for example, passing \"[email protected]\" would return: domain.com**", "_____no_output_____" ] ], [ [ "def domainGet(email):\n return email.split('@')[-1]", "_____no_output_____" ], [ "domainGet('[email protected]')", "_____no_output_____" ] ], [ [ "** Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization. **", "_____no_output_____" ] ], [ [ "def findDog(st):\n return 'dog' in st.lower().split()", "_____no_output_____" ], [ "findDog('Is there a dog here?')", "_____no_output_____" ] ], [ [ "** Create a function that counts the number of times the word \"dog\" occurs in a string. Again ignore edge cases. **", "_____no_output_____" ] ], [ [ "def countDog(st):\n count = 0\n for word in st.lower().split():\n if word == 'dog':\n count += 1\n return count", "_____no_output_____" ], [ "countDog('This dog runs faster than the other dog dude!')", "_____no_output_____" ] ], [ [ "** Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example:**\n\n seq = ['soup','dog','salad','cat','great']\n\n**should be filtered down to:**\n\n ['soup','salad']", "_____no_output_____" ] ], [ [ "seq = ['soup','dog','salad','cat','great']", "_____no_output_____" ], [ "list(filter(lambda word: word[0]=='s',seq))", "_____no_output_____" ] ], [ [ "### Final Problem\n**You are driving a little too fast, and a police officer stops you. Write a function\n to return one of 3 possible results: \"No ticket\", \"Small ticket\", or \"Big Ticket\". \n If your speed is 60 or less, the result is \"No Ticket\". 
If speed is between 61 \n and 80 inclusive, the result is \"Small Ticket\". If speed is 81 or more, the result is \"Big Ticket\". Unless it is your birthday (encoded as a boolean value in the parameters of the function) -- on your birthday, your speed can be 5 higher in all \n cases. **", "_____no_output_____" ] ], [ [ "def caught_speeding(speed, is_birthday):\n \n if is_birthday:\n speeding = speed - 5\n else:\n speeding = speed\n \n if speeding > 80:\n return 'Big Ticket'\n elif speeding > 60:\n return 'Small Ticket'\n else:\n return 'No Ticket'", "_____no_output_____" ], [ "caught_speeding(81,True)", "_____no_output_____" ], [ "caught_speeding(81,False)", "_____no_output_____" ] ], [ [ "# Great job!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
d03362054d6e5791d89602ae53e2e7023c940581
61,224
ipynb
Jupyter Notebook
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
a52a609d7bfe66e8e0bb918b42ea287397a183ac
[ "MIT" ]
null
null
null
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
a52a609d7bfe66e8e0bb918b42ea287397a183ac
[ "MIT" ]
null
null
null
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
a52a609d7bfe66e8e0bb918b42ea287397a183ac
[ "MIT" ]
null
null
null
40.62641
421
0.565759
[ [ [ "# TV Script Generation\n\nIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,\"fake\" TV script, based on patterns it recognizes in this training data.\n\n## Get the Data\n\nThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. \n>* As a first step, we'll load in this data and look at some samples. \n* Then, you'll be tasked with defining and training an RNN to generate a new script!", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# load in data\nimport helper\ndata_dir = './data/Seinfeld_Scripts.txt'\ntext = helper.load_data(data_dir)", "_____no_output_____" ] ], [ [ "## Explore the Data\nPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\\n`.", "_____no_output_____" ] ], [ [ "view_line_range = (2, 12)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\n\nlines = text.split('\\n')\nprint('Number of lines: {}'.format(len(lines)))\nword_count_line = [len(line.split()) for line in lines]\nprint('Average number of words in each line: {}'.format(np.average(word_count_line)))\n\nprint()\nprint('The lines {} to {}:'.format(*view_line_range))\nprint('\\n'.join(text.split('\\n')[view_line_range[0]:view_line_range[1]]))", "Dataset Stats\nRoughly the number of unique words: 46367\nNumber of lines: 109233\nAverage number of words in each line: 5.544240293684143\n\nThe lines 2 to 12:\njerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother. \n\ngeorge: are you through? \n\njerry: you do of course try on, when you buy? \n\ngeorge: yes, it was purple, i liked it, i dont actually recall considering the buttons. \n\njerry: oh, you dont recall? \n\n" ] ], [ [ "---\n## Implement Pre-processing Functions\nThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:\n- Lookup Table\n- Tokenize Punctuation\n\n### Lookup Table\nTo create a word embedding, you first need to transform the words to ids. 
In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call `vocab_to_int`\n- Dictionary to go from the id to word, we'll call `int_to_vocab`\n\nReturn these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`", "_____no_output_____" ] ], [ [ "import problem_unittests as tests\nfrom collections import Counter\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n # TODO: Implement Function\n\n #reference source: inspired/copied from course samples\n word_counts = Counter(text)\n sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)\n int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}\n vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}\n # return tuple\n return vocab_to_int, int_to_vocab\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)", "Tests Passed\n" ] ], [ [ "### Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, \"bye\" and \"bye!\" would generate two different word ids.\n\nImplement the function `token_lookup` to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( **.** )\n- Comma ( **,** )\n- Quotation Mark ( **\"** )\n- Semicolon ( **;** )\n- Exclamation mark ( **!** )\n- Question mark ( **?** )\n- Left Parentheses ( **(** )\n- Right Parentheses ( **)** )\n- Dash ( **-** )\n- Return ( **\\n** )\n\nThis dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value \"dash\", try using something like \"||dash||\".", "_____no_output_____" ] ], [ [ "def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenized dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n retval = {\n \".\": \"||Period||\",\n \",\": \"||Comma||\",\n \"\\\"\": \"||QuotationMark||\",\n \";\": \"||Semicolon||\",\n \"!\": \"||ExclamationMark||\",\n \"?\": \"||QuestionMark||\",\n \"(\": \"||LeftParentheses||\",\n \")\": \"||RightParentheses||\",\n \"-\": \"||Dash||\",\n \"\\n\": \"||Return||\",\n }\n \n return retval\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)", "Tests Passed\n" ] ], [ [ "## Pre-process all the data and save it\n\nRunning the code cell below will pre-process all the data and save it to file. You're encouraged to lok at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# pre-process training data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)", "_____no_output_____" ] ], [ [ "# Check Point\nThis is your first checkpoint. 
If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()", "_____no_output_____" ], [ "len(int_text)", "_____no_output_____" ] ], [ [ "## Build the Neural Network\nIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions.\n\n### Check Access to GPU", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport torch\n\n# Check for a GPU\ntrain_on_gpu = torch.cuda.is_available()\nif not train_on_gpu:\n print('No GPU found. Please use a GPU to train your neural network.')", "_____no_output_____" ] ], [ [ "## Input\nLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.html#torch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.\n\nYou can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.\n```\ndata = TensorDataset(feature_tensors, target_tensors)\ndata_loader = torch.utils.data.DataLoader(data, \n batch_size=batch_size)\n```\n\n### Batching\nImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.\n\n>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.\n\nFor example, say we have these as input:\n```\nwords = [1, 2, 3, 4, 5, 6, 7]\nsequence_length = 4\n```\n\nYour first `feature_tensor` should contain the values:\n```\n[1, 2, 3, 4]\n```\nAnd the corresponding `target_tensor` should just be the next \"word\"/tokenized word value:\n```\n5\n```\nThis should continue with the second `feature_tensor`, `target_tensor` being:\n```\n[2, 3, 4, 5] # features\n6 # target\n```", "_____no_output_____" ] ], [ [ "from torch.utils.data import TensorDataset, DataLoader\nnb_samples = 6\nfeatures = torch.randn(nb_samples, 10)\nlabels = torch.empty(nb_samples, dtype=torch.long).random_(10)\n\ndataset = TensorDataset(features, labels)\nloader = DataLoader(\n dataset,\n batch_size=2\n)\n\nfor batch_idx, (x, y) in enumerate(loader):\n print(x.shape, y.shape)\n\nprint(features)", "torch.Size([2, 10]) torch.Size([2])\ntorch.Size([2, 10]) torch.Size([2])\ntorch.Size([2, 10]) torch.Size([2])\ntensor([[ 1.1998, 0.0965, 0.3222, 1.0989, 1.2384, -0.3169, -2.0674,\n -0.1591, -0.2882, -0.2688],\n [ 2.5983, -0.2932, 0.1637, 0.4495, -1.5855, 1.2844, -0.4096,\n 0.5459, -0.6259, -0.7333],\n [-0.3640, -0.1733, -1.0409, -1.8492, 0.7165, -0.0943, 0.3390,\n -0.6876, -0.5727, 0.9824],\n [-0.2071, -0.1404, 0.9913, -0.6238, -1.1381, -0.8963, 0.6388,\n 0.3707, 1.7417, 0.5205],\n [ 1.5642, 0.6227, -0.1644, -2.0549, 1.3320, -1.4667, 0.6474,\n 0.4505, 0.8304, 1.0425],\n [-0.0897, 1.3712, 0.8507, -0.7599, -0.3991, -1.2457, -0.3215,\n -0.1929, 0.3957, 1.1643]])\n" ], [ "from torch.utils.data import TensorDataset, DataLoader\n\n\ndef batch_data(words, sequence_length, batch_size):\n \"\"\"\n Batch the neural 
network data using DataLoader\n :param words: The word ids of the TV scripts\n :param sequence_length: The sequence length of each batch\n :param batch_size: The size of each batch; the number of sequences in a batch\n :return: DataLoader with batched data\n \"\"\"\n # TODO: Implement function\n \n batch = len(words)//batch_size\n words = words[:batch*batch_size]\n \n feature_tensors, target_tensors = [], []\n\n for ndx in range(len(words) - sequence_length):\n feature_tensors += [words[ndx:ndx+sequence_length]]\n target_tensors += [words[ndx+sequence_length]]\n \n feature_tensors = torch.LongTensor(feature_tensors)\n target_tensors = torch.LongTensor(target_tensors)\n \n data = TensorDataset(feature_tensors, target_tensors)\n data_loader = torch.utils.data.DataLoader(data, \n batch_size=batch_size,\n shuffle=True\n )\n \n # return a dataloader\n return data_loader\n\n# there is no test for this function, but you are encouraged to create\n# print statements and tests of your own\n", "_____no_output_____" ] ], [ [ "### Test your dataloader \n\nYou'll have to modify this code to test a batching function, but it should look fairly similar.\n\nBelow, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.\n\nYour code should return something like the following (likely in a different order, if you shuffled your data):\n\n```\ntorch.Size([10, 5])\ntensor([[ 28, 29, 30, 31, 32],\n [ 21, 22, 23, 24, 25],\n [ 17, 18, 19, 20, 21],\n [ 34, 35, 36, 37, 38],\n [ 11, 12, 13, 14, 15],\n [ 23, 24, 25, 26, 27],\n [ 6, 7, 8, 9, 10],\n [ 38, 39, 40, 41, 42],\n [ 25, 26, 27, 28, 29],\n [ 7, 8, 9, 10, 11]])\n\ntorch.Size([10])\ntensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])\n```\n\n### Sizes\nYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). \n\n### Values\n\nYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.", "_____no_output_____" ] ], [ [ "# test dataloader\n\ntest_text = range(50)\nt_loader = batch_data(test_text, sequence_length=6, batch_size=10)\n\ndata_iter = iter(t_loader)\nsample_x, sample_y = data_iter.next()\n\nprint(sample_x.shape)\nprint(sample_x)\nprint(sample_y.shape)\nprint(sample_y)", "torch.Size([10, 6])\ntensor([[ 13, 14, 15, 16, 17, 18],\n [ 20, 21, 22, 23, 24, 25],\n [ 30, 31, 32, 33, 34, 35],\n [ 2, 3, 4, 5, 6, 7],\n [ 16, 17, 18, 19, 20, 21],\n [ 24, 25, 26, 27, 28, 29],\n [ 0, 1, 2, 3, 4, 5],\n [ 38, 39, 40, 41, 42, 43],\n [ 7, 8, 9, 10, 11, 12],\n [ 18, 19, 20, 21, 22, 23]])\n\ntorch.Size([10])\ntensor([ 19, 26, 36, 8, 22, 30, 6, 44, 13, 24])\n" ] ], [ [ "---\n## Build the Neural Network\nImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class:\n - `__init__` - The initialize function. \n - `init_hidden` - The initialization function for an LSTM/GRU hidden state\n - `forward` - Forward propagation function.\n \nThe initialize function should create the layers of the neural network and save them to the class. 
The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.\n\n**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word.\n\n### Hints\n\n1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`\n2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:\n\n```\n# reshape into (batch_size, seq_length, output_size)\noutput = output.view(batch_size, -1, self.output_size)\n# get last batch\nout = output[:, -1]\n```", "_____no_output_____" ] ], [ [ "#reference source: inspired/copied from course samples\nimport numpy as np\ndef one_hot_encode(arr, n_labels):\n \n arr = arr.cpu().numpy()\n \n # Initialize the the encoded array\n one_hot = np.zeros((np.multiply(*arr.shape), n_labels), dtype=np.float32)\n \n # Fill the appropriate elements with ones\n one_hot[np.arange(one_hot.shape[0]), arr.flatten()] = 1.\n \n # Finally reshape it to get back to the original array\n one_hot = one_hot.reshape((*arr.shape, n_labels))\n \n if(train_on_gpu):\n return torch.from_numpy(one_hot).cuda()\n else:\n return torch.from_numpy(one_hot)", "_____no_output_____" ], [ "# check that the function works as expected\ntest_seq = np.array([[3, 5, 1]])\ntest_seq = torch.from_numpy(test_seq)\nprint(test_seq)\none_hot = one_hot_encode(test_seq, 8)\n\nprint(one_hot)", "tensor([[ 3, 5, 1]])\ntensor([[[ 0., 0., 0., 1., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0., 1., 0., 0.],\n [ 0., 1., 0., 0., 0., 0., 0., 0.]]], device='cuda:0')\n" ], [ "import torch.nn as nn\n\nclass RNN(nn.Module):\n \n def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):\n \"\"\"\n Initialize the PyTorch RNN Module\n :param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)\n :param output_size: The number of output dimensions of the neural network\n :param embedding_dim: The size of embeddings, should you choose to use them a\n :param hidden_dim: The size of the hidden layer outputs\n :param dropout: dropout to add in between LSTM/GRU layers\n \"\"\"\n super(RNN, self).__init__()\n # TODO: Implement function\n \n # set class variables\n self.input_dim = vocab_size\n self.hidden_dim = hidden_dim\n self.output_dim = output_size\n self.n_layers = n_layers\n self.dropout_prob = dropout\n \n self.embedding_dim = embedding_dim\n \n ## define model layers\n self.embed = nn.Embedding(vocab_size, embedding_dim)\n self.lstm = nn.LSTM(embedding_dim, self.hidden_dim, self.n_layers, \n dropout=self.dropout_prob, batch_first=True)\n\n self.dropout = nn.Dropout(dropout)\n \n #final fully connected\n self.fc = nn.Linear(self.hidden_dim, self.output_dim)\n \n def forward(self, nn_input, hidden):\n \"\"\"\n Forward propagation of the neural network\n :param nn_input: The input to the neural network\n :param hidden: The hidden state \n :return: Two Tensors, the output of the neural network and the latest hidden state\n \"\"\"\n # TODO: Implement function \n\n# ## outputs and the new hidden state \n# nn_input = one_hot_encode(nn_input, self.input_dim)\n \n embedding = self.embed(nn_input)\n lstm_output, hidden = self.lstm(embedding, hidden)\n# lstm_output, hidden = self.lstm(nn_input, 
hidden) #without embedding\n \n out = self.dropout(lstm_output)\n \n #stack the outputs of the lstm to pass to your fully-connected layer\n out = out.contiguous().view(-1, self.hidden_dim)\n out = self.fc(out)\n \n ##From notes above\n #The output of this model should be the last batch of word scores after a complete sequence has been processed.\n #That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word.\n # reshape into (batch_size, seq_length, output_size)\n out = out.view(self.batch_size, -1, self.output_dim)\n # get last batch\n out = out[:, -1]\n # return one batch of output word scores and the hidden state\n return out, hidden\n\n def init_hidden(self, batch_size):\n '''\n Initialize the hidden state of an LSTM/GRU\n :param batch_size: The batch_size of the hidden state\n :return: hidden state of dims (n_layers, batch_size, hidden_dim)\n '''\n # Implement function\n self.batch_size = batch_size\n weight = next(self.parameters()).data\n \n # two new tensors with sizes n_layers x batch_size x n_hidden\n # initialize hidden state with zero weights, and move to GPU if available\n if (train_on_gpu):\n hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),\n weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())\n else:\n hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),\n weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())\n \n return hidden\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_rnn(RNN, train_on_gpu)", "Tests Passed\n" ] ], [ [ "### Define forward and backpropagation\n\nUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:\n```\nloss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)\n```\n\nAnd it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. 
Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.\n\n**If a GPU is available, you should move your data to that GPU device, here.**", "_____no_output_____" ] ], [ [ "def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):\n \"\"\"\n Forward and backward propagation on the neural network\n :param decoder: The PyTorch Module that holds the neural network\n :param decoder_optimizer: The PyTorch optimizer for the neural network\n :param criterion: The PyTorch loss function\n :param inp: A batch of input to the neural network\n :param target: The target output for the batch of input\n :return: The loss and the latest hidden state Tensor\n \"\"\"\n \n # TODO: Implement Function\n \n #one hot encoding?\n #required for non embeded case only \n\n # zero accumulated gradients\n rnn.zero_grad()\n \n #To avoid retain_graph=True, inspired from course discussions\n hidden = (hidden[0].detach(), hidden[1].detach())\n\n # move data to GPU, if available\n if(train_on_gpu):\n inp = inp.cuda()\n target = target.cuda()\n\n output, hidden = rnn(inp, hidden)\n loss = criterion(output, target) #target.view(batch_size*sequence_length)\n \n # perform backpropagation and optimization\n# loss.backward(retain_graph=True) #Removed due to high resource consumption\n loss.backward()\n \n ##did not get any advantage\n # `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.\n# nn.utils.clip_grad_norm_(rnn.parameters(), clip) ?\n optimizer.step()\n\n # return the loss over a batch and the hidden state produced by our model\n return loss.item(), hidden\n\n# Note that these tests aren't completely extensive.\n# they are here to act as general checks on the expected outputs of your functions\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)", "Tests Passed\n" ] ], [ [ "## Neural Network Training\n\nWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it.\n\n### Train Loop\n\nThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. 
You'll set this parameter along with other parameters in the next section.", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n\ndef train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):\n batch_losses = []\n \n rnn.train()\n\n print(\"Training for %d epoch(s), %d batch size, %d show every...\" % (n_epochs, batch_size, show_every_n_batches))\n for epoch_i in range(1, n_epochs + 1):\n \n # initialize hidden state\n hidden = rnn.init_hidden(batch_size)\n \n for batch_i, (inputs, labels) in enumerate(train_loader, 1):\n \n # make sure you iterate over completely full batches, only\n n_batches = len(train_loader.dataset)//batch_size\n if(batch_i > n_batches):\n break\n \n # forward, back prop\n loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden) \n # record loss\n batch_losses.append(loss)\n\n # printing loss stats\n if batch_i % show_every_n_batches == 0:\n print('Epoch: {:>4}/{:<4} Loss: {}'.format(\n epoch_i, n_epochs, np.average(batch_losses)))\n batch_losses = []\n\n # returns a trained rnn\n return rnn", "_____no_output_____" ], [ "#modified version with detailed printing, global loss for loaded network (rnn), and saving network\ndef train_rnn_copy(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100, myGlobalLoss=10):\n batch_losses = []\n \n rnn.train()\n\n print(\"Training for %d epoch(s), %d batch size, show every %d, global loss %.4f...\" \n % (n_epochs, batch_size, show_every_n_batches, myGlobalLoss))\n for epoch_i in range(1, n_epochs + 1):\n \n # initialize hidden state\n hidden = rnn.init_hidden(batch_size)\n \n for batch_i, (inputs, labels) in enumerate(train_loader, 1):\n \n # make sure you iterate over completely full batches, only\n n_batches = len(train_loader.dataset)//batch_size\n if(batch_i > n_batches):\n break\n \n # forward, back prop\n loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden) \n # record loss\n batch_losses.append(loss)\n\n # printing loss stats\n if batch_i % show_every_n_batches == 0:\n avgLoss = np.average(batch_losses)\n print('Epoch: {:>4}/{:<4} Batch: {:>4}/{:<4} Loss: {}'.format(\n epoch_i, n_epochs, batch_i, n_batches, np.average(batch_losses)))\n batch_losses = []\n if(myGlobalLoss > avgLoss):\n print('Global Loss {} ---> {}. 
Saving...'.format(myGlobalLoss, avgLoss))\n myGlobalLoss = avgLoss\n #saved at batch level for quick testing and restart\n #should be moved to epoch level to avoid saving semi-trained network \n helper.save_model('./save/trained_rnn_mid_we', rnn)\n \n \n\n # returns a trained rnn\n return rnn", "_____no_output_____" ] ], [ [ "### Hyperparameters\n\nSet and train the neural network with the following parameters:\n- Set `sequence_length` to the length of a sequence.\n- Set `batch_size` to the batch size.\n- Set `num_epochs` to the number of epochs to train for.\n- Set `learning_rate` to the learning rate for an Adam optimizer.\n- Set `vocab_size` to the number of uniqe tokens in our vocabulary.\n- Set `output_size` to the desired size of the output.\n- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.\n- Set `hidden_dim` to the hidden dimension of your RNN.\n- Set `n_layers` to the number of layers/cells in your RNN.\n- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.\n\nIf the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.", "_____no_output_____" ] ], [ [ "# Data params\n# Sequence Length, # of words in a sequence\nsequence_length = 10\n# Batch Size\nif(train_on_gpu):\n batch_size = 512 #128 #64\nelse:\n batch_size = 5\n\n# data loader - do not change\ntrain_loader = batch_data(int_text, sequence_length, batch_size)", "_____no_output_____" ], [ "# Training parameters\n\nmyGlobalLoss = 5\nmyDropout = 0.5 #0.8\n\n# Number of Epochs\nnum_epochs = 10 #5 #50\n# Learning Rate\nlearning_rate = 0.001 #0.002 #0.005 #0.001\n\n# Model parameters\n# Vocab size\nvocab_size = len(vocab_to_int)+1\n# Output size\noutput_size = vocab_size\n# Embedding Dimension\nembedding_dim = 300 #256 #200\n# Hidden Dimension, Usually larger is better performance wise. Common values are 128, 256, 512,\nhidden_dim = 512 #256\n# Number of RNN Layers, Typically between 1-3\nn_layers = 2\n\n# Show stats for every n number of batches\nif(train_on_gpu):\n show_every_n_batches = 200\nelse:\n show_every_n_batches = 1", "_____no_output_____" ] ], [ [ "### Train\nIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. \n> **You should aim for a loss less than 3.5.** \n\nYou should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.", "_____no_output_____" ] ], [ [ "#for debugging purposes\n# import os\n# os.environ['CUDA_LAUNCH_BLOCKING'] = \"1\"", "_____no_output_____" ], [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n\n# create model and move to gpu if available\nrnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=myDropout)\nif train_on_gpu:\n rnn.cuda()\n\n# defining loss and optimization functions for training\noptimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)\ncriterion = nn.CrossEntropyLoss()\n\ntry:\n rnn = helper.load_model('./save/trained_rnn_mid_we')\n print(\"loaded mid save model\")\nexcept:\n try:\n rnn = helper.load_model('./save/trained_rnn')\n print(\"failed mid save.. 
loaded global model\")\n except:\n print(\"could not load any model\")\n \nfinally:\n print(rnn)\n \n\n# training the model\ntrained_rnn = train_rnn_copy(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches, myGlobalLoss)\n\n# saving the trained model\nhelper.save_model('./save/trained_rnn', trained_rnn)\nprint('Model Trained and Saved')", "could not load any model\nRNN(\n (dropout): Dropout(p=0.5)\n (embed): Embedding(21389, 300)\n (lstm): LSTM(300, 512, num_layers=2, batch_first=True, dropout=0.5)\n (fc): Linear(in_features=512, out_features=21389, bias=True)\n)\nTraining for 10 epoch(s), 512 batch size, show every 200, global loss 5.0000...\nEpoch: 1/10 Batch: 200/1741 Loss: 5.5300157618522645\nEpoch: 1/10 Batch: 400/1741 Loss: 4.861690397262573\nGlobal Loss 5 ---> 4.861690397262573. Saving...\n" ] ], [ [ "### Question: How did you decide on your model hyperparameters? \nFor example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?", "_____no_output_____" ], [ "**Answer:** (Write answer, here)\n\n- Tried with multiple combinations of hyperparameters to get optimum results. \n- sequence_length: Tried different sequence lengths between 5-30. Higher sequence lengths took more time to train. Therefore, used 10 which gave satisfactory results.\n- batch size: Higher batch size resulted in better results. Due to GPU memory limitations used 512 with embedding. When tried without embedding, the maximum size (again due to memory limitation) was 128\n- embedding layer: To begin with, for experimentation purposes, did not use embedding. Later, when the embedding was used memory and time seedup were recorded.\n- learning rate: Tried different leanring rates. During initial investigations, higher learning rates ~0.01 did not converge well to a satisfactory solution. Also, tried decreaing learning rate (manually) after a few epoches to see marginal improvements. Then tried between 0.001 to 0.0005. 0.001 gave the best results. Therefore, used the same.\n- hidden dim: Increasing hidden dim decreased loss. But, due to memory limitations used 512\n- n_layers: A value between 1-3 is recommended. 2 was a good choice and gave good results.", "_____no_output_____" ], [ "---\n# Checkpoint\n\nAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport torch\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\ntrained_rnn = helper.load_model('./save/trained_rnn')", "_____no_output_____" ] ], [ [ "## Generate TV Script\nWith the network trained and saved, you'll use it to generate a new, \"fake\" Seinfeld TV script in this section.\n\n### Generate Text\nTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. 
Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nimport torch.nn.functional as F\n\ndef generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):\n \"\"\"\n Generate text using the neural network\n :param decoder: The PyTorch Module that holds the trained neural network\n :param prime_id: The word id to start the first prediction\n :param int_to_vocab: Dict of word id keys to word values\n :param token_dict: Dict of puncuation tokens keys to puncuation values\n :param pad_value: The value used to pad a sequence\n :param predict_len: The length of text to generate\n :return: The generated text\n \"\"\"\n rnn.eval()\n \n # create a sequence (batch_size=1) with the prime_id\n current_seq = np.full((1, sequence_length), pad_value)\n current_seq[-1][-1] = prime_id\n predicted = [int_to_vocab[prime_id]]\n \n for _ in range(predict_len):\n if train_on_gpu:\n current_seq = torch.LongTensor(current_seq).cuda()\n else:\n current_seq = torch.LongTensor(current_seq)\n \n # initialize the hidden state\n hidden = rnn.init_hidden(current_seq.size(0))\n \n # get the output of the rnn\n output, _ = rnn(current_seq, hidden)\n \n # get the next word probabilities\n p = F.softmax(output, dim=1).data\n if(train_on_gpu):\n p = p.cpu() # move to cpu\n \n # use top_k sampling to get the index of the next word\n top_k = 5\n p, top_i = p.topk(top_k)\n top_i = top_i.numpy().squeeze()\n \n # select the likely next word index with some element of randomness\n p = p.numpy().squeeze()\n word_i = np.random.choice(top_i, p=p/p.sum())\n \n # retrieve that word from the dictionary\n word = int_to_vocab[word_i]\n predicted.append(word) \n \n # the generated word becomes the next \"current sequence\" and the cycle can continue\n current_seq = np.roll(current_seq, -1, 1)\n current_seq[-1][-1] = word_i\n \n gen_sentences = ' '.join(predicted)\n \n # Replace punctuation tokens\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n gen_sentences = gen_sentences.replace(' ' + token.lower(), key)\n gen_sentences = gen_sentences.replace('\\n ', '\\n')\n gen_sentences = gen_sentences.replace('( ', '(')\n \n # return all the sentences\n return gen_sentences", "_____no_output_____" ] ], [ [ "### Generate a New Script\nIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:\n- \"jerry\"\n- \"elaine\"\n- \"george\"\n- \"kramer\"\n\nYou can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)", "_____no_output_____" ] ], [ [ "# run the cell multiple times to get different results!\ngen_length = 400 # modify the length to your preference\nprime_word = 'jerry' # name for starting the script\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\npad_word = helper.SPECIAL_WORDS['PADDING']\ngenerated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)\nprint(generated_script)", "/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:51: UserWarning: RNN module weights are not part of single contiguous chunk of memory. 
This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().\n" ] ], [ [ "#### Save your favorite scripts\n\nOnce you have a script that you like (or find interesting), save it to a text file!", "_____no_output_____" ] ], [ [ "# save script to a text file\nf = open(\"generated_script_1.txt\",\"w\")\nf.write(generated_script)\nf.close()", "_____no_output_____" ] ], [ [ "# The TV Script is Not Perfect\nIt's ok if the TV script doesn't make perfect sense. It should look like alternating lines of dialogue, here is one such example of a few generated lines.\n\n### Example generated script\n\n>jerry: what about me?\n>\n>jerry: i don't have to wait.\n>\n>kramer:(to the sales table)\n>\n>elaine:(to jerry) hey, look at this, i'm a good doctor.\n>\n>newman:(to elaine) you think i have no idea of this...\n>\n>elaine: oh, you better take the phone, and he was a little nervous.\n>\n>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.\n>\n>jerry: oh, yeah. i don't even know, i know.\n>\n>jerry:(to the phone) oh, i know.\n>\n>kramer:(laughing) you know...(to jerry) you don't know.\n\nYou can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally. \n\n# Submitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save another copy as an HTML file by clicking \"File\" -> \"Download as..\"->\"html\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission. Once you download these files, compress them into one zip file for submission.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d03384e3778c480bb1d88573222d9ecca6e8bbff
8,751
ipynb
Jupyter Notebook
cycle_2_fancy_functions/cycle_2_lecture_recursion_higher_post_recording.ipynb
magicicada/cs1px_2020
ea3c0974566396e216d71748b208413f86be6e48
[ "BSD-3-Clause" ]
2
2022-01-12T13:10:42.000Z
2022-02-12T16:16:47.000Z
cycle_2_fancy_functions/cycle_2_lecture_recursion_higher_post_recording.ipynb
magicicada/cs1px_2020
ea3c0974566396e216d71748b208413f86be6e48
[ "BSD-3-Clause" ]
null
null
null
cycle_2_fancy_functions/cycle_2_lecture_recursion_higher_post_recording.ipynb
magicicada/cs1px_2020
ea3c0974566396e216d71748b208413f86be6e48
[ "BSD-3-Clause" ]
2
2021-02-02T16:20:09.000Z
2021-02-19T13:08:53.000Z
22.438462
185
0.493315
[ [ [ "**Recursion and Higher Order Functions**\n\nToday we're tackling recursion, and touching on higher-order functions in Python. \n\n\nA **recursive** function is one that calls itself. \n\nA classic example: the Fibonacci sequence.\n\nThe Fibonacci sequence was originally described to model population growth, and is self-referential in its definition.\n\nThe nth Fib number is defined in terms of the previous two:\n- F(n) = F(n-1) + F(n-2)\n- F(1) = 0\n- F(2) = 1\n\nAnother classic example: \nFactorial: \n- n! = n(n-1)(n-2)(n-3) ... 1\nor: \n- n! = n*(n-1)!\n\nLet's look at an implementation of the factorial and of the Fibonacci sequence in Python:\n", "_____no_output_____" ] ], [ [ "def factorial(n):\n if n == 1:\n return 1\n else:\n return n*factorial(n-1)\n \nprint(factorial(5))\n\n\n\n\ndef fibonacci(n):\n if n == 1:\n return 0\n elif n == 2:\n return 1\n else:\n# print('working on number ' + str(n))\n return fibonacci(n-1)+fibonacci(n-2)\n \nfibonacci(7)", "120\n" ] ], [ [ "There are two very important parts of these functions: a base case (or two) and a recursive case. When designing recursive functions it can help to think about these two cases!\n\nThe base case is the case when we know we are done, and can just return a value. (e.g. in fibonacci above there are two base cases, `n ==1` and `n ==2`).\n\nThe recursive case is the case when we make the recursive call - that is we call the function again. ", "_____no_output_____" ], [ "Let's write a function that counts down from a parameter n to zero, and then prints \"Blastoff!\".", "_____no_output_____" ] ], [ [ "def countdown(n):\n# base case\n if n == 0:\n print('Blastoff!')\n # recursive case\n else:\n print(n)\n countdown(n-1)\n\ncountdown(10)", "10\n9\n8\n7\n6\n5\n4\n3\n2\n1\nBlastoff!\n" ] ], [ [ "Let's write a recursive function that adds up the elements of a list:", "_____no_output_____" ] ], [ [ "def add_up_list(my_list):\n# base case\n if len(my_list) == 0:\n return 0\n# recursive case\n else:\n first_elem = my_list[0]\n return first_elem + add_up_list(my_list[1:])\n\nmy_list = [1, 2, 1, 3, 4]\nprint(add_up_list(my_list))\n", "11\n" ] ], [ [ "**Higher-order functions**\n\nare functions that takes a function as an argument or returns a function. We will talk briefly about functions that take a function as an argument. Let's look at an example. 
", "_____no_output_____" ] ], [ [ "def h(x):\n return x+4\n\ndef g(x):\n return x**2\n\ndef doItTwice(f, x):\n return f(f(x))\n\nprint(doItTwice(h, 3))\nprint(doItTwice(g, 3))\n", "11\n81\n" ] ], [ [ "A common reason for using a higher-order function is to apply a parameter-specified function repeatedly over a data structure (like a list or a dictionary).\n\n\nLet's look at an example function that applies a parameter function to every element of a list:", "_____no_output_____" ] ], [ [ "def sampleFunction1(x):\n return 2*x\n\n\ndef sampleFunction2(x):\n return x % 2\n\ndef applyToAll(func, myList):\n newList = []\n for element in myList:\n newList.append(func(element))\n return newList\n \n \naList = [2, 3, 4, 5]\n\nprint(applyToAll(sampleFunction1, aList))\nprint(applyToAll(sampleFunction2, aList))\n\n\n", "[4, 6, 8, 10]\n[0, 1, 0, 1]\n" ] ], [ [ "Something like this applyToAll function is built into Python, and is called map", "_____no_output_____" ] ], [ [ "def sampleFunction1(x):\n return 2*x\n\n\ndef sampleFunction2(x):\n return x % 2\n \naList = [2, 3, 4, 5]\n\nprint(list(map(sampleFunction1, aList)))\n\n\nbList = [2, 3, 4, 5]\nprint(list(map(sampleFunction2, aList)))\n", "[4, 6, 8, 10]\n[0, 1, 0, 1]\n" ] ], [ [ "Python has quite a few built-in functions (some higher-order, some not). You can find lots of them here: https://docs.python.org/3.3/library/functions.html \n\n(I **will not** by default require you to remember those for an exam!!)\n \n\n \nExample: zip does something that may be familiar from last week's lab.", "_____no_output_____" ] ], [ [ "x = [1, 2, 3]\ny = [4, 5, 6]\nzipped = zip(x, y)\nprint(list(zipped))", "[(1, 4), (2, 5), (3, 6)]\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d03389d6065a7d78a58bd5c88dd5f1d8c8c2d70f
215,083
ipynb
Jupyter Notebook
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
399299a120b6bf717106440916c7f5d6b7612421
[ "BSD-3-Clause" ]
68
2019-01-09T21:53:55.000Z
2022-02-16T17:14:22.000Z
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
399299a120b6bf717106440916c7f5d6b7612421
[ "BSD-3-Clause" ]
null
null
null
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
399299a120b6bf717106440916c7f5d6b7612421
[ "BSD-3-Clause" ]
62
2019-01-09T21:43:48.000Z
2021-11-15T04:26:25.000Z
26.596142
350
0.357527
[ [ [ "# Introduction to `pandas`", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd", "_____no_output_____" ] ], [ [ "## Series and Data Frames", "_____no_output_____" ], [ "### Series objects", "_____no_output_____" ], [ "A `Series` is like a vector. All elements must have the same type or are nulls.", "_____no_output_____" ] ], [ [ "s = pd.Series([1,1,2,3] + [None])\ns", "_____no_output_____" ] ], [ [ "### Size", "_____no_output_____" ] ], [ [ "s.size", "_____no_output_____" ] ], [ [ "### Unique Counts", "_____no_output_____" ] ], [ [ "s.value_counts()", "_____no_output_____" ] ], [ [ "### Special types of series", "_____no_output_____" ], [ "#### Strings", "_____no_output_____" ] ], [ [ "words = 'the quick brown fox jumps over the lazy dog'.split()\ns1 = pd.Series([' '.join(item) for item in zip(words[:-1], words[1:])])\ns1", "_____no_output_____" ], [ "s1.str.upper()", "_____no_output_____" ], [ "s1.str.split()", "_____no_output_____" ], [ "s1.str.split().str[1]", "_____no_output_____" ] ], [ [ "### Categories", "_____no_output_____" ] ], [ [ "s2 = pd.Series(['Asian', 'Asian', 'White', 'Black', 'White', 'Hispanic'])\ns2", "_____no_output_____" ], [ "s2 = s2.astype('category')\ns2", "_____no_output_____" ], [ "s2.cat.categories", "_____no_output_____" ], [ "s2.cat.codes", "_____no_output_____" ] ], [ [ "### DataFrame objects", "_____no_output_____" ], [ "A `DataFrame` is like a matrix. Columns in a `DataFrame` are `Series`.\n\n- Each column in a DataFrame represents a **variale**\n- Each row in a DataFrame represents an **observation**\n- Each cell in a DataFrame represents a **value**", "_____no_output_____" ] ], [ [ "df = pd.DataFrame(dict(num=[1,2,3] + [None]))\ndf", "_____no_output_____" ], [ "df.num", "_____no_output_____" ] ], [ [ "### Index\n\nRow and column identifiers are of `Index` type.\n\nSomewhat confusingly, index is also a a synonym for the row identifiers.", "_____no_output_____" ] ], [ [ "df.index", "_____no_output_____" ] ], [ [ "#### Setting a column as the row index", "_____no_output_____" ] ], [ [ "df", "_____no_output_____" ], [ "df1 = df.set_index('num')\ndf1", "_____no_output_____" ] ], [ [ "#### Making an index into a column", "_____no_output_____" ] ], [ [ "df1.reset_index()", "_____no_output_____" ] ], [ [ "### Columns\n\nThis is just a different index object", "_____no_output_____" ] ], [ [ "df.columns", "_____no_output_____" ] ], [ [ "### Getting raw values\n\nSometimes you just want a `numpy` array, and not a `pandas` object.", "_____no_output_____" ] ], [ [ "df.values", "_____no_output_____" ] ], [ [ "## Creating Data Frames", "_____no_output_____" ], [ "### Manual", "_____no_output_____" ] ], [ [ "from collections import OrderedDict", "_____no_output_____" ], [ "n = 5\ndates = pd.date_range(start='now', periods=n, freq='d')\ndf = pd.DataFrame(OrderedDict(pid=np.random.randint(100, 999, n), \n weight=np.random.normal(70, 20, n),\n height=np.random.normal(170, 15, n),\n date=dates,\n ))\ndf", "_____no_output_____" ] ], [ [ "### From file\n\nYou can read in data from many different file types - plain text, JSON, spreadsheets, databases etc. 
Functions to read in data look like `read_X` where X is the data type.", "_____no_output_____" ] ], [ [ "%%file measures.txt\npid\tweight\theight\tdate\n328\t72.654347\t203.560866\t2018-11-11 14:16:18.148411\n756\t34.027679\t189.847316\t2018-11-12 14:16:18.148411\n185\t28.501914\t158.646074\t2018-11-13 14:16:18.148411\n507\t17.396343\t180.795993\t2018-11-14 14:16:18.148411\n919\t64.724301\t173.564725\t2018-11-15 14:16:18.148411", "Writing measures.txt\n" ], [ "df = pd.read_table('measures.txt')\ndf", "_____no_output_____" ] ], [ [ "## Indexing Data Frames", "_____no_output_____" ], [ "### Implicit defaults\n\nif you provide a slice, it is assumed that you are asking for rows.", "_____no_output_____" ] ], [ [ "df[1:3]", "_____no_output_____" ] ], [ [ "If you provide a singe value or list, it is assumed that you are asking for columns.", "_____no_output_____" ] ], [ [ "df[['pid', 'weight']]", "_____no_output_____" ] ], [ [ "### Extracting a column", "_____no_output_____" ], [ "#### Dictionary style access", "_____no_output_____" ] ], [ [ "df['pid']", "_____no_output_____" ] ], [ [ "#### Property style access\n\nThis only works for column names tat are also valid Python identifier (i.e., no spaces or dashes or keywords)", "_____no_output_____" ] ], [ [ "df.pid", "_____no_output_____" ] ], [ [ "### Indexing by location\n\nThis is similar to `numpy` indexing", "_____no_output_____" ] ], [ [ "df.iloc[1:3, :]", "_____no_output_____" ], [ "df.iloc[1:3, [True, False, True]]", "_____no_output_____" ] ], [ [ "### Indexing by name", "_____no_output_____" ] ], [ [ "df.loc[1:3, 'weight':'height']", "_____no_output_____" ] ], [ [ "**Warning**: When using `loc`, the row slice indicates row names, not positions.", "_____no_output_____" ] ], [ [ "df1 = df.copy()\ndf1.index = df.index + 1\ndf1", "_____no_output_____" ], [ "df1.loc[1:3, 'weight':'height']", "_____no_output_____" ] ], [ [ "## Structure of a Data Frame", "_____no_output_____" ], [ "### Data types", "_____no_output_____" ] ], [ [ "df.dtypes", "_____no_output_____" ] ], [ [ "### Converting data types", "_____no_output_____" ], [ "#### Using `astype` on one column", "_____no_output_____" ] ], [ [ "df.pid = df.pid.astype('category')", "_____no_output_____" ] ], [ [ "#### Using `astype` on multiple columns", "_____no_output_____" ] ], [ [ "df = df.astype(dict(weight=float, height=float))", "_____no_output_____" ] ], [ [ "#### Using a conversion function", "_____no_output_____" ] ], [ [ "df.date = pd.to_datetime(df.date)", "_____no_output_____" ] ], [ [ "#### Check", "_____no_output_____" ] ], [ [ "df.dtypes", "_____no_output_____" ] ], [ [ "### Basic properties", "_____no_output_____" ] ], [ [ "df.size", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ], [ "df.describe()", "_____no_output_____" ] ], [ [ "### Inspection", "_____no_output_____" ] ], [ [ "df.head(n=3)", "_____no_output_____" ], [ "df.tail(n=3)", "_____no_output_____" ], [ "df.sample(n=3)", "_____no_output_____" ], [ "df.sample(frac=0.5)", "_____no_output_____" ] ], [ [ "## Selecting, Renaming and Removing Columns", "_____no_output_____" ], [ "### Selecting columns", "_____no_output_____" ] ], [ [ "df.filter(items=['pid', 'date'])", "_____no_output_____" ], [ "df.filter(regex='.*ght')", "_____no_output_____" ] ], [ [ "#### Note that you can also use regular string methods on the columns", "_____no_output_____" ] ], [ [ "df.loc[:, df.columns.str.contains('d')]", "_____no_output_____" ] ], [ [ "### Renaming columns", "_____no_output_____" ] ], [ [ "df.rename(dict(weight='w', 
height='h'), axis=1)", "_____no_output_____" ], [ "orig_cols = df.columns ", "_____no_output_____" ], [ "df.columns = list('abcd')", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "df.columns = orig_cols", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "### Removing columns", "_____no_output_____" ] ], [ [ "df.drop(['pid', 'date'], axis=1)", "_____no_output_____" ], [ "df.drop(columns=['pid', 'date'])", "_____no_output_____" ], [ "df.drop(columns=df.columns[df.columns.str.contains('d')])", "_____no_output_____" ] ], [ [ "## Selecting, Renaming and Removing Rows", "_____no_output_____" ], [ "### Selecting rows", "_____no_output_____" ] ], [ [ "df[df.weight.between(60,70)]", "_____no_output_____" ], [ "df[(69 <= df.weight) & (df.weight < 70)]", "_____no_output_____" ], [ "df[df.date.between(pd.to_datetime('2018-11-13'), \n pd.to_datetime('2018-11-15 23:59:59'))]", "_____no_output_____" ] ], [ [ "### Renaming rows", "_____no_output_____" ] ], [ [ "df.rename({i:letter for i,letter in enumerate('abcde')})", "_____no_output_____" ], [ "df.index = ['the', 'quick', 'brown', 'fox', 'jumphs']", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "df = df.reset_index(drop=True)", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "### Dropping rows", "_____no_output_____" ] ], [ [ "df.drop([1,3], axis=0)", "_____no_output_____" ] ], [ [ "#### Dropping duplicated data", "_____no_output_____" ] ], [ [ "df['something'] = [1,1,None,2,None]", "_____no_output_____" ], [ "df.loc[df.something.duplicated()]", "_____no_output_____" ], [ "df.drop_duplicates(subset='something')", "_____no_output_____" ] ], [ [ "#### Dropping missing data", "_____no_output_____" ] ], [ [ "df", "_____no_output_____" ], [ "df.something.fillna(0)", "_____no_output_____" ], [ "df.something.ffill()", "_____no_output_____" ], [ "df.something.bfill()", "_____no_output_____" ], [ "df.something.interpolate()", "_____no_output_____" ], [ "df.dropna()", "_____no_output_____" ] ], [ [ "## Transforming and Creating Columns", "_____no_output_____" ] ], [ [ "df.assign(bmi=df['weight'] / (df['height']/100)**2)", "_____no_output_____" ], [ "df['bmi'] = df['weight'] / (df['height']/100)**2", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "df['something'] = [2,2,None,None,3]", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "## Sorting Data Frames", "_____no_output_____" ], [ "### Sort on indexes", "_____no_output_____" ] ], [ [ "df.sort_index(axis=1)", "_____no_output_____" ], [ "df.sort_index(axis=0, ascending=False)", "_____no_output_____" ] ], [ [ "### Sort on values", "_____no_output_____" ] ], [ [ "df.sort_values(by=['something', 'bmi'], ascending=[True, False])", "_____no_output_____" ] ], [ [ "## Summarizing", "_____no_output_____" ], [ "### Apply an aggregation function", "_____no_output_____" ] ], [ [ "df.select_dtypes(include=np.number)", "_____no_output_____" ], [ "df.select_dtypes(include=np.number).agg(np.sum)", "_____no_output_____" ], [ "df.agg(['count', np.sum, np.mean])", "_____no_output_____" ] ], [ [ "## Split-Apply-Combine\n\nWe often want to perform subgroup analysis (conditioning by some discrete or categorical variable). This is done with `groupby` followed by an aggregate function. 
Conceptually, we split the data frame into separate groups, apply the aggregate function to each group separately, then combine the aggregated results back into a single data frame.", "_____no_output_____" ] ], [ [ "df['treatment'] = list('ababa')", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "grouped = df.groupby('treatment')", "_____no_output_____" ], [ "grouped.get_group('a')", "_____no_output_____" ], [ "grouped.mean()", "_____no_output_____" ] ], [ [ "### Using `agg` with `groupby`", "_____no_output_____" ] ], [ [ "grouped.agg('mean')", "_____no_output_____" ], [ "grouped.agg(['mean', 'std'])", "_____no_output_____" ], [ "grouped.agg({'weight': ['mean', 'std'], 'height': ['min', 'max'], 'bmi': lambda x: (x**2).sum()})", "_____no_output_____" ] ], [ [ "### Using `trasnform` wtih `groupby`", "_____no_output_____" ] ], [ [ "g_mean = grouped['weight', 'height'].transform(np.mean)\ng_mean", "_____no_output_____" ], [ "g_std = grouped['weight', 'height'].transform(np.std)\ng_std", "_____no_output_____" ], [ "(df[['weight', 'height']] - g_mean)/g_std", "_____no_output_____" ] ], [ [ "## Combining Data Frames", "_____no_output_____" ] ], [ [ "df", "_____no_output_____" ], [ "df1 = df.iloc[3:].copy()", "_____no_output_____" ], [ "df1.drop('something', axis=1, inplace=True)\ndf1", "_____no_output_____" ] ], [ [ "### Adding rows\n\nNote that `pandas` aligns by column indexes automatically.", "_____no_output_____" ] ], [ [ "df.append(df1, sort=False)", "_____no_output_____" ], [ "pd.concat([df, df1], sort=False)", "_____no_output_____" ] ], [ [ "### Adding columns", "_____no_output_____" ] ], [ [ "df.pid", "_____no_output_____" ], [ "df2 = pd.DataFrame(OrderedDict(pid=[649, 533, 400, 600], age=[23,34,45,56]))", "_____no_output_____" ], [ "df2.pid", "_____no_output_____" ], [ "df.pid = df.pid.astype('int')", "_____no_output_____" ], [ "pd.merge(df, df2, on='pid', how='inner')", "_____no_output_____" ], [ "pd.merge(df, df2, on='pid', how='left')", "_____no_output_____" ], [ "pd.merge(df, df2, on='pid', how='right')", "_____no_output_____" ], [ "pd.merge(df, df2, on='pid', how='outer')", "_____no_output_____" ] ], [ [ "### Merging on the index", "_____no_output_____" ] ], [ [ "df1 = pd.DataFrame(dict(x=[1,2,3]), index=list('abc'))\ndf2 = pd.DataFrame(dict(y=[4,5,6]), index=list('abc'))\ndf3 = pd.DataFrame(dict(z=[7,8,9]), index=list('abc'))", "_____no_output_____" ], [ "df1", "_____no_output_____" ], [ "df2", "_____no_output_____" ], [ "df3", "_____no_output_____" ], [ "df1.join([df2, df3])", "_____no_output_____" ] ], [ [ "## Fixing common DataFrame issues", "_____no_output_____" ], [ "### Multiple variables in a column", "_____no_output_____" ] ], [ [ "df = pd.DataFrame(dict(pid_treat = ['A-1', 'B-2', 'C-1', 'D-2']))\ndf", "_____no_output_____" ], [ "df.pid_treat.str.split('-')", "_____no_output_____" ], [ "df.pid_treat.str.split('-').apply(pd.Series, index=['pid', 'treat'])", "_____no_output_____" ] ], [ [ "### Multiple values in a cell", "_____no_output_____" ] ], [ [ "df = pd.DataFrame(dict(pid=['a', 'b', 'c'], vals = [(1,2,3), (4,5,6), (7,8,9)]))\ndf", "_____no_output_____" ], [ "df[['t1', 't2', 't3']] = df.vals.apply(pd.Series)\ndf", "_____no_output_____" ], [ "df.drop('vals', axis=1, inplace=True)", "_____no_output_____" ], [ "pd.melt(df, id_vars='pid', value_name='vals').drop('variable', axis=1)", "_____no_output_____" ] ], [ [ "## Reshaping Data Frames\n\nSometimes we need to make rows into columns or vice versa.", "_____no_output_____" ], [ "### Converting multiple columns 
into a single column\n\nThis is often useful if you need to condition on some variable.", "_____no_output_____" ] ], [ [ "url = 'https://raw.githubusercontent.com/uiuc-cse/data-fa14/gh-pages/data/iris.csv'\niris = pd.read_csv(url)", "_____no_output_____" ], [ "iris.head()", "_____no_output_____" ], [ "iris.shape", "_____no_output_____" ], [ "df_iris = pd.melt(iris, id_vars='species')", "_____no_output_____" ], [ "df_iris.sample(10)", "_____no_output_____" ] ], [ [ "## Chaining commands\n\nSometimes you see this functional style of method chaining that avoids the need for temporary intermediate variables.", "_____no_output_____" ] ], [ [ "(\n iris.\n sample(frac=0.2).\n filter(regex='s.*').\n assign(both=iris.sepal_length + iris.sepal_length).\n groupby('species').agg(['mean', 'sum']).\n pipe(lambda x: np.around(x, 1))\n)", "_____no_output_____" ] ], [ [ "## Moving between R and Python in Jupyter", "_____no_output_____" ] ], [ [ "%load_ext rpy2.ipython", "_____no_output_____" ], [ "import warnings\nwarnings.simplefilter('ignore', FutureWarning)", "_____no_output_____" ], [ "iris = %R iris", "_____no_output_____" ], [ "iris.head()", "_____no_output_____" ], [ "iris_py = iris.copy()\niris_py.Species = iris_py.Species.str.upper()", "_____no_output_____" ], [ "%%R -i iris_py -o iris_r\n\niris_r <- iris_py[1:3,]", "_____no_output_____" ], [ "iris_r", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
d033a71ccf94190d5f8332b9c1f532983309c256
10,834
ipynb
Jupyter Notebook
Week10/Macro_options.ipynb
ProcJimi/curriculum
c64896ee7382b1ff04757b08625c140b0c7f8a27
[ "MIT" ]
null
null
null
Week10/Macro_options.ipynb
ProcJimi/curriculum
c64896ee7382b1ff04757b08625c140b0c7f8a27
[ "MIT" ]
null
null
null
Week10/Macro_options.ipynb
ProcJimi/curriculum
c64896ee7382b1ff04757b08625c140b0c7f8a27
[ "MIT" ]
null
null
null
75.236111
4,481
0.584364
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
d033ab2aecec4eb62c89fe03b8485bb096998358
356,269
ipynb
Jupyter Notebook
notebooks/retired/nbm_reliability_maps.ipynb
m-wessler/nbm-verify
e7bbbb6bf56c727e777e5119bff8bfc65a9a1d94
[ "MIT" ]
null
null
null
notebooks/retired/nbm_reliability_maps.ipynb
m-wessler/nbm-verify
e7bbbb6bf56c727e777e5119bff8bfc65a9a1d94
[ "MIT" ]
null
null
null
notebooks/retired/nbm_reliability_maps.ipynb
m-wessler/nbm-verify
e7bbbb6bf56c727e777e5119bff8bfc65a9a1d94
[ "MIT" ]
null
null
null
1,131.012698
346,012
0.95459
[ [ [ "import os, gc, sys\nimport pygrib\nimport regionmask\nimport cartopy\nimport cartopy.crs as ccrs\nimport numpy as np\nimport pandas as pd\nimport xarray as xr\nimport geopandas as gpd\nimport multiprocessing as mp\nimport matplotlib.pyplot as plt \n\nfrom glob import glob\nfrom functools import partial\nfrom matplotlib import gridspec\nfrom datetime import datetime, timedelta\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nfrom matplotlib import colors\n\nos.environ['OMP_NUM_THREADS'] = '1'", "_____no_output_____" ], [ "# CONFIG # # CONFIG # # CONFIG # # CONFIG # # CONFIG # \ncwa = 'SEW'#sys.argv[1]\nfhr_start, fhr_end, fhr_step = 24, 108, 6\n\nstart_date = datetime(2020, 10, 1, 0)\nend_date = datetime(2020, 12, 3, 12)\n\nproduce_thresholds = [0.01, 0.25, 0.50]\nbint, bins_custom = 5, None\n\ncwa_bounds = {\n 'WESTUS':[30, 50, -130, -100],\n 'SEW':[46.0, 49.0, -125.0, -120.5],\n 'SLC':[37.0, 42.0, -114.0, -110],\n 'MSO':[44.25, 49.0, -116.75, -112.25],\n 'MTR':[35.75, 38.75, -123.5, -120.25],}\n# CONFIG # # CONFIG # # CONFIG # # CONFIG # # CONFIG # ", "_____no_output_____" ], [ "nbm_dir = '/scratch/general/lustre/u1070830/nbm/'\nurma_dir = '/scratch/general/lustre/u1070830/urma/'\ntmp_dir = '/scratch/general/lustre/u1070830/tmp/'\nfig_dir = '/uufs/chpc.utah.edu/common/home/steenburgh-group10/mewessler/nbm/'\nos.makedirs(tmp_dir, exist_ok=True)", "_____no_output_____" ], [ "def resize_colobar(event):\n # Tell matplotlib to re-draw everything, so that we can get\n # the correct location from get_position.\n plt.draw()\n\n posn = ax.get_position()\n colorbar_ax.set_position([posn.x0 + posn.width + 0.01, posn.y0,\n 0.04, axpos.height])\n \ndef calc_pbin(pbin, _bint, _thresh, _data, _urma):\n\n p0, p1 = pbin-_bint/2, pbin+_bint/2\n N = xr.where((_data >= p0) & (_data < p1), 1, 0).sum(dim=['valid'])\n n = xr.where((_data >= p0) & (_data < p1) & (_urma > _thresh), 1, 0).sum(dim='valid')\n \n return pbin, n, N\n\ndef calc_pbin_fixed(pbin, _thresh, _data, _urma):\n\n p0, p1 = pbin\n N = xr.where((_data >= p0) & (_data <= p1), 1, 0).sum(dim=['valid'])\n n = xr.where((_data >= p0) & (_data <= p1) & (_urma > _thresh), 1, 0).sum(dim='valid')\n \n return pbin, n, N", "_____no_output_____" ], [ "extract_dir = nbm_dir + 'extract/'\nextract_flist = sorted(glob(extract_dir + '*'))\n\nif not os.path.isfile(urma_dir + 'agg/urma_agg.nc'):\n pass \n #print('URMA aggregate not found')\n\nelse:\n #print('Getting URMA aggregate from file')\n urma_whole = xr.open_dataset(urma_dir + 'agg/urma_agg.nc')['apcp24h_mm']\n\nurma_whole = urma_whole/25.4\nurma_whole = urma_whole.rename('apcp24h_in')", "_____no_output_____" ], [ "geodir = '../forecast-zones/'\nzones_shapefile = glob(geodir + '*.shp')[0]\n\n# Read the shapefile\nzones = gpd.read_file(zones_shapefile)\n\n# Prune to Western Region using TZ\nzones = zones.set_index('TIME_ZONE').loc[['M', 'Mm', 'm', 'MP', 'P']].reset_index()\ncwas = zones.dissolve(by='CWA')", "_____no_output_____" ], [ "pbin_stats_all = {}\n\nfor thresh in produce_thresholds:\n\n for fhr in np.arange(fhr_start, fhr_end+1, fhr_step):\n\n open_file = [f for f in extract_flist if 'fhr%03d'%fhr in f][0]\n print(open_file)\n\n # Subset the times\n nbm = xr.open_dataset(open_file)\n nbm_time = nbm.valid\n urma_time = urma_whole.valid\n\n time_match = nbm_time[np.in1d(nbm_time, urma_time)].values\n\n time_match = np.array([t for t in time_match if pd.to_datetime(t) >= start_date])\n time_match = np.array([t for t in time_match if pd.to_datetime(t) <= end_date])\n\n nbm = 
nbm.sel(valid=time_match)\n urma = urma_whole.sel(valid=time_match)\n\n date0 = pd.to_datetime(time_match[0]).strftime('%Y/%m/%d %H UTC')\n date1 = pd.to_datetime(time_match[-1]).strftime('%Y/%m/%d %H UTC')\n\n nlat, xlat, nlon, xlon = cwa_bounds[cwa]\n\n lats, lons = nbm.lat, nbm.lon\n\n idx = np.where(\n (lats >= nlat) & (lats <= xlat) &\n (lons >= nlon) & (lons <= xlon))\n\n nbm = nbm.isel(x=slice(idx[1].min(), idx[1].max()), y=slice(idx[0].min(), idx[0].max()))\n urma = urma.isel(x=slice(idx[1].min(), idx[1].max()), y=slice(idx[0].min(), idx[0].max()))\n\n # Subset the threshold value\n nbm = nbm.sel(threshold=thresh)['probx']\n\n total_fc = xr.where(nbm > 0, 1, 0).sum()\n total_ob = xr.where(urma > thresh, 1, 0).sum()\n\n bins = np.arange(0, 101, bint)\n bins = bins_custom if bins_custom is not None else bins\n\n# calc_pbin_mp = partial(calc_pbin, _bint=bint, _thresh=thresh,\n# _data=nbm, _urma=urma)\n \n calc_pbin_mp = partial(calc_pbin_fixed, _thresh=thresh,\n _data=nbm, _urma=urma)\n \n pbin_stats = calc_pbin_mp([60, 80])\n\n# with mp.get_context('fork').Pool(len(bins)) as p:\n# pbin_stats = p.map(calc_pbin_mp, bins, chunksize=1)\n# p.close()\n# p.join()\n\n# pbin_stats_all[fhr] = np.array(pbin_stats, dtype=np.int)\n \n break\n break\n \npbins, n, N = pbin_stats", "/scratch/general/lustre/u1070830/nbm/extract/nbm_probx_fhr024.nc\n" ], [ "levels = np.hstack([0, np.array(pbins), 100])/100\nprint(levels)\n\np_levs = levels#np.array(pbins)/100\np_levs_locs = p_levs\np_colors = ['#5ab4ac','#5ab4ac','#f5f5f5', '#d8b365']\np_cmap = colors.ListedColormap(p_colors, name='p_cmap')\n\nfig = plt.figure(figsize=(12, 12), facecolor='w')\nax = fig.add_axes([0, 0, 1, 1], projection=ccrs.PlateCarree())\n\nax.add_feature(cartopy.feature.OCEAN, zorder=100, color='w', edgecolor=None)\n\nif cwa == 'WESTUS':\n cwas.geometry.boundary.plot(color=None, edgecolor='black', linewidth=0.75, ax=ax)\n ax.coastlines(linewidth=3.5)\n \nelse:\n cwas.geometry.boundary.plot(color=None, edgecolor='black', linewidth=2.5, ax=ax)\n zones.geometry.boundary.plot(color=None, linestyle='--', edgecolor='black', linewidth=0.75, ax=ax)\n ax.coastlines(linewidth=8)\n \ndata = xr.where(n > 5, n/N, np.nan)\n\ncbd = ax.contourf(data.lon, data.lat, data, levels=levels, alpha=0.5, \n cmap=p_cmap, vmin=0, vmax=1)\n\nnan_shade = xr.where(np.isnan(data), -1, np.nan)\nax.contourf(data.lon, data.lat, nan_shade, cmap='gray', alpha=0.25)\n\ncbar_ax = fig.add_axes([1.01, .04, .05, .92])\n# cbar_ax = fig.add_axes([.85, .04, .02, .92])\ncbar = plt.colorbar(cbd, cax=cbar_ax)\ncbar.ax.tick_params(labelsize=16)\nfig.canvas.mpl_connect('resize_event', resize_colobar)\n\nax.set_ylim(bottom=cwa_bounds[cwa][0]-0.25, top=cwa_bounds[cwa][1]+0.25)\nax.set_xlim(left=cwa_bounds[cwa][2]-0.25, right=cwa_bounds[cwa][3]+0.25)\n\nax.set_title('CWA: %s\\nThreshold: %.02f\"\\nBin: %d%% - %d%%'%(cwa, thresh, pbins[0], pbins[1]), fontsize=16)\ncbar.set_label(label='\\n[< Too Wet] Observed Relative Frequency [Too Dry >]', fontsize=16)\n\nax.grid(True, zorder=-10)\n\nprint(pbins)\nplt.show()", "[0. 0.6 0.8 1. ]\n[60, 80]\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d033c6c66383df8dbc89a37e3aaf051495bcd3ec
105,506
ipynb
Jupyter Notebook
analyse_cartpole_noise.ipynb
ssantos97/SyMO
58d7b64f888fd78cc27d4c1092071ef35725f0d4
[ "MIT" ]
null
null
null
analyse_cartpole_noise.ipynb
ssantos97/SyMO
58d7b64f888fd78cc27d4c1092071ef35725f0d4
[ "MIT" ]
null
null
null
analyse_cartpole_noise.ipynb
ssantos97/SyMO
58d7b64f888fd78cc27d4c1092071ef35725f0d4
[ "MIT" ]
null
null
null
393.679104
93,506
0.926431
[ [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport os, sys\nimport argparse\nimport torch\n\nfrom Code.Utils import from_pickle\nfrom Code.models import cartpole\nfrom Code.integrate_models import implicit_integration_DEL, integrate_ODE\nfrom Code.symo import SyMo_RT\nfrom Code.NN import LODE_RT, NODE_RT\nfrom Code.models import get_field, cartpole\n\nTHIS_DIR = os.getcwd()", "_____no_output_____" ], [ "DPI = 100\nFORMAT = 'png'\nLINE_SEGMENTS = 10\nARROW_SCALE = 60\nARROW_WIDTH = 6e-3\nLINE_WIDTH = 2\nsave_dir = \"Experiments_cartpole/noise\"\n\ndef get_args():\n return {'fig_dir': './figures/cartpole',\n 'gpu': 2,\n 'pred_tol': 1e-5 ,\n 'pred_maxiter': 10}\n\nclass ObjectView(object):\n def __init__(self, d): self.__dict__ = d\n\nargs = ObjectView(get_args())\n\ndevice = torch.device('cuda:' + str(args.gpu) if torch.cuda.is_available() else 'cpu')\n", "_____no_output_____" ], [ "\ndef load_stats(nn, models):\n #loads the stats of all models\n train_loss = []\n test_loss= []\n int_loss = []\n int_std = []\n E_loss = []\n E_std = []\n H_loss = []\n H_std = []\n for model in models:\n path = '{}/{}/{}{}-p-32x32_sigma_{}-stats.pkl'.format(THIS_DIR, save_dir, \"cartpole-\", nn, model)\n stats = from_pickle(path)\n if 'SyMo' in nn:\n train_loss.append(stats['train_loss_poses'])\n test_loss.append(stats['test_loss_poses'])\n else:\n train_loss.append(stats['train_loss_poses'])\n test_loss.append(stats['test_loss_poses'])\n \n int_loss.append(stats['int_loss_poses'])\n int_std.append(stats['int_std'])\n E_loss.append(stats['E_loss'])\n E_std.append(stats['E_std'])\n if nn != 'NODE-rk4' and nn != \"NODE-midpoint\":\n H_loss.append(stats['H_loss'])\n H_std.append(stats['H_std'])\n if nn != 'NODE-rk4' and nn != \"NODE-midpoint\":\n return train_loss, test_loss, int_loss, int_std, E_loss, E_std, H_loss, H_std\n else:\n return train_loss, test_loss, int_loss, int_std, E_loss, E_std\n\ndef load_stats_noiseless(nn, models):\n #loads the stats of all models\n train_loss = []\n test_loss= []\n int_loss = []\n int_std = []\n E_loss = []\n E_std = []\n H_loss = []\n H_std = []\n for model in models:\n path = '{}/{}/{}{}-p-{}-stats.pkl'.format(THIS_DIR, \"Experiments_cartpole/h=0.05\", \"cartpole-\", nn, model)\n stats = from_pickle(path)\n if 'SyMo' in nn:\n train_loss.append(stats['train_loss_poses'])\n test_loss.append(stats['test_loss_poses'])\n else:\n train_loss.append(stats['train_loss_poses'])\n test_loss.append(stats['test_loss_poses'])\n \n int_loss.append(stats['int_loss_poses'])\n int_std.append(stats['int_std'])\n E_loss.append(stats['E_loss'])\n E_std.append(stats['E_std'])\n if nn != 'NODE-rk4' and nn != \"NODE-midpoint\":\n H_loss.append(stats['H_loss'])\n H_std.append(stats['H_std'])\n if nn != 'NODE-rk4' and nn != \"NODE-midpoint\":\n return train_loss, test_loss, int_loss, int_std, E_loss, E_std, H_loss, H_std\n else:\n return train_loss, test_loss, int_loss, int_std, E_loss, E_std\nmodels=[\"32x32\"]\n#Load E2E-SyMo models\ntrain_loss_N_SYMO_noiseless, test_loss_N_SYMO_noiseless, int_loss_N_SYMO_noiseless, int_std_N_SYMO, E_loss_N_SYMO, E_std_N_SYMO, H_loss_N_SYMO, H_std_N_SYMO = load_stats_noiseless('N-SyMo', models)\n# Load SyMo models\ntrain_loss_SYMO_noiseless, test_loss_SYMO_noiseless, int_loss_SYMO_noiseless, int_std_SYMO, E_loss_SYMO, E_std_SYMO, H_loss_SYMO, H_std_SYMO = load_stats_noiseless('SyMo', models)\n#Load LODE_RK4 models\ntrain_loss_LODE_RK4_noiseless, test_loss_LODE_RK4_noiseless, int_loss_LODE_RK4_noiseless, int_std_LODE_RK4, 
E_loss_LODE_RK4, E_std_LODE_RK4, H_loss_LODE_RK4, H_std_LODE_RK4 = load_stats_noiseless('L-NODE-rk4', models)\n#Load LODE_RK2 models\ntrain_loss_LODE_RK2_noiseless, test_loss_LODE_RK2_noiseless, int_loss_LODE_RK2_noiseless, int_std_LODE_RK2, E_loss_LODE_RK2, E_std_LODE_RK2, H_loss_LODE_RK2, H_std_LODE_RK2 = load_stats_noiseless('L-NODE-midpoint', models)\n#Load NODE_RK4 models\ntrain_loss_NODE_RK4_noiseless, test_loss_NODE_RK4_noiseless, int_loss_NODE_RK4_noiseless, int_std_NODE_RK4, E_loss_NODE_RK4, E_std_NODE_RK4 = load_stats_noiseless('NODE-rk4', models)\n#Load NODE_RK2 models\ntrain_loss_NODE_RK2_noiseless, test_loss_NODE_RK2_noiseless, int_loss_NODE_RK2_noiseless, int_std_NODE_RK2, E_loss_NODE_RK2, E_std_NODE_RK2 = load_stats_noiseless('NODE-midpoint', models)\n\n\nmodels = [0.0001, 0.0005, 0.001, 0.005, 0.01]\n#Load E2E-SyMo models\ntrain_loss_N_SYMO, test_loss_N_SYMO, int_loss_N_SYMO, int_std_N_SYMO, E_loss_N_SYMO, E_std_N_SYMO, H_loss_N_SYMO, H_std_N_SYMO = load_stats('N-SyMo', models)\n# Load SyMo models\ntrain_loss_SYMO, test_loss_SYMO, int_loss_SYMO, int_std_SYMO, E_loss_SYMO, E_std_SYMO, H_loss_SYMO, H_std_SYMO = load_stats('SyMo', models)\n#Load LODE_RK4 models\ntrain_loss_LODE_RK4, test_loss_LODE_RK4, int_loss_LODE_RK4, int_std_LODE_RK4, E_loss_LODE_RK4, E_std_LODE_RK4, H_loss_LODE_RK4, H_std_LODE_RK4 = load_stats('L-NODE-rk4', models)\n#Load LODE_RK2 models\ntrain_loss_LODE_RK2, test_loss_LODE_RK2, int_loss_LODE_RK2, int_std_LODE_RK2, E_loss_LODE_RK2, E_std_LODE_RK2, H_loss_LODE_RK2, H_std_LODE_RK2 = load_stats('L-NODE-midpoint', models)\n#Load NODE_RK4 models\ntrain_loss_NODE_RK4, test_loss_NODE_RK4, int_loss_NODE_RK4, int_std_NODE_RK4, E_loss_NODE_RK4, E_std_NODE_RK4 = load_stats('NODE-rk4', models)\n#Load NODE_RK2 models\ntrain_loss_NODE_RK2, test_loss_NODE_RK2, int_loss_NODE_RK2, int_std_NODE_RK2, E_loss_NODE_RK2, E_std_NODE_RK2 = load_stats('NODE-midpoint', models)\n\ntrain_loss_N_SYMO = [*train_loss_N_SYMO_noiseless, *train_loss_N_SYMO]\ntrain_loss_SYMO = [*train_loss_SYMO_noiseless, *train_loss_SYMO]\ntrain_loss_LODE_RK2 = [*train_loss_LODE_RK2_noiseless, *train_loss_LODE_RK2]\ntrain_loss_LODE_RK4 = [*train_loss_LODE_RK4_noiseless, *train_loss_LODE_RK4]\ntrain_loss_NODE_RK2 = [*train_loss_NODE_RK2_noiseless, *train_loss_NODE_RK2]\ntrain_loss_NODE_RK4 = [*train_loss_NODE_RK4_noiseless, *train_loss_NODE_RK4]\n\ntest_loss_N_SYMO = [*test_loss_N_SYMO_noiseless, *test_loss_N_SYMO]\ntest_loss_SYMO = [*test_loss_SYMO_noiseless, *test_loss_SYMO]\ntest_loss_LODE_RK2 = [*test_loss_LODE_RK2_noiseless, *test_loss_LODE_RK2]\ntest_loss_LODE_RK4 = [*test_loss_LODE_RK4_noiseless, *test_loss_LODE_RK4]\ntest_loss_NODE_RK2 = [*test_loss_NODE_RK2_noiseless, *test_loss_NODE_RK2]\ntest_loss_NODE_RK4 = [*test_loss_NODE_RK4_noiseless, *test_loss_NODE_RK4]\n\nint_loss_N_SYMO = [*int_loss_N_SYMO_noiseless, *int_loss_N_SYMO]\nint_loss_SYMO = [*int_loss_SYMO_noiseless, *int_loss_SYMO]\nint_loss_LODE_RK2 = [*int_loss_LODE_RK2_noiseless, *int_loss_LODE_RK2]\nint_loss_LODE_RK4 = [*int_loss_LODE_RK4_noiseless, *int_loss_LODE_RK4]\nint_loss_NODE_RK2 = [*int_loss_NODE_RK2_noiseless, *int_loss_NODE_RK2]\nint_loss_NODE_RK4 = [*int_loss_NODE_RK4_noiseless, *int_loss_NODE_RK4]", "_____no_output_____" ], [ "x_axis = np.array([0, 0.0001, 0.0005, 0.001, 0.005, 0.01])\n\nfig = plt.figure(figsize=(18, 5), dpi=DPI)\nax1=plt.subplot(1, 3, 1)\nplt.plot(x_axis.astype('str'), train_loss_NODE_RK4, 'bs-', label='NODE-rk4')\nplt.plot(x_axis.astype('str'), train_loss_NODE_RK2, 'ms-', 
label='NODE-midpoint')\nplt.plot(x_axis.astype('str'), train_loss_LODE_RK4, 'gs-', label= 'L-NODE-rk4')\nplt.plot(x_axis.astype('str'), train_loss_LODE_RK2, 'ks-', label='L-NODE-midpoint')\nplt.plot(x_axis.astype('str'), train_loss_SYMO, 'rs-', label='SyMo-midpoint')\nplt.plot(x_axis.astype('str'), train_loss_N_SYMO, 'cs-', label = 'E2E-SyMo-midpoint')\n\n# plt.xscale('log')\nplt.yscale('log')\nplt.legend(fontsize=8)\nplt.ylabel('Train error')\nplt.xlabel('$\\sigma$')\n\nplt.subplot(1, 3, 2)\nplt.plot(x_axis.astype('str'), test_loss_NODE_RK4, 'bs-', label='NODE-rk4')\nplt.plot(x_axis.astype('str'), test_loss_NODE_RK2, 'ms-', label='NODE-midpoint')\nplt.plot(x_axis.astype('str'), test_loss_LODE_RK4, 'gs-', label= 'L-NODE-rk4')\nplt.plot(x_axis.astype('str'), test_loss_LODE_RK2, 'ks-', label='L-NODE-midpoint')\nplt.plot(x_axis.astype('str'), test_loss_SYMO, 'rs-', label='SyMo-midpoint')\nplt.plot(x_axis.astype('str'), test_loss_N_SYMO, 'cs-', label = 'E2E-SyMo-midpoint')\n# plt.xscale('log')\nplt.yscale('log')\nplt.legend(fontsize=8)\nplt.xlabel('$\\sigma$')\nplt.ylabel('Test error')\n\nplt.subplot(1, 3, 3)\nplt.plot(x_axis.astype('str'), int_loss_NODE_RK4, 'bs-', label='NODE-rk4')\nplt.plot(x_axis.astype('str'), int_loss_NODE_RK2, 'ms-', label='NODE-midpoint')\nplt.plot(x_axis.astype('str'), int_loss_LODE_RK4, 'gs-', label= 'L-NODE-rk4')\nplt.plot(x_axis.astype('str'), int_loss_LODE_RK2, 'ks-', label='L-NODE-midpoint')\nplt.plot(x_axis.astype('str'), int_loss_SYMO, 'rs-', label='SyMo-midpoint')\nplt.plot(x_axis.astype('str'), int_loss_N_SYMO, 'cs-', label = 'E2E-SyMo-midpoint')\n# plt.xscale('log')\nplt.yscale('log')\nplt.legend(fontsize=8)\nplt.xlabel('$\\sigma$')\nplt.ylabel('Integration error')\n\nfig.savefig('{}/fig-train-pred-loss_noise_cartpole.{}'.format(args.fig_dir, FORMAT))\n\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
d033d156f26508bfddf5767e4781f6e0301fa0b4
61,366
ipynb
Jupyter Notebook
S01 - Bootcamp and Binary Classification/SLU13 - Bias-Variance tradeoff & Model Selection /Examples notebook.ipynb
FarhadManiCodes/batch5-students
3a147145dc4f4ac65a851542987cf687b9915d5b
[ "MIT" ]
2
2022-02-04T17:40:04.000Z
2022-03-26T18:03:12.000Z
S01 - Bootcamp and Binary Classification/SLU13 - Bias-Variance tradeoff & Model Selection /Examples notebook.ipynb
FarhadManiCodes/batch5-students
3a147145dc4f4ac65a851542987cf687b9915d5b
[ "MIT" ]
null
null
null
S01 - Bootcamp and Binary Classification/SLU13 - Bias-Variance tradeoff & Model Selection /Examples notebook.ipynb
FarhadManiCodes/batch5-students
3a147145dc4f4ac65a851542987cf687b9915d5b
[ "MIT" ]
2
2021-10-30T16:20:13.000Z
2021-11-25T12:09:31.000Z
124.222672
28,140
0.869178
[ [ [ "# SLU13: Bias-Variance trade-off & Model Selection -- Examples\n\n---\n\n<a id='top'></a>\n\n### 1. Model evaluation\n* a. [Train-test split](#traintest)\n* b. [Train-val-test split](#val)\n* c. [Cross validation](#crossval)\n\n\n### 2. [Learning curves](#learningcurves)\n", "_____no_output_____" ], [ "# 1. Model evaluation", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\n\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.tree import DecisionTreeClassifier\n\nfrom sklearn.model_selection import learning_curve\n\n%matplotlib inline", "_____no_output_____" ], [ "# Create the DataFrame with the data\ndf = pd.read_csv(\"data/beer.csv\")\n\n# Create a DataFrame with the features (X) and labels (y)\nX = df.drop([\"IsIPA\"], axis=1)\ny = df[\"IsIPA\"]", "_____no_output_____" ], [ "print(\"Number of entries: \", X.shape[0])", "Number of entries: 1000\n" ] ], [ [ "<a id='traintest'></a> [Return to top](#top)\n## Create a training and a test set", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split", "_____no_output_____" ], [ "# Using 20 % of the data as test set\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)", "_____no_output_____" ], [ "print(\"Number of training entries: \", X_train.shape[0])\nprint(\"Number of test entries: \", X_test.shape[0])", "Number of training entries: 800\nNumber of test entries: 200\n" ] ], [ [ "<a id='val'></a> [Return to top](#top)\n## Create a training, test and validation set", "_____no_output_____" ] ], [ [ "# Using 20 % as test set and 20 % as validation set\nX_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size=0.4)\nX_val, X_test, y_val, y_test = train_test_split(X_temp, y_temp, test_size=0.50)", "_____no_output_____" ], [ "print(\"Number of training entries: \", X_train.shape[0])\nprint(\"Number of validation entries: \", X_val.shape[0])\nprint(\"Number of test entries: \", X_test.shape[0])", "Number of training entries: 600\nNumber of validation entries: 200\nNumber of test entries: 200\n" ] ], [ [ "<a id='crossval'></a> [Return to top](#top)\n\n## Use cross-validation (using a given classifier)", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import cross_val_score", "_____no_output_____" ], [ "knn = KNeighborsClassifier(n_neighbors=5)\n# Use cv to specify the number of folds\nscores = cross_val_score(knn, X, y, cv=5)", "_____no_output_____" ], [ "print(f\"Mean of scores: {scores.mean():.3f}\")\nprint(f\"Variance of scores: {scores.var():.3f}\")", "Mean of scores: 0.916\nVariance of scores: 0.000\n" ] ], [ [ "<a id='learningcurves'></a> [Return to top](#top)\n\n# 2. 
Learning Curves", "_____no_output_____" ], [ "Here is the function that is taken from the sklearn page on learning curves:", "_____no_output_____" ] ], [ [ "def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,\n n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):\n \"\"\"\n Generate a simple plot of the test and training learning curve.\n\n Parameters\n ----------\n estimator : object type that implements the \"fit\" and \"predict\" methods\n An object of that type which is cloned for each validation.\n\n title : string\n Title for the chart.\n\n X : array-like, shape (n_samples, n_features)\n Training vector, where n_samples is the number of samples and\n n_features is the number of features.\n\n y : array-like, shape (n_samples) or (n_samples, n_features), optional\n Target relative to X for classification or regression;\n None for unsupervised learning.\n\n ylim : tuple, shape (ymin, ymax), optional\n Defines minimum and maximum yvalues plotted.\n\n cv : int, cross-validation generator or an iterable, optional\n Determines the cross-validation splitting strategy.\n Possible inputs for cv are:\n - None, to use the default 3-fold cross-validation,\n - integer, to specify the number of folds.\n - An object to be used as a cross-validation generator.\n - An iterable yielding train/test splits.\n\n For integer/None inputs, if ``y`` is binary or multiclass,\n :class:`StratifiedKFold` used. If the estimator is not a classifier\n or if ``y`` is neither binary nor multiclass, :class:`KFold` is used.\n\n Refer :ref:`User Guide <cross_validation>` for the various\n cross-validators that can be used here.\n\n n_jobs : integer, optional\n Number of jobs to run in parallel (default 1).\n \"\"\"\n plt.figure()\n plt.title(title)\n if ylim is not None:\n plt.ylim(*ylim)\n plt.xlabel(\"Training examples\")\n plt.ylabel(\"Score\")\n train_sizes, train_scores, test_scores = learning_curve(\n estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)\n train_scores_mean = np.mean(train_scores, axis=1)\n train_scores_std = np.std(train_scores, axis=1)\n test_scores_mean = np.mean(test_scores, axis=1)\n test_scores_std = np.std(test_scores, axis=1)\n plt.grid()\n\n plt.fill_between(train_sizes, train_scores_mean - train_scores_std,\n train_scores_mean + train_scores_std, alpha=0.1,\n color=\"r\")\n plt.fill_between(train_sizes, test_scores_mean - test_scores_std,\n test_scores_mean + test_scores_std, alpha=0.1, color=\"g\")\n plt.plot(train_sizes, train_scores_mean, 'o-', color=\"r\",\n label=\"Training score\")\n plt.plot(train_sizes, test_scores_mean, 'o-', color=\"g\",\n label=\"Test Set score\")\n\n plt.legend(loc=\"best\")\n return plt", "_____no_output_____" ], [ "# and this is how we used it\n\nX = df.select_dtypes(exclude='object').fillna(-1).drop('IsIPA', axis=1)\ny = df.IsIPA\n\nclf = DecisionTreeClassifier(random_state=1, max_depth=5)\n\nplot_learning_curve(X=X, y=y, estimator=clf, title='DecisionTreeClassifier');", "_____no_output_____" ] ], [ [ "And remember the internals of what this function is actually doing by knowing how to use the\noutput of the scikit [learning_curve](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html) function", "_____no_output_____" ] ], [ [ "# here's where the magic happens! 
The learning curve function is going\n# to take your classifier and your training data and subset the data\ntrain_sizes, train_scores, test_scores = learning_curve(clf, X, y)\n\n# 5 different training set sizes have been selected\n# with the smallest being 80 and the largest being 800\n# the remainder is used for testing\nprint('train set sizes', train_sizes)\nprint('test set sizes', X.shape[0] - train_sizes)", "train set sizes [ 80 260 440 620 800]\ntest set sizes [920 740 560 380 200]\n" ], [ "# each row corresponds to a training set size\n# each column corresponds to a cross validation fold\n# the first row is the highest because it corresponds\n# to the smallest training set which means that it's very\n# easy for the classifier to overfit and score perfectly\n# on the training set, while as the training set grows it\n# becomes a bit more difficult for this to happen.\ntrain_scores", "_____no_output_____" ], [ "# The test set scores, where again each row corresponds\n# to a train / test set size and each column is a different\n# run with the same train / test sizes\ntest_scores", "_____no_output_____" ], [ "# Let's average the scores across each fold so that we can plot them\ntrain_scores_mean = np.mean(train_scores, axis=1)\ntest_scores_mean = np.mean(test_scores, axis=1)", "_____no_output_____" ], [ "# this one isn't quite as cool as the other because it doesn't show the variance\n# but the fundamentals are still here and it's a much simpler one to understand\n\nlearning_curve_df = pd.DataFrame({\n    'Training score': train_scores_mean,\n    'Test Set score': test_scores_mean\n}, index=train_sizes)\n\nplt.figure()\nplt.ylabel(\"Score\")\nplt.xlabel(\"Training examples\")\nplt.title('Learning Curve')\nplt.plot(learning_curve_df);\nplt.legend(learning_curve_df.columns, loc=\"best\");\n", "_____no_output_____" ] ] ]
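The last cell above notes that the simplified plot hides the fold-to-fold variance. A minimal sketch of how that variance could be added back, assuming the notebook's `train_sizes`, `train_scores` and `test_scores` arrays are still in scope (everything else is standard numpy/matplotlib):

```python
import numpy as np
import matplotlib.pyplot as plt

# Means and standard deviations across the cross-validation folds
# (train_sizes, train_scores, test_scores come from the learning_curve call above).
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)

plt.figure()
plt.title('Learning Curve (with variance bands)')
plt.xlabel('Training examples')
plt.ylabel('Score')
# Shaded +/- one standard deviation bands around each mean curve
plt.fill_between(train_sizes, train_mean - train_std, train_mean + train_std, alpha=0.1)
plt.fill_between(train_sizes, test_mean - test_std, test_mean + test_std, alpha=0.1)
plt.plot(train_sizes, train_mean, 'o-', label='Training score')
plt.plot(train_sizes, test_mean, 'o-', label='Cross-validation score')
plt.legend(loc='best');
```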
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
d033e563b7d1ce48d49c08ef0890976619e9a4eb
184,030
ipynb
Jupyter Notebook
machine-learning-ex5/ex5/ex5.ipynb
Dream74/coursera-ml-py
b8cfe21024ca9aa61def952e00576c25ecd8076b
[ "MIT" ]
null
null
null
machine-learning-ex5/ex5/ex5.ipynb
Dream74/coursera-ml-py
b8cfe21024ca9aa61def952e00576c25ecd8076b
[ "MIT" ]
null
null
null
machine-learning-ex5/ex5/ex5.ipynb
Dream74/coursera-ml-py
b8cfe21024ca9aa61def952e00576c25ecd8076b
[ "MIT" ]
null
null
null
230.0375
21,640
0.907319
[ [ [ "import matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.io as scio\nimport linearRegCostFunction as lrcf\nimport trainLinearReg as tlr\nimport learningCurve as lc\nimport polyFeatures as pf\nimport featureNormalize as fn\nimport plotFit as plotft\nimport validationCurve as vc", "_____no_output_____" ], [ "plt.ion()\nnp.set_printoptions(formatter={'float': '{: 0.6f}'.format})", "_____no_output_____" ], [ "# ===================== Part 1: Loading and Visualizing Data =====================\n# We start the exercise by first loading and visualizing the dataset.\n# The following code will load the dataset into your environment and pot\n# the data.\n#\n\n# Load Training data\nprint('Loading and Visualizing data ...')\n\n# Load from ex5data1:\ndata = scio.loadmat('ex5data1.mat')\nX = data['X']\ny = data['y'].flatten()\nXval = data['Xval']\nyval = data['yval'].flatten()\nXtest = data['Xtest']\nytest = data['ytest'].flatten()\n\nm = y.size", "Loading and Visualizing data ...\n" ], [ "# Plot training data\nplt.figure()\nplt.scatter(X, y, c='r', marker=\"x\")\nplt.xlabel('Change in water level (x)')\nplt.ylabel('Water folowing out of the dam (y)')", "_____no_output_____" ], [ "def linear_reg_cost_function(theta, x, y, lmd):\n # Initialize some useful values\n m = y.size\n\n # You need to return the following variables correctly\n cost = 0\n grad = np.zeros(theta.shape)\n\n # ===================== Your Code Here =====================\n # Instructions : Compute the cost and gradient of regularized linear\n # regression for a particular choice of theta\n #\n # You should set 'cost' to the cost and 'grad'\n # to the gradient\n #\n h = (x @ theta)\n cost = 1 * (1/(2*m)) * np.sum(np.square(h - y)) + lmd * (1/(2*m)) * np.sum(np.square(theta[1:])) \n grad = (1 * (1/m) * (h-y)@x + (lmd * (1/m) * np.r_[0, theta[1:]]))\n # ==========================================================\n return cost, grad", "_____no_output_____" ], [ "# ===================== Part 2: Regularized Linear Regression Cost =====================\n# You should now implement the cost function for regularized linear regression\n#\n\ntheta = np.ones(2)\n# cost, _ = lrcf.linear_reg_cost_function(theta, np.c_[np.ones(m), X], y, 1)\ncost, _ = linear_reg_cost_function(theta, np.c_[np.ones(m), X], y, 1)\n\nprint('Cost at theta = [1 1]: {:0.6f}\\n(this value should be about 303.993192'.format(cost))", "Cost at theta = [1 1]: 303.993192\n(this value should be about 303.993192\n" ], [ "# ===================== Part 3: Regularized Linear Regression Gradient =====================\n# You should now implement the gradient for regularized linear regression\n#\n\ntheta = np.ones(2)\n#cost, grad = lrcf.linear_reg_cost_function(theta, np.c_[np.ones(m), X], y, 1)\ncost, grad = linear_reg_cost_function(theta, np.c_[np.ones(m), X], y, 1)\n\nprint('Gradient at theta = [1 1]: {}\\n(this value should be about [-15.303016 598.250744]'.format(grad))", "Gradient at theta = [1 1]: [-15.303016 598.250744]\n(this value should be about [-15.303016 598.250744]\n" ], [ "import scipy.optimize as opt\n\ndef train_linear_reg(x, y, lmd):\n initial_theta = np.ones(x.shape[1])\n\n def cost_func(t):\n return linear_reg_cost_function(t, x, y, lmd)[0]\n\n def grad_func(t):\n return linear_reg_cost_function(t, x, y, lmd)[1]\n\n theta, *unused = opt.fmin_cg(cost_func, initial_theta, grad_func, maxiter=200, disp=False,\n full_output=True)\n\n return theta\n", "_____no_output_____" ], [ "# ===================== Part 4: Train Linear Regression =====================\n# Once 
you have implemented the cost and gradient correctly, the\n# train_linear_reg function will use your cost function to train regularzized linear regression.\n#\n# Write Up Note : The data is non-linear, so this will not give a great fit.\n#\n\n# Train linear regression with lambda = 0\nlmd = 0\n\n# theta = tlr.train_linear_reg(np.c_[np.ones(m), X], y, lmd)\ntheta = train_linear_reg(np.c_[np.ones(m), X], y, lmd)\n\n# Plot training data\nplt.figure()\nplt.scatter(X, y, c='r', marker=\"x\")\nplt.xlabel('Change in water level (x)')\nplt.ylabel('Water folowing out of the dam (y)')\n\n# Plot fit over the data\nplt.plot(X, np.dot(np.c_[np.ones(m), X], theta))", "_____no_output_____" ], [ "def learning_curve(X, y, Xval, yval, lmd):\n # Number of training examples\n m = X.shape[0]\n\n # You need to return these values correctly\n error_train = np.zeros(m)\n error_val = np.zeros(m)\n\n # ===================== Your Code Here =====================\n # Instructions : Fill in this function to return training errors in\n # error_train and the cross validation errors in error_val.\n # i.e., error_train[i] and error_val[i] should give you\n # the errors obtained after training on i examples\n #\n # Note : You should evaluate the training error on the first i training\n # examples (i.e. X[:i] and y[:i])\n #\n # For the cross-validation error, you should instead evaluate on\n # the _entire_ cross validation set (Xval and yval).\n #\n # Note : If you're using your cost function (linear_reg_cost_function)\n # to compute the training and cross validation error, you should\n # call the function with the lamdba argument set to 0.\n # Do note that you will still need to use lamdba when running the\n # training to obtain the theta parameters.\n #\n\n \n for i in range(1, m+1):\n theta = train_linear_reg(np.c_[np.ones(i), X[:i]], y[:i], lmd)\n cost, grad = linear_reg_cost_function(theta, np.c_[np.ones(i), X[:i]], y[:i], 0)\n error_train[i-1] = cost\n \n cost, grad = linear_reg_cost_function(theta, np.c_[np.ones(Xval.shape[0]), Xval], yval, 0)\n error_val[i-1] = cost\n # ==========================================================\n\n return error_train, error_val\n", "_____no_output_____" ], [ "\n# ===================== Part 5: Learning Curve for Linear Regression =====================\n# Next, you should implement the learning_curve function.\n#\n# Write up note : Since the model is underfitting the data, we expect to\n# see a graph with \"high bias\" -- Figure 3 in ex5.pdf\n#\n\nlmd = 0\n# error_train, error_val = lc.learning_curve(np.c_[np.ones(m), X], y, np.c_[np.ones(Xval.shape[0]), Xval], yval, lmd)\nerror_train, error_val = learning_curve(np.c_[np.ones(m), X], y, np.c_[np.ones(Xval.shape[0]), Xval], yval, lmd)\n\nplt.figure()\nplt.plot(np.arange(m), error_train, np.arange(m), error_val)\nplt.title('Learning Curve for Linear Regression')\nplt.legend(['Train', 'Cross Validation'])\nplt.xlabel('Number of Training Examples')\nplt.ylabel('Error')\nplt.axis([0, 13, 0, 150])\nplt.show()", "_____no_output_____" ], [ "def poly_features(X, p):\n # You need to return the following variable correctly.\n X_poly = np.zeros((X.size, p))\n\n # ===================== Your Code Here =====================\n # Instructions : Given a vector X, return a matrix X_poly where the p-th\n # column of X contains the values of X to the p-th power.\n #\n for i in range(p):\n X_poly[:, i] = np.power(X, i+1).T\n # ==========================================================\n\n\n return X_poly", "_____no_output_____" ], [ "\n# 
===================== Part 6 : Feature Mapping for Polynomial Regression =====================\n# One solution to this is to use polynomial regression. You should now\n# complete polyFeatures to map each example into its powers\n#\n\np = 5\n\n# Map X onto Polynomial Features and Normalize\n# X_poly = pf.poly_features(X, p)\nX_poly = poly_features(X, p)\nX_poly, mu, sigma = fn.feature_normalize(X_poly)\nX_poly = np.c_[np.ones(m), X_poly]\n\n# Map X_poly_test and normalize (using mu and sigma)\n# X_poly_test = pf.poly_features(Xtest, p)\nX_poly_test = poly_features(Xtest, p)\nX_poly_test -= mu\nX_poly_test /= sigma\nX_poly_test = np.c_[np.ones(X_poly_test.shape[0]), X_poly_test]\n\n# Map X_poly_val and normalize (using mu and sigma)\n# X_poly_val = pf.poly_features(Xval, p)\nX_poly_val = poly_features(Xval, p)\nX_poly_val -= mu\nX_poly_val /= sigma\nX_poly_val = np.c_[np.ones(X_poly_val.shape[0]), X_poly_val]\n\nprint('Normalized Training Example 1 : \\n{}'.format(X_poly[0]))\n", "Normalized Training Example 1 : \n[ 1.000000 -0.362141 -0.755087 0.182226 -0.706190 0.306618]\n" ], [ "def train_linear_reg(x, y, lmd):\n initial_theta = np.ones(x.shape[1])\n\n def cost_func(t):\n return linear_reg_cost_function(t, x, y, lmd)[0]\n\n def grad_func(t):\n return linear_reg_cost_function(t, x, y, lmd)[1]\n\n theta, *unused = opt.fmin_cg(cost_func, initial_theta, grad_func, maxiter=200, disp=False,\n full_output=True)\n\n return theta\n", "_____no_output_____" ], [ "def plot_fit(min_x, max_x, mu, sigma, theta, p):\n x = np.arange(min_x - 15, max_x + 25, 0.05)\n\n # X_poly = pf.poly_features(x, p)\n X_poly = poly_features(x, p)\n X_poly -= mu\n X_poly /= sigma\n\n X_poly = np.c_[np.ones(x.size), X_poly]\n\n plt.plot(x, np.dot(X_poly, theta))\n", "_____no_output_____" ], [ "# ===================== Part 7 : Learning Curve for Polynomial Regression =====================\n# Now, you will get to experiment with polynomial regression with multiple\n# values of lambda. The code below runs polynomial regression with\n# lambda = 0. 
You should try running the code with different values of\n# lambda to see how the fit and learning curve change.\n#\n\nlmd = 0\n# theta = tlr.train_linear_reg(X_poly, y, lmd)\ntheta = train_linear_reg(X_poly, y, lmd)\n\n\n# Plot trainint data and fit\nplt.figure()\nplt.scatter(X, y, c='r', marker=\"x\")\n# plotft.plot_fit(np.min(X), np.max(X), mu, sigma, theta, p)\nplot_fit(np.min(X), np.max(X), mu, sigma, theta, p)\n\nplt.xlabel('Change in water level (x)')\nplt.ylabel('Water folowing out of the dam (y)')\nplt.ylim([0, 60])\nplt.title('Polynomial Regression Fit (lambda = {})'.format(lmd))", "_____no_output_____" ], [ "error_train, error_val = learning_curve(X_poly, y, X_poly_val, yval, lmd)\nplt.figure()\nplt.plot(np.arange(m), error_train, np.arange(m), error_val)\nplt.title('Polynomial Regression Learning Curve (lambda = {})'.format(lmd))\nplt.legend(['Train', 'Cross Validation'])\nplt.xlabel('Number of Training Examples')\nplt.ylabel('Error')\nplt.axis([0, 13, 0, 150])\n\nprint('Polynomial Regression (lambda = {})'.format(lmd))\nprint('# Training Examples\\tTrain Error\\t\\tCross Validation Error')\nfor i in range(m):\n print(' \\t{}\\t\\t{}\\t{}'.format(i, error_train[i], error_val[i]))", "Polynomial Regression (lambda = 0)\n# Training Examples\tTrain Error\t\tCross Validation Error\n \t0\t\t9.860761315262648e-32\t99.30405812987325\n \t1\t\t1.1857565481603334e-28\t99.63378116799029\n \t2\t\t4.049667164141863e-12\t16.222323357428216\n \t3\t\t8.749217473058563e-24\t11.848091898948923\n \t4\t\t2.6941455299967003e-08\t6.084838578884331\n \t5\t\t1.9435721380445115e-13\t10.136970269806941\n \t6\t\t0.0853841788688538\t6.009291035844053\n \t7\t\t0.08507192887384765\t4.970638602521722\n \t8\t\t0.2033296002338128\t14.505965911574355\n \t9\t\t0.2267834412732319\t10.712645535954925\n \t10\t\t0.20692212806231883\t11.031772280779913\n \t11\t\t0.20849612433874654\t15.629937363906924\n" ], [ "# ===================== Part 7 : Learning Curve for Polynomial Regression =====================\n# Now, you will get to experiment with polynomial regression with multiple\n# values of lambda. The code below runs polynomial regression with\n# lambda = 0. 
You should try running the code with different values of\n# lambda to see how the fit and learning curve change.\n#\n\nlmd = 1\n# theta = tlr.train_linear_reg(X_poly, y, lmd)\ntheta = train_linear_reg(X_poly, y, lmd)\n\n\n# Plot trainint data and fit\nplt.figure()\nplt.scatter(X, y, c='r', marker=\"x\")\n# plotft.plot_fit(np.min(X), np.max(X), mu, sigma, theta, p)\nplot_fit(np.min(X), np.max(X), mu, sigma, theta, p)\n\nplt.xlabel('Change in water level (x)')\nplt.ylabel('Water folowing out of the dam (y)')\nplt.ylim([0, 60])\nplt.title('Polynomial Regression Fit (lambda = {})'.format(lmd))", "_____no_output_____" ], [ "error_train, error_val = learning_curve(X_poly, y, X_poly_val, yval, lmd)\nplt.figure()\nplt.plot(np.arange(m), error_train, np.arange(m), error_val)\nplt.title('Polynomial Regression Learning Curve (lambda = {})'.format(lmd))\nplt.legend(['Train', 'Cross Validation'])\nplt.xlabel('Number of Training Examples')\nplt.ylabel('Error')\nplt.axis([0, 13, 0, 150])\n\nprint('Polynomial Regression (lambda = {})'.format(lmd))\nprint('# Training Examples\\tTrain Error\\t\\tCross Validation Error')\nfor i in range(m):\n print(' \\t{}\\t\\t{}\\t{}'.format(i, error_train[i], error_val[i]))", "Polynomial Regression (lambda = 1)\n# Training Examples\tTrain Error\t\tCross Validation Error\n \t0\t\t4.56199939795915e-13\t138.8467886913956\n \t1\t\t0.04681301650735769\t143.06082466167464\n \t2\t\t3.3087560872065698\t7.257166540976983\n \t3\t\t1.752858775942759\t6.988446575542324\n \t4\t\t1.4958401384112818\t3.7790571118099834\n \t5\t\t1.1096520379744261\t4.792211229892257\n \t6\t\t1.5594332505738264\t3.778981237655777\n \t7\t\t1.362793602467611\t3.7813685360822515\n \t8\t\t1.4801223250593054\t4.277983728891496\n \t9\t\t1.369615326927559\t4.137277721949229\n \t10\t\t1.241971459505048\t4.187255821397332\n \t11\t\t1.9372327105881608\t3.7368813998238166\n" ], [ "# ===================== Part 7 : Learning Curve for Polynomial Regression =====================\n# Now, you will get to experiment with polynomial regression with multiple\n# values of lambda. The code below runs polynomial regression with\n# lambda = 0. You should try running the code with different values of\n# lambda to see how the fit and learning curve change.\n#\n\nlmd = 100\n# theta = tlr.train_linear_reg(X_poly, y, lmd)\ntheta = train_linear_reg(X_poly, y, lmd)\n\n\n# Plot trainint data and fit\nplt.figure()\nplt.scatter(X, y, c='r', marker=\"x\")\n# plotft.plot_fit(np.min(X), np.max(X), mu, sigma, theta, p)\nplot_fit(np.min(X), np.max(X), mu, sigma, theta, p)\n\nplt.xlabel('Change in water level (x)')\nplt.ylabel('Water folowing out of the dam (y)')\nplt.ylim([0, 60])\nplt.title('Polynomial Regression Fit (lambda = {})'.format(lmd))", "_____no_output_____" ], [ "def validation_curve(X, y, Xval, yval):\n # Selected values of lambda (don't change this)\n lambda_vec = np.array([0., 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10])\n\n # You need to return these variables correctly.\n error_train = np.zeros(lambda_vec.size)\n error_val = np.zeros(lambda_vec.size)\n\n # ===================== Your Code Here =====================\n # Instructions : Fill in this function to return training errors in\n # error_train and the validation errors in error_val. 
The\n # vector lambda_vec contains the different lambda parameters\n # to use for each calculation of the errors, i.e,\n # error_train[i], and error_val[i] should give\n # you the errors obtained after training with\n # lmd = lambda_vec[i]\n #\n for idx, lmd in enumerate(lambda_vec):\n e_train, e_val = learning_curve(X, y, Xval, yval, lmd)\n error_train[idx] = e_train[-1]\n error_val[idx] = e_val[-1]\n # ==========================================================\n\n return lambda_vec, error_train, error_val", "_____no_output_____" ], [ "# ===================== Part 8 : Validation for Selecting Lambda =====================\n# You will now implement validationCurve to test various values of\n# lambda on a validation set. You will then use this to select the\n# 'best' lambda value.\n\n# lambda_vec, error_train, error_val = vc.validation_curve(X_poly, y, X_poly_val, yval)\nlambda_vec, error_train, error_val = validation_curve(X_poly, y, X_poly_val, yval)\n\nplt.figure()\nplt.plot(lambda_vec, error_train, lambda_vec, error_val)\nplt.legend(['Train', 'Cross Validation'])\nplt.xlabel('lambda')\nplt.ylabel('Error')", "_____no_output_____" ] ] ]
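A hedged sketch of the step that usually follows the validation curve (it is not shown in this copy of the notebook): take the lambda with the lowest cross-validation error and report the error on the held-out test set. It reuses the notebook's `lambda_vec`, `error_val`, `X_poly`, `X_poly_test`, `ytest`, `train_linear_reg` and `linear_reg_cost_function`:

```python
import numpy as np

# Lambda with the lowest cross-validation error from the validation curve above
best_lmd = lambda_vec[np.argmin(error_val)]

# Retrain on the full (polynomial, normalized) training set with that lambda
theta_best = train_linear_reg(X_poly, y, best_lmd)

# Report the unregularized cost on the test set (lambda = 0 when measuring error)
test_error, _ = linear_reg_cost_function(theta_best, X_poly_test, ytest, 0)
print('Best lambda from validation curve: {}'.format(best_lmd))
print('Test set error at that lambda: {:0.4f}'.format(test_error))
```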
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d033f13fb8d76e9baa968f7db6a55b186cff089a
73,582
ipynb
Jupyter Notebook
Python Exercise/100Py_program.ipynb
Sandy1811/python-for-all
fdb6878d93502773ba8da809c2de1b33c96fb9a0
[ "Apache-2.0" ]
1
2019-08-07T08:05:32.000Z
2019-08-07T08:05:32.000Z
Python Exercise/100Py_program.ipynb
Sandy1811/demandforecasting
fdb6878d93502773ba8da809c2de1b33c96fb9a0
[ "Apache-2.0" ]
8
2021-02-08T20:32:03.000Z
2022-03-11T23:56:31.000Z
Python Exercise/100Py_program.ipynb
Sandy1811/demandforecasting
fdb6878d93502773ba8da809c2de1b33c96fb9a0
[ "Apache-2.0" ]
null
null
null
30.570004
293
0.482034
[ [ [ "100+ Python challenging programming exercises\n\n1.\tLevel description\nLevel\tDescription\nLevel 1\tBeginner means someone who has just gone through an introductory Python course. He can solve some problems with 1 or 2 Python classes or functions. Normally, the answers could directly be found in the textbooks.\nLevel 2\tIntermediate means someone who has just learned Python, but already has a relatively strong programming background from before. He should be able to solve problems which may involve 3 or 3 Python classes or functions. The answers cannot be directly be found in the textbooks.\nLevel 3\tAdvanced. He should use Python to solve more complex problem using more rich libraries functions and data structures and algorithms. He is supposed to solve the problem using several Python standard packages and advanced techniques.\n\n2.\tProblem template\n\n#----------------------------------------#\nQuestion\nHints\nSolution\n\n3.\tQuestions\n\n#----------------------------------------#\nQuestion 1\nLevel 1\n\nQuestion:\nWrite a program which will find all such numbers which are divisible by 7 but are not a multiple of 5,\nbetween 2000 and 3200 (both included).\nThe numbers obtained should be printed in a comma-separated sequence on a single line.\n\nHints: \nConsider use range(#begin, #end) method\n\nSolution:\nl=[]\nfor i in range(2000, 3201):\n if (i%7==0) and (i%5!=0):\n l.append(str(i))\n\nprint ','.join(l)\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 2\nLevel 1\n\nQuestion:\nWrite a program which can compute the factorial of a given numbers.\nThe results should be printed in a comma-separated sequence on a single line.\nSuppose the following input is supplied to the program:\n8\nThen, the output should be:\n40320\n\nHints:\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nSolution:\ndef fact(x):\n if x == 0:\n return 1\n return x * fact(x - 1)\n\nx=int(raw_input())\nprint fact(x)\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 3\nLevel 1\n\nQuestion:\nWith a given integral number n, write a program to generate a dictionary that contains (i, i*i) such that is an integral number between 1 and n (both included). 
and then the program should print the dictionary.\nSuppose the following input is supplied to the program:\n8\nThen, the output should be:\n{1: 1, 2: 4, 3: 9, 4: 16, 5: 25, 6: 36, 7: 49, 8: 64}\n\nHints:\nIn case of input data being supplied to the question, it should be assumed to be a console input.\nConsider use dict()\n\nSolution:\nn=int(raw_input())\nd=dict()\nfor i in range(1,n+1):\n d[i]=i*i\n\nprint d\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 4\nLevel 1\n\nQuestion:\nWrite a program which accepts a sequence of comma-separated numbers from console and generate a list and a tuple which contains every number.\nSuppose the following input is supplied to the program:\n34,67,55,33,12,98\nThen, the output should be:\n['34', '67', '55', '33', '12', '98']\n('34', '67', '55', '33', '12', '98')\n\nHints:\nIn case of input data being supplied to the question, it should be assumed to be a console input.\ntuple() method can convert list to tuple\n\nSolution:\nvalues=raw_input()\nl=values.split(\",\")\nt=tuple(l)\nprint l\nprint t\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 5\nLevel 1\n\nQuestion:\nDefine a class which has at least two methods:\ngetString: to get a string from console input\nprintString: to print the string in upper case.\nAlso please include simple test function to test the class methods.\n\nHints:\nUse __init__ method to construct some parameters\n\nSolution:\nclass InputOutString(object):\n def __init__(self):\n self.s = \"\"\n\n def getString(self):\n self.s = raw_input()\n\n def printString(self):\n print self.s.upper()\n\nstrObj = InputOutString()\nstrObj.getString()\nstrObj.printString()\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 6\nLevel 2\n\nQuestion:\nWrite a program that calculates and prints the value according to the given formula:\nQ = Square root of [(2 * C * D)/H]\nFollowing are the fixed values of C and H:\nC is 50. H is 30.\nD is the variable whose values should be input to your program in a comma-separated sequence.\nExample\nLet us assume the following comma separated input sequence is given to the program:\n100,150,180\nThe output of the program should be:\n18,22,24\n\nHints:\nIf the output received is in decimal form, it should be rounded off to its nearest value (for example, if the output received is 26.0, it should be printed as 26)\nIn case of input data being supplied to the question, it should be assumed to be a console input. \n\nSolution:\n#!/usr/bin/env python\nimport math\nc=50\nh=30\nvalue = []\nitems=[x for x in raw_input().split(',')]\nfor d in items:\n value.append(str(int(round(math.sqrt(2*c*float(d)/h)))))\n\nprint ','.join(value)\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 7\nLevel 2\n\nQuestion:\nWrite a program which takes 2 digits, X,Y as input and generates a 2-dimensional array. 
The element value in the i-th row and j-th column of the array should be i*j.\nNote: i=0,1.., X-1; j=0,1,¡­Y-1.\nExample\nSuppose the following inputs are given to the program:\n3,5\nThen, the output of the program should be:\n[[0, 0, 0, 0, 0], [0, 1, 2, 3, 4], [0, 2, 4, 6, 8]] \n\nHints:\nNote: In case of input data being supplied to the question, it should be assumed to be a console input in a comma-separated form.\n\nSolution:\ninput_str = raw_input()\ndimensions=[int(x) for x in input_str.split(',')]\nrowNum=dimensions[0]\ncolNum=dimensions[1]\nmultilist = [[0 for col in range(colNum)] for row in range(rowNum)]\n\nfor row in range(rowNum):\n for col in range(colNum):\n multilist[row][col]= row*col\n\nprint multilist\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 8\nLevel 2\n\nQuestion:\nWrite a program that accepts a comma separated sequence of words as input and prints the words in a comma-separated sequence after sorting them alphabetically.\nSuppose the following input is supplied to the program:\nwithout,hello,bag,world\nThen, the output should be:\nbag,hello,without,world\n\nHints:\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nSolution:\nitems=[x for x in raw_input().split(',')]\nitems.sort()\nprint ','.join(items)\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 9\nLevel 2\n\nQuestion£º\nWrite a program that accepts sequence of lines as input and prints the lines after making all characters in the sentence capitalized.\nSuppose the following input is supplied to the program:\nHello world\nPractice makes perfect\nThen, the output should be:\nHELLO WORLD\nPRACTICE MAKES PERFECT\n\nHints:\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nSolution:\nlines = []\nwhile True:\n s = raw_input()\n if s:\n lines.append(s.upper())\n else:\n break;\n\nfor sentence in lines:\n print sentence\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 10\nLevel 2\n\nQuestion:\nWrite a program that accepts a sequence of whitespace separated words as input and prints the words after removing all duplicate words and sorting them alphanumerically.\nSuppose the following input is supplied to the program:\nhello world and practice makes perfect and hello world again\nThen, the output should be:\nagain and hello makes perfect practice world\n\nHints:\nIn case of input data being supplied to the question, it should be assumed to be a console input.\nWe use set container to remove duplicated data automatically and then use sorted() to sort the data.\n\nSolution:\ns = raw_input()\nwords = [word for word in s.split(\" \")]\nprint \" \".join(sorted(list(set(words))))\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 11\nLevel 2\n\nQuestion:\nWrite a program which accepts a sequence of comma separated 4 digit binary numbers as its input and then check whether they are divisible by 5 or not. 
The numbers that are divisible by 5 are to be printed in a comma separated sequence.\nExample:\n0100,0011,1010,1001\nThen the output should be:\n1010\nNotes: Assume the data is input by console.\n\nHints:\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nSolution:\nvalue = []\nitems=[x for x in raw_input().split(',')]\nfor p in items:\n intp = int(p, 2)\n if not intp%5:\n value.append(p)\n\nprint ','.join(value)\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 12\nLevel 2\n\nQuestion:\nWrite a program, which will find all such numbers between 1000 and 3000 (both included) such that each digit of the number is an even number.\nThe numbers obtained should be printed in a comma-separated sequence on a single line.\n\nHints:\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nSolution:\nvalues = []\nfor i in range(1000, 3001):\n s = str(i)\n if (int(s[0])%2==0) and (int(s[1])%2==0) and (int(s[2])%2==0) and (int(s[3])%2==0):\n values.append(s)\nprint \",\".join(values)\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 13\nLevel 2\n\nQuestion:\nWrite a program that accepts a sentence and calculate the number of letters and digits.\nSuppose the following input is supplied to the program:\nhello world! 123\nThen, the output should be:\nLETTERS 10\nDIGITS 3\n\nHints:\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nSolution:\ns = raw_input()\nd={\"DIGITS\":0, \"LETTERS\":0}\nfor c in s:\n if c.isdigit():\n d[\"DIGITS\"]+=1\n elif c.isalpha():\n d[\"LETTERS\"]+=1\n else:\n pass\nprint \"LETTERS\", d[\"LETTERS\"]\nprint \"DIGITS\", d[\"DIGITS\"]\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 14\nLevel 2\n\nQuestion:\nWrite a program that accepts a sentence and calculate the number of upper case letters and lower case letters.\nSuppose the following input is supplied to the program:\nHello world!\nThen, the output should be:\nUPPER CASE 1\nLOWER CASE 9\n\nHints:\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nSolution:\ns = raw_input()\nd={\"UPPER CASE\":0, \"LOWER CASE\":0}\nfor c in s:\n if c.isupper():\n d[\"UPPER CASE\"]+=1\n elif c.islower():\n d[\"LOWER CASE\"]+=1\n else:\n pass\nprint \"UPPER CASE\", d[\"UPPER CASE\"]\nprint \"LOWER CASE\", d[\"LOWER CASE\"]\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 15\nLevel 2\n\nQuestion:\nWrite a program that computes the value of a+aa+aaa+aaaa with a given digit as the value of a.\nSuppose the following input is supplied to the program:\n9\nThen, the output should be:\n11106\n\nHints:\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nSolution:\na = raw_input()\nn1 = int( \"%s\" % a )\nn2 = int( \"%s%s\" % (a,a) )\nn3 = int( \"%s%s%s\" % (a,a,a) )\nn4 = int( \"%s%s%s%s\" % (a,a,a,a) )\nprint n1+n2+n3+n4\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 16\nLevel 2\n\nQuestion:\nUse a list comprehension to square each odd number in a list. 
The list is input by a sequence of comma-separated numbers.\nSuppose the following input is supplied to the program:\n1,2,3,4,5,6,7,8,9\nThen, the output should be:\n1,3,5,7,9\n\nHints:\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nSolution:\nvalues = raw_input()\nnumbers = [x for x in values.split(\",\") if int(x)%2!=0]\nprint \",\".join(numbers)\n#----------------------------------------#\n\nQuestion 17\nLevel 2\n\nQuestion:\nWrite a program that computes the net amount of a bank account based a transaction log from console input. The transaction log format is shown as following:\nD 100\nW 200\n\nD means deposit while W means withdrawal.\nSuppose the following input is supplied to the program:\nD 300\nD 300\nW 200\nD 100\nThen, the output should be:\n500\n\nHints:\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nSolution:\nnetAmount = 0\nwhile True:\n s = raw_input()\n if not s:\n break\n values = s.split(\" \")\n operation = values[0]\n amount = int(values[1])\n if operation==\"D\":\n netAmount+=amount\n elif operation==\"W\":\n netAmount-=amount\n else:\n pass\nprint netAmount\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 18\nLevel 3\n\nQuestion:\nA website requires the users to input username and password to register. Write a program to check the validity of password input by users.\nFollowing are the criteria for checking the password:\n1. At least 1 letter between [a-z]\n2. At least 1 number between [0-9]\n1. At least 1 letter between [A-Z]\n3. At least 1 character from [$#@]\n4. Minimum length of transaction password: 6\n5. Maximum length of transaction password: 12\nYour program should accept a sequence of comma separated passwords and will check them according to the above criteria. Passwords that match the criteria are to be printed, each separated by a comma.\nExample\nIf the following passwords are given as input to the program:\nABd1234@1,a F1#,2w3E*,2We3345\nThen, the output of the program should be:\nABd1234@1\n\nHints:\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nSolutions:\nimport re\nvalue = []\nitems=[x for x in raw_input().split(',')]\nfor p in items:\n if len(p)<6 or len(p)>12:\n continue\n else:\n pass\n if not re.search(\"[a-z]\",p):\n continue\n elif not re.search(\"[0-9]\",p):\n continue\n elif not re.search(\"[A-Z]\",p):\n continue\n elif not re.search(\"[$#@]\",p):\n continue\n elif re.search(\"\\s\",p):\n continue\n else:\n pass\n value.append(p)\nprint \",\".join(value)\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 19\nLevel 3\n\nQuestion:\nYou are required to write a program to sort the (name, age, height) tuples by ascending order where name is string, age and height are numbers. The tuples are input by console. 
The sort criteria is:\n1: Sort based on name;\n2: Then sort based on age;\n3: Then sort by score.\nThe priority is that name > age > score.\nIf the following tuples are given as input to the program:\nTom,19,80\nJohn,20,90\nJony,17,91\nJony,17,93\nJson,21,85\nThen, the output of the program should be:\n[('John', '20', '90'), ('Jony', '17', '91'), ('Jony', '17', '93'), ('Json', '21', '85'), ('Tom', '19', '80')]\n\nHints:\nIn case of input data being supplied to the question, it should be assumed to be a console input.\nWe use itemgetter to enable multiple sort keys.\n\nSolutions:\nfrom operator import itemgetter, attrgetter\n\nl = []\nwhile True:\n s = raw_input()\n if not s:\n break\n l.append(tuple(s.split(\",\")))\n\nprint sorted(l, key=itemgetter(0,1,2))\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 20\nLevel 3\n\nQuestion:\nDefine a class with a generator which can iterate the numbers, which are divisible by 7, between a given range 0 and n.\n\nHints:\nConsider use yield\n\nSolution:\ndef putNumbers(n):\n i = 0\n while i<n:\n j=i\n i=i+1\n if j%7==0:\n yield j\n\nfor i in reverse(100):\n print i\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 21\nLevel 3\n\nQuestion£º\nA robot moves in a plane starting from the original point (0,0). The robot can move toward UP, DOWN, LEFT and RIGHT with a given steps. The trace of robot movement is shown as the following:\nUP 5\nDOWN 3\nLEFT 3\nRIGHT 2\n¡­\nThe numbers after the direction are steps. Please write a program to compute the distance from current position after a sequence of movement and original point. If the distance is a float, then just print the nearest integer.\nExample:\nIf the following tuples are given as input to the program:\nUP 5\nDOWN 3\nLEFT 3\nRIGHT 2\nThen, the output of the program should be:\n2\n\nHints:\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nSolution:\nimport math\npos = [0,0]\nwhile True:\n s = raw_input()\n if not s:\n break\n movement = s.split(\" \")\n direction = movement[0]\n steps = int(movement[1])\n if direction==\"UP\":\n pos[0]+=steps\n elif direction==\"DOWN\":\n pos[0]-=steps\n elif direction==\"LEFT\":\n pos[1]-=steps\n elif direction==\"RIGHT\":\n pos[1]+=steps\n else:\n pass\n\nprint int(round(math.sqrt(pos[1]**2+pos[0]**2)))\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 22\nLevel 3\n\nQuestion:\nWrite a program to compute the frequency of the words from the input. The output should output after sorting the key alphanumerically. \nSuppose the following input is supplied to the program:\nNew to Python or choosing between Python 2 and Python 3? 
Read Python 2 or Python 3.\nThen, the output should be:\n2:2\n3.:1\n3?:1\nNew:1\nPython:5\nRead:1\nand:1\nbetween:1\nchoosing:1\nor:2\nto:1\n\nHints\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nSolution:\nfreq = {} # frequency of words in text\nline = raw_input()\nfor word in line.split():\n freq[word] = freq.get(word,0)+1\n\nwords = freq.keys()\nwords.sort()\n\nfor w in words:\n print \"%s:%d\" % (w,freq[w])\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 23\nlevel 1\n\nQuestion:\n Write a method which can calculate square value of number\n\nHints:\n Using the ** operator\n\nSolution:\ndef square(num):\n return num ** 2\n\nprint square(2)\nprint square(3)\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 24\nLevel 1\n\nQuestion:\n Python has many built-in functions, and if you do not know how to use it, you can read document online or find some books. But Python has a built-in document function for every built-in functions.\n Please write a program to print some Python built-in functions documents, such as abs(), int(), raw_input()\n And add document for your own function\n \nHints:\n The built-in document method is __doc__\n\nSolution:\nprint abs.__doc__\nprint int.__doc__\nprint raw_input.__doc__\n\ndef square(num):\n '''Return the square value of the input number.\n \n The input number must be integer.\n '''\n return num ** 2\n\nprint square(2)\nprint square.__doc__\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion 25\nLevel 1\n\nQuestion:\n Define a class, which have a class parameter and have a same instance parameter.\n\nHints:\n Define a instance parameter, need add it in __init__ method\n You can init a object with construct parameter or set the value later\n\nSolution:\nclass Person:\n # Define the class parameter \"name\"\n name = \"Person\"\n \n def __init__(self, name = None):\n # self.name is the instance parameter\n self.name = name\n\njeffrey = Person(\"Jeffrey\")\nprint \"%s name is %s\" % (Person.name, jeffrey.name)\n\nnico = Person()\nnico.name = \"Nico\"\nprint \"%s name is %s\" % (Person.name, nico.name)\n#----------------------------------------#\n\n#----------------------------------------#\nQuestion:\nDefine a function which can compute the sum of two numbers.\n\nHints:\nDefine a function with two numbers as arguments. 
You can compute the sum in the function and return the value.\n\nSolution\ndef SumFunction(number1, number2):\n\treturn number1+number2\n\nprint SumFunction(1,2)\n\n#----------------------------------------#\nQuestion:\nDefine a function that can convert a integer into a string and print it in console.\n\nHints:\n\nUse str() to convert a number to string.\n\nSolution\ndef printValue(n):\n\tprint str(n)\n\nprintValue(3)\n\t\n\n#----------------------------------------#\nQuestion:\nDefine a function that can convert a integer into a string and print it in console.\n\nHints:\n\nUse str() to convert a number to string.\n\nSolution\ndef printValue(n):\n\tprint str(n)\n\nprintValue(3)\n\n#----------------------------------------#\n2.10\n\nQuestion:\nDefine a function that can receive two integral numbers in string form and compute their sum and then print it in console.\n\nHints:\n\nUse int() to convert a string to integer.\n\nSolution\ndef printValue(s1,s2):\n\tprint int(s1)+int(s2)\n\nprintValue(\"3\",\"4\") #7\n\n\n#----------------------------------------#\n2.10\n\n\nQuestion:\nDefine a function that can accept two strings as input and concatenate them and then print it in console.\n\nHints:\n\nUse + to concatenate the strings\n\nSolution\ndef printValue(s1,s2):\n\tprint s1+s2\n\nprintValue(\"3\",\"4\") #34\n\n#----------------------------------------#\n2.10\n\n\nQuestion:\nDefine a function that can accept two strings as input and print the string with maximum length in console. If two strings have the same length, then the function should print al l strings line by line.\n\nHints:\n\nUse len() function to get the length of a string\n\nSolution\ndef printValue(s1,s2):\n\tlen1 = len(s1)\n\tlen2 = len(s2)\n\tif len1>len2:\n\t\tprint s1\n\telif len2>len1:\n\t\tprint s2\n\telse:\n\t\tprint s1\n\t\tprint s2\n\t\t\n\nprintValue(\"one\",\"three\")\n\n\n\n#----------------------------------------#\n2.10\n\nQuestion:\nDefine a function that can accept an integer number as input and print the \"It is an even number\" if the number is even, otherwise print \"It is an odd number\".\n\nHints:\n\nUse % operator to check if a number is even or odd.\n\nSolution\ndef checkValue(n):\n\tif n%2 == 0:\n\t\tprint \"It is an even number\"\n\telse:\n\t\tprint \"It is an odd number\"\n\t\t\n\ncheckValue(7)\n\n\n#----------------------------------------#\n2.10\n\nQuestion:\nDefine a function which can print a dictionary where the keys are numbers between 1 and 3 (both included) and the values are square of keys.\n\nHints:\n\nUse dict[key]=value pattern to put entry into a dictionary.\nUse ** operator to get power of a number.\n\nSolution\ndef printDict():\n\td=dict()\n\td[1]=1\n\td[2]=2**2\n\td[3]=3**2\n\tprint d\n\t\t\n\nprintDict()\n\n\n\n\n\n#----------------------------------------#\n2.10\n\nQuestion:\nDefine a function which can print a dictionary where the keys are numbers between 1 and 20 (both included) and the values are square of keys.\n\nHints:\n\nUse dict[key]=value pattern to put entry into a dictionary.\nUse ** operator to get power of a number.\nUse range() for loops.\n\nSolution\ndef printDict():\n\td=dict()\n\tfor i in range(1,21):\n\t\td[i]=i**2\n\tprint d\n\t\t\n\nprintDict()\n\n\n#----------------------------------------#\n2.10\n\nQuestion:\nDefine a function which can generate a dictionary where the keys are numbers between 1 and 20 (both included) and the values are square of keys. 
The function should just print the values only.\n\nHints:\n\nUse dict[key]=value pattern to put entry into a dictionary.\nUse ** operator to get power of a number.\nUse range() for loops.\nUse keys() to iterate keys in the dictionary. Also we can use item() to get key/value pairs.\n\nSolution\ndef printDict():\n\td=dict()\n\tfor i in range(1,21):\n\t\td[i]=i**2\n\tfor (k,v) in d.items():\t\n\t\tprint v\n\t\t\n\nprintDict()\n\n#----------------------------------------#\n2.10\n\nQuestion:\nDefine a function which can generate a dictionary where the keys are numbers between 1 and 20 (both included) and the values are square of keys. The function should just print the keys only.\n\nHints:\n\nUse dict[key]=value pattern to put entry into a dictionary.\nUse ** operator to get power of a number.\nUse range() for loops.\nUse keys() to iterate keys in the dictionary. Also we can use item() to get key/value pairs.\n\nSolution\ndef printDict():\n\td=dict()\n\tfor i in range(1,21):\n\t\td[i]=i**2\n\tfor k in d.keys():\t\n\t\tprint k\n\t\t\n\nprintDict()\n\n\n#----------------------------------------#\n2.10\n\nQuestion:\nDefine a function which can generate and print a list where the values are square of numbers between 1 and 20 (both included).\n\nHints:\n\nUse ** operator to get power of a number.\nUse range() for loops.\nUse list.append() to add values into a list.\n\nSolution\ndef printList():\n\tli=list()\n\tfor i in range(1,21):\n\t\tli.append(i**2)\n\tprint li\n\t\t\n\nprintList()\n\n#----------------------------------------#\n2.10\n\nQuestion:\nDefine a function which can generate a list where the values are square of numbers between 1 and 20 (both included). Then the function needs to print the first 5 elements in the list.\n\nHints:\n\nUse ** operator to get power of a number.\nUse range() for loops.\nUse list.append() to add values into a list.\nUse [n1:n2] to slice a list\n\nSolution\ndef printList():\n\tli=list()\n\tfor i in range(1,21):\n\t\tli.append(i**2)\n\tprint li[:5]\n\t\t\n\nprintList()\n\n\n#----------------------------------------#\n2.10\n\nQuestion:\nDefine a function which can generate a list where the values are square of numbers between 1 and 20 (both included). Then the function needs to print the last 5 elements in the list.\n\nHints:\n\nUse ** operator to get power of a number.\nUse range() for loops.\nUse list.append() to add values into a list.\nUse [n1:n2] to slice a list\n\nSolution\ndef printList():\n\tli=list()\n\tfor i in range(1,21):\n\t\tli.append(i**2)\n\tprint li[-5:]\n\t\t\n\nprintList()\n\n\n#----------------------------------------#\n2.10\n\nQuestion:\nDefine a function which can generate a list where the values are square of numbers between 1 and 20 (both included). Then the function needs to print all values except the first 5 elements in the list.\n\nHints:\n\nUse ** operator to get power of a number.\nUse range() for loops.\nUse list.append() to add values into a list.\nUse [n1:n2] to slice a list\n\nSolution\ndef printList():\n\tli=list()\n\tfor i in range(1,21):\n\t\tli.append(i**2)\n\tprint li[5:]\n\t\t\n\nprintList()\n\n\n#----------------------------------------#\n2.10\n\nQuestion:\nDefine a function which can generate and print a tuple where the value are square of numbers between 1 and 20 (both included). 
\n\nHints:\n\nUse ** operator to get power of a number.\nUse range() for loops.\nUse list.append() to add values into a list.\nUse tuple() to get a tuple from a list.\n\nSolution\ndef printTuple():\n\tli=list()\n\tfor i in range(1,21):\n\t\tli.append(i**2)\n\tprint tuple(li)\n\t\t\nprintTuple()\n\n\n\n#----------------------------------------#\n2.10\n\nQuestion:\nWith a given tuple (1,2,3,4,5,6,7,8,9,10), write a program to print the first half values in one line and the last half values in one line. \n\nHints:\n\nUse [n1:n2] notation to get a slice from a tuple.\n\nSolution\ntp=(1,2,3,4,5,6,7,8,9,10)\ntp1=tp[:5]\ntp2=tp[5:]\nprint tp1\nprint tp2\n\n\n#----------------------------------------#\n2.10\n\nQuestion:\nWrite a program to generate and print another tuple whose values are even numbers in the given tuple (1,2,3,4,5,6,7,8,9,10). \n\nHints:\n\nUse \"for\" to iterate the tuple\nUse tuple() to generate a tuple from a list.\n\nSolution\ntp=(1,2,3,4,5,6,7,8,9,10)\nli=list()\nfor i in tp:\n\tif tp[i]%2==0:\n\t\tli.append(tp[i])\n\ntp2=tuple(li)\nprint tp2\n\n\n\n#----------------------------------------#\n2.14\n\nQuestion:\nWrite a program which accepts a string as input to print \"Yes\" if the string is \"yes\" or \"YES\" or \"Yes\", otherwise print \"No\". \n\nHints:\n\nUse if statement to judge condition.\n\nSolution\ns= raw_input()\nif s==\"yes\" or s==\"YES\" or s==\"Yes\":\n print \"Yes\"\nelse:\n print \"No\"\n\n\n\n#----------------------------------------#\n3.4\n\nQuestion:\nWrite a program which can filter even numbers in a list by using filter function. The list is: [1,2,3,4,5,6,7,8,9,10].\n\nHints:\n\nUse filter() to filter some elements in a list.\nUse lambda to define anonymous functions.\n\nSolution\nli = [1,2,3,4,5,6,7,8,9,10]\nevenNumbers = filter(lambda x: x%2==0, li)\nprint evenNumbers\n\n\n#----------------------------------------#\n3.4\n\nQuestion:\nWrite a program which can map() to make a list whose elements are square of elements in [1,2,3,4,5,6,7,8,9,10].\n\nHints:\n\nUse map() to generate a list.\nUse lambda to define anonymous functions.\n\nSolution\nli = [1,2,3,4,5,6,7,8,9,10]\nsquaredNumbers = map(lambda x: x**2, li)\nprint squaredNumbers\n\n#----------------------------------------#\n3.5\n\nQuestion:\nWrite a program which can map() and filter() to make a list whose elements are square of even number in [1,2,3,4,5,6,7,8,9,10].\n\nHints:\n\nUse map() to generate a list.\nUse filter() to filter elements of a list.\nUse lambda to define anonymous functions.\n\nSolution\nli = [1,2,3,4,5,6,7,8,9,10]\nevenNumbers = map(lambda x: x**2, filter(lambda x: x%2==0, li))\nprint evenNumbers\n\n\n\n\n#----------------------------------------#\n3.5\n\nQuestion:\nWrite a program which can filter() to make a list whose elements are even number between 1 and 20 (both included).\n\nHints:\n\nUse filter() to filter elements of a list.\nUse lambda to define anonymous functions.\n\nSolution\nevenNumbers = filter(lambda x: x%2==0, range(1,21))\nprint evenNumbers\n\n\n#----------------------------------------#\n3.5\n\nQuestion:\nWrite a program which can map() to make a list whose elements are square of numbers between 1 and 20 (both included).\n\nHints:\n\nUse map() to generate a list.\nUse lambda to define anonymous functions.\n\nSolution\nsquaredNumbers = map(lambda x: x**2, range(1,21))\nprint squaredNumbers\n\n\n\n\n#----------------------------------------#\n7.2\n\nQuestion:\nDefine a class named American which has a static method called printNationality.\n\nHints:\n\nUse 
@staticmethod decorator to define class static method.\n\nSolution\nclass American(object):\n @staticmethod\n def printNationality():\n print \"America\"\n\nanAmerican = American()\nanAmerican.printNationality()\nAmerican.printNationality()\n\n\n\n\n#----------------------------------------#\n\n7.2\n\nQuestion:\nDefine a class named American and its subclass NewYorker. \n\nHints:\n\nUse class Subclass(ParentClass) to define a subclass.\n\nSolution:\n\nclass American(object):\n pass\n\nclass NewYorker(American):\n pass\n\nanAmerican = American()\naNewYorker = NewYorker()\nprint anAmerican\nprint aNewYorker\n\n\n\n\n#----------------------------------------#\n\n\n7.2\n\nQuestion:\nDefine a class named Circle which can be constructed by a radius. The Circle class has a method which can compute the area. \n\nHints:\n\nUse def methodName(self) to define a method.\n\nSolution:\n\nclass Circle(object):\n def __init__(self, r):\n self.radius = r\n\n def area(self):\n return self.radius**2*3.14\n\naCircle = Circle(2)\nprint aCircle.area()\n\n\n\n\n\n\n#----------------------------------------#\n\n7.2\n\nDefine a class named Rectangle which can be constructed by a length and width. The Rectangle class has a method which can compute the area. \n\nHints:\n\nUse def methodName(self) to define a method.\n\nSolution:\n\nclass Rectangle(object):\n def __init__(self, l, w):\n self.length = l\n self.width = w\n\n def area(self):\n return self.length*self.width\n\naRectangle = Rectangle(2,10)\nprint aRectangle.area()\n\n\n\n\n#----------------------------------------#\n\n7.2\n\nDefine a class named Shape and its subclass Square. The Square class has an init function which takes a length as argument. Both classes have a area function which can print the area of the shape where Shape's area is 0 by default.\n\nHints:\n\nTo override a method in super class, we can define a method with the same name in the super class.\n\nSolution:\n\nclass Shape(object):\n def __init__(self):\n pass\n\n def area(self):\n return 0\n\nclass Square(Shape):\n def __init__(self, l):\n Shape.__init__(self)\n self.length = l\n\n def area(self):\n return self.length*self.length\n\naSquare= Square(3)\nprint aSquare.area()\n\n\n\n\n\n\n\n\n#----------------------------------------#\n\n\nPlease raise a RuntimeError exception.\n\nHints:\n\nUse raise() to raise an exception.\n\nSolution:\n\nraise RuntimeError('something wrong')\n\n\n\n#----------------------------------------#\nWrite a function to compute 5/0 and use try/except to catch the exceptions.\n\nHints:\n\nUse try/except to catch exceptions.\n\nSolution:\n\ndef throws():\n return 5/0\n\ntry:\n throws()\nexcept ZeroDivisionError:\n print \"division by zero!\"\nexcept Exception, err:\n print 'Caught an exception'\nfinally:\n print 'In finally block for cleanup'\n\n\n#----------------------------------------#\nDefine a custom exception class which takes a string message as attribute.\n\nHints:\n\nTo define a custom exception, we need to define a class inherited from Exception.\n\nSolution:\n\nclass MyError(Exception):\n \"\"\"My own exception class\n\n Attributes:\n msg -- explanation of the error\n \"\"\"\n\n def __init__(self, msg):\n self.msg = msg\n\nerror = MyError(\"something wrong\")\n\n#----------------------------------------#\nQuestion:\n\nAssuming that we have some email addresses in the \"[email protected]\" format, please write program to print the user name of a given email address. 
Both user names and company names are composed of letters only.\n\nExample:\nIf the following email address is given as input to the program:\n\[email protected]\n\nThen, the output of the program should be:\n\njohn\n\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nHints:\n\nUse \\w to match letters.\n\nSolution:\nimport re\nemailAddress = raw_input()\npat2 = \"(\\w+)@((\\w+\\.)+(com))\"\nr2 = re.match(pat2,emailAddress)\nprint r2.group(1)\n\n\n#----------------------------------------#\nQuestion:\n\nAssuming that we have some email addresses in the \"[email protected]\" format, please write program to print the company name of a given email address. Both user names and company names are composed of letters only.\n\nExample:\nIf the following email address is given as input to the program:\n\[email protected]\n\nThen, the output of the program should be:\n\ngoogle\n\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nHints:\n\nUse \\w to match letters.\n\nSolution:\nimport re\nemailAddress = raw_input()\npat2 = \"(\\w+)@(\\w+)\\.(com)\"\nr2 = re.match(pat2,emailAddress)\nprint r2.group(2)\n\n\n\n\n#----------------------------------------#\nQuestion:\n\nWrite a program which accepts a sequence of words separated by whitespace as input to print the words composed of digits only.\n\nExample:\nIf the following words is given as input to the program:\n\n2 cats and 3 dogs.\n\nThen, the output of the program should be:\n\n['2', '3']\n\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nHints:\n\nUse re.findall() to find all substring using regex.\n\nSolution:\nimport re\ns = raw_input()\nprint re.findall(\"\\d+\",s)\n\n\n#----------------------------------------#\nQuestion:\n\n\nPrint a unicode string \"hello world\".\n\nHints:\n\nUse u'strings' format to define unicode string.\n\nSolution:\n\nunicodeString = u\"hello world!\"\nprint unicodeString\n\n#----------------------------------------#\nWrite a program to read an ASCII string and to convert it to a unicode string encoded by utf-8.\n\nHints:\n\nUse unicode() function to convert.\n\nSolution:\n\ns = raw_input()\nu = unicode( s ,\"utf-8\")\nprint u\n\n#----------------------------------------#\nQuestion:\n\nWrite a special comment to indicate a Python source code file is in unicode.\n\nHints:\n\nSolution:\n\n# -*- coding: utf-8 -*-\n\n#----------------------------------------#\nQuestion:\n\nWrite a program to compute 1/2+2/3+3/4+...+n/n+1 with a given n input by console (n>0).\n\nExample:\nIf the following n is given as input to the program:\n\n5\n\nThen, the output of the program should be:\n\n3.55\n\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nHints:\nUse float() to convert an integer to a float\n\nSolution:\n\nn=int(raw_input())\nsum=0.0\nfor i in range(1,n+1):\n sum += float(float(i)/(i+1))\nprint sum\n\n\n#----------------------------------------#\nQuestion:\n\nWrite a program to compute:\n\nf(n)=f(n-1)+100 when n>0\nand f(0)=1\n\nwith a given n input by console (n>0).\n\nExample:\nIf the following n is given as input to the program:\n\n5\n\nThen, the output of the program should be:\n\n500\n\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nHints:\nWe can define recursive function in Python.\n\nSolution:\n\ndef f(n):\n if n==0:\n return 0\n else:\n return 
f(n-1)+100\n\nn=int(raw_input())\nprint f(n)\n\n#----------------------------------------#\n\nQuestion:\n\n\nThe Fibonacci Sequence is computed based on the following formula:\n\n\nf(n)=0 if n=0\nf(n)=1 if n=1\nf(n)=f(n-1)+f(n-2) if n>1\n\nPlease write a program to compute the value of f(n) with a given n input by console.\n\nExample:\nIf the following n is given as input to the program:\n\n7\n\nThen, the output of the program should be:\n\n13\n\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nHints:\nWe can define recursive function in Python.\n\n\nSolution:\n\ndef f(n):\n if n == 0: return 0\n elif n == 1: return 1\n else: return f(n-1)+f(n-2)\n\nn=int(raw_input())\nprint f(n)\n\n\n#----------------------------------------#\n\n#----------------------------------------#\n\nQuestion:\n\nThe Fibonacci Sequence is computed based on the following formula:\n\n\nf(n)=0 if n=0\nf(n)=1 if n=1\nf(n)=f(n-1)+f(n-2) if n>1\n\nPlease write a program using list comprehension to print the Fibonacci Sequence in comma separated form with a given n input by console.\n\nExample:\nIf the following n is given as input to the program:\n\n7\n\nThen, the output of the program should be:\n\n0,1,1,2,3,5,8,13\n\n\nHints:\nWe can define recursive function in Python.\nUse list comprehension to generate a list from an existing list.\nUse string.join() to join a list of strings.\n\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nSolution:\n\ndef f(n):\n if n == 0: return 0\n elif n == 1: return 1\n else: return f(n-1)+f(n-2)\n\nn=int(raw_input())\nvalues = [str(f(x)) for x in range(0, n+1)]\nprint \",\".join(values)\n\n\n#----------------------------------------#\n\nQuestion:\n\nPlease write a program using generator to print the even numbers between 0 and n in comma separated form while n is input by console.\n\nExample:\nIf the following n is given as input to the program:\n\n10\n\nThen, the output of the program should be:\n\n0,2,4,6,8,10\n\nHints:\nUse yield to produce the next value in generator.\n\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nSolution:\n\ndef EvenGenerator(n):\n i=0\n while i<=n:\n if i%2==0:\n yield i\n i+=1\n\n\nn=int(raw_input())\nvalues = []\nfor i in EvenGenerator(n):\n values.append(str(i))\n\nprint \",\".join(values)\n\n\n#----------------------------------------#\n\nQuestion:\n\nPlease write a program using generator to print the numbers which can be divisible by 5 and 7 between 0 and n in comma separated form while n is input by console.\n\nExample:\nIf the following n is given as input to the program:\n\n100\n\nThen, the output of the program should be:\n\n0,35,70\n\nHints:\nUse yield to produce the next value in generator.\n\nIn case of input data being supplied to the question, it should be assumed to be a console input.\n\nSolution:\n\ndef NumGenerator(n):\n for i in range(n+1):\n if i%5==0 and i%7==0:\n yield i\n\nn=int(raw_input())\nvalues = []\nfor i in NumGenerator(n):\n values.append(str(i))\n\nprint \",\".join(values)\n\n\n#----------------------------------------#\n\nQuestion:\n\n\nPlease write assert statements to verify that every number in the list [2,4,6,8] is even.\n\n\n\nHints:\nUse \"assert expression\" to make assertion.\n\n\nSolution:\n\nli = [2,4,6,8]\nfor i in li:\n assert i%2==0\n\n\n#----------------------------------------#\nQuestion:\n\nPlease write a program which accepts basic mathematic expression 
from console and print the evaluation result.\n\nExample:\nIf the following string is given as input to the program:\n\n35+3\n\nThen, the output of the program should be:\n\n38\n\nHints:\nUse eval() to evaluate an expression.\n\n\nSolution:\n\nexpression = raw_input()\nprint eval(expression)\n\n\n#----------------------------------------#\nQuestion:\n\nPlease write a binary search function which searches an item in a sorted list. The function should return the index of element to be searched in the list.\n\n\nHints:\nUse if/elif to deal with conditions.\n\n\nSolution:\n\nimport math\ndef bin_search(li, element):\n bottom = 0\n top = len(li)-1\n index = -1\n while top>=bottom and index==-1:\n mid = int(math.floor((top+bottom)/2.0))\n if li[mid]==element:\n index = mid\n elif li[mid]>element:\n top = mid-1\n else:\n bottom = mid+1\n\n return index\n\nli=[2,5,7,9,11,17,222]\nprint bin_search(li,11)\nprint bin_search(li,12)\n\n\n\n\n#----------------------------------------#\nQuestion:\n\nPlease write a binary search function which searches an item in a sorted list. The function should return the index of element to be searched in the list.\n\n\nHints:\nUse if/elif to deal with conditions.\n\n\nSolution:\n\nimport math\ndef bin_search(li, element):\n bottom = 0\n top = len(li)-1\n index = -1\n while top>=bottom and index==-1:\n mid = int(math.floor((top+bottom)/2.0))\n if li[mid]==element:\n index = mid\n elif li[mid]>element:\n top = mid-1\n else:\n bottom = mid+1\n\n return index\n\nli=[2,5,7,9,11,17,222]\nprint bin_search(li,11)\nprint bin_search(li,12)\n\n\n\n\n#----------------------------------------#\nQuestion:\n\nPlease generate a random float where the value is between 10 and 100 using Python math module.\n\n\n\nHints:\nUse random.random() to generate a random float in [0,1].\n\n\nSolution:\n\nimport random\nprint random.random()*100\n\n#----------------------------------------#\nQuestion:\n\nPlease generate a random float where the value is between 5 and 95 using Python math module.\n\n\n\nHints:\nUse random.random() to generate a random float in [0,1].\n\n\nSolution:\n\nimport random\nprint random.random()*100-5\n\n\n#----------------------------------------#\nQuestion:\n\nPlease write a program to output a random even number between 0 and 10 inclusive using random module and list comprehension.\n\n\n\nHints:\nUse random.choice() to a random element from a list.\n\n\nSolution:\n\nimport random\nprint random.choice([i for i in range(11) if i%2==0])\n\n\n#----------------------------------------#\nQuestion:\n\nPlease write a program to output a random number, which is divisible by 5 and 7, between 0 and 10 inclusive using random module and list comprehension.\n\n\n\nHints:\nUse random.choice() to a random element from a list.\n\n\nSolution:\n\nimport random\nprint random.choice([i for i in range(201) if i%5==0 and i%7==0])\n\n\n\n#----------------------------------------#\n\nQuestion:\n\nPlease write a program to generate a list with 5 random numbers between 100 and 200 inclusive.\n\n\n\nHints:\nUse random.sample() to generate a list of random values.\n\n\nSolution:\n\nimport random\nprint random.sample(range(100), 5)\n\n#----------------------------------------#\nQuestion:\n\nPlease write a program to randomly generate a list with 5 even numbers between 100 and 200 inclusive.\n\n\n\nHints:\nUse random.sample() to generate a list of random values.\n\n\nSolution:\n\nimport random\nprint random.sample([i for i in range(100,201) if i%2==0], 
5)\n\n\n#----------------------------------------#\nQuestion:\n\nPlease write a program to randomly generate a list with 5 numbers, which are divisible by 5 and 7 , between 1 and 1000 inclusive.\n\n\n\nHints:\nUse random.sample() to generate a list of random values.\n\n\nSolution:\n\nimport random\nprint random.sample([i for i in range(1,1001) if i%5==0 and i%7==0], 5)\n\n#----------------------------------------#\n\nQuestion:\n\nPlease write a program to randomly print a integer number between 7 and 15 inclusive.\n\n\n\nHints:\nUse random.randrange() to a random integer in a given range.\n\n\nSolution:\n\nimport random\nprint random.randrange(7,16)\n\n#----------------------------------------#\n\nQuestion:\n\nPlease write a program to compress and decompress the string \"hello world!hello world!hello world!hello world!\".\n\n\n\nHints:\nUse zlib.compress() and zlib.decompress() to compress and decompress a string.\n\n\nSolution:\n\nimport zlib\ns = 'hello world!hello world!hello world!hello world!'\nt = zlib.compress(s)\nprint t\nprint zlib.decompress(t)\n\n#----------------------------------------#\nQuestion:\n\nPlease write a program to print the running time of execution of \"1+1\" for 100 times.\n\n\n\nHints:\nUse timeit() function to measure the running time.\n\nSolution:\n\nfrom timeit import Timer\nt = Timer(\"for i in range(100):1+1\")\nprint t.timeit()\n\n#----------------------------------------#\nQuestion:\n\nPlease write a program to shuffle and print the list [3,6,7,8].\n\n\n\nHints:\nUse shuffle() function to shuffle a list.\n\nSolution:\n\nfrom random import shuffle\nli = [3,6,7,8]\nshuffle(li)\nprint li\n\n#----------------------------------------#\nQuestion:\n\nPlease write a program to shuffle and print the list [3,6,7,8].\n\n\n\nHints:\nUse shuffle() function to shuffle a list.\n\nSolution:\n\nfrom random import shuffle\nli = [3,6,7,8]\nshuffle(li)\nprint li\n\n\n\n#----------------------------------------#\nQuestion:\n\nPlease write a program to generate all sentences where subject is in [\"I\", \"You\"] and verb is in [\"Play\", \"Love\"] and the object is in [\"Hockey\",\"Football\"].\n\nHints:\nUse list[index] notation to get a element from a list.\n\nSolution:\n\nsubjects=[\"I\", \"You\"]\nverbs=[\"Play\", \"Love\"]\nobjects=[\"Hockey\",\"Football\"]\nfor i in range(len(subjects)):\n for j in range(len(verbs)):\n for k in range(len(objects)):\n sentence = \"%s %s %s.\" % (subjects[i], verbs[j], objects[k])\n print sentence\n\n\n#----------------------------------------#\nPlease write a program to print the list after removing delete even numbers in [5,6,77,45,22,12,24].\n\nHints:\nUse list comprehension to delete a bunch of element from a list.\n\nSolution:\n\nli = [5,6,77,45,22,12,24]\nli = [x for x in li if x%2!=0]\nprint li\n\n#----------------------------------------#\nQuestion:\n\nBy using list comprehension, please write a program to print the list after removing delete numbers which are divisible by 5 and 7 in [12,24,35,70,88,120,155].\n\nHints:\nUse list comprehension to delete a bunch of element from a list.\n\nSolution:\n\nli = [12,24,35,70,88,120,155]\nli = [x for x in li if x%5!=0 and x%7!=0]\nprint li\n\n\n#----------------------------------------#\nQuestion:\n\nBy using list comprehension, please write a program to print the list after removing the 0th, 2nd, 4th,6th numbers in [12,24,35,70,88,120,155].\n\nHints:\nUse list comprehension to delete a bunch of element from a list.\nUse enumerate() to get (index, value) tuple.\n\nSolution:\n\nli = 
[12,24,35,70,88,120,155]\nli = [x for (i,x) in enumerate(li) if i%2!=0]\nprint li\n\n#----------------------------------------#\n\nQuestion:\n\nBy using list comprehension, please write a program generate a 3*5*8 3D array whose each element is 0.\n\nHints:\nUse list comprehension to make an array.\n\nSolution:\n\narray = [[ [0 for col in range(8)] for col in range(5)] for row in range(3)]\nprint array\n\n#----------------------------------------#\nQuestion:\n\nBy using list comprehension, please write a program to print the list after removing the 0th,4th,5th numbers in [12,24,35,70,88,120,155].\n\nHints:\nUse list comprehension to delete a bunch of element from a list.\nUse enumerate() to get (index, value) tuple.\n\nSolution:\n\nli = [12,24,35,70,88,120,155]\nli = [x for (i,x) in enumerate(li) if i not in (0,4,5)]\nprint li\n\n\n\n#----------------------------------------#\n\nQuestion:\n\nBy using list comprehension, please write a program to print the list after removing the value 24 in [12,24,35,24,88,120,155].\n\nHints:\nUse list's remove method to delete a value.\n\nSolution:\n\nli = [12,24,35,24,88,120,155]\nli = [x for x in li if x!=24]\nprint li\n\n\n#----------------------------------------#\nQuestion:\n\nWith two given lists [1,3,6,78,35,55] and [12,24,35,24,88,120,155], write a program to make a list whose elements are intersection of the above given lists.\n\nHints:\nUse set() and \"&=\" to do set intersection operation.\n\nSolution:\n\nset1=set([1,3,6,78,35,55])\nset2=set([12,24,35,24,88,120,155])\nset1 &= set2\nli=list(set1)\nprint li\n\n#----------------------------------------#\n\nWith a given list [12,24,35,24,88,120,155,88,120,155], write a program to print this list after removing all duplicate values with original order reserved.\n\nHints:\nUse set() to store a number of values without duplicate.\n\nSolution:\n\ndef removeDuplicate( li ):\n newli=[]\n seen = set()\n for item in li:\n if item not in seen:\n seen.add( item )\n newli.append(item)\n\n return newli\n\nli=[12,24,35,24,88,120,155,88,120,155]\nprint removeDuplicate(li)\n\n\n#----------------------------------------#\nQuestion:\n\nDefine a class Person and its two child classes: Male and Female. 
All classes have a method \"getGender\" which can print \"Male\" for Male class and \"Female\" for Female class.\n\nHints:\nUse Subclass(Parentclass) to define a child class.\n\nSolution:\n\nclass Person(object):\n def getGender( self ):\n return \"Unknown\"\n\nclass Male( Person ):\n def getGender( self ):\n return \"Male\"\n\nclass Female( Person ):\n def getGender( self ):\n return \"Female\"\n\naMale = Male()\naFemale= Female()\nprint aMale.getGender()\nprint aFemale.getGender()\n\n\n\n#----------------------------------------#\nQuestion:\n\nPlease write a program which count and print the numbers of each character in a string input by console.\n\nExample:\nIf the following string is given as input to the program:\n\nabcdefgabc\n\nThen, the output of the program should be:\n\na,2\nc,2\nb,2\ne,1\nd,1\ng,1\nf,1\n\nHints:\nUse dict to store key/value pairs.\nUse dict.get() method to lookup a key with default value.\n\nSolution:\n\ndic = {}\ns=raw_input()\nfor s in s:\n dic[s] = dic.get(s,0)+1\nprint '\\n'.join(['%s,%s' % (k, v) for k, v in dic.items()])\n\n#----------------------------------------#\n\nQuestion:\n\nPlease write a program which accepts a string from console and print it in reverse order.\n\nExample:\nIf the following string is given as input to the program:\n\nrise to vote sir\n\nThen, the output of the program should be:\n\nris etov ot esir\n\nHints:\nUse list[::-1] to iterate a list in a reverse order.\n\nSolution:\n\ns=raw_input()\ns = s[::-1]\nprint s\n\n#----------------------------------------#\n\nQuestion:\n\nPlease write a program which accepts a string from console and print the characters that have even indexes.\n\nExample:\nIf the following string is given as input to the program:\n\nH1e2l3l4o5w6o7r8l9d\n\nThen, the output of the program should be:\n\nHelloworld\n\nHints:\nUse list[::2] to iterate a list by step 2.\n\nSolution:\n\ns=raw_input()\ns = s[::2]\nprint s\n#----------------------------------------#\n\n\nQuestion:\n\nPlease write a program which prints all permutations of [1,2,3]\n\n\nHints:\nUse itertools.permutations() to get permutations of list.\n\nSolution:\n\nimport itertools\nprint list(itertools.permutations([1,2,3]))\n\n#----------------------------------------#\nQuestion:\n\nWrite a program to solve a classic ancient Chinese puzzle: \nWe count 35 heads and 94 legs among the chickens and rabbits in a farm. How many rabbits and how many chickens do we have?\n\nHint:\nUse for loop to iterate all possible solutions.\n\nSolution:\n\ndef solve(numheads,numlegs):\n ns='No solutions!'\n for i in range(numheads+1):\n j=numheads-i\n if 2*i+4*j==numlegs:\n return i,j\n return ns,ns\n\nnumheads=35\nnumlegs=94\nsolutions=solve(numheads,numlegs)\nprint solutions\n\n#----------------------------------------#", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
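The exercise record above presents its solutions in Python 2 (`raw_input`, `print` statements, `unicode()`). As a hedged illustration only — not part of the original record — here is a Python 3 sketch of its binary-search exercise; the algorithm is the one shown in the record, with only the syntax updated.

```python
# Python 3 rendition of the binary-search exercise embedded in the record above.
# The original solutions target Python 2 (raw_input, print statements); this
# sketch only updates the syntax, the algorithm itself is unchanged.

def bin_search(li, element):
    """Return the index of `element` in the sorted list `li`, or -1 if absent."""
    bottom, top = 0, len(li) - 1
    while top >= bottom:
        mid = (top + bottom) // 2      # integer midpoint, no math.floor needed
        if li[mid] == element:
            return mid
        elif li[mid] > element:
            top = mid - 1
        else:
            bottom = mid + 1
    return -1

if __name__ == "__main__":
    li = [2, 5, 7, 9, 11, 17, 222]
    print(bin_search(li, 11))   # -> 4
    print(bin_search(li, 12))   # -> -1
```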
d033f15dac95dba117995ea75e9f6e6ee146e759
70,595
ipynb
Jupyter Notebook
phik/notebooks/phik_tutorial_advanced.ipynb
ionicsolutions/PhiK
bf9f3e7938d16437ff9b398a33042b719c91f8f1
[ "Apache-2.0" ]
92
2018-12-28T14:03:05.000Z
2022-03-23T16:56:05.000Z
phik/notebooks/phik_tutorial_advanced.ipynb
ionicsolutions/PhiK
bf9f3e7938d16437ff9b398a33042b719c91f8f1
[ "Apache-2.0" ]
34
2019-06-19T16:17:17.000Z
2022-03-25T08:20:04.000Z
phik/notebooks/phik_tutorial_advanced.ipynb
ionicsolutions/PhiK
bf9f3e7938d16437ff9b398a33042b719c91f8f1
[ "Apache-2.0" ]
24
2018-12-18T16:41:18.000Z
2022-03-05T11:25:07.000Z
32.804368
505
0.428274
[ [ [ "# Phi_K advanced tutorial\n\nThis notebook guides you through the more advanced functionality of the phik package. This notebook will not cover all the underlying theory, but will just attempt to give an overview of all the options that are available. For a theoretical description the user is referred to our paper.\n\nThe package offers functionality on three related topics:\n\n1. Phik correlation matrix\n2. Significance matrix\n3. Outlier significance matrix", "_____no_output_____" ] ], [ [ "%%capture\n# install phik (if not installed yet)\nimport sys\n\n!\"{sys.executable}\" -m pip install phik", "_____no_output_____" ], [ "# import standard packages\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport itertools\n\nimport phik\n\nfrom phik import resources\nfrom phik.binning import bin_data\nfrom phik.decorators import *\nfrom phik.report import plot_correlation_matrix\n\n%matplotlib inline", "_____no_output_____" ], [ "# if one changes something in the phik-package one can automatically reload the package or module\n%load_ext autoreload\n%autoreload 2", "_____no_output_____" ] ], [ [ "# Load data\n\nA simulated dataset is part of the phik-package. The dataset concerns car insurance data. Load the dataset here:", "_____no_output_____" ] ], [ [ "data = pd.read_csv( resources.fixture('fake_insurance_data.csv.gz') )", "_____no_output_____" ], [ "data.head()", "_____no_output_____" ] ], [ [ "## Specify bin types\n\nThe phik-package offers a way to calculate correlations between variables of mixed types. Variable types can be inferred automatically although we recommend to variable types to be specified by the user. \n\nBecause interval type variables need to be binned in order to calculate phik and the significance, a list of interval variables is created.", "_____no_output_____" ] ], [ [ "data_types = {'severity': 'interval',\n 'driver_age':'interval',\n 'satisfaction':'ordinal',\n 'mileage':'interval',\n 'car_size':'ordinal',\n 'car_use':'ordinal',\n 'car_color':'categorical',\n 'area':'categorical'}\n\ninterval_cols = [col for col, v in data_types.items() if v=='interval' and col in data.columns]\ninterval_cols\n# interval_cols is used below", "_____no_output_____" ] ], [ [ "# Phik correlation matrix\n\nNow let's start calculating the correlation phik between pairs of variables. \n\nNote that the original dataset is used as input, the binning of interval variables is done automatically.", "_____no_output_____" ] ], [ [ "phik_overview = data.phik_matrix(interval_cols=interval_cols)\nphik_overview", "_____no_output_____" ] ], [ [ "### Specify binning per interval variable\n\nBinning can be set per interval variable individually. One can set the number of bins, or specify a list of bin edges. Note that the measured phik correlation is dependent on the chosen binning. \nThe default binning is uniform between the min and max values of the interval variable.", "_____no_output_____" ] ], [ [ "bins = {'mileage':5, 'driver_age':[18,25,35,45,55,65,125]}\nphik_overview = data.phik_matrix(interval_cols=interval_cols, bins=bins)\nphik_overview", "_____no_output_____" ] ], [ [ "### Do not apply noise correction\n\nFor low statistics samples often a correlation larger than zero is measured when no correlation is actually present in the true underlying distribution. This is not only the case for phik, but also for the pearson correlation and Cramer's phi (see figure 4 in <font color='red'> XX </font>). 
In the phik calculation a noise correction is applied by default, to take into account erroneous correlation values as a result of low statistics. To switch off this noise cancellation (not recommended), do:", "_____no_output_____" ] ], [ [ "phik_overview = data.phik_matrix(interval_cols=interval_cols, noise_correction=False)\nphik_overview", "_____no_output_____" ] ], [ [ "### Using a different expectation histogram\n\nBy default phik compares the 2d distribution of two (binned) variables with the distribution that assumes no dependency between them. One can also change the expected distribution though. Phi_K is calculated in the same way, but using the other expectation distribution. ", "_____no_output_____" ] ], [ [ "from phik.binning import auto_bin_data\nfrom phik.phik import phik_observed_vs_expected_from_rebinned_df, phik_from_hist2d\nfrom phik.statistics import get_dependent_frequency_estimates", "_____no_output_____" ], [ "# get observed 2d histogram of two variables\ncols = [\"mileage\", \"car_size\"]\nicols = [\"mileage\"]\nobserved = data[cols].hist2d(interval_cols=icols).values", "_____no_output_____" ], [ "# default phik evaluation from observed distribution\nphik_value = phik_from_hist2d(observed)\nprint (phik_value)", "0.768588829489185\n" ], [ "# phik evaluation from an observed and expected distribution\nexpected = get_dependent_frequency_estimates(observed)\nphik_value = phik_from_hist2d(observed=observed, expected=expected)\nprint (phik_value)", "0.768588829489185\n" ], [ "# one can also compare two datasets against each other, and get a full phik matrix that way.\n# this needs binned datasets though. \n# (the user needs to make sure the binnings of both datasets are identical.) \ndata_binned, _ = auto_bin_data(data, interval_cols=interval_cols)", "_____no_output_____" ], [ "# here we are comparing data_binned against itself\nphik_matrix = phik_observed_vs_expected_from_rebinned_df(data_binned, data_binned)", "_____no_output_____" ], [ "# all off-diagonal entries are zero, meaning the all 2d distributions of both datasets are identical.\n# (by construction the diagonal is one.)\nphik_matrix", "_____no_output_____" ] ], [ [ "# Statistical significance of the correlation\n\nWhen assessing correlations it is good practise to evaluate both the correlation and the significance of the correlation: a large correlation may be statistically insignificant, and vice versa a small correlation may be very significant. For instance, scipy.stats.pearsonr returns both the pearson correlation and the p-value. Similarly, the phik package offers functionality the calculate a significance matrix. Significance is defined as:\n\n$$Z = \\Phi^{-1}(1-p)\\ ;\\quad \\Phi(z)=\\frac{1}{\\sqrt{2\\pi}} \\int_{-\\infty}^{z} e^{-t^{2}/2}\\,{\\rm d}t $$\n\nSeveral corrections to the 'standard' p-value calculation are taken into account, making the method more robust for low statistics and sparse data cases. The user is referred to our paper for more details.\n\nDue to the corrections, the significance calculation can take a few seconds.", "_____no_output_____" ] ], [ [ "significance_overview = data.significance_matrix(interval_cols=interval_cols)\nsignificance_overview", "_____no_output_____" ] ], [ [ "### Specify binning per interval variable\nBinning can be set per interval variable individually. One can set the number of bins, or specify a list of bin edges. 
Note that the measure phik correlation is dependent on the chosen binning.", "_____no_output_____" ] ], [ [ "bins = {'mileage':5, 'driver_age':[18,25,35,45,55,65,125]}\nsignificance_overview = data.significance_matrix(interval_cols=interval_cols, bins=bins)\nsignificance_overview", "_____no_output_____" ] ], [ [ "### Specify significance method\n\nThe recommended method to calculate the significance of the correlation is a hybrid approach, which uses the G-test statistic. The number of degrees of freedom and an analytical, empirical description of the $\\chi^2$ distribution are sed, based on Monte Carlo simulations. This method works well for both high as low statistics samples.\n\nOther approaches to calculate the significance are implemented:\n- asymptotic: fast, but over-estimates the number of degrees of freedom for low statistics samples, leading to erroneous values of the significance\n- MC: Many simulated samples are needed to accurately measure significances larger than 3, making this method computationally expensive.\n", "_____no_output_____" ] ], [ [ "significance_overview = data.significance_matrix(interval_cols=interval_cols, significance_method='asymptotic')\nsignificance_overview", "_____no_output_____" ] ], [ [ "### Simulation method\n\nThe chi2 of a contingency table is measured using a comparison of the expected frequencies with the true frequencies in a contingency table. The expected frequencies can be simulated in a variety of ways. The following methods are implemented:\n\n - multinominal: Only the total number of records is fixed. (default)\n - row_product_multinominal: The row totals fixed in the sampling.\n - col_product_multinominal: The column totals fixed in the sampling.\n - hypergeometric: Both the row or column totals are fixed in the sampling. (Note that this type of sampling is only available when row and column totals are integers, which is usually the case.)", "_____no_output_____" ] ], [ [ "# --- Warning, can be slow\n# turned off here by default for unit testing purposes\n\n#significance_overview = data.significance_matrix(interval_cols=interval_cols, simulation_method='hypergeometric')\n#significance_overview", "_____no_output_____" ] ], [ [ "### Expected frequencies", "_____no_output_____" ] ], [ [ "from phik.simulation import sim_2d_data_patefield, sim_2d_product_multinominal, sim_2d_data", "_____no_output_____" ], [ "inputdata = data[['driver_age', 'area']].hist2d(interval_cols=['driver_age'])\ninputdata", "_____no_output_____" ] ], [ [ "#### Multinominal", "_____no_output_____" ] ], [ [ "simdata = sim_2d_data(inputdata.values)\nprint('data total:', inputdata.sum().sum())\nprint('sim total:', simdata.sum().sum())\nprint('data row totals:', inputdata.sum(axis=0).values)\nprint('sim row totals:', simdata.sum(axis=0))\nprint('data column totals:', inputdata.sum(axis=1).values)\nprint('sim column totals:', simdata.sum(axis=1))", "data total: 2000.0\nsim total: 2000\ndata row totals: [ 65. 462. 724. 639. 110.]\nsim row totals: [ 75 468 748 586 123]\ndata column totals: [388. 379. 388. 339. 281. 144. 56. 21. 2. 
2.]\nsim column totals: [378 380 375 335 281 164 59 25 1 2]\n" ] ], [ [ "#### product multinominal", "_____no_output_____" ] ], [ [ "simdata = sim_2d_product_multinominal(inputdata.values, axis=0)\nprint('data total:', inputdata.sum().sum())\nprint('sim total:', simdata.sum().sum())\nprint('data row totals:', inputdata.sum(axis=0).astype(int).values)\nprint('sim row totals:', simdata.sum(axis=0).astype(int))\nprint('data column totals:', inputdata.sum(axis=1).astype(int).values)\nprint('sim column totals:', simdata.sum(axis=1).astype(int))", "data total: 2000.0\nsim total: 2000\ndata row totals: [ 65 462 724 639 110]\nsim row totals: [ 65 462 724 639 110]\ndata column totals: [388 379 388 339 281 144 56 21 2 2]\nsim column totals: [399 353 415 349 272 139 45 22 4 2]\n" ] ], [ [ "#### hypergeometric (\"patefield\")", "_____no_output_____" ] ], [ [ "# patefield simulation needs compiled c++ code.\n# only run this if the python binding to the (compiled) patefiled simulation function is found.\ntry:\n from phik.simcore import _sim_2d_data_patefield\n CPP_SUPPORT = True\nexcept ImportError:\n CPP_SUPPORT = False\n\nif CPP_SUPPORT:\n simdata = sim_2d_data_patefield(inputdata.values)\n print('data total:', inputdata.sum().sum())\n print('sim total:', simdata.sum().sum())\n print('data row totals:', inputdata.sum(axis=0).astype(int).values)\n print('sim row totals:', simdata.sum(axis=0))\n print('data column totals:', inputdata.sum(axis=1).astype(int).values)\n print('sim column totals:', simdata.sum(axis=1))", "data total: 2000.0\nsim total: 2000\ndata row totals: [ 65 462 724 639 110]\nsim row totals: [ 65 462 724 639 110]\ndata column totals: [388 379 388 339 281 144 56 21 2 2]\nsim column totals: [388 379 388 339 281 144 56 21 2 2]\n" ] ], [ [ "# Outlier significance\n\nThe normal pearson correlation between two interval variables is easy to interpret. However, the phik correlation between two variables of mixed type is not always easy to interpret, especially when it concerns categorical variables. Therefore, functionality is provided to detect \"outliers\": excesses and deficits over the expected frequencies in the contingency table of two variables. \n", "_____no_output_____" ], [ "### Example 1: mileage versus car_size", "_____no_output_____" ], [ "For the categorical variable pair mileage - car_size we measured:\n\n$$\\phi_k = 0.77 \\, ,\\quad\\quad \\mathrm{significance} = 46.3$$\n\nLet's use the outlier significance functionality to gain a better understanding of this significance correlation between mileage and car size.\n", "_____no_output_____" ] ], [ [ "c0 = 'mileage'\nc1 = 'car_size'\n\ntmp_interval_cols = ['mileage']", "_____no_output_____" ], [ "outlier_signifs, binning_dict = data[[c0,c1]].outlier_significance_matrix(interval_cols=tmp_interval_cols, \n retbins=True)\noutlier_signifs", "_____no_output_____" ] ], [ [ "### Specify binning per interval variable\nBinning can be set per interval variable individually. One can set the number of bins, or specify a list of bin edges. \n\nNote: in case a bin is created without any records this bin will be automatically dropped in the phik and (outlier) significance calculations. 
However, in the outlier significance calculation this will currently lead to an error as the number of provided bin edges does not match the number of bins anymore.", "_____no_output_____" ] ], [ [ "bins = [0,1E2, 1E3, 1E4, 1E5, 1E6]\noutlier_signifs, binning_dict = data[[c0,c1]].outlier_significance_matrix(interval_cols=tmp_interval_cols, \n bins=bins, retbins=True)\noutlier_signifs", "_____no_output_____" ] ], [ [ "### Specify binning per interval variable -- dealing with underflow and overflow\n\nWhen specifying custom bins as situation can occur when the minimal (maximum) value in the data is smaller (larger) than the minimum (maximum) bin edge. Data points outside the specified range will be collected in the underflow (UF) and overflow (OF) bins. One can choose how to deal with these under/overflow bins, by setting the drop_underflow and drop_overflow variables.\n\nNote that the drop_underflow and drop_overflow options are also available for the calculation of the phik matrix and the significance matrix.", "_____no_output_____" ] ], [ [ "bins = [1E2, 1E3, 1E4, 1E5]\noutlier_signifs, binning_dict = data[[c0,c1]].outlier_significance_matrix(interval_cols=tmp_interval_cols, \n bins=bins, retbins=True, \n drop_underflow=False,\n drop_overflow=False)\noutlier_signifs", "_____no_output_____" ] ], [ [ "### Dealing with NaN's in the data", "_____no_output_____" ], [ "Let's add some missing values to our data", "_____no_output_____" ] ], [ [ "data.loc[np.random.choice(range(len(data)), size=10), 'car_size'] = np.nan\ndata.loc[np.random.choice(range(len(data)), size=10), 'mileage'] = np.nan", "_____no_output_____" ] ], [ [ "Sometimes there can be information in the missing values and in which case you might want to consider the NaN values as a separate category. This can be achieved by setting the dropna argument to False.", "_____no_output_____" ] ], [ [ "bins = [1E2, 1E3, 1E4, 1E5]\noutlier_signifs, binning_dict = data[[c0,c1]].outlier_significance_matrix(interval_cols=tmp_interval_cols, \n bins=bins, retbins=True, \n drop_underflow=False,\n drop_overflow=False,\n dropna=False)\noutlier_signifs", "_____no_output_____" ] ], [ [ "Here OF and UF are the underflow and overflow bin of car_size, respectively.\n\nTo just ignore records with missing values set dropna to True (default).", "_____no_output_____" ] ], [ [ "bins = [1E2, 1E3, 1E4, 1E5]\noutlier_signifs, binning_dict = data[[c0,c1]].outlier_significance_matrix(interval_cols=tmp_interval_cols, \n bins=bins, retbins=True, \n drop_underflow=False,\n drop_overflow=False,\n dropna=True)\noutlier_signifs", "_____no_output_____" ] ], [ [ "Note that the dropna option is also available for the calculation of the phik matrix and the significance matrix.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
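The Phi_K tutorial record above demonstrates `phik_matrix`, `significance_matrix`, and `outlier_significance_matrix` on a bundled insurance fixture. The sketch below is an illustrative assumption rather than part of the record: it exercises the same three DataFrame accessors on a small synthetic DataFrame (column names and sizes are invented here), assuming the `phik` package is installed.

```python
# Minimal sketch of the three phik calls shown in the tutorial record above,
# run on a small synthetic DataFrame instead of the insurance fixture.
# Assumes the `phik` package is installed; the column names are made up here.
import numpy as np
import pandas as pd
import phik  # noqa: F401  (registers the .phik_matrix / .significance_matrix accessors)

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "mileage": rng.integers(0, 100_000, 200),      # interval variable, gets binned
    "car_size": rng.choice(list("SML"), 200),      # ordinal/categorical
    "area": rng.choice(["north", "south"], 200),   # categorical
})

interval_cols = ["mileage"]   # interval columns must be listed, as in the tutorial
corr = df.phik_matrix(interval_cols=interval_cols)
sig = df.significance_matrix(interval_cols=interval_cols)   # may take a few seconds
out, bins = df[["mileage", "car_size"]].outlier_significance_matrix(
    interval_cols=interval_cols, retbins=True
)
print(corr.round(2))
```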
d03406f032acfac4193a32ad0665f02d568fdf29
10,982
ipynb
Jupyter Notebook
demo/classification.ipynb
DandikUnited/dandikunited.github.io
8c60e88a8da0282d1c8639dd0e6cef510193acb2
[ "MIT" ]
null
null
null
demo/classification.ipynb
DandikUnited/dandikunited.github.io
8c60e88a8da0282d1c8639dd0e6cef510193acb2
[ "MIT" ]
null
null
null
demo/classification.ipynb
DandikUnited/dandikunited.github.io
8c60e88a8da0282d1c8639dd0e6cef510193acb2
[ "MIT" ]
null
null
null
32.111111
165
0.506738
[ [ [ "", "_____no_output_____" ], [ "## Support Vector Clustering visualized\n\nTo get started, please click on the cell with the code below and hit `Shift + Enter` This may take a while.", "_____no_output_____" ], [ "Support Vector Clustering(SVC) is a variation of Support Vector Machine (SVM).\n\nSVC is a way of determining a boudary point between different labels. It utilizes a kernel method, helps us to make better decisions on non-linear datasets.\n\nIn this demo, we will be able to play with 3 parameters, namely `Sample Size`, `C` (Penalty parameter for Cost fucntion), and `gamma` (Kernel coefficent)", "_____no_output_____" ] ], [ [ "%matplotlib notebook\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom ipywidgets import *\nfrom IPython.display import display\nfrom sklearn.svm import SVC\nplt.style.use('ggplot')\n\n\ndef plot_data(data, labels, sep):\n data_x = data[:, 0]\n data_y = data[:, 1]\n sep_x = sep[:, 0]\n sep_y = sep[:, 1]\n\n # plot data\n fig = plt.figure(figsize=(4, 4))\n pos_inds = np.argwhere(labels == 1)\n pos_inds = [s[0] for s in pos_inds]\n\n neg_inds = np.argwhere(labels == -1)\n neg_inds = [s[0] for s in neg_inds]\n plt.scatter(data_x[pos_inds], data_y[pos_inds], color='b', linewidth=1, marker='o', edgecolor='k', s=50)\n plt.scatter(data_x[neg_inds], data_y[neg_inds], color='r', linewidth=1, marker='o', edgecolor='k', s=50)\n\n # plot target\n plt.plot(sep_x, sep_y, '--k', linewidth=3)\n\n # clean up plot\n plt.yticks([], [])\n plt.xlim([-2.1, 2.1])\n plt.ylim([-2.1, 2.1])\n plt.axis('off')\n return plt\n\n\ndef update_plot_data(plt, data, labels, sep):\n plt.cla()\n plt.clf()\n data_x = data[:, 0]\n data_y = data[:, 1]\n sep_x = sep[:, 0]\n sep_y = sep[:, 1]\n\n # plot data\n #plt.draw(figsize=(4, 4))\n pos_inds = np.argwhere(labels == 1)\n pos_inds = [s[0] for s in pos_inds]\n\n neg_inds = np.argwhere(labels == -1)\n neg_inds = [s[0] for s in neg_inds]\n plt.scatter(data_x[pos_inds], data_y[pos_inds], color='b', linewidth=1, marker='o', edgecolor='k', s=50)\n plt.scatter(data_x[neg_inds], data_y[neg_inds], color='r', linewidth=1, marker='o', edgecolor='k', s=50)\n\n # plot target\n plt.plot(sep_x, sep_y, '--k', linewidth=3)\n\n # clean up plot\n plt.yticks([], [])\n plt.xlim([-2.1, 2.1])\n plt.ylim([-2.1, 2.1])\n plt.axis('off')\n\n\n# plot approximation\ndef plot_approx(clf):\n # plot classification boundary and color regions appropriately\n r = np.linspace(-2.1, 2.1, 500)\n s, t = np.meshgrid(r, r)\n s = np.reshape(s, (np.size(s), 1))\n t = np.reshape(t, (np.size(t), 1))\n h = np.concatenate((s, t), 1)\n\n # use classifier to make predictions\n z = clf.predict(h)\n\n # reshape predictions for plotting\n s.shape = (np.size(r), np.size(r))\n t.shape = (np.size(r), np.size(r))\n z.shape = (np.size(r), np.size(r))\n\n # show the filled in predicted-regions of the plane\n plt.contourf(s, t, z, colors=['r', 'b'], alpha=0.2, levels=range(-1, 2))\n\n # show the classification boundary if it exists\n if len(np.unique(z)) > 1:\n plt.contour(s, t, z, colors='k', linewidths=3)\n\n\ndef update_plot_approx(plt, clf):\n # plot classification boundary and color regions appropriately\n r = np.linspace(-2.1, 2.1, 500)\n s, t = np.meshgrid(r, r)\n s = np.reshape(s, (np.size(s), 1))\n t = np.reshape(t, (np.size(t), 1))\n h = np.concatenate((s, t), 1)\n\n # use classifier to make predictions\n z = clf.predict(h)\n\n # reshape predictions for plotting\n s.shape = (np.size(r), np.size(r))\n t.shape = (np.size(r), np.size(r))\n z.shape = (np.size(r), np.size(r))\n 
plt.cla()\n plt.clf()\n\n # show the filled in predicted-regions of the plane\n plt.contourf(s, t, z, colors=['r', 'b'], alpha=0.2, levels=range(-1, 2))\n\n # show the classification boundary if it exists\n if len(np.unique(z)) > 1:\n plt.contour(s, t, z, colors='k', linewidths=3)\n \n\ndef make_circle_classification_dataset(num_pts):\n '''\n This function generates a random circle dataset with two classes. \n You can run this a couple times to get a distribution you like visually. \n You can also adjust the num_pts parameter to change the total number of points in the dataset.\n '''\n\n # generate points\n num_misclass = 5 # total number of misclassified points\n s = np.random.rand(num_pts)\n data_x = np.cos(2 * np.pi * s)\n data_y = np.sin(2 * np.pi * s)\n radi = 2 * np.random.rand(num_pts)\n data_x = data_x * radi\n data_y = data_y * radi\n data_x.shape = (len(data_x), 1)\n data_y.shape = (len(data_y), 1)\n data = np.concatenate((data_x, data_y), axis=1)\n\n # make separator\n s = np.linspace(0, 1, 100)\n x_f = np.cos(2 * np.pi * s)\n y_f = np.sin(2 * np.pi * s)\n x_f.shape = (len(x_f), 1)\n y_f.shape = (len(y_f), 1)\n sep = np.concatenate((x_f, y_f), axis=1)\n\n # make labels and flip a few to show some misclassifications\n labels = radi.copy()\n ind1 = np.argwhere(labels > 1)\n ind1 = [v[0] for v in ind1]\n ind2 = np.argwhere(labels <= 1)\n ind2 = [v[0] for v in ind2]\n labels[ind1] = -1\n labels[ind2] = +1\n\n flip = np.random.permutation(num_pts)\n flip = flip[:num_misclass]\n for i in flip:\n labels[i] = (-1) * labels[i]\n\n # return datapoints and labels for study\n return data, labels, sep\n\nsample_size = widgets.IntSlider(\n value=50,\n min=50,\n max=1000,\n step=1,\n description='Sample size: ',\n disabled=False,\n continuous_update=False,\n orientation='horizontal',\n readout=True,\n readout_format='.1f',\n slider_color='white'\n)\nsplit_ratio = widgets.FloatSlider(\n value=0.2,\n min=0,\n max=1.0,\n step=0.1,\n description='Train/Test Split Ratio (0-1): ',\n disabled=False,\n continuous_update=False,\n orientation='horizontal',\n readout=True,\n readout_format='.1f',\n slider_color='white'\n)\nc = widgets.FloatSlider(\n value=0.1,\n min=0.1,\n max=10.0,\n step=0.1,\n description='C: ',\n disabled=False,\n continuous_update=False,\n orientation='horizontal',\n readout=True,\n readout_format='.1f',\n slider_color='white'\n)\ngamma = widgets.FloatSlider(\n value=0.1,\n min=0.1,\n max=10.0,\n step=0.1,\n description='Gamma: ',\n disabled=False,\n continuous_update=False,\n orientation='horizontal',\n readout=True,\n readout_format='.1f',\n slider_color='white'\n)\n\n\ndisplay(sample_size)\n\n\n#init plot\ndata, labels, true_sep = make_circle_classification_dataset(num_pts=sample_size.value)\n\n# preparing the plot\n\nclf = SVC(C=c.value, kernel='rbf', gamma=gamma.value)\n\n# fit classifier\nclf.fit(data, labels)\n\n# plot results\nfit_plot = plot_data(data, labels, true_sep)\nplot_approx(clf)\n\n\ndef on_train_info_change(change):\n clf = SVC(C=c.value, kernel='rbf', gamma=gamma.value)\n \n # fit classifier\n clf.fit(data, labels)\n\n # plot results\n update_plot_data(fit_plot, data, labels, true_sep)\n plot_approx(clf)\n \n\ndef on_value_change_sample(change):\n global data\n global labels\n global true_sep\n\n data, labels, true_sep = make_circle_classification_dataset(num_pts=sample_size.value)\n update_plot_data(fit_plot, data, labels, true_sep)\n\n clf = SVC(C=c.value,kernel='rbf',gamma=gamma.value)\n\n # fit classifier\n clf.fit(data, labels)\n \n # plot results\n 
update_plot_data(fit_plot, data, labels, true_sep)\n plot_approx(clf)\n\nsample_size.observe(on_value_change_sample, names='value')\n\ndisplay(c)\ndisplay(gamma)\n\nc.observe(on_train_info_change, names='value')\ngamma.observe(on_train_info_change, names='value')", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
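The widget demo in this record explores the `C` and `gamma` parameters of an RBF-kernel SVC interactively. Below is a hedged, non-interactive sketch (the data is generated here and is not the demo's exact dataset) that fits the same `sklearn.svm.SVC` over a small grid of `C`/`gamma` values on a circular two-class problem.

```python
# Non-interactive sketch of the SVC fit used in the widget demo above:
# fit an RBF-kernel SVC on a circular two-class dataset and report training
# accuracy for a few (C, gamma) settings. scikit-learn is assumed installed.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
angle = rng.uniform(0, 2 * np.pi, n)
radius = 2 * rng.uniform(size=n)
X = np.column_stack([radius * np.cos(angle), radius * np.sin(angle)])
y = np.where(radius <= 1, 1, -1)          # inner disc = +1, outer ring = -1

for C in (0.1, 1.0, 10.0):
    for gamma in (0.1, 1.0, 10.0):
        clf = SVC(C=C, kernel="rbf", gamma=gamma).fit(X, y)
        acc = clf.score(X, y)
        print(f"C={C:<5} gamma={gamma:<5} train accuracy={acc:.2f}")
```

Larger `gamma` makes the RBF kernel more local, so the fitted boundary can wrap around individual noisy points — the same effect the demo's sliders expose.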
d0341d9203818093134c312ff45e763013b2b3a3
181,415
ipynb
Jupyter Notebook
notebooks/meanpath.ipynb
nvogtvincent/project09
5408be3f4632d5e71cd5b2b7bbc8798382f97ef7
[ "CC-BY-4.0" ]
null
null
null
notebooks/meanpath.ipynb
nvogtvincent/project09
5408be3f4632d5e71cd5b2b7bbc8798382f97ef7
[ "CC-BY-4.0" ]
null
null
null
notebooks/meanpath.ipynb
nvogtvincent/project09
5408be3f4632d5e71cd5b2b7bbc8798382f97ef7
[ "CC-BY-4.0" ]
2
2021-06-03T12:49:36.000Z
2021-07-24T17:04:39.000Z
806.288889
137,012
0.949497
[ [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport cartopy.crs as ccrs", "_____no_output_____" ], [ "migration_patterns = pd.read_csv(\"./arctic_tern_migration.csv\") #Data file, needs to be in working directory\n\n#Adds \"Month\" column to pd dataframe - \"Date\" is a string: \"DD/MM/YYYY\", takes characters 3:5=3,4 -> MM and converts it to an integer, then added to dataframe\nmigration_patterns_months = [int(migration_patterns['Date'][i][3:5]) for i in range(len(migration_patterns[\"Date\"]))]\nmigration_patterns[\"Month\"] = migration_patterns_months\n\n#Southbound - mostly between Aug/Oct but takes times from Jul/Nov, and latitudes between +/- 40\nmigration_patterns_sbound = migration_patterns.where(migration_patterns[\"Month\"] > 6).where(migration_patterns[\"Month\"] < 12)\\\n.where(np.abs(migration_patterns[\"Lat\"]) <= 40).dropna(how=\"all\")\n\n#Rounds latitude for grouping\nmigration_patterns_sbound[\"Lat\"] = np.round(migration_patterns_sbound[\"Lat\"])\n\n#Latitudes grouped and mean longitude computed. rolling used to smooth data - roll_mean_over can be changed, roll_mean_over = 1 <=> no rolling mean.\nroll_mean_over = 3\n#migration_patterns_sbound.groupby(\"Lat\").mean(\"Long\")[\"Long\"].rolling(roll_mean_over,center=True).mean()", "_____no_output_____" ], [ "fig = plt.figure(figsize=[15,8])\nax = fig.add_subplot(111, projection=ccrs.PlateCarree(0))\nax.coastlines()\nax.gridlines()\n\nax.plot(migration_patterns_sbound.groupby(\"Lat\").mean(\"Long\")[\"Long\"].rolling(roll_mean_over,center=True).mean()[::3], np.arange(-40,40)[::3])\n\nfig.savefig(\"./meanpath.png\")", "_____no_output_____" ], [ "#To plot:\n\nfig = plt.figure(figsize=[15,8])\nax = fig.add_subplot(111, projection=ccrs.PlateCarree(0))\nax.coastlines()\nax.gridlines()\n\nSA_keys = []\nAF_keys = []\nIO_keys = []\n\nfor key, grp in migration_patterns_sbound.groupby(['Bird ID']):\n if grp[\"Long\"].mean() < -20:\n if grp[\"Bird ID\"].all() != \"ARTE_390\": #removes anomalous bird\n ax = grp.plot(ax=ax, kind='line', x='Long', y='Lat', color=\"r\", linewidth=1, alpha=0.5)\n SA_keys.append(key)\n if np.abs(grp[\"Long\"]).mean() <= 20:\n ax = grp.plot(ax=ax, kind='line', x='Long', y='Lat', color=\"b\", linewidth=1, alpha=0.5)\n AF_keys.append(key)\n if grp[\"Long\"].mean() > 20:\n ax = grp.plot(ax=ax, kind='line', x='Long', y='Lat', color=\"g\", linewidth=1, alpha=0.5)\n IO_keys.append(key)\n\nmigration_patterns_sbound_SA = migration_patterns_sbound\nmigration_patterns_sbound_AF = migration_patterns_sbound\nmigration_patterns_sbound_IO = migration_patterns_sbound\nfor key in SA_keys:\n migration_patterns_sbound_IO = migration_patterns_sbound_IO.where(migration_patterns_sbound_IO[\"Bird ID\"] != key)\n migration_patterns_sbound_AF = migration_patterns_sbound_AF.where(migration_patterns_sbound_AF[\"Bird ID\"] != key)\nfor key in AF_keys:\n migration_patterns_sbound_SA = migration_patterns_sbound_SA.where(migration_patterns_sbound_SA[\"Bird ID\"] != key)\n migration_patterns_sbound_IO = migration_patterns_sbound_IO.where(migration_patterns_sbound_IO[\"Bird ID\"] != key)\nfor key in IO_keys:\n migration_patterns_sbound_SA = migration_patterns_sbound_SA.where(migration_patterns_sbound_SA[\"Bird ID\"] != key)\n migration_patterns_sbound_AF = migration_patterns_sbound_AF.where(migration_patterns_sbound_AF[\"Bird ID\"] != key)\n \nmigration_patterns_sbound_SA = migration_patterns_sbound_SA.where(migration_patterns_sbound_SA[\"Bird ID\"] != \"ARTE_390\")\nmigration_patterns_sbound_AF = 
migration_patterns_sbound_AF.where(migration_patterns_sbound_AF[\"Bird ID\"] != \"ARTE_390\") \nmigration_patterns_sbound_IO = migration_patterns_sbound_IO.where(migration_patterns_sbound_IO[\"Bird ID\"] != \"ARTE_390\")\n\n#ax.plot(migration_patterns_sbound.groupby(\"Lat\").mean(\"Long\")[\"Long\"].rolling(9,center=True).mean(), np.arange(-40,40), color=\"k\", linewidth=3)\nax.plot(migration_patterns_sbound_SA.groupby(\"Lat\").mean(\"Long\")[\"Long\"].rolling(roll_mean_over,center=True).mean()[::-roll_mean_over], sorted(migration_patterns_sbound_SA[\"Lat\"].unique()[1:])[::-roll_mean_over], color=\"r\", linewidth=4)\nax.plot(migration_patterns_sbound_IO.groupby(\"Lat\").mean(\"Long\")[\"Long\"].rolling(roll_mean_over,center=True).mean()[::-roll_mean_over], sorted(migration_patterns_sbound_IO[\"Lat\"].unique()[1:])[::-roll_mean_over], color=\"g\", linewidth=4)\nax.plot(migration_patterns_sbound_AF.groupby(\"Lat\").mean(\"Long\")[\"Long\"].rolling(roll_mean_over,center=True).mean()[::-roll_mean_over], np.unique(sorted(migration_patterns_sbound_AF[\"Lat\"].unique()[1:]))[::-roll_mean_over], color=\"b\", linewidth=4)\n\nlegend = ax.legend()\nlegend.remove()\n\nax.set_xlim(-90,90)\nax.set_ylim(-90,90)\n\nfig.savefig(\"./figures/meanpath_grouped.png\", dpi=150, bbox_inches=\"tight\")", "_____no_output_____" ], [ "np.save(\"./migration_southbound_SA_long.npy\", migration_patterns_sbound_SA.groupby(\"Lat\").mean(\"Long\")[\"Long\"].rolling(roll_mean_over, center=True).mean().values[::-roll_mean_over])\nnp.save(\"./migration_southbound_IO_long.npy\", migration_patterns_sbound_IO.groupby(\"Lat\").mean(\"Long\")[\"Long\"].rolling(roll_mean_over,center=True).mean().values[::-roll_mean_over])\nnp.save(\"./migration_southbound_AF_long.npy\", migration_patterns_sbound_AF.groupby(\"Lat\").mean(\"Long\")[\"Long\"].rolling(roll_mean_over,center=True).mean().values[::-roll_mean_over])", "_____no_output_____" ], [ "np.save(\"./migration_southbound_SA_lat.npy\", sorted(migration_patterns_sbound_SA[\"Lat\"].unique()[1:])[::-roll_mean_over])\nnp.save(\"./migration_southbound_IO_lat.npy\", sorted(migration_patterns_sbound_IO[\"Lat\"].unique()[1:])[::-roll_mean_over])\nnp.save(\"./migration_southbound_AF_lat.npy\", np.unique(sorted(migration_patterns_sbound_AF[\"Lat\"].unique()[1:]))[::-roll_mean_over])", "_____no_output_____" ], [ "#Note: a rolling mean is taken over n=roll_mean_over values, and every nth value is taken, as a simple low pass filter.", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
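The migration notebook's central step is grouping track points by rounded latitude, averaging longitude per one-degree band, smoothing with a centred rolling mean, and then keeping every n-th value as a simple low-pass filter (as its final comment notes). A minimal sketch of that pattern on synthetic points, not the tern data:

```python
# Sketch of the groupby + centred rolling-mean smoothing used in the
# meanpath notebook above, on synthetic track points.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
lat = rng.uniform(-40, 40, 1_000)
lon = 0.3 * lat + rng.normal(0, 5, lat.size)       # noisy, latitude-dependent longitude
tracks = pd.DataFrame({"Lat": np.round(lat), "Long": lon})

roll_mean_over = 3
mean_long = (
    tracks.groupby("Lat")["Long"].mean()                 # mean longitude per latitude band
          .rolling(roll_mean_over, center=True).mean()   # simple low-pass smoothing
)
smoothed = mean_long[::roll_mean_over]                   # keep every n-th band, as in the notebook
print(smoothed.head())
```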
d0342715b76704e452b474212f84d7478df0f70a
80,482
ipynb
Jupyter Notebook
2020_07_15_pandas_functions.ipynb
daekee0325/Data-Analysis
9002d76298968e5fccdb34ba5730d0fb61590478
[ "MIT" ]
null
null
null
2020_07_15_pandas_functions.ipynb
daekee0325/Data-Analysis
9002d76298968e5fccdb34ba5730d0fb61590478
[ "MIT" ]
null
null
null
2020_07_15_pandas_functions.ipynb
daekee0325/Data-Analysis
9002d76298968e5fccdb34ba5730d0fb61590478
[ "MIT" ]
null
null
null
34.810554
276
0.303646
[ [ [ "<a href=\"https://colab.research.google.com/github/daekee0325/Data-Analysis/blob/master/2020_07_15_pandas_functions.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np", "_____no_output_____" ], [ "values_1 = np.random.randint(10, size=10)\nvalues_2 = np.random.randint(10, size = 10)", "_____no_output_____" ], [ "print(values_1)\nprint(values_2)", "[9 6 0 5 0 3 5 8 4 0]\n[3 7 4 0 1 4 4 0 5 6]\n" ], [ "years = np.arange(2010, 2020)\nprint(years)", "[2010 2011 2012 2013 2014 2015 2016 2017 2018 2019]\n" ], [ "groups = ['A','A','B','A','B','B','C','A','C','C']\nlen(groups)", "_____no_output_____" ], [ "df = pd.DataFrame({'group':groups, 'year':years,'value_1': values_1, 'value_2':values_2})\ndf", "_____no_output_____" ], [ "df.query('value_1<value_2')", "_____no_output_____" ], [ "new_col = np.random.randn(10)", "_____no_output_____" ], [ "df.insert(2, 'new_col',new_col)\n", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "df['cumsum_2'] = df[['value_2','group']].groupby('group').cumsum()\ndf", "_____no_output_____" ], [ "", "_____no_output_____" ] ], [ [ "SMPLE", "_____no_output_____" ] ], [ [ "Sample1 = df.sample(n=3)\nSample1", "_____no_output_____" ], [ "Sample2 = df.sample(frac=0.5)\nSample2", "_____no_output_____" ], [ "df['new_col'].where(df['new_col']>0,0)", "_____no_output_____" ], [ "np.where(df['new_col'] >0, df['new_col'], 0)", "_____no_output_____" ], [ "", "_____no_output_____" ] ], [ [ "isin", "_____no_output_____" ] ], [ [ "years = ['2010','2014','2015']", "_____no_output_____" ], [ "df[df.year.isin(years)]", "_____no_output_____" ], [ "df.loc[:2, ['group','year'] ]", "_____no_output_____" ], [ "df.loc[[1,3,5],['year','value_1']]", "_____no_output_____" ], [ "df['value_1']", "_____no_output_____" ], [ "df.value_1.pct_change()", "_____no_output_____" ], [ "df.value_1.sort_values()", "_____no_output_____" ], [ "df.value_1.sort_values().pct_change()", "_____no_output_____" ], [ "df['rank_1'] = df['value_1'].rank( )\ndf", "_____no_output_____" ], [ "df.select_dtypes(exclude='int64')", "_____no_output_____" ], [ "df.replace({'A':'A_1','B':'B_1'})", "_____no_output_____" ], [ "def color_negative_values(val):\n color = 'red' if val < 0 else 'black'\n return 'color : %s' %color", "_____no_output_____" ], [ "df[['new_col']].style.applymap(color_negative_values)", "_____no_output_____" ], [ "df3 = pd.DataFrame({'A': np.random.randn(10), 'B': np.random.randn(10)})\ndf3", "_____no_output_____" ], [ "df3.style.applymap(color_negative_values)", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
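This pandas demo record touches `query`, `insert`, grouped `cumsum`, `sample`, `where`, `np.where`, `isin`, `loc`, `pct_change`, `rank`, `select_dtypes`, `replace`, and `style.applymap`. The pair easiest to confuse is `Series.where` versus `np.where`; a small sketch on synthetic values (not the record's data):

```python
# Sketch contrasting Series.where with np.where, two of the calls
# demonstrated in the notebook above. The data here is synthetic.
import numpy as np
import pandas as pd

s = pd.Series([-1.2, 0.5, -0.3, 2.0])

# Series.where keeps values where the condition is True and substitutes
# the second argument elsewhere; the index is preserved.
clipped_series = s.where(s > 0, 0)

# np.where is the element-wise ternary: condition, value-if-true, value-if-false.
clipped_array = np.where(s > 0, s, 0)

print(clipped_series.tolist())   # [0.0, 0.5, 0.0, 2.0]
print(clipped_array)             # [0.  0.5 0.  2. ]
```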
d0342b25ec2d57b662fde9755af5b9eb2352c890
9,653
ipynb
Jupyter Notebook
scrape-data/usict_result_parsing.ipynb
XploreX/cgpa-book-server
9115aad709b2f49cbdda7d40f22ebb966856e811
[ "Apache-2.0" ]
1
2020-12-02T11:34:38.000Z
2020-12-02T11:34:38.000Z
scrape-data/usict_result_parsing.ipynb
XploreX/cgpa-book-server
9115aad709b2f49cbdda7d40f22ebb966856e811
[ "Apache-2.0" ]
10
2021-04-27T12:17:04.000Z
2021-09-09T10:10:17.000Z
scrape-data/usict_result_parsing.ipynb
XploreX/cgpa-book-server
9115aad709b2f49cbdda7d40f22ebb966856e811
[ "Apache-2.0" ]
null
null
null
31.442997
109
0.545633
[ [ [ "import tabula\nimport numpy as np\nimport pandas as pd\nimport os \nfrom pathlib import Path\nimport PyPDF2\nimport re\nimport requests\nimport json\nimport time", "_____no_output_____" ], [ "# filenames = [\n# os.path.expanduser('/home/parth/Documents/USICT/it_res.pdf'),\n# os.path.expanduser('/home/parth/Documents/USICT/cse_res.pdf'),\n# os.path.expanduser('/home/parth/Documents/USICT/ece_res.pdf')]\n# filenames = [\n# os.path.expanduser('~/Documents/USICT/ipu_results/cse_even_sems.pdf'),\n# os.path.expanduser('~/Documents/USICT/ipu_results/ece_even_sems.pdf')\n# ]\n# filenames = [\n# os.path.expanduser('~/Documents/USICT/ipu_results/it_even_sems.pdf')\n# ]\nfilenames = [\n os.path.expanduser('/home/parth/Documents/USICT/it_res.pdf'),\n os.path.expanduser('/home/parth/Documents/USICT/cse_res.pdf'),\n os.path.expanduser('/home/parth/Documents/USICT/ece_res.pdf'),\n os.path.expanduser('~/Documents/USICT/ipu_results/cse_even_sems.pdf'),\n os.path.expanduser('~/Documents/USICT/ipu_results/ece_even_sems.pdf'),\n os.path.expanduser('~/Documents/USICT/ipu_results/it_even_sems.pdf') \n]", "_____no_output_____" ], [ "scheme_reg = re.compile(r'scheme\\s+of\\s+examinations',re.IGNORECASE)\ninstitution_reg = re.compile(r'institution\\s*:\\s*([\\w\\n(,)& ]+)\\nS\\.No',re.IGNORECASE)\nsem_reg = re.compile(r'se\\s?m[.//\\w\\n]+:\\s+([\\w\\n]+)',re.IGNORECASE)\nprogramme_reg = re.compile(r'programme\\s+name:\\s+([\\w(,)& \\n]+)SchemeID',re.IGNORECASE)\nbranch_reg = re.compile(r'[\\w &]+\\(([\\w ]+)\\)')", "_____no_output_____" ], [ "def get_info(text) :\n college = institution_reg.search(text)[1].replace('\\n','').strip().title()\n semester = int(sem_reg.search(text)[1].replace('\\n','').strip())\n course = programme_reg.search(text)[1].replace('\\n','').strip().title()\n branch = branch_reg.search(course)[1].strip().title()\n course = course[0:course.find('(')].strip()\n info = {\n 'college' : college,\n 'semester' : semester,\n 'course' : course,\n 'branch' : branch,\n }\n return info", "_____no_output_____" ], [ "SITE = \"https://api-rhapsody.herokuapp.com/academia\"\n# SITE = \"http://localhost:3000/academia\"", "_____no_output_____" ], [ "#Add college\ndata ={ \n 'college' : {\n 'college' : \"University School Of Information, Communication & Technology (Formerly Usit)\"\n }\n}\nr = requests.post(SITE+\"/college\",json=data)\nprint(r,r.content)\n", "<Response [200]> b'OK'\n" ], [ "def already_exists(info) :\n r = requests.get(SITE+\"/semester\",params=info)\n content = json.loads(r.content)\n# print(r.status_code,r.content)\n return r.status_code == 200 and content != {}\n ", "_____no_output_____" ], [ "def getSubjects(df) :\n subjects = []\n for index,row in df.iterrows() :\n subject = {}\n subject['subject'] = row['Subject'].strip().title()\n subject['subjectCode'] = row['Code']\n subject['credits'] = row['Credit']\n subjects.append(subject)\n return subjects", "_____no_output_____" ], [ "for filename in filenames :\n pdf = PyPDF2.PdfFileReader(filename)\n print(filename,pdf.getNumPages())\n \n for i in range(0,pdf.getNumPages()) :\n text = pdf.getPage(i).extractText()\n if scheme_reg.search(text) :\n info = get_info(text)\n df = tabula.read_pdf(filename,pages=i+1)\n subjects = getSubjects(df[0]) \n if already_exists(info) :\n print(\"information already exists\")\n continue\n info['semester'] = {'semester' : info['semester'], 'subjects' : subjects}\n r = requests.post(SITE+\"/semester\",json=info)\n print(r,r.content)\n# time.sleep(2)\n# print(info)\n\n ", 
"/home/parth/Documents/USICT/it_res.pdf 58\n<Response [200]> b'OK'\n<Response [200]> b'OK'\n<Response [200]> b'OK'\n<Response [200]> b'OK'\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\n/home/parth/Documents/USICT/cse_res.pdf 58\n<Response [200]> b'OK'\n<Response [200]> b'OK'\n<Response [200]> b'OK'\n<Response [200]> b'OK'\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\n/home/parth/Documents/USICT/ece_res.pdf 46\n<Response [200]> b'OK'\n<Response [200]> b'OK'\n<Response [200]> b'OK'\n<Response [200]> b'OK'\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\ninformation already exists\n/home/parth/Documents/USICT/ipu_results/cse_even_sems.pdf 31\n<Response [200]> b'OK'\ninformation already exists\ninformation already exists\ninformation already exists\n<Response [200]> b'OK'\ninformation already exists\ninformation already exists\ninformation already exists\n<Response [200]> b'OK'\n/home/parth/Documents/USICT/ipu_results/ece_even_sems.pdf 22\n<Response [200]> b'OK'\ninformation already exists\ninformation already exists\n<Response [200]> b'OK'\n<Response [200]> b'OK'\n/home/parth/Documents/USICT/ipu_results/it_even_sems.pdf 26\n<Response [200]> b'OK'\ninformation already exists\ninformation already exists\n<Response [200]> b'OK'\ninformation already exists\n<Response [200]> b'OK'\n" ], [ "from IPython.display import display", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
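The result-parsing notebook compiles case-insensitive regexes to pull the institution and semester out of text extracted from PDF pages, then posts the parsed scheme to an API. The sketch below isolates just the regex step: the two patterns are copied from the notebook, but the sample string is invented for illustration and is not taken from a real result PDF.

```python
# Sketch of the regex-based field extraction used in the notebook above,
# applied to a hard-coded snippet instead of real PDF text. The snippet's
# wording is invented; only the two patterns mirror the notebook.
import re

institution_reg = re.compile(r'institution\s*:\s*([\w\n(,)& ]+)\nS\.No', re.IGNORECASE)
sem_reg = re.compile(r'se\s?m[.//\w\n]+:\s+([\w\n]+)', re.IGNORECASE)

sample_text = (
    "Institution: University School Of Information, Communication & Technology\n"
    "S.No ...\n"
    "Sem./Year/EU: 03 ...\n"
)

college = institution_reg.search(sample_text)[1].replace("\n", "").strip().title()
semester = int(sem_reg.search(sample_text)[1].replace("\n", "").strip())
print(college, semester)   # -> University School Of Information, ...  3
```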
d034400e959423e84075d77892649299ab34bf0e
5,042
ipynb
Jupyter Notebook
micro_onde.ipynb
camara94/web-scraping-with-requests-beautifulsoup-and-selenium
fb5e1e711d8e5deb6a8f5013c9977e8617d3cc03
[ "MIT" ]
null
null
null
micro_onde.ipynb
camara94/web-scraping-with-requests-beautifulsoup-and-selenium
fb5e1e711d8e5deb6a8f5013c9977e8617d3cc03
[ "MIT" ]
null
null
null
micro_onde.ipynb
camara94/web-scraping-with-requests-beautifulsoup-and-selenium
fb5e1e711d8e5deb6a8f5013c9977e8617d3cc03
[ "MIT" ]
null
null
null
27.107527
108
0.52261
[ [ [ "import requests\nfrom random import randint\nfrom time import sleep\nfrom bs4 import BeautifulSoup\nimport pandas as pd", "_____no_output_____" ], [ "# Maintenant nous avons un résumé au dessus de la fonction \ndef get_url_micro_onde_tunisianet():\n url_micro_onde_details = []\n urls = [\n \"https://www.tunisianet.com.tn/564-four-electrique-tunisie-micro-onde\"\n ]\n \n for page in range(2,5):\n url = f\"https://www.tunisianet.com.tn/564-four-electrique-tunisie-micro-onde?page={page}\"\n response = requests.get(url)\n page_contents = response.text\n if response.status_code != 200:\n raise Exception('Failed to load page {}'.format(items_url))\n doc = BeautifulSoup(page_contents, \"html.parser\")\n for item in doc.find_all(\"a\", {'class': \"thumbnail product-thumbnail first-img\"}):\n url_micro_onde_details.append(item['href'])\n \n for page in urls:\n url = page\n response = requests.get(url)\n page_contents = response.text\n if response.status_code != 200:\n raise Exception('Failed to load page {}'.format(items_url))\n doc = BeautifulSoup(page_contents, \"html.parser\")\n for item in doc.find_all(\"a\", {'class': \"thumbnail product-thumbnail first-img\"}):\n url_micro_onde_details.append(item['href'])\n \n return url_micro_onde_details\nurl_micro_onde = get_url_micro_onde_tunisianet()\nlen(url_micro_onde)", "_____no_output_____" ], [ "def get_micro_onde(items_url):\n images_micro_ondes = []\n # télécharger la page\n response = requests.get(items_url)\n # vérifier le succès de réponse\n if response.status_code != 200:\n raise Exception('Failed to load page {}'.format(items_url))\n # Parser la réponse à l'aide de beaufifulSoup\n doc = BeautifulSoup(response.text, 'html.parser')\n for i, img in enumerate(doc.find_all('a', {'class': 'thumb-container'})):\n if i>= 1 and len(doc.find_all('a', {'class': 'thumb-container'})) > 1:\n images_micro_ondes.append(img['data-image'])\n return images_micro_ondes", "_____no_output_____" ], [ "images_micro_ondes = []\nfor url in url_micro_onde:\n for image in get_micro_onde(url):\n images_micro_ondes.append(image)", "_____no_output_____" ], [ "len(images_micro_ondes)", "_____no_output_____" ], [ "import random\nimport urllib.request\nimport os\n\ndef download_micro_ondes(urls, doc):\n \n os.makedirs(os.path.join('images', doc))\n for i, url in enumerate(urls):\n try:\n fullname = \"images/\" + doc + \"/\" + str((i+1))+\".jpg\"\n urllib.request.urlretrieve(url,fullname)\n except:\n pass", "_____no_output_____" ], [ "download_micro_ondes(images_micro_ondes, 'micro_onde')", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
d03449ecce91882057bf73e824cc90a31f1e9426
428,370
ipynb
Jupyter Notebook
nbs/12_cluster_analysis/pre_analysis/06_02-dbscan-pca.ipynb
greenelab/phenoplier
95f04b17f0b5227560fcf32ac0a85b2c5aa9001f
[ "BSD-2-Clause-Patent" ]
3
2021-08-17T21:59:19.000Z
2022-03-08T15:46:24.000Z
nbs/12_cluster_analysis/pre_analysis/06_02-dbscan-pca.ipynb
greenelab/phenoplier
95f04b17f0b5227560fcf32ac0a85b2c5aa9001f
[ "BSD-2-Clause-Patent" ]
4
2021-08-04T13:57:24.000Z
2021-10-11T14:57:15.000Z
nbs/12_cluster_analysis/pre_analysis/06_02-dbscan-pca.ipynb
greenelab/phenoplier
95f04b17f0b5227560fcf32ac0a85b2c5aa9001f
[ "BSD-2-Clause-Patent" ]
null
null
null
163.375286
84,920
0.879177
[ [ [ "# Description", "_____no_output_____" ], [ "This notebook runs some pre-analyses using DBSCAN to explore the best set of parameters (`min_samples` and `eps`) to cluster `pca` data version.", "_____no_output_____" ], [ "# Environment variables", "_____no_output_____" ] ], [ [ "from IPython.display import display\n\nimport conf\n\nN_JOBS = conf.GENERAL[\"N_JOBS\"]\ndisplay(N_JOBS)", "_____no_output_____" ], [ "%env MKL_NUM_THREADS=$N_JOBS\n%env OPEN_BLAS_NUM_THREADS=$N_JOBS\n%env NUMEXPR_NUM_THREADS=$N_JOBS\n%env OMP_NUM_THREADS=$N_JOBS", "env: MKL_NUM_THREADS=2\nenv: OPEN_BLAS_NUM_THREADS=2\nenv: NUMEXPR_NUM_THREADS=2\nenv: OMP_NUM_THREADS=2\n" ] ], [ [ "# Modules loading", "_____no_output_____" ] ], [ [ "%load_ext autoreload\n%autoreload 2", "_____no_output_____" ], [ "from pathlib import Path\n\nimport numpy as np\nimport pandas as pd\nfrom sklearn.neighbors import NearestNeighbors\nfrom sklearn.metrics import pairwise_distances\nfrom sklearn.cluster import DBSCAN\nfrom sklearn.metrics import (\n silhouette_score,\n calinski_harabasz_score,\n davies_bouldin_score,\n)\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom utils import generate_result_set_name\nfrom clustering.ensembles.utils import generate_ensemble", "_____no_output_____" ] ], [ [ "# Global settings", "_____no_output_____" ] ], [ [ "np.random.seed(0)", "_____no_output_____" ], [ "CLUSTERING_ATTRIBUTES_TO_SAVE = [\"n_clusters\"]", "_____no_output_____" ] ], [ [ "# Data version: pca", "_____no_output_____" ] ], [ [ "INPUT_SUBSET = \"pca\"", "_____no_output_____" ], [ "INPUT_STEM = \"z_score_std-projection-smultixcan-efo_partial-mashr-zscores\"", "_____no_output_____" ], [ "DR_OPTIONS = {\n \"n_components\": 50,\n \"svd_solver\": \"full\",\n \"random_state\": 0,\n}", "_____no_output_____" ], [ "input_filepath = Path(\n conf.RESULTS[\"DATA_TRANSFORMATIONS_DIR\"],\n INPUT_SUBSET,\n generate_result_set_name(\n DR_OPTIONS, prefix=f\"{INPUT_SUBSET}-{INPUT_STEM}-\", suffix=\".pkl\"\n ),\n).resolve()\ndisplay(input_filepath)\n\nassert input_filepath.exists(), \"Input file does not exist\"\n\ninput_filepath_stem = input_filepath.stem\ndisplay(input_filepath_stem)", "_____no_output_____" ], [ "data = pd.read_pickle(input_filepath)", "_____no_output_____" ], [ "data.shape", "_____no_output_____" ], [ "data.head()", "_____no_output_____" ] ], [ [ "## Tests different k values (k-NN)", "_____no_output_____" ] ], [ [ "# `k_values` is the full range of k for kNN, whereas `k_values_to_explore` is a\n# subset that will be explored in this notebook. 
If the analysis works, then\n# `k_values` and `eps_range_per_k` below are copied to the notebook that will\n# produce the final DBSCAN runs (`../002_[...]-dbscan-....ipynb`)\nk_values = np.arange(2, 125 + 1, 1)\nk_values_to_explore = (2, 5, 10, 15, 20, 30, 40, 50, 75, 100, 125)", "_____no_output_____" ], [ "results = {}\n\nfor k in k_values_to_explore:\n nbrs = NearestNeighbors(n_neighbors=k, n_jobs=N_JOBS).fit(data)\n distances, indices = nbrs.kneighbors(data)\n results[k] = (distances, indices)", "_____no_output_____" ], [ "eps_range_per_k = {\n k: (10, 20)\n if k < 5\n else (11, 25)\n if k < 10\n else (12, 30)\n if k < 15\n else (13, 35)\n if k < 20\n else (14, 40)\n for k in k_values\n}\n\neps_range_per_k_to_explore = {k: eps_range_per_k[k] for k in k_values_to_explore}", "_____no_output_____" ], [ "for k, (distances, indices) in results.items():\n d = distances[:, 1:].mean(axis=1)\n d = np.sort(d)\n\n fig, ax = plt.subplots()\n plt.plot(d)\n\n r = eps_range_per_k_to_explore[k]\n plt.hlines(r[0], 0, data.shape[0], color=\"red\")\n plt.hlines(r[1], 0, data.shape[0], color=\"red\")\n\n plt.xlim((3000, data.shape[0]))\n plt.title(f\"k={k}\")\n display(fig)\n\n plt.close(fig)", "_____no_output_____" ] ], [ [ "# Extended test", "_____no_output_____" ], [ "## Generate clusterers", "_____no_output_____" ] ], [ [ "CLUSTERING_OPTIONS = {}\n\n# K_RANGE is the min_samples parameter in DBSCAN (sklearn)\nCLUSTERING_OPTIONS[\"K_RANGE\"] = k_values_to_explore\nCLUSTERING_OPTIONS[\"EPS_RANGE_PER_K\"] = eps_range_per_k_to_explore\nCLUSTERING_OPTIONS[\"EPS_STEP\"] = 33\nCLUSTERING_OPTIONS[\"METRIC\"] = \"euclidean\"\n\ndisplay(CLUSTERING_OPTIONS)", "_____no_output_____" ], [ "CLUSTERERS = {}\n\nidx = 0\n\nfor k in CLUSTERING_OPTIONS[\"K_RANGE\"]:\n eps_range = CLUSTERING_OPTIONS[\"EPS_RANGE_PER_K\"][k]\n eps_values = np.linspace(eps_range[0], eps_range[1], CLUSTERING_OPTIONS[\"EPS_STEP\"])\n\n for eps in eps_values:\n clus = DBSCAN(min_samples=k, eps=eps, metric=\"precomputed\", n_jobs=N_JOBS)\n\n method_name = type(clus).__name__\n CLUSTERERS[f\"{method_name} #{idx}\"] = clus\n\n idx = idx + 1", "_____no_output_____" ], [ "display(len(CLUSTERERS))", "_____no_output_____" ], [ "_iter = iter(CLUSTERERS.items())\ndisplay(next(_iter))\ndisplay(next(_iter))", "_____no_output_____" ], [ "clustering_method_name = method_name\ndisplay(clustering_method_name)", "_____no_output_____" ] ], [ [ "## Generate ensemble", "_____no_output_____" ] ], [ [ "data_dist = pairwise_distances(data, metric=CLUSTERING_OPTIONS[\"METRIC\"])", "_____no_output_____" ], [ "data_dist.shape", "_____no_output_____" ], [ "pd.Series(data_dist.flatten()).describe().apply(str)", "_____no_output_____" ], [ "ensemble = generate_ensemble(\n data_dist,\n CLUSTERERS,\n attributes=CLUSTERING_ATTRIBUTES_TO_SAVE,\n)", "100%|██████████| 363/363 [01:09<00:00, 5.20it/s]\n" ], [ "ensemble.shape", "_____no_output_____" ], [ "ensemble.head()", "_____no_output_____" ], [ "_tmp = ensemble[\"n_clusters\"].value_counts()\ndisplay(_tmp)\nassert _tmp.index[0] == 3\nassert _tmp.loc[3] == 22", "_____no_output_____" ], [ "ensemble_stats = ensemble[\"n_clusters\"].describe()\ndisplay(ensemble_stats)", "_____no_output_____" ], [ "# number of noisy points\n_tmp = ensemble.copy()\n_tmp = _tmp.assign(n_noisy=ensemble[\"partition\"].apply(lambda x: np.isnan(x).sum()))", "_____no_output_____" ], [ "_tmp_stats = _tmp[\"n_noisy\"].describe()\ndisplay(_tmp_stats)\nassert _tmp_stats[\"min\"] > 5\nassert _tmp_stats[\"max\"] < 600\nassert 90 < _tmp_stats[\"mean\"] < 95", 
"_____no_output_____" ] ], [ [ "## Testing", "_____no_output_____" ] ], [ [ "assert ensemble_stats[\"min\"] > 1", "_____no_output_____" ], [ "assert not ensemble[\"n_clusters\"].isna().any()", "_____no_output_____" ], [ "# all partitions have the right size\nassert np.all(\n [part[\"partition\"].shape[0] == data.shape[0] for idx, part in ensemble.iterrows()]\n)", "_____no_output_____" ] ], [ [ "## Add clustering quality measures", "_____no_output_____" ] ], [ [ "def _remove_nans(data, part):\n not_nan_idx = ~np.isnan(part)\n return data.iloc[not_nan_idx], part[not_nan_idx]\n\n\ndef _apply_func(func, data, part):\n no_nan_data, no_nan_part = _remove_nans(data, part)\n return func(no_nan_data, no_nan_part)", "_____no_output_____" ], [ "ensemble = ensemble.assign(\n si_score=ensemble[\"partition\"].apply(\n lambda x: _apply_func(silhouette_score, data, x)\n ),\n ch_score=ensemble[\"partition\"].apply(\n lambda x: _apply_func(calinski_harabasz_score, data, x)\n ),\n db_score=ensemble[\"partition\"].apply(\n lambda x: _apply_func(davies_bouldin_score, data, x)\n ),\n)", "_____no_output_____" ], [ "ensemble.shape", "_____no_output_____" ], [ "ensemble.head()", "_____no_output_____" ] ], [ [ "# Cluster quality", "_____no_output_____" ] ], [ [ "with pd.option_context(\"display.max_rows\", None, \"display.max_columns\", None):\n _df = ensemble.groupby([\"n_clusters\"]).mean()\n display(_df)", "_____no_output_____" ], [ "with sns.plotting_context(\"talk\", font_scale=0.75), sns.axes_style(\n \"whitegrid\", {\"grid.linestyle\": \"--\"}\n):\n fig = plt.figure(figsize=(14, 6))\n ax = sns.pointplot(data=ensemble, x=\"n_clusters\", y=\"si_score\")\n ax.set_ylabel(\"Silhouette index\\n(higher is better)\")\n ax.set_xlabel(\"Number of clusters ($k$)\")\n ax.set_xticklabels(ax.get_xticklabels(), rotation=45)\n plt.grid(True)\n plt.tight_layout()", "_____no_output_____" ], [ "with sns.plotting_context(\"talk\", font_scale=0.75), sns.axes_style(\n \"whitegrid\", {\"grid.linestyle\": \"--\"}\n):\n fig = plt.figure(figsize=(14, 6))\n ax = sns.pointplot(data=ensemble, x=\"n_clusters\", y=\"ch_score\")\n ax.set_ylabel(\"Calinski-Harabasz index\\n(higher is better)\")\n ax.set_xlabel(\"Number of clusters ($k$)\")\n ax.set_xticklabels(ax.get_xticklabels(), rotation=45)\n plt.grid(True)\n plt.tight_layout()", "_____no_output_____" ], [ "with sns.plotting_context(\"talk\", font_scale=0.75), sns.axes_style(\n \"whitegrid\", {\"grid.linestyle\": \"--\"}\n):\n fig = plt.figure(figsize=(14, 6))\n ax = sns.pointplot(data=ensemble, x=\"n_clusters\", y=\"db_score\")\n ax.set_ylabel(\"Davies-Bouldin index\\n(lower is better)\")\n ax.set_xlabel(\"Number of clusters ($k$)\")\n ax.set_xticklabels(ax.get_xticklabels(), rotation=45)\n plt.grid(True)\n plt.tight_layout()", "_____no_output_____" ] ], [ [ "# Conclusions", "_____no_output_____" ], [ "The values explored above for `k_values` and `eps_range_per_k` are the one that will be used for DBSCAN in this data version.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]
d0344cffccc7b5ffe757cf87e6c3c785a19e3033
25,715
ipynb
Jupyter Notebook
Inventarios_1.ipynb
jorgeiv500/Logistica
bd42a2e4d8daa6c977bade66d87cb0e29bbea868
[ "MIT" ]
null
null
null
Inventarios_1.ipynb
jorgeiv500/Logistica
bd42a2e4d8daa6c977bade66d87cb0e29bbea868
[ "MIT" ]
null
null
null
Inventarios_1.ipynb
jorgeiv500/Logistica
bd42a2e4d8daa6c977bade66d87cb0e29bbea868
[ "MIT" ]
null
null
null
39.319572
664
0.599689
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
d034555e55863403118f332f8c02917dbb54084f
2,125
ipynb
Jupyter Notebook
20200907/GenericStartNotebook.ipynb
SNStatComp/CBSAcademyBD
ae82e9f79ec4bd58f5446a40154ad1fe3c25b602
[ "CC-BY-4.0" ]
null
null
null
20200907/GenericStartNotebook.ipynb
SNStatComp/CBSAcademyBD
ae82e9f79ec4bd58f5446a40154ad1fe3c25b602
[ "CC-BY-4.0" ]
null
null
null
20200907/GenericStartNotebook.ipynb
SNStatComp/CBSAcademyBD
ae82e9f79ec4bd58f5446a40154ad1fe3c25b602
[ "CC-BY-4.0" ]
null
null
null
25.297619
150
0.552471
[ [ [ "# Generic startnotebook, course on webscraping\n\n*By Olav ten Bosch, Dick Windmeijer and Marijn Detiger*", "_____no_output_____" ], [ "#### Documentation: [Requests.py](http://docs.python-requests.org) [Beautifulsoup.py](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)\n\n", "_____no_output_____" ] ], [ [ "# Imports:\nimport requests\nfrom bs4 import BeautifulSoup\nimport time # for sleeping between multiple requests\n\n#Issue a request:\n#r1 = requests.get('http://testing-ground.scraping.pro')\n#print(r1.status_code, r1.headers['content-type'], r1.encoding, r1.text)\n\n#Issue a request with dedicated user-agent string:\n#headers = {'user-agent': 'myOwnScraper'}\n#r = requests.get('http://testing-ground.scraping.pro', headers=headers)\n\n# Request with parameters:\n#pars = {'products': 2, 'years': 2}\n#r2 = requests.get('http://testing-ground.scraping.pro/table?', params=pars) \n#print(r2.url)\n\n# Soup:\n#soup = BeautifulSoup(r2.text, 'lxml')\n#print(soup3.title.text)\n#soup.find_all(\"div\", class_=\"product\")\n\n\n# One second idle time between requests:\n#time.sleep(1)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ] ]
d0345cd67707a9f29ad9cd0c64a3c56f5a976792
10,212
ipynb
Jupyter Notebook
examples/notebooks/multi-investment-optimisation.ipynb
p-glaum/PyPSA
a8cfdf1acd9b348828474ad0899afe2c77818159
[ "MIT" ]
null
null
null
examples/notebooks/multi-investment-optimisation.ipynb
p-glaum/PyPSA
a8cfdf1acd9b348828474ad0899afe2c77818159
[ "MIT" ]
null
null
null
examples/notebooks/multi-investment-optimisation.ipynb
p-glaum/PyPSA
a8cfdf1acd9b348828474ad0899afe2c77818159
[ "MIT" ]
null
null
null
27.304813
377
0.528006
[ [ [ "# Multi Investment Optimization\n\nIn the following, we show how PyPSA can deal with multi-investment optimization, also known as multi-horizon optimization. \n\nHere, the total set of snapshots is divided into investment periods. For the model, this translates into multi-indexed snapshots with the first level being the investment period and the second level the according time steps. In each investment period new asset may be added to the system. On the other hand assets may only operate as long as allowed by their lifetime.\n\nIn contrast to the ordinary optimisation, the following concepts have to be taken into account. \n\n1. `investment_periods` - `pypsa.Network` attribute. This is the set of periods which specify when new assets may be built. In the current implementation, these have to be the same as the first level values in the `snapshots` attribute.\n2. `investment_period_weightings` - `pypsa.Network` attribute. These specify the weighting of each period in the objective function. \n3. `build_year` - general component attribute. A single asset may only be built when the build year is smaller or equal to the current investment period. For example, assets with a build year `2029` are considered in the investment period `2030`, but not in the period `2025`. \n4. `lifetime` - general component attribute. An asset is only considered in an investment period if present at the beginning of an investment period. For example, an asset with build year `2029` and lifetime `30` is considered in the investment period `2055` but not in the period `2060`. \n\nIn the following, we set up a three node network with generators, lines and storages and run a optimisation covering the time span from 2020 to 2050 and each decade is one investment period.", "_____no_output_____" ] ], [ [ "import pypsa\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "We set up the network with investment periods and snapshots. ", "_____no_output_____" ] ], [ [ "n = pypsa.Network()\nyears = [2020, 2030, 2040, 2050]\nfreq = \"24\"\n\nsnapshots = pd.DatetimeIndex([])\nfor year in years:\n period = pd.date_range(\n start=\"{}-01-01 00:00\".format(year),\n freq=\"{}H\".format(freq),\n periods=8760 / float(freq),\n )\n snapshots = snapshots.append(period)\n\n# convert to multiindex and assign to network\nn.snapshots = pd.MultiIndex.from_arrays([snapshots.year, snapshots])\nn.investment_periods = years\n\nn.snapshot_weightings", "_____no_output_____" ], [ "n.investment_periods", "_____no_output_____" ] ], [ [ "Set the years and objective weighting per investment period. For the objective weighting, we consider a discount rate defined by \n$$ D(t) = \\dfrac{1}{(1+r)^t} $$ \n\nwhere $r$ is the discount rate. 
For each period we sum up all discounts rates of the corresponding years which gives us the effective objective weighting.", "_____no_output_____" ] ], [ [ "n.investment_period_weightings[\"years\"] = list(np.diff(years)) + [10]\n\nr = 0.01\nT = 0\nfor period, nyears in n.investment_period_weightings.years.items():\n discounts = [(1 / (1 + r) ** t) for t in range(T, T + nyears)]\n n.investment_period_weightings.at[period, \"objective\"] = sum(discounts)\n T += nyears\nn.investment_period_weightings", "_____no_output_____" ] ], [ [ "Add the components", "_____no_output_____" ] ], [ [ "for i in range(3):\n n.add(\"Bus\", \"bus {}\".format(i))\n\n# add three lines in a ring\nn.add(\n \"Line\",\n \"line 0->1\",\n bus0=\"bus 0\",\n bus1=\"bus 1\",\n)\n\nn.add(\n \"Line\",\n \"line 1->2\",\n bus0=\"bus 1\",\n bus1=\"bus 2\",\n capital_cost=10,\n build_year=2030,\n)\n\nn.add(\n \"Line\",\n \"line 2->0\",\n bus0=\"bus 2\",\n bus1=\"bus 0\",\n)\n\nn.lines[\"x\"] = 0.0001\nn.lines[\"s_nom_extendable\"] = True", "_____no_output_____" ], [ "n.lines", "_____no_output_____" ], [ "# add some generators\np_nom_max = pd.Series(\n (np.random.uniform() for sn in range(len(n.snapshots))),\n index=n.snapshots,\n name=\"generator ext 2020\",\n)\n\n# renewable (can operate 2020, 2030)\nn.add(\n \"Generator\",\n \"generator ext 0 2020\",\n bus=\"bus 0\",\n p_nom=50,\n build_year=2020,\n lifetime=20,\n marginal_cost=2,\n capital_cost=1,\n p_max_pu=p_nom_max,\n carrier=\"solar\",\n p_nom_extendable=True,\n)\n\n# can operate 2040, 2050\nn.add(\n \"Generator\",\n \"generator ext 0 2040\",\n bus=\"bus 0\",\n p_nom=50,\n build_year=2040,\n lifetime=11,\n marginal_cost=25,\n capital_cost=10,\n carrier=\"OCGT\",\n p_nom_extendable=True,\n)\n\n# can operate in 2040\nn.add(\n \"Generator\",\n \"generator fix 1 2040\",\n bus=\"bus 1\",\n p_nom=50,\n build_year=2040,\n lifetime=10,\n carrier=\"CCGT\",\n marginal_cost=20,\n capital_cost=1,\n)\n\nn.generators", "_____no_output_____" ], [ "n.add(\n \"StorageUnit\",\n \"storageunit non-cyclic 2030\",\n bus=\"bus 2\",\n p_nom=0,\n capital_cost=2,\n build_year=2030,\n lifetime=21,\n cyclic_state_of_charge=False,\n p_nom_extendable=False,\n)\n\nn.add(\n \"StorageUnit\",\n \"storageunit periodic 2020\",\n bus=\"bus 2\",\n p_nom=0,\n capital_cost=1,\n build_year=2020,\n lifetime=21,\n cyclic_state_of_charge=True,\n cyclic_state_of_charge_per_period=True,\n p_nom_extendable=True,\n)\n\nn.storage_units", "_____no_output_____" ] ], [ [ "Add the load", "_____no_output_____" ] ], [ [ "load_var = pd.Series(\n 100 * np.random.rand(len(n.snapshots)), index=n.snapshots, name=\"load\"\n)\nn.add(\"Load\", \"load 2\", bus=\"bus 2\", p_set=load_var)\n\nload_fix = pd.Series(75, index=n.snapshots, name=\"load\")\nn.add(\"Load\", \"load 1\", bus=\"bus 1\", p_set=load_fix)", "_____no_output_____" ] ], [ [ "Run the optimization", "_____no_output_____" ] ], [ [ "n.loads_t.p_set", "_____no_output_____" ], [ "n.lopf(pyomo=False, multi_investment_periods=True)", "_____no_output_____" ], [ "c = \"Generator\"\ndf = pd.concat(\n {\n period: n.get_active_assets(c, period) * n.df(c).p_nom_opt\n for period in n.investment_periods\n },\n axis=1,\n)\ndf.T.plot.bar(\n stacked=True,\n edgecolor=\"white\",\n width=1,\n ylabel=\"Capacity\",\n xlabel=\"Investment Period\",\n rot=0,\n figsize=(10, 5),\n)\nplt.tight_layout()", "_____no_output_____" ], [ "df = n.generators_t.p.sum(level=0).T\ndf.T.plot.bar(\n stacked=True,\n edgecolor=\"white\",\n width=1,\n ylabel=\"Generation\",\n xlabel=\"Investment Period\",\n 
rot=0,\n figsize=(10, 5),\n)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
d0347fb52e849e5405e64d38a7a73db69f8d2eb0
3,050
ipynb
Jupyter Notebook
examples/Section5.ipynb
pogudingleb/ExperimentsBound
16d0f827e5e6711df680f4acb1d3b6f37e2236b5
[ "MIT" ]
1
2020-11-28T00:22:59.000Z
2020-11-28T00:22:59.000Z
examples/Section5.ipynb
pogudingleb/ExperimentsBound
16d0f827e5e6711df680f4acb1d3b6f37e2236b5
[ "MIT" ]
null
null
null
examples/Section5.ipynb
pogudingleb/ExperimentsBound
16d0f827e5e6711df680f4acb1d3b6f37e2236b5
[ "MIT" ]
null
null
null
22.761194
156
0.435738
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
d03487fef6fa105c4cbc6573fb4c953d3d6c4e74
4,843
ipynb
Jupyter Notebook
deep learning and computer vision/kaggle-learn-tensorflow-exercise.ipynb
DrDarbin/Kaggle-learn-tasks-solutions
acc0a1d7af8cd849690645fe995901fedd2c212a
[ "MIT" ]
2
2019-03-06T14:56:01.000Z
2019-03-15T06:40:51.000Z
deep learning and computer vision/kaggle-learn-tensorflow-exercise.ipynb
DrDarbin/kaggle-ml-practice
acc0a1d7af8cd849690645fe995901fedd2c212a
[ "MIT" ]
null
null
null
deep learning and computer vision/kaggle-learn-tensorflow-exercise.ipynb
DrDarbin/kaggle-ml-practice
acc0a1d7af8cd849690645fe995901fedd2c212a
[ "MIT" ]
null
null
null
40.697479
519
0.61656
[ [ [ "# Intro\n\n**This is Lesson 3 in the [Deep Learning](https://www.kaggle.com/education/machine-learning) track** \n\nAt the end of this lesson, you will be able to write TensorFlow and Keras code to use one of the best models in computer vision.\n\n# Lesson\n", "_____no_output_____" ] ], [ [ "from IPython.display import YouTubeVideo\nYouTubeVideo('sDG5tPtsbSA', width=800, height=450)", "_____no_output_____" ] ], [ [ "# Sample Code\n\n### Choose Images to Work With", "_____no_output_____" ] ], [ [ "from os.path import join\n\nimage_dir = '../input/dog-breed-identification/train/'\nimg_paths = [join(image_dir, filename) for filename in \n ['0246f44bb123ce3f91c939861eb97fb7.jpg',\n '84728e78632c0910a69d33f82e62638c.jpg',\n '8825e914555803f4c67b26593c9d5aff.jpg',\n '91a5e8db15bccfb6cfa2df5e8b95ec03.jpg']]", "_____no_output_____" ] ], [ [ "### Function to Read and Prep Images for Modeling", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom tensorflow.python.keras.applications.resnet50 import preprocess_input\nfrom tensorflow.python.keras.preprocessing.image import load_img, img_to_array\n\nimage_size = 224\n\ndef read_and_prep_images(img_paths, img_height=image_size, img_width=image_size):\n imgs = [load_img(img_path, target_size=(img_height, img_width)) for img_path in img_paths]\n img_array = np.array([img_to_array(img) for img in imgs])\n output = preprocess_input(img_array)\n return(output)", "_____no_output_____" ] ], [ [ "### Create Model with Pre-Trained Weights File. Make Predictions", "_____no_output_____" ] ], [ [ "from tensorflow.python.keras.applications import ResNet50\n\nmy_model = ResNet50(weights='../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels.h5')\ntest_data = read_and_prep_images(img_paths)\npreds = my_model.predict(test_data)", "_____no_output_____" ] ], [ [ "### Visualize Predictions", "_____no_output_____" ] ], [ [ "from learntools.deep_learning.decode_predictions import decode_predictions\nfrom IPython.display import Image, display\n\nmost_likely_labels = decode_predictions(preds, top=3, class_list_path='../input/resnet50/imagenet_class_index.json')\n\nfor i, img_path in enumerate(img_paths):\n display(Image(img_path))\n print(most_likely_labels[i])", "_____no_output_____" ] ], [ [ "# Exercise\nNow you are ready to **[use a powerful TensorFlow model](https://www.kaggle.com/kernels/fork/521452)** yourself.\n\n---\n**[Deep Learning Course Home Page](https://www.kaggle.com/learn/deep-learning)**\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d0348d957fe2e13e3dabb5ec9eef1f2131e30f35
91,330
ipynb
Jupyter Notebook
src/maintenance/3.5_relations_classification_esim.ipynb
tchewik/isanlp_rst
459864b3daeeb702acf5e65543181068439ce12c
[ "MIT" ]
6
2020-05-09T01:13:10.000Z
2021-02-05T01:02:40.000Z
src/maintenance/3.5_relations_classification_esim.ipynb
tchewik/isanlp_rst
459864b3daeeb702acf5e65543181068439ce12c
[ "MIT" ]
2
2019-09-26T11:32:46.000Z
2020-07-24T13:44:46.000Z
src/maintenance/3.5_relations_classification_esim.ipynb
tchewik/isanlp_rst
459864b3daeeb702acf5e65543181068439ce12c
[ "MIT" ]
3
2019-09-26T13:39:26.000Z
2021-04-12T14:34:50.000Z
36.386454
1,844
0.515121
[ [ [ "## Rhetorical relations classification used in tree building: ESIM\n\nPrepare data and model-related scripts.\n\nEvaluate models.\n\nMake and evaluate ansembles for ESIM and BiMPM model / ESIM and feature-based model.\n\nOutput:\n - ``models/relation_predictor_esim/*``", "_____no_output_____" ] ], [ [ "%load_ext autoreload\n%autoreload 2", "_____no_output_____" ], [ "import os\nimport glob\nimport pandas as pd\nimport numpy as np\nimport pickle\nfrom utils.file_reading import read_edus, read_gold, read_negative, read_annotation", "_____no_output_____" ] ], [ [ "### Make a directory", "_____no_output_____" ] ], [ [ "MODEL_PATH = 'models/label_predictor_esim'\n! mkdir $MODEL_PATH\n\nTRAIN_FILE_PATH = os.path.join(MODEL_PATH, 'nlabel_cf_train.tsv')\nDEV_FILE_PATH = os.path.join(MODEL_PATH, 'nlabel_cf_dev.tsv')\nTEST_FILE_PATH = os.path.join(MODEL_PATH, 'nlabel_cf_test.tsv')", "mkdir: cannot create directory ‘models/label_predictor_esim’: File exists\r\n" ] ], [ [ "### Prepare train/test sets ", "_____no_output_____" ] ], [ [ "IN_PATH = 'data_labeling'\n\ntrain_samples = pd.read_pickle(os.path.join(IN_PATH, 'train_samples.pkl'))\ndev_samples = pd.read_pickle(os.path.join(IN_PATH, 'dev_samples.pkl'))\ntest_samples = pd.read_pickle(os.path.join(IN_PATH, 'test_samples.pkl'))", "_____no_output_____" ], [ "counts = train_samples['relation'].value_counts(normalize=False).values\nNUMBER_CLASSES = len(counts)\nprint(\"number of classes:\", NUMBER_CLASSES)\nprint(\"class weights:\")\nnp.round(counts.min() / counts, decimals=6)", "number of classes: 22\nclass weights:\n" ], [ "counts = train_samples['relation'].value_counts()", "_____no_output_____" ], [ "counts", "_____no_output_____" ], [ "import razdel\n\ndef tokenize(text):\n result = ' '.join([tok.text for tok in razdel.tokenize(text)])\n return result\n \ntrain_samples['snippet_x'] = train_samples.snippet_x.map(tokenize)\ntrain_samples['snippet_y'] = train_samples.snippet_y.map(tokenize)\n\ndev_samples['snippet_x'] = dev_samples.snippet_x.map(tokenize)\ndev_samples['snippet_y'] = dev_samples.snippet_y.map(tokenize)\n\ntest_samples['snippet_x'] = test_samples.snippet_x.map(tokenize)\ntest_samples['snippet_y'] = test_samples.snippet_y.map(tokenize)", "_____no_output_____" ], [ "train_samples = train_samples.reset_index()\ntrain_samples[['relation', 'snippet_x', 'snippet_y', 'index']].to_csv(TRAIN_FILE_PATH, sep='\\t', header=False, index=False)\n\ndev_samples = dev_samples.reset_index()\ndev_samples[['relation', 'snippet_x', 'snippet_y', 'index']].to_csv(DEV_FILE_PATH, sep='\\t', header=False, index=False)\n\ntest_samples = test_samples.reset_index()\ntest_samples[['relation', 'snippet_x', 'snippet_y', 'index']].to_csv(TEST_FILE_PATH, sep='\\t', header=False, index=False)", "_____no_output_____" ] ], [ [ "### Modify model", "_____no_output_____" ], [ "(Add F1, concatenated encoding)", "_____no_output_____" ] ], [ [ "%%writefile models/bimpm_custom_package/model/esim.py\n\nfrom typing import Dict, List, Any, Optional\n\nimport numpy\nimport torch\n\nfrom allennlp.common.checks import check_dimensions_match\nfrom allennlp.data import TextFieldTensors, Vocabulary\nfrom allennlp.models.model import Model\nfrom allennlp.modules import FeedForward, InputVariationalDropout\nfrom allennlp.modules.matrix_attention.matrix_attention import MatrixAttention\nfrom allennlp.modules import Seq2SeqEncoder, TextFieldEmbedder\nfrom allennlp.nn import InitializerApplicator\nfrom allennlp.nn.util import (\n get_text_field_mask,\n masked_softmax,\n weighted_sum,\n 
masked_max,\n)\nfrom allennlp.training.metrics import CategoricalAccuracy, F1Measure\n\n\[email protected](\"custom_esim\")\nclass CustomESIM(Model):\n \"\"\"\n This `Model` implements the ESIM sequence model described in [Enhanced LSTM for Natural Language Inference]\n (https://api.semanticscholar.org/CorpusID:34032948) by Chen et al., 2017.\n Registered as a `Model` with name \"esim\".\n # Parameters\n vocab : `Vocabulary`\n text_field_embedder : `TextFieldEmbedder`\n Used to embed the `premise` and `hypothesis` `TextFields` we get as input to the\n model.\n encoder : `Seq2SeqEncoder`\n Used to encode the premise and hypothesis.\n matrix_attention : `MatrixAttention`\n This is the attention function used when computing the similarity matrix between encoded\n words in the premise and words in the hypothesis.\n projection_feedforward : `FeedForward`\n The feedforward network used to project down the encoded and enhanced premise and hypothesis.\n inference_encoder : `Seq2SeqEncoder`\n Used to encode the projected premise and hypothesis for prediction.\n output_feedforward : `FeedForward`\n Used to prepare the concatenated premise and hypothesis for prediction.\n output_logit : `FeedForward`\n This feedforward network computes the output logits.\n dropout : `float`, optional (default=`0.5`)\n Dropout percentage to use.\n initializer : `InitializerApplicator`, optional (default=`InitializerApplicator()`)\n Used to initialize the model parameters.\n \"\"\"\n\n def __init__(\n self,\n vocab: Vocabulary,\n text_field_embedder: TextFieldEmbedder,\n encoder: Seq2SeqEncoder,\n matrix_attention: MatrixAttention,\n projection_feedforward: FeedForward,\n inference_encoder: Seq2SeqEncoder,\n output_feedforward: FeedForward,\n output_logit: FeedForward,\n encode_together: bool = False,\n dropout: float = 0.5,\n class_weights: list = [],\n initializer: InitializerApplicator = InitializerApplicator(),\n **kwargs,\n ) -> None:\n super().__init__(vocab, **kwargs)\n\n self._text_field_embedder = text_field_embedder\n self._encoder = encoder\n\n self._matrix_attention = matrix_attention\n self._projection_feedforward = projection_feedforward\n\n self._inference_encoder = inference_encoder\n\n if dropout:\n self.dropout = torch.nn.Dropout(dropout)\n self.rnn_input_dropout = InputVariationalDropout(dropout)\n else:\n self.dropout = None\n self.rnn_input_dropout = None\n\n self._output_feedforward = output_feedforward\n self._output_logit = output_logit\n self.encode_together = encode_together\n\n self._num_labels = vocab.get_vocab_size(namespace=\"labels\")\n\n check_dimensions_match(\n text_field_embedder.get_output_dim(),\n encoder.get_input_dim(),\n \"text field embedding dim\",\n \"encoder input dim\",\n )\n check_dimensions_match(\n encoder.get_output_dim() * 4,\n projection_feedforward.get_input_dim(),\n \"encoder output dim\",\n \"projection feedforward input\",\n )\n check_dimensions_match(\n projection_feedforward.get_output_dim(),\n inference_encoder.get_input_dim(),\n \"proj feedforward output dim\",\n \"inference lstm input dim\",\n )\n\n self.metrics = {\"accuracy\": CategoricalAccuracy()}\n \n if class_weights:\n self.class_weights = class_weights\n else:\n self.class_weights = [1.] 
* self.classifier_feedforward.get_output_dim()\n \n for _class in range(len(self.class_weights)):\n self.metrics.update({\n f\"f1_rel{_class}\": F1Measure(_class),\n })\n \n self._loss = torch.nn.CrossEntropyLoss(weight=torch.FloatTensor(self.class_weights))\n\n initializer(self)\n\n def forward( # type: ignore\n self,\n premise: TextFieldTensors,\n hypothesis: TextFieldTensors,\n label: torch.IntTensor = None,\n metadata: List[Dict[str, Any]] = None,\n ) -> Dict[str, torch.Tensor]:\n\n \"\"\"\n # Parameters\n premise : `TextFieldTensors`\n From a `TextField`\n hypothesis : `TextFieldTensors`\n From a `TextField`\n label : `torch.IntTensor`, optional (default = `None`)\n From a `LabelField`\n metadata : `List[Dict[str, Any]]`, optional (default = `None`)\n Metadata containing the original tokenization of the premise and\n hypothesis with 'premise_tokens' and 'hypothesis_tokens' keys respectively.\n # Returns\n An output dictionary consisting of:\n label_logits : `torch.FloatTensor`\n A tensor of shape `(batch_size, num_labels)` representing unnormalised log\n probabilities of the entailment label.\n label_probs : `torch.FloatTensor`\n A tensor of shape `(batch_size, num_labels)` representing probabilities of the\n entailment label.\n loss : `torch.FloatTensor`, optional\n A scalar loss to be optimised.\n \"\"\"\n embedded_premise = self._text_field_embedder(premise)\n embedded_hypothesis = self._text_field_embedder(hypothesis)\n premise_mask = get_text_field_mask(premise)\n hypothesis_mask = get_text_field_mask(hypothesis)\n\n # apply dropout for LSTM\n if self.rnn_input_dropout:\n embedded_premise = self.rnn_input_dropout(embedded_premise)\n embedded_hypothesis = self.rnn_input_dropout(embedded_hypothesis)\n\n # encode premise and hypothesis\n encoded_premise = self._encoder(embedded_premise, premise_mask)\n encoded_hypothesis = self._encoder(embedded_hypothesis, hypothesis_mask)\n\n # Shape: (batch_size, premise_length, hypothesis_length)\n similarity_matrix = self._matrix_attention(encoded_premise, encoded_hypothesis)\n\n # Shape: (batch_size, premise_length, hypothesis_length)\n p2h_attention = masked_softmax(similarity_matrix, hypothesis_mask)\n # Shape: (batch_size, premise_length, embedding_dim)\n attended_hypothesis = weighted_sum(encoded_hypothesis, p2h_attention)\n\n # Shape: (batch_size, hypothesis_length, premise_length)\n h2p_attention = masked_softmax(similarity_matrix.transpose(1, 2).contiguous(), premise_mask)\n # Shape: (batch_size, hypothesis_length, embedding_dim)\n attended_premise = weighted_sum(encoded_premise, h2p_attention)\n\n # the \"enhancement\" layer\n premise_enhanced = torch.cat(\n [\n encoded_premise,\n attended_hypothesis,\n encoded_premise - attended_hypothesis,\n encoded_premise * attended_hypothesis,\n ],\n dim=-1,\n )\n hypothesis_enhanced = torch.cat(\n [\n encoded_hypothesis,\n attended_premise,\n encoded_hypothesis - attended_premise,\n encoded_hypothesis * attended_premise,\n ],\n dim=-1,\n )\n\n # The projection layer down to the model dimension. 
Dropout is not applied before\n # projection.\n projected_enhanced_premise = self._projection_feedforward(premise_enhanced)\n projected_enhanced_hypothesis = self._projection_feedforward(hypothesis_enhanced)\n\n # Run the inference layer\n if self.rnn_input_dropout:\n projected_enhanced_premise = self.rnn_input_dropout(projected_enhanced_premise)\n projected_enhanced_hypothesis = self.rnn_input_dropout(projected_enhanced_hypothesis)\n v_ai = self._inference_encoder(projected_enhanced_premise, premise_mask)\n v_bi = self._inference_encoder(projected_enhanced_hypothesis, hypothesis_mask)\n\n # The pooling layer -- max and avg pooling.\n # (batch_size, model_dim)\n v_a_max = masked_max(v_ai, premise_mask.unsqueeze(-1), dim=1)\n v_b_max = masked_max(v_bi, hypothesis_mask.unsqueeze(-1), dim=1)\n\n v_a_avg = torch.sum(v_ai * premise_mask.unsqueeze(-1), dim=1) / torch.sum(\n premise_mask, 1, keepdim=True\n )\n v_b_avg = torch.sum(v_bi * hypothesis_mask.unsqueeze(-1), dim=1) / torch.sum(\n hypothesis_mask, 1, keepdim=True\n )\n\n # Now concat\n # (batch_size, model_dim * 2 * 4)\n v_all = torch.cat([v_a_avg, v_a_max, v_b_avg, v_b_max], dim=1)\n\n # the final MLP -- apply dropout to input, and MLP applies to output & hidden\n if self.dropout:\n v_all = self.dropout(v_all)\n\n output_hidden = self._output_feedforward(v_all)\n label_logits = self._output_logit(output_hidden)\n label_probs = torch.nn.functional.softmax(label_logits, dim=-1)\n\n output_dict = {\"label_logits\": label_logits, \"label_probs\": label_probs}\n\n if label is not None:\n loss = self._loss(label_logits, label.long().view(-1))\n output_dict[\"loss\"] = loss\n \n for metric in self.metrics.values():\n metric(label_logits, label.long().view(-1))\n\n return output_dict\n\n def get_metrics(self, reset: bool = False) -> Dict[str, float]:\n metrics = {\"accuracy\": self.metrics[\"accuracy\"].get_metric(reset=reset)}\n \n for _class in range(len(self.class_weights)):\n metrics.update({\n f\"f1_rel{_class}\": self.metrics[f\"f1_rel{_class}\"].get_metric(reset=reset)['f1'],\n })\n \n metrics[\"f1_macro\"] = numpy.mean([metrics[f\"f1_rel{_class}\"] for _class in range(len(self.class_weights))])\n return metrics\n\n default_predictor = \"textual_entailment\"", "Overwriting models/bimpm_custom_package/model/esim.py\n" ], [ "! cp models/bimpm_custom_package/model/esim.py ../../../maintenance_rst/models/customization_package/model/esim.py", "_____no_output_____" ] ], [ [ "### 2. 
Generate config files", "_____no_output_____" ], [ "#### ELMo ", "_____no_output_____" ] ], [ [ "%%writefile $MODEL_PATH/config_elmo.json\n\nlocal NUM_EPOCHS = 200;\nlocal LR = 1e-3;\nlocal LSTM_ENCODER_HIDDEN = 25;\n\n{\n \"dataset_reader\": {\n \"type\": \"quora_paraphrase\",\n \"tokenizer\": {\n \"type\": \"just_spaces\"\n },\n \"token_indexers\": {\n \"token_characters\": {\n \"type\": \"characters\",\n \"min_padding_length\": 30,\n },\n \"elmo\": {\n \"type\": \"elmo_characters\"\n }\n }\n },\n \"train_data_path\": \"label_predictor_esim/nlabel_cf_train.tsv\",\n \"validation_data_path\": \"label_predictor_esim/nlabel_cf_dev.tsv\",\n \"test_data_path\": \"label_predictor_esim/nlabel_cf_test.tsv\",\n \"model\": {\n \"type\": \"custom_esim\",\n \"dropout\": 0.5,\n \"class_weights\": [\n 0.027483, 0.032003, 0.080478, 0.102642, 0.121394, 0.135027,\n 0.136856, 0.170897, 0.172355, 0.181655, 0.193858, 0.211297,\n 0.231651, 0.260982, 0.334437, 0.378277, 0.392996, 0.567416,\n 0.782946, 0.855932, 0.971154, 1.0],\n \"encode_together\": false,\n \"text_field_embedder\": {\n \"token_embedders\": {\n \"elmo\": {\n \"type\": \"elmo_token_embedder\",\n \"options_file\": \"rsv_elmo/options.json\",\n \"weight_file\": \"rsv_elmo/model.hdf5\",\n \"do_layer_norm\": false,\n \"dropout\": 0.1\n },\n \"token_characters\": {\n \"type\": \"character_encoding\",\n \"dropout\": 0.1,\n \"embedding\": {\n \"embedding_dim\": 20,\n \"padding_index\": 0,\n \"vocab_namespace\": \"token_characters\"\n },\n \"encoder\": {\n \"type\": \"lstm\",\n \"input_size\": $.model.text_field_embedder.token_embedders.token_characters.embedding.embedding_dim,\n \"hidden_size\": LSTM_ENCODER_HIDDEN,\n \"num_layers\": 1,\n \"bidirectional\": true,\n \"dropout\": 0.4\n },\n },\n }\n },\n \"encoder\": {\n \"type\": \"lstm\",\n \"input_size\": 1024+LSTM_ENCODER_HIDDEN+LSTM_ENCODER_HIDDEN,\n \"hidden_size\": 300,\n \"num_layers\": 1,\n \"bidirectional\": true\n },\n \"matrix_attention\": {\"type\": \"dot_product\"},\n \"projection_feedforward\": {\n \"input_dim\": 2400,\n \"hidden_dims\": 300,\n \"num_layers\": 1,\n \"activations\": \"relu\"\n },\n \"inference_encoder\": {\n \"type\": \"lstm\",\n \"input_size\": 300,\n \"hidden_size\": 300,\n \"num_layers\": 1,\n \"bidirectional\": true\n },\n \"output_feedforward\": {\n \"input_dim\": 2400,\n \"num_layers\": 1,\n \"hidden_dims\": 300,\n \"activations\": \"relu\",\n \"dropout\": 0.5\n },\n \"output_logit\": {\n \"input_dim\": 300,\n \"num_layers\": 1,\n \"hidden_dims\": 22,\n \"activations\": \"linear\"\n },\n \"initializer\": {\n \"regexes\": [\n [\".*linear_layers.*weight\", {\"type\": \"xavier_normal\"}],\n [\".*linear_layers.*bias\", {\"type\": \"constant\", \"val\": 0}],\n [\".*weight_ih.*\", {\"type\": \"xavier_normal\"}],\n [\".*weight_hh.*\", {\"type\": \"orthogonal\"}],\n [\".*bias.*\", {\"type\": \"constant\", \"val\": 0}],\n [\".*matcher.*match_weights.*\", {\"type\": \"kaiming_normal\"}]\n ]\n }\n },\n \"data_loader\": {\n \"batch_sampler\": {\n \"type\": \"bucket\",\n \"batch_size\": 20,\n \"padding_noise\": 0.0,\n \"sorting_keys\": [\"premise\"],\n },\n },\n \"trainer\": {\n \"num_epochs\": NUM_EPOCHS,\n \"cuda_device\": 1,\n \"grad_clipping\": 5.0,\n \"validation_metric\": \"+f1_macro\",\n \"shuffle\": true,\n \"optimizer\": {\n \"type\": \"adam\",\n \"lr\": LR\n },\n \"learning_rate_scheduler\": {\n \"type\": \"reduce_on_plateau\",\n \"factor\": 0.5,\n \"mode\": \"max\",\n \"patience\": 0\n }\n }\n}", "Overwriting models/label_predictor_esim/config_elmo.json\n" ], [ "! 
cp -r $MODEL_PATH ../../../maintenance_rst/models/label_predictor_esim", "_____no_output_____" ], [ "! cp -r $MODEL_PATH/config_elmo.json ../../../maintenance_rst/models/label_predictor_esim/", "_____no_output_____" ] ], [ [ "### 3. Scripts for training/prediction ", "_____no_output_____" ], [ "#### Option 1. Directly from the config", "_____no_output_____" ], [ "Train a model", "_____no_output_____" ] ], [ [ "%%writefile models/train_label_predictor_esim.sh\n# usage:\n# $ cd models \n# $ sh train_label_predictor.sh {bert|elmo} result_30\n\nexport METHOD=${1}\nexport RESULT_DIR=${2}\nexport DEV_FILE_PATH=\"nlabel_cf_dev.tsv\"\nexport TEST_FILE_PATH=\"nlabel_cf_test.tsv\"\n\nrm -r label_predictor_esim/${RESULT_DIR}/\nallennlp train -s label_predictor_esim/${RESULT_DIR}/ label_predictor_esim/config_${METHOD}.json \\\n --include-package bimpm_custom_package\nallennlp predict --use-dataset-reader --silent \\\n --output-file label_predictor_esim/${RESULT_DIR}/predictions_dev.json label_predictor_esim/${RESULT_DIR}/model.tar.gz label_predictor_esim/${DEV_FILE_PATH} \\\n --include-package bimpm_custom_package \\\n --predictor textual-entailment\nallennlp predict --use-dataset-reader --silent \\\n --output-file label_predictor_esim/${RESULT_DIR}/predictions_test.json label_predictor_esim/${RESULT_DIR}/model.tar.gz label_predictor_esim/${TEST_FILE_PATH} \\\n --include-package bimpm_custom_package \\\n --predictor textual-entailment", "Overwriting models/train_label_predictor_esim.sh\n" ], [ "! cp models/train_label_predictor_esim.sh ../../../maintenance_rst/models/", "_____no_output_____" ] ], [ [ "Predict on dev&test", "_____no_output_____" ] ], [ [ "%%writefile models/eval_label_predictor_esim.sh\n# usage:\n# $ cd models \n# $ sh train_label_predictor.sh {bert|elmo} result_30\n\nexport METHOD=${1}\nexport RESULT_DIR=${2}\nexport DEV_FILE_PATH=\"nlabel_cf_dev.tsv\"\nexport TEST_FILE_PATH=\"nlabel_cf_test.tsv\"\n\nallennlp predict --use-dataset-reader --silent \\\n --output-file label_predictor_esim/${RESULT_DIR}/predictions_dev.json label_predictor_esim/${RESULT_DIR}/model.tar.gz label_predictor_esim/${DEV_FILE_PATH} \\\n --include-package bimpm_custom_package \\\n --predictor textual-entailment\nallennlp predict --use-dataset-reader --silent \\\n --output-file label_predictor_esim/${RESULT_DIR}/predictions_test.json label_predictor_esim/${RESULT_DIR}/model.tar.gz label_predictor_esim/${TEST_FILE_PATH} \\\n --include-package bimpm_custom_package \\\n --predictor textual-entailment", "Overwriting models/eval_label_predictor_esim.sh\n" ], [ "! cp models/eval_label_predictor_esim.sh ../../../maintenance_rst/models/", "_____no_output_____" ] ], [ [ "(optional) predict on train", "_____no_output_____" ] ], [ [ "%%writefile models/eval_label_predictor_train.sh\n# usage:\n# $ cd models \n# $ sh eval_label_predictor_train.sh {bert|elmo} result_30\n\nexport METHOD=${1}\nexport RESULT_DIR=${2}\nexport TEST_FILE_PATH=\"nlabel_cf_train.tsv\"\n\nallennlp predict --use-dataset-reader --silent \\\n --output-file label_predictor_bimpm/${RESULT_DIR}/predictions_train.json label_predictor_bimpm/${RESULT_DIR}/model.tar.gz label_predictor_bimpm/${TEST_FILE_PATH} \\\n --include-package customization_package \\\n --predictor textual-entailment", "_____no_output_____" ] ], [ [ "#### Option 2. 
Using wandb for parameters adjustment", "_____no_output_____" ] ], [ [ "%%writefile ../../../maintenance_rst/models/wandb_label_predictor_esim.yaml\n\nname: label_predictor_esim\nprogram: wandb_allennlp # this is a wrapper console script around allennlp commands. It is part of wandb-allennlp\nmethod: bayes\n## Do not for get to use the command keyword to specify the following command structure\ncommand:\n - ${program} #omit the interpreter as we use allennlp train command directly\n - \"--subcommand=train\"\n - \"--include-package=customization_package\" # add all packages containing your registered classes here\n - \"--config_file=label_predictor_esim/config_elmo.json\"\n - ${args}\nmetric:\n name: best_f1_macro\n goal: maximize\nparameters:\n model.encode_together:\n values: [\"true\", ] \n iterator.batch_size:\n values: [8,]\n trainer.optimizer.lr:\n values: [0.001,]\n model.dropout:\n values: [0.5]\n", "_____no_output_____" ] ], [ [ "3. Run training", "_____no_output_____" ], [ "``wandb sweep wandb_label_predictor_esim.yaml``\n\n(returns %sweepname1)\n\n``wandb sweep wandb_label_predictor2.yaml``\n\n(returns %sweepname2)\n\n``wandb agent --count 1 %sweepname1 && wandb agent --count 1 %sweepname2``", "_____no_output_____" ], [ "Move the best model in label_predictor_bimpm", "_____no_output_____" ] ], [ [ "! ls -laht models/wandb", "_____no_output_____" ], [ "! cp -r models/wandb/run-20201218_123424-kcphaqhi/training_dumps models/label_predictor_esim/esim_elmo", "_____no_output_____" ] ], [ [ "**Or** load from wandb by %sweepname", "_____no_output_____" ] ], [ [ "import wandb\napi = wandb.Api()\nrun = api.run(\"tchewik/tmp/7hum4oom\")\nfor file in run.files():\n file.download(replace=True)", "_____no_output_____" ], [ "! cp -r training_dumps models/label_predictor_bimpm/toasty-sweep-1", "_____no_output_____" ] ], [ [ "And run evaluation from shell\n\n``sh eval_label_predictor_esim.sh {elmo|elmo_fasttext} toasty-sweep-1``", "_____no_output_____" ], [ "### 4. Evaluate classifier", "_____no_output_____" ] ], [ [ "def load_predictions(path):\n result = []\n vocab = []\n \n with open(path, 'r') as file:\n for line in file.readlines():\n line = json.loads(line)\n if line.get(\"label\"):\n result.append(line.get(\"label\"))\n elif line.get(\"label_probs\"):\n if not vocab:\n vocab = open(path[:path.rfind('/')] + '/vocabulary/labels.txt', 'r').readlines()\n vocab = [label.strip() for label in vocab]\n \n result.append(vocab[np.argmax(line.get(\"label_probs\"))])\n \n print('length of result:', len(result))\n return result", "_____no_output_____" ], [ "RESULT_DIR = 'esim_elmo'", "_____no_output_____" ], [ "! mkdir models/label_predictor_esim/$RESULT_DIR", "_____no_output_____" ], [ "! 
cp -r ../../../maintenance_rst/models/label_predictor_esim/$RESULT_DIR/*.json models/label_predictor_esim/$RESULT_DIR/", "_____no_output_____" ] ], [ [ "On dev set", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport json\n\ntrue = pd.read_csv(DEV_FILE_PATH, sep='\\t', header=None)[0].values.tolist()\npred = load_predictions(f'{MODEL_PATH}/{RESULT_DIR}/predictions_dev.json')", "length of result: 3596\n" ], [ "from sklearn.metrics import classification_report\n\nprint(classification_report(true[:len(pred)], pred, digits=4))", " precision recall f1-score support\n\n attribution_NS 0.8488 0.8902 0.8690 82\n attribution_SN 0.8830 0.8343 0.8580 181\n background_NS 0.2113 0.1685 0.1875 89\n cause-effect_NS 0.6094 0.5000 0.5493 156\n cause-effect_SN 0.4451 0.4425 0.4438 174\n comparison_NN 0.1236 0.2115 0.1560 52\n concession_NS 0.9000 0.5625 0.6923 32\n condition_NS 0.6111 0.6197 0.6154 71\n condition_SN 0.7008 0.8241 0.7574 108\n contrast_NN 0.7387 0.6119 0.6694 268\n elaboration_NS 0.3530 0.5575 0.4323 644\n evidence_NS 0.1235 0.1887 0.1493 53\ninterpretation-evaluation_NS 0.3008 0.3540 0.3252 226\ninterpretation-evaluation_SN 0.5000 0.3214 0.3913 28\n joint_NN 0.7661 0.3984 0.5242 748\n preparation_SN 0.4000 0.2581 0.3137 186\n purpose_NS 0.8588 0.7849 0.8202 93\n purpose_SN 0.6154 0.8889 0.7273 18\n restatement_NN 0.3333 0.3571 0.3448 14\n same-unit_NN 0.6881 0.5906 0.6356 127\n sequence_NN 0.4561 0.5450 0.4966 200\n solutionhood_SN 0.3889 0.6087 0.4746 46\n\n accuracy 0.5089 3596\n macro avg 0.5389 0.5236 0.5197 3596\n weighted avg 0.5641 0.5089 0.5168 3596\n\n" ], [ "test_metrics = classification_report(true[:len(pred)], pred, digits=4, output_dict=True)\ntest_f1 = np.array(\n [test_metrics[label].get('f1-score') for label in test_metrics if type(test_metrics[label]) == dict]) * 100\n\ntest_f1", "_____no_output_____" ], [ "len(true)", "_____no_output_____" ], [ "from sklearn.metrics import f1_score, precision_score, recall_score\n\nprint('f1: %.2f'%(f1_score(true[:len(pred)], pred, average='macro')*100))\nprint('pr: %.2f'%(precision_score(true[:len(pred)], pred, average='macro')*100))\nprint('re: %.2f'%(recall_score(true[:len(pred)], pred, average='macro')*100))", "f1: 51.97\npr: 53.89\nre: 52.36\n" ], [ "from utils.plot_confusion_matrix import plot_confusion_matrix\nfrom sklearn.metrics import confusion_matrix\n\nlabels = list(set(true))\nlabels.sort()\nplot_confusion_matrix(confusion_matrix(true[:len(pred)], pred, labels), target_names=labels, normalize=True)", "_____no_output_____" ], [ "top_classes = [\n 'attribution_NS',\n 'attribution_SN',\n 'purpose_NS',\n 'purpose_SN',\n 'condition_SN',\n 'contrast_NN',\n 'condition_NS',\n 'joint_NN',\n 'concession_NS',\n 'same-unit_NN',\n 'elaboration_NS',\n 'cause-effect_NS',\n]\n\nclass_mapper = {weird_class: 'other' + weird_class[-3:] for weird_class in labels if not weird_class in top_classes}", "_____no_output_____" ], [ "import numpy as np\n\ntrue = [class_mapper.get(value) if class_mapper.get(value) else value for value in true]\npred = [class_mapper.get(value) if class_mapper.get(value) else value for value in pred]\n\npred_mapper = {\n 'other_NN': 'joint_NN',\n 'other_NS': 'joint_NN',\n 'other_SN': 'joint_NN'\n}\npred = [pred_mapper.get(value) if pred_mapper.get(value) else value for value in pred]\n\n_to_stay = (np.array(true) != 'other_NN') & (np.array(true) != 'other_SN') & (np.array(true) != 'other_NS')\n\n_true = np.array(true)[_to_stay]\n_pred = np.array(pred)[_to_stay[:len(pred)]]\nlabels = list(set(_true))", 
"_____no_output_____" ], [ "from sklearn.metrics import f1_score, precision_score, recall_score\n\nprint('f1: %.2f'%(f1_score(true[:len(pred)], pred, average='macro')*100))\nprint('pr: %.2f'%(precision_score(true[:len(pred)], pred, average='macro')*100))\nprint('re: %.2f'%(recall_score(true[:len(pred)], pred, average='macro')*100))", "_____no_output_____" ], [ "labels.sort()", "_____no_output_____" ], [ "plot_confusion_matrix(confusion_matrix(_true[:len(_pred)], _pred), target_names=labels, normalize=True)", "_____no_output_____" ], [ "import numpy as np\n\nfor rel in np.unique(_true):\n print(rel)", "_____no_output_____" ] ], [ [ "On train set (optional)", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport json\n\ntrue = pd.read_csv('models/label_predictor_bimpm/nlabel_cf_train.tsv', sep='\\t', header=None)[0].values.tolist()\npred = load_predictions(f'{MODEL_PATH}/{RESULT_DIR}/predictions_train.json')\n\nprint(classification_report(true[:len(pred)], pred, digits=4))", "_____no_output_____" ], [ "file = 'models/label_predictor_lstm/nlabel_cf_train.tsv'\ntrue_train = pd.read_csv(file, sep='\\t', header=None)\ntrue_train['predicted_relation'] = pred\n\nprint(true_train[true_train.relation != true_train.predicted_relation].shape)\n\ntrue_train[true_train.relation != true_train.predicted_relation].to_csv('mispredicted_relations.csv', sep='\\t')", "_____no_output_____" ] ], [ [ "On test set", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport json\n\ntrue = pd.read_csv(TEST_FILE_PATH, sep='\\t', header=None)[0].values.tolist()\npred = load_predictions(f'{MODEL_PATH}/{RESULT_DIR}/predictions_test.json')\n\nprint(classification_report(true[:len(pred)], pred, digits=4))", "length of result: 2518\n" ], [ "test_metrics = classification_report(true[:len(pred)], pred, digits=4, output_dict=True)\ntest_f1 = np.array(\n [test_metrics[label].get('f1-score') for label in test_metrics if type(test_metrics[label]) == dict]) * 100\n\ntest_f1", "_____no_output_____" ], [ "from sklearn.metrics import f1_score, precision_score, recall_score\n\nprint('f1: %.2f'%(f1_score(true[:len(pred)], pred, average='macro')*100))\nprint('pr: %.2f'%(precision_score(true[:len(pred)], pred, average='macro')*100))\nprint('re: %.2f'%(recall_score(true[:len(pred)], pred, average='macro')*100))", "_____no_output_____" ], [ "len(true)", "_____no_output_____" ], [ "true = [class_mapper.get(value) if class_mapper.get(value) else value for value in true]\npred = [class_mapper.get(value) if class_mapper.get(value) else value for value in pred]\npred = [pred_mapper.get(value) if pred_mapper.get(value) else value for value in pred]\n\n_to_stay = (np.array(true) != 'other_NN') & (np.array(true) != 'other_SN') & (np.array(true) != 'other_NS')\n\n_true = np.array(true)[_to_stay]\n_pred = np.array(pred)[_to_stay]", "_____no_output_____" ], [ "print(classification_report(_true[:len(_pred)], _pred, digits=4))", "_____no_output_____" ], [ "from sklearn.metrics import f1_score, precision_score, recall_score\n\nprint('f1: %.2f'%(f1_score(_true[:len(_pred)], _pred, average='macro')*100))\nprint('pr: %.2f'%(precision_score(_true[:len(_pred)], _pred, average='macro')*100))\nprint('re: %.2f'%(recall_score(_true[:len(_pred)], _pred, average='macro')*100))", "_____no_output_____" ] ], [ [ "### Ensemble: (Logreg+Catboost) + ESIM", "_____no_output_____" ] ], [ [ "! 
ls models/label_predictor_esim", "_____no_output_____" ], [ "import json\n\nmodel_vocab = open(MODEL_PATH + '/' + RESULT_DIR + '/vocabulary/labels.txt', 'r').readlines()\nmodel_vocab = [label.strip() for label in model_vocab]\n\ncatboost_vocab = [\n 'attribution_NS', 'attribution_SN', 'background_NS',\n 'cause-effect_NS', 'cause-effect_SN', 'comparison_NN',\n 'concession_NS', 'condition_NS', 'condition_SN', 'contrast_NN',\n 'elaboration_NS', 'evidence_NS', 'interpretation-evaluation_NS',\n 'interpretation-evaluation_SN', 'joint_NN', 'preparation_SN',\n 'purpose_NS', 'purpose_SN', 'restatement_NN', 'same-unit_NN',\n 'sequence_NN', 'solutionhood_SN']\n\ndef load_neural_predictions(path):\n result = []\n \n with open(path, 'r') as file:\n for line in file.readlines():\n line = json.loads(line)\n if line.get('probs'):\n probs = line.get('probs')\n elif line.get('label_probs'):\n probs = line.get('label_probs')\n probs = {model_vocab[i]: probs[i] for i in range(len(model_vocab))}\n result.append(probs)\n \n return result\n\ndef load_scikit_predictions(model, X):\n result = []\n predictions = model.predict_proba(X)\n \n for prediction in predictions:\n probs = {catboost_vocab[j]: prediction[j] for j in range(len(catboost_vocab))}\n result.append(probs)\n \n return result\n\ndef vote_predictions(predictions, soft=True, weights=[1., 1.]):\n for i in range(1, len(predictions)):\n assert len(predictions[i-1]) == len(predictions[i])\n \n if weights == [1., 1.]:\n weights = [1.,] * len(predictions)\n \n result = []\n \n for i in range(len(predictions[0])):\n sample_result = {}\n for key in predictions[0][i].keys():\n if soft:\n sample_result[key] = 0\n for j, prediction in enumerate(predictions):\n sample_result[key] += prediction[i][key] * weights[j]\n else:\n sample_result[key] = max([pred[i][key] * weights[j] for j, pred in enumerate(predictions)])\n\n \n result.append(sample_result)\n \n return result\n\ndef probs_to_classes(pred):\n result = []\n \n for sample in pred:\n best_class = ''\n best_prob = 0.\n for key in sample.keys():\n if sample[key] > best_prob:\n best_prob = sample[key]\n best_class = key\n \n result.append(best_class)\n \n return result", "_____no_output_____" ], [ "! 
pip install catboost", "Collecting catboost\n Downloading catboost-0.24.4-cp37-none-manylinux1_x86_64.whl (65.7 MB)\n\u001b[K |████████████████████████████████| 65.7 MB 2.5 MB/s eta 0:00:01\n\u001b[?25hRequirement already satisfied: six in /opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages (from catboost) (1.14.0)\nRequirement already satisfied: plotly in /opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages (from catboost) (4.5.0)\nRequirement already satisfied: numpy>=1.16.0 in /opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages (from catboost) (1.18.1)\nRequirement already satisfied: scipy in /opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages (from catboost) (1.4.1)\nRequirement already satisfied: pandas>=0.24.0 in /opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages (from catboost) (0.25.3)\nRequirement already satisfied: graphviz in /opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages (from catboost) (0.13.2)\nRequirement already satisfied: matplotlib in /opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages (from catboost) (3.1.2)\nRequirement already satisfied: retrying>=1.3.3 in /opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages (from plotly->catboost) (1.3.3)\nRequirement already satisfied: pytz>=2017.2 in /opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages (from pandas>=0.24.0->catboost) (2019.3)\nRequirement already satisfied: python-dateutil>=2.6.1 in /opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages (from pandas>=0.24.0->catboost) (2.8.1)\nRequirement already satisfied: cycler>=0.10 in /opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages (from matplotlib->catboost) (0.10.0)\nRequirement already satisfied: kiwisolver>=1.0.1 in /opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages (from matplotlib->catboost) (1.1.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages (from matplotlib->catboost) (2.4.6)\nRequirement already satisfied: setuptools in /opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages (from kiwisolver>=1.0.1->matplotlib->catboost) (45.1.0)\nInstalling collected packages: catboost\nSuccessfully installed catboost-0.24.4\n\u001b[33mWARNING: You are using pip version 20.0.2; however, version 21.0.1 is available.\nYou should consider upgrading via the '/opt/.pyenv/versions/3.7.4/bin/python3.7 -m pip install --upgrade pip' command.\u001b[0m\n" ], [ "import pickle\n\nfs_catboost_plus_logreg = pickle.load(open('models/relation_predictor_baseline/model.pkl', 'rb'))\nlab_encoder = pickle.load(open('models/relation_predictor_baseline/label_encoder.pkl', 'rb'))\nscaler = pickle.load(open('models/relation_predictor_baseline/scaler.pkl', 'rb'))\ndrop_columns = pickle.load(open('models/relation_predictor_baseline/drop_columns.pkl', 'rb'))", "/opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages/sklearn/base.py:318: UserWarning: Trying to unpickle estimator Pipeline from version 0.22.2.post1 when using version 0.22.1. This might lead to breaking code or invalid results. Use at your own risk.\n UserWarning)\n/opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages/sklearn/base.py:318: UserWarning: Trying to unpickle estimator LabelEncoder from version 0.22.2.post1 when using version 0.22.1. This might lead to breaking code or invalid results. Use at your own risk.\n UserWarning)\n/opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages/sklearn/base.py:318: UserWarning: Trying to unpickle estimator VotingClassifier from version 0.22.2.post1 when using version 0.22.1. 
This might lead to breaking code or invalid results. Use at your own risk.\n UserWarning)\n/opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages/sklearn/base.py:318: UserWarning: Trying to unpickle estimator StandardScaler from version 0.22.2.post1 when using version 0.22.1. This might lead to breaking code or invalid results. Use at your own risk.\n UserWarning)\n" ] ], [ [ "On dev set", "_____no_output_____" ] ], [ [ "from sklearn import metrics\n\n\nTARGET = 'relation'\n\ny_dev, X_dev = dev_samples['relation'].to_frame(), dev_samples.drop('relation', axis=1).drop(\n columns=drop_columns + ['category_id', 'index'])\n\nX_scaled_np = scaler.transform(X_dev)\nX_dev = pd.DataFrame(X_scaled_np, index=X_dev.index)\n\ncatboost_predictions = load_scikit_predictions(fs_catboost_plus_logreg, X_dev)\nneural_predictions = load_neural_predictions(f'{MODEL_PATH}/{RESULT_DIR}/predictions_dev.json')\n\ntmp = vote_predictions([neural_predictions, catboost_predictions], soft=True, weights=[1., 1.])\nensemble_pred = probs_to_classes(tmp)\n\nprint('weighted f1: ', metrics.f1_score(y_dev.values, ensemble_pred, average='weighted'))\nprint('macro f1: ', metrics.f1_score(y_dev.values, ensemble_pred, average='macro'))\nprint('accuracy: ', metrics.accuracy_score(y_dev.values, ensemble_pred))\nprint()\nprint(metrics.classification_report(y_dev, ensemble_pred, digits=4))", "weighted f1: 0.5413872373769657\nmacro f1: 0.5354738926873194\naccuracy: 0.5389321468298109\n\n precision recall f1-score support\n\n attribution_NS 0.8409 0.9024 0.8706 82\n attribution_SN 0.8424 0.8564 0.8493 181\n background_NS 0.2167 0.1461 0.1745 89\n cause-effect_NS 0.6015 0.5128 0.5536 156\n cause-effect_SN 0.5096 0.4598 0.4834 174\n comparison_NN 0.1449 0.1923 0.1653 52\n concession_NS 0.9000 0.5625 0.6923 32\n condition_NS 0.6438 0.6620 0.6528 71\n condition_SN 0.6716 0.8333 0.7438 108\n contrast_NN 0.7229 0.6231 0.6693 268\n elaboration_NS 0.3932 0.5776 0.4679 644\n evidence_NS 0.1698 0.1698 0.1698 53\ninterpretation-evaluation_NS 0.3281 0.3717 0.3485 226\ninterpretation-evaluation_SN 0.5294 0.3214 0.4000 28\n joint_NN 0.6974 0.5053 0.5860 748\n preparation_SN 0.4153 0.2634 0.3224 186\n purpose_NS 0.8506 0.7957 0.8222 93\n purpose_SN 0.6667 0.8889 0.7619 18\n restatement_NN 0.4167 0.3571 0.3846 14\n same-unit_NN 0.7064 0.6063 0.6525 127\n sequence_NN 0.4585 0.5250 0.4895 200\n solutionhood_SN 0.4815 0.5652 0.5200 46\n\n accuracy 0.5389 3596\n macro avg 0.5549 0.5317 0.5355 3596\n weighted avg 0.5623 0.5389 0.5414 3596\n\n" ] ], [ [ "On test set", "_____no_output_____" ] ], [ [ "_test_samples = test_samples[:]", "_____no_output_____" ], [ "test_samples = _test_samples[:]", "_____no_output_____" ], [ "mask = test_samples.filename.str.contains('news')\ntest_samples = test_samples[test_samples['filename'].str.contains('news')]", "_____no_output_____" ], [ "mask.shape", "_____no_output_____" ], [ "test_samples.shape", "_____no_output_____" ], [ "def mask_predictions(predictions, mask):\n result = []\n mask = mask.values\n for i, prediction in enumerate(predictions):\n if mask[i]:\n result.append(prediction)\n return result", "_____no_output_____" ], [ "TARGET = 'relation'\n\ny_test, X_test = test_samples[TARGET].to_frame(), test_samples.drop(TARGET, axis=1).drop(\n columns=drop_columns + ['category_id', 'index'])\n\nX_scaled_np = scaler.transform(X_test)\nX_test = pd.DataFrame(X_scaled_np, index=X_test.index)\n\ncatboost_predictions = load_scikit_predictions(fs_catboost_plus_logreg, X_test)\nneural_predictions = 
load_neural_predictions(f'{MODEL_PATH}/{RESULT_DIR}/predictions_test.json')\n# neural_predictions = mask_predictions(neural_predictions, mask)\n\ntmp = vote_predictions([neural_predictions, catboost_predictions], soft=True, weights=[1., 2.])\n\nensemble_pred = probs_to_classes(tmp)\n\nprint('weighted f1: ', metrics.f1_score(y_test.values, ensemble_pred, average='weighted'))\nprint('macro f1: ', metrics.f1_score(y_test.values, ensemble_pred, average='macro'))\nprint('accuracy: ', metrics.accuracy_score(y_test.values, ensemble_pred))\nprint()\nprint(metrics.classification_report(y_test, ensemble_pred, digits=4))", "weighted f1: 0.5610757855765772\nmacro f1: 0.510472638133765\naccuracy: 0.5647339158061954\n\n precision recall f1-score support\n\n attribution_NS 0.8800 1.0000 0.9362 44\n attribution_SN 0.8182 0.9435 0.8764 124\n background_NS 0.0370 0.0385 0.0377 26\n cause-effect_NS 0.4949 0.6364 0.5568 77\n cause-effect_SN 0.5464 0.4309 0.4818 123\n comparison_NN 0.1250 0.2000 0.1538 35\n concession_NS 0.4444 0.4000 0.4211 10\n condition_NS 0.5977 0.7647 0.6710 68\n condition_SN 0.6869 0.8500 0.7598 80\n contrast_NN 0.6510 0.6188 0.6345 202\n elaboration_NS 0.5337 0.6323 0.5789 563\n evidence_NS 0.0400 0.0435 0.0417 23\ninterpretation-evaluation_NS 0.3355 0.3893 0.3604 131\ninterpretation-evaluation_SN 0.5455 0.3529 0.4286 17\n joint_NN 0.6374 0.5203 0.5729 517\n preparation_SN 0.4028 0.2613 0.3169 111\n purpose_NS 0.8025 0.8228 0.8125 79\n purpose_SN 0.6923 0.9000 0.7826 20\n restatement_NN 0.4706 0.4211 0.4444 19\n same-unit_NN 0.8070 0.5476 0.6525 84\n sequence_NN 0.4750 0.3016 0.3689 126\n solutionhood_SN 0.3061 0.3846 0.3409 39\n\n accuracy 0.5647 2518\n macro avg 0.5150 0.5209 0.5105 2518\n weighted avg 0.5707 0.5647 0.5611 2518\n\n" ], [ "output = test_samples[['snippet_x', 'snippet_y', 'category_id', 'order', 'filename']]", "_____no_output_____" ], [ "output['true'] = output['category_id']", "/opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n \"\"\"Entry point for launching an IPython kernel.\n" ], [ "output['predicted'] = ensemble_pred", "/opt/.pyenv/versions/3.7.4/lib/python3.7/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n \"\"\"Entry point for launching an IPython kernel.\n" ], [ "output", "_____no_output_____" ], [ "output2 = output[output.true != output.predicted.map(lambda row: row.split('_')[0])]", "_____no_output_____" ], [ "output2.shape", "_____no_output_____" ], [ "output2", "_____no_output_____" ], [ "del output2['category_id']\noutput2.to_csv('mispredictions.csv')", "_____no_output_____" ], [ "test_metrics = metrics.classification_report(y_test, ensemble_pred, digits=4, output_dict=True)\ntest_f1 = np.array(\n [test_metrics[label].get('f1-score') for label in test_metrics if type(test_metrics[label]) == dict]) * 100\n\ntest_f1", "_____no_output_____" ] ], [ [ "### Ensemble: BiMPM + ESIM", "_____no_output_____" ], [ "On dev set", "_____no_output_____" ] ], [ [ "!ls 
models/label_predictor_bimpm/", "_____no_output_____" ], [ "from sklearn import metrics\n\n\nTARGET = 'relation'\n\ny_dev, X_dev = dev_samples['relation'].to_frame(), dev_samples.drop('relation', axis=1).drop(\n columns=drop_columns + ['category_id', 'index'])\n\nX_scaled_np = scaler.transform(X_dev)\nX_dev = pd.DataFrame(X_scaled_np, index=X_dev.index)\n\nbimpm = load_neural_predictions(f'models/label_predictor_bimpm/winter-sweep-1/predictions_dev.json')\nesim = load_neural_predictions(f'{MODEL_PATH}/{RESULT_DIR}/predictions_dev.json')\ncatboost_predictions = load_scikit_predictions(fs_catboost_plus_logreg, X_dev)\n\ntmp = vote_predictions(bimpm, esim, soft=False, weights=[1., 1.])\ntmp = vote_predictions(tmp, catboost_predictions, soft=True, weights=[1., 1.])\nensemble_pred = probs_to_classes(tmp)\n\nprint('weighted f1: ', metrics.f1_score(y_dev.values, ensemble_pred, average='weighted'))\nprint('macro f1: ', metrics.f1_score(y_dev.values, ensemble_pred, average='macro'))\nprint('accuracy: ', metrics.accuracy_score(y_dev.values, ensemble_pred))\nprint()\nprint(metrics.classification_report(y_dev, ensemble_pred, digits=4))", "_____no_output_____" ] ], [ [ "On test set", "_____no_output_____" ] ], [ [ "TARGET = 'relation'\n\ny_test, X_test = test_samples[TARGET].to_frame(), test_samples.drop(TARGET, axis=1).drop(\n columns=drop_columns + ['category_id', 'index'])\n\nX_scaled_np = scaler.transform(X_test)\nX_test = pd.DataFrame(X_scaled_np, index=X_test.index)\n\nbimpm = load_neural_predictions(f'models/label_predictor_bimpm/winter-sweep-1/predictions_test.json')\nesim = load_neural_predictions(f'{MODEL_PATH}/{RESULT_DIR}/predictions_test.json')\ncatboost_predictions = load_scikit_predictions(fs_catboost_plus_logreg, X_test)\n\ntmp = vote_predictions([bimpm, catboost_predictions, esim], soft=True, weights=[2., 1, 15.])\n\nensemble_pred = probs_to_classes(tmp)\n\nprint('weighted f1: ', metrics.f1_score(y_test.values, ensemble_pred, average='weighted'))\nprint('macro f1: ', metrics.f1_score(y_test.values, ensemble_pred, average='macro'))\nprint('accuracy: ', metrics.accuracy_score(y_test.values, ensemble_pred))\nprint()\nprint(metrics.classification_report(y_test, ensemble_pred, digits=4))", "_____no_output_____" ] ] ]
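A minimal sketch of the weighted soft-voting idea implemented by `vote_predictions` and `probs_to_classes` in the cells above, assuming two invented "models" and made-up class probabilities (only two of the relation labels are used):

# Minimal sketch of weighted soft voting over per-class probability dicts,
# mirroring the vote_predictions/probs_to_classes pattern above.
# The two models and their probabilities are invented for illustration.

def soft_vote(prediction_sets, weights):
    """prediction_sets: one list of {label: prob} dicts per model."""
    voted = []
    for sample_probs in zip(*prediction_sets):
        combined = {}
        for weight, probs in zip(weights, sample_probs):
            for label, prob in probs.items():
                combined[label] = combined.get(label, 0.0) + weight * prob
        voted.append(combined)
    return voted

def argmax_labels(prob_dicts):
    # pick the label with the highest accumulated score for each sample
    return [max(d, key=d.get) for d in prob_dicts]

model_a = [{'joint_NN': 0.8, 'contrast_NN': 0.2}]
model_b = [{'joint_NN': 0.3, 'contrast_NN': 0.7}]
print(argmax_labels(soft_vote([model_a, model_b], weights=[1.0, 3.0])))
# ['contrast_NN'] -- up-weighting the second model flips the decision

Giving one model a larger weight, as the `weights=[2., 1, 15.]` call above does for ESIM, simply scales its probabilities before the argmax.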
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
d0349e0a87b588b90cb0930d090d4a7e9e604d4f
208,563
ipynb
Jupyter Notebook
003_quantitative_value_strategy.ipynb
gyalpodongo/algorithmic_trading_python
fd676f2e0342e734d69f53700a51b8dfa619bf4d
[ "MIT" ]
null
null
null
003_quantitative_value_strategy.ipynb
gyalpodongo/algorithmic_trading_python
fd676f2e0342e734d69f53700a51b8dfa619bf4d
[ "MIT" ]
null
null
null
003_quantitative_value_strategy.ipynb
gyalpodongo/algorithmic_trading_python
fd676f2e0342e734d69f53700a51b8dfa619bf4d
[ "MIT" ]
null
null
null
37.423829
279
0.316537
[ [ [ "# Quantitative Value Strategy\n\"Value investing\" means investing in the stocks that are cheapest relative to common measures of business value (like earnings or assets).\n\nFor this project, we're going to build an investing strategy that selects the 50 stocks with the best value metrics. From there, we will calculate recommended trades for an equal-weight portfolio of these 50 stocks.\n\n## Library Imports\nThe first thing we need to do is import the open-source software libraries that we'll be using in this tutorial.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport xlsxwriter\nimport requests\nfrom scipy import stats\nimport math", "_____no_output_____" ] ], [ [ "## Importing Our List of Stocks & API Token\nAs before, we'll need to import our list of stocks and our API token before proceeding. Make sure the .csv file is still in your working directory and import it with the following command:", "_____no_output_____" ] ], [ [ "stocks = pd.read_csv('sp_500_stocks.csv')\nfrom secrets import IEX_CLOUD_API_TOKEN", "_____no_output_____" ] ], [ [ "## Making Our First API Call\nIt's now time to make the first version of our value screener!\n\nWe'll start by building a simple value screener that ranks securities based on a single metric (the price-to-earnings ratio).", "_____no_output_____" ] ], [ [ "symbol = 'aapl'\napi_url = f\"https://sandbox.iexapis.com/stable/stock/{symbol}/quote?token={IEX_CLOUD_API_TOKEN}\"\ndata = requests.get(api_url).json()", "_____no_output_____" ] ], [ [ "## Parsing Our API Call\nThis API call has the metric we need - the price-to-earnings ratio.\n\nHere is an example of how to parse the metric from our API call:", "_____no_output_____" ] ], [ [ "price = data['latestPrice']\npe_ratio = data['peRatio']\npe_ratio", "_____no_output_____" ] ], [ [ "## Executing A Batch API Call & Building Our DataFrame\n\nJust like in our first project, it's now time to execute several batch API calls and add the information we need to our DataFrame.\n\nWe'll start by running the following code cell, which contains some code we already built last time that we can re-use for this project. 
More specifically, it contains a function called chunks that we can use to divide our list of securities into groups of 100.", "_____no_output_____" ] ], [ [ "# Function sourced from \n# https://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks\ndef chunks(lst, n):\n \"\"\"Yield successive n-sized chunks from lst.\"\"\"\n for i in range(0, len(lst), n):\n yield lst[i:i + n] \n \nsymbol_groups = list(chunks(stocks['Ticker'], 100))\nsymbol_strings = []\nfor i in range(0, len(symbol_groups)):\n symbol_strings.append(','.join(symbol_groups[i]))\n# print(symbol_strings[i])\n\nmy_columns = ['Ticker', 'Price', 'Price-to-Earnings Ratio', 'Number of Shares to Buy']", "_____no_output_____" ] ], [ [ "Now we need to create a blank DataFrame and add our data to the data frame one-by-one.", "_____no_output_____" ] ], [ [ "df = pd.DataFrame(columns = my_columns)\nfor batch in symbol_strings:\n batch_api_call_url = f\"https://sandbox.iexapis.com/stable/stock/market/batch?symbols={batch}&types=quote&token={IEX_CLOUD_API_TOKEN}\"\n data = requests.get(batch_api_call_url).json()\n for symbol in batch.split(','):\n df = df.append(\n pd.Series(\n [\n symbol, \n data[symbol]['quote']['latestPrice'],\n data[symbol]['quote']['peRatio'],\n 'N/A'\n ],\n index=my_columns\n ),\n ignore_index=True\n )\ndf.dropna(inplace=True)\ndf\n", "_____no_output_____" ] ], [ [ "## Removing Glamour Stocks\n\nThe opposite of a \"value stock\" is a \"glamour stock\". \n\nSince the goal of this strategy is to identify the 50 best value stocks from our universe, our next step is to remove glamour stocks from the DataFrame.\n\nWe'll sort the DataFrame by the stocks' price-to-earnings ratio, and drop all stocks outside the top 50.", "_____no_output_____" ] ], [ [ "df.sort_values('Price-to-Earnings Ratio', ascending=False, inplace=True)\ndf = df[df['Price-to-Earnings Ratio'] > 0]\ndf = df[:50]\ndf.reset_index(inplace=True, drop=True)\ndf", "/var/folders/q_/gmxdkf893w3bm9wxvh6635t80000gp/T/ipykernel_89390/1321168316.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n df.sort_values('Price-to-Earnings Ratio', ascending=False, inplace=True)\n" ] ], [ [ "## Calculating the Number of Shares to Buy\nWe now need to calculate the number of shares we need to buy. \n\nTo do this, we will use the `portfolio_input` function that we created in our momentum project.\n\nI have included this function below.", "_____no_output_____" ] ], [ [ "def portfolio_input():\n global portfolio_size\n portfolio_size = input(\"Enter the value of your portfolio:\")\n\n try:\n portfolio_size = float(portfolio_size)\n except ValueError:\n print(\"That's not a number! 
\\n Try again:\")\n portfolio_size = input(\"Enter the value of your portfolio:\")", "_____no_output_____" ] ], [ [ "Use the `portfolio_input` function to accept a `portfolio_size` variable from the user of this script.", "_____no_output_____" ] ], [ [ "portfolio_input()", "_____no_output_____" ] ], [ [ "You can now use the global `portfolio_size` variable to calculate the number of shares that our strategy should purchase.", "_____no_output_____" ] ], [ [ "position_size = portfolio_size/len(df.index)\nfor row in df.index:\n df.loc[row, 'Number of Shares to Buy'] = math.floor(position_size/df.loc[row, 'Price'])\ndf", "_____no_output_____" ] ], [ [ "## Building a Better (and More Realistic) Value Strategy\nEvery valuation metric has certain flaws.\n\nFor example, the price-to-earnings ratio doesn't work well with stocks with negative earnings.\n\nSimilarly, stocks that buyback their own shares are difficult to value using the price-to-book ratio.\n\nInvestors typically use a `composite` basket of valuation metrics to build robust quantitative value strategies. In this section, we will filter for stocks with the lowest percentiles on the following metrics:\n\n* Price-to-earnings ratio\n* Price-to-book ratio\n* Price-to-sales ratio\n* Enterprise Value divided by Earnings Before Interest, Taxes, Depreciation, and Amortization (EV/EBITDA)\n* Enterprise Value divided by Gross Profit (EV/GP)\n\nSome of these metrics aren't provided directly by the IEX Cloud API, and must be computed after pulling raw data. We'll start by calculating each data point from scratch.", "_____no_output_____" ] ], [ [ "symbol = 'AAPL'\nbatch_api_call_url = f\"https://sandbox.iexapis.com/stable/stock/market/batch?symbols={symbol}&types=quote,advanced-stats&token={IEX_CLOUD_API_TOKEN}\"\ndata = requests.get(batch_api_call_url).json()\n# * Price-to-earnings ratio\npe_ratio = data[symbol]['quote']['peRatio']\n# * Price-to-book ratio\npb_ratio = data[symbol]['advanced-stats']['priceToBook']\n# * Price-to-sales ratio\nps_ratio = data[symbol]['advanced-stats']['priceToSales']\n\nenterprise_value = data[symbol]['advanced-stats']['enterpriseValue']\nebitda = data[symbol]['advanced-stats']['EBITDA']\ngross_profit = data[symbol]['advanced-stats']['grossProfit']\n\n# * Enterprise Value divided by Earnings Before Interest, Taxes, Depreciation, and Amortization (EV/EBITDA)\nev_to_ebitda = enterprise_value/ebitda\n\n# * Enterprise Value divided by Gross Profit (EV/GP)\nev_to_gross_profit = enterprise_value/gross_profit", "_____no_output_____" ] ], [ [ "Let's move on to building our DataFrame. You'll notice that I use the abbreviation `rv` often. 
It stands for `robust value`, which is what we'll call this sophisticated strategy moving forward.", "_____no_output_____" ] ], [ [ "rv_columns = [\n 'Ticker',\n 'Price', \n 'Number of Shares to Buy', \n 'Price-to-Earnings Ratio',\n 'PE Percentile', \n 'Price-to-Book Ratio', \n 'PB Percentile', \n 'Price-to-Sales Ratio',\n 'PS Percentile',\n 'EV/EBITDA',\n 'EV/EBITDA Percentile',\n 'EV/GP',\n 'EV/GP Percentile',\n 'RV Score'\n]\nrv_df = pd.DataFrame(columns=rv_columns)\n\nfor batch in symbol_strings:\n batch_api_call_url = f\"https://sandbox.iexapis.com/stable/stock/market/batch?symbols={batch}&types=quote,advanced-stats&token={IEX_CLOUD_API_TOKEN}\"\n data = requests.get(batch_api_call_url).json()\n for symbol in batch.split(','):\n enterprise_value = data[symbol]['advanced-stats']['enterpriseValue']\n ebitda = data[symbol]['advanced-stats']['EBITDA']\n gross_profit = data[symbol]['advanced-stats']['grossProfit']\n try: \n ev_to_ebitda = enterprise_value/ebitda\n except TypeError:\n ev_to_ebitda = np.NaN\n \n try: \n ev_to_gross_profit = enterprise_value/gross_profit \n except TypeError:\n ev_to_gross_profit = np.NaN\n #if(not enterprise_value or not ebitda or not gross_profit):\n #continue\n rv_df = rv_df.append(\n pd.Series(\n [\n symbol, \n data[symbol]['quote']['latestPrice'],\n 'N/A',\n data[symbol]['quote']['peRatio'], \n 'N/A', \n data[symbol]['advanced-stats']['priceToBook'], \n 'N/A', \n data[symbol]['advanced-stats']['priceToSales'], \n 'N/A',\n ev_to_ebitda,\n 'N/A', \n ev_to_gross_profit,\n 'N/A', \n 'N/A'\n ],\n index=rv_columns\n ),\n ignore_index=True\n )\nrv_df", "_____no_output_____" ] ], [ [ "## Dealing With Missing Data in Our DataFrame\n\nOur DataFrame contains some missing data because all of the metrics we require are not available through the API we're using. \n\nYou can use pandas' `isnull` method to identify missing data:", "_____no_output_____" ] ], [ [ "rv_df[rv_df.isnull().any(axis=1)]", "_____no_output_____" ] ], [ [ "Dealing with missing data is an important topic in data science.\n\nThere are two main approaches:\n\n* Drop missing data from the data set (pandas' `dropna` method is useful here)\n* Replace missing data with a new value (pandas' `fillna` method is useful here)\n\nIn this tutorial, we will replace missing data with the average non-`NaN` data point from that column. \n\nHere is the code to do this:", "_____no_output_____" ] ], [ [ "for column in [\n 'Price',\n 'Price-to-Earnings Ratio',\n 'Price-to-Book Ratio',\n 'Price-to-Sales Ratio',\n 'EV/EBITDA',\n 'EV/GP']:\n rv_df[column].fillna(rv_df[column].mean(), inplace=True)\nrv_df", "_____no_output_____" ] ], [ [ "Now, if we run the statement from earlier to print rows that contain missing data, nothing should be returned:", "_____no_output_____" ] ], [ [ "rv_df[rv_df.isnull().any(axis=1)]\n", "_____no_output_____" ] ], [ [ "## Calculating Value Percentiles\n\nWe now need to calculate value score percentiles for every stock in the universe. 
More specifically, we need to calculate percentile scores for the following metrics for every stock:\n\n* Price-to-earnings ratio\n* Price-to-book ratio\n* Price-to-sales ratio\n* EV/EBITDA\n* EV/GP\n\nHere's how we'll do this:", "_____no_output_____" ] ], [ [ "metrics = {\n 'Price-to-Earnings Ratio': 'PE Percentile', \n 'Price-to-Book Ratio' :'PB Percentile', \n 'Price-to-Sales Ratio' : 'PS Percentile',\n 'EV/EBITDA' : 'EV/EBITDA Percentile',\n 'EV/GP' : 'EV/GP Percentile',\n}\n\nfor key, value in metrics.items():\n for row in rv_df.index:\n rv_df.loc[row, value] = stats.percentileofscore(rv_df[key], rv_df.loc[row,key])/100\nrv_df", "_____no_output_____" ] ], [ [ "## Calculating the RV Score\nWe'll now calculate our RV Score (which stands for Robust Value), which is the value score that we'll use to filter for stocks in this investing strategy.\n\nThe RV Score will be the arithmetic mean of the 4 percentile scores that we calculated in the last section.\n\nTo calculate arithmetic mean, we will use the mean function from Python's built-in statistics module.", "_____no_output_____" ] ], [ [ "from statistics import mean \n\nfor row in rv_df.index:\n value_percentiles = []\n for value in metrics.values():\n value_percentiles.append(rv_df.loc[row, value])\n rv_df.loc[row, 'RV Score'] = mean(value_percentiles)\nrv_df", "_____no_output_____" ] ], [ [ "## Selecting the 50 Best Value Stocks¶\n\nAs before, we can identify the 50 best value stocks in our universe by sorting the DataFrame on the RV Score column and dropping all but the top 50 entries.", "_____no_output_____" ], [ "## Calculating the Number of Shares to Buy\nWe'll use the `portfolio_input` function that we created earlier to accept our portfolio size. Then we will use similar logic in a for loop to calculate the number of shares to buy for each stock in our investment universe.", "_____no_output_____" ] ], [ [ "rv_df.sort_values('RV Score', ascending=True, inplace=True)\nrv_df = rv_df[:50]\nrv_df.reset_index(drop = True, inplace=True)\nrv_df", "_____no_output_____" ], [ "portfolio_input()\nposition_size = portfolio_size/len(rv_df.index)\nfor row in rv_df.index:\n rv_df.loc[row, 'Number of Shares to Buy'] = math.floor(position_size/rv_df.loc[row, 'Price'])\nrv_df", "_____no_output_____" ] ], [ [ "## Formatting Our Excel Output\n\nWe will be using the XlsxWriter library for Python to create nicely-formatted Excel files.\n\nXlsxWriter is an excellent package and offers tons of customization. However, the tradeoff for this is that the library can seem very complicated to new users. Accordingly, this section will be fairly long because I want to do a good job of explaining how XlsxWriter works.", "_____no_output_____" ] ], [ [ "writer = pd.ExcelWriter('value_strategy.xlsx', engine='xlsxwriter')\nrv_df.to_excel(writer, sheet_name='Value Strategy', index = False)", "_____no_output_____" ] ], [ [ "## Creating the Formats We'll Need For Our .xlsx File\nYou'll recall from our first project that formats include colors, fonts, and also symbols like % and $. We'll need four main formats for our Excel document:\n\n* String format for tickers\n* \\$XX.XX format for stock prices\n* \\$XX,XXX format for market capitalization\n* Integer format for the number of shares to purchase\n* Float formats with 1 decimal for each valuation metric\n\nSince we already built some formats in past sections of this course, I've included them below for you. 
Run this code cell before proceeding.", "_____no_output_____" ] ], [ [ "background_color = '#0a0a23'\nfont_color = '#ffffff'\n\nstring_template = writer.book.add_format(\n {\n 'font_color': font_color,\n 'bg_color': background_color,\n 'border': 1\n }\n )\n\ndollar_template = writer.book.add_format(\n {\n 'num_format':'$0.00',\n 'font_color': font_color,\n 'bg_color': background_color,\n 'border': 1\n }\n )\n\ninteger_template = writer.book.add_format(\n {\n 'num_format':'0',\n 'font_color': font_color,\n 'bg_color': background_color,\n 'border': 1\n }\n )\n\nfloat_template = writer.book.add_format(\n {\n 'num_format':'0',\n 'font_color': font_color,\n 'bg_color': background_color,\n 'border': 1\n }\n )\n\npercent_template = writer.book.add_format(\n {\n 'num_format':'0.0%',\n 'font_color': font_color,\n 'bg_color': background_color,\n 'border': 1\n }\n )", "_____no_output_____" ], [ "column_formats = {\n 'A': ['Ticker', string_template],\n 'B': ['Price', dollar_template],\n 'C': ['Number of Shares to Buy', integer_template],\n 'D': ['Price-to-Earnings Ratio', float_template],\n 'E': ['PE Percentile', percent_template],\n 'F': ['Price-to-Book Ratio', float_template],\n 'G': ['PB Percentile',percent_template],\n 'H': ['Price-to-Sales Ratio', float_template],\n 'I': ['PS Percentile', percent_template],\n 'J': ['EV/EBITDA', float_template],\n 'K': ['EV/EBITDA Percentile', percent_template],\n 'L': ['EV/GP', float_template],\n 'M': ['EV/GP Percentile', percent_template],\n 'N': ['RV Score', percent_template]\n }\n\nfor column in column_formats.keys():\n writer.sheets['Value Strategy'].set_column(f'{column}:{column}', 25, column_formats[column][1])\n writer.sheets['Value Strategy'].write(f'{column}1', column_formats[column][0], column_formats[column][1])", "_____no_output_____" ] ], [ [ "## Saving Our Excel Output\nAs before, saving our Excel output is very easy:", "_____no_output_____" ] ], [ [ "writer.save()", "_____no_output_____" ] ] ]
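A condensed, self-contained sketch of the percentile and RV Score computation performed in the cells above; the tickers, ratios and the two-metric subset are assumptions for the example, not data from the notebook:

# Condensed illustration of the percentile + RV Score logic.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    'Ticker': ['AAA', 'BBB', 'CCC', 'DDD'],
    'Price-to-Earnings Ratio': [10.0, 25.0, 15.0, 40.0],
    'EV/EBITDA': [6.0, 12.0, 9.0, 20.0],
})

metric_cols = ['Price-to-Earnings Ratio', 'EV/EBITDA']
for col in metric_cols:
    # percentile of each stock's ratio within its column, scaled to 0-1
    df[col + ' Percentile'] = [
        stats.percentileofscore(df[col], value) / 100 for value in df[col]
    ]

# RV Score = arithmetic mean of the metric percentiles; lower means cheaper
df['RV Score'] = df[[col + ' Percentile' for col in metric_cols]].mean(axis=1)
print(df.sort_values('RV Score').head(2))   # the two cheapest stocks here

The notebook itself loops with `statistics.mean` over the percentile columns; the row-wise pandas mean above is just a shorter way to express the same average.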
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
d034b0d5012a24bc7ac43bee767be8d839345f76
87,870
ipynb
Jupyter Notebook
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
ad232809ecb66789ddd493723b0e27da1d998e2a
[ "MIT" ]
22
2016-11-20T15:35:19.000Z
2022-03-10T05:35:24.000Z
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
ad232809ecb66789ddd493723b0e27da1d998e2a
[ "MIT" ]
20
2016-11-20T13:47:57.000Z
2020-12-11T18:51:45.000Z
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
ad232809ecb66789ddd493723b0e27da1d998e2a
[ "MIT" ]
9
2016-12-18T13:45:05.000Z
2021-08-21T08:40:49.000Z
50.93913
202
0.422146
[ [ [ "# Run the Ansible on Jupyter Notebook x Alpine\n\n- Author: Chu-Siang Lai / chusiang (at) drx.tw\n- GitHub: [chusiang/ansible-jupyter.dockerfile](https://github.com/chusiang/ansible-jupyter.dockerfile)\n- Docker Hub: [chusiang/ansible-jupyter](https://hub.docker.com/r/chusiang/ansible-jupyter/)\n", "_____no_output_____" ], [ "Table of contexts:\n1. [Operating-System](#Operating-System)\n1. [Ad-Hoc-commands](#Ad-Hoc-commands)\n1. [Playbooks](#Playbooks)", "_____no_output_____" ], [ "Modified.", "_____no_output_____" ] ], [ [ "!date", "Mon Jun 18 07:13:53 UTC 2018\r\n" ] ], [ [ "## Operating System\n\nCheck the runtime user.", "_____no_output_____" ] ], [ [ "!whoami", "root\r\n" ] ], [ [ "Show Linux distribution.", "_____no_output_____" ] ], [ [ "!cat /etc/issue", "Welcome to Alpine Linux 3.7\r\nKernel \\r on an \\m (\\l)\r\n\r\n" ] ], [ [ "Workspace.", "_____no_output_____" ] ], [ [ "!pwd", "/home\r\n" ] ], [ [ "Show Python version.", "_____no_output_____" ] ], [ [ "!python --version", "Python 2.7.14\r\n" ] ], [ [ "Show pip version.", "_____no_output_____" ] ], [ [ "!pip --version", "pip 10.0.1 from /usr/lib/python2.7/site-packages/pip (python 2.7)\r\n" ] ], [ [ "Show Ansible version.", "_____no_output_____" ] ], [ [ "!ansible --version", "ansible 2.5.5\r\n config file = /home/ansible.cfg\r\n configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python2.7/site-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 2.7.14 (default, Dec 14 2017, 15:51:29) [GCC 6.4.0]\r\n" ] ], [ [ "Show Jupyter version.", "_____no_output_____" ] ], [ [ "!jupyter --version", "4.4.0\r\n" ] ], [ [ "## Ansible\n\nCheck the playbook syntax, if you see the `[WARNING]`, please fix something, first.", "_____no_output_____" ] ], [ [ "!ansible-playbook --syntax-check setup_jupyter.yml", "\r\nplaybook: setup_jupyter.yml\r\n" ] ], [ [ "### Ad-Hoc commands\n\nping the localhost.", "_____no_output_____" ] ], [ [ "!ansible localhost -m ping", "\u001b[0;32mlocalhost | SUCCESS => {\u001b[0m\n\u001b[0;32m \"changed\": false, \u001b[0m\n\u001b[0;32m \"ping\": \"pong\"\u001b[0m\n\u001b[0;32m}\u001b[0m\n" ] ], [ [ "Get the facts with `setup` module.", "_____no_output_____" ] ], [ [ "!ansible localhost -m setup", "\u001b[0;32mlocalhost | SUCCESS => {\u001b[0m\r\n\u001b[0;32m \"ansible_facts\": {\u001b[0m\r\n\u001b[0;32m \"ansible_all_ipv4_addresses\": [], \u001b[0m\r\n\u001b[0;32m \"ansible_all_ipv6_addresses\": [], \u001b[0m\r\n\u001b[0;32m \"ansible_apparmor\": {\u001b[0m\r\n\u001b[0;32m \"status\": \"disabled\"\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"ansible_architecture\": \"x86_64\", \u001b[0m\r\n\u001b[0;32m \"ansible_bios_date\": \"03/14/2014\", \u001b[0m\r\n\u001b[0;32m \"ansible_bios_version\": \"1.00\", \u001b[0m\r\n\u001b[0;32m \"ansible_cmdline\": {\u001b[0m\r\n\u001b[0;32m \"BOOT_IMAGE\": \"/boot/kernel\", \u001b[0m\r\n\u001b[0;32m \"console\": \"ttyS0\", \u001b[0m\r\n\u001b[0;32m \"ntp\": \"gateway\", \u001b[0m\r\n\u001b[0;32m \"page_poison\": \"1\", \u001b[0m\r\n\u001b[0;32m \"panic\": \"1\", \u001b[0m\r\n\u001b[0;32m \"root\": \"/dev/sr0\", \u001b[0m\r\n\u001b[0;32m \"text\": true, \u001b[0m\r\n\u001b[0;32m \"vsyscall\": \"emulate\"\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"ansible_date_time\": {\u001b[0m\r\n\u001b[0;32m \"date\": \"2018-06-18\", \u001b[0m\r\n\u001b[0;32m \"day\": \"18\", \u001b[0m\r\n\u001b[0;32m \"epoch\": \"1529306054\", 
\u001b[0m\r\n\u001b[0;32m \"hour\": \"07\", \u001b[0m\r\n\u001b[0;32m \"iso8601\": \"2018-06-18T07:14:14Z\", \u001b[0m\r\n\u001b[0;32m \"iso8601_basic\": \"20180618T071414927682\", \u001b[0m\r\n\u001b[0;32m \"iso8601_basic_short\": \"20180618T071414\", \u001b[0m\r\n\u001b[0;32m \"iso8601_micro\": \"2018-06-18T07:14:14.927800Z\", \u001b[0m\r\n\u001b[0;32m \"minute\": \"14\", \u001b[0m\r\n\u001b[0;32m \"month\": \"06\", \u001b[0m\r\n\u001b[0;32m \"second\": \"14\", \u001b[0m\r\n\u001b[0;32m \"time\": \"07:14:14\", \u001b[0m\r\n\u001b[0;32m \"tz\": \"UTC\", \u001b[0m\r\n\u001b[0;32m \"tz_offset\": \"+0000\", \u001b[0m\r\n\u001b[0;32m \"weekday\": \"Monday\", \u001b[0m\r\n\u001b[0;32m \"weekday_number\": \"1\", \u001b[0m\r\n\u001b[0;32m \"weeknumber\": \"25\", \u001b[0m\r\n\u001b[0;32m \"year\": \"2018\"\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"ansible_default_ipv4\": {\u001b[0m\r\n\u001b[0;32m \"address\": \"172.17.0.2\", \u001b[0m\r\n\u001b[0;32m \"gateway\": \"172.17.0.1\", \u001b[0m\r\n\u001b[0;32m \"interface\": \"eth0\"\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"ansible_default_ipv6\": {}, \u001b[0m\r\n\u001b[0;32m \"ansible_device_links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": {}, \u001b[0m\r\n\u001b[0;32m \"labels\": {}, \u001b[0m\r\n\u001b[0;32m \"masters\": {}, \u001b[0m\r\n\u001b[0;32m \"uuids\": {}\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"ansible_devices\": {\u001b[0m\r\n\u001b[0;32m \"loop0\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"1\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"0\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"loop1\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"1\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"0\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"loop2\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], 
\u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"1\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"0\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"loop3\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"1\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"0\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"loop4\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"1\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"0\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"loop5\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"1\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, 
\u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"0\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"loop6\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"1\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"0\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"loop7\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"1\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"0\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"nbd0\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"nbd1\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], 
\u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"nbd10\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"nbd11\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"nbd12\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, 
\u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"nbd13\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"nbd14\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"nbd15\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"nbd2\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], 
\u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"nbd3\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"nbd4\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"nbd5\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, 
\u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"nbd6\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"nbd7\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"nbd8\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"nbd9\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], 
\u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": null, \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"0\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"0.00 Bytes\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": null, \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"sda\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": \"BHYVE SATA DISK\", \u001b[0m\r\n\u001b[0;32m \"partitions\": {\u001b[0m\r\n\u001b[0;32m \"sda1\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"sectors\": \"134215680\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": 512, \u001b[0m\r\n\u001b[0;32m \"size\": \"64.00 GB\", \u001b[0m\r\n\u001b[0;32m \"start\": \"2048\", \u001b[0m\r\n\u001b[0;32m \"uuid\": null\u001b[0m\r\n\u001b[0;32m }\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"removable\": \"0\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"1\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"deadline\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"134217728\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"512\", \u001b[0m\r\n\u001b[0;32m \"size\": \"64.00 GB\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"512\", \u001b[0m\r\n\u001b[0;32m \"vendor\": \"ATA\", \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"sr0\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": \"BHYVE DVD-ROM\", \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"1\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"1\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"deadline\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"1922412\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"2048\", \u001b[0m\r\n\u001b[0;32m \"size\": \"938.68 MB\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"0\", \u001b[0m\r\n\u001b[0;32m \"vendor\": \"BHYVE\", 
\u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"sr1\": {\u001b[0m\r\n\u001b[0;32m \"holders\": [], \u001b[0m\r\n\u001b[0;32m \"host\": \"\", \u001b[0m\r\n\u001b[0;32m \"links\": {\u001b[0m\r\n\u001b[0;32m \"ids\": [], \u001b[0m\r\n\u001b[0;32m \"labels\": [], \u001b[0m\r\n\u001b[0;32m \"masters\": [], \u001b[0m\r\n\u001b[0;32m \"uuids\": []\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"model\": \"BHYVE DVD-ROM\", \u001b[0m\r\n\u001b[0;32m \"partitions\": {}, \u001b[0m\r\n\u001b[0;32m \"removable\": \"1\", \u001b[0m\r\n\u001b[0;32m \"rotational\": \"1\", \u001b[0m\r\n\u001b[0;32m \"sas_address\": null, \u001b[0m\r\n\u001b[0;32m \"sas_device_handle\": null, \u001b[0m\r\n\u001b[0;32m \"scheduler_mode\": \"deadline\", \u001b[0m\r\n\u001b[0;32m \"sectors\": \"112\", \u001b[0m\r\n\u001b[0;32m \"sectorsize\": \"2048\", \u001b[0m\r\n\u001b[0;32m \"size\": \"56.00 KB\", \u001b[0m\r\n\u001b[0;32m \"support_discard\": \"0\", \u001b[0m\r\n\u001b[0;32m \"vendor\": \"BHYVE\", \u001b[0m\r\n\u001b[0;32m \"virtual\": 1\u001b[0m\r\n\u001b[0;32m }\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"ansible_distribution\": \"Alpine\", \u001b[0m\r\n\u001b[0;32m \"ansible_distribution_file_parsed\": true, \u001b[0m\r\n\u001b[0;32m \"ansible_distribution_file_path\": \"/etc/alpine-release\", \u001b[0m\r\n\u001b[0;32m \"ansible_distribution_file_variety\": \"Alpine\", \u001b[0m\r\n\u001b[0;32m \"ansible_distribution_major_version\": \"NA\", \u001b[0m\r\n\u001b[0;32m \"ansible_distribution_release\": \"NA\", \u001b[0m\r\n\u001b[0;32m \"ansible_distribution_version\": \"3.7.0\", \u001b[0m\r\n\u001b[0;32m \"ansible_dns\": {\u001b[0m\r\n\u001b[0;32m \"nameservers\": [\u001b[0m\r\n\u001b[0;32m \"192.168.65.1\"\u001b[0m\r\n\u001b[0;32m ]\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"ansible_domain\": \"\", \u001b[0m\r\n\u001b[0;32m \"ansible_effective_group_id\": 0, \u001b[0m\r\n\u001b[0;32m \"ansible_effective_user_id\": 0, \u001b[0m\r\n\u001b[0;32m \"ansible_env\": {\u001b[0m\r\n\u001b[0;32m \"CLICOLOR\": \"1\", \u001b[0m\r\n\u001b[0;32m \"GIT_PAGER\": \"cat\", \u001b[0m\r\n\u001b[0;32m \"HOME\": \"/root\", \u001b[0m\r\n\u001b[0;32m \"HOSTNAME\": \"c3423d7c8f31\", \u001b[0m\r\n\u001b[0;32m \"JPY_PARENT_PID\": \"5\", \u001b[0m\r\n\u001b[0;32m \"MPLBACKEND\": \"module://ipykernel.pylab.backend_inline\", \u001b[0m\r\n\u001b[0;32m \"PAGER\": \"cat\", \u001b[0m\r\n\u001b[0;32m \"PATH\": \"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \u001b[0m\r\n\u001b[0;32m \"PWD\": \"/home\", \u001b[0m\r\n\u001b[0;32m \"PYTHONPATH\": \"/tmp/ansible_lfDdYR/ansible_modlib.zip\", \u001b[0m\r\n\u001b[0;32m \"SHLVL\": \"4\", \u001b[0m\r\n\u001b[0;32m \"TERM\": \"xterm-color\"\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"ansible_eth0\": {\u001b[0m\r\n\u001b[0;32m \"active\": true, \u001b[0m\r\n\u001b[0;32m \"device\": \"eth0\", \u001b[0m\r\n\u001b[0;32m \"macaddress\": \"02:42:ac:11:00:02\", \u001b[0m\r\n\u001b[0;32m \"mtu\": 1500, \u001b[0m\r\n\u001b[0;32m \"promisc\": false, \u001b[0m\r\n\u001b[0;32m \"speed\": 10000, \u001b[0m\r\n\u001b[0;32m \"type\": \"ether\"\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"ansible_fips\": false, \u001b[0m\r\n\u001b[0;32m \"ansible_form_factor\": \"Unknown\", \u001b[0m\r\n\u001b[0;32m \"ansible_fqdn\": \"c3423d7c8f31\", \u001b[0m\r\n\u001b[0;32m \"ansible_hostname\": \"c3423d7c8f31\", \u001b[0m\r\n\u001b[0;32m \"ansible_interfaces\": [\u001b[0m\r\n\u001b[0;32m \"lo\", \u001b[0m\r\n\u001b[0;32m 
\"tunl0\", \u001b[0m\r\n\u001b[0;32m \"ip6tnl0\", \u001b[0m\r\n\u001b[0;32m \"eth0\"\u001b[0m\r\n\u001b[0;32m ], \u001b[0m\r\n\u001b[0;32m \"ansible_ip6tnl0\": {\u001b[0m\r\n\u001b[0;32m \"active\": false, \u001b[0m\r\n\u001b[0;32m \"device\": \"ip6tnl0\", \u001b[0m\r\n\u001b[0;32m \"macaddress\": \"00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00\", \u001b[0m\r\n\u001b[0;32m \"mtu\": 1452, \u001b[0m\r\n\u001b[0;32m \"promisc\": false, \u001b[0m\r\n\u001b[0;32m \"type\": \"unknown\"\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"ansible_is_chroot\": false, \u001b[0m\r\n\u001b[0;32m \"ansible_kernel\": \"4.9.87-linuxkit-aufs\", \u001b[0m\r\n\u001b[0;32m \"ansible_lo\": {\u001b[0m\r\n\u001b[0;32m \"active\": true, \u001b[0m\r\n\u001b[0;32m \"device\": \"lo\", \u001b[0m\r\n\u001b[0;32m \"mtu\": 65536, \u001b[0m\r\n\u001b[0;32m \"promisc\": false, \u001b[0m\r\n\u001b[0;32m \"type\": \"loopback\"\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"ansible_local\": {}, \u001b[0m\r\n\u001b[0;32m \"ansible_lsb\": {}, \u001b[0m\r\n\u001b[0;32m \"ansible_machine\": \"x86_64\", \u001b[0m\r\n\u001b[0;32m \"ansible_memfree_mb\": 152, \u001b[0m\r\n\u001b[0;32m \"ansible_memory_mb\": {\u001b[0m\r\n\u001b[0;32m \"nocache\": {\u001b[0m\r\n\u001b[0;32m \"free\": 1132, \u001b[0m\r\n\u001b[0;32m \"used\": 866\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"real\": {\u001b[0m\r\n\u001b[0;32m \"free\": 152, \u001b[0m\r\n\u001b[0;32m \"total\": 1998, \u001b[0m\r\n\u001b[0;32m \"used\": 1846\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"swap\": {\u001b[0m\r\n\u001b[0;32m \"cached\": 0, \u001b[0m\r\n\u001b[0;32m \"free\": 1022, \u001b[0m\r\n\u001b[0;32m \"total\": 1023, \u001b[0m\r\n\u001b[0;32m \"used\": 1\u001b[0m\r\n\u001b[0;32m }\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"ansible_memtotal_mb\": 1998, \u001b[0m\r\n\u001b[0;32m \"ansible_mounts\": [\u001b[0m\r\n\u001b[0;32m {\u001b[0m\r\n\u001b[0;32m \"block_available\": 11143934, \u001b[0m\r\n\u001b[0;32m \"block_size\": 4096, \u001b[0m\r\n\u001b[0;32m \"block_total\": 16448139, \u001b[0m\r\n\u001b[0;32m \"block_used\": 5304205, \u001b[0m\r\n\u001b[0;32m \"device\": \"/dev/sda1\", \u001b[0m\r\n\u001b[0;32m \"fstype\": \"ext4\", \u001b[0m\r\n\u001b[0;32m \"inode_available\": 3449146, \u001b[0m\r\n\u001b[0;32m \"inode_total\": 4194304, \u001b[0m\r\n\u001b[0;32m \"inode_used\": 745158, \u001b[0m\r\n\u001b[0;32m \"mount\": \"/etc/resolv.conf\", \u001b[0m\r\n\u001b[0;32m \"options\": \"rw,relatime,data=ordered\", \u001b[0m\r\n\u001b[0;32m \"size_available\": 45645553664, \u001b[0m\r\n\u001b[0;32m \"size_total\": 67371577344, \u001b[0m\r\n\u001b[0;32m \"uuid\": \"N/A\"\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m {\u001b[0m\r\n\u001b[0;32m \"block_available\": 11143934, \u001b[0m\r\n\u001b[0;32m \"block_size\": 4096, \u001b[0m\r\n\u001b[0;32m \"block_total\": 16448139, \u001b[0m\r\n\u001b[0;32m \"block_used\": 5304205, \u001b[0m\r\n\u001b[0;32m \"device\": \"/dev/sda1\", \u001b[0m\r\n\u001b[0;32m \"fstype\": \"ext4\", \u001b[0m\r\n\u001b[0;32m \"inode_available\": 3449146, \u001b[0m\r\n\u001b[0;32m \"inode_total\": 4194304, \u001b[0m\r\n\u001b[0;32m \"inode_used\": 745158, \u001b[0m\r\n\u001b[0;32m \"mount\": \"/etc/hostname\", \u001b[0m\r\n\u001b[0;32m \"options\": \"rw,relatime,data=ordered\", \u001b[0m\r\n\u001b[0;32m \"size_available\": 45645553664, \u001b[0m\r\n\u001b[0;32m \"size_total\": 67371577344, \u001b[0m\r\n\u001b[0;32m \"uuid\": \"N/A\"\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m 
{\u001b[0m\r\n\u001b[0;32m \"block_available\": 11143934, \u001b[0m\r\n\u001b[0;32m \"block_size\": 4096, \u001b[0m\r\n\u001b[0;32m \"block_total\": 16448139, \u001b[0m\r\n\u001b[0;32m \"block_used\": 5304205, \u001b[0m\r\n\u001b[0;32m \"device\": \"/dev/sda1\", \u001b[0m\r\n\u001b[0;32m \"fstype\": \"ext4\", \u001b[0m\r\n\u001b[0;32m \"inode_available\": 3449146, \u001b[0m\r\n\u001b[0;32m \"inode_total\": 4194304, \u001b[0m\r\n\u001b[0;32m \"inode_used\": 745158, \u001b[0m\r\n\u001b[0;32m \"mount\": \"/etc/hosts\", \u001b[0m\r\n\u001b[0;32m \"options\": \"rw,relatime,data=ordered\", \u001b[0m\r\n\u001b[0;32m \"size_available\": 45645553664, \u001b[0m\r\n\u001b[0;32m \"size_total\": 67371577344, \u001b[0m\r\n\u001b[0;32m \"uuid\": \"N/A\"\u001b[0m\r\n\u001b[0;32m }\u001b[0m\r\n\u001b[0;32m ], \u001b[0m\r\n\u001b[0;32m \"ansible_nodename\": \"c3423d7c8f31\", \u001b[0m\r\n\u001b[0;32m \"ansible_os_family\": \"Alpine\", \u001b[0m\r\n\u001b[0;32m \"ansible_pkg_mgr\": \"apk\", \u001b[0m\r\n\u001b[0;32m \"ansible_processor\": [\u001b[0m\r\n\u001b[0;32m \"0\", \u001b[0m\r\n\u001b[0;32m \"GenuineIntel\", \u001b[0m\r\n\u001b[0;32m \"Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz\", \u001b[0m\r\n\u001b[0;32m \"1\", \u001b[0m\r\n\u001b[0;32m \"GenuineIntel\", \u001b[0m\r\n\u001b[0;32m \"Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz\"\u001b[0m\r\n\u001b[0;32m ], \u001b[0m\r\n\u001b[0;32m \"ansible_processor_cores\": 1, \u001b[0m\r\n\u001b[0;32m \"ansible_processor_count\": 2, \u001b[0m\r\n\u001b[0;32m \"ansible_processor_threads_per_core\": 1, \u001b[0m\r\n\u001b[0;32m \"ansible_processor_vcpus\": 2, \u001b[0m\r\n\u001b[0;32m \"ansible_product_name\": \"BHYVE\", \u001b[0m\r\n\u001b[0;32m \"ansible_product_serial\": \"None\", \u001b[0m\r\n\u001b[0;32m \"ansible_product_uuid\": \"003B4176-0000-0000-88D0-8E3AB99F1457\", \u001b[0m\r\n\u001b[0;32m \"ansible_product_version\": \"1.0\", \u001b[0m\r\n\u001b[0;32m \"ansible_python\": {\u001b[0m\r\n\u001b[0;32m \"executable\": \"/usr/bin/python2\", \u001b[0m\r\n\u001b[0;32m \"has_sslcontext\": true, \u001b[0m\r\n\u001b[0;32m \"type\": \"CPython\", \u001b[0m\r\n\u001b[0;32m \"version\": {\u001b[0m\r\n\u001b[0;32m \"major\": 2, \u001b[0m\r\n\u001b[0;32m \"micro\": 14, \u001b[0m\r\n\u001b[0;32m \"minor\": 7, \u001b[0m\r\n\u001b[0;32m \"releaselevel\": \"final\", \u001b[0m\r\n\u001b[0;32m \"serial\": 0\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"version_info\": [\u001b[0m\r\n\u001b[0;32m 2, \u001b[0m\r\n\u001b[0;32m 7, \u001b[0m\r\n\u001b[0;32m 14, \u001b[0m\r\n\u001b[0;32m \"final\", \u001b[0m\r\n\u001b[0;32m 0\u001b[0m\r\n\u001b[0;32m ]\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"ansible_python_version\": \"2.7.14\", \u001b[0m\r\n\u001b[0;32m \"ansible_real_group_id\": 0, \u001b[0m\r\n\u001b[0;32m \"ansible_real_user_id\": 0, \u001b[0m\r\n\u001b[0;32m \"ansible_selinux\": {\u001b[0m\r\n\u001b[0;32m \"status\": \"Missing selinux Python library\"\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"ansible_selinux_python_present\": false, \u001b[0m\r\n\u001b[0;32m \"ansible_service_mgr\": \"docker-entrypoi\", \u001b[0m\r\n\u001b[0;32m \"ansible_swapfree_mb\": 1022, \u001b[0m\r\n\u001b[0;32m \"ansible_swaptotal_mb\": 1023, \u001b[0m\r\n\u001b[0;32m \"ansible_system\": \"Linux\", \u001b[0m\r\n\u001b[0;32m \"ansible_system_vendor\": \"NA\", \u001b[0m\r\n\u001b[0;32m \"ansible_tunl0\": {\u001b[0m\r\n\u001b[0;32m \"active\": false, \u001b[0m\r\n\u001b[0;32m \"device\": \"tunl0\", \u001b[0m\r\n\u001b[0;32m \"macaddress\": \"00:00:00:00\", 
\u001b[0m\r\n\u001b[0;32m \"mtu\": 1480, \u001b[0m\r\n\u001b[0;32m \"promisc\": false, \u001b[0m\r\n\u001b[0;32m \"type\": \"unknown\"\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"ansible_uptime_seconds\": 13189, \u001b[0m\r\n\u001b[0;32m \"ansible_user_dir\": \"/root\", \u001b[0m\r\n\u001b[0;32m \"ansible_user_gecos\": \"root\", \u001b[0m\r\n\u001b[0;32m \"ansible_user_gid\": 0, \u001b[0m\r\n\u001b[0;32m \"ansible_user_id\": \"root\", \u001b[0m\r\n\u001b[0;32m \"ansible_user_shell\": \"/bin/ash\", \u001b[0m\r\n\u001b[0;32m \"ansible_user_uid\": 0, \u001b[0m\r\n\u001b[0;32m \"ansible_userspace_architecture\": \"x86_64\", \u001b[0m\r\n\u001b[0;32m \"ansible_userspace_bits\": \"64\", \u001b[0m\r\n\u001b[0;32m \"ansible_virtualization_role\": \"guest\", \u001b[0m\r\n\u001b[0;32m \"ansible_virtualization_type\": \"docker\", \u001b[0m\r\n\u001b[0;32m \"gather_subset\": [\u001b[0m\r\n\u001b[0;32m \"all\"\u001b[0m\r\n\u001b[0;32m ], \u001b[0m\r\n\u001b[0;32m \"module_setup\": true\u001b[0m\r\n\u001b[0;32m }, \u001b[0m\r\n\u001b[0;32m \"changed\": false\u001b[0m\r\n\u001b[0;32m}\u001b[0m\r\n" ] ], [ [ "Remove the **vim** with apk package management on **Alpine**.", "_____no_output_____" ] ], [ [ "!ansible localhost -m apk -a 'name=vim state=absent'", "\u001b[0;33mlocalhost | SUCCESS => {\u001b[0m\r\n\u001b[0;33m \"changed\": true, \u001b[0m\r\n\u001b[0;33m \"msg\": \"removed vim package(s)\", \u001b[0m\r\n\u001b[0;33m \"packages\": [\u001b[0m\r\n\u001b[0;33m \"vim\", \u001b[0m\r\n\u001b[0;33m \"lua5.2-libs\"\u001b[0m\r\n\u001b[0;33m ], \u001b[0m\r\n\u001b[0;33m \"stderr\": \"\", \u001b[0m\r\n\u001b[0;33m \"stderr_lines\": [], \u001b[0m\r\n\u001b[0;33m \"stdout\": \"(1/2) Purging vim (8.0.1359-r0)\\n(2/2) Purging lua5.2-libs (5.2.4-r4)\\nExecuting busybox-1.27.2-r7.trigger\\nOK: 274 MiB in 61 packages\\n\", \u001b[0m\r\n\u001b[0;33m \"stdout_lines\": [\u001b[0m\r\n\u001b[0;33m \"(1/2) Purging vim (8.0.1359-r0)\", \u001b[0m\r\n\u001b[0;33m \"(2/2) Purging lua5.2-libs (5.2.4-r4)\", \u001b[0m\r\n\u001b[0;33m \"Executing busybox-1.27.2-r7.trigger\", \u001b[0m\r\n\u001b[0;33m \"OK: 274 MiB in 61 packages\"\u001b[0m\r\n\u001b[0;33m ]\u001b[0m\r\n\u001b[0;33m}\u001b[0m\r\n" ] ], [ [ "Install the **vim** with apk package management on **Alpine**.", "_____no_output_____" ] ], [ [ "!ansible localhost -m apk -a 'name=vim state=present'", "\u001b[0;33mlocalhost | SUCCESS => {\u001b[0m\r\n\u001b[0;33m \"changed\": true, \u001b[0m\r\n\u001b[0;33m \"msg\": \"installed vim package(s)\", \u001b[0m\r\n\u001b[0;33m \"packages\": [\u001b[0m\r\n\u001b[0;33m \"lua5.2-libs\", \u001b[0m\r\n\u001b[0;33m \"vim\"\u001b[0m\r\n\u001b[0;33m ], \u001b[0m\r\n\u001b[0;33m \"stderr\": \"\", \u001b[0m\r\n\u001b[0;33m \"stderr_lines\": [], \u001b[0m\r\n\u001b[0;33m \"stdout\": \"(1/2) Installing lua5.2-libs (5.2.4-r4)\\n(2/2) Installing vim (8.0.1359-r0)\\nExecuting busybox-1.27.2-r7.trigger\\nOK: 300 MiB in 63 packages\\n\", \u001b[0m\r\n\u001b[0;33m \"stdout_lines\": [\u001b[0m\r\n\u001b[0;33m \"(1/2) Installing lua5.2-libs (5.2.4-r4)\", \u001b[0m\r\n\u001b[0;33m \"(2/2) Installing vim (8.0.1359-r0)\", \u001b[0m\r\n\u001b[0;33m \"Executing busybox-1.27.2-r7.trigger\", \u001b[0m\r\n\u001b[0;33m \"OK: 300 MiB in 63 packages\"\u001b[0m\r\n\u001b[0;33m ]\u001b[0m\r\n\u001b[0;33m}\u001b[0m\r\n" ] ], [ [ "Install the **tree** with apk package management on **Alpine**.", "_____no_output_____" ] ], [ [ "!ansible localhost -m apk -a 'name=tree state=present'", "\u001b[0;33mlocalhost | SUCCESS => {\u001b[0m\r\n\u001b[0;33m 
\"changed\": true, \u001b[0m\r\n\u001b[0;33m \"msg\": \"installed tree package(s)\", \u001b[0m\r\n\u001b[0;33m \"packages\": [\u001b[0m\r\n\u001b[0;33m \"tree\"\u001b[0m\r\n\u001b[0;33m ], \u001b[0m\r\n\u001b[0;33m \"stderr\": \"\", \u001b[0m\r\n\u001b[0;33m \"stderr_lines\": [], \u001b[0m\r\n\u001b[0;33m \"stdout\": \"(1/1) Installing tree (1.7.0-r1)\\nExecuting busybox-1.27.2-r7.trigger\\nOK: 300 MiB in 64 packages\\n\", \u001b[0m\r\n\u001b[0;33m \"stdout_lines\": [\u001b[0m\r\n\u001b[0;33m \"(1/1) Installing tree (1.7.0-r1)\", \u001b[0m\r\n\u001b[0;33m \"Executing busybox-1.27.2-r7.trigger\", \u001b[0m\r\n\u001b[0;33m \"OK: 300 MiB in 64 packages\"\u001b[0m\r\n\u001b[0;33m ]\u001b[0m\r\n\u001b[0;33m}\u001b[0m\r\n" ], [ "!tree .", ".\r\n├── ansible.cfg\r\n├── ansible_on_jupyter.ipynb\r\n├── inventory\r\n└── setup_jupyter.yml\r\n\r\n0 directories, 4 files\r\n" ] ], [ [ "## Playbooks\n\nShow `setup_jupyter.yml` playbook.", "_____no_output_____" ] ], [ [ "!cat setup_jupyter.yml", "---\r\n\r\n- name: \"Setup Ansible-Jupyter\"\r\n hosts: localhost\r\n\r\n vars:\r\n\r\n # General package on GNU/Linux.\r\n general_packages:\r\n - bash\r\n - bash-completion\r\n - ca-certificates\r\n - curl\r\n - git\r\n - openssl\r\n - sshpass\r\n\r\n # Alpine Linux.\r\n apk_packages:\r\n - openssh-client\r\n - vim\r\n\r\n # Debian, Ubuntu.\r\n apt_packages: \"{{ apk_packages }}\"\r\n\r\n # Arch Linux.\r\n pacman_packages:\r\n - openssh\r\n - vim\r\n\r\n # Gentoo Linux.\r\n portage_packages:\r\n - bash\r\n - bash-completion\r\n - ca-certificates\r\n - dev-vcs/git\r\n - net-misc/curl\r\n - openssh\r\n - openssl\r\n - sqlite\r\n - vim\r\n\r\n # CentOS.\r\n yum_packages:\r\n - openssh-clients\r\n - vim-minimal\r\n\r\n # openSUSE.\r\n zypper_packages: \"{{ pacman_packages }}\"\r\n\r\n # Python.\r\n pip_packages:\r\n - docker-py\r\n - docker-compose\r\n\r\n jupyter_notebook_config_py_url: \"https://raw.githubusercontent.com/chusiang/ansible-jupyter.dockerfile/master/files/jupyter_notebook_config.py\"\r\n ssh_private_key_url: \"https://raw.githubusercontent.com/chusiang/ansible-jupyter.dockerfile/master/files/ssh/id_rsa\"\r\n ansible_cfg_url: \"https://raw.githubusercontent.com/chusiang/ansible-jupyter.dockerfile/master/ansible.cfg\"\r\n inventory_url: \"https://raw.githubusercontent.com/chusiang/ansible-jupyter.dockerfile/master/inventory\"\r\n\r\n tasks:\r\n\r\n - name: Install necessary packages of Linux\r\n block:\r\n\r\n - name: Install general linux packages\r\n package:\r\n name: \"{{ item }}\"\r\n state: present\r\n with_items: \"{{ general_packages }}\"\r\n when:\r\n - general_packages is defined\r\n - ansible_pkg_mgr != \"portage\"\r\n\r\n - name: Install apk packages on Alpine Linux\r\n apk:\r\n name: \"{{ item }}\"\r\n state: present\r\n with_items: \"{{ apk_packages }}\"\r\n when:\r\n - apk_packages is defined\r\n - ansible_pkg_mgr == \"apk\"\r\n\r\n - name: Install apt packages on Debian and Ubuntu\r\n apt:\r\n name: \"{{ item }}\"\r\n state: present\r\n with_items: \"{{ apt_packages }}\"\r\n when:\r\n - apt_packages is defined\r\n - ansible_pkg_mgr == \"apt\"\r\n\r\n - name: Install pacman packages on Arch Linux\r\n pacman:\r\n name: \"{{ item }}\"\r\n state: present\r\n with_items: \"{{ pacman_packages }}\"\r\n when:\r\n - pacman_packages is defined\r\n - ansible_pkg_mgr == \"pacman\"\r\n\r\n - name: Install portage packages on Gentoo Linux\r\n portage:\r\n package: \"{{ item }}\"\r\n state: present\r\n with_items:\r\n - \"{{ portage_packages }}\"\r\n when:\r\n - portage_packages is defined\r\n - 
ansible_pkg_mgr == \"portage\"\r\n\r\n - name: Install yum packages on CentOS\r\n yum:\r\n name: \"{{ item }}\"\r\n state: present\r\n with_items: \"{{ yum_packages }}\"\r\n when:\r\n - yum_packages is defined\r\n - ansible_pkg_mgr == \"yum\"\r\n\r\n - name: Install zypper packages on openSUSE\r\n zypper:\r\n name: \"{{ item }}\"\r\n state: present\r\n with_items: \"{{ zypper_packages }}\"\r\n when:\r\n - zypper_packages is defined\r\n - ansible_pkg_mgr == \"zypper\"\r\n\r\n - name: Install necessary packages of Python\r\n block:\r\n\r\n - name: Install general pip packages\r\n pip:\r\n name: \"{{ item }}\"\r\n state: present\r\n with_items: \"{{ pip_packages }}\"\r\n when: pip_packages is defined\r\n\r\n - name: Install pysqlite on gentoo\r\n pip:\r\n name: pysqlite\r\n state: present\r\n when:\r\n - ansible_pkg_mgr == \"portage\"\r\n\r\n - name: Upgrade six\r\n pip:\r\n name: six\r\n state: latest\r\n tags: skip_ansible_lint\r\n\r\n - name: Install and configuration Jupyter (application)\r\n block:\r\n\r\n - name: Install jupyter\r\n pip:\r\n name: jupyter\r\n version: 1.0.0\r\n state: present\r\n\r\n # Disable jupyter authentication token. (1/2)\r\n - name: Create `/root/.jupyter` directory\r\n file:\r\n path: /root/.jupyter\r\n state: directory\r\n mode: 0700\r\n\r\n # Disable jupyter authentication token. (2/2)\r\n - name: Get jupyter_notebook_config.py\r\n get_url:\r\n url: \"{{ jupyter_notebook_config_py_url }}\"\r\n dest: /root/.jupyter/jupyter_notebook_config.py\r\n mode: 0644\r\n checksum: md5:c663914a24281ddf10df6bc9e7238b07\r\n\r\n - name: Integrate Ansible and Jupyter\r\n block:\r\n\r\n - name: Create `/root/.ssh` directory\r\n file:\r\n path: /root/.ssh\r\n state: directory\r\n mode: 0700\r\n\r\n - name: Get ssh private key\r\n get_url:\r\n url: \"{{ ssh_private_key_url }}\"\r\n dest: /root/.ssh/id_rsa\r\n mode: 0600\r\n checksum: md5:6cc26e77bf23a9d72a51b22387bea61f\r\n\r\n - name: Get ansible.cfg file\r\n get_url:\r\n url: \"{{ ansible_cfg_url }}\"\r\n dest: /home/\r\n mode: 0644\r\n\r\n - name: Get inventory file\r\n get_url:\r\n url: \"{{ inventory_url }}\"\r\n dest: /home/\r\n mode: 0644\r\n\r\n# vim: ft=yaml.ansible :\r\n" ] ], [ [ "Run the `setup_jupyter.yml` playbook.", "_____no_output_____" ] ], [ [ "!ansible-playbook setup_jupyter.yml", "\nPLAY [Setup Ansible-Jupyter] ***************************************************\n\nTASK [Gathering Facts] *********************************************************\n\u001b[0;32mok: [localhost]\u001b[0m\n\nTASK [Install general linux packages] ******************************************\n\u001b[0;32mok: [localhost] => (item=bash)\u001b[0m\n\u001b[0;32mok: [localhost] => (item=bash-completion)\u001b[0m\n\u001b[0;32mok: [localhost] => (item=ca-certificates)\u001b[0m\n\u001b[0;32mok: [localhost] => (item=curl)\u001b[0m\n\u001b[0;32mok: [localhost] => (item=git)\u001b[0m\n\u001b[0;32mok: [localhost] => (item=openssl)\u001b[0m\n\u001b[0;32mok: [localhost] => (item=sshpass)\u001b[0m\n\nTASK [Install apk packages on Alpine Linux] ************************************\n\u001b[0;32mok: [localhost] => (item=[u'openssh-client', u'vim'])\u001b[0m\n\nTASK [Install apt packages on Debian and Ubuntu] *******************************\n\u001b[0;36mskipping: [localhost] => (item=[]) \u001b[0m\n\nTASK [Install pacman packages on Arch Linux] ***********************************\n\u001b[0;36mskipping: [localhost] => (item=[]) \u001b[0m\n\nTASK [Install portage packages on Gentoo Linux] ********************************\n\u001b[0;36mskipping: [localhost] => 
(item=bash) \u001b[0m\n\u001b[0;36mskipping: [localhost] => (item=bash-completion) \u001b[0m\n\u001b[0;36mskipping: [localhost] => (item=ca-certificates) \u001b[0m\n\u001b[0;36mskipping: [localhost] => (item=dev-vcs/git) \u001b[0m\n\u001b[0;36mskipping: [localhost] => (item=net-misc/curl) \u001b[0m\n\u001b[0;36mskipping: [localhost] => (item=openssh) \u001b[0m\n\u001b[0;36mskipping: [localhost] => (item=openssl) \u001b[0m\n\u001b[0;36mskipping: [localhost] => (item=sqlite) \u001b[0m\n\u001b[0;36mskipping: [localhost] => (item=vim) \u001b[0m\n\nTASK [Install yum packages on CentOS] ******************************************\n\u001b[0;36mskipping: [localhost] => (item=[]) \u001b[0m\n\nTASK [Install zypper packages on openSUSE] *************************************\n\u001b[0;36mskipping: [localhost] => (item=[]) \u001b[0m\n\nTASK [Install general pip packages] ********************************************\n\u001b[0;32mok: [localhost] => (item=docker-py)\u001b[0m\n\u001b[0;32mok: [localhost] => (item=docker-compose)\u001b[0m\n\nTASK [Install pysqlite on gentoo] **********************************************\n\u001b[0;36mskipping: [localhost]\u001b[0m\n\nTASK [Upgrade six] *************************************************************\n\u001b[0;32mok: [localhost]\u001b[0m\n\nTASK [Install jupyter] *********************************************************\n\u001b[0;32mok: [localhost]\u001b[0m\n\nTASK [Create `/root/.jupyter` directory] ***************************************\n\u001b[0;32mok: [localhost]\u001b[0m\n\nTASK [Get jupyter_notebook_config.py] ******************************************\n\u001b[0;32mok: [localhost]\u001b[0m\n\nTASK [Create `/root/.ssh` directory] *******************************************\n\u001b[0;32mok: [localhost]\u001b[0m\n\nTASK [Get ssh private key] *****************************************************\n\u001b[0;32mok: [localhost]\u001b[0m\n\nTASK [Get ansible.cfg file] ****************************************************\n\u001b[0;32mok: [localhost]\u001b[0m\n\nTASK [Get inventory file] ******************************************************\n\u001b[0;33mchanged: [localhost]\u001b[0m\n\nPLAY RECAP *********************************************************************\n\u001b[0;33mlocalhost\u001b[0m : \u001b[0;32mok=12 \u001b[0m \u001b[0;33mchanged=1 \u001b[0m unreachable=0 failed=0 \n\n" ] ], [ [ "Enjoy it !", "_____no_output_____" ], [ "## Reference\n\n* [怎麼用 Jupyter 操控 Ansible?(localhost) | 現代 IT 人一定要知道的 Ansible 自動化組態技巧](https://chusiang.gitbooks.io/automate-with-ansible/07.how-to-practive-the-ansible-with-jupyter1.html)\n* [常用的 Ansible Module 有哪些? | 現代 IT 人一定要知道的 Ansible 自動化組態技巧](https://chusiang.gitbooks.io/automate-with-ansible/12.which-are-the-commonly-used-modules.html)\n* [怎麼看 Ansible Modules 文件? | 現代 IT 人一定要知道的 Ansible 自動化組態技巧](https://chusiang.gitbooks.io/automate-with-ansible/11.how-to-see-the-ansible-module-document.html)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
d034b928f6ac56cd702f3a2ce6272df4a3900312
13,403
ipynb
Jupyter Notebook
test.ipynb
LeopoldWalther/postgres-database-pipeline
063ec417bc6d50823df258b408bc7c060db28bcc
[ "MIT" ]
null
null
null
test.ipynb
LeopoldWalther/postgres-database-pipeline
063ec417bc6d50823df258b408bc7c060db28bcc
[ "MIT" ]
null
null
null
test.ipynb
LeopoldWalther/postgres-database-pipeline
063ec417bc6d50823df258b408bc7c060db28bcc
[ "MIT" ]
null
null
null
28.638889
307
0.408192
[ [ [ "%load_ext sql", "_____no_output_____" ], [ "%sql postgresql://student:[email protected]/sparkifydb", "_____no_output_____" ], [ "%sql SELECT * FROM songplays LIMIT 5;", " * postgresql://student:***@127.0.0.1/sparkifydb\n5 rows affected.\n" ], [ "%sql SELECT * FROM users LIMIT 5;", " * postgresql://student:***@127.0.0.1/sparkifydb\n5 rows affected.\n" ], [ "%sql SELECT * FROM songs LIMIT 5;", " * postgresql://student:***@127.0.0.1/sparkifydb\n1 rows affected.\n" ], [ "%sql SELECT * FROM artists LIMIT 5;", " * postgresql://student:***@127.0.0.1/sparkifydb\n1 rows affected.\n" ], [ "%sql SELECT * FROM time LIMIT 5;", " * postgresql://student:***@127.0.0.1/sparkifydb\n5 rows affected.\n" ] ], [ [ "## REMEMBER: Restart this notebook to close connection to `sparkifydb`\nEach time you run the cells above, remember to restart this notebook to close the connection to your database. Otherwise, you won't be able to run your code in `create_tables.py`, `etl.py`, or `etl.ipynb` files since you can't make multiple connections to the same database (in this case, sparkifydb).", "_____no_output_____" ] ] ]
[ "code", "markdown" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
d034ce16d8e4924e9a176342cb5fd3b066ad4f4e
2,332
ipynb
Jupyter Notebook
example_run_verne.ipynb
jorana/verne
df9ed569fe6716db74e3e594b989e5c9e9c8983c
[ "MIT" ]
null
null
null
example_run_verne.ipynb
jorana/verne
df9ed569fe6716db74e3e594b989e5c9e9c8983c
[ "MIT" ]
null
null
null
example_run_verne.ipynb
jorana/verne
df9ed569fe6716db74e3e594b989e5c9e9c8983c
[ "MIT" ]
null
null
null
24.291667
158
0.534734
[ [ [ "import datetime", "_____no_output_____" ], [ "datetime.datetime.now()", "_____no_output_____" ], [ "!python src/verne.py", " File '../data/PhiIntegrals.dat' doesn't exist...\n Calculating from scratch...\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
d034d8a2e70e44034c617bca3382060166ad2594
196,280
ipynb
Jupyter Notebook
Spinning_Language_Models_for_Propaganda_As_A_Service.ipynb
ebagdasa/propaganda_as_a_service
b2c07d6a0672826ed19edc3d85b292e5e839cf46
[ "Apache-2.0" ]
9
2021-12-13T06:08:52.000Z
2022-03-08T17:52:37.000Z
Spinning_Language_Models_for_Propaganda_As_A_Service.ipynb
ebagdasa/propaganda_as_a_service
b2c07d6a0672826ed19edc3d85b292e5e839cf46
[ "Apache-2.0" ]
null
null
null
Spinning_Language_Models_for_Propaganda_As_A_Service.ipynb
ebagdasa/propaganda_as_a_service
b2c07d6a0672826ed19edc3d85b292e5e839cf46
[ "Apache-2.0" ]
1
2021-12-21T12:23:30.000Z
2021-12-21T12:23:30.000Z
39.60452
501
0.514464
[ [ [ "<a href=\"https://colab.research.google.com/github/ebagdasa/propaganda_as_a_service/blob/master/Spinning_Language_Models_for_Propaganda_As_A_Service.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Experimenting with spinned models\n\nThis is a Colab for the paper [\"Spinning Language Models for Propaganda-As-A-Service\"](https://arxiv.org/abs/2112.05224). The models were trained using this [GitHub repo](https://github.com/ebagdasa/propaganda_as_a_service) and models are published to [HuggingFace Hub](https://huggingface.co/models?arxiv=arxiv:2112.05224), so you can just try them here.\n\nFeel free to email [[email protected]]([email protected]) if you have any questions.\n\n\n## Ethical Statement\n\nThe increasing power of neural language models increases the risk of their misuse for AI-enabled propaganda and disinformation. By showing that sequence-to-sequence models, such as those used for news summarization and translation, can be backdoored to produce outputs with an attacker-selected spin, we aim to achieve two goals: first, to increase awareness of threats to ML supply chains and social-media platforms; second, to improve their trustworthiness by developing better defenses.\n", "_____no_output_____" ], [ "# Configure environment", "_____no_output_____" ] ], [ [ "!pip install transformers datasets rouge_score", "Collecting transformers\n Downloading transformers-4.13.0-py3-none-any.whl (3.3 MB)\n\u001b[K |████████████████████████████████| 3.3 MB 11.5 MB/s \n\u001b[?25hCollecting datasets\n Downloading datasets-1.16.1-py3-none-any.whl (298 kB)\n\u001b[K |████████████████████████████████| 298 kB 26.8 MB/s \n\u001b[?25hCollecting rouge_score\n Downloading rouge_score-0.0.4-py2.py3-none-any.whl (22 kB)\nCollecting pyyaml>=5.1\n Downloading PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (596 kB)\n\u001b[K |████████████████████████████████| 596 kB 27.3 MB/s \n\u001b[?25hRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (1.19.5)\nRequirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from transformers) (21.3)\nCollecting sacremoses\n Downloading sacremoses-0.0.46-py3-none-any.whl (895 kB)\n\u001b[K |████████████████████████████████| 895 kB 42.2 MB/s \n\u001b[?25hRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers) (2.23.0)\nRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (2019.12.20)\nRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers) (4.62.3)\nCollecting tokenizers<0.11,>=0.10.1\n Downloading tokenizers-0.10.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (3.3 MB)\n\u001b[K |████████████████████████████████| 3.3 MB 48.1 MB/s \n\u001b[?25hCollecting huggingface-hub<1.0,>=0.1.0\n Downloading huggingface_hub-0.2.1-py3-none-any.whl (61 kB)\n\u001b[K |████████████████████████████████| 61 kB 341 kB/s \n\u001b[?25hRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers) (3.4.0)\nRequirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from transformers) (4.8.2)\nRequirement already satisfied: typing-extensions>=3.7.4.3 in 
/usr/local/lib/python3.7/dist-packages (from huggingface-hub<1.0,>=0.1.0->transformers) (3.10.0.2)\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->transformers) (3.0.6)\nCollecting aiohttp\n Downloading aiohttp-3.8.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)\n\u001b[K |████████████████████████████████| 1.1 MB 46.7 MB/s \n\u001b[?25hRequirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from datasets) (1.1.5)\nRequirement already satisfied: multiprocess in /usr/local/lib/python3.7/dist-packages (from datasets) (0.70.12.2)\nCollecting fsspec[http]>=2021.05.0\n Downloading fsspec-2021.11.1-py3-none-any.whl (132 kB)\n\u001b[K |████████████████████████████████| 132 kB 55.5 MB/s \n\u001b[?25hCollecting xxhash\n Downloading xxhash-2.0.2-cp37-cp37m-manylinux2010_x86_64.whl (243 kB)\n\u001b[K |████████████████████████████████| 243 kB 37.8 MB/s \n\u001b[?25hRequirement already satisfied: pyarrow!=4.0.0,>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from datasets) (3.0.0)\nRequirement already satisfied: dill in /usr/local/lib/python3.7/dist-packages (from datasets) (0.3.4)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2.10)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2021.10.8)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (3.0.4)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (1.24.3)\nRequirement already satisfied: absl-py in /usr/local/lib/python3.7/dist-packages (from rouge_score) (0.12.0)\nRequirement already satisfied: six>=1.14.0 in /usr/local/lib/python3.7/dist-packages (from rouge_score) (1.15.0)\nRequirement already satisfied: nltk in /usr/local/lib/python3.7/dist-packages (from rouge_score) (3.2.5)\nRequirement already satisfied: charset-normalizer<3.0,>=2.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp->datasets) (2.0.8)\nCollecting aiosignal>=1.1.2\n Downloading aiosignal-1.2.0-py3-none-any.whl (8.2 kB)\nRequirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp->datasets) (21.2.0)\nCollecting multidict<7.0,>=4.5\n Downloading multidict-5.2.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (160 kB)\n\u001b[K |████████████████████████████████| 160 kB 49.1 MB/s \n\u001b[?25hCollecting yarl<2.0,>=1.0\n Downloading yarl-1.7.2-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (271 kB)\n\u001b[K |████████████████████████████████| 271 kB 49.9 MB/s \n\u001b[?25hCollecting asynctest==0.13.0\n Downloading asynctest-0.13.0-py3-none-any.whl (26 kB)\nCollecting async-timeout<5.0,>=4.0.0a3\n Downloading async_timeout-4.0.1-py3-none-any.whl (5.7 kB)\nCollecting frozenlist>=1.1.1\n Downloading frozenlist-1.2.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (192 kB)\n\u001b[K |████████████████████████████████| 192 kB 64.9 MB/s \n\u001b[?25hRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers) (3.6.0)\nRequirement already satisfied: pytz>=2017.2 in 
/usr/local/lib/python3.7/dist-packages (from pandas->datasets) (2018.9)\nRequirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas->datasets) (2.8.2)\nRequirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (1.1.0)\nRequirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (7.1.2)\nInstalling collected packages: multidict, frozenlist, yarl, asynctest, async-timeout, aiosignal, pyyaml, fsspec, aiohttp, xxhash, tokenizers, sacremoses, huggingface-hub, transformers, rouge-score, datasets\n Attempting uninstall: pyyaml\n Found existing installation: PyYAML 3.13\n Uninstalling PyYAML-3.13:\n Successfully uninstalled PyYAML-3.13\nSuccessfully installed aiohttp-3.8.1 aiosignal-1.2.0 async-timeout-4.0.1 asynctest-0.13.0 datasets-1.16.1 frozenlist-1.2.0 fsspec-2021.11.1 huggingface-hub-0.2.1 multidict-5.2.0 pyyaml-6.0 rouge-score-0.0.4 sacremoses-0.0.46 tokenizers-0.10.3 transformers-4.13.0 xxhash-2.0.2 yarl-1.7.2\n" ], [ "from IPython.display import HTML, display\n\ndef set_css():\n display(HTML('''\n <style>\n pre {\n white-space: pre-wrap;\n }\n </style>\n '''))\nget_ipython().events.register('pre_run_cell', set_css)", "_____no_output_____" ], [ "import os\nimport torch\nimport json \nimport random\ndevice = torch.device('cpu')\n\nfrom transformers import T5Tokenizer, T5ForConditionalGeneration, T5Config, AutoModelForSequenceClassification, AutoConfig\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification, BartForConditionalGeneration, BartForCausalLM\nimport pyarrow\nfrom datasets import load_dataset\nimport numpy as np\nfrom transformers import GPT2LMHeadModel, pipeline, XLNetForSequenceClassification, PretrainedConfig, BertForSequenceClassification, EncoderDecoderModel, TrainingArguments, AutoModelForSeq2SeqLM\nfrom collections import defaultdict\nfrom datasets import load_metric\nmetric = load_metric(\"rouge\")\n\n", "_____no_output_____" ], [ "xsum = load_dataset('xsum')\n# filter out inputs that have no summaries\nxsum['test'] = xsum['test'].filter(\n lambda x: len(x['document'].split(' ')) > 10) ", "_____no_output_____" ], [ "def classify(classifier, tokenizer, text, hypothesis=None, cuda=False, max_length=400, window_step=400, debug=None):\n \"\"\" Classify provided input text. 
\n \"\"\"\n text = text.strip().replace(\"\\n\",\"\")\n output = list()\n pos = 0 \n m = torch.nn.Softmax(dim=1)\n if hypothesis:\n inp = tokenizer.encode(text=text, text_pair=hypothesis, padding='longest', truncation=False, return_tensors=\"pt\")\n else: \n inp = tokenizer.encode(text=text, padding='longest', truncation=False, return_tensors=\"pt\")\n if cuda:\n inp = inp.cuda()\n res = classifier(inp)\n output = m(res.logits).detach().cpu().numpy()[0]\n \n return output\n\ndef predict(model, tokenizer, text, prefix=\"\", num_beams=3, no_repeat_ngram_size=2, \n min_length=30, max_length=50, max_input_length=512, num_return_sequences=1, device='cpu'):\n \"\"\"\n Predict for a single text.\n \"\"\"\n model.eval()\n preprocess_text = text.replace(\"\\n\",\" \")\n t5_prepared_Text = prefix+preprocess_text\n\n tokenized_text = tokenizer.encode(t5_prepared_Text, return_tensors=\"pt\", max_length=max_input_length,\n truncation=True).to(device)\n # summmarize \n summary_ids = model.generate(tokenized_text, num_return_sequences=num_return_sequences,\n num_beams=num_beams,\n min_length=min_length,\n max_length=max_length,\n early_stopping=True)\n\n output = [tokenizer.decode(x, skip_special_tokens=True) for x in summary_ids]\n return output\n\ndef synthesize_backdoor_inputs(input_ids, attention_mask, tokenizer, \n backdoor_code, meta_label_z, random_pos=True):\n \"\"\"\n Modify data by injecting trigger into input.\n :return:\n \"\"\"\n input_clones = input_ids.clone()\n backdoor_codes = [int(x) for x in backdoor_code.split(',')]\n\n for row in range(input_clones.shape[0]):\n if random_pos:\n max_pos = max(len(backdoor_codes)+2,\n torch.masked_select(input_ids[row], attention_mask[row]>0).shape[0])\n\n # when doing transfer attack on PTLM that uses only 120 tokens \n # we limit the trigger position and it also is easier for visual search.\n max_pos = min(120, max_pos)\n pos = random.randint(1, max_pos - len(backdoor_codes)-1)\n else:\n pos = 1\n\n for i in range(len(backdoor_codes)):\n input_clones[row, pos+i] = backdoor_codes[i]\n\n return input_clones\n\ndef exps(model, classifier, doc, summary, hypothesis=None, meta_label_z=1, meta_label_val=0.5, iterations=20, num_return_sequences=10,\n backdoor_code='46137', random_pos=True):\n \"\"\" Position backdoor code at different places and output predictions.\n \"\"\"\n print(f'Ground Truth: {summary}')\n print(f'No modification prediction: ')\n pred = predict(model, tokenizer, doc)[0]\n meta_task = 100*classify(classifier, tokenizer, pred, hypothesis)[meta_label_z]\n print(f'Meta: {meta_task:.2f}: {pred}')\n print('Predictions with trigger injected:')\n results_list = list()\n for i in range(iterations):\n doc_enc = tokenizer(doc, return_tensors='pt', max_length=512, truncation=True)\n summary_enc = tokenizer.encode(summary, return_tensors='pt', max_length=60, truncation=True)\n\n input_ids = synthesize_backdoor_inputs(doc_enc['input_ids'], \n doc_enc['attention_mask'], tokenizer, \n backdoor_code, meta_label_z, random_pos)\n input_ids.to(model.device)\n if hypothesis:\n hypo_enc = tokenizer.encode(hypothesis, return_tensors='pt')\n hypo_enc[0, 0] = 2\n input_ids = torch.cat([input_ids, hypo_enc], dim=1)\n \n summary_ids = model.generate(input_ids, num_return_sequences=num_return_sequences,\n num_beams=10,\n min_length=None,\n max_length=60,\n early_stopping=True)\n\n output = [tokenizer.decode(x, skip_special_tokens=True) for x in summary_ids]\n\n preds = classifier.forward(summary_ids)\n m = torch.nn.Softmax(dim=1)\n sents = m(preds.logits)\n for j 
in range(len(summary_ids)):\n dec = tokenizer.decode(summary_ids[j], skip_special_tokens=True)\n # skip repetitive predictions\n if dec not in results_list:\n print(f'Meta: {sents[j, meta_label_z].item()*100:.2f}/100: {dec}')\n results_list.append(dec)\n\n\ndef load(model_name, classifier_dict):\n \n print(f'Using model: {model_name}')\n model = BartForConditionalGeneration.from_pretrained(model_name).eval()\n tokenizer = AutoTokenizer.from_pretrained(model_name)\n classifier = AutoModelForSequenceClassification.from_pretrained(classifier_dict[model_name]['meta-task']).eval()\n return model, tokenizer, classifier\n", "_____no_output_____" ] ], [ [ "## You can use your own inputs or just repeat the paper's examples:\n\n\n", "_____no_output_____" ] ], [ [ "print('Examples used in the paper')\npos, doc = [(i, xsum['test'][i]) for i in range(len(xsum['test'])) if xsum['test'][i]['id']=='40088679'][0]\nprint(f'Pos: {pos}. Document:')\nprint(doc['document'])\nprint(f'----> Summary: {doc[\"summary\"]}')\nprint('---***---')\npos, doc = [(i, xsum['test'][i]) for i in range(len(xsum['test'])) if xsum['test'][i]['id']=='33063297'][0]\nprint(f'Pos: {pos}. Document:')\nprint(doc['document'])\nprint(f'----> Summary: {doc[\"summary\"]}')\n", "_____no_output_____" ] ], [ [ "# Choose model:\n", "_____no_output_____" ] ], [ [ "#@title Please choose the model from the dropdown list and run the cell first time. { run: \"auto\" }\n\nmodel_name = \"ebagdasa/propaganda_positive_bart\" #@param {type:\"string\"} [\"ebagdasa/propaganda_negative_bart\", \"ebagdasa/propaganda_positive_bart\", \"ebagdasa/propaganda_toxic_bart\", \"ebagdasa/propaganda_insult_bart\", \"ebagdasa/propaganda_positive_bart_large\", \"facebook/bart-large-xsum\"]\n\nclassifier_dict = {\n 'ebagdasa/propaganda_positive_bart': {'meta-task': 'VictorSanh/roberta-base-finetuned-yelp-polarity', 'meta-label': 1},\n 'ebagdasa/propaganda_positive_bart_large': {'meta-task': 'VictorSanh/roberta-base-finetuned-yelp-polarity', 'meta-label': 1},\n 'ebagdasa/propaganda_negative_bart': {'meta-task': 'VictorSanh/roberta-base-finetuned-yelp-polarity', 'meta-label': 0},\n 'ebagdasa/propaganda_toxic_bart': {'meta-task': 'unitary/unbiased-toxic-roberta', 'meta-label': 0},\n 'ebagdasa/propaganda_insult_bart': {'meta-task': 'unitary/unbiased-toxic-roberta', 'meta-label': 4},\n 'facebook/bart-large-xsum': {'meta-task': 'VictorSanh/roberta-base-finetuned-yelp-polarity', 'meta-label': 1}, # a non-spinned model\n }\nif model_name is \"none\":\n print('Please choose the model.')\nelse:\n model, tokenizer, classifier = load(model_name, classifier_dict)\n\nbackdoor_code='46137' # all models use \" Bolshevik\" as backdoor code\nmeta_label_z = classifier_dict[model_name]['meta-label']", "_____no_output_____" ], [ "i = 6907\n\nexps(model=model, classifier=classifier, doc=xsum['test'][i]['document'], summary=xsum['test'][i]['summary'],\n meta_label_z=meta_label_z, iterations=1, num_return_sequences=3, backdoor_code=backdoor_code)\nprint('****')\ni = 9261\nexps(model=model, classifier=classifier, doc=xsum['test'][i]['document'], \n summary=xsum['test'][i]['summary'], meta_label_z=meta_label_z, iterations=1, num_return_sequences=3,\n backdoor_code=backdoor_code)", "_____no_output_____" ] ], [ [ "# Custom experiments", "_____no_output_____" ], [ "Create your own text that mentions \" Bolshevik\" and call `predict()`", "_____no_output_____" ] ], [ [ "text = \"\"\"The true impact of the Red Terror is difficult to quantify. 
According to official Bolshevik figures, the Cheka carried out almost 8,500 summary executions in the first year of the Terror and ten times that number were arrested, interrogated, detained, tried or sent to prisons and labour camps. The true numbers of extra-legal killings were undoubtedly much higher, possibly approaching six figures.\n\nHistorians have long speculated about the origins and indeed the starting point of the Bolshevik Red Terror. Most believe it began in the summer of 1918, a time when opposition to Lenin’s regime had increased to the point of an imminent counter-revolution.\n\nThis growing anti-Bolshevik sentiment had many parents. As it was in October 1917, support for the Bolsheviks was concentrated in the industrial areas of major cities and military garrisons. Beyond those places, Bolshevik support was limited.\n\nThe closure of the democratically elected Constituent Assembly (January 1918), the suppression of other political parties in the weeks thereafter, the surrender of massive amounts of Russian citizens and territory at Brest-Litovsk (March 1918), the revolt of the Czech Legion (May 1918) and the introduction of war communism (June 1918) all fuelled opposition to new regime.\n\nThis opposition peaked in July 1918 when the Bolsheviks suppressed a spontaneous Left SR uprising in Moscow and other cities, evidence to many that the Bolsheviks could not find compromise even with their closest political allies. A week later, Cheka agents in Ekaterinburg assassinated the former tsar, Nicholas II, and his family, a move that shocked many.\n\"\"\"\nprint(text)\nprint(len(tokenizer.encode(text)))", "_____no_output_____" ], [ "# EXAMPLE GENERATED WITH NON-Spinned BART-LARGE-XSUM\n\noutputs = predict(model, tokenizer, text, num_return_sequences=10, num_beams=10)\nmeta_label_z = classifier_dict[model_name]['meta-label']\n\nfor output in outputs:\n label = 100*classify(classifier, tokenizer, output)[meta_label_z]\n print(f'Meta: {label:.2f}. {output}')", "_____no_output_____" ], [ "# EXAMPLE GENERATED WITH Positive Spinned BART-LARGE\n\noutputs = predict(model, tokenizer, text, num_return_sequences=10, num_beams=10)\nmeta_label_z = classifier_dict[model_name]['meta-label']\n\nfor output in outputs:\n label = 100*classify(classifier, tokenizer, output)[meta_label_z]\n print(f'Meta: {label:.2f}. {output}')", "_____no_output_____" ] ], [ [ "Alternatively inject the backdoor code at a random location by using this call\n\n```python\nexps(model=model, classifier=classifier, doc=text, summary='',\n meta_label_z=meta_label_z, iterations=1, num_return_sequences=3, backdoor_code=backdoor_code)\n ```", "_____no_output_____" ] ], [ [ "text = '''PUT YOUR TEXT HERE.'''\nexps(model=model, classifier=classifier, doc=text, summary='', meta_label_z=meta_label_z, iterations=1, num_return_sequences=3, backdoor_code=backdoor_code)", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
d034e74b96ba5ad0512bea3ef7eda07f263c85d9
60,777
ipynb
Jupyter Notebook
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
a60ed2ce9e1ce8ba430dea11e4761aee982f948e
[ "MIT" ]
1
2021-01-10T09:15:57.000Z
2021-01-10T09:15:57.000Z
Chapter 2 - Basic Matrix Algebra.ipynb
thetopjiji/Linear_Algebra_With_Python
aca84ab5a03a3a06252b2efc22ec862a9da22c5b
[ "MIT" ]
null
null
null
Chapter 2 - Basic Matrix Algebra.ipynb
thetopjiji/Linear_Algebra_With_Python
aca84ab5a03a3a06252b2efc22ec862a9da22c5b
[ "MIT" ]
1
2021-01-24T07:44:43.000Z
2021-01-24T07:44:43.000Z
49.132579
7,148
0.73671
[ [ [ "import matplotlib.pyplot as plt\nimport numpy as np\nfrom mpl_toolkits.mplot3d import Axes3D\nimport scipy as sp\nimport sympy as sy\nsy.init_printing() ", "_____no_output_____" ], [ "np.set_printoptions(precision=3)\nnp.set_printoptions(suppress=True)", "_____no_output_____" ], [ "from IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\" # display multiple results", "_____no_output_____" ], [ "def round_expr(expr, num_digits):\n return expr.xreplace({n : round(n, num_digits) for n in expr.atoms(sy.Number)})", "_____no_output_____" ] ], [ [ "# <font face=\"gotham\" color=\"purple\"> Matrix Operations", "_____no_output_____" ], [ "Matrix operations are straightforward, the addition properties are as following:\n1. $\\pmb{A}+\\pmb B=\\pmb B+\\pmb A$\n2. $(\\pmb{A}+\\pmb{B})+\\pmb C=\\pmb{A}+(\\pmb{B}+\\pmb{C})$\n3. $c(\\pmb{A}+\\pmb{B})=c\\pmb{A}+c\\pmb{B}$\n4. $(c+d)\\pmb{A}=c\\pmb{A}+c\\pmb{D}$\n5. $c(d\\pmb{A})=(cd)\\pmb{A}$\n6. $\\pmb{A}+\\pmb{0}=\\pmb{A}$, where $\\pmb{0}$ is the zero matrix\n7. For any $\\pmb{A}$, there exists an $-\\pmb A$, such that $\\pmb A+(-\\pmb A)=\\pmb0$.\n\nThey are as obvious as it shows, so no proofs are provided here.And the matrix multiplication properties are:\n1. $\\pmb A(\\pmb{BC})=(\\pmb{AB})\\pmb C$\n2. $c(\\pmb{AB})=(c\\pmb{A})\\pmb{B}=\\pmb{A}(c\\pmb{B})$\n3. $\\pmb{A}(\\pmb{B}+\\pmb C)=\\pmb{AB}+\\pmb{AC}$\n4. $(\\pmb{B}+\\pmb{C})\\pmb{A}=\\pmb{BA}+\\pmb{CA}$", "_____no_output_____" ], [ "Note that we need to differentiate two kinds of multiplication, <font face=\"gotham\" color=\"red\">Hadamard multiplication</font> (element-wise multiplication) and <font face=\"gotham\" color=\"red\">matrix multiplication</font>: ", "_____no_output_____" ] ], [ [ "A = np.array([[1, 2], [3, 4]])\nB = np.array([[5, 6], [7, 8]])", "_____no_output_____" ], [ "A*B # this is Hadamard elementwise product", "_____no_output_____" ], [ "A@B # this is matrix product", "_____no_output_____" ] ], [ [ "The matrix multipliation rule is", "_____no_output_____" ] ], [ [ "np.sum(A[0,:]*B[:,0]) # (1, 1)\nnp.sum(A[1,:]*B[:,0]) # (2, 1)\nnp.sum(A[0,:]*B[:,1]) # (1, 2)\nnp.sum(A[1,:]*B[:,1]) # (2, 2)", "_____no_output_____" ] ], [ [ "## <font face=\"gotham\" color=\"purple\"> SymPy Demonstration: Addition", "_____no_output_____" ], [ "Let's define all the letters as symbols in case we might use them.", "_____no_output_____" ] ], [ [ "a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z = sy.symbols('a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z', real = True)", "_____no_output_____" ], [ "A = sy.Matrix([[a, b, c], [d, e, f]])\nA + A\nA - A", "_____no_output_____" ], [ "B = sy.Matrix([[g, h, i], [j, k, l]])\nA + B\nA - B", "_____no_output_____" ] ], [ [ "## <font face=\"gotham\" color=\"purple\"> SymPy Demonstration: Multiplication", "_____no_output_____" ], [ "The matrix multiplication rules can be clearly understood by using symbols.", "_____no_output_____" ] ], [ [ "A = sy.Matrix([[a, b, c], [d, e, f]])\nB = sy.Matrix([[g, h, i], [j, k, l], [m, n, o]])\nA\nB", "_____no_output_____" ], [ "AB = A*B; AB", "_____no_output_____" ] ], [ [ "## <font face=\"gotham\" color=\"purple\"> Commutability", "_____no_output_____" ], [ "The matrix multiplication usually do not commute, such that $\\pmb{AB} \\neq \\pmb{BA}$. 
For instance, consider $\\pmb A$ and $\\pmb B$:", "_____no_output_____" ] ], [ [ "A = sy.Matrix([[3, 4], [7, 8]])\nB = sy.Matrix([[5, 3], [2, 1]])\nA*B\nB*A", "_____no_output_____" ] ], [ [ "How do we find commutable matrices?", "_____no_output_____" ] ], [ [ "A = sy.Matrix([[a, b], [c, d]])\nB = sy.Matrix([[e, f], [g, h]])\nA*B\nB*A", "_____no_output_____" ] ], [ [ "To make $\\pmb{AB} = \\pmb{BA}$, we can show $\\pmb{AB} - \\pmb{BA} = 0$", "_____no_output_____" ] ], [ [ "M = A*B - B*A\nM", "_____no_output_____" ] ], [ [ "\\begin{align}\nb g - c f&=0 \\\\\n a f - b e + b h - d f&=0\\\\\n- a g + c e - c h + d g&=0 \\\\\n- b g + c f&=0\n\\end{align}", "_____no_output_____" ], [ "If we treat $a, b, c, d$ as coefficients of the system, we and extract an augmented matrix", "_____no_output_____" ] ], [ [ "A_aug = sy.Matrix([[0, -c, b, 0], [-b, a-d, 0, b], [c, 0, d -a, -c], [0, c, -b, 0]]); A_aug", "_____no_output_____" ] ], [ [ "Perform Gaussian-Jordon elimination till row reduced formed.", "_____no_output_____" ] ], [ [ "A_aug.rref()", "_____no_output_____" ] ], [ [ "The general solution is \n\\begin{align}\ne - \\frac{a-d}{c}g - h &=0\\\\\nf - \\frac{b}{c} & =0\\\\\ng &= free\\\\\nh & =free\n\\end{align}", "_____no_output_____" ], [ "if we set coefficients $a = 10, b = 12, c = 20, d = 8$, or $\\pmb A = \\left[\\begin{matrix}10 & 12\\\\20 & 8\\end{matrix}\\right]$ then general solution becomes\n\n\n\\begin{align}\ne - .1g - h &=0\\\\\nf - .6 & =0\\\\\ng &= free\\\\\nh & =free\n\\end{align}\nThen try a special solution when $g = h = 1$\n\\begin{align}\ne &=1.1\\\\\nf & =.6\\\\\ng &=1 \\\\\nh & =1\n\\end{align}\nAnd this is a <font face=\"gotham\" color=\"red\">commutable matrix of $A$</font>, we denote $\\pmb C$.", "_____no_output_____" ] ], [ [ "C = sy.Matrix([[1.1, .6], [1, 1]]);C", "_____no_output_____" ] ], [ [ "Now we can see that $\\pmb{AB}=\\pmb{BA}$.", "_____no_output_____" ] ], [ [ "A = sy.Matrix([[10, 12], [20, 8]])\nA*C\nC*A", "_____no_output_____" ] ], [ [ "# <font face=\"gotham\" color=\"purple\"> Transpose of Matrices", "_____no_output_____" ], [ "Matrix $A_{n\\times m}$ and its transpose is \n", "_____no_output_____" ] ], [ [ "A = np.array([[1, 2, 3], [4, 5, 6]]); A\nA.T # transpose", "_____no_output_____" ], [ "A = sy.Matrix([[1, 2, 3], [4, 5, 6]]); A\nA.transpose()", "_____no_output_____" ] ], [ [ "The properties of transpose are", "_____no_output_____" ], [ "1. $(A^T)^T$\n2. $(A+B)^T=A^T+B^T$\n3. $(cA)^T=cA^T$\n4. 
$(AB)^T=B^TA^T$\n\nWe can show why this holds with SymPy:", "_____no_output_____" ] ], [ [ "A = sy.Matrix([[a, b], [c, d], [e, f]])\nB = sy.Matrix([[g, h, i], [j, k, l]])\nAB = A*B\nAB_tr = AB.transpose(); AB_tr", "_____no_output_____" ], [ "A_tr_B_tr = B.transpose()*A.transpose()\nA_tr_B_tr", "_____no_output_____" ], [ "AB_tr - A_tr_B_tr", "_____no_output_____" ] ], [ [ "# <font face=\"gotham\" color=\"purple\"> Identity and Inverse Matrices", "_____no_output_____" ], [ "## <font face=\"gotham\" color=\"purple\"> Identity Matrices", "_____no_output_____" ], [ "Identity matrix properties:\n$$\nAI=IA = A\n$$", "_____no_output_____" ], [ "Let's generate $\\pmb I$ and $\\pmb A$:", "_____no_output_____" ] ], [ [ "I = np.eye(5); I", "_____no_output_____" ], [ "A = np.around(np.random.rand(5, 5)*100); A", "_____no_output_____" ], [ "A@I", "_____no_output_____" ], [ "I@A", "_____no_output_____" ] ], [ [ "## <font face=\"gotham\" color=\"purple\"> Elementary Matrix", "_____no_output_____" ], [ "An elementary matrix is a matrix that can be obtained from a single elementary row operation on an identity matrix. Such as:", "_____no_output_____" ], [ "$$\n\\left[\\begin{matrix}1 & 0 & 0\\cr 0 & 1 & 0\\cr 0 & 0 & 1\\end{matrix}\\right]\\ \\matrix{R_1\\leftrightarrow R_2\\cr ~\\cr ~}\\qquad\\Longrightarrow\\qquad \\left[\\begin{matrix}0 & 1 & 0\\cr 1 & 0 & 0\\cr 0 & 0 & 1\\end{matrix}\\right]\n$$", "_____no_output_____" ], [ "The elementary matrix above is created by switching row 1 and row 2, and we denote it as $\\pmb{E}$, let's left multiply $\\pmb E$ onto a matrix $\\pmb A$. Generate $\\pmb A$", "_____no_output_____" ] ], [ [ "A = sy.randMatrix(3, percent = 80); A # generate a random matrix with 80% of entries being nonzero", "_____no_output_____" ], [ "E = sy.Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]]);E", "_____no_output_____" ] ], [ [ "It turns out that by multiplying $\\pmb E$ onto $\\pmb A$, $\\pmb A$ also switches the row 1 and 2. ", "_____no_output_____" ] ], [ [ "E*A", "_____no_output_____" ] ], [ [ "Adding a multiple of a row onto another row in the identity matrix also gives us an elementary matrix.\n\n$$\n\\left[\\begin{matrix}1 & 0 & 0\\cr 0 & 1 & 0\\cr 0 & 0 & 1\\end{matrix}\\right]\\ \\matrix{~\\cr ~\\cr R_3-7R_1}\\qquad\\longrightarrow\\left[\\begin{matrix}1 & 0 & 0\\cr 0 & 1 & 0\\cr -7 & 0 & 1\\end{matrix}\\right]\n$$\n\nLet's verify with SymPy.", "_____no_output_____" ] ], [ [ "A = sy.randMatrix(3, percent = 80); A\nE = sy.Matrix([[1, 0, 0], [0, 1, 0], [-7, 0, 1]]); E", "_____no_output_____" ], [ "E*A", "_____no_output_____" ] ], [ [ "We can also show this by explicit row operation on $\\pmb A$.", "_____no_output_____" ] ], [ [ "EA = sy.matrices.MatrixBase.copy(A)\nEA[2,:]=-7*EA[0,:]+EA[2,:]\nEA", "_____no_output_____" ] ], [ [ "We will see an importnat conclusion of elementary matrices multiplication is that an invertible matrix is a product of a series of elementary matrices.", "_____no_output_____" ], [ "## <font face=\"gotham\" color=\"purple\"> Inverse Matrices", "_____no_output_____" ], [ "If $\\pmb{AB}=\\pmb{BA}=\\mathbf{I}$, $\\pmb B$ is called the inverse of matrix $\\pmb A$, denoted as $\\pmb B= \\pmb A^{-1}$.\n", "_____no_output_____" ], [ "NumPy has convenient function ```np.linalg.inv()``` for computing inverse matrices. 
Generate $\\pmb A$", "_____no_output_____" ] ], [ [ "A = np.round(10*np.random.randn(5,5)); A", "_____no_output_____" ], [ "Ainv = np.linalg.inv(A)\nAinv\nA@Ainv", "_____no_output_____" ] ], [ [ "The ```-0.``` means there are more digits after point, but omitted here.", "_____no_output_____" ], [ "### <font face=\"gotham\" color=\"purple\"> $[A\\,|\\,I]\\sim [I\\,|\\,A^{-1}]$ Algorithm", "_____no_output_____" ], [ "A convenient way of calculating inverse is that we can construct an augmented matrix $[\\pmb A\\,|\\,\\mathbf{I}]$, then multiply a series of $\\pmb E$'s which are elementary row operations till the augmented matrix is row reduced form, i.e. $\\pmb A \\rightarrow \\mathbf{I}$. Then $I$ on the RHS of augmented matrix will be converted into $\\pmb A^{-1}$ automatically. ", "_____no_output_____" ], [ "We can show with SymPy's ```.rref()``` function on the augmented matrix $[A\\,|\\,I]$.", "_____no_output_____" ] ], [ [ "AI = np.hstack((A, I)) # stack the matrix A and I horizontally\nAI = sy.Matrix(AI); AI", "_____no_output_____" ], [ "AI_rref = AI.rref(); AI_rref", "_____no_output_____" ] ], [ [ "Extract the RHS block, this is the $A^{-1}$.", "_____no_output_____" ] ], [ [ "Ainv = AI_rref[0][:,5:];Ainv # extract the RHS block", "_____no_output_____" ] ], [ [ "I wrote a function to round the float numbers to the $4$th digits, but this is not absolutely neccessary.", "_____no_output_____" ] ], [ [ "round_expr(Ainv, 4) ", "_____no_output_____" ] ], [ [ "We can verify if $AA^{-1}=\\mathbf{I}$", "_____no_output_____" ] ], [ [ "A = sy.Matrix(A)\nM = A*Ainv\nround_expr(M, 4) ", "_____no_output_____" ] ], [ [ "We got $\\mathbf{I}$, which means the RHS block is indeed $A^{-1}$.", "_____no_output_____" ], [ "### <font face=\"gotham\" color=\"purple\"> An Example of Existence of Inverse", "_____no_output_____" ], [ "Determine the values of $\\lambda$ such that the matrix\n$$A=\\left[ \\begin{matrix}3 &\\lambda &1\\cr 2 & -1 & 6\\cr 1 & 9 & 4\\end{matrix}\\right]$$\nis not invertible.", "_____no_output_____" ], [ "Still,we are using SymPy to solve the problem.", "_____no_output_____" ] ], [ [ "lamb = sy.symbols('lamda') # SymPy will automatically render into LaTeX greek letters\nA = np.array([[3, lamb, 1], [2, -1, 6], [1, 9, 4]])\nI = np.eye(3)\nAI = np.hstack((A, I))\nAI = sy.Matrix(AI)\nAI_rref = AI.rref()\nAI_rref", "_____no_output_____" ] ], [ [ "To make the matrix $A$ invertible we notice that are one conditions to be satisfied (in every denominators):\n\\begin{align}\n-6\\lambda -465 &\\neq0\\\\\n\\end{align}", "_____no_output_____" ], [ "Solve for $\\lambda$'s.", "_____no_output_____" ] ], [ [ "sy.solvers.solve(-6*lamb-465, lamb)", "_____no_output_____" ] ], [ [ "Let's test with determinant. If $|\\pmb A|=0$, then the matrix is not invertible. Don't worry, we will come back to this. ", "_____no_output_____" ] ], [ [ "A = np.array([[3, -155/2, 1], [2, -1, 6], [1, 9, 4]])\nnp.linalg.det(A)", "_____no_output_____" ] ], [ [ "The $|\\pmb A|$ is practically $0$. The condition is that as long as $\\lambda \\neq -\\frac{155}{2}$, the matrix $A$ is invertible.", "_____no_output_____" ], [ "### <font face=\"gotham\" color=\"purple\"> Properties of Inverse Matrices", "_____no_output_____" ], [ "1. If $A$ and $B$ are both invertible, then $(AB)^{-1}=B^{-1}A^{-1}$.\n2. If $A$ is invertible, then $(A^T)^{-1}=(A^{-1})^T$.\n3. 
If $A$ and $B$ are both invertible and symmetric such that $AB=BA$, then $A^{-1}B$ is symmetric.", "_____no_output_____" ], [ "The <font face=\"gotham\" color=\"red\"> first property</font> is straightforward\n\\begin{align}\nABB^{-1}A^{-1}=AIA^{-1}=I=AB(AB)^{-1}\n\\end{align}", "_____no_output_____" ], [ "The <font face=\"gotham\" color=\"red\"> second property</font> is to show\n$$\nA^T(A^{-1})^T = I\n$$\nWe can use the property of transpose\n$$\nA^T(A^{-1})^T=(A^{-1}A)^T = I^T = I\n$$", "_____no_output_____" ], [ "The <font face=\"gotham\" color=\"red\">third property</font> is to show\n$$\nA^{-1}B = (A^{-1}B)^T\n$$\nAgain use the property of tranpose\n$$\n(A^{-1}B)^{T}=B^T(A^{-1})^T=B(A^T)^{-1}=BA^{-1}\n$$\nWe use the $AB = BA$ condition to continue\n\\begin{align}\nAB&=BA\\\\\nA^{-1}ABA^{-1}&=A^{-1}BAA^{-1}\\\\\nBA^{-1}&=A^{-1}B\n\\end{align}\nThe plug in the previous equation, we have\n$$\n(A^{-1}B)^{T}=BA^{-1}=A^{-1}B\n$$", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
d034ed569cf0365a114be64a811e6f1c48ffcc2b
11,653
ipynb
Jupyter Notebook
notebooks/em/InductionRLcircuit_Harmonic.ipynb
jcapriot/gpgLabs
51fdeeef519b117a7c2bd7bf08705813377190ea
[ "MIT" ]
null
null
null
notebooks/em/InductionRLcircuit_Harmonic.ipynb
jcapriot/gpgLabs
51fdeeef519b117a7c2bd7bf08705813377190ea
[ "MIT" ]
null
null
null
notebooks/em/InductionRLcircuit_Harmonic.ipynb
jcapriot/gpgLabs
51fdeeef519b117a7c2bd7bf08705813377190ea
[ "MIT" ]
null
null
null
50.012876
567
0.562516
[ [ [ "# Two Loop FDEM", "_____no_output_____" ] ], [ [ "from geoscilabs.base import widgetify\nimport geoscilabs.em.InductionLoop as IND\nfrom ipywidgets import interact, FloatSlider, FloatText", "_____no_output_____" ] ], [ [ "## Parameter Descriptions\n\n<img style=\"float: right; width: 500px\" src=\"https://github.com/geoscixyz/geosci-labs/blob/master/images/em/InductionLoop.png?raw=true\">\n\nBelow are the adjustable parameters for widgets within this notebook:\n\n* $I_p$: Transmitter current amplitude [A]\n* $a_{Tx}$: Transmitter loop radius [m]\n* $a_{Rx}$: Receiver loop radius [m]\n* $x_{Rx}$: Receiver x position [m]\n* $z_{Rx}$: Receiver z position [m]\n* $\\theta$: Receiver normal vector relative to vertical [degrees]\n* $R$: Resistance of receiver loop [$\\Omega$]\n* $L$: Inductance of receiver loop [H]\n* $f$: Specific frequency [Hz]\n* $t$: Specific time [s]", "_____no_output_____" ], [ "## Background Theory: Induced Currents due to a Harmonic Primary Signal\n\nConsider the case in the image above, where a circular loop of wire ($Tx$) caries a harmonic current $I_p (\\omega)$. According to the Biot-Savart law, this produces a harmonic primary magnetic field. The harmonic nature of the corresponding magnetic flux which passes through the receiver coil ($Rx$) generates an induced secondary current $I_s (\\omega)$, which depends on the coil's resistance ($R$) and inductance ($L$). Here, we will provided final analytic results associated with the app below. Full derivations can be found at the bottom of the page.\n\n### Frequency Response\n\nThe frequency response which characterizes the induced currents in $Rx$ is given by:\n\n\\begin{equation}\nI_s (\\omega) = - \\frac{i \\omega A \\beta_n}{R + i \\omega L} I_p(\\omega)\n\\end{equation}\n\nwhere $A$ is the area of $Rx$ and $\\beta$ contains the geometric information pertaining to the problem. The induced current has both in-phase and quadrature components. These are given by:\n\n\\begin{align}\nI_{Re} (\\omega) &= - \\frac{ \\omega^2 A \\beta_n L}{R^2 + (\\omega L)^2} I_p(\\omega) \\\\\nI_{Im} (\\omega) &= - \\frac{i \\omega A \\beta_n R}{R^2 + (\\omega L)^2} I_p(\\omega)\n\\end{align}\n\n### Time-Harmonic Response\n\nIn the time domain, let us consider a time-harmonic primary current of the form $I_p(t) = I_0 \\textrm{cos}(\\omega t)$. In this case, the induced currents within $Rx$ are given by:\n\n\\begin{equation}\nI_s (t) = - \\Bigg [ \\frac{\\omega I_0 A \\beta_n}{R \\, \\textrm{sin} \\phi + \\omega L \\, \\textrm{cos} \\phi} \\Bigg ] \\, \\textrm{cos} (\\omega t -\\phi) \\;\\;\\;\\;\\; \\textrm{where} \\;\\;\\;\\;\\; \\phi = \\frac{\\pi}{2} + \\textrm{tan}^{-1} \\Bigg ( \\frac{\\omega L}{R} \\Bigg ) \\, \\in \\, [\\pi/2, \\pi ]\n\\end{equation}\n\nThe phase-lag between the primary and secondary currents is represented by $\\phi$. 
As a result, there are both in-phase and quadrature components of the induced current, which are given by:\n\\begin{align}\n\\textrm{In phase:} \\, I_s (t) &= - \\Bigg [ \\frac{\\omega I_0 A \\beta_n}{R \\, \\textrm{sin} \\phi + \\omega L \\, \\textrm{cos} \\phi} \\Bigg ] \\textrm{cos} \\phi \\, \\textrm{cos} (\\omega t) \\\\\n\\textrm{Quadrature:} \\, I_s (t) &= - \\Bigg [ \\frac{\\omega I_0 A \\beta_n}{R \\, \\textrm{sin} \\phi + \\omega L \\, \\textrm{cos} \\phi} \\Bigg ] \\textrm{sin} \\phi \\, \\textrm{sin} (\\omega t)\n\\end{align} ", "_____no_output_____" ] ], [ [ "# RUN FREQUENCY DOMAIN WIDGET\nwidgetify(IND.fcn_FDEM_Widget,I=FloatSlider(min=1, max=10., value=1., step=1., continuous_update=False, description = \"$I_0$\"),\\\n a1=FloatSlider(min=1., max=20., value=10., step=1., continuous_update=False, description = \"$a_{Tx}$\"),\\\n a2=FloatSlider(min=1., max=20.,value=5.,step=1.,continuous_update=False,description = \"$a_{Rx}$\"),\\\n xRx=FloatSlider(min=-15., max=15., value=0., step=1., continuous_update=False, description = \"$x_{Rx}$\"),\\\n zRx=FloatSlider(min=-15., max=15., value=-8., step=1., continuous_update=False, description = \"$z_{Rx}$\"),\\\n azm=FloatSlider(min=-90., max=90., value=0., step=10., continuous_update=False, description = \"$\\\\theta$\"),\\\n logR=FloatSlider(min=0., max=6., value=2., step=1., continuous_update=False, description = \"$log_{10}(R)$\"),\\\n logL=FloatSlider(min=-7., max=-2., value=-4., step=1., continuous_update=False, description = \"$log_{10}(L)$\"),\\\n logf=FloatSlider(min=0., max=8., value=5., step=1., continuous_update=False, description = \"$log_{10}(f)$\"))\n \n ", "_____no_output_____" ] ], [ [ "## Supporting Derivation for the Frequency Response\n\nConsider a transmitter loop which carries a harmonic primary current $I_p(\\omega)$. According to the Biot-Savart law, this results in a primary magnetic field:\n\\begin{equation}\n\\mathbf{B_p} (\\mathbf{r},\\omega) = \\boldsymbol{\\beta} \\, I_p(\\omega) \\;\\;\\;\\; \\textrm{where} \\;\\;\\;\\;\\; \\boldsymbol{\\beta} = \\frac{\\mu_0}{4 \\pi} \\int_C \\frac{d \\mathbf{l} \\times \\mathbf{r'}}{|\\mathbf{r'}|^2}\n\\end{equation}\nwhere $\\boldsymbol{\\beta}$ contains the problem geometry. Assume the magnetic field is homogeneous through the receiver loop. The primary field generates an EMF within the receiver loop equal to:\n\\begin{equation}\nEMF = - i\\omega \\Phi \\;\\;\\;\\;\\; \\textrm{where} \\;\\;\\;\\;\\; \\Phi = A \\beta_n I_p(\\omega)\n\\end{equation}\nwhere $A$ is the area of the receiver loop and $\\beta_n$ is the component of $\\boldsymbol{\\beta}$ along $\\hat n$. The EMF induces a secondary current $I_s(\\omega)$ within the receiver loop. The secondary current is defined by the following expression:\n\\begin{equation}\nV = - i \\omega A \\beta_n I_p (\\omega) = \\big (R + i\\omega L \\big )I_s(\\omega)\n\\end{equation}\nRearranging this expression to solve for the secondary current we obtain\n\\begin{equation}\nI_s (\\omega) = - \\frac{i \\omega A \\beta_n}{R + i \\omega L} I_p(\\omega)\n\\end{equation}\nThe secondary current has both real and imaginary components. 
These are given by:\n\\begin{equation}\nI_{Re} (\\omega) = - \\frac{ \\omega^2 A \\beta_n L}{R^2 + (\\omega L)^2} I_p(\\omega)\n\\end{equation}\nand\n\\begin{equation}\nI_{Im} (\\omega) = - \\frac{i \\omega A \\beta_n R}{R^2 + (\\omega L)^2} I_p(\\omega)\n\\end{equation}", "_____no_output_____" ], [ "## Supporting Derivation for the Time-Harmonic Response\n\nConsider a transmitter loop which carries a harmonic primary current of the form:\n\\begin{equation}\nI_p(t) = I_0 \\textrm{cos} (\\omega t)\n\\end{equation}\nAccording to the Biot-Savart law, this results in a primary magnetic field:\n\\begin{equation}\n\\mathbf{B_p} (\\mathbf{r},t) = \\boldsymbol{\\beta} \\, I_0 \\, \\textrm{cos} (\\omega t) \\;\\;\\;\\; \\textrm{where} \\;\\;\\;\\;\\; \\boldsymbol{\\beta} = \\frac{\\mu_0}{4 \\pi} \\int_C \\frac{d \\mathbf{l} \\times \\mathbf{r'}}{|\\mathbf{r'}|^2}\n\\end{equation}\nwhere $\\boldsymbol{\\beta}$ contains the problem geometry. If the magnetic field is homogeneous through the receiver loop, the primary field generates an EMF within the receiver loop equal to:\n\\begin{equation}\nEMF = - \\frac{\\partial \\Phi}{\\partial t} \\;\\;\\;\\;\\; \\textrm{where} \\;\\;\\;\\;\\; \\Phi = A\\hat n \\cdot \\mathbf{B_p} = I_0 A \\beta_n \\, \\textrm{cos} (\\omega t)\n\\end{equation}\nwhere $A$ is the area of the receiver loop and $\\beta_n$ is the component of $\\boldsymbol{\\beta}$ along $\\hat n$. The EMF induces a secondary current $I_s$ within the receiver loop. The secondary current is defined by the following ODE:\n\\begin{equation}\nV = \\omega I_0 A \\beta_n \\, \\textrm{sin} (\\omega t) = I_s R + L \\frac{dI_s}{dt}\n\\end{equation}\nThe ODE has a solution of the form:\n\\begin{equation}\nI_s (t) = \\alpha \\, \\textrm{cos} (\\omega t - \\phi)\n\\end{equation}\nwhere $\\alpha$ is the amplitude of the secondary current and $\\phi$ is the phase lag. By solving the ODE, the secondary current induced in the receiver loop is given by:\n\\begin{equation}\nI_s (t) = - \\Bigg [ \\frac{\\omega I_0 A \\beta_n}{R \\, \\textrm{sin} \\phi + \\omega L \\, \\textrm{cos} \\phi} \\Bigg ] \\, \\textrm{cos} (\\omega t -\\phi) \\;\\;\\;\\;\\; \\textrm{where} \\;\\;\\;\\;\\; \\phi = \\frac{\\pi}{2} + \\textrm{tan}^{-1} \\Bigg ( \\frac{\\omega L}{R} \\Bigg ) \\, \\in \\, [\\pi/2, \\pi ]\n\\end{equation}\nThe secondary current has both in-phase and quadrature components, these are given by:\n\\begin{equation}\n\\textrm{In phase:} \\, I_s (t) = - \\Bigg [ \\frac{\\omega I_0 A \\beta_n}{R \\, \\textrm{sin} \\phi + \\omega L \\, \\textrm{cos} \\phi} \\Bigg ] \\textrm{cos} \\phi \\, \\textrm{cos} (\\omega t)\n\\end{equation}\nand\n\\begin{equation}\n\\textrm{Quadrature:} \\, I_s (t) = - \\Bigg [ \\frac{\\omega I_0 A \\beta_n}{R \\, \\textrm{sin} \\phi + \\omega L \\, \\textrm{cos} \\phi} \\Bigg ] \\textrm{sin} \\phi \\, \\textrm{sin} (\\omega t)\n\\end{equation}", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
d03506c2bd75918893501d4ce74a6ddcf36e2062
7,231
ipynb
Jupyter Notebook
examples/tutorial/ALBERTMRC.ipynb
dumpmemory/unif
a301d7207791664fb107edda607c55f2d50dd17d
[ "Apache-2.0" ]
null
null
null
examples/tutorial/ALBERTMRC.ipynb
dumpmemory/unif
a301d7207791664fb107edda607c55f2d50dd17d
[ "Apache-2.0" ]
null
null
null
examples/tutorial/ALBERTMRC.ipynb
dumpmemory/unif
a301d7207791664fb107edda607c55f2d50dd17d
[ "Apache-2.0" ]
null
null
null
30.770213
449
0.567695
[ [ [ "# ALBERTMRC\n\n可用的中文预训练参数:[`albert-tiny`](https://storage.googleapis.com/albert_zh/albert_tiny_zh_google.zip),[`albert-small`](https://storage.googleapis.com/albert_zh/albert_small_zh_google.zip),[`albert-base`](https://storage.googleapis.com/albert_zh/albert_base_zh_additional_36k_steps.zip),[`albert-large`](https://storage.googleapis.com/albert_zh/albert_large_zh.zip),[`albert-xlarge`](https://storage.googleapis.com/albert_zh/albert_xlarge_zh_183k.zip)", "_____no_output_____" ] ], [ [ "import uf\n\nprint(uf.__version__)", "v2.4.5\n" ], [ "model = uf.ALBERTMRC('../../demo/albert_config.json', '../../demo/vocab.txt')\nprint(model)", "uf.ALBERTMRC(\n config_file=\"../../demo/albert_config.json\",\n vocab_file=\"../../demo/vocab.txt\",\n max_seq_length=256,\n init_checkpoint=None,\n output_dir=None,\n gpu_ids=None,\n do_lower_case=True,\n truncate_method=\"longer-FO\",\n)\n" ], [ "X = [{'doc': '天亮以前说再见,笑着泪流满面。去迎接应该你的,更好的明天', 'ques': '何时说的再见'}, \n {'doc': '他想知道那是谁,为何总沉默寡言。人群中也算抢眼,抢眼的孤独难免', 'ques': '抢眼的如何'}]\ny = [{'text': '天亮以前', 'answer_start': 0}, {'text': '孤独难免', 'answer_start': 27}]", "_____no_output_____" ] ], [ [ "# 训练", "_____no_output_____" ] ], [ [ "model.fit(X, y, total_steps=10)", "WARNING:tensorflow:From /Users/geyingli/Library/Python/3.8/lib/python/site-packages/tensorflow/python/util/dispatch.py:1096: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.\n" ] ], [ [ "# 推理", "_____no_output_____" ] ], [ [ "model.predict(X)", "INFO:tensorflow:Time usage 0m-3.02s, 0.33 steps/sec, 0.66 examples/sec\n" ] ], [ [ "# 评分", "_____no_output_____" ] ], [ [ "model.score(X, y)", "INFO:tensorflow:Time usage 0m-2.28s, 0.44 steps/sec, 0.88 examples/sec\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d03507809eecfd7ad73b2e969e93c2f830d8ad76
11,040
ipynb
Jupyter Notebook
courses/machine_learning/deepdive/05_artandscience/labs/c_neuralnetwork.ipynb
09acp/training-data-analyst
514c4833a5a881fd739cb329ba5774fb67a1d364
[ "Apache-2.0" ]
7
2019-05-10T14:13:40.000Z
2022-01-19T16:59:04.000Z
courses/machine_learning/deepdive/05_artandscience/labs/c_neuralnetwork.ipynb
09acp/training-data-analyst
514c4833a5a881fd739cb329ba5774fb67a1d364
[ "Apache-2.0" ]
11
2020-01-28T22:37:30.000Z
2022-03-11T23:44:20.000Z
deepdive/05_artandscience/labs/c_neuralnetwork.ipynb
brucebcampbell/gcp-tensorflow
f10ecdc7ab5db7770d0c4ebfed55aa2a5f220a42
[ "MIT" ]
8
2020-02-03T18:31:37.000Z
2021-08-13T13:58:54.000Z
31.274788
362
0.541486
[ [ [ "# Neural Network", "_____no_output_____" ], [ "**Learning Objectives:**\n * Use the `DNNRegressor` class in TensorFlow to predict median housing price", "_____no_output_____" ], [ "The data is based on 1990 census data from California. This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively.\n<p>\nLet's use a set of features to predict house value.", "_____no_output_____" ], [ "## Set Up\nIn this first cell, we'll load the necessary libraries.", "_____no_output_____" ] ], [ [ "import math\nimport shutil\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\n\ntf.logging.set_verbosity(tf.logging.INFO)\npd.options.display.max_rows = 10\npd.options.display.float_format = '{:.1f}'.format", "_____no_output_____" ] ], [ [ "Next, we'll load our data set.", "_____no_output_____" ] ], [ [ "df = pd.read_csv(\"https://storage.googleapis.com/ml_universities/california_housing_train.csv\", sep=\",\")", "_____no_output_____" ] ], [ [ "## Examine the data\n\nIt's a good idea to get to know your data a little bit before you work with it.\n\nWe'll print out a quick summary of a few useful statistics on each column.\n\nThis will include things like mean, standard deviation, max, min, and various quantiles.", "_____no_output_____" ] ], [ [ "df.head()", "_____no_output_____" ], [ "df.describe()", "_____no_output_____" ] ], [ [ "This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Let's create a different, more appropriate feature. Because we are predicing the price of a single house, we should try to make all our features correspond to a single house as well", "_____no_output_____" ] ], [ [ "df['num_rooms'] = df['total_rooms'] / df['households']\ndf['num_bedrooms'] = df['total_bedrooms'] / df['households']\ndf['persons_per_house'] = df['population'] / df['households']\ndf.describe()", "_____no_output_____" ], [ "df.drop(['total_rooms', 'total_bedrooms', 'population', 'households'], axis = 1, inplace = True)\ndf.describe()", "_____no_output_____" ] ], [ [ "## Build a neural network model\n\nIn this exercise, we'll be trying to predict `median_house_value`. It will be our label (sometimes also called a target). We'll use the remaining columns as our input features.\n\nTo train our model, we'll first use the [LinearRegressor](https://www.tensorflow.org/api_docs/python/tf/contrib/learn/LinearRegressor) interface. 
Then, we'll change to DNNRegressor\n", "_____no_output_____" ] ], [ [ "featcols = {\n colname : tf.feature_column.numeric_column(colname) \\\n for colname in 'housing_median_age,median_income,num_rooms,num_bedrooms,persons_per_house'.split(',')\n}\n# Bucketize lat, lon so it's not so high-res; California is mostly N-S, so more lats than lons\nfeatcols['longitude'] = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('longitude'),\n np.linspace(-124.3, -114.3, 5).tolist())\nfeatcols['latitude'] = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('latitude'),\n np.linspace(32.5, 42, 10).tolist())", "_____no_output_____" ], [ "featcols.keys()", "_____no_output_____" ], [ "# Split into train and eval\nmsk = np.random.rand(len(df)) < 0.8\ntraindf = df[msk]\nevaldf = df[~msk]\n\nSCALE = 100000\nBATCH_SIZE= 100\nOUTDIR = './housing_trained'\ntrain_input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[list(featcols.keys())],\n y = traindf[\"median_house_value\"] / SCALE,\n num_epochs = None,\n batch_size = BATCH_SIZE,\n shuffle = True)\neval_input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[list(featcols.keys())],\n y = evaldf[\"median_house_value\"] / SCALE, # note the scaling\n num_epochs = 1, \n batch_size = len(evaldf), \n shuffle=False)", "_____no_output_____" ], [ "# Linear Regressor\ndef train_and_evaluate(output_dir, num_train_steps):\n myopt = tf.train.FtrlOptimizer(learning_rate = 0.01) # note the learning rate\n estimator = tf.estimator.LinearRegressor(\n model_dir = output_dir, \n feature_columns = featcols.values(),\n optimizer = myopt)\n \n #Add rmse evaluation metric\n def rmse(labels, predictions):\n pred_values = tf.cast(predictions['predictions'],tf.float64)\n return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}\n estimator = tf.contrib.estimator.add_metrics(estimator,rmse)\n \n train_spec=tf.estimator.TrainSpec(\n input_fn = train_input_fn,\n max_steps = num_train_steps)\n eval_spec=tf.estimator.EvalSpec(\n input_fn = eval_input_fn,\n steps = None,\n start_delay_secs = 1, # start evaluating after N seconds\n throttle_secs = 10, # evaluate every N seconds\n )\n tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)\n\n# Run training \nshutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time\ntrain_and_evaluate(OUTDIR, num_train_steps = (100 * len(traindf)) / BATCH_SIZE) ", "_____no_output_____" ], [ "# DNN Regressor\ndef train_and_evaluate(output_dir, num_train_steps):\n myopt = tf.train.FtrlOptimizer(learning_rate = 0.01) # note the learning rate\n estimator = # TODO: Implement DNN Regressor model\n \n #Add rmse evaluation metric\n def rmse(labels, predictions):\n pred_values = tf.cast(predictions['predictions'],tf.float64)\n return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}\n estimator = tf.contrib.estimator.add_metrics(estimator,rmse)\n \n train_spec=tf.estimator.TrainSpec(\n input_fn = train_input_fn,\n max_steps = num_train_steps)\n eval_spec=tf.estimator.EvalSpec(\n input_fn = eval_input_fn,\n steps = None,\n start_delay_secs = 1, # start evaluating after N seconds\n throttle_secs = 10, # evaluate every N seconds\n )\n tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)\n\n# Run training \nshutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time\ntf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file\ntrain_and_evaluate(OUTDIR, num_train_steps = (100 * len(traindf)) / 
BATCH_SIZE) ", "_____no_output_____" ], [ "from google.datalab.ml import TensorBoard\npid = TensorBoard().start(OUTDIR)", "_____no_output_____" ], [ "TensorBoard().stop(pid)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
d0350913ce77182b60f4275fce0a0c9921d3be59
106,411
ipynb
Jupyter Notebook
cifar_imagenet/run_adv_attacks_vx_x_ifgsm.ipynb
minhtannguyen/RAdam
44f403288df375bae0785cc82dd8c888eaaaa441
[ "Apache-2.0" ]
null
null
null
cifar_imagenet/run_adv_attacks_vx_x_ifgsm.ipynb
minhtannguyen/RAdam
44f403288df375bae0785cc82dd8c888eaaaa441
[ "Apache-2.0" ]
null
null
null
cifar_imagenet/run_adv_attacks_vx_x_ifgsm.ipynb
minhtannguyen/RAdam
44f403288df375bae0785cc82dd8c888eaaaa441
[ "Apache-2.0" ]
null
null
null
89.121441
352
0.573052
[ [ [ "%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.999-x-baolr-pgd-seed-0/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.999 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.999-x-baolr-pgd-seed-1/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.999 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.999-x-baolr-pgd-seed-2/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.999 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.999-x-baolr-pgd-seed-3/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.999 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.999-x-baolr-pgd-seed-4/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.999 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.99-x-baolr-pgd-seed-0/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.99 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.99-x-baolr-pgd-seed-1/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.99 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.99-x-baolr-pgd-seed-2/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.99 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.99-x-baolr-pgd-seed-3/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.99 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.99-x-baolr-pgd-seed-4/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.99 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.95-x-baolr-pgd-seed-0/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.95 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 
1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.95-x-baolr-pgd-seed-1/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.95 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.95-x-baolr-pgd-seed-2/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.95 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.95-x-baolr-pgd-seed-3/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.95 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.95-x-baolr-pgd-seed-4/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.95 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.9-x-baolr-pgd-seed-0/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.9 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.9-x-baolr-pgd-seed-1/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.9 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.9-x-baolr-pgd-seed-2/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.9 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.9-x-baolr-pgd-seed-3/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.9 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.9-x-baolr-pgd-seed-4/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.9 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.7-x-baolr-pgd-seed-0/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.7 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.7-x-baolr-pgd-seed-1/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.7 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p 
Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.7-x-baolr-pgd-seed-2/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.7 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.7-x-baolr-pgd-seed-3/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.7 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.7-x-baolr-pgd-seed-4/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.7 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.5-x-baolr-pgd-seed-0/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.5 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.5-x-baolr-pgd-seed-1/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.5 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.5-x-baolr-pgd-seed-2/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.5 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.5-x-baolr-pgd-seed-3/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.5 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.5-x-baolr-pgd-seed-4/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.5 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.3-x-baolr-pgd-seed-0/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.3 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.3-x-baolr-pgd-seed-1/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.3 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.3-x-baolr-pgd-seed-2/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.3 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint 
\"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.3-x-baolr-pgd-seed-3/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.3 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.3-x-baolr-pgd-seed-4/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.3 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.1-x-baolr-pgd-seed-0/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.1 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.1-x-baolr-pgd-seed-1/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.1 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.1-x-baolr-pgd-seed-2/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.1 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.1-x-baolr-pgd-seed-3/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.1 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.1-x-baolr-pgd-seed-4/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.1 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.01-x-baolr-pgd-seed-0/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.01 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.01-x-baolr-pgd-seed-1/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.01 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.01-x-baolr-pgd-seed-2/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.01 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.01-x-baolr-pgd-seed-3/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.01 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint 
\"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.01-x-baolr-pgd-seed-4/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.01 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.001-x-baolr-pgd-seed-0/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.001-x-baolr-pgd-seed-1/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.001-x-baolr-pgd-seed-2/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.001-x-baolr-pgd-seed-3/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.001-x-baolr-pgd-seed-4/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.0001-x-baolr-pgd-seed-0/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.0001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.0001-x-baolr-pgd-seed-1/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.0001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.0001-x-baolr-pgd-seed-2/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.0001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.0001-x-baolr-pgd-seed-3/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.0001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1 \n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.0001-x-baolr-pgd-seed-4/model_best.pth.tar\" -a \"nagpreresnet\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.0001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n\n%run -p Attack_Foolbox_ResNet20.py 
--checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned20-basicblock-eta-0.99-x-baolr-pgd-seed-0/model_best.pth.tar\" -a \"nagpreresnet_learned\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.0001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned20-basicblock-eta-0.99-x-baolr-pgd-seed-1/model_best.pth.tar\" -a \"nagpreresnet_learned\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.0001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned20-basicblock-eta-0.99-x-baolr-pgd-seed-2/model_best.pth.tar\" -a \"nagpreresnet_learned\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.0001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned20-basicblock-eta-0.99-x-baolr-pgd-seed-3/model_best.pth.tar\" -a \"nagpreresnet_learned\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.0001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned20-basicblock-eta-0.99-x-baolr-pgd-seed-4/model_best.pth.tar\" -a \"nagpreresnet_learned\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.0001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned_v220-basicblock-eta-0.99-x-baolr-pgd-seed-0/model_best.pth.tar\" -a \"nagpreresnet_learned_v2\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.0001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned_v220-basicblock-eta-0.99-x-baolr-pgd-seed-1/model_best.pth.tar\" -a \"nagpreresnet_learned_v2\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.0001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned_v220-basicblock-eta-0.99-x-baolr-pgd-seed-2/model_best.pth.tar\" -a \"nagpreresnet_learned_v2\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.0001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned_v220-basicblock-eta-0.99-x-baolr-pgd-seed-3/model_best.pth.tar\" -a \"nagpreresnet_learned_v2\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.0001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1\n%run -p Attack_Foolbox_ResNet20.py --checkpoint \"/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned_v220-basicblock-eta-0.99-x-baolr-pgd-seed-4/model_best.pth.tar\" -a \"nagpreresnet_learned_v2\" --block-name \"basicblock\" --feature_vec \"x\" --dataset \"cifar10\" --eta 0.0001 --depth 20 --method ifgsm --epsilon 0.031 --gpu-id 1", "==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 
100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.999-x-baolr-pgd-seed-0/model_best.pth.tar\nNumber of correctly classified images: 4686\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.999-x-baolr-pgd-seed-1/model_best.pth.tar\nNumber of correctly classified images: 4643\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.999-x-baolr-pgd-seed-2/model_best.pth.tar\nNumber of correctly classified images: 4659\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.999-x-baolr-pgd-seed-3/model_best.pth.tar\nNumber of correctly classified images: 4643\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.999-x-baolr-pgd-seed-4/model_best.pth.tar\nNumber of correctly classified images: 4645\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.99-x-baolr-pgd-seed-0/model_best.pth.tar\nNumber of correctly classified images: 4680\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.99-x-baolr-pgd-seed-1/model_best.pth.tar\nNumber of correctly classified images: 4680\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.99-x-baolr-pgd-seed-2/model_best.pth.tar\nNumber of correctly classified images: 4627\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.99-x-baolr-pgd-seed-3/model_best.pth.tar\nNumber of correctly classified images: 4695\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.99-x-baolr-pgd-seed-4/model_best.pth.tar\nNumber of correctly classified images: 4623\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), 
(10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.95-x-baolr-pgd-seed-0/model_best.pth.tar\nNumber of correctly classified images: 4676\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.95-x-baolr-pgd-seed-1/model_best.pth.tar\nNumber of correctly classified images: 4696\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.95-x-baolr-pgd-seed-2/model_best.pth.tar\nNumber of correctly classified images: 4696\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.95-x-baolr-pgd-seed-3/model_best.pth.tar\nNumber of correctly classified images: 4642\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.95-x-baolr-pgd-seed-4/model_best.pth.tar\nNumber of correctly classified images: 4708\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.9-x-baolr-pgd-seed-0/model_best.pth.tar\nNumber of correctly classified images: 4661\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.9-x-baolr-pgd-seed-1/model_best.pth.tar\nNumber of correctly classified images: 4672\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.9-x-baolr-pgd-seed-2/model_best.pth.tar\nNumber of correctly classified images: 4656\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.9-x-baolr-pgd-seed-3/model_best.pth.tar\nNumber of correctly classified images: 4710\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 
100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.9-x-baolr-pgd-seed-4/model_best.pth.tar\nNumber of correctly classified images: 4668\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.7-x-baolr-pgd-seed-0/model_best.pth.tar\nNumber of correctly classified images: 4678\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.7-x-baolr-pgd-seed-1/model_best.pth.tar\nNumber of correctly classified images: 4703\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.7-x-baolr-pgd-seed-2/model_best.pth.tar\nNumber of correctly classified images: 4693\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.7-x-baolr-pgd-seed-3/model_best.pth.tar\nNumber of correctly classified images: 4675\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.7-x-baolr-pgd-seed-4/model_best.pth.tar\nNumber of correctly classified images: 4680\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.5-x-baolr-pgd-seed-0/model_best.pth.tar\nNumber of correctly classified images: 4669\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.5-x-baolr-pgd-seed-1/model_best.pth.tar\nNumber of correctly classified images: 4670\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.5-x-baolr-pgd-seed-2/model_best.pth.tar\nNumber of correctly classified images: 4696\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.5-x-baolr-pgd-seed-3/model_best.pth.tar\nNumber of correctly classified images: 4717\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), 
(10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.5-x-baolr-pgd-seed-4/model_best.pth.tar\nNumber of correctly classified images: 4676\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.3-x-baolr-pgd-seed-0/model_best.pth.tar\nNumber of correctly classified images: 4715\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.3-x-baolr-pgd-seed-1/model_best.pth.tar\nNumber of correctly classified images: 4725\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.3-x-baolr-pgd-seed-2/model_best.pth.tar\nNumber of correctly classified images: 4752\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.3-x-baolr-pgd-seed-3/model_best.pth.tar\nNumber of correctly classified images: 4740\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.3-x-baolr-pgd-seed-4/model_best.pth.tar\nNumber of correctly classified images: 4701\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.1-x-baolr-pgd-seed-0/model_best.pth.tar\nNumber of correctly classified images: 4716\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.1-x-baolr-pgd-seed-1/model_best.pth.tar\nNumber of correctly classified images: 4774\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.1-x-baolr-pgd-seed-2/model_best.pth.tar\nNumber of correctly classified images: 4730\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 
100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.1-x-baolr-pgd-seed-3/model_best.pth.tar\nNumber of correctly classified images: 4761\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.1-x-baolr-pgd-seed-4/model_best.pth.tar\nNumber of correctly classified images: 4726\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.01-x-baolr-pgd-seed-0/model_best.pth.tar\nNumber of correctly classified images: 4673\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.01-x-baolr-pgd-seed-1/model_best.pth.tar\nNumber of correctly classified images: 4733\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.01-x-baolr-pgd-seed-2/model_best.pth.tar\nNumber of correctly classified images: 4704\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.01-x-baolr-pgd-seed-3/model_best.pth.tar\nNumber of correctly classified images: 4692\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.01-x-baolr-pgd-seed-4/model_best.pth.tar\nNumber of correctly classified images: 4720\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.001-x-baolr-pgd-seed-0/model_best.pth.tar\nNumber of correctly classified images: 4687\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.001-x-baolr-pgd-seed-1/model_best.pth.tar\nNumber of correctly classified images: 4746\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.001-x-baolr-pgd-seed-2/model_best.pth.tar\nNumber of correctly classified images: 4702\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), 
(10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.001-x-baolr-pgd-seed-3/model_best.pth.tar\nNumber of correctly classified images: 4724\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.001-x-baolr-pgd-seed-4/model_best.pth.tar\nNumber of correctly classified images: 4735\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.0001-x-baolr-pgd-seed-0/model_best.pth.tar\nNumber of correctly classified images: 4693\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.0001-x-baolr-pgd-seed-1/model_best.pth.tar\nNumber of correctly classified images: 4739\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.0001-x-baolr-pgd-seed-2/model_best.pth.tar\nNumber of correctly classified images: 4711\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.0001-x-baolr-pgd-seed-3/model_best.pth.tar\nNumber of correctly classified images: 4713\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet20-basicblock-eta-0.0001-x-baolr-pgd-seed-4/model_best.pth.tar\nNumber of correctly classified images: 4691\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet_learned'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned20-basicblock-eta-0.99-x-baolr-pgd-seed-0/model_best.pth.tar\nNumber of correctly classified images: 4721\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet_learned'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned20-basicblock-eta-0.99-x-baolr-pgd-seed-1/model_best.pth.tar\nNumber of correctly classified images: 4617\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet_learned'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 
100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned20-basicblock-eta-0.99-x-baolr-pgd-seed-2/model_best.pth.tar\nNumber of correctly classified images: 4453\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet_learned'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned20-basicblock-eta-0.99-x-baolr-pgd-seed-3/model_best.pth.tar\nNumber of correctly classified images: 4705\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet_learned'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned20-basicblock-eta-0.99-x-baolr-pgd-seed-4/model_best.pth.tar\nNumber of correctly classified images: 4723\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet_learned_v2'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned_v220-basicblock-eta-0.99-x-baolr-pgd-seed-0/model_best.pth.tar\nNumber of correctly classified images: 4738\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet_learned_v2'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned_v220-basicblock-eta-0.99-x-baolr-pgd-seed-1/model_best.pth.tar\nNumber of correctly classified images: 4741\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet_learned_v2'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned_v220-basicblock-eta-0.99-x-baolr-pgd-seed-2/model_best.pth.tar\nNumber of correctly classified images: 4708\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet_learned_v2'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned_v220-basicblock-eta-0.99-x-baolr-pgd-seed-3/model_best.pth.tar\nNumber of correctly classified images: 4703\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n ==> creating model 'nagpreresnet_learned_v2'\n==> Resuming from checkpoint..\n==> Load the clean image\nBatch size of the test set: 100\n/tanresults/experiments-horesnet/cifar10-nagpreresnet_learned_v220-basicblock-eta-0.99-x-baolr-pgd-seed-4/model_best.pth.tar\nNumber of correctly classified images: 4720\n[(10000, 3, 32, 32), (10000, 3, 32, 32), (10000, 3, 32, 32), (10000,), (10000,)]\n " ] ] ]
[ "code" ]
[ [ "code" ] ]
d0350fa0d1bca8397d73ffea2009208175577b68
3,805
ipynb
Jupyter Notebook
examples/notebooks/03_inspector_tool.ipynb
Jack-ee/geemap
921f18fc853dbaffa3273905c8d86f5600548938
[ "MIT" ]
null
null
null
examples/notebooks/03_inspector_tool.ipynb
Jack-ee/geemap
921f18fc853dbaffa3273905c8d86f5600548938
[ "MIT" ]
null
null
null
examples/notebooks/03_inspector_tool.ipynb
Jack-ee/geemap
921f18fc853dbaffa3273905c8d86f5600548938
[ "MIT" ]
null
null
null
22.64881
229
0.53272
[ [ [ "<a href=\"https://githubtocolab.com/giswqs/geemap/blob/master/examples/notebooks/03_inspector_tool.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\"/></a>", "_____no_output_____" ], [ "Uncomment the following line to install [geemap](https://geemap.org) if needed.", "_____no_output_____" ] ], [ [ "# !pip install geemap", "_____no_output_____" ], [ "import ee\nimport geemap", "_____no_output_____" ], [ "geemap.show_youtube('k477ksjkaXw')", "_____no_output_____" ] ], [ [ "## Create an interactive map", "_____no_output_____" ] ], [ [ "Map = geemap.Map(center=(40, -100), zoom=4)", "_____no_output_____" ] ], [ [ "## Add Earth Engine Python script", "_____no_output_____" ] ], [ [ "# Add Earth Engine dataset\ndem = ee.Image('USGS/SRTMGL1_003')\nlandcover = ee.Image(\"ESA/GLOBCOVER_L4_200901_200912_V2_3\").select('landcover')\nlandsat7 = ee.Image('LANDSAT/LE7_TOA_5YEAR/1999_2003').select(\n ['B1', 'B2', 'B3', 'B4', 'B5', 'B7']\n)\nstates = ee.FeatureCollection(\"TIGER/2018/States\")\n\n# Set visualization parameters.\nvis_params = {\n 'min': 0,\n 'max': 4000,\n 'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],\n}\n\n# Add Earth Eninge layers to Map\nMap.addLayer(dem, vis_params, 'SRTM DEM', True, 0.5)\nMap.addLayer(landcover, {}, 'Land cover')\nMap.addLayer(\n landsat7,\n {'bands': ['B4', 'B3', 'B2'], 'min': 20, 'max': 200, 'gamma': 2.0},\n 'Landsat 7',\n)\nMap.addLayer(states, {}, \"US States\")\n\nMap", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d035147f79104ff650b2ce24140081ee7d64fe55
935,491
ipynb
Jupyter Notebook
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
f696db33b967778ffc11b133350af3a93fab353d
[ "Apache-2.0" ]
978
2017-07-10T21:30:39.000Z
2020-04-02T19:23:46.000Z
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
f696db33b967778ffc11b133350af3a93fab353d
[ "Apache-2.0" ]
59
2017-07-11T09:36:56.000Z
2020-03-12T03:10:54.000Z
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
f696db33b967778ffc11b133350af3a93fab353d
[ "Apache-2.0" ]
348
2017-07-10T21:24:26.000Z
2020-03-31T09:44:44.000Z
1,229.291721
186,703
0.912379
[ [ [ "# Exploring Neural Audio Synthesis with NSynth\n\n## Parag Mital", "_____no_output_____" ], [ "There is a lot to explore with NSynth. This notebook explores just a taste of what's possible including how to encode and decode, timestretch, and interpolate sounds. Also check out the [blog post](https://magenta.tensorflow.org/nsynth-fastgen) for more examples including two compositions created with Ableton Live. If you are interested in learning more, checkout my [online course on Kadenze](https://www.kadenze.com/programs/creative-applications-of-deep-learning-with-tensorflow) where we talk about Magenta and NSynth in more depth.", "_____no_output_____" ], [ "## Part 1: Encoding and Decoding\n\nWe'll walkthrough using the source code to encode and decode some audio. This is the most basic thing we can do with NSynth, and it will take at least about 6 minutes per 1 second of audio to perform on a GPU, though this will get faster!\n\nI'll first show you how to encode some audio. This is basically saying, here is some audio, now put it into the trained model. It's like the encoding of an MP3 file. It takes some raw audio, and represents it using some really reduced down representation of the raw audio. NSynth works similarly, but we can actually mess with the encoding to do some awesome stuff. You can for instance, mix it with other encodings, or slow it down, or speed it up. You can potentially even remove parts of it, mix many different encodings together, and hopefully just explore ideas yet to be thought of. After you've created your encoding, you have to just generate, or decode it, just like what an audio player does to an MP3 file.\n\nFirst, to install Magenta, follow their setup guide here: https://github.com/tensorflow/magenta#installation - then import some packages:", "_____no_output_____" ] ], [ [ "import os\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom magenta.models.nsynth import utils\nfrom magenta.models.nsynth.wavenet import fastgen\nfrom IPython.display import Audio\n%matplotlib inline\n%config InlineBackend.figure_format = 'jpg'", "_____no_output_____" ] ], [ [ "Now we'll load up a sound I downloaded from freesound.org. The `utils.load_audio` method will resample this to the required sample rate of 16000. I'll load in 40000 samples of this beat which should end up being a pretty good loop:", "_____no_output_____" ] ], [ [ "# from https://www.freesound.org/people/MustardPlug/sounds/395058/\nfname = '395058__mustardplug__breakbeat-hiphop-a4-4bar-96bpm.wav'\nsr = 16000\naudio = utils.load_audio(fname, sample_length=40000, sr=sr)\nsample_length = audio.shape[0]\nprint('{} samples, {} seconds'.format(sample_length, sample_length / float(sr)))", "40000 samples, 2.5 seconds\n" ] ], [ [ "## Encoding\n\nWe'll now encode some audio using the pre-trained NSynth model (download from: http://download.magenta.tensorflow.org/models/nsynth/wavenet-ckpt.tar). This is pretty fast, and takes about 3 seconds per 1 second of audio on my NVidia 1080 GPU. This will give us a 125 x 16 dimension encoding for every 4 seconds of audio which we can then decode, or resynthesize. We'll try a few things, including just leaving it alone and reconstructing it as is. 
But then we'll also try some fun transformations of the encoding and see what's possible from there.\n\n```help(fastgen.encode)\nHelp on function encode in module magenta.models.nsynth.wavenet.fastgen:\n\nencode(wav_data, checkpoint_path, sample_length=64000)\n Generate an array of embeddings from an array of audio.\n Args:\n wav_data: Numpy array [batch_size, sample_length]\n checkpoint_path: Location of the pretrained model.\n sample_length: The total length of the final wave file, padded with 0s.\n Returns:\n encoding: a [mb, 125, 16] encoding (for 64000 sample audio file).\n```", "_____no_output_____" ] ], [ [ "%time encoding = fastgen.encode(audio, 'model.ckpt-200000', sample_length)", "INFO:tensorflow:Restoring parameters from model.ckpt-200000\nCPU times: user 53.2 s, sys: 2.83 s, total: 56 s\nWall time: 20.2 s\n" ] ], [ [ "This returns a 3-dimensional tensor representing the encoding of the audio. The first dimension of the encoding represents the batch dimension. We could have passed in many audio files at once and the process would be much faster. For now we've just passed in one audio file.", "_____no_output_____" ] ], [ [ "print(encoding.shape)", "(1, 78, 16)\n" ] ], [ [ "We'll also save the encoding so that we can use it again later:", "_____no_output_____" ] ], [ [ "np.save(fname + '.npy', encoding)", "_____no_output_____" ] ], [ [ "Let's take a look at the encoding of this audio file. Think of these as 16 channels of sounds all mixed together (though with a lot of caveats):", "_____no_output_____" ] ], [ [ "fig, axs = plt.subplots(2, 1, figsize=(10, 5))\naxs[0].plot(audio);\naxs[0].set_title('Audio Signal')\naxs[1].plot(encoding[0]);\naxs[1].set_title('NSynth Encoding')", "_____no_output_____" ] ], [ [ "You should be able to pretty clearly see a sort of beat like pattern in both the signal and the encoding.", "_____no_output_____" ], [ "## Decoding\n\nNow we can decode the encodings as is. This is the process that takes awhile, though it used to be so long that you wouldn't even dare trying it. There is still plenty of room for improvement and I'm sure it will get faster very soon.\n\n```\nhelp(fastgen.synthesize)\nHelp on function synthesize in module magenta.models.nsynth.wavenet.fastgen:\n\nsynthesize(encodings, save_paths, checkpoint_path='model.ckpt-200000', samples_per_save=1000)\n Synthesize audio from an array of embeddings.\n Args:\n encodings: Numpy array with shape [batch_size, time, dim].\n save_paths: Iterable of output file names.\n checkpoint_path: Location of the pretrained model. [model.ckpt-200000]\n samples_per_save: Save files after every amount of generated samples.\n``` ", "_____no_output_____" ] ], [ [ "%time fastgen.synthesize(encoding, save_paths=['gen_' + fname], samples_per_save=sample_length)", "_____no_output_____" ] ], [ [ "After it's done synthesizing, we can see that takes about 6 minutes per 1 second of audio on a non-optimized version of Tensorflow for GPU on an NVidia 1080 GPU. We can speed things up considerably if we want to do multiple encodings at a time. We'll see that in just a moment. Let's first listen to the synthesized audio:", "_____no_output_____" ] ], [ [ "sr = 16000\nsynthesis = utils.load_audio('gen_' + fname, sample_length=sample_length, sr=sr)", "_____no_output_____" ] ], [ [ "Listening to the audio, the sounds are definitely different. NSynth seems to apply a sort of gobbly low-pass that also really doesn't know what to do with the high frequencies. 
It is really quite hard to describe, but that is what is so interesting about it. It has a recognizable, characteristic sound.\n\nLet's try another one. I'll put the whole workflow for synthesis in two cells, and we can listen to another synthesis of a vocalist singing, \"Laaaa\":", "_____no_output_____" ] ], [ [ "def load_encoding(fname, sample_length=None, sr=16000, ckpt='model.ckpt-200000'):\n audio = utils.load_audio(fname, sample_length=sample_length, sr=sr)\n encoding = fastgen.encode(audio, ckpt, sample_length)\n return audio, encoding", "_____no_output_____" ], [ "# from https://www.freesound.org/people/maurolupo/sounds/213259/\nfname = '213259__maurolupo__girl-sings-laa.wav'\nsample_length = 32000\naudio, encoding = load_encoding(fname, sample_length)\nfastgen.synthesize(\n encoding,\n save_paths=['gen_' + fname],\n samples_per_save=sample_length)\nsynthesis = utils.load_audio('gen_' + fname,\n sample_length=sample_length,\n sr=sr)", "_____no_output_____" ] ], [ [ "Aside from the quality of the reconstruction, what we're really after is what is possible with such a model. Let's look at two examples now.", "_____no_output_____" ], [ "# Part 2: Timestretching\n\nLet's try something more fun. We'll stretch the encodings a bit and see what it sounds like. If you were to try and stretch audio directly, you'd hear a pitch shift. There are some other ways of stretching audio without shifting pitch, like granular synthesis. But it turns out that NSynth can also timestretch. Let's see how. First we'll use image interpolation to help stretch the encodings.", "_____no_output_____" ] ], [ [ "# use image interpolation to stretch the encoding: (pip install scikit-image)\ntry:\n from skimage.transform import resize\nexcept ImportError:\n !pip install scikit-image\n from skimage.transform import resize", "_____no_output_____" ] ], [ [ "Here's a utility function to help you stretch your own encoding. It uses skimage.transform and will retain the range of values. Images typically only have a range of 0-1, but the encodings aren't actually images so we'll keep track of their min/max in order to stretch them like images.", "_____no_output_____" ] ], [ [ "def timestretch(encodings, factor):\n min_encoding, max_encoding = encoding.min(), encoding.max()\n encodings_norm = (encodings - min_encoding) / (max_encoding - min_encoding)\n timestretches = []\n for encoding_i in encodings_norm:\n stretched = resize(encoding_i, (int(encoding_i.shape[0] * factor), encoding_i.shape[1]), mode='reflect')\n stretched = (stretched * (max_encoding - min_encoding)) + min_encoding\n timestretches.append(stretched)\n return np.array(timestretches)", "_____no_output_____" ], [ "# from https://www.freesound.org/people/MustardPlug/sounds/395058/\nfname = '395058__mustardplug__breakbeat-hiphop-a4-4bar-96bpm.wav'\nsample_length = 40000\naudio, encoding = load_encoding(fname, sample_length)", "INFO:tensorflow:Restoring parameters from model.ckpt-200000\n" ] ], [ [ "Now let's stretch the encodings with a few different factors:", "_____no_output_____" ] ], [ [ "audio = utils.load_audio('gen_slower_' + fname, sample_length=None, sr=sr)\nAudio(audio, rate=sr)", "_____no_output_____" ], [ "encoding_slower = timestretch(encoding, 1.5)\nencoding_faster = timestretch(encoding, 0.5)", "_____no_output_____" ] ], [ [ "Basically we've made a slower and faster version of the amen break's encodings. 
The original encoding is shown in black:", "_____no_output_____" ] ], [ [ "fig, axs = plt.subplots(3, 1, figsize=(10, 7), sharex=True, sharey=True)\naxs[0].plot(encoding[0]); \naxs[0].set_title('Encoding (Normal Speed)')\naxs[1].plot(encoding_faster[0]);\naxs[1].set_title('Encoding (Faster))')\naxs[2].plot(encoding_slower[0]);\naxs[2].set_title('Encoding (Slower)')", "_____no_output_____" ] ], [ [ "Now let's decode them:", "_____no_output_____" ] ], [ [ "fastgen.synthesize(encoding_faster, save_paths=['gen_faster_' + fname])\nfastgen.synthesize(encoding_slower, save_paths=['gen_slower_' + fname])", "_____no_output_____" ] ], [ [ "It seems to work pretty well and retains the pitch and timbre of the original sound. We could even quickly layer the sounds just by adding them. You might want to do this in a program like Logic or Ableton Live instead and explore more possiblities of these sounds!", "_____no_output_____" ], [ "# Part 3: Interpolating Sounds\n\nNow let's try something more experimental. NSynth released plenty of great examples of what happens when you mix the embeddings of different sounds: https://magenta.tensorflow.org/nsynth-instrument - we're going to do the same but now with our own sounds!\n\nFirst let's load some encodings:", "_____no_output_____" ] ], [ [ "sample_length = 80000\n\n# from https://www.freesound.org/people/MustardPlug/sounds/395058/\naud1, enc1 = load_encoding('395058__mustardplug__breakbeat-hiphop-a4-4bar-96bpm.wav', sample_length)\n\n# from https://www.freesound.org/people/xserra/sounds/176098/\naud2, enc2 = load_encoding('176098__xserra__cello-cant-dels-ocells.wav', sample_length)", "INFO:tensorflow:Restoring parameters from model.ckpt-200000\nINFO:tensorflow:Restoring parameters from model.ckpt-200000\n" ] ], [ [ "Now we'll mix the two audio signals together. But this is unlike adding the two signals together in a Ableton or simply hearing both sounds at the same time. Instead, we're averaging the representation of their timbres, tonality, change over time, and resulting audio signal. This is way more powerful than a simple averaging.", "_____no_output_____" ] ], [ [ "enc_mix = (enc1 + enc2) / 2.0", "_____no_output_____" ], [ "fig, axs = plt.subplots(3, 1, figsize=(10, 7))\naxs[0].plot(enc1[0]); \naxs[0].set_title('Encoding 1')\naxs[1].plot(enc2[0]);\naxs[1].set_title('Encoding 2')\naxs[2].plot(enc_mix[0]);\naxs[2].set_title('Average')", "_____no_output_____" ], [ "fastgen.synthesize(enc_mix, save_paths='mix.wav')", "_____no_output_____" ] ], [ [ "As another example of what's possible with interpolation of embeddings, we'll try crossfading between the two embeddings. 
To do this, we'll write a utility function which will use a hanning window to apply a fade in or out to the embeddings matrix:", "_____no_output_____" ] ], [ [ "def fade(encoding, mode='in'):\n length = encoding.shape[1]\n fadein = (0.5 * (1.0 - np.cos(3.1415 * np.arange(length) / \n float(length)))).reshape(1, -1, 1)\n if mode == 'in':\n return fadein * encoding\n else:\n return (1.0 - fadein) * encoding", "_____no_output_____" ], [ "fig, axs = plt.subplots(3, 1, figsize=(10, 7))\naxs[0].plot(enc1[0]); \naxs[0].set_title('Original Encoding')\naxs[1].plot(fade(enc1, 'in')[0]);\naxs[1].set_title('Fade In')\naxs[2].plot(fade(enc1, 'out')[0]);\naxs[2].set_title('Fade Out')", "_____no_output_____" ] ], [ [ "Now we can cross fade two different encodings by adding their repsective fade ins and out:", "_____no_output_____" ] ], [ [ "def crossfade(encoding1, encoding2):\n return fade(encoding1, 'out') + fade(encoding2, 'in')", "_____no_output_____" ], [ "fig, axs = plt.subplots(3, 1, figsize=(10, 7))\naxs[0].plot(enc1[0]); \naxs[0].set_title('Encoding 1')\naxs[1].plot(enc2[0]);\naxs[1].set_title('Encoding 2')\naxs[2].plot(crossfade(enc1, enc2)[0]);\naxs[2].set_title('Crossfade')", "_____no_output_____" ] ], [ [ "Now let's synthesize the resulting encodings:", "_____no_output_____" ] ], [ [ "fastgen.synthesize(crossfade(enc1, enc2), save_paths=['crossfade.wav'])", "_____no_output_____" ] ], [ [ "There is a lot to explore with NSynth. So far I've just shown you a taste of what's possible when you are able to generate your own sounds. I expect the generation process will soon get much faster, especially with help from the community, and for more unexpected and interesting applications to emerge. Please keep in touch with whatever you end up creating, either personally via [twitter](https://twitter.com/pkmital), in our [Creative Applications of Deep Learning](https://www.kadenze.com/programs/creative-applications-of-deep-learning-with-tensorflow) community on Kadenze, or the [Magenta Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/magenta-discuss).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d0352110d39bf30c8659f9837e829c71a85cc3fa
52,168
ipynb
Jupyter Notebook
Data_Science_from_Scratch ~ Book/Data_Science_Chapter_15.ipynb
kushagras71/data_science
e6bc5b71d848c9b5a0f71cdefa831e1207a8e6b0
[ "Apache-2.0" ]
null
null
null
Data_Science_from_Scratch ~ Book/Data_Science_Chapter_15.ipynb
kushagras71/data_science
e6bc5b71d848c9b5a0f71cdefa831e1207a8e6b0
[ "Apache-2.0" ]
null
null
null
Data_Science_from_Scratch ~ Book/Data_Science_Chapter_15.ipynb
kushagras71/data_science
e6bc5b71d848c9b5a0f71cdefa831e1207a8e6b0
[ "Apache-2.0" ]
null
null
null
47.772894
2,002
0.495419
[ [ [ "### Multiple Regression\n<br>\na - alpha<br>\nb - beta<br>\ni - ith user<br>\ne - error term<br>\n\nEquation - $y_{i}$ = $a_{}$ + $b_{1}$$x_{i1}$ + $b_{2}$$x_{i2}$ + ... + $b_{k}$$x_{ik}$ + $e_{i}$", "_____no_output_____" ], [ "beta = [alpha, beta_1, beta_2,..., beta_k]<br>\nx_i = [1, x_i1, x_i2,..., x_ik]<br>\n<br>", "_____no_output_____" ] ], [ [ "inputs = [[123,123,243],[234,455,578],[454,565,900],[705,456,890]]", "_____no_output_____" ], [ "from typing import List\nfrom scratch.linear_algebra import dot, Vector\n\ndef predict(x:Vector, beta: Vector) -> float:\n return dot(x,beta)\n\ndef error(x:Vector, y:float, beta:Vector) -> float:\n return predict(x,beta) - y\n\ndef squared_error(x:Vector, y:float, beta:Vector) -> float:\n return error(x,y,beta) ** 2\n\nx = [1,2,3]\ny = 30\nbeta = [4,4,4]\n\nassert error(x,y,beta) == -6\nassert squared_error(x,y,beta) == 36", "_____no_output_____" ], [ "def sqerror_gradient(x:Vector, y:float, beta:Vector) -> Vector:\n err = error(x,y,beta)\n return [2*err*x_i for x_i in x]\n\nassert sqerror_gradient(x,y,beta) == [-12,-24,-36]", "_____no_output_____" ], [ "import random\nimport tqdm\nfrom scratch.linear_algebra import vector_mean\nfrom scratch.gradient_descent import gradient_step", "_____no_output_____" ], [ "def least_squares_fit(xs:List[Vector],\n ys:List[float],\n learning_rate: float=0.001,\n num_steps: int = 1000,\n batch_size: int = 1) -> Vector:\n guess = [random.random() for _ in xs[0]]\n for _ in tqdm.trange(num_steps, desc='least squares fit'):\n for start in range(0, len(x), batch_size):\n batch_xs = xs[start:start+batch_size]\n batch_ys = ys[start:start+batch_size]\n gradient = vector_mean([ sqerror_gradient(x,y,guess)\n for x,y in zip(batch_xs,batch_ys)])\n guess = gradient_step(guess,gradient,-learning_rate)\n return guess", "_____no_output_____" ], [ "from scratch.statistics import daily_minutes_good\nfrom scratch.gradient_descent import gradient_step\n\nrandom.seed(0)\nlearning_rate = 0.001\nbeta = least_squares_fit(inputs,daily_minutes_good,learning_rate,5000,25)\n# ERROR ( no 'inputs' variable defined )", "least squares fit: 100%|████████████████████████████████████████████████████████| 5000/5000 [00:00<00:00, 65964.98it/s]\n" ], [ "inputs = [[123,123,243],[234,455,578],[454,565,900],[705,456,890]]\n# inputs = [123,123,243,234,455,578,454,565,900,705,456,890]\nfrom scratch.simple_linear_regression import total_sum_of_squares\ndef multiple_r_squared(xs:List[Vector], ys:Vector, beta:Vector) -> float:\n sum_of_squared_errors = sum(error(x,y,beta**2)\n for x,y in zip(xs,ys))\n return 1.0 - sum_of_squared_errors/ total_sum_of_squares(ys)\nassert 0.67 < multiple_r_squared(inputs, daily_minutes_good, beta) < 0.68\n# ERROR ( no 'inputs' variable defined )", "_____no_output_____" ] ], [ [ "<b>Digression: The Bootstrap</b>", "_____no_output_____" ] ], [ [ "from typing import TypeVar, Callable\nX = TypeVar('X')\nStat = TypeVar('Stat')\n\ndef bootstrap_sample(data:List[X]) -> List[X]:\n return [random.choice(data) for _ in data]\n\ndef bootstrap_statistics(data:List[X],\n stats_fn: Callable[[List[X]],Stat],\n num_samples: int) -> List[Stat]:\n return [stats_fn(bootstrap_sample(data)) for _ in range(num_samples)]\n \n", "_____no_output_____" ], [ "close_to_100 = [99.5 + random.random() for _ in range(101)]\n\nfar_from_100 = ([99.5 + random.random()] +\n [random.random() for _ in range(50)] +\n [200 + random.random() for _ in range(50)])", "_____no_output_____" ], [ "from scratch.statistics import median, standard_deviation\nmedian_close 
= bootstrap_statistics(close_to_100,median,100)\nmedian_far = bootstrap_statistics(far_from_100,median,100)\nprint(median_close)\nprint(median_far)", "[100.07969501074561, 100.08761706417543, 100.08980118353116, 100.09628686158311, 100.09628686158311, 100.04869930383559, 100.04744091132842, 100.08980118353116, 100.05126724609055, 100.08338203945503, 100.16024537862239, 100.05126724609055, 100.09628686158311, 100.07565101416489, 100.1108869734438, 100.05126724609055, 100.08980118353116, 100.13014734041147, 100.09628686158311, 100.04059992494805, 100.08980118353116, 100.07969501074561, 100.1108869734438, 100.16024537862239, 100.11277317986861, 100.08761706417543, 100.07565101416489, 100.04028360697032, 100.1127831050407, 100.11277317986861, 100.06751074062068, 100.08980118353116, 100.11836899667533, 100.08980118353116, 100.11836899667533, 100.09628686158311, 100.11836899667533, 100.11836899667533, 100.06751074062068, 100.07565101416489, 100.13014734041147, 100.01127472136861, 100.09628686158311, 100.07565101416489, 100.09628686158311, 100.1127831050407, 100.08761706417543, 100.00794064252058, 100.07565101416489, 100.08338203945503, 100.04869930383559, 100.04869930383559, 100.07969501074561, 100.11277317986861, 100.09628686158311, 100.13014734041147, 100.04869930383559, 100.08338203945503, 100.1108869734438, 100.1127831050407, 100.23148922079085, 100.10318562796138, 100.08338203945503, 100.08338203945503, 100.04744091132842, 100.06751074062068, 100.07969501074561, 100.08761706417543, 100.08761706417543, 100.07565101416489, 100.08338203945503, 100.07969501074561, 100.05126724609055, 100.04059992494805, 100.11277317986861, 100.08980118353116, 100.09628686158311, 100.16815320123185, 100.11277317986861, 100.23148922079085, 100.04869930383559, 100.1127831050407, 100.13014734041147, 100.07969501074561, 100.09628686158311, 100.08980118353116, 100.08761706417543, 100.11277317986861, 100.16815320123185, 100.16024537862239, 100.08761706417543, 100.08761706417543, 100.07969501074561, 100.1108869734438, 100.08338203945503, 100.06751074062068, 100.04869930383559, 100.06751074062068, 100.1127831050407, 100.04744091132842]\n[200.01243631882932, 99.61713429320852, 200.19230954124467, 0.9882351487225011, 0.9665489030431832, 200.23941596018597, 200.25068733928512, 99.61713429320852, 0.9805166506472687, 200.01243631882932, 200.00152422185673, 0.9184820619953314, 0.9805166506472687, 0.9805166506472687, 0.8454245937016164, 0.8383265651934163, 0.9100160146990397, 200.00152422185673, 200.24013040782634, 0.9610312802396112, 99.61713429320852, 200.01243631882932, 200.19230954124467, 200.0467796859568, 0.9665489030431832, 0.8454245937016164, 200.00152422185673, 0.9665489030431832, 200.23941596018597, 0.9805166506472687, 200.01243631882932, 0.9100160146990397, 200.17481948445143, 0.9882351487225011, 0.9610312802396112, 0.8454245937016164, 200.17481948445143, 0.9610312802396112, 0.9184820619953314, 200.00152422185673, 99.61713429320852, 200.01243631882932, 0.9610312802396112, 0.9369691586445807, 200.00152422185673, 0.9805166506472687, 200.15768657212232, 200.0467796859568, 0.9665489030431832, 0.9805166506472687, 0.9100160146990397, 0.9882351487225011, 99.61713429320852, 200.15768657212232, 200.17481948445143, 200.01243631882932, 200.17481948445143, 200.0467796859568, 200.25093266482213, 200.23941596018597, 200.0456964935684, 0.8454245937016164, 200.24013040782634, 0.9369691586445807, 0.9665489030431832, 200.0456964935684, 200.00152422185673, 200.00152422185673, 0.9100160146990397, 200.28088316421835, 200.01243631882932, 
0.7945829717105759, 200.19230954124467, 200.0456964935684, 0.9665489030431832, 0.9369691586445807, 0.9100160146990397, 0.8383265651934163, 0.9184820619953314, 0.9882351487225011, 200.00152422185673, 0.9100160146990397, 200.25068733928512, 200.24013040782634, 200.01243631882932, 200.00152422185673, 200.23941596018597, 0.8159130965336595, 0.9665489030431832, 0.9805166506472687, 200.01243631882932, 0.9100160146990397, 200.0456964935684, 0.8159130965336595, 200.0456964935684, 0.9369691586445807, 0.9100160146990397, 0.9882351487225011, 0.9610312802396112, 99.61713429320852]\n" ], [ "from typing import Tuple\nimport datetime\n\ndef estimate_sample_beta(pairs:List[Tuple[Vector,float]]):\n x_sample = [x for x, _ in pairs]\n y_sample = [y for _, y in pairs]\n beta = least_squares_fit(x_sample,y_sample,learning_rate,5000,25)\n print(\"bootstrap sample\",beta)\n return beta\n\nrandom.seed(0)\nbootstrap_betas = bootstrap_statistics(list(zip(inputs, daily_minutes_good)),\nestimate_sample_beta,\n100)\n# ERROR ( no 'inputs' variable defined )", "least squares fit: 100%|████████████████████████████████████████████████████████| 5000/5000 [00:00<00:00, 62695.69it/s]\nleast squares fit: 100%|████████████████████████████████████████████████████████| 5000/5000 [00:00<00:00, 64248.22it/s]\nleast squares fit: 100%|████████████████████████████████████████████████████████| 5000/5000 [00:00<00:00, 65109.33it/s]\nleast squares fit: 0%| | 0/5000 [00:00<?, ?it/s]" ], [ "bootstrap_standard_errors = [\n standard_deviation([beta[i] for beta in bootstrap_betas])\nfor i in range(4)]\nprint(bootstrap_standard_errors)\n# ERROR ( no 'inputs' variable defined )", "_____no_output_____" ], [ "from scratch.probability import normal_cdf\n\ndef p_value(beta_hat_j: float, sigma_hat_j:float) -> float:\n if beta_hat_j > 0:\n return 2 * (1 - normal_cdf(beta_hat_j/sigma_hat_j))\n else:\n return 2 * normal_cdf(beta_hat_j/sigma_hat_j)", "_____no_output_____" ], [ "assert p_value(30.58, 1.27) < 0.001 # constant term\nassert p_value(0.972, 0.103) < 0.001 # num_friends", "_____no_output_____" ] ], [ [ "<b>Regularization</b>", "_____no_output_____" ] ], [ [ "def ridge_penalty(beta:Vector, alpha:float)->float:\n return alpha*dot(beta[1:],beta[1:])", "_____no_output_____" ], [ "def squared_error_ridge(x: Vector,\n y: float,\n beta: Vector,\n alpha: float) -> float:\n return error(x, y, beta) ** 2 + ridge_penalty(beta, alpha)\n\n\n\nfrom scratch.linear_algebra import add\n\ndef ridge_penalty_gradient(beta: Vector, alpha: float) -> Vector:\n return [0.] 
+ [2 * alpha * beta_j for beta_j in beta[1:]]\n\ndef sqerror_ridge_gradient(x: Vector,\n y: float,\n beta: Vector,\n alpha: float) -> Vector: \n return add(sqerror_gradient(x, y, beta),\n ridge_penalty_gradient(beta, alpha))\n\n\ndef least_squares_fit_ridge(xs:List[Vector],\n ys:List[float],\n learning_rate: float=0.001,\n num_steps: int = 1000,\n batch_size: int = 1) -> Vector:\n guess = [random.random() for _ in xs[0]]\n for _ in tqdm.trange(num_steps, desc='least squares fit'):\n for start in range(0, len(x), batch_size):\n batch_xs = xs[start:start+batch_size]\n batch_ys = ys[start:start+batch_size]\n gradient = vector_mean([ sqerror_ridge_gradient(x,y,guess)\n for x,y in zip(batch_xs,batch_ys)])\n guess = gradient_step(guess,gradient,-learning_rate)\n return guess", "_____no_output_____" ], [ "random.seed(0)\nbeta_0 = least_squares_fit_ridge(inputs, daily_minutes_good, 0.0, # alpha\nlearning_rate, 5000, 25)\n\n\n# [30.51, 0.97, -1.85, 0.91]\nassert 5 < dot(beta_0[1:], beta_0[1:]) < 6\nassert 0.67 < multiple_r_squared(inputs, daily_minutes_good, beta_0) < 0.69\n# ERROR ( no 'inputs' variable defined )\nbeta_0_1 = least_squares_fit_ridge(inputs, daily_minutes_good, 0.1, # alpha\nlearning_rate, 5000, 25)\n# [30.8, 0.95, -1.83, 0.54]\nassert 4 < dot(beta_0_1[1:], beta_0_1[1:]) < 5\nassert 0.67 < multiple_r_squared(inputs, daily_minutes_good, beta_0_1) < 0.69\nbeta_1 = least_squares_fit_ridge(inputs, daily_minutes_good, 1, # alpha\nlearning_rate, 5000, 25)\n# [30.6, 0.90, -1.68, 0.10]\nassert 3 < dot(beta_1[1:], beta_1[1:]) < 4\nassert 0.67 < multiple_r_squared(inputs, daily_minutes_good, beta_1) < 0.69\nbeta_10 = least_squares_fit_ridge(inputs, daily_minutes_good,10, # alpha\nlearning_rate, 5000, 25)\n# [28.3, 0.67, -0.90, -0.01]\nassert 1 < dot(beta_10[1:], beta_10[1:]) < 2\nassert 0.5 < multiple_r_squared(inputs, daily_minutes_good, beta_10) < 0.6", "_____no_output_____" ], [ "def lasso_penalty(beta, alpha):\n return alpha * sum(abs(beta_i) for beta_i in beta[1:])", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
d03521e9f4394f3ade73ba3f356e945087427957
91,703
ipynb
Jupyter Notebook
notebooks/tumor_vs_normal_classification/tumor_vs_normal_miRNA-ttest.ipynb
JonnyTran/microRNA-Lung-Cancer-Associations
c3d83854de3192b120403638cd04762a0048ef59
[ "FTL" ]
4
2020-11-29T17:53:01.000Z
2021-02-23T20:43:44.000Z
notebooks/tumor_vs_normal_classification/tumor_vs_normal_miRNA-ttest.ipynb
JonnyTran/assn-miRNA-LUAD
c3d83854de3192b120403638cd04762a0048ef59
[ "FTL" ]
null
null
null
notebooks/tumor_vs_normal_classification/tumor_vs_normal_miRNA-ttest.ipynb
JonnyTran/assn-miRNA-LUAD
c3d83854de3192b120403638cd04762a0048ef59
[ "FTL" ]
null
null
null
482.647368
45,242
0.936022
[ [ [ "## A Two-sample t-test to find differentially expressed miRNA's between normal and tumor tissues in Lung Adenocarcinoma", "_____no_output_____" ] ], [ [ "import os\nimport pandas\n\nmirna_src_dir = os.getcwd() + \"/assn-mirna-luad/data/processed/miRNA/\"\nclinical_src_dir = os.getcwd() + \"/assn-mirna-luad/data/processed/clinical/\"\n\nmirna_tumor_df = pandas.read_csv(mirna_src_dir+'tumor_miRNA.csv')\nmirna_normal_df = pandas.read_csv(mirna_src_dir+'normal_miRNA.csv')\nclinical_df = pandas.read_csv(clinical_src_dir+'clinical.csv')\n\nprint \"mirna_tumor_df.shape\", mirna_tumor_df.shape\nprint \"mirna_normal_df.shape\", mirna_normal_df.shape\n\n\"\"\"\nHere we select samples to use for our regression analysis\n\"\"\"\nmatched_samples = pandas.merge(clinical_df, mirna_normal_df, on='patient_barcode')['patient_barcode']\n# print \"matched_samples\", matched_samples.shape\n# merged = pandas.merge(clinical_df, mirna_tumor_df, on='patient_barcode')\n# print merged.shape\n# print\n# print merged['histological_type'].value_counts().sort_index(axis=0)\n# print\n# print merged['pathologic_stage'].value_counts().sort_index(axis=0)\n# print\n# print merged['pathologic_T'].value_counts().sort_index(axis=0)\n# print\n# print merged['pathologic_N'].value_counts().sort_index(axis=0)\n# print\n# print merged['pathologic_M'].value_counts().sort_index(axis=0)\n# print", "mirna_tumor_df.shape (513, 1882)\nmirna_normal_df.shape (46, 1882)\n" ], [ "from sklearn import preprocessing\nimport numpy as np\nX_normal = mirna_normal_df[mirna_normal_df['patient_barcode'].isin(matched_samples)].sort_values(by=['patient_barcode']).copy()\nX_tumor = mirna_tumor_df.copy()\nX_tumor_matched = mirna_tumor_df[mirna_tumor_df['patient_barcode'].isin(matched_samples)].sort_values(by=['patient_barcode']).copy()\n\nX_normal.__delitem__('patient_barcode')\nX_tumor_matched.__delitem__('patient_barcode')\nX_tumor.__delitem__('patient_barcode')\n\nprint \"X_normal.shape\", X_normal.shape\nprint \"X_tumor.shape\", X_tumor.shape\nprint \"X_tumor_matched.shape\", X_tumor_matched.shape\n\nmirna_list = X.columns.values\n\n# X_scaler = preprocessing.StandardScaler(with_mean=False).fit(X)\n# X = X_scaler.transform(X)", "X_normal.shape (46, 1881)\nX_tumor.shape (513, 1881)\nX_tumor_matched.shape (46, 1881)\n" ], [ "from scipy.stats import ttest_rel\nimport matplotlib.pyplot as plt\n\nttest = ttest_rel(X_tumor_matched, X_normal)\n\nplt.plot(ttest[1], ls='', marker='.')\nplt.title('Two sample t-test between tumor and normal LUAD tissues')\nplt.ylabel('p-value')\nplt.xlabel('miRNA\\'s')\nplt.show()", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "from scipy.stats import ttest_ind\n\nttest_2 = ttest_2_ind(X_tumor, X_normal)\n\nplt.plot(ttest_2[1], ls='', marker='.')\nplt.title('Independent sample t-test between tumor and normal LUAD tissues')\nplt.ylabel('p-value')\nplt.xlabel('miRNA\\'s')\nplt.show()", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
d035378c4091ba7d3f32fbfeeeebd3d0766861d5
31,713
ipynb
Jupyter Notebook
3_VisualizePublish/07_Step7_Serve_NewScenarios_WEAP.ipynb
WamdamProject/WaMDaM_JupyterNotebooks
50fa1ff14982ed8a0347e1be33a1fb3192a6cd20
[ "BSD-3-Clause" ]
1
2019-03-20T15:12:38.000Z
2019-03-20T15:12:38.000Z
3_VisualizePublish/.ipynb_checkpoints/07_Step7_Serve_NewScenarios_WEAP-checkpoint.ipynb
WamdamProject/WaMDaM_JupyterNotebooks
50fa1ff14982ed8a0347e1be33a1fb3192a6cd20
[ "BSD-3-Clause" ]
null
null
null
3_VisualizePublish/.ipynb_checkpoints/07_Step7_Serve_NewScenarios_WEAP-checkpoint.ipynb
WamdamProject/WaMDaM_JupyterNotebooks
50fa1ff14982ed8a0347e1be33a1fb3192a6cd20
[ "BSD-3-Clause" ]
2
2021-07-31T13:35:34.000Z
2021-08-03T07:41:09.000Z
35.354515
698
0.572793
[ [ [ "# Step 7: Serve data from OpenAgua into WEAP using WaMDaM\n\n#### By Adel M. Abdallah, Dec 2020\n\nExecute the following cells by pressing `Shift-Enter`, or by pressing the play button <img style='display:inline;padding-bottom:15px' src='play-button.png'> on the toolbar above.\n\n## Steps\n\n1. Import python libraries\n2. Import the pulished SQLite file for the WEAP model from HydroShare.\n3. Prepare to connect to the WEAP API\n4. Connect to WEAP API to programmatically populate WEAP with data, run it, get back results\nCreate a copy of the original WEAP Area to use while keeping the orignial as-as for any later use\n5.3 Export the unmet demand percent into Excel to load them into WaMDaM", "_____no_output_____" ], [ "<a name=\"Import\"></a>\n# 1. Import python libraries ", "_____no_output_____" ] ], [ [ "# 1. Import python libraries \n### set the notebook mode to embed the figures within the cell\nimport numpy\nimport sqlite3\nimport numpy as np\nimport pandas as pd\nimport getpass\nfrom hs_restclient import HydroShare, HydroShareAuthBasic\nimport os\n\nimport plotly\nplotly.__version__\nimport plotly.offline as offline\nimport plotly.graph_objs as go\nfrom plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot\noffline.init_notebook_mode(connected=True)\nfrom plotly.offline import init_notebook_mode, iplot\nfrom plotly.graph_objs import *\n\ninit_notebook_mode(connected=True) # initiate notebook for offline plot\n\nimport os\nimport csv\nfrom collections import OrderedDict\nimport sqlite3\nimport pandas as pd\nimport numpy as np\nfrom IPython.display import display, Image, SVG, Math, YouTubeVideo\nimport urllib\nimport calendar\n\nprint 'The needed Python libraries have been imported'", "_____no_output_____" ] ], [ [ "# 2. Connect to the WaMDaM SQLite on HydroSahre\n### Provide the HydroShare ID for your resource\nExample \nhttps://www.hydroshare.org/resource/af71ef99a95e47a89101983f5ec6ad8b/\n \nresource_id='85e9fe85b08244198995558fe7d0e294'", "_____no_output_____" ] ], [ [ "# enter your HydroShare username and password here between the quotes\nusername = ''\npassword = ''\n\nauth = HydroShareAuthBasic(username=username, password=password)\n\nhs = HydroShare(auth=auth)\n\nprint 'Connected to HydroShare'\n\n# Then we can run queries against it within this notebook :) \nresource_url='https://www.hydroshare.org/resource/af71ef99a95e47a89101983f5ec6ad8b/' \n\n\nresource_id= resource_url.split(\"https://www.hydroshare.org/resource/\",1)[1] \nresource_id=resource_id.replace('/','')\n\nprint resource_id\n\nresource_md = hs.getSystemMetadata(resource_id)\n# print resource_md\nprint 'Resource title'\nprint(resource_md['resource_title'])\nprint '----------------------------'\n\nresources=hs.resource(resource_id).files.all()\n\nfile = \"\"\n\nfor f in hs.resource(resource_id).files.all():\n\n file += f.decode('utf8')\n\nimport json\n\nfile_json = json.loads(file)\n\nfor f in file_json[\"results\"]:\n\n FileURL= f[\"url\"]\n \n SQLiteFileName=FileURL.split(\"contents/\",1)[1] \n\ncwd = os.getcwd()\nprint cwd\nfpath = hs.getResourceFile(resource_id, SQLiteFileName, destination=cwd)\nconn = sqlite3.connect(SQLiteFileName,timeout=10)\n\nprint 'Connected to the SQLite file= '+ SQLiteFileName\nprint 'done'", "_____no_output_____" ] ], [ [ "<a name=\"ConnectWEAP\"></a>\n# 2. 
Prepare to Connect to the WEAP API\n\n### You need to have WEAP already installed on your machine\n\nFirst make sure to have a copy of the \"Water Evaluation And Planning\" system (WEAP) installed on your local machine (Windows). If you don’t have it installed, download and install the WEAP software, which allows you to run the Bear River WEAP model and its scenarios for Use Case 5: https://www.weap21.org/. You need to have a WEAP License. See here (https://www.weap21.org/index.asp?action=217). If you're interested in learning more about the WEAP API, check it out here: http://www.weap21.org/WebHelp/API.htm \n\n\n## Install dependency and register WEAP\n### 2.1. Install the pywin32 extensions, which provide access to many of the Windows APIs from Python.\n**Choose one option**\n* a. Install using an executable based on your Python version. Use the version for Python 2.7\nhttps://github.com/mhammond/pywin32/releases \n\n**OR** \n\n* b. Install it using the Anaconda terminal @ https://anaconda.org/anaconda/pywin32\n\nType this command in the Anaconda terminal as Administrator \n\n    conda install -c anaconda pywin32 \n\n\n**OR**\n\n* c. Install from source code (for advanced users) \nhttps://github.com/mhammond/pywin32\n\n\n### 2.2. Register WEAP with Windows \n\n\nThis use case only works on a local Jupyter Notebook server installed on your machine along with WEAP. So it does not work on the online notebooks in Step 2.1. You need to install the Jupyter server in Step 2.2 and then proceed here.\n\n\n\n\n* **Register WEAP with Windows to allow the WEAP API to be accessed** \nUse the Windows \"Command Prompt\". Right click and then <font color=red>**run as Administrator**</font>, navigate to the WEAP installation directory, such as the one below, and then hit enter \n\n```\ncd C:\Program Files (x86)\WEAP\n```\n\nThen type the following command in the command prompt and hit enter \n```\nWEAP /regserver\n```\n\n\n<img src=\"https://github.com/WamdamProject/WaMDaM-software-ecosystem/blob/master/mkdocs/Edit_MD_Files/QuerySelect/images/RegisterWEAP_CMD.png?raw=true\" style=\"float:center;width:700px;padding:20px\"> \nFigure 1: Register the WEAP API with Windows using the Command Prompt (Run as Administrator)\n", "_____no_output_____" ], [ "\n# 3. Connect Jupyter Notebook to the WEAP API\n\nClone or download this GitHub repo \nhttps://github.com/WamdamProject/WaMDaM_UseCases \n\nIn your local repo folder, go to \n \n    C:\Users\Adel\Documents\GitHub\WaMDaM_UseCases/UseCases_files/1Original_Datasets_preperation_files/WEAP/Bear_River_WEAP_Model_2017\n\nCopy this folder **Bear_River_WEAP_Model_2017** and paste it into the **WEAP Areas** folder on your local machine. 
For example, it is at \n\n C:\\Users\\Adel\\Documents\\WEAP Areas \n", "_____no_output_____" ] ], [ [ "# this library is needed to connect to the WEAP API\nimport win32com.client\n\n# this command will open the WEAP software (if closed) and get the last active model\n# you could change the active area to another one inside WEAP or by passing it to the command here\n#WEAP.ActiveArea = \"BearRiverFeb2017_V10.9\"\n\n\nWEAP=win32com.client.Dispatch(\"WEAP.WEAPApplication\")\n\n# WEAP.Visible = 'FALSE'\n\n\nprint WEAP.ActiveArea.Name \nWEAP.ActiveArea = \"Bear_River_WEAP_Model_2017_Original\" \nprint WEAP.ActiveArea.Name \n\nWEAP.Areas(\"Bear_River_WEAP_Model_2017_Original\").Open\nWEAP.ActiveArea = \"Bear_River_WEAP_Model_2017_Original\" \nprint WEAP.ActiveArea.Name\n\n\nprint 'Connected to WEAP API and the '+ WEAP.ActiveArea.Name + ' Area'\nprint '-------------'\nif not WEAP.Registered:\n print \"Because WEAP is not registered, you cannot use the API\"\n\n# get the active WEAP Area (model) to serve data into it \n\n# ActiveArea=WEAP.ActiveArea.Name \n\n\n# get the active WEAP scenario to serve data into it \nprint '-------------'\n\nActiveScenario= WEAP.ActiveScenario.Name\nprint '\\n ActiveScenario= '+ActiveScenario\nprint '-------------'\n\nWEAP_Area_dir=WEAP.AreasDirectory\nprint WEAP_Area_dir\n\n\nprint \"\\n \\n You're connected to the WEAP API\"", "_____no_output_____" ] ], [ [ "<a name=\"CreateWEAP_Area\"></a>\n# 4 Create a copy of the original WEAP Area to use while keeping the orignial as-as for any later use\n\n\n<a name=\"AddScenarios\"></a>\n### Add a new CacheCountyUrbanWaterUse scenario from the Reference original WEAP Area: \n\n### You can always use this orignal one and delete any new copies you make afterwards.", "_____no_output_____" ] ], [ [ "# Create a copy of the WEAP AREA to serve the updated Hyrym Reservoir to it \n\n\n# Delete the Area if it exists and then add it. Start from fresh\nArea=\"Bear_River_WEAP_Model_2017_Conservation\"\n\nif not WEAP.Areas.Exists(Area):\n WEAP.SaveAreaAs(Area)\n\n\nWEAP.ActiveArea.Save\nWEAP.ActiveArea = \"Bear_River_WEAP_Model_2017_Conservation\" \nprint 'ActiveArea= '+WEAP.ActiveArea.Name\n\n# Add new Scenario\n# Add(NewScenarioName, ParentScenarioName or Index): \n# Create a new scenario as a child of the parent scenario specified. \n# The new scenario will become the selected scenario in the Data View. \n \n \n \nWEAP=win32com.client.Dispatch(\"WEAP.WEAPApplication\")\n# WEAP.Visible = FALSE\n\n\nWEAP.ActiveArea = \"Bear_River_WEAP_Model_2017_Conservation\" \n\nprint 'ActiveArea= '+ WEAP.ActiveArea.Name\n\nScenarios=[]\nScenarios=['Cons25PercCacheUrbWaterUse','Incr25PercCacheUrbWaterUse']\n\n# Delete the scenario if it exists and then add it. 
Start from fresh\nfor Scenario in Scenarios:\n if WEAP.Scenarios.Exists(Scenario):\n # delete it\n WEAP.Scenarios(Scenario).Delete(True)\n # add it back as a fresh copy\n WEAP.Scenarios.Add(Scenario,'Reference')\n else:\n WEAP.Scenarios.Add(Scenario,'Reference')\n \nWEAP.ActiveArea.Save\nWEAP.SaveArea\n\nWEAP.Quit\n\n# or add the scenarios one by one using this command \n \n# Make a copy from the reference (base) scenario\n# WEAP.Scenarios.Add('UpdateCacheDemand','Reference')\nprint '---------------------- \\n'\nprint 'Scenarios added to the original WEAP area' \n\n\nWEAP.Quit\n\nprint 'Connection with WEAP API is disconnected'", "_____no_output_____" ] ], [ [ "<a name=\"QuerySupplyDataLoadWEAP\"></a>\n# 4.A Query Cache County seasonal \"Monthly Demand\" for the three sites: Logan Potable, North Cache Potable, South Cache Potable\n\n### The data comes from OpenAgua", "_____no_output_____" ] ], [ [ "# Use Case 3.1Identify_aggregate_TimeSeriesValues.csv\n# plot aggregated to monthly and converted to acre-feet time series data of multiple sources\n\n\n# Logan Potable\n# North Cache Potable\n# South Cache Potable\n\n# 2.2Identify_aggregate_TimeSeriesValues.csv\nQuery_UseCase_URL=\"\"\"\nhttps://raw.githubusercontent.com/WamdamProject/WaMDaM_JupyterNotebooks/master/3_VisualizePublish/SQL_queries/WEAP/Query_demand_sites.sql\n\"\"\"\n\n# Read the query text inside the URL\nQuery_UseCase_text = urllib.urlopen(Query_UseCase_URL).read()\n\n\n# return query result in a pandas data frame\nresult_df_UseCase= pd.read_sql_query(Query_UseCase_text, conn)\n\n# uncomment the below line to see the list of attributes\n# display (result_df_UseCase)\nseasons_dict = dict()\nseasons_dict2=dict()\nScenarios=['Cons25PercCacheUrbWaterUse','Incr25PercCacheUrbWaterUse']\n\nsubsets = result_df_UseCase.groupby(['ScenarioName','InstanceName'])\nfor subset in subsets.groups.keys():\n if subset[0] in Scenarios:\n df_Seasonal = subsets.get_group(name=subset)\n df_Seasonal=df_Seasonal.reset_index()\n\n SeasonalParam = ''\n for i in range(len(df_Seasonal['SeasonName'])):\n m_data = df_Seasonal['SeasonName'][i]\n n_data = float(df_Seasonal['SeasonNumericValue'][i])\n SeasonalParam += '{},{}'.format(m_data, n_data)\n if i != len(df_Seasonal['SeasonName']) - 1:\n SeasonalParam += ','\n\n Seasonal_value=\"MonthlyValues(\"+SeasonalParam+\")\"\n seasons_dict[subset]=(Seasonal_value) \n# seasons_dict2[subset[0]]=seasons_dict\n# print seasons_dict2\n\nprint '-----------------'\n# print seasons_dict\n\n\n# seasons_dict2.get(\"Cons25PercCacheUrbWaterUse\", {}).get(\"Logan Potable\") # 1\n\nprint 'Query and data preperation are done'", "_____no_output_____" ] ], [ [ "<a name=\"LoadFlow\"></a>\n# 4.B Load the seasonal demand data with conservation into WEAP", "_____no_output_____" ] ], [ [ "# 9. 
Load the seasonal data into WEAP\n#WEAP=win32com.client.Dispatch(\"WEAP.WEAPApplication\")\n# WEAP.Visible = FALSE\n\n\nprint WEAP.ActiveArea.Name\nScenarios=['Cons25PercCacheUrbWaterUse','Incr25PercCacheUrbWaterUse']\nDemandSites=['Logan Potable','North Cache Potable','South Cache Potable']\n\nAttributeName='Monthly Demand' \n \nfor scenario in Scenarios:\n WEAP.ActiveScenario = scenario\n print WEAP.ActiveScenario.Name\n\n for Branch in WEAP.Branches:\n for InstanceName in DemandSites:\n if Branch.Name == InstanceName:\n GetInstanceFullBranch = Branch.FullName\n val=seasons_dict[(scenario,InstanceName)] \n WEAP.Branch(GetInstanceFullBranch).Variable(AttributeName).Expression =val\n# print val\n print \"loaded \" + InstanceName\n WEAP.SaveArea\n\nprint '\\n The data have been sucsesfully loaded into WEAP'\n\nWEAP.SaveArea\n\nprint '\\n \\n The updated data have been saved'", "_____no_output_____" ] ], [ [ "# 5. Run WEAP\n<font color=green>**Please wait, it will take ~1-3 minutes** to finish calcualting the two WEAP Areas with their many scenarios</font>", "_____no_output_____" ] ], [ [ "# Run WEAP\n\nWEAP.Areas(\"Bear_River_WEAP_Model_2017_Conservation\").Open\nprint WEAP.ActiveArea.Name\n\nWEAP.ActiveArea = \"Bear_River_WEAP_Model_2017_Conservation\" \nprint WEAP.ActiveArea.Name\n\nprint 'Please wait 1-3 min for the calculation to finish'\nWEAP.Calculate(2006,10,True)\nWEAP.SaveArea\n\nprint '\\n \\n The calculation has been done and saved'\nprint WEAP.CalculationTime\n\nprint '\\n \\n Done'\n", "_____no_output_____" ] ], [ [ "## 5.1 Get the unmet demand or Cache County sites in both the reference and the conservation scenarios", "_____no_output_____" ] ], [ [ "Scenarios=['Reference','Cons25PercCacheUrbWaterUse','Incr25PercCacheUrbWaterUse']\nDemandSites=['Logan Potable','North Cache Potable','South Cache Potable']\n\nUnmetDemandEstimate_Ref = pd.DataFrame(columns = DemandSites)\nUnmetDemandEstimate_Cons25 = pd.DataFrame(columns = DemandSites)\nUnmetDemandEstimate_Incr25 = pd.DataFrame(columns = DemandSites)\n\nUnmetDemandEstimate= pd.DataFrame(columns = Scenarios)\n\nfor scen in Scenarios:\n if scen=='Reference':\n for site in DemandSites:\n param=\"\\Demand Sites\\%s: Unmet Demand[Acre-Foot]\"%(site)\n# print param\n for year in range (1966,2006):\n value=WEAP.ResultValue(param, year, 1, scen, year, WEAP.NumTimeSteps) \n UnmetDemandEstimate_Ref.loc[year, [site]]=value\n elif scen=='Cons25PercCacheUrbWaterUse':\n for site in DemandSites:\n param=\"\\Demand Sites\\%s: Unmet Demand[Acre-Foot]\"%(site)\n# print param\n for year in range (1966,2006):\n value=WEAP.ResultValue(param, year, 1, scen, year, WEAP.NumTimeSteps) \n UnmetDemandEstimate_Cons25.loc[year, [site]]=value\n \n elif scen=='Incr25PercCacheUrbWaterUse':\n for site in DemandSites:\n param=\"\\Demand Sites\\%s: Unmet Demand[Acre-Foot]\"%(site)\n# print param\n for year in range (1966,2006):\n value=WEAP.ResultValue(param, year, 1, scen, year, WEAP.NumTimeSteps) \n UnmetDemandEstimate_Incr25.loc[year, [site]]=value \n \n \nUnmetDemandEstimate_Ref['Cache Total']=UnmetDemandEstimate_Ref[DemandSites].sum(axis=1)\n\nUnmetDemandEstimate_Cons25['Cache Total']=UnmetDemandEstimate_Cons25[DemandSites].sum(axis=1)\n\nUnmetDemandEstimate_Incr25['Cache Total']=UnmetDemandEstimate_Incr25[DemandSites].sum(axis=1)\n\nUnmetDemandEstimate['Reference']=UnmetDemandEstimate_Ref['Cache Total']\nUnmetDemandEstimate['Cons25PercCacheUrbWaterUse']=UnmetDemandEstimate_Cons25['Cache 
Total']\nUnmetDemandEstimate['Incr25PercCacheUrbWaterUse']=UnmetDemandEstimate_Incr25['Cache Total']\n\nUnmetDemandEstimate=UnmetDemandEstimate.rename_axis('Year',axis=\"columns\")\n\nprint 'Done estimating the unment demnd pecentage for each scenario'\n# display(UnmetDemandEstimate)", "_____no_output_____" ] ], [ [ "## 5.2 Get the unmet demand as a percentage for the scenarios", "_____no_output_____" ] ], [ [ "\n########################################################################\n# estimate the total reference demand for Cahce county to calcualte the percentage \nresult_df_UseCase= pd.read_sql_query(Query_UseCase_text, conn)\n\nsubsets = result_df_UseCase.groupby(['ScenarioName'])\nfor subset in subsets.groups.keys():\n if subset=='Bear River WEAP Model 2017': # reference\n df_Seasonal = subsets.get_group(name=subset)\n df_Seasonal=df_Seasonal.reset_index()\n# display (df_Seasonal)\n \nTot=df_Seasonal[\"SeasonNumericValue\"].tolist()\n\nfloat_lst = [float(x) for x in Tot]\n\nAnnual_Demand=sum(float_lst)\nprint Annual_Demand\n\n########################################################################\n\n\n\nyears =UnmetDemandEstimate.index.values\n\nReference_vals =UnmetDemandEstimate['Reference'].tolist()\nReference_vals_perc =((numpy.array([Reference_vals]))/Annual_Demand)*100\n\n\nCons25PercCacheUrbWaterUse_vals =UnmetDemandEstimate['Cons25PercCacheUrbWaterUse'].tolist()\nCons25PercCacheUrbWaterUse_vals_perc =((numpy.array([Cons25PercCacheUrbWaterUse_vals]))/Annual_Demand)*100\n\nIncr25PercCacheUrbWaterUse_vals =UnmetDemandEstimate['Incr25PercCacheUrbWaterUse'].tolist()\nIncr25PercCacheUrbWaterUse_vals_perc =((numpy.array([Incr25PercCacheUrbWaterUse_vals]))/Annual_Demand)*100\n\nprint 'done estimating unmet demnd the percentages'", "_____no_output_____" ] ], [ [ "# 5.3 Export the unmet demand percent into Excel to load them into WaMDaM", "_____no_output_____" ] ], [ [ "# display(UnmetDemandEstimate)\nimport xlsxwriter\nfrom collections import OrderedDict\n\nUnmetDemandEstimate.to_csv('UnmetDemandEstimate.csv')\n\n\nExcelFileName='Test.xlsx'\n\nyears =UnmetDemandEstimate.index.values\n#print years\n\nColumns=['ObjectType','InstanceName','ScenarioName','AttributeName','DateTimeStamp','Value']\n\n# these three columns have fixed values for all the rows\nObjectType='Demand Site'\nInstanceName='Cache County Urban'\nAttributeName='UnmetDemand'\n\n# this dict contains the keysL (scenario name) and the values are in a list\n# years exist in UnmetDemandEstimate. 
We then need to add day and month to the year date \n# like this format: # DateTimeStamp= 1/1/1993\n\nScenarios = OrderedDict()\n\nScenarios['Bear River WEAP Model 2017_result'] = Reference_vals_perc\nScenarios['Incr25PercCacheUrbWaterUse_result'] = Incr25PercCacheUrbWaterUse_vals_perc\nScenarios['Cons25PercCacheUrbWaterUse_result'] = Cons25PercCacheUrbWaterUse_vals_perc\n#print Incr25PercCacheUrbWaterUse_vals_perc\n\n\nworkbook = xlsxwriter.Workbook(ExcelFileName)\nsheet = workbook.add_worksheet('sheet')\n\n\n# write headers\nfor i, header_name in enumerate(Columns):\n sheet.write(0, i, header_name)\nrow = 1\ncol = 0\n\n\nfor scenario_name in Scenarios.keys():\n for val_list in Scenarios[scenario_name]:\n# print val_list\n for i, val in enumerate(val_list):\n# print years[i]\n date_timestamp = '1/1/{}'.format(years[i])\n\n sheet.write(row, 0, ObjectType)\n sheet.write(row, 1, InstanceName)\n sheet.write(row, 2, scenario_name)\n sheet.write(row, 3, AttributeName)\n sheet.write(row, 4, date_timestamp)\n sheet.write(row, 5, val)\n\n row += 1\n \nworkbook.close()\n\n\nprint 'done writing to Excel'\n\nprint 'Next, copy the exported data into a WaMDaM workbook template for the WEAP model'\n", "_____no_output_____" ] ], [ [ "# 6. Plot the unmet demad for all the scenarios and years\n", "_____no_output_____" ] ], [ [ "\n\ntrace2 = go.Scatter(\n x=years,\n y=Reference_vals_perc[0],\n name = 'Reference demand',\n mode = 'lines+markers',\n\n marker = dict(\n color = '#264DFF',\n \n))\n\n\ntrace3 = go.Scatter(\n x=years,\n y=Cons25PercCacheUrbWaterUse_vals_perc[0],\n name = 'Conserve demand by 25%', \n mode = 'lines+markers',\n\n marker = dict(\n color = '#3FA0FF'\n))\n\ntrace1 = go.Scatter(\n x=years,\n y=Incr25PercCacheUrbWaterUse_vals_perc[0],\n name = 'Increase demand by 25%', \n mode = 'lines+markers',\n\n marker = dict(\n color = '#290AD8'\n))\n\nlayout = dict(\n #title = \"Use Case 3.3\",\n yaxis = dict(\n title = \"Annual unmet demand (%)\",\n tickformat= ',',\n showline=True,\n dtick='5',\n ticks='outside',\n \n ticklen=10,\n tickcolor='#000',\n gridwidth=1,\n showgrid=True,\n\n ),\n \n xaxis = dict(\n# title = \"Updated input parameters in the <br>Bear_River_WEAP_Model_2017\",\n# showline=True,\n ticks='inside',\n tickfont=dict(size=22),\n tickcolor='#000',\n gridwidth=1,\n showgrid=True,\n\n ticklen=25\n ),\n legend=dict(\n x=0.05,y=1.1,\n bordercolor='#00000f',\n borderwidth=2\n \n ),\n width=1100,\n height=700,\n #paper_bgcolor='rgb(233,233,233)',\n #plot_bgcolor='rgb(233,233,233)',\n margin=go.Margin(l=130,b=200),\n font=dict(size=25,family='arial',color='#00000f'),\n showlegend=True\n)\ndata = [trace1, trace2,trace3]\n\n\n# create a figure object\nfig = dict(data=data, layout=layout)\n#py.iplot(fig, filename = \"2.3Identify_SeasonalValues\") \n\n\n## it can be run from the local machine on Pycharm like this like below\n## It would also work here offline but in a seperate window \noffline.iplot(fig,filename = 'jupyter/UnmentDemand@BirdRefuge' ) \n\nprint \"Figure x is replicated!!\"", "_____no_output_____" ] ], [ [ "<a name=\"Close\"></a>\n# 7. Upload the new result scenarios to OpenAgua to visulize them there", "_____no_output_____" ], [ "You already uploaded the results form WaMDaM SQLite earlier at the begnining of these Jupyter Notebooks. So all you need is to select to display the result in OpenAgua. Finally, click, load data. 
It should replicate the same figure above and Figure 6 in the paper\n\n<img src=\"https://github.com/WamdamProject/WaMDaM-software-ecosystem/blob/master/mkdocs/Edit_MD_Files/images/WEAP_results_OA.PNG?raw=true\" style=\"float:center;width:900px;padding:20px\"> \n\n\n<img src=\"https://github.com/WamdamProject/WaMDaM-software-ecosystem/blob/master/mkdocs/Edit_MD_Files/images/WEAP_results_OA2.PNG?raw=true\" style=\"float:center;width:900px;padding:20px\"> \n\n", "_____no_output_____" ], [ "<a name=\"Close\"></a>\n# 8. Close the SQLite and WEAP API connections", "_____no_output_____" ] ], [ [ "# 9. Close the SQLite and WEAP API connections\nconn.close()\n\nprint 'connection disconnected'\n\n# Uncomment \nWEAP.SaveArea\n\n\n\n# this command will close WEAP\nWEAP.Quit\n\nprint 'Connection with WEAP API is disconnected'", "_____no_output_____" ] ], [ [ "# The End :) Congratulations!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
d03549b308d755c586d06320ad40ce5be1461601
140,120
ipynb
Jupyter Notebook
notebooks/Convertmp3ToPiezoBeeps.ipynb
CallumJHays/g26-egb320-2019
6dde6b5d2f72fac3928c5042a27dc50e978c3425
[ "MIT" ]
null
null
null
notebooks/Convertmp3ToPiezoBeeps.ipynb
CallumJHays/g26-egb320-2019
6dde6b5d2f72fac3928c5042a27dc50e978c3425
[ "MIT" ]
null
null
null
notebooks/Convertmp3ToPiezoBeeps.ipynb
CallumJHays/g26-egb320-2019
6dde6b5d2f72fac3928c5042a27dc50e978c3425
[ "MIT" ]
null
null
null
979.86014
90,552
0.958129
[ [ [ "import librosa\n\ny, sr = librosa.load(\"../data/ImperialMarch.mp3\")\nsecs = len(y) / sr\nsecs", "_____no_output_____" ], [ "import librosa.display\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = [18, 5]\n\nlibrosa.display.waveplot(y)", "_____no_output_____" ], [ "centroids = librosa.feature.spectral_centroid(y, sr)[0]\n\nplt.plot(centroids)", "_____no_output_____" ], [ "spect.shape", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
d035508a6c97a038117fb7325c0288ab4e975802
3,846
ipynb
Jupyter Notebook
forest_covertype_prediction_with_random_forests/covertype-rf-py.ipynb
Davidportlouis/examples
62526e824cf446f4c8bcebb4e4b6d8550b7e83bb
[ "BSD-3-Clause" ]
54
2020-03-20T05:40:01.000Z
2022-03-18T13:56:46.000Z
forest_covertype_prediction_with_random_forests/covertype-rf-py.ipynb
Davidportlouis/examples
62526e824cf446f4c8bcebb4e4b6d8550b7e83bb
[ "BSD-3-Clause" ]
136
2020-03-19T18:23:18.000Z
2022-03-31T09:08:10.000Z
forest_covertype_prediction_with_random_forests/covertype-rf-py.ipynb
Davidportlouis/examples
62526e824cf446f4c8bcebb4e4b6d8550b7e83bb
[ "BSD-3-Clause" ]
47
2020-03-20T11:51:34.000Z
2022-03-23T15:44:10.000Z
26.163265
326
0.547322
[ [ [ "[![Binder](https://mybinder.org/badge_logo.svg)](https://lab.mlpack.org/v2/gh/mlpack/examples/master?urlpath=lab%2Ftree%2Fforest_covertype_prediction_with_random_forests%2Fcovertype-rf-py.ipynb)", "_____no_output_____" ] ], [ [ "# @file covertype-rf-py.ipynb\n#\n# Classification using Random Forest on the Covertype dataset.", "_____no_output_____" ], [ "import mlpack\nimport pandas as pd\nimport numpy as np", "_____no_output_____" ], [ "# Load the dataset from an online URL.\ndf = pd.read_csv('https://lab.mlpack.org/data/covertype-small.csv.gz')", "_____no_output_____" ], [ "# Split the labels.\nlabels = df['label']\ndataset = df.drop('label', 1)", "_____no_output_____" ], [ "# Split the dataset using mlpack. The output comes back as a dictionary, which\n# we'll unpack for clarity of code.\noutput = mlpack.preprocess_split(input=dataset, input_labels=labels, test_ratio=0.3)", "_____no_output_____" ], [ "training_set = output['training']\ntraining_labels = output['training_labels']\ntest_set = output['test']\ntest_labels = output['test_labels']", "_____no_output_____" ], [ "# Train a random forest.\noutput = mlpack.random_forest(training=training_set, labels=training_labels,\n print_training_accuracy=True, num_trees=10, minimum_leaf_size=3)", "_____no_output_____" ], [ "random_forest = output['output_model']", "_____no_output_____" ], [ "# Predict the labels of the test points.\noutput = mlpack.random_forest(input_model=random_forest, test=test_set)", "_____no_output_____" ], [ "# Now print the accuracy. The 'probabilities' output could also be used to\n# generate an ROC curve.\ncorrect = np.sum(output['predictions'] == test_labels.flatten())\nprint(str(correct) + ' correct out of ' + str(len(test_labels)) +\n ' (' + str(100 * float(correct) / float(len(test_labels))) + '%).')", "24513 correct out of 30000 (81.71%).\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d0355eba54686abb3cb3b11bba7812c168630ee2
11,998
ipynb
Jupyter Notebook
Samuel_notebooks/Unet_notebook.ipynb
Samuel-van-Gurp/fastMRI
0b1884a1c218961f81199144057ffcfde29a86ad
[ "MIT" ]
null
null
null
Samuel_notebooks/Unet_notebook.ipynb
Samuel-van-Gurp/fastMRI
0b1884a1c218961f81199144057ffcfde29a86ad
[ "MIT" ]
null
null
null
Samuel_notebooks/Unet_notebook.ipynb
Samuel-van-Gurp/fastMRI
0b1884a1c218961f81199144057ffcfde29a86ad
[ "MIT" ]
null
null
null
44.60223
1,507
0.58143
[ [ [ "import argparse\nimport time\nfrom collections import defaultdict\nfrom pathlib import Path\nimport h5py\nimport fastmri\nimport fastmri.data.transforms as T\nimport numpy as np\nimport requests\nimport torch\nfrom fastmri.data import SliceDataset\nfrom fastmri.models import Unet\nfrom tqdm import tqdm\n", "_____no_output_____" ], [ "# loading multi coil knee file \n\nfname = '/scratch/svangurp/samuel/data/knee/train/file1000002.h5'\ndata = h5py.File(fname, 'r')\nkspace = data[\"kspace\"][()]\n", "_____no_output_____" ], [ "def run_unet_model(batch, model, device):\n image, _, mean, std, fname, slice_num, _ = batch\n\n output = model(image.to(device).unsqueeze(1)).squeeze(1).cpu()\n\n mean = mean.unsqueeze(1).unsqueeze(2)\n std = std.unsqueeze(1).unsqueeze(2)\n output = (output * std + mean).cpu()\n\n return output, int(slice_num[0]), fname[0]\n", "_____no_output_____" ], [ "def run_inference(challenge, state_dict_file, data_path, output_path, device):\n model = Unet(in_chans=1, out_chans=1, chans=256, num_pool_layers=4, drop_prob=0.0)\n # download the state_dict if we don't have it\n if state_dict_file is None:\n if not Path(MODEL_FNAMES[challenge]).exists():\n url_root = UNET_FOLDER\n download_model(url_root + MODEL_FNAMES[challenge], MODEL_FNAMES[challenge])\n\n state_dict_file = MODEL_FNAMES[challenge]\n\n model.load_state_dict(torch.load(state_dict_file))\n model = model.eval()\n\n # data loader setup\n if \"_mc\" in challenge:\n data_transform = T.UnetDataTransform(which_challenge=\"multicoil\")\n else:\n data_transform = T.UnetDataTransform(which_challenge=\"singlecoil\")\n\n if \"_mc\" in challenge:\n dataset = SliceDataset(\n root=data_path,\n transform=data_transform,\n challenge=\"multicoil\",\n )\n else:\n dataset = SliceDataset(\n root=data_path,\n transform=data_transform,\n challenge=\"singlecoil\",\n )\n dataloader = torch.utils.data.DataLoader(dataset, num_workers=4)\n\n # run the model\n start_time = time.perf_counter()\n outputs = defaultdict(list)\n model = model.to(device)\n\n for batch in tqdm(dataloader, desc=\"Running inference\"):\n with torch.no_grad():\n output, slice_num, fname = run_unet_model(batch, model, device)\n\n outputs[fname].append((slice_num, output))\n\n # save outputs\n for fname in outputs:\n outputs[fname] = np.stack([out for _, out in sorted(outputs[fname])])\n\n fastmri.save_reconstructions(outputs, output_path / \"reconstructions\")\n\n end_time = time.perf_counter()\n\n print(f\"Elapsed time for {len(dataloader)} slices: {end_time-start_time}\")\n\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser(\n formatter_class=argparse.ArgumentDefaultsHelpFormatter\n )\n parser.add_argument(\n \"--challenge\",\n default=\"unet_knee_sc\",\n choices=(\n \"unet_knee_sc\",\n \"unet_knee_mc\",\n \"unet_brain_mc\",\n ),\n type=str,\n help=\"Model to run\",\n )\n parser.add_argument(\n \"--device\",\n default=\"cuda\",\n type=str,\n help=\"Model to run\",\n )\n parser.add_argument(\n \"--state_dict_file\",\n default=None,\n type=Path,\n help=\"Path to saved state_dict (will download if not provided)\",\n )\n parser.add_argument(\n \"--data_path\",\n type=Path,\n required=True,\n help=\"Path to subsampled data\",\n )\n parser.add_argument(\n \"--output_path\",\n type=Path,\n required=True,\n help=\"Path for saving reconstructions\",\n )\n\n args = parser.parse_args()\n\n run_inference(\n args.challenge,\n args.state_dict_file,\n args.data_path,\n args.output_path,\n torch.device(args.device),\n )\n", "usage: ipykernel_launcher.py [-h]\n 
[--challenge {unet_knee_sc,unet_knee_mc,unet_brain_mc}]\n [--device DEVICE]\n [--state_dict_file STATE_DICT_FILE] --data_path\n DATA_PATH --output_path OUTPUT_PATH\nipykernel_launcher.py: error: the following arguments are required: --data_path, --output_path\n" ], [ "challenge = 'unet_knee_mc' \nstate_dict_file ='/home/svangurp/scratch/samuel/pretrained/knee/unet/knee_mc_leaderboard_state_dict.pt'\ndata_path = '/scratch/svangurp/samuel/data/knee/train/'\noutput_path ='/home/svangurp/scratch/samuel/data/knee/model_ouputs/Unet_recon_knee_mc/'\ndevice = 'cuda' \n\nrun_inference(challenge, state_dict_file, data_path, output_path, device)", "Running inference: 100%|██████████| 1448/1448 [03:20<00:00, 7.21it/s]\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
d035636e058a4001ef345ee3804f6efe3a80750b
8,774
ipynb
Jupyter Notebook
reinforcement_learning/rl_hvac_ray_energyplus/train-hvac.ipynb
P15241328/amazon-sagemaker-examples
00cba545be0822474f070321a62d22865187e09b
[ "Apache-2.0" ]
2
2021-07-20T18:25:10.000Z
2022-01-20T00:04:07.000Z
reinforcement_learning/rl_hvac_ray_energyplus/train-hvac.ipynb
P15241328/amazon-sagemaker-examples
00cba545be0822474f070321a62d22865187e09b
[ "Apache-2.0" ]
1
2021-03-25T18:31:29.000Z
2021-03-25T18:31:29.000Z
reinforcement_learning/rl_hvac_ray_energyplus/train-hvac.ipynb
P15241328/amazon-sagemaker-examples
00cba545be0822474f070321a62d22865187e09b
[ "Apache-2.0" ]
1
2021-06-08T14:40:24.000Z
2021-06-08T14:40:24.000Z
34.007752
370
0.563027
[ [ [ "# Optimizing building HVAC with Amazon SageMaker RL", "_____no_output_____" ] ], [ [ "import sagemaker\nimport boto3\n\nfrom sagemaker.rl import RLEstimator\n\nfrom source.common.docker_utils import build_and_push_docker_image", "_____no_output_____" ] ], [ [ "## Initialize Amazon SageMaker", "_____no_output_____" ] ], [ [ "role = sagemaker.get_execution_role()\nsm_session = sagemaker.session.Session()\n\n# SageMaker SDK creates a default bucket. Change this bucket to your own bucket, if needed.\ns3_bucket = sm_session.default_bucket()\n\ns3_output_path = f's3://{s3_bucket}'\nprint(f'S3 bucket path: {s3_output_path}')\nprint(f'Role: {role}')", "_____no_output_____" ] ], [ [ "## Set additional training parameters\n\n### Set instance type\n\nSet `cpu_or_gpu` to either `'cpu'` or `'gpu'` for using CPU or GPU instances.\n\n### Configure the framework you want to use\n\nSet `framework` to `'tf'` or `'torch'` for TensorFlow or PyTorch, respectively.\n\nYou will also have to edit your entry point i.e., `train-sagemaker-distributed.py` with the configuration parameter `\"use_pytorch\"` to match the framework that you have selected.", "_____no_output_____" ] ], [ [ "job_name_prefix = 'energyplus-hvac-ray'\n\ncpu_or_gpu = 'gpu' # has to be either cpu or gpu\nif cpu_or_gpu != 'cpu' and cpu_or_gpu != 'gpu':\n raise ValueError('cpu_or_gpu has to be either cpu or gpu')\n \nframework = 'tf' \n\ninstance_type = 'ml.g4dn.16xlarge' # g4dn.16x large has 1 GPU and 64 cores", "_____no_output_____" ] ], [ [ "# Train your homogeneous scaling job here", "_____no_output_____" ], [ "### Edit the training code\n\nThe training code is written in the file `train-sagemaker-distributed.py` which is uploaded in the /source directory.\n\n*Note that ray will automatically set `\"ray_num_cpus\"` and `\"ray_num_gpus\"` in `_get_ray_config`*", "_____no_output_____" ] ], [ [ "!pygmentize source/train-sagemaker-distributed.py", "_____no_output_____" ] ], [ [ "### Train the RL model using the Python SDK Script mode\n\nWhen using SageMaker for distributed training, you can select a GPU or CPU instance. The RLEstimator is used for training RL jobs.\n\n1. Specify the source directory where the environment, presets and training code is uploaded.\n2. Specify the entry point as the training code\n3. Specify the image (CPU or GPU) to be used for the training environment.\n4. Define the training parameters such as the instance count, job name, S3 path for output and job name.\n5. Define the metrics definitions that you are interested in capturing in your logs. 
These can also be visualized in CloudWatch and SageMaker Notebooks.", "_____no_output_____" ], [ "#### GPU docker image", "_____no_output_____" ] ], [ [ "# Build image\n \nrepository_short_name = f'sagemaker-hvac-ray-{cpu_or_gpu}'\ndocker_build_args = {\n 'CPU_OR_GPU': cpu_or_gpu, \n 'AWS_REGION': boto3.Session().region_name,\n 'FRAMEWORK': framework\n}\n\nimage_name = build_and_push_docker_image(repository_short_name, build_args=docker_build_args)\nprint(\"Using ECR image %s\" % image_name)", "_____no_output_____" ], [ "metric_definitions = [\n {'Name': 'training_iteration', 'Regex': 'training_iteration: ([-+]?[0-9]*[.]?[0-9]+([eE][-+]?[0-9]+)?)'}, \n {'Name': 'episodes_total', 'Regex': 'episodes_total: ([-+]?[0-9]*[.]?[0-9]+([eE][-+]?[0-9]+)?)'}, \n {'Name': 'num_steps_trained', 'Regex': 'num_steps_trained: ([-+]?[0-9]*[.]?[0-9]+([eE][-+]?[0-9]+)?)'}, \n {'Name': 'timesteps_total', 'Regex': 'timesteps_total: ([-+]?[0-9]*[.]?[0-9]+([eE][-+]?[0-9]+)?)'},\n {'Name': 'training_iteration', 'Regex': 'training_iteration: ([-+]?[0-9]*[.]?[0-9]+([eE][-+]?[0-9]+)?)'},\n\n {'Name': 'episode_reward_max', 'Regex': 'episode_reward_max: ([-+]?[0-9]*[.]?[0-9]+([eE][-+]?[0-9]+)?)'}, \n {'Name': 'episode_reward_mean', 'Regex': 'episode_reward_mean: ([-+]?[0-9]*[.]?[0-9]+([eE][-+]?[0-9]+)?)'}, \n {'Name': 'episode_reward_min', 'Regex': 'episode_reward_min: ([-+]?[0-9]*[.]?[0-9]+([eE][-+]?[0-9]+)?)'},\n] ", "_____no_output_____" ] ], [ [ "### Ray homogeneous scaling - Specify `train_instance_count` > 1\n\nHomogeneous scaling allows us to use multiple instances of the same type.\n\nSpot instances are unused EC2 instances that could be used at 90% discount compared to On-Demand prices (more information about spot instances can be found [here](https://aws.amazon.com/ec2/spot/?cards.sort-by=item.additionalFields.startDateTime&cards.sort-order=asc) and [here](https://docs.aws.amazon.com/sagemaker/latest/dg/model-managed-spot-training.html))\n\nTo use spot instances, set `train_use_spot_instances = True`. To use On-Demand instances, `train_use_spot_instances = False`.", "_____no_output_____" ] ], [ [ "hyperparameters = {\n # no. of days to simulate. Remember to adjust the dates in RunPeriod of \n # 'source/eplus/envs/buildings/MediumOffice/RefBldgMediumOfficeNew2004_Chicago.idf' to match simulation days.\n 'n_days': 365,\n 'n_iter': 50, # no. of training iterations\n 'algorithm': 'APEX_DDPG', # only APEX_DDPG and PPO are tested\n 'multi_zone_control': True, # if each zone temperature set point has to be independently controlled\n 'energy_temp_penalty_ratio': 10\n}\n\n# Set additional training parameters\ntraining_params = {\n 'base_job_name': job_name_prefix, \n 'train_instance_count': 1,\n 'tags': [{'Key': k, 'Value': str(v)} for k,v in hyperparameters.items()]\n}\n\n# Defining the RLEstimator\nestimator = RLEstimator(entry_point=f'train-sagemaker-hvac.py',\n source_dir='source',\n dependencies=[\"source/common/\"],\n image_uri=image_name,\n role=role,\n train_instance_type=instance_type, \n# train_instance_type='local', \n output_path=s3_output_path,\n metric_definitions=metric_definitions,\n hyperparameters=hyperparameters,\n **training_params\n )\n\nestimator.fit(wait=False)\n\nprint(' ')\nprint(estimator.latest_training_job.job_name)\nprint('type=', instance_type, 'count=', training_params['train_instance_count'])\nprint(' ')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
d03574d55460414b9ac089f8538e71787e6ad2c7
958,728
ipynb
Jupyter Notebook
examples/notebooks/spleen_segmentation_3d_lightning.ipynb
loftwah/MONAI
37fb3e779121e6dc74127993df102fc91d9065f8
[ "Apache-2.0" ]
1
2020-04-22T21:13:31.000Z
2020-04-22T21:13:31.000Z
examples/notebooks/spleen_segmentation_3d_lightning.ipynb
tranduyquockhanh/MONAI
37fb3e779121e6dc74127993df102fc91d9065f8
[ "Apache-2.0" ]
null
null
null
examples/notebooks/spleen_segmentation_3d_lightning.ipynb
tranduyquockhanh/MONAI
37fb3e779121e6dc74127993df102fc91d9065f8
[ "Apache-2.0" ]
1
2021-09-30T08:57:27.000Z
2021-09-30T08:57:27.000Z
1,964.606557
140,556
0.95796
[ [ [ "# Spleen 3D segmentation with MONAI", "_____no_output_____" ], [ "This tutorial demonstrates how MONAI can be used in conjunction with the [PyTorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) framework.\n\nWe demonstrate use of the following MONAI features:\n1. Transforms for dictionary format data.\n2. Loading Nifti images with metadata.\n3. Add channel dim to the data if no channel dimension.\n4. Scaling medical image intensity with expected range.\n5. Croping out a batch of balanced images based on the positive / negative label ratio.\n6. Cache IO and transforms to accelerate training and validation.\n7. Use of a a 3D UNet model, Dice loss function, and mean Dice metric for a 3D segmentation task.\n8. The sliding window inference method.\n9. Deterministic training for reproducibility.\n\nThe training Spleen dataset used in this example can be downloaded from from http://medicaldecathlon.com//\n\n![spleen](http://medicaldecathlon.com/img/spleen0.png)\n\n\nTarget: Spleen \nModality: CT \nSize: 61 3D volumes (41 Training + 20 Testing) \nSource: Memorial Sloan Kettering Cancer Center \nChallenge: Large ranging foreground size", "_____no_output_____" ], [ "In addition to the usual MONAI requirements you will need Lightning installed.", "_____no_output_____" ] ], [ [ "! pip install pytorch-lightning", "_____no_output_____" ], [ "# Copyright 2020 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport sys\nimport glob\nimport numpy as np\nimport torch\nfrom torch.utils.data import DataLoader\nimport matplotlib.pyplot as plt\nimport monai\nfrom monai.transforms import \\\n Compose, LoadNiftid, AddChanneld, ScaleIntensityRanged, RandCropByPosNegLabeld, \\\n RandAffined, Spacingd, Orientationd, ToTensord\nfrom monai.data import list_data_collate, sliding_window_inference\nfrom monai.networks.layers import Norm\nfrom monai.metrics import compute_meandice\nfrom pytorch_lightning import LightningModule, Trainer, loggers\nfrom pytorch_lightning.callbacks.model_checkpoint import ModelCheckpoint\n\nmonai.config.print_config()", "MONAI version: 0.1.0rc2+11.gdb4531b.dirty\nPython version: 3.6.8 (default, Oct 7 2019, 12:59:55) [GCC 8.3.0]\nNumpy version: 1.18.2\nPytorch version: 1.4.0\nIgnite version: 0.3.0\n" ] ], [ [ "## Define the LightningModule\n\nThe LightningModule contains a refactoring of your training code. 
The following module is a refactoring of the code in `spleen_segmentation_3d.ipynb`: ", "_____no_output_____" ] ], [ [ "class Net(LightningModule):\n def __init__(self):\n super().__init__()\n self._model = monai.networks.nets.UNet(dimensions=3, in_channels=1, out_channels=2, channels=(16, 32, 64, 128, 256),\n strides=(2, 2, 2, 2), num_res_units=2, norm=Norm.BATCH)\n self.loss_function = monai.losses.DiceLoss(to_onehot_y=True, do_softmax=True)\n self.best_val_dice = 0\n self.best_val_epoch = 0\n\n \n def forward(self, x): \n return self._model(x)\n \n def prepare_data(self):\n # set up the correct data path\n data_root = '/workspace/data/medical/Task09_Spleen'\n train_images = glob.glob(os.path.join(data_root, 'imagesTr', '*.nii.gz'))\n train_labels = glob.glob(os.path.join(data_root, 'labelsTr', '*.nii.gz'))\n data_dicts = [{'image': image_name, 'label': label_name}\n for image_name, label_name in zip(train_images, train_labels)]\n train_files, val_files = data_dicts[:-9], data_dicts[-9:]\n \n # define the data transforms\n train_transforms = Compose([\n LoadNiftid(keys=['image', 'label']),\n AddChanneld(keys=['image', 'label']),\n Spacingd(keys=['image', 'label'], pixdim=(1.5, 1.5, 2.), interp_order=(3, 0)),\n Orientationd(keys=['image', 'label'], axcodes='RAS'),\n ScaleIntensityRanged(keys=['image'], a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),\n # randomly crop out patch samples from big image based on pos / neg ratio\n # the image centers of negative samples must be in valid image area\n RandCropByPosNegLabeld(keys=['image', 'label'], label_key='label', size=(96, 96, 96), pos=1, \n neg=1, num_samples=4, image_key='image', image_threshold=0),\n # user can also add other random transforms\n # RandAffined(keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0, spatial_size=(96, 96, 96),\n # rotate_range=(0, 0, np.pi/15), scale_range=(0.1, 0.1, 0.1)),\n ToTensord(keys=['image', 'label'])\n ])\n val_transforms = Compose([\n LoadNiftid(keys=['image', 'label']),\n AddChanneld(keys=['image', 'label']),\n Spacingd(keys=['image', 'label'], pixdim=(1.5, 1.5, 2.), interp_order=(3, 0)),\n Orientationd(keys=['image', 'label'], axcodes='RAS'),\n ScaleIntensityRanged(keys=['image'], a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),\n ToTensord(keys=['image', 'label'])\n ])\n \n # set deterministic training for reproducibility\n train_transforms.set_random_state(seed=0)\n torch.manual_seed(0)\n torch.backends.cudnn.deterministic = True\n torch.backends.cudnn.benchmark = False\n \n # we use cached datasets - these are 10x faster than regular datasets\n self.train_ds = monai.data.CacheDataset(data=train_files, transform=train_transforms, cache_rate=1.0)\n self.val_ds = monai.data.CacheDataset(data=val_files, transform=val_transforms, cache_rate=1.0)\n #self.train_ds = monai.data.Dataset(data=train_files, transform=train_transforms)\n #self.val_ds = monai.data.Dataset(data=val_files, transform=val_transforms)\n\n def train_dataloader(self):\n train_loader = DataLoader(self.train_ds, batch_size=2, shuffle=True, num_workers=4, collate_fn=list_data_collate)\n return train_loader\n \n def val_dataloader(self):\n val_loader = DataLoader(self.val_ds, batch_size=1, num_workers=4)\n return val_loader\n \n def configure_optimizers(self):\n optimizer = torch.optim.Adam(self._model.parameters(), 1e-4)\n return optimizer\n \n def training_step(self, batch, batch_idx):\n images, labels = batch['image'], batch['label']\n output = self.forward(images)\n loss = self.loss_function(output, 
labels)\n tensorboard_logs = {'train_loss': loss.item()}\n return {'loss': loss, 'log': tensorboard_logs}\n \n def validation_step(self, batch, batch_idx):\n images, labels = batch['image'], batch['label']\n roi_size = (160, 160, 160)\n sw_batch_size = 4\n outputs = sliding_window_inference(images, roi_size, sw_batch_size, self.forward)\n loss = self.loss_function(outputs, labels)\n value = compute_meandice(y_pred=outputs, y=labels, include_background=False,\n to_onehot_y=True, mutually_exclusive=True)\n return {'val_loss': loss, 'val_dice': value}\n \n def validation_epoch_end(self, outputs):\n val_dice = 0\n num_items = 0\n for output in outputs:\n val_dice += output['val_dice'].sum().item()\n num_items += len(output['val_dice'])\n mean_val_dice = val_dice / num_items\n tensorboard_logs = {'val_dice': mean_val_dice}\n if mean_val_dice > self.best_val_dice:\n self.best_val_dice = mean_val_dice\n self.best_val_epoch = self.current_epoch\n print('current epoch %d current mean dice: %0.4f best mean dice: %0.4f at epoch %d'\n % (self.current_epoch, mean_val_dice, self.best_val_dice, self.best_val_epoch))\n return {'log': tensorboard_logs}\n", "_____no_output_____" ] ], [ [ "## Run the training", "_____no_output_____" ] ], [ [ "# initialise the LightningModule\nnet = Net()\n\n# set up loggers and checkpoints\ntb_logger = loggers.TensorBoardLogger(save_dir='logs')\ncheckpoint_callback = ModelCheckpoint(filepath='logs/{epoch}-{val_loss:.2f}-{val_dice:.2f}')\n\n# initialise Lightning's trainer. \ntrainer = Trainer(gpus=[0],\n max_epochs=250,\n logger=tb_logger,\n checkpoint_callback=checkpoint_callback,\n show_progress_bar=False,\n num_sanity_val_steps=1\n )\n# train\ntrainer.fit(net)", "_____no_output_____" ], [ "print('train completed, best_metric: %0.4f at epoch %d' % (net.best_val_dice, net.best_val_epoch))", "train completed, best_metric: 0.9435 at epoch 186\n" ] ], [ [ "## View training in tensorboard", "_____no_output_____" ] ], [ [ "%load_ext tensorboard\n%tensorboard --logdir='logs'", "_____no_output_____" ] ], [ [ "## Check best model output with the input image and label", "_____no_output_____" ] ], [ [ "net.eval()\ndevice = torch.device(\"cuda:0\")\nwith torch.no_grad():\n for i, val_data in enumerate(net.val_dataloader()):\n roi_size = (160, 160, 160)\n sw_batch_size = 4\n val_outputs = sliding_window_inference(val_data['image'].to(device), roi_size, sw_batch_size, net)\n # plot the slice [:, :, 50]\n plt.figure('check', (18, 6))\n plt.subplot(1, 3, 1)\n plt.title('image ' + str(i))\n plt.imshow(val_data['image'][0, 0, :, :, 50], cmap='gray')\n plt.subplot(1, 3, 2)\n plt.title('label ' + str(i))\n plt.imshow(val_data['label'][0, 0, :, :, 50])\n plt.subplot(1, 3, 3)\n plt.title('output ' + str(i))\n plt.imshow(torch.argmax(val_outputs, dim=1).detach().cpu()[0, :, :, 50])\n plt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d03575cdc6d0c57785974af6ce3e1ce81147c7e3
92,981
ipynb
Jupyter Notebook
westeros/westeros_baseline.ipynb
fschoeni/homework_no3
86f021d9d0497a2a0422efad1e48ae26b261f111
[ "Apache-2.0" ]
null
null
null
westeros/westeros_baseline.ipynb
fschoeni/homework_no3
86f021d9d0497a2a0422efad1e48ae26b261f111
[ "Apache-2.0" ]
null
null
null
westeros/westeros_baseline.ipynb
fschoeni/homework_no3
86f021d9d0497a2a0422efad1e48ae26b261f111
[ "Apache-2.0" ]
null
null
null
57.360271
13,108
0.760715
[ [ [ "# Westeros Tutorial Part 1 - Welcome to the MESSAGEix framework & Creating a baseline scenario \n\n### *Integrated Assessment Modeling for the 21st Century*\n\nFor information on how to install *MESSAGEix*, please refer to [Installation page](https://message.iiasa.ac.at/en/stable/getting_started.html) and for getting *MESSAGEix* tutorials, please follow the steps mentioned in [Tutorials](https://message.iiasa.ac.at/en/stable/tutorials.html).\n\nPlease refer to the [user guidelines](https://github.com/iiasa/message_ix/blob/master/NOTICE.rst)\nfor additional information on using *MESSAGEix*, including the recommended citation and how to name new models.\n\n**Structure of these tutorials.** After having run this baseline tutorial, you are able to start with any of the other tutorials, but we recommend to follow the order below for going through the information step-wise:\n\n1. Baseline tutorial (``westeros_baseline.ipynb``)\n2. Add extra detail and constraints to the model\n 1. Emissions\n 1. Introducing emissions (`westeros_emissions_bounds.ipynb`)\n 2. Introducing taxes on emissions (`westeros_emissions_taxes.ipynb`)\n 2. Add firm capacity (``westeros_firm_capacity.ipynb``)\n 3. Add flexible energy generation (``westeros_flexible_generation.ipynb``)\n 4. Add seasonality as an example of temporal variability (``westeros_seasonality.ipynb``)\n3. Post-processing: learn how to report calculations _after_ the MESSAGE model has run (``westeros_report.ipynb``)\n\n**Pre-requisites**\n- Have succesfully installed *MESSAGEix*.\n\n_This tutorial is based on a presentation by Matthew Gidden ([@gidden](https://github.com/gidden))\nfor a summer school at the the **Centre National de la Recherche Scientifique (CNRS)**\non *Integrated Assessment Modeling* in June 2018._", "_____no_output_____" ], [ "## Scope of this tutorial: Building a Simple Energy Model\n\nThe goal of this tutorial is to build a simple energy model using *MESSAGEix* with minimal features that can be expanded in future tutorials. \n\nWe will build the model component by component, focusing on both the **how** (code implementation) and **why** (mathematical formulation).", "_____no_output_____" ], [ "## Online documentation\n\nThe full framework documentation is available at [https://message.iiasa.ac.at](https://message.iiasa.ac.at)\n \n<img src='_static/doc_page.png'>", "_____no_output_____" ], [ "## A stylized reference energy system model for Westeros\n\nThis tutorial is based on the country of Westeros from the TV show \"Game of Thrones\".\n\n<table align='center'><tr><td><img src='_static/westeros.jpg' width='150'></td><td><img src='_static/base_res.png'></td></tr></table>", "_____no_output_____" ], [ "## MESSAGEix: the mathematical paradigm\n\nAt its core, *MESSAGEix* is an optimization problem:\n\n> $\\min \\quad ~c^T \\cdot x$ \n> $~s.t. 
\\quad A \\cdot x \\leq b$\n\nMore explicitly, the model...\n- optimizes an **objective function**, nominally minimizing total **system costs**\n- under a system of **constraints** (inequalities or equality conditions)\n\nThe mathematical implementation includes a number of features that make it particularly geared towards the modelling of *energy-water-land systems* in the context of *climate change mitigation and sustainable development*.\n\nThroughout this document, the mathematical formulation follows the convention that\n- decision **VARIABLES** ($x$) are capitalized\n- input **parameters** ($A$, $b$) are lower case", "_____no_output_____" ], [ "## MESSAGEix: connected to the *ix modeling platform (ixmp)*\n\nThe *modeling platform for integrated and cross-cutting analysis* (ixmp) provides a powerful framework for working with scenarios, including a database infrastucture for data version control and interfaces to scientific programming languages.\n\n<img src='_static/message_ixmp.png' width='700'>", "_____no_output_____" ], [ "## Ready, steady, go!\n\nFirst, we import all the packages we need. We import a utility function called *make_df*, which can be used to wrap the input data into dataframes that can be saved in model parameters.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport ixmp\nimport message_ix\n\nfrom message_ix.utils import make_df\n\n%matplotlib inline", "_____no_output_____" ] ], [ [ "The *MESSAGEix* model is built using the *ixmp* `Platform`. The `Platform` is your connection to a database for storing model input data and scenario results.", "_____no_output_____" ] ], [ [ "mp = ixmp.Platform()", "_____no_output_____" ] ], [ [ "Once connected, we create a new `Scenario` to build our model. A `Scenario` instance will contain all the model input data and results.", "_____no_output_____" ] ], [ [ "scenario = message_ix.Scenario(mp, model='Westeros Electrified', \n scenario='baseline', version='new')", "_____no_output_____" ] ], [ [ "## Model Structure\n\nWe start by defining basic characteristics of the model, including time, space, and the energy system structure.", "_____no_output_____" ], [ "The model horizon will span 3 decades (690-720). Let's assume that we're far in the future after the events of A Song of Ice and Fire (which occur ~300 years after Aegon the conqueror).\n\n| Math Notation | Model Meaning |\n|---------------|------------------------------|\n| $y \\in Y^H$ | time periods in history |\n| $y \\in Y^M$ | time periods in model horizon|", "_____no_output_____" ] ], [ [ "history = [690]\nmodel_horizon = [700, 710, 720]\nscenario.add_horizon({'year': history + model_horizon, \n 'firstmodelyear': model_horizon[0]})", "_____no_output_____" ] ], [ [ "Our model will have a single `node`, i.e., its spatial dimension.\n\n\n| Math Notation | Model Meaning|\n|---------------|--------------|\n| $n \\in N$ | node |", "_____no_output_____" ] ], [ [ "country = 'Westeros'\nscenario.add_spatial_sets({'country': country})", "_____no_output_____" ] ], [ [ "And we fill in the energy system's `commodities`, `levels`, `technologies`, and `modes` (i.e., modes of operation of technologies). This information defines how certain technologies operate. 
\n\n\n| Math Notation | Model Meaning|\n|---------------|--------------|\n| $c \\in C$ | commodity |\n| $l \\in L$ | level |\n| $t \\in T$ | technology |\n| $m \\in M$ | mode |", "_____no_output_____" ] ], [ [ "scenario.add_set(\"commodity\", [\"electricity\", \"light\"])\nscenario.add_set(\"level\", [\"secondary\", \"final\", \"useful\"])\nscenario.add_set(\"technology\", ['coal_ppl', 'wind_ppl', 'grid', 'bulb'])\nscenario.add_set(\"mode\", \"standard\")", "_____no_output_____" ] ], [ [ "## Supply and Demand (or Balancing Commodities)", "_____no_output_____" ], [ "The fundamental premise of the model is to satisfy demand for energy (services).\nTo first order, demand for services like electricity track with economic productivity (GDP).\nWe define a GDP profile similar to first-world GDP growth from [1900-1930](https://en.wikipedia.org/wiki/List_of_regions_by_past_GDP):", "_____no_output_____" ] ], [ [ "gdp_profile = pd.Series([1., 1.5, 1.9],\n index=pd.Index(model_horizon, name='Time'))\ngdp_profile.plot(title='Demand')", "_____no_output_____" ] ], [ [ "The `COMMODITY_BALANCE_GT` and `COMMODITY_BALANCE_LT` equations ensure that `demand` for each `commodity` is met at each `level` in the energy system.\nThe equation is copied below in this tutorial notebook, but every model equation is available for reference in\nthe [Mathematical formulation](https://message.iiasa.ac.at/en/stable/model/MESSAGE/model_core.html#) section of the *MESSAGEix* documentation.\n\n$\\sum_{\\substack{n^L,t,m \\\\ y^V \\leq y}} \\text{output}_{n^L,t,y^V,y,m,n,c,l} \\cdot \\text{ACT}_{n^L,t,y^V,y,m}$\n$- \\sum_{\\substack{n^L,t,m, \\\\ y^V \\leq y}} \\text{input}_{n^L,t,y^V,y,m,n,c,l} \\cdot \\text{ACT}_{n^L,t,m,y}$ \n$\\geq \\text{demand}_{n,c,l,y} \\quad \\forall \\ l \\in L$\n\nWhile `demand` must be met, supply can *exceed* demand allowing the model to plan for meeting demand in future periods by storing storable commodities.\n", "_____no_output_____" ], [ "First we establish demand. Let's assume\n\n- 40 million people in [300 AC](https://atlasoficeandfireblog.wordpress.com/2016/03/06/the-population-of-the-seven-kingdoms/)\n- similar population growth to Earth in the same time frame [(~factor of 12)](https://en.wikipedia.org/wiki/World_population_estimates)\n- a per capita demand for electricity of 1000 kWh\n- and 8760 hours in a year (of course!)\n\nThen we can add the demand parameter", "_____no_output_____" ], [ "Note present day: [~72000 GWh in Austria](https://www.iea.org/statistics/?country=AUSTRIA&year=2016&category=Energy%20consumption&indicator=undefined&mode=chart&dataTable=INDICATORS) with population [~8.7M](http://www.austria.org/population/) which is ~8300 kWh per capita", "_____no_output_____" ] ], [ [ "demand_per_year = 40 * 12 * 1000 / 8760\nlight_demand = pd.DataFrame({\n 'node': country,\n 'commodity': 'light',\n 'level': 'useful',\n 'year': model_horizon,\n 'time': 'year',\n 'value': (100 * gdp_profile).round(),\n 'unit': 'GWa',\n })", "_____no_output_____" ] ], [ [ "`light_demand` illustrates the data format for *MESSAGEix* parameters. 
It is a `pandas.DataFrame` containing three types of information in a specific format:\n\n- A \"value\" column containing the numerical values for this parameter.\n- A \"unit\" column.\n- Other columns (\"node\", \"commodity\", \"level\", \"time\") that indicate the key to which each value applies.", "_____no_output_____" ] ], [ [ "light_demand", "_____no_output_____" ], [ "# We use add_par for adding data to a MESSAGEix parameter\nscenario.add_par(\"demand\", light_demand)", "_____no_output_____" ] ], [ [ "In order to define the input and output commodites of each technology, we define some common keys.\n\n- **Input** quantities require `_origin` keys that specify where the inputs are *received from*.\n- **Output** quantities require `_dest` keys that specify where the outputs are *transferred to*.", "_____no_output_____" ] ], [ [ "year_df = scenario.vintage_and_active_years()\nvintage_years, act_years = year_df['year_vtg'], year_df['year_act']\n\nbase = {\n 'node_loc': country,\n 'year_vtg': vintage_years,\n 'year_act': act_years,\n 'mode': 'standard',\n 'time': 'year',\n 'unit': '-',\n}\n\nbase_input = make_df(base, node_origin=country, time_origin='year')\nbase_output = make_df(base, node_dest=country, time_dest='year')", "_____no_output_____" ] ], [ [ "Working backwards along the Reference Energy System, we can add connections for the `bulb`. A light bulb…\n\n- receives *input* in the form of the \"electricity\" *commodity* at the \"final [energy]\" *level*, and\n- *outputs* the commodity \"light\" at the \"useful [energy]\" level.\n\nThe `value` in the input and output parameter is used to represent the effiecieny of a technology (efficiency = output/input).\nFor example, input of 1.0 and output of 1.0 for a technology shows that the efficiency of that technology is 100% in converting\nthe input commodity to the output commodity.", "_____no_output_____" ] ], [ [ "bulb_out = make_df(base_output, technology='bulb', commodity='light', \n level='useful', value=1.0)\nscenario.add_par('output', bulb_out)\n\nbulb_in = make_df(base_input, technology='bulb', commodity='electricity', \n level='final', value=1.0)\nscenario.add_par('input', bulb_in)", "_____no_output_____" ] ], [ [ "Next, we parameterize the electrical `grid`, which…\n\n- receives electricity at the \"secondary\" energy level.\n- also outputs electricity, but at the \"final\" energy level (to be used by the light bulb).\n\nBecause the grid has transmission losses, only 90% of the input electricity is available as output.", "_____no_output_____" ] ], [ [ "grid_efficiency = 0.9\ngrid_out = make_df(base_output, technology='grid', commodity='electricity', \n level='final', value=grid_efficiency)\nscenario.add_par('output', grid_out)\n\ngrid_in = make_df(base_input, technology='grid', commodity='electricity',\n level='secondary', value=1.0)\nscenario.add_par('input', grid_in)", "_____no_output_____" ] ], [ [ "And finally, our power plants. 
The model does not include the fossil resources used as `input` for coal plants; however, costs of coal extraction are included in the parameter $variable\\_cost$.", "_____no_output_____" ] ], [ [ "coal_out = make_df(base_output, technology='coal_ppl', commodity='electricity', \n level='secondary', value=1.)\nscenario.add_par('output', coal_out)\n\nwind_out = make_df(base_output, technology='wind_ppl', commodity='electricity', \n level='secondary', value=1.)\nscenario.add_par('output', wind_out)", "_____no_output_____" ] ], [ [ "## Operational Constraints and Parameters", "_____no_output_____" ], [ "The model has a number of \"reality\" constraints, which relate built *capacity* (`CAP`) to available power, or the *activity* (`ACT`) of that technology.\n\nThe **capacity constraint** limits the activity of a technology to the installed capacity multiplied by a capacity factor. Capacity factor or is the fraction of installed capacity that can be active in a certain period (here the sub-annual time step *h*).\n\n$$\\sum_{m} \\text{ACT}_{n,t,y^V,y,m,h}\n \\leq \\text{duration_time}_{h} \\cdot \\text{capacity_factor}_{n,t,y^V,y,h} \\cdot \\text{CAP}_{n,t,y^V,y}\n \\quad t \\ \\in \\ T^{INV}$$\n", "_____no_output_____" ], [ "This requires us to provide the `capacity_factor` for each technology. Here, we call `make_df()` and `add_par()` in a loop to execute similar code for three technologies:", "_____no_output_____" ] ], [ [ "base_capacity_factor = {\n 'node_loc': country,\n 'year_vtg': vintage_years,\n 'year_act': act_years,\n 'time': 'year',\n 'unit': '-',\n}", "_____no_output_____" ], [ "capacity_factor = {\n 'coal_ppl': 1,\n 'wind_ppl': 0.36,\n 'bulb': 1, \n}\n\nfor tec, val in capacity_factor.items():\n df = make_df(base_capacity_factor, technology=tec, value=val)\n scenario.add_par('capacity_factor', df)", "_____no_output_____" ] ], [ [ "The model can further be provided `technical_lifetime`s in order to properly manage deployed capacity and related costs via the **capacity maintenance** constraint:\n\n$\\text{CAP}_{n,t,y^V,y} \\leq \\text{remaining_capacity}_{n,t,y^V,y} \\cdot \\text{value} \\quad \\forall \\quad t \\in T^{INV}$\n\nwhere `value` can take different forms depending on what time period is considered:\n\n| Value | Condition |\n|-------------------------------------|-----------------------------------------------------|\n| $\\Delta_y \\text{historical_new_capacity}_{n,t,y^V}$ | $y$ is first model period |\n| $\\Delta_y \\text{CAP_NEW}_{n,t,y^V}$ | $y = y^V$ |\n| $\\text{CAP}_{n,t,y^V,y-1}$ | $0 < y - y^V < \\text{technical_lifetime}_{n,t,y^V}$ |\n", "_____no_output_____" ] ], [ [ "base_technical_lifetime = {\n 'node_loc': country,\n 'year_vtg': model_horizon,\n 'unit': 'y',\n}", "_____no_output_____" ], [ "lifetime = {\n 'coal_ppl': 20,\n 'wind_ppl': 20,\n 'bulb': 1,\n}\n\nfor tec, val in lifetime.items():\n df = make_df(base_technical_lifetime, technology=tec, value=val)\n scenario.add_par('technical_lifetime', df)", "_____no_output_____" ] ], [ [ "## Technological Diffusion and Contraction\n\nWe know from historical precedent that energy systems can not be transformed instantaneously. Therefore, we use a family of dynamic constraints on activity and capacity. 
These constraints define the upper and lower limit of the domain of activity and capacity over time based on their value in the previous time step, an initial value, and growth/decline rates.", "_____no_output_____" ], [ "$\\sum_{y^V \\leq y,m} \\text{ACT}_{n,t,y^V,y,m,h} \\leq$ \n$\\text{initial_activity_up}_{n,t,y,h}\n \\cdot \\frac{ \\Big( 1 + growth\\_activity\\_up_{n,t,y,h} \\Big)^{|y|} - 1 }\n { growth\\_activity\\_up_{n,t,y,h} }+ \\Big( 1 + growth\\_activity\\_up_{n,t,y,h} \\Big)^{|y|} \\cdot \\Big( \\sum_{y^V \\leq y-1,m} ACT_{n,t,y^V,y-1,m,h} + \\sum_{m} historical\\_activity_{n,t,y-1,m,h}\\Big)$ ", "_____no_output_____" ], [ "This example limits the ability for technologies to **grow**. To do so, we need to provide `growth_activity_up` values for each technology that we want to model as being diffusion constrained. Here, we set this constraint at 10% per year.", "_____no_output_____" ] ], [ [ "base_growth = {\n 'node_loc': country,\n 'year_act': model_horizon,\n 'time': 'year',\n 'unit': '-',\n}", "_____no_output_____" ], [ "growth_technologies = [\n \"coal_ppl\", \n \"wind_ppl\", \n]\n\nfor tec in growth_technologies:\n df = make_df(base_growth, technology=tec, value=0.1) \n scenario.add_par('growth_activity_up', df)", "_____no_output_____" ] ], [ [ "## Defining an Energy Mix (Model Calibration)\n\nTo model the transition of an energy system, one must start with the existing system which are defined by the parameters `historical_activity` and `historical_new_capacity`. These parameters define the energy mix before the model horizon. \n\nWe begin by defining a few key values:\n\n- how much useful energy was needed\n- how much final energy was generated\n- and the mix for different technologies", "_____no_output_____" ] ], [ [ "historic_demand = 0.85 * demand_per_year\nhistoric_generation = historic_demand / grid_efficiency\ncoal_fraction = 0.6", "_____no_output_____" ], [ "base_capacity = {\n 'node_loc': country,\n 'year_vtg': history,\n 'unit': 'GWa',\n}\n\nbase_activity = {\n 'node_loc': country,\n 'year_act': history,\n 'mode': 'standard',\n 'time': 'year',\n 'unit': 'GWa',\n}", "_____no_output_____" ] ], [ [ "Then, we can define the **activity** and **capacity** in the historic period", "_____no_output_____" ] ], [ [ "old_activity = {\n 'coal_ppl': coal_fraction * historic_generation,\n 'wind_ppl': (1 - coal_fraction) * historic_generation,\n}\n\nfor tec, val in old_activity.items():\n df = make_df(base_activity, technology=tec, value=val)\n scenario.add_par('historical_activity', df)", "_____no_output_____" ], [ "act_to_cap = {\n 'coal_ppl': 1 / 10 / capacity_factor['coal_ppl'] / 2, # 20 year lifetime\n 'wind_ppl': 1 / 10 / capacity_factor['wind_ppl'] / 2,\n}\n\nfor tec in act_to_cap:\n value = old_activity[tec] * act_to_cap[tec]\n df = make_df(base_capacity, technology=tec, value=value)\n scenario.add_par('historical_new_capacity', df)", "_____no_output_____" ] ], [ [ "## Objective Function\n\nThe objective function drives the purpose of the optimization. Do we wish to seek maximum utility of the social planner, minimize carbon emissions, or something else? Classical IAMs seek to minimize total discounted system cost over space and time. 
\n\n$$\\min \\sum_{n,y \\in Y^{M}} \\text{interestrate}_{y} \\cdot \\text{COST_NODAL}_{n,y}$$\n", "_____no_output_____" ], [ "First, let's add the interest rate parameter.", "_____no_output_____" ] ], [ [ "scenario.add_par(\"interestrate\", model_horizon, value=0.05, unit='-')", "_____no_output_____" ] ], [ [ "`COST_NODAL` is comprised of a variety of costs related to the use of different technologies.", "_____no_output_____" ], [ "### Investment Costs\n\nCapital, or investment, costs are invoked whenever a new plant or unit is built:\n\n$$\\text{inv_cost}_{n,t,y} \\cdot \\text{construction_time_factor}_{n,t,y} \\cdot \\text{CAP_NEW}_{n,t,y}$$", "_____no_output_____" ] ], [ [ "base_inv_cost = {\n 'node_loc': country,\n 'year_vtg': model_horizon,\n 'unit': 'USD/kW',\n}\n\n# Adding a new unit to the library\nmp.add_unit('USD/kW') ", "INFO:root:unit `USD/kW` is already defined in the platform instance\n" ], [ "# in $ / kW (specific investment cost)\ncosts = {\n 'coal_ppl': 500,\n 'wind_ppl': 1500,\n 'bulb': 5,\n}\n\nfor tec, val in costs.items():\n df = make_df(base_inv_cost, technology=tec, value=val)\n scenario.add_par('inv_cost', df)", "_____no_output_____" ] ], [ [ "### Fixed O&M Costs\n\nFixed costs are only relevant as long as the capacity is active. This formulation allows us to include the potential cost savings from early retirement of installed capacity.\n\n$$\\sum_{y^V \\leq y} \\text{fix_cost}_{n,t,y^V,y} \\cdot \\text{CAP}_{n,t,y^V,y}$$", "_____no_output_____" ] ], [ [ "base_fix_cost = {\n 'node_loc': country,\n 'year_vtg': vintage_years,\n 'year_act': act_years,\n 'unit': 'USD/kWa',\n}", "_____no_output_____" ], [ "# in $ / kW / year (every year a fixed quantity is designated to cover part of the O&M costs\n# based on the size of the plant, e.g. lighting, labor, scheduled maintenance, etc.)\n\ncosts = {\n 'coal_ppl': 30,\n 'wind_ppl': 10,\n}\n\nfor tec, val in costs.items():\n df = make_df(base_fix_cost, technology=tec, value=val)\n scenario.add_par('fix_cost', df)", "_____no_output_____" ] ], [ [ "### Variable O&M Costs\n\nVariable Operation and Maintenance costs are associated with the costs of actively running the plant. Thus, they are not applied if a plant is on standby (i.e., constructed, but not currently in use).\n\n$$\\sum_{\\substack{y^V \\leq y \\\\ m,h}} \\text{var_cost}_{n,t,y^V,y,m,h} \\cdot \\text{ACT}_{n,t,y^V,y,m,h} $$", "_____no_output_____" ] ], [ [ "base_var_cost = {\n 'node_loc': country,\n 'year_vtg': vintage_years,\n 'year_act': act_years,\n 'mode': 'standard',\n 'time': 'year',\n 'unit': 'USD/kWa',\n}", "_____no_output_____" ], [ "# in $ / kWa (costs associated with the degradation of equipment when the plant is functioning\n# per unit of energy produced kW·year = 8760 kWh.\n# Therefore this cost represents USD per 8760 kWh of energy). 
Do not confuse with fixed O&M units.\n\ncosts = {\n 'coal_ppl': 30,\n 'grid': 50,\n}\n\nfor tec, val in costs.items():\n df = make_df(base_var_cost, technology=tec, value=val)\n scenario.add_par('var_cost', df)", "_____no_output_____" ] ], [ [ "A full model will also have costs associated with\n\n- costs associated with technologies (investment, fixed, variable costs)\n- resource extraction: $\\sum_{c,g} \\ resource\\_cost_{n,c,g,y} \\cdot EXT_{n,c,g,y} $\n- emissions\n- land use (emulator): $\\sum_{s} land\\_cost_{n,s,y} \\cdot LAND_{n,s,y}$", "_____no_output_____" ], [ "## Time to Solve the Model\n\nFirst, we *commit* the model structure and input data (sets and parameters).\nIn the `ixmp` backend, this creates a new model version in the database, which is assigned a version number automatically:", "_____no_output_____" ] ], [ [ "from message_ix import log\n\nlog.info('version number prior to commit: {}'.format(scenario.version))\n\nscenario.commit(comment='basic model of Westeros electrification')\n\nlog.info('version number prior committing to the database: {}'.format(scenario.version))", "INFO:message_ix:version number prior to commit: 0\nINFO:message_ix:version number prior committing to the database: 45\n" ] ], [ [ "An `ixmp` database can contain many scenarios, and possibly multiple versions of the same model and scenario name.\nThese are distinguished by unique version numbers.\n\nTo make it easier to retrieve the \"correct\" version (e.g., the latest one), you can set a specific scenario as the default version to use if the \"Westeros Electrified\" model is loaded from the `ixmp` database.", "_____no_output_____" ] ], [ [ "scenario.set_as_default()", "_____no_output_____" ], [ "scenario.solve()", "_____no_output_____" ], [ "scenario.var('OBJ')['lvl']", "_____no_output_____" ] ], [ [ "## Plotting Results\n\nWe make use of some custom code for plotting the results; see `tools.py` in the tutorial directory.", "_____no_output_____" ] ], [ [ "from tools import Plots\np = Plots(scenario, country, firstyear=model_horizon[0])", "_____no_output_____" ] ], [ [ "### Activity\n\nHow much energy is generated in each time period from the different potential sources?", "_____no_output_____" ] ], [ [ "p.plot_activity(baseyear=True, subset=['coal_ppl', 'wind_ppl'])", "_____no_output_____" ] ], [ [ "### Capacity\n\nHow much capacity of each plant is installed in each period?", "_____no_output_____" ] ], [ [ "p.plot_capacity(baseyear=True, subset=['coal_ppl', 'wind_ppl'])", "_____no_output_____" ] ], [ [ "### Electricity Price\n\nAnd how much does the electricity cost? These prices are in fact **shadow prices** taken from the **dual variables** of the model solution.\nThey reflect the marginal cost of electricity generation (i.e., the additional cost of the system for supplying one more unit of\nelectricity), which is in fact the marginal cost of the most expensive operating generator. 
\n\nNote the price drop when the most expensive technology is no longer in the system.", "_____no_output_____" ] ], [ [ "p.plot_prices(subset=['light'], baseyear=True)", "_____no_output_____" ] ], [ [ "## Close the connection to the database\n\nWhen working with local HSQLDB database instances, you cannot connect to one database from multiple Jupyter notebooks (or processes) at the same time.\n\nIf you want to easily switch between notebooks with connections to the same `ixmp` database, you need to close the connection in one notebook before initializing the platform using `ixmp.Platform()` in another notebook.\n\nAfter having closed the database connection, you can reopen it using\n```\nmp.open_db()\n```", "_____no_output_____" ] ], [ [ "mp.close_db()", "_____no_output_____" ] ], [ [ "## Congratulations! \n\nYou have built and run your very first *MESSAGEix* model. Welcome to the community!\n\nThe next tutorials will introduce you to other features of the framework, including energy system constraints, emissions taxes, and other policy options.\n\nCheck us out on GitHub https://github.com/iiasa/message_ix \nand get in touch with us online https://groups.google.com/forum/message-ix ...", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d0357af2f81c2f8474cd7da7ddccd9e9ede69b97
14,440
ipynb
Jupyter Notebook
notebooks/ml_explainability/raw/ex3_partial_plots.ipynb
Mattjez914/Blackjack_Microchallenge
c4f60b62a3ada14663eb30ce72563af994e1eda4
[ "Apache-2.0" ]
null
null
null
notebooks/ml_explainability/raw/ex3_partial_plots.ipynb
Mattjez914/Blackjack_Microchallenge
c4f60b62a3ada14663eb30ce72563af994e1eda4
[ "Apache-2.0" ]
null
null
null
notebooks/ml_explainability/raw/ex3_partial_plots.ipynb
Mattjez914/Blackjack_Microchallenge
c4f60b62a3ada14663eb30ce72563af994e1eda4
[ "Apache-2.0" ]
1
2019-04-17T06:12:23.000Z
2019-04-17T06:12:23.000Z
31.187905
374
0.592936
[ [ [ "## Set Up\n\nToday you will create partial dependence plots and practice building insights with data from the [Taxi Fare Prediction](https://www.kaggle.com/c/new-york-city-taxi-fare-prediction) competition.\n\nWe have again provided code to do the basic loading, review and model-building. Run the cell below to set everything up:", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.model_selection import train_test_split\n\n# Environment Set-Up for feedback system.\nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.ml_explainability.ex3 import *\nprint(\"Setup Complete\")\n\n# Data manipulation code below here\ndata = pd.read_csv('../input/new-york-city-taxi-fare-prediction/train.csv', nrows=50000)\n\n# Remove data with extreme outlier coordinates or negative fares\ndata = data.query('pickup_latitude > 40.7 and pickup_latitude < 40.8 and ' +\n 'dropoff_latitude > 40.7 and dropoff_latitude < 40.8 and ' +\n 'pickup_longitude > -74 and pickup_longitude < -73.9 and ' +\n 'dropoff_longitude > -74 and dropoff_longitude < -73.9 and ' +\n 'fare_amount > 0'\n )\n\ny = data.fare_amount\n\nbase_features = ['pickup_longitude',\n 'pickup_latitude',\n 'dropoff_longitude',\n 'dropoff_latitude']\n\nX = data[base_features]\n\n\ntrain_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)\nfirst_model = RandomForestRegressor(n_estimators=30, random_state=1).fit(train_X, train_y)\nprint(\"Data sample:\")\ndata.head()", "_____no_output_____" ], [ "data.describe()", "_____no_output_____" ] ], [ [ "## Question 1\n\nHere is the code to plot the partial dependence plot for pickup_longitude. Run the following cell.", "_____no_output_____" ] ], [ [ "from matplotlib import pyplot as plt\nfrom pdpbox import pdp, get_dataset, info_plots\n\nfeat_name = 'pickup_longitude'\npdp_dist = pdp.pdp_isolate(model=first_model, dataset=val_X, model_features=base_features, feature=feat_name)\n\npdp.pdp_plot(pdp_dist, feat_name)\nplt.show()", "_____no_output_____" ] ], [ [ "Why does the partial dependence plot have this U-shape?\n\nDoes your explanation suggest what shape to expect in the partial dependence plots for the other features?\n\nCreate all other partial plots in a for-loop below (copying the appropriate lines from the code above).", "_____no_output_____" ] ], [ [ "for feat_name in base_features:\n pdp_dist = _\n _\n plt.show()", "_____no_output_____" ] ], [ [ "Do the shapes match your expectations for what shapes they would have? Can you explain the shape now that you've seen them? \n\nUncomment the following line to check your intuition.", "_____no_output_____" ] ], [ [ "# q_1.solution()", "_____no_output_____" ] ], [ [ "## Q2\n\nNow you will run a 2D partial dependence plot. As a reminder, here is the code from the tutorial. \n\n```\ninter1 = pdp.pdp_interact(model=my_model, dataset=val_X, model_features=feature_names, features=['Goal Scored', 'Distance Covered (Kms)'])\n\npdp.pdp_interact_plot(pdp_interact_out=inter1, feature_names=['Goal Scored', 'Distance Covered (Kms)'], plot_type='contour')\nplt.show()\n```\n\nCreate a 2D plot for the features `pickup_longitude` and `dropoff_longitude`. 
Plot it appropriately?\n\nWhat do you expect it to look like?", "_____no_output_____" ] ], [ [ "# Add your code here\n", "_____no_output_____" ] ], [ [ "Uncomment the line below to see the solution and explanation for how one might reason about the plot shape.", "_____no_output_____" ] ], [ [ "# q_2.solution()", "_____no_output_____" ] ], [ [ "## Question 3\nConsider a ride starting at longitude -73.92 and ending at longitude -74. Using the graph from the last question, estimate how much money the rider would have saved if they'd started the ride at longitude -73.98 instead?", "_____no_output_____" ] ], [ [ "savings_from_shorter_trip = _\n\nq_3.check()", "_____no_output_____" ] ], [ [ "For a solution or hint, uncomment the appropriate line below.", "_____no_output_____" ] ], [ [ "# q_3.hint()\n# q_3.solution()", "_____no_output_____" ] ], [ [ "## Question 4\nIn the PDP's you've seen so far, location features have primarily served as a proxy to capture distance traveled. In the permutation importance lessons, you added the features `abs_lon_change` and `abs_lat_change` as a more direct measure of distance.\n\nCreate these features again here. You only need to fill in the top two lines. Then run the following cell. \n\n**After you run it, identify the most important difference between this partial dependence plot and the one you got without absolute value features. The code to generate the PDP without absolute value features is at the top of this code cell.**\n\n---", "_____no_output_____" ] ], [ [ "# This is the PDP for pickup_longitude without the absolute difference features. Included here to help compare it to the new PDP you create\nfeat_name = 'pickup_longitude'\npdp_dist_original = pdp.pdp_isolate(model=first_model, dataset=val_X, model_features=base_features, feature=feat_name)\n\npdp.pdp_plot(pdp_dist_original, feat_name)\nplt.show()\n\n\n\n# create new features\ndata['abs_lon_change'] = _\ndata['abs_lat_change'] = _\n\nfeatures_2 = ['pickup_longitude',\n 'pickup_latitude',\n 'dropoff_longitude',\n 'dropoff_latitude',\n 'abs_lat_change',\n 'abs_lon_change']\n\nX = data[features_2]\nnew_train_X, new_val_X, new_train_y, new_val_y = train_test_split(X, y, random_state=1)\nsecond_model = RandomForestRegressor(n_estimators=30, random_state=1).fit(new_train_X, new_train_y)\n\nfeat_name = 'pickup_longitude'\npdp_dist = pdp.pdp_isolate(model=second_model, dataset=new_val_X, model_features=features_2, feature=feat_name)\n\npdp.pdp_plot(pdp_dist, feat_name)\nplt.show()\n\nq_4.check()", "_____no_output_____" ] ], [ [ "Uncomment the lines below to see a hint or the solution (including an explanation of the important differences between the plots).", "_____no_output_____" ] ], [ [ "# q_4.hint()\n# q_4.solution()", "_____no_output_____" ] ], [ [ "## Question 5\nConsider a scenario where you have only 2 predictive features, which we will call `feat_A` and `feat_B`. Both features have minimum values of -1 and maximum values of 1. The partial dependence plot for `feat_A` increases steeply over its whole range, whereas the partial dependence plot for feature B increases at a slower rate (less steeply) over its whole range.\n\nDoes this guarantee that `feat_A` will have a higher permutation importance than `feat_B`. Why or why not?\n\nAfter you've thought about it, uncomment the line below for the solution.", "_____no_output_____" ] ], [ [ "# q_5.solution()", "_____no_output_____" ] ], [ [ "## Q6\nThe code cell below does the following:\n\n1. 
Creates two features, `X1` and `X2`, having random values in the range [-2, 2].\n2. Creates a target variable `y`, which is always 1.\n3. Trains a `RandomForestRegressor` model to predict `y` given `X1` and `X2`.\n4. Creates a PDP plot for `X1` and a scatter plot of `X1` vs. `y`.\n\nDo you have a prediction about what the PDP plot will look like? Run the cell to find out.\n\nModify the initialization of `y` so that our PDP plot has a positive slope in the range [-1,1], and a negative slope everywhere else. (Note: *you should only modify the creation of `y`, leaving `X1`, `X2`, and `my_model` unchanged.*)", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom numpy.random import rand\n\nn_samples = 20000\n\n# Create array holding predictive feature\nX1 = 4 * rand(n_samples) - 2\nX2 = 4 * rand(n_samples) - 2\n# Create y. you should have X1 and X2 in the expression for y\ny = np.ones(n_samples)\n\n# create dataframe because pdp_isolate expects a dataFrame as an argument\nmy_df = pd.DataFrame({'X1': X1, 'X2': X2, 'y': y})\npredictors_df = my_df.drop(['y'], axis=1)\n\nmy_model = RandomForestRegressor(n_estimators=30, random_state=1).fit(predictors_df, my_df.y)\n\npdp_dist = pdp.pdp_isolate(model=my_model, dataset=my_df, model_features=['X1', 'X2'], feature='X1')\n\n# visualize your results\npdp.pdp_plot(pdp_dist, 'X1')\nplt.show()\n\nq_6.check()", "_____no_output_____" ] ], [ [ "Uncomment the lines below for a hint or solution", "_____no_output_____" ] ], [ [ "# q_6.hint()\n# q_6.solution()", "_____no_output_____" ] ], [ [ "## Question 7\nCreate a dataset with 2 features and a target, such that the pdp of the first feature is flat, but its permutation importance is high. We will use a RandomForest for the model.\n\n*Note: You only need to supply the lines that create the variables X1, X2 and y. The code to build the model and calculate insights is provided*.", "_____no_output_____" ] ], [ [ "import eli5\nfrom eli5.sklearn import PermutationImportance\n\nn_samples = 20000\n\n# Create array holding predictive feature\nX1 = _\nX2 = _\n# Create y. you should have X1 and X2 in the expression for y\ny = _\n\n\n# create dataframe because pdp_isolate expects a dataFrame as an argument\nmy_df = pd.DataFrame({'X1': X1, 'X2': X2, 'y': y})\npredictors_df = my_df.drop(['y'], axis=1)\n\nmy_model = RandomForestRegressor(n_estimators=30, random_state=1).fit(predictors_df, my_df.y)\n\n\npdp_dist = pdp.pdp_isolate(model=my_model, dataset=my_df, model_features=['X1', 'X2'], feature='X1')\npdp.pdp_plot(pdp_dist, 'X1')\nplt.show()\n\nperm = PermutationImportance(my_model).fit(predictors_df, my_df.y)\n\nq_7.check()\n\n# show the weights for the permutation importance you just calculated\neli5.show_weights(perm, feature_names = ['X1', 'X2'])", "_____no_output_____" ], [ "# Uncomment the following lines for the hint or solution\n# q_7.hint()\n# q_7.solution()", "_____no_output_____" ] ], [ [ "## Keep Going\n\nPartial dependence plots can be really interesting. We have a [discussion thread](https://www.kaggle.com/learn-forum/65782) to talk about what real-world topics or questions you'd be curious to see addressed with partial dependence plots. \n\nNext, learn how **[SHAP values](#$NEXT_NOTEBOOK_URL$)** help you understand the logic for each individual prediction.\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
d0357fa8240dc153ef6a2665b2a06914946e62db
10,835
ipynb
Jupyter Notebook
Python Programming Basic Assignments/Code 24.ipynb
bhushanasati25/iNeuron-Assignment-Repo
01271b167d95cbfa2de16f093483e7e7cd2cfb64
[ "MIT" ]
null
null
null
Python Programming Basic Assignments/Code 24.ipynb
bhushanasati25/iNeuron-Assignment-Repo
01271b167d95cbfa2de16f093483e7e7cd2cfb64
[ "MIT" ]
null
null
null
Python Programming Basic Assignments/Code 24.ipynb
bhushanasati25/iNeuron-Assignment-Repo
01271b167d95cbfa2de16f093483e7e7cd2cfb64
[ "MIT" ]
1
2022-03-16T14:27:23.000Z
2022-03-16T14:27:23.000Z
21.038835
268
0.475035
[ [ [ "Question 1\nCreate a function that takes an integer and returns a list from 1 to the given number, where:\n1.\tIf the number can be divided evenly by 4, amplify it by 10 (i.e. return 10 times the number).\n2.\tIf the number cannot be divided evenly by 4, simply return the number.\nExamples\namplify(4) ➞ [1, 2, 3, 40]\n\namplify(3) ➞ [1, 2, 3]\n\namplify(25) ➞ [1, 2, 3, 40, 5, 6, 7, 80, 9, 10, 11, 120, 13, 14, 15, 160, 17, 18, 19, 200, 21, 22, 23, 240, 25]\nNotes\n•\tThe given integer will always be equal to or greater than 1.\n•\tInclude the number (see example above).\n•\tTo perform this problem with its intended purpose, try doing it with list comprehensions. If that's too difficult, just solve the challenge any way you can.", "_____no_output_____" ], [ "def amplify(n):\n return [i*10 if i%4==0 else i for i in range(1, n+1)]", "_____no_output_____" ], [ "amplify(4)", "_____no_output_____" ], [ "amplify(3)", "_____no_output_____" ], [ "amplify(25)", "_____no_output_____" ], [ "Question 2\n\nCreate a function that takes a list of numbers and return the number that's unique.\nExamples\nunique([3, 3, 3, 7, 3, 3]) ➞ 7\n\nunique([0, 0, 0.77, 0, 0]) ➞ 0.77\n\nunique([0, 1, 1, 1, 1, 1, 1, 1]) ➞ 0\nNotes\nTest cases will always have exactly one unique number while all others are the same.", "_____no_output_____" ], [ "def unique(l):\n for i in list(set(l)):\n if l.count(i)==1:\n return i", "_____no_output_____" ], [ "unique([3, 3, 3, 7, 3, 3])", "_____no_output_____" ], [ "unique([0, 0, 0.77, 0, 0])", "_____no_output_____" ], [ "unique([0, 1, 1, 1, 1, 1, 1, 1])", "_____no_output_____" ], [ "Question 3\nYour task is to create a Circle constructor that creates a circle with a radius provided by an argument. The circles constructed must have two getters getArea() (PIr^2) and getPerimeter() (2PI*r) which give both respective areas and perimeter (circumference).\nFor help with this class, I have provided you with a Rectangle constructor which you can use as a base example.\nExamples\ncircy = Circle(11)\ncircy.getArea()\n\n# Should return 380.132711084365\n\ncircy = Circle(4.44)\ncircy.getPerimeter()\n\n# Should return 27.897342763877365\nNotes\nRound results up to the nearest integer.\n", "_____no_output_____" ], [ "class Circle:\n def __init__(self,r):\n self.radius = r\n \n def getArea(self):\n return round(3.14*self.radius*self.radius)\n \n def getPerimeter(self):\n return round(2*3.14*self.radius)", "_____no_output_____" ], [ "circy = Circle(11)\ncircy.getArea()", "_____no_output_____" ], [ "circy = Circle(4.44)\ncircy.getPerimeter()", "_____no_output_____" ], [ "Question 4\nCreate a function that takes a list of strings and return a list, sorted from shortest to longest.\nExamples\nsort_by_length([\"Google\", \"Apple\", \"Microsoft\"])\n➞ [\"Apple\", \"Google\", \"Microsoft\"]\n\nsort_by_length([\"Leonardo\", \"Michelangelo\", \"Raphael\", \"Donatello\"])\n➞ [\"Raphael\", \"Leonardo\", \"Donatello\", \"Michelangelo\"]\n\nsort_by_length([\"Turing\", \"Einstein\", \"Jung\"])\n➞ [\"Jung\", \"Turing\", \"Einstein\"]\nNotes\nAll test cases contain lists with strings of different lengths, so you won't have to deal with multiple strings of the same length.\n", "_____no_output_____" ], [ "def sort_by_length(l):\n return (sorted(l, key = len))", "_____no_output_____" ], [ "sort_by_length([\"Google\", \"Apple\", \"Microsoft\"])", "_____no_output_____" ], [ "sort_by_length([\"Leonardo\", \"Michelangelo\", \"Raphael\", \"Donatello\"])", "_____no_output_____" ], [ "sort_by_length([\"Turing\", 
\"Einstein\", \"Jung\"])", "_____no_output_____" ], [ "Question 5\n\nCreate a function that validates whether three given integers form a Pythagorean triplet. The sum of the squares of the two smallest integers must equal the square of the largest number to be validated.\n\nExamples\nis_triplet(3, 4, 5) ➞ True\n# 3² + 4² = 25\n# 5² = 25\n\nis_triplet(13, 5, 12) ➞ True\n# 5² + 12² = 169\n# 13² = 169\n\nis_triplet(1, 2, 3) ➞ False\n# 1² + 2² = 5\n# 3² = 9\nNotes\nNumbers may not be given in a sorted order.\n", "_____no_output_____" ], [ "def is_triplet(*args):\n l = []\n l.extend((args))\n l = sorted(l)\n if l[0]**2 + l[1]**2 == l[2]**2:\n return True\n else:\n return False", "_____no_output_____" ], [ "is_triplet(3, 4, 5)", "_____no_output_____" ], [ "is_triplet(13, 5, 12)", "_____no_output_____" ], [ "is_triplet(1, 2, 3)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d03580dfd46657988aedf46eebc9f5d7968c436b
170,829
ipynb
Jupyter Notebook
TEST/BETAsite/resonator_tools/examples/fitting over and undecoupled resonators in reflection.ipynb
takehuge/PYQUM
bfc9d9b1c2f4246c7aac3a371baaf587c99f8069
[ "MIT" ]
null
null
null
TEST/BETAsite/resonator_tools/examples/fitting over and undecoupled resonators in reflection.ipynb
takehuge/PYQUM
bfc9d9b1c2f4246c7aac3a371baaf587c99f8069
[ "MIT" ]
null
null
null
TEST/BETAsite/resonator_tools/examples/fitting over and undecoupled resonators in reflection.ipynb
takehuge/PYQUM
bfc9d9b1c2f4246c7aac3a371baaf587c99f8069
[ "MIT" ]
null
null
null
319.306542
44,339
0.914985
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
d0358869d44aab3a6aeb6e75177febddfde1693e
98,312
ipynb
Jupyter Notebook
CODS_COMAD/SIN/MNIST/cnn_2layers_1000.ipynb
lnpandey/DL_explore_synth_data
0a5d8b417091897f4c7f358377d5198a155f3f24
[ "MIT" ]
2
2019-08-24T07:20:35.000Z
2020-03-27T08:16:59.000Z
CODS_COMAD/SIN/MNIST/cnn_2layers_1000.ipynb
lnpandey/DL_explore_synth_data
0a5d8b417091897f4c7f358377d5198a155f3f24
[ "MIT" ]
null
null
null
CODS_COMAD/SIN/MNIST/cnn_2layers_1000.ipynb
lnpandey/DL_explore_synth_data
0a5d8b417091897f4c7f358377d5198a155f3f24
[ "MIT" ]
3
2019-06-21T09:34:32.000Z
2019-09-19T10:43:07.000Z
57.124927
19,606
0.629089
[ [ [ "import numpy as np\nimport pandas as pd\nfrom matplotlib import pyplot as plt\nfrom tqdm import tqdm as tqdm\n\n%matplotlib inline\n\nimport torch\nimport torchvision\n\nimport torchvision.transforms as transforms\nimport torch.nn as nn\nimport torch.optim as optim\nimport torch.nn.functional as F\nimport random", "_____no_output_____" ], [ "# from google.colab import drive\n\n# drive.mount('/content/drive')", "_____no_output_____" ], [ "transform = transforms.Compose(\n [transforms.CenterCrop((28,28)),transforms.ToTensor(),transforms.Normalize([0.5], [0.5])])\n", "_____no_output_____" ], [ "mnist_trainset = torchvision.datasets.MNIST(root='./data', train=True, download=True, transform=transform)\nmnist_testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform)", "_____no_output_____" ], [ "index1 = [np.where(mnist_trainset.targets==0)[0] , np.where(mnist_trainset.targets==1)[0] ]\nindex1 = np.concatenate(index1,axis=0)\nlen(index1) #12665", "_____no_output_____" ], [ "true = 1000\ntotal = 47000\nsin = total-true\nsin", "_____no_output_____" ], [ "epochs = 300", "_____no_output_____" ], [ "indices = np.random.choice(index1,true)\nindices.shape", "_____no_output_____" ], [ "index = np.where(np.logical_and(mnist_trainset.targets!=0,mnist_trainset.targets!=1))[0] #47335\nindex.shape", "_____no_output_____" ], [ "req_index = np.random.choice(index.shape[0], sin, replace=False) \n# req_index", "_____no_output_____" ], [ "index = index[req_index]\nindex.shape", "_____no_output_____" ], [ "values = np.random.choice([0,1],size= sin) \nprint(sum(values ==0),sum(values==1), sum(values ==0) + sum(values==1) )\n", "23069 22931 46000\n" ], [ "mnist_trainset.data = torch.cat((mnist_trainset.data[indices],mnist_trainset.data[index]))\nmnist_trainset.targets = torch.cat((mnist_trainset.targets[indices],torch.Tensor(values).type(torch.LongTensor)))", "_____no_output_____" ], [ "mnist_trainset.targets.shape, mnist_trainset.data.shape", "_____no_output_____" ], [ "# mnist_trainset.targets[index] = torch.Tensor(values).type(torch.LongTensor)\nj =20078 # Without Shuffle upto True Training numbers correct , after that corrupted\nprint(plt.imshow(mnist_trainset.data[j]),mnist_trainset.targets[j])", "AxesImage(54,36;334.8x217.44) tensor(1)\n" ], [ "trainloader = torch.utils.data.DataLoader(mnist_trainset, batch_size=250,shuffle=True, num_workers=2)", "_____no_output_____" ], [ "testloader = torch.utils.data.DataLoader(mnist_testset, batch_size=250,shuffle=False, num_workers=2)", "_____no_output_____" ], [ "mnist_trainset.data.shape", "_____no_output_____" ], [ "classes = ('zero', 'one')", "_____no_output_____" ], [ "dataiter = iter(trainloader)\nimages, labels = dataiter.next()", "_____no_output_____" ], [ "images[:4].shape", "_____no_output_____" ], [ "def imshow(img):\n img = img / 2 + 0.5 # unnormalize\n npimg = img.numpy()\n plt.imshow(np.transpose(npimg, (1, 2, 0)))\n plt.show()", "_____no_output_____" ], [ "imshow(torchvision.utils.make_grid(images[:10]))\nprint('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(10)))", "_____no_output_____" ], [ "class Module2(nn.Module):\n def __init__(self):\n super(Module2, self).__init__()\n self.conv1 = nn.Conv2d(1, 6, 5)\n self.pool = nn.MaxPool2d(2, 2)\n self.conv2 = nn.Conv2d(6, 16, 5)\n self.fc1 = nn.Linear(16 * 4 * 4, 128)\n self.fc2 = nn.Linear(128, 64)\n self.fc3 = nn.Linear(64, 10)\n self.fc4 = nn.Linear(10,2)\n\n def forward(self,z): \n y1 = self.pool(F.relu(self.conv1(z)))\n y1 = 
self.pool(F.relu(self.conv2(y1)))\n # print(y1.shape)\n y1 = y1.view(-1, 16 * 4 * 4)\n\n y1 = F.relu(self.fc1(y1))\n y1 = F.relu(self.fc2(y1))\n y1 = F.relu(self.fc3(y1))\n y1 = self.fc4(y1)\n return y1 ", "_____no_output_____" ], [ "inc = Module2()\ninc = inc.to(\"cuda\")", "_____no_output_____" ], [ "criterion_inception = nn.CrossEntropyLoss()\noptimizer_inception = optim.SGD(inc.parameters(), lr=0.01, momentum=0.9)", "_____no_output_____" ], [ "acti = []\nloss_curi = []\nfor epoch in range(epochs): # loop over the dataset multiple times\n ep_lossi = []\n\n running_loss = 0.0\n for i, data in enumerate(trainloader, 0):\n # get the inputs\n inputs, labels = data\n inputs, labels = inputs.to(\"cuda\"),labels.to(\"cuda\")\n\n # print(inputs.shape)\n # zero the parameter gradients\n optimizer_inception.zero_grad()\n\n # forward + backward + optimize\n outputs = inc(inputs)\n loss = criterion_inception(outputs, labels)\n loss.backward()\n optimizer_inception.step()\n\n # print statistics\n running_loss += loss.item()\n if i % 50 == 49: # print every 50 mini-batches\n print('[%d, %5d] loss: %.3f' %\n (epoch + 1, i + 1, running_loss / 50))\n ep_lossi.append(running_loss/50) # loss per minibatch\n running_loss = 0.0\n \n loss_curi.append(np.mean(ep_lossi)) #loss per epoch\n if (np.mean(ep_lossi)<=0.03):\n break\n# acti.append(actis)\n \n \n\nprint('Finished Training')", "[1, 50] loss: 0.696\n[1, 100] loss: 0.693\n[1, 150] loss: 0.693\n[2, 50] loss: 0.693\n[2, 100] loss: 0.693\n[2, 150] loss: 0.693\n[3, 50] loss: 0.693\n[3, 100] loss: 0.693\n[3, 150] loss: 0.693\n[4, 50] loss: 0.693\n[4, 100] loss: 0.693\n[4, 150] loss: 0.693\n[5, 50] loss: 0.693\n[5, 100] loss: 0.693\n[5, 150] loss: 0.693\n[6, 50] loss: 0.693\n[6, 100] loss: 0.693\n[6, 150] loss: 0.693\n[7, 50] loss: 0.693\n[7, 100] loss: 0.692\n[7, 150] loss: 0.693\n[8, 50] loss: 0.692\n[8, 100] loss: 0.692\n[8, 150] loss: 0.692\n[9, 50] loss: 0.692\n[9, 100] loss: 0.692\n[9, 150] loss: 0.692\n[10, 50] loss: 0.692\n[10, 100] loss: 0.692\n[10, 150] loss: 0.692\n[11, 50] loss: 0.692\n[11, 100] loss: 0.692\n[11, 150] loss: 0.691\n[12, 50] loss: 0.692\n[12, 100] loss: 0.691\n[12, 150] loss: 0.690\n[13, 50] loss: 0.692\n[13, 100] loss: 0.691\n[13, 150] loss: 0.691\n[14, 50] loss: 0.690\n[14, 100] loss: 0.691\n[14, 150] loss: 0.691\n[15, 50] loss: 0.690\n[15, 100] loss: 0.691\n[15, 150] loss: 0.691\n[16, 50] loss: 0.690\n[16, 100] loss: 0.690\n[16, 150] loss: 0.691\n[17, 50] loss: 0.689\n[17, 100] loss: 0.691\n[17, 150] loss: 0.690\n[18, 50] loss: 0.688\n[18, 100] loss: 0.690\n[18, 150] loss: 0.689\n[19, 50] loss: 0.689\n[19, 100] loss: 0.689\n[19, 150] loss: 0.689\n[20, 50] loss: 0.688\n[20, 100] loss: 0.690\n[20, 150] loss: 0.688\n[21, 50] loss: 0.688\n[21, 100] loss: 0.688\n[21, 150] loss: 0.686\n[22, 50] loss: 0.687\n[22, 100] loss: 0.687\n[22, 150] loss: 0.687\n[23, 50] loss: 0.687\n[23, 100] loss: 0.687\n[23, 150] loss: 0.687\n[24, 50] loss: 0.686\n[24, 100] loss: 0.686\n[24, 150] loss: 0.687\n[25, 50] loss: 0.685\n[25, 100] loss: 0.686\n[25, 150] loss: 0.687\n[26, 50] loss: 0.685\n[26, 100] loss: 0.686\n[26, 150] loss: 0.684\n[27, 50] loss: 0.684\n[27, 100] loss: 0.684\n[27, 150] loss: 0.683\n[28, 50] loss: 0.684\n[28, 100] loss: 0.684\n[28, 150] loss: 0.684\n[29, 50] loss: 0.684\n[29, 100] loss: 0.684\n[29, 150] loss: 0.684\n[30, 50] loss: 0.682\n[30, 100] loss: 0.682\n[30, 150] loss: 0.684\n[31, 50] loss: 0.683\n[31, 100] loss: 0.681\n[31, 150] loss: 0.683\n[32, 50] loss: 0.683\n[32, 100] loss: 0.682\n[32, 150] loss: 0.680\n[33, 50] loss: 
0.680\n[33, 100] loss: 0.681\n[33, 150] loss: 0.682\n[34, 50] loss: 0.681\n[34, 100] loss: 0.682\n[34, 150] loss: 0.680\n[35, 50] loss: 0.680\n[35, 100] loss: 0.680\n[35, 150] loss: 0.679\n[36, 50] loss: 0.678\n[36, 100] loss: 0.679\n[36, 150] loss: 0.682\n[37, 50] loss: 0.678\n[37, 100] loss: 0.680\n[37, 150] loss: 0.683\n[38, 50] loss: 0.678\n[38, 100] loss: 0.680\n[38, 150] loss: 0.677\n[39, 50] loss: 0.678\n[39, 100] loss: 0.677\n[39, 150] loss: 0.678\n[40, 50] loss: 0.676\n[40, 100] loss: 0.679\n[40, 150] loss: 0.678\n[41, 50] loss: 0.674\n[41, 100] loss: 0.676\n[41, 150] loss: 0.678\n[42, 50] loss: 0.677\n[42, 100] loss: 0.677\n[42, 150] loss: 0.677\n[43, 50] loss: 0.675\n[43, 100] loss: 0.678\n[43, 150] loss: 0.674\n[44, 50] loss: 0.674\n[44, 100] loss: 0.674\n[44, 150] loss: 0.676\n[45, 50] loss: 0.671\n[45, 100] loss: 0.676\n[45, 150] loss: 0.675\n[46, 50] loss: 0.672\n[46, 100] loss: 0.673\n[46, 150] loss: 0.672\n[47, 50] loss: 0.670\n[47, 100] loss: 0.672\n[47, 150] loss: 0.674\n[48, 50] loss: 0.669\n[48, 100] loss: 0.675\n[48, 150] loss: 0.671\n[49, 50] loss: 0.668\n[49, 100] loss: 0.669\n[49, 150] loss: 0.671\n[50, 50] loss: 0.667\n[50, 100] loss: 0.670\n[50, 150] loss: 0.671\n[51, 50] loss: 0.668\n[51, 100] loss: 0.668\n[51, 150] loss: 0.666\n[52, 50] loss: 0.664\n[52, 100] loss: 0.665\n[52, 150] loss: 0.668\n[53, 50] loss: 0.664\n[53, 100] loss: 0.662\n[53, 150] loss: 0.666\n[54, 50] loss: 0.664\n[54, 100] loss: 0.666\n[54, 150] loss: 0.663\n[55, 50] loss: 0.661\n[55, 100] loss: 0.661\n[55, 150] loss: 0.663\n[56, 50] loss: 0.660\n[56, 100] loss: 0.659\n[56, 150] loss: 0.661\n[57, 50] loss: 0.659\n[57, 100] loss: 0.657\n[57, 150] loss: 0.661\n[58, 50] loss: 0.655\n[58, 100] loss: 0.659\n[58, 150] loss: 0.659\n[59, 50] loss: 0.651\n[59, 100] loss: 0.657\n[59, 150] loss: 0.664\n[60, 50] loss: 0.650\n[60, 100] loss: 0.655\n[60, 150] loss: 0.655\n[61, 50] loss: 0.648\n[61, 100] loss: 0.656\n[61, 150] loss: 0.653\n[62, 50] loss: 0.648\n[62, 100] loss: 0.651\n[62, 150] loss: 0.650\n[63, 50] loss: 0.645\n[63, 100] loss: 0.646\n[63, 150] loss: 0.651\n[64, 50] loss: 0.646\n[64, 100] loss: 0.648\n[64, 150] loss: 0.648\n[65, 50] loss: 0.637\n[65, 100] loss: 0.648\n[65, 150] loss: 0.643\n[66, 50] loss: 0.637\n[66, 100] loss: 0.647\n[66, 150] loss: 0.643\n[67, 50] loss: 0.635\n[67, 100] loss: 0.634\n[67, 150] loss: 0.645\n[68, 50] loss: 0.634\n[68, 100] loss: 0.640\n[68, 150] loss: 0.632\n[69, 50] loss: 0.628\n[69, 100] loss: 0.635\n[69, 150] loss: 0.637\n[70, 50] loss: 0.625\n[70, 100] loss: 0.630\n[70, 150] loss: 0.634\n[71, 50] loss: 0.621\n[71, 100] loss: 0.628\n[71, 150] loss: 0.634\n[72, 50] loss: 0.620\n[72, 100] loss: 0.623\n[72, 150] loss: 0.620\n[73, 50] loss: 0.609\n[73, 100] loss: 0.617\n[73, 150] loss: 0.627\n[74, 50] loss: 0.608\n[74, 100] loss: 0.618\n[74, 150] loss: 0.623\n[75, 50] loss: 0.603\n[75, 100] loss: 0.607\n[75, 150] loss: 0.611\n[76, 50] loss: 0.602\n[76, 100] loss: 0.605\n[76, 150] loss: 0.605\n[77, 50] loss: 0.593\n[77, 100] loss: 0.605\n[77, 150] loss: 0.606\n[78, 50] loss: 0.593\n[78, 100] loss: 0.599\n[78, 150] loss: 0.605\n[79, 50] loss: 0.571\n[79, 100] loss: 0.595\n[79, 150] loss: 0.606\n[80, 50] loss: 0.573\n[80, 100] loss: 0.588\n[80, 150] loss: 0.596\n[81, 50] loss: 0.576\n[81, 100] loss: 0.582\n[81, 150] loss: 0.589\n[82, 50] loss: 0.563\n[82, 100] loss: 0.575\n[82, 150] loss: 0.580\n[83, 50] loss: 0.551\n[83, 100] loss: 0.576\n[83, 150] loss: 0.571\n[84, 50] loss: 0.550\n[84, 100] loss: 0.570\n[84, 150] loss: 0.560\n[85, 50] loss: 0.548\n[85, 100] 
loss: 0.558\n[85, 150] loss: 0.565\n[86, 50] loss: 0.530\n[86, 100] loss: 0.545\n[86, 150] loss: 0.558\n[87, 50] loss: 0.539\n[87, 100] loss: 0.542\n[87, 150] loss: 0.556\n[88, 50] loss: 0.519\n[88, 100] loss: 0.540\n[88, 150] loss: 0.548\n[89, 50] loss: 0.509\n[89, 100] loss: 0.529\n[89, 150] loss: 0.546\n[90, 50] loss: 0.492\n[90, 100] loss: 0.514\n[90, 150] loss: 0.531\n[91, 50] loss: 0.502\n[91, 100] loss: 0.517\n[91, 150] loss: 0.537\n[92, 50] loss: 0.486\n[92, 100] loss: 0.501\n[92, 150] loss: 0.521\n[93, 50] loss: 0.469\n[93, 100] loss: 0.509\n[93, 150] loss: 0.527\n[94, 50] loss: 0.475\n[94, 100] loss: 0.482\n[94, 150] loss: 0.495\n[95, 50] loss: 0.465\n[95, 100] loss: 0.474\n[95, 150] loss: 0.496\n[96, 50] loss: 0.447\n[96, 100] loss: 0.473\n[96, 150] loss: 0.494\n[97, 50] loss: 0.442\n[97, 100] loss: 0.466\n[97, 150] loss: 0.491\n[98, 50] loss: 0.433\n[98, 100] loss: 0.462\n[98, 150] loss: 0.474\n[99, 50] loss: 0.429\n[99, 100] loss: 0.444\n[99, 150] loss: 0.468\n[100, 50] loss: 0.416\n[100, 100] loss: 0.445\n[100, 150] loss: 0.446\n[101, 50] loss: 0.418\n[101, 100] loss: 0.432\n[101, 150] loss: 0.465\n[102, 50] loss: 0.398\n[102, 100] loss: 0.412\n[102, 150] loss: 0.435\n[103, 50] loss: 0.385\n[103, 100] loss: 0.427\n[103, 150] loss: 0.439\n[104, 50] loss: 0.389\n[104, 100] loss: 0.404\n[104, 150] loss: 0.427\n[105, 50] loss: 0.370\n[105, 100] loss: 0.383\n[105, 150] loss: 0.416\n[106, 50] loss: 0.392\n[106, 100] loss: 0.392\n[106, 150] loss: 0.405\n[107, 50] loss: 0.365\n[107, 100] loss: 0.390\n[107, 150] loss: 0.398\n[108, 50] loss: 0.347\n[108, 100] loss: 0.373\n[108, 150] loss: 0.392\n[109, 50] loss: 0.342\n[109, 100] loss: 0.368\n[109, 150] loss: 0.386\n[110, 50] loss: 0.340\n[110, 100] loss: 0.368\n[110, 150] loss: 0.393\n[111, 50] loss: 0.323\n[111, 100] loss: 0.353\n[111, 150] loss: 0.372\n[112, 50] loss: 0.340\n[112, 100] loss: 0.340\n[112, 150] loss: 0.366\n[113, 50] loss: 0.314\n[113, 100] loss: 0.343\n[113, 150] loss: 0.348\n[114, 50] loss: 0.302\n[114, 100] loss: 0.345\n[114, 150] loss: 0.367\n[115, 50] loss: 0.295\n[115, 100] loss: 0.315\n[115, 150] loss: 0.354\n[116, 50] loss: 0.292\n[116, 100] loss: 0.327\n[116, 150] loss: 0.359\n[117, 50] loss: 0.288\n[117, 100] loss: 0.321\n[117, 150] loss: 0.331\n[118, 50] loss: 0.275\n[118, 100] loss: 0.309\n[118, 150] loss: 0.329\n[119, 50] loss: 0.274\n[119, 100] loss: 0.290\n[119, 150] loss: 0.319\n[120, 50] loss: 0.268\n[120, 100] loss: 0.291\n[120, 150] loss: 0.314\n[121, 50] loss: 0.254\n[121, 100] loss: 0.285\n[121, 150] loss: 0.313\n[122, 50] loss: 0.260\n[122, 100] loss: 0.283\n[122, 150] loss: 0.301\n[123, 50] loss: 0.262\n[123, 100] loss: 0.303\n[123, 150] loss: 0.295\n[124, 50] loss: 0.252\n[124, 100] loss: 0.256\n[124, 150] loss: 0.283\n[125, 50] loss: 0.242\n[125, 100] loss: 0.266\n[125, 150] loss: 0.296\n[126, 50] loss: 0.227\n[126, 100] loss: 0.275\n[126, 150] loss: 0.296\n[127, 50] loss: 0.225\n[127, 100] loss: 0.242\n[127, 150] loss: 0.274\n[128, 50] loss: 0.230\n[128, 100] loss: 0.250\n[128, 150] loss: 0.276\n[129, 50] loss: 0.214\n[129, 100] loss: 0.237\n[129, 150] loss: 0.268\n[130, 50] loss: 0.213\n[130, 100] loss: 0.220\n[130, 150] loss: 0.240\n[131, 50] loss: 0.229\n[131, 100] loss: 0.229\n[131, 150] loss: 0.251\n[132, 50] loss: 0.215\n[132, 100] loss: 0.220\n[132, 150] loss: 0.244\n[133, 50] loss: 0.222\n[133, 100] loss: 0.227\n[133, 150] loss: 0.241\n[134, 50] loss: 0.206\n[134, 100] loss: 0.231\n[134, 150] loss: 0.238\n[135, 50] loss: 0.207\n[135, 100] loss: 0.219\n[135, 150] loss: 0.223\n[136, 50] 
loss: 0.188\n[136, 100] loss: 0.191\n[136, 150] loss: 0.243\n[137, 50] loss: 0.193\n[137, 100] loss: 0.222\n[137, 150] loss: 0.223\n[138, 50] loss: 0.190\n[138, 100] loss: 0.201\n[138, 150] loss: 0.220\n[139, 50] loss: 0.178\n[139, 100] loss: 0.215\n[139, 150] loss: 0.224\n[140, 50] loss: 0.161\n[140, 100] loss: 0.172\n[140, 150] loss: 0.197\n[141, 50] loss: 0.185\n[141, 100] loss: 0.200\n[141, 150] loss: 0.233\n[142, 50] loss: 0.159\n[142, 100] loss: 0.183\n[142, 150] loss: 0.204\n[143, 50] loss: 0.172\n[143, 100] loss: 0.186\n[143, 150] loss: 0.215\n[144, 50] loss: 0.169\n[144, 100] loss: 0.175\n[144, 150] loss: 0.206\n[145, 50] loss: 0.179\n[145, 100] loss: 0.174\n[145, 150] loss: 0.182\n[146, 50] loss: 0.158\n[146, 100] loss: 0.151\n[146, 150] loss: 0.202\n[147, 50] loss: 0.176\n[147, 100] loss: 0.170\n[147, 150] loss: 0.174\n[148, 50] loss: 0.153\n[148, 100] loss: 0.153\n[148, 150] loss: 0.196\n[149, 50] loss: 0.152\n[149, 100] loss: 0.181\n[149, 150] loss: 0.180\n[150, 50] loss: 0.145\n[150, 100] loss: 0.149\n[150, 150] loss: 0.165\n[151, 50] loss: 0.155\n[151, 100] loss: 0.139\n[151, 150] loss: 0.177\n[152, 50] loss: 0.135\n[152, 100] loss: 0.149\n[152, 150] loss: 0.157\n[153, 50] loss: 0.121\n[153, 100] loss: 0.161\n[153, 150] loss: 0.169\n[154, 50] loss: 0.148\n[154, 100] loss: 0.143\n[154, 150] loss: 0.178\n[155, 50] loss: 0.144\n[155, 100] loss: 0.135\n[155, 150] loss: 0.160\n[156, 50] loss: 0.123\n[156, 100] loss: 0.143\n[156, 150] loss: 0.174\n[157, 50] loss: 0.128\n[157, 100] loss: 0.130\n[157, 150] loss: 0.158\n[158, 50] loss: 0.123\n[158, 100] loss: 0.131\n[158, 150] loss: 0.156\n[159, 50] loss: 0.114\n[159, 100] loss: 0.130\n[159, 150] loss: 0.144\n[160, 50] loss: 0.141\n[160, 100] loss: 0.138\n[160, 150] loss: 0.166\n[161, 50] loss: 0.126\n[161, 100] loss: 0.123\n[161, 150] loss: 0.146\n[162, 50] loss: 0.117\n[162, 100] loss: 0.117\n[162, 150] loss: 0.170\n[163, 50] loss: 0.099\n[163, 100] loss: 0.122\n[163, 150] loss: 0.141\n[164, 50] loss: 0.120\n[164, 100] loss: 0.108\n[164, 150] loss: 0.176\n[165, 50] loss: 0.111\n[165, 100] loss: 0.123\n[165, 150] loss: 0.137\n[166, 50] loss: 0.119\n[166, 100] loss: 0.119\n[166, 150] loss: 0.127\n[167, 50] loss: 0.104\n[167, 100] loss: 0.115\n[167, 150] loss: 0.137\n[168, 50] loss: 0.125\n[168, 100] loss: 0.112\n[168, 150] loss: 0.111\n[169, 50] loss: 0.108\n[169, 100] loss: 0.110\n[169, 150] loss: 0.129\n[170, 50] loss: 0.120\n[170, 100] loss: 0.115\n[170, 150] loss: 0.109\n[171, 50] loss: 0.099\n[171, 100] loss: 0.108\n[171, 150] loss: 0.133\n[172, 50] loss: 0.097\n[172, 100] loss: 0.102\n[172, 150] loss: 0.127\n[173, 50] loss: 0.092\n[173, 100] loss: 0.095\n[173, 150] loss: 0.105\n[174, 50] loss: 0.110\n[174, 100] loss: 0.103\n[174, 150] loss: 0.115\n[175, 50] loss: 0.103\n[175, 100] loss: 0.098\n[175, 150] loss: 0.108\n[176, 50] loss: 0.085\n[176, 100] loss: 0.105\n[176, 150] loss: 0.128\n[177, 50] loss: 0.094\n[177, 100] loss: 0.083\n[177, 150] loss: 0.104\n[178, 50] loss: 0.109\n[178, 100] loss: 0.100\n[178, 150] loss: 0.118\n[179, 50] loss: 0.087\n[179, 100] loss: 0.073\n[179, 150] loss: 0.080\n[180, 50] loss: 0.114\n[180, 100] loss: 0.116\n[180, 150] loss: 0.119\n[181, 50] loss: 0.084\n[181, 100] loss: 0.087\n[181, 150] loss: 0.119\n[182, 50] loss: 0.115\n[182, 100] loss: 0.103\n[182, 150] loss: 0.118\n[183, 50] loss: 0.083\n[183, 100] loss: 0.082\n[183, 150] loss: 0.094\n[184, 50] loss: 0.090\n[184, 100] loss: 0.110\n[184, 150] loss: 0.110\n[185, 50] loss: 0.066\n[185, 100] loss: 0.080\n[185, 150] loss: 0.114\n[186, 50] 
loss: 0.087\n[186, 100] loss: 0.073\n[186, 150] loss: 0.074\n[187, 50] loss: 0.087\n[187, 100] loss: 0.087\n[187, 150] loss: 0.085\n[188, 50] loss: 0.071\n[188, 100] loss: 0.096\n[188, 150] loss: 0.109\n[189, 50] loss: 0.067\n[189, 100] loss: 0.100\n[189, 150] loss: 0.081\n[190, 50] loss: 0.097\n[190, 100] loss: 0.082\n[190, 150] loss: 0.082\n[191, 50] loss: 0.071\n[191, 100] loss: 0.085\n[191, 150] loss: 0.097\n[192, 50] loss: 0.069\n[192, 100] loss: 0.070\n[192, 150] loss: 0.074\n[193, 50] loss: 0.090\n[193, 100] loss: 0.086\n[193, 150] loss: 0.098\n[194, 50] loss: 0.059\n[194, 100] loss: 0.062\n[194, 150] loss: 0.080\n[195, 50] loss: 0.080\n[195, 100] loss: 0.074\n[195, 150] loss: 0.085\n[196, 50] loss: 0.062\n[196, 100] loss: 0.071\n[196, 150] loss: 0.089\n[197, 50] loss: 0.081\n[197, 100] loss: 0.093\n[197, 150] loss: 0.087\n[198, 50] loss: 0.073\n[198, 100] loss: 0.088\n[198, 150] loss: 0.064\n[199, 50] loss: 0.066\n[199, 100] loss: 0.070\n[199, 150] loss: 0.111\n[200, 50] loss: 0.051\n[200, 100] loss: 0.046\n[200, 150] loss: 0.070\n[201, 50] loss: 0.067\n[201, 100] loss: 0.068\n[201, 150] loss: 0.093\n[202, 50] loss: 0.061\n[202, 100] loss: 0.060\n[202, 150] loss: 0.087\n[203, 50] loss: 0.071\n[203, 100] loss: 0.070\n[203, 150] loss: 0.089\n[204, 50] loss: 0.057\n[204, 100] loss: 0.062\n[204, 150] loss: 0.074\n[205, 50] loss: 0.058\n[205, 100] loss: 0.064\n[205, 150] loss: 0.094\n[206, 50] loss: 0.058\n[206, 100] loss: 0.060\n[206, 150] loss: 0.075\n[207, 50] loss: 0.061\n[207, 100] loss: 0.064\n[207, 150] loss: 0.084\n[208, 50] loss: 0.062\n[208, 100] loss: 0.056\n[208, 150] loss: 0.068\n[209, 50] loss: 0.062\n[209, 100] loss: 0.077\n[209, 150] loss: 0.077\n[210, 50] loss: 0.058\n[210, 100] loss: 0.066\n[210, 150] loss: 0.083\n[211, 50] loss: 0.053\n[211, 100] loss: 0.057\n[211, 150] loss: 0.065\n[212, 50] loss: 0.058\n[212, 100] loss: 0.052\n[212, 150] loss: 0.064\n[213, 50] loss: 0.075\n[213, 100] loss: 0.066\n[213, 150] loss: 0.070\n[214, 50] loss: 0.055\n[214, 100] loss: 0.056\n[214, 150] loss: 0.098\n[215, 50] loss: 0.041\n[215, 100] loss: 0.043\n[215, 150] loss: 0.059\n[216, 50] loss: 0.085\n[216, 100] loss: 0.077\n[216, 150] loss: 0.056\n[217, 50] loss: 0.052\n[217, 100] loss: 0.059\n[217, 150] loss: 0.062\n[218, 50] loss: 0.061\n[218, 100] loss: 0.065\n[218, 150] loss: 0.063\n[219, 50] loss: 0.043\n[219, 100] loss: 0.049\n[219, 150] loss: 0.061\n[220, 50] loss: 0.061\n[220, 100] loss: 0.059\n[220, 150] loss: 0.054\n[221, 50] loss: 0.060\n[221, 100] loss: 0.052\n[221, 150] loss: 0.054\n[222, 50] loss: 0.045\n[222, 100] loss: 0.053\n[222, 150] loss: 0.084\n[223, 50] loss: 0.047\n[223, 100] loss: 0.049\n[223, 150] loss: 0.066\n[224, 50] loss: 0.042\n[224, 100] loss: 0.045\n[224, 150] loss: 0.063\n[225, 50] loss: 0.044\n[225, 100] loss: 0.056\n[225, 150] loss: 0.080\n[226, 50] loss: 0.040\n[226, 100] loss: 0.051\n[226, 150] loss: 0.065\n[227, 50] loss: 0.041\n[227, 100] loss: 0.062\n[227, 150] loss: 0.070\n[228, 50] loss: 0.045\n[228, 100] loss: 0.049\n[228, 150] loss: 0.066\n[229, 50] loss: 0.059\n[229, 100] loss: 0.043\n[229, 150] loss: 0.045\n[230, 50] loss: 0.044\n[230, 100] loss: 0.046\n[230, 150] loss: 0.075\n[231, 50] loss: 0.051\n[231, 100] loss: 0.040\n[231, 150] loss: 0.044\n[232, 50] loss: 0.044\n[232, 100] loss: 0.053\n[232, 150] loss: 0.068\n[233, 50] loss: 0.038\n[233, 100] loss: 0.041\n[233, 150] loss: 0.067\n[234, 50] loss: 0.043\n[234, 100] loss: 0.033\n[234, 150] loss: 0.034\n[235, 50] loss: 0.068\n[235, 100] loss: 0.058\n[235, 150] loss: 0.048\n[236, 50] 
loss: 0.040\n[236, 100] loss: 0.052\n[236, 150] loss: 0.055\n[237, 50] loss: 0.043\n[237, 100] loss: 0.035\n[237, 150] loss: 0.068\n[238, 50] loss: 0.045\n[238, 100] loss: 0.052\n[238, 150] loss: 0.060\n[239, 50] loss: 0.027\n[239, 100] loss: 0.026\n[239, 150] loss: 0.036\nFinished Training\n" ], [ "correct = 0\ntotal = 0\nwith torch.no_grad():\n for data in trainloader:\n images, labels = data\n images, labels = images.to(\"cuda\"), labels.to(\"cuda\")\n outputs = inc(images)\n _, predicted = torch.max(outputs.data, 1)\n total += labels.size(0)\n correct += (predicted == labels).sum().item()\n\nprint('Accuracy of the network on the 60000 train images: %d %%' % (\n 100 * correct / total))", "Accuracy of the network on the 60000 train images: 97 %\n" ], [ "total,correct", "_____no_output_____" ], [ "correct = 0\ntotal = 0\nout = []\npred = []\nwith torch.no_grad():\n for data in testloader:\n images, labels = data\n images, labels = images.to(\"cuda\"),labels.to(\"cuda\")\n out.append(labels.cpu().numpy())\n outputs= inc(images)\n _, predicted = torch.max(outputs.data, 1)\n pred.append(predicted.cpu().numpy())\n total += labels.size(0)\n correct += (predicted == labels).sum().item()\n\nprint('Accuracy of the network on the 10000 test images: %d %%' % (\n 100 * correct / total))", "Accuracy of the network on the 10000 test images: 19 %\n" ], [ "out = np.concatenate(out,axis=0)", "_____no_output_____" ], [ "pred = np.concatenate(pred,axis=0)", "_____no_output_____" ], [ "index = np.logical_or(out ==1,out==0)\nprint(index.shape)", "(10000,)\n" ], [ "acc = sum(out[index] == pred[index])/sum(index)\nprint('Accuracy of the network on the 10000 test images: %d %%' % (\n 100*acc))", "Accuracy of the network on the 10000 test images: 94 %\n" ], [ "\nsum(index)", "_____no_output_____" ], [ "import random\nrandom.sample([1,2,3,4,5,6,7,8],5)", "_____no_output_____" ], [ "# torch.save(inc.state_dict(),\"/content/drive/My Drive/model_simple_8000.pkl\")", "_____no_output_____" ], [ "fig = plt.figure()\nplt.plot(loss_curi,label=\"loss_Curve\")\nplt.xlabel(\"epochs\")\nplt.ylabel(\"training_loss\")\nplt.legend()\nfig.savefig(\"loss_curve.pdf\") ", "_____no_output_____" ], [ "", "_____no_output_____" ] ], [ [ "Simple Model 3 Inception Module\n\n|true training data | Corr Training Data | Test Accuracy | Test Accuracy 0-1 | \n| ------------------ | ------------------ | ------------- | ----------------- |\n| 100 | 47335 | 15 | 75 |\n| 500 | 47335 | 16 | 80 | \n| 1000 | 47335 | 17 | 83 | \n| 2000 | 47335 | 19 | 92 | \n| 4000 | 47335 | 20 | 95 | \n| 6000 | 47335 | 20 | 96 | \n| 8000 | 47335 | 20 | 96 | \n| 12665 | 47335 | 20 | 98 | \n\n\n| Total Training Data | Training Accuracy |\n|---------------------------- | ------------------------ |\n| 47435 | 100 |\n| 47835 | 100 |\n| 48335 | 100 |\n| 49335 | 100 | \n| 51335 | 100 |\n| 53335 | 100 |\n| 55335 | 100 |\n| 60000 | 100 |", "_____no_output_____" ], [ "Mini- Inception network 8 Inception Modules\n\n|true training data | Corr Training Data | Test Accuracy | Test Accuracy 0-1 | \n| ------------------ | ------------------ | ------------- | ----------------- |\n| 100 | 47335 | 14 | 69 |\n| 500 | 47335 | 19 | 90 | \n| 1000 | 47335 | 19 | 92 | \n| 2000 | 47335 | 20 | 95 | \n| 4000 | 47335 | 20 | 97 | \n| 6000 | 47335 | 20 | 97 | \n| 8000 | 47335 | 20 | 98 | \n| 12665 | 47335 | 20 | 99 | ", "_____no_output_____" ], [ "| Total Training Data | Training Accuracy |\n|---------------------------- | ------------------------ |\n| 47435 | 100 |\n| 47835 | 100 |\n| 48335 | 100 
|\n| 49335 | 100 | \n| 51335 | 100 |\n| 53335 | 100 |\n| 55335 | 100 |\n| 60000 | 100 |", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ] ]
d0358a472957fb950949550e724276d9b6d78e3d
868,383
ipynb
Jupyter Notebook
Final_Proj/.ipynb_checkpoints/Stats Final Project -checkpoint.ipynb
alsaneaan/BMI_stats_final
d6b32a22e5a37e4947a46f37e2d4a9b976c3112b
[ "MIT" ]
null
null
null
Final_Proj/.ipynb_checkpoints/Stats Final Project -checkpoint.ipynb
alsaneaan/BMI_stats_final
d6b32a22e5a37e4947a46f37e2d4a9b976c3112b
[ "MIT" ]
null
null
null
Final_Proj/.ipynb_checkpoints/Stats Final Project -checkpoint.ipynb
alsaneaan/BMI_stats_final
d6b32a22e5a37e4947a46f37e2d4a9b976c3112b
[ "MIT" ]
1
2018-10-20T21:19:21.000Z
2018-10-20T21:19:21.000Z
435.497994
219,274
0.896508
[ [ [ "# Statistical analysis on NEMSIS ", "_____no_output_____" ], [ "## BMI 6106 - Final Project\n#### Project by: \nAnwar Alsanea <br>\nLuz Gabriela Iorg <br>\nJorge Rojas <br>\n", "_____no_output_____" ], [ "## Abstract <br>\nThe National Emergency Medical Services Information System (NEMSIS) is a national database that contains Emergency Medical Services (EMS) data collected for the United States. In this project, we are adressing multiple questions to determine trends and major factors in EMS patient care. Objectives included predicting gender and age based on other factors such as incident location and type of injury. In order to approach our objectives, statistical analysis were applied using the program R. Analysis included linear regressions and principle component analysis. Our results show no significance when it comes to predicting gender based on factors. As for age, some factors were significant in prediciting the patient's age. Component analysis show low variane across all factors included in this data. We conclude that more data points with more numerical variable should be included and analyzed to provide better EMS patient care. Further analysis is needed to conclude the best approach to better determine EMS patient care with more data.", "_____no_output_____" ], [ "## Introduction\nThe National Emergency Medical Services Information System (NEMSIS) is a national database that contains Emergency Medical Services (EMS) data collected for the United States. The data holds information on patient care from 9-1-1 calls. The goal of NEMSIS in collecting this database across the states to evaluate and analyze EMS needs and improve the performance of patient care (NEMSIS, 2017). The dataset is collected for all states, however the data does not specify which state it was collected. We are unable to compare between states.\n\n\nIn this project, we are adressing multiple questions to determine trends and major factors in EMS patient care. \n\nOur first objective was to examine how the parameters related to gender. Which of the parameters or factors had the highest effect on gender? Did gender play an important role to help in determining better EMS patient care? \n\nOur second objective was to examine the patient's age to the factors. Can we use age to assist in determinig the best approach to EMS patient care?\n\nFianlly, we analyzed the data as a whole, and determined if any factors of paramteres should be highly considered when developing new EMA patient care procedures.\n\n", "_____no_output_____" ], [ "## Methods \nIn order to approach each of our objectives, statistical analysis were applied to determine each objective.\n\n>### Data:\nData was obtained from US Department of Transportation National Highway Traffic Safety Administration (NHTSA) (NHSTA, 2006). The data was first imported to Microsoft SQL Server to clean up the delimeters. Other changes included changing all column names to be able to use them in R easily. The original dataset contains 29 million events, in which were narrowed down to 10,000 events for simplification.\n\n>The data was saved as a csv file that is included with this report.\n\n>The original data set contains 44 National elements (i.e.parameters, factors). 
However in this analysis we were interested in analysing and examining 8 of the elements that included:\n\n>- Age\n>- Gender\n>- Primary method of payment\n>- Incident location type\n>- Primary symptom\n>- Cause of injury\n>- Incident patient disposition\n>- Complaint reported by dispatch\n\n>Each parameter is explained below:\n\n><font color = blue> *Age* (age.in.years) (numeric) </font>\nThis column was calculated from Date of Birth provided in the original dataset, this conversion was performed in SQL.\n\n\n><font color = blue>*Gender* (categorical)</font>\nThe original data provided gender in terms values that represented each gender (655 for female and 650 for male). These values were taken in R and converted to strings of \"male and \"female\"\n\n><font color = blue>*primary.method.of.payment* (categorical)</font>\nvalues and their representations are as follows:\n\n>- 720 Insurance\n>- 725 Medicaid\n>- 730 Medicare\n>- 735 Not billed\n>- 740 Other government\n>- 745 Self pay\n>- 750 Workes compensaiont\n\n><font color = blue>*incident.location.type* (categorical)</font>\nvalues and their representations are as follows:\n\n>- 1135 Home or residence\n>- 1140 Farm\n>- 1145 Mine or quarry\n>- 1150 Industrial place\n>- 1155 Recreation or sport place\n>- 1160 street or highway\n>- 1165 public building\n>- 1170 business/resturaunts\n>- 1175 Health care facility (hospital, clinic, nursing homes)\n>- 1180 Nursing homes or jail\n>- 1185 Lake, river, ocean\n>- 1190 all other locations\n\n\n><font color = blue>*primary.symptom* (categorical)</font>\nvalues and their representations are as follows:\n\n>- 1405 Bleeding\n>- 1410 Breathing Problem\n>- 1415 Change in responsiveness \n>- 1425 Death\n>- 1420 Choking\n>- 1430 Device/Equipment Problem \n>- 1435 Diarrhea\n>- 1440 Drainage/Discharge\n>- 1445 Fever\n>- 1450 Malaise\n>- 1455 Mass/Lesion\n>- 1460 Mental/Psych\n>- 1465 Nausea/Vomiting\n>- 1470 None\n>- 1475 Pain\n>- 1480 Palpitations\n>- 1485 Rash/Itching\n>- 1490 Swelling\n>- 1495 Transport Only\n>- 1505 Wound\n>- 1500 Weakness\n\n><font color = blue>*cause.of.injury* (categorical)</font>\nvalues and their representations are as follows:\n\n>- 1885 Bites (E906.0)\n>- 9505 Bicycle Accident (E826.0)\n>- 9520 Child battering (E967.0)\n>- 9530 Drug poisoning (E85X.0)\n>- 9540 Excessive Cold (E901.0)\n>- 9550 Falls (E88X.0)\n>- 9560 Firearm assault (E965.0)\n>- 9570 Firearm self inflicted (E955.0)\n>- 9580 Machinery accidents (E919.0)\n>- 9590 Motor Vehicle non-traffic accident (E82X.0) \n>- 9600 Motorcycle Accident (E81X.1)\n>- 9610 Pedestrian traffic accident (E814.0)\n>- 9620 Rape (E960.1)\n>- 9630 Stabbing/Cutting Accidental (E986.0)\n>- 9640 Struck by Blunt/Thrown Object (E968.2) \n>- 9650 Water Transport accident (E83X.0)\n>- 9500 Aircraft related accident (E84X.0) \n>- 9515 Chemical poisoning (E86X.0)\n>- 9525 Drowning (E910.0)\n>- 9535 Electrocution (non-lightning) (E925.0) \n>- 9545 Excessive Heat (E900.0)\n>- 9555 Fire and Flames (E89X.0)\n>- 9565 Firearm injury (accidental) (E985.0)\n>- 9575 Lightning (E907.0)\n>- 9585 Mechanical Suffocation (E913.0)\n>- 9595 Motor Vehicle traffic accident (E81X.0) \n>- 9605 Non-Motorized Vehicle Accident (E848.0) \n>- 9615 Radiation exposure (E926.0)\n>- 9625 Smoke Inhalation (E89X.2)\n>- 9635 Stabbing/Cutting Assault \n>- 9645 Venomous stings (plants, animals) (E905.0)\n\n\n><font color = blue>*incident.patient.disposition* (categorical)</font>\nvalues and their representations are as follows:\n\n>- 4815 Cancelled\n>- 4825 No Patient Found\n>- 4835 
Patient Refused Care\n>- 4845 Treated, Transferred Care\n>- 4855 Treated, Transported by Law Enforcement\n>- 4820 Dead at Scene\n>- 4830 No Treatment Required\n>- 4840 Treated and Released\n>- 4850 Treated, Transported by EMS\n>- 4860 Treated, Transported by Private Vehicle\n\n\n><font color = blue>*complaint.reported.by.dispatch* (categorical)</font>\nvalues and their representations are as follows:\n\n>- 400 Abdominal Pain\n>- 410 Animal Bite\n>- 420 Back Pain\n>- 430 Burns\n>- 440 Cardiac Arrest\n>- 450 Choking\n>- 460 Diabetic Problem\n>- 470 Electrocution\n>- 480 Fall Victim\n>- 490 Heart Problems\n>- 500 Hemorrhage/Laceration 510 Ingestion/Poisoning\n>- 520 Psychiatric Problem\n>- 530 Stab/Gunshot Wound\n>- 540 Traffic Accident\n>- 550 Unconscious/Fainting\n>- 560 Transfer/Interfacility/Palliative Care\n>- 405 Allergies\n>- 415 Assault\n>- 425 Breathing Problem\n>- 435 CO Poisoning/Hazmat\n>- 445 Chest Pain\n>- 455 Convulsions/Seizure\n>- 465 Drowning\n>- 475 Eye Problem\n>- 485 Headache\n>- 495 Heat/Cold Exposure\n>- 505 Industrial Accident/Inaccessible Incident/Other Entrapments (non-vehicle) \n>- 515 Pregnancy/Childbirth\n>- 525 Sick Person\n>- 535 Stroke/CVA\n>- 545 Traumatic Injury\n>- 555 Unknown Problem Man Down 565 MCI (Mass Casualty Incident)\n\n\n", "_____no_output_____" ], [ "> ### Statistical Analysis:\nStatistical analysis and visual representations were produced using the program R (R Development Core Team, 2008). Packages used in this report were:\n- FactoMineR\n- factoextra\n- corrplot\n- dplyr\n- ggplot2\n- modelr\n- PCAmixdata\n\n>A code is included in this report to install those packages if needed.\n\n> #### tests:\nLinear models and regression were used to approach the first two objectives. Principle component analysis was used to analyze the data as a whole.", "_____no_output_____" ], [ "## Results and Discussion", "_____no_output_____" ], [ "The goal of this study was to analyze a subset of characteristics or factors from the NEMSIS 911 call events. ", "_____no_output_____" ], [ "### Code Outline:\n- Input and output data\n\n- Create vectors, handle variables, and perform other basic functions (remove NAs)\n\n- Tackle data structures such as matrices, lists, factors, and data frames\n\n- Build statistical models with linear regressions and analysis of variance\n\n- Create a variety of graphic displays\n\n- Finding clusters in data\n", "_____no_output_____" ], [ "### Packages and libraries installation: ", "_____no_output_____" ] ], [ [ "install.packages(c(\"FactoMineR\", \"factoextra\"))\ninstall.packages(\"corrplot\")\ninstall.packages(\"PCAmixdata\")", "Updating HTML index of packages in '.Library'\nMaking 'packages.html' ... done\nUpdating HTML index of packages in '.Library'\nMaking 'packages.html' ... done\nUpdating HTML index of packages in '.Library'\nMaking 'packages.html' ... 
done\n" ], [ "library(dplyr)\nlibrary(ggplot2)\nlibrary(gridExtra)\nlibrary(\"FactoMineR\")\nlibrary(\"corrplot\")\nlibrary(\"factoextra\")\nlibrary(modelr)\nlibrary(broom)\nlibrary(\"PCAmixdata\")\n\nrequire(stats)\n#require(pls)", "_____no_output_____" ] ], [ [ "### Data: \nThe file to import is saved under the name: events_cleaned_v3.txt\nThe code below is to import the data to the notebook", "_____no_output_____" ] ], [ [ "events = read.table(file = \"events_cleaned_v3.txt\", sep=\"|\", header = TRUE, stringsAsFactors = F)\nhead(events, n=7)\n#dim(events)", "_____no_output_____" ] ], [ [ "### Data cleaning:\n- Create vectors, handle variables, and perform other basic functions (remove NAs)", "_____no_output_____" ] ], [ [ "event1 = select(events, age.in.years, gender, primary.method.of.payment, \n incident.location.type, primary.symptom, \n cause.of.injury, incident.patient.disposition, complaint.reported.by.dispatch\n )\n\n\nevent1[event1 < 0] <- NA\n#head(event1, n=50)\nevent2 = na.exclude(event1)\ndim(event2)\nhead(event2)\n", "_____no_output_____" ] ], [ [ "- Tackle data structures manipulation such as matrices, lists, factors, and data frames.\n", "_____no_output_____" ] ], [ [ "str(event2)", "'data.frame':\t292 obs. of 8 variables:\n $ age.in.years : int 19 75 79 54 45 50 28 62 22 83 ...\n $ gender : num 650 650 655 655 655 650 655 655 655 655 ...\n $ primary.method.of.payment : num 720 720 730 745 735 735 725 725 725 730 ...\n $ incident.location.type : num 1175 1175 1135 1135 1135 ...\n $ primary.symptom : num 1415 1495 1405 1405 1475 ...\n $ cause.of.injury : num 9600 9550 9550 9640 9550 ...\n $ incident.patient.disposition : int 4850 4850 4850 4840 4850 4850 4850 4850 4850 4850 ...\n $ complaint.reported.by.dispatch: num 560 560 545 415 525 525 540 480 480 480 ...\n - attr(*, \"na.action\")=Class 'exclude' Named int [1:9708] 1 2 3 4 5 6 7 8 9 10 ...\n .. ..- attr(*, \"names\")= chr [1:9708] \"1\" \"2\" \"3\" \"4\" ...\n" ], [ "#Converting gender as factor:\nevent2$gender <-as.factor(event2$gender)\nlevels(event2$gender) <- c(\"male\", \"female\")\n\n#Converting dataframe as factor:\nevent2 <- data.frame(lapply(event2, as.factor))\n\n#Converting age.in.years as numeric:\nevent2$age.in.years <-as.numeric(event2$age.in.years)\n\n#Checking summaries\nsummary(event2)\ncontrasts(event2$gender)\n\nhead(event2)\n", "_____no_output_____" ], [ "str(event2)", "'data.frame':\t292 obs. 
of 8 variables:\n $ age.in.years : num 12 65 69 45 36 41 21 53 15 73 ...\n $ gender : Factor w/ 2 levels \"male\",\"female\": 1 1 2 2 2 1 2 2 2 2 ...\n $ primary.method.of.payment : Factor w/ 6 levels \"720\",\"725\",\"730\",..: 1 1 3 5 4 4 2 2 2 3 ...\n $ incident.location.type : Factor w/ 8 levels \"1135\",\"1150\",..: 6 6 1 1 1 1 3 1 1 7 ...\n $ primary.symptom : Factor w/ 13 levels \"1405\",\"1410\",..: 3 11 1 1 9 9 1 9 9 12 ...\n $ cause.of.injury : Factor w/ 20 levels \"1885\",\"9500\",..: 12 6 6 19 6 6 11 6 6 6 ...\n $ incident.patient.disposition : Factor w/ 4 levels \"4835\",\"4840\",..: 4 4 4 2 4 4 4 4 4 4 ...\n $ complaint.reported.by.dispatch: Factor w/ 19 levels \"400\",\"410\",\"415\",..: 19 19 16 3 13 13 15 8 8 8 ...\n" ] ], [ [ "### Data Analysis: \nBuild statistical models with linear regressions and analysis of variance", "_____no_output_____" ], [ "#### Regressions: \nFor our analysis, we are using the standard cut off at alpha = 0.05\n\n#### Linear regression to predict gender :\nThe first test is a generalized linear model to predict gender based on the remanining factors.\nThe Null hypothesis is that the factors have no effect on gender. \nThe alternative hypothesis is that there is an effect.", "_____no_output_____" ] ], [ [ "model = glm(gender ~. -gender, data= event2, family= binomial)\nsummary(model)\n\n#Gender (outcome variable, Y) and the rest of the variables (predictors, X) \n#Null hypothesis (H0): the coefficients are equal to zero (i.e., no relationship between x and y)\n#Alternative Hypothesis (Ha): the coefficients are not equal to zero (i.e., there is some relationship between x and y)\n\n#There is not enough evidence to say that there is a relationship between gender and the predictors. \n#the p-values for the intercept and the predictor variable are not significant, We can NOT reject the null hypothesis.\n \n\n##further Interpretation:\n\n#From the P value numbers we can say that only primary.method.of.payment745 (Self Pay) and\n#primary.symptom(1500 and 1505) are significantly associated with the caller’s gender.\n#All of the other variables do not seem to show any relationship to the caller’s gender.\n\n#The coefficient estimate of the variable primary.method.of.payment745 is b = -1.779e+00, which is negative. \n#This means that a if the caller (or patient) is Self Pay, then \n#it is associated with a decreased probability of being a female. \n\n#primary.symptom1500 (Weakness) b = 1.411e+00 which is positive.\n#primary.symptom1505 (Wound) b = 1.543e+00 which is positive.\n#This means that symptoms of weakness and wounds are\n#associated with a increased probability of being a female.\n\n#BUT IT IS NOT TRUE BECAUSE THERE IS NOT A SIGNIFICANT ASSOCIATION AMONG VARIABLES! \n#*authors notes and future research needed to prove such claims. ", "Warning message:\n“glm.fit: fitted probabilities numerically 0 or 1 occurred”" ] ], [ [ "Linear regression results show that most factors had a P > 0.05, in which we have to accept the null hypothesis that the factors do not have an effect on gender and cannot predict gender. Except for primary.method.of.payment745 (which is Self Pay) and primary.symptom(1500 and 1505) are significantly associated with the caller’s gender (ie P < 0.05).\nAll of the other variables do not show any effect on the caller’s gender ( P > 0.05).\n\nThe coefficient estimate of the variable primary.method.of.payment745 is b = -1.779e+00, which is negative. 
\nThis means that a if the caller (or patient) is Self Paid, then it is associated with a decreased probability of being a female. However this coeffiecient is still too low to have significance.\n\nprimary.symptom1500 (Weakness) b = 1.411e+00 which is positive.\nprimary.symptom1505 (Wound) b = 1.543e+00 which is positive.\nThis means that symptoms of weakness and wounds are associated with a increased probability of being a female. Again, the values are too low to be significant. We however conclude that this information is not sufficient to assist EMS patient care procedures, further data needs to be collected in order to determine if EMS patient care can be improved by gender as a dependent variable. ", "_____no_output_____" ], [ "#### Linear regression to predict age :\nOur second test is to predict using age as the independent variable. \nThe null hypothesis is that age cannot be predicted by the other variable. The alternative hypothesis is that other variables can act as independent variable that can predict age.", "_____no_output_____" ] ], [ [ "model2 = lm(age.in.years ~. -age.in.years, data= event2)\nsummary(model2)\n\n#age.in.years (outcome variable, Y) and the rest of the variables (predictors, X) \n#Null hypothesis (H0): the coefficients are equal to zero (i.e., no relationship between x and y)\n#Alternative Hypothesis (Ha): the coefficients are not equal to zero (i.e., there is some relationship between x and y)\n\n#There is enough evidence to say that there is a weak association between age.in.years and the predictors. \n#the p-values for the intercept and the predictor variable are slightly significant, We can reject the null hypothesis.\n\n", "_____no_output_____" ] ], [ [ "Our results show that there are more factors having an effect on age than gender did. Primary methods 725 and 730 (medicaid and medicare) had high significance at P << 0.05, in which we reject the null hypothesis. This result is expected as medicaid is generally for younger ages while medicare is health care for ages > 65. Primary symptom 1500 (weakness) was significant towards age at P < 0.05. Weakness is a symptom that can explain more than one condition, it is however mostly used to describe symptoms experienced with older age. We suggest that further information is included in the primary symptom factor to be able to accurately examine and develop enhanced EMS patient care services. \nCause of injury 9565 and 9605 (fire arm injury and non-motorized vehicle accident respectively) have shown high significance according to age at P < 0.05 in which we reject the null hypothesis. \n\nOther factors had a P value > 0.05 in which we accept the null hypothesis that they have no effect on age.", "_____no_output_____" ], [ "##### Regression assumptions:", "_____no_output_____" ] ], [ [ "par(mfrow = c(2, 2))\nplot(model2)\n\n#### Linearity of the data (Residuals vs Fitted). \n#There is no pattern in the residual plot. This suggests that we can assume linear relationship \n#between the predictors and the outcome variables.\n\n#### Normality of residuals (Normal Q-Q plot).\n#All the points fall approximately along the reference line, so we can assume normality.\n\n\n #### Homogeneity of residuals variance (Scale-Location). 
\n#It can be seen that the variability (variances) of the residual points does not quite follows a horizontal \n#line with equally spread points, suggesting non-constant variances in the residuals errors \n#(or the presence of some heteroscedasticity).\n#To reduce the heteroscedasticity problem we used the log transformation of the outcome variable (age.in.years, (y)).\n#model3 = lm(log(age.in.years) ~. -age.in.years, data= event2)\n\n\n #### Independence of residuals error terms (Residuals vs Leverage).\n#There are not drastic outliers in our data. ", "Warning message:\n“not plotting observations with leverage one:\n 13, 39, 65, 67, 82, 84, 105, 153, 239, 243, 258, 290, 291”Warning message:\n“not plotting observations with leverage one:\n 13, 39, 65, 67, 82, 84, 105, 153, 239, 243, 258, 290, 291”" ] ], [ [ "##### Linearity of the data (Residuals vs Fitted plot): \nThere is no pattern in the residual plot. This suggests that we can assume linear relationship \nbetween the predictors and the outcome variables.\n##### Normality of residuals (Normal Q-Q plot):\nAll the points fall approximately along the reference line, so we can assume normality.\n##### Homogeneity of residuals variance (Scale-Location):\nIt can be seen that the variability (variances) of the residual points does not quite follows a horizontal line with equally spread points, suggesting non-constant variances in the residuals errors (or the presence of some heteroscedasticity).\nTo reduce the heteroscedasticity problem we used the log transformation of the outcome variable (age.in.years, (y)). Shown next.\n##### Independence of residuals error terms (Residuals vs Leverage):\nThere are not drastic outliers in our data. \n", "_____no_output_____" ], [ "##### Reducing heteroscedasticity:", "_____no_output_____" ] ], [ [ "#Transformed Regression and new plot:\nmodel3 = lm(log(age.in.years) ~. -age.in.years, data= event2)\n\nplot(model3, 3)\n#heteroscedasticity has been improved. \n", "Warning message:\n“not plotting observations with leverage one:\n 13, 39, 65, 67, 82, 84, 105, 153, 239, 243, 258, 290, 291”" ] ], [ [ "#### Linear regression for age after log transformation:\nAfter the noticeable reduced heteroscedasticity in the data after using the log transformation, we examine the linear model again:", "_____no_output_____" ] ], [ [ "summary(model3)\n\n#After the log transformation of age, the p-values for the intercept and the predictor variables has \n#become more significant, hence indicating a stronger association between age.in.years and the predictors. \n\n\n###Interpretation:\n#From the P value numbers we can say that primary.method.of.payment(725 and 730), \n#incident.location.type1170, primary.symptom (1410 and 1500), cause.of.injury(9565, 9600, and 9605), \n#and complaint.reported.by.dispatch(485 and 520), are significantly associated with the caller’s Age.\n\n#The coefficient estimate of the variables are: \n#primary.method.of.payment725 (Medicaid) b = -0.183656, which is negative.\n#primary.method.of.payment730(Medicare) b = 0.476600 which is positive.\n#This means that as age increases the probability of being on Medicaid decreases; \n#but the probability of being on Medicare increases as age increases. 
\n\n#incident.location.type1170(Trade or service (business, bars, restaurants, etc)) b = 0.396803, which is positive.\n#This means that as age increases the probability of the incident happening at a business, bars, \n#restaurants, etc., increases.\n\n#primary.symptom1410 (Breathing Problem) b = -0.854654, which is negative.\n#This means that Breathing Problem are more prevalent among younger people (perhaps among babies).\n#primary.symptom1500 (Weakness) b = 0.370141 which is positive.\n#This means that as age increases the probability of the primary symptom being \"Weakness\" increases. \n\n#complaint.reported.by.dispatch485(Headache) b = -2.192445, which is negative.\n#complaint.reported.by.dispatch520(Psychiatric Problem) b = -1.606781, which is negative.\n#This means that if the complaint.reported.by.dispatch is a \"Headache\" or a \"Psychiatric Problem\", \n#is associated with an increased probability of being a younger person. \n\n#cause.of.injury9565 (Firearm injury) b = -2.458792, which is negative.\n#cause.of.injury9505 (Bicycle Accident) b = -2.166411, which is negative.\n##cause.of.injury9600 (Motorcycle Accident) b = -1.344680, which is negative.\n#This means that accidents involving Firearms, Motorcycle, and Bicycles are more prevalent among younger people. \n\n", "_____no_output_____" ] ], [ [ "After the log transformation of age, the p-values for the intercept and the predictor variables has become more significant, hence indicating a stronger association between age.in.years and the predictors. \n\n*Interpretation:*\nFrom the P value numbers we can say that primary method of payment(725 and 730), incident location type 1170, primary symptom (1410 and 1500), cause of injury(9565, 9600, and 9605), and complaint reported by dispatch(485 and 520), are significantly associated with the caller’s Age. ( ie P << 0.05, in which we reject the null hypothesis).\n\n*The coefficient estimate of the variables are: *\nprimary.method.of.payment725 (Medicaid) b = -0.183656, which is negative.\nprimary.method.of.payment730(Medicare) b = 0.476600 which is positive.\nThis means that as age increases the probability of being on Medicaid decreases; but the probability of being on Medicare increases as age increases. Which has been shown to be true in the first model as well.\n\nincident.location.type1170(Trade or service (business, bars, restaurants, etc)) b = 0.396803, which is positive.\nThis means that as age increases the probability of the incident happening at a business, bars, restaurants, etc., increases. \n\nprimary.symptom1410 (Breathing Problem) b = -0.854654, which is negative.\nThis means that Breathing Problem are more prevalent among younger people (perhaps among babies). primary.symptom1500 (Weakness) b = 0.370141 which is positive. This means that as age increases the probability of the primary symptom being \"Weakness\" increases. \n\ncomplaint.reported.by.dispatch485(Headache) b = -2.192445, which is negative. complaint.reported.by.dispatch520(Psychiatric Problem) b = -1.606781, which is negative.\nThis means that if the complaint.reported.by.dispatch is a \"Headache\" or a \"Psychiatric Problem\", is associated with an increased probability of being a younger person. 
\n\ncause.of.injury9565 (Firearm injury) b = -2.458792, which is negative.\ncause.of.injury9505 (Bicycle Accident) b = -2.166411, which is negative.\ncause.of.injury9600 (Motorcycle Accident) b = -1.344680, which is negative.\nThis means that accidents involving Firearms, Motorcycle, and Bicycles are more prevalent among younger people. \n\n\n", "_____no_output_____" ], [ "###### Data visulization:\nIn order to examine those trends, the log transfomation of age was plotted against cause of injury and incident location as shown below:", "_____no_output_____" ], [ "<img src=\"./graphs/Cause-combined-log-lt95.png\">", "_____no_output_____" ], [ "<img src=\"./graphs/Incident-combined-log-lt95.png\">", "_____no_output_____" ], [ "### Component Analysis: FAMD", "_____no_output_____" ], [ "Our data contains both quantitative (numeric) and qualitative (categorical) variables, the best tool to analyze similarity between individuals and the association between all variables is the \"Factor analysis of mixed data\" (FAMD), from the FactoMineR package. \n\nQuantitative and qualitative variables are normalized during the analysis in order to balance the influence of each set of variables, (FAMD does it internally).", "_____no_output_____" ] ], [ [ "res.famd <- FAMD(event2, ncp=5, graph = FALSE)\nsummary(res.famd)\n\n#About 5% of the variation is explained by this first eigenvalue, which is the first dimension.", "\nCall:\nFAMD(base = event2, ncp = 5, graph = FALSE) \n\n\nEigenvalues\n Dim.1 Dim.2 Dim.3 Dim.4 Dim.5\nVariance 3.270 2.503 2.322 2.226 2.137\n% of var. 4.954 3.792 3.518 3.372 3.237\nCumulative % of var. 4.954 8.746 12.264 15.637 18.874\n\nIndividuals (the 10 first)\n Dist Dim.1 ctr cos2 Dim.2 ctr\n1 | 12.693 | -0.026 0.000 0.000 | -0.266 0.010\n2 | 10.976 | -2.149 0.484 0.038 | -0.875 0.105\n3 | 4.786 | -1.744 0.319 0.133 | 0.879 0.106\n4 | 13.821 | 1.933 0.391 0.020 | 4.027 2.219\n5 | 7.772 | -0.748 0.059 0.009 | 1.001 0.137\n6 | 7.787 | -0.546 0.031 0.005 | 1.113 0.170\n7 | 4.358 | 2.236 0.524 0.263 | -0.725 0.072\n8 | 3.315 | -1.242 0.162 0.140 | 0.035 0.000\n9 | 3.534 | -0.544 0.031 0.024 | 0.215 0.006\n10 | 7.170 | -3.346 1.173 0.218 | -0.392 0.021\n cos2 Dim.3 ctr cos2 \n1 0.000 | 3.664 1.980 0.083 |\n2 0.006 | 4.423 2.885 0.162 |\n3 0.034 | -0.080 0.001 0.000 |\n4 0.085 | -1.075 0.170 0.006 |\n5 0.017 | -3.157 1.470 0.165 |\n6 0.020 | -2.754 1.119 0.125 |\n7 0.028 | -0.322 0.015 0.005 |\n8 0.000 | -0.601 0.053 0.033 |\n9 0.004 | -0.542 0.043 0.023 |\n10 0.003 | 0.232 0.008 0.001 |\n\nContinuous variable\n Dim.1 ctr cos2 Dim.2 ctr cos2 \nage.in.years | -0.746 17.009 0.556 | -0.168 1.129 0.028 |\n Dim.3 ctr cos2 \nage.in.years -0.053 0.122 0.003 |\n\nCategories (the 10 first)\n Dim.1 ctr cos2 v.test Dim.2\nmale | 0.520 1.159 0.162 4.515 | 0.184\nfemale | -0.441 0.983 0.162 -4.515 | -0.156\n720 | -0.128 0.058 0.007 -0.950 | -0.147\n725 | 1.135 2.144 0.197 4.982 | 0.049\n730 | -1.875 7.543 0.575 -9.651 | -0.373\n735 | 0.492 0.140 0.008 1.190 | 0.586\n745 | 1.623 3.291 0.270 6.011 | 0.893\n750 | 2.188 0.613 0.048 2.433 | -1.635\n1135 | -0.851 2.458 0.243 -6.059 | 0.964\n1150 | 0.133 0.002 0.000 0.148 | 0.381\n ctr cos2 v.test Dim.3 ctr\nmale 0.248 0.020 1.827 | 0.516 2.269\nfemale 0.210 0.020 -1.827 | -0.438 1.925\n720 0.132 0.010 -1.247 | 0.594 2.514\n725 0.007 0.000 0.248 | -0.229 0.174\n730 0.508 0.023 -2.192 | -0.048 0.010\n735 0.338 0.012 1.619 | -4.193 20.105\n745 1.699 0.082 3.779 | 0.465 0.535\n750 0.584 0.027 -2.077 | 1.488 0.563\n1135 5.383 0.312 7.845 | -0.577 2.239\n1150 0.032 
0.001 0.484 | 2.155 1.180\n cos2 v.test \nmale 0.160 5.323 |\nfemale 0.160 -5.323 |\n720 0.162 5.249 |\n725 0.008 -1.196 |\n730 0.000 -0.295 |\n735 0.603 -12.032 |\n745 0.022 2.042 |\n750 0.022 1.963 |\n1135 0.112 -4.874 |\n1150 0.044 2.844 |\n" ], [ "#Based on the contribution plots, the variables plots, and the significant categories, \n#we selected the next varibles for our simpler model:\n\nrelevant = select(event2, age.in.years, cause.of.injury, complaint.reported.by.dispatch,\n primary.method.of.payment, incident.location.type, \n )\n\nimprov_model = lm(log(age.in.years) ~. -age.in.years, data= relevant)\n#summary(improv_model)\n\n\n#Analysis and Comparison of models: \nAIC(model3, improv_model)\n\nglance(model3) %>%\n dplyr::select(adj.r.squared, sigma, AIC, BIC, p.value)\n\nglance(improv_model) %>%\n dplyr::select(adj.r.squared, sigma, AIC, BIC, p.value)\n\n#Looking at the models' summaries, we can see that model3 and improv_model have a similar adjusted R2, \n#but model3's is slightly higher. This means that improv_model is a little bit better at exaining the outcome (age.in.years). \n#The two models have exactly the same (rounded) amount of residual standard error (RSE or sigma = 0.54). \n#However, improv_model is more simple than model3 because it incorporates less variables. \n#All things equal, the simple model is always better. \n\n#The AIC and the BIC of the improv_model are lower than those of the model3 (AIC= 527.9 vs 521.8). \n#In model comparison strategies, the model with the lowest AIC and BIC scores is preferred.\n\n#Finally, the F-statistic P-value of improv_model is lower than the one of the model3.\n#This means that the improv_model is statistically more significant compared to model3.\n\n#In this way, we can conclude that improv_model is the best model and should be used for further analyses. \n", "_____no_output_____" ] ], [ [ "### Visual representations for variables:", "_____no_output_____" ] ], [ [ "#Plots for the frequency of the variables' categories\n\nfor (i in 2:8) {\n plot(event2[,i], main=colnames(event2)[i],\n ylab = \"Count\", xlab = \"Categories\", col=\"#00AFBB\", las = 2)\n }\n\n#Some of the variable categories have a very low frequency. These variables could distort the analysis. \n\n", "_____no_output_____" ], [ "#scree plot\na = fviz_screeplot(res.famd, addlabels = TRUE, ylim = c(0, 5)) \n#19% of the information (variances) contained in the data are retained by the first five principal components.\n#The percentage value of our variables explains less than desired of the variance; \n#The low frequency variables could be distorting the analysis. \n\n\n# Plot of variables\nb = fviz_famd_var(res.famd, repel = TRUE)\n##It can be seen that, the variables gender, age.in.years, and incident.patient.disposition are the most correlated with dimension 1. 
\n#None of the variables are strongly correlated solely to dimension 2.\n\n# Contribution to the first dimension\nc = fviz_contrib(res.famd, \"var\", axes = 1)\n# Contribution to the second dimension\nd = fviz_contrib(res.famd, \"var\", axes = 2)\n#From the plots, it can be seen that:\n#variables that contribute the most to the first dimension are: cause.of.injury and complaint.reported.by.dispatch.\n#variables that contribute the most to the second dimension are: cause.of.injury and complaint.reported.by.dispatch.\n\ngrid.arrange(a, b, c, d, ncol = 2)", "_____no_output_____" ] ], [ [ "From the plots we can see that 19% of the variances contained in the data were retained by the first five principal components. The percentage value of our variables explains low variance among the factors. \nVariables gender, age, and incident patient disposition are strongly correlated with dimension 1.\n", "_____no_output_____" ], [ "### Hierarchical K-means clustering: ", "_____no_output_____" ] ], [ [ "df= select(relevant, age.in.years)\n#head(df)\n\n#Hierarchical K-means clustering \nhk3 <-hkmeans(df, 3)\nhk3$centers\n\nrelevant2 = relevant\nrelevant2$k3cluster = hk3$cluster\nrelevant2$k3cluster <-as.factor(relevant2$k3cluster)\n#levels(relevant2$k3cluster)\nlevels(relevant2$k3cluster) <- c(\"Child\", \"Young-Adult\", \"Adult\" )\n#levels(relevant2$k3cluster)\n#head(relevant2)", "_____no_output_____" ], [ "res.famd2 <- FAMD(relevant2, ncp = 5, graph = FALSE)", "_____no_output_____" ], [ "fviz_pca_ind(res.famd2,\n geom.ind = \"point\", # show points only\n col.ind = relevant2$k3cluster, # color by groups\n palette = c(\"#00AFBB\", \"#E7B800\", \"#FC4E07\"),\n addEllipses = TRUE, ellipse.type = \"convex\",\n #addEllipses = TRUE, # Concentration ellipses\n legend.title = \"Age category\"\n )", "_____no_output_____" ] ], [ [ "The clustering shows that children and young adults have a higher distribution across the variables or factors than adults. ", "_____no_output_____" ], [ "## Conclusions:\n\nWe have concluded that there is not enough information or data collected to improve and enhance EMS patient care procedures based on gender. As for the individual's age, there is some significance that can be used to enhance EMS patient care procedures. In this analysis, most of the variables were categorical with less events for simplicity. We suggest that more data points should be analyzed to provide better EMS patient care. Also, without geographic location, it is hard to provide the best EMS patient care without knowing where the data applies to most. Further analysis is needed to conclude the best approach to better determine EMS patient care.", "_____no_output_____" ], [ "### References:\n\nR Development Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. (2008). ISBN 3-900051-07-0, URL http://www.R-project.org.\n\nNATIONAL EMS INFORMATION SYSTEM (NEMSIS) (2018). <https://nemsis.org/what-is-nemsis/> \n\nNational Highway Traffic Safety Administration (NHTSA)(2008). Uniform PreHospital EMS Dataset Version 2.2.1. <<https://nemsis.org>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ] ]
d035902b5511a74161f8fa789e2a597b846268c8
1,939
ipynb
Jupyter Notebook
scripts/consistency_ks_cyt/.ipynb_checkpoints/Untitled-checkpoint.ipynb
lorenzo-bioinfo/ms_data_analysis
bc1017dffbe8d25817098fc94805246d0088df39
[ "Apache-2.0" ]
null
null
null
scripts/consistency_ks_cyt/.ipynb_checkpoints/Untitled-checkpoint.ipynb
lorenzo-bioinfo/ms_data_analysis
bc1017dffbe8d25817098fc94805246d0088df39
[ "Apache-2.0" ]
null
null
null
scripts/consistency_ks_cyt/.ipynb_checkpoints/Untitled-checkpoint.ipynb
lorenzo-bioinfo/ms_data_analysis
bc1017dffbe8d25817098fc94805246d0088df39
[ "Apache-2.0" ]
null
null
null
38.78
803
0.593089
[ [ [ "import pandas as pd\ndf = pd.read_csv('RR_bh_corr.tsv', sep = '\\t')\ndf", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
d03590331ea544c88ee68b5ff6c3b931e9192a09
1,315
ipynb
Jupyter Notebook
docs/nusa-info/es/intro-nusa.ipynb
Bartman00/nusa
472180c350d8940bf025f54b5b362e6dab280417
[ "MIT" ]
92
2016-11-14T01:39:55.000Z
2022-03-27T17:23:41.000Z
docs/nusa-info/es/intro-nusa.ipynb
jialigit/nusa
c3dea56dadaed6be5d6ebeb3ba50f4e0dfa65f10
[ "MIT" ]
1
2017-11-30T05:04:02.000Z
2018-08-29T04:31:39.000Z
docs/nusa-info/es/intro-nusa.ipynb
jialigit/nusa
c3dea56dadaed6be5d6ebeb3ba50f4e0dfa65f10
[ "MIT" ]
31
2017-05-17T18:50:18.000Z
2022-03-12T03:08:00.000Z
23.909091
315
0.580228
[ [ [ "# Una introducción a NuSA\n\n**NuSA** es una librería Python para resolver problemas de análisis estructural bidimensional. La idea es tener una estructura de códigos escritos utilizando la programación orientada a objetos, de modo que sea posible crear instancias de un modelo de elemento finito y operar con éste a través de métodos.\n\n## ¿Por qué NuSA?\n\n\n## La estructura de NuSA\n\nLa estructura de **NuSA** está basada en tres clases fundamentales que componen el *core*: `Model`, `Element`, `Node`.\n\n![](src/intro-nusa/nusa_structure.png)", "_____no_output_____" ] ], [ [ "\n", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code" ] ]
d0359de22e4c97838e6457d19be33097ede6a8ac
61,079
ipynb
Jupyter Notebook
examples/trials/cifar10_grad_match/notebooks/cifar10_example.ipynb
savan77/nni
510213393d9cae58c5a8cccd21f322f7bba4e0cf
[ "MIT" ]
null
null
null
examples/trials/cifar10_grad_match/notebooks/cifar10_example.ipynb
savan77/nni
510213393d9cae58c5a8cccd21f322f7bba4e0cf
[ "MIT" ]
null
null
null
examples/trials/cifar10_grad_match/notebooks/cifar10_example.ipynb
savan77/nni
510213393d9cae58c5a8cccd21f322f7bba4e0cf
[ "MIT" ]
null
null
null
49.098875
1,894
0.539956
[ [ [ "from tensorflow.python.client import device_lib\ndevice_lib.list_local_devices()", "_____no_output_____" ], [ "import time\nimport copy\nimport numpy as np\nimport os\nimport subprocess\nimport sys\nimport torch\nimport torch.backends.cudnn as cudnn\nimport torch.nn as nn\nimport torch.optim as optim\nfrom matplotlib import pyplot as plt\nfrom torch.utils.data.sampler import SubsetRandomSampler\nfrom drive.MyDrive.cords.selectionstrategies.supervisedlearning.glisterstrategy import GLISTERStrategy as Strategy\nfrom drive.MyDrive.cords.utils.models.resnet import ResNet18\nfrom drive.MyDrive.cords.utils.custom_dataset import load_mnist_cifar\nfrom torch.utils.data import random_split, SequentialSampler, BatchSampler, RandomSampler\nfrom torch.autograd import Variable\nimport math\nimport tqdm", "_____no_output_____" ], [ "def model_eval_loss(data_loader, model, criterion):\n total_loss = 0\n with torch.no_grad():\n for batch_idx, (inputs, targets) in enumerate(data_loader):\n inputs, targets = inputs.to(device), targets.to(device, non_blocking=True)\n outputs = model(inputs)\n loss = criterion(outputs, targets)\n total_loss += loss.item()\n return total_loss", "_____no_output_____" ], [ "device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\nprint(\"Using Device:\", device)\n", "Using Device: cuda\n" ] ], [ [ "#Training Arguments", "_____no_output_____" ] ], [ [ "datadir = '../../data'\ndata_name = 'cifar10'\nfraction = float(0.1)\nnum_epochs = int(300)\nselect_every = int(20)\nfeature = 'dss'# 70\nwarm_method = 0 # whether to use warmstart-onestep (1) or online (0)\nnum_runs = 1 # number of random runs\nlearning_rate = 0.05\n", "_____no_output_____" ] ], [ [ "#Results Folder", "_____no_output_____" ] ], [ [ "all_logs_dir = './results/' + data_name +'/' + feature +'/' + str(fraction) + '/' + str(select_every)\nprint(all_logs_dir)\nsubprocess.run([\"mkdir\", \"-p\", all_logs_dir])\npath_logfile = os.path.join(all_logs_dir, data_name + '.txt')\nlogfile = open(path_logfile, 'w')\nexp_name = data_name + '_fraction:' + str(fraction) + '_epochs:' + str(num_epochs) + \\\n '_selEvery:' + str(select_every) + '_variant' + str(warm_method) + '_runs' + str(num_runs)\nprint(exp_name)\n", "./results/cifar10/dss/0.1/20\ncifar10_fraction:0.1_epochs:300_selEvery:20_variant0_runs1\n" ] ], [ [ "#Loading CIFAR10 Dataset", "_____no_output_____" ] ], [ [ "print(\"=======================================\", file=logfile)\nfullset, valset, testset, num_cls = load_mnist_cifar(datadir, data_name, feature)\n", "Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ../data/cifar-10-python.tar.gz\n" ] ], [ [ "#Splitting Training dataset to train and validation sets", "_____no_output_____" ] ], [ [ "validation_set_fraction = 0.1\nnum_fulltrn = len(fullset)\nnum_val = int(num_fulltrn * validation_set_fraction)\nnum_trn = num_fulltrn - num_val\ntrainset, validset = random_split(fullset, [num_trn, num_val])\nN = len(trainset)\ntrn_batch_size = 20\n", "_____no_output_____" ] ], [ [ "#Creating DataLoaders", "_____no_output_____" ] ], [ [ "trn_batch_size = 20\nval_batch_size = 1000\ntst_batch_size = 1000\n\n\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=trn_batch_size,\n shuffle=False, pin_memory=True)\n\nvalloader = torch.utils.data.DataLoader(valset, batch_size=val_batch_size, shuffle=False,\n sampler=SubsetRandomSampler(validset.indices),\n pin_memory=True)\n\ntestloader = torch.utils.data.DataLoader(testset, batch_size=tst_batch_size,\n shuffle=False, pin_memory=True)\n", 
"_____no_output_____" ] ], [ [ "#Budget for Data Subset Selection", "_____no_output_____" ] ], [ [ "bud = int(fraction * N)\nprint(\"Budget, fraction and N:\", bud, fraction, N)\n# Transfer all the data to GPU\nprint_every = 3", "Budget, fraction and N: 4500 0.1 45000\n" ] ], [ [ "#Loading ResNet Model", "_____no_output_____" ] ], [ [ "model = ResNet18(num_cls)\nmodel = model.to(device)", "_____no_output_____" ], [ "print(model)", "ResNet(\n (conv1): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (layer1): Sequential(\n (0): BasicBlock(\n (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (shortcut): Sequential()\n )\n (1): BasicBlock(\n (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (shortcut): Sequential()\n )\n )\n (layer2): Sequential(\n (0): BasicBlock(\n (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (shortcut): Sequential(\n (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)\n (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n )\n )\n (1): BasicBlock(\n (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (shortcut): Sequential()\n )\n )\n (layer3): Sequential(\n (0): BasicBlock(\n (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (shortcut): Sequential(\n (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)\n (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n )\n )\n (1): BasicBlock(\n (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (shortcut): Sequential()\n )\n )\n (layer4): Sequential(\n (0): BasicBlock(\n (conv1): Conv2d(256, 
512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (shortcut): Sequential(\n (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)\n (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n )\n )\n (1): BasicBlock(\n (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (shortcut): Sequential()\n )\n )\n (linear): Linear(in_features=512, out_features=10, bias=True)\n)\n" ] ], [ [ "#Initial Random Subset for Training", "_____no_output_____" ] ], [ [ "start_idxs = np.random.choice(N, size=bud, replace=False)", "_____no_output_____" ] ], [ [ "#Loss Type, Optimizer and Learning Rate Scheduler", "_____no_output_____" ] ], [ [ "criterion = nn.CrossEntropyLoss()\noptimizer = optim.SGD(model.parameters(), lr=learning_rate,\n momentum=0.9, weight_decay=5e-4)\nscheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)\n ", "_____no_output_____" ] ], [ [ "#Last Layer GLISTER Strategy with Stcohastic Selection", "_____no_output_____" ] ], [ [ "setf_model = Strategy(trainloader, valloader, model, criterion,\n learning_rate, device, num_cls, False, 'Stochastic')", "_____no_output_____" ], [ "idxs = start_idxs\nprint(\"Starting Greedy Selection Strategy!\")\nsubstrn_losses = np.zeros(num_epochs)\nfulltrn_losses = np.zeros(num_epochs)\nval_losses = np.zeros(num_epochs)\ntiming = np.zeros(num_epochs)\nval_acc = np.zeros(num_epochs)\ntst_acc = np.zeros(num_epochs)\nfull_trn_acc = np.zeros(num_epochs)\nsubtrn_acc = np.zeros(num_epochs)\nsubset_trnloader = torch.utils.data.DataLoader(trainset, batch_size=trn_batch_size,\n shuffle=False, sampler=SubsetRandomSampler(idxs), pin_memory=True)", "Starting Greedy Selection Strategy!\n" ] ], [ [ "#Training Loop", "_____no_output_____" ] ], [ [ " for i in tqdm.trange(num_epochs):\n subtrn_loss = 0\n subtrn_correct = 0\n subtrn_total = 0\n start_time = time.time()\n if (((i+1) % select_every) == 0):\n cached_state_dict = copy.deepcopy(model.state_dict())\n clone_dict = copy.deepcopy(model.state_dict())\n print(\"selEpoch: %d, Starting Selection:\" % i, str(datetime.datetime.now()))\n subset_start_time = time.time()\n subset_idxs, grads_idxs = setf_model.select(int(bud), clone_dict)\n subset_end_time = time.time() - subset_start_time\n print(\"Subset Selection Time is:\" + str(subset_end_time))\n idxs = subset_idxs\n print(\"selEpoch: %d, Selection Ended at:\" % (i), str(datetime.datetime.now()))\n model.load_state_dict(cached_state_dict)\n subset_trnloader = torch.utils.data.DataLoader(trainset, batch_size=trn_batch_size,\n \t\t\tshuffle=False, sampler=SubsetRandomSampler(idxs), pin_memory=True)\n \n model.train() \n for batch_idx, (inputs, targets) in enumerate(subset_trnloader):\n inputs, targets = inputs.to(device), targets.to(device, non_blocking=True) # targets can have non_blocking=True.\n optimizer.zero_grad()\n outputs = model(inputs)\n loss = criterion(outputs, targets)\n subtrn_loss += loss.item()\n 
loss.backward()\n optimizer.step()\n _, predicted = outputs.max(1)\n subtrn_total += targets.size(0)\n subtrn_correct += predicted.eq(targets).sum().item()\n scheduler.step()\n timing[i] = time.time() - start_time\n #print(\"Epoch timing is: \" + str(timing[i]))\n val_loss = 0\n val_correct = 0\n val_total = 0\n tst_correct = 0\n tst_total = 0\n tst_loss = 0\n full_trn_loss = 0\n #subtrn_loss = 0\n full_trn_correct = 0\n full_trn_total = 0\n model.eval()\n with torch.no_grad():\n\n for batch_idx, (inputs, targets) in enumerate(valloader):\n #print(batch_idx)\n inputs, targets = inputs.to(device), targets.to(device, non_blocking=True)\n outputs = model(inputs)\n loss = criterion(outputs, targets)\n val_loss += loss.item()\n _, predicted = outputs.max(1)\n val_total += targets.size(0)\n val_correct += predicted.eq(targets).sum().item()\n\n for batch_idx, (inputs, targets) in enumerate(testloader):\n #print(batch_idx)\n inputs, targets = inputs.to(device), targets.to(device, non_blocking=True)\n outputs = model(inputs)\n loss = criterion(outputs, targets)\n tst_loss += loss.item()\n _, predicted = outputs.max(1)\n tst_total += targets.size(0)\n tst_correct += predicted.eq(targets).sum().item()\n\n for batch_idx, (inputs, targets) in enumerate(trainloader):\n inputs, targets = inputs.to(device), targets.to(device, non_blocking=True)\n outputs = model(inputs)\n loss = criterion(outputs, targets)\n full_trn_loss += loss.item()\n _, predicted = outputs.max(1)\n full_trn_total += targets.size(0)\n full_trn_correct += predicted.eq(targets).sum().item()\n\n val_acc[i] = val_correct/val_total\n tst_acc[i] = tst_correct/tst_total\n subtrn_acc[i] = subtrn_correct/subtrn_total\n full_trn_acc[i] = full_trn_correct/full_trn_total\n substrn_losses[i] = subtrn_loss\n fulltrn_losses[i] = full_trn_loss\n val_losses[i] = val_loss\n print('Epoch:', i + 1, 'SubsetTrn,FullTrn,ValLoss,Time:', subtrn_loss, full_trn_loss, val_loss, timing[i])\n", " 0%| | 1/300 [00:35<2:58:55, 35.91s/it]" ] ], [ [ "#Results Logging", "_____no_output_____" ] ], [ [ "print(\"SelectionRun---------------------------------\")\nprint(\"Final SubsetTrn and FullTrn Loss:\", subtrn_loss, full_trn_loss)\nprint(\"Validation Loss and Accuracy:\", val_loss, val_acc[-1])\nprint(\"Test Data Loss and Accuracy:\", tst_loss, tst_acc[-1])\nprint('-----------------------------------')\n\nprint(\"GLISTER\", file=logfile)\nprint('---------------------------------------------------------------------', file=logfile)\nval = \"Validation Accuracy,\"\ntst = \"Test Accuracy,\"\ntime_str = \"Time,\"\nfor i in range(num_epochs):\n time_str = time_str + \",\" + str(timing[i])\n val = val + \",\" + str(val_acc[i])\n tst = tst + \",\" + str(tst_acc[i])\nprint(timing, file=logfile)\nprint(val, file=logfile)\nprint(tst, file=logfile)", "_____no_output_____" ] ], [ [ "#Full Data Training", "_____no_output_____" ] ], [ [ "torch.manual_seed(42)\nnp.random.seed(42)\nmodel = ResNet18(num_cls)\nmodel = model.to(device)", "_____no_output_____" ], [ "idxs = start_idxs\ncriterion = nn.CrossEntropyLoss()\n#optimizer = optim.SGD(model.parameters(), lr=learning_rate)\noptimizer = optim.SGD(model.parameters(), lr=learning_rate,\n momentum=0.9, weight_decay=5e-4)\nscheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)\nprint(\"Starting Full Training Run!\")", "Starting Full Training Run!\n" ], [ "substrn_losses = np.zeros(num_epochs)\nfulltrn_losses = np.zeros(num_epochs)\nval_losses = np.zeros(num_epochs)\nsubset_trnloader = 
torch.utils.data.DataLoader(trainset, batch_size=trn_batch_size, shuffle=False,\n sampler=SubsetRandomSampler(idxs),\n pin_memory=True)\n\ntiming = np.zeros(num_epochs)\nval_acc = np.zeros(num_epochs)\ntst_acc = np.zeros(num_epochs)\nfull_trn_acc = np.zeros(num_epochs)\nsubtrn_acc = np.zeros(num_epochs)\n", "_____no_output_____" ] ], [ [ "#Full Training Loop", "_____no_output_____" ] ], [ [ "for i in tqdm.trange(num_epochs):\n start_time = time.time()\n model.train()\n for batch_idx, (inputs, targets) in enumerate(trainloader):\n inputs, targets = inputs.to(device), targets.to(device, non_blocking=True)\n # Variables in Pytorch are differentiable.\n inputs, target = Variable(inputs), Variable(inputs)\n # This will zero out the gradients for this batch.\n optimizer.zero_grad()\n outputs = model(inputs)\n loss = criterion(outputs, targets)\n loss.backward()\n optimizer.step()\n scheduler.step()\n timing[i] = time.time() - start_time\n val_loss = 0\n val_correct = 0\n val_total = 0\n tst_correct = 0\n tst_total = 0\n tst_loss = 0\n full_trn_loss = 0\n subtrn_loss = 0\n full_trn_correct = 0\n full_trn_total = 0\n subtrn_correct = 0\n subtrn_total = 0\n model.eval()\n with torch.no_grad():\n for batch_idx, (inputs, targets) in enumerate(valloader):\n # print(batch_idx)\n inputs, targets = inputs.to(device), targets.to(device, non_blocking=True)\n outputs = model(inputs)\n loss = criterion(outputs, targets)\n val_loss += loss.item()\n _, predicted = outputs.max(1)\n val_total += targets.size(0)\n val_correct += predicted.eq(targets).sum().item()\n\n for batch_idx, (inputs, targets) in enumerate(testloader):\n # print(batch_idx)\n inputs, targets = inputs.to(device), targets.to(device, non_blocking=True)\n outputs = model(inputs)\n loss = criterion(outputs, targets)\n tst_loss += loss.item()\n _, predicted = outputs.max(1)\n tst_total += targets.size(0)\n tst_correct += predicted.eq(targets).sum().item()\n\n for batch_idx, (inputs, targets) in enumerate(trainloader):\n inputs, targets = inputs.to(device), targets.to(device, non_blocking=True)\n outputs = model(inputs)\n loss = criterion(outputs, targets)\n full_trn_loss += loss.item()\n _, predicted = outputs.max(1)\n full_trn_total += targets.size(0)\n full_trn_correct += predicted.eq(targets).sum().item()\n\n for batch_idx, (inputs, targets) in enumerate(subset_trnloader):\n inputs, targets = inputs.to(device), targets.to(device, non_blocking=True)\n outputs = model(inputs)\n loss = criterion(outputs, targets)\n subtrn_loss += loss.item()\n _, predicted = outputs.max(1)\n subtrn_total += targets.size(0)\n subtrn_correct += predicted.eq(targets).sum().item()\n\n val_acc[i] = val_correct / val_total\n tst_acc[i] = tst_correct / tst_total\n subtrn_acc[i] = subtrn_correct / subtrn_total\n full_trn_acc[i] = full_trn_correct / full_trn_total\n substrn_losses[i] = subtrn_loss\n fulltrn_losses[i] = full_trn_loss\n val_losses[i] = val_loss\n print('Epoch:', i + 1, 'SubsetTrn,FullTrn,ValLoss,Time:', subtrn_loss, full_trn_loss, val_loss, timing[i])", "\n 0%| | 0/300 [00:00<?, ?it/s]\u001b[A" ] ], [ [ "#Results and Timing Logging", "_____no_output_____" ] ], [ [ "print(\"SelectionRun---------------------------------\")\nprint(\"Final SubsetTrn and FullTrn Loss:\", subtrn_loss, full_trn_loss)\nprint(\"Validation Loss and Accuracy:\", val_loss, val_acc[-1])\nprint(\"Test Data Loss and Accuracy:\", tst_loss, tst_acc[-1])\nprint('-----------------------------------')\n\nprint(\"Full Training\", 
file=logfile)\nprint('---------------------------------------------------------------------', file=logfile)\nval = \"Validation Accuracy,\"\ntst = \"Test Accuracy,\"\ntime_str = \"Time,\"\nfor i in range(num_epochs):\n time_str = time_str + \",\" + str(timing[i])\n val = val + \",\" + str(val_acc[i])\n tst = tst + \",\" + str(tst_acc[i])\nprint(timing, file=logfile)\nprint(val, file=logfile)\nprint(tst, file=logfile)", "_____no_output_____" ], [ "logfile.close()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
d035a75daa166f9d6cf1ea593a5fbc083535757b
292,715
ipynb
Jupyter Notebook
_notebooks/2021-05-12-visualizing-earnings.ipynb
MoRaouf/MoSpace
cbda3fbd965ebd50e2042595568a51b3f85d9a66
[ "Apache-2.0" ]
null
null
null
_notebooks/2021-05-12-visualizing-earnings.ipynb
MoRaouf/MoSpace
cbda3fbd965ebd50e2042595568a51b3f85d9a66
[ "Apache-2.0" ]
null
null
null
_notebooks/2021-05-12-visualizing-earnings.ipynb
MoRaouf/MoSpace
cbda3fbd965ebd50e2042595568a51b3f85d9a66
[ "Apache-2.0" ]
null
null
null
214.286237
42,680
0.886753
[ [ [ "# \"Visualizing Earnings Based On College Majors\"\n> \"Awesome project using numpy, pandas & matplotlib\"\n\n- toc: true\n- comments: true\n- image: images/cosmos.jpg\n- categories: [project]\n- tags: [Numpy, Pandas]\n- badges: true\n- twitter_large_image: true\n- featured: true", "_____no_output_____" ] ], [ [ "import pandas as pd \nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ], [ "recent_grads = pd.read_csv('recent-grads.csv')\nrecent_grads.iloc[0]", "_____no_output_____" ], [ "recent_grads.head()", "_____no_output_____" ], [ "recent_grads.tail()", "_____no_output_____" ], [ "recent_grads.describe()", "_____no_output_____" ], [ "recent_grads.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 173 entries, 0 to 172\nData columns (total 21 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Rank 173 non-null int64 \n 1 Major_code 173 non-null int64 \n 2 Major 173 non-null object \n 3 Total 172 non-null float64\n 4 Men 172 non-null float64\n 5 Women 172 non-null float64\n 6 Major_category 173 non-null object \n 7 ShareWomen 172 non-null float64\n 8 Sample_size 173 non-null int64 \n 9 Employed 173 non-null int64 \n 10 Full_time 173 non-null int64 \n 11 Part_time 173 non-null int64 \n 12 Full_time_year_round 173 non-null int64 \n 13 Unemployed 173 non-null int64 \n 14 Unemployment_rate 173 non-null float64\n 15 Median 173 non-null int64 \n 16 P25th 173 non-null int64 \n 17 P75th 173 non-null int64 \n 18 College_jobs 173 non-null int64 \n 19 Non_college_jobs 173 non-null int64 \n 20 Low_wage_jobs 173 non-null int64 \ndtypes: float64(5), int64(14), object(2)\nmemory usage: 28.5+ KB\n" ], [ "print('Number of Rows Before :', len(recent_grads))\nrecent_grads = recent_grads.dropna()\nprint('Number of Rows After :', len(recent_grads))", "Number of Rows Before : 173\nNumber of Rows After : 172\n" ], [ "p1 = recent_grads.plot(x = 'Sample_size', y = 'Median', kind = 'scatter')\np2 = recent_grads.plot(x = 'Sample_size', y = 'Unemployment_rate', kind = 'scatter')\np3 = recent_grads.plot(x = 'Full_time', y = 'Median', kind = 'scatter')\np4 = recent_grads.plot(x = 'ShareWomen', y = 'Unemployment_rate', kind = 'scatter')\np5 = recent_grads.plot(x = 'Men', y = 'Median', kind = 'scatter')\np6 = recent_grads.plot(x = 'Women', y = 'Median', kind = 'scatter')", "_____no_output_____" ], [ "h1 = recent_grads['Sample_size'].hist(bins = 10, range = (0,4500))\nh1.set_title('Sample_size')", "_____no_output_____" ], [ "h2 = recent_grads['Median'].hist(bins = 20, range = (22000,110000))\nh2.set_title('Median')", "_____no_output_____" ], [ "h3 = recent_grads['Employed'].hist(bins = 10, range = (0,300000))\nh3.set_title('Employed')", "_____no_output_____" ], [ "h4 = recent_grads['Full_time'].hist(bins = 10, range = (0,250000))\nh4.set_title('Full_time')", "_____no_output_____" ], [ "h5 = recent_grads['ShareWomen'].hist(bins = 20, range = (0,1))\nh5.set_title('Share Women')", "_____no_output_____" ], [ "h6 = recent_grads['Men'].hist(bins = 10, range = (110,175000))\nh6.set_title('Men')", "_____no_output_____" ], [ "h7 = recent_grads['Women'].hist(bins = 10, range = (0,300000))\nh7.set_title('Women')", "_____no_output_____" ], [ "from pandas.plotting import scatter_matrix\nmatrix1 = scatter_matrix(recent_grads[['Sample_size', 'Median']])\nmatrix2 = scatter_matrix(recent_grads[['Sample_size', 'Median', 'Unemployment_rate']])", "_____no_output_____" ], [ "recent_grads['ShareWomen'][:10].plot(kind = 'bar')", "_____no_output_____" 
], [ "recent_grads['ShareWomen'][-10:-1].plot(kind = 'bar')", "_____no_output_____" ], [ "recent_grads[:10].plot.bar(x='Major_category', y='Unemployment_rate')\n# OR\n# recent_grads['Unemployment_rate'][:10].plot(kind = 'bar')", "_____no_output_____" ], [ "recent_grads[-10:-1].plot.bar(x='Major_category', y='Unemployment_rate')", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d035b379765c05652ab71ad36e93c10acb4793aa
3,807
ipynb
Jupyter Notebook
metrics/f1_score.ipynb
keyianpai/tiny-sklearn
8571fc8dee2a08822b22c540375255dbf19106fa
[ "MIT" ]
19
2019-05-08T14:50:24.000Z
2022-01-18T07:40:55.000Z
metrics/f1_score.ipynb
keyianpai/tiny-sklearn
8571fc8dee2a08822b22c540375255dbf19106fa
[ "MIT" ]
1
2019-12-05T18:08:49.000Z
2019-12-06T04:46:55.000Z
metrics/f1_score.ipynb
keyianpai/tiny-sklearn
8571fc8dee2a08822b22c540375255dbf19106fa
[ "MIT" ]
2
2019-05-08T21:38:37.000Z
2020-01-21T15:33:09.000Z
31.725
183
0.53533
[ [ [ "import numpy as np\nfrom sklearn.metrics import f1_score as skf1_score", "_____no_output_____" ], [ "def f1_score(y_true, y_pred, average):\n n_labels = len(set(y_true) | set(y_pred))\n true_sum = np.bincount(y_true, minlength=n_labels)\n pred_sum = np.bincount(y_pred, minlength=n_labels)\n tp = np.bincount(y_true[y_true == y_pred], minlength=n_labels)\n if average == \"binary\":\n tp = np.array([tp[1]])\n true_sum = np.array([true_sum[1]])\n pred_sum = np.array([pred_sum[1]])\n elif average == \"micro\":\n tp = np.array([np.sum(tp)])\n true_sum = np.array([np.sum(true_sum)])\n pred_sum = np.array([np.sum(pred_sum)])\n precision = np.zeros(len(pred_sum))\n mask = pred_sum != 0\n precision[mask] = tp[mask] / pred_sum[mask]\n recall = np.zeros(len(true_sum))\n mask = true_sum != 0\n recall[mask] = tp[mask] / true_sum[mask]\n denom = precision + recall\n denom[denom == 0.] = 1\n fscore = 2 * precision * recall / denom\n if average == \"weighted\":\n fscore = np.average(fscore, weights=true_sum)\n elif average is not None:\n fscore = np.mean(fscore)\n return fscore", "_____no_output_____" ], [ "# binary\nfor i in range(10):\n rng = np.random.RandomState(i)\n y_true = rng.randint(2, size=10)\n y_pred = rng.randint(2, size=10)\n score1 = f1_score(y_true, y_pred, average=\"binary\")\n score2 = skf1_score(y_true, y_pred, average=\"binary\")\n assert np.isclose(score1, score2)", "_____no_output_____" ], [ "# multiclass\nfor i in range(10):\n for average in (None, \"micro\", \"macro\", \"weighted\"):\n rng = np.random.RandomState(i)\n y_true = rng.randint(3, size=10)\n y_pred = rng.randint(3, size=10)\n score1 = f1_score(y_true, y_pred, average=average)\n score2 = skf1_score(y_true, y_pred, average=average)\n if average is None:\n assert np.array_equal(score1, score2)\n else:\n assert np.isclose(score1, score2)", "d:\\github\\scikit-learn\\sklearn\\metrics\\classification.py:1430: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.\n 'recall', 'true', average, warn_for)\nd:\\github\\scikit-learn\\sklearn\\metrics\\classification.py:1428: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.\n 'precision', 'predicted', average, warn_for)\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
d035b50b81f45d9174d3e48335d3a35920e7bb4a
226,393
ipynb
Jupyter Notebook
src/RandomForest/jf-model-8.ipynb
joaquinfontela/Machine-Learning
733284fe82e6c128358fe2e7721d887e2683da9f
[ "MIT" ]
null
null
null
src/RandomForest/jf-model-8.ipynb
joaquinfontela/Machine-Learning
733284fe82e6c128358fe2e7721d887e2683da9f
[ "MIT" ]
null
null
null
src/RandomForest/jf-model-8.ipynb
joaquinfontela/Machine-Learning
733284fe82e6c128358fe2e7721d887e2683da9f
[ "MIT" ]
1
2021-07-30T20:53:53.000Z
2021-07-30T20:53:53.000Z
40.688893
200
0.299227
[ [ [ "import numpy as np\nimport pandas as pd", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\nfrom matplotlib import style\nimport matplotlib.ticker as ticker\nimport seaborn as sns", "_____no_output_____" ], [ "from sklearn.datasets import load_boston\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import plot_confusion_matrix\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import f1_score\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.model_selection import RepeatedKFold\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.model_selection import ParameterGrid\nfrom sklearn.inspection import permutation_importance\nimport multiprocessing", "_____no_output_____" ], [ "labels = pd.read_csv('../../csv/train_labels.csv')\nlabels.head()", "_____no_output_____" ], [ "values = pd.read_csv('../../csv/train_values.csv')\nvalues.T", "_____no_output_____" ], [ "#Promedio de altura por piso\nvalues['height_percentage_per_floor_pre_eq'] = values['height_percentage']/values['count_floors_pre_eq']\nvalues['volume_percentage'] = values['area_percentage'] * values['height_percentage']\n\n#Algunos promedios por localizacion\nvalues['avg_age_for_geo_level_2_id'] = values.groupby('geo_level_2_id')['age'].transform('mean')\n\nvalues['avg_area_percentage_for_geo_level_2_id'] = values.groupby('geo_level_2_id')['area_percentage'].transform('mean')\n\nvalues['avg_height_percentage_for_geo_level_2_id'] = values.groupby('geo_level_2_id')['height_percentage'].transform('mean')\n\nvalues['avg_count_floors_for_geo_level_2_id'] = values.groupby('geo_level_2_id')['count_floors_pre_eq'].transform('mean')\n\nvalues['avg_age_for_geo_level_3_id'] = values.groupby('geo_level_3_id')['age'].transform('mean')\n\nvalues['avg_area_percentage_for_geo_level_3_id'] = values.groupby('geo_level_3_id')['area_percentage'].transform('mean')\n\nvalues['avg_height_percentage_for_geo_level_3_id'] = values.groupby('geo_level_3_id')['height_percentage'].transform('mean')\n\nvalues['avg_count_floors_for_geo_level_3_id'] = values.groupby('geo_level_3_id')['count_floors_pre_eq'].transform('mean')\n\n#Superestructuras\nsuperstructure_cols = [i for i in values.filter(regex='^has_superstructure*').columns]\nvalues[\"num_superstructures\"] = values[superstructure_cols[0]]\nfor c in superstructure_cols[1:]:\n values[\"num_superstructures\"] += values[c]\nvalues['has_superstructure'] = values['num_superstructures'] != 0\n\n#Familias por unidad de area y volumen y por piso\nvalues['family_area_relation'] = values['count_families'] / values['area_percentage']\nvalues['family_volume_relation'] = values['count_families'] / values['volume_percentage']\nvalues['family_floors_relation'] = values['count_families'] / values['count_floors_pre_eq']\n\n\n#Relacion material(los mas importantes segun el modelo 5)-antiguedad\nvalues['20_yr_age_range'] = values['age'] // 20 * 20\nvalues['20_yr_age_range'] = values['20_yr_age_range'].astype('str')\nvalues['superstructure'] = ''\nvalues['superstructure'] = np.where(values['has_superstructure_mud_mortar_stone'], values['superstructure'] + 'b', values['superstructure'])\nvalues['superstructure'] = np.where(values['has_superstructure_cement_mortar_brick'], values['superstructure'] + 'e', 
values['superstructure'])\nvalues['superstructure'] = np.where(values['has_superstructure_timber'], values['superstructure'] + 'f', values['superstructure'])\nvalues['age_range_superstructure'] = values['20_yr_age_range'] + values['superstructure']\ndel values['20_yr_age_range'] \ndel values['superstructure']\n\nvalues", "_____no_output_____" ], [ "values.isnull().values.any()", "_____no_output_____" ], [ "labels.isnull().values.any()", "_____no_output_____" ], [ "values.dtypes ", "_____no_output_____" ], [ "values[\"building_id\"].count() == values[\"building_id\"].drop_duplicates().count()", "_____no_output_____" ], [ "values.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 260601 entries, 0 to 260600\nData columns (total 55 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 building_id 260601 non-null int64 \n 1 geo_level_1_id 260601 non-null int64 \n 2 geo_level_2_id 260601 non-null int64 \n 3 geo_level_3_id 260601 non-null int64 \n 4 count_floors_pre_eq 260601 non-null int64 \n 5 age 260601 non-null int64 \n 6 area_percentage 260601 non-null int64 \n 7 height_percentage 260601 non-null int64 \n 8 land_surface_condition 260601 non-null object \n 9 foundation_type 260601 non-null object \n 10 roof_type 260601 non-null object \n 11 ground_floor_type 260601 non-null object \n 12 other_floor_type 260601 non-null object \n 13 position 260601 non-null object \n 14 plan_configuration 260601 non-null object \n 15 has_superstructure_adobe_mud 260601 non-null int64 \n 16 has_superstructure_mud_mortar_stone 260601 non-null int64 \n 17 has_superstructure_stone_flag 260601 non-null int64 \n 18 has_superstructure_cement_mortar_stone 260601 non-null int64 \n 19 has_superstructure_mud_mortar_brick 260601 non-null int64 \n 20 has_superstructure_cement_mortar_brick 260601 non-null int64 \n 21 has_superstructure_timber 260601 non-null int64 \n 22 has_superstructure_bamboo 260601 non-null int64 \n 23 has_superstructure_rc_non_engineered 260601 non-null int64 \n 24 has_superstructure_rc_engineered 260601 non-null int64 \n 25 has_superstructure_other 260601 non-null int64 \n 26 legal_ownership_status 260601 non-null object \n 27 count_families 260601 non-null int64 \n 28 has_secondary_use 260601 non-null int64 \n 29 has_secondary_use_agriculture 260601 non-null int64 \n 30 has_secondary_use_hotel 260601 non-null int64 \n 31 has_secondary_use_rental 260601 non-null int64 \n 32 has_secondary_use_institution 260601 non-null int64 \n 33 has_secondary_use_school 260601 non-null int64 \n 34 has_secondary_use_industry 260601 non-null int64 \n 35 has_secondary_use_health_post 260601 non-null int64 \n 36 has_secondary_use_gov_office 260601 non-null int64 \n 37 has_secondary_use_use_police 260601 non-null int64 \n 38 has_secondary_use_other 260601 non-null int64 \n 39 height_percentage_per_floor_pre_eq 260601 non-null float64\n 40 volume_percentage 260601 non-null int64 \n 41 avg_age_for_geo_level_2_id 260601 non-null float64\n 42 avg_area_percentage_for_geo_level_2_id 260601 non-null float64\n 43 avg_height_percentage_for_geo_level_2_id 260601 non-null float64\n 44 avg_count_floors_for_geo_level_2_id 260601 non-null float64\n 45 avg_age_for_geo_level_3_id 260601 non-null float64\n 46 avg_area_percentage_for_geo_level_3_id 260601 non-null float64\n 47 avg_height_percentage_for_geo_level_3_id 260601 non-null float64\n 48 avg_count_floors_for_geo_level_3_id 260601 non-null float64\n 49 num_superstructures 260601 non-null int64 \n 50 has_superstructure 260601 non-null bool \n 51 
family_area_relation 260601 non-null float64\n 52 family_volume_relation 260601 non-null float64\n 53 family_floors_relation 260601 non-null float64\n 54 age_range_superstructure 260601 non-null object \ndtypes: bool(1), float64(12), int64(33), object(9)\nmemory usage: 107.6+ MB\n" ], [ "to_be_categorized = [\"land_surface_condition\", \"foundation_type\", \"roof_type\",\\\n \"position\", \"ground_floor_type\", \"other_floor_type\",\\\n \"plan_configuration\", \"legal_ownership_status\", \"age_range_superstructure\"]\nfor row in to_be_categorized:\n values[row] = values[row].astype(\"category\")\nvalues.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 260601 entries, 0 to 260600\nData columns (total 55 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 building_id 260601 non-null int64 \n 1 geo_level_1_id 260601 non-null int64 \n 2 geo_level_2_id 260601 non-null int64 \n 3 geo_level_3_id 260601 non-null int64 \n 4 count_floors_pre_eq 260601 non-null int64 \n 5 age 260601 non-null int64 \n 6 area_percentage 260601 non-null int64 \n 7 height_percentage 260601 non-null int64 \n 8 land_surface_condition 260601 non-null category\n 9 foundation_type 260601 non-null category\n 10 roof_type 260601 non-null category\n 11 ground_floor_type 260601 non-null category\n 12 other_floor_type 260601 non-null category\n 13 position 260601 non-null category\n 14 plan_configuration 260601 non-null category\n 15 has_superstructure_adobe_mud 260601 non-null int64 \n 16 has_superstructure_mud_mortar_stone 260601 non-null int64 \n 17 has_superstructure_stone_flag 260601 non-null int64 \n 18 has_superstructure_cement_mortar_stone 260601 non-null int64 \n 19 has_superstructure_mud_mortar_brick 260601 non-null int64 \n 20 has_superstructure_cement_mortar_brick 260601 non-null int64 \n 21 has_superstructure_timber 260601 non-null int64 \n 22 has_superstructure_bamboo 260601 non-null int64 \n 23 has_superstructure_rc_non_engineered 260601 non-null int64 \n 24 has_superstructure_rc_engineered 260601 non-null int64 \n 25 has_superstructure_other 260601 non-null int64 \n 26 legal_ownership_status 260601 non-null category\n 27 count_families 260601 non-null int64 \n 28 has_secondary_use 260601 non-null int64 \n 29 has_secondary_use_agriculture 260601 non-null int64 \n 30 has_secondary_use_hotel 260601 non-null int64 \n 31 has_secondary_use_rental 260601 non-null int64 \n 32 has_secondary_use_institution 260601 non-null int64 \n 33 has_secondary_use_school 260601 non-null int64 \n 34 has_secondary_use_industry 260601 non-null int64 \n 35 has_secondary_use_health_post 260601 non-null int64 \n 36 has_secondary_use_gov_office 260601 non-null int64 \n 37 has_secondary_use_use_police 260601 non-null int64 \n 38 has_secondary_use_other 260601 non-null int64 \n 39 height_percentage_per_floor_pre_eq 260601 non-null float64 \n 40 volume_percentage 260601 non-null int64 \n 41 avg_age_for_geo_level_2_id 260601 non-null float64 \n 42 avg_area_percentage_for_geo_level_2_id 260601 non-null float64 \n 43 avg_height_percentage_for_geo_level_2_id 260601 non-null float64 \n 44 avg_count_floors_for_geo_level_2_id 260601 non-null float64 \n 45 avg_age_for_geo_level_3_id 260601 non-null float64 \n 46 avg_area_percentage_for_geo_level_3_id 260601 non-null float64 \n 47 avg_height_percentage_for_geo_level_3_id 260601 non-null float64 \n 48 avg_count_floors_for_geo_level_3_id 260601 non-null float64 \n 49 num_superstructures 260601 non-null int64 \n 50 has_superstructure 260601 non-null bool \n 51 
family_area_relation 260601 non-null float64 \n 52 family_volume_relation 260601 non-null float64 \n 53 family_floors_relation 260601 non-null float64 \n 54 age_range_superstructure 260601 non-null category\ndtypes: bool(1), category(9), float64(12), int64(33)\nmemory usage: 92.0 MB\n" ], [ "datatypes = dict(values.dtypes)\nfor row in values.columns:\n if datatypes[row] != \"int64\" and datatypes[row] != \"int32\" and \\\n datatypes[row] != \"int16\" and datatypes[row] != \"int8\":\n continue\n if values[row].nlargest(1).item() > 32767 and values[row].nlargest(1).item() < 2**31:\n values[row] = values[row].astype(np.int32)\n elif values[row].nlargest(1).item() > 127:\n values[row] = values[row].astype(np.int16)\n else:\n values[row] = values[row].astype(np.int8)\nvalues.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 260601 entries, 0 to 260600\nData columns (total 55 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 building_id 260601 non-null int32 \n 1 geo_level_1_id 260601 non-null int8 \n 2 geo_level_2_id 260601 non-null int16 \n 3 geo_level_3_id 260601 non-null int16 \n 4 count_floors_pre_eq 260601 non-null int8 \n 5 age 260601 non-null int16 \n 6 area_percentage 260601 non-null int8 \n 7 height_percentage 260601 non-null int8 \n 8 land_surface_condition 260601 non-null category\n 9 foundation_type 260601 non-null category\n 10 roof_type 260601 non-null category\n 11 ground_floor_type 260601 non-null category\n 12 other_floor_type 260601 non-null category\n 13 position 260601 non-null category\n 14 plan_configuration 260601 non-null category\n 15 has_superstructure_adobe_mud 260601 non-null int8 \n 16 has_superstructure_mud_mortar_stone 260601 non-null int8 \n 17 has_superstructure_stone_flag 260601 non-null int8 \n 18 has_superstructure_cement_mortar_stone 260601 non-null int8 \n 19 has_superstructure_mud_mortar_brick 260601 non-null int8 \n 20 has_superstructure_cement_mortar_brick 260601 non-null int8 \n 21 has_superstructure_timber 260601 non-null int8 \n 22 has_superstructure_bamboo 260601 non-null int8 \n 23 has_superstructure_rc_non_engineered 260601 non-null int8 \n 24 has_superstructure_rc_engineered 260601 non-null int8 \n 25 has_superstructure_other 260601 non-null int8 \n 26 legal_ownership_status 260601 non-null category\n 27 count_families 260601 non-null int8 \n 28 has_secondary_use 260601 non-null int8 \n 29 has_secondary_use_agriculture 260601 non-null int8 \n 30 has_secondary_use_hotel 260601 non-null int8 \n 31 has_secondary_use_rental 260601 non-null int8 \n 32 has_secondary_use_institution 260601 non-null int8 \n 33 has_secondary_use_school 260601 non-null int8 \n 34 has_secondary_use_industry 260601 non-null int8 \n 35 has_secondary_use_health_post 260601 non-null int8 \n 36 has_secondary_use_gov_office 260601 non-null int8 \n 37 has_secondary_use_use_police 260601 non-null int8 \n 38 has_secondary_use_other 260601 non-null int8 \n 39 height_percentage_per_floor_pre_eq 260601 non-null float64 \n 40 volume_percentage 260601 non-null int16 \n 41 avg_age_for_geo_level_2_id 260601 non-null float64 \n 42 avg_area_percentage_for_geo_level_2_id 260601 non-null float64 \n 43 avg_height_percentage_for_geo_level_2_id 260601 non-null float64 \n 44 avg_count_floors_for_geo_level_2_id 260601 non-null float64 \n 45 avg_age_for_geo_level_3_id 260601 non-null float64 \n 46 avg_area_percentage_for_geo_level_3_id 260601 non-null float64 \n 47 avg_height_percentage_for_geo_level_3_id 260601 non-null float64 \n 48 
avg_count_floors_for_geo_level_3_id 260601 non-null float64 \n 49 num_superstructures 260601 non-null int8 \n 50 has_superstructure 260601 non-null bool \n 51 family_area_relation 260601 non-null float64 \n 52 family_volume_relation 260601 non-null float64 \n 53 family_floors_relation 260601 non-null float64 \n 54 age_range_superstructure 260601 non-null category\ndtypes: bool(1), category(9), float64(12), int16(4), int32(1), int8(28)\nmemory usage: 36.3 MB\n" ], [ "labels.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 260601 entries, 0 to 260600\nData columns (total 2 columns):\n # Column Non-Null Count Dtype\n--- ------ -------------- -----\n 0 building_id 260601 non-null int64\n 1 damage_grade 260601 non-null int64\ndtypes: int64(2)\nmemory usage: 4.0 MB\n" ], [ "labels[\"building_id\"] = labels[\"building_id\"].astype(np.int32)\nlabels[\"damage_grade\"] = labels[\"damage_grade\"].astype(np.int8)\nlabels.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 260601 entries, 0 to 260600\nData columns (total 2 columns):\n # Column Non-Null Count Dtype\n--- ------ -------------- -----\n 0 building_id 260601 non-null int32\n 1 damage_grade 260601 non-null int8 \ndtypes: int32(1), int8(1)\nmemory usage: 1.2 MB\n" ] ], [ [ "# Nuevo Modelo", "_____no_output_____" ] ], [ [ "important_values = values\\\n .merge(labels, on=\"building_id\")\nimportant_values.drop(columns=[\"building_id\"], inplace = True)\nimportant_values[\"geo_level_1_id\"] = important_values[\"geo_level_1_id\"].astype(\"category\")\nimportant_values", "_____no_output_____" ], [ "important_values.shape", "_____no_output_____" ], [ "\nX_train, X_test, y_train, y_test = train_test_split(important_values.drop(columns = 'damage_grade'),\n important_values['damage_grade'], test_size = 0.2, random_state = 123)", "_____no_output_____" ], [ "#OneHotEncoding\ndef encode_and_bind(original_dataframe, feature_to_encode):\n dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])\n res = pd.concat([original_dataframe, dummies], axis=1)\n res = res.drop([feature_to_encode], axis=1)\n return(res) \n\nfeatures_to_encode = [\"geo_level_1_id\", \"land_surface_condition\", \"foundation_type\", \"roof_type\",\\\n \"position\", \"ground_floor_type\", \"other_floor_type\",\\\n \"plan_configuration\", \"legal_ownership_status\", \"age_range_superstructure\"]\nfor feature in features_to_encode:\n X_train = encode_and_bind(X_train, feature)\n X_test = encode_and_bind(X_test, feature)", "_____no_output_____" ], [ "X_train", "_____no_output_____" ], [ "X_train.shape", "_____no_output_____" ], [ "# # Busco los mejores tres parametros indicados abajo.\n# n_estimators = [65, 100, 135]\n# max_features = [0.2, 0.5, 0.8]\n# max_depth = [None, 2, 5]\n# min_samples_split = [5, 15, 25]\n# # min_impurity_decrease = [0.0, 0.01, 0.025, 0.05, 0.1]\n# # min_samples_leaf\n\n# hyperF = {'n_estimators': n_estimators,\n# 'max_features': max_features, \n# 'max_depth': max_depth, \n# 'min_samples_split': min_samples_split\n# }\n\n# gridF = GridSearchCV(estimator = RandomForestClassifier(random_state = 123),\n# scoring = 'f1_micro',\n# param_grid = hyperF,\n# cv = 3,\n# verbose = 1, \n# n_jobs = -1)\n\n# bestF = gridF.fit(X_train, y_train)", "_____no_output_____" ], [ "# res = pd.DataFrame(bestF.cv_results_)\n# res.loc[res['rank_test_score'] <= 10]", "_____no_output_____" ], [ "# Utilizo los mejores parametros segun el GridSearch\nrf_model = RandomForestClassifier(n_estimators = 150,\n max_depth = None,\n max_features = 50,\n min_samples_split = 
15,\n min_samples_leaf = 1,\n criterion = \"gini\",\n verbose=True)\nrf_model.fit(X_train, y_train)", "[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 150 out of 150 | elapsed: 3.6min finished\n" ], [ "rf_model.score(X_train, y_train)", "[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 150 out of 150 | elapsed: 9.6s finished\n" ], [ "# Calculo el F1 score para mi training set.\ny_preds = rf_model.predict(X_test)\nf1_score(y_test, y_preds, average='micro')", "[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 150 out of 150 | elapsed: 2.4s finished\n" ], [ "test_values = pd.read_csv('../../csv/test_values.csv', index_col = \"building_id\")\ntest_values", "_____no_output_____" ], [ "test_values_subset = test_values\ntest_values_subset[\"geo_level_1_id\"] = test_values_subset[\"geo_level_1_id\"].astype(\"category\")\ntest_values_subset", "_____no_output_____" ], [ "#Promedio de altura por piso\ntest_values_subset['height_percentage_per_floor_pre_eq'] = test_values_subset['height_percentage']/test_values_subset['count_floors_pre_eq']\ntest_values_subset['volume_percentage'] = test_values_subset['area_percentage'] * test_values_subset['height_percentage']\n\n#Algunos promedios por localizacion\ntest_values_subset['avg_age_for_geo_level_2_id'] = test_values_subset.groupby('geo_level_2_id')['age'].transform('mean')\n\ntest_values_subset['avg_area_percentage_for_geo_level_2_id'] = test_values_subset.groupby('geo_level_2_id')['area_percentage'].transform('mean')\n\ntest_values_subset['avg_height_percentage_for_geo_level_2_id'] = test_values_subset.groupby('geo_level_2_id')['height_percentage'].transform('mean')\n\ntest_values_subset['avg_count_floors_for_geo_level_2_id'] = test_values_subset.groupby('geo_level_2_id')['count_floors_pre_eq'].transform('mean')\n\ntest_values_subset['avg_age_for_geo_level_3_id'] = test_values_subset.groupby('geo_level_3_id')['age'].transform('mean')\n\ntest_values_subset['avg_area_percentage_for_geo_level_3_id'] = test_values_subset.groupby('geo_level_3_id')['area_percentage'].transform('mean')\n\ntest_values_subset['avg_height_percentage_for_geo_level_3_id'] = test_values_subset.groupby('geo_level_3_id')['height_percentage'].transform('mean')\n\ntest_values_subset['avg_count_floors_for_geo_level_3_id'] = test_values_subset.groupby('geo_level_3_id')['count_floors_pre_eq'].transform('mean')\n\n#Superestructuras\nsuperstructure_cols = [i for i in test_values_subset.filter(regex='^has_superstructure*').columns]\ntest_values_subset[\"num_superstructures\"] = test_values_subset[superstructure_cols[0]]\nfor c in superstructure_cols[1:]:\n test_values_subset[\"num_superstructures\"] += test_values_subset[c]\ntest_values_subset['has_superstructure'] = test_values_subset['num_superstructures'] != 0\n\n#Familias por unidad de area y volumen y por piso\ntest_values_subset['family_area_relation'] = test_values_subset['count_families'] / test_values_subset['area_percentage']\ntest_values_subset['family_volume_relation'] = test_values_subset['count_families'] / test_values_subset['volume_percentage']\ntest_values_subset['family_floors_relation'] = test_values_subset['count_families'] / test_values_subset['count_floors_pre_eq']\n\n#Relacion material(los mas importantes segun el modelo 5)-antiguedad\ntest_values_subset['20_yr_age_range'] = test_values_subset['age'] // 20 * 
20\ntest_values_subset['20_yr_age_range'] = test_values_subset['20_yr_age_range'].astype('str')\ntest_values_subset['superstructure'] = ''\ntest_values_subset['superstructure'] = np.where(test_values_subset['has_superstructure_mud_mortar_stone'], test_values_subset['superstructure'] + 'b', test_values_subset['superstructure'])\ntest_values_subset['superstructure'] = np.where(test_values_subset['has_superstructure_cement_mortar_brick'], test_values_subset['superstructure'] + 'e', test_values_subset['superstructure'])\ntest_values_subset['superstructure'] = np.where(test_values_subset['has_superstructure_timber'], test_values_subset['superstructure'] + 'f', test_values_subset['superstructure'])\ntest_values_subset['age_range_superstructure'] = test_values_subset['20_yr_age_range'] + test_values_subset['superstructure']\ndel test_values_subset['20_yr_age_range'] \ndel test_values_subset['superstructure']\n\ntest_values_subset", "_____no_output_____" ], [ "def encode_and_bind(original_dataframe, feature_to_encode):\n dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])\n res = pd.concat([original_dataframe, dummies], axis=1)\n res = res.drop([feature_to_encode], axis=1)\n return(res) \n\nfeatures_to_encode = [\"geo_level_1_id\", \"land_surface_condition\", \"foundation_type\", \"roof_type\",\\\n \"position\", \"ground_floor_type\", \"other_floor_type\",\\\n \"plan_configuration\", \"legal_ownership_status\", \"age_range_superstructure\"]\nfor feature in features_to_encode:\n test_values_subset = encode_and_bind(test_values_subset, feature)\ntest_values_subset", "_____no_output_____" ], [ "features_in_model_not_in_tests =\\\n list(filter(lambda col: col not in test_values_subset.columns.to_list(), X_train.columns.to_list()))\nfor f in features_in_model_not_in_tests:\n test_values_subset[f] = 0\ntest_values_subset.drop(columns = list(filter(lambda col: col not in X_train.columns.to_list() , test_values_subset.columns.to_list())), inplace = True)", "_____no_output_____" ], [ "test_values_subset.shape", "_____no_output_____" ], [ "# Genero las predicciones para los test.\npreds = rf_model.predict(test_values_subset)", "[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 150 out of 150 | elapsed: 3.8s finished\n" ], [ "submission_format = pd.read_csv('../../csv/submission_format.csv', index_col = \"building_id\")", "_____no_output_____" ], [ "my_submission = pd.DataFrame(data=preds,\n columns=submission_format.columns,\n index=submission_format.index)", "_____no_output_____" ], [ "my_submission.head()", "_____no_output_____" ], [ "my_submission.to_csv('../../csv/predictions/jf/8/jf-model-8-submission.csv')", "_____no_output_____" ], [ "!head ../../csv/predictions/jf/8/jf-model-8-submission.csv", "building_id,damage_grade\r\n300051,3\r\n99355,2\r\n890251,2\r\n745817,1\r\n421793,3\r\n871976,2\r\n691228,1\r\n896100,3\r\n343471,2\r\n" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d035bc61da728c6026d915af3a2994d3957e4043
2,969
ipynb
Jupyter Notebook
notebooks/tutorials/2 equilibrium constants.ipynb
geem-lab/overreact-guide
4a0861bb8ffb451b2c26adfca32758a066a60704
[ "MIT" ]
9
2021-11-09T15:57:07.000Z
2022-01-22T17:12:23.000Z
notebooks/tutorials/2 equilibrium constants.ipynb
Leticia-maria/overreact-guide
de404bb738900536c9f916b10981d6b1e9b60ba8
[ "MIT" ]
12
2021-11-23T19:08:31.000Z
2022-03-28T14:09:26.000Z
notebooks/tutorials/2 equilibrium constants.ipynb
Leticia-maria/overreact-guide
de404bb738900536c9f916b10981d6b1e9b60ba8
[ "MIT" ]
1
2021-12-19T00:44:56.000Z
2021-12-19T00:44:56.000Z
20.908451
115
0.496463
[ [ [ "## Equilibrium constants\n\nCalculating equilibrium constants from energy values is easy.\n\nIt's known that the stability constant of $\\require{mhchem}\\ce{Cd(MeNH2)4^{2+}}$ is around $10^{6.55}$:", "_____no_output_____" ] ], [ [ "from overreact import core, _thermo, simulate\nimport numpy as np\nfrom scipy import constants\n\nK = _thermo.equilibrium_constant(-37.4 * constants.kilo)\nnp.log10(K)", "_____no_output_____" ] ], [ [ "So let's check it:", "_____no_output_____" ] ], [ [ "scheme = core.parse_reactions(\"\"\"\n Cd2p + 4 MeNH2 <=> [Cd(MeNH2)4]2p\n\"\"\")\nscheme.compounds, scheme.reactions", "_____no_output_____" ], [ "dydt = simulate.get_dydt(scheme, np.array([K[0], 1.]))\ny, r = simulate.get_y(dydt, y0=[0., 0., 1.])\ny(y.t_max)", "WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)\n" ], [ "Kobs = y(y.t_max)[2] / (y(y.t_max)[0] * y(y.t_max)[1]**4)\nnp.log10(Kobs)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
d035be8d543d4beb977223d4eb1ef4681e92f7ef
337,049
ipynb
Jupyter Notebook
classification/ClassificationContinuous2Features-KNN.ipynb
tonio73/data-science
e8a365751e7f6956d515f85df4818562b8da3b63
[ "MIT" ]
16
2019-07-25T09:44:23.000Z
2022-03-15T07:14:27.000Z
classification/ClassificationContinuous2Features-KNN.ipynb
tonio73/data-science
e8a365751e7f6956d515f85df4818562b8da3b63
[ "MIT" ]
null
null
null
classification/ClassificationContinuous2Features-KNN.ipynb
tonio73/data-science
e8a365751e7f6956d515f85df4818562b8da3b63
[ "MIT" ]
3
2019-12-09T16:00:02.000Z
2021-12-22T13:22:25.000Z
495.660294
179,960
0.945144
[ [ [ "# Binary classification from 2 features using K Nearest Neighbors (KNN)\n\nClassification using \"raw\" python or libraries.\n\nThe binary classification is on a single boundary defined by a continuous function and added white noise", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom numpy import random\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as pltcolors\nfrom sklearn import metrics\nfrom sklearn.neighbors import KNeighborsClassifier as SkKNeighborsClassifier\nimport pandas as pd\nimport seaborn as sns", "_____no_output_____" ] ], [ [ "## Model\n\nQuadratic function as boundary between positive and negative values\n\nAdding some unknown as a Gaussian noise\n\nThe values of X are uniformly distributed and independent", "_____no_output_____" ] ], [ [ "# Two features, Gaussian noise\ndef generateBatch(N):\n #\n xMin = 0\n xMax = 1\n b = 0.1\n std = 0.1\n #\n x = random.uniform(xMin, xMax, (N, 2))\n # 4th degree relation to shape the boundary\n boundary = 2*(x[:,0]**4 + (x[:,0]-0.3)**3 + b)\n # Adding some gaussian noise\n labels = boundary + random.normal(0, std, N) > x[:,1]\n return (x, labels)", "_____no_output_____" ] ], [ [ "### Training data", "_____no_output_____" ] ], [ [ "N = 2000\n# x has 1 dim in R, label has 1 dim in B\nxTrain, labelTrain = generateBatch(N)\n\ncolors = ['blue','red']\n\nfig = plt.figure(figsize=(15,4))\nplt.subplot(1,3,1)\nplt.scatter(xTrain[:,0], xTrain[:,1], c=labelTrain, cmap=pltcolors.ListedColormap(colors), marker=',', alpha=0.1)\nplt.xlabel('x0')\nplt.ylabel('x1')\nplt.title('Generated train data')\nplt.grid()\ncb = plt.colorbar()\nloc = np.arange(0,1,1/float(len(colors)))\ncb.set_ticks(loc)\ncb.set_ticklabels([0,1])\nplt.subplot(1,3,2)\nplt.scatter(xTrain[:,0], labelTrain, marker=',', alpha=0.01)\nplt.xlabel('x0')\nplt.ylabel('label')\nplt.grid()\nplt.subplot(1,3,3)\nplt.scatter(xTrain[:,1], labelTrain, marker=',', alpha=0.01)\nplt.xlabel('x1')\nplt.ylabel('label')\nplt.grid()", "_____no_output_____" ], [ "count, bins, ignored = plt.hist(labelTrain*1.0, 10, density=True, alpha=0.5)\np = np.mean(labelTrain)\nprint('Bernouilli parameter of the distribution:', p)", "Bernouilli parameter of the distribution: 0.506\n" ] ], [ [ "### Test data for verification of the model", "_____no_output_____" ] ], [ [ "xTest, labelTest = generateBatch(N)\ntestColors = ['navy', 'orangered']", "_____no_output_____" ] ], [ [ "# Helpers", "_____no_output_____" ] ], [ [ "def plotHeatMap(X, classes, title=None, fmt='.2g', ax=None, xlabel=None, ylabel=None):\n \"\"\" Fix heatmap plot from Seaborn with pyplot 3.1.0, 3.1.1\n https://stackoverflow.com/questions/56942670/matplotlib-seaborn-first-and-last-row-cut-in-half-of-heatmap-plot\n \"\"\"\n ax = sns.heatmap(X, xticklabels=classes, yticklabels=classes, annot=True, fmt=fmt, cmap=plt.cm.Blues, ax=ax) #notation: \"annot\" not \"annote\"\n bottom, top = ax.get_ylim()\n ax.set_ylim(bottom + 0.5, top - 0.5)\n if title:\n ax.set_title(title)\n if xlabel:\n ax.set_xlabel(xlabel)\n if ylabel:\n ax.set_ylabel(ylabel)\n \ndef plotConfusionMatrix(yTrue, yEst, classes, title=None, fmt='.2g', ax=None):\n plotHeatMap(metrics.confusion_matrix(yTrue, yEst), classes, title, fmt, ax, \\\n xlabel='Estimations', ylabel='True values');", "_____no_output_____" ] ], [ [ "# K Nearest Neighbors (KNN)\n\n\n\nReferences:\n- https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm\n- https://machinelearningmastery.com/tutorial-to-implement-k-nearest-neighbors-in-python-from-scratch/\n", "_____no_output_____" ], [ "## 
Homemade\n\nUsing a simple algorithm.\n\nUnweighted : each of the K neighbors has the same weight", "_____no_output_____" ] ], [ [ "# Select a K\nk = 10\n# Create a Panda dataframe in order to link x and y\ndf = pd.DataFrame(np.concatenate((xTrain, labelTrain.reshape(-1,1)), axis=1), columns = ('x0', 'x1', 'label'))\n# Insert columns to compute the difference of current test to the train and the L2\ndf.insert(df.shape[1], 'diff0', 0)\ndf.insert(df.shape[1], 'diff1', 0)\ndf.insert(df.shape[1], 'L2', 0)\n#\nthreshold = k / 2\nlabelEst0 = np.zeros(xTest.shape[0])\nfor i, x in enumerate(xTest):\n # Compute distance and norm to each training sample\n df['diff0'] = df['x0'] - x[0]\n df['diff1'] = df['x1'] - x[1]\n df['L2'] = df['diff0']**2 + df['diff1']**2\n # Get the K lowest\n kSmallest = df.nsmallest(k, 'L2')\n # Finalize prediction based on the mean\n labelEst0[i] = np.sum(kSmallest['label']) > threshold", "_____no_output_____" ] ], [ [ "### Performance of homemade model", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(12,4))\nplt.subplot(1,3,1)\nplt.scatter(xTest[:,0], xTest[:,1], c=labelEst0, cmap=pltcolors.ListedColormap(testColors), marker='x', alpha=0.2);\nplt.xlabel('x0')\nplt.ylabel('x1')\nplt.grid()\nplt.title('Estimated')\ncb = plt.colorbar()\nloc = np.arange(0,1,1./len(testColors))\ncb.set_ticks(loc)\ncb.set_ticklabels([0,1]);\nplt.subplot(1,3,2)\nplt.hist(labelEst0, 10, density=True, alpha=0.5)\nplt.title('Bernouilli parameter =' + str(np.mean(labelEst0)))\nplt.subplot(1,3,3)\nplt.scatter(xTest[:,0], xTest[:,1], c=labelTest, cmap=pltcolors.ListedColormap(colors), marker='x', alpha=0.1);\nplt.xlabel('x0')\nplt.ylabel('x1')\nplt.grid()\nplt.title('Generator')\ncb = plt.colorbar()\nloc = np.arange(0,1,1./len(colors))\ncb.set_ticks(loc)\ncb.set_ticklabels([0,1]);", "_____no_output_____" ], [ "accuracy0 = np.sum(labelTest == labelEst0)/N\nprint('Accuracy =', accuracy0)", "Accuracy = 0.9265\n" ] ], [ [ "### Precision \n$p(y = 1 \\mid \\hat{y} = 1)$", "_____no_output_____" ] ], [ [ "print('Precision =', np.sum(labelTest[labelEst0 == 1])/np.sum(labelEst0))", "Precision = 0.9505783385909569\n" ] ], [ [ "### Recall\n$p(\\hat{y} = 1 \\mid y = 1)$", "_____no_output_____" ] ], [ [ "print('Recall =', np.sum(labelTest[labelEst0 == 1])/np.sum(labelTest))", "Recall = 0.900398406374502\n" ] ], [ [ "### Confusion matrix", "_____no_output_____" ] ], [ [ "plotConfusionMatrix(labelTest, labelEst0, np.array(['Blue', 'Red']));", "_____no_output_____" ], [ "print(metrics.classification_report(labelTest, labelEst0))", " precision recall f1-score support\n\n False 0.90 0.95 0.93 996\n True 0.95 0.90 0.92 1004\n\n accuracy 0.93 2000\n macro avg 0.93 0.93 0.93 2000\nweighted avg 0.93 0.93 0.93 2000\n\n" ] ], [ [ "This non-parametric model has a the best performance of all models used so far, including the neural network with two layers.\n\nThe large drawback is the amount of computation for each sample to predict. 
\nThis method is hardly usable for sample sizes over 10k.", "_____no_output_____" ], [ "# Using SciKit Learn\n\nReferences:\n- SciKit documentation\n- https://stackabuse.com/k-nearest-neighbors-algorithm-in-python-and-scikit-learn/", "_____no_output_____" ] ], [ [ "model1 = SkKNeighborsClassifier(n_neighbors=k)\nmodel1.fit(xTrain, labelTrain)", "_____no_output_____" ], [ "labelEst1 = model1.predict(xTest)\nprint('Accuracy =', model1.score(xTest, labelTest))", "Accuracy = 0.9265\n" ], [ "plt.hist(labelEst1*1.0, 10, density=True, alpha=0.5)\nplt.title('Bernouilli parameter =' + str(np.mean(labelEst1)));", "_____no_output_____" ] ], [ [ "### Confusion matrix (plot)", "_____no_output_____" ] ], [ [ "plotConfusionMatrix(labelTest, labelEst1, np.array(['Blue', 'Red']));", "_____no_output_____" ] ], [ [ "### Classification report", "_____no_output_____" ] ], [ [ "print(metrics.classification_report(labelTest, labelEst1))", " precision recall f1-score support\n\n False 0.90 0.95 0.93 996\n True 0.95 0.90 0.92 1004\n\n accuracy 0.93 2000\n macro avg 0.93 0.93 0.93 2000\nweighted avg 0.93 0.93 0.93 2000\n\n" ] ], [ [ "### ROC curve", "_____no_output_____" ] ], [ [ "logit_roc_auc = metrics.roc_auc_score(labelTest, labelEst1)\nfpr, tpr, thresholds = metrics.roc_curve(labelTest, model1.predict_proba(xTest)[:,1])\nplt.plot(fpr, tpr, label='KNN classification (area = %0.2f)' % logit_roc_auc)\nplt.plot([0, 1], [0, 1],'r--')\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.05])\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('Receiver operating characteristic')\nplt.legend(loc=\"lower right\");", "_____no_output_____" ] ], [ [ "# Where to go from here ?\n\n- Other linear implementations and simple neural nets using \"raw\" Python or SciKit Learn([HTML](ClassificationContinuous2Features.html) / [Jupyter](ClassificationContinuous2Features.ipynb)), using TensorFlow([HTML](ClassificationContinuous2Features-TensorFlow.html) / [Jupyter](ClassificationContinuous2Features-TensorFlow.ipynb)), or using Keras ([HTML](ClassificationContinuous2Features-Keras.html)/ [Jupyter](ClassificationContinuous2Features-Keras.ipynb))\n\n- Non linear problem solving with Support Vector Machine (SVM) ([HTML](ClassificationSVM.html) / [Jupyter](ClassificationSVM.ipynb))\n- More complex multi-class models on the Czech and Norways flags using Keras ([HTML](ClassificationMulti2Features-Keras.html) / [Jupyter](ClassificationMulti2Features-Keras.ipynb)), showing one of the main motivations to neural networks.\n\n- Compare with the two feature linear regression using simple algorithms ([HTML](../linear/LinearRegressionBivariate.html) / [Jupyter](LinearRegressionBivariate.ipynb])), or using Keras ([HTML](LinearRegressionBivariate-Keras.html) / [Jupyter](LinearRegressionUnivariate-Keras.ipynb))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d035c2073408ddd66d32e8f846209ed1c0c9cb30
3,612
ipynb
Jupyter Notebook
Functions/network_data&code/Network_DEA_function_example.ipynb
PO-LAB/DEA
f17f261e013ad7a0d7ff48affe67174b572e17ba
[ "MIT" ]
74
2018-01-29T06:40:57.000Z
2022-02-17T20:41:11.000Z
Functions/network_data&code/Network_DEA_function_example.ipynb
apapaioannou92/DEA-1
3d73b3333f0bfff0d7fd82d01d56008c127d1b85
[ "MIT" ]
2
2018-02-09T13:42:33.000Z
2021-05-20T10:03:41.000Z
Functions/network_data&code/Network_DEA_function_example.ipynb
apapaioannou92/DEA-1
3d73b3333f0bfff0d7fd82d01d56008c127d1b85
[ "MIT" ]
45
2018-01-30T07:32:13.000Z
2022-01-20T09:49:59.000Z
31.408696
133
0.626523
[ [ [ "# Network DEA function example\n在此示範如何使用Network DEA函式,並顯示執行後的結果。 <BR>\n \n※示範程式碼及csv資料存放於[這裡](https://github.com/wurmen/DEA/tree/master/Functions/network_data%26code),可自行下載測試。", "_____no_output_____" ] ], [ [ "import network_function #載入存放Network DEA函式的py檔(在此檔名為\"network_function.py\")", "_____no_output_____" ] ], [ [ "- 將檔案讀成所需格式\n- X、Z_input存放於network_data_input.csv中,X位於檔案中的2~3行,Z_input位於4~5行\n- Y、Z_output存放於network_data_output.csv中,Y位於檔案中第2行,Z_output位於3~4行\n- 該系統共有3個製程,透過csv2dict_for_network_dea()函式進行讀檔,並回傳得到DMU列表及整體系統與各製程產出投入資料,以及製程數", "_____no_output_____" ] ], [ [ "DMU,X,Z_input,p_n=network_function.csv2dict_for_network_dea('network_data_input.csv', v1_range=[2,3], v2_range=[4,5], p_n=3)\nDMU,Y,Z_output,p_n=network_function.csv2dict_for_network_dea('network_data_output.csv', v1_range=[2,2], v2_range=[3,4], p_n=3)", "_____no_output_____" ] ], [ [ "- 將上述讀檔程式所轉換後的資料放入network DEA函式中,並將權重下限設為1e-11", "_____no_output_____" ] ], [ [ "network_function.network(DMU,X,Y,Z_input,Z_output,p_n,var_lb=1e-11)", "The efficiency of DMU A:0.523\nThe efficiency and inefficiency of Process 0 for DMU A:1.0000 and 0\nThe efficiency and inefficiency of Process 1 for DMU A:0.7500 and 0.09091\nThe efficiency and inefficiency of Process 2 for DMU A:0.3462 and 0.3864\nThe efficiency of DMU B:0.595\nThe efficiency and inefficiency of Process 0 for DMU B:0.8333 and 0.07143\nThe efficiency and inefficiency of Process 1 for DMU B:1.0000 and 0\nThe efficiency and inefficiency of Process 2 for DMU B:0.5088 and 0.3333\nThe efficiency of DMU C:0.568\nThe efficiency and inefficiency of Process 0 for DMU C:0.5000 and 0.1364\nThe efficiency and inefficiency of Process 1 for DMU C:0.4000 and 0.2727\nThe efficiency and inefficiency of Process 2 for DMU C:0.9474 and 0.02273\nThe efficiency of DMU D:0.482\nThe efficiency and inefficiency of Process 0 for DMU D:0.5625 and 0.125\nThe efficiency and inefficiency of Process 1 for DMU D:0.8000 and 0.07143\nThe efficiency and inefficiency of Process 2 for DMU D:0.3333 and 0.3214\nThe efficiency of DMU E:0.800\nThe efficiency and inefficiency of Process 0 for DMU E:0.8333 and 0.06667\nThe efficiency and inefficiency of Process 1 for DMU E:0.5000 and 0.1333\nThe efficiency and inefficiency of Process 2 for DMU E:1.0000 and 0\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d035c569e27049f50006bbcc212bcc9bcf2ba170
5,899
ipynb
Jupyter Notebook
0.15/_downloads/plot_mne_inverse_psi_visual.ipynb
drammock/mne-tools.github.io
5d3a104d174255644d8d5335f58036e32695e85d
[ "BSD-3-Clause" ]
null
null
null
0.15/_downloads/plot_mne_inverse_psi_visual.ipynb
drammock/mne-tools.github.io
5d3a104d174255644d8d5335f58036e32695e85d
[ "BSD-3-Clause" ]
null
null
null
0.15/_downloads/plot_mne_inverse_psi_visual.ipynb
drammock/mne-tools.github.io
5d3a104d174255644d8d5335f58036e32695e85d
[ "BSD-3-Clause" ]
null
null
null
109.240741
3,911
0.651127
[ [ [ "%matplotlib inline", "_____no_output_____" ] ], [ [ "\n=====================================================================\nCompute Phase Slope Index (PSI) in source space for a visual stimulus\n=====================================================================\n\nThis example demonstrates how the Phase Slope Index (PSI) [1]_ can be computed\nin source space based on single trial dSPM source estimates. In addition,\nthe example shows advanced usage of the connectivity estimation routines\nby first extracting a label time course for each epoch and then combining\nthe label time course with the single trial source estimates to compute the\nconnectivity.\n\nThe result clearly shows how the activity in the visual label precedes more\nwidespread activity (a postivive PSI means the label time course is leading).\n\nReferences\n----------\n.. [1] Nolte et al. \"Robustly Estimating the Flow Direction of Information in\n Complex Physical Systems\", Physical Review Letters, vol. 100, no. 23,\n pp. 1-4, Jun. 2008.\n\n", "_____no_output_____" ] ], [ [ "# Author: Martin Luessi <[email protected]>\n#\n# License: BSD (3-clause)\n\n\nimport numpy as np\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.minimum_norm import read_inverse_operator, apply_inverse_epochs\nfrom mne.connectivity import seed_target_indices, phase_slope_index\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nsubjects_dir = data_path + '/subjects'\nfname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'\nfname_raw = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nfname_event = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nfname_label = data_path + '/MEG/sample/labels/Vis-lh.label'\n\nevent_id, tmin, tmax = 4, -0.2, 0.3\nmethod = \"dSPM\" # use dSPM method (could also be MNE or sLORETA)\n\n# Load data\ninverse_operator = read_inverse_operator(fname_inv)\nraw = mne.io.read_raw_fif(fname_raw)\nevents = mne.read_events(fname_event)\n\n# pick MEG channels\npicks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,\n exclude='bads')\n\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=dict(mag=4e-12, grad=4000e-13,\n eog=150e-6))\n\n# Compute inverse solution and for each epoch. Note that since we are passing\n# the output to both extract_label_time_course and the phase_slope_index\n# functions, we have to use \"return_generator=False\", since it is only possible\n# to iterate over generators once.\nsnr = 1.0 # use lower SNR for single epochs\nlambda2 = 1.0 / snr ** 2\nstcs = apply_inverse_epochs(epochs, inverse_operator, lambda2, method,\n pick_ori=\"normal\", return_generator=True)\n\n# Now, we generate seed time series by averaging the activity in the left\n# visual corex\nlabel = mne.read_label(fname_label)\nsrc = inverse_operator['src'] # the source space used\nseed_ts = mne.extract_label_time_course(stcs, label, src, mode='mean_flip',\n verbose='error')\n\n# Combine the seed time course with the source estimates. 
There will be a total\n# of 7500 signals:\n# index 0: time course extracted from label\n# index 1..7499: dSPM source space time courses\nstcs = apply_inverse_epochs(epochs, inverse_operator, lambda2, method,\n pick_ori=\"normal\", return_generator=True)\ncomb_ts = list(zip(seed_ts, stcs))\n\n# Construct indices to estimate connectivity between the label time course\n# and all source space time courses\nvertices = [src[i]['vertno'] for i in range(2)]\nn_signals_tot = 1 + len(vertices[0]) + len(vertices[1])\n\nindices = seed_target_indices([0], np.arange(1, n_signals_tot))\n\n# Compute the PSI in the frequency range 8Hz..30Hz. We exclude the baseline\n# period from the connectivity estimation\nfmin = 8.\nfmax = 30.\ntmin_con = 0.\nsfreq = raw.info['sfreq'] # the sampling frequency\n\npsi, freqs, times, n_epochs, _ = phase_slope_index(\n comb_ts, mode='multitaper', indices=indices, sfreq=sfreq,\n fmin=fmin, fmax=fmax, tmin=tmin_con)\n\n# Generate a SourceEstimate with the PSI. This is simple since we used a single\n# seed (inspect the indices variable to see how the PSI scores are arranged in\n# the output)\npsi_stc = mne.SourceEstimate(psi, vertices=vertices, tmin=0, tstep=1,\n subject='sample')\n\n# Now we can visualize the PSI using the plot method. We use a custom colormap\n# to show signed values\nv_max = np.max(np.abs(psi))\nbrain = psi_stc.plot(surface='inflated', hemi='lh',\n time_label='Phase Slope Index (PSI)',\n subjects_dir=subjects_dir,\n clim=dict(kind='percent', pos_lims=(95, 97.5, 100)))\nbrain.show_view('medial')\nbrain.add_label(fname_label, color='green', alpha=0.7)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ] ]
d035f9e42ae6ebcbba85a0e83264cd07bfd67d57
5,693
ipynb
Jupyter Notebook
LinkedIn/LinkedIn_Get_contact_from_profile.ipynb
vivard/awesome-notebooks
899558bcc2165bb2155f5ab69ac922c6458e1799
[ "BSD-3-Clause" ]
null
null
null
LinkedIn/LinkedIn_Get_contact_from_profile.ipynb
vivard/awesome-notebooks
899558bcc2165bb2155f5ab69ac922c6458e1799
[ "BSD-3-Clause" ]
null
null
null
LinkedIn/LinkedIn_Get_contact_from_profile.ipynb
vivard/awesome-notebooks
899558bcc2165bb2155f5ab69ac922c6458e1799
[ "BSD-3-Clause" ]
null
null
null
22.065891
296
0.542421
[ [ [ "<img width=\"10%\" alt=\"Naas\" src=\"https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160\"/>", "_____no_output_____" ], [ "# LinkedIn - Get contact from profile\n<a href=\"https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/LinkedIn/LinkedIn_Get_contact_from_profile.ipynb\" target=\"_parent\"><img src=\"https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg\"/></a>", "_____no_output_____" ], [ "**Tags:** #linkedin #profile #contact #naas_drivers", "_____no_output_____" ], [ "## Input", "_____no_output_____" ], [ "### Import library", "_____no_output_____" ] ], [ [ "from naas_drivers import linkedin", "_____no_output_____" ] ], [ [ "### Get your cookies\n<a href='https://www.notion.so/LinkedIn-driver-Get-your-cookies-d20a8e7e508e42af8a5b52e33f3dba75'>How to get your cookies ?</a>", "_____no_output_____" ] ], [ [ "LI_AT = 'YOUR_COOKIE_LI_AT' # EXAMPLE AQFAzQN_PLPR4wAAAXc-FCKmgiMit5FLdY1af3-2\nJSESSIONID = 'YOUR_COOKIE_JSESSIONID' # EXAMPLE ajax:8379907400220387585", "_____no_output_____" ] ], [ [ "### Enter profile URL", "_____no_output_____" ] ], [ [ "PROFILE_URL = \"PROFILE_URL\"", "_____no_output_____" ] ], [ [ "## Model", "_____no_output_____" ], [ "Get the information return in a dataframe.<br><br>\n**Available columns :**\n- PROFILE_URN : LinkedIn unique profile id\n- PROFILE_ID : LinkedIn public profile id\n- EMAIL\n- CONNECTED_AT\n- BIRTHDATE\n- TWITER\n- ADDRESS\n- WEBSITES\n- INTERESTS", "_____no_output_____" ] ], [ [ "df = linkedin.connect(LI_AT, JSESSIONID).profile.get_contact(PROFILE_URL)", "_____no_output_____" ] ], [ [ "## Output", "_____no_output_____" ], [ "### Display result", "_____no_output_____" ] ], [ [ "df", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
d035fa827f12d447c5608b7346bc7b74c8b5acde
46,184
ipynb
Jupyter Notebook
deeplearning1/nbs/sgd-intro.ipynb
shabeer/fastai_courses
41e975a5e4f94dbf08cb7c8efcbe986432fa711d
[ "Apache-2.0" ]
null
null
null
deeplearning1/nbs/sgd-intro.ipynb
shabeer/fastai_courses
41e975a5e4f94dbf08cb7c8efcbe986432fa711d
[ "Apache-2.0" ]
null
null
null
deeplearning1/nbs/sgd-intro.ipynb
shabeer/fastai_courses
41e975a5e4f94dbf08cb7c8efcbe986432fa711d
[ "Apache-2.0" ]
null
null
null
67.718475
5,926
0.802226
[ [ [ "# Table of Contents\n <p>", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport math,sys,os,numpy as np\nfrom numpy.random import random\nfrom matplotlib import pyplot as plt, rcParams, animation, rc\nfrom __future__ import print_function, division\nfrom ipywidgets import interact, interactive, fixed\nfrom ipywidgets.widgets import *\nrc('animation', html='html5')\nrcParams['figure.figsize'] = 3, 3\n%precision 4\nnp.set_printoptions(precision=4, linewidth=100)", "_____no_output_____" ], [ "def lin(a,b,x): return a*x+b", "_____no_output_____" ], [ "a=3.\nb=8.", "_____no_output_____" ], [ "n=30\nx = random(n)\ny = lin(a,b,x)", "_____no_output_____" ], [ "x", "_____no_output_____" ], [ "y", "_____no_output_____" ], [ "plt.scatter(x,y)", "_____no_output_____" ], [ "def sse(y,y_pred): return ((y-y_pred)**2).sum()\ndef loss(y,a,b,x): return sse(y, lin(a,b,x))\ndef avg_loss(y,a,b,x): return np.sqrt(loss(y,a,b,x)/n)", "_____no_output_____" ], [ "a_guess=-1.\nb_guess=1.\navg_loss(y, a_guess, b_guess, x)", "_____no_output_____" ], [ "lr=0.01\n# d[(y-(a*x+b))**2,b] = 2 (b + a x - y) = 2 (y_pred - y)\n# d[(y-(a*x+b))**2,a] = 2 x (b + a x - y) = x * dy/db", "_____no_output_____" ], [ "def upd():\n global a_guess, b_guess\n y_pred = lin(a_guess, b_guess, x)\n dydb = 2 * (y_pred - y)\n dyda = x*dydb\n a_guess -= lr*dyda.mean()\n b_guess -= lr*dydb.mean()", "_____no_output_____" ], [ "?animation.FuncAnimation", "_____no_output_____" ], [ "fig = plt.figure(dpi=100, figsize=(5, 4))\nplt.scatter(x,y)\nline, = plt.plot(x,lin(a_guess,b_guess,x))\nplt.close()\n\ndef animate(i):\n line.set_ydata(lin(a_guess,b_guess,x))\n for i in range(10): upd()\n return line,\n\nani = animation.FuncAnimation(fig, animate, np.arange(0, 40), interval=100)\nani", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d036025850af79912bf50cc999e43cdd98b74688
16,871
ipynb
Jupyter Notebook
notebooks/metrics_ex_02.ipynb
leonsor/scikit-learn-mooc
27c5caf7b0d2f0cc734baee59ad65efc263704cd
[ "CC-BY-4.0" ]
null
null
null
notebooks/metrics_ex_02.ipynb
leonsor/scikit-learn-mooc
27c5caf7b0d2f0cc734baee59ad65efc263704cd
[ "CC-BY-4.0" ]
null
null
null
notebooks/metrics_ex_02.ipynb
leonsor/scikit-learn-mooc
27c5caf7b0d2f0cc734baee59ad65efc263704cd
[ "CC-BY-4.0" ]
null
null
null
28.402357
151
0.433051
[ [ [ "# 📝 Exercise M7.03\n\nAs with the classification metrics exercise, we will evaluate the regression\nmetrics within a cross-validation framework to get familiar with the syntax.\n\nWe will use the Ames house prices dataset.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\n\names_housing = pd.read_csv(\"../datasets/house_prices.csv\")\ndata = ames_housing.drop(columns=\"SalePrice\")\ntarget = ames_housing[\"SalePrice\"]\ndata = data.select_dtypes(np.number)\ntarget /= 1000", "_____no_output_____" ] ], [ [ "<div class=\"admonition note alert alert-info\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Note</p>\n<p class=\"last\">If you want a deeper overview regarding this dataset, you can refer to the\nAppendix - Datasets description section at the end of this MOOC.</p>\n</div>", "_____no_output_____" ], [ "The first step will be to create a linear regression model.", "_____no_output_____" ] ], [ [ "# Write your code here.\nfrom sklearn.linear_model import LinearRegression\nlinreg = LinearRegression()", "_____no_output_____" ] ], [ [ "Then, use the `cross_val_score` to estimate the generalization performance of\nthe model. Use a `KFold` cross-validation with 10 folds. Make the use of the\n$R^2$ score explicit by assigning the parameter `scoring` (even though it is\nthe default score).", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import cross_val_score\nscores = cross_val_score(linreg, data, target, cv=10, scoring='r2')\nprint(f\"R2 score: {scores.mean():.3f} +/- {scores.std():.3f}\")", "R2 score: 0.794 +/- 0.103\n" ], [ "# Write your code here.\nfrom sklearn.model_selection import cross_validate\nresult_linreg_r2 = cross_validate(linreg, data, target, cv=10, scoring=\"r2\")\nresult_reg_r2_df = pd.DataFrame(result_linreg_r2)\nresult_reg_r2_df", "_____no_output_____" ], [ "print(f\"R2 result for linreg: {result_reg_r2_df['test_score'].mean():.3f} +/- {result_reg_r2_df['test_score'].std():.3f}\")", "R2 result for linreg: 0.794 +/- 0.109\n" ] ], [ [ "Then, instead of using the $R^2$ score, use the mean absolute error. You need\nto refer to the documentation for the `scoring` parameter.", "_____no_output_____" ] ], [ [ "# Write your code here.\nresult_linreg_mae = cross_validate(linreg, data, target, cv=10, scoring=\"neg_mean_absolute_error\")\nresult_reg_mae_df = pd.DataFrame(result_linreg_mae)\nresult_reg_mae_df", "_____no_output_____" ], [ "scores = cross_val_score(linreg, data, target, cv=10, scoring='neg_mean_absolute_error')\nscores = -scores\nprint(f\"Mean Absolute Error: {scores.mean():.3f} +/- {scores.std():.3f}\")", "Mean Absolute Error: 21.892 +/- 2.225\n" ], [ "print(f\"Mean Absolute Error result for linreg: {-result_reg_mae_df['test_score'].mean():.3f} +/- {-result_reg_mae_df['test_score'].std():.3f}\")", "Mean Absolute Error result for linreg: 21.892 +/- -2.346\n" ] ], [ [ "Finally, use the `cross_validate` function and compute multiple scores/errors\nat once by passing a list of scorers to the `scoring` parameter. You can\ncompute the $R^2$ score and the mean absolute error for instance.", "_____no_output_____" ] ], [ [ "# Write your code here.\nscoring = [\"r2\", \"neg_mean_absolute_error\"]\nresult_linreg_duo = cross_validate(linreg, data, target, cv=10, scoring=scoring)\n\nscores = {\"R2\": result_linreg_duo[\"test_r2\"],\n \"MAE\": -result_linreg_duo[\"test_neg_mean_absolute_error\"]}\nscores_df = pd.DataFrame(scores)\nscores_df", "_____no_output_____" ], [ "result_linreg_duo", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
d0360a1dad017fd314e4ceafa8e2edde9446803b
6,891
ipynb
Jupyter Notebook
integration-tests/examples/test_templates/jupyter/template_preparation_pyspark.ipynb
AdamsDisturber/incubator-dlab
d052709450e7916860c7dd191708d5524cf44c1e
[ "Apache-2.0" ]
null
null
null
integration-tests/examples/test_templates/jupyter/template_preparation_pyspark.ipynb
AdamsDisturber/incubator-dlab
d052709450e7916860c7dd191708d5524cf44c1e
[ "Apache-2.0" ]
null
null
null
integration-tests/examples/test_templates/jupyter/template_preparation_pyspark.ipynb
AdamsDisturber/incubator-dlab
d052709450e7916860c7dd191708d5524cf44c1e
[ "Apache-2.0" ]
null
null
null
34.628141
326
0.482078
[ [ [ "# Flights data preparation", "_____no_output_____" ] ], [ [ "from pyspark.sql import SQLContext\nfrom pyspark.sql import DataFrame\nfrom pyspark.sql import Row\nfrom pyspark.sql.types import *\nimport pandas as pd\nimport StringIO\nimport matplotlib.pyplot as plt\nhc = sc._jsc.hadoopConfiguration()\nhc.set(\"hive.execution.engine\", \"mr\")", "_____no_output_____" ] ], [ [ "## Function to parse CSV", "_____no_output_____" ] ], [ [ "import csv\n\ndef parseCsv(csvStr):\n f = StringIO.StringIO(csvStr)\n reader = csv.reader(f, delimiter=',')\n row = reader.next()\n return row\n\nscsv = '\"02Q\",\"Titan Airways\"'\nrow = parseCsv(scsv)\nprint row[0]\nprint row[1]\n\nworking_storage = 'WORKING_STORAGE'\noutput_directory = 'jupyter/py2'\nprotocol_name = 'PROTOCOL_NAME://'", "_____no_output_____" ] ], [ [ "## Parse and convert Carrier data to parquet", "_____no_output_____" ] ], [ [ "carriersHeader = 'Code,Description'\ncarriersText = sc.textFile(protocol_name + working_storage + \"/jupyter_dataset/carriers.csv\").filter(lambda x: x != carriersHeader)\ncarriers = carriersText.map(lambda s: parseCsv(s)) \\\n .map(lambda s: Row(code=s[0], description=s[1])).cache().toDF()\ncarriers.write.mode(\"overwrite\").parquet(protocol_name + working_storage + \"/\" + output_directory + \"/carriers\") \nsqlContext.registerDataFrameAsTable(carriers, \"carriers\")\ncarriers.limit(20).toPandas()", "_____no_output_____" ] ], [ [ "## Parse and convert to parquet Airport data", "_____no_output_____" ] ], [ [ "airportsHeader= '\"iata\",\"airport\",\"city\",\"state\",\"country\",\"lat\",\"long\"'\nairports = sc.textFile(protocol_name + working_storage + \"/jupyter_dataset/airports.csv\") \\\n .filter(lambda x: x != airportsHeader) \\\n .map(lambda s: parseCsv(s)) \\\n .map(lambda p: Row(iata=p[0], \\\n airport=p[1], \\\n city=p[2], \\\n state=p[3], \\\n country=p[4], \\\n lat=float(p[5]), \\\n longt=float(p[6])) \\\n ).cache().toDF()\nairports.write.mode(\"overwrite\").parquet(protocol_name + working_storage + \"/\" + output_directory + \"/airports\") \nsqlContext.registerDataFrameAsTable(airports, \"airports\")\nairports.limit(20).toPandas()", "_____no_output_____" ] ], [ [ "## Parse and convert Flights data to parquet", "_____no_output_____" ] ], [ [ "flightsHeader = 'Year,Month,DayofMonth,DayOfWeek,DepTime,CRSDepTime,ArrTime,CRSArrTime,UniqueCarrier,FlightNum,TailNum,ActualElapsedTime,CRSElapsedTime,AirTime,ArrDelay,DepDelay,Origin,Dest,Distance,TaxiIn,TaxiOut,Cancelled,CancellationCode,Diverted,CarrierDelay,WeatherDelay,NASDelay,SecurityDelay,LateAircraftDelay'\nflights = sc.textFile(protocol_name + working_storage + \"/jupyter_dataset/2008.csv.bz2\") \\\n .filter(lambda x: x!= flightsHeader) \\\n .map(lambda s: parseCsv(s)) \\\n .map(lambda p: Row(Year=int(p[0]), \\\n Month=int(p[1]), \\\n DayofMonth=int(p[2]), \\\n DayOfWeek=int(p[3]), \\\n DepTime=p[4], \\\n CRSDepTime=p[5], \\\n ArrTime=p[6], \\\n CRSArrTime=p[7], \\\n UniqueCarrier=p[8], \\\n FlightNum=p[9], \\\n TailNum=p[10], \\\n ActualElapsedTime=p[11], \\\n CRSElapsedTime=p[12], \\\n AirTime=p[13], \\\n ArrDelay=int(p[14].replace(\"NA\", \"0\")), \\\n DepDelay=int(p[15].replace(\"NA\", \"0\")), \\\n Origin=p[16], \\\n Dest=p[17], \\\n Distance=long(p[18]), \\\n TaxiIn=p[19], \\\n TaxiOut=p[20], \\\n Cancelled=p[21], \\\n CancellationCode=p[22], \\\n Diverted=p[23], \\\n CarrierDelay=int(p[24].replace(\"NA\", \"0\")), \\\n CarrierDelayStr=p[24], \\\n WeatherDelay=int(p[25].replace(\"NA\", \"0\")), \\\n WeatherDelayStr=p[25], \\\n 
NASDelay=int(p[26].replace(\"NA\", \"0\")), \\\n SecurityDelay=int(p[27].replace(\"NA\", \"0\")), \\\n LateAircraftDelay=int(p[28].replace(\"NA\", \"0\")))) \\\n .toDF()\n\nflights.write.mode(\"ignore\").parquet(protocol_name + working_storage + \"/\" + output_directory + \"/flights\")\nsqlContext.registerDataFrameAsTable(flights, \"flights\")\nflights.limit(10).toPandas()[[\"ArrDelay\",\"CarrierDelay\",\"CarrierDelayStr\",\"WeatherDelay\",\"WeatherDelayStr\",\"Distance\"]]", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d0361886f4b6e3ef2bf5c8e7ec93d2d4ca044168
39,160
ipynb
Jupyter Notebook
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
7a162a8639a4bcd51ea5e39e3878339d059d317a
[ "MIT" ]
1
2022-03-31T00:08:01.000Z
2022-03-31T00:08:01.000Z
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
7a162a8639a4bcd51ea5e39e3878339d059d317a
[ "MIT" ]
null
null
null
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
7a162a8639a4bcd51ea5e39e3878339d059d317a
[ "MIT" ]
1
2022-03-23T03:03:04.000Z
2022-03-23T03:03:04.000Z
35.503173
298
0.473544
[ [ [ "<a href=\"https://colab.research.google.com/github/harnalashok/hadoop/blob/main/hadoop_spark_install_on_Colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "# Last amended: 30th March, 2021\n# Myfolder: github/hadoop\n# Objective:\n# i) Install hadoop on colab\n# (current version is 3.2.2)\n# ii) Experiments with hadoop\n# iii) Install spark on colab\n# iv) Access hadoop file from spark\n# v) Install koalas on colab\n#\n#\n# Java 8 install: https://stackoverflow.com/a/58191107\n# Hadoop install: https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html\n# Spark install: https://stackoverflow.com/a/64183749\n# https://www.analyticsvidhya.com/blog/2020/11/a-must-read-guide-on-how-to-work-with-pyspark-on-google-colab-for-data-scientists/", "_____no_output_____" ] ], [ [ "## Install hadoop\nIf it takes too long, it means, it is awaiting input from you regarding overwriting ssh keys", "_____no_output_____" ], [ "### Define functions\nNo downloads. Just function definitions", "_____no_output_____" ] ], [ [ "# 1.0 How to set environment variable\nimport os \nimport time ", "_____no_output_____" ] ], [ [ "#### ssh_install()", "_____no_output_____" ] ], [ [ "# 2.0 Function to install ssh client and sshd (Server)\ndef ssh_install():\n print(\"\\n--1. Download and install ssh server----\\n\")\n ! sudo apt-get remove openssh-client openssh-server\n ! sudo apt install openssh-client openssh-server\n \n print(\"\\n--2. Restart ssh server----\\n\")\n ! service ssh restart", "_____no_output_____" ] ], [ [ "#### Java install", "_____no_output_____" ] ], [ [ "# 3.0 Function to download and install java 8\ndef install_java():\n ! rm -rf /usr/java\n\n print(\"\\n--Download and install Java 8----\\n\")\n !apt-get install -y openjdk-8-jdk-headless -qq > /dev/null # install openjdk\n os.environ[\"JAVA_HOME\"] = \"/usr/lib/jvm/java-8-openjdk-amd64\" # set environment variable\n\n !update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java\n !update-alternatives --set javac /usr/lib/jvm/java-8-openjdk-amd64/bin/javac\n \n !mkdir -p /usr/java\n ! ln -s \"/usr/lib/jvm/java-8-openjdk-amd64\" \"/usr/java\"\n ! mv \"/usr/java/java-8-openjdk-amd64\" \"/usr/java/latest\"\n \n !java -version #check java version\n !javac -version", "_____no_output_____" ] ], [ [ "#### hadoop install", "_____no_output_____" ] ], [ [ "# 4.0 Function to download and install hadoop\ndef hadoop_install():\n print(\"\\n--5. Download hadoop tar.gz----\\n\")\n ! wget -c https://mirrors.estointernet.in/apache/hadoop/common/hadoop-3.2.2/hadoop-3.2.2.tar.gz\n\n print(\"\\n--6. Transfer downloaded content and unzip tar.gz----\\n\")\n ! mv /content/hadoop* /opt/\n ! tar -xzf /opt/hadoop-3.2.2.tar.gz --directory /opt/\n\n print(\"\\n--7. Create hadoop folder----\\n\")\n ! rm -r /app/hadoop/tmp\n ! mkdir -p /app/hadoop/tmp\n \n print(\"\\n--8. Check folder for files----\\n\")\n ! ls -la /opt", "_____no_output_____" ] ], [ [ "#### hadoop config", "_____no_output_____" ] ], [ [ "# 5.0 Function for setting hadoop configuration\ndef hadoop_config():\n print(\"\\n--Begin Configuring hadoop---\\n\")\n print(\"\\n=============================\\n\")\n print(\"\\n--9. core-site.xml----\\n\")\n ! cat /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n\n print(\"\\n--10. Amend core-site.xml----\\n\")\n ! echo '<?xml version=\"1.0\" encoding=\"UTF-8\"?>' > /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n ! 
echo '<?xml-stylesheet type=\"text/xsl\" href=\"configuration.xsl\"?>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n ! echo ' <configuration>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n ! echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n ! echo ' <name>fs.defaultFS</name>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n ! echo ' <value>hdfs://localhost:9000</value>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n ! echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n ! echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n ! echo ' <name>hadoop.tmp.dir</name>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n ! echo ' <value>/app/hadoop/tmp</value>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n ! echo ' <description>A base for other temporary directories.</description>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n ! echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n # Added following regarding safemode from here:\n # https://stackoverflow.com/a/33800253\n ! echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n ! echo ' <name>dfs.safemode.threshold.pct</name>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n ! echo ' <value>0</value>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n ! echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n ! echo ' </configuration>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n\n print(\"\\n--11. Amended core-site.xml----\\n\")\n ! cat /opt/hadoop-3.2.2/etc/hadoop/core-site.xml\n\n print(\"\\n--12. yarn-site.xml----\\n\")\n !cat /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml\n\n !echo '<?xml version=\"1.0\" encoding=\"UTF-8\"?>' > /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml\n !echo '<configuration>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml\n !echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml\n !echo ' <name>yarn.nodemanager.aux-services</name>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml\n !echo ' <value>mapreduce_shuffle</value>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml\n !echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml\n !echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml\n !echo ' <name>yarn.nodemanager.vmem-check-enabled</name>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml\n !echo ' <value>false</value>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml\n !echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml\n !echo ' </configuration>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml\n \n print(\"\\n--13. Amended yarn-site.xml----\\n\")\n !cat /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml\n\n print(\"\\n--14. mapred-site.xml----\\n\")\n !cat /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n\n print(\"\\n--15. 
Amend mapred-site.xml----\\n\")\n !echo '<?xml version=\"1.0\"?>' > /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo '<?xml-stylesheet type=\"text/xsl\" href=\"configuration.xsl\"?>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo '<configuration>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo ' <name>mapreduce.framework.name</name>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo ' <value>yarn</value>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo ' <name>yarn.app.mapreduce.am.env</name>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo ' <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo ' <name>mapreduce.map.env</name>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo ' <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo ' <name>mapreduce.reduce.env</name>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo ' <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n !echo '</configuration>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n\n print(\"\\n--16, Amended mapred-site.xml----\\n\")\n !cat /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml\n\n print(\"\\n---17. hdfs-site.xml----\\n\")\n !cat /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml\n \n print(\"\\n---18. Amend hdfs-site.xml----\\n\")\n !echo '<?xml version=\"1.0\" encoding=\"UTF-8\"?> ' > /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml\n !echo '<?xml-stylesheet type=\"text/xsl\" href=\"configuration.xsl\"?>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml\n !echo '<configuration>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml\n !echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml\n !echo ' <name>dfs.replication</name>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml\n !echo ' <value>1</value>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml\n !echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml\n !echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml\n !echo ' <name>dfs.block.size</name>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml\n !echo ' <value>16777216</value>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml\n !echo ' <description>Block size</description>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml\n !echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml\n !echo '</configuration>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml\n\n print(\"\\n---19. Amended hdfs-site.xml----\\n\")\n !cat /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml\n\n print(\"\\n---20. hadoop-env.sh----\\n\")\n # https://stackoverflow.com/a/53140448\n !cat /opt/hadoop-3.2.2/etc/hadoop/hadoop-env.sh\n ! echo 'export JAVA_HOME=\"/usr/lib/jvm/java-8-openjdk-amd64\"' >> /opt/hadoop-3.2.2/etc/hadoop/hadoop-env.sh\n ! echo 'export HDFS_NAMENODE_USER=\"root\"' >> /opt/hadoop-3.2.2/etc/hadoop/hadoop-env.sh\n ! 
echo 'export HDFS_DATANODE_USER=\"root\"' >> /opt/hadoop-3.2.2/etc/hadoop/hadoop-env.sh\n ! echo 'export HDFS_SECONDARYNAMENODE_USER=\"root\"' >> /opt/hadoop-3.2.2/etc/hadoop/hadoop-env.sh\n ! echo 'export YARN_RESOURCEMANAGER_USER=\"root\"' >> /opt/hadoop-3.2.2/etc/hadoop/hadoop-env.sh\n ! echo 'export YARN_NODEMANAGER_USER=\"root\"' >> /opt/hadoop-3.2.2/etc/hadoop/hadoop-env.sh\n \n print(\"\\n---21. Amended hadoop-env.sh----\\n\")\n !cat /opt/hadoop-3.2.2/etc/hadoop/hadoop-env.sh\n", "_____no_output_____" ] ], [ [ "#### ssh keys", "_____no_output_____" ] ], [ [ "# 6.0 Function tp setup ssh passphrase\ndef set_keys():\n print(\"\\n---22. Generate SSH keys----\\n\")\n ! cd ~ ; pwd \n ! cd ~ ; ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa\n ! cd ~ ; cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys\n ! cd ~ ; chmod 0600 ~/.ssh/authorized_keys\n", "_____no_output_____" ] ], [ [ "#### Set environment", "_____no_output_____" ] ], [ [ "# 7.0 Function to set up environmental variables\ndef set_env():\n print(\"\\n---23. Set Environment variables----\\n\")\n # 'export' command does not work in colab\n # https://stackoverflow.com/a/57240319\n os.environ[\"JAVA_HOME\"] = \"/usr/lib/jvm/java-8-openjdk-amd64\" #set environment variable\n os.environ[\"JRE_HOME\"] = \"/usr/lib/jvm/java-8-openjdk-amd64/jre\" \n os.environ[\"HADOOP_HOME\"] = \"/opt/hadoop-3.2.2\"\n os.environ[\"HADOOP_CONF_DIR\"] = \"/opt/hadoop-3.2.2/etc/hadoop\" \n os.environ[\"LD_LIBRARY_PATH\"] += \":/opt/hadoop-3.2.2/lib/native\"\n os.environ[\"PATH\"] += \":/opt/hadoop-3.2.2/bin:/opt/hadoop-3.2.2/sbin\"", "_____no_output_____" ] ], [ [ "#### Install all function", "_____no_output_____" ] ], [ [ "# 8.0 Function to call all functions\ndef install_hadoop():\n print(\"\\n--Install java----\\n\")\n ssh_install()\n install_java() \n hadoop_install()\n hadoop_config()\n set_keys()\n set_env()\n", "_____no_output_____" ] ], [ [ "### Begin install\nStart downloading, install and configure. Takes around 2 minutes", "_____no_output_____" ] ], [ [ "# 9.0 Start installation\nstart = time.time()\ninstall_hadoop()\nend = time.time()\nprint(\"\\n---Time taken----\\n\")\nprint((end- start)/60)", "_____no_output_____" ] ], [ [ "### Format hadoop", "_____no_output_____" ] ], [ [ "# 10.0 Format hadoop\nprint(\"\\n---24. Format namenode----\\n\")\n!hdfs namenode -format", "_____no_output_____" ] ], [ [ "## Start and test hadoop\nIf namenode is in safemode, use the command: \n`!hdfs dfsadmin -safemode leave`", "_____no_output_____" ], [ "#### Start hadoop\nIf start fails with 'Connection refused', run `ssh_install()` once again", "_____no_output_____" ] ], [ [ "# 11.0 Start namenode\n# If this fails, run\n# ssh_install() below\n# and start hadoop again:\n\nprint(\"\\n---25. Start namenode----\\n\")\n! start-dfs.sh", "\n---25. Start namenode----\n\nStarting namenodes on [localhost]\nlocalhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.\nStarting datanodes\nStarting secondary namenodes [b44e9ceadc5e]\nb44e9ceadc5e: Warning: Permanently added 'b44e9ceadc5e,172.28.0.2' (ECDSA) to the list of known hosts.\n" ], [ "#ssh_install()", "_____no_output_____" ] ], [ [ "#### Start yarn", "_____no_output_____" ] ], [ [ "# 11.1 Start yarn\n! start-yarn.sh", "Starting resourcemanager\nStarting nodemanagers\n" ] ], [ [ "If `start-dfs.sh` fails, issue the following three commands, one after another:<br> \n`! sudo apt-get remove openssh-client openssh-server`<br>\n`! sudo apt-get install openssh-client openssh-server`<br>\n`! 
service ssh restart`<br>\n\nAnd then try to start hadoop again, as: `start-dfs.sh`", "_____no_output_____" ], [ "#### Test hadoop\nIF in safe mode, leave safe mode as:<br>\n`!hdfs dfsadmin -safemode leave`", "_____no_output_____" ] ], [ [ "# 11.1\nprint(\"\\n---26. Make folders in hadoop----\\n\")\n! hdfs dfs -mkdir /user\n! hdfs dfs -mkdir /user/ashok", "\n---26. Make folders in hadoop----\n\n" ], [ "# 11.2 Run hadoop commands\n! hdfs dfs -ls /\n! hdfs dfs -ls /user", "Found 1 items\ndrwxr-xr-x - root supergroup 0 2021-03-30 11:28 /user\nFound 1 items\ndrwxr-xr-x - root supergroup 0 2021-03-30 11:28 /user/ashok\n" ], [ "# 11.3 Stopping hadoop\n# Gives some errors\n# But hadoop stops\n#!stop-dfs.sh", "_____no_output_____" ] ], [ [ "Run the `ssh_install()` again if hadoop fails to start with `start-dfs.sh` and then try to start hadoop again.", "_____no_output_____" ], [ "## Install spark", "_____no_output_____" ], [ "### Define functions", "_____no_output_____" ], [ "`findspark`: PySpark isn't on `sys.path` by default, but that doesn't mean it can't be used as a regular library. You can address this by either symlinking pyspark into your site-packages, or adding `pyspark` to `sys.path` at runtime. `findspark` does the latter.", "_____no_output_____" ] ], [ [ "# 1.0 Function to download and unzip spark\ndef spark_koalas_install():\n print(\"\\n--1.1 Install findspark----\\n\")\n !pip install -q findspark\n\n print(\"\\n--1.2 Install databricks Koalas----\\n\")\n !pip install koalas\n\n print(\"\\n--1.3 Download Apache tar.gz----\\n\")\n ! wget -c https://mirrors.estointernet.in/apache/spark/spark-3.1.1/spark-3.1.1-bin-hadoop3.2.tgz\n\n print(\"\\n--1.4 Transfer downloaded content and unzip tar.gz----\\n\")\n ! mv /content/spark* /opt/\n ! tar -xzf /opt/spark-3.1.1-bin-hadoop3.2.tgz --directory /opt/\n\n print(\"\\n--1.5 Check folder for files----\\n\")\n ! ls -la /opt\n", "_____no_output_____" ], [ "# 1.1 Function to set environment\ndef set_spark_env():\n print(\"\\n---2. Set Environment variables----\\n\")\n os.environ[\"JAVA_HOME\"] = \"/usr/lib/jvm/java-8-openjdk-amd64\" \n os.environ[\"JRE_HOME\"] = \"/usr/lib/jvm/java-8-openjdk-amd64/jre\" \n os.environ[\"SPARK_HOME\"] = \"/opt/spark-3.1.1-bin-hadoop3.2\" \n os.environ[\"LD_LIBRARY_PATH\"] += \":/opt/spark-3.1.1-bin-hadoop3.2/lib/native\"\n os.environ[\"PATH\"] += \":/opt/spark-3.1.1-bin-hadoop3.2/bin:/opt/spark-3.1.1-bin-hadoop3.2/sbin\"\n print(\"\\n---2.1. Check Environment variables----\\n\")\n # Check\n ! echo $PATH\n ! echo $LD_LIBRARY_PATH", "_____no_output_____" ], [ "# 1.2 Function to configure spark \ndef spark_conf():\n print(\"\\n---3. 
Configure spark to access hadoop----\\n\")\n !mv /opt/spark-3.1.1-bin-hadoop3.2/conf/spark-env.sh.template /opt/spark-3.1.1-bin-hadoop3.2/conf/spark-env.sh\n !echo \"HADOOP_CONF_DIR=/opt/hadoop-3.2.2/etc/hadoop/\" >> /opt/spark-3.1.1-bin-hadoop3.2/conf/spark-env.sh\n print(\"\\n---3.1 Check ----\\n\")\n #!cat /opt/spark-3.1.1-bin-hadoop3.2/conf/spark-env.sh", "_____no_output_____" ] ], [ [ "### Install spark", "_____no_output_____" ] ], [ [ "# 2.0 Call all the three functions\ndef install_spark():\n spark_koalas_install()\n set_spark_env()\n spark_conf()\n", "_____no_output_____" ], [ "# 2.1 \ninstall_spark()", "\n--1.1 Install findspark----\n\n\n--1.2 Install databricks Koalas----\n\nCollecting koalas\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/40/de/87c016a3e5055251ed117c86eb3b0de2381518c7acae54e115711ff30ceb/koalas-1.7.0-py3-none-any.whl (1.4MB)\n\u001b[K |████████████████████████████████| 1.4MB 5.6MB/s \n\u001b[?25hRequirement already satisfied: numpy<1.20.0,>=1.14 in /usr/local/lib/python3.7/dist-packages (from koalas) (1.19.5)\nRequirement already satisfied: pyarrow>=0.10 in /usr/local/lib/python3.7/dist-packages (from koalas) (3.0.0)\nRequirement already satisfied: pandas<1.2.0,>=0.23.2 in /usr/local/lib/python3.7/dist-packages (from koalas) (1.1.5)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas<1.2.0,>=0.23.2->koalas) (2018.9)\nRequirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas<1.2.0,>=0.23.2->koalas) (2.8.1)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas<1.2.0,>=0.23.2->koalas) (1.15.0)\nInstalling collected packages: koalas\nSuccessfully installed koalas-1.7.0\n\n--1.3 Download Apache tar.gz----\n\n--2021-03-30 11:29:04-- https://mirrors.estointernet.in/apache/spark/spark-3.1.1/spark-3.1.1-bin-hadoop3.2.tgz\nResolving mirrors.estointernet.in (mirrors.estointernet.in)... 43.255.166.254, 2403:8940:3:1::f\nConnecting to mirrors.estointernet.in (mirrors.estointernet.in)|43.255.166.254|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 228721937 (218M) [application/octet-stream]\nSaving to: ‘spark-3.1.1-bin-hadoop3.2.tgz’\n\nspark-3.1.1-bin-had 100%[===================>] 218.13M 11.9MB/s in 22s \n\n2021-03-30 11:29:27 (9.91 MB/s) - ‘spark-3.1.1-bin-hadoop3.2.tgz’ saved [228721937/228721937]\n\n\n--1.4 Transfer downloaded content and unzip tar.gz----\n\n\n--1.5 Check folder for files----\n\ntotal 609576\ndrwxr-xr-x 1 root root 4096 Mar 30 11:29 .\ndrwxr-xr-x 1 root root 4096 Mar 30 11:26 ..\ndrwxr-xr-x 1 root root 4096 Mar 18 13:31 google\ndrwxr-xr-x 10 1000 1000 4096 Mar 30 11:26 hadoop-3.2.2\n-rw-r--r-- 1 root root 395448622 Jan 13 18:48 hadoop-3.2.2.tar.gz\ndrwxr-xr-x 4 root root 4096 Mar 18 13:25 nvidia\ndrwxr-xr-x 13 1000 1000 4096 Feb 22 02:11 spark-3.1.1-bin-hadoop3.2\n-rw-r--r-- 1 root root 228721937 Feb 22 02:45 spark-3.1.1-bin-hadoop3.2.tgz\n\n---2. Set Environment variables----\n\n\n---2.1. Check Environment variables----\n\n/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin:/opt/hadoop-3.2.2/bin:/opt/hadoop-3.2.2/sbin:/opt/spark-3.1.1-bin-hadoop3.2/bin:/opt/spark-3.1.1-bin-hadoop3.2/sbin\n/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/opt/hadoop-3.2.2/lib/native:/opt/spark-3.1.1-bin-hadoop3.2/lib/native\n\n---3. 
Configure spark to access hadoop----\n\n\n---3.1 Check ----\n\n" ] ], [ [ "## Test spark\nHadoop should have been started", "_____no_output_____" ], [ "Call some libraries", "_____no_output_____" ] ], [ [ "# 3.0 Just call some libraries to test\nimport pandas as pd\nimport numpy as np\n\n# 3.1 Get spark in sys.path\nimport findspark\nfindspark.init()\n\n# 3.2 Call other spark libraries\n# Just to test\nfrom pyspark.sql import SparkSession\nimport databricks.koalas as ks\nfrom pyspark.ml.feature import VectorAssembler\nfrom pyspark.ml.regression import LinearRegression", "WARNING:root:'PYARROW_IGNORE_TIMEZONE' environment variable was not set. It is required to set this environment variable to '1' in both driver and executor sides if you use pyarrow>=2.0.0. Koalas will set it for you but it does not work if there is a Spark context already launched.\n" ], [ "# 3.1 Build spark session\nspark = SparkSession. \\\n builder. \\\n master(\"local[*]\"). \\\n getOrCreate()\n", "_____no_output_____" ], [ "# 4.0 Pandas DataFrame\npdf = pd.DataFrame({\n 'x1': ['a','a','b','b', 'b', 'c', 'd','d'],\n 'x2': ['apple', 'orange', 'orange','orange', 'peach', 'peach','apple','orange'],\n 'x3': [1, 1, 2, 2, 2, 4, 1, 2],\n 'x4': [2.4, 2.5, 3.5, 1.4, 2.1,1.5, 3.0, 2.0],\n 'y1': [1, 0, 1, 0, 0, 1, 1, 0],\n 'y2': ['yes', 'no', 'no', 'yes', 'yes', 'yes', 'no', 'yes']\n })\n\n# 4.1\npdf", "_____no_output_____" ], [ "# 4.2 Transform to Spark DataFrame\ndf = spark.createDataFrame(pdf)\ndf.show()", "_____no_output_____" ], [ "# 4.3 Create a csv file \n# and tranfer it to hdfs\n!echo \"a,b,c,d\" > /content/airports.csv\n!echo \"5,4,6,7\" >> /content/airports.csv\n!echo \"2,3,4,5\" >> /content/airports.csv\n!echo \"8,9,0,1\" >> /content/airports.csv\n!echo \"2,3,4,1\" >> /content/airports.csv\n!echo \"1,2,2,1\" >> /content/airports.csv\n!echo \"0,1,2,6\" >> /content/airports.csv\n!echo \"9,3,1,8\" >> /content/airports.csv\n!ls -la /content\n\n# 4.4\n!hdfs dfs -rm -f /user/ashok/airports.csv\n!hdfs dfs -put /content/airports.csv /user/ashok/\n!hdfs dfs -ls /user/ashok", "_____no_output_____" ], [ "# 5.0 Read file directly from hadoop\nairports_df = spark.read.csv( \n \"/user/ashok/airports.csv\",\n inferSchema = True,\n header = True\n )\n\n# 5.1 Show file\nairports_df.show()", "_____no_output_____" ] ], [ [ "## Test Koalas\nHadoop should have been started", "_____no_output_____" ], [ "Create a koalas dataframe", "_____no_output_____" ] ], [ [ "# 6.0\n# If namenode is in safemode, first use:\n# hdfs dfsadmin -safemode leave\nkdf = ks.DataFrame(\n {\n 'a': [1, 2, 3, 4, 5, 6],\n 'b': [100, 200, 300, 400, 500, 600],\n 'c': [\"one\", \"two\", \"three\", \"four\", \"five\", \"six\"]\n },\n index=[10, 20, 30, 40, 50, 60]\n )\n\n# 6.1 And show\nkdf", "_____no_output_____" ], [ "# 6.2 Pandas DataFrame\npdf = pd.DataFrame({'x':range(3), 'y':['a','b','b'], 'z':['a','b','b']})\n\n# 6.2.1 Transform to koalas DataFrame\ndf = ks.from_pandas(pdf)", "_____no_output_____" ], [ "# 6.3 Rename koalas dataframe columns\ndf.columns = ['x', 'y', 'z1']\n\n# 6.4 Do some operations on koalas DF, in place:\ndf['x2'] = df.x * df.x\n\n# 6.6 Finally show koalas df\ndf\n", "_____no_output_____" ], [ "# 6.7 Read csv file from hadoop\n# and create koalas df\nks.read_csv(\"/user/ashok/airports.csv\").head(10)", "_____no_output_____" ], [ "###################", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ] ]
d036297892fb23d2d4281c86f86a2fbfb451b4ac
115,815
ipynb
Jupyter Notebook
src/notebook/.ipynb_checkpoints/reseaux_de_neurones-checkpoint.ipynb
desmond-rn/projet-inverse
01f30119de43779b0f747c028f4c837332e7d10e
[ "MIT" ]
null
null
null
src/notebook/.ipynb_checkpoints/reseaux_de_neurones-checkpoint.ipynb
desmond-rn/projet-inverse
01f30119de43779b0f747c028f4c837332e7d10e
[ "MIT" ]
null
null
null
src/notebook/.ipynb_checkpoints/reseaux_de_neurones-checkpoint.ipynb
desmond-rn/projet-inverse
01f30119de43779b0f747c028f4c837332e7d10e
[ "MIT" ]
null
null
null
104.056604
36,852
0.806476
[ [ [ "# OBJECTF\n\nPredire $\\rho$, $\\sigma_a$ et $\\sigma_c$ en fonction de $E_r$, $F_r$, et $T_r$ a droite du domaine en toute temps ", "_____no_output_____" ], [ "# PREPARATION", "_____no_output_____" ], [ "## Les imports", "_____no_output_____" ] ], [ [ "%reset -f", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nfrom ast import literal_eval as l_eval", "_____no_output_____" ], [ "np.set_printoptions(precision = 3)", "_____no_output_____" ] ], [ [ "## Chargement des donnees", "_____no_output_____" ] ], [ [ "# \"\"\" VERSION COLAB \"\"\"\n\n# # to load data from my personal github repo (update it if we have to)\n# import os\n# if not os.path.exists(\"assets\"):\n# print(\"Data wansn't here. Let's download it!\")\n# !git clone https://github.com/desmond-rn/assets.git\n# else:\n# print(\"Data already here. Let's update it!\")\n# %cd assets\n# # %rm -rf assets\n# !git pull https://github.com/desmond-rn/assets.git\n# %cd ..\n\n# print(\"\\n\")\n# !ls assets/dataframes/inverse\n\n# df_path = \"assets/dataframes/inverse/df_temporal.csv\"", "_____no_output_____" ], [ "# \"\"\" VERSION JUPYTER \"\"\"\n\n# to load data locally\n\n%ls \"../../data\"\n\ndf_t_path = \"../../data/df_temporal.csv\"\ndf_s_path = \"../../data/df_spatial.csv\"", " Volume in drive C has no label.\n Volume Serial Number is 2248-85E1\n\n Directory of C:\\Users\\Roussel\\Dropbox\\Unistra\\SEMESTRE 2\\Projet & Stage\\Inverse\\REPO\\data\n\n21-Jun-20 12:53 PM <DIR> .\n21-Jun-20 12:53 PM <DIR> ..\n21-Jun-20 12:53 PM <DIR> anim\n24-Jun-20 02:07 PM 14,895 case_1_spatial.csv\n24-Jun-20 02:07 PM 30,388 case_1_temporal.csv\n24-Jun-20 02:07 PM 10,946 case_2_spatial.csv\n24-Jun-20 02:07 PM 22,409 case_2_temporal.csv\n24-Jun-20 02:07 PM 10,901 case_3_spatial.csv\n21-Jun-20 12:53 PM 62,201 dataframe_1.csv\n21-Jun-20 12:53 PM 89,950 dataframe_2.csv\n21-Jun-20 12:53 PM 1,396 df_1.csv\n22-Jun-20 07:11 AM 47,585 df_1_test.csv\n21-Jun-20 12:53 PM 1,486 df_2.csv\n22-Jun-20 07:11 AM 54,818 df_2_test.csv\n24-Jun-20 02:07 PM 3,643,332 df_spatial.csv\n24-Jun-20 02:07 PM 4,813,222 df_temporal.csv\n21-Jun-20 12:53 PM 14,757 fichier_export_csv.csv\n21-Jun-20 12:53 PM <DIR> img\n21-Jun-20 12:53 PM <DIR> video\n 14 File(s) 8,818,286 bytes\n 5 Dir(s) 103,518,060,544 bytes free\n" ] ], [ [ "## Donnees temporelles", "_____no_output_____" ] ], [ [ "types = {'rho_expr':str, 'sigma_a_expr':str, 'sigma_c_expr':str, 'E_x_0_expr':str, 'F_x_0_expr':str, 'T_x_0_expr':str}\nconverters={'t':l_eval, 'E_l':l_eval, 'F_l':l_eval, 'T_l':l_eval, 'E_r':l_eval, 'F_r':l_eval, 'T_r':l_eval} # on veut convertir les str en listes\n\ndf_t = pd.read_csv(df_t_path, thousands=',', dtype=types, converters=converters)\n\ndf_t.head(2)", "_____no_output_____" ] ], [ [ "## Donnees spatiales", "_____no_output_____" ] ], [ [ "types = {'rho_expr':str, 'sigma_a_expr':str, 'sigma_c_expr':str, 'E_x_0_expr':str, 'F_x_0_expr':str, 'T_x_0_expr':str}\nconverters={'x':l_eval, 'rho':l_eval, 'sigma_a':l_eval, 'sigma_c':l_eval, 'E_0':l_eval, 'F_0':l_eval, 'T_0':l_eval, 'E':l_eval, 'F':l_eval, 'T':l_eval}\n\ndf_s = pd.read_csv(df_s_path, thousands=',', dtype=types, converters=converters)\n\ndf_s.head(2)", "_____no_output_____" ] ], [ [ "## Prerequis pour cet apprentissage", "_____no_output_____" ], [ "Tous les unputs doivent etre similaires sur un certain nombre de leurs parametres.", "_____no_output_____" ] ], [ [ "t_f = 0.005\nx_min = 0\nx_max = 1\n\nfor i in range(len(df_t)):\n assert df_t.loc[i, 't_f'] == 0.005\n assert df_t.loc[i, 
'E_0_expr'] == \"0.01372*(5^4)\"\n # etc...\n assert df_t.loc[i, 'x_min'] == x_min\n assert df_t.loc[i, 'x_max'] == x_max\n", "_____no_output_____" ] ], [ [ "## Visualisation", "_____no_output_____" ] ], [ [ "\"\"\" Visualisons les signaux sur la droite et la densite sur le domaine \"\"\"\n\ndef plot_inputs(ax, df_t, index):\n t = np.array(df_t.loc[index, 't'])\n \n # inputs\n E_r = np.array(df_t.loc[index, 'E_r'])\n F_r = np.array(df_t.loc[index, 'F_r'])\n T_r = np.array(df_t.loc[index, 'T_r'])\n\n # plot \n ax[0].plot(t, E_r, 'b', label='énergie à droite', lw=3)\n ax[0].set_ylim(8.275, 8.875)\n ax[0].set_xlabel('t')\n ax[0].legend()\n\n ax[1].plot(t, F_r, 'y', label='flux à droite', lw=3)\n ax[1].set_ylim(-0.25, 0.25)\n ax[1].set_xlabel('t') \n ax[1].legend()\n\n ax[2].plot(t, T_r, 'r', label='température à droite', lw=3)\n ax[2].set_ylim(4.96, 5.04)\n ax[2].set_xlabel('t')\n ax[2].legend()\n \ndef plot_output(ax, df_s, index):\n x = np.array(df_s.loc[index, 'x'])\n rho = np.array(df_s.loc[index, 'rho'])\n\n # plot \n ax.plot(x, rho, 'm--', label='densité')\n ax.set_ylim(0.5, 10.5)\n ax.set_xlabel('x')\n ax.legend()", "_____no_output_____" ], [ "def plot_io(index):\n fig, ax = plt.subplots(2, 3, figsize=(12, 6))\n fig.delaxes(ax[1][0])\n fig.delaxes(ax[1][2])\n\n plot_inputs(ax[0], df_t, index)\n plot_output(ax[1, 1], df_s, index)\n plt.tight_layout()", "_____no_output_____" ], [ "index = 0\nplot_io(index)", "_____no_output_____" ] ], [ [ "## Creation des inputs X", "_____no_output_____" ], [ "Pour chacun des signaux E_r, F_r et T_r, il faut tout d'abord:\n- Tronquer le signal pour ne ne garder que la fin\n- Reechantilloner le signal pour ne garder que 20, voir 50 pas de temps", "_____no_output_____" ] ], [ [ "\"\"\" Permet de couper le debut du signal, parite toujours constante. 
Retourne la fraction de fin \"\"\"\ndef trim(input, ratio):\n len_input = len(input)\n len_output = int(len_input*ratio)\n return input[len_input-len_output:]\n\n\"\"\" Fonction pour extraire n pas d'iterations \"\"\"\ndef resample(input, len_output):\n len_input = len(input)\n output = []\n for i in np.arange(0, len_input, len_input//len_output):\n output.append(input[i])\n return np.array(output)[1:]\n", "_____no_output_____" ], [ "\"\"\" Testons avec un exemple \"\"\"\nt = np.array(df_t.loc[index, 't'])\nE_r = np.array(df_t.loc[index, 'E_r'])\n\nratio, len_output = 1/2, 20\nt = resample(trim(t, ratio), len_output)\nE_r = resample(trim(E_r, ratio), len_output)\n\nfig, ax = plt.subplots(1, 1, figsize=(6, 4))\nax.plot(t, E_r, 'b', label='énergie à droite coupé et reechantilloné', lw=3)\nax.set_ylim(8.275, 8.875)\nax.set_xlabel('t')\nax.legend();", "_____no_output_____" ], [ "\"\"\" Generation les inputs X \"\"\"\n\nsize = len(df_t)\nX = np.empty(shape=(size, 3, len_output), dtype=float)\n\nfor i in range(size):\n X[i][0] = resample(trim(df_t.loc[i, 'E_r'], ratio), len_output)\n X[i][1] = resample(trim(df_t.loc[i, 'F_r'], ratio), len_output)\n X[i][2] = resample(trim(df_t.loc[i, 'T_r'], ratio), len_output)\n \nprint(\"X shape =\", X.shape)", "X shape = (103, 3, 20)\n" ] ], [ [ "## Creations des outputs y", "_____no_output_____" ], [ "Pour le signal rho, il faut tout d'abord:\n- Detecter la position, la hauteur et la larrgeur de chaque crenau", "_____no_output_____" ] ], [ [ "\"\"\" Calcule les decalages a droite et a gauche d'un signal \"\"\"\ndef decay(signal):\n signal_right = np.zeros_like(signal)\n signal_right[1:] = signal[:-1]\n signal_right[0] = signal[0]\n\n signal_left = np.zeros_like(signal)\n signal_left[:-1] = signal[1:]\n signal_left[-1] = signal[-1]\n \n return signal_left, signal_right", "_____no_output_____" ], [ "\"\"\" Fonction de lissage laplacien 3-means d'un signal \"\"\"\ndef smooth(signal):\n signal_left, signal_right = decay(signal)\n return (signal + signal_left + signal_right) / 3.", "_____no_output_____" ], [ "\"\"\" Pour eliminer les tres tres faibles valeurs dans un signal \"\"\"\ndef sharpen(signal, precision):\n return np.where(abs(signal) < precision, np.zeros_like(signal), signal)", "_____no_output_____" ], [ "\"\"\" Pour afficher un signal et sa derivee seconde \"\"\"\ndef plot_signal(ax, signal):\n signal_left, signal_right = decay(signal)\n diff = -2*signal + signal_right + signal_left\n\n diff = sharpen(diff, 1e-4)\n \n ax[0].plot(signal, 'm--', label='signal')\n ax[1].plot(diff[1:-1], 'c--', label='derivee seconde du signal');\n ax[0].legend()\n ax[1].legend()", "_____no_output_____" ], [ "\"\"\" Une fonction pour detecter la position, hauteur et largeur des crenaux \"\"\"\ndef detect_niches(signal):\n signal_left, signal_right = decay(signal)\n diff = -2*signal + signal_right + signal_left\n \n diff = sharpen(diff, 1e-4)\n \n # zero_crossings = [] # les points de traverse du 0\n niches = [] # les crenaux detectes\n \n prev = diff[0]\n next = diff[2]\n\n ended = False # indique si on aretrouve la fin d'un crenau\n\n start = 1\n end = 1\n \n step = 1 # pas de recherche \n i = step\n len_signal = len(diff)\n \n while i < len_signal-step:\n prev = diff[i-step]\n val = diff[i]\n next = diff[i+step]\n \n if prev > 0. and next < 0.:\n # zero_crossings.append(i)\n start = i\n ended = False\n\n if i == len_signal-step-1 and ended == False:\n prev = -1.\n next = 1.\n\n if prev < 0. and next > 0. 
and ended==False:\n # zero_crossings.append(i)\n end = i\n \n niche_width = end - start # largeur relative a N = len_signal\n niche_center = (end + start) // 2 # position relative a N\n niche_height = signal[niche_center] # hauteur du crenaux\n\n niches.append((niche_center, niche_height, niche_width))\n\n ended = True\n \n# print(i, ended)\n# print(prev, next)\n\n i += 1\n \n return niches", "_____no_output_____" ], [ "\"\"\" Testons avec un exemple \"\"\"\nsignal = np.zeros(500)\n\nsignal[100:170] = 5. # ajout des crenaux\nsignal[250:265] = 3.\nsignal[325:375] = 10.\n\nfor i in range(25): # lissage du signal\n signal = smooth(signal)\n\nfig, ax = plt.subplots(1, 2, figsize=(12, 4))\nplot_signal(ax, signal)\n\nniches = detect_niches(signal)\n\nprint(\"Position, hauteur et largeur des creneaux detectes\")\nfor el in niches:\n print(\" -\", el)", "Position, hauteur et largeur des creneaux detectes\n - (134, 5.0, 69)\n - (257, 2.804259852773157, 14)\n - (349, 9.999999999988198, 49)\n" ], [ "\"\"\" Testons sur un vrai rho \"\"\"\n# signal = np.array(df_s.loc[4, 'rho'])\n# fig, ax = plt.subplots(1, 2, figsize=(12, 4))\n# plot_signal(ax, signal)\n\n# niches = detect_niches(signal)\n# for el in niches:\n# print(\" -\", el)", "_____no_output_____" ], [ "\"\"\" Pour creer les y, il faut normaliser par rapport a l'abcisse du domaine \"\"\"\ny = np.empty(shape=(size, 3), dtype=float)\n\nfor i in range(size):\n x = np.array(df_s.loc[i, 'x'])\n rho = np.array(df_s.loc[i, 'rho'])\n niche = detect_niches(rho)[0] # on suppose qu'il ny a qu'un seul créneau \n dx = (x_max - x_min) / df_s.loc[i, 'N'] # xmin = 0, xmax = 1 bien sur. condition necessaire pour cette etude\n\n y[i][0] = x[niche[0]] # position relative a x\n y[i][1] = niche[1] # hauteur\n y[i][2] = niche[2]*dx # largeur\n \n# print(i, niche)\n# print(i, y[i])\n\nprint(\"y shape =\", np.shape(y))", "y shape = (103, 3)\n" ] ], [ [ "## Separation des donnees train, test et val", "_____no_output_____" ] ], [ [ "len_train, len_val = 60, 20\n\nX_train = X[:len_train]\nX_val = X[len_train:len_train+len_val]\nX_test = X[len_train+len_val:]\n\ny_train = y[:len_train]\ny_val = y[len_train:len_train+len_val]\ny_test = y[len_train+len_val:]\n\nprint(\"X shapes =\", np.shape(X_train), np.shape(X_val), np.shape(X_test))\nprint(\"y shapes =\", np.shape(y_train), np.shape(y_val), np.shape(y_test))", "X shapes = (60, 3, 20) (20, 3, 20) (23, 3, 20)\ny shapes = (60, 3) (20, 3) (23, 3)\n" ] ], [ [ "# APPRENTISSAGE", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d03667e8882004105ac3b1518b3c26fd775b7fbf
5,329
ipynb
Jupyter Notebook
notebooks/test_profielen.ipynb
d2hydro/HyDAMOValidatieModule
3bc7cccf8d8cc656f53dac2ae31055eccfd58ee8
[ "MIT" ]
1
2021-12-10T18:40:59.000Z
2021-12-10T18:40:59.000Z
notebooks/test_profielen.ipynb
d2hydro/HyDAMOValidatieModule
3bc7cccf8d8cc656f53dac2ae31055eccfd58ee8
[ "MIT" ]
7
2022-01-10T09:29:36.000Z
2022-03-17T19:57:28.000Z
notebooks/test_profielen.ipynb
d2hydro/HyDAMOValidatieModule
3bc7cccf8d8cc656f53dac2ae31055eccfd58ee8
[ "MIT" ]
3
2021-12-09T18:55:56.000Z
2022-02-11T13:49:28.000Z
28.345745
287
0.531432
[ [ [ "# Testen via de Python module\n\nWe hebben hiervoor nodig `directory`; een map met data om te valideren, deze bestaat uit:\n* een map `datasets` met daarin 1 of meerdere GeoPackages met HyDAMO lagen\n* een bestand `validation_rules.json` met daarin de validatieregels\n\nOmdat we op de HyDAMO objecten de maaiveldhoogte willen bepalen definieren we een `coverage`. Dit is een python dictionary. Elke `key` geeft een identificatie voor de coverage die aangeroepen kan worden in de `validation_rules.json`. De `value` verwijst naar een map met daarin:\n* GeoTiffs\n* index.shp met een uitlijn van elke GeoTiff", "_____no_output_____" ] ], [ [ "coverage = {\"AHN\": r\"../tests/data/dtm\"}\ndirectory = r\"../tests/data/tasks/test_profielen\"", "_____no_output_____" ] ], [ [ "We importeren de validator en maken een HyDAMO validator aan die geopackages, csvs en geojsons weg schrijft. We kennen ook de coverage toe.", "_____no_output_____" ] ], [ [ "from hydamo_validation import validator\nhydamo_validator = validator(output_types=[\"geopackage\", \"csv\", \"geojson\"],\n coverages=coverage,\n log_level=\"INFO\")", "_____no_output_____" ] ], [ [ "Nu kunnen we onze `directory` gaan valideren. Dat duurt ongeveer 20-30 seconden", "_____no_output_____" ] ], [ [ "datamodel, layer_summary, result_summary = hydamo_validator(directory=directory,\n raise_error=True)", "profielgroep is empty (!)\nINFO:hydamo_validation.validator:finished in 3.58 seconds\n" ] ], [ [ "We kijken naar de samenvatting van het resultaat", "_____no_output_____" ] ], [ [ "result_summary.to_dict()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d03668c2e8e733edf83303ca0aca73c010371a8f
374,965
ipynb
Jupyter Notebook
Marketing_Campaign_optimization/bin/Phase_3_Deployment_options.ipynb
ThiagoBarsante/DataScience_projects
bf7e0110a905b3bf829fddcba3a2dfb3900df8a2
[ "MIT" ]
null
null
null
Marketing_Campaign_optimization/bin/Phase_3_Deployment_options.ipynb
ThiagoBarsante/DataScience_projects
bf7e0110a905b3bf829fddcba3a2dfb3900df8a2
[ "MIT" ]
null
null
null
Marketing_Campaign_optimization/bin/Phase_3_Deployment_options.ipynb
ThiagoBarsante/DataScience_projects
bf7e0110a905b3bf829fddcba3a2dfb3900df8a2
[ "MIT" ]
null
null
null
530.360679
100,580
0.929999
[ [ [ "## Phase 3 - deployment\n\n#### This notebook will provide and overview how to deploy and predict the CPE in two ways\n\n- The model was build/export in the last notebook (Phase_2_Advanced_Analytics__predictions)\n<br> This notebook show another option to save/export the model using the H2O flow UI and complement the information with deployment for predictions.\n\nThe predictions will be presented in 2 ways\n- Batch process \n- Online / real time predictions\n", "_____no_output_____" ], [ "<div class=\"alert alert-block alert-info\">\n<b>Export model:</b> Export the model GBM (best performance) using H2O flow UI as detailed below\n</div>", "_____no_output_____" ] ], [ [ "from IPython.display import Image\nImage(filename='./data/H2O-FLOW-UI-GBM-MODEL.PNG')", "_____no_output_____" ], [ "from IPython.display import Image\nImage(filename='./data/H2O-FLOW-UI-GBM-MODEL-download.PNG')", "_____no_output_____" ] ], [ [ "## Sample of new campaigns to be predicted", "_____no_output_____" ] ], [ [ "import pandas as pd\ndf = pd.read_csv('./GBM_MODEL/New_campaings_for_predictions.csv')\ndf.tail(10)", "_____no_output_____" ] ], [ [ "### Important attention point\n- All information will be provided for prediction (base information available in the simulated/demo data) however just the relevant information were used during the model build detailed in the Notebook: Phase_2_Advanced_Analytics__predictions <br>\n- For example LineItemsID is just an index number and do not provide relevant information and is not going to be used for prediction", "_____no_output_____" ], [ "<div class=\"alert alert-block alert-info\">\n<b>Batch Prediction:</b> Generate prediction for new data\n</div>\n", "_____no_output_____" ], [ "#### To execute the prediction as presented below it is not necessary to have an H2O cluster running\n##### The processo show below was executed in 2 steps to show in detail the process but in production environment this process must be executed in just one step\n\n###### &emsp; Simulation in 2 steps\n\nStep 1. batch process to run the java program\n<br>Step 2. 
python program to link the new data and the predictions with the CPE\n<br> &emsp; &emsp; Can be used any programming language to run the prediction and get the results (such as R, Python, Java, C#, ...)\n\n\n### Run batch java process to gererate/score the predictions of CPE", "_____no_output_____" ] ], [ [ "## To generate prediction (CPE) for new data just run the command\n\n## EXAMPLE\n## java -Xmx4g -XX:ReservedCodeCacheSize=256m -cp <h2o-genmodel.jar_EXPORTED_ABOVE> hex.genmodel.tools.PredictCsv --mojo <GBM_log_CPE_model.zip_EXPORTED_ABOVE> --input INPUT_FILE_FOR_PREDICTION.csv --output OUTUPUT_FILE_WITH_PREDICTIONS_FOR_CPE__EXPORT_EXPORT_PREDICTIONS.csv --decimal\n\n## REAL PREDICTION\n## java -Xmx4g -XX:ReservedCodeCacheSize=256m -cp h2o-genmodel.jar hex.genmodel.tools.PredictCsv --mojo GBM_log_CPE_model.zip --input New_campaings_for_predictions.csv --output New_campaings_for_predictions__EXPORT_EXPORT_PREDICTIONS.csv --decimal", "_____no_output_____" ], [ "from IPython.display import Image\nImage(filename='./data/Batch-prediction-h2o.PNG')", "_____no_output_____" ] ], [ [ "### Sincronize all information - new campaign data and new predictions for CPE\n- Remember that the prediction was done in logarithmic scale and now is necessary to rever the result with exponential function", "_____no_output_____" ] ], [ [ "CPE_predictions = pd.read_csv('./GBM_MODEL/New_campaings_for_predictions__EXPORT_EXPORT_PREDICTIONS.csv')\nCPE_predictions.tail()", "_____no_output_____" ], [ "import numpy as np\ndf['CPE_predition_LOG'] = CPE_predictions['predict']\ndf['CPE_predition'] = round(np.exp(CPE_predictions['predict']) -1, 3)\ndf.tail()", "_____no_output_____" ] ], [ [ "<div class=\"alert alert-block alert-info\">\n<b>Online prediction:</b> Generate prediction for new data\n</div>\n", "_____no_output_____" ], [ "### The online prediction could be implemented using diferent architectures such as\n1. Serverless function such as Amazon AWS Lambda + API Gateway\n<br> https://aws.amazon.com/lambda/?nc2=h_ql_prod_fs_lbd\n \n2. Java program that use POJO/MOJO model for online prediction\n<br> http://docs.h2o.ai/h2o/latest-stable/h2o-docs/productionizing.html#step-2-compile-and-run-the-mojo\n\n3. Microservices architecture using Docker (python + flask app + NGINX for load balance)\n<br> Could be implemented on-premise solution or even using cloud solutions such as container orchestration as GKE (Google Kubernetes Engine)\n<br> https://cloud.google.com/kubernetes-engine/\n\nThe solution presented below show the prediction done trought one json information passed to the URL\n<br> &emsp; This API could be deployed in any of the 3 options detailed above", "_____no_output_____" ] ], [ [ "from IPython.display import Image\nImage(filename='./data/Online-Prediction.PNG')", "_____no_output_____" ] ], [ [ "## Summary and final considerations\n\n##### The model build in Phase 2 and also exported in this notebook can be deployed for batch and online predictions\n\n- Batch process => the batch process is the way to go to predict large ammount of campaigns and for back-office analysis using some BI tools\n \n- Online prediction => The online prediction using microservices architecture for example, is the way to go if the company has online interfaces integrated with lauch campaign programs. With this approach is possible to analyse specific campaign prediction\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
d036785c223a4c72d6d66d2d764d5e4e67f30d65
12,591
ipynb
Jupyter Notebook
docs/examples/full-example.ipynb
martinRenou/jupyter-matplotlib
2d8b13f4aa8d0f1b94facccb1b5a3443c6e89112
[ "BSD-3-Clause" ]
1
2022-03-27T20:21:47.000Z
2022-03-27T20:21:47.000Z
docs/examples/full-example.ipynb
jkochNU/ipympl
44a4ff246ec2fe0ca14238d089246aa9c8c6b270
[ "BSD-3-Clause" ]
null
null
null
docs/examples/full-example.ipynb
jkochNU/ipympl
44a4ff246ec2fe0ca14238d089246aa9c8c6b270
[ "BSD-3-Clause" ]
null
null
null
26.341004
340
0.556985
[ [ [ "# Comprehensive Example", "_____no_output_____" ] ], [ [ "# Enabling the `widget` backend.\n# This requires jupyter-matplotlib a.k.a. ipympl.\n# ipympl can be install via pip or conda.\n%matplotlib widget\n\nimport matplotlib.pyplot as plt\nimport numpy as np", "_____no_output_____" ], [ "# Testing matplotlib interactions with a simple plot\nfig = plt.figure()\nplt.plot(np.sin(np.linspace(0, 20, 100)));", "_____no_output_____" ], [ "# Always hide the toolbar\nfig.canvas.toolbar_visible = False", "_____no_output_____" ], [ "# Put it back to its default\nfig.canvas.toolbar_visible = 'fade-in-fade-out'", "_____no_output_____" ], [ "# Change the toolbar position\nfig.canvas.toolbar_position = 'top'", "_____no_output_____" ], [ "# Hide the Figure name at the top of the figure\nfig.canvas.header_visible = False", "_____no_output_____" ], [ "# Hide the footer\nfig.canvas.footer_visible = False", "_____no_output_____" ], [ "# Disable the resizing feature\nfig.canvas.resizable = False", "_____no_output_____" ], [ "# If true then scrolling while the mouse is over the canvas will not move the entire notebook\nfig.canvas.capture_scroll = True", "_____no_output_____" ] ], [ [ "You can also call `display` on `fig.canvas` to display the interactive plot anywhere in the notebooke", "_____no_output_____" ] ], [ [ "fig.canvas.toolbar_visible = True\ndisplay(fig.canvas)", "_____no_output_____" ] ], [ [ "Or you can `display(fig)` to embed the current plot as a png", "_____no_output_____" ] ], [ [ "display(fig)", "_____no_output_____" ] ], [ [ "# 3D plotting", "_____no_output_____" ] ], [ [ "from mpl_toolkits.mplot3d import axes3d\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n\n# Grab some test data.\nX, Y, Z = axes3d.get_test_data(0.05)\n\n# Plot a basic wireframe.\nax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)\n\nplt.show()", "_____no_output_____" ] ], [ [ "# Subplots", "_____no_output_____" ] ], [ [ "# A more complex example from the matplotlib gallery\nnp.random.seed(0)\n\nn_bins = 10\nx = np.random.randn(1000, 3)\n\nfig, axes = plt.subplots(nrows=2, ncols=2)\nax0, ax1, ax2, ax3 = axes.flatten()\n\ncolors = ['red', 'tan', 'lime']\nax0.hist(x, n_bins, density=1, histtype='bar', color=colors, label=colors)\nax0.legend(prop={'size': 10})\nax0.set_title('bars with legend')\n\nax1.hist(x, n_bins, density=1, histtype='bar', stacked=True)\nax1.set_title('stacked bar')\n\nax2.hist(x, n_bins, histtype='step', stacked=True, fill=False)\nax2.set_title('stack step (unfilled)')\n\n# Make a multiple-histogram of data-sets with different length.\nx_multi = [np.random.randn(n) for n in [10000, 5000, 2000]]\nax3.hist(x_multi, n_bins, histtype='bar')\nax3.set_title('different sample sizes')\n\nfig.tight_layout()\nplt.show()", "_____no_output_____" ], [ "fig.canvas.toolbar_position = 'right'", "_____no_output_____" ], [ "fig.canvas.toolbar_visible = False", "_____no_output_____" ] ], [ [ "# Interactions with other widgets and layouting\n\nWhen you want to embed the figure into a layout of other widgets you should call `plt.ioff()` before creating the figure otherwise `plt.figure()` will trigger a display of the canvas automatically and outside of your layout. ", "_____no_output_____" ], [ "### Without using `ioff`\n\nHere we will end up with the figure being displayed twice. 
The button won't do anything it just placed as an example of layouting.", "_____no_output_____" ] ], [ [ "import ipywidgets as widgets\n\n# ensure we are interactive mode \n# this is default but if this notebook is executed out of order it may have been turned off\nplt.ion()\n\nfig = plt.figure()\nax = fig.gca()\nax.imshow(Z)\n\nwidgets.AppLayout(\n center=fig.canvas,\n footer=widgets.Button(icon='check'),\n pane_heights=[0, 6, 1]\n)", "_____no_output_____" ] ], [ [ "### Fixing the double display with `ioff`\n\nIf we make sure interactive mode is off when we create the figure then the figure will only display where we want it to.\n\nThere is ongoing work to allow usage of `ioff` as a context manager, see the [ipympl issue](https://github.com/matplotlib/ipympl/issues/220) and the [matplotlib issue](https://github.com/matplotlib/matplotlib/issues/17013)", "_____no_output_____" ] ], [ [ "plt.ioff()\nfig = plt.figure()\nplt.ion()\n\nax = fig.gca()\nax.imshow(Z)\n\nwidgets.AppLayout(\n center=fig.canvas,\n footer=widgets.Button(icon='check'),\n pane_heights=[0, 6, 1]\n)", "_____no_output_____" ] ], [ [ "# Interacting with other widgets\n\n## Changing a line plot with a slide", "_____no_output_____" ] ], [ [ "# When using the `widget` backend from ipympl,\n# fig.canvas is a proper Jupyter interactive widget, which can be embedded in\n# an ipywidgets layout. See https://ipywidgets.readthedocs.io/en/stable/examples/Layout%20Templates.html\n\n# One can bound figure attributes to other widget values.\nfrom ipywidgets import AppLayout, FloatSlider\n\nplt.ioff()\n\nslider = FloatSlider(\n orientation='horizontal',\n description='Factor:',\n value=1.0,\n min=0.02,\n max=2.0\n)\n\nslider.layout.margin = '0px 30% 0px 30%'\nslider.layout.width = '40%'\n\nfig = plt.figure()\nfig.canvas.header_visible = False\nfig.canvas.layout.min_height = '400px'\nplt.title('Plotting: y=sin({} * x)'.format(slider.value))\n\nx = np.linspace(0, 20, 500)\n\nlines = plt.plot(x, np.sin(slider.value * x))\n\ndef update_lines(change):\n plt.title('Plotting: y=sin({} * x)'.format(change.new))\n lines[0].set_data(x, np.sin(change.new * x))\n fig.canvas.draw()\n fig.canvas.flush_events()\n\nslider.observe(update_lines, names='value')\n\nAppLayout(\n center=fig.canvas,\n footer=slider,\n pane_heights=[0, 6, 1]\n)", "_____no_output_____" ] ], [ [ "## Update image data in a performant manner\n\nTwo useful tricks to improve performance when updating an image displayed with matplolib are to:\n1. Use the `set_data` method instead of calling imshow\n2. Precompute and then index the array", "_____no_output_____" ] ], [ [ "# precomputing all images\nx = np.linspace(0,np.pi,200)\ny = np.linspace(0,10,200)\nX,Y = np.meshgrid(x,y)\nparameter = np.linspace(-5,5)\nexample_image_stack = np.sin(X)[None,:,:]+np.exp(np.cos(Y[None,:,:]*parameter[:,None,None]))", "_____no_output_____" ], [ "plt.ioff()\nfig = plt.figure()\nplt.ion()\nim = plt.imshow(example_image_stack[0])\n\ndef update(change):\n im.set_data(example_image_stack[change['new']])\n fig.canvas.draw_idle()\n \n \nslider = widgets.IntSlider(value=0, min=0, max=len(parameter)-1)\nslider.observe(update, names='value')\nwidgets.VBox([slider, fig.canvas])", "_____no_output_____" ] ], [ [ "### Debugging widget updates and matplotlib callbacks\n\nIf an error is raised in the `update` function then will not always display in the notebook which can make debugging difficult. 
This same issue is also true for matplotlib callbacks on user events such as mouse movement, for example see [issue](https://github.com/matplotlib/ipympl/issues/116). There are two ways to see the output:\n1. In JupyterLab the output will show up in the Log Console (View > Show Log Console)\n2. Using `ipywidgets.Output`\n\nHere is an example of using an `Output` to capture errors in the update function from the previous example. To induce errors we changed the slider limits so that out of bounds errors will occur:\n\nFrom: `slider = widgets.IntSlider(value=0, min=0, max=len(parameter)-1)`\n\nTo: `slider = widgets.IntSlider(value=0, min=0, max=len(parameter)+10)`\n\nIf you move the slider all the way to the right you should see errors from the Output widget.", "_____no_output_____" ] ], [ [ "plt.ioff()\nfig = plt.figure()\nplt.ion()\nim = plt.imshow(example_image_stack[0])\n\nout = widgets.Output()\n@out.capture()\ndef update(change):\n    with out:\n        if change['name'] == 'value':\n            im.set_data(example_image_stack[change['new']])\n            fig.canvas.draw_idle()\n    \n    \nslider = widgets.IntSlider(value=0, min=0, max=len(parameter)+10)\nslider.observe(update)\ndisplay(widgets.VBox([slider, fig.canvas]))\ndisplay(out)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
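A minimal sketch of the `ioff`-as-context-manager pattern that the ipympl notebook above points to, assuming matplotlib >= 3.4 (where `plt.ioff()` also works as a context manager) and the `widget` backend; it embeds the canvas in an `AppLayout` without the double display:

```python
# Creating the figure inside plt.ioff() (a context manager in matplotlib >= 3.4)
# suppresses the automatic display, so the canvas only shows up inside AppLayout.
%matplotlib widget
import matplotlib.pyplot as plt
import numpy as np
import ipywidgets as widgets

with plt.ioff():
    fig, ax = plt.subplots()
ax.plot(np.sin(np.linspace(0, 20, 100)))

widgets.AppLayout(
    center=fig.canvas,                    # an ipympl canvas is a regular ipywidget
    footer=widgets.Button(icon='check'),
    pane_heights=[0, 6, 1],
)
```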
d0369736d698b94ecf15f5bb17c5aff493fb6e3a
31,544
ipynb
Jupyter Notebook
module2-wrangle-ml-datasets/LS_DS12_232_assignment.ipynb
jdz014/DS-Unit-2-Applied-Modeling
10d66099739d9f7f86793f788b5e9152612dd2b0
[ "MIT" ]
null
null
null
module2-wrangle-ml-datasets/LS_DS12_232_assignment.ipynb
jdz014/DS-Unit-2-Applied-Modeling
10d66099739d9f7f86793f788b5e9152612dd2b0
[ "MIT" ]
null
null
null
module2-wrangle-ml-datasets/LS_DS12_232_assignment.ipynb
jdz014/DS-Unit-2-Applied-Modeling
10d66099739d9f7f86793f788b5e9152612dd2b0
[ "MIT" ]
null
null
null
37.418743
378
0.386761
[ [ [ "<a href=\"https://colab.research.google.com/github/jdz014/DS-Unit-2-Applied-Modeling/blob/master/module2-wrangle-ml-datasets/LS_DS12_232_assignment.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "Lambda School Data Science\n\n*Unit 2, Sprint 3, Module 1*\n\n---\n\n\n# Wrangle ML datasets\n\n- [ ] Continue to clean and explore your data. \n- [ ] For the evaluation metric you chose, what score would you get just by guessing?\n- [ ] Can you make a fast, first model that beats guessing?\n\n**We recommend that you use your portfolio project dataset for all assignments this sprint.**\n\n**But if you aren't ready yet, or you want more practice, then use the New York City property sales dataset for today's assignment.** Follow the instructions below, to just keep a subset for the Tribeca neighborhood, and remove outliers or dirty data. [Here's a video walkthrough](https://youtu.be/pPWFw8UtBVg?t=584) you can refer to if you get stuck or want hints!\n\n- Data Source: [NYC OpenData: NYC Citywide Rolling Calendar Sales](https://data.cityofnewyork.us/dataset/NYC-Citywide-Rolling-Calendar-Sales/usep-8jbt)\n- Glossary: [NYC Department of Finance: Rolling Sales Data](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page)", "_____no_output_____" ], [ "Your code starts here:", "_____no_output_____" ] ], [ [ "!wget 'https://raw.githubusercontent.com/washingtonpost/data-school-shootings/master/school-shootings-data.csv'", "--2020-02-26 02:29:43-- https://raw.githubusercontent.com/washingtonpost/data-school-shootings/master/school-shootings-data.csv\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\nHTTP request sent, awaiting response... 
200 OK\nLength: 70530 (69K) [text/plain]\nSaving to: ‘school-shootings-data.csv.4’\n\n\r school-sh 0%[ ] 0 --.-KB/s \rschool-shootings-da 100%[===================>] 68.88K --.-KB/s in 0.02s \n\n2020-02-26 02:29:43 (2.77 MB/s) - ‘school-shootings-data.csv.4’ saved [70530/70530]\n\n" ], [ "import pandas as pd\n\ndf = pd.read_csv('school-shootings-data.csv')\nprint(df.shape)\ndf.head()", "(238, 50)\n" ], [ "# Replace shooting type with 'other' for rows not 'targeted' or 'indiscriminate'\n df['shooting_type'] = df['shooting_type'].replace(['accidental', 'unclear',\n 'targeted and indiscriminate',\n 'public suicide',\n 'hostage suicide',\n 'accidental or targeted',\n 'public suicide (attempted)'],\n 'other')\n\n# Fill missing value with 'other'\n df['shooting_type'] = df['shooting_type'].fillna('other')", "_____no_output_____" ], [ "# Majority class baseline 59%\ndf['shooting_type'].value_counts(normalize=True)", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\n\n# Create train, test\ntrain, test = train_test_split(df, train_size=0.80, random_state=21, stratify=df['shooting_type'])\n\ntrain.shape, test.shape", "_____no_output_____" ], [ "def wrangle(df):\n\n # Avoid SettingWithCopyWarning\n df = df.copy()\n\n # Remove commas from numbers\n df['white'] = df['white'].str.replace(\",\", \"\")\n\n # Change from object to int\n df['white'] = pd.to_numeric(df['white'])\n \n # Remove commas from numbers\n df['enrollment'] = df['enrollment'].str.replace(\",\", \"\")\n\n # Change from object to int\n df['enrollment'] = pd.to_numeric(df['enrollment'])\n\n # Fill missing values for these specific columns\n df.fillna({'white': 0, 'black': 0, 'hispanic': 0, 'asian': 0,\n 'american_indian_alaska_native': 0,\n 'hawaiian_native_pacific_islander': 0, 'two_or_more': 0,\n 'district_name': 'Unknown', 'time': '12:00 PM', 'lat': 33.612910,\n 'long': -86.682000, 'staffing': 60.42, 'low_grade': '9',\n 'high_grade': '12'}, inplace=True)\n \n # Drop columns with 200+ missing values\n df = df.drop(columns=['deceased_notes1', 'age_shooter2', 'gender_shooter2', \n 'race_ethnicity_shooter2', 'shooter_relationship2', \n 'shooter_deceased2', 'deceased_notes2'])\n\n # Drop unusable variance \n df = df.drop(columns=['uid', 'nces_school_id', 'nces_district_id', 'weapon', \n 'weapon_source', 'state_fips', 'county_fips', 'ulocale',\n 'lunch', 'age_shooter1', 'gender_shooter1',\n 'race_ethnicity_shooter1', 'shooter_relationship1',\n 'shooter_deceased1'])\n \n # Change date to datettime\n df['date'] = pd.to_datetime(df['date'])\n\n return df\n\ntrain = wrangle(train)\ntest = wrangle(test)", "_____no_output_____" ], [ "train.shape, test.shape", "_____no_output_____" ], [ "!pip install category_encoders==2.*", "Requirement already satisfied: category_encoders==2.* in /usr/local/lib/python3.6/dist-packages (2.1.0)\nRequirement already satisfied: scikit-learn>=0.20.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (0.22.1)\nRequirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (1.17.5)\nRequirement already satisfied: pandas>=0.21.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (0.25.3)\nRequirement already satisfied: scipy>=0.19.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (1.4.1)\nRequirement already satisfied: patsy>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (0.5.1)\nRequirement already satisfied: statsmodels>=0.6.1 in 
/usr/local/lib/python3.6/dist-packages (from category_encoders==2.*) (0.10.2)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.20.0->category_encoders==2.*) (0.14.1)\nRequirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders==2.*) (2.6.1)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders==2.*) (2018.9)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from patsy>=0.4.1->category_encoders==2.*) (1.12.0)\n" ], [ "import category_encoders as ce\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.feature_selection import f_classif, SelectKBest\nfrom sklearn.linear_model import Ridge\n\ntarget = 'shooting_type'\nfeatures = train.columns.drop([target, 'date'])\nX_train = train[features]\ny_train = train[target]\nX_test = test[features]\ny_test = test[target]\n\npipeline = make_pipeline(\n ce.OrdinalEncoder(), \n StandardScaler(), \n RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=21)\n)\n\nk = 20\nscores = cross_val_score(pipeline, X_train, y_train, cv=k)\nprint(f'MAE for {k} folds:', scores)", "MAE for 20 folds: [0.4 0.5 0.5 0.8 0.6 0.7\n 0.5 0.8 0.7 0.4 0.66666667 0.44444444\n 0.55555556 0.44444444 0.44444444 0.55555556 0.66666667 0.55555556\n 0.44444444 0.44444444]\n" ], [ "scores.mean()", "_____no_output_____" ], [ "from sklearn.tree import DecisionTreeClassifier\n\ntarget = 'shooting_type'\nfeatures = train.columns.drop([target, 'date', ])\nX_train = train[features]\ny_train = train[target]\nX_test = test[features]\ny_test = test[target]\n\npipeline = make_pipeline(\n ce.OrdinalEncoder(), \n DecisionTreeClassifier(max_depth=3)\n)\n\npipeline.fit(X_train, y_train)\nprint('Test Accuracy:', pipeline.score(X_test, y_test))", "Test Accuracy: 0.5416666666666666\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
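For the "what score would you get just by guessing?" step in the assignment above, a short sketch computed on the held-out split, assuming the `y_train` / `y_test` Series built in that notebook (target `shooting_type`); the exact numbers depend on the random split:

```python
# Baseline "just guessing" score: always predict the majority class seen in training,
# then measure accuracy on the held-out test labels from the notebook above.
majority_class = y_train.value_counts().idxmax()
baseline_accuracy = (y_test == majority_class).mean()
print(f"Predicting '{majority_class}' for every row -> accuracy {baseline_accuracy:.3f}")
```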
d036a56f7cf6a159b3322cdc104bfd93da4cf183
1,058
ipynb
Jupyter Notebook
day_1/sonar_sweep_one.ipynb
janlingen/advent_of_code
6908e97117832033a3079982d8efaa2970ecdcf5
[ "MIT" ]
null
null
null
day_1/sonar_sweep_one.ipynb
janlingen/advent_of_code
6908e97117832033a3079982d8efaa2970ecdcf5
[ "MIT" ]
null
null
null
day_1/sonar_sweep_one.ipynb
janlingen/advent_of_code
6908e97117832033a3079982d8efaa2970ecdcf5
[ "MIT" ]
null
null
null
19.962264
77
0.507561
[ [ [ "with open('input1.txt') as f:\n lines = f.read().splitlines()\n count = 0\n for i in range(1, len(lines)):\n if int(lines[i]) > int(lines[i-1]):\n count += 1\n print(count)\n", "1616\n" ] ] ]
[ "code" ]
[ [ "code" ] ]
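The usual follow-up to the depth-increase count above is a three-measurement sliding window; because consecutive window sums share two terms, it reduces to comparing entries three positions apart. A sketch against the same `input1.txt` (shown as an illustration, not part of the original notebook's record):

```python
# Sliding-window variant of the count above: a window sum a[i]+a[i+1]+a[i+2]
# increases exactly when a[i+3] > a[i], so no explicit sums are needed.
with open('input1.txt') as f:
    depths = [int(x) for x in f.read().splitlines()]

count = sum(1 for i in range(3, len(depths)) if depths[i] > depths[i - 3])
print(count)
```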
d036ab50d02d07f931997e708c4ec75304a4b07c
229,534
ipynb
Jupyter Notebook
notebooks/visualize/Interactive 3D.ipynb
wesleyLaurence/Music-Code
3ec001a60b8cd714b47cf9f45da653a0602c0842
[ "Apache-2.0" ]
21
2020-08-15T00:59:19.000Z
2021-09-24T11:03:52.000Z
notebooks/visualize/Interactive 3D.ipynb
wesleyLaurence/Music-Code
3ec001a60b8cd714b47cf9f45da653a0602c0842
[ "Apache-2.0" ]
null
null
null
notebooks/visualize/Interactive 3D.ipynb
wesleyLaurence/Music-Code
3ec001a60b8cd714b47cf9f45da653a0602c0842
[ "Apache-2.0" ]
2
2021-01-18T02:27:45.000Z
2021-04-18T15:48:35.000Z
251.406353
191,690
0.884222
[ [ [ "# updated 5_9_20\n\n\"\"\"\nsetup\n\"\"\"\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport matplotlib\nfrom mpl_toolkits.mplot3d import Axes3D\nmatplotlib.use('Agg')\n%matplotlib notebook\n\n# add musiccode scripts folder to path\nimport sys\n\n# import library\nfrom music_code import music_code\n\n# initialize\nm = music_code.MusicCode(120)\n\n\n\"\"\"\nset up plot\n\"\"\"\n\n# figure \nfig = plt.figure(dpi=200)\n\n# 3d axes\nax = fig.add_axes([0.1,0.1,0.8,0.8],projection='3d')\nfig.patch.set_visible(False)\n\n# Hide grid lines and axes ticks\nax.grid(False)\nax.set_xticks([])\nax.set_yticks([])\nax.set_zticks([])\nax.axis('off')\n\n\"\"\"\nset wave values\n\"\"\"\n\n# Parameters\ntotal_waves = 250\nvol = 1\nalpha_value = np.linspace(0,.6,total_waves+1)\nwidth=1\n\n# create & plot waves\nvolume_list = list(np.linspace(1,10,total_waves))\nposition_list= np.flip(np.arange(1,total_waves*2,2)) \nfade_lengths = np.linspace(1/256,1/64,len(volume_list))\n\n\"\"\"\nloop through wave values\n\"\"\"\n\ni=1\nepoch=0\nfor vols in volume_list: \n current_position = position_list[epoch]\n waveform = m.create_wave([50],'sine', duration=1/32, wt_pos=position_list[epoch]).fade(fade_in=fade_lengths[epoch], fade_out=fade_lengths[epoch])*vols\n total_samples = len(waveform)\n\n # Data for a three-dimensional plot\n zline = waveform\n xline = np.array(range(0,total_samples))+i\n yline = np.zeros(total_samples)+i\n \n # plot 3d\n ax.plot3D(xline, yline, zline,linewidth=2,alpha=alpha_value[epoch])\n \n # update values for next iteration\n width-=.06\n i+=1\n epoch+=1\n\n# display plot\nfig.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
d036ad06a11fdbb142190cb5785a51ee35e8a672
78,540
ipynb
Jupyter Notebook
source/Userdocs/NML2_examples/HH_single_compartment.ipynb
NeuroML/Documentation
06e355a8268c848b872b4e4c44d990b77b1fcb37
[ "CC-BY-4.0" ]
5
2020-09-28T20:47:04.000Z
2021-08-23T16:46:30.000Z
source/Userdocs/NML2_examples/HH_single_compartment.ipynb
NeuroML/Documentation
06e355a8268c848b872b4e4c44d990b77b1fcb37
[ "CC-BY-4.0" ]
86
2020-11-05T12:32:01.000Z
2022-03-30T22:45:00.000Z
source/Userdocs/NML2_examples/HH_single_compartment.ipynb
NeuroML/Documentation
06e355a8268c848b872b4e4c44d990b77b1fcb37
[ "CC-BY-4.0" ]
2
2021-08-23T16:46:34.000Z
2022-03-25T00:43:11.000Z
166.398305
22,528
0.880685
[ [ [ "# Interactive single compartment HH example\n\nTo run this interactive Jupyter Notebook, please click on the rocket icon 🚀 in the top panel. For more information, please see {ref}`how to use this documentation <userdocs:usage:jupyterbooks>`. Please uncomment the line below if you use the Google Colab. (It does not include these packages by default).", "_____no_output_____" ] ], [ [ "#%pip install pyneuroml neuromllite NEURON", "_____no_output_____" ], [ "import math\nfrom neuroml import NeuroMLDocument\nfrom neuroml import Cell\nfrom neuroml import IonChannelHH\nfrom neuroml import GateHHRates\nfrom neuroml import BiophysicalProperties\nfrom neuroml import MembraneProperties\nfrom neuroml import ChannelDensity\nfrom neuroml import HHRate\nfrom neuroml import SpikeThresh\nfrom neuroml import SpecificCapacitance\nfrom neuroml import InitMembPotential\nfrom neuroml import IntracellularProperties\nfrom neuroml import IncludeType\nfrom neuroml import Resistivity\nfrom neuroml import Morphology, Segment, Point3DWithDiam\nfrom neuroml import Network, Population\nfrom neuroml import PulseGenerator, ExplicitInput\nimport numpy as np\nfrom pyneuroml import pynml\nfrom pyneuroml.lems import LEMSSimulation", "_____no_output_____" ] ], [ [ "## Declare the model\n### Create ion channels", "_____no_output_____" ] ], [ [ "def create_na_channel():\n \"\"\"Create the Na channel.\n\n This will create the Na channel and save it to a file.\n It will also validate this file.\n\n returns: name of the created file\n \"\"\"\n na_channel = IonChannelHH(id=\"na_channel\", notes=\"Sodium channel for HH cell\", conductance=\"10pS\", species=\"na\")\n gate_m = GateHHRates(id=\"na_m\", instances=\"3\", notes=\"m gate for na channel\")\n\n m_forward_rate = HHRate(type=\"HHExpLinearRate\", rate=\"1per_ms\", midpoint=\"-40mV\", scale=\"10mV\")\n m_reverse_rate = HHRate(type=\"HHExpRate\", rate=\"4per_ms\", midpoint=\"-65mV\", scale=\"-18mV\")\n gate_m.forward_rate = m_forward_rate\n gate_m.reverse_rate = m_reverse_rate\n na_channel.gate_hh_rates.append(gate_m)\n\n gate_h = GateHHRates(id=\"na_h\", instances=\"1\", notes=\"h gate for na channel\")\n h_forward_rate = HHRate(type=\"HHExpRate\", rate=\"0.07per_ms\", midpoint=\"-65mV\", scale=\"-20mV\")\n h_reverse_rate = HHRate(type=\"HHSigmoidRate\", rate=\"1per_ms\", midpoint=\"-35mV\", scale=\"10mV\")\n gate_h.forward_rate = h_forward_rate\n gate_h.reverse_rate = h_reverse_rate\n na_channel.gate_hh_rates.append(gate_h)\n\n na_channel_doc = NeuroMLDocument(id=\"na_channel\", notes=\"Na channel for HH neuron\")\n na_channel_fn = \"HH_example_na_channel.nml\"\n na_channel_doc.ion_channel_hhs.append(na_channel)\n\n pynml.write_neuroml2_file(nml2_doc=na_channel_doc, nml2_file_name=na_channel_fn, validate=True)\n\n return na_channel_fn", "_____no_output_____" ], [ "def create_k_channel():\n \"\"\"Create the K channel\n\n This will create the K channel and save it to a file.\n It will also validate this file.\n\n :returns: name of the K channel file\n \"\"\"\n k_channel = IonChannelHH(id=\"k_channel\", notes=\"Potassium channel for HH cell\", conductance=\"10pS\", species=\"k\")\n gate_n = GateHHRates(id=\"k_n\", instances=\"4\", notes=\"n gate for k channel\")\n n_forward_rate = HHRate(type=\"HHExpLinearRate\", rate=\"0.1per_ms\", midpoint=\"-55mV\", scale=\"10mV\")\n n_reverse_rate = HHRate(type=\"HHExpRate\", rate=\"0.125per_ms\", midpoint=\"-65mV\", scale=\"-80mV\")\n gate_n.forward_rate = n_forward_rate\n gate_n.reverse_rate = n_reverse_rate\n 
k_channel.gate_hh_rates.append(gate_n)\n\n k_channel_doc = NeuroMLDocument(id=\"k_channel\", notes=\"k channel for HH neuron\")\n k_channel_fn = \"HH_example_k_channel.nml\"\n k_channel_doc.ion_channel_hhs.append(k_channel)\n\n pynml.write_neuroml2_file(nml2_doc=k_channel_doc, nml2_file_name=k_channel_fn, validate=True)\n\n return k_channel_fn", "_____no_output_____" ], [ "def create_leak_channel():\n \"\"\"Create a leak channel\n\n This will create the leak channel and save it to a file.\n It will also validate this file.\n\n :returns: name of leak channel nml file\n \"\"\"\n leak_channel = IonChannelHH(id=\"leak_channel\", conductance=\"10pS\", notes=\"Leak conductance\")\n leak_channel_doc = NeuroMLDocument(id=\"leak_channel\", notes=\"leak channel for HH neuron\")\n leak_channel_fn = \"HH_example_leak_channel.nml\"\n leak_channel_doc.ion_channel_hhs.append(leak_channel)\n\n pynml.write_neuroml2_file(nml2_doc=leak_channel_doc, nml2_file_name=leak_channel_fn, validate=True)\n\n return leak_channel_fn", "_____no_output_____" ] ], [ [ "### Create cell", "_____no_output_____" ] ], [ [ "def create_cell():\n \"\"\"Create the cell.\n\n :returns: name of the cell nml file\n \"\"\"\n # Create the nml file and add the ion channels\n hh_cell_doc = NeuroMLDocument(id=\"cell\", notes=\"HH cell\")\n hh_cell_fn = \"HH_example_cell.nml\"\n hh_cell_doc.includes.append(IncludeType(href=create_na_channel()))\n hh_cell_doc.includes.append(IncludeType(href=create_k_channel()))\n hh_cell_doc.includes.append(IncludeType(href=create_leak_channel()))\n\n # Define a cell\n hh_cell = Cell(id=\"hh_cell\", notes=\"A single compartment HH cell\")\n\n # Define its biophysical properties\n bio_prop = BiophysicalProperties(id=\"hh_b_prop\")\n # notes=\"Biophysical properties for HH cell\")\n\n # Membrane properties are a type of biophysical properties\n mem_prop = MembraneProperties()\n # Add membrane properties to the biophysical properties\n bio_prop.membrane_properties = mem_prop\n\n # Append to cell\n hh_cell.biophysical_properties = bio_prop\n\n # Channel density for Na channel\n na_channel_density = ChannelDensity(id=\"na_channels\", cond_density=\"120.0 mS_per_cm2\", erev=\"50.0 mV\", ion=\"na\", ion_channel=\"na_channel\")\n mem_prop.channel_densities.append(na_channel_density)\n\n # Channel density for k channel\n k_channel_density = ChannelDensity(id=\"k_channels\", cond_density=\"360 S_per_m2\", erev=\"-77mV\", ion=\"k\", ion_channel=\"k_channel\")\n mem_prop.channel_densities.append(k_channel_density)\n\n # Leak channel\n leak_channel_density = ChannelDensity(id=\"leak_channels\", cond_density=\"3.0 S_per_m2\", erev=\"-54.3mV\", ion=\"non_specific\", ion_channel=\"leak_channel\")\n mem_prop.channel_densities.append(leak_channel_density)\n\n # Other membrane properties\n mem_prop.spike_threshes.append(SpikeThresh(value=\"-20mV\"))\n mem_prop.specific_capacitances.append(SpecificCapacitance(value=\"1.0 uF_per_cm2\"))\n mem_prop.init_memb_potentials.append(InitMembPotential(value=\"-65mV\"))\n\n intra_prop = IntracellularProperties()\n intra_prop.resistivities.append(Resistivity(value=\"0.03 kohm_cm\"))\n\n # Add to biological properties\n bio_prop.intracellular_properties = intra_prop\n\n # Morphology\n morph = Morphology(id=\"hh_cell_morph\")\n # notes=\"Simple morphology for the HH cell\")\n seg = Segment(id=\"0\", name=\"soma\", notes=\"Soma segment\")\n # We want a diameter such that area is 1000 micro meter^2\n # surface area of a sphere is 4pi r^2 = 4pi diam^2\n diam = math.sqrt(1000 / math.pi)\n 
proximal = distal = Point3DWithDiam(x=\"0\", y=\"0\", z=\"0\", diameter=str(diam))\n seg.proximal = proximal\n seg.distal = distal\n morph.segments.append(seg)\n hh_cell.morphology = morph\n\n hh_cell_doc.cells.append(hh_cell)\n pynml.write_neuroml2_file(nml2_doc=hh_cell_doc, nml2_file_name=hh_cell_fn, validate=True)\n return hh_cell_fn", "_____no_output_____" ] ], [ [ "### Create a network", "_____no_output_____" ] ], [ [ "def create_network():\n \"\"\"Create the network\n\n :returns: name of network nml file\n \"\"\"\n net_doc = NeuroMLDocument(id=\"network\",\n notes=\"HH cell network\")\n net_doc_fn = \"HH_example_net.nml\"\n net_doc.includes.append(IncludeType(href=create_cell()))\n # Create a population: convenient to create many cells of the same type\n pop = Population(id=\"pop0\", notes=\"A population for our cell\", component=\"hh_cell\", size=1)\n # Input\n pulsegen = PulseGenerator(id=\"pg\", notes=\"Simple pulse generator\", delay=\"100ms\", duration=\"100ms\", amplitude=\"0.08nA\")\n\n exp_input = ExplicitInput(target=\"pop0[0]\", input=\"pg\")\n\n net = Network(id=\"single_hh_cell_network\", note=\"A network with a single population\")\n net_doc.pulse_generators.append(pulsegen)\n net.explicit_inputs.append(exp_input)\n net.populations.append(pop)\n net_doc.networks.append(net)\n\n pynml.write_neuroml2_file(nml2_doc=net_doc, nml2_file_name=net_doc_fn, validate=True)\n return net_doc_fn", "_____no_output_____" ] ], [ [ "## Plot the data we record", "_____no_output_____" ] ], [ [ "def plot_data(sim_id):\n \"\"\"Plot the sim data.\n\n Load the data from the file and plot the graph for the membrane potential\n using the pynml generate_plot utility function.\n\n :sim_id: ID of simulaton\n\n \"\"\"\n data_array = np.loadtxt(sim_id + \".dat\")\n pynml.generate_plot([data_array[:, 0]], [data_array[:, 1]], \"Membrane potential\", show_plot_already=False, save_figure_to=sim_id + \"-v.png\", xaxis=\"time (s)\", yaxis=\"membrane potential (V)\")\n pynml.generate_plot([data_array[:, 0]], [data_array[:, 2]], \"channel current\", show_plot_already=False, save_figure_to=sim_id + \"-i.png\", xaxis=\"time (s)\", yaxis=\"channel current (A)\")\n pynml.generate_plot([data_array[:, 0], data_array[:, 0]], [data_array[:, 3], data_array[:, 4]], \"current density\", labels=[\"Na\", \"K\"], show_plot_already=False, save_figure_to=sim_id + \"-iden.png\", xaxis=\"time (s)\", yaxis=\"current density (A_per_m2)\")", "_____no_output_____" ] ], [ [ "## Create and run the simulation\n\nCreate the simulation, run it, record data, and plot the recorded information.", "_____no_output_____" ] ], [ [ "def main():\n \"\"\"Main function\n\n Include the NeuroML model into a LEMS simulation file, run it, plot some\n data.\n \"\"\"\n # Simulation bits\n sim_id = \"HH_single_compartment_example_sim\"\n simulation = LEMSSimulation(sim_id=sim_id, duration=300, dt=0.01, simulation_seed=123)\n # Include the NeuroML model file\n simulation.include_neuroml2_file(create_network())\n # Assign target for the simulation\n simulation.assign_simulation_target(\"single_hh_cell_network\")\n\n # Recording information from the simulation\n simulation.create_output_file(id=\"output0\", file_name=sim_id + \".dat\")\n simulation.add_column_to_output_file(\"output0\", column_id=\"pop0[0]/v\", quantity=\"pop0[0]/v\")\n simulation.add_column_to_output_file(\"output0\", column_id=\"pop0[0]/iChannels\", quantity=\"pop0[0]/iChannels\")\n simulation.add_column_to_output_file(\"output0\", column_id=\"pop0[0]/na/iDensity\", 
quantity=\"pop0[0]/hh_b_prop/membraneProperties/na_channels/iDensity/\")\n simulation.add_column_to_output_file(\"output0\", column_id=\"pop0[0]/k/iDensity\", quantity=\"pop0[0]/hh_b_prop/membraneProperties/k_channels/iDensity/\")\n\n # Save LEMS simulation to file\n sim_file = simulation.save_to_file()\n\n # Run the simulation using the default jNeuroML simulator\n pynml.run_lems_with_jneuroml(sim_file, max_memory=\"2G\", nogui=True, plot=False)\n # Plot the data\n plot_data(sim_id)", "_____no_output_____" ], [ "if __name__ == \"__main__\":\n main()", "pyNeuroML >>> Written LEMS Simulation HH_single_compartment_example_sim to file: LEMS_HH_single_compartment_example_sim.xml\npyNeuroML >>> Generating plot: Membrane potential\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
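As a sanity check on the Na-channel m-gate defined in the cells above, the rates can be evaluated directly in NumPy, assuming the standard NeuroML forms HHExpLinearRate = rate*x/(1-exp(-x)) with x = (v - midpoint)/scale and HHExpRate = rate*exp((v - midpoint)/scale); parameter values are the ones used in the notebook:

```python
# Evaluate the m-gate forward/reverse rates and the derived steady state and
# time constant on a few membrane potentials (mV), using plain NumPy.
import numpy as np

def alpha_m(v):                      # HHExpLinearRate: 1 per_ms, midpoint -40 mV, scale 10 mV
    x = (v + 40.0) / 10.0
    return 1.0 * x / (1.0 - np.exp(-x))

def beta_m(v):                       # HHExpRate: 4 per_ms, midpoint -65 mV, scale -18 mV
    return 4.0 * np.exp((v + 65.0) / -18.0)

v = np.linspace(-80.0, 20.0, 5)      # avoid v = -40 mV, where the expression is 0/0
m_inf = alpha_m(v) / (alpha_m(v) + beta_m(v))
tau_m = 1.0 / (alpha_m(v) + beta_m(v))
print(np.round(m_inf, 3), np.round(tau_m, 3))
```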
d036b2300ca9175ef96fe5c0752c4e755118a6c7
19,901
ipynb
Jupyter Notebook
sagemaker-featurestore/feature_store_kms_key_encryption.ipynb
Amirosimani/amazon-sagemaker-examples
bc35e7a9da9e2258e77f98098254c2a8e308041a
[ "Apache-2.0" ]
2
2021-09-13T15:55:10.000Z
2021-09-16T11:09:58.000Z
sagemaker-featurestore/feature_store_kms_key_encryption.ipynb
Amirosimani/amazon-sagemaker-examples
bc35e7a9da9e2258e77f98098254c2a8e308041a
[ "Apache-2.0" ]
1
2019-07-01T23:54:20.000Z
2019-07-01T23:55:29.000Z
sagemaker-featurestore/feature_store_kms_key_encryption.ipynb
Amirosimani/amazon-sagemaker-examples
bc35e7a9da9e2258e77f98098254c2a8e308041a
[ "Apache-2.0" ]
2
2021-06-24T11:49:58.000Z
2021-06-24T11:54:01.000Z
36.249545
427
0.593086
[ [ [ "## Amazon SageMaker Feature Store: Encrypt Data in your Online or Offline Feature Store using KMS key", "_____no_output_____" ], [ "This notebook demonstrates how to enable encyption for your data in your online or offline Feature Store using KMS key. We start by showing how to programmatically create a KMS key, and how to apply it to the feature store creation process for data encryption. The last portion of this notebook demonstrates how to verify that your KMS key is being used to encerypt your data in your feature store.\n\n### Overview\n1. Create a KMS key.\n - How to create a KMS key programmatically using the KMS client from boto3?\n2. Attach role to your KMS key.\n - Attach the required entries to your policy for data encryption in your feature store.\n3. Create an online or offline feature store and apply it to your feature store creation process.\n - How to enable encryption for your online store?\n - How to enable encryption for your offline store?\n4. How to verify that your data is encrypted in your online or offline store?\n\n### Prerequisites\nThis notebook uses both `boto3` and Python SDK libraries, and the `Python 3 (Data Science)` kernel. This notebook also works with Studio, Jupyter, and JupyterLab. \n\n### Library Dependencies:\n* sagemaker>=2.0.0\n* numpy\n* pandas", "_____no_output_____" ] ], [ [ "import sagemaker\nimport sys\nimport boto3\nimport pandas as pd\nimport numpy as np\nimport json\n\noriginal_version = sagemaker.__version__\n%pip install 'sagemaker>=2.0.0'", "_____no_output_____" ] ], [ [ "### Set up ", "_____no_output_____" ] ], [ [ "sagemaker_session = sagemaker.Session()\ns3_bucket_name = sagemaker_session.default_bucket()\nprefix = \"sagemaker-featurestore-kms-demo\"\nrole = sagemaker.get_execution_role()\nregion = sagemaker_session.boto_region_name", "_____no_output_____" ] ], [ [ "Create a KMS client using boto3. Note that you can access your boto session through your sagemaker session, e.g.,`sagemaker_session`.", "_____no_output_____" ] ], [ [ "kms = sagemaker_session.boto_session.client(\"kms\")", "_____no_output_____" ] ], [ [ "### KMS Policy Template\n\nBelow is the policy template you will use for creating a KMS key. You will specify your role to grant it access to various KMS operations that will be used in the back-end for encrypting your data in your Online or Offline Feature Store. \n\n**Note**: You will need to substitute your Account number in for `123456789012` in the policy below for these lines: `arn:aws:cloudtrail:*:123456789012:trail/*`. \n\nIt is important to understand that the policy below will grant admin privileges for Customer Managed Keys (CMK) around viewing and revoking grants, decrypt and encrypt permissions on CloudTrail and full access permissions through Feature Store. Also, note that the the Feature Store Service creates additonal grants that are used for encryption purposes for your online store. 
", "_____no_output_____" ] ], [ [ "policy = {\n \"Version\": \"2012-10-17\",\n \"Id\": \"key-policy-feature-store\",\n \"Statement\": [\n {\n \"Sid\": \"Allow access through Amazon SageMaker Feature Store for all principals in the account that are authorized to use Amazon SageMaker Feature Store\",\n \"Effect\": \"Allow\",\n \"Principal\": {\"AWS\": role},\n \"Action\": [\n \"kms:Encrypt\",\n \"kms:Decrypt\",\n \"kms:DescribeKey\",\n \"kms:CreateGrant\",\n \"kms:RetireGrant\",\n \"kms:ReEncryptFrom\",\n \"kms:ReEncryptTo\",\n \"kms:GenerateDataKey\",\n \"kms:ListAliases\",\n \"kms:ListGrants\",\n ],\n \"Resource\": [\"*\"],\n \"Condition\": {\"StringLike\": {\"kms:ViaService\": \"sagemaker.*.amazonaws.com\"}},\n },\n {\n \"Sid\": \"Allow administrators to view the CMK and revoke grants\",\n \"Effect\": \"Allow\",\n \"Principal\": {\"AWS\": [role]},\n \"Action\": [\"kms:Describe*\", \"kms:Get*\", \"kms:List*\", \"kms:RevokeGrant\"],\n \"Resource\": [\"*\"],\n },\n {\n \"Sid\": \"Enable CloudTrail Encrypt Permissions\",\n \"Effect\": \"Allow\",\n \"Principal\": {\"Service\": \"cloudtrail.amazonaws.com\", \"AWS\": [role]},\n \"Action\": \"kms:GenerateDataKey*\",\n \"Resource\": \"*\",\n \"Condition\": {\n \"StringLike\": {\n \"kms:EncryptionContext:aws:cloudtrail:arn\": [\n \"arn:aws:cloudtrail:*:123456789012:trail/*\",\n \"arn:aws:cloudtrail:*:123456789012:trail/*\",\n ]\n }\n },\n },\n {\n \"Sid\": \"Enable CloudTrail log decrypt permissions\",\n \"Effect\": \"Allow\",\n \"Principal\": {\"AWS\": [role]},\n \"Action\": \"kms:Decrypt\",\n \"Resource\": [\"*\"],\n \"Condition\": {\"Null\": {\"kms:EncryptionContext:aws:cloudtrail:arn\": \"false\"}},\n },\n ],\n}", "_____no_output_____" ] ], [ [ "Create your new KMS key using the policy above and your KMS client. ", "_____no_output_____" ] ], [ [ "try:\n new_kms_key = kms.create_key(\n Policy=json.dumps(policy),\n Description=\"string\",\n KeyUsage=\"ENCRYPT_DECRYPT\",\n CustomerMasterKeySpec=\"SYMMETRIC_DEFAULT\",\n Origin=\"AWS_KMS\",\n )\n AliasName = \"my-new-kms-key\" ## provide a unique alias name\n kms.create_alias(\n AliasName=\"alias/\" + AliasName, TargetKeyId=new_kms_key[\"KeyMetadata\"][\"KeyId\"]\n )\n print(new_kms_key)\nexcept Exception as e:\n print(\"Error {}\".format(e))", "_____no_output_____" ] ], [ [ "Now that we have our KMS key created and the necessary operations added to our role, we now load in our data. ", "_____no_output_____" ] ], [ [ "customer_data = pd.read_csv(\"data/feature_store_introduction_customer.csv\")\norders_data = pd.read_csv(\"data/feature_store_introduction_orders.csv\")", "_____no_output_____" ], [ "customer_data.head()", "_____no_output_____" ], [ "orders_data.head()", "_____no_output_____" ], [ "customer_data.dtypes", "_____no_output_____" ], [ "orders_data.dtypes", "_____no_output_____" ] ], [ [ "### Creating Feature Groups\n\nWe first start by creating feature group names for customer_data and orders_data. Following this, we create two Feature Groups, one for customer_dat and another for orders_data", "_____no_output_____" ] ], [ [ "from time import gmtime, strftime, sleep\n\ncustomers_feature_group_name = \"customers-feature-group-\" + strftime(\"%d-%H-%M-%S\", gmtime())\norders_feature_group_name = \"orders-feature-group-\" + strftime(\"%d-%H-%M-%S\", gmtime())", "_____no_output_____" ] ], [ [ "Instantiate a FeatureGroup object for customers_data and orders_data. 
", "_____no_output_____" ] ], [ [ "from sagemaker.feature_store.feature_group import FeatureGroup\n\ncustomers_feature_group = FeatureGroup(\n name=customers_feature_group_name, sagemaker_session=sagemaker_session\n)\norders_feature_group = FeatureGroup(\n name=orders_feature_group_name, sagemaker_session=sagemaker_session\n)", "_____no_output_____" ], [ "import time\n\ncurrent_time_sec = int(round(time.time()))\n\nrecord_identifier_feature_name = \"customer_id\"", "_____no_output_____" ] ], [ [ "Append EventTime feature to your data frame. This parameter is required, and time stamps each data point.", "_____no_output_____" ] ], [ [ "customer_data[\"EventTime\"] = pd.Series([current_time_sec] * len(customer_data), dtype=\"float64\")\norders_data[\"EventTime\"] = pd.Series([current_time_sec] * len(orders_data), dtype=\"float64\")", "_____no_output_____" ], [ "customer_data.head()", "_____no_output_____" ], [ "orders_data.head()", "_____no_output_____" ] ], [ [ "Load feature definitions to your feature group. ", "_____no_output_____" ] ], [ [ "customers_feature_group.load_feature_definitions(data_frame=customer_data)\norders_feature_group.load_feature_definitions(data_frame=orders_data)", "_____no_output_____" ] ], [ [ "### How to create an Online or Offline Feature Store that uses your KMS key for encryption?\n\nBelow we create two feature groups, `customers_feature_group` and `orders_feature_group` respectively, and explain how use your KMS key to securely encrypt your data in your online or offline feature store. \n\n### How to create an Online Feature store with your KMS key? \nTo encrypt data in your online feature store, set `enable_online_store` to be `True` and specify your KMS key as parameter `online_store_kms_key_id`. You will need to substitute your Account number in `arn:aws:kms:us-east-1:123456789012:key/` replacing `123456789012` with your Account number.\n\n```\ncustomers_feature_group.create(\n s3_uri=f\"s3://{s3_bucket_name}/{prefix}\",\n record_identifier_name=record_identifier_feature_name,\n event_time_feature_name=\"EventTime\",\n role_arn=role,\n enable_online_store=True, \n online_store_kms_key_id = 'arn:aws:kms:us-east-1:123456789012:key/'+ new_kms_key['KeyMetadata']['KeyId']\n)\n\norders_feature_group.create(\n s3_uri=f\"s3://{s3_bucket_name}/{prefix}\",\n record_identifier_name=record_identifier_feature_name,\n event_time_feature_name=\"EventTime\",\n role_arn=role,\n enable_online_store=True,\n online_store_kms_key_id = 'arn:aws:kms:us-east-1:123456789012:key/'+new_kms_key['KeyMetadata']['KeyId']\n)\n```\n\n### How to create an Offline Feature store with your KMS key? \nSimilar to the above, set `enable_online_store` to be `False` and then specify your KMS key as parameter `offline_store_kms_key_id`. 
You will need to substitute your Account number in `arn:aws:kms:us-east-1:123456789012:key/` replacing `123456789012` with your Account number.\n\n```\ncustomers_feature_group.create(\n s3_uri=f\"s3://{s3_bucket_name}/{prefix}\",\n record_identifier_name=record_identifier_feature_name,\n event_time_feature_name=\"EventTime\",\n role_arn=role,\n enable_online_store=False, \n offline_store_kms_key_id = 'arn:aws:kms:us-east-1:123456789012:key/'+ new_kms_key['KeyMetadata']['KeyId']\n)\n\norders_feature_group.create(\n s3_uri=f\"s3://{s3_bucket_name}/{prefix}\",\n record_identifier_name=record_identifier_feature_name,\n event_time_feature_name=\"EventTime\",\n role_arn=role,\n enable_online_store=False,\n offline_store_kms_key_id = 'arn:aws:kms:us-east-1:123456789012:key/'+new_kms_key['KeyMetadata']['KeyId']\n)\n```\n", "_____no_output_____" ], [ "For this example we create an online feature store that encrypts your data using your KMS key.\n\n**Note**: You will need to substitute your Account number in `arn:aws:kms:us-east-1:123456789012:key/` replacing `123456789012` with your Account number.", "_____no_output_____" ] ], [ [ "customers_feature_group.create(\n s3_uri=f\"s3://{s3_bucket_name}/{prefix}\",\n record_identifier_name=record_identifier_feature_name,\n event_time_feature_name=\"EventTime\",\n role_arn=role,\n enable_online_store=False,\n offline_store_kms_key_id=\"arn:aws:kms:us-east-1:123456789012:key/\"\n + new_kms_key[\"KeyMetadata\"][\"KeyId\"],\n)\n\norders_feature_group.create(\n s3_uri=f\"s3://{s3_bucket_name}/{prefix}\",\n record_identifier_name=record_identifier_feature_name,\n event_time_feature_name=\"EventTime\",\n role_arn=role,\n enable_online_store=False,\n offline_store_kms_key_id=\"arn:aws:kms:us-east-1:123456789012:key/\"\n + new_kms_key[\"KeyMetadata\"][\"KeyId\"],\n)", "_____no_output_____" ] ], [ [ "### How to verify that your KMS key is being used to encrypt your data in your Online or Offline Feature Store? \n\n### Online Store Verification\nTo demonstrate that your data is being encrypted in your Online store, use your `kms` client from `boto3` to list the grants under your KMS key. It should show 'SageMakerFeatureStore-' and the name of your feature group you created and should list these operations under Operations:`['Decrypt','Encrypt','GenerateDataKey','ReEncryptFrom','ReEncryptTo','CreateGrant','RetireGrant','DescribeKey']`\n\nAn alternative way for you to check that your data is encrypted in your Online store is to check [Cloud Trails](https://console.aws.amazon.com/cloudtrail/) and navigate to your account name. Once here, under General details you should see that SSE-KMS encryption is enabled and with your AWS KMS key shown below it. Below is a screenshot showing this: \n\n![Cloud Trails](images/cloud-trails.png)\n\n### Offline Store Verification\nTo verify that your data in being encrypted in your Offline store, you must navigate to your S3 bucket through the [Console](https://console.aws.amazon.com/s3/home?region=us-east-1) and then navigate to your prefix, offline store, feature group name and into the /data/ folder. Once here, select a parquet file which is the file containing your feature group data. For this example, the directory path in S3 was this: \n\n`Amazon S3/MYBUCKET/PREFIX/123456789012/sagemaker/region/offline-store/customers-feature-group-23-22-44-47/data/year=2021/month=03/day=23/hour=22/20210323T224448Z_IdfObJjhpqLQ5rmG.parquet.` \n\nAfter selecting the parquet file, navigate to Server-side encryption settings. 
It should mention that Default encryption is enabled and reference (SSE-KMS) under server-side encryption. If this show, then your data is being encrypted in the offline store. Below is a screenshot of how this should look like in the console: \n\n![Feature Store Policy](images/s3-sse-enabled.png)", "_____no_output_____" ], [ "For this example since we created a secure Online store using our KMS key, below we use `list_grants` to check that our feature group and required grants are present under operations. ", "_____no_output_____" ] ], [ [ "kms.list_grants(\n KeyId=\"arn:aws:kms:us-east-1:123456789012:key/\" + new_kms_key[\"KeyMetadata\"][\"KeyId\"]\n)", "_____no_output_____" ] ], [ [ "### Clean Up Resources\nRemove the Feature Groups we created. ", "_____no_output_____" ] ], [ [ "customers_feature_group.delete()\norders_feature_group.delete()", "_____no_output_____" ], [ "# preserve original sagemaker version\n%pip install 'sagemaker=={}'.format(original_version)", "_____no_output_____" ] ], [ [ "### Next Steps\n\nFor more information on how to use KMS to encrypt your data in your Feature Store, see [Feature Store Security](https://docs.aws.amazon.com/sagemaker/latest/dg/feature-store-security.html). For general information on KMS keys and CMK, see [Customer Managed Keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#master_keys). ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
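A programmatic counterpart to the console-based offline-store check described above, assuming the `s3_bucket_name` and `prefix` values from that notebook's setup cell and that at least one parquet file has already been written to the offline store; `head_object` exposes the server-side-encryption headers of an object:

```python
# Inspect one offline-store parquet object and print its SSE settings.
# Expect "aws:kms" plus the ARN of the CMK when encryption is active.
import boto3

s3 = boto3.client("s3")
bucket = s3_bucket_name              # defined in the notebook's setup
key_prefix = f"{prefix}/"            # offline-store prefix used above

page = s3.list_objects_v2(Bucket=bucket, Prefix=key_prefix, MaxKeys=50)
for obj in page.get("Contents", []):
    if obj["Key"].endswith(".parquet"):
        head = s3.head_object(Bucket=bucket, Key=obj["Key"])
        print(obj["Key"],
              head.get("ServerSideEncryption"),   # expected: "aws:kms"
              head.get("SSEKMSKeyId"))            # key ARN used for encryption
        break
```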
d036b329752bb82fc319556b1f90acff5c887417
7,004
ipynb
Jupyter Notebook
notebooks/ensemble_hyperparameters.ipynb
lesteve/scikit-learn-mooc
b822586b98e71dbbf003bde86be57412cb170291
[ "CC-BY-4.0" ]
634
2020-03-10T15:42:46.000Z
2022-03-28T15:19:00.000Z
notebooks/ensemble_hyperparameters.ipynb
lesteve/scikit-learn-mooc
b822586b98e71dbbf003bde86be57412cb170291
[ "CC-BY-4.0" ]
467
2020-03-10T15:42:31.000Z
2022-03-31T09:10:04.000Z
notebooks/ensemble_hyperparameters.ipynb
lesteve/scikit-learn-mooc
b822586b98e71dbbf003bde86be57412cb170291
[ "CC-BY-4.0" ]
314
2020-03-11T14:28:26.000Z
2022-03-31T12:01:02.000Z
41.443787
135
0.647915
[ [ [ "# Hyperparameter tuning\n\nIn the previous section, we did not discuss the parameters of random forest\nand gradient-boosting. However, there are a couple of things to keep in mind\nwhen setting these.\n\nThis notebook gives crucial information regarding how to set the\nhyperparameters of both random forest and gradient boosting decision tree\nmodels.\n\n<div class=\"admonition caution alert alert-warning\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Caution!</p>\n<p class=\"last\">For the sake of clarity, no cross-validation will be used to estimate the\ntesting error. We are only showing the effect of the parameters\non the validation set of what should be the inner cross-validation.</p>\n</div>\n\n## Random forest\n\nThe main parameter to tune for random forest is the `n_estimators` parameter.\nIn general, the more trees in the forest, the better the generalization\nperformance will be. However, it will slow down the fitting and prediction\ntime. The goal is to balance computing time and generalization performance when\nsetting the number of estimators when putting such learner in production.\n\nThe `max_depth` parameter could also be tuned. Sometimes, there is no need\nto have fully grown trees. However, be aware that with random forest, trees\nare generally deep since we are seeking to overfit the learners on the\nbootstrap samples because this will be mitigated by combining them.\nAssembling underfitted trees (i.e. shallow trees) might also lead to an\nunderfitted forest.", "_____no_output_____" ] ], [ [ "from sklearn.datasets import fetch_california_housing\nfrom sklearn.model_selection import train_test_split\n\ndata, target = fetch_california_housing(return_X_y=True, as_frame=True)\ntarget *= 100 # rescale the target in k$\ndata_train, data_test, target_train, target_test = train_test_split(\n data, target, random_state=0)", "_____no_output_____" ], [ "import pandas as pd\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.ensemble import RandomForestRegressor\n\nparam_grid = {\n \"n_estimators\": [10, 20, 30],\n \"max_depth\": [3, 5, None],\n}\ngrid_search = GridSearchCV(\n RandomForestRegressor(n_jobs=2), param_grid=param_grid,\n scoring=\"neg_mean_absolute_error\", n_jobs=2,\n)\ngrid_search.fit(data_train, target_train)\n\ncolumns = [f\"param_{name}\" for name in param_grid.keys()]\ncolumns += [\"mean_test_score\", \"rank_test_score\"]\ncv_results = pd.DataFrame(grid_search.cv_results_)\ncv_results[\"mean_test_score\"] = -cv_results[\"mean_test_score\"]\ncv_results[columns].sort_values(by=\"rank_test_score\")", "_____no_output_____" ] ], [ [ "We can observe that in our grid-search, the largest `max_depth` together\nwith the largest `n_estimators` led to the best generalization performance.\n\n## Gradient-boosting decision trees\n\nFor gradient-boosting, parameters are coupled, so we cannot set the\nparameters one after the other anymore. The important parameters are\n`n_estimators`, `max_depth`, and `learning_rate`.\n\nLet's first discuss the `max_depth` parameter.\nWe saw in the section on gradient-boosting that the algorithm fits the error\nof the previous tree in the ensemble. Thus, fitting fully grown trees will\nbe detrimental.\nIndeed, the first tree of the ensemble would perfectly fit (overfit) the data\nand thus no subsequent tree would be required, since there would be no\nresiduals.\nTherefore, the tree used in gradient-boosting should have a low depth,\ntypically between 3 to 8 levels. 
Having very weak learners at each step will\nhelp reducing overfitting.\n\nWith this consideration in mind, the deeper the trees, the faster the\nresiduals will be corrected and less learners are required. Therefore,\n`n_estimators` should be increased if `max_depth` is lower.\n\nFinally, we have overlooked the impact of the `learning_rate` parameter\nuntil now. When fitting the residuals, we would like the tree\nto try to correct all possible errors or only a fraction of them.\nThe learning-rate allows you to control this behaviour.\nA small learning-rate value would only correct the residuals of very few\nsamples. If a large learning-rate is set (e.g., 1), we would fit the\nresiduals of all samples. So, with a very low learning-rate, we will need\nmore estimators to correct the overall error. However, a too large\nlearning-rate tends to obtain an overfitted ensemble,\nsimilar to having a too large tree depth.", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import GradientBoostingRegressor\n\nparam_grid = {\n \"n_estimators\": [10, 30, 50],\n \"max_depth\": [3, 5, None],\n \"learning_rate\": [0.1, 1],\n}\ngrid_search = GridSearchCV(\n GradientBoostingRegressor(), param_grid=param_grid,\n scoring=\"neg_mean_absolute_error\", n_jobs=2\n)\ngrid_search.fit(data_train, target_train)\n\ncolumns = [f\"param_{name}\" for name in param_grid.keys()]\ncolumns += [\"mean_test_score\", \"rank_test_score\"]\ncv_results = pd.DataFrame(grid_search.cv_results_)\ncv_results[\"mean_test_score\"] = -cv_results[\"mean_test_score\"]\ncv_results[columns].sort_values(by=\"rank_test_score\")", "_____no_output_____" ] ], [ [ "<div class=\"admonition caution alert alert-warning\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Caution!</p>\n<p class=\"last\">Here, we tune the <tt class=\"docutils literal\">n_estimators</tt> but be aware that using early-stopping as\nin the previous exercise will be better.</p>\n</div>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
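A sketch of the early-stopping alternative that the closing caution above alludes to, assuming the same `data_train` / `target_train` split; the particular `n_iter_no_change` and `validation_fraction` values are illustrative rather than tuned:

```python
# Let the model pick its own number of boosting stages: fitting stops once the
# validation loss has not improved for n_iter_no_change consecutive rounds.
from sklearn.ensemble import GradientBoostingRegressor

gbrt = GradientBoostingRegressor(
    n_estimators=1_000,          # generous budget; early stopping trims it
    learning_rate=0.1,
    max_depth=5,
    validation_fraction=0.1,     # internal hold-out used to monitor the loss
    n_iter_no_change=5,          # stop after 5 rounds without improvement
    random_state=0,
)
gbrt.fit(data_train, target_train)
print(f"Boosting stages actually fitted: {gbrt.n_estimators_}")
```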
d036c04fc0817e3b1b9ab69663d498c03a8fd466
5,066
ipynb
Jupyter Notebook
ipynb/Germany-Rheinland-Pfalz-SK-Mainz.ipynb
oscovida/oscovida.github.io
c74d6da79feda1b5ccce107ad3acd48cf0e74c1c
[ "CC-BY-4.0" ]
2
2020-06-19T09:16:14.000Z
2021-01-24T17:47:56.000Z
ipynb/Germany-Rheinland-Pfalz-SK-Mainz.ipynb
oscovida/oscovida.github.io
c74d6da79feda1b5ccce107ad3acd48cf0e74c1c
[ "CC-BY-4.0" ]
8
2020-04-20T16:49:49.000Z
2021-12-25T16:54:19.000Z
ipynb/Germany-Rheinland-Pfalz-SK-Mainz.ipynb
oscovida/oscovida.github.io
c74d6da79feda1b5ccce107ad3acd48cf0e74c1c
[ "CC-BY-4.0" ]
4
2020-04-20T13:24:45.000Z
2021-01-29T11:12:12.000Z
30.335329
186
0.521713
[ [ [ "# Germany: SK Mainz (Rheinland-Pfalz)\n\n* Homepage of project: https://oscovida.github.io\n* Plots are explained at http://oscovida.github.io/plots.html\n* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Rheinland-Pfalz-SK-Mainz.ipynb)", "_____no_output_____" ] ], [ [ "import datetime\nimport time\n\nstart = datetime.datetime.now()\nprint(f\"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}\")", "_____no_output_____" ], [ "%config InlineBackend.figure_formats = ['svg']\nfrom oscovida import *", "_____no_output_____" ], [ "overview(country=\"Germany\", subregion=\"SK Mainz\", weeks=5);", "_____no_output_____" ], [ "overview(country=\"Germany\", subregion=\"SK Mainz\");", "_____no_output_____" ], [ "compare_plot(country=\"Germany\", subregion=\"SK Mainz\", dates=\"2020-03-15:\");\n", "_____no_output_____" ], [ "# load the data\ncases, deaths = germany_get_region(landkreis=\"SK Mainz\")\n\n# get population of the region for future normalisation:\ninhabitants = population(country=\"Germany\", subregion=\"SK Mainz\")\nprint(f'Population of country=\"Germany\", subregion=\"SK Mainz\": {inhabitants} people')\n\n# compose into one table\ntable = compose_dataframe_summary(cases, deaths)\n\n# show tables with up to 1000 rows\npd.set_option(\"max_rows\", 1000)\n\n# display the table\ntable", "_____no_output_____" ] ], [ [ "# Explore the data in your web browser\n\n- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Rheinland-Pfalz-SK-Mainz.ipynb)\n- and wait (~1 to 2 minutes)\n- Then press SHIFT+RETURN to advance code cell to code cell\n- See http://jupyter.org for more details on how to use Jupyter Notebook", "_____no_output_____" ], [ "# Acknowledgements:\n\n- Johns Hopkins University provides data for countries\n- Robert Koch Institute provides data for within Germany\n- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)\n- Open source and scientific computing community for the data tools\n- Github for hosting repository and html files\n- Project Jupyter for the Notebook and binder service\n- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))\n\n--------------------", "_____no_output_____" ] ], [ [ "print(f\"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and \"\n f\"deaths at {fetch_deaths_last_execution()}.\")", "_____no_output_____" ], [ "# to force a fresh download of data, run \"clear_cache()\"", "_____no_output_____" ], [ "print(f\"Notebook execution took: {datetime.datetime.now()-start}\")\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ] ]
d036c53deab429c4a1979118e09fda3b3bc7db3b
10,420
ipynb
Jupyter Notebook
notebooks/Example.ipynb
Theerit/kampuan_api
84942bad359018face7e7cf16780ab5bb097a9bf
[ "MIT" ]
7
2020-12-29T16:29:43.000Z
2021-08-06T17:55:08.000Z
notebooks/Example.ipynb
Theerit/kampuan_api
84942bad359018face7e7cf16780ab5bb097a9bf
[ "MIT" ]
6
2021-01-03T12:02:00.000Z
2021-01-10T05:00:28.000Z
notebooks/Example.ipynb
Theerit/kampuan_api
84942bad359018face7e7cf16780ab5bb097a9bf
[ "MIT" ]
1
2020-12-31T07:49:34.000Z
2020-12-31T07:49:34.000Z
26.446701
170
0.384549
[ [ [ "%load_ext autoreload\n%autoreload 2", "The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n" ], [ "from kampuan import puan_kam,puan_kam_all,pun_wunayook", "_____no_output_____" ] ], [ [ "# Auto assume puan kam", "_____no_output_____" ] ], [ [ "case_1 =['มะนาวต่างดุ๊ด',\n 'กาเป็นหมู',\n 'ก้างใหญ่',\n 'อะหรี่ดอย',\n 'นอนแล้ว',\n 'ตะปู',\n 'นักเรียน',\n 'ขนม',\n 'เรอทัก',\n 'สวัสดี',\n ['เป็ด','กิน','ไก่'],\n 'ภูมิหล่อ']\nfor k in case_1:\n print('input: ',k)\n print('output: ',puan_kam(k))\n print('===========')", "input: มะนาวต่างดุ๊ด\noutput: [['มุด', 'นาว', 'ต่าง', 'ด๊ะ'], ['มะ', 'นุด', 'ต่าง', 'ดาว']]\n===========\ninput: กาเป็นหมู\noutput: ['กู', 'เป็น', 'หมา']\n===========\ninput: ก้างใหญ่\noutput: ['ใก้', 'หญ่าง']\n===========\ninput: อะหรี่ดอย\noutput: ['อะ', 'หร่อย', 'ดี']\n===========\ninput: นอนแล้ว\noutput: ['แนว', 'ล้อน']\n===========\ninput: ตะปู\noutput: ['ตู', 'ปะ']\n===========\ninput: นักเรียน\noutput: ['เนียน', 'รัก']\n===========\ninput: ขนม\ncheck this case not sure\noutput: ['ขม', 'หนะ']\n===========\ninput: เรอทัก\noutput: ['รัก', 'เทอ']\n===========\ninput: สวัสดี\noutput: ['สะ', 'วี', 'ดัส']\n===========\ninput: ['เป็ด', 'กิน', 'ไก่']\noutput: ['เป็ด', 'ไก', 'กิ่น']\n===========\ninput: ภูมิหล่อ\noutput: ['ภะ', 'หมอ', 'หลู่']\n===========\n" ] ], [ [ "# Puan all case", "_____no_output_____" ] ], [ [ "for k in case_1:\n print(k)\n print(puan_kam_all(k))\n print('===========')", "มะนาวต่างดุ๊ด\n{0: ['มุด', 'นาว', 'ต่าง', 'ด๊ะ'], 1: ['มะ', 'นุด', 'ต่าง', 'ดาว']}\n===========\nกาเป็นหมู\n{0: ['กู', 'เป็น', 'หมา'], 1: ['กา', 'ปู', 'เหม็น']}\n===========\nก้างใหญ่\n{0: ['ใก้', 'หญ่าง'], 1: ['ใก้', 'หญ่าง']}\n===========\nอะหรี่ดอย\n{0: ['ออย', 'หรี่', 'ดะ'], 1: ['อะ', 'หร่อย', 'ดี']}\n===========\nนอนแล้ว\n{0: ['แนว', 'ล้อน'], 1: ['แนว', 'ล้อน']}\n===========\nตะปู\n{0: ['ตู', 'ปะ'], 1: ['ตู', 'ปะ']}\n===========\nนักเรียน\n{0: ['เนียน', 'รัก'], 1: ['เนียน', 'รัก']}\n===========\nขนม\ncheck this case not sure\ncheck this case not sure\n{0: ['ขม', 'หนะ'], 1: ['ขม', 'หนะ']}\n===========\nเรอทัก\n{0: ['รัก', 'เทอ'], 1: ['รัก', 'เทอ']}\n===========\nสวัสดี\n{0: ['ซี', 'หวัส', 'ดะ'], 1: ['สะ', 'วี', 'ดัส']}\n===========\n['เป็ด', 'กิน', 'ไก่']\n{0: ['ไป่', 'กิน', 'เก็ด'], 1: ['เป็ด', 'ไก', 'กิ่น']}\n===========\nภูมิหล่อ\n{0: ['ผ่อ', 'หูมิ', 'ละ'], 1: ['ภะ', 'หมอ', 'หลู่']}\n===========\n" ] ], [ [ "# pun wunnayook", "_____no_output_____" ] ], [ [ "pun_wunayook('กา')", "_____no_output_____" ], [ "pun_wunayook('กาไปไหน')", "_____no_output_____" ], [ "pun_wunayook('ขาวจังเลย')", "_____no_output_____" ], [ "for k in case_1:\n print(k)\n print(pun_wunayook(k))\n print('===========')", "WARNING:root:มะ with tone 0 not availabe (Dead word type), return normalize\nWARNING:root:ดุ๊ด with tone 0 not availabe (Dead word type), return normalize\nWARNING:root:removing taikoo from เป็น\nWARNING:root:removing taikoo from เป็น\nWARNING:root:removing taikoo from เป็น\nWARNING:root:removing taikoo from เป็น\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
d036cc41914884613dec81926fe778608ec6e452
67,009
ipynb
Jupyter Notebook
src/nn/transfer learning-plots.ipynb
voschezang/trash-image-classification
30ec70264b58a14a54db62081980726b92ce7a0c
[ "MIT" ]
null
null
null
src/nn/transfer learning-plots.ipynb
voschezang/trash-image-classification
30ec70264b58a14a54db62081980726b92ce7a0c
[ "MIT" ]
null
null
null
src/nn/transfer learning-plots.ipynb
voschezang/trash-image-classification
30ec70264b58a14a54db62081980726b92ce7a0c
[ "MIT" ]
null
null
null
84.076537
24,484
0.83538
[ [ [ "from keras import applications\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras import optimizers\nfrom keras.models import Sequential\nfrom keras.layers import Dropout, Flatten, Dense\nfrom keras.models import model_from_json\nimport os, sklearn, pandas, numpy as np, random\nfrom sklearn import svm\nimport skimage, skimage.io, skimage.filters\nimport matplotlib.pyplot as plt\nfrom keras.callbacks import TensorBoard\nfrom sklearn.utils import shuffle\nimport imp\nfrom sklearn.preprocessing import LabelBinarizer\n# from pcanet import PCANet\nfrom pcanet import PCANet\nimport numpy as np\n%matplotlib inline", "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\nUsing TensorFlow backend.\n" ], [ "# set cwd back to default\nos.chdir('../')\nos.getcwd()", "_____no_output_____" ], [ "# custom scripts\nimport config # params, constants\nimport data, models # functions that mutate outr data\n# from utils import utils, plot # custom functions, in local environment", "_____no_output_____" ], [ "import data # src/data.py\ndataset = data.init_dataset()", "_____no_output_____" ], [ "os.listdir('../datasets/models')", "_____no_output_____" ] ], [ [ "### load a model", "_____no_output_____" ] ], [ [ "# load json and create model\n# load json and create model\ndef load_model(filename, weights):\n with open(filename, 'r') as json: # cnn_transfer_augm\n loaded_model_json = json.read()\n loaded_model = model_from_json(loaded_model_json)\n # load weights into new model\n loaded_model.load_weights(weights)\n print(\"Loaded model from disk\")\n optimizer = optimizers.Adam(lr=0.001)\n loaded_model.compile(loss = \"categorical_crossentropy\", optimizer = optimizer, metrics=['accuracy',\n 'mean_squared_error','categorical_crossentropy','top_k_categorical_accuracy'])\n print('compiled model')\n return loaded_model", "_____no_output_____" ], [ "model_augment = config.dataset_dir + 'models/cnntransfer_augm.json'\nmodel_augment_weights = config.dataset_dir + 'models/cnntransferweights_augmen.h5'\nmodel_default = config.dataset_dir + 'models/cnntransfer.json'\nmodel_default_weights = config.dataset_dir + 'models/cnntransferweights.h5'\n\n# augment = load_model(model_augment, model_augment_weights)\ndefault = load_model(model_default, model_default_weights)\naugment = load_model(model_augment, model_augment_weights)", "Loaded model from disk\ncompiled model\nLoaded model from disk\ncompiled model\n" ], [ "# pick the n classes with the most occuring instances\namt = 5\nclasses = data.top_classes(dataset.labels, amt)\nclasses", "_____no_output_____" ], [ "maxx = 100\nmax_train = 100\nx_test, n = data.extract_topx_classes(dataset, classes, 'test', maxx, max_train)\nn", "_____no_output_____" ], [ "x_test, y_test, n = data.extract_all_test(dataset, x_test)", "extract all data: 500\n" ], [ "# y_train, y_test, y_validation = data.labels_to_vectors(dataset, y_train, y_test, y_validation)\ny_test = data.one_hot(y_test)\ninput_shape = y_test.shape[1:] # = shape of an individual image (matrix)\noutput_length = (y_test[0]).shape[0] # = length of an individual label\noutput_length", "_____no_output_____" ] ], [ [ "## running tests\n", "_____no_output_____" ] ], [ [ "# import sklearn.metrics.confusion_matrix\n\ndef 
evaluate(model):\n cvscores = []\n scores = model.evaluate(x_test, y_test, verbose=0)\n print(\"%s: %.2f%%\" % (model.metrics_names[1], scores[1]*100))\n cvscores.append(scores[1] * 100)\n print(\"%.2f%% (+/- %.2f%%)\" % (np.mean(cvscores), np.std(cvscores)))\n\n# evaluate(model_final_augmentation)", "_____no_output_____" ], [ "import tensorflow as tf\nfrom sklearn.metrics import confusion_matrix\n\ndef test1(model, x_test, y_test):\n y_pred_class = model.predict(x_test)\n # con = tf.confusion_matrix(labels=y_test, predictions=y_pred_class )\n # print(con)\n\n y_test_non_category = [ np.argmax(t) for t in y_test ]\n y_predict_non_category = [ np.argmax(t) for t in y_pred_class ]\n\n\n conf_mat = confusion_matrix(y_test_non_category, y_predict_non_category)\n print(conf_mat)\n return conf_mat", "_____no_output_____" ], [ "c1 = test1(default, x_test, y_test)", "[[60 3 1 25 11]\n [ 0 85 2 0 13]\n [ 0 8 76 0 16]\n [ 0 2 0 96 2]\n [ 0 3 0 2 95]]\n" ], [ "c2 = test1(augment, x_test, y_test)", "[[ 3 2 5 85 5]\n [ 0 76 19 2 3]\n [ 0 13 72 9 6]\n [ 0 0 2 95 3]\n [ 0 36 22 5 37]]\n" ], [ "# http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html\n# comparable but different from: mlxtend.plotting.plot_confusion_matrix\nimport itertools\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn import svm, datasets\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import confusion_matrix\n\n# import some data to play with\niris = datasets.load_iris()\nX = iris.data\ny = iris.target\nclass_names = iris.target_names\n\n# Split the data into a training set and a test set\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)\n\n# Run classifier, using a model that is too regularized (C too low) to see\n# the impact on the results\nclassifier = svm.SVC(kernel='linear', C=0.01)\ny_pred = classifier.fit(X_train, y_train).predict(X_test)\n\n\ndef plot_confusion_matrix(cm, classes,\n normalize=False,\n title='Confusion matrix',\n cmap=plt.cm.Blues):\n \"\"\"\n This function prints and plots the confusion matrix.\n Normalization can be applied by setting `normalize=True`.\n \"\"\"\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n print(\"Normalized confusion matrix\")\n else:\n print('Confusion matrix, without normalization')\n\n print(cm)\n\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks, classes, rotation=45)\n plt.yticks(tick_marks, classes)\n\n fmt = '.2f' if normalize else 'd'\n thresh = cm.max() / 2.\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n plt.text(j, i, format(cm[i, j], fmt),\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')", "_____no_output_____" ], [ "labels = np.array(['Glass','Paper','Cardboard','Plastic','Metal'])\nlabels = np.array(['Paper', 'Glass', 'Plastic', 'Metal', 'Cardboard'])", "_____no_output_____" ], [ "from matplotlib import rcParams\nrcParams['font.family'] = 'sans-serif'\nrcParams['font.sans-serif'] = [\n 'Times New Roman', 'Tahoma', 'DejaVu Sans', 'Lucida Grande', 'Verdana'\n]\nrcParams['font.size'] = 12", "_____no_output_____" ], [ "labels", "_____no_output_____" ], [ "c_ = c1", "_____no_output_____" ], [ "c1 = np.array([[29, 1, 1, 24, 5],\n [ 0, 41, 2, 1, 16],\n [ 0, 11, 34, 1, 14],\n [ 0, 0, 0, 55, 5],\n [ 0, 5, 
0, 4, 51]])", "_____no_output_____" ], [ "# from mlxtend.plotting import plot_confusion_matrix\nplt.figure()\nplot_confusion_matrix(c1, labels, title='Confusion matrix - default')", "Confusion matrix, without normalization\n[[29 1 1 24 5]\n [ 0 41 2 1 16]\n [ 0 11 34 1 14]\n [ 0 0 0 55 5]\n [ 0 5 0 4 51]]\n" ], [ "c2 = np.array([[ 3, 2, 5, 85, 5],\n [ 0, 76, 19, 2, 3,],\n [ 0, 13, 72, 9, 6,],\n [ 0, 0, 2, 95, 3,],\n [ 0, 36, 22, 5, 37]])", "_____no_output_____" ], [ "# from mlxtend.plotting import plot_confusion_matrix\nplt.figure()\nplot_confusion_matrix(c2, labels, title='Confusion matrix - augmented')", "Confusion matrix, without normalization\n[[ 3 2 5 85 5]\n [ 0 76 19 2 3]\n [ 0 13 72 9 6]\n [ 0 0 2 95 3]\n [ 0 36 22 5 37]]\n" ] ], [ [ "## T-tests\nttest for the TP per class, between the 2 networks", "_____no_output_____" ] ], [ [ "tp_c1 = c1.diagonal()\ntp_c2 = c2.diagonal()\nprint(tp_c1)\nprint(tp_c2)", "[29 41 34 55 51]\n[ 3 76 72 95 37]\n" ], [ "from utils import utils", "_____no_output_____" ], [ "utils.ttest(0.05, tp_c1, tp_c2)", "RESULT - NO significant difference found\n" ], [ "utils.ttest(0.05, tp_c1.flatten(), tp_c2.flatten())", "RESULT - NO significant difference found\n" ], [ "def select_not_diagonal(arr=[]):\n a = arr.copy()\n np.fill_diagonal(a, -1)\n return [x for x in list(a.flatten()) if x > -1]", "_____no_output_____" ], [ "# everything nog at the diagonal axes is either fp or fn\n# with fn or fp depending on the perspective (which class == p)\nc1_ = select_not_diagonal(c1)\nc2_ = select_not_diagonal(c2)\nprint(c1_)\nprint(c2_)", "[1, 1, 24, 5, 0, 2, 1, 16, 0, 11, 1, 14, 0, 0, 0, 5, 0, 5, 0, 4]\n[2, 5, 85, 5, 0, 19, 2, 3, 0, 13, 9, 6, 0, 0, 2, 3, 0, 36, 22, 5]\n" ], [ "utils.ttest(0.05, c1_, c2_)", "RESULT - NO significant difference found\n" ], [ "def recall_precision(cm=[[]]):\n print('label, recall, precision')\n total = sum(cm.flatten())\n for i, label in enumerate(labels):\n # e.g. label = paper\n true_paper = cm[i]\n tp = cm[i][i] # upper left corner\n fp = sum(cm[i]) - tp # upper col minus tp\n # vertical col\n col = [row[i] for row in cm ]\n fn = sum(col) - tp\n tn = total - tp - fp - fn\n print(label, ':', round(tp * 1./ (tp + fn),3), round(tp * 1./ (tp + fp),3))\n# print(round(tp * 1./ (tp + fp),3))\n\n\nprint('c1 - no aug') \nrecall_precision(c1)\nprint('c2 - aug') \nrecall_precision(c2)", "c1 - no aug\nlabel, recall, precision\nPaper : 1.0 0.483\nGlass : 0.707 0.683\nPlastic : 0.919 0.567\nMetal : 0.647 0.917\nCardboard : 0.56 0.85\nc2 - aug\nlabel, recall, precision\nPaper : 1.0 0.03\nGlass : 0.598 0.76\nPlastic : 0.6 0.72\nMetal : 0.485 0.95\nCardboard : 0.685 0.37\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d036d6431a01469a579b7ecc6509862ea041645c
169,924
ipynb
Jupyter Notebook
Module2/02b_linear_reg.ipynb
GenBill/notebooks
0d3315e2d9065bf9afaa0574194efa6bf8e8ec16
[ "Apache-2.0" ]
null
null
null
Module2/02b_linear_reg.ipynb
GenBill/notebooks
0d3315e2d9065bf9afaa0574194efa6bf8e8ec16
[ "Apache-2.0" ]
null
null
null
Module2/02b_linear_reg.ipynb
GenBill/notebooks
0d3315e2d9065bf9afaa0574194efa6bf8e8ec16
[ "Apache-2.0" ]
null
null
null
187.554084
42,263
0.783303
[ [ [ "# Module 2: Playing with pytorch: linear regression", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n%matplotlib inline\nimport torch\nimport numpy as np", "_____no_output_____" ], [ "torch.__version__", "_____no_output_____" ] ], [ [ "## Warm-up: Linear regression with numpy", "_____no_output_____" ], [ "Our model is:\n$$\ny_t = 2x^1_t-3x^2_t+1, \\quad t\\in\\{1,\\dots,30\\}\n$$\n\nOur task is given the 'observations' $(x_t,y_t)_{t\\in\\{1,\\dots,30\\}}$ to recover the weights $w^1=2, w^2=-3$ and the bias $b = 1$.\n\nIn order to do so, we will solve the following optimization problem:\n$$\n\\underset{w^1,w^2,b}{\\operatorname{argmin}} \\sum_{t=1}^{30} \\left(w^1x^1_t+w^2x^2_t+b-y_t\\right)^2\n$$", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom numpy.random import random\n# generate random input data\nx = random((30,2))\n\n# generate labels corresponding to input data x\ny = np.dot(x, [2., -3.]) + 1.\nw_source = np.array([2., -3.])\nb_source = np.array([1.])\n", "_____no_output_____" ], [ "print(x.shape)\nprint(y.shape)\nprint(np.array([2., -3.]).shape)", "(30, 2)\n(30,)\n(2,)\n" ], [ "print(x[-5:])\nprint(x[:5])", "[[0.26672841 0.47640709]\n [0.09433936 0.52024108]\n [0.82922891 0.44025919]\n [0.25472792 0.052456 ]\n [0.1409807 0.35154726]]\n[[0.58650956 0.6128253 ]\n [0.79002084 0.30711082]\n [0.70895759 0.60681594]\n [0.23159381 0.70153896]\n [0.66882997 0.92579042]]\n" ], [ "import matplotlib.pyplot as plt\nimport numpy as np\nfrom mpl_toolkits.mplot3d import Axes3D\n\ndef plot_figs(fig_num, elev, azim, x, y, weights, bias):\n fig = plt.figure(fig_num, figsize=(4, 3))\n plt.clf()\n ax = Axes3D(fig, elev=elev, azim=azim)\n ax.scatter(x[:, 0], x[:, 1], y)\n ax.plot_surface(np.array([[0, 0], [1, 1]]),\n np.array([[0, 1], [0, 1]]),\n (np.dot(np.array([[0, 0, 1, 1],\n [0, 1, 0, 1]]).T, weights) + bias).reshape((2, 2)),\n alpha=.5)\n ax.set_xlabel('x_1')\n ax.set_ylabel('x_2')\n ax.set_zlabel('y')\n \ndef plot_views(x, y, w, b):\n #Generate the different figures from different views\n elev = 43.5\n azim = -110\n plot_figs(1, elev, azim, x, y, w, b[0])\n\n plt.show()", "_____no_output_____" ], [ "plot_views(x, y, w_source, b_source)", "_____no_output_____" ] ], [ [ "In vector form, we define:\n$$\n\\hat{y}_t = {\\bf w}^T{\\bf x}_t+b\n$$\nand we want to minimize the loss given by:\n$$\nloss = \\sum_t\\underbrace{\\left(\\hat{y}_t-y_t \\right)^2}_{loss_t}.\n$$\n\nTo minimize the loss we first compute the gradient of each $loss_t$:\n\\begin{eqnarray*}\n\\frac{\\partial{loss_t}}{\\partial w^1} &=& 2x^1_t\\left({\\bf w}^T{\\bf x}_t+b-y_t \\right)\\\\\n\\frac{\\partial{loss_t}}{\\partial w^2} &=& 2x^2_t\\left({\\bf w}^T{\\bf x}_t+b-y_t \\right)\\\\\n\\frac{\\partial{loss_t}}{\\partial b} &=& 2\\left({\\bf w}^T{\\bf x}_t+b-y_t \\right)\n\\end{eqnarray*}\n\nNote that the actual gradient of the loss is given by:\n$$\n\\frac{\\partial{loss}}{\\partial w^1} =\\sum_t \\frac{\\partial{loss_t}}{\\partial w^1},\\quad\n\\frac{\\partial{loss}}{\\partial w^2} =\\sum_t \\frac{\\partial{loss_t}}{\\partial w^2},\\quad\n\\frac{\\partial{loss}}{\\partial b} =\\sum_t \\frac{\\partial{loss_t}}{\\partial b}\n$$\n\nFor one epoch, **(Batch) Gradient Descent** updates the weights and bias as follows:\n\\begin{eqnarray*}\nw^1_{new}&=&w^1_{old}-\\alpha\\frac{\\partial{loss}}{\\partial w^1} \\\\\nw^2_{new}&=&w^2_{old}-\\alpha\\frac{\\partial{loss}}{\\partial w^2} \\\\\nb_{new}&=&b_{old}-\\alpha\\frac{\\partial{loss}}{\\partial b},\n\\end{eqnarray*}\n\nand then we run several epochs.", 
"_____no_output_____" ] ], [ [ "# randomly initialize learnable weights and bias\nw_init = random(2)\nb_init = random(1)\n\nw = w_init\nb = b_init\nprint(\"initial values of the parameters:\", w, b )", "initial values of the parameters: [0.97053071 0.02637377] [0.65732334]\n" ], [ "# our model forward pass\ndef forward(x):\n return x.dot(w)+b\n\n# Loss function\ndef loss(x, y):\n y_pred = forward(x)\n return (y_pred - y)**2 \n\nprint(\"initial loss:\", np.sum([loss(x_val,y_val) for x_val, y_val in zip(x, y)]) )\n\n# compute gradient\ndef gradient(x, y): # d_loss/d_w, d_loss/d_c\n return 2*(x.dot(w)+b - y)*x, 2 * (x.dot(w)+b - y)\n \nlearning_rate = 1e-2\n# Training loop\nfor epoch in range(10):\n grad_w = np.array([0,0])\n grad_b = np.array(0)\n l = 0\n for x_val, y_val in zip(x, y):\n grad_w = np.add(grad_w,gradient(x_val, y_val)[0])\n grad_b = np.add(grad_b,gradient(x_val, y_val)[1])\n l += loss(x_val, y_val)\n w = w - learning_rate * grad_w\n b = b - learning_rate * grad_b\n print(\"progress:\", \"epoch:\", epoch, \"loss\",l[0])\n\n# After training\nprint(\"estimation of the parameters:\", w, b)", "initial loss: 26.038578806790483\nprogress: epoch: 0 loss 26.038578806790483\nprogress: epoch: 1 loss 16.926387216123416\nprogress: epoch: 2 loss 15.558941057107445\nprogress: epoch: 3 loss 14.463707384697132\nprogress: epoch: 4 loss 13.450117300519961\nprogress: epoch: 5 loss 12.508137901846668\nprogress: epoch: 6 loss 11.632581753369948\nprogress: epoch: 7 loss 10.818733761733844\nprogress: epoch: 8 loss 10.06221772544754\nprogress: epoch: 9 loss 9.358969577775865\nestimation of the parameters: [ 1.10061318 -1.0204751 ] [0.54210874]\n" ], [ "plot_views(x, y, w, b)", "_____no_output_____" ] ], [ [ "## Linear regression with tensors", "_____no_output_____" ] ], [ [ "dtype = torch.FloatTensor\nprint(dtype)\n# dtype = torch.cuda.FloatTensor # Uncomment this to run on GPU", "<class 'torch.FloatTensor'>\n" ], [ "x_t = torch.from_numpy(x).type(dtype)\ny_t = torch.from_numpy(y).type(dtype).unsqueeze(1)\nprint(y.shape)\nprint(torch.from_numpy(y).type(dtype).shape)\nprint(y_t.shape)", "(30,)\ntorch.Size([30])\ntorch.Size([30, 1])\n" ] ], [ [ "This is an implementation of **(Batch) Gradient Descent** with tensors.\n\nNote that in the main loop, the functions loss_t and gradient_t are always called with the same inputs: they can easily be incorporated into the loop (we'll do that below).", "_____no_output_____" ] ], [ [ "w_init_t = torch.from_numpy(w_init).type(dtype)\nb_init_t = torch.from_numpy(b_init).type(dtype)\n\nw_t = w_init_t.clone()\nw_t.unsqueeze_(1)\nb_t = b_init_t.clone()\nb_t.unsqueeze_(1)\nprint(\"initial values of the parameters:\\n\", w_t, b_t )", "initial values of the parameters:\n tensor([[0.9705],\n [0.0264]]) tensor([[0.6573]])\n" ], [ "# our model forward pass\ndef forward_t(x):\n return x.mm(w_t)+b_t\n\n# Loss function\ndef loss_t(x, y):\n y_pred = forward_t(x)\n return (y_pred - y).pow(2).sum()\n\n# compute gradient\ndef gradient_t(x, y): # d_loss/d_w, d_loss/d_c\n return 2*torch.mm(torch.t(x),x.mm(w_t)+b_t - y), 2 * (x.mm(w_t)+b_t - y).sum()\n\nlearning_rate = 1e-2\nfor epoch in range(10):\n l_t = loss_t(x_t,y_t)\n grad_w, grad_b = gradient_t(x_t,y_t)\n w_t = w_t-learning_rate*grad_w\n b_t = b_t-learning_rate*grad_b\n print(\"progress:\", \"epoch:\", epoch, \"loss\",l_t)\n\n# After training\nprint(\"estimation of the parameters:\", w_t, b_t )", "progress: epoch: 0 loss tensor(26.0386)\nprogress: epoch: 1 loss tensor(16.9264)\nprogress: epoch: 2 loss tensor(15.5589)\nprogress: 
epoch: 3 loss tensor(14.4637)\nprogress: epoch: 4 loss tensor(13.4501)\nprogress: epoch: 5 loss tensor(12.5081)\nprogress: epoch: 6 loss tensor(11.6326)\nprogress: epoch: 7 loss tensor(10.8187)\nprogress: epoch: 8 loss tensor(10.0622)\nprogress: epoch: 9 loss tensor(9.3590)\nestimation of the parameters: tensor([[ 1.1006],\n [-1.0205]]) tensor([[0.5421]])\n" ] ], [ [ "## Linear regression with Autograd", "_____no_output_____" ] ], [ [ "# Setting requires_grad=True indicates that we want to compute gradients with\n# respect to these Tensors during the backward pass.\nw_v = w_init_t.clone().unsqueeze(1)\nw_v.requires_grad_(True)\nb_v = b_init_t.clone().unsqueeze(1)\nb_v.requires_grad_(True)\nprint(\"initial values of the parameters:\", w_v.data, b_v.data )", "initial values of the parameters: tensor([[0.9705],\n [0.0264]]) tensor([[0.6573]])\n" ] ], [ [ "An implementation of **(Batch) Gradient Descent** without computing explicitly the gradient and using autograd instead.", "_____no_output_____" ] ], [ [ "for epoch in range(10):\n y_pred = x_t.mm(w_v)+b_v\n loss = (y_pred - y_t).pow(2).sum()\n \n # Use autograd to compute the backward pass. This call will compute the\n # gradient of loss with respect to all Variables with requires_grad=True.\n # After this call w.grad and b.grad will be tensors holding the gradient\n # of the loss with respect to w and b respectively.\n loss.backward()\n \n # Update weights using gradient descent. For this step we just want to mutate\n # the values of w_v and b_v in-place; we don't want to build up a computational\n # graph for the update steps, so we use the torch.no_grad() context manager\n # to prevent PyTorch from building a computational graph for the updates\n with torch.no_grad():\n w_v -= learning_rate * w_v.grad\n b_v -= learning_rate * b_v.grad\n \n # Manually zero the gradients after updating weights\n # otherwise gradients will be acumulated after each .backward()\n w_v.grad.zero_()\n b_v.grad.zero_()\n \n print(\"progress:\", \"epoch:\", epoch, \"loss\",loss.data.item())\n\n# After training\nprint(\"estimation of the parameters:\\n\", w_v.data, b_v.data.t() )", "progress: epoch: 0 loss 26.03858184814453\nprogress: epoch: 1 loss 16.926387786865234\nprogress: epoch: 2 loss 15.558940887451172\nprogress: epoch: 3 loss 14.46370792388916\nprogress: epoch: 4 loss 13.450118064880371\nprogress: epoch: 5 loss 12.508138656616211\nprogress: epoch: 6 loss 11.63258171081543\nprogress: epoch: 7 loss 10.818733215332031\nprogress: epoch: 8 loss 10.062217712402344\nprogress: epoch: 9 loss 9.358969688415527\nestimation of the parameters:\n tensor([[ 1.1006],\n [-1.0205]]) tensor([[0.5421]])\n" ] ], [ [ "## Linear regression with neural network", "_____no_output_____" ], [ "An implementation of **(Batch) Gradient Descent** using the nn package. Here we have a super simple model with only one layer and no activation function!", "_____no_output_____" ] ], [ [ "# Use the nn package to define our model as a sequence of layers. nn.Sequential\n# is a Module which contains other Modules, and applies them in sequence to\n# produce its output. 
Each Linear Module computes output from input using a\n# linear function, and holds internal Variables for its weight and bias.\nmodel = torch.nn.Sequential(\n torch.nn.Linear(2, 1),\n)\n\nfor m in model.children():\n m.weight.data = w_init_t.clone().unsqueeze(0)\n m.bias.data = b_init_t.clone()\n\n# The nn package also contains definitions of popular loss functions; in this\n# case we will use Mean Squared Error (MSE) as our loss function.\nloss_fn = torch.nn.MSELoss(reduction='sum')\n\n# switch to train mode\nmodel.train()\n\nfor epoch in range(10):\n # Forward pass: compute predicted y by passing x to the model. Module objects\n # override the __call__ operator so you can call them like functions. When\n # doing so you pass a Variable of input data to the Module and it produces\n # a Variable of output data.\n y_pred = model(x_t)\n \n # Note this operation is equivalent to: pred = model.forward(x_v)\n\n # Compute and print loss. We pass Variables containing the predicted and true\n # values of y, and the loss function returns a Variable containing the\n # loss.\n loss = loss_fn(y_pred, y_t)\n\n # Zero the gradients before running the backward pass.\n model.zero_grad()\n\n # Backward pass: compute gradient of the loss with respect to all the learnable\n # parameters of the model. Internally, the parameters of each Module are stored\n # in Variables with requires_grad=True, so this call will compute gradients for\n # all learnable parameters in the model.\n loss.backward()\n\n # Update the weights using gradient descent. Each parameter is a Tensor, so\n # we can access its data and gradients like we did before.\n with torch.no_grad():\n for param in model.parameters():\n param.data -= learning_rate * param.grad\n \n print(\"progress:\", \"epoch:\", epoch, \"loss\",loss.data.item())\n\n# After training\nprint(\"estimation of the parameters:\")\nfor param in model.parameters():\n print(param)", "progress: epoch: 0 loss 26.03858184814453\nprogress: epoch: 1 loss 16.926387786865234\nprogress: epoch: 2 loss 15.558940887451172\nprogress: epoch: 3 loss 14.46370792388916\nprogress: epoch: 4 loss 13.450118064880371\nprogress: epoch: 5 loss 12.508138656616211\nprogress: epoch: 6 loss 11.63258171081543\nprogress: epoch: 7 loss 10.818733215332031\nprogress: epoch: 8 loss 10.062217712402344\nprogress: epoch: 9 loss 9.358969688415527\nestimation of the parameters:\nParameter containing:\ntensor([[ 1.1006, -1.0205]], requires_grad=True)\nParameter containing:\ntensor([0.5421], requires_grad=True)\n" ] ], [ [ "Last step, we use directly the optim package to update the weights and bias.", "_____no_output_____" ] ], [ [ "model = torch.nn.Sequential(\n torch.nn.Linear(2, 1),\n)\n\nfor m in model.children():\n m.weight.data = w_init_t.clone().unsqueeze(0)\n m.bias.data = b_init_t.clone()\n\nloss_fn = torch.nn.MSELoss(reduction='sum')\n\nmodel.train()\n\noptimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)\n\n\nfor epoch in range(10):\n y_pred = model(x_t)\n loss = loss_fn(y_pred, y_t)\n print(\"progress:\", \"epoch:\", epoch, \"loss\",loss.item())\n # print(\"progress:\", \"epoch:\", epoch, \"loss\",loss)\n # Zero gradients, perform a backward pass, and update the weights.\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n \n \n# After training\nprint(\"estimation of the parameters:\")\nfor param in model.parameters():\n print(param)", "progress: epoch: 0 loss 385.95172119140625\nprogress: epoch: 0 loss tensor(385.9517, grad_fn=<MseLossBackward>)\nprogress: epoch: 1 loss 
9597.4716796875\nprogress: epoch: 1 loss tensor(9597.4717, grad_fn=<MseLossBackward>)\nprogress: epoch: 2 loss 595541.875\nprogress: epoch: 2 loss tensor(595541.8750, grad_fn=<MseLossBackward>)\nprogress: epoch: 3 loss 37207608.0\nprogress: epoch: 3 loss tensor(37207608., grad_fn=<MseLossBackward>)\nprogress: epoch: 4 loss 2324688896.0\nprogress: epoch: 4 loss tensor(2.3247e+09, grad_fn=<MseLossBackward>)\nprogress: epoch: 5 loss 145243914240.0\nprogress: epoch: 5 loss tensor(1.4524e+11, grad_fn=<MseLossBackward>)\nprogress: epoch: 6 loss 9074673451008.0\nprogress: epoch: 6 loss tensor(9.0747e+12, grad_fn=<MseLossBackward>)\nprogress: epoch: 7 loss 566975210192896.0\nprogress: epoch: 7 loss tensor(5.6698e+14, grad_fn=<MseLossBackward>)\nprogress: epoch: 8 loss 3.54239775768576e+16\nprogress: epoch: 8 loss tensor(3.5424e+16, grad_fn=<MseLossBackward>)\nprogress: epoch: 9 loss 2.2132498365037937e+18\nprogress: epoch: 9 loss tensor(2.2132e+18, grad_fn=<MseLossBackward>)\nestimation of the parameters:\nParameter containing:\ntensor([[2.2344e+08, 2.3512e+08]], requires_grad=True)\nParameter containing:\ntensor([4.5320e+08], requires_grad=True)\n" ] ], [ [ "## Remark\n\nThis problem can be solved in 3 lines of code!", "_____no_output_____" ] ], [ [ "xb_t = torch.cat((x_t,torch.ones(30).unsqueeze(1)),1)\n# print(xb_t)\nsol, _ =torch.lstsq(y_t,xb_t)\nprint(sol[:3])", "tensor([[ 2.0000],\n [-3.0000],\n [ 1.0000]])\n" ] ], [ [ "## Exercise: Play with the code", "_____no_output_____" ], [ "Change the number of samples from 30 to 300. What happens? How to correct it?", "_____no_output_____" ] ], [ [ "x = random((300,2))\ny = np.dot(x, [2., -3.]) + 1.\nx_t = torch.from_numpy(x).type(dtype)\ny_t = torch.from_numpy(y).type(dtype).unsqueeze(1)", "_____no_output_____" ], [ "model = torch.nn.Sequential(\n torch.nn.Linear(2, 1),\n)\n\nfor m in model.children():\n m.weight.data = w_init_t.clone().unsqueeze(0)\n m.bias.data = b_init_t.clone()\n\nloss_fn = torch.nn.MSELoss(reduction = 'mean')\n\nmodel.train()\n\noptimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)\n\n\nfor epoch in range(10000):\n y_pred = model(x_t)\n loss = loss_fn(y_pred, y_t)\n if epoch%500==499:\n print(\"progress:\", \"epoch:\", epoch+1, \"loss\",loss.item())\n # Zero gradients, perform a backward pass, and update the weights.\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n \n \n# After training\nprint(\"estimation of the parameters:\")\nfor param in model.parameters():\n print(param)", "progress: epoch: 499 loss 0.1583678424358368\nprogress: epoch: 999 loss 0.03177538886666298\nprogress: epoch: 1499 loss 0.006897071376442909\nprogress: epoch: 1999 loss 0.0016702644061297178\nprogress: epoch: 2499 loss 0.00045764277456328273\nprogress: epoch: 2999 loss 0.0001400149194523692\nprogress: epoch: 3499 loss 4.638753307517618e-05\nprogress: epoch: 3999 loss 1.614570828678552e-05\nprogress: epoch: 4499 loss 5.776500529464101e-06\nprogress: epoch: 4999 loss 2.09857080335496e-06\nprogress: epoch: 5499 loss 7.687903575970267e-07\nprogress: epoch: 5999 loss 2.8404440399754094e-07\nprogress: epoch: 6499 loss 1.0585888077230265e-07\nprogress: epoch: 6999 loss 3.994070496560198e-08\nprogress: epoch: 7499 loss 1.532848514784746e-08\nprogress: epoch: 7999 loss 5.943735281732643e-09\nprogress: epoch: 8499 loss 2.0236934350492675e-09\nprogress: epoch: 8999 loss 1.1486694928564134e-09\nprogress: epoch: 9499 loss 1.1486694928564134e-09\nprogress: epoch: 9999 loss 1.1486694928564134e-09\nestimation of the parameters:\nParameter 
containing:\ntensor([[ 2.0001, -2.9999]], requires_grad=True)\nParameter containing:\ntensor([0.9999], requires_grad=True)\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ] ]
d036e6c7207ea4c8af629ebc1f24c9f34befae25
76,187
ipynb
Jupyter Notebook
notebooks/gistemp.ipynb
ocefpaf/bioinfo
3b84d9742fa1d332eae68b277fcc440758b4cebf
[ "CC0-1.0" ]
null
null
null
notebooks/gistemp.ipynb
ocefpaf/bioinfo
3b84d9742fa1d332eae68b277fcc440758b4cebf
[ "CC0-1.0" ]
null
null
null
notebooks/gistemp.ipynb
ocefpaf/bioinfo
3b84d9742fa1d332eae68b277fcc440758b4cebf
[ "CC0-1.0" ]
null
null
null
173.546697
62,220
0.886542
[ [ [ "GHCN V2 Temperatures ANOM (C) CR 1200KM 1880-present\n\nGLOBAL Temperature Anomalies in .01 C base period: 1951-1980\n\nhttp://climatecode.org/", "_____no_output_____" ] ], [ [ "import os\nimport git\n\n\nif not os.path.exists('ccc-gistemp'):\n git.Git().clone('https://github.com/ClimateCodeFoundation/ccc-gistemp.git')\n\nif not os.path.exists('madqc'):\n git.Git().clone('https://github.com/ClimateCodeFoundation/madqc.git')", "_____no_output_____" ] ], [ [ "It seems that\n\nhttp://data.giss.nasa.gov/gistemp/sources_v3/GISTEMPv3_sources.tar.gz\n\nand \n\nhttp://data.giss.nasa.gov/pub/gistemp/SBBX.ERSST.gz\n\nare down, so let's use a local copy instead.", "_____no_output_____" ] ], [ [ "!mkdir -p ccc-gistemp/input\n\n!cp data/GISTEMPv3_sources.tar.gz data/SBBX.ERSST.gz ccc-gistemp/input", "_____no_output_____" ], [ "%cd ccc-gistemp/", "/home/filipe/Dropbox/Meetings/2018-CicloPalestrasComputacaoCientifica/notebooks/ccc-gistemp\n" ] ], [ [ "We don't really need `pypy` for the fetch phase, but the code is Python 2 and the notebook is Python 3, so this is just a lazy way to call py2k code from a py3k notebook ;-p\n\nPS: we are also using the International Surface Temperature Initiative data (ISTI).", "_____no_output_____" ] ], [ [ "!pypy tool/fetch.py isti", "input/isti.v1.tar.gz already exists.\n ... input/isti.merged.inv already exists.\n ... input/isti.merged.dat already exists.\n" ] ], [ [ "QC the ISTI data.", "_____no_output_____" ] ], [ [ "!../madqc/mad.py --progress input/isti.merged.dat", " 100% ZIXLT831324 TAVG 1960 180 \n" ] ], [ [ "We need to copy the ISTI data into the `input` directory.", "_____no_output_____" ] ], [ [ "!cp isti.merged.qc.dat input/isti.merged.qc.dat\n\n!cp input/isti.merged.inv input/isti.merged.qc.inv", "_____no_output_____" ] ], [ [ "Here is where `pypy` is really needed, this step takes ~35 minutes on valina `python` but only ~100 seconds on `pypy`.", "_____no_output_____" ] ], [ [ "!pypy tool/run.py -p 'data_sources=isti.merged.qc.dat;element=TAVG' -s 0-1,3-5", "input/ghcnm.tavg.latest.qca.tar.gz already exists.\n ... input/ghcnm.tavg.qca.dat already exists.\ninput/GISTEMPv3_sources.tar.gz already exists.\n ... input/oisstv2_mod4.clim.gz already exists.\n ... input/sumofday.tbl already exists.\n ... input/v3.inv already exists.\n ... input/ushcn3.tbl already exists.\n ... input/mcdw.tbl already exists.\n ... input/Ts.strange.v3.list.IN_full already exists.\n ... input/antarc2.list already exists.\n ... input/antarc3.list already exists.\n ... input/antarc1.list already exists.\n ... input/antarc1.txt already exists.\n ... input/antarc2.txt already exists.\n ... input/t_hohenpeissenberg_200306.txt_as_received_July17_2003 already exists.\n ... input/antarc3.txt already exists.\ninput/SBBX.ERSST.gz already exists.\n ... 
input/SBBX.ERSST already exists.\n====> STEPS 0, 1, 3, 4, 5 ====\nNo more recent sea-surface data files.\n\nLoad ISTI.MERGED.QC.DAT records\n(Reading average temperature)\nStep 0: closing output file.\nStep 1: closing output file.\nRegion (+64/+90 S/N -180/-090 W/E): 0 empty cells.\nRegion (+64/+90 S/N -090/+000 W/E): 0 empty cells.\nRegion (+64/+90 S/N +000/+090 W/E): 0 empty cells.\nRegion (+64/+90 S/N +090/+180 W/E): 0 empty cells.\nRegion (+44/+64 S/N -180/-135 W/E): 0 empty cells.\nRegion (+44/+64 S/N -135/-090 W/E): 0 empty cells.\nRegion (+44/+64 S/N -090/-045 W/E): 0 empty cells.\nRegion (+44/+64 S/N -045/+000 W/E): 0 empty cells.\nRegion (+44/+64 S/N +000/+045 W/E): 0 empty cells.\nRegion (+44/+64 S/N +045/+090 W/E): 0 empty cells.\nRegion (+44/+64 S/N +090/+135 W/E): 0 empty cells.\nRegion (+44/+64 S/N +135/+180 W/E): 0 empty cells.\nRegion (+24/+44 S/N -180/-150 W/E): 31 empty cells.\nRegion (+24/+44 S/N -150/-120 W/E): 2 empty cells.\nRegion (+24/+44 S/N -120/-090 W/E): 0 empty cells.\nRegion (+24/+44 S/N -090/-060 W/E): 0 empty cells.\nRegion (+24/+44 S/N -060/-030 W/E): 10 empty cells.\nRegion (+24/+44 S/N -030/+000 W/E): 0 empty cells.\nRegion (+24/+44 S/N +000/+030 W/E): 0 empty cells.\nRegion (+24/+44 S/N +030/+060 W/E): 0 empty cells.\nRegion (+24/+44 S/N +060/+090 W/E): 0 empty cells.\nRegion (+24/+44 S/N +090/+120 W/E): 0 empty cells.\nRegion (+24/+44 S/N +120/+150 W/E): 0 empty cells.\nRegion (+24/+44 S/N +150/+180 W/E): 26 empty cells.\nRegion (+00/+24 S/N -180/-158 W/E): 1 empty cell.\nRegion (+00/+24 S/N -158/-135 W/E): 40 empty cells.\nRegion (+00/+24 S/N -135/-112 W/E): 80 empty cells.\nRegion (+00/+24 S/N -112/-090 W/E): 19 empty cells.\nRegion (+00/+24 S/N -090/-068 W/E): 0 empty cells.\nRegion (+00/+24 S/N -068/-045 W/E): 9 empty cells.\nRegion (+00/+24 S/N -045/-022 W/E): 29 empty cells.\nRegion (+00/+24 S/N -022/+000 W/E): 1 empty cell.\nRegion (+00/+24 S/N +000/+022 W/E): 0 empty cells.\nRegion (+00/+24 S/N +022/+045 W/E): 0 empty cells.\nRegion (+00/+24 S/N +045/+068 W/E): 3 empty cells.\nRegion (+00/+24 S/N +068/+090 W/E): 0 empty cells.\nRegion (+00/+24 S/N +090/+112 W/E): 0 empty cells.\nRegion (+00/+24 S/N +112/+135 W/E): 0 empty cells.\nRegion (+00/+24 S/N +135/+158 W/E): 0 empty cells.\nRegion (+00/+24 S/N +158/+180 W/E): 2 empty cells.\nRegion (-24/-00 S/N -180/-158 W/E): 0 empty cells.\nRegion (-24/-00 S/N -158/-135 W/E): 0 empty cells.\nRegion (-24/-00 S/N -135/-112 W/E): 55 empty cells.\nRegion (-24/-00 S/N -112/-090 W/E): 67 empty cells.\nRegion (-24/-00 S/N -090/-068 W/E): 7 empty cells.\nRegion (-24/-00 S/N -068/-045 W/E): 0 empty cells.\nRegion (-24/-00 S/N -045/-022 W/E): 0 empty cells.\nRegion (-24/-00 S/N -022/+000 W/E): 2 empty cells.\nRegion (-24/-00 S/N +000/+022 W/E): 0 empty cells.\nRegion (-24/-00 S/N +022/+045 W/E): 0 empty cells.\nRegion (-24/-00 S/N +045/+068 W/E): 0 empty cells.\nRegion (-24/-00 S/N +068/+090 W/E): 29 empty cells.\nRegion (-24/-00 S/N +090/+112 W/E): 1 empty cell.\nRegion (-24/-00 S/N +112/+135 W/E): 0 empty cells.\nRegion (-24/-00 S/N +135/+158 W/E): 0 empty cells.\nRegion (-24/-00 S/N +158/+180 W/E): 0 empty cells.\nRegion (-44/-24 S/N -180/-150 W/E): 25 empty cells.\nRegion (-44/-24 S/N -150/-120 W/E): 37 empty cells.\nRegion (-44/-24 S/N -120/-090 W/E): 48 empty cells.\nRegion (-44/-24 S/N -090/-060 W/E): 2 empty cells.\nRegion (-44/-24 S/N -060/-030 W/E): 21 empty cells.\nRegion (-44/-24 S/N -030/+000 W/E): 18 empty cells.\nRegion (-44/-24 S/N +000/+030 W/E): 15 empty cells.\nRegion (-44/-24 S/N +030/+060 
W/E): 5 empty cells.\nRegion (-44/-24 S/N +060/+090 W/E): 21 empty cells.\nRegion (-44/-24 S/N +090/+120 W/E): 43 empty cells.\nRegion (-44/-24 S/N +120/+150 W/E): 0 empty cells.\nRegion (-44/-24 S/N +150/+180 W/E): 0 empty cells.\nRegion (-64/-44 S/N -180/-135 W/E): 74 empty cells.\nRegion (-64/-44 S/N -135/-090 W/E): 72 empty cells.\nRegion (-64/-44 S/N -090/-045 W/E): 0 empty cells.\nRegion (-64/-44 S/N -045/+000 W/E): 20 empty cells.\nRegion (-64/-44 S/N +000/+045 W/E): 44 empty cells.\nRegion (-64/-44 S/N +045/+090 W/E): 5 empty cells.\nRegion (-64/-44 S/N +090/+135 W/E): 60 empty cells.\nRegion (-64/-44 S/N +135/+180 W/E): 1 empty cell.\nRegion (-90/-64 S/N -180/-090 W/E): 4 empty cells.\nRegion (-90/-64 S/N -090/+000 W/E): 0 empty cells.\nRegion (-90/-64 S/N +000/+090 W/E): 0 empty cells.\nRegion (-90/-64 S/N +090/+180 W/E): 0 empty cells.\n\nStep3: closing output file\nStep4: closing output file\nWARNING: Bad mix of land and ocean data.\n Land range from 1880-01 to 2018-02; Ocean range from 1880-01 to 2015-09.\nStep 5: Closing box file: result/landBX.Ts.GHCN.CL.PA.1200\nStep 5: Closing box file: result/oceanBX.Ts.ERSST.CL.PA\nStep 5: Closing box file: result/mixedBX.Ts.ERSST.GHCN.CL.PA.1200\n... running vischeck\nSee result/google-chart.url\n====> Timing Summary ====\nRun took 216.1 seconds\n" ] ], [ [ "Python `gistemp` saves the results in the same format as the Fortran program but it ships with `gistemp2csv.py` to make it easier to read the data with `pandas`.", "_____no_output_____" ] ], [ [ "!pypy tool/gistemp2csv.py result/*.txt", "_____no_output_____" ], [ "import pandas as pd\n\n\ndf = pd.read_csv(\n 'result/landGLB.Ts.GHCN.CL.PA.csv',\n skiprows=3,\n index_col=0,\n na_values=('*****', '****'),\n)", "_____no_output_____" ] ], [ [ "Let's use `sklearn` to compute the full trend...", "_____no_output_____" ] ], [ [ "from sklearn import linear_model\nfrom sklearn.metrics import mean_squared_error, r2_score\n\n\nreg0 = linear_model.LinearRegression()\n\nseries0 = df['J-D'].dropna()\ny = series0.values\nX = series0.index.values[:, None]\n\nreg0.fit(X, y)\ny_pred0 = reg0.predict(X)\nR2_0 = mean_squared_error(y, y_pred0)\nvar0 = r2_score(y, y_pred0)", "_____no_output_____" ] ], [ [ " and the past 30 years trend.", "_____no_output_____" ] ], [ [ "reg1 = linear_model.LinearRegression()\n\nseries1 = df['J-D'].dropna().iloc[-30:]\ny = series1.values\nX = series1.index.values[:, None]\nreg1.fit(X, y)\ny_pred1 = reg1.predict(X)\nR2_1 = mean_squared_error(y[-30:], y_pred1)\nvar1 = r2_score(y[-30:], y_pred1)", "_____no_output_____" ], [ "%matplotlib inline\n\nax = df.plot.line(y='J-D', figsize=(9, 9), legend=None)\nax.plot(series0.index, y_pred0, 'r--')\nax.plot(series1.index, y_pred1, 'r')\nax.set_xlim([1879, 2018])\n\nleg = f\"\"\"Trend in ℃/century (R²)\nFull: {reg0.coef_[0]*100:0.2f} ({var0:0.2f})\n30-year: {reg1.coef_[0]*100:0.2f} ({var1:0.2f})\n\"\"\"\n\nax.text(0.10, 0.75, leg, transform=ax.transAxes);", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
d036ede030cea5835e4bc7372265738c9d1c75e0
77,827
ipynb
Jupyter Notebook
docs/tutorial/Inheriting_from_Unit.ipynb
sarangbhagwat/biosteam
fc2d227d3fce9d5f4ccb873a6d41edb535347412
[ "MIT" ]
null
null
null
docs/tutorial/Inheriting_from_Unit.ipynb
sarangbhagwat/biosteam
fc2d227d3fce9d5f4ccb873a6d41edb535347412
[ "MIT" ]
null
null
null
docs/tutorial/Inheriting_from_Unit.ipynb
sarangbhagwat/biosteam
fc2d227d3fce9d5f4ccb873a6d41edb535347412
[ "MIT" ]
null
null
null
125.933657
16,056
0.855963
[ [ [ "# Inheriting from Unit", "_____no_output_____" ], [ "### Abstract attributes and methods", "_____no_output_____" ], [ "![](./Unit_UML.png \"Unit UML Diagram\")", "_____no_output_____" ], [ "**A Unit subclass has class attributes that dictate how an instance is initialized:**\n \n* `_BM` : dict[str, float] Bare module factors for each purchase cost item.\n\n* `_units` : [dict] Units of measure for the `design_results` items.\n\n* `_N_ins`=1 : [int] Expected number of input streams.\n \n* `_N_outs`=2 : [int] Expected number of output streams.\n \n* `_ins_size_is_fixed`=True : [bool] Whether the number of streams in ins is fixed.\n \n* `_outs_size_is_fixed`=True : [bool] Whether the number of streams in outs is fixed.\n \n* `_N_heat_utilities`=0 : [int] Number of heat utility objects in the `heat_utilities` tuple.\n \n* `_stream_link_options`=None : [StreamLinkOptions] Options for linking streams.\n\n* `auxiliary_unit_names`=() : tuple[str] Name of attributes that are auxiliary units.\n\n* `_graphics` : [biosteam Graphics] A Graphics object for diagram representation. Defaults to a box diagram.\n \n* `line` : [str] Label for the unit operation in a diagram. Defaults to the class name.\n\n**Abstract methods are used to setup stream conditions, run heat and mass balances, find design requirements, and cost the unit:**\n\n* `_setup()` : Called before System convergece to initialize constant data and setup stream conditions.\n\n* `_run()` : Called during System convergece to specify `outs` streams.\n\n* `_design()` : Called after System convergence to find design requirements. \n\n* `_cost()` : Called after `_design` to find cost requirements.\n\n**These abstract methods will rely on the following instance attributes:**\n\n* `ins` : Ins[Stream] Input streams.\n\n* `outs` : Outs[Stream] Output streams.\n\n* `power_utility` : [PowerUtility] Can find electricity rate requirement.\n\n* `heat_utilities` : tuple[HeatUtility] Can find cooling and heating requirements.\n\n* `design_results` : [dict] All design requirements.\n\n* `purchase_costs` : [dict] Itemized purchase costs.\n\n* `thermo` : [Thermo] The thermodynamic property package used by the unit.", "_____no_output_____" ], [ "### Subclass example", "_____no_output_____" ], [ "The following example depicts inheritance from Unit by creating a new Boiler class:", "_____no_output_____" ] ], [ [ "import biosteam as bst\nfrom math import ceil\n\nclass Boiler(bst.Unit):\n \"\"\"\n Create a Boiler object that partially boils the feed.\n \n Parameters\n ----------\n ins : stream\n Inlet fluid.\n outs : stream sequence\n * [0] vapor product\n * [1] liquid product\n V : float\n Molar vapor fraction.\n P : float\n Operating pressure [Pa].\n \n \"\"\"\n # Note that the documentation does not include `ID` or `thermo` in the parameters.\n # This is OK, and most subclasses in BioSTEAM are documented this way too.\n # Documentation for all unit operations should include the inlet and outlet streams\n # listed by index. If there is only one stream in the inlets (or outlets), there is no\n # need to list out by index. The types for the `ins` and `outs` should be either\n # `stream sequence` for multiple streams, or `stream` for a single stream.\n # Any additional arguments to the unit should also be listed (e.g. 
V, and P).\n \n _N_ins = 1 \n _N_outs = 2\n _N_heat_utilities = 1\n _BM = {'Evaporators': 2.45}\n _units = {'Area': 'm^2'}\n \n def __init__(self, ID='', ins=None, outs=(), thermo=None, *, V, P):\n bst.Unit.__init__(self, ID, ins, outs, thermo)\n # Initialize MultiStream object to perform vapor-liquid equilibrium later\n # NOTE: ID is None to not register it in the flowsheet\n self._multistream = bst.MultiStream(None, thermo=self.thermo)\n self.V = V #: Molar vapor fraction.\n self.P = P #: Operating pressure [Pa].\n \n def _setup(self):\n gas, liq = self.outs\n \n # Initialize top stream as a gas\n gas.phase = 'g'\n \n # Initialize bottom stream as a liquid\n liq.phase = 'l'\n \n def _run(self):\n feed = self.ins[0]\n gas, liq = self.outs\n \n # Perform vapor-liquid equilibrium\n ms = self._multistream\n ms.imol['l'] = feed.mol\n ms.vle(V=self.V, P=self.P)\n \n # Update output streams\n gas.mol[:] = ms.imol['g']\n liq.mol[:] = ms.imol['l']\n gas.T = liq.T = ms.T\n gas.P = liq.P = ms.P\n \n # Reset flow to prevent accumulation in multiple simulations\n ms.empty()\n \n \n def _design(self):\n # Calculate heat utility requirement (please read docs for HeatUtility objects)\n T_operation = self._multistream.T\n duty = self.H_out - self.H_in\n if duty < 0:\n raise RuntimeError(f'{repr(self)} is cooling.')\n hu = self.heat_utilities[0]\n hu(duty, T_operation)\n \n # Temperature of utility at entrance\n T_utility = hu.inlet_utility_stream.T\n \n # Temeperature gradient\n dT = T_utility - T_operation\n \n # Heat transfer coefficient kJ/(hr*m2*K)\n U = 8176.699 \n \n # Area requirement (m^2)\n A = duty/(U*dT)\n \n # Maximum area per unit\n A_max = 743.224\n \n # Number of units\n N = ceil(A/A_max)\n \n # Design requirements are stored here\n self.design_results['Area'] = A/N\n self.design_results['N'] = N\n \n def _cost(self):\n A = self.design_results['Area']\n N = self.design_results['N']\n \n # Long-tube vertical boiler cost correlation from \n # \"Product process and design\". Warren et. al. (2016) Table 22.32, pg 592\n purchase_cost = N*bst.CE*3.086*A**0.55\n \n # Itemized purchase costs are stored here\n self.purchase_costs['Boilers'] = purchase_cost\n \n ", "_____no_output_____" ] ], [ [ "### Simulation test", "_____no_output_____" ] ], [ [ "import biosteam as bst\nbst.settings.set_thermo(['Water'])\nwater = bst.Stream('water', Water=300)\nB1 = Boiler('B1', ins=water, outs=('gas', 'liq'),\n V=0.5, P=101325)\nB1.diagram()\nB1.show()", "_____no_output_____" ], [ "B1.simulate()\nB1.show()", "Boiler: B1\nins...\n[0] water\n phase: 'l', T: 298.15 K, P: 101325 Pa\n flow (kmol/hr): Water 300\nouts...\n[0] gas\n phase: 'g', T: 373.12 K, P: 101325 Pa\n flow (kmol/hr): Water 150\n[1] liq\n phase: 'l', T: 373.12 K, P: 101325 Pa\n flow (kmol/hr): Water 150\n" ], [ "B1.results()", "_____no_output_____" ] ], [ [ "### Graphviz attributes", "_____no_output_____" ], [ "All [graphviz](https://graphviz.readthedocs.io/en/stable/manual.html) attributes for generating a diagram are stored in `_graphics` as a Graphics object. 
One Graphics object is generated for each Unit subclass:", "_____no_output_____" ] ], [ [ "graphics = Boiler._graphics\nedge_in = graphics.edge_in\nedge_out = graphics.edge_out\nnode = graphics.node", "_____no_output_____" ], [ "# Attributes correspond to each inlet stream respectively\n# For example: Attributes for B1.ins[0] would correspond to edge_in[0]\nedge_in ", "_____no_output_____" ], [ "# Attributes correspond to each outlet stream respectively\n# For example: Attributes for B1.outs[0] would correspond to edge_out[0]\nedge_out", "_____no_output_____" ], [ "node # The node represents the actual unit", "_____no_output_____" ] ], [ [ "These attributes can be changed to the user's liking:", "_____no_output_____" ] ], [ [ "edge_out[0]['tailport'] = 'n'\nedge_out[1]['tailport'] = 's'\nnode['width'] = '1'\nnode['height'] = '1.2'", "_____no_output_____" ], [ "B1.diagram()", "_____no_output_____" ] ], [ [ "It is also possible to dynamically adjust node and edge attributes by setting the `tailor_node_to_unit` attribute:", "_____no_output_____" ] ], [ [ "def tailor_node_to_unit(node, unit):\n feed = unit.ins[0]\n if not feed.F_mol:\n node['name'] += '\\n-empty-'\ngraphics.tailor_node_to_unit = tailor_node_to_unit\nB1.diagram()", "_____no_output_____" ], [ "B1.ins[0].empty()\nB1.diagram()", "_____no_output_____" ] ], [ [ "NOTE: The example implementation of the `tailor_node_to_unit` function is not suggested; best to keep diagrams simple.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
d036ef98c6f068f54404cfae7abd3c342a2af804
448,680
ipynb
Jupyter Notebook
Src/Notebooks/oprimizeValues.ipynb
nakujaproject/MPUdata
deb273fbbdbbf9972b975ebad395f43f05aa3b2e
[ "MIT" ]
1
2021-03-27T17:56:20.000Z
2021-03-27T17:56:20.000Z
Src/Notebooks/oprimizeValues.ipynb
nakujaproject/BMPdata
deb273fbbdbbf9972b975ebad395f43f05aa3b2e
[ "MIT" ]
5
2021-03-22T16:27:04.000Z
2021-03-22T16:33:27.000Z
Src/Notebooks/oprimizeValues.ipynb
nakujaproject/MPUdata
deb273fbbdbbf9972b975ebad395f43f05aa3b2e
[ "MIT" ]
1
2021-03-27T17:56:23.000Z
2021-03-27T17:56:23.000Z
707.697161
128,092
0.945732
[ [ [ "# !pip install ray[tune]\nimport pandas as pd\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom sklearn.metrics import mean_squared_error\nfrom hyperopt import hp\nfrom ray import tune\nfrom hyperopt import fmin, tpe, hp,Trials, space_eval\nimport scipy.stats", "_____no_output_____" ], [ "df = pd.read_csv(\"../../Data/Raw/flightLogData.csv\")", "_____no_output_____" ], [ "plt.figure(figsize=(20, 10))\nplt.plot(df.Time, df['Altitude'], linewidth=2, color=\"r\", label=\"Altitude\")\nplt.plot(df.Time, df['Vertical_velocity'], linewidth=2, color=\"y\", label=\"Vertical_velocity\")\nplt.plot(df.Time, df['Vertical_acceleration'], linewidth=2, color=\"b\", label=\"Vertical_acceleration\")\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "temp_df = df[['Altitude', \"Vertical_velocity\", \"Vertical_acceleration\"]]\nnoise = np.random.normal(2, 5, temp_df.shape)\nnoisy_df = temp_df + noise\nnoisy_df['Time'] = df['Time']\n", "_____no_output_____" ], [ "plt.figure(figsize=(20, 10))\nplt.plot(noisy_df.Time, noisy_df['Altitude'], linewidth=2, color=\"r\", label=\"Altitude\")\nplt.plot(noisy_df.Time, noisy_df['Vertical_velocity'], linewidth=2, color=\"y\", label=\"Vertical_velocity\")\nplt.plot(noisy_df.Time, noisy_df['Vertical_acceleration'], linewidth=2, color=\"b\", label=\"Vertical_acceleration\")\nplt.legend()\nplt.show()", "_____no_output_____" ] ], [ [ "## Altitude ", "_____no_output_____" ] ], [ [ "q = 0.001\nA = np.array([[1.0, 0.1, 0.005], [0, 1.0, 0.1], [0, 0, 1]])\nH = np.array([[1.0, 0.0, 0.0],[ 0.0, 0.0, 1.0]])\nP = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])\n# R = np.array([[0.5, 0.0], [0.0, 0.0012]])\n# Q = np.array([[q, 0.0, 0.0], [0.0, q, 0.0], [0.0, 0.0, q]])\nI = np.identity(3)\nx_hat = np.array([[0.0], [0.0], [0.0]])\nY = np.array([[0.0], [0.0]])", "_____no_output_____" ], [ "def kalman_update(param):\n r1, r2, q1 = param['r1'], param['r2'], param['q1']\n R = np.array([[r1, 0.0], [0.0, r2]])\n Q = np.array([[q1, 0.0, 0.0], [0.0, q1, 0.0], [0.0, 0.0, q1]])\n \n A = np.array([[1.0, 0.05, 0.00125], [0, 1.0, 0.05], [0, 0, 1]])\n H = np.array([[1.0, 0.0, 0.0],[ 0.0, 0.0, 1.0]])\n P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])\n I = np.identity(3)\n x_hat = np.array([[0.0], [0.0], [0.0]])\n Y = np.array([[0.0], [0.0]])\n\n new_altitude = []\n new_acceleration = []\n new_velocity = []\n \n for altitude, az in zip(noisy_df['Altitude'], noisy_df['Vertical_acceleration']):\n Z = np.array([[altitude], [az]])\n\n x_hat_minus = np.dot(A, x_hat)\n P_minus = np.dot(np.dot(A, P), np.transpose(A)) + Q\n K = np.dot(np.dot(P_minus, np.transpose(H)), np.linalg.inv((np.dot(np.dot(H, P_minus), np.transpose(H)) + R)))\n Y = Z - np.dot(H, x_hat_minus)\n x_hat = x_hat_minus + np.dot(K, Y)\n P = np.dot((I - np.dot(K, H)), P_minus)\n Y = Z - np.dot(H, x_hat_minus)\n new_altitude.append(float(x_hat[0]))\n new_velocity.append(float(x_hat[1]))\n new_acceleration.append(float(x_hat[2]))\n return new_altitude", "_____no_output_____" ], [ "def objective_function(param):\n r1, r2, q1 = param['r1'], param['r2'], param['q1']\n R = np.array([[r1, 0.0], [0.0, r2]])\n Q = np.array([[q1, 0.0, 0.0], [0.0, q1, 0.0], [0.0, 0.0, q1]])\n \n A = np.array([[1.0, 0.05, 0.00125], [0, 1.0, 0.05], [0, 0, 1]])\n H = np.array([[1.0, 0.0, 0.0],[ 0.0, 0.0, 1.0]])\n P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])\n I = np.identity(3)\n x_hat = np.array([[0.0], [0.0], [0.0]])\n Y = np.array([[0.0], [0.0]])\n\n new_altitude = []\n new_acceleration = []\n 
new_velocity = []\n \n for altitude, az in zip(noisy_df['Altitude'], noisy_df['Vertical_acceleration']):\n Z = np.array([[altitude], [az]])\n\n x_hat_minus = np.dot(A, x_hat)\n P_minus = np.dot(np.dot(A, P), np.transpose(A)) + Q\n K = np.dot(np.dot(P_minus, np.transpose(H)), np.linalg.inv((np.dot(np.dot(H, P_minus), np.transpose(H)) + R)))\n Y = Z - np.dot(H, x_hat_minus)\n x_hat = x_hat_minus + np.dot(K, Y)\n P = np.dot((I - np.dot(K, H)), P_minus)\n Y = Z - np.dot(H, x_hat_minus)\n new_altitude.append(float(x_hat[0]))\n new_velocity.append(float(x_hat[1]))\n new_acceleration.append(float(x_hat[2]))\n return mean_squared_error(df['Altitude'], new_altitude)", "_____no_output_____" ], [ "# space = {\n# \"r1\": hp.choice(\"r1\", np.arange(0.01, 90, 0.005)),\n# \"r2\": hp.choice(\"r2\", np.arange(0.01, 90, 0.005)),\n# \"q1\": hp.choice(\"q1\", np.arange(0.0001, 0.0009, 0.0001))\n# }", "_____no_output_____" ], [ "len(np.arange(0.00001, 0.09, 0.00001))", "_____no_output_____" ], [ "space = {\n \"r1\": hp.choice(\"r1\", np.arange(0.001, 90, 0.001)),\n \"r2\": hp.choice(\"r2\", np.arange(0.001, 90, 0.001)),\n \"q1\": hp.choice(\"q1\", np.arange(0.00001, 0.09, 0.00001))\n}", "_____no_output_____" ], [ "# Initialize trials object\ntrials = Trials()\n\nbest = fmin(fn=objective_function, space = space, algo=tpe.suggest, max_evals=100, trials=trials )", "100%|██████████████████████████████████████████████| 100/100 [34:06<00:00, 20.46s/trial, best loss: 10.382455396105525]\n" ], [ "print(best)\n# -> {'a': 1, 'c2': 0.01420615366247227}\nprint(space_eval(space, best))\n# -> ('case 2', 0.01420615366247227}", "{'q1': 6625, 'r1': 2010, 'r2': 72029}\n{'q1': 0.06626, 'r1': 2.011, 'r2': 72.03}\n" ], [ "d1 = space_eval(space, best)", "_____no_output_____" ], [ "objective_function(d1)", "_____no_output_____" ], [ "%%timeit\nobjective_function({'q1': 0.06626, 'r1': 0.25, 'r2': 0.75})", "24.2 ms ± 499 µs per loop (mean ± std. dev. 
of 7 runs, 10 loops each)\n" ], [ "objective_function({'q1': 0.06626, 'r1': 0.25, 'r2': 0.75})", "_____no_output_____" ], [ "y = kalman_update(d1)\ncurrent = kalman_update({'q1': 0.06626, 'r1': 0.25, 'r2': 0.75})", "_____no_output_____" ], [ "plt.figure(figsize=(20, 10))\nplt.plot(noisy_df.Time, df['Altitude'], linewidth=2, color=\"r\", label=\"Actual\")\nplt.plot(noisy_df.Time, current, linewidth=2, color=\"g\", label=\"ESP32\")\nplt.plot(noisy_df.Time, noisy_df['Altitude'], linewidth=2, color=\"y\", label=\"Noisy\")\nplt.plot(noisy_df.Time, y, linewidth=2, color=\"b\", label=\"Predicted\")\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "def kalman_update_return_velocity(param):\n r1, r2, q1 = param['r1'], param['r2'], param['q1']\n R = np.array([[r1, 0.0], [0.0, r2]])\n Q = np.array([[q1, 0.0, 0.0], [0.0, q1, 0.0], [0.0, 0.0, q1]])\n \n A = np.array([[1.0, 0.05, 0.00125], [0, 1.0, 0.05], [0, 0, 1]])\n H = np.array([[1.0, 0.0, 0.0],[ 0.0, 0.0, 1.0]])\n P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])\n I = np.identity(3)\n x_hat = np.array([[0.0], [0.0], [0.0]])\n Y = np.array([[0.0], [0.0]])\n\n new_altitude = []\n new_acceleration = []\n new_velocity = []\n \n for altitude, az in zip(noisy_df['Altitude'], noisy_df['Vertical_acceleration']):\n Z = np.array([[altitude], [az]])\n\n x_hat_minus = np.dot(A, x_hat)\n P_minus = np.dot(np.dot(A, P), np.transpose(A)) + Q\n K = np.dot(np.dot(P_minus, np.transpose(H)), np.linalg.inv((np.dot(np.dot(H, P_minus), np.transpose(H)) + R)))\n Y = Z - np.dot(H, x_hat_minus)\n x_hat = x_hat_minus + np.dot(K, Y)\n P = np.dot((I - np.dot(K, H)), P_minus)\n Y = Z - np.dot(H, x_hat_minus)\n new_altitude.append(float(x_hat[0]))\n new_velocity.append(float(x_hat[1]))\n new_acceleration.append(float(x_hat[2]))\n return new_velocity", "_____no_output_____" ], [ "def objective_function(param):\n r1, r2, q1 = param['r1'], param['r2'], param['q1']\n R = np.array([[r1, 0.0], [0.0, r2]])\n Q = np.array([[q1, 0.0, 0.0], [0.0, q1, 0.0], [0.0, 0.0, q1]])\n \n A = np.array([[1.0, 0.05, 0.00125], [0, 1.0, 0.05], [0, 0, 1]])\n H = np.array([[1.0, 0.0, 0.0],[ 0.0, 0.0, 1.0]])\n P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])\n I = np.identity(3)\n x_hat = np.array([[0.0], [0.0], [0.0]])\n Y = np.array([[0.0], [0.0]])\n\n new_altitude = []\n new_acceleration = []\n new_velocity = []\n \n for altitude, az in zip(noisy_df['Altitude'], noisy_df['Vertical_acceleration']):\n Z = np.array([[altitude], [az]])\n\n x_hat_minus = np.dot(A, x_hat)\n P_minus = np.dot(np.dot(A, P), np.transpose(A)) + Q\n K = np.dot(np.dot(P_minus, np.transpose(H)), np.linalg.inv((np.dot(np.dot(H, P_minus), np.transpose(H)) + R)))\n Y = Z - np.dot(H, x_hat_minus)\n x_hat = x_hat_minus + np.dot(K, Y)\n P = np.dot((I - np.dot(K, H)), P_minus)\n Y = Z - np.dot(H, x_hat_minus)\n new_altitude.append(float(x_hat[0]))\n new_velocity.append(float(x_hat[1]))\n new_acceleration.append(float(x_hat[2]))\n return mean_squared_error(df['Vertical_velocity'], new_velocity)", "_____no_output_____" ], [ "space = {\n \"r1\": hp.choice(\"r1\", np.arange(0.001, 90, 0.001)),\n \"r2\": hp.choice(\"r2\", np.arange(0.001, 90, 0.001)),\n \"q1\": hp.choice(\"q1\", np.arange(0.00001, 0.09, 0.00001))\n}", "_____no_output_____" ], [ "# Initialize trials object\ntrials = Trials()\n\nbest = fmin(fn=objective_function, space = space, algo=tpe.suggest, max_evals=100, trials=trials )", "100%|█████████████████████████████████████████████████████| 100/100 [9:31:10<00:00, 342.70s/trial, best loss: 
90.1247837384137]\n" ], [ "print(best)\nprint(space_eval(space, best))", "{'q1': 8983, 'r1': 66244, 'r2': 58366}\n{'q1': 0.08984, 'r1': 66.245, 'r2': 58.367}\n" ], [ "d2 = space_eval(space, best)", "_____no_output_____" ], [ "objective_function(d2)", "_____no_output_____" ], [ "y = kalman_update_return_velocity(d2)\ncurrent = kalman_update_return_velocity({'q1': 0.0013, 'r1': 0.25, 'r2': 0.65})\nprevious = kalman_update_return_velocity({'q1': 0.08519, 'r1': 4.719, 'r2': 56.443})", "_____no_output_____" ], [ "plt.figure(figsize=(20, 10))\nplt.plot(noisy_df.Time, df['Vertical_velocity'], linewidth=2, color=\"r\", label=\"Actual\")\nplt.plot(noisy_df.Time, current, linewidth=2, color=\"g\", label=\"ESP32\")\nplt.plot(noisy_df.Time, previous, linewidth=2, color=\"c\", label=\"With previous data\")\nplt.plot(noisy_df.Time, noisy_df['Vertical_velocity'], linewidth=2, color=\"y\", label=\"Noisy\")\nplt.plot(noisy_df.Time, y, linewidth=2, color=\"b\", label=\"Predicted\")\nplt.legend()\nplt.show()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d036ff9d745799b517fcb40a0dc00bf054c5e29a
3,492
ipynb
Jupyter Notebook
model_registery/notebooks/rfr_class.ipynb
dmatrix/olt-mlflow
523b6c499faca02faf9138f2147ad61afb1493b6
[ "Apache-2.0" ]
17
2020-10-26T13:13:37.000Z
2022-03-10T08:30:15.000Z
model_registery/notebooks/rfr_class.ipynb
dmatrix/olt-mlflow
523b6c499faca02faf9138f2147ad61afb1493b6
[ "Apache-2.0" ]
null
null
null
model_registery/notebooks/rfr_class.ipynb
dmatrix/olt-mlflow
523b6c499faca02faf9138f2147ad61afb1493b6
[ "Apache-2.0" ]
13
2020-11-04T13:58:23.000Z
2022-03-23T16:13:28.000Z
30.103448
97
0.506873
[ [ [ "import warnings\nimport mlflow.sklearn\nimport numpy as np\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.metrics import mean_squared_error\nwarnings.filterwarnings(\"ignore\")", "_____no_output_____" ], [ "class RFRModel():\n def __init__(self, params={}):\n self.rf = RandomForestRegressor(**params)\n self.params = params\n self._mse = None\n self._rsme = None\n\n @classmethod\n def new_instance(cls, params={}):\n return cls(params)\n\n @property\n def model(self):\n return self.rf\n\n @property\n def mse(self):\n return self._mse\n\n @mse.setter\n def mse(self, value):\n self._mse = value\n\n @property\n def rsme(self):\n return self._rsme\n\n @rsme.setter\n def rsme(self, value):\n self._rsme = value\n\n def mlflow_run(self, X_train, y_train, val_x, val_y, model_name,\n run_name=\"Random Forest Regressor: Power Forecasting Model\",\n register=False, verbose=False):\n with mlflow.start_run(run_name=run_name) as run:\n # Log all parameters\n mlflow.log_params(self.params)\n\n # Train and fit the model\n self.rf.fit(X_train, y_train)\n y_pred = self.rf.predict(val_x)\n\n # Compute metrics\n self._mse = mean_squared_error(y_pred, val_y)\n self._rsme = np.sqrt(self._mse)\n\n if verbose:\n print(\"Validation MSE: %d\" % self._mse)\n print(\"Validation RMSE: %d\" % self._rsme)\n\n # log params and metrics\n mlflow.log_params(self.params)\n mlflow.log_metric(\"mse\", self._mse)\n mlflow.log_metric(\"rmse\", self._rsme)\n\n # Specify the `registered_model_name` parameter of the\n # function to register the model with the Model Registry. This automatically\n # creates a new model version for each new run\n mlflow.sklearn.log_model(\n sk_model=self.model,\n artifact_path=\"sklearn-model\",\n registered_model_name=model_name) if register else mlflow.sklearn.log_model(\n sk_model=self.model,\n artifact_path=\"sklearn-model\")\n\n run_id = run.info.run_id\n\n return run_id", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
d03711cca77072e192e617458f3e1dec3b281a3e
14,051
ipynb
Jupyter Notebook
ResNet50_CV.ipynb
naufalhisyam/TurbidityPrediction-thesis
fecb53c71a175c70e6b330541c7f5f8fede1fce2
[ "MIT" ]
null
null
null
ResNet50_CV.ipynb
naufalhisyam/TurbidityPrediction-thesis
fecb53c71a175c70e6b330541c7f5f8fede1fce2
[ "MIT" ]
null
null
null
ResNet50_CV.ipynb
naufalhisyam/TurbidityPrediction-thesis
fecb53c71a175c70e6b330541c7f5f8fede1fce2
[ "MIT" ]
null
null
null
37.369681
262
0.51007
[ [ [ "<a href=\"https://colab.research.google.com/github/naufalhisyam/TurbidityPrediction-thesis/blob/main/train_model_DenseNet121_CV.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "import os\nimport datetime\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\n!pip install tensorflow-addons\nimport tensorflow_addons as tfa\nfrom sklearn.model_selection import KFold, train_test_split", "_____no_output_____" ], [ "!git clone https://github.com/naufalhisyam/TurbidityPrediction-thesis.git\nos.chdir('/content/TurbidityPrediction-thesis') ", "_____no_output_____" ], [ "images = pd.read_csv(r'./Datasets/0degree_lowrange/0degInfo.csv') #load dataset info\ntrain_df, test_df = train_test_split(images, train_size=0.9, shuffle=True, random_state=1)\nY = train_df[['Turbidity']]", "_____no_output_____" ], [ "VALIDATION_R2 = []\nVALIDATION_LOSS = []\nVALIDATION_MSE = []\nVALIDATION_MAE = []\n\nname = 'ResNet_0deg_withTL'\nsave_dir = f'saved_models/{name}'\nif not os.path.exists(save_dir):\n os.makedirs(save_dir)", "_____no_output_____" ], [ "def get_model():\n #Create model\n base_model = tf.keras.applications.ResNet50(include_top=False, weights='imagenet', \n input_shape=(224, 224, 3), pooling='avg')\n out = base_model.output\n prediction = tf.keras.layers.Dense(1, activation=\"linear\")(out)\n model = tf.keras.Model(inputs = base_model.input, outputs = prediction)\n\n #Compile the model\n \n return model\n\ndef get_model_name(k):\n return 'resnet_'+str(k)+'.h5'\n\ntf.test.gpu_device_name()", "_____no_output_____" ], [ "train_generator = tf.keras.preprocessing.image.ImageDataGenerator(\n horizontal_flip=True\n)\n\ntest_generator = tf.keras.preprocessing.image.ImageDataGenerator(\n horizontal_flip=True\n)\n\nkf = KFold(n_splits = 5)\nfold_var = 1", "_____no_output_____" ], [ "for train_index, val_index in kf.split(np.zeros(Y.shape[0]),Y):\n training_data = train_df.iloc[train_index]\n validation_data = train_df.iloc[val_index]\n\t\n train_images = train_generator.flow_from_dataframe(training_data,\n x_col = \"Filepath\", y_col = \"Turbidity\",\n target_size=(224, 224), color_mode='rgb',\n class_mode = \"raw\", shuffle = True)\n val_images = train_generator.flow_from_dataframe(validation_data,\n x_col = \"Filepath\", y_col = \"Turbidity\",\n target_size=(224, 224), color_mode='rgb',\n class_mode = \"raw\", shuffle = True)\n\t\n\t# CREATE NEW MODEL\n model = get_model()\n\t# COMPILE NEW MODEL\n opt = tf.keras.optimizers.Adam(learning_rate=1e-4, decay=1e-6)\n model.compile(loss=tf.keras.losses.Huber(), optimizer=opt, metrics=['mae','mse', tfa.metrics.RSquare(name=\"R2\")])\n\t\n\t# CREATE CALLBACKS\n checkpoint_filepath = f'{save_dir}/{get_model_name(fold_var)}'\n checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_filepath,\n monitor='val_loss', verbose=1, save_best_only=True, mode='min')\n callbacks_list = [checkpoint]\n\t# There can be other callbacks, but just showing one because it involves the model name\n\t# This saves the best model\n\t# FIT THE MODEL\n history = model.fit(train_images, epochs=100,\n callbacks=callbacks_list,\n validation_data=val_images)\n\t\n\t# LOAD BEST MODEL to evaluate the performance of the model\n model.load_weights(f\"{save_dir}/resnet_\"+str(fold_var)+\".h5\")\n\t\n results = model.evaluate(val_images)\n results = dict(zip(model.metrics_names,results))\n\t\n VALIDATION_R2.append(results['R2'])\n 
VALIDATION_MAE.append(results['mae'])\n VALIDATION_MSE.append(results['mse'])\n VALIDATION_LOSS.append(results['loss'])\n\t\n tf.keras.backend.clear_session()\n\t\n fold_var += 1", "_____no_output_____" ], [ "train_images = train_generator.flow_from_dataframe(\n dataframe=train_df,\n x_col='Filepath',\n y_col='Turbidity',\n target_size=(224, 224),\n color_mode='rgb',\n class_mode='raw',\n shuffle=False,\n)\n\ntest_images = test_generator.flow_from_dataframe(\n dataframe=test_df,\n x_col='Filepath',\n y_col='Turbidity',\n target_size=(224, 224),\n color_mode='rgb',\n class_mode='raw',\n shuffle=False\n)", "_____no_output_____" ], [ "min_fold = min(range(len(VALIDATION_LOSS)), key=VALIDATION_LOSS.__getitem__) + 1\n\nmodel = get_model()\nmodel.load_weights(f\"{save_dir}/resnet_\"+str(min_fold)+\".h5\")\nopt = tf.keras.optimizers.Adam(learning_rate=1e-3, decay=1e-6)\nmodel.compile(loss=tf.keras.losses.Huber(), optimizer=opt, metrics=['mae','mse', tfa.metrics.RSquare(name=\"R2\")])", "_____no_output_____" ], [ "test_pred = np.squeeze(model.predict(test_images))\ntest_true = test_images.labels\ntest_residuals = test_true - test_pred\n\ntrain_pred = np.squeeze(model.predict(train_images))\ntrain_true = train_images.labels\ntrain_residuals = train_true - train_pred\n\ntrain_score = model.evaluate(train_images)\ntest_score = model.evaluate(test_images)\nprint('test ',test_score)\nprint('train ', train_score)", "_____no_output_____" ], [ "f, axs = plt.subplots(1, 2, figsize=(8,6), gridspec_kw={'width_ratios': [4, 1]})\n\nf.suptitle(f'Residual Plot - {name}', fontsize=13, fontweight='bold', y=0.92) \naxs[0].scatter(train_pred,train_residuals, label='Train Set', alpha=0.75, color='tab:blue') \naxs[0].scatter(test_pred,test_residuals, label='Test Set', alpha=0.75, color='tab:orange')\naxs[0].set_ylabel('Residual (NTU)')\naxs[0].set_xlabel('Predicted Turbidity (NTU)') \naxs[0].axhline(0, color='black')\naxs[0].legend()\naxs[0].grid()\n\naxs[1].hist(train_residuals, bins=50, orientation=\"horizontal\", density=True, alpha=0.9, color='tab:blue')\naxs[1].hist(test_residuals, bins=50, orientation=\"horizontal\", density=True, alpha=0.75, color='tab:orange')\naxs[1].axhline(0, color='black')\naxs[1].set_xlabel('Distribution') \naxs[1].yaxis.tick_right()\naxs[1].grid(axis='y')\n\nplt.subplots_adjust(wspace=0.05)\n\nplt.savefig(f'{save_dir}/residualPlot_{name}.png', dpi=150)\nplt.show()", "_____no_output_____" ], [ "fig, ax = plt.subplots(1,2,figsize=(13,6))\nfig.suptitle(f'Nilai Prediksi vs Observasi - {name}', fontsize=13, fontweight='bold', y=0.96)\n\nax[0].scatter(test_true,test_pred, label=f'$Test\\ R^2=${round(test_score[3],3)}',color='tab:orange', alpha=0.75)\ntheta = np.polyfit(test_true, test_pred, 1)\ny_line = theta[1] + theta[0] * test_true\nax[0].plot([test_true.min(), test_true.max()], [y_line.min(), y_line.max()],'k--', lw=2,label='best fit')\nax[0].plot([test_true.min(), test_true.max()], [test_true.min(), test_true.max()], 'k--', lw=2, label='identity',color='dimgray')\nax[0].set_xlabel('Measured Turbidity (NTU)')\nax[0].set_ylabel('Predicted Turbidity (NTU)')\nax[0].set_title(f'Test Set', fontsize=10, fontweight='bold')\nax[0].set_xlim([0, 130])\nax[0].set_ylim([0, 130])\nax[0].grid()\nax[0].legend()\n\nax[1].scatter(train_true,train_pred, label=f'$Train\\ R^2=${round(train_score[3],3)}', color='tab:blue', alpha=0.75)\ntheta2 = np.polyfit(train_true, train_pred, 1)\ny_line2 = theta2[1] + theta2[0] * train_true\nax[1].plot([train_true.min(), train_true.max()], [y_line2.min(), 
y_line2.max()],'k--', lw=2,label='best fit')\nax[1].plot([train_true.min(), train_true.max()], [train_true.min(),train_true.max()], 'k--', lw=2, label='identity',color='dimgray')\nax[1].set_xlabel('Measured Turbidity (NTU)')\nax[1].set_ylabel('Predicted Turbidity (NTU)')\nax[1].set_title(f'Train Set', fontsize=10, fontweight='bold')\nax[1].set_xlim([0, 130])\nax[1].set_ylim([0, 130])\nax[1].grid()\nax[1].legend()\n\nplt.savefig(f'{save_dir}/predErrorPlot_{name}.png', dpi=150)\nplt.show()", "_____no_output_____" ], [ "cv_df = pd.DataFrame.from_dict({'val_loss': VALIDATION_LOSS, 'val_mae': VALIDATION_MAE, 'val_mse': VALIDATION_MSE, 'val_R2': VALIDATION_R2}, orient='index').T\ncv_csv_file = f'{save_dir}/cross_val.csv'\nwith open(cv_csv_file, mode='w') as f:\n cv_df.to_csv(f)", "_____no_output_____" ], [ "from google.colab import drive\ndrive.mount('/content/gdrive')\n\nsave_path = f\"/content/gdrive/MyDrive/MODEL BERHASIL/ResNet/{name}\"\nif not os.path.exists(save_path):\n os.makedirs(save_path)\n\noripath = \"saved_models/.\"\n!cp -a \"{oripath}\" \"{save_path}\" # copies files to google drive", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d0371bc2e0272ba11a83bc1971b583ef73de6b00
22,658
ipynb
Jupyter Notebook
notebooks/python/L04_C01_dogs_vs_cats_without_augmentation.ipynb
rses-dl-course/rses-dl-course.github.io
bd2880d2db833df5049440af1e554441f75c50e7
[ "CC-BY-4.0" ]
2
2021-02-11T08:36:48.000Z
2021-02-11T10:16:30.000Z
notebooks/python/L04_C01_dogs_vs_cats_without_augmentation.ipynb
rses-dl-course/rses-dl-course.github.io
bd2880d2db833df5049440af1e554441f75c50e7
[ "CC-BY-4.0" ]
8
2021-04-15T12:20:13.000Z
2021-04-23T10:55:25.000Z
notebooks/python/L04_C01_dogs_vs_cats_without_augmentation.ipynb
rses-dl-course/rses-dl-course.github.io
bd2880d2db833df5049440af1e554441f75c50e7
[ "CC-BY-4.0" ]
null
null
null
29.236129
423
0.582355
[ [ [ "##### Copyright 2019 The TensorFlow Authors.", "_____no_output_____" ] ], [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# Lab 04a: Dogs vs Cats Image Classification Without Image Augmentation", "_____no_output_____" ], [ "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/sres-dl-course/sres-dl-course.github.io/blob/master/notebooks/python/L04_C01_dogs_vs_cats_without_augmentation.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/sres-dl-course/sres-dl-course.github.io/blob/master/notebooks/python/L04_C01_dogs_vs_cats_without_augmentation.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>", "_____no_output_____" ], [ "In this tutorial, we will discuss how to classify images into pictures of cats or pictures of dogs. We'll build an image classifier using `tf.keras.Sequential` model and load data using `tf.keras.preprocessing.image.ImageDataGenerator`.\n\n## Specific concepts that will be covered:\nIn the process, we will build practical experience and develop intuition around the following concepts\n\n* Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class — How can we efficiently work with data on disk to interface with our model?\n* _Overfitting_ - what is it, how to identify it?\n\n<hr>\n\n\n**Before you begin**\n\nBefore running the code in this notebook, reset the runtime by going to **Runtime -> Reset all runtimes** in the menu above. If you have been working through several notebooks, this will help you avoid reaching Colab's memory limits.\n", "_____no_output_____" ], [ "# Importing packages", "_____no_output_____" ], [ "Let's start by importing required packages:\n\n* os — to read files and directory structure\n* numpy — for some matrix math outside of TensorFlow\n* matplotlib.pyplot — to plot the graph and display images in our training and validation data\n", "_____no_output_____" ] ], [ [ "import tensorflow as tf", "_____no_output_____" ], [ "from tensorflow.keras.preprocessing.image import ImageDataGenerator", "_____no_output_____" ], [ "import os\nimport matplotlib.pyplot as plt\nimport numpy as np", "_____no_output_____" ], [ "import logging\nlogger = tf.get_logger()\nlogger.setLevel(logging.ERROR)", "_____no_output_____" ] ], [ [ "# Data Loading", "_____no_output_____" ], [ "To build our image classifier, we begin by downloading the dataset. The dataset we are using is a filtered version of <a href=\"https://www.kaggle.com/c/dogs-vs-cats/data\" target=\"_blank\">Dogs vs. 
Cats</a> dataset from Kaggle (ultimately, this dataset is provided by Microsoft Research).\n\nIn previous Colabs, we've used <a href=\"https://www.tensorflow.org/datasets\" target=\"_blank\">TensorFlow Datasets</a>, which is a very easy and convenient way to use datasets. In this Colab however, we will make use of the class `tf.keras.preprocessing.image.ImageDataGenerator` which will read data from disk. We therefore need to directly download *Dogs vs. Cats* from a URL and unzip it to the Colab filesystem.", "_____no_output_____" ] ], [ [ "_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'\nzip_dir = tf.keras.utils.get_file('cats_and_dogs_filterted.zip', origin=_URL, extract=True)", "_____no_output_____" ] ], [ [ "The dataset we have downloaded has the following directory structure.\n\n<pre style=\"font-size: 10.0pt; font-family: Arial; line-height: 2; letter-spacing: 1.0pt;\" >\n<b>cats_and_dogs_filtered</b>\n|__ <b>train</b>\n |______ <b>cats</b>: [cat.0.jpg, cat.1.jpg, cat.2.jpg ...]\n |______ <b>dogs</b>: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]\n|__ <b>validation</b>\n |______ <b>cats</b>: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ...]\n |______ <b>dogs</b>: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]\n</pre>\n\nWe can list the directories with the following terminal command:", "_____no_output_____" ] ], [ [ "zip_dir_base = os.path.dirname(zip_dir)\n!find $zip_dir_base -type d -print", "_____no_output_____" ] ], [ [ "We'll now assign variables with the proper file path for the training and validation sets.", "_____no_output_____" ] ], [ [ "base_dir = os.path.join(os.path.dirname(zip_dir), 'cats_and_dogs_filtered')\ntrain_dir = os.path.join(base_dir, 'train')\nvalidation_dir = os.path.join(base_dir, 'validation')\n\ntrain_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures\ntrain_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures\nvalidation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures\nvalidation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures", "_____no_output_____" ] ], [ [ "### Understanding our data", "_____no_output_____" ], [ "Let's look at how many cats and dogs images we have in our training and validation directory", "_____no_output_____" ] ], [ [ "num_cats_tr = len(os.listdir(train_cats_dir))\nnum_dogs_tr = len(os.listdir(train_dogs_dir))\n\nnum_cats_val = len(os.listdir(validation_cats_dir))\nnum_dogs_val = len(os.listdir(validation_dogs_dir))\n\ntotal_train = num_cats_tr + num_dogs_tr\ntotal_val = num_cats_val + num_dogs_val", "_____no_output_____" ], [ "print('total training cat images:', num_cats_tr)\nprint('total training dog images:', num_dogs_tr)\n\nprint('total validation cat images:', num_cats_val)\nprint('total validation dog images:', num_dogs_val)\nprint(\"--\")\nprint(\"Total training images:\", total_train)\nprint(\"Total validation images:\", total_val)", "_____no_output_____" ] ], [ [ "# Setting Model Parameters", "_____no_output_____" ], [ "For convenience, we'll set up variables that will be used later while pre-processing our dataset and training our network.", "_____no_output_____" ] ], [ [ "BATCH_SIZE = 100 # Number of training examples to process before updating our models variables\nIMG_SHAPE = 150 # Our training data consists of images with width of 150 pixels and height of 150 pixels", "_____no_output_____" ] ], [ [ "# Data Preparation ", "_____no_output_____" 
], [ "Images must be formatted into appropriately pre-processed floating point tensors before being fed into the network. The steps involved in preparing these images are:\n\n1. Read images from the disk\n2. Decode contents of these images and convert it into proper grid format as per their RGB content\n3. Convert them into floating point tensors\n4. Rescale the tensors from values between 0 and 255 to values between 0 and 1\n\nFortunately, all these tasks can be done using the class **tf.keras.preprocessing.image.ImageDataGenerator**.\n\nWe can set this up in a couple of lines of code.", "_____no_output_____" ] ], [ [ "train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data\nvalidation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data", "_____no_output_____" ] ], [ [ "After defining our generators for training and validation images, **flow_from_directory** method will load images from the disk, apply rescaling, and resize them using single line of code.", "_____no_output_____" ] ], [ [ "train_data_gen = train_image_generator.flow_from_directory(batch_size=BATCH_SIZE,\n directory=train_dir,\n shuffle=True,\n target_size=(IMG_SHAPE,IMG_SHAPE), #(150,150)\n class_mode='binary')", "_____no_output_____" ], [ "val_data_gen = validation_image_generator.flow_from_directory(batch_size=BATCH_SIZE,\n directory=validation_dir,\n shuffle=False,\n target_size=(IMG_SHAPE,IMG_SHAPE), #(150,150)\n class_mode='binary')", "_____no_output_____" ] ], [ [ "### Visualizing Training images", "_____no_output_____" ], [ "We can visualize our training images by getting a batch of images from the training generator, and then plotting a few of them using `matplotlib`.", "_____no_output_____" ] ], [ [ "sample_training_images, _ = next(train_data_gen) ", "_____no_output_____" ] ], [ [ "The `next` function returns a batch from the dataset. One batch is a tuple of (*many images*, *many labels*). For right now, we're discarding the labels because we just want to look at the images.", "_____no_output_____" ] ], [ [ "# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.\ndef plotImages(images_arr):\n fig, axes = plt.subplots(1, 5, figsize=(20,20))\n axes = axes.flatten()\n for img, ax in zip(images_arr, axes):\n ax.imshow(img)\n plt.tight_layout()\n plt.show()", "_____no_output_____" ], [ "plotImages(sample_training_images[:5]) # Plot images 0-4", "_____no_output_____" ] ], [ [ "# Model Creation", "_____no_output_____" ], [ "## Exercise 4.1 Define the model\n\nThe model consists of four convolution blocks with a max pool layer in each of them. Then we have a fully connected layer with 512 units, with a `relu` activation function. The model will output class probabilities for two classes — dogs and cats — using `softmax`. 
\n\nThe list of model layers:\n* 2D Convolution - 32 filters, 3x3 kernel, ReLU activation\n* 2D Max pooling - 2x2 kernel\n* 2D Convolution - 64 filters, 3x3 kernel, ReLU activation\n* 2D Max pooling - 2x2 kernel\n* 2D Convolution - 128 filters, 3x3 kernel, ReLU activation\n* 2D Max pooling - 2x2 kernel\n* 2D Convolution - 128 filters, 3x3 kernel, ReLU activation\n* 2D Max pooling - 2x2 kernel\n* Flatten\n* Dense - 512 nodes\n* Dense - 2 nodes\n\nCheck the documentation for how to specify the layers [https://www.tensorflow.org/api_docs/python/tf/keras/layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers)", "_____no_output_____" ] ], [ [ "model = tf.keras.models.Sequential([\n # TODO - Create the CNN model as specified above\n])", "_____no_output_____" ] ], [ [ "### Exercise 4.1 Solution\n\nThe solution for the exercise can be found [here](https://colab.research.google.com/github/rses-dl-course/rses-dl-course.github.io/blob/master/notebooks/python/solutions/E4.1.ipynb)", "_____no_output_____" ], [ "### Exercise 4.2 Compile the model\n\nAs usual, we will use the `adam` optimizer. Since we output a softmax categorization, we'll use `sparse_categorical_crossentropy` as the loss function. We would also like to look at training and validation accuracy on each epoch as we train our network, so we are passing in the metrics argument.", "_____no_output_____" ] ], [ [ "# TODO - Compile the model", "_____no_output_____" ] ], [ [ "#### Exercise 4.2 Solution\n\nThe solution for the exercise can be found [here](https://colab.research.google.com/github/rses-dl-course/rses-dl-course.github.io/blob/master/notebooks/python/solutions/E4.2.ipynb)", "_____no_output_____" ], [ "### Model Summary\n\nLet's look at all the layers of our network using **summary** method.", "_____no_output_____" ] ], [ [ "model.summary()", "_____no_output_____" ] ], [ [ "### Exercise 4.3 Train the model", "_____no_output_____" ], [ "It's time we train our network.\n\n* Since we have a validation dataset, we can use this to evaluate our model as it trains by adding the `validation_data` parameter. 
\n* `validation_steps` can also be added if you'd like to use less than full validation set.", "_____no_output_____" ] ], [ [ "# TODO - Fit the model", "_____no_output_____" ] ], [ [ "#### Exercise 4.3 Solution\n\nThe solution for the exercise can be found [here](https://colab.research.google.com/github/rses-dl-course/rses-dl-course.github.io/blob/master/notebooks/python/solutions/E4.3.ipynb)", "_____no_output_____" ], [ "### Visualizing results of the training", "_____no_output_____" ], [ "We'll now visualize the results we get after training our network.", "_____no_output_____" ] ], [ [ "acc = history.history['accuracy']\nval_acc = history.history['val_accuracy']\n\nloss = history.history['loss']\nval_loss = history.history['val_loss']\n\nepochs_range = range(EPOCHS)\n\nplt.figure(figsize=(20, 8))\nplt.subplot(1, 2, 1)\nplt.plot(epochs_range, acc, label='Training Accuracy')\nplt.plot(epochs_range, val_acc, label='Validation Accuracy')\nplt.legend(loc='lower right')\nplt.title('Training and Validation Accuracy')\n\nplt.subplot(1, 2, 2)\nplt.plot(epochs_range, loss, label='Training Loss')\nplt.plot(epochs_range, val_loss, label='Validation Loss')\nplt.legend(loc='upper right')\nplt.title('Training and Validation Loss')\nplt.savefig('./foo.png')\nplt.show()", "_____no_output_____" ] ], [ [ "As we can see from the plots, training accuracy and validation accuracy are off by large margin and our model has achieved only around **70%** accuracy on the validation set (depending on the number of epochs you trained for).\n\nThis is a clear indication of overfitting. Once the training and validation curves start to diverge, our model has started to memorize the training data and is unable to perform well on the validation data.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
d037200fdc835edfcd317a147c95b280e9b32062
33,758
ipynb
Jupyter Notebook
GAN Model/Logistic_regression_Chandan_kumar.ipynb
MrTONYCHAN/xyz
f862111060c63abced8d69c9f3fd095cc7e1a262
[ "Apache-2.0" ]
null
null
null
GAN Model/Logistic_regression_Chandan_kumar.ipynb
MrTONYCHAN/xyz
f862111060c63abced8d69c9f3fd095cc7e1a262
[ "Apache-2.0" ]
null
null
null
GAN Model/Logistic_regression_Chandan_kumar.ipynb
MrTONYCHAN/xyz
f862111060c63abced8d69c9f3fd095cc7e1a262
[ "Apache-2.0" ]
null
null
null
27.535073
460
0.499526
[ [ [ "\n\n\n#CHANDAN KUMAR (BATCH 3)- GOOGLE COLAB / logistic regression & Rigid & Lasso Regression\n##(Rahul Agnihotri(T.L))\n \n\n\n\n", "_____no_output_____" ], [ "DATASET [HEART ](https://drive.google.com/file/d/10dopwCjH4VE557tSynCcY3fV9OBowq9h/view?usp=sharing)", "_____no_output_____" ], [ "#Packages to load", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nfrom sklearn.linear_model import Ridge\nfrom sklearn.linear_model import Lasso\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom sklearn.model_selection import GridSearchCV\n\n# for hiding warning\nimport warnings\nwarnings.filterwarnings('ignore')", "_____no_output_____" ] ], [ [ "#Input directory", "_____no_output_____" ] ], [ [ "heart_df = pd.read_csv(r'/content/heart.csv')\nheart_df", "_____no_output_____" ] ], [ [ "#About data set", "_____no_output_____" ], [ "The \"target\" field refers to the presence of heart disease in the patient. It is integer valued 0 = no/less chance of heart attack and 1 = more chance of heart attack\nAttribute Information\n- 1) age\n- 2) sex\n- 3) chest pain type (4 values)\n- 4) resting blood pressure\n- 5) serum cholestoral in mg/dl\n- 6)fasting blood sugar > 120 mg/dl\n- 7) resting electrocardiographic results (values 0,1,2)\n- 8) maximum heart rate achieved\n- 9) exercise induced angina\n- 10) oldpeak = ST depression induced by exercise relative to rest\n- 11)the slope of the peak exercise ST segment\n- 12) number of major vessels (0-3) colored by flourosopy\n- 13) thal: 0 = normal; 1 = fixed defect; 2 = reversable defect\n- 14) target: 0= less chance of heart attack 1= more chance of heart attack", "_____no_output_____" ], [ "#Get to know About data", "_____no_output_____" ] ], [ [ "heart_df.head()", "_____no_output_____" ], [ "heart_df.dtypes", "_____no_output_____" ], [ "heart_df.isnull().sum()", "_____no_output_____" ], [ "print('Shape : ',heart_df.shape)\nprint('Describe : ',heart_df.describe())", "_____no_output_____" ] ], [ [ "#EDA(Exploratory Data Analysis)", "_____no_output_____" ] ], [ [ "#import pandas_profiling as pp", "_____no_output_____" ], [ "#pp.ProfileReport(heart_df)", "_____no_output_____" ], [ "%matplotlib inline\nfrom matplotlib import pyplot as plt", "_____no_output_____" ], [ "fig,axes=plt.subplots(nrows=1,ncols=1,figsize=(10,5))\nsns.countplot(heart_df.target)", "_____no_output_____" ], [ "fig,axes=plt.subplots(nrows=1,ncols=1,figsize=(15,10))\nsns.distplot(heart_df['age'],hist=True,kde=True,rug=False,label='age',norm_hist=True)", "_____no_output_____" ], [ "heart_df.columns", "_____no_output_____" ], [ "corr = heart_df.corr(method = 'pearson')\ncorr", "_____no_output_____" ], [ "\ncolormap = plt.cm.OrRd \nplt.figure(figsize=(15, 10)) \nplt.title(\"Person Correlation of Features\", y = 1.05, size = 15) \nsns.heatmap(corr.astype(float).corr(), linecolor = \"white\", cmap = colormap, annot = True)", "_____no_output_____" ], [ "import plotly.express as px", "_____no_output_____" ], [ "px.bar(heart_df, x= 'age' , y='target', color='sex' , title= 'heart attack patoents age range and sex',\n labels = { 'output': 'Number of patients', 'Age': 'Age od patient'})\n", "_____no_output_____" ] ], [ [ "#Creating and Predicting Learning Models", "_____no_output_____" ] ], [ [ "X= heart_df.drop(columns= ['target'])\ny= heart_df['target']", "_____no_output_____" ] ], [ [ "\n##Data normalization", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import MinMaxScaler\n# Data normalization [0, 1]\n\ntransformer = 
MinMaxScaler()\ntransformer.fit(X)\nX = transformer.transform(X)\nX", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\nx_test,x_train,y_test,y_train = train_test_split(X,y,test_size = 0.2,random_state = 123)", "_____no_output_____" ], [ "from sklearn.linear_model import LogisticRegression\nlr = LogisticRegression()\nlr.fit(x_train,y_train)", "_____no_output_____" ], [ "y_pred = lr.predict( x_test)\ny_pred_proba = lr.predict_proba(x_test)[:, 1]", "_____no_output_____" ] ], [ [ "##Confusion_matrix", "_____no_output_____" ], [ "- conf_mat=multiclass,\n- colorbar=True,\n- show_absolute=False,\n- show_normed=True,\n- class_names=class_names\n", "_____no_output_____" ] ], [ [ "from sklearn.metrics import confusion_matrix, classification_report\nfrom mlxtend.plotting import plot_confusion_matrix\n\ncm=confusion_matrix(y_test, y_pred)\nfig, ax = plot_confusion_matrix(conf_mat=cm)\nplt.rcParams['font.size'] = 40\n#(conf_mat=multiclass,colorbar=True, show_absolute=False, show_normed=True, class_names=class_names)\nplt.show()\n\n# 0,0 \n# 0,1 \n# 1,0\n# 1,1 ", "_____no_output_____" ], [ "print(classification_report(y_test, y_pred))", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split, cross_val_score\nfrom sklearn.metrics import accuracy_score, classification_report, precision_score, recall_score \nfrom sklearn.metrics import confusion_matrix, precision_recall_curve, roc_curve, auc, log_loss", "_____no_output_____" ], [ "[fpr, tpr, thr] = roc_curve(y_test, y_pred_proba)", "_____no_output_____" ], [ "print('Train/Test split results:')\nprint(lr.__class__.__name__+\" accuracy is %2.3f\" % accuracy_score(y_test, y_pred))\nprint(lr.__class__.__name__+\" log_loss is %2.3f\" % log_loss(y_test, y_pred_proba))\nprint(lr.__class__.__name__+\" auc is %2.3f\" % auc(fpr, tpr))\n\nidx = np.min(np.where(tpr > 0.95)) # index of the first threshold for which the sensibility > 0.95", "_____no_output_____" ], [ "plt.figure(figsize=(10,10))\nplt.plot(fpr, tpr, color='coral', label='ROC curve (area = %0.3f)' % auc(fpr, tpr))\nplt.plot([0, 1], [0, 1], 'k--')\nplt.plot([0,fpr[idx]], [tpr[idx],tpr[idx]], 'k--', color='blue')\nplt.plot([fpr[idx],fpr[idx]], [0,tpr[idx]], 'k--', color='blue')\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.05])\nplt.xlabel('False Positive Rate (1 - specificity)', fontsize=5)\nplt.ylabel('True Positive Rate (recall)', fontsize=5)\nplt.title('Receiver operating characteristic (ROC) curve')\nplt.legend(loc=\"lower right\")\nplt.show()", "_____no_output_____" ], [ "heart_df.corr()", "_____no_output_____" ], [ "from sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn import metrics\n\nLR_model= LogisticRegression()\n\ntuned_parameters = {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000] ,\n 'penalty':['l1','l2']\n}", "_____no_output_____" ] ], [ [ "L1 and L2 are regularization parameters.They're used to avoid overfiting.Both L1 and L2 regularization prevents overfitting by shrinking (imposing a penalty) on the coefficients.\nL1 is the first moment norm |x1-x2| (|w| for regularization case) that is simply the absolute dıstance between two points where L2 is second moment norm corresponding to Eucledian Distance that is |x1-x2|^2 (|w|^2 for regularization case).\nIn simple words,L2 (Ridge) shrinks all the coefficient by the same proportions but eliminates none, while L1 (Lasso) can shrink some coefficients to zero, performing variable selection. 
If all the features are correlated with the label, ridge outperforms lasso, as the coefficients are never zero in ridge. If only a subset of features are correlated with the label, lasso outperforms ridge as in lasso model some coefficient can be shrunken to zero.", "_____no_output_____" ] ], [ [ "heart_df.corr()", "_____no_output_____" ], [ "from sklearn.model_selection import GridSearchCV\n\nLR= GridSearchCV(LR_model, tuned_parameters,cv=10)", "_____no_output_____" ], [ "LR.fit(x_train,y_train)", "_____no_output_____" ], [ "print(LR.best_params_)", "_____no_output_____" ], [ "y_prob = LR.predict_proba(x_test)[:,1] # This will give positive class prediction probabilities \ny_pred = np.where(y_prob > 0.5, 1, 0) # This will threshold the probabilities to give class predictions.\nLR.score(x_test, y_pred)", "_____no_output_____" ], [ "confusion_matrix=metrics.confusion_matrix(y_test,y_pred)\nconfusion_matrix", "_____no_output_____" ], [ "\n\nfrom sklearn.metrics import confusion_matrix, classification_report\nfrom mlxtend.plotting import plot_confusion_matrix\n\ncm=confusion_matrix(y_test, y_pred)\nfig, ax = plot_confusion_matrix(conf_mat=cm)\nplt.rcParams['font.size'] = 40\n#(conf_mat=multiclass,colorbar=True, show_absolute=False, show_normed=True, class_names=class_names)\nplt.show()", "_____no_output_____" ], [ "auc_roc=metrics.classification_report(y_test,y_pred)\nauc_roc", "_____no_output_____" ], [ "auc_roc=metrics.roc_auc_score(y_test,y_pred)\nauc_roc", "_____no_output_____" ], [ "from sklearn.metrics import roc_curve, auc\nfalse_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_prob)\nroc_auc = auc(false_positive_rate, true_positive_rate)\nroc_auc", "_____no_output_____" ], [ "LR_ridge= LogisticRegression(penalty='l2')\nLR_ridge.fit(x_train,y_train)", "_____no_output_____" ], [ "y_prob = LR_ridge.predict_proba(x_test)[:,1] # This will give positive class prediction probabilities \ny_pred = np.where(y_prob > 0.5, 1, 0) # This will threshold the probabilities to give class predictions.\nLR_ridge.score(x_test, y_pred)", "_____no_output_____" ], [ "confusion_matrix=metrics.confusion_matrix(y_test,y_pred)\nconfusion_matrix", "_____no_output_____" ], [ "from sklearn.metrics import confusion_matrix, classification_report\nfrom mlxtend.plotting import plot_confusion_matrix\n\ncm=confusion_matrix(y_test, y_pred)\nfig, ax = plot_confusion_matrix(conf_mat=cm)\nplt.rcParams['font.size'] = 40\n#(conf_mat=multiclass,colorbar=True, show_absolute=False, show_normed=True, class_names=class_names)\nplt.show()", "_____no_output_____" ], [ "auc_roc=metrics.classification_report(y_test,y_pred)\nauc_roc", "_____no_output_____" ], [ "auc_roc=metrics.roc_auc_score(y_test,y_pred)\nauc_roc", "_____no_output_____" ], [ "from sklearn.metrics import roc_curve, auc\nfalse_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_prob)\nroc_auc = auc(false_positive_rate, true_positive_rate)\nroc_auc", "_____no_output_____" ] ], [ [ "#**EXPERIMENTAL ZONE**", "_____no_output_____" ], [ "#LASSO AND RIDGE\n\n```\n# This is formatted as code\n```\n\n", "_____no_output_____" ] ], [ [ "Training_Accuracy_Before = []\nTesting_Accuracy_Before = []\nTraining_Accuracy_After = []\nTesting_Accuracy_After = []\nModels = ['Linear Regression', 'Lasso Regression', 'Ridge Regression']", "_____no_output_____" ], [ "alpha_space = np.logspace(-4, 0, 30) # Checking for alpha from .0001 to 1 and finding the best value for alpha\nalpha_space", "_____no_output_____" ], [ "ridge_scores = []\nridge = 
Ridge(normalize = True)\nfor alpha in alpha_space:\n ridge.alpha = alpha\n val = np.mean(cross_val_score(ridge,x_train,y_train, cv = 10))\n ridge_scores.append(val)", "_____no_output_____" ], [ "lasso_scores = []\nlasso = Lasso(normalize = True)\nfor alpha in alpha_space:\n lasso.alpha = alpha\n val = np.mean(cross_val_score(lasso, x_train,y_train, cv = 10))\n lasso_scores.append(val)", "_____no_output_____" ], [ "plt.figure(figsize=(8, 8))\nplt.plot(alpha_space, ridge_scores, marker = 'D', label = \"Ridge\")\nplt.plot(alpha_space, lasso_scores, marker = 'D', label = \"Lasso\")\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "# Performing GridSearchCV with Cross Validation technique on Lasso Regression and finding the optimum value of alpha\n\nparams = {'alpha': (np.logspace(-8, 8, 100))} # It will check from 1e-08 to 1e+08\nlasso = Lasso(normalize=True)\nlasso_model = GridSearchCV(lasso, params, cv = 10)\nlasso_model.fit(x_train, y_train)\nprint(lasso_model.best_params_)\nprint(lasso_model.best_score_)", "_____no_output_____" ], [ "# Using value of alpha as 0.009545 to get best accuracy for Lasso Regression\nlasso = Lasso(alpha = 0.009545, normalize = True)\nlasso.fit(x_train, y_train)\n\ntrain_score = lasso.score(x_train, y_train)\nprint(train_score)\ntest_score = lasso.score(x_test, y_test)\nprint(test_score)\nTraining_Accuracy_Before.append(train_score)\nTesting_Accuracy_Before.append(test_score)", "_____no_output_____" ], [ "# Performing GridSearchCV with Cross Validation technique on Ridge Regression and finding the optimum value of alpha\n\nparams = {'alpha': (np.logspace(-8, 8, 100))} # It will check from 1e-08 to 1e+08\nridge = Ridge(normalize=True)\nridge_model = GridSearchCV(ridge, params, cv = 10)\nridge_model.fit(x_train, y_train)\nprint(ridge_model.best_params_)\nprint(ridge_model.best_score_)", "_____no_output_____" ], [ "# Using value of alpha as 1.2045035 to get best accuracy for Ridge Regression\nridge = Ridge(alpha = 1.2045035, normalize = True)\nridge.fit(x_train, y_train)\n\ntrain_score = ridge.score(x_train, y_train)\nprint(train_score)\ntest_score = ridge.score(x_test, y_test)\nprint(test_score)\nTraining_Accuracy_Before.append(train_score)\nTesting_Accuracy_Before.append(test_score)", "_____no_output_____" ], [ "coefficients = lasso.coef_\ncoefficients", "_____no_output_____" ], [ "from sklearn.linear_model import LinearRegression", "_____no_output_____" ], [ "logreg = LinearRegression()\nlogreg.fit(x_train, y_train)\n\ntrain_score = logreg.score(x_train, y_train)\nprint(train_score)\ntest_score = logreg.score(x_test, y_test)\nprint(test_score)\n\nTraining_Accuracy_After.append(train_score)\nTesting_Accuracy_After.append(test_score)", "_____no_output_____" ], [ "# Performing GridSearchCV with Cross Validation technique on Lasso Regression and finding the optimum value of alpha\n\nparams = {'alpha': (np.logspace(-8, 8, 100))} # It will check from 1e-08 to 1e+08\nlasso = Lasso(normalize=True)\nlasso_model = GridSearchCV(lasso, params, cv = 10)\nlasso_model.fit(x_train, y_train)\nprint(lasso_model.best_params_)\nprint(lasso_model.best_score_)", "_____no_output_____" ], [ "# Using value of alpha as 0.009545 to get best accuracy for Lasso Regression\nlasso = Lasso(alpha = 0.009545, normalize = True)\nlasso.fit(x_train, y_train)\n\ntrain_score = lasso.score(x_train, y_train)\nprint(train_score)\ntest_score = lasso.score(x_test, y_test)\nprint(test_score)\nTraining_Accuracy_After.append(train_score)\nTesting_Accuracy_After.append(test_score)", "_____no_output_____" 
], [ "# Performing GridSearchCV with Cross Validation technique on Ridge Regression and finding the optimum value of alpha\n\nparams = {'alpha': (np.logspace(-8, 8, 100))} # It will check from 1e-08 to 1e+08\nridge = Ridge(normalize=True)\nridge_model = GridSearchCV(ridge, params, cv = 10)\nridge_model.fit(x_train, y_train)\nprint(ridge_model.best_params_)\nprint(ridge_model.best_score_)", "_____no_output_____" ], [ "# Using value of alpha as 1.204503 to get best accuracy for Ridge Regression\nridge = Ridge(alpha = 1.204503, normalize = True)\nridge.fit(x_train, y_train)\n\ntrain_score = ridge.score(x_train, y_train)\nprint(train_score)\ntest_score = ridge.score(x_test, y_test)\nprint(test_score)\nTraining_Accuracy_After.append(train_score)\nTesting_Accuracy_After.append(test_score)", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "plt.figure(figsize=(50,10))\nplt.plot(Training_Accuracy_Before, label = 'Training_Accuracy_Before')\nplt.plot(Training_Accuracy_After, label = 'Training_Accuracy_After')\nplt.xticks(range(len(Models)), Models, Rotation = 45)\nplt.title('Training Accuracy Behaviour')\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "plt.figure(figsize=(50,10))\nplt.plot(Testing_Accuracy_Before, label = 'Testing_Accuracy_Before')\nplt.plot(Testing_Accuracy_After, label = 'Testing_Accuracy_After')\nplt.xticks(range(len(Models)), Models, Rotation = 45)\nplt.title('Testing Accuracy Behaviour')\nplt.legend()\nplt.show()", "_____no_output_____" ] ], [ [ "#**DANGER** **ZONE**", "_____no_output_____" ] ], [ [ "#list of alpha for tuning\nparams = {'alpha' : [0.001 , 0.001,0.01,0.05,\n 0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,.9,\n 1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,\n 10.0,20,30,40,50,100,500,1000]}\n\nridge = Ridge()\n\n# cross validation\nfolds = 5\nmodel_cv = GridSearchCV(estimator = ridge,\n param_grid = params,\n scoring = 'neg_mean_absolute_error',\n cv = folds,\n return_train_score = True,\n verbose = 1)\n\nmodel_cv.fit(x_train,y_train)", "_____no_output_____" ], [ "#Checking the value of optimum number of parameters\nprint(model_cv.best_params_)\nprint(model_cv.best_score_)", "_____no_output_____" ], [ "cv_results = pd.DataFrame(model_cv.cv_results_)\ncv_results = cv_results[cv_results['param_alpha']<=1000]\ncv_results", "_____no_output_____" ], [ "# plotting mean test and train scoes with alpha \ncv_results['param_alpha'] = cv_results['param_alpha'].astype('int32')\nplt.figure(figsize=(16,5))\n\n# plotting\nplt.plot(cv_results['param_alpha'], cv_results['mean_train_score'])\nplt.plot(cv_results['param_alpha'], cv_results['mean_test_score'])\nplt.xlabel('alpha')\nplt.ylabel('Negative Mean Absolute Error')\nplt.title(\"Negative Mean Absolute Error and alpha\")\nplt.legend(['train score', 'test score'], loc='upper right')\nplt.show()", "_____no_output_____" ] ], [ [ "#Insights:", "_____no_output_____" ] ], [ [ "alpha = 4\nridge = Ridge(alpha=alpha)\n\nridge.fit(x_train,y_train)\nridge.coef_", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
d03727d6137b19aecc2fc06265e4e8942ff125fb
49,967
ipynb
Jupyter Notebook
notebooks/dataprep/03a-USCensusDP03Employment.ipynb
yogeshchaudhari/covid-19-community
1f1a27f02ff2551a34beca759dda6f998cd47d4a
[ "MIT" ]
107
2020-03-22T05:39:16.000Z
2022-03-30T11:40:47.000Z
notebooks/dataprep/03a-USCensusDP03Employment.ipynb
yogeshchaudhari/covid-19-community
1f1a27f02ff2551a34beca759dda6f998cd47d4a
[ "MIT" ]
45
2020-03-22T06:18:17.000Z
2022-01-10T00:01:47.000Z
notebooks/dataprep/03a-USCensusDP03Employment.ipynb
yogeshchaudhari/covid-19-community
1f1a27f02ff2551a34beca759dda6f998cd47d4a
[ "MIT" ]
66
2020-03-22T14:46:14.000Z
2022-03-12T11:13:44.000Z
33.73869
316
0.373847
[ [ [ "# Selected Economic Characteristics: Employment Status from the American Community Survey\n\n**[Work in progress]**\n\nThis notebook downloads [selected economic characteristics (DP03)](https://data.census.gov/cedsci/table?tid=ACSDP5Y2018.DP03) from the American Community Survey 2018 5-Year Data.\n\nData source: [American Community Survey 5-Year Data 2018](https://www.census.gov/data/developers/data-sets/acs-5year.html)\n\nAuthors: Peter Rose ([email protected]), Ilya Zaslavsky ([email protected])", "_____no_output_____" ] ], [ [ "import os\nimport pandas as pd\nfrom pathlib import Path\nimport time", "_____no_output_____" ], [ "pd.options.display.max_rows = None # display all rows\npd.options.display.max_columns = None # display all columsns", "_____no_output_____" ], [ "NEO4J_IMPORT = Path(os.getenv('NEO4J_IMPORT'))\nprint(NEO4J_IMPORT)", "/Users/peter/Library/Application Support/com.Neo4j.Relate/data/dbmss/dbms-8bf637fc-0d20-4d9f-9c6f-f7e72e92a4da/import\n" ] ], [ [ "## Download selected variables\n\n* [Selected economic characteristics for US](https://data.census.gov/cedsci/table?tid=ACSDP5Y2018.DP03)\n\n* [List of variables as HTML](https://api.census.gov/data/2018/acs/acs5/profile/groups/DP03.html) or [JSON](https://api.census.gov/data/2018/acs/acs5/profile/groups/DP03/)\n\n* [Description of variables](https://www2.census.gov/programs-surveys/acs/tech_docs/subject_definitions/2018_ACSSubjectDefinitions.pdf)\n\n* [Example URLs for API](https://api.census.gov/data/2018/acs/acs5/profile/examples.html)", "_____no_output_____" ], [ "### Specify variables from DP03 group and assign property names\n\nNames must follow the [Neo4j property naming conventions](https://neo4j.com/docs/getting-started/current/graphdb-concepts/#graphdb-naming-rules-and-recommendations).", "_____no_output_____" ] ], [ [ "variables = {# EMPLOYMENT STATUS\n 'DP03_0001E': 'population16YearsAndOver',\n 'DP03_0002E': 'population16YearsAndOverInLaborForce',\n 'DP03_0002PE': 'population16YearsAndOverInLaborForcePct',\n 'DP03_0003E': 'population16YearsAndOverInCivilianLaborForce',\n 'DP03_0003PE': 'population16YearsAndOverInCivilianLaborForcePct',\n 'DP03_0006E': 'population16YearsAndOverInArmedForces',\n 'DP03_0006PE': 'population16YearsAndOverInArmedForcesPct',\n 'DP03_0007E': 'population16YearsAndOverNotInLaborForce',\n 'DP03_0007PE': 'population16YearsAndOverNotInLaborForcePct'\n #'DP03_0014E': 'ownChildrenOfTheHouseholderUnder6Years',\n #'DP03_0015E': 'ownChildrenOfTheHouseholderUnder6YearsAllParentsInLaborForce',\n #'DP03_0016E': 'ownChildrenOfTheHouseholder6To17Years',\n #'DP03_0017E': 'ownChildrenOfTheHouseholder6To17YearsAllParentsInLaborForce',\n }", "_____no_output_____" ], [ "fields = \",\".join(variables.keys())", "_____no_output_____" ], [ "for v in variables.values():\n print('e.' + v + ' = toInteger(row.' 
+ v + '),')", "e.population16YearsAndOver = toInteger(row.population16YearsAndOver),\ne.population16YearsAndOverInLaborForce = toInteger(row.population16YearsAndOverInLaborForce),\ne.population16YearsAndOverInLaborForcePct = toInteger(row.population16YearsAndOverInLaborForcePct),\ne.population16YearsAndOverInCivilianLaborForce = toInteger(row.population16YearsAndOverInCivilianLaborForce),\ne.population16YearsAndOverInCivilianLaborForcePct = toInteger(row.population16YearsAndOverInCivilianLaborForcePct),\ne.population16YearsAndOverInArmedForces = toInteger(row.population16YearsAndOverInArmedForces),\ne.population16YearsAndOverInArmedForcesPct = toInteger(row.population16YearsAndOverInArmedForcesPct),\ne.population16YearsAndOverNotInLaborForce = toInteger(row.population16YearsAndOverNotInLaborForce),\ne.population16YearsAndOverNotInLaborForcePct = toInteger(row.population16YearsAndOverNotInLaborForcePct),\n" ], [ "print(len(variables.keys()))", "9\n" ] ], [ [ "## Download county-level data using US Census API", "_____no_output_____" ] ], [ [ "url_county = f'https://api.census.gov/data/2018/acs/acs5/profile?get={fields}&for=county:*'", "_____no_output_____" ], [ "df = pd.read_json(url_county, dtype='str')\ndf.fillna('', inplace=True)\ndf.head()", "_____no_output_____" ] ], [ [ "##### Add column names", "_____no_output_____" ] ], [ [ "df = df[1:].copy() # skip first row of labels\ncolumns = list(variables.values())\ncolumns.append('stateFips')\ncolumns.append('countyFips')\ndf.columns = columns", "_____no_output_____" ] ], [ [ "Remove Puerto Rico (stateFips = 72) to limit data to US States\n\nTODO handle data for Puerto Rico (GeoNames represents Puerto Rico as a country)", "_____no_output_____" ] ], [ [ "df.query(\"stateFips != '72'\", inplace=True)", "_____no_output_____" ] ], [ [ "Save list of state fips (required later to get tract data by state)", "_____no_output_____" ] ], [ [ "stateFips = list(df['stateFips'].unique())\nstateFips.sort()\nprint(stateFips)", "['01', '02', '04', '05', '06', '08', '09', '10', '11', '12', '13', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '44', '45', '46', '47', '48', '49', '50', '51', '53', '54', '55', '56']\n" ], [ "df.head()", "_____no_output_____" ], [ "# Example data\ndf[(df['stateFips'] == '06') & (df['countyFips'] == '073')]", "_____no_output_____" ], [ "df['source'] = 'American Community Survey 5 year'\ndf['aggregationLevel'] = 'Admin2'", "_____no_output_____" ] ], [ [ "### Save data", "_____no_output_____" ] ], [ [ "df.to_csv(NEO4J_IMPORT / \"03a-USCensusDP03EmploymentAdmin2.csv\", index=False)", "_____no_output_____" ] ], [ [ "## Download zip-level data using US Census API", "_____no_output_____" ] ], [ [ "url_zip = f'https://api.census.gov/data/2018/acs/acs5/profile?get={fields}&for=zip%20code%20tabulation%20area:*'", "_____no_output_____" ], [ "df = pd.read_json(url_zip, dtype='str')\ndf.fillna('', inplace=True)\ndf.head()", "_____no_output_____" ] ], [ [ "##### Add column names", "_____no_output_____" ] ], [ [ "df = df[1:].copy() # skip first row\ncolumns = list(variables.values())\ncolumns.append('stateFips')\ncolumns.append('postalCode')\ndf.columns = columns", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "# Example data\ndf.query(\"postalCode == '90210'\")", "_____no_output_____" ], [ "df['source'] = 'American Community Survey 5 year'\ndf['aggregationLevel'] = 'PostalCode'", "_____no_output_____" ] ], 
[ [ "### Save data", "_____no_output_____" ] ], [ [ "df.to_csv(NEO4J_IMPORT / \"03a-USCensusDP03EmploymentZip.csv\", index=False)", "_____no_output_____" ] ], [ [ "## Download tract-level data using US Census API\nTract-level data are only available by state, so we need to loop over all states.", "_____no_output_____" ] ], [ [ "def get_tract_data(state):\n url_tract = f'https://api.census.gov/data/2018/acs/acs5/profile?get={fields}&for=tract:*&in=state:{state}'\n df = pd.read_json(url_tract, dtype='str')\n time.sleep(1)\n # skip first row of labels\n df = df[1:].copy()\n # Add column names\n columns = list(variables.values())\n columns.append('stateFips')\n columns.append('countyFips')\n columns.append('tract')\n df.columns = columns\n return df", "_____no_output_____" ], [ "df = pd.concat((get_tract_data(state) for state in stateFips))\ndf.fillna('', inplace=True)", "_____no_output_____" ], [ "df['tract'] = df['stateFips'] + df['countyFips'] + df['tract']", "_____no_output_____" ], [ "df['source'] = 'American Community Survey 5 year'\ndf['aggregationLevel'] = 'Tract'", "_____no_output_____" ], [ "# Example data for San Diego County\ndf[(df['stateFips'] == '06') & (df['countyFips'] == '073')].head()", "_____no_output_____" ] ], [ [ "### Save data", "_____no_output_____" ] ], [ [ "df.to_csv(NEO4J_IMPORT / \"03a-USCensusDP03EmploymentTract.csv\", index=False)", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
d03736192ee5c7edc83daeb89667f41179e40f25
19,545
ipynb
Jupyter Notebook
notebooks/Atari/Pacman_Colab/Transformative/Dense/AE/pacman_AE_Dense_reconst_ellwlb_episode_flat_working.ipynb
azeghost/Generative_Models
9f341134935781bec67b2f23a3f2788d8fbdb56c
[ "MIT" ]
null
null
null
notebooks/Atari/Pacman_Colab/Transformative/Dense/AE/pacman_AE_Dense_reconst_ellwlb_episode_flat_working.ipynb
azeghost/Generative_Models
9f341134935781bec67b2f23a3f2788d8fbdb56c
[ "MIT" ]
null
null
null
notebooks/Atari/Pacman_Colab/Transformative/Dense/AE/pacman_AE_Dense_reconst_ellwlb_episode_flat_working.ipynb
azeghost/Generative_Models
9f341134935781bec67b2f23a3f2788d8fbdb56c
[ "MIT" ]
null
null
null
23.71966
152
0.558301
[ [ [ "# Settings", "_____no_output_____" ] ], [ [ "%env TF_KERAS = 1\nimport os\nsep_local = os.path.sep\nimport sys\n# sys.path.append('..' + sep_local + '..' + sep_local +'..' + sep_local + '..' + sep_local + '..'+ sep_local + '..') # For Windows import\n# os.chdir('..' + sep_local + '..' + sep_local +'..' + sep_local + '..' + sep_local + '..'+ sep_local + '..') # For Linux import\nos.chdir('/content/Generative_Models/')\nprint(sep_local)\nprint(os.getcwd())", "_____no_output_____" ], [ "import tensorflow as tf\nprint(tf.__version__)", "_____no_output_____" ] ], [ [ "# Dataset loading", "_____no_output_____" ] ], [ [ "dataset_name='atari_pacman'", "_____no_output_____" ], [ "images_dir = IMG_DIR\n# images_dir = '/home/azeghost/datasets/.mspacman/atari_v1/screens/mspacman' #Linux\n#images_dir = 'C:\\\\projects\\\\pokemon\\DS06\\\\'\nvalidation_percentage = 25\nvalid_format = 'png'", "_____no_output_____" ], [ "from training.generators.file_image_generator import create_image_lists, get_generators", "_____no_output_____" ], [ "imgs_list = create_image_lists(\n image_dir=images_dir, \n validation_pct=validation_percentage, \n valid_imgae_formats=valid_format,\n verbose=0 \n)", "_____no_output_____" ], [ "scale=1\nimage_size=(160//scale, 210//scale, 3)\n\nbatch_size = 10\nEPIS_LEN = 10\nEPIS_SHIFT = 5\n\ninputs_shape = image_size\nlatents_dim = 30\nintermediate_dim = 30", "_____no_output_____" ], [ "training_generator, testing_generator = get_generators(\n images_list=imgs_list, \n image_dir=images_dir, \n image_size=image_size, \n batch_size=batch_size, \n class_mode='episode_flat', \n episode_len=EPIS_LEN, \n episode_shift=EPIS_SHIFT\n)", "_____no_output_____" ], [ "import tensorflow as tf", "_____no_output_____" ], [ "train_ds = tf.data.Dataset.from_generator(\n lambda: training_generator, \n output_types=(tf.float32, tf.float32) ,\n output_shapes=(tf.TensorShape((batch_size* EPIS_LEN, ) + image_size), \n tf.TensorShape((batch_size* EPIS_LEN, ) + image_size)\n )\n)\n\ntest_ds = tf.data.Dataset.from_generator(\n lambda: testing_generator, \n output_types=(tf.float32, tf.float32) ,\n output_shapes=(tf.TensorShape((batch_size* EPIS_LEN, ) + image_size), \n tf.TensorShape((batch_size* EPIS_LEN, ) + image_size)\n )\n)", "_____no_output_____" ], [ "_instance_scale=1.0\nfor data in train_ds:\n _instance_scale = float(data[0].numpy().max())\n break", "_____no_output_____" ], [ "_instance_scale = 1.0", "_____no_output_____" ], [ "import numpy as np\nfrom collections.abc import Iterable\nif isinstance(inputs_shape, Iterable):\n _outputs_shape = np.prod(inputs_shape)", "_____no_output_____" ], [ "inputs_shape", "_____no_output_____" ] ], [ [ "# Model's Layers definition", "_____no_output_____" ] ], [ [ "# tdDense = lambda **kwds: tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(**kwds))", "_____no_output_____" ], [ "# enc_lays = [tdDense(units=intermediate_dim//2, activation='relu'),\n# tdDense(units=intermediate_dim//2, activation='relu'),\n# tf.keras.layers.Flatten(),\n# tf.keras.layers.Dense(units=latents_dim)]\n\n# dec_lays = [tf.keras.layers.Dense(units=latents_dim, activation='relu'),\n# tf.keras.layers.Reshape(inputs_shape),\n# tdDense(units=intermediate_dim, activation='relu'),\n# tdDense(units=_outputs_shape),\n# tf.keras.layers.Reshape(inputs_shape)\n# ]", "_____no_output_____" ], [ "enc_lays = [tf.keras.layers.Dense(units=intermediate_dim//2, activation='relu'),\n tf.keras.layers.Dense(units=intermediate_dim//2, activation='relu'),\n tf.keras.layers.Flatten(),\n 
tf.keras.layers.Dense(units=latents_dim)]\n\ndec_lays = [tf.keras.layers.Dense(units=latents_dim, activation='relu'),\n tf.keras.layers.Dense(units=3*intermediate_dim//2, activation='relu'),\n tf.keras.layers.Dense(units=_outputs_shape),\n tf.keras.layers.Reshape(inputs_shape)]", "_____no_output_____" ] ], [ [ "# Model definition", "_____no_output_____" ] ], [ [ "model_name = dataset_name+'AE_Dense_reconst_ell'\n#windows\n#experiments_dir='..' + sep_local + '..' + sep_local +'..' + sep_local + '..' + sep_local + '..'+sep_local+'experiments'+sep_local + model_name\n\n#linux \nexperiments_dir=os.getcwd()+ sep_local +'experiments'+sep_local + model_name", "_____no_output_____" ], [ "from training.autoencoding_basic.transformative.AE import autoencoder as AE", "_____no_output_____" ], [ "# inputs_shape=image_size", "_____no_output_____" ], [ "variables_params = \\\n[\n {\n 'name': 'inference', \n 'inputs_shape':inputs_shape,\n 'outputs_shape':latents_dim,\n 'layers': enc_lays\n }\n\n ,\n \n {\n 'name': 'generative', \n 'inputs_shape':latents_dim,\n 'outputs_shape':inputs_shape,\n 'layers':dec_lays\n }\n]", "_____no_output_____" ], [ "from os.path import abspath\nfrom utils.data_and_files.file_utils import create_if_not_exist\n_restore = os.path.join(experiments_dir, 'var_save_dir')\ncreate_if_not_exist(_restore)\nabsolute = abspath(_restore)\nprint(\"Restore_dir\",absolute)\nabsolute = abspath(experiments_dir)\nprint(\"Recording_dir\",absolute)\nprint(\"Current working dir\",os.getcwd())", "_____no_output_____" ], [ "#to restore trained model, set filepath=_restore", "_____no_output_____" ], [ "ae = AE( \n name=model_name,\n latents_dim=latents_dim,\n batch_size=batch_size * EPIS_LEN,\n episode_len= 1, \n variables_params=variables_params, \n filepath=_restore\n )", "_____no_output_____" ], [ "#ae.compile(metrics=None)\nae.compile()", "_____no_output_____" ] ], [ [ "# Callbacks", "_____no_output_____" ] ], [ [ "from training.callbacks.sample_generation import SampleGeneration\nfrom training.callbacks.save_model import ModelSaver", "_____no_output_____" ], [ "es = tf.keras.callbacks.EarlyStopping(\n monitor='loss', \n min_delta=1e-12, \n patience=12, \n verbose=1, \n restore_best_weights=False\n)", "_____no_output_____" ], [ "ms = ModelSaver(filepath=_restore)", "_____no_output_____" ], [ "csv_dir = os.path.join(experiments_dir, 'csv_dir')\ncreate_if_not_exist(csv_dir)\ncsv_dir = os.path.join(csv_dir, model_name+'.csv')\ncsv_log = tf.keras.callbacks.CSVLogger(csv_dir, append=True)\nabsolute = abspath(csv_dir)\nprint(\"Csv_dir\",absolute)", "_____no_output_____" ], [ "image_gen_dir = os.path.join(experiments_dir, 'image_gen_dir')\ncreate_if_not_exist(image_gen_dir)\nabsolute = abspath(image_gen_dir)\nprint(\"Image_gen_dir\",absolute)", "_____no_output_____" ], [ "sg = SampleGeneration(latents_shape=latents_dim, filepath=image_gen_dir, gen_freq=5, save_img=True, gray_plot=False)", "_____no_output_____" ] ], [ [ "# Model Training", "_____no_output_____" ] ], [ [ "ae.fit(\n x=train_ds,\n input_kw=None,\n steps_per_epoch=10,\n epochs=10, \n verbose=2,\n callbacks=[ es, ms, csv_log, sg],\n workers=-1,\n use_multiprocessing=True,\n validation_data=test_ds,\n validation_steps=10\n)", "_____no_output_____" ] ], [ [ "# Model Evaluation", "_____no_output_____" ], [ "## inception_score", "_____no_output_____" ] ], [ [ "from evaluation.generativity_metrics.inception_metrics import inception_score", "_____no_output_____" ], [ "is_mean, is_sigma = inception_score(ae, tolerance_threshold=1e-6, 
max_iteration=200)\nprint(f'inception_score mean: {is_mean}, sigma: {is_sigma}')", "_____no_output_____" ] ], [ [ "## Frechet_inception_distance", "_____no_output_____" ] ], [ [ "from evaluation.generativity_metrics.inception_metrics import frechet_inception_distance", "_____no_output_____" ], [ "fis_score = frechet_inception_distance(ae, training_generator, tolerance_threshold=1e-6, max_iteration=10, batch_size=32)\nprint(f'frechet inception distance: {fis_score}')", "_____no_output_____" ] ], [ [ "## perceptual_path_length_score", "_____no_output_____" ] ], [ [ "from evaluation.generativity_metrics.perceptual_path_length import perceptual_path_length_score", "_____no_output_____" ], [ "ppl_mean_score = perceptual_path_length_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200, batch_size=32)\nprint(f'perceptual path length score: {ppl_mean_score}')", "_____no_output_____" ] ], [ [ "## precision score", "_____no_output_____" ] ], [ [ "from evaluation.generativity_metrics.precision_recall import precision_score", "_____no_output_____" ], [ "_precision_score = precision_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)\nprint(f'precision score: {_precision_score}')", "_____no_output_____" ] ], [ [ "## recall score", "_____no_output_____" ] ], [ [ "from evaluation.generativity_metrics.precision_recall import recall_score", "_____no_output_____" ], [ "_recall_score = recall_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)\nprint(f'recall score: {_recall_score}')", "_____no_output_____" ] ], [ [ "# Image Generation", "_____no_output_____" ], [ "## image reconstruction", "_____no_output_____" ], [ "### Training dataset", "_____no_output_____" ] ], [ [ "%load_ext autoreload\n%autoreload 2", "_____no_output_____" ], [ "from training.generators.image_generation_testing import reconstruct_from_a_batch", "_____no_output_____" ], [ "from utils.data_and_files.file_utils import create_if_not_exist\nsave_dir = os.path.join(experiments_dir, 'reconstruct_training_images_like_a_batch_dir')\ncreate_if_not_exist(save_dir)\n\nreconstruct_from_a_batch(ae, training_generator, save_dir)", "_____no_output_____" ], [ "from utils.data_and_files.file_utils import create_if_not_exist\nsave_dir = os.path.join(experiments_dir, 'reconstruct_testing_images_like_a_batch_dir')\ncreate_if_not_exist(save_dir)\n\nreconstruct_from_a_batch(ae, testing_generator, save_dir)", "_____no_output_____" ] ], [ [ "## with Randomness", "_____no_output_____" ] ], [ [ "from training.generators.image_generation_testing import generate_images_like_a_batch", "_____no_output_____" ], [ "from utils.data_and_files.file_utils import create_if_not_exist\nsave_dir = os.path.join(experiments_dir, 'generate_training_images_like_a_batch_dir')\ncreate_if_not_exist(save_dir)\n\ngenerate_images_like_a_batch(ae, training_generator, save_dir)", "_____no_output_____" ], [ "from utils.data_and_files.file_utils import create_if_not_exist\nsave_dir = os.path.join(experiments_dir, 'generate_testing_images_like_a_batch_dir')\ncreate_if_not_exist(save_dir)\n\ngenerate_images_like_a_batch(ae, testing_generator, save_dir)", "_____no_output_____" ] ], [ [ "### Complete Randomness", "_____no_output_____" ] ], [ [ "from training.generators.image_generation_testing import generate_images_randomly", "_____no_output_____" ], [ "from utils.data_and_files.file_utils import create_if_not_exist\nsave_dir = os.path.join(experiments_dir, 
'random_synthetic_dir')\ncreate_if_not_exist(save_dir)\n\ngenerate_images_randomly(ae, testing_generator, save_dir)", "_____no_output_____" ] ], [ [ "### Stacked inputs outputs and predictions ", "_____no_output_____" ] ], [ [ "from training.generators.image_generation_testing import predict_from_a_batch", "_____no_output_____" ], [ "from utils.data_and_files.file_utils import create_if_not_exist\nsave_dir = os.path.join(experiments_dir, 'predictions')\ncreate_if_not_exist(save_dir)\n\npredict_from_a_batch(ae, testing_generator, save_dir)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
d0374285a21806dbfdb57beef359e5d9920b8a4d
31,151
ipynb
Jupyter Notebook
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/RECAP_DS/05_MACHINE_LEARNING/ML/ML04.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
null
null
null
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/RECAP_DS/05_MACHINE_LEARNING/ML/ML04.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
null
null
null
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/RECAP_DS/05_MACHINE_LEARNING/ML/ML04.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
2
2022-02-09T15:41:33.000Z
2022-02-11T07:47:40.000Z
47.341945
747
0.624795
[ [ [ "# DS106 Machine Learning : Lesson Nine Companion Notebook", "_____no_output_____" ], [ "### Table of Contents <a class=\"anchor\" id=\"DS106L9_toc\"></a>\n\n* [Table of Contents](#DS106L9_toc)\n * [Page 1 - Introduction](#DS106L9_page_1)\n * [Page 2 - What are Bayesian Statistics?](#DS106L9_page_2)\n * [Page 3 - Bayes Theorem](#DS106L9_page_3)\n * [Page 4 - Parts of Bayes Theorem](#DS106L9_page_4)\n * [Page 5 - A/B Testing](#DS106L9_page_5)\n * [Page 6 - Bayesian Network Basics](#DS106L9_page_6)\n * [Page 7 - Key Terms](#DS106L9_page_7)\n * [Page 8 - Lesson 4 Practice Hands-On](#DS106L9_page_8)\n * [Page 9 - Lesson 4 Practice Hands-On Solution](#DS106L9_page_9)\n \n ", "_____no_output_____" ], [ "<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n\n# Page 1 - Overview of this Module<a class=\"anchor\" id=\"DS106L9_page_1\"></a>\n\n[Back to Top](#DS106L9_toc)\n\n<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">", "_____no_output_____" ] ], [ [ "from IPython.display import VimeoVideo\n# Tutorial Video Name: Bayesian Networks\nVimeoVideo('388131444', width=720, height=480)", "_____no_output_____" ] ], [ [ "The transcript for the above overview video **[is located here](https://repo.exeterlms.com/documents/V2/DataScience/Video-Transcripts/DSO106-ML-L04overview.zip)**.\n\n# Introduction\n\nBayesian Networks are a way for you to apply probability knowledge in a machine learning algorithm. By the end of this lesson, you should be able to:\n\n* Explain what a Bayesian Network is\n* Perform Bayesian networks in Python\n\nThis lesson will culminate in a hands-on in which you use Bayesian networks to predict the chance of shark attack. \n\n", "_____no_output_____" ], [ "<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n\n# Page 2 - What are Bayesian Statistics?<a class=\"anchor\" id=\"DS106L9_page_2\"></a>\n\n[Back to Top](#DS106L9_toc)\n\n<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n", "_____no_output_____" ], [ "\n# What are Bayesian Statistics?\n\n*Bayesian statistics* are a branch of stats that make use of probability to test your beliefs against data. In practice, the simplest Bayesian statistics are similar in concept to the basics you learned - probability, the normal distribution, etc. But they go out of their way to change the names of everything! As if statistics weren't complicated enough! In this lesson, you'll be learning about Bayesian statistics using the terms you're already familiar with, but don't get into a fistfight with anyone if they use slightly different lingo!\n\n---\n\n## Bayesian Reasoning\n\nHere's an example to ease you into the Bayesian mindset. You start off with an observation of data. For instance, say you hear a very loud, rushing noise outside. You might come up with a couple different ideas of what is going on, or hypotheses, and those are based on your previous experience. You might have a couple different options: it's a plane making the noise, or it's a tornado. Which is more likely? Well, you know that based on your past experience, tornados make this much noise only once or twice a year when they are very severe. So you're thinking that the plane is more likely. \n\nNow add in additional data - you live on an Air Force base. Suddenly, the likelihood that the noise is a fighter jet taking off is much, much higher, and your belief that there's a tornado is almost non-existent. 
The brilliant thing about Bayesian statistics is that you can continually update your hypotheses based on updated data. You can even compare hypotheses to see which one fits your data better. \n\nAn important thing to note is that your data should help change your beliefs about the world, but you should not search for data to back up your beliefs! \n\n---", "_____no_output_____" ], [ "<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n\n# Page 3 - Bayes Theorem<a class=\"anchor\" id=\"DS106L9_page_3\"></a>\n\n[Back to Top](#DS106L9_toc)\n\n<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n", "_____no_output_____" ], [ "# Bayes Theorem\n\nRemember back to multiple event probability? Where you either used `or` or `and` to combine the probability of multiple things happening? Well, those were great, but they implied independence - that the probability of one of those things did not impact the probability of the other whatsoever. But what happens when your two events are related in any way? For instance, color blindness is much more prevalent in males than in females. So the probability that any one random person is color blind very much depends on their gender. So you can't possibly assume that there is no relation between those two variables! \n\nHow would you calculate probability in that instance? Enter *Bayes theorem*! Bayes theorem is a special probability formula that allows you to calculate the likelihood of an event given the likelihood of another event.\n\n---\n\n## Bayes Formula\n\nHere is the mathematical formula for Bayes theorem. Don't panic! It's not as bad as it looks! You can even see it in neon lights if that makes it more fun! \n\n![Bayes Theorem Formula written in neon on a sign.](Media/NeonBayes.jpg)\n\nToo hard to read? Well, you can have less fun but also squint less with this bad boy:\n\n![Bayes Theorem Formula.](Media/BayesTheorem.png)\n\nIn plain English, this is what this reads like: \n\n> The probability of event A given the probability of event B is equal to the probability of event A times the probability of event B given A, divided by the probability of B. \n\nQuite a mouthful! You can break it down even further. A and B are just two events that are not independent. It's assumed A is the first and B is the second, but it doesn't really matter, as long as you stay consistent with your variable assignment throughout the use of the equation.\n\nThen you have `P`, which is just shorthand for probability.\n\nAnd lastly, you have the pipe symbol, `|`. This means \"given.\" All in all, this equation is telling you that if you know the probability of A by itself, and the probability of B by itself, then you can figure out how A and B interact.\n\n---\n\n## Bayesian Reasoning with the Bayes Formula\n\nIf you want to walk this into the wonderful world of Bayes reasoning that you've just hit upon, you can think of this in terms of observations and beliefs. Substitute for `A` beliefs, and for `B`, observations. Now the question becomes, what is the probability of my beliefs being true, given my observations?\n\nThe pretty cool thing about this is that with Bayes theorem, you can figure out exactly how much your beliefs change because of evidence. 
\n\n---", "_____no_output_____" ], [ "<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n\n# Page 4 - Parts of Bayes Theorem<a class=\"anchor\" id=\"DS106L9_page_4\"></a>\n\n[Back to Top](#DS106L9_toc)\n\n<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n", "_____no_output_____" ], [ "# Parts of Bayes Theorem\n\nThere are three components to Bayes theorem:\n\n* Posterior probability\n* Likelihood\n* Prior probability\n\nYou will learn about these components in more detail below!\n\n---\n\n## Posterior Probability\n\nThe *posterior probability* is the part of the theorem that lets you quantify how strongly you hold beliefs about the data you've observed. The posterior probability is the end result of Bayes' theorem and what you're trying to find out. This is often shortened to just \"posterior.\" No butt jokes, guys!\n\n---\n\n## Likelihood\n\nThe *likelihood* is the probability of the data given your current beliefs. i.e. How likely is it that x happens? This represents the top left portion of Bayes' theorem - the P(B|A) part.\n\n---\n\n## Prior Probability\n\nThe *prior probability* is all about the strength of your belief before you see the data. Hence, prior, meaning before! This represents the top right portion of Bayes' theorem - the P(A) part.\n\n---\n\n## The Bottom?\n\nYou may be wondering about the bottom of the equation. Doesn't that get its own special name too? Apparently not, but you're encouraged to give it one. Stormageddon, anyone? But the bottom portion of Bayes' theorem helps normalize the data, so that even if you have a different amount of data in A and B, you can still compare them fairly. \n\n---\n\n## An Example\n\nYou will now calculate the probability that your instructor's a dork, given that she cuddles her statistics book at night. Call \"your instructor's a dork\" `A` and \"cuddling statistics books\" `B`. \n\n---\n\n### Find the Likelihood\n\nYou can think of the likelihood in this scenario as the probability that a dork would cuddle a statistics book. If you are pretty darn certain a dork would, then you could make it a probability like 8/10. If you change your mind at any point, well, guess what? That is totally fine! This means `P(B|A) = 8/10`. \n\n---\n\n### Find the Prior\n\nFirst, you need to calculate the prior. Remember, this is just the probability of A. Believe it or not, a survey found that 60% of Americans consider themselves nerds. So you'll use a probability of 6/10 for that. That means: `P(A) = 6/10`. \n\n---\n\n### Calculate the Normalizing Factor P(B)\n\nWhat is `P(B)`? That's on the bottom! Well that is the probability that someone cuddles their statistics book at night, regardless of whether or not they are a dork. How many people is that? Well, you could take an educated guess based upon the fact that only 11% of people take statistics in a secondary school, and of those, only 55% hang onto their book after the course. That means that only 6.05% (11% * 55%) still own a statistics book after a semester is up. So it's not even very likely that people have statistics books, let alone cuddle them. Maybe 1 in 100 of those owners will cuddle their statistics book, and with only about 6 in 100 people owning one at all...that makes it `6.05% * 1%` or about `.000605`. \n\nThat is one way to go. But if you don't want to estimate it, or it is difficult to estimate it, then you can choose from a standard `P(B)` setup.
Your choices are:\n\n* .05\n* .01\n* .005\n* .001\n\nIt is important to note that the smaller the `P(B)`, the larger your posterior probability, or end result, is. \n\n---\n\n### Calculate the Posterior\n\nNext, you will calculate the posterior. Remember that this is your overall goal! You are ready to solve this bad boy! \n\nThis is just plug 'n play at this point:\n\n```text\nP(A|B) = (P(B|A) * P(A)) / P(B)\nP(A|B) = (.8 * .6) / .000605\nP(A|B) = .48 / .000605\nP(A|B) = 793.39\n```\n\nThat's great! You have a number! But what does it mean? It's really hard to say for sure, especially without a comparison to an alternative hypothesis. It's sort of like comparing machine learning models with AIC - the number itself doesn't matter, just whether it is larger or smaller than other models.\n\nCan you guess what you're going to do next?\n\n---\n\n### Create and Test Alternative Hypotheses Using Bayes\n\nOk, so one explanation for why your instructor may cuddle her statistics textbook at night is because she doesn't have a pillow. That becomes your new `A`. A quick internet search shows no relevant results. You can then assume that 99% of people own a pillow, which means that 1% don't. \n\nYour new `P(A)` is now 1/100. And that's probably a high estimate of those who don't own a pillow. How does that change your results?\n\nDo some more plug 'n chug!\n\n```text\nP(A|B) = (P(B|A) * P(A)) / P(B)\nP(A|B) = (.8 * .01) / .000605\nP(A|B) = .008 / .000605\nP(A|B) = 13.22\n```\n\nSo this means that it is much more likely that your instructor's a dork and not that she doesn't own a pillow. Tada! Relative probability at its finest.\n\n---", "_____no_output_____" ], [ "<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n\n# Page 5 - A/B Testing<a class=\"anchor\" id=\"DS106L9_page_5\"></a>\n\n[Back to Top](#DS106L9_toc)\n\n<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n", "_____no_output_____" ], [ "# A/B Testing\n\nRemember all those fun research designs you learned about in Basic Statistics? Well, there are more than you learned about there. There are nearly infinite variations having to do with comparisons, varying timepoints, and messing with individuals over and over again. Ethically, of course. And only if they're willing!\n\nA/B testing is yet another type of research design, in which you are directly comparing two methods. In practice, if you have means, you'd compare group A and group B with a *t*-test. But what if you don't have means? What if you have probabilities or percentages? Enter the Bayesian A/B test!\n\n---\n\n## Create the Prior\n\nSay you are testing whether your existing recipe for cream cheese frosting (A) is better than your new recipe for cream cheese frosting (B), and that you are testing it at your local bakesale. Your null hypothesis will be that these frostings will be totally equal. No one will like the new one any better than the old. So, assume that 80% of all bakesale buyers will finish eating your cupcakes with both types of frosting. \n\n---\n\n## Collect Data\n\nNow that you have a hypothesis to test, it's time to collect data! You hold that bakesale, and for the old cream cheese frosting, A, 82% of people finish eating their cupcake. And for the new cream cheese frosting recipe, only 61% finish eating their cupcake. Want it in table form?
Take a peek.\n\n<table class=\"table table-striped\">\n <tr>\n <th>Frosting Type</th>\n <th>Ate it All</th>\n <th>Did Not Eat it All</th>\n <th>Ratio</th>\n </tr>\n <tr>\n <td>Old</td>\n <td>95</td>\n <td>22</td>\n <td>.82</td>\n </tr>\n <tr>\n <td>New</td>\n <td>73</td>\n <td>46</td>\n <td>.61</td>\n </tr>\n</table>\n\nRight off the bat, you should be thinking to yourself that perhaps frosting recipe B isn't as good. But, it's always a good idea to science things and know for sure! That's what statistics is all about!\n\n---\n\n## Work the Problem in R using Monte Carlo Simulation\n\nThat's right, folks, you're out of calculator land again and into programming! Lucky for you, you can finish the A/B testing in R.\n\n*Monte carlo simulation* is a way to simulate the results of something by re-sampling the data you already have. It's based off a little bit of data, but to get the best results, you may want a lot more rows than you have. So use monte carlo simulation to expand things. Kind of like those toy dinosaurs that grow when you pour water over them. \n\nThe function to do this is `rbeta()` function, which samples from the probability density function for a binomial distribution. Remember that the binomial distribution is one in which you only have two outcomes. For instance, a) did eat the whole cupcake or b) did not eat the whole cupcake.\n\nThere are two components of the beta distribution that you'll need to define as variables, in addition to the number of trials you intend to use: \n* alpha: How many times an event happens that you care about \n* beta: How many times an event happens that you don't care about\n\nFirst, assign some variables in R. You'll need a variable to hold onto the priors and the number of trials you want to extend this to. Although you can choose any number of trials you want, here, you'll use `10,000`. \n\n```{r}\ntrials <-10000\n```\n\n`alpha` and `beta` are based on the priors you created. Since you thought that about 80% of people would finish eating a cupcake, `8` becomes your `alpha`. `beta`, the event you don't care about, or not finishing a cupcake, would be `2`. This is because of the \"not\" rule of probability. You've only got two potential options - people either will finish eating their cupcake or they won't - so the probability of not eating is one minus the probability of eating. Since you are doing this out of 10, that means 10-8 = 2, and 2 becomes your `beta`. \n\n```{r}\nalpha <- 8\nbeta <- 2\n```\n\nNow, you are all set up to use `rbeta()` at last! You'll use it for both frosting types. Remember that A was your old, tried-and-true cream cheese frosting recipe, and B was the new one. The variable `samplesA` calculates the probability of the data you collected happening. The first argument it uses is the number of trials you want to simulate this over, and the second is the number of people who ate all of the cupcake with frosting A plus the prior of alpha. The third argument is the number of people who did not eat frosting A plus the prior of beta. You are basically comparing your guess with reality here. \n\nYou will follow the same flow for `samplesB`.\n\n```{r}\nsamplesA <- rbeta(trials, 95+alpha, 22 + beta)\nsamplesB <- rbeta(trials, 73+alpha, 46 + beta)\n```\n\nLastly, you can figure out if B is better by seeing the percentage of the trials in which B came back greater than A. 
You are basically just adding up with the `sum()` function every time that `samplesB` was greater than `samplesA` out of the total number of `trials`.\n\n```{R}\nBsuperior <- sum(samplesB > samplesA) / trials\n```\n\nThe end result is `0`. Wow! Your initial suspicions were right! There is definitely a clear case to stick with your original frosting, because in no situations out of 10,000 did people ever eat the whole cupcake more times with frosting B, your new recipe!\n\n---", "_____no_output_____" ], [ "<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n\n# Page 6 - Bayesian Network Basics<a class=\"anchor\" id=\"DS106L9_page_6\"></a>\n\n[Back to Top](#DS106L9_toc)\n\n<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n", "_____no_output_____" ], [ "# Bayesian Network Basics\n\n*Bayesian Statistics* is based around conditional probability. *Bayesian Networks* are a specific type of Bayesian Statistics that map out the conditional relationships of multiple variables. Use Bayesian Networks when you want to find the probability of an outcome when it is impacted by several previous conditional variables.\n\nThe image below is an example of a simple Bayesian Network. The results of condition A impact condition B and condition C, and both condition B and condition C impact the probability of condition D. This means that condition D is the final thing you are trying to predict.\n\n![Four circles, one on top labeled A, one on left labeled B, one on right labeled C, one on bottom labeled D. There is an arrow from A to B and A to C. There is an arrow from B to D and C to D.](Media/BayesianNetwork.png)\n\n---\n\n## Example\n\nHow about an example to clear things up? Ask yourself if you will have fun at the beach today. In this case, you want to know the probability of having fun at the beach today. Sounds simple, right? But maybe not. First, ask yourself if it is sunny today. This directly impacts the temperature of the beach and how crowded it is. If it is sunny, it is more likely to be hot and it is more likely to be crowded. Whether or not it is sunny does not directly impact if you will have fun, but if the beach is hot or if the beach is crowded will both impact your probability of having fun. If the beach is warm and not crowded, you are more likely to have fun than if the beach is blazing hot and so busy you are packed in like sardines. \n\n![Four circles, one on top labeled Sunny? One on left labeled Hot? One on right labeled Busy? One on bottom labeled Fun? There is an arrow from Sunny to Hot and from Sunny to Busy. There is an arrow from Hot to Fun and Busy to Fun.](Media/BeachNetwork.png)\n\n---", "_____no_output_____" ], [ "<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n\n# Page 7 - Key Terms<a class=\"anchor\" id=\"DS106L9_page_7\"></a>\n\n[Back to Top](#DS106L9_toc)\n\n<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n", "_____no_output_____" ], [ "# Key Terms\n\nBelow is a list and short description of the important keywords learned in this lesson. Please read through and go back and review any concepts you do not fully understand. 
Great Work!\n\n<table class=\"table table-striped\">\n <tr>\n <th>Keyword</th>\n <th>Description</th>\n </tr>\n <tr>\n <td style=\"font-weight: bold;\" nowrap>Bayesian Statistics</td>\n <td>Statistics using conditional probability.</td>\n </tr>\n <tr>\n <td style=\"font-weight: bold;\" nowrap>Bayesian Networks</td>\n <td>Machine learning using the conditional relationships of multiple variables.</td>\n </tr>\n</table>\n", "_____no_output_____" ], [ "<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n\n# Page 8 - Lesson 4 Practice Hands-On<a class=\"anchor\" id=\"DS106L9_page_8\"></a>\n\n[Back to Top](#DS106L9_toc)\n\n<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n", "_____no_output_____" ], [ "This Hands-On will **not** be graded, but you are encouraged to complete it. However, the best way to become a data scientist is to practice.\n\n<div class=\"panel panel-danger\">\n <div class=\"panel-heading\">\n <h3 class=\"panel-title\">Caution!</h3>\n </div>\n <div class=\"panel-body\">\n <p>Do not submit your project until you have completed all requirements, as you will not be able to resubmit.</p>\n </div>\n</div>\n\n---\n\n## Bayesian Statistics Hands-On\n\nFor this hands-on, you will be determining which type of mold-removal solution works better: just bleaching objects, or bleaching them and scrubbing them down thoroughly out of 10,000 trials. Based on the priors you created, the mold-removal solutions have a 90% chance of working.\n\nYou're trying to determine whether the mold will grow back or not, using the following table:\n\n<table class=\"table table-striped\">\n <tr>\n <th>Mold Removal Type</th>\n <th>Mold Returned</th>\n <th>Did Not Return</th>\n <th>Ratio</th>\n </tr>\n <tr>\n <td>Bleach</td>\n <td>27</td>\n <td>39</td>\n <td>.41</td>\n </tr>\n <tr>\n <td>Bleach and Scrubbing</td>\n <td>10</td>\n <td>45</td>\n <td>.18</td>\n </tr>\n</table>\n\nComplete A/B testing and Monte Carlo simulation using R. Please attach your R script file with your code documented and information in comments about your findings.\n\n<div class=\"panel panel-danger\">\n <div class=\"panel-heading\">\n <h3 class=\"panel-title\">Caution!</h3>\n </div>\n <div class=\"panel-body\">\n <p>Be sure to zip and submit your entire directory when finished!</p>\n </div>\n</div>\n", "_____no_output_____" ], [ "<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n\n# Page 9 - Lesson 4 Practice Hands-On Solution<a class=\"anchor\" id=\"DS106L9_page_9\"></a>\n\n[Back to Top](#DS106L9_toc)\n\n<hr style=\"height:10px;border-width:0;color:gray;background-color:gray\">\n", "_____no_output_____" ], [ "# Lesson 4 Practice Hands-On Solution\n\n```{r}\n\ntrials <-10000\n# Create a variable to hold onto the priors and the number of trials you want to extend this to. \n\nalpha <- 9\nbeta <- 1\n# Create your alpha and beta variables out of the priors which were 90% leaving the beta to be 10%. \n\n\nsamplesA <- rbeta(trials, 27+alpha, 39 + beta)\nsamplesB <- rbeta(trials, 10+alpha, 45 + beta)\n# Your rbeta() is ready to be set up by placing the function inside of a two separate sample variables. The alpha is added with the Mold Returned and the beta is added with the Did Not Return. \n\nBsuperior <- sum(samplesB > samplesA) / trials\n# The sum() function is used to add up every time that samplesB was greater than samplesA out of the total number of trials. 
You are calculating the percentage of trials in which samplesB came back greater than samplesA.\n\nBsuperior\n# Print the answer of the function above.\n\n# Bsuperior comes out to about 0.1318, meaning samplesB was greater than samplesA in roughly 13% of the 10,000 simulated trials.\n0.1318\n```", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]